Compare commits

...

513 Commits

Author SHA1 Message Date
jxxghp
92769b27f1 v1.7.4
- Recommendations now include the `Bangumi` daily broadcast schedule
- `api.themoviedb.org` and other domains now resolve IP addresses via DoH to avoid DNS poisoning and improve connectivity (controlled by the `DOH_ENABLE` variable, on by default)
- Site browsing adds a click-to-download feature
- Improved rendering speed of some pages with large amounts of data
2024-03-19 17:27:52 +08:00
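The `DOH_ENABLE` toggle described in the notes above can be sketched roughly as follows. This is a minimal illustration, not MoviePilot's actual code: the helper names, the domain set, and the resolver endpoint (Cloudflare's JSON DoH API) are assumptions.

```python
import os
from urllib.parse import urlencode

# Hypothetical list of domains that should be resolved via DoH.
DOH_DOMAINS = {"api.themoviedb.org", "api.thetvdb.com"}

def doh_enabled() -> bool:
    """Read the DOH_ENABLE toggle from the environment (default: on)."""
    return os.environ.get("DOH_ENABLE", "true").lower() in ("1", "true", "yes")

def doh_query_url(domain: str, resolver: str = "https://1.0.0.1/dns-query") -> str:
    """Build a DNS-over-HTTPS JSON query URL for an A record of `domain`."""
    return f"{resolver}?{urlencode({'name': domain, 'type': 'A'})}"
```

When the toggle is on, requests for domains in `DOH_DOMAINS` would be resolved by fetching `doh_query_url(domain)` (with an `Accept: application/dns-json` header) instead of using the system resolver.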
jxxghp
fa83168b92 feat: add DOH switch 2024-03-19 12:26:04 +08:00
jxxghp
f96295de3a add download api 2024-03-18 23:27:54 +08:00
jxxghp
6cecb3c6a6 fix bug 2024-03-18 20:02:03 +08:00
jxxghp
b6486035c4 add Bangumi 2024-03-18 19:02:34 +08:00
jxxghp
f7c1d28c0f remove cloudflared 2024-03-18 08:23:43 +08:00
jxxghp
47c2ae1c08 fix doh domains 2024-03-18 07:19:56 +08:00
jxxghp
c03f24dcf5 Update doh.py 2024-03-17 23:40:19 +08:00
jxxghp
6e2f5762b4 add doh 2024-03-17 23:30:50 +08:00
jxxghp
75330a08cc add doh 2024-03-17 23:25:04 +08:00
jxxghp
3f17e371c3 add doh 2024-03-17 23:15:21 +08:00
jxxghp
a820341ec7 rollback cloudflared 2024-03-17 22:32:15 +08:00
jxxghp
c1f04f5631 Merge pull request #1697 from DDS-Derek/main 2024-03-17 19:06:31 +08:00
DDSRem
a121e45b94 fix: container resolv cannot be modified 2024-03-17 18:43:54 +08:00
DDSRem
885ee976b2 feat: better cloudflared install 2024-03-17 18:15:29 +08:00
jxxghp
e6229beb94 add cloudflared 2024-03-17 16:52:13 +08:00
jxxghp
f2a40e1ec3 fix: themoviedb seasons not displayed 2024-03-17 15:59:21 +08:00
jxxghp
5f80aa5b7c - Fixed issues with Douban subscriptions and the local CookieCloud service 2024-03-17 15:12:24 +08:00
jxxghp
14ff1e9af6 fix resource 2024-03-17 15:09:10 +08:00
jxxghp
49ab5ac709 - Fixed issues with Douban subscriptions and the local CookieCloud service 2024-03-17 13:43:11 +08:00
jxxghp
74c7a1927b fix cookiecloud 2024-03-17 13:42:01 +08:00
jxxghp
cbd704373c try fix cookiecloud 2024-03-17 12:57:38 +08:00
jxxghp
a05724f664 fix: auto-correct site URL format 2024-03-17 12:21:32 +08:00
jxxghp
97d0fc046a fix: Douban subscription bug 2024-03-17 11:27:54 +08:00
jxxghp
6248e34400 fix v1.7.3 2024-03-17 10:00:59 +08:00
jxxghp
a442dab85b fix nginx.conf 2024-03-17 09:51:04 +08:00
jxxghp
d4514edba6 v1.7.3
- `Shortcuts` adds a message center feature
- Built-in support for a local CookieCloud server; cookie data is encrypted and stored in the user config directory, and can be enabled under `Settings` - `Sites`
- Improved the recommendation detail pages; Douban recommendation details now show Douban data directly
- Fixed `蜜柑` search not working
2024-03-17 09:09:21 +08:00
jxxghp
0c581565ad Update message.py 2024-03-16 22:21:12 +08:00
jxxghp
350def0a6f Update message.py 2024-03-16 22:20:14 +08:00
jxxghp
5b3027c0a7 fix reload 2024-03-16 21:06:52 +08:00
jxxghp
e4b90ca8f7 fix #1694 2024-03-16 20:40:02 +08:00
jxxghp
d917b00055 Merge pull request #1694 from lingjiameng/main
CookieCloud config supports live updates
2024-03-16 20:36:05 +08:00
s0mE
cc94c6c367 Merge branch 'jxxghp:main' into main 2024-03-16 19:24:25 +08:00
ljmeng
6410051e3a CookieCloud config supports live loading 2024-03-16 19:23:06 +08:00
jxxghp
aaa1b80edf fix: resource package update bug 2024-03-16 18:38:25 +08:00
jxxghp
f345d94009 fix README.md 2024-03-16 18:28:09 +08:00
jxxghp
550fe26d76 Merge pull request #1693 from lingjiameng/main
Integrate the CookieCloud server side
2024-03-16 17:52:49 +08:00
jxxghp
7ad498b3a3 fix 2024-03-16 17:06:24 +08:00
jxxghp
20eb0b4635 fix message 2024-03-16 16:29:14 +08:00
ljmeng
747dc3fafe Disable the local CookieCloud service by default 2024-03-16 15:40:10 +08:00
s0mE
4708fbb3cb Merge branch 'jxxghp:main' into main 2024-03-16 15:36:20 +08:00
ljmeng
6ba40edeb4 Merge branch 'main' of github.com:lingjiameng/MoviePilot 2024-03-16 15:35:02 +08:00
ljmeng
79cb28faf9 Disable the local cookiecloud service in the default config 2024-03-16 15:34:46 +08:00
jxxghp
9acf05f334 fix #1691 2024-03-16 15:31:04 +08:00
jxxghp
d0af1bf075 Merge pull request #1691 from hoey94/main 2024-03-16 13:53:10 +08:00
hoey94
f8a95cec4a fix: TR remote-control plugin speed-limit issue 104 2024-03-16 12:37:21 +08:00
jxxghp
3cd672fa8d fix 2024-03-16 08:40:36 +08:00
jxxghp
fe03638552 fix api 2024-03-16 08:39:57 +08:00
ljmeng
1ae220c654 Integrate the CookieCloud server 2024-03-16 04:48:34 +08:00
jxxghp
75c7e71ee6 Merge pull request #1689 from hoey94/main 2024-03-15 19:14:26 +08:00
hoey94
4619158b99 fix: speed-limit switch bug 104 2024-03-15 18:23:44 +08:00
jxxghp
3f88907ba9 fix bug 2024-03-15 18:17:04 +08:00
jxxghp
ae6440bd0a Merge pull request #1683 from lingjiameng/main 2024-03-15 07:55:01 +08:00
s0mE
261f5fc0c6 Merge branch 'jxxghp:main' into main 2024-03-14 23:26:58 +08:00
jxxghp
a5d044d535 fix message 2024-03-14 20:36:15 +08:00
jxxghp
6e607ca89f fix: improve recommendation navigation
feat: persist messages to the database
2024-03-14 19:44:15 +08:00
jxxghp
06e4b9ad83 Merge remote-tracking branch 'origin/main' 2024-03-14 19:15:22 +08:00
jxxghp
c755dc9b85 fix: improve recommendation navigation
feat: persist messages to the database
2024-03-14 19:15:13 +08:00
jxxghp
209451d5f9 Merge pull request #1678 from HankunYu/main 2024-03-14 06:57:31 +08:00
HankunYu
60b2d30f42 Update README.md
Documented reverse-proxy usage, fixing log loading behind an HTTPS reverse proxy taking so long (10+ minutes) that it was unusable.
2024-03-13 18:54:55 +00:00
ljmeng
399d26929d CookieCloud now decrypts locally for better security 2024-03-14 02:35:22 +08:00
jxxghp
f50c2e59a9 fix #1674 2024-03-13 14:54:37 +08:00
jxxghp
1cd768b3d0 v1.7.2
- Site indexing adds support for `蟹黄堡`; fixed indexing issues with `蝴蝶` and `蜜柑`
- To counter the mass removal of Chinese titles from themoviedb, Singapore Chinese (zh-sg) titles are now also used for search and scraping
- The metadata cache lifetime is now configurable (`META_CACHE_EXPIRE`, in hours)
- Fixed the anime secondary category under tv not working when no anime category strategy is set
- Improved the plugin upgrade experience
2024-03-13 08:21:59 +08:00
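An hour-based cache lifetime like `META_CACHE_EXPIRE` boils down to a simple timestamp comparison. The sketch below is illustrative only; the function name and its default are assumptions, not the project's implementation.

```python
import time

def is_expired(cached_at: float, expire_hours: int, now: float = None) -> bool:
    """Return True when a cache entry written at `cached_at` (epoch seconds)
    has outlived a TTL of `expire_hours` hours."""
    if now is None:
        now = time.time()
    return (now - cached_at) >= expire_hours * 3600
```

A metadata lookup would consult the cache only when `is_expired(entry_time, META_CACHE_EXPIRE)` is False, and otherwise re-query the upstream source.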
jxxghp
abc26b65ed fix #1645 support the 蝴蝶 torrent link format 2024-03-12 17:01:41 +08:00
jxxghp
dc1a41da90 fix: reduce unnecessary checks 2024-03-12 13:48:37 +08:00
jxxghp
a95dac1b32 fix: directory check 2024-03-12 13:36:33 +08:00
jxxghp
18d9620687 #1653 add Singapore titles to search terms; when the main title is not Chinese, the Singapore Chinese title may be used 2024-03-12 11:55:47 +08:00
jxxghp
8808dcee52 fix 1659 2024-03-12 11:16:10 +08:00
jxxghp
17adc4deab Merge pull request #1662 from thsrite/main 2024-03-11 16:36:19 +08:00
thsrite
9351489166 fix: media recognition that skips the cache should still write the latest info back to the cache 2024-03-11 16:34:53 +08:00
jxxghp
e2148cb77f fix 2024-03-11 16:28:36 +08:00
jxxghp
e322204094 Merge pull request #1661 from jeblove/main 2024-03-11 16:25:05 +08:00
jxxghp
0fa884157a Support configuring the meta cache lifetime 2024-03-11 16:23:07 +08:00
jeblove
96468213fe Merge branch 'main' of https://github.com/jeblove/MoviePilot 2024-03-11 16:17:36 +08:00
jeblove
d044a9db00 fix: episode images in Continue Watching 2024-03-11 16:17:10 +08:00
jxxghp
d5f5e0d526 Merge pull request #1660 from thsrite/main 2024-03-11 15:59:20 +08:00
thsrite
14a3bb8fc2 add db queries for subscription and download history by type and time (plugin methods) 2024-03-11 15:56:19 +08:00
jxxghp
5921d43ae8 fix #1655 2024-03-11 12:34:19 +08:00
jxxghp
635061c054 Merge pull request #1654 from jeblove/main 2024-03-11 11:32:27 +08:00
jeblove
3c8c6e5375 fix: syntax issue 2024-03-11 11:27:24 +08:00
jeblove
dd063bb16b fix: image issue in WeChat push messages when playing episodes 2024-03-11 01:57:57 +08:00
jeblove
750711611b fix: syntax issue 2024-03-11 00:15:55 +08:00
jxxghp
d3983c51c2 Merge pull request #1652 from jeblove/main 2024-03-10 18:39:16 +08:00
jeblove
b9dec73773 fix: syntax issue 2024-03-10 18:10:09 +08:00
jeblove
b310367d25 fix: image issue in WeChat playback push messages 2024-03-10 17:50:01 +08:00
jxxghp
55beea87fd Merge pull request #1649 from thsrite/main 2024-03-10 11:24:57 +08:00
thsrite
4510382f74 fix: tv anime category not taking effect 2024-03-10 09:27:48 +08:00
jxxghp
9b9ae9401e fix bug 2024-03-09 21:33:59 +08:00
jxxghp
e10464c278 v1.7.1
- Anime with its own directory now supports secondary categories (the category.yml template has been updated)
- Two downloaders can be enabled at once, but only the first is used by default (some official plugins were updated to match)
- Live logs now show the newest entries at the top
- Improved health checks for downloaders and media directories
- Fixed the endless loading after upgrades caused by browser caching
- Priority rules add support for `官种` (official seeds)
- Fixed regular users being unable to view active download tasks
- Fixed scheduler-related settings not taking effect immediately after changes
2024-03-09 21:20:36 +08:00
jxxghp
542531a1ca fix: yyyymmdd episode-number recognition 2024-03-09 21:13:23 +08:00
jxxghp
04c21232e3 fix: yyyymmdd episode-number recognition 2024-03-09 21:03:29 +08:00
jxxghp
48a19fd57c fix: downloader test 2024-03-09 20:03:57 +08:00
jxxghp
59cb69a96b Merge pull request #1643 from jeblove/main 2024-03-09 19:39:31 +08:00
jeblove
e7d94f7f70 fix: parameters for the Library/VirtualFolders API 2024-03-09 19:03:02 +08:00
jxxghp
27d2d01a20 feat: support selecting multiple downloaders 2024-03-09 18:52:27 +08:00
jxxghp
8b4495c857 feat: support selecting multiple downloaders 2024-03-09 18:25:04 +08:00
jxxghp
15bdb694cc fix: improve some message formats 2024-03-09 17:43:21 +08:00
jxxghp
3ef9c5ea2c fix: improve some message formats 2024-03-09 17:34:49 +08:00
jxxghp
ab6577f752 fix #1561 2024-03-09 17:12:21 +08:00
jxxghp
49a82d7a48 feat: add official-seed priority rule
fix #1635
feat: anime supports secondary directories fix #1633
2024-03-09 09:53:15 +08:00
jxxghp
bdcbb168a0 Merge pull request #1636 from HankunYu/main 2024-03-09 08:05:49 +08:00
HankunYu
2e1cb0bd76 fix #1630
remove_tags and delete_tags were conflated here. The original remove-tags function is renamed to delete, and a new remove-tags function is added.
2024-03-08 18:23:42 +00:00
jxxghp
851864cd49 fix: scheduled services take effect immediately
fix #1615
2024-03-08 16:22:53 +08:00
jxxghp
b5d7b6fb53 fix: global subscription notifications 2024-03-08 15:38:40 +08:00
jxxghp
92bab2fc2f fix: global download notifications 2024-03-08 15:27:24 +08:00
jxxghp
0dad6860c4 fix: record userid on download tasks 2024-03-08 14:40:00 +08:00
jxxghp
de4a7becc2 Merge pull request #1620 from thsrite/main 2024-03-08 11:58:52 +08:00
thsrite
2eeb24e22d fix: link test is pointless when downloader monitoring is off 2024-03-08 09:29:30 +08:00
jxxghp
e4a67ea052 - Fixed health checks for themoviedb and thetvdb not using the built-in proxy
- Fixed VoceChat message delivery failing in some scenarios
- Fixed VoceChat responses interfering with WeChat callbacks
- Improved VoceChat security; the bot webhook must be re-configured per the instructions

Note: the VoceChat bot webhook callback path is now `/api/v1/message/?token=moviepilot`, where `moviepilot` is the `API_TOKEN` set in the environment
2024-03-07 18:22:17 +08:00
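The webhook path format from the note above can be sketched as a token substitution. The callback path and the `API_TOKEN` variable come from the release notes; the helper name and the default value are hypothetical.

```python
import os

def vocechat_webhook_path(base: str = "/api/v1/message/") -> str:
    """Build the VoceChat bot webhook callback path, appending the
    API_TOKEN from the environment as the `token` query parameter."""
    token = os.environ.get("API_TOKEN", "moviepilot")
    return f"{base}?token={token}"
```

The full webhook URL registered in VoceChat would be this path appended to the externally reachable MoviePilot address.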
jxxghp
a4df2f5213 fix wechat bug 2024-03-07 18:15:04 +08:00
jxxghp
4f89780a0f Merge remote-tracking branch 'origin/main' 2024-03-07 18:01:57 +08:00
jxxghp
26d6201b30 fix wechat bug 2024-03-07 18:01:50 +08:00
jxxghp
c9a9ff2692 Merge pull request #1613 from WangEdward/main 2024-03-07 17:48:31 +08:00
Edward
0be49953b4 fix: change vote to float 2024-03-07 09:45:14 +00:00
jxxghp
0de952f090 fix 2024-03-07 17:15:04 +08:00
jxxghp
2b570bf48f fix: improve VoceChat security 2024-03-07 17:07:28 +08:00
jxxghp
9476017af5 Merge remote-tracking branch 'origin/main' 2024-03-07 12:43:05 +08:00
jxxghp
54f808485e fix #1608 2024-03-07 12:42:59 +08:00
jxxghp
fa5c82899b Merge pull request #1605 from HankunYu/main
Update Chinese subtitle filtering
2024-03-07 12:33:06 +08:00
HankunYu
4a57071809 Update Chinese-subtitle filter rules
Add lowercase matching
2024-03-07 02:48:29 +00:00
HankunYu
4631db9a45 Update Chinese-subtitle filter rules
Remove the duplicate Simplified entry; stricter CHT and CHS matching rules
2024-03-07 02:43:59 +00:00
jxxghp
0f09da55b0 Merge pull request #1606 from thsrite/main 2024-03-07 09:22:21 +08:00
thsrite
b14b41c2c1 fix: route tmdb/tvdb system health checks through the proxy 2024-03-07 09:20:20 +08:00
jxxghp
cf05ae20c5 v1.7.0
- Shortcuts adds a system health check that quickly verifies module connectivity, cross-disk directories, and more
- Network test adds `fanart` and `thetvdb` items
- New `短剧自动分类` (short-drama auto-categorization) plugin, which sorts short dramas into a dedicated category directory by video duration
- Notifications add `VoceChat` support, enabling interactive messages and group notifications; you must host the server yourself
- Fixed subscription episode counts being overwritten after edits; default subscription rules add a seeder-count setting

VoceChat setup and configuration reference: https://doc.voce.chat/zh-cn/bot/bot-and-webhook , webhook callback path: /api/v1/message/
2024-03-07 08:10:52 +08:00
HankunYu
897758d829 Merge remote-tracking branch 'upstream/main' 2024-03-06 19:54:39 +00:00
jxxghp
85a77a66dd fix 2024-03-06 22:26:36 +08:00
HankunYu
c450dfc0fa Update 中文字幕过滤
Add support for recognizing Chinese subtitles in anime series
2024-03-06 14:09:19 +00:00
jxxghp
3d782a7475 feat: directory check 2024-03-06 21:42:21 +08:00
jxxghp
4734851213 Merge pull request #1587 from WangEdward/main 2024-03-06 20:11:10 +08:00
jxxghp
9c8635002d Merge pull request #1603 from thsrite/main 2024-03-06 19:50:33 +08:00
thsrite
4cd3cb2b60 fix 2024-03-06 19:46:09 +08:00
thsrite
fa890ca29c fix: after the total episode count of a subscription is edited manually, it no longer follows tmdb 2024-03-06 19:33:31 +08:00
jxxghp
bbf1ec4c50 fix bug 2024-03-06 17:19:32 +08:00
jxxghp
523d458489 fix bug 2024-03-06 17:02:07 +08:00
jxxghp
45ec668875 add log 2024-03-06 16:47:04 +08:00
jxxghp
60122644b8 fix 2024-03-06 16:20:39 +08:00
jxxghp
07a77e0001 fix 2024-03-06 16:04:04 +08:00
jxxghp
d112f49a69 feat: support VoceChat 2024-03-06 15:54:40 +08:00
jxxghp
8cb061ff75 feat: module health checks 2024-03-06 13:23:51 +08:00
jxxghp
01e08c8e69 Merge pull request #1595 from richard-guan-dev/main
fix: Add a validation check to see if "assisted verification users" are active when logging in.
2024-03-06 10:44:44 +08:00
Richard Guan
3549b38ee8 Add validation for whether the assistive user is activated. 2024-03-06 10:38:20 +08:00
jxxghp
f5fb888c85 fix #1594 2024-03-06 08:19:08 +08:00
Edward
8bcb6a7cb6 chore: merge nested if 2024-03-05 09:35:34 +00:00
Edward
ac81dd943c feat: add min_seeders in filter_rule 2024-03-05 09:25:23 +00:00
jxxghp
663d282b5e fix category.yaml 2024-03-05 15:54:13 +08:00
jxxghp
c7b389dd9b v1.6.9
- Site indexing supports `青蛙`
- Improved torrent index matching to reduce `类型不匹配` (type mismatch) errors
- Fixed default subscription filter rules not taking effect
2024-03-04 20:29:15 +08:00
jxxghp
bad37a1846 Merge pull request #1572 from WangEdward/main 2024-03-01 23:01:54 +08:00
jxxghp
9c09981583 Merge pull request #1571 from sohunjug/main 2024-03-01 22:47:46 +08:00
Edward
2d8e66cbe2 fix: site scope error 2024-03-01 14:40:50 +00:00
sohunjug
db28986d22 fix: softlink exists 2024-03-01 22:14:07 +08:00
Edward
727bed46b7 fix: empty return from get_subscribed_sites 2024-03-01 14:06:58 +00:00
jxxghp
8e0df90177 Merge pull request #1570 from DDS-Derek/main 2024-03-01 21:55:32 +08:00
DDSRem
34bbb86c16 fix: plugin directory backup failed 2024-03-01 21:40:46 +08:00
jxxghp
0403f1f48c fix logging 2024-03-01 13:10:19 +08:00
jxxghp
1db452e268 Merge pull request #1568 from cikezhu/main
fix time_difference()
2024-02-29 21:49:20 +08:00
叮叮当
81ca11650d fix time_difference() 2024-02-29 21:39:32 +08:00
jxxghp
2e4671fdbc Merge remote-tracking branch 'origin/main' 2024-02-29 17:28:42 +08:00
jxxghp
da80ad33d9 fix bug 2024-02-29 17:28:36 +08:00
jxxghp
a6f28569ab Merge pull request #1567 from sundxfansky/main 2024-02-29 17:20:45 +08:00
jxxghp
5dd36e95e0 feat: use site torrent categorization to improve recognition matching
fix: with complex priority rules, filtering took very long; moved it to the last step
fix #1432
2024-02-29 17:18:01 +08:00
sundxfansky
1eaeea62db rm line 2024-02-29 16:58:18 +08:00
sundxfansky
4282c5dfc2 fix update failed 2024-02-29 16:56:33 +08:00
jxxghp
2e661f8759 Merge pull request #1566 from sundxfansky/main 2024-02-29 06:48:15 +08:00
tonser
31ca41828e Fix TR failing to download selected torrents 2024-02-29 02:09:53 +08:00
jxxghp
c9ebe76eb1 - Fixed subscriptions broken in v1.6.8 2024-02-28 23:19:10 +08:00
jxxghp
71ac12ab7a Update scheduler.py 2024-02-28 23:03:31 +08:00
jxxghp
81c0e15a1c Update version.py 2024-02-28 22:19:12 +08:00
jxxghp
2bde4923f9 Update subscribe.py 2024-02-28 21:57:35 +08:00
jxxghp
22fb6305cf Update subscribe.py 2024-02-28 21:50:32 +08:00
jxxghp
4bb5772e10 fix #1545
fix #1537
2024-02-28 15:03:55 +08:00
jxxghp
549658e871 fix #1553
fix #1496
2024-02-28 14:18:23 +08:00
jxxghp
80f47594f4 v1.6.8
- `🌈岛` supports multiple domains and HR detection; short-drama search supported for `麒麟`
- Fixed seeder counts above 1000 being recognized as 0
- All plugins in the official plugin repository can show and start their background jobs under `Settings - Services` (requires the latest plugin versions; third-party plugins need developer adaptation)
- New `共享识别词` (shared recognition words) plugin (thanks @honue); a shared word service has been set up, and everyone is welcome to maintain and use it

1. Plugin development guide: https://github.com/jxxghp/MoviePilot-Plugins/blob/main/README.md
2. Shared recognition-word addresses:
- https://movie-pilot.org/etherpad/p/MoviePilot_TV_Words
- https://movie-pilot.org/etherpad/p/MoviePilot_Anime_Words
2024-02-26 19:33:00 +08:00
jxxghp
2614eeadb0 fix #1544 2024-02-26 10:50:33 +08:00
jxxghp
a0af827319 Merge pull request #1541 from WangEdward/main 2024-02-25 23:48:08 +08:00
jxxghp
0233853794 fix bug 2024-02-25 17:16:46 +08:00
jxxghp
6b24ccdc35 fix bug 2024-02-25 16:43:21 +08:00
jxxghp
7d76ee2e65 fix warning 2024-02-25 09:11:47 +08:00
jxxghp
1dd9228d01 fix warning 2024-02-25 09:03:43 +08:00
Edward
a5b4221a00 fix: db migration for search_imdbid 2024-02-24 09:36:04 +00:00
jxxghp
37ba75b53c Merge pull request #1535 from cikezhu/main 2024-02-24 06:46:43 +08:00
WangEdward
b8553e2b86 feat: add search_imdbid in subscribe api 2024-02-24 00:14:44 +08:00
WangEdward
d28f3ed74b feat: add search_imdbid for subscribe 2024-02-23 23:55:32 +08:00
叮叮当
185c78b05c Add a provider entry to scheduled jobs 2024-02-23 22:25:41 +08:00
叮叮当
f23cab861a fix: fetch the status of other pending tasks 2024-02-23 21:39:58 +08:00
叮叮当
bbddec763a Plugins now use the program's scheduler module directly, so their jobs show on the frontend 2024-02-23 18:13:27 +08:00
jxxghp
06c3985aa4 v1.6.7
- Faster rendering of the plugin page
- Plugin logs are saved and viewed separately
- Site indexing adds `萝莉`, supports short-drama search for `AGSVPT`, and fixes `象站` indexing
2024-02-23 10:59:20 +08:00
jxxghp
9503a603e6 fix 2024-02-23 08:23:56 +08:00
jxxghp
6e9ab24d95 Merge pull request #1531 from cikezhu/main 2024-02-23 07:59:40 +08:00
叮叮当
7524379af6 Add a condition to reduce loop iterations 2024-02-23 01:15:11 +08:00
叮叮当
eebf3dec68 fix: restore filename-based log files; logs from built-in modules called by plugins now appear in the plugin log 2024-02-22 23:54:25 +08:00
jxxghp
a89dd636a4 fix #1528 2024-02-22 17:59:44 +08:00
jxxghp
7fb025bff4 Merge pull request #1529 from honue/main 2024-02-22 17:34:55 +08:00
honue
c44c0f6321 Treat `#`-prefixed lines in custom recognition words as comments 2024-02-22 17:22:40 +08:00
jxxghp
585bcb924f Merge pull request #1526 from WangEdward/main 2024-02-22 15:49:50 +08:00
jxxghp
0ce3c3d90f Merge pull request #1522 from honue/main 2024-02-22 15:48:32 +08:00
Edward
9cb69f4879 fix: discuz is_logged_in 2024-02-22 15:34:48 +08:00
honue
c5b13f2fee fix #1502 2024-02-22 14:06:56 +08:00
jxxghp
235af9e558 fix bug 2024-02-22 13:31:49 +08:00
jxxghp
cb274d1587 feat: separate log file per plugin 2024-02-22 13:23:23 +08:00
jxxghp
63643e6d26 feat: plugin API supports type filtering 2024-02-21 16:11:15 +08:00
jxxghp
0726600936 feat: plugin API supports type filtering 2024-02-21 16:05:24 +08:00
jxxghp
6151bd64dd Merge remote-tracking branch 'origin/main' 2024-02-21 16:00:00 +08:00
jxxghp
32dc0f69f9 feat: plugin API supports type filtering 2024-02-21 15:59:53 +08:00
jxxghp
5b563cf173 Merge pull request #1515 from WangEdward/main 2024-02-21 15:18:38 +08:00
Edward
3dbb534883 fix: 2fa Nonetype 2024-02-21 05:42:35 +00:00
jxxghp
7304fad460 Merge remote-tracking branch 'origin/main' 2024-02-21 13:36:32 +08:00
jxxghp
9f829c2129 fix #1514 2024-02-21 13:31:19 +08:00
jxxghp
32e71beca8 Merge pull request #1507 from donniex1986/patch-1 2024-02-20 19:09:29 +08:00
donniex1986
3c1c04f356 Update README.md 2024-02-20 18:21:25 +08:00
jxxghp
c473594663 v1.6.6
- The IYUU API domain has changed to `api.bolahg.cn`
- Interactive messages support version-upgrade (`洗版`) subscriptions and forced re-download
- Deleting source files from history now also removes the download task from the downloader
- File management supports the English-title placeholder `en_title`
- Updated several official plugins; the directory-monitor plugin fixed duplicate notifications when organizing Blu-ray disc folders

[Interactive message examples]
- `订阅 XXX`: add a subscription
- `洗版 XXX`: add a version-upgrade subscription
- `搜索/下载 XXX`: search and download again without checking local availability
2024-02-20 16:40:28 +08:00
jxxghp
a8ce9648e2 fix bug 2024-02-20 16:18:29 +08:00
jxxghp
760285b085 Merge pull request #1503 from DDS-Derek/main
feat: startup and update script optimization
2024-02-20 15:09:18 +08:00
jxxghp
ccdad3e8dc fix #1502 2024-02-20 15:07:55 +08:00
jxxghp
f33e9bee21 Merge pull request #1502 from honue/main
When a plugin is not enabled, mark its event registration as unavailable
2024-02-20 14:58:00 +08:00
jxxghp
4183dca80f fix bug 2024-02-20 14:41:25 +08:00
DDSRem
6f6fd6a42e fix: bug 2024-02-20 14:38:36 +08:00
DDSRem
13bb31fd93 fix: bug 2024-02-20 14:35:19 +08:00
DDSRem
5bac94cbc5 fix: bug 2024-02-20 14:34:29 +08:00
DDSRem
daa8d80ec9 feat: startup and update script optimization
1. Fixed issues flagged by Shellcheck to improve script robustness.
2. Updates now install the main program and frontend first, then plugins and resource packages. Previously a failed plugin or resource-package download blocked the main-program update, even though neither affects it.

Co-Authored-By: DDSDerek <108336573+DDSDerek@users.noreply.github.com>
Co-Authored-By: Summer⛱ <57806936+honue@users.noreply.github.com>
2024-02-20 14:26:59 +08:00
honue
b095f01b09 When a plugin is not enabled, mark its event registration as unavailable 2024-02-20 13:27:02 +08:00
jxxghp
f43efab831 Merge pull request #1501 from honue/main 2024-02-20 11:44:29 +08:00
honue
946b7905b3 fix: metadata not updated correctly when the total episode count decreases 2024-02-20 11:34:40 +08:00
jxxghp
544625a9a3 fix bug 2024-02-19 18:03:35 +08:00
jxxghp
d7c6c27679 feat: interactive messages support version-upgrade subscriptions and full re-download
fix #1202
2024-02-19 17:47:20 +08:00
jxxghp
70adbfe6b5 fix #1334 2024-02-19 16:20:03 +08:00
jxxghp
d8f9ab93e5 feat: delete the download task when source files are deleted fix #1391 2024-02-19 16:07:46 +08:00
jxxghp
e06d07937e fix 2024-02-19 15:55:57 +08:00
jxxghp
f94d248383 Merge pull request #1492 from WangEdward/main
feat: english title from tmdb
2024-02-18 16:31:13 +08:00
Edward
c139aeebf5 feat: title search include en_title 2024-02-18 07:41:49 +00:00
Edward
89a8625817 feat: english title from tmdb 2024-02-18 07:40:59 +00:00
jxxghp
59acda5dec fix #1380
fix #1373
fix #1404
2024-02-18 11:26:05 +08:00
jxxghp
57d9e4a370 fix #1347 2024-02-18 11:20:04 +08:00
jxxghp
8b6a2a3d99 fix #1382 2024-02-18 10:53:13 +08:00
jxxghp
3e10642bdd v1.6.5
- Authenticated sites add support for `麒麟`
- Fixed a failure to start after saving settings
2024-02-18 09:50:07 +08:00
jxxghp
c8e63b6ae0 Merge pull request #1489 from WangEdward/main 2024-02-17 19:47:11 +08:00
Edward
03c92ad41c fix: search_area deleted by mistake
resolve #1051
2024-02-17 11:33:59 +00:00
jxxghp
690b454bb1 fix api 2024-02-17 13:24:41 +08:00
jxxghp
2e6c1bef63 Merge pull request #1482 from WangEdward/main 2024-02-17 00:45:54 +08:00
jxxghp
0fd428f809 Merge pull request #1485 from cikezhu/main 2024-02-17 00:45:20 +08:00
叮叮当
6083a8a859 fix: update site icons 2024-02-16 20:35:03 +08:00
Edward
bb7d262ea3 feat: m-team 2fa 2024-02-15 16:39:14 +00:00
Edward
ca9a37d12a fix: 2fa variable-name conflict 2024-02-15 16:19:20 +00:00
jxxghp
595ca631f4 Merge pull request #1480 from WangEdward/main 2024-02-15 22:02:19 +08:00
jxxghp
cbffddc57f Update wechat.py 2024-02-15 21:51:57 +08:00
jxxghp
a5f5d41104 Update transmission.py 2024-02-15 21:51:23 +08:00
jxxghp
56f07b3dd6 Update telegram.py 2024-02-15 21:50:57 +08:00
jxxghp
fba10fe6a0 Update synologychat.py 2024-02-15 21:50:16 +08:00
jxxghp
5639e0b7d0 Update qbittorrent.py 2024-02-15 21:49:34 +08:00
jxxghp
a6ad58ca33 Update plex.py 2024-02-15 21:49:03 +08:00
jxxghp
00447f2475 Update emby.py 2024-02-15 21:48:11 +08:00
jxxghp
9d14fc47fe Update jellyfin.py 2024-02-15 21:47:52 +08:00
jxxghp
70c459f810 Update emby.py 2024-02-15 21:47:01 +08:00
Edward
a0af2f4b68 fix: tmdb titles with the same name reported as already subscribed 2024-02-15 13:45:29 +00:00
jxxghp
603eefb22f v1.6.4 2024-02-15 21:30:49 +08:00
jxxghp
34625ee384 feat: restructure the settings content 2024-02-15 20:40:01 +08:00
jxxghp
ca78fb7c22 fix api 2024-02-15 19:54:16 +08:00
jxxghp
3c710dd266 fix api 2024-02-15 19:46:30 +08:00
jxxghp
514e7add4b fix api 2024-02-15 18:57:24 +08:00
jxxghp
bdbf1e9084 fix api 2024-02-15 16:56:48 +08:00
jxxghp
6149cef1d3 fix api 2024-02-15 15:03:37 +08:00
jxxghp
b8fac86c6e feat: tolerate wrong variable types 2024-02-15 13:28:52 +08:00
jxxghp
9f450dd8be fix settings api 2024-02-15 08:39:55 +08:00
jxxghp
24c2d3f8ca fix twofa 2024-02-14 21:11:35 +08:00
jxxghp
4248b8fa4e fix: duplicate CookieCloud sync for multi-domain sites 2024-02-14 21:10:08 +08:00
jxxghp
deaa2e5644 Merge pull request #1478 from WangEdward/main 2024-02-14 18:53:10 +08:00
Edward
dc43aabe2a fix 2fa helper 2024-02-14 08:51:20 +00:00
Edward
02981d38c0 chore: rename the 2fa parameter 2024-02-14 08:47:55 +00:00
Edward
85fd9b3c09 feat: add 2fa support to update_cookie 2024-02-14 08:47:02 +00:00
Edward
39ad54f3d9 feat: add a 2fa helper 2024-02-14 05:30:41 +00:00
jxxghp
aa9a2c46aa merge cookiecloud chain 2024-02-13 10:36:05 +08:00
jxxghp
c43a1411c9 fix: cache site icons when maintaining sites manually 2024-02-13 10:18:27 +08:00
jxxghp
928aaf0c19 Merge pull request #1474 from WangEdward/main 2024-02-12 15:19:03 +08:00
Edward
ea8a4a3ec4 fix: support Radarr's X-Api-Key request header 2024-02-12 04:43:21 +00:00
jxxghp
c4dc468479 fix: add a plugin repository cache 2024-02-11 22:02:03 +08:00
jxxghp
87ddfbca90 Merge remote-tracking branch 'origin/main' 2024-02-11 21:35:19 +08:00
jxxghp
164ce8f7c4 fix #984 2024-02-11 21:35:11 +08:00
jxxghp
c2fd6e3342 合并拉取请求 #1471
fix 后端程序目录不正确/其他目录被映射时mv会失败
2024-02-11 21:07:01 +08:00
jxxghp
16b79754c3 v1.6.3
- File management supports manually scraping media files
- Integrated apexcharts; plugins can draw charts
- The site statistics plugin adds a pie chart of today's traffic
- File renaming tolerates special characters
- Fixed startup failure when the resource package download fails
2024-02-11 08:40:15 +08:00
叮叮当
9cfb1f789f fix: mv fails when the backend program directory is wrong or other directories are mapped 2024-02-11 02:18:35 +08:00
jxxghp
e3faa388cf fix: being unable to reach GitHub could prevent startup 2024-02-10 20:38:34 +08:00
jxxghp
b75ec92368 fix #1422 2024-02-10 20:35:07 +08:00
jxxghp
f91763ef7c add scrape api 2024-02-10 19:30:41 +08:00
jxxghp
edf8b03d3b Merge pull request #1464 from cikezhu/main
Allow custom sites to set their own search-result count and request timeout
2024-02-10 11:30:25 +08:00
jxxghp
ea48eb5c56 fix update 2024-02-10 11:07:42 +08:00
jxxghp
282f723d34 fix plugin api 2024-02-10 10:58:43 +08:00
叮叮当
dde3b76573 Allow custom sites to set their own search-result count and request timeout 2024-02-09 22:45:58 +08:00
jxxghp
f571711386 v1.6.2
- More flexible password rules
- Live logs can be opened in a new window
- New plugins: real-time hardlinking, secondary category strategy, download-task categories and tags, hardlink cleanup
- Fixed the ChineseSubFinder plugin failing to download movie subtitles
- The frontend integrates ace-builds and supports path-based reverse proxies
2024-02-09 11:23:24 +08:00
jxxghp
e8e8d36a13 fix logger 2024-02-09 09:43:35 +08:00
jxxghp
782a9a4759 fix logger 2024-02-09 09:42:49 +08:00
jxxghp
d0184bd34c fix logger 2024-02-09 09:35:05 +08:00
jxxghp
e4c0643c39 fix bug 2024-02-08 20:50:41 +08:00
jxxghp
305c08c7dd fix category 2024-02-08 14:42:38 +08:00
jxxghp
9521a3ef09 Merge remote-tracking branch 'origin/main' 2024-02-08 08:35:25 +08:00
jxxghp
b4c6a206af fix password 2024-02-08 08:35:18 +08:00
jxxghp
fa7eeec345 Merge pull request #1460 from cikezhu/main 2024-02-08 07:15:34 +08:00
叮叮当
7350216fc4 Open the full log in a new window 2024-02-08 00:09:20 +08:00
jxxghp
36122dda31 Merge pull request #1454 from WangEdward/main 2024-02-07 21:11:58 +08:00
Edward
5851673b43 fix: move files on successful re-organize 2024-02-06 21:07:57 +08:00
Edward
0d81105a0b fix: issue when recording a successful re-organize in history 2024-02-06 18:05:45 +08:00
jxxghp
b934b0975b Merge pull request #1437 from falling/main 2024-02-01 13:55:37 +08:00
falling
035b4b0608 Update the status of active download tasks 2024-02-01 12:03:09 +08:00
jxxghp
b98a033cd2 v1.6.1
- Changed the IYUU authentication and cross-seeding server addresses
2024-01-30 17:24:49 +08:00
jxxghp
c69853ce4b Merge pull request #1428 from EkkoG/debug_step 2024-01-30 16:14:19 +08:00
EkkoG
e00a440336 Fix `No module named 'app'` when running locally per the README steps 2024-01-30 15:31:18 +08:00
jxxghp
c0eb6b0600 Merge pull request #1423 from EkkoG/fixed_size_limit 2024-01-29 16:38:09 +08:00
EkkoG
4d1c8c3764 Fixed #1416 2024-01-29 16:24:23 +08:00
jxxghp
62628e526c Update README.md 2024-01-24 11:45:33 +08:00
jxxghp
ad7761a785 rollback #1399 2024-01-24 10:56:39 +08:00
jxxghp
e545b8d900 Merge pull request #1399 from falling/main 2024-01-23 07:12:12 +08:00
falling
f2f1ecfdf1 Update the qbittorrent download-state check value
https://github.com/qbittorrent/qBittorrent/wiki/WebUI-API-(qBittorrent-4.1)#get-torrent-list
pausedDL	Torrent is paused and has NOT finished downloading
2024-01-21 19:51:38 +08:00
jxxghp
fdec997ed0 Update app.env 2024-01-19 23:00:07 +08:00
jxxghp
9b653ceec9 Update README.md 2024-01-19 22:58:21 +08:00
jxxghp
fbaaed1c61 Update message.py 2024-01-19 22:55:45 +08:00
jxxghp
639abf67c2 v1.6.0
- On a fresh install, the superuser's initial password is randomly generated and shown only in the backend log on first startup (log in with it, then change it in Settings).
- Changed passwords must contain uppercase letters, lowercase letters, and digits.
- Fixed third-party plugin dependencies not installing automatically.
2024-01-19 11:20:30 +08:00
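The stated password policy (uppercase + lowercase + digit) amounts to three character-class checks. This is a minimal sketch under those assumptions; the function name and the minimum length are illustrative, not the project's actual validator.

```python
import re

def password_meets_policy(pw: str, min_len: int = 8) -> bool:
    """Check the v1.6.0-style policy: at least one uppercase letter,
    one lowercase letter, and one digit (min_len is an assumed extra)."""
    return (len(pw) >= min_len
            and re.search(r"[A-Z]", pw) is not None
            and re.search(r"[a-z]", pw) is not None
            and re.search(r"\d", pw) is not None)
```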
jxxghp
1f56ceaea9 Update user.py 2024-01-19 10:54:13 +08:00
jxxghp
16a4f61fec Merge pull request #1386 from hussion/dev 2024-01-19 10:52:10 +08:00
jxxghp
ea0aba96fd Merge pull request #1383 from thsrite/main 2024-01-19 10:51:31 +08:00
嫣识
4393dad77c fix: wrong bearer auth header caused the github_token parameter not to be applied 2024-01-18 19:58:06 +08:00
jxxghp
d099c0e702 Update README.md 2024-01-18 12:05:12 +08:00
thsrite
a299d786fe feat: randomly generate the superuser's initial password && require uppercase, lowercase, and digits when changing passwords 2024-01-18 11:08:16 +08:00
jxxghp
3500f5b9a6 Merge pull request #1378 from thsrite/main 2024-01-18 08:22:16 +08:00
thsrite
64233c89d7 fix: emby/jellyfin home Continue Watching and Latest Added support shared paths 2024-01-17 16:59:22 +08:00
jxxghp
8c727da58a Merge pull request #1375 from thsrite/main 2024-01-17 12:06:03 +08:00
thsrite
152a87d109 fix: third-party plugin dependency installation failures 2024-01-17 10:14:49 +08:00
jxxghp
6a2cde0664 - Bug fixes 2024-01-16 20:24:58 +08:00
jxxghp
c86cc2cb51 Merge pull request #1353 from thsrite/main 2024-01-12 19:04:14 +08:00
thsrite
6d7a63ff61 fix c044e594 2024-01-12 10:07:39 +08:00
thsrite
c044e59481 fix: emby/jellyfin home Continue Watching and Latest Added 2024-01-12 09:56:55 +08:00
jxxghp
3c31bf24e5 v1.5.9
- Fixed duplicate subscription downloads in some cases
- The dashboard's Continue Watching and Latest Added widgets can filter out library-blacklist content
- Added a Fanart toggle (FANART_ENABLE, default on); turning it off reduces network requests but greatly reduces the number of scraped images
2024-01-11 20:18:58 +08:00
jxxghp
d89c80ac89 fix #1296 2024-01-11 16:35:51 +08:00
jxxghp
8236d6c8d7 Merge pull request #1328 from thsrite/main 2024-01-10 12:25:19 +08:00
thsrite
3646540a7f fix 2024-01-10 11:21:03 +08:00
thsrite
c1ecdfc61d Revert "fix #907"
This reverts commit 4dcefb141a.
2024-01-10 11:19:25 +08:00
thsrite
7587946d51 fix c674e320 2024-01-10 10:53:32 +08:00
jxxghp
3ad64baaeb Update README.md 2024-01-10 10:38:43 +08:00
jxxghp
24c43b53a2 Merge pull request #1338 from thsrite/fanart_switch 2024-01-10 10:29:59 +08:00
thsrite
53a6a1c691 feat: Fanart toggle configurable via environment variable, on by default 2024-01-10 10:13:51 +08:00
jxxghp
c3ba83c7ca fix: duplicate subscription downloads 2024-01-09 13:16:39 +08:00
jxxghp
d9b349873e v1.5.8
- Fixed media widgets not showing images when the built-in proxy is enabled
- Improved media-widget user matching; information for the same-named media-server user is shown first
- A notification is sent when user authentication fails
- The UI theme can follow the system theme automatically
2024-01-08 13:18:34 +08:00
thsrite
4dcefb141a fix #907 2024-01-08 13:05:48 +08:00
thsrite
c674e32046 fix: home Continue Watching and Latest Added exclude blacklisted libraries 2024-01-08 13:05:09 +08:00
jxxghp
8aa1027aae fix image proxy 2024-01-08 12:24:00 +08:00
jxxghp
b4cb9c3fb3 fix 2024-01-07 18:24:13 +08:00
jxxghp
d82ab5d60d feat: send a notification when user authentication fails 2024-01-07 12:10:51 +08:00
jxxghp
979b636eec fix bug 2024-01-07 11:51:49 +08:00
jxxghp
bf8a75b201 fix: improve emby/jellyfin user matching 2024-01-07 11:46:29 +08:00
jxxghp
87111c8736 fix exists api 2024-01-06 11:20:08 +08:00
jxxghp
9b97e478aa - Fix Plex media image display and navigation 2024-01-06 10:59:46 +08:00
jxxghp
2af7abee3c fix #1320 2024-01-06 08:46:18 +08:00
jxxghp
2c8a41ebad fix #1316 2024-01-06 08:31:07 +08:00
jxxghp
c632cfd6b9 - Improve image proxying for media widgets 2024-01-05 21:40:21 +08:00
jxxghp
7f05df2fb3 fix count 2024-01-05 21:37:19 +08:00
jxxghp
ff33432809 fix api 2024-01-05 21:35:01 +08:00
jxxghp
0a57e69bcf v1.5.7
- Media servers support configuring an external playback URL, and media details can jump to online playback
- `Settings - Subscriptions` adds a file-size filter rule and an option controlling whether the edit dialog pops up immediately after subscribing (off by default)
- Dashboard widgets are customizable, and media-library panel widgets were added
- Plugins can register scheduled jobs with the main program for unified management under `Settings - Services`
2024-01-05 20:47:53 +08:00
jxxghp
7af8b15dbb fix apis 2024-01-05 20:31:15 +08:00
jxxghp
bc4931d971 fix api 2024-01-05 20:21:19 +08:00
jxxghp
cfb029b6b4 fix api 2024-01-05 15:58:47 +08:00
jxxghp
6fa50101a6 Merge pull request #1314 from thsrite/main 2024-01-05 13:00:38 +08:00
thsrite
843fbc83f4 fix: episodes whose titles contain `.` are scraped incorrectly 2024-01-05 12:53:47 +08:00
jxxghp
55f8fb3b66 Merge pull request #1313 from thsrite/main 2024-01-05 11:52:39 +08:00
thsrite
a47774472d fix bug 2024-01-05 11:50:05 +08:00
jxxghp
713f4ca356 fix typo 2024-01-05 08:18:01 +08:00
jxxghp
b06795510a feat: plugins can register public services 2024-01-05 08:12:27 +08:00
jxxghp
0f57ec099a Merge remote-tracking branch 'origin/main' 2024-01-04 20:54:46 +08:00
jxxghp
8325caabdc fix api 2024-01-04 20:53:59 +08:00
jxxghp
44d276d7e7 Merge pull request #1305 from honue/main 2024-01-04 07:08:07 +08:00
honue
935340561b Log a warning when fetching a package fails 2024-01-03 22:34:01 +08:00
jxxghp
a60fde3b91 fix 2024-01-03 21:29:23 +08:00
jxxghp
163a855d5c fix play url api 2024-01-03 18:38:40 +08:00
jxxghp
c9b1e75361 fix 2024-01-03 18:07:48 +08:00
jxxghp
a9932d0866 fix 2024-01-03 17:40:13 +08:00
jxxghp
11d29919bf feat: size filtering 2024-01-03 17:28:11 +08:00
jxxghp
4fe755332d fix bug 2024-01-03 12:42:47 +08:00
jxxghp
0095e0f4dd feat: playback-jump API 2024-01-03 12:02:08 +08:00
jxxghp
322c72ab54 feat:mediaserver apis 2024-01-02 20:54:54 +08:00
jxxghp
4d51459a47 v1.5.6
- Fixed plugins being displayed twice
- Site resources can show remaining free time and H&R flags (some sites only)
- Upgraded the brushing plugin with support for excluding H&R

Tip: frontend changes may require clearing the browser cache before they appear
2024-01-01 20:18:20 +08:00
jxxghp
d51de30898 Merge remote-tracking branch 'origin/main' 2024-01-01 19:44:08 +08:00
jxxghp
90f9edbf24 fix bug 2024-01-01 19:43:55 +08:00
jxxghp
8aa10457a7 Merge pull request #1294 from honue/main 2024-01-01 15:46:09 +08:00
honue
ab584720c6 fix: locally present but uninstalled plugins not shown in the market (v2) 2024-01-01 15:33:13 +08:00
jxxghp
56ad281cb6 feat:4X 2024-01-01 11:56:03 +08:00
jxxghp
61281cca02 feat: remaining free time && HR 2024-01-01 10:22:18 +08:00
jxxghp
b53dbbc38e rollback #1287 2023-12-31 10:36:09 +08:00
jxxghp
3f88cfba28 fix #1287 2023-12-31 10:34:21 +08:00
jxxghp
e855d8b9af - Fixed 1PTBA authentication failures
- Fixed subscriptions completing early with one episode still missing in some cases
- Fixed movie subscriptions not completing when the movie already exists locally
- Improved the resource search, subscription calendar, and history UIs
2023-12-31 09:56:46 +08:00
jxxghp
171720e629 fix bug 2023-12-31 09:41:02 +08:00
jxxghp
8aa6b33fba v1.5.5
- Fixed 1PTBA authentication failures
- Fixed subscriptions completing early with one episode still missing in some cases
- Fixed movie subscriptions not completing when the movie already exists locally
- Improved the resource search, subscription calendar, and history UIs
2023-12-31 08:54:54 +08:00
jxxghp
505fc803db fix README.md 2023-12-31 08:46:38 +08:00
jxxghp
b5146620a6 fix #1266 2023-12-31 08:38:54 +08:00
jxxghp
7d44f24347 fix #1276 2023-12-31 08:21:58 +08:00
jxxghp
4dccc6e860 Merge pull request #1287 from honue/main 2023-12-29 21:30:34 +08:00
honue
ee6585c737 fix: locally present but uninstalled plugins not shown in the market 2023-12-29 18:32:01 +08:00
jxxghp
62e5e8a69f Merge pull request #1279 from thsrite/main 2023-12-25 17:08:13 +08:00
thsrite
e942a99ff0 fix bug 2023-12-25 15:30:10 +08:00
jxxghp
b3fe49684b fix bug 2023-12-23 19:51:35 +08:00
jxxghp
dcf1985361 - Fixed subscriptions being uneditable when no subscription sites are set
- History supports filtering by status
2023-12-23 19:32:34 +08:00
jxxghp
8f4f4cc004 fix #1215 2023-12-23 18:49:01 +08:00
jxxghp
f49baadb76 fix #1225 2023-12-23 18:24:07 +08:00
jxxghp
5233484fc5 Merge pull request #1265 from honue/main 2023-12-20 07:57:29 +08:00
Summer⛱
84c4cc8b5d Update .gitignore 2023-12-19 17:36:58 +08:00
jxxghp
77036eccd8 v1.5.3 2023-12-17 10:59:27 +08:00
jxxghp
dcdb08ec80 feat: path recognition supports up to 3 levels 2023-12-17 10:59:02 +08:00
jxxghp
cd7f688e78 feat: the scraping module supports overwriting 2023-12-17 10:49:00 +08:00
jxxghp
cb12a052ac - Fixed incorrect paths when re-organizing from history 2023-12-16 12:21:22 +08:00
jxxghp
995c359f20 Merge pull request #1234 from thsrite/main 2023-12-14 06:28:50 +08:00
jxxghp
690066ad32 - Fixed the target path not being created automatically during organizing 2023-12-13 06:53:11 +08:00
thsrite
73942e315a feat: subscriptions add a save-path setting 2023-12-12 14:01:14 +08:00
jxxghp
48badb3243 Merge pull request #1228 from EkkoG/fixed_move_failed_msg 2023-12-11 19:31:19 +08:00
EkkoG
d5eb12cc4e Fix malformed formatting when failed-import messages are sent to Telegram 2023-12-11 17:51:21 +08:00
jxxghp
7d7539df4c - When directory monitoring or manual organizing has no target directory specified, the first-level category directory is no longer forced; a switch decides whether the second-level category directory is created 2023-12-11 17:24:34 +08:00
jxxghp
14a8f44f8c fix bug 2023-12-10 18:51:20 +08:00
jxxghp
a7be470f33 v1.5.0 2023-12-10 17:49:05 +08:00
jxxghp
a677169f60 fix #1219 don't force a first-level category directory when a transfer directory is specified 2023-12-10 17:48:14 +08:00
jxxghp
b72ef4f2aa fix #1139 duplicate downloads for version-upgrade subscriptions 2023-12-10 17:27:34 +08:00
jxxghp
403054751b Merge remote-tracking branch 'origin/main' 2023-12-10 13:35:01 +08:00
jxxghp
b3e5c734d4 add ffmpeg 2023-12-10 13:34:54 +08:00
jxxghp
5732125ff6 Merge pull request #1221 from Vincwnt/main 2023-12-10 11:52:53 +08:00
林晓昱
eb66cf7aad fix: regex bug for media type in custom recognition words 2023-12-10 11:34:39 +08:00
jxxghp
a317c35eab fix: episode image naming 2023-12-06 13:08:15 +08:00
jxxghp
ab138560c1 v1.4.9 2023-12-06 10:55:31 +08:00
jxxghp
f0fbad889d - Attempted fix for missing plugin data tables in the packaged executable 2023-12-05 20:57:53 +08:00
jxxghp
1323cd5dc6 fix: plugin download bug 2023-12-04 17:27:33 +08:00
jxxghp
2c43d8e145 fix: plugin download bug 2023-12-04 16:58:39 +08:00
jxxghp
0214beb679 - 修复二进制打包插件表缺失的问题 2023-12-04 11:33:16 +08:00
jxxghp
7d73cdef33 Merge remote-tracking branch 'origin/main' 2023-12-04 11:03:00 +08:00
jxxghp
fcfab2c750 feat: naming adds Douban ID / recognized English name 2023-12-04 11:02:53 +08:00
jxxghp
e048be17a5 Merge pull request #1195 from WithdewHua/fix-mediaserver 2023-12-02 12:47:20 +08:00
WithdewHua
024f1de4f1 fix: clear the MediaServerItem table 2023-12-02 12:31:38 +08:00
jxxghp
d2c9f7a778 Update README.md 2023-12-01 18:39:40 +08:00
jxxghp
1784d2ec61 v1.4.8 2023-12-01 10:51:49 +08:00
jxxghp
8f4a213f55 fix: QB & TR duplicate detection 2023-12-01 08:12:06 +08:00
jxxghp
6a8c684af0 feat: tolerate duplicate additions of TR download tasks 2023-11-30 15:58:44 +08:00
jxxghp
aefba83319 fix 2023-11-30 13:35:26 +08:00
jxxghp
e411f4062a feat: tolerate duplicate additions of QB download tasks 2023-11-30 13:14:55 +08:00
jxxghp
3c6802860d fix #1186 2023-11-30 11:19:43 +08:00
jxxghp
3daad5ea90 Merge pull request #1186 from honue/main 2023-11-30 11:08:00 +08:00
jxxghp
6835c38c24 Merge remote-tracking branch 'origin/main' 2023-11-30 08:16:30 +08:00
honue
32be9b71d5 Merge remote-tracking branch 'origin/main' 2023-11-30 07:42:19 +08:00
honue
84bc0e0fb4 fix 2023-11-30 07:41:58 +08:00
Summer⛱
8105dc9c82 Merge branch 'jxxghp:main' into main 2023-11-30 00:42:23 +08:00
honue
840c968454 tr: add method to modify trackers 2023-11-30 00:39:57 +08:00
honue
8c5f19a0f4 fix 2023-11-30 00:39:35 +08:00
jxxghp
bf813aa906 Merge pull request #1181 from honue/main 2023-11-29 13:29:40 +08:00
honue
e5ac7f10d4 update downloader init 2023-11-29 13:23:28 +08:00
jxxghp
d049e04fa2 v1.4.7 2023-11-29 13:13:58 +08:00
jxxghp
ef45b08ee5 Merge pull request #1180 from honue/main 2023-11-29 13:01:18 +08:00
honue
d77644ab16 fix singleton.py 2023-11-29 12:46:49 +08:00
jxxghp
1a90b4a3cb v1.4.7 2023-11-29 12:39:37 +08:00
jxxghp
9e5d401a85 remove plugin_color 2023-11-29 12:07:54 +08:00
jxxghp
6da6dc2b8c Merge pull request #1178 from cikezhu/main 2023-11-29 08:43:26 +08:00
叮叮当
2ca7021df0 fix item_id 2023-11-29 08:35:44 +08:00
jxxghp
1682cdad37 fix README.md 2023-11-28 14:55:05 +08:00
jxxghp
ec5e898feb fix app.env 2023-11-27 08:36:45 +08:00
jxxghp
ccf6fb1b36 fix: plugin reset 2023-11-27 08:30:13 +08:00
jxxghp
83c8d619d0 fix README.md 2023-11-27 08:17:23 +08:00
jxxghp
dd5b8219a1 Update plugin.py 2023-11-27 07:09:38 +08:00
jxxghp
34fd927972 Update README.md 2023-11-26 22:50:43 +08:00
jxxghp
6db5ad2697 feat: plugin reset 2023-11-26 21:40:43 +08:00
jxxghp
0be1f27970 - Fixed: subscriptions kept retrying downloads after a torrent became invalid
- Added a Configuration Center plugin for quickly adjusting settings
2023-11-26 20:16:44 +08:00
jxxghp
54603798fc fix bug 2023-11-24 15:26:54 +08:00
jxxghp
2f1e947323 fix #1144 2023-11-24 15:24:38 +08:00
jxxghp
ef2cfe1c1d fix #1141 broken torrent links were retried for download indefinitely 2023-11-24 14:32:56 +08:00
jxxghp
f8edf79c59 fix api doc 2023-11-24 14:00:52 +08:00
jxxghp
dec80b6567 fix 2023-11-23 17:25:54 +08:00
jxxghp
4dac9237ef fix README.md 2023-11-23 17:22:13 +08:00
jxxghp
12f5f373b3 fix frozen 2023-11-23 10:59:33 +08:00
jxxghp
76472770bf v1.4.5 2023-11-22 15:35:12 +08:00
jxxghp
f5baf77c3c Merge pull request #1155 from thsrite/main 2023-11-22 13:42:33 +08:00
thsrite
126276c727 fix #1154 2023-11-22 13:39:39 +08:00
jxxghp
5d6ba83fc5 Merge pull request #1153 from thsrite/main 2023-11-22 09:23:19 +08:00
thsrite
047c3b596d add: plugin methods 2023-11-22 09:21:51 +08:00
jxxghp
2240ff08a1 fix: plugin event dependencies 2023-11-22 07:25:00 +08:00
jxxghp
4d6bf56fa0 fix bug 2023-11-21 21:44:50 +08:00
jxxghp
10ad9a5601 fix bug 2023-11-21 21:20:02 +08:00
jxxghp
fb39500428 feat: improve plugin repository URL format 2023-11-21 21:10:34 +08:00
jxxghp
615a52cef9 Update build.yml 2023-11-20 21:53:38 +08:00
jxxghp
791be0583a fix build 2023-11-20 20:14:07 +08:00
jxxghp
a324731061 - Improve Douban detail display 2023-11-20 20:06:25 +08:00
jxxghp
539d9cf537 - Improve Douban detail display 2023-11-20 20:01:04 +08:00
jxxghp
699312ff28 - Improve Douban detail display 2023-11-20 19:56:52 +08:00
jxxghp
b92b0ec149 fix build 2023-11-20 19:54:41 +08:00
jxxghp
51536062f1 fix 2023-11-20 16:43:49 +08:00
jxxghp
4c230b4c1e feat: clear torrent cache when clearing caches 2023-11-20 10:56:28 +08:00
jxxghp
a7752ceb17 fix bug 2023-11-19 09:48:06 +08:00
jxxghp
ed59a90d78 fix update timeout 2023-11-19 09:45:13 +08:00
jxxghp
11d65e7527 fix douban scraper 2023-11-19 09:05:58 +08:00
jxxghp
b6ac5f0f84 feat: improve Douban detail API 2023-11-18 22:37:23 +08:00
jxxghp
c336e62885 Merge remote-tracking branch 'origin/main' 2023-11-18 21:54:10 +08:00
jxxghp
b868cdb25e feat: improve Douban detail API 2023-11-18 21:53:58 +08:00
jxxghp
04339539d1 fix bug 2023-11-17 14:25:00 +08:00
jxxghp
2d146880ec fix 2023-11-17 13:36:23 +08:00
jxxghp
6eec4ef7f4 fix build 2023-11-17 12:27:43 +08:00
jxxghp
9841f3dd18 fix build 2023-11-17 11:43:01 +08:00
jxxghp
03e0118fb7 add linux frozen build 2023-11-17 11:39:03 +08:00
jxxghp
c7c222b357 v1.4.3 2023-11-17 11:12:52 +08:00
jxxghp
2d8e45cd1b feat: Telegram message retry 2023-11-17 10:41:14 +08:00
jxxghp
e2bd5cc245 Update resource.py 2023-11-16 21:10:21 +08:00
jxxghp
47ddfec30e feat: automatic resource package updates 2023-11-16 20:57:41 +08:00
jxxghp
9344b2a324 add AGSVPT 2023-11-16 17:27:58 +08:00
jxxghp
8496fcccc5 Merge pull request #1133 from honue/main 2023-11-16 14:59:50 +08:00
honue
c6fa3b9d25 fix torrent_cache ttl 2023-11-16 13:34:23 +08:00
jxxghp
ce13987748 - Fix Windows packaging 2023-11-15 08:20:57 +08:00
jxxghp
5221fc4f6a fix #1125 2023-11-15 08:08:46 +08:00
jxxghp
d4dc388d3f fix search 2023-11-14 17:54:41 +08:00
jxxghp
42966c2537 fix #1122 2023-11-14 17:35:31 +08:00
jxxghp
7921dcd86b Merge pull request #1122 from thsrite/main 2023-11-14 17:30:52 +08:00
thsrite
4c69bb6c48 fix 2023-11-14 16:22:04 +08:00
thsrite
0aad809c82 feat: add GITHUB_TOKEN to raise the GitHub API rate-limit threshold 2023-11-14 16:12:02 +08:00
thsrite
f514a5a416 fix 793a7460 2023-11-14 14:17:14 +08:00
thsrite
793a7460c6 fix: anime movie "theatrical edition" recognition 2023-11-14 14:10:48 +08:00
jxxghp
6dcc979fd5 fix #1072
fix #1118
2023-11-14 08:19:57 +08:00
jxxghp
5a07732712 fix: "already subscribed" display for Douban items 2023-11-13 20:25:57 +08:00
jxxghp
61d71b32ff fix: "already subscribed" display for Douban items 2023-11-13 20:22:51 +08:00
158 changed files with 6938 additions and 1951 deletions


@@ -110,7 +110,7 @@ jobs:
- name: Pyinstaller
run: |
pyinstaller windows.spec
pyinstaller frozen.spec
shell: pwsh
- name: Upload Windows File
@@ -119,10 +119,54 @@ jobs:
name: windows
path: dist/MoviePilot.exe
Linux-build-amd64:
runs-on: ubuntu-latest
name: Build Linux Amd64
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Init Python 3.11.4
uses: actions/setup-python@v4
with:
python-version: '3.11.4'
cache: 'pip'
- name: Install Dependent Packages
run: |
python -m pip install --upgrade pip
pip install wheel pyinstaller
pip install -r requirements.txt
- name: Prepare Frontend
run: |
wget https://github.com/jxxghp/MoviePilot-Plugins/archive/refs/heads/main.zip
unzip main.zip
mv MoviePilot-Plugins-main/plugins/* app/plugins/
rm main.zip
rm -rf MoviePilot-Plugins-main
wget https://github.com/jxxghp/MoviePilot-Resources/archive/refs/heads/main.zip
unzip main.zip
mv MoviePilot-Resources-main/resources/* app/helper/
rm main.zip
rm -rf MoviePilot-Resources-main
- name: Pyinstaller
run: |
pyinstaller frozen.spec
mv dist/MoviePilot dist/MoviePilot_Amd64
- name: Upload Linux File
uses: actions/upload-artifact@v3
with:
name: linux-amd64
path: dist/MoviePilot_Amd64
Create-release:
permissions: write-all
runs-on: ubuntu-latest
needs: [ Windows-build, Docker-build ]
needs: [ Windows-build, Docker-build, Linux-build-amd64]
steps:
- uses: actions/checkout@v2
@@ -139,7 +183,8 @@ jobs:
shell: bash
run: |
mkdir releases
mv ./windows/MoviePilot.exe ./releases/MoviePilot_v${{ env.app_version }}.exe
mv ./windows/MoviePilot.exe ./releases/MoviePilot_Win_v${{ env.app_version }}.exe
mv ./linux-amd64/MoviePilot_Amd64 ./releases/MoviePilot_Amd64_v${{ env.app_version }}
- name: Create Release
id: create_release

.gitignore vendored

@@ -9,7 +9,10 @@ app/helper/*.so
app/helper/*.pyd
app/helper/*.bin
app/plugins/**
!app/plugins/__init__.py
config/cookies/**
config/user.db
config/sites/**
*.pyc
*.log
.vscode


@@ -32,6 +32,7 @@ RUN apt-get update -y \
haproxy \
fuse3 \
rsync \
ffmpeg \
&& \
if [ "$(uname -m)" = "x86_64" ]; \
then ln -s /usr/lib/x86_64-linux-musl/libc.so /lib/libc.musl-x86_64.so.1; \

README.md

@@ -7,19 +7,25 @@
发布频道https://t.me/moviepilot_channel
## 主要特性
- 前后端分离基于FastApi + Vue3前端项目地址[MoviePilot-Frontend](https://github.com/jxxghp/MoviePilot-Frontend)
- 前后端分离基于FastApi + Vue3前端项目地址[MoviePilot-Frontend](https://github.com/jxxghp/MoviePilot-Frontend)APIhttp://localhost:3001/docs
- 聚焦核心需求,简化功能和设置,部分设置项可直接使用默认值。
- 重新设计了用户界面,更加美观易用。
## 安装
### 注意管理员用户不要使用弱密码如非必要不要暴露到公网。如被盗取管理账号权限将会导致站点Cookie等敏感数据泄露
### 1. **安装CookieCloud插件**
站点信息需要通过CookieCloud同步获取因此需要安装CookieCloud插件将浏览器中的站点Cookie数据同步到云端后再同步到MoviePilot使用。 插件下载地址请点击 [这里](https://github.com/easychen/CookieCloud/releases)。
### 2. **安装CookieCloud服务端可选**
MoviePilot内置了公共CookieCloud服务器如果需要自建服务可参考 [CookieCloud](https://github.com/easychen/CookieCloud) 项目进行搭建docker镜像请点击 [这里](https://hub.docker.com/r/easychen/cookiecloud)。
通过CookieCloud可以快速同步浏览器中保存的站点数据到MoviePilot支持以下服务方式
- 使用公共CookieCloud远程服务器默认服务器地址为https://movie-pilot.org/cookiecloud
- 使用内建的本地Cookie服务`设定` - `站点` 中打开`启用本地CookieCloud服务器`将启用内建的CookieCloud提供服务服务地址为`http://localhost:${NGINX_PORT}/cookiecloud/`, Cookie数据加密保存在配置文件目录下的`cookies`文件中
- 自建服务CookieCloud服务器参考 [CookieCloud](https://github.com/easychen/CookieCloud) 项目进行搭建docker镜像请点击 [这里](https://hub.docker.com/r/easychen/cookiecloud)
**声明:** 本项目不会收集用户敏感数据Cookie同步也是基于CookieCloud项目实现非本项目提供的能力。技术角度上CookieCloud采用端到端加密在个人不泄露`用户KEY``端对端加密密码`的情况下第三方无法窃取任何用户信息(包括服务器持有者)。如果你不放心,可以不使用公共服务或者不使用本项目,但如果使用后发生了任何信息泄露与本项目无关!
@@ -39,244 +45,172 @@ MoviePilot需要配套下载器和媒体服务器配合使用。
docker pull jxxghp/moviepilot:latest
```
- Windows
下载 [MoviePilot.exe](https://github.com/jxxghp/MoviePilot/releases),双击运行后自动生成配置文件目录
下载 [MoviePilot.exe](https://github.com/jxxghp/MoviePilot/releases),双击运行后自动生成配置文件目录访问http://localhost:3000
- 群晖套件
添加套件源https://spk7.imnks.com/
- 本地运行
1) 将工程 [MoviePilot-Plugins](https://github.com/jxxghp/MoviePilot-Plugins) plugins目录下的所有文件复制到`app/plugins`目录
2) 将工程 [MoviePilot-Resources](https://github.com/jxxghp/MoviePilot-Resources) resources目录下的所有文件复制到`app/helper`目录
3) 执行命令:`pip install -r requirements.txt` 安装依赖
4) 执行命令:`python app/main.py` 启动服务
4) 执行命令:`PYTHONPATH=. python app/main.py` 启动服务
5) 根据前端项目 [MoviePilot-Frontend](https://github.com/jxxghp/MoviePilot-Frontend) 说明,启动前端服务
## 配置
项目的所有配置均通过环境变量进行设置,支持两种配置方式:
- 在Docker环境变量部分或Windows系统环境变量中进行参数配置如未自动显示配置项则需要手动增加对应环境变量。
- 下载 [app.env](https://github.com/jxxghp/MoviePilot/raw/main/config/app.env) 配置文件,修改好配置后放置到配置文件映射路径根目录,配置项可根据说明自主增减。
大部分配置可启动后通过WEB管理界面进行配置但仍有部分配置需要通过环境变量/配置文件进行配置。
配置文件映射路径:`/config`,配置项生效优先级:环境变量 > env文件 > 默认值**部分参数如路径映射、站点认证、权限端口、时区等必须通过环境变量进行配置**
配置文件映射路径:`/config`,配置项生效优先级:环境变量 > env文件或通过WEB界面配置 > 默认值。
> ❗号标识的为必填项,其它为可选项,可选项可删除配置变量从而使用默认值。
### 1. **基础设置**
### 1. **环境变量**
- **❗NGINX_PORT** WEB服务端口默认`3000`可自行修改不能与API服务端口冲突(仅支持环境变量配置)
- **❗PORT** API服务端口默认`3001`可自行修改不能与WEB服务端口冲突(仅支持环境变量配置)
- **PUID**:运行程序用户的`uid`,默认`0`(仅支持环境变量配置)
- **PGID**:运行程序用户的`gid`,默认`0`(仅支持环境变量配置)
- **UMASK**:掩码权限,默认`000`,可以考虑设置为`022`(仅支持环境变量配置)
- **PROXY_HOST** 网络代理访问themoviedb或者重启更新需要使用代理访问格式为`http(s)://ip:port`、`socks5://user:pass@host:port`(仅支持环境变量配置)
- **MOVIEPILOT_AUTO_UPDATE**重启更新,`true`/`release`/`dev`/`false`,默认`release` **注意:如果出现网络问题可以配置`PROXY_HOST`**(仅支持环境变量配置)
---
- **❗SUPERUSER** 超级管理员用户名,默认`admin`,安装后使用该用户登录后台管理界面
- **❗SUPERUSER_PASSWORD** 超级管理员初始密码,默认`password`,建议修改为复杂密码
- **❗NGINX_PORT** WEB服务端口默认`3000`可自行修改不能与API服务端口冲突
- **❗PORT** API服务端口默认`3001`可自行修改不能与WEB服务端口冲突
- **PUID**:运行程序用户的`uid`,默认`0`
- **PGID**:运行程序用户的`gid`,默认`0`
- **UMASK**:掩码权限,默认`000`,可以考虑设置为`022`
- **PROXY_HOST** 网络代理访问themoviedb或者重启更新需要使用代理访问格式为`http(s)://ip:port`、`socks5://user:pass@host:port`
- **MOVIEPILOT_AUTO_UPDATE** 重启时自动更新,`true`/`release`/`dev`/`false`,默认`release`需要能正常连接Github **注意:如果出现网络问题可以配置`PROXY_HOST`**
- **❗AUTH_SITE** 认证站点(认证通过后才能使用站点相关功能),支持配置多个认证站点,使用`,`分隔,如:`iyuu,hhclub`,会依次执行认证操作,直到有一个站点认证成功。
配置`AUTH_SITE`后,需要根据下表配置对应站点的认证参数。
认证资源`v1.1.4`支持:`iyuu`/`hhclub`/`audiences`/`hddolby`/`zmpt`/`freefarm`/`hdfans`/`wintersakura`/`leaves`/`ptba` /`icc2022`/`ptlsp`/`xingtan`/`ptvicomo`/`agsvpt`/`hdkyl`
| 站点 | 参数 |
|:------------:|:-----------------------------------------------------:|
| iyuu | `IYUU_SIGN`IYUU登录令牌 |
| hhclub | `HHCLUB_USERNAME`:用户名<br/>`HHCLUB_PASSKEY`:密钥 |
| audiences | `AUDIENCES_UID`用户ID<br/>`AUDIENCES_PASSKEY`:密钥 |
| hddolby | `HDDOLBY_ID`用户ID<br/>`HDDOLBY_PASSKEY`:密钥 |
| zmpt | `ZMPT_UID`用户ID<br/>`ZMPT_PASSKEY`:密钥 |
| freefarm | `FREEFARM_UID`用户ID<br/>`FREEFARM_PASSKEY`:密钥 |
| hdfans | `HDFANS_UID`用户ID<br/>`HDFANS_PASSKEY`:密钥 |
| wintersakura | `WINTERSAKURA_UID`用户ID<br/>`WINTERSAKURA_PASSKEY`:密钥 |
| leaves | `LEAVES_UID`用户ID<br/>`LEAVES_PASSKEY`:密钥 |
| ptba | `PTBA_UID`用户ID<br/>`PTBA_PASSKEY`:密钥 |
| icc2022 | `ICC2022_UID`用户ID<br/>`ICC2022_PASSKEY`:密钥 |
| ptlsp | `PTLSP_UID`用户ID<br/>`PTLSP_PASSKEY`:密钥 |
| xingtan | `XINGTAN_UID`用户ID<br/>`XINGTAN_PASSKEY`:密钥 |
| ptvicomo | `PTVICOMO_UID`用户ID<br/>`PTVICOMO_PASSKEY`:密钥 |
| agsvpt | `AGSVPT_UID`用户ID<br/>`AGSVPT_PASSKEY`:密钥 |
| hdkyl | `HDKYL_UID`用户ID<br/>`HDKYL_PASSKEY`:密钥 |
### 2. **环境变量 / 配置文件**
配置文件名:`app.env`,放配置文件根目录。
- **❗SUPERUSER** 超级管理员用户名,默认`admin`,安装后使用该用户登录后台管理界面,**注意:启动一次后再次修改该值不会生效,除非删除数据库文件!**
- **❗API_TOKEN** API密钥默认`moviepilot`在媒体服务器Webhook、微信回调等地址配置中需要加上`?token=`该值,建议修改为复杂字符串
- **TMDB_API_DOMAIN** TMDB API地址,默认`api.themoviedb.org`,也可配置为`api.tmdb.org`或其它中转代理服务地址,能连通即可
- **BIG_MEMORY_MODE** 大内存模式,默认`false`,开启后会增加缓存数量,占用更多的内存,但响应速度会更快
- **DOH_ENABLE** DNS over HTTPS开关`true`/`false`,默认`true`开启后会使用DOH对api.themoviedb.org等域名进行解析以减少被DNS污染的情况提升网络连通性
- **META_CACHE_EXPIRE** 元数据识别缓存过期时间小时数字型不配置或者配置为0时使用系统默认大内存模式为7天否则为3天调大该值可减少themoviedb的访问次数
- **GITHUB_TOKEN** Github token提高自动更新、插件安装等请求Github Api的限流阈值格式ghp_****
- **DEV:** 开发者模式,`true`/`false`,默认`false`,开启后会暂停所有定时任务
- **AUTO_UPDATE_RESOURCE**:启动时自动检测和更新资源包(站点索引及认证等),`true`/`false`,默认`true`需要能正常连接Github仅支持Docker镜像
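The `DOH_ENABLE` switch above resolves domains such as `api.themoviedb.org` over HTTPS instead of plain UDP DNS, which sidesteps local DNS poisoning. A minimal sketch of a JSON-based DoH lookup — Cloudflare's public resolver and its `application/dns-json` API are assumptions for illustration; MoviePilot's internal resolver list and wire format may differ:

```python
import json
from urllib.request import Request, urlopen

def build_doh_query(domain: str, resolver: str = "https://1.1.1.1/dns-query") -> Request:
    # Cloudflare's JSON API is assumed here; other DoH servers use DNS wire format
    return Request(f"{resolver}?name={domain}&type=A",
                   headers={"Accept": "application/dns-json"})

def doh_resolve(domain: str) -> list:
    # Perform the lookup and return the A-record addresses
    with urlopen(build_doh_query(domain), timeout=10) as resp:
        answers = json.load(resp).get("Answer", [])
    return [a["data"] for a in answers if a.get("type") == 1]  # type 1 = A record
```

The resolved addresses can then be fed to the HTTP client directly, bypassing the system resolver entirely.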
---
- **TMDB_API_DOMAIN** TMDB API地址默认`api.themoviedb.org`,也可配置为`api.tmdb.org`、`tmdb.movie-pilot.org` 或其它中转代理服务地址,能连通即可
- **TMDB_IMAGE_DOMAIN** TMDB图片地址默认`image.tmdb.org`可配置为其它中转代理以加速TMDB图片显示`static-mdb.v.geilijiasu.com`
- **WALLPAPER** 登录首页电影海报,`tmdb`/`bing`,默认`tmdb`
- **RECOGNIZE_SOURCE** 媒体信息识别来源,`themoviedb`/`douban`,默认`themoviedb`,使用`douban`时不支持二级分类
- **FANART_ENABLE** Fanart开关`true`/`false`,默认`true`,关闭后刮削的图片类型会大幅减少
- **SCRAP_SOURCE** 刮削元数据及图片使用的数据源,`themoviedb`/`douban`,默认`themoviedb`
---
- **SCRAP_METADATA** 刮削入库的媒体文件,`true`/`false`,默认`true`
- **SCRAP_FOLLOW_TMDB** 新增已入库媒体是否跟随TMDB信息变化`true`/`false`,默认`true`,为`false`时即使TMDB信息变化了也会仍然按历史记录中已入库的信息进行刮削
---
- **❗TRANSFER_TYPE** 整理转移方式,支持`link`/`copy`/`move`/`softlink`/`rclone_copy`/`rclone_move` **注意:在`link`和`softlink`转移方式下,转移后的文件会继承源文件的权限掩码,不受`UMASK`影响rclone需要自行映射rclone配置目录到容器中或在容器内完成rclone配置节点名称必须为`MP`**
- **❗OVERWRITE_MODE** 转移覆盖模式,默认为`size`,支持`never`/`size`/`always`/`latest`,分别表示`不覆盖同名文件`/`同名文件根据文件大小覆盖(大覆盖小)`/`总是覆盖同名文件`/`仅保留最新版本,删除旧版本文件(包括非同名文件)`
- **❗LIBRARY_PATH** 媒体库目录,多个目录使用`,`分隔
- **LIBRARY_MOVIE_NAME** 电影媒体库目录名称(不是完整路径),默认`电影`
- **LIBRARY_TV_NAME** 电视剧媒体库目录名称(不是完整路径),默认`电视剧`
- **LIBRARY_ANIME_NAME** 动漫媒体库目录名称(不是完整路径),默认`电视剧/动漫`
- **LIBRARY_CATEGORY** 媒体库二级分类开关,`true`/`false`,默认`false`,开启后会根据配置 [category.yaml](https://github.com/jxxghp/MoviePilot/raw/main/config/category.yaml) 自动在媒体库目录下建立二级目录分类
---
- **❗COOKIECLOUD_HOST** CookieCloud服务器地址格式`http(s)://ip:port`,不配置默认使用内建服务器`https://movie-pilot.org/cookiecloud`
- **❗COOKIECLOUD_KEY** CookieCloud用户KEY
- **❗COOKIECLOUD_PASSWORD** CookieCloud端对端加密密码
- **❗COOKIECLOUD_INTERVAL** CookieCloud同步间隔分钟
- **❗USER_AGENT** CookieCloud保存Cookie对应的浏览器UA建议配置设置后可增加连接站点的成功率同步站点后可以在管理界面中修改
---
- **SUBSCRIBE_MODE** 订阅模式,`rss`/`spider`,默认`spider``rss`模式通过定时刷新RSS来匹配订阅RSS地址会自动获取也可手动维护对站点压力小同时可设置订阅刷新周期24小时运行但订阅和下载通知不能过滤和显示免费推荐使用rss模式。
- **SUBSCRIBE_RSS_INTERVAL** RSS订阅模式刷新时间间隔分钟默认`30`分钟不能小于5分钟。
- **SUBSCRIBE_SEARCH** 订阅搜索,`true`/`false`,默认`false`开启后会每隔24小时对所有订阅进行全量搜索以补齐缺失剧集一般情况下正常订阅即可订阅搜索只做为兜底会增加站点压力不建议开启
- **AUTO_DOWNLOAD_USER** 远程交互搜索时自动择优下载的用户ID消息通知渠道的用户ID多个用户使用,分割,未设置需要选择资源或者回复`0`
- **AUTO_DOWNLOAD_USER** 远程交互搜索时自动择优下载的用户ID消息通知渠道的用户ID多个用户使用,分割,设置为 all 代表全部用户自动择优下载,未设置需要手动选择资源或者回复`0`才自动择优下载
---
- **OCR_HOST** OCR识别服务器地址格式`http(s)://ip:port`用于识别站点验证码实现自动登录获取Cookie等不配置默认使用内建服务器`https://movie-pilot.org`,可使用 [这个镜像](https://hub.docker.com/r/jxxghp/moviepilot-ocr) 自行搭建。
- **PLUGIN_MARKET** 插件市场仓库地址,多个地址使用`,`分隔,保留最后的/,默认为官方插件仓库:`https://raw.githubusercontent.com/jxxghp/MoviePilot-Plugins/main/`。
---
- **❗MESSAGER** 消息通知渠道,支持 `telegram`/`wechat`/`slack`/`synologychat`,开启多个渠道时使用`,`分隔。同时还需要配置对应渠道的环境变量,非对应渠道的变量可删除,推荐使用`telegram`
- `wechat`设置项:
- **WECHAT_CORPID** WeChat企业ID
- **WECHAT_APP_SECRET** WeChat应用Secret
- **WECHAT_APP_ID** WeChat应用ID
- **WECHAT_TOKEN** WeChat消息回调的Token
- **WECHAT_ENCODING_AESKEY** WeChat消息回调的EncodingAESKey
- **WECHAT_ADMINS** WeChat管理员列表多个管理员用英文逗号分隔可选
- **WECHAT_PROXY** WeChat代理服务器后面不要加/
- `telegram`设置项:
- **TELEGRAM_TOKEN** Telegram Bot Token
- **TELEGRAM_CHAT_ID** Telegram Chat ID
- **TELEGRAM_USERS** Telegram 用户ID多个使用,分隔只有用户ID在列表中才可以使用Bot如未设置则均可以使用Bot
- **TELEGRAM_ADMINS** Telegram 管理员ID多个使用,分隔只有管理员才可以操作Bot菜单如未设置则均可以操作菜单可选
- `slack`设置项:
- **SLACK_OAUTH_TOKEN** Slack Bot User OAuth Token
- **SLACK_APP_TOKEN** Slack App-Level Token
- **SLACK_CHANNEL** Slack 频道名称,默认`全体`(可选)
- `synologychat`设置项:
- **SYNOLOGYCHAT_WEBHOOK** 在Synology Chat中创建机器人获取机器人`传入URL`
- **SYNOLOGYCHAT_TOKEN** SynologyChat机器人`令牌`
---
- **❗DOWNLOAD_PATH** 下载保存目录,**注意:需要将`moviepilot`及`下载器`的映射路径保持一致**,否则会导致下载文件无法转移
- **DOWNLOAD_MOVIE_PATH** 电影下载保存目录路径,不设置则下载到`DOWNLOAD_PATH`
- **DOWNLOAD_TV_PATH** 电视剧下载保存目录路径,不设置则下载到`DOWNLOAD_PATH`
- **DOWNLOAD_ANIME_PATH** 动漫下载保存目录路径,不设置则下载到`DOWNLOAD_PATH`
- **DOWNLOAD_CATEGORY** 下载二级分类开关,`true`/`false`,默认`false`,开启后会根据配置 [category.yaml](https://github.com/jxxghp/MoviePilot/raw/main/config/category.yaml) 自动在下载目录下建立二级目录分类
- **DOWNLOAD_SUBTITLE** 下载站点字幕,`true`/`false`,默认`true`
- **DOWNLOADER_MONITOR** 下载器监控,`true`/`false`,默认为`true`,开启后下载完成时才会自动整理入库
- **TORRENT_TAG** 下载器种子标签,默认为`MOVIEPILOT`设置后只有MoviePilot添加的下载才会处理留空所有下载器中的任务均会处理
- **❗DOWNLOADER** 下载器,支持`qbittorrent`/`transmission`QB版本号要求>= 4.3.9TR版本号要求>= 3.0,同时还需要配置对应渠道的环境变量,非对应渠道的变量可删除,推荐使用`qbittorrent`
- `qbittorrent`设置项:
- **QB_HOST** qbittorrent地址格式`ip:port`https需要添加`https://`前缀
- **QB_USER** qbittorrent用户名
- **QB_PASSWORD** qbittorrent密码
- **QB_CATEGORY** qbittorrent分类自动管理`true`/`false`,默认`false`,开启后会将下载二级分类传递到下载器,由下载器管理下载目录,需要同步开启`DOWNLOAD_CATEGORY`
- **QB_SEQUENTIAL** qbittorrent按顺序下载`true`/`false`,默认`true`
- **QB_FORCE_RESUME** qbittorrent忽略队列限制强制继续`true`/`false`,默认 `false`
- `transmission`设置项:
- **TR_HOST** transmission地址格式`ip:port`https需要添加`https://`前缀
- **TR_USER** transmission用户名
- **TR_PASSWORD** transmission密码
---
- **❗MEDIASERVER** 媒体服务器,支持`emby`/`jellyfin`/`plex`,同时开启多个使用`,`分隔。还需要配置对应媒体服务器的环境变量,非对应媒体服务器的变量可删除,推荐使用`emby`
- **MOVIE_RENAME_FORMAT** 电影重命名格式基于jinja2语法
- `emby`设置项:
`MOVIE_RENAME_FORMAT`支持的配置项:
- **EMBY_HOST** Emby服务器地址格式`ip:port`https需要添加`https://`前缀
- **EMBY_API_KEY** Emby Api Key在`设置->高级->API密钥`处生成
> `title` TMDB/豆瓣中的标题
> `en_title` TMDB中的英文标题 (暂不支持豆瓣)
> `original_title` TMDB/豆瓣中的原语种标题
> `name` 从文件名中识别的名称(同时存在中英文时,优先使用中文)
> `en_name`:从文件名中识别的英文名称(可能为空)
> `original_name` 原文件名(包括文件外缀)
> `year` 年份
> `resourceType`:资源类型
> `effect`:特效
> `edition` 版本(资源类型+特效)
> `videoFormat` 分辨率
> `releaseGroup` 制作组/字幕组
> `customization` 自定义占位符
> `videoCodec` 视频编码
> `audioCodec` 音频编码
> `tmdbid` TMDB ID非TMDB识别源时为空
> `imdbid` IMDB ID可能为空
> `doubanid`豆瓣ID非豆瓣识别源时为空
> `part`:段/节
> `fileExt`:文件扩展名
> `customization`:自定义占位符
`MOVIE_RENAME_FORMAT`默认配置格式:
```
{{title}}{% if year %} ({{year}}){% endif %}/{{title}}{% if year %} ({{year}}){% endif %}{% if part %}-{{part}}{% endif %}{% if videoFormat %} - {{videoFormat}}{% endif %}{{fileExt}}
```
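The default template above can be mirrored in plain Python to see what it renders to (an illustrative sketch; the real renderer evaluates the jinja2 template directly):

```python
def render_movie_path(title, fileExt, year=None, part=None, videoFormat=None):
    # Mirrors the default MOVIE_RENAME_FORMAT jinja2 template: folder, then file
    folder = title + (f" ({year})" if year else "")
    filename = (folder
                + (f"-{part}" if part else "")
                + (f" - {videoFormat}" if videoFormat else "")
                + fileExt)
    return f"{folder}/{filename}"

print(render_movie_path("Inception", ".mkv", year="2010", videoFormat="1080p"))
# Inception (2010)/Inception (2010) - 1080p.mkv
```

Fields left unset (e.g. `part`) simply drop out of the result, matching the `{% if %}` guards in the template.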
- `jellyfin`设置项:
- **JELLYFIN_HOST** Jellyfin服务器地址格式`ip:port`https需要添加`https://`前缀
- **JELLYFIN_API_KEY** Jellyfin Api Key在`设置->高级->API密钥`处生成
- `plex`设置项:
- **PLEX_HOST** Plex服务器地址格式`ip:port`https需要添加`https://`前缀
- **PLEX_TOKEN** Plex网页Url中的`X-Plex-Token`通过浏览器F12->网络从请求URL中获取
- **MEDIASERVER_SYNC_INTERVAL:** 媒体服务器同步间隔(小时),默认`6`,留空则不同步
- **MEDIASERVER_SYNC_BLACKLIST:** 媒体服务器同步黑名单,多个媒体库名称使用,分割
### 2. **用户认证**
`MoviePilot`需要认证后才能使用,配置`AUTH_SITE`后,需要根据下表配置对应站点的认证参数(**仅能通过环境变量配置**
`AUTH_SITE`支持配置多个认证站点,使用`,`分隔,如:`iyuu,hhclub`,会依次执行认证操作,直到有一个站点认证成功。
- **❗AUTH_SITE** 认证站点,认证资源`v1.0.1`支持`iyuu`/`hhclub`/`audiences`/`hddolby`/`zmpt`/`freefarm`/`hdfans`/`wintersakura`/`leaves`/`1ptba`/`icc2022`/`ptlsp`/`xingtan`/`ptvicomo`
| 站点 | 参数 |
|:------------:|:-----------------------------------------------------:|
| iyuu | `IYUU_SIGN`IYUU登录令牌 |
| hhclub | `HHCLUB_USERNAME`:用户名<br/>`HHCLUB_PASSKEY`:密钥 |
| audiences | `AUDIENCES_UID`用户ID<br/>`AUDIENCES_PASSKEY`:密钥 |
| hddolby | `HDDOLBY_ID`用户ID<br/>`HDDOLBY_PASSKEY`:密钥 |
| zmpt | `ZMPT_UID`用户ID<br/>`ZMPT_PASSKEY`:密钥 |
| freefarm | `FREEFARM_UID`用户ID<br/>`FREEFARM_PASSKEY`:密钥 |
| hdfans | `HDFANS_UID`用户ID<br/>`HDFANS_PASSKEY`:密钥 |
| wintersakura | `WINTERSAKURA_UID`用户ID<br/>`WINTERSAKURA_PASSKEY`:密钥 |
| leaves | `LEAVES_UID`用户ID<br/>`LEAVES_PASSKEY`:密钥 |
| 1ptba | `1PTBA_UID`用户ID<br/>`1PTBA_PASSKEY`:密钥 |
| icc2022 | `ICC2022_UID`用户ID<br/>`ICC2022_PASSKEY`:密钥 |
| ptlsp | `PTLSP_UID`用户ID<br/>`PTLSP_PASSKEY`:密钥 |
| xingtan | `XINGTAN_UID`用户ID<br/>`XINGTAN_PASSKEY`:密钥 |
| ptvicomo | `PTVICOMO_UID`用户ID<br/>`PTVICOMO_PASSKEY`:密钥 |
### 2. **进阶配置**
- **BIG_MEMORY_MODE** 大内存模式,默认为`false`,开启后会增加缓存数量,占用更多的内存,但响应速度会更快
- **MOVIE_RENAME_FORMAT** 电影重命名格式
`MOVIE_RENAME_FORMAT`支持的配置项:
> `title` 标题
> `original_name` 原文件名
> `original_title` 原语种标题
> `name` 识别名称
> `year` 年份
> `resourceType`:资源类型
> `effect`:特效
> `edition` 版本(资源类型+特效)
> `videoFormat` 分辨率
> `releaseGroup` 制作组/字幕组
> `customization` 自定义占位符
> `videoCodec` 视频编码
> `audioCodec` 音频编码
> `tmdbid` TMDBID
> `imdbid` IMDBID
> `part`:段/节
> `fileExt`:文件扩展名
> `tmdbid`TMDB ID
> `imdbid`IMDB ID
> `customization`:自定义占位符
`MOVIE_RENAME_FORMAT`默认配置格式:
```
{{title}}{% if year %} ({{year}}){% endif %}/{{title}}{% if year %} ({{year}}){% endif %}{% if part %}-{{part}}{% endif %}{% if videoFormat %} - {{videoFormat}}{% endif %}{{fileExt}}
```
- **TV_RENAME_FORMAT** 电视剧重命名格式
`TV_RENAME_FORMAT`额外支持的配置项:
> `season` 季号
> `episode` 集号
> `season_episode` 季集 SxxExx
> `episode_title` 集标题
`TV_RENAME_FORMAT`默认配置格式:
```
{{title}}{% if year %} ({{year}}){% endif %}/Season {{season}}/{{title}} - {{season_episode}}{% if part %}-{{part}}{% endif %}{% if episode %} - 第 {{episode}} 集{% endif %}{{fileExt}}
```
- **TV_RENAME_FORMAT** 电视剧重命名格式基于jinja2语法
`TV_RENAME_FORMAT`额外支持的配置项:
> `season` 季号
> `episode` 集号
> `season_episode` 季集 SxxExx
> `episode_title` 集标题
`TV_RENAME_FORMAT`默认配置格式:
```
{{title}}{% if year %} ({{year}}){% endif %}/Season {{season}}/{{title}} - {{season_episode}}{% if part %}-{{part}}{% endif %}{% if episode %} - 第 {{episode}} 集{% endif %}{{fileExt}}
```
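The TV template works the same way, with the extra `season_episode` field producing the `SxxExx` token. A plain-Python mirror of the default template (sketch only):

```python
def render_tv_path(title, season, episode, fileExt, year=None, part=None):
    # Mirrors the default TV_RENAME_FORMAT jinja2 template
    season_episode = f"S{season:02d}E{episode:02d}"  # the SxxExx token
    folder = title + (f" ({year})" if year else "")
    filename = (f"{title} - {season_episode}"
                + (f"-{part}" if part else "")
                + (f" - 第 {episode} 集" if episode else "")
                + fileExt)
    return f"{folder}/Season {season}/{filename}"

print(render_tv_path("Breaking Bad", 1, 2, ".mkv", year="2008"))
# Breaking Bad (2008)/Season 1/Breaking Bad - S01E02 - 第 2 集.mkv
```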
### 3. **优先级规则**
- 仅支持使用内置规则进行排列组合,内置规则有:`蓝光原盘`、`4K`、`1080P`、`中文字幕`、`特效字幕`、`H265`、`H264`、`杜比`、`HDR`、`REMUX`、`WEB-DL`、`免费`、`国语配音` 等
- 仅支持使用内置规则进行排列组合,通过设置多层规则来实现优先级顺序匹配
- 符合任一层级规则的资源将被标识选中,匹配成功的层级做为该资源的优先级,排越前面优先级越高
- 不符合过滤规则所有层级规则的资源将不会被选中
### 4. **插件扩展**
- **PLUGIN_MARKET** 插件市场仓库地址仅支持Github仓库`main`分支,多个地址使用`,`分隔,默认为官方插件仓库:`https://github.com/jxxghp/MoviePilot-Plugins` ,通过查看[MoviePilot-Plugins](https://github.com/jxxghp/MoviePilot-Plugins)项目的fork或者查看频道置顶了解更多第三方插件仓库。
## 使用
- 通过CookieCloud同步快速同步站点不需要使用的站点可在WEB管理界面中禁用无法同步的站点可手动新增。
- 通过WEB进行管理将WEB添加到手机桌面获得类App使用效果管理界面端口`3000`后台API端口:`3001`。
- 通过下载器监控或使用目录监控插件实现自动整理入库刮削(二选一)
- 通过微信/Telegram/Slack/SynologyChat远程管理其中微信/Telegram将会自动添加操作菜单微信菜单条数有限制部分菜单不显示微信需要在官方页面设置回调地址SynologyChat需要设置机器人传入地址地址相对路径为`/api/v1/message/`。
- 设置媒体服务器Webhook通过MoviePilot发送播放通知等。Webhook回调相对路径为`/api/v1/webhook?token=moviepilot``3001`端口),其中`moviepilot`为设置的`API_TOKEN`
- 将MoviePilot做为Radarr或Sonarr服务器添加到Overseerr或Jellyseerr`API服务端口`可使用Overseerr/Jellyseerr浏览订阅
- 映射宿主机docker.sock文件到容器`/var/run/docker.sock`,以支持内建重启操作。实例:`-v /var/run/docker.sock:/var/run/docker.sock:ro`
### 1. **WEB后台管理**
- 通过设置的超级管理员用户登录后台管理界面(`SUPERUSER`配置项默认用户admin默认端口3000
> ❗**注意:超级管理员用户初始密码为自动生成,需要在首次运行时的后台日志中查看!** 如首次运行日志丢失,则需要删除配置文件目录下的`user.db`文件,然后重启服务
### 2. **站点维护**
- 通过CookieCloud同步快速添加站点不需要使用的站点可在WEB管理界面中禁用或删除无法同步的站点也可手动新增
- 需要通过环境变量设置用户认证信息且认证成功后才能使用站点相关功能,未认证通过时站点相关的插件也会无法显示
### 3. **文件整理**
- 默认通过监控下载器实现下载完成后自动整理入库并刮削媒体信息,需要后台打开`下载器监控`开关且仅会处理通过MoviePilot添加下载的任务。
- 使用`目录监控`等插件实现更灵活的自动整理。
### 4. **通知交互**
- 支持通过`微信`/`Telegram`/`Slack`/`SynologyChat`/`VoceChat`等渠道远程管理和订阅下载,其中 微信/Telegram 将会自动添加操作菜单(微信菜单条数有限制,部分菜单不显示)。
- `微信`回调地址、`SynologyChat`传入地址地址相对路径均为:`/api/v1/message/``VoceChat`的Webhook地址相对路径为`/api/v1/message/?token=moviepilot`其中moviepilot为设置的`API_TOKEN`。
### 5. **订阅与搜索**
- 通过MoviePilot管理后台搜索和订阅。
- 将MoviePilot做为`Radarr`或`Sonarr`服务器添加到`Overseerr`或`Jellyseerr`,可使用`Overseerr/Jellyseerr`浏览和添加订阅。
- 安装`豆瓣榜单订阅`、`猫眼订阅`等插件,实现自动订阅豆瓣榜单、猫眼榜单等。
### 6. **其他**
- 通过设置媒体服务器Webhook指向MoviePilot相对路径为`/api/v1/webhook?token=moviepilot`,其中`moviepilot`为设置的`API_TOKEN`可实现通过MoviePilot发送播放通知以及配合各类插件实现播放限速等功能。
- 映射宿主机`docker.sock`文件到容器`/var/run/docker.sock`,可支持应用内建重启操作。实例:`-v /var/run/docker.sock:/var/run/docker.sock:ro`。
- 将WEB页面添加到手机桌面图标可获得与App一样的使用体验。
### **注意**
- 容器首次启动需要下载浏览器内核,根据网络情况可能需要较长时间,此时无法登录。可映射`/moviepilot`目录避免容器重置后重新触发浏览器内核下载。
@@ -290,6 +224,14 @@ location / {
proxy_set_header X-Forwarded-Proto $scheme;
}
```
- 反代使用ssl时需要开启`http2`,否则会导致日志加载时间过长或不可用。以`Nginx`为例:
```nginx configuration
server {
listen 443 ssl;
http2 on;
# ...
}
```
- 新建的企业微信应用需要固定公网IP的代理才能收到消息代理添加以下代码
```nginx configuration
location /cgi-bin/gettoken {
@@ -303,6 +245,13 @@ location /cgi-bin/menu/create {
}
```
- 部分插件功能基于文件系统监控实现(如`目录监控`等需在宿主机上不是docker容器内执行以下命令并重启
```shell
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
echo fs.inotify.max_user_instances=524288 | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
```
![image](https://github.com/jxxghp/MoviePilot/assets/51039935/f2654b09-26f3-464f-a0af-1de3f97832ee)
![image](https://github.com/jxxghp/MoviePilot/assets/51039935/fcb87529-56dd-43df-8337-6e34b8582819)


@@ -1,7 +1,8 @@
from fastapi import APIRouter
from app.api.endpoints import login, user, site, message, webhook, subscribe, \
media, douban, search, plugin, tmdb, history, system, download, dashboard, filebrowser, transfer
media, douban, search, plugin, tmdb, history, system, download, dashboard, \
filebrowser, transfer, mediaserver, bangumi
api_router = APIRouter()
api_router.include_router(login.router, prefix="/login", tags=["login"])
@@ -21,3 +22,6 @@ api_router.include_router(download.router, prefix="/download", tags=["download"]
api_router.include_router(dashboard.router, prefix="/dashboard", tags=["dashboard"])
api_router.include_router(filebrowser.router, prefix="/filebrowser", tags=["filebrowser"])
api_router.include_router(transfer.router, prefix="/transfer", tags=["transfer"])
api_router.include_router(mediaserver.router, prefix="/mediaserver", tags=["mediaserver"])
api_router.include_router(bangumi.router, prefix="/bangumi", tags=["bangumi"])


@@ -0,0 +1,64 @@
from typing import List, Any
from fastapi import APIRouter, Depends
from app import schemas
from app.chain.bangumi import BangumiChain
from app.core.context import MediaInfo
from app.core.security import verify_token
router = APIRouter()
@router.get("/calendar", summary="Bangumi每日放送", response_model=List[schemas.MediaInfo])
def calendar(page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览Bangumi每日放送
"""
infos = BangumiChain().calendar(page=page, count=count)
if not infos:
return []
medias = [MediaInfo(bangumi_info=info) for info in infos]
return [media.to_dict() for media in medias]
@router.get("/credits/{bangumiid}", summary="查询Bangumi演职员表", response_model=List[schemas.BangumiPerson])
def bangumi_credits(bangumiid: int,
page: int = 1,
count: int = 20,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询Bangumi演职员表
"""
persons = BangumiChain().bangumi_credits(bangumiid, page=page, count=count)
if not persons:
return []
return [schemas.BangumiPerson(**person) for person in persons]
@router.get("/recommend/{bangumiid}", summary="查询Bangumi推荐", response_model=List[schemas.MediaInfo])
def bangumi_recommend(bangumiid: int,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询Bangumi推荐
"""
infos = BangumiChain().bangumi_recommend(bangumiid)
if not infos:
return []
medias = [MediaInfo(bangumi_info=info) for info in infos]
return [media.to_dict() for media in medias]
@router.get("/{bangumiid}", summary="查询Bangumi详情", response_model=schemas.MediaInfo)
def bangumi_info(bangumiid: int,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询Bangumi详情
"""
info = BangumiChain().bangumi_info(bangumiid)
if info:
return MediaInfo(bangumi_info=info).to_dict()
else:
return schemas.MediaInfo()
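The new endpoints above can be exercised with a plain stdlib client. The base URL reflects the README's default API port (`3001`), and mounting under `/api/v1` plus the bearer-token header format are assumptions inferred from the other endpoints documented in the README:

```python
from urllib.parse import urlencode
from urllib.request import Request

def bangumi_calendar_request(base_url: str, token: str,
                             page: int = 1, count: int = 30) -> Request:
    # Build a GET request for the Bangumi daily-broadcast calendar endpoint
    qs = urlencode({"page": page, "count": count})
    return Request(f"{base_url}/api/v1/bangumi/calendar?{qs}",
                   headers={"Authorization": f"Bearer {token}"})

req = bangumi_calendar_request("http://localhost:3001", "<access-token>")
print(req.full_url)
# http://localhost:3001/api/v1/bangumi/calendar?page=1&count=30
```

Passing the request to `urllib.request.urlopen` would return the JSON list of `MediaInfo` dicts produced by `calendar()` above.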


@@ -75,18 +75,16 @@ def downloader(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询下载器信息
"""
transfer_info = DashboardChain().downloader_info()
free_space = SystemUtils.free_space(settings.SAVE_PATH)
if transfer_info:
return schemas.DownloaderInfo(
download_speed=transfer_info.download_speed,
upload_speed=transfer_info.upload_speed,
download_size=transfer_info.download_size,
upload_size=transfer_info.upload_size,
free_space=free_space
)
else:
return schemas.DownloaderInfo()
downloader_info = schemas.DownloaderInfo()
transfer_infos = DashboardChain().downloader_info()
if transfer_infos:
for transfer_info in transfer_infos:
downloader_info.download_speed += transfer_info.download_speed
downloader_info.upload_speed += transfer_info.upload_speed
downloader_info.download_size += transfer_info.download_size
downloader_info.upload_size += transfer_info.upload_size
downloader_info.free_space = SystemUtils.free_space(settings.SAVE_PATH)
return downloader_info
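The rewritten endpoint above sums counters across every configured downloader instead of reporting only the first one. A self-contained sketch of that aggregation — the field names follow the schema in the diff, while the zero defaults are an assumption:

```python
from dataclasses import dataclass
from typing import Iterable

@dataclass
class DownloaderInfo:
    # Field names mirror schemas.DownloaderInfo from the diff; 0 defaults assumed
    download_speed: float = 0
    upload_speed: float = 0
    download_size: float = 0
    upload_size: float = 0
    free_space: float = 0

def aggregate(infos: Iterable[DownloaderInfo], free_space: float) -> DownloaderInfo:
    # Sum per-downloader counters into one summary, as the endpoint now does
    total = DownloaderInfo()
    for info in infos:
        total.download_speed += info.download_speed
        total.upload_speed += info.upload_speed
        total.download_size += info.download_size
        total.upload_size += info.upload_size
    total.free_space = free_space  # disk space is measured once, not summed
    return total
```

With no downloaders configured this returns an all-zero summary, which is why the endpoint no longer needs the old `if/else` branch.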
@router.get("/downloader2", summary="下载器信息API_TOKEN", response_model=schemas.DownloaderInfo)


@@ -90,7 +90,7 @@ def movie_top250(page: int = 1,
"""
浏览豆瓣剧集信息
"""
movies = DoubanChain().movie_top250(page=page, count=count)
movies = DoubanChain().movie_top250(page=page, count=count) or []
return [MediaInfo(douban_info=movie).to_dict() for movie in movies]
@@ -101,7 +101,7 @@ def tv_weekly_chinese(page: int = 1,
"""
中国每周剧集口碑榜
"""
tvs = DoubanChain().tv_weekly_chinese(page=page, count=count)
tvs = DoubanChain().tv_weekly_chinese(page=page, count=count) or []
return [MediaInfo(douban_info=tv).to_dict() for tv in tvs]
@@ -112,7 +112,7 @@ def tv_weekly_global(page: int = 1,
"""
全球每周剧集口碑榜
"""
tvs = DoubanChain().tv_weekly_global(page=page, count=count)
tvs = DoubanChain().tv_weekly_global(page=page, count=count) or []
return [MediaInfo(douban_info=tv).to_dict() for tv in tvs]
@@ -123,7 +123,7 @@ def tv_animation(page: int = 1,
"""
热门动画剧集
"""
tvs = DoubanChain().tv_animation(page=page, count=count)
tvs = DoubanChain().tv_animation(page=page, count=count) or []
return [MediaInfo(douban_info=tv).to_dict() for tv in tvs]
@@ -134,7 +134,7 @@ def movie_hot(page: int = 1,
"""
热门电影
"""
movies = DoubanChain().movie_hot(page=page, count=count)
movies = DoubanChain().movie_hot(page=page, count=count) or []
return [MediaInfo(douban_info=movie).to_dict() for movie in movies]
@@ -145,10 +145,51 @@ def tv_hot(page: int = 1,
"""
热门电视剧
"""
tvs = DoubanChain().tv_hot(page=page, count=count)
tvs = DoubanChain().tv_hot(page=page, count=count) or []
return [MediaInfo(douban_info=tv).to_dict() for tv in tvs]
@router.get("/credits/{doubanid}/{type_name}", summary="豆瓣演员阵容", response_model=List[schemas.DoubanPerson])
def douban_credits(doubanid: str,
type_name: str,
page: int = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据豆瓣ID查询演员阵容type_name: 电影/电视剧
"""
mediatype = MediaType(type_name)
if mediatype == MediaType.MOVIE:
doubaninfos = DoubanChain().movie_credits(doubanid=doubanid, page=page)
elif mediatype == MediaType.TV:
doubaninfos = DoubanChain().tv_credits(doubanid=doubanid, page=page)
else:
return []
if not doubaninfos:
return []
else:
return [schemas.DoubanPerson(**doubaninfo) for doubaninfo in doubaninfos]
@router.get("/recommend/{doubanid}/{type_name}", summary="豆瓣推荐电影/电视剧", response_model=List[schemas.MediaInfo])
def douban_recommend(doubanid: str,
type_name: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据豆瓣ID查询推荐电影/电视剧type_name: 电影/电视剧
"""
mediatype = MediaType(type_name)
if mediatype == MediaType.MOVIE:
doubaninfos = DoubanChain().movie_recommend(doubanid=doubanid)
elif mediatype == MediaType.TV:
doubaninfos = DoubanChain().tv_recommend(doubanid=doubanid)
else:
return []
if not doubaninfos:
return []
else:
return [MediaInfo(douban_info=doubaninfo).to_dict() for doubaninfo in doubaninfos]
@router.get("/{doubanid}", summary="查询豆瓣详情", response_model=schemas.MediaInfo)
def douban_info(doubanid: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:

View File

@@ -1,6 +1,6 @@
from typing import Any, List
from fastapi import APIRouter, Depends, HTTPException
from fastapi import APIRouter, Depends
from app import schemas
from app.chain.download import DownloadChain
@@ -10,13 +10,12 @@ from app.core.metainfo import MetaInfo
from app.core.security import verify_token
from app.db.models.user import User
from app.db.userauth import get_current_active_user
from app.schemas import NotExistMediaInfo, MediaType
router = APIRouter()
@router.get("/", summary="正在下载", response_model=List[schemas.DownloadingTorrent])
def read_downloading(
def read(
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询正在下载的任务
@@ -25,11 +24,10 @@ def read_downloading(
@router.post("/", summary="添加下载", response_model=schemas.Response)
def add_downloading(
def download(
media_in: schemas.MediaInfo,
torrent_in: schemas.TorrentInfo,
current_user: User = Depends(get_current_active_user),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
current_user: User = Depends(get_current_active_user)) -> Any:
"""
添加下载任务
"""
@@ -47,49 +45,42 @@ def add_downloading(
media_info=mediainfo,
torrent_info=torrentinfo
)
did = DownloadChain().download_single(context=context, username=current_user.name)
did = DownloadChain().download_single(context=context, userid=current_user.name, username=current_user.name)
return schemas.Response(success=True if did else False, data={
"download_id": did
})
@router.post("/notexists", summary="查询缺失媒体信息", response_model=List[NotExistMediaInfo])
def exists(media_in: schemas.MediaInfo,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@router.post("/add", summary="添加下载", response_model=schemas.Response)
def add(
torrent_in: schemas.TorrentInfo,
current_user: User = Depends(get_current_active_user)) -> Any:
"""
查询缺失媒体信息
添加下载任务
"""
# 元数据
metainfo = MetaInfo(title=torrent_in.title, subtitle=torrent_in.description)
# 媒体信息
meta = MetaInfo(title=media_in.title)
mtype = MediaType(media_in.type) if media_in.type else None
if mtype:
meta.type = mtype
if media_in.season:
meta.begin_season = media_in.season
meta.type = MediaType.TV
if media_in.year:
meta.year = media_in.year
if media_in.tmdb_id or media_in.douban_id:
mediainfo = MediaChain().recognize_media(meta=meta, mtype=mtype,
tmdbid=media_in.tmdb_id, doubanid=media_in.douban_id)
else:
mediainfo = MediaChain().recognize_by_meta(metainfo=meta)
# 查询缺失信息
mediainfo = MediaChain().recognize_media(meta=metainfo)
if not mediainfo:
raise HTTPException(status_code=404, detail="媒体信息不存在")
mediakey = mediainfo.tmdb_id or mediainfo.douban_id
exist_flag, no_exists = DownloadChain().get_no_exists_info(meta=meta, mediainfo=mediainfo)
if mediainfo.type == MediaType.MOVIE:
# 电影已存在时返回空列表,不存在时返回空对象列表
return [] if exist_flag else [NotExistMediaInfo()]
elif no_exists and no_exists.get(mediakey):
# 电视剧返回缺失的剧集
return list(no_exists.get(mediakey).values())
return []
return schemas.Response(success=False, message="无法识别媒体信息")
# 种子信息
torrentinfo = TorrentInfo()
torrentinfo.from_dict(torrent_in.dict())
# 上下文
context = Context(
meta_info=metainfo,
media_info=mediainfo,
torrent_info=torrentinfo
)
did = DownloadChain().download_single(context=context, userid=current_user.name, username=current_user.name)
return schemas.Response(success=True if did else False, data={
"download_id": did
})
@router.get("/start/{hashString}", summary="开始任务", response_model=schemas.Response)
def start_downloading(
def start(
hashString: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
@@ -100,22 +91,22 @@ def start_downloading(
@router.get("/stop/{hashString}", summary="暂停任务", response_model=schemas.Response)
def stop_downloading(
def stop(
hashString: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
控制下载任务
暂停下载任务
"""
ret = DownloadChain().set_downloading(hashString, "stop")
return schemas.Response(success=True if ret else False)
@router.delete("/{hashString}", summary="删除下载任务", response_model=schemas.Response)
def remove_downloading(
def remove(
hashString: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
控制下载任务
删除下载任务
"""
ret = DownloadChain().remove_downloading(hashString)
return schemas.Response(success=True if ret else False)

View File

@@ -42,17 +42,26 @@ def delete_download_history(history_in: schemas.DownloadHistory,
def transfer_history(title: str = None,
page: int = 1,
count: int = 30,
status: bool = None,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询转移历史记录
"""
if title == "失败":
title = None
status = False
elif title == "成功":
title = None
status = True
if title:
total = TransferHistory.count_by_title(db, title)
result = TransferHistory.list_by_title(db, title, page, count)
total = TransferHistory.count_by_title(db, title=title, status=status)
result = TransferHistory.list_by_title(db, title=title, page=page,
count=count, status=status)
else:
result = TransferHistory.list_by_page(db, page, count)
total = TransferHistory.count(db)
result = TransferHistory.list_by_page(db, page=page, count=count, status=status)
total = TransferHistory.count(db, status=status)
return schemas.Response(success=True,
data={
@@ -87,7 +96,8 @@ def delete_transfer_history(history_in: schemas.TransferHistory,
eventmanager.send_event(
EventType.DownloadFileDeleted,
{
"src": history.src
"src": history.src,
"hash": history.download_hash
}
)
# 删除记录

View File

@@ -34,21 +34,24 @@ async def login_access_token(
)
if not user:
# 请求协助认证
logger.warn("登录用户本地不匹配,尝试辅助认证 ...")
logger.warn(f"登录用户 {form_data.username} 本地用户名或密码不匹配,尝试辅助认证 ...")
token = UserChain().user_authenticate(form_data.username, form_data.password)
if not token:
logger.warn(f"用户 {form_data.username} 登录失败!")
raise HTTPException(status_code=401, detail="用户名或密码不正确")
else:
logger.info(f"用户 {form_data.username} 辅助认证成功,用户信息: {token}")
logger.info(f"用户 {form_data.username} 辅助认证成功,用户信息: {token},以普通用户登录...")
# 加入用户信息表
user = User.get_by_name(db=db, name=form_data.username)
if not user:
logger.info(f"用户不存在,创建普通用户: {form_data.username}")
logger.info(f"用户不存在,创建用户: {form_data.username}")
user = User(name=form_data.username, is_active=True,
is_superuser=False, hashed_password=get_password_hash(token))
user.create(db)
else:
# 辅助验证用户若未启用,则禁止登录
if not user.is_active:
raise HTTPException(status_code=403, detail="用户未启用")
# 普通用户权限
user.is_superuser = False
elif not user.is_active:
@@ -56,7 +59,9 @@ async def login_access_token(
logger.info(f"用户 {user.name} 登录成功!")
return schemas.Token(
access_token=security.create_access_token(
user.id,
userid=user.id,
username=user.name,
super_user=user.is_superuser,
expires_delta=timedelta(minutes=settings.ACCESS_TOKEN_EXPIRE_MINUTES)
),
token_type="bearer",

View File

@@ -1,16 +1,14 @@
from pathlib import Path
from typing import List, Any
from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session
from app import schemas
from app.chain.media import MediaChain
from app.core.config import settings
from app.core.context import Context
from app.core.metainfo import MetaInfo
from app.core.metainfo import MetaInfo, MetaInfoPath
from app.core.security import verify_token, verify_uri_token
from app.db import get_db
from app.db.mediaserver_oper import MediaServerOper
from app.schemas import MediaType
router = APIRouter()
@@ -79,26 +77,26 @@ def search_by_title(title: str,
return []
@router.get("/exists", summary="本地是否存在", response_model=schemas.Response)
def exists(title: str = None,
year: int = None,
mtype: str = None,
tmdbid: int = None,
season: int = None,
db: Session = Depends(get_db),
@router.get("/scrape", summary="刮削媒体信息", response_model=schemas.Response)
def scrape(path: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
判断本地是否存在
刮削媒体信息
"""
meta = MetaInfo(title)
if not season:
season = meta.begin_season
exist = MediaServerOper(db).exists(
title=meta.name, year=year, mtype=mtype, tmdbid=tmdbid, season=season
)
return schemas.Response(success=True if exist else False, data={
"item": exist or {}
})
if not path:
return schemas.Response(success=False, message="刮削路径无效")
scrape_path = Path(path)
if not scrape_path.exists():
return schemas.Response(success=False, message="刮削路径不存在")
# 识别
chain = MediaChain()
meta = MetaInfoPath(scrape_path)
mediainfo = chain.recognize_media(meta)
if not mediainfo:
return schemas.Response(success=False, message="刮削失败,无法识别媒体信息")
# 刮削
chain.scrape_metadata(path=scrape_path, mediainfo=mediainfo, transfer_type=settings.TRANSFER_TYPE)
return schemas.Response(success=True, message="刮削完成")
@router.get("/{mediaid}", summary="查询媒体详情", response_model=schemas.MediaInfo)
@@ -108,28 +106,18 @@ def media_info(mediaid: str, type_name: str,
根据媒体ID查询themoviedb或豆瓣媒体信息,type_name: 电影/电视剧
"""
mtype = MediaType(type_name)
tmdbid, doubanid = None, None
tmdbid, doubanid, bangumiid = None, None, None
if mediaid.startswith("tmdb:"):
tmdbid = int(mediaid[5:])
elif mediaid.startswith("douban:"):
doubanid = mediaid[7:]
if not tmdbid and not doubanid:
elif mediaid.startswith("bangumi:"):
bangumiid = int(mediaid[8:])
if not tmdbid and not doubanid and not bangumiid:
return schemas.MediaInfo()
if settings.RECOGNIZE_SOURCE == "themoviedb":
if not tmdbid and doubanid:
tmdbinfo = MediaChain().get_tmdbinfo_by_doubanid(doubanid=doubanid, mtype=mtype)
if tmdbinfo:
tmdbid = tmdbinfo.get("id")
else:
return schemas.MediaInfo()
else:
if not doubanid and tmdbid:
doubaninfo = MediaChain().get_doubaninfo_by_tmdbid(tmdbid=tmdbid, mtype=mtype)
if doubaninfo:
doubanid = doubaninfo.get("id")
else:
return schemas.MediaInfo()
mediainfo = MediaChain().recognize_media(tmdbid=tmdbid, doubanid=doubanid, mtype=mtype)
# 识别
mediainfo = MediaChain().recognize_media(tmdbid=tmdbid, doubanid=doubanid, bangumiid=bangumiid, mtype=mtype)
if mediainfo:
MediaChain().obtain_images(mediainfo)
return mediainfo.to_dict()
return schemas.MediaInfo()
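The id parsing above now recognizes a third prefix. A self-contained sketch of the dispatch, including the slice offsets used in the handler:

```python
def parse_mediaid(mediaid: str):
    """Split a prefixed media id into (tmdbid, doubanid, bangumiid)."""
    tmdbid, doubanid, bangumiid = None, None, None
    if mediaid.startswith("tmdb:"):
        tmdbid = int(mediaid[5:])        # len("tmdb:") == 5
    elif mediaid.startswith("douban:"):
        doubanid = mediaid[7:]           # douban ids stay as strings
    elif mediaid.startswith("bangumi:"):
        bangumiid = int(mediaid[8:])     # len("bangumi:") == 8
    return tmdbid, doubanid, bangumiid
```

Exactly one of the three values is set; the endpoint returns an empty `MediaInfo` when all are `None`.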

View File

@@ -0,0 +1,146 @@
from typing import Any, List
from fastapi import APIRouter, Depends, HTTPException
from sqlalchemy.orm import Session
from app import schemas
from app.chain.download import DownloadChain
from app.chain.media import MediaChain
from app.chain.mediaserver import MediaServerChain
from app.core.config import settings
from app.core.metainfo import MetaInfo
from app.core.security import verify_token
from app.db import get_db
from app.db.mediaserver_oper import MediaServerOper
from app.db.models import MediaServerItem
from app.schemas import MediaType, NotExistMediaInfo
router = APIRouter()
@router.get("/play/{itemid}", summary="在线播放")
def play_item(itemid: str) -> schemas.Response:
"""
获取媒体服务器播放页面地址
"""
if not itemid:
return schemas.Response(success=False, msg="参数错误")
if not settings.MEDIASERVER:
return schemas.Response(success=False, msg="未配置媒体服务器")
mediaserver = settings.MEDIASERVER.split(",")[0]
play_url = MediaServerChain().get_play_url(server=mediaserver, item_id=itemid)
# 重定向到play_url
if not play_url:
return schemas.Response(success=False, msg="未找到播放地址")
return schemas.Response(success=True, data={
"url": play_url
})
@router.get("/exists", summary="本地是否存在", response_model=schemas.Response)
def exists(title: str = None,
year: int = None,
mtype: str = None,
tmdbid: int = None,
season: int = None,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
判断本地是否存在
"""
meta = MetaInfo(title)
if not season:
season = meta.begin_season
# 返回对象
ret_info = {}
# 本地数据库是否存在
exist: MediaServerItem = MediaServerOper(db).exists(
title=meta.name, year=year, mtype=mtype, tmdbid=tmdbid, season=season
)
if exist:
ret_info = {
"id": exist.item_id
}
"""
else:
# 服务器是否存在
mediainfo = MediaInfo()
mediainfo.from_dict({
"title": meta.name,
"year": year or meta.year,
"type": mtype or meta.type,
"tmdb_id": tmdbid,
"season": season
})
exist: schemas.ExistMediaInfo = MediaServerChain().media_exists(
mediainfo=mediainfo
)
if exist:
ret_info = {
"id": exist.itemid
}
"""
return schemas.Response(success=True if exist else False, data={
"item": ret_info
})
@router.post("/notexists", summary="查询缺失媒体信息", response_model=List[schemas.NotExistMediaInfo])
def not_exists(media_in: schemas.MediaInfo,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询缺失媒体信息
"""
# 媒体信息
meta = MetaInfo(title=media_in.title)
mtype = MediaType(media_in.type) if media_in.type else None
if mtype:
meta.type = mtype
if media_in.season:
meta.begin_season = media_in.season
meta.type = MediaType.TV
if media_in.year:
meta.year = media_in.year
if media_in.tmdb_id or media_in.douban_id:
mediainfo = MediaChain().recognize_media(meta=meta, mtype=mtype,
tmdbid=media_in.tmdb_id, doubanid=media_in.douban_id)
else:
mediainfo = MediaChain().recognize_by_meta(metainfo=meta)
# 查询缺失信息
if not mediainfo:
raise HTTPException(status_code=404, detail="媒体信息不存在")
mediakey = mediainfo.tmdb_id or mediainfo.douban_id
exist_flag, no_exists = DownloadChain().get_no_exists_info(meta=meta, mediainfo=mediainfo)
if mediainfo.type == MediaType.MOVIE:
# 电影已存在时返回空列表,不存在时返回空对象列表
return [] if exist_flag else [NotExistMediaInfo()]
elif no_exists and no_exists.get(mediakey):
# 电视剧返回缺失的剧集
return list(no_exists.get(mediakey).values())
return []
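The `/notexists` return logic treats movies and TV differently. A pure-function sketch of that branching (data shapes are simplified stand-ins for `NotExistMediaInfo`):

```python
def missing_media(is_movie, exist_flag, no_exists, mediakey):
    """Movies: empty list when present, one empty placeholder when missing.
    TV: the per-season missing-episode entries keyed by tmdb/douban id."""
    if is_movie:
        return [] if exist_flag else [{}]
    if no_exists and no_exists.get(mediakey):
        return list(no_exists[mediakey].values())
    return []
```

The empty-dict placeholder for a missing movie is what lets the frontend distinguish "present" from "absent" while keeping one response type.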
@router.get("/latest", summary="最新入库条目", response_model=List[schemas.MediaServerPlayItem])
def latest(count: int = 18,
userinfo: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
获取媒体服务器最新入库条目
"""
return MediaServerChain().latest(count=count, username=userinfo.username) or []
@router.get("/playing", summary="正在播放条目", response_model=List[schemas.MediaServerPlayItem])
def playing(count: int = 12,
userinfo: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
获取媒体服务器正在播放条目
"""
return MediaServerChain().playing(count=count, username=userinfo.username) or []
@router.get("/library", summary="媒体库列表", response_model=List[schemas.MediaServerLibrary])
def library(userinfo: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
获取媒体服务器媒体库列表
"""
return MediaServerChain().librarys(username=userinfo.username) or []

View File

@@ -1,18 +1,24 @@
import json
from typing import Union, Any, List
from fastapi import APIRouter, BackgroundTasks, Depends
from fastapi import Request
from sqlalchemy.orm import Session
from starlette.responses import PlainTextResponse
from app import schemas
from app.chain.message import MessageChain
from app.core.config import settings
from app.core.security import verify_token
from app.db import get_db
from app.db.models import User
from app.db.models.message import Message
from app.db.systemconfig_oper import SystemConfigOper
from app.db.userauth import get_current_active_superuser
from app.log import logger
from app.modules.wechat.WXBizMsgCrypt3 import WXBizMsgCrypt
from app.schemas import NotificationSwitch
from app.schemas.types import SystemConfigKey, NotificationType
from app.schemas.types import SystemConfigKey, NotificationType, MessageChannel
router = APIRouter()
@@ -36,13 +42,44 @@ async def user_message(background_tasks: BackgroundTasks, request: Request):
return schemas.Response(success=True)
@router.get("/", summary="微信验证")
@router.post("/web", summary="接收WEB消息", response_model=schemas.Response)
def web_message(text: str, current_user: User = Depends(get_current_active_superuser)):
"""
WEB消息响应
"""
MessageChain().handle_message(
channel=MessageChannel.Web,
userid=current_user.name,
username=current_user.name,
text=text
)
return schemas.Response(success=True)
@router.get("/web", summary="获取WEB消息", response_model=List[dict])
def get_web_message(_: schemas.TokenPayload = Depends(verify_token),
db: Session = Depends(get_db),
page: int = 1,
count: int = 20):
"""
获取WEB消息列表
"""
ret_messages = []
messages = Message.list_by_page(db, page=page, count=count)
for message in messages:
try:
ret_messages.append(message.to_dict())
except Exception as e:
logger.error(f"获取WEB消息列表失败: {str(e)}")
continue
return ret_messages
def wechat_verify(echostr: str, msg_signature: str,
timestamp: Union[str, int], nonce: str) -> Any:
"""
用户消息响应
微信验证响应
"""
logger.info(f"收到微信验证请求: {echostr}")
try:
wxcpt = WXBizMsgCrypt(sToken=settings.WECHAT_TOKEN,
sEncodingAESKey=settings.WECHAT_ENCODING_AESKEY,
@@ -60,6 +97,28 @@ def wechat_verify(echostr: str, msg_signature: str,
return PlainTextResponse(sEchoStr)
def vocechat_verify(token: str) -> Any:
"""
VoceChat验证响应
"""
if token == settings.API_TOKEN:
return {"status": "OK"}
return {"status": "ERROR"}
@router.get("/", summary="回调请求验证")
def incoming_verify(token: str = None, echostr: str = None, msg_signature: str = None,
timestamp: Union[str, int] = None, nonce: str = None) -> Any:
"""
微信/VoceChat等验证响应
"""
logger.info(f"收到验证请求: token={token}, echostr={echostr}, "
f"msg_signature={msg_signature}, timestamp={timestamp}, nonce={nonce}")
if echostr and msg_signature and timestamp and nonce:
return wechat_verify(echostr, msg_signature, timestamp, nonce)
return vocechat_verify(token)
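The unified `/` verification endpoint dispatches by which parameters arrive. A standalone sketch of that precedence (`API_TOKEN` here is a stand-in for `settings.API_TOKEN`, and the WeChat branch returns a marker instead of doing the real decrypt):

```python
API_TOKEN = "demo-token"  # stand-in for settings.API_TOKEN

def vocechat_verify(token):
    return {"status": "OK"} if token == API_TOKEN else {"status": "ERROR"}

def incoming_verify(token=None, echostr=None, msg_signature=None,
                    timestamp=None, nonce=None):
    """A full WeChat parameter set wins; otherwise treat the request
    as a VoceChat token check."""
    if echostr and msg_signature and timestamp and nonce:
        return {"channel": "wechat", "echostr": echostr}
    return vocechat_verify(token)
```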
@router.get("/switchs", summary="查询通知消息渠道开关", response_model=List[NotificationSwitch])
def read_switchs(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
@@ -72,7 +131,7 @@ def read_switchs(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
for noti in NotificationType:
return_list.append(NotificationSwitch(mtype=noti.value, wechat=True,
telegram=True, slack=True,
synologychat=True))
synologychat=True, vocechat=True))
else:
for switch in switchs:
return_list.append(NotificationSwitch(**switch))
@@ -83,7 +142,7 @@ def read_switchs(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
def set_switchs(switchs: List[NotificationSwitch],
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询通知消息渠道开关
设置通知消息渠道开关
"""
switch_list = []
for switch in switchs:

View File

@@ -7,39 +7,60 @@ from app.core.plugin import PluginManager
from app.core.security import verify_token
from app.db.systemconfig_oper import SystemConfigOper
from app.helper.plugin import PluginHelper
from app.scheduler import Scheduler
from app.schemas.types import SystemConfigKey
router = APIRouter()
@router.get("/", summary="所有插件", response_model=List[schemas.Plugin])
def all_plugins(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
def all_plugins(_: schemas.TokenPayload = Depends(verify_token), state: str = "all") -> Any:
"""
查询所有插件清单,包括本地插件和在线插件
查询所有插件清单,包括本地插件和在线插件,插件状态:installed, market, all
"""
plugins = []
# 本地插件
local_plugins = PluginManager().get_local_plugins()
# 已安装插件
installed_plugins = [plugin for plugin in local_plugins if plugin.get("installed")]
# 未安装的本地插件
not_installed_plugins = [plugin for plugin in local_plugins if not plugin.get("installed")]
if state == "installed":
return installed_plugins
# 在线插件
online_plugins = PluginManager().get_online_plugins()
if not online_plugins:
# 没有获取在线插件
if state == "market":
# 返回未安装的本地插件
return not_installed_plugins
return local_plugins
# 插件市场插件清单
market_plugins = []
# 已安装插件IDS
installed_ids = [plugin["id"] for plugin in local_plugins if plugin.get("installed")]
# 已经安装的本地插件
plugins.extend([plugin for plugin in local_plugins if plugin.get("installed")])
_installed_ids = [plugin["id"] for plugin in installed_plugins]
# 未安装的线上插件或者有更新的插件
for plugin in online_plugins:
if plugin["id"] not in installed_ids:
plugins.append(plugin)
if plugin["id"] not in _installed_ids:
market_plugins.append(plugin)
elif plugin.get("has_update"):
plugin["installed"] = False
plugins.append(plugin)
return plugins
market_plugins.append(plugin)
# 未安装的本地插件,且不在线上插件中
_plugin_ids = [plugin["id"] for plugin in market_plugins]
for plugin in not_installed_plugins:
if plugin["id"] not in _plugin_ids:
market_plugins.append(plugin)
# 返回插件清单
if state == "market":
# 返回未安装的插件
return market_plugins
# 返回所有插件
return installed_plugins + market_plugins
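The reworked `all_plugins` partitions plugins three ways. A dict-based sketch of the merge order (the early-return path when no online plugins are available is omitted for brevity):

```python
def list_plugins(local, online, state="all"):
    """state: 'installed' | 'market' | 'all' (mirrors the endpoint)."""
    installed = [p for p in local if p.get("installed")]
    not_installed = [p for p in local if not p.get("installed")]
    if state == "installed":
        return installed
    installed_ids = {p["id"] for p in installed}
    market = []
    for p in online:
        if p["id"] not in installed_ids:
            market.append(p)                          # not yet installed
        elif p.get("has_update"):
            market.append({**p, "installed": False})  # updatable copy
    market_ids = {p["id"] for p in market}
    # local-only plugins that never appeared online
    market += [p for p in not_installed if p["id"] not in market_ids]
    return market if state == "market" else installed + market
```

Note an installed plugin with an update appears twice in `all`: once installed, once as a market entry flagged `installed: False`.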
@router.get("/installed", summary="已安装插件", response_model=List[str])
def installed_plugins(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
def installed(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询用户已安装插件清单
"""
@@ -47,10 +68,10 @@ def installed_plugins(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@router.get("/install/{plugin_id}", summary="安装插件", response_model=schemas.Response)
def install_plugin(plugin_id: str,
repo_url: str = "",
force: bool = False,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
def install(plugin_id: str,
repo_url: str = "",
force: bool = False,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
安装插件
"""
@@ -62,7 +83,7 @@ def install_plugin(plugin_id: str,
state, msg = PluginHelper().install(pid=plugin_id, repo_url=repo_url)
if not state:
# 安装失败
return schemas.Response(success=False, msg=msg)
return schemas.Response(success=False, message=msg)
# 安装插件
if plugin_id not in install_plugins:
install_plugins.append(plugin_id)
@@ -70,6 +91,8 @@ def install_plugin(plugin_id: str,
SystemConfigOper().set(SystemConfigKey.UserInstalledPlugins, install_plugins)
# 重载插件管理器
PluginManager().init_config()
# 注册插件服务
Scheduler().update_plugin_job(plugin_id)
return schemas.Response(success=True)
@@ -89,11 +112,25 @@ def plugin_form(plugin_id: str,
@router.get("/page/{plugin_id}", summary="获取插件数据页面")
def plugin_page(plugin_id: str, _: schemas.TokenPayload = Depends(verify_token)) -> List[dict]:
"""
根据插件ID获取插件配置信息
根据插件ID获取插件数据页面
"""
return PluginManager().get_plugin_page(plugin_id)
@router.get("/reset/{plugin_id}", summary="重置插件配置", response_model=schemas.Response)
def reset_plugin(plugin_id: str, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据插件ID重置插件配置
"""
# 删除配置
PluginManager().delete_plugin_config(plugin_id)
# 重新生效插件
PluginManager().reload_plugin(plugin_id, {})
# 注册插件服务
Scheduler().update_plugin_job(plugin_id)
return schemas.Response(success=True)
@router.get("/{plugin_id}", summary="获取插件配置")
def plugin_config(plugin_id: str, _: schemas.TokenPayload = Depends(verify_token)) -> dict:
"""
@@ -106,12 +143,14 @@ def plugin_config(plugin_id: str, _: schemas.TokenPayload = Depends(verify_token
def set_plugin_config(plugin_id: str, conf: dict,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据插件ID获取插件配置信息
更新插件配置
"""
# 保存配置
PluginManager().save_plugin_config(plugin_id, conf)
# 重新生效插件
PluginManager().reload_plugin(plugin_id, conf)
# 注册插件服务
Scheduler().update_plugin_job(plugin_id)
return schemas.Response(success=True)
@@ -131,6 +170,8 @@ def uninstall_plugin(plugin_id: str,
SystemConfigOper().set(SystemConfigKey.UserInstalledPlugins, install_plugins)
# 重载插件管理器
PluginManager().init_config()
# 移除插件服务
Scheduler().remove_plugin_job(plugin_id)
return schemas.Response(success=True)

View File

@@ -52,6 +52,20 @@ def search_by_id(mediaid: str,
mtype=mtype, area=area)
else:
torrents = SearchChain().search_by_id(doubanid=doubanid, mtype=mtype, area=area)
elif mediaid.startswith("bangumi:"):
bangumiid = int(mediaid.replace("bangumi:", ""))
if settings.RECOGNIZE_SOURCE == "themoviedb":
# 通过BangumiID识别TMDBID
tmdbinfo = MediaChain().get_tmdbinfo_by_bangumiid(bangumiid=bangumiid)
if tmdbinfo:
torrents = SearchChain().search_by_id(tmdbid=tmdbinfo.get("id"),
mtype=mtype, area=area)
else:
# 通过BangumiID识别豆瓣ID
doubaninfo = MediaChain().get_doubaninfo_by_bangumiid(bangumiid=bangumiid)
if doubaninfo:
torrents = SearchChain().search_by_id(doubanid=doubaninfo.get("id"),
mtype=mtype, area=area)
else:
return []
return [torrent.to_dict() for torrent in torrents]

View File

@@ -50,10 +50,17 @@ def add_site(
return schemas.Response(success=False, message=f"{domain} 站点已存在")
# 保存站点信息
site_in.domain = domain
# 校正地址格式
_scheme, _netloc = StringUtils.get_url_netloc(site_in.url)
site_in.url = f"{_scheme}://{_netloc}/"
site_in.name = site_info.get("name")
site_in.id = None
site = Site(**site_in.dict())
site.create(db)
# 通知缓存站点图标
EventManager().send_event(EventType.CacheSiteIcon, {
"domain": domain
})
return schemas.Response(success=True)
@@ -70,7 +77,14 @@ def update_site(
site = Site.get(db, site_in.id)
if not site:
return schemas.Response(success=False, message="站点不存在")
# 校正地址格式
_scheme, _netloc = StringUtils.get_url_netloc(site_in.url)
site_in.url = f"{_scheme}://{_netloc}/"
site.update(db, site_in.dict())
# 通知缓存站点图标
EventManager().send_event(EventType.CacheSiteIcon, {
"domain": site_in.domain
})
return schemas.Response(success=True)
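The "校正地址格式" step keeps only the scheme and host and forces a trailing slash. A stdlib approximation of `StringUtils.get_url_netloc` plus the reassembly used above:

```python
from urllib.parse import urlsplit

def normalize_site_url(url: str) -> str:
    """Drop path/query, keep scheme://host/, trailing slash enforced."""
    parts = urlsplit(url)
    return f"{parts.scheme}://{parts.netloc}/"
```

This makes stored site URLs canonical regardless of whether the user pasted a deep link or a bare domain.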
@@ -103,8 +117,8 @@ def cookie_cloud_sync(background_tasks: BackgroundTasks,
@router.get("/reset", summary="重置站点", response_model=schemas.Response)
def cookie_cloud_sync(db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
def reset(db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
清空所有站点数据并重新同步CookieCloud站点信息
"""
@@ -126,6 +140,7 @@ def update_cookie(
site_id: int,
username: str,
password: str,
code: str = None,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
@@ -141,7 +156,8 @@ def update_cookie(
# 更新Cookie
state, message = SiteChain().update_cookie(site_info=site_info,
username=username,
password=password)
password=password,
two_step_code=code)
return schemas.Response(success=state, message=message)
@@ -228,10 +244,11 @@ def read_rss_sites(db: Session = Depends(get_db)) -> List[dict]:
"""
# 选中的rss站点
selected_sites = SystemConfigOper().get(SystemConfigKey.RssSites) or []
# 所有站点
all_site = Site.list_order_by_pri(db)
if not selected_sites or not all_site:
return []
if not selected_sites:
return all_site
# 选中的rss站点
rss_sites = [site for site in all_site if site and site.id in selected_sites]

View File

@@ -65,7 +65,7 @@ def create_subscribe(
else:
mtype = None
# 豆瓣标题处理
if subscribe_in.doubanid:
if subscribe_in.doubanid or subscribe_in.bangumiid:
meta = MetaInfo(subscribe_in.name)
subscribe_in.name = meta.name
subscribe_in.season = meta.begin_season
@@ -80,12 +80,15 @@ def create_subscribe(
tmdbid=subscribe_in.tmdbid,
season=subscribe_in.season,
doubanid=subscribe_in.doubanid,
bangumiid=subscribe_in.bangumiid,
username=current_user.name,
best_version=subscribe_in.best_version,
save_path=subscribe_in.save_path,
search_imdbid=subscribe_in.search_imdbid,
exist_ok=True)
return schemas.Response(success=True if sid else False, message=message, data={
"id": sid
})
return schemas.Response(
success=bool(sid), message=message, data={"id": sid}
)
@router.put("/", summary="更新订阅", response_model=schemas.Response)
@@ -114,6 +117,9 @@ def update_subscribe(
subscribe_dict["lack_episode"] = (subscribe.lack_episode
+ (subscribe_in.total_episode
- (subscribe.total_episode or 0)))
# 是否手动修改过总集数
if subscribe_in.total_episode != subscribe.total_episode:
subscribe_dict["manual_total_episode"] = 1
subscribe.update(db, subscribe_dict)
return schemas.Response(success=True)
@@ -122,11 +128,14 @@ def update_subscribe(
def subscribe_mediaid(
mediaid: str,
season: int = None,
title: str = None,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据TMDBID豆瓣ID查询订阅 tmdb:/douban:
根据 TMDBID/豆瓣ID/BangumiID 查询订阅 tmdb:/douban:/bangumi:
"""
result = None
title_check = False
if mediaid.startswith("tmdb:"):
tmdbid = mediaid[5:]
if not tmdbid or not str(tmdbid).isdigit():
@@ -137,8 +146,21 @@ def subscribe_mediaid(
if not doubanid:
return Subscribe()
result = Subscribe.get_by_doubanid(db, doubanid)
else:
result = None
if not result and title:
title_check = True
elif mediaid.startswith("bangumi:"):
bangumiid = mediaid[8:]
if not bangumiid or not str(bangumiid).isdigit():
return Subscribe()
result = Subscribe.get_by_bangumiid(db, int(bangumiid))
if not result and title:
title_check = True
# 使用名称检查订阅
if title_check and title:
meta = MetaInfo(title)
if season:
meta.begin_season = season
result = Subscribe.get_by_title(db, title=meta.name, season=meta.begin_season)
if result and result.sites:
result.sites = json.loads(result.sites)
@@ -257,7 +279,7 @@ def delete_subscribe(
async def seerr_subscribe(request: Request, background_tasks: BackgroundTasks,
authorization: str = Header(None)) -> Any:
"""
Jellyseerr/Overseerr订阅
Jellyseerr/Overseerr网络钩子(Webhook)通知订阅
"""
if not authorization or authorization != settings.API_TOKEN:
raise HTTPException(

View File

@@ -1,15 +1,17 @@
import json
import time
from datetime import datetime
from typing import Union
from typing import Union, Any
import tailer
from fastapi import APIRouter, HTTPException, Depends
from dotenv import set_key
from fastapi import APIRouter, HTTPException, Depends, Response
from fastapi.responses import StreamingResponse
from app import schemas
from app.chain.search import SearchChain
from app.core.config import settings
from app.core.module import ModuleManager
from app.core.security import verify_token
from app.db.systemconfig_oper import SystemConfigOper
from app.helper.message import MessageHelper
@@ -24,13 +26,29 @@ from version import APP_VERSION
router = APIRouter()
@router.get("/img/{imgurl:path}/{proxy}", summary="图片代理")
def get_img(imgurl: str, proxy: bool = False) -> Any:
"""
图片代理,proxy 为 True 时经代理服务器请求
"""
if not imgurl:
return None
if proxy:
response = RequestUtils(ua=settings.USER_AGENT, proxies=settings.PROXY).get_res(url=imgurl)
else:
response = RequestUtils(ua=settings.USER_AGENT).get_res(url=imgurl)
if response:
return Response(content=response.content, media_type="image/jpeg")
return None
@router.get("/env", summary="查询系统环境变量", response_model=schemas.Response)
def get_env_setting(_: schemas.TokenPayload = Depends(verify_token)):
"""
查询系统环境变量,包括当前版本号
"""
info = settings.dict(
exclude={"SECRET_KEY", "SUPERUSER_PASSWORD", "API_TOKEN"}
exclude={"SECRET_KEY", "SUPERUSER_PASSWORD"}
)
info.update({
"VERSION": APP_VERSION,
@@ -41,6 +59,27 @@ def get_env_setting(_: schemas.TokenPayload = Depends(verify_token)):
data=info)
@router.post("/env", summary="更新系统环境变量", response_model=schemas.Response)
def set_env_setting(env: dict,
_: schemas.TokenPayload = Depends(verify_token)):
"""
更新系统环境变量
"""
for k, v in env.items():
if k == "undefined":
continue
if hasattr(settings, k):
if v == "None":
v = None
setattr(settings, k, v)
if v is None:
v = ''
else:
v = str(v)
set_key(settings.CONFIG_PATH / "app.env", k, v)
return schemas.Response(success=True)
@router.get("/progress/{process_type}", summary="实时进度")
def get_progress(process_type: str, token: str):
"""
@@ -69,23 +108,37 @@ def get_setting(key: str,
"""
查询系统设置
"""
if hasattr(settings, key):
value = getattr(settings, key)
else:
value = SystemConfigOper().get(key)
return schemas.Response(success=True, data={
"value": SystemConfigOper().get(key)
"value": value
})
@router.post("/setting/{key}", summary="更新系统设置", response_model=schemas.Response)
def set_setting(key: str, value: Union[list, dict, str, int] = None,
def set_setting(key: str, value: Union[list, dict, bool, int, str] = None,
_: schemas.TokenPayload = Depends(verify_token)):
"""
更新系统设置
"""
SystemConfigOper().set(key, value)
if hasattr(settings, key):
if value == "None":
value = None
setattr(settings, key, value)
if value is None:
value = ''
else:
value = str(value)
set_key(settings.CONFIG_PATH / "app.env", key, value)
else:
SystemConfigOper().set(key, value)
return schemas.Response(success=True)
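`set_setting` now routes each key to one of two stores: env-backed settings are normalized and mirrored into `app.env` via `set_key`, everything else goes to the system-config table. A dict-backed sketch of that branching (both stores are plain dicts here; the real code uses `settings` and `SystemConfigOper`):

```python
def persist_setting(settings, db_store, key, value):
    """Return (store, stored_value) to show where the key landed."""
    if key in settings:
        # env-backed: "None" string means clear, and the env file
        # only holds strings
        if value == "None":
            value = None
        settings[key] = value
        return ("env", "" if value is None else str(value))
    db_store[key] = value
    return ("db", value)
```

The string/`None` dance exists because `.env` files cannot represent typed values, while the DB store keeps lists and dicts intact.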
@router.get("/message", summary="实时消息")
def get_message(token: str):
def get_message(token: str, role: str = "sys"):
"""
实时获取系统消息,返回格式为SSE
"""
@@ -99,7 +152,7 @@ def get_message(token: str):
def event_generator():
while True:
detail = message.get()
detail = message.get(role)
yield 'data: %s\n\n' % (detail or '')
time.sleep(3)
@@ -107,9 +160,11 @@ def get_message(token: str):
@router.get("/logging", summary="实时日志")
def get_logging(token: str):
def get_logging(token: str, length: int = 50, logfile: str = "moviepilot.log"):
"""
实时获取系统日志返回格式为SSE
实时获取系统日志
length = -1 时,返回全部日志(text/plain)
否则,返回格式为 SSE
"""
if not token or not verify_token(token):
raise HTTPException(
@@ -117,45 +172,29 @@ def get_logging(token: str):
detail="认证失败!",
)
log_path = settings.LOG_PATH / logfile
def log_generator():
log_path = settings.LOG_PATH / 'moviepilot.log'
# 读取文件末尾50行不使用tailer模块
with open(log_path, 'r', encoding='utf-8') as f:
for line in f.readlines()[-50:]:
for line in f.readlines()[-max(length, 50):]:
yield 'data: %s\n\n' % line
while True:
for text in tailer.follow(open(log_path, 'r', encoding='utf-8')):
yield 'data: %s\n\n' % (text or '')
for t in tailer.follow(open(log_path, 'r', encoding='utf-8')):
yield 'data: %s\n\n' % (t or '')
time.sleep(1)
return StreamingResponse(log_generator(), media_type="text/event-stream")
@router.get("/nettest", summary="测试网络连通性")
def nettest(url: str,
proxy: bool,
_: schemas.TokenPayload = Depends(verify_token)):
"""
测试网络连通性
"""
# 记录开始的毫秒数
start_time = datetime.now()
url = url.replace("{TMDBAPIKEY}", settings.TMDB_API_KEY)
result = RequestUtils(proxies=settings.PROXY if proxy else None,
ua=settings.USER_AGENT).get_res(url)
# 计时结束的毫秒数
end_time = datetime.now()
# 计算相关秒数
if result and result.status_code == 200:
return schemas.Response(success=True, data={
"time": round((end_time - start_time).microseconds / 1000)
})
elif result:
return schemas.Response(success=False, message=f"错误码:{result.status_code}", data={
"time": round((end_time - start_time).microseconds / 1000)
})
else:
return schemas.Response(success=False, message="网络连接失败!")
# 根据length参数返回不同的响应
if length == -1:
# 返回全部日志作为文本响应
if not log_path.exists():
return Response(content="日志文件不存在!", media_type="text/plain")
with open(log_path, 'r', encoding='utf-8') as file:
text = file.read()
return Response(content=text, media_type="text/plain")
else:
# 返回SSE流响应
return StreamingResponse(log_generator(), media_type="text/event-stream")
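The tail slice in the new `log_generator` clamps the requested line count to at least 50, so small `length` values still yield a useful backlog. A quick sketch of that slice behavior:

```python
# Mirror of the tail slice used in log_generator:
# readlines()[-max(length, 50):] always yields at least the last 50 lines.
lines = [f"line {i}\n" for i in range(100)]

assert len(lines[-max(20, 50):]) == 50   # small requests padded up to 50
assert len(lines[-max(80, 50):]) == 80   # larger requests honored as-is
```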
@router.get("/versions", summary="查询Github所有Release版本", response_model=schemas.Response)
@@ -163,7 +202,8 @@ def latest_version(_: schemas.TokenPayload = Depends(verify_token)):
"""
查询Github所有Release版本
"""
version_res = RequestUtils().get_res(f"https://api.github.com/repos/jxxghp/MoviePilot/releases")
version_res = RequestUtils(proxies=settings.PROXY, headers=settings.GITHUB_HEADERS).get_res(
f"https://api.github.com/repos/jxxghp/MoviePilot/releases")
if version_res:
ver_json = version_res.json()
if ver_json:
@@ -202,6 +242,53 @@ def ruletest(title: str,
})
@router.get("/nettest", summary="测试网络连通性")
def nettest(url: str,
proxy: bool,
_: schemas.TokenPayload = Depends(verify_token)):
"""
测试网络连通性
"""
# 记录开始的毫秒数
start_time = datetime.now()
url = url.replace("{TMDBAPIKEY}", settings.TMDB_API_KEY)
result = RequestUtils(proxies=settings.PROXY if proxy else None,
ua=settings.USER_AGENT).get_res(url)
# 计时结束的毫秒数
end_time = datetime.now()
# 计算相关秒数
if result and result.status_code == 200:
return schemas.Response(success=True, data={
"time": round((end_time - start_time).microseconds / 1000)
})
elif result:
return schemas.Response(success=False, message=f"错误码:{result.status_code}", data={
"time": round((end_time - start_time).microseconds / 1000)
})
else:
return schemas.Response(success=False, message="网络连接失败!")
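One subtlety in the timing above: `timedelta.microseconds` is only the sub-second component of the elapsed time, while `total_seconds()` reflects the full duration. A small sketch of the difference:

```python
from datetime import timedelta

delta = timedelta(seconds=2, microseconds=500_000)

# .microseconds is just the fractional-second component ...
assert delta.microseconds == 500_000
# ... whereas total_seconds() covers durations longer than one second.
assert delta.total_seconds() * 1000 == 2500.0
```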
@router.get("/modulelist", summary="查询已加载的模块ID列表", response_model=schemas.Response)
def modulelist(_: schemas.TokenPayload = Depends(verify_token)):
"""
查询已加载的模块ID列表
"""
module_ids = [module.__name__ for module in ModuleManager().get_modules("test")]
return schemas.Response(success=True, data={
"ids": module_ids
})
@router.get("/moduletest/{moduleid}", summary="模块可用性测试", response_model=schemas.Response)
def moduletest(moduleid: str, _: schemas.TokenPayload = Depends(verify_token)):
"""
模块可用性测试接口
"""
state, errmsg = ModuleManager().test(moduleid)
return schemas.Response(success=state, message=errmsg)
@router.get("/restart", summary="重启系统", response_model=schemas.Response)
def restart_system(_: schemas.TokenPayload = Depends(verify_token)):
"""
@@ -214,6 +301,16 @@ def restart_system(_: schemas.TokenPayload = Depends(verify_token)):
return schemas.Response(success=ret, message=msg)
@router.get("/reload", summary="重新加载模块", response_model=schemas.Response)
def reload_module(_: schemas.TokenPayload = Depends(verify_token)):
"""
重新加载模块
"""
ModuleManager().reload()
Scheduler().init()
return schemas.Response(success=True)
@router.get("/runscheduler", summary="运行服务", response_model=schemas.Response)
def execute_command(jobid: str,
_: schemas.TokenPayload = Depends(verify_token)):


@@ -37,7 +37,7 @@ def manual_transfer(path: str = None,
:param type_name: 媒体类型、电影/电视剧
:param tmdbid: tmdbid
:param season: 剧集季号
:param transfer_type: 转移类型move/copy
:param transfer_type: 转移类型move/copy
:param episode_format: 剧集识别格式
:param episode_detail: 剧集识别详细信息
:param episode_part: 剧集识别分集信息
@@ -47,31 +47,34 @@ def manual_transfer(path: str = None,
:param _: Token校验
"""
force = False
target = Path(target) if target else None
transfer = TransferChain()
if logid:
# 查询历史记录
history = TransferHistory.get(db, logid)
history: TransferHistory = TransferHistory.get(db, logid)
if not history:
return schemas.Response(success=False, message=f"历史记录不存在ID{logid}")
# 强制转移
force = True
# 源路径
in_path = Path(history.src)
# 目的路径
if history.dest and str(history.dest) != "None":
# 删除旧的已整理文件
TransferChain().delete_files(Path(history.dest))
if not target:
target = history.dest
if history.status and ("move" in history.mode):
# 重新整理成功的转移,则使用成功的 dest 做 in_path
in_path = Path(history.dest)
else:
# 源路径
in_path = Path(history.src)
# 目的路径
if history.dest and str(history.dest) != "None":
# 删除旧的已整理文件
transfer.delete_files(Path(history.dest))
if not target:
target = transfer.get_root_path(path=history.dest,
type_name=history.type,
category=history.category)
elif path:
in_path = Path(path)
else:
return schemas.Response(success=False, message=f"缺少参数path/logid")
if target and target != "None":
target = Path(target)
else:
target = None
# 类型
mtype = MediaType(type_name) if type_name else None
# 自定义格式
@@ -84,7 +87,7 @@ def manual_transfer(path: str = None,
offset=episode_offset,
)
# 开始转移
state, errormsg = TransferChain().manual_transfer(
state, errormsg = transfer.manual_transfer(
in_path=in_path,
target=target,
tmdbid=tmdbid,


@@ -1,4 +1,5 @@
import base64
import re
from typing import Any, List
from fastapi import APIRouter, Depends, HTTPException, UploadFile, File
@@ -59,6 +60,10 @@ def update_user(
"""
user_info = user_in.dict()
if user_info.get("password"):
# 正则表达式匹配密码包含字母、数字、特殊字符中的至少两项
pattern = r'^(?![a-zA-Z]+$)(?!\d+$)(?![^\da-zA-Z\s]+$).{6,50}$'
if not re.match(pattern, user_info.get("password")):
return schemas.Response(success=False, message="密码需要同时包含字母、数字、特殊字符中的至少两项且长度大于6位")
user_info["hashed_password"] = get_password_hash(user_info["password"])
user_info.pop("password")
user = User.get_by_name(db, name=user_info["name"])
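The password pattern added above rejects passwords drawn from a single character class. A sanity-check sketch of the same regex (the `is_valid_password` wrapper and sample passwords are illustrative, not part of the codebase):

```python
import re

# Same pattern as update_user: 6-50 chars, and not purely letters,
# purely digits, or purely special characters.
PATTERN = r'^(?![a-zA-Z]+$)(?!\d+$)(?![^\da-zA-Z\s]+$).{6,50}$'

def is_valid_password(pw: str) -> bool:
    return re.match(PATTERN, pw) is not None

assert is_valid_password("abc123")      # letters + digits
assert not is_valid_password("abcdef")  # letters only
assert not is_valid_password("123456")  # digits only
assert not is_valid_password("a1")      # shorter than 6 chars
```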


@@ -4,7 +4,6 @@ from fastapi import APIRouter, BackgroundTasks, Request, Depends
from app import schemas
from app.chain.webhook import WebhookChain
from app.core.config import settings
from app.core.security import verify_uri_token
router = APIRouter()

app/api/servcookie.py (new file, 137 lines)

@@ -0,0 +1,137 @@
import gzip
import json
from hashlib import md5
from typing import Annotated, Callable
from typing import Any, Dict, Optional
from fastapi import APIRouter, Depends, HTTPException, Path, Request, Response
from fastapi.responses import PlainTextResponse
from fastapi.routing import APIRoute
from app import schemas
from app.core.config import settings
from app.log import logger
from app.utils.common import decrypt
class GzipRequest(Request):
async def body(self) -> bytes:
if not hasattr(self, "_body"):
body = await super().body()
if "gzip" in self.headers.getlist("Content-Encoding"):
body = gzip.decompress(body)
self._body = body
return self._body
class GzipRoute(APIRoute):
def get_route_handler(self) -> Callable:
original_route_handler = super().get_route_handler()
async def custom_route_handler(request: Request) -> Response:
request = GzipRequest(request.scope, request.receive)
return await original_route_handler(request)
return custom_route_handler
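`GzipRequest` transparently decompresses request bodies sent with `Content-Encoding: gzip`. A sketch of how a client would build such a body (the payload values are placeholders):

```python
import gzip
import json

# Build a body the way a CookieCloud client would for this route:
# JSON bytes, gzip-compressed, sent with the "Content-Encoding: gzip" header.
payload = json.dumps({"uuid": "abcde", "encrypted": "<ciphertext>"}).encode("utf-8")
body = gzip.compress(payload)

# GzipRequest.body() reverses exactly this step before FastAPI parses the JSON
assert gzip.decompress(body) == payload
```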
async def verify_server_enabled():
"""
校验CookieCloud服务路由是否打开
"""
if not settings.COOKIECLOUD_ENABLE_LOCAL:
raise HTTPException(status_code=400, detail="本地CookieCloud服务器未启用")
return True
cookie_router = APIRouter(route_class=GzipRoute,
tags=['servcookie'],
dependencies=[Depends(verify_server_enabled)])
@cookie_router.get("/", response_class=PlainTextResponse)
def get_root():
return "Hello MoviePilot! COOKIECLOUD API ROOT = /cookiecloud"
@cookie_router.post("/", response_class=PlainTextResponse)
def post_root():
return "Hello MoviePilot! COOKIECLOUD API ROOT = /cookiecloud"
@cookie_router.post("/update")
async def update_cookie(req: schemas.CookieData):
"""
上传Cookie数据
"""
file_path = settings.COOKIE_PATH / f"{req.uuid}.json"
content = json.dumps({"encrypted": req.encrypted})
with open(file_path, encoding="utf-8", mode="w") as file:
file.write(content)
with open(file_path, encoding="utf-8", mode="r") as file:
read_content = file.read()
if read_content == content:
return {"action": "done"}
else:
return {"action": "error"}
def load_encrypt_data(uuid: str) -> Dict[str, Any]:
"""
加载本地加密原始数据
"""
file_path = settings.COOKIE_PATH / f"{uuid}.json"
# 检查文件是否存在
if not file_path.exists():
raise HTTPException(status_code=404, detail="Item not found")
# 读取文件
with open(file_path, encoding="utf-8", mode="r") as file:
read_content = file.read()
data = json.loads(read_content.encode("utf-8"))
return data
def get_decrypted_cookie_data(uuid: str, password: str,
encrypted: str) -> Optional[Dict[str, Any]]:
"""
加载本地加密数据并解密为Cookie
"""
key_md5 = md5()
key_md5.update((uuid + '-' + password).encode('utf-8'))
aes_key = (key_md5.hexdigest()[:16]).encode('utf-8')
if encrypted:
try:
decrypted_data = decrypt(encrypted, aes_key).decode('utf-8')
decrypted_data = json.loads(decrypted_data)
if 'cookie_data' in decrypted_data:
return decrypted_data
except Exception as e:
logger.error(f"解密Cookie数据失败{str(e)}")
return None
else:
return None
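The AES key derivation in `get_decrypted_cookie_data` follows the CookieCloud convention: the first 16 hex characters of `md5("{uuid}-{password}")`, used as a 16-byte key. A standalone sketch (the `derive_aes_key` helper name is illustrative):

```python
from hashlib import md5

def derive_aes_key(uuid: str, password: str) -> bytes:
    # Same scheme as get_decrypted_cookie_data: first 16 hex chars
    # of md5("{uuid}-{password}"), encoded as bytes.
    digest = md5((uuid + '-' + password).encode('utf-8')).hexdigest()
    return digest[:16].encode('utf-8')

key = derive_aes_key("myuuid", "secret")
assert len(key) == 16                       # AES-128-sized key
assert key == derive_aes_key("myuuid", "secret")  # deterministic
```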
@cookie_router.get("/get/{uuid}")
async def get_cookie(
uuid: Annotated[str, Path(min_length=5, pattern="^[a-zA-Z0-9]+$")]):
"""
GET 下载加密数据
"""
return load_encrypt_data(uuid)
@cookie_router.post("/get/{uuid}")
async def post_cookie(
uuid: Annotated[str, Path(min_length=5, pattern="^[a-zA-Z0-9]+$")],
request: schemas.CookiePassword):
"""
POST 下载加密数据
"""
data = load_encrypt_data(uuid)
return get_decrypted_cookie_data(uuid, request.password, data["encrypted"])


@@ -15,6 +15,8 @@ from app.core.context import MediaInfo, TorrentInfo
from app.core.event import EventManager
from app.core.meta import MetaBase
from app.core.module import ModuleManager
from app.db.message_oper import MessageOper
from app.helper.message import MessageHelper
from app.log import logger
from app.schemas import TransferInfo, TransferTorrent, ExistMediaInfo, DownloadingTorrent, CommingMessage, Notification, \
WebhookEventInfo, TmdbEpisode
@@ -33,6 +35,8 @@ class ChainBase(metaclass=ABCMeta):
"""
self.modulemanager = ModuleManager()
self.eventmanager = EventManager()
self.messageoper = MessageOper()
self.messagehelper = MessageHelper()
@staticmethod
def load_cache(filename: str) -> Any:
@@ -108,19 +112,23 @@ class ChainBase(metaclass=ABCMeta):
break
except Exception as err:
logger.error(
f"运行模块 {method} 出错:{module.__class__.__name__} - {str(err)}\n{traceback.print_exc()}")
f"运行模块 {method} 出错:{module.__class__.__name__} - {str(err)}\n{traceback.format_exc()}")
return result
def recognize_media(self, meta: MetaBase = None,
mtype: MediaType = None,
tmdbid: int = None,
doubanid: str = None) -> Optional[MediaInfo]:
doubanid: str = None,
bangumiid: int = None,
cache: bool = True) -> Optional[MediaInfo]:
"""
识别媒体信息
:param meta: 识别的元数据
:param mtype: 识别的媒体类型与tmdbid配套
:param tmdbid: tmdbid
:param doubanid: 豆瓣ID
:param bangumiid: BangumiID
:param cache: 是否使用缓存
:return: 识别的媒体信息,包括剧集信息
"""
# 识别用名中含指定信息情形
@@ -130,8 +138,12 @@ class ChainBase(metaclass=ABCMeta):
tmdbid = meta.tmdbid
if not doubanid and hasattr(meta, "doubanid"):
doubanid = meta.doubanid
# 有tmdbid时不使用其它ID
if tmdbid:
doubanid = None
bangumiid = None
return self.run_module("recognize_media", meta=meta, mtype=mtype,
tmdbid=tmdbid, doubanid=doubanid)
tmdbid=tmdbid, doubanid=doubanid, bangumiid=bangumiid, cache=cache)
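The precedence rule added above (a TMDB ID suppresses the douban and Bangumi IDs) can be sketched in isolation (`pick_ids` is a hypothetical helper, not part of the chain):

```python
def pick_ids(tmdbid=None, doubanid=None, bangumiid=None):
    # Mirror of recognize_media: when tmdbid is present, other IDs are ignored.
    if tmdbid:
        doubanid = None
        bangumiid = None
    return tmdbid, doubanid, bangumiid

assert pick_ids(tmdbid=550, doubanid="1292052") == (550, None, None)
assert pick_ids(doubanid="1292052") == (None, "1292052", None)
```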
def match_doubaninfo(self, name: str, imdbid: str = None,
mtype: MediaType = None, year: str = None, season: int = None) -> Optional[dict]:
@@ -208,6 +220,14 @@ class ChainBase(metaclass=ABCMeta):
"""
return self.run_module("tmdb_info", tmdbid=tmdbid, mtype=mtype)
def bangumi_info(self, bangumiid: int) -> Optional[dict]:
"""
获取Bangumi信息
:param bangumiid: int
:return: Bangumi信息
"""
return self.run_module("bangumi_info", bangumiid=bangumiid)
def message_parser(self, body: Any, form: Any,
args: Any) -> Optional[CommingMessage]:
"""
@@ -280,7 +300,8 @@ class ChainBase(metaclass=ABCMeta):
mediainfo=mediainfo)
def download(self, content: Union[Path, str], download_dir: Path, cookie: str,
episodes: Set[int] = None, category: str = None
episodes: Set[int] = None, category: str = None,
downloader: str = settings.DEFAULT_DOWNLOADER
) -> Optional[Tuple[Optional[str], str]]:
"""
根据种子文件,选择并添加下载任务
@@ -289,10 +310,12 @@ class ChainBase(metaclass=ABCMeta):
:param cookie: cookie
:param episodes: 需要下载的集数
:param category: 种子分类
:param downloader: 下载器
:return: 种子Hash错误信息
"""
return self.run_module("download", content=content, download_dir=download_dir,
cookie=cookie, episodes=episodes, category=category)
cookie=cookie, episodes=episodes, category=category,
downloader=downloader)
def download_added(self, context: Context, download_dir: Path, torrent_path: Path = None) -> None:
"""
@@ -306,14 +329,17 @@ class ChainBase(metaclass=ABCMeta):
download_dir=download_dir)
def list_torrents(self, status: TorrentStatus = None,
hashs: Union[list, str] = None) -> Optional[List[Union[TransferTorrent, DownloadingTorrent]]]:
hashs: Union[list, str] = None,
downloader: str = settings.DEFAULT_DOWNLOADER
) -> Optional[List[Union[TransferTorrent, DownloadingTorrent]]]:
"""
获取下载器种子列表
:param status: 种子状态
:param hashs: 种子Hash
:param downloader: 下载器
:return: 下载器中符合状态的种子列表
"""
return self.run_module("list_torrents", status=status, hashs=hashs)
return self.run_module("list_torrents", status=status, hashs=hashs, downloader=downloader)
def transfer(self, path: Path, meta: MetaBase, mediainfo: MediaInfo,
transfer_type: str, target: Path = None,
@@ -329,48 +355,56 @@ class ChainBase(metaclass=ABCMeta):
:return: {path, target_path, message}
"""
return self.run_module("transfer", path=path, meta=meta, mediainfo=mediainfo,
transfer_type=transfer_type, target=target,
episodes_info=episodes_info)
transfer_type=transfer_type, target=target, episodes_info=episodes_info)
def transfer_completed(self, hashs: Union[str, list], path: Path = None) -> None:
def transfer_completed(self, hashs: Union[str, list], path: Path = None,
downloader: str = settings.DEFAULT_DOWNLOADER) -> None:
"""
转移完成后的处理
:param hashs: 种子Hash
:param path: 源目录
:param downloader: 下载器
"""
return self.run_module("transfer_completed", hashs=hashs, path=path)
return self.run_module("transfer_completed", hashs=hashs, path=path, downloader=downloader)
def remove_torrents(self, hashs: Union[str, list]) -> bool:
def remove_torrents(self, hashs: Union[str, list], delete_file: bool = True,
downloader: str = settings.DEFAULT_DOWNLOADER) -> bool:
"""
删除下载器种子
:param hashs: 种子Hash
:param delete_file: 是否删除文件
:param downloader: 下载器
:return: bool
"""
return self.run_module("remove_torrents", hashs=hashs)
return self.run_module("remove_torrents", hashs=hashs, delete_file=delete_file, downloader=downloader)
def start_torrents(self, hashs: Union[list, str]) -> bool:
def start_torrents(self, hashs: Union[list, str], downloader: str = settings.DEFAULT_DOWNLOADER) -> bool:
"""
开始下载
:param hashs: 种子Hash
:param downloader: 下载器
:return: bool
"""
return self.run_module("start_torrents", hashs=hashs)
return self.run_module("start_torrents", hashs=hashs, downloader=downloader)
def stop_torrents(self, hashs: Union[list, str]) -> bool:
def stop_torrents(self, hashs: Union[list, str], downloader: str = settings.DEFAULT_DOWNLOADER) -> bool:
"""
停止下载
:param hashs: 种子Hash
:param downloader: 下载器
:return: bool
"""
return self.run_module("stop_torrents", hashs=hashs)
return self.run_module("stop_torrents", hashs=hashs, downloader=downloader)
def torrent_files(self, tid: str) -> Optional[Union[TorrentFilesList, List[File]]]:
def torrent_files(self, tid: str,
downloader: str = settings.DEFAULT_DOWNLOADER) -> Optional[Union[TorrentFilesList, List[File]]]:
"""
获取种子文件
:param tid: 种子Hash
:param downloader: 下载器
:return: 种子文件
"""
return self.run_module("torrent_files", tid=tid)
return self.run_module("torrent_files", tid=tid, downloader=downloader)
def media_exists(self, mediainfo: MediaInfo, itemid: str = None) -> Optional[ExistMediaInfo]:
"""
@@ -387,6 +421,10 @@ class ChainBase(metaclass=ABCMeta):
:param message: 消息体
:return: 成功或失败
"""
logger.info(f"发送消息channel={message.channel}"
f"title={message.title}, "
f"text={message.text}"
f"userid={message.userid}")
# 发送事件
self.eventmanager.send_event(etype=EventType.NoticeMessage,
data={
@@ -397,10 +435,13 @@ class ChainBase(metaclass=ABCMeta):
"image": message.image,
"userid": message.userid,
})
logger.info(f"发送消息channel={message.channel}"
f"title={message.title}, "
f"text={message.text}"
f"userid={message.userid}")
# 保存消息
self.messagehelper.put(message, role="user")
self.messageoper.add(channel=message.channel, mtype=message.mtype,
title=message.title, text=message.text,
image=message.image, link=message.link,
userid=message.userid, action=1)
# 发送
self.run_module("post_message", message=message)
def post_medias_message(self, message: Notification, medias: List[MediaInfo]) -> Optional[bool]:
@@ -410,6 +451,13 @@ class ChainBase(metaclass=ABCMeta):
:param medias: 媒体列表
:return: 成功或失败
"""
note_list = [media.to_dict() for media in medias]
self.messagehelper.put(message, role="user", note=note_list)
self.messageoper.add(channel=message.channel, mtype=message.mtype,
title=message.title, text=message.text,
image=message.image, link=message.link,
userid=message.userid, action=1,
note=note_list)
return self.run_module("post_medias_message", message=message, medias=medias)
def post_torrents_message(self, message: Notification, torrents: List[Context]) -> Optional[bool]:
@@ -419,17 +467,28 @@ class ChainBase(metaclass=ABCMeta):
:param torrents: 种子列表
:return: 成功或失败
"""
note_list = [torrent.torrent_info.to_dict() for torrent in torrents]
self.messagehelper.put(message, role="user", note=note_list)
self.messageoper.add(channel=message.channel, mtype=message.mtype,
title=message.title, text=message.text,
image=message.image, link=message.link,
userid=message.userid, action=1,
note=note_list)
return self.run_module("post_torrents_message", message=message, torrents=torrents)
def scrape_metadata(self, path: Path, mediainfo: MediaInfo, transfer_type: str) -> None:
def scrape_metadata(self, path: Path, mediainfo: MediaInfo, transfer_type: str,
force_nfo: bool = False, force_img: bool = False) -> None:
"""
刮削元数据
:param path: 媒体文件路径
:param mediainfo: 识别的媒体信息
:param transfer_type: 转移模式
:param force_nfo: 强制刮削nfo
:param force_img: 强制刮削图片
:return: 成功或失败
"""
self.run_module("scrape_metadata", path=path, mediainfo=mediainfo, transfer_type=transfer_type)
self.run_module("scrape_metadata", path=path, mediainfo=mediainfo,
transfer_type=transfer_type, force_nfo=force_nfo, force_img=force_img)
def register_commands(self, commands: Dict[str, dict]) -> None:
"""

app/chain/bangumi.py (new file, 42 lines)

@@ -0,0 +1,42 @@
from typing import Optional, List
from app.chain import ChainBase
from app.utils.singleton import Singleton
class BangumiChain(ChainBase, metaclass=Singleton):
"""
Bangumi处理链单例运行
"""
def calendar(self, page: int = 1, count: int = 30) -> Optional[List[dict]]:
"""
获取Bangumi每日放送
:param page: 页码
:param count: 每页数量
"""
return self.run_module("bangumi_calendar", page=page, count=count)
def bangumi_info(self, bangumiid: int) -> Optional[dict]:
"""
获取Bangumi信息
:param bangumiid: BangumiID
:return: Bangumi信息
"""
return self.run_module("bangumi_info", bangumiid=bangumiid)
def bangumi_credits(self, bangumiid: int, page: int = 1, count: int = 20) -> List[dict]:
"""
根据BangumiID查询电影演职员表
:param bangumiid: BangumiID
:param page: 页码
:param count: 数量
"""
return self.run_module("bangumi_credits", bangumiid=bangumiid, page=page, count=count)
def bangumi_recommend(self, bangumiid: int) -> List[dict]:
"""
根据BangumiID查询推荐电影
:param bangumiid: BangumiID
"""
return self.run_module("bangumi_recommend", bangumiid=bangumiid)


@@ -1,178 +0,0 @@
import base64
from typing import Tuple, Optional
from urllib.parse import urljoin
from lxml import etree
from app.chain import ChainBase
from app.chain.site import SiteChain
from app.core.config import settings
from app.db.site_oper import SiteOper
from app.db.siteicon_oper import SiteIconOper
from app.helper.cloudflare import under_challenge
from app.helper.cookiecloud import CookieCloudHelper
from app.helper.message import MessageHelper
from app.helper.rss import RssHelper
from app.helper.sites import SitesHelper
from app.log import logger
from app.utils.http import RequestUtils
from app.utils.site import SiteUtils
class CookieCloudChain(ChainBase):
"""
CookieCloud处理链
"""
def __init__(self):
super().__init__()
self.siteoper = SiteOper()
self.siteiconoper = SiteIconOper()
self.siteshelper = SitesHelper()
self.rsshelper = RssHelper()
self.sitechain = SiteChain()
self.message = MessageHelper()
self.cookiecloud = CookieCloudHelper(
server=settings.COOKIECLOUD_HOST,
key=settings.COOKIECLOUD_KEY,
password=settings.COOKIECLOUD_PASSWORD
)
def process(self, manual=False) -> Tuple[bool, str]:
"""
通过CookieCloud同步站点Cookie
"""
logger.info("开始同步CookieCloud站点 ...")
cookies, msg = self.cookiecloud.download()
if not cookies:
logger.error(f"CookieCloud同步失败{msg}")
if manual:
self.message.put(f"CookieCloud同步失败 {msg}")
return False, msg
# 保存Cookie或新增站点
_update_count = 0
_add_count = 0
_fail_count = 0
for domain, cookie in cookies.items():
# 获取站点信息
indexer = self.siteshelper.get_indexer(domain)
site_info = self.siteoper.get_by_domain(domain)
if site_info:
# 检查站点连通性
status, msg = self.sitechain.test(domain)
# 更新站点Cookie
if status:
logger.info(f"站点【{site_info.name}】连通性正常不同步CookieCloud数据")
# 更新站点rss地址
if not site_info.public and not site_info.rss:
# 自动生成rss地址
rss_url, errmsg = self.rsshelper.get_rss_link(
url=site_info.url,
cookie=cookie,
ua=settings.USER_AGENT,
proxy=True if site_info.proxy else False
)
if rss_url:
logger.info(f"更新站点 {domain} RSS地址 ...")
self.siteoper.update_rss(domain=domain, rss=rss_url)
else:
logger.warn(errmsg)
continue
# 更新站点Cookie
logger.info(f"更新站点 {domain} Cookie ...")
self.siteoper.update_cookie(domain=domain, cookies=cookie)
_update_count += 1
elif indexer:
# 新增站点
res = RequestUtils(cookies=cookie,
ua=settings.USER_AGENT
).get_res(url=indexer.get("domain"))
if res and res.status_code in [200, 500, 403]:
if not indexer.get("public") and not SiteUtils.is_logged_in(res.text):
_fail_count += 1
if under_challenge(res.text):
logger.warn(f"站点 {indexer.get('name')} 被Cloudflare防护无法登录无法添加站点")
continue
logger.warn(
f"站点 {indexer.get('name')} 登录失败没有该站点账号或Cookie已失效无法添加站点")
continue
elif res is not None:
_fail_count += 1
logger.warn(f"站点 {indexer.get('name')} 连接状态码:{res.status_code},无法添加站点")
continue
else:
_fail_count += 1
logger.warn(f"站点 {indexer.get('name')} 连接失败,无法添加站点")
continue
# 获取rss地址
rss_url = None
if not indexer.get("public") and indexer.get("domain"):
# 自动生成rss地址
rss_url, errmsg = self.rsshelper.get_rss_link(url=indexer.get("domain"),
cookie=cookie,
ua=settings.USER_AGENT)
if errmsg:
logger.warn(errmsg)
# 插入数据库
logger.info(f"新增站点 {indexer.get('name')} ...")
self.siteoper.add(name=indexer.get("name"),
url=indexer.get("domain"),
domain=domain,
cookie=cookie,
rss=rss_url,
public=1 if indexer.get("public") else 0)
_add_count += 1
# 保存站点图标
if indexer:
site_icon = self.siteiconoper.get_by_domain(domain)
if not site_icon or not site_icon.base64:
logger.info(f"开始缓存站点 {indexer.get('name')} 图标 ...")
icon_url, icon_base64 = self.__parse_favicon(url=indexer.get("domain"),
cookie=cookie,
ua=settings.USER_AGENT)
if icon_url:
self.siteiconoper.update_icon(name=indexer.get("name"),
domain=domain,
icon_url=icon_url,
icon_base64=icon_base64)
logger.info(f"缓存站点 {indexer.get('name')} 图标成功")
else:
logger.warn(f"缓存站点 {indexer.get('name')} 图标失败")
# 处理完成
ret_msg = f"更新了{_update_count}个站点,新增了{_add_count}个站点"
if _fail_count > 0:
ret_msg += f"{_fail_count}个站点添加失败,下次同步时将重试,也可以手动添加"
if manual:
self.message.put(f"CookieCloud同步成功, {ret_msg}")
logger.info(f"CookieCloud同步成功{ret_msg}")
return True, ret_msg
@staticmethod
def __parse_favicon(url: str, cookie: str, ua: str) -> Tuple[str, Optional[str]]:
"""
解析站点favicon,返回base64 fav图标
:param url: 站点地址
:param cookie: Cookie
:param ua: User-Agent
:return:
"""
favicon_url = urljoin(url, "favicon.ico")
res = RequestUtils(cookies=cookie, timeout=60, ua=ua).get_res(url=url)
if res:
html_text = res.text
else:
logger.error(f"获取站点页面失败:{url}")
return favicon_url, None
html = etree.HTML(html_text)
if html:
fav_link = html.xpath('//head/link[contains(@rel, "icon")]/@href')
if fav_link:
favicon_url = urljoin(url, fav_link[0])
res = RequestUtils(cookies=cookie, timeout=20, ua=ua).get_res(url=favicon_url)
if res:
return favicon_url, base64.b64encode(res.content).decode()
else:
logger.error(f"获取站点图标失败:{favicon_url}")
return favicon_url, None


@@ -15,7 +15,7 @@ class DashboardChain(ChainBase, metaclass=Singleton):
"""
return self.run_module("media_statistic")
def downloader_info(self) -> schemas.DownloaderInfo:
def downloader_info(self) -> Optional[List[schemas.DownloaderInfo]]:
"""
下载器信息
"""


@@ -72,3 +72,33 @@ class DoubanChain(ChainBase, metaclass=Singleton):
if settings.RECOGNIZE_SOURCE != "douban":
return None
return self.run_module("tv_hot", page=page, count=count)
def movie_credits(self, doubanid: str, page: int = 1) -> List[dict]:
"""
根据TMDBID查询电影演职人员
:param doubanid: 豆瓣ID
:param page: 页码
"""
return self.run_module("douban_movie_credits", doubanid=doubanid, page=page)
def tv_credits(self, doubanid: str, page: int = 1) -> List[dict]:
"""
根据TMDBID查询电视剧演职人员
:param doubanid: 豆瓣ID
:param page: 页码
"""
return self.run_module("douban_tv_credits", doubanid=doubanid, page=page)
def movie_recommend(self, doubanid: str) -> List[dict]:
"""
根据豆瓣ID查询推荐电影
:param doubanid: 豆瓣ID
"""
return self.run_module("douban_movie_recommend", doubanid=doubanid)
def tv_recommend(self, doubanid: str) -> List[dict]:
"""
根据豆瓣ID查询推荐电视剧
:param doubanid: 豆瓣ID
"""
return self.run_module("douban_tv_recommend", doubanid=doubanid)


@@ -9,6 +9,7 @@ from typing import List, Optional, Tuple, Set, Dict, Union
from app.chain import ChainBase
from app.core.config import settings
from app.core.context import MediaInfo, TorrentInfo, Context
from app.core.event import eventmanager, Event
from app.core.meta import MetaBase
from app.core.metainfo import MetaInfo
from app.db.downloadhistory_oper import DownloadHistoryOper
@@ -55,6 +56,8 @@ class DownloadChain(ChainBase):
msg_text = f"{msg_text}\n种子:{torrent.title}"
if torrent.pubdate:
msg_text = f"{msg_text}\n发布时间:{torrent.pubdate}"
if torrent.freedate:
msg_text = f"{msg_text}\n免费时间:{StringUtils.diff_time_str(torrent.freedate)}"
if torrent.seeders:
msg_text = f"{msg_text}\n做种数:{torrent.seeders}"
if torrent.uploadvolumefactor and torrent.downloadvolumefactor:
@@ -192,7 +195,7 @@ class DownloadChain(ChainBase):
channel=channel,
userid=userid)
if not content:
return
return None
else:
content = torrent_file
# 获取种子文件的文件夹名和文件清单
@@ -209,7 +212,7 @@ class DownloadChain(ChainBase):
if _media.genre_ids \
and set(_media.genre_ids).intersection(set(settings.ANIME_GENREIDS)):
# 动漫
download_dir = settings.SAVE_ANIME_PATH
download_dir = settings.SAVE_ANIME_PATH / _media.category
else:
# 电视剧
download_dir = settings.SAVE_TV_PATH / _media.category
@@ -283,9 +286,13 @@ class DownloadChain(ChainBase):
if not file_meta.begin_episode \
or file_meta.begin_episode not in episodes:
continue
# 只处理视频格式
if not Path(file).suffix \
or Path(file).suffix not in settings.RMT_MEDIAEXT:
continue
files_to_add.append({
"download_hash": _hash,
"downloader": settings.DOWNLOADER,
"downloader": settings.DEFAULT_DOWNLOADER,
"fullpath": str(download_dir / _folder_name / file),
"savepath": str(download_dir / _folder_name),
"filepath": file,
@@ -294,8 +301,8 @@ class DownloadChain(ChainBase):
if files_to_add:
self.downloadhis.add_files(files_to_add)
# 发送消息
self.post_download_message(meta=_meta, mediainfo=_media, torrent=_torrent, channel=channel, userid=userid)
# 发送消息群发不带channel和userid
self.post_download_message(meta=_meta, mediainfo=_media, torrent=_torrent)
# 下载成功后处理
self.download_added(context=context, download_dir=download_dir, torrent_path=torrent_file)
# 广播事件
@@ -325,7 +332,8 @@ class DownloadChain(ChainBase):
save_path: str = None,
channel: MessageChannel = None,
userid: str = None,
username: str = None) -> Tuple[List[Context], Dict[int, Dict[int, NotExistMediaInfo]]]:
username: str = None
) -> Tuple[List[Context], Dict[Union[int, str], Dict[int, NotExistMediaInfo]]]:
"""
根据缺失数据,自动种子列表中组合择优下载
:param contexts: 资源上下文列表
@@ -350,12 +358,13 @@ class DownloadChain(ChainBase):
need = list(set(_need).difference(set(_current)))
# 清除已下载的季信息
seas = copy.deepcopy(no_exists.get(_mid))
for _sea in list(seas):
if _sea not in need:
no_exists[_mid].pop(_sea)
if not no_exists.get(_mid) and no_exists.get(_mid) is not None:
no_exists.pop(_mid)
break
if seas:
for _sea in list(seas):
if _sea not in need:
no_exists[_mid].pop(_sea)
if not no_exists.get(_mid) and no_exists.get(_mid) is not None:
no_exists.pop(_mid)
break
return need
def __update_episodes(_mid: Union[int, str], _sea: int, _need: list, _current: set) -> list:
@@ -399,8 +408,8 @@ class DownloadChain(ChainBase):
# 如果是电影,直接下载
for context in contexts:
if context.media_info.type == MediaType.MOVIE:
if self.download_single(context, save_path=save_path,
channel=channel, userid=userid, username=username):
if self.download_single(context, save_path=save_path, channel=channel,
userid=userid, username=username):
# 下载成功
downloaded_list.append(context)
@@ -473,8 +482,9 @@ class DownloadChain(ChainBase):
)
else:
# 下载
download_id = self.download_single(context, save_path=save_path,
channel=channel, userid=userid, username=username)
download_id = self.download_single(context,
save_path=save_path, channel=channel,
userid=userid, username=username)
if download_id:
# 下载成功
@@ -483,6 +493,9 @@ class DownloadChain(ChainBase):
need_season = __update_seasons(_mid=need_mid,
_need=need_season,
_current=torrent_season)
if not need_season:
# 全部下载完成
break
# 电视剧季内的集匹配
if no_exists:
# TMDBID列表
@@ -505,7 +518,7 @@ class DownloadChain(ChainBase):
start_episode = tv.start_episode or 1
# 缺失整季的转化为缺失集进行比较
if not need_episodes:
need_episodes = list(range(start_episode, total_episode))
need_episodes = list(range(start_episode, total_episode + 1))
# 循环种子
for context in contexts:
# 媒体信息
@@ -533,8 +546,9 @@ class DownloadChain(ChainBase):
# 为需要集的子集则下载
if torrent_episodes.issubset(set(need_episodes)):
# 下载
download_id = self.download_single(context, save_path=save_path,
channel=channel, userid=userid, username=username)
download_id = self.download_single(context,
save_path=save_path, channel=channel,
userid=userid, username=username)
if download_id:
# 下载成功
downloaded_list.append(context)
@@ -636,7 +650,7 @@ class DownloadChain(ChainBase):
mediainfo: MediaInfo,
no_exists: Dict[int, Dict[int, NotExistMediaInfo]] = None,
totals: Dict[int, int] = None
) -> Tuple[bool, Dict[int, Dict[int, NotExistMediaInfo]]]:
) -> Tuple[bool, Dict[Union[int, str], Dict[int, NotExistMediaInfo]]]:
"""
检查媒体库,查询是否存在,对于剧集同时返回不存在的季集信息
:param meta: 元数据
@@ -809,6 +823,7 @@ class DownloadChain(ChainBase):
}
# 下载用户
torrent.userid = history.userid
torrent.username = history.username
ret_torrents.append(torrent)
return ret_torrents
@@ -827,3 +842,16 @@ class DownloadChain(ChainBase):
删除下载任务
"""
return self.remove_torrents(hashs=[hash_str])
@eventmanager.register(EventType.DownloadFileDeleted)
def download_file_deleted(self, event: Event):
"""
下载文件删除时,同步删除下载任务
"""
if not event:
return
hash_str = event.event_data.get("hash")
if not hash_str:
return
logger.warn(f"检测到下载源文件被删除,删除下载任务(不含文件):{hash_str}")
self.remove_torrents(hashs=[hash_str], delete_file=False)
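The new `download_file_deleted` hook uses the project's `@eventmanager.register` decorator with guard clauses for a missing event or hash. A minimal standalone sketch of that registration pattern (the `EventManager` below is a hypothetical stand-in, not MoviePilot's actual implementation):

```python
from collections import defaultdict

class EventManager:
    """Minimal event bus: handlers register per event type and get called on send."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def register(self, etype):
        def decorator(func):
            self._handlers[etype].append(func)
            return func
        return decorator

    def send_event(self, etype, data):
        for func in self._handlers[etype]:
            func({"event_data": data})

eventmanager = EventManager()
deleted = []

@eventmanager.register("DownloadFileDeleted")
def on_deleted(event):
    # Mirror the guard clauses in the real handler: ignore events without a hash
    hash_str = event["event_data"].get("hash")
    if not hash_str:
        return
    deleted.append(hash_str)

eventmanager.send_event("DownloadFileDeleted", {"hash": "abc123"})
eventmanager.send_event("DownloadFileDeleted", {})  # no hash: ignored
print(deleted)  # ['abc123']
```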


@@ -229,6 +229,32 @@ class MediaChain(ChainBase, metaclass=Singleton):
)
return tmdbinfo
def get_tmdbinfo_by_bangumiid(self, bangumiid: int) -> Optional[dict]:
"""
根据BangumiID获取TMDB信息
"""
bangumiinfo = self.bangumi_info(bangumiid=bangumiid)
if bangumiinfo:
# 优先使用原标题匹配
if bangumiinfo.get("name"):
meta = MetaInfo(title=bangumiinfo.get("name"))
else:
meta = MetaInfo(title=bangumiinfo.get("name_cn"))
# 年份
release_date = bangumiinfo.get("date") or bangumiinfo.get("air_date")
if release_date:
year = release_date[:4]
else:
year = None
# 使用名称识别TMDB媒体信息
return self.match_tmdbinfo(
name=meta.name,
year=year,
mtype=MediaType.TV,
season=meta.begin_season
)
return None
def get_doubaninfo_by_tmdbid(self, tmdbid: int,
mtype: MediaType = None, season: int = None) -> Optional[dict]:
"""
@@ -261,3 +287,29 @@ class MediaChain(ChainBase, metaclass=Singleton):
imdbid=imdbid
)
return None
def get_doubaninfo_by_bangumiid(self, bangumiid: int) -> Optional[dict]:
"""
根据BangumiID获取豆瓣信息
"""
bangumiinfo = self.bangumi_info(bangumiid=bangumiid)
if bangumiinfo:
# 优先使用中文标题匹配
if bangumiinfo.get("name_cn"):
meta = MetaInfo(title=bangumiinfo.get("name_cn"))
else:
meta = MetaInfo(title=bangumiinfo.get("name"))
# 年份
release_date = bangumiinfo.get("date") or bangumiinfo.get("air_date")
if release_date:
year = release_date[:4]
else:
year = None
# 使用名称识别豆瓣媒体信息
return self.match_doubaninfo(
name=meta.name,
year=year,
mtype=MediaType.TV,
season=meta.begin_season
)
return None
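Both Bangumi helpers above share the same shape: pick a preferred title (original name for TMDB matching, Chinese name for Douban), then derive the year from the first four characters of the release date. A sketch of that shared logic (dict keys taken from the diff; the helper name is hypothetical):

```python
def pick_title_and_year(info: dict, prefer_cn: bool) -> tuple:
    """Choose the Chinese or original title first, then slice the year out of the date."""
    if prefer_cn:
        title = info.get("name_cn") or info.get("name")
    else:
        title = info.get("name") or info.get("name_cn")
    # Fall back from "date" to "air_date", as in the diff
    release_date = info.get("date") or info.get("air_date")
    year = release_date[:4] if release_date else None
    return title, year

print(pick_title_and_year(
    {"name": "葬送のフリーレン", "name_cn": "葬送的芙莉莲", "date": "2023-09-29"},
    prefer_cn=True))  # ('葬送的芙莉莲', '2023')
```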


@@ -1,6 +1,6 @@
import json
import threading
from typing import List, Union
from typing import List, Union, Optional
from app import schemas
from app.chain import ChainBase
@@ -20,11 +20,11 @@ class MediaServerChain(ChainBase):
super().__init__()
self.dboper = MediaServerOper()
def librarys(self, server: str) -> List[schemas.MediaServerLibrary]:
def librarys(self, server: str = None, username: str = None) -> List[schemas.MediaServerLibrary]:
"""
获取媒体服务器所有媒体库
"""
return self.run_module("mediaserver_librarys", server=server)
return self.run_module("mediaserver_librarys", server=server, username=username)
def items(self, server: str, library_id: Union[str, int]) -> List[schemas.MediaServerItem]:
"""
@@ -44,22 +44,40 @@ class MediaServerChain(ChainBase):
"""
return self.run_module("mediaserver_tv_episodes", server=server, item_id=item_id)
def playing(self, count: int = 20, server: str = None, username: str = None) -> List[schemas.MediaServerPlayItem]:
"""
获取媒体服务器正在播放信息
"""
return self.run_module("mediaserver_playing", count=count, server=server, username=username)
def latest(self, count: int = 20, server: str = None, username: str = None) -> List[schemas.MediaServerPlayItem]:
"""
获取媒体服务器最新入库条目
"""
return self.run_module("mediaserver_latest", count=count, server=server, username=username)
def get_play_url(self, server: str, item_id: Union[str, int]) -> Optional[str]:
"""
获取播放地址
"""
return self.run_module("mediaserver_play_url", server=server, item_id=item_id)
def sync(self):
"""
同步媒体库所有数据到本地数据库
"""
# 设置的媒体服务器
if not settings.MEDIASERVER:
return
# 同步黑名单
sync_blacklist = settings.MEDIASERVER_SYNC_BLACKLIST.split(
",") if settings.MEDIASERVER_SYNC_BLACKLIST else []
mediaservers = settings.MEDIASERVER.split(",")
with lock:
# 汇总统计
total_count = 0
# 清空登记薄
self.dboper.empty(server=settings.MEDIASERVER)
# 同步黑名单
sync_blacklist = settings.MEDIASERVER_SYNC_BLACKLIST.split(
",") if settings.MEDIASERVER_SYNC_BLACKLIST else []
# 设置的媒体服务器
if not settings.MEDIASERVER:
return
mediaservers = settings.MEDIASERVER.split(",")
self.dboper.empty()
# 遍历媒体服务器
for mediaserver in mediaservers:
logger.info(f"开始同步媒体库 {mediaserver} 的数据 ...")
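The reordered `sync()` now bails out before taking the lock when no media server is configured, and builds the sync blacklist with a guarded comma-split. The guard matters because `"".split(",")` yields `[""]`, not `[]`. A sketch of a safe variant (hypothetical helper name; it additionally strips whitespace, which the original does not):

```python
def parse_csv_setting(value):
    """Split a comma-separated setting into a clean list; empty or None yields []."""
    return [v.strip() for v in value.split(",") if v.strip()] if value else []

print(parse_csv_setting("emby,jellyfin"))  # ['emby', 'jellyfin']
print(parse_csv_setting(""))               # []
```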


@@ -1,14 +1,24 @@
from typing import Any
import copy
import json
import re
from typing import Any, Optional, Dict, Union
from app.chain.download import *
from app.chain import ChainBase
from app.chain.download import DownloadChain
from app.chain.media import MediaChain
from app.chain.search import SearchChain
from app.chain.subscribe import SubscribeChain
from app.core.context import MediaInfo
from app.core.config import settings
from app.core.context import MediaInfo, Context
from app.core.event import EventManager
from app.core.meta import MetaBase
from app.db.message_oper import MessageOper
from app.helper.message import MessageHelper
from app.helper.torrent import TorrentHelper
from app.log import logger
from app.schemas import Notification
from app.schemas.types import EventType, MessageChannel
from app.schemas import Notification, NotExistMediaInfo, CommingMessage
from app.schemas.types import EventType, MessageChannel, MediaType
from app.utils.string import StringUtils
# 当前页面
_current_page: int = 0
@@ -32,17 +42,70 @@ class MessageChain(ChainBase):
self.downloadchain = DownloadChain()
self.subscribechain = SubscribeChain()
self.searchchain = SearchChain()
self.medtachain = MediaChain()
self.torrent = TorrentHelper()
self.mediachain = MediaChain()
self.eventmanager = EventManager()
self.torrenthelper = TorrentHelper()
self.messagehelper = MessageHelper()
self.messageoper = MessageOper()
def __get_noexits_info(
self,
_meta: MetaBase,
_mediainfo: MediaInfo) -> Dict[Union[int, str], Dict[int, NotExistMediaInfo]]:
"""
获取缺失的媒体信息
"""
if _mediainfo.type == MediaType.TV:
if not _mediainfo.seasons:
# 补充媒体信息
_mediaid = _mediainfo.tmdb_id or _mediainfo.douban_id
_mediainfo = self.mediachain.recognize_media(mtype=_mediainfo.type,
tmdbid=_mediainfo.tmdb_id,
doubanid=_mediainfo.douban_id,
cache=False)
if not _mediainfo:
logger.warn(f"{_mediaid} 媒体信息识别失败!")
return {}
if not _mediainfo.seasons:
logger.warn(f"媒体信息中没有季集信息,"
f"标题:{_mediainfo.title},"
f"tmdbid:{_mediainfo.tmdb_id},doubanid:{_mediainfo.douban_id}")
return {}
# KEY
_mediakey = _mediainfo.tmdb_id or _mediainfo.douban_id
_no_exists = {
_mediakey: {}
}
if _meta.begin_season:
# 指定季
episodes = _mediainfo.seasons.get(_meta.begin_season)
if not episodes:
return {}
_no_exists[_mediakey][_meta.begin_season] = NotExistMediaInfo(
season=_meta.begin_season,
episodes=[],
total_episode=len(episodes),
start_episode=episodes[0]
)
else:
# 所有季
for sea, eps in _mediainfo.seasons.items():
if not eps:
continue
_no_exists[_mediakey][sea] = NotExistMediaInfo(
season=sea,
episodes=[],
total_episode=len(eps),
start_episode=eps[0]
)
else:
_no_exists = {}
return _no_exists
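`__get_noexits_info` marks each requested season as entirely missing: an empty `episodes` list together with the season's real `total_episode` and `start_episode`. A sketch of that shape using plain dicts in place of `NotExistMediaInfo` (function name hypothetical):

```python
def full_missing_seasons(seasons: dict, begin_season=None) -> dict:
    """Build a {season: info} map that treats each requested season as entirely missing."""
    wanted = {begin_season: seasons.get(begin_season)} if begin_season else seasons
    result = {}
    for sea, eps in wanted.items():
        if not eps:
            continue  # skip seasons with no episode list, as the diff does
        result[sea] = {
            "season": sea,
            "episodes": [],               # empty list = whole season wanted
            "total_episode": len(eps),
            "start_episode": eps[0],
        }
    return result

print(full_missing_seasons({1: [1, 2, 3], 2: [1, 2]}, begin_season=2))
# {2: {'season': 2, 'episodes': [], 'total_episode': 2, 'start_episode': 1}}
```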
def process(self, body: Any, form: Any, args: Any) -> None:
"""
识别消息内容,执行操作
调用模块识别消息内容
"""
# 申明全局变量
global _current_page, _current_meta, _current_media
# 获取消息内容
info = self.message_parser(body=body, form=form, args=args)
if not info:
@@ -61,10 +124,34 @@ class MessageChain(ChainBase):
if not text:
logger.debug(f'未识别到消息内容:{body}{form}{args}')
return
# 处理消息
self.handle_message(channel=channel, userid=userid, username=username, text=text)
def handle_message(self, channel: MessageChannel, userid: Union[str, int], username: str, text: str) -> None:
"""
识别消息内容,执行操作
"""
# 申明全局变量
global _current_page, _current_meta, _current_media
# 加载缓存
user_cache: Dict[str, dict] = self.load_cache(self._cache_file) or {}
# 处理消息
logger.info(f'收到用户消息内容,用户:{userid},内容:{text}')
# 保存消息
self.messagehelper.put(
CommingMessage(
userid=userid,
username=username,
channel=channel,
text=text
), role="user")
self.messageoper.add(
channel=channel,
userid=username or userid,
text=text,
action=0
)
# 处理消息
if text.startswith('/'):
# 执行命令
self.eventmanager.send_event(
@@ -77,6 +164,7 @@ class MessageChain(ChainBase):
)
elif text.isdigit():
# 用户选择了具体的条目
# 缓存
cache_data: dict = user_cache.get(userid)
# 选择项目
@@ -93,31 +181,44 @@ class MessageChain(ChainBase):
# 缓存列表
cache_list: list = copy.deepcopy(cache_data.get('items'))
# 选择
if cache_type == "Search":
if cache_type in ["Search", "ReSearch"]:
# 当前媒体信息
mediainfo: MediaInfo = cache_list[_choice]
_current_media = mediainfo
# 查询缺失的媒体信息
exist_flag, no_exists = self.downloadchain.get_no_exists_info(meta=_current_meta,
mediainfo=_current_media)
if exist_flag:
if exist_flag and cache_type == "Search":
# 媒体库中已存在
self.post_message(
Notification(channel=channel,
title=f"{_current_media.title_year}"
f"{_current_meta.sea} 媒体库中已存在",
f"{_current_meta.sea} 媒体库中已存在,如需重新下载请发送:搜索 XXX 或 下载 XXX",
userid=userid))
return
elif exist_flag:
# 没有缺失,但要全量重新搜索和下载
no_exists = self.__get_noexits_info(_current_meta, _current_media)
# 发送缺失的媒体信息
if no_exists:
# 发送消息
messages = []
if no_exists and cache_type == "Search":
# 发送缺失消息
mediakey = mediainfo.tmdb_id or mediainfo.douban_id
messages = [
f"{sea} 季缺失 {StringUtils.str_series(no_exist.episodes) if no_exist.episodes else no_exist.total_episode}"
for sea, no_exist in no_exists.get(mediakey).items()]
elif no_exists:
# 发送总集数的消息
mediakey = mediainfo.tmdb_id or mediainfo.douban_id
messages = [
f"{sea} 季总 {no_exist.total_episode}"
for sea, no_exist in no_exists.get(mediakey).items()]
if messages:
self.post_message(Notification(channel=channel,
title=f"{mediainfo.title_year}\n" + "\n".join(messages),
userid=userid))
# 搜索种子,过滤掉不需要的剧集,以便选择
logger.info(f"{mediainfo.title_year} 媒体库中不存在,开始搜索 ...")
logger.info(f"开始搜索 {mediainfo.title_year} ...")
self.post_message(
Notification(channel=channel,
title=f"开始搜索 {mediainfo.type.value} {mediainfo.title_year} ...",
@@ -137,13 +238,16 @@ class MessageChain(ChainBase):
# 判断是否设置自动下载
auto_download_user = settings.AUTO_DOWNLOAD_USER
# 匹配到自动下载用户
if auto_download_user and any(userid == user for user in auto_download_user.split(",")):
logger.info(f"用户 {userid} 在自动下载用户中,开始自动择优下载")
if auto_download_user \
and (auto_download_user == "all"
or any(userid == user for user in auto_download_user.split(","))):
logger.info(f"用户 {userid} 在自动下载用户中,开始自动择优下载 ...")
# 自动选择下载
self.__auto_download(channel=channel,
cache_list=contexts,
userid=userid,
username=username)
username=username,
no_exists=no_exists)
else:
# 更新缓存
user_cache[userid] = {
@@ -158,19 +262,24 @@ class MessageChain(ChainBase):
userid=userid,
total=len(contexts))
elif cache_type == "Subscribe":
# 订阅媒体
elif cache_type in ["Subscribe", "ReSubscribe"]:
# 订阅或洗版媒体
mediainfo: MediaInfo = cache_list[_choice]
# 洗版标识
best_version = False
# 查询缺失的媒体信息
exist_flag, _ = self.downloadchain.get_no_exists_info(meta=_current_meta,
mediainfo=mediainfo)
if exist_flag:
self.post_message(Notification(
channel=channel,
title=f"{mediainfo.title_year}"
f"{_current_meta.sea} 媒体库中已存在",
userid=userid))
return
if cache_type == "Subscribe":
exist_flag, _ = self.downloadchain.get_no_exists_info(meta=_current_meta,
mediainfo=mediainfo)
if exist_flag:
self.post_message(Notification(
channel=channel,
title=f"{mediainfo.title_year}"
f"{_current_meta.sea} 媒体库中已存在,如需洗版请发送:洗版 XXX",
userid=userid))
return
else:
best_version = True
# 添加订阅状态为N
self.subscribechain.add(title=mediainfo.title,
year=mediainfo.year,
@@ -179,10 +288,11 @@ class MessageChain(ChainBase):
season=_current_meta.begin_season,
channel=channel,
userid=userid,
username=username)
username=username,
best_version=best_version)
elif cache_type == "Torrent":
if int(text) == 0:
# 自动选择下载
# 自动选择下载,强制下载模式
self.__auto_download(channel=channel,
cache_list=cache_list,
userid=userid,
@@ -191,7 +301,8 @@ class MessageChain(ChainBase):
# 下载种子
context: Context = cache_list[_choice]
# 下载
self.downloadchain.download_single(context, userid=userid, channel=channel, username=username)
self.downloadchain.download_single(context, channel=channel,
userid=userid, username=username)
elif text.lower() == "p":
# 上一页
@@ -273,6 +384,14 @@ class MessageChain(ChainBase):
# 订阅
content = re.sub(r"订阅[:\s]*", "", text)
action = "Subscribe"
elif text.startswith("洗版"):
# 洗版
content = re.sub(r"洗版[:\s]*", "", text)
action = "ReSubscribe"
elif text.startswith("搜索") or text.startswith("下载"):
# 重新搜索/下载
content = re.sub(r"(搜索|下载)[:\s]*", "", text)
action = "ReSearch"
elif text.startswith("#") \
or re.search(r"^请[问帮你]", text) \
or re.search(r"[?]$", text) \
@@ -283,12 +402,12 @@ class MessageChain(ChainBase):
action = "chat"
else:
# 搜索
content = re.sub(r"(搜索|下载)[:\s]*", "", text)
content = text
action = "Search"
if action in ["Subscribe", "Search"]:
if action != "chat":
# 搜索
meta, medias = self.medtachain.search(content)
meta, medias = self.mediachain.search(content)
# 识别
if not meta.name:
self.post_message(Notification(
@@ -327,20 +446,22 @@ class MessageChain(ChainBase):
# 保存缓存
self.save_cache(user_cache, self._cache_file)
def __auto_download(self, channel, cache_list, userid, username):
def __auto_download(self, channel: MessageChannel, cache_list: list[Context],
userid: Union[str, int], username: str,
no_exists: Optional[Dict[Union[int, str], Dict[int, NotExistMediaInfo]]] = None):
"""
自动择优下载
"""
# 查询缺失的媒体信息
exist_flag, no_exists = self.downloadchain.get_no_exists_info(meta=_current_meta,
mediainfo=_current_media)
if exist_flag:
self.post_message(Notification(
channel=channel,
title=f"{_current_media.title_year}"
f"{_current_meta.sea} 媒体库中已存在",
userid=userid))
return
if no_exists is None:
# 查询缺失的媒体信息
exist_flag, no_exists = self.downloadchain.get_no_exists_info(
meta=_current_meta,
mediainfo=_current_media
)
if exist_flag:
# 媒体库中已存在,查询全量
no_exists = self.__get_noexits_info(_current_meta, _current_media)
# 批量下载
downloads, lefts = self.downloadchain.batch_download(contexts=cache_list,
no_exists=no_exists,
@@ -353,6 +474,13 @@ class MessageChain(ChainBase):
else:
# 未完成下载
logger.info(f'{_current_media.title_year} 未下载完整,添加订阅 ...')
if downloads and _current_media.type == MediaType.TV:
# 获取已下载剧集
downloaded = [download.meta_info.begin_episode for download in downloads
if download.meta_info.begin_episode]
note = json.dumps(downloaded)
else:
note = None
# 添加订阅状态为R
self.subscribechain.add(title=_current_media.title,
year=_current_media.year,
@@ -362,7 +490,8 @@ class MessageChain(ChainBase):
channel=channel,
userid=userid,
username=username,
state="R")
state="R",
note=note)
def __post_medias_message(self, channel: MessageChannel,
title: str, items: list, userid: str, total: int):
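The new chat commands above (`洗版`, and `搜索`/`下载` for forced re-search) are routed by stripping a keyword prefix with `re.sub` before dispatching on an action name. A sketch of that dispatch (action names copied from the diff; the function itself is illustrative, not the actual `handle_message`):

```python
import re

def parse_command(text: str):
    """Map a chat message to (action, content) the way the diff routes it."""
    if text.startswith("订阅"):
        return "Subscribe", re.sub(r"订阅[:\s]*", "", text)
    if text.startswith("洗版"):
        return "ReSubscribe", re.sub(r"洗版[:\s]*", "", text)
    if text.startswith(("搜索", "下载")):
        return "ReSearch", re.sub(r"(搜索|下载)[:\s]*", "", text)
    # Default: treat the whole text as a search query
    return "Search", text

print(parse_command("洗版 流浪地球2"))  # ('ReSubscribe', '流浪地球2')
print(parse_command("流浪地球2"))       # ('Search', '流浪地球2')
```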


@@ -1,5 +1,6 @@
import pickle
import re
import traceback
from concurrent.futures import ThreadPoolExecutor, as_completed
from datetime import datetime
from typing import Dict
@@ -74,7 +75,7 @@ class SearchChain(ChainBase):
try:
return pickle.loads(results)
except Exception as e:
print(str(e))
logger.error(f'加载搜索结果失败:{str(e)} - {traceback.format_exc()}')
return []
def process(self, mediainfo: MediaInfo,
@@ -122,10 +123,13 @@ class SearchChain(ChainBase):
# 搜索关键词
if keyword:
keywords = [keyword]
elif mediainfo.original_title and mediainfo.title != mediainfo.original_title:
keywords = [mediainfo.title, mediainfo.original_title]
else:
keywords = [mediainfo.title]
# 去重去空,但要保持顺序
keywords = list(dict.fromkeys([k for k in [mediainfo.title,
mediainfo.original_title,
mediainfo.en_title,
mediainfo.sg_title] if k]))
# 执行搜索
torrents: List[TorrentInfo] = self.__search_all_sites(
mediainfo=mediainfo,
@@ -136,27 +140,8 @@ class SearchChain(ChainBase):
if not torrents:
logger.warn(f'{keyword or mediainfo.title} 未搜索到资源')
return []
# 过滤种子
if priority_rule is None:
# 取搜索优先级规则
priority_rule = self.systemconfig.get(SystemConfigKey.SearchFilterRules)
if priority_rule:
logger.info(f'开始过滤资源,当前规则:{priority_rule} ...')
result: List[TorrentInfo] = self.filter_torrents(rule_string=priority_rule,
torrent_list=torrents,
season_episodes=season_episodes,
mediainfo=mediainfo)
if result is not None:
torrents = result
if not torrents:
logger.warn(f'{keyword or mediainfo.title} 没有符合优先级规则的资源')
return []
# 使用过滤规则再次过滤
torrents = self.filter_torrents_by_rule(torrents=torrents,
filter_rule=filter_rule)
if not torrents:
logger.warn(f'{keyword or mediainfo.title} 没有符合过滤规则的资源')
return []
# 开始新进度
self.progress.start(ProgressKey.Search)
# 匹配的资源
_match_torrents = []
# 总数
@@ -164,28 +149,32 @@ class SearchChain(ChainBase):
# 已处理数
_count = 0
if mediainfo:
self.progress.start(ProgressKey.Search)
logger.info(f'开始匹配,总 {_total} 个资源 ...')
# 英文标题应该在别名/原标题中,不需要再匹配
logger.info(f"标题:{mediainfo.title},原标题:{mediainfo.original_title},别名:{mediainfo.names}")
self.progress.update(value=0, text=f'开始匹配,总 {_total} 个资源 ...', key=ProgressKey.Search)
for torrent in torrents:
_count += 1
self.progress.update(value=(_count / _total) * 100,
self.progress.update(value=(_count / _total) * 96,
text=f'正在匹配 {torrent.site_name},已完成 {_count} / {_total} ...',
key=ProgressKey.Search)
# 比对IMDBID
if torrent.imdbid \
and mediainfo.imdb_id \
and torrent.imdbid == mediainfo.imdb_id:
logger.info(f'{mediainfo.title} 匹配到资源:{torrent.site_name} - {torrent.title}')
logger.info(f'{mediainfo.title} 通过IMDBID匹配到资源:{torrent.site_name} - {torrent.title}')
_match_torrents.append(torrent)
continue
# 识别
torrent_meta = MetaInfo(title=torrent.title, subtitle=torrent.description)
# 比对类型
if (torrent_meta.type == MediaType.TV and mediainfo.type != MediaType.TV) \
or (torrent_meta.type != MediaType.TV and mediainfo.type == MediaType.TV):
logger.warn(f'{torrent.site_name} - {torrent.title} 类型不匹配')
# 比对种子识别类型
if torrent_meta.type == MediaType.TV and mediainfo.type != MediaType.TV:
logger.warn(f'{torrent.site_name} - {torrent.title} 种子标题类型为 {torrent_meta.type.value},'
f'需要是 {mediainfo.type.value},不匹配')
continue
# 比对种子在站点中的类型
if torrent.category == MediaType.TV.value and mediainfo.type != MediaType.TV:
logger.warn(f'{torrent.site_name} - {torrent.title} 种子在站点中归类为 {torrent.category},'
f'需要是 {mediainfo.type.value},不匹配')
continue
# 比对年份
if mediainfo.year:
@@ -213,7 +202,7 @@ class SearchChain(ChainBase):
continue
# 在副标题中判断是否存在标题与原语种标题
if torrent.description:
subtitle = torrent.description.split()
subtitle = re.split(r'[\s/|]+', torrent.description)
if (StringUtils.is_chinese(mediainfo.title)
and str(mediainfo.title) in subtitle) \
or (StringUtils.is_chinese(mediainfo.original_title)
@@ -230,21 +219,55 @@ class SearchChain(ChainBase):
break
else:
logger.warn(f'{torrent.site_name} - {torrent.title} 标题不匹配')
self.progress.update(value=100,
logger.info(f"匹配完成,共匹配到 {len(_match_torrents)} 个资源")
self.progress.update(value=97,
text=f'匹配完成,共匹配到 {len(_match_torrents)} 个资源',
key=ProgressKey.Search)
self.progress.end(ProgressKey.Search)
else:
_match_torrents = torrents
logger.info(f"匹配完成,共匹配到 {len(_match_torrents)} 个资源")
# 开始过滤
self.progress.update(value=98, text=f'开始过滤,总 {len(_match_torrents)} 个资源,请稍候...',
key=ProgressKey.Search)
# 过滤种子
if priority_rule is None:
# 取搜索优先级规则
priority_rule = self.systemconfig.get(SystemConfigKey.SearchFilterRules)
if priority_rule:
logger.info(f'开始优先级规则过滤,当前规则:{priority_rule} ...')
result: List[TorrentInfo] = self.filter_torrents(rule_string=priority_rule,
torrent_list=_match_torrents,
season_episodes=season_episodes,
mediainfo=mediainfo)
if result is not None:
_match_torrents = result
if not _match_torrents:
logger.warn(f'{keyword or mediainfo.title} 没有符合优先级规则的资源')
return []
# 使用过滤规则再次过滤
if filter_rule:
logger.info(f'开始过滤规则过滤,当前规则:{filter_rule} ...')
_match_torrents = self.filter_torrents_by_rule(torrents=_match_torrents,
mediainfo=mediainfo,
filter_rule=filter_rule)
if not _match_torrents:
logger.warn(f'{keyword or mediainfo.title} 没有符合过滤规则的资源')
return []
# 去掉mediainfo中多余的数据
mediainfo.clear()
# 组装上下文
contexts = [Context(meta_info=MetaInfo(title=torrent.title, subtitle=torrent.description),
media_info=mediainfo,
torrent_info=torrent) for torrent in _match_torrents]
logger.info(f"过滤完成,剩余 {len(contexts)} 个资源")
self.progress.update(value=99, text=f'过滤完成,剩余 {len(contexts)} 个资源', key=ProgressKey.Search)
# 排序
self.progress.update(value=100,
text=f'正在对 {len(contexts)} 个资源进行排序,请稍候...',
key=ProgressKey.Search)
contexts = self.torrenthelper.sort_torrents(contexts)
# 结束进度
self.progress.end(ProgressKey.Search)
# 返回
return contexts
@@ -336,12 +359,14 @@ class SearchChain(ChainBase):
def filter_torrents_by_rule(self,
torrents: List[TorrentInfo],
filter_rule: Dict[str, str] = None
mediainfo: MediaInfo,
filter_rule: Dict[str, str] = None,
) -> List[TorrentInfo]:
"""
使用过滤规则过滤种子
:param torrents: 种子列表
:param filter_rule: 过滤规则
:param mediainfo: 媒体信息
"""
if not filter_rule:
@@ -349,52 +374,13 @@ class SearchChain(ChainBase):
filter_rule = self.systemconfig.get(SystemConfigKey.DefaultSearchFilterRules)
if not filter_rule:
return torrents
# 包含
include = filter_rule.get("include")
# 排除
exclude = filter_rule.get("exclude")
# 质量
quality = filter_rule.get("quality")
# 分辨率
resolution = filter_rule.get("resolution")
# 特效
effect = filter_rule.get("effect")
def __filter_torrent(t: TorrentInfo) -> bool:
"""
过滤种子
"""
# 包含
if include:
if not re.search(r"%s" % include,
f"{t.title} {t.description}", re.I):
logger.info(f"{t.title} 不匹配包含规则 {include}")
return False
# 排除
if exclude:
if re.search(r"%s" % exclude,
f"{t.title} {t.description}", re.I):
logger.info(f"{t.title} 匹配排除规则 {exclude}")
return False
# 质量
if quality:
if not re.search(r"%s" % quality, t.title, re.I):
logger.info(f"{t.title} 不匹配质量规则 {quality}")
return False
# 分辨率
if resolution:
if not re.search(r"%s" % resolution, t.title, re.I):
logger.info(f"{t.title} 不匹配分辨率规则 {resolution}")
return False
# 特效
if effect:
if not re.search(r"%s" % effect, t.title, re.I):
logger.info(f"{t.title} 不匹配特效规则 {effect}")
return False
return True
# 使用默认过滤规则再次过滤
return list(filter(lambda t: __filter_torrent(t), torrents))
return list(filter(
lambda t: self.torrenthelper.filter_torrent(
torrent_info=t,
filter_rule=filter_rule,
mediainfo=mediainfo
),
torrents
))
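Earlier in this file the search keywords are now built from every available title variant (`title`, `original_title`, `en_title`, `sg_title`) and de-duplicated while preserving order via `dict.fromkeys`. A sketch of the idiom (hypothetical helper name):

```python
def build_keywords(*titles):
    """De-duplicate candidate titles, dropping blanks but keeping first-seen order."""
    # dict preserves insertion order, so fromkeys acts as an ordered set
    return list(dict.fromkeys([t for t in titles if t]))

print(build_keywords("沙丘2", "Dune: Part Two", None, "Dune: Part Two"))
# ['沙丘2', 'Dune: Part Two']
```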


@@ -1,16 +1,27 @@
import base64
import re
from typing import Union, Tuple
from typing import Tuple, Optional
from typing import Union
from urllib.parse import urljoin
from lxml import etree
from app.chain import ChainBase
from app.core.config import settings
from app.core.event import eventmanager, Event, EventManager
from app.db.models.site import Site
from app.db.site_oper import SiteOper
from app.db.siteicon_oper import SiteIconOper
from app.helper.browser import PlaywrightHelper
from app.helper.cloudflare import under_challenge
from app.helper.cookie import CookieHelper
from app.helper.cookiecloud import CookieCloudHelper
from app.helper.message import MessageHelper
from app.helper.rss import RssHelper
from app.helper.sites import SitesHelper
from app.log import logger
from app.schemas import MessageChannel, Notification
from app.schemas.types import EventType
from app.utils.http import RequestUtils
from app.utils.site import SiteUtils
from app.utils.string import StringUtils
@@ -24,8 +35,12 @@ class SiteChain(ChainBase):
def __init__(self):
super().__init__()
self.siteoper = SiteOper()
self.siteiconoper = SiteIconOper()
self.siteshelper = SitesHelper()
self.rsshelper = RssHelper()
self.cookiehelper = CookieHelper()
self.message = MessageHelper()
self.cookiecloud = CookieCloudHelper()
# 特殊站点登录验证
self.special_site_test = {
@@ -87,6 +102,190 @@ class SiteChain(ChainBase):
return True, "连接成功"
return False, "Cookie已失效"
@staticmethod
def __parse_favicon(url: str, cookie: str, ua: str) -> Tuple[str, Optional[str]]:
"""
解析站点favicon,返回base64 fav图标
:param url: 站点地址
:param cookie: Cookie
:param ua: User-Agent
:return:
"""
favicon_url = urljoin(url, "favicon.ico")
res = RequestUtils(cookies=cookie, timeout=60, ua=ua).get_res(url=url)
if res:
html_text = res.text
else:
logger.error(f"获取站点页面失败:{url}")
return favicon_url, None
html = etree.HTML(html_text)
if html:
fav_link = html.xpath('//head/link[contains(@rel, "icon")]/@href')
if fav_link:
favicon_url = urljoin(url, fav_link[0])
res = RequestUtils(cookies=cookie, timeout=20, ua=ua).get_res(url=favicon_url)
if res:
return favicon_url, base64.b64encode(res.content).decode()
else:
logger.error(f"获取站点图标失败:{favicon_url}")
return favicon_url, None
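`__parse_favicon` first assumes a `favicon.ico` next to the site root, then upgrades to any `<link rel="icon">` href found in the page head, resolving either against the site URL with `urljoin`, which handles absolute, root-relative, and relative hrefs alike. A sketch of just the resolution step (helper name hypothetical):

```python
from urllib.parse import urljoin

def resolve_favicon(site_url: str, fav_link: str = None) -> str:
    """Fall back to favicon.ico unless the page declares an icon link."""
    if fav_link:
        return urljoin(site_url, fav_link)
    return urljoin(site_url, "favicon.ico")

print(resolve_favicon("https://example.org/index.php"))
# https://example.org/favicon.ico
print(resolve_favicon("https://example.org/index.php", "/static/i.png"))
# https://example.org/static/i.png
```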
def sync_cookies(self, manual=False) -> Tuple[bool, str]:
"""
通过CookieCloud同步站点Cookie
"""
def __indexer_domain(inx: dict, sub_domain: str) -> str:
"""
根据主域名获取索引器地址
"""
if StringUtils.get_url_domain(inx.get("domain")) == sub_domain:
return inx.get("domain")
for ext_d in inx.get("ext_domains") or []:
if StringUtils.get_url_domain(ext_d) == sub_domain:
return ext_d
return sub_domain
logger.info("开始同步CookieCloud站点 ...")
cookies, msg = self.cookiecloud.download()
if not cookies:
logger.error(f"CookieCloud同步失败:{msg}")
if manual:
self.message.put(f"CookieCloud同步失败 {msg}")
return False, msg
# 保存Cookie或新增站点
_update_count = 0
_add_count = 0
_fail_count = 0
for domain, cookie in cookies.items():
# 索引器信息
indexer = self.siteshelper.get_indexer(domain)
# 数据库的站点信息
site_info = self.siteoper.get_by_domain(domain)
if site_info:
# 站点已存在,检查站点连通性
status, msg = self.test(domain)
# 更新站点Cookie
if status:
logger.info(f"站点【{site_info.name}】连通性正常,不同步CookieCloud数据")
# 更新站点rss地址
if not site_info.public and not site_info.rss:
# 自动生成rss地址
rss_url, errmsg = self.rsshelper.get_rss_link(
url=site_info.url,
cookie=cookie,
ua=settings.USER_AGENT,
proxy=True if site_info.proxy else False
)
if rss_url:
logger.info(f"更新站点 {domain} RSS地址 ...")
self.siteoper.update_rss(domain=domain, rss=rss_url)
else:
logger.warn(errmsg)
continue
# 更新站点Cookie
logger.info(f"更新站点 {domain} Cookie ...")
self.siteoper.update_cookie(domain=domain, cookies=cookie)
_update_count += 1
elif indexer:
# 新增站点
domain_url = __indexer_domain(inx=indexer, sub_domain=domain)
res = RequestUtils(cookies=cookie,
ua=settings.USER_AGENT
).get_res(url=domain_url)
if res and res.status_code in [200, 500, 403]:
if not indexer.get("public") and not SiteUtils.is_logged_in(res.text):
_fail_count += 1
if under_challenge(res.text):
logger.warn(f"站点 {indexer.get('name')} 被Cloudflare防护,无法登录,无法添加站点")
continue
logger.warn(
f"站点 {indexer.get('name')} 登录失败,没有该站点账号或Cookie已失效,无法添加站点")
continue
elif res is not None:
_fail_count += 1
logger.warn(f"站点 {indexer.get('name')} 连接状态码:{res.status_code},无法添加站点")
continue
else:
_fail_count += 1
logger.warn(f"站点 {indexer.get('name')} 连接失败,无法添加站点")
continue
# 获取rss地址
rss_url = None
if not indexer.get("public") and domain_url:
# 自动生成rss地址
rss_url, errmsg = self.rsshelper.get_rss_link(url=domain_url,
cookie=cookie,
ua=settings.USER_AGENT)
if errmsg:
logger.warn(errmsg)
# 插入数据库
logger.info(f"新增站点 {indexer.get('name')} ...")
self.siteoper.add(name=indexer.get("name"),
url=domain_url,
domain=domain,
cookie=cookie,
rss=rss_url,
public=1 if indexer.get("public") else 0)
_add_count += 1
# 通知缓存站点图标
if indexer:
EventManager().send_event(EventType.CacheSiteIcon, {
"domain": domain,
})
# 处理完成
ret_msg = f"更新了{_update_count}个站点,新增了{_add_count}个站点"
if _fail_count > 0:
ret_msg += f",{_fail_count}个站点添加失败,下次同步时将重试,也可以手动添加"
if manual:
self.message.put(f"CookieCloud同步成功, {ret_msg}")
logger.info(f"CookieCloud同步成功:{ret_msg}")
return True, ret_msg
@eventmanager.register(EventType.CacheSiteIcon)
def cache_site_icon(self, event: Event):
"""
缓存站点图标
"""
if not event:
return
event_data = event.event_data or {}
# 主域名
domain = event_data.get("domain")
if not domain:
return
if str(domain).startswith("http"):
domain = StringUtils.get_url_domain(domain)
# 站点信息
siteinfo = self.siteoper.get_by_domain(domain)
if not siteinfo:
logger.warn(f"未维护站点 {domain} 信息!")
return
# Cookie
cookie = siteinfo.cookie
# 索引器
indexer = self.siteshelper.get_indexer(domain)
if not indexer:
logger.warn(f"站点 {domain} 索引器不存在!")
return
# 查询站点图标
site_icon = self.siteiconoper.get_by_domain(domain)
if not site_icon or not site_icon.base64:
logger.info(f"开始缓存站点 {indexer.get('name')} 图标 ...")
icon_url, icon_base64 = self.__parse_favicon(url=indexer.get("domain"),
cookie=cookie,
ua=settings.USER_AGENT)
if icon_url:
self.siteiconoper.update_icon(name=indexer.get("name"),
domain=domain,
icon_url=icon_url,
icon_base64=icon_base64)
logger.info(f"缓存站点 {indexer.get('name')} 图标成功")
else:
logger.warn(f"缓存站点 {indexer.get('name')} 图标失败")
def test(self, url: str) -> Tuple[bool, str]:
"""
测试站点是否可用
@@ -99,20 +298,21 @@ class SiteChain(ChainBase):
if not site_info:
return False, f"站点【{url}】不存在"
# 特殊站点测试
if self.special_site_test.get(domain):
return self.special_site_test[domain](site_info)
# 通用站点测试
site_url = site_info.url
site_cookie = site_info.cookie
ua = site_info.ua
render = site_info.render
public = site_info.public
proxies = settings.PROXY if site_info.proxy else None
proxy_server = settings.PROXY_SERVER if site_info.proxy else None
# 模拟登录
try:
# 特殊站点测试
if self.special_site_test.get(domain):
return self.special_site_test[domain](site_info)
# 通用站点测试
site_url = site_info.url
site_cookie = site_info.cookie
ua = site_info.ua
render = site_info.render
public = site_info.public
proxies = settings.PROXY if site_info.proxy else None
proxy_server = settings.PROXY_SERVER if site_info.proxy else None
# 访问链接
if render:
page_source = PlaywrightHelper().get_page_source(url=site_url,
@@ -161,7 +361,7 @@ class SiteChain(ChainBase):
title = f"共有 {len(site_list)} 个站点,回复对应指令操作:" \
f"\n- 禁用站点:/site_disable [id]" \
f"\n- 启用站点:/site_enable [id]" \
f"\n- 更新站点Cookie/site_cookie [id] [username] [password]"
f"\n- 更新站点Cookie/site_cookie [id] [username] [password] [2fa_code/secret]"
messages = []
for site in site_list:
if site.render:
@@ -169,9 +369,9 @@ class SiteChain(ChainBase):
else:
render_str = ""
if site.is_active:
messages.append(f"{site.id}. [{site.name}]({site.url}){render_str}")
messages.append(f"{site.id}. {site.name} {render_str}")
else:
messages.append(f"{site.id}. {site.name}")
messages.append(f"{site.id}. {site.name} ⚠️")
# 发送列表
self.post_message(Notification(
channel=channel,
@@ -227,12 +427,13 @@ class SiteChain(ChainBase):
self.remote_list(channel, userid)
def update_cookie(self, site_info: Site,
username: str, password: str) -> Tuple[bool, str]:
username: str, password: str, two_step_code: str = None) -> Tuple[bool, str]:
"""
根据用户名密码更新站点Cookie
:param site_info: 站点信息
:param username: 用户名
:param password: 密码
:param two_step_code: 二步验证码或密钥
:return: (是否成功, 错误信息)
"""
# 更新站点Cookie
@@ -240,6 +441,7 @@ class SiteChain(ChainBase):
url=site_info.url,
username=username,
password=password,
two_step_code=two_step_code,
proxies=settings.PROXY_HOST if site_info.proxy else None
)
if result:
@@ -257,8 +459,8 @@ class SiteChain(ChainBase):
"""
使用用户名密码更新站点Cookie
"""
err_title = "请输入正确的命令格式:/site_cookie [id] [username] [password]" \
"[id]为站点编号,[uername]为站点用户名,[password]为站点密码"
err_title = "请输入正确的命令格式:/site_cookie [id] [username] [password] [2fa_code/secret]" \
"[id]为站点编号,[username]为站点用户名,[password]为站点密码,[2fa_code/secret]为站点二步验证码或密钥"
if not arg_str:
self.post_message(Notification(
channel=channel,
@@ -266,7 +468,11 @@ class SiteChain(ChainBase):
return
arg_str = str(arg_str).strip()
args = arg_str.split()
if len(args) != 3:
# 二步验证码
two_step_code = None
if len(args) == 4:
two_step_code = args[3]
elif len(args) != 3:
self.post_message(Notification(
channel=channel,
title=err_title, userid=userid))
@@ -296,7 +502,8 @@ class SiteChain(ChainBase):
# 更新Cookie
status, msg = self.update_cookie(site_info=site_info,
username=username,
password=password)
password=password,
two_step_code=two_step_code)
if not status:
logger.error(msg)
self.post_message(Notification(
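The updated `/site_cookie` command accepts an optional fourth argument for a two-step verification code or secret. A sketch of the argument handling (hypothetical helper, mirroring the `len(args)` checks in the diff):

```python
def parse_site_cookie_args(arg_str: str):
    """Return (site_id, username, password, two_step_code) or None on bad input."""
    args = arg_str.strip().split()
    if len(args) == 4:
        return args[0], args[1], args[2], args[3]
    if len(args) == 3:
        return args[0], args[1], args[2], None  # 2FA code is optional
    return None  # wrong argument count: caller should reply with the usage text

print(parse_site_cookie_args("12 alice s3cret"))        # ('12', 'alice', 's3cret', None)
print(parse_site_cookie_args("12 alice s3cret 123456")) # ('12', 'alice', 's3cret', '123456')
```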


@@ -18,9 +18,11 @@ from app.db.models.subscribe import Subscribe
from app.db.subscribe_oper import SubscribeOper
from app.db.systemconfig_oper import SystemConfigOper
from app.helper.message import MessageHelper
from app.helper.torrent import TorrentHelper
from app.log import logger
from app.schemas import NotExistMediaInfo, Notification
from app.schemas.types import MediaType, SystemConfigKey, MessageChannel, NotificationType
from app.utils.string import StringUtils
class SubscribeChain(ChainBase):
@@ -37,11 +39,13 @@ class SubscribeChain(ChainBase):
self.mediachain = MediaChain()
self.message = MessageHelper()
self.systemconfig = SystemConfigOper()
self.torrenthelper = TorrentHelper()
def add(self, title: str, year: str,
mtype: MediaType = None,
tmdbid: int = None,
doubanid: str = None,
bangumiid: int = None,
season: int = None,
channel: MessageChannel = None,
userid: str = None,
@@ -71,11 +75,11 @@ class SubscribeChain(ChainBase):
if tmdbinfo:
mediainfo = MediaInfo(tmdb_info=tmdbinfo)
else:
# 识别TMDB信息
mediainfo = self.recognize_media(meta=metainfo, mtype=mtype, tmdbid=tmdbid)
# 识别TMDB信息,不使用缓存
mediainfo = self.recognize_media(meta=metainfo, mtype=mtype, tmdbid=tmdbid, cache=False)
else:
# 豆瓣识别模式
mediainfo = self.recognize_media(meta=metainfo, mtype=mtype, doubanid=doubanid)
# 豆瓣识别模式,不使用缓存
mediainfo = self.recognize_media(meta=metainfo, mtype=mtype, doubanid=doubanid, cache=False)
if mediainfo:
# 豆瓣标题处理
meta = MetaInfo(mediainfo.title)
@@ -96,7 +100,9 @@ class SubscribeChain(ChainBase):
# 补充媒体信息
mediainfo = self.recognize_media(mtype=mediainfo.type,
tmdbid=mediainfo.tmdb_id,
doubanid=mediainfo.douban_id)
doubanid=mediainfo.douban_id,
bangumiid=mediainfo.bangumi_id,
cache=False)
if not mediainfo:
logger.error(f"媒体信息识别失败!")
return None, "媒体信息识别失败"
@@ -120,6 +126,8 @@ class SubscribeChain(ChainBase):
# 合并信息
if doubanid:
mediainfo.douban_id = doubanid
if bangumiid:
mediainfo.bangumi_id = bangumiid
# 添加订阅
sid, err_msg = self.subscribeoper.add(mediainfo, season=season, username=username, **kwargs)
if not sid:
@@ -139,9 +147,8 @@ class SubscribeChain(ChainBase):
text = f"评分:{mediainfo.vote_average},来自用户:{username or userid}"
else:
text = f"评分:{mediainfo.vote_average}"
# 广而告之
self.post_message(Notification(channel=channel,
mtype=NotificationType.Subscribe,
# 群发
self.post_message(Notification(mtype=NotificationType.Subscribe,
title=f"{mediainfo.title_year} {metainfo.season} 已添加订阅",
text=text,
image=mediainfo.get_message_image()))
@@ -197,9 +204,11 @@ class SubscribeChain(ChainBase):
# 识别媒体信息
mediainfo: MediaInfo = self.recognize_media(meta=meta, mtype=meta.type,
tmdbid=subscribe.tmdbid,
doubanid=subscribe.doubanid)
doubanid=subscribe.doubanid,
cache=False)
if not mediainfo:
logger.warn(f'未识别到媒体信息,标题:{subscribe.name},tmdbid:{subscribe.tmdbid},doubanid:{subscribe.doubanid}')
logger.warn(
f'未识别到媒体信息,标题:{subscribe.name},tmdbid:{subscribe.tmdbid},doubanid:{subscribe.doubanid}')
continue
# 非洗版状态
@@ -235,12 +244,12 @@ class SubscribeChain(ChainBase):
# 已存在
if exist_flag:
logger.info(f'{mediainfo.title_year} 媒体库中已存在')
self.finish_subscribe_or_not(subscribe=subscribe, meta=meta, mediainfo=mediainfo)
self.finish_subscribe_or_not(subscribe=subscribe, meta=meta, mediainfo=mediainfo, force=True)
continue
# 电视剧订阅处理缺失集
if meta.type == MediaType.TV:
# 使用订阅的总集数和开始集数替换no_exists
# 实际缺失集与订阅开始结束集范围进行整合
no_exists = self.__get_subscribe_no_exits(
no_exists=no_exists,
mediakey=mediakey,
@@ -249,17 +258,14 @@ class SubscribeChain(ChainBase):
start_episode=subscribe.start_episode,
)
# 打印缺失集信息
# 打印汇总缺失集信息
if no_exists and no_exists.get(mediakey):
no_exists_info = no_exists.get(mediakey).get(subscribe.season)
if no_exists_info:
logger.info(f'订阅 {mediainfo.title_year} {meta.season} 缺失集:{no_exists_info.episodes}')
# 站点范围
if subscribe.sites:
sites = json.loads(subscribe.sites)
else:
sites = None
sites = self.get_sub_sites(subscribe)
# 优先级过滤规则
if subscribe.best_version:
@@ -276,16 +282,15 @@ class SubscribeChain(ChainBase):
no_exists=no_exists,
sites=sites,
priority_rule=priority_rule,
filter_rule=filter_rule)
filter_rule=filter_rule,
area="imdbid" if subscribe.search_imdbid else "title")
if not contexts:
logger.warn(f'订阅 {subscribe.keyword or subscribe.name} 未搜索到资源')
if meta.type == MediaType.TV:
# 未搜索到资源,但本地缺失可能有变化,更新订阅剩余集数
self.__update_lack_episodes(lefts=no_exists, subscribe=subscribe,
meta=meta, mediainfo=mediainfo)
self.finish_subscribe_or_not(subscribe=subscribe, meta=meta,
mediainfo=mediainfo, lefts=no_exists)
continue
# 过滤
# 过滤搜索结果
matched_contexts = []
for context in contexts:
torrent_meta = context.meta_info
@@ -304,41 +309,31 @@ class SubscribeChain(ChainBase):
if torrent_meta.episode_list:
logger.info(f'{subscribe.name} 正在洗版,{torrent_info.title} 不是整季')
continue
# 优先级小于已下载优先级的不要
# 洗版时,优先级小于已下载优先级的不要
if subscribe.current_priority \
and torrent_info.pri_order < subscribe.current_priority:
logger.info(f'{subscribe.name} 正在洗版,{torrent_info.title} 优先级低于已下载优先级')
continue
matched_contexts.append(context)
if not matched_contexts:
logger.warn(f'订阅 {subscribe.name} 没有符合过滤条件的资源')
# 非洗版未搜索到资源,但本地缺失可能有变化,更新订阅剩余集数
if meta.type == MediaType.TV and not subscribe.best_version:
self.__update_lack_episodes(lefts=no_exists, subscribe=subscribe,
meta=meta, mediainfo=mediainfo)
self.finish_subscribe_or_not(subscribe=subscribe, meta=meta,
mediainfo=mediainfo, lefts=no_exists)
continue
# 自动下载
downloads, lefts = self.downloadchain.batch_download(contexts=matched_contexts,
no_exists=no_exists, username=subscribe.username)
# 更新已经下载的集数
if downloads \
and meta.type == MediaType.TV \
and not subscribe.best_version:
self.__update_subscribe_note(subscribe=subscribe, downloads=downloads)
downloads, lefts = self.downloadchain.batch_download(
contexts=matched_contexts,
no_exists=no_exists,
userid=subscribe.username,
username=subscribe.username,
save_path=subscribe.save_path
)
if downloads and not lefts:
# 判断是否应完成订阅
self.finish_subscribe_or_not(subscribe=subscribe, meta=meta,
mediainfo=mediainfo, downloads=downloads)
else:
# 未完成下载
logger.info(f'{mediainfo.title_year} 未下载完整,继续订阅 ...')
if meta.type == MediaType.TV and not subscribe.best_version:
# 更新订阅剩余集数和时间
update_date = True if downloads else False
self.__update_lack_episodes(lefts=lefts, subscribe=subscribe, meta=meta,
mediainfo=mediainfo, update_date=update_date)
# 判断是否应完成订阅
self.finish_subscribe_or_not(subscribe=subscribe, meta=meta, mediainfo=mediainfo,
downloads=downloads, lefts=lefts)
# 手动触发时发送系统消息
if manual:
@@ -347,35 +342,74 @@ class SubscribeChain(ChainBase):
else:
self.message.put('所有订阅搜索完成!')
def finish_subscribe_or_not(self, subscribe: Subscribe, meta: MetaInfo,
mediainfo: MediaInfo, downloads: List[Context] = None):
def update_subscribe_priority(self, subscribe: Subscribe, meta: MetaInfo,
mediainfo: MediaInfo, downloads: List[Context]):
"""
判断是否应完成订阅
更新订阅已下载资源的优先级
"""
if not downloads:
return
if not subscribe.best_version:
# 全部下载完成
logger.info(f'{mediainfo.title_year} 完成订阅')
return
# 当前下载资源的优先级
priority = max([item.torrent_info.pri_order for item in downloads])
if priority == 100:
logger.info(f'{mediainfo.title_year} 洗版完成,删除订阅')
self.subscribeoper.delete(subscribe.id)
# 发送通知
self.post_message(Notification(mtype=NotificationType.Subscribe,
title=f'{mediainfo.title_year} {meta.season} 已完成订阅',
title=f'{mediainfo.title_year} {meta.season}洗版完成',
image=mediainfo.get_message_image()))
elif downloads:
# 当前下载资源优先级
priority = max([item.torrent_info.pri_order for item in downloads])
if priority == 100:
logger.info(f'{mediainfo.title_year} 洗版完成,删除订阅')
else:
# 正在洗版,更新资源优先级
logger.info(f'{mediainfo.title_year} 正在洗版,更新资源优先级为 {priority}')
self.subscribeoper.update(subscribe.id, {
"current_priority": priority
})
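The priority step above boils down to: take the highest `pri_order` among the new downloads, treat 100 as "best version reached", otherwise record the value as the subscription's new floor. A minimal standalone sketch (the function name is illustrative; the 100 threshold is taken from the code above):

```python
def next_wash_priority(download_priorities: list) -> tuple:
    """Returns (finished, new_priority): washing is finished once the
    best available priority (100) has been downloaded; otherwise the
    max becomes the subscription's new current_priority."""
    priority = max(download_priorities)
    return priority == 100, priority
```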
def finish_subscribe_or_not(self, subscribe: Subscribe, meta: MetaInfo, mediainfo: MediaInfo,
downloads: List[Context] = None,
lefts: Dict[Union[int | str], Dict[int, NotExistMediaInfo]] = None,
force: bool = False):
"""
判断是否应完成订阅
"""
mediakey = subscribe.tmdbid or subscribe.doubanid
# 是否有剩余集
no_lefts = not lefts or not lefts.get(mediakey)
# 是否完成订阅
if not subscribe.best_version:
# 非洗版
if ((no_lefts and meta.type == MediaType.TV)
or (downloads and meta.type == MediaType.MOVIE)
or force):
# 全部下载完成
logger.info(f'{mediainfo.title_year} 完成订阅')
self.subscribeoper.delete(subscribe.id)
# 发送通知
self.post_message(Notification(mtype=NotificationType.Subscribe,
title=f'{mediainfo.title_year} {meta.season}洗版完成',
title=f'{mediainfo.title_year} {meta.season} 已完成订阅',
image=mediainfo.get_message_image()))
elif downloads and meta.type == MediaType.TV:
# 电视剧更新已下载集数
self.__update_subscribe_note(subscribe=subscribe, downloads=downloads)
# 更新订阅剩余集数和时间
self.__update_lack_episodes(lefts=lefts, subscribe=subscribe,
mediainfo=mediainfo, update_date=True)
else:
# 正在洗版,更新资源优先级
logger.info(f'{mediainfo.title_year} 正在洗版,更新资源优先级')
self.subscribeoper.update(subscribe.id, {
"current_priority": priority
})
# 未下载到内容且不完整
logger.info(f'{mediainfo.title_year} 未下载完整,继续订阅 ...')
if meta.type == MediaType.TV:
# 更新订阅剩余集数
self.__update_lack_episodes(lefts=lefts, subscribe=subscribe,
mediainfo=mediainfo, update_date=False)
elif downloads:
# 洗版,下载到了内容,更新资源优先级
self.update_subscribe_priority(subscribe=subscribe, meta=meta,
mediainfo=mediainfo, downloads=downloads)
else:
# 洗版,未下载到内容
logger.info(f'{mediainfo.title_year} 继续洗版 ...')
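The completion logic in `finish_subscribe_or_not` above can be condensed into a pure decision function. A sketch under the same branching (names are illustrative, not the project's API):

```python
from enum import Enum, auto

class MediaType(Enum):
    MOVIE = auto()
    TV = auto()

def should_finish(best_version: bool, mtype: MediaType,
                  no_lefts: bool, downloads: bool, force: bool) -> bool:
    """Mirror of the branch above: a non-best-version subscription
    finishes when a TV show has no remaining episodes, a movie got a
    download, or completion is forced; best-version subscriptions are
    never finished here (priority handling decides that separately)."""
    if best_version:
        return False
    return (no_lefts and mtype == MediaType.TV) \
        or (downloads and mtype == MediaType.MOVIE) \
        or force
```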
def refresh(self):
"""
@@ -389,6 +423,15 @@ class SubscribeChain(ChainBase):
self.torrentschain.refresh(sites=sites)
)
def get_sub_sites(self, subscribe: Subscribe) -> List[int]:
"""
获取订阅中涉及的站点清单
"""
if subscribe.sites:
return json.loads(subscribe.sites)
# 默认站点
return self.systemconfig.get(SystemConfigKey.RssSites) or []
def get_subscribed_sites(self) -> Optional[List[int]]:
"""
获取订阅中涉及的所有站点清单(节约资源)
@@ -401,13 +444,8 @@ class SubscribeChain(ChainBase):
ret_sites = []
# 刷新订阅选中的Rss站点
for subscribe in subscribes:
# 如果有一个订阅没有选择站点,则刷新所有订阅站点
if not subscribe.sites:
return []
# 刷新选中的站点
sub_sites = json.loads(subscribe.sites)
if sub_sites:
ret_sites.extend(sub_sites)
ret_sites.extend(self.get_sub_sites(subscribe))
# 去重
if ret_sites:
ret_sites = list(set(ret_sites))
@@ -416,64 +454,20 @@ class SubscribeChain(ChainBase):
def get_filter_rule(self, subscribe: Subscribe):
"""
获取订阅过滤规则,没有则返回默认规则
获取订阅过滤规则,同时组合默认规则
"""
# 默认过滤规则
if (subscribe.include
or subscribe.exclude
or subscribe.quality
or subscribe.resolution
or subscribe.effect):
return {
"include": subscribe.include,
"exclude": subscribe.exclude,
"quality": subscribe.quality,
"resolution": subscribe.resolution,
"effect": subscribe.effect,
}
# 订阅默认过滤规则
return self.systemconfig.get(SystemConfigKey.DefaultFilterRules) or {}
@staticmethod
def check_filter_rule(torrent_info: TorrentInfo, filter_rule: Dict[str, str]) -> bool:
"""
检查种子是否匹配订阅过滤规则
"""
if not filter_rule:
return True
# 包含
include = filter_rule.get("include")
if include:
if not re.search(r"%s" % include,
f"{torrent_info.title} {torrent_info.description}", re.I):
logger.info(f"{torrent_info.title} 不匹配包含规则 {include}")
return False
# 排除
exclude = filter_rule.get("exclude")
if exclude:
if re.search(r"%s" % exclude,
f"{torrent_info.title} {torrent_info.description}", re.I):
logger.info(f"{torrent_info.title} 匹配排除规则 {exclude}")
return False
# 质量
quality = filter_rule.get("quality")
if quality:
if not re.search(r"%s" % quality, torrent_info.title, re.I):
logger.info(f"{torrent_info.title} 不匹配质量规则 {quality}")
return False
# 分辨率
resolution = filter_rule.get("resolution")
if resolution:
if not re.search(r"%s" % resolution, torrent_info.title, re.I):
logger.info(f"{torrent_info.title} 不匹配分辨率规则 {resolution}")
return False
# 特效
effect = filter_rule.get("effect")
if effect:
if not re.search(r"%s" % effect, torrent_info.title, re.I):
logger.info(f"{torrent_info.title} 不匹配特效规则 {effect}")
return False
return True
default_rule = self.systemconfig.get(SystemConfigKey.DefaultFilterRules) or {}
return {
"include": subscribe.include or default_rule.get("include"),
"exclude": subscribe.exclude or default_rule.get("exclude"),
"quality": subscribe.quality or default_rule.get("quality"),
"resolution": subscribe.resolution or default_rule.get("resolution"),
"effect": subscribe.effect or default_rule.get("effect"),
"tv_size": default_rule.get("tv_size"),
"movie_size": default_rule.get("movie_size"),
"min_seeders": default_rule.get("min_seeders"),
}
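The merge in `get_filter_rule` is a field-wise `or` fallback: per-subscription values win, empty fields fall back to the system default, and the size/seeder limits always come from the default rule. A standalone sketch of the same pattern (dict-based for illustration):

```python
def merge_filter_rule(subscribe: dict, default_rule: dict) -> dict:
    """Per-subscription values override defaults via `or`; empty
    strings and None fall through to default_rule, while size and
    seeder limits only exist at the default level."""
    return {
        "include": subscribe.get("include") or default_rule.get("include"),
        "exclude": subscribe.get("exclude") or default_rule.get("exclude"),
        "quality": subscribe.get("quality") or default_rule.get("quality"),
        "resolution": subscribe.get("resolution") or default_rule.get("resolution"),
        "effect": subscribe.get("effect") or default_rule.get("effect"),
        "tv_size": default_rule.get("tv_size"),
        "movie_size": default_rule.get("movie_size"),
        "min_seeders": default_rule.get("min_seeders"),
    }
```

Note that `or` treats an empty string the same as None, so a deliberately blank subscription field still inherits the default.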
def match(self, torrents: Dict[str, List[Context]]):
"""
@@ -495,9 +489,12 @@ class SubscribeChain(ChainBase):
meta.type = MediaType(subscribe.type)
# 识别媒体信息
mediainfo: MediaInfo = self.recognize_media(meta=meta, mtype=meta.type,
tmdbid=subscribe.tmdbid, doubanid=subscribe.doubanid)
tmdbid=subscribe.tmdbid,
doubanid=subscribe.doubanid,
cache=False)
if not mediainfo:
logger.warn(f'未识别到媒体信息,标题:{subscribe.name}tmdbid{subscribe.tmdbid}doubanid{subscribe.doubanid}')
logger.warn(
f'未识别到媒体信息,标题:{subscribe.name}tmdbid{subscribe.tmdbid}doubanid{subscribe.doubanid}')
continue
# 非洗版
if not subscribe.best_version:
@@ -532,12 +529,12 @@ class SubscribeChain(ChainBase):
# 已存在
if exist_flag:
logger.info(f'{mediainfo.title_year} 媒体库中已存在')
self.finish_subscribe_or_not(subscribe=subscribe, meta=meta, mediainfo=mediainfo)
self.finish_subscribe_or_not(subscribe=subscribe, meta=meta, mediainfo=mediainfo, force=True)
continue
# 电视剧订阅
if meta.type == MediaType.TV:
# 使用订阅的总集数和开始集数替换no_exists
# 整合实际缺失集与订阅开始集结束集
no_exists = self.__get_subscribe_no_exits(
no_exists=no_exists,
mediakey=mediakey,
@@ -546,7 +543,7 @@ class SubscribeChain(ChainBase):
start_episode=subscribe.start_episode,
)
# 打印缺失集信息
# 打印汇总缺失集信息
if no_exists and no_exists.get(mediakey):
no_exists_info = no_exists.get(mediakey).get(subscribe.season)
if no_exists_info:
@@ -558,15 +555,73 @@ class SubscribeChain(ChainBase):
# 遍历缓存种子
_match_context = []
for domain, contexts in torrents.items():
logger.info(f'开始匹配站点:{domain},共缓存了 {len(contexts)} 个种子...')
for context in contexts:
# 检查是否匹配
torrent_meta = context.meta_info
torrent_mediainfo = context.media_info
torrent_info = context.torrent_info
# 比对TMDBID和类型
if torrent_mediainfo.tmdb_id != mediainfo.tmdb_id \
or torrent_mediainfo.type != mediainfo.type:
continue
# 如果识别了媒体信息则比对TMDBID和类型
if torrent_mediainfo.tmdb_id or torrent_mediainfo.douban_id:
if torrent_mediainfo.type != mediainfo.type:
continue
if torrent_mediainfo.tmdb_id \
and torrent_mediainfo.tmdb_id != mediainfo.tmdb_id:
continue
if torrent_mediainfo.douban_id \
and torrent_mediainfo.douban_id != mediainfo.douban_id:
continue
logger.info(f'{mediainfo.title_year} 通过媒体信息ID匹配到资源:{torrent_info.site_name} - {torrent_info.title}')
else:
# 按标题匹配
# 比对种子识别类型
if torrent_meta.type == MediaType.TV and mediainfo.type != MediaType.TV:
continue
# 比对种子在站点中的类型
if torrent_info.category == MediaType.TV.value and mediainfo.type != MediaType.TV:
continue
# 比对年份
if mediainfo.year:
if mediainfo.type == MediaType.TV:
# 剧集年份,每季的年份可能不同
if torrent_meta.year and torrent_meta.year not in [year for year in
mediainfo.season_years.values()]:
continue
else:
# 电影年份上下浮动1年
if torrent_meta.year not in [str(int(mediainfo.year) - 1),
mediainfo.year,
str(int(mediainfo.year) + 1)]:
continue
# 标题匹配标志
title_match = False
# 比对标题和原语种标题
meta_name = StringUtils.clear_upper(torrent_meta.name)
if meta_name in [
StringUtils.clear_upper(mediainfo.title),
StringUtils.clear_upper(mediainfo.original_title)
]:
title_match = True
# 在副标题中判断是否存在标题与原语种标题
if not title_match and torrent_info.description:
subtitle = re.split(r'[\s/|]+', torrent_info.description)
if (StringUtils.is_chinese(mediainfo.title)
and str(mediainfo.title) in subtitle) \
or (StringUtils.is_chinese(mediainfo.original_title)
and str(mediainfo.original_title) in subtitle):
title_match = True
# 比对别名和译名
if not title_match:
for name in mediainfo.names:
if StringUtils.clear_upper(name) == meta_name:
title_match = True
break
if not title_match:
continue
# 标题匹配成功
logger.info(f'{mediainfo.title_year} 通过名称匹配到资源:{torrent_info.site_name} - {torrent_info.title}')
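The title-matching fallback chain above (main title, original-language title, then aliases/translations) compares normalized strings. A sketch, assuming `StringUtils.clear_upper` uppercases and strips separators/punctuation (that behaviour is an assumption, not confirmed by the diff):

```python
import re

def clear_upper(text: str) -> str:
    """Assumed behaviour of StringUtils.clear_upper: uppercase and
    drop separators/punctuation so titles compare loosely."""
    return re.sub(r"[\W_]+", "", text or "").upper()

def title_matches(torrent_name: str, title: str,
                  original_title: str, aliases: list) -> bool:
    """Sketch of the fallback chain above: main title, then the
    original-language title, then aliases and translated names."""
    name = clear_upper(torrent_name)
    candidates = [clear_upper(title), clear_upper(original_title)]
    candidates += [clear_upper(a) for a in aliases]
    return name in candidates
```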
# 优先级过滤规则
if subscribe.best_version:
priority_rule = self.systemconfig.get(SystemConfigKey.BestVersionFilterRules)
@@ -580,12 +635,13 @@ class SubscribeChain(ChainBase):
# 不符合过滤规则
logger.info(f"{torrent_info.title} 不匹配当前过滤规则")
continue
# 不在订阅站点范围的不处理
if subscribe.sites:
sub_sites = json.loads(subscribe.sites)
if sub_sites and torrent_info.site not in sub_sites:
logger.info(f"{torrent_info.title} 不符合 {torrent_mediainfo.title_year} 订阅站点要求")
continue
sub_sites = self.get_sub_sites(subscribe)
if sub_sites and torrent_info.site not in sub_sites:
logger.info(f"{torrent_info.site_name} - {torrent_info.title} 不符合订阅站点要求")
continue
# 如果是电视剧
if torrent_mediainfo.type == MediaType.TV:
# 有多季的不要
@@ -629,39 +685,39 @@ class SubscribeChain(ChainBase):
continue
# 过滤规则
if not self.check_filter_rule(torrent_info=torrent_info,
filter_rule=filter_rule):
if not self.torrenthelper.filter_torrent(torrent_info=torrent_info,
filter_rule=filter_rule,
mediainfo=torrent_mediainfo):
continue
# 洗版时,优先级小于已下载优先级的不要
if subscribe.best_version:
if subscribe.current_priority \
and torrent_info.pri_order < subscribe.current_priority:
logger.info(f'{subscribe.name} 正在洗版,{torrent_info.title} 优先级低于已下载优先级')
continue
# 匹配成功
logger.info(f'{mediainfo.title_year} 匹配成功:{torrent_info.title}')
_match_context.append(context)
# 开始下载
logger.info(f'{mediainfo.title_year} 匹配完成,共匹配到{len(_match_context)}个资源')
if _match_context:
# 批量择优下载
downloads, lefts = self.downloadchain.batch_download(contexts=_match_context, no_exists=no_exists,
username=subscribe.username)
# 更新已经下载的集数
if downloads and meta.type == MediaType.TV:
self.__update_subscribe_note(subscribe=subscribe, downloads=downloads)
if not _match_context:
# 未匹配到资源
logger.info(f'{mediainfo.title_year} 未匹配到符合条件的资源')
self.finish_subscribe_or_not(subscribe=subscribe, meta=meta,
mediainfo=mediainfo, lefts=no_exists)
continue
if downloads and not lefts:
# 判断是否要完成订阅
self.finish_subscribe_or_not(subscribe=subscribe, meta=meta,
mediainfo=mediainfo, downloads=downloads)
else:
if meta.type == MediaType.TV and not subscribe.best_version:
update_date = True if downloads else False
# 未完成下载,计算剩余集数
self.__update_lack_episodes(lefts=lefts, subscribe=subscribe, meta=meta,
mediainfo=mediainfo, update_date=update_date)
else:
if meta.type == MediaType.TV:
# 未搜索到资源,但本地缺失可能有变化,更新订阅剩余集数
self.__update_lack_episodes(lefts=no_exists, subscribe=subscribe,
meta=meta, mediainfo=mediainfo)
# 开始批量择优下载
logger.info(f'{mediainfo.title_year} 匹配完成,共匹配到{len(_match_context)}个资源')
downloads, lefts = self.downloadchain.batch_download(contexts=_match_context,
no_exists=no_exists,
userid=subscribe.username,
username=subscribe.username,
save_path=subscribe.save_path)
# 判断是否要完成订阅
self.finish_subscribe_or_not(subscribe=subscribe, meta=meta, mediainfo=mediainfo,
downloads=downloads, lefts=lefts)
def check(self):
"""
@@ -674,7 +730,7 @@ class SubscribeChain(ChainBase):
return
# 遍历订阅
for subscribe in subscribes:
logger.info(f'开始检查订阅:{subscribe.name} ...')
logger.info(f'开始更新订阅元数据:{subscribe.name} ...')
# 生成元数据
meta = MetaInfo(subscribe.name)
meta.year = subscribe.year
@@ -682,13 +738,16 @@ class SubscribeChain(ChainBase):
meta.type = MediaType(subscribe.type)
# 识别媒体信息
mediainfo: MediaInfo = self.recognize_media(meta=meta, mtype=meta.type,
tmdbid=subscribe.tmdbid, doubanid=subscribe.doubanid)
tmdbid=subscribe.tmdbid,
doubanid=subscribe.doubanid,
cache=False)
if not mediainfo:
logger.warn(f'未识别到媒体信息,标题:{subscribe.name},tmdbid:{subscribe.tmdbid},doubanid:{subscribe.doubanid}')
logger.warn(
f'未识别到媒体信息,标题:{subscribe.name},tmdbid:{subscribe.tmdbid},doubanid:{subscribe.doubanid}')
continue
# 对于电视剧,获取当前季的总集数
episodes = mediainfo.seasons.get(subscribe.season) or []
if len(episodes) > (subscribe.total_episode or 0):
if not subscribe.manual_total_episode and len(episodes):
total_episode = len(episodes)
lack_episode = subscribe.lack_episode + (total_episode - subscribe.total_episode)
logger.info(
@@ -709,7 +768,7 @@ class SubscribeChain(ChainBase):
"total_episode": total_episode,
"lack_episode": lack_episode
})
logger.info(f'订阅 {subscribe.name} 更新完成')
logger.info(f'{subscribe.name} 订阅元数据更新完成')
def __update_subscribe_note(self, subscribe: Subscribe, downloads: List[Context]):
"""
@@ -756,14 +815,15 @@ class SubscribeChain(ChainBase):
return True
return False
def __update_lack_episodes(self, lefts: Dict[int, Dict[int, NotExistMediaInfo]],
def __update_lack_episodes(self, lefts: Dict[Union[int, str], Dict[int, NotExistMediaInfo]],
subscribe: Subscribe,
meta: MetaBase,
mediainfo: MediaInfo,
update_date: bool = False):
"""
更新订阅剩余集数
"""
if not lefts:
return
mediakey = subscribe.tmdbid or subscribe.doubanid
left_seasons = lefts.get(mediakey)
if left_seasons:
@@ -786,9 +846,6 @@ class SubscribeChain(ChainBase):
self.subscribeoper.update(subscribe.id, {
"lack_episode": lack_episode
})
else:
# 判断是否应完成订阅
self.finish_subscribe_or_not(subscribe=subscribe, meta=meta, mediainfo=mediainfo)
def remote_list(self, channel: MessageChannel, userid: Union[str, int] = None):
"""
@@ -806,20 +863,12 @@ class SubscribeChain(ChainBase):
messages = []
for subscribe in subscribes:
if subscribe.type == MediaType.MOVIE.value:
if subscribe.tmdbid:
link = f"https://www.themoviedb.org/movie/{subscribe.tmdbid}"
else:
link = f"https://movie.douban.com/subject/{subscribe.doubanid}"
messages.append(f"{subscribe.id}. [{subscribe.name}{subscribe.year}]({link})")
messages.append(f"{subscribe.id}. {subscribe.name}{subscribe.year}")
else:
if subscribe.tmdbid:
link = f"https://www.themoviedb.org/tv/{subscribe.tmdbid}"
else:
link = f"https://movie.douban.com/subject/{subscribe.doubanid}"
messages.append(f"{subscribe.id}. [{subscribe.name}{subscribe.year}]({link}) "
messages.append(f"{subscribe.id}. {subscribe.name}{subscribe.year}"
f"{subscribe.season}"
f"_{subscribe.total_episode - (subscribe.lack_episode or subscribe.total_episode)}"
f"/{subscribe.total_episode}_")
f"[{subscribe.total_episode - (subscribe.lack_episode or subscribe.total_episode)}"
f"/{subscribe.total_episode}]")
# 发送列表
self.post_message(Notification(channel=channel,
title=title, text='\n'.join(messages), userid=userid))
@@ -850,7 +899,7 @@ class SubscribeChain(ChainBase):
self.remote_list(channel, userid)
@staticmethod
def __get_subscribe_no_exits(no_exists: Dict[int, Dict[int, NotExistMediaInfo]],
def __get_subscribe_no_exits(no_exists: Dict[Union[int, str], Dict[int, NotExistMediaInfo]],
mediakey: Union[str, int],
begin_season: int,
total_episode: int,

View File

@@ -87,7 +87,7 @@ class SystemChain(ChainBase, metaclass=Singleton):
"""
获取最新版本
"""
version_res = RequestUtils(proxies=settings.PROXY).get_res(
version_res = RequestUtils(proxies=settings.PROXY, headers=settings.GITHUB_HEADERS).get_res(
"https://api.github.com/repos/jxxghp/MoviePilot/releases/latest")
if version_res:
ver_json = version_res.json()

View File

@@ -62,28 +62,28 @@ class TmdbChain(ChainBase, metaclass=Singleton):
根据TMDBID查询类似电影
:param tmdbid: TMDBID
"""
return self.run_module("movie_similar", tmdbid=tmdbid)
return self.run_module("tmdb_movie_similar", tmdbid=tmdbid)
def tv_similar(self, tmdbid: int) -> List[dict]:
"""
根据TMDBID查询类似电视剧
:param tmdbid: TMDBID
"""
return self.run_module("tv_similar", tmdbid=tmdbid)
return self.run_module("tmdb_tv_similar", tmdbid=tmdbid)
def movie_recommend(self, tmdbid: int) -> List[dict]:
"""
根据TMDBID查询推荐电影
:param tmdbid: TMDBID
"""
return self.run_module("movie_recommend", tmdbid=tmdbid)
return self.run_module("tmdb_movie_recommend", tmdbid=tmdbid)
def tv_recommend(self, tmdbid: int) -> List[dict]:
"""
根据TMDBID查询推荐电视剧
:param tmdbid: TMDBID
"""
return self.run_module("tv_recommend", tmdbid=tmdbid)
return self.run_module("tmdb_tv_recommend", tmdbid=tmdbid)
def movie_credits(self, tmdbid: int, page: int = 1) -> List[dict]:
"""
@@ -91,7 +91,7 @@ class TmdbChain(ChainBase, metaclass=Singleton):
:param tmdbid: TMDBID
:param page: 页码
"""
return self.run_module("movie_credits", tmdbid=tmdbid, page=page)
return self.run_module("tmdb_movie_credits", tmdbid=tmdbid, page=page)
def tv_credits(self, tmdbid: int, page: int = 1) -> List[dict]:
"""
@@ -99,14 +99,14 @@ class TmdbChain(ChainBase, metaclass=Singleton):
:param tmdbid: TMDBID
:param page: 页码
"""
return self.run_module("tv_credits", tmdbid=tmdbid, page=page)
return self.run_module("tmdb_tv_credits", tmdbid=tmdbid, page=page)
def person_detail(self, person_id: int) -> dict:
"""
根据TMDBID查询演职员详情
:param person_id: 人物ID
"""
return self.run_module("person_detail", person_id=person_id)
return self.run_module("tmdb_person_detail", person_id=person_id)
def person_credits(self, person_id: int, page: int = 1) -> List[dict]:
"""
@@ -114,7 +114,7 @@ class TmdbChain(ChainBase, metaclass=Singleton):
:param person_id: 人物ID
:param page: 页码
"""
return self.run_module("person_credits", person_id=person_id, page=page)
return self.run_module("tmdb_person_credits", person_id=person_id, page=page)
@cached(cache=TTLCache(maxsize=1, ttl=3600))
def get_random_wallpager(self):

View File

@@ -1,4 +1,5 @@
import re
import traceback
from typing import Dict, List, Union
from cachetools import cached, TTLCache
@@ -12,9 +13,10 @@ from app.db.site_oper import SiteOper
from app.db.systemconfig_oper import SystemConfigOper
from app.helper.rss import RssHelper
from app.helper.sites import SitesHelper
from app.helper.torrent import TorrentHelper
from app.log import logger
from app.schemas import Notification
from app.schemas.types import SystemConfigKey, MessageChannel, NotificationType
from app.schemas.types import SystemConfigKey, MessageChannel, NotificationType, MediaType
from app.utils.singleton import Singleton
from app.utils.string import StringUtils
@@ -34,6 +36,7 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
self.rsshelper = RssHelper()
self.systemconfig = SystemConfigOper()
self.mediachain = MediaChain()
self.torrenthelper = TorrentHelper()
def remote_refresh(self, channel: MessageChannel, userid: Union[str, int] = None):
"""
@@ -60,7 +63,16 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
else:
return self.load_cache(self._rss_file) or {}
@cached(cache=TTLCache(maxsize=128, ttl=600))
def clear_torrents(self):
"""
清理种子缓存数据
"""
logger.info(f'开始清理种子缓存数据 ...')
self.remove_cache(self._spider_file)
self.remove_cache(self._rss_file)
logger.info(f'种子缓存数据清理完成')
@cached(cache=TTLCache(maxsize=128, ttl=595))
def browse(self, domain: str) -> List[TorrentInfo]:
"""
浏览站点首页内容,返回种子清单,TTL缓存10分钟
@@ -73,7 +85,7 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
return []
return self.refresh_torrents(site=site)
@cached(cache=TTLCache(maxsize=128, ttl=300))
@cached(cache=TTLCache(maxsize=128, ttl=295))
def rss(self, domain: str) -> List[TorrentInfo]:
"""
获取站点RSS内容,返回种子清单,TTL缓存5分钟
@@ -87,7 +99,7 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
if not site.get("rss"):
logger.error(f'站点 {domain} 未配置RSS地址')
return []
rss_items = self.rsshelper.parse(site.get("rss"), True if site.get("proxy") else False)
rss_items = self.rsshelper.parse(site.get("rss"), True if site.get("proxy") else False, timeout=int(site.get("timeout") or 30))
if rss_items is None:
# rss过期,尝试保留原配置生成新的rss
self.__renew_rss_url(domain=domain, site=site)
@@ -134,6 +146,11 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
# 读取缓存
torrents_cache = self.get_torrents()
# 缓存过滤掉无效种子
for _domain, _torrents in torrents_cache.items():
torrents_cache[_domain] = [_torrent for _torrent in _torrents
if not self.torrenthelper.is_invalid(_torrent.torrent_info.enclosure)]
# 所有站点索引
indexers = self.siteshelper.get_indexers()
# 遍历站点缓存资源
@@ -167,6 +184,10 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
logger.info(f'处理资源:{torrent.title} ...')
# 识别
meta = MetaInfo(title=torrent.title, subtitle=torrent.description)
# 使用站点种子分类,校正类型识别
if meta.type != MediaType.TV \
and torrent.category == MediaType.TV.value:
meta.type = MediaType.TV
# 识别媒体信息
mediainfo: MediaInfo = self.mediachain.recognize_by_meta(meta)
if not mediainfo:
@@ -230,5 +251,5 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
self.post_message(
Notification(mtype=NotificationType.SiteMessage, title=f"站点 {domain} RSS链接已过期"))
except Exception as e:
print(str(e))
logger.error(f"站点 {domain} RSS链接自动获取失败:{str(e)} - {traceback.format_exc()}")
self.post_message(Notification(mtype=NotificationType.SiteMessage, title=f"站点 {domain} RSS链接已过期"))

View File

@@ -259,8 +259,8 @@ class TransferChain(ChainBase):
)
self.post_message(Notification(
mtype=NotificationType.Manual,
title=f"{file_path.name} 未识别到媒体信息,无法入库!\n"
f"回复:```\n/redo {his.id} [tmdbid]|[类型]\n``` 手动识别转移。"
title=f"{file_path.name} 未识别到媒体信息,无法入库!",
text=f"回复:```\n/redo {his.id} [tmdbid]|[类型]\n``` 手动识别转移。"
))
# 计数
processed_num += 1
@@ -481,6 +481,24 @@ class TransferChain(ChainBase):
text=errmsg, userid=userid))
return
@staticmethod
def get_root_path(path: str, type_name: str, category: str) -> Optional[Path]:
"""
计算媒体库目录的根路径
"""
if not path or path == "None":
return None
index = -2
if type_name != '电影':
index = -3
if category:
index -= 1
if '/' in path:
retpath = '/'.join(path.split('/')[:index])
else:
retpath = '\\'.join(path.split('\\')[:index])
return Path(retpath)
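The index arithmetic in `get_root_path` strips the per-title folders to recover the media-library root: movies sit two levels below the root (title/file), TV shows three (title/season/file), and a category folder pushes the root one level further up. A self-contained restatement with a quick check (paths are illustrative):

```python
from pathlib import Path
from typing import Optional

def get_root_path(path: str, type_name: str, category: str) -> Optional[Path]:
    """Same logic as the static method above: trim 2 trailing path
    components for movies, 3 for TV, plus 1 more when a category
    folder is present; handles both / and \\ separated paths."""
    if not path or path == "None":
        return None
    index = -2 if type_name == '电影' else -3
    if category:
        index -= 1
    sep = '/' if '/' in path else '\\'
    return Path(sep.join(path.split(sep)[:index]))
```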
def re_transfer(self, logid: int, mtype: MediaType = None,
mediaid: str = None) -> Tuple[bool, str]:
"""
@@ -498,7 +516,7 @@ class TransferChain(ChainBase):
src_path = Path(history.src)
if not src_path.exists():
return False, f"源目录不存在:{src_path}"
dest_path = Path(history.dest) if history.dest else None
dest_path = self.get_root_path(path=history.dest, type_name=history.type, category=history.category)
# 查询媒体信息
if mtype and mediaid:
mediainfo = self.recognize_media(mtype=mtype, tmdbid=int(mediaid) if str(mediaid).isdigit() else None,

View File

@@ -267,7 +267,10 @@ class Command(metaclass=Singleton):
停止事件处理线程
"""
self._event.set()
self._thread.join()
try:
self._thread.join()
except Exception as e:
logger.error(f"停止事件处理线程出错:{str(e)} - {traceback.format_exc()}")
def get_commands(self):
"""
@@ -315,8 +318,7 @@ class Command(metaclass=Singleton):
else:
logger.info(f"{command.get('description')} 执行完成")
except Exception as err:
logger.error(f"执行命令 {cmd} 出错:{str(err)}")
traceback.print_exc()
logger.error(f"执行命令 {cmd} 出错:{str(err)} - {traceback.format_exc()}")
@staticmethod
def send_plugin_event(etype: EventType, data: dict) -> None:

View File

@@ -1,9 +1,9 @@
import secrets
import sys
from pathlib import Path
from typing import List
from typing import List, Optional
from pydantic import BaseSettings
from pydantic import BaseSettings, validator
from app.utils.system import SystemUtils
@@ -32,17 +32,15 @@ class Settings(BaseSettings):
# 是否开发模式
DEV: bool = False
# 配置文件目录
CONFIG_DIR: str = None
CONFIG_DIR: Optional[str] = None
# 超级管理员
SUPERUSER: str = "admin"
# 超级管理员初始密码
SUPERUSER_PASSWORD: str = "password"
# API密钥,需要更换
API_TOKEN: str = "moviepilot"
# 登录页面电影海报,tmdb/bing
WALLPAPER: str = "tmdb"
# 网络代理 IP:PORT
PROXY_HOST: str = None
PROXY_HOST: Optional[str] = None
# 媒体识别来源 themoviedb/douban
RECOGNIZE_SOURCE: str = "themoviedb"
# 刮削来源 themoviedb/douban
@@ -59,6 +57,8 @@ class Settings(BaseSettings):
TMDB_API_KEY: str = "db55323b8d3e4154498498a75642b381"
# TVDB API Key
TVDB_API_KEY: str = "6b481081-10aa-440c-99f2-21d17717ee02"
# Fanart开关
FANART_ENABLE: bool = True
# Fanart API Key
FANART_API_KEY: str = "d2d31f9ecabea050fc7d68aa3146015f"
# 支持的后缀格式
@@ -68,7 +68,9 @@ class Settings(BaseSettings):
'.m4v', '.flv', '.m2ts', '.strm',
'.tp']
# 支持的字幕文件后缀格式
RMT_SUBEXT: list = ['.srt', '.ass', '.ssa']
RMT_SUBEXT: list = ['.srt', '.ass', '.ssa', '.sup']
# 下载器临时文件后缀
DOWNLOAD_TMPEXT: list = ['.!qB', '.part']
# 支持的音轨文件后缀格式
RMT_AUDIO_TRACK_EXT: list = ['.mka']
# 索引器
@@ -82,27 +84,27 @@ class Settings(BaseSettings):
# 用户认证站点
AUTH_SITE: str = ""
# 交互搜索自动下载用户ID,使用,分割
AUTO_DOWNLOAD_USER: str = None
# 消息通知渠道 telegram/wechat/slack多个通知渠道用,分隔
AUTO_DOWNLOAD_USER: Optional[str] = None
# 消息通知渠道 telegram/wechat/slack/synologychat/vocechat,多个通知渠道用,分隔
MESSAGER: str = "telegram"
# WeChat企业ID
WECHAT_CORPID: str = None
WECHAT_CORPID: Optional[str] = None
# WeChat应用Secret
WECHAT_APP_SECRET: str = None
WECHAT_APP_SECRET: Optional[str] = None
# WeChat应用ID
WECHAT_APP_ID: str = None
WECHAT_APP_ID: Optional[str] = None
# WeChat代理服务器
WECHAT_PROXY: str = "https://qyapi.weixin.qq.com"
# WeChat Token
WECHAT_TOKEN: str = None
WECHAT_TOKEN: Optional[str] = None
# WeChat EncodingAESKey
WECHAT_ENCODING_AESKEY: str = None
WECHAT_ENCODING_AESKEY: Optional[str] = None
# WeChat 管理员
WECHAT_ADMINS: str = None
WECHAT_ADMINS: Optional[str] = None
# Telegram Bot Token
TELEGRAM_TOKEN: str = None
TELEGRAM_TOKEN: Optional[str] = None
# Telegram Chat ID
TELEGRAM_CHAT_ID: str = None
TELEGRAM_CHAT_ID: Optional[str] = None
# Telegram 用户ID,使用,分隔
TELEGRAM_USERS: str = ""
# Telegram 管理员ID,使用,分隔
@@ -117,16 +119,22 @@ class Settings(BaseSettings):
SYNOLOGYCHAT_WEBHOOK: str = ""
# SynologyChat Token
SYNOLOGYCHAT_TOKEN: str = ""
# 下载器 qbittorrent/transmission
# VoceChat地址
VOCECHAT_HOST: str = ""
# VoceChat ApiKey
VOCECHAT_API_KEY: str = ""
# VoceChat 频道ID
VOCECHAT_CHANNEL_ID: str = ""
# 下载器 qbittorrent/transmission,启用多个下载器时使用,分隔,只有第一个会被默认使用
DOWNLOADER: str = "qbittorrent"
# 下载器监控开关
DOWNLOADER_MONITOR: bool = True
# Qbittorrent地址IP:PORT
QB_HOST: str = None
QB_HOST: Optional[str] = None
# Qbittorrent用户名
QB_USER: str = None
QB_USER: Optional[str] = None
# Qbittorrent密码
QB_PASSWORD: str = None
QB_PASSWORD: Optional[str] = None
# Qbittorrent分类自动管理
QB_CATEGORY: bool = False
# Qbittorrent按顺序下载
@@ -134,21 +142,21 @@ class Settings(BaseSettings):
# Qbittorrent忽略队列限制强制继续
QB_FORCE_RESUME: bool = False
# Transmission地址IP:PORT
TR_HOST: str = None
TR_HOST: Optional[str] = None
# Transmission用户名
TR_USER: str = None
TR_USER: Optional[str] = None
# Transmission密码
TR_PASSWORD: str = None
TR_PASSWORD: Optional[str] = None
# 种子标签
TORRENT_TAG: str = "MOVIEPILOT"
# 下载保存目录,容器内映射路径需要一致
DOWNLOAD_PATH: str = None
DOWNLOAD_PATH: Optional[str] = None
# 电影下载保存目录,容器内映射路径需要一致
DOWNLOAD_MOVIE_PATH: str = None
DOWNLOAD_MOVIE_PATH: Optional[str] = None
# 电视剧下载保存目录,容器内映射路径需要一致
DOWNLOAD_TV_PATH: str = None
DOWNLOAD_TV_PATH: Optional[str] = None
# 动漫下载保存目录,容器内映射路径需要一致
DOWNLOAD_ANIME_PATH: str = None
DOWNLOAD_ANIME_PATH: Optional[str] = None
# 下载目录二级分类
DOWNLOAD_CATEGORY: bool = False
# 下载站点字幕
@@ -156,43 +164,51 @@ class Settings(BaseSettings):
# 媒体服务器 emby/jellyfin/plex多个媒体服务器,分割
MEDIASERVER: str = "emby"
# 媒体服务器同步间隔(小时)
MEDIASERVER_SYNC_INTERVAL: int = 6
MEDIASERVER_SYNC_INTERVAL: Optional[int] = 6
# 媒体服务器同步黑名单,多个媒体库名称,分割
MEDIASERVER_SYNC_BLACKLIST: str = None
MEDIASERVER_SYNC_BLACKLIST: Optional[str] = None
# EMBY服务器地址IP:PORT
EMBY_HOST: str = None
EMBY_HOST: Optional[str] = None
# EMBY外网地址http(s)://DOMAIN:PORT未设置时使用EMBY_HOST
EMBY_PLAY_HOST: Optional[str] = None
# EMBY Api Key
EMBY_API_KEY: str = None
EMBY_API_KEY: Optional[str] = None
# Jellyfin服务器地址IP:PORT
JELLYFIN_HOST: str = None
JELLYFIN_HOST: Optional[str] = None
# Jellyfin外网地址http(s)://DOMAIN:PORT未设置时使用JELLYFIN_HOST
JELLYFIN_PLAY_HOST: Optional[str] = None
# Jellyfin Api Key
JELLYFIN_API_KEY: str = None
JELLYFIN_API_KEY: Optional[str] = None
# Plex服务器地址IP:PORT
PLEX_HOST: str = None
PLEX_HOST: Optional[str] = None
# Plex外网地址http(s)://DOMAIN:PORT未设置时使用PLEX_HOST
PLEX_PLAY_HOST: Optional[str] = None
# Plex Token
PLEX_TOKEN: str = None
PLEX_TOKEN: Optional[str] = None
# 转移方式 link/copy/move/softlink
TRANSFER_TYPE: str = "copy"
# CookieCloud是否启动本地服务
COOKIECLOUD_ENABLE_LOCAL: Optional[bool] = False
# CookieCloud服务器地址
COOKIECLOUD_HOST: str = "https://movie-pilot.org/cookiecloud"
# CookieCloud用户KEY
COOKIECLOUD_KEY: str = None
COOKIECLOUD_KEY: Optional[str] = None
# CookieCloud端对端加密密码
COOKIECLOUD_PASSWORD: str = None
COOKIECLOUD_PASSWORD: Optional[str] = None
# CookieCloud同步间隔分钟
COOKIECLOUD_INTERVAL: int = 60 * 24
COOKIECLOUD_INTERVAL: Optional[int] = 60 * 24
# OCR服务器地址
OCR_HOST: str = "https://movie-pilot.org"
# CookieCloud对应的浏览器UA
USER_AGENT = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36 Edg/113.0.1774.57"
# 媒体库目录,多个目录使用,分隔
LIBRARY_PATH: str = None
# 电影媒体库目录名,默认"电影"
LIBRARY_MOVIE_NAME: str = None
# 电视剧媒体库目录名,默认"电视剧"
LIBRARY_TV_NAME: str = None
# 动漫媒体库目录名,默认"电视剧/动漫"
LIBRARY_ANIME_NAME: str = None
LIBRARY_PATH: Optional[str] = None
# 电影媒体库目录名
LIBRARY_MOVIE_NAME: str = "电影"
# 电视剧媒体库目录名
LIBRARY_TV_NAME: str = "电视剧"
# 动漫媒体库目录名,不设置时使用电视剧目录
LIBRARY_ANIME_NAME: Optional[str] = None
# 二级分类
LIBRARY_CATEGORY: bool = True
# 电视剧动漫的分类genre_ids
@@ -211,7 +227,28 @@ class Settings(BaseSettings):
# 大内存模式
BIG_MEMORY_MODE: bool = False
# 插件市场仓库地址,多个地址使用,分隔,地址以/结尾
PLUGIN_MARKET: str = "https://raw.githubusercontent.com/jxxghp/MoviePilot-Plugins/main/"
PLUGIN_MARKET: str = "https://github.com/jxxghp/MoviePilot-Plugins"
# Github token提高请求api限流阈值 ghp_****
GITHUB_TOKEN: Optional[str] = None
# 自动检查和更新站点资源包(站点索引、认证等)
AUTO_UPDATE_RESOURCE: bool = True
# 元数据识别缓存过期时间(小时)
META_CACHE_EXPIRE: int = 0
# 是否启用DOH解析域名
DOH_ENABLE: bool = True
@validator("SUBSCRIBE_RSS_INTERVAL",
"COOKIECLOUD_INTERVAL",
"MEDIASERVER_SYNC_INTERVAL",
"META_CACHE_EXPIRE",
pre=True, always=True)
def convert_int(cls, value):
if not value:
return 0
try:
return int(value)
except (ValueError, TypeError):
raise ValueError(f"{value} 格式错误,不是有效数字!")
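The new validator normalizes interval settings that arrive as environment strings. A standalone sketch of the coercion rule (the English error message here is illustrative):

```python
def convert_int(value):
    """Empty or missing values become 0; numeric strings are parsed; anything else raises."""
    if not value:
        return 0
    try:
        return int(value)
    except (ValueError, TypeError):
        raise ValueError(f"{value} is not a valid number")
```

With `pre=True, always=True`, the validator runs before pydantic's own type coercion and even when the field is unset, which is what lets an empty env var fall through to `0`.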
@property
def INNER_CONFIG_PATH(self):
@@ -242,6 +279,10 @@ class Settings(BaseSettings):
@property
def LOG_PATH(self):
return self.CONFIG_PATH / "logs"
@property
def COOKIE_PATH(self):
return self.CONFIG_PATH / "cookies"
@property
def CACHE_CONF(self):
@@ -252,7 +293,7 @@ class Settings(BaseSettings):
"torrents": 100,
"douban": 512,
"fanart": 512,
"meta": 15 * 24 * 3600
"meta": (self.META_CACHE_EXPIRE or 168) * 3600
}
return {
"tmdb": 256,
@@ -260,7 +301,7 @@ class Settings(BaseSettings):
"torrents": 50,
"douban": 256,
"fanart": 128,
"meta": 7 * 24 * 3600
"meta": (self.META_CACHE_EXPIRE or 72) * 3600
}
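Since `META_CACHE_EXPIRE` defaults to 0, the meta-cache TTL falls back to a mode-specific default (168 hours in big-memory mode, 72 otherwise). The selection above can be sketched as:

```python
def meta_cache_seconds(expire_hours: int, big_memory: bool) -> int:
    """TTL in seconds: a user-set hour count wins; 0/unset falls back to 168h or 72h."""
    default_hours = 168 if big_memory else 72
    return (expire_hours or default_hours) * 3600
```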
@property
@@ -321,6 +362,35 @@ class Settings(BaseSettings):
return Path(self.DOWNLOAD_ANIME_PATH)
return self.SAVE_TV_PATH
@property
def GITHUB_HEADERS(self):
"""
Github请求头
"""
if self.GITHUB_TOKEN:
return {
"Authorization": f"Bearer {self.GITHUB_TOKEN}"
}
return {}
@property
def DEFAULT_DOWNLOADER(self):
"""
默认下载器
"""
if not self.DOWNLOADER:
return None
return self.DOWNLOADER.split(",")[0]
@property
def DOWNLOADERS(self):
"""
下载器列表
"""
if not self.DOWNLOADER:
return []
return self.DOWNLOADER.split(",")
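`DOWNLOADER` now accepts a comma-separated list, and only the first entry is used by default. The two properties reduce to:

```python
def downloaders(setting: str) -> list:
    """All configured downloaders, in the order given."""
    return setting.split(",") if setting else []

def default_downloader(setting: str):
    """The first configured downloader, or None when the setting is empty."""
    return setting.split(",")[0] if setting else None
```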
def __init__(self, **kwargs):
super().__init__(**kwargs)
with self.CONFIG_PATH as p:
@@ -335,6 +405,9 @@ class Settings(BaseSettings):
with self.LOG_PATH as p:
if not p.exists():
p.mkdir(parents=True, exist_ok=True)
with self.COOKIE_PATH as p:
if not p.exists():
p.mkdir(parents=True, exist_ok=True)
class Config:
case_sensitive = True


@@ -6,6 +6,7 @@ from app.core.config import settings
from app.core.meta import MetaBase
from app.core.metainfo import MetaInfo
from app.schemas.types import MediaType
from app.utils.string import StringUtils
@dataclass
@@ -44,6 +45,8 @@ class TorrentInfo:
pubdate: str = None
# 已过时间
date_elapsed: str = None
# 免费截止时间
freedate: str = None
# 上传因子
uploadvolumefactor: float = None
# 下载因子
@@ -54,6 +57,8 @@ class TorrentInfo:
labels: list = field(default_factory=list)
# 种子优先级
pri_order: int = 0
# 种子分类 电影/电视剧
category: str = None
def __setattr__(self, name: str, value: Any):
self.__dict__[name] = value
@@ -90,7 +95,9 @@ class TorrentInfo:
"1.0 1.0": "普通",
"1.0 0.0": "免费",
"2.0 1.0": "2X",
"4.0 1.0": "4X",
"2.0 0.0": "2X免费",
"4.0 0.0": "4X免费",
"1.0 0.5": "50%",
"2.0 0.5": "2X 50%",
"1.0 0.7": "70%",
@@ -105,12 +112,22 @@ class TorrentInfo:
"""
return self.get_free_string(self.uploadvolumefactor, self.downloadvolumefactor)
@property
def freedate_diff(self):
"""
返回免费剩余时间
"""
if not self.freedate:
return ""
return StringUtils.diff_time_str(self.freedate)
def to_dict(self):
"""
返回字典
"""
dicts = asdict(self)
dicts["volume_factor"] = self.volume_factor
dicts["freedate_diff"] = self.freedate_diff
return dicts
@@ -120,6 +137,10 @@ class MediaInfo:
type: MediaType = None
# 媒体标题
title: str = None
# 英文标题
en_title: str = None
# 新加坡标题
sg_title: str = None
# 年份
year: str = None
# 季
@@ -132,6 +153,8 @@ class MediaInfo:
tvdb_id: int = None
# 豆瓣ID
douban_id: str = None
# Bangumi ID
bangumi_id: int = None
# 媒体原语种
original_language: str = None
# 媒体原发行标题
@@ -145,7 +168,7 @@ class MediaInfo:
# LOGO
logo_path: str = None
# 评分
vote_average: int = 0
vote_average: float = 0
# 描述
overview: str = None
# 风格ID
@@ -164,6 +187,8 @@ class MediaInfo:
tmdb_info: dict = field(default_factory=dict)
# 豆瓣 INFO
douban_info: dict = field(default_factory=dict)
# Bangumi INFO
bangumi_info: dict = field(default_factory=dict)
# 导演
directors: List[dict] = field(default_factory=list)
# 演员
@@ -219,6 +244,8 @@ class MediaInfo:
self.set_tmdb_info(self.tmdb_info)
if self.douban_info:
self.set_douban_info(self.douban_info)
if self.bangumi_info:
self.set_bangumi_info(self.bangumi_info)
def __setattr__(self, name: str, value: Any):
self.__dict__[name] = value
@@ -353,6 +380,10 @@ class MediaInfo:
self.genre_ids = info.get('genre_ids') or []
# 原语种
self.original_language = info.get('original_language')
# 英文标题
self.en_title = info.get('en_title')
# 新加坡标题
self.sg_title = info.get('sg_title')
if self.type == MediaType.MOVIE:
# 标题
self.title = info.get('title')
@@ -424,6 +455,9 @@ class MediaInfo:
# 标题
if not self.title:
self.title = info.get("title")
# 英文标题,暂时不支持
if not self.en_title:
self.en_title = info.get('original_title')
# 原语种标题
if not self.original_title:
self.original_title = info.get("original_title")
@@ -495,11 +529,86 @@ class MediaInfo:
self.season_years = {
season: self.year
}
# 风格
if not self.genres:
self.genres = [{"id": genre, "name": genre} for genre in info.get("genres") or []]
# 时长
if not self.runtime and info.get("durations"):
# 查找数字
match = re.search(r'\d+', info.get("durations")[0])
if match:
self.runtime = int(match.group())
# 国家
if not self.production_countries:
self.production_countries = [{"id": country, "name": country} for country in info.get("countries") or []]
# 剩余属性赋值
for key, value in info.items():
if not hasattr(self, key):
setattr(self, key, value)
def set_bangumi_info(self, info: dict):
"""
初始化Bangumi信息
"""
if not info:
return
# 本体
self.bangumi_info = info
# Bangumi ID
self.bangumi_id = info.get("id")
# 类型
if not self.type:
self.type = MediaType.TV
# 标题
if not self.title:
self.title = info.get("name_cn") or info.get("name")
# 原语种标题
if not self.original_title:
self.original_title = info.get("name")
# 识别标题中的季
meta = MetaInfo(self.title)
# 季
if not self.season:
self.season = meta.begin_season
# 评分
if not self.vote_average:
rating = info.get("rating")
if rating:
vote_average = float(rating.get("score"))
else:
vote_average = 0
self.vote_average = vote_average
# 发行日期
if not self.release_date:
self.release_date = info.get("date") or info.get("air_date")
# 年份
if not self.year:
self.year = self.release_date[:4] if self.release_date else None
# 海报
if not self.poster_path:
self.poster_path = info.get("images", {}).get("large")
# 简介
if not self.overview:
self.overview = info.get("summary")
# 别名
if not self.names:
infobox = info.get("infobox")
if infobox:
akas = [item.get("value") for item in infobox if item.get("key") == "别名"]
if akas:
self.names = [aka.get("v") for aka in akas[0]]
# 剧集
if self.type == MediaType.TV and not self.seasons:
meta = MetaInfo(self.title)
season = meta.begin_season or 1
episodes_count = info.get("total_episodes")
if episodes_count:
self.seasons[season] = list(range(1, episodes_count + 1))
# 演员
if not self.actors:
self.actors = info.get("actors") or []
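`set_bangumi_info` derives a full episode list for the detected season from Bangumi's `total_episodes`. A sketch of that mapping, assuming the season defaults to 1 when none is recognized in the title:

```python
def build_seasons(total_episodes, begin_season=None) -> dict:
    """Map the detected (or default) season to its full episode list."""
    if not total_episodes:
        return {}
    season = begin_season or 1
    return {season: list(range(1, total_episodes + 1))}
```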
@property
def title_year(self):
if self.title:
@@ -518,6 +627,8 @@ class MediaInfo:
return "https://www.themoviedb.org/tv/%s" % self.tmdb_id
elif self.douban_id:
return "https://movie.douban.com/subject/%s" % self.douban_id
elif self.bangumi_id:
return "http://bgm.tv/subject/%s" % self.bangumi_id
return ""
@property
@@ -579,6 +690,9 @@ class MediaInfo:
dicts["type"] = self.type.value if self.type else None
dicts["detail_link"] = self.detail_link
dicts["title_year"] = self.title_year
dicts["tmdb_info"] = None
dicts["douban_info"] = None
dicts["bangumi_info"] = None
return dicts
def clear(self):
@@ -587,6 +701,7 @@ class MediaInfo:
"""
self.tmdb_info = {}
self.douban_info = {}
self.bangumi_info = {}
self.seasons = {}
self.genres = []
self.season_info = []


@@ -37,9 +37,13 @@ class EventManager(metaclass=Singleton):
def check(self, etype: EventType):
"""
检查事件是否存在响应
检查事件是否存在响应,去除掉被禁用的事件响应
"""
return etype.value in self._handlers
if etype.value not in self._handlers:
return False
handlers = self._handlers.get(etype.value)
return any([handler for handler in handlers.values()
if handler.__qualname__.split(".")[0] not in self._disabled_handlers])
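`check()` now ignores handlers whose owning class has been disabled, by comparing the class part of each handler's `__qualname__` against the disabled set. A minimal sketch (the plugin class name is hypothetical):

```python
def has_enabled_handler(handlers: dict, disabled: set) -> bool:
    """True if any registered handler belongs to a class not in the disabled set."""
    return any(h.__qualname__.split(".")[0] not in disabled
               for h in handlers.values())

class DemoPlugin:
    def on_event(self, data):
        pass
```

`DemoPlugin.on_event.__qualname__` is `"DemoPlugin.on_event"`, so splitting on the first dot recovers the class name used for enable/disable bookkeeping.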
def add_event_listener(self, etype: EventType, handler: type):
"""
@@ -70,7 +74,7 @@ class EventManager(metaclass=Singleton):
"""
if class_name in self._disabled_handlers:
self._disabled_handlers.remove(class_name)
logger.debug(f"Event Enabled{class_name}")
logger.debug(f"Event Enabled{class_name}")
def send_event(self, etype: EventType, data: dict = None):
"""


@@ -1,9 +1,12 @@
import re
import traceback
import zhconv
import anitopy
from app.core.meta.customization import CustomizationMatcher
from app.core.meta.metabase import MetaBase
from app.core.meta.releasegroup import ReleaseGroupsMatcher
from app.log import logger
from app.utils.string import StringUtils
from app.schemas.types import MediaType
@@ -117,7 +120,7 @@ class MetaAnime(MetaBase):
else:
self.total_episode = 1
except Exception as err:
print(str(err))
logger.debug(f"解析集数失败:{str(err)} - {traceback.format_exc()}")
self.begin_episode = None
self.end_episode = None
self.type = MediaType.TV
@@ -162,7 +165,7 @@ class MetaAnime(MetaBase):
if not self.type:
self.type = MediaType.TV
except Exception as e:
print(str(e))
logger.error(f"解析动漫信息失败:{str(e)} - {traceback.format_exc()}")
@staticmethod
def __prepare_title(title: str):


@@ -1,9 +1,11 @@
import traceback
from dataclasses import dataclass, asdict
from typing import Union, Optional, List, Self
import cn2an
import regex as re
from app.log import logger
from app.utils.string import StringUtils
from app.schemas.types import MediaType
@@ -67,8 +69,8 @@ class MetaBase(object):
_subtitle_flag = False
_subtitle_season_re = r"(?<![全共]\s*)[第\s]+([0-9一二三四五六七八九十S\-]+)\s*季(?!\s*[全共])"
_subtitle_season_all_re = r"[全共]\s*([0-9一二三四五六七八九十]+)\s*季|([0-9一二三四五六七八九十]+)\s*季\s*全"
_subtitle_episode_re = r"(?<![全共]\s*)[第\s]+([0-9一二三四五六七八九十百零EP\-]+)\s*[集话話期](?!\s*[全共])"
_subtitle_episode_all_re = r"([0-9一二三四五六七八九十百零]+)\s*集\s*全|[全共]\s*([0-9一二三四五六七八九十百零]+)\s*[集话話期]"
_subtitle_episode_re = r"(?<![全共]\s*)[第\s]+([0-9一二三四五六七八九十百零EP\-]+)\s*[集话話期](?!\s*[全共])"
_subtitle_episode_all_re = r"([0-9一二三四五六七八九十百零]+)\s*集\s*全|[全共]\s*([0-9一二三四五六七八九十百零]+)\s*[集话話期]"
def __init__(self, title: str, subtitle: str = None, isfile: bool = False):
if not title:
@@ -108,7 +110,7 @@ class MetaBase(object):
if not title_text:
return
title_text = f" {title_text} "
if re.search(r'[全第季集话話期]', title_text, re.IGNORECASE):
if re.search(r'[全第季集话話期]', title_text, re.IGNORECASE):
# 第x季
season_str = re.search(r'%s' % self._subtitle_season_re, title_text, re.IGNORECASE)
if season_str:
@@ -127,7 +129,11 @@ class MetaBase(object):
else:
begin_season = int(cn2an.cn2an(seasons, mode='smart'))
except Exception as err:
print(str(err))
logger.debug(f'识别季失败:{str(err)} - {traceback.format_exc()}')
return
if begin_season and begin_season > 100:
return
if end_season and end_season > 100:
return
if self.begin_season is None and isinstance(begin_season, int):
self.begin_season = begin_season
@@ -158,7 +164,11 @@ class MetaBase(object):
else:
begin_episode = int(cn2an.cn2an(episodes, mode='smart'))
except Exception as err:
print(str(err))
logger.debug(f'识别集失败:{str(err)} - {traceback.format_exc()}')
return
if begin_episode and begin_episode >= 10000:
return
if end_episode and end_episode >= 10000:
return
if self.begin_episode is None and isinstance(begin_episode, int):
self.begin_episode = begin_episode
@@ -181,7 +191,7 @@ class MetaBase(object):
try:
self.total_episode = int(cn2an.cn2an(episode_all.strip(), mode='smart'))
except Exception as err:
print(str(err))
logger.debug(f'识别集失败:{str(err)} - {traceback.format_exc()}')
return
self.begin_episode = None
self.end_episode = None
@@ -197,7 +207,7 @@ class MetaBase(object):
try:
self.total_season = int(cn2an.cn2an(season_all.strip(), mode='smart'))
except Exception as err:
print(str(err))
logger.debug(f'识别季失败:{str(err)} - {traceback.format_exc()}')
return
self.begin_season = 1
self.end_season = self.total_season


@@ -34,6 +34,7 @@ class MetaVideo(MetaBase):
_name_no_begin_re = r"^\[.+?]"
_name_no_chinese_re = r".*版|.*字幕"
_name_se_words = ['共', '第', '季', '集', '话', '話', '期']
_name_movie_words = ['剧场版', '劇場版', '电影版', '電影版']
_name_nostring_re = r"^PTS|^JADE|^AOD|^CHC|^[A-Z]{1,4}TV[\-0-9UVHDK]*" \
r"|HBO$|\s+HBO|\d{1,2}th|\d{1,2}bit|NETFLIX|AMAZON|IMAX|^3D|\s+3D|^BBC\s+|\s+BBC|BBC$|DISNEY\+?|XXX|\s+DC$" \
r"|[第\s共]+[0-9一二三四五六七八九十\-\s]+季" \
@@ -182,8 +183,9 @@ class MetaVideo(MetaBase):
if not self.cn_name:
self.cn_name = token
elif not self._stop_cnname_flag:
if not re.search("%s" % self._name_no_chinese_re, token, flags=re.IGNORECASE) \
and not re.search("%s" % self._name_se_words, token, flags=re.IGNORECASE):
if re.search("%s" % self._name_movie_words, token, flags=re.IGNORECASE) \
or (not re.search("%s" % self._name_no_chinese_re, token, flags=re.IGNORECASE)
and not re.search("%s" % self._name_se_words, token, flags=re.IGNORECASE)):
self.cn_name = "%s %s" % (self.cn_name, token)
self._stop_cnname_flag = True
else:
@@ -268,7 +270,7 @@ class MetaVideo(MetaBase):
self.tokens.get_next()
self._last_token_type = "part"
self._continue_flag = False
self._stop_name_flag = False
# self._stop_name_flag = False
def __init_year(self, token: str):
if not self.name:


@@ -1,9 +1,11 @@
import traceback
from typing import List, Tuple
import cn2an
import regex as re
from app.db.systemconfig_oper import SystemConfigOper
from app.log import logger
from app.schemas.types import SystemConfigKey
from app.utils.singleton import Singleton
@@ -24,7 +26,7 @@ class WordsMatcher(metaclass=Singleton):
# 读取自定义识别词
words: List[str] = self.systemconfig.get(SystemConfigKey.CustomIdentifiers) or []
for word in words:
if not word:
if not word or word.find('#') == 0:
continue
try:
if word.count(" => ") and word.count(" && ") and word.count(" >> ") and word.count(" <> "):
@@ -62,7 +64,7 @@ class WordsMatcher(metaclass=Singleton):
appley_words.append(word)
except Exception as err:
print(str(err))
logger.error(f"自定义识别词预处理标题失败:{str(err)} - {traceback.format_exc()}")
return title, appley_words
@@ -77,7 +79,7 @@ class WordsMatcher(metaclass=Singleton):
else:
return re.sub(r'%s' % replaced, r'%s' % replace, title), "", True
except Exception as err:
print(str(err))
logger.error(f"自定义识别词正则替换失败:{str(err)} - {traceback.format_exc()}")
return title, str(err), False
@staticmethod
@@ -129,5 +131,5 @@ class WordsMatcher(metaclass=Singleton):
title = re.sub(episode_offset_re, r'%s' % episode_num[1], title)
return title, "", True
except Exception as err:
print(str(err))
logger.error(f"自定义识别词集数偏移失败:{str(err)} - {traceback.format_exc()}")
return title, str(err), False


@@ -60,12 +60,16 @@ def MetaInfoPath(path: Path) -> MetaBase:
根据路径识别元数据
:param path: 路径
"""
# 上级目录元数据
dir_meta = MetaInfo(title=path.parent.name)
# 文件元数据,不包含后缀
file_meta = MetaInfo(title=path.stem)
# 上级目录元数据
dir_meta = MetaInfo(title=path.parent.name)
# 合并元数据
file_meta.merge(dir_meta)
# 上上级目录元数据
root_meta = MetaInfo(title=path.parent.parent.name)
# 合并元数据
file_meta.merge(root_meta)
return file_meta
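`MetaInfoPath` now also merges metadata from the grandparent directory, with the file's own values taking precedence. Assuming `merge` fills only missing fields (MetaInfo internals not shown), the precedence can be sketched with plain dicts:

```python
def merge_meta(primary: dict, fallback: dict) -> dict:
    """Fill only the keys the primary metadata is missing."""
    merged = dict(primary)
    for key, value in fallback.items():
        if merged.get(key) is None and value is not None:
            merged[key] = value
    return merged

# file meta wins; parent, then grandparent, only fill the gaps
file_meta = {"season": None, "episode": 3}
dir_meta = {"season": 2, "episode": None}
root_meta = {"season": 1, "title": "Show"}
result = merge_meta(merge_meta(file_meta, dir_meta), root_meta)
```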
@@ -120,7 +124,7 @@ def find_metainfo(title: str) -> Tuple[str, dict]:
if doubanid and doubanid[0].isdigit():
metainfo['doubanid'] = doubanid[0]
# 查找媒体类型
mtype = re.findall(r'(?<=type=)\d+', result)
mtype = re.findall(r'(?<=type=)\w+', result)
if mtype:
match mtype[0]:
case "movie":
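The lookbehind was changed from `\d+` to `\w+` because the `type=` parameter carries a word ("movie"/"tv"), not a number; the old pattern never matched. The difference (the surrounding bracket syntax here is illustrative):

```python
import re

text = "movie title {[tmdbid=123;type=movie]}"
old_matches = re.findall(r'(?<=type=)\d+', text)  # digits only: "movie" is missed
new_matches = re.findall(r'(?<=type=)\w+', text)  # word chars: "movie" is captured
```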


@@ -1,4 +1,4 @@
from typing import Generator, Optional
from typing import Generator, Optional, Tuple
from app.core.config import settings
from app.helper.module import ModuleHelper
@@ -51,6 +51,25 @@ class ModuleManager(metaclass=Singleton):
if hasattr(module, "stop"):
module.stop()
def reload(self):
"""
重新加载所有模块
"""
self.stop()
self.load_modules()
def test(self, modleid: str) -> Tuple[bool, str]:
"""
测试模块
"""
if modleid not in self._running_modules:
return False, "模块未加载,请检查参数设置"
module = self._running_modules[modleid]
if hasattr(module, "test") \
and ObjectUtils.check_method(getattr(module, "test")):
return module.test()
return True, "模块不支持测试"
@staticmethod
def check_setting(setting: Optional[tuple]) -> bool:
"""


@@ -12,6 +12,7 @@ from app.schemas.types import SystemConfigKey
from app.utils.object import ObjectUtils
from app.utils.singleton import Singleton
from app.utils.string import StringUtils
from app.utils.system import SystemUtils
class PluginManager(metaclass=Singleton):
@@ -71,9 +72,12 @@ class PluginManager(metaclass=Singleton):
plugin_obj.init_plugin(self.get_plugin_config(plugin_id))
# 存储运行实例
self._running_plugins[plugin_id] = plugin_obj
logger.info(f"Plugin Loaded{plugin_id}")
# 设置事件注册状态可用
eventmanager.enable_events_hander(plugin_id)
logger.info(f"加载插件:{plugin_id} 版本:{plugin_obj.plugin_version}")
# 启用的插件才设置事件注册状态可用
if plugin_obj.get_state():
eventmanager.enable_events_hander(plugin_id)
else:
eventmanager.disable_events_hander(plugin_id)
except Exception as err:
logger.error(f"加载插件 {plugin_id} 出错:{str(err)} - {traceback.format_exc()}")
@@ -84,6 +88,12 @@ class PluginManager(metaclass=Singleton):
if not self._running_plugins.get(plugin_id):
return
self._running_plugins[plugin_id].init_plugin(conf)
if self._running_plugins[plugin_id].get_state():
# 设置启用的插件事件注册状态可用
eventmanager.enable_events_hander(plugin_id)
else:
# 设置事件状态为不可用
eventmanager.disable_events_hander(plugin_id)
def stop(self):
"""
@@ -105,6 +115,8 @@ class PluginManager(metaclass=Singleton):
"""
安装本地不存在的在线插件
"""
if SystemUtils.is_frozen():
return
logger.info("开始安装在线插件...")
# 已安装插件
install_plugins = self.systemconfig.get(SystemConfigKey.UserInstalledPlugins) or []
@@ -134,7 +146,11 @@ class PluginManager(metaclass=Singleton):
"""
if not self._plugins.get(pid):
return {}
return self.systemconfig.get(self._config_key % pid) or {}
conf = self.systemconfig.get(self._config_key % pid)
if conf:
# 去掉空Key
return {k: v for k, v in conf.items() if k}
return {}
def save_plugin_config(self, pid: str, conf: dict) -> bool:
"""
@@ -144,6 +160,14 @@ class PluginManager(metaclass=Singleton):
return False
return self.systemconfig.set(self._config_key % pid, conf)
def delete_plugin_config(self, pid: str) -> bool:
"""
删除插件配置
"""
if not self._plugins.get(pid):
return False
return self.systemconfig.delete(self._config_key % pid)
def get_plugin_form(self, pid: str) -> Tuple[List[dict], Dict[str, Any]]:
"""
获取插件表单
@@ -202,6 +226,36 @@ class PluginManager(metaclass=Singleton):
ret_apis.extend(apis)
return ret_apis
def get_plugin_services(self) -> List[Dict[str, Any]]:
"""
获取插件服务
[{
"id": "服务ID",
"name": "服务名称",
"trigger": "触发器cron、interval、date、CronTrigger.from_crontab()",
"func": self.xxx,
"kwargs": {} # 定时器参数
}]
"""
ret_services = []
for pid, plugin in self._running_plugins.items():
if hasattr(plugin, "get_service") \
and ObjectUtils.check_method(plugin.get_service):
services = plugin.get_service()
if services:
ret_services.extend(services)
return ret_services
def get_plugin_attr(self, pid: str, attr: str) -> Any:
"""
获取插件属性
"""
if not self._running_plugins.get(pid):
return None
if not hasattr(self._running_plugins[pid], attr):
return None
return getattr(self._running_plugins[pid], attr)
def run_plugin_method(self, pid: str, method: str, *args, **kwargs) -> Any:
"""
运行插件方法
@@ -218,6 +272,12 @@ class PluginManager(metaclass=Singleton):
"""
return list(self._plugins.keys())
def get_running_plugin_ids(self) -> List[str]:
"""
获取所有运行态插件ID
"""
return list(self._running_plugins.keys())
def get_online_plugins(self) -> List[dict]:
"""
获取所有在线插件信息
@@ -232,6 +292,8 @@ class PluginManager(metaclass=Singleton):
markets = settings.PLUGIN_MARKET.split(",")
for market in markets:
online_plugins = self.pluginhelper.get_plugins(market) or {}
if not online_plugins:
logger.warn(f"获取插件库失败 {market}")
for pid, plugin in online_plugins.items():
# 运行态插件
plugin_obj = self._running_plugins.get(pid)
@@ -285,9 +347,6 @@ class PluginManager(metaclass=Singleton):
# 图标
if plugin.get("icon"):
conf.update({"plugin_icon": plugin.get("icon")})
# 主题色
if plugin.get("color"):
conf.update({"plugin_color": plugin.get("color")})
# 作者
if plugin.get("author"):
conf.update({"plugin_author": plugin.get("author")})
@@ -355,9 +414,6 @@ class PluginManager(metaclass=Singleton):
# 图标
if hasattr(plugin, "plugin_icon"):
conf.update({"plugin_icon": plugin.plugin_icon})
# 主题色
if hasattr(plugin, "plugin_color"):
conf.update({"plugin_color": plugin.plugin_color})
# 作者
if hasattr(plugin, "plugin_author"):
conf.update({"plugin_author": plugin.plugin_author})


@@ -3,12 +3,13 @@ import hashlib
import hmac
import json
import os
import traceback
from datetime import datetime, timedelta
from typing import Any, Union, Optional
from typing import Any, Union, Optional, Annotated
import jwt
from Crypto.Cipher import AES
from Crypto.Util.Padding import pad
from fastapi import HTTPException, status, Depends
from fastapi import HTTPException, status, Depends, Header
from fastapi.security import OAuth2PasswordBearer
from passlib.context import CryptContext
@@ -16,6 +17,8 @@ from app import schemas
from app.core.config import settings
from cryptography.fernet import Fernet
from app.log import logger
pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")
ALGORITHM = "HS256"
@@ -26,7 +29,8 @@ reusable_oauth2 = OAuth2PasswordBearer(
def create_access_token(
subject: Union[str, Any], expires_delta: timedelta = None
userid: Union[str, Any], username: str, super_user: bool = False,
expires_delta: timedelta = None
) -> str:
if expires_delta:
expire = datetime.utcnow() + expires_delta
@@ -34,7 +38,12 @@ def create_access_token(
expire = datetime.utcnow() + timedelta(
minutes=settings.ACCESS_TOKEN_EXPIRE_MINUTES
)
to_encode = {"exp": expire, "sub": str(subject)}
to_encode = {
"exp": expire,
"sub": str(userid),
"username": username,
"super_user": super_user
}
encoded_jwt = jwt.encode(to_encode, settings.SECRET_KEY, algorithm=ALGORITHM)
return encoded_jwt
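The token payload now carries `username` and `super_user` alongside the subject and expiry. Building the claims (JWT encoding omitted) can be sketched as:

```python
from datetime import datetime, timedelta

def build_claims(userid, username, super_user=False, expires_minutes=60):
    """Claims dict matching the new create_access_token payload."""
    expire = datetime.utcnow() + timedelta(minutes=expires_minutes)
    return {
        "exp": expire,
        "sub": str(userid),
        "username": username,
        "super_user": super_user,
    }
```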
@@ -59,11 +68,11 @@ def get_token(token: str = None) -> str:
return token
def get_apikey(apikey: str = None) -> str:
def get_apikey(apikey: str = None, x_api_key: Annotated[str | None, Header()] = None) -> str:
"""
从请求URL中获取apikey
"""
return apikey
return apikey or x_api_key
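`get_apikey` now also accepts the key via an `X-Api-Key` header (FastAPI maps the `x_api_key` parameter to that header name), falling back to it when the query parameter is absent. The precedence is simply:

```python
def resolve_apikey(apikey=None, x_api_key=None):
    """Query parameter wins; otherwise the X-Api-Key header value is used."""
    return apikey or x_api_key
```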
def verify_uri_token(token: str = Depends(get_token)) -> str:
@@ -106,7 +115,7 @@ def decrypt(data: bytes, key: bytes) -> Optional[bytes]:
try:
return fernet.decrypt(data)
except Exception as e:
print(str(e))
logger.error(f"解密失败:{str(e)} - {traceback.format_exc()}")
return None


@@ -1,7 +1,10 @@
from typing import Any, Self, List
from typing import Tuple, Optional, Generator
from sqlalchemy import create_engine, QueuePool
from sqlalchemy.orm import sessionmaker, Session, scoped_session
from sqlalchemy import inspect
from sqlalchemy.orm import declared_attr
from sqlalchemy.orm import sessionmaker, Session, scoped_session, as_declarative
from app.core.config import settings
@@ -135,6 +138,52 @@ def db_query(func):
return wrapper
@as_declarative()
class Base:
id: Any
__name__: str
@db_update
def create(self, db: Session):
db.add(self)
@classmethod
@db_query
def get(cls, db: Session, rid: int) -> Self:
return db.query(cls).filter(cls.id == rid).first()
@db_update
def update(self, db: Session, payload: dict):
payload = {k: v for k, v in payload.items() if v is not None}
for key, value in payload.items():
setattr(self, key, value)
if inspect(self).detached:
db.add(self)
@classmethod
@db_update
def delete(cls, db: Session, rid):
db.query(cls).filter(cls.id == rid).delete()
@classmethod
@db_update
def truncate(cls, db: Session):
db.query(cls).delete()
@classmethod
@db_query
def list(cls, db: Session) -> List[Self]:
result = db.query(cls).all()
return list(result)
def to_dict(self):
return {c.name: getattr(self, c.name, None) for c in self.__table__.columns}
@declared_attr
def __tablename__(self) -> str:
return self.__name__.lower()
class DbOper:
"""
数据库操作基类


@@ -1,4 +1,3 @@
from pathlib import Path
from typing import List
from app.db import DbOper
@@ -10,12 +9,12 @@ class DownloadHistoryOper(DbOper):
下载历史管理
"""
def get_by_path(self, path: Path) -> DownloadHistory:
def get_by_path(self, path: str) -> DownloadHistory:
"""
按路径查询下载记录
:param path: 数据key
"""
return DownloadHistory.get_by_path(self._db, str(path))
return DownloadHistory.get_by_path(self._db, path)
def get_by_hash(self, download_hash: str) -> DownloadHistory:
"""
@@ -57,7 +56,14 @@ class DownloadHistoryOper(DbOper):
按fullpath查询下载文件记录
:param fullpath: 数据key
"""
return DownloadFiles.get_by_fullpath(self._db, fullpath)
return DownloadFiles.get_by_fullpath(self._db, fullpath=fullpath, all_files=False)
def get_files_by_fullpath(self, fullpath: str) -> List[DownloadFiles]:
"""
按fullpath查询下载文件记录
:param fullpath: 数据key
"""
return DownloadFiles.get_by_fullpath(self._db, fullpath=fullpath, all_files=True)
def get_files_by_savepath(self, fullpath: str) -> List[DownloadFiles]:
"""
@@ -78,7 +84,7 @@ class DownloadHistoryOper(DbOper):
按fullpath查询下载文件记录hash
:param fullpath: 数据key
"""
fileinfo: DownloadFiles = DownloadFiles.get_by_fullpath(self._db, fullpath)
fileinfo: DownloadFiles = DownloadFiles.get_by_fullpath(self._db, fullpath=fullpath, all_files=False)
if fileinfo:
return fileinfo.download_hash
return ""
@@ -115,3 +121,21 @@ class DownloadHistoryOper(DbOper):
return DownloadHistory.list_by_user_date(db=self._db,
date=date,
username=username)
def list_by_date(self, date: str, type: str, tmdbid: str, seasons: str = None) -> List[DownloadHistory]:
"""
查询某时间之后的下载历史
"""
return DownloadHistory.list_by_date(db=self._db,
date=date,
type=type,
tmdbid=tmdbid,
seasons=seasons)
def list_by_type(self, mtype: str, days: int = 7) -> List[DownloadHistory]:
"""
获取指定类型的下载历史
"""
return DownloadHistory.list_by_type(db=self._db,
mtype=mtype,
days=days)


@@ -1,14 +1,13 @@
import importlib
from pathlib import Path
import random
import string
from alembic.command import upgrade
from alembic.config import Config
from app.core.config import settings
from app.core.security import get_password_hash
from app.db import Engine, SessionFactory
from app.db.models import Base
from app.db.models.user import User
from app.db import Engine, SessionFactory, Base
from app.db.models import *
from app.log import logger
@@ -16,21 +15,29 @@ def init_db():
"""
初始化数据库
"""
# 导入模块,避免建表缺失
for module in Path(__file__).with_name("models").glob("*.py"):
importlib.import_module(f"app.db.models.{module.stem}")
# 全量建表
Base.metadata.create_all(bind=Engine)
def init_super_user():
"""
初始化超级管理员
"""
# 初始化超级管理员
with SessionFactory() as db:
user = User.get_by_name(db=db, name=settings.SUPERUSER)
if not user:
user = User(
_user = User.get_by_name(db=db, name=settings.SUPERUSER)
if not _user:
# 定义包含数字、大小写字母的字符集合
characters = string.ascii_letters + string.digits
# 生成随机密码
random_password = ''.join(random.choice(characters) for _ in range(16))
logger.info(f"【超级管理员初始密码】{random_password} 请登录系统后在设定中修改。 注:该密码只会显示一次,请注意保存。")
_user = User(
name=settings.SUPERUSER,
hashed_password=get_password_hash(settings.SUPERUSER_PASSWORD),
hashed_password=get_password_hash(random_password),
is_superuser=True,
)
user.create(db)
_user.create(db)
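The superuser password is no longer taken from `SUPERUSER_PASSWORD`; it is generated once at first start and printed to the log. The generation used above is:

```python
import random
import string

def random_password(length: int = 16) -> str:
    """Random characters drawn from letters and digits, as in init_super_user."""
    characters = string.ascii_letters + string.digits
    return ''.join(random.choice(characters) for _ in range(length))
```

For a security-sensitive credential, `secrets.choice` would be the stronger source of randomness, though the commit uses `random`.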
def update_db():


@@ -25,7 +25,7 @@ class MediaServerOper(DbOper):
return True
return False
def empty(self, server: str):
def empty(self, server: Optional[str] = None):
"""
清空媒体服务器数据
"""

app/db/message_oper.py (new file)

@@ -0,0 +1,61 @@
import json
import time
from typing import Optional, Union
from sqlalchemy.orm import Session
from app.db import DbOper
from app.db.models.message import Message
from app.schemas import MessageChannel, NotificationType
class MessageOper(DbOper):
"""
消息数据管理
"""
def __init__(self, db: Session = None):
super().__init__(db)
def add(self,
channel: MessageChannel = None,
mtype: NotificationType = None,
title: str = None,
text: str = None,
image: str = None,
link: str = None,
userid: str = None,
action: int = 1,
note: Union[list, dict] = None,
**kwargs):
"""
新增消息数据
:param channel: 消息渠道
:param mtype: 消息类型
:param title: 标题
:param text: 文本内容
:param image: 图片
:param link: 链接
:param userid: 用户ID
:param action: 消息方向:0-接收消息,1-发送消息
:param note: 附件json
"""
kwargs.update({
"channel": channel.value if channel else '',
"mtype": mtype.value if mtype else '',
"title": title,
"text": text,
"image": image,
"link": link,
"userid": userid,
"action": action,
"reg_time": time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()),
"note": json.dumps(note) if note else ''
})
Message(**kwargs).create(self._db)
def list_by_page(self, page: int = 1, count: int = 30) -> Optional[str]:
"""
分页获取消息数据
"""
return Message.list_by_page(self._db, page, count)


@@ -1,52 +1,9 @@
from typing import Any, Self, List
from sqlalchemy import inspect
from sqlalchemy.orm import as_declarative, declared_attr, Session
from app.db import db_update, db_query
@as_declarative()
class Base:
id: Any
__name__: str
@db_update
def create(self, db: Session):
db.add(self)
@classmethod
@db_query
def get(cls, db: Session, rid: int) -> Self:
return db.query(cls).filter(cls.id == rid).first()
@db_update
def update(self, db: Session, payload: dict):
payload = {k: v for k, v in payload.items() if v is not None}
for key, value in payload.items():
setattr(self, key, value)
if inspect(self).detached:
db.add(self)
@classmethod
@db_update
def delete(cls, db: Session, rid):
db.query(cls).filter(cls.id == rid).delete()
@classmethod
@db_update
def truncate(cls, db: Session):
db.query(cls).delete()
@classmethod
@db_query
def list(cls, db: Session) -> List[Self]:
result = db.query(cls).all()
return list(result)
def to_dict(self):
return {c.name: getattr(self, c.name, None) for c in self.__table__.columns}
@declared_attr
def __tablename__(self) -> str:
return self.__name__.lower()
from .downloadhistory import DownloadHistory, DownloadFiles
from .mediaserver import MediaServerItem
from .plugindata import PluginData
from .site import Site
from .siteicon import SiteIcon
from .subscribe import Subscribe
from .systemconfig import SystemConfig
from .transferhistory import TransferHistory
from .user import User


@@ -1,8 +1,9 @@
import time
from sqlalchemy import Column, Integer, String, Sequence
from sqlalchemy.orm import Session
from app.db import db_query
from app.db.models import Base, db_update
from app.db import db_query, db_update, Base
class DownloadHistory(Base):
@@ -123,6 +124,34 @@ class DownloadHistory(Base):
DownloadHistory.id.desc()).all()
return list(result)
@staticmethod
@db_query
def list_by_date(db: Session, date: str, type: str, tmdbid: str, seasons: str = None):
"""
查询某时间之后的下载历史
"""
if seasons:
return db.query(DownloadHistory).filter(DownloadHistory.date > date,
DownloadHistory.type == type,
DownloadHistory.tmdbid == tmdbid,
DownloadHistory.seasons == seasons).order_by(
DownloadHistory.id.desc()).all()
else:
return db.query(DownloadHistory).filter(DownloadHistory.date > date,
DownloadHistory.type == type,
DownloadHistory.tmdbid == tmdbid).order_by(
DownloadHistory.id.desc()).all()
@staticmethod
@db_query
def list_by_type(db: Session, mtype: str, days: int):
result = db.query(DownloadHistory) \
.filter(DownloadHistory.type == mtype,
DownloadHistory.date >= time.strftime("%Y-%m-%d %H:%M:%S",
time.localtime(time.time() - 86400 * int(days)))
).all()
return list(result)
class DownloadFiles(Base):
"""
@@ -157,9 +186,13 @@ class DownloadFiles(Base):
@staticmethod
@db_query
def get_by_fullpath(db: Session, fullpath: str):
return db.query(DownloadFiles).filter(DownloadFiles.fullpath == fullpath).order_by(
DownloadFiles.id.desc()).first()
def get_by_fullpath(db: Session, fullpath: str, all_files: bool = False):
if not all_files:
return db.query(DownloadFiles).filter(DownloadFiles.fullpath == fullpath).order_by(
DownloadFiles.id.desc()).first()
else:
return db.query(DownloadFiles).filter(DownloadFiles.fullpath == fullpath).order_by(
DownloadFiles.id.desc()).all()
@staticmethod
@db_query
@@ -167,6 +200,7 @@ class DownloadFiles(Base):
result = db.query(DownloadFiles).filter(DownloadFiles.savepath == savepath).all()
return list(result)
@staticmethod
@db_update
def delete_by_fullpath(db: Session, fullpath: str):
db.query(DownloadFiles).filter(DownloadFiles.fullpath == fullpath,


@@ -1,15 +1,15 @@
from datetime import datetime
from typing import Optional
from sqlalchemy import Column, Integer, String, Sequence
from sqlalchemy.orm import Session
from app.db import db_query
from app.db.models import Base, db_update
from app.db import db_query, db_update, Base
class MediaServerItem(Base):
"""
站点
媒体服务器媒体条目
"""
id = Column(Integer, Sequence('id'), primary_key=True, index=True)
# 服务器类型
@@ -48,8 +48,11 @@ class MediaServerItem(Base):
@staticmethod
@db_update
def empty(db: Session, server: str):
db.query(MediaServerItem).filter(MediaServerItem.server == server).delete()
def empty(db: Session, server: Optional[str] = None):
if server is None:
db.query(MediaServerItem).delete()
else:
db.query(MediaServerItem).filter(MediaServerItem.server == server).delete()
@staticmethod
@db_query

app/db/models/message.py Normal file

@@ -0,0 +1,39 @@
from sqlalchemy import Column, Integer, String, Sequence
from sqlalchemy.orm import Session
from app.db import db_query, Base
class Message(Base):
"""
消息表
"""
id = Column(Integer, Sequence('id'), primary_key=True, index=True)
# 消息渠道
channel = Column(String)
# 消息类型
mtype = Column(String)
# 标题
title = Column(String)
# 文本内容
text = Column(String)
# 图片
image = Column(String)
# 链接
link = Column(String)
# 用户ID
userid = Column(String)
# 登记时间
reg_time = Column(String, index=True)
# 消息方向:0-接收消息,1-发送消息
action = Column(Integer)
# 附件json
note = Column(String)
@staticmethod
@db_query
def list_by_page(db: Session, page: int = 1, count: int = 30):
result = db.query(Message).order_by(Message.reg_time.desc()).offset((page - 1) * count).limit(
count).all()
result.sort(key=lambda x: x.reg_time, reverse=False)
return list(result)
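`list_by_page` above fetches the newest page first (descending order with offset/limit), then re-sorts just that page ascending so it reads oldest-to-newest. A list-based sketch of the same windowing (an illustrative helper, not project code):

```python
def page_oldest_first(items, page=1, count=30):
    """Take the page-th newest slice of items, then return it ascending,
    mirroring Message.list_by_page's offset/limit + re-sort."""
    newest_first = sorted(items, reverse=True)
    start = (page - 1) * count
    window = newest_first[start:start + count]
    return sorted(window)
```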


@@ -1,8 +1,7 @@
from sqlalchemy import Column, Integer, String, Sequence
from sqlalchemy.orm import Session
from app.db import db_query
from app.db.models import Base, db_update
from app.db import db_query, db_update, Base
class PluginData(Base):


@@ -3,8 +3,7 @@ from datetime import datetime
from sqlalchemy import Boolean, Column, Integer, String, Sequence
from sqlalchemy.orm import Session
from app.db import db_query
from app.db.models import Base, db_update
from app.db import db_query, db_update, Base
class Site(Base):


@@ -1,8 +1,7 @@
from sqlalchemy import Column, Integer, String, Sequence
from sqlalchemy.orm import Session
from app.db import db_query
from app.db.models import Base
from app.db import db_query, Base
class SiteIcon(Base):


@@ -1,8 +1,9 @@
from sqlalchemy import Column, Integer, String, Sequence
import time
from sqlalchemy import Column, Integer, String, Sequence, Float
from sqlalchemy.orm import Session
from app.db import db_update, db_query
from app.db.models import Base
from app.db import db_query, db_update, Base
class Subscribe(Base):
@@ -22,14 +23,15 @@ class Subscribe(Base):
imdbid = Column(String)
tvdbid = Column(Integer)
doubanid = Column(String, index=True)
bangumiid = Column(Integer, index=True)
# 季号
season = Column(Integer)
# 海报
poster = Column(String)
# 背景图
backdrop = Column(String)
# 评分
vote = Column(Integer)
# 评分float
vote = Column(Float)
# 简介
description = Column(String)
# 过滤规则
@@ -66,6 +68,12 @@ class Subscribe(Base):
best_version = Column(Integer, default=0)
# 当前优先级
current_priority = Column(Integer)
# 保存路径
save_path = Column(String)
# 是否使用 imdbid 搜索
search_imdbid = Column(Integer, default=0)
# 是否手动修改过总集数 0否 1是
manual_total_episode = Column(Integer, default=0)
@staticmethod
@db_query
@@ -97,7 +105,10 @@ class Subscribe(Base):
@staticmethod
@db_query
def get_by_title(db: Session, title: str):
def get_by_title(db: Session, title: str, season: int = None):
if season:
return db.query(Subscribe).filter(Subscribe.name == title,
Subscribe.season == season).first()
return db.query(Subscribe).filter(Subscribe.name == title).first()
@staticmethod
@@ -105,6 +116,11 @@ class Subscribe(Base):
def get_by_doubanid(db: Session, doubanid: str):
return db.query(Subscribe).filter(Subscribe.doubanid == doubanid).first()
@staticmethod
@db_query
def get_by_bangumiid(db: Session, bangumiid: int):
return db.query(Subscribe).filter(Subscribe.bangumiid == bangumiid).first()
@db_update
def delete_by_tmdbid(self, db: Session, tmdbid: int, season: int):
subscrbies = self.get_by_tmdbid(db, tmdbid, season)
@@ -118,3 +134,13 @@ class Subscribe(Base):
if subscribe:
subscribe.delete(db, subscribe.id)
return True
@staticmethod
@db_query
def list_by_type(db: Session, mtype: str, days: int):
result = db.query(Subscribe) \
.filter(Subscribe.type == mtype,
Subscribe.date >= time.strftime("%Y-%m-%d %H:%M:%S",
time.localtime(time.time() - 86400 * int(days)))
).all()
return list(result)
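`list_by_type` filters by comparing the stored `date` string against a cutoff built from `time.time() - 86400 * days`; because both sides use the `YYYY-MM-DD HH:MM:SS` format, string comparison matches chronological order. A small sketch of that cutoff computation (helper name is illustrative):

```python
import time

def cutoff_date(days, now=None):
    """Format 'now - days' the way list_by_type builds its date filter;
    the 'YYYY-MM-DD HH:MM:SS' form sorts lexicographically by time."""
    now = time.time() if now is None else now
    return time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(now - 86400 * int(days)))
```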


@@ -1,8 +1,7 @@
from sqlalchemy import Column, Integer, String, Sequence
from sqlalchemy.orm import Session
from app.db import db_update, db_query
from app.db.models import Base
from app.db import db_query, db_update, Base
class SystemConfig(Base):


@@ -3,8 +3,7 @@ import time
from sqlalchemy import Column, Integer, String, Sequence, Boolean, func
from sqlalchemy.orm import Session
from app.db import db_query
from app.db.models import Base, db_update
from app.db import db_query, db_update, Base
class TransferHistory(Base):
@@ -49,17 +48,28 @@ class TransferHistory(Base):
@staticmethod
@db_query
def list_by_title(db: Session, title: str, page: int = 1, count: int = 30):
result = db.query(TransferHistory).filter(TransferHistory.title.like(f'%{title}%')).order_by(
TransferHistory.date.desc()).offset((page - 1) * count).limit(
count).all()
def list_by_title(db: Session, title: str, page: int = 1, count: int = 30, status: bool = None):
if status is not None:
result = db.query(TransferHistory).filter(TransferHistory.title.like(f'%{title}%'),
TransferHistory.status == status).order_by(
TransferHistory.date.desc()).offset((page - 1) * count).limit(
count).all()
else:
result = db.query(TransferHistory).filter(TransferHistory.title.like(f'%{title}%')).order_by(
TransferHistory.date.desc()).offset((page - 1) * count).limit(
count).all()
return list(result)
@staticmethod
@db_query
def list_by_page(db: Session, page: int = 1, count: int = 30):
result = db.query(TransferHistory).order_by(TransferHistory.date.desc()).offset((page - 1) * count).limit(
count).all()
def list_by_page(db: Session, page: int = 1, count: int = 30, status: bool = None):
if status is not None:
result = db.query(TransferHistory).filter(TransferHistory.status == status).order_by(
TransferHistory.date.desc()).offset((page - 1) * count).limit(
count).all()
else:
result = db.query(TransferHistory).order_by(TransferHistory.date.desc()).offset((page - 1) * count).limit(
count).all()
return list(result)
@staticmethod
@@ -93,13 +103,20 @@ class TransferHistory(Base):
@staticmethod
@db_query
def count(db: Session):
return db.query(func.count(TransferHistory.id)).first()[0]
def count(db: Session, status: bool = None):
if status is not None:
return db.query(func.count(TransferHistory.id)).filter(TransferHistory.status == status).first()[0]
else:
return db.query(func.count(TransferHistory.id)).first()[0]
@staticmethod
@db_query
def count_by_title(db: Session, title: str):
return db.query(func.count(TransferHistory.id)).filter(TransferHistory.title.like(f'%{title}%')).first()[0]
def count_by_title(db: Session, title: str, status: bool = None):
if status is not None:
return db.query(func.count(TransferHistory.id)).filter(TransferHistory.title.like(f'%{title}%'),
TransferHistory.status == status).first()[0]
else:
return db.query(func.count(TransferHistory.id)).filter(TransferHistory.title.like(f'%{title}%')).first()[0]
@staticmethod
@db_query


@@ -2,8 +2,7 @@ from sqlalchemy import Boolean, Column, Integer, String, Sequence
from sqlalchemy.orm import Session
from app.core.security import verify_password
from app.db import db_update, db_query
from app.db.models import Base
from app.db import db_query, db_update, Base
class User(Base):


@@ -2,7 +2,7 @@ import json
from typing import Any
from app.db import DbOper
from app.db.models.plugin import PluginData
from app.db.models.plugindata import PluginData
from app.utils.object import ObjectUtils
@@ -28,18 +28,21 @@ class PluginDataOper(DbOper):
else:
PluginData(plugin_id=plugin_id, key=key, value=value).create(self._db)
def get_data(self, plugin_id: str, key: str) -> Any:
def get_data(self, plugin_id: str, key: str = None) -> Any:
"""
获取插件数据
:param plugin_id: 插件id
:param key: 数据key
"""
data = PluginData.get_plugin_data_by_key(self._db, plugin_id, key)
if not data:
return None
if ObjectUtils.is_obj(data.value):
return json.loads(data.value)
return data.value
if key:
data = PluginData.get_plugin_data_by_key(self._db, plugin_id, key)
if not data:
return None
if ObjectUtils.is_obj(data.value):
return json.loads(data.value)
return data.value
else:
return PluginData.get_plugin_data(self._db, plugin_id)
def del_data(self, plugin_id: str, key: str) -> Any:
"""


@@ -26,9 +26,9 @@ class SiteIconOper(DbOper):
更新站点图标
"""
icon_base64 = f"data:image/ico;base64,{icon_base64}" if icon_base64 else ""
siteicon = SiteIcon(name=name, domain=domain, url=icon_url, base64=icon_base64)
if not self.get_by_domain(domain):
siteicon.create(self._db)
siteicon = self.get_by_domain(domain)
if not siteicon:
SiteIcon(name=name, domain=domain, url=icon_url, base64=icon_base64).create(self._db)
elif icon_base64:
siteicon.update(self._db, {
"url": icon_url,


@@ -27,6 +27,7 @@ class SubscribeOper(DbOper):
imdbid=mediainfo.imdb_id,
tvdbid=mediainfo.tvdb_id,
doubanid=mediainfo.douban_id,
bangumiid=mediainfo.bangumi_id,
poster=mediainfo.get_poster_image(),
backdrop=mediainfo.get_backdrop_image(),
vote=mediainfo.vote_average,
@@ -83,3 +84,9 @@ class SubscribeOper(DbOper):
subscribe = self.get(sid)
subscribe.update(self._db, payload)
return subscribe
def list_by_type(self, mtype: str, days: int = 7) -> Subscribe:
"""
获取指定类型的订阅
"""
return Subscribe.list_by_type(self._db, mtype=mtype, days=days)


@@ -56,6 +56,20 @@ class SystemConfigOper(DbOper, metaclass=Singleton):
return self.__SYSTEMCONF
return self.__SYSTEMCONF.get(key)
def delete(self, key: Union[str, SystemConfigKey]):
"""
删除系统设置
"""
if isinstance(key, SystemConfigKey):
key = key.value
# 更新内存
self.__SYSTEMCONF.pop(key, None)
# 写入数据库
conf = SystemConfig.get_by_key(self._db, key)
if conf:
conf.delete(self._db, conf.id)
return True
def __del__(self):
if self._db:
self._db.close()


@@ -0,0 +1,2 @@
from .doh import doh_query_json
from .cloudflare import under_challenge


@@ -6,6 +6,7 @@ from playwright.sync_api import Page
from app.helper.browser import PlaywrightHelper
from app.helper.ocr import OcrHelper
from app.helper.twofa import TwoFactorAuth
from app.log import logger
from app.utils.http import RequestUtils
from app.utils.site import SiteUtils
@@ -51,7 +52,8 @@ class CookieHelper:
],
"twostep": [
'//input[@name="two_step_code"]',
'//input[@name="2fa_secret"]'
'//input[@name="2fa_secret"]',
'//input[@name="otp"]'
]
}
@@ -71,12 +73,14 @@ class CookieHelper:
url: str,
username: str,
password: str,
two_step_code: str = None,
proxies: dict = None) -> Tuple[Optional[str], Optional[str], str]:
"""
获取站点cookie和ua
:param url: 站点地址
:param username: 用户名
:param password: 密码
:param two_step_code: 二步验证码或密钥
:param proxies: 代理
:return: cookie、ua、message
"""
@@ -107,6 +111,15 @@ class CookieHelper:
break
if not password_xpath:
return None, None, "未找到密码输入框"
# 处理二步验证码
otp_code = TwoFactorAuth(two_step_code).get_code()
# 查找二步验证码输入框
twostep_xpath = None
if otp_code:
for xpath in self._SITE_LOGIN_XPATH.get("twostep"):
if html.xpath(xpath):
twostep_xpath = xpath
break
# 查找验证码输入框
captcha_xpath = None
for xpath in self._SITE_LOGIN_XPATH.get("captcha"):
@@ -138,6 +151,9 @@ class CookieHelper:
page.fill(username_xpath, username)
# 输入密码
page.fill(password_xpath, password)
# 输入二步验证码
if twostep_xpath:
page.fill(twostep_xpath, otp_code)
# 识别验证码
if captcha_xpath and captcha_img_url:
captcha_element = page.query_selector(captcha_xpath)
@@ -164,6 +180,24 @@ class CookieHelper:
except Exception as e:
logger.error(f"仿真登录失败:{str(e)}")
return None, None, f"仿真登录失败:{str(e)}"
# 对于某些二次验证码为单独页面的站点,输入二次验证码
if "verify" in page.url:
if not otp_code:
return None, None, "需要二次验证码"
html = etree.HTML(page.content())
for xpath in self._SITE_LOGIN_XPATH.get("twostep"):
if html.xpath(xpath):
try:
# 刷新一下 2fa code
otp_code = TwoFactorAuth(two_step_code).get_code()
page.fill(xpath, otp_code)
# 登录按钮 xpath 理论上相同,不再重复查找
page.click(submit_xpath)
page.wait_for_load_state("networkidle", timeout=30 * 1000)
except Exception as e:
logger.error(f"二次验证码输入失败:{str(e)}")
return None, None, f"二次验证码输入失败:{str(e)}"
break
# 登录后的源码
html_text = page.content()
if not html_text:


@@ -1,68 +1,126 @@
from typing import Tuple, Optional
import json
from hashlib import md5
from typing import Any, Dict, Tuple, Optional
from app.core.config import settings
from app.utils.common import decrypt
from app.utils.http import RequestUtils
from app.utils.string import StringUtils
class CookieCloudHelper:
_ignore_cookies: list = ["CookieAutoDeleteBrowsingDataCleanup", "CookieAutoDeleteCleaningDiscarded"]
def __init__(self, server, key, password):
self._server = server
self._key = key
self._password = password
def __init__(self):
self._sync_setting()
self._req = RequestUtils(content_type="application/json")
def _sync_setting(self):
self._server = settings.COOKIECLOUD_HOST
self._key = settings.COOKIECLOUD_KEY
self._password = settings.COOKIECLOUD_PASSWORD
self._enable_local = settings.COOKIECLOUD_ENABLE_LOCAL
self._local_path = settings.COOKIE_PATH
def download(self) -> Tuple[Optional[dict], str]:
"""
从CookieCloud下载数据
:return: Cookie数据、错误信息
"""
if not self._server or not self._key or not self._password:
# 更新为最新设置
self._sync_setting()
if ((not self._server and not self._enable_local)
or not self._key
or not self._password):
return None, "CookieCloud参数不正确"
req_url = "%s/get/%s" % (self._server, self._key)
ret = self._req.post_res(url=req_url, json={"password": self._password})
if ret and ret.status_code == 200:
result = ret.json()
if self._enable_local:
# 开启本地服务时,从本地直接读取数据
result = self._load_local_encrypt_data(self._key)
if not result:
return {}, "未下载到数据"
if result.get("cookie_data"):
contents = result.get("cookie_data")
else:
contents = result
# 整理数据,使用domain域名的最后两级作为分组依据
domain_groups = {}
for site, cookies in contents.items():
for cookie in cookies:
domain_key = StringUtils.get_url_domain(cookie.get("domain"))
if not domain_groups.get(domain_key):
domain_groups[domain_key] = [cookie]
else:
domain_groups[domain_key].append(cookie)
# 返回的Cookie数据
ret_cookies = {}
# 按域名生成Cookie串
for domain, content_list in domain_groups.items():
if not content_list:
continue
# 只有cf的cookie过滤掉
cloudflare_cookie = True
for content in content_list:
if content["name"] != "cf_clearance":
cloudflare_cookie = False
break
if cloudflare_cookie:
continue
# 站点Cookie
cookie_str = ";".join(
[f"{content.get('name')}={content.get('value')}"
for content in content_list
if content.get("name") and content.get("name") not in self._ignore_cookies]
)
ret_cookies[domain] = cookie_str
return ret_cookies, ""
elif ret:
return None, f"同步CookieCloud失败错误码{ret.status_code}"
return {}, "未从本地CookieCloud服务加载到cookie数据请检查服务器设置、用户KEY及加密密码是否正确"
else:
return None, "CookieCloud请求失败请检查服务器地址、用户KEY及加密密码是否正确"
req_url = "%s/get/%s" % (self._server, str(self._key).strip())
ret = self._req.get_res(url=req_url)
if ret and ret.status_code == 200:
try:
result = ret.json()
if not result:
return {}, f"未从{self._server}下载到cookie数据"
except Exception as err:
return {}, f"{self._server}下载cookie数据错误{str(err)}"
elif ret:
return None, f"远程同步CookieCloud失败错误码{ret.status_code}"
else:
return None, "CookieCloud请求失败请检查服务器地址、用户KEY及加密密码是否正确"
encrypted = result.get("encrypted")
if not encrypted:
return {}, "未获取到cookie密文"
else:
crypt_key = self._get_crypt_key()
try:
decrypted_data = decrypt(encrypted, crypt_key).decode('utf-8')
result = json.loads(decrypted_data)
except Exception as e:
return {}, "cookie解密失败" + str(e)
if not result:
return {}, "cookie解密为空"
if result.get("cookie_data"):
contents = result.get("cookie_data")
else:
contents = result
# 整理数据,使用domain域名的最后两级作为分组依据
domain_groups = {}
for site, cookies in contents.items():
for cookie in cookies:
domain_key = StringUtils.get_url_domain(cookie.get("domain"))
if not domain_groups.get(domain_key):
domain_groups[domain_key] = [cookie]
else:
domain_groups[domain_key].append(cookie)
# 返回的Cookie数据
ret_cookies = {}
# 按域名生成Cookie串
for domain, content_list in domain_groups.items():
if not content_list:
continue
# 只有cf的cookie过滤掉
cloudflare_cookie = True
for content in content_list:
if content["name"] != "cf_clearance":
cloudflare_cookie = False
break
if cloudflare_cookie:
continue
# 站点Cookie
cookie_str = ";".join(
[f"{content.get('name')}={content.get('value')}"
for content in content_list
if content.get("name") and content.get("name") not in self._ignore_cookies]
)
ret_cookies[domain] = cookie_str
return ret_cookies, ""
def _get_crypt_key(self) -> bytes:
"""
使用UUID和密码生成CookieCloud的加解密密钥
"""
md5_generator = md5()
md5_generator.update((str(self._key).strip() + '-' + str(self._password).strip()).encode('utf-8'))
return (md5_generator.hexdigest()[:16]).encode('utf-8')
def _load_local_encrypt_data(self, uuid: str) -> Dict[str, Any]:
file_path = self._local_path / f"{uuid}.json"
# 检查文件是否存在
if not file_path.exists():
return {}
# 读取文件
with open(file_path, encoding="utf-8", mode="r") as file:
read_content = file.read()
data = json.loads(read_content.encode("utf-8"))
return data
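`_get_crypt_key` above derives the decryption passphrase the way CookieCloud does: the hex MD5 of `"{uuid}-{password}"`, truncated to the first 16 characters and used as bytes. A self-contained sketch of that derivation (function name is illustrative):

```python
from hashlib import md5

def cookiecloud_key(uuid: str, password: str) -> bytes:
    """Derive the 16-byte CookieCloud passphrase:
    md5(f"{uuid}-{password}") hex digest, truncated to 16 chars."""
    digest = md5(f"{uuid.strip()}-{password.strip()}".encode("utf-8")).hexdigest()
    return digest[:16].encode("utf-8")
```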

app/helper/doh.py Normal file

@@ -0,0 +1,156 @@
"""
doh函数的实现。
author: https://github.com/C5H12O5/syno-videoinfo-plugin
"""
import base64
import concurrent
import concurrent.futures
import json
import socket
import struct
import urllib
import urllib.request
from typing import Dict, Optional
from app.core.config import settings
from app.log import logger
# 定义一个全局集合来存储注册的主机
_registered_hosts = {
'api.themoviedb.org',
'api.tmdb.org',
'webservice.fanart.tv',
'api.github.com',
'github.com',
'raw.githubusercontent.com',
'api.telegram.org'
}
# 定义一个全局线程池执行器
_executor = concurrent.futures.ThreadPoolExecutor()
# 定义默认的DoH配置
_doh_timeout = 5
_doh_cache: Dict[str, str] = {}
_doh_resolvers = [
# https://developers.cloudflare.com/1.1.1.1/encryption/dns-over-https
"1.0.0.1",
"1.1.1.1",
# https://support.quad9.net/hc/en-us
"9.9.9.9",
"149.112.112.112"
]
def _patched_getaddrinfo(host, *args, **kwargs):
"""
socket.getaddrinfo的补丁版本。
"""
if host not in _registered_hosts:
return _orig_getaddrinfo(host, *args, **kwargs)
# 检查主机是否已解析
if host in _doh_cache:
ip = _doh_cache[host]
logger.info("已解析 [%s] 为 [%s] (缓存)", host, ip)
return _orig_getaddrinfo(ip, *args, **kwargs)
# 使用DoH解析主机
futures = []
for resolver in _doh_resolvers:
futures.append(_executor.submit(_doh_query, resolver, host))
for future in concurrent.futures.as_completed(futures):
ip = future.result()
if ip is not None:
logger.info("已解析 [%s] 为 [%s]", host, ip)
_doh_cache[host] = ip
host = ip
break
return _orig_getaddrinfo(host, *args, **kwargs)
# 对 socket.getaddrinfo 进行补丁
if settings.DOH_ENABLE:
_orig_getaddrinfo = socket.getaddrinfo
socket.getaddrinfo = _patched_getaddrinfo
def _doh_query(resolver: str, host: str) -> Optional[str]:
"""
使用给定的DoH解析器查询给定主机的IP地址。
"""
# 构造DNS查询消息RFC 1035
header = b"".join(
[
b"\x00\x00", # ID: 0
b"\x01\x00", # FLAGS: 标准递归查询
b"\x00\x01", # QDCOUNT: 1
b"\x00\x00", # ANCOUNT: 0
b"\x00\x00", # NSCOUNT: 0
b"\x00\x00", # ARCOUNT: 0
]
)
question = b"".join(
[
b"".join(
[
struct.pack("B", len(item)) + item.encode("utf-8")
for item in host.split(".")
]
)
+ b"\x00", # QNAME: 域名序列
b"\x00\x01", # QTYPE: A
b"\x00\x01", # QCLASS: IN
]
)
message = header + question
try:
# 发送GET请求到DoH解析器RFC 8484
b64message = base64.b64encode(message).decode("utf-8").rstrip("=")
url = f"https://{resolver}/dns-query?dns={b64message}"
headers = {"Content-Type": "application/dns-message"}
logger.debug("DoH请求: %s", url)
request = urllib.request.Request(url, headers=headers, method="GET")
with urllib.request.urlopen(request, timeout=_doh_timeout) as response:
logger.debug("解析器(%s)响应: %s", resolver, response.status)
if response.status != 200:
return None
resp_body = response.read()
# 解析DNS响应消息RFC 1035
# name压缩:2 + type:2 + class:2 + ttl:4 + rdlength:2 = 12字节
first_rdata_start = len(header) + len(question) + 12
# rdataA记录= 4字节
first_rdata_end = first_rdata_start + 4
# 将rdata转换为IP地址
return socket.inet_ntoa(resp_body[first_rdata_start:first_rdata_end])
except Exception as e:
logger.error("解析器(%s)请求错误: %s", resolver, e)
return None
def doh_query_json(resolver: str, host: str) -> Optional[str]:
"""
使用给定的DoH解析器查询给定主机的IP地址。
"""
url = f"https://{resolver}/dns-query?name={host}&type=A"
headers = {"Accept": "application/dns-json"}
logger.debug("DoH请求: %s", url)
try:
request = urllib.request.Request(url, headers=headers, method="GET")
with urllib.request.urlopen(request, timeout=_doh_timeout) as response:
logger.debug("解析器(%s)响应: %s", resolver, response.status)
if response.status != 200:
return None
response_body = response.read().decode("utf-8")
logger.debug("<== body: %s", response_body)
answer = json.loads(response_body)["Answer"]
return answer[0]["data"]
except Exception as e:
logger.error("解析器(%s)请求错误: %s", resolver, e)
return None
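`_doh_query` above hand-builds an RFC 1035 wire-format question (ID 0, recursion-desired flag, one A-record query) and base64-encodes it for an RFC 8484 GET. A self-contained sketch of just the message construction (function names are illustrative):

```python
import base64
import struct

def build_dns_query(host: str) -> bytes:
    """Build a minimal RFC 1035 A-record query (ID 0, RD flag set),
    matching the header/question layout in _doh_query."""
    header = b"\x00\x00" + b"\x01\x00" + b"\x00\x01" + b"\x00\x00" * 3
    qname = b"".join(struct.pack("B", len(p)) + p.encode("utf-8")
                     for p in host.split("."))
    question = qname + b"\x00" + b"\x00\x01" + b"\x00\x01"  # QTYPE=A, QCLASS=IN
    return header + question

def to_doh_param(message: bytes) -> str:
    """Encode the query for the dns= GET parameter, unpadded.
    Note: RFC 8484 specifies URL-safe base64; standard base64 (as used
    above) only differs when the bytes encode to '+' or '/'."""
    return base64.b64encode(message).decode("utf-8").rstrip("=")
```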


@@ -1,19 +1,46 @@
import json
import queue
import time
from typing import Optional, Any, Union
from app.utils.singleton import Singleton
class MessageHelper(metaclass=Singleton):
"""
消息队列管理器
消息队列管理器,包括系统消息和用户消息
"""
def __init__(self):
self.queue = queue.Queue()
self.sys_queue = queue.Queue()
self.user_queue = queue.Queue()
def put(self, message: str):
self.queue.put(message)
def put(self, message: Any, role: str = "sys", note: Union[list, dict] = None):
"""
存消息
:param message: 消息
:param role: 消息通道 sys/user
:param note: 附件json
"""
if role == "sys":
self.sys_queue.put(message)
else:
if isinstance(message, str):
self.user_queue.put(message)
elif hasattr(message, "to_dict"):
content = message.to_dict()
content['date'] = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
content['note'] = json.dumps(note) if note else None
self.user_queue.put(json.dumps(content))
def get(self):
if not self.queue.empty():
return self.queue.get(block=False)
def get(self, role: str = "sys") -> Optional[str]:
"""
取消息
:param role: 消息通道 sys/user
"""
if role == "sys":
if not self.sys_queue.empty():
return self.sys_queue.get(block=False)
else:
if not self.user_queue.empty():
return self.user_queue.get(block=False)
return None
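The reworked `MessageHelper` keeps two independent FIFO channels and selects one by `role`. The routing can be sketched with just the stdlib `queue` module (an illustrative class, not the project's):

```python
import queue

class DualQueue:
    """Two independent FIFO channels, like MessageHelper's sys/user queues."""
    def __init__(self):
        self.sys_queue = queue.Queue()
        self.user_queue = queue.Queue()

    def put(self, message, role="sys"):
        (self.sys_queue if role == "sys" else self.user_queue).put(message)

    def get(self, role="sys"):
        # Non-blocking read: return None when the channel is empty
        q = self.sys_queue if role == "sys" else self.user_queue
        if not q.empty():
            return q.get(block=False)
        return None
```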


@@ -1,6 +1,10 @@
# -*- coding: utf-8 -*-
import importlib
import pkgutil
import traceback
from pathlib import Path
from app.log import logger
class ModuleHelper:
@@ -32,6 +36,20 @@ class ModuleHelper:
if isinstance(obj, type) and filter_func(name, obj):
submodules.append(obj)
except Exception as err:
print(f'加载模块 {package_name} 失败:{err}')
logger.error(f'加载模块 {package_name} 失败:{str(err)} - {traceback.format_exc()}')
return submodules
@staticmethod
def dynamic_import_all_modules(base_path: Path, package_name: str):
"""
动态导入目录下所有模块
"""
modules = []
# 遍历文件夹,找到所有模块文件
for file in base_path.glob("*.py"):
file_name = file.stem
if file_name != "__init__":
modules.append(file_name)
full_module_name = f"{package_name}.{file_name}"
importlib.import_module(full_module_name)


@@ -1,11 +1,13 @@
import json
import shutil
import traceback
from pathlib import Path
from typing import Dict, Tuple, Optional, List
from cachetools import TTLCache, cached
from app.core.config import settings
from app.log import logger
from app.utils.http import RequestUtils
from app.utils.singleton import Singleton
from app.utils.system import SystemUtils
@@ -16,7 +18,9 @@ class PluginHelper(metaclass=Singleton):
插件市场管理,下载安装插件到本地
"""
@cached(cache=TTLCache(maxsize=10, ttl=1800))
_base_url = "https://raw.githubusercontent.com/%s/%s/main/"
@cached(cache=TTLCache(maxsize=100, ttl=1800))
def get_plugins(self, repo_url: str) -> Dict[str, dict]:
"""
获取Github所有最新插件列表
@@ -24,32 +28,57 @@ class PluginHelper(metaclass=Singleton):
"""
if not repo_url:
return {}
res = RequestUtils(proxies=settings.PROXY, timeout=10).get_res(f"{repo_url}package.json")
user, repo = self.get_repo_info(repo_url)
if not user or not repo:
return {}
raw_url = self._base_url % (user, repo)
res = RequestUtils(proxies=settings.PROXY, headers=settings.GITHUB_HEADERS,
timeout=10).get_res(f"{raw_url}package.json")
if res:
return json.loads(res.text)
try:
return json.loads(res.text)
except json.JSONDecodeError:
logger.error(f"插件包数据解析失败:{res.text}")
return {}
return {}
@staticmethod
def install(pid: str, repo_url: str) -> Tuple[bool, str]:
def get_repo_info(repo_url: str) -> Tuple[Optional[str], Optional[str]]:
"""
安装插件
获取Github仓库信息
:param repo_url: Github仓库地址
"""
# 从Github的repo_url获取用户和项目名
if not repo_url:
return None, None
if not repo_url.endswith("/"):
repo_url += "/"
if repo_url.count("/") < 6:
repo_url = f"{repo_url}main/"
try:
user, repo = repo_url.split("/")[-4:-2]
except Exception as e:
return False, f"不支持的插件仓库地址格式{str(e)}"
if not user or not repo:
return False, "不支持的插件仓库地址格式"
logger.error(f"解析Github仓库地址失败{str(e)} - {traceback.format_exc()}")
return None, None
return user, repo
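`get_repo_info` above normalizes a GitHub repo URL by padding it to a `.../main/` path and slicing user/repo out of the split. A self-contained sketch of that parsing (function name is illustrative):

```python
def parse_repo(repo_url):
    """Extract (user, repo) from a GitHub repo URL the way get_repo_info
    does: ensure a trailing slash, pad short URLs with 'main/', then slice."""
    if not repo_url:
        return None, None
    if not repo_url.endswith("/"):
        repo_url += "/"
    if repo_url.count("/") < 6:
        repo_url += "main/"
    try:
        user, repo = repo_url.split("/")[-4:-2]
    except ValueError:
        return None, None
    return user or None, repo or None
```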
def install(self, pid: str, repo_url: str) -> Tuple[bool, str]:
"""
安装插件
"""
if SystemUtils.is_frozen():
return False, "可执行文件模式下,只能安装本地插件"
# 从Github的repo_url获取用户和项目名
user, repo = self.get_repo_info(repo_url)
if not user or not repo:
return False, "不支持的插件仓库地址格式"
def __get_filelist(_p: str) -> Tuple[Optional[list], Optional[str]]:
"""
获取插件的文件列表
"""
file_api = f"https://api.github.com/repos/{user}/{repo}/contents/plugins/{_p.lower()}"
r = RequestUtils(proxies=settings.PROXY).get_res(file_api)
r = RequestUtils(proxies=settings.PROXY, headers=settings.GITHUB_HEADERS, timeout=30).get_res(file_api)
if not r or r.status_code != 200:
return None, f"连接仓库失败:{r.status_code} - {r.reason}"
ret = r.json()
@@ -66,7 +95,8 @@ class PluginHelper(metaclass=Singleton):
for item in _l:
if item.get("download_url"):
# 下载插件文件
res = RequestUtils(proxies=settings.PROXY).get_res(item["download_url"])
res = RequestUtils(proxies=settings.PROXY,
headers=settings.GITHUB_HEADERS, timeout=60).get_res(item["download_url"])
if not res:
return False, f"文件 {item.get('name')} 下载失败!"
elif res.status_code != 200:
@@ -83,7 +113,7 @@ class PluginHelper(metaclass=Singleton):
l, m = __get_filelist(p)
if not l:
return False, m
return __download_files(p, l)
__download_files(p, l)
return True, ""
if not pid or not repo_url:
@@ -123,5 +153,5 @@ class PluginHelper(metaclass=Singleton):
# 插件目录下如有requirements.txt则安装依赖
requirements_file = plugin_dir / "requirements.txt"
if requirements_file.exists():
SystemUtils.execute(f"pip install -r {requirements_file}")
SystemUtils.execute(f"pip install -r {requirements_file} > /dev/null 2>&1")
return True, ""

app/helper/resource.py Normal file

@@ -0,0 +1,103 @@
import json
from pathlib import Path
from app.core.config import settings
from app.helper.sites import SitesHelper
from app.log import logger
from app.utils.http import RequestUtils
from app.utils.singleton import Singleton
from app.utils.string import StringUtils
from app.utils.system import SystemUtils
class ResourceHelper(metaclass=Singleton):
"""
检测和更新资源包
"""
# 资源包的git仓库地址
_repo = "https://raw.githubusercontent.com/jxxghp/MoviePilot-Resources/main/package.json"
_files_api = f"https://api.github.com/repos/jxxghp/MoviePilot-Resources/contents/resources"
_base_dir: Path = settings.ROOT_PATH
def __init__(self):
self.siteshelper = SitesHelper()
self.check()
def check(self):
"""
检测是否有更新,如有则下载安装
"""
if not settings.AUTO_UPDATE_RESOURCE:
return
if SystemUtils.is_frozen():
return
logger.info("开始检测资源包版本...")
res = RequestUtils(proxies=settings.PROXY, headers=settings.GITHUB_HEADERS, timeout=10).get_res(self._repo)
if res:
try:
resource_info = json.loads(res.text)
except json.JSONDecodeError:
logger.error("资源包仓库数据解析失败!")
return
else:
logger.warn("无法连接资源包仓库!")
return
online_version = resource_info.get("version")
if online_version:
logger.info(f"最新资源包版本v{online_version}")
# 需要更新的资源包
need_updates = {}
# 资源明细
resources: dict = resource_info.get("resources") or {}
for rname, resource in resources.items():
rtype = resource.get("type")
platform = resource.get("platform")
target = resource.get("target")
version = resource.get("version")
# 判断平台
if platform and platform != SystemUtils.platform():
continue
# 判断版本号
if rtype == "auth":
# 站点认证资源
local_version = self.siteshelper.auth_version
elif rtype == "sites":
# 站点索引资源
local_version = self.siteshelper.indexer_version
else:
continue
if StringUtils.compare_version(version, local_version) > 0:
logger.info(f"{rname} 资源包有更新最新版本v{version}")
else:
continue
# 需要安装
need_updates[rname] = target
if need_updates:
# 下载文件信息列表
r = RequestUtils(proxies=settings.PROXY, headers=settings.GITHUB_HEADERS,
timeout=30).get_res(self._files_api)
if not r or r.status_code != 200:
return None, f"连接仓库失败:{r.status_code} - {r.reason}"
files_info = r.json()
for item in files_info:
save_path = need_updates.get(item.get("name"))
if not save_path:
continue
if item.get("download_url"):
# 下载资源文件
res = RequestUtils(proxies=settings.PROXY, headers=settings.GITHUB_HEADERS,
timeout=180).get_res(item["download_url"])
if not res:
logger.error(f"文件 {item.get('name')} 下载失败!")
continue
elif res.status_code != 200:
logger.error(f"下载文件 {item.get('name')} 失败:{res.status_code} - {res.reason}")
continue
# 创建插件文件夹
file_path = self._base_dir / save_path / item.get("name")
if not file_path.parent.exists():
file_path.parent.mkdir(parents=True, exist_ok=True)
# 写入文件
file_path.write_bytes(res.content)
logger.info("资源包更新完成,开始重启服务...")
SystemUtils.restart()
else:
logger.info("所有资源已最新,无需更新")
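The update check above hinges on `StringUtils.compare_version` returning a positive number when the remote version is newer. A minimal sketch of such a comparison (a hypothetical stand-in, not the project's actual implementation):

```python
def compare_version(v1: str, v2: str) -> int:
    """Return 1 if v1 > v2, -1 if v1 < v2, 0 if equal."""
    def parts(v: str):
        # Strip a leading "v"/"V" and split on dots into integers
        return [int(x) for x in v.lstrip("vV").split(".")]
    a, b = parts(v1), parts(v2)
    # Pad the shorter list so "1.2" compares like "1.2.0"
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    return (a > b) - (a < b)
```

Comparing component lists rather than raw strings avoids the classic `"1.10" < "1.9"` string-ordering trap.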


@@ -1,11 +1,15 @@
import re
import traceback
import xml.dom.minidom
from typing import List, Tuple, Union
from urllib.parse import urljoin
import chardet
from lxml import etree
from app.core.config import settings
from app.helper.browser import PlaywrightHelper
from app.log import logger
from app.utils.dom import DomUtils
from app.utils.http import RequestUtils
from app.utils.string import StringUtils
@@ -221,11 +225,12 @@ class RssHelper:
}
@staticmethod
def parse(url, proxy: bool = False) -> Union[List[dict], None]:
def parse(url, proxy: bool = False, timeout: int = 30) -> Union[List[dict], None]:
"""
解析RSS订阅URL获取RSS中的种子信息
:param url: RSS地址
:param proxy: 是否使用代理
:param timeout: 请求超时
:return: 种子信息列表如为None代表Rss过期
"""
# 开始处理
@@ -233,15 +238,35 @@ class RssHelper:
if not url:
return []
try:
ret = RequestUtils(proxies=settings.PROXY if proxy else None).get_res(url)
ret = RequestUtils(proxies=settings.PROXY if proxy else None, timeout=timeout).get_res(url)
if not ret:
return []
except Exception as err:
print(str(err))
logger.error(f"获取RSS失败{str(err)} - {traceback.format_exc()}")
return []
if ret:
ret_xml = ret.text
ret_xml = ""
try:
# 使用chardet检测字符编码
raw_data = ret.content
if raw_data:
try:
result = chardet.detect(raw_data)
encoding = result['encoding']
# 解码为字符串
ret_xml = raw_data.decode(encoding)
except Exception as e:
logger.debug(f"chardet解码失败{str(e)}")
# 从XML声明中解析实际编码
match = re.search(r'encoding\s*=\s*["\']([^"\']+)["\']', ret.text)
if match:
encoding = match.group(1)
if encoding:
ret_xml = raw_data.decode(encoding)
else:
ret.encoding = ret.apparent_encoding
if not ret_xml:
ret_xml = ret.text
# 解析XML
dom_tree = xml.dom.minidom.parseString(ret_xml)
rootNode = dom_tree.documentElement
@@ -283,10 +308,10 @@ class RssHelper:
'pubdate': pubdate}
ret_array.append(tmp_dict)
except Exception as e1:
print(str(e1))
logger.debug(f"解析RSS失败{str(e1)} - {traceback.format_exc()}")
continue
except Exception as e2:
print(str(e2))
logger.error(f"解析RSS失败{str(e2)} - {traceback.format_exc()}")
# RSS过期 观众RSS 链接已过期,您需要获得一个新的! pthome RSS Link has expired, You need to get a new one!
_rss_expired_msg = [
"RSS 链接已过期, 您需要获得一个新的!",
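The chardet-based decoding above ultimately falls back to the encoding declared in the XML header. A stdlib-only sketch of that fallback chain (hypothetical helper, assuming the declaration sits in the first bytes of the feed):

```python
import re

def decode_rss(raw: bytes) -> str:
    """Decode RSS bytes: try UTF-8 first, then the encoding
    declared in the XML header, then latin-1 as a last resort."""
    try:
        return raw.decode("utf-8")
    except UnicodeDecodeError:
        pass
    # Look for <?xml version="1.0" encoding="gbk"?> near the start
    match = re.search(rb'encoding\s*=\s*["\']([^"\']+)["\']', raw[:200])
    if match:
        try:
            return raw.decode(match.group(1).decode("ascii"))
        except (LookupError, UnicodeDecodeError):
            pass
    return raw.decode("latin-1", errors="replace")
```

latin-1 never raises, so the function always returns a string even for feeds with a wrong or missing declaration.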


@@ -1,26 +1,32 @@
import datetime
import re
import traceback
from pathlib import Path
from typing import Tuple, Optional, List, Union
from typing import Tuple, Optional, List, Union, Dict
from urllib.parse import unquote
from requests import Response
from torrentool.api import Torrent
from app.core.config import settings
from app.core.context import Context
from app.core.context import Context, TorrentInfo, MediaInfo
from app.core.metainfo import MetaInfo
from app.db.systemconfig_oper import SystemConfigOper
from app.log import logger
from app.utils.http import RequestUtils
from app.schemas.types import MediaType, SystemConfigKey
from app.utils.singleton import Singleton
from app.utils.string import StringUtils
class TorrentHelper:
class TorrentHelper(metaclass=Singleton):
"""
种子帮助类
"""
# 失败的种子:站点链接
_invalid_torrents = []
def __init__(self):
self.system_config = SystemConfigOper()
@@ -123,6 +129,8 @@ class TorrentHelper:
elif req.status_code == 429:
return None, None, "", [], "触发站点流控,请稍后重试"
else:
# 把错误的种子记下来,避免重复使用
self.add_invalid(url)
return None, None, "", [], f"下载种子出错,状态码:{req.status_code}"
@staticmethod
@@ -276,3 +284,118 @@ class TorrentHelper:
continue
episodes = list(set(episodes).union(set(meta.episode_list)))
return episodes
def is_invalid(self, url: str) -> bool:
"""
判断种子是否是无效种子
"""
return url in self._invalid_torrents
def add_invalid(self, url: str):
"""
添加无效种子
"""
if url not in self._invalid_torrents:
self._invalid_torrents.append(url)
@staticmethod
def filter_torrent(torrent_info: TorrentInfo,
filter_rule: Dict[str, str],
mediainfo: MediaInfo) -> bool:
"""
检查种子是否匹配订阅过滤规则
"""
def __get_size_range(size_str: str) -> Tuple[float, float]:
"""
获取大小范围
"""
if not size_str:
return 0, 0
try:
size_range = size_str.split("-")
if len(size_range) == 1:
return 0, float(size_range[0])
elif len(size_range) == 2:
return float(size_range[0]), float(size_range[1])
except Exception as e:
logger.error(f"解析大小范围失败:{str(e)} - {traceback.format_exc()}")
return 0, 0
if not filter_rule:
return True
# 匹配内容
content = f"{torrent_info.title} {torrent_info.description} {' '.join(torrent_info.labels or [])}"
# 最少做种人数
min_seeders = filter_rule.get("min_seeders")
if min_seeders and torrent_info.seeders < int(min_seeders):
logger.info(f"{torrent_info.title} 做种人数不足 {min_seeders}")
return False
# 包含
include = filter_rule.get("include")
if include:
if not re.search(r"%s" % include, content, re.I):
logger.info(f"{torrent_info.title} 不匹配包含规则 {include}")
return False
# 排除
exclude = filter_rule.get("exclude")
if exclude:
if re.search(r"%s" % exclude, content, re.I):
logger.info(f"{torrent_info.title} 匹配排除规则 {exclude}")
return False
# 质量
quality = filter_rule.get("quality")
if quality:
if not re.search(r"%s" % quality, torrent_info.title, re.I):
logger.info(f"{torrent_info.title} 不匹配质量规则 {quality}")
return False
# 分辨率
resolution = filter_rule.get("resolution")
if resolution:
if not re.search(r"%s" % resolution, torrent_info.title, re.I):
logger.info(f"{torrent_info.title} 不匹配分辨率规则 {resolution}")
return False
# 特效
effect = filter_rule.get("effect")
if effect:
if not re.search(r"%s" % effect, torrent_info.title, re.I):
logger.info(f"{torrent_info.title} 不匹配特效规则 {effect}")
return False
# 大小
tv_size = filter_rule.get("tv_size")
movie_size = filter_rule.get("movie_size")
if movie_size or tv_size:
if mediainfo.type == MediaType.TV:
size = tv_size
else:
size = movie_size
# 大小范围
begin_size, end_size = __get_size_range(size)
if begin_size or end_size:
meta = MetaInfo(title=torrent_info.title, subtitle=torrent_info.description)
# 集数
if mediainfo.type == MediaType.TV:
# 电视剧
season = meta.begin_season or 1
if meta.total_episode:
# 识别的总集数
episodes_num = meta.total_episode
else:
# 整季集数
episodes_num = len(mediainfo.seasons.get(season) or [1])
# 比较大小
if not (begin_size * 1024 ** 3 <= (torrent_info.size / episodes_num) <= end_size * 1024 ** 3):
logger.info(f"{torrent_info.title} {StringUtils.str_filesize(torrent_info.size)} "
f"{episodes_num}集,不匹配大小规则 {size}")
return False
else:
# 电影比较大小
if not (begin_size * 1024 ** 3 <= torrent_info.size <= end_size * 1024 ** 3):
logger.info(
f"{torrent_info.title} {StringUtils.str_filesize(torrent_info.size)} 不匹配大小规则 {size}")
return False
return True
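The `__get_size_range` helper and the per-episode size check can be sketched in isolation like this (hypothetical names; sizes are in GB, as in the filter rules above):

```python
from typing import Tuple

GB = 1024 ** 3

def parse_size_range(size_str: str) -> Tuple[float, float]:
    """Parse 'min-max' or a bare 'max' (in GB) into a (min, max) tuple."""
    if not size_str:
        return 0.0, 0.0
    parts = size_str.split("-")
    if len(parts) == 1:
        return 0.0, float(parts[0])
    return float(parts[0]), float(parts[1])

def size_ok(torrent_bytes: int, rule: str, episodes: int = 1) -> bool:
    """True when the per-episode size falls inside the rule's GB range."""
    lo, hi = parse_size_range(rule)
    if not lo and not hi:
        return True  # no rule means no restriction
    return lo * GB <= torrent_bytes / episodes <= hi * GB
```

Dividing by the episode count lets one rule cover both a single episode and a season pack, which is exactly why the TV branch above estimates `episodes_num` first.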

app/helper/twofa.py Normal file

@@ -0,0 +1,43 @@
import base64
import hashlib
import hmac
import struct
import sys
import time
from app.log import logger
class TwoFactorAuth:
def __init__(self, code_or_secret: str):
if code_or_secret and len(code_or_secret) > 16:
self.code = None
self.secret = code_or_secret
else:
self.code = code_or_secret
self.secret = None
@staticmethod
def __calc(secret_key: str) -> str:
if not secret_key:
return ""
try:
input_time = int(time.time()) // 30
key = base64.b32decode(secret_key)
msg = struct.pack(">Q", input_time)
google_code = hmac.new(key, msg, hashlib.sha1).digest()
o = (
google_code[19] & 15
if sys.version_info > (2, 7)
else ord(str(google_code[19])) & 15
)
google_code = str(
(struct.unpack(">I", google_code[o: o + 4])[0] & 0x7FFFFFFF) % 1000000
)
return f"0{google_code}" if len(google_code) == 5 else google_code
except Exception as e:
logger.error(f"计算动态验证码失败:{str(e)}")
return ""
def get_code(self) -> str:
return self.code or self.__calc(self.secret)
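`TwoFactorAuth.__calc` implements RFC 6238 TOTP: HMAC-SHA1 over the 30-second time counter, dynamically truncated to 6 digits. A standalone sketch with the counter exposed for testing (note it zero-pads to a fixed width, whereas the class above only pads 5-digit codes):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, counter: int = None) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter,
    dynamically truncated to a 6-digit code."""
    if counter is None:
        counter = int(time.time()) // 30
    key = base64.b32decode(secret_b32)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[19] & 0x0F  # dynamic truncation offset
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 1000000
    return f"{code:06d}"  # zero-pad to a fixed 6 digits
```

With the RFC 4226 test secret (base32 of `12345678901234567890`), counter 1 must yield `287082`, which makes the truncation logic easy to verify.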


@@ -1,6 +1,8 @@
import inspect
import logging
from logging.handlers import RotatingFileHandler
from pathlib import Path
from typing import Dict, Any
import click
@@ -26,32 +28,156 @@ class CustomFormatter(logging.Formatter):
def format(self, record):
separator = " " * (8 - len(record.levelname))
record.leveltext = level_name_colors[record.levelno](record.levelname + ":") + separator
if record.filename == "__init__.py":
record.filename = Path(record.pathname).parent.name
return super().format(record)
# DEBUG
logger = logging.getLogger()
if settings.DEBUG:
logger.setLevel(logging.DEBUG)
else:
logger.setLevel(logging.INFO)
class LoggerManager:
"""
日志管理
"""
# 管理所有的Logger
_loggers: Dict[str, Any] = {}
# 默认日志文件
_default_log_file = "moviepilot.log"
# 终端日志
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.DEBUG)
console_formatter = CustomFormatter("%(leveltext)s%(filename)s - %(message)s")
console_handler.setFormatter(console_formatter)
logger.addHandler(console_handler)
@staticmethod
def __get_caller():
"""
获取调用者的文件名称与插件名称(如果是插件调用内置的模块, 也能写入到插件日志文件中)
"""
# 调用者文件名称
caller_name = None
# 调用者插件名称
plugin_name = None
for i in inspect.stack()[3:]:
filepath = Path(i.filename)
parts = filepath.parts
if not caller_name:
# 设定调用者文件名称
if parts[-1] == "__init__.py":
caller_name = parts[-2]
else:
caller_name = parts[-1]
if "app" in parts:
if not plugin_name and "plugins" in parts:
# 设定调用者插件名称
plugin_name = parts[parts.index("plugins") + 1]
if plugin_name == "__init__.py":
plugin_name = "plugin"
break
if "main.py" in parts:
# 已经到达程序的入口
break
elif len(parts) != 1:
# 已经超出程序范围
break
return caller_name or "log.py", plugin_name
# 文件日志
file_handler = RotatingFileHandler(filename=settings.LOG_PATH / 'moviepilot.log',
mode='w',
maxBytes=5 * 1024 * 1024,
backupCount=3,
encoding='utf-8')
file_handler.setLevel(logging.INFO)
file_formater = CustomFormatter("%(levelname)s%(asctime)s - %(filename)s - %(message)s")
file_handler.setFormatter(file_formater)
logger.addHandler(file_handler)
@staticmethod
def __setup_logger(log_file: str):
"""
设置日志
log_file日志文件相对路径
"""
log_file_path = settings.LOG_PATH / log_file
if not log_file_path.parent.exists():
log_file_path.parent.mkdir(parents=True, exist_ok=True)
# 创建新实例
_logger = logging.getLogger(log_file_path.stem)
# DEBUG
if settings.DEBUG:
_logger.setLevel(logging.DEBUG)
else:
_logger.setLevel(logging.INFO)
# 移除已有的 handler避免重复添加
for handler in _logger.handlers:
_logger.removeHandler(handler)
# 终端日志
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.DEBUG)
console_formatter = CustomFormatter("%(leveltext)s%(message)s")
console_handler.setFormatter(console_formatter)
_logger.addHandler(console_handler)
# 文件日志
file_handler = RotatingFileHandler(filename=log_file_path,
mode='w',
maxBytes=5 * 1024 * 1024,
backupCount=3,
encoding='utf-8')
file_handler.setLevel(logging.INFO)
file_formater = CustomFormatter("【%(levelname)s】%(asctime)s - %(message)s")
file_handler.setFormatter(file_formater)
_logger.addHandler(file_handler)
return _logger
def logger(self, method: str, msg: str, *args, **kwargs):
"""
获取模块的logger
:param method: 日志方法
:param msg: 日志信息
"""
# 获取调用者文件名和插件名
caller_name, plugin_name = self.__get_caller()
# 区分插件日志
if plugin_name:
# 使用插件日志文件
logfile = Path("plugins") / f"{plugin_name}.log"
else:
# 使用默认日志文件
logfile = self._default_log_file
# 获取调用者的模块的logger
_logger = self._loggers.get(logfile)
if not _logger:
_logger = self.__setup_logger(logfile)
self._loggers[logfile] = _logger
if hasattr(_logger, method):
method = getattr(_logger, method)
method(f"{caller_name} - {msg}", *args, **kwargs)
def info(self, msg: str, *args, **kwargs):
"""
重载info方法
"""
self.logger("info", msg, *args, **kwargs)
def debug(self, msg: str, *args, **kwargs):
"""
重载debug方法
"""
self.logger("debug", msg, *args, **kwargs)
def warning(self, msg: str, *args, **kwargs):
"""
重载warning方法
"""
self.logger("warning", msg, *args, **kwargs)
def warn(self, msg: str, *args, **kwargs):
"""
重载warn方法
"""
self.logger("warning", msg, *args, **kwargs)
def error(self, msg: str, *args, **kwargs):
"""
重载error方法
"""
self.logger("error", msg, *args, **kwargs)
def critical(self, msg: str, *args, **kwargs):
"""
重载critical方法
"""
self.logger("critical", msg, *args, **kwargs)
# 初始化公共日志
logger = LoggerManager()
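`__get_caller` walks `inspect.stack()` so each log line can be attributed to the file (or plugin) that emitted it. A minimal sketch of the same idea (hypothetical helper):

```python
import inspect
from pathlib import Path

def caller_module_name(depth: int = 1) -> str:
    """Walk the call stack and return the caller's file name,
    using the parent directory name for package __init__.py files."""
    frame = inspect.stack()[depth]
    path = Path(frame.filename)
    if path.name == "__init__.py":
        # A package's __init__.py is better identified by its folder
        return path.parent.name
    return path.name
```

A logging wrapper would call this with a larger `depth` to skip its own frames, which is why `LoggerManager.__get_caller` starts scanning at `inspect.stack()[3:]`.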


@@ -19,12 +19,15 @@ if SystemUtils.is_frozen():
from app.core.config import settings
from app.core.module import ModuleManager
from app.core.plugin import PluginManager
from app.db.init import init_db, update_db
from app.db.init import init_db, update_db, init_super_user
from app.helper.thread import ThreadHelper
from app.helper.display import DisplayHelper
from app.helper.resource import ResourceHelper
from app.helper.sites import SitesHelper
from app.helper.message import MessageHelper
from app.scheduler import Scheduler
from app.command import Command
from app.command import Command, CommandChian
from app.schemas import Notification, NotificationType
# App
App = FastAPI(title=settings.PROJECT_NAME,
@@ -50,17 +53,22 @@ def init_routers():
"""
from app.api.apiv1 import api_router
from app.api.servarr import arr_router
from app.api.servcookie import cookie_router
# API路由
App.include_router(api_router, prefix=settings.API_V1_STR)
# Radarr、Sonarr路由
App.include_router(arr_router, prefix="/api/v3")
# CookieCloud路由
App.include_router(cookie_router, prefix="/cookiecloud")
def start_frontend():
"""
启动前端服务
"""
if not SystemUtils.is_frozen():
# 仅Windows可执行文件支持内嵌nginx
if not SystemUtils.is_frozen() \
or not SystemUtils.is_windows():
return
# 临时Nginx目录
nginx_path = settings.ROOT_PATH / 'nginx'
@@ -73,27 +81,20 @@ def start_frontend():
SystemUtils.move(nginx_path, run_nginx_dir)
# 启动Nginx
import subprocess
if SystemUtils.is_windows():
subprocess.Popen("start nginx.exe",
cwd=run_nginx_dir,
shell=True)
else:
subprocess.Popen("nohup ./nginx &",
cwd=run_nginx_dir,
shell=True)
subprocess.Popen("start nginx.exe",
cwd=run_nginx_dir,
shell=True)
def stop_frontend():
"""
停止前端服务
"""
if not SystemUtils.is_frozen():
if not SystemUtils.is_frozen() \
or not SystemUtils.is_windows():
return
import subprocess
if SystemUtils.is_windows():
subprocess.Popen(f"taskkill /f /im nginx.exe", shell=True)
else:
subprocess.Popen(f"killall nginx", shell=True)
subprocess.Popen(f"taskkill /f /im nginx.exe", shell=True)
def start_tray():
@@ -104,6 +105,9 @@ def start_tray():
if not SystemUtils.is_frozen():
return
if not SystemUtils.is_windows():
return
def open_web():
"""
调用浏览器打开前端页面
@@ -139,6 +143,22 @@ def start_tray():
threading.Thread(target=TrayIcon.run, daemon=True).start()
def check_auth():
"""
检查认证状态
"""
if SitesHelper().auth_level < 2:
err_msg = "用户认证失败,站点相关功能将无法使用!"
MessageHelper().put(f"注意:{err_msg}")
CommandChian().post_message(
Notification(
mtype=NotificationType.Manual,
title="MoviePilot用户认证",
text=err_msg
)
)
@App.on_event("shutdown")
def shutdown_server():
"""
@@ -165,10 +185,14 @@ def start_module():
"""
启动模块
"""
# 初始化超级管理员
init_super_user()
# 虚拟显示
DisplayHelper()
# 站点管理
SitesHelper()
# 资源包检测
ResourceHelper()
# 加载模块
ModuleManager()
# 加载插件
@@ -181,6 +205,8 @@ def start_module():
init_routers()
# 启动前端服务
start_frontend()
# 检查认证状态
check_auth()
if __name__ == '__main__':


@@ -35,6 +35,13 @@ class _ModuleBase(metaclass=ABCMeta):
"""
pass
@abstractmethod
def test(self) -> Tuple[bool, str]:
"""
模块测试, 返回测试结果和错误信息
"""
pass
def checkMessage(channel_type: MessageChannel):
"""
@@ -60,6 +67,8 @@ def checkMessage(channel_type: MessageChannel):
return None
if channel_type == MessageChannel.SynologyChat and not switch.get("synologychat"):
return None
if channel_type == MessageChannel.VoceChat and not switch.get("vocechat"):
return None
return func(self, message, *args, **kwargs)
return wrapper
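The `checkMessage` decorator pattern above — skip the handler when the message's channel does not match the module's — can be sketched generically (hypothetical names, not the project's actual decorator):

```python
from functools import wraps

def check_channel(required: str):
    """Decorator sketch: return None without calling the handler
    when the incoming message's channel does not match."""
    def deco(func):
        @wraps(func)
        def wrapper(self, message, *args, **kwargs):
            if getattr(message, "channel", None) != required:
                return None  # silently skip mismatched channels
            return func(self, message, *args, **kwargs)
        return wrapper
    return deco
```

Returning `None` instead of raising keeps a multi-channel dispatcher simple: every module's handler can be called unconditionally and only the matching one does work.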


@@ -0,0 +1,93 @@
from typing import List, Optional, Tuple, Union
from app.core.context import MediaInfo
from app.log import logger
from app.modules import _ModuleBase
from app.modules.bangumi.bangumi import BangumiApi
from app.utils.http import RequestUtils
class BangumiModule(_ModuleBase):
bangumiapi: BangumiApi = None
def init_module(self) -> None:
self.bangumiapi = BangumiApi()
def stop(self):
pass
def test(self) -> Tuple[bool, str]:
"""
测试模块连接性
"""
ret = RequestUtils().get_res("https://api.bgm.tv/")
if ret and ret.status_code == 200:
return True, ""
elif ret:
return False, f"无法连接Bangumi错误码{ret.status_code}"
return False, "Bangumi网络连接失败"
def init_setting(self) -> Tuple[str, Union[str, bool]]:
pass
def recognize_media(self, bangumiid: int = None,
**kwargs) -> Optional[MediaInfo]:
"""
识别媒体信息
:param bangumiid: 识别的Bangumi ID
:return: 识别的媒体信息,包括剧集信息
"""
if not bangumiid:
return None
# 直接查询详情
info = self.bangumi_info(bangumiid=bangumiid)
if info:
# 赋值TMDB信息并返回
mediainfo = MediaInfo(bangumi_info=info)
logger.info(f"{bangumiid} Bangumi识别结果{mediainfo.type.value} "
f"{mediainfo.title_year}")
return mediainfo
else:
logger.info(f"{bangumiid} 未匹配到Bangumi媒体信息")
return None
def bangumi_info(self, bangumiid: int) -> Optional[dict]:
"""
获取Bangumi信息
:param bangumiid: BangumiID
:return: Bangumi信息
"""
if not bangumiid:
return None
logger.info(f"开始获取Bangumi信息{bangumiid} ...")
return self.bangumiapi.detail(bangumiid)
def bangumi_calendar(self, page: int = 1, count: int = 30) -> Optional[List[dict]]:
"""
获取Bangumi每日放送
:param page: 页码
:param count: 每页数量
"""
return self.bangumiapi.calendar(page, count)
def bangumi_credits(self, bangumiid: int, page: int = 1, count: int = 20) -> List[dict]:
"""
根据BangumiID查询演职员表
:param bangumiid: BangumiID
:param page: 页码
:param count: 数量
"""
persons = self.bangumiapi.persons(bangumiid) or []
if persons:
return persons[(page - 1) * count: page * count]
else:
return []
def bangumi_recommend(self, bangumiid: int) -> List[dict]:
"""
根据BangumiID查询推荐电影
:param bangumiid: BangumiID
"""
return self.bangumiapi.subjects(bangumiid) or []
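Several methods above page a full result list with the same slice expression; as a standalone helper (hypothetical name):

```python
from typing import List

def paginate(items: List[dict], page: int = 1, count: int = 20) -> List[dict]:
    """Slice a full result list into one page (pages are 1-indexed).
    Out-of-range pages simply yield an empty list."""
    return items[(page - 1) * count: page * count]
```

Python slicing never raises on out-of-range bounds, so no explicit boundary checks are needed.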


@@ -0,0 +1,154 @@
from datetime import datetime
from functools import lru_cache
import requests
from app.utils.http import RequestUtils
class BangumiApi(object):
"""
https://bangumi.github.io/api/
"""
_urls = {
"calendar": "calendar",
"detail": "v0/subjects/%s",
"persons": "v0/subjects/%s/persons",
"subjects": "v0/subjects/%s/subjects"
}
_base_url = "https://api.bgm.tv/"
_req = RequestUtils(session=requests.Session())
def __init__(self):
pass
@classmethod
@lru_cache(maxsize=128)
def __invoke(cls, url, **kwargs):
req_url = cls._base_url + url
params = {}
if kwargs:
params.update(kwargs)
resp = cls._req.get_res(url=req_url, params=params)
try:
return resp.json() if resp else None
except Exception as e:
print(e)
return None
def calendar(self, page: int = 1, count: int = 30):
"""
获取每日放送返回items
"""
"""
[
{
"weekday": {
"en": "Mon",
"cn": "星期一",
"ja": "月耀日",
"id": 1
},
"items": [
{
"id": 350235,
"url": "http://bgm.tv/subject/350235",
"type": 2,
"name": "月が導く異世界道中 第二幕",
"name_cn": "月光下的异世界之旅 第二幕",
"summary": "",
"air_date": "2024-01-08",
"air_weekday": 1,
"rating": {
"total": 257,
"count": {
"1": 1,
"2": 1,
"3": 4,
"4": 15,
"5": 51,
"6": 111,
"7": 49,
"8": 13,
"9": 5,
"10": 7
},
"score": 6.1
},
"rank": 6125,
"images": {
"large": "http://lain.bgm.tv/pic/cover/l/3c/a5/350235_A0USf.jpg",
"common": "http://lain.bgm.tv/pic/cover/c/3c/a5/350235_A0USf.jpg",
"medium": "http://lain.bgm.tv/pic/cover/m/3c/a5/350235_A0USf.jpg",
"small": "http://lain.bgm.tv/pic/cover/s/3c/a5/350235_A0USf.jpg",
"grid": "http://lain.bgm.tv/pic/cover/g/3c/a5/350235_A0USf.jpg"
},
"collection": {
"doing": 920
}
},
{
"id": 358561,
"url": "http://bgm.tv/subject/358561",
"type": 2,
"name": "大宇宙时代",
"name_cn": "大宇宙时代",
"summary": "",
"air_date": "2024-01-22",
"air_weekday": 1,
"rating": {
"total": 2,
"count": {
"1": 0,
"2": 0,
"3": 0,
"4": 0,
"5": 1,
"6": 1,
"7": 0,
"8": 0,
"9": 0,
"10": 0
},
"score": 5.5
},
"images": {
"large": "http://lain.bgm.tv/pic/cover/l/71/66/358561_UzsLu.jpg",
"common": "http://lain.bgm.tv/pic/cover/c/71/66/358561_UzsLu.jpg",
"medium": "http://lain.bgm.tv/pic/cover/m/71/66/358561_UzsLu.jpg",
"small": "http://lain.bgm.tv/pic/cover/s/71/66/358561_UzsLu.jpg",
"grid": "http://lain.bgm.tv/pic/cover/g/71/66/358561_UzsLu.jpg"
},
"collection": {
"doing": 9
}
}
]
}
]
"""
ret_list = []
result = self.__invoke(self._urls["calendar"], _ts=datetime.strftime(datetime.now(), '%Y%m%d'))
if result:
for item in result:
ret_list.extend(item.get("items") or [])
return ret_list[(page - 1) * count: page * count]
def detail(self, bid: int):
"""
获取番剧详情
"""
return self.__invoke(self._urls["detail"] % bid, _ts=datetime.strftime(datetime.now(), '%Y%m%d'))
def persons(self, bid: int):
"""
获取番剧人物
"""
return self.__invoke(self._urls["persons"] % bid, _ts=datetime.strftime(datetime.now(), '%Y%m%d'))
def subjects(self, bid: int):
"""
获取关联条目信息
"""
return self.__invoke(self._urls["subjects"] % bid, _ts=datetime.strftime(datetime.now(), '%Y%m%d'))
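`__invoke` combines `lru_cache` with a `_ts` date string: besides riding along as a harmless query parameter, `_ts` varies the cache key, so cached entries effectively expire when the date changes. A sketch of the trick (hypothetical `fetch`, with a call counter standing in for the HTTP request):

```python
from datetime import datetime
from functools import lru_cache

calls = {"n": 0}

@lru_cache(maxsize=128)
def fetch(url: str, _ts: str = "") -> str:
    """_ts does not change the result; it only varies the cache key,
    so entries are re-fetched once the date string rolls over."""
    calls["n"] += 1  # counts real "network" hits
    return f"payload for {url}"

today = datetime.now().strftime("%Y%m%d")
fetch("calendar", _ts=today)    # first call: cache miss
fetch("calendar", _ts=today)    # same key: served from cache
fetch("calendar", _ts="other")  # new key: miss again
```

This is a cheap way to get time-bounded caching without a TTL-aware cache implementation.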


@@ -13,6 +13,7 @@ from app.modules.douban.douban_cache import DoubanCache
from app.modules.douban.scraper import DoubanScraper
from app.schemas.types import MediaType
from app.utils.common import retry
from app.utils.http import RequestUtils
from app.utils.system import SystemUtils
@@ -29,30 +30,53 @@ class DoubanModule(_ModuleBase):
def stop(self):
pass
def test(self) -> Tuple[bool, str]:
"""
测试模块连接性
"""
ret = RequestUtils().get_res("https://movie.douban.com/")
if ret and ret.status_code == 200:
return True, ""
elif ret:
return False, f"无法连接豆瓣,错误码:{ret.status_code}"
return False, "豆瓣网络连接失败"
def init_setting(self) -> Tuple[str, Union[str, bool]]:
pass
def recognize_media(self, meta: MetaBase = None,
mtype: MediaType = None,
doubanid: str = None,
cache: bool = True,
**kwargs) -> Optional[MediaInfo]:
"""
识别媒体信息
:param meta: 识别的元数据
:param mtype: 识别的媒体类型与doubanid配套
:param doubanid: 豆瓣ID
:param cache: 是否使用缓存
:return: 识别的媒体信息,包括剧集信息
"""
if settings.RECOGNIZE_SOURCE != "douban":
if not doubanid and not meta:
return None
if meta and not doubanid \
and settings.RECOGNIZE_SOURCE != "douban":
return None
if not meta:
cache_info = {}
elif not meta.name:
logger.error("识别媒体信息时未提供元数据名称")
return None
else:
if mtype:
meta.type = mtype
if doubanid:
meta.doubanid = doubanid
# 读取缓存
cache_info = self.cache.get(meta)
if not cache_info:
if not cache_info or not cache:
# 缓存没有或者强制不使用缓存
if doubanid:
# 直接查询详情
@@ -80,7 +104,7 @@ class DoubanModule(_ModuleBase):
logger.error("识别媒体信息时未提供元数据或豆瓣ID")
return None
# 保存到缓存
if meta:
if meta and cache:
self.cache.update(meta, info)
else:
# 使用缓存信息
@@ -561,7 +585,7 @@ class DoubanModule(_ModuleBase):
continue
if mtype and mtype.value != type_name:
continue
if mtype == MediaType.TV and not season:
if mtype and mtype == MediaType.TV and not season:
season = 1
item = item_obj.get("target")
title = item.get("title")
@@ -588,12 +612,15 @@ class DoubanModule(_ModuleBase):
return []
return infos.get("subject_collection_items")
def scrape_metadata(self, path: Path, mediainfo: MediaInfo, transfer_type: str) -> None:
def scrape_metadata(self, path: Path, mediainfo: MediaInfo, transfer_type: str,
force_nfo: bool = False, force_img: bool = False) -> None:
"""
刮削元数据
:param path: 媒体文件路径
:param mediainfo: 识别的媒体信息
:param transfer_type: 传输类型
:param force_nfo: 是否强制刮削nfo
:param force_img: 是否强制刮削图片
:return: 成功或失败
"""
if settings.SCRAP_SOURCE != "douban":
@@ -618,12 +645,21 @@ class DoubanModule(_ModuleBase):
else:
doubaninfo = self.douban_info(doubanid=mediainfo.douban_id,
mtype=mediainfo.type)
if not doubaninfo:
logger.warn(f"未获取到 {mediainfo.douban_id} 的豆瓣媒体信息,无法刮削!")
return
# 豆瓣媒体信息
mediainfo = MediaInfo(douban_info=doubaninfo)
# 补充图片
self.obtain_images(mediainfo)
# 刮削路径
scrape_path = path / path.name
self.scraper.gen_scraper_files(meta=meta,
mediainfo=MediaInfo(douban_info=doubaninfo),
mediainfo=mediainfo,
file_path=scrape_path,
transfer_type=transfer_type)
transfer_type=transfer_type,
force_nfo=force_nfo,
force_img=force_img)
else:
# 目录下的所有文件
for file in SystemUtils.list_files(path, settings.RMT_MEDIAEXT):
@@ -649,18 +685,104 @@ class DoubanModule(_ModuleBase):
else:
doubaninfo = self.douban_info(doubanid=mediainfo.douban_id,
mtype=mediainfo.type)
if not doubaninfo:
logger.warn(f"未获取到 {mediainfo.douban_id} 的豆瓣媒体信息,无法刮削!")
continue
# 豆瓣媒体信息
mediainfo = MediaInfo(douban_info=doubaninfo)
# 补充图片
self.obtain_images(mediainfo)
# 刮削
self.scraper.gen_scraper_files(meta=meta,
mediainfo=MediaInfo(douban_info=doubaninfo),
mediainfo=mediainfo,
file_path=file,
transfer_type=transfer_type)
transfer_type=transfer_type,
force_nfo=force_nfo,
force_img=force_img)
except Exception as e:
logger.error(f"刮削文件 {file} 失败,原因:{str(e)}")
logger.info(f"{path} 刮削完成")
def obtain_images(self, mediainfo: MediaInfo) -> Optional[MediaInfo]:
"""
补充抓取媒体信息图片
:param mediainfo: 识别的媒体信息
:return: 更新后的媒体信息
"""
if settings.RECOGNIZE_SOURCE != "douban":
return None
if not mediainfo.douban_id:
return None
if mediainfo.backdrop_path:
# 已有背景图,无需再补充
return mediainfo
# 调用图片接口
if not mediainfo.backdrop_path:
if mediainfo.type == MediaType.MOVIE:
info = self.doubanapi.movie_photos(mediainfo.douban_id)
else:
info = self.doubanapi.tv_photos(mediainfo.douban_id)
if not info:
return mediainfo
images = info.get("photos")
# 背景图
if images:
backdrop = images[0].get("image", {}).get("large") or {}
if backdrop:
mediainfo.backdrop_path = backdrop.get("url")
return mediainfo
def clear_cache(self):
"""
清除缓存
"""
logger.info("开始清除豆瓣缓存 ...")
self.doubanapi.clear_cache()
self.cache.clear()
logger.info("豆瓣缓存清除完成")
def douban_movie_credits(self, doubanid: str, page: int = 1, count: int = 20) -> List[dict]:
"""
根据豆瓣ID查询电影演职员表
:param doubanid: 豆瓣ID
:param page: 页码
:param count: 数量
"""
result = self.doubanapi.movie_celebrities(subject_id=doubanid)
if not result:
return []
ret_list = result.get("actors") or []
if ret_list:
return ret_list[(page - 1) * count: page * count]
else:
return []
def douban_tv_credits(self, doubanid: str, page: int = 1, count: int = 20) -> List[dict]:
"""
根据豆瓣ID查询电视剧演职员表
:param doubanid: 豆瓣ID
:param page: 页码
:param count: 数量
"""
result = self.doubanapi.tv_celebrities(subject_id=doubanid)
if not result:
return []
ret_list = result.get("actors") or []
if ret_list:
return ret_list[(page - 1) * count: page * count]
else:
return []
def douban_movie_recommend(self, doubanid: str) -> List[dict]:
"""
根据豆瓣ID查询推荐电影
:param doubanid: 豆瓣ID
"""
return self.doubanapi.movie_recommendations(subject_id=doubanid) or []
def douban_tv_recommend(self, doubanid: str) -> List[dict]:
"""
根据豆瓣ID查询推荐电视剧
:param doubanid: 豆瓣ID
"""
return self.doubanapi.tv_recommendations(subject_id=doubanid) or []


@@ -427,6 +427,54 @@ class DoubanApi(metaclass=Singleton):
return self.__invoke(self._urls["doulist_items"] % subject_id,
start=start, count=count, _ts=ts)
def movie_recommendations(self, subject_id: str, start: int = 0, count: int = 20,
ts=datetime.strftime(datetime.now(), '%Y%m%d')):
"""
电影推荐
:param subject_id: 电影id
:param start: 开始
:param count: 数量
:param ts: 时间戳
"""
return self.__invoke(self._urls["movie_recommendations"] % subject_id,
start=start, count=count, _ts=ts)
def tv_recommendations(self, subject_id: str, start: int = 0, count: int = 20,
ts=datetime.strftime(datetime.now(), '%Y%m%d')):
"""
电视剧推荐
:param subject_id: 电视剧id
:param start: 开始
:param count: 数量
:param ts: 时间戳
"""
return self.__invoke(self._urls["tv_recommendations"] % subject_id,
start=start, count=count, _ts=ts)
def movie_photos(self, subject_id: str, start: int = 0, count: int = 20,
ts=datetime.strftime(datetime.now(), '%Y%m%d')):
"""
电影剧照
:param subject_id: 电影id
:param start: 开始
:param count: 数量
:param ts: 时间戳
"""
return self.__invoke(self._urls["movie_photos"] % subject_id,
start=start, count=count, _ts=ts)
def tv_photos(self, subject_id: str, start: int = 0, count: int = 20,
ts=datetime.strftime(datetime.now(), '%Y%m%d')):
"""
电视剧剧照
:param subject_id: 电视剧id
:param start: 开始
:param count: 数量
:param ts: 时间戳
"""
return self.__invoke(self._urls["tv_photos"] % subject_id,
start=start, count=count, _ts=ts)
def clear_cache(self):
"""
清空LRU缓存


@@ -1,6 +1,7 @@
import pickle
import random
import time
import traceback
from pathlib import Path
from threading import RLock
from typing import Optional
@@ -8,6 +9,7 @@ from typing import Optional
from app.core.config import settings
from app.core.meta import MetaBase
from app.core.metainfo import MetaInfo
from app.log import logger
from app.utils.singleton import Singleton
from app.schemas.types import MediaType
@@ -49,7 +51,8 @@ class DoubanCache(metaclass=Singleton):
"""
获取缓存KEY
"""
return f"[{meta.type.value if meta.type else '未知'}]{meta.name}-{meta.year}-{meta.begin_season}"
return f"[{meta.type.value if meta.type else '未知'}]" \
f"{meta.name or meta.doubanid}-{meta.year}-{meta.begin_season}"
def get(self, meta: MetaBase):
"""
@@ -119,7 +122,7 @@ class DoubanCache(metaclass=Singleton):
return data
return {}
except Exception as e:
print(str(e))
logger.error(f"加载缓存失败: {str(e)} - {traceback.format_exc()}")
return {}
def update(self, meta: MetaBase, info: dict) -> None:


@@ -14,45 +14,67 @@ from app.utils.system import SystemUtils
class DoubanScraper:
_transfer_type = settings.TRANSFER_TYPE
_force_nfo = False
_force_img = False
def gen_scraper_files(self, meta: MetaBase, mediainfo: MediaInfo,
file_path: Path, transfer_type: str):
file_path: Path, transfer_type: str,
force_nfo: bool = False, force_img: bool = False):
"""
生成刮削文件
:param meta: 元数据
:param mediainfo: 媒体信息
:param file_path: 文件路径或者目录路径
:param transfer_type: 转输类型
:param force_nfo: 强制生成NFO
:param force_img: 强制生成图片
"""
self._transfer_type = transfer_type
self._force_nfo = force_nfo
self._force_img = force_img
try:
# 电影
if mediainfo.type == MediaType.MOVIE:
# 强制或文件不存在时才处理
if not file_path.with_name("movie.nfo").exists() \
and not file_path.with_suffix(".nfo").exists():
if self._force_nfo or (not file_path.with_name("movie.nfo").exists()
and not file_path.with_suffix(".nfo").exists()):
# 生成电影描述文件
self.__gen_movie_nfo_file(mediainfo=mediainfo,
file_path=file_path)
# 生成电影图片
self.__save_image(url=mediainfo.poster_path,
file_path=file_path.with_name(f"poster{Path(mediainfo.poster_path).suffix}"))
image_path = file_path.with_name(f"poster{Path(mediainfo.poster_path).suffix}")
if self._force_img or not image_path.exists():
self.__save_image(url=mediainfo.poster_path,
file_path=image_path)
# 背景图
if mediainfo.backdrop_path:
image_path = file_path.with_name(f"backdrop{Path(mediainfo.backdrop_path).suffix}")
if self._force_img or not image_path.exists():
self.__save_image(url=mediainfo.backdrop_path,
file_path=image_path)
# 电视剧
else:
# 强制或不存在时才处理
if not file_path.parent.with_name("tvshow.nfo").exists():
if self._force_nfo or not file_path.parent.with_name("tvshow.nfo").exists():
# 根目录描述文件
self.__gen_tv_nfo_file(mediainfo=mediainfo,
dir_path=file_path.parents[1])
# 生成根目录图片
self.__save_image(url=mediainfo.poster_path,
file_path=file_path.with_name(f"poster{Path(mediainfo.poster_path).suffix}"))
image_path = file_path.with_name(f"poster{Path(mediainfo.poster_path).suffix}")
if self._force_img or not image_path.exists():
self.__save_image(url=mediainfo.poster_path,
file_path=image_path)
# 背景图
if mediainfo.backdrop_path:
image_path = file_path.with_name(f"backdrop{Path(mediainfo.backdrop_path).suffix}")
if self._force_img or not image_path.exists():
self.__save_image(url=mediainfo.backdrop_path,
file_path=image_path)
# 季目录NFO
if not file_path.with_name("season.nfo").exists():
if self._force_nfo or not file_path.with_name("season.nfo").exists():
self.__gen_tv_season_nfo_file(mediainfo=mediainfo,
season=meta.begin_season,
season_path=file_path.parent)
@@ -167,8 +189,6 @@ class DoubanScraper:
"""
下载图片并保存
"""
if file_path.exists():
return
if not url:
return
try:
@@ -193,8 +213,6 @@ class DoubanScraper:
"""
保存NFO
"""
if file_path.exists():
return
xml_str = doc.toprettyxml(indent=" ", encoding="utf-8")
if self._transfer_type in ['rclone_move', 'rclone_copy']:
self.__save_remove_file(file_path, xml_str)
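The `force_nfo`/`force_img` gating above repeats one pattern: write when forced, or when the target file is missing. As a tiny helper (hypothetical name):

```python
from pathlib import Path

def should_write(path: Path, force: bool) -> bool:
    """Write when forced, or when the target does not exist yet."""
    return force or not path.exists()
```

Centralizing the check keeps every NFO and image branch from re-encoding the same `force or not exists` condition.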


@@ -17,6 +17,16 @@ class EmbyModule(_ModuleBase):
def stop(self):
pass
def test(self) -> Tuple[bool, str]:
"""
测试模块连接性
"""
if self.emby.is_inactive():
self.emby.reconnect()
if not self.emby.get_user():
return False, "无法连接Emby请检查参数配置"
return True, ""
def init_setting(self) -> Tuple[str, Union[str, bool]]:
return "MEDIASERVER", "emby"
@@ -103,13 +113,13 @@ class EmbyModule(_ModuleBase):
media_statistic.user_count = self.emby.get_user_count()
return [media_statistic]
def mediaserver_librarys(self, server: str) -> Optional[List[schemas.MediaServerLibrary]]:
def mediaserver_librarys(self, server: str = None, username: str = None) -> Optional[List[schemas.MediaServerLibrary]]:
"""
媒体库列表
"""
if server != "emby":
if server and server != "emby":
return None
return self.emby.get_librarys()
return self.emby.get_librarys(username)
def mediaserver_items(self, server: str, library_id: str) -> Optional[Generator]:
"""
@@ -141,3 +151,29 @@ class EmbyModule(_ModuleBase):
season=season,
episodes=episodes
) for season, episodes in seasoninfo.items()]
def mediaserver_playing(self, count: int = 20,
server: str = None, username: str = None) -> List[schemas.MediaServerPlayItem]:
"""
获取媒体服务器正在播放信息
"""
if server and server != "emby":
return []
return self.emby.get_resume(num=count, username=username)
def mediaserver_play_url(self, server: str, item_id: Union[str, int]) -> Optional[str]:
"""
获取媒体库播放地址
"""
if server != "emby":
return None
return self.emby.get_play_url(item_id)
def mediaserver_latest(self, count: int = 20,
server: str = None, username: str = None) -> List[schemas.MediaServerPlayItem]:
"""
获取媒体服务器最新入库条目
"""
if server and server != "emby":
return []
return self.emby.get_latest(num=count, username=username)
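The signature changes above make `server` optional, so `None` now means "address every media-server module" while a mismatching name still short-circuits. A hedged sketch of that dispatch guard (names are illustrative, not the module's API):

```python
from typing import List, Optional


def answer_for_server(server: Optional[str],
                      module_name: str,
                      items: List[str]) -> List[str]:
    """Return items only when this module is the addressed server.

    server=None is a broadcast and is never filtered out; any other
    non-matching name yields an empty result, as in `if server and
    server != "emby": return []`."""
    if server and server != module_name:
        return []
    return items
```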


@@ -1,5 +1,6 @@
import json
import re
import traceback
from pathlib import Path
from typing import List, Optional, Union, Dict, Generator, Tuple
@@ -10,10 +11,9 @@ from app.core.config import settings
from app.log import logger
from app.schemas.types import MediaType
from app.utils.http import RequestUtils
from app.utils.singleton import Singleton
class Emby(metaclass=Singleton):
class Emby:
def __init__(self):
self._host = settings.EMBY_HOST
@@ -22,9 +22,16 @@ class Emby(metaclass=Singleton):
self._host += "/"
if not self._host.startswith("http"):
self._host = "http://" + self._host
self._playhost = settings.EMBY_PLAY_HOST
if self._playhost:
if not self._playhost.endswith("/"):
self._playhost += "/"
if not self._playhost.startswith("http"):
self._playhost = "http://" + self._playhost
self._apikey = settings.EMBY_API_KEY
self.user = self.get_user(settings.SUPERUSER)
self.folders = self.get_emby_folders()
self.serverid = self.get_server_id()
def is_inactive(self) -> bool:
"""
@@ -59,13 +66,52 @@ class Emby(metaclass=Singleton):
logger.error(f"连接Library/SelectableMediaFolders 出错:" + str(e))
return []
def __get_emby_librarys(self) -> List[dict]:
def get_emby_virtual_folders(self) -> List[dict]:
"""
获取Emby媒体库所有路径列表包含共享路径
"""
if not self._host or not self._apikey:
return []
req_url = "%semby/Library/VirtualFolders/Query?api_key=%s" % (self._host, self._apikey)
try:
res = RequestUtils().get_res(req_url)
if res:
library_items = res.json().get("Items")
librarys = []
for library_item in library_items:
library_name = library_item.get('Name')
pathInfos = library_item.get('LibraryOptions', {}).get('PathInfos')
library_paths = []
for path in pathInfos:
if path.get('NetworkPath'):
library_paths.append(path.get('NetworkPath'))
else:
library_paths.append(path.get('Path'))
if library_name and library_paths:
librarys.append({
'Name': library_name,
'Path': library_paths
})
return librarys
else:
logger.error(f"Library/VirtualFolders/Query 未获取到返回数据")
return []
except Exception as e:
logger.error(f"连接Library/VirtualFolders/Query 出错:" + str(e))
return []
def __get_emby_librarys(self, username: str = None) -> List[dict]:
"""
获取Emby媒体库列表
"""
if not self._host or not self._apikey:
return []
req_url = f"{self._host}emby/Users/{self.user}/Views?api_key={self._apikey}"
if username:
user = self.get_user(username)
else:
user = self.user
req_url = f"{self._host}emby/Users/{user}/Views?api_key={self._apikey}"
try:
res = RequestUtils().get_res(req_url)
if res:
@@ -77,14 +123,17 @@ class Emby(metaclass=Singleton):
logger.error(f"连接User/Views 出错:" + str(e))
return []
def get_librarys(self) -> List[schemas.MediaServerLibrary]:
def get_librarys(self, username: str = None) -> List[schemas.MediaServerLibrary]:
"""
获取媒体服务器所有媒体库列表
"""
if not self._host or not self._apikey:
return []
libraries = []
for library in self.__get_emby_librarys() or []:
black_list = (settings.MEDIASERVER_SYNC_BLACKLIST or '').split(",")
for library in self.__get_emby_librarys(username) or []:
if library.get("Name") in black_list:
continue
match library.get("CollectionType"):
case "movies":
library_type = MediaType.MOVIE.value
@@ -92,13 +141,17 @@ class Emby(metaclass=Singleton):
library_type = MediaType.TV.value
case _:
continue
image = self.__get_local_image_by_id(library.get("Id"))
libraries.append(
schemas.MediaServerLibrary(
server="emby",
id=library.get("Id"),
name=library.get("Name"),
path=library.get("Path"),
type=library_type
type=library_type,
image=image,
link=f'{self._playhost or self._host}web/index.html'
f'#!/videos?serverId={self.serverid}&parentId={library.get("Id")}'
)
)
return libraries
@@ -366,7 +419,7 @@ class Emby(metaclass=Singleton):
season_episodes[season_index] = []
season_episodes[season_index].append(episode_index)
# 返回
return tv_item.get("Id"), season_episodes
return item_id, season_episodes
except Exception as e:
logger.error(f"连接Shows/Id/Episodes出错" + str(e))
return None, None
@@ -383,20 +436,45 @@ class Emby(metaclass=Singleton):
return None
req_url = "%semby/Items/%s/RemoteImages?api_key=%s" % (self._host, item_id, self._apikey)
try:
res = RequestUtils().get_res(req_url)
res = RequestUtils(timeout=10).get_res(req_url)
if res:
images = res.json().get("Images")
for image in images:
if image.get("ProviderName") == "TheMovieDb" and image.get("Type") == image_type:
return image.get("Url")
else:
logger.error(f"Items/RemoteImages 未获取到返回数据")
return None
if images:
for image in images:
logger.info(image)
if image.get("ProviderName") == "TheMovieDb" and image.get("Type") == image_type:
return image.get("Url")
# 数据为空
logger.info(f"Items/RemoteImages 未获取到返回数据,采用本地图片")
return self.generate_external_image_link(item_id, image_type)
except Exception as e:
logger.error(f"连接Items/Id/RemoteImages出错" + str(e))
return None
return None
def generate_external_image_link(self, item_id: str, image_type: str) -> Optional[str]:
"""
根据ItemId和imageType查询本地对应图片
:param item_id: 在Emby中的ID
:param image_type: 图片类型如Backdrop、Primary
:return: 图片对应在外网播放器中的URL
"""
if not self._playhost:
logger.error("Emby外网播放地址未能获取或为空")
return None
req_url = "%sItems/%s/Images/%s" % (self._playhost, item_id, image_type)
try:
res = RequestUtils().get_res(req_url)
if res and res.status_code != 404:
logger.info("影片图片链接:{}".format(res.url))
return res.url
else:
logger.error("Items/Id/Images 未获取到返回数据或无该影片{}图片".format(image_type))
return None
except Exception as e:
logger.error(f"连接Items/Id/Images出错" + str(e))
return None
def __refresh_emby_library_by_id(self, item_id: str) -> bool:
"""
通知Emby刷新一个项目的媒体库
@@ -482,7 +560,7 @@ class Emby(metaclass=Singleton):
if item_path.is_relative_to(subfolder_path):
return folder.get("Id")
except Exception as err:
print(str(err))
logger.debug(f"匹配子目录出错:{err} - {traceback.format_exc()}")
# 如果找不到,只要路径中有分类目录名就命中
for folder in self.folders:
for subfolder in folder.get("SubFolders"):
@@ -846,7 +924,7 @@ class Emby(metaclass=Singleton):
eventItem.overview = message.get('Item', {}).get('Overview')
eventItem.percentage = message.get('TranscodingInfo', {}).get('CompletionPercentage')
if not eventItem.percentage:
if message.get('PlaybackInfo', {}).get('PositionTicks'):
if message.get('PlaybackInfo', {}).get('PositionTicks') and message.get('Item', {}).get('RunTimeTicks'):
eventItem.percentage = message.get('PlaybackInfo', {}).get('PositionTicks') / \
message.get('Item', {}).get('RunTimeTicks') * 100
if message.get('Session'):
@@ -879,9 +957,9 @@ class Emby(metaclass=Singleton):
"""
if not self._host or not self._apikey:
return None
url = url.replace("[HOST]", self._host) \
.replace("[APIKEY]", self._apikey) \
.replace("[USER]", self.user)
url = url.replace("[HOST]", self._host or '') \
.replace("[APIKEY]", self._apikey or '') \
.replace("[USER]", self.user or '')
try:
return RequestUtils(content_type="application/json").get_res(url=url)
except Exception as e:
@@ -897,9 +975,9 @@ class Emby(metaclass=Singleton):
"""
if not self._host or not self._apikey:
return None
url = url.replace("[HOST]", self._host) \
.replace("[APIKEY]", self._apikey) \
.replace("[USER]", self.user)
url = url.replace("[HOST]", self._host or '') \
.replace("[APIKEY]", self._apikey or '') \
.replace("[USER]", self.user or '')
try:
return RequestUtils(
headers=headers,
@@ -907,3 +985,160 @@ class Emby(metaclass=Singleton):
except Exception as e:
logger.error(f"连接Emby出错" + str(e))
return None
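The `or ''` fallbacks added above matter because `str.replace` raises `TypeError` when handed `None` as the replacement string. A minimal reproduction of the safe substitution (helper name is illustrative):

```python
from typing import Optional


def fill_placeholders(url: str,
                      host: Optional[str],
                      apikey: Optional[str],
                      user: Optional[str]) -> str:
    """Substitute [HOST]/[APIKEY]/[USER] placeholders in a template URL.

    str.replace(old, None) raises TypeError, so every possibly-unset
    value falls back to '' first."""
    return (url.replace("[HOST]", host or '')
               .replace("[APIKEY]", apikey or '')
               .replace("[USER]", user or ''))
```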
def get_play_url(self, item_id: str) -> str:
"""
拼装媒体播放链接
:param item_id: 媒体的ID
"""
return f"{self._playhost or self._host}web/index.html#!" \
f"/item?id={item_id}&context=home&serverId={self.serverid}"
def __get_backdrop_url(self, item_id: str, image_tag: str) -> str:
"""
获取Emby的Backdrop图片地址
:param item_id: 在Emby中的ID
:param image_tag: 图片的tag
"""
if not self._host or not self._apikey:
return ""
if not image_tag or not item_id:
return ""
return f"{self._host}Items/{item_id}/" \
f"Images/Backdrop?tag={image_tag}&fillWidth=666&api_key={self._apikey}"
def __get_local_image_by_id(self, item_id: str) -> str:
"""
根据ItemId从媒体服务器查询本地图片地址
:param item_id: 在Emby中的ID
"""
if not self._host or not self._apikey:
return ""
return "%sItems/%s/Images/Primary" % (self._host, item_id)
def get_resume(self, num: int = 12, username: str = None) -> Optional[List[schemas.MediaServerPlayItem]]:
"""
获得继续观看
"""
if not self._host or not self._apikey:
return None
if username:
user = self.get_user(username)
else:
user = self.user
req_url = (f"{self._host}Users/{user}/Items/Resume?"
f"Limit=100&MediaTypes=Video&api_key={self._apikey}&Fields=ProductionYear,Path")
try:
res = RequestUtils().get_res(req_url)
if res:
result = res.json().get("Items") or []
ret_resume = []
# 用户媒体库文件夹列表(排除黑名单)
library_folders = self.get_user_library_folders()
for item in result:
if len(ret_resume) == num:
break
if item.get("Type") not in ["Movie", "Episode"]:
continue
item_path = item.get("Path")
if item_path and library_folders and not any(
str(item_path).startswith(folder) for folder in library_folders):
continue
item_type = MediaType.MOVIE.value if item.get("Type") == "Movie" else MediaType.TV.value
link = self.get_play_url(item.get("Id"))
if item_type == MediaType.MOVIE.value:
title = item.get("Name")
subtitle = item.get("ProductionYear")
else:
title = f'{item.get("SeriesName")}'
subtitle = f'S{item.get("ParentIndexNumber")}:{item.get("IndexNumber")} - {item.get("Name")}'
if item_type == MediaType.MOVIE.value:
if item.get("BackdropImageTags"):
image = self.__get_backdrop_url(item_id=item.get("Id"),
image_tag=item.get("BackdropImageTags")[0])
else:
image = self.__get_local_image_by_id(item.get("Id"))
else:
image = self.__get_backdrop_url(item_id=item.get("SeriesId"),
image_tag=item.get("SeriesPrimaryImageTag"))
if not image:
image = self.__get_local_image_by_id(item.get("SeriesId"))
ret_resume.append(schemas.MediaServerPlayItem(
id=item.get("Id"),
title=title,
subtitle=subtitle,
type=item_type,
image=image,
link=link,
percent=item.get("UserData", {}).get("PlayedPercentage")
))
return ret_resume
else:
logger.error(f"Users/Items/Resume 未获取到返回数据")
except Exception as e:
logger.error(f"连接Users/Items/Resume出错" + str(e))
return []
def get_latest(self, num: int = 20, username: str = None) -> Optional[List[schemas.MediaServerPlayItem]]:
"""
获得最近更新
"""
if not self._host or not self._apikey:
return None
if username:
user = self.get_user(username)
else:
user = self.user
req_url = (f"{self._host}Users/{user}/Items/Latest?"
f"Limit=100&MediaTypes=Video&api_key={self._apikey}&Fields=ProductionYear,Path")
try:
res = RequestUtils().get_res(req_url)
if res:
result = res.json() or []
ret_latest = []
# 用户媒体库文件夹列表(排除黑名单)
library_folders = self.get_user_library_folders()
for item in result:
if len(ret_latest) == num:
break
if item.get("Type") not in ["Movie", "Series"]:
continue
item_path = item.get("Path")
if item_path and library_folders and not any(
str(item_path).startswith(folder) for folder in library_folders):
continue
item_type = MediaType.MOVIE.value if item.get("Type") == "Movie" else MediaType.TV.value
link = self.get_play_url(item.get("Id"))
image = self.__get_local_image_by_id(item_id=item.get("Id"))
ret_latest.append(schemas.MediaServerPlayItem(
id=item.get("Id"),
title=item.get("Name"),
subtitle=item.get("ProductionYear"),
type=item_type,
image=image,
link=link
))
return ret_latest
else:
logger.error(f"Users/Items/Latest 未获取到返回数据")
except Exception as e:
logger.error(f"连接Users/Items/Latest出错" + str(e))
return []
def get_user_library_folders(self):
"""
获取Emby媒体库文件夹列表排除黑名单
"""
if not self._host or not self._apikey:
return []
library_folders = []
black_list = (settings.MEDIASERVER_SYNC_BLACKLIST or '').split(",")
for library in self.get_emby_virtual_folders() or []:
if library.get("Name") in black_list:
continue
library_folders += [folder for folder in library.get("Path")]
return library_folders
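`get_user_library_folders` above flattens each virtual folder's paths while honoring the comma-separated `MEDIASERVER_SYNC_BLACKLIST`. A self-contained sketch of that filtering (the data shape is assumed from `get_emby_virtual_folders`):

```python
from typing import List, Optional


def user_library_folders(virtual_folders: Optional[List[dict]],
                         blacklist: Optional[str]) -> List[str]:
    """Flatten library paths, skipping any library whose name is blacklisted."""
    black = (blacklist or '').split(",")
    folders: List[str] = []
    for library in virtual_folders or []:
        if library.get("Name") in black:
            continue  # excluded by MEDIASERVER_SYNC_BLACKLIST
        folders += list(library.get("Path") or [])
    return folders
```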


@@ -317,6 +317,17 @@ class FanartModule(_ModuleBase):
def stop(self):
pass
def test(self) -> Tuple[bool, str]:
"""
测试模块连接性
"""
ret = RequestUtils().get_res("https://webservice.fanart.tv")
if ret and ret.status_code == 200:
return True, ""
elif ret:
return False, f"无法连接fanart,错误码:{ret.status_code}"
return False, "fanart网络连接失败"
def init_setting(self) -> Tuple[str, Union[str, bool]]:
return "FANART_API_KEY", True
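The new `test()` distinguishes three outcomes: HTTP 200, a non-200 status code, and no response at all. A dependency-free sketch of the same decision tree (`probe` and its injected `get` callable are illustrative, not the module's API):

```python
from typing import Callable, Tuple


def probe(get: Callable, url: str) -> Tuple[bool, str]:
    """Reachability check with three outcomes, mirroring test():
    success, an HTTP error code, or no response at all."""
    try:
        ret = get(url)
    except Exception:
        ret = None
    if ret is not None and ret.status_code == 200:
        return True, ""
    if ret is not None:
        return False, f"unreachable, status code {ret.status_code}"
    return False, "network connection failed"
```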
@@ -326,6 +337,8 @@ class FanartModule(_ModuleBase):
:param mediainfo: 识别的媒体信息
:return: 更新后的媒体信息
"""
if not settings.FANART_ENABLE:
return None
if not mediainfo.tmdb_id and not mediainfo.tvdb_id:
return None
if mediainfo.type == MediaType.MOVIE:


@@ -27,6 +27,40 @@ class FileTransferModule(_ModuleBase):
def stop(self):
pass
def test(self) -> Tuple[bool, str]:
"""
测试模块连接性
"""
if not settings.DOWNLOAD_PATH:
return False, "下载目录未设置"
# 检查下载目录
download_paths: List[str] = []
for path in [settings.DOWNLOAD_PATH,
settings.DOWNLOAD_MOVIE_PATH,
settings.DOWNLOAD_TV_PATH,
settings.DOWNLOAD_ANIME_PATH]:
if not path:
continue
download_path = Path(path)
if not download_path.exists():
return False, f"下载目录 {download_path} 不存在"
download_paths.append(path)
# 下载目录的设备ID
download_devids = [Path(path).stat().st_dev for path in download_paths]
# 检查媒体库目录
if not settings.LIBRARY_PATH:
return False, "媒体库目录未设置"
# 比较媒体库目录的设备ID
for path in settings.LIBRARY_PATHS:
library_path = Path(path)
if not library_path.exists():
return False, f"媒体库目录不存在:{library_path}"
if settings.DOWNLOADER_MONITOR and settings.TRANSFER_TYPE == "link":
if library_path.stat().st_dev not in download_devids:
return False, f"媒体库目录 {library_path} " \
f"与下载目录 {','.join(download_paths)} 不在同一设备,将无法硬链接"
return True, ""
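The hardlink check above relies on `st_dev`: hard links only work within a single filesystem, and two paths share one exactly when their stat device IDs match. A minimal sketch:

```python
from pathlib import Path


def can_hardlink(src: Path, dst: Path) -> bool:
    """True when src and dst live on the same filesystem.

    os.stat() reports the containing device as st_dev; hardlinks
    across devices fail with EXDEV, so equal IDs are required."""
    return src.stat().st_dev == dst.stat().st_dev
```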
def init_setting(self) -> Tuple[str, Union[str, bool]]:
pass
@@ -45,10 +79,20 @@ class FileTransferModule(_ModuleBase):
"""
# 获取目标路径
if not target:
# 未指定目的目录,根据源目录选择一个媒体库
target = self.get_target_path(in_path=path)
elif not target.exists() or target.is_file():
# 目的路径不存在或者是文件时,找对应的媒体库目录
target = self.get_library_path(target)
# 拼装媒体库一、二级子目录
target = self.__get_dest_dir(mediainfo=mediainfo, target_dir=target)
else:
# 指定了目的目录
if target.is_file():
logger.error(f"转移目标路径 {target} 是一个文件")
return TransferInfo(success=False,
path=path,
message=f"{target} 不是有效目录")
# 只拼装二级子目录(不要一级目录)
target = self.__get_dest_dir(mediainfo=mediainfo, target_dir=target, typename_dir=False)
if not target:
logger.error("未找到媒体库目录,无法转移文件")
return TransferInfo(success=False,
@@ -56,6 +100,7 @@ class FileTransferModule(_ModuleBase):
message="未找到媒体库目录")
else:
logger.info(f"获取转移目标路径:{target}")
# 转移
return self.transfer_media(in_path=path,
in_meta=meta,
@@ -160,6 +205,8 @@ class FileTransferModule(_ModuleBase):
if (org_path.stem == Path(sub_file_name).stem) or \
(sub_metainfo.cn_name and sub_metainfo.cn_name == metainfo.cn_name) \
or (sub_metainfo.en_name and sub_metainfo.en_name == metainfo.en_name):
if metainfo.part and metainfo.part != sub_metainfo.part:
continue
if metainfo.season \
and metainfo.season != sub_metainfo.season:
continue
@@ -308,7 +355,7 @@ class FileTransferModule(_ModuleBase):
:param transfer_type: RmtMode转移方式
:param over_flag: 是否覆盖为True时会先删除再转移
"""
if new_file.exists():
if new_file.exists() or new_file.is_symlink():
if not over_flag:
logger.warn(f"文件已存在:{new_file}")
return 0
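The added `or new_file.is_symlink()` is needed because `Path.exists()` follows symlinks: a dangling link alone reports `False`, so a plain existence check would try to write on top of it. A small demonstration (helper name is illustrative):

```python
from pathlib import Path


def occupies_target(p: Path) -> bool:
    """True if anything sits at p: a file, a directory, or even a
    dangling symlink.

    Path.exists() follows symlinks, so a broken link alone returns
    False; is_symlink() does not follow, catching that case."""
    return p.exists() or p.is_symlink()
```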
@@ -333,33 +380,42 @@ class FileTransferModule(_ModuleBase):
over_flag=over_flag)
@staticmethod
def __get_dest_dir(mediainfo: MediaInfo, target_dir: Path) -> Path:
def __get_dest_dir(mediainfo: MediaInfo, target_dir: Path, typename_dir: bool = True) -> Path:
"""
根据设置拼装媒体库目录
:param mediainfo: 媒体信息
:param target_dir: 媒体库根目录
:param typename_dir: 是否加上类型目录
"""
if not target_dir:
return target_dir
if mediainfo.type == MediaType.MOVIE:
# 电影
if settings.LIBRARY_MOVIE_NAME:
if typename_dir:
# 目的目录加上类型和二级分类
target_dir = target_dir / settings.LIBRARY_MOVIE_NAME / mediainfo.category
else:
# 目的目录加上类型和二级分类
target_dir = target_dir / mediainfo.type.value / mediainfo.category
# 目的目录加上二级分类
target_dir = target_dir / mediainfo.category
if mediainfo.type == MediaType.TV:
# 电视剧
if settings.LIBRARY_ANIME_NAME \
and mediainfo.genre_ids \
if mediainfo.genre_ids \
and set(mediainfo.genre_ids).intersection(set(settings.ANIME_GENREIDS)):
# 动漫
target_dir = target_dir / settings.LIBRARY_ANIME_NAME / mediainfo.category
elif settings.LIBRARY_TV_NAME:
# 电视剧
target_dir = target_dir / settings.LIBRARY_TV_NAME / mediainfo.category
if typename_dir:
target_dir = target_dir / (settings.LIBRARY_ANIME_NAME
or settings.LIBRARY_TV_NAME) / mediainfo.category
else:
target_dir = target_dir / mediainfo.category
else:
# 目的目录加上类型和二级分类
target_dir = target_dir / mediainfo.type.value / mediainfo.category
# 电视剧
if typename_dir:
target_dir = target_dir / settings.LIBRARY_TV_NAME / mediainfo.category
else:
target_dir = target_dir / mediainfo.category
return target_dir
def transfer_media(self,
@@ -389,12 +445,8 @@ class FileTransferModule(_ModuleBase):
if transfer_type not in ['rclone_copy', 'rclone_move']:
# 检查目标路径
if not target_dir.exists():
return TransferInfo(success=False,
path=in_path,
message=f"{target_dir} 目标路径不存在")
# 媒体库目的目录
target_dir = self.__get_dest_dir(mediainfo=mediainfo, target_dir=target_dir)
logger.info(f"目标路径不存在,正在创建:{target_dir} ...")
target_dir.mkdir(parents=True, exist_ok=True)
# 重命名格式
rename_format = settings.TV_RENAME_FORMAT \
@@ -416,14 +468,6 @@ class FileTransferModule(_ModuleBase):
rename_dict=self.__get_naming_dict(meta=in_meta,
mediainfo=mediainfo)
).parent
# 目录已存在时不处理
if new_path.exists():
logger.warn(f"目标目录已存在:{new_path}")
return TransferInfo(success=False,
message=f"目标目录已存在:{new_path}",
path=in_path,
target_path=new_path,
is_bluray=bluray_flag)
# 转移蓝光原盘
retcode = self.__transfer_dir(file_path=in_path,
new_path=new_path,
@@ -478,37 +522,46 @@ class FileTransferModule(_ModuleBase):
# 判断是否要覆盖
overflag = False
if new_file.exists():
# 目标文件已存在
logger.info(f"目标文件已存在,转移覆盖模式:{settings.OVERWRITE_MODE}")
match settings.OVERWRITE_MODE:
case 'always':
# 总是覆盖同名文件
target_file = new_file
if new_file.exists() or new_file.is_symlink():
if new_file.is_symlink():
target_file = new_file.readlink()
if not target_file.exists():
overflag = True
case 'size':
# 存在时大覆盖小
if new_file.stat().st_size < in_path.stat().st_size:
logger.info(f"目标文件大小更小,将被覆盖:{new_file}")
if not overflag:
# 目标文件已存在
logger.info(f"目标文件已存在,转移覆盖模式:{settings.OVERWRITE_MODE}")
match settings.OVERWRITE_MODE:
case 'always':
# 总是覆盖同名文件
overflag = True
else:
case 'size':
# 存在时大覆盖小
if target_file.stat().st_size < in_path.stat().st_size:
logger.info(f"目标文件大小更小,将覆盖:{new_file}")
overflag = True
else:
return TransferInfo(success=False,
message=f"媒体库中已存在,且质量更好",
path=in_path,
target_path=new_file,
fail_list=[str(in_path)])
case 'never':
# 存在不覆盖
return TransferInfo(success=False,
message=f"媒体库中已存在,且质量更好",
message=f"媒体库中已存在,当前设置为不覆盖",
path=in_path,
target_path=new_file,
fail_list=[str(in_path)])
case 'never':
# 存在不覆盖
return TransferInfo(success=False,
message=f"媒体库中已存在,当前设置为不覆盖",
path=in_path,
target_path=new_file,
fail_list=[str(in_path)])
case 'latest':
# 仅保留最新版本
self.delete_all_version_files(new_file)
overflag = True
case _:
pass
case 'latest':
# 仅保留最新版本
logger.info(f"仅保留最新版本,将覆盖:{new_file}")
overflag = True
else:
if settings.OVERWRITE_MODE == 'latest':
# 文件不存在,但仅保留最新版本
logger.info(f"转移覆盖模式:{settings.OVERWRITE_MODE},仅保留最新版本")
self.delete_all_version_files(new_file)
# 原文件大小
file_size = in_path.stat().st_size
# 转移文件
@@ -544,6 +597,20 @@ class FileTransferModule(_ModuleBase):
:param file_ext: 文件扩展名
:param episodes_info: 当前季的全部集信息
"""
def __convert_invalid_characters(filename: str):
if not filename:
return filename
invalid_characters = r'\/:*?"<>|'
# 创建半角到全角字符的转换表
halfwidth_chars = "".join([chr(i) for i in range(33, 127)])
fullwidth_chars = "".join([chr(i + 0xFEE0) for i in range(33, 127)])
translation_table = str.maketrans(halfwidth_chars, fullwidth_chars)
# 将不支持的字符替换为对应的全角字符
for char in invalid_characters:
filename = filename.replace(char, char.translate(translation_table))
return filename
# 获取集标题
episode_title = None
if meta.begin_episode and episodes_info:
@@ -554,13 +621,17 @@ class FileTransferModule(_ModuleBase):
return {
# 标题
"title": mediainfo.title,
"title": __convert_invalid_characters(mediainfo.title),
# 英文标题
"en_title": __convert_invalid_characters(mediainfo.en_title),
# 原语种标题
"original_title": __convert_invalid_characters(mediainfo.original_title),
# 原文件名
"original_name": f"{meta.org_string}{file_ext}",
# 原语种标题
"original_title": mediainfo.original_title,
# 识别名称
# 识别名称(优先使用中文)
"name": meta.name,
# 识别的英文名称(可能为空)
"en_name": meta.en_name,
# 年份
"year": mediainfo.year or meta.year,
# 资源类型
@@ -581,6 +652,8 @@ class FileTransferModule(_ModuleBase):
"tmdbid": mediainfo.tmdb_id,
# IMDBID
"imdbid": mediainfo.imdb_id,
# 豆瓣ID
"doubanid": mediainfo.douban_id,
# 季号
"season": meta.season_seq,
# 集号
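The `__convert_invalid_characters` helper added above swaps filename-illegal ASCII for fullwidth lookalikes, which live at the original codepoint plus `0xFEE0` (e.g. `:` becomes U+FF1A). A standalone sketch of the same mapping (`to_safe_filename` is a hypothetical name):

```python
def to_safe_filename(name: str) -> str:
    """Replace characters illegal in filenames with fullwidth lookalikes.

    Fullwidth forms sit at the ASCII codepoint + 0xFEE0, so the title
    stays readable while becoming safe on every filesystem."""
    if not name:
        return name
    table = str.maketrans({c: chr(ord(c) + 0xFEE0) for c in '\\/:*?"<>|'})
    return name.translate(table)
```

Building one translation table restricted to the nine illegal characters avoids the per-character `replace` loop of the original helper while producing the same result.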
@@ -661,20 +734,19 @@ class FileTransferModule(_ModuleBase):
continue
if target_path:
return target_path
# 顺序匹配第1个满足空间存储要求的目录
if in_path.exists():
file_size = in_path.stat().st_size
for path in dest_paths:
if SystemUtils.free_space(path) > file_size:
return path
# 顺序匹配第1个满足空间存储要求的目录
if in_path.exists():
file_size = in_path.stat().st_size
for path in dest_paths:
if SystemUtils.free_space(path) > file_size:
return path
# 默认返回第1个
return dest_paths[0]
def media_exists(self, mediainfo: MediaInfo, itemid: str = None) -> Optional[ExistMediaInfo]:
def media_exists(self, mediainfo: MediaInfo, **kwargs) -> Optional[ExistMediaInfo]:
"""
判断媒体文件是否存在于本地文件系统
判断媒体文件是否存在于本地文件系统,只支持标准媒体库结构
:param mediainfo: 识别的媒体信息
:param itemid: 媒体服务器ItemID
:return: 如不存在返回None存在时返回信息包括每季已存在所有集{type: movie/tv, seasons: {season: [episodes]}}
"""
if not settings.LIBRARY_PATHS:


@@ -39,12 +39,19 @@ class FilterModule(_ModuleBase):
# 中字
"CNSUB": {
"include": [
r'[中国國繁简](/|\s|\\|\|)?[繁简英粤]|[英简繁](/|\s|\\|\|)?[中繁简]|繁體|简体|[中国國][字配]|国语|國語|中文|中字'],
r'[中国國繁简](/|\s|\\|\|)?[繁简英粤]|[英简繁](/|\s|\\|\|)?[中繁简]'
r'|繁體|简体|[中国國][字配]|国语|國語|中文|中字|简日|繁日|简繁|繁体'
r'|([\s,.-\[])(CHT|CHS|cht|chs)(|[\s,.-\]])'],
"exclude": [],
"tmdb": {
"original_language": "zh,cn"
}
},
# 官种
"GZ": {
"include": [r'官方', r'官种'],
"match": ["labels"]
},
# 特效字幕
"SPECSUB": {
"include": [r'特效'],
@@ -132,6 +139,9 @@ class FilterModule(_ModuleBase):
def stop(self):
pass
def test(self):
pass
def init_setting(self) -> Tuple[str, Union[str, bool]]:
pass
@@ -253,14 +263,30 @@ class FilterModule(_ModuleBase):
# 符合TMDB规则的直接返回True即不过滤
if tmdb and self.__match_tmdb(tmdb):
return True
# 匹配项:标题、副标题、标签
content = f"{torrent.title} {torrent.description} {' '.join(torrent.labels or [])}"
# 只匹配指定关键字
match_content = []
matchs = self.rule_set[rule_name].get("match") or []
if matchs:
for match in matchs:
if not hasattr(torrent, match):
continue
match_value = getattr(torrent, match)
if not match_value:
continue
if isinstance(match_value, list):
match_content.extend(match_value)
else:
match_content.append(match_value)
if match_content:
content = " ".join(match_content)
# 包含规则项
includes = self.rule_set[rule_name].get("include") or []
# 排除规则项
excludes = self.rule_set[rule_name].get("exclude") or []
# FREE规则
downloadvolumefactor = self.rule_set[rule_name].get("downloadvolumefactor")
# 匹配项
content = f"{torrent.title} {torrent.description} {' '.join(torrent.labels or [])}"
for include in includes:
if not re.search(r"%s" % include, content, re.IGNORECASE):
# 未发现包含项
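With the change above, a rule carrying `match` fields (like the new `GZ` rule's `["labels"]`) is tested only against those torrent attributes, falling back to title + description + labels otherwise. A hedged sketch of how the match text is assembled:

```python
from types import SimpleNamespace
from typing import List, Optional


def rule_content(torrent, matches: Optional[List[str]]) -> str:
    """Build the text a filter rule's include/exclude regexes run against.

    With 'match' fields configured, only those attributes are collected
    (lists are flattened); otherwise the default title/description/labels
    blob is used."""
    parts: List[str] = []
    for field in matches or []:
        value = getattr(torrent, field, None)
        if not value:
            continue
        parts.extend(value if isinstance(value, list) else [value])
    if parts:
        return " ".join(parts)
    return f"{torrent.title} {torrent.description} {' '.join(torrent.labels or [])}"
```

Restricting the `GZ` rule to `labels` keeps a title that merely mentions "官方" from being treated as an official-release torrent.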


@@ -4,6 +4,7 @@ from typing import List, Optional, Tuple, Union
from ruamel.yaml import CommentedMap
from app.core.context import TorrentInfo
from app.helper.sites import SitesHelper
from app.log import logger
from app.modules import _ModuleBase
from app.modules.indexer.mtorrent import MTorrentSpider
@@ -25,6 +26,15 @@ class IndexerModule(_ModuleBase):
def stop(self):
pass
def test(self) -> Tuple[bool, str]:
"""
测试模块连接性
"""
sites = SitesHelper().get_indexers()
if not sites:
return False, "未配置站点或未通过用户认证"
return True, ""
def init_setting(self) -> Tuple[str, Union[str, bool]]:
return "INDEXER", "builtin"
@@ -43,7 +53,7 @@ class IndexerModule(_ModuleBase):
# 确认搜索的名字
if not keywords:
# 浏览种子页
keywords = [None]
keywords = ['']
# 开始索引
result_array = []


@@ -1,8 +1,9 @@
import copy
import datetime
import re
import traceback
from typing import List
from urllib.parse import quote, urlencode
from urllib.parse import quote, urlencode, urlparse, parse_qs
import chardet
from jinja2 import Template
@@ -12,9 +13,9 @@ from ruamel.yaml import CommentedMap
from app.core.config import settings
from app.helper.browser import PlaywrightHelper
from app.log import logger
from app.schemas.types import MediaType
from app.utils.http import RequestUtils
from app.utils.string import StringUtils
from app.schemas.types import MediaType
class TorrentSpider:
@@ -56,12 +57,14 @@ class TorrentSpider:
fields: dict = {}
# 页码
page: int = 0
# 搜索条数
# 搜索条数, 默认: 100条
result_num: int = 100
# 单个种子信息
torrents_info: dict = {}
# 种子列表
torrents_info_array: list = []
# 搜索超时, 默认: 30秒
_timeout = 30
def __init__(self,
indexer: CommentedMap,
@@ -91,6 +94,8 @@ class TorrentSpider:
self.fields = indexer.get('torrents').get('fields')
self.render = indexer.get('render')
self.domain = indexer.get('domain')
self.result_num = int(indexer.get('result_num') or 100)
self._timeout = int(indexer.get('timeout') or 30)
self.page = page
if self.domain and not str(self.domain).endswith("/"):
self.domain = self.domain + "/"
@@ -152,7 +157,7 @@ class TorrentSpider:
search_mode = "0"
# 搜索URL
indexer_params = self.search.get("params") or {}
indexer_params = self.search.get("params", {}).copy()
if indexer_params:
search_area = indexer_params.get('search_area')
# search_area非0表示支持imdbid搜索
@@ -233,14 +238,15 @@ class TorrentSpider:
url=searchurl,
cookies=self.cookie,
ua=self.ua,
proxies=self.proxy_server
proxies=self.proxy_server,
timeout=self._timeout
)
else:
# requests请求
ret = RequestUtils(
ua=self.ua,
cookies=self.cookie,
timeout=30,
timeout=self._timeout,
referer=self.referer,
proxies=self.proxies
).get_res(searchurl, allow_redirects=True)
@@ -270,7 +276,7 @@ class TorrentSpider:
return self.parse(page_source)
def __get_title(self, torrent):
# title default
# title default text
if 'title' not in self.fields:
return
selector = self.fields.get('title', {})
@@ -300,7 +306,7 @@ class TorrentSpider:
selector.get('filters'))
def __get_description(self, torrent):
# title optional
# description optional text
if 'description' not in self.fields:
return
selector = self.fields.get('description', {})
@@ -346,7 +352,7 @@ class TorrentSpider:
selector.get('filters'))
def __get_detail(self, torrent):
# details
# details page text
if 'details' not in self.fields:
return
selector = self.fields.get('details', {})
@@ -367,7 +373,7 @@ class TorrentSpider:
self.torrents_info['page_url'] = detail_link
def __get_download(self, torrent):
# download link
# download link text
if 'download' not in self.fields:
return
selector = self.fields.get('download', {})
@@ -377,9 +383,19 @@ class TorrentSpider:
item = self.__index(items, selector)
download_link = self.__filter_text(item, selector.get('filters'))
if download_link:
if not download_link.startswith("http") and not download_link.startswith("magnet"):
self.torrents_info['enclosure'] = self.domain + download_link[1:] if download_link.startswith(
"/") else self.domain + download_link
if not download_link.startswith("http") \
and not download_link.startswith("magnet"):
_scheme, _domain = StringUtils.get_url_netloc(self.domain)
if _domain in download_link:
if download_link.startswith("/"):
self.torrents_info['enclosure'] = f"{_scheme}:{download_link}"
else:
self.torrents_info['enclosure'] = f"{_scheme}://{download_link}"
else:
if download_link.startswith("/"):
self.torrents_info['enclosure'] = f"{self.domain}{download_link[1:]}"
else:
self.torrents_info['enclosure'] = f"{self.domain}{download_link}"
else:
self.torrents_info['enclosure'] = download_link
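The enclosure branch above distinguishes protocol-relative links that already contain the site host from genuinely relative paths. A compact sketch with `urllib.parse` (assuming, as the spider does, that `domain` ends with `/`; the helper name is illustrative):

```python
from urllib.parse import urlparse


def absolutize(domain: str, link: str) -> str:
    """Turn a scraped download link into an absolute URL.

    Links already carrying the site's host ('//host/…' or 'host/…')
    only need a scheme prepended; everything else is joined onto the
    domain. Absolute http/magnet links pass through untouched."""
    if link.startswith(("http", "magnet")):
        return link
    parsed = urlparse(domain)
    if parsed.netloc in link:
        # '//host/path' needs 'scheme:'; 'host/path' needs 'scheme://'
        return f"{parsed.scheme}:{link}" if link.startswith("/") else f"{parsed.scheme}://{link}"
    # plain relative path: join onto the trailing-slash domain
    return domain + link.lstrip("/")
```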
@@ -397,7 +413,7 @@ class TorrentSpider:
selector.get('filters'))
def __get_size(self, torrent):
# torrent size
# torrent size int
if 'size' not in self.fields:
return
selector = self.fields.get('size', {})
@@ -414,7 +430,7 @@ class TorrentSpider:
self.torrents_info['size'] = 0
def __get_leechers(self, torrent):
# torrent leechers
# torrent leechers int
if 'leechers' not in self.fields:
return
selector = self.fields.get('leechers', {})
@@ -424,6 +440,7 @@ class TorrentSpider:
item = self.__index(items, selector)
if item:
peers_val = item.split("/")[0]
peers_val = peers_val.replace(",", "")
peers_val = self.__filter_text(peers_val,
selector.get('filters'))
self.torrents_info['peers'] = int(peers_val) if peers_val and peers_val.isdigit() else 0
@@ -431,7 +448,7 @@ class TorrentSpider:
self.torrents_info['peers'] = 0
def __get_seeders(self, torrent):
# torrent leechers
# torrent seeders int
if 'seeders' not in self.fields:
return
selector = self.fields.get('seeders', {})
@@ -441,6 +458,7 @@ class TorrentSpider:
item = self.__index(items, selector)
if item:
seeders_val = item.split("/")[0]
seeders_val = seeders_val.replace(",", "")
seeders_val = self.__filter_text(seeders_val,
selector.get('filters'))
self.torrents_info['seeders'] = int(seeders_val) if seeders_val and seeders_val.isdigit() else 0
@@ -448,7 +466,7 @@ class TorrentSpider:
self.torrents_info['seeders'] = 0
def __get_grabs(self, torrent):
# torrent grabs
# torrent grabs int
if 'grabs' not in self.fields:
return
selector = self.fields.get('grabs', {})
@@ -458,6 +476,7 @@ class TorrentSpider:
item = self.__index(items, selector)
if item:
grabs_val = item.split("/")[0]
grabs_val = grabs_val.replace(",", "")
grabs_val = self.__filter_text(grabs_val,
selector.get('filters'))
self.torrents_info['grabs'] = int(grabs_val) if grabs_val and grabs_val.isdigit() else 0
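The `replace(",", "")` additions above let thousand-separated counts like `1,024` pass the `isdigit()` gate instead of silently becoming 0. A standalone parser for such seeder/leecher/grab cells (hypothetical helper):

```python
def parse_count(raw: str) -> int:
    """Parse a count cell like '1,024/37' into an int.

    Takes the part before any '/', strips thousands separators and
    whitespace, and falls back to 0 for anything non-numeric."""
    if not raw:
        return 0
    value = raw.split("/")[0].replace(",", "").strip()
    return int(value) if value.isdigit() else 0
```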
@@ -465,7 +484,7 @@ class TorrentSpider:
self.torrents_info['grabs'] = 0
def __get_pubdate(self, torrent):
# torrent pubdate
# torrent pubdate yyyy-mm-dd hh:mm:ss
if 'date_added' not in self.fields:
return
selector = self.fields.get('date_added', {})
@@ -477,7 +496,7 @@ class TorrentSpider:
selector.get('filters'))
def __get_date_elapsed(self, torrent):
# torrent pubdate
# torrent date elapsed text
if 'date_elapsed' not in self.fields:
return
selector = self.fields.get('date_elapsed', {})
@@ -489,7 +508,7 @@ class TorrentSpider:
selector.get('filters'))
def __get_downloadvolumefactor(self, torrent):
# downloadvolumefactor
# downloadvolumefactor int
selector = self.fields.get('downloadvolumefactor', {})
if not selector:
return
@@ -512,7 +531,7 @@ class TorrentSpider:
self.torrents_info['downloadvolumefactor'] = int(downloadvolumefactor.group(1))
def __get_uploadvolumefactor(self, torrent):
# uploadvolumefactor
# uploadvolumefactor int
selector = self.fields.get('uploadvolumefactor', {})
if not selector:
return
@@ -535,7 +554,7 @@ class TorrentSpider:
self.torrents_info['uploadvolumefactor'] = int(uploadvolumefactor.group(1))
def __get_labels(self, torrent):
# labels
# labels ['label1', 'label2']
if 'labels' not in self.fields:
return
selector = self.fields.get('labels', {})
@@ -547,32 +566,100 @@ class TorrentSpider:
else:
self.torrents_info['labels'] = []
def __get_free_date(self, torrent):
# free date yyyy-mm-dd hh:mm:ss
if 'freedate' not in self.fields:
return
selector = self.fields.get('freedate', {})
freedate = torrent(selector.get('selector', '')).clone()
self.__remove(freedate, selector)
items = self.__attribute_or_text(freedate, selector)
self.torrents_info['freedate'] = self.__index(items, selector)
self.torrents_info['freedate'] = self.__filter_text(self.torrents_info.get('freedate'),
selector.get('filters'))
def __get_hit_and_run(self, torrent):
# hitandrun True/False
if 'hr' not in self.fields:
return
selector = self.fields.get('hr', {})
hit_and_run = torrent(selector.get('selector', ''))
if hit_and_run:
self.torrents_info['hit_and_run'] = True
else:
self.torrents_info['hit_and_run'] = False
def __get_category(self, torrent):
# category 电影(Movie) / 电视剧(TV)
if 'category' not in self.fields:
return
selector = self.fields.get('category', {})
category = torrent(selector.get('selector', '')).clone()
self.__remove(category, selector)
items = self.__attribute_or_text(category, selector)
category_value = self.__index(items, selector)
category_value = self.__filter_text(category_value,
selector.get('filters'))
if category_value and self.category:
tv_cats = [str(cat.get("id")) for cat in self.category.get("tv") or []]
movie_cats = [str(cat.get("id")) for cat in self.category.get("movie") or []]
if category_value in tv_cats \
and category_value not in movie_cats:
self.torrents_info['category'] = MediaType.TV.value
elif category_value in movie_cats:
self.torrents_info['category'] = MediaType.MOVIE.value
else:
self.torrents_info['category'] = MediaType.UNKNOWN.value
else:
self.torrents_info['category'] = MediaType.UNKNOWN.value
def get_info(self, torrent) -> dict:
"""
Parse a single torrent entry
"""
self.torrents_info = {}
try:
# Title
self.__get_title(torrent)
# Description
self.__get_description(torrent)
# Detail page
self.__get_detail(torrent)
# Download link
self.__get_download(torrent)
# Grabs (completed)
self.__get_grabs(torrent)
# Leechers
self.__get_leechers(torrent)
# Seeders
self.__get_seeders(torrent)
# Size
self.__get_size(torrent)
# IMDB ID
self.__get_imdbid(torrent)
# Download volume factor
self.__get_downloadvolumefactor(torrent)
# Upload volume factor
self.__get_uploadvolumefactor(torrent)
# Publish date
self.__get_pubdate(torrent)
# Time elapsed since publish
self.__get_date_elapsed(torrent)
# Free deadline
self.__get_free_date(torrent)
# Labels
self.__get_labels(torrent)
# HR (hit-and-run)
self.__get_hit_and_run(torrent)
# Category
self.__get_category(torrent)
except Exception as err:
logger.error("%s 搜索出现错误:%s" % (self.indexername, str(err)))
return self.torrents_info
@staticmethod
def __filter_text(text, filters):
def __filter_text(text: str, filters: list):
"""
Process the text through the configured filters
"""
@@ -583,8 +670,8 @@ class TorrentSpider:
for filter_item in filters:
if not text:
break
method_name = filter_item.get("name")
try:
method_name = filter_item.get("name")
args = filter_item.get("args")
if method_name == "re_search" and isinstance(args, list):
text = re.search(r"%s" % args[0], text).group(args[-1])
@@ -598,8 +685,13 @@ class TorrentSpider:
text = text.strip()
elif method_name == "appendleft":
text = f"{args}{text}"
elif method_name == "querystring":
parsed_url = urlparse(text)
query_params = parse_qs(parsed_url.query)
param_value = query_params.get(args)
text = param_value[0] if param_value else ''
except Exception as err:
print(str(err))
logger.debug(f'过滤器 {method_name} 处理失败:{str(err)} - {traceback.format_exc()}')
return text.strip()
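The new `querystring` filter pulls a single query parameter out of a URL via `urlparse`/`parse_qs`. A self-contained sketch of the same logic (the function name is illustrative):

```python
from urllib.parse import urlparse, parse_qs

def querystring_filter(text: str, arg: str) -> str:
    """Return query parameter `arg` from the URL `text`, or '' if absent."""
    parsed_url = urlparse(text)
    query_params = parse_qs(parsed_url.query)
    param_value = query_params.get(arg)
    return param_value[0] if param_value else ''

print(querystring_filter("https://example.org/details.php?id=12345&hit=1", "id"))  # → 12345
```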
@staticmethod
@@ -613,7 +705,7 @@ class TorrentSpider:
item.remove(v)
@staticmethod
def __attribute_or_text(item, selector):
def __attribute_or_text(item, selector: dict):
if not selector:
return item
if not item:
@@ -625,7 +717,7 @@ class TorrentSpider:
return items
@staticmethod
def __index(items, selector):
def __index(items: list, selector: dict):
if not items:
return None
if selector:
@@ -28,6 +28,16 @@ class JellyfinModule(_ModuleBase):
def stop(self):
pass
def test(self) -> Tuple[bool, str]:
"""
Test module connectivity
"""
if self.jellyfin.is_inactive():
self.jellyfin.reconnect()
if not self.jellyfin.get_user():
return False, "无法连接Jellyfin请检查参数配置"
return True, ""
def user_authenticate(self, name: str, password: str) -> Optional[str]:
"""
Authenticate the user with the Jellyfin user system
@@ -101,13 +111,13 @@ class JellyfinModule(_ModuleBase):
media_statistic.user_count = self.jellyfin.get_user_count()
return [media_statistic]
def mediaserver_librarys(self, server: str) -> Optional[List[schemas.MediaServerLibrary]]:
def mediaserver_librarys(self, server: str = None, username: str = None) -> Optional[List[schemas.MediaServerLibrary]]:
"""
Media library list
"""
if server != "jellyfin":
if server and server != "jellyfin":
return None
return self.jellyfin.get_librarys()
return self.jellyfin.get_librarys(username)
def mediaserver_items(self, server: str, library_id: str) -> Optional[Generator]:
"""
@@ -139,3 +149,29 @@ class JellyfinModule(_ModuleBase):
season=season,
episodes=episodes
) for season, episodes in seasoninfo.items()]
def mediaserver_playing(self, count: int = 20,
server: str = None, username: str = None) -> List[schemas.MediaServerPlayItem]:
"""
Get now-playing items from the media server
"""
if server and server != "jellyfin":
return []
return self.jellyfin.get_resume(num=count, username=username)
def mediaserver_play_url(self, server: str, item_id: Union[str, int]) -> Optional[str]:
"""
Get the playback URL for a library item
"""
if server != "jellyfin":
return None
return self.jellyfin.get_play_url(item_id)
def mediaserver_latest(self, count: int = 20,
server: str = None, username: str = None) -> List[schemas.MediaServerPlayItem]:
"""
Get the latest items added to the media server
"""
if server and server != "jellyfin":
return []
return self.jellyfin.get_latest(num=count, username=username)
@@ -8,10 +8,9 @@ from app.core.config import settings
from app.log import logger
from app.schemas import MediaType
from app.utils.http import RequestUtils
from app.utils.singleton import Singleton
class Jellyfin(metaclass=Singleton):
class Jellyfin:
def __init__(self):
self._host = settings.JELLYFIN_HOST
@@ -20,6 +19,12 @@ class Jellyfin(metaclass=Singleton):
self._host += "/"
if not self._host.startswith("http"):
self._host = "http://" + self._host
self._playhost = settings.JELLYFIN_PLAY_HOST
if self._playhost:
if not self._playhost.endswith("/"):
self._playhost += "/"
if not self._playhost.startswith("http"):
self._playhost = "http://" + self._playhost
self._apikey = settings.JELLYFIN_API_KEY
self.user = self.get_user(settings.SUPERUSER)
self.serverid = self.get_server_id()
@@ -39,13 +44,70 @@ class Jellyfin(metaclass=Singleton):
self.user = self.get_user()
self.serverid = self.get_server_id()
def __get_jellyfin_librarys(self) -> List[dict]:
def get_jellyfin_folders(self) -> List[dict]:
"""
Get the list of Jellyfin media library paths
"""
if not self._host or not self._apikey:
return []
req_url = "%sLibrary/SelectableMediaFolders?api_key=%s" % (self._host, self._apikey)
try:
res = RequestUtils().get_res(req_url)
if res:
return res.json()
else:
logger.error(f"Library/SelectableMediaFolders 未获取到返回数据")
return []
except Exception as e:
logger.error(f"连接Library/SelectableMediaFolders 出错:" + str(e))
return []
def get_jellyfin_virtual_folders(self) -> List[dict]:
"""
Get all Jellyfin media library paths, including shared (network) paths
"""
if not self._host or not self._apikey:
return []
req_url = "%sLibrary/VirtualFolders?api_key=%s" % (self._host, self._apikey)
try:
res = RequestUtils().get_res(req_url)
if res:
library_items = res.json()
librarys = []
for library_item in library_items:
library_name = library_item.get('Name')
pathInfos = library_item.get('LibraryOptions', {}).get('PathInfos')
library_paths = []
for path in pathInfos:
if path.get('NetworkPath'):
library_paths.append(path.get('NetworkPath'))
else:
library_paths.append(path.get('Path'))
if library_name and library_paths:
librarys.append({
'Name': library_name,
'Path': library_paths
})
return librarys
else:
logger.error(f"Library/VirtualFolders 未获取到返回数据")
return []
except Exception as e:
logger.error(f"连接Library/VirtualFolders 出错:" + str(e))
return []
def __get_jellyfin_librarys(self, username: str = None) -> List[dict]:
"""
Get Jellyfin media library information
"""
if not self._host or not self._apikey:
return []
req_url = f"{self._host}Users/{self.user}/Views?api_key={self._apikey}"
if username:
user = self.get_user(username)
else:
user = self.user
req_url = f"{self._host}Users/{user}/Views?api_key={self._apikey}"
try:
res = RequestUtils().get_res(req_url)
if res:
@@ -57,14 +119,17 @@ class Jellyfin(metaclass=Singleton):
logger.error(f"连接Users/Views 出错:" + str(e))
return []
def get_librarys(self):
def get_librarys(self, username: str = None) -> List[schemas.MediaServerLibrary]:
"""
Get all media libraries from the media server
"""
if not self._host or not self._apikey:
return []
libraries = []
for library in self.__get_jellyfin_librarys() or []:
black_list = (settings.MEDIASERVER_SYNC_BLACKLIST or '').split(",")
for library in self.__get_jellyfin_librarys(username) or []:
if library.get("Name") in black_list:
continue
match library.get("CollectionType"):
case "movies":
library_type = MediaType.MOVIE.value
@@ -72,13 +137,21 @@ class Jellyfin(metaclass=Singleton):
library_type = MediaType.TV.value
case _:
continue
image = self.__get_local_image_by_id(library.get("Id"))
link = f"{self._playhost or self._host}web/index.html#!" \
f"/movies.html?topParentId={library.get('Id')}" \
if library_type == MediaType.MOVIE.value \
else f"{self._playhost or self._host}web/index.html#!" \
f"/tv.html?topParentId={library.get('Id')}"
libraries.append(
schemas.MediaServerLibrary(
server="jellyfin",
id=library.get("Id"),
name=library.get("Name"),
path=library.get("Path"),
type=library_type
type=library_type,
image=image,
link=link
))
return libraries
@@ -332,7 +405,7 @@ class Jellyfin(metaclass=Singleton):
if not season_episodes.get(season_index):
season_episodes[season_index] = []
season_episodes[season_index].append(episode_index)
return tv_info.get('Id'), season_episodes
return item_id, season_episodes
except Exception as e:
logger.error(f"连接Shows/Id/Episodes出错" + str(e))
return None, None
@@ -349,20 +422,73 @@ class Jellyfin(metaclass=Singleton):
return None
req_url = "%sItems/%s/RemoteImages?api_key=%s" % (self._host, item_id, self._apikey)
try:
res = RequestUtils().get_res(req_url)
res = RequestUtils(timeout=10).get_res(req_url)
if res:
images = res.json().get("Images")
for image in images:
if image.get("ProviderName") == "TheMovieDb" and image.get("Type") == image_type:
return image.get("Url")
# return images[0].get("Url") # 首选无则返回第一张
else:
logger.error(f"Items/RemoteImages 未获取到返回数据")
return None
logger.info(f"Items/RemoteImages 未获取到返回数据,采用本地图片")
return self.generate_image_link(item_id, image_type, True)
except Exception as e:
logger.error(f"连接Items/Id/RemoteImages出错" + str(e))
return None
return None
def generate_image_link(self, item_id: str, image_type: str, host_type: bool) -> Optional[str]:
"""
Look up the local image for the given ItemId and image type
:param item_id: ID of the item in Jellyfin
:param image_type: image type, e.g. Backdrop or Primary
:param host_type: True for the external (play) host, False for the internal host
:return: image URL on the host selected by host_type
"""
if not self._playhost:
logger.error("Jellyfin外网播放地址未能获取或为空")
return None
# Check whether the item belongs to a TV series
_parent_id = self.get_itemId_ancestors(item_id, 0, "ParentBackdropItemId")
if _parent_id:
item_id = _parent_id
_host = self._host
if host_type:
_host = self._playhost
req_url = "%sItems/%s/Images/%s" % (_host, item_id, image_type)
try:
res = RequestUtils().get_res(req_url)
if res and res.status_code != 404:
logger.info("影片图片链接:{}".format(res.url))
return res.url
else:
logger.error("Items/Id/Images 未获取到返回数据或无该影片{}图片".format(image_type))
return None
except Exception as e:
logger.error(f"连接Items/Id/Images出错" + str(e))
return None
def get_itemId_ancestors(self, item_id: str, index: int, key: str) -> Optional[Union[str, list, int, dict, bool]]:
"""
Get a value from an ancestor (parent) item of the given itemId
:param item_id: ID of an episode item in Jellyfin (e.g. the E02 in S01E02)
:param index: index of the JSON object to read
:param key: key to read from the ancestor item
:return: the value stored under key
"""
req_url = "%sItems/%s/Ancestors?api_key=%s" % (self._host, item_id, self._apikey)
try:
res = RequestUtils().get_res(req_url)
if res:
return res.json()[index].get(key)
else:
logger.error(f"Items/Id/Ancestors 未获取到返回数据")
return None
except Exception as e:
logger.error(f"连接Items/Id/Ancestors出错" + str(e))
return None
def refresh_root_library(self) -> bool:
"""
Ask Jellyfin to refresh the whole media library
@@ -559,9 +685,9 @@ class Jellyfin(metaclass=Singleton):
"""
if not self._host or not self._apikey:
return None
url = url.replace("[HOST]", self._host) \
.replace("[APIKEY]", self._apikey) \
.replace("[USER]", self.user)
url = url.replace("[HOST]", self._host or '') \
.replace("[APIKEY]", self._apikey or '') \
.replace("[USER]", self.user or '')
try:
return RequestUtils(accept_type="application/json").get_res(url=url)
except Exception as e:
@@ -577,9 +703,9 @@ class Jellyfin(metaclass=Singleton):
"""
if not self._host or not self._apikey:
return None
url = url.replace("[HOST]", self._host) \
.replace("[APIKEY]", self._apikey) \
.replace("[USER]", self.user)
url = url.replace("[HOST]", self._host or '') \
.replace("[APIKEY]", self._apikey or '') \
.replace("[USER]", self.user or '')
try:
return RequestUtils(
headers=headers
@@ -587,3 +713,158 @@ class Jellyfin(metaclass=Singleton):
except Exception as e:
logger.error(f"连接Jellyfin出错" + str(e))
return None
def get_play_url(self, item_id: str) -> str:
"""
Build the media playback link
:param item_id: ID of the media item
"""
return f"{self._playhost or self._host}web/index.html#!" \
f"/details?id={item_id}&serverId={self.serverid}"
def __get_local_image_by_id(self, item_id: str) -> str:
"""
Get the local image URL for the given ItemId from the media server
:param item_id: ID of the item in Jellyfin
"""
if not self._host or not self._apikey:
return ""
return "%sItems/%s/Images/Primary" % (self._host, item_id)
def __get_backdrop_url(self, item_id: str, image_tag: str) -> str:
"""
Get the Backdrop image URL
:param item_id: ID of the item in Jellyfin
:param image_tag: tag of the image
"""
if not self._host or not self._apikey:
return ""
if not image_tag or not item_id:
return ""
return f"{self._host}Items/{item_id}/" \
f"Images/Backdrop?tag={image_tag}&fillWidth=666&api_key={self._apikey}"
def get_resume(self, num: int = 12, username: str = None) -> Optional[List[schemas.MediaServerPlayItem]]:
"""
Get the continue-watching (resume) list
"""
if not self._host or not self._apikey:
return None
if username:
user = self.get_user(username)
else:
user = self.user
req_url = (f"{self._host}Users/{user}/Items/Resume?"
f"Limit=100&MediaTypes=Video&api_key={self._apikey}&Fields=ProductionYear,Path")
try:
res = RequestUtils().get_res(req_url)
if res:
result = res.json().get("Items") or []
ret_resume = []
# The user's media library folders (blacklist excluded)
library_folders = self.get_user_library_folders()
for item in result:
if len(ret_resume) == num:
break
if item.get("Type") not in ["Movie", "Episode"]:
continue
item_path = item.get("Path")
if item_path and library_folders and not any(
str(item_path).startswith(folder) for folder in library_folders):
continue
item_type = MediaType.MOVIE.value if item.get("Type") == "Movie" else MediaType.TV.value
link = self.get_play_url(item.get("Id"))
if item.get("BackdropImageTags"):
image = self.__get_backdrop_url(item_id=item.get("Id"),
image_tag=item.get("BackdropImageTags")[0])
else:
image = self.__get_local_image_by_id(item.get("Id"))
# A few episodes have no [xxx-S01E01-thumb.jpg] thumbnail image
image_res = RequestUtils().get_res(image)
if not image_res or image_res.status_code == 404:
image = self.generate_image_link(item.get("Id"), "Backdrop", False)
if item_type == MediaType.MOVIE.value:
title = item.get("Name")
subtitle = item.get("ProductionYear")
else:
title = f'{item.get("SeriesName")}'
subtitle = f'S{item.get("ParentIndexNumber")}:{item.get("IndexNumber")} - {item.get("Name")}'
ret_resume.append(schemas.MediaServerPlayItem(
id=item.get("Id"),
title=title,
subtitle=subtitle,
type=item_type,
image=image,
link=link,
percent=item.get("UserData", {}).get("PlayedPercentage")
))
return ret_resume
else:
logger.error(f"Users/Items/Resume 未获取到返回数据")
except Exception as e:
logger.error(f"连接Users/Items/Resume出错" + str(e))
return []
def get_latest(self, num=20, username: str = None) -> Optional[List[schemas.MediaServerPlayItem]]:
"""
Get the most recently added items
"""
if not self._host or not self._apikey:
return None
if username:
user = self.get_user(username)
else:
user = self.user
req_url = (f"{self._host}Users/{user}/Items/Latest?"
f"Limit=100&MediaTypes=Video&api_key={self._apikey}&Fields=ProductionYear,Path")
try:
res = RequestUtils().get_res(req_url)
if res:
result = res.json() or []
ret_latest = []
# The user's media library folders (blacklist excluded)
library_folders = self.get_user_library_folders()
for item in result:
if len(ret_latest) == num:
break
if item.get("Type") not in ["Movie", "Series"]:
continue
item_path = item.get("Path")
if item_path and library_folders and not any(
str(item_path).startswith(folder) for folder in library_folders):
continue
item_type = MediaType.MOVIE.value if item.get("Type") == "Movie" else MediaType.TV.value
link = self.get_play_url(item.get("Id"))
image = self.__get_local_image_by_id(item_id=item.get("Id"))
ret_latest.append(schemas.MediaServerPlayItem(
id=item.get("Id"),
title=item.get("Name"),
subtitle=item.get("ProductionYear"),
type=item_type,
image=image,
link=link
))
return ret_latest
else:
logger.error(f"Users/Items/Latest 未获取到返回数据")
except Exception as e:
logger.error(f"连接Users/Items/Latest出错" + str(e))
return []
def get_user_library_folders(self):
"""
Get the Jellyfin media library folder list, excluding blacklisted libraries
"""
if not self._host or not self._apikey:
return []
library_folders = []
black_list = (settings.MEDIASERVER_SYNC_BLACKLIST or '').split(",")
for library in self.get_jellyfin_virtual_folders() or []:
if library.get("Name") in black_list:
continue
library_folders += [folder for folder in library.get("Path")]
return library_folders
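`get_resume` and `get_latest` now drop items whose `Path` falls outside the folders returned by `get_user_library_folders`. The prefix check can be sketched like this (hypothetical helper, not part of the module):

```python
from typing import List

def in_visible_folders(item_path: str, library_folders: List[str]) -> bool:
    # Keep the item when no folder list is known, or when its path
    # starts with any visible library folder prefix.
    if not item_path or not library_folders:
        return True
    return any(str(item_path).startswith(folder) for folder in library_folders)

print(in_visible_folders("/media/tv/Show/S01E01.mkv", ["/media/tv", "/media/movies"]))  # → True
```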

Some files were not shown because too many files have changed in this diff.