Compare commits


1592 Commits

Author SHA1 Message Date
jxxghp
e6105fdab5 **v2.0.3**
- Fixed an error in retrieving the latest version number
- Fixed rename failures in file management
- Fixed season.nfo scraping errors when organizing multiple seasons
- Fixed an error in Rclone storage capacity detection
- Improved custom rules: episode file-size rules now filter by the average size per episode
- When organizing files in move mode, empty parent directories are now deleted automatically
- Added a switch for automatically reading and sending site messages
- Added a database WAL mode switch; enabling it improves database performance
2024-11-12 18:48:15 +08:00
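The WAL switch above maps onto SQLite's journal-mode pragma. A minimal sketch of what enabling it does, assuming a throwaway demo database (the path and variable names are illustrative, not MoviePilot's actual code):

```python
import os
import sqlite3
import tempfile

# Demo database in a temporary directory, not MoviePilot's real data file.
db_path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(db_path)
# WAL journaling lets readers proceed concurrently with a single writer,
# which is where the database performance gain comes from.
mode = con.execute("PRAGMA journal_mode=WAL").fetchone()[0]
con.close()
print(mode)  # "wal"
```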
jxxghp
df34c7e2da Merge pull request #3074 from InfinityPacer/feature/db 2024-11-12 17:30:34 +08:00
InfinityPacer
24cc36033f feat(db): add support for SQLite WAL mode 2024-11-12 17:17:16 +08:00
jxxghp
aafb2bc269 fix #3071 add a site message switch 2024-11-12 13:59:13 +08:00
jxxghp
9dde56467a Update __init__.py 2024-11-12 12:24:05 +08:00
jxxghp
f9d62e7451 fix Rclone storage capacity detection issue 2024-11-12 10:10:37 +08:00
jxxghp
f1f379966a fix V2 latest version number retrieval 2024-11-12 08:37:07 +08:00
jxxghp
942c9ae545 Merge pull request #3058 from wikrin/fix-scrape_metadata 2024-11-10 14:02:31 +08:00
jxxghp
89be4f6200 Merge pull request #3054 from wikrin/fix-rename 2024-11-10 14:02:01 +08:00
Attente
bcbf729fd4 Fix season.nfo scraping errors when organizing multiple seasons 2024-11-10 13:43:59 +08:00
Attente
7fc5b7678e Change the order of checks 2024-11-10 07:47:49 +08:00
Attente
e20578685a fix: rename failures 2024-11-09 23:59:58 +08:00
jxxghp
40b82d9cb6 fix #3042 delete empty folders in move mode 2024-11-09 18:23:08 +08:00
jxxghp
9b2fccee01 feat: filter episode file size by average size per episode 2024-11-09 18:01:50 +08:00
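The rule in 9b2fccee01 can be sketched as follows; the function name and parameters are hypothetical, chosen only to illustrate applying size limits to the per-episode average rather than the raw torrent size:

```python
GB = 1024 ** 3

# Hypothetical sketch, not the project's actual rule implementation.
def passes_size_rule(total_bytes: int, episode_count: int,
                     min_bytes: int, max_bytes: int) -> bool:
    # Guard against a zero episode count before dividing.
    average = total_bytes / max(episode_count, 1)
    return min_bytes <= average <= max_bytes

# A 20 GB season pack with 10 episodes averages 2 GB per episode, so it
# passes a 1-4 GB per-episode rule even though 20 GB exceeds the raw limit.
print(passes_size_rule(20 * GB, 10, 1 * GB, 4 * GB))  # True
```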
jxxghp
87bbee8c36 Merge pull request #3038 from InfinityPacer/feature/setup 2024-11-08 18:16:32 +08:00
InfinityPacer
4412ce9f17 fix(playwright): add check for HTTPS proxy 2024-11-08 18:08:45 +08:00
jxxghp
35b78b0e66 Merge pull request #3034 from lybtt/fix_update_bash 2024-11-08 16:44:55 +08:00
lvyb
d97fcc4a96 Fix version comparison in the update script 2024-11-08 16:37:36 +08:00
jxxghp
a2096e8e0f v2.0.2 2024-11-08 13:26:05 +08:00
jxxghp
75e80158e5 Merge pull request #3030 from Akimio521/fix(tmdb/douban)-cache 2024-11-08 10:48:23 +08:00
Akimio521
d42bd14288 fix: prefer id as the cache key to avoid key collisions 2024-11-08 10:35:29 +08:00
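The idea behind d42bd14288 — key the cache on a stable id when one exists, falling back to name-based keys only otherwise — might look like this sketch (the names and dict shape are illustrative, not the actual code):

```python
# Illustrative sketch: two items with the same title/year cannot collide
# as long as at least one of them carries a stable id.
def cache_key(item: dict) -> str:
    if item.get("id") is not None:
        return f"id:{item['id']}"                          # unique key
    return f"name:{item.get('title')}:{item.get('year')}"  # best-effort fallback

movie_a = {"id": 550, "title": "Fight Club", "year": 1999}
movie_b = {"title": "Fight Club", "year": 1999}  # scraped without an id
print(cache_key(movie_a))  # "id:550"
print(cache_key(movie_b))  # "name:Fight Club:1999"
```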
jxxghp
28f6e7f9bb fix https://github.com/jxxghp/MoviePilot-Plugins/issues/540 2024-11-07 18:58:32 +08:00
jxxghp
2aadbeaed7 Merge pull request #3025 from amtoaer/feat_jellyfin_item_path 2024-11-07 18:48:15 +08:00
jxxghp
3f6b4bf3f2 Merge pull request #3022 from MMZOX/v2 2024-11-07 18:46:59 +08:00
amtoaer
f73750fcf7 feat: populate the item_path field for Jellyfin webhook events 2024-11-07 15:01:19 +08:00
MMZOX
59df673eb5 try to fix #2965 2024-11-07 13:45:06 +08:00
jxxghp
e29ab92cd1 fix #3008 2024-11-07 08:27:05 +08:00
jxxghp
3777045a17 fix #3012 2024-11-07 08:24:22 +08:00
jxxghp
16165c0fcc fix #3018 2024-11-07 08:20:11 +08:00
jxxghp
4d377d5e04 Merge pull request #3016 from InfinityPacer/feature/scheduler 2024-11-06 20:01:14 +08:00
InfinityPacer
76c84f9bac fix(scheduler): optimize job registration and removal logic 2024-11-06 19:37:22 +08:00
jxxghp
88f91152d6 Merge pull request #3009 from lybtt/fix_local_storage 2024-11-06 10:52:15 +08:00
lvyb
dfdb88c5ac fix softlink 2024-11-06 09:30:53 +08:00
jxxghp
ec183b6d0d release v2 2024-11-05 21:24:39 +08:00
jxxghp
9d047dddb4 Update mediaserver.py 2024-11-05 18:39:38 +08:00
jxxghp
2d83880830 Update mediaserver.py 2024-11-05 18:39:00 +08:00
jxxghp
7e6ef04554 fix: optimize media server image retrieval performance #2993 2024-11-05 18:21:08 +08:00
jxxghp
08aa5fe50a fix bug #2993 2024-11-05 10:07:25 +08:00
jxxghp
656cc1fe01 Merge pull request #3004 from InfinityPacer/feature/module 2024-11-05 07:03:49 +08:00
InfinityPacer
8afaa683cc fix(config): update DB_MAX_OVERFLOW to 500 2024-11-05 00:48:22 +08:00
InfinityPacer
4d3aa0faf3 fix(config): update in-memory setting only on env update 2024-11-05 00:48:02 +08:00
jxxghp
9e08b9129a Merge pull request #2994 from Aqr-K/patch-1
Update system.py
2024-11-04 10:20:42 +08:00
jxxghp
0584bda470 fix bug 2024-11-03 19:59:33 +08:00
jxxghp
df8531e4d8 fix #2993 formatting errors and argument-passing warnings 2024-11-03 19:51:43 +08:00
jxxghp
cfc51c305b Update mediaserver.py 2024-11-03 14:41:42 +08:00
jxxghp
28759f6c81 Merge pull request #2998 from Akimio521/fix/wallpapers 2024-11-03 14:36:08 +08:00
jxxghp
15b701803f Merge pull request #2997 from Akimio521/perfect/cn_name 2024-11-03 14:35:49 +08:00
Akimio521
72774f80a5 fix: wallpapers not returning a list 2024-11-03 14:22:04 +08:00
Akimio521
341526b4d9 perfect(MetaAnime): the self.cn_name attribute is no longer simplified; the Simplified-Chinese name is looked up only when querying TMDB and Douban 2024-11-03 14:05:12 +08:00
jxxghp
b6bfd215bc Merge pull request #2993 from Akimio521/v2 2024-11-03 06:54:35 +08:00
Akimio521
6801032f7a fix: avoid exposing the Plex address and Plex Token in anonymous contexts 2024-11-02 16:18:17 +08:00
Akimio521
af2075578c feat(Plex): add an option for image URLs fetched from Plex, with a choice between returning the Plex URL or the TMDB URL 2024-11-02 16:17:25 +08:00
Akimio521
b46ede86fc fix: remove unnecessary config imports 2024-11-02 14:57:38 +08:00
Aqr-K
a104001087 Update system.py 2024-11-02 14:27:46 +08:00
Akimio521
88e8790678 feat: optionally use posters of the latest library additions from the media server as the login-page wallpaper 2024-11-02 14:17:52 +08:00
Akimio521
a59d73a68a feat(PlexModule): implement fetching images of the media server's latest library additions 2024-11-02 14:17:52 +08:00
Akimio521
522d970731 feat(JellyfinModule): implement fetching images of the media server's latest library additions 2024-11-02 14:17:52 +08:00
Akimio521
51a0f97580 feat(EmbyModule): implement fetching images of the media server's latest library additions 2024-11-02 14:17:52 +08:00
jxxghp
0ef6d7bbf2 Merge pull request #2991 from wikrin/fix-get_dir 2024-11-02 06:27:43 +08:00
Attente
d818ceb8e6 fix: manual organization failing to resolve the target path correctly in certain scenarios 2024-11-02 02:01:29 +08:00
jxxghp
a69d56d9fd Merge pull request #2978 from wikrin/v2 2024-10-30 18:57:40 +08:00
jxxghp
957df2cf66 Merge pull request #2977 from wdmcheng/v2 2024-10-30 18:57:19 +08:00
wdmcheng
d863a7cb7f Improve SystemUtils.list_files compatibility with special characters (such as '[]') when traversing directories 2024-10-30 18:29:14 +08:00
Attente
021fcb17bb fix: #2974 #2963 2024-10-30 18:16:52 +08:00
jxxghp
b4e233678d Merge pull request #2975 from thsrite/v2 2024-10-30 16:56:57 +08:00
thsrite
5e53825684 fix: add a switch for checking whether resources already exist in the local media library, to be enabled as needed 2024-10-30 16:20:37 +08:00
jxxghp
236d860133 fix monitor 2024-10-30 08:25:26 +08:00
jxxghp
76d939b665 Update __init__.py 2024-10-30 07:16:44 +08:00
jxxghp
63d35dfeef fix transfer 2024-10-30 07:12:58 +08:00
jxxghp
3dd7d36760 Merge pull request #2970 from wikrin/fix-monitor 2024-10-29 17:49:00 +08:00
jxxghp
e4b0e4bf33 Merge pull request #2969 from wikrin/fix-notify 2024-10-29 06:46:25 +08:00
jxxghp
3504c0cdd6 Merge pull request #2968 from wikrin/fix-get_dir 2024-10-29 06:45:57 +08:00
Attente
980feb3cd2 fix: target paths not matching expectations during directory monitoring in organize mode 2024-10-29 06:13:18 +08:00
Attente
a1daf884e6 Handle library directories that are configured but do not have automatic organization enabled 2024-10-29 06:06:02 +08:00
Attente
f0e4d9bf63 Remove redundant checks
```
if src_path and download_path != src_path
if dest_path and library_path != dest_path
```
Directories without a `download directory` or `library directory` set under `Settings -> Storage & Directories` are already excluded
2024-10-29 03:43:57 +08:00
Attente
15397a522e fix: notifications not being sent when organizing an entire directory 2024-10-29 02:12:17 +08:00
Attente
1c00c47a9b fix: #2963 tentative fix for a possible issue
`def get_dir` is referenced only in download-related modules; removing it in the public beta to gather feedback
2024-10-29 00:49:36 +08:00
jxxghp
e9a6f08cc8 Merge pull request #2958 from thsrite/v2 2024-10-28 11:38:17 +08:00
thsrite
7ba2d60925 fix 2024-10-28 11:36:18 +08:00
thsrite
9686a20c2f fix #2905 subscription searches ignoring the resolution and other rules configured on the subscription 2024-10-28 11:29:34 +08:00
jxxghp
6029cf283b Merge pull request #2953 from wikrin/BDMV 2024-10-28 09:55:34 +08:00
Attente
4d6ed7d552 - Move the calculation of the total size of all files in a directory into the modules.filemanager module. 2024-10-28 08:31:32 +08:00
Attente
8add8ed631 Add comments 2024-10-27 23:32:15 +08:00
Attente
ab78b10287 Move the check out to reduce the number of is_bluray_dir calls 2024-10-27 23:28:28 +08:00
Attente
94ed377843 - Fix errors when organizing Blu-ray disc folders
- Add type annotations
2024-10-27 23:02:45 +08:00
jxxghp
4cb85a2b4c Merge pull request #2949 from wikrin/fix 2024-10-27 07:56:18 +08:00
Attente
b2a88b2791 Fix comments 2024-10-27 02:10:10 +08:00
Attente
88f451147e fix: possibly not strictly a bug
- Fix `include` and `exclude` rules not taking effect during the `new-subscription search` phase
2024-10-27 01:36:38 +08:00
jxxghp
51099ace65 Merge pull request #2947 from InfinityPacer/feature/push 2024-10-26 17:11:33 +08:00
InfinityPacer
0564bdf020 fix(event): ensure backward compatibility 2024-10-26 15:56:10 +08:00
jxxghp
bbac709970 Update __init__.py 2024-10-26 14:17:26 +08:00
jxxghp
bb9690c873 Merge pull request #2946 from InfinityPacer/feature/push 2024-10-26 13:40:38 +08:00
jxxghp
00be46b74f Merge pull request #2944 from wikrin/fix-message 2024-10-26 08:10:00 +08:00
jxxghp
2af21765e0 Merge pull request #2942 from wikrin/v2 2024-10-26 08:09:33 +08:00
Attente
646349ac35 fix: rename msg => message 2024-10-26 07:10:42 +08:00
InfinityPacer
915388c109 feat(commands): support sending CommandRegister events for clients 2024-10-26 04:51:45 +08:00
InfinityPacer
3c24ae5351 feat(telegram): add delete_commands 2024-10-26 04:48:55 +08:00
InfinityPacer
e876ba38a7 fix(wechat): add error handling 2024-10-26 04:47:42 +08:00
Attente
01546baddc fix: 2941
The return value of `delete_media_file` is now:
- the `file deletion status` when `other media files remain in the directory`
- the `directory deletion status` when `no other media files remain in the directory`
2024-10-26 00:38:42 +08:00
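A minimal sketch of the return-value contract described in 01546baddc, assuming a simple on-disk layout; the helper name and extension set are illustrative, not the project's actual implementation:

```python
import shutil
import tempfile
from pathlib import Path

MEDIA_EXTS = {".mkv", ".mp4", ".ts"}  # illustrative, not the project's list

def delete_media_file(path: Path) -> bool:
    """Sketch: delete one media file; clean up the directory if empty of media."""
    path.unlink(missing_ok=True)
    if any(p.suffix.lower() in MEDIA_EXTS for p in path.parent.iterdir()):
        return True                 # other media remain: report file deletion
    shutil.rmtree(path.parent)      # no media left: remove the directory too
    return not path.parent.exists() # report the directory deletion status

# Demo in a throwaway directory.
root = Path(tempfile.mkdtemp())
(root / "S01E01.mkv").touch()
(root / "S01E02.mkv").touch()
print(delete_media_file(root / "S01E01.mkv"))  # True: a sibling episode remains
print(delete_media_file(root / "S01E02.mkv"))  # True: directory removed as well
```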
jxxghp
133195cc0a Merge pull request #2940 from thsrite/v2 2024-10-25 19:08:02 +08:00
thsrite
e58911397a fix dc3240e9 2024-10-25 19:06:32 +08:00
jxxghp
10553ad6fc Merge pull request #2939 from DDS-Derek/dev 2024-10-25 18:24:03 +08:00
DDSRem
672d430322 fix(docker): nginx directory permission issue
fix https://github.com/jxxghp/MoviePilot/issues/2892
2024-10-25 18:18:12 +08:00
jxxghp
be785f358d Merge pull request #2938 from InfinityPacer/feature/push 2024-10-25 17:37:11 +08:00
InfinityPacer
eff8a6c497 feat(wechat): add retry mechanism for message requests 2024-10-25 17:27:58 +08:00
InfinityPacer
5d89ad965f fix(telegram): ensure image cache path exists 2024-10-25 17:26:56 +08:00
jxxghp
1651f4677b Merge pull request #2937 from thsrite/v2 2024-10-25 16:59:29 +08:00
thsrite
dc3240e90a fix: torrent filter include rules 2024-10-25 16:05:54 +08:00
jxxghp
e2ee930ff4 Merge pull request #2935 from thsrite/v2 2024-10-25 13:33:46 +08:00
thsrite
90901d7297 fix: exclude erroneous entries when fetching the latest site data 2024-10-25 13:24:29 +08:00
jxxghp
1b76f1c851 feat: send unread site messages 2024-10-25 13:10:35 +08:00
jxxghp
3d9853adcf fix #2933 2024-10-25 12:58:06 +08:00
jxxghp
81384c358e Merge pull request #2933 from wikrin/v2-transfer 2024-10-25 12:12:30 +08:00
InfinityPacer
a46463683d Merge branch 'v2' of https://github.com/jxxghp/MoviePilot into feature/push 2024-10-25 10:51:38 +08:00
jxxghp
4cf3b49324 Merge pull request #2932 from InfinityPacer/feature/module 2024-10-25 06:52:57 +08:00
Attente
1f6fa22aa1 fix: the list built recursively by storagechain.list_files being overwritten 2024-10-25 02:54:41 +08:00
Attente
d108b0da78 Only removes indentation, with no other changes
Reduces `for` nesting and aggregates the directory traversal, which yields a more accurate `file count`
2024-10-25 01:28:59 +08:00
Attente
0ee21b38de fix:
- Fix entire folders being skipped because the first subdirectory contains no target files
- Also organize audio tracks at the same time
2024-10-25 01:27:39 +08:00
InfinityPacer
b1858f4849 fix #2931 2024-10-25 00:42:00 +08:00
InfinityPacer
ac086a7640 refactor(wechat): optimize message handling and add menu deletion 2024-10-24 20:27:41 +08:00
jxxghp
1d252f4eb2 Merge pull request #2930 from InfinityPacer/feature/push 2024-10-24 19:26:50 +08:00
jxxghp
ab354ef0e8 Merge pull request #2929 from InfinityPacer/feature/setup 2024-10-24 19:25:48 +08:00
jxxghp
167cba2dbb Merge pull request #2928 from InfinityPacer/feature/module 2024-10-24 19:25:01 +08:00
InfinityPacer
9cf7547a8c fix(downloader): ensure default downloader config fallback 2024-10-24 19:12:07 +08:00
InfinityPacer
823b81784e fix(setup): optimize logging 2024-10-24 16:50:03 +08:00
InfinityPacer
d9effb54ee feat(commands): support background initialization 2024-10-24 16:26:37 +08:00
jxxghp
1a8d9044d7 fix #2926 2024-10-24 14:28:30 +08:00
InfinityPacer
0a2ce11eb0 fix(setup): adjust dir name 2024-10-24 12:48:54 +08:00
jxxghp
42b5dd4178 fix #2924 2024-10-24 12:39:08 +08:00
InfinityPacer
2bae866f70 Merge branch 'v2' of https://github.com/jxxghp/MoviePilot into feature/setup 2024-10-24 11:21:45 +08:00
InfinityPacer
2470a98491 fix(setup): remove pkg_resources import and add working_set 2024-10-24 11:21:37 +08:00
jxxghp
9d70b117d7 Merge remote-tracking branch 'origin/v2' into v2 2024-10-24 11:14:45 +08:00
jxxghp
1fad9d9904 Merge pull request #2923 from InfinityPacer/feature/setup 2024-10-24 10:50:20 +08:00
jxxghp
dc1533d5e8 Merge pull request #2922 from thsrite/v2 2024-10-24 10:49:10 +08:00
thsrite
e0cfb4fd6d fix: v2 plugins lost after restart 2024-10-24 10:38:55 +08:00
jxxghp
119919da51 fix: errors when organizing auxiliary files 2024-10-24 09:56:39 +08:00
InfinityPacer
684e518b87 fix(setup): remove unnecessary comments 2024-10-24 09:53:44 +08:00
jxxghp
50febd6b2c Update the update script 2024-10-24 07:14:37 +08:00
InfinityPacer
86dec5aec2 feat(setup): complete missing dependencies installation 2024-10-24 02:55:54 +08:00
jxxghp
fa021de2ae Merge pull request #2918 from InfinityPacer/feature/event 2024-10-23 20:15:51 +08:00
InfinityPacer
874572253c feat(auth): integrate Plex auxiliary authentication support 2024-10-23 20:11:23 +08:00
jxxghp
059f7f8146 Update the update script 2024-10-23 20:09:02 +08:00
jxxghp
d6f8c364bf refactor: delete empty directories (no media files) when deleting history records 2024-10-23 18:02:31 +08:00
jxxghp
a6f0792014 refactor: get_parent 2024-10-23 17:47:01 +08:00
jxxghp
a4419796ac Merge remote-tracking branch 'origin/v2' into v2 2024-10-23 17:05:28 +08:00
jxxghp
dad980fa14 fix: auxiliary file organization 2024-10-23 17:05:20 +08:00
jxxghp
a3cb805c64 Merge pull request #2916 from InfinityPacer/feature/push 2024-10-23 16:34:54 +08:00
InfinityPacer
c128dd9507 fix(command): delay module import 2024-10-23 16:32:12 +08:00
jxxghp
dbf1b691d6 Deduplicate directory monitoring services 2024-10-23 15:58:00 +08:00
jxxghp
4199438d5e fix update shell 2024-10-23 15:09:46 +08:00
jxxghp
a0ad8faaf7 fix #2913 2024-10-23 14:43:47 +08:00
jxxghp
c4619edcde Merge pull request #2913 from thsrite/v2 2024-10-23 13:29:34 +08:00
thsrite
51f8fc07eb fix transfer notify 2024-10-23 13:24:08 +08:00
thsrite
f7357b8a71 feat: per-directory option for whether to send notifications 2024-10-23 12:57:51 +08:00
jxxghp
c5e7050898 Merge pull request #2912 from thsrite/v2 2024-10-23 12:36:37 +08:00
thsrite
5871c60a9d fix: directory monitoring, for real this time 2024-10-23 12:34:39 +08:00
jxxghp
078bca1259 Merge pull request #2911 from thsrite/v2 2024-10-23 11:45:10 +08:00
thsrite
6ca78c0cb9 feat: selectable monitoring mode for directory monitoring 2024-10-23 11:35:40 +08:00
jxxghp
f03a977a99 Merge pull request #2909 from InfinityPacer/feature/push 2024-10-23 07:04:58 +08:00
InfinityPacer
ab32d3347d feat(command): optimize command registration event handling 2024-10-23 02:26:11 +08:00
jxxghp
f8631c68a3 Merge pull request #2904 from InfinityPacer/feature/setup 2024-10-22 17:38:34 +08:00
jxxghp
a052850990 Merge pull request #2903 from thsrite/v2 2024-10-22 17:37:54 +08:00
InfinityPacer
ea9db33323 chore(deps): keep requirements.txt consistent with requirements.in 2024-10-22 17:12:02 +08:00
thsrite
72b955ebae fix 2024-10-22 16:53:17 +08:00
thsrite
b1545fc351 Merge remote-tracking branch 'origin/v2' into v2 2024-10-22 16:47:50 +08:00
thsrite
be09d5e65d fix: custom rule include and filter handling 2024-10-22 16:47:31 +08:00
InfinityPacer
48f4505161 chore(deps): keep requirements.txt consistent with requirements.in 2024-10-22 16:45:35 +08:00
jxxghp
5c5182941f fix: secondary category bug 2024-10-22 13:49:53 +08:00
jxxghp
7a0b0d114e fix: secondary category bug 2024-10-22 13:03:30 +08:00
jxxghp
5eaffd9797 make release 2024-10-22 12:58:58 +08:00
jxxghp
7cfa315529 build v2 latest 2024-10-22 12:47:18 +08:00
jxxghp
6869708e8e Update Dockerfile 2024-10-22 12:20:23 +08:00
jxxghp
5311b5f66a Merge pull request #2901 from thsrite/v2 2024-10-22 12:12:00 +08:00
jxxghp
f3144807bd build v2 beta 2024-10-22 12:09:57 +08:00
thsrite
7437c1ca51 fix: directory monitoring summary message sending && episode scraping 2024-10-22 12:08:13 +08:00
jxxghp
60632aa9d3 Merge pull request #2899 from thsrite/dev 2024-10-22 11:22:51 +08:00
thsrite
c66793c0c8 fix: episode root-path scraping 2024-10-22 11:10:40 +08:00
jxxghp
d5c8dffffe Merge pull request #2898 from thsrite/dev 2024-10-22 10:54:53 +08:00
thsrite
d4ac585549 fix: season scraping images misaligned && episode scraping images None 2024-10-22 10:42:09 +08:00
jxxghp
b9c368e087 Merge pull request #2897 from InfinityPacer/feature/event 2024-10-22 10:33:29 +08:00
jxxghp
bda0d7a9fb Merge pull request #2896 from thsrite/dev 2024-10-22 09:50:39 +08:00
thsrite
d29ab9b5bd fix dirmonitor exclude_words 2024-10-22 09:49:43 +08:00
InfinityPacer
b2d66b8973 fix(auth): make login_access_token synchronous to prevent blocking 2024-10-22 02:02:21 +08:00
jxxghp
bb858f4bc1 Merge pull request #2895 from InfinityPacer/feature/plugin 2024-10-22 01:45:24 +08:00
InfinityPacer
6b875ef2de feat(plugin): add state check for commands, APIs, and services 2024-10-22 01:36:48 +08:00
jxxghp
0145421885 Merge pull request #2894 from InfinityPacer/feature/push 2024-10-21 23:44:55 +08:00
InfinityPacer
0ac6d9f25e fix(config): adjust API_TOKEN validation logic 2024-10-21 21:46:24 +08:00
jxxghp
80328bdf2d Merge pull request #2891 from InfinityPacer/feature/push 2024-10-21 18:30:13 +08:00
InfinityPacer
87166b3cd7 feat(command): add validation for menu configuration 2024-10-21 16:54:43 +08:00
jxxghp
f91daf2106 Merge pull request #2888 from thsrite/dev 2024-10-21 13:35:04 +08:00
thsrite
3c8cf65902 feat: unified summary message sending for local directory monitoring 2024-10-21 13:22:21 +08:00
jxxghp
3c784e946a Merge pull request #2887 from InfinityPacer/feature/module 2024-10-21 13:18:33 +08:00
InfinityPacer
4034d69fbc fix(config): improve env update logic 2024-10-21 13:15:40 +08:00
jxxghp
eeed9849ef Fix the SubscribeHistory table schema 2024-10-21 13:07:47 +08:00
jxxghp
b07297c7e1 Merge pull request #2886 from InfinityPacer/feature/module 2024-10-21 06:42:48 +08:00
InfinityPacer
87813c853b fix(config): improve env update logic 2024-10-20 23:39:20 +08:00
jxxghp
571997fa8e Merge pull request #2885 from InfinityPacer/feature/module 2024-10-20 19:16:02 +08:00
InfinityPacer
9255c85a85 refactor(module): unify config retrieval logic 2024-10-20 18:56:52 +08:00
jxxghp
dba5603359 Merge pull request #2884 from InfinityPacer/feature/db 2024-10-20 17:35:16 +08:00
jxxghp
e76cb97092 Merge pull request #2882 from Aqr-K/dev-update 2024-10-20 16:58:51 +08:00
Aqr-K
6dde33d8fc fix(update)
- Fixed version comparison failing because the `v` prefix on the major version number was not stripped.
- Added detection of invalid version numbers.
- Replaced `cat` with `grep` and optimized the logic, so the script keeps working even if `version.py` gains more values.
- Fixed carriage returns and newlines in the value read by `cat` preventing it from being recognized by the version comparison.
- Automatically strip trailing comments from variable lines in `version.py` and trim leading/trailing whitespace, so the required value is always read correctly.
2024-10-20 16:31:50 +08:00
InfinityPacer
d1d98a9081 feat(db): add pool class configuration and adjust connection settings 2024-10-20 03:10:08 +08:00
jxxghp
08e07625cd fix: remote interactive commands 2024-10-20 01:54:24 +08:00
jxxghp
c650f1b5e3 Merge pull request #2879 from InfinityPacer/feature/cache 2024-10-20 01:08:45 +08:00
InfinityPacer
2c8ecdfcb9 fix(cache): support clear image cache 2024-10-20 01:05:06 +08:00
jxxghp
c6febe4755 Merge pull request #2878 from InfinityPacer/feature/module 2024-10-20 00:28:09 +08:00
InfinityPacer
08830c7edd revert: restore mistakenly committed 2024-10-20 00:26:50 +08:00
jxxghp
1a40860a5d fix default config 2024-10-19 23:54:41 +08:00
jxxghp
afd0edf7d1 fix: file handling when deleting history records 2024-10-19 23:43:32 +08:00
jxxghp
c2d3a00615 Merge pull request #2877 from InfinityPacer/feature/event 2024-10-19 21:23:14 +08:00
InfinityPacer
5b6083a1ec fix(auth): prevent disabled users from authenticating 2024-10-19 21:18:02 +08:00
jxxghp
363f12ed5a refactor: add execution priority ordering to Module 2024-10-19 19:31:25 +08:00
jxxghp
de17bc5645 refactor: media server return types 2024-10-19 19:04:16 +08:00
jxxghp
1e4f3e97cd refactor: media_exists supports specifying a server 2024-10-19 18:17:35 +08:00
jxxghp
69c02291a3 Merge pull request #2876 from InfinityPacer/feature/event
feat(auth): enhance auxiliary authentication
2024-10-19 18:04:43 +08:00
InfinityPacer
c7b27784c9 fix(auth): resolve conflicts 2024-10-19 18:03:18 +08:00
jxxghp
616b15e18a Merge pull request #2875 from Aqr-K/dev-login 2024-10-19 18:00:51 +08:00
InfinityPacer
1e781ba3d1 feat(auth): ensure user creation only for password strategy 2024-10-19 17:32:55 +08:00
InfinityPacer
d48c4d15e2 feat(auth): update intercept event type 2024-10-19 16:57:44 +08:00
jxxghp
1c2a194a7d fix rclone && alipan 2024-10-19 12:29:45 +08:00
Aqr-K
5d1ccef5a2 fix(login): also return user_id 2024-10-19 11:36:03 +08:00
jxxghp
6f299b3255 fix bug 2024-10-19 08:13:47 +08:00
jxxghp
974fe7c965 Update user.py 2024-10-19 07:33:44 +08:00
InfinityPacer
d8e7c7e6d7 feat(auth): enhance auxiliary authentication 2024-10-19 03:16:04 +08:00
jxxghp
386ff672a7 Merge pull request #2869 from DDS-Derek/docker 2024-10-18 21:19:48 +08:00
DDSRem
a802de2589 feat: docker built-in v2 compatible plugin 2024-10-18 20:25:54 +08:00
jxxghp
b6eac122b8 Merge pull request #2868 from InfinityPacer/feature/event 2024-10-18 20:16:45 +08:00
InfinityPacer
1a8e1844b4 feat(chain): add auth event to ChainEventType 2024-10-18 20:03:05 +08:00
jxxghp
2b982ce7a8 fix: message interaction, again 2024-10-18 18:30:34 +08:00
jxxghp
e93b3f5602 fix: message interaction 2024-10-18 18:10:46 +08:00
jxxghp
5ef4fc04d5 Merge pull request #2864 from Aqr-K/dev-user 2024-10-18 06:55:37 +08:00
jxxghp
1190d8dda4 Merge pull request #2863 from InfinityPacer/feature/setup 2024-10-18 06:54:03 +08:00
Aqr-K
0805f02f1f feat(user): Add username modification 2024-10-18 03:12:15 +08:00
InfinityPacer
4accd5d784 refactor(lifecycle): enhance shutdown support for event and mediaserver 2024-10-18 00:42:54 +08:00
InfinityPacer
4c2bb99b59 refactor(lifecycle): add async support for SSE 2024-10-18 00:39:54 +08:00
InfinityPacer
348923aaa6 refactor(lifecycle): set background threads to daemon mode 2024-10-18 00:39:54 +08:00
InfinityPacer
62ac03fb29 refactor(lifecycle): add graceful support and remove signal handling 2024-10-18 00:39:53 +08:00
jxxghp
a4bf59ad58 add: API to query the latest user data for all sites 2024-10-17 21:42:18 +08:00
jxxghp
c02c19d719 Merge pull request #2862 from InfinityPacer/feature/event 2024-10-17 16:15:27 +08:00
InfinityPacer
b83279b05a fix(site): resolve site user data update failure 2024-10-17 15:00:18 +08:00
jxxghp
cf94c70f8c Merge pull request #2861 from InfinityPacer/feature/event 2024-10-17 14:43:58 +08:00
InfinityPacer
52a15086cb feat(event): add SiteRefreshed event 2024-10-17 14:38:46 +08:00
jxxghp
8234c29006 Merge pull request #2860 from InfinityPacer/feature/setup 2024-10-17 12:18:29 +08:00
jxxghp
aeed9fb48e fix: SiteUserData 2024-10-17 12:17:03 +08:00
InfinityPacer
e233bc678c feat(plugin): add force install option and backup/restore on failure 2024-10-17 11:24:27 +08:00
InfinityPacer
346c6dd11c fix(plugin): optimize exist check and cleanup on installation failure 2024-10-17 10:51:14 +08:00
InfinityPacer
bcc48e885a feat(setup): support asynchronous install plugins on startup 2024-10-17 09:37:51 +08:00
jxxghp
4469a1b3b8 fix: optimize media server library sync settings 2024-10-16 15:58:37 +08:00
jxxghp
54666cb757 feat: more fine-grained download priority sorting logic 2024-10-16 15:30:58 +08:00
jxxghp
4455ac13e9 fix log 2024-10-16 08:12:34 +08:00
jxxghp
981e5ea927 Merge pull request #2856 from InfinityPacer/feature/module 2024-10-15 16:27:07 +08:00
jxxghp
541a3d68e6 Merge pull request #2855 from InfinityPacer/feature/event 2024-10-15 15:52:23 +08:00
InfinityPacer
ccc11c4892 fix(mediaserver): update get_type return type to ModuleType for consistency 2024-10-15 15:35:45 +08:00
InfinityPacer
9548409bd5 fix(event): refine handler invocation and improve class loading checks 2024-10-15 15:09:32 +08:00
InfinityPacer
11c10ea783 fix(event): improve handler enablement check mechanism 2024-10-15 14:44:13 +08:00
jxxghp
e99913f900 fix shutdown 2024-10-15 13:43:13 +08:00
jxxghp
8af37a0adc fix shutdown 2024-10-15 13:42:41 +08:00
jxxghp
810e3c98f9 Merge pull request #2854 from InfinityPacer/feature/security 2024-10-15 07:51:17 +08:00
jxxghp
4877ec68b1 feat: sort downloads by site upload 2024-10-14 19:45:22 +08:00
InfinityPacer
12c669aa17 fix(security): optimize URL validation 2024-10-14 19:38:25 +08:00
jxxghp
7fd65c572b Merge pull request #2852 from InfinityPacer/feature/cache 2024-10-14 17:09:27 +08:00
InfinityPacer
89819f8730 feat(cache): add HTTP cache support for image proxy 2024-10-14 17:00:27 +08:00
jxxghp
954110f166 Merge pull request #2849 from wikrin/dev 2024-10-14 06:50:29 +08:00
jxxghp
bd1427474d Merge pull request #2848 from InfinityPacer/feature/security 2024-10-14 06:49:38 +08:00
Attente
3909bb6393 fix: file loading failures in subscriptions
- Correct the hash field name
Possible remaining issue: relies on the network to fetch information
2024-10-14 05:44:50 +08:00
Attente
9a8e0a256a fix: media files not being found
Recursively list all files in the directory, fixing the bug where media files under `Season X` directories could not be found
2024-10-14 02:58:00 +08:00
InfinityPacer
675655bfc7 fix(security): optimize image caching 2024-10-14 02:22:07 +08:00
InfinityPacer
422474b4b7 feat(security): enhance image URL and domain validation 2024-10-14 01:33:53 +08:00
InfinityPacer
efb624259a fix(Utils): remove unnecessary methods 2024-10-13 22:40:58 +08:00
InfinityPacer
f9e06e4381 feat(security): add safe path check for log file access and validation 2024-10-13 21:59:22 +08:00
InfinityPacer
f67ee27618 Merge branch 'dev' of https://github.com/jxxghp/MoviePilot into feature/security 2024-10-13 00:13:35 +08:00
jxxghp
5224e6751d Merge pull request #2843 from InfinityPacer/dev 2024-10-12 14:36:45 +08:00
InfinityPacer
b263489635 feat(downloader): add compatibility support for qBittorrent 5.0 2024-10-12 14:27:35 +08:00
jxxghp
1a10f6d6e3 fix typo error 2024-10-12 12:34:28 +08:00
jxxghp
4e3a76ffa3 fix bug 2024-10-12 12:32:50 +08:00
jxxghp
0d139851af fix: organize code files by object name 2024-10-12 12:14:49 +08:00
jxxghp
603ab97665 add ModuleType Schema 2024-10-12 11:50:00 +08:00
jxxghp
fcfeeb09d3 Merge pull request #2838 from InfinityPacer/dev 2024-10-12 06:50:45 +08:00
InfinityPacer
ea32cd83af chore(deps): upgrade qbittorrent-api to support qbittorrent 5.0 2024-10-12 00:12:41 +08:00
jxxghp
1b8380d0c2 Merge pull request #2837 from Aqr-K/dev-update 2024-10-11 22:22:25 +08:00
Aqr-K
e3901c7621 feat(update): prevent the v2 version from being overwritten by v1 at startup
- Special version suffixes with a `-`, previously unsupported, are now supported, with 4 built-in English suffix formats (configurable).
Special suffix priority: alpha < beta < rc < stable < numeric value
Version priority: v1.9.17-alpha < v1.9.17 < v1.9.17-2 < v2.0.0-alpha < v2.0.0 < v2.0.0-2
2024-10-11 22:15:45 +08:00
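The suffix priority in e3901c7621 can be modelled as a sort key. This Python sketch mirrors the stated ordering only; the actual update script is shell, and the names here are hypothetical:

```python
# Rank table for the stated priority: alpha < beta < rc < stable < numeric.
SUFFIX_RANK = {"alpha": 0, "beta": 1, "rc": 2}  # stable = 3, numeric suffix = 4

def version_key(version: str) -> tuple:
    """Turn 'v1.9.17-alpha' into a tuple that sorts by the stated priority."""
    core, _, suffix = version.lstrip("v").partition("-")
    numbers = tuple(int(part) for part in core.split("."))
    if not suffix:                 # plain release ("stable")
        return numbers + (3, 0)
    if suffix.isdigit():           # numeric suffix outranks stable
        return numbers + (4, int(suffix))
    return numbers + (SUFFIX_RANK.get(suffix, 0), 0)

# The six versions from the commit message, in their documented order.
ordered = ["v1.9.17-alpha", "v1.9.17", "v1.9.17-2",
           "v2.0.0-alpha", "v2.0.0", "v2.0.0-2"]
assert sorted(ordered, key=version_key) == ordered
```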
jxxghp
f633d09a1d Merge pull request #2832 from wikrin/dev 2024-10-10 22:49:24 +08:00
jxxghp
e4cc834fa7 Merge pull request #2831 from InfinityPacer/feature/push 2024-10-10 22:46:54 +08:00
InfinityPacer
828e9ab886 feat(webhook): add support for server_name field in WebhookEventInfo 2024-10-10 22:07:03 +08:00
Attente
d1bf1411b6 fix: correct a duplicated special character [U+2014](https://symbl.cc/cn/2014/) --> [U+2015](https://symbl.cc/cn/2015/) 2024-10-10 22:01:09 +08:00
InfinityPacer
7532929669 fix(security): update SameSite setting to Lax for better compatibility 2024-10-10 20:08:30 +08:00
jxxghp
d2a613a441 Merge remote-tracking branch 'origin/dev' into dev 2024-10-10 19:04:46 +08:00
jxxghp
eb66f6c05a move NameRecognize to ChainEventType 2024-10-10 19:04:37 +08:00
jxxghp
9f79a30960 Merge pull request #2828 from InfinityPacer/feature/security 2024-10-10 15:58:58 +08:00
InfinityPacer
51391db262 fix(security): adjust resource token duration and refresh strategy 2024-10-10 15:43:29 +08:00
jxxghp
3541d47baf Merge remote-tracking branch 'origin/dev' into dev 2024-10-10 13:11:37 +08:00
jxxghp
b0c11bbe5f fix tmdb_trending api 2024-10-10 13:11:30 +08:00
jxxghp
82253af5a5 Merge pull request #2827 from InfinityPacer/feature/module 2024-10-10 06:34:01 +08:00
jxxghp
5ba555eead Merge pull request #2826 from InfinityPacer/feature/security 2024-10-10 06:33:29 +08:00
InfinityPacer
0c73bbbfe0 fix #2727 2024-10-10 02:09:23 +08:00
InfinityPacer
55403cd8a8 fix(security): handle errors and prevent unnecessary token refresh 2024-10-10 01:40:13 +08:00
InfinityPacer
871f8d3529 feat(security): add resource token authentication using HttpOnly Cookie 2024-10-10 00:45:26 +08:00
jxxghp
cadc0b0511 fix bug 2024-10-09 20:46:34 +08:00
jxxghp
084b5c8d68 add: subscription sharing and reuse APIs 2024-10-09 18:33:30 +08:00
jxxghp
16f6303609 fix: optimize logic 2024-10-09 16:50:54 +08:00
jxxghp
7ea01c1109 feat: support binding categories and custom recognition words to subscriptions 2024-10-09 15:21:32 +08:00
jxxghp
e31df15b5e Merge pull request #2820 from Cabbagec/dev-runscheduler2 2024-10-09 12:06:09 +08:00
Cabbagec
78cbe1aaed fix: formatting app/api/endpoints/system.py 2024-10-09 11:54:49 +08:00
jxxghp
2bfd32f716 Merge pull request #2821 from InfinityPacer/feature/module 2024-10-09 06:43:35 +08:00
jxxghp
092ac8a124 Merge pull request #2819 from InfinityPacer/feature/plugin 2024-10-09 06:40:39 +08:00
InfinityPacer
0d3d6e9bf9 fix(download): ensure params parsed from request body 2024-10-09 02:29:52 +08:00
InfinityPacer
e2ee3ec4cd feat(event): add downloader field to DownloadAdded event 2024-10-09 01:49:25 +08:00
InfinityPacer
9e161fb36c feat(module): add support for name filtering in service retrieval 2024-10-09 01:48:41 +08:00
brandonzhang
1b00bbc890 feat(endpoints): run scheduler through api by token 2024-10-08 23:54:18 +08:00
InfinityPacer
812d6029d0 chore: update plugin paths to use plugins.v2 2024-10-08 23:47:59 +08:00
jxxghp
52cf154e65 Merge pull request #2818 from InfinityPacer/feature/security 2024-10-08 20:36:49 +08:00
InfinityPacer
5b6b1231fe fix(security): update comments 2024-10-08 19:07:48 +08:00
InfinityPacer
1a9ba58023 feat(security): add token validation and support multi-server 2024-10-08 18:54:28 +08:00
InfinityPacer
4dd146d1c8 feat(security): replace validation with Depends for system endpoints 2024-10-08 18:12:40 +08:00
InfinityPacer
4af57d9857 feat(security): restore token validation 2024-10-08 17:28:30 +08:00
InfinityPacer
4f01b82b81 feat(security): unify token validation for message endpoints 2024-10-08 14:32:29 +08:00
jxxghp
9547847037 Merge pull request #2815 from InfinityPacer/feature/security 2024-10-08 06:32:03 +08:00
InfinityPacer
284082741e feat(security): obfuscate error messages in anonymous API 2024-10-08 01:51:45 +08:00
InfinityPacer
d7da2e133a feat(security): add cache to wallpaper endpoints to mitigate attacks 2024-10-07 23:37:20 +08:00
jxxghp
b704dcfe07 Merge pull request #2813 from InfinityPacer/feature/plugin 2024-10-07 21:00:11 +08:00
InfinityPacer
5c05845500 refactor(security): replace Depends with Security and define schemes 2024-10-07 16:35:39 +08:00
InfinityPacer
75530a22c3 fix(plugin): use positional arguments in get_plugins 2024-10-06 23:00:21 +08:00
jxxghp
cd4a6476c9 Merge pull request #2812 from InfinityPacer/feature/module 2024-10-06 14:49:52 +08:00
InfinityPacer
0afdd9056a fix(module): use getters for _instances and _configs in subclasses 2024-10-06 14:38:17 +08:00
InfinityPacer
5de882d788 fix(plex): resolve error in get_webhook_message 2024-10-06 14:26:44 +08:00
jxxghp
c35f1f0a07 Merge pull request #2811 from InfinityPacer/feature/module 2024-10-06 11:08:40 +08:00
InfinityPacer
4f27897e08 refactor(config): replace hard-coded strings with SystemConfigKey 2024-10-06 01:58:19 +08:00
InfinityPacer
ea76a27d26 feat(config): enforce API_TOKEN to meet security requirements 2024-10-06 01:33:16 +08:00
InfinityPacer
9d71c9b61e feat(config): centralize set_key usage through update_setting method 2024-10-05 03:14:16 +08:00
jxxghp
1484ce86a9 Merge pull request #2802 from InfinityPacer/feature/module 2024-10-02 20:16:49 +08:00
jxxghp
3b0154f8e3 Merge pull request #2801 from InfinityPacer/feature/plugin 2024-10-02 20:12:39 +08:00
InfinityPacer
cb761275ab feat(config): preprocess env variables using Pydantic validators 2024-10-02 19:17:31 +08:00
InfinityPacer
210c5e3151 feat(plugin): broadcast PluginReload event when plugin reload 2024-10-02 16:22:28 +08:00
jxxghp
bbe8f7f080 Merge pull request #2800 from InfinityPacer/feature/module 2024-10-02 13:08:40 +08:00
InfinityPacer
8317b6b7a2 fix(mediaserver): resolve media_statistic 2024-10-02 13:04:39 +08:00
jxxghp
9dcb28fe3d Merge pull request #2799 from InfinityPacer/feature/module 2024-10-02 11:50:14 +08:00
InfinityPacer
fb61eda831 fix(mediaserver): improve data isolation handling 2024-10-02 10:39:04 +08:00
jxxghp
f8149afb6e Merge pull request #2798 from InfinityPacer/feature/module 2024-10-01 19:46:17 +08:00
InfinityPacer
9dc603bd73 feat(downloader): support first_last_piece 2024-10-01 18:36:31 +08:00
jxxghp
0da914b891 Merge pull request #2797 from InfinityPacer/feature/db 2024-10-01 16:00:53 +08:00
jxxghp
5701bbb146 Merge pull request #2796 from InfinityPacer/feature/module 2024-10-01 16:00:23 +08:00
InfinityPacer
4b6d269230 feat(module): add type-checking methods 2024-10-01 15:28:26 +08:00
InfinityPacer
a25ff4302d fix(db): update Pydantic model to allow any type for 'note' field 2024-10-01 15:20:30 +08:00
jxxghp
80ada2232e Merge pull request #2795 from DDS-Derek/dev 2024-10-01 11:23:44 +08:00
DDSRem
557c1cd1e6 chore: update code logic optimization 2024-10-01 11:21:39 +08:00
jxxghp
7473f0ba27 Merge pull request #2793 from InfinityPacer/feature/db 2024-09-30 20:31:17 +08:00
jxxghp
ee455ac61e Merge pull request #2792 from InfinityPacer/feature/event 2024-09-30 20:28:56 +08:00
InfinityPacer
0ca42236d6 feat(event): add ModuleReload event type 2024-09-30 19:20:18 +08:00
InfinityPacer
835e0b4d5d fix(event): prevent error calls 2024-09-30 18:10:42 +08:00
InfinityPacer
d3186cd742 refactor(db): convert suitable string fields to JSON type 2024-09-30 16:16:29 +08:00
InfinityPacer
d69041f049 Merge remote-tracking branch 'upstream/dev' into feature/db 2024-09-30 14:31:20 +08:00
jxxghp
666f9a536d fix subscribe api 2024-09-30 13:33:06 +08:00
jxxghp
637e92304f Merge pull request #2791 from InfinityPacer/feature/plugin 2024-09-30 12:10:15 +08:00
jxxghp
80a1ded602 fix scraping file upload 2024-09-30 12:06:07 +08:00
InfinityPacer
e731767dfa feat(plugin): add PluginTriggered event type 2024-09-30 10:33:20 +08:00
jxxghp
06ea9e2d09 fix siteuserdata 2024-09-30 10:26:32 +08:00
jxxghp
886b31b35d Merge pull request #2790 from InfinityPacer/dev 2024-09-30 06:46:01 +08:00
jxxghp
da872cca41 Merge pull request #2789 from InfinityPacer/feature/module 2024-09-30 06:45:46 +08:00
InfinityPacer
daadfcffd8 feat(db): update model to support JSON 2024-09-30 03:07:33 +08:00
InfinityPacer
838e17bf6e fix(sync): have module return results directly instead of using yield 2024-09-30 02:59:09 +08:00
InfinityPacer
61ecc175f3 chore: Update .gitignore 2024-09-30 02:13:45 +08:00
InfinityPacer
709f8ef3ed chore: Update .gitignore to exclude all log files and archives 2024-09-30 00:38:01 +08:00
InfinityPacer
fdab59a84e fix #2784 2024-09-30 00:31:03 +08:00
InfinityPacer
0593275a62 feat(module): add ServiceBaseHelper for service and instance 2024-09-29 23:46:41 +08:00
jxxghp
7c643432ee Merge pull request #2783 from InfinityPacer/dev 2024-09-28 06:51:44 +08:00
InfinityPacer
5993bfcefb fix(#2755): remove yield None, handle generator termination on error 2024-09-28 00:57:59 +08:00
InfinityPacer
1add203c0e fix(#2755): refactor pagination and fix media sync DB issue 2024-09-28 00:57:13 +08:00
jxxghp
8b00e9cb72 Merge pull request #2781 from DDS-Derek/dev 2024-09-27 18:02:14 +08:00
DDSRem
14dd7c4e31 chore: use static compilation of aria2c 2024-09-27 17:33:52 +08:00
InfinityPacer
48122d8d9a fix(#2755): handle Plex None values and exceptions in item builder 2024-09-27 17:23:27 +08:00
jxxghp
8f5cf33fa9 Merge pull request #2780 from InfinityPacer/feature/module 2024-09-27 10:19:28 +08:00
jxxghp
3fe79d589a Merge pull request #2779 from InfinityPacer/feature/push 2024-09-27 10:09:55 +08:00
jxxghp
f3956a0504 Merge pull request #2778 from InfinityPacer/feature/plugin 2024-09-27 10:09:33 +08:00
InfinityPacer
efb3bd93d0 fix(wechat): reorder proxy setup 2024-09-27 04:27:16 +08:00
InfinityPacer
640a67fc3a fix(module): resolve infinite recursion in get_instance method 2024-09-27 04:12:22 +08:00
InfinityPacer
2ce3ddb75a refactor(module): simplify service instantiation with generics 2024-09-27 04:04:56 +08:00
jxxghp
1a36d9fe7a Merge pull request #2777 from Aqr-K/dev-transtype 2024-09-26 23:31:22 +08:00
Aqr-K
255c05daf9 fix: method name spelling error 2024-09-26 23:14:55 +08:00
Aqr-K
d1abc23cbd Update storage.py 2024-09-26 21:00:21 +08:00
Aqr-K
35c68fe30d feat: transType API
- API for querying the available file-organize (transfer) types
2024-09-26 20:56:33 +08:00
InfinityPacer
5efcd6e6be refactor(module): improve the implementation of base classes 2024-09-26 19:44:35 +08:00
jxxghp
46fb52fff9 merge db oper 2024-09-26 14:13:29 +08:00
jxxghp
c6abb1f9f1 fix site data refresh 2024-09-26 14:00:10 +08:00
jxxghp
b4b919db86 fix typo 2024-09-26 12:50:48 +08:00
jxxghp
1cef5e43e3 fix rule load 2024-09-26 12:36:56 +08:00
jxxghp
f6baf62189 Merge pull request #2776 from InfinityPacer/dev 2024-09-26 11:54:14 +08:00
InfinityPacer
e1aa4b7519 fix #2751 2024-09-26 11:11:17 +08:00
jxxghp
ddfcdf9ce2 fix 115 cloud drive file organizing 2024-09-26 08:36:08 +08:00
jxxghp
eff3fadfbf Merge pull request #2775 from InfinityPacer/dev 2024-09-26 06:49:15 +08:00
InfinityPacer
3512e7df4a fix(storage): handle null values in file sorting to prevent crashes 2024-09-26 01:01:27 +08:00
jxxghp
5b1d111a97 Update __init__.py 2024-09-25 22:35:18 +08:00
jxxghp
e1b557f681 Update __init__.py 2024-09-25 22:25:27 +08:00
jxxghp
93e053d06a fix cross-storage organizing (except 115 downloads) 2024-09-25 20:16:31 +08:00
jxxghp
f79364bc58 fix bug 2024-09-25 19:22:42 +08:00
jxxghp
2da95fa4e6 use aligo 2024-09-25 18:44:18 +08:00
InfinityPacer
90603fa2a9 fix(install): optimized logging 2024-09-25 17:52:22 +08:00
jxxghp
41d41685fe fix docker build 2024-09-25 16:55:52 +08:00
jxxghp
91efe2e94c fix docker build 2024-09-25 13:45:13 +08:00
jxxghp
d7f9ed5198 fix convert_boolean 2024-09-25 12:57:29 +08:00
jxxghp
f0464c4be7 Merge pull request #2774 from InfinityPacer/feature/api
feat(api): add support for dynamic plugin APIs
2024-09-25 08:13:17 +08:00
jxxghp
9863c85fe2 Merge pull request #2773 from InfinityPacer/dev
fix(queue): handle queue.Empty instead of TimeoutError on timeout
2024-09-25 08:10:23 +08:00
InfinityPacer
222991d07f feat(api): add support for dynamic plugin APIs 2024-09-25 02:20:23 +08:00
InfinityPacer
cf4c6b2d40 refactor(app): restructure project to avoid circular imports 2024-09-25 02:20:12 +08:00
InfinityPacer
6d55db466c fix(queue): handle queue.Empty instead of TimeoutError on timeout 2024-09-25 00:46:55 +08:00
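The queue fix above reflects a standard Python pitfall: `queue.Queue.get(timeout=...)` signals a timeout by raising `queue.Empty`, not `TimeoutError`. A minimal sketch of the corrected pattern (the function name is illustrative, not the project's actual code):

```python
import queue

def poll_event(q: queue.Queue, timeout: float = 0.05):
    """Return the next queued event, or None if the wait times out.

    Queue.get raises queue.Empty on timeout -- catching TimeoutError
    instead would let the exception escape and kill the consumer loop.
    """
    try:
        return q.get(timeout=timeout)
    except queue.Empty:
        return None

events = queue.Queue()
print(poll_event(events))      # None — timed out quietly
events.put("PluginReload")
print(poll_event(events))      # PluginReload
```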
jxxghp
88394005e5 fix log 2024-09-24 13:11:21 +08:00
jxxghp
959dc0f14b add filter log 2024-09-24 13:08:18 +08:00
jxxghp
c07d02e572 fix monitor 2024-09-24 13:02:10 +08:00
jxxghp
8612127161 fix scraping 2024-09-24 12:16:49 +08:00
jxxghp
4bf7e05a3d fix download api add save_path 2024-09-23 17:52:06 +08:00
jxxghp
9cfc27392d fix download api 2024-09-23 17:50:24 +08:00
jxxghp
1a3d88f306 fix bug 2024-09-23 08:02:30 +08:00
jxxghp
d7c277a277 Merge pull request #2764 from InfinityPacer/feature/event 2024-09-23 07:04:29 +08:00
jxxghp
8e8a10f04e Merge pull request #2763 from Cabbagec/dev 2024-09-22 07:13:24 +08:00
InfinityPacer
5fc5838abd fix(event): replace condition-based wait with exponential backoff 2024-09-22 02:38:28 +08:00
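Replacing a condition-based wait with exponential backoff, as the commit above describes, can be sketched as a simple polling loop with doubling sleeps (the function shape and names are assumptions, not the project's code):

```python
import time

def wait_for(predicate, attempts: int = 6, initial_delay: float = 0.01) -> bool:
    """Poll `predicate` with exponentially growing sleeps instead of
    blocking on a threading.Condition; return True once it holds."""
    delay = initial_delay
    for _ in range(attempts):
        if predicate():
            return True
        time.sleep(delay)
        delay *= 2  # back off: 10 ms, 20 ms, 40 ms, ...
    return predicate()  # one last check after the final sleep

print(wait_for(lambda: True))  # True — satisfied on the first check
```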
InfinityPacer
748836df23 fix(event): restore missing method removed in be63e9ed 2024-09-22 01:36:26 +08:00
Cabbagec
f0100e6dbc fix: Path.rglob/glob does not follow symlinks 2024-09-22 01:29:06 +08:00
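The symlink fix above points at a real pathlib limitation: before Python 3.13, the `**` pattern in `Path.glob`/`Path.rglob` does not descend into symlinked directories. One common workaround is `os.walk(..., followlinks=True)` — a sketch, not the project's actual code:

```python
import os
from pathlib import Path

def iter_files(root: Path, suffix: str):
    """Yield files under `root` with the given suffix, following
    directory symlinks that Path.rglob would silently skip."""
    for dirpath, _dirnames, filenames in os.walk(root, followlinks=True):
        for name in filenames:
            if name.endswith(suffix):
                yield Path(dirpath) / name
```

For example, `list(iter_files(Path("/media"), ".mkv"))` also picks up episodes stored behind a symlinked season directory.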
jxxghp
17aa6c674f fix event 2024-09-21 22:14:44 +08:00
jxxghp
796dc6d800 fix aliyun download 2024-09-21 22:06:40 +08:00
jxxghp
7444b3e84b fix storage api 2024-09-21 20:18:40 +08:00
jxxghp
fada22e892 fix webpush 2024-09-21 17:59:39 +08:00
jxxghp
c51826ba4c fix file organizing 2024-09-21 17:11:12 +08:00
jxxghp
d7e56eeb36 fix file organizing 2024-09-21 17:03:53 +08:00
jxxghp
f4b4e6e0dc Merge pull request #2760 from DDS-Derek/dev 2024-09-21 13:12:37 +08:00
DDSRem
a555c9b654 feat(playwright): add proxy support for chromium installation 2024-09-21 13:11:35 +08:00
jxxghp
997a9487a1 Merge pull request #2758 from InfinityPacer/feature/event 2024-09-20 22:07:29 +08:00
InfinityPacer
dea8fc5486 feat(event): optimized event execution flow 2024-09-20 21:45:52 +08:00
InfinityPacer
857383c8d0 feat(event): improve event consumer logic for handling of events 2024-09-20 20:37:29 +08:00
jxxghp
6a9fccaacb Merge pull request #2755 from qcgzxw/mediaserver 2024-09-20 19:04:54 +08:00
InfinityPacer
688693b31f feat(event): use dict for subscribers and replace handler if exists 2024-09-20 18:42:29 +08:00
Owen
7c5b4b6202 feat: add user playback state to MediaServerItem and pagination parameters to mediaserver.items() 2024-09-20 17:57:43 +08:00
InfinityPacer
ef0768ec44 feat(event): simplify register decorator 2024-09-20 16:32:36 +08:00
InfinityPacer
be63e9ed15 feat(event): optimize handler 2024-09-20 16:26:45 +08:00
jxxghp
6431524e61 Update local.py 2024-09-20 14:01:38 +08:00
InfinityPacer
3bee5a8a86 feat(event): separate implementation of broadcast and chain 2024-09-20 13:52:09 +08:00
jxxghp
e2bf0cd457 fix bug 2024-09-20 13:11:32 +08:00
jxxghp
8ac3fd46d2 fix file upload 2024-09-20 13:09:25 +08:00
jxxghp
117bd80528 fix scraping 2024-09-20 12:51:32 +08:00
jxxghp
91fc41261f Merge pull request #2751 from qcgzxw/bug
fix bug
2024-09-20 12:20:42 +08:00
jxxghp
ee5976a03e fix transfer 2024-09-20 12:17:50 +08:00
Owen
8a75159662 fix emby 2024-09-20 12:14:31 +08:00
Owen
63b0f5b70f fix plex 2024-09-20 12:14:01 +08:00
jxxghp
623580a7ae fix transfer 2024-09-20 08:15:23 +08:00
InfinityPacer
85cb9f7cd7 feat(event): add visualization and enhance handler 2024-09-20 01:34:01 +08:00
InfinityPacer
e786120e98 feat(event): update constant and support Condition for thread 2024-09-20 00:25:38 +08:00
InfinityPacer
49b6052ab0 refactor(event): optimize broadcast and chain event 2024-09-19 23:55:24 +08:00
jxxghp
2486b9274c fix webhook_parser 2024-09-19 21:09:07 +08:00
jxxghp
4016295696 fix transfer 2024-09-19 20:34:26 +08:00
jxxghp
f3b2bbfb6f fix rule group filter 2024-09-19 13:36:07 +08:00
jxxghp
786b317cea fix download && message 2024-09-19 08:30:58 +08:00
jxxghp
152546d89a Merge pull request #2744 from Aqr-K/dev-alipan 2024-09-19 07:35:33 +08:00
Aqr-K
bf21eda1bb Update alipan.py 2024-09-19 07:31:40 +08:00
jxxghp
6e8d1219f8 fix 115 2024-09-18 13:38:05 +08:00
jxxghp
69c3f9eb5d fix bug 2024-09-18 08:28:54 +08:00
jxxghp
bb086d7c83 Merge pull request #2735 from InfinityPacer/feature/plugin 2024-09-18 06:44:31 +08:00
InfinityPacer
28f7a409f9 fix(plugin): improve logging 2024-09-17 18:31:27 +08:00
InfinityPacer
3141d02e44 Merge branch 'dev' of https://github.com/jxxghp/MoviePilot into feature/plugin 2024-09-17 17:32:51 +08:00
InfinityPacer
5136698617 feat(plugin): improve online plugin retrieval 2024-09-17 17:26:06 +08:00
InfinityPacer
8cc72f402b fix(config): make VERSION_FLAG a read-only property 2024-09-17 16:56:08 +08:00
InfinityPacer
4d2e77fc51 feat(plugin): improve plugin version upgrade and compatibility 2024-09-17 16:05:16 +08:00
jxxghp
d5aa52ed91 Merge pull request #2729 from Aqr-K/dev 2024-09-16 22:17:57 +08:00
Aqr-K
148e4a95ee fix: cloud disk bug
- When the frontend calls without auth parameters, or they have expired, the backend returns None, which triggers a pydantic error; the frontend then cannot get a result and hangs on the refreshing page
2024-09-16 22:16:54 +08:00
jxxghp
3c43055f10 Merge pull request #2727 from qcgzxw/mediaserver 2024-09-16 21:49:36 +08:00
jxxghp
1920dc0a82 Merge pull request #2725 from InfinityPacer/feature/auth 2024-09-16 21:46:02 +08:00
owen
8306aa92db refactor: change how request parameters are passed in the Emby and Jellyfin API URLs 2024-09-16 20:23:24 +08:00
InfinityPacer
947a19eb95 fix(auth): set empty avatar to avoid missing default avatar 2024-09-16 15:42:37 +08:00
InfinityPacer
36142b97bf fix(auth): handle scenario where the user is null 2024-09-16 15:41:13 +08:00
jxxghp
4efc80e35a Merge pull request #2716 from InfinityPacer/feature/plugin 2024-09-14 18:16:02 +08:00
jxxghp
31aadabe86 Merge pull request #2715 from InfinityPacer/feature/push 2024-09-14 18:00:23 +08:00
InfinityPacer
593bcbf455 fix(auth): set AUXILIARY_AUTH_ENABLE default to false 2024-09-14 17:56:50 +08:00
InfinityPacer
220fef5c9b feat(plugin): optimize logging with detailed debug info 2024-09-14 17:55:18 +08:00
InfinityPacer
343f51ce79 feat(plugin): enhance fallback strategies 2024-09-14 17:23:44 +08:00
jxxghp
e86bf61579 fix storage api 2024-09-14 14:43:14 +08:00
jxxghp
8bb25afcdc fix transfer bug 2024-09-14 13:19:51 +08:00
jxxghp
57bad6353c fix bug 2024-09-14 13:13:11 +08:00
jxxghp
f6c84a744c feat: match Mandarin-dubbed releases for mainland China series 2024-09-14 12:33:05 +08:00
jxxghp
5229a0173a fix 2024-09-14 12:20:22 +08:00
jxxghp
5a6733fa32 fix bug 2024-09-14 11:44:38 +08:00
InfinityPacer
e0e4b31933 feat(plugin): implement fallback mechanism for install plugin 2024-09-14 11:26:43 +08:00
jxxghp
a29cf83aba fix bug 2024-09-14 08:21:13 +08:00
jxxghp
ede37b80fc fix bug 2024-09-14 07:06:39 +08:00
InfinityPacer
2f2ecc8c43 fix(push): correct client type 2024-09-13 18:54:48 +08:00
jxxghp
5ec7357c56 fix bug 2024-09-13 17:23:41 +08:00
jxxghp
a547ea954d Merge pull request #2705 from InfinityPacer/feature/api-token 2024-09-13 09:59:17 +08:00
InfinityPacer
777c7c78d0 fix(plugin): use verify_apikey for backward compatibility 2024-09-13 09:49:22 +08:00
jxxghp
c1bf32318b Merge pull request #2704 from InfinityPacer/fix/sync 2024-09-13 08:33:59 +08:00
jxxghp
6a65b5b234 Merge pull request #2703 from InfinityPacer/feature/auth 2024-09-13 08:32:54 +08:00
jxxghp
d0a868123d Merge pull request #2701 from Akimio521/dev 2024-09-13 08:32:28 +08:00
jxxghp
b9f1ebff89 Merge pull request #2700 from InfinityPacer/feature/api-token 2024-09-13 08:32:10 +08:00
InfinityPacer
0ef8efd5a5 fix(sync): skip when no libraries are retrieved 2024-09-13 01:18:16 +08:00
InfinityPacer
9129de1720 fix(sync): skip disabled mediaservers and empty libraries 2024-09-13 00:58:03 +08:00
InfinityPacer
1ebec13afb feat(auth): add AUXILIARY_AUTH_ENABLE for user authentication 2024-09-12 21:12:10 +08:00
jxxghp
73407825f5 fix site userdata 2024-09-12 15:44:31 +08:00
jxxghp
53195457c7 fix module test 2024-09-12 15:13:58 +08:00
jxxghp
9a62feb9a9 feat: rule groups can be scoped to media categories 2024-09-12 12:42:41 +08:00
jxxghp
26abccabf3 feat: rule groups can be scoped to media categories 2024-09-12 12:36:58 +08:00
Akimio521
596b2e11b8 feat: match Hong Kong and Taiwan translated titles when searching torrents 2024-09-12 12:33:17 +08:00
jxxghp
e2436ba94f fix api 2024-09-12 08:24:55 +08:00
jxxghp
f9895b2edd fix api 2024-09-12 08:16:02 +08:00
InfinityPacer
3446aec6a2 feat(plugin): add API_TOKEN validation for plugin API registration 2024-09-12 02:36:34 +08:00
InfinityPacer
23b9774c5d feat(auth): add API_TOKEN validation and auto-generation 2024-09-12 00:17:26 +08:00
InfinityPacer
540f5eb77f refactor: unify env path 2024-09-12 00:05:07 +08:00
InfinityPacer
8b336cf3eb fix(log): add support for CONFIG_DIR through environment variables 2024-09-11 23:53:15 +08:00
jxxghp
186476ad31 fix notification send-scope initialization 2024-09-11 08:10:38 +08:00
jxxghp
171f15e410 Merge pull request #2698 from InfinityPacer/feature/log-refactor 2024-09-11 06:48:13 +08:00
jxxghp
05bbfde943 Merge pull request #2697 from InfinityPacer/dev 2024-09-11 06:46:52 +08:00
InfinityPacer
150e2366da refactor(log): add support for configurable log settings in .env 2024-09-10 21:43:09 +08:00
jxxghp
9c47da8c98 fix dashboard api 2024-09-10 21:10:45 +08:00
InfinityPacer
0f5290be18 chore: Delete 插件版本兼容与升级方案.md 2024-09-10 12:09:21 +08:00
jxxghp
104348ba0e fix dockerfile 2024-09-10 11:34:52 +08:00
jxxghp
1a1318b5e4 fix 2024-09-10 11:16:12 +08:00
jxxghp
8a6ad03880 feat: support V2-only plugins 2024-09-10 08:26:30 +08:00
jxxghp
d0ac5646f5 fix transfer_completed 2024-09-10 07:58:22 +08:00
jxxghp
89f2bf5f30 Update message.py 2024-09-09 22:54:47 +08:00
jxxghp
c3ef3dd7d1 fix global variable definitions 2024-09-09 22:17:49 +08:00
InfinityPacer
aa6fa8d336 chore: Add 插件版本兼容与升级方案.md 2024-09-09 21:39:27 +08:00
jxxghp
f18b9793b4 Merge pull request #2696 from Aqr-K/dev 2024-09-09 21:35:34 +08:00
Aqr-K
15946f8d0a Update config.py 2024-09-09 21:13:15 +08:00
InfinityPacer
c2824a1bc8 chore: Update development-setup.md 2024-09-09 19:02:13 +08:00
jxxghp
2d8dd6cc17 Merge pull request #2694 from InfinityPacer/dev 2024-09-09 18:46:45 +08:00
InfinityPacer
adf78a9e3e fix(requirements): install pywin32 only on Windows 2024-09-09 18:44:14 +08:00
jxxghp
40ee902457 remove pywin32 2024-09-09 16:55:45 +08:00
jxxghp
48ac6e727b fix build 2024-09-09 16:43:05 +08:00
jxxghp
d8a2b0497e feat: v2 adds a version flag (used by plugins for compatibility checks); the plugin marketplace only shows plugins compatible with the matching version flag 2024-09-09 10:36:42 +08:00
jxxghp
1d31785def fix reload api 2024-09-09 09:51:48 +08:00
jxxghp
b1d2125e22 fix https://github.com/jxxghp/MoviePilot/pull/2610 merge sync-message changes into the dev branch 2024-09-09 08:55:10 +08:00
jxxghp
81ce44ee4d Merge remote-tracking branch 'origin/dev' into dev 2024-09-09 08:33:16 +08:00
jxxghp
d806931296 downloading api supports multiple downloaders 2024-09-09 08:33:07 +08:00
jxxghp
28a8bb4baa Merge pull request #2690 from DDS-Derek/dev 2024-09-08 17:54:11 +08:00
jxxghp
773399347d fix user api 2024-09-08 14:53:52 +08:00
DDSRem
a5cecdd631 feat(docker): retry if download fails
fix https://github.com/jxxghp/MoviePilot/issues/2688
2024-09-08 14:42:01 +08:00
jxxghp
34ae663d5a add subscribe/files api 2024-09-08 13:02:42 +08:00
jxxghp
01505ceaa7 add site userdata refresh api 2024-09-08 08:44:54 +08:00
jxxghp
f0b1cdbe52 Merge pull request #2689 from Akimio521/dev 2024-09-07 08:11:00 +08:00
Akimio521
a13d32c17f fix: convert history.episodes to episode_detail 2024-09-06 19:05:55 +08:00
Akimio521
c438cd5713 feat: manual file transfers can pull related info from history records 2024-09-06 19:05:30 +08:00
jxxghp
31c9fa932a Merge pull request #2687 from InfinityPacer/dev 2024-09-06 12:01:42 +08:00
InfinityPacer
4493d4c62f refactor(PluginMonitor): use rate_limit_window for rate limit 2024-09-05 23:47:57 +08:00
InfinityPacer
862f3cb623 feat(limit): change default raise_on_limit to False 2024-09-05 23:46:08 +08:00
InfinityPacer
ffbcc988b3 feat(limit): refactor RateLimiter to limit package 2024-09-05 23:29:44 +08:00
jxxghp
ab294ac35e Merge pull request #2684 from InfinityPacer/dev 2024-09-05 06:54:24 +08:00
InfinityPacer
d9f6db18d4 feat(Douban): add global rate-limiter 2024-09-05 00:49:29 +08:00
InfinityPacer
7a7225ba45 fix(rate-limiter): optimize log 2024-09-05 00:46:36 +08:00
InfinityPacer
b42a69f361 feat(rate-limiter): support dynamic raise_exception 2024-09-05 00:41:56 +08:00
InfinityPacer
eea6bd1ea3 feat(rate-limiter): add source context for enhanced logging 2024-09-04 20:12:29 +08:00
InfinityPacer
73fca81641 feat(rate-limiter): add rate limiter 2024-09-04 20:10:41 +08:00
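The rate-limiter series above (a limit window, a dynamic raise-on-limit switch, and a source context for logging) can be condensed into a small sketch; the parameter names follow the commit messages, but the implementation is an assumption, not the project's actual code:

```python
import time

class RateLimiter:
    """Reject (or raise on) calls arriving inside a cooldown window."""

    def __init__(self, rate_limit_window: float,
                 raise_on_limit: bool = False, source: str = ""):
        self.window = rate_limit_window
        self.raise_on_limit = raise_on_limit
        self.source = source                  # context string for log messages
        self._last_allowed = float("-inf")

    def try_acquire(self) -> bool:
        now = time.monotonic()
        if now - self._last_allowed < self.window:
            if self.raise_on_limit:
                raise RuntimeError(f"rate limit hit ({self.source})")
            return False                      # default: reject quietly
        self._last_allowed = now
        return True

douban = RateLimiter(rate_limit_window=0.2, source="Douban")
print(douban.try_acquire())  # True  — first call passes
print(douban.try_acquire())  # False — inside the 0.2 s window
```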
jxxghp
f4b010f106 Merge pull request #2674 from InfinityPacer/dev 2024-09-01 06:56:48 +08:00
InfinityPacer
93801e857e fix(lxml): Adjust HTML element checks to prevent FutureWarning 2024-08-31 02:14:52 +08:00
jxxghp
8f73e45a30 Merge pull request #2668 from InfinityPacer/dev 2024-08-30 17:55:16 +08:00
InfinityPacer
9ab852c1ad feat(sqlite): adjust default settings 2024-08-30 04:46:08 +08:00
InfinityPacer
88a0de7fa6 feat(sqlite): support customizable connection settings 2024-08-30 02:42:50 +08:00
jxxghp
78657cb948 fix api 2024-08-29 16:15:06 +08:00
jxxghp
264cd2658b fix cache path 2024-08-29 15:39:12 +08:00
jxxghp
f4dfaa0519 fix douban api 2024-08-29 08:36:11 +08:00
jxxghp
707921e15d feat:global image cache api 2024-08-28 18:11:06 +08:00
jxxghp
eea8b9a8a6 feat:global image cache api 2024-08-28 17:53:06 +08:00
jxxghp
bd7fc2d4ff Merge pull request #2656 from InfinityPacer/dev 2024-08-26 10:08:31 +08:00
InfinityPacer
bc1da0a7c7 refactor(CookieCloud): Consolidate crypto and hash operations into HashUtils and CryptoJsUtils 2024-08-23 18:52:00 +08:00
jxxghp
3ae34216d0 Merge pull request #2653 from InfinityPacer/dev 2024-08-23 07:23:46 +08:00
InfinityPacer
c15d326636 Merge branch 'dev' of https://github.com/InfinityPacer/MoviePilot into dev 2024-08-22 21:52:52 +08:00
InfinityPacer
f93bcd852c chore(security): Ignore resolved vulnerabilities after upgrading python-multipart 2024-08-22 21:52:03 +08:00
InfinityPacer
0bf30bb75f fix(downgrade): Rollback FastAPI and Starlette due to compatibility issues with Pydantic V2 affecting API parameter handling. 2024-08-22 21:51:46 +08:00
InfinityPacer
93b899b7e9 refactor(UrlUtils): Migrate URL-related methods from RequestUtils 2024-08-21 22:12:56 +08:00
InfinityPacer
fef270f73b refactor(RSAUtils): Add key_size parameter to generate_rsa_key_pair method and update comments 2024-08-21 01:21:55 +08:00
jxxghp
c7dcbf697e Merge pull request #2649 from InfinityPacer/dev 2024-08-20 21:13:16 +08:00
InfinityPacer
d5241a2eb8 chore: Integrate pip-tools and safety, upgrade vulnerable dependencies 2024-08-20 18:52:07 +08:00
jxxghp
cf3d6bca91 115 linux client 2024-08-20 09:02:25 +08:00
jxxghp
1f87bc643a sync main 2024-08-19 13:06:39 +08:00
jxxghp
566928926b Merge pull request #2646 from InfinityPacer/dev 2024-08-18 23:13:15 +08:00
InfinityPacer
0a74437253 feat(TimerUtils): add random_even_scheduler for evenly distributed schedule creation 2024-08-18 22:24:24 +08:00
jxxghp
65ff01b713 fix parse_json_fields 2024-08-16 17:53:12 +08:00
jxxghp
5d3809b8f5 fix rclone usage 2024-08-16 17:14:20 +08:00
jxxghp
6e334ef333 fix apis 2024-08-16 17:03:51 +08:00
jxxghp
c030f52418 fix apis 2024-08-16 13:41:19 +08:00
jxxghp
af88618fbd fix storage api 2024-08-16 11:59:45 +08:00
jxxghp
8485d4ec30 fix storage api 2024-08-16 11:30:31 +08:00
jxxghp
61e4e63a6a fix storage api 2024-08-15 16:15:26 +08:00
jxxghp
47481d2482 fix storage api 2024-08-15 15:27:47 +08:00
jxxghp
65c8f35f6d fix types 2024-08-15 11:45:23 +08:00
jxxghp
6358e49a96 fix statistic info api 2024-08-12 11:02:05 +08:00
jxxghp
fc1076586a fix downloader info api 2024-08-12 08:17:39 +08:00
jxxghp
5bfd08cce8 Merge pull request #2630 from DDS-Derek/dev 2024-08-05 19:46:33 +08:00
DDSRem
63b0d0a86b feat: optimize proxy log output 2024-08-05 19:41:58 +08:00
jxxghp
a41be81f35 Merge pull request #2629 from Aqr-K/dev 2024-08-05 18:23:05 +08:00
jxxghp
cec671e8a1 fix rule schema 2024-08-05 18:14:24 +08:00
Aqr-K
0097a6f33b Merge branch 'jxxghp:dev' into dev 2024-08-05 17:57:41 +08:00
jxxghp
0055f4c7af fix rule schema 2024-08-05 17:48:15 +08:00
Aqr-K
e19abeb149 Update update 2024-08-05 17:45:09 +08:00
jxxghp
236f59d56f Merge pull request #2628 from Aqr-K/dev 2024-08-05 17:42:02 +08:00
Aqr-K
8fba3cf170 Update update 2024-08-05 17:39:52 +08:00
jxxghp
0cb120a9e5 Merge pull request #2626 from DDS-Derek/dev 2024-08-05 16:06:36 +08:00
DDSRem
67f991d217 fix: root-user-action parameter setting error 2024-08-05 15:59:59 +08:00
DDSRem
466b42bea7 feat: automatic acceleration selection 2024-08-05 15:52:58 +08:00
jxxghp
f7d583856f Merge pull request #2624 from Aqr-K/dev 2024-08-05 14:39:40 +08:00
Aqr-K
d5d32e2335 Update update 2024-08-04 00:09:00 +08:00
Aqr-K
e8ff878aac Remove the PROXY_SUPPLEMENT variable; add read-only property 'PIP_OPTIONS'
1. Remove `proxychains4` module support; `pip` proxying now supports the `socks4`, `socks4a`, `socks5`, `socks5h`, `http`, and `https` protocols in global mode;
2. Remove the `PROXY_SUPPLEMENT` variable and its manual control;
3. Add automatic detection: proxy selection for `pip` and `update` is now automatic instead of manual, with priority mirror site > global proxy > no proxy;
4. Write `pip`'s extra proxy arguments into `config` as the read-only property `PIP_OPTIONS`; other objects can access it via `settings.PIP_OPTIONS`.
2024-08-03 23:02:10 +08:00
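The priority chain described above (mirror site > global proxy > no proxy) reduces to a small resolver. The argument names and returned flags here are illustrative — per the commit, the project exposes the result as the read-only `settings.PIP_OPTIONS` property:

```python
from typing import List, Optional

def resolve_pip_options(pip_proxy: Optional[str],
                        global_proxy: Optional[str]) -> List[str]:
    """Extra pip arguments, chosen by priority: mirror > proxy > nothing."""
    if pip_proxy:                      # a PyPI mirror wins outright
        return ["-i", pip_proxy]
    if global_proxy:                   # otherwise route pip through the proxy
        return ["--proxy", global_proxy]
    return []                          # no mirror, no proxy: plain pip

print(resolve_pip_options("https://pypi.org/simple", "http://127.0.0.1:7890"))
# ['-i', 'https://pypi.org/simple']
```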
Aqr-K
fea7b7d02d Update urlparse.py 2024-08-03 18:36:16 +08:00
Aqr-K
c69d317054 Add the PROXY_SUPPLEMENT variable
1. Add the proxychains4 module to work around pip being unable to use the global proxy under socks5;
2. Add the `PROXY_SUPPLEMENT` variable for manual control, so that when updating via a mirror site but the pip or GitHub mirror is missing, the global proxy can fill in for the missing one
2024-08-03 18:29:06 +08:00
jxxghp
40663b6ce7 fix local 2024-07-26 22:04:50 +08:00
jxxghp
4f4c7a5748 Merge pull request #2604 from Aqr-K/dev
Add the ``PIP_PROXY`` variable to support downloading and updating dependencies via mirror sites
2024-07-25 17:51:40 +08:00
Aqr-K
7adae64955 Fix incorrect GITHUB_PROXY code and simplify it 2024-07-25 17:43:09 +08:00
Aqr-K
4afd043f85 Remove redundant parameters 2024-07-24 23:17:13 +08:00
Aqr-K
dc5250a74e Adjust the proxy priority used when updating on restart; prefer mirror sites 2024-07-24 23:10:39 +08:00
Aqr-K
0bd91ee484 Add the PYPI_PROXY variable to support downloading and updating dependencies via mirror sites 2024-07-24 23:02:46 +08:00
jxxghp
6bbdc574b6 fix db init 2024-07-24 16:58:50 +08:00
jxxghp
c275d4db22 fix db init 2024-07-24 16:49:27 +08:00
jxxghp
843d93f0a8 Merge pull request #2599 from DDS-Derek/dev 2024-07-23 16:16:25 +08:00
DDSRem
5d26e70cae fix: pip warning in root mode 2024-07-23 14:43:20 +08:00
DDSRem
766640a0a0 feat: better logging output 2024-07-23 14:39:15 +08:00
jxxghp
7459938e92 Merge pull request #2597 from DDS-Derek/dev 2024-07-22 15:58:59 +08:00
DDSRem
bc476cb0c9 feat: restart update failed and auto restore backup 2024-07-22 15:33:38 +08:00
jxxghp
dc92a554f6 init storage 2024-07-20 09:16:26 +08:00
jxxghp
d8c8d43ed9 fix 2024-07-20 08:52:36 +08:00
jxxghp
0f8c2d3fc9 fix db init 2024-07-20 08:47:26 +08:00
jxxghp
b949969b10 Merge pull request #2585 from DDS-Derek/dev 2024-07-19 19:49:00 +08:00
DDSRem
66e13c5a31 fix: display variable is not effective
fix 547812162d
2024-07-19 14:53:21 +08:00
jxxghp
294ff93e2b Merge pull request #2584 from DDS-Derek/dev 2024-07-19 12:47:01 +08:00
DDSRem
547812162d fix: display id conflict
fix https://github.com/jxxghp/MoviePilot/issues/2247
2024-07-19 12:44:16 +08:00
jxxghp
0028e2f830 fix 2024-07-14 19:32:18 +08:00
jxxghp
97fdfe789e fix 2024-07-14 18:50:31 +08:00
jxxghp
b0874f56c9 fix 2024-07-09 20:04:06 +08:00
jxxghp
3d2b645bfc fix user 2024-07-09 08:25:22 +08:00
jxxghp
47b276795f Merge pull request #2536 from DDS-Derek/dev 2024-07-08 12:24:29 +08:00
DDSRem
9331f82b81 feat: refactor docker http proxy 2024-07-08 12:20:54 +08:00
jxxghp
bb4355fbe0 fix permissions 2024-07-07 08:09:26 +08:00
jxxghp
a567a8644b add ModuleBases 2024-07-07 07:40:53 +08:00
jxxghp
9b7896ab96 fix 2024-07-06 16:48:43 +08:00
jxxghp
1a0c4acf1c fix bug 2024-07-06 08:30:09 +08:00
jxxghp
059e4f08a3 fix permissions 2024-07-05 08:10:21 +08:00
jxxghp
30ae583704 fix torrent filter 2024-07-04 22:16:20 +08:00
jxxghp
28d420af51 fix bug 2024-07-04 22:04:10 +08:00
jxxghp
c87b982ebf fix bug 2024-07-04 21:58:10 +08:00
jxxghp
290cafa03d add user requests 2024-07-04 21:52:49 +08:00
jxxghp
604c418bd4 fix filter rules 2024-07-04 21:25:50 +08:00
jxxghp
28345817d9 add siteuserdata 2024-07-04 19:42:09 +08:00
jxxghp
965e40e630 fix message 2024-07-04 18:45:22 +08:00
jxxghp
5f01dd5625 fix user 2024-07-04 07:13:49 +08:00
jxxghp
dde2d22d93 fix 2024-07-03 17:50:02 +08:00
jxxghp
9f34be049d add monitor 2024-07-03 17:46:35 +08:00
jxxghp
db26f2e108 add storage snapshot 2024-07-03 11:51:26 +08:00
jxxghp
35eda7d116 fix filemanager 2024-07-03 08:49:59 +08:00
jxxghp
b6800c7fda fix 2024-07-03 07:10:46 +08:00
jxxghp
03068778bc fix transfer 2024-07-02 20:48:26 +08:00
jxxghp
0da2bd6468 fix rclone 2024-07-02 20:32:32 +08:00
jxxghp
b37e50480a fix storage 2024-07-02 18:31:17 +08:00
jxxghp
8530d54fcc fix filemanager 2024-07-02 18:16:52 +08:00
jxxghp
1822d01d17 fix directories 2024-07-02 17:47:29 +08:00
jxxghp
f23be671c0 fix 2024-07-02 13:54:29 +08:00
jxxghp
15a7297099 fix messages 2024-07-02 13:50:41 +08:00
jxxghp
9484093d22 fix downloader 2024-07-02 11:11:25 +08:00
jxxghp
c8fe6e4284 fix downloaders 2024-07-02 11:00:55 +08:00
jxxghp
dfc5872087 fix mediaservers 2024-07-02 10:03:56 +08:00
jxxghp
9a07d88d41 fix downloaders && mediaservers && notifications 2024-07-02 07:16:33 +08:00
jxxghp
b4e1e911fc fix 2024-07-01 21:38:44 +08:00
jxxghp
60827fd5b1 fix local get_folder 2024-07-01 21:26:39 +08:00
jxxghp
cf409eb28f fix transfer 2024-07-01 21:24:02 +08:00
jxxghp
f16eb271da fix transfer chain 2024-07-01 18:30:15 +08:00
jxxghp
778b562cab fix scraper 2024-07-01 15:25:35 +08:00
jxxghp
964e212831 fix storage 2024-07-01 11:57:32 +08:00
jxxghp
302514a469 fix 2024-06-30 19:48:23 +08:00
jxxghp
3d79b5bb2a fix storage api 2024-06-30 19:44:04 +08:00
jxxghp
3dd5c91ce7 fix storage api 2024-06-30 19:43:07 +08:00
jxxghp
02ad98c024 fix storage api 2024-06-30 19:41:32 +08:00
jxxghp
a7b906ada6 fix storage 2024-06-30 18:44:23 +08:00
jxxghp
a62ca9a226 fix transfer 2024-06-30 13:25:29 +08:00
jxxghp
02030a8e2d add site_userdata 2024-06-30 11:00:25 +08:00
jxxghp
63ca5ee313 add storage 2024-06-30 08:59:12 +08:00
jxxghp
77632880d1 add site parsers 2024-06-30 08:09:23 +08:00
jxxghp
20fa8feab0 Merge pull request #2431 from jxxghp/main
fix
2024-06-26 16:09:51 +08:00
jxxghp
be55c7bdd9 Merge pull request #2430 from InfinityPacer/main 2024-06-26 15:55:24 +08:00
InfinityPacer
a4288aa871 fix #2428 2024-06-26 15:51:31 +08:00
jxxghp
c0f15ac7ff Merge remote-tracking branch 'origin/main' 2024-06-26 15:18:05 +08:00
jxxghp
4047d433f5 fix 2024-06-26 15:17:42 +08:00
jxxghp
0699f0003c Merge pull request #2429 from jxxghp/main
merge
2024-06-26 14:49:38 +08:00
jxxghp
91d6769d0f Merge branch 'dev' into main 2024-06-26 14:49:27 +08:00
jxxghp
ad378956bf support haidan index 2024-06-26 09:08:18 +08:00
jxxghp
9dcfb6dc1e v1.9.8-1
- Fixed an error in automatic episode scraping
2024-06-25 16:32:45 +08:00
jxxghp
2d0b21d3f2 fix #2418
fix #2421
fix #2412
2024-06-25 16:29:57 +08:00
jxxghp
3287c85300 Merge pull request #2415 from thsrite/main 2024-06-25 10:06:25 +08:00
thsrite
fd2682bc6a add delete download history / delete downloaded-file history 2024-06-25 10:03:29 +08:00
jxxghp
7dd1e75ad7 Merge remote-tracking branch 'origin/main' 2024-06-24 17:13:58 +08:00
jxxghp
93b8f24ec7 v1.9.8
- Fixed Aliyun Drive being unable to organize the backup drive
- Fixed incomplete fanart image files during manual organizing
- Fixed downloads triggered via remote messages not being auto-categorized
- Fixed the error message shown on login failure
- Fixed duplicate subscription downloads in some scenarios
2024-06-24 17:13:50 +08:00
jxxghp
1c240f9d76 Update README.md 2024-06-24 17:06:56 +08:00
jxxghp
9a2ef5fe48 Update README.md 2024-06-24 17:06:08 +08:00
jxxghp
7bd55caed7 reinit 2024-06-24 12:53:45 +08:00
jxxghp
ae36f5100a Merge pull request #2410 from jxxghp/main
fix bugs
2024-06-24 12:47:39 +08:00
jxxghp
b2efac0495 Merge pull request #2409 from jxxghp/revert-2407-dev
Revert "fix bugs"
2024-06-24 12:42:09 +08:00
jxxghp
1dced579ea Revert "fix bugs" 2024-06-24 12:41:59 +08:00
jxxghp
0deea17ef9 Merge pull request #2407 from jxxghp/dev
fix bugs
2024-06-24 12:36:37 +08:00
jxxghp
3d0c06013d fix bug 2024-06-24 09:37:11 +08:00
jxxghp
2536119f60 feat: cloud-drive organizing triggers scraping 2024-06-24 09:12:26 +08:00
jxxghp
aeede861e3 fix bug 2024-06-24 08:49:20 +08:00
jxxghp
1edbfb0d2d fix bug 2024-06-24 08:08:39 +08:00
jxxghp
265724bbe9 Merge pull request #2402 from thsrite/main 2024-06-23 19:47:24 +08:00
jxxghp
2b0b190cf8 fix bug 2024-06-23 19:46:36 +08:00
thsrite
08a2b348d8 add get_by_dest 2024-06-23 19:45:08 +08:00
jxxghp
e896068bc5 fix #2400 2024-06-23 18:48:13 +08:00
jxxghp
85e5338121 fix #2340
fix incomplete images from manual scraping
2024-06-23 18:40:44 +08:00
jxxghp
5c3cd8cabc init repo 2024-06-23 09:33:27 +08:00
jxxghp
5a837a4161 v1.9.8-beta
- File management supports multi-select, with batch organizing and scraping for cloud drives; Aliyun Drive supports the backup drive
2024-06-23 09:07:05 +08:00
jxxghp
1e1f80b6d9 add remote transfer 2024-06-23 09:04:08 +08:00
jxxghp
e06e00204b fix #2341 2024-06-22 21:31:10 +08:00
jxxghp
b98c0f205d fix scrape 2024-06-22 20:58:24 +08:00
jxxghp
0c266726ea fix scrap 2024-06-22 19:59:24 +08:00
jxxghp
b43e591e4c fix scrap 2024-06-22 08:32:25 +08:00
jxxghp
3d6e1335f8 Update scraper.py 2024-06-22 06:45:17 +08:00
jxxghp
361e8dd65d fix api 2024-06-21 23:25:08 +08:00
jxxghp
de865f3cf1 fix api 2024-06-21 23:05:00 +08:00
jxxghp
37985eba25 fix api 2024-06-21 21:28:48 +08:00
jxxghp
e0a251b339 fix scrape api 2024-06-21 19:19:10 +08:00
jxxghp
f9f4d97a51 Update media.py 2024-06-21 12:23:18 +08:00
jxxghp
6adc0e27d5 fix api 2024-06-21 12:17:30 +08:00
jxxghp
5deb0089bb fix api 2024-06-21 11:49:07 +08:00
jxxghp
bfbeae7fa7 fix api 2024-06-21 11:13:01 +08:00
jxxghp
8a98c65026 fix 2024-06-21 08:27:37 +08:00
jxxghp
0133c6e60c add upload api 2024-06-21 08:08:23 +08:00
jxxghp
ae0e171dd2 Merge pull request #2375 from InfinityPacer/main 2024-06-20 17:54:24 +08:00
InfinityPacer
9f0ed49d43 fix plugin auth_level 2024-06-20 17:44:55 +08:00
jxxghp
8df2955a67 add alipan/115 move api 2024-06-20 17:21:02 +08:00
jxxghp
ef0cd7d5c5 fix meta_nfo 2024-06-20 17:04:47 +08:00
jxxghp
463fd3761a add meta_nfo module function 2024-06-20 16:53:50 +08:00
jxxghp
4af4ad0243 fix bug 2024-06-20 15:52:52 +08:00
jxxghp
24aa64232f fix windows exe front path 2024-06-20 13:21:49 +08:00
jxxghp
9937f6792e feat: Aliyun Drive supports the backup drive 2024-06-20 13:15:59 +08:00
jxxghp
185b72dc8d fix: optimize the file management api 2024-06-20 11:38:57 +08:00
jxxghp
0fb12c77eb fix bug 2024-06-19 18:04:00 +08:00
jxxghp
631df4c9f8 v1.9.7
- File management supports Aliyun Drive and 115 cloud drive, with a new batch recognize-and-rename feature for quickly organizing local or cloud files
- Reduced lag when the resource-search card view returns too many results
- Adapted to the M-Team API domain change
2024-06-19 17:20:33 +08:00
jxxghp
0da08394ae Merge pull request #2365 from InfinityPacer/main 2024-06-19 16:55:27 +08:00
InfinityPacer
6392ee627f fix: log at debug level when a request fails 2024-06-19 16:36:31 +08:00
InfinityPacer
da6ba3fa8b feat: add a shared request method for Plex 2024-06-19 15:53:55 +08:00
InfinityPacer
cb0bb8a38e refactor request host 2024-06-19 15:51:57 +08:00
InfinityPacer
e1cdc51904 Merge branch 'main' of https://github.com/InfinityPacer/MoviePilot 2024-06-19 15:47:16 +08:00
jxxghp
79c57d8e4f Batch-rename progress updates 2024-06-19 15:22:05 +08:00
jxxghp
681f1eaeb5 fix m-team api path 2024-06-19 14:16:20 +08:00
InfinityPacer
de2323d67a refactor RequestUtils 2024-06-19 13:45:02 +08:00
jxxghp
9cf240b8e8 fix UserDeviceOffline tip 2024-06-19 13:42:19 +08:00
jxxghp
b93c97938c fix 2024-06-19 13:14:02 +08:00
jxxghp
41d347bcef fix 115 pan 2024-06-19 13:04:35 +08:00
jxxghp
060e2f225c fix 115 pan 2024-06-19 13:02:04 +08:00
jxxghp
7103b0334a add 115 apis 2024-06-19 07:11:26 +08:00
jxxghp
354d5977e0 fix api path 2024-06-18 19:19:32 +08:00
jxxghp
19a56f7d24 feat: batch rename in file management 2024-06-18 16:45:48 +08:00
jxxghp
323ad099c3 add name-recognition API 2024-06-18 13:56:12 +08:00
jxxghp
484ecf10c3 fix api 2024-06-18 13:05:11 +08:00
jxxghp
2a333add9b fix aliyunpan api 2024-06-18 12:01:53 +08:00
jxxghp
90df09e64d add aliyunpan userinfo api 2024-06-18 07:03:05 +08:00
jxxghp
53397536ce Merge pull request #2355 from InfinityPacer/main 2024-06-17 21:09:00 +08:00
InfinityPacer
f902f43c56 fix #2348 remove hard-link validation 2024-06-17 21:02:14 +08:00
jxxghp
9948db8bce add aliyun apis 2024-06-17 20:16:38 +08:00
jxxghp
1b6a06bd7b add aliyun apis 2024-06-17 19:45:39 +08:00
jxxghp
ce1db7f62b v1.9.6
- Added a CookieCloud sync domain-blacklist setting
- Adjusted the login and subscription page styles and improved overall UI responsiveness
2024-06-16 14:09:47 +08:00
jxxghp
74dbae8514 fix api 2024-06-16 09:53:23 +08:00
jxxghp
7d4ec2ddec fix api 2024-06-16 07:22:01 +08:00
jxxghp
3654b9609f fix 2024-06-16 07:10:32 +08:00
jxxghp
83e583032a add wallpapers api 2024-06-16 07:09:04 +08:00
jxxghp
35a4d77915 fix #2346 2024-06-15 21:12:16 +08:00
jxxghp
cbfb2027a8 Merge pull request #2345 from thsrite/main 2024-06-15 19:37:58 +08:00
thsrite
ce0548632e fix: cookiecloud sync only syncs enabled sites && sync domain blacklist 2024-06-15 19:33:03 +08:00
thsrite
da1f6a0997 fix: cookiecloud sync only syncs enabled sites 2024-06-15 19:21:20 +08:00
jxxghp
a514ec0761 v1.9.5
- Added an `App mode` toggle for small screens, improving the mobile experience together with PWA (on by default; click the avatar in the top-right corner to switch)
- Fixed being unable to add users behind some reverse-proxy setups
- Fixed torrent publish-time and tag recognition for 北洋园 (TJUPT)
- Fixed duplicate subscription downloads in some cases
2024-06-14 17:07:42 +08:00
jxxghp
851dd85fc6 Delete torrent files in rclone move mode 2024-06-14 16:53:15 +08:00
jxxghp
0270af5b19 fix season info being lost when converting a Douban match to a TMDB search 2024-06-14 16:13:21 +08:00
jxxghp
f8f964106a fix pubdate string 2024-06-14 12:27:26 +08:00
jxxghp
aa0f2a571c fix datestr \n 2024-06-14 11:31:04 +08:00
jxxghp
727a14864e fix #2327
fix #2261
2024-06-14 10:32:15 +08:00
jxxghp
c7e909520c fix duplicate webpush notifications 2024-06-14 06:52:34 +08:00
jxxghp
7f40863449 Merge pull request #2330 from falling/main 2024-06-13 21:14:48 +08:00
falling
e994a9fc92 update the is_english_word method 2024-06-13 21:10:20 +08:00
jxxghp
d8fe8b28e8 fix 2024-06-13 11:31:44 +08:00
jxxghp
7f4f085d4a update README.md 2024-06-13 11:27:20 +08:00
jxxghp
2052766a71 Update README.md 2024-06-13 11:12:56 +08:00
jxxghp
887fe834bd Update README.md 2024-06-13 11:11:10 +08:00
jxxghp
0d4f87a631 Update README.md 2024-06-13 11:10:19 +08:00
jxxghp
ed96241053 Merge pull request #2322 from zhu0823/main 2024-06-12 10:20:40 +08:00
zhu0823
788104d151 fix: type checking 2024-06-12 10:07:16 +08:00
jxxghp
f8b3dbaef5 update README.md 2024-06-11 21:57:59 +08:00
jxxghp
b66ca92d72 Update README.md 2024-06-11 13:21:16 +08:00
jxxghp
c2a80dbedd Merge pull request #2310 from InfinityPacer/main 2024-06-10 13:52:09 +08:00
InfinityPacer
95202af139 fix dashboard Plex card loading speed 2024-06-10 12:43:05 +08:00
jxxghp
d77ea8f0a0 - fix detail issues in charts, subscriptions and directory matching 2024-06-10 09:55:20 +08:00
jxxghp
bbba9813a2 Merge pull request #2307 from InfinityPacer/main 2024-06-10 09:50:26 +08:00
jxxghp
220cbc3072 fix #2291 2024-06-10 09:49:31 +08:00
InfinityPacer
fcbdef5e66 fix plugin reload when __init__.py cannot be found, plus minor detail tweaks 2024-06-10 09:45:56 +08:00
jxxghp
e2e1c7642d fix subscription reset & TMDB TV chart 2024-06-10 09:38:12 +08:00
jxxghp
33813ecf1d Merge pull request #2302 from xcehnz/main 2024-06-09 16:03:18 +08:00
xcehnz
ef656fcc67 fix same-directory priority not taking effect 2024-06-09 15:41:48 +08:00
jxxghp
8fe7e015dd Merge pull request #2299 from thsrite/main 2024-06-09 12:54:27 +08:00
thsrite
7132fdbb26 fix plugin command args 2024-06-09 12:41:42 +08:00
jxxghp
0f57b39345 fix webpush switch 2024-06-09 11:15:35 +08:00
jxxghp
d13b5622c7 remove js/css cache 2024-06-09 08:04:53 +08:00
jxxghp
b5eaba26da update __init__.py 2024-06-08 21:05:53 +08:00
jxxghp
60007cf398 Merge pull request #2295 from thsrite/main 2024-06-08 20:47:51 +08:00
thsrite
65cc169391 fix 2024-06-08 19:56:08 +08:00
thsrite
68a9fc4a13 fix: support pre-filling rules when adding a subscription 2024-06-08 19:54:06 +08:00
jxxghp
08870a67ec v1.9.4
- Improved hardlink handling: compatible with 极空间 NAS devices, and cross-disk transfers no longer silently fall back to copy
- Subscriptions gained a reset button; if you manually deleted a subscription's downloaded tasks or library files, resetting the subscription re-downloads them
- Notifications can now carry hyperlinks; maintain the access domain in settings for this to work
- Added WebPush notifications for client-like alerts without third-party software. To use:
   1. Enable the WebPush switch under Settings -> Notifications
   2. Add the MoviePilot web page (Safari, Chrome) to your home screen, then allow the notification permission when prompted at login

📢: If a feature fails or the UI breaks, clear your browser cache (no need to clear cookies).
2024-06-08 14:45:49 +08:00
jxxghp
518206c34a fix wechat link 2024-06-08 11:21:03 +08:00
jxxghp
e05c643a6b add notification link 2024-06-08 10:47:50 +08:00
jxxghp
748de0ff00 fix webpush link 2024-06-08 07:45:13 +08:00
jxxghp
29b94e859f update config.py 2024-06-08 07:09:21 +08:00
jxxghp
ed3bd0ddef Merge pull request #2289 from InfinityPacer/main 2024-06-08 06:21:57 +08:00
InfinityPacer
3cdbdc2f78 fix: strip spaces and angle brackets from mailto 2024-06-08 00:40:14 +08:00
jxxghp
f8fbf9b5eb fix webpush 2024-06-07 21:53:48 +08:00
jxxghp
9e0751367b Merge pull request #2288 from DDS-Derek/main
Revert "feat: refactor docker http proxy"
2024-06-07 21:48:48 +08:00
jxxghp
bc689074e0 fix webpush 2024-06-07 21:47:43 +08:00
DDSRem
7e442650b0 Revert "feat: refactor docker http proxy"
This reverts commit 48a860bfd4.
2024-06-07 21:43:36 +08:00
jxxghp
0a9a391eb3 add webpush log 2024-06-07 21:31:03 +08:00
jxxghp
ea1e600474 Merge pull request #2286 from honue/main 2024-06-07 16:25:33 +08:00
honue
b0a2c1b957 delete plugin data when resetting a plugin 2024-06-07 15:33:17 +08:00
jxxghp
624363476a Merge pull request #2284 from DDS-Derek/main 2024-06-07 13:42:51 +08:00
DDSRem
48a860bfd4 feat: refactor docker http proxy 2024-06-07 11:08:41 +08:00
jxxghp
2d4fb5d52e fix webpush switch 2024-06-07 08:15:30 +08:00
jxxghp
c0c787f7ed fix 2024-06-07 08:01:23 +08:00
jxxghp
03d6834471 fix #2264 2024-06-06 14:21:20 +08:00
jxxghp
947d0d6d4b add subscription reset API 2024-06-06 07:56:16 +08:00
jxxghp
7611c88aa6 Merge pull request #2272 from InfinityPacer/main 2024-06-05 22:28:13 +08:00
InfinityPacer
7be262b182 fix #2265 improve hardlink detection logic 2024-06-05 22:24:17 +08:00
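Commits like fix #2265 revolve around deciding whether two paths are hardlinks of the same file. The standard POSIX heuristic, shown here as an illustrative sketch rather than MoviePilot's actual code, compares device and inode numbers:

```python
import os


def is_hardlink_of(path_a: str, path_b: str) -> bool:
    """Two paths are hardlinks of the same file iff they share
    a device number and an inode number (POSIX semantics)."""
    sa, sb = os.stat(path_a), os.stat(path_b)
    return (sa.st_dev, sa.st_ino) == (sb.st_dev, sb.st_ino)
```

Because inode numbers are only unique per device, comparing `st_ino` alone is not enough; this is also why hardlinks cannot cross disks and must fall back to copy there.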
jxxghp
a7a06a9a75 fix NotificationSwitch 2024-06-05 22:00:39 +08:00
jxxghp
6aa5a836b9 fix webpush api 2024-06-05 18:40:00 +08:00
jxxghp
efd0fc39c6 fix github proxy && add webpush api 2024-06-05 18:08:34 +08:00
jxxghp
7e1951b8e4 Merge pull request #2265 from InfinityPacer/main 2024-06-04 22:00:00 +08:00
InfinityPacer
27c6392b66 fix #667 improve the legacy-compatible 极空间 hardlink logic 2024-06-04 21:54:37 +08:00
jxxghp
0fc7d883c0 v1.9.3
- Fully upgraded search: multiple data types can be searched, and site resources support fuzzy search (unmatched results are no longer filtered out)
- Directory settings support browsing to pick a path
- A GitHub proxy address can be configured to speed up version and plugin update downloads
- Improved plugin de-duplication in listings
- Improved the same-path-priority logic in directory matching
- Hint text in settings and other forms is now always shown
- Fixed some plugins disappearing after installation

Note: due to frontend changes, clear your browser cache after upgrading (no need to clear cookies)
2024-06-03 17:38:31 +08:00
jxxghp
95b480af6d fix #2235 2024-06-03 12:40:35 +08:00
jxxghp
abe7795105 Merge pull request #2259 from thsrite/main 2024-06-03 09:59:52 +08:00
thsrite
74c71390c9 fix unhashable type: 'dict' 2024-06-03 09:54:32 +08:00
jxxghp
1ddd844c17 fix 2024-06-03 08:20:16 +08:00
jxxghp
de3ff2db2e remove api async 2024-06-03 08:17:53 +08:00
jxxghp
655e73f829 fix dir match 2024-06-03 08:03:40 +08:00
jxxghp
2232e51509 add GITHUB_PROXY 2024-06-03 07:09:30 +08:00
jxxghp
44f1a321d2 fix #2249 2024-06-02 21:15:58 +08:00
jxxghp
c05223846f fix api 2024-06-02 21:09:15 +08:00
jxxghp
45945bd025 feat: add a download-task-deleted event; when source files are deleted from history, the main program also deletes the torrent and emits this event (e.g. for cross-seeding handling) 2024-06-01 07:47:00 +08:00
jxxghp
acff7e0610 fix log 2024-05-31 20:03:48 +08:00
jxxghp
e97ae488fd fix de-duplication of online plugins 2024-05-31 16:03:27 +08:00
jxxghp
a7689e1e10 fix 2024-05-31 15:08:22 +08:00
jxxghp
9a4d537543 Merge pull request #2237 from hotlcc/develop-20240531 2024-05-31 15:05:22 +08:00
Allen
1b09bb8d22 Remove the logic that proactively creates download directories, decoupling the downloader and avoiding directory-permission errors in container environments when the downloader's directories and MP's mappings differ 2024-05-31 15:00:47 +08:00
jxxghp
13832a51e0 add listdir api 2024-05-31 13:55:36 +08:00
jxxghp
a09b2fa88a fix log 2024-05-31 09:10:26 +08:00
jxxghp
6361f8654c fix README 2024-05-30 14:13:15 +08:00
jxxghp
db4bda3b73 Merge remote-tracking branch 'origin/main' 2024-05-30 12:39:02 +08:00
jxxghp
3f557ee43c fix README 2024-05-30 12:38:55 +08:00
jxxghp
9e7e0a8730 Merge pull request #2223 from thsrite/main 2024-05-30 10:42:16 +08:00
thsrite
07de1eaa0d fix plugin version comparison 2024-05-30 10:02:26 +08:00
jxxghp
c872043bf4 Merge pull request #2226 from InfinityPacer/main 2024-05-30 06:33:48 +08:00
InfinityPacer
7ed194a62c fix: load submodules first 2024-05-30 00:58:20 +08:00
jxxghp
882da68903 Merge pull request #2224 from InfinityPacer/main 2024-05-29 23:06:31 +08:00
InfinityPacer
2798700f71 fix: support reloading first-level submodules on plugin reload 2024-05-29 23:01:58 +08:00
thsrite
34e70adabb fix: for plugins with the same ID across repositories, keep the highest version 2024-05-29 20:40:04 +08:00
jxxghp
fe999aa346 fix dir match 2024-05-29 17:30:00 +08:00
jxxghp
f7ca4abb01 fix dir match 2024-05-29 17:28:49 +08:00
jxxghp
8a4202cee5 fix dir match 2024-05-29 17:16:35 +08:00
jxxghp
55a85b87dd fix README 2024-05-29 16:47:03 +08:00
jxxghp
3470f96e39 feat: emit an event on system errors 2024-05-29 16:29:47 +08:00
jxxghp
74980911fe feat: emit an event on system errors 2024-05-29 16:28:17 +08:00
jxxghp
4c5366f8b4 fix subscription type-error log 2024-05-29 13:41:19 +08:00
jxxghp
8eb89eec86 fix message 2024-05-28 15:39:17 +08:00
jxxghp
cfd7208cda Merge remote-tracking branch 'origin/main' 2024-05-28 12:25:39 +08:00
jxxghp
0c6684a572 fix #2208 tolerate bad data in download history 2024-05-28 12:25:33 +08:00
jxxghp
f0692b2fb8 update __init__.py 2024-05-28 11:50:11 +08:00
jxxghp
c29ee4fb07 Merge pull request #2203 from BrettDean/main 2024-05-28 11:49:07 +08:00
jxxghp
dd40ef54c0 fix #1838 2024-05-28 08:16:51 +08:00
Dean
84d5e2a6b3 fix: Plex library refresh having no effect 2024-05-28 02:11:42 +08:00
jxxghp
7defcff0e5 v1.9.2
- Fixed same-path priority not taking effect during directory matching
- Fixed file management's default path still using an old variable
- Fixed empty media library directories being deleted
- Fixed the TR downloader overwriting a task's existing tags after organizing
- Fixed the Windows build, and bundled several major plugin repositories by default
- Improved the episode-filtering experience on the resource search page
- In hardlink mode, if the hardlink fails (and a copy is made instead) a notification is sent, and the history entry shows the transfer method as `copy`
- Added a plugin message type, for plugins that need to send messages
2024-05-27 12:46:58 +08:00
jxxghp
d9e767f87d fix display of the new message type 2024-05-27 12:31:48 +08:00
jxxghp
2b82173fba fix windows build 2024-05-27 12:27:15 +08:00
jxxghp
1425b15333 fix #2192 2024-05-27 11:16:41 +08:00
jxxghp
8d82d0f4fd Merge remote-tracking branch 'origin/main' 2024-05-27 10:26:21 +08:00
jxxghp
d352f09d4e fix #2193 2024-05-27 10:26:09 +08:00
jxxghp
aebd121939 Merge pull request #2195 from hotlcc/develop-20240527
Develop 20240527
2024-05-27 10:13:39 +08:00
jxxghp
81eed0d06d fix windows build 2024-05-27 10:11:41 +08:00
Allen
bacb7aaeb4 Module manager: add a method to fetch a module precisely by module ID, so plugins can reliably obtain system-managed modules such as qb/tr instead of each maintaining their own 2024-05-27 10:09:18 +08:00
Allen
b238c6ad11 add plugin message type 2024-05-27 10:06:59 +08:00
jxxghp
5c8b843030 fix directory health check 2024-05-27 08:46:44 +08:00
jxxghp
58acc62e16 feat: send a system notification when a hardlink falls back to copy 2024-05-27 08:11:44 +08:00
jxxghp
ca5a240fc4 fix same-path priority 2024-05-27 07:52:36 +08:00
jxxghp
dd5887d18d fix same-path priority 2024-05-26 13:28:32 +08:00
jxxghp
97669405d0 fix #2185 2024-05-26 13:15:34 +08:00
jxxghp
bf2ea271b6 fix hardlink detection 2024-05-26 12:48:52 +08:00
jxxghp
afd91bf760 v1.9.2-beta
- Fixed episodes occasionally exceeding the download range in some cases
- Fixed manual-transfer destination and subscription save directory lists being shown with duplicates
- When picking a destination directory from the manual-transfer dropdown, the metadata scraping option now follows that directory's setting
- When a destination directory is specified, the matching directory setting (if any) is looked up and its category and scraping configuration are applied
2024-05-26 10:03:52 +08:00
jxxghp
7e982eaf4d fix subscriptions re-downloading a whole season 2024-05-26 09:34:50 +08:00
jxxghp
5f13824aa6 fix #2184 when manually choosing a download/library directory, look up the matching configuration for categorization (no auto-categorization if none is found) 2024-05-26 08:39:59 +08:00
jxxghp
9ca8e3f4a8 update category.py 2024-05-25 09:28:48 +08:00
jxxghp
9b749035c9 fix windows build 2024-05-25 07:21:26 +08:00
jxxghp
b8e09a6b06 fix 2024-05-25 07:14:24 +08:00
jxxghp
4bb95d519d v1.9.1-1
- The current directory settings structure can already target directories per second-level category, so the top-level anime category no longer serves a purpose and has been removed, which also fixes anime mis-categorization. If anime needs its own top-level directory, set a dedicated directory for the anime second-level category under Movies/TV in directory settings and raise its priority.
- The ToSky indexer site is now supported.
- The dashboard supports multiple widgets from a single plugin.
2024-05-25 06:37:28 +08:00
jxxghp
04280021b4 fix anime categorization 2024-05-24 17:21:03 +08:00
jxxghp
355dad9205 Merge pull request #2167 from hotlcc/develop-20240524-插件支持多仪表板组件 2024-05-24 15:49:00 +08:00
Allen
a6714d3712 A single plugin can expose multiple dashboard widgets, with backwards compatibility 2024-05-24 14:54:46 +08:00
jxxghp
fe53819a81 fix directory helper 2024-05-24 13:26:20 +08:00
jxxghp
6965415c52 v1.9.1
- Rolled back multi-threaded priority filtering to temporarily fix occasional filter errors (if searches are slow, trim a level off your filter rules)
- Discover/recommendations now use a TTL cache, fixing charts not refreshing for long periods
- Fixed duplicate category folders being created when a directory specifies a category
- Subscription save paths can be picked from configured download directories via a dropdown
- Reduced INFO log output; set `DEBUG` to `true` for verbose logging (heavy logging slows responsiveness)
- Added a same-disk-priority option in directory settings; when enabled, same-disk/same-root directories are preferred when matching library directories
- Fixed a subscription matching issue

Note: because the library directory settings were restructured, the `directory monitoring` plugin no longer auto-creates category subdirectories under a specified destination. Rather than setting a destination in the plugin, configure your directory structure under Settings -> Directories and let the system match automatically (enable same-disk priority as needed). If directories can be chosen by category, prefer the built-in downloader monitoring; directory monitoring is just a plugin, meant only for organizing files that were not downloaded through MoviePilot.
2024-05-24 12:47:34 +08:00
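The same-disk-priority option can be pictured as a sort that prefers candidate library directories on the same device as the source file, so a hardlink (rather than a copy) remains possible. A minimal sketch under assumed semantics, not the project's actual implementation:

```python
import os
from typing import List


def same_disk_first(src: str, candidates: List[str]) -> List[str]:
    """Stable-sort candidate library directories so those on the same
    device (same st_dev) as the source path come first."""
    src_dev = os.stat(src).st_dev
    # False sorts before True, so same-device candidates lead the list
    return sorted(candidates, key=lambda d: os.stat(d).st_dev != src_dev)
```

Using a stable sort preserves the user's configured priority order within each group, which matches the release note's "prefer, not require" wording.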
jxxghp
9be671fa2c fix 82226f1956 2024-05-24 12:36:51 +08:00
jxxghp
27b4f206a1 fix subscription refresh covering too many sites 2024-05-24 12:31:11 +08:00
jxxghp
a2b0c9bd3a fix #2164 2024-05-24 11:37:26 +08:00
jxxghp
ebc46d7d3b fix #2165 2024-05-24 11:31:37 +08:00
jxxghp
eb4e4b5141 feat: add same-disk priority setting 2024-05-24 11:19:21 +08:00
jxxghp
be11ef72a9 reduce INFO log volume && roll back multi-threaded filtering 2024-05-24 10:48:56 +08:00
jxxghp
a278c80951 fix #2151 switch TMDB discover and trending to a TTL cache 2024-05-24 09:29:51 +08:00
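Switching discover/trending to a TTL cache means cached responses carry an expiry time instead of living forever, so charts refresh on their own. A tiny stdlib sketch of the idea (class and method names are illustrative, not MoviePilot's API):

```python
import time


class SimpleTTLCache:
    """Tiny time-bounded cache: entries expire after `ttl` seconds."""

    def __init__(self, ttl: float):
        self.ttl = ttl
        self._data = {}

    def get(self, key):
        hit = self._data.get(key)
        if hit is not None and time.monotonic() - hit[1] < self.ttl:
            return hit[0]
        self._data.pop(key, None)  # drop stale entries lazily
        return None

    def put(self, key, value):
        self._data[key] = (value, time.monotonic())
```

A plain dict cache would keep serving the first chart fetched forever, which is exactly the "charts not refreshing for long periods" symptom this commit addresses.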
jxxghp
6ee6de48ff attempt to fix concurrent filter errors 2024-05-24 08:56:32 +08:00
jxxghp
671bdad77c v1.9.1-beta
- Improved subscription matching logic
- Fully revamped directory settings with more flexible multi-directory support; existing directory configurations are upgraded to the new format automatically
- Manual transfers can pick configured directories from a dropdown
- Removed the global metadata scraping switch; scraping is now set per library directory, and plugins such as directory monitoring gained a scraping switch that must be enabled manually
- Site management supports drag-and-drop ordering
- Fixed invisible dashboard widgets still refreshing

Note: frontend changes require clearing your browser cache after upgrading (cache files only; no need to clear cookies)
2024-05-23 19:43:08 +08:00
jxxghp
a9ff8ec96d update README.md 2024-05-23 14:28:40 +08:00
jxxghp
d1678355f1 fix #2099 2024-05-23 12:45:27 +08:00
jxxghp
ea399daef9 update __init__.py 2024-05-23 11:46:30 +08:00
jxxghp
e1122af97c update __init__.py 2024-05-23 10:05:56 +08:00
jxxghp
21861111e6 update download.py 2024-05-23 10:05:07 +08:00
jxxghp
bd1e83ee8a fix 2024-05-23 08:53:20 +08:00
jxxghp
43da33bc50 fix 2024-05-23 08:41:20 +08:00
jxxghp
a09a207407 fix manual_transfer api 2024-05-23 08:09:36 +08:00
jxxghp
0aa3aa8521 Merge pull request #2146 from zhu0823/main 2024-05-23 07:10:48 +08:00
jxxghp
d9c6375252 feat: major directory structure overhaul; update to dev with caution 2024-05-22 20:02:14 +08:00
jxxghp
f1f187fc77 add media category api 2024-05-22 18:02:28 +08:00
zhu0823
99d22554a1 fix: Plex recently-added items of type show 2024-05-22 13:34:03 +08:00
jxxghp
4835f6c6c9 update subscribe.py 2024-05-21 23:17:59 +08:00
jxxghp
5be2bf0633 feat: make the server address a variable 2024-05-21 17:31:12 +08:00
jxxghp
7c7bc0b504 fix log 2024-05-21 17:14:16 +08:00
jxxghp
e0939fee75 fix https://github.com/jxxghp/MoviePilot/issues/2134 2024-05-21 16:36:09 +08:00
jxxghp
82226f1956 fix https://github.com/jxxghp/MoviePilot/issues/2125 adjust subscription matching logic 2024-05-21 11:57:12 +08:00
jxxghp
cfb43b4b04 fix torrent match 2024-05-21 09:54:06 +08:00
jxxghp
ebe2795eae Merge pull request #2128 from jeblove/main 2024-05-21 06:48:54 +08:00
jeblove
f7f747278d fix: Jellyfin webhook percentage sometimes being None and raising errors 2024-05-20 23:47:37 +08:00
jxxghp
58f17e89b6 v1.9.0
- Support user-defined theme styles
- The resource search list view supports sorting
- Added indexer support for `YemaPT`
- Fixed subscription queries spinning indefinitely in some cases
- Fixed the default quality-upgrade (洗版) setting not taking effect
- Fixed the system failing to shut down cleanly while connections were active
- Rolled back the final change of the previous release (it broke the app-like effect when adding a home-screen icon on mobile)
2024-05-20 18:24:39 +08:00
jxxghp
433ca2ec28 fix YemaPT 2024-05-20 17:41:46 +08:00
jxxghp
ffac57ad4d support YemaPT 2024-05-20 16:55:36 +08:00
jxxghp
0d2a4c50d6 fix error message 2024-05-20 15:39:43 +08:00
jxxghp
02c2edc30e fix: the service could not stop normally while the web page was active 2024-05-20 11:50:49 +08:00
jxxghp
65975235d4 fix scheduler 2024-05-20 10:16:20 +08:00
jxxghp
07a6abde0e fix json exception 2024-05-20 09:13:51 +08:00
jxxghp
fa47d9adeb fix 2024-05-19 14:17:34 +08:00
jxxghp
18d08c3672 fix 2024-05-19 12:35:21 +08:00
jxxghp
cf20049b7f Merge pull request #2117 from InfinityPacer/main
fix: the log file also records DEBUG entries when DEBUG is enabled
2024-05-19 12:28:08 +08:00
InfinityPacer
e3ce3302da fix: the log file also records DEBUG entries when DEBUG is enabled 2024-05-19 12:18:43 +08:00
jxxghp
d20951e7a0 Merge pull request #2116 from InfinityPacer/main 2024-05-19 12:08:03 +08:00
InfinityPacer
8a565bb79f fix #2113 2024-05-19 11:53:18 +08:00
jxxghp
cfdc8fb2c3 Merge pull request #2115 from InfinityPacer/main 2024-05-19 11:02:19 +08:00
InfinityPacer
111f830664 fix #2104 2024-05-19 10:58:09 +08:00
jxxghp
2821d6a9dc fix #2076 2024-05-19 09:54:52 +08:00
jxxghp
495d98c2b2 fix #2086 subscriptions failing to open when no sites exist && the default quality-upgrade setting not taking effect 2024-05-19 09:52:00 +08:00
jxxghp
e1e2779e48 fix #2101 add index.php to 1ptba check-in; auth and torrent browsing use specific URIs, so the site needs to allow them if issues arise. 2024-05-19 09:42:22 +08:00
jxxghp
363318f4f0 add logging to smart download 2024-05-19 09:19:44 +08:00
jxxghp
521b960364 catch command exception 2024-05-19 08:35:18 +08:00
jxxghp
d2bcb197eb v1.8.9
- Sites now have an independent timeout setting (default 15 seconds); tune it to each site's connectivity to shorten search time
- Multi-threading speeds up priority filtering of search results
- Greatly improved page load speed for returning logins (re-pull the image for this to take effect)
- Dashboard drag handles are hidden by default, so they no longer spoil borderless widgets
- Forward/back buttons are supported when the web page is installed as a desktop app
2024-05-17 14:51:02 +08:00
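Resolving a per-site timeout against the 15-second default described above is a small fallback lookup. A hedged sketch (the `"timeout"` key and function name are assumptions for illustration, not MoviePilot's actual schema):

```python
DEFAULT_SITE_TIMEOUT = 15  # seconds, the default cited in the v1.8.9 notes


def site_timeout(site_config: dict) -> int:
    """Return the site's own timeout if set, else the global default.

    The "timeout" key is illustrative; the project's actual field
    name may differ.
    """
    value = site_config.get("timeout")
    return int(value) if value else DEFAULT_SITE_TIMEOUT
```

Treating 0/None/missing identically keeps a misconfigured site from hanging searches with an effectively unbounded timeout.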
jxxghp
b0f9ca52e3 fix nginx cache 2024-05-17 14:37:25 +08:00
jxxghp
01a3efd402 fix nginx cache 2024-05-17 13:58:29 +08:00
jxxghp
a50427948a feat: multi-threaded filtering 2024-05-17 12:20:39 +08:00
jxxghp
5614f10962 fix get_dashboard 2024-05-17 10:50:03 +08:00
jxxghp
5ff80dbe89 fix get_dashboard 2024-05-17 10:49:47 +08:00
jxxghp
278835c5d4 fix 2024-05-17 07:46:27 +08:00
jxxghp
92cdd67f3a Merge pull request #2093 from thsrite/main 2024-05-16 20:53:57 +08:00
thsrite
c56b58cc56 fix 2024-05-16 20:50:42 +08:00
thsrite
8bd4c21511 fix 2024-05-16 20:49:28 +08:00
thsrite
b94f201667 fix: in split-pack scenarios, the download notification title shows the episodes actually downloaded 2024-05-16 20:44:57 +08:00
jxxghp
125e9eb30a Merge pull request #2091 from happyTonakai/main
Jellyfin webhook: add playback percentage field
2024-05-16 20:36:24 +08:00
happyTonakai
ea09d8c8d4 Jellyfin webhook: add playback percentage field 2024-05-16 20:20:24 +08:00
jxxghp
de0237f348 pass the UA through for dashboard requests 2024-05-16 17:32:35 +08:00
jxxghp
62143bf7b6 fix db 2024-05-16 16:18:22 +08:00
jxxghp
3088bbb2f8 per-site timeout setting 2024-05-16 14:42:10 +08:00
jxxghp
43647e59a4 fix module name 2024-05-16 14:20:18 +08:00
jxxghp
a740330e66 Update README.md 2024-05-15 19:25:53 +08:00
jxxghp
10d4766353 fix bug 2024-05-15 07:42:03 +08:00
jxxghp
0a845fe8b6 Merge remote-tracking branch 'origin/main' 2024-05-14 20:37:18 +08:00
jxxghp
90dc52bb70 v1.8.8
- Added system-level notifications
- Improved the theme-switching experience
- Fixed the dashboard storage-space card being immovable
- Fixed auto-refreshing dashboard widgets being generated repeatedly
- Fixed plugin chart text being too dark in the dark theme
- Dashboard widgets support a borderless mode
- Several major third-party plugin repositories are bundled by default
- Priority rules support drag-and-drop ordering
2024-05-14 20:37:03 +08:00
jxxghp
3816e2fba8 Merge pull request #2073 from developer-wlj/wlj0409 2024-05-14 20:35:22 +08:00
mayun110
20d92ca577 fix: missing Jellyfin webhook fields for favorite quality-upgrade 2024-05-14 20:11:49 +08:00
jxxghp
db92761964 fix ide warning 2024-05-14 18:04:10 +08:00
jxxghp
3af5870733 Merge pull request #2071 from hotlcc/develop-20240514-修复一些发现的问题 2024-05-14 18:00:08 +08:00
Allen
53d01267b8 fix: a failing prerequisite module no longer blocks all subsequent modules from loading 2024-05-14 17:25:22 +08:00
jxxghp
5ff21641f9 bundle some third-party plugin repositories 2024-05-14 15:27:50 +08:00
jxxghp
643f2e3e66 Merge remote-tracking branch 'origin/main' 2024-05-14 10:08:15 +08:00
jxxghp
ae839235eb feat: push a frontend message on service errors 2024-05-14 10:08:07 +08:00
jxxghp
5af94144ce Merge pull request #2067 from thsrite/main 2024-05-14 09:48:05 +08:00
thsrite
4281692321 fix: add a plugin hot-reload env-variable switch (keeping the DEV switch) 2024-05-14 09:24:02 +08:00
jxxghp
bd6d6b6882 fix 2024-05-14 08:30:29 +08:00
jxxghp
fabb02a8a0 fix README 2024-05-13 20:25:52 +08:00
jxxghp
223e655b6f feat: plugin APIs take effect immediately 2024-05-13 20:15:47 +08:00
jxxghp
f0cb5b3e85 Merge pull request #2059 from EkkoG/fix_min_seeders_time 2024-05-13 11:25:10 +08:00
EkkoG
7579aae823 fix: subscriptions did not read min_seeders_time, so the minimum-seeders grace period never applied 2024-05-13 11:06:49 +08:00
jxxghp
36a8b6d780 update download.py 2024-05-13 07:38:14 +08:00
jxxghp
0471167b74 fix dev 2024-05-12 20:12:59 +08:00
jxxghp
28b996e54b Merge pull request #2056 from DDS-Derek/bump 2024-05-12 11:17:13 +08:00
DDSRem
d0989f72a9 bump: debian bullseye to bookworm 2024-05-12 09:55:35 +08:00
jxxghp
663e61e3a1 fix message api 2024-05-11 16:24:25 +08:00
jxxghp
71b0090947 Merge remote-tracking branch 'origin/main' 2024-05-11 12:53:40 +08:00
jxxghp
8ff0f81f47 fix message api 2024-05-11 12:53:32 +08:00
jxxghp
4186613a86 Merge pull request #2051 from InfinityPacer/main 2024-05-11 06:25:50 +08:00
InfinityPacer
4e1be23317 fix: add double debouncing to hot reload 2024-05-11 01:23:33 +08:00
jxxghp
0e9e626ab6 fix 2024-05-10 20:09:45 +08:00
jxxghp
3d5761157a Merge pull request #2047 from InfinityPacer/main
fix hot-reload paths across platforms and plugin instantiation
2024-05-10 19:57:48 +08:00
InfinityPacer
c407800b30 fix hot-reload paths across platforms and plugin instantiation 2024-05-10 19:27:44 +08:00
jxxghp
c888a37aba fix README 2024-05-10 18:29:20 +08:00
jxxghp
d43998efee fix duplicate-load guard for plugin hot reload 2024-05-10 18:27:31 +08:00
jxxghp
8813e84053 feat: auto-reload plugins on file changes in DEV mode 2024-05-10 13:33:00 +08:00
jxxghp
cf5a746f53 fix #2037 2024-05-10 08:22:05 +08:00
jxxghp
f9f58fc559 v1.8.7
- New authenticated site: DiscFan
- Plugins can display widgets on the dashboard; the site statistics and site brushing plugins already support this, others await adaptation by their developers. Plugin development docs: https://github.com/jxxghp/MoviePilot-Plugins
- Dashboard widgets can be reordered by drag-and-drop
- Adjusted how subscriber counts are displayed for popular subscriptions
2024-05-09 20:06:19 +08:00
jxxghp
f59b5b6d27 fix plugin dashboard 2024-05-09 19:12:43 +08:00
jxxghp
30b3ad4a99 fix plugin dashboard 2024-05-09 19:12:12 +08:00
jxxghp
dfb9ce7520 fix tmdb match api 2024-05-09 18:49:58 +08:00
jxxghp
6c365f552e Merge pull request #2038 from Devinaille/fix_subtitle_match 2024-05-09 11:40:03 +08:00
tianyf
a81ee7d89a fix titles failing to match when the subtitle contains 【】 brackets 2024-05-09 11:18:00 +08:00
jxxghp
5c9039e6d0 feat: plugin dashboard API 2024-05-08 20:58:28 +08:00
jxxghp
cce2e13e21 fix README.md 2024-05-08 16:46:35 +08:00
jxxghp
0da87abc71 Merge remote-tracking branch 'origin/main' 2024-05-08 15:54:56 +08:00
jxxghp
6a2eecc744 fix #2016 2024-05-08 15:54:50 +08:00
jxxghp
c049e13c1c Merge pull request #2032 from z3shan33/main 2024-05-08 13:04:13 +08:00
z3shan33
7ec49ce076 fix #2031 2024-05-08 11:09:36 +08:00
jxxghp
5be2fc35b5 Merge pull request #2026 from zhu0823/main 2024-05-07 18:31:12 +08:00
zhu0823
0b84312559 fix: issue when parentThumb is None for some episodes 2024-05-07 18:28:53 +08:00
jxxghp
8bb43b52bc - improve popular-subscription statistics; popular subscriptions now show subscriber counts 2024-05-07 16:19:05 +08:00
jxxghp
bd348f118c fix subscription statistics cleanup 2024-05-07 16:01:52 +08:00
jxxghp
4a3a3483d0 fix api 2024-05-07 12:31:24 +08:00
jxxghp
fd6314f19f fix 2024-05-06 22:48:24 +08:00
jxxghp
17a9f3a626 update version.py 2024-05-06 19:08:49 +08:00
jxxghp
75c898e6eb Merge pull request #2018 from zhu0823/main 2024-05-06 18:37:00 +08:00
zhu0823
089d4785aa fix: cover URL for recently added Plex episodes 2024-05-06 18:35:10 +08:00
jxxghp
4d48295f72 fix bug 2024-05-06 16:46:03 +08:00
jxxghp
ed119b7beb fix data-sharing switch 2024-05-06 12:37:51 +08:00
jxxghp
90d5a8b0c9 add data-sharing switch 2024-05-06 11:54:32 +08:00
jxxghp
dd5c0de7b1 feat: subscription statistics 2024-05-06 11:19:41 +08:00
jxxghp
73bdca282c fix api desc 2024-05-05 13:51:49 +08:00
jxxghp
360a54581f add api 2024-05-05 13:43:14 +08:00
jxxghp
1fc7587cbb fix Douban & Bangumi search being too broad 2024-05-05 12:14:58 +08:00
jxxghp
dcd46f1627 fix #2001 2024-05-05 11:54:42 +08:00
jxxghp
d8644a20c0 fix #2011 2024-05-05 11:22:23 +08:00
jxxghp
23b47f98c1 Merge pull request #2009 from DDS-Derek/main 2024-05-05 09:55:00 +08:00
DDSRem
347c91fa0b bump: ca-certificates version to bookworm 2024-05-04 19:14:30 +08:00
jxxghp
ac961b37b4 Merge pull request #2004 from thsrite/main 2024-05-03 13:52:45 +08:00
thsrite
068c49a79a fix 最小做种数生效时间 2024-05-03 13:09:08 +08:00
jxxghp
e7174b402c Merge pull request #1988 from zhu0823/main 2024-04-30 19:20:33 +08:00
zhu0823
d21267090a sync upstream API updates 2024-04-30 19:14:29 +08:00
zhu0823
51dc2c33a0 feat: blacklist filtering for Plex recently added 2024-04-30 19:14:08 +08:00
jxxghp
8aef488ab6 Merge pull request #1986 from Sxnan/tmdbid-type 2024-04-30 16:03:44 +08:00
sxnan
0cbf45f9b9 Cast tmdbid to int type after getting metainfo from title 2024-04-30 15:55:10 +08:00
jxxghp
c0ae32d654 - fix wrong media type when searching the Douban data source 2024-04-30 13:33:28 +08:00
jxxghp
ff1b0e02d6 fix special-episode recognition 2024-04-30 11:49:03 +08:00
jxxghp
76a8b02fe5 fix wrong entry type in Douban search 2024-04-30 11:17:03 +08:00
jxxghp
43f594393c feat: add an ordering field to plugins 2024-04-30 08:34:37 +08:00
jxxghp
008e11d63f fix Douban person avatar quality 2024-04-30 07:15:21 +08:00
jxxghp
9dd610f245 update site.py 2024-04-29 21:46:55 +08:00
jxxghp
c5d087aad6 Merge pull request #1979 from InfinityPacer/main 2024-04-29 20:43:14 +08:00
jxxghp
576c5741f9 v1.8.5
- Media info search now aggregates multiple data sources (TheMovieDb, Douban, Bangumi); adjust the source set and order under `Settings` - `Search`
- Plugins support labels, improving the plugin experience
- Adapted to M-Team's (馒头) latest architecture change; the site token and request headers must now be maintained manually in site management
2024-04-29 20:39:20 +08:00
InfinityPacer
51387c31c4 fix plugin_order 2024-04-29 20:39:01 +08:00
jxxghp
c2a40876e2 adapt to m-team's new auth mechanism 2024-04-29 20:19:46 +08:00
jxxghp
c06bdf0491 fix exception 2024-04-28 18:10:04 +08:00
jxxghp
f726130c31 feat: plugins read Label 2024-04-28 13:35:47 +08:00
jxxghp
4033ffeb15 fix ordering of aggregated results 2024-04-28 12:03:48 +08:00
jxxghp
f81af8e9fb Merge remote-tracking branch 'origin/main' 2024-04-28 10:41:05 +08:00
jxxghp
e3f9260299 fix README.md 2024-04-28 10:40:59 +08:00
jxxghp
c80ccaf74b Merge pull request #1976 from thsrite/main 2024-04-28 10:26:05 +08:00
thsrite
0e60c976be fix: the default filter rule supports a publish-time grace period for the minimum-seeders rule, so freshly published torrents are not filtered out 2024-04-28 10:21:42 +08:00
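The rule described here keeps a low-seeder torrent only while it is freshly published, so a brand-new release is not dropped before seeders can appear. A sketch with illustrative parameter names and thresholds (not MoviePilot's defaults):

```python
import time


def passes_min_seeders(seeders: int, pubdate_ts: float,
                       min_seeders: int = 5,
                       grace_seconds: float = 24 * 3600) -> bool:
    """Keep a torrent if it has enough seeders, or if it was published
    within the grace window (so brand-new torrents are not rejected).
    Thresholds here are illustrative, not the project's defaults."""
    if seeders >= min_seeders:
        return True
    return (time.time() - pubdate_ts) < grace_seconds
```

This is the same logic the min_seeders_time fixes elsewhere in this log wire into subscription filtering.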
jxxghp
805c7d2701 fix torrent filter log 2024-04-28 09:19:09 +08:00
jxxghp
4499f001dd fix bug 2024-04-28 09:03:16 +08:00
jxxghp
71c6a3718b feat: media search aggregation switch 2024-04-28 08:55:07 +08:00
jxxghp
6404f9d45c update tmdb.py 2024-04-27 22:39:50 +08:00
jxxghp
ce357540eb update douban.py 2024-04-27 22:38:47 +08:00
jxxghp
e56cfd6ad4 fix douban apis 2024-04-27 22:29:17 +08:00
jxxghp
25e5f7a9f6 fix tmdb apis 2024-04-27 21:56:27 +08:00
jxxghp
6d69ac42e5 fix bangumi apis 2024-04-27 21:36:42 +08:00
jxxghp
6a71bed821 fix douban api 2024-04-27 21:17:32 +08:00
jxxghp
1718758d1c fix Douban image quality 2024-04-27 17:16:19 +08:00
jxxghp
7a37078e90 feat: media info aggregation 2024-04-27 15:20:46 +08:00
jxxghp
26b5ad6a44 Merge pull request #1969 from lightolly/dev/20240428_1
feat:add person api
2024-04-27 13:00:46 +08:00
olly
fa884c9608 feat:add person api 2024-04-27 11:52:23 +08:00
jxxghp
6927b5fbd3 Merge pull request #1968 from lightolly/dev/20240428 2024-04-27 10:24:35 +08:00
olly
59fca63d4a feat:add api 2024-04-27 10:12:40 +08:00
jxxghp
7489d6a912 v1.8.4
- The search box can search for actors; fixed actor credits only showing movies
- Plugins can bind events and call API endpoints from their data pages
- If user authentication fails at startup, the backend now retries intermittently for a while
- `Settings` - `About` now displays the frontend version
- History supports sorting
- Improved the dialog experience on small screens
- Jellyfin page links now support `latest` branch versions
- Fixed subscription history occasionally failing to be recorded
2024-04-27 09:09:09 +08:00
jxxghp
b437fd6021 fix bug 2024-04-27 00:30:17 +08:00
jxxghp
c303ab0765 fix api 2024-04-26 20:29:04 +08:00
jxxghp
9daff87f2f fix api 2024-04-26 19:41:17 +08:00
jxxghp
f20b1bcfe9 feat: person search API 2024-04-26 17:47:45 +08:00
jxxghp
2f71e401be fix 2024-04-26 17:11:24 +08:00
jxxghp
0840e0bcbc Merge pull request #1967 from thsrite/main
fix: check the frontend version for the current version and on restart
2024-04-26 17:05:12 +08:00
thsrite
933af7485c fix function name 2024-04-26 17:00:40 +08:00
thsrite
baddaabd73 fix 2024-04-26 16:59:19 +08:00
thsrite
8028866cee fix 2024-04-26 16:58:30 +08:00
thsrite
242894cec2 fix: check the frontend version for the current version and on restart 2024-04-26 16:54:25 +08:00
jxxghp
967ad3a507 Merge pull request #1966 from thsrite/main 2024-04-26 14:44:16 +08:00
thsrite
2dbe049a91 fix querying transfer history after a given time 2024-04-26 14:41:52 +08:00
jxxghp
c5afc65cbd fix #1955 retry intermittently when user authentication fails at startup 2024-04-26 11:08:25 +08:00
jxxghp
e35bacecd5 fix #1942 2024-04-26 10:42:24 +08:00
jxxghp
d84c86b0f6 fix plugin reset 2024-04-25 16:48:37 +08:00
jxxghp
73ae09b041 fix bug 2024-04-25 10:31:11 +08:00
jxxghp
a11318390d feat: read the frontend version 2024-04-25 10:18:38 +08:00
jxxghp
1714990e2e feat: read the frontend version 2024-04-25 10:18:14 +08:00
jxxghp
44cd5f52e0 fix #1945 2024-04-24 10:25:45 +08:00
jxxghp
59b9dc354e fix: consolidate duplicated code 2024-04-24 08:18:11 +08:00
jxxghp
591969015f v1.8.3
- Dashboard settings persist per user, and can also be configured differently per browser
- Fixed duplicated extensions and recognition words being applied when file organizing uses the `original_name` placeholder
- Improved adaptation for a few sites
- Fixed the plugin market showing empty plugins
2024-04-23 17:53:32 +08:00
jxxghp
6118e235c3 Merge pull request #1943 from thsrite/main 2024-04-23 17:17:28 +08:00
thsrite
228b1a11d0 fix 2024-04-23 16:18:30 +08:00
thsrite
c8a1e59310 fix plugin 403 msg 2024-04-23 16:16:39 +08:00
jxxghp
b0f7a11328 fix naming 2024-04-23 11:22:43 +08:00
jxxghp
b753e50580 fix api order 2024-04-23 10:06:19 +08:00
jxxghp
3002bf4dd2 fix 2024-04-23 10:02:15 +08:00
jxxghp
0cbe8f5cdc Merge pull request #1920 from hotlcc/develop-20240417-用户配置
Add user-configuration capabilities and APIs
2024-04-23 09:57:55 +08:00
jxxghp
1a03d19469 Merge remote-tracking branch 'origin/main' 2024-04-23 09:56:28 +08:00
jxxghp
b7c1106744 fix #1940 2024-04-23 09:56:21 +08:00
Allen
d6c6c999fc improve user-level configuration capabilities 2024-04-23 09:51:18 +08:00
jxxghp
408703d4a3 Merge pull request #1938 from thsrite/main 2024-04-22 18:14:34 +08:00
thsrite
40a612c327 fix 2024-04-22 15:57:34 +08:00
thsrite
e519fc484b fix: get the subscription list for a given user 2024-04-22 15:56:50 +08:00
thsrite
e430a3e88b fix: get the subscription list for a given tmdb_id 2024-04-22 15:49:57 +08:00
jxxghp
316f61bf69 fix #1922 2024-04-22 09:54:51 +08:00
jxxghp
750c4441db fix #1930 2024-04-22 09:31:35 +08:00
jxxghp
441cee4ee5 v1.8.2
- Added subscription history
- Sites can display their connection status
- Added support for the 花梨月下 site

Note: clear your browser cache after updating
2024-04-19 19:57:42 +08:00
jxxghp
ebf2f53ae1 fix api 2024-04-19 19:51:01 +08:00
jxxghp
4e7000efbb Merge pull request #1925 from Aodi/main
fix image display behind reverse proxies; pass the URL as a query parameter to avoid double slashes
2024-04-19 19:37:29 +08:00
jxxghp
0679a32659 fix 2024-04-19 12:31:38 +08:00
jxxghp
148984ad0e Merge pull request #1923 from thsrite/main 2024-04-19 11:50:31 +08:00
thsrite
dd8804ef3e fix 2024-04-19 11:49:03 +08:00
aodi
fb0018dda6 fix image display behind reverse proxies; pass the URL as a query parameter to avoid double slashes 2024-04-19 11:24:37 +08:00
thsrite
c14e529c91 fix #1921 2024-04-19 10:26:34 +08:00
jxxghp
f6222122c0 feat: subscription history and API 2024-04-18 21:00:57 +08:00
jxxghp
3a18267ec0 feat: subscription history and API 2024-04-18 15:48:46 +08:00
Allen
ae60040120 fixbug 2024-04-18 15:19:46 +08:00
jxxghp
b04bc74550 fix public site flag 2024-04-18 15:10:06 +08:00
Allen
666d6eb048 新增用户配置相关能力和接口 2024-04-18 12:33:35 +08:00
jxxghp
73a3a8cf94 fix #1914 2024-04-18 11:20:03 +08:00
jxxghp
6d66c5b577 update site.py 2024-04-17 15:02:03 +08:00
jxxghp
c3ffe38d4d feat: site usage statistics 2024-04-17 12:42:32 +08:00
jxxghp
5108dbbeb5 fix plugin install 2024-04-17 09:46:02 +08:00
jxxghp
cbf56bd9b7 fix module log 2024-04-16 18:22:02 +08:00
jxxghp
67965b09a6 Merge remote-tracking branch 'origin/main' 2024-04-16 18:21:04 +08:00
jxxghp
a2678d5815 fix #1882 wrong season assignment! 2024-04-16 18:20:54 +08:00
jxxghp
36b25e6a08 Merge pull request #1908 from zhu0823/main 2024-04-16 17:49:33 +08:00
zhu0823
c98c8c8836 feat: blacklist filtering for Plex media count statistics 2024-04-16 17:42:25 +08:00
jxxghp
423b7cf340 Merge pull request #1907 from zhu0823/main 2024-04-16 16:54:43 +08:00
zhu0823
02acc8bc35 feat: blacklist filtering for Plex continue watching 2024-04-16 16:38:16 +08:00
jxxghp
664b42f050 fix original_name 2024-04-16 10:22:49 +08:00
jxxghp
ca491891dc - fix history issues 2024-04-16 10:06:14 +08:00
jxxghp
89e3d16f27 Merge pull request #1897 from Aodi/main 2024-04-15 18:29:02 +08:00
aodi
a02ea64068 fix: reverse proxies that forbid encoded slashes failing to load images 2024-04-15 16:03:44 +08:00
jxxghp
0f0ace5ddc feat: fuzzy directory matching in history 2024-04-15 14:10:45 +08:00
jxxghp
04d94f3bdd fix torrents match 2024-04-15 13:26:25 +08:00
jxxghp
7d45b68b4f fix scheduler 2024-04-15 13:17:20 +08:00
jxxghp
ccb47c0120 v1.8.1
- Improved file recognition
- Improved history performance and added directory filtering
- Plugin updates can show their changelog
2024-04-14 14:04:21 +08:00
jxxghp
6939bff790 fix #1882 2024-04-14 13:47:12 +08:00
jxxghp
8cd0dd4198 Merge pull request #1882 from WangEdward/main
fix: metainfo for manual transfer
2024-04-14 13:19:46 +08:00
jxxghp
d6d1f6519a Merge remote-tracking branch 'origin/main' 2024-04-14 13:15:01 +08:00
jxxghp
906325710b fix #1876 2024-04-14 13:14:41 +08:00
jxxghp
05bafeaedf Merge pull request #1888 from thsrite/main 2024-04-14 12:31:36 +08:00
thsrite
babad5a098 fix plugins being loaded multiple times 2024-04-14 11:54:38 +08:00
jxxghp
fe07602a35 fix: distinguish prompts when adding a site 2024-04-13 18:56:07 +08:00
jxxghp
492533dcdb rollback #1884 2024-04-13 18:38:29 +08:00
jxxghp
45b044cd6b Merge pull request #1884 from thsrite/main 2024-04-13 18:03:34 +08:00
thsrite
fc65cc3619 fix: lock the singleton so a slow init method cannot trigger multiple inits 2024-04-13 17:33:08 +08:00
jxxghp
c6e069331c Merge pull request #1883 from thsrite/main 2024-04-13 17:09:10 +08:00
thsrite
6a8a946ec8 fix: PluginHelper().install already counts installs 2024-04-13 17:05:12 +08:00
Edward
d96e4561e2 fix: metainfo for manual transfer 2024-04-12 14:20:41 +00:00
Edward
172bc23b2a fix: empty season 2024-04-12 14:12:15 +00:00
jxxghp
98baf922d6 fix resource exception 2024-04-12 21:38:53 +08:00
jxxghp
9a7cdc1e74 fix #1858 2024-04-12 12:45:16 +08:00
jxxghp
4e22293cda fix multi-level path recognition for files 2024-04-12 12:04:42 +08:00
jxxghp
f17890b6ce fix matching of media IDs specified in the word table 2024-04-12 08:24:20 +08:00
jxxghp
66af2de416 fix #1864 2024-04-11 19:48:19 +08:00
jxxghp
17e1e6b49b fix log 2024-04-11 12:46:29 +08:00
jxxghp
e501154ad4 v1.8.0
- Search and subscriptions can target a specific season; enter `xxxx 第x季` (season x)
- The plugin market can show a plugin's changelog (to be supplied by plugin authors)
- Improved media info recognition
- Improved resource search matching for anime and pinyin titles
- Improved UI performance
- Fixed the default filter rule not applying to manual searches
- Added a real-time transfer API for downloaded files: under QB settings -> run external program on torrent completion, enter `curl "http://localhost:3000/api/v1/transfer/now?token=moviepilot"` so that in downloader-monitoring mode files are organized into the library immediately on completion instead of waiting for polling (adjust host, port and token to your setup; wget works in place of curl).

Note: if search misbehaves, clear your browser cache.
2024-04-11 08:21:29 +08:00
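The real-time transfer endpoint introduced in v1.8.0 can also be triggered from any script instead of curl. A minimal stdlib sketch; the host, port and token are the placeholder values from the release note and must match your deployment:

```python
from urllib.request import urlopen


def transfer_now_url(base_url: str = "http://localhost:3000",
                     token: str = "moviepilot") -> str:
    """Build the 'organize downloads now' URL from the v1.8.0 notes.

    Defaults are the example values from the release note, not
    something to ship as-is.
    """
    return f"{base_url}/api/v1/transfer/now?token={token}"


# From a download-complete hook you would then call, e.g.:
# urlopen(transfer_now_url(), timeout=10)
```

The actual HTTP call is left commented out because it requires a running MoviePilot instance; the note's curl/wget one-liners do the same GET request.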
jxxghp
c73cf1d7e2 v1.8.0
- Search and subscriptions can target a specific season; enter `xxxx 第x季` (season x)
- The plugin market can show a plugin's changelog (to be supplied by plugin authors)
- Improved media info recognition
- Improved resource search matching for anime and pinyin titles
- Improved UI performance
- Fixed the default filter rule not applying to manual searches
- Added a real-time transfer API for downloaded files: under QB settings -> run external program on torrent completion, enter `curl "http://localhost:3000/api/v1/transfer/now?token=moviepilot"` so that in downloader-monitoring mode files are organized into the library immediately on completion instead of waiting for polling (adjust host, port and token to your setup; wget works in place of curl).

Note: if search misbehaves, clear your browser cache.
2024-04-11 08:02:24 +08:00
jxxghp
a3603f79c8 fix requests 2024-04-10 22:16:10 +08:00
jxxghp
294b4a6bf9 fix torrents match 2024-04-10 20:05:43 +08:00
jxxghp
f365d93316 fix torrents match 2024-04-10 20:02:02 +08:00
jxxghp
facd20ba3c fix bangumi 2024-04-10 19:04:59 +08:00
jxxghp
d0e596c93c feat: plugin update history 2024-04-10 16:44:08 +08:00
jxxghp
e20ec4ddf5 fix bug 2024-04-10 15:05:32 +08:00
jxxghp
ba0a1cb1bd fix #1738 search and subscriptions support targeting a specific season 2024-04-10 14:51:34 +08:00
jxxghp
17438f8c5c fix log 2024-04-10 13:41:11 +08:00
jxxghp
e0c2ae0f0c fix log 2024-04-10 13:20:33 +08:00
jxxghp
9ebb211589 fix meta cases 2024-04-10 12:22:32 +08:00
jxxghp
8a0350c566 fix mtype 2024-04-10 11:50:14 +08:00
jxxghp
765d37fd6a fix meta 2024-04-10 11:44:14 +08:00
jxxghp
b3d57b868e fix: custom recognition words do not process spaces 2024-04-10 07:09:31 +08:00
jxxghp
18e7099848 fix: custom recognition words do not process spaces 2024-04-10 07:07:17 +08:00
jxxghp
27cb968a18 fix #1846 2024-04-09 18:47:25 +08:00
jxxghp
45bf84d448 fix #1849 2024-04-09 18:43:24 +08:00
jxxghp
85300b0931 more log 2024-04-09 13:36:13 +08:00
jxxghp
ac87c778f4 fix anime match 2024-04-09 13:20:28 +08:00
jxxghp
1ed511034c fix search match 2024-04-09 07:09:54 +08:00
jxxghp
ca7f121a21 Merge pull request #1847 from hotlcc/develop-修复PTLSP站点测试 2024-04-08 14:05:50 +08:00
Allen
c8e73e17d3 fix ptlsp test issue 2024-04-08 04:26:49 +00:00
Allen
3bfc87f1cc fix ptlsp site test issue 2024-04-08 03:16:07 +00:00
jxxghp
e0e76bf3fe fix 2024-04-07 16:32:26 +08:00
jxxghp
6a3e3f1562 feat: match Chinese and English names in turn 2024-04-07 16:20:33 +08:00
jxxghp
59330657b2 add nano 2024-04-07 14:56:32 +08:00
jxxghp
927d510619 add API to run downloader file organizing immediately 2024-04-07 14:45:59 +08:00
jxxghp
80a390ac6c feat: when the torrent name is pinyin, extract the Chinese name from the subtitle for recognition 2024-04-07 14:25:12 +08:00
jxxghp
cae563ce53 test: looser matching rules 2024-04-06 21:07:00 +08:00
jxxghp
0495936ef8 v1.7.9
- Subscriptions support preset subscription rules
- Added quick plugin search, and sped up plugin install/uninstall
- Improved performance and usability of file management and history
- Fixed M-Team (馒头) torrent downloads failing

Tips:
1. If the frontend behaves strangely, clear your browser cache first
2. Keep priority levels reasonable: too many levels combined with many search results noticeably increases search time
2024-04-06 17:27:48 +08:00
jxxghp
34d27fe85b fix #1818 2024-04-06 11:47:22 +08:00
jxxghp
0e2c4d74d6 feat: improve plugin reload 2024-04-05 23:20:51 +08:00
jxxghp
bd137de042 Merge pull request #1833 from honue/main 2024-04-05 22:47:46 +08:00
honue
4a2688b52f fix #1744 2024-04-05 22:22:41 +08:00
jxxghp
36acb1daaa Merge pull request #1832 from cddjr/fix_ua 2024-04-05 12:01:47 +08:00
景大侠
a0c3b6b26b fix: fall back to the system-configured UA when a site has no User-Agent set 2024-04-05 11:40:52 +08:00
jxxghp
7c93432505 Merge pull request #1815 from z3shan33/main 2024-04-02 09:26:19 +08:00
z3shan33
2760f25992 fix #1792 2024-04-02 09:24:43 +08:00
jxxghp
d199c47666 fix #1804 2024-04-02 08:22:12 +08:00
jxxghp
a6550a21ef Merge pull request #1804 from thsrite/main 2024-04-01 19:11:29 +08:00
thsrite
26a321f119 feat: set default subscription rules 2024-04-01 13:29:22 +08:00
jxxghp
7e8f7be905 v1.7.8
- Users can enable two-factor authentication for admin-console login, improving security
- Most forms in the admin console now show hint text
- Plugin dependencies are reinstalled on restart, fixing failed dependency installs for online plugins (requires re-pulling the image to take effect)
- Improved the framework's tolerance of plugin errors; the plugin market sorts plugins by download popularity
2024-03-31 19:38:20 +08:00
jxxghp
600b6144e4 fix #1783 directory completeness matching 2024-03-31 08:17:51 +08:00
jxxghp
dfb11420e5 Merge pull request #1789 from DDS-Derek/main 2024-03-30 17:44:22 +08:00
DDSRem
584c8a2d94 feat: install the plug-in pip extension in advance 2024-03-30 17:41:04 +08:00
jxxghp
536bd9268a feat: add subscription-related events 2024-03-30 08:04:52 +08:00
jxxghp
5ee41b87a2 fix login api 2024-03-29 11:13:57 +08:00
jxxghp
89b2fe10fe Merge pull request #1774 from jeblove/main 2024-03-28 21:33:16 +08:00
jeblove
c180e50164 feat: add a session method to fetch tr session and config info 2024-03-28 21:24:16 +08:00
jxxghp
8f7b08afae fix #1763 2024-03-28 17:04:44 +08:00
jxxghp
72de8a2192 Merge pull request #1772 from z3shan33/main
feat #1763
2024-03-28 16:57:55 +08:00
zss
40d99f1dd5 feat #1763 2024-03-28 16:39:34 +08:00
jxxghp
ff07841dd6 roll back site test 2024-03-28 13:20:48 +08:00
jxxghp
828fc08362 Merge pull request #1766 from cddjr/1761--bug 2024-03-28 06:48:22 +08:00
景大侠
3fd043bb9b fix #1761 2024-03-28 02:09:47 +08:00
jxxghp
f51c4ebed7 fix bug 2024-03-27 20:46:06 +08:00
jxxghp
9b917cd4c2 Update requirements.txt 2024-03-27 19:50:22 +08:00
jxxghp
91eac50ab9 v1.7.7
- Multi-alias search (`SEARCH_MULTIPLE_NAME`) now defaults to off; improved search handling when a site is unreachable, speeding up searches
- Fixed residual site settings (e.g. in subscriptions) after a site is deleted or reset
- 馒头 site statistics now use the ApiKey
- Improved the cast display for Bangumi daily broadcasts
- Plugins can show download/install counts
2024-03-27 17:01:33 +08:00
jxxghp
f6468ad327 fix scraper 2024-03-27 16:01:20 +08:00
jxxghp
fb6c3a9f36 fix site test 2024-03-27 15:45:27 +08:00
jxxghp
eb751bb581 fix site test 2024-03-27 15:35:01 +08:00
jxxghp
f9069bf19b fix #1758 2024-03-27 12:22:15 +08:00
jxxghp
ef0c88a3b6 fix torrent deduplication 2024-03-27 11:37:51 +08:00
jxxghp
f1f8ccb5d6 feat:plugins statistics 2024-03-27 08:24:06 +08:00
jxxghp
2df113ad38 fix SiteDeleted 2024-03-27 07:09:00 +08:00
jxxghp
fa03232321 Merge pull request #1759 from cddjr/fix_remove_site 2024-03-27 06:24:18 +08:00
景大侠
04f50284c6 fix: deleting a site left numeric IDs in subscriptions' site lists 2024-03-27 00:54:58 +08:00
jxxghp
9fc950c2ed Merge pull request #1751 from z3shan33/main 2024-03-26 16:41:59 +08:00
zss
9c1aeb933e fix: fetch voice-actor role info via characters in bangumi 2024-03-26 16:11:03 +08:00
jxxghp
1cee20134a fix plugin deduplication & sorting 2024-03-26 09:30:05 +08:00
jxxghp
0ca5f5bd89 fix timeout 2024-03-25 23:06:30 +08:00
jxxghp
25e0c25bc6 fix timeout 2024-03-25 23:01:50 +08:00
jxxghp
3f8453f054 fix 2024-03-25 20:14:24 +08:00
jxxghp
cf259af2d1 feat: plugin install statistics 2024-03-25 18:02:57 +08:00
jxxghp
0b70f74553 fix site test 2024-03-24 21:33:41 +08:00
jxxghp
f0bc5d737b - Bug fixes 2024-03-24 15:45:20 +08:00
jxxghp
181d87f68e fix mtorrent 2024-03-24 15:31:00 +08:00
jxxghp
e37ac4da6a v1.7.6
- 馒头 search now uses an ApiKey: first create an access token under `控制台` -> `实验室` on the site; once the site cookie is maintained manually, the ApiKey is fetched and cached automatically. If the ApiKey changes, manually trigger a site edit to clear the cache.
- Search results from multiple aliases are merged to avoid incomplete results

Note: apart from search and download, 馒头 sign-in, statistics and ratio farming still use the cookie; assess the risk yourself.
2024-03-24 14:01:20 +08:00
jxxghp
bd7ca7fa60 feat:m-team x-api-key 2024-03-24 13:38:36 +08:00
jxxghp
96de772119 fix mtorrent 2024-03-24 10:20:12 +08:00
jxxghp
72b6556c62 add SEARCH_MULTIPLE_NAME 2024-03-24 08:26:59 +08:00
jxxghp
e4bb182668 feat: search for more results 2024-03-24 08:13:08 +08:00
jxxghp
595d097235 v1.7.5
- Authenticated sites now include 青蛙🐸; 蝴蝶🦋 supports IPv4 domains; adapted to 馒头's new UI
- Faster plugin-market loading
- Plugin logs are shown newest first
2024-03-23 19:01:09 +08:00
jxxghp
9b53aad34f fix mtorrent 2024-03-23 13:46:06 +08:00
jxxghp
e92a2e1ff1 Merge pull request #1728 from developer-wlj/wlj0323 2024-03-23 13:38:33 +08:00
mayun110
764359c3e8 fix 2024-03-23 13:18:36 +08:00
mayun110
abd1a51863 fix: labels by mTorrent 2024-03-23 12:26:49 +08:00
jxxghp
2f05f8dc4d fix mtorrent 2024-03-23 09:50:03 +08:00
jxxghp
23c678e71e fix mtorrent 2024-03-23 09:42:11 +08:00
jxxghp
ef67b76453 fix: show the username in download messages 2024-03-22 13:26:07 +08:00
jxxghp
c4e7870f7b Merge pull request #1726 from sundxfansky/main 2024-03-22 06:53:18 +08:00
jxxghp
9cef50436a Merge pull request #1725 from Vincwnt/main 2024-03-22 06:51:40 +08:00
sundxfansky
a15aded0a0 no need to add the time 2024-03-22 04:40:33 +08:00
chenyuan
8ac40dc205 fix: bulk message push failed when deleted users exist 2024-03-21 22:27:01 +08:00
jxxghp
92a5b3d227 feat: load online plugins with multiple threads 2024-03-21 21:30:26 +08:00
jxxghp
761f1e7a4b feat: load online plugins with multiple threads 2024-03-21 21:27:54 +08:00
jxxghp
ad0731e1ec Update README.md 2024-03-21 18:27:36 +08:00
jxxghp
a451f12d86 add qingwa 2024-03-21 16:55:57 +08:00
jxxghp
dcde619e77 Plugin logs newest-first & added Windows installation guide 2024-03-21 16:28:16 +08:00
jxxghp
92769b27f1 v1.7.4
- Recommendations now include `Bangumi每日放送` (Bangumi daily broadcasts)
- Domains such as `api.themoviedb.org` automatically resolve IPs via DoH to avoid DNS poisoning and improve connectivity (controlled by the `DOH_ENABLE` variable, on by default)
- Site browsing supports click-to-download
- Improved rendering speed of some pages with large data sets
2024-03-19 17:27:52 +08:00
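The DoH resolution behind the `DOH_ENABLE` switch can be illustrated with a public DNS-over-HTTPS JSON endpoint such as Cloudflare's; fetching the URL below with an `accept: application/dns-json` header returns the A records, bypassing the local (possibly poisoned) resolver. This is a sketch of the general technique, not MoviePilot's actual `doh.py`:

```python
from urllib.parse import urlencode

# Cloudflare's public DNS-over-HTTPS JSON endpoint
DOH_ENDPOINT = "https://cloudflare-dns.com/dns-query"


def doh_query_url(hostname: str, record_type: str = "A") -> str:
    """Build a DoH JSON-API query URL for *hostname*.

    Fetch it with an ``accept: application/dns-json`` request header;
    the response body contains the resolved records.
    """
    return f"{DOH_ENDPOINT}?{urlencode({'name': hostname, 'type': record_type})}"


print(doh_query_url("api.themoviedb.org"))
# https://cloudflare-dns.com/dns-query?name=api.themoviedb.org&type=A
```

The resolved IP can then be used directly in the HTTP request while keeping the original hostname in the `Host` header/SNI.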
jxxghp
fa83168b92 feat: add DOH switch 2024-03-19 12:26:04 +08:00
jxxghp
f96295de3a add download api 2024-03-18 23:27:54 +08:00
jxxghp
6cecb3c6a6 fix bug 2024-03-18 20:02:03 +08:00
jxxghp
b6486035c4 add Bangumi 2024-03-18 19:02:34 +08:00
jxxghp
f7c1d28c0f remove cloudflared 2024-03-18 08:23:43 +08:00
jxxghp
47c2ae1c08 fix doh domains 2024-03-18 07:19:56 +08:00
jxxghp
c03f24dcf5 Update doh.py 2024-03-17 23:40:19 +08:00
jxxghp
6e2f5762b4 add doh 2024-03-17 23:30:50 +08:00
jxxghp
75330a08cc add doh 2024-03-17 23:25:04 +08:00
jxxghp
3f17e371c3 add doh 2024-03-17 23:15:21 +08:00
jxxghp
a820341ec7 rollback cloudflared 2024-03-17 22:32:15 +08:00
jxxghp
c1f04f5631 Merge pull request #1697 from DDS-Derek/main 2024-03-17 19:06:31 +08:00
DDSRem
a121e45b94 fix: container resolv cannot be modified 2024-03-17 18:43:54 +08:00
DDSRem
885ee976b2 feat: better cloudflared install 2024-03-17 18:15:29 +08:00
jxxghp
e6229beb94 add cloudflared 2024-03-17 16:52:13 +08:00
jxxghp
f2a40e1ec3 fix: themoviedb seasons not showing 2024-03-17 15:59:21 +08:00
jxxghp
5f80aa5b7c - Fixed issues with Douban subscriptions and the local CookieCloud service 2024-03-17 15:12:24 +08:00
jxxghp
14ff1e9af6 fix resource 2024-03-17 15:09:10 +08:00
jxxghp
49ab5ac709 - Fixed issues with Douban subscriptions and the local CookieCloud service 2024-03-17 13:43:11 +08:00
jxxghp
74c7a1927b fix cookiecloud 2024-03-17 13:42:01 +08:00
jxxghp
cbd704373c try fix cookiecloud 2024-03-17 12:57:38 +08:00
jxxghp
a05724f664 fix: auto-correct site URL format 2024-03-17 12:21:32 +08:00
jxxghp
97d0fc046a fix Douban subscription bug 2024-03-17 11:27:54 +08:00
jxxghp
6248e34400 fix v1.7.3 2024-03-17 10:00:59 +08:00
jxxghp
a442dab85b fix nginx.conf 2024-03-17 09:51:04 +08:00
jxxghp
d4514edba6 v1.7.3
- Added a message center to `捷径` (Shortcuts)
- Built-in local CookieCloud server: cookie data is encrypted and stored in the user config directory; enable it under `设定` - `站点` (Settings - Sites)
- Improved recommendation detail pages; Douban recommendations show the Douban data source directly
- Fixed `蜜柑` being unsearchable
2024-03-17 09:09:21 +08:00
jxxghp
0c581565ad Update message.py 2024-03-16 22:21:12 +08:00
jxxghp
350def0a6f Update message.py 2024-03-16 22:20:14 +08:00
jxxghp
5b3027c0a7 fix reload 2024-03-16 21:06:52 +08:00
jxxghp
e4b90ca8f7 fix #1694 2024-03-16 20:40:02 +08:00
jxxghp
d917b00055 Merge pull request #1694 from lingjiameng/main
CookieCloud config supports live updates
2024-03-16 20:36:05 +08:00
s0mE
cc94c6c367 Merge branch 'jxxghp:main' into main 2024-03-16 19:24:25 +08:00
ljmeng
6410051e3a CookieCloud config supports live reloading 2024-03-16 19:23:06 +08:00
jxxghp
aaa1b80edf fix resource-pack update bug 2024-03-16 18:38:25 +08:00
jxxghp
f345d94009 fix README.md 2024-03-16 18:28:09 +08:00
jxxghp
550fe26d76 Merge pull request #1693 from lingjiameng/main
Integrate the CookieCloud server side
2024-03-16 17:52:49 +08:00
jxxghp
7ad498b3a3 fix 2024-03-16 17:06:24 +08:00
jxxghp
20eb0b4635 fix message 2024-03-16 16:29:14 +08:00
ljmeng
747dc3fafe Disable the local CookieCloud service by default 2024-03-16 15:40:10 +08:00
s0mE
4708fbb3cb Merge branch 'jxxghp:main' into main 2024-03-16 15:36:20 +08:00
ljmeng
6ba40edeb4 Merge branch 'main' of github.com:lingjiameng/MoviePilot 2024-03-16 15:35:02 +08:00
ljmeng
79cb28faf9 Default config disables the local cookiecloud service 2024-03-16 15:34:46 +08:00
jxxghp
9acf05f334 fix #1691 2024-03-16 15:31:04 +08:00
jxxghp
d0af1bf075 Merge pull request #1691 from hoey94/main 2024-03-16 13:53:10 +08:00
hoey94
f8a95cec4a fix: TR remote-control plugin speed-limit issue 104 2024-03-16 12:37:21 +08:00
jxxghp
3cd672fa8d fix 2024-03-16 08:40:36 +08:00
jxxghp
fe03638552 fix api 2024-03-16 08:39:57 +08:00
ljmeng
1ae220c654 Integrate the CookieCloud server side 2024-03-16 04:48:34 +08:00
jxxghp
75c7e71ee6 Merge pull request #1689 from hoey94/main 2024-03-15 19:14:26 +08:00
hoey94
4619158b99 fix: speed-limit switch bug 104 2024-03-15 18:23:44 +08:00
jxxghp
3f88907ba9 fix bug 2024-03-15 18:17:04 +08:00
jxxghp
ae6440bd0a Merge pull request #1683 from lingjiameng/main 2024-03-15 07:55:01 +08:00
s0mE
261f5fc0c6 Merge branch 'jxxghp:main' into main 2024-03-14 23:26:58 +08:00
jxxghp
a5d044d535 fix message 2024-03-14 20:36:15 +08:00
jxxghp
6e607ca89f fix: improve recommendation navigation
feat: persist messages to the database
2024-03-14 19:44:15 +08:00
jxxghp
06e4b9ad83 Merge remote-tracking branch 'origin/main' 2024-03-14 19:15:22 +08:00
jxxghp
c755dc9b85 fix: improve recommendation navigation
feat: persist messages to the database
2024-03-14 19:15:13 +08:00
jxxghp
209451d5f9 Merge pull request #1678 from HankunYu/main 2024-03-14 06:57:31 +08:00
HankunYu
60b2d30f42 Update README.md
Added a section on using a reverse proxy, fixing logs taking extremely long (10+ minutes) to load and becoming unusable behind an HTTPS reverse proxy.
2024-03-13 18:54:55 +00:00
ljmeng
399d26929d CookieCloud now decrypts locally, improving security 2024-03-14 02:35:22 +08:00
jxxghp
f50c2e59a9 fix #1674 2024-03-13 14:54:37 +08:00
jxxghp
1cd768b3d0 v1.7.2
- Site indexing now supports `蟹黄堡`; fixed indexing for `蝴蝶` and `蜜柑`
- To counter the mass removal of Chinese titles on themoviedb, Singapore Chinese (zh-sg) titles are also used for search and scraping
- The metadata-recognition cache lifetime is configurable (`META_CACHE_EXPIRE`, in hours)
- Fixed secondary anime categorization under tv breaking when no anime category strategy is set
- Improved the plugin-upgrade experience
2024-03-13 08:21:59 +08:00
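The zh-sg fallback described in the v1.7.2 note amounts to preferring a zh-CN title and falling back to the Singapore Chinese one when it is missing. A sketch of that selection logic (the `translations` mapping loosely mirrors what a TMDB translations lookup returns; the fallback order is from the release note, not MoviePilot's actual code):

```python
def pick_chinese_title(translations: dict, original: str) -> str:
    """Pick a display title, preferring zh-CN and falling back to zh-SG.

    A candidate is accepted only if it actually contains CJK characters,
    since TMDB translation entries can be empty or left in English.
    Returns the original title when no Chinese candidate is found.
    """
    for lang in ("zh-CN", "zh-SG"):  # preference order from the release note
        title = translations.get(lang)
        if title and any("\u4e00" <= ch <= "\u9fff" for ch in title):
            return title
    return original
```

Search terms can be built the same way, by appending the zh-SG title as an extra alias when the main title is not Chinese.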
jxxghp
abc26b65ed fix #1645 support 蝴蝶's torrent link format 2024-03-12 17:01:41 +08:00
jxxghp
dc1a41da90 fix: reduce unnecessary checks 2024-03-12 13:48:37 +08:00
jxxghp
a95dac1b32 fix directory checks 2024-03-12 13:36:33 +08:00
jxxghp
18d9620687 #1653 add Singapore titles to search terms; when the main title is not Chinese, consider using the Chinese Singapore title 2024-03-12 11:55:47 +08:00
jxxghp
8808dcee52 fix 1659 2024-03-12 11:16:10 +08:00
jxxghp
17adc4deab Merge pull request #1662 from thsrite/main 2024-03-11 16:36:19 +08:00
thsrite
9351489166 fix: media recognition that bypasses the cache should still write the latest info to the cache 2024-03-11 16:34:53 +08:00
jxxghp
e2148cb77f fix 2024-03-11 16:28:36 +08:00
jxxghp
e322204094 Merge pull request #1661 from jeblove/main 2024-03-11 16:25:05 +08:00
jxxghp
0fa884157a Support configuring the meta cache lifetime 2024-03-11 16:23:07 +08:00
jeblove
96468213fe Merge branch 'main' of https://github.com/jeblove/MoviePilot 2024-03-11 16:17:36 +08:00
jeblove
d044a9db00 fix images for some episodes in Continue Watching 2024-03-11 16:17:10 +08:00
jxxghp
d5f5e0d526 Merge pull request #1660 from thsrite/main 2024-03-11 15:59:20 +08:00
thsrite
14a3bb8fc2 add: plugin methods to query subscription and download history from the DB by type and time 2024-03-11 15:56:19 +08:00
jxxghp
5921d43ae8 fix #1655 2024-03-11 12:34:19 +08:00
jxxghp
635061c054 Merge pull request #1654 from jeblove/main 2024-03-11 11:32:27 +08:00
jeblove
3c8c6e5375 fix syntax issue 2024-03-11 11:27:24 +08:00
jeblove
dd063bb16b fix image in WeChat push messages for episode playback 2024-03-11 01:57:57 +08:00
jeblove
750711611b fix syntax issue 2024-03-11 00:15:55 +08:00
jxxghp
d3983c51c2 Merge pull request #1652 from jeblove/main 2024-03-10 18:39:16 +08:00
jeblove
b9dec73773 fix syntax issue 2024-03-10 18:10:09 +08:00
jeblove
b310367d25 fix image in WeChat playback push messages 2024-03-10 17:50:01 +08:00
jxxghp
55beea87fd Merge pull request #1649 from thsrite/main 2024-03-10 11:24:57 +08:00
thsrite
4510382f74 fix: tv anime categorization not taking effect 2024-03-10 09:27:48 +08:00
jxxghp
9b9ae9401e fix bug 2024-03-09 21:33:59 +08:00
jxxghp
e10464c278 v1.7.1
- Anime supports secondary categories when using a separate directory (category.yml template updated)
- Two downloaders can be enabled at once, but only the first is used by default (some official plugins were adapted for this)
- Live logs show the newest entries at the top
- Improved health checks for downloaders and media directories
- Fixed endless loading after upgrades caused by browser caching
- Priority rules now support `官种` (official releases)
- Fixed ordinary users being unable to view downloading tasks
- Fixed scheduler-related settings not taking effect immediately after being changed
2024-03-09 21:20:36 +08:00
jxxghp
542531a1ca fix recognition of yyyymmdd episode dates 2024-03-09 21:13:23 +08:00
jxxghp
04c21232e3 fix recognition of yyyymmdd episode dates 2024-03-09 21:03:29 +08:00
jxxghp
48a19fd57c fix downloader test 2024-03-09 20:03:57 +08:00
jxxghp
59cb69a96b Merge pull request #1643 from jeblove/main 2024-03-09 19:39:31 +08:00
jeblove
e7d94f7f70 fix: parameters for the Library/VirtualFolders API 2024-03-09 19:03:02 +08:00
jxxghp
27d2d01a20 feat: downloaders support multi-select 2024-03-09 18:52:27 +08:00
jxxghp
8b4495c857 feat: downloaders support multi-select 2024-03-09 18:25:04 +08:00
jxxghp
15bdb694cc fix: improve some message formats 2024-03-09 17:43:21 +08:00
jxxghp
3ef9c5ea2c fix: improve some message formats 2024-03-09 17:34:49 +08:00
jxxghp
ab6577f752 fix #1561 2024-03-09 17:12:21 +08:00
jxxghp
49a82d7a48 feat: add 官种 (official release) priority rule
fix #1635
feat: anime supports secondary directories fix #1633
2024-03-09 09:53:15 +08:00
jxxghp
bdcbb168a0 Merge pull request #1636 from HankunYu/main 2024-03-09 08:05:49 +08:00
HankunYu
2e1cb0bd76 fix #1630
remove_tags and delete_tags were mixed up here. The original remove-tags function is corrected to delete, and a new remove-tags function is added.
2024-03-08 18:23:42 +00:00
jxxghp
851864cd49 fix: scheduled services take effect immediately
fix #1615
2024-03-08 16:22:53 +08:00
jxxghp
b5d7b6fb53 fix subscription global notifications 2024-03-08 15:38:40 +08:00
jxxghp
92bab2fc2f fix download global notifications 2024-03-08 15:27:24 +08:00
jxxghp
0dad6860c4 fix userid registration for download tasks 2024-03-08 14:40:00 +08:00
jxxghp
de4a7becc2 Merge pull request #1620 from thsrite/main 2024-03-08 11:58:52 +08:00
thsrite
2eeb24e22d fix: the link test is meaningless when downloader monitoring is off 2024-03-08 09:29:30 +08:00
jxxghp
e4a67ea052 - Fixed health checks for themoviedb/thetvdb not using the built-in proxy
- Fixed VoceChat message-sending failures in some scenarios
- Fixed VoceChat responses interfering with the WeChat callback
- Improved VoceChat security; the bot webhook must be reconfigured per the instructions

Note: the VoceChat bot webhook callback path is now `/api/v1/message/?token=moviepilot`, where `moviepilot` is the `API_TOKEN` set in the environment variables
2024-03-07 18:22:17 +08:00
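The hardening described above pins the webhook to a fixed path plus a `token` query parameter that must match `API_TOKEN`. A sketch of validating such a callback URL (the parsing here is illustrative, not MoviePilot's actual request handling):

```python
from urllib.parse import parse_qs, urlparse


def webhook_token_ok(callback_url: str, api_token: str) -> bool:
    """Check that a VoceChat webhook callback URL carries the right token.

    Mirrors the scheme in the note above: the bot must call
    /api/v1/message/?token=<API_TOKEN>. Both the path and the token
    must match for the request to be accepted.
    """
    parsed = urlparse(callback_url)
    token = parse_qs(parsed.query).get("token", [None])[0]
    return parsed.path == "/api/v1/message/" and token == api_token


print(webhook_token_ok("https://mp.example/api/v1/message/?token=moviepilot", "moviepilot"))
```

In a real server the same check would run inside the request handler before the message body is processed.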
jxxghp
a4df2f5213 fix wechat bug 2024-03-07 18:15:04 +08:00
jxxghp
4f89780a0f Merge remote-tracking branch 'origin/main' 2024-03-07 18:01:57 +08:00
jxxghp
26d6201b30 fix wechat bug 2024-03-07 18:01:50 +08:00
jxxghp
c9a9ff2692 Merge pull request #1613 from WangEdward/main 2024-03-07 17:48:31 +08:00
Edward
0be49953b4 fix: change vote to float 2024-03-07 09:45:14 +00:00
jxxghp
0de952f090 fix 2024-03-07 17:15:04 +08:00
jxxghp
2b570bf48f fix: improve VoceChat security 2024-03-07 17:07:28 +08:00
jxxghp
9476017af5 Merge remote-tracking branch 'origin/main' 2024-03-07 12:43:05 +08:00
jxxghp
54f808485e fix #1608 2024-03-07 12:42:59 +08:00
jxxghp
fa5c82899b Merge pull request #1605 from HankunYu/main
Update Chinese-subtitle filtering
2024-03-07 12:33:06 +08:00
HankunYu
4a57071809 Update Chinese-subtitle filter rules
add lowercase matching
2024-03-07 02:48:29 +00:00
HankunYu
4631db9a45 Update Chinese-subtitle filter rules
remove duplicate 简体; stricter CHT and CHS matching rules
2024-03-07 02:43:59 +00:00
jxxghp
0f09da55b0 Merge pull request #1606 from thsrite/main 2024-03-07 09:22:21 +08:00
thsrite
b14b41c2c1 fix: system health check uses the proxy for tmdb and tvdb 2024-03-07 09:20:20 +08:00
jxxghp
cf05ae20c5 v1.7.0
- `捷径` (Shortcuts) gains a system health check that quickly tests module connectivity, cross-device directories, etc.
- Network test adds `fanart` and `thetvdb` items
- New `短剧自动分类` (short-drama auto-categorization) plugin that sorts short dramas into a separate category directory by video duration
- Notifications now support `VoceChat`, enabling message interaction and group notifications (requires a self-hosted server)
- Fixed edited subscription episode counts being overwritten; default subscription rules gain a seeders setting

VoceChat setup and configuration reference: https://doc.voce.chat/zh-cn/bot/bot-and-webhook ; the webhook callback path is /api/v1/message/
2024-03-07 08:10:52 +08:00
HankunYu
897758d829 Merge remote-tracking branch 'upstream/main' 2024-03-06 19:54:39 +00:00
jxxghp
85a77a66dd fix 2024-03-06 22:26:36 +08:00
HankunYu
c450dfc0fa Update Chinese-subtitle filtering
add support for recognizing Chinese subtitles in anime
2024-03-06 14:09:19 +00:00
jxxghp
3d782a7475 feat: directory checks 2024-03-06 21:42:21 +08:00
jxxghp
4734851213 Merge pull request #1587 from WangEdward/main 2024-03-06 20:11:10 +08:00
jxxghp
9c8635002d Merge pull request #1603 from thsrite/main 2024-03-06 19:50:33 +08:00
thsrite
4cd3cb2b60 fix 2024-03-06 19:46:09 +08:00
thsrite
fa890ca29c fix: after manually editing a subscription's total episode count, it no longer follows tmdb changes 2024-03-06 19:33:31 +08:00
jxxghp
bbf1ec4c50 fix bug 2024-03-06 17:19:32 +08:00
jxxghp
523d458489 fix bug 2024-03-06 17:02:07 +08:00
jxxghp
45ec668875 add log 2024-03-06 16:47:04 +08:00
jxxghp
60122644b8 fix 2024-03-06 16:20:39 +08:00
jxxghp
07a77e0001 fix 2024-03-06 16:04:04 +08:00
jxxghp
d112f49a69 feat: support VoceChat 2024-03-06 15:54:40 +08:00
jxxghp
8cb061ff75 feat: module health checks 2024-03-06 13:23:51 +08:00
jxxghp
01e08c8e69 Merge pull request #1595 from richard-guan-dev/main
fix: Add a validation check to see if "assisted verification users" are active when logging in.
2024-03-06 10:44:44 +08:00
Richard Guan
3549b38ee8 Add validation for whether the assistive user is activated. 2024-03-06 10:38:20 +08:00
jxxghp
f5fb888c85 fix #1594 2024-03-06 08:19:08 +08:00
Edward
8bcb6a7cb6 chore: merge nested if 2024-03-05 09:35:34 +00:00
Edward
ac81dd943c feat: add min_seeders in filter_rule 2024-03-05 09:25:23 +00:00
jxxghp
663d282b5e fix category.yaml 2024-03-05 15:54:13 +08:00
jxxghp
c7b389dd9b v1.6.9
- Index sites now support `青蛙`
- Improved torrent index matching to reduce `类型不匹配` (type mismatch) errors
- Fixed default subscription filter rules not taking effect
2024-03-04 20:29:15 +08:00
jxxghp
bad37a1846 Merge pull request #1572 from WangEdward/main 2024-03-01 23:01:54 +08:00
jxxghp
9c09981583 Merge pull request #1571 from sohunjug/main 2024-03-01 22:47:46 +08:00
Edward
2d8e66cbe2 fix: site scope error 2024-03-01 14:40:50 +00:00
sohunjug
db28986d22 fix: softlink exists 2024-03-01 22:14:07 +08:00
Edward
727bed46b7 fix: empty return from get_subscribed_sites 2024-03-01 14:06:58 +00:00
jxxghp
8e0df90177 Merge pull request #1570 from DDS-Derek/main 2024-03-01 21:55:32 +08:00
DDSRem
34bbb86c16 fix: plugin directory backup failed 2024-03-01 21:40:46 +08:00
jxxghp
0403f1f48c fix logging 2024-03-01 13:10:19 +08:00
jxxghp
1db452e268 Merge pull request #1568 from cikezhu/main
fix time_difference()
2024-02-29 21:49:20 +08:00
叮叮当
81ca11650d fix time_difference() 2024-02-29 21:39:32 +08:00
jxxghp
2e4671fdbc Merge remote-tracking branch 'origin/main' 2024-02-29 17:28:42 +08:00
jxxghp
da80ad33d9 fix bug 2024-02-29 17:28:36 +08:00
jxxghp
a6f28569ab Merge pull request #1567 from sundxfansky/main 2024-02-29 17:20:45 +08:00
jxxghp
5dd36e95e0 feat: use site torrent categorization to improve recognition matching
fix: filtering took very long with complex priority rules; moved it to the last step
fix #1432
2024-02-29 17:18:01 +08:00
sundxfansky
1eaeea62db rm line 2024-02-29 16:58:18 +08:00
sundxfansky
4282c5dfc2 fix update failed 2024-02-29 16:56:33 +08:00
jxxghp
2e661f8759 Merge pull request #1566 from sundxfansky/main 2024-02-29 06:48:15 +08:00
tonser
31ca41828e fix: downloading selected torrents failed on tr 2024-02-29 02:09:53 +08:00
jxxghp
c9ebe76eb1 - Fixed subscriptions broken in v1.6.8 2024-02-28 23:19:10 +08:00
jxxghp
71ac12ab7a Update scheduler.py 2024-02-28 23:03:31 +08:00
jxxghp
81c0e15a1c Update version.py 2024-02-28 22:19:12 +08:00
jxxghp
2bde4923f9 Update subscribe.py 2024-02-28 21:57:35 +08:00
jxxghp
22fb6305cf Update subscribe.py 2024-02-28 21:50:32 +08:00
jxxghp
4bb5772e10 fix #1545
fix #1537
2024-02-28 15:03:55 +08:00
jxxghp
549658e871 fix #1553
fix #1496
2024-02-28 14:18:23 +08:00
jxxghp
80f47594f4 v1.6.8
- `🌈岛` supports multiple domains and HR detection; short-drama search supported for `麒麟`
- Fixed seeder counts above 1000 being recognized as 0
- All plugins in the official repo can view and start background tasks under `设定-服务` (Settings - Services; requires the latest plugin versions, third-party plugins need developer adaptation)
- New `共享识别词` (shared recognition words) plugin (thanks @honuee); a recognition-word sharing service is now available and everyone is welcome to maintain it

1. Plugin development guide: https://github.com/jxxghp/MoviePilot-Plugins/blob/main/README.md
2. Shared recognition words:
- https://movie-pilot.org/etherpad/p/MoviePilot_TV_Words
- https://movie-pilot.org/etherpad/p/MoviePilot_Anime_Words
2024-02-26 19:33:00 +08:00
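The "seeders above 1000 recognized as 0" bug mentioned in the v1.6.8 note is the typical shape of a parser tripping over thousands separators ("1,234") or abbreviations ("1.2k") in site HTML. An illustrative sketch of a tolerant seeder-count parser (the likely cause and this implementation are assumptions, not the project's actual fix):

```python
import re


def parse_seeders(text: str) -> int:
    """Parse a seeder count from tracker page text.

    Handles plain digits, thousands separators ("1,234") and a "k"
    suffix ("1.2k"); returns 0 for anything unparseable, which is the
    safe default for a missing or placeholder cell.
    """
    text = text.strip().lower().replace(",", "")
    match = re.fullmatch(r"(\d+(?:\.\d+)?)(k?)", text)
    if not match:
        return 0
    value = float(match.group(1))
    # round before int() so float artifacts like 1199.999... do not truncate
    return int(round(value * 1000)) if match.group(2) else int(value)
```

A plain `int(text)` would raise on "1,234" and often end up reported as 0 by the surrounding error handling.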
jxxghp
2614eeadb0 fix #1544 2024-02-26 10:50:33 +08:00
jxxghp
a0af827319 Merge pull request #1541 from WangEdward/main 2024-02-25 23:48:08 +08:00
jxxghp
0233853794 fix bug 2024-02-25 17:16:46 +08:00
jxxghp
6b24ccdc35 fix bug 2024-02-25 16:43:21 +08:00
jxxghp
7d76ee2e65 fix warning 2024-02-25 09:11:47 +08:00
jxxghp
1dd9228d01 fix warning 2024-02-25 09:03:43 +08:00
Edward
a5b4221a00 fix: db migration for search_imdbid 2024-02-24 09:36:04 +00:00
jxxghp
37ba75b53c Merge pull request #1535 from cikezhu/main 2024-02-24 06:46:43 +08:00
WangEdward
b8553e2b86 feat: add search_imdbid in subscribe api 2024-02-24 00:14:44 +08:00
WangEdward
d28f3ed74b feat: add search_imdbid for subscribe 2024-02-23 23:55:32 +08:00
叮叮当
185c78b05c add provider entries to scheduled jobs 2024-02-23 22:25:41 +08:00
叮叮当
f23cab861a fix: fetch status of other pending tasks 2024-02-23 21:39:58 +08:00
叮叮当
bbddec763a plugins now use the app's built-in scheduler module and show on the frontend page 2024-02-23 18:13:27 +08:00
jxxghp
06c3985aa4 v1.6.7
- Faster rendering of the plugins page
- Plugin logs are stored and viewed separately
- Site indexing adds `萝莉`; short-drama search supported for `AGSVPT`; fixed `象站` indexing
2024-02-23 10:59:20 +08:00
jxxghp
9503a603e6 fix 2024-02-23 08:23:56 +08:00
jxxghp
6e9ab24d95 Merge pull request #1531 from cikezhu/main 2024-02-23 07:59:40 +08:00
叮叮当
7524379af6 add a condition to reduce loop iterations 2024-02-23 01:15:11 +08:00
叮叮当
eebf3dec68 fix: restore filename-based logs / logs from built-in modules called by plugins now appear in the plugin log 2024-02-22 23:54:25 +08:00
jxxghp
a89dd636a4 fix #1528 2024-02-22 17:59:44 +08:00
jxxghp
7fb025bff4 Merge pull request #1529 from honue/main 2024-02-22 17:34:55 +08:00
honue
c44c0f6321 Treat `#` lines in custom recognition words as comments 2024-02-22 17:22:40 +08:00
jxxghp
585bcb924f Merge pull request #1526 from WangEdward/main 2024-02-22 15:49:50 +08:00
jxxghp
0ce3c3d90f Merge pull request #1522 from honue/main 2024-02-22 15:48:32 +08:00
Edward
9cb69f4879 fix: discuz is_logged_in 2024-02-22 15:34:48 +08:00
honue
c5b13f2fee fix #1502 2024-02-22 14:06:56 +08:00
jxxghp
235af9e558 fix bug 2024-02-22 13:31:49 +08:00
jxxghp
cb274d1587 feat: separate log file per plugin 2024-02-22 13:23:23 +08:00
jxxghp
63643e6d26 feat: plugin API supports type filtering 2024-02-21 16:11:15 +08:00
jxxghp
0726600936 feat: plugin API supports type filtering 2024-02-21 16:05:24 +08:00
jxxghp
6151bd64dd Merge remote-tracking branch 'origin/main' 2024-02-21 16:00:00 +08:00
jxxghp
32dc0f69f9 feat: plugin API supports type filtering 2024-02-21 15:59:53 +08:00
jxxghp
5b563cf173 Merge pull request #1515 from WangEdward/main 2024-02-21 15:18:38 +08:00
Edward
3dbb534883 fix: 2fa Nonetype 2024-02-21 05:42:35 +00:00
jxxghp
7304fad460 Merge remote-tracking branch 'origin/main' 2024-02-21 13:36:32 +08:00
jxxghp
9f829c2129 fix #1514 2024-02-21 13:31:19 +08:00
jxxghp
32e71beca8 Merge pull request #1507 from donniex1986/patch-1 2024-02-20 19:09:29 +08:00
donniex1986
3c1c04f356 Update README.md 2024-02-20 18:21:25 +08:00
jxxghp
c473594663 v1.6.6
- The IYUU API domain changed to `api.bolahg.cn`
- Interactive messages support upgrade ("wash-version") subscriptions and forced re-download
- Deleting source files from history also removes the download task from the downloader
- File management supports the English-title placeholder `en_title`
- Updated several official plugins; the directory-monitor plugin fixed duplicate notifications when organizing Blu-ray disc folders

[Interactive message examples]
- 订阅 XXX: add a subscription
- 洗版 XXX: add an upgrade subscription
- 搜索/下载 XXX: search and download again without checking local files
2024-02-20 16:40:28 +08:00
jxxghp
a8ce9648e2 fix bug 2024-02-20 16:18:29 +08:00
jxxghp
760285b085 Merge pull request #1503 from DDS-Derek/main
feat: startup and update script optimization
2024-02-20 15:09:18 +08:00
jxxghp
ccdad3e8dc fix #1502 2024-02-20 15:07:55 +08:00
jxxghp
f33e9bee21 Merge pull request #1502 from honue/main
When a plugin is not enabled, mark its event registration as unavailable
2024-02-20 14:58:00 +08:00
jxxghp
4183dca80f fix bug 2024-02-20 14:41:25 +08:00
DDSRem
6f6fd6a42e fix: bug 2024-02-20 14:38:36 +08:00
DDSRem
13bb31fd93 fix: bug 2024-02-20 14:35:19 +08:00
DDSRem
5bac94cbc5 fix: bug 2024-02-20 14:34:29 +08:00
DDSRem
daa8d80ec9 feat: startup and update script optimization
1. Fixed issues flagged by Shellcheck to make the scripts more robust.
2. For updates: update the main program and frontend first, then plugins and resource packs. This fixes the main program failing to update when a plugin or resource-pack download fails, even though those do not actually affect the main-program update.

Co-Authored-By: DDSDerek <108336573+DDSDerek@users.noreply.github.com>
Co-Authored-By: Summer⛱ <57806936+honue@users.noreply.github.com>
2024-02-20 14:26:59 +08:00
honue
b095f01b09 When a plugin is not enabled, mark its event registration as unavailable 2024-02-20 13:27:02 +08:00
jxxghp
f43efab831 Merge pull request #1501 from honue/main 2024-02-20 11:44:29 +08:00
honue
946b7905b3 fix: metadata not updated correctly when the total episode count decreases 2024-02-20 11:34:40 +08:00
jxxghp
544625a9a3 fix bug 2024-02-19 18:03:35 +08:00
jxxghp
d7c6c27679 feat: interactive messages support upgrade subscriptions and full re-download
fix #1202
2024-02-19 17:47:20 +08:00
jxxghp
70adbfe6b5 fix #1334 2024-02-19 16:20:03 +08:00
jxxghp
d8f9ab93e5 feat: delete the download task when source files are deleted fix #1391 2024-02-19 16:07:46 +08:00
jxxghp
e06d07937e fix 2024-02-19 15:55:57 +08:00
jxxghp
f94d248383 Merge pull request #1492 from WangEdward/main
feat: english title from tmdb
2024-02-18 16:31:13 +08:00
Edward
c139aeebf5 feat: title search include en_title 2024-02-18 07:41:49 +00:00
Edward
89a8625817 feat: english title from tmdb 2024-02-18 07:40:59 +00:00
jxxghp
59acda5dec fix #1380
fix #1373
fix #1404
2024-02-18 11:26:05 +08:00
jxxghp
57d9e4a370 fix #1347 2024-02-18 11:20:04 +08:00
jxxghp
8b6a2a3d99 fix #1382 2024-02-18 10:53:13 +08:00
jxxghp
3e10642bdd v1.6.5
- Authenticated sites now include `麒麟`
- Fixed a failure to start after saving settings
2024-02-18 09:50:07 +08:00
jxxghp
c8e63b6ae0 Merge pull request #1489 from WangEdward/main 2024-02-17 19:47:11 +08:00
Edward
03c92ad41c fix: search_area wrongly deleted
resolve #1051
2024-02-17 11:33:59 +00:00
jxxghp
690b454bb1 fix api 2024-02-17 13:24:41 +08:00
jxxghp
2e6c1bef63 Merge pull request #1482 from WangEdward/main 2024-02-17 00:45:54 +08:00
jxxghp
0fd428f809 Merge pull request #1485 from cikezhu/main 2024-02-17 00:45:20 +08:00
叮叮当
6083a8a859 fix: update site icons 2024-02-16 20:35:03 +08:00
Edward
bb7d262ea3 feat: m-team 2fa 2024-02-15 16:39:14 +00:00
Edward
ca9a37d12a fix: 2fa variable-name conflict 2024-02-15 16:19:20 +00:00
jxxghp
595ca631f4 Merge pull request #1480 from WangEdward/main 2024-02-15 22:02:19 +08:00
jxxghp
cbffddc57f Update wechat.py 2024-02-15 21:51:57 +08:00
jxxghp
a5f5d41104 Update transmission.py 2024-02-15 21:51:23 +08:00
jxxghp
56f07b3dd6 Update telegram.py 2024-02-15 21:50:57 +08:00
jxxghp
fba10fe6a0 Update synologychat.py 2024-02-15 21:50:16 +08:00
jxxghp
5639e0b7d0 Update qbittorrent.py 2024-02-15 21:49:34 +08:00
jxxghp
a6ad58ca33 Update plex.py 2024-02-15 21:49:03 +08:00
jxxghp
00447f2475 Update emby.py 2024-02-15 21:48:11 +08:00
jxxghp
9d14fc47fe Update jellyfin.py 2024-02-15 21:47:52 +08:00
jxxghp
70c459f810 Update emby.py 2024-02-15 21:47:01 +08:00
Edward
a0af2f4b68 fix: tmdb 同名返回已订阅 2024-02-15 13:45:29 +00:00
jxxghp
603eefb22f v1.6.4 2024-02-15 21:30:49 +08:00
jxxghp
34625ee384 feat: restructure the settings content 2024-02-15 20:40:01 +08:00
jxxghp
ca78fb7c22 fix api 2024-02-15 19:54:16 +08:00
jxxghp
3c710dd266 fix api 2024-02-15 19:46:30 +08:00
jxxghp
514e7add4b fix api 2024-02-15 18:57:24 +08:00
jxxghp
bdbf1e9084 fix api 2024-02-15 16:56:48 +08:00
jxxghp
6149cef1d3 fix api 2024-02-15 15:03:37 +08:00
jxxghp
b8fac86c6e feat: tolerate wrong variable types 2024-02-15 13:28:52 +08:00
jxxghp
9f450dd8be fix settings api 2024-02-15 08:39:55 +08:00
jxxghp
24c2d3f8ca fix twofa 2024-02-14 21:11:35 +08:00
jxxghp
4248b8fa4e fix: duplicate CookieCloud sync for multi-domain sites 2024-02-14 21:10:08 +08:00
jxxghp
deaa2e5644 Merge pull request #1478 from WangEdward/main 2024-02-14 18:53:10 +08:00
Edward
dc43aabe2a fix 2fa helper 2024-02-14 08:51:20 +00:00
Edward
02981d38c0 chore: rename 2fa parameter 2024-02-14 08:47:55 +00:00
Edward
85fd9b3c09 feat: add 2fa support to update_cookie 2024-02-14 08:47:02 +00:00
Edward
39ad54f3d9 feat: add 2fa helper 2024-02-14 05:30:41 +00:00
jxxghp
aa9a2c46aa merge cookiecloud chain 2024-02-13 10:36:05 +08:00
jxxghp
c43a1411c9 fix: cache site icons when maintaining sites manually 2024-02-13 10:18:27 +08:00
jxxghp
928aaf0c19 Merge pull request #1474 from WangEdward/main 2024-02-12 15:19:03 +08:00
Edward
ea8a4a3ec4 fix: support Radarr's X-Api-Key request header 2024-02-12 04:43:21 +00:00
jxxghp
c4dc468479 fix: add plugin repository cache 2024-02-11 22:02:03 +08:00
jxxghp
87ddfbca90 Merge remote-tracking branch 'origin/main' 2024-02-11 21:35:19 +08:00
jxxghp
164ce8f7c4 fix #984 2024-02-11 21:35:11 +08:00
jxxghp
c2fd6e3342 Merge pull request #1471
fix: mv fails when the backend program directory is wrong or other directories are mapped
2024-02-11 21:07:01 +08:00
jxxghp
16b79754c3 v1.6.3
- File management supports manually scraping media files
- Integrated apexcharts; plugins can draw charts
- The site statistics plugin adds a today's-traffic pie chart
- File renaming handles special characters
- Fixed startup failure when the resource pack fails to download
2024-02-11 08:40:15 +08:00
叮叮当
9cfb1f789f fix: mv fails when the backend program directory is wrong or other directories are mapped 2024-02-11 02:18:35 +08:00
jxxghp
e3faa388cf fix: failing to reach GitHub could prevent startup 2024-02-10 20:38:34 +08:00
jxxghp
b75ec92368 fix #1422 2024-02-10 20:35:07 +08:00
jxxghp
f91763ef7c add scrape api 2024-02-10 19:30:41 +08:00
jxxghp
edf8b03d3b Merge pull request #1464 from cikezhu/main
Let custom sites set their own search-result count / request timeout
2024-02-10 11:30:25 +08:00
jxxghp
ea48eb5c56 fix update 2024-02-10 11:07:42 +08:00
jxxghp
282f723d34 fix plugin api 2024-02-10 10:58:43 +08:00
叮叮当
dde3b76573 Let custom sites set their own search-result count / request timeout 2024-02-09 22:45:58 +08:00
jxxghp
f571711386 v1.6.2
- More flexible password rules
- Live logs can be opened in a new window
- New plugins: real-time hardlink, secondary category strategy, download-task categories & tags, hardlink cleanup, and more
- Fixed the ChineseSubFinder plugin failing to download movie subtitles
- The frontend integrates ace-builds and supports path-based reverse proxies
2024-02-09 11:23:24 +08:00
jxxghp
e8e8d36a13 fix logger 2024-02-09 09:43:35 +08:00
jxxghp
782a9a4759 fix logger 2024-02-09 09:42:49 +08:00
jxxghp
d0184bd34c fix logger 2024-02-09 09:35:05 +08:00
jxxghp
e4c0643c39 fix bug 2024-02-08 20:50:41 +08:00
jxxghp
305c08c7dd fix category 2024-02-08 14:42:38 +08:00
jxxghp
9521a3ef09 Merge remote-tracking branch 'origin/main' 2024-02-08 08:35:25 +08:00
jxxghp
b4c6a206af fix password 2024-02-08 08:35:18 +08:00
jxxghp
fa7eeec345 Merge pull request #1460 from cikezhu/main 2024-02-08 07:15:34 +08:00
叮叮当
7350216fc4 Open full logs in a new window 2024-02-08 00:09:20 +08:00
jxxghp
36122dda31 Merge pull request #1454 from WangEdward/main 2024-02-07 21:11:58 +08:00
Edward
5851673b43 fix: move files on successful re-organization 2024-02-06 21:07:57 +08:00
Edward
0d81105a0b fix: issue when recording a successful re-organization in history 2024-02-06 18:05:45 +08:00
jxxghp
b934b0975b Merge pull request #1437 from falling/main 2024-02-01 13:55:37 +08:00
falling
035b4b0608 Update the status of downloading tasks 2024-02-01 12:03:09 +08:00
jxxghp
b98a033cd2 v1.6.1
- Changed the IYUU auth and cross-seeding server addresses
2024-01-30 17:24:49 +08:00
jxxghp
c69853ce4b Merge pull request #1428 from EkkoG/debug_step 2024-01-30 16:14:19 +08:00
EkkoG
e00a440336 Fix "No module named 'app'" when running locally following the README steps 2024-01-30 15:31:18 +08:00
jxxghp
c0eb6b0600 Merge pull request #1423 from EkkoG/fixed_size_limit 2024-01-29 16:38:09 +08:00
EkkoG
4d1c8c3764 Fixed #1416 2024-01-29 16:24:23 +08:00
jxxghp
62628e526c Update README.md 2024-01-24 11:45:33 +08:00
jxxghp
ad7761a785 rollback #1399 2024-01-24 10:56:39 +08:00
jxxghp
e545b8d900 Merge pull request #1399 from falling/main 2024-01-23 07:12:12 +08:00
falling
f2f1ecfdf1 Update the qbittorrent download-state check value
https://github.com/qbittorrent/qBittorrent/wiki/WebUI-API-(qBittorrent-4.1)#get-torrent-list
pausedDL	Torrent is paused and has NOT finished downloading
2024-01-21 19:51:38 +08:00
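The wiki excerpt quoted above matters because `pausedDL` means paused AND not yet finished, so it must still count as an unfinished download even though nothing is transferring. A sketch of how a client might classify qBittorrent states (the state set is a plausible subset of the documented WebUI API states, not MoviePilot's actual code):

```python
# A subset of torrent states documented in the qBittorrent WebUI API wiki.
DOWNLOADING_STATES = {
    "downloading", "stalledDL", "metaDL", "queuedDL",
    "forcedDL", "checkingDL",
}


def is_unfinished_download(state: str) -> bool:
    """Return True for torrents that have NOT finished downloading.

    "pausedDL" is included explicitly: per the wiki, it means the
    torrent is paused and has not finished downloading, so treating it
    as "done" (the bug this commit addressed) would be wrong.
    """
    return state in DOWNLOADING_STATES or state == "pausedDL"
```

Seeding-side states such as `uploading` or `pausedUP`, by contrast, indicate the download already completed.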
jxxghp
fdec997ed0 Update app.env 2024-01-19 23:00:07 +08:00
jxxghp
9b653ceec9 Update README.md 2024-01-19 22:58:21 +08:00
jxxghp
fbaaed1c61 Update message.py 2024-01-19 22:55:45 +08:00
jxxghp
639abf67c2 v1.6.0
- On a fresh install, the superadmin's initial password is randomly generated and can only be seen in the backend log on first startup (log in with it, then change it in settings).
- Changed passwords must contain uppercase and lowercase letters and digits.
- Fixed third-party plugin dependency libraries failing to install automatically.
2024-01-19 11:20:30 +08:00
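The v1.6.0 password policy (uppercase + lowercase + digit) and the random initial password can be sketched together; this is an illustration of the stated rules, not MoviePilot's actual implementation:

```python
import re
import secrets
import string


def password_ok(password: str) -> bool:
    """Check the policy from the release note: at least one uppercase
    letter, one lowercase letter and one digit."""
    return bool(
        re.search(r"[A-Z]", password)
        and re.search(r"[a-z]", password)
        and re.search(r"\d", password)
    )


def random_initial_password(length: int = 16) -> str:
    """Generate a random initial password satisfying the same policy
    (retry until a candidate passes, which almost always succeeds on
    the first draw at this length)."""
    alphabet = string.ascii_letters + string.digits
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        if password_ok(candidate):
            return candidate
```

`secrets` (rather than `random`) is the appropriate module here since the value is a credential.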
jxxghp
1f56ceaea9 Update user.py 2024-01-19 10:54:13 +08:00
jxxghp
16a4f61fec Merge pull request #1386 from hussion/dev 2024-01-19 10:52:10 +08:00
jxxghp
ea0aba96fd Merge pull request #1383 from thsrite/main 2024-01-19 10:51:31 +08:00
嫣识
4393dad77c fix: the bearer auth header was set incorrectly, so the github_token parameter failed to apply 2024-01-18 19:58:06 +08:00
jxxghp
d099c0e702 Update README.md 2024-01-18 12:05:12 +08:00
thsrite
a299d786fe feat: randomly generate the superadmin's initial password && password changes require a mix of uppercase, lowercase and digits 2024-01-18 11:08:16 +08:00
jxxghp
3500f5b9a6 Merge pull request #1378 from thsrite/main 2024-01-18 08:22:16 +08:00
thsrite
64233c89d7 fix: emby/jellyfin homepage Continue Watching / Recently Added support shared paths 2024-01-17 16:59:22 +08:00
jxxghp
8c727da58a Merge pull request #1375 from thsrite/main 2024-01-17 12:06:03 +08:00
thsrite
152a87d109 fix: third-party plugin dependency install failures 2024-01-17 10:14:49 +08:00
jxxghp
6a2cde0664 - Bug fixes 2024-01-16 20:24:58 +08:00
jxxghp
c86cc2cb51 Merge pull request #1353 from thsrite/main 2024-01-12 19:04:14 +08:00
thsrite
6d7a63ff61 fix c044e594 2024-01-12 10:07:39 +08:00
thsrite
c044e59481 fix emby/jellyfin homepage Continue Watching / Recently Added 2024-01-12 09:56:55 +08:00
jxxghp
3c31bf24e5 v1.5.9
- Fixed duplicate subscription downloads in some cases
- The Continue Watching and Recently Added dashboard widgets can filter out blacklisted libraries
- Added a Fanart switch (FANART_ENABLE, on by default); turning it off reduces network requests but greatly reduces the number of scraped images
2024-01-11 20:18:58 +08:00
jxxghp
d89c80ac89 fix #1296 2024-01-11 16:35:51 +08:00
jxxghp
8236d6c8d7 Merge pull request #1328 from thsrite/main 2024-01-10 12:25:19 +08:00
thsrite
3646540a7f fix 2024-01-10 11:21:03 +08:00
thsrite
c1ecdfc61d Revert "fix #907"
This reverts commit 4dcefb141a.
2024-01-10 11:19:25 +08:00
thsrite
7587946d51 fix c674e320 2024-01-10 10:53:32 +08:00
jxxghp
3ad64baaeb Update README.md 2024-01-10 10:38:43 +08:00
jxxghp
24c43b53a2 Merge pull request #1338 from thsrite/fanart_switch 2024-01-10 10:29:59 +08:00
thsrite
53a6a1c691 feat: Fanart switch configurable via environment variable, on by default 2024-01-10 10:13:51 +08:00
jxxghp
c3ba83c7ca fix: duplicate subscription downloads 2024-01-09 13:16:39 +08:00
thsrite
4dcefb141a fix #907 2024-01-08 13:05:48 +08:00
thsrite
c674e32046 fix: exclude blacklisted libraries from homepage Continue Watching / Recently Added 2024-01-08 13:05:09 +08:00
270 changed files with 27379 additions and 8594 deletions


@@ -1,11 +1,14 @@
-name: MoviePilot Builder
+name: MoviePilot Builder v2
 on:
   workflow_dispatch:
   push:
     branches:
-      - main
+      - v2
     paths:
-      - version.py
+      - '.github/workflows/build.yml'
+      - 'Dockerfile'
+      - 'version.py'
+      - 'requirements.in'
 jobs:
   Docker-build:
@@ -25,7 +28,7 @@ jobs:
         id: meta
         uses: docker/metadata-action@v5
         with:
-          images: ${{ secrets.DOCKER_USERNAME }}/moviepilot
+          images: ${{ secrets.DOCKER_USERNAME }}/moviepilot-v2
           tags: |
             type=raw,value=${{ env.app_version }}
             type=raw,value=latest
@@ -51,158 +54,25 @@ jobs:
             linux/amd64
             linux/arm64/v8
           push: true
+          build-args: |
+            MOVIEPILOT_VERSION=${{ env.app_version }}
           tags: ${{ steps.meta.outputs.tags }}
           labels: ${{ steps.meta.outputs.labels }}
           cache-from: type=gha, scope=${{ github.workflow }}-docker
           cache-to: type=gha, scope=${{ github.workflow }}-docker
-  Windows-build:
-    runs-on: windows-latest
-    name: Build Windows Binary
-    steps:
-      - name: Checkout
-        uses: actions/checkout@v4
-      - name: Init Python 3.11.4
-        uses: actions/setup-python@v4
+      - name: Delete Release
+        uses: dev-drprasad/delete-tag-and-release@v1.1
         with:
-          python-version: '3.11.4'
-          cache: 'pip'
+          tag_name: ${{ env.app_version }}
+          delete_release: true
+          github_token: ${{ secrets.GITHUB_TOKEN }}
-      - name: Install Dependent Packages
-        run: |
-          python -m pip install --upgrade pip
-          pip install wheel pyinstaller
-          pip install -r requirements.txt
-        shell: pwsh
-      - name: Prepare Frontend
-        run: |
-          Invoke-WebRequest -Uri "http://nginx.org/download/nginx-1.25.2.zip" -OutFile "nginx.zip"
-          Expand-Archive -Path "nginx.zip" -DestinationPath "nginx-1.25.2"
-          Move-Item -Path "nginx-1.25.2/nginx-1.25.2" -Destination "nginx"
-          Remove-Item -Path "nginx.zip"
-          Remove-Item -Path "nginx-1.25.2" -Recurse -Force
-          $FRONTEND_VERSION = (Invoke-WebRequest -Uri "https://api.github.com/repos/jxxghp/MoviePilot-Frontend/releases/latest" | ConvertFrom-Json).tag_name
-          Invoke-WebRequest -Uri "https://github.com/jxxghp/MoviePilot-Frontend/releases/download/$FRONTEND_VERSION/dist.zip" -OutFile "dist.zip"
-          Expand-Archive -Path "dist.zip" -DestinationPath "dist"
-          Move-Item -Path "dist/dist/*" -Destination "nginx/html" -Force
-          Remove-Item -Path "dist.zip"
-          Remove-Item -Path "dist" -Recurse -Force
-          Move-Item -Path "nginx/html/nginx.conf" -Destination "nginx/conf/nginx.conf" -Force
-          New-Item -Path "nginx/temp" -ItemType Directory -Force
-          New-Item -Path "nginx/temp/__keep__.txt" -ItemType File -Force
-          New-Item -Path "nginx/logs" -ItemType Directory -Force
-          New-Item -Path "nginx/logs/__keep__.txt" -ItemType File -Force
-          Invoke-WebRequest -Uri "https://github.com/jxxghp/MoviePilot-Plugins/archive/refs/heads/main.zip" -OutFile "MoviePilot-Plugins-main.zip"
-          Expand-Archive -Path "MoviePilot-Plugins-main.zip" -DestinationPath "MoviePilot-Plugins-main"
-          Move-Item -Path "MoviePilot-Plugins-main/MoviePilot-Plugins-main/plugins/*" -Destination "app/plugins/" -Force
-          Remove-Item -Path "MoviePilot-Plugins-main.zip"
-          Remove-Item -Path "MoviePilot-Plugins-main" -Recurse -Force
-          Invoke-WebRequest -Uri "https://github.com/jxxghp/MoviePilot-Resources/archive/refs/heads/main.zip" -OutFile "MoviePilot-Resources-main.zip"
Expand-Archive -Path "MoviePilot-Resources-main.zip" -DestinationPath "MoviePilot-Resources-main"
Move-Item -Path "MoviePilot-Resources-main/MoviePilot-Resources-main/resources/*" -Destination "app/helper/" -Force
Remove-Item -Path "MoviePilot-Resources-main.zip"
Remove-Item -Path "MoviePilot-Resources-main" -Recurse -Force
shell: pwsh
- name: Pyinstaller
run: |
pyinstaller frozen.spec
shell: pwsh
- name: Upload Windows File
uses: actions/upload-artifact@v3
with:
name: windows
path: dist/MoviePilot.exe
Linux-build-amd64:
runs-on: ubuntu-latest
name: Build Linux Amd64
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Init Python 3.11.4
uses: actions/setup-python@v4
with:
python-version: '3.11.4'
cache: 'pip'
- name: Install Dependent Packages
run: |
python -m pip install --upgrade pip
pip install wheel pyinstaller
pip install -r requirements.txt
- name: Prepare Frontend
run: |
wget https://github.com/jxxghp/MoviePilot-Plugins/archive/refs/heads/main.zip
unzip main.zip
mv MoviePilot-Plugins-main/plugins/* app/plugins/
rm main.zip
rm -rf MoviePilot-Plugins-main
wget https://github.com/jxxghp/MoviePilot-Resources/archive/refs/heads/main.zip
unzip main.zip
mv MoviePilot-Resources-main/resources/* app/helper/
rm main.zip
rm -rf MoviePilot-Resources-main
- name: Pyinstaller
run: |
pyinstaller frozen.spec
mv dist/MoviePilot dist/MoviePilot_Amd64
- name: Upload Linux File
uses: actions/upload-artifact@v3
with:
name: linux-amd64
path: dist/MoviePilot_Amd64
Create-release:
permissions: write-all
runs-on: ubuntu-latest
needs: [ Windows-build, Docker-build, Linux-build-amd64]
steps:
- uses: actions/checkout@v2
- name: Release Version
id: release_version
run: |
app_version=$(cat version.py |sed -ne "s/APP_VERSION\s=\s'v\(.*\)'/\1/gp")
echo "app_version=$app_version" >> $GITHUB_ENV
- name: Download Artifact
uses: actions/download-artifact@v3
- name: get release_informations
shell: bash
run: |
mkdir releases
mv ./windows/MoviePilot.exe ./releases/MoviePilot_Win_v${{ env.app_version }}.exe
mv ./linux-amd64/MoviePilot_Amd64 ./releases/MoviePilot_Amd64_v${{ env.app_version }}
- name: Create Release
id: create_release
uses: actions/create-release@latest
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Generate Release
uses: softprops/action-gh-release@v2
with:
tag_name: v${{ env.app_version }}
release_name: v${{ env.app_version }}
body: ${{ github.event.commits[0].message }}
name: v${{ env.app_version }}
draft: false
prerelease: false
- name: Upload Release Asset
uses: dwenegar/upload-release-assets@v1
make_latest: false
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
release_id: ${{ steps.create_release.outputs.id }}
assets_path: |
./releases/

.gitignore (9 changes)

@@ -4,13 +4,20 @@ build/
dist/
nginx/
test.py
safety_report.txt
app/helper/sites.py
app/helper/*.so
app/helper/*.pyd
app/helper/*.bin
app/plugins/**
!app/plugins/__init__.py
config/user.db
config/cookies/**
config/user.db*
config/sites/**
config/logs/
config/temp/
config/cache/
*.pyc
*.log
.vscode
venv


@@ -1,21 +1,22 @@
FROM python:3.11.4-slim-bullseye
ARG MOVIEPILOT_VERSION
FROM python:3.11.4-slim-bookworm
ENV LANG="C.UTF-8" \
TZ="Asia/Shanghai" \
HOME="/moviepilot" \
CONFIG_DIR="/config" \
TERM="xterm" \
DISPLAY=:987 \
PUID=0 \
PGID=0 \
UMASK=000 \
PORT=3001 \
NGINX_PORT=3000 \
PROXY_HOST="" \
MOVIEPILOT_AUTO_UPDATE=release \
MOVIEPILOT_AUTO_UPDATE=false \
AUTH_SITE="iyuu" \
IYUU_SIGN=""
WORKDIR "/app"
RUN apt-get update -y \
&& apt-get upgrade -y \
&& apt-get -y install \
musl-dev \
nginx \
@@ -29,10 +30,10 @@ RUN apt-get update -y \
busybox \
dumb-init \
jq \
haproxy \
fuse3 \
rsync \
ffmpeg \
nano \
&& \
if [ "$(uname -m)" = "x86_64" ]; \
then ln -s /usr/lib/x86_64-linux-musl/libc.so /lib/libc.musl-x86_64.so.1; \
@@ -40,6 +41,7 @@ RUN apt-get update -y \
then ln -s /usr/lib/aarch64-linux-musl/libc.so /lib/libc.musl-aarch64.so.1; \
fi \
&& curl https://rclone.org/install.sh | bash \
&& curl --insecure -fsSL https://raw.githubusercontent.com/DDS-Derek/Aria2-Pro-Core/master/aria2-install.sh | bash \
&& apt-get autoremove -y \
&& apt-get clean -y \
&& rm -rf \
@@ -47,11 +49,12 @@ RUN apt-get update -y \
/moviepilot/.cache \
/var/lib/apt/lists/* \
/var/tmp/*
COPY requirements.txt requirements.txt
COPY requirements.in requirements.in
RUN apt-get update -y \
&& apt-get install -y build-essential \
&& pip install --upgrade pip \
&& pip install Cython \
&& pip install Cython pip-tools \
&& pip-compile requirements.in \
&& pip install -r requirements.txt \
&& playwright install-deps chromium \
&& apt-get remove -y build-essential \
@@ -66,23 +69,26 @@ COPY . .
RUN cp -f /app/nginx.conf /etc/nginx/nginx.template.conf \
&& cp -f /app/update /usr/local/bin/mp_update \
&& cp -f /app/entrypoint /entrypoint \
&& cp -f /app/docker_http_proxy.conf /etc/nginx/docker_http_proxy.conf \
&& chmod +x /entrypoint /usr/local/bin/mp_update \
&& mkdir -p ${HOME} /var/lib/haproxy/server-state \
&& groupadd -r moviepilot -g 911 \
&& useradd -r moviepilot -g moviepilot -d ${HOME} -s /bin/bash -u 911 \
&& mkdir -p ${HOME} \
&& groupadd -r moviepilot -g 918 \
&& useradd -r moviepilot -g moviepilot -d ${HOME} -s /bin/bash -u 918 \
&& python_ver=$(python3 -V | awk '{print $2}') \
&& echo "/app/" > /usr/local/lib/python${python_ver%.*}/site-packages/app.pth \
&& echo 'fs.inotify.max_user_watches=5242880' >> /etc/sysctl.conf \
&& echo 'fs.inotify.max_user_instances=5242880' >> /etc/sysctl.conf \
&& locale-gen zh_CN.UTF-8 \
&& FRONTEND_VERSION=$(curl -sL "https://api.github.com/repos/jxxghp/MoviePilot-Frontend/releases/latest" | jq -r .tag_name) \
&& FRONTEND_VERSION=$(sed -n "s/^FRONTEND_VERSION\s*=\s*'\([^']*\)'/\1/p" /app/version.py) \
&& curl -sL "https://github.com/jxxghp/MoviePilot-Frontend/releases/download/${FRONTEND_VERSION}/dist.zip" | busybox unzip -d / - \
&& mv /dist /public \
&& curl -sL "https://github.com/jxxghp/MoviePilot-Plugins/archive/refs/heads/main.zip" | busybox unzip -d /tmp - \
&& mv -f /tmp/MoviePilot-Plugins-main/plugins/* /app/app/plugins/ \
&& mv -f /tmp/MoviePilot-Plugins-main/plugins.v2/* /app/app/plugins/ \
&& cat /tmp/MoviePilot-Plugins-main/package.json | jq -r 'to_entries[] | select(.value.v2 == true) | .key' | awk '{print tolower($0)}' | \
while read -r i; do if [ ! -d "/app/app/plugins/$i" ]; then mv "/tmp/MoviePilot-Plugins-main/plugins/$i" "/app/app/plugins/"; else echo "跳过 $i"; fi; done \
&& curl -sL "https://github.com/jxxghp/MoviePilot-Resources/archive/refs/heads/main.zip" | busybox unzip -d /tmp - \
&& mv -f /tmp/MoviePilot-Resources-main/resources/* /app/app/helper/ \
&& rm -rf /tmp/*
EXPOSE 3000
VOLUME [ "/config" ]
ENTRYPOINT [ "/entrypoint" ]
ENTRYPOINT [ "/entrypoint" ]

README.md (315 changes)

@@ -1,5 +1,14 @@
# MoviePilot
![GitHub Repo stars](https://img.shields.io/github/stars/jxxghp/MoviePilot?style=for-the-badge)
![GitHub forks](https://img.shields.io/github/forks/jxxghp/MoviePilot?style=for-the-badge)
![GitHub contributors](https://img.shields.io/github/contributors/jxxghp/MoviePilot?style=for-the-badge)
![GitHub repo size](https://img.shields.io/github/repo-size/jxxghp/MoviePilot?style=for-the-badge)
![GitHub issues](https://img.shields.io/github/issues/jxxghp/MoviePilot?style=for-the-badge)
![Docker Pulls](https://img.shields.io/docker/pulls/jxxghp/moviepilot?style=for-the-badge)
![Platform](https://img.shields.io/badge/platform-Windows%20%7C%20Linux%20%7C%20Synology-blue?style=for-the-badge)
A redesign based on parts of the [NAStool](https://github.com/NAStool/nas-tools) codebase, focused on core automation needs, with fewer issues and easier extension and maintenance.
# For learning and research purposes only. Please do not promote this project on any Chinese mainland platform!
@@ -7,309 +16,17 @@
Release channel: https://t.me/moviepilot_channel
## Key Features
- Frontend/backend separation based on FastApi + Vue3. Frontend project: [MoviePilot-Frontend](https://github.com/jxxghp/MoviePilot-Frontend). API: http://localhost:3001/docs
- Focused on core needs, with simplified features and settings; some options can simply keep their default values.
- A redesigned user interface that is more attractive and easier to use.
## Installation
## Installation and Usage
### 1. **Install the CookieCloud plugin**
Visit the official Wiki: https://wiki.movie-pilot.org
Site information is obtained via CookieCloud sync, so you need to install the CookieCloud browser plugin to sync site cookies from your browser to the cloud and from there into MoviePilot. Download the plugin [here](https://github.com/easychen/CookieCloud/releases).
### 2. **Install the CookieCloud server (optional)**
MoviePilot has a built-in public CookieCloud server. To self-host, see the [CookieCloud](https://github.com/easychen/CookieCloud) project; the docker image is [here](https://hub.docker.com/r/easychen/cookiecloud).
**Disclaimer:** This project does not collect sensitive user data. Cookie syncing is provided by the CookieCloud project, not by this project. Technically, CookieCloud uses end-to-end encryption: as long as you do not leak your user KEY and end-to-end encryption password, no third party (including the server operator) can steal any of your information. If you are still concerned, do not use the public service or do not use this project; any information leak that occurs after use is unrelated to this project!
### 3. **Install companion software**
MoviePilot needs a downloader and a media server to work with.
- Downloaders: qBittorrent and Transmission are supported (QB >= 4.3.9, TR >= 3.0); QB is recommended.
- Media servers: Jellyfin, Emby and Plex are supported; Emby is recommended.
### 4. **Install MoviePilot**
- Docker image
Click [here](https://hub.docker.com/r/jxxghp/moviepilot) or run:
```shell
docker pull jxxghp/moviepilot:latest
```
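Once the image is pulled, a minimal `docker run` invocation might look like the following sketch. The host paths (`/path/to/config`, `/path/to/media`) are placeholders to adapt to your own setup; the ports and environment variables match the defaults documented below:

```shell
# Minimal example deployment (paths are placeholders)
docker run -d \
  --name moviepilot \
  -p 3000:3000 \
  -p 3001:3001 \
  -v /path/to/config:/config \
  -v /path/to/media:/media \
  -e NGINX_PORT=3000 \
  -e PORT=3001 \
  jxxghp/moviepilot:latest
```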
- Windows
Download [MoviePilot.exe](https://github.com/jxxghp/MoviePilot/releases) and double-click to run; the configuration directory is generated automatically. Then visit http://localhost:3000
- Synology package
Add the package source: https://spk7.imnks.com/
- Run locally
1) Copy all files under the plugins directory of [MoviePilot-Plugins](https://github.com/jxxghp/MoviePilot-Plugins) into the `app/plugins` directory
2) Copy all files under the resources directory of [MoviePilot-Resources](https://github.com/jxxghp/MoviePilot-Resources) into the `app/helper` directory
3) Run `pip install -r requirements.txt` to install dependencies
4) Run `python app/main.py` to start the service
5) Start the frontend service per the instructions of the frontend project [MoviePilot-Frontend](https://github.com/jxxghp/MoviePilot-Frontend)
## Configuration
Configuration file mount path: `/config`. Precedence: environment variables > env file > defaults. **Some parameters, such as path mappings, site authentication, permissions, ports and timezone, must be configured via environment variables.**
> Items marked with ❗ are required; the others are optional. Optional variables can be removed to use their defaults.
### 1. **Environment variables**
- **❗NGINX_PORT:** web service port, default `3000`; may be changed, but must not conflict with the API service port
- **❗PORT:** API service port, default `3001`; may be changed, but must not conflict with the web service port
- **PUID**: `uid` of the user running the program, default `0`
- **PGID**: `gid` of the user running the program, default `0`
- **UMASK**: permission mask, default `000`; consider setting it to `022`
- **PROXY_HOST:** network proxy used for accessing themoviedb and for restart updates; format `http(s)://ip:port` or `socks5://user:pass@host:port`
- **MOVIEPILOT_AUTO_UPDATE:** auto-update on restart, `true`/`release`/`dev`/`false`, default `release`; requires a working connection to Github. **Note: if you hit network problems, configure `PROXY_HOST`**
- **AUTO_UPDATE_RESOURCE**: automatically check for and update resource packages (site indexes, authentication, etc.) on startup, `true`/`false`, default `true`; requires a working Github connection, Docker only
- **❗AUTH_SITE:** authentication site (site-related features only work after successful authentication). Multiple sites may be configured, separated by `,`, e.g. `iyuu,hhclub`; authentication is attempted in order until one site succeeds.
After setting `AUTH_SITE`, configure the corresponding site's authentication parameters per the table below. Authentication resource `v1.1.1` supports `iyuu`/`hhclub`/`audiences`/`hddolby`/`zmpt`/`freefarm`/`hdfans`/`wintersakura`/`leaves`/`ptba`/`icc2022`/`ptlsp`/`xingtan`/`ptvicomo`/`agsvpt`
| Site | Parameters |
|:------------:|:-----------------------------------------------------:|
| iyuu | `IYUU_SIGN`: IYUU login token |
| hhclub | `HHCLUB_USERNAME`: username<br/>`HHCLUB_PASSKEY`: passkey |
| audiences | `AUDIENCES_UID`: user ID<br/>`AUDIENCES_PASSKEY`: passkey |
| hddolby | `HDDOLBY_ID`: user ID<br/>`HDDOLBY_PASSKEY`: passkey |
| zmpt | `ZMPT_UID`: user ID<br/>`ZMPT_PASSKEY`: passkey |
| freefarm | `FREEFARM_UID`: user ID<br/>`FREEFARM_PASSKEY`: passkey |
| hdfans | `HDFANS_UID`: user ID<br/>`HDFANS_PASSKEY`: passkey |
| wintersakura | `WINTERSAKURA_UID`: user ID<br/>`WINTERSAKURA_PASSKEY`: passkey |
| leaves | `LEAVES_UID`: user ID<br/>`LEAVES_PASSKEY`: passkey |
| ptba | `PTBA_UID`: user ID<br/>`PTBA_PASSKEY`: passkey |
| icc2022 | `ICC2022_UID`: user ID<br/>`ICC2022_PASSKEY`: passkey |
| ptlsp | `PTLSP_UID`: user ID<br/>`PTLSP_PASSKEY`: passkey |
| xingtan | `XINGTAN_UID`: user ID<br/>`XINGTAN_PASSKEY`: passkey |
| ptvicomo | `PTVICOMO_UID`: user ID<br/>`PTVICOMO_PASSKEY`: passkey |
| agsvpt | `AGSVPT_UID`: user ID<br/>`AGSVPT_PASSKEY`: passkey |
### 2. **The app.env configuration file**
Download the [app.env template](https://github.com/jxxghp/MoviePilot/raw/main/config/app.env), edit it, and place it in the configuration directory. Every app.env option can also be set via environment variables.
- **❗SUPERUSER:** superuser username, default `admin`; log in to the management UI with this user after installation. **Note: changing this value after the first start has no effect unless the database file is deleted!**
- **❗SUPERUSER_PASSWORD:** initial superuser password, default `password`; change it to a strong password. **Note: changing this value after the first start has no effect unless the database file is deleted!**
- **❗API_TOKEN:** API key, default `moviepilot`. Media server Webhook and WeChat callback URLs, among others, must append `?token=` with this value; change it to a complex string
- **BIG_MEMORY_MODE:** big-memory mode, default `false`; when enabled, more entries are cached, which uses more memory but responds faster
- **GITHUB_TOKEN:** Github token that raises the rate limit for auto-update, plugin installation and other Github API requests; format: ghp_****
---
- **TMDB_API_DOMAIN:** TMDB API domain, default `api.themoviedb.org`; may also be `api.tmdb.org`, `tmdb.movie-pilot.org`, or any other relay/proxy address that is reachable
- **TMDB_IMAGE_DOMAIN:** TMDB image domain, default `image.tmdb.org`; may be set to another relay proxy to speed up TMDB image loading, e.g. `static-mdb.v.geilijiasu.com`
- **WALLPAPER:** login page wallpaper, `tmdb`/`bing`, default `tmdb`
- **RECOGNIZE_SOURCE:** media recognition source, `themoviedb`/`douban`, default `themoviedb`; secondary categories are not supported with `douban`
---
- **SCRAP_METADATA:** scrape media files transferred into the library, `true`/`false`, default `true`
- **SCRAP_SOURCE:** data source for scraping metadata and images, `themoviedb`/`douban`, default `themoviedb`
- **SCRAP_FOLLOW_TMDB:** whether newly imported media follows TMDB info changes, `true`/`false`, default `true`; when `false`, scraping keeps using the info recorded in the import history even if TMDB changes
---
- **❗LIBRARY_PATH:** media library directories; separate multiple directories with `,`
- **LIBRARY_MOVIE_NAME:** movie library directory name (not a full path), default `电影`
- **LIBRARY_TV_NAME:** TV library directory name (not a full path), default `电视剧`
- **LIBRARY_ANIME_NAME:** anime library directory name (not a full path), default `电视剧/动漫`
- **LIBRARY_CATEGORY:** media library secondary category switch, `true`/`false`, default `false`; when enabled, secondary category directories are created under the library directory according to [category.yaml](https://github.com/jxxghp/MoviePilot/raw/main/config/category.yaml)
- **❗TRANSFER_TYPE:** transfer mode, one of `link`/`copy`/`move`/`softlink`/`rclone_copy`/`rclone_move`. **Note: with `link` and `softlink`, transferred files inherit the source file's permission mask and are not affected by `UMASK`; for rclone, map your rclone configuration directory into the container (or configure rclone inside the container), and the remote must be named `MP`**
- **OVERWRITE_MODE:** transfer overwrite mode, default `size`, one of `never`/`size`/`always`/`latest`, meaning: never overwrite same-named files / overwrite same-named files by size (larger replaces smaller) / always overwrite same-named files / keep only the latest version and delete older versions (including files with different names)
---
- **❗COOKIECLOUD_HOST:** CookieCloud server address, format `http(s)://ip:port`; the built-in server `https://movie-pilot.org/cookiecloud` is used when unset
- **❗COOKIECLOUD_KEY:** CookieCloud user KEY
- **❗COOKIECLOUD_PASSWORD:** CookieCloud end-to-end encryption password
- **❗COOKIECLOUD_INTERVAL:** CookieCloud sync interval (minutes)
- **❗USER_AGENT:** browser UA matching the cookies saved in CookieCloud; recommended, as it improves the success rate of connecting to sites. It can be edited in the management UI once sites are synced
---
- **SUBSCRIBE_MODE:** subscription mode, `rss`/`spider`, default `spider`. In `rss` mode, subscriptions are matched by periodically refreshing RSS feeds (the RSS address is fetched automatically and can also be maintained by hand); it puts less load on sites, allows a configurable refresh interval, and runs around the clock, but subscription and download notifications cannot filter for or show freeleech status. `rss` mode is recommended.
- **SUBSCRIBE_RSS_INTERVAL:** RSS subscription refresh interval (minutes), default `30`, and no less than 5 minutes.
- **SUBSCRIBE_SEARCH:** subscription search, `true`/`false`, default `false`. When enabled, a full search of all subscriptions runs every 24 hours to fill in missing episodes. Normal subscriptions usually suffice; this is only a fallback that increases site load, and enabling it is not recommended
- **AUTO_DOWNLOAD_USER:** user IDs (in the notification channel) for which remote interactive search auto-downloads the best match; separate multiple users with `,`. When unset, users must pick a resource or reply `0`
---
- **OCR_HOST:** OCR server address, format `http(s)://ip:port`, used to recognize site captchas for automatic login and cookie retrieval; the built-in server `https://movie-pilot.org` is used when unset. You can self-host using [this image](https://hub.docker.com/r/jxxghp/moviepilot-ocr).
---
- **❗MESSAGER:** notification channels, supporting `telegram`/`wechat`/`slack`/`synologychat`; separate multiple channels with `,`. The corresponding channel's variables below must also be configured (variables for unused channels can be removed); `telegram` is recommended
- `wechat` settings:
- **WECHAT_CORPID:** WeChat Work corp ID
- **WECHAT_APP_SECRET:** WeChat Work application Secret
- **WECHAT_APP_ID:** WeChat Work application ID
- **WECHAT_TOKEN:** Token for WeChat message callbacks
- **WECHAT_ENCODING_AESKEY:** EncodingAESKey for WeChat message callbacks
- **WECHAT_ADMINS:** WeChat admin list; separate multiple admins with commas (optional)
- **WECHAT_PROXY:** WeChat proxy server (no trailing /)
- `telegram` settings:
- **TELEGRAM_TOKEN:** Telegram Bot Token
- **TELEGRAM_CHAT_ID:** Telegram Chat ID
- **TELEGRAM_USERS:** Telegram user IDs, separated by `,`; only listed users can use the bot. When unset, everyone can use it
- **TELEGRAM_ADMINS:** Telegram admin IDs, separated by `,`; only admins can operate the bot menu. When unset, everyone can (optional)
- `slack` settings:
- **SLACK_OAUTH_TOKEN:** Slack Bot User OAuth Token
- **SLACK_APP_TOKEN:** Slack App-Level Token
- **SLACK_CHANNEL:** Slack channel name, default `全体` (optional)
- `synologychat` settings:
- **SYNOLOGYCHAT_WEBHOOK:** the `incoming URL` of a bot created in Synology Chat
- **SYNOLOGYCHAT_TOKEN:** the Synology Chat bot `token`
---
- **❗DOWNLOAD_PATH:** download directory. **Note: the mapped path must be identical for `moviepilot` and the downloader**, otherwise downloaded files cannot be transferred
- **DOWNLOAD_MOVIE_PATH:** movie download directory; downloads go to `DOWNLOAD_PATH` when unset
- **DOWNLOAD_TV_PATH:** TV download directory; downloads go to `DOWNLOAD_PATH` when unset
- **DOWNLOAD_ANIME_PATH:** anime download directory; downloads go to `DOWNLOAD_PATH` when unset
- **DOWNLOAD_CATEGORY:** download secondary category switch, `true`/`false`, default `false`; when enabled, secondary category directories are created under the download directory according to [category.yaml](https://github.com/jxxghp/MoviePilot/raw/main/config/category.yaml)
- **DOWNLOAD_SUBTITLE:** download site subtitles, `true`/`false`, default `true`
---
- **❗DOWNLOADER:** downloader, `qbittorrent`/`transmission` (QB >= 4.3.9, TR >= 3.0). The corresponding downloader's variables below must also be configured (variables for the unused downloader can be removed); `qbittorrent` is recommended
- `qbittorrent` settings:
- **QB_HOST:** qbittorrent address, format `ip:port`; prefix with `https://` for https
- **QB_USER:** qbittorrent username
- **QB_PASSWORD:** qbittorrent password
- **QB_CATEGORY:** qbittorrent automatic category management, `true`/`false`, default `false`. When enabled, the secondary download category is passed to the downloader, which then manages the download directories; `DOWNLOAD_CATEGORY` must be enabled as well
- **QB_SEQUENTIAL:** qbittorrent sequential download, `true`/`false`, default `true`
- **QB_FORCE_RESUME:** qbittorrent force resume, ignoring queue limits, `true`/`false`, default `false`
- `transmission` settings:
- **TR_HOST:** transmission address, format `ip:port`; prefix with `https://` for https
- **TR_USER:** transmission username
- **TR_PASSWORD:** transmission password
- **DOWNLOADER_MONITOR:** downloader monitoring, `true`/`false`, default `true`; when enabled, downloads are organized into the library automatically once they finish
- **TORRENT_TAG:** downloader torrent tag, default `MOVIEPILOT`. When set, only downloads added by MoviePilot are processed; leave empty to process every task in the downloader
---
- **❗MEDIASERVER:** media servers, `emby`/`jellyfin`/`plex`; separate multiple with `,`. The corresponding media server's variables below must also be configured (variables for unused servers can be removed); `emby` is recommended
- `emby` settings:
- **EMBY_HOST:** Emby server address, format `ip:port`; prefix with `https://` for https
- **EMBY_PLAY_HOST:** Emby external address, format `http(s)://DOMAIN:PORT`; `EMBY_HOST` is used when unset
- **EMBY_API_KEY:** Emby Api Key, generated under `Settings -> Advanced -> API Keys`
- `jellyfin` settings:
- **JELLYFIN_HOST:** Jellyfin server address, format `ip:port`; prefix with `https://` for https
- **JELLYFIN_PLAY_HOST:** Jellyfin external address, format `http(s)://DOMAIN:PORT`; `JELLYFIN_HOST` is used when unset
- **JELLYFIN_API_KEY:** Jellyfin Api Key, generated under `Settings -> Advanced -> API Keys`
- `plex` settings:
- **PLEX_HOST:** Plex server address, format `ip:port`; prefix with `https://` for https
- **PLEX_PLAY_HOST:** Plex external address, format `http(s)://DOMAIN:PORT`; `PLEX_HOST` is used when unset
- **PLEX_TOKEN:** the `X-Plex-Token` from the Plex web URL, obtainable from request URLs via the browser's F12 -> Network panel
- **MEDIASERVER_SYNC_INTERVAL:** media server sync interval (hours), default `6`; leave empty to disable syncing
- **MEDIASERVER_SYNC_BLACKLIST:** media server sync blacklist; separate multiple library names with `,`
---
- **MOVIE_RENAME_FORMAT:** movie rename format, based on jinja2 syntax
Options supported by `MOVIE_RENAME_FORMAT`:
> `title`: title from TMDB/Douban
> `original_title`: original-language title from TMDB/Douban
> `name`: name recognized from the filename (Chinese is preferred when both Chinese and English are present)
> `en_name`: English name recognized from the filename (may be empty)
> `original_name`: the original filename (including the file extension)
> `year`: year
> `resourceType`: resource type
> `effect`: effects
> `edition`: edition (resource type + effects)
> `videoFormat`: resolution
> `releaseGroup`: release/subtitle group
> `customization`: custom placeholder
> `videoCodec`: video codec
> `audioCodec`: audio codec
> `tmdbid`: TMDB ID (empty when the recognition source is not TMDB)
> `imdbid`: IMDB ID (may be empty)
> `doubanid`: Douban ID (empty when the recognition source is not Douban)
> `part`: part/section
> `fileExt`: file extension
Default `MOVIE_RENAME_FORMAT`:
```
{{title}}{% if year %} ({{year}}){% endif %}/{{title}}{% if year %} ({{year}}){% endif %}{% if part %}-{{part}}{% endif %}{% if videoFormat %} - {{videoFormat}}{% endif %}{{fileExt}}
```
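As a sketch of how such a template renders, the default format can be fed through the `jinja2` package (which the backend uses); the media fields below are invented for illustration:

```python
from jinja2 import Template

# Default MOVIE_RENAME_FORMAT from above, split across lines for readability
MOVIE_FORMAT = (
    "{{title}}{% if year %} ({{year}}){% endif %}/"
    "{{title}}{% if year %} ({{year}}){% endif %}"
    "{% if part %}-{{part}}{% endif %}"
    "{% if videoFormat %} - {{videoFormat}}{% endif %}"
    "{{fileExt}}"
)

# Hypothetical recognition result; empty strings are falsy, so their
# {% if %} blocks drop out of the rendered path
fields = {"title": "Inception", "year": "2010", "part": "",
          "videoFormat": "1080p", "fileExt": ".mkv"}
print(Template(MOVIE_FORMAT).render(**fields))
# Inception (2010)/Inception (2010) - 1080p.mkv
```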
- **TV_RENAME_FORMAT:** TV rename format, based on jinja2 syntax
Additional options supported by `TV_RENAME_FORMAT`:
> `season`: season number
> `episode`: episode number
> `season_episode`: season and episode, SxxExx
> `episode_title`: episode title
Default `TV_RENAME_FORMAT`:
```
{{title}}{% if year %} ({{year}}){% endif %}/Season {{season}}/{{title}} - {{season_episode}}{% if part %}-{{part}}{% endif %}{% if episode %} - 第 {{episode}} 集{% endif %}{{fileExt}}
```
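The TV template renders the same way (again with invented values; the `第 … 集` episode text comes literally from the template):

```python
from jinja2 import Template

# Default TV_RENAME_FORMAT from above, split across lines for readability
TV_FORMAT = (
    "{{title}}{% if year %} ({{year}}){% endif %}/"
    "Season {{season}}/"
    "{{title}} - {{season_episode}}"
    "{% if part %}-{{part}}{% endif %}"
    "{% if episode %} - 第 {{episode}} 集{% endif %}"
    "{{fileExt}}"
)

# Hypothetical recognition result for season 1, episode 2
fields = {"title": "Dark", "year": "2017", "season": "1",
          "season_episode": "S01E02", "part": "", "episode": "2",
          "fileExt": ".mkv"}
print(Template(TV_FORMAT).render(**fields))
# Dark (2017)/Season 1/Dark - S01E02 - 第 2 集.mkv
```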
### 3. **Priority rules**
- Only combinations of the built-in rules are supported. The built-in rules include `蓝光原盘` (Blu-ray disc), `4K`, `1080P`, `中文字幕` (Chinese subtitles), `特效字幕` (effect subtitles), `H265`, `H264`, `杜比` (Dolby), `HDR`, `REMUX`, `WEB-DL`, `免费` (freeleech), `国语配音` (Mandarin dubbing), and more
- A resource matching any rule tier is selected; the tier it matches becomes the resource's priority, and earlier tiers have higher priority
- Resources that match none of the filter rule tiers are not selected
### 4. **Plugins**
- **PLUGIN_MARKET:** plugin market repository addresses (only the `main` branch of Github repositories is supported); separate multiple addresses with `,`. Defaults to the official plugin repository: `https://github.com/jxxghp/MoviePilot-Plugins`. Browse the forks of the [MoviePilot-Plugins](https://github.com/jxxghp/MoviePilot-Plugins) project or the pinned channel post for more third-party plugin repositories.
## Usage
- Sync sites quickly via CookieCloud; sites you do not need can be disabled in the web UI, and sites that cannot be synced can be added manually.
- Manage everything through the web UI; add it to your phone's home screen for an app-like experience. The management UI uses port `3000` and the backend API uses port `3001`.
- Automate organizing, importing and scraping via downloader monitoring or the directory-monitoring plugin (pick one).
- Manage remotely via WeChat/Telegram/Slack/SynologyChat. WeChat and Telegram automatically get an operation menu (WeChat limits the number of menu entries, so some may be hidden). WeChat requires a callback URL configured on its official page; SynologyChat requires the bot's incoming URL, with the relative path `/api/v1/message/`.
- Configure a media server Webhook so MoviePilot can send playback notifications and more. The Webhook callback relative path is `/api/v1/webhook?token=moviepilot` (port `3001`), where `moviepilot` is your configured `API_TOKEN`.
- Add MoviePilot to Overseerr or Jellyseerr as a Radarr or Sonarr server (using the API service port) so you can browse and subscribe via Overseerr/Jellyseerr.
- Map the host's docker.sock file into the container as `/var/run/docker.sock` to support the built-in restart action. Example: `-v /var/run/docker.sock:/var/run/docker.sock:ro`
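A full Webhook URL of the kind described above can be assembled like this; the host and token values are placeholders for your own deployment:

```python
from urllib.parse import urlencode

host = "http://nas.local:3001"  # API service port, not the web port
api_token = "moviepilot"        # your configured API_TOKEN value
webhook_url = f"{host}/api/v1/webhook?{urlencode({'token': api_token})}"
print(webhook_url)
# http://nas.local:3001/api/v1/webhook?token=moviepilot
```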
### **Notes**
- On first start the container needs to download a browser kernel, which can take a while depending on your network; login is unavailable until it finishes. Map the `/moviepilot` directory to avoid downloading the kernel again after the container is recreated.
- When using a reverse proxy, add the following configuration, otherwise some features may be unreachable (replace `ip:port` with the actual values):
```nginx configuration
location / {
proxy_pass http://ip:port;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
```
- A newly created WeChat Work (企业微信) application needs a proxy with a fixed public IP to receive messages; add the following to the proxy:
```nginx configuration
location /cgi-bin/gettoken {
proxy_pass https://qyapi.weixin.qq.com;
}
location /cgi-bin/message/send {
proxy_pass https://qyapi.weixin.qq.com;
}
location /cgi-bin/menu/create {
proxy_pass https://qyapi.weixin.qq.com;
}
```
![image](https://github.com/jxxghp/MoviePilot/assets/51039935/f2654b09-26f3-464f-a0af-1de3f97832ee)
![image](https://github.com/jxxghp/MoviePilot/assets/51039935/fcb87529-56dd-43df-8337-6e34b8582819)
![image](https://github.com/jxxghp/MoviePilot/assets/51039935/bfa77c71-510a-46a6-9c1e-cf98cb101e3a)
![image](https://github.com/jxxghp/MoviePilot/assets/51039935/51cafd09-e38c-47f9-ae62-1e83ab8bf89b)
## Contributors
<a href="https://github.com/jxxghp/MoviePilot/graphs/contributors">
<img src="https://contrib.rocks/image?repo=jxxghp/MoviePilot" />
</a>


@@ -1,7 +1,8 @@
from fastapi import APIRouter
from app.api.endpoints import login, user, site, message, webhook, subscribe, \
media, douban, search, plugin, tmdb, history, system, download, dashboard, filebrowser, transfer, mediaserver
media, douban, search, plugin, tmdb, history, system, download, dashboard, \
transfer, mediaserver, bangumi, storage
api_router = APIRouter()
api_router.include_router(login.router, prefix="/login", tags=["login"])
@@ -19,6 +20,7 @@ api_router.include_router(system.router, prefix="/system", tags=["system"])
api_router.include_router(plugin.router, prefix="/plugin", tags=["plugin"])
api_router.include_router(download.router, prefix="/download", tags=["download"])
api_router.include_router(dashboard.router, prefix="/dashboard", tags=["dashboard"])
api_router.include_router(filebrowser.router, prefix="/filebrowser", tags=["filebrowser"])
api_router.include_router(storage.router, prefix="/storage", tags=["storage"])
api_router.include_router(transfer.router, prefix="/transfer", tags=["transfer"])
api_router.include_router(mediaserver.router, prefix="/mediaserver", tags=["mediaserver"])
api_router.include_router(bangumi.router, prefix="/bangumi", tags=["bangumi"])


@@ -0,0 +1,86 @@
from typing import List, Any
from fastapi import APIRouter, Depends
from app import schemas
from app.chain.bangumi import BangumiChain
from app.core.context import MediaInfo
from app.core.security import verify_token
router = APIRouter()
@router.get("/calendar", summary="Bangumi每日放送", response_model=List[schemas.MediaInfo])
def calendar(page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览Bangumi每日放送
"""
medias = BangumiChain().calendar()
if medias:
return [media.to_dict() for media in medias[(page - 1) * count: page * count]]
return []
@router.get("/credits/{bangumiid}", summary="查询Bangumi演职员表", response_model=List[schemas.MediaPerson])
def bangumi_credits(bangumiid: int,
page: int = 1,
count: int = 20,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询Bangumi演职员表
"""
persons = BangumiChain().bangumi_credits(bangumiid)
if persons:
return persons[(page - 1) * count: page * count]
return []
@router.get("/recommend/{bangumiid}", summary="查询Bangumi推荐", response_model=List[schemas.MediaInfo])
def bangumi_recommend(bangumiid: int,
page: int = 1,
count: int = 20,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询Bangumi推荐
"""
medias = BangumiChain().bangumi_recommend(bangumiid)
if medias:
return [media.to_dict() for media in medias[(page - 1) * count: page * count]]
return []
@router.get("/person/{person_id}", summary="人物详情", response_model=schemas.MediaPerson)
def bangumi_person(person_id: int,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据人物ID查询人物详情
"""
return BangumiChain().person_detail(person_id=person_id)
@router.get("/person/credits/{person_id}", summary="人物参演作品", response_model=List[schemas.MediaInfo])
def bangumi_person_credits(person_id: int,
page: int = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据人物ID查询人物参演作品
"""
medias = BangumiChain().person_credits(person_id=person_id)
if medias:
return [media.to_dict() for media in medias[(page - 1) * 20: page * 20]]
return []
@router.get("/{bangumiid}", summary="查询Bangumi详情", response_model=schemas.MediaInfo)
def bangumi_info(bangumiid: int,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询Bangumi详情
"""
info = BangumiChain().bangumi_info(bangumiid)
if info:
return MediaInfo(bangumi_info=info).to_dict()
else:
return schemas.MediaInfo()
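The endpoints above all page their results with the same slice expression; pulled out on its own, the pattern is (a minimal sketch):

```python
from typing import Any, List

def paginate(items: List[Any], page: int = 1, count: int = 20) -> List[Any]:
    # Same slice the Bangumi endpoints use: pages are 1-based,
    # and an out-of-range page simply yields an empty list
    return items[(page - 1) * count: page * count]

print(paginate(list(range(50)), page=2, count=20))  # items 20..39
```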


@@ -1,3 +1,4 @@
from pathlib import Path
from typing import Any, List, Optional
from fastapi import APIRouter, Depends
@@ -5,10 +6,11 @@ from sqlalchemy.orm import Session
from app import schemas
from app.chain.dashboard import DashboardChain
from app.core.config import settings
from app.core.security import verify_token, verify_uri_token
from app.chain.storage import StorageChain
from app.core.security import verify_token, verify_apitoken
from app.db import get_db
from app.db.models.transferhistory import TransferHistory
from app.helper.directory import DirectoryHelper
from app.scheduler import Scheduler
from app.utils.system import SystemUtils
@@ -16,11 +18,11 @@ router = APIRouter()
@router.get("/statistic", summary="媒体数量统计", response_model=schemas.Statistic)
def statistic(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
def statistic(name: str = None, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询媒体数量统计信息
"""
media_statistics: Optional[List[schemas.Statistic]] = DashboardChain().media_statistic()
media_statistics: Optional[List[schemas.Statistic]] = DashboardChain().media_statistic(name)
if media_statistics:
# 汇总各媒体库统计信息
ret_statistic = schemas.Statistic()
@@ -35,29 +37,38 @@ def statistic(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@router.get("/statistic2", summary="媒体数量统计API_TOKEN", response_model=schemas.Statistic)
def statistic2(_: str = Depends(verify_uri_token)) -> Any:
def statistic2(_: str = Depends(verify_apitoken)) -> Any:
"""
查询媒体数量统计信息 API_TOKEN认证?token=xxx
"""
return statistic()
@router.get("/storage", summary="存储空间", response_model=schemas.Storage)
@router.get("/storage", summary="本地存储空间", response_model=schemas.Storage)
def storage(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询存储空间信息
查询本地存储空间信息
"""
total_storage, free_storage = SystemUtils.space_usage(settings.LIBRARY_PATHS)
total, available = 0, 0
dirs = DirectoryHelper().get_dirs()
if not dirs:
return schemas.Storage(total_storage=total, used_storage=total - available)
storages = set([d.library_storage for d in dirs if d.library_storage])
for _storage in storages:
_usage = StorageChain().storage_usage(_storage)
if _usage:
total += _usage.total
available += _usage.available
return schemas.Storage(
total_storage=total_storage,
used_storage=total_storage - free_storage
total_storage=total,
used_storage=total - available
)
@router.get("/storage2", summary="存储空间API_TOKEN", response_model=schemas.Storage)
def storage2(_: str = Depends(verify_uri_token)) -> Any:
@router.get("/storage2", summary="本地存储空间API_TOKEN", response_model=schemas.Storage)
def storage2(_: str = Depends(verify_apitoken)) -> Any:
"""
查询存储空间信息 API_TOKEN认证?token=xxx
查询本地存储空间信息 API_TOKEN认证?token=xxx
"""
return storage()
@@ -71,26 +82,28 @@ def processes(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@router.get("/downloader", summary="下载器信息", response_model=schemas.DownloaderInfo)
def downloader(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
def downloader(name: str = None, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询下载器信息
"""
transfer_info = DashboardChain().downloader_info()
free_space = SystemUtils.free_space(settings.SAVE_PATH)
if transfer_info:
return schemas.DownloaderInfo(
download_speed=transfer_info.download_speed,
upload_speed=transfer_info.upload_speed,
download_size=transfer_info.download_size,
upload_size=transfer_info.upload_size,
free_space=free_space
)
else:
return schemas.DownloaderInfo()
# 下载目录空间
download_dirs = DirectoryHelper().get_local_download_dirs()
_, free_space = SystemUtils.space_usage([Path(d.download_path) for d in download_dirs])
# 下载器信息
downloader_info = schemas.DownloaderInfo()
transfer_infos = DashboardChain().downloader_info(name)
if transfer_infos:
for transfer_info in transfer_infos:
downloader_info.download_speed += transfer_info.download_speed
downloader_info.upload_speed += transfer_info.upload_speed
downloader_info.download_size += transfer_info.download_size
downloader_info.upload_size += transfer_info.upload_size
downloader_info.free_space = free_space
return downloader_info
@router.get("/downloader2", summary="下载器信息API_TOKEN", response_model=schemas.DownloaderInfo)
def downloader2(_: str = Depends(verify_uri_token)) -> Any:
def downloader2(_: str = Depends(verify_apitoken)) -> Any:
"""
查询下载器信息 API_TOKEN认证?token=xxx
"""
@@ -106,7 +119,7 @@ def schedule(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@router.get("/schedule2", summary="后台服务API_TOKEN", response_model=List[schemas.ScheduleInfo])
def schedule2(_: str = Depends(verify_uri_token)) -> Any:
def schedule2(_: str = Depends(verify_apitoken)) -> Any:
"""
查询下载器信息 API_TOKEN认证?token=xxx
"""
@@ -132,7 +145,7 @@ def cpu(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@router.get("/cpu2", summary="获取当前CPU使用率API_TOKEN", response_model=int)
def cpu2(_: str = Depends(verify_uri_token)) -> Any:
def cpu2(_: str = Depends(verify_apitoken)) -> Any:
"""
获取当前CPU使用率 API_TOKEN认证?token=xxx
"""
@@ -148,7 +161,7 @@ def memory(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@router.get("/memory2", summary="获取当前内存使用量和使用率API_TOKEN", response_model=List[int])
def memory2(_: str = Depends(verify_uri_token)) -> Any:
def memory2(_: str = Depends(verify_apitoken)) -> Any:
"""
获取当前内存使用率 API_TOKEN认证?token=xxx
"""


@@ -1,31 +1,36 @@
from typing import List, Any
from typing import Any, List
from fastapi import APIRouter, Depends, Response
from fastapi import APIRouter, Depends
from app import schemas
from app.chain.douban import DoubanChain
from app.core.config import settings
from app.core.context import MediaInfo
from app.core.security import verify_token
from app.schemas import MediaType
from app.utils.http import RequestUtils
router = APIRouter()
@router.get("/img/{imgurl:path}", summary="豆瓣图片代理")
def douban_img(imgurl: str) -> Any:
@router.get("/person/{person_id}", summary="人物详情", response_model=schemas.MediaPerson)
def douban_person(person_id: int,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
豆瓣图片代理
根据人物ID查询人物详情
"""
if not imgurl:
return None
response = RequestUtils(headers={
'Referer': "https://movie.douban.com/"
}, ua=settings.USER_AGENT).get_res(url=imgurl)
if response:
return Response(content=response.content, media_type="image/jpeg")
return None
return DoubanChain().person_detail(person_id=person_id)
@router.get("/person/credits/{person_id}", summary="人物参演作品", response_model=List[schemas.MediaInfo])
def douban_person_credits(person_id: int,
page: int = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据人物ID查询人物参演作品
"""
medias = DoubanChain().person_credits(person_id=person_id, page=page)
if medias:
return [media.to_dict() for media in medias]
return []
@router.get("/showing", summary="豆瓣正在热映", response_model=List[schemas.MediaInfo])
@@ -36,10 +41,9 @@ def movie_showing(page: int = 1,
浏览豆瓣正在热映
"""
movies = DoubanChain().movie_showing(page=page, count=count)
if not movies:
return []
medias = [MediaInfo(douban_info=movie) for movie in movies]
return [media.to_dict() for media in medias]
if movies:
return [media.to_dict() for media in movies]
return []
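The rewritten hunks above all share one pattern: the chain may return None, and the route must serialize whatever it gets into a list of dicts. A tiny helper capturing that guard (illustrative only; the project inlines it per route):

```python
def to_dicts(medias):
    """Serialize a possibly-None list of objects exposing to_dict()."""
    return [m.to_dict() for m in medias] if medias else []


class FakeMedia:
    # stand-in for MediaInfo, which now arrives already constructed
    def __init__(self, title):
        self.title = title

    def to_dict(self):
        return {"title": self.title}
```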
@router.get("/movies", summary="豆瓣电影", response_model=List[schemas.MediaInfo])
@@ -53,13 +57,9 @@ def douban_movies(sort: str = "R",
"""
movies = DoubanChain().douban_discover(mtype=MediaType.MOVIE,
sort=sort, tags=tags, page=page, count=count)
if not movies:
return []
medias = [MediaInfo(douban_info=movie) for movie in movies]
return [media.to_dict() for media in medias
if media.poster_path
and "movie_large.jpg" not in media.poster_path
and "tv_normal.png" not in media.poster_path]
if movies:
return [media.to_dict() for media in movies]
return []
@router.get("/tvs", summary="豆瓣剧集", response_model=List[schemas.MediaInfo])
@@ -73,14 +73,9 @@ def douban_tvs(sort: str = "R",
"""
tvs = DoubanChain().douban_discover(mtype=MediaType.TV,
sort=sort, tags=tags, page=page, count=count)
if not tvs:
return []
medias = [MediaInfo(douban_info=tv) for tv in tvs]
return [media.to_dict() for media in medias
if media.poster_path
and "movie_large.jpg" not in media.poster_path
and "tv_normal.jpg" not in media.poster_path
and "tv_large.jpg" not in media.poster_path]
if tvs:
return [media.to_dict() for media in tvs]
return []
@router.get("/movie_top250", summary="豆瓣电影TOP250", response_model=List[schemas.MediaInfo])
@@ -91,7 +86,9 @@ def movie_top250(page: int = 1,
浏览豆瓣电影TOP250
"""
movies = DoubanChain().movie_top250(page=page, count=count)
return [MediaInfo(douban_info=movie).to_dict() for movie in movies]
if movies:
return [media.to_dict() for media in movies]
return []
@router.get("/tv_weekly_chinese", summary="豆瓣国产剧集周榜", response_model=List[schemas.MediaInfo])
@@ -102,7 +99,9 @@ def tv_weekly_chinese(page: int = 1,
中国每周剧集口碑榜
"""
tvs = DoubanChain().tv_weekly_chinese(page=page, count=count)
return [MediaInfo(douban_info=tv).to_dict() for tv in tvs]
if tvs:
return [media.to_dict() for media in tvs]
return []
@router.get("/tv_weekly_global", summary="豆瓣全球剧集周榜", response_model=List[schemas.MediaInfo])
@@ -113,7 +112,9 @@ def tv_weekly_global(page: int = 1,
全球每周剧集口碑榜
"""
tvs = DoubanChain().tv_weekly_global(page=page, count=count)
return [MediaInfo(douban_info=tv).to_dict() for tv in tvs]
if tvs:
return [media.to_dict() for media in tvs]
return []
@router.get("/tv_animation", summary="豆瓣动画剧集", response_model=List[schemas.MediaInfo])
@@ -124,7 +125,9 @@ def tv_animation(page: int = 1,
热门动画剧集
"""
tvs = DoubanChain().tv_animation(page=page, count=count)
return [MediaInfo(douban_info=tv).to_dict() for tv in tvs]
if tvs:
return [media.to_dict() for media in tvs]
return []
@router.get("/movie_hot", summary="豆瓣热门电影", response_model=List[schemas.MediaInfo])
@@ -135,7 +138,9 @@ def movie_hot(page: int = 1,
热门电影
"""
movies = DoubanChain().movie_hot(page=page, count=count)
return [MediaInfo(douban_info=movie).to_dict() for movie in movies]
if movies:
return [media.to_dict() for media in movies]
return []
@router.get("/tv_hot", summary="豆瓣热门电视剧", response_model=List[schemas.MediaInfo])
@@ -146,28 +151,24 @@ def tv_hot(page: int = 1,
热门电视剧
"""
tvs = DoubanChain().tv_hot(page=page, count=count)
return [MediaInfo(douban_info=tv).to_dict() for tv in tvs]
if tvs:
return [media.to_dict() for media in tvs]
return []
@router.get("/credits/{doubanid}/{type_name}", summary="豆瓣演员阵容", response_model=List[schemas.DoubanPerson])
@router.get("/credits/{doubanid}/{type_name}", summary="豆瓣演员阵容", response_model=List[schemas.MediaPerson])
def douban_credits(doubanid: str,
type_name: str,
page: int = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据TMDBID查询演员阵容,type_name: 电影/电视剧
根据豆瓣ID查询演员阵容,type_name: 电影/电视剧
"""
mediatype = MediaType(type_name)
if mediatype == MediaType.MOVIE:
doubaninfos = DoubanChain().movie_credits(doubanid=doubanid, page=page)
return DoubanChain().movie_credits(doubanid=doubanid)
elif mediatype == MediaType.TV:
doubaninfos = DoubanChain().tv_credits(doubanid=doubanid, page=page)
else:
return []
if not doubaninfos:
return []
else:
return [schemas.DoubanPerson(**doubaninfo) for doubaninfo in doubaninfos]
return DoubanChain().tv_credits(doubanid=doubanid)
return []
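douban_credits dispatches on MediaType(type_name) and, in the new version, drops the unused page argument. The dispatch in isolation, with a plain Enum standing in for app.schemas.MediaType:

```python
from enum import Enum


class MediaType(Enum):  # stand-in for app.schemas.MediaType
    MOVIE = "电影"
    TV = "电视剧"


def credits_for(type_name: str) -> str:
    """Pick the chain method name for a type_name path segment (sketch)."""
    mediatype = MediaType(type_name)  # raises ValueError on unknown names
    if mediatype == MediaType.MOVIE:
        return "movie_credits"
    elif mediatype == MediaType.TV:
        return "tv_credits"
    return "none"
```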
@router.get("/recommend/{doubanid}/{type_name}", summary="豆瓣推荐电影/电视剧", response_model=List[schemas.MediaInfo])
@@ -179,15 +180,14 @@ def douban_recommend(doubanid: str,
"""
mediatype = MediaType(type_name)
if mediatype == MediaType.MOVIE:
doubaninfos = DoubanChain().movie_recommend(doubanid=doubanid)
medias = DoubanChain().movie_recommend(doubanid=doubanid)
elif mediatype == MediaType.TV:
doubaninfos = DoubanChain().tv_recommend(doubanid=doubanid)
medias = DoubanChain().tv_recommend(doubanid=doubanid)
else:
return []
if not doubaninfos:
return []
else:
return [MediaInfo(douban_info=doubaninfo).to_dict() for doubaninfo in doubaninfos]
if medias:
return [media.to_dict() for media in medias]
return []
@router.get("/{doubanid}", summary="查询豆瓣详情", response_model=schemas.MediaInfo)

View File

@@ -1,35 +1,38 @@
from typing import Any, List
from fastapi import APIRouter, Depends
from fastapi import APIRouter, Depends, Body
from app import schemas
from app.chain.download import DownloadChain
from app.chain.media import MediaChain
from app.core.context import MediaInfo, Context, TorrentInfo
from app.core.metainfo import MetaInfo
from app.core.security import verify_token
from app.db.models.user import User
from app.db.userauth import get_current_active_user
from app.db.user_oper import get_current_active_user
router = APIRouter()
@router.get("/", summary="正在下载", response_model=List[schemas.DownloadingTorrent])
def read_downloading(
def list(
name: str = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询正在下载的任务
"""
return DownloadChain().downloading()
return DownloadChain().downloading(name)
@router.post("/", summary="添加下载", response_model=schemas.Response)
def add_downloading(
@router.post("/", summary="添加下载(含媒体信息)", response_model=schemas.Response)
def download(
media_in: schemas.MediaInfo,
torrent_in: schemas.TorrentInfo,
current_user: User = Depends(get_current_active_user),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
downloader: str = Body(None),
save_path: str = Body(None),
current_user: User = Depends(get_current_active_user)) -> Any:
"""
添加下载任务
添加下载任务(含媒体信息)
"""
# 元数据
metainfo = MetaInfo(title=torrent_in.title, subtitle=torrent_in.description)
@@ -45,14 +48,50 @@ def add_downloading(
media_info=mediainfo,
torrent_info=torrentinfo
)
did = DownloadChain().download_single(context=context, username=current_user.name)
return schemas.Response(success=True if did else False, data={
did = DownloadChain().download_single(context=context, username=current_user.name,
downloader=downloader, save_path=save_path)
if not did:
return schemas.Response(success=False, message="任务添加失败")
return schemas.Response(success=True, data={
"download_id": did
})
@router.post("/add", summary="添加下载(不含媒体信息)", response_model=schemas.Response)
def add(
torrent_in: schemas.TorrentInfo,
downloader: str = Body(None),
save_path: str = Body(None),
current_user: User = Depends(get_current_active_user)) -> Any:
"""
添加下载任务(不含媒体信息)
"""
# 元数据
metainfo = MetaInfo(title=torrent_in.title, subtitle=torrent_in.description)
# 媒体信息
mediainfo = MediaChain().recognize_media(meta=metainfo)
if not mediainfo:
return schemas.Response(success=False, message="无法识别媒体信息")
# 种子信息
torrentinfo = TorrentInfo()
torrentinfo.from_dict(torrent_in.dict())
# 上下文
context = Context(
meta_info=metainfo,
media_info=mediainfo,
torrent_info=torrentinfo
)
did = DownloadChain().download_single(context=context, username=current_user.name,
downloader=downloader, save_path=save_path)
if not did:
return schemas.Response(success=False, message="任务添加失败")
return schemas.Response(success=True, data={
"download_id": did
})
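With downloader and save_path declared as Body(None) alongside the torrent_in model, FastAPI expects a single JSON body with the model embedded under its parameter name and the scalars as sibling keys. A sketch of a client payload for POST /download/add (all field values here are made up):

```python
import json

payload = {
    "torrent_in": {
        "title": "Some.Show.S01E01.1080p.WEB-DL",  # parsed by MetaInfo
        "description": "第1集",
        "enclosure": "magnet:?xt=urn:btih:...",
    },
    "downloader": "qbittorrent",   # optional: target downloader name
    "save_path": "/downloads/tv",  # optional: override save location
}
body = json.dumps(payload, ensure_ascii=False)
```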
@router.get("/start/{hashString}", summary="开始任务", response_model=schemas.Response)
def start_downloading(
def start(
hashString: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
@@ -63,9 +102,8 @@ def start_downloading(
@router.get("/stop/{hashString}", summary="暂停任务", response_model=schemas.Response)
def stop_downloading(
hashString: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
def stop(hashString: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
暂停下载任务
"""
@@ -74,9 +112,8 @@ def stop_downloading(
@router.delete("/{hashString}", summary="删除下载任务", response_model=schemas.Response)
def remove_downloading(
hashString: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
def delete(hashString: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
删除下载任务
"""

View File

@@ -1,189 +0,0 @@
import shutil
from pathlib import Path
from typing import Any, List
from fastapi import APIRouter, Depends
from starlette.responses import FileResponse, Response
from app import schemas
from app.core.config import settings
from app.core.security import verify_token
from app.log import logger
from app.utils.system import SystemUtils
router = APIRouter()
IMAGE_TYPES = [".jpg", ".png", ".gif", ".bmp", ".jpeg", ".webp"]
@router.get("/list", summary="所有目录和文件", response_model=List[schemas.FileItem])
def list_path(path: str,
sort: str = 'time',
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询当前目录下所有目录和文件
:param path: 目录路径
:param sort: 排序方式,name:按名称排序,time:按修改时间排序
:param _: token
:return: 所有目录和文件
"""
# 返回结果
ret_items = []
if not path or path == "/":
if SystemUtils.is_windows():
partitions = SystemUtils.get_windows_drives() or ["C:/"]
for partition in partitions:
ret_items.append(schemas.FileItem(
type="dir",
path=partition + "/",
name=partition,
basename=partition
))
return ret_items
else:
path = "/"
else:
if not SystemUtils.is_windows() and not path.startswith("/"):
path = "/" + path
# 遍历目录
path_obj = Path(path)
if not path_obj.exists():
logger.error(f"目录不存在:{path}")
return []
# 如果是文件
if path_obj.is_file():
ret_items.append(schemas.FileItem(
type="file",
path=str(path_obj).replace("\\", "/"),
name=path_obj.name,
basename=path_obj.stem,
extension=path_obj.suffix[1:],
size=path_obj.stat().st_size,
modify_time=path_obj.stat().st_mtime,
))
return ret_items
# 遍历所有目录
for item in SystemUtils.list_sub_directory(path_obj):
ret_items.append(schemas.FileItem(
type="dir",
path=str(item).replace("\\", "/") + "/",
name=item.name,
basename=item.stem,
modify_time=item.stat().st_mtime,
))
# 遍历所有文件,不含子目录
for item in SystemUtils.list_sub_files(path_obj,
settings.RMT_MEDIAEXT
+ settings.RMT_SUBEXT
+ IMAGE_TYPES
+ [".nfo"]):
ret_items.append(schemas.FileItem(
type="file",
path=str(item).replace("\\", "/"),
name=item.name,
basename=item.stem,
extension=item.suffix[1:],
size=item.stat().st_size,
modify_time=item.stat().st_mtime,
))
# 排序
if sort == 'time':
ret_items.sort(key=lambda x: x.modify_time, reverse=True)
else:
ret_items.sort(key=lambda x: x.name, reverse=False)
return ret_items
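The sort branch above orders directory entries newest-first by mtime, or ascending by name. The same logic over plain dicts standing in for schemas.FileItem:

```python
def sort_items(entries, sort="time"):
    """Order listing entries the way list_path does (sketch)."""
    if sort == "time":
        # newest modification first
        return sorted(entries, key=lambda x: x["modify_time"], reverse=True)
    # otherwise alphabetical by name
    return sorted(entries, key=lambda x: x["name"])


items = [
    {"name": "b.mkv", "modify_time": 200},
    {"name": "a.mkv", "modify_time": 100},
]
```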
@router.get("/mkdir", summary="创建目录", response_model=schemas.Response)
def mkdir(path: str, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
创建目录
"""
if not path:
return schemas.Response(success=False)
path_obj = Path(path)
if path_obj.exists():
return schemas.Response(success=False)
path_obj.mkdir(parents=True, exist_ok=True)
return schemas.Response(success=True)
@router.get("/delete", summary="删除文件或目录", response_model=schemas.Response)
def delete(path: str, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
删除文件或目录
"""
if not path:
return schemas.Response(success=False)
path_obj = Path(path)
if not path_obj.exists():
return schemas.Response(success=True)
if path_obj.is_file():
path_obj.unlink()
else:
shutil.rmtree(path_obj, ignore_errors=True)
return schemas.Response(success=True)
@router.get("/download", summary="下载文件或目录")
def download(path: str, token: str) -> Any:
"""
下载文件或目录
"""
if not path:
return schemas.Response(success=False)
# 认证token
if not verify_token(token):
return None
path_obj = Path(path)
if not path_obj.exists():
return schemas.Response(success=False)
if path_obj.is_file():
# 作为文件流式下载
return FileResponse(path_obj)
else:
# 作为压缩包下载
shutil.make_archive(base_name=path_obj.stem, format="zip", root_dir=path_obj)
zip_path = Path(f"{path_obj.stem}.zip")
response = Response(content=zip_path.read_bytes(), media_type="application/zip")
# 删除压缩包
zip_path.unlink()
return response
@router.get("/rename", summary="重命名文件或目录", response_model=schemas.Response)
def rename(path: str, new_name: str, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
重命名文件或目录
"""
if not path or not new_name:
return schemas.Response(success=False)
path_obj = Path(path)
if not path_obj.exists():
return schemas.Response(success=False)
path_obj.rename(path_obj.parent / new_name)
return schemas.Response(success=True)
@router.get("/image", summary="读取图片")
def image(path: str, token: str) -> Any:
"""
读取图片
"""
if not path:
return None
# 认证token
if not verify_token(token):
return None
path_obj = Path(path)
if not path_obj.exists():
return None
if not path_obj.is_file():
return None
# 判断是否图片文件
if path_obj.suffix.lower() not in IMAGE_TYPES:
return None
return Response(content=path_obj.read_bytes(), media_type="image/jpeg")

View File

@@ -1,17 +1,18 @@
from pathlib import Path
from typing import List, Any
from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session
from app import schemas
from app.chain.transfer import TransferChain
from app.chain.storage import StorageChain
from app.core.event import eventmanager
from app.core.security import verify_token
from app.db import get_db
from app.db.models import User
from app.db.models.downloadhistory import DownloadHistory
from app.db.models.transferhistory import TransferHistory
from app.schemas.types import EventType
from app.db.user_oper import get_current_active_superuser
from app.schemas.types import EventType, MediaType
router = APIRouter()
@@ -75,30 +76,44 @@ def delete_transfer_history(history_in: schemas.TransferHistory,
deletesrc: bool = False,
deletedest: bool = False,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
"""
删除转移历史记录
"""
history = TransferHistory.get(db, history_in.id)
history: TransferHistory = TransferHistory.get(db, history_in.id)
if not history:
return schemas.Response(success=False, msg="记录不存在")
return schemas.Response(success=False, message="记录不存在")
# 删除媒体库文件
if deletedest and history.dest:
state, msg = TransferChain().delete_files(Path(history.dest))
if deletedest and history.dest_fileitem:
dest_fileitem = schemas.FileItem(**history.dest_fileitem)
state = StorageChain().delete_media_file(fileitem=dest_fileitem, mtype=MediaType(history.type))
if not state:
return schemas.Response(success=False, msg=msg)
return schemas.Response(success=False, message=f"{dest_fileitem.path} 删除失败")
# 删除源文件
if deletesrc and history.src:
state, msg = TransferChain().delete_files(Path(history.src))
if deletesrc and history.src_fileitem:
src_fileitem = schemas.FileItem(**history.src_fileitem)
state = StorageChain().delete_media_file(src_fileitem)
if not state:
return schemas.Response(success=False, msg=msg)
return schemas.Response(success=False, message=f"{src_fileitem.path} 删除失败")
# 发送事件
eventmanager.send_event(
EventType.DownloadFileDeleted,
{
"src": history.src
"src": history.src,
"hash": history.download_hash
}
)
# 删除记录
TransferHistory.delete(db, history_in.id)
return schemas.Response(success=True)
@router.get("/empty/transfer", summary="清空转移历史记录", response_model=schemas.Response)
def delete_transfer_history(db: Session = Depends(get_db),
_: User = Depends(get_current_active_superuser)) -> Any:
"""
清空转移历史记录
"""
TransferHistory.truncate(db)
return schemas.Response(success=True)

View File

@@ -1,70 +1,51 @@
from datetime import timedelta
from typing import Any
from typing import Any, List
from fastapi import APIRouter, Depends, HTTPException
from fastapi import APIRouter, Depends, Form, HTTPException
from fastapi.security import OAuth2PasswordRequestForm
from sqlalchemy.orm import Session
from app import schemas
from app.chain.tmdb import TmdbChain
from app.chain.user import UserChain
from app.chain.mediaserver import MediaServerChain
from app.core import security
from app.core.config import settings
from app.core.security import get_password_hash
from app.db import get_db
from app.db.models.user import User
from app.log import logger
from app.helper.sites import SitesHelper
from app.utils.web import WebUtils
router = APIRouter()
@router.post("/access-token", summary="获取token", response_model=schemas.Token)
async def login_access_token(
db: Session = Depends(get_db), form_data: OAuth2PasswordRequestForm = Depends()
def login_access_token(
form_data: OAuth2PasswordRequestForm = Depends(),
otp_password: str = Form(None)
) -> Any:
"""
获取认证Token
"""
# 检查数据库
user = User.authenticate(
db=db,
name=form_data.username,
password=form_data.password
)
if not user:
# 请求协助认证
logger.warn("登录用户本地不匹配,尝试辅助认证 ...")
token = UserChain().user_authenticate(form_data.username, form_data.password)
if not token:
logger.warn(f"用户 {form_data.username} 登录失败!")
raise HTTPException(status_code=401, detail="用户名或密码不正确")
else:
logger.info(f"用户 {form_data.username} 辅助认证成功,用户信息: {token}")
# 加入用户信息表
user = User.get_by_name(db=db, name=form_data.username)
if not user:
logger.info(f"用户不存在,创建普通用户: {form_data.username}")
user = User(name=form_data.username, is_active=True,
is_superuser=False, hashed_password=get_password_hash(token))
user.create(db)
else:
# 普通用户权限
user.is_superuser = False
elif not user.is_active:
raise HTTPException(status_code=403, detail="用户未启用")
logger.info(f"用户 {user.name} 登录成功!")
success, user_or_message = UserChain().user_authenticate(username=form_data.username,
password=form_data.password,
mfa_code=otp_password)
if not success:
raise HTTPException(status_code=401, detail=user_or_message)
level = SitesHelper().auth_level
return schemas.Token(
access_token=security.create_access_token(
userid=user.id,
username=user.name,
super_user=user.is_superuser,
expires_delta=timedelta(minutes=settings.ACCESS_TOKEN_EXPIRE_MINUTES)
userid=user_or_message.id,
username=user_or_message.name,
super_user=user_or_message.is_superuser,
expires_delta=timedelta(minutes=settings.ACCESS_TOKEN_EXPIRE_MINUTES),
level=level
),
token_type="bearer",
super_user=user.is_superuser,
user_name=user.name,
avatar=user.avatar
super_user=user_or_message.is_superuser,
user_id=user_or_message.id,
user_name=user_or_message.name,
avatar=user_or_message.avatar,
level=level
)
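create_access_token receives an expires_delta built from ACCESS_TOKEN_EXPIRE_MINUTES together with the new level claim. The expiry arithmetic in isolation (the minute count here is an assumed default; the real value comes from settings):

```python
from datetime import datetime, timedelta, timezone

ACCESS_TOKEN_EXPIRE_MINUTES = 60 * 24  # assumed default: one day

expires_delta = timedelta(minutes=ACCESS_TOKEN_EXPIRE_MINUTES)
issued_at = datetime.now(timezone.utc)
expire_at = issued_at + expires_delta  # typically stored as the JWT "exp" claim
```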
@@ -73,19 +54,12 @@ def wallpaper() -> Any:
"""
获取登录页面电影海报
"""
if settings.WALLPAPER == "tmdb":
return tmdb_wallpaper()
elif settings.WALLPAPER == "bing":
return bing_wallpaper()
return schemas.Response(success=False)
@router.get("/bing", summary="Bing每日壁纸", response_model=schemas.Response)
def bing_wallpaper() -> Any:
"""
获取Bing每日壁纸
"""
url = WebUtils.get_bing_wallpaper()
if settings.WALLPAPER == "bing":
url = WebUtils.get_bing_wallpaper()
elif settings.WALLPAPER == "mediaserver":
url = MediaServerChain().get_latest_wallpaper()
else:
url = TmdbChain().get_random_wallpager()
if url:
return schemas.Response(
success=True,
@@ -94,15 +68,14 @@ def bing_wallpaper() -> Any:
return schemas.Response(success=False)
@router.get("/tmdb", summary="TMDB电影海报", response_model=schemas.Response)
def tmdb_wallpaper() -> Any:
@router.get("/wallpapers", summary="登录页面电影海报列表", response_model=List[str])
def wallpapers() -> Any:
"""
获取TMDB电影海报
获取登录页面电影海报
"""
wallpager = TmdbChain().get_random_wallpager()
if wallpager:
return schemas.Response(
success=True,
message=wallpager
)
return schemas.Response(success=False)
if settings.WALLPAPER == "bing":
return WebUtils.get_bing_wallpapers()
elif settings.WALLPAPER == "mediaserver":
return MediaServerChain().get_latest_wallpapers()
else:
return TmdbChain().get_trending_wallpapers()
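wallpaper() and wallpapers() now branch on the same WALLPAPER setting, falling back to TMDB. The selection rule as a plain function (sketch of the if/elif/else chain, not the project's own helper):

```python
def wallpaper_source(setting: str) -> str:
    """Map the WALLPAPER setting to the backend used for login posters."""
    if setting == "bing":
        return "bing"
    if setting == "mediaserver":
        return "mediaserver"
    return "tmdb"  # default branch, including setting == "tmdb"
```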

View File

@@ -1,4 +1,5 @@
from typing import List, Any
from pathlib import Path
from typing import List, Any, Union
from fastapi import APIRouter, Depends
@@ -6,8 +7,8 @@ from app import schemas
from app.chain.media import MediaChain
from app.core.config import settings
from app.core.context import Context
from app.core.metainfo import MetaInfo
from app.core.security import verify_token, verify_uri_token
from app.core.metainfo import MetaInfo, MetaInfoPath
from app.core.security import verify_token, verify_apitoken
from app.schemas import MediaType
router = APIRouter()
@@ -31,7 +32,7 @@ def recognize(title: str,
@router.get("/recognize2", summary="识别种子媒体信息API_TOKEN", response_model=schemas.Context)
def recognize2(title: str,
subtitle: str = None,
_: str = Depends(verify_uri_token)) -> Any:
_: str = Depends(verify_apitoken)) -> Any:
"""
根据标题、副标题识别媒体信息 API_TOKEN认证?token=xxx
"""
@@ -54,7 +55,7 @@ def recognize_file(path: str,
@router.get("/recognize_file2", summary="识别文件媒体信息API_TOKEN", response_model=schemas.Context)
def recognize_file2(path: str,
_: str = Depends(verify_uri_token)) -> Any:
_: str = Depends(verify_apitoken)) -> Any:
"""
根据文件路径识别媒体信息 API_TOKEN认证?token=xxx
"""
@@ -62,18 +63,73 @@ def recognize_file2(path: str,
return recognize_file(path)
@router.get("/search", summary="搜索媒体信息", response_model=List[schemas.MediaInfo])
def search_by_title(title: str,
page: int = 1,
count: int = 8,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@router.get("/search", summary="搜索媒体/人物信息", response_model=List[dict])
def search(title: str,
type: str = "media",
page: int = 1,
count: int = 8,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
模糊搜索媒体信息列表
模糊搜索媒体/人物信息列表,media:媒体信息,person:人物信息
"""
_, medias = MediaChain().search(title=title)
if medias:
return [media.to_dict() for media in medias[(page - 1) * count: page * count]]
return []
def __get_source(obj: Union[dict, schemas.MediaPerson]):
"""
获取对象属性
"""
if isinstance(obj, dict):
return obj.get("source")
return obj.source
result = []
if type == "media":
_, medias = MediaChain().search(title=title)
if medias:
result = [media.to_dict() for media in medias]
else:
result = MediaChain().search_persons(name=title)
if result:
# 按设置的顺序对结果进行排序
setting_order = settings.SEARCH_SOURCE.split(',') or []
sort_order = {}
for index, source in enumerate(setting_order):
sort_order[source] = index
result = sorted(result, key=lambda x: sort_order.get(__get_source(x), 4))
return result[(page - 1) * count:page * count]
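The ordering step above ranks mixed search results by the comma-separated SEARCH_SOURCE setting, pushing unknown sources to rank 4. Reproduced standalone (the setting value is assumed for illustration):

```python
SEARCH_SOURCE = "themoviedb,douban,bangumi"  # assumed settings value

# source name -> rank, in the user's configured order
sort_order = {source: index
              for index, source in enumerate(SEARCH_SOURCE.split(","))}

results = [{"source": "bangumi"}, {"source": "themoviedb"}, {"source": "douban"}]
ranked = sorted(results, key=lambda x: sort_order.get(x["source"], 4))
```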
@router.post("/scrape/{storage}", summary="刮削媒体信息", response_model=schemas.Response)
def scrape(fileitem: schemas.FileItem,
storage: str = "local",
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
刮削媒体信息
"""
if not fileitem or not fileitem.path:
return schemas.Response(success=False, message="刮削路径无效")
chain = MediaChain()
# 识别媒体信息
scrape_path = Path(fileitem.path)
meta = MetaInfoPath(scrape_path)
mediainfo = chain.recognize_by_meta(meta)
if not mediainfo:
return schemas.Response(success=False, message="刮削失败,无法识别媒体信息")
if storage == "local":
if not scrape_path.exists():
return schemas.Response(success=False, message="刮削路径不存在")
else:
if not fileitem.fileid:
return schemas.Response(success=False, message="刮削文件ID无效")
# 手动刮削
chain.scrape_metadata(fileitem=fileitem, meta=meta, mediainfo=mediainfo)
return schemas.Response(success=True, message=f"{fileitem.path} 刮削完成")
@router.get("/category", summary="查询自动分类配置", response_model=dict)
def category(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询自动分类配置
"""
return MediaChain().media_category() or {}
@router.get("/{mediaid}", summary="查询媒体详情", response_model=schemas.MediaInfo)
@@ -83,28 +139,17 @@ def media_info(mediaid: str, type_name: str,
根据媒体ID查询themoviedb或豆瓣媒体信息,type_name: 电影/电视剧
"""
mtype = MediaType(type_name)
tmdbid, doubanid = None, None
tmdbid, doubanid, bangumiid = None, None, None
if mediaid.startswith("tmdb:"):
tmdbid = int(mediaid[5:])
elif mediaid.startswith("douban:"):
doubanid = mediaid[7:]
if not tmdbid and not doubanid:
elif mediaid.startswith("bangumi:"):
bangumiid = int(mediaid[8:])
if not tmdbid and not doubanid and not bangumiid:
return schemas.MediaInfo()
if settings.RECOGNIZE_SOURCE == "themoviedb":
if not tmdbid and doubanid:
tmdbinfo = MediaChain().get_tmdbinfo_by_doubanid(doubanid=doubanid, mtype=mtype)
if tmdbinfo:
tmdbid = tmdbinfo.get("id")
else:
return schemas.MediaInfo()
else:
if not doubanid and tmdbid:
doubaninfo = MediaChain().get_doubaninfo_by_tmdbid(tmdbid=tmdbid, mtype=mtype)
if doubaninfo:
doubanid = doubaninfo.get("id")
else:
return schemas.MediaInfo()
mediainfo = MediaChain().recognize_media(tmdbid=tmdbid, doubanid=doubanid, mtype=mtype)
# 识别
mediainfo = MediaChain().recognize_media(tmdbid=tmdbid, doubanid=doubanid, bangumiid=bangumiid, mtype=mtype)
if mediainfo:
MediaChain().obtain_images(mediainfo)
return mediainfo.to_dict()
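media_info now recognizes a third bangumi: prefix on the mediaid path segment. The prefix parsing on its own:

```python
def parse_mediaid(mediaid: str):
    """Split a 'tmdb:123' / 'douban:xxx' / 'bangumi:456' id into typed parts."""
    tmdbid = doubanid = bangumiid = None
    if mediaid.startswith("tmdb:"):
        tmdbid = int(mediaid[5:])        # len("tmdb:") == 5
    elif mediaid.startswith("douban:"):
        doubanid = mediaid[7:]           # douban ids stay strings
    elif mediaid.startswith("bangumi:"):
        bangumiid = int(mediaid[8:])
    return tmdbid, doubanid, bangumiid
```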

View File

@@ -1,50 +1,53 @@
from typing import Any, List
from typing import Any, List, Dict
from fastapi import APIRouter, Depends, HTTPException
from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session
from app import schemas
from app.chain.download import DownloadChain
from app.chain.media import MediaChain
from app.chain.mediaserver import MediaServerChain
from app.core.config import settings
from app.core.context import MediaInfo
from app.core.metainfo import MetaInfo
from app.core.security import verify_token
from app.db import get_db
from app.db.mediaserver_oper import MediaServerOper
from app.db.models import MediaServerItem
from app.helper.mediaserver import MediaServerHelper
from app.schemas import MediaType, NotExistMediaInfo
router = APIRouter()
@router.get("/play/{itemid}", summary="在线播放")
def play_item(itemid: str) -> schemas.Response:
@router.get("/play/{itemid:path}", summary="在线播放")
def play_item(itemid: str, _: schemas.TokenPayload = Depends(verify_token)) -> schemas.Response:
"""
获取媒体服务器播放页面地址
"""
if not itemid:
return schemas.Response(success=False, msg="参数错误")
if not settings.MEDIASERVER:
return schemas.Response(success=False, msg="未配置媒体服务器")
mediaserver = settings.MEDIASERVER.split(",")[0]
play_url = MediaServerChain().get_play_url(server=mediaserver, item_id=itemid)
# 重定向到play_url
if not play_url:
return schemas.Response(success=False, msg="未找到播放地址")
return schemas.Response(success=True, data={
"url": play_url
})
return schemas.Response(success=False, message="参数错误")
configs = MediaServerHelper().get_configs()
if not configs:
return schemas.Response(success=False, message="未配置媒体服务器")
media_chain = MediaServerChain()
for name in configs.keys():
item = media_chain.iteminfo(server=name, item_id=itemid)
if item:
play_url = media_chain.get_play_url(server=name, item_id=itemid)
if play_url:
return schemas.Response(success=True, data={
"url": play_url
})
return schemas.Response(success=False, message="未找到播放地址")
@router.get("/exists", summary="本地是否存在", response_model=schemas.Response)
def exists(title: str = None,
year: int = None,
mtype: str = None,
tmdbid: int = None,
season: int = None,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@router.get("/exists", summary="查询本地是否存在(数据库)", response_model=schemas.Response)
def exists_local(title: str = None,
year: int = None,
mtype: str = None,
tmdbid: int = None,
season: int = None,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
判断本地是否存在
"""
@@ -61,35 +64,35 @@ def exists(title: str = None,
ret_info = {
"id": exist.item_id
}
"""
else:
# 服务器是否存在
mediainfo = MediaInfo()
mediainfo.from_dict({
"title": meta.name,
"year": year or meta.year,
"type": mtype or meta.type,
"tmdb_id": tmdbid,
"season": season
})
exist: schemas.ExistMediaInfo = MediaServerChain().media_exists(
mediainfo=mediainfo
)
if exist:
ret_info = {
"id": exist.itemid
}
"""
return schemas.Response(success=True if exist else False, data={
"item": ret_info
})
@router.post("/notexists", summary="查询缺失媒体信息", response_model=List[schemas.NotExistMediaInfo])
@router.post("/exists_remote", summary="查询已存在的剧集信息(媒体服务器)", response_model=Dict[int, list])
def exists(media_in: schemas.MediaInfo,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据媒体信息查询媒体库已存在的剧集信息
"""
# 转化为媒体信息对象
mediainfo = MediaInfo()
mediainfo.from_dict(media_in.dict())
existsinfo: schemas.ExistMediaInfo = MediaServerChain().media_exists(mediainfo=mediainfo)
if not existsinfo:
return []
if media_in.season:
return {
media_in.season: existsinfo.seasons.get(media_in.season) or []
}
return existsinfo.seasons
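The exists_remote route returns the media server's full season map, narrowed to a single season when media_in.season is set. The narrowing step with literal example data:

```python
def narrow(seasons: dict, season=None) -> dict:
    """Keep one season's episode list, or the whole map (sketch)."""
    if season:
        # missing seasons collapse to an empty episode list
        return {season: seasons.get(season) or []}
    return seasons


seasons = {1: [1, 2, 3], 2: [1, 2]}  # season -> episode numbers (example data)
```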
@router.post("/notexists", summary="查询媒体库缺失信息(媒体服务器)", response_model=List[schemas.NotExistMediaInfo])
def not_exists(media_in: schemas.MediaInfo,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询缺失媒体信息
根据媒体信息查询缺失电影/剧集
"""
# 媒体信息
meta = MetaInfo(title=media_in.title)
@@ -101,18 +104,13 @@ def not_exists(media_in: schemas.MediaInfo,
meta.type = MediaType.TV
if media_in.year:
meta.year = media_in.year
if media_in.tmdb_id or media_in.douban_id:
mediainfo = MediaChain().recognize_media(meta=meta, mtype=mtype,
tmdbid=media_in.tmdb_id, doubanid=media_in.douban_id)
else:
mediainfo = MediaChain().recognize_by_meta(metainfo=meta)
# 查询缺失信息
if not mediainfo:
raise HTTPException(status_code=404, detail="媒体信息不存在")
mediakey = mediainfo.tmdb_id or mediainfo.douban_id
# 转化为媒体信息对象
mediainfo = MediaInfo()
mediainfo.from_dict(media_in.dict())
exist_flag, no_exists = DownloadChain().get_no_exists_info(meta=meta, mediainfo=mediainfo)
mediakey = mediainfo.tmdb_id or mediainfo.douban_id
if mediainfo.type == MediaType.MOVIE:
# 电影已存在时返回空列表,不存在时返回空对象列表
return [] if exist_flag else [NotExistMediaInfo()]
elif no_exists and no_exists.get(mediakey):
# 电视剧返回缺失的剧集
@@ -121,26 +119,27 @@ def not_exists(media_in: schemas.MediaInfo,
@router.get("/latest", summary="最新入库条目", response_model=List[schemas.MediaServerPlayItem])
def latest(count: int = 18,
def latest(server: str, count: int = 18,
userinfo: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
获取媒体服务器最新入库条目
"""
return MediaServerChain().latest(count=count, username=userinfo.username) or []
return MediaServerChain().latest(server=server, count=count, username=userinfo.username) or []
@router.get("/playing", summary="正在播放条目", response_model=List[schemas.MediaServerPlayItem])
def playing(count: int = 12,
def playing(server: str, count: int = 12,
userinfo: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
获取媒体服务器正在播放条目
"""
return MediaServerChain().playing(count=count, username=userinfo.username) or []
return MediaServerChain().playing(server=server, count=count, username=userinfo.username) or []
@router.get("/library", summary="媒体库列表", response_model=List[schemas.MediaServerLibrary])
def library(userinfo: schemas.TokenPayload = Depends(verify_token)) -> Any:
def library(server: str, hidden: bool = False,
userinfo: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
获取媒体服务器媒体库列表
"""
return MediaServerChain().librarys(username=userinfo.username) or []
return MediaServerChain().librarys(server=server, username=userinfo.username, hidden=hidden) or []

View File

@@ -1,18 +1,23 @@
import json
from typing import Union, Any, List
from fastapi import APIRouter, BackgroundTasks, Depends
from fastapi import Request
from fastapi import APIRouter, BackgroundTasks, Depends, Request
from pywebpush import WebPushException, webpush
from sqlalchemy.orm import Session
from starlette.responses import PlainTextResponse
from app import schemas
from app.chain.message import MessageChain
from app.core.config import settings
from app.core.security import verify_token
from app.db.systemconfig_oper import SystemConfigOper
from app.core.config import settings, global_vars
from app.core.security import verify_token, verify_apitoken
from app.db import get_db
from app.db.models import User
from app.db.models.message import Message
from app.db.user_oper import get_current_active_superuser
from app.helper.service import ServiceConfigHelper
from app.log import logger
from app.modules.wechat.WXBizMsgCrypt3 import WXBizMsgCrypt
from app.schemas import NotificationSwitch
from app.schemas.types import SystemConfigKey, NotificationType
from app.schemas.types import MessageChannel
router = APIRouter()
@@ -25,9 +30,10 @@ def start_message_chain(body: Any, form: Any, args: Any):
@router.post("/", summary="接收用户消息", response_model=schemas.Response)
async def user_message(background_tasks: BackgroundTasks, request: Request):
async def user_message(background_tasks: BackgroundTasks, request: Request,
_: schemas.TokenPayload = Depends(verify_apitoken)):
"""
用户消息响应
用户消息响应,配置请求中需要添加参数 token=API_TOKEN&source=消息配置名
"""
body = await request.body()
form = await request.form()
@@ -36,59 +42,119 @@ async def user_message(background_tasks: BackgroundTasks, request: Request):
return schemas.Response(success=True)
@router.get("/", summary="微信验证")
def wechat_verify(echostr: str, msg_signature: str,
timestamp: Union[str, int], nonce: str) -> Any:
@router.post("/web", summary="接收WEB消息", response_model=schemas.Response)
def web_message(text: str, current_user: User = Depends(get_current_active_superuser)):
"""
用户消息响应
WEB消息响应
"""
logger.info(f"收到微信验证请求: {echostr}")
MessageChain().handle_message(
channel=MessageChannel.Web,
source=current_user.name,
userid=current_user.name,
username=current_user.name,
text=text
)
return schemas.Response(success=True)
@router.get("/web", summary="获取WEB消息", response_model=List[dict])
def get_web_message(_: schemas.TokenPayload = Depends(verify_token),
db: Session = Depends(get_db),
page: int = 1,
count: int = 20):
"""
获取WEB消息列表
"""
ret_messages = []
messages = Message.list_by_page(db, page=page, count=count)
for message in messages:
try:
ret_messages.append(message.to_dict())
except Exception as e:
logger.error(f"获取WEB消息列表失败: {str(e)}")
continue
return ret_messages
def wechat_verify(echostr: str, msg_signature: str, timestamp: Union[str, int], nonce: str,
source: str = None) -> Any:
"""
微信验证响应
"""
# 获取服务配置
client_configs = ServiceConfigHelper.get_notification_configs()
if not client_configs:
return "未找到对应的消息配置"
client_config = next((config for config in client_configs if
config.type == "wechat" and config.enabled and (not source or config.name == source)), None)
if not client_config:
return "未找到对应的消息配置"
try:
wxcpt = WXBizMsgCrypt(sToken=settings.WECHAT_TOKEN,
sEncodingAESKey=settings.WECHAT_ENCODING_AESKEY,
sReceiveId=settings.WECHAT_CORPID)
wxcpt = WXBizMsgCrypt(sToken=client_config.config.get('WECHAT_TOKEN'),
sEncodingAESKey=client_config.config.get('WECHAT_ENCODING_AESKEY'),
sReceiveId=client_config.config.get('WECHAT_CORPID'))
ret, sEchoStr = wxcpt.VerifyURL(sMsgSignature=msg_signature,
sTimeStamp=timestamp,
sNonce=nonce,
sEchoStr=echostr)
if ret == 0:
# 验证URL成功,将sEchoStr返回给企业号
return PlainTextResponse(sEchoStr)
return "微信验证失败"
except Exception as err:
logger.error(f"微信请求验证失败: {str(err)}")
return str(err)
ret, sEchoStr = wxcpt.VerifyURL(sMsgSignature=msg_signature,
sTimeStamp=timestamp,
sNonce=nonce,
sEchoStr=echostr)
if ret != 0:
logger.error("微信请求验证失败 VerifyURL ret: %s" % str(ret))
# 验证URL成功,将sEchoStr返回给企业号
return PlainTextResponse(sEchoStr)
@router.get("/switchs", summary="查询通知消息渠道开关", response_model=List[NotificationSwitch])
def read_switchs(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
def vocechat_verify() -> Any:
"""
查询通知消息渠道开关
VoceChat验证响应
"""
return_list = []
# 读取数据库
switchs = SystemConfigOper().get(SystemConfigKey.NotificationChannels)
if not switchs:
for noti in NotificationType:
return_list.append(NotificationSwitch(mtype=noti.value, wechat=True,
telegram=True, slack=True,
synologychat=True))
else:
for switch in switchs:
return_list.append(NotificationSwitch(**switch))
return return_list
return {"status": "OK"}
@router.post("/switchs", summary="设置通知消息渠道开关", response_model=schemas.Response)
def set_switchs(switchs: List[NotificationSwitch],
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@router.get("/", summary="回调请求验证")
def incoming_verify(token: str = None, echostr: str = None, msg_signature: str = None,
timestamp: Union[str, int] = None, nonce: str = None, source: str = None,
_: schemas.TokenPayload = Depends(verify_apitoken)) -> Any:
"""
查询通知消息渠道开关
微信/VoceChat等验证响应
"""
switch_list = []
for switch in switchs:
switch_list.append(switch.dict())
# 存入数据库
SystemConfigOper().set(SystemConfigKey.NotificationChannels, switch_list)
logger.info(f"收到验证请求: token={token}, echostr={echostr}, "
f"msg_signature={msg_signature}, timestamp={timestamp}, nonce={nonce}")
if echostr and msg_signature and timestamp and nonce:
return wechat_verify(echostr, msg_signature, timestamp, nonce, source)
return vocechat_verify()
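The unified `incoming_verify` endpoint above multiplexes two callback-verification protocols on one GET route. A minimal stand-alone sketch of that dispatch, with stub handlers in place of the real WeChat decryption and VoceChat response:

```python
from typing import Optional, Union


def wechat_verify_stub(echostr: str, msg_signature: str,
                       timestamp: Union[str, int], nonce: str,
                       source: Optional[str] = None) -> str:
    # The real handler decrypts echostr via WXBizMsgCrypt.VerifyURL;
    # this stub just marks which branch was taken.
    return f"wechat:{echostr}"


def vocechat_verify_stub() -> dict:
    # VoceChat only expects a 200 response with any body
    return {"status": "OK"}


def incoming_verify(echostr: Optional[str] = None,
                    msg_signature: Optional[str] = None,
                    timestamp: Union[str, int, None] = None,
                    nonce: Optional[str] = None,
                    source: Optional[str] = None):
    # WeChat sends all four query parameters; anything else is
    # treated as a VoceChat-style verification, as in the diff above
    if echostr and msg_signature and timestamp and nonce:
        return wechat_verify_stub(echostr, msg_signature, timestamp, nonce, source)
    return vocechat_verify_stub()
```

The stub names are illustrative only; in the actual endpoint the WeChat branch delegates to `wechat_verify` and the fallback to `vocechat_verify`.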
@router.post("/webpush/subscribe", summary="客户端webpush通知订阅", response_model=schemas.Response)
def subscribe(subscription: schemas.Subscription, _: schemas.TokenPayload = Depends(verify_token)):
"""
客户端webpush通知订阅
"""
subinfo = subscription.dict()
if subinfo not in global_vars.get_subscriptions():
global_vars.push_subscription(subinfo)
logger.debug(f"通知订阅成功: {subinfo}")
return schemas.Response(success=True)
@router.post("/webpush/send", summary="发送webpush通知", response_model=schemas.Response)
def send_notification(payload: schemas.SubscriptionMessage, _: schemas.TokenPayload = Depends(verify_token)):
"""
发送webpush通知
"""
for sub in global_vars.get_subscriptions():
try:
webpush(
subscription_info=sub,
data=json.dumps(payload.dict()),
vapid_private_key=settings.VAPID.get("privateKey"),
vapid_claims={
"sub": settings.VAPID.get("subject")
},
)
except WebPushException as err:
logger.error(f"WebPush发送失败: {str(err)}")
continue
return schemas.Response(success=True)

View File

@@ -1,87 +1,226 @@
from typing import Any, List
from typing import Annotated, Any, List, Optional
from fastapi import APIRouter, Depends
from fastapi import APIRouter, Depends, Header
from app import schemas
from app.chain.command import CommandChain
from app.core.config import settings
from app.core.plugin import PluginManager
from app.core.security import verify_token
from app.core.security import verify_apikey, verify_token
from app.db.systemconfig_oper import SystemConfigOper
from app.db.user_oper import get_current_active_superuser
from app.factory import app
from app.helper.plugin import PluginHelper
from app.log import logger
from app.scheduler import Scheduler
from app.schemas.types import SystemConfigKey
PROTECTED_ROUTES = {"/api/v1/openapi.json", "/docs", "/docs/oauth2-redirect", "/redoc"}
PLUGIN_PREFIX = f"{settings.API_V1_STR}/plugin"
router = APIRouter()
def register_plugin_api(plugin_id: Optional[str] = None):
"""
动态注册插件 API
:param plugin_id: 插件 ID,如果为 None,则注册所有插件
"""
_update_plugin_api_routes(plugin_id, action="add")
def remove_plugin_api(plugin_id: str):
"""
动态移除单个插件的 API
:param plugin_id: 插件 ID
"""
_update_plugin_api_routes(plugin_id, action="remove")
def _update_plugin_api_routes(plugin_id: Optional[str], action: str):
"""
插件 API 路由注册和移除
:param plugin_id: 插件 ID,如果 action 为 "add" 且 plugin_id 为 None,则处理所有插件;
如果 action 为 "remove",plugin_id 必须是有效的插件 ID
:param action: "add" 或 "remove",决定是添加还是移除路由
"""
if action not in {"add", "remove"}:
raise ValueError("Action must be 'add' or 'remove'")
is_modified = False
existing_paths = {route.path: route for route in app.routes}
plugin_ids = [plugin_id] if plugin_id else PluginManager().get_running_plugin_ids()
for plugin_id in plugin_ids:
routes_removed = _remove_routes(plugin_id)
if routes_removed:
is_modified = True
if action != "add":
continue
# 获取插件的 API 路由信息
plugin_apis = PluginManager().get_plugin_apis(plugin_id)
for api in plugin_apis:
api_path = f"{PLUGIN_PREFIX}{api.get('path', '')}"
try:
api["path"] = api_path
allow_anonymous = api.pop("allow_anonymous", False)
dependencies = api.setdefault("dependencies", [])
if not allow_anonymous and Depends(verify_apikey) not in dependencies:
dependencies.append(Depends(verify_apikey))
app.add_api_route(**api, tags=["plugin"])
is_modified = True
logger.debug(f"Added plugin route: {api_path}")
except Exception as e:
logger.error(f"Error adding plugin route {api_path}: {str(e)}")
if is_modified:
_clean_protected_routes(existing_paths)
app.openapi_schema = None
app.setup()
def _remove_routes(plugin_id: str) -> bool:
"""
移除与单个插件相关的路由
:param plugin_id: 插件 ID
:return: 是否有路由被移除
"""
if not plugin_id:
return False
prefix = f"{PLUGIN_PREFIX}/{plugin_id}/"
routes_to_remove = [route for route in app.routes if route.path.startswith(prefix)]
removed = False
for route in routes_to_remove:
try:
app.routes.remove(route)
removed = True
logger.debug(f"Removed plugin route: {route.path}")
except Exception as e:
logger.error(f"Error removing plugin route {route.path}: {str(e)}")
return removed
def _clean_protected_routes(existing_paths: dict):
"""
清理受保护的路由,防止在插件操作中被删除或重复添加
:param existing_paths: 当前应用的路由路径映射
"""
for protected_route in PROTECTED_ROUTES:
try:
existing_route = existing_paths.get(protected_route)
if existing_route:
app.routes.remove(existing_route)
except Exception as e:
logger.error(f"Error removing protected route {protected_route}: {str(e)}")
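The route bookkeeping above (remove a plugin's stale routes by prefix, then re-add the current ones) can be sketched without FastAPI, using a plain list in place of `app.routes`. `RouteTable` is a stand-in type, not part of the diff; `PLUGIN_PREFIX` mirrors the constant defined above.

```python
PLUGIN_PREFIX = "/api/v1/plugin"


class RouteTable:
    """Toy model of the add/remove logic in _update_plugin_api_routes."""

    def __init__(self):
        self.paths: list = []

    def add_plugin_routes(self, plugin_id: str, api_paths: list) -> None:
        # The real code also removes stale routes before re-adding,
        # so repeated registration does not duplicate paths
        self.remove_plugin_routes(plugin_id)
        for path in api_paths:
            self.paths.append(f"{PLUGIN_PREFIX}/{plugin_id}{path}")

    def remove_plugin_routes(self, plugin_id: str) -> bool:
        # Match everything under /api/v1/plugin/<plugin_id>/
        prefix = f"{PLUGIN_PREFIX}/{plugin_id}/"
        before = len(self.paths)
        self.paths = [p for p in self.paths if not p.startswith(prefix)]
        return len(self.paths) != before
```

In the real implementation the add path also injects a `verify_apikey` dependency unless the route opts into `allow_anonymous`, and the OpenAPI schema is invalidated afterwards.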
@router.get("/", summary="所有插件", response_model=List[schemas.Plugin])
def all_plugins(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
def all_plugins(_: schemas.TokenPayload = Depends(get_current_active_superuser),
state: str = "all") -> List[schemas.Plugin]:
"""
查询所有插件清单,包括本地插件和在线插件
查询所有插件清单,包括本地插件和在线插件。插件状态:installed、market、all
"""
plugins = []
# 本地插件
local_plugins = PluginManager().get_local_plugins()
# 已安装插件
installed_plugins = [plugin for plugin in local_plugins if plugin.installed]
# 未安装的本地插件
not_installed_plugins = [plugin for plugin in local_plugins if not plugin.installed]
if state == "installed":
return installed_plugins
# 在线插件
online_plugins = PluginManager().get_online_plugins()
if not online_plugins:
# 没有获取在线插件
if state == "market":
# 返回未安装的本地插件
return not_installed_plugins
return local_plugins
# 插件市场插件清单
market_plugins = []
# 已安装插件IDS
installed_ids = [plugin["id"] for plugin in local_plugins if plugin.get("installed")]
# 已经安装的本地
plugins.extend([plugin for plugin in local_plugins if plugin.get("installed")])
_installed_ids = [plugin.id for plugin in installed_plugins]
# 未安装的线上插件或者有更新的插件
for plugin in online_plugins:
if plugin["id"] not in installed_ids:
plugins.append(plugin)
elif plugin.get("has_update"):
plugin["installed"] = False
plugins.append(plugin)
# 本地插件存在但未安装且本地插件不在online插件中
plugin_ids = [plugin["id"] for plugin in plugins]
for plugin in local_plugins:
if plugin["id"] not in installed_ids \
and plugin["id"] not in plugin_ids:
plugins.append(plugin)
return plugins
if plugin.id not in _installed_ids:
market_plugins.append(plugin)
elif plugin.has_update:
market_plugins.append(plugin)
# 未安装的本地插件,且不在线上插件中
_plugin_ids = [plugin.id for plugin in market_plugins]
for plugin in not_installed_plugins:
if plugin.id not in _plugin_ids:
market_plugins.append(plugin)
# 返回插件清单
if state == "market":
# 返回未安装的插件
return market_plugins
# 返回所有插件
return installed_plugins + market_plugins
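The merge in the rewritten `all_plugins` above follows three rules: installed local plugins always show; an online plugin enters the market list when it is not installed or has an update; uninstalled local plugins not covered by the online list fill in the rest. A minimal sketch, with `Plugin` as a stand-in for the real schema:

```python
from dataclasses import dataclass


@dataclass
class Plugin:
    id: str
    installed: bool = False
    has_update: bool = False


def merge_plugins(local, online, state="all"):
    installed = [p for p in local if p.installed]
    not_installed = [p for p in local if not p.installed]
    if state == "installed":
        return installed
    installed_ids = {p.id for p in installed}
    # Online plugins that are missing locally, or installed but updatable
    market = [p for p in online if p.id not in installed_ids or p.has_update]
    # Uninstalled local plugins not already represented online
    market_ids = {p.id for p in market}
    market += [p for p in not_installed if p.id not in market_ids]
    if state == "market":
        return market
    return installed + market
```

This omits the online-fetch failure fallback in the diff, where the function returns the local list (or its uninstalled subset) when no online plugins are available.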
@router.get("/installed", summary="已安装插件", response_model=List[str])
def installed_plugins(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
def installed(_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
"""
查询用户已安装插件清单
"""
return SystemConfigOper().get(SystemConfigKey.UserInstalledPlugins) or []
@router.get("/statistic", summary="插件安装统计", response_model=dict)
def statistic(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
插件安装统计
"""
return PluginHelper().get_statistic()
@router.get("/install/{plugin_id}", summary="安装插件", response_model=schemas.Response)
def install_plugin(plugin_id: str,
repo_url: str = "",
force: bool = False,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
def install(plugin_id: str,
repo_url: str = "",
force: bool = False,
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
"""
安装插件
"""
# 已安装插件
install_plugins = SystemConfigOper().get(SystemConfigKey.UserInstalledPlugins) or []
# 如果是非本地插件,或者强制安装时,则需要下载安装
if repo_url and (force or plugin_id not in PluginManager().get_plugin_ids()):
# 下载安装
state, msg = PluginHelper().install(pid=plugin_id, repo_url=repo_url)
if not state:
# 安装失败
return schemas.Response(success=False, message=msg)
# 插件已存在且非强制安装时,只进行安装统计
if not force and plugin_id in PluginManager().get_plugin_ids():
PluginHelper().install_reg(pid=plugin_id)
else:
# 插件不存在或需要强制安装,下载安装并注册插件
if repo_url:
state, msg = PluginHelper().install(pid=plugin_id, repo_url=repo_url)
# 安装失败则直接响应
if not state:
return schemas.Response(success=False, message=msg)
else:
# repo_url 为空时,也直接响应
return schemas.Response(success=False, message="没有传入仓库地址,无法正确安装插件,请检查配置")
# 安装插件
if plugin_id not in install_plugins:
install_plugins.append(plugin_id)
# 保存设置
SystemConfigOper().set(SystemConfigKey.UserInstalledPlugins, install_plugins)
# 重载插件管理器
PluginManager().init_config()
# 重载插件到内存
PluginManager().reload_plugin(plugin_id)
# 注册插件服务
Scheduler().update_plugin_job(plugin_id)
# 注册菜单命令
CommandChain().init_commands(plugin_id)
# 注册插件API
register_plugin_api(plugin_id)
return schemas.Response(success=True)
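The branching in the rewritten install endpoint reduces to three outcomes. A sketch with a hypothetical `decide_install` helper (not in the diff) that returns which action the endpoint would take:

```python
def decide_install(plugin_id, local_ids, repo_url="", force=False):
    """Return 'register' (plugin already present, only count the install),
    'download' (fetch from repo_url, then register), or 'error'
    (a download is required but no repo_url was given)."""
    if not force and plugin_id in local_ids:
        return "register"
    if repo_url:
        return "download"
    return "error"
```

In the real endpoint "register" maps to `PluginHelper().install_reg`, "download" to `PluginHelper().install`, and "error" to the "没有传入仓库地址" response; all success paths then reload the plugin and re-register its jobs, commands, and API routes.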
@router.get("/form/{plugin_id}", summary="获取插件表单页面")
def plugin_form(plugin_id: str,
_: schemas.TokenPayload = Depends(verify_token)) -> dict:
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> dict:
"""
根据插件ID获取插件配置表单
"""
@@ -93,27 +232,63 @@ def plugin_form(plugin_id: str,
@router.get("/page/{plugin_id}", summary="获取插件数据页面")
def plugin_page(plugin_id: str, _: schemas.TokenPayload = Depends(verify_token)) -> List[dict]:
def plugin_page(plugin_id: str, _: schemas.TokenPayload = Depends(get_current_active_superuser)) -> List[dict]:
"""
根据插件ID获取插件数据页面
"""
return PluginManager().get_plugin_page(plugin_id)
@router.get("/reset/{plugin_id}", summary="重置插件配置", response_model=schemas.Response)
def reset_plugin(plugin_id: str, _: schemas.TokenPayload = Depends(verify_token)) -> List[dict]:
@router.get("/dashboard/meta", summary="获取所有插件仪表板元信息")
def plugin_dashboard_meta(_: schemas.TokenPayload = Depends(verify_token)) -> List[dict]:
"""
根据插件ID重置插件配置
获取所有插件仪表板元信息
"""
return PluginManager().get_plugin_dashboard_meta()
@router.get("/dashboard/{plugin_id}", summary="获取插件仪表板配置")
def plugin_dashboard(plugin_id: str, user_agent: Annotated[str | None, Header()] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> schemas.PluginDashboard:
"""
根据插件ID获取插件仪表板
"""
return PluginManager().get_plugin_dashboard(plugin_id, user_agent=user_agent)
@router.get("/dashboard/{plugin_id}/{key}", summary="获取插件仪表板配置")
def plugin_dashboard(plugin_id: str, key: str, user_agent: Annotated[str | None, Header()] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> schemas.PluginDashboard:
"""
根据插件ID获取插件仪表板
"""
return PluginManager().get_plugin_dashboard(plugin_id, key=key, user_agent=user_agent)
@router.get("/reset/{plugin_id}", summary="重置插件配置及数据", response_model=schemas.Response)
def reset_plugin(plugin_id: str,
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
"""
根据插件ID重置插件配置及数据
"""
# 删除配置
PluginManager().delete_plugin_config(plugin_id)
# 删除插件所有数据
PluginManager().delete_plugin_data(plugin_id)
# 重新生效插件
PluginManager().reload_plugin(plugin_id, {})
PluginManager().reload_plugin(plugin_id)
# 注册插件服务
Scheduler().update_plugin_job(plugin_id)
# 注册菜单命令
CommandChain().init_commands(plugin_id)
# 注册插件API
register_plugin_api(plugin_id)
return schemas.Response(success=True)
@router.get("/{plugin_id}", summary="获取插件配置")
def plugin_config(plugin_id: str, _: schemas.TokenPayload = Depends(verify_token)) -> dict:
def plugin_config(plugin_id: str,
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> dict:
"""
根据插件ID获取插件配置信息
"""
@@ -122,20 +297,26 @@ def plugin_config(plugin_id: str, _: schemas.TokenPayload = Depends(verify_token
@router.put("/{plugin_id}", summary="更新插件配置", response_model=schemas.Response)
def set_plugin_config(plugin_id: str, conf: dict,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
"""
更新插件配置
"""
# 保存配置
PluginManager().save_plugin_config(plugin_id, conf)
# 重新生效插件
PluginManager().reload_plugin(plugin_id, conf)
PluginManager().init_plugin(plugin_id, conf)
# 注册插件服务
Scheduler().update_plugin_job(plugin_id)
# 注册菜单命令
CommandChain().init_commands(plugin_id)
# 注册插件API
register_plugin_api(plugin_id)
return schemas.Response(success=True)
@router.delete("/{plugin_id}", summary="卸载插件", response_model=schemas.Response)
def uninstall_plugin(plugin_id: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
"""
卸载插件
"""
@@ -147,11 +328,14 @@ def uninstall_plugin(plugin_id: str,
break
# 保存
SystemConfigOper().set(SystemConfigKey.UserInstalledPlugins, install_plugins)
# 重载插件管理器
PluginManager().init_config()
# 移除插件API
remove_plugin_api(plugin_id)
# 移除插件服务
Scheduler().remove_plugin_job(plugin_id)
# 移除插件
PluginManager().remove_plugin(plugin_id)
return schemas.Response(success=True)
# 注册插件API
for api in PluginManager().get_plugin_apis():
router.add_api_route(**api)
# 注册全部插件API
register_plugin_api()

View File

@@ -13,7 +13,7 @@ router = APIRouter()
@router.get("/last", summary="查询搜索结果", response_model=List[schemas.Context])
async def search_latest(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
def search_latest(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询搜索结果
"""
@@ -21,17 +21,19 @@ async def search_latest(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
return [torrent.to_dict() for torrent in torrents]
@router.get("/media/{mediaid}", summary="精确搜索资源", response_model=List[schemas.Context])
@router.get("/media/{mediaid}", summary="精确搜索资源", response_model=schemas.Response)
def search_by_id(mediaid: str,
mtype: str = None,
area: str = "title",
season: str = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据TMDBID/豆瓣ID精确搜索站点资源 tmdb:/douban:/
根据TMDBID/豆瓣ID精确搜索站点资源 tmdb:/douban:/bangumi:
"""
torrents = []
if mtype:
mtype = MediaType(mtype)
if season:
season = int(season)
if mediaid.startswith("tmdb:"):
tmdbid = int(mediaid.replace("tmdb:", ""))
if settings.RECOGNIZE_SOURCE == "douban":
@@ -39,31 +41,61 @@ def search_by_id(mediaid: str,
doubaninfo = MediaChain().get_doubaninfo_by_tmdbid(tmdbid=tmdbid, mtype=mtype)
if doubaninfo:
torrents = SearchChain().search_by_id(doubanid=doubaninfo.get("id"),
mtype=mtype, area=area)
mtype=mtype, area=area, season=season)
else:
return schemas.Response(success=False, message="未识别到豆瓣媒体信息")
else:
torrents = SearchChain().search_by_id(tmdbid=tmdbid, mtype=mtype, area=area)
torrents = SearchChain().search_by_id(tmdbid=tmdbid, mtype=mtype, area=area, season=season)
elif mediaid.startswith("douban:"):
doubanid = mediaid.replace("douban:", "")
if settings.RECOGNIZE_SOURCE == "themoviedb":
# 通过豆瓣ID识别TMDBID
tmdbinfo = MediaChain().get_tmdbinfo_by_doubanid(doubanid=doubanid, mtype=mtype)
if tmdbinfo:
if tmdbinfo.get('season') and not season:
season = tmdbinfo.get('season')
torrents = SearchChain().search_by_id(tmdbid=tmdbinfo.get("id"),
mtype=mtype, area=area)
mtype=mtype, area=area, season=season)
else:
return schemas.Response(success=False, message="未识别到TMDB媒体信息")
else:
torrents = SearchChain().search_by_id(doubanid=doubanid, mtype=mtype, area=area)
torrents = SearchChain().search_by_id(doubanid=doubanid, mtype=mtype, area=area, season=season)
elif mediaid.startswith("bangumi:"):
bangumiid = int(mediaid.replace("bangumi:", ""))
if settings.RECOGNIZE_SOURCE == "themoviedb":
# 通过BangumiID识别TMDBID
tmdbinfo = MediaChain().get_tmdbinfo_by_bangumiid(bangumiid=bangumiid)
if tmdbinfo:
torrents = SearchChain().search_by_id(tmdbid=tmdbinfo.get("id"),
mtype=mtype, area=area, season=season)
else:
return schemas.Response(success=False, message="未识别到TMDB媒体信息")
else:
# 通过BangumiID识别豆瓣ID
doubaninfo = MediaChain().get_doubaninfo_by_bangumiid(bangumiid=bangumiid)
if doubaninfo:
torrents = SearchChain().search_by_id(doubanid=doubaninfo.get("id"),
mtype=mtype, area=area, season=season)
else:
return schemas.Response(success=False, message="未识别到豆瓣媒体信息")
else:
return []
return [torrent.to_dict() for torrent in torrents]
return schemas.Response(success=False, message="未知的媒体ID")
if not torrents:
return schemas.Response(success=False, message="未搜索到任何资源")
else:
return schemas.Response(success=True, data=[torrent.to_dict() for torrent in torrents])
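`search_by_id` above dispatches on a source prefix embedded in the id string (`tmdb:`, `douban:`, `bangumi:`). A sketch of just that parsing step, using a hypothetical `parse_mediaid` helper that is not part of the diff:

```python
def parse_mediaid(mediaid: str):
    """Split 'tmdb:550' into ('tmdb', 550); unknown prefixes yield (None, None)."""
    for prefix in ("tmdb:", "douban:", "bangumi:"):
        if mediaid.startswith(prefix):
            value = mediaid[len(prefix):]
            # The endpoint converts tmdb/bangumi ids to int;
            # douban ids are kept as strings
            if prefix in ("tmdb:", "bangumi:"):
                return prefix[:-1], int(value)
            return prefix[:-1], value
    return None, None
```

After parsing, the endpoint cross-recognizes the id against the configured `RECOGNIZE_SOURCE` before searching, which is why each branch carries a "未识别到…媒体信息" failure response.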
@router.get("/title", summary="模糊搜索资源", response_model=List[schemas.TorrentInfo])
async def search_by_title(keyword: str = None,
page: int = 0,
site: int = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@router.get("/title", summary="模糊搜索资源", response_model=schemas.Response)
def search_by_title(keyword: str = None,
page: int = 0,
site: int = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据名称模糊搜索站点资源,支持分页,关键词为空时返回首页资源
"""
torrents = SearchChain().search_by_title(title=keyword, page=page, site=site)
return [torrent.to_dict() for torrent in torrents]
if not torrents:
return schemas.Response(success=False, message="未搜索到任何资源")
return schemas.Response(success=True, data=[torrent.to_dict() for torrent in torrents])

View File

@@ -10,9 +10,13 @@ from app.chain.torrents import TorrentsChain
from app.core.event import EventManager
from app.core.security import verify_token
from app.db import get_db
from app.db.models import User
from app.db.models.site import Site
from app.db.models.siteicon import SiteIcon
from app.db.models.sitestatistic import SiteStatistic
from app.db.models.siteuserdata import SiteUserData
from app.db.systemconfig_oper import SystemConfigOper
from app.db.user_oper import get_current_active_superuser
from app.helper.sites import SitesHelper
from app.scheduler import Scheduler
from app.schemas.types import SystemConfigKey, EventType
@@ -23,7 +27,7 @@ router = APIRouter()
@router.get("/", summary="所有站点", response_model=List[schemas.Site])
def read_sites(db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> List[dict]:
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> List[dict]:
"""
获取站点列表
"""
@@ -35,25 +39,35 @@ def add_site(
*,
db: Session = Depends(get_db),
site_in: schemas.Site,
_: schemas.TokenPayload = Depends(verify_token)
_: schemas.TokenPayload = Depends(get_current_active_superuser)
) -> Any:
"""
新增站点
"""
if not site_in.url:
return schemas.Response(success=False, message="站点地址不能为空")
if SitesHelper().auth_level < 2:
return schemas.Response(success=False, message="用户未通过认证,无法使用站点功能!")
domain = StringUtils.get_url_domain(site_in.url)
site_info = SitesHelper().get_indexer(domain)
if not site_info:
return schemas.Response(success=False, message="该站点不支持或用户未通过认证")
return schemas.Response(success=False, message="该站点不支持,请检查站点域名是否正确")
if Site.get_by_domain(db, domain):
return schemas.Response(success=False, message=f"{domain} 站点已存在")
# 保存站点信息
site_in.domain = domain
# 校正地址格式
_scheme, _netloc = StringUtils.get_url_netloc(site_in.url)
site_in.url = f"{_scheme}://{_netloc}/"
site_in.name = site_info.get("name")
site_in.id = None
site_in.public = 1 if site_info.get("public") else 0
site = Site(**site_in.dict())
site.create(db)
# 通知站点更新
EventManager().send_event(EventType.SiteUpdated, {
"domain": domain
})
return schemas.Response(success=True)
@@ -62,7 +76,7 @@ def update_site(
*,
db: Session = Depends(get_db),
site_in: schemas.Site,
_: schemas.TokenPayload = Depends(verify_token)
_: schemas.TokenPayload = Depends(get_current_active_superuser)
) -> Any:
"""
更新站点信息
@@ -70,31 +84,20 @@ def update_site(
site = Site.get(db, site_in.id)
if not site:
return schemas.Response(success=False, message="站点不存在")
# 校正地址格式
_scheme, _netloc = StringUtils.get_url_netloc(site_in.url)
site_in.url = f"{_scheme}://{_netloc}/"
site.update(db, site_in.dict())
return schemas.Response(success=True)
@router.delete("/{site_id}", summary="删除站点", response_model=schemas.Response)
def delete_site(
site_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)
) -> Any:
"""
删除站点
"""
Site.delete(db, site_id)
# 插件站点删除
EventManager().send_event(EventType.SiteDeleted,
{
"site_id": site_id
})
# 通知站点更新
EventManager().send_event(EventType.SiteUpdated, {
"domain": site_in.domain
})
return schemas.Response(success=True)
@router.get("/cookiecloud", summary="CookieCloud同步", response_model=schemas.Response)
def cookie_cloud_sync(background_tasks: BackgroundTasks,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
"""
运行CookieCloud同步站点信息
"""
@@ -103,8 +106,8 @@ def cookie_cloud_sync(background_tasks: BackgroundTasks,
@router.get("/reset", summary="重置站点", response_model=schemas.Response)
def cookie_cloud_sync(db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
def reset(db: Session = Depends(get_db),
_: User = Depends(get_current_active_superuser)) -> Any:
"""
清空所有站点数据并重新同步CookieCloud站点信息
"""
@@ -116,18 +119,34 @@ def cookie_cloud_sync(db: Session = Depends(get_db),
# 插件站点删除
EventManager().send_event(EventType.SiteDeleted,
{
"site_id": None
"site_id": "*"
})
return schemas.Response(success=True, message="站点已重置!")
@router.post("/priorities", summary="批量更新站点优先级", response_model=schemas.Response)
def update_sites_priority(
priorities: List[dict],
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
"""
批量更新站点优先级
"""
for priority in priorities:
site = Site.get(db, priority.get("id"))
if site:
site.update(db, {"pri": priority.get("pri")})
return schemas.Response(success=True)
@router.get("/cookie/{site_id}", summary="更新站点Cookie&UA", response_model=schemas.Response)
def update_cookie(
site_id: int,
username: str,
password: str,
code: str = None,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
"""
使用用户密码更新站点Cookie
"""
@@ -141,10 +160,66 @@ def update_cookie(
# 更新Cookie
state, message = SiteChain().update_cookie(site_info=site_info,
username=username,
password=password)
password=password,
two_step_code=code)
return schemas.Response(success=state, message=message)
@router.post("/userdata/{site_id}", summary="更新站点用户数据", response_model=schemas.Response)
def refresh_userdata(
site_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
"""
刷新站点用户数据
"""
site = Site.get(db, site_id)
if not site:
raise HTTPException(
status_code=404,
detail=f"站点 {site_id} 不存在",
)
indexer = SitesHelper().get_indexer(site.domain)
if not indexer:
return schemas.Response(success=False, message="站点不支持索引或未通过用户认证!")
user_data = SiteChain().refresh_userdata(site=indexer) or {}
return schemas.Response(success=True, data=user_data)
@router.get("/userdata/latest", summary="查询所有站点最新用户数据", response_model=List[schemas.SiteUserData])
def read_userdata_latest(
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
"""
查询所有站点最新用户数据
"""
user_datas = SiteUserData.get_latest(db)
if not user_datas:
return []
return [user_data.to_dict() for user_data in user_datas]
@router.get("/userdata/{site_id}", summary="查询某站点用户数据", response_model=schemas.Response)
def read_userdata(
site_id: int,
workdate: str = None,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
"""
查询站点用户数据
"""
site = Site.get(db, site_id)
if not site:
raise HTTPException(
status_code=404,
detail=f"站点 {site_id} 不存在",
)
user_data = SiteUserData.get_by_domain(db, domain=site.domain, workdate=workdate)
if not user_data:
return schemas.Response(success=False, data=[])
return schemas.Response(success=True, data=user_data)
@router.get("/test/{site_id}", summary="连接测试", response_model=schemas.Response)
def test_site(site_id: int,
db: Session = Depends(get_db),
@@ -186,7 +261,7 @@ def site_icon(site_id: int,
@router.get("/resource/{site_id}", summary="站点资源", response_model=List[schemas.TorrentInfo])
def site_resource(site_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
"""
浏览站点资源
"""
@@ -221,8 +296,25 @@ def read_site_by_domain(
return site
@router.get("/statistic/{site_url}", summary="站点统计信息", response_model=schemas.SiteStatistic)
def read_site_by_domain(
site_url: str,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)
) -> Any:
"""
通过域名获取站点统计信息
"""
domain = StringUtils.get_url_domain(site_url)
sitestatistic = SiteStatistic.get_by_domain(db, domain)
if sitestatistic:
return sitestatistic
return schemas.SiteStatistic(domain=domain)
@router.get("/rss", summary="所有订阅站点", response_model=List[schemas.Site])
def read_rss_sites(db: Session = Depends(get_db)) -> List[dict]:
def read_rss_sites(db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> List[dict]:
"""
获取站点列表
"""
@@ -243,7 +335,7 @@ def read_rss_sites(db: Session = Depends(get_db)) -> List[dict]:
def read_site(
site_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)
_: schemas.TokenPayload = Depends(get_current_active_superuser)
) -> Any:
"""
通过ID获取站点信息
@@ -255,3 +347,21 @@ def read_site(
detail=f"站点 {site_id} 不存在",
)
return site
@router.delete("/{site_id}", summary="删除站点", response_model=schemas.Response)
def delete_site(
site_id: int,
db: Session = Depends(get_db),
_: User = Depends(get_current_active_superuser)
) -> Any:
"""
删除站点
"""
Site.delete(db, site_id)
# 插件站点删除
EventManager().send_event(EventType.SiteDeleted,
{
"site_id": site_id
})
return schemas.Response(success=True)

View File

@@ -0,0 +1,220 @@
from datetime import datetime
from pathlib import Path
from typing import Any, List
from fastapi import APIRouter, Depends, HTTPException
from starlette.responses import FileResponse, Response
from app import schemas
from app.chain.storage import StorageChain
from app.chain.transfer import TransferChain
from app.core.config import settings
from app.core.metainfo import MetaInfoPath
from app.core.security import verify_token
from app.db.models import User
from app.db.user_oper import get_current_active_superuser
from app.helper.progress import ProgressHelper
from app.schemas.types import ProgressKey
router = APIRouter()
@router.get("/qrcode/{name}", summary="生成二维码内容", response_model=schemas.Response)
def qrcode(name: str, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
生成二维码
"""
qrcode_data, errmsg = StorageChain().generate_qrcode(name)
if qrcode_data:
return schemas.Response(success=True, data=qrcode_data, message=errmsg)
return schemas.Response(success=False)
@router.get("/check/{name}", summary="二维码登录确认", response_model=schemas.Response)
def check(name: str, ck: str = None, t: str = None, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
二维码登录确认
"""
if ck or t:
data, errmsg = StorageChain().check_login(name, ck=ck, t=t)
else:
data, errmsg = StorageChain().check_login(name)
if data:
return schemas.Response(success=True, data=data)
return schemas.Response(success=False, message=errmsg)
@router.post("/save/{name}", summary="保存存储配置", response_model=schemas.Response)
def save(name: str,
conf: dict,
_: User = Depends(get_current_active_superuser)) -> Any:
"""
保存存储配置
"""
StorageChain().save_config(name, conf)
return schemas.Response(success=True)
@router.post("/list", summary="所有目录和文件", response_model=List[schemas.FileItem])
def list_files(fileitem: schemas.FileItem,
sort: str = 'updated_at',
_: User = Depends(get_current_active_superuser)) -> Any:
"""
查询当前目录下所有目录和文件
:param fileitem: 文件项
:param sort: 排序方式,name:按名称排序,time:按修改时间倒序
:param _: token
:return: 所有目录和文件
"""
file_list = StorageChain().list_files(fileitem)
if file_list:
if sort == "name":
file_list.sort(key=lambda x: x.name or "")
else:
file_list.sort(key=lambda x: x.modify_time or datetime.min, reverse=True)
return file_list
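The None-safe sort keys in `list_files` can be exercised outside FastAPI. The `Item` dataclass below is a simplified stand-in for the real `schemas.FileItem` (an assumption for illustration): missing names fall back to `""` and missing timestamps to `datetime.min`, so incomplete entries still sort deterministically.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Item:  # simplified stand-in for schemas.FileItem
    name: Optional[str] = None
    modify_time: Optional[datetime] = None

files = [
    Item("b", datetime(2024, 1, 2)),
    Item("a", None),
    Item(None, datetime(2024, 1, 1)),
]

# sort == "name": a None name falls back to "" and sorts first
by_name = sorted(files, key=lambda x: x.name or "")
# default branch: newest first; a None timestamp falls back to datetime.min and sorts last
by_time = sorted(files, key=lambda x: x.modify_time or datetime.min, reverse=True)
```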
@router.post("/mkdir", summary="创建目录", response_model=schemas.Response)
def mkdir(fileitem: schemas.FileItem,
name: str,
_: User = Depends(get_current_active_superuser)) -> Any:
"""
创建目录
:param fileitem: 文件项
:param name: 目录名称
:param _: token
"""
if not name:
return schemas.Response(success=False)
result = StorageChain().create_folder(fileitem, name)
if result:
return schemas.Response(success=True)
return schemas.Response(success=False)
@router.post("/delete", summary="删除文件或目录", response_model=schemas.Response)
def delete(fileitem: schemas.FileItem,
_: User = Depends(get_current_active_superuser)) -> Any:
"""
删除文件或目录
:param fileitem: 文件项
:param _: token
"""
result = StorageChain().delete_file(fileitem)
if result:
return schemas.Response(success=True)
return schemas.Response(success=False)
@router.post("/download", summary="下载文件")
def download(fileitem: schemas.FileItem,
_: User = Depends(get_current_active_superuser)) -> Any:
"""
下载文件或目录
:param fileitem: 文件项
:param _: token
"""
# 临时目录
tmp_file = StorageChain().download_file(fileitem)
if tmp_file:
return FileResponse(path=tmp_file)
return schemas.Response(success=False)
@router.post("/image", summary="预览图片")
def image(fileitem: schemas.FileItem,
_: User = Depends(get_current_active_superuser)) -> Any:
"""
预览图片文件
:param fileitem: 文件项
:param _: token
"""
# 临时目录
tmp_file = StorageChain().download_file(fileitem)
if not tmp_file:
raise HTTPException(status_code=500, detail="图片读取出错")
return Response(content=tmp_file.read_bytes(), media_type="image/jpeg")
@router.post("/rename", summary="重命名文件或目录", response_model=schemas.Response)
def rename(fileitem: schemas.FileItem,
new_name: str,
recursive: bool = False,
_: User = Depends(get_current_active_superuser)) -> Any:
"""
重命名文件或目录
:param fileitem: 文件项
:param new_name: 新名称
:param recursive: 是否递归修改
:param _: token
"""
if not new_name:
return schemas.Response(success=False, message="新名称为空")
if fileitem.storage != 'local' and not fileitem.fileid:
return schemas.Response(success=False, message="资源ID获取失败")
result = StorageChain().rename_file(fileitem, new_name)
if result:
if recursive:
transferchain = TransferChain()
media_exts = settings.RMT_MEDIAEXT + settings.RMT_SUBEXT + settings.RMT_AUDIO_TRACK_EXT
# 递归修改目录内文件(智能识别命名)
sub_files: List[schemas.FileItem] = StorageChain().list_files(fileitem)
if sub_files:
# 开始进度
progress = ProgressHelper()
progress.start(ProgressKey.BatchRename)
total = len(sub_files)
handled = 0
for sub_file in sub_files:
handled += 1
progress.update(value=handled / total * 100,
text=f"正在处理 {sub_file.name} ...",
key=ProgressKey.BatchRename)
if sub_file.type == "dir":
continue
if not sub_file.extension:
continue
if f".{sub_file.extension.lower()}" not in media_exts:
continue
sub_path = Path(f"{fileitem.path}{sub_file.name}")
meta = MetaInfoPath(sub_path)
mediainfo = transferchain.recognize_media(meta)
if not mediainfo:
progress.end(ProgressKey.BatchRename)
return schemas.Response(success=False, message=f"{sub_path.name} 未识别到媒体信息")
new_path = transferchain.recommend_name(meta=meta, mediainfo=mediainfo)
if not new_path:
progress.end(ProgressKey.BatchRename)
return schemas.Response(success=False, message=f"{sub_path.name} 未识别到新名称")
ret: schemas.Response = rename(fileitem=sub_file,
new_name=Path(new_path).name,
recursive=False)
if not ret.success:
progress.end(ProgressKey.BatchRename)
return schemas.Response(success=False, message=f"{sub_path.name} 重命名失败!")
progress.end(ProgressKey.BatchRename)
return schemas.Response(success=True)
return schemas.Response(success=False)
@router.get("/usage/{name}", summary="存储空间信息", response_model=schemas.StorageUsage)
def usage(name: str, _: User = Depends(get_current_active_superuser)) -> Any:
"""
查询存储空间
"""
ret = StorageChain().storage_usage(name)
if ret:
return ret
return schemas.StorageUsage()
@router.get("/transtype/{name}", summary="支持的整理方式获取", response_model=schemas.StorageTransType)
def transtype(name: str, _: User = Depends(get_current_active_superuser)) -> Any:
"""
查询支持的整理方式
"""
ret = StorageChain().support_transtype(name)
if ret:
return schemas.StorageTransType(transtype=ret)
return schemas.StorageTransType()


@@ -1,18 +1,21 @@
import json
from typing import List, Any
import cn2an
from fastapi import APIRouter, Request, BackgroundTasks, Depends, HTTPException, Header
from sqlalchemy.orm import Session
from app import schemas
from app.chain.subscribe import SubscribeChain
from app.core.config import settings
from app.core.context import MediaInfo
from app.core.metainfo import MetaInfo
from app.core.security import verify_token, verify_uri_token
from app.core.security import verify_token, verify_apitoken
from app.db import get_db
from app.db.models.subscribe import Subscribe
from app.db.models.subscribehistory import SubscribeHistory
from app.db.models.user import User
from app.db.userauth import get_current_active_user
from app.db.user_oper import get_current_active_user
from app.helper.subscribe import SubscribeHelper
from app.scheduler import Scheduler
from app.schemas.types import MediaType
@@ -35,15 +38,11 @@ def read_subscribes(
"""
查询所有订阅
"""
subscribes = Subscribe.list(db)
for subscribe in subscribes:
if subscribe.sites:
subscribe.sites = json.loads(subscribe.sites)
return subscribes
return Subscribe.list(db)
@router.get("/list", summary="查询所有订阅API_TOKEN", response_model=List[schemas.Subscribe])
def list_subscribes(_: str = Depends(verify_uri_token)) -> Any:
def list_subscribes(_: str = Depends(verify_apitoken)) -> Any:
"""
查询所有订阅(API_TOKEN认证,?token=xxx)
"""
@@ -55,7 +54,7 @@ def create_subscribe(
*,
subscribe_in: schemas.Subscribe,
current_user: User = Depends(get_current_active_user),
) -> Any:
) -> schemas.Response:
"""
新增订阅
"""
@@ -65,7 +64,7 @@ def create_subscribe(
else:
mtype = None
# 豆瓣/Bangumi订阅处理,从名称中识别季信息
if subscribe_in.doubanid:
if subscribe_in.doubanid or subscribe_in.bangumiid:
meta = MetaInfo(subscribe_in.name)
subscribe_in.name = meta.name
subscribe_in.season = meta.begin_season
@@ -80,13 +79,18 @@ def create_subscribe(
tmdbid=subscribe_in.tmdbid,
season=subscribe_in.season,
doubanid=subscribe_in.doubanid,
bangumiid=subscribe_in.bangumiid,
username=current_user.name,
best_version=subscribe_in.best_version,
save_path=subscribe_in.save_path,
search_imdbid=subscribe_in.search_imdbid,
custom_words=subscribe_in.custom_words,
media_category=subscribe_in.media_category,
filter_groups=subscribe_in.filter_groups,
exist_ok=True)
return schemas.Response(success=True if sid else False, message=message, data={
"id": sid
})
return schemas.Response(
success=bool(sid), message=message, data={"id": sid}
)
@router.put("/", summary="更新订阅", response_model=schemas.Response)
@@ -102,8 +106,6 @@ def update_subscribe(
subscribe = Subscribe.get(db, subscribe_in.id)
if not subscribe:
return schemas.Response(success=False, message="订阅不存在")
if subscribe_in.sites is not None:
subscribe_in.sites = json.dumps(subscribe_in.sites)
# 避免更新缺失集数
subscribe_dict = subscribe_in.dict()
if not subscribe_in.lack_episode:
@@ -115,6 +117,9 @@ def update_subscribe(
subscribe_dict["lack_episode"] = (subscribe.lack_episode
+ (subscribe_in.total_episode
- (subscribe.total_episode or 0)))
# 是否手动修改过总集数
if subscribe_in.total_episode != subscribe.total_episode:
subscribe_dict["manual_total_episode"] = 1
subscribe.update(db, subscribe_dict)
return schemas.Response(success=True)
@@ -127,9 +132,10 @@ def subscribe_mediaid(
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据TMDBID豆瓣ID查询订阅 tmdb:/douban:
根据 TMDBID/豆瓣ID/BangumiId 查询订阅 tmdb:/douban:/bangumi:
"""
result = None
title_check = False
if mediaid.startswith("tmdb:"):
tmdbid = mediaid[5:]
if not tmdbid or not str(tmdbid).isdigit():
@@ -140,16 +146,22 @@ def subscribe_mediaid(
if not doubanid:
return Subscribe()
result = Subscribe.get_by_doubanid(db, doubanid)
if not result and title:
if not result and title:
title_check = True
elif mediaid.startswith("bangumi:"):
bangumiid = mediaid[8:]
if not bangumiid or not str(bangumiid).isdigit():
return Subscribe()
result = Subscribe.get_by_bangumiid(db, int(bangumiid))
if not result and title:
title_check = True
# 使用名称检查订阅
if title_check and title:
meta = MetaInfo(title)
if season:
meta.begin_season = season
result = Subscribe.get_by_title(db, title=meta.name, season=meta.begin_season)
if result and result.sites:
result.sites = json.loads(result.sites)
return result if result else Subscribe()
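The prefix dispatch in `subscribe_mediaid` slices fixed offsets (`[5:]` for `tmdb:`, `[8:]` for `bangumi:`) and falls back to an empty `Subscribe()` on malformed ids. The same routing can be sketched as one hypothetical helper; `parse_mediaid` is not part of the codebase, and the int conversion is a simplification (the endpoint keeps douban ids as strings):

```python
from typing import Optional, Tuple

def parse_mediaid(mediaid: str) -> Tuple[Optional[str], Optional[int]]:
    """Split a 'tmdb:123' / 'douban:456' / 'bangumi:789' id into (source, value).

    Returns (source, None) for a recognized prefix with a non-numeric value,
    mirroring the endpoint's empty-Subscribe fallback, and (None, None) for
    an unknown prefix.
    """
    for prefix in ("tmdb:", "douban:", "bangumi:"):
        if mediaid.startswith(prefix):
            value = mediaid[len(prefix):]
            if value.isdigit():
                return prefix[:-1], int(value)
            return prefix[:-1], None
    return None, None
```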
@@ -163,6 +175,24 @@ def refresh_subscribes(
return schemas.Response(success=True)
@router.get("/reset/{subid}", summary="重置订阅", response_model=schemas.Response)
def reset_subscribes(
subid: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
重置订阅
"""
subscribe = Subscribe.get(db, subid)
if subscribe:
subscribe.update(db, {
"note": "",
"lack_episode": subscribe.total_episode
})
return schemas.Response(success=True)
return schemas.Response(success=False, message="订阅不存在")
@router.get("/check", summary="刷新订阅 TMDB 信息", response_model=schemas.Response)
def check_subscribes(
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
@@ -183,9 +213,11 @@ def search_subscribes(
background_tasks.add_task(
Scheduler().start,
job_id="subscribe_search",
sid=None,
state='R',
manual=True
**{
"sid": None,
"state": 'R',
"manual": True
}
)
return schemas.Response(success=True)
@@ -201,29 +233,15 @@ def search_subscribe(
background_tasks.add_task(
Scheduler().start,
job_id="subscribe_search",
sid=subscribe_id,
state=None,
manual=True
**{
"sid": subscribe_id,
"state": None,
"manual": True
}
)
return schemas.Response(success=True)
@router.get("/{subscribe_id}", summary="订阅详情", response_model=schemas.Subscribe)
def read_subscribe(
subscribe_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据订阅编号查询订阅信息
"""
if not subscribe_id:
return Subscribe()
subscribe = Subscribe.get(db, subscribe_id)
if subscribe and subscribe.sites:
subscribe.sites = json.loads(subscribe.sites)
return subscribe
@router.delete("/media/{mediaid}", summary="删除订阅", response_model=schemas.Response)
def delete_subscribe_by_mediaid(
mediaid: str,
@@ -248,19 +266,6 @@ def delete_subscribe_by_mediaid(
return schemas.Response(success=True)
@router.delete("/{subscribe_id}", summary="删除订阅", response_model=schemas.Response)
def delete_subscribe(
subscribe_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)
) -> Any:
"""
删除订阅信息
"""
Subscribe.delete(db, subscribe_id)
return schemas.Response(success=True)
@router.post("/seerr", summary="OverSeerr/JellySeerr通知订阅", response_model=schemas.Response)
async def seerr_subscribe(request: Request, background_tasks: BackgroundTasks,
authorization: str = Header(None)) -> Any:
@@ -312,3 +317,177 @@ async def seerr_subscribe(request: Request, background_tasks: BackgroundTasks,
username=user_name)
return schemas.Response(success=True)
@router.get("/history/{mtype}", summary="查询订阅历史", response_model=List[schemas.Subscribe])
def subscribe_history(
mtype: str,
page: int = 1,
count: int = 30,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询电影/电视剧订阅历史
"""
return SubscribeHistory.list_by_type(db, mtype=mtype, page=page, count=count)
@router.delete("/history/{history_id}", summary="删除订阅历史", response_model=schemas.Response)
def delete_subscribe(
history_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)
) -> Any:
"""
删除订阅历史
"""
SubscribeHistory.delete(db, history_id)
return schemas.Response(success=True)
@router.get("/popular", summary="热门订阅(基于用户共享数据)", response_model=List[schemas.MediaInfo])
def popular_subscribes(
stype: str,
page: int = 1,
count: int = 30,
min_sub: int = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询热门订阅
"""
subscribes = SubscribeHelper().get_statistic(stype=stype, page=page, count=count)
if subscribes:
ret_medias = []
for sub in subscribes:
# 订阅人数
count = sub.get("count")
if min_sub and count < min_sub:
continue
media = MediaInfo()
media.type = MediaType(sub.get("type"))
media.tmdb_id = sub.get("tmdbid")
# 处理标题
title = sub.get("name")
season = sub.get("season")
if season and int(season) > 1 and media.tmdb_id:
# 阿拉伯数字转中文小写数字
season_str = cn2an.an2cn(season, "low")
title = f"{title}{season_str}"
media.title = title
media.year = sub.get("year")
media.douban_id = sub.get("doubanid")
media.bangumi_id = sub.get("bangumiid")
media.tvdb_id = sub.get("tvdbid")
media.imdb_id = sub.get("imdbid")
media.season = sub.get("season")
media.overview = sub.get("description")
media.vote_average = sub.get("vote")
media.poster_path = sub.get("poster")
media.backdrop_path = sub.get("backdrop")
media.popularity = count
ret_medias.append(media)
return [media.to_dict() for media in ret_medias]
return []
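The season handling above leans on `cn2an.an2cn(season, "low")` to turn 4 into 四 before appending it to the title. A dependency-free sketch of the idea follows; the " 第N季" suffix format is an assumption for illustration (the diff appends `season_str` directly), and only seasons 2-10 are covered:

```python
from typing import Optional

def season_suffix(title: str, season: Optional[int]) -> str:
    """Append a Chinese-numeral season marker for seasons > 1.

    Tiny stand-in for cn2an.an2cn(season, "low"); handles 2-10 only.
    """
    digits = "零一二三四五六七八九"
    if not season or int(season) <= 1:
        return title
    n = int(season)
    season_str = "十" if n == 10 else digits[n]
    return f"{title} 第{season_str}季"
```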
@router.get("/user/{username}", summary="用户订阅", response_model=List[schemas.Subscribe])
def user_subscribes(
username: str,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询用户订阅
"""
return Subscribe.list_by_username(db, username)
@router.get("/files/{subscribe_id}", summary="订阅相关文件信息", response_model=schemas.SubscrbieInfo)
def subscribe_files(
subscribe_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
订阅相关文件信息
"""
subscribe = Subscribe.get(db, subscribe_id)
if subscribe:
return SubscribeChain().subscribe_files_info(subscribe)
return schemas.SubscrbieInfo()
@router.post("/share", summary="分享订阅", response_model=schemas.Response)
def subscribe_share(
sub: schemas.SubscribeShare,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
分享订阅
"""
state, errmsg = SubscribeHelper().sub_share(subscribe_id=sub.subscribe_id,
share_title=sub.share_title,
share_comment=sub.share_comment,
share_user=sub.share_user)
return schemas.Response(success=state, message=errmsg)
@router.post("/fork", summary="复用订阅", response_model=schemas.Response)
def subscribe_fork(
sub: schemas.SubscribeShare,
current_user: User = Depends(get_current_active_user)) -> Any:
"""
复用订阅
"""
sub_dict = sub.dict()
sub_dict.pop("id")
for key in list(sub_dict.keys()):
if not hasattr(schemas.Subscribe(), key):
sub_dict.pop(key)
result = create_subscribe(subscribe_in=schemas.Subscribe(**sub_dict),
current_user=current_user)
if result.success:
SubscribeHelper().sub_fork(share_id=sub.id)
return result
@router.get("/shares", summary="查询分享的订阅", response_model=List[schemas.SubscribeShare])
def popular_subscribes(
name: str = None,
page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询分享的订阅
"""
return SubscribeHelper().get_shares(name=name, page=page, count=count)
@router.get("/{subscribe_id}", summary="订阅详情", response_model=schemas.Subscribe)
def read_subscribe(
subscribe_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据订阅编号查询订阅信息
"""
if not subscribe_id:
return Subscribe()
return Subscribe.get(db, subscribe_id)
@router.delete("/{subscribe_id}", summary="删除订阅", response_model=schemas.Response)
def delete_subscribe(
subscribe_id: int,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)
) -> Any:
"""
删除订阅信息
"""
subscribe = Subscribe.get(db, subscribe_id)
if subscribe:
subscribe.delete(db, subscribe_id)
# 统计订阅
SubscribeHelper().sub_done_async({
"tmdbid": subscribe.tmdbid,
"doubanid": subscribe.doubanid
})
return schemas.Response(success=True)


@@ -1,150 +1,397 @@
import asyncio
import io
import json
import time
import tempfile
from collections import deque
from datetime import datetime
from typing import Union, Any
from pathlib import Path
from typing import Optional, Union
import tailer
from fastapi import APIRouter, HTTPException, Depends, Response
import aiofiles
from PIL import Image
from fastapi import APIRouter, Depends, HTTPException, Header, Request, Response
from fastapi.responses import StreamingResponse
from app import schemas
from app.chain.search import SearchChain
from app.core.config import settings
from app.core.security import verify_token
from app.chain.system import SystemChain
from app.core.config import global_vars, settings
from app.core.module import ModuleManager
from app.core.security import verify_apitoken, verify_resource_token, verify_token
from app.db.models import User
from app.db.systemconfig_oper import SystemConfigOper
from app.db.user_oper import get_current_active_superuser
from app.helper.mediaserver import MediaServerHelper
from app.helper.message import MessageHelper
from app.helper.progress import ProgressHelper
from app.helper.rule import RuleHelper
from app.helper.sites import SitesHelper
from app.log import logger
from app.monitor import Monitor
from app.scheduler import Scheduler
from app.schemas.types import SystemConfigKey
from app.utils.crypto import HashUtils
from app.utils.http import RequestUtils
from app.utils.security import SecurityUtils
from app.utils.system import SystemUtils
from app.utils.url import UrlUtils
from version import APP_VERSION
router = APIRouter()
@router.get("/img/{imgurl:path}/{proxy}", summary="图片代理")
def get_img(imgurl: str, proxy: bool = False) -> Any:
def fetch_image(
url: str,
proxy: bool = False,
use_disk_cache: bool = False,
if_none_match: Optional[str] = None,
allowed_domains: Optional[set[str]] = None) -> Response:
"""
通过图片代理(使用代理服务器)
处理图片缓存逻辑,支持HTTP缓存和磁盘缓存
"""
if not imgurl:
return None
if proxy:
response = RequestUtils(ua=settings.USER_AGENT, proxies=settings.PROXY).get_res(url=imgurl)
else:
response = RequestUtils(ua=settings.USER_AGENT).get_res(url=imgurl)
if response:
return Response(content=response.content, media_type="image/jpeg")
return None
if not url:
raise HTTPException(status_code=404, detail="URL not provided")
if allowed_domains is None:
allowed_domains = set(settings.SECURITY_IMAGE_DOMAINS)
# 验证URL安全性
if not SecurityUtils.is_safe_url(url, allowed_domains):
raise HTTPException(status_code=404, detail="Unsafe URL")
# 后续观察系统性能表现,如果发现磁盘缓存和HTTP缓存无法满足高并发情况下的响应速度需求,可以考虑重新引入内存缓存
cache_path = None
if use_disk_cache:
# 生成缓存路径
sanitized_path = SecurityUtils.sanitize_url_path(url)
cache_path = settings.CACHE_PATH / "images" / sanitized_path
# 确保缓存路径和文件类型合法
if not SecurityUtils.is_safe_path(settings.CACHE_PATH, cache_path, settings.SECURITY_IMAGE_SUFFIXES):
raise HTTPException(status_code=400, detail="Invalid cache path or file type")
# 目前暂不考虑磁盘缓存文件是否过期,后续通过缓存清理机制处理
if cache_path.exists():
try:
content = cache_path.read_bytes()
etag = HashUtils.md5(content)
headers = RequestUtils.generate_cache_headers(etag, max_age=86400 * 7)
if if_none_match == etag:
return Response(status_code=304, headers=headers)
return Response(content=content, media_type="image/jpeg", headers=headers)
except Exception as e:
# 如果读取磁盘缓存发生异常,这里仅记录日志,尝试再次请求远端进行处理
logger.debug(f"Failed to read cache file {cache_path}: {e}")
# 请求远程图片
referer = "https://movie.douban.com/" if "doubanio.com" in url else None
proxies = settings.PROXY if proxy else None
response = RequestUtils(ua=settings.USER_AGENT, proxies=proxies, referer=referer).get_res(url=url)
if not response:
raise HTTPException(status_code=502, detail="Failed to fetch the image from the remote server")
# 验证下载的内容是否为有效图片
try:
Image.open(io.BytesIO(response.content)).verify()
except Exception as e:
logger.debug(f"Invalid image format for URL {url}: {e}")
raise HTTPException(status_code=502, detail="Invalid image format")
content = response.content
response_headers = response.headers
cache_control_header = response_headers.get("Cache-Control", "")
cache_directive, max_age = RequestUtils.parse_cache_control(cache_control_header)
# 如果需要使用磁盘缓存,则保存到磁盘
if use_disk_cache and cache_path:
try:
if not cache_path.parent.exists():
cache_path.parent.mkdir(parents=True, exist_ok=True)
with tempfile.NamedTemporaryFile(dir=cache_path.parent, delete=False) as tmp_file:
tmp_file.write(content)
temp_path = Path(tmp_file.name)
temp_path.replace(cache_path)
except Exception as e:
logger.debug(f"Failed to write cache file {cache_path}: {e}")
# 检查 If-None-Match
etag = HashUtils.md5(content)
if if_none_match == etag:
headers = RequestUtils.generate_cache_headers(etag, cache_directive, max_age)
return Response(status_code=304, headers=headers)
headers = RequestUtils.generate_cache_headers(etag, cache_directive, max_age)
return Response(
content=content,
media_type=response_headers.get("Content-Type") or UrlUtils.get_mime_type(url, "image/jpeg"),
headers=headers
)
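The ETag handshake in `fetch_image` — hash the body, compare against the client's `If-None-Match`, and short-circuit with an empty 304 on a match — reduces to a few lines. `HashUtils.md5` is assumed to produce a hex MD5 digest of the content:

```python
import hashlib
from typing import Optional, Tuple

def make_etag(content: bytes) -> str:
    # assumed equivalent of HashUtils.md5(content): hex MD5 digest
    return hashlib.md5(content).hexdigest()

def conditional_response(content: bytes,
                         if_none_match: Optional[str] = None) -> Tuple[int, bytes]:
    """Return (status, body): 304 with an empty body when the client's ETag matches."""
    etag = make_etag(content)
    if if_none_match == etag:
        return 304, b""
    return 200, content
```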
@router.get("/env", summary="查询系统环境变量", response_model=schemas.Response)
def get_env_setting(_: schemas.TokenPayload = Depends(verify_token)):
@router.get("/img/{proxy}", summary="图片代理")
def proxy_img(
imgurl: str,
proxy: bool = False,
if_none_match: Optional[str] = Header(None),
_: schemas.TokenPayload = Depends(verify_resource_token)
) -> Response:
"""
查询系统环境变量,包括当前版本号
图片代理,可选是否使用代理服务器,支持 HTTP 缓存
"""
# 媒体服务器添加图片代理支持
hosts = [config.config.get("host") for config in MediaServerHelper().get_configs().values() if
config and config.config and config.config.get("host")]
allowed_domains = set(settings.SECURITY_IMAGE_DOMAINS) | set(hosts)
return fetch_image(url=imgurl, proxy=proxy, use_disk_cache=False,
if_none_match=if_none_match, allowed_domains=allowed_domains)
@router.get("/cache/image", summary="图片缓存")
def cache_img(
url: str,
if_none_match: Optional[str] = Header(None),
_: schemas.TokenPayload = Depends(verify_resource_token)
) -> Response:
"""
本地缓存图片文件,支持 HTTP 缓存,如果启用全局图片缓存,则使用磁盘缓存
"""
# 如果没有启用全局图片缓存,则不使用磁盘缓存
return fetch_image(url=url, proxy=False, use_disk_cache=settings.GLOBAL_IMAGE_CACHE, if_none_match=if_none_match)
@router.get("/global", summary="查询非敏感系统设置", response_model=schemas.Response)
def get_global_setting():
"""
查询非敏感系统设置(无需鉴权)
"""
# FIXME: 新增敏感配置项时要在此处添加排除项
info = settings.dict(
exclude={"SECRET_KEY", "RESOURCE_SECRET_KEY", "API_TOKEN", "TMDB_API_KEY", "TVDB_API_KEY", "FANART_API_KEY",
"COOKIECLOUD_KEY", "COOKIECLOUD_PASSWORD", "GITHUB_TOKEN", "REPO_GITHUB_TOKEN"}
)
return schemas.Response(success=True,
data=info)
@router.get("/env", summary="查询系统配置", response_model=schemas.Response)
def get_env_setting(_: User = Depends(get_current_active_superuser)):
"""
查询系统环境变量,包括当前版本号(仅管理员)
"""
info = settings.dict(
exclude={"SECRET_KEY", "SUPERUSER_PASSWORD", "API_TOKEN"}
exclude={"SECRET_KEY", "RESOURCE_SECRET_KEY"}
)
info.update({
"VERSION": APP_VERSION,
"AUTH_VERSION": SitesHelper().auth_version,
"INDEXER_VERSION": SitesHelper().indexer_version,
"FRONTEND_VERSION": SystemChain().get_frontend_version()
})
return schemas.Response(success=True,
data=info)
@router.post("/env", summary="更新系统配置", response_model=schemas.Response)
def set_env_setting(env: dict,
_: User = Depends(get_current_active_superuser)):
"""
更新系统环境变量(仅管理员)
"""
result = settings.update_settings(env=env)
# 统计成功和失败的结果
success_updates = {k: v for k, v in result.items() if v[0]}
failed_updates = {k: v for k, v in result.items() if not v[0]}
if failed_updates:
return schemas.Response(
success=False,
message="部分配置项更新失败",
data={
"success_updates": success_updates,
"failed_updates": failed_updates
}
)
return schemas.Response(
success=True,
message="所有配置项更新成功",
data={
"success_updates": success_updates
}
)
@router.get("/progress/{process_type}", summary="实时进度")
def get_progress(process_type: str, token: str):
async def get_progress(request: Request, process_type: str, _: schemas.TokenPayload = Depends(verify_resource_token)):
"""
实时获取处理进度,返回格式为SSE
"""
if not token or not verify_token(token):
raise HTTPException(
status_code=403,
detail="认证失败!",
)
progress = ProgressHelper()
def event_generator():
while True:
detail = progress.get(process_type)
yield 'data: %s\n\n' % json.dumps(detail)
time.sleep(0.2)
async def event_generator():
try:
while not global_vars.is_system_stopped:
if await request.is_disconnected():
break
detail = progress.get(process_type)
yield f"data: {json.dumps(detail)}\n\n"
await asyncio.sleep(0.2)
except asyncio.CancelledError:
return
return StreamingResponse(event_generator(), media_type="text/event-stream")
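Each frame that `event_generator` yields is a plain Server-Sent Events message: a `data:` line holding the JSON payload, terminated by a blank line. A minimal formatter:

```python
import json

def sse_frame(payload) -> str:
    """Format one Server-Sent Events message: 'data: <json>' plus a blank line."""
    return f"data: {json.dumps(payload)}\n\n"
```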
@router.get("/setting/{key}", summary="查询系统设置", response_model=schemas.Response)
def get_setting(key: str,
_: schemas.TokenPayload = Depends(verify_token)):
_: User = Depends(get_current_active_superuser)):
"""
查询系统设置
查询系统设置(仅管理员)
"""
if hasattr(settings, key):
value = getattr(settings, key)
else:
value = SystemConfigOper().get(key)
return schemas.Response(success=True, data={
"value": SystemConfigOper().get(key)
"value": value
})
@router.post("/setting/{key}", summary="更新系统设置", response_model=schemas.Response)
def set_setting(key: str, value: Union[list, dict, str, int] = None,
_: schemas.TokenPayload = Depends(verify_token)):
def set_setting(key: str, value: Union[list, dict, bool, int, str] = None,
_: User = Depends(get_current_active_superuser)):
"""
更新系统设置
更新系统设置(仅管理员)
"""
SystemConfigOper().set(key, value)
return schemas.Response(success=True)
if hasattr(settings, key):
success, message = settings.update_setting(key=key, value=value)
return schemas.Response(success=success, message=message)
elif key in {item.value for item in SystemConfigKey}:
SystemConfigOper().set(key, value)
return schemas.Response(success=True)
else:
return schemas.Response(success=False, message=f"配置项 '{key}' 不存在")
@router.get("/message", summary="实时消息")
def get_message(token: str):
async def get_message(request: Request, role: str = "system", _: schemas.TokenPayload = Depends(verify_resource_token)):
"""
实时获取系统消息,返回格式为SSE
"""
if not token or not verify_token(token):
raise HTTPException(
status_code=403,
detail="认证失败!",
)
message = MessageHelper()
def event_generator():
while True:
detail = message.get()
yield 'data: %s\n\n' % (detail or '')
time.sleep(3)
async def event_generator():
try:
while not global_vars.is_system_stopped:
if await request.is_disconnected():
break
detail = message.get(role)
yield f"data: {detail or ''}\n\n"
await asyncio.sleep(3)
except asyncio.CancelledError:
return
return StreamingResponse(event_generator(), media_type="text/event-stream")
@router.get("/logging", summary="实时日志")
def get_logging(token: str):
async def get_logging(request: Request, length: int = 50, logfile: str = "moviepilot.log",
_: schemas.TokenPayload = Depends(verify_resource_token)):
"""
实时获取系统日志,返回格式为SSE
实时获取系统日志
length = -1 时, 返回text/plain
否则 返回格式SSE
"""
if not token or not verify_token(token):
raise HTTPException(
status_code=403,
detail="认证失败!",
)
log_path = settings.LOG_PATH / logfile
def log_generator():
log_path = settings.LOG_PATH / 'moviepilot.log'
# 读取文件末尾50行不使用tailer模块
with open(log_path, 'r', encoding='utf-8') as f:
for line in f.readlines()[-50:]:
yield 'data: %s\n\n' % line
while True:
for text in tailer.follow(open(log_path, 'r', encoding='utf-8')):
yield 'data: %s\n\n' % (text or '')
time.sleep(1)
if not SecurityUtils.is_safe_path(settings.LOG_PATH, log_path, allowed_suffixes={".log"}):
raise HTTPException(status_code=404, detail="Not Found")
return StreamingResponse(log_generator(), media_type="text/event-stream")
if not log_path.exists() or not log_path.is_file():
raise HTTPException(status_code=404, detail="Not Found")
async def log_generator():
try:
# 使用固定大小的双向队列来限制内存使用
lines_queue = deque(maxlen=max(length, 50))
# 使用 aiofiles 异步读取文件
async with aiofiles.open(log_path, mode="r", encoding="utf-8") as f:
# 逐行读取文件,将每一行存入队列
file_content = await f.read()
for line in file_content.splitlines():
lines_queue.append(line)
for line in lines_queue:
yield f"data: {line}\n\n"
# 移动文件指针到文件末尾,继续监听新增内容
await f.seek(0, 2)
while not global_vars.is_system_stopped:
if await request.is_disconnected():
break
line = await f.readline()
if not line:
await asyncio.sleep(0.5)
continue
yield f"data: {line}\n\n"
except asyncio.CancelledError:
return
# 根据length参数返回不同的响应
if length == -1:
# 返回全部日志作为文本响应
if not log_path.exists():
return Response(content="日志文件不存在!", media_type="text/plain")
with open(log_path, "r", encoding='utf-8') as file:
text = file.read()
# 倒序输出
text = "\n".join(text.split("\n")[::-1])
return Response(content=text, media_type="text/plain")
else:
# 返回SSE流响应
return StreamingResponse(log_generator(), media_type="text/event-stream")
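The bounded tail in `log_generator` relies on `deque(maxlen=...)` silently discarding the oldest entries as new lines are appended, capping memory regardless of log size. The same idea on a plain string:

```python
from collections import deque
from typing import List

def tail_lines(text: str, length: int = 50) -> List[str]:
    """Keep only the last max(length, 50) lines, like the bounded queue above."""
    q = deque(maxlen=max(length, 50))
    for line in text.splitlines():
        q.append(line)
    return list(q)
```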
@router.get("/versions", summary="查询Github所有Release版本", response_model=schemas.Response)
def latest_version(_: schemas.TokenPayload = Depends(verify_token)):
"""
查询Github所有Release版本
"""
version_res = RequestUtils(proxies=settings.PROXY, headers=settings.GITHUB_HEADERS).get_res(
f"https://api.github.com/repos/jxxghp/MoviePilot/releases")
if version_res:
ver_json = version_res.json()
if ver_json:
return schemas.Response(success=True, data=ver_json)
return schemas.Response(success=False)
@router.get("/ruletest", summary="过滤规则测试", response_model=schemas.Response)
def ruletest(title: str,
rulegroup_name: str,
subtitle: str = None,
_: schemas.TokenPayload = Depends(verify_token)):
"""
过滤规则测试,使用指定规则组进行过滤
"""
torrent = schemas.TorrentInfo(
title=title,
description=subtitle,
)
# 查询规则组详情
rulegroup = RuleHelper().get_rule_group(rulegroup_name)
if not rulegroup:
return schemas.Response(success=False, message=f"过滤规则组 {rulegroup_name} 不存在!")
# 过滤
result = SearchChain().filter_torrents(rule_groups=[rulegroup.name],
torrent_list=[torrent])
if not result:
return schemas.Response(success=False, message="不符合过滤规则!")
return schemas.Response(success=True, data={
"priority": 100 - result[0].pri_order + 1
})
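The priority returned by `ruletest` inverts the internal `pri_order`, so that order 1 (the best match) displays as 100. As a one-liner, under the assumption that `pri_order` ranges from 1 to 100:

```python
def display_priority(pri_order: int) -> int:
    """Invert internal rule order so pri_order 1 (best) shows as priority 100."""
    return 100 - pri_order + 1
```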
@router.get("/nettest", summary="测试网络连通性")
@@ -174,70 +421,74 @@ def nettest(url: str,
return schemas.Response(success=False, message="网络连接失败!")
@router.get("/versions", summary="查询Github所有Release版本", response_model=schemas.Response)
def latest_version(_: schemas.TokenPayload = Depends(verify_token)):
@router.get("/modulelist", summary="查询已加载的模块ID列表", response_model=schemas.Response)
def modulelist(_: schemas.TokenPayload = Depends(verify_token)):
"""
查询Github所有Release版本
查询已加载的模块ID列表
"""
version_res = RequestUtils(proxies=settings.PROXY, headers=settings.GITHUB_HEADERS).get_res(
f"https://api.github.com/repos/jxxghp/MoviePilot/releases")
if version_res:
ver_json = version_res.json()
if ver_json:
return schemas.Response(success=True, data=ver_json)
return schemas.Response(success=False)
@router.get("/ruletest", summary="优先级规则测试", response_model=schemas.Response)
def ruletest(title: str,
subtitle: str = None,
ruletype: str = None,
_: schemas.TokenPayload = Depends(verify_token)):
"""
过滤规则测试,规则类型:1-订阅,2-洗版,3-搜索
"""
torrent = schemas.TorrentInfo(
title=title,
description=subtitle,
)
if ruletype == "2":
rule_string = SystemConfigOper().get(SystemConfigKey.BestVersionFilterRules)
elif ruletype == "3":
rule_string = SystemConfigOper().get(SystemConfigKey.SearchFilterRules)
else:
rule_string = SystemConfigOper().get(SystemConfigKey.SubscribeFilterRules)
if not rule_string:
return schemas.Response(success=False, message="优先级规则未设置!")
# 过滤
result = SearchChain().filter_torrents(rule_string=rule_string,
torrent_list=[torrent])
if not result:
return schemas.Response(success=False, message="不符合优先级规则!")
modules = [{
"id": k,
"name": v.get_name(),
} for k, v in ModuleManager().get_modules().items()]
return schemas.Response(success=True, data={
"priority": 100 - result[0].pri_order + 1
"modules": modules
})
@router.get("/restart", summary="重启系统", response_model=schemas.Response)
def restart_system(_: schemas.TokenPayload = Depends(verify_token)):
@router.get("/moduletest/{moduleid}", summary="模块可用性测试", response_model=schemas.Response)
def moduletest(moduleid: str, _: schemas.TokenPayload = Depends(verify_token)):
"""
重启系统
模块可用性测试接口
"""
state, errmsg = ModuleManager().test(moduleid)
return schemas.Response(success=state, message=errmsg)
@router.get("/restart", summary="重启系统", response_model=schemas.Response)
def restart_system(_: User = Depends(get_current_active_superuser)):
"""
重启系统(仅管理员)
"""
if not SystemUtils.can_restart():
return schemas.Response(success=False, message="当前运行环境不支持重启操作!")
# 标识停止事件
global_vars.stop_system()
# 执行重启
ret, msg = SystemUtils.restart()
return schemas.Response(success=ret, message=msg)
@router.get("/runscheduler", summary="运行服务", response_model=schemas.Response)
def execute_command(jobid: str,
_: schemas.TokenPayload = Depends(verify_token)):
@router.get("/reload", summary="重新加载模块", response_model=schemas.Response)
def reload_module(_: User = Depends(get_current_active_superuser)):
"""
执行命令
重新加载模块(仅管理员)
"""
ModuleManager().reload()
Scheduler().init()
Monitor().init()
return schemas.Response(success=True)
@router.get("/runscheduler", summary="运行服务", response_model=schemas.Response)
def run_scheduler(jobid: str,
_: User = Depends(get_current_active_superuser)):
"""
执行命令(仅管理员)
"""
if not jobid:
return schemas.Response(success=False, message="命令不能为空!")
Scheduler().start(jobid)
return schemas.Response(success=True)
@router.get("/runscheduler2", summary="运行服务API_TOKEN", response_model=schemas.Response)
def run_scheduler2(jobid: str,
_: str = Depends(verify_apitoken)):
"""
执行命令API_TOKEN认证
"""
if not jobid:
return schemas.Response(success=False, message="命令不能为空!")
Scheduler().start(jobid)
return schemas.Response(success=True)

View File

@@ -4,7 +4,6 @@ from fastapi import APIRouter, Depends
from app import schemas
from app.chain.tmdb import TmdbChain
from app.core.context import MediaInfo
from app.core.security import verify_token
from app.schemas.types import MediaType
@@ -17,10 +16,9 @@ def tmdb_seasons(tmdbid: int, _: schemas.TokenPayload = Depends(verify_token)) -
根据TMDBID查询themoviedb所有季信息
"""
seasons_info = TmdbChain().tmdb_seasons(tmdbid=tmdbid)
if not seasons_info:
return []
else:
if seasons_info:
return seasons_info
return []
@router.get("/similar/{tmdbid}/{type_name}", summary="类似电影/电视剧", response_model=List[schemas.MediaInfo])
@@ -32,15 +30,14 @@ def tmdb_similar(tmdbid: int,
"""
mediatype = MediaType(type_name)
if mediatype == MediaType.MOVIE:
tmdbinfos = TmdbChain().movie_similar(tmdbid=tmdbid)
medias = TmdbChain().movie_similar(tmdbid=tmdbid)
elif mediatype == MediaType.TV:
tmdbinfos = TmdbChain().tv_similar(tmdbid=tmdbid)
medias = TmdbChain().tv_similar(tmdbid=tmdbid)
else:
return []
if not tmdbinfos:
return []
else:
return [MediaInfo(tmdb_info=tmdbinfo).to_dict() for tmdbinfo in tmdbinfos]
if medias:
return [media.to_dict() for media in medias]
return []
@router.get("/recommend/{tmdbid}/{type_name}", summary="推荐电影/电视剧", response_model=List[schemas.MediaInfo])
@@ -52,18 +49,17 @@ def tmdb_recommend(tmdbid: int,
"""
mediatype = MediaType(type_name)
if mediatype == MediaType.MOVIE:
tmdbinfos = TmdbChain().movie_recommend(tmdbid=tmdbid)
medias = TmdbChain().movie_recommend(tmdbid=tmdbid)
elif mediatype == MediaType.TV:
tmdbinfos = TmdbChain().tv_recommend(tmdbid=tmdbid)
medias = TmdbChain().tv_recommend(tmdbid=tmdbid)
else:
return []
if not tmdbinfos:
return []
else:
return [MediaInfo(tmdb_info=tmdbinfo).to_dict() for tmdbinfo in tmdbinfos]
if medias:
return [media.to_dict() for media in medias]
return []
@router.get("/credits/{tmdbid}/{type_name}", summary="演员阵容", response_model=List[schemas.TmdbPerson])
@router.get("/credits/{tmdbid}/{type_name}", summary="演员阵容", response_model=List[schemas.MediaPerson])
def tmdb_credits(tmdbid: int,
type_name: str,
page: int = 1,
@@ -73,28 +69,21 @@ def tmdb_credits(tmdbid: int,
"""
mediatype = MediaType(type_name)
if mediatype == MediaType.MOVIE:
tmdbinfos = TmdbChain().movie_credits(tmdbid=tmdbid, page=page)
persons = TmdbChain().movie_credits(tmdbid=tmdbid, page=page)
elif mediatype == MediaType.TV:
tmdbinfos = TmdbChain().tv_credits(tmdbid=tmdbid, page=page)
persons = TmdbChain().tv_credits(tmdbid=tmdbid, page=page)
else:
return []
if not tmdbinfos:
return []
else:
return [schemas.TmdbPerson(**tmdbinfo) for tmdbinfo in tmdbinfos]
return persons or []
@router.get("/person/{person_id}", summary="人物详情", response_model=schemas.TmdbPerson)
@router.get("/person/{person_id}", summary="人物详情", response_model=schemas.MediaPerson)
def tmdb_person(person_id: int,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据人物ID查询人物详情
"""
tmdbinfo = TmdbChain().person_detail(person_id=person_id)
if not tmdbinfo:
return schemas.TmdbPerson()
else:
return schemas.TmdbPerson(**tmdbinfo)
return TmdbChain().person_detail(person_id=person_id)
@router.get("/person/credits/{person_id}", summary="人物参演作品", response_model=List[schemas.MediaInfo])
@@ -104,11 +93,10 @@ def tmdb_person_credits(person_id: int,
"""
根据人物ID查询人物参演作品
"""
tmdbinfo = TmdbChain().person_credits(person_id=person_id, page=page)
if not tmdbinfo:
return []
else:
return [MediaInfo(tmdb_info=tmdbinfo).to_dict() for tmdbinfo in tmdbinfo]
medias = TmdbChain().person_credits(person_id=person_id, page=page)
if medias:
return [media.to_dict() for media in medias]
return []
@router.get("/movies", summary="TMDB电影", response_model=List[schemas.MediaInfo])
@@ -127,7 +115,7 @@ def tmdb_movies(sort_by: str = "popularity.desc",
page=page)
if not movies:
return []
return [MediaInfo(tmdb_info=movie).to_dict() for movie in movies]
return [movie.to_dict() for movie in movies]
@router.get("/tvs", summary="TMDB剧集", response_model=List[schemas.MediaInfo])
@@ -146,7 +134,7 @@ def tmdb_tvs(sort_by: str = "popularity.desc",
page=page)
if not tvs:
return []
return [MediaInfo(tmdb_info=tv).to_dict() for tv in tvs]
return [tv.to_dict() for tv in tvs]
@router.get("/trending", summary="TMDB流行趋势", response_model=List[schemas.MediaInfo])
@@ -158,7 +146,7 @@ def tmdb_trending(page: int = 1,
infos = TmdbChain().tmdb_trending(page=page)
if not infos:
return []
return [MediaInfo(tmdb_info=info).to_dict() for info in infos]
return [info.to_dict() for info in infos]
@router.get("/{tmdbid}/{season}", summary="TMDB季所有集", response_model=List[schemas.TmdbEpisode])
@@ -167,8 +155,4 @@ def tmdb_season_episodes(tmdbid: int, season: int,
"""
根据TMDBID查询某季的所有集信息
"""
episodes_info = TmdbChain().tmdb_episodes(tmdbid=tmdbid, season=season)
if not episodes_info:
return []
else:
return episodes_info
return TmdbChain().tmdb_episodes(tmdbid=tmdbid, season=season)

View File

@@ -1,97 +1,153 @@
from pathlib import Path
from typing import Any
from typing import Any, Optional
from fastapi import APIRouter, Depends
from pydantic import BaseModel
from sqlalchemy.orm import Session
from app import schemas
from app.chain.media import MediaChain
from app.chain.storage import StorageChain
from app.chain.transfer import TransferChain
from app.core.security import verify_token
from app.core.metainfo import MetaInfoPath
from app.core.security import verify_token, verify_apitoken
from app.db import get_db
from app.db.models.transferhistory import TransferHistory
from app.schemas import MediaType
from app.db.user_oper import get_current_active_superuser
from app.schemas import MediaType, FileItem
router = APIRouter()
class ManualTransferItem(BaseModel):
fileitem: Optional[FileItem] = None
logid: Optional[int] = None
target_storage: Optional[str] = None
target_path: Optional[str] = None
tmdbid: Optional[int] = None
doubanid: Optional[str] = None
type_name: Optional[str] = None
season: Optional[int] = None
transfer_type: Optional[str] = None
episode_format: Optional[str] = None
episode_detail: Optional[str] = None
episode_part: Optional[str] = None
episode_offset: Optional[int] = 0
min_filesize: Optional[int] = 0
scrape: bool = False
from_history: bool = False
@router.get("/name", summary="查询整理后的名称", response_model=schemas.Response)
def query_name(path: str, filetype: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询整理后的名称
:param path: 文件路径
:param filetype: 文件类型
:param _: Token校验
"""
meta = MetaInfoPath(Path(path))
mediainfo = MediaChain().recognize_media(meta)
if not mediainfo:
return schemas.Response(success=False, message="未识别到媒体信息")
new_path = TransferChain().recommend_name(meta=meta, mediainfo=mediainfo)
if not new_path:
return schemas.Response(success=False, message="未识别到新名称")
if filetype == "dir":
parents = Path(new_path).parents
if len(parents) > 2:
new_name = parents[1].name
else:
new_name = parents[0].name
else:
new_name = Path(new_path).name
return schemas.Response(success=True, data={
"name": new_name
})
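The directory branch of query_name walks up the recommended path to pick a folder name; that parent-selection rule can be isolated as a small helper (hypothetical name `folder_name`), which returns the grandparent for season-style paths and the immediate parent otherwise:

```python
from pathlib import Path

def folder_name(new_path: str) -> str:
    """Return the library folder for a recommended path: the grandparent when the
    path is at least two levels deep (e.g. Show/Season 1/file.mkv), else the parent."""
    parents = Path(new_path).parents
    return parents[1].name if len(parents) > 2 else parents[0].name
```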
@router.post("/manual", summary="手动转移", response_model=schemas.Response)
def manual_transfer(path: str = None,
logid: int = None,
target: str = None,
tmdbid: int = None,
type_name: str = None,
season: int = None,
transfer_type: str = None,
episode_format: str = None,
episode_detail: str = None,
episode_part: str = None,
episode_offset: int = 0,
min_filesize: int = 0,
def manual_transfer(transer_item: ManualTransferItem,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
_: schemas.TokenPayload = Depends(get_current_active_superuser)) -> Any:
"""
手动转移,文件或历史记录,支持自定义剧集识别格式
:param path: 转移路径或文件
:param logid: 转移历史记录ID
:param target: 目标路径
:param type_name: 媒体类型、电影/电视剧
:param tmdbid: tmdbid
:param season: 剧集季号
:param transfer_type: 转移类型move/copy
:param episode_format: 剧集识别格式
:param episode_detail: 剧集识别详细信息
:param episode_part: 剧集识别分集信息
:param episode_offset: 剧集识别偏移量
:param min_filesize: 最小文件大小(MB)
:param transer_item: 手工整理项
:param db: 数据库
:param _: Token校验
"""
force = False
target = Path(target) if target else None
transfer = TransferChain()
if logid:
target_path = Path(transer_item.target_path) if transer_item.target_path else None
if transer_item.logid:
# 查询历史记录
history: TransferHistory = TransferHistory.get(db, logid)
history: TransferHistory = TransferHistory.get(db, transer_item.logid)
if not history:
return schemas.Response(success=False, message=f"历史记录不存在,ID:{logid}")
return schemas.Response(success=False, message=f"历史记录不存在,ID:{transer_item.logid}")
# 强制转移
force = True
# 源路径
in_path = Path(history.src)
# 目的路径
if history.dest and str(history.dest) != "None":
# 删除旧的已整理文件
transfer.delete_files(Path(history.dest))
if not target:
target = transfer.get_root_path(path=history.dest,
type_name=history.type,
category=history.category)
elif path:
in_path = Path(path)
if history.status and ("move" in history.mode):
# 重新整理成功的转移,则使用成功的 dest 做 in_path
src_fileitem = FileItem(**history.dest_fileitem)
else:
# 源路径
src_fileitem = FileItem(**history.src_fileitem)
# 目的路径
if history.dest_fileitem:
# 删除旧的已整理文件
dest_fileitem = FileItem(**history.dest_fileitem)
state = StorageChain().delete_media_file(dest_fileitem, mtype=MediaType(history.type))
if not state:
return schemas.Response(success=False, message=f"{dest_fileitem.path} 删除失败")
# 从历史数据获取信息
if transer_item.from_history:
transer_item.type_name = history.type if history.type else transer_item.type_name
transer_item.tmdbid = int(history.tmdbid) if history.tmdbid else transer_item.tmdbid
transer_item.doubanid = str(history.doubanid) if history.doubanid else transer_item.doubanid
transer_item.season = int(str(history.seasons).replace("S", "")) if history.seasons else transer_item.season
if history.episodes:
if "-" in str(history.episodes):
# E01-E03多集合并
episode_start, episode_end = str(history.episodes).split("-")
episode_list: list[int] = []
for i in range(int(episode_start.replace("E", "")), int(episode_end.replace("E", "")) + 1):
episode_list.append(i)
transer_item.episode_detail = ",".join(str(e) for e in episode_list)
else:
# E01单集
transer_item.episode_detail = str(history.episodes).replace("E", "")
elif transer_item.fileitem:
src_fileitem = transer_item.fileitem
else:
return schemas.Response(success=False, message=f"缺少参数path/logid")
return schemas.Response(success=False, message="缺少参数fileitem或logid")
# 类型
mtype = MediaType(type_name) if type_name else None
mtype = MediaType(transer_item.type_name) if transer_item.type_name else None
# 自定义格式
epformat = None
if episode_offset or episode_part or episode_detail or episode_format:
if transer_item.episode_offset or transer_item.episode_part \
or transer_item.episode_detail or transer_item.episode_format:
epformat = schemas.EpisodeFormat(
format=episode_format,
detail=episode_detail,
part=episode_part,
offset=episode_offset,
format=transer_item.episode_format,
detail=transer_item.episode_detail,
part=transer_item.episode_part,
offset=transer_item.episode_offset,
)
# 开始转移
state, errormsg = transfer.manual_transfer(
in_path=in_path,
target=target,
tmdbid=tmdbid,
state, errormsg = TransferChain().manual_transfer(
fileitem=src_fileitem,
target_storage=transer_item.target_storage,
target_path=target_path,
tmdbid=transer_item.tmdbid,
doubanid=transer_item.doubanid,
mtype=mtype,
season=season,
transfer_type=transfer_type,
season=transer_item.season,
transfer_type=transer_item.transfer_type,
epformat=epformat,
min_filesize=min_filesize,
min_filesize=transer_item.min_filesize,
scrape=transer_item.scrape,
force=force
)
# 失败
@@ -101,3 +157,12 @@ def manual_transfer(path: str = None,
return schemas.Response(success=False, message=errormsg)
# 成功
return schemas.Response(success=True)
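When restoring from history, the code above expands a stored range like `E01-E03` into the comma-separated episode_detail string. A minimal standalone sketch of that expansion (hypothetical name `expand_episodes`; single episodes keep their stored digits, matching the code above):

```python
def expand_episodes(episodes: str) -> str:
    """Expand 'E01-E03' to '1,2,3'; a single 'E05' becomes '05'."""
    if "-" in episodes:
        start, end = episodes.split("-")
        nums = range(int(start.replace("E", "")), int(end.replace("E", "")) + 1)
        return ",".join(str(n) for n in nums)
    return episodes.replace("E", "")
```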
@router.get("/now", summary="立即执行下载器文件整理", response_model=schemas.Response)
def now(_: str = Depends(verify_apitoken)) -> Any:
"""
立即执行下载器文件整理 API_TOKEN认证?token=xxx
"""
TransferChain().process()
return schemas.Response(success=True)

View File

@@ -1,5 +1,6 @@
import base64
from typing import Any, List
import re
from typing import Any, List, Union
from fastapi import APIRouter, Depends, HTTPException, UploadFile, File
from sqlalchemy.orm import Session
@@ -8,15 +9,17 @@ from app import schemas
from app.core.security import get_password_hash
from app.db import get_db
from app.db.models.user import User
from app.db.userauth import get_current_active_superuser, get_current_active_user
from app.db.user_oper import get_current_active_superuser, get_current_active_user
from app.db.userconfig_oper import UserConfigOper
from app.utils.otp import OtpUtils
router = APIRouter()
@router.get("/", summary="所有用户", response_model=List[schemas.User])
def read_users(
db: Session = Depends(get_db),
current_user: User = Depends(get_current_active_superuser),
def list_users(
db: Session = Depends(get_db),
current_user: User = Depends(get_current_active_superuser),
) -> Any:
"""
查询用户列表
@@ -27,10 +30,10 @@ def read_users(
@router.post("/", summary="新增用户", response_model=schemas.Response)
def create_user(
*,
db: Session = Depends(get_db),
user_in: schemas.UserCreate,
current_user: User = Depends(get_current_active_superuser),
*,
db: Session = Depends(get_db),
user_in: schemas.UserCreate,
current_user: User = Depends(get_current_active_superuser),
) -> Any:
"""
新增用户
@@ -49,19 +52,32 @@ def create_user(
@router.put("/", summary="更新用户", response_model=schemas.Response)
def update_user(
*,
db: Session = Depends(get_db),
user_in: schemas.UserCreate,
_: User = Depends(get_current_active_superuser),
*,
db: Session = Depends(get_db),
user_in: schemas.UserUpdate,
_: User = Depends(get_current_active_superuser),
) -> Any:
"""
更新用户
"""
user_info = user_in.dict()
if user_info.get("password"):
# 正则表达式匹配密码包含字母、数字、特殊字符中的至少两项
pattern = r'^(?![a-zA-Z]+$)(?!\d+$)(?![^\da-zA-Z\s]+$).{6,50}$'
if not re.match(pattern, user_info.get("password")):
return schemas.Response(success=False,
message="密码需要同时包含字母、数字、特殊字符中的至少两项且长度大于6位")
user_info["hashed_password"] = get_password_hash(user_info["password"])
user_info.pop("password")
user = User.get_by_name(db, name=user_info["name"])
user = User.get_by_id(db, user_id=user_info["id"])
user_name = user_info.get("name")
if not user_name:
return schemas.Response(success=False, message="用户名不能为空")
# 新用户名去重
users = User.list(db)
for u in users:
if u.name == user_name and u.id != user_info["id"]:
return schemas.Response(success=False, message="用户名已被使用")
if not user:
return schemas.Response(success=False, message="用户不存在")
user.update(db, user_info)
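The password-complexity pattern above can be exercised in isolation: the three negative lookaheads reject passwords drawn from a single character class (all letters, all digits, or all specials), and `.{6,50}` enforces the length. A sketch with an illustrative helper name:

```python
import re

# At least two of {letters, digits, specials}, 6-50 characters.
PASSWORD_PATTERN = r'^(?![a-zA-Z]+$)(?!\d+$)(?![^\da-zA-Z\s]+$).{6,50}$'

def is_valid_password(password: str) -> bool:
    return re.match(PASSWORD_PATTERN, password) is not None
```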
@@ -70,7 +86,7 @@ def update_user(
@router.get("/current", summary="当前登录用户信息", response_model=schemas.User)
def read_current_user(
current_user: User = Depends(get_current_active_user)
current_user: User = Depends(get_current_active_user)
) -> Any:
"""
当前登录用户信息
@@ -79,8 +95,8 @@ def read_current_user(
@router.post("/avatar/{user_id}", summary="上传用户头像", response_model=schemas.Response)
async def upload_avatar(user_id: int, db: Session = Depends(get_db),
file: UploadFile = File(...)):
def upload_avatar(user_id: int, db: Session = Depends(get_db), file: UploadFile = File(...),
_: User = Depends(get_current_active_user)):
"""
上传用户头像
"""
@@ -96,15 +112,93 @@ async def upload_avatar(user_id: int, db: Session = Depends(get_db),
return schemas.Response(success=True, message=file.filename)
@router.delete("/{user_name}", summary="删除用户", response_model=schemas.Response)
def delete_user(
*,
db: Session = Depends(get_db),
user_name: str,
current_user: User = Depends(get_current_active_superuser),
@router.post('/otp/generate', summary='生成otp验证uri', response_model=schemas.Response)
def otp_generate(
current_user: User = Depends(get_current_active_user)
) -> Any:
secret, uri = OtpUtils.generate_secret_key(current_user.name)
return schemas.Response(success=secret != "", data={'secret': secret, 'uri': uri})
@router.post('/otp/judge', summary='判断otp验证是否通过', response_model=schemas.Response)
def otp_judge(
data: dict,
db: Session = Depends(get_db),
current_user: User = Depends(get_current_active_user)
) -> Any:
uri = data.get("uri")
otp_password = data.get("otpPassword")
if not OtpUtils.is_legal(uri, otp_password):
return schemas.Response(success=False, message="验证码错误")
current_user.update_otp_by_name(db, current_user.name, True, OtpUtils.get_secret(uri))
return schemas.Response(success=True)
@router.post('/otp/disable', summary='关闭当前用户的otp验证', response_model=schemas.Response)
def otp_disable(
db: Session = Depends(get_db),
current_user: User = Depends(get_current_active_user)
) -> Any:
current_user.update_otp_by_name(db, current_user.name, False, "")
return schemas.Response(success=True)
@router.get('/otp/{userid}', summary='判断当前用户是否开启otp验证', response_model=schemas.Response)
def otp_enable(userid: str, db: Session = Depends(get_db)) -> Any:
user: User = User.get_by_name(db, userid)
if not user:
return schemas.Response(success=False)
return schemas.Response(success=user.is_otp)
@router.get("/config/{key}", summary="查询用户配置", response_model=schemas.Response)
def get_config(key: str,
current_user: User = Depends(get_current_active_user)):
"""
查询用户配置
"""
value = UserConfigOper().get(username=current_user.name, key=key)
return schemas.Response(success=True, data={
"value": value
})
@router.post("/config/{key}", summary="更新用户配置", response_model=schemas.Response)
def set_config(key: str, value: Union[list, dict, bool, int, str] = None,
current_user: User = Depends(get_current_active_user)):
"""
更新用户配置
"""
UserConfigOper().set(username=current_user.name, key=key, value=value)
return schemas.Response(success=True)
@router.delete("/id/{user_id}", summary="删除用户", response_model=schemas.Response)
def delete_user_by_id(
*,
db: Session = Depends(get_db),
user_id: int,
current_user: User = Depends(get_current_active_superuser),
) -> Any:
"""
删除用户
通过唯一ID删除用户
"""
user = current_user.get_by_id(db, user_id=user_id)
if not user:
return schemas.Response(success=False, message="用户不存在")
user.delete_by_id(db, user_id)
return schemas.Response(success=True)
@router.delete("/name/{user_name}", summary="删除用户", response_model=schemas.Response)
def delete_user_by_name(
*,
db: Session = Depends(get_db),
user_name: str,
current_user: User = Depends(get_current_active_superuser),
) -> Any:
"""
通过用户名删除用户
"""
user = current_user.get_by_name(db, name=user_name)
if not user:
@@ -113,16 +207,16 @@ def delete_user(
return schemas.Response(success=True)
@router.get("/{user_id}", summary="用户详情", response_model=schemas.User)
def read_user_by_id(
user_id: int,
current_user: User = Depends(get_current_active_user),
db: Session = Depends(get_db),
@router.get("/{username}", summary="用户详情", response_model=schemas.User)
def read_user_by_name(
username: str,
current_user: User = Depends(get_current_active_user),
db: Session = Depends(get_db),
) -> Any:
"""
查询用户详情
"""
user = current_user.get(db, rid=user_id)
user = current_user.get_by_name(db, name=username)
if not user:
raise HTTPException(
status_code=404,
@@ -130,7 +224,7 @@ def read_user_by_id(
)
if user == current_user:
return user
if not user.is_superuser:
if not current_user.is_superuser:
raise HTTPException(
status_code=400,
detail="用户权限不足"

View File

@@ -4,8 +4,7 @@ from fastapi import APIRouter, BackgroundTasks, Request, Depends
from app import schemas
from app.chain.webhook import WebhookChain
from app.core.config import settings
from app.core.security import verify_uri_token
from app.core.security import verify_apitoken
router = APIRouter()
@@ -20,10 +19,10 @@ def start_webhook_chain(body: Any, form: Any, args: Any):
@router.post("/", summary="Webhook消息响应", response_model=schemas.Response)
async def webhook_message(background_tasks: BackgroundTasks,
request: Request,
_: str = Depends(verify_uri_token)
_: str = Depends(verify_apitoken)
) -> Any:
"""
Webhook响应
Webhook响应,配置请求中需要添加参数:token=API_TOKEN&source=媒体服务器名
"""
body = await request.body()
form = await request.form()
@@ -33,10 +32,10 @@ async def webhook_message(background_tasks: BackgroundTasks,
@router.get("/", summary="Webhook消息响应", response_model=schemas.Response)
async def webhook_message(background_tasks: BackgroundTasks,
request: Request, _: str = Depends(verify_uri_token)) -> Any:
def webhook_message(background_tasks: BackgroundTasks,
request: Request, _: str = Depends(verify_apitoken)) -> Any:
"""
Webhook响应
Webhook响应,配置请求中需要添加参数:token=API_TOKEN&source=媒体服务器名
"""
args = request.query_params
background_tasks.add_task(start_webhook_chain, None, None, args)

View File

@@ -6,9 +6,8 @@ from sqlalchemy.orm import Session
from app import schemas
from app.chain.media import MediaChain
from app.chain.subscribe import SubscribeChain
from app.core.config import settings
from app.core.metainfo import MetaInfo
from app.core.security import verify_uri_apikey
from app.core.security import verify_apikey
from app.db import get_db
from app.db.models.subscribe import Subscribe
from app.schemas import RadarrMovie, SonarrSeries
@@ -19,7 +18,7 @@ arr_router = APIRouter(tags=['servarr'])
@arr_router.get("/system/status", summary="系统状态")
def arr_system_status(_: str = Depends(verify_uri_apikey)) -> Any:
def arr_system_status(_: str = Depends(verify_apikey)) -> Any:
"""
模拟Radarr、Sonarr系统状态
"""
@@ -73,7 +72,7 @@ def arr_system_status(_: str = Depends(verify_uri_apikey)) -> Any:
@arr_router.get("/qualityProfile", summary="质量配置")
def arr_qualityProfile(_: str = Depends(verify_uri_apikey)) -> Any:
def arr_qualityProfile(_: str = Depends(verify_apikey)) -> Any:
"""
模拟Radarr、Sonarr质量配置
"""
@@ -114,14 +113,14 @@ def arr_qualityProfile(_: str = Depends(verify_uri_apikey)) -> Any:
@arr_router.get("/rootfolder", summary="根目录")
def arr_rootfolder(_: str = Depends(verify_uri_apikey)) -> Any:
def arr_rootfolder(_: str = Depends(verify_apikey)) -> Any:
"""
模拟Radarr、Sonarr根目录
"""
return [
{
"id": 1,
"path": "/" if not settings.LIBRARY_PATHS else str(settings.LIBRARY_PATHS[0]),
"path": "/",
"accessible": True,
"freeSpace": 0,
"unmappedFolders": []
@@ -130,7 +129,7 @@ def arr_rootfolder(_: str = Depends(verify_uri_apikey)) -> Any:
@arr_router.get("/tag", summary="标签")
def arr_tag(_: str = Depends(verify_uri_apikey)) -> Any:
def arr_tag(_: str = Depends(verify_apikey)) -> Any:
"""
模拟Radarr、Sonarr标签
"""
@@ -143,7 +142,7 @@ def arr_tag(_: str = Depends(verify_uri_apikey)) -> Any:
@arr_router.get("/languageprofile", summary="语言")
def arr_languageprofile(_: str = Depends(verify_uri_apikey)) -> Any:
def arr_languageprofile(_: str = Depends(verify_apikey)) -> Any:
"""
模拟Radarr、Sonarr语言
"""
@@ -169,7 +168,7 @@ def arr_languageprofile(_: str = Depends(verify_uri_apikey)) -> Any:
@arr_router.get("/movie", summary="所有订阅电影", response_model=List[schemas.RadarrMovie])
def arr_movies(_: str = Depends(verify_uri_apikey), db: Session = Depends(get_db)) -> Any:
def arr_movies(_: str = Depends(verify_apikey), db: Session = Depends(get_db)) -> Any:
"""
查询Radarr电影
"""
@@ -260,7 +259,7 @@ def arr_movies(_: str = Depends(verify_uri_apikey), db: Session = Depends(get_db
@arr_router.get("/movie/lookup", summary="查询电影", response_model=List[schemas.RadarrMovie])
def arr_movie_lookup(term: str, db: Session = Depends(get_db), _: str = Depends(verify_uri_apikey)) -> Any:
def arr_movie_lookup(term: str, db: Session = Depends(get_db), _: str = Depends(verify_apikey)) -> Any:
"""
查询Radarr电影 term: `tmdb:${id}`
存在和不存在均不能返回错误
@@ -306,7 +305,7 @@ def arr_movie_lookup(term: str, db: Session = Depends(get_db), _: str = Depends(
@arr_router.get("/movie/{mid}", summary="电影订阅详情", response_model=schemas.RadarrMovie)
def arr_movie(mid: int, db: Session = Depends(get_db), _: str = Depends(verify_uri_apikey)) -> Any:
def arr_movie(mid: int, db: Session = Depends(get_db), _: str = Depends(verify_apikey)) -> Any:
"""
查询Radarr电影订阅
"""
@@ -334,7 +333,7 @@ def arr_movie(mid: int, db: Session = Depends(get_db), _: str = Depends(verify_u
@arr_router.post("/movie", summary="新增电影订阅")
def arr_add_movie(movie: RadarrMovie,
db: Session = Depends(get_db),
_: str = Depends(verify_uri_apikey)
_: str = Depends(verify_apikey)
) -> Any:
"""
新增Radarr电影订阅
@@ -363,7 +362,7 @@ def arr_add_movie(movie: RadarrMovie,
@arr_router.delete("/movie/{mid}", summary="删除电影订阅", response_model=schemas.Response)
def arr_remove_movie(mid: int, db: Session = Depends(get_db), _: str = Depends(verify_uri_apikey)) -> Any:
def arr_remove_movie(mid: int, db: Session = Depends(get_db), _: str = Depends(verify_apikey)) -> Any:
"""
删除Radarr电影订阅
"""
@@ -379,7 +378,7 @@ def arr_remove_movie(mid: int, db: Session = Depends(get_db), _: str = Depends(v
@arr_router.get("/series", summary="所有剧集", response_model=List[schemas.SonarrSeries])
def arr_series(_: str = Depends(verify_uri_apikey), db: Session = Depends(get_db)) -> Any:
def arr_series(_: str = Depends(verify_apikey), db: Session = Depends(get_db)) -> Any:
"""
查询Sonarr剧集
"""
@@ -515,7 +514,7 @@ def arr_series(_: str = Depends(verify_uri_apikey), db: Session = Depends(get_db
@arr_router.get("/series/lookup", summary="查询剧集")
def arr_series_lookup(term: str, db: Session = Depends(get_db), _: str = Depends(verify_uri_apikey)) -> Any:
def arr_series_lookup(term: str, db: Session = Depends(get_db), _: str = Depends(verify_apikey)) -> Any:
"""
查询Sonarr剧集 term: `tvdb:${id}` title
"""
@@ -604,7 +603,7 @@ def arr_series_lookup(term: str, db: Session = Depends(get_db), _: str = Depends
@arr_router.get("/series/{tid}", summary="剧集详情")
def arr_serie(tid: int, db: Session = Depends(get_db), _: str = Depends(verify_uri_apikey)) -> Any:
def arr_serie(tid: int, db: Session = Depends(get_db), _: str = Depends(verify_apikey)) -> Any:
"""
查询Sonarr剧集
"""
@@ -640,7 +639,7 @@ def arr_serie(tid: int, db: Session = Depends(get_db), _: str = Depends(verify_u
@arr_router.post("/series", summary="新增剧集订阅")
def arr_add_series(tv: schemas.SonarrSeries,
db: Session = Depends(get_db),
_: str = Depends(verify_uri_apikey)) -> Any:
_: str = Depends(verify_apikey)) -> Any:
"""
新增Sonarr剧集订阅
"""
@@ -682,7 +681,7 @@ def arr_add_series(tv: schemas.SonarrSeries,
@arr_router.delete("/series/{tid}", summary="删除剧集订阅")
def arr_remove_series(tid: int, db: Session = Depends(get_db), _: str = Depends(verify_uri_apikey)) -> Any:
def arr_remove_series(tid: int, db: Session = Depends(get_db), _: str = Depends(verify_apikey)) -> Any:
"""
删除Sonarr剧集订阅
"""

app/api/servcookie.py Normal file (134 lines)
View File

@@ -0,0 +1,134 @@
import gzip
import json
from typing import Annotated, Callable, Any, Dict, Optional
from fastapi import APIRouter, Depends, HTTPException, Path, Request, Response
from fastapi.responses import PlainTextResponse
from fastapi.routing import APIRoute
from app import schemas
from app.core.config import settings
from app.log import logger
from app.utils.crypto import CryptoJsUtils, HashUtils
class GzipRequest(Request):
async def body(self) -> bytes:
if not hasattr(self, "_body"):
body = await super().body()
if "gzip" in self.headers.getlist("Content-Encoding"):
body = gzip.decompress(body)
self._body = body
return self._body
class GzipRoute(APIRoute):
def get_route_handler(self) -> Callable:
original_route_handler = super().get_route_handler()
async def custom_route_handler(request: Request) -> Response:
request = GzipRequest(request.scope, request.receive)
return await original_route_handler(request)
return custom_route_handler
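GzipRequest only changes how the raw body is read; the transparent-decompression step it adds can be sketched without FastAPI:

```python
import gzip

def decode_body(raw: bytes, content_encoding: str = "") -> bytes:
    """Mirror GzipRequest.body(): decompress when the client sent a gzip payload."""
    if "gzip" in content_encoding:
        return gzip.decompress(raw)
    return raw
```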
async def verify_server_enabled():
"""
校验CookieCloud服务路由是否打开
"""
if not settings.COOKIECLOUD_ENABLE_LOCAL:
raise HTTPException(status_code=400, detail="本地CookieCloud服务器未启用")
return True
cookie_router = APIRouter(route_class=GzipRoute,
tags=["servcookie"],
dependencies=[Depends(verify_server_enabled)])
@cookie_router.get("/", response_class=PlainTextResponse)
def get_root():
return "Hello MoviePilot! COOKIECLOUD API ROOT = /cookiecloud"
@cookie_router.post("/", response_class=PlainTextResponse)
def post_root():
return "Hello MoviePilot! COOKIECLOUD API ROOT = /cookiecloud"
@cookie_router.post("/update")
async def update_cookie(req: schemas.CookieData):
"""
上传Cookie数据
"""
file_path = settings.COOKIE_PATH / f"{req.uuid}.json"
content = json.dumps({"encrypted": req.encrypted})
with open(file_path, encoding="utf-8", mode="w") as file:
file.write(content)
with open(file_path, encoding="utf-8", mode="r") as file:
read_content = file.read()
if read_content == content:
return {"action": "done"}
else:
return {"action": "error"}
def load_encrypt_data(uuid: str) -> Dict[str, Any]:
"""
加载本地加密原始数据
"""
file_path = settings.COOKIE_PATH / f"{uuid}.json"
# 检查文件是否存在
if not file_path.exists():
raise HTTPException(status_code=404, detail="Item not found")
# 读取文件
with open(file_path, encoding="utf-8", mode="r") as file:
read_content = file.read()
data = json.loads(read_content.encode("utf-8"))
return data
def get_decrypted_cookie_data(uuid: str, password: str,
encrypted: str) -> Optional[Dict[str, Any]]:
"""
加载本地加密数据并解密为Cookie
"""
combined_string = f"{uuid}-{password}"
aes_key = HashUtils.md5(combined_string)[:16].encode("utf-8")
if encrypted:
try:
decrypted_data = CryptoJsUtils.decrypt(encrypted, aes_key).decode("utf-8")
decrypted_data = json.loads(decrypted_data)
if "cookie_data" in decrypted_data:
return decrypted_data
except Exception as e:
logger.error(f"解密Cookie数据失败:{str(e)}")
return None
else:
return None
@cookie_router.get("/get/{uuid}")
async def get_cookie(
uuid: Annotated[str, Path(min_length=5, pattern="^[a-zA-Z0-9]+$")]):
"""
GET 下载加密数据
"""
return load_encrypt_data(uuid)
@cookie_router.post("/get/{uuid}")
async def post_cookie(
uuid: Annotated[str, Path(min_length=5, pattern="^[a-zA-Z0-9]+$")],
request: schemas.CookiePassword):
"""
POST 下载加密数据
"""
data = load_encrypt_data(uuid)
return get_decrypted_cookie_data(uuid, request.password, data["encrypted"])
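The AES key in get_decrypted_cookie_data is the first 16 hex characters of md5("uuid-password"), assumed here to mirror the CookieCloud client convention (HashUtils.md5 is taken to be a plain hex md5). A standalone sketch with hashlib:

```python
import hashlib

def derive_aes_key(uuid: str, password: str) -> bytes:
    """First 16 hex chars of md5('<uuid>-<password>'), as UTF-8 bytes (an AES-128 key)."""
    digest = hashlib.md5(f"{uuid}-{password}".encode("utf-8")).hexdigest()
    return digest[:16].encode("utf-8")
```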

View File

@@ -10,14 +10,17 @@ from ruamel.yaml import CommentedMap
from transmission_rpc import File
from app.core.config import settings
from app.core.context import Context
from app.core.context import MediaInfo, TorrentInfo
from app.core.context import Context, MediaInfo, TorrentInfo
from app.core.event import EventManager
from app.core.meta import MetaBase
from app.core.module import ModuleManager
from app.db.message_oper import MessageOper
from app.db.user_oper import UserOper
from app.helper.message import MessageHelper
from app.helper.service import ServiceConfigHelper
from app.log import logger
from app.schemas import TransferInfo, TransferTorrent, ExistMediaInfo, DownloadingTorrent, CommingMessage, Notification, \
WebhookEventInfo, TmdbEpisode
WebhookEventInfo, TmdbEpisode, MediaPerson, FileItem, TransferDirectoryConf
from app.schemas.types import TorrentStatus, MediaType, MediaImageType, EventType
from app.utils.object import ObjectUtils
@@ -33,6 +36,9 @@ class ChainBase(metaclass=ABCMeta):
"""
self.modulemanager = ModuleManager()
self.eventmanager = EventManager()
self.messageoper = MessageOper()
self.messagehelper = MessageHelper()
self.useroper = UserOper()
@staticmethod
def load_cache(filename: str) -> Any:
@@ -75,6 +81,7 @@ class ChainBase(metaclass=ABCMeta):
def run_module(self, method: str, *args, **kwargs) -> Any:
"""
运行包含该方法的所有模块,然后返回结果
当kwargs包含命名参数raise_exception时，如模块方法抛出异常且raise_exception为True，则同步抛出异常
"""
def is_result_empty(ret):
@@ -88,8 +95,16 @@ class ChainBase(metaclass=ABCMeta):
logger.debug(f"请求模块执行:{method} ...")
result = None
modules = self.modulemanager.get_modules(method)
modules = self.modulemanager.get_running_modules(method)
# 按优先级排序
modules = sorted(modules, key=lambda x: x.get_priority())
for module in modules:
module_id = module.__class__.__name__
try:
module_name = module.get_name()
except Exception as err:
logger.debug(f"获取模块名称出错:{str(err)}")
module_name = module_id
try:
func = getattr(module, method)
if is_result_empty(result):
@@ -107,20 +122,40 @@ class ChainBase(metaclass=ABCMeta):
# 中止继续执行
break
except Exception as err:
if kwargs.get("raise_exception"):
raise
logger.error(
f"运行模块 {method} 出错:{module.__class__.__name__} - {str(err)}\n{traceback.print_exc()}")
f"运行模块 {module_id}.{method} 出错:{str(err)}\n{traceback.format_exc()}")
self.messagehelper.put(title=f"{module_name}发生了错误",
message=str(err),
role="system")
self.eventmanager.send_event(
EventType.SystemError,
{
"type": "module",
"module_id": module_id,
"module_name": module_name,
"module_method": method,
"error": str(err),
"traceback": traceback.format_exc()
}
)
return result
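The dispatch loop above can be reduced to a standalone sketch: call the method on each running module in priority order, keep the first non-empty result, and re-raise only when the caller passed `raise_exception=True` (the module classes in the test are hypothetical stand-ins, and the real loop additionally merges list/dict results and emits error events):

```python
def run_modules(modules, method, *args, **kwargs):
    """Minimal sketch of the ChainBase.run_module dispatch loop:
    invoke `method` on each module in priority order, keep the first
    non-empty result, and propagate exceptions only on request."""
    result = None
    for module in sorted(modules, key=lambda m: m.get_priority()):
        try:
            func = getattr(module, method)
            if result is None:  # stop calling once a module has answered
                result = func(*args, **kwargs)
        except Exception:
            if kwargs.get("raise_exception"):
                raise
    return result
```

With two modules registered for the same method, the lower-priority number wins the first call, so its non-empty answer short-circuits the rest.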
def recognize_media(self, meta: MetaBase = None,
mtype: MediaType = None,
tmdbid: int = None,
doubanid: str = None) -> Optional[MediaInfo]:
doubanid: str = None,
bangumiid: int = None,
cache: bool = True) -> Optional[MediaInfo]:
"""
识别媒体信息
识别媒体信息，不含Fanart图片
:param meta: 识别的元数据
:param mtype: 识别的媒体类型与tmdbid配套
:param tmdbid: tmdbid
:param doubanid: 豆瓣ID
:param bangumiid: BangumiID
:param cache: 是否使用缓存
:return: 识别的媒体信息,包括剧集信息
"""
# 识别用名中含指定信息情形
@@ -130,11 +165,16 @@ class ChainBase(metaclass=ABCMeta):
tmdbid = meta.tmdbid
if not doubanid and hasattr(meta, "doubanid"):
doubanid = meta.doubanid
# 有tmdbid时不使用其它ID
if tmdbid:
doubanid = None
bangumiid = None
return self.run_module("recognize_media", meta=meta, mtype=mtype,
tmdbid=tmdbid, doubanid=doubanid)
tmdbid=tmdbid, doubanid=doubanid, bangumiid=bangumiid, cache=cache)
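The ID-precedence check added above (TMDB id suppresses the other ids) is easy to isolate, a sketch under the assumption that the three ids are otherwise independent:

```python
def normalize_ids(tmdbid=None, doubanid=None, bangumiid=None):
    """Mirror of the precedence rule at the top of recognize_media:
    when a TMDB id is available, the douban and bangumi ids are dropped
    so recognition is driven by TMDB alone."""
    if tmdbid:
        return tmdbid, None, None
    return tmdbid, doubanid, bangumiid
```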
def match_doubaninfo(self, name: str, imdbid: str = None,
mtype: MediaType = None, year: str = None, season: int = None) -> Optional[dict]:
mtype: MediaType = None, year: str = None, season: int = None,
raise_exception: bool = False) -> Optional[dict]:
"""
搜索和匹配豆瓣信息
:param name: 标题
@@ -142,9 +182,10 @@ class ChainBase(metaclass=ABCMeta):
:param mtype: 类型
:param year: 年份
:param season: 季
:param raise_exception: 触发速率限制时是否抛出异常
"""
return self.run_module("match_doubaninfo", name=name, imdbid=imdbid,
mtype=mtype, year=year, season=season)
mtype=mtype, year=year, season=season, raise_exception=raise_exception)
def match_tmdbinfo(self, name: str, mtype: MediaType = None,
year: str = None, season: int = None) -> Optional[dict]:
@@ -182,14 +223,16 @@ class ChainBase(metaclass=ABCMeta):
image_prefix=image_prefix, image_type=image_type,
season=season, episode=episode)
def douban_info(self, doubanid: str, mtype: MediaType = None) -> Optional[dict]:
def douban_info(self, doubanid: str, mtype: MediaType = None,
raise_exception: bool = False) -> Optional[dict]:
"""
获取豆瓣信息
:param doubanid: 豆瓣ID
:param mtype: 媒体类型
:return: 豆瓣信息
:param raise_exception: 触发速率限制时是否抛出异常
"""
return self.run_module("douban_info", doubanid=doubanid, mtype=mtype)
return self.run_module("douban_info", doubanid=doubanid, mtype=mtype, raise_exception=raise_exception)
def tvdb_info(self, tvdbid: int) -> Optional[dict]:
"""
@@ -199,28 +242,38 @@ class ChainBase(metaclass=ABCMeta):
"""
return self.run_module("tvdb_info", tvdbid=tvdbid)
def tmdb_info(self, tmdbid: int, mtype: MediaType) -> Optional[dict]:
def tmdb_info(self, tmdbid: int, mtype: MediaType, season: int = None) -> Optional[dict]:
"""
获取TMDB信息
:param tmdbid: int
:param mtype: 媒体类型
:param season: 季
:return: TMDB信息
"""
return self.run_module("tmdb_info", tmdbid=tmdbid, mtype=mtype)
return self.run_module("tmdb_info", tmdbid=tmdbid, mtype=mtype, season=season)
def message_parser(self, body: Any, form: Any,
def bangumi_info(self, bangumiid: int) -> Optional[dict]:
"""
获取Bangumi信息
:param bangumiid: int
:return: Bangumi信息
"""
return self.run_module("bangumi_info", bangumiid=bangumiid)
def message_parser(self, source: str, body: Any, form: Any,
args: Any) -> Optional[CommingMessage]:
"""
解析消息内容,返回字典,注意以下约定值:
userid: 用户ID
username: 用户名
text: 内容
:param source: 消息来源(渠道配置名称)
:param body: 请求体
:param form: 表单
:param args: 参数
:return: 消息渠道、消息内容
"""
return self.run_module("message_parser", body=body, form=form, args=args)
return self.run_module("message_parser", source=source, body=body, form=form, args=args)
def webhook_parser(self, body: Any, form: Any, args: Any) -> Optional[WebhookEventInfo]:
"""
@@ -240,6 +293,13 @@ class ChainBase(metaclass=ABCMeta):
"""
return self.run_module("search_medias", meta=meta)
def search_persons(self, name: str) -> Optional[List[MediaPerson]]:
"""
搜索人物信息
:param name: 人物名称
"""
return self.run_module("search_persons", name=name)
def search_torrents(self, site: CommentedMap,
keywords: List[str],
mtype: MediaType = None,
@@ -263,25 +323,26 @@ class ChainBase(metaclass=ABCMeta):
"""
return self.run_module("refresh_torrents", site=site)
def filter_torrents(self, rule_string: str,
def filter_torrents(self, rule_groups: List[str],
torrent_list: List[TorrentInfo],
season_episodes: Dict[int, list] = None,
mediainfo: MediaInfo = None) -> List[TorrentInfo]:
"""
过滤种子资源
:param rule_string: 过滤规则
:param rule_groups: 过滤规则组名称列表
:param torrent_list: 资源列表
:param season_episodes: 季集数过滤 {season:[episodes]}
:param mediainfo: 识别的媒体信息
:return: 过滤后的资源列表,添加资源优先级
"""
return self.run_module("filter_torrents", rule_string=rule_string,
return self.run_module("filter_torrents", rule_groups=rule_groups,
torrent_list=torrent_list, season_episodes=season_episodes,
mediainfo=mediainfo)
def download(self, content: Union[Path, str], download_dir: Path, cookie: str,
episodes: Set[int] = None, category: str = None
) -> Optional[Tuple[Optional[str], str]]:
episodes: Set[int] = None, category: str = None,
downloader: str = None
) -> Optional[Tuple[Optional[str], Optional[str], str]]:
"""
根据种子文件,选择并添加下载任务
:param content: 种子文件地址或者磁力链接
@@ -289,10 +350,12 @@ class ChainBase(metaclass=ABCMeta):
:param cookie: cookie
:param episodes: 需要下载的集数
:param category: 种子分类
:return: 种子Hash错误信息
:param downloader: 下载器
:return: 下载器名称、种子Hash、错误信息
"""
return self.run_module("download", content=content, download_dir=download_dir,
cookie=cookie, episodes=episodes, category=category)
cookie=cookie, episodes=episodes, category=category,
downloader=downloader)
def download_added(self, context: Context, download_dir: Path, torrent_path: Path = None) -> None:
"""
@@ -306,80 +369,108 @@ class ChainBase(metaclass=ABCMeta):
download_dir=download_dir)
def list_torrents(self, status: TorrentStatus = None,
hashs: Union[list, str] = None) -> Optional[List[Union[TransferTorrent, DownloadingTorrent]]]:
hashs: Union[list, str] = None,
downloader: str = None
) -> Optional[List[Union[TransferTorrent, DownloadingTorrent]]]:
"""
获取下载器种子列表
:param status: 种子状态
:param hashs: 种子Hash
:param downloader: 下载器
:return: 下载器中符合状态的种子列表
"""
return self.run_module("list_torrents", status=status, hashs=hashs)
return self.run_module("list_torrents", status=status, hashs=hashs, downloader=downloader)
def transfer(self, path: Path, meta: MetaBase, mediainfo: MediaInfo,
transfer_type: str, target: Path = None,
def transfer(self, fileitem: FileItem, meta: MetaBase, mediainfo: MediaInfo,
target_directory: TransferDirectoryConf = None,
target_storage: str = None, target_path: Path = None,
transfer_type: str = None, scrape: bool = None,
episodes_info: List[TmdbEpisode] = None) -> Optional[TransferInfo]:
"""
文件转移
:param path: 文件路径
:param fileitem: 文件信息
:param meta: 预识别的元数据
:param mediainfo: 识别的媒体信息
:param target_directory: 目标目录配置
:param target_storage: 目标存储
:param target_path: 目标路径
:param transfer_type: 转移模式
:param target: 转移目标路径
:param scrape: 是否刮削元数据
:param episodes_info: 当前季的全部集信息
:return: {path, target_path, message}
"""
return self.run_module("transfer", path=path, meta=meta, mediainfo=mediainfo,
transfer_type=transfer_type, target=target,
return self.run_module("transfer",
fileitem=fileitem, meta=meta, mediainfo=mediainfo,
target_directory=target_directory,
target_path=target_path, target_storage=target_storage,
transfer_type=transfer_type, scrape=scrape,
episodes_info=episodes_info)
def transfer_completed(self, hashs: Union[str, list], path: Path = None) -> None:
def transfer_completed(self, hashs: str, downloader: str = None) -> None:
"""
转移完成后的处理
下载器转移完成后的处理
:param hashs: 种子Hash
:param path: 源目录
:param downloader: 下载器
"""
return self.run_module("transfer_completed", hashs=hashs, path=path)
return self.run_module("transfer_completed", hashs=hashs, downloader=downloader)
def remove_torrents(self, hashs: Union[str, list]) -> bool:
def remove_torrents(self, hashs: Union[str, list], delete_file: bool = True,
downloader: str = None) -> bool:
"""
删除下载器种子
:param hashs: 种子Hash
:param delete_file: 是否删除文件
:param downloader: 下载器
:return: bool
"""
return self.run_module("remove_torrents", hashs=hashs)
return self.run_module("remove_torrents", hashs=hashs, delete_file=delete_file, downloader=downloader)
def start_torrents(self, hashs: Union[list, str]) -> bool:
def start_torrents(self, hashs: Union[list, str], downloader: str = None) -> bool:
"""
开始下载
:param hashs: 种子Hash
:param downloader: 下载器
:return: bool
"""
return self.run_module("start_torrents", hashs=hashs)
return self.run_module("start_torrents", hashs=hashs, downloader=downloader)
def stop_torrents(self, hashs: Union[list, str]) -> bool:
def stop_torrents(self, hashs: Union[list, str], downloader: str = None) -> bool:
"""
停止下载
:param hashs: 种子Hash
:param downloader: 下载器
:return: bool
"""
return self.run_module("stop_torrents", hashs=hashs)
return self.run_module("stop_torrents", hashs=hashs, downloader=downloader)
def torrent_files(self, tid: str) -> Optional[Union[TorrentFilesList, List[File]]]:
def torrent_files(self, tid: str,
downloader: str = None) -> Optional[Union[TorrentFilesList, List[File]]]:
"""
获取种子文件
:param tid: 种子Hash
:param downloader: 下载器
:return: 种子文件
"""
return self.run_module("torrent_files", tid=tid)
return self.run_module("torrent_files", tid=tid, downloader=downloader)
def media_exists(self, mediainfo: MediaInfo, itemid: str = None) -> Optional[ExistMediaInfo]:
def media_exists(self, mediainfo: MediaInfo, itemid: str = None,
server: str = None) -> Optional[ExistMediaInfo]:
"""
判断媒体文件是否存在
:param mediainfo: 识别的媒体信息
:param itemid: 媒体服务器ItemID
:param server: 媒体服务器
:return: 如不存在返回None，存在时返回信息，包括每季已存在所有集{type: movie/tv, seasons: {season: [episodes]}}
"""
return self.run_module("media_exists", mediainfo=mediainfo, itemid=itemid)
return self.run_module("media_exists", mediainfo=mediainfo, itemid=itemid, server=server)
def media_files(self, mediainfo: MediaInfo) -> Optional[List[FileItem]]:
"""
获取媒体文件清单
:param mediainfo: 识别的媒体信息
:return: 媒体文件列表
"""
return self.run_module("media_files", mediainfo=mediainfo)
def post_message(self, message: Notification) -> None:
"""
@@ -387,53 +478,78 @@ class ChainBase(metaclass=ABCMeta):
:param message: 消息体
:return: 成功或失败
"""
# 发送事件
self.eventmanager.send_event(etype=EventType.NoticeMessage,
data={
"channel": message.channel,
"type": message.mtype,
"title": message.title,
"text": message.text,
"image": message.image,
"userid": message.userid,
})
logger.info(f"发送消息：channel={message.channel}，"
f"source={message.source}，"
f"title={message.title}，"
f"text={message.text}，"
f"userid={message.userid}")
if not message.userid and message.mtype:
# 没有指定用户ID时按规则确定发送对象
# 默认发送全体
to_targets = None
notify_action = ServiceConfigHelper.get_notification_switch(message.mtype)
if notify_action == "admin":
# 仅发送管理员
logger.info(f"已设置 {message.mtype} 的消息只发送给管理员")
to_targets = self.useroper.get_settings(settings.SUPERUSER)
elif notify_action == "user":
# 发送对应用户
if message.username:
logger.info(f"已设置 {message.mtype} 的消息只发送给用户 {message.username}")
to_targets = self.useroper.get_settings(message.username)
if not message.username or to_targets is None:
if message.username:
logger.info(f"没有 {message.username} 这个用户,该消息将发送给管理员")
# 回滚发送管理员
to_targets = self.useroper.get_settings(settings.SUPERUSER)
message.targets = to_targets
# 发送事件
self.eventmanager.send_event(etype=EventType.NoticeMessage, data={**message.dict(), "type": message.mtype})
# 保存消息
self.messagehelper.put(message, role="user", title=message.title)
self.messageoper.add(**message.dict())
# 发送
self.run_module("post_message", message=message)
def post_medias_message(self, message: Notification, medias: List[MediaInfo]) -> Optional[bool]:
def post_medias_message(self, message: Notification, medias: List[MediaInfo]) -> None:
"""
发送媒体信息选择列表
:param message: 消息体
:param medias: 媒体列表
:return: 成功或失败
"""
note_list = [media.to_dict() for media in medias]
self.messagehelper.put(message, role="user", note=note_list, title=message.title)
self.messageoper.add(**message.dict(), note=note_list)
return self.run_module("post_medias_message", message=message, medias=medias)
def post_torrents_message(self, message: Notification, torrents: List[Context]) -> Optional[bool]:
def post_torrents_message(self, message: Notification, torrents: List[Context]) -> None:
"""
发送种子信息选择列表
:param message: 消息体
:param torrents: 种子列表
:return: 成功或失败
"""
note_list = [torrent.torrent_info.to_dict() for torrent in torrents]
self.messagehelper.put(message, role="user", note=note_list, title=message.title)
self.messageoper.add(**message.dict(), note=note_list)
return self.run_module("post_torrents_message", message=message, torrents=torrents)
def scrape_metadata(self, path: Path, mediainfo: MediaInfo, transfer_type: str,
force_nfo: bool = False, force_img: bool = False) -> None:
def metadata_img(self, mediainfo: MediaInfo, season: int = None, episode: int = None) -> Optional[dict]:
"""
刮削元数据
:param path: 媒体文件路径
:param mediainfo: 识别的媒体信息
:param transfer_type: 转移模式
:param force_nfo: 强制刮削nfo
:param force_img: 强制刮削图片
:return: 成功或失败
获取图片名称和url
:param mediainfo: 媒体信息
:param season: 季号
:param episode: 集号
"""
self.run_module("scrape_metadata", path=path, mediainfo=mediainfo,
transfer_type=transfer_type, force_nfo=force_nfo, force_img=force_img)
return self.run_module("metadata_img", mediainfo=mediainfo, season=season, episode=episode)
def media_category(self) -> Optional[Dict[str, list]]:
"""
获取媒体分类
:return: 获取二级分类配置字典项,需包括电影、电视剧
"""
return self.run_module("media_category")
def register_commands(self, commands: Dict[str, dict]) -> None:
"""

app/chain/bangumi.py (new file)

@@ -0,0 +1,54 @@
from typing import Optional, List
from app import schemas
from app.chain import ChainBase
from app.core.context import MediaInfo
from app.utils.singleton import Singleton
class BangumiChain(ChainBase, metaclass=Singleton):
"""
Bangumi处理链单例运行
"""
def calendar(self) -> Optional[List[MediaInfo]]:
"""
获取Bangumi每日放送
"""
return self.run_module("bangumi_calendar")
def bangumi_info(self, bangumiid: int) -> Optional[dict]:
"""
获取Bangumi信息
:param bangumiid: BangumiID
:return: Bangumi信息
"""
return self.run_module("bangumi_info", bangumiid=bangumiid)
def bangumi_credits(self, bangumiid: int) -> List[schemas.MediaPerson]:
"""
根据BangumiID查询电影演职员表
:param bangumiid: BangumiID
"""
return self.run_module("bangumi_credits", bangumiid=bangumiid)
def bangumi_recommend(self, bangumiid: int) -> Optional[List[MediaInfo]]:
"""
根据BangumiID查询推荐电影
:param bangumiid: BangumiID
"""
return self.run_module("bangumi_recommend", bangumiid=bangumiid)
def person_detail(self, person_id: int) -> Optional[schemas.MediaPerson]:
"""
根据人物ID查询Bangumi人物详情
:param person_id: 人物ID
"""
return self.run_module("bangumi_person_detail", person_id=person_id)
def person_credits(self, person_id: int) -> Optional[List[MediaInfo]]:
"""
根据人物ID查询人物参演作品
:param person_id: 人物ID
"""
return self.run_module("bangumi_person_credits", person_id=person_id)

app/chain/command.py (new file)

@@ -0,0 +1,410 @@
import threading
import traceback
from typing import Any, Union, Dict, Optional
from app.chain import ChainBase
from app.chain.download import DownloadChain
from app.chain.site import SiteChain
from app.chain.subscribe import SubscribeChain
from app.chain.system import SystemChain
from app.chain.transfer import TransferChain
from app.core.config import settings
from app.core.event import Event as ManagerEvent, eventmanager, Event
from app.core.plugin import PluginManager
from app.helper.message import MessageHelper
from app.helper.thread import ThreadHelper
from app.log import logger
from app.scheduler import Scheduler
from app.schemas import Notification
from app.schemas.event import CommandRegisterEventData
from app.schemas.types import EventType, MessageChannel, ChainEventType
from app.utils.object import ObjectUtils
from app.utils.singleton import Singleton
from app.utils.structures import DictUtils
class CommandChain(ChainBase, metaclass=Singleton):
"""
全局命令管理,消费事件
"""
def __init__(self):
# 插件管理器
super().__init__()
# 注册的命令集合
self._registered_commands = {}
# 所有命令集合
self._commands = {}
# 内建命令集合
self._preset_commands = {
"/cookiecloud": {
"id": "cookiecloud",
"type": "scheduler",
"description": "同步站点",
"category": "站点"
},
"/sites": {
"func": SiteChain().remote_list,
"description": "查询站点",
"category": "站点",
"data": {}
},
"/site_cookie": {
"func": SiteChain().remote_cookie,
"description": "更新站点Cookie",
"data": {}
},
"/site_enable": {
"func": SiteChain().remote_enable,
"description": "启用站点",
"data": {}
},
"/site_disable": {
"func": SiteChain().remote_disable,
"description": "禁用站点",
"data": {}
},
"/mediaserver_sync": {
"id": "mediaserver_sync",
"type": "scheduler",
"description": "同步媒体服务器",
"category": "管理"
},
"/subscribes": {
"func": SubscribeChain().remote_list,
"description": "查询订阅",
"category": "订阅",
"data": {}
},
"/subscribe_refresh": {
"id": "subscribe_refresh",
"type": "scheduler",
"description": "刷新订阅",
"category": "订阅"
},
"/subscribe_search": {
"id": "subscribe_search",
"type": "scheduler",
"description": "搜索订阅",
"category": "订阅"
},
"/subscribe_delete": {
"func": SubscribeChain().remote_delete,
"description": "删除订阅",
"data": {}
},
"/subscribe_tmdb": {
"id": "subscribe_tmdb",
"type": "scheduler",
"description": "订阅元数据更新"
},
"/downloading": {
"func": DownloadChain().remote_downloading,
"description": "正在下载",
"category": "管理",
"data": {}
},
"/transfer": {
"id": "transfer",
"type": "scheduler",
"description": "下载文件整理",
"category": "管理"
},
"/redo": {
"func": TransferChain().remote_transfer,
"description": "手动整理",
"data": {}
},
"/clear_cache": {
"func": SystemChain().remote_clear_cache,
"description": "清理缓存",
"category": "管理",
"data": {}
},
"/restart": {
"func": SystemChain().restart,
"description": "重启系统",
"category": "管理",
"data": {}
},
"/version": {
"func": SystemChain().version,
"description": "当前版本",
"category": "管理",
"data": {}
}
}
# 插件命令集合
self._plugin_commands = {}
# 其他命令集合
self._other_commands = {}
# 初始化锁
self._rlock = threading.RLock()
# 插件管理
self.pluginmanager = PluginManager()
# 定时服务管理
self.scheduler = Scheduler()
# 消息管理器
self.messagehelper = MessageHelper()
# 初始化命令
self.init_commands()
def init_commands(self, pid: Optional[str] = None) -> None:
"""
初始化菜单命令
"""
if settings.DEV:
logger.debug("Development mode active. Skipping command initialization.")
return
# 使用线程池提交后台任务,避免引起阻塞
ThreadHelper().submit(self.__init_commands_background, pid)
def __init_commands_background(self, pid: Optional[str] = None) -> None:
"""
后台初始化菜单命令
"""
try:
with self._rlock:
logger.debug("Acquired lock for initializing commands in background.")
self._plugin_commands = self.__build_plugin_commands(pid)
self._commands = {
**self._preset_commands,
**self._plugin_commands,
**self._other_commands
}
# 强制触发注册
force_register = False
# 触发事件，允许拦截和调整命令
event, initial_commands = self.__trigger_register_commands_event()
if event and event.event_data:
# 如果事件返回有效的 event_data使用事件中调整后的命令
event_data: CommandRegisterEventData = event.event_data
# 如果事件被取消,跳过命令注册
if event_data.cancel:
logger.debug(f"Command initialization canceled by event: {event_data.source}")
return
# 如果拦截源与插件标识一致时,这里认为需要强制触发注册
if pid is not None and pid == event_data.source:
force_register = True
initial_commands = event_data.commands or {}
logger.debug(f"Registering command count from event: {len(initial_commands)}")
else:
logger.debug(f"Registering initial command count: {len(initial_commands)}")
# initial_commands 必须是 self._commands 的子集
filtered_initial_commands = DictUtils.filter_keys_to_subset(initial_commands, self._commands)
# 如果 filtered_initial_commands 为空,则跳过注册
if not filtered_initial_commands and not force_register:
logger.debug("Filtered commands are empty, skipping registration.")
return
# 对比调整后的命令与当前命令
if filtered_initial_commands != self._registered_commands or force_register:
logger.debug("Command set has changed or force registration is enabled.")
self._registered_commands = filtered_initial_commands
super().register_commands(commands=filtered_initial_commands)
else:
logger.debug("Command set unchanged, skipping broadcast registration.")
except Exception as e:
logger.error(f"Error occurred during command initialization in background: {e}", exc_info=True)
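The subset check above relies on `DictUtils.filter_keys_to_subset`, whose exact implementation is not shown here; assuming it keeps only the entries whose keys also exist in the reference dict, it amounts to:

```python
def filter_keys_to_subset(data: dict, reference: dict) -> dict:
    """Assumed behaviour of DictUtils.filter_keys_to_subset: retain only
    the entries of `data` whose keys are also present in `reference`,
    so event-adjusted commands can never register unknown commands."""
    return {k: v for k, v in data.items() if k in reference}
```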
def __trigger_register_commands_event(self) -> (Optional[Event], dict):
"""
触发事件,允许调整命令数据
"""
def add_commands(source, command_type):
"""
添加命令集合
"""
for cmd, command in source.items():
command_data = {
"type": command_type,
"description": command.get("description"),
"category": command.get("category")
}
# 如果有 pid则添加到命令数据中
plugin_id = command.get("pid")
if plugin_id:
command_data["pid"] = plugin_id
commands[cmd] = command_data
# 初始化命令字典
commands: Dict[str, dict] = {}
add_commands(self._preset_commands, "preset")
add_commands(self._plugin_commands, "plugin")
add_commands(self._other_commands, "other")
# 触发事件，允许拦截和调整命令
event_data = CommandRegisterEventData(commands=commands, origin="CommandChain", service=None)
event = eventmanager.send_event(ChainEventType.CommandRegister, event_data)
return event, commands
def __build_plugin_commands(self, pid: Optional[str] = None) -> Dict[str, dict]:
"""
构建插件命令
"""
# 为了保证命令顺序的一致性,目前这里没有直接使用 pid 获取单一插件命令,后续如果存在性能问题,可以考虑优化这里的逻辑
plugin_commands = {}
for command in self.pluginmanager.get_plugin_commands():
cmd = command.get("cmd")
if cmd:
plugin_commands[cmd] = {
"pid": command.get("pid"),
"func": self.send_plugin_event,
"description": command.get("desc"),
"category": command.get("category"),
"data": {
"etype": command.get("event"),
"data": command.get("data")
}
}
return plugin_commands
def __run_command(self, command: Dict[str, Any], data_str: str = "",
channel: MessageChannel = None, source: str = None, userid: Union[str, int] = None):
"""
运行定时服务
"""
if command.get("type") == "scheduler":
# 定时服务
if userid:
self.post_message(
Notification(
channel=channel,
source=source,
title=f"开始执行 {command.get('description')} ...",
userid=userid
)
)
# 执行定时任务
self.scheduler.start(job_id=command.get("id"))
if userid:
self.post_message(
Notification(
channel=channel,
source=source,
title=f"{command.get('description')} 执行完成",
userid=userid
)
)
else:
# 命令
cmd_data = command['data'] if command.get('data') else {}
args_num = ObjectUtils.arguments(command['func'])
if args_num > 0:
if cmd_data:
# 有内置参数直接使用内置参数
data = cmd_data.get("data") or {}
data['channel'] = channel
data['source'] = source
data['user'] = userid
if data_str:
data['arg_str'] = data_str
cmd_data['data'] = data
command['func'](**cmd_data)
elif args_num == 3:
# 没有输入参数只输入渠道来源、用户ID和消息来源
command['func'](channel, userid, source)
elif args_num > 3:
# 多个输入参数用户输入、用户ID
command['func'](data_str, channel, userid, source)
else:
# 没有参数
command['func']()
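The fallback branches above pick a call shape from the handler's arity via `ObjectUtils.arguments`. A rough standalone equivalent using `inspect` (the parameter counting is an assumption about what `ObjectUtils.arguments` returns; bound methods already exclude `self` in their signature):

```python
import inspect


def count_arguments(func) -> int:
    """Rough stand-in for ObjectUtils.arguments: the number of
    positional parameters the handler declares."""
    return len(inspect.signature(func).parameters)


def dispatch(func, data_str, channel, userid, source):
    """Sketch of the __run_command fallback dispatch: handlers with
    exactly 3 parameters get (channel, userid, source), handlers with
    more also receive the raw argument string, parameterless handlers
    are called bare."""
    n = count_arguments(func)
    if n == 3:
        return func(channel, userid, source)
    if n > 3:
        return func(data_str, channel, userid, source)
    return func()
```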
def get_commands(self):
"""
获取命令列表
"""
return self._commands
def get(self, cmd: str) -> Any:
"""
获取命令
"""
return self._commands.get(cmd, {})
def register(self, cmd: str, func: Any, data: dict = None,
desc: str = None, category: str = None) -> None:
"""
注册单个命令
"""
# 单独调用的,统一注册到其他
self._other_commands[cmd] = {
"func": func,
"description": desc,
"category": category,
"data": data or {}
}
def execute(self, cmd: str, data_str: str = "",
channel: MessageChannel = None, source: str = None,
userid: Union[str, int] = None) -> None:
"""
执行命令
"""
command = self.get(cmd)
if command:
try:
if userid:
logger.info(f"用户 {userid} 开始执行:{command.get('description')} ...")
else:
logger.info(f"开始执行:{command.get('description')} ...")
# 执行命令
self.__run_command(command, data_str=data_str,
channel=channel, source=source, userid=userid)
if userid:
logger.info(f"用户 {userid} {command.get('description')} 执行完成")
else:
logger.info(f"{command.get('description')} 执行完成")
except Exception as err:
logger.error(f"执行命令 {cmd} 出错:{str(err)} - {traceback.format_exc()}")
self.messagehelper.put(title=f"执行命令 {cmd} 出错",
message=str(err),
role="system")
@staticmethod
def send_plugin_event(etype: EventType, data: dict) -> None:
"""
发送插件命令
"""
eventmanager.send_event(etype, data)
@eventmanager.register(EventType.CommandExcute)
def command_event(self, event: ManagerEvent) -> None:
"""
注册命令执行事件
event_data: {
"cmd": "/xxx args"
}
"""
# 命令参数
event_str = event.event_data.get('cmd')
# 消息渠道
event_channel = event.event_data.get('channel')
# 消息来源
event_source = event.event_data.get('source')
# 消息用户
event_user = event.event_data.get('user')
if event_str:
cmd = event_str.split()[0]
args = " ".join(event_str.split()[1:])
if self.get(cmd):
self.execute(cmd=cmd, data_str=args,
channel=event_channel, source=event_source, userid=event_user)
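The `"/xxx args"` splitting performed by `command_event` is a simple whitespace split that can be factored out:

```python
def parse_command(event_str: str):
    """Split "/cmd arg1 arg2" into the command token and its argument
    string, the same way command_event does before calling execute()."""
    parts = event_str.split()
    return parts[0], " ".join(parts[1:])
```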
@eventmanager.register(EventType.ModuleReload)
def module_reload_event(self, event: ManagerEvent) -> None:
"""
注册模块重载事件
"""
# 发生模块重载时,重新注册命令
self.init_commands()


@@ -1,178 +0,0 @@
import base64
from typing import Tuple, Optional
from urllib.parse import urljoin
from lxml import etree
from app.chain import ChainBase
from app.chain.site import SiteChain
from app.core.config import settings
from app.db.site_oper import SiteOper
from app.db.siteicon_oper import SiteIconOper
from app.helper.cloudflare import under_challenge
from app.helper.cookiecloud import CookieCloudHelper
from app.helper.message import MessageHelper
from app.helper.rss import RssHelper
from app.helper.sites import SitesHelper
from app.log import logger
from app.utils.http import RequestUtils
from app.utils.site import SiteUtils
class CookieCloudChain(ChainBase):
"""
CookieCloud处理链
"""
def __init__(self):
super().__init__()
self.siteoper = SiteOper()
self.siteiconoper = SiteIconOper()
self.siteshelper = SitesHelper()
self.rsshelper = RssHelper()
self.sitechain = SiteChain()
self.message = MessageHelper()
self.cookiecloud = CookieCloudHelper(
server=settings.COOKIECLOUD_HOST,
key=settings.COOKIECLOUD_KEY,
password=settings.COOKIECLOUD_PASSWORD
)
def process(self, manual=False) -> Tuple[bool, str]:
"""
通过CookieCloud同步站点Cookie
"""
logger.info("开始同步CookieCloud站点 ...")
cookies, msg = self.cookiecloud.download()
if not cookies:
logger.error(f"CookieCloud同步失败{msg}")
if manual:
self.message.put(f"CookieCloud同步失败 {msg}")
return False, msg
# 保存Cookie或新增站点
_update_count = 0
_add_count = 0
_fail_count = 0
for domain, cookie in cookies.items():
# 获取站点信息
indexer = self.siteshelper.get_indexer(domain)
site_info = self.siteoper.get_by_domain(domain)
if site_info:
# 检查站点连通性
status, msg = self.sitechain.test(domain)
# 更新站点Cookie
if status:
logger.info(f"站点【{site_info.name}】连通性正常不同步CookieCloud数据")
# 更新站点rss地址
if not site_info.public and not site_info.rss:
# 自动生成rss地址
rss_url, errmsg = self.rsshelper.get_rss_link(
url=site_info.url,
cookie=cookie,
ua=settings.USER_AGENT,
proxy=True if site_info.proxy else False
)
if rss_url:
logger.info(f"更新站点 {domain} RSS地址 ...")
self.siteoper.update_rss(domain=domain, rss=rss_url)
else:
logger.warn(errmsg)
continue
# 更新站点Cookie
logger.info(f"更新站点 {domain} Cookie ...")
self.siteoper.update_cookie(domain=domain, cookies=cookie)
_update_count += 1
elif indexer:
# 新增站点
res = RequestUtils(cookies=cookie,
ua=settings.USER_AGENT
).get_res(url=indexer.get("domain"))
if res and res.status_code in [200, 500, 403]:
if not indexer.get("public") and not SiteUtils.is_logged_in(res.text):
_fail_count += 1
if under_challenge(res.text):
logger.warn(f"站点 {indexer.get('name')} 被Cloudflare防护无法登录无法添加站点")
continue
logger.warn(
f"站点 {indexer.get('name')} 登录失败没有该站点账号或Cookie已失效无法添加站点")
continue
elif res is not None:
_fail_count += 1
logger.warn(f"站点 {indexer.get('name')} 连接状态码:{res.status_code},无法添加站点")
continue
else:
_fail_count += 1
logger.warn(f"站点 {indexer.get('name')} 连接失败,无法添加站点")
continue
# 获取rss地址
rss_url = None
if not indexer.get("public") and indexer.get("domain"):
# 自动生成rss地址
rss_url, errmsg = self.rsshelper.get_rss_link(url=indexer.get("domain"),
cookie=cookie,
ua=settings.USER_AGENT)
if errmsg:
logger.warn(errmsg)
# 插入数据库
logger.info(f"新增站点 {indexer.get('name')} ...")
self.siteoper.add(name=indexer.get("name"),
url=indexer.get("domain"),
domain=domain,
cookie=cookie,
rss=rss_url,
public=1 if indexer.get("public") else 0)
_add_count += 1
# 保存站点图标
if indexer:
site_icon = self.siteiconoper.get_by_domain(domain)
if not site_icon or not site_icon.base64:
logger.info(f"开始缓存站点 {indexer.get('name')} 图标 ...")
icon_url, icon_base64 = self.__parse_favicon(url=indexer.get("domain"),
cookie=cookie,
ua=settings.USER_AGENT)
if icon_url:
self.siteiconoper.update_icon(name=indexer.get("name"),
domain=domain,
icon_url=icon_url,
icon_base64=icon_base64)
logger.info(f"缓存站点 {indexer.get('name')} 图标成功")
else:
logger.warn(f"缓存站点 {indexer.get('name')} 图标失败")
# 处理完成
ret_msg = f"更新了{_update_count}个站点,新增了{_add_count}个站点"
if _fail_count > 0:
ret_msg += f"，{_fail_count}个站点添加失败，下次同步时将重试，也可以手动添加"
if manual:
self.message.put(f"CookieCloud同步成功, {ret_msg}")
logger.info(f"CookieCloud同步成功{ret_msg}")
return True, ret_msg
@staticmethod
def __parse_favicon(url: str, cookie: str, ua: str) -> Tuple[str, Optional[str]]:
"""
解析站点favicon,返回base64 fav图标
:param url: 站点地址
:param cookie: Cookie
:param ua: User-Agent
:return:
"""
favicon_url = urljoin(url, "favicon.ico")
res = RequestUtils(cookies=cookie, timeout=60, ua=ua).get_res(url=url)
if res:
html_text = res.text
else:
logger.error(f"获取站点页面失败:{url}")
return favicon_url, None
html = etree.HTML(html_text)
if html:
fav_link = html.xpath('//head/link[contains(@rel, "icon")]/@href')
if fav_link:
favicon_url = urljoin(url, fav_link[0])
res = RequestUtils(cookies=cookie, timeout=20, ua=ua).get_res(url=favicon_url)
if res:
return favicon_url, base64.b64encode(res.content).decode()
else:
logger.error(f"获取站点图标失败:{favicon_url}")
return favicon_url, None
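The favicon resolution in `__parse_favicon` boils down to `urljoin`: default to `favicon.ico` relative to the site URL, but prefer an explicit `<link rel="icon">` href when the page declares one. A network-free sketch of just that URL logic:

```python
from urllib.parse import urljoin


def resolve_favicon(site_url: str, fav_href: str = None) -> str:
    """URL half of __parse_favicon: relative hrefs (or the favicon.ico
    default) are resolved against the site URL with urljoin."""
    if fav_href:
        return urljoin(site_url, fav_href)
    return urljoin(site_url, "favicon.ico")
```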


@@ -9,14 +9,14 @@ class DashboardChain(ChainBase, metaclass=Singleton):
"""
各类仪表板统计处理链
"""
def media_statistic(self) -> Optional[List[schemas.Statistic]]:
def media_statistic(self, server: str = None) -> Optional[List[schemas.Statistic]]:
"""
媒体数量统计
"""
return self.run_module("media_statistic")
return self.run_module("media_statistic", server=server)
def downloader_info(self) -> schemas.DownloaderInfo:
def downloader_info(self, downloader: str = None) -> Optional[List[schemas.DownloaderInfo]]:
"""
下载器信息
"""
return self.run_module("downloader_info")
return self.run_module("downloader_info", downloader=downloader)


@@ -1,7 +1,8 @@
from typing import Optional, List
from app import schemas
from app.chain import ChainBase
from app.core.config import settings
from app.core.context import MediaInfo
from app.schemas import MediaType
from app.utils.singleton import Singleton
@@ -11,7 +12,22 @@ class DoubanChain(ChainBase, metaclass=Singleton):
豆瓣处理链,单例运行
"""
def movie_top250(self, page: int = 1, count: int = 30) -> Optional[List[dict]]:
def person_detail(self, person_id: int) -> Optional[schemas.MediaPerson]:
"""
根据人物ID查询豆瓣人物详情
:param person_id: 人物ID
"""
return self.run_module("douban_person_detail", person_id=person_id)
def person_credits(self, person_id: int, page: int = 1) -> List[MediaInfo]:
"""
根据人物ID查询人物参演作品
:param person_id: 人物ID
:param page: 页码
"""
return self.run_module("douban_person_credits", person_id=person_id, page=page)
def movie_top250(self, page: int = 1, count: int = 30) -> Optional[List[MediaInfo]]:
"""
获取豆瓣电影TOP250
:param page: 页码
@@ -19,26 +35,26 @@ class DoubanChain(ChainBase, metaclass=Singleton):
"""
return self.run_module("movie_top250", page=page, count=count)
def movie_showing(self, page: int = 1, count: int = 30) -> Optional[List[dict]]:
def movie_showing(self, page: int = 1, count: int = 30) -> Optional[List[MediaInfo]]:
"""
获取正在上映的电影
"""
return self.run_module("movie_showing", page=page, count=count)
def tv_weekly_chinese(self, page: int = 1, count: int = 30) -> Optional[List[dict]]:
def tv_weekly_chinese(self, page: int = 1, count: int = 30) -> Optional[List[MediaInfo]]:
"""
获取本周中国剧集榜
"""
return self.run_module("tv_weekly_chinese", page=page, count=count)
def tv_weekly_global(self, page: int = 1, count: int = 30) -> Optional[List[dict]]:
def tv_weekly_global(self, page: int = 1, count: int = 30) -> Optional[List[MediaInfo]]:
"""
获取本周全球剧集榜
"""
return self.run_module("tv_weekly_global", page=page, count=count)
def douban_discover(self, mtype: MediaType, sort: str, tags: str,
page: int = 0, count: int = 30) -> Optional[List[dict]]:
page: int = 0, count: int = 30) -> Optional[List[MediaInfo]]:
"""
发现豆瓣电影、剧集
:param mtype: 媒体类型
@@ -51,52 +67,46 @@ class DoubanChain(ChainBase, metaclass=Singleton):
return self.run_module("douban_discover", mtype=mtype, sort=sort, tags=tags,
page=page, count=count)
def tv_animation(self, page: int = 1, count: int = 30) -> Optional[List[dict]]:
def tv_animation(self, page: int = 1, count: int = 30) -> Optional[List[MediaInfo]]:
"""
获取动画剧集
"""
return self.run_module("tv_animation", page=page, count=count)
def movie_hot(self, page: int = 1, count: int = 30) -> Optional[List[dict]]:
def movie_hot(self, page: int = 1, count: int = 30) -> Optional[List[MediaInfo]]:
"""
获取热门电影
"""
if settings.RECOGNIZE_SOURCE != "douban":
return None
return self.run_module("movie_hot", page=page, count=count)
def tv_hot(self, page: int = 1, count: int = 30) -> Optional[List[dict]]:
def tv_hot(self, page: int = 1, count: int = 30) -> Optional[List[MediaInfo]]:
"""
获取热门剧集
"""
if settings.RECOGNIZE_SOURCE != "douban":
return None
return self.run_module("tv_hot", page=page, count=count)
def movie_credits(self, doubanid: str, page: int = 1) -> List[dict]:
def movie_credits(self, doubanid: str) -> Optional[List[schemas.MediaPerson]]:
"""
根据豆瓣ID查询电影演职人员
:param doubanid: 豆瓣ID
:param page: 页码
"""
return self.run_module("douban_movie_credits", doubanid=doubanid, page=page)
return self.run_module("douban_movie_credits", doubanid=doubanid)
def tv_credits(self, doubanid: str, page: int = 1) -> List[dict]:
def tv_credits(self, doubanid: str) -> Optional[List[schemas.MediaPerson]]:
"""
根据豆瓣ID查询电视剧演职人员
:param doubanid: 豆瓣ID
:param page: 页码
"""
return self.run_module("douban_tv_credits", doubanid=doubanid, page=page)
return self.run_module("douban_tv_credits", doubanid=doubanid)
def movie_recommend(self, doubanid: str) -> List[dict]:
def movie_recommend(self, doubanid: str) -> List[MediaInfo]:
"""
根据豆瓣ID查询推荐电影
:param doubanid: 豆瓣ID
"""
return self.run_module("douban_movie_recommend", doubanid=doubanid)
def tv_recommend(self, doubanid: str) -> List[dict]:
def tv_recommend(self, doubanid: str) -> List[MediaInfo]:
"""
根据豆瓣ID查询推荐电视剧
:param doubanid: 豆瓣ID

View File

@@ -6,13 +6,17 @@ import time
from pathlib import Path
from typing import List, Optional, Tuple, Set, Dict, Union
from app import schemas
from app.chain import ChainBase
from app.core.config import settings
from app.core.config import settings, global_vars
from app.core.context import MediaInfo, TorrentInfo, Context
from app.core.event import eventmanager, Event
from app.core.meta import MetaBase
from app.core.metainfo import MetaInfo
from app.db.downloadhistory_oper import DownloadHistoryOper
from app.db.mediaserver_oper import MediaServerOper
from app.helper.directory import DirectoryHelper
from app.helper.message import MessageHelper
from app.helper.torrent import TorrentHelper
from app.log import logger
from app.schemas import ExistMediaInfo, NotExistMediaInfo, DownloadingTorrent, Notification
@@ -31,16 +35,25 @@ class DownloadChain(ChainBase):
self.torrent = TorrentHelper()
self.downloadhis = DownloadHistoryOper()
self.mediaserver = MediaServerOper()
self.directoryhelper = DirectoryHelper()
self.messagehelper = MessageHelper()
def post_download_message(self, meta: MetaBase, mediainfo: MediaInfo, torrent: TorrentInfo,
channel: MessageChannel = None,
userid: str = None):
channel: MessageChannel = None, username: str = None,
download_episodes: str = None):
"""
发送添加下载的消息
发送添加下载的消息,根据消息场景开关决定发给谁
:param meta: 元数据
:param mediainfo: 媒体信息
:param torrent: 种子信息
:param channel: 通知渠道
:param username: 通知显示的下载用户信息
:param download_episodes: 下载的集数
"""
# 拼装消息内容
msg_text = ""
if userid:
msg_text = f"用户:{userid}"
if username:
msg_text = f"用户:{username}"
if torrent.site_name:
msg_text = f"{msg_text}\n站点:{torrent.site_name}"
if meta.resource_term:
@@ -63,22 +76,28 @@ class DownloadChain(ChainBase):
msg_text = f"{msg_text}\n促销:{torrent.volume_factor}"
if torrent.hit_and_run:
msg_text = f"{msg_text}\nHit&Run"
if torrent.labels:
msg_text = f"{msg_text}\n标签:{' '.join(torrent.labels)}"
if torrent.description:
html_re = re.compile(r'<[^>]+>', re.S)
description = html_re.sub('', torrent.description)
torrent.description = re.sub(r'<[^>]+>', '', description)
msg_text = f"{msg_text}\n描述:{torrent.description}"
# 下载成功按规则发送消息
self.post_message(Notification(
channel=channel,
mtype=NotificationType.Download,
title=f"{mediainfo.title_year} "
f"{meta.season_episode} 开始下载",
f"{'%s %s' % (meta.season, download_episodes) if download_episodes else meta.season_episode} 开始下载",
text=msg_text,
image=mediainfo.get_message_image()))
image=mediainfo.get_message_image(),
link=settings.MP_DOMAIN('/#/downloading'),
username=username))
def download_torrent(self, torrent: TorrentInfo,
channel: MessageChannel = None,
source: str = None,
userid: Union[str, int] = None
) -> Tuple[Optional[Union[Path, str]], str, list]:
"""
@@ -102,17 +121,27 @@ class DownloadChain(ChainBase):
# 解码参数
req_str = base64.b64decode(base64_str.encode('utf-8')).decode('utf-8')
req_params: Dict[str, dict] = json.loads(req_str)
# 是否使用cookie
if not req_params.get('cookie'):
cookie = None
# 请求头
if req_params.get('header'):
headers = req_params.get('header')
else:
headers = None
if req_params.get('method') == 'get':
# GET请求
res = RequestUtils(
ua=ua,
cookies=cookie
cookies=cookie,
headers=headers
).get_res(url, params=req_params.get('params'))
else:
# POST请求
res = RequestUtils(
ua=ua,
cookies=cookie
cookies=cookie,
headers=headers
).post_res(url, params=req_params.get('params'))
if not res:
return None
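The `[...]`-style enclosure handled above is not a direct link: it wraps a base64-encoded JSON request descriptor whose `method`/`params`/`header`/`cookie` keys drive the follow-up request. A minimal sketch of just the decode step (the descriptor shape follows the code above, not a published specification):

```python
import base64
import json


def decode_request_descriptor(base64_str: str) -> dict:
    """Decode a base64-encoded JSON request descriptor into a dict."""
    req_str = base64.b64decode(base64_str.encode("utf-8")).decode("utf-8")
    return json.loads(req_str)


# Build a descriptor the same way a site would embed it in the enclosure
payload = {"method": "get", "params": {"id": "123"}, "header": {"Referer": "https://example.org"}}
encoded = base64.b64encode(json.dumps(payload).encode("utf-8")).decode("utf-8")
decoded = decode_request_descriptor(encoded)
```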
@@ -133,12 +162,15 @@ class DownloadChain(ChainBase):
return None, "", []
if torrent.enclosure.startswith("magnet:"):
return torrent.enclosure, "", []
# Cookie
site_cookie = torrent.site_cookie
if torrent.enclosure.startswith("["):
# 需要解码获取下载地址
torrent_url = __get_redict_url(url=torrent.enclosure,
ua=torrent.site_ua,
cookie=torrent.site_cookie)
cookie=site_cookie)
# 涉及解析地址的不使用Cookie下载种子,否则MT会出错
site_cookie = None
else:
torrent_url = torrent.enclosure
if not torrent_url:
@@ -147,7 +179,7 @@ class DownloadChain(ChainBase):
# 下载种子文件
torrent_file, content, download_folder, files, error_msg = self.torrent.download_torrent(
url=torrent_url,
cookie=torrent.site_cookie,
cookie=site_cookie,
ua=torrent.site_ua,
proxy=torrent.site_proxy)
@@ -159,6 +191,7 @@ class DownloadChain(ChainBase):
logger.error(f"下载种子文件失败:{torrent.title} - {torrent_url}")
self.post_message(Notification(
channel=channel,
source=source,
mtype=NotificationType.Manual,
title=f"{torrent.title} 种子下载失败!",
text=f"错误信息:{error_msg}\n站点:{torrent.site_name}",
@@ -170,28 +203,44 @@ class DownloadChain(ChainBase):
def download_single(self, context: Context, torrent_file: Path = None,
episodes: Set[int] = None,
channel: MessageChannel = None,
channel: MessageChannel = None, source: str = None,
save_path: str = None,
userid: Union[str, int] = None,
username: str = None) -> Optional[str]:
username: str = None,
downloader: str = None,
media_category: str = None) -> Optional[str]:
"""
下载及发送通知
:param context: 资源上下文
:param torrent_file: 种子文件路径
:param episodes: 需要下载的集数
:param channel: 通知渠道
:param source: 通知来源
:param save_path: 保存路径
:param userid: 用户ID
:param username: 调用下载的用户名/插件名
:param downloader: 下载器
:param media_category: 自定义媒体类别
"""
_torrent = context.torrent_info
_media = context.media_info
_meta = context.meta_info
# 补充完整的media数据
if not _media.genre_ids:
new_media = self.recognize_media(mtype=_media.type, tmdbid=_media.tmdb_id,
doubanid=_media.douban_id, bangumiid=_media.bangumi_id)
if new_media:
_media = new_media
# 实际下载的集数
download_episodes = StringUtils.format_ep(list(episodes)) if episodes else None
_folder_name = ""
if not torrent_file:
# 下载种子文件,得到的可能是文件也可能是磁力链
content, _folder_name, _file_list = self.download_torrent(_torrent,
channel=channel,
source=source,
userid=userid)
if not content:
return None
@@ -201,50 +250,48 @@ class DownloadChain(ChainBase):
_folder_name, _file_list = self.torrent.get_torrent_info(torrent_file)
# 下载目录
if not save_path:
if settings.DOWNLOAD_CATEGORY and _media and _media.category:
# 开启下载二级目录
if _media.type == MediaType.MOVIE:
# 电影
download_dir = settings.SAVE_MOVIE_PATH / _media.category
else:
if _media.genre_ids \
and set(_media.genre_ids).intersection(set(settings.ANIME_GENREIDS)):
# 动漫
download_dir = settings.SAVE_ANIME_PATH
else:
# 电视剧
download_dir = settings.SAVE_TV_PATH / _media.category
elif _media:
# 未开启下载二级目录
if _media.type == MediaType.MOVIE:
# 电影
download_dir = settings.SAVE_MOVIE_PATH
else:
if _media.genre_ids \
and set(_media.genre_ids).intersection(set(settings.ANIME_GENREIDS)):
# 动漫
download_dir = settings.SAVE_ANIME_PATH
else:
# 电视剧
download_dir = settings.SAVE_TV_PATH
else:
# 未识别
download_dir = settings.SAVE_PATH
if save_path:
# 有自定义下载目录时,尝试匹配目录配置
dir_info = self.directoryhelper.get_dir(_media, src_path=Path(save_path), local=True)
else:
# 根据媒体信息查询下载目录配置
dir_info = self.directoryhelper.get_dir(_media, local=True)
# 拼装子目录
if dir_info:
# 一级目录
if not dir_info.media_type and dir_info.download_type_folder:
# 一级自动分类
download_dir = Path(dir_info.download_path) / _media.type.value
else:
# 一级不分类
download_dir = Path(dir_info.download_path)
# 二级目录
if not dir_info.media_category and dir_info.download_category_folder and _media and _media.category:
# 二级自动分类
download_dir = download_dir / _media.category
elif save_path:
# 自定义下载目录
download_dir = Path(save_path)
else:
# 未找到下载目录,且没有自定义下载目录
logger.error(f"未找到下载目录:{_media.type.value} {_media.title_year}")
self.messagehelper.put(f"{_media.type.value} {_media.title_year} 未找到下载目录!",
title="下载失败", role="system")
return None
# 添加下载
result: Optional[tuple] = self.download(content=content,
cookie=_torrent.site_cookie,
episodes=episodes,
download_dir=download_dir,
category=_media.category)
category=_media.category,
downloader=downloader)
if result:
_hash, error_msg = result
_downloader, _hash, error_msg = result
else:
_hash, error_msg = None, "未知错误"
_downloader, _hash, error_msg = None, None, "未找到下载器"
if _hash:
# 下载文件路径
@@ -264,7 +311,7 @@ class DownloadChain(ChainBase):
tvdbid=_media.tvdb_id,
doubanid=_media.douban_id,
seasons=_meta.season,
episodes=_meta.episode,
episodes=download_episodes or _meta.episode,
image=_media.get_backdrop_image(),
download_hash=_hash,
torrent_name=_torrent.title,
@@ -273,7 +320,8 @@ class DownloadChain(ChainBase):
userid=userid,
username=username,
channel=channel.value if channel else None,
date=time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
date=time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()),
media_category=media_category
)
# 登记下载文件
@@ -291,7 +339,7 @@ class DownloadChain(ChainBase):
continue
files_to_add.append({
"download_hash": _hash,
"downloader": settings.DOWNLOADER,
"downloader": _downloader,
"fullpath": str(download_dir / _folder_name / file),
"savepath": str(download_dir / _folder_name),
"filepath": file,
@@ -300,21 +348,26 @@ class DownloadChain(ChainBase):
if files_to_add:
self.downloadhis.add_files(files_to_add)
# 发送消息
self.post_download_message(meta=_meta, mediainfo=_media, torrent=_torrent, channel=channel, userid=userid)
# 下载成功发送消息
self.post_download_message(meta=_meta, mediainfo=_media, torrent=_torrent,
username=username, download_episodes=download_episodes)
# 下载成功后处理
self.download_added(context=context, download_dir=download_dir, torrent_path=torrent_file)
# 广播事件
self.eventmanager.send_event(EventType.DownloadAdded, {
"hash": _hash,
"context": context
"context": context,
"username": username,
"downloader": _downloader
})
else:
# 下载失败
logger.error(f"{_media.title_year} 添加下载任务失败:"
f"{_torrent.title} - {_torrent.enclosure}{error_msg}")
# 只发送给对应渠道和用户
self.post_message(Notification(
channel=channel,
source=source,
mtype=NotificationType.Manual,
title="添加下载任务失败:%s %s"
% (_media.title_year, _meta.season_episode),
@@ -330,8 +383,10 @@ class DownloadChain(ChainBase):
no_exists: Dict[Union[int, str], Dict[int, NotExistMediaInfo]] = None,
save_path: str = None,
channel: MessageChannel = None,
source: str = None,
userid: str = None,
username: str = None
username: str = None,
media_category: str = None
) -> Tuple[List[Context], Dict[Union[int, str], Dict[int, NotExistMediaInfo]]]:
"""
根据缺失数据,自动从种子列表中组合择优下载
@@ -339,8 +394,10 @@ class DownloadChain(ChainBase):
:param no_exists: 缺失的剧集信息
:param save_path: 保存路径
:param channel: 通知渠道
:param source: 通知来源
:param userid: 用户ID
:param username: 调用下载的用户名/插件名
:param media_category: 自定义媒体类别
:return: 已经下载的资源列表、剩余未下载到的剧集 no_exists[tmdb_id/douban_id] = {season: NotExistMediaInfo}
"""
# 已下载的项目
@@ -357,12 +414,13 @@ class DownloadChain(ChainBase):
need = list(set(_need).difference(set(_current)))
# 清除已下载的季信息
seas = copy.deepcopy(no_exists.get(_mid))
for _sea in list(seas):
if _sea not in need:
no_exists[_mid].pop(_sea)
if not no_exists.get(_mid) and no_exists.get(_mid) is not None:
no_exists.pop(_mid)
break
if seas:
for _sea in list(seas):
if _sea not in need:
no_exists[_mid].pop(_sea)
if not no_exists.get(_mid) and no_exists.get(_mid) is not None:
no_exists.pop(_mid)
break
return need
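The `__update_seasons` bookkeeping can be sketched as a standalone helper (names hypothetical). The `if seas:` guard mirrors the fix above: without it, a media id that was already pruned from `no_exists` would make the loop iterate over `None`:

```python
import copy
from typing import Dict, List


def prune_seasons(no_exists: Dict[int, Dict[int, str]], mid: int,
                  need: List[int], current: List[int]) -> List[int]:
    """Drop seasons already downloaded; remove the media entry once empty."""
    remaining = [s for s in need if s not in current]
    seas = copy.deepcopy(no_exists.get(mid))
    if seas:  # guard: the media id may already have been removed
        for sea in list(seas):
            if sea not in remaining:
                no_exists[mid].pop(sea, None)
        if not no_exists.get(mid):
            # nothing left for this media id, drop the whole entry
            no_exists.pop(mid, None)
    return remaining
```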
def __update_episodes(_mid: Union[int, str], _sea: int, _need: list, _current: set) -> list:
@@ -405,13 +463,19 @@ class DownloadChain(ChainBase):
# 如果是电影,直接下载
for context in contexts:
if global_vars.is_system_stopped:
break
if context.media_info.type == MediaType.MOVIE:
if self.download_single(context, save_path=save_path,
channel=channel, userid=userid, username=username):
logger.info(f"开始下载电影 {context.torrent_info.title} ...")
if self.download_single(context, save_path=save_path, channel=channel,
source=source, userid=userid, username=username,
media_category=media_category):
# 下载成功
logger.info(f"{context.torrent_info.title} 添加下载成功")
downloaded_list.append(context)
# 电视剧整季匹配
logger.info(f"开始匹配电视剧整季:{no_exists}")
if no_exists:
# 先把整季缺失的拿出来,看是否刚好有所有季都满足的种子 {tmdbid: [seasons]}
need_seasons: Dict[int, list] = {}
@@ -424,10 +488,13 @@ class DownloadChain(ChainBase):
if not need_seasons.get(need_mid):
need_seasons[need_mid] = []
need_seasons[need_mid].append(tv.season or 1)
logger.info(f"缺失整季:{need_seasons}")
# 查找整季包含的种子,只处理整季没集的种子或者是集数超过季的种子
for need_mid, need_season in need_seasons.items():
# 循环种子
for context in contexts:
if global_vars.is_system_stopped:
break
# 媒体信息
media = context.media_info
# 识别元数据
@@ -439,23 +506,31 @@ class DownloadChain(ChainBase):
continue
# 种子的季清单
torrent_season = meta.season_list
# 没有季的默认为第1季
if not torrent_season:
torrent_season = [1]
# 种子有集的不要
if meta.episode_list:
continue
# 匹配TMDBID
if need_mid == media.tmdb_id or need_mid == media.douban_id:
# 不重复添加
if context in downloaded_list:
continue
# 种子季是需要季或者子集
if set(torrent_season).issubset(set(need_season)):
if len(torrent_season) == 1:
# 只有一季的可能是命名错误,需要打开种子鉴别,只有实际集数大于等于总集数才下载
logger.info(f"开始下载种子 {torrent.title} ...")
content, _, torrent_files = self.download_torrent(torrent)
if not content:
logger.warn(f"{torrent.title} 种子下载失败!")
continue
if isinstance(content, str):
logger.warn(f"{meta.org_string} 下载地址是磁力链,无法确定种子文件集数")
continue
torrent_episodes = self.torrent.get_torrent_episodes(torrent_files)
logger.info(f"{meta.org_string} 解析文件集数为 {torrent_episodes}")
logger.info(f"{meta.org_string} 解析种子文件集数为 {torrent_episodes}")
if not torrent_episodes:
continue
# 更新集数范围
@@ -466,31 +541,43 @@ class DownloadChain(ChainBase):
need_total = __get_season_episodes(need_mid, torrent_season[0])
if len(torrent_episodes) < need_total:
logger.info(
f"{meta.org_string} 解析文件集数发现不是完整合集")
f"{meta.org_string} 解析文件集数发现不是完整合集,先放弃这个种子")
continue
else:
# 下载
logger.info(f"开始下载 {torrent.title} ...")
download_id = self.download_single(
context=context,
torrent_file=content if isinstance(content, Path) else None,
save_path=save_path,
channel=channel,
source=source,
userid=userid,
username=username
username=username,
media_category=media_category
)
else:
# 下载
logger.info(f"开始下载 {torrent.title} ...")
download_id = self.download_single(context, save_path=save_path,
channel=channel, userid=userid, username=username)
channel=channel, source=source,
userid=userid, username=username,
media_category=media_category)
if download_id:
# 下载成功
logger.info(f"{torrent.title} 添加下载成功")
downloaded_list.append(context)
# 更新仍需季集
need_season = __update_seasons(_mid=need_mid,
_need=need_season,
_current=torrent_season)
logger.info(f"{need_mid} 剩余需要季:{need_season}")
if not need_season:
# 全部下载完成
break
# 电视剧季内的集匹配
logger.info(f"开始电视剧完整集匹配:{no_exists}")
if no_exists:
# TMDBID列表
need_tv_list = list(no_exists)
@@ -515,6 +602,8 @@ class DownloadChain(ChainBase):
need_episodes = list(range(start_episode, total_episode + 1))
# 循环种子
for context in contexts:
if global_vars.is_system_stopped:
break
# 媒体信息
media = context.media_info
# 识别元数据
@@ -540,18 +629,24 @@ class DownloadChain(ChainBase):
# 为需要集的子集则下载
if torrent_episodes.issubset(set(need_episodes)):
# 下载
logger.info(f"开始下载 {meta.title} ...")
download_id = self.download_single(context, save_path=save_path,
channel=channel, userid=userid, username=username)
channel=channel, source=source,
userid=userid, username=username,
media_category=media_category)
if download_id:
# 下载成功
logger.info(f"{meta.title} 添加下载成功")
downloaded_list.append(context)
# 更新仍需集数
need_episodes = __update_episodes(_mid=need_mid,
_need=need_episodes,
_sea=need_season,
_current=torrent_episodes)
logger.info(f"{need_season} 剩余需要集:{need_episodes}")
# 仍然缺失的剧集从整季中选择需要的集数文件下载,仅支持QB和TR
logger.info(f"开始电视剧多集拆包匹配:{no_exists}")
if no_exists:
# TMDBID列表
no_exists_list = list(no_exists)
@@ -575,6 +670,8 @@ class DownloadChain(ChainBase):
continue
# 循环种子
for context in contexts:
if global_vars.is_system_stopped:
break
# 媒体信息
media = context.media_info
# 识别元数据
@@ -597,15 +694,17 @@ class DownloadChain(ChainBase):
and len(meta.season_list) == 1 \
and meta.season_list[0] == need_season:
# 检查种子看是否有需要的集
logger.info(f"开始下载种子 {torrent.title} ...")
content, _, torrent_files = self.download_torrent(torrent)
if not content:
logger.info(f"{torrent.title} 种子下载失败!")
continue
if isinstance(content, str):
logger.warn(f"{meta.org_string} 下载地址是磁力链,无法解析种子文件集数")
continue
# 种子全部集
torrent_episodes = self.torrent.get_torrent_episodes(torrent_files)
logger.info(f"{torrent.site_name} - {meta.org_string} 解析文件集数:{torrent_episodes}")
logger.info(f"{torrent.site_name} - {meta.org_string} 解析种子文件集数:{torrent_episodes}")
# 选中的集
selected_episodes = set(torrent_episodes).intersection(set(need_episodes))
if not selected_episodes:
@@ -613,18 +712,22 @@ class DownloadChain(ChainBase):
continue
logger.info(f"{torrent.site_name} - {torrent.title} 选中集数:{selected_episodes}")
# 添加下载
logger.info(f"开始下载 {torrent.title} ...")
download_id = self.download_single(
context=context,
torrent_file=content if isinstance(content, Path) else None,
episodes=selected_episodes,
save_path=save_path,
channel=channel,
source=source,
userid=userid,
username=username
username=username,
media_category=media_category
)
if not download_id:
continue
# 下载成功
logger.info(f"{torrent.title} 添加下载成功")
downloaded_list.append(context)
# 更新种子集数范围
begin_ep = min(torrent_episodes)
@@ -635,8 +738,10 @@ class DownloadChain(ChainBase):
_need=need_episodes,
_sea=need_season,
_current=selected_episodes)
logger.info(f"{need_season} 剩余需要集:{need_episodes}")
# 返回下载的资源,剩下没下完的
logger.info(f"成功下载种子数:{len(downloaded_list)},剩余未下载的剧集:{no_exists}")
return downloaded_list, no_exists
def get_no_exists_info(self, meta: MetaBase,
@@ -769,7 +874,7 @@ class DownloadChain(ChainBase):
# 全部存在
return True, no_exists
def remote_downloading(self, channel: MessageChannel, userid: Union[str, int] = None):
def remote_downloading(self, channel: MessageChannel, userid: Union[str, int] = None, source: str = None):
"""
查询正在下载的任务,并发送消息
"""
@@ -777,9 +882,12 @@ class DownloadChain(ChainBase):
if not torrents:
self.post_message(Notification(
channel=channel,
source=source,
mtype=NotificationType.Download,
title="没有正在下载的任务!",
userid=userid))
userid=userid,
link=settings.MP_DOMAIN('#/downloading')
))
return
# 发送消息
title = f"{len(torrents)} 个任务正在下载:"
@@ -791,14 +899,20 @@ class DownloadChain(ChainBase):
f"{round(torrent.progress, 1)}%")
index += 1
self.post_message(Notification(
channel=channel, mtype=NotificationType.Download,
title=title, text="\n".join(messages), userid=userid))
channel=channel,
source=source,
mtype=NotificationType.Download,
title=title,
text="\n".join(messages),
userid=userid,
link=settings.MP_DOMAIN('#/downloading')
))
def downloading(self) -> List[DownloadingTorrent]:
def downloading(self, name: str = None) -> List[DownloadingTorrent]:
"""
查询正在下载的任务
"""
torrents = self.list_torrents(status=TorrentStatus.DOWNLOADING)
torrents = self.list_torrents(downloader=name, status=TorrentStatus.DOWNLOADING)
if not torrents:
return []
ret_torrents = []
@@ -816,6 +930,7 @@ class DownloadChain(ChainBase):
}
# 下载用户
torrent.userid = history.userid
torrent.username = history.username
ret_torrents.append(torrent)
return ret_torrents
@@ -834,3 +949,26 @@ class DownloadChain(ChainBase):
删除下载任务
"""
return self.remove_torrents(hashs=[hash_str])
@eventmanager.register(EventType.DownloadFileDeleted)
def download_file_deleted(self, event: Event):
"""
下载文件删除时,同步删除下载任务
"""
if not event:
return
hash_str = event.event_data.get("hash")
if not hash_str:
return
logger.warn(f"检测到下载源文件被删除,删除下载任务(不含文件):{hash_str}")
# 先查询种子
torrents: List[schemas.TransferTorrent] = self.list_torrents(hashs=[hash_str])
if torrents:
self.remove_torrents(hashs=[hash_str], delete_file=False)
# 发出下载任务删除事件,如需处理辅种,可监听该事件
self.eventmanager.send_event(EventType.DownloadDeleted, {
"hash": hash_str,
"torrents": [torrent.dict() for torrent in torrents]
})
else:
logger.info(f"没有在下载器中查询到 {hash_str} 对应的下载任务")
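The `@eventmanager.register(...)` handler above reacts to `DownloadFileDeleted` and re-broadcasts a `DownloadDeleted` event for other listeners (e.g. cross-seed cleanup). A toy sketch of that decorator-based event-bus pattern, not MoviePilot's actual `eventmanager` implementation:

```python
from collections import defaultdict
from typing import Callable, Dict, List


class EventManager:
    """Toy decorator-based event bus: register handlers, then broadcast."""

    def __init__(self):
        self._handlers: Dict[str, List[Callable]] = defaultdict(list)

    def register(self, etype: str):
        def decorator(func: Callable):
            self._handlers[etype].append(func)
            return func
        return decorator

    def send_event(self, etype: str, data: dict):
        for handler in self._handlers[etype]:
            handler(data)


eventmanager = EventManager()
deleted_hashes = []
followups = []


@eventmanager.register("DownloadFileDeleted")
def on_file_deleted(data: dict):
    # React to the deletion, then broadcast a follow-up event for other listeners
    deleted_hashes.append(data["hash"])
    eventmanager.send_event("DownloadDeleted", {"hash": data["hash"]})


@eventmanager.register("DownloadDeleted")
def on_download_deleted(data: dict):
    followups.append(data["hash"])


eventmanager.send_event("DownloadFileDeleted", {"hash": "abc123"})
```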

View File

@@ -1,16 +1,18 @@
import copy
import time
from pathlib import Path
from threading import Lock
from typing import Optional, List, Tuple
from typing import Optional, List, Tuple, Union
from app import schemas
from app.chain import ChainBase
from app.chain.storage import StorageChain
from app.core.config import settings
from app.core.context import Context, MediaInfo
from app.core.event import eventmanager, Event
from app.core.meta import MetaBase
from app.core.metainfo import MetaInfo, MetaInfoPath
from app.log import logger
from app.schemas.types import EventType, MediaType
from app.schemas.types import EventType, MediaType, ChainEventType
from app.utils.http import RequestUtils
from app.utils.singleton import Singleton
from app.utils.string import StringUtils
@@ -21,10 +23,21 @@ class MediaChain(ChainBase, metaclass=Singleton):
"""
媒体信息处理链,单例运行
"""
# 临时识别标题
recognize_title: Optional[str] = None
# 临时识别结果 {title, name, year, season, episode}
recognize_temp: Optional[dict] = None
def __init__(self):
super().__init__()
self.storagechain = StorageChain()
def metadata_nfo(self, meta: MetaBase, mediainfo: MediaInfo,
season: int = None, episode: int = None) -> Optional[str]:
"""
获取NFO文件内容文本
:param meta: 元数据
:param mediainfo: 媒体信息
:param season: 季号
:param episode: 集号
"""
return self.run_module("metadata_nfo", meta=meta, mediainfo=mediainfo, season=season, episode=episode)
def recognize_by_meta(self, metainfo: MetaBase) -> Optional[MediaInfo]:
"""
@@ -34,8 +47,8 @@ class MediaChain(ChainBase, metaclass=Singleton):
# 识别媒体信息
mediainfo: MediaInfo = self.recognize_media(meta=metainfo)
if not mediainfo:
# 尝试使用辅助识别,如果有注册响应事件的话
if eventmanager.check(EventType.NameRecognize):
# 尝试使用辅助识别,如果有注册响应事件的话
if eventmanager.check(ChainEventType.NameRecognize):
logger.info(f'请求辅助识别,标题:{title} ...')
mediainfo = self.recognize_help(title=title, org_meta=metainfo)
if not mediainfo:
@@ -54,83 +67,47 @@ class MediaChain(ChainBase, metaclass=Singleton):
:param title: 标题
:param org_meta: 原始元数据
"""
with recognize_lock:
self.recognize_temp = None
self.recognize_title = title
# 发送请求事件
eventmanager.send_event(
EventType.NameRecognize,
# 发送请求事件,等待结果
result: Event = eventmanager.send_event(
ChainEventType.NameRecognize,
{
'title': title,
}
)
# 每0.5秒循环一次等待结果直到10秒后超时
for i in range(20):
if self.recognize_temp is not None:
break
time.sleep(0.5)
# 加锁
with recognize_lock:
mediainfo = None
if not self.recognize_temp or self.recognize_title != title:
# 没有识别结果或者识别标题已改变
return None
# 有识别结果
meta_dict = copy.deepcopy(self.recognize_temp)
logger.info(f'获取到辅助识别结果:{meta_dict}')
if meta_dict.get("name") == org_meta.name and meta_dict.get("year") == org_meta.year:
logger.info(f'辅助识别结果与原始识别结果一致')
else:
logger.info(f'辅助识别结果与原始识别结果不一致,重新匹配媒体信息 ...')
org_meta.name = meta_dict.get("name")
org_meta.year = meta_dict.get("year")
org_meta.begin_season = meta_dict.get("season")
org_meta.begin_episode = meta_dict.get("episode")
if org_meta.begin_season or org_meta.begin_episode:
org_meta.type = MediaType.TV
# 重新识别
mediainfo = self.recognize_media(meta=org_meta)
return mediainfo
@eventmanager.register(EventType.NameRecognizeResult)
def recognize_result(self, event: Event):
"""
监控识别结果事件,获取辅助识别结果,结果格式:{title, name, year, season, episode}
"""
if not event:
return
event_data = event.event_data or {}
# 加锁
with recognize_lock:
# 不是原标题的结果不要
if event_data.get("title") != self.recognize_title:
return
# 标志收到返回
self.recognize_temp = {}
# 处理数据格式
file_title, file_year, season_number, episode_number = None, None, None, None
if event_data.get("name"):
file_title = str(event_data["name"]).split("/")[0].strip().replace(".", " ")
if event_data.get("year"):
file_year = str(event_data["year"]).split("/")[0].strip()
if event_data.get("season") and str(event_data["season"]).isdigit():
season_number = int(event_data["season"])
if event_data.get("episode") and str(event_data["episode"]).isdigit():
episode_number = int(event_data["episode"])
if not file_title:
return
if file_title == 'Unknown':
return
if not str(file_year).isdigit():
file_year = None
# 结果赋值
self.recognize_temp = {
"name": file_title,
"year": file_year,
"season": season_number,
"episode": episode_number
}
if not result:
return None
# 获取返回事件数据
event_data = result.event_data or {}
logger.info(f'获取到辅助识别结果:{event_data}')
# 处理数据格式
title, year, season_number, episode_number = None, None, None, None
if event_data.get("name"):
title = str(event_data["name"]).split("/")[0].strip().replace(".", " ")
if event_data.get("year"):
year = str(event_data["year"]).split("/")[0].strip()
if event_data.get("season") and str(event_data["season"]).isdigit():
season_number = int(event_data["season"])
if event_data.get("episode") and str(event_data["episode"]).isdigit():
episode_number = int(event_data["episode"])
if not title:
return None
if title == 'Unknown':
return None
if not str(year).isdigit():
year = None
# 结果赋值
if title == org_meta.name and year == org_meta.year:
logger.info(f'辅助识别与原始识别结果一致,无需重新识别媒体信息')
return None
logger.info(f'辅助识别结果与原始识别结果不一致,重新匹配媒体信息 ...')
org_meta.name = title
org_meta.year = year
org_meta.begin_season = season_number
org_meta.begin_episode = episode_number
if org_meta.begin_season or org_meta.begin_episode:
org_meta.type = MediaType.TV
# 重新识别
return self.recognize_media(meta=org_meta)
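The payload normalization inside `recognize_help` (keep the first `/`-separated name variant, replace dots with spaces, accept only numeric season/episode, reject `Unknown` names and non-numeric years) can be isolated into a small pure function for clarity; the helper name is hypothetical:

```python
from typing import Optional, Tuple


def normalize_recognize_result(data: dict) -> Optional[Tuple]:
    """Normalize an auxiliary-recognition payload into (name, year, season, episode)."""
    name = year = season = episode = None
    if data.get("name"):
        # Keep only the first of several "/"-separated name variants
        name = str(data["name"]).split("/")[0].strip().replace(".", " ")
    if data.get("year"):
        year = str(data["year"]).split("/")[0].strip()
    if data.get("season") and str(data["season"]).isdigit():
        season = int(data["season"])
    if data.get("episode") and str(data["episode"]).isdigit():
        episode = int(data["episode"])
    if not name or name == "Unknown":
        return None  # unusable result
    if not str(year).isdigit():
        year = None  # discard non-numeric years
    return name, year, season, episode
```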
def recognize_by_path(self, path: str) -> Optional[Context]:
"""
@@ -143,8 +120,8 @@ class MediaChain(ChainBase, metaclass=Singleton):
# 识别媒体信息
mediainfo = self.recognize_media(meta=file_meta)
if not mediainfo:
# 尝试使用辅助识别,如果有注册响应事件的话
if eventmanager.check(EventType.NameRecognize):
# 尝试使用辅助识别,如果有注册响应事件的话
if eventmanager.check(ChainEventType.NameRecognize):
logger.info(f'请求辅助识别,标题:{file_path.name} ...')
mediainfo = self.recognize_help(title=path, org_meta=file_meta)
if not mediainfo:
@@ -156,9 +133,9 @@ class MediaChain(ChainBase, metaclass=Singleton):
# 返回上下文
return Context(meta_info=file_meta, media_info=mediainfo)
def search(self, title: str) -> Tuple[MetaBase, List[MediaInfo]]:
def search(self, title: str) -> Tuple[Optional[MetaBase], List[MediaInfo]]:
"""
搜索媒体信息
搜索媒体/人物信息
:param title: 搜索内容
:return: 识别元数据,媒体信息列表
"""
@@ -195,14 +172,11 @@ class MediaChain(ChainBase, metaclass=Singleton):
doubaninfo = self.douban_info(doubanid=doubanid, mtype=mtype)
if doubaninfo:
# 优先使用原标题匹配
season_meta = None
if doubaninfo.get("original_title"):
meta = MetaInfo(title=doubaninfo.get("original_title"))
season_meta = MetaInfo(title=doubaninfo.get("title"))
# 合并季
meta.begin_season = season_meta.begin_season
else:
meta = MetaInfo(title=doubaninfo.get("title"))
meta_org = MetaInfo(title=doubaninfo.get("original_title"))
else:
meta_org = meta = MetaInfo(title=doubaninfo.get("title"))
# 年份
if doubaninfo.get("year"):
meta.year = doubaninfo.get("year")
@@ -211,24 +185,55 @@ class MediaChain(ChainBase, metaclass=Singleton):
meta.type = doubaninfo.get('media_type')
else:
meta.type = MediaType.MOVIE if doubaninfo.get("type") == "movie" else MediaType.TV
# 使用原标题识别TMDB媒体信息
tmdbinfo = self.match_tmdbinfo(
name=meta.name,
year=meta.year,
mtype=mtype or meta.type,
season=meta.begin_season
)
if not tmdbinfo:
if season_meta and season_meta.name != meta.name:
# 使用主标题识别媒体信息
tmdbinfo = self.match_tmdbinfo(
name=season_meta.name,
year=meta.year,
mtype=mtype or meta.type,
season=meta.begin_season
)
# 匹配TMDB信息
meta_names = list(dict.fromkeys([k for k in [meta_org.name,
meta.cn_name,
meta.en_name] if k]))
for name in meta_names:
tmdbinfo = self.match_tmdbinfo(
name=name,
year=meta.year,
mtype=mtype or meta.type,
season=meta.begin_season
)
if tmdbinfo:
# 合并季后返回
tmdbinfo['season'] = meta.begin_season
break
return tmdbinfo
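Both TMDB lookup helpers above build their candidate name lists with `list(dict.fromkeys(...))`, which deduplicates while preserving first-seen order (dicts keep insertion order since Python 3.7). A tiny sketch:

```python
def ordered_unique(names):
    """Drop falsy entries and duplicates while keeping first-seen order."""
    return list(dict.fromkeys(n for n in names if n))


# Mirrors the meta_names construction: original title first, then aliases
candidates = ordered_unique(["Original Title", "中文名", "Original Title", None, "English Name"])
```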
def get_tmdbinfo_by_bangumiid(self, bangumiid: int) -> Optional[dict]:
"""
根据BangumiID获取TMDB信息
"""
bangumiinfo = self.bangumi_info(bangumiid=bangumiid)
if bangumiinfo:
# 优先使用原标题匹配
if bangumiinfo.get("name_cn"):
meta = MetaInfo(title=bangumiinfo.get("name"))
meta_cn = MetaInfo(title=bangumiinfo.get("name_cn"))
else:
meta_cn = meta = MetaInfo(title=bangumiinfo.get("name"))
# 年份
release_date = bangumiinfo.get("date") or bangumiinfo.get("air_date")
if release_date:
year = release_date[:4]
else:
year = None
# 识别TMDB媒体信息
meta_names = list(dict.fromkeys([k for k in [meta_cn.name,
meta.name] if k]))
for name in meta_names:
tmdbinfo = self.match_tmdbinfo(
name=name,
year=year,
mtype=MediaType.TV,
season=meta.begin_season
)
if tmdbinfo:
return tmdbinfo
return None
def get_doubaninfo_by_tmdbid(self, tmdbid: int,
mtype: MediaType = None, season: int = None) -> Optional[dict]:
"""
@@ -261,3 +266,258 @@ class MediaChain(ChainBase, metaclass=Singleton):
imdbid=imdbid
)
return None
def get_doubaninfo_by_bangumiid(self, bangumiid: int) -> Optional[dict]:
"""
根据BangumiID获取豆瓣信息
"""
bangumiinfo = self.bangumi_info(bangumiid=bangumiid)
if bangumiinfo:
# 优先使用中文标题匹配
if bangumiinfo.get("name_cn"):
meta = MetaInfo(title=bangumiinfo.get("name_cn"))
else:
meta = MetaInfo(title=bangumiinfo.get("name"))
# 年份
release_date = bangumiinfo.get("date") or bangumiinfo.get("air_date")
if release_date:
year = release_date[:4]
else:
year = None
# 使用名称识别豆瓣媒体信息
return self.match_doubaninfo(
name=meta.name,
year=year,
mtype=MediaType.TV,
season=meta.begin_season
)
return None
@eventmanager.register(EventType.MetadataScrape)
def scrape_metadata_event(self, event: Event):
"""
监控手动刮削事件
"""
if not event:
return
event_data = event.event_data or {}
fileitem = event_data.get("fileitem")
meta = event_data.get("meta")
mediainfo = event_data.get("mediainfo")
if not fileitem:
return
self.scrape_metadata(fileitem=fileitem, meta=meta, mediainfo=mediainfo)
def scrape_metadata(self, fileitem: schemas.FileItem,
meta: MetaBase = None, mediainfo: MediaInfo = None,
init_folder: bool = True, parent: schemas.FileItem = None,
overwrite: bool = False):
"""
手动刮削媒体信息
:param fileitem: 刮削目录或文件
:param meta: 元数据
:param mediainfo: 媒体信息
:param init_folder: 是否刮削根目录
:param parent: 上级目录
:param overwrite: 是否覆盖已有文件
"""
def __list_files(_fileitem: schemas.FileItem):
"""
列出下级文件
"""
return self.storagechain.list_files(fileitem=_fileitem)
def __save_file(_fileitem: schemas.FileItem, _path: Path, _content: Union[bytes, str]):
"""
保存或上传文件
:param _fileitem: 关联的媒体文件项
:param _path: 元数据文件路径
:param _content: 文件内容
"""
if not _fileitem or not _content or not _path:
return
tmp_file = settings.TEMP_PATH / _path.name
tmp_file.write_bytes(_content)
logger.info(f"保存文件:【{_fileitem.storage}】{_path}")
_fileitem.path = str(_path.parent)
self.storagechain.upload_file(fileitem=_fileitem, path=tmp_file)
if tmp_file.exists():
tmp_file.unlink()
def __download_image(_url: str) -> Optional[bytes]:
"""
下载图片并保存
"""
try:
logger.info(f"正在下载图片:{_url} ...")
r = RequestUtils(proxies=settings.PROXY).get_res(url=_url)
if r:
return r.content
else:
logger.info(f"{_url} 图片下载失败,请检查网络连通性!")
except Exception as err:
logger.error(f"{_url} 图片下载失败:{str(err)}")
return None
# 当前文件路径
filepath = Path(fileitem.path)
if fileitem.type == "file" \
and (not filepath.suffix or filepath.suffix.lower() not in settings.RMT_MEDIAEXT):
return
if not meta:
meta = MetaInfoPath(filepath)
if not mediainfo:
mediainfo = self.recognize_by_meta(meta)
if not mediainfo:
logger.warn(f"{filepath} 无法识别文件媒体信息!")
return
logger.info(f"开始刮削:{filepath} ...")
if mediainfo.type == MediaType.MOVIE:
# 电影
if fileitem.type == "file":
# 电影文件
logger.info(f"正在生成电影nfo{mediainfo.title_year} - {filepath.name}")
movie_nfo = self.metadata_nfo(meta=meta, mediainfo=mediainfo)
if not movie_nfo:
logger.warn(f"{filepath.name} nfo文件生成失败")
return
# 保存或上传nfo文件到上级目录
nfo_path = filepath.with_suffix(".nfo")
if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
logger.info(f"已存在nfo文件{nfo_path}")
return
__save_file(_fileitem=parent, _path=nfo_path, _content=movie_nfo)
else:
# 电影目录
files = __list_files(_fileitem=fileitem)
for file in files:
self.scrape_metadata(fileitem=file,
meta=meta, mediainfo=mediainfo,
init_folder=False, parent=fileitem)
# 生成目录内图片文件
if init_folder:
# 图片
for attr_name, attr_value in vars(mediainfo).items():
if attr_value \
and attr_name.endswith("_path") \
and isinstance(attr_value, str) \
and attr_value.startswith("http"):
image_name = attr_name.replace("_path", "") + Path(attr_value).suffix
image_path = filepath / image_name
if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage,
path=image_path):
logger.info(f"已存在图片文件:{image_path}")
continue
# 下载图片
content = __download_image(_url=attr_value)
# 写入图片到当前目录
if content:
__save_file(_fileitem=fileitem, _path=image_path, _content=content)
else:
# 电视剧
if fileitem.type == "file":
# 当前为集文件,重新识别季集
file_meta = MetaInfoPath(filepath)
if not file_meta.begin_episode:
logger.warn(f"{filepath.name} 无法识别文件集数!")
return
file_mediainfo = self.recognize_media(meta=file_meta)
if not file_mediainfo:
logger.warn(f"{filepath.name} 无法识别文件媒体信息!")
return
# 获取集的nfo文件
episode_nfo = self.metadata_nfo(meta=file_meta, mediainfo=file_mediainfo,
season=file_meta.begin_season, episode=file_meta.begin_episode)
if not episode_nfo:
logger.warn(f"{filepath.name} nfo生成失败")
return
# 保存或上传nfo文件到上级目录
nfo_path = filepath.with_suffix(".nfo")
if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
logger.info(f"已存在nfo文件{nfo_path}")
return
__save_file(_fileitem=parent, _path=nfo_path, _content=episode_nfo)
# 获取集的图片
image_dict = self.metadata_img(mediainfo=file_mediainfo,
season=file_meta.begin_season, episode=file_meta.begin_episode)
if image_dict:
for episode, image_url in image_dict.items():
image_path = filepath.with_suffix(Path(image_url).suffix)
if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage, path=image_path):
logger.info(f"已存在图片文件:{image_path}")
continue
# 下载图片
content = __download_image(image_url)
# 保存图片文件到当前目录
if content:
__save_file(_fileitem=parent, _path=image_path, _content=content)
else:
# 当前为目录,处理目录内的文件
files = __list_files(_fileitem=fileitem)
for file in files:
self.scrape_metadata(fileitem=file,
meta=meta, mediainfo=mediainfo,
parent=fileitem if file.type == "file" else None,
init_folder=True if file.type == "dir" else False)
# 生成目录的nfo和图片
if init_folder:
# 识别文件夹名称
season_meta = MetaInfo(filepath.name)
if season_meta.begin_season:
# 当前目录有季号生成季nfo
season_nfo = self.metadata_nfo(meta=meta, mediainfo=mediainfo, season=season_meta.begin_season)
if not season_nfo:
logger.warn(f"无法生成电视剧季nfo文件{meta.name}")
return
# 写入nfo到根目录
nfo_path = filepath / "season.nfo"
if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
logger.info(f"已存在nfo文件{nfo_path}")
return
__save_file(_fileitem=fileitem, _path=nfo_path, _content=season_nfo)
# TMDB季poster图片
image_dict = self.metadata_img(mediainfo=mediainfo, season=season_meta.begin_season)
if image_dict:
for image_name, image_url in image_dict.items():
image_path = filepath.with_name(image_name)
if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage,
path=image_path):
logger.info(f"已存在图片文件:{image_path}")
continue
# 下载图片
content = __download_image(image_url)
# 保存图片文件到当前目录
if content:
__save_file(_fileitem=fileitem, _path=image_path, _content=content)
# 判断当前目录是不是剧集根目录
if season_meta.name:
# 当前目录有名称生成tvshow nfo 和 tv图片
tv_nfo = self.metadata_nfo(meta=meta, mediainfo=mediainfo)
if not tv_nfo:
logger.warn(f"无法生成电视剧nfo文件{meta.name}")
return
# 写入tvshow nfo到根目录
nfo_path = filepath / "tvshow.nfo"
if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
logger.info(f"已存在nfo文件{nfo_path}")
return
__save_file(_fileitem=fileitem, _path=nfo_path, _content=tv_nfo)
# 生成目录图片
image_dict = self.metadata_img(mediainfo=mediainfo)
if image_dict:
for image_name, image_url in image_dict.items():
image_path = filepath / image_name
if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage,
path=image_path):
logger.info(f"已存在图片文件:{image_path}")
continue
# 下载图片
content = __download_image(image_url)
# 保存图片文件到当前目录
if content:
__save_file(_fileitem=fileitem, _path=image_path, _content=content)
logger.info(f"{filepath.name} 刮削完成")


@@ -1,11 +1,13 @@
import json
import threading
from typing import List, Union, Optional
from typing import List, Union, Optional, Generator
from cachetools import cached, TTLCache
from app import schemas
from app.chain import ChainBase
from app.core.config import settings
from app.core.config import global_vars
from app.db.mediaserver_oper import MediaServerOper
from app.helper.service import ServiceConfigHelper
from app.log import logger
lock = threading.Lock()
@@ -20,17 +22,53 @@ class MediaServerChain(ChainBase):
super().__init__()
self.dboper = MediaServerOper()
def librarys(self, server: str = None, username: str = None) -> List[schemas.MediaServerLibrary]:
def librarys(self, server: str, username: str = None, hidden: bool = False) -> List[schemas.MediaServerLibrary]:
"""
获取媒体服务器所有媒体库
"""
return self.run_module("mediaserver_librarys", server=server, username=username)
return self.run_module("mediaserver_librarys", server=server, username=username, hidden=hidden)
def items(self, server: str, library_id: Union[str, int]) -> List[schemas.MediaServerItem]:
def items(self, server: str, library_id: Union[str, int], start_index: int = 0, limit: Optional[int] = -1) \
-> Optional[Generator]:
"""
获取媒体服务器所有项目
获取媒体服务器项目列表,支持分页和不分页逻辑,默认不分页获取所有数据
:param server: 媒体服务器名称
:param library_id: 媒体库ID用于标识要获取的媒体库
:param start_index: 起始索引,用于分页获取数据。默认为 0即从第一个项目开始获取
:param limit: 每次请求的最大项目数,用于分页。如果为 None 或 -1则表示一次性获取所有数据默认为 -1
:return: 返回一个生成器对象,用于逐步获取媒体服务器中的项目
说明:
- 特别注意的是,这里使用 yield from 返回迭代器,避免同时使用 return 与 yield 导致 Python 生成器解析异常
- 如果 `limit` 为 None 或 -1表示一次性获取所有数据分页处理将不再生效
- 在这种情况下,内存消耗可能会较大,特别是在数据量非常大的场景下
- 如果未来评估结果显示,不分页场景下的内存消耗远大于分页处理时的网络请求开销,可以考虑在此方法中实现自分页的处理
- 即通过 `while` 循环在上层进行分页控制,逐步获取所有数据,避免内存爆炸;当前该逻辑由具体实例来实现不分页的处理
- Plex 实际上已默认支持内部分页处理Jellyfin 与 Emby 获取数据时存在内部过滤场景(如排除合集等),分页数据可能是错误的
if limit is not None and limit != -1:
yield from self.run_module("mediaserver_items", server=server, library_id=library_id,
start_index=start_index, limit=limit)
else:
# 自分页逻辑,通过循环逐步获取所有数据
page_size = 10
while True:
data_generator = self.run_module("mediaserver_items", server=server, library_id=library_id,
start_index=start_index, limit=page_size)
if not data_generator:
break
count = 0
for item in data_generator:
if item:
count += 1
yield item
if count < page_size:
break
start_index += page_size
"""
return self.run_module("mediaserver_items", server=server, library_id=library_id)
yield from self.run_module("mediaserver_items", server=server, library_id=library_id,
start_index=start_index, limit=limit)
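The self-pagination fallback sketched in the docstring above (fixed page size, stop on a short or empty page) can be shown standalone; `fetch_page` below is a hypothetical stand-in for the `mediaserver_items` module call:

```python
from typing import Callable, Generator, List

def paged_items(fetch_page: Callable[[int, int], List[dict]],
                start_index: int = 0, page_size: int = 10) -> Generator[dict, None, None]:
    """Yield items page by page; a short or empty page means we are done."""
    while True:
        page = fetch_page(start_index, page_size)
        if not page:
            break
        yield from page
        if len(page) < page_size:  # short page => last page
            break
        start_index += page_size

# fake backend with 23 items -> pages of 10, 10, 3
data = [{"id": i} for i in range(23)]
items = list(paged_items(lambda start, limit: data[start:start + limit]))
print(len(items))  # → 23
```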
def iteminfo(self, server: str, item_id: Union[str, int]) -> schemas.MediaServerItem:
"""
@@ -44,18 +82,34 @@ class MediaServerChain(ChainBase):
"""
return self.run_module("mediaserver_tv_episodes", server=server, item_id=item_id)
def playing(self, count: int = 20, server: str = None, username: str = None) -> List[schemas.MediaServerPlayItem]:
def playing(self, server: str, count: int = 20, username: str = None) -> List[schemas.MediaServerPlayItem]:
"""
获取媒体服务器正在播放信息
"""
return self.run_module("mediaserver_playing", count=count, server=server, username=username)
def latest(self, count: int = 20, server: str = None, username: str = None) -> List[schemas.MediaServerPlayItem]:
def latest(self, server: str, count: int = 20, username: str = None) -> List[schemas.MediaServerPlayItem]:
"""
获取媒体服务器最新入库条目
"""
return self.run_module("mediaserver_latest", count=count, server=server, username=username)
@cached(cache=TTLCache(maxsize=1, ttl=3600))
def get_latest_wallpapers(self, server: str = None, count: int = 10,
remote: bool = True, username: str = None) -> List[str]:
"""
获取最新入库条目海报作为壁纸缓存1小时
"""
return self.run_module("mediaserver_latest_images", server=server, count=count,
remote=remote, username=username)
def get_latest_wallpaper(self, server: str = None, remote: bool = True, username: str = None) -> Optional[str]:
"""
获取最新入库条目海报作为壁纸缓存1小时
"""
wallpapers = self.get_latest_wallpapers(server=server, count=1, remote=remote, username=username)
return wallpapers[0] if wallpapers else None
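`get_latest_wallpapers` above is memoized for an hour via `cachetools`' `@cached(cache=TTLCache(maxsize=1, ttl=3600))`; a minimal stdlib-only sketch of the same time-bounded caching (the decorator and the `server` argument here are illustrative, not the project's API):

```python
import functools
import time

def ttl_cache(ttl: float):
    """Cache results per argument tuple for ttl seconds."""
    def decorator(fn):
        store = {}
        @functools.wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit is not None and now - hit[1] < ttl:
                return hit[0]
            value = fn(*args)
            store[args] = (value, now)
            return value
        return wrapper
    return decorator

calls = []

@ttl_cache(ttl=3600)
def latest_wallpapers(server):
    calls.append(server)          # the expensive media-server query happens here
    return [f"{server}/poster.jpg"]

latest_wallpapers("emby")
latest_wallpapers("emby")         # served from cache, no second query
print(len(calls))  # → 1
```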
def get_play_url(self, server: str, item_id: Union[str, int]) -> Optional[str]:
"""
获取播放地址
@@ -66,49 +120,60 @@ class MediaServerChain(ChainBase):
"""
同步媒体库所有数据到本地数据库
"""
# 设置的媒体服务器
mediaservers = ServiceConfigHelper.get_mediaserver_configs()
if not mediaservers:
return
with lock:
# 汇总统计
total_count = 0
# 清空登记薄
self.dboper.empty()
# 同步黑名单
sync_blacklist = settings.MEDIASERVER_SYNC_BLACKLIST.split(
",") if settings.MEDIASERVER_SYNC_BLACKLIST else []
# 设置的媒体服务器
if not settings.MEDIASERVER:
return
mediaservers = settings.MEDIASERVER.split(",")
# 遍历媒体服务器
for mediaserver in mediaservers:
logger.info(f"开始同步媒体库 {mediaserver} 的数据 ...")
for library in self.librarys(mediaserver):
# 同步黑名单 跳过
if library.name in sync_blacklist:
if not mediaserver:
continue
logger.info(f"正在准备同步媒体服务器 {mediaserver.name} 的数据")
if not mediaserver.enabled:
logger.info(f"媒体服务器 {mediaserver.name} 未启用,跳过")
continue
server_name = mediaserver.name
sync_libraries = mediaserver.sync_libraries or []
logger.info(f"开始同步媒体服务器 {server_name} 的数据 ...")
libraries = self.librarys(server_name)
if not libraries:
logger.info(f"没有获取到媒体服务器 {server_name} 的媒体库,跳过")
continue
for library in libraries:
if sync_libraries \
and "all" not in sync_libraries \
and str(library.id) not in sync_libraries:
logger.info(f"{library.name} 未在 {server_name} 同步媒体库列表中,跳过")
continue
logger.info(f"正在同步 {mediaserver} 媒体库 {library.name} ...")
logger.info(f"正在同步 {server_name} 媒体库 {library.name} ...")
library_count = 0
for item in self.items(mediaserver, library.id):
if not item:
continue
if not item.item_id:
for item in self.items(server=server_name, library_id=library.id):
if global_vars.is_system_stopped:
return
if not item or not item.item_id:
continue
logger.debug(f"正在同步 {item.title} ...")
# 计数
library_count += 1
seasoninfo = {}
# 类型
item_type = "电视剧" if item.item_type in ['Series', 'show'] else "电影"
item_type = "电视剧" if item.item_type in ["Series", "show"] else "电影"
if item_type == "电视剧":
# 查询剧集信息
episodes_info = self.episodes(mediaserver, item.item_id) or []
episodes_info = self.episodes(server_name, item.item_id) or []
for episode in episodes_info:
seasoninfo[episode.season] = episode.episodes
# 插入数据
item_dict = item.dict()
item_dict['seasoninfo'] = json.dumps(seasoninfo)
item_dict['item_type'] = item_type
item_dict["seasoninfo"] = seasoninfo
item_dict["item_type"] = item_type
self.dboper.add(**item_dict)
logger.info(f"{mediaserver} 媒体库 {library.name} 同步完成,共同步数量:{library_count}")
logger.info(f"{server_name} 媒体库 {library.name} 同步完成,共同步数量:{library_count}")
# 总数累加
total_count += library_count
logger.info("【MediaServer】媒体库数据同步完成,同步数量:%s" % total_count)
logger.info(f"媒体服务器 {server_name} 数据同步完成,同步数量:{total_count}")


@@ -1,7 +1,6 @@
import copy
import json
import re
from typing import Any, Optional, Dict
from typing import Any, Optional, Dict, Union
from app.chain import ChainBase
from app.chain.download import DownloadChain
@@ -12,9 +11,11 @@ from app.core.config import settings
from app.core.context import MediaInfo, Context
from app.core.event import EventManager
from app.core.meta import MetaBase
from app.db.message_oper import MessageOper
from app.helper.message import MessageHelper
from app.helper.torrent import TorrentHelper
from app.log import logger
from app.schemas import Notification
from app.schemas import Notification, NotExistMediaInfo, CommingMessage
from app.schemas.types import EventType, MessageChannel, MediaType
from app.utils.string import StringUtils
@@ -40,18 +41,74 @@ class MessageChain(ChainBase):
self.downloadchain = DownloadChain()
self.subscribechain = SubscribeChain()
self.searchchain = SearchChain()
self.medtachain = MediaChain()
self.mediachain = MediaChain()
self.eventmanager = EventManager()
self.torrenthelper = TorrentHelper()
self.messagehelper = MessageHelper()
self.messageoper = MessageOper()
def __get_noexits_info(
self,
_meta: MetaBase,
_mediainfo: MediaInfo) -> Dict[Union[int, str], Dict[int, NotExistMediaInfo]]:
"""
获取缺失的媒体信息
"""
if _mediainfo.type == MediaType.TV:
if not _mediainfo.seasons:
# 补充媒体信息
_mediaid = _mediainfo.tmdb_id or _mediainfo.douban_id
_mediainfo = self.mediachain.recognize_media(mtype=_mediainfo.type,
tmdbid=_mediainfo.tmdb_id,
doubanid=_mediainfo.douban_id,
cache=False)
if not _mediainfo:
# 识别失败时 _mediainfo 已为 None不能再访问其属性
logger.warn(f"{_mediaid} 媒体信息识别失败!")
return {}
if not _mediainfo.seasons:
logger.warn(f"媒体信息中没有季集信息,"
f"标题:{_mediainfo.title}"
f"tmdbid{_mediainfo.tmdb_id}doubanid{_mediainfo.douban_id}")
return {}
# KEY
_mediakey = _mediainfo.tmdb_id or _mediainfo.douban_id
_no_exists = {
_mediakey: {}
}
if _meta.begin_season:
# 指定季
episodes = _mediainfo.seasons.get(_meta.begin_season)
if not episodes:
return {}
_no_exists[_mediakey][_meta.begin_season] = NotExistMediaInfo(
season=_meta.begin_season,
episodes=[],
total_episode=len(episodes),
start_episode=episodes[0]
)
else:
# 所有季
for sea, eps in _mediainfo.seasons.items():
if not eps:
continue
_no_exists[_mediakey][sea] = NotExistMediaInfo(
season=sea,
episodes=[],
total_episode=len(eps),
start_episode=eps[0]
)
else:
_no_exists = {}
return _no_exists
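The mapping `__get_noexits_info` returns is keyed first by media id, then by season number; a toy sketch of that shape, with a simplified `NotExistMediaInfo` stand-in (the real schema lives in `app.schemas`):

```python
from dataclasses import dataclass, field
from typing import Dict, List, Union

@dataclass
class NotExistMediaInfo:  # simplified stand-in for app.schemas.NotExistMediaInfo
    season: int
    episodes: List[int] = field(default_factory=list)  # [] => whole season missing
    total_episode: int = 0
    start_episode: int = 0

def build_no_exists(mediakey: Union[int, str],
                    seasons: Dict[int, List[int]]) -> Dict[Union[int, str], Dict[int, NotExistMediaInfo]]:
    """Mark every non-empty season as fully missing, as the chain does for re-downloads."""
    return {
        mediakey: {
            sea: NotExistMediaInfo(season=sea, episodes=[],
                                   total_episode=len(eps), start_episode=eps[0])
            for sea, eps in seasons.items() if eps
        }
    }

no_exists = build_no_exists(12345, {1: [1, 2, 3], 2: []})
print(sorted(no_exists[12345]))  # → [1]
```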
def process(self, body: Any, form: Any, args: Any) -> None:
"""
识别消息内容,执行操作
调用模块识别消息内容
"""
# 声明全局变量
global _current_page, _current_meta, _current_media
# 消息来源
source = args.get("source")
# 获取消息内容
info = self.message_parser(body=body, form=form, args=args)
info = self.message_parser(source=source, body=body, form=form, args=args)
if not info:
return
# 渠道
@@ -59,7 +116,7 @@ class MessageChain(ChainBase):
# 用户ID
userid = info.userid
# 用户名
username = info.username
username = info.username or userid
if not userid:
logger.debug(f'未识别到用户ID{body}{form}{args}')
return
@@ -68,10 +125,37 @@ class MessageChain(ChainBase):
if not text:
logger.debug(f'未识别到消息内容:{body}{form}{args}')
return
# 处理消息
self.handle_message(channel=channel, source=source, userid=userid, username=username, text=text)
def handle_message(self, channel: MessageChannel, source: str,
userid: Union[str, int], username: str, text: str) -> None:
"""
识别消息内容,执行操作
"""
# 声明全局变量
global _current_page, _current_meta, _current_media
# 加载缓存
user_cache: Dict[str, dict] = self.load_cache(self._cache_file) or {}
# 处理消息
logger.info(f'收到用户消息内容,用户:{userid},内容:{text}')
# 保存消息
self.messagehelper.put(
CommingMessage(
userid=userid,
username=username,
channel=channel,
source=source,
text=text
), role="user")
self.messageoper.add(
channel=channel,
source=source,
userid=username or userid,
text=text,
action=0
)
# 处理消息
if text.startswith('/'):
# 执行命令
self.eventmanager.send_event(
@@ -79,11 +163,13 @@ class MessageChain(ChainBase):
{
"cmd": text,
"user": userid,
"channel": channel
"channel": channel,
"source": source
}
)
elif text.isdigit():
# 用户选择了具体的条目
# 缓存
cache_data: dict = user_cache.get(userid)
# 选择项目
@@ -91,7 +177,7 @@ class MessageChain(ChainBase):
or not cache_data.get('items') \
or len(cache_data.get('items')) < int(text):
# 发送消息
self.post_message(Notification(channel=channel, title="输入有误!", userid=userid))
self.post_message(Notification(channel=channel, source=source, title="输入有误!", userid=userid))
return
# 选择的序号
_choice = int(text) + _current_page * self._page_size - 1
@@ -100,33 +186,49 @@ class MessageChain(ChainBase):
# 缓存列表
cache_list: list = copy.deepcopy(cache_data.get('items'))
# 选择
if cache_type == "Search":
if cache_type in ["Search", "ReSearch"]:
# 当前媒体信息
mediainfo: MediaInfo = cache_list[_choice]
_current_media = mediainfo
# 查询缺失的媒体信息
exist_flag, no_exists = self.downloadchain.get_no_exists_info(meta=_current_meta,
mediainfo=_current_media)
if exist_flag:
if exist_flag and cache_type == "Search":
# 媒体库中已存在
self.post_message(
Notification(channel=channel,
title=f"{_current_media.title_year}"
f"{_current_meta.sea} 媒体库中已存在",
source=source,
title=f"{_current_media.title_year}"
f"{_current_meta.sea} 媒体库中已存在,如需重新下载请发送:搜索 名称 或 下载 名称】",
userid=userid))
return
elif exist_flag:
# 没有缺失,但要全量重新搜索和下载
no_exists = self.__get_noexits_info(_current_meta, _current_media)
# 发送缺失的媒体信息
if no_exists:
# 发送消息
messages = []
if no_exists and cache_type == "Search":
# 发送缺失消息
mediakey = mediainfo.tmdb_id or mediainfo.douban_id
messages = [
f"{sea} 季缺失 {StringUtils.str_series(no_exist.episodes) if no_exist.episodes else no_exist.total_episode}"
for sea, no_exist in no_exists.get(mediakey).items()]
elif no_exists:
# 发送总集数的消息
mediakey = mediainfo.tmdb_id or mediainfo.douban_id
messages = [
f"{sea} 季总 {no_exist.total_episode}"
for sea, no_exist in no_exists.get(mediakey).items()]
if messages:
self.post_message(Notification(channel=channel,
source=source,
title=f"{mediainfo.title_year}\n" + "\n".join(messages),
userid=userid))
# 搜索种子,过滤掉不需要的剧集,以便选择
logger.info(f"{mediainfo.title_year} 媒体库中不存在,开始搜索 ...")
logger.info(f"开始搜索 {mediainfo.title_year} ...")
self.post_message(
Notification(channel=channel,
source=source,
title=f"开始搜索 {mediainfo.type.value} {mediainfo.title_year} ...",
userid=userid))
# 开始搜索
@@ -135,8 +237,10 @@ class MessageChain(ChainBase):
if not contexts:
# 没有数据
self.post_message(Notification(
channel=channel, title=f"{mediainfo.title}"
f"{_current_meta.sea} 未搜索到需要的资源!",
channel=channel,
source=source,
title=f"{mediainfo.title}"
f"{_current_meta.sea} 未搜索到需要的资源!",
userid=userid))
return
# 搜索结果排序
@@ -144,13 +248,17 @@ class MessageChain(ChainBase):
# 判断是否设置自动下载
auto_download_user = settings.AUTO_DOWNLOAD_USER
# 匹配到自动下载用户
if auto_download_user and any(userid == user for user in auto_download_user.split(",")):
logger.info(f"用户 {userid} 在自动下载用户中,开始自动择优下载")
if auto_download_user \
and (auto_download_user == "all"
or any(userid == user for user in auto_download_user.split(","))):
logger.info(f"用户 {userid} 在自动下载用户中,开始自动择优下载 ...")
# 自动选择下载
self.__auto_download(channel=channel,
source=source,
cache_list=contexts,
userid=userid,
username=username)
username=username,
no_exists=no_exists)
else:
# 更新缓存
user_cache[userid] = {
@@ -160,24 +268,31 @@ class MessageChain(ChainBase):
# 发送种子数据
logger.info(f"搜索到 {len(contexts)} 条数据,开始发送选择消息 ...")
self.__post_torrents_message(channel=channel,
source=source,
title=mediainfo.title,
items=contexts[:self._page_size],
userid=userid,
total=len(contexts))
elif cache_type == "Subscribe":
# 订阅媒体
elif cache_type in ["Subscribe", "ReSubscribe"]:
# 订阅或洗版媒体
mediainfo: MediaInfo = cache_list[_choice]
# 洗版标识
best_version = False
# 查询缺失的媒体信息
exist_flag, _ = self.downloadchain.get_no_exists_info(meta=_current_meta,
mediainfo=mediainfo)
if exist_flag:
self.post_message(Notification(
channel=channel,
title=f"{mediainfo.title_year}"
f"{_current_meta.sea} 媒体库中已存在",
userid=userid))
return
if cache_type == "Subscribe":
exist_flag, _ = self.downloadchain.get_no_exists_info(meta=_current_meta,
mediainfo=mediainfo)
if exist_flag:
self.post_message(Notification(
channel=channel,
source=source,
title=f"{mediainfo.title_year}"
f"{_current_meta.sea} 媒体库中已存在,如需洗版请发送:洗版 XXX】",
userid=userid))
return
else:
best_version = True
# 添加订阅状态为N
self.subscribechain.add(title=mediainfo.title,
year=mediainfo.year,
@@ -185,12 +300,15 @@ class MessageChain(ChainBase):
tmdbid=mediainfo.tmdb_id,
season=_current_meta.begin_season,
channel=channel,
source=source,
userid=userid,
username=username)
username=username,
best_version=best_version)
elif cache_type == "Torrent":
if int(text) == 0:
# 自动选择下载
# 自动选择下载,强制下载模式
self.__auto_download(channel=channel,
source=source,
cache_list=cache_list,
userid=userid,
username=username)
@@ -198,7 +316,8 @@ class MessageChain(ChainBase):
# 下载种子
context: Context = cache_list[_choice]
# 下载
self.downloadchain.download_single(context, userid=userid, channel=channel, username=username)
self.downloadchain.download_single(context, channel=channel, source=source,
userid=userid, username=username)
elif text.lower() == "p":
# 上一页
@@ -206,13 +325,13 @@ class MessageChain(ChainBase):
if not cache_data:
# 没有缓存
self.post_message(Notification(
channel=channel, title="输入有误!", userid=userid))
channel=channel, source=source, title="输入有误!", userid=userid))
return
if _current_page == 0:
# 第一页
self.post_message(Notification(
channel=channel, title="已经是第一页了!", userid=userid))
channel=channel, source=source, title="已经是第一页了!", userid=userid))
return
# 减一页
_current_page -= 1
@@ -228,6 +347,7 @@ class MessageChain(ChainBase):
if cache_type == "Torrent":
# 发送种子数据
self.__post_torrents_message(channel=channel,
source=source,
title=_current_media.title,
items=cache_list[start:end],
userid=userid,
@@ -235,6 +355,7 @@ class MessageChain(ChainBase):
else:
# 发送媒体数据
self.__post_medias_message(channel=channel,
source=source,
title=_current_meta.name,
items=cache_list[start:end],
userid=userid,
@@ -246,7 +367,7 @@ class MessageChain(ChainBase):
if not cache_data:
# 没有缓存
self.post_message(Notification(
channel=channel, title="输入有误!", userid=userid))
channel=channel, source=source, title="输入有误!", userid=userid))
return
cache_type: str = cache_data.get('type')
# 产生副本,避免修改原值
@@ -258,7 +379,7 @@ class MessageChain(ChainBase):
if not cache_list:
# 没有数据
self.post_message(Notification(
channel=channel, title="已经是最后一页了!", userid=userid))
channel=channel, source=source, title="已经是最后一页了!", userid=userid))
return
else:
# 加一页
@@ -266,11 +387,13 @@ class MessageChain(ChainBase):
if cache_type == "Torrent":
# 发送种子数据
self.__post_torrents_message(channel=channel,
source=source,
title=_current_media.title,
items=cache_list, userid=userid, total=total)
else:
# 发送媒体数据
self.__post_medias_message(channel=channel,
source=source,
title=_current_meta.name,
items=cache_list, userid=userid, total=total)
@@ -280,6 +403,14 @@ class MessageChain(ChainBase):
# 订阅
content = re.sub(r"订阅[:\s]*", "", text)
action = "Subscribe"
elif text.startswith("洗版"):
# 洗版
content = re.sub(r"洗版[:\s]*", "", text)
action = "ReSubscribe"
elif text.startswith("搜索") or text.startswith("下载"):
# 重新搜索/下载
content = re.sub(r"(搜索|下载)[:\s]*", "", text)
action = "ReSearch"
elif text.startswith("#") \
or re.search(r"^请[问帮你]", text) \
or re.search(r"[?]$", text) \
@@ -290,21 +421,21 @@ class MessageChain(ChainBase):
action = "chat"
else:
# 搜索
content = re.sub(r"(搜索|下载)[:\s]*", "", text)
content = text
action = "Search"
if action in ["Subscribe", "Search"]:
if action != "chat":
# 搜索
meta, medias = self.medtachain.search(content)
meta, medias = self.mediachain.search(content)
# 识别
if not meta.name:
self.post_message(Notification(
channel=channel, title="无法识别输入内容!", userid=userid))
channel=channel, source=source, title="无法识别输入内容!", userid=userid))
return
# 开始搜索
if not medias:
self.post_message(Notification(
channel=channel, title=f"{meta.name} 没有找到对应的媒体信息!", userid=userid))
channel=channel, source=source, title=f"{meta.name} 没有找到对应的媒体信息!", userid=userid))
return
logger.info(f"搜索到 {len(medias)} 条相关媒体信息")
# 记录当前状态
@@ -317,6 +448,7 @@ class MessageChain(ChainBase):
_current_media = None
# 发送媒体列表
self.__post_medias_message(channel=channel,
source=source,
title=meta.name,
items=medias[:self._page_size],
userid=userid, total=len(medias))
@@ -327,31 +459,35 @@ class MessageChain(ChainBase):
{
"text": content,
"userid": userid,
"channel": channel
"channel": channel,
"source": source
}
)
# 保存缓存
self.save_cache(user_cache, self._cache_file)
def __auto_download(self, channel, cache_list, userid, username):
def __auto_download(self, channel: MessageChannel, source: str, cache_list: list[Context],
userid: Union[str, int], username: str,
no_exists: Optional[Dict[Union[int, str], Dict[int, NotExistMediaInfo]]] = None):
"""
自动择优下载
"""
# 查询缺失的媒体信息
exist_flag, no_exists = self.downloadchain.get_no_exists_info(meta=_current_meta,
mediainfo=_current_media)
if exist_flag:
self.post_message(Notification(
channel=channel,
title=f"{_current_media.title_year}"
f"{_current_meta.sea} 媒体库中已存在",
userid=userid))
return
if no_exists is None:
# 查询缺失的媒体信息
exist_flag, no_exists = self.downloadchain.get_no_exists_info(
meta=_current_meta,
mediainfo=_current_media
)
if exist_flag:
# 媒体库中已存在,查询全量
no_exists = self.__get_noexits_info(_current_meta, _current_media)
# 批量下载
downloads, lefts = self.downloadchain.batch_download(contexts=cache_list,
no_exists=no_exists,
channel=channel,
source=source,
userid=userid,
username=username)
if downloads and not lefts:
@@ -364,7 +500,7 @@ class MessageChain(ChainBase):
# 获取已下载剧集
downloaded = [download.meta_info.begin_episode for download in downloads
if download.meta_info.begin_episode]
note = json.dumps(downloaded)
note = downloaded
else:
note = None
# 添加订阅状态为R
@@ -374,12 +510,13 @@ class MessageChain(ChainBase):
tmdbid=_current_media.tmdb_id,
season=_current_meta.begin_season,
channel=channel,
source=source,
userid=userid,
username=username,
state="R",
note=note)
def __post_medias_message(self, channel: MessageChannel,
def __post_medias_message(self, channel: MessageChannel, source: str,
title: str, items: list, userid: str, total: int):
"""
发送媒体列表消息
@@ -390,11 +527,13 @@ class MessageChain(ChainBase):
title = f"{title}】共找到{total}条相关信息,请回复对应数字选择"
self.post_medias_message(Notification(
channel=channel,
source=source,
title=title,
userid=userid
), medias=items)
def __post_torrents_message(self, channel: MessageChannel, title: str, items: list,
def __post_torrents_message(self, channel: MessageChannel, source: str,
title: str, items: list,
userid: str, total: int):
"""
发送种子列表消息
@@ -405,6 +544,8 @@ class MessageChain(ChainBase):
title = f"{title}】共找到{total}条相关资源请回复对应数字下载0: 自动选择)"
self.post_torrents_message(Notification(
channel=channel,
source=source,
title=title,
userid=userid
userid=userid,
link=settings.MP_DOMAIN('#/resource')
), torrents=items)


@@ -1,13 +1,15 @@
import pickle
import re
import traceback
from concurrent.futures import ThreadPoolExecutor, as_completed
from datetime import datetime
from typing import Dict, Tuple
from typing import Dict
from typing import List, Optional
from app.chain import ChainBase
from app.core.config import global_vars
from app.core.context import Context
from app.core.context import MediaInfo, TorrentInfo
from app.core.event import eventmanager, Event
from app.core.metainfo import MetaInfo
from app.db.systemconfig_oper import SystemConfigOper
from app.helper.progress import ProgressHelper
@@ -15,8 +17,7 @@ from app.helper.sites import SitesHelper
from app.helper.torrent import TorrentHelper
from app.log import logger
from app.schemas import NotExistMediaInfo
from app.schemas.types import MediaType, ProgressKey, SystemConfigKey
from app.utils.string import StringUtils
from app.schemas.types import MediaType, ProgressKey, SystemConfigKey, EventType
class SearchChain(ChainBase):
@@ -24,6 +25,8 @@ class SearchChain(ChainBase):
站点资源搜索处理链
"""
__result_temp_file = "__search_result__"
def __init__(self):
super().__init__()
self.siteshelper = SitesHelper()
@@ -32,25 +35,33 @@ class SearchChain(ChainBase):
self.torrenthelper = TorrentHelper()
def search_by_id(self, tmdbid: int = None, doubanid: str = None,
mtype: MediaType = None, area: str = "title") -> List[Context]:
mtype: MediaType = None, area: str = "title", season: int = None) -> List[Context]:
"""
根据TMDBID/豆瓣ID搜索资源精确匹配但不过滤本地已存在的资源
:param tmdbid: TMDB ID
:param doubanid: 豆瓣 ID
:param mtype: 媒体,电影 or 电视剧
:param area: 搜索范围title or imdbid
:param season: 季数
"""
mediainfo = self.recognize_media(tmdbid=tmdbid, doubanid=doubanid, mtype=mtype)
if not mediainfo:
logger.error(f'{tmdbid} 媒体信息识别失败!')
return []
results = self.process(mediainfo=mediainfo, area=area)
# 保存搜索结果
no_exists = None
if season:
no_exists = {
tmdbid or doubanid: {
season: NotExistMediaInfo(episodes=[])
}
}
results = self.process(mediainfo=mediainfo, area=area, no_exists=no_exists)
# 保存到本地文件
bytes_results = pickle.dumps(results)
self.systemconfig.set(SystemConfigKey.SearchResults, bytes_results)
self.save_cache(bytes_results, self.__result_temp_file)
return results
def search_by_title(self, title: str, page: int = 0, site: int = None) -> List[TorrentInfo]:
def search_by_title(self, title: str, page: int = 0, site: int = None) -> List[Context]:
"""
根据标题搜索资源,不识别不过滤,直接返回站点内容
:param title: 标题,为空时返回所有站点首页内容
@@ -62,44 +73,68 @@ class SearchChain(ChainBase):
else:
logger.info(f'开始浏览资源,站点:{site} ...')
# 搜索
return self.__search_all_sites(keywords=[title], sites=[site] if site else None, page=page) or []
torrents = self.__search_all_sites(keywords=[title], sites=[site] if site else None, page=page) or []
if not torrents:
logger.warn(f'{title} 未搜索到资源')
return []
# 组装上下文
contexts = [Context(meta_info=MetaInfo(title=torrent.title, subtitle=torrent.description),
torrent_info=torrent) for torrent in torrents]
# 保存到本地文件
bytes_results = pickle.dumps(contexts)
self.save_cache(bytes_results, self.__result_temp_file)
return contexts
def last_search_results(self) -> List[Context]:
"""
获取上次搜索结果
"""
results = self.systemconfig.get(SystemConfigKey.SearchResults)
if not results:
# 读取本地文件缓存
content = self.load_cache(self.__result_temp_file)
if not content:
return []
try:
return pickle.loads(results)
return pickle.loads(content)
except Exception as e:
print(str(e))
logger.error(f'加载搜索结果失败:{str(e)} - {traceback.format_exc()}')
return []
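`last_search_results` above falls back to an empty list when the pickled cache is missing or unreadable; the save/load round-trip in isolation (file location hypothetical — and note `pickle.loads` must only ever see data the application itself wrote):

```python
import pickle
import tempfile
from pathlib import Path
from typing import Any, Optional

def save_cache(results: Any, path: Path) -> None:
    """Serialize results to disk."""
    path.write_bytes(pickle.dumps(results))

def load_cache(path: Path) -> Optional[Any]:
    """Deserialize, treating a missing or corrupt file as 'no results'."""
    try:
        return pickle.loads(path.read_bytes())
    except Exception:
        return None

cache = Path(tempfile.mkdtemp()) / "__search_result__"
save_cache([{"title": "Dune"}], cache)
print(load_cache(cache))                       # → [{'title': 'Dune'}]
print(load_cache(cache.with_name("missing")))  # → None
```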
def process(self, mediainfo: MediaInfo,
keyword: str = None,
no_exists: Dict[int, Dict[int, NotExistMediaInfo]] = None,
sites: List[int] = None,
priority_rule: str = None,
filter_rule: Dict[str, str] = None,
area: str = "title") -> List[Context]:
rule_groups: List[str] = None,
area: str = "title",
custom_words: List[str] = None,
filter_params: Dict[str, str] = None) -> List[Context]:
"""
根据媒体信息搜索种子资源精确匹配应用过滤规则同时根据no_exists过滤本地已存在的资源
:param mediainfo: 媒体信息
:param keyword: 搜索关键词
:param no_exists: 缺失的媒体信息
:param sites: 站点ID列表为空时搜索所有站点
:param priority_rule: 优先级规则,为空时使用搜索优先级规则
:param filter_rule: 过滤规则,为空时使用默认过滤规则
:param rule_groups: 过滤规则组名称列表
:param area: 搜索范围title or imdbid
:param custom_words: 自定义识别词列表
:param filter_params: 过滤参数
"""
def __do_filter(torrent_list: List[TorrentInfo]) -> List[TorrentInfo]:
"""
执行优先级过滤
"""
return self.filter_torrents(rule_groups=rule_groups,
torrent_list=torrent_list,
season_episodes=season_episodes,
mediainfo=mediainfo) or []
# 豆瓣标题处理
if not mediainfo.tmdb_id:
meta = MetaInfo(title=mediainfo.title)
mediainfo.title = meta.name
mediainfo.season = meta.begin_season
logger.info(f'开始搜索资源,关键词:{keyword or mediainfo.title} ...')
# 补充媒体信息
if not mediainfo.names:
mediainfo: MediaInfo = self.recognize_media(mtype=mediainfo.type,
@@ -108,24 +143,31 @@ class SearchChain(ChainBase):
if not mediainfo:
logger.error(f'媒体信息识别失败!')
return []
# 缺失的季集
mediakey = mediainfo.tmdb_id or mediainfo.douban_id
if no_exists and no_exists.get(mediakey):
# 过滤剧集
season_episodes = {sea: info.episodes
for sea, info in no_exists[mediainfo.tmdb_id].items()}
for sea, info in no_exists[mediakey].items()}
elif mediainfo.season:
# 豆瓣只搜索当前季
season_episodes = {mediainfo.season: []}
else:
season_episodes = None
# 搜索关键词
if keyword:
keywords = [keyword]
elif mediainfo.original_title and mediainfo.title != mediainfo.original_title:
keywords = [mediainfo.title, mediainfo.original_title]
else:
keywords = [mediainfo.title]
# 去重去空,但要保持顺序
keywords = list(dict.fromkeys([k for k in [mediainfo.title,
mediainfo.original_title,
mediainfo.en_title,
mediainfo.hk_title,
mediainfo.tw_title,
mediainfo.sg_title] if k]))
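The keyword list above is deduplicated with `dict.fromkeys`, which drops empty values and duplicates while preserving the priority order (main title first, then original and regional titles) — something a `set` would not guarantee. As a standalone idiom:

```python
def unique_keywords(*titles) -> list:
    """Drop empty values and duplicates while keeping first-seen order."""
    return list(dict.fromkeys(t for t in titles if t))

print(unique_keywords("流浪地球", "The Wandering Earth", None, "流浪地球", ""))
# ['流浪地球', 'The Wandering Earth']
```

Since Python 3.7, `dict` preserves insertion order, so `dict.fromkeys` is the canonical order-stable dedup.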
# 执行搜索
torrents: List[TorrentInfo] = self.__search_all_sites(
mediainfo=mediainfo,
@@ -136,116 +178,103 @@ class SearchChain(ChainBase):
if not torrents:
logger.warn(f'{keyword or mediainfo.title} 未搜索到资源')
return []
# 过滤种子
if priority_rule is None:
# 取搜索优先级规则
priority_rule = self.systemconfig.get(SystemConfigKey.SearchFilterRules)
if priority_rule:
logger.info(f'开始过滤资源,当前规则:{priority_rule} ...')
result: List[TorrentInfo] = self.filter_torrents(rule_string=priority_rule,
torrent_list=torrents,
season_episodes=season_episodes,
mediainfo=mediainfo)
if result is not None:
torrents = result
# 开始新进度
self.progress.start(ProgressKey.Search)
# 开始过滤
self.progress.update(value=0, text=f'开始过滤,总 {len(torrents)} 个资源,请稍候...',
key=ProgressKey.Search)
# 开始过滤规则过滤
if rule_groups is None:
# 取搜索过滤规则
rule_groups: List[str] = self.systemconfig.get(SystemConfigKey.SearchFilterRuleGroups)
if rule_groups:
logger.info(f'开始过滤规则/剧集过滤,使用规则组:{rule_groups} ...')
torrents = __do_filter(torrents)
if not torrents:
logger.warn(f'{keyword or mediainfo.title} 没有符合优先级规则的资源')
logger.warn(f'{keyword or mediainfo.title} 没有符合过滤规则的资源')
return []
# 使用过滤规则再次过滤
torrents = self.filter_torrents_by_rule(torrents=torrents,
mediainfo=mediainfo,
filter_rule=filter_rule)
if not torrents:
logger.warn(f'{keyword or mediainfo.title} 没有符合过滤规则的资源')
return []
# 匹配的资源
logger.info(f"过滤规则/剧集过滤完成,剩余 {len(torrents)} 个资源")
# 过滤完成
self.progress.update(value=50, text=f'过滤完成,剩余 {len(torrents)} 个资源', key=ProgressKey.Search)
# 开始匹配
_match_torrents = []
# 总数
_total = len(torrents)
# 已处理数
_count = 0
if mediainfo:
self.progress.start(ProgressKey.Search)
logger.info(f'开始匹配,总 {_total} 个资源 ...')
logger.info(f"标题:{mediainfo.title},原标题:{mediainfo.original_title},别名:{mediainfo.names}")
self.progress.update(value=0, text=f'开始匹配,总 {_total} 个资源 ...', key=ProgressKey.Search)
# 英文标题应该在别名/原标题中,不需要再匹配
logger.info(f"开始匹配结果 标题:{mediainfo.title},原标题:{mediainfo.original_title},别名:{mediainfo.names}")
self.progress.update(value=51, text=f'开始匹配,总 {_total} 个资源 ...', key=ProgressKey.Search)
for torrent in torrents:
if global_vars.is_system_stopped:
break
_count += 1
self.progress.update(value=(_count / _total) * 100,
self.progress.update(value=(_count / _total) * 96,
text=f'正在匹配 {torrent.site_name},已完成 {_count} / {_total} ...',
key=ProgressKey.Search)
if not torrent.title:
continue
# 识别元数据
torrent_meta = MetaInfo(title=torrent.title, subtitle=torrent.description,
custom_words=custom_words)
if torrent.title != torrent_meta.org_string:
logger.info(f"种子名称应用识别词后发生改变:{torrent.title} => {torrent_meta.org_string}")
# 比对IMDBID
if torrent.imdbid \
and mediainfo.imdb_id \
and torrent.imdbid == mediainfo.imdb_id:
logger.info(f'{mediainfo.title} 匹配到资源:{torrent.site_name} - {torrent.title}')
_match_torrents.append(torrent)
logger.info(f'{mediainfo.title} 通过IMDBID匹配到资源:{torrent.site_name} - {torrent.title}')
_match_torrents.append((torrent, torrent_meta))
continue
# 识别
torrent_meta = MetaInfo(title=torrent.title, subtitle=torrent.description)
# 比对类型
if (torrent_meta.type == MediaType.TV and mediainfo.type != MediaType.TV) \
or (torrent_meta.type != MediaType.TV and mediainfo.type == MediaType.TV):
logger.warn(f'{torrent.site_name} - {torrent.title} 类型不匹配')
# 匹配订阅附加参数
if filter_params and not self.torrenthelper.filter_torrent(torrent_info=torrent,
filter_params=filter_params):
continue
# 比对年份
if mediainfo.year:
if mediainfo.type == MediaType.TV:
# 剧集年份,每季的年份可能不同
if torrent_meta.year and torrent_meta.year not in [year for year in
mediainfo.season_years.values()]:
logger.warn(f'{torrent.site_name} - {torrent.title} 年份不匹配')
continue
else:
# 电影年份上下浮动1年
if torrent_meta.year not in [str(int(mediainfo.year) - 1),
mediainfo.year,
str(int(mediainfo.year) + 1)]:
logger.warn(f'{torrent.site_name} - {torrent.title} 年份不匹配')
continue
# 比对标题和原语种标题
meta_name = StringUtils.clear_upper(torrent_meta.name)
if meta_name in [
StringUtils.clear_upper(mediainfo.title),
StringUtils.clear_upper(mediainfo.original_title)
]:
logger.info(f'{mediainfo.title} 通过标题匹配到资源:{torrent.site_name} - {torrent.title}')
_match_torrents.append(torrent)
# 比对种子
if self.torrenthelper.match_torrent(mediainfo=mediainfo,
torrent_meta=torrent_meta,
torrent=torrent):
# 匹配成功
_match_torrents.append((torrent, torrent_meta))
continue
# 在副标题中判断是否存在标题与原语种标题
if torrent.description:
subtitle = re.split(r'[\s/|]+', torrent.description)
if (StringUtils.is_chinese(mediainfo.title)
and str(mediainfo.title) in subtitle) \
or (StringUtils.is_chinese(mediainfo.original_title)
and str(mediainfo.original_title) in subtitle):
logger.info(f'{mediainfo.title} 通过副标题匹配到资源:{torrent.site_name} - {torrent.title}'
f'副标题:{torrent.description}')
_match_torrents.append(torrent)
continue
# 比对别名和译名
for name in mediainfo.names:
if StringUtils.clear_upper(name) == meta_name:
logger.info(f'{mediainfo.title} 通过别名或译名匹配到资源:{torrent.site_name} - {torrent.title}')
_match_torrents.append(torrent)
break
else:
logger.warn(f'{torrent.site_name} - {torrent.title} 标题不匹配')
self.progress.update(value=100,
# 匹配完成
logger.info(f"匹配完成,共匹配到 {len(_match_torrents)} 个资源")
self.progress.update(value=97,
text=f'匹配完成,共匹配到 {len(_match_torrents)} 个资源',
key=ProgressKey.Search)
self.progress.end(ProgressKey.Search)
else:
_match_torrents = torrents
logger.info(f"匹配完成,共匹配到 {len(_match_torrents)} 个资源")
_match_torrents = [(t, MetaInfo(title=t.title, subtitle=t.description)) for t in torrents]
# 去掉mediainfo中多余的数据
mediainfo.clear()
# 组装上下文
contexts = [Context(meta_info=MetaInfo(title=torrent.title, subtitle=torrent.description),
contexts = [Context(torrent_info=t[0],
media_info=mediainfo,
torrent_info=torrent) for torrent in _match_torrents]
meta_info=t[1]) for t in _match_torrents]
# 排序
self.progress.update(value=99,
text=f'正在对 {len(contexts)} 个资源进行排序,请稍候...',
key=ProgressKey.Search)
contexts = self.torrenthelper.sort_torrents(contexts)
# 结束进度
logger.info(f'搜索完成,共 {len(contexts)} 个资源')
self.progress.update(value=100,
text=f'搜索完成,共 {len(contexts)} 个资源',
key=ProgressKey.Search)
self.progress.end(ProgressKey.Search)
# 返回
return contexts
@@ -315,6 +344,8 @@ class SearchChain(ChainBase):
# 结果集
results = []
for future in as_completed(all_task):
if global_vars.is_system_stopped:
break
finish_count += 1
result = future.result()
if result:
@@ -335,119 +366,23 @@ class SearchChain(ChainBase):
# 返回
return results
def filter_torrents_by_rule(self,
torrents: List[TorrentInfo],
mediainfo: MediaInfo,
filter_rule: Dict[str, str] = None,
) -> List[TorrentInfo]:
@eventmanager.register(EventType.SiteDeleted)
def remove_site(self, event: Event):
"""
使用过滤规则过滤种子
:param torrents: 种子列表
:param filter_rule: 过滤规则
:param mediainfo: 媒体信息
从搜索站点中移除与已删除站点相关的设置
"""
if not filter_rule:
# 没有则取搜索默认过滤规则
filter_rule = self.systemconfig.get(SystemConfigKey.DefaultSearchFilterRules)
if not filter_rule:
return torrents
# 包含
include = filter_rule.get("include")
# 排除
exclude = filter_rule.get("exclude")
# 质量
quality = filter_rule.get("quality")
# 分辨率
resolution = filter_rule.get("resolution")
# 特效
effect = filter_rule.get("effect")
# 电影大小
movie_size = filter_rule.get("movie_size")
# 剧集单集大小
tv_size = filter_rule.get("tv_size")
def __get_size_range(size_str: str) -> Tuple[float, float]:
"""
获取大小范围
"""
if not size_str:
return 0, 0
try:
size_range = size_str.split("-")
if len(size_range) == 1:
return 0, float(size_range[0])
elif len(size_range) == 2:
return float(size_range[0]), float(size_range[1])
except Exception as e:
print(str(e))
return 0, 0
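The `__get_size_range` helper above accepts either a bare number ("up to N GB") or an `a-b` range, and falls back to `(0, 0)` for anything empty or unparsable. A self-contained equivalent:

```python
from typing import Tuple

def get_size_range(size_str: str) -> Tuple[float, float]:
    """Parse '5' as (0, 5.0) and '1-5' as (1.0, 5.0); (0, 0) on empty or bad input."""
    if not size_str:
        return 0, 0
    try:
        parts = size_str.split("-")
        if len(parts) == 1:
            return 0, float(parts[0])
        if len(parts) == 2:
            return float(parts[0]), float(parts[1])
    except ValueError:
        pass
    return 0, 0

print(get_size_range("1.5-8"))  # (1.5, 8.0)
```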
def __filter_torrent(t: TorrentInfo) -> bool:
"""
过滤种子
"""
# 包含
if include:
if not re.search(r"%s" % include,
f"{t.title} {t.description}", re.I):
logger.info(f"{t.title} 不匹配包含规则 {include}")
return False
# 排除
if exclude:
if re.search(r"%s" % exclude,
f"{t.title} {t.description}", re.I):
logger.info(f"{t.title} 匹配排除规则 {exclude}")
return False
# 质量
if quality:
if not re.search(r"%s" % quality, t.title, re.I):
logger.info(f"{t.title} 不匹配质量规则 {quality}")
return False
# 分辨率
if resolution:
if not re.search(r"%s" % resolution, t.title, re.I):
logger.info(f"{t.title} 不匹配分辨率规则 {resolution}")
return False
# 特效
if effect:
if not re.search(r"%s" % effect, t.title, re.I):
logger.info(f"{t.title} 不匹配特效规则 {effect}")
return False
# 大小
if movie_size or tv_size:
if mediainfo.type == MediaType.TV:
size = tv_size
else:
size = movie_size
# 大小范围
begin_size, end_size = __get_size_range(size)
if begin_size and end_size:
meta = MetaInfo(title=t.title, subtitle=t.description)
# 集数
if mediainfo.type == MediaType.TV:
# 电视剧
season = meta.begin_season or 1
if meta.total_episode:
# 识别的总集数
episodes_num = meta.total_episode
else:
# 整季集数
episodes_num = len(mediainfo.seasons.get(season) or [1])
# 比较大小
if not (begin_size * 1024 ** 3 <= (t.size / episodes_num) <= end_size * 1024 ** 3):
logger.info(f"{t.title} {StringUtils.str_filesize(t.size)} "
f"{episodes_num}集,不匹配大小规则 {size}")
return False
else:
# 电影比较大小
if not (begin_size * 1024 ** 3 <= t.size <= end_size * 1024 ** 3):
logger.info(f"{t.title} {StringUtils.str_filesize(t.size)} 不匹配大小规则 {size}")
return False
return True
# 使用默认过滤规则再次过滤
return list(filter(lambda t: __filter_torrent(t), torrents))
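The size rule above converts a GB range to bytes (`1024 ** 3` per GB) and, for TV torrents, compares the *average size per episode* (total size divided by the recognized or per-season episode count) rather than the whole torrent size — this is the "剧集文件大小规则按平均每集大小过滤" change from the changelog. A simplified sketch of just the arithmetic (episode counts hard-coded for illustration):

```python
GB = 1024 ** 3

def size_ok(total_size: int, begin_gb: float, end_gb: float, episodes: int = 1) -> bool:
    """True when the average per-episode size lies within [begin_gb, end_gb] GB."""
    if not (begin_gb and end_gb):
        return True  # mirror the original: without a complete range, no filtering
    return begin_gb * GB <= total_size / max(episodes, 1) <= end_gb * GB

print(size_ok(40 * GB, 2, 6, episodes=10))  # True: averages 4 GB per episode
print(size_ok(40 * GB, 2, 6, episodes=1))   # False: 40 GB per "episode"
```

Dividing by the episode count lets one size rule apply fairly to both single episodes and full season packs.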
if not event:
return
event_data = event.event_data or {}
site_id = event_data.get("site_id")
if not site_id:
return
if site_id == "*":
# 清空搜索站点
SystemConfigOper().set(SystemConfigKey.IndexerSites, [])
return
# 从选中的rss站点中移除
selected_sites = SystemConfigOper().get(SystemConfigKey.IndexerSites) or []
if site_id in selected_sites:
selected_sites.remove(site_id)
SystemConfigOper().set(SystemConfigKey.IndexerSites, selected_sites)


@@ -1,16 +1,28 @@
import base64
import re
from typing import Union, Tuple
from datetime import datetime
from typing import Optional, Tuple, Union
from urllib.parse import urljoin
from lxml import etree
from ruamel.yaml import CommentedMap
from app.chain import ChainBase
from app.core.config import settings
from app.core.config import global_vars, settings
from app.core.event import Event, EventManager, eventmanager
from app.db.models.site import Site
from app.db.site_oper import SiteOper
from app.db.systemconfig_oper import SystemConfigOper
from app.helper.browser import PlaywrightHelper
from app.helper.cloudflare import under_challenge
from app.helper.cookie import CookieHelper
from app.helper.cookiecloud import CookieCloudHelper
from app.helper.message import MessageHelper
from app.helper.rss import RssHelper
from app.helper.sites import SitesHelper
from app.log import logger
from app.schemas import MessageChannel, Notification
from app.schemas import MessageChannel, Notification, SiteUserData
from app.schemas.types import EventType, NotificationType
from app.utils.http import RequestUtils
from app.utils.site import SiteUtils
from app.utils.string import StringUtils
@@ -24,15 +36,82 @@ class SiteChain(ChainBase):
def __init__(self):
super().__init__()
self.siteoper = SiteOper()
self.siteshelper = SitesHelper()
self.rsshelper = RssHelper()
self.cookiehelper = CookieHelper()
self.message = MessageHelper()
self.cookiecloud = CookieCloudHelper()
self.systemconfig = SystemConfigOper()
# 特殊站点登录验证
self.special_site_test = {
"zhuque.in": self.__zhuque_test,
# "m-team.io": self.__mteam_test,
"m-team.io": self.__mteam_test,
"m-team.cc": self.__mteam_test,
"ptlsp.com": self.__indexphp_test,
"1ptba.com": self.__indexphp_test,
"star-space.net": self.__indexphp_test,
"yemapt.org": self.__yema_test,
}
def refresh_userdata(self, site: CommentedMap = None) -> Optional[SiteUserData]:
"""
刷新站点的用户数据
:param site: 站点
:return: 用户数据
"""
userdata: SiteUserData = self.run_module("refresh_userdata", site=site)
if userdata:
self.siteoper.update_userdata(domain=StringUtils.get_url_domain(site.get("domain")),
name=site.get("name"),
payload=userdata.dict())
# 发送事件
EventManager().send_event(EventType.SiteRefreshed, {
"site_id": site.get("id")
})
# 发送站点消息
if userdata.message_unread:
if userdata.message_unread_contents and len(userdata.message_unread_contents) > 0:
for head, date, content in userdata.message_unread_contents:
msg_title = f"【站点 {site.get('name')} 消息】"
msg_text = f"时间:{date}\n标题:{head}\n内容:\n{content}"
self.post_message(Notification(
mtype=NotificationType.SiteMessage,
title=msg_title, text=msg_text, link=site.get("url")
))
else:
self.post_message(Notification(
mtype=NotificationType.SiteMessage,
title=f"站点 {site.get('name')} 收到 "
f"{userdata.message_unread} 条新消息,请登录查看",
link=site.get("url")
))
return userdata
def refresh_userdatas(self) -> None:
"""
刷新所有站点的用户数据
"""
sites = self.siteshelper.get_indexers()
any_site_updated = False
for site in sites:
if global_vars.is_system_stopped:
return
if site.get("is_active"):
userdata = self.refresh_userdata(site)
if userdata:
any_site_updated = True
if any_site_updated:
EventManager().send_event(EventType.SiteRefreshed, {
"site_id": "*"
})
def is_special_site(self, domain: str) -> bool:
"""
判断是否特殊站点
"""
return domain in self.special_site_test
@staticmethod
def __zhuque_test(site: Site) -> Tuple[bool, str]:
"""
@@ -40,11 +119,12 @@ class SiteChain(ChainBase):
"""
# 获取token
token = None
user_agent = site.ua or settings.USER_AGENT
res = RequestUtils(
ua=site.ua,
ua=user_agent,
cookies=site.cookie,
proxies=settings.PROXY if site.proxy else None,
timeout=15
timeout=site.timeout or 15
).get_res(url=site.url)
if res and res.status_code == 200:
csrf_token = re.search(r'<meta name="x-csrf-token" content="(.+?)">', res.text)
@@ -57,11 +137,11 @@ class SiteChain(ChainBase):
headers={
'X-CSRF-TOKEN': token,
"Content-Type": "application/json; charset=utf-8",
"User-Agent": f"{site.ua}"
"User-Agent": f"{user_agent}"
},
cookies=site.cookie,
proxies=settings.PROXY if site.proxy else None,
timeout=15
timeout=site.timeout or 15
).get_res(url=f"{site.url}api/user/getInfo")
if user_res and user_res.status_code == 200:
user_info = user_res.json()
@@ -74,18 +154,295 @@ class SiteChain(ChainBase):
"""
判断站点是否已经登录m-team
"""
url = f"{site.url}api/member/profile"
user_agent = site.ua or settings.USER_AGENT
domain = StringUtils.get_url_domain(site.url)
url = f"https://api.{domain}/api/member/profile"
headers = {
"Content-Type": "application/json",
"User-Agent": user_agent,
"Accept": "application/json, text/plain, */*",
"Authorization": site.token
}
res = RequestUtils(
ua=site.ua,
cookies=site.cookie,
headers=headers,
proxies=settings.PROXY if site.proxy else None,
timeout=15
timeout=site.timeout or 15
).post_res(url=url)
if res and res.status_code == 200:
user_info = res.json()
if user_info and user_info.get("data"):
# 更新最后访问时间
res = RequestUtils(headers=headers,
timeout=site.timeout or 15,
proxies=settings.PROXY if site.proxy else None,
referer=f"{site.url}index"
).post_res(url=f"https://api.{domain}/api/member/updateLastBrowse")
if res:
return True, "连接成功"
else:
return True, f"连接成功,但更新状态失败"
return False, "鉴权已过期或无效"
@staticmethod
def __yema_test(site: Site) -> Tuple[bool, str]:
"""
判断站点是否已经登录yemapt
"""
user_agent = site.ua or settings.USER_AGENT
url = f"{site.url}api/consumer/fetchSelfDetail"
headers = {
"User-Agent": user_agent,
"Content-Type": "application/json",
"Accept": "application/json, text/plain, */*",
}
res = RequestUtils(
headers=headers,
cookies=site.cookie,
proxies=settings.PROXY if site.proxy else None,
timeout=site.timeout or 15
).get_res(url=url)
if res and res.status_code == 200:
user_info = res.json()
if user_info and user_info.get("success"):
return True, "连接成功"
return False, "Cookie已失效"
return False, "Cookie已过期"
def __indexphp_test(self, site: Site) -> Tuple[bool, str]:
"""
判断站点是否已经登录ptlsp/1ptba
"""
site.url = f"{site.url}index.php"
return self.__test(site)
@staticmethod
def __parse_favicon(url: str, cookie: str, ua: str) -> Tuple[str, Optional[str]]:
"""
解析站点favicon,返回base64 fav图标
:param url: 站点地址
:param cookie: Cookie
:param ua: User-Agent
:return:
"""
favicon_url = urljoin(url, "favicon.ico")
res = RequestUtils(cookies=cookie, timeout=30, ua=ua).get_res(url=url)
if res:
html_text = res.text
else:
logger.error(f"获取站点页面失败:{url}")
return favicon_url, None
html = etree.HTML(html_text)
if StringUtils.is_valid_html_element(html):
fav_link = html.xpath('//head/link[contains(@rel, "icon")]/@href')
if fav_link:
favicon_url = urljoin(url, fav_link[0])
res = RequestUtils(cookies=cookie, timeout=15, ua=ua).get_res(url=favicon_url)
if res:
return favicon_url, base64.b64encode(res.content).decode()
else:
logger.error(f"获取站点图标失败:{favicon_url}")
return favicon_url, None
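`__parse_favicon` above defaults to `/favicon.ico` but prefers any `<link rel="...icon">` declared in the page head, resolving a possibly-relative `href` against the site URL with `urljoin`. The resolution step in isolation (example URLs are illustrative):

```python
from urllib.parse import urljoin

def resolve_favicon(site_url: str, fav_href: str = None) -> str:
    """Prefer the page-declared icon href; otherwise default to /favicon.ico."""
    if fav_href:
        return urljoin(site_url, fav_href)
    return urljoin(site_url, "favicon.ico")

print(resolve_favicon("https://example.org/"))
# https://example.org/favicon.ico
print(resolve_favicon("https://example.org/", "/static/logo.png"))
# https://example.org/static/logo.png
print(resolve_favicon("https://example.org/", "//cdn.example.org/i.ico"))
# https://cdn.example.org/i.ico
```

`urljoin` handles absolute, root-relative, and scheme-relative hrefs uniformly, which is why the chain code can pass `fav_link[0]` through unconditionally.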
def sync_cookies(self, manual=False) -> Tuple[bool, str]:
"""
通过CookieCloud同步站点Cookie
"""
def __indexer_domain(inx: dict, sub_domain: str) -> str:
"""
根据主域名获取索引器地址
"""
if StringUtils.get_url_domain(inx.get("domain")) == sub_domain:
return inx.get("domain")
for ext_d in inx.get("ext_domains"):
if StringUtils.get_url_domain(ext_d) == sub_domain:
return ext_d
return sub_domain
logger.info("开始同步CookieCloud站点 ...")
cookies, msg = self.cookiecloud.download()
if not cookies:
logger.error(f"CookieCloud同步失败:{msg}")
if manual:
self.message.put(msg, title="CookieCloud同步失败", role="system")
return False, msg
# 保存Cookie或新增站点
_update_count = 0
_add_count = 0
_fail_count = 0
for domain, cookie in cookies.items():
# 索引器信息
indexer = self.siteshelper.get_indexer(domain)
# 数据库的站点信息
site_info = self.siteoper.get_by_domain(domain)
if site_info and site_info.is_active == 1:
# 站点已存在,检查站点连通性
status, msg = self.test(domain)
# 更新站点Cookie
if status:
logger.info(f"站点【{site_info.name}】连通性正常,不同步CookieCloud数据")
# 更新站点rss地址
if not site_info.public and not site_info.rss:
# 自动生成rss地址
rss_url, errmsg = self.rsshelper.get_rss_link(
url=site_info.url,
cookie=cookie,
ua=site_info.ua or settings.USER_AGENT,
proxy=True if site_info.proxy else False
)
if rss_url:
logger.info(f"更新站点 {domain} RSS地址 ...")
self.siteoper.update_rss(domain=domain, rss=rss_url)
else:
logger.warn(errmsg)
continue
# 更新站点Cookie
logger.info(f"更新站点 {domain} Cookie ...")
self.siteoper.update_cookie(domain=domain, cookies=cookie)
_update_count += 1
elif indexer:
if settings.COOKIECLOUD_BLACKLIST and any(
StringUtils.get_url_domain(domain) == StringUtils.get_url_domain(black_domain) for black_domain
in str(settings.COOKIECLOUD_BLACKLIST).split(",")):
logger.warn(f"站点 {domain} 已在黑名单中,不添加站点")
continue
# 新增站点
domain_url = __indexer_domain(inx=indexer, sub_domain=domain)
res = RequestUtils(cookies=cookie,
ua=settings.USER_AGENT
).get_res(url=domain_url)
if res and res.status_code in [200, 500, 403]:
if not indexer.get("public") and not SiteUtils.is_logged_in(res.text):
_fail_count += 1
if under_challenge(res.text):
logger.warn(f"站点 {indexer.get('name')} 被Cloudflare防护,无法登录,无法添加站点")
continue
logger.warn(
f"站点 {indexer.get('name')} 登录失败,没有该站点账号或Cookie已失效,无法添加站点")
continue
elif res is not None:
_fail_count += 1
logger.warn(f"站点 {indexer.get('name')} 连接状态码:{res.status_code},无法添加站点")
continue
else:
_fail_count += 1
logger.warn(f"站点 {indexer.get('name')} 连接失败,无法添加站点")
continue
# 获取rss地址
rss_url = None
if not indexer.get("public") and domain_url:
# 自动生成rss地址
rss_url, errmsg = self.rsshelper.get_rss_link(url=domain_url,
cookie=cookie,
ua=settings.USER_AGENT)
if errmsg:
logger.warn(errmsg)
# 插入数据库
logger.info(f"新增站点 {indexer.get('name')} ...")
self.siteoper.add(name=indexer.get("name"),
url=domain_url,
domain=domain,
cookie=cookie,
rss=rss_url,
public=1 if indexer.get("public") else 0)
_add_count += 1
# 通知站点更新
if indexer:
EventManager().send_event(EventType.SiteUpdated, {
"domain": domain,
})
# 处理完成
ret_msg = f"更新了{_update_count}个站点,新增了{_add_count}个站点"
if _fail_count > 0:
ret_msg += f",{_fail_count}个站点添加失败,下次同步时将重试,也可以手动添加"
if manual:
self.message.put(ret_msg, title="CookieCloud同步成功", role="system")
logger.info(f"CookieCloud同步成功:{ret_msg}")
return True, ret_msg
@eventmanager.register(EventType.SiteUpdated)
def cache_site_icon(self, event: Event):
"""
缓存站点图标
"""
if not event:
return
event_data = event.event_data or {}
# 主域名
domain = event_data.get("domain")
if not domain:
return
if str(domain).startswith("http"):
domain = StringUtils.get_url_domain(domain)
# 站点信息
siteinfo = self.siteoper.get_by_domain(domain)
if not siteinfo:
logger.warn(f"未维护站点 {domain} 信息!")
return
# Cookie
cookie = siteinfo.cookie
# 索引器
indexer = self.siteshelper.get_indexer(domain)
if not indexer:
logger.warn(f"站点 {domain} 索引器不存在!")
return
# 查询站点图标
site_icon = self.siteoper.get_icon_by_domain(domain)
if not site_icon or not site_icon.base64:
logger.info(f"开始缓存站点 {indexer.get('name')} 图标 ...")
icon_url, icon_base64 = self.__parse_favicon(url=indexer.get("domain"),
cookie=cookie,
ua=settings.USER_AGENT)
if icon_url:
self.siteoper.update_icon(name=indexer.get("name"),
domain=domain,
icon_url=icon_url,
icon_base64=icon_base64)
logger.info(f"缓存站点 {indexer.get('name')} 图标成功")
else:
logger.warn(f"缓存站点 {indexer.get('name')} 图标失败")
@eventmanager.register(EventType.SiteUpdated)
def clear_site_data(self, event: Event):
"""
清理站点数据
"""
if not event:
return
event_data = event.event_data or {}
# 主域名
domain = event_data.get("domain")
if not domain:
return
# 获取主域名中间那段
domain_host = StringUtils.get_url_host(domain)
# 查询以"site.domain_host"开头的配置项,并清除
site_keys = self.systemconfig.all().keys()
for key in site_keys:
if key.startswith(f"site.{domain_host}"):
logger.info(f"清理站点配置:{key}")
self.systemconfig.delete(key)
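`clear_site_data` above removes every configuration entry whose key starts with `site.<domain_host>`. With a plain dict standing in for the `SystemConfigOper` store, the prefix cleanup reduces to:

```python
def clear_prefixed(config: dict, domain_host: str) -> dict:
    """Return a copy of config without keys starting with 'site.<domain_host>'."""
    prefix = f"site.{domain_host}"
    return {k: v for k, v in config.items() if not k.startswith(prefix)}

cfg = {"site.mteam.cookie": "c", "site.mteam.ua": "u", "site.other.cookie": "c"}
print(clear_prefixed(cfg, "mteam"))  # {'site.other.cookie': 'c'}
```

The real method deletes keys in place through the config store; the sketch returns a filtered copy, which is easier to test.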
@eventmanager.register(EventType.SiteUpdated)
def cache_site_userdata(self, event: Event):
"""
缓存站点用户数据
"""
if not event:
return
event_data = event.event_data or {}
# 主域名
domain = event_data.get("domain")
if not domain:
return
if str(domain).startswith("http"):
domain = StringUtils.get_url_domain(domain)
indexer = self.siteshelper.get_indexer(domain)
if not indexer:
return
# 刷新站点用户数据
self.refresh_userdata(site=indexer) or {}
def test(self, url: str) -> Tuple[bool, str]:
"""
@@ -99,56 +456,74 @@ class SiteChain(ChainBase):
if not site_info:
return False, f"站点【{url}】不存在"
# 特殊站点测试
if self.special_site_test.get(domain):
return self.special_site_test[domain](site_info)
# 模拟登录
try:
# 开始记时
start_time = datetime.now()
# 特殊站点测试
if self.special_site_test.get(domain):
state, message = self.special_site_test[domain](site_info)
else:
# 通用站点测试
state, message = self.__test(site_info)
# 统计
seconds = (datetime.now() - start_time).seconds
if state:
self.siteoper.success(domain=domain, seconds=seconds)
else:
self.siteoper.fail(domain)
return state, message
except Exception as e:
return False, f"{str(e)}"
# 通用站点测试
@staticmethod
def __test(site_info: Site) -> Tuple[bool, str]:
"""
通用站点测试
"""
site_url = site_info.url
site_cookie = site_info.cookie
ua = site_info.ua
ua = site_info.ua or settings.USER_AGENT
render = site_info.render
public = site_info.public
proxies = settings.PROXY if site_info.proxy else None
proxy_server = settings.PROXY_SERVER if site_info.proxy else None
# 模拟登录
try:
# 访问链接
if render:
page_source = PlaywrightHelper().get_page_source(url=site_url,
cookies=site_cookie,
ua=ua,
proxies=proxy_server)
if not public and not SiteUtils.is_logged_in(page_source):
if under_challenge(page_source):
return False, f"无法通过Cloudflare"
return False, f"仿真登录失败,Cookie已失效"
else:
res = RequestUtils(cookies=site_cookie,
ua=ua,
proxies=proxies
).get_res(url=site_url)
# 判断登录状态
if res and res.status_code in [200, 500, 403]:
if not public and not SiteUtils.is_logged_in(res.text):
if under_challenge(res.text):
msg = "站点被Cloudflare防护,请打开站点浏览器仿真"
elif res.status_code == 200:
msg = "Cookie已失效"
else:
msg = f"状态码:{res.status_code}"
return False, f"{msg}"
elif public and res.status_code != 200:
return False, f"状态码:{res.status_code}"
elif res is not None:
# 访问链接
if render:
page_source = PlaywrightHelper().get_page_source(url=site_url,
cookies=site_cookie,
ua=ua,
proxies=proxy_server)
if not public and not SiteUtils.is_logged_in(page_source):
if under_challenge(page_source):
return False, f"无法通过Cloudflare"
return False, f"仿真登录失败,Cookie已失效"
else:
res = RequestUtils(cookies=site_cookie,
ua=ua,
proxies=proxies
).get_res(url=site_url)
# 判断登录状态
if res and res.status_code in [200, 500, 403]:
if not public and not SiteUtils.is_logged_in(res.text):
if under_challenge(res.text):
msg = "站点被Cloudflare防护,请打开站点浏览器仿真"
elif res.status_code == 200:
msg = "Cookie已失效"
else:
msg = f"状态码:{res.status_code}"
return False, f"{msg}"
elif public and res.status_code != 200:
return False, f"状态码:{res.status_code}"
else:
return False, f"无法打开网站"
except Exception as e:
return False, f"{str(e)}"
elif res is not None:
return False, f"状态码:{res.status_code}"
else:
return False, f"无法打开网站"
return True, "连接成功"
def remote_list(self, channel: MessageChannel, userid: Union[str, int] = None):
def remote_list(self, channel: MessageChannel,
userid: Union[str, int] = None, source: str = None):
"""
查询所有站点,发送消息
"""
@@ -157,11 +532,12 @@ class SiteChain(ChainBase):
self.post_message(Notification(
channel=channel,
title="没有维护任何站点信息!",
userid=userid))
userid=userid,
link=settings.MP_DOMAIN('#/site')))
title = f"共有 {len(site_list)} 个站点,回复对应指令操作:" \
f"\n- 禁用站点:/site_disable [id]" \
f"\n- 启用站点:/site_enable [id]" \
f"\n- 更新站点Cookie/site_cookie [id] [username] [password]"
f"\n- 更新站点Cookie/site_cookie [id] [username] [password] [2fa_code/secret]"
messages = []
for site in site_list:
if site.render:
@@ -169,15 +545,19 @@ class SiteChain(ChainBase):
else:
render_str = ""
if site.is_active:
messages.append(f"{site.id}. [{site.name}]({site.url}){render_str}")
messages.append(f"{site.id}. {site.name} {render_str}")
else:
messages.append(f"{site.id}. {site.name}")
messages.append(f"{site.id}. {site.name} ⚠️")
# 发送列表
self.post_message(Notification(
channel=channel,
title=title, text="\n".join(messages), userid=userid))
source=source,
title=title, text="\n".join(messages), userid=userid,
link=settings.MP_DOMAIN('#/site'))
)
def remote_disable(self, arg_str, channel: MessageChannel, userid: Union[str, int] = None):
def remote_disable(self, arg_str: str, channel: MessageChannel,
userid: Union[str, int] = None, source: str = None):
"""
禁用站点
"""
@@ -199,9 +579,10 @@ class SiteChain(ChainBase):
"is_active": False
})
# 重新发送消息
self.remote_list(channel, userid)
self.remote_list(channel=channel, userid=userid, source=source)
def remote_enable(self, arg_str, channel: MessageChannel, userid: Union[str, int] = None):
def remote_enable(self, arg_str: str, channel: MessageChannel,
userid: Union[str, int] = None, source: str = None):
"""
启用站点
"""
@@ -224,15 +605,16 @@ class SiteChain(ChainBase):
"is_active": True
})
# 重新发送消息
self.remote_list(channel, userid)
self.remote_list(channel=channel, userid=userid, source=source)
def update_cookie(self, site_info: Site,
username: str, password: str) -> Tuple[bool, str]:
username: str, password: str, two_step_code: str = None) -> Tuple[bool, str]:
"""
根据用户名密码更新站点Cookie
:param site_info: 站点信息
:param username: 用户名
:param password: 密码
:param two_step_code: 二步验证码或密钥
:return: (是否成功, 错误信息)
"""
# 更新站点Cookie
@@ -240,6 +622,7 @@ class SiteChain(ChainBase):
url=site_info.url,
username=username,
password=password,
two_step_code=two_step_code,
proxies=settings.PROXY_HOST if site_info.proxy else None
)
if result:
@@ -253,28 +636,36 @@ class SiteChain(ChainBase):
return True, msg
return False, "未知错误"
def remote_cookie(self, arg_str: str, channel: MessageChannel, userid: Union[str, int] = None):
def remote_cookie(self, arg_str: str, channel: MessageChannel,
userid: Union[str, int] = None, source: str = None):
"""
使用用户名密码更新站点Cookie
"""
err_title = "请输入正确的命令格式:/site_cookie [id] [username] [password]" \
"[id]为站点编号,[username]为站点用户名,[password]为站点密码"
err_title = "请输入正确的命令格式:/site_cookie [id] [username] [password] [2fa_code/secret]" \
"[id]为站点编号,[username]为站点用户名,[password]为站点密码,[2fa_code/secret]为站点二步验证码或密钥"
if not arg_str:
self.post_message(Notification(
channel=channel,
source=source,
title=err_title, userid=userid))
return
arg_str = str(arg_str).strip()
args = arg_str.split()
if len(args) != 3:
# 二步验证码
two_step_code = None
if len(args) == 4:
two_step_code = args[3]
elif len(args) != 3:
self.post_message(Notification(
channel=channel,
source=source,
title=err_title, userid=userid))
return
site_id = args[0]
if not site_id.isdigit():
self.post_message(Notification(
channel=channel,
source=source,
title=err_title, userid=userid))
return
# 站点ID
@@ -284,10 +675,12 @@ class SiteChain(ChainBase):
if not site_info:
self.post_message(Notification(
channel=channel,
source=source,
title=f"站点编号 {site_id} 不存在!", userid=userid))
return
self.post_message(Notification(
channel=channel,
source=source,
title=f"开始更新【{site_info.name}】Cookie&UA ...", userid=userid))
# 用户名
username = args[1]
@@ -296,16 +689,19 @@ class SiteChain(ChainBase):
# 更新Cookie
status, msg = self.update_cookie(site_info=site_info,
username=username,
password=password)
password=password,
two_step_code=two_step_code)
if not status:
logger.error(msg)
self.post_message(Notification(
channel=channel,
source=source,
title=f"{site_info.name}】 Cookie&UA更新失败",
text=f"错误原因:{msg}",
userid=userid))
else:
self.post_message(Notification(
channel=channel,
source=source,
title=f"{site_info.name}】 Cookie&UA更新成功",
userid=userid))

app/chain/storage.py Normal file

@@ -0,0 +1,130 @@
from pathlib import Path
from typing import Optional, Tuple, List, Dict
from app import schemas
from app.chain import ChainBase
from app.core.config import settings
from app.log import logger
from app.schemas import MediaType
class StorageChain(ChainBase):
"""
存储处理链
"""
def save_config(self, storage: str, conf: dict) -> None:
"""
保存存储配置
"""
self.run_module("save_config", storage=storage, conf=conf)
def generate_qrcode(self, storage: str) -> Optional[Tuple[dict, str]]:
"""
生成二维码
"""
return self.run_module("generate_qrcode", storage=storage)
def check_login(self, storage: str, **kwargs) -> Optional[Tuple[dict, str]]:
"""
登录确认
"""
return self.run_module("check_login", storage=storage, **kwargs)
def list_files(self, fileitem: schemas.FileItem, recursion: bool = False) -> Optional[List[schemas.FileItem]]:
"""
查询当前目录下所有目录和文件
"""
return self.run_module("list_files", fileitem=fileitem, recursion=recursion)
def any_files(self, fileitem: schemas.FileItem, extensions: list = None) -> Optional[bool]:
"""
查询当前目录下是否存在指定扩展名任意文件
"""
return self.run_module("any_files", fileitem=fileitem, extensions=extensions)
def create_folder(self, fileitem: schemas.FileItem, name: str) -> Optional[schemas.FileItem]:
"""
创建目录
"""
return self.run_module("create_folder", fileitem=fileitem, name=name)
def download_file(self, fileitem: schemas.FileItem, path: Path = None) -> Optional[Path]:
"""
下载文件
:param fileitem: 文件项
:param path: 本地保存路径
"""
return self.run_module("download_file", fileitem=fileitem, path=path)
def upload_file(self, fileitem: schemas.FileItem, path: Path) -> Optional[bool]:
"""
上传文件
:param fileitem: 保存目录项
:param path: 本地文件路径
"""
return self.run_module("upload_file", fileitem=fileitem, path=path)
def delete_file(self, fileitem: schemas.FileItem) -> Optional[bool]:
"""
删除文件或目录
"""
return self.run_module("delete_file", fileitem=fileitem)
def rename_file(self, fileitem: schemas.FileItem, name: str) -> Optional[bool]:
"""
重命名文件或目录
"""
return self.run_module("rename_file", fileitem=fileitem, name=name)
def get_file_item(self, storage: str, path: Path) -> Optional[schemas.FileItem]:
"""
根据路径获取文件项
"""
return self.run_module("get_file_item", storage=storage, path=path)
def get_parent_item(self, fileitem: schemas.FileItem) -> Optional[schemas.FileItem]:
"""
获取上级目录项
"""
return self.run_module("get_parent_item", fileitem=fileitem)
def snapshot_storage(self, storage: str, path: Path) -> Optional[Dict[str, float]]:
"""
快照存储
"""
return self.run_module("snapshot_storage", storage=storage, path=path)
def storage_usage(self, storage: str) -> Optional[schemas.StorageUsage]:
"""
存储使用情况
"""
return self.run_module("storage_usage", storage=storage)
def support_transtype(self, storage: str) -> Optional[str]:
"""
获取支持的整理方式
"""
return self.run_module("support_transtype", storage=storage)
def delete_media_file(self, fileitem: schemas.FileItem, mtype: MediaType = None) -> bool:
"""
删除媒体文件,以及不含媒体文件的目录
"""
state = self.delete_file(fileitem)
if not state:
logger.warn(f"{fileitem.storage}{fileitem.path} 删除失败")
return False
# 上级目录
if mtype and mtype == MediaType.TV:
dir_path = Path(fileitem.path).parent.parent
dir_item = self.get_file_item(storage=fileitem.storage, path=dir_path)
else:
dir_item = self.get_parent_item(fileitem)
if dir_item:
# 不存在其他媒体文件,删除空目录
if not self.any_files(dir_item, extensions=settings.RMT_MEDIAEXT):
return self.delete_file(dir_item)
# 存在媒体文件,返回文件删除状态
return state
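The cleanup logic in `delete_media_file` above (delete the file, then drop the containing directory once no media files remain — two levels up for TV so the season folder goes too) can be sketched against a plain local filesystem. This is a hypothetical simplification of the chain method: `MEDIA_EXTS` stands in for `settings.RMT_MEDIAEXT`, and pathlib paths stand in for `schemas.FileItem`.

```python
import shutil
from pathlib import Path

# Stand-in for settings.RMT_MEDIAEXT
MEDIA_EXTS = {".mp4", ".mkv", ".ts"}

def delete_media_file(path: Path, is_tv: bool = False) -> bool:
    """Delete a media file, then remove the containing directory when it
    no longer holds any media files (for TV, check the show directory so
    the season folder is removed along with it)."""
    if not path.is_file():
        return False
    path.unlink()
    # For TV episodes, the relevant directory is the show root (parent of the season folder)
    parent = path.parent.parent if is_tv else path.parent
    has_media = any(p.suffix.lower() in MEDIA_EXTS
                    for p in parent.rglob("*") if p.is_file())
    if not has_media:
        shutil.rmtree(parent, ignore_errors=True)
    return True
```

Note that, as in the chain method, a failed delete short-circuits before any directory cleanup is attempted.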

File diff suppressed because it is too large Load Diff

View File

@@ -9,6 +9,7 @@ from app.schemas import Notification, MessageChannel
from app.utils.http import RequestUtils
from app.utils.singleton import Singleton
from app.utils.system import SystemUtils
from version import FRONTEND_VERSION, APP_VERSION
class SystemChain(ChainBase, metaclass=Singleton):
@@ -18,20 +19,25 @@ class SystemChain(ChainBase, metaclass=Singleton):
_restart_file = "__system_restart__"
def remote_clear_cache(self, channel: MessageChannel, userid: Union[int, str]):
def __init__(self):
super().__init__()
# 重启完成检测
self.restart_finish()
def remote_clear_cache(self, channel: MessageChannel, userid: Union[int, str], source: str = None):
"""
清理系统缓存
"""
self.clear_cache()
self.post_message(Notification(channel=channel,
self.post_message(Notification(channel=channel, source=source,
title=f"缓存清理完成!", userid=userid))
def restart(self, channel: MessageChannel, userid: Union[int, str]):
def restart(self, channel: MessageChannel, userid: Union[int, str], source: str = None):
"""
重启系统
"""
if channel and userid:
self.post_message(Notification(channel=channel,
self.post_message(Notification(channel=channel, source=source,
title="系统正在重启,请耐心等候!", userid=userid))
# 保存重启信息
self.save_cache({
@@ -40,19 +46,31 @@ class SystemChain(ChainBase, metaclass=Singleton):
}, self._restart_file)
SystemUtils.restart()
def version(self, channel: MessageChannel, userid: Union[int, str]):
def __get_version_message(self) -> str:
"""
获取版本信息文本
"""
server_release_version = self.__get_server_release_version()
front_release_version = self.__get_front_release_version()
server_local_version = self.get_server_local_version()
front_local_version = self.get_frontend_version()
if server_release_version == server_local_version:
title = f"当前后端版本:{server_local_version},已是最新版本\n"
else:
title = f"当前后端版本:{server_local_version},远程版本:{server_release_version}\n"
if front_release_version == front_local_version:
title += f"当前前端版本:{front_local_version},已是最新版本"
else:
title += f"当前前端版本:{front_local_version},远程版本:{front_release_version}"
return title
def version(self, channel: MessageChannel, userid: Union[int, str], source: str = None):
"""
查看当前版本、远程版本
"""
release_version = self.__get_release_version()
local_version = self.get_local_version()
if release_version == local_version:
title = f"当前版本:{local_version},已是最新版本"
else:
title = f"当前版本:{local_version},远程版本:{release_version}"
self.post_message(Notification(channel=channel,
title=title, userid=userid))
self.post_message(Notification(channel=channel, source=source,
title=self.__get_version_message(),
userid=userid))
def restart_finish(self):
"""
@@ -71,49 +89,76 @@ class SystemChain(ChainBase, metaclass=Singleton):
userid = restart_channel.get('userid')
# 版本号
release_version = self.__get_release_version()
local_version = self.get_local_version()
if release_version == local_version:
title = f"当前版本:{local_version}"
else:
title = f"当前版本:{local_version},远程版本:{release_version}"
title = self.__get_version_message()
self.post_message(Notification(channel=channel,
title=f"系统已重启完成!{title}",
title=f"系统已重启完成!\n{title}",
userid=userid))
self.remove_cache(self._restart_file)
@staticmethod
def __get_release_version():
def __get_server_release_version():
"""
获取最新版本
获取后端V2最新版本
"""
version_res = RequestUtils(proxies=settings.PROXY, headers=settings.GITHUB_HEADERS).get_res(
"https://api.github.com/repos/jxxghp/MoviePilot/releases/latest")
if version_res:
ver_json = version_res.json()
version = f"{ver_json['tag_name']}"
return version
else:
return None
try:
# 获取所有发布的版本列表
response = RequestUtils(
proxies=settings.PROXY,
headers=settings.GITHUB_HEADERS
).get_res("https://api.github.com/repos/jxxghp/MoviePilot/releases")
if response:
releases = [release['tag_name'] for release in response.json()]
v2_releases = [tag for tag in releases if re.match(r"^v2\.", tag)]
if not v2_releases:
logger.warn("获取v2后端最新版本出错")
else:
# 找到最新的v2版本
latest_v2 = sorted(v2_releases, key=lambda s: list(map(int, re.findall(r'\d+', s))))[-1]
logger.info(f"获取到后端最新版本:{latest_v2}")
return latest_v2
else:
logger.error("无法获取后端版本信息,请检查网络连接或GitHub API请求。")
except Exception as err:
logger.error(f"获取后端最新版本失败:{str(err)}")
return None
@staticmethod
def get_local_version():
def __get_front_release_version():
"""
获取前端V2最新版本
"""
try:
# 获取所有发布的版本列表
response = RequestUtils(
proxies=settings.PROXY,
headers=settings.GITHUB_HEADERS
).get_res("https://api.github.com/repos/jxxghp/MoviePilot-Frontend/releases")
if response:
releases = [release['tag_name'] for release in response.json()]
v2_releases = [tag for tag in releases if re.match(r"^v2\.", tag)]
if not v2_releases:
logger.warn("获取v2前端最新版本出错")
else:
# 找到最新的v2版本
latest_v2 = sorted(v2_releases, key=lambda s: list(map(int, re.findall(r'\d+', s))))[-1]
logger.info(f"获取到前端最新版本:{latest_v2}")
return latest_v2
else:
logger.error("无法获取前端版本信息,请检查网络连接或GitHub API请求。")
except Exception as err:
logger.error(f"获取前端最新版本失败:{str(err)}")
return None
@staticmethod
def get_server_local_version():
"""
查看当前版本
"""
version_file = settings.ROOT_PATH / "version.py"
if version_file.exists():
try:
with open(version_file, 'rb') as f:
version = f.read()
pattern = r"'([^']*)'"
match = re.search(pattern, str(version))
return APP_VERSION
if match:
version = match.group(1)
return version
else:
logger.warn("未找到版本号")
return None
except Exception as err:
logger.error(f"加载版本文件 {version_file} 出错:{str(err)}")
@staticmethod
def get_frontend_version():
"""
获取前端版本
"""
return FRONTEND_VERSION
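Both release-lookup helpers above rely on the same trick: filter tags with `re.match(r"^v2\.", ...)`, then sort by the numeric components so that, for example, v2.0.10 outranks v2.0.9 (a plain string sort would get this backwards). Isolated as a sketch:

```python
import re
from typing import List, Optional

def latest_v2_tag(tags: List[str]) -> Optional[str]:
    """Pick the newest v2.x tag by comparing numeric version components,
    so 'v2.0.10' correctly sorts after 'v2.0.9'."""
    v2_releases = [t for t in tags if re.match(r"^v2\.", t)]
    if not v2_releases:
        return None
    return sorted(v2_releases,
                  key=lambda s: list(map(int, re.findall(r"\d+", s))))[-1]
```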

View File

@@ -5,7 +5,7 @@ from cachetools import cached, TTLCache
from app import schemas
from app.chain import ChainBase
from app.core.config import settings
from app.core.context import MediaInfo
from app.schemas import MediaType
from app.utils.singleton import Singleton
@@ -16,7 +16,7 @@ class TmdbChain(ChainBase, metaclass=Singleton):
"""
def tmdb_discover(self, mtype: MediaType, sort_by: str, with_genres: str,
with_original_language: str, page: int = 1) -> Optional[List[dict]]:
with_original_language: str, page: int = 1) -> Optional[List[MediaInfo]]:
"""
:param mtype: 媒体类型
:param sort_by: 排序方式
@@ -25,21 +25,17 @@ class TmdbChain(ChainBase, metaclass=Singleton):
:param page: 页码
:return: 媒体信息列表
"""
if settings.RECOGNIZE_SOURCE != "themoviedb":
return None
return self.run_module("tmdb_discover", mtype=mtype,
sort_by=sort_by, with_genres=with_genres,
with_original_language=with_original_language,
page=page)
def tmdb_trending(self, page: int = 1) -> Optional[List[dict]]:
def tmdb_trending(self, page: int = 1) -> Optional[List[MediaInfo]]:
"""
TMDB流行趋势
:param page: 第几页
:return: TMDB信息列表
"""
if settings.RECOGNIZE_SOURCE != "themoviedb":
return None
return self.run_module("tmdb_trending", page=page)
def tmdb_seasons(self, tmdbid: int) -> List[schemas.TmdbSeason]:
@@ -57,35 +53,35 @@ class TmdbChain(ChainBase, metaclass=Singleton):
"""
return self.run_module("tmdb_episodes", tmdbid=tmdbid, season=season)
def movie_similar(self, tmdbid: int) -> List[dict]:
def movie_similar(self, tmdbid: int) -> Optional[List[MediaInfo]]:
"""
根据TMDBID查询类似电影
:param tmdbid: TMDBID
"""
return self.run_module("tmdb_movie_similar", tmdbid=tmdbid)
def tv_similar(self, tmdbid: int) -> List[dict]:
def tv_similar(self, tmdbid: int) -> Optional[List[MediaInfo]]:
"""
根据TMDBID查询类似电视剧
:param tmdbid: TMDBID
"""
return self.run_module("tmdb_tv_similar", tmdbid=tmdbid)
def movie_recommend(self, tmdbid: int) -> List[dict]:
def movie_recommend(self, tmdbid: int) -> Optional[List[MediaInfo]]:
"""
根据TMDBID查询推荐电影
:param tmdbid: TMDBID
"""
return self.run_module("tmdb_movie_recommend", tmdbid=tmdbid)
def tv_recommend(self, tmdbid: int) -> List[dict]:
def tv_recommend(self, tmdbid: int) -> Optional[List[MediaInfo]]:
"""
根据TMDBID查询推荐电视剧
:param tmdbid: TMDBID
"""
return self.run_module("tmdb_tv_recommend", tmdbid=tmdbid)
def movie_credits(self, tmdbid: int, page: int = 1) -> List[dict]:
def movie_credits(self, tmdbid: int, page: int = 1) -> Optional[List[schemas.MediaPerson]]:
"""
根据TMDBID查询电影演职人员
:param tmdbid: TMDBID
@@ -93,7 +89,7 @@ class TmdbChain(ChainBase, metaclass=Singleton):
"""
return self.run_module("tmdb_movie_credits", tmdbid=tmdbid, page=page)
def tv_credits(self, tmdbid: int, page: int = 1) -> List[dict]:
def tv_credits(self, tmdbid: int, page: int = 1) -> Optional[List[schemas.MediaPerson]]:
"""
根据TMDBID查询电视剧演职人员
:param tmdbid: TMDBID
@@ -101,14 +97,14 @@ class TmdbChain(ChainBase, metaclass=Singleton):
"""
return self.run_module("tmdb_tv_credits", tmdbid=tmdbid, page=page)
def person_detail(self, person_id: int) -> dict:
def person_detail(self, person_id: int) -> Optional[schemas.MediaPerson]:
"""
根据TMDBID查询演职员详情
:param person_id: 人物ID
"""
return self.run_module("tmdb_person_detail", person_id=person_id)
def person_credits(self, person_id: int, page: int = 1) -> List[dict]:
def person_credits(self, person_id: int, page: int = 1) -> Optional[List[MediaInfo]]:
"""
根据人物ID查询人物参演作品
:param person_id: 人物ID
@@ -117,7 +113,7 @@ class TmdbChain(ChainBase, metaclass=Singleton):
return self.run_module("tmdb_person_credits", person_id=person_id, page=page)
@cached(cache=TTLCache(maxsize=1, ttl=3600))
def get_random_wallpager(self):
def get_random_wallpager(self) -> Optional[str]:
"""
获取随机壁纸,缓存1个小时
"""
@@ -126,6 +122,16 @@ class TmdbChain(ChainBase, metaclass=Singleton):
# 随机一个电影
while True:
info = random.choice(infos)
if info and info.get("backdrop_path"):
return f"https://{settings.TMDB_IMAGE_DOMAIN}/t/p/original{info.get('backdrop_path')}"
if info and info.backdrop_path:
return info.backdrop_path
return None
@cached(cache=TTLCache(maxsize=1, ttl=3600))
def get_trending_wallpapers(self, num: int = 10) -> List[str]:
"""
获取所有流行壁纸
"""
infos = self.tmdb_trending()
if infos:
return [info.backdrop_path for info in infos if info and info.backdrop_path][:num]
return []
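`get_random_wallpager` and `get_trending_wallpapers` are both memoized with `@cached(cache=TTLCache(maxsize=1, ttl=3600))` from cachetools. The effect — a single cached result, refreshed after the TTL elapses — can be reproduced with the standard library alone; this decorator is a simplified stand-in for illustration, not the cachetools implementation:

```python
import time
from functools import wraps

def ttl_cached(ttl: float):
    """Memoize a zero-argument function for `ttl` seconds, a minimal
    stdlib sketch of cached(TTLCache(maxsize=1, ttl=...))."""
    def decorator(func):
        state = {"at": 0.0, "value": None}

        @wraps(func)
        def wrapper():
            now = time.monotonic()
            # Recompute only when empty or expired
            if state["value"] is None or now - state["at"] >= ttl:
                state["value"] = func()
                state["at"] = now
            return state["value"]
        return wrapper
    return decorator
```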

View File

@@ -1,11 +1,12 @@
import re
import traceback
from typing import Dict, List, Union
from cachetools import cached, TTLCache
from app.chain import ChainBase
from app.chain.media import MediaChain
from app.core.config import settings
from app.core.config import settings, global_vars
from app.core.context import TorrentInfo, Context, MediaInfo
from app.core.metainfo import MetaInfo
from app.db.site_oper import SiteOper
@@ -15,7 +16,7 @@ from app.helper.sites import SitesHelper
from app.helper.torrent import TorrentHelper
from app.log import logger
from app.schemas import Notification
from app.schemas.types import SystemConfigKey, MessageChannel, NotificationType
from app.schemas.types import SystemConfigKey, MessageChannel, NotificationType, MediaType
from app.utils.singleton import Singleton
from app.utils.string import StringUtils
@@ -98,7 +99,8 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
if not site.get("rss"):
logger.error(f'站点 {domain} 未配置RSS地址')
return []
rss_items = self.rsshelper.parse(site.get("rss"), True if site.get("proxy") else False)
rss_items = self.rsshelper.parse(site.get("rss"), True if site.get("proxy") else False,
timeout=int(site.get("timeout") or 30))
if rss_items is None:
# rss过期,尝试保留原配置生成新的rss
self.__renew_rss_url(domain=domain, site=site)
@@ -152,12 +154,17 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
# 所有站点索引
indexers = self.siteshelper.get_indexers()
# 需要刷新的站点domain
domains = []
# 遍历站点缓存资源
for indexer in indexers:
if global_vars.is_system_stopped:
break
# 未开启的站点不刷新
if sites and indexer.get("id") not in sites:
continue
domain = StringUtils.get_url_domain(indexer.get("domain"))
domains.append(domain)
if stype == "spider":
# 刷新首页种子
torrents: List[TorrentInfo] = self.browse(domain=domain)
@@ -180,13 +187,21 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
logger.info(f'{indexer.get("name")} 没有新种子')
continue
for torrent in torrents:
if global_vars.is_system_stopped:
break
logger.info(f'处理资源:{torrent.title} ...')
# 识别
meta = MetaInfo(title=torrent.title, subtitle=torrent.description)
if torrent.title != meta.org_string:
logger.info(f'种子名称应用识别词后发生改变:{torrent.title} => {meta.org_string}')
# 使用站点种子分类,校正类型识别
if meta.type != MediaType.TV \
and torrent.category == MediaType.TV.value:
meta.type = MediaType.TV
# 识别媒体信息
mediainfo: MediaInfo = self.mediachain.recognize_by_meta(meta)
if not mediainfo:
logger.warn(f'未识别到媒体信息,标题:{torrent.title}')
logger.warn(f'{torrent.title} 未识别到媒体信息')
# 存储空的媒体信息
mediainfo = MediaInfo()
# 清理多余数据
@@ -212,7 +227,9 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
else:
self.save_cache(torrents_cache, self._rss_file)
# 返回
# 去除不在站点范围内的缓存种子
if sites and torrents_cache:
torrents_cache = {k: v for k, v in torrents_cache.items() if k in domains}
return torrents_cache
def __renew_rss_url(self, domain: str, site: dict):
@@ -241,10 +258,14 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
else:
# 发送消息
self.post_message(
Notification(mtype=NotificationType.SiteMessage, title=f"站点 {domain} RSS链接已过期"))
Notification(mtype=NotificationType.SiteMessage, title=f"站点 {domain} RSS链接已过期",
link=settings.MP_DOMAIN('#/site'))
)
else:
self.post_message(
Notification(mtype=NotificationType.SiteMessage, title=f"站点 {domain} RSS链接已过期"))
Notification(mtype=NotificationType.SiteMessage, title=f"站点 {domain} RSS链接已过期",
link=settings.MP_DOMAIN('#/site')))
except Exception as e:
print(str(e))
self.post_message(Notification(mtype=NotificationType.SiteMessage, title=f"站点 {domain} RSS链接已过期"))
logger.error(f"站点 {domain} RSS链接自动获取失败:{str(e)} - {traceback.format_exc()}")
self.post_message(Notification(mtype=NotificationType.SiteMessage, title=f"站点 {domain} RSS链接已过期",
link=settings.MP_DOMAIN('#/site')))

File diff suppressed because it is too large Load Diff

View File

@@ -1,15 +1,237 @@
from typing import Optional
import secrets
from typing import Optional, Tuple, Union
from app.chain import ChainBase
from app.core.config import settings
from app.core.security import get_password_hash, verify_password
from app.db.models.user import User
from app.db.user_oper import UserOper
from app.log import logger
from app.schemas.event import AuthCredentials, AuthInterceptCredentials
from app.schemas.types import ChainEventType
from app.utils.otp import OtpUtils
from app.utils.singleton import Singleton
PASSWORD_INVALID_CREDENTIALS_MESSAGE = "用户名或密码或二次校验码不正确"
class UserChain(ChainBase):
class UserChain(ChainBase, metaclass=Singleton):
"""
用户链,处理多种认证协议
"""
def user_authenticate(self, name, password) -> Optional[str]:
def __init__(self):
super().__init__()
self.user_oper = UserOper()
def user_authenticate(
self,
username: Optional[str] = None,
password: Optional[str] = None,
mfa_code: Optional[str] = None,
code: Optional[str] = None,
grant_type: str = "password"
) -> Union[Tuple[bool, Optional[str]], Tuple[bool, Optional[User]]]:
"""
辅助完成用户认证
:param name: 用户名
:param password: 密码
:return: token
认证用户,根据不同的 grant_type 处理不同的认证流程
:param username: 用户名,适用于 "password" grant_type
:param password: 用户密码,适用于 "password" grant_type
:param mfa_code: 一次性密码,适用于 "password" grant_type
:param code: 授权码,适用于 "authorization_code" grant_type
:param grant_type: 认证类型,如 "password", "authorization_code", "client_credentials"
:return:
- 对于成功的认证,返回 (True, User)
- 对于失败的认证,返回 (False, "错误信息")
"""
return self.run_module("user_authenticate", name=name, password=password)
credentials = AuthCredentials(
username=username,
password=password,
mfa_code=mfa_code,
code=code,
grant_type=grant_type
)
logger.debug(f"认证类型:{grant_type},开始准备对用户 {username} 进行身份校验")
if credentials.grant_type == "password":
# Password 认证
success, user_or_message = self.password_authenticate(credentials=credentials)
if success:
# 如果用户启用了二次验证码,则进一步验证
if not self._verify_mfa(user_or_message, credentials.mfa_code):
return False, PASSWORD_INVALID_CREDENTIALS_MESSAGE
logger.info(f"用户 {username} 通过密码认证成功")
return True, user_or_message
else:
# 用户不存在或密码错误,考虑辅助认证
if settings.AUXILIARY_AUTH_ENABLE:
logger.warning("密码认证失败,尝试通过外部服务进行辅助认证 ...")
aux_success, aux_user_or_message = self.auxiliary_authenticate(credentials=credentials)
if aux_success:
# 辅助认证成功后再验证二次验证码
if not self._verify_mfa(aux_user_or_message, credentials.mfa_code):
return False, PASSWORD_INVALID_CREDENTIALS_MESSAGE
return True, aux_user_or_message
else:
return False, PASSWORD_INVALID_CREDENTIALS_MESSAGE
else:
logger.debug(f"辅助认证未启用,用户 {username} 认证失败")
return False, PASSWORD_INVALID_CREDENTIALS_MESSAGE
elif credentials.grant_type == "authorization_code":
# 处理其他认证类型的分支
if settings.AUXILIARY_AUTH_ENABLE:
aux_success, aux_user_or_message = self.auxiliary_authenticate(credentials=credentials)
if aux_success:
return True, aux_user_or_message
else:
return False, "认证失败"
else:
return False, "认证失败"
else:
logger.debug(f"辅助认证未启用,认证类型 {grant_type} 未实现")
return False, "不支持的认证类型"
def password_authenticate(self, credentials: AuthCredentials) -> Tuple[bool, Union[User, str]]:
"""
密码认证
:param credentials: 认证凭证,包含用户名、密码以及可选的 MFA 认证码
:return:
- 成功时返回 (True, User),其中 User 是认证通过的用户对象
- 失败时返回 (False, "错误信息")
"""
if not credentials or credentials.grant_type != "password":
logger.info("密码认证失败,认证类型不匹配")
return False, PASSWORD_INVALID_CREDENTIALS_MESSAGE
user = self.user_oper.get_by_name(name=credentials.username)
if not user:
logger.info(f"密码认证失败,用户 {credentials.username} 不存在")
return False, PASSWORD_INVALID_CREDENTIALS_MESSAGE
if not user.is_active:
logger.info(f"密码认证失败,用户 {credentials.username} 已被禁用")
return False, PASSWORD_INVALID_CREDENTIALS_MESSAGE
if not verify_password(credentials.password, str(user.hashed_password)):
logger.info(f"密码认证失败,用户 {credentials.username} 的密码验证不通过")
return False, PASSWORD_INVALID_CREDENTIALS_MESSAGE
return True, user
def auxiliary_authenticate(self, credentials: AuthCredentials) -> Tuple[bool, Union[User, str]]:
"""
辅助用户认证
:param credentials: 认证凭证,包含必要的认证信息
:return:
- 成功时返回 (True, User),其中 User 是认证通过的用户对象
- 失败时返回 (False, "错误信息")
"""
if not credentials:
return False, "认证凭证无效"
# 检查是否因为用户被禁用
if credentials.username:
user = self.user_oper.get_by_name(name=credentials.username)
if user and not user.is_active:
logger.info(f"用户 {user.name} 已被禁用,跳过后续身份校验")
return False, PASSWORD_INVALID_CREDENTIALS_MESSAGE
logger.debug(f"认证类型:{credentials.grant_type},尝试通过系统模块进行辅助认证,用户: {credentials.username}")
result = self.run_module("user_authenticate", credentials=credentials)
if not result:
logger.debug(f"通过系统模块辅助认证失败,尝试触发 {ChainEventType.AuthVerification} 事件")
event = self.eventmanager.send_event(etype=ChainEventType.AuthVerification, data=credentials)
if not event or not event.event_data:
logger.error(f"认证类型:{credentials.grant_type},辅助认证失败,未返回有效数据")
return False, f"认证类型:{credentials.grant_type},辅助认证事件失败或无效"
credentials = event.event_data # 使用事件返回的认证数据
else:
logger.info(f"通过系统模块辅助认证成功,用户: {credentials.username}")
credentials = result # 使用模块认证返回的认证数据
# 处理认证成功的逻辑
success = self._process_auth_success(username=credentials.username, credentials=credentials)
if success:
logger.info(f"用户 {credentials.username} 辅助认证通过")
return True, self.user_oper.get_by_name(credentials.username)
else:
logger.warning(f"用户 {credentials.username} 辅助认证未通过")
return False, PASSWORD_INVALID_CREDENTIALS_MESSAGE
@staticmethod
def _verify_mfa(user: User, mfa_code: Optional[str]) -> bool:
"""
验证 MFA(二次验证码)
:param user: 用户对象
:param mfa_code: 二次验证码
:return: 如果验证成功返回 True否则返回 False
"""
if not user.is_otp:
return True
if not mfa_code:
logger.info(f"用户 {user.name} 缺少 MFA 认证码")
return False
if not OtpUtils.check(str(user.otp_secret), mfa_code):
logger.info(f"用户 {user.name} 的 MFA 认证失败")
return False
return True
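`_verify_mfa` delegates the actual code check to `OtpUtils.check`. Assuming that helper implements standard TOTP (RFC 6238: HMAC-SHA1 over a 30-second time counter, truncated to 6 digits), the check looks roughly like this stdlib sketch; the real `OtpUtils` may differ, e.g. in clock-drift window handling:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, timestamp=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically
    truncated to `digits` decimal digits."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if timestamp is None else timestamp) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def totp_check(secret_b32: str, code: str, timestamp=None) -> bool:
    """Constant-time comparison of the submitted code, as a check like
    OtpUtils.check presumably performs."""
    return hmac.compare_digest(totp(secret_b32, timestamp), code)
```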
def _process_auth_success(self, username: str, credentials: AuthCredentials) -> bool:
"""
处理辅助认证成功的逻辑,返回用户对象或创建新用户
:param username: 用户名
:param credentials: 认证凭证,包含 token、channel、service 等信息
:return:
- 如果认证成功并且用户存在或已创建,返回 User 对象
- 如果认证被拦截或失败,返回 None
"""
if not username:
logger.info(f"未能获取到对应的用户信息,{credentials.grant_type} 认证不通过")
return False
token, channel, service = credentials.token, credentials.channel, credentials.service
if not all([token, channel, service]):
logger.info(f"用户 {username} 未通过 {credentials.grant_type} 认证,必要信息不足")
return False
# 触发认证通过的拦截事件
intercept_event = self.eventmanager.send_event(
etype=ChainEventType.AuthIntercept,
data=AuthInterceptCredentials(username=username, channel=channel, service=service,
token=token, status="completed")
)
if intercept_event and intercept_event.event_data:
intercept_data: AuthInterceptCredentials = intercept_event.event_data
if intercept_data.cancel:
logger.warning(
f"认证被拦截,用户:{username},渠道:{channel},服务:{service},拦截源:{intercept_data.source}")
return False
# 检查用户是否存在,如果不存在且当前为密码认证时则创建新用户
user = self.user_oper.get_by_name(name=username)
if user:
# 如果用户存在,但是已经被禁用,则直接响应
if not user.is_active:
logger.info(f"辅助认证失败,用户 {username} 已被禁用")
return False
anonymized_token = f"{token[:len(token) // 2]}********"
logger.info(
f"认证类型:{credentials.grant_type},用户:{username},渠道:{channel},"
f"服务:{service} 认证成功,token:{anonymized_token}")
return True
else:
if credentials.grant_type == "password":
self.user_oper.add(name=username, is_active=True, is_superuser=False,
hashed_password=get_password_hash(secrets.token_urlsafe(16)))
logger.info(f"用户 {username} 不存在,已通过 {credentials.grant_type} 认证并已创建普通用户")
return True
else:
logger.warning(
f"认证类型:{credentials.grant_type},用户:{username},渠道:{channel},"
f"服务:{service} 认证不通过,未能在本地找到对应的用户信息")
return False

View File

@@ -1,346 +0,0 @@
import importlib
import threading
import traceback
from threading import Thread
from typing import Any, Union, Dict
from app.chain import ChainBase
from app.chain.download import DownloadChain
from app.chain.site import SiteChain
from app.chain.subscribe import SubscribeChain
from app.chain.system import SystemChain
from app.chain.transfer import TransferChain
from app.core.event import Event as ManagerEvent
from app.core.event import eventmanager, EventManager
from app.core.plugin import PluginManager
from app.helper.thread import ThreadHelper
from app.log import logger
from app.scheduler import Scheduler
from app.schemas import Notification
from app.schemas.types import EventType, MessageChannel
from app.utils.object import ObjectUtils
from app.utils.singleton import Singleton
class CommandChian(ChainBase):
"""
插件处理链
"""
def process(self, *args, **kwargs):
pass
class Command(metaclass=Singleton):
"""
全局命令管理,消费事件
"""
# 内建命令
_commands = {}
# 退出事件
_event = threading.Event()
def __init__(self):
# 事件管理器
self.eventmanager = EventManager()
# 插件管理器
self.pluginmanager = PluginManager()
# 处理链
self.chain = CommandChian()
# 定时服务管理
self.scheduler = Scheduler()
# 线程管理器
self.threader = ThreadHelper()
# 内置命令
self._commands = {
"/cookiecloud": {
"id": "cookiecloud",
"type": "scheduler",
"description": "同步站点",
"category": "站点"
},
"/sites": {
"func": SiteChain().remote_list,
"description": "查询站点",
"category": "站点",
"data": {}
},
"/site_cookie": {
"func": SiteChain().remote_cookie,
"description": "更新站点Cookie",
"data": {}
},
"/site_enable": {
"func": SiteChain().remote_enable,
"description": "启用站点",
"data": {}
},
"/site_disable": {
"func": SiteChain().remote_disable,
"description": "禁用站点",
"data": {}
},
"/mediaserver_sync": {
"id": "mediaserver_sync",
"type": "scheduler",
"description": "同步媒体服务器",
"category": "管理"
},
"/subscribes": {
"func": SubscribeChain().remote_list,
"description": "查询订阅",
"category": "订阅",
"data": {}
},
"/subscribe_refresh": {
"id": "subscribe_refresh",
"type": "scheduler",
"description": "刷新订阅",
"category": "订阅"
},
"/subscribe_search": {
"id": "subscribe_search",
"type": "scheduler",
"description": "搜索订阅",
"category": "订阅"
},
"/subscribe_delete": {
"func": SubscribeChain().remote_delete,
"description": "删除订阅",
"data": {}
},
"/subscribe_tmdb": {
"id": "subscribe_tmdb",
"type": "scheduler",
"description": "订阅元数据更新"
},
"/downloading": {
"func": DownloadChain().remote_downloading,
"description": "正在下载",
"category": "管理",
"data": {}
},
"/transfer": {
"id": "transfer",
"type": "scheduler",
"description": "下载文件整理",
"category": "管理"
},
"/redo": {
"func": TransferChain().remote_transfer,
"description": "手动整理",
"data": {}
},
"/clear_cache": {
"func": SystemChain().remote_clear_cache,
"description": "清理缓存",
"category": "管理",
"data": {}
},
"/restart": {
"func": SystemChain().restart,
"description": "重启系统",
"category": "管理",
"data": {}
},
"/version": {
"func": SystemChain().version,
"description": "当前版本",
"category": "管理",
"data": {}
}
}
# 汇总插件命令
plugin_commands = self.pluginmanager.get_plugin_commands()
for command in plugin_commands:
self.register(
cmd=command.get('cmd'),
func=Command.send_plugin_event,
desc=command.get('desc'),
category=command.get('category'),
data={
'etype': command.get('event'),
'data': command.get('data')
}
)
# 广播注册命令菜单
self.chain.register_commands(commands=self.get_commands())
# 消息处理线程
self._thread = Thread(target=self.__run)
# 启动事件处理线程
self._thread.start()
# 重启msg
SystemChain().restart_finish()
def __run(self):
"""
事件处理线程
"""
while not self._event.is_set():
event, handlers = self.eventmanager.get_event()
if event:
logger.info(f"处理事件:{event.event_type} - {handlers}")
for handler in handlers:
try:
names = handler.__qualname__.split(".")
[class_name, method_name] = names
if class_name in self.pluginmanager.get_plugin_ids():
# 插件事件
self.threader.submit(
self.pluginmanager.run_plugin_method,
class_name, method_name, event
)
else:
# 检查全局变量中是否存在
if class_name not in globals():
# 导入模块,除了插件和Command本身,只有chain能响应事件
module = importlib.import_module(
f"app.chain.{class_name[:-5].lower()}"
)
class_obj = getattr(module, class_name)()
else:
# 通过类名创建类实例
class_obj = globals()[class_name]()
# 检查类是否存在并调用方法
if hasattr(class_obj, method_name):
self.threader.submit(
getattr(class_obj, method_name),
event
)
except Exception as e:
logger.error(f"事件处理出错:{str(e)} - {traceback.format_exc()}")
def __run_command(self, command: Dict[str, any],
data_str: str = "",
channel: MessageChannel = None, userid: Union[str, int] = None):
"""
运行定时服务
"""
if command.get("type") == "scheduler":
# 定时服务
if userid:
self.chain.post_message(
Notification(
channel=channel,
title=f"开始执行 {command.get('description')} ...",
userid=userid
)
)
# 执行定时任务
self.scheduler.start(job_id=command.get("id"))
if userid:
self.chain.post_message(
Notification(
channel=channel,
title=f"{command.get('description')} 执行完成",
userid=userid
)
)
else:
# 命令
cmd_data = command['data'] if command.get('data') else {}
args_num = ObjectUtils.arguments(command['func'])
if args_num > 0:
if cmd_data:
# 有内置参数直接使用内置参数
data = cmd_data.get("data") or {}
data['channel'] = channel
data['user'] = userid
cmd_data['data'] = data
command['func'](**cmd_data)
elif args_num == 2:
# 没有输入参数,只输入渠道和用户ID
command['func'](channel, userid)
elif args_num > 2:
# 多个输入参数:用户输入、渠道、用户ID
command['func'](data_str, channel, userid)
else:
# 没有参数
command['func']()
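The branch above dispatches on the callable's parameter count, as reported by `ObjectUtils.arguments`. Assuming that utility simply counts parameters via `inspect.signature`, the zero/two/many dispatch can be sketched as follows — a simplification that omits the built-in `data` payload branch:

```python
import inspect

def count_args(func) -> int:
    """Stand-in for ObjectUtils.arguments: number of declared parameters."""
    return len(inspect.signature(func).parameters)

def dispatch(func, data_str="", channel=None, userid=None):
    """Call a command handler with the argument shape it declares."""
    n = count_args(func)
    if n == 0:
        return func()                          # no parameters
    if n == 2:
        return func(channel, userid)           # channel + userid only
    return func(data_str, channel, userid)     # user input + channel + userid
```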
def stop(self):
"""
停止事件处理线程
"""
self._event.set()
self._thread.join()
def get_commands(self):
"""
获取命令列表
"""
return self._commands
def register(self, cmd: str, func: Any, data: dict = None,
desc: str = None, category: str = None) -> None:
"""
注册命令
"""
self._commands[cmd] = {
"func": func,
"description": desc,
"category": category,
"data": data or {}
}
def get(self, cmd: str) -> Any:
"""
获取命令
"""
return self._commands.get(cmd, {})
def execute(self, cmd: str, data_str: str = "",
channel: MessageChannel = None, userid: Union[str, int] = None) -> None:
"""
执行命令
"""
command = self.get(cmd)
if command:
try:
if userid:
logger.info(f"用户 {userid} 开始执行:{command.get('description')} ...")
else:
logger.info(f"开始执行:{command.get('description')} ...")
# 执行命令
self.__run_command(command, data_str=data_str,
channel=channel, userid=userid)
if userid:
logger.info(f"用户 {userid} {command.get('description')} 执行完成")
else:
logger.info(f"{command.get('description')} 执行完成")
except Exception as err:
logger.error(f"执行命令 {cmd} 出错:{str(err)}")
traceback.print_exc()
@staticmethod
def send_plugin_event(etype: EventType, data: dict) -> None:
"""
发送插件命令
"""
EventManager().send_event(etype, data)
@eventmanager.register(EventType.CommandExcute)
def command_event(self, event: ManagerEvent) -> None:
"""
注册命令执行事件
event_data: {
"cmd": "/xxx args"
}
"""
# 命令参数
event_str = event.event_data.get('cmd')
# 消息渠道
event_channel = event.event_data.get('channel')
# 消息用户
event_user = event.event_data.get('user')
if event_str:
cmd = event_str.split()[0]
args = " ".join(event_str.split()[1:])
if self.get(cmd):
self.execute(cmd, args, event_channel, event_user)

View File

@@ -1,24 +1,46 @@
import copy
import os
import re
import secrets
import sys
import threading
from pathlib import Path
from typing import List, Optional
from typing import Any, Dict, List, Optional, Tuple, Type
from pydantic import BaseSettings
from dotenv import set_key
from pydantic import BaseModel, BaseSettings, validator
from app.log import logger
from app.utils.system import SystemUtils
from app.utils.url import UrlUtils
class Settings(BaseSettings):
class ConfigModel(BaseModel):
"""
Pydantic 配置模型,描述所有配置项及其类型和默认值
"""
class Config:
extra = "ignore" # 忽略未定义的配置项
# 项目名称
PROJECT_NAME = "MoviePilot"
# 域名 格式https://movie-pilot.org
APP_DOMAIN: str = ""
# API路径
API_V1_STR: str = "/api/v1"
# 前端资源路径
FRONTEND_PATH: str = "/public"
# 密钥
SECRET_KEY: str = secrets.token_urlsafe(32)
# RESOURCE密钥
RESOURCE_SECRET_KEY: str = secrets.token_urlsafe(32)
# 允许的域名
ALLOWED_HOSTS: list = ["*"]
# TOKEN过期时间
ACCESS_TOKEN_EXPIRE_MINUTES: int = 60 * 24 * 8
# RESOURCE_TOKEN过期时间
RESOURCE_ACCESS_TOKEN_EXPIRE_SECONDS: int = 60 * 30
# 时区
TZ: str = "Asia/Shanghai"
# API监听地址
@@ -31,24 +53,42 @@ class Settings(BaseSettings):
DEBUG: bool = False
# 是否开发模式
DEV: bool = False
# 是否在控制台输出 SQL 语句,默认关闭
DB_ECHO: bool = False
# 数据库连接池类型QueuePool, NullPool
DB_POOL_TYPE: str = "QueuePool"
# 是否在获取连接时进行预先 ping 操作,默认关闭
DB_POOL_PRE_PING: bool = False
# 数据库连接池的大小,默认 100
DB_POOL_SIZE: int = 100
# 数据库连接的回收时间(秒),默认 1800 秒
DB_POOL_RECYCLE: int = 1800
# 数据库连接池获取连接的超时时间(秒),默认 60 秒
DB_POOL_TIMEOUT: int = 60
# 数据库连接池最大溢出连接数,默认 500
DB_MAX_OVERFLOW: int = 500
# SQLite 的 busy_timeout 参数,默认为 60 秒
DB_TIMEOUT: int = 60
# SQLite 是否启用 WAL 模式,默认关闭
DB_WAL_ENABLE: bool = False
# 配置文件目录
CONFIG_DIR: str = None
CONFIG_DIR: Optional[str] = None
# 超级管理员
SUPERUSER: str = "admin"
# 超级管理员初始密码
SUPERUSER_PASSWORD: str = "password"
# 辅助认证,允许通过外部服务进行认证、单点登录以及自动创建用户
AUXILIARY_AUTH_ENABLE: bool = False
# API密钥需要更换
API_TOKEN: str = "moviepilot"
# 登录页面电影海报,tmdb/bing
WALLPAPER: str = "tmdb"
API_TOKEN: Optional[str] = None
# 网络代理 IP:PORT
PROXY_HOST: str = None
PROXY_HOST: Optional[str] = None
# 登录页面电影海报,tmdb/bing/mediaserver
WALLPAPER: str = "tmdb"
# 媒体搜索来源 themoviedb/douban/bangumi多个用,分隔
SEARCH_SOURCE: str = "themoviedb,douban,bangumi"
# 媒体识别来源 themoviedb/douban
RECOGNIZE_SOURCE: str = "themoviedb"
# 刮削来源 themoviedb/douban
SCRAP_SOURCE: str = "themoviedb"
# 刮削入库的媒体文件
SCRAP_METADATA: bool = True
# 新增已入库媒体是否跟随TMDB信息变化
SCRAP_FOLLOW_TMDB: bool = True
# TMDB图片地址
@@ -59,150 +99,82 @@ class Settings(BaseSettings):
TMDB_API_KEY: str = "db55323b8d3e4154498498a75642b381"
# TVDB API Key
TVDB_API_KEY: str = "6b481081-10aa-440c-99f2-21d17717ee02"
# Fanart开关
FANART_ENABLE: bool = True
# Fanart API Key
FANART_API_KEY: str = "d2d31f9ecabea050fc7d68aa3146015f"
# 元数据识别缓存过期时间(小时)
META_CACHE_EXPIRE: int = 0
# 电视剧动漫的分类genre_ids
ANIME_GENREIDS = [16]
# 用户认证站点
AUTH_SITE: str = ""
# 自动检查和更新站点资源包(站点索引、认证等)
AUTO_UPDATE_RESOURCE: bool = True
# 是否启用DOH解析域名
DOH_ENABLE: bool = True
# 使用 DOH 解析的域名列表
DOH_DOMAINS: str = "api.themoviedb.org,api.tmdb.org,webservice.fanart.tv,api.github.com,github.com,raw.githubusercontent.com,api.telegram.org"
# DOH 解析服务器列表
DOH_RESOLVERS: str = "1.0.0.1,1.1.1.1,9.9.9.9,149.112.112.112"
# 支持的后缀格式
RMT_MEDIAEXT: list = ['.mp4', '.mkv', '.ts', '.iso',
'.rmvb', '.avi', '.mov', '.mpeg',
'.mpg', '.wmv', '.3gp', '.asf',
'.m4v', '.flv', '.m2ts', '.strm',
'.tp']
'.tp', '.f4v']
# 支持的字幕文件后缀格式
RMT_SUBEXT: list = ['.srt', '.ass', '.ssa']
RMT_SUBEXT: list = ['.srt', '.ass', '.ssa', '.sup']
# 支持的音轨文件后缀格式
RMT_AUDIO_TRACK_EXT: list = ['.mka']
# 索引器
INDEXER: str = "builtin"
# 音轨文件后缀格式
RMT_AUDIOEXT: list = ['.aac', '.ac3', '.amr', '.caf', '.cda', '.dsf',
'.dff', '.kar', '.m4a', '.mp1', '.mp2', '.mp3',
'.mid', '.mod', '.mka', '.mpc', '.nsf', '.ogg',
'.pcm', '.rmi', '.s3m', '.snd', '.spx', '.tak',
'.tta', '.vqf', '.wav', '.wma',
'.aifc', '.aiff', '.alac', '.adif', '.adts',
'.flac', '.midi', '.opus', '.sfalc']
# 下载器临时文件后缀
DOWNLOAD_TMPEXT: list = ['.!qB', '.part']
# 媒体服务器同步间隔(小时)
MEDIASERVER_SYNC_INTERVAL: int = 6
# 订阅模式
SUBSCRIBE_MODE: str = "spider"
# RSS订阅模式刷新时间间隔分钟
SUBSCRIBE_RSS_INTERVAL: int = 30
# 订阅数据共享
SUBSCRIBE_STATISTIC_SHARE: bool = True
# 订阅搜索开关
SUBSCRIBE_SEARCH: bool = False
# 用户认证站点
AUTH_SITE: str = ""
# 交互搜索自动下载用户ID使用,分割
AUTO_DOWNLOAD_USER: str = None
# 消息通知渠道 telegram/wechat/slack多个通知渠道用,分隔
MESSAGER: str = "telegram"
# WeChat企业ID
WECHAT_CORPID: str = None
# WeChat应用Secret
WECHAT_APP_SECRET: str = None
# WeChat应用ID
WECHAT_APP_ID: str = None
# WeChat代理服务器
WECHAT_PROXY: str = "https://qyapi.weixin.qq.com"
# WeChat Token
WECHAT_TOKEN: str = None
# WeChat EncodingAESKey
WECHAT_ENCODING_AESKEY: str = None
# WeChat 管理员
WECHAT_ADMINS: str = None
# Telegram Bot Token
TELEGRAM_TOKEN: str = None
# Telegram Chat ID
TELEGRAM_CHAT_ID: str = None
# Telegram 用户ID使用,分隔
TELEGRAM_USERS: str = ""
# Telegram 管理员ID使用,分隔
TELEGRAM_ADMINS: str = ""
# Slack Bot User OAuth Token
SLACK_OAUTH_TOKEN: str = ""
# Slack App-Level Token
SLACK_APP_TOKEN: str = ""
# Slack 频道名称
SLACK_CHANNEL: str = ""
# SynologyChat Webhook
SYNOLOGYCHAT_WEBHOOK: str = ""
# SynologyChat Token
SYNOLOGYCHAT_TOKEN: str = ""
# 下载器 qbittorrent/transmission
DOWNLOADER: str = "qbittorrent"
# 下载器监控开关
DOWNLOADER_MONITOR: bool = True
# Qbittorrent地址IP:PORT
QB_HOST: str = None
# Qbittorrent用户名
QB_USER: str = None
# Qbittorrent密码
QB_PASSWORD: str = None
# Qbittorrent分类自动管理
QB_CATEGORY: bool = False
# Qbittorrent按顺序下载
QB_SEQUENTIAL: bool = True
# Qbittorrent忽略队列限制强制继续
QB_FORCE_RESUME: bool = False
# Transmission地址IP:PORT
TR_HOST: str = None
# Transmission用户名
TR_USER: str = None
# Transmission密码
TR_PASSWORD: str = None
# 检查本地媒体库是否存在资源开关
LOCAL_EXISTS_SEARCH: bool = False
# 搜索多个名称
SEARCH_MULTIPLE_NAME: bool = False
# 站点数据刷新间隔(小时)
SITEDATA_REFRESH_INTERVAL: int = 6
# 读取和发送站点消息
SITE_MESSAGE: bool = True
# 种子标签
TORRENT_TAG: str = "MOVIEPILOT"
# 下载保存目录,容器内映射路径需要一致
DOWNLOAD_PATH: str = None
# 电影下载保存目录,容器内映射路径需要一致
DOWNLOAD_MOVIE_PATH: str = None
# 电视剧下载保存目录,容器内映射路径需要一致
DOWNLOAD_TV_PATH: str = None
# 动漫下载保存目录,容器内映射路径需要一致
DOWNLOAD_ANIME_PATH: str = None
# 下载目录二级分类
DOWNLOAD_CATEGORY: bool = False
# 下载站点字幕
DOWNLOAD_SUBTITLE: bool = True
# 媒体服务器 emby/jellyfin/plex多个媒体服务器,分割
MEDIASERVER: str = "emby"
# 媒体服务器同步间隔(小时)
MEDIASERVER_SYNC_INTERVAL: Optional[int] = 6
# 媒体服务器同步黑名单,多个媒体库名称,分割
MEDIASERVER_SYNC_BLACKLIST: str = None
# EMBY服务器地址IP:PORT
EMBY_HOST: str = None
# EMBY外网地址http(s)://DOMAIN:PORT未设置时使用EMBY_HOST
EMBY_PLAY_HOST: str = None
# EMBY Api Key
EMBY_API_KEY: str = None
# Jellyfin服务器地址IP:PORT
JELLYFIN_HOST: str = None
# Jellyfin外网地址http(s)://DOMAIN:PORT未设置时使用JELLYFIN_HOST
JELLYFIN_PLAY_HOST: str = None
# Jellyfin Api Key
JELLYFIN_API_KEY: str = None
# Plex服务器地址IP:PORT
PLEX_HOST: str = None
# Plex外网地址http(s)://DOMAIN:PORT未设置时使用PLEX_HOST
PLEX_PLAY_HOST: str = None
# Plex Token
PLEX_TOKEN: str = None
# 转移方式 link/copy/move/softlink
TRANSFER_TYPE: str = "copy"
# 交互搜索自动下载用户ID使用,分割
AUTO_DOWNLOAD_USER: Optional[str] = None
# CookieCloud是否启动本地服务
COOKIECLOUD_ENABLE_LOCAL: Optional[bool] = False
# CookieCloud服务器地址
COOKIECLOUD_HOST: str = "https://movie-pilot.org/cookiecloud"
# CookieCloud用户KEY
COOKIECLOUD_KEY: str = None
COOKIECLOUD_KEY: Optional[str] = None
# CookieCloud端对端加密密码
COOKIECLOUD_PASSWORD: str = None
COOKIECLOUD_PASSWORD: Optional[str] = None
# CookieCloud同步间隔分钟
COOKIECLOUD_INTERVAL: Optional[int] = 60 * 24
# OCR服务器地址
OCR_HOST: str = "https://movie-pilot.org"
# CookieCloud同步黑名单多个域名,分割
COOKIECLOUD_BLACKLIST: Optional[str] = None
# CookieCloud对应的浏览器UA
USER_AGENT = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36 Edg/113.0.1774.57"
# 媒体库目录,多个目录使用,分隔
LIBRARY_PATH: str = None
# 电影媒体库目录名
LIBRARY_MOVIE_NAME: str = "电影"
# 电视剧媒体库目录名
LIBRARY_TV_NAME: str = "电视剧"
# 动漫媒体库目录名,不设置时使用电视剧目录
LIBRARY_ANIME_NAME: str = None
# 二级分类
LIBRARY_CATEGORY: bool = True
# 电视剧动漫的分类genre_ids
ANIME_GENREIDS = [16]
# 电影重命名格式
MOVIE_RENAME_FORMAT: str = "{{title}}{% if year %} ({{year}}){% endif %}" \
"/{{title}}{% if year %} ({{year}}){% endif %}{% if part %}-{{part}}{% endif %}{% if videoFormat %} - {{videoFormat}}{% endif %}" \
@@ -212,16 +184,218 @@ class Settings(BaseSettings):
"/Season {{season}}" \
"/{{title}} - {{season_episode}}{% if part %}-{{part}}{% endif %}{% if episode %} - 第 {{episode}} 集{% endif %}" \
"{{fileExt}}"
# 转移时覆盖模式
OVERWRITE_MODE: str = "size"
# OCR服务器地址
OCR_HOST: str = "https://movie-pilot.org"
# 服务器地址,对应 https://github.com/jxxghp/MoviePilot-Server 项目
MP_SERVER_HOST: str = "https://movie-pilot.org"
# 插件市场仓库地址,多个地址使用,分隔,地址以/结尾
PLUGIN_MARKET: str = "https://github.com/jxxghp/MoviePilot-Plugins,https://github.com/thsrite/MoviePilot-Plugins,https://github.com/honue/MoviePilot-Plugins,https://github.com/InfinityPacer/MoviePilot-Plugins"
# 插件安装数据共享
PLUGIN_STATISTIC_SHARE: bool = True
# 是否开启插件热加载
PLUGIN_AUTO_RELOAD: bool = False
# Github token提高请求api限流阈值 ghp_****
GITHUB_TOKEN: Optional[str] = None
# Github代理服务器格式https://mirror.ghproxy.com/
GITHUB_PROXY: Optional[str] = ''
# pip镜像站点格式https://pypi.tuna.tsinghua.edu.cn/simple
PIP_PROXY: Optional[str] = ''
# 指定的仓库Github token多个仓库使用,分隔,格式:{user1}/{repo1}:ghp_****,{user2}/{repo2}:github_pat_****
REPO_GITHUB_TOKEN: Optional[str] = None
# 大内存模式
BIG_MEMORY_MODE: bool = False
# 插件市场仓库地址,多个地址使用,分隔,地址以/结尾
PLUGIN_MARKET: str = "https://github.com/jxxghp/MoviePilot-Plugins"
# Github token提高请求api限流阈值 ghp_****
GITHUB_TOKEN: str = None
# 自动检查和更新站点资源包(站点索引、认证等)
AUTO_UPDATE_RESOURCE: bool = True
# 全局图片缓存,将媒体图片缓存到本地
GLOBAL_IMAGE_CACHE: bool = False
# 允许的图片缓存域名
SECURITY_IMAGE_DOMAINS: List[str] = ["image.tmdb.org", "static-mdb.v.geilijiasu.com", "doubanio.com", "lain.bgm.tv",
"raw.githubusercontent.com", "github.com"]
# 允许的图片文件后缀格式
SECURITY_IMAGE_SUFFIXES: List[str] = [".jpg", ".jpeg", ".png", ".webp", ".gif", ".svg"]
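The two new allowlists (`SECURITY_IMAGE_DOMAINS` and `SECURITY_IMAGE_SUFFIXES`) suggest a combined check before caching an image. A hedged sketch of such a check — the helper name `is_image_allowed` is an assumption, not code from the diff:

```python
from urllib.parse import urlparse

def is_image_allowed(url: str, domains: list, suffixes: list) -> bool:
    """True if the URL's host matches an allowed domain suffix and the
    path ends with an allowed image extension."""
    parsed = urlparse(url)
    # endswith() lets subdomains like img1.doubanio.com match doubanio.com
    host_ok = any(parsed.hostname and parsed.hostname.endswith(d) for d in domains)
    suffix_ok = any(parsed.path.lower().endswith(s) for s in suffixes)
    return host_ok and suffix_ok
```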
class Settings(BaseSettings, ConfigModel):
"""
系统配置类
"""
class Config:
case_sensitive = True
env_file = SystemUtils.get_env_path()
env_file_encoding = "utf-8"
def __init__(self, **kwargs):
super().__init__(**kwargs)
# 初始化配置目录及子目录
for path in [self.CONFIG_PATH, self.TEMP_PATH, self.LOG_PATH, self.COOKIE_PATH]:
if not path.exists():
path.mkdir(parents=True, exist_ok=True)
# 如果是二进制程序,确保配置文件存在
if SystemUtils.is_frozen():
app_env_path = self.CONFIG_PATH / "app.env"
if not app_env_path.exists():
SystemUtils.copy(self.INNER_CONFIG_PATH / "app.env", app_env_path)
@staticmethod
def validate_api_token(value: Any, original_value: Any) -> Tuple[Any, bool]:
"""
校验 API_TOKEN
"""
if isinstance(value, (list, dict, set)):
value = copy.deepcopy(value)
value = value.strip() if isinstance(value, str) else None
if not value or len(value) < 16:
new_token = secrets.token_urlsafe(16)
if not value:
logger.info(f"'API_TOKEN' 未设置,已随机生成新的【API_TOKEN】:{new_token}")
else:
logger.warning(f"'API_TOKEN' 长度不足 16 个字符,存在安全隐患,已随机生成新的【API_TOKEN】:{new_token}")
return new_token, True
return value, str(value) != str(original_value)
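The policy in `validate_api_token` boils down to: strip the value, and replace it with a fresh random token whenever it is missing or shorter than 16 characters. A stand-alone sketch of that rule (`ensure_token` is an illustrative name):

```python
import secrets

def ensure_token(value) -> tuple:
    """Return (token, regenerated): regenerate when empty or too short."""
    value = value.strip() if isinstance(value, str) else None
    if not value or len(value) < 16:
        # token_urlsafe(16) yields ~22 URL-safe characters
        return secrets.token_urlsafe(16), True
    return value, False
```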
@staticmethod
def generic_type_converter(value: Any, original_value: Any, expected_type: Type, default: Any, field_name: str,
raise_exception: bool = False) -> Tuple[Any, bool]:
"""
通用类型转换函数,根据预期类型转换值。如果转换失败,返回默认值
"""
if isinstance(value, (list, dict, set)):
value = copy.deepcopy(value)
# 如果 value 是 None仍需要检查与 original_value 是否不一致
if value is None:
return default, str(value) != str(original_value)
if isinstance(value, str):
value = value.strip()
try:
if expected_type is bool:
if isinstance(value, bool):
return value, str(value).lower() != str(original_value).lower()
if isinstance(value, str):
value_clean = value.lower()
bool_map = {
"false": False, "no": False, "0": False, "off": False,
"true": True, "yes": True, "1": True, "on": True
}
if value_clean in bool_map:
converted = bool_map[value_clean]
return converted, str(converted).lower() != str(original_value).lower()
elif isinstance(value, (int, float)):
converted = bool(value)
return converted, str(converted).lower() != str(original_value).lower()
return default, True
elif expected_type is int:
if isinstance(value, int):
return value, str(value) != str(original_value)
if isinstance(value, str):
converted = int(value)
return converted, str(converted) != str(original_value)
elif expected_type is float:
if isinstance(value, float):
return value, str(value) != str(original_value)
if isinstance(value, str):
converted = float(value)
return converted, str(converted) != str(original_value)
elif expected_type is str:
# 清理 value 中所有空白字符的字段
fields_not_keep_spaces = {"AUTO_DOWNLOAD_USER", "REPO_GITHUB_TOKEN", "PLUGIN_MARKET"}
if field_name in fields_not_keep_spaces:
value = re.sub(r"\s+", "", value)
return value, str(value) != str(original_value)
# # 后续考虑支持 list 类型的处理
# elif expected_type is list:
# if isinstance(value, list):
# return value, False
# if isinstance(value, str):
# items = [item.strip() for item in value.split(",") if item.strip()]
# return items, items != original_value.split(",")
# 可根据需要添加更多类型处理
else:
return value, str(value) != str(original_value)
except (ValueError, TypeError) as e:
if raise_exception:
raise ValueError(f"配置项 '{field_name}' 的值 '{value}' 无法转换成正确的类型") from e
logger.error(
f"配置项 '{field_name}' 的值 '{value}' 无法转换成正确的类型,使用默认值 '{default}',错误信息: {e}")
return default, True
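The boolean branch of `generic_type_converter` above is driven by a small string table. Extracted into a stand-alone helper for clarity (`to_bool` is an illustrative name, not part of the diff):

```python
BOOL_MAP = {
    "false": False, "no": False, "0": False, "off": False,
    "true": True, "yes": True, "1": True, "on": True,
}

def to_bool(value, default: bool = False):
    """Coerce common representations to bool; fall back to default."""
    if isinstance(value, bool):          # bool first: True is also an int
        return value
    if isinstance(value, str):
        return BOOL_MAP.get(value.strip().lower(), default)
    if isinstance(value, (int, float)):
        return bool(value)
    return default
```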
@validator('*', pre=True, always=True)
def generic_type_validator(cls, value: Any, field):
"""
通用校验器,尝试将配置值转换为期望的类型
"""
if field.name == "API_TOKEN":
converted_value, needs_update = cls.validate_api_token(value, value)
else:
converted_value, needs_update = cls.generic_type_converter(value, value, field.type_, field.default,
field.name)
if needs_update:
cls.update_env_config(field, value, converted_value)
return converted_value
@staticmethod
def update_env_config(field: Any, original_value: Any, converted_value: Any) -> Tuple[bool, str]:
"""
更新 env 配置
"""
message = None
is_converted = original_value is not None and str(original_value) != str(converted_value)
if is_converted:
message = f"配置项 '{field.name}' 的值 '{original_value}' 无效,已替换为 '{converted_value}'"
logger.warning(message)
if field.name in os.environ:
if is_converted:
message = f"配置项 '{field.name}' 已在环境变量中设置,请手动更新以保持一致性"
logger.warning(message)
return False, message
else:
set_key(SystemUtils.get_env_path(), field.name, str(converted_value) if converted_value is not None else "")
if is_converted:
logger.info(f"配置项 '{field.name}' 已自动修正并写入到 'app.env' 文件")
return True, message
def update_setting(self, key: str, value: Any) -> Tuple[bool, str]:
"""
更新单个配置项
"""
if not hasattr(self, key):
return False, f"配置项 '{key}' 不存在"
try:
field = self.__fields__[key]
original_value = getattr(self, key)
if field.name == "API_TOKEN":
converted_value, needs_update = self.validate_api_token(value, original_value)
else:
converted_value, needs_update = self.generic_type_converter(value, original_value, field.type_,
field.default, key)
# 如果没有抛出异常,则统一使用 converted_value 进行更新
if needs_update or str(value) != str(converted_value):
success, message = self.update_env_config(field, original_value, converted_value)
# 仅成功更新配置时,才更新内存
if success:
setattr(self, key, converted_value)
return success, message
return True, ""
except Exception as e:
return False, str(e)
def update_settings(self, env: Dict[str, Any]) -> Dict[str, Tuple[bool, str]]:
"""
更新多个配置项
"""
results = {}
for k, v in env.items():
results[k] = self.update_setting(k, v)
return results
@property
def VERSION_FLAG(self) -> str:
"""
版本标识,用来区分重大版本,为空则为v1,不允许外部修改
"""
return "v2"
@property
def INNER_CONFIG_PATH(self):
@@ -241,6 +415,10 @@ class Settings(BaseSettings):
def TEMP_PATH(self):
return self.CONFIG_PATH / "temp"
@property
def CACHE_PATH(self):
return self.CONFIG_PATH / "cache"
@property
def ROOT_PATH(self):
return Path(__file__).parents[2]
@@ -253,6 +431,10 @@ class Settings(BaseSettings):
def LOG_PATH(self):
return self.CONFIG_PATH / "logs"
@property
def COOKIE_PATH(self):
return self.CONFIG_PATH / "cookies"
@property
def CACHE_CONF(self):
if self.BIG_MEMORY_MODE:
@@ -262,7 +444,7 @@ class Settings(BaseSettings):
"torrents": 100,
"douban": 512,
"fanart": 512,
"meta": 15 * 24 * 3600
"meta": (self.META_CACHE_EXPIRE or 168) * 3600
}
return {
"tmdb": 256,
@@ -270,7 +452,7 @@ class Settings(BaseSettings):
"torrents": 50,
"douban": 256,
"fanart": 128,
"meta": 7 * 24 * 3600
"meta": (self.META_CACHE_EXPIRE or 72) * 3600
}
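The changed `meta` entries both follow the same fallback rule: use `META_CACHE_EXPIRE` (hours) when it is non-zero, otherwise the per-mode default (168 h in big-memory mode, 72 h otherwise), converted to seconds. As a one-liner (`meta_ttl` is an illustrative name):

```python
def meta_ttl(expire_hours: int, big_memory: bool) -> int:
    """Metadata cache TTL in seconds, mirroring the `or` fallback above."""
    return (expire_hours or (168 if big_memory else 72)) * 3600
```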
@property
@@ -289,48 +471,6 @@ class Settings(BaseSettings):
"server": self.PROXY_HOST
}
@property
def LIBRARY_PATHS(self) -> List[Path]:
if self.LIBRARY_PATH:
return [Path(path) for path in self.LIBRARY_PATH.split(",")]
return [self.CONFIG_PATH / "library"]
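Several settings here (`LIBRARY_PATH`, the CookieCloud blacklist, `MEDIASERVER`) share the comma-separated-list convention that `LIBRARY_PATHS` parses. A minimal sketch of that split (`split_paths` is an illustrative name):

```python
from pathlib import Path

def split_paths(raw: str) -> list:
    """Split a comma-separated path setting into Path objects,
    skipping empty entries."""
    return [Path(p) for p in raw.split(",") if p]
```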
@property
def SAVE_PATH(self) -> Path:
"""
获取下载保存目录
"""
if self.DOWNLOAD_PATH:
return Path(self.DOWNLOAD_PATH)
return self.CONFIG_PATH / "downloads"
@property
def SAVE_MOVIE_PATH(self) -> Path:
"""
获取电影下载保存目录
"""
if self.DOWNLOAD_MOVIE_PATH:
return Path(self.DOWNLOAD_MOVIE_PATH)
return self.SAVE_PATH
@property
def SAVE_TV_PATH(self) -> Path:
"""
获取电视剧下载保存目录
"""
if self.DOWNLOAD_TV_PATH:
return Path(self.DOWNLOAD_TV_PATH)
return self.SAVE_PATH
@property
def SAVE_ANIME_PATH(self) -> Path:
"""
获取动漫下载保存目录
"""
if self.DOWNLOAD_ANIME_PATH:
return Path(self.DOWNLOAD_ANIME_PATH)
return self.SAVE_TV_PATH
@property
def GITHUB_HEADERS(self):
"""
@@ -342,26 +482,88 @@ class Settings(BaseSettings):
}
return {}
def __init__(self, **kwargs):
super().__init__(**kwargs)
with self.CONFIG_PATH as p:
if not p.exists():
p.mkdir(parents=True, exist_ok=True)
if SystemUtils.is_frozen():
if not (p / "app.env").exists():
SystemUtils.copy(self.INNER_CONFIG_PATH / "app.env", p / "app.env")
with self.TEMP_PATH as p:
if not p.exists():
p.mkdir(parents=True, exist_ok=True)
with self.LOG_PATH as p:
if not p.exists():
p.mkdir(parents=True, exist_ok=True)
def REPO_GITHUB_HEADERS(self, repo: str = None):
"""
Github指定的仓库请求头
:param repo: 指定的仓库名称,格式为 "user/repo"。如果为空,或者没有找到指定仓库请求头,则返回默认的请求头信息
:return: Github请求头
"""
# 如果没有传入指定的仓库名称或没有配置指定的仓库Token则返回默认的请求头信息
if not repo or not self.REPO_GITHUB_TOKEN:
return self.GITHUB_HEADERS
headers = {}
# 格式:{user1}/{repo1}:ghp_****,{user2}/{repo2}:github_pat_****
token_pairs = self.REPO_GITHUB_TOKEN.split(",")
for token_pair in token_pairs:
try:
parts = token_pair.split(":")
if len(parts) != 2:
print(f"无效的令牌格式: {token_pair}")
continue
repo_info = parts[0].strip()
token = parts[1].strip()
if not repo_info or not token:
print(f"无效的令牌或仓库信息: {token_pair}")
continue
headers[repo_info] = {
"Authorization": f"Bearer {token}"
}
except Exception as e:
print(f"处理令牌对 '{token_pair}' 时出错: {e}")
# 如果传入了指定的仓库名称,则返回该仓库的请求头信息,否则返回默认请求头
return headers.get(repo, self.GITHUB_HEADERS)
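The token-pair parsing inside `REPO_GITHUB_HEADERS` can be sketched as a pure function over the documented `{user}/{repo}:token` format; malformed pairs are skipped rather than raising (`parse_repo_tokens` is an illustrative name):

```python
def parse_repo_tokens(raw: str) -> dict:
    """Parse "u1/r1:ghp_xxx,u2/r2:pat_yyy" into per-repo auth headers."""
    headers = {}
    for pair in raw.split(","):
        parts = pair.split(":")
        if len(parts) != 2:          # skip entries without exactly one colon
            continue
        repo, token = parts[0].strip(), parts[1].strip()
        if repo and token:
            headers[repo] = {"Authorization": f"Bearer {token}"}
    return headers
```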
class Config:
case_sensitive = True
@property
def VAPID(self):
return {
"subject": f"mailto:{self.SUPERUSER}@movie-pilot.org",
"publicKey": "BH3w49sZA6jXUnE-yt4jO6VKh73lsdsvwoJ6Hx7fmPIDKoqGiUl2GEoZzy-iJfn4SfQQcx7yQdHf9RknwrL_lSM",
"privateKey": "JTixnYY0vEw97t9uukfO3UWKfHKJdT5kCQDiv3gu894"
}
def MP_DOMAIN(self, url: str = None):
if not self.APP_DOMAIN:
return None
return UrlUtils.combine_url(host=self.APP_DOMAIN, path=url)
settings = Settings(
_env_file=Settings().CONFIG_PATH / "app.env",
_env_file_encoding="utf-8"
)
class GlobalVar(object):
"""
全局标识
"""
# 系统停止事件
STOP_EVENT: threading.Event = threading.Event()
# webpush订阅
SUBSCRIPTIONS: List[dict] = []
def stop_system(self):
"""
停止系统
"""
self.STOP_EVENT.set()
@property
def is_system_stopped(self):
"""
是否停止
"""
return self.STOP_EVENT.is_set()
def get_subscriptions(self):
"""
获取webpush订阅
"""
return self.SUBSCRIPTIONS
def push_subscription(self, subscription: dict):
"""
添加webpush订阅
"""
self.SUBSCRIPTIONS.append(subscription)
# 实例化配置
settings = Settings()
# 全局标识
global_vars = GlobalVar()
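`GlobalVar` wraps a shared `threading.Event` so a stop request is visible to all worker threads at once, instead of each thread polling its own boolean. A stripped-down sketch of the polling pattern (the `worker`/`progress` names are illustrative, not part of the diff):

```python
import threading

stop_event = threading.Event()

def worker(progress: list):
    """Loop until the shared stop event is set, recording each iteration."""
    while not stop_event.is_set():
        progress.append(len(progress) + 1)
        if len(progress) >= 3:       # simulate an external stop request
            stop_event.set()
```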

View File

@@ -57,6 +57,8 @@ class TorrentInfo:
labels: list = field(default_factory=list)
# 种子优先级
pri_order: int = 0
# 种子分类 电影/电视剧
category: str = None
def __setattr__(self, name: str, value: Any):
self.__dict__[name] = value
@@ -131,10 +133,20 @@ class TorrentInfo:
@dataclass
class MediaInfo:
# 来源themoviedb、douban、bangumi
source: str = None
# 类型 电影、电视剧
type: MediaType = None
# 媒体标题
title: str = None
# 英文标题
en_title: str = None
# 香港标题
hk_title: str = None
# 台湾标题
tw_title: str = None
# 新加坡标题
sg_title: str = None
# 年份
year: str = None
# 季
@@ -147,6 +159,8 @@ class MediaInfo:
tvdb_id: int = None
# 豆瓣ID
douban_id: str = None
# Bangumi ID
bangumi_id: int = None
# 媒体原语种
original_language: str = None
# 媒体原发行标题
@@ -160,7 +174,7 @@ class MediaInfo:
# LOGO
logo_path: str = None
# 评分
vote_average: int = 0
vote_average: float = 0
# 描述
overview: str = None
# 风格ID
@@ -179,6 +193,8 @@ class MediaInfo:
tmdb_info: dict = field(default_factory=dict)
# 豆瓣 INFO
douban_info: dict = field(default_factory=dict)
# Bangumi INFO
bangumi_info: dict = field(default_factory=dict)
# 导演
directors: List[dict] = field(default_factory=list)
# 演员
@@ -234,6 +250,8 @@ class MediaInfo:
self.set_tmdb_info(self.tmdb_info)
if self.douban_info:
self.set_douban_info(self.douban_info)
if self.bangumi_info:
self.set_bangumi_info(self.bangumi_info)
def __setattr__(self, name: str, value: Any):
self.__dict__[name] = value
@@ -333,16 +351,18 @@ class MediaInfo:
return [], []
directors = []
actors = []
for cast in _credits.get("cast"):
for cast in _credits.get("cast") or []:
if cast.get("known_for_department") == "Acting":
actors.append(cast)
for crew in _credits.get("crew"):
for crew in _credits.get("crew") or []:
if crew.get("job") in ["Director", "Writer", "Editor", "Producer"]:
directors.append(crew)
return directors, actors
if not info:
return
# 来源
self.source = "themoviedb"
# 本体
self.tmdb_info = info
# 类型
@@ -368,6 +388,14 @@ class MediaInfo:
self.genre_ids = info.get('genre_ids') or []
# 原语种
self.original_language = info.get('original_language')
# 英文标题
self.en_title = info.get('en_title')
# 香港标题
self.hk_title = info.get('hk_title')
# 台湾标题
self.tw_title = info.get('tw_title')
# 新加坡标题
self.sg_title = info.get('sg_title')
if self.type == MediaType.MOVIE:
# 标题
self.title = info.get('title')
@@ -424,6 +452,8 @@ class MediaInfo:
"""
if not info:
return
# 来源
self.source = "douban"
# 本体
self.douban_info = info
# 豆瓣ID
@@ -432,19 +462,30 @@ class MediaInfo:
if not self.type:
if isinstance(info.get('media_type'), MediaType):
self.type = info.get('media_type')
elif info.get("type"):
self.type = MediaType.MOVIE if info.get("type") == "movie" else MediaType.TV
elif info.get("subtype"):
self.type = MediaType.MOVIE if info.get("subtype") == "movie" else MediaType.TV
elif info.get("target_type"):
self.type = MediaType.MOVIE if info.get("target_type") == "movie" else MediaType.TV
elif info.get("type_name"):
self.type = MediaType(info.get("type_name"))
elif info.get("uri"):
self.type = MediaType.MOVIE if "/movie/" in info.get("uri") else MediaType.TV
elif info.get("type") and info.get("type") in ["movie", "tv"]:
self.type = MediaType.MOVIE if info.get("type") == "movie" else MediaType.TV
# 标题
if not self.title:
self.title = info.get("title")
# 英文标题,暂时不支持
if not self.en_title:
self.en_title = info.get('original_title')
# 原语种标题
if not self.original_title:
self.original_title = info.get("original_title")
# 年份
if not self.year:
self.year = info.get("year")[:4] if info.get("year") else None
if not self.year and info.get("extra"):
self.year = info.get("extra").get("year")
# 识别标题中的季
meta = MetaInfo(info.get("title"))
# 季
@@ -474,14 +515,24 @@ class MediaInfo:
self.release_date = match.group()
# 海报
if not self.poster_path:
self.poster_path = info.get("pic", {}).get("large")
if info.get("pic"):
self.poster_path = info.get("pic", {}).get("large")
if not self.poster_path and info.get("cover_url"):
self.poster_path = info.get("cover_url")
# imageView2/0/q/80/w/9999/h/120/format/webp -> imageView2/1/w/500/h/750/format/webp
self.poster_path = re.sub(r'imageView2/\d/q/\d+/w/\d+/h/\d+/format/webp', 'imageView2/1/w/500/h/750/format/webp', info.get("cover_url"))
if not self.poster_path and info.get("cover"):
self.poster_path = info.get("cover").get("url")
if info.get("cover").get("url"):
self.poster_path = info.get("cover").get("url")
else:
self.poster_path = info.get("cover").get("large", {}).get("url")
# 简介
if not self.overview:
self.overview = info.get("intro") or info.get("card_subtitle") or ""
if not self.overview:
if info.get("extra", {}).get("info"):
extra_info = info.get("extra").get("info")
if extra_info:
self.overview = "".join(["".join(item) for item in extra_info])
# 从简介中提取年份
if self.overview and not self.year:
match = re.search(r'\d{4}', self.overview)
@@ -527,6 +578,74 @@ class MediaInfo:
if not hasattr(self, key):
setattr(self, key, value)
def set_bangumi_info(self, info: dict):
"""
初始化Bangumi信息
"""
if not info:
return
# 来源
self.source = "bangumi"
# 本体
self.bangumi_info = info
# Bangumi ID
self.bangumi_id = info.get("id")
# 类型
if not self.type:
self.type = MediaType.TV
# 标题
if not self.title:
self.title = info.get("name_cn") or info.get("name")
# 原语种标题
if not self.original_title:
self.original_title = info.get("name")
# 识别标题中的季
meta = MetaInfo(self.title)
# 季
if not self.season:
self.season = meta.begin_season
# 评分
if not self.vote_average:
rating = info.get("rating")
if rating:
vote_average = float(rating.get("score"))
else:
vote_average = 0
self.vote_average = vote_average
# 发行日期
if not self.release_date:
self.release_date = info.get("date") or info.get("air_date")
# 年份
if not self.year:
self.year = self.release_date[:4] if self.release_date else None
# 海报
if not self.poster_path:
if info.get("images"):
self.poster_path = info.get("images", {}).get("large")
if not self.poster_path and info.get("image"):
self.poster_path = info.get("image")
# 简介
if not self.overview:
self.overview = info.get("summary")
# 别名
if not self.names:
infobox = info.get("infobox")
if infobox:
akas = [item.get("value") for item in infobox if item.get("key") == "别名"]
if akas:
self.names = [aka.get("v") for aka in akas[0]]
# 剧集
if self.type == MediaType.TV and not self.seasons:
meta = MetaInfo(self.title)
season = meta.begin_season or 1
episodes_count = info.get("total_episodes")
if episodes_count:
self.seasons[season] = list(range(1, episodes_count + 1))
# 演员
if not self.actors:
self.actors = info.get("actors") or []
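The seasons bookkeeping at the end of `set_bangumi_info` turns a flat `total_episodes` count into the `{season: [episode, ...]}` shape used elsewhere in `MediaInfo`. As a stand-alone helper (`build_seasons` is an illustrative name):

```python
def build_seasons(season: int, total_episodes: int) -> dict:
    """Map a season number to its 1-based episode list."""
    return {season: list(range(1, total_episodes + 1))}
```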
@property
def title_year(self):
if self.title:
@@ -545,6 +664,8 @@ class MediaInfo:
return "https://www.themoviedb.org/tv/%s" % self.tmdb_id
elif self.douban_id:
return "https://movie.douban.com/subject/%s" % self.douban_id
elif self.bangumi_id:
return "http://bgm.tv/subject/%s" % self.bangumi_id
return ""
@property
@@ -606,6 +727,9 @@ class MediaInfo:
dicts["type"] = self.type.value if self.type else None
dicts["detail_link"] = self.detail_link
dicts["title_year"] = self.title_year
dicts["tmdb_info"] = None
dicts["douban_info"] = None
dicts["bangumi_info"] = None
return dicts
def clear(self):
@@ -614,6 +738,7 @@ class MediaInfo:
"""
self.tmdb_info = {}
self.douban_info = {}
self.bangumi_info = {}
self.seasons = {}
self.genres = []
self.season_info = []

View File

@@ -1,119 +1,523 @@
from queue import Queue, Empty
from typing import Dict, Any
import copy
import importlib
import inspect
import random
import threading
import time
import traceback
import uuid
from functools import lru_cache
from queue import Empty, PriorityQueue
from typing import Callable, Dict, List, Optional, Union
from app.helper.message import MessageHelper
from app.helper.thread import ThreadHelper
from app.log import logger
from app.schemas.event import ChainEventData
from app.schemas.types import ChainEventType, EventType
from app.utils.limit import ExponentialBackoffRateLimiter
from app.utils.singleton import Singleton
from app.schemas.types import EventType
DEFAULT_EVENT_PRIORITY = 10 # 事件的默认优先级
MIN_EVENT_CONSUMER_THREADS = 1 # 最小事件消费者线程数
INITIAL_EVENT_QUEUE_IDLE_TIMEOUT_SECONDS = 1 # 事件队列空闲时的初始超时时间(秒)
MAX_EVENT_QUEUE_IDLE_TIMEOUT_SECONDS = 5 # 事件队列空闲时的最大超时时间(秒)
class Event:
"""
事件类,封装事件的基本信息
"""
def __init__(self, event_type: Union[EventType, ChainEventType],
event_data: Optional[Union[Dict, ChainEventData]] = None,
priority: int = DEFAULT_EVENT_PRIORITY):
"""
:param event_type: 事件的类型,支持 EventType 或 ChainEventType
:param event_data: 可选,事件携带的数据,默认为空字典
:param priority: 可选,事件的优先级,默认为 10
"""
self.event_id = str(uuid.uuid4()) # 事件ID
self.event_type = event_type # 事件类型
self.event_data = event_data or {} # 事件数据
self.priority = priority # 事件优先级
def __repr__(self) -> str:
"""
重写 __repr__ 方法用于返回事件的详细信息包括事件类型、事件ID和优先级
"""
event_kind = Event.get_event_kind(self.event_type)
return f"<{event_kind}: {self.event_type.value}, ID: {self.event_id}, Priority: {self.priority}>"
def __lt__(self, other):
"""
定义事件对象的比较规则,基于优先级比较
优先级小的事件会被认为“更小”,优先级高的事件将被认为“更大”
"""
return self.priority < other.priority
@staticmethod
def get_event_kind(event_type: Union[EventType, ChainEventType]) -> str:
"""
根据事件类型判断事件是广播事件还是链式事件
:param event_type: 事件类型,支持 EventType 或 ChainEventType
:return: 返回 Broadcast Event 或 Chain Event
"""
return "Broadcast Event" if isinstance(event_type, EventType) else "Chain Event"
class EventManager(metaclass=Singleton):
"""
事件管理器
EventManager 负责管理和调度广播事件和链式事件,包括订阅、发送和处理事件
"""
# 退出事件
__event = threading.Event()
def __init__(self):
# 事件队列
self._eventQueue = Queue()
# 事件响应函数字典
self._handlers: Dict[str, Dict[str, Any]] = {}
# 已禁用的事件响应
self._disabled_handlers = []
self.__messagehelper = MessageHelper()
self.__executor = ThreadHelper() # 动态线程池,用于消费事件
self.__consumer_threads = [] # 用于保存启动的事件消费者线程
self.__event_queue = PriorityQueue() # 优先级队列
self.__broadcast_subscribers: Dict[EventType, Dict[str, Callable]] = {} # 广播事件的订阅者
self.__chain_subscribers: Dict[ChainEventType, Dict[str, tuple[int, Callable]]] = {} # 链式事件的订阅者
self.__disabled_handlers = set() # 禁用的事件处理器集合
self.__disabled_classes = set() # 禁用的事件处理器类集合
self.__lock = threading.Lock() # 线程锁
def get_event(self):
def start(self):
"""
获取事件
开始广播事件处理线程
"""
# 启动消费者线程用于处理广播事件
self.__event.set()
for _ in range(MIN_EVENT_CONSUMER_THREADS):
thread = threading.Thread(target=self.__broadcast_consumer_loop, daemon=True)
thread.start()
self.__consumer_threads.append(thread) # 将线程对象保存到列表中
def stop(self):
"""
停止广播事件处理线程
"""
logger.info("正在停止事件处理...")
self.__event.clear() # 停止广播事件处理
try:
event = self._eventQueue.get(block=True, timeout=1)
handlers = self._handlers.get(event.event_type) or {}
if handlers:
# 去除掉被禁用的事件响应
handlerList = [handler for handler in handlers.values()
if handler.__qualname__.split(".")[0] not in self._disabled_handlers]
return event, handlerList
return event, []
except Empty:
return None, []
# 通过遍历保存的线程来等待它们完成
for consumer_thread in self.__consumer_threads:
consumer_thread.join()
logger.info("事件处理停止完成")
except Exception as e:
logger.error(f"停止事件处理线程出错:{str(e)} - {traceback.format_exc()}")
def check(self, etype: EventType):
def check(self, etype: Union[EventType, ChainEventType]) -> bool:
"""
检查事件是否存在响应
检查是否有启用的事件处理器可以响应某个事件类型
:param etype: 事件类型 (EventType 或 ChainEventType)
:return: 返回是否存在可用的处理器
"""
return etype.value in self._handlers
def add_event_listener(self, etype: EventType, handler: type):
"""
注册事件处理
"""
try:
handlers = self._handlers[etype.value]
except KeyError:
handlers = {}
self._handlers[etype.value] = handlers
if handler.__qualname__ in handlers:
handlers.pop(handler.__qualname__)
if isinstance(etype, ChainEventType):
handlers = self.__chain_subscribers.get(etype, {})
return any(
self.__is_handler_enabled(handler)
for _, handler in handlers.values()
)
else:
logger.debug(f"Event Registered: {etype.value} - {handler.__qualname__}")
handlers[handler.__qualname__] = handler
handlers = self.__broadcast_subscribers.get(etype, {})
return any(
self.__is_handler_enabled(handler)
for handler in handlers.values()
)
def disable_events_hander(self, class_name: str):
def send_event(self, etype: Union[EventType, ChainEventType], data: Optional[Union[Dict, ChainEventData]] = None,
priority: int = DEFAULT_EVENT_PRIORITY) -> Optional[Event]:
"""
标记对应类事件处理为不可用
发送事件,根据事件类型决定是广播事件还是链式事件
:param etype: 事件类型 (EventType 或 ChainEventType)
:param data: 可选,事件数据
:param priority: 广播事件的优先级,默认为 10
:return: 如果是链式事件,返回处理后的事件数据;否则返回 None
"""
event = Event(etype, data, priority)
if isinstance(etype, EventType):
self.__trigger_broadcast_event(event)
elif isinstance(etype, ChainEventType):
return self.__trigger_chain_event(event)
else:
logger.error(f"Unknown event type: {etype}")
if class_name not in self._disabled_handlers:
self._disabled_handlers.append(class_name)
logger.debug(f"Event Disabled:{class_name}")
def enable_events_hander(self, class_name: str):
"""
标记对应事件处理为可用
"""
if class_name in self._disabled_handlers:
self._disabled_handlers.remove(class_name)
logger.debug(f"Event Enabled:{class_name}")
def send_event(self, etype: EventType, data: dict = None):
"""
发送事件
"""
if etype not in EventType:
return
event = Event(etype.value)
event.event_data = data or {}
logger.debug(f"发送事件:{etype.value} - {event.event_data}")
self._eventQueue.put(event)
def register(self, etype: [EventType, list]):
"""
事件注册
:param etype: 事件类型
"""
def decorator(f):
if isinstance(etype, list):
for et in etype:
self.add_event_listener(et, f)
elif type(etype) == type(EventType):
for et in etype.__members__.values():
self.add_event_listener(et, f)
else:
self.add_event_listener(etype, f)
def add_event_listener(self, event_type: Union[EventType, ChainEventType], handler: Callable,
priority: int = DEFAULT_EVENT_PRIORITY):
"""
注册事件处理器,将处理器添加到对应事件订阅列表中
:param event_type: 事件类型 (EventType 或 ChainEventType)
:param handler: 处理器
:param priority: 可选,链式事件的优先级,默认为 10,广播事件不需要优先级
"""
with self.__lock:
handler_identifier = self.__get_handler_identifier(handler)
if isinstance(event_type, ChainEventType):
# 链式事件,按优先级排序
if event_type not in self.__chain_subscribers:
self.__chain_subscribers[event_type] = {}
handlers = self.__chain_subscribers[event_type]
if handler_identifier in handlers:
handlers.pop(handler_identifier)
else:
logger.debug(
f"Subscribed to chain event: {event_type.value}, "
f"Priority: {priority} - {handler_identifier}")
handlers[handler_identifier] = (priority, handler)
# 根据优先级排序
self.__chain_subscribers[event_type] = dict(
sorted(self.__chain_subscribers[event_type].items(), key=lambda x: x[1][0])
)
else:
# 广播事件
if event_type not in self.__broadcast_subscribers:
self.__broadcast_subscribers[event_type] = {}
handlers = self.__broadcast_subscribers[event_type]
if handler_identifier in handlers:
handlers.pop(handler_identifier)
else:
logger.debug(f"Subscribed to broadcast event: {event_type.value} - {handler_identifier}")
handlers[handler_identifier] = handler
def remove_event_listener(self, event_type: Union[EventType, ChainEventType], handler: Callable):
"""
移除事件处理器,将处理器从对应事件的订阅列表中删除
:param event_type: 事件类型 (EventType 或 ChainEventType)
:param handler: 要移除的处理器
"""
with self.__lock:
handler_identifier = self.__get_handler_identifier(handler)
if isinstance(event_type, ChainEventType) and event_type in self.__chain_subscribers:
self.__chain_subscribers[event_type].pop(handler_identifier, None)
logger.debug(f"Unsubscribed from chain event: {event_type.value} - {handler_identifier}")
elif event_type in self.__broadcast_subscribers:
self.__broadcast_subscribers[event_type].pop(handler_identifier, None)
logger.debug(f"Unsubscribed from broadcast event: {event_type.value} - {handler_identifier}")
def disable_event_handler(self, target: Union[Callable, type]):
"""
禁用指定的事件处理器或事件处理器类
:param target: 处理器函数或类
"""
identifier = self.__get_handler_identifier(target)
if identifier in self.__disabled_handlers or identifier in self.__disabled_classes:
return
if isinstance(target, type):
self.__disabled_classes.add(identifier)
logger.debug(f"Disabled event handler class - {identifier}")
else:
self.__disabled_handlers.add(identifier)
logger.debug(f"Disabled event handler - {identifier}")
def enable_event_handler(self, target: Union[Callable, type]):
"""
启用指定的事件处理器或事件处理器类
:param target: 处理器函数或类
"""
identifier = self.__get_handler_identifier(target)
if isinstance(target, type):
self.__disabled_classes.discard(identifier)
logger.debug(f"Enabled event handler class - {identifier}")
else:
self.__disabled_handlers.discard(identifier)
logger.debug(f"Enabled event handler - {identifier}")
def visualize_handlers(self) -> List[Dict]:
"""
可视化所有事件处理器,包括是否被禁用的状态
:return: 处理器列表,包含事件类型、处理器标识符、优先级(如果有)和状态
"""
handler_info = []
# 统一处理广播事件和链式事件
for event_type, subscribers in {**self.__broadcast_subscribers, **self.__chain_subscribers}.items():
for handler_data in subscribers.values():
if isinstance(handler_data, tuple):
priority, handler = handler_data
else:
priority = None
handler = handler_data
# 获取处理器的唯一标识符
handler_id = self.__get_handler_identifier(handler)
# 检查处理器的启用状态
status = "enabled" if self.__is_handler_enabled(handler) else "disabled"
# 构建处理器信息字典
handler_dict = {
"event_type": event_type.value,
"handler_identifier": handler_id,
"status": status
}
if priority is not None:
handler_dict["priority"] = priority
handler_info.append(handler_dict)
return handler_info
@classmethod
@lru_cache(maxsize=1000)
def __get_handler_identifier(cls, target: Union[Callable, type]) -> Optional[str]:
"""
获取处理器或处理器类的唯一标识符,包括模块名和类名/方法名
:param target: 处理器函数或类
:return: 唯一标识符
"""
# 统一使用 inspect.getmodule 来获取模块名
module = inspect.getmodule(target)
module_name = module.__name__ if module else "unknown_module"
# 使用 __qualname__ 获取目标的限定名
qualname = target.__qualname__
return f"{module_name}.{qualname}"
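The identifier scheme above (defining module plus `__qualname__`) can be sketched standalone. `handler_identifier` and `Demo` below are illustrative names, not part of the source:

```python
import inspect
from typing import Callable, Union


def handler_identifier(target: Union[Callable, type]) -> str:
    # Combine the defining module with the qualified name, mirroring
    # the module_name + __qualname__ scheme used above
    module = inspect.getmodule(target)
    module_name = module.__name__ if module else "unknown_module"
    return f"{module_name}.{target.__qualname__}"


class Demo:
    def method(self):
        pass


# A method's identifier embeds both its class and method name
ident = handler_identifier(Demo.method)
```

Because the identifier is a plain string, it works as a stable dictionary key even when the same method is re-registered after a reload.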
@classmethod
@lru_cache(maxsize=1000)
def __get_class_from_callable(cls, handler: Callable) -> Optional[str]:
"""
获取可调用对象所属类的唯一标识符
:param handler: 可调用对象(函数、方法等)
:return: 类的唯一标识符
"""
# 对于绑定方法,通过 __self__.__class__ 获取类
if inspect.ismethod(handler) and hasattr(handler, "__self__"):
return cls.__get_handler_identifier(handler.__self__.__class__)
# 对于类实例(实现了 __call__ 方法)
if not inspect.isfunction(handler) and hasattr(handler, "__call__"):
handler_cls = handler.__class__
return cls.__get_handler_identifier(handler_cls)
# 对于未绑定方法、静态方法、类方法,使用 __qualname__ 提取类信息
qualname_parts = handler.__qualname__.split(".")
if len(qualname_parts) > 1:
class_name = ".".join(qualname_parts[:-1])
module = inspect.getmodule(handler)
module_name = module.__name__ if module else "unknown_module"
return f"{module_name}.{class_name}"
def __is_handler_enabled(self, handler: Callable) -> bool:
"""
检查处理器是否已启用(没有被禁用)
:param handler: 处理器函数
:return: 如果处理器启用则返回 True否则返回 False
"""
# 获取处理器的唯一标识符
handler_id = self.__get_handler_identifier(handler)
# 获取处理器所属类的唯一标识符
class_id = self.__get_class_from_callable(handler)
# 检查处理器或类是否被禁用,只要其中之一被禁用则返回 False
if handler_id in self.__disabled_handlers or (class_id is not None and class_id in self.__disabled_classes):
return False
return True
def __trigger_chain_event(self, event: Event) -> Optional[Event]:
"""
触发链式事件,按顺序调用订阅的处理器,并记录处理耗时
"""
logger.debug(f"Triggering synchronous chain event: {event}")
dispatch = self.__dispatch_chain_event(event)
return event if dispatch else None
def __trigger_broadcast_event(self, event: Event):
"""
触发广播事件,将事件插入到优先级队列中
:param event: 要处理的事件对象
"""
logger.debug(f"Triggering broadcast event: {event}")
self.__event_queue.put((event.priority, event))
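The queue above stores `(priority, event)` tuples. With the stdlib `queue.PriorityQueue`, two items with equal priority fall back to comparing the second tuple element, which fails for non-comparable objects; a monotonic counter tiebreaker is the usual safeguard. Whether the project's `Event` defines ordering is not shown here, so this is a sketch of the stdlib pattern, with `Task` as a hypothetical stand-in:

```python
import itertools
from queue import PriorityQueue


class Task:
    # Deliberately not comparable, like a typical event object
    def __init__(self, name):
        self.name = name


q = PriorityQueue()
counter = itertools.count()


def put(priority, task):
    # The counter breaks ties so two equal-priority items never
    # compare the (non-comparable) Task objects themselves
    q.put((priority, next(counter), task))


put(10, Task("b"))
put(5, Task("a"))
put(10, Task("c"))

# Lower priority number first; equal priorities keep insertion order
order = [q.get()[2].name for _ in range(3)]
```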
def __dispatch_chain_event(self, event: Event) -> bool:
"""
同步方式调度链式事件,按优先级顺序逐个调用事件处理器,并记录每个处理器的处理时间
:param event: 要调度的事件对象
"""
handlers = self.__chain_subscribers.get(event.event_type, {})
if not handlers:
logger.debug(f"No handlers found for chain event: {event}")
return False
self.__log_event_lifecycle(event, "Started")
for handler_id, (priority, handler) in handlers.items():
start_time = time.time()
self.__safe_invoke_handler(handler, event)
logger.debug(
f"{self.__get_handler_identifier(handler)} (Priority: {priority}), "
f"completed in {time.time() - start_time:.3f}s for event: {event}"
)
self.__log_event_lifecycle(event, "Completed")
return True
def __dispatch_broadcast_event(self, event: Event):
"""
异步方式调度广播事件,通过线程池逐个调用事件处理器
:param event: 要调度的事件对象
"""
handlers = self.__broadcast_subscribers.get(event.event_type, {})
if not handlers:
logger.debug(f"No handlers found for broadcast event: {event}")
return
for handler_id, handler in handlers.items():
self.__executor.submit(self.__safe_invoke_handler, handler, event)
def __safe_invoke_handler(self, handler: Callable, event: Event):
"""
调用处理器,处理链式或广播事件
:param handler: 处理器
:param event: 事件对象
"""
if not self.__is_handler_enabled(handler):
logger.debug(f"Handler {self.__get_handler_identifier(handler)} is disabled. Skipping execution")
return
# 根据事件类型判断是否需要深复制
is_broadcast_event = isinstance(event.event_type, EventType)
event_to_process = copy.deepcopy(event) if is_broadcast_event else event
names = handler.__qualname__.split(".")
class_name, method_name = names[0], names[1]
try:
from app.core.plugin import PluginManager
if class_name in PluginManager().get_plugin_ids():
# 定义一个插件调用函数
def plugin_callable():
PluginManager().run_plugin_method(class_name, method_name, event_to_process)
if is_broadcast_event:
self.__executor.submit(plugin_callable)
else:
plugin_callable()
else:
# 获取全局对象或模块类的实例
class_obj = self.__get_class_instance(class_name)
if class_obj and hasattr(class_obj, method_name):
method = getattr(class_obj, method_name)
if is_broadcast_event:
self.__executor.submit(method, event_to_process)
else:
method(event_to_process)
except Exception as e:
self.__handle_event_error(event, handler, e)
@staticmethod
def __get_class_instance(class_name: str):
"""
根据类名获取类实例,首先检查全局变量中是否存在该类,如果不存在则尝试动态导入模块。
:param class_name: 类的名称
:return: 类的实例
"""
# 检查类是否在全局变量中
if class_name in globals():
try:
class_obj = globals()[class_name]()
return class_obj
except Exception as e:
logger.error(f"事件处理出错:创建全局类实例出错:{str(e)} - {traceback.format_exc()}")
return None
# 如果类不在全局变量中,尝试动态导入模块并创建实例
try:
# 导入模块除了插件只有chain能响应事件
if not class_name.endswith("Chain"):
logger.debug(f"事件处理出错:无效的 Chain 类名: {class_name},类名必须以 'Chain' 结尾")
return None
module_name = f"app.chain.{class_name[:-5].lower()}"
module = importlib.import_module(module_name)
if hasattr(module, class_name):
class_obj = getattr(module, class_name)()
return class_obj
else:
logger.debug(f"事件处理出错:模块 {module_name} 中没有找到类 {class_name}")
except Exception as e:
logger.error(f"事件处理出错:{str(e)} - {traceback.format_exc()}")
return None
def __broadcast_consumer_loop(self):
"""
持续从队列中提取事件的后台广播消费者线程
"""
jitter_factor = 0.1
rate_limiter = ExponentialBackoffRateLimiter(base_wait=INITIAL_EVENT_QUEUE_IDLE_TIMEOUT_SECONDS,
max_wait=MAX_EVENT_QUEUE_IDLE_TIMEOUT_SECONDS,
backoff_factor=2.0,
source="BroadcastConsumer",
enable_logging=False)
while self.__event.is_set():
try:
priority, event = self.__event_queue.get(timeout=rate_limiter.current_wait)
rate_limiter.reset()
self.__dispatch_broadcast_event(event)
except Empty:
rate_limiter.current_wait = rate_limiter.current_wait * random.uniform(1, 1 + jitter_factor)
rate_limiter.trigger_limit()
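`ExponentialBackoffRateLimiter` is project-specific; a minimal stand-in shows the idle-polling pattern the consumer loop relies on, assuming geometric growth capped at a maximum and a reset on activity (names below are illustrative):

```python
import random


class BackoffTimer:
    # Minimal stand-in for the rate limiter used above: the wait grows
    # geometrically on idle polls and snaps back when an event arrives
    def __init__(self, base_wait=1.0, max_wait=30.0, backoff_factor=2.0):
        self.base_wait = base_wait
        self.max_wait = max_wait
        self.backoff_factor = backoff_factor
        self.current_wait = base_wait

    def trigger_limit(self):
        # Queue was empty: back off, capped at max_wait
        self.current_wait = min(self.current_wait * self.backoff_factor, self.max_wait)

    def reset(self):
        # An event arrived: poll quickly again
        self.current_wait = self.base_wait


def jittered(wait, jitter_factor=0.1):
    # Multiply by a random factor in [1, 1 + jitter_factor] so idle
    # consumers do not wake up in lockstep
    return wait * random.uniform(1, 1 + jitter_factor)
```

The jitter step matches the `random.uniform(1, 1 + jitter_factor)` multiplication in the loop above; without it, many idle consumers sharing the same base wait would all retry simultaneously.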
@staticmethod
def __log_event_lifecycle(event: Event, stage: str):
"""
记录事件的生命周期日志
"""
logger.debug(f"{stage} - {event}")
def __handle_event_error(self, event: Event, handler: Callable, e: Exception):
"""
全局错误处理器,用于处理事件处理中的异常
"""
logger.error(f"事件处理出错:{str(e)} - {traceback.format_exc()}")
names = handler.__qualname__.split(".")
class_name, method_name = names[0], names[1]
self.__messagehelper.put(title=f"{event.event_type} 事件处理出错",
message=f"{class_name}.{method_name}{str(e)}",
role="system")
self.send_event(
EventType.SystemError,
{
"type": "event",
"event_type": event.event_type,
"event_handle": f"{class_name}.{method_name}",
"error": str(e),
"traceback": traceback.format_exc()
}
)
def register(self, etype: Union[EventType, ChainEventType, List[Union[EventType, ChainEventType]], type]):
"""
事件注册装饰器,用于将函数注册为事件的处理器
:param etype:
- 单个事件类型成员 (如 EventType.MetadataScrape, ChainEventType.PluginAction)
- 事件类型类 (EventType, ChainEventType)
- 或事件类型成员的列表
"""
def decorator(f: Callable):
# 将输入的事件类型统一转换为列表格式
if isinstance(etype, list):
event_list = etype # 传入的已经是列表,直接使用
else:
event_list = [etype] # 不是列表则包裹成单一元素的列表
# 遍历列表,处理每个事件类型
for event in event_list:
if isinstance(event, (EventType, ChainEventType)):
self.add_event_listener(event, f)
elif isinstance(event, type) and issubclass(event, (EventType, ChainEventType)):
# 如果是 EventType 或 ChainEventType 类,提取该类中的所有成员
for et in event.__members__.values():
self.add_event_listener(et, f)
else:
raise ValueError(f"无效的事件类型: {event}")
return f
return decorator
class Event(object):
"""
事件对象
"""
def __init__(self, event_type=None):
# 事件类型
self.event_type = event_type
# 字典用于保存具体的事件数据
self.event_data = {}
# 实例引用,用于注册事件
# 全局实例定义
eventmanager = EventManager()


@@ -1,9 +1,12 @@
import re
import traceback
import zhconv
import anitopy
from app.core.meta.customization import CustomizationMatcher
from app.core.meta.metabase import MetaBase
from app.core.meta.releasegroup import ReleaseGroupsMatcher
from app.log import logger
from app.utils.string import StringUtils
from app.schemas.types import MediaType
@@ -13,7 +16,7 @@ class MetaAnime(MetaBase):
识别动漫
"""
_anime_no_words = ['CHS&CHT', 'MP4', 'GB MP4', 'WEB-DL']
_name_nostring_re = r"S\d{2}\s*-\s*S\d{2}|S\d{2}|\s+S\d{1,2}|EP?\d{2,4}\s*-\s*EP?\d{2,4}|EP?\d{2,4}|\s+EP?\d{1,4}"
_name_nostring_re = r"S\d{2}\s*-\s*S\d{2}|S\d{2}|\s+S\d{1,2}|EP?\d{2,4}\s*-\s*EP?\d{2,4}|EP?\d{2,4}|\s+EP?\d{1,4}|\s+GB"
def __init__(self, title: str, subtitle: str = None, isfile: bool = False):
super().__init__(title, subtitle, isfile)
@@ -29,8 +32,6 @@ class MetaAnime(MetaBase):
if anitopy_info:
# 名称
name = anitopy_info.get("anime_title")
if name and name.find("/") != -1:
name = name.split("/")[-1].strip()
if not name or name in self._anime_no_words or (len(name) < 5 and not StringUtils.is_chinese(name)):
anitopy_info = anitopy.parse("[ANIME]" + title)
if anitopy_info:
@@ -41,28 +42,45 @@ class MetaAnime(MetaBase):
name = name_match.group(1).strip()
# 拆份中英文名称
if name:
lastword_type = ""
for word in name.split():
if not word:
continue
if word.endswith(']'):
word = word[:-1]
if word.isdigit():
if lastword_type == "cn":
self.cn_name = "%s %s" % (self.cn_name or "", word)
elif lastword_type == "en":
self.en_name = "%s %s" % (self.en_name or "", word)
elif StringUtils.is_chinese(word):
self.cn_name = "%s %s" % (self.cn_name or "", word)
lastword_type = "cn"
else:
self.en_name = "%s %s" % (self.en_name or "", word)
lastword_type = "en"
_split_flag = True
# 按/拆分中英文
if name.find("/") != -1:
names = name.split("/")
if StringUtils.is_chinese(names[0]):
self.cn_name = names[0]
if len(names) > 1:
self.en_name = names[1]
_split_flag = False
elif StringUtils.is_chinese(names[-1]):
self.cn_name = names[-1]
if len(names) > 1:
self.en_name = names[0]
_split_flag = False
else:
name = names[-1]
# 拆分中英文
if _split_flag:
lastword_type = ""
for word in name.split():
if not word:
continue
if word.endswith(']'):
word = word[:-1]
if word.isdigit():
if lastword_type == "cn":
self.cn_name = "%s %s" % (self.cn_name or "", word)
elif lastword_type == "en":
self.en_name = "%s %s" % (self.en_name or "", word)
elif StringUtils.is_chinese(word):
self.cn_name = "%s %s" % (self.cn_name or "", word)
lastword_type = "cn"
else:
self.en_name = "%s %s" % (self.en_name or "", word)
lastword_type = "en"
if self.cn_name:
_, self.cn_name, _, _, _, _ = StringUtils.get_keyword(self.cn_name)
if self.cn_name:
self.cn_name = re.sub(r'%s' % self._name_nostring_re, '', self.cn_name, flags=re.IGNORECASE).strip()
self.cn_name = zhconv.convert(self.cn_name, "zh-hans")
if self.en_name:
self.en_name = re.sub(r'%s' % self._name_nostring_re, '', self.en_name, flags=re.IGNORECASE).strip().title()
self._name = StringUtils.str_title(self.en_name)
@@ -117,7 +135,7 @@ class MetaAnime(MetaBase):
else:
self.total_episode = 1
except Exception as err:
print(str(err))
logger.debug(f"解析集数失败:{str(err)} - {traceback.format_exc()}")
self.begin_episode = None
self.end_episode = None
self.type = MediaType.TV
@@ -162,7 +180,7 @@ class MetaAnime(MetaBase):
if not self.type:
self.type = MediaType.TV
except Exception as e:
print(str(e))
logger.error(f"解析动漫信息失败:{str(e)} - {traceback.format_exc()}")
@staticmethod
def __prepare_title(title: str):


@@ -1,9 +1,11 @@
import traceback
from dataclasses import dataclass, asdict
from typing import Union, Optional, List, Self
import cn2an
import regex as re
from app.log import logger
from app.utils.string import StringUtils
from app.schemas.types import MediaType
@@ -65,16 +67,18 @@ class MetaBase(object):
# 副标题解析
_subtitle_flag = False
_title_episodel_re = r"Episode\s+(\d{1,4})"
_subtitle_season_re = r"(?<![全共]\s*)[第\s]+([0-9一二三四五六七八九十S\-]+)\s*季(?!\s*[全共])"
_subtitle_season_all_re = r"[全共]\s*([0-9一二三四五六七八九十]+)\s*季|([0-9一二三四五六七八九十]+)\s*季\s*全"
_subtitle_episode_re = r"(?<![全共]\s*)[第\s]+([0-9一二三四五六七八九十百零EP\-]+)\s*[集话話期](?!\s*[全共])"
_subtitle_episode_all_re = r"([0-9一二三四五六七八九十百零]+)\s*\s*全|[全共]\s*([0-9一二三四五六七八九十百零]+)\s*[集话話期]"
_subtitle_episode_re = r"(?<![全共]\s*)[第\s]+([0-9一二三四五六七八九十百零EP]+)\s*[集话話期](?!\s*[全共])"
_subtitle_episode_between_re = r"[第]*\s*([0-9一二三四五六七八九十百零]+)\s*[集话話期幕]?\s*-\s*第*\s*([0-9一二三四五六七八九十百零]+)\s*[集话話期]"
_subtitle_episode_all_re = r"([0-9一二三四五六七八九十百零]+)\s*集\s*全|[全共]\s*([0-9一二三四五六七八九十百零]+)\s*[集话話期幕]"
def __init__(self, title: str, subtitle: str = None, isfile: bool = False):
if not title:
return
self.org_string = title
self.subtitle = subtitle
self.org_string = title.strip() if title else None
self.subtitle = subtitle.strip() if subtitle else None
self.isfile = isfile
@property
@@ -108,7 +112,39 @@ class MetaBase(object):
if not title_text:
return
title_text = f" {title_text} "
if re.search(r'[全第季集话話期]', title_text, re.IGNORECASE):
if re.search(r"%s" % self._title_episodel_re, title_text, re.IGNORECASE):
episode_str = re.search(r'%s' % self._title_episodel_re, title_text, re.IGNORECASE)
if episode_str:
try:
episode = int(episode_str.group(1))
except Exception as err:
logger.debug(f'识别集失败:{str(err)} - {traceback.format_exc()}')
return
if episode >= 10000:
return
if self.begin_episode is None:
self.begin_episode = episode
self.total_episode = 1
self.type = MediaType.TV
self._subtitle_flag = True
elif re.search(r'[全第季集话話期幕]', title_text, re.IGNORECASE):
# 全x季 x季全
season_all_str = re.search(r"%s" % self._subtitle_season_all_re, title_text, re.IGNORECASE)
if season_all_str:
season_all = season_all_str.group(1)
if not season_all:
season_all = season_all_str.group(2)
if season_all and self.begin_season is None and self.begin_episode is None:
try:
self.total_season = int(cn2an.cn2an(season_all.strip(), mode='smart'))
except Exception as err:
logger.debug(f'识别季失败:{str(err)} - {traceback.format_exc()}')
return
self.begin_season = 1
self.end_season = self.total_season
self.type = MediaType.TV
self._subtitle_flag = True
return
# 第x季
season_str = re.search(r'%s' % self._subtitle_season_re, title_text, re.IGNORECASE)
if season_str:
@@ -127,7 +163,11 @@ class MetaBase(object):
else:
begin_season = int(cn2an.cn2an(seasons, mode='smart'))
except Exception as err:
print(str(err))
logger.debug(f'识别季失败:{str(err)} - {traceback.format_exc()}')
return
if begin_season and begin_season > 100:
return
if end_season and end_season > 100:
return
if self.begin_season is None and isinstance(begin_season, int):
self.begin_season = begin_season
@@ -140,6 +180,37 @@ class MetaBase(object):
self.total_season = (self.end_season - self.begin_season) + 1
self.type = MediaType.TV
self._subtitle_flag = True
# 第x-x集 第x集-x集
episode_between_str = re.search(r'%s' % self._subtitle_episode_between_re, title_text, re.IGNORECASE)
if episode_between_str:
episodes = episode_between_str.groups()
if episodes:
begin_episode = episodes[0]
end_episode = episodes[1]
else:
return
try:
begin_episode = int(cn2an.cn2an(begin_episode.strip(), mode='smart'))
end_episode = int(cn2an.cn2an(end_episode.strip(), mode='smart'))
except Exception as err:
logger.debug(f'识别集失败:{str(err)} - {traceback.format_exc()}')
return
if begin_episode and begin_episode >= 10000:
return
if end_episode and end_episode >= 10000:
return
if self.begin_episode is None and isinstance(begin_episode, int):
self.begin_episode = begin_episode
self.total_episode = 1
if self.begin_episode is not None \
and self.end_episode is None \
and isinstance(end_episode, int) \
and end_episode != self.begin_episode:
self.end_episode = end_episode
self.total_episode = (self.end_episode - self.begin_episode) + 1
self.type = MediaType.TV
self._subtitle_flag = True
return
# 第x集
episode_str = re.search(r'%s' % self._subtitle_episode_re, title_text, re.IGNORECASE)
if episode_str:
@@ -158,7 +229,11 @@ class MetaBase(object):
else:
begin_episode = int(cn2an.cn2an(episodes, mode='smart'))
except Exception as err:
print(str(err))
logger.debug(f'识别集失败:{str(err)} - {traceback.format_exc()}')
return
if begin_episode and begin_episode >= 10000:
return
if end_episode and end_episode >= 10000:
return
if self.begin_episode is None and isinstance(begin_episode, int):
self.begin_episode = begin_episode
@@ -171,6 +246,7 @@ class MetaBase(object):
self.total_episode = (self.end_episode - self.begin_episode) + 1
self.type = MediaType.TV
self._subtitle_flag = True
return
# x集全
episode_all_str = re.search(r'%s' % self._subtitle_episode_all_re, title_text, re.IGNORECASE)
if episode_all_str:
@@ -181,28 +257,13 @@ class MetaBase(object):
try:
self.total_episode = int(cn2an.cn2an(episode_all.strip(), mode='smart'))
except Exception as err:
print(str(err))
logger.debug(f'识别集失败:{str(err)} - {traceback.format_exc()}')
return
self.begin_episode = None
self.end_episode = None
self.type = MediaType.TV
self._subtitle_flag = True
# 全x季 x季全
season_all_str = re.search(r"%s" % self._subtitle_season_all_re, title_text, re.IGNORECASE)
if season_all_str:
season_all = season_all_str.group(1)
if not season_all:
season_all = season_all_str.group(2)
if season_all and self.begin_season is None and self.begin_episode is None:
try:
self.total_season = int(cn2an.cn2an(season_all.strip(), mode='smart'))
except Exception as err:
print(str(err))
return
self.begin_season = 1
self.end_season = self.total_season
self.type = MediaType.TV
self._subtitle_flag = True
return
@property
def season(self) -> str:
@@ -230,7 +291,7 @@ class MetaBase(object):
return self.season
else:
return ""
@property
def season_seq(self) -> str:
"""
@@ -273,7 +334,7 @@ class MetaBase(object):
str(self.end_episode).rjust(2, "0"))
else:
return ""
@property
def episode_list(self) -> List[int]:
"""
@@ -471,7 +532,7 @@ class MetaBase(object):
self.end_episode = end
if self.begin_episode and self.end_episode:
self.total_episode = (self.end_episode - self.begin_episode) + 1
def merge(self, meta: Self):
"""
合并Meta信息
@@ -489,13 +550,13 @@ class MetaBase(object):
self.year = meta.year
# 季
if (self.type == MediaType.TV
and not self.season):
and self.begin_season is None):
self.begin_season = meta.begin_season
self.end_season = meta.end_season
self.total_season = meta.total_season
# 开始集
if (self.type == MediaType.TV
and not self.episode):
and self.begin_episode is None):
self.begin_episode = meta.begin_episode
self.end_episode = meta.end_episode
self.total_episode = meta.total_episode


@@ -1,13 +1,15 @@
import re
from pathlib import Path
from typing import Optional
from Pinyin2Hanzi import is_pinyin
from app.core.config import settings
from app.core.meta.customization import CustomizationMatcher
from app.core.meta.metabase import MetaBase
from app.core.meta.releasegroup import ReleaseGroupsMatcher
from app.schemas.types import MediaType
from app.utils.string import StringUtils
from app.utils.tokens import Tokens
from app.schemas.types import MediaType
class MetaVideo(MetaBase):
@@ -24,14 +26,14 @@ class MetaVideo(MetaBase):
_source = ""
_effect = []
# 正则式区
_season_re = r"S(\d{2})|^S(\d{1,2})$|S(\d{1,2})E"
_season_re = r"S(\d{3})|^S(\d{1,3})$|S(\d{1,3})E"
_episode_re = r"EP?(\d{2,4})$|^EP?(\d{1,4})$|^S\d{1,2}EP?(\d{1,4})$|S\d{2}EP?(\d{2,4})"
_part_re = r"(^PART[0-9ABI]{0,2}$|^CD[0-9]{0,2}$|^DVD[0-9]{0,2}$|^DISK[0-9]{0,2}$|^DISC[0-9]{0,2}$)"
_roman_numerals = r"^(?=[MDCLXVI])M*(C[MD]|D?C{0,3})(X[CL]|L?X{0,3})(I[XV]|V?I{0,3})$"
_source_re = r"^BLURAY$|^HDTV$|^UHDTV$|^HDDVD$|^WEBRIP$|^DVDRIP$|^BDRIP$|^BLU$|^WEB$|^BD$|^HDRip$"
_effect_re = r"^REMUX$|^UHD$|^SDR$|^HDR\d*$|^DOLBY$|^DOVI$|^DV$|^3D$|^REPACK$"
_source_re = r"^BLURAY$|^HDTV$|^UHDTV$|^HDDVD$|^WEBRIP$|^DVDRIP$|^BDRIP$|^BLU$|^WEB$|^BD$|^HDRip$|^REMUX$|^UHD$|^REPACK$"
_effect_re = r"^SDR$|^HDR\d*$|^DOLBY$|^DOVI$|^DV$|^3D$"
_resources_type_re = r"%s|%s" % (_source_re, _effect_re)
_name_no_begin_re = r"^\[.+?]"
_name_no_begin_re = r"^[\[【].+?[\]】]"
_name_no_chinese_re = r".*版|.*字幕"
_name_se_words = ['共', '第', '季', '集', '话', '話', '期']
_name_movie_words = ['剧场版', '劇場版', '电影版', '電影版']
@@ -39,19 +41,25 @@ class MetaVideo(MetaBase):
r"|HBO$|\s+HBO|\d{1,2}th|\d{1,2}bit|NETFLIX|AMAZON|IMAX|^3D|\s+3D|^BBC\s+|\s+BBC|BBC$|DISNEY\+?|XXX|\s+DC$" \
r"|[第\s共]+[0-9一二三四五六七八九十\-\s]+季" \
r"|[第\s共]+[0-9一二三四五六七八九十百零\-\s]+[集话話]" \
r"|连载|日剧|美剧|电视剧|动画片|动漫|欧美|西德|日韩|超高清|高清|蓝光|翡翠台|梦幻天堂·龙网|★?\d*月?新番" \
r"|最终季|合集|[多中国英葡法俄日韩德意西印泰台港粤双文语简繁体特效内封官译外挂]+字幕|版本|出品|台版|港版|\w+字幕组" \
r"|连载|日剧|美剧|电视剧|动画片|动漫|欧美|西德|日韩|超高清|高清|无水印|下载|蓝光|翡翠台|梦幻天堂·龙网|★?\d*月?新番" \
r"|最终季|合集|[多中国英葡法俄日韩德意西印泰台港粤双文语简繁体特效内封官译外挂]+字幕|版本|出品|台版|港版|\w+字幕组|\w+字幕社" \
r"|未删减版|UNCUT$|UNRATE$|WITH EXTRAS$|RERIP$|SUBBED$|PROPER$|REPACK$|SEASON$|EPISODE$|Complete$|Extended$|Extended Version$" \
r"|S\d{2}\s*-\s*S\d{2}|S\d{2}|\s+S\d{1,2}|EP?\d{2,4}\s*-\s*EP?\d{2,4}|EP?\d{2,4}|\s+EP?\d{1,4}" \
r"|CD[\s.]*[1-9]|DVD[\s.]*[1-9]|DISK[\s.]*[1-9]|DISC[\s.]*[1-9]" \
r"|[248]K|\d{3,4}[PIX]+" \
r"|CD[\s.]*[1-9]|DVD[\s.]*[1-9]|DISK[\s.]*[1-9]|DISC[\s.]*[1-9]"
r"|CD[\s.]*[1-9]|DVD[\s.]*[1-9]|DISK[\s.]*[1-9]|DISC[\s.]*[1-9]|\s+GB"
_resources_pix_re = r"^[SBUHD]*(\d{3,4}[PI]+)|\d{3,4}X(\d{3,4})"
_resources_pix_re2 = r"(^[248]+K)"
_video_encode_re = r"^[HX]26[45]$|^AVC$|^HEVC$|^VC\d?$|^MPEG\d?$|^Xvid$|^DivX$|^HDR\d*$"
_audio_encode_re = r"^DTS\d?$|^DTSHD$|^DTSHDMA$|^Atmos$|^TrueHD\d?$|^AC3$|^\dAudios?$|^DDP\d?$|^DD\d?$|^LPCM\d?$|^AAC\d?$|^FLAC\d?$|^HD\d?$|^MA\d?$"
def __init__(self, title: str, subtitle: str = None, isfile: bool = False):
"""
初始化
:param title: 标题,文件为去掉了后缀
:param subtitle: 副标题
:param isfile: 是否是文件名
"""
super().__init__(title, subtitle, isfile)
if not title:
return
@@ -59,13 +67,21 @@ class MetaVideo(MetaBase):
self._source = ""
self._effect = []
# 判断是否纯数字命名
title_path = Path(title)
if title_path.suffix.lower() in settings.RMT_MEDIAEXT \
and title_path.stem.isdigit() \
and len(title_path.stem) < 5:
self.begin_episode = int(title_path.stem)
if isfile \
and title.isdigit() \
and len(title) < 5:
self.begin_episode = int(title)
self.type = MediaType.TV
return
# 全名为Season xx 及 Sxx 直接返回
season_full_res = re.search(r"^Season\s+(\d{1,3})$|^S(\d{1,3})$", title)
if season_full_res:
self.type = MediaType.TV
season = season_full_res.group(1)
if season:
self.begin_season = int(season)
self.total_season = 1
return
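The season-folder short-circuit above is easy to isolate. This sketch covers both alternatives of the regex (`Season xx` and `Sxx`); `parse_season_token` is an illustrative name:

```python
import re
from typing import Optional


def parse_season_token(title: str) -> Optional[int]:
    # A name that is exactly "Season 2" or "S02" is a season folder:
    # there is nothing else in the token worth parsing
    m = re.search(r"^Season\s+(\d{1,3})$|^S(\d{1,3})$", title)
    if not m:
        return None
    season = m.group(1) or m.group(2)  # whichever alternative matched
    return int(season)
```

The anchors `^`/`$` matter: a title like `Season 2 Extras` must fall through to the full tokenizer instead.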
# 去掉名称中第1个[]的内容
title = re.sub(r'%s' % self._name_no_begin_re, "", title, count=1)
# 把xxxx-xxxx年份换成前一个年份常出现在季集上
@@ -130,12 +146,47 @@ class MetaVideo(MetaBase):
# 处理part
if self.part and self.part.upper() == "PART":
self.part = None
# 没有中文标题时,尝试从描述中获取中文名
if not self.cn_name and self.en_name and self.subtitle:
if self.__is_pinyin(self.en_name):
# 英文名是拼音
cn_name = self.__get_title_from_description(self.subtitle)
if cn_name and len(cn_name) == len(self.en_name.split()):
# 中文名和拼音单词数相同,认为是中文名
self.cn_name = cn_name
# 制作组/字幕组
self.resource_team = ReleaseGroupsMatcher().match(title=original_title) or None
# 自定义占位符
self.customization = CustomizationMatcher().match(title=original_title) or None
@staticmethod
def __get_title_from_description(description: str) -> Optional[str]:
"""
从描述中提取标题
"""
if not description:
return None
titles = re.split(r'[\s/|]+', description)
if StringUtils.is_chinese(titles[0]):
return titles[0]
return None
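`StringUtils.is_chinese` is project-specific; a rough stand-in using a CJK codepoint check is enough to show the description-splitting logic above (both function names below are illustrative):

```python
import re
from typing import Optional


def contains_chinese(text: str) -> bool:
    # Rough stand-in for StringUtils.is_chinese: any CJK ideograph present
    return re.search(r"[\u4e00-\u9fff]", text) is not None


def title_from_description(description: str) -> Optional[str]:
    # Split on whitespace, '/', or '|' and keep the first token only if
    # it looks like a Chinese title, mirroring the method above
    if not description:
        return None
    titles = re.split(r"[\s/|]+", description)
    if titles and contains_chinese(titles[0]):
        return titles[0]
    return None
```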
@staticmethod
def __is_pinyin(name_str: str) -> bool:
"""
判断是否拼音
"""
if not name_str:
return False
for n in name_str.lower().split():
if not is_pinyin(n):
return False
return True
def __fix_name(self, name: str):
"""
去掉名字中不需要的干扰字符
"""
if not name:
return name
name = re.sub(r'%s' % self._name_nostring_re, '', name,
@@ -157,6 +208,9 @@ class MetaVideo(MetaBase):
return name
def __init_name(self, token: str):
"""
识别名称
"""
if not token:
return
# 回收标题
@@ -250,6 +304,9 @@ class MetaVideo(MetaBase):
self._last_token_type = "enname"
def __init_part(self, token: str):
"""
识别Part
"""
if not self.name:
return
if not self.year \
@@ -270,9 +327,12 @@ class MetaVideo(MetaBase):
self.tokens.get_next()
self._last_token_type = "part"
self._continue_flag = False
self._stop_name_flag = False
# self._stop_name_flag = False
def __init_year(self, token: str):
"""
识别年份
"""
if not self.name:
return
if not token.isdigit():
@@ -295,6 +355,9 @@ class MetaVideo(MetaBase):
self._stop_name_flag = True
def __init_resource_pix(self, token: str):
"""
识别分辨率
"""
if not self.name:
return
re_res = re.findall(r"%s" % self._resources_pix_re, token, re.IGNORECASE)
@@ -331,6 +394,9 @@ class MetaVideo(MetaBase):
self.resource_pix = re_res.group(1).lower()
def __init_season(self, token: str):
"""
识别季
"""
re_res = re.findall(r"%s" % self._season_re, token, re.IGNORECASE)
if re_res:
self._last_token_type = "season"
@@ -380,6 +446,9 @@ class MetaVideo(MetaBase):
self.begin_season = 1
def __init_episode(self, token: str):
"""
识别集
"""
re_res = re.findall(r"%s" % self._episode_re, token, re.IGNORECASE)
if re_res:
self._last_token_type = "episode"
@@ -450,6 +519,9 @@ class MetaVideo(MetaBase):
self._last_token_type = "EPISODE"
def __init_resource_type(self, token):
"""
识别资源类型
"""
if not self.name:
return
source_res = re.search(r"(%s)" % self._source_re, token, re.IGNORECASE)
@@ -488,6 +560,9 @@ class MetaVideo(MetaBase):
self._last_token = effect.upper()
def __init_video_encode(self, token: str):
"""
识别视频编码
"""
if not self.name:
return
if not self.year \
@@ -528,6 +603,9 @@ class MetaVideo(MetaBase):
self.video_encode = f"{self.video_encode} 10bit"
def __init_audio_encode(self, token: str):
"""
识别音频编码
"""
if not self.name:
return
if not self.year \


@@ -71,7 +71,11 @@ class ReleaseGroupsMatcher(metaclass=Singleton):
"ultrahd": [],
"others": ['B(?:MDru|eyondHD|TN)', 'C(?:fandora|trlhd|MRG)', 'DON', 'EVO', 'FLUX', 'HONE(?:|yG)',
'N(?:oGroup|T(?:b|G))', 'PandaMoon', 'SMURF', 'T(?:EPES|aengoo|rollHD )'],
"anime": ['ANi', 'HYSUB', 'KTXP', 'LoliHouse', 'MCE', 'Nekomoe kissaten', '(?:Lilith|NC)-Raws', '织梦字幕组']
"anime": ['ANi', 'HYSUB', 'KTXP', 'LoliHouse', 'MCE', 'Nekomoe kissaten', 'SweetSub', 'MingY',
'(?:Lilith|NC)-Raws', '织梦字幕组', '枫叶字幕组', '猎户手抄部', '喵萌奶茶屋', '漫猫字幕社',
'霜庭云花Sub', '北宇治字幕组', '氢气烤肉架', '云歌字幕组', '萌樱字幕组', '极影字幕社',
'悠哈璃羽字幕社',
'❀拨雪寻春❀', '沸羊羊(?:制作|字幕组)', '(?:桜|樱)都字幕组']
}
def __init__(self):


@@ -4,6 +4,7 @@ import cn2an
import regex as re
from app.db.systemconfig_oper import SystemConfigOper
from app.log import logger
from app.schemas.types import SystemConfigKey
from app.utils.singleton import Singleton
@@ -13,7 +14,7 @@ class WordsMatcher(metaclass=Singleton):
def __init__(self):
self.systemconfig = SystemConfigOper()
def prepare(self, title: str) -> Tuple[str, List[str]]:
def prepare(self, title: str, custom_words: List[str] = None) -> Tuple[str, List[str]]:
"""
预处理标题,支持三种格式
1屏蔽词
@@ -22,9 +23,9 @@ class WordsMatcher(metaclass=Singleton):
"""
appley_words = []
# 读取自定义识别词
words: List[str] = self.systemconfig.get(SystemConfigKey.CustomIdentifiers) or []
words: List[str] = custom_words or self.systemconfig.get(SystemConfigKey.CustomIdentifiers) or []
for word in words:
if not word:
if not word or word.startswith("#"):
continue
try:
if word.count(" => ") and word.count(" && ") and word.count(" >> ") and word.count(" <> "):
@@ -52,17 +53,18 @@ class WordsMatcher(metaclass=Singleton):
strings = word.split(" <> ")
offsets = strings[1].split(" >> ")
strings[1] = offsets[0]
title, message, state = self.__episode_offset(title, strings[0], strings[1],
offsets[1])
title, message, state = self.__episode_offset(title, strings[0], strings[1], offsets[1])
else:
# 屏蔽词
if not word.strip():
continue
title, message, state = self.__replace_regex(title, word, "")
if state:
appley_words.append(word)
except Exception as err:
print(str(err))
logger.warn(f"自定义识别词 {word} 预处理标题失败:{str(err)} - 标题:{title}")
return title, appley_words
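The two simplest custom-word formats handled above (a bare block word, and `被替换词 => 替换词` replacement) can be sketched with `re.subn`; this omits the `&&` group and `<>`/`>>` episode-offset forms, and `apply_custom_words` is an illustrative name:

```python
import re
from typing import List, Tuple


def apply_custom_words(title: str, words: List[str]) -> Tuple[str, List[str]]:
    # "pattern => replacement" rewrites the title; a bare word is a block
    # word (replaced with ""); blank lines and "#" comments are skipped
    applied = []
    for word in words:
        if not word or word.startswith("#"):
            continue
        if " => " in word:
            pattern, replacement = word.split(" => ", 1)
        else:
            pattern, replacement = word, ""
        new_title, n = re.subn(pattern, replacement, title)
        if n:  # only record words that actually matched
            title = new_title
            applied.append(word)
    return title, applied
```

As in the original, the list of applied words is returned alongside the rewritten title so later stages can report which rules fired.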
@@ -77,7 +79,7 @@ class WordsMatcher(metaclass=Singleton):
else:
return re.sub(r'%s' % replaced, r'%s' % replace, title), "", True
except Exception as err:
print(str(err))
logger.warn(f"自定义识别词正则替换失败:{str(err)} - 标题:{title},被替换词:{replaced},替换词:{replace}")
return title, str(err), False
@staticmethod
@@ -129,5 +131,5 @@ class WordsMatcher(metaclass=Singleton):
title = re.sub(episode_offset_re, r'%s' % episode_num[1], title)
return title, "", True
except Exception as err:
print(str(err))
logger.warn(f"自定义识别词集数偏移失败:{str(err)} - 标题:{title},前定位词:{front},后定位词:{back},偏移量:{offset}")
return title, str(err), False


@@ -1,30 +1,34 @@
from pathlib import Path
from typing import Tuple
from typing import Tuple, List
import regex as re
from app.core.config import settings
from app.core.meta import MetaAnime, MetaVideo, MetaBase
from app.core.meta.words import WordsMatcher
from app.log import logger
from app.schemas.types import MediaType
def MetaInfo(title: str, subtitle: str = None) -> MetaBase:
def MetaInfo(title: str, subtitle: str = None, custom_words: List[str] = None) -> MetaBase:
"""
根据标题和副标题识别元数据
:param title: 标题、种子名、文件名
:param subtitle: 副标题、描述
:param custom_words: 自定义识别词列表
:return: MetaAnime、MetaVideo
"""
# 原标题
org_title = title
# 预处理标题
title, apply_words = WordsMatcher().prepare(title)
title, apply_words = WordsMatcher().prepare(title, custom_words=custom_words)
# 获取标题中媒体信息
title, metainfo = find_metainfo(title)
# 判断是否处理文件
if title and Path(title).suffix.lower() in settings.RMT_MEDIAEXT:
isfile = True
# 去掉后缀
title = Path(title).stem
else:
isfile = False
# 识别
@@ -35,9 +39,12 @@ def MetaInfo(title: str, subtitle: str = None) -> MetaBase:
meta.apply_words = apply_words or []
# 修正媒体信息
if metainfo.get('tmdbid'):
meta.tmdbid = metainfo['tmdbid']
try:
meta.tmdbid = int(metainfo['tmdbid'])
except ValueError as _:
logger.warn("tmdbid 必须是数字")
if metainfo.get('doubanid'):
meta.tmdbid = metainfo['doubanid']
meta.doubanid = metainfo['doubanid']
if metainfo.get('type'):
meta.type = metainfo['type']
if metainfo.get('begin_season'):
@@ -61,7 +68,7 @@ def MetaInfoPath(path: Path) -> MetaBase:
:param path: 路径
"""
# 文件元数据,不包含后缀
file_meta = MetaInfo(title=path.stem)
file_meta = MetaInfo(title=path.name)
# 上级目录元数据
dir_meta = MetaInfo(title=path.parent.name)
# 合并元数据
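The switch from `path.stem` to `path.name` matters because `MetaInfo` only treats the input as a file when its suffix appears in `RMT_MEDIAEXT`, and strips the suffix itself afterwards; passing `stem` would hide the extension from that check. A quick `pathlib` illustration (the extension set is a stand-in for `settings.RMT_MEDIAEXT`):

```python
from pathlib import Path

MEDIA_EXTS = {".mkv", ".mp4"}  # stand-in for settings.RMT_MEDIAEXT

p = Path("/media/Show.S01E02.mkv")
# name keeps the suffix, so the is-it-a-file check can see it
assert Path(p.name).suffix.lower() in MEDIA_EXTS
# stem has already dropped it, so the same check would wrongly fail
assert Path(p.stem).suffix.lower() not in MEDIA_EXTS
```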

View File

@@ -1,8 +1,11 @@
from typing import Generator, Optional
import traceback
from typing import Generator, Optional, Tuple, Any
from app.core.config import settings
from app.core.event import eventmanager
from app.helper.module import ModuleHelper
from app.log import logger
from app.schemas.types import EventType, ModuleType
from app.utils.object import ObjectUtils
from app.utils.singleton import Singleton
@@ -34,22 +37,54 @@ class ModuleManager(metaclass=Singleton):
for module in modules:
module_id = module.__name__
self._modules[module_id] = module
# 生成实例
_module = module()
# 初始化模块
if self.check_setting(_module.init_setting()):
# 通过模板开关控制加载
_module.init_module()
self._running_modules[module_id] = _module
logger.info(f"Module Loaded: {module_id}")
try:
# 生成实例
_module = module()
# 初始化模块
if self.check_setting(_module.init_setting()):
# 通过模板开关控制加载
_module.init_module()
self._running_modules[module_id] = _module
logger.info(f"Module Loaded: {module_id}")
except Exception as err:
logger.error(f"Load Module Error: {module_id} - {str(err)} - {traceback.format_exc()}", exc_info=True)
def stop(self):
"""
停止所有模块
"""
for _, module in self._running_modules.items():
logger.info("正在停止所有模块...")
for module_id, module in self._running_modules.items():
if hasattr(module, "stop"):
module.stop()
try:
module.stop()
logger.info(f"Module Stopped: {module_id}")
except Exception as err:
logger.error(f"Stop Module Error: {module_id} - {str(err)} - {traceback.format_exc()}", exc_info=True)
logger.info("所有模块停止完成")
def reload(self):
"""
重新加载所有模块
"""
self.stop()
self.load_modules()
eventmanager.send_event(etype=EventType.ModuleReload, data={})
def test(self, moduleid: str) -> Tuple[bool, str]:
"""
测试模块
"""
if moduleid not in self._running_modules:
return False, ""
module = self._running_modules[moduleid]
if hasattr(module, "test") \
and ObjectUtils.check_method(getattr(module, "test")):
result = module.test()
if not result:
return False, ""
return result
return True, "模块不支持测试"
@staticmethod
def check_setting(setting: Optional[tuple]) -> bool:
@@ -59,13 +94,26 @@ class ModuleManager(metaclass=Singleton):
if not setting:
return True
switch, value = setting
if getattr(settings, switch) and value is True:
option = getattr(settings, switch)
if not option:
return False
if option and value is True:
return True
if value in getattr(settings, switch):
if value in option:
return True
return False
def get_modules(self, method: str) -> Generator:
def get_running_module(self, module_id: str) -> Any:
"""
根据模块id获取模块运行实例
"""
if not module_id:
return None
if not self._running_modules:
return None
return self._running_modules.get(module_id)
def get_running_modules(self, method: str) -> Generator:
"""
获取实现了同一方法的模块列表
"""
@@ -75,3 +123,30 @@ class ModuleManager(metaclass=Singleton):
if hasattr(module, method) \
and ObjectUtils.check_method(getattr(module, method)):
yield module
def get_running_type_modules(self, module_type: ModuleType) -> Generator:
"""
获取指定类型的模块列表
"""
if not self._running_modules:
return []
for _, module in self._running_modules.items():
if hasattr(module, 'get_type') \
and module.get_type() == module_type:
yield module
def get_module(self, module_id: str) -> Any:
"""
根据模块id获取模块
"""
if not module_id:
return None
if not self._modules:
return None
return self._modules.get(module_id)
def get_modules(self) -> dict:
"""
获取模块列表
"""
return self._modules
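`check_setting` gates module loading on an optional `(switch, value)` tuple: the module loads when the named setting is truthy and the declared value is either `True` or contained in that setting. A self-contained sketch of the same rule, with a `SimpleNamespace` standing in for the real `settings` object:

```python
from types import SimpleNamespace
from typing import Optional, Tuple

settings = SimpleNamespace(MEDIASERVER="emby,jellyfin", DOWNLOADER="")

def check_setting(setting: Optional[Tuple[str, object]]) -> bool:
    """Mirror of ModuleManager.check_setting: load when the switch is set
    and the value is True or is contained in the switch's current value."""
    if not setting:
        return True          # no gate declared: always load
    switch, value = setting
    option = getattr(settings, switch)
    if not option:
        return False         # switch unset/empty: never load
    if value is True:
        return True          # module only requires the switch to be on
    return value in option   # e.g. "emby" in "emby,jellyfin"

assert check_setting(("MEDIASERVER", "emby")) is True
```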

File diff suppressed because it is too large Load Diff

View File

@@ -3,53 +3,167 @@ import hashlib
import hmac
import json
import os
import traceback
from datetime import datetime, timedelta
from typing import Any, Union, Optional
from typing import Any, Union, Annotated, Optional
import jwt
from Crypto.Cipher import AES
from Crypto.Util.Padding import pad
from fastapi import HTTPException, status, Depends
from fastapi.security import OAuth2PasswordBearer
from cryptography.fernet import Fernet
from fastapi import HTTPException, status, Security, Request, Response
from fastapi.security import OAuth2PasswordBearer, APIKeyHeader, APIKeyQuery, APIKeyCookie
from passlib.context import CryptContext
from app import schemas
from app.core.config import settings
from cryptography.fernet import Fernet
from app.log import logger
pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")
ALGORITHM = "HS256"
# Token认证
reusable_oauth2 = OAuth2PasswordBearer(
# OAuth2PasswordBearer 用于 JWT Token 认证
oauth2_scheme = OAuth2PasswordBearer(
tokenUrl=f"{settings.API_V1_STR}/login/access-token"
)
# RESOURCE TOKEN 通过 Cookie 认证
resource_token_cookie = APIKeyCookie(name=settings.PROJECT_NAME, auto_error=False, scheme_name="resource_token_cookie")
# API TOKEN 通过 QUERY 认证
api_token_query = APIKeyQuery(name="token", auto_error=False, scheme_name="api_token_query")
# API KEY 通过 Header 认证
api_key_header = APIKeyHeader(name="X-API-KEY", auto_error=False, scheme_name="api_key_header")
# API KEY 通过 QUERY 认证
api_key_query = APIKeyQuery(name="apikey", auto_error=False, scheme_name="api_key_query")
def create_access_token(
userid: Union[str, Any], username: str, super_user: bool = False,
expires_delta: timedelta = None
userid: Union[str, Any],
username: str,
super_user: bool = False,
expires_delta: Optional[timedelta] = None,
level: int = 1,
purpose: Optional[str] = "authentication"
) -> str:
if expires_delta:
"""
创建 JWT 访问令牌,包含用户 ID、用户名、是否为超级用户以及权限等级
:param userid: 用户的唯一标识符,通常是字符串或整数
:param username: 用户名,用于标识用户的账户名
:param super_user: 是否为超级用户,默认值为 False
:param expires_delta: 令牌的有效期时长,如果不提供则根据用途使用默认过期时间
:param level: 用户的权限级别,默认为 1
:param purpose: 令牌的用途,"authentication" 或 "resource"
:return: 编码后的 JWT 令牌字符串
:raises ValueError: 如果 expires_delta 为负数
"""
if purpose == "resource":
default_expire = timedelta(seconds=settings.RESOURCE_ACCESS_TOKEN_EXPIRE_SECONDS)
secret_key = settings.RESOURCE_SECRET_KEY
else:
default_expire = timedelta(minutes=settings.ACCESS_TOKEN_EXPIRE_MINUTES)
secret_key = settings.SECRET_KEY
if expires_delta is not None:
if expires_delta.total_seconds() <= 0:
raise ValueError("过期时间必须为正数")
expire = datetime.utcnow() + expires_delta
else:
expire = datetime.utcnow() + timedelta(
minutes=settings.ACCESS_TOKEN_EXPIRE_MINUTES
)
expire = datetime.utcnow() + default_expire
to_encode = {
"exp": expire,
"iat": datetime.utcnow(),
"sub": str(userid),
"username": username,
"super_user": super_user
"super_user": super_user,
"level": level,
"purpose": purpose
}
encoded_jwt = jwt.encode(to_encode, settings.SECRET_KEY, algorithm=ALGORITHM)
encoded_jwt = jwt.encode(to_encode, secret_key, algorithm=ALGORITHM)
return encoded_jwt
def verify_token(token: str = Depends(reusable_oauth2)) -> schemas.TokenPayload:
def __set_or_refresh_resource_token_cookie(request: Request, response: Response, payload: schemas.TokenPayload):
"""
设置资源令牌 Cookie
:param request: 包含请求相关的上下文数据
:param response: 用于在服务器响应时设置 Cookie
:param payload: 已通过身份验证的 TokenPayload 对象
"""
resource_token = request.cookies.get(settings.PROJECT_NAME)
if resource_token:
# 检查令牌剩余时间
try:
decoded_token = jwt.decode(resource_token, settings.RESOURCE_SECRET_KEY, algorithms=[ALGORITHM])
exp = decoded_token.get("exp")
if exp:
remaining_time = datetime.utcfromtimestamp(exp) - datetime.utcnow()
# 根据剩余时长提前刷新令牌
if remaining_time < timedelta(seconds=(settings.RESOURCE_ACCESS_TOKEN_EXPIRE_SECONDS / 3)):
raise jwt.ExpiredSignatureError
except jwt.PyJWTError:
logger.debug("Token error occurred, refreshing token")
except Exception as e:
logger.debug(f"Unexpected error occurred while decoding token: {e}")
else:
# 如果令牌有效且没有即将过期,则不需要刷新
return
# 创建新的资源访问令牌
resource_token_expires = timedelta(seconds=settings.RESOURCE_ACCESS_TOKEN_EXPIRE_SECONDS)
resource_token = create_access_token(
userid=payload.sub,
username=payload.username,
super_user=payload.super_user,
expires_delta=resource_token_expires,
level=payload.level,
purpose="resource"
)
# 设置会话级别的 HttpOnly Cookie
response.set_cookie(
key=settings.PROJECT_NAME,
value=resource_token,
httponly=True,
secure=request.url.scheme == "https", # 根据当前请求的协议设置 secure 属性
samesite="lax" # 不同浏览器对 "Strict" 的处理可能不同,设置 SameSite 为 "Lax",以平衡安全性和兼容性
)
def __verify_token(token: str, purpose: str = "authentication") -> schemas.TokenPayload:
"""
使用 JWT Token 进行身份认证并解析 Token 的内容
:param token: JWT 令牌
:param purpose: 期望的令牌用途,默认为 "authentication"
:return: 包含用户身份信息的 Token 负载数据
:raises HTTPException: 如果令牌无效或用途不匹配
"""
try:
if purpose == "resource":
secret_key = settings.RESOURCE_SECRET_KEY
else:
secret_key = settings.SECRET_KEY
if not token:
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN,
detail=f"{purpose} token not found"
)
payload = jwt.decode(
token, settings.SECRET_KEY, algorithms=[ALGORITHM]
token, secret_key, algorithms=[ALGORITHM]
)
token_payload = schemas.TokenPayload(**payload)
if token_payload.purpose != purpose:
raise jwt.InvalidTokenError("令牌用途不匹配")
return schemas.TokenPayload(**payload)
except (jwt.DecodeError, jwt.InvalidTokenError, jwt.ImmatureSignatureError):
raise HTTPException(
@@ -58,42 +172,98 @@ def verify_token(token: str = Depends(reusable_oauth2)) -> schemas.TokenPayload:
)
def get_token(token: str = None) -> str:
def verify_token(
request: Request,
response: Response,
token: str = Security(oauth2_scheme)
) -> schemas.TokenPayload:
"""
从请求URL中获取token
验证 JWT 令牌并自动处理 resource_token 写入
:param request: 请求对象,用于访问 Cookie 和请求信息
:param response: 响应对象,用于设置 Cookie
:param token: 从 Authorization 头部获取的 JWT 令牌
:return: 解析后的 TokenPayload
:raises HTTPException: 如果令牌无效或用途不匹配
"""
return token
# 验证并解析 JWT 认证令牌
payload = __verify_token(token=token, purpose="authentication")
# 如果没有 resource_token生成并写入到 Cookie
__set_or_refresh_resource_token_cookie(request, response, payload)
return payload
def get_apikey(apikey: str = None) -> str:
def verify_resource_token(
resource_token: str = Security(resource_token_cookie)
) -> schemas.TokenPayload:
"""
从请求URL中获取apikey
验证资源访问令牌(从 Cookie 中获取)
:param resource_token: 从 Cookie 中获取的资源访问令牌
:return: 解析后的 TokenPayload
:raises HTTPException: 如果资源访问令牌无效
"""
return apikey
# 验证并解析资源访问令牌
return __verify_token(token=resource_token, purpose="resource")
def verify_uri_token(token: str = Depends(get_token)) -> str:
def __get_api_token(
token_query: Annotated[str | None, Security(api_token_query)] = None
) -> str:
"""
通过依赖项使用token进行身份认证
从 URL 查询参数中获取 API Token
:param token_query: 从 URL 中的 `token` 查询参数获取 API Token
:return: 返回获取到的 API Token若无则返回 None
"""
if token != settings.API_TOKEN:
return token_query
def __get_api_key(
key_query: Annotated[str | None, Security(api_key_query)] = None,
key_header: Annotated[str | None, Security(api_key_header)] = None
) -> str:
"""
从 URL 查询参数或请求头部获取 API Key优先使用 URL 参数
:param key_query: URL 中的 `apikey` 查询参数
:param key_header: 请求头中的 `X-API-KEY` 参数
:return: 返回从 URL 或请求头中获取的 API Key若无则返回 None
"""
return key_query or key_header
def __verify_key(key: str, expected_key: str, key_type: str) -> str:
"""
通用的 API Key 或 Token 验证函数
:param key: 从请求中获取的 API Key 或 Token
:param expected_key: 系统配置中的期望值,用于验证的 API Key 或 Token
:param key_type: 键的类型(例如 "API_KEY""API_TOKEN"),用于错误消息
:return: 返回校验通过的 API Key 或 Token
:raises HTTPException: 如果校验不通过,抛出 401 错误
"""
if key != expected_key:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="token校验不通过"
detail=f"{key_type} 校验不通过"
)
return token
return key
def verify_uri_apikey(apikey: str = Depends(get_apikey)) -> str:
def verify_apitoken(token: str = Security(__get_api_token)) -> str:
"""
通过依赖项使用apikey进行身份认证
使用 API Token 进行身份认证
:param token: API Token从 URL 查询参数中获取
:return: 返回校验通过的 API Token
"""
if apikey != settings.API_TOKEN:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="apikey校验不通过"
)
return apikey
return __verify_key(token, settings.API_TOKEN, "API_TOKEN")
def verify_apikey(apikey: str = Security(__get_api_key)) -> str:
"""
使用 API Key 进行身份认证
:param apikey: API Key从 URL 查询参数或请求头中获取
:return: 返回校验通过的 API Key
"""
return __verify_key(apikey, settings.API_TOKEN, "API_KEY")
def verify_password(plain_password: str, hashed_password: str) -> bool:
@@ -112,7 +282,7 @@ def decrypt(data: bytes, key: bytes) -> Optional[bytes]:
try:
return fernet.decrypt(data)
except Exception as e:
print(str(e))
logger.error(f"解密失败:{str(e)} - {traceback.format_exc()}")
return None
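The `purpose` claim above is what lets `__verify_token` reject an authentication token presented as a resource token (and vice versa), even though both are HS256 JWTs. To make the mechanism concrete without depending on PyJWT, here is a stdlib-only HS256 encode/verify sketch; real code should keep using `jwt.encode`/`jwt.decode` as in the diff:

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def encode_hs256(payload: dict, secret: str) -> str:
    """Minimal JWT: base64url(header).base64url(payload).base64url(hmac-sha256)."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    sig = hmac.new(secret.encode(), f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{_b64url(sig)}"

def verify_purpose(token: str, secret: str, purpose: str) -> dict:
    """Check the signature, then check the token's declared purpose."""
    header, body, sig = token.split(".")
    expected = _b64url(hmac.new(secret.encode(), f"{header}.{body}".encode(),
                                hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    payload = json.loads(base64.urlsafe_b64decode(body + "=" * (-len(body) % 4)))
    if payload.get("purpose") != purpose:
        raise ValueError("令牌用途不匹配")
    return payload

tok = encode_hs256({"sub": "1", "purpose": "resource"}, "secret")
```

Because authentication and resource tokens are also signed with different secrets (`SECRET_KEY` vs `RESOURCE_SECRET_KEY`), the purpose check is a second, independent line of defense.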

View File

@@ -1,23 +1,41 @@
from typing import Any, Self, List
from typing import Tuple, Optional, Generator
from typing import Any, Generator, List, Optional, Self, Tuple
from sqlalchemy import create_engine, QueuePool
from sqlalchemy import inspect
from sqlalchemy.orm import declared_attr
from sqlalchemy.orm import sessionmaker, Session, scoped_session, as_declarative
from sqlalchemy import NullPool, QueuePool, and_, create_engine, inspect, text
from sqlalchemy.orm import Session, as_declarative, declared_attr, scoped_session, sessionmaker
from app.core.config import settings
# 数据库引擎
Engine = create_engine(f"sqlite:///{settings.CONFIG_PATH}/user.db",
pool_pre_ping=True,
echo=False,
poolclass=QueuePool,
pool_size=1024,
pool_recycle=3600,
pool_timeout=180,
max_overflow=10,
connect_args={"timeout": 60})
# 根据池类型设置 poolclass 和相关参数
pool_class = NullPool if settings.DB_POOL_TYPE == "NullPool" else QueuePool
connect_args = {
"timeout": settings.DB_TIMEOUT
}
# 启用 WAL 模式时的额外配置
if settings.DB_WAL_ENABLE:
connect_args["check_same_thread"] = False
kwargs = {
"url": f"sqlite:///{settings.CONFIG_PATH}/user.db",
"pool_pre_ping": settings.DB_POOL_PRE_PING,
"echo": settings.DB_ECHO,
"poolclass": pool_class,
"pool_recycle": settings.DB_POOL_RECYCLE,
"connect_args": connect_args
}
# 当使用 QueuePool 时,添加 QueuePool 特有的参数
if pool_class == QueuePool:
kwargs.update({
"pool_size": settings.DB_POOL_SIZE,
"pool_timeout": settings.DB_POOL_TIMEOUT,
"max_overflow": settings.DB_MAX_OVERFLOW
})
# 创建数据库引擎
Engine = create_engine(**kwargs)
# 根据配置设置日志模式
journal_mode = "WAL" if settings.DB_WAL_ENABLE else "DELETE"
with Engine.connect() as connection:
current_mode = connection.execute(text(f"PRAGMA journal_mode={journal_mode};")).scalar()
print(f"Database journal mode set to: {current_mode}")
# 会话工厂
SessionFactory = sessionmaker(bind=Engine)
@@ -39,6 +57,36 @@ def get_db() -> Generator:
db.close()
def perform_checkpoint(mode: str = "PASSIVE"):
"""
执行 SQLite 的 checkpoint 操作,将 WAL 文件内容写回主数据库
:param mode: checkpoint 模式,可选值包括 "PASSIVE""FULL""RESTART""TRUNCATE"
默认为 "PASSIVE",即不锁定 WAL 文件的轻量级同步
"""
if not settings.DB_WAL_ENABLE:
return
valid_modes = {"PASSIVE", "FULL", "RESTART", "TRUNCATE"}
if mode.upper() not in valid_modes:
raise ValueError(f"Invalid checkpoint mode '{mode}'. Must be one of {valid_modes}")
try:
# 使用指定的 checkpoint 模式,确保 WAL 文件数据被正确写回主数据库
with Engine.connect() as conn:
conn.execute(text(f"PRAGMA wal_checkpoint({mode.upper()});"))
except Exception as e:
print(f"Error during WAL checkpoint: {e}")
def close_database():
"""
关闭所有数据库连接并清理资源
"""
try:
# 释放连接池,SQLite 会自动清空 WAL 文件,这里不单独再调用 checkpoint
Engine.dispose()
except Exception as e:
print(f"Error while disposing database connections: {e}")
def get_args_db(args: tuple, kwargs: dict) -> Optional[Session]:
"""
从参数中获取数据库Session对象
@@ -163,7 +211,7 @@ class Base:
@classmethod
@db_update
def delete(cls, db: Session, rid):
db.query(cls).filter(cls.id == rid).delete()
db.query(cls).filter(and_(cls.id == rid)).delete()
@classmethod
@db_update
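The engine setup above flips SQLite between `WAL` and the default `DELETE` journal and exposes an explicit `wal_checkpoint`. The same pragmas can be exercised directly with the stdlib `sqlite3` driver (the database path below is a throwaway temp file; WAL requires a file-backed database, not `:memory:`):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "user.db")
conn = sqlite3.connect(path)

# PRAGMA journal_mode returns the mode actually in effect
mode = conn.execute("PRAGMA journal_mode=WAL;").fetchone()[0]
assert mode == "wal"

conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
conn.execute("INSERT INTO t DEFAULT VALUES")
conn.commit()

# PASSIVE checkpoint: copy WAL frames back to the main db without blocking writers
busy, wal_pages, moved = conn.execute("PRAGMA wal_checkpoint(PASSIVE);").fetchone()
assert busy == 0  # 0 means the checkpoint was not blocked
conn.close()
```

This also shows why `perform_checkpoint` is a no-op when `DB_WAL_ENABLE` is off: without WAL there are no frames to move.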

View File

@@ -1,4 +1,3 @@
from pathlib import Path
from typing import List
from app.db import DbOper
@@ -10,12 +9,12 @@ class DownloadHistoryOper(DbOper):
下载历史管理
"""
def get_by_path(self, path: Path) -> DownloadHistory:
def get_by_path(self, path: str) -> DownloadHistory:
"""
按路径查询下载记录
:param path: 数据key
"""
return DownloadHistory.get_by_path(self._db, str(path))
return DownloadHistory.get_by_path(self._db, path)
def get_by_hash(self, download_hash: str) -> DownloadHistory:
"""
@@ -24,6 +23,14 @@ class DownloadHistoryOper(DbOper):
"""
return DownloadHistory.get_by_hash(self._db, download_hash)
def get_by_mediaid(self, tmdbid: int, doubanid: str) -> List[DownloadHistory]:
"""
按媒体ID查询下载记录
:param tmdbid: tmdbid
:param doubanid: doubanid
"""
return DownloadHistory.get_by_mediaid(self._db, tmdbid=tmdbid, doubanid=doubanid)
def add(self, **kwargs):
"""
新增下载历史
@@ -132,3 +139,23 @@ class DownloadHistoryOper(DbOper):
type=type,
tmdbid=tmdbid,
seasons=seasons)
def list_by_type(self, mtype: str, days: int = 7) -> List[DownloadHistory]:
"""
获取指定类型的下载历史
"""
return DownloadHistory.list_by_type(db=self._db,
mtype=mtype,
days=days)
def delete_history(self, historyid):
"""
删除下载记录
"""
DownloadHistory.delete(self._db, historyid)
def delete_downloadfile(self, downloadfileid):
"""
删除下载文件记录
"""
DownloadFiles.delete(self._db, downloadfileid)

View File

@@ -2,9 +2,7 @@ from alembic.command import upgrade
from alembic.config import Config
from app.core.config import settings
from app.core.security import get_password_hash
from app.db import Engine, SessionFactory, Base
from app.db.models import *
from app.db import Engine, Base
from app.log import logger
@@ -14,16 +12,6 @@ def init_db():
"""
# 全量建表
Base.metadata.create_all(bind=Engine)
# 初始化超级管理员
with SessionFactory() as db:
_user = User.get_by_name(db=db, name=settings.SUPERUSER)
if not _user:
_user = User(
name=settings.SUPERUSER,
hashed_password=get_password_hash(settings.SUPERUSER_PASSWORD),
is_superuser=True,
)
_user.create(db)
def update_db():

View File

@@ -1,4 +1,3 @@
import json
from typing import Optional
from sqlalchemy.orm import Session
@@ -19,6 +18,8 @@ class MediaServerOper(DbOper):
"""
新增媒体服务器数据
"""
# MediaServerItem中没有的属性剔除
kwargs = {k: v for k, v in kwargs.items() if hasattr(MediaServerItem, k)}
item = MediaServerItem(**kwargs)
if not item.get_by_itemid(self._db, kwargs.get("item_id")):
item.create(self._db)
@@ -52,7 +53,7 @@ class MediaServerOper(DbOper):
# 判断季是否存在
if not item.seasoninfo:
return None
seasoninfo = json.loads(item.seasoninfo) or {}
seasoninfo = item.seasoninfo or {}
if kwargs.get("season") not in seasoninfo.keys():
return None
return item

69
app/db/message_oper.py Normal file
View File

@@ -0,0 +1,69 @@
import time
from typing import Optional, Union
from sqlalchemy.orm import Session
from app.db import DbOper
from app.db.models.message import Message
from app.schemas import MessageChannel, NotificationType
class MessageOper(DbOper):
"""
消息数据管理
"""
def __init__(self, db: Session = None):
super().__init__(db)
def add(self,
channel: MessageChannel = None,
source: str = None,
mtype: NotificationType = None,
title: str = None,
text: str = None,
image: str = None,
link: str = None,
userid: str = None,
action: int = 1,
note: Union[list, dict] = None,
**kwargs):
"""
新增消息
:param channel: 消息渠道
:param source: 来源
:param mtype: 消息类型
:param title: 标题
:param text: 文本内容
:param image: 图片
:param link: 链接
:param userid: 用户ID
:param action: 消息方向:0-接收消息,1-发送消息
:param note: 附件json
"""
kwargs.update({
"channel": channel.value if channel else '',
"source": source,
"mtype": mtype.value if mtype else '',
"title": title,
"text": text,
"image": image,
"link": link,
"userid": userid,
"action": action,
"reg_time": time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()),
"note": note or {}
})
# 从kwargs中去掉Message中没有的字段
for k in list(kwargs.keys()):
if k not in Message.__table__.columns.keys():
kwargs.pop(k)
Message(**kwargs).create(self._db)
def list_by_page(self, page: int = 1, count: int = 30) -> Optional[str]:
"""
分页获取消息列表
"""
return Message.list_by_page(self._db, page, count)
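`MessageOper.add` drops every keyword that is not a column on the `Message` model before constructing the row; that guard is what keeps loosely-typed notification payloads from raising `TypeError` in the ORM constructor. The pattern in isolation (`filter_to_columns` is a hypothetical helper; the set stands in for `Message.__table__.columns.keys()`):

```python
def filter_to_columns(kwargs: dict, columns: set) -> dict:
    """Keep only keys that correspond to model columns (hypothetical helper)."""
    return {k: v for k, v in kwargs.items() if k in columns}

COLUMNS = {"channel", "title", "text", "userid"}  # stand-in for Message columns
payload = {"channel": "telegram", "title": "hi", "mediainfo": object()}
filtered = filter_to_columns(payload, COLUMNS)
```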

View File

@@ -7,3 +7,4 @@ from .subscribe import Subscribe
from .systemconfig import SystemConfig
from .transferhistory import TransferHistory
from .user import User
from .userconfig import UserConfig

View File

@@ -1,4 +1,6 @@
from sqlalchemy import Column, Integer, String, Sequence
import time
from sqlalchemy import Column, Integer, String, Sequence, JSON
from sqlalchemy.orm import Session
from app.db import db_query, db_update, Base
@@ -44,13 +46,21 @@ class DownloadHistory(Base):
# 创建时间
date = Column(String)
# 附加信息
note = Column(String)
note = Column(JSON)
# 自定义媒体类别
media_category = Column(String)
@staticmethod
@db_query
def get_by_hash(db: Session, download_hash: str):
return db.query(DownloadHistory).filter(DownloadHistory.download_hash == download_hash).first()
@staticmethod
@db_query
def get_by_mediaid(db: Session, tmdbid: int, doubanid: str):
return db.query(DownloadHistory).filter(DownloadHistory.tmdbid == tmdbid,
DownloadHistory.doubanid == doubanid).all()
@staticmethod
@db_query
def list_by_page(db: Session, page: int = 1, count: int = 30):
@@ -140,6 +150,16 @@ class DownloadHistory(Base):
DownloadHistory.tmdbid == tmdbid).order_by(
DownloadHistory.id.desc()).all()
@staticmethod
@db_query
def list_by_type(db: Session, mtype: str, days: int):
result = db.query(DownloadHistory) \
.filter(DownloadHistory.type == mtype,
DownloadHistory.date >= time.strftime("%Y-%m-%d %H:%M:%S",
time.localtime(time.time() - 86400 * int(days)))
).all()
return list(result)
class DownloadFiles(Base):
"""
@@ -188,6 +208,7 @@ class DownloadFiles(Base):
result = db.query(DownloadFiles).filter(DownloadFiles.savepath == savepath).all()
return list(result)
@staticmethod
@db_update
def delete_by_fullpath(db: Session, fullpath: str):
db.query(DownloadFiles).filter(DownloadFiles.fullpath == fullpath,

View File

@@ -1,7 +1,7 @@
from datetime import datetime
from typing import Optional
from sqlalchemy import Column, Integer, String, Sequence
from sqlalchemy import Column, Integer, String, Sequence, JSON
from sqlalchemy.orm import Session
from app.db import db_query, db_update, Base
@@ -35,9 +35,9 @@ class MediaServerItem(Base):
# 路径
path = Column(String)
# 季集
seasoninfo = Column(String)
seasoninfo = Column(JSON, default=dict)
# 备注
note = Column(String)
note = Column(JSON)
# 同步时间
lst_mod_date = Column(String, default=datetime.now().strftime("%Y-%m-%d %H:%M:%S"))

41
app/db/models/message.py Normal file
View File

@@ -0,0 +1,41 @@
from sqlalchemy import Column, Integer, String, Sequence, JSON
from sqlalchemy.orm import Session
from app.db import db_query, Base
class Message(Base):
"""
消息表
"""
id = Column(Integer, Sequence('id'), primary_key=True, index=True)
# 消息渠道
channel = Column(String)
# 消息来源
source = Column(String)
# 消息类型
mtype = Column(String)
# 标题
title = Column(String)
# 文本内容
text = Column(String)
# 图片
image = Column(String)
# 链接
link = Column(String)
# 用户ID
userid = Column(String)
# 登记时间
reg_time = Column(String, index=True)
# 消息方向:0-接收消息,1-发送消息
action = Column(Integer)
# 附件json
note = Column(JSON)
@staticmethod
@db_query
def list_by_page(db: Session, page: int = 1, count: int = 30):
result = db.query(Message).order_by(Message.reg_time.desc()).offset((page - 1) * count).limit(
count).all()
result.sort(key=lambda x: x.reg_time, reverse=False)
return list(result)

View File

@@ -1,4 +1,4 @@
from sqlalchemy import Column, Integer, String, Sequence
from sqlalchemy import Column, Integer, String, Sequence, JSON
from sqlalchemy.orm import Session
from app.db import db_query, db_update, Base
@@ -11,7 +11,7 @@ class PluginData(Base):
id = Column(Integer, Sequence('id'), primary_key=True, index=True)
plugin_id = Column(String, nullable=False, index=True)
key = Column(String, index=True, nullable=False)
value = Column(String)
value = Column(JSON)
@staticmethod
@db_query
@@ -29,6 +29,11 @@ class PluginData(Base):
def del_plugin_data_by_key(db: Session, plugin_id: str, key: str):
db.query(PluginData).filter(PluginData.plugin_id == plugin_id, PluginData.key == key).delete()
@staticmethod
@db_update
def del_plugin_data(db: Session, plugin_id: str):
db.query(PluginData).filter(PluginData.plugin_id == plugin_id).delete()
@staticmethod
@db_query
def get_plugin_data_by_plugin_id(db: Session, plugin_id: str):

View File

@@ -1,6 +1,6 @@
from datetime import datetime
from sqlalchemy import Boolean, Column, Integer, String, Sequence
from sqlalchemy import Boolean, Column, Integer, String, Sequence, JSON
from sqlalchemy.orm import Session
from app.db import db_query, db_update, Base
@@ -25,6 +25,10 @@ class Site(Base):
cookie = Column(String)
# User-Agent
ua = Column(String)
# ApiKey
apikey = Column(String)
# Token
token = Column(String)
# 是否使用代理 0-否1-是
proxy = Column(Integer)
# 过滤规则
@@ -34,13 +38,15 @@ class Site(Base):
# 是否公开站点
public = Column(Integer)
# 附加信息
note = Column(String)
note = Column(JSON)
# 流控单位周期
limit_interval = Column(Integer, default=0)
# 流控次数
limit_count = Column(Integer, default=0)
# 流控间隔
limit_seconds = Column(Integer, default=0)
# 超时时间
timeout = Column(Integer, default=0)
# 是否启用
is_active = Column(Boolean(), default=True)
# 创建时间
@@ -63,6 +69,12 @@ class Site(Base):
result = db.query(Site).order_by(Site.pri).all()
return list(result)
@staticmethod
@db_query
def get_domains_by_ids(db: Session, ids: list):
result = db.query(Site.domain).filter(Site.id.in_(ids)).all()
return [r[0] for r in result]
@staticmethod
@db_update
def reset(db: Session):

View File

@@ -0,0 +1,37 @@
from datetime import datetime
from sqlalchemy import Column, Integer, String, Sequence, JSON
from sqlalchemy.orm import Session
from app.db import db_query, db_update, Base
class SiteStatistic(Base):
"""
站点统计表
"""
id = Column(Integer, Sequence('id'), primary_key=True, index=True)
# 域名Key
domain = Column(String, index=True)
# 成功次数
success = Column(Integer)
# 失败次数
fail = Column(Integer)
# 平均耗时 秒
seconds = Column(Integer)
# 最后一次访问状态 0-成功 1-失败
lst_state = Column(Integer)
# 最后访问时间
lst_mod_date = Column(String, default=datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
# 耗时记录 Json
note = Column(JSON)
@staticmethod
@db_query
def get_by_domain(db: Session, domain: str):
return db.query(SiteStatistic).filter(SiteStatistic.domain == domain).first()
@staticmethod
@db_update
def reset(db: Session):
db.query(SiteStatistic).delete()

View File

@@ -0,0 +1,93 @@
from datetime import datetime
from sqlalchemy import Column, Integer, String, Sequence, Float, JSON, func
from sqlalchemy.orm import Session
from app.db import db_query, Base
class SiteUserData(Base):
"""
站点数据表
"""
id = Column(Integer, Sequence('id'), primary_key=True, index=True)
# 站点域名
domain = Column(String, index=True)
# 站点名称
name = Column(String)
# 用户名
username = Column(String)
# 用户ID
userid = Column(Integer)
# 用户等级
user_level = Column(String)
# 加入时间
join_at = Column(String)
# 积分
bonus = Column(Float, default=0)
# 上传量
upload = Column(Float, default=0)
# 下载量
download = Column(Float, default=0)
# 分享率
ratio = Column(Float, default=0)
# 做种数
seeding = Column(Float, default=0)
# 下载数
leeching = Column(Float, default=0)
# 做种体积
seeding_size = Column(Float, default=0)
# 下载体积
leeching_size = Column(Float, default=0)
# 做种人数, 种子大小 JSON
seeding_info = Column(JSON, default=dict)
# 未读消息
message_unread = Column(Integer, default=0)
# 未读消息内容 JSON
message_unread_contents = Column(JSON, default=list)
# 错误信息
err_msg = Column(String)
# 更新日期
updated_day = Column(String, index=True, default=datetime.now().strftime('%Y-%m-%d'))
# 更新时间
updated_time = Column(String, default=datetime.now().strftime('%H:%M:%S'))
@staticmethod
@db_query
def get_by_domain(db: Session, domain: str, workdate: str = None, worktime: str = None):
if workdate and worktime:
return db.query(SiteUserData).filter(SiteUserData.domain == domain,
SiteUserData.updated_day == workdate,
SiteUserData.updated_time == worktime).all()
elif workdate:
return db.query(SiteUserData).filter(SiteUserData.domain == domain,
SiteUserData.updated_day == workdate).all()
return db.query(SiteUserData).filter(SiteUserData.domain == domain).all()
@staticmethod
@db_query
def get_by_date(db: Session, date: str):
return db.query(SiteUserData).filter(SiteUserData.updated_day == date).all()
@staticmethod
@db_query
def get_latest(db: Session):
"""
获取各站点最新一天的数据
"""
subquery = (
db.query(
SiteUserData.domain,
func.max(SiteUserData.updated_day).label('latest_update_day')
)
.group_by(SiteUserData.domain)
.filter(SiteUserData.err_msg == None)
.subquery()
)
# 主查询:按 domain 和 updated_day 获取最新的记录
return db.query(SiteUserData).join(
subquery,
(SiteUserData.domain == subquery.c.domain) &
(SiteUserData.updated_day == subquery.c.latest_update_day)
).order_by(SiteUserData.updated_time.desc()).all()
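`get_latest` is a classic greatest-n-per-group query: a grouped subquery finds each domain's newest `updated_day`, and the outer query joins back on (domain, day). The same shape in raw SQLite, stdlib only (table and column names simplified for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE site_data (domain TEXT, updated_day TEXT, upload REAL)")
conn.executemany(
    "INSERT INTO site_data VALUES (?, ?, ?)",
    [("a.com", "2024-11-10", 1.0), ("a.com", "2024-11-12", 2.0),
     ("b.com", "2024-11-11", 3.0)],
)
# subquery: newest day per domain; outer join keeps only those rows
rows = conn.execute("""
    SELECT s.domain, s.updated_day, s.upload
    FROM site_data s
    JOIN (SELECT domain, MAX(updated_day) AS latest
          FROM site_data GROUP BY domain) m
      ON s.domain = m.domain AND s.updated_day = m.latest
    ORDER BY s.domain
""").fetchall()
```

Grouping by day (not timestamp) means several same-day snapshots for one domain all survive the join, which is why the ORM version adds a final `order_by(updated_time.desc())`.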

View File

@@ -1,4 +1,6 @@
from sqlalchemy import Column, Integer, String, Sequence
import time
from sqlalchemy import Column, Integer, String, Sequence, Float, JSON
from sqlalchemy.orm import Session
from app.db import db_query, db_update, Base
@@ -21,14 +23,15 @@ class Subscribe(Base):
imdbid = Column(String)
tvdbid = Column(Integer)
doubanid = Column(String, index=True)
bangumiid = Column(Integer, index=True)
# 季号
season = Column(Integer)
# 海报
poster = Column(String)
# 背景图
backdrop = Column(String)
# 评分
vote = Column(Integer)
# 评分float
vote = Column(Float)
# 简介
description = Column(String)
# 过滤规则
@@ -50,7 +53,7 @@ class Subscribe(Base):
# 缺失集数
lack_episode = Column(Integer)
# 附加信息
note = Column(String)
note = Column(JSON)
# 状态N-新建, R-订阅中
state = Column(String, nullable=False, index=True, default='N')
# 最后更新时间
@@ -60,13 +63,23 @@ class Subscribe(Base):
# 订阅用户
username = Column(String)
# 订阅站点
sites = Column(String)
sites = Column(JSON, default=list)
# 是否洗版
best_version = Column(Integer, default=0)
# 当前优先级
current_priority = Column(Integer)
# 保存路径
save_path = Column(String)
# 是否使用 imdbid 搜索
search_imdbid = Column(Integer, default=0)
# 是否手动修改过总集数 0否 1是
manual_total_episode = Column(Integer, default=0)
# 自定义识别词
custom_words = Column(String)
# 自定义媒体类别
media_category = Column(String)
# 过滤规则组
filter_groups = Column(JSON, default=list)
@staticmethod
@db_query
@@ -109,6 +122,11 @@ class Subscribe(Base):
def get_by_doubanid(db: Session, doubanid: str):
return db.query(Subscribe).filter(Subscribe.doubanid == doubanid).first()
@staticmethod
@db_query
def get_by_bangumiid(db: Session, bangumiid: int):
return db.query(Subscribe).filter(Subscribe.bangumiid == bangumiid).first()
@db_update
def delete_by_tmdbid(self, db: Session, tmdbid: int, season: int):
subscrbies = self.get_by_tmdbid(db, tmdbid, season)
@@ -122,3 +140,32 @@ class Subscribe(Base):
if subscribe:
subscribe.delete(db, subscribe.id)
return True
@staticmethod
@db_query
def list_by_username(db: Session, username: str, state: str = None, mtype: str = None):
if mtype:
if state:
result = db.query(Subscribe).filter(Subscribe.state == state,
Subscribe.username == username,
Subscribe.type == mtype).all()
else:
result = db.query(Subscribe).filter(Subscribe.username == username,
Subscribe.type == mtype).all()
else:
if state:
result = db.query(Subscribe).filter(Subscribe.state == state,
Subscribe.username == username).all()
else:
result = db.query(Subscribe).filter(Subscribe.username == username).all()
return list(result)
@staticmethod
@db_query
def list_by_type(db: Session, mtype: str, days: int):
result = db.query(Subscribe) \
.filter(Subscribe.type == mtype,
Subscribe.date >= time.strftime("%Y-%m-%d %H:%M:%S",
time.localtime(time.time() - 86400 * int(days)))
).all()
return list(result)

View File

@@ -0,0 +1,78 @@
from sqlalchemy import Column, Integer, String, Sequence, Float, JSON
from sqlalchemy.orm import Session
from app.db import db_query, Base
class SubscribeHistory(Base):
"""
订阅历史表
"""
id = Column(Integer, Sequence('id'), primary_key=True, index=True)
# 标题
name = Column(String, nullable=False, index=True)
# 年份
year = Column(String)
# 类型
type = Column(String)
# 搜索关键字
keyword = Column(String)
tmdbid = Column(Integer, index=True)
imdbid = Column(String)
tvdbid = Column(Integer)
doubanid = Column(String, index=True)
bangumiid = Column(Integer, index=True)
# 季号
season = Column(Integer)
# 海报
poster = Column(String)
# 背景图
backdrop = Column(String)
# 评分float
vote = Column(Float)
# 简介
description = Column(String)
# 过滤规则
filter = Column(String)
# 包含
include = Column(String)
# 排除
exclude = Column(String)
# 质量
quality = Column(String)
# 分辨率
resolution = Column(String)
# 特效
effect = Column(String)
# 总集数
total_episode = Column(Integer)
# 开始集数
start_episode = Column(Integer)
# 订阅完成时间
date = Column(String)
# 订阅用户
username = Column(String)
# 订阅站点
sites = Column(JSON)
# 是否洗版
best_version = Column(Integer, default=0)
# 保存路径
save_path = Column(String)
# 是否使用 imdbid 搜索
search_imdbid = Column(Integer, default=0)
# 自定义识别词
custom_words = Column(String)
# 自定义媒体类别
media_category = Column(String)
# 过滤规则组
filter_groups = Column(JSON, default=list)
@staticmethod
@db_query
def list_by_type(db: Session, mtype: str, page: int = 1, count: int = 30):
result = db.query(SubscribeHistory).filter(
SubscribeHistory.type == mtype
).order_by(
SubscribeHistory.date.desc()
).offset((page - 1) * count).limit(count).all()
return list(result)
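`list_by_type` pages with `offset((page - 1) * count).limit(count)`. The same OFFSET/LIMIT arithmetic, sketched with stdlib `sqlite3` on a simplified table:

```python
import sqlite3

def list_by_type(conn, mtype, page=1, count=30):
    # newest first, skipping (page - 1) * count rows
    return conn.execute(
        "SELECT name FROM subscribehistory WHERE type = ? "
        "ORDER BY date DESC LIMIT ? OFFSET ?",
        (mtype, count, (page - 1) * count),
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE subscribehistory (name TEXT, type TEXT, date TEXT)")
conn.executemany("INSERT INTO subscribehistory VALUES (?, ?, ?)",
                 [(f"title{i}", "电影", f"2024-11-{i:02d}") for i in range(1, 8)])
```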


@@ -1,4 +1,4 @@
from sqlalchemy import Column, Integer, String, Sequence
from sqlalchemy import Column, Integer, String, Sequence, JSON
from sqlalchemy.orm import Session
from app.db import db_query, db_update, Base
@@ -12,7 +12,7 @@ class SystemConfig(Base):
# 主键
key = Column(String, index=True)
# 值
value = Column(String, nullable=True)
value = Column(JSON)
@staticmethod
@db_query
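Changing `value` from `String` to a SQLAlchemy `JSON` column moves serialization into the column type: callers store dicts and lists directly, and the ORM (de)serializes on the way in and out. The effect can be emulated with a stdlib `sqlite3` adapter (a sketch of the behavior, not the ORM mechanism itself):

```python
import json
import sqlite3

# serialize dicts transparently on INSERT, as a JSON column type would
sqlite3.register_adapter(dict, json.dumps)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE systemconfig (key TEXT, value TEXT)")
value = {"download_dir": "/media", "retries": 3}
conn.execute("INSERT INTO systemconfig VALUES (?, ?)", ("UserSettings", value))

raw = conn.execute("SELECT value FROM systemconfig WHERE key = 'UserSettings'").fetchone()[0]
restored = json.loads(raw)  # the read side of what the JSON type automates
```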


@@ -1,6 +1,6 @@
import time
from sqlalchemy import Column, Integer, String, Sequence, Boolean, func
from sqlalchemy import Column, Integer, String, Sequence, Boolean, func, or_, JSON
from sqlalchemy.orm import Session
from app.db import db_query, db_update, Base
@@ -11,10 +11,18 @@ class TransferHistory(Base):
转移历史记录
"""
id = Column(Integer, Sequence('id'), primary_key=True, index=True)
# 源目录
# 源路径
src = Column(String, index=True)
# 目标目录
# 源存储
src_storage = Column(String)
# 源文件项
src_fileitem = Column(JSON, default=dict)
# 目标路径
dest = Column(String)
# 目标存储
dest_storage = Column(String)
# 目标文件项
dest_fileitem = Column(JSON, default=dict)
# 转移模式 move/copy/link...
mode = Column(String)
# 类型 电影/电视剧
@@ -44,32 +52,40 @@ class TransferHistory(Base):
# 时间
date = Column(String, index=True)
# 文件清单以JSON存储
files = Column(String)
files = Column(JSON, default=list)
@staticmethod
@db_query
def list_by_title(db: Session, title: str, page: int = 1, count: int = 30, status: bool = None):
if status is not None:
result = db.query(TransferHistory).filter(TransferHistory.title.like(f'%{title}%'),
TransferHistory.status == status).order_by(
TransferHistory.date.desc()).offset((page - 1) * count).limit(
count).all()
result = db.query(TransferHistory).filter(
TransferHistory.status == status
).order_by(
TransferHistory.date.desc()
).offset((page - 1) * count).limit(count).all()
else:
result = db.query(TransferHistory).filter(TransferHistory.title.like(f'%{title}%')).order_by(
TransferHistory.date.desc()).offset((page - 1) * count).limit(
count).all()
result = db.query(TransferHistory).filter(or_(
TransferHistory.title.like(f'%{title}%'),
TransferHistory.src.like(f'%{title}%'),
TransferHistory.dest.like(f'%{title}%'),
)).order_by(
TransferHistory.date.desc()
).offset((page - 1) * count).limit(count).all()
return list(result)
@staticmethod
@db_query
def list_by_page(db: Session, page: int = 1, count: int = 30, status: bool = None):
if status is not None:
result = db.query(TransferHistory).filter(TransferHistory.status == status).order_by(
TransferHistory.date.desc()).offset((page - 1) * count).limit(
count).all()
result = db.query(TransferHistory).filter(
TransferHistory.status == status
).order_by(
TransferHistory.date.desc()
).offset((page - 1) * count).limit(count).all()
else:
result = db.query(TransferHistory).order_by(TransferHistory.date.desc()).offset((page - 1) * count).limit(
count).all()
result = db.query(TransferHistory).order_by(
TransferHistory.date.desc()
).offset((page - 1) * count).limit(count).all()
return list(result)
@staticmethod
@@ -79,8 +95,17 @@ class TransferHistory(Base):
@staticmethod
@db_query
def get_by_src(db: Session, src: str):
return db.query(TransferHistory).filter(TransferHistory.src == src).first()
def get_by_src(db: Session, src: str, storage: str = None):
if storage:
return db.query(TransferHistory).filter(TransferHistory.src == src,
TransferHistory.src_storage == storage).first()
else:
return db.query(TransferHistory).filter(TransferHistory.src == src).first()
@staticmethod
@db_query
def get_by_dest(db: Session, dest: str):
return db.query(TransferHistory).filter(TransferHistory.dest == dest).first()
@staticmethod
@db_query
@@ -113,10 +138,13 @@ class TransferHistory(Base):
@db_query
def count_by_title(db: Session, title: str, status: bool = None):
if status is not None:
return db.query(func.count(TransferHistory.id)).filter(TransferHistory.title.like(f'%{title}%'),
TransferHistory.status == status).first()[0]
return db.query(func.count(TransferHistory.id)).filter(TransferHistory.status == status).first()[0]
else:
return db.query(func.count(TransferHistory.id)).filter(TransferHistory.title.like(f'%{title}%')).first()[0]
return db.query(func.count(TransferHistory.id)).filter(or_(
TransferHistory.title.like(f'%{title}%'),
TransferHistory.src.like(f'%{title}%'),
TransferHistory.dest.like(f'%{title}%')
)).first()[0]
@staticmethod
@db_query
@@ -203,3 +231,11 @@ class TransferHistory(Base):
"download_hash": download_hash
}
)
@staticmethod
@db_query
def list_by_date(db: Session, date: str):
"""
查询某时间之后的转移历史
"""
return db.query(TransferHistory).filter(TransferHistory.date > date).order_by(TransferHistory.id.desc()).all()
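The new `or_(...)` filter widens the title search to also match source and destination paths. The equivalent SQL, sketched with stdlib `sqlite3`:

```python
import sqlite3

def count_by_title(conn, title):
    # one LIKE pattern applied across title, src and dest (OR semantics)
    pat = f"%{title}%"
    return conn.execute(
        "SELECT COUNT(*) FROM transferhistory "
        "WHERE title LIKE ? OR src LIKE ? OR dest LIKE ?",
        (pat, pat, pat),
    ).fetchone()[0]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transferhistory (title TEXT, src TEXT, dest TEXT)")
conn.executemany("INSERT INTO transferhistory VALUES (?, ?, ?)", [
    ("三体", "/downloads/three.body.mkv", "/media/三体/S01E01.mkv"),
    ("沙丘", "/downloads/dune.mkv", "/media/沙丘/dune.mkv"),
])
```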


@@ -1,8 +1,7 @@
from sqlalchemy import Boolean, Column, Integer, String, Sequence
from sqlalchemy import Boolean, Column, Integer, JSON, Sequence, String
from sqlalchemy.orm import Session
from app.core.security import verify_password
from app.db import db_query, db_update, Base
from app.db import Base, db_query, db_update
class User(Base):
@@ -11,9 +10,9 @@ class User(Base):
"""
# ID
id = Column(Integer, Sequence('id'), primary_key=True, index=True)
# 用户名
# 用户名,唯一值
name = Column(String, index=True, nullable=False)
# 邮箱,未启用
# 邮箱
email = Column(String)
# 加密后密码
hashed_password = Column(String)
@@ -23,25 +22,46 @@ class User(Base):
is_superuser = Column(Boolean(), default=False)
# 头像
avatar = Column(String)
@staticmethod
@db_query
def authenticate(db: Session, name: str, password: str):
user = db.query(User).filter(User.name == name).first()
if not user:
return None
if not verify_password(password, str(user.hashed_password)):
return None
return user
# 是否启用otp二次验证
is_otp = Column(Boolean(), default=False)
# otp秘钥
otp_secret = Column(String, default=None)
# 用户权限 json
permissions = Column(JSON, default=dict)
# 用户个性化设置 json
settings = Column(JSON, default=dict)
@staticmethod
@db_query
def get_by_name(db: Session, name: str):
return db.query(User).filter(User.name == name).first()
@staticmethod
@db_query
def get_by_id(db: Session, user_id: int):
return db.query(User).filter(User.id == user_id).first()
@db_update
def delete_by_name(self, db: Session, name: str):
user = self.get_by_name(db, name)
if user:
user.delete(db, user.id)
return True
@db_update
def delete_by_id(self, db: Session, user_id: int):
user = self.get_by_id(db, user_id)
if user:
user.delete(db, user.id)
return True
@db_update
def update_otp_by_name(self, db: Session, name: str, otp: bool, secret: str):
user = self.get_by_name(db, name)
if user:
user.update(db, {
'is_otp': otp,
'otp_secret': secret
})
return True
return False


@@ -0,0 +1,38 @@
from sqlalchemy import Column, Integer, String, Sequence, UniqueConstraint, Index, JSON
from sqlalchemy.orm import Session
from app.db import db_query, db_update, Base
class UserConfig(Base):
"""
用户配置表
"""
id = Column(Integer, Sequence('id'), primary_key=True, index=True)
# 用户名
username = Column(String, index=True)
# 配置键
key = Column(String)
# 值
value = Column(JSON)
__table_args__ = (
# 用户名和配置键联合唯一
UniqueConstraint('username', 'key'),
Index('ix_userconfig_username_key', 'username', 'key'),
)
@staticmethod
@db_query
def get_by_key(db: Session, username: str, key: str):
return db.query(UserConfig) \
.filter(UserConfig.username == username) \
.filter(UserConfig.key == key) \
.first()
@db_update
def delete_by_key(self, db: Session, username: str, key: str):
userconfig = self.get_by_key(db=db, username=username, key=key)
if userconfig:
userconfig.delete(db=db, rid=userconfig.id)
return True


@@ -0,0 +1,69 @@
from sqlalchemy import Column, Integer, String, Sequence, Float
from sqlalchemy.orm import Session
from app.db import db_query, Base
class UserRequest(Base):
"""
用户请求表
"""
# ID
id = Column(Integer, Sequence('id'), primary_key=True, index=True)
# 申请用户
req_user = Column(String, index=True, nullable=False)
# 申请时间
req_time = Column(String)
# 申请备注
req_remark = Column(String)
# 审批用户
app_user = Column(String, index=True, nullable=False)
# 审批时间
app_time = Column(String)
# 审批状态 0-待审批 1-通过 2-拒绝
app_status = Column(Integer, default=0)
# 类型
type = Column(String)
# 标题
title = Column(String)
# 年份
year = Column(String)
# 媒体ID
tmdbid = Column(Integer)
imdbid = Column(String)
tvdbid = Column(Integer)
doubanid = Column(String)
bangumiid = Column(Integer)
# 季号
season = Column(Integer)
# 海报
poster = Column(String)
# 背景图
backdrop = Column(String)
# 评分float
vote = Column(Float)
# 简介
description = Column(String)
@staticmethod
@db_query
def get_by_req_user(db: Session, req_user: str, status: int = None):
if status is not None:
return db.query(UserRequest).filter(UserRequest.req_user == req_user,
UserRequest.app_status == status).all()
else:
return db.query(UserRequest).filter(UserRequest.req_user == req_user).all()
@staticmethod
@db_query
def get_by_app_user(db: Session, app_user: str, status: int = None):
if status is not None:
return db.query(UserRequest).filter(UserRequest.app_user == app_user,
UserRequest.app_status == status).all()
else:
return db.query(UserRequest).filter(UserRequest.app_user == app_user).all()
@staticmethod
@db_query
def get_by_status(db: Session, status: int):
return db.query(UserRequest).filter(UserRequest.app_status == status).all()


@@ -1,9 +1,7 @@
import json
from typing import Any
from app.db import DbOper
from app.db.models.plugindata import PluginData
from app.utils.object import ObjectUtils
class PluginDataOper(DbOper):
@@ -18,8 +16,6 @@ class PluginDataOper(DbOper):
:param key: 数据key
:param value: 数据值
"""
if ObjectUtils.is_obj(value):
value = json.dumps(value)
plugin = PluginData.get_plugin_data_by_key(self._db, plugin_id, key)
if plugin:
plugin.update(self._db, {
@@ -28,26 +24,30 @@ class PluginDataOper(DbOper):
else:
PluginData(plugin_id=plugin_id, key=key, value=value).create(self._db)
def get_data(self, plugin_id: str, key: str) -> Any:
def get_data(self, plugin_id: str, key: str = None) -> Any:
"""
获取插件数据
:param plugin_id: 插件id
:param key: 数据key
"""
data = PluginData.get_plugin_data_by_key(self._db, plugin_id, key)
if not data:
return None
if ObjectUtils.is_obj(data.value):
return json.loads(data.value)
return data.value
if key:
data = PluginData.get_plugin_data_by_key(self._db, plugin_id, key)
if not data:
return None
return data.value
else:
return PluginData.get_plugin_data(self._db, plugin_id)
def del_data(self, plugin_id: str, key: str) -> Any:
def del_data(self, plugin_id: str, key: str = None) -> Any:
"""
删除插件数据
:param plugin_id: 插件id
:param key: 数据key
"""
PluginData.del_plugin_data_by_key(self._db, plugin_id, key)
if key:
PluginData.del_plugin_data_by_key(self._db, plugin_id, key)
else:
PluginData.del_plugin_data(self._db, plugin_id)
def truncate(self):
"""


@@ -1,7 +1,11 @@
from typing import Tuple, List
from datetime import datetime
from typing import List, Tuple
from app.db import DbOper
from app.db.models import SiteIcon
from app.db.models.site import Site
from app.db.models.sitestatistic import SiteStatistic
from app.db.models.siteuserdata import SiteUserData
class SiteOper(DbOper):
@@ -63,6 +67,12 @@ class SiteOper(DbOper):
"""
return Site.get_by_domain(self._db, domain)
def get_domains_by_ids(self, ids: List[int]) -> List[str]:
"""
按ID获取站点域名
"""
return Site.get_domains_by_ids(self._db, ids)
def exists(self, domain: str) -> bool:
"""
判断站点是否存在
@@ -92,3 +102,130 @@ class SiteOper(DbOper):
"rss": rss
})
return True, "更新站点RSS地址成功"
def update_userdata(self, domain: str, name: str, payload: dict) -> Tuple[bool, str]:
"""
更新站点用户数据
"""
# 当前系统日期
current_day = datetime.now().strftime('%Y-%m-%d')
current_time = datetime.now().strftime('%H:%M:%S')
payload.update({
"domain": domain,
"name": name,
"updated_day": current_day,
"updated_time": current_time
})
# 按站点+天判断是否存在数据
siteuserdatas = SiteUserData.get_by_domain(self._db, domain=domain, workdate=current_day)
if siteuserdatas:
# 存在则更新
siteuserdatas[0].update(self._db, payload)
else:
# 不存在则插入
SiteUserData(**payload).create(self._db)
return True, "更新站点用户数据成功"
def get_userdata(self) -> List[SiteUserData]:
"""
获取站点用户数据
"""
return SiteUserData.list(self._db)
def get_userdata_by_domain(self, domain: str, workdate: str = None) -> List[SiteUserData]:
"""
获取站点用户数据
"""
return SiteUserData.get_by_domain(self._db, domain=domain, workdate=workdate)
def get_userdata_by_date(self, date: str) -> List[SiteUserData]:
"""
获取站点用户数据
"""
return SiteUserData.get_by_date(self._db, date)
def get_userdata_latest(self) -> List[SiteUserData]:
"""
获取站点最新数据
"""
return SiteUserData.get_latest(self._db)
def get_icon_by_domain(self, domain: str) -> SiteIcon:
"""
按域名获取站点图标
"""
return SiteIcon.get_by_domain(self._db, domain)
def update_icon(self, name: str, domain: str, icon_url: str, icon_base64: str) -> bool:
"""
更新站点图标
"""
icon_base64 = f"data:image/ico;base64,{icon_base64}" if icon_base64 else ""
siteicon = self.get_icon_by_domain(domain)
if not siteicon:
SiteIcon(name=name, domain=domain, url=icon_url, base64=icon_base64).create(self._db)
elif icon_base64:
siteicon.update(self._db, {
"url": icon_url,
"base64": icon_base64
})
return True
def success(self, domain: str, seconds: int = None):
"""
站点访问成功
"""
lst_date = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
sta = SiteStatistic.get_by_domain(self._db, domain)
if sta:
avg_seconds, note = None, {}
if seconds is not None:
note: dict = sta.note or {}
note[lst_date] = seconds or 1
avg_times = len(note.keys())
if avg_times > 10:
note = dict(sorted(note.items(), key=lambda x: x[0], reverse=True)[:10])
avg_seconds = sum([v for v in note.values()]) // avg_times
sta.update(self._db, {
"success": sta.success + 1,
"seconds": avg_seconds or sta.seconds,
"lst_state": 0,
"lst_mod_date": lst_date,
"note": note or sta.note
})
else:
note = {}
if seconds is not None:
note = {
lst_date: seconds or 1
}
SiteStatistic(
domain=domain,
success=1,
fail=0,
seconds=seconds or 1,
lst_state=0,
lst_mod_date=lst_date,
note=note
).create(self._db)
def fail(self, domain: str):
"""
站点访问失败
"""
lst_date = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
sta = SiteStatistic.get_by_domain(self._db, domain)
if sta:
sta.update(self._db, {
"fail": sta.fail + 1,
"lst_state": 1,
"lst_mod_date": lst_date
})
else:
SiteStatistic(
domain=domain,
success=0,
fail=1,
lst_state=1,
lst_mod_date=lst_date
).create(self._db)
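`success()` keeps per-visit timings in `note` keyed by timestamp, trims to the 10 most recent samples, and averages them. That bookkeeping, isolated as a pure function (a sketch; the real method also persists the success counter and state):

```python
def update_timings(note: dict, when: str, seconds: int, keep: int = 10) -> tuple:
    """Record a new timing sample, keep the `keep` newest, return (note, average)."""
    note = dict(note or {})
    note[when] = seconds or 1  # a 0s response is stored as 1s, as in success()
    if len(note) > keep:
        # "%Y-%m-%d %H:%M:%S" timestamps sort lexically, so
        # reverse-sorted keys are newest first
        note = dict(sorted(note.items(), key=lambda x: x[0], reverse=True)[:keep])
    return note, sum(note.values()) // len(note)
```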


@@ -1,37 +0,0 @@
from typing import List
from app.db import DbOper
from app.db.models.siteicon import SiteIcon
class SiteIconOper(DbOper):
"""
站点管理
"""
def list(self) -> List[SiteIcon]:
"""
获取站点图标列表
"""
return SiteIcon.list(self._db)
def get_by_domain(self, domain: str) -> SiteIcon:
"""
按域名获取站点图标
"""
return SiteIcon.get_by_domain(self._db, domain)
def update_icon(self, name: str, domain: str, icon_url: str, icon_base64: str) -> bool:
"""
更新站点图标
"""
icon_base64 = f"data:image/ico;base64,{icon_base64}" if icon_base64 else ""
siteicon = SiteIcon(name=name, domain=domain, url=icon_url, base64=icon_base64)
if not self.get_by_domain(domain):
siteicon.create(self._db)
elif icon_base64:
siteicon.update(self._db, {
"url": icon_url,
"base64": icon_base64
})
return True


@@ -4,6 +4,7 @@ from typing import Tuple, List
from app.core.context import MediaInfo
from app.db import DbOper
from app.db.models.subscribe import Subscribe
from app.db.models.subscribehistory import SubscribeHistory
class SubscribeOper(DbOper):
@@ -27,6 +28,7 @@ class SubscribeOper(DbOper):
imdbid=mediainfo.imdb_id,
tvdbid=mediainfo.tvdb_id,
doubanid=mediainfo.douban_id,
bangumiid=mediainfo.bangumi_id,
poster=mediainfo.get_poster_image(),
backdrop=mediainfo.get_backdrop_image(),
vote=mediainfo.vote_average,
@@ -83,3 +85,35 @@ class SubscribeOper(DbOper):
subscribe = self.get(sid)
subscribe.update(self._db, payload)
return subscribe
def list_by_tmdbid(self, tmdbid: int, season: int = None) -> List[Subscribe]:
"""
获取指定tmdb_id的订阅
"""
return Subscribe.get_by_tmdbid(self._db, tmdbid=tmdbid, season=season)
def list_by_username(self, username: str, state: str = None, mtype: str = None) -> List[Subscribe]:
"""
获取指定用户的订阅
"""
return Subscribe.list_by_username(self._db, username=username, state=state, mtype=mtype)
def list_by_type(self, mtype: str, days: int = 7) -> List[Subscribe]:
"""
获取指定类型的订阅
"""
return Subscribe.list_by_type(self._db, mtype=mtype, days=days)
def add_history(self, **kwargs):
"""
新增订阅历史
"""
# 去除kwargs中 SubscribeHistory 没有的字段
kwargs = {k: v for k, v in kwargs.items() if hasattr(SubscribeHistory, k)}
# 更新完成订阅时间
kwargs.update({"date": time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())})
# 去掉主键
if "id" in kwargs:
kwargs.pop("id")
subscribe = SubscribeHistory(**kwargs)
subscribe.create(self._db)
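`add_history` copies a finished subscription into `SubscribeHistory` by dropping any field the history model doesn't define, plus the old primary key, then stamping a completion date. The filtering step in isolation (with a stand-in class, not the real model):

```python
import time

class FakeHistory:
    # stand-in for SubscribeHistory: only these attributes exist
    name = None
    year = None
    date = None

def to_history_kwargs(model, **kwargs):
    # keep only fields the target model defines, drop the old primary key
    kwargs = {k: v for k, v in kwargs.items() if hasattr(model, k)}
    kwargs.pop("id", None)
    kwargs["date"] = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
    return kwargs
```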


@@ -1,10 +1,8 @@
import json
from typing import Any, Union
from app.db import DbOper
from app.db.models.systemconfig import SystemConfig
from app.schemas.types import SystemConfigKey
from app.utils.object import ObjectUtils
from app.utils.singleton import Singleton
@@ -18,10 +16,7 @@ class SystemConfigOper(DbOper, metaclass=Singleton):
"""
super().__init__()
for item in SystemConfig.list(self._db):
if ObjectUtils.is_obj(item.value):
self.__SYSTEMCONF[item.key] = json.loads(item.value)
else:
self.__SYSTEMCONF[item.key] = item.value
self.__SYSTEMCONF[item.key] = item.value
def set(self, key: Union[str, SystemConfigKey], value: Any):
"""
@@ -31,11 +26,6 @@ class SystemConfigOper(DbOper, metaclass=Singleton):
key = key.value
# 更新内存
self.__SYSTEMCONF[key] = value
# 写入数据库
if ObjectUtils.is_obj(value):
value = json.dumps(value)
elif value is None:
value = ''
conf = SystemConfig.get_by_key(self._db, key)
if conf:
if value:
@@ -56,6 +46,12 @@ class SystemConfigOper(DbOper, metaclass=Singleton):
return self.__SYSTEMCONF
return self.__SYSTEMCONF.get(key)
def all(self):
"""
获取所有系统设置
"""
return self.__SYSTEMCONF or {}
def delete(self, key: Union[str, SystemConfigKey]):
"""
删除系统设置


@@ -1,13 +1,11 @@
import json
import time
from pathlib import Path
from typing import Any, List
from app.core.context import MediaInfo
from app.core.meta import MetaBase
from app.db import DbOper
from app.db.models.transferhistory import TransferHistory
from app.schemas import TransferInfo
from app.schemas import TransferInfo, FileItem
class TransferHistoryOper(DbOper):
@@ -29,12 +27,20 @@ class TransferHistoryOper(DbOper):
"""
return TransferHistory.list_by_title(self._db, title)
def get_by_src(self, src: str) -> TransferHistory:
def get_by_src(self, src: str, storage: str = None) -> TransferHistory:
"""
按源查询转移记录
:param src: 数据key
:param storage: 存储类型
"""
return TransferHistory.get_by_src(self._db, src)
return TransferHistory.get_by_src(self._db, src, storage)
def get_by_dest(self, dest: str) -> TransferHistory:
"""
按转移路径查询转移记录
:param dest: 数据key
"""
return TransferHistory.get_by_dest(self._db, dest)
def list_by_hash(self, download_hash: str) -> List[TransferHistory]:
"""
@@ -112,15 +118,19 @@ class TransferHistoryOper(DbOper):
"""
TransferHistory.update_download_hash(self._db, historyid, download_hash)
def add_success(self, src_path: Path, mode: str, meta: MetaBase,
def add_success(self, fileitem: FileItem, mode: str, meta: MetaBase,
mediainfo: MediaInfo, transferinfo: TransferInfo,
download_hash: str = None):
"""
新增转移成功历史记录
"""
self.add_force(
src=str(src_path),
dest=str(transferinfo.target_path or ''),
src=fileitem.path,
src_storage=fileitem.storage,
src_fileitem=fileitem.dict(),
dest=transferinfo.target_item.path if transferinfo.target_item else None,
dest_storage=transferinfo.target_item.storage if transferinfo.target_item else None,
dest_fileitem=transferinfo.target_item.dict() if transferinfo.target_item else None,
mode=mode,
type=mediainfo.type.value,
category=mediainfo.category,
@@ -135,18 +145,22 @@ class TransferHistoryOper(DbOper):
image=mediainfo.get_poster_image(),
download_hash=download_hash,
status=1,
files=json.dumps(transferinfo.file_list)
files=transferinfo.file_list
)
def add_fail(self, src_path: Path, mode: str, meta: MetaBase, mediainfo: MediaInfo = None,
def add_fail(self, fileitem: FileItem, mode: str, meta: MetaBase, mediainfo: MediaInfo = None,
transferinfo: TransferInfo = None, download_hash: str = None):
"""
新增转移失败历史记录
"""
if mediainfo and transferinfo:
his = self.add_force(
src=str(src_path),
dest=str(transferinfo.target_path or ''),
src=fileitem.path,
src_storage=fileitem.storage,
src_fileitem=fileitem.dict(),
dest=transferinfo.target_item.path if transferinfo.target_item else None,
dest_storage=transferinfo.target_item.storage if transferinfo.target_item else None,
dest_fileitem=transferinfo.target_item.dict() if transferinfo.target_item else None,
mode=mode,
type=mediainfo.type.value,
category=mediainfo.category,
@@ -162,13 +176,15 @@ class TransferHistoryOper(DbOper):
download_hash=download_hash,
status=0,
errmsg=transferinfo.message or '未知错误',
files=json.dumps(transferinfo.file_list)
files=transferinfo.file_list
)
else:
his = self.add_force(
title=meta.name,
year=meta.year,
src=str(src_path),
src=fileitem.path,
src_storage=fileitem.storage,
src_fileitem=fileitem.dict(),
mode=mode,
seasons=meta.season,
episodes=meta.episode,
@@ -177,3 +193,10 @@ class TransferHistoryOper(DbOper):
errmsg="未识别到媒体信息"
)
return his
def list_by_date(self, date: str) -> List[TransferHistory]:
"""
查询某时间之后的转移历史
:param date: 日期
"""
return TransferHistory.list_by_date(self._db, date)

app/db/user_oper.py

@@ -0,0 +1,113 @@
from typing import Optional
from fastapi import Depends, HTTPException
from sqlalchemy.orm import Session
from app import schemas
from app.core.security import verify_token
from app.db import DbOper, get_db
from app.db.models.user import User
def get_current_user(
db: Session = Depends(get_db),
token_data: schemas.TokenPayload = Depends(verify_token)
) -> User:
"""
获取当前用户
"""
user = User.get(db, rid=token_data.sub)
if not user:
raise HTTPException(status_code=403, detail="用户不存在")
return user
def get_current_active_user(
current_user: User = Depends(get_current_user),
) -> User:
"""
获取当前激活用户
"""
if not current_user.is_active:
raise HTTPException(status_code=403, detail="用户未激活")
return current_user
def get_current_active_superuser(
current_user: User = Depends(get_current_user),
) -> User:
"""
获取当前激活超级管理员
"""
if not current_user.is_superuser:
raise HTTPException(
status_code=400, detail="用户权限不足"
)
return current_user
class UserOper(DbOper):
"""
用户管理
"""
def add(self, **kwargs):
"""
新增用户
"""
user = User(**kwargs)
user.create(self._db)
def get_by_name(self, name: str) -> User:
"""
根据用户名获取用户
"""
return User.get_by_name(self._db, name)
def get_permissions(self, name: str) -> dict:
"""
获取用户权限
{
"admin": "管理员",
"usermanage": "用户管理",
"dashboard": "仪表板",
"ranking": "推荐榜单",
"resource": {
"search": "搜索站点资源",
"download": "下载站点资源",
},
"subscribe": {
"request": "提交订阅请求",
"autopass": "订阅请求自动批准"
"approve": "审批订阅请求",
"calendar": "查看订阅日历",
"manage": "管理所有订阅"
},
"downloading": {
"view": "查看正在下载任务",
"manager": "管理正在下载任务"
}
}
"""
user = User.get_by_name(self._db, name)
if user:
return user.permissions or {}
return {}
def get_settings(self, name: str) -> Optional[dict]:
"""
获取用户个性化设置,返回None表示用户不存在
"""
user = User.get_by_name(self._db, name)
if user:
return user.settings or {}
return None
def get_setting(self, name: str, key: str) -> Optional[str]:
"""
获取用户个性化设置
"""
settings = self.get_settings(name)
if settings:
return settings.get(key)
return None
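`get_permissions` returns a nested dict like the one in the docstring above. A hypothetical helper (not part of the source) for checking a dotted permission path against it:

```python
def has_permission(perms: dict, dotted: str) -> bool:
    # walk the nested dict by a dotted path, e.g. "resource.download"
    node = perms
    for part in dotted.split("."):
        if not isinstance(node, dict):
            return False
        node = node.get(part)
        if node is None:
            return False
    return True
```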


@@ -1,35 +0,0 @@
from fastapi import Depends, HTTPException
from sqlalchemy.orm import Session
from app import schemas
from app.core.security import verify_token
from app.db import get_db
from app.db.models.user import User
def get_current_user(
db: Session = Depends(get_db),
token_data: schemas.TokenPayload = Depends(verify_token)
) -> User:
user = User.get(db, rid=token_data.sub)
if not user:
raise HTTPException(status_code=403, detail="用户不存在")
return user
def get_current_active_user(
current_user: User = Depends(get_current_user),
) -> User:
if not current_user.is_active:
raise HTTPException(status_code=403, detail="用户未激活")
return current_user
def get_current_active_superuser(
current_user: User = Depends(get_current_user),
) -> User:
if not current_user.is_superuser:
raise HTTPException(
status_code=400, detail="用户权限不足"
)
return current_user

app/db/userconfig_oper.py

@@ -0,0 +1,89 @@
from typing import Any, Union, Dict, Optional
from app.db import DbOper
from app.db.models.userconfig import UserConfig
from app.schemas.types import UserConfigKey
from app.utils.singleton import Singleton
class UserConfigOper(DbOper, metaclass=Singleton):
# 配置缓存
__USERCONF: Dict[str, Dict[str, Any]] = {}
def __init__(self):
"""
加载配置到内存
"""
super().__init__()
for item in UserConfig.list(self._db):
self.__set_config_cache(username=item.username, key=item.key, value=item.value)
def set(self, username: str, key: Union[str, UserConfigKey], value: Any):
"""
设置用户配置
"""
if isinstance(key, UserConfigKey):
key = key.value
# 更新内存
self.__set_config_cache(username=username, key=key, value=value)
# 写入数据库
conf = UserConfig.get_by_key(db=self._db, username=username, key=key)
if conf:
if value:
conf.update(self._db, {"value": value})
else:
conf.delete(self._db, conf.id)
else:
conf = UserConfig(username=username, key=key, value=value)
conf.create(self._db)
def get(self, username: str, key: Union[str, UserConfigKey] = None) -> Any:
"""
获取用户配置
"""
if not username:
return self.__USERCONF
if isinstance(key, UserConfigKey):
key = key.value
if not key:
return self.__get_config_caches(username=username)
return self.__get_config_cache(username=username, key=key)
def __del__(self):
if self._db:
self._db.close()
def __set_config_cache(self, username: str, key: str, value: Any):
"""
设置配置缓存
"""
if not username or not key:
return
cache = self.__USERCONF
if not cache:
cache = {}
user_cache = cache.get(username)
if not user_cache:
user_cache = {}
cache[username] = user_cache
user_cache[key] = value
self.__USERCONF = cache
def __get_config_caches(self, username: str) -> Optional[Dict[str, Any]]:
"""
获取配置缓存
"""
if not username or not self.__USERCONF:
return None
return self.__USERCONF.get(username)
def __get_config_cache(self, username: str, key: str) -> Any:
"""
获取配置缓存
"""
if not username or not key or not self.__USERCONF:
return None
user_cache = self.__get_config_caches(username)
if not user_cache:
return None
return user_cache.get(key)
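The `__set_config_cache` / `__get_config_cache` pair maintains a two-level `{username: {key: value}}` dict. The same cache, condensed with `setdefault` (a sketch of the data structure only, without the database write-through):

```python
from typing import Any, Dict, Optional

class UserConfigCache:
    def __init__(self):
        self._cache: Dict[str, Dict[str, Any]] = {}

    def set(self, username: str, key: str, value: Any) -> None:
        if not username or not key:
            return
        # create the per-user dict on first use, then assign
        self._cache.setdefault(username, {})[key] = value

    def get(self, username: str, key: Optional[str] = None) -> Any:
        user = self._cache.get(username) or {}
        return user if key is None else user.get(key)
```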


@@ -0,0 +1,42 @@
from typing import List
from app.db import DbOper
from app.db.models.userrequest import UserRequest
class UserRequestOper(DbOper):
"""
用户请求管理
"""
def get_need_approve(self) -> List[UserRequest]:
"""
获取待审批申请
"""
return UserRequest.get_by_status(self._db, 0)
def get_my_requests(self, username: str) -> List[UserRequest]:
"""
获取我的申请
"""
return UserRequest.get_by_req_user(self._db, username)
def approve(self, rid: int) -> bool:
"""
审批申请
"""
user_request = UserRequest.get(self._db, rid)
if user_request:
user_request.update(self._db, {"app_status": 1})
return True
return False
def deny(self, rid: int) -> bool:
"""
拒绝申请
"""
user_request = UserRequest.get(self._db, rid)
if user_request:
user_request.update(self._db, {"app_status": 2})
return True
return False

app/factory.py

@@ -0,0 +1,31 @@
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from app.core.config import settings
from app.startup.lifecycle import lifespan
def create_app() -> FastAPI:
"""
创建并配置 FastAPI 应用实例。
"""
_app = FastAPI(
title=settings.PROJECT_NAME,
openapi_url=f"{settings.API_V1_STR}/openapi.json",
lifespan=lifespan
)
# 配置 CORS 中间件
_app.add_middleware(
CORSMiddleware,
allow_origins=settings.ALLOWED_HOSTS,
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
return _app
# 创建 FastAPI 应用实例
app = create_app()


@@ -0,0 +1,2 @@
from .doh import doh_query_json
from .cloudflare import under_challenge


@@ -61,7 +61,7 @@ class PlaywrightHelper:
ua: str = None,
proxies: dict = None,
headless: bool = False,
timeout: int = 30) -> str:
timeout: int = 20) -> str:
"""
获取网页源码
:param url: 网页地址


@@ -6,6 +6,7 @@ from playwright.sync_api import Page
from app.helper.browser import PlaywrightHelper
from app.helper.ocr import OcrHelper
from app.helper.twofa import TwoFactorAuth
from app.log import logger
from app.utils.http import RequestUtils
from app.utils.site import SiteUtils
@@ -51,7 +52,8 @@ class CookieHelper:
],
"twostep": [
'//input[@name="two_step_code"]',
'//input[@name="2fa_secret"]'
'//input[@name="2fa_secret"]',
'//input[@name="otp"]'
]
}
@@ -71,12 +73,14 @@ class CookieHelper:
url: str,
username: str,
password: str,
two_step_code: str = None,
proxies: dict = None) -> Tuple[Optional[str], Optional[str], str]:
"""
获取站点cookie和ua
:param url: 站点地址
:param username: 用户名
:param password: 密码
:param two_step_code: 二步验证码或密钥
:param proxies: 代理
:return: cookie、ua、message
"""
@@ -107,6 +111,15 @@ class CookieHelper:
break
if not password_xpath:
return None, None, "未找到密码输入框"
# 处理二步验证码
otp_code = TwoFactorAuth(two_step_code).get_code()
# 查找二步验证码输入框
twostep_xpath = None
if otp_code:
for xpath in self._SITE_LOGIN_XPATH.get("twostep"):
if html.xpath(xpath):
twostep_xpath = xpath
break
# 查找验证码输入框
captcha_xpath = None
for xpath in self._SITE_LOGIN_XPATH.get("captcha"):
@@ -138,6 +151,9 @@ class CookieHelper:
page.fill(username_xpath, username)
# 输入密码
page.fill(password_xpath, password)
# 输入二步验证码
if twostep_xpath:
page.fill(twostep_xpath, otp_code)
# 识别验证码
if captcha_xpath and captcha_img_url:
captcha_element = page.query_selector(captcha_xpath)
@@ -164,6 +180,24 @@ class CookieHelper:
except Exception as e:
logger.error(f"仿真登录失败:{str(e)}")
return None, None, f"仿真登录失败:{str(e)}"
# 对于某二次验证码为单页面的站点,输入二次验证码
if "verify" in page.url:
if not otp_code:
return None, None, "需要二次验证码"
html = etree.HTML(page.content())
for xpath in self._SITE_LOGIN_XPATH.get("twostep"):
if html.xpath(xpath):
try:
# 刷新一下 2fa code
otp_code = TwoFactorAuth(two_step_code).get_code()
page.fill(xpath, otp_code)
# 登录按钮 xpath 理论上相同,不再重复查找
page.click(submit_xpath)
page.wait_for_load_state("networkidle", timeout=30 * 1000)
except Exception as e:
logger.error(f"二次验证码输入失败:{str(e)}")
return None, None, f"二次验证码输入失败:{str(e)}"
break
# 登录后的源码
html_text = page.content()
if not html_text:
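`TwoFactorAuth(two_step_code).get_code()` turns a TOTP secret into the current 6-digit code. RFC 6238 TOTP can be implemented with the stdlib alone; a sketch (the project's actual `TwoFactorAuth` helper may differ):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, for_time: int = None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP (HMAC-SHA1) over a base32-encoded shared secret."""
    if for_time is None:
        for_time = int(time.time())
    # pad base32 input to a multiple of 8 characters before decoding
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    counter = struct.pack(">Q", for_time // step)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

The assertion below uses the RFC 6238 test vector (secret `12345678901234567890`, T=59).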


@@ -1,68 +1,133 @@
from typing import Tuple, Optional
import json
from typing import Any, Dict, Tuple, Optional
from app.core.config import settings
from app.log import logger
from app.utils.crypto import CryptoJsUtils, HashUtils
from app.utils.http import RequestUtils
from app.utils.string import StringUtils
from app.utils.url import UrlUtils
class CookieCloudHelper:
_ignore_cookies: list = ["CookieAutoDeleteBrowsingDataCleanup", "CookieAutoDeleteCleaningDiscarded"]
def __init__(self, server, key, password):
self._server = server
self._key = key
self._password = password
def __init__(self):
self.__sync_setting()
self._req = RequestUtils(content_type="application/json")
def __sync_setting(self):
"""
同步CookieCloud配置项
"""
self._server = UrlUtils.standardize_base_url(settings.COOKIECLOUD_HOST)
self._key = StringUtils.safe_strip(settings.COOKIECLOUD_KEY)
self._password = StringUtils.safe_strip(settings.COOKIECLOUD_PASSWORD)
self._enable_local = settings.COOKIECLOUD_ENABLE_LOCAL
self._local_path = settings.COOKIE_PATH
def download(self) -> Tuple[Optional[dict], str]:
"""
从CookieCloud下载数据
:return: Cookie数据、错误信息
"""
if not self._server or not self._key or not self._password:
# 更新为最新设置
self.__sync_setting()
if ((not self._server and not self._enable_local)
or not self._key
or not self._password):
return None, "CookieCloud参数不正确"
req_url = "%s/get/%s" % (self._server, self._key)
ret = self._req.post_res(url=req_url, json={"password": self._password})
if ret and ret.status_code == 200:
result = ret.json()
if self._enable_local:
# When the local service is enabled, read the data directly from the local file
result = self.__load_local_encrypt_data(self._key)
if not result:
return {}, "下载到数据"
if result.get("cookie_data"):
contents = result.get("cookie_data")
else:
contents = result
# Group the data by the last two levels of the cookie domain
domain_groups = {}
for site, cookies in contents.items():
for cookie in cookies:
domain_key = StringUtils.get_url_domain(cookie.get("domain"))
if not domain_groups.get(domain_key):
domain_groups[domain_key] = [cookie]
else:
domain_groups[domain_key].append(cookie)
# Cookies to return
ret_cookies = {}
# Iterate over each domain group
for domain, content_list in domain_groups.items():
if not content_list:
continue
# Skip domains that only carry the Cloudflare cookie
cloudflare_cookie = True
for content in content_list:
if content["name"] != "cf_clearance":
cloudflare_cookie = False
break
if cloudflare_cookie:
continue
# Site cookie string
cookie_str = ";".join(
[f"{content.get('name')}={content.get('value')}"
for content in content_list
if content.get("name") and content.get("name") not in self._ignore_cookies]
)
ret_cookies[domain] = cookie_str
return ret_cookies, ""
elif ret:
return None, f"同步CookieCloud失败错误码{ret.status_code}"
return {}, "从本地CookieCloud服务加载到cookie数据请检查服务器设置、用户KEY及加密密码是否正确"
else:
return None, "CookieCloud请求失败请检查服务器地址、用户KEY及加密密码是否正确"
req_url = UrlUtils.combine_url(host=self._server, path=f"get/{self._key}")
ret = self._req.get_res(url=req_url)
if ret and ret.status_code == 200:
try:
result = ret.json()
if not result:
return {}, f"未从{self._server}下载到cookie数据"
except Exception as err:
return {}, f"{self._server}下载cookie数据错误{str(err)}"
elif ret:
return None, f"远程同步CookieCloud失败错误码{ret.status_code}"
else:
return None, "CookieCloud请求失败请检查服务器地址、用户KEY及加密密码是否正确"
encrypted = result.get("encrypted")
if not encrypted:
return {}, "未获取到cookie密文"
else:
crypt_key = self.__get_crypt_key()
try:
decrypted_data = CryptoJsUtils.decrypt(encrypted, crypt_key).decode("utf-8")
result = json.loads(decrypted_data)
except Exception as e:
return {}, "cookie解密失败" + str(e)
if not result:
return {}, "cookie解密为空"
if result.get("cookie_data"):
contents = result.get("cookie_data")
else:
contents = result
# Group the data by the last two levels of the cookie domain
domain_groups = {}
for site, cookies in contents.items():
for cookie in cookies:
domain_key = StringUtils.get_url_domain(cookie.get("domain"))
if not domain_groups.get(domain_key):
domain_groups[domain_key] = [cookie]
else:
domain_groups[domain_key].append(cookie)
# Cookies to return
ret_cookies = {}
# Iterate over each domain group
for domain, content_list in domain_groups.items():
if not content_list:
continue
# Skip domains that only carry the Cloudflare cookie
cloudflare_cookie = True
for content in content_list:
if content["name"] != "cf_clearance":
cloudflare_cookie = False
break
if cloudflare_cookie:
continue
# Site cookie string
cookie_str = ";".join(
[f"{content.get('name')}={content.get('value')}"
for content in content_list
if content.get("name") and content.get("name") not in self._ignore_cookies]
)
ret_cookies[domain] = cookie_str
return ret_cookies, ""
def __get_crypt_key(self) -> bytes:
"""
Derive the CookieCloud encryption/decryption key from the UUID and password
"""
combined_string = f"{self._key}-{self._password}"
return HashUtils.md5(combined_string)[:16].encode("utf-8")
def __load_local_encrypt_data(self, uuid: str) -> Dict[str, Any]:
"""
Load the local CookieCloud data
"""
file_path = self._local_path / f"{uuid}.json"
# Check whether the file exists
if not file_path.exists():
logger.warn(f"本地CookieCloud文件不存在:{file_path}")
return {}
# Read the file
with open(file_path, encoding="utf-8", mode="r") as file:
read_content = file.read()
data = json.loads(read_content.encode("utf-8"))
return data
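The key derivation in `__get_crypt_key` can be sketched with the standard library alone. This is a minimal sketch, assuming `HashUtils.md5` returns the hex digest string (CookieCloud's documented scheme: the first 16 hex characters of `md5("<uuid>-<password>")` serve as the AES key):

```python
import hashlib

def get_crypt_key(uuid: str, password: str) -> bytes:
    # Mirror of __get_crypt_key above: md5 hex digest of
    # "<uuid>-<password>", truncated to 16 characters.
    combined = f"{uuid}-{password}"
    return hashlib.md5(combined.encode("utf-8")).hexdigest()[:16].encode("utf-8")

key = get_crypt_key("my-uuid", "my-password")
print(len(key))  # 16 -> usable as an AES-128 key
```

The truncation to 16 bytes is what makes the result a valid AES-128 key for `CryptoJsUtils.decrypt`.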

app/helper/directory.py Normal file

@@ -0,0 +1,96 @@
from pathlib import Path
from typing import List, Optional
from app import schemas
from app.core.context import MediaInfo
from app.db.systemconfig_oper import SystemConfigOper
from app.schemas.types import SystemConfigKey, MediaType
class DirectoryHelper:
"""
Helper for download and media library directories
"""
def __init__(self):
self.systemconfig = SystemConfigOper()
def get_dirs(self) -> List[schemas.TransferDirectoryConf]:
"""
Get all configured directories
"""
dir_confs: List[dict] = self.systemconfig.get(SystemConfigKey.Directories)
if not dir_confs:
return []
return [schemas.TransferDirectoryConf(**d) for d in dir_confs]
def get_download_dirs(self) -> List[schemas.TransferDirectoryConf]:
"""
Get all download directories, sorted by priority
"""
return sorted([d for d in self.get_dirs() if d.download_path], key=lambda x: x.priority)
def get_local_download_dirs(self) -> List[schemas.TransferDirectoryConf]:
"""
Get all local download directories
"""
return [d for d in self.get_download_dirs() if d.storage == "local"]
def get_library_dirs(self) -> List[schemas.TransferDirectoryConf]:
"""
Get all media library directories, sorted by priority
"""
return sorted([d for d in self.get_dirs() if d.library_path], key=lambda x: x.priority)
def get_local_library_dirs(self) -> List[schemas.TransferDirectoryConf]:
"""
Get all local media library directories
"""
return [d for d in self.get_library_dirs() if d.library_storage == "local"]
def get_dir(self, media: MediaInfo, src_path: Path = None, dest_path: Path = None,
fileitem: schemas.FileItem = None, local: bool = False) -> Optional[schemas.TransferDirectoryConf]:
"""
Get the matching download/library directory configuration for the given media
:param media: media info
:param src_path: source directory, matched directly when provided
:param dest_path: target directory, matched directly when provided
:param fileitem: file item, matched by file path
:param local: whether to restrict the match to local directories
"""
# Resolve the media type
if media:
media_type = media.type.value
else:
media_type = MediaType.UNKNOWN.value
dirs = self.get_dirs()
# Search in configured order
for d in dirs:
# Download directory
download_path = Path(d.download_path)
# Media library directory
library_path = Path(d.library_path)
# Download directory does not match; usually applies to `download` matching
if src_path and download_path != src_path:
continue
# Library directory does not match, or monitor_type is None (no auto-organize); usually applies to `organize` matching
if dest_path:
if library_path != dest_path or not d.monitor_type:
continue
# Takes effect when no directory is given; usually the `manual organize` case where no target directory was selected
if fileitem and not Path(fileitem.path).is_relative_to(download_path):
continue
# Local directories only
if local and d.storage != "local":
continue
# Directory type is "all": match
if not d.media_type:
return d
# Directory type matches and category is "all": match
if d.media_type == media_type and not d.media_category:
return d
# Directory type and category both match: match
if d.media_type == media_type and d.media_category == media.category:
return d
return None
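The first-match cascade in `get_dir` (catch-all, then type-level, then type-plus-category) can be exercised in isolation. The `DirConf` dataclass below is a hypothetical stand-in for `TransferDirectoryConf`, reduced to the fields the cascade reads:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DirConf:
    download_path: str
    media_type: Optional[str] = None       # empty/None matches all types
    media_category: Optional[str] = None   # empty/None matches all categories

def match_dir(dirs: List[DirConf], media_type: str, category: str) -> Optional[DirConf]:
    # First match wins, mirroring the ordered checks in get_dir():
    # no media_type -> catch-all; type match without category -> type-level;
    # type and category match -> most specific.
    for d in dirs:
        if not d.media_type:
            return d
        if d.media_type == media_type and not d.media_category:
            return d
        if d.media_type == media_type and d.media_category == category:
            return d
    return None

dirs = [DirConf("/dl/tv", "电视剧", "国产剧"), DirConf("/dl/all")]
print(match_dir(dirs, "电视剧", "国产剧").download_path)  # /dl/tv
print(match_dir(dirs, "电影", "动画").download_path)      # /dl/all
```

Because the scan is in configured order, a catch-all entry placed before a more specific one would shadow it, so specific directories should be configured first (or with higher priority).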


@@ -4,6 +4,8 @@ from app.log import logger
from app.utils.singleton import Singleton
from app.utils.system import SystemUtils
import os
class DisplayHelper(metaclass=Singleton):
_display: Display = None
@@ -12,11 +14,14 @@ class DisplayHelper(metaclass=Singleton):
if not SystemUtils.is_docker():
return
try:
self._display = Display(visible=False, size=(1024, 768))
self._display = Display(visible=False, size=(1024, 768), extra_args=[os.environ['DISPLAY']])
self._display.start()
except Exception as err:
logger.error(f"DisplayHelper init error: {str(err)}")
def stop(self):
if self._display:
logger.info("正在停止虚拟显示...")
self._display.stop()
logger.info("虚拟显示已停止")

app/helper/doh.py Normal file

@@ -0,0 +1,137 @@
"""
Implementation of DoH (DNS-over-HTTPS) helpers.
author: https://github.com/C5H12O5/syno-videoinfo-plugin
"""
import base64
import concurrent
import concurrent.futures
import json
import socket
import struct
import urllib
import urllib.request
from typing import Dict, Optional
from app.core.config import settings
from app.log import logger
# Global thread pool executor
_executor = concurrent.futures.ThreadPoolExecutor()
# Default DoH configuration
_doh_timeout = 5
_doh_cache: Dict[str, str] = {}
def _patched_getaddrinfo(host, *args, **kwargs):
"""
Patched version of socket.getaddrinfo.
"""
if host not in settings.DOH_DOMAINS.split(","):
return _orig_getaddrinfo(host, *args, **kwargs)
# Check whether the host has already been resolved
if host in _doh_cache:
ip = _doh_cache[host]
logger.info("已解析 [%s] 为 [%s] (缓存)", host, ip)
return _orig_getaddrinfo(ip, *args, **kwargs)
# Resolve the host via DoH
futures = []
for resolver in settings.DOH_RESOLVERS.split(","):
futures.append(_executor.submit(_doh_query, resolver, host))
for future in concurrent.futures.as_completed(futures):
ip = future.result()
if ip is not None:
logger.info("已解析 [%s] 为 [%s]", host, ip)
_doh_cache[host] = ip
host = ip
break
return _orig_getaddrinfo(host, *args, **kwargs)
# Patch socket.getaddrinfo
if settings.DOH_ENABLE:
_orig_getaddrinfo = socket.getaddrinfo
socket.getaddrinfo = _patched_getaddrinfo
def _doh_query(resolver: str, host: str) -> Optional[str]:
"""
Query the IP address of the given host using the given DoH resolver (wire format).
"""
# Build the DNS query message (RFC 1035)
header = b"".join(
[
b"\x00\x00", # ID: 0
b"\x01\x00", # FLAGS: standard recursive query
b"\x00\x01", # QDCOUNT: 1
b"\x00\x00", # ANCOUNT: 0
b"\x00\x00", # NSCOUNT: 0
b"\x00\x00", # ARCOUNT: 0
]
)
question = b"".join(
[
b"".join(
[
struct.pack("B", len(item)) + item.encode("utf-8")
for item in host.split(".")
]
)
+ b"\x00", # QNAME: 域名序列
b"\x00\x01", # QTYPE: A
b"\x00\x01", # QCLASS: IN
]
)
message = header + question
try:
# Send a GET request to the DoH resolver (RFC 8484)
b64message = base64.b64encode(message).decode("utf-8").rstrip("=")
url = f"https://{resolver}/dns-query?dns={b64message}"
headers = {"Content-Type": "application/dns-message"}
logger.debug("DoH请求: %s", url)
request = urllib.request.Request(url, headers=headers, method="GET")
with urllib.request.urlopen(request, timeout=_doh_timeout) as response:
logger.debug("解析器(%s)响应: %s", resolver, response.status)
if response.status != 200:
return None
resp_body = response.read()
# Parse the DNS response message (RFC 1035)
# name (compressed): 2 + type: 2 + class: 2 + ttl: 4 + rdlength: 2 = 12 bytes
first_rdata_start = len(header) + len(question) + 12
# rdata (A record) = 4 bytes
first_rdata_end = first_rdata_start + 4
# Convert rdata to an IP address
return socket.inet_ntoa(resp_body[first_rdata_start:first_rdata_end])
except Exception as e:
logger.error("解析器(%s)请求错误: %s", resolver, e)
return None
def doh_query_json(resolver: str, host: str) -> Optional[str]:
"""
Query the IP address of the given host using the given DoH resolver (JSON API).
"""
url = f"https://{resolver}/dns-query?name={host}&type=A"
headers = {"Accept": "application/dns-json"}
logger.debug("DoH请求: %s", url)
try:
request = urllib.request.Request(url, headers=headers, method="GET")
with urllib.request.urlopen(request, timeout=_doh_timeout) as response:
logger.debug("解析器(%s)响应: %s", resolver, response.status)
if response.status != 200:
return None
response_body = response.read().decode("utf-8")
logger.debug("<== body: %s", response_body)
answer = json.loads(response_body)["Answer"]
return answer[0]["data"]
except Exception as e:
logger.error("解析器(%s)请求错误: %s", resolver, e)
return None
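The length-prefixed QNAME encoding built inside `_doh_query` can be checked on its own. This is a minimal sketch of the RFC 1035 label format, not part of the module's API:

```python
import struct

def encode_qname(host: str) -> bytes:
    # RFC 1035: each label is prefixed with its one-byte length;
    # the name is terminated by a zero-length label (a single 0x00).
    return b"".join(
        struct.pack("B", len(label)) + label.encode("utf-8")
        for label in host.split(".")
    ) + b"\x00"

print(encode_qname("example.com"))  # b'\x07example\x03com\x00'
```

The 12-byte skip used when parsing the answer assumes the response compresses the name to a 2-byte pointer, which is what public resolvers return in practice.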

app/helper/downloader.py Normal file

@@ -0,0 +1,37 @@
from typing import Optional
from app.helper.service import ServiceBaseHelper
from app.schemas import DownloaderConf, ServiceInfo
from app.schemas.types import SystemConfigKey, ModuleType
class DownloaderHelper(ServiceBaseHelper[DownloaderConf]):
"""
Downloader helper class
"""
def __init__(self):
super().__init__(
config_key=SystemConfigKey.Downloaders,
conf_type=DownloaderConf,
module_type=ModuleType.Downloader
)
def is_downloader(
self,
service_type: Optional[str] = None,
service: Optional[ServiceInfo] = None,
name: Optional[str] = None,
) -> bool:
"""
Generic downloader type check
:param service_type: downloader type name (e.g. 'qbittorrent', 'transmission')
:param service: service info to check
:param name: service name
:return: True if the service type or instance matches the given type, otherwise False
"""
# If no service instance is provided, look it up by name
service = service or self.get_service(name=name)
# Check whether the service type matches
return bool(service and service.type == service_type)
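The `service or get_service(name)` fallback pattern used by `is_downloader` (and the media server variant below) is easy to exercise in isolation. The `ServiceInfo` and registry shapes here are hypothetical stand-ins, not the project's real schemas:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ServiceInfo:
    name: str
    type: str

class Registry:
    def __init__(self, services: List[ServiceInfo]):
        self._services = {s.name: s for s in services}

    def get_service(self, name: Optional[str] = None) -> Optional[ServiceInfo]:
        return self._services.get(name)

    def is_type(self, service_type: str,
                service: Optional[ServiceInfo] = None,
                name: Optional[str] = None) -> bool:
        # Fall back to a name lookup when no instance is given,
        # then compare the type -- the same shape as is_downloader().
        service = service or self.get_service(name=name)
        return bool(service and service.type == service_type)

reg = Registry([ServiceInfo("qb-main", "qbittorrent")])
print(reg.is_type("qbittorrent", name="qb-main"))   # True
print(reg.is_type("transmission", name="qb-main"))  # False
```

Wrapping the result in `bool(...)` also makes a missing service (unknown name, no instance) return `False` instead of `None`.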


@@ -9,17 +9,19 @@ class FormatParser(object):
_split_chars = r"\.|\s+|\(|\)|\[|]|-|\+|【|】|/||;|&|\||#|_|「|」|~"
def __init__(self, eformat: str, details: str = None, part: str = None,
offset: int = None, key: str = "ep"):
offset: str = None, key: str = "ep"):
"""
:param eformat: format string
:param details: format details
:param part: episode part
:param offset: offset
:param offset: offset expression, e.g. -10 or EP*2
:param key: EP keyword
"""
self._format = eformat
self._start_ep = None
self._end_ep = None
self.__offset = offset or "EP"
self._key = key
self._part = None
if part:
self._part = part
@@ -34,8 +36,6 @@ class FormatParser(object):
self._end_ep = int(tmp[0]) if int(tmp[0]) > int(tmp[1]) else int(tmp[1])
else:
self._start_ep = self._end_ep = int(tmp[0])
self.__offset = int(offset) if offset else 0
self._key = key
@property
def format(self):
@@ -77,15 +77,21 @@ class FormatParser(object):
if self._start_ep is not None and self._start_ep == self._end_ep:
if isinstance(self._start_ep, str):
s, e = self._start_ep.split("-")
start_ep = self.__offset.replace("EP", s)
end_ep = self.__offset.replace("EP", e)
if int(s) == int(e):
return int(s) + self.__offset, None, self.part
return int(s) + self.__offset, int(e) + self.__offset, self.part
return self._start_ep + self.__offset, None, self.part
return int(eval(start_ep)), None, self.part
return int(eval(start_ep)), int(eval(end_ep)), self.part
else:
start_ep = self.__offset.replace("EP", str(self._start_ep))
return int(eval(start_ep)), None, self.part
if not self._format:
return None, None, None
s, e = self.__handle_single(file_name)
return s + self.__offset if s is not None else None, \
e + self.__offset if e is not None else None, self.part
return self._start_ep, self._end_ep, self.part
else:
s, e = self.__handle_single(file_name)
start_ep = self.__offset.replace("EP", str(s)) if s else None
end_ep = self.__offset.replace("EP", str(e)) if e else None
return int(eval(start_ep)) if start_ep else None, int(eval(end_ep)) if end_ep else None, self.part
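The change above turns `offset` from an integer into an expression over the literal `EP`, which is substituted with the episode number and evaluated. A minimal sketch of that substitution step (the function name is illustrative):

```python
def apply_offset(offset_expr: str, ep: int) -> int:
    # Substitute the episode number for "EP" and evaluate the
    # arithmetic expression, e.g. "EP-10" or "EP*2".
    # Note: eval on untrusted input is a risk; shown here only to
    # mirror the parser's substitution step.
    return int(eval(offset_expr.replace("EP", str(ep))))

print(apply_offset("EP*2", 3))    # 6
print(apply_offset("EP-10", 13))  # 3
```

Defaulting `__offset` to `"EP"` (the identity expression) is what keeps unconfigured parsers returning the episode number unchanged.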
def __handle_single(self, file: str) -> Tuple[Optional[int], Optional[int]]:
"""

app/helper/mediaserver.py Normal file

@@ -0,0 +1,37 @@
from typing import Optional
from app.helper.service import ServiceBaseHelper
from app.schemas import MediaServerConf, ServiceInfo
from app.schemas.types import SystemConfigKey, ModuleType
class MediaServerHelper(ServiceBaseHelper[MediaServerConf]):
"""
Media server helper class
"""
def __init__(self):
super().__init__(
config_key=SystemConfigKey.MediaServers,
conf_type=MediaServerConf,
module_type=ModuleType.MediaServer
)
def is_media_server(
self,
service_type: Optional[str] = None,
service: Optional[ServiceInfo] = None,
name: Optional[str] = None,
) -> bool:
"""
Generic media server type check
:param service_type: media server type name (e.g. 'plex', 'emby', 'jellyfin')
:param service: service info to check
:param name: service name
:return: True if the service type or instance matches the given type, otherwise False
"""
# If no service instance is provided, look it up by name
service = service or self.get_service(name=name)
# Check whether the service type matches
return bool(service and service.type == service_type)

Some files were not shown because too many files have changed in this diff Show More