Compare commits


382 Commits

Author SHA1 Message Date
jxxghp
cde4db1a56 v2.1.2 2024-12-06 15:55:56 +08:00
jxxghp
29ae910953 fix build 2024-12-06 12:31:29 +08:00
jxxghp
314f90cc40 upgrade python-115 2024-12-06 12:30:13 +08:00
jxxghp
1c22e3d024 Merge pull request #3337 from InfinityPacer/feature/subscribe
feat(event): add ResourceDownload event for cancel download
2024-12-06 11:17:34 +08:00
InfinityPacer
233d62479f feat(event): add options to ResourceDownloadEventData 2024-12-06 10:47:56 +08:00
jxxghp
6974f2ebd7 Merge pull request #3335 from mackerel-12138/fix_scraper 2024-12-06 06:53:24 +08:00
InfinityPacer
c030166cf5 feat(event): send events for resource download based on source 2024-12-06 02:08:36 +08:00
InfinityPacer
4c511eaea6 chore(event): update ResourceDownloadEventData comment 2024-12-06 02:06:00 +08:00
InfinityPacer
6e443a1127 feat(event): add ResourceDownload event for cancel download 2024-12-06 01:55:44 +08:00
InfinityPacer
896e473c41 fix(event): filter and handle only enabled event handlers 2024-12-06 01:54:51 +08:00
zhanglijun
12f10ebedf fix: audio-track file renaming during organizing 2024-12-06 00:40:38 +08:00
jxxghp
ba9f85747c Merge pull request #3330 from InfinityPacer/feature/subscribe 2024-12-05 17:10:47 +08:00
InfinityPacer
2954c02a7c feat(subscribe): add subscription status update API 2024-12-05 16:24:05 +08:00
InfinityPacer
312e602f12 feat(subscribe): add Pending and Suspended subscription states 2024-12-05 16:22:09 +08:00
InfinityPacer
ed37fcbb07 feat(subscribe): update get_by_state to handle multiple states 2024-12-05 16:20:14 +08:00
jxxghp
6acf8fbf00 Merge pull request #3324 from InfinityPacer/feature/subscribe 2024-12-05 06:54:45 +08:00
InfinityPacer
a1e178c805 feat(event): add ResourceSelection event for update resource contexts 2024-12-04 20:21:57 +08:00
jxxghp
922e2fc446 Merge pull request #3323 from Aqr-K/feat-module 2024-12-04 18:19:15 +08:00
jxxghp
db4c8cb3f2 Merge pull request #3322 from InfinityPacer/feature/subscribe 2024-12-04 18:18:32 +08:00
Aqr-K
1c578746fe fix(module): add missing get_subtype method to indexer
- Add the missing `get_subtype` method to `indexer`.
- Add a `get_running_subtype_module` method, which can be combined with `types` to quickly fetch a single running `module`.
2024-12-04 18:14:56 +08:00
InfinityPacer
68f88117b6 feat(events): add episodes field to DownloadAdded event for unpack 2024-12-04 16:11:35 +08:00
jxxghp
108c0a89f6 Merge pull request #3320 from InfinityPacer/feature/subscribe 2024-12-04 12:18:19 +08:00
InfinityPacer
92dacdf6a2 fix(subscribe): add RLock to prevent duplicate subscription downloads 2024-12-04 11:07:45 +08:00
InfinityPacer
6aa684d6a5 fix(subscribe): handle case when no subscriptions are found 2024-12-04 11:03:32 +08:00
InfinityPacer
efece8cc56 fix(subscribe): add check for None before updating subscription 2024-12-04 10:27:33 +08:00
jxxghp
383c8ca19a Merge pull request #3313 from Aqr-K/feat-module 2024-12-03 18:09:49 +08:00
jxxghp
0a73681280 Merge pull request #3315 from InfinityPacer/feature/scheduler 2024-12-03 18:09:23 +08:00
InfinityPacer
c1ecda280c fix #3312 2024-12-03 17:33:00 +08:00
Aqr-K
825fc35134 feat(modules): add sub-level type classification.
- In `types`, add sub-level classification for each module's type.
- Add a unified `get_subtype` method to every module. This makes it possible to distinguish and invoke each module subclass quickly and precisely, while also obtaining the classification defined by ModuleType and the list of supported sub-module types, effectively removing the current need to tediously iterate over every module just to call get_name or _channel.
- Resolves potential issues caused by the message-channel type saved by the frontend not matching the values defined by the backend, which previously required frequent calls to the private _channel method to obtain the classification.
2024-12-03 14:57:19 +08:00
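The commit body above describes a direct (type, subtype) lookup over running modules. As a rough sketch of that pattern — class names and internals are assumptions for illustration, not taken from the MoviePilot source — it might look like:

```python
from enum import Enum
from typing import List, Optional


class ModuleType(Enum):
    # Assumed top-level categories, for illustration only
    Indexer = "indexer"
    Downloader = "downloader"


class BaseModule:
    """Hypothetical base class: each module reports its type and subtype."""

    @staticmethod
    def get_type() -> ModuleType:
        raise NotImplementedError

    @staticmethod
    def get_subtype() -> str:
        raise NotImplementedError


class QbittorrentModule(BaseModule):
    @staticmethod
    def get_type() -> ModuleType:
        return ModuleType.Downloader

    @staticmethod
    def get_subtype() -> str:
        return "qbittorrent"


class ModuleManager:
    """Hypothetical manager over the currently running modules."""

    def __init__(self, running: List[BaseModule]):
        self._running = running

    def get_running_subtype_module(self, mtype: ModuleType,
                                   subtype: str) -> Optional[BaseModule]:
        # Resolve a single running module by (type, subtype) directly,
        # instead of iterating and calling get_name()/_channel on each one.
        for module in self._running:
            if module.get_type() is mtype and module.get_subtype() == subtype:
                return module
        return None
```

With such a lookup, a caller can resolve one module in a single call, e.g. `manager.get_running_subtype_module(ModuleType.Downloader, "qbittorrent")`.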
jxxghp
8f543ca602 Merge pull request #3309 from yxlimo/tmdbid-for-downloader 2024-12-03 06:55:36 +08:00
yxlimo
f0ecc1a497 fix: return last record when get downloadhistory by hash 2024-12-02 22:55:57 +08:00
jxxghp
71f170a1ad Merge pull request #3293 from wikrin/v2 2024-12-01 10:23:51 +08:00
Attente
3709b65b0e fix(api): correct variable reference in media scraping logic
- Change incorrect reference from media_info to mediainfo
2024-12-01 03:40:30 +08:00
jxxghp
9d6eb0f1e1 Merge pull request #3291 from mackerel-12138/fix_scraper 2024-11-30 16:06:04 +08:00
jxxghp
c93306147b Merge pull request #3290 from mackerel-12138/fix_poster 2024-11-30 16:05:11 +08:00
zhanglijun
5e8f924a2f fix: tmdbid lost when scraping with a specified tmdbid 2024-11-30 15:57:47 +08:00
zhanglijun
54988d6397 fix: missing/incorrect fanart season image downloads 2024-11-30 13:51:30 +08:00
jxxghp
112761dc4c Merge pull request #3287 from InfinityPacer/feature/security 2024-11-30 07:15:52 +08:00
InfinityPacer
ef20508840 feat(auth): handle service instance retrieval with proper null check 2024-11-30 01:14:36 +08:00
InfinityPacer
589a1765ed feat(auth): support specifying service for authentication 2024-11-30 01:04:48 +08:00
jxxghp
2c666e24f3 Merge pull request #3283 from InfinityPacer/feature/subscribe 2024-11-29 21:12:25 +08:00
InfinityPacer
168e3c5533 fix(subscribe): move state update to finally to prevent duplicates 2024-11-29 18:56:19 +08:00
jxxghp
cda8b2573a Merge pull request #3282 from InfinityPacer/feature/subscribe 2024-11-29 16:47:56 +08:00
InfinityPacer
4cb4eb23b8 fix(subscribe): prevent fallback to search rules if not defined 2024-11-29 16:15:37 +08:00
jxxghp
f208b65570 Update version.py 2024-11-29 08:59:55 +08:00
jxxghp
8a0a530036 Merge pull request #3279 from wikrin/v2 2024-11-29 07:36:34 +08:00
Attente
76643f13ed Update system.py 2024-11-29 07:33:02 +08:00
Attente
6992284a77 fix(api): filtering failed in rule tests when media info was not retrieved 2024-11-29 07:25:08 +08:00
jxxghp
9a142799cd Merge pull request #3274 from InfinityPacer/feature/encoding 2024-11-28 17:29:22 +08:00
InfinityPacer
027d1567c3 feat(encoding): set PERFORMANCE_MODE to enabled by default 2024-11-28 17:07:14 +08:00
jxxghp
65af737dfd Merge pull request #3272 from wikrin/transfer 2024-11-28 07:23:25 +08:00
jxxghp
48aa0e3d0b Merge pull request #3271 from wikrin/v2 2024-11-28 07:22:16 +08:00
jxxghp
b4e31893ff Merge pull request #3268 from mackerel-12138/fix_scraper 2024-11-28 07:21:43 +08:00
Attente
4f1b95352a Improve manual organize logic; linked to jxxghp/MoviePilot-Frontend#255 2024-11-28 05:39:26 +08:00
Attente
ca664cb569 fix: incorrect media library directory matching during batch organizing 2024-11-28 05:19:09 +08:00
zhanglijun
fe4ea73286 Fix season NFO scraping errors and improve season title selection 2024-11-27 23:27:08 +08:00
jxxghp
9e9cca6de4 Merge pull request #3262 from InfinityPacer/feature/encoding 2024-11-27 16:25:46 +08:00
InfinityPacer
2e7e74c803 feat(encoding): update configuration to performance mode 2024-11-27 13:52:17 +08:00
InfinityPacer
916597047d Merge branch 'v2' of https://github.com/jxxghp/MoviePilot into feature/encoding 2024-11-27 12:52:01 +08:00
InfinityPacer
83fc474dbe feat(encoding): enhance encoding detection with confidence threshold 2024-11-27 12:33:57 +08:00
jxxghp
f67bf49e69 Merge pull request #3255 from InfinityPacer/feature/event 2024-11-27 06:59:54 +08:00
jxxghp
bf9043f526 Merge pull request #3254 from mackerel-12138/v2 2024-11-27 06:58:47 +08:00
InfinityPacer
a98de604a1 refactor(event): rename SmartRename to TransferRename 2024-11-27 00:50:34 +08:00
InfinityPacer
e160a745a7 fix(event): correct visualize_handlers 2024-11-27 00:49:37 +08:00
zhanglijun
7f2c6ef167 fix: add input-parameter validation 2024-11-26 22:25:42 +08:00
jxxghp
2086651dbe Merge pull request #3235 from wikrin/fix 2024-11-26 22:17:32 +08:00
zhanglijun
132fde2308 Fix season poster download path and Season 0 poster naming 2024-11-26 22:01:00 +08:00
jxxghp
4e27a1e623 fix #3247 2024-11-26 08:25:01 +08:00
Attente
a453831deb Add a parameter to get_dir
- `include_unsorted` indicates whether directory configs whose organize mode is "do not organize" may be included
2024-11-26 03:11:25 +08:00
jxxghp
1035ceb4ac Merge pull request #3245 from wikrin/v2 2024-11-25 23:04:15 +08:00
Attente
b7cb917347 fix(transfer): add library type and category folder support
- Add library_type_folder and library_category_folder parameters to the transfer function
- This enhances the transfer functionality by allowing sorting files into folders based on library type and category
2024-11-25 23:02:17 +08:00
jxxghp
680ad164dc Merge pull request #3236 from InfinityPacer/feature/scheduler 2024-11-25 17:54:05 +08:00
InfinityPacer
aed68253e9 feat(scheduler): expose internal methods for external invocation 2024-11-25 16:33:17 +08:00
InfinityPacer
b83c7a5656 feat(scheduler): support plugin method arguments via func_kwargs 2024-11-25 16:31:30 +08:00
InfinityPacer
491456b0a2 feat(scheduler): support plugin replacement for system services 2024-11-25 16:30:11 +08:00
Attente
84465a6536 Download paths of non-organized directories can now be retrieved by the downloader
Change the input parameter for auto-matching the source storage type
2024-11-25 13:51:04 +08:00
jxxghp
9acbcf4922 v2.1.0 2024-11-25 08:05:07 +08:00
jxxghp
8dc4290695 fix scrape bug 2024-11-25 07:58:17 +08:00
jxxghp
5c95945691 Update README.md 2024-11-24 18:16:37 +08:00
jxxghp
11115d50fb fix dockerfile 2024-11-24 18:14:09 +08:00
jxxghp
7f83d56a7e fix alipan 2024-11-24 17:55:08 +08:00
jxxghp
28805e9e17 fix alipan 2024-11-24 17:45:12 +08:00
jxxghp
88a098abc1 fix log 2024-11-24 17:35:04 +08:00
jxxghp
a3cc9830de fix scraping upload 2024-11-24 17:25:42 +08:00
jxxghp
43623efa99 fix log 2024-11-24 17:19:24 +08:00
jxxghp
ff73b2cb5d fix #3203 2024-11-24 17:11:19 +08:00
jxxghp
6cab14366c Merge pull request #3228 from YemaPT/fix-yemapt-taglist-none 2024-11-24 16:24:38 +08:00
yemapt
576d215d8c fix(yemapt): judge tag list none 2024-11-24 16:22:54 +08:00
jxxghp
a2c10c86bf Merge pull request #3226 from YemaPT/feature-yemapt-optimize 2024-11-24 14:08:04 +08:00
yemapt
21bede3f00 feat(yemapt): update search api and enrich torrent content 2024-11-24 13:45:31 +08:00
jxxghp
0a39322281 Merge pull request #3224 from wikrin/v2 2024-11-24 10:32:47 +08:00
Attente
be323d3da1 fix: reduce input parameters to broaden applicability 2024-11-24 10:22:29 +08:00
jxxghp
fa8860bf62 Merge pull request #3223 from wikrin/v2
fix: incorrect input parameter
2024-11-24 08:56:58 +08:00
Attente
a700958edb fix: incorrect input parameter 2024-11-24 08:54:59 +08:00
jxxghp
9349973d16 Merge pull request #3221 from wikrin/v2 2024-11-24 07:34:42 +08:00
Attente
c0d3637d12 refactor: change library type and category folder parameters to optional 2024-11-24 00:04:08 +08:00
jxxghp
79473ca229 Merge pull request #3196 from wikrin/fix 2024-11-23 23:01:09 +08:00
Attente
fccbe39547 Change the target_directory retrieval logic 2024-11-23 22:41:55 +08:00
Attente
85324acacc Add a storage="local" parameter to get_dir() in the download flow 2024-11-23 22:41:55 +08:00
Attente
9dec4d704b Remove the fileitem parameter from get_dir
- Duplicates `src_path & storage`; pass those two directly when needed
2024-11-23 22:41:55 +08:00
jxxghp
72732277a1 fix alipan 2024-11-23 21:54:03 +08:00
jxxghp
8d737f9e37 fix alipan && rclone get_folder 2024-11-23 21:43:53 +08:00
jxxghp
96b3746caa fix alist delete 2024-11-23 21:29:08 +08:00
jxxghp
c690ea3c39 fix #3214
fix #3199
2024-11-23 21:26:22 +08:00
jxxghp
3282fb88e0 Merge pull request #3219 from mackerel-12138/s0_fix 2024-11-23 20:25:08 +08:00
zhanglijun
b9c2b9a044 Rename format now supports renaming S0 as Specials or SPs 2024-11-23 20:22:37 +08:00
zhanglijun
24b58dc002 Fix S0 scraping issues
Fix incorrect series root-directory detection in some cases
2024-11-23 20:13:01 +08:00
jxxghp
42c56497c6 Merge pull request #3218 from DDS-Derek/issue_rfc 2024-11-23 12:34:52 +08:00
jxxghp
c7512d1580 Merge pull request #3217 from DDS-Derek/fix_tmp 2024-11-23 12:34:39 +08:00
jxxghp
7d25bf7b48 Merge pull request #3215 from mackerel-12138/v2 2024-11-23 12:34:04 +08:00
DDSRem
99daa3a95e chore(issue): add rfc template 2024-11-23 12:31:28 +08:00
jxxghp
0a923bced9 fix storage 2024-11-23 12:29:34 +08:00
DDSRem
06e3b0def2 fix(update): useless tmp directory when not updated 2024-11-23 12:25:46 +08:00
jxxghp
0feecc3eca fix #3204 2024-11-23 11:48:23 +08:00
jxxghp
0afbc58263 fix #3191 prefer the same disk during automatic organizing 2024-11-23 11:31:56 +08:00
jxxghp
7c7561029a fix #3178 support selecting first/second-level categories when organizing manually 2024-11-23 11:19:25 +08:00
zhanglijun
65683999e1 change comment 2024-11-23 11:00:37 +08:00
zhanglijun
f72e26015f delete unused code 2024-11-23 10:58:32 +08:00
zhanglijun
b4e5c50655 Fix the S0 year being None when renaming
Add an "episode date" rename configuration option
2024-11-23 10:55:21 +08:00
jxxghp
f395dc68c3 fix #3209 add a lock around scraping 2024-11-23 10:48:54 +08:00
jxxghp
27cf5bb7e6 feat: send a statistics message when refreshing data via remote interaction 2024-11-23 10:36:48 +08:00
jxxghp
9b573535cd Merge pull request #3201 from InfinityPacer/feature/event 2024-11-22 16:25:52 +08:00
jxxghp
cb32305b86 Merge pull request #3200 from cddjr/fix_subscribe_search_filter 2024-11-22 14:04:08 +08:00
景大侠
f7164450d0 fix: apply subscription rule filtering earlier to avoid skipping due to imdbid matches 2024-11-22 13:47:18 +08:00
InfinityPacer
344862dbd4 feat(event): support smart rename event 2024-11-22 13:41:14 +08:00
InfinityPacer
f1d0e9d50a Revert "fix #3154 avoid concurrent handling of identical events"
This reverts commit 79c637e003.
2024-11-22 12:41:14 +08:00
jxxghp
9ba9e8f41c v2.0.9 2024-11-22 08:11:07 +08:00
jxxghp
78fc5b7017 Merge pull request #3193 from wikrin/fix_any_files 2024-11-22 08:10:12 +08:00
Attente
fe07830b71 fix: media files mistakenly deleted in some cases 2024-11-22 07:45:01 +08:00
jxxghp
350f1faf2a Merge pull request #3189 from InfinityPacer/feature/module 2024-11-21 20:16:06 +08:00
InfinityPacer
103cfe0b47 fix(config): ensure accurate handling of env config updates 2024-11-21 20:08:18 +08:00
jxxghp
0953c1be16 Merge pull request #3187 from InfinityPacer/feature/scheduler 2024-11-21 17:43:29 +08:00
InfinityPacer
c299bf6f7c fix(auth): adjust auth to occur before module init 2024-11-21 17:37:48 +08:00
InfinityPacer
c0eb9d824c Revert "fix(auth): initialize plugin service only during retry auth"
This reverts commit 9f4cf530f8.
2024-11-21 16:41:56 +08:00
jxxghp
ebffdebdb2 refactor: optimize caching strategy 2024-11-21 15:52:08 +08:00
jxxghp
acd9e38477 Merge pull request #3186 from InfinityPacer/feature/scheduler 2024-11-21 14:54:01 +08:00
InfinityPacer
9f4cf530f8 fix(auth): initialize plugin service only during retry auth 2024-11-21 14:49:42 +08:00
jxxghp
84897aa592 fix #3162 2024-11-21 13:50:49 +08:00
jxxghp
23c5982f5a Merge pull request #3185 from InfinityPacer/feature/module 2024-11-21 12:42:05 +08:00
InfinityPacer
1849930b72 feat(qb): add support for ignoring category check via kwargs 2024-11-21 12:35:15 +08:00
jxxghp
4f1d3a7572 fix #3180 2024-11-21 12:13:44 +08:00
jxxghp
824c3ac5d6 fix #3176 2024-11-21 10:25:46 +08:00
jxxghp
1cec6ed6d1 v2.0.8
- Fix cloud-drive QR-code login issue
2024-11-20 20:43:44 +08:00
jxxghp
fff75c7fe2 fix 115 2024-11-20 20:40:32 +08:00
jxxghp
81fecf1e07 fix alipan 2024-11-20 20:39:48 +08:00
jxxghp
ad8f687f8e fix alipan 2024-11-20 20:36:50 +08:00
jxxghp
a3172d7503 fix: decouple QR-code login logic from the underlying modules 2024-11-20 20:17:18 +08:00
jxxghp
8d5e0b26d5 fix: 115 now supports Cookie 2024-11-20 13:14:37 +08:00
jxxghp
b1b980f550 Merge pull request #3171 from Sowevo/v2 2024-11-20 07:07:08 +08:00
Sowevo
8196589cff Merge branch 'jxxghp:v2' into v2 2024-11-19 22:43:31 +08:00
sowevo
cb9f41cb65 Use the full path uniformly for the Plex item_id
When fetching images, handle the case where the external address is Plex's official relay address https://app.plex.tv
2024-11-19 22:41:55 +08:00
jxxghp
cb4981adb3 v2.0.7
- Fixed the forced-directory issue when organizing manually
- Fixed AList being unable to organize files
- Fixed downloaded torrents not using the global UA
- Fixed the 幼儿园 site indexer
- Fixed a resource-type recognition error
- User authentication can now also be completed through the UI
2024-11-19 20:42:25 +08:00
jxxghp
6880b42a84 fix #3161 2024-11-19 20:38:06 +08:00
jxxghp
97054adc61 fix forced directory when organizing manually 2024-11-19 20:22:31 +08:00
jxxghp
de94e5d595 fix #3166 2024-11-19 20:12:27 +08:00
jxxghp
a5a734d091 fix u115 transtype 2024-11-19 18:04:48 +08:00
jxxghp
efb607d22f Merge remote-tracking branch 'origin/v2' into v2 2024-11-19 13:31:52 +08:00
jxxghp
d0b2787a7c fix #1832 2024-11-19 13:11:54 +08:00
jxxghp
d5988ff443 Merge pull request #3165 from InfinityPacer/feature/module 2024-11-19 12:24:37 +08:00
InfinityPacer
96b4f1b575 feat(site): set default site timeout to 15 seconds 2024-11-19 11:10:01 +08:00
jxxghp
bb6b8439c7 fix siteauth scheduler 2024-11-19 08:39:39 +08:00
jxxghp
9cdce4509d fix siteauth schema 2024-11-19 08:25:12 +08:00
jxxghp
3956ab1fe8 add siteauth api 2024-11-19 08:18:26 +08:00
jxxghp
14686fdb03 Merge pull request #3159
fix: remove the redundant `subscription extra parameters` filter from resource search
2024-11-18 23:25:03 +08:00
Attente
32892ab747 fix: remove the redundant subscription extra-parameters filter from resource search 2024-11-18 17:03:49 +08:00
jxxghp
79c637e003 fix #3154 avoid concurrent handling of identical events 2024-11-18 08:01:43 +08:00
jxxghp
d7c260715a fix 115 2024-11-17 21:22:47 +08:00
jxxghp
2dfb089a39 fix bug 2024-11-17 21:04:24 +08:00
jxxghp
e04179525b Merge pull request #3146 from InfinityPacer/feature/module
chore(qbittorrent): update qbittorrent-api to version 2024.11.69
2024-11-17 15:59:43 +08:00
jxxghp
d044364c68 fix: restart required after 115 QR-code login 2024-11-17 15:58:29 +08:00
InfinityPacer
a0f912ffbe chore(qbittorrent): update qbittorrent-api to version 2024.11.69 2024-11-17 15:43:06 +08:00
jxxghp
d7c8b08d7a fix 115 2024-11-17 15:23:30 +08:00
jxxghp
f752082e1b v2.0.6 2024-11-17 15:15:42 +08:00
jxxghp
201ec21adf Improve Dev updates to fetch the latest frontend 2024-11-17 15:14:00 +08:00
jxxghp
57590323b2 fix ext 2024-11-17 14:56:42 +08:00
jxxghp
4636c7ada7 fix #3141 2024-11-17 14:14:13 +08:00
jxxghp
4c86a4da5f fix alist token 2024-11-17 14:07:39 +08:00
jxxghp
8dc9acf071 fix 115 2024-11-17 14:03:03 +08:00
jxxghp
abebae3664 Merge pull request #3139 from wdmcheng/v2 2024-11-17 12:00:41 +08:00
wdmcheng
4f7d8866a0 fix: files recognized as folders after local-storage upload 2024-11-17 11:50:33 +08:00
jxxghp
cceb22d729 fix log level 2024-11-17 08:56:02 +08:00
jxxghp
89edbb93f5 fix #3135 2024-11-17 08:52:15 +08:00
jxxghp
4ffb406172 Update requirements.in 2024-11-17 02:23:07 +08:00
jxxghp
293e417865 feat: switch to python-115 2024-11-17 02:10:45 +08:00
jxxghp
510c20dc70 fix 2024-11-16 21:49:54 +08:00
jxxghp
8e1810955b fix #3082 2024-11-16 20:56:32 +08:00
jxxghp
73f732fe1d fix #3126 harden directory deletion 2024-11-16 20:29:17 +08:00
jxxghp
d6f5160959 fix mteam message count 99999 2024-11-16 19:55:41 +08:00
jxxghp
d64a7086dd fix #3120 2024-11-16 13:32:58 +08:00
jxxghp
825d9b768f Update version.py 2024-11-16 11:18:23 +08:00
jxxghp
f758a47f4f Merge pull request #3122 from DDS-Derek/fix_update 2024-11-16 11:02:04 +08:00
jxxghp
fc69d7e6c1 fix 2024-11-16 10:55:17 +08:00
DDSRem
edc30266c8 fix(update): clear tmp directory causes data loss
fix https://github.com/jxxghp/MoviePilot/issues/2996
2024-11-16 10:53:33 +08:00
jxxghp
665da9dad3 Merge pull request #3121 from DDS-Derek/fix_nginx 2024-11-16 10:37:23 +08:00
DDSRem
4048acf60e feat(docker): nginx client_max_body_size configuration
fix https://github.com/jxxghp/MoviePilot/issues/2951
fix https://github.com/jxxghp/MoviePilot/issues/2720
2024-11-16 10:23:28 +08:00
jxxghp
f116229ecc fix #3108 2024-11-16 09:50:55 +08:00
jxxghp
f6a2efb256 fix #3116 2024-11-16 09:25:46 +08:00
jxxghp
af3a50f7ea feat: subscriptions can now be bound to a downloader 2024-11-16 09:00:18 +08:00
jxxghp
44a0e5b4a7 fix #3120 2024-11-16 08:41:30 +08:00
jxxghp
f40a1246ff Merge pull request #3118 from wikrin/database 2024-11-16 07:54:53 +08:00
jxxghp
dd890c410c Merge pull request #3117 from wikrin/site 2024-11-16 07:54:42 +08:00
Attente
8fd7f2c875 fix: the downloader selected for resource-search downloads not taking effect 2024-11-16 01:44:20 +08:00
Attente
8c09b3482f Upgrade the database 2024-11-16 00:28:13 +08:00
Attente
0066247a2b feat: add downloader selection to site management 2024-11-16 00:22:04 +08:00
jxxghp
c7926fc575 Merge pull request #3113 from InfinityPacer/feature/module 2024-11-15 21:59:50 +08:00
InfinityPacer
ac5b9fd4e5 fix(rclone): specify UTF-8 encoding when save config 2024-11-15 17:42:11 +08:00
jxxghp
42dc539df6 fix #3013 2024-11-15 16:17:51 +08:00
jxxghp
e60d785a11 fix meta re 2024-11-15 13:50:33 +08:00
jxxghp
33558d6197 Merge pull request #3102 from InfinityPacer/feature/module 2024-11-15 12:01:21 +08:00
InfinityPacer
46d2ffeb75 fix #3100 2024-11-15 09:08:32 +08:00
jxxghp
8e4bce2f95 fix #3079 2024-11-15 08:03:23 +08:00
jxxghp
00f1f06e3d fix #3079 2024-11-15 08:00:22 +08:00
jxxghp
fe37bde993 fix offset ep 2024-11-14 22:29:14 +08:00
jxxghp
6c3bb8893f Merge pull request #3097 from wdmcheng/v2 2024-11-14 21:47:59 +08:00
wdmcheng
ca4d64819d fix: Alist timestamps parsed incorrectly in some cases 2024-11-14 21:39:13 +08:00
jxxghp
0a53635d35 Merge pull request #3096 from rexshao/v2 2024-11-14 21:15:47 +08:00
rexshao
921e24b049 Update twofa.py
Fix a bug where 2FA could not generate a code from the secret
2024-11-14 21:08:38 +08:00
jxxghp
24c21ed04e fix name 2024-11-14 19:58:37 +08:00
jxxghp
777785579e v2.0.4
- Fixed directories not being found when organizing manually
- Fixed 白兔 site info retrieval and login-status detection
- Fixed an indexing error
- Improved the resource download dialog
- Added a manual-organize option to directory settings
- Added log output when QB cannot be connected
- Storage now supports mounting AList
2024-11-14 19:48:16 +08:00
jxxghp
8061a06fe4 Merge remote-tracking branch 'origin/v2' into v2 2024-11-14 18:09:49 +08:00
jxxghp
438ce6ee3e fix SiteUserData schema 2024-11-14 18:09:40 +08:00
jxxghp
77e19c3de7 Merge pull request #3095 from InfinityPacer/feature/module 2024-11-14 17:25:31 +08:00
InfinityPacer
49881c9c54 fix #2952 2024-11-14 17:21:47 +08:00
jxxghp
5da28f702f fix alist 2024-11-14 14:54:22 +08:00
jxxghp
dfbd9f3b30 add alist storage card 2024-11-14 12:57:34 +08:00
jxxghp
d6c6ee9b4e fix #3092 2024-11-14 12:38:02 +08:00
jxxghp
4b27404ee5 Merge pull request #3091 from InfinityPacer/feature/cache 2024-11-14 11:57:26 +08:00
jxxghp
3a826b343a fix #3090 2024-11-14 11:52:56 +08:00
jxxghp
851aa5f9e2 fix #3031 2024-11-14 11:49:57 +08:00
InfinityPacer
9ef1f56ea1 feat(cache): add proxy support for specific domains in image caching 2024-11-14 10:21:00 +08:00
jxxghp
78d51b7621 Merge pull request #3031 from Akimio521/feat/filemanager-alist
feat: 增加 filemanager storages 类型:Alist
2024-11-14 08:12:31 +08:00
jxxghp
c12e2bdba7 fix manual organize bug 2024-11-14 08:04:52 +08:00
jxxghp
fda11f427c Merge pull request #3087 from amtoaer/fix_hares 2024-11-14 06:49:12 +08:00
amtoaer
d809330225 fix: 白兔俱乐部 site info retrieval and login-status detection 2024-11-14 01:59:30 +08:00
jxxghp
ce4a2314d8 fix directory-matching bug when organizing manually 2024-11-13 21:30:24 +08:00
amtoaer
c19e825e94 fix: 白兔俱乐部 login detection 2024-11-13 18:30:52 +08:00
jxxghp
c45d64b554 Merge pull request #3075 from wikrin/v2 2024-11-12 22:25:53 +08:00
Attente
0689b2e331 fix: episode_offset 2024-11-12 22:22:56 +08:00
jxxghp
e6105fdab5 **v2.0.3**
- Fixed incorrect retrieval of the latest version number
- Fixed rename failures in file management
- Fixed season.nfo scraping errors when organizing multiple seasons
- Fixed incorrect Rclone storage-capacity detection
- Improved custom rules: episode file-size rules now filter by average size per episode
- Empty parent directories are now removed automatically when organizing in move mode
- Added a switch for automatically reading and sending site messages
- Added a database WAL-mode switch; enabling it improves database performance
2024-11-12 18:48:15 +08:00
jxxghp
df34c7e2da Merge pull request #3074 from InfinityPacer/feature/db 2024-11-12 17:30:34 +08:00
InfinityPacer
24cc36033f feat(db): add support for SQLite WAL mode 2024-11-12 17:17:16 +08:00
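Commit 24cc36033f above adds a SQLite WAL-mode option. For reference, write-ahead logging in SQLite is enabled with a single pragma — shown here with the standard-library sqlite3 module; how MoviePilot actually wires the toggle into its config is not visible in this log:

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "data.db")
conn = sqlite3.connect(db_path)

# Switch the journal to write-ahead logging; the pragma returns the
# resulting mode, and the setting persists in the database file.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # → wal
conn.close()
```

WAL lets readers proceed concurrently with a writer, which is why such a switch can improve database performance under load.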
jxxghp
aafb2bc269 fix #3071 add site-message switch 2024-11-12 13:59:13 +08:00
jxxghp
9dde56467a Update __init__.py 2024-11-12 12:24:05 +08:00
jxxghp
f9d62e7451 fix Rclone storage-capacity detection 2024-11-12 10:10:37 +08:00
jxxghp
f1f379966a fix: retrieval of the latest V2 version number 2024-11-12 08:37:07 +08:00
jxxghp
942c9ae545 Merge pull request #3058 from wikrin/fix-scrape_metadata 2024-11-10 14:02:31 +08:00
jxxghp
89be4f6200 Merge pull request #3054 from wikrin/fix-rename 2024-11-10 14:02:01 +08:00
Attente
bcbf729fd4 Fix season.nfo scraping errors when organizing multiple seasons 2024-11-10 13:43:59 +08:00
Attente
7fc5b7678e Change the evaluation order 2024-11-10 07:47:49 +08:00
Attente
e20578685a fix: rename failures 2024-11-09 23:59:58 +08:00
jxxghp
40b82d9cb6 fix #3042 delete empty folders in move mode 2024-11-09 18:23:08 +08:00
jxxghp
9b2fccee01 feat: episode file-size filtering now uses average size per episode 2024-11-09 18:01:50 +08:00
jxxghp
87bbee8c36 Merge pull request #3038 from InfinityPacer/feature/setup 2024-11-08 18:16:32 +08:00
InfinityPacer
4412ce9f17 fix(playwright): add check for HTTPS proxy 2024-11-08 18:08:45 +08:00
jxxghp
35b78b0e66 Merge pull request #3034 from lybtt/fix_update_bash 2024-11-08 16:44:55 +08:00
lvyb
d97fcc4a96 Fix version-number comparison in the update script 2024-11-08 16:37:36 +08:00
Akimio521
c8e337440e feat(storages): add Alist storage type 2024-11-08 14:32:30 +08:00
Akimio521
726e7dfbd4 feat(StringUtils): add url_eqote method 2024-11-08 14:31:08 +08:00
jxxghp
a2096e8e0f v2.0.2 2024-11-08 13:26:05 +08:00
jxxghp
75e80158e5 Merge pull request #3030 from Akimio521/fix(tmdb/douban)-cache 2024-11-08 10:48:23 +08:00
Akimio521
d42bd14288 fix: prefer id as the cache key to avoid key collisions 2024-11-08 10:35:29 +08:00
jxxghp
28f6e7f9bb fix https://github.com/jxxghp/MoviePilot-Plugins/issues/540 2024-11-07 18:58:32 +08:00
jxxghp
2aadbeaed7 Merge pull request #3025 from amtoaer/feat_jellyfin_item_path 2024-11-07 18:48:15 +08:00
jxxghp
3f6b4bf3f2 Merge pull request #3022 from MMZOX/v2 2024-11-07 18:46:59 +08:00
amtoaer
f73750fcf7 feat: populate the item_path field for jellyfin webhook events 2024-11-07 15:01:19 +08:00
MMZOX
59df673eb5 try to fix #2965 2024-11-07 13:45:06 +08:00
jxxghp
e29ab92cd1 fix #3008 2024-11-07 08:27:05 +08:00
jxxghp
3777045a17 fix #3012 2024-11-07 08:24:22 +08:00
jxxghp
16165c0fcc fix #3018 2024-11-07 08:20:11 +08:00
jxxghp
4d377d5e04 Merge pull request #3016 from InfinityPacer/feature/scheduler 2024-11-06 20:01:14 +08:00
InfinityPacer
76c84f9bac fix(scheduler): optimize job registration and removal logic 2024-11-06 19:37:22 +08:00
jxxghp
88f91152d6 Merge pull request #3009 from lybtt/fix_local_storage 2024-11-06 10:52:15 +08:00
lvyb
dfdb88c5ac fix softlink 2024-11-06 09:30:53 +08:00
jxxghp
ec183b6d0d release v2 2024-11-05 21:24:39 +08:00
jxxghp
9d047dddb4 Update mediaserver.py 2024-11-05 18:39:38 +08:00
jxxghp
2d83880830 Update mediaserver.py 2024-11-05 18:39:00 +08:00
jxxghp
7e6ef04554 fix: optimize media-server image retrieval performance #2993 2024-11-05 18:21:08 +08:00
jxxghp
08aa5fe50a fix bug #2993 2024-11-05 10:07:25 +08:00
jxxghp
656cc1fe01 Merge pull request #3004 from InfinityPacer/feature/module 2024-11-05 07:03:49 +08:00
InfinityPacer
8afaa683cc fix(config): update DB_MAX_OVERFLOW to 500 2024-11-05 00:48:22 +08:00
InfinityPacer
4d3aa0faf3 fix(config): update in-memory setting only on env update 2024-11-05 00:48:02 +08:00
jxxghp
9e08b9129a Merge pull request #2994 from Aqr-K/patch-1
Update system.py
2024-11-04 10:20:42 +08:00
jxxghp
0584bda470 fix bug 2024-11-03 19:59:33 +08:00
jxxghp
df8531e4d8 fix #2993 format error and parameter-passing warning 2024-11-03 19:51:43 +08:00
jxxghp
cfc51c305b Update mediaserver.py 2024-11-03 14:41:42 +08:00
jxxghp
28759f6c81 Merge pull request #2998 from Akimio521/fix/wallpapers 2024-11-03 14:36:08 +08:00
jxxghp
15b701803f Merge pull request #2997 from Akimio521/perfect/cn_name 2024-11-03 14:35:49 +08:00
Akimio521
72774f80a5 fix: wallpapers did not return a list 2024-11-03 14:22:04 +08:00
Akimio521
341526b4d9 perfect(MetaAnime): the self.cn_name attribute is no longer simplified; the simplified-Chinese name is looked up only when querying TMDB and Douban 2024-11-03 14:05:12 +08:00
jxxghp
b6bfd215bc Merge pull request #2993 from Akimio521/v2 2024-11-03 06:54:35 +08:00
Akimio521
6801032f7a fix: avoid exposing the Plex address and Plex Token in anonymous contexts 2024-11-02 16:18:17 +08:00
Akimio521
af2075578c feat(Plex): add a URL option for fetching images from Plex, allowing a choice between returning a Plex URL or a TMDB URL 2024-11-02 16:17:25 +08:00
Akimio521
b46ede86fc fix: remove unnecessary config imports 2024-11-02 14:57:38 +08:00
Aqr-K
a104001087 Update system.py 2024-11-02 14:27:46 +08:00
Akimio521
88e8790678 feat: optionally use the poster of the media server's latest library entry as the login-page wallpaper 2024-11-02 14:17:52 +08:00
Akimio521
a59d73a68a feat(PlexModule): implement fetching images of the media server's latest library entries 2024-11-02 14:17:52 +08:00
Akimio521
522d970731 feat(JellyfinModule): implement fetching images of the media server's latest library entries 2024-11-02 14:17:52 +08:00
Akimio521
51a0f97580 feat(EmbyModule): implement fetching images of the media server's latest library entries 2024-11-02 14:17:52 +08:00
jxxghp
0ef6d7bbf2 Merge pull request #2991 from wikrin/fix-get_dir 2024-11-02 06:27:43 +08:00
Attente
d818ceb8e6 fix: manual organizing failed to resolve the correct target path in certain scenarios. 2024-11-02 02:01:29 +08:00
jxxghp
a69d56d9fd Merge pull request #2978 from wikrin/v2 2024-10-30 18:57:40 +08:00
jxxghp
957df2cf66 Merge pull request #2977 from wdmcheng/v2 2024-10-30 18:57:19 +08:00
wdmcheng
d863a7cb7f Improve SystemUtils.list_files compatibility with special characters (e.g. '[]') when walking directories 2024-10-30 18:29:14 +08:00
Attente
021fcb17bb fix: #2974 #2963 2024-10-30 18:16:52 +08:00
jxxghp
b4e233678d Merge pull request #2975 from thsrite/v2 2024-10-30 16:56:57 +08:00
thsrite
5e53825684 fix: add a switch for checking whether the resource already exists in the local media library, enabled on demand 2024-10-30 16:20:37 +08:00
jxxghp
236d860133 fix monitor 2024-10-30 08:25:26 +08:00
jxxghp
76d939b665 Update __init__.py 2024-10-30 07:16:44 +08:00
jxxghp
63d35dfeef fix transfer 2024-10-30 07:12:58 +08:00
jxxghp
3dd7d36760 Merge pull request #2970 from wikrin/fix-monitor 2024-10-29 17:49:00 +08:00
jxxghp
e4b0e4bf33 Merge pull request #2969 from wikrin/fix-notify 2024-10-29 06:46:25 +08:00
jxxghp
3504c0cdd6 Merge pull request #2968 from wikrin/fix-get_dir 2024-10-29 06:45:57 +08:00
Attente
980feb3cd2 fix: target path not as expected when directory monitoring in organize mode 2024-10-29 06:13:18 +08:00
Attente
a1daf884e6 Handle media library directories that are configured but have auto-organize disabled 2024-10-29 06:06:02 +08:00
Attente
f0e4d9bf63 Remove redundant checks
```
if src_path and download_path != src_path
if dest_path and library_path != dest_path
```
Directory configs in `Settings -> Storage & Directories` with no `download directory` or `media library directory` set are already excluded
2024-10-29 03:43:57 +08:00
Attente
15397a522e fix: notifications not sent when organizing an entire directory 2024-10-29 02:12:17 +08:00
Attente
1c00c47a9b fix: #2963 a possibly problematic fix
`def get_dir` is referenced only in download-related modules; try removing it in the public beta and watch for feedback
2024-10-29 00:49:36 +08:00
jxxghp
e9a6f08cc8 Merge pull request #2958 from thsrite/v2 2024-10-28 11:38:17 +08:00
thsrite
7ba2d60925 fix 2024-10-28 11:36:18 +08:00
thsrite
9686a20c2f fix #2905 subscription search ignored subscription-configured rules such as resolution 2024-10-28 11:29:34 +08:00
jxxghp
6029cf283b Merge pull request #2953 from wikrin/BDMV 2024-10-28 09:55:34 +08:00
Attente
4d6ed7d552 - Move the calculation of the total size of all files in a directory into the modules.filemanager module. 2024-10-28 08:31:32 +08:00
Attente
8add8ed631 Add comments 2024-10-27 23:32:15 +08:00
Attente
ab78b10287 Move the check out of the loop to reduce is_bluray_dir calls 2024-10-27 23:28:28 +08:00
Attente
94ed377843 - Fix errors when organizing Blu-ray disc folders
- Add type annotations
2024-10-27 23:02:45 +08:00
jxxghp
4cb85a2b4c Merge pull request #2949 from wikrin/fix 2024-10-27 07:56:18 +08:00
Attente
b2a88b2791 Fix comments 2024-10-27 02:10:10 +08:00
Attente
88f451147e fix: not sure whether this counts as a bug
- Fix `include` and `exclude` not taking effect during the `new subscription search` stage
2024-10-27 01:36:38 +08:00
jxxghp
51099ace65 Merge pull request #2947 from InfinityPacer/feature/push 2024-10-26 17:11:33 +08:00
InfinityPacer
0564bdf020 fix(event): ensure backward compatibility 2024-10-26 15:56:10 +08:00
jxxghp
bbac709970 Update __init__.py 2024-10-26 14:17:26 +08:00
jxxghp
bb9690c873 Merge pull request #2946 from InfinityPacer/feature/push 2024-10-26 13:40:38 +08:00
jxxghp
00be46b74f Merge pull request #2944 from wikrin/fix-message 2024-10-26 08:10:00 +08:00
jxxghp
2af21765e0 Merge pull request #2942 from wikrin/v2 2024-10-26 08:09:33 +08:00
Attente
646349ac35 fix: correct msg => message 2024-10-26 07:10:42 +08:00
InfinityPacer
915388c109 feat(commands): support sending CommandRegister events for clients 2024-10-26 04:51:45 +08:00
InfinityPacer
3c24ae5351 feat(telegram): add delete_commands 2024-10-26 04:48:55 +08:00
InfinityPacer
e876ba38a7 fix(wechat): add error handling 2024-10-26 04:47:42 +08:00
Attente
01546baddc fix: 2941
The return value of `delete_media_file` is now:
- the `file deletion status` when `other media files exist in the directory`
- the `directory deletion status` when `no other media files exist in the directory`
2024-10-26 00:38:42 +08:00
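Commit 01546baddc above spells out the return-value semantics. A minimal sketch of those semantics — the real `delete_media_file` signature, extension list, and internals are assumptions, not copied from the project:

```python
import shutil
from pathlib import Path
from typing import Tuple

MEDIA_EXTS = {".mkv", ".mp4", ".avi"}  # assumed media extensions


def delete_media_file(path: Path) -> Tuple[bool, str]:
    """Delete a media file; return the file deletion status when other media
    files remain in the directory, else the directory deletion status."""
    try:
        path.unlink()
    except OSError as err:
        return False, f"file deletion failed: {err}"
    parent = path.parent
    # Are any media files left alongside the one we just removed?
    if any(p.suffix.lower() in MEDIA_EXTS for p in parent.iterdir()):
        return True, "file deleted; directory kept (other media present)"
    try:
        shutil.rmtree(parent)  # no media left: remove the directory too
    except OSError as err:
        return False, f"directory deletion failed: {err}"
    return True, "file and directory deleted"
```

The point of the change is that callers get a status for whichever deletion actually mattered: the file when the directory survives, the directory when it is removed as well.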
jxxghp
133195cc0a Merge pull request #2940 from thsrite/v2 2024-10-25 19:08:02 +08:00
thsrite
e58911397a fix dc3240e9 2024-10-25 19:06:32 +08:00
jxxghp
10553ad6fc Merge pull request #2939 from DDS-Derek/dev 2024-10-25 18:24:03 +08:00
DDSRem
672d430322 fix(docker): nginx directory permission issue
fix https://github.com/jxxghp/MoviePilot/issues/2892
2024-10-25 18:18:12 +08:00
jxxghp
be785f358d Merge pull request #2938 from InfinityPacer/feature/push 2024-10-25 17:37:11 +08:00
InfinityPacer
eff8a6c497 feat(wechat): add retry mechanism for message requests 2024-10-25 17:27:58 +08:00
InfinityPacer
5d89ad965f fix(telegram): ensure image cache path exists 2024-10-25 17:26:56 +08:00
jxxghp
1651f4677b Merge pull request #2937 from thsrite/v2 2024-10-25 16:59:29 +08:00
thsrite
dc3240e90a fix torrent-filter include rules 2024-10-25 16:05:54 +08:00
jxxghp
e2ee930ff4 Merge pull request #2935 from thsrite/v2 2024-10-25 13:33:46 +08:00
thsrite
90901d7297 fix: exclude erroneous entries when fetching the latest site data 2024-10-25 13:24:29 +08:00
jxxghp
1b76f1c851 feat: send unread site messages 2024-10-25 13:10:35 +08:00
jxxghp
3d9853adcf fix #2933 2024-10-25 12:58:06 +08:00
jxxghp
81384c358e Merge pull request #2933 from wikrin/v2-transfer 2024-10-25 12:12:30 +08:00
InfinityPacer
a46463683d Merge branch 'v2' of https://github.com/jxxghp/MoviePilot into feature/push 2024-10-25 10:51:38 +08:00
jxxghp
4cf3b49324 Merge pull request #2932 from InfinityPacer/feature/module 2024-10-25 06:52:57 +08:00
Attente
1f6fa22aa1 fix: the list built recursively by storagechain.list_files being overwritten 2024-10-25 02:54:41 +08:00
Attente
d108b0da78 Unindent only, no other changes
Reduce `for` nesting and aggregate the directory traversal, which yields a more accurate `file count`
2024-10-25 01:28:59 +08:00
Attente
0ee21b38de fix:
- Fix entire folders being skipped when the first subdirectory contains no target files
- Also organize audio tracks alongside
2024-10-25 01:27:39 +08:00
InfinityPacer
b1858f4849 fix #2931 2024-10-25 00:42:00 +08:00
InfinityPacer
ac086a7640 refactor(wechat): optimize message handling and add menu deletion 2024-10-24 20:27:41 +08:00
jxxghp
1d252f4eb2 Merge pull request #2930 from InfinityPacer/feature/push 2024-10-24 19:26:50 +08:00
jxxghp
ab354ef0e8 Merge pull request #2929 from InfinityPacer/feature/setup 2024-10-24 19:25:48 +08:00
jxxghp
167cba2dbb Merge pull request #2928 from InfinityPacer/feature/module 2024-10-24 19:25:01 +08:00
InfinityPacer
9cf7547a8c fix(downloader): ensure default downloader config fallback 2024-10-24 19:12:07 +08:00
InfinityPacer
823b81784e fix(setup): optimize logging 2024-10-24 16:50:03 +08:00
InfinityPacer
d9effb54ee feat(commands): support background initialization 2024-10-24 16:26:37 +08:00
jxxghp
1a8d9044d7 fix #2926 2024-10-24 14:28:30 +08:00
InfinityPacer
0a2ce11eb0 fix(setup): adjust dir name 2024-10-24 12:48:54 +08:00
jxxghp
42b5dd4178 fix #2924 2024-10-24 12:39:08 +08:00
InfinityPacer
2bae866f70 Merge branch 'v2' of https://github.com/jxxghp/MoviePilot into feature/setup 2024-10-24 11:21:45 +08:00
InfinityPacer
2470a98491 fix(setup): remove pkg_resources import and add working_set 2024-10-24 11:21:37 +08:00
jxxghp
9d70b117d7 Merge remote-tracking branch 'origin/v2' into v2 2024-10-24 11:14:45 +08:00
jxxghp
1fad9d9904 Merge pull request #2923 from InfinityPacer/feature/setup 2024-10-24 10:50:20 +08:00
jxxghp
dc1533d5e8 Merge pull request #2922 from thsrite/v2 2024-10-24 10:49:10 +08:00
thsrite
e0cfb4fd6d fix: v2 plugins lost after restart 2024-10-24 10:38:55 +08:00
jxxghp
119919da51 fix: error when organizing auxiliary files 2024-10-24 09:56:39 +08:00
InfinityPacer
684e518b87 fix(setup): remove unnecessary comments 2024-10-24 09:53:44 +08:00
jxxghp
50febd6b2c update 2024-10-24 07:14:37 +08:00
InfinityPacer
86dec5aec2 feat(setup): complete missing dependencies installation 2024-10-24 02:55:54 +08:00
jxxghp
fa021de2ae Merge pull request #2918 from InfinityPacer/feature/event 2024-10-23 20:15:51 +08:00
InfinityPacer
874572253c feat(auth): integrate Plex auxiliary authentication support 2024-10-23 20:11:23 +08:00
jxxghp
059f7f8146 update 2024-10-23 20:09:02 +08:00
121 changed files with 5500 additions and 2203 deletions

.github/ISSUE_TEMPLATE/rfc.yml (new file)

@@ -0,0 +1,45 @@
name: Feature Proposal
description: Request for Comments
title: "[RFC]"
labels: ["RFC"]
body:
  - type: markdown
    attributes:
      value: |
        An RFC is positioned as **"a document used by developers to review the technical design/approach before the concrete development of a feature/refactor"**.
        Its purpose is to make clear among collaborating developers "what will be done" and "how exactly it will be done", and to let all developers take part in an open, transparent discussion,
        so that the resulting impact can be assessed and discussed (overlooked considerations, backward compatibility, conflicts with existing features).
        A proposal therefore focuses on describing the **approach, design, and steps** for solving the problem.
        If you only want to discuss whether to add or improve a feature itself, please use -> [Issue: Feature Request](https://github.com/jxxghp/MoviePilot/issues/new?assignees=&labels=feature+request&projects=&template=feature_request.yml&title=%5BFeature+Request%5D%3A+)
  - type: textarea
    id: background
    attributes:
      label: Background or problem
      description: Briefly describe the problem encountered or what needs to change. You may reference other issues, discussions, documents, etc.
    validations:
      required: true
  - type: textarea
    id: goal
    attributes:
      label: "Goal & approach summary"
      description: Briefly describe the **expected outcome** once this proposal is implemented, and roughly outline the approach/steps to be taken and what impact it may or may not have.
    validations:
      required: true
  - type: textarea
    id: design
    attributes:
      label: "Design & implementation steps"
      description: |
        Describe your concrete design in detail; consider breaking it into a list or bullet points, describing step by step how it will be implemented and the related details.
        This part does not need to be complete in one go; it can still be edited after the proposal issue is created.
    validations:
      required: false
  - type: textarea
    id: alternative
    attributes:
      label: "Alternatives & comparison"
      description: |
        [Optional] What other approaches were considered to achieve the goal, and how do they compare?
    validations:
      required: false


@@ -5,10 +5,7 @@ on:
branches:
- v2
paths:
- '.github/workflows/build.yml'
- 'Dockerfile'
- 'version.py'
- 'requirements.in'
jobs:
Docker-build:

.gitignore

@@ -12,7 +12,7 @@ app/helper/*.bin
app/plugins/**
!app/plugins/__init__.py
config/cookies/**
config/user.db
config/user.db*
config/sites/**
config/logs/
config/temp/


@@ -10,10 +10,7 @@ ENV LANG="C.UTF-8" \
UMASK=000 \
PORT=3001 \
NGINX_PORT=3000 \
PROXY_HOST="" \
MOVIEPILOT_AUTO_UPDATE=false \
AUTH_SITE="iyuu" \
IYUU_SIGN=""
MOVIEPILOT_AUTO_UPDATE=release
WORKDIR "/app"
RUN apt-get update -y \
&& apt-get upgrade -y \


@@ -6,6 +6,7 @@
![GitHub repo size](https://img.shields.io/github/repo-size/jxxghp/MoviePilot?style=for-the-badge)
![GitHub issues](https://img.shields.io/github/issues/jxxghp/MoviePilot?style=for-the-badge)
![Docker Pulls](https://img.shields.io/docker/pulls/jxxghp/moviepilot?style=for-the-badge)
![Docker Pulls V2](https://img.shields.io/docker/pulls/jxxghp/moviepilot-v2?style=for-the-badge)
![Platform](https://img.shields.io/badge/platform-Windows%20%7C%20Linux%20%7C%20Synology-blue?style=for-the-badge)


@@ -9,7 +9,9 @@ from app.core.context import MediaInfo, Context, TorrentInfo
from app.core.metainfo import MetaInfo
from app.core.security import verify_token
from app.db.models.user import User
from app.db.systemconfig_oper import SystemConfigOper
from app.db.user_oper import get_current_active_user
from app.schemas.types import SystemConfigKey
router = APIRouter()
@@ -49,7 +51,7 @@ def download(
torrent_info=torrentinfo
)
did = DownloadChain().download_single(context=context, username=current_user.name,
downloader=downloader, save_path=save_path)
downloader=downloader, save_path=save_path, source="Manual")
if not did:
return schemas.Response(success=False, message="任务添加失败")
return schemas.Response(success=True, data={
@@ -82,7 +84,7 @@ def add(
torrent_info=torrentinfo
)
did = DownloadChain().download_single(context=context, username=current_user.name,
downloader=downloader, save_path=save_path)
downloader=downloader, save_path=save_path, source="Manual")
if not did:
return schemas.Response(success=False, message="任务添加失败")
return schemas.Response(success=True, data={
@@ -111,6 +113,17 @@ def stop(hashString: str,
return schemas.Response(success=True if ret else False)
@router.get("/clients", summary="查询可用下载器", response_model=List[dict])
def clients(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Query available downloaders
"""
downloaders: List[dict] = SystemConfigOper().get(SystemConfigKey.Downloaders)
if downloaders:
return [{"name": d.get("name"), "type": d.get("type")} for d in downloaders if d.get("enabled")]
return []
@router.delete("/{hashString}", summary="删除下载任务", response_model=schemas.Response)
def delete(hashString: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:


@@ -1,4 +1,3 @@
from pathlib import Path
from typing import List, Any
from fastapi import APIRouter, Depends
@@ -6,7 +5,6 @@ from sqlalchemy.orm import Session
from app import schemas
from app.chain.storage import StorageChain
from app.core.config import settings
from app.core.event import eventmanager
from app.core.security import verify_token
from app.db import get_db
@@ -84,38 +82,20 @@ def delete_transfer_history(history_in: schemas.TransferHistory,
"""
history: TransferHistory = TransferHistory.get(db, history_in.id)
if not history:
return schemas.Response(success=False, msg="记录不存在")
return schemas.Response(success=False, message="记录不存在")
# Delete the media library file
if deletedest and history.dest_fileitem:
dest_fileitem = schemas.FileItem(**history.dest_fileitem)
state = StorageChain().delete_file(dest_fileitem)
state = StorageChain().delete_media_file(fileitem=dest_fileitem, mtype=MediaType(history.type))
if not state:
return schemas.Response(success=False, msg=f"{dest_fileitem.path}删除失败")
# Parent directory
if history.type == MediaType.TV.value:
dir_path = Path(dest_fileitem.path).parent.parent
else:
dir_path = Path(dest_fileitem.path).parent
dir_item = StorageChain().get_file_item(storage=dest_fileitem.storage, path=dir_path)
if dir_item:
files = StorageChain().list_files(dir_item, recursion=True)
if files:
# Check whether any other media files remain
media_file_exist = False
for file in files:
if file.extension and f".{file.extension.lower()}" in settings.RMT_MEDIAEXT:
media_file_exist = True
break
# Delete the empty directory
if not media_file_exist:
StorageChain().delete_file(dir_item)
return schemas.Response(success=False, message=f"{dest_fileitem.path} 删除失败")
# Delete the source file
if deletesrc and history.src_fileitem:
src_fileitem = schemas.FileItem(**history.src_fileitem)
state = StorageChain().delete_file(src_fileitem)
state = StorageChain().delete_media_file(src_fileitem)
if not state:
return schemas.Response(success=False, msg=f"{src_fileitem.path}删除失败")
return schemas.Response(success=False, message=f"{src_fileitem.path} 删除失败")
# Send event
eventmanager.send_event(
EventType.DownloadFileDeleted,


@@ -7,6 +7,7 @@ from fastapi.security import OAuth2PasswordRequestForm
from app import schemas
from app.chain.tmdb import TmdbChain
from app.chain.user import UserChain
from app.chain.mediaserver import MediaServerChain
from app.core import security
from app.core.config import settings
from app.helper.sites import SitesHelper
@@ -53,10 +54,12 @@ def wallpaper() -> Any:
"""
Get a movie poster for the login page
"""
if settings.WALLPAPER == "tmdb":
url = TmdbChain().get_random_wallpager()
else:
if settings.WALLPAPER == "bing":
url = WebUtils.get_bing_wallpaper()
elif settings.WALLPAPER == "mediaserver":
url = MediaServerChain().get_latest_wallpaper()
else:
url = TmdbChain().get_random_wallpager()
if url:
return schemas.Response(
success=True,
@@ -70,7 +73,9 @@ def wallpapers() -> Any:
"""
Get movie posters for the login page
"""
if settings.WALLPAPER == "tmdb":
return TmdbChain().get_trending_wallpapers()
else:
if settings.WALLPAPER == "bing":
return WebUtils.get_bing_wallpapers()
elif settings.WALLPAPER == "mediaserver":
return MediaServerChain().get_latest_wallpapers()
else:
return TmdbChain().get_trending_wallpapers()
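The hunks above change the wallpaper selection from a tmdb/else pair to an explicit dispatch with TMDB as the fallback. A minimal, self-contained sketch of that dispatch (the fetcher table and its URL strings are illustrative stand-ins for the real chains, not project APIs):

```python
def pick_wallpaper(source: str, fetchers: dict) -> str:
    # "bing" and "mediaserver" are matched explicitly; any other value
    # (including the default "tmdb") falls through to the TMDB fetcher.
    if source == "bing":
        return fetchers["bing"]()
    if source == "mediaserver":
        return fetchers["mediaserver"]()
    return fetchers["tmdb"]()

fetchers = {
    "bing": lambda: "bing-wallpaper-url",
    "mediaserver": lambda: "mediaserver-wallpaper-url",
    "tmdb": lambda: "tmdb-wallpaper-url",
}
print(pick_wallpaper("mediaserver", fetchers))  # mediaserver-wallpaper-url
```

Putting TMDB in the final `return` rather than an `elif` branch is what makes unknown `WALLPAPER` values degrade gracefully.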


@@ -111,14 +111,11 @@ def scrape(fileitem: schemas.FileItem,
scrape_path = Path(fileitem.path)
meta = MetaInfoPath(scrape_path)
mediainfo = chain.recognize_by_meta(meta)
if not media_info:
if not mediainfo:
return schemas.Response(success=False, message="刮削失败,无法识别媒体信息")
if storage == "local":
if not scrape_path.exists():
return schemas.Response(success=False, message="刮削路径不存在")
else:
if not fileitem.fileid:
return schemas.Response(success=False, message="刮削文件ID无效")
# Manual scraping
chain.scrape_metadata(fileitem=fileitem, meta=meta, mediainfo=mediainfo)
return schemas.Response(success=True, message=f"{fileitem.path} 刮削完成")


@@ -12,8 +12,10 @@ from app.core.security import verify_token
from app.db import get_db
from app.db.mediaserver_oper import MediaServerOper
from app.db.models import MediaServerItem
from app.db.systemconfig_oper import SystemConfigOper
from app.helper.mediaserver import MediaServerHelper
from app.schemas import MediaType, NotExistMediaInfo
from app.schemas.types import SystemConfigKey
router = APIRouter()
@@ -143,3 +145,14 @@ def library(server: str, hidden: bool = False,
Get the media server's library list
"""
return MediaServerChain().librarys(server=server, username=userinfo.username, hidden=hidden) or []
@router.get("/clients", summary="查询可用媒体服务器", response_model=List[dict])
def clients(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Query available media servers
"""
mediaservers: List[dict] = SystemConfigOper().get(SystemConfigKey.MediaServers)
if mediaservers:
return [{"name": d.get("name"), "type": d.get("type")} for d in mediaservers if d.get("enabled")]
return []


@@ -8,6 +8,7 @@ from app import schemas
from app.chain.site import SiteChain
from app.chain.torrents import TorrentsChain
from app.core.event import EventManager
from app.core.plugin import PluginManager
from app.core.security import verify_token
from app.db import get_db
from app.db.models import User
@@ -331,6 +332,31 @@ def read_rss_sites(db: Session = Depends(get_db),
return rss_sites
@router.get("/auth", summary="查询认证站点", response_model=dict)
def read_auth_sites(_: schemas.TokenPayload = Depends(verify_token)) -> dict:
"""
Get the list of sites available for authentication
"""
return SitesHelper().get_authsites()
@router.post("/auth", summary="用户站点认证", response_model=schemas.Response)
def auth_site(
auth_info: schemas.SiteAuth,
_: User = Depends(get_current_active_superuser)
) -> Any:
"""
User site authentication
"""
if not auth_info or not auth_info.site or not auth_info.params:
return schemas.Response(success=False, message="请输入认证站点和认证参数")
status, msg = SitesHelper().check_user(auth_info.site, auth_info.params)
SystemConfigOper().set(SystemConfigKey.UserSiteAuthParams, auth_info.dict())
PluginManager().init_config()
Scheduler().init_plugin_jobs()
return schemas.Response(success=status, message=msg)
@router.get("/{site_id}", summary="站点详情", response_model=schemas.Site)
def read_site(
site_id: int,


@@ -149,8 +149,8 @@ def rename(fileitem: schemas.FileItem,
:param recursive: whether to rename recursively
:param _: token
"""
if not fileitem.fileid or not new_name:
return schemas.Response(success=False)
if not new_name:
return schemas.Response(success=False, message="新名称为空")
result = StorageChain().rename_file(fileitem, new_name)
if result:
if recursive:


@@ -124,6 +124,27 @@ def update_subscribe(
return schemas.Response(success=True)
@router.put("/status/{subid}", summary="更新订阅状态", response_model=schemas.Response)
def update_subscribe_status(
subid: int,
state: str,
db: Session = Depends(get_db),
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Update subscription status
"""
subscribe = Subscribe.get(db, subid)
if not subscribe:
return schemas.Response(success=False, message="订阅不存在")
valid_states = ["R", "P", "S"]
if state not in valid_states:
return schemas.Response(success=False, message="无效的订阅状态")
subscribe.update(db, {
"state": state
})
return schemas.Response(success=True)
@router.get("/media/{mediaid}", summary="查询订阅", response_model=schemas.Subscribe)
def subscribe_mediaid(
mediaid: str,


@@ -16,6 +16,7 @@ from app import schemas
from app.chain.search import SearchChain
from app.chain.system import SystemChain
from app.core.config import global_vars, settings
from app.core.metainfo import MetaInfo
from app.core.module import ModuleManager
from app.core.security import verify_apitoken, verify_resource_token, verify_token
from app.db.models import User
@@ -159,7 +160,8 @@ def cache_img(
Cache the image file locally with HTTP-cache support; if the global image cache is enabled, use the disk cache
"""
# If the global image cache is not enabled, do not use the disk cache
return fetch_image(url=url, proxy=False, use_disk_cache=settings.GLOBAL_IMAGE_CACHE, if_none_match=if_none_match)
proxy = "doubanio.com" not in url
return fetch_image(url=url, proxy=proxy, use_disk_cache=settings.GLOBAL_IMAGE_CACHE, if_none_match=if_none_match)
@router.get("/global", summary="查询非敏感系统设置", response_model=schemas.Response)
@@ -182,7 +184,7 @@ def get_env_setting(_: User = Depends(get_current_active_superuser)):
Query system environment variables, including the current version number (admin only)
"""
info = settings.dict(
exclude={"SECRET_KEY", "RESOURCE_SECRET_KEY", "API_TOKEN", "GITHUB_TOKEN", "REPO_GITHUB_TOKEN"}
exclude={"SECRET_KEY", "RESOURCE_SECRET_KEY"}
)
info.update({
"VERSION": APP_VERSION,
@@ -384,9 +386,14 @@ def ruletest(title: str,
if not rulegroup:
return schemas.Response(success=False, message=f"过滤规则组 {rulegroup_name} 不存在!")
# Recognize media info from the title
media_info = SearchChain().recognize_media(MetaInfo(title=title, subtitle=subtitle))
if not media_info:
return schemas.Response(success=False, message="未识别到媒体信息!")
# Filter
result = SearchChain().filter_torrents(rule_groups=[rulegroup.name],
torrent_list=[torrent])
torrent_list=[torrent], mediainfo=media_info)
if not result:
return schemas.Response(success=False, message="不符合过滤规则!")
return schemas.Response(success=True, data={


@@ -20,22 +20,42 @@ router = APIRouter()
class ManualTransferItem(BaseModel):
fileitem: FileItem = None,
logid: Optional[int] = None,
target_storage: Optional[str] = None,
target_path: Optional[str] = None,
tmdbid: Optional[int] = None,
doubanid: Optional[str] = None,
type_name: Optional[str] = None,
season: Optional[int] = None,
transfer_type: Optional[str] = None,
episode_format: Optional[str] = None,
episode_detail: Optional[str] = None,
episode_part: Optional[str] = None,
episode_offset: Optional[int] = 0,
min_filesize: Optional[int] = 0,
scrape: bool = False,
from_history: bool = False
# File item
fileitem: FileItem = None
# Log ID
logid: Optional[int] = None
# Target storage
target_storage: Optional[str] = None
# Target path
target_path: Optional[str] = None
# TMDB ID
tmdbid: Optional[int] = None
# Douban ID
doubanid: Optional[str] = None
# Media type
type_name: Optional[str] = None
# Season number
season: Optional[int] = None
# Transfer method
transfer_type: Optional[str] = None
# Custom episode format
episode_format: Optional[str] = None
# Specified episodes
episode_detail: Optional[str] = None
# Specified PART
episode_part: Optional[str] = None
# Episode number offset
episode_offset: Optional[str] = None
# Minimum file size
min_filesize: Optional[int] = 0
# Scrape metadata
scrape: bool = False
# Per-type subfolders in the library
library_type_folder: Optional[bool] = None
# Per-category subfolders in the library
library_category_folder: Optional[bool] = None
# Reuse recognition info from history
from_history: Optional[bool] = False
@router.get("/name", summary="查询整理后的名称", response_model=schemas.Response)
@@ -96,7 +116,9 @@ def manual_transfer(transer_item: ManualTransferItem,
if history.dest_fileitem:
# Delete the previously organized file
dest_fileitem = FileItem(**history.dest_fileitem)
StorageChain().delete_file(dest_fileitem)
state = StorageChain().delete_media_file(dest_fileitem, mtype=MediaType(history.type))
if not state:
return schemas.Response(success=False, message=f"{dest_fileitem.path} 删除失败")
# Get info from history data
if transer_item.from_history:
@@ -146,6 +168,8 @@ def manual_transfer(transer_item: ManualTransferItem,
epformat=epformat,
min_filesize=transer_item.min_filesize,
scrape=transer_item.scrape,
library_type_folder=transer_item.library_type_folder,
library_category_folder=transer_item.library_category_folder,
force=force
)
# Failure


@@ -20,7 +20,7 @@ from app.helper.message import MessageHelper
from app.helper.service import ServiceConfigHelper
from app.log import logger
from app.schemas import TransferInfo, TransferTorrent, ExistMediaInfo, DownloadingTorrent, CommingMessage, Notification, \
WebhookEventInfo, TmdbEpisode, MediaPerson, FileItem
WebhookEventInfo, TmdbEpisode, MediaPerson, FileItem, TransferDirectoryConf
from app.schemas.types import TorrentStatus, MediaType, MediaImageType, EventType
from app.utils.object import ObjectUtils
@@ -382,24 +382,34 @@ class ChainBase(metaclass=ABCMeta):
return self.run_module("list_torrents", status=status, hashs=hashs, downloader=downloader)
def transfer(self, fileitem: FileItem, meta: MetaBase, mediainfo: MediaInfo,
transfer_type: str = None, target_storage: str = None, target_path: Path = None,
episodes_info: List[TmdbEpisode] = None,
scrape: bool = None) -> Optional[TransferInfo]:
target_directory: TransferDirectoryConf = None,
target_storage: str = None, target_path: Path = None,
transfer_type: str = None, scrape: bool = None,
library_type_folder: bool = None, library_category_folder: bool = None,
episodes_info: List[TmdbEpisode] = None) -> Optional[TransferInfo]:
"""
File transfer
:param fileitem: file info
:param meta: pre-recognized metadata
:param mediainfo: recognized media info
:param transfer_type: transfer mode
:param target_directory: target directory config
:param target_storage: target storage
:param target_path: target path
:param episodes_info: all episode info for the current season
:param transfer_type: transfer mode
:param scrape: whether to scrape metadata
:param library_type_folder: whether to create per-type folders
:param library_category_folder: whether to create per-category folders
:param episodes_info: all episode info for the current season
:return: {path, target_path, message}
"""
return self.run_module("transfer", fileitem=fileitem, meta=meta, mediainfo=mediainfo,
transfer_type=transfer_type, target_storage=target_storage,
target_path=target_path, episodes_info=episodes_info, scrape=scrape)
return self.run_module("transfer",
fileitem=fileitem, meta=meta, mediainfo=mediainfo,
target_directory=target_directory,
target_path=target_path, target_storage=target_storage,
transfer_type=transfer_type, scrape=scrape,
library_type_folder=library_type_folder,
library_category_folder=library_category_folder,
episodes_info=episodes_info)
def transfer_completed(self, hashs: str, downloader: str = None) -> None:
"""
@@ -499,7 +509,7 @@ class ChainBase(metaclass=ABCMeta):
to_targets = self.useroper.get_settings(settings.SUPERUSER)
message.targets = to_targets
# Send event
self.eventmanager.send_event(etype=EventType.NoticeMessage, data=message.dict())
self.eventmanager.send_event(etype=EventType.NoticeMessage, data={**message.dict(), "type": message.mtype})
# Save the message
self.messagehelper.put(message, role="user", title=message.title)
self.messageoper.add(**message.dict())


@@ -12,12 +12,15 @@ from app.core.config import settings
from app.core.event import Event as ManagerEvent, eventmanager, Event
from app.core.plugin import PluginManager
from app.helper.message import MessageHelper
from app.helper.thread import ThreadHelper
from app.log import logger
from app.scheduler import Scheduler
from app.schemas import Notification
from app.schemas.event import CommandRegisterEventData
from app.schemas.types import EventType, MessageChannel, ChainEventType
from app.utils.object import ObjectUtils
from app.utils.singleton import Singleton
from app.utils.structures import DictUtils
class CommandChain(ChainBase, metaclass=Singleton):
@@ -51,6 +54,11 @@ class CommandChain(ChainBase, metaclass=Singleton):
"description": "更新站点Cookie",
"data": {}
},
"/site_statistic": {
"func": SiteChain().remote_refresh_userdatas,
"description": "站点数据统计",
"data": {}
},
"/site_enable": {
"func": SiteChain().remote_enable,
"description": "启用站点",
@@ -154,32 +162,59 @@ class CommandChain(ChainBase, metaclass=Singleton):
logger.debug("Development mode active. Skipping command initialization.")
return
with self._rlock:
logger.debug("Acquired lock for initializing commands.")
self._plugin_commands = self.__build_plugin_commands()
self._commands = {
**self._preset_commands,
**self._plugin_commands,
**self._other_commands
}
# Submit the background task via the thread pool to avoid blocking
ThreadHelper().submit(self.__init_commands_background, pid)
# Fire an event that allows commands to be intercepted and adjusted
event, initial_commands = self.__trigger_register_commands_event()
def __init_commands_background(self, pid: Optional[str] = None) -> None:
"""
Initialize menu commands in the background
"""
try:
with self._rlock:
logger.debug("Acquired lock for initializing commands in background.")
self._plugin_commands = self.__build_plugin_commands(pid)
self._commands = {
**self._preset_commands,
**self._plugin_commands,
**self._other_commands
}
# If the event returns valid event_data, use the adjusted commands from the event
if event and event.event_data:
initial_commands = event.event_data.get("commands") or {}
logger.debug(f"Registering command count from event: {len(initial_commands)}")
else:
logger.debug(f"Registering initial command count: {len(initial_commands)}")
# Whether to force registration
force_register = False
# Fire an event that allows commands to be intercepted and adjusted
event, initial_commands = self.__trigger_register_commands_event()
if event and event.event_data:
# If the event returns valid event_data, use the adjusted commands from the event
event_data: CommandRegisterEventData = event.event_data
# If the event was cancelled, skip command registration
if event_data.cancel:
logger.debug(f"Command initialization canceled by event: {event_data.source}")
return
# If the intercepting source matches the plugin ID, force registration
if pid is not None and pid == event_data.source:
force_register = True
initial_commands = event_data.commands or {}
logger.debug(f"Registering command count from event: {len(initial_commands)}")
else:
logger.debug(f"Registering initial command count: {len(initial_commands)}")
# initial_commands must be a subset of self._commands
filtered_initial_commands = DictUtils.filter_keys_to_subset(initial_commands, self._commands)
# If filtered_initial_commands is empty, skip registration
if not filtered_initial_commands and not force_register:
logger.debug("Filtered commands are empty, skipping registration.")
return
# Compare the adjusted commands with the current commands
if initial_commands == self._registered_commands:
logger.debug("Command set unchanged, skipping broadcast registration.")
if filtered_initial_commands != self._registered_commands or force_register:
logger.debug("Command set has changed or force registration is enabled.")
self._registered_commands = filtered_initial_commands
super().register_commands(commands=filtered_initial_commands)
else:
logger.debug("Command set has changed, Updating and broadcasting new commands.")
self._registered_commands = initial_commands
super().register_commands(commands=initial_commands)
logger.debug("Command set unchanged, skipping broadcast registration.")
except Exception as e:
logger.error(f"Error occurred during command initialization in background: {e}", exc_info=True)
def __trigger_register_commands_event(self) -> (Optional[Event], dict):
"""
@@ -202,20 +237,22 @@ class CommandChain(ChainBase, metaclass=Singleton):
command_data["pid"] = plugin_id
commands[cmd] = command_data
# Fire an event that allows commands to be intercepted and adjusted
commands = {}
# Initialize the command dict
commands: Dict[str, dict] = {}
add_commands(self._preset_commands, "preset")
add_commands(self._plugin_commands, "plugin")
add_commands(self._other_commands, "other")
event_data = {
"commands": commands
}
return eventmanager.send_event(ChainEventType.CommandRegister, event_data), commands
def __build_plugin_commands(self) -> Dict[str, dict]:
# Fire an event that allows commands to be intercepted and adjusted
event_data = CommandRegisterEventData(commands=commands, origin="CommandChain", service=None)
event = eventmanager.send_event(ChainEventType.CommandRegister, event_data)
return event, commands
def __build_plugin_commands(self, pid: Optional[str] = None) -> Dict[str, dict]:
"""
构建插件命令
"""
# To keep command ordering consistent, pid is not used here to fetch a single plugin's commands; this can be optimized later if performance becomes an issue
plugin_commands = {}
for command in self.pluginmanager.get_plugin_commands():
cmd = command.get("cmd")
@@ -370,7 +407,7 @@ class CommandChain(ChainBase, metaclass=Singleton):
channel=event_channel, source=event_source, userid=event_user)
@eventmanager.register(EventType.ModuleReload)
def module_reload_event(self, event: ManagerEvent) -> None:
def module_reload_event(self, _: ManagerEvent) -> None:
"""
Handle the module reload event
"""


@@ -20,7 +20,8 @@ from app.helper.message import MessageHelper
from app.helper.torrent import TorrentHelper
from app.log import logger
from app.schemas import ExistMediaInfo, NotExistMediaInfo, DownloadingTorrent, Notification
from app.schemas.types import MediaType, TorrentStatus, EventType, MessageChannel, NotificationType
from app.schemas.event import ResourceSelectionEventData, ResourceDownloadEventData
from app.schemas.types import MediaType, TorrentStatus, EventType, MessageChannel, NotificationType, ChainEventType
from app.utils.http import RequestUtils
from app.utils.string import StringUtils
@@ -180,7 +181,7 @@ class DownloadChain(ChainBase):
torrent_file, content, download_folder, files, error_msg = self.torrent.download_torrent(
url=torrent_url,
cookie=site_cookie,
ua=torrent.site_ua,
ua=torrent.site_ua or settings.USER_AGENT,
proxy=torrent.site_proxy)
if isinstance(content, str):
@@ -191,7 +192,7 @@ class DownloadChain(ChainBase):
logger.error(f"下载种子文件失败:{torrent.title} - {torrent_url}")
self.post_message(Notification(
channel=channel,
source=source,
source=source if channel else None,
mtype=NotificationType.Manual,
title=f"{torrent.title} 种子下载失败!",
text=f"错误信息:{error_msg}\n站点:{torrent.site_name}",
@@ -203,11 +204,12 @@ class DownloadChain(ChainBase):
def download_single(self, context: Context, torrent_file: Path = None,
episodes: Set[int] = None,
channel: MessageChannel = None, source: str = None,
channel: MessageChannel = None,
source: str = None,
downloader: str = None,
save_path: str = None,
userid: Union[str, int] = None,
username: str = None,
downloader: str = None,
media_category: str = None) -> Optional[str]:
"""
Download and send notifications
@@ -215,16 +217,42 @@ class DownloadChain(ChainBase):
:param torrent_file: torrent file path
:param episodes: episodes to download
:param channel: notification channel
:param source: notification source
:param source: source (message notification, Subscribe, Manual, etc.)
:param downloader: downloader
:param save_path: save path
:param userid: user ID
:param username: username/plugin name that initiated the download
:param downloader: downloader
:param media_category: custom media category
"""
# Send a resource download event so externals can intercept the download
event_data = ResourceDownloadEventData(
context=context,
episodes=episodes,
channel=channel,
origin=source,
downloader=downloader,
options={
"save_path": save_path,
"userid": userid,
"username": username,
"media_category": media_category
}
)
# Fire the resource download event
event = eventmanager.send_event(ChainEventType.ResourceDownload, event_data)
if event and event.event_data:
event_data: ResourceDownloadEventData = event.event_data
# If the event was cancelled, skip the resource download
if event_data.cancel:
logger.debug(
f"Resource download canceled by event: {event_data.source}, "
f"Reason: {event_data.reason}")
return None
_torrent = context.torrent_info
_media = context.media_info
_meta = context.meta_info
_site_downloader = _torrent.site_downloader
# Fill in complete media data
if not _media.genre_ids:
@@ -251,35 +279,31 @@ class DownloadChain(ChainBase):
# Download directory
if save_path:
# With a custom download directory, try to match a directory config
dir_info = self.directoryhelper.get_dir(_media, src_path=Path(save_path), local=True)
else:
# Look up the download directory config from the media info
dir_info = self.directoryhelper.get_dir(_media, local=True)
# Assemble subdirectories
if dir_info:
# First-level directory
if not dir_info.media_type and dir_info.download_type_folder:
# First-level auto classification
download_dir = Path(dir_info.download_path) / _media.type.value
else:
# No first-level classification
download_dir = Path(dir_info.download_path)
# Second-level directory
if not dir_info.media_category and dir_info.download_category_folder and _media and _media.category:
# Second-level auto classification
download_dir = download_dir / _media.category
elif save_path:
# Custom download directory
# Use the custom download directory
download_dir = Path(save_path)
else:
# No download directory found and no custom directory given
logger.error(f"未找到下载目录:{_media.type.value} {_media.title_year}")
self.messagehelper.put(f"{_media.type.value} {_media.title_year} 未找到下载目录!",
title="下载失败", role="system")
return None
# Look up the download directory config from the media info
dir_info = self.directoryhelper.get_dir(_media, storage="local", include_unsorted=True)
# Assemble subdirectories
if dir_info:
# First-level directory
if not dir_info.media_type and dir_info.download_type_folder:
# First-level auto classification
download_dir = Path(dir_info.download_path) / _media.type.value
else:
# No first-level classification
download_dir = Path(dir_info.download_path)
# Second-level directory
if not dir_info.media_category and dir_info.download_category_folder and _media and _media.category:
# Second-level auto classification
download_dir = download_dir / _media.category
else:
# No download directory found and no custom directory given
logger.error(f"未找到下载目录:{_media.type.value} {_media.title_year}")
self.messagehelper.put(f"{_media.type.value} {_media.title_year} 未找到下载目录!",
title="下载失败", role="system")
return None
# Add the download
result: Optional[tuple] = self.download(content=content,
@@ -287,7 +311,7 @@ class DownloadChain(ChainBase):
episodes=episodes,
download_dir=download_dir,
category=_media.category,
downloader=downloader)
downloader=downloader or _site_downloader)
if result:
_downloader, _hash, error_msg = result
else:
@@ -335,7 +359,7 @@ class DownloadChain(ChainBase):
continue
# Only handle video formats
if not Path(file).suffix \
or Path(file).suffix not in settings.RMT_MEDIAEXT:
or Path(file).suffix.lower() not in settings.RMT_MEDIAEXT:
continue
files_to_add.append({
"download_hash": _hash,
@@ -358,7 +382,8 @@ class DownloadChain(ChainBase):
"hash": _hash,
"context": context,
"username": username,
"downloader": _downloader
"downloader": _downloader,
"episodes": episodes
})
else:
# Download failed
@@ -367,7 +392,7 @@ class DownloadChain(ChainBase):
# Only send to the matching channel and user
self.post_message(Notification(
channel=channel,
source=source,
source=source if channel else None,
mtype=NotificationType.Manual,
title="添加下载任务失败:%s %s"
% (_media.title_year, _meta.season_episode),
@@ -386,7 +411,8 @@ class DownloadChain(ChainBase):
source: str = None,
userid: str = None,
username: str = None,
media_category: str = None
media_category: str = None,
downloader: str = None
) -> Tuple[List[Context], Dict[Union[int, str], Dict[int, NotExistMediaInfo]]]:
"""
Based on the missing data, automatically combine and pick the best torrents from the list to download
@@ -394,10 +420,11 @@ class DownloadChain(ChainBase):
:param no_exists: missing episode info
:param save_path: save path
:param channel: notification channel
:param source: notification source
:param source: source (message notification, subscription, manual download, etc.)
:param userid: user ID
:param username: username/plugin name that initiated the download
:param media_category: custom media category
:param downloader: downloader
:return: list of already-downloaded resources, and remaining undownloaded episodes no_exists[tmdb_id/douban_id] = {season: NotExistMediaInfo}
"""
# Already-downloaded items
@@ -458,6 +485,21 @@ class DownloadChain(ChainBase):
return 9999
return no_exist[season].total_episode
# Send a resource selection event so externals can modify the context data
logger.debug(f"Initial contexts: {len(contexts)} items, Downloader: {downloader}")
event_data = ResourceSelectionEventData(
contexts=contexts,
downloader=downloader
)
event = eventmanager.send_event(ChainEventType.ResourceSelection, event_data)
# If the event modified the contexts, use the updated data
if event and event.event_data:
event_data: ResourceSelectionEventData = event.event_data
if event_data.updated and event_data.updated_contexts is not None:
logger.debug(f"Contexts updated by event: "
f"{len(event_data.updated_contexts)} items (source: {event_data.source})")
contexts = event_data.updated_contexts
# Group and sort
contexts = TorrentHelper().sort_group_torrents(contexts)
@@ -469,7 +511,7 @@ class DownloadChain(ChainBase):
logger.info(f"开始下载电影 {context.torrent_info.title} ...")
if self.download_single(context, save_path=save_path, channel=channel,
source=source, userid=userid, username=username,
media_category=media_category):
media_category=media_category, downloader=downloader):
# Download succeeded
logger.info(f"{context.torrent_info.title} 添加下载成功")
downloaded_list.append(context)
@@ -554,7 +596,8 @@ class DownloadChain(ChainBase):
source=source,
userid=userid,
username=username,
media_category=media_category
media_category=media_category,
downloader=downloader,
)
else:
# Download
@@ -562,7 +605,8 @@ class DownloadChain(ChainBase):
download_id = self.download_single(context, save_path=save_path,
channel=channel, source=source,
userid=userid, username=username,
media_category=media_category)
media_category=media_category,
downloader=downloader)
if download_id:
# Download succeeded
@@ -633,7 +677,8 @@ class DownloadChain(ChainBase):
download_id = self.download_single(context, save_path=save_path,
channel=channel, source=source,
userid=userid, username=username,
media_category=media_category)
media_category=media_category,
downloader=downloader)
if download_id:
# 下载成功
logger.info(f"{meta.title} 添加下载成功")
@@ -722,7 +767,8 @@ class DownloadChain(ChainBase):
source=source,
userid=userid,
username=username,
media_category=media_category
media_category=media_category,
downloader=downloader
)
if not download_id:
continue


@@ -11,12 +11,15 @@ from app.core.event import eventmanager, Event
from app.core.meta import MetaBase
from app.core.metainfo import MetaInfo, MetaInfoPath
from app.log import logger
from app.schemas import FileItem
from app.schemas.types import EventType, MediaType, ChainEventType
from app.utils.http import RequestUtils
from app.utils.singleton import Singleton
from app.utils.string import StringUtils
recognize_lock = Lock()
scraping_lock = Lock()
scraping_files = []
class MediaChain(ChainBase, metaclass=Singleton):
@@ -301,12 +304,23 @@ class MediaChain(ChainBase, metaclass=Singleton):
if not event:
return
event_data = event.event_data or {}
fileitem = event_data.get("fileitem")
meta = event_data.get("meta")
mediainfo = event_data.get("mediainfo")
fileitem: FileItem = event_data.get("fileitem")
meta: MetaBase = event_data.get("meta")
mediainfo: MediaInfo = event_data.get("mediainfo")
if not fileitem:
return
self.scrape_metadata(fileitem=fileitem, meta=meta, mediainfo=mediainfo)
# 刮削锁
with scraping_lock:
if fileitem.path in scraping_files:
return
scraping_files.append(fileitem.path)
try:
# 执行刮削
self.scrape_metadata(fileitem=fileitem, meta=meta, mediainfo=mediainfo)
finally:
# 释放锁
with scraping_lock:
scraping_files.remove(fileitem.path)
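The scraping guard above combines a lock with an in-progress list so the same path is never scraped twice concurrently, while the `finally` block guarantees the path is released even if scraping raises. A runnable sketch of the pattern, with the scrape body replaced by a list append:

```python
import threading

scraping_lock = threading.Lock()
scraping_files: list = []

def scrape_once(path: str, results: list) -> None:
    # Skip paths already being scraped by another caller
    with scraping_lock:
        if path in scraping_files:
            return
        scraping_files.append(path)
    try:
        results.append(path)  # stand-in for the real scrape_metadata call
    finally:
        # Always release the path, even if scraping raised
        with scraping_lock:
            scraping_files.remove(path)

results: list = []
scraping_files.append("/media/S01E01.mkv")   # simulate another thread mid-scrape
scrape_once("/media/S01E01.mkv", results)    # skipped: already in progress
skipped = list(results)
scraping_files.clear()
scrape_once("/media/S01E01.mkv", results)    # now runs
```

Note the lock only protects the membership check and list mutation; the (potentially slow) scrape itself runs outside it, so other paths are not blocked.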
def scrape_metadata(self, fileitem: schemas.FileItem,
meta: MetaBase = None, mediainfo: MediaInfo = None,
@@ -322,6 +336,20 @@ class MediaChain(ChainBase, metaclass=Singleton):
:param overwrite: 是否覆盖已有文件
"""
def is_bluray_folder(_fileitem: schemas.FileItem) -> bool:
"""
判断是否为原盘目录
"""
if not _fileitem or _fileitem.type != "dir":
return False
# 蓝光原盘目录必备的文件或文件夹
required_files = ['BDMV', 'CERTIFICATE']
# 检查目录下是否存在所需文件或文件夹
for item in self.storagechain.list_files(_fileitem):
if item.name in required_files:
return True
return False
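`is_bluray_folder` only inspects the directory listing for the mandatory BDMV/CERTIFICATE entries. The same check against a local directory (the real code goes through `storagechain.list_files`) can be sketched as:

```python
import tempfile
from pathlib import Path

# Files/folders a Blu-ray disc folder must contain
REQUIRED = {"BDMV", "CERTIFICATE"}

def is_bluray_folder(path: Path) -> bool:
    if not path.is_dir():
        return False
    # One match is enough: a disc folder always carries at least BDMV
    return any(child.name in REQUIRED for child in path.iterdir())

disc = Path(tempfile.mkdtemp())
(disc / "BDMV").mkdir()
plain = Path(tempfile.mkdtemp())
```

This distinction matters in the hunk above because a disc folder gets a single `<name>.nfo` inside it, instead of recursing into its (non-media) substructure.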
def __list_files(_fileitem: schemas.FileItem):
"""
列出下级文件
@@ -335,13 +363,21 @@ class MediaChain(ChainBase, metaclass=Singleton):
:param _path: 元数据文件路径
:param _content: 文件内容
"""
if not _fileitem or not _content or not _path:
return
# 保存文件到临时目录
tmp_file = settings.TEMP_PATH / _path.name
tmp_file.write_bytes(_content)
logger.info(f"保存文件:【{_fileitem.storage}】{_path}")
_fileitem.path = str(_path.parent)
self.storagechain.upload_file(fileitem=_fileitem, path=tmp_file)
if tmp_file.exists():
tmp_file.unlink()
# 获取文件的父目录
try:
item = self.storagechain.upload_file(fileitem=_fileitem, path=tmp_file, new_name=_path.name)
if item:
logger.info(f"已保存文件:{item.path}")
else:
logger.warn(f"文件保存失败:{_path}")
finally:
if tmp_file.exists():
tmp_file.unlink()
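`__save_file` stages the content in a temp file, uploads it, and unlinks the staged copy in a `finally` block. A self-contained sketch of that write/upload/cleanup shape (the uploader is injected here; the real code calls `storagechain.upload_file`):

```python
import tempfile
from pathlib import Path

def save_via_temp(name: str, content: bytes, upload, tmp_dir: Path) -> None:
    # Stage the payload locally, then hand it to the uploader
    tmp_file = tmp_dir / name
    tmp_file.write_bytes(content)
    try:
        upload(tmp_file)
    finally:
        # Remove the staged copy even if the upload failed
        if tmp_file.exists():
            tmp_file.unlink()

uploaded = []
tmp_dir = Path(tempfile.mkdtemp())
save_via_temp("movie.nfo", b"<movie/>",
              lambda p: uploaded.append(p.read_bytes()), tmp_dir)
leftover = (tmp_dir / "movie.nfo").exists()
```

Moving the unlink into `finally` (as the diff does) is what prevents orphaned temp files when a remote storage upload throws.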
def __download_image(_url: str) -> Optional[bytes]:
"""
@@ -356,6 +392,7 @@ class MediaChain(ChainBase, metaclass=Singleton):
logger.info(f"{_url} 图片下载失败,请检查网络连通性!")
except Exception as err:
logger.error(f"{_url} 图片下载失败:{str(err)}")
return None
# 当前文件路径
filepath = Path(fileitem.path)
@@ -373,25 +410,40 @@ class MediaChain(ChainBase, metaclass=Singleton):
if mediainfo.type == MediaType.MOVIE:
# 电影
if fileitem.type == "file":
# 是否已存在
nfo_path = filepath.with_suffix(".nfo")
if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
logger.info(f"已存在nfo文件{nfo_path}")
return
# 电影文件
logger.info(f"正在生成电影nfo{mediainfo.title_year} - {filepath.name}")
movie_nfo = self.metadata_nfo(meta=meta, mediainfo=mediainfo)
if not movie_nfo:
logger.warn(f"{filepath.name} nfo文件生成失败")
return
# 保存或上传nfo文件到上级目录
nfo_path = filepath.with_suffix(".nfo")
if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
logger.info(f"已存在nfo文件{nfo_path}")
return
__save_file(_fileitem=parent, _path=nfo_path, _content=movie_nfo)
else:
# 电影目录
files = __list_files(_fileitem=fileitem)
for file in files:
self.scrape_metadata(fileitem=file,
meta=meta, mediainfo=mediainfo,
init_folder=False, parent=fileitem)
if is_bluray_folder(fileitem):
# 原盘目录
nfo_path = filepath / (filepath.name + ".nfo")
if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
logger.info(f"已存在nfo文件{nfo_path}")
return
# 生成原盘nfo
movie_nfo = self.metadata_nfo(meta=meta, mediainfo=mediainfo)
if not movie_nfo:
logger.warn(f"{filepath.name} nfo文件生成失败")
return
# 保存或上传nfo文件到当前目录
__save_file(_fileitem=fileitem, _path=nfo_path, _content=movie_nfo)
else:
# 处理目录内的文件
files = __list_files(_fileitem=fileitem)
for file in files:
self.scrape_metadata(fileitem=file,
meta=meta, mediainfo=mediainfo,
init_folder=False, parent=fileitem)
# 生成目录内图片文件
if init_folder:
# 图片
@@ -410,16 +462,22 @@ class MediaChain(ChainBase, metaclass=Singleton):
# 下载图片
content = __download_image(_url=attr_value)
# 写入图片到当前目录
__save_file(_fileitem=fileitem, _path=image_path, _content=content)
if content:
__save_file(_fileitem=fileitem, _path=image_path, _content=content)
else:
# 电视剧
if fileitem.type == "file":
# 当前为集文件,重新识别季集
# 是否已存在
nfo_path = filepath.with_suffix(".nfo")
if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
logger.info(f"已存在nfo文件{nfo_path}")
return
# 重新识别季集
file_meta = MetaInfoPath(filepath)
if not file_meta.begin_episode:
logger.warn(f"{filepath.name} 无法识别文件集数!")
return
file_mediainfo = self.recognize_media(meta=file_meta)
file_mediainfo = self.recognize_media(meta=file_meta, tmdbid=mediainfo.tmdb_id)
if not file_mediainfo:
logger.warn(f"{filepath.name} 无法识别文件媒体信息!")
return
@@ -430,10 +488,8 @@ class MediaChain(ChainBase, metaclass=Singleton):
logger.warn(f"{filepath.name} nfo生成失败")
return
# 保存或上传nfo文件到上级目录
nfo_path = filepath.with_suffix(".nfo")
if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
logger.info(f"已存在nfo文件{nfo_path}")
return
if not parent:
parent = self.storagechain.get_parent_item(fileitem)
__save_file(_fileitem=parent, _path=nfo_path, _content=episode_nfo)
# 获取集的图片
image_dict = self.metadata_img(mediainfo=file_mediainfo,
@@ -447,7 +503,10 @@ class MediaChain(ChainBase, metaclass=Singleton):
# 下载图片
content = __download_image(image_url)
# 保存图片文件到当前目录
__save_file(_fileitem=parent, _path=image_path, _content=content)
if content:
if not parent:
parent = self.storagechain.get_parent_item(fileitem)
__save_file(_fileitem=parent, _path=image_path, _content=content)
else:
# 当前为目录,处理目录内的文件
@@ -461,17 +520,21 @@ class MediaChain(ChainBase, metaclass=Singleton):
if init_folder:
# 识别文件夹名称
season_meta = MetaInfo(filepath.name)
if season_meta.begin_season:
# 当前目录有季号生成季nfo
season_nfo = self.metadata_nfo(meta=meta, mediainfo=mediainfo, season=meta.begin_season)
if not season_nfo:
logger.warn(f"无法生成电视剧季nfo文件{meta.name}")
return
# 写入nfo到根目录
# 当前文件夹为Specials或者SPs时设置为S0
if filepath.name in settings.RENAME_FORMAT_S0_NAMES:
season_meta.begin_season = 0
if season_meta.begin_season is not None:
# 是否已存在
nfo_path = filepath / "season.nfo"
if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
logger.info(f"已存在nfo文件{nfo_path}")
return
# 当前目录有季号生成季nfo
season_nfo = self.metadata_nfo(meta=meta, mediainfo=mediainfo, season=season_meta.begin_season)
if not season_nfo:
logger.warn(f"无法生成电视剧季nfo文件{meta.name}")
return
# 写入nfo到根目录
__save_file(_fileitem=fileitem, _path=nfo_path, _content=season_nfo)
# TMDB季poster图片
image_dict = self.metadata_img(mediainfo=mediainfo, season=season_meta.begin_season)
@@ -484,25 +547,55 @@ class MediaChain(ChainBase, metaclass=Singleton):
continue
# 下载图片
content = __download_image(image_url)
# 保存图片文件到当前目录
__save_file(_fileitem=fileitem, _path=image_path, _content=content)
# 保存图片文件到剧集目录
if content:
if not parent:
parent = self.storagechain.get_parent_item(fileitem)
__save_file(_fileitem=parent, _path=image_path, _content=content)
# 额外fanart季图片poster thumb banner
image_dict = self.metadata_img(mediainfo=mediainfo)
if image_dict:
for image_name, image_url in image_dict.items():
if image_name.startswith("season"):
image_path = filepath.with_name(image_name)
# 只下载当前刮削季的图片
image_season = "00" if "specials" in image_name else image_name[6:8]
if image_season != str(season_meta.begin_season).rjust(2, '0'):
logger.info(f"当前刮削季为:{season_meta.begin_season},跳过文件:{image_path}")
continue
if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage,
path=image_path):
logger.info(f"已存在图片文件:{image_path}")
continue
# 下载图片
content = __download_image(image_url)
# 保存图片文件到当前目录
if content:
if not parent:
parent = self.storagechain.get_parent_item(fileitem)
__save_file(_fileitem=parent, _path=image_path, _content=content)
# 判断当前目录是不是剧集根目录
if season_meta.name:
if not season_meta.season:
# 是否已存在
nfo_path = filepath / "tvshow.nfo"
if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
logger.info(f"已存在nfo文件{nfo_path}")
return
# 当前目录有名称生成tvshow nfo 和 tv图片
tv_nfo = self.metadata_nfo(meta=meta, mediainfo=mediainfo)
if not tv_nfo:
logger.warn(f"无法生成电视剧nfo文件{meta.name}")
return
# 写入tvshow nfo到根目录
nfo_path = filepath / "tvshow.nfo"
if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
logger.info(f"已存在nfo文件{nfo_path}")
return
__save_file(_fileitem=fileitem, _path=nfo_path, _content=tv_nfo)
# 生成目录图片
image_dict = self.metadata_img(mediainfo=mediainfo)
if image_dict:
for image_name, image_url in image_dict.items():
# 不下载季图片
if image_name.startswith("season"):
continue
image_path = filepath / image_name
if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage,
path=image_path):
@@ -511,6 +604,7 @@ class MediaChain(ChainBase, metaclass=Singleton):
# 下载图片
content = __download_image(image_url)
# 保存图片文件到当前目录
__save_file(_fileitem=fileitem, _path=image_path, _content=content)
if content:
__save_file(_fileitem=fileitem, _path=image_path, _content=content)
logger.info(f"{filepath.name} 刮削完成")


@@ -1,12 +1,14 @@
import threading
from typing import List, Union, Optional, Generator
from app import schemas
from cachetools import cached, TTLCache
from app.chain import ChainBase
from app.core.config import global_vars
from app.db.mediaserver_oper import MediaServerOper
from app.helper.service import ServiceConfigHelper
from app.log import logger
from app.schemas import MediaServerLibrary, MediaServerItem, MediaServerSeasonInfo, MediaServerPlayItem
lock = threading.Lock()
@@ -20,7 +22,7 @@ class MediaServerChain(ChainBase):
super().__init__()
self.dboper = MediaServerOper()
def librarys(self, server: str, username: str = None, hidden: bool = False) -> List[schemas.MediaServerLibrary]:
def librarys(self, server: str, username: str = None, hidden: bool = False) -> List[MediaServerLibrary]:
"""
获取媒体服务器所有媒体库
"""
@@ -68,30 +70,46 @@ class MediaServerChain(ChainBase):
yield from self.run_module("mediaserver_items", server=server, library_id=library_id,
start_index=start_index, limit=limit)
def iteminfo(self, server: str, item_id: Union[str, int]) -> schemas.MediaServerItem:
def iteminfo(self, server: str, item_id: Union[str, int]) -> MediaServerItem:
"""
获取媒体服务器项目信息
"""
return self.run_module("mediaserver_iteminfo", server=server, item_id=item_id)
def episodes(self, server: str, item_id: Union[str, int]) -> List[schemas.MediaServerSeasonInfo]:
def episodes(self, server: str, item_id: Union[str, int]) -> List[MediaServerSeasonInfo]:
"""
获取媒体服务器剧集信息
"""
return self.run_module("mediaserver_tv_episodes", server=server, item_id=item_id)
def playing(self, server: str, count: int = 20, username: str = None) -> List[schemas.MediaServerPlayItem]:
def playing(self, server: str, count: int = 20, username: str = None) -> List[MediaServerPlayItem]:
"""
获取媒体服务器正在播放信息
"""
return self.run_module("mediaserver_playing", count=count, server=server, username=username)
def latest(self, server: str, count: int = 20, username: str = None) -> List[schemas.MediaServerPlayItem]:
def latest(self, server: str, count: int = 20, username: str = None) -> List[MediaServerPlayItem]:
"""
获取媒体服务器最新入库条目
"""
return self.run_module("mediaserver_latest", count=count, server=server, username=username)
@cached(cache=TTLCache(maxsize=1, ttl=3600))
def get_latest_wallpapers(self, server: str = None, count: int = 10,
remote: bool = True, username: str = None) -> List[str]:
"""
获取最新入库条目海报作为壁纸缓存1小时
"""
return self.run_module("mediaserver_latest_images", server=server, count=count,
remote=remote, username=username)
def get_latest_wallpaper(self, server: str = None, remote: bool = True, username: str = None) -> Optional[str]:
"""
获取最新入库条目海报作为壁纸缓存1小时
"""
wallpapers = self.get_latest_wallpapers(server=server, count=1, remote=remote, username=username)
return wallpapers[0] if wallpapers else None
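`get_latest_wallpapers` is memoized with cachetools' `TTLCache` for an hour, so repeated wallpaper requests do not re-query the media server. The behaviour can be approximated with a small stdlib decorator (unlike `TTLCache`, this sketch ignores arguments and caches a single value):

```python
import time
from functools import wraps

def ttl_cache(ttl: float):
    # Cache the last return value for `ttl` seconds, ignoring arguments
    def deco(fn):
        state = {"at": 0.0, "value": None, "filled": False}
        @wraps(fn)
        def wrapper(*args, **kwargs):
            now = time.monotonic()
            if state["filled"] and now - state["at"] < ttl:
                return state["value"]
            state["value"] = fn(*args, **kwargs)
            state["at"] = now
            state["filled"] = True
            return state["value"]
        return wrapper
    return deco

calls = []

@ttl_cache(ttl=3600)
def get_latest_wallpapers():
    calls.append(1)               # stands in for the expensive module call
    return ["poster1.jpg"]

first = get_latest_wallpapers()
second = get_latest_wallpapers()  # served from cache
```

`get_latest_wallpaper` then reuses the same cached list with `count=1`, so both entry points share one backend fetch per TTL window.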
def get_play_url(self, server: str, item_id: Union[str, int]) -> Optional[str]:
"""
获取播放地址


@@ -111,6 +111,8 @@ class MessageChain(ChainBase):
info = self.message_parser(source=source, body=body, form=form, args=args)
if not info:
return
# 更新消息来源
source = info.source
# 渠道
channel = info.channel
# 用户ID


@@ -105,7 +105,8 @@ class SearchChain(ChainBase):
sites: List[int] = None,
rule_groups: List[str] = None,
area: str = "title",
custom_words: List[str] = None) -> List[Context]:
custom_words: List[str] = None,
filter_params: Dict[str, str] = None) -> List[Context]:
"""
根据媒体信息搜索种子资源,精确匹配,应用过滤规则,同时根据no_exists过滤本地已存在的资源
:param mediainfo: 媒体信息
@@ -115,6 +116,7 @@ class SearchChain(ChainBase):
:param rule_groups: 过滤规则组名称列表
:param area: 搜索范围title or imdbid
:param custom_words: 自定义识别词列表
:param filter_params: 过滤参数
"""
def __do_filter(torrent_list: List[TorrentInfo]) -> List[TorrentInfo]:
@@ -219,6 +221,12 @@ class SearchChain(ChainBase):
key=ProgressKey.Search)
if not torrent.title:
continue
# 匹配订阅附加参数
if filter_params and not self.torrenthelper.filter_torrent(torrent_info=torrent,
filter_params=filter_params):
continue
# 识别元数据
torrent_meta = MetaInfo(title=torrent.title, subtitle=torrent.description,
custom_words=custom_words)
@@ -231,6 +239,7 @@ class SearchChain(ChainBase):
logger.info(f'{mediainfo.title} 通过IMDBID匹配到资源{torrent.site_name} - {torrent.title}')
_match_torrents.append((torrent, torrent_meta))
continue
# 比对种子
if self.torrenthelper.match_torrent(mediainfo=mediainfo,
torrent_meta=torrent_meta,


@@ -1,7 +1,7 @@
import base64
import re
from datetime import datetime
from typing import Optional, Tuple, Union
from typing import Optional, Tuple, Union, Dict
from urllib.parse import urljoin
from lxml import etree
@@ -22,7 +22,7 @@ from app.helper.rss import RssHelper
from app.helper.sites import SitesHelper
from app.log import logger
from app.schemas import MessageChannel, Notification, SiteUserData
from app.schemas.types import EventType
from app.schemas.types import EventType, NotificationType
from app.utils.http import RequestUtils
from app.utils.site import SiteUtils
from app.utils.string import StringUtils
@@ -65,17 +65,43 @@ class SiteChain(ChainBase):
self.siteoper.update_userdata(domain=StringUtils.get_url_domain(site.get("domain")),
name=site.get("name"),
payload=userdata.dict())
# 发送事件
EventManager().send_event(EventType.SiteRefreshed, {
"site_id": site.get("id")
})
# 发送站点消息
if userdata.message_unread:
if userdata.message_unread_contents and len(userdata.message_unread_contents) > 0:
for head, date, content in userdata.message_unread_contents:
msg_title = f"【站点 {site.get('name')} 消息】"
msg_text = f"时间:{date}\n标题:{head}\n内容:\n{content}"
self.post_message(Notification(
mtype=NotificationType.SiteMessage,
title=msg_title, text=msg_text, link=site.get("url")
))
else:
self.post_message(Notification(
mtype=NotificationType.SiteMessage,
title=f"站点 {site.get('name')} 收到 "
f"{userdata.message_unread} 条新消息,请登录查看",
link=site.get("url")
))
# 低分享率警告
if userdata.ratio and float(userdata.ratio) < 1:
self.post_message(Notification(
mtype=NotificationType.SiteMessage,
title=f"【站点分享率低预警】",
text=f"站点 {site.get('name')} 分享率 {userdata.ratio},请注意!"
))
return userdata
def refresh_userdatas(self) -> None:
def refresh_userdatas(self) -> Dict[str, SiteUserData]:
"""
刷新所有站点的用户数据
"""
sites = self.siteshelper.get_indexers()
any_site_updated = False
result = {}
for site in sites:
if global_vars.is_system_stopped:
return
@@ -83,10 +109,12 @@ class SiteChain(ChainBase):
userdata = self.refresh_userdata(site)
if userdata:
any_site_updated = True
result[site.get("name")] = userdata
if any_site_updated:
EventManager().send_event(EventType.SiteRefreshed, {
"site_id": "*"
})
return result
def is_special_site(self, domain: str) -> bool:
"""
@@ -687,3 +715,66 @@ class SiteChain(ChainBase):
source=source,
title=f"{site_info.name}】 Cookie&UA更新成功",
userid=userid))
def remote_refresh_userdatas(self, channel: MessageChannel,
userid: Union[str, int] = None, source: str = None):
"""
刷新所有站点用户数据
"""
logger.info("收到命令,开始刷新站点数据 ...")
self.post_message(Notification(
channel=channel,
source=source,
title="开始刷新站点数据 ...",
userid=userid
))
# 刷新站点数据
site_datas = self.refresh_userdatas()
if site_datas:
# 发送消息
messages = {}
# 总上传
incUploads = 0
# 总下载
incDownloads = 0
# 今天日期
today_date = datetime.now().strftime("%Y-%m-%d")
for rand, site in enumerate(site_datas.keys()):
upload = int(site_datas[site].upload or 0)
download = int(site_datas[site].download or 0)
updated_date = site_datas[site].updated_day
if updated_date and updated_date != today_date:
updated_date = f"{updated_date}"
else:
updated_date = ""
if upload > 0 or download > 0:
incUploads += upload
incDownloads += download
messages[upload + (rand / 1000)] = (
f"{site}{updated_date}\n"
+ f"上传量:{StringUtils.str_filesize(upload)}\n"
+ f"下载量:{StringUtils.str_filesize(download)}\n"
+ "————————————"
)
if incDownloads or incUploads:
sorted_messages = [messages[key] for key in sorted(messages.keys(), reverse=True)]
sorted_messages.insert(0, f"【汇总】\n"
f"总上传:{StringUtils.str_filesize(incUploads)}\n"
f"总下载:{StringUtils.str_filesize(incDownloads)}\n"
f"————————————")
self.post_message(Notification(
channel=channel,
source=source,
title="【站点数据统计】",
text="\n".join(sorted_messages),
userid=userid
))
else:
self.post_message(Notification(
channel=channel,
source=source,
title="没有刷新到任何站点数据!",
userid=userid
))
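The statistics message above keys each site's line by `upload + (rand / 1000)` so that sites with identical upload totals still get distinct dictionary keys, then sorts the keys in descending order. The trick in isolation:

```python
# Each enumerate index contributes a fraction below one byte: it never
# reorders sites with different uploads, but breaks ties between equal ones.
site_uploads = {"SiteA": 300, "SiteB": 100, "SiteC": 300}
messages = {}
for rand, (site, upload) in enumerate(site_uploads.items()):
    messages[upload + (rand / 1000)] = f"{site}: {upload}"
ordered = [messages[key] for key in sorted(messages.keys(), reverse=True)]
```

Without the fractional tiebreaker, two sites with equal uploads would collide on the same key and one site's line would silently overwrite the other's.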


@@ -3,6 +3,10 @@ from typing import Optional, Tuple, List, Dict
from app import schemas
from app.chain import ChainBase
from app.core.config import settings
from app.helper.directory import DirectoryHelper
from app.log import logger
from app.schemas import MediaType
class StorageChain(ChainBase):
@@ -10,6 +14,10 @@ class StorageChain(ChainBase):
存储处理链
"""
def __init__(self):
super().__init__()
self.directoryhelper = DirectoryHelper()
def save_config(self, storage: str, conf: dict) -> None:
"""
保存存储配置
@@ -34,6 +42,12 @@ class StorageChain(ChainBase):
"""
return self.run_module("list_files", fileitem=fileitem, recursion=recursion)
def any_files(self, fileitem: schemas.FileItem, extensions: list = None) -> Optional[bool]:
"""
查询当前目录下是否存在指定扩展名任意文件
"""
return self.run_module("any_files", fileitem=fileitem, extensions=extensions)
def create_folder(self, fileitem: schemas.FileItem, name: str) -> Optional[schemas.FileItem]:
"""
创建目录
@@ -48,13 +62,15 @@ class StorageChain(ChainBase):
"""
return self.run_module("download_file", fileitem=fileitem, path=path)
def upload_file(self, fileitem: schemas.FileItem, path: Path) -> Optional[bool]:
def upload_file(self, fileitem: schemas.FileItem, path: Path,
new_name: str = None) -> Optional[schemas.FileItem]:
"""
上传文件
:param fileitem: 保存目录项
:param path: 本地文件路径
:param new_name: 新文件名
"""
return self.run_module("upload_file", fileitem=fileitem, path=path)
return self.run_module("upload_file", fileitem=fileitem, path=path, new_name=new_name)
def delete_file(self, fileitem: schemas.FileItem) -> Optional[bool]:
"""
@@ -74,6 +90,12 @@ class StorageChain(ChainBase):
"""
return self.run_module("get_file_item", storage=storage, path=path)
def get_parent_item(self, fileitem: schemas.FileItem) -> Optional[schemas.FileItem]:
"""
获取上级目录项
"""
return self.run_module("get_parent_item", fileitem=fileitem)
def snapshot_storage(self, storage: str, path: Path) -> Optional[Dict[str, float]]:
"""
快照存储
@@ -91,3 +113,53 @@ class StorageChain(ChainBase):
获取支持的整理方式
"""
return self.run_module("support_transtype", storage=storage)
def delete_media_file(self, fileitem: schemas.FileItem,
mtype: MediaType = None, delete_self: bool = True) -> bool:
"""
删除媒体文件,以及不含媒体文件的目录
"""
media_exts = settings.RMT_MEDIAEXT + settings.DOWNLOAD_TMPEXT
if fileitem.path == "/" or len(Path(fileitem.path).parts) <= 2:
logger.warn(f"{fileitem.storage}{fileitem.path} 根目录或一级目录不允许删除")
return False
if fileitem.type == "dir":
# 本身是目录
if self.any_files(fileitem, extensions=media_exts) is False:
logger.warn(f"{fileitem.storage}{fileitem.path} 不存在其它媒体文件,删除空目录")
return self.delete_file(fileitem)
return False
elif delete_self:
# 本身是文件
logger.warn(f"正在删除【{fileitem.storage}{fileitem.path}")
if not self.delete_file(fileitem):
logger.warn(f"{fileitem.storage}{fileitem.path} 删除失败")
return False
if mtype:
# 重命名格式
rename_format = settings.TV_RENAME_FORMAT \
if mtype == MediaType.TV else settings.MOVIE_RENAME_FORMAT
# 计算重命名中的文件夹层数
rename_format_level = len(rename_format.split("/")) - 1
if rename_format_level < 1:
return True
# 处理上级目录
dir_item = self.get_file_item(storage=fileitem.storage,
path=Path(fileitem.path).parents[rename_format_level - 1])
else:
dir_item = self.get_parent_item(fileitem)
if dir_item and len(Path(dir_item.path).parts) > 2:
# 如果目录是所有下载目录、媒体库目录的上级,则不处理
for d in self.directoryhelper.get_dirs():
if d.download_path and Path(d.download_path).is_relative_to(Path(dir_item.path)):
logger.debug(f"{dir_item.storage}{dir_item.path} 是下载目录本级或上级目录,不删除")
return True
if d.library_path and Path(d.library_path).is_relative_to(Path(dir_item.path)):
logger.debug(f"{dir_item.storage}{dir_item.path} 是媒体库目录本级或上级目录,不删除")
return True
# 不存在其他媒体文件,删除空目录
if self.any_files(dir_item, extensions=media_exts) is False:
logger.warn(f"{dir_item.storage}{dir_item.path} 不存在其它媒体文件,删除空目录")
return self.delete_file(dir_item)
return True
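`delete_media_file` refuses to delete a directory that is a configured download or library root, or an ancestor of one, using `Path.is_relative_to` (Python 3.9+). The guard in isolation, with hypothetical paths:

```python
from pathlib import Path

def is_protected(candidate: Path, roots: list) -> bool:
    # True when any configured root equals candidate or lives beneath it,
    # i.e. deleting candidate would take a root with it
    return any(Path(root).is_relative_to(candidate) for root in roots)

roots = ["/data/downloads", "/data/library/movies"]
parent_hit = is_protected(Path("/data"), roots)           # ancestor of both roots
sibling_ok = is_protected(Path("/data/library/tv"), roots)
```

Note the direction of the check: it asks whether the *root* is relative to the candidate, not the other way around, which is exactly what makes ancestors of a root untouchable.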


@@ -1,6 +1,7 @@
import copy
import random
import time
import threading
from datetime import datetime
from typing import Dict, List, Optional, Union, Tuple
@@ -28,15 +29,17 @@ from app.log import logger
from app.schemas import NotExistMediaInfo, Notification, SubscrbieInfo, SubscribeEpisodeInfo, SubscribeDownloadFileInfo, \
SubscribeLibraryFileInfo
from app.schemas.types import MediaType, SystemConfigKey, MessageChannel, NotificationType, EventType
from app.utils.singleton import Singleton
class SubscribeChain(ChainBase):
class SubscribeChain(ChainBase, metaclass=Singleton):
"""
订阅管理处理链
"""
def __init__(self):
super().__init__()
self._rlock = threading.RLock()
self.downloadchain = DownloadChain()
self.downloadhis = DownloadHistoryOper()
self.searchchain = SearchChain()
@@ -159,6 +162,8 @@ class SubscribeChain(ChainBase):
"search_imdbid") else kwargs.get("search_imdbid"),
'sites': self.__get_default_subscribe_config(mediainfo.type, "sites") or None if not kwargs.get(
"sites") else kwargs.get("sites"),
'downloader': self.__get_default_subscribe_config(mediainfo.type, "downloader") if not kwargs.get(
"downloader") else kwargs.get("downloader"),
'save_path': self.__get_default_subscribe_config(mediainfo.type, "save_path") if not kwargs.get(
"save_path") else kwargs.get("save_path")
})
@@ -232,175 +237,189 @@ class SubscribeChain(ChainBase):
"""
订阅搜索
:param sid: 订阅ID有值时只处理该订阅
:param state: 订阅状态 N:未搜索 R:已搜索
:param state: 订阅状态 N:新建, R:订阅中, P:待定, S:暂停
:param manual: 是否手动搜索
:return: 更新订阅状态为R或删除订阅
"""
if sid:
subscribes = [self.subscribeoper.get(sid)]
else:
subscribes = self.subscribeoper.list(state)
# 遍历订阅
for subscribe in subscribes:
if global_vars.is_system_stopped:
break
mediakey = subscribe.tmdbid or subscribe.doubanid
custom_word_list = subscribe.custom_words.split("\n") if subscribe.custom_words else None
# 校验当前时间减订阅创建时间是否大于1分钟否则跳过先留出编辑订阅的时间
if subscribe.date:
now = datetime.now()
subscribe_time = datetime.strptime(subscribe.date, '%Y-%m-%d %H:%M:%S')
if (now - subscribe_time).total_seconds() < 60:
logger.debug(f"订阅标题:{subscribe.name} 新增小于1分钟暂不搜索...")
continue
# 随机休眠1-5分钟
if not sid and state == 'R':
sleep_time = random.randint(60, 300)
logger.info(f'订阅搜索随机休眠 {sleep_time} 秒 ...')
time.sleep(sleep_time)
logger.info(f'开始搜索订阅,标题:{subscribe.name} ...')
# 如果状态为N则更新为R
if subscribe.state == 'N':
self.subscribeoper.update(subscribe.id, {'state': 'R'})
# 生成元数据
meta = MetaInfo(subscribe.name)
meta.year = subscribe.year
meta.begin_season = subscribe.season or None
try:
meta.type = MediaType(subscribe.type)
except ValueError:
logger.error(f'订阅 {subscribe.name} 类型错误:{subscribe.type}')
continue
# 识别媒体信息
mediainfo: MediaInfo = self.recognize_media(meta=meta, mtype=meta.type,
tmdbid=subscribe.tmdbid,
doubanid=subscribe.doubanid,
cache=False)
if not mediainfo:
logger.warn(
f'未识别到媒体信息,标题:{subscribe.name}tmdbid{subscribe.tmdbid}doubanid{subscribe.doubanid}')
continue
# 非洗版状态
if not subscribe.best_version:
# 每季总集数
totals = {}
if subscribe.season and subscribe.total_episode:
totals = {
subscribe.season: subscribe.total_episode
}
# 查询媒体库缺失的媒体信息
exist_flag, no_exists = self.downloadchain.get_no_exists_info(
meta=meta,
mediainfo=mediainfo,
totals=totals
)
else:
# 洗版状态
exist_flag = False
if meta.type == MediaType.TV:
no_exists = {
mediakey: {
subscribe.season: NotExistMediaInfo(
season=subscribe.season,
episodes=[],
total_episode=subscribe.total_episode,
start_episode=subscribe.start_episode or 1)
}
}
else:
no_exists = {}
# 已存在
if exist_flag:
logger.info(f'{mediainfo.title_year} 媒体库中已存在')
self.finish_subscribe_or_not(subscribe=subscribe, meta=meta, mediainfo=mediainfo, force=True)
continue
# 电视剧订阅处理缺失集
if meta.type == MediaType.TV:
# 实际缺失集与订阅开始结束集范围进行整合,同时剔除已下载的集数
no_exists = self.__get_subscribe_no_exits(
subscribe_name=f'{subscribe.name} {meta.season}',
no_exists=no_exists,
mediakey=mediakey,
begin_season=meta.begin_season,
total_episode=subscribe.total_episode,
start_episode=subscribe.start_episode,
downloaded_episodes=self.__get_downloaded_episodes(subscribe)
)
# 站点范围
sites = self.get_sub_sites(subscribe)
# 优先级过滤规则
if subscribe.best_version:
rule_groups = subscribe.filter_groups \
or self.systemconfig.get(SystemConfigKey.BestVersionFilterRuleGroups)
else:
rule_groups = subscribe.filter_groups \
or self.systemconfig.get(SystemConfigKey.SubscribeFilterRuleGroups)
# 搜索,同时电视剧会过滤掉不需要的剧集
contexts = self.searchchain.process(mediainfo=mediainfo,
keyword=subscribe.keyword,
no_exists=no_exists,
sites=sites,
rule_groups=rule_groups,
area="imdbid" if subscribe.search_imdbid else "title",
custom_words=custom_word_list)
if not contexts:
logger.warn(f'订阅 {subscribe.keyword or subscribe.name} 未搜索到资源')
self.finish_subscribe_or_not(subscribe=subscribe, meta=meta,
mediainfo=mediainfo, lefts=no_exists)
continue
# 过滤搜索结果
matched_contexts = []
for context in contexts:
torrent_meta = context.meta_info
torrent_info = context.torrent_info
torrent_mediainfo = context.media_info
# 洗版
if subscribe.best_version:
# 洗版时,非整季不要
if torrent_mediainfo.type == MediaType.TV:
if torrent_meta.episode_list:
logger.info(f'{subscribe.name} 正在洗版,{torrent_info.title} 不是整季')
continue
# 洗版时,优先级小于等于已下载优先级的不要
if subscribe.current_priority \
and torrent_info.pri_order <= subscribe.current_priority:
logger.info(f'{subscribe.name} 正在洗版,{torrent_info.title} 优先级低于或等于已下载优先级')
continue
matched_contexts.append(context)
if not matched_contexts:
logger.warn(f'订阅 {subscribe.name} 没有符合过滤条件的资源')
self.finish_subscribe_or_not(subscribe=subscribe, meta=meta,
mediainfo=mediainfo, lefts=no_exists)
continue
# 自动下载
downloads, lefts = self.downloadchain.batch_download(
contexts=matched_contexts,
no_exists=no_exists,
userid=subscribe.username,
username=subscribe.username,
save_path=subscribe.save_path,
media_category=subscribe.media_category
)
# 判断是否应完成订阅
self.finish_subscribe_or_not(subscribe=subscribe, meta=meta, mediainfo=mediainfo,
downloads=downloads, lefts=lefts)
# 手动触发时发送系统消息
if manual:
with self._rlock:
logger.debug(f"search lock acquired at {datetime.now()}")
if sid:
self.message.put(f'{subscribes[0].name} 搜索完成!', title="订阅搜索", role="system")
subscribe = self.subscribeoper.get(sid)
subscribes = [subscribe] if subscribe else []
else:
self.message.put('所有订阅搜索完成!', title="订阅搜索", role="system")
subscribes = self.subscribeoper.list(self.get_states_for_search(state))
# 遍历订阅
for subscribe in subscribes:
if global_vars.is_system_stopped:
break
mediakey = subscribe.tmdbid or subscribe.doubanid
custom_word_list = subscribe.custom_words.split("\n") if subscribe.custom_words else None
# 校验当前时间减订阅创建时间是否大于1分钟否则跳过先留出编辑订阅的时间
if subscribe.date:
now = datetime.now()
subscribe_time = datetime.strptime(subscribe.date, '%Y-%m-%d %H:%M:%S')
if (now - subscribe_time).total_seconds() < 60:
logger.debug(f"订阅标题:{subscribe.name} 新增小于1分钟暂不搜索...")
continue
# 随机休眠1-5分钟
if not sid and state in ['R', 'P']:
sleep_time = random.randint(60, 300)
logger.info(f'订阅搜索随机休眠 {sleep_time} 秒 ...')
time.sleep(sleep_time)
try:
logger.info(f'开始搜索订阅,标题:{subscribe.name} ...')
# 生成元数据
meta = MetaInfo(subscribe.name)
meta.year = subscribe.year
meta.begin_season = subscribe.season or None
try:
meta.type = MediaType(subscribe.type)
except ValueError:
logger.error(f'订阅 {subscribe.name} 类型错误:{subscribe.type}')
continue
# 识别媒体信息
mediainfo: MediaInfo = self.recognize_media(meta=meta, mtype=meta.type,
tmdbid=subscribe.tmdbid,
doubanid=subscribe.doubanid,
cache=False)
if not mediainfo:
logger.warn(
f'未识别到媒体信息,标题:{subscribe.name}tmdbid{subscribe.tmdbid}doubanid{subscribe.doubanid}')
continue
# 非洗版状态
if not subscribe.best_version:
# 每季总集数
totals = {}
if subscribe.season and subscribe.total_episode:
totals = {
subscribe.season: subscribe.total_episode
}
# 查询媒体库缺失的媒体信息
exist_flag, no_exists = self.downloadchain.get_no_exists_info(
meta=meta,
mediainfo=mediainfo,
totals=totals
)
else:
# 洗版状态
exist_flag = False
if meta.type == MediaType.TV:
no_exists = {
mediakey: {
subscribe.season: NotExistMediaInfo(
season=subscribe.season,
episodes=[],
total_episode=subscribe.total_episode,
start_episode=subscribe.start_episode or 1)
}
}
else:
no_exists = {}
# 已存在
if exist_flag:
logger.info(f'{mediainfo.title_year} 媒体库中已存在')
self.finish_subscribe_or_not(subscribe=subscribe, meta=meta, mediainfo=mediainfo, force=True)
continue
# 电视剧订阅处理缺失集
if meta.type == MediaType.TV:
# 实际缺失集与订阅开始结束集范围进行整合,同时剔除已下载的集数
no_exists = self.__get_subscribe_no_exits(
subscribe_name=f'{subscribe.name} {meta.season}',
no_exists=no_exists,
mediakey=mediakey,
begin_season=meta.begin_season,
total_episode=subscribe.total_episode,
start_episode=subscribe.start_episode,
downloaded_episodes=self.__get_downloaded_episodes(subscribe)
)
# 站点范围
sites = self.get_sub_sites(subscribe)
# 优先级过滤规则
if subscribe.best_version:
rule_groups = subscribe.filter_groups \
or self.systemconfig.get(SystemConfigKey.BestVersionFilterRuleGroups) or []
else:
rule_groups = subscribe.filter_groups \
or self.systemconfig.get(SystemConfigKey.SubscribeFilterRuleGroups) or []
# 搜索,同时电视剧会过滤掉不需要的剧集
contexts = self.searchchain.process(mediainfo=mediainfo,
keyword=subscribe.keyword,
no_exists=no_exists,
sites=sites,
rule_groups=rule_groups,
area="imdbid" if subscribe.search_imdbid else "title",
custom_words=custom_word_list,
filter_params=self.get_params(subscribe))
if not contexts:
logger.warn(f'订阅 {subscribe.keyword or subscribe.name} 未搜索到资源')
self.finish_subscribe_or_not(subscribe=subscribe, meta=meta,
mediainfo=mediainfo, lefts=no_exists)
continue
# 过滤搜索结果
matched_contexts = []
for context in contexts:
torrent_meta = context.meta_info
torrent_info = context.torrent_info
torrent_mediainfo = context.media_info
# 洗版
if subscribe.best_version:
# 洗版时,非整季不要
if torrent_mediainfo.type == MediaType.TV:
if torrent_meta.episode_list:
logger.info(f'{subscribe.name} 正在洗版,{torrent_info.title} 不是整季')
continue
# 洗版时,优先级小于等于已下载优先级的不要
if subscribe.current_priority \
and torrent_info.pri_order <= subscribe.current_priority:
logger.info(
f'{subscribe.name} 正在洗版,{torrent_info.title} 优先级低于或等于已下载优先级')
continue
matched_contexts.append(context)
if not matched_contexts:
logger.warn(f'订阅 {subscribe.name} 没有符合过滤条件的资源')
self.finish_subscribe_or_not(subscribe=subscribe, meta=meta,
mediainfo=mediainfo, lefts=no_exists)
continue
# 自动下载
downloads, lefts = self.downloadchain.batch_download(
contexts=matched_contexts,
no_exists=no_exists,
userid=subscribe.username,
username=subscribe.username,
save_path=subscribe.save_path,
media_category=subscribe.media_category,
downloader=subscribe.downloader,
source="Subscribe"
)
# 判断是否应完成订阅
self.finish_subscribe_or_not(subscribe=subscribe, meta=meta, mediainfo=mediainfo,
downloads=downloads, lefts=lefts)
finally:
# 如果状态为N,则更新为R
if subscribe.state == 'N':
self.subscribeoper.update(subscribe.id, {'state': 'R'})
# 手动触发时发送系统消息
if manual:
if subscribes:
if sid:
self.message.put(f'{subscribes[0].name} 搜索完成!', title="订阅搜索", role="system")
else:
self.message.put('所有订阅搜索完成!', title="订阅搜索", role="system")
else:
self.message.put('没有找到订阅!', title="订阅搜索", role="system")
logger.debug(f"search Lock released at {datetime.now()}")
def update_subscribe_priority(self, subscribe: Subscribe, meta: MetaInfo,
mediainfo: MediaInfo, downloads: List[Context]):
@@ -503,7 +522,7 @@ class SubscribeChain(ChainBase):
:return: 返回[]代表所有站点命中,返回None代表没有订阅
"""
# 查询所有订阅
subscribes = self.subscribeoper.list('R')
subscribes = self.subscribeoper.list(self.get_states_for_search('R'))
if not subscribes:
return None
ret_sites = []
@@ -528,248 +547,255 @@ class SubscribeChain(ChainBase):
# 记录重新识别过的种子
_recognize_cached = []
# 所有订阅
subscribes = self.subscribeoper.list('R')
# 遍历订阅
for subscribe in subscribes:
if global_vars.is_system_stopped:
break
logger.info(f'开始匹配订阅,标题:{subscribe.name} ...')
mediakey = subscribe.tmdbid or subscribe.doubanid
# 生成元数据
meta = MetaInfo(subscribe.name)
meta.year = subscribe.year
meta.begin_season = subscribe.season or None
try:
meta.type = MediaType(subscribe.type)
except ValueError:
logger.error(f'订阅 {subscribe.name} 类型错误:{subscribe.type}')
continue
# 订阅的站点域名列表
domains = []
if subscribe.sites:
domains = self.siteoper.get_domains_by_ids(subscribe.sites)
# 识别媒体信息
mediainfo: MediaInfo = self.recognize_media(meta=meta, mtype=meta.type,
tmdbid=subscribe.tmdbid,
doubanid=subscribe.doubanid,
cache=False)
if not mediainfo:
logger.warn(
f'未识别到媒体信息,标题:{subscribe.name},tmdbid:{subscribe.tmdbid},doubanid:{subscribe.doubanid}')
continue
# 非洗版
if not subscribe.best_version:
# 每季总集数
totals = {}
if subscribe.season and subscribe.total_episode:
totals = {
subscribe.season: subscribe.total_episode
}
# 查询缺失的媒体信息
exist_flag, no_exists = self.downloadchain.get_no_exists_info(
meta=meta,
mediainfo=mediainfo,
totals=totals
)
else:
# 洗版
exist_flag = False
if meta.type == MediaType.TV:
no_exists = {
mediakey: {
subscribe.season: NotExistMediaInfo(
season=subscribe.season,
episodes=[],
total_episode=subscribe.total_episode,
start_episode=subscribe.start_episode or 1)
}
}
else:
no_exists = {}
# 已存在
if exist_flag:
logger.info(f'{mediainfo.title_year} 媒体库中已存在')
self.finish_subscribe_or_not(subscribe=subscribe, meta=meta, mediainfo=mediainfo, force=True)
continue
# 电视剧订阅
if meta.type == MediaType.TV:
# 整合实际缺失集与订阅开始集结束集,同时剔除已下载的集数
no_exists = self.__get_subscribe_no_exits(
subscribe_name=f'{subscribe.name} {meta.season}',
no_exists=no_exists,
mediakey=mediakey,
begin_season=meta.begin_season,
total_episode=subscribe.total_episode,
start_episode=subscribe.start_episode,
downloaded_episodes=self.__get_downloaded_episodes(subscribe)
)
# 遍历缓存种子
_match_context = []
for domain, contexts in torrents.items():
with self._rlock:
logger.debug(f"match lock acquired at {datetime.now()}")
# 所有订阅
subscribes = self.subscribeoper.list(self.get_states_for_search('R'))
# 遍历订阅
for subscribe in subscribes:
if global_vars.is_system_stopped:
break
if domains and domain not in domains:
logger.info(f'开始匹配订阅,标题:{subscribe.name} ...')
mediakey = subscribe.tmdbid or subscribe.doubanid
# 生成元数据
meta = MetaInfo(subscribe.name)
meta.year = subscribe.year
meta.begin_season = subscribe.season or None
try:
meta.type = MediaType(subscribe.type)
except ValueError:
logger.error(f'订阅 {subscribe.name} 类型错误:{subscribe.type}')
continue
logger.debug(f'开始匹配站点:{domain},共缓存了 {len(contexts)} 个种子...')
for context in contexts:
# 提取信息
torrent_meta = copy.deepcopy(context.meta_info)
torrent_mediainfo = copy.deepcopy(context.media_info)
torrent_info = context.torrent_info
# 不在订阅站点范围的不处理
sub_sites = self.get_sub_sites(subscribe)
if sub_sites and torrent_info.site not in sub_sites:
logger.debug(f"{torrent_info.site_name} - {torrent_info.title} 不符合订阅站点要求")
continue
# 有自定义识别词时,需要判断是否需要重新识别
if subscribe.custom_words:
_, apply_words = WordsMatcher().prepare(torrent_info.title,
custom_words=subscribe.custom_words.split("\n"))
if apply_words:
logger.info(f'{torrent_info.site_name} - {torrent_info.title} 因订阅存在自定义识别词,重新识别元数据...')
# 重新识别元数据
torrent_meta = MetaInfo(title=torrent_info.title, subtitle=torrent_info.description,
custom_words=subscribe.custom_words)
# 媒体信息需要重新识别
torrent_mediainfo = None
# 先判断是否有没识别的种子,否则重新识别
if not torrent_mediainfo \
or (not torrent_mediainfo.tmdb_id and not torrent_mediainfo.douban_id):
# 避免重复处理
_cache_key = f"{torrent_meta.org_string}_{torrent_info.description}"
if _cache_key not in _recognize_cached:
_recognize_cached.append(_cache_key)
# 重新识别媒体信息
torrent_mediainfo = self.recognize_media(meta=torrent_meta)
if torrent_mediainfo:
# 更新种子缓存
context.media_info = torrent_mediainfo
if not torrent_mediainfo:
# 通过标题匹配兜底
logger.warn(
f'{torrent_info.site_name} - {torrent_info.title} 重新识别失败,尝试通过标题匹配...')
if self.torrenthelper.match_torrent(mediainfo=mediainfo,
torrent_meta=torrent_meta,
torrent=torrent_info):
# 匹配成功
logger.info(
f'{mediainfo.title_year} 通过标题匹配到可选资源:{torrent_info.site_name} - {torrent_info.title}')
# 更新种子缓存
torrent_mediainfo = mediainfo
context.media_info = mediainfo
else:
continue
# 直接比对媒体信息
if torrent_mediainfo and (torrent_mediainfo.tmdb_id or torrent_mediainfo.douban_id):
if torrent_mediainfo.type != mediainfo.type:
continue
if torrent_mediainfo.tmdb_id \
and torrent_mediainfo.tmdb_id != mediainfo.tmdb_id:
continue
if torrent_mediainfo.douban_id \
and torrent_mediainfo.douban_id != mediainfo.douban_id:
continue
logger.info(
f'{mediainfo.title_year} 通过媒体信息ID匹配到可选资源:{torrent_info.site_name} - {torrent_info.title}')
# 订阅的站点域名列表
domains = []
if subscribe.sites:
domains = self.siteoper.get_domains_by_ids(subscribe.sites)
# 识别媒体信息
mediainfo: MediaInfo = self.recognize_media(meta=meta, mtype=meta.type,
tmdbid=subscribe.tmdbid,
doubanid=subscribe.doubanid,
cache=False)
if not mediainfo:
logger.warn(
f'未识别到媒体信息,标题:{subscribe.name},tmdbid:{subscribe.tmdbid},doubanid:{subscribe.doubanid}')
continue
# 非洗版
if not subscribe.best_version:
# 每季总集数
totals = {}
if subscribe.season and subscribe.total_episode:
totals = {
subscribe.season: subscribe.total_episode
}
# 查询缺失的媒体信息
exist_flag, no_exists = self.downloadchain.get_no_exists_info(
meta=meta,
mediainfo=mediainfo,
totals=totals
)
else:
# 洗版
exist_flag = False
if meta.type == MediaType.TV:
no_exists = {
mediakey: {
subscribe.season: NotExistMediaInfo(
season=subscribe.season,
episodes=[],
total_episode=subscribe.total_episode,
start_episode=subscribe.start_episode or 1)
}
}
else:
continue
no_exists = {}
# 如果是电视剧
if torrent_mediainfo.type == MediaType.TV:
# 有多季的不要
if len(torrent_meta.season_list) > 1:
logger.debug(f'{torrent_info.title} 有多季,不处理')
# 已存在
if exist_flag:
logger.info(f'{mediainfo.title_year} 媒体库中已存在')
self.finish_subscribe_or_not(subscribe=subscribe, meta=meta, mediainfo=mediainfo, force=True)
continue
# 电视剧订阅
if meta.type == MediaType.TV:
# 整合实际缺失集与订阅开始集结束集,同时剔除已下载的集数
no_exists = self.__get_subscribe_no_exits(
subscribe_name=f'{subscribe.name} {meta.season}',
no_exists=no_exists,
mediakey=mediakey,
begin_season=meta.begin_season,
total_episode=subscribe.total_episode,
start_episode=subscribe.start_episode,
downloaded_episodes=self.__get_downloaded_episodes(subscribe)
)
# 遍历缓存种子
_match_context = []
for domain, contexts in torrents.items():
if global_vars.is_system_stopped:
break
if domains and domain not in domains:
continue
logger.debug(f'开始匹配站点:{domain},共缓存了 {len(contexts)} 个种子...')
for context in contexts:
# 提取信息
torrent_meta = copy.deepcopy(context.meta_info)
torrent_mediainfo = copy.deepcopy(context.media_info)
torrent_info = context.torrent_info
# 不在订阅站点范围的不处理
sub_sites = self.get_sub_sites(subscribe)
if sub_sites and torrent_info.site not in sub_sites:
logger.debug(f"{torrent_info.site_name} - {torrent_info.title} 不符合订阅站点要求")
continue
# 比对季
if torrent_meta.begin_season:
if meta.begin_season != torrent_meta.begin_season:
# 有自定义识别词时,需要判断是否需要重新识别
if subscribe.custom_words:
_, apply_words = WordsMatcher().prepare(torrent_info.title,
custom_words=subscribe.custom_words.split("\n"))
if apply_words:
logger.info(
f'{torrent_info.site_name} - {torrent_info.title} 因订阅存在自定义识别词,重新识别元数据...')
# 重新识别元数据
torrent_meta = MetaInfo(title=torrent_info.title, subtitle=torrent_info.description,
custom_words=subscribe.custom_words)
# 媒体信息需要重新识别
torrent_mediainfo = None
# 先判断是否有没识别的种子,否则重新识别
if not torrent_mediainfo \
or (not torrent_mediainfo.tmdb_id and not torrent_mediainfo.douban_id):
# 避免重复处理
_cache_key = f"{torrent_meta.org_string}_{torrent_info.description}"
if _cache_key not in _recognize_cached:
_recognize_cached.append(_cache_key)
# 重新识别媒体信息
torrent_mediainfo = self.recognize_media(meta=torrent_meta)
if torrent_mediainfo:
# 更新种子缓存
context.media_info = torrent_mediainfo
if not torrent_mediainfo:
# 通过标题匹配兜底
logger.warn(
f'{torrent_info.site_name} - {torrent_info.title} 重新识别失败,尝试通过标题匹配...')
if self.torrenthelper.match_torrent(mediainfo=mediainfo,
torrent_meta=torrent_meta,
torrent=torrent_info):
# 匹配成功
logger.info(
f'{mediainfo.title_year} 通过标题匹配到可选资源:{torrent_info.site_name} - {torrent_info.title}')
# 更新种子缓存
torrent_mediainfo = mediainfo
context.media_info = mediainfo
else:
continue
# 直接比对媒体信息
if torrent_mediainfo and (torrent_mediainfo.tmdb_id or torrent_mediainfo.douban_id):
if torrent_mediainfo.type != mediainfo.type:
continue
if torrent_mediainfo.tmdb_id \
and torrent_mediainfo.tmdb_id != mediainfo.tmdb_id:
continue
if torrent_mediainfo.douban_id \
and torrent_mediainfo.douban_id != mediainfo.douban_id:
continue
logger.info(
f'{mediainfo.title_year} 通过媒体信息ID匹配到可选资源:{torrent_info.site_name} - {torrent_info.title}')
else:
continue
# 如果是电视剧
if torrent_mediainfo.type == MediaType.TV:
# 有多季的不要
if len(torrent_meta.season_list) > 1:
logger.debug(f'{torrent_info.title} 有多季,不处理')
continue
# 比对季
if torrent_meta.begin_season:
if meta.begin_season != torrent_meta.begin_season:
logger.debug(f'{torrent_info.title} 季不匹配')
continue
elif meta.begin_season != 1:
logger.debug(f'{torrent_info.title} 季不匹配')
continue
elif meta.begin_season != 1:
logger.debug(f'{torrent_info.title} 季不匹配')
continue
# 非洗版
if not subscribe.best_version:
# 不是缺失的剧集不要
if no_exists and no_exists.get(mediakey):
# 缺失集
no_exists_info = no_exists.get(mediakey).get(subscribe.season)
if no_exists_info:
# 是否有交集
if no_exists_info.episodes and \
torrent_meta.episode_list and \
not set(no_exists_info.episodes).intersection(
set(torrent_meta.episode_list)
):
logger.debug(
f'{torrent_info.title} 对应剧集 {torrent_meta.episode_list} 未包含缺失的剧集'
)
# 非洗版
if not subscribe.best_version:
# 不是缺失的剧集不要
if no_exists and no_exists.get(mediakey):
# 缺失集
no_exists_info = no_exists.get(mediakey).get(subscribe.season)
if no_exists_info:
# 是否有交集
if no_exists_info.episodes and \
torrent_meta.episode_list and \
not set(no_exists_info.episodes).intersection(
set(torrent_meta.episode_list)
):
logger.debug(
f'{torrent_info.title} 对应剧集 {torrent_meta.episode_list} 未包含缺失的剧集'
)
continue
else:
# 洗版时,非整季不要
if meta.type == MediaType.TV:
if torrent_meta.episode_list:
logger.debug(f'{subscribe.name} 正在洗版,{torrent_info.title} 不是整季')
continue
else:
# 洗版时,非整季不要
if meta.type == MediaType.TV:
if torrent_meta.episode_list:
logger.debug(f'{subscribe.name} 正在洗版,{torrent_info.title} 不是整季')
continue
# 匹配订阅附加参数
if not self.torrenthelper.filter_torrent(torrent_info=torrent_info,
filter_params=self.get_params(subscribe)):
continue
# 优先级过滤规则
if subscribe.best_version:
rule_groups = subscribe.filter_groups \
or self.systemconfig.get(SystemConfigKey.BestVersionFilterRuleGroups)
else:
rule_groups = subscribe.filter_groups \
or self.systemconfig.get(SystemConfigKey.SubscribeFilterRuleGroups)
result: List[TorrentInfo] = self.filter_torrents(
rule_groups=rule_groups,
torrent_list=[torrent_info],
mediainfo=torrent_mediainfo)
if result is not None and not result:
# 不符合过滤规则
logger.debug(f"{torrent_info.title} 不匹配过滤规则")
continue
# 洗版时,优先级小于已下载优先级的不要
if subscribe.best_version:
if subscribe.current_priority \
and torrent_info.pri_order <= subscribe.current_priority:
logger.info(f'{subscribe.name} 正在洗版,{torrent_info.title} 优先级低于或等于已下载优先级')
# 匹配订阅附加参数
if not self.torrenthelper.filter_torrent(torrent_info=torrent_info,
filter_params=self.get_params(subscribe)):
continue
# 匹配成功
logger.info(f'{mediainfo.title_year} 匹配成功:{torrent_info.title}')
_match_context.append(context)
# 优先级过滤规则
if subscribe.best_version:
rule_groups = subscribe.filter_groups \
or self.systemconfig.get(SystemConfigKey.BestVersionFilterRuleGroups)
else:
rule_groups = subscribe.filter_groups \
or self.systemconfig.get(SystemConfigKey.SubscribeFilterRuleGroups)
result: List[TorrentInfo] = self.filter_torrents(
rule_groups=rule_groups,
torrent_list=[torrent_info],
mediainfo=torrent_mediainfo)
if result is not None and not result:
# 不符合过滤规则
logger.debug(f"{torrent_info.title} 不匹配过滤规则")
continue
if not _match_context:
# 未匹配到资源
logger.info(f'{mediainfo.title_year} 未匹配到符合条件的资源')
self.finish_subscribe_or_not(subscribe=subscribe, meta=meta,
mediainfo=mediainfo, lefts=no_exists)
continue
# 洗版时,优先级小于已下载优先级的不要
if subscribe.best_version:
if subscribe.current_priority \
and torrent_info.pri_order <= subscribe.current_priority:
logger.info(
f'{subscribe.name} 正在洗版,{torrent_info.title} 优先级低于或等于已下载优先级')
continue
# 开始批量择优下载
logger.info(f'{mediainfo.title_year} 匹配完成,共匹配到{len(_match_context)}个资源')
downloads, lefts = self.downloadchain.batch_download(contexts=_match_context,
no_exists=no_exists,
userid=subscribe.username,
username=subscribe.username,
save_path=subscribe.save_path,
media_category=subscribe.media_category)
# 判断是否要完成订阅
self.finish_subscribe_or_not(subscribe=subscribe, meta=meta, mediainfo=mediainfo,
downloads=downloads, lefts=lefts)
# 匹配成功
logger.info(f'{mediainfo.title_year} 匹配成功:{torrent_info.title}')
_match_context.append(context)
if not _match_context:
# 未匹配到资源
logger.info(f'{mediainfo.title_year} 未匹配到符合条件的资源')
self.finish_subscribe_or_not(subscribe=subscribe, meta=meta,
mediainfo=mediainfo, lefts=no_exists)
continue
# 开始批量择优下载
logger.info(f'{mediainfo.title_year} 匹配完成,共匹配到{len(_match_context)}个资源')
downloads, lefts = self.downloadchain.batch_download(contexts=_match_context,
no_exists=no_exists,
userid=subscribe.username,
username=subscribe.username,
save_path=subscribe.save_path,
media_category=subscribe.media_category,
downloader=subscribe.downloader,
source="Subscribe")
# 判断是否要完成订阅
self.finish_subscribe_or_not(subscribe=subscribe, meta=meta, mediainfo=mediainfo,
downloads=downloads, lefts=lefts)
logger.debug(f"match Lock released at {datetime.now()}")
def check(self):
"""
@@ -909,6 +935,9 @@ class SubscribeChain(ChainBase):
"""
完成订阅
"""
# 如果订阅状态为待定(P),说明订阅信息尚未完全更新,无法完成订阅
if subscribe.state == "P":
return
# 完成订阅
msgstr = "订阅"
if bestversion:
@@ -1234,6 +1263,9 @@ class SubscribeChain(ChainBase):
file_path=file.fullpath,
)
if subscribe.type == MediaType.TV.value:
season_number = file_meta.begin_season
if season_number and season_number != subscribe.season:
continue
episode_number = file_meta.begin_episode
if episode_number and episodes.get(episode_number):
episodes[episode_number].download.append(file_info)
@@ -1271,6 +1303,9 @@ class SubscribeChain(ChainBase):
file_path=fileitem.path,
)
if subscribe.type == MediaType.TV.value:
season_number = file_meta.begin_season
if season_number and season_number != subscribe.season:
continue
episode_number = file_meta.begin_episode
if episode_number and episodes.get(episode_number):
episodes[episode_number].library.append(file_info)
@@ -1281,3 +1316,19 @@ class SubscribeChain(ChainBase):
subscribe_info.subscribe = Subscribe(**subscribe.to_dict())
subscribe_info.episodes = episodes
return subscribe_info
@staticmethod
def get_states_for_search(state: str) -> str:
"""
根据给定的状态返回实际需要搜索的状态列表,支持多个状态用逗号分隔
:param state: 订阅状态
N: New,新建未处理
R: Resolved,订阅中
P: Pending,待定,信息待进一步更新,允许搜索,不允许完成
S: Suspended,暂停订阅,不参与任何动作,暂时停止处理
:return: 需要查询的状态列表(多个状态用逗号分隔)
"""
# 如果状态是 R 或 P,则视为一起搜索,返回 R,P 作为查询条件
if state in ["R", "P"]:
return "R,P"
return state
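The state-expansion rule above can be exercised in isolation; this is a minimal standalone sketch of the same mapping (not the class method itself):

```python
# Minimal standalone sketch of SubscribeChain.get_states_for_search:
# R (Resolved) and P (Pending) subscriptions are searched together,
# any other state is queried as-is.
def get_states_for_search(state: str) -> str:
    # Either of R/P expands to the combined comma-separated query "R,P"
    if state in ("R", "P"):
        return "R,P"
    # N (New) and S (Suspended) are returned unchanged
    return state

print(get_states_for_search("R"))  # R,P
print(get_states_for_search("S"))  # S
```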


@@ -1,6 +1,5 @@
import json
import re
from pathlib import Path
from typing import Union
from app.chain import ChainBase
@@ -10,6 +9,7 @@ from app.schemas import Notification, MessageChannel
from app.utils.http import RequestUtils
from app.utils.singleton import Singleton
from app.utils.system import SystemUtils
from version import FRONTEND_VERSION, APP_VERSION
class SystemChain(ChainBase, metaclass=Singleton):
@@ -98,77 +98,67 @@ class SystemChain(ChainBase, metaclass=Singleton):
@staticmethod
def __get_server_release_version():
"""
获取后端最新版本
获取后端V2最新版本
"""
try:
version_res = RequestUtils(proxies=settings.PROXY, headers=settings.GITHUB_HEADERS).get_res(
"https://api.github.com/repos/jxxghp/MoviePilot/releases/latest")
if version_res:
ver_json = version_res.json()
version = f"{ver_json['tag_name']}"
return version
# 获取所有发布的版本列表
response = RequestUtils(
proxies=settings.PROXY,
headers=settings.GITHUB_HEADERS
).get_res("https://api.github.com/repos/jxxghp/MoviePilot/releases")
if response:
releases = [release['tag_name'] for release in response.json()]
v2_releases = [tag for tag in releases if re.match(r"^v2\.", tag)]
if not v2_releases:
logger.warn("获取v2后端最新版本出错")
else:
# 找到最新的v2版本
latest_v2 = sorted(v2_releases, key=lambda s: list(map(int, re.findall(r'\d+', s))))[-1]
logger.info(f"获取到后端最新版本:{latest_v2}")
return latest_v2
else:
return None
logger.error("无法获取后端版本信息,请检查网络连接或GitHub API请求。")
except Exception as err:
logger.error(f"获取后端最新版本失败:{str(err)}")
return None
return None
@staticmethod
def __get_front_release_version():
"""
获取前端最新版本
获取前端V2最新版本
"""
try:
version_res = RequestUtils(proxies=settings.PROXY, headers=settings.GITHUB_HEADERS).get_res(
"https://api.github.com/repos/jxxghp/MoviePilot-Frontend/releases/latest")
if version_res:
ver_json = version_res.json()
version = f"{ver_json['tag_name']}"
return version
# 获取所有发布的版本列表
response = RequestUtils(
proxies=settings.PROXY,
headers=settings.GITHUB_HEADERS
).get_res("https://api.github.com/repos/jxxghp/MoviePilot-Frontend/releases")
if response:
releases = [release['tag_name'] for release in response.json()]
v2_releases = [tag for tag in releases if re.match(r"^v2\.", tag)]
if not v2_releases:
logger.warn("获取v2前端最新版本出错")
else:
# 找到最新的v2版本
latest_v2 = sorted(v2_releases, key=lambda s: list(map(int, re.findall(r'\d+', s))))[-1]
logger.info(f"获取到前端最新版本:{latest_v2}")
return latest_v2
else:
return None
logger.error("无法获取前端版本信息,请检查网络连接或GitHub API请求。")
except Exception as err:
logger.error(f"获取前端最新版本失败:{str(err)}")
return None
return None
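The "latest v2 tag" selection used by both release lookups above sorts tags by their numeric components rather than lexicographically. A sketch with a made-up tag list:

```python
import re

# Sketch of the latest-v2-tag selection used above, with hypothetical tags.
tags = ["v1.9.9", "v2.0.9", "v2.1.2", "v2.1.10"]
# Keep only v2.x releases
v2_releases = [tag for tag in tags if re.match(r"^v2\.", tag)]
# Numeric component key: "v2.1.10" -> [2, 1, 10], which orders after [2, 1, 2]
latest_v2 = sorted(v2_releases, key=lambda s: list(map(int, re.findall(r"\d+", s))))[-1]
print(latest_v2)  # v2.1.10
```

A plain string sort would wrongly rank "v2.1.2" above "v2.1.10"; the numeric-list key avoids that.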
@staticmethod
def get_server_local_version():
"""
查看当前版本
"""
version_file = settings.ROOT_PATH / "version.py"
if version_file.exists():
try:
with open(version_file, 'rb') as f:
version = f.read()
pattern = r"'([^']*)'"
match = re.search(pattern, str(version))
if match:
version = match.group(1)
return version
else:
logger.warn("未找到版本号")
return None
except Exception as err:
logger.error(f"加载版本文件 {version_file} 出错:{str(err)}")
return APP_VERSION
@staticmethod
def get_frontend_version():
"""
获取前端版本
"""
if SystemUtils.is_frozen() and SystemUtils.is_windows():
version_file = settings.CONFIG_PATH.parent / "nginx" / "html" / "version.txt"
else:
version_file = Path(settings.FRONTEND_PATH) / "version.txt"
if version_file.exists():
try:
with open(version_file, 'r') as f:
version = str(f.read()).strip()
return version
except Exception as err:
logger.error(f"加载版本文件 {version_file} 出错:{str(err)}")
else:
logger.warn("未找到前端版本文件,请正确设置 FRONTEND_PATH")
return None
return FRONTEND_VERSION


@@ -120,6 +120,7 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
site_ua=site.get("ua") or settings.USER_AGENT,
site_proxy=site.get("proxy"),
site_order=site.get("pri"),
site_downloader=site.get("downloader"),
title=item.get("title"),
enclosure=item.get("enclosure"),
page_url=item.get("link"),
@@ -174,7 +175,7 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
# 按pubdate降序排列
torrents.sort(key=lambda x: x.pubdate or '', reverse=True)
# 取前N条
torrents = torrents[:settings.CACHE_CONF.get('refresh')]
torrents = torrents[:settings.CACHE_CONF["refresh"]]
if torrents:
# 过滤出没有处理过的种子
torrents = [torrent for torrent in torrents
@@ -214,8 +215,8 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
else:
torrents_cache[domain].append(context)
# 如果超过了限制条数则移除掉前面的
if len(torrents_cache[domain]) > settings.CACHE_CONF.get('torrents'):
torrents_cache[domain] = torrents_cache[domain][-settings.CACHE_CONF.get('torrents'):]
if len(torrents_cache[domain]) > settings.CACHE_CONF["torrents"]:
torrents_cache[domain] = torrents_cache[domain][-settings.CACHE_CONF["torrents"]:]
# 回收资源
del torrents
else:
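The cache-trim rule in the hunk above keeps only the most recent entries once the per-domain list exceeds the configured limit. A minimal sketch, using a stand-in value for `settings.CACHE_CONF["torrents"]`:

```python
# Sketch of the cache trimming above: when the list exceeds the limit,
# keep only the last `limit` entries (the most recently appended ones).
limit = 3  # stand-in for settings.CACHE_CONF["torrents"]
cache = ["t1", "t2", "t3", "t4", "t5"]
if len(cache) > limit:
    cache = cache[-limit:]
print(cache)  # ['t3', 't4', 't5']
```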


@@ -10,7 +10,7 @@ from app.chain.tmdb import TmdbChain
from app.core.config import settings, global_vars
from app.core.context import MediaInfo
from app.core.meta import MetaBase
from app.core.metainfo import MetaInfoPath
from app.core.metainfo import MetaInfoPath, MetaInfo
from app.db.downloadhistory_oper import DownloadHistoryOper
from app.db.models.downloadhistory import DownloadHistory
from app.db.models.transferhistory import TransferHistory
@@ -20,7 +20,7 @@ from app.helper.directory import DirectoryHelper
from app.helper.format import FormatParser
from app.helper.progress import ProgressHelper
from app.log import logger
from app.schemas import TransferInfo, TransferTorrent, Notification, EpisodeFormat, FileItem
from app.schemas import TransferInfo, TransferTorrent, Notification, EpisodeFormat, FileItem, TransferDirectoryConf
from app.schemas.types import TorrentStatus, EventType, MediaType, ProgressKey, NotificationType, MessageChannel, \
SystemConfigKey
from app.utils.string import StringUtils
@@ -65,15 +65,9 @@ class TransferChain(ChainBase):
# 获取下载器监控目录
download_dirs = self.directoryhelper.get_download_dirs()
# 如果没有下载器监控的目录则不处理
downloader_monitor = False
for dir_info in download_dirs:
# 只有下载器监控的本地目录才处理
if dir_info.monitor_type == "downloader" and dir_info.storage == "local":
downloader_monitor = True
break
if not downloader_monitor:
if not any(dir_info.monitor_type == "downloader" and dir_info.storage == "local"
for dir_info in download_dirs):
return True
logger.info("开始整理下载器中已经完成下载的文件 ...")
# 从下载器获取种子列表
torrents: Optional[List[TransferTorrent]] = self.list_torrents(status=TorrentStatus.TRANSFER)
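The refactor in this hunk replaces an explicit flag-and-break loop with `any()` over a generator, which short-circuits on the first matching directory. A sketch with hypothetical directory records:

```python
# Sketch of the any() refactor above: short-circuits on the first directory
# that is both downloader-monitored and local storage.
dirs = [
    {"monitor_type": "upload", "storage": "local"},
    {"monitor_type": "downloader", "storage": "local"},
]
has_downloader_local = any(
    d["monitor_type"] == "downloader" and d["storage"] == "local" for d in dirs
)
print(has_downloader_local)  # True
```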
@@ -126,7 +120,7 @@ class TransferChain(ChainBase):
# 非MoviePilot下载的任务按文件识别
mediainfo = None
# 执行整理
# 执行整理,匹配源目录
state, errmsg = self.__do_transfer(
fileitem=FileItem(
storage="local",
@@ -137,7 +131,9 @@ class TransferChain(ChainBase):
extension=file_path.suffix.lstrip('.'),
),
mediainfo=mediainfo,
download_hash=torrent.hash
downloader=torrent.downloader,
download_hash=torrent.hash,
src_match=True
)
# 设置下载任务状态
@@ -150,25 +146,32 @@ class TransferChain(ChainBase):
def __do_transfer(self, fileitem: FileItem,
meta: MetaBase = None, mediainfo: MediaInfo = None,
download_hash: str = None, target_storage: str = None,
target_path: Path = None, transfer_type: str = None,
season: int = None, epformat: EpisodeFormat = None,
min_filesize: int = 0, scrape: bool = None,
force: bool = False) -> Tuple[bool, str]:
target_directory: TransferDirectoryConf = None,
target_storage: str = None, target_path: Path = None,
transfer_type: str = None, scrape: bool = None,
library_type_folder: bool = None, library_category_folder: bool = None,
season: int = None, epformat: EpisodeFormat = None, min_filesize: int = 0,
downloader: str = None, download_hash: str = None,
force: bool = False, src_match: bool = False) -> Tuple[bool, str]:
"""
执行一个复杂目录的整理操作
:param fileitem: 文件项
:param meta: 元数据
:param mediainfo: 媒体信息
:param download_hash: 下载记录hash
:param target_directory: 目标目录配置
:param target_storage: 目标存储器
:param target_path: 目标路径
:param transfer_type: 整理类型
:param scrape: 是否刮削元数据
:param library_type_folder: 媒体库类型子目录
:param library_category_folder: 媒体库类别子目录
:param season: 季
:param epformat: 剧集格式
:param min_filesize: 最小文件大小(MB)
:param scrape: 是否刮削元数据
:param downloader: 下载器
:param download_hash: 下载记录hash
:param force: 是否强制整理
:param src_match: 是否源目录匹配
返回:成功标识,错误信息
"""
@@ -184,6 +187,17 @@ class TransferChain(ChainBase):
# 开始进度
self.progress.start(ProgressKey.FileTransfer)
# 汇总季集清单
season_episodes: Dict[Tuple, List[int]] = {}
# 汇总媒体信息
medias: Dict[Tuple, MediaInfo] = {}
# 汇总整理信息
transfers: Dict[Tuple, TransferInfo] = {}
# 待整理文件列表
file_items: List[FileItem] = []
# 蓝光目录列表
bluray: List[FileItem] = []
# 汇总错误信息
err_msgs: List[str] = []
# 已处理数量
@@ -194,6 +208,8 @@ class TransferChain(ChainBase):
skip_num = 0
# 本次整理方式
current_transfer_type = transfer_type
# 是否全部成功
all_success = True
# 获取待整理路径清单
trans_items = self.__get_trans_fileitems(fileitem)
@@ -209,286 +225,312 @@ class TransferChain(ChainBase):
# 处理所有待整理目录或文件,默认一个整理路径或文件只有一个媒体信息
for trans_item in trans_items:
# 汇总季集清单
season_episodes: Dict[Tuple, List[int]] = {}
# 汇总元数据
metas: Dict[Tuple, MetaBase] = {}
# 汇总媒体信息
medias: Dict[Tuple, MediaInfo] = {}
# 汇总整理信息
transfers: Dict[Tuple, TransferInfo] = {}
item_path = Path(trans_item.path)
# 是否蓝光路径
bluray_dir = trans_item.storage == "local" and SystemUtils.is_bluray_dir(item_path)
# 如果是目录且不是蓝光原盘,获取所有文件并整理
if (trans_item.type == "dir"
and not (trans_item.storage == "local" and SystemUtils.is_bluray_dir(item_path))):
if trans_item.type == "dir" and not bluray_dir:
# 遍历获取下载目录所有文件(递归)
file_items = self.storagechain.list_files(trans_item, recursion=True)
if not file_items:
continue
if files := self.storagechain.list_files(trans_item, recursion=True):
file_items.extend(files)
# 如果是蓝光目录,计算大小
elif bluray_dir:
bluray.append(trans_item)
# 单个文件
else:
# 文件或蓝光目录
file_items = [trans_item]
file_items.append(trans_item)
if formaterHandler:
# 有集自定义格式,过滤文件
file_items = [f for f in file_items if formaterHandler.match(f.name)]
if formaterHandler:
# 有集自定义格式,过滤文件
file_items = [f for f in file_items if formaterHandler.match(f.name)]
# 过滤后缀和大小
file_items = [f for f in file_items
if f.extension and (f".{f.extension.lower()}" in self.all_exts
and (not min_filesize or f.size > min_filesize * 1024 * 1024))]
# 过滤后缀和大小
file_items = [f for f in file_items
if f.extension and (f".{f.extension.lower()}" in self.all_exts
and (not min_filesize or f.size > min_filesize * 1024 * 1024))]
# BDMV 跳过过滤
file_items.extend(bluray)
if not file_items:
logger.warn(f"{fileitem.path} 没有找到可整理的媒体文件")
return False, f"{fileitem.name} 没有找到可整理的媒体文件"
if not file_items:
logger.warn(f"{fileitem.path} 没有找到可整理的媒体文件")
return False, f"{fileitem.name} 没有找到可整理的媒体文件"
# 更新总文件数
total_num = len(file_items)
logger.info(f"正在整理 {total_num} 个文件...")
logger.info(f"正在整理 {len(file_items)} 个文件...")
# 整理所有文件
for file_item in file_items:
if global_vars.is_system_stopped:
break
file_path = Path(file_item.path)
# 回收站及隐藏的文件不处理
if file_item.path.find('/@Recycle/') != -1 \
or file_item.path.find('/#recycle/') != -1 \
or file_item.path.find('/.') != -1 \
or file_item.path.find('/@eaDir') != -1:
logger.debug(f"{file_item.path} 是回收站或隐藏的文件")
# 计数
processed_num += 1
skip_num += 1
continue
# 整理所有文件
for file_item in file_items:
if global_vars.is_system_stopped:
break
file_path = Path(file_item.path)
# 回收站及隐藏的文件不处理
if file_item.path.find('/@Recycle/') != -1 \
or file_item.path.find('/#recycle/') != -1 \
or file_item.path.find('/.') != -1 \
or file_item.path.find('/@eaDir') != -1:
logger.debug(f"{file_item.path} 是回收站或隐藏的文件")
# 计数
processed_num += 1
skip_num += 1
continue
# 整理屏蔽词不处理
is_blocked = False
if transfer_exclude_words:
for keyword in transfer_exclude_words:
if not keyword:
continue
if keyword and re.search(r"%s" % keyword, file_item.path, re.IGNORECASE):
logger.info(f"{file_item.path} 命中整理屏蔽词 {keyword},不处理")
is_blocked = True
break
if is_blocked:
err_msgs.append(f"{file_item.name} 命中整理屏蔽词")
# 计数
processed_num += 1
skip_num += 1
continue
# 整理成功的不再处理
if not force:
transferd = self.transferhis.get_by_src(file_item.path, storage=file_item.storage)
if transferd and transferd.status:
logger.info(f"{file_item.path} 已成功整理过,如需重新处理,请删除历史记录。")
# 计数
processed_num += 1
skip_num += 1
# 整理屏蔽词不处理
is_blocked = False
if transfer_exclude_words:
for keyword in transfer_exclude_words:
if not keyword:
continue
if keyword and re.search(r"%s" % keyword, file_item.path, re.IGNORECASE):
logger.info(f"{file_item.path} 命中整理屏蔽词 {keyword},不处理")
is_blocked = True
break
if is_blocked:
err_msgs.append(f"{file_item.name} 命中整理屏蔽词")
# 计数
processed_num += 1
skip_num += 1
all_success = False
continue
# 更新进度
self.progress.update(value=processed_num / total_num * 100,
text=f"正在整理 {processed_num + 1}/{total_num}{file_item.name} ...",
key=ProgressKey.FileTransfer)
if not meta:
# 文件元数据
file_meta = MetaInfoPath(file_path)
else:
file_meta = meta
# 合并季
if season is not None:
file_meta.begin_season = season
if not file_meta:
logger.error(f"{file_path} 无法识别有效信息")
err_msgs.append(f"{file_path} 无法识别有效信息")
# 整理成功的不再处理
if not force:
transferd = self.transferhis.get_by_src(file_item.path, storage=file_item.storage)
if transferd and transferd.status:
logger.info(f"{file_item.path} 已成功整理过,如需重新处理,请删除历史记录。")
# 计数
processed_num += 1
fail_num += 1
skip_num += 1
all_success = False
continue
# 自定义识别
if formaterHandler:
# 开始集、结束集、PART
begin_ep, end_ep, part = formaterHandler.split_episode(file_path.name)
if begin_ep is not None:
file_meta.begin_episode = begin_ep
file_meta.part = part
if end_ep is not None:
file_meta.end_episode = end_ep
# 更新进度
self.progress.update(value=processed_num / total_num * 100,
text=f"正在整理 {processed_num + 1}/{total_num}{file_item.name} ...",
key=ProgressKey.FileTransfer)
if not mediainfo:
# 识别媒体信息
file_mediainfo = self.mediachain.recognize_by_meta(file_meta)
else:
file_mediainfo = mediainfo
if not meta:
# 文件元数据
file_meta = MetaInfoPath(file_path)
else:
file_meta = meta
if not file_mediainfo:
logger.warn(f'{file_path} 未识别到媒体信息')
# 新增整理失败历史记录
his = self.transferhis.add_fail(
fileitem=file_item,
mode=transfer_type,
meta=file_meta,
download_hash=download_hash
)
self.post_message(Notification(
mtype=NotificationType.Manual,
title=f"{file_path.name} 未识别到媒体信息,无法入库!",
text=f"回复:```\n/redo {his.id} [tmdbid]|[类型]\n``` 手动识别整理。",
link=settings.MP_DOMAIN('#/history')
))
# 计数
processed_num += 1
fail_num += 1
continue
# 合并季
if season is not None:
file_meta.begin_season = season
# 如果未开启新增已入库媒体是否跟随TMDB信息变化则根据tmdbid查询之前的title
if not settings.SCRAP_FOLLOW_TMDB:
transfer_history = self.transferhis.get_by_type_tmdbid(tmdbid=file_mediainfo.tmdb_id,
mtype=file_mediainfo.type.value)
if transfer_history:
file_mediainfo.title = transfer_history.title
if not file_meta:
logger.error(f"{file_path} 无法识别有效信息")
err_msgs.append(f"{file_path} 无法识别有效信息")
# 计数
processed_num += 1
fail_num += 1
all_success = False
continue
logger.info(f"{file_path.name} 识别为:{file_mediainfo.type.value} {file_mediainfo.title_year}")
# 自定义识别
if formaterHandler:
# 开始集、结束集、PART
begin_ep, end_ep, part = formaterHandler.split_episode(file_path.name)
if begin_ep is not None:
file_meta.begin_episode = begin_ep
file_meta.part = part
if end_ep is not None:
file_meta.end_episode = end_ep
# 获取集数据
if file_mediainfo.type == MediaType.TV:
if file_meta.begin_season is None:
file_meta.begin_season = 1
file_mediainfo.season = file_mediainfo.season or file_meta.begin_season
episodes_info = self.tmdbchain.tmdb_episodes(
tmdbid=file_mediainfo.tmdb_id,
season=file_mediainfo.season
)
else:
episodes_info = None
if not mediainfo:
# 识别媒体信息
file_mediainfo = self.mediachain.recognize_by_meta(file_meta)
else:
file_mediainfo = mediainfo
# 获取下载hash
if not download_hash:
download_file = self.downloadhis.get_file_by_fullpath(file_item.path)
if download_file:
download_hash = download_file.download_hash
# 执行整理
transferinfo: TransferInfo = self.transfer(fileitem=file_item,
meta=file_meta,
mediainfo=file_mediainfo,
transfer_type=transfer_type,
target_storage=target_storage,
target_path=target_path,
episodes_info=episodes_info,
scrape=scrape)
if not transferinfo:
logger.error("文件整理模块运行失败")
return False, "文件整理模块运行失败"
if not transferinfo.success:
# 整理失败
logger.warn(f"{file_path.name} 入库失败:{transferinfo.message}")
err_msgs.append(f"{file_path.name} {transferinfo.message}")
# 新增整理失败历史记录
self.transferhis.add_fail(
fileitem=file_item,
mode=transfer_type,
download_hash=download_hash,
meta=file_meta,
mediainfo=file_mediainfo,
transferinfo=transferinfo
)
# 发送消息
self.post_message(Notification(
mtype=NotificationType.Manual,
title=f"{file_mediainfo.title_year} {file_meta.season_episode} 入库失败!",
text=f"原因:{transferinfo.message or '未知'}",
image=file_mediainfo.get_message_image(),
link=settings.MP_DOMAIN('#/history')
))
# 计数
processed_num += 1
fail_num += 1
continue
# 汇总信息
current_transfer_type = transferinfo.transfer_type
mkey = (file_mediainfo.tmdb_id, file_meta.begin_season)
if mkey not in medias:
# 新增信息
metas[mkey] = file_meta
medias[mkey] = file_mediainfo
season_episodes[mkey] = file_meta.episode_list
transfers[mkey] = transferinfo
else:
# 合并季集清单
season_episodes[mkey] = list(set(season_episodes[mkey] + file_meta.episode_list))
# 合并整理数据
transfers[mkey].file_count += transferinfo.file_count
transfers[mkey].total_size += transferinfo.total_size
transfers[mkey].file_list.extend(transferinfo.file_list)
transfers[mkey].file_list_new.extend(transferinfo.file_list_new)
transfers[mkey].fail_list.extend(transferinfo.fail_list)
if not file_mediainfo:
logger.warn(f'{file_path} 未识别到媒体信息')
# 新增整理失败历史记录
his = self.transferhis.add_fail(
fileitem=file_item,
mode=transfer_type,
meta=file_meta,
download_hash=download_hash
)
self.post_message(Notification(
mtype=NotificationType.Manual,
title=f"{file_path.name} 未识别到媒体信息,无法入库!",
text=f"回复:```\n/redo {his.id} [tmdbid]|[类型]\n``` 手动识别整理。",
link=settings.MP_DOMAIN('#/history')
))
# 计数
processed_num += 1
fail_num += 1
all_success = False
continue
# 如果未开启新增已入库媒体是否跟随TMDB信息变化则根据tmdbid查询之前的title
if not settings.SCRAP_FOLLOW_TMDB:
transfer_history = self.transferhis.get_by_type_tmdbid(tmdbid=file_mediainfo.tmdb_id,
mtype=file_mediainfo.type.value)
if transfer_history:
file_mediainfo.title = transfer_history.title
logger.info(f"{file_path.name} 识别为:{file_mediainfo.type.value} {file_mediainfo.title_year}")
# 获取集数据
if file_mediainfo.type == MediaType.TV:
if file_meta.begin_season is None:
file_meta.begin_season = 1
file_mediainfo.season = file_mediainfo.season or file_meta.begin_season
episodes_info = self.tmdbchain.tmdb_episodes(
tmdbid=file_mediainfo.tmdb_id,
season=file_mediainfo.season
)
else:
episodes_info = None
# 获取下载hash
if not download_hash:
download_file = self.downloadhis.get_file_by_fullpath(file_item.path)
if download_file:
download_hash = download_file.download_hash
# 查询整理目标目录
dir_info = None
if not target_directory:
if src_match:
# 按源目录匹配,以便找到更合适的目录配置
dir_info = self.directoryhelper.get_dir(media=file_mediainfo,
storage=file_item.storage,
src_path=file_path,
target_storage=target_storage)
elif target_path:
# 指定目标路径,`手动整理`场景下使用,忽略源目录匹配,使用指定目录匹配
dir_info = self.directoryhelper.get_dir(media=file_mediainfo,
dest_path=target_path,
target_storage=target_storage)
else:
# 未指定目标路径,根据媒体信息获取目标目录
dir_info = self.directoryhelper.get_dir(file_mediainfo,
storage=file_item.storage,
target_storage=target_storage)
# 执行整理
transferinfo: TransferInfo = self.transfer(fileitem=file_item,
meta=file_meta,
mediainfo=file_mediainfo,
target_directory=target_directory or dir_info,
target_storage=target_storage,
target_path=target_path,
transfer_type=transfer_type,
episodes_info=episodes_info,
scrape=scrape,
library_type_folder=library_type_folder,
library_category_folder=library_category_folder)
if not transferinfo:
logger.error("文件整理模块运行失败")
return False, "文件整理模块运行失败"
if not transferinfo.success:
# 整理失败
logger.warn(f"{file_path.name} 入库失败:{transferinfo.message}")
err_msgs.append(f"{file_path.name} {transferinfo.message}")
# 新增整理失败历史记录
self.transferhis.add_fail(
fileitem=file_item,
mode=transfer_type,
download_hash=download_hash,
meta=file_meta,
mediainfo=file_mediainfo,
transferinfo=transferinfo
)
# 更新进度
# 发送消息
self.post_message(Notification(
mtype=NotificationType.Manual,
title=f"{file_mediainfo.title_year} {file_meta.season_episode} 入库失败!",
text=f"原因:{transferinfo.message or '未知'}",
image=file_mediainfo.get_message_image(),
link=settings.MP_DOMAIN('#/history')
))
# 计数
processed_num += 1
self.progress.update(value=processed_num / total_num * 100,
text=f"{file_path.name} 整理完成",
key=ProgressKey.FileTransfer)
fail_num += 1
all_success = False
continue
# 汇总信息
current_transfer_type = transferinfo.transfer_type
mkey = (file_mediainfo.tmdb_id, file_meta.begin_season)
if mkey not in medias:
# 新增信息
medias[mkey] = file_mediainfo
season_episodes[mkey] = file_meta.episode_list
transfers[mkey] = transferinfo
else:
# 合并季集清单
season_episodes[mkey] = list(set(season_episodes[mkey] + file_meta.episode_list))
# 合并整理数据
transfers[mkey].file_count += transferinfo.file_count
transfers[mkey].total_size += transferinfo.total_size
transfers[mkey].file_list.extend(transferinfo.file_list)
transfers[mkey].file_list_new.extend(transferinfo.file_list_new)
transfers[mkey].fail_list.extend(transferinfo.fail_list)
# 新增整理成功历史记录
self.transferhis.add_success(
fileitem=file_item,
mode=transfer_type or transferinfo.transfer_type,
download_hash=download_hash,
meta=file_meta,
mediainfo=file_mediainfo,
transferinfo=transferinfo
)
# 整理完成事件
self.eventmanager.send_event(EventType.TransferComplete, {
'meta': file_meta,
'mediainfo': file_mediainfo,
'transferinfo': transferinfo,
'downloader': downloader,
'download_hash': download_hash,
})
# 更新进度
processed_num += 1
self.progress.update(value=processed_num / total_num * 100,
text=f"{file_path.name} 整理完成",
key=ProgressKey.FileTransfer)
# 执行后续处理
for mkey, media in medias.items():
transfer_meta = metas[mkey]
transfer_info = transfers[mkey]
# 发送通知
if transfer_info.need_notify:
se_str = None
if media.type == MediaType.TV:
se_str = f"{transfer_meta.season} {StringUtils.format_ep(season_episodes[mkey])}"
self.send_transfer_message(meta=transfer_meta,
mediainfo=media,
transferinfo=transfer_info,
season_episode=se_str)
# 刮削事件
if scrape or transfer_info.need_scrape:
self.eventmanager.send_event(EventType.MetadataScrape, {
'meta': transfer_meta,
'mediainfo': media,
'fileitem': transfer_info.target_diritem
})
# 目录或文件整理完成
self.progress.update(text=f"{fileitem.path} 整理完成,正在执行后续处理 ...",
key=ProgressKey.FileTransfer)
# 执行后续处理
for mkey, media in medias.items():
transfer_info = transfers[mkey]
transfer_meta = MetaInfo(transfer_info.target_diritem.name)
transfer_meta.begin_season = mkey[1]
# 发送通知
if transfer_info.need_notify:
se_str = None
if media.type == MediaType.TV:
se_str = f"{transfer_meta.season} {StringUtils.format_ep(season_episodes[mkey])}"
self.send_transfer_message(meta=transfer_meta,
mediainfo=media,
transferinfo=transfer_info,
season_episode=se_str)
# 刮削事件
if scrape or transfer_info.need_scrape:
self.eventmanager.send_event(EventType.MetadataScrape, {
'meta': transfer_meta,
'mediainfo': media,
'transferinfo': transfer_info,
'download_hash': download_hash,
'fileitem': transfer_info.target_diritem
})
# 移动模式处理
if current_transfer_type in ["move"]:
if all_success and current_transfer_type in ["move"]:
# 下载器hash
if download_hash:
if self.remove_torrents(download_hash):
if self.remove_torrents(download_hash, downloader=downloader):
logger.info(f"移动模式删除种子成功:{download_hash} ")
# 删除残留文件
# 删除残留目录
if fileitem:
logger.warn(f"删除残留文件夹:【{fileitem.storage}{fileitem.path}")
self.storagechain.delete_file(fileitem)
self.storagechain.delete_media_file(fileitem, delete_self=False)
# 结束进度
logger.info(f"{fileitem.path} 整理完成,共 {total_num} 个文件,"
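The loop above aggregates per-file results under a `(tmdb_id, begin_season)` key, merging episode lists and transfer statistics before post-processing. A minimal standalone sketch of that aggregation — names here are illustrative stand-ins, not the project's actual `TransferInfo` class:

```python
from dataclasses import dataclass, field

@dataclass
class TransferStats:
    # simplified stand-in for the project's TransferInfo
    file_count: int = 0
    total_size: int = 0
    file_list: list = field(default_factory=list)

def aggregate(results):
    """Merge per-file results keyed by (tmdb_id, season)."""
    season_episodes, transfers = {}, {}
    for tmdb_id, season, episodes, stats in results:
        mkey = (tmdb_id, season)
        if mkey not in transfers:
            season_episodes[mkey] = list(episodes)
            transfers[mkey] = stats
        else:
            # merge episode lists without duplicates
            season_episodes[mkey] = sorted(set(season_episodes[mkey]) | set(episodes))
            transfers[mkey].file_count += stats.file_count
            transfers[mkey].total_size += stats.total_size
            transfers[mkey].file_list.extend(stats.file_list)
    return season_episodes, transfers

eps, trs = aggregate([
    (100, 1, [1, 2], TransferStats(1, 700, ["E01.mkv"])),
    (100, 1, [2, 3], TransferStats(1, 650, ["E03.mkv"])),
    (200, None, [], TransferStats(1, 9000, ["movie.mkv"])),
])
print(eps[(100, 1)])             # → [1, 2, 3]
print(trs[(100, 1)].file_count)  # → 2
```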
@@ -652,6 +694,8 @@ class TransferChain(ChainBase):
epformat: EpisodeFormat = None,
min_filesize: int = 0,
scrape: bool = None,
library_type_folder: bool = None,
library_category_folder: bool = None,
force: bool = False) -> Tuple[bool, Union[str, list]]:
"""
手动整理,支持复杂条件,带进度显示
@@ -666,6 +710,8 @@ class TransferChain(ChainBase):
:param epformat: 剧集格式
:param min_filesize: 最小文件大小(MB)
:param scrape: 是否刮削元数据
:param library_type_folder: 是否按类型建立目录
:param library_category_folder: 是否按类别建立目录
:param force: 是否强制整理
"""
logger.info(f"手动整理:{fileitem.path} ...")
@@ -694,6 +740,8 @@ class TransferChain(ChainBase):
epformat=epformat,
min_filesize=min_filesize,
scrape=scrape,
library_type_folder=library_type_folder,
library_category_folder=library_category_folder,
force=force,
)
if not state:
@@ -712,6 +760,8 @@ class TransferChain(ChainBase):
epformat=epformat,
min_filesize=min_filesize,
scrape=scrape,
library_type_folder=library_type_folder,
library_category_folder=library_category_folder,
force=force)
return state, errmsg


@@ -202,9 +202,9 @@ class UserChain(ChainBase, metaclass=Singleton):
# 触发认证通过的拦截事件
intercept_event = self.eventmanager.send_event(
etype=ChainEventType.AuthIntercept,
data=AuthInterceptCredentials(username=username, channel=channel, service=service, token=token)
data=AuthInterceptCredentials(username=username, channel=channel, service=service,
token=token, status="completed")
)
if intercept_event and intercept_event.event_data:
intercept_data: AuthInterceptCredentials = intercept_event.event_data
if intercept_data.cancel:


@@ -8,7 +8,7 @@ from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple, Type
from dotenv import set_key
from pydantic import BaseModel, BaseSettings, validator
from pydantic import BaseModel, BaseSettings, validator, Field
from app.log import logger
from app.utils.system import SystemUtils
@@ -36,7 +36,7 @@ class ConfigModel(BaseModel):
# RESOURCE密钥
RESOURCE_SECRET_KEY: str = secrets.token_urlsafe(32)
# 允许的域名
ALLOWED_HOSTS: list = ["*"]
ALLOWED_HOSTS: list = Field(default_factory=lambda: ["*"])
# TOKEN过期时间
ACCESS_TOKEN_EXPIRE_MINUTES: int = 60 * 24 * 8
# RESOURCE_TOKEN过期时间
@@ -65,10 +65,12 @@ class ConfigModel(BaseModel):
DB_POOL_RECYCLE: int = 1800
# 数据库连接池获取连接的超时时间(秒),默认 60 秒
DB_POOL_TIMEOUT: int = 60
# 数据库连接池最大溢出连接数,默认 10
DB_MAX_OVERFLOW: int = 10
# 数据库连接池最大溢出连接数,默认 500
DB_MAX_OVERFLOW: int = 500
# SQLite 的 busy_timeout 参数,默认为 60 秒
DB_TIMEOUT: int = 60
# SQLite 是否启用 WAL 模式,默认关闭
DB_WAL_ENABLE: bool = False
# 配置文件目录
CONFIG_DIR: Optional[str] = None
# 超级管理员
@@ -79,7 +81,7 @@ class ConfigModel(BaseModel):
API_TOKEN: Optional[str] = None
# 网络代理 IP:PORT
PROXY_HOST: Optional[str] = None
# 登录页面电影海报,tmdb/bing
# 登录页面电影海报,tmdb/bing/mediaserver
WALLPAPER: str = "tmdb"
# 媒体搜索来源 themoviedb/douban/bangumi多个用,分隔
SEARCH_SOURCE: str = "themoviedb,douban,bangumi"
@@ -112,29 +114,39 @@ class ConfigModel(BaseModel):
# 是否启用DOH解析域名
DOH_ENABLE: bool = True
# 使用 DOH 解析的域名列表
DOH_DOMAINS: str = "api.themoviedb.org,api.tmdb.org,webservice.fanart.tv,api.github.com,github.com,raw.githubusercontent.com,api.telegram.org"
DOH_DOMAINS: str = ("api.themoviedb.org,"
"api.tmdb.org,"
"webservice.fanart.tv,"
"api.github.com,"
"github.com,"
"raw.githubusercontent.com,"
"api.telegram.org")
# DOH 解析服务器列表
DOH_RESOLVERS: str = "1.0.0.1,1.1.1.1,9.9.9.9,149.112.112.112"
# 支持的后缀格式
RMT_MEDIAEXT: list = ['.mp4', '.mkv', '.ts', '.iso',
'.rmvb', '.avi', '.mov', '.mpeg',
'.mpg', '.wmv', '.3gp', '.asf',
'.m4v', '.flv', '.m2ts', '.strm',
'.tp', '.f4v']
RMT_MEDIAEXT: list = Field(
default_factory=lambda: ['.mp4', '.mkv', '.ts', '.iso',
'.rmvb', '.avi', '.mov', '.mpeg',
'.mpg', '.wmv', '.3gp', '.asf',
'.m4v', '.flv', '.m2ts', '.strm',
'.tp', '.f4v']
)
# 支持的字幕文件后缀格式
RMT_SUBEXT: list = ['.srt', '.ass', '.ssa', '.sup']
RMT_SUBEXT: list = Field(default_factory=lambda: ['.srt', '.ass', '.ssa', '.sup'])
# 支持的音轨文件后缀格式
RMT_AUDIO_TRACK_EXT: list = ['.mka']
RMT_AUDIO_TRACK_EXT: list = Field(default_factory=lambda: ['.mka'])
# 音轨文件后缀格式
RMT_AUDIOEXT: list = ['.aac', '.ac3', '.amr', '.caf', '.cda', '.dsf',
'.dff', '.kar', '.m4a', '.mp1', '.mp2', '.mp3',
'.mid', '.mod', '.mka', '.mpc', '.nsf', '.ogg',
'.pcm', '.rmi', '.s3m', '.snd', '.spx', '.tak',
'.tta', '.vqf', '.wav', '.wma',
'.aifc', '.aiff', '.alac', '.adif', '.adts',
'.flac', '.midi', '.opus', '.sfalc']
RMT_AUDIOEXT: list = Field(
default_factory=lambda: ['.aac', '.ac3', '.amr', '.caf', '.cda', '.dsf',
'.dff', '.kar', '.m4a', '.mp1', '.mp2', '.mp3',
'.mid', '.mod', '.mka', '.mpc', '.nsf', '.ogg',
'.pcm', '.rmi', '.s3m', '.snd', '.spx', '.tak',
'.tta', '.vqf', '.wav', '.wma',
'.aifc', '.aiff', '.alac', '.adif', '.adts',
'.flac', '.midi', '.opus', '.sfalc']
)
# 下载器临时文件后缀
DOWNLOAD_TMPEXT: list = ['.!qB', '.part']
DOWNLOAD_TMPEXT: list = Field(default_factory=lambda: ['.!qb', '.part'])
# 媒体服务器同步间隔(小时)
MEDIASERVER_SYNC_INTERVAL: int = 6
# 订阅模式
@@ -145,10 +157,14 @@ class ConfigModel(BaseModel):
SUBSCRIBE_STATISTIC_SHARE: bool = True
# 订阅搜索开关
SUBSCRIBE_SEARCH: bool = False
# 检查本地媒体库是否存在资源开关
LOCAL_EXISTS_SEARCH: bool = False
# 搜索多个名称
SEARCH_MULTIPLE_NAME: bool = False
# 站点数据刷新间隔(小时)
SITEDATA_REFRESH_INTERVAL: int = 6
# 读取和发送站点消息
SITE_MESSAGE: bool = True
# 种子标签
TORRENT_TAG: str = "MOVIEPILOT"
# 下载站点字幕
@@ -183,7 +199,10 @@ class ConfigModel(BaseModel):
# 服务器地址,对应 https://github.com/jxxghp/MoviePilot-Server 项目
MP_SERVER_HOST: str = "https://movie-pilot.org"
# 插件市场仓库地址,多个地址使用,分隔,地址以/结尾
PLUGIN_MARKET: str = "https://github.com/jxxghp/MoviePilot-Plugins,https://github.com/thsrite/MoviePilot-Plugins,https://github.com/honue/MoviePilot-Plugins,https://github.com/InfinityPacer/MoviePilot-Plugins"
PLUGIN_MARKET: str = ("https://github.com/jxxghp/MoviePilot-Plugins,"
"https://github.com/thsrite/MoviePilot-Plugins,"
"https://github.com/honue/MoviePilot-Plugins,"
"https://github.com/InfinityPacer/MoviePilot-Plugins")
# 插件安装数据共享
PLUGIN_STATISTIC_SHARE: bool = True
# 是否开启插件热加载
@@ -200,11 +219,27 @@ class ConfigModel(BaseModel):
BIG_MEMORY_MODE: bool = False
# 全局图片缓存,将媒体图片缓存到本地
GLOBAL_IMAGE_CACHE: bool = False
# 是否启用编码探测的性能模式
ENCODING_DETECTION_PERFORMANCE_MODE: bool = True
# 编码探测的最低置信度阈值
ENCODING_DETECTION_MIN_CONFIDENCE: float = 0.8
# 允许的图片缓存域名
SECURITY_IMAGE_DOMAINS: List[str] = ["image.tmdb.org", "static-mdb.v.geilijiasu.com", "doubanio.com", "lain.bgm.tv",
"raw.githubusercontent.com", "github.com"]
SECURITY_IMAGE_DOMAINS: List[str] = Field(
default_factory=lambda: ["image.tmdb.org",
"static-mdb.v.geilijiasu.com",
"doubanio.com",
"lain.bgm.tv",
"raw.githubusercontent.com",
"github.com"]
)
# 允许的图片文件后缀格式
SECURITY_IMAGE_SUFFIXES: List[str] = [".jpg", ".jpeg", ".png", ".webp", ".gif", ".svg"]
SECURITY_IMAGE_SUFFIXES: List[str] = Field(
default_factory=lambda: [".jpg", ".jpeg", ".png", ".webp", ".gif", ".svg"]
)
# 重命名时支持的S0别名
RENAME_FORMAT_S0_NAMES: List[str] = Field(
default_factory=lambda: ["Specials", "SPs"]
)
class Settings(BaseSettings, ConfigModel):
@@ -339,10 +374,9 @@ class Settings(BaseSettings, ConfigModel):
logger.warning(message)
if field.name in os.environ:
if is_converted:
message = f"配置项 '{field.name}' 已在环境变量中设置,请手动更新以保持一致性"
logger.warning(message)
return False, message
message = f"配置项 '{field.name}' 已在环境变量中设置,请手动更新以保持一致性"
logger.warning(message)
return False, message
else:
set_key(SystemUtils.get_env_path(), field.name, str(converted_value) if converted_value is not None else "")
if is_converted:
@@ -358,15 +392,19 @@ class Settings(BaseSettings, ConfigModel):
try:
field = self.__fields__[key]
original_value = getattr(self, key)
if field.name == "API_TOKEN":
converted_value, needs_update = self.validate_api_token(value, getattr(self, key))
converted_value, needs_update = self.validate_api_token(value, original_value)
else:
converted_value, needs_update = self.generic_type_converter(value, getattr(self, key), field.type_,
converted_value, needs_update = self.generic_type_converter(value, original_value, field.type_,
field.default, key)
# 如果没有抛出异常,则统一使用 converted_value 进行更新
if needs_update or str(value) != str(converted_value):
setattr(self, key, converted_value)
return self.update_env_config(field, value, converted_value)
success, message = self.update_env_config(field, value, converted_value)
# 仅成功更新配置时,才更新内存
if success:
setattr(self, key, converted_value)
return success, message
return True, ""
except Exception as e:
return False, str(e)
@@ -427,22 +465,32 @@ class Settings(BaseSettings, ConfigModel):
@property
def CACHE_CONF(self):
"""
{
"torrents": "缓存种子数量",
"refresh": "订阅刷新处理数量",
"tmdb": "TMDB请求缓存数量",
"douban": "豆瓣请求缓存数量",
"fanart": "Fanart请求缓存数量",
"meta": "元数据缓存过期时间(秒)"
}
"""
if self.BIG_MEMORY_MODE:
return {
"torrents": 200,
"refresh": 100,
"tmdb": 1024,
"refresh": 50,
"torrents": 100,
"douban": 512,
"fanart": 512,
"meta": (self.META_CACHE_EXPIRE or 168) * 3600
"meta": (self.META_CACHE_EXPIRE or 24) * 3600
}
return {
"torrents": 100,
"refresh": 50,
"tmdb": 256,
"refresh": 30,
"torrents": 50,
"douban": 256,
"fanart": 128,
"meta": (self.META_CACHE_EXPIRE or 72) * 3600
"meta": (self.META_CACHE_EXPIRE or 2) * 3600
}
@property

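Several changes in the config hunk above replace bare mutable list defaults with `Field(default_factory=...)`. Pydantic copies defaults per instance, so this is largely defensive, but the pattern matters in plain Python: a shared mutable default is a classic pitfall, and `dataclasses` rejects it outright. A small illustration with a dataclass (kept stdlib-only rather than pydantic):

```python
from dataclasses import dataclass, field

# dataclasses reject a bare mutable default outright:
#   ext: list = ['.srt']  → ValueError at class creation time
# default_factory builds a fresh list per instance instead.
@dataclass
class Config:
    ext: list = field(default_factory=lambda: ['.srt', '.ass'])

a, b = Config(), Config()
a.ext.append('.sup')
print(a.ext)  # → ['.srt', '.ass', '.sup']
print(b.ext)  # → ['.srt', '.ass']  (b is unaffected)
```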

@@ -23,6 +23,8 @@ class TorrentInfo:
site_proxy: bool = False
# 站点优先级
site_order: int = 0
# 站点下载器
site_downloader: str = None
# 种子名称
title: str = None
# 种子副标题


@@ -233,23 +233,29 @@ class EventManager(metaclass=Singleton):
可视化所有事件处理器,包括是否被禁用的状态
:return: 处理器列表,包含事件类型、处理器标识符、优先级(如果有)和状态
"""
def parse_handler_data(data):
"""
解析处理器数据,判断是否包含优先级
:param data: 订阅者数据,可能是元组或单一值
:return: (priority, handler),若没有优先级则返回 (None, handler)
"""
if isinstance(data, tuple) and len(data) == 2:
return data
return None, data
handler_info = []
# 统一处理广播事件和链式事件
for event_type, subscribers in {**self.__broadcast_subscribers, **self.__chain_subscribers}.items():
for handler_data in subscribers:
if isinstance(subscribers, dict):
priority, handler = handler_data
else:
priority = None
handler = handler_data
# 获取处理器的唯一标识符
handler_id = self.__get_handler_identifier(handler)
for handler_identifier, handler_data in subscribers.items():
# 解析优先级和处理器
priority, handler = parse_handler_data(handler_data)
# 检查处理器的启用状态
status = "enabled" if self.__is_handler_enabled(handler) else "disabled"
# 构建处理器信息字典
handler_dict = {
"event_type": event_type.value,
"handler_identifier": handler_id,
"handler_identifier": handler_identifier,
"status": status
}
if priority is not None:
@@ -341,8 +347,17 @@ class EventManager(metaclass=Singleton):
if not handlers:
logger.debug(f"No handlers found for chain event: {event}")
return False
# 过滤出启用的处理器
enabled_handlers = {handler_id: (priority, handler) for handler_id, (priority, handler) in handlers.items()
if self.__is_handler_enabled(handler)}
if not enabled_handlers:
logger.debug(f"No enabled handlers found for chain event: {event}. Skipping execution.")
return False
self.__log_event_lifecycle(event, "Started")
for handler_id, (priority, handler) in handlers.items():
for handler_id, (priority, handler) in enabled_handlers.items():
start_time = time.time()
self.__safe_invoke_handler(handler, event)
logger.debug(
@@ -499,11 +514,18 @@ class EventManager(metaclass=Singleton):
def decorator(f: Callable):
# 将输入的事件类型统一转换为列表格式
if isinstance(etype, list):
event_list = etype # 传入的已经是列表,直接使用
# 传入的已经是列表,直接使用
event_list = etype
elif etype is EventType:
# 订阅所有事件
event_list = []
for et in etype:
event_list.append(et)
else:
event_list = [etype] # 不是列表则包裹成单一元素的列表
# 不是列表则包裹成单一元素的列表
event_list = [etype]
# 遍历列表,处理每个事件类型
for event in event_list:
if isinstance(event, (EventType, ChainEventType)):
self.add_event_listener(event, f)

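The chain-event change above filters the subscriber map down to enabled handlers before dispatch, and skips the whole lifecycle when none remain. A rough standalone sketch of that filter-then-dispatch pattern — the `{handler_id: (priority, handler)}` storage shape is taken from the diff, while the priority ordering shown is an assumption, not necessarily the real manager's order:

```python
def dispatch_chain_event(handlers, is_enabled, event):
    """handlers: {handler_id: (priority, callable)} as in the diff above."""
    enabled = {hid: (prio, fn) for hid, (prio, fn) in handlers.items()
               if is_enabled(fn)}
    if not enabled:
        return False  # nothing enabled → skip the event lifecycle entirely
    # dispatch in ascending priority order (assumed for the sketch)
    for hid, (prio, fn) in sorted(enabled.items(), key=lambda kv: kv[1][0]):
        fn(event)
    return True

calls = []
handlers = {
    "h1": (10, lambda e: calls.append(("h1", e))),
    "h2": (5,  lambda e: calls.append(("h2", e))),
    "h3": (1,  lambda e: calls.append(("h3", e))),
}
# pretend h3 is disabled
ran = dispatch_chain_event(handlers,
                           lambda fn: fn is not handlers["h3"][1],
                           "ResourceSelection")
print(ran, calls)  # → True [('h2', 'ResourceSelection'), ('h1', 'ResourceSelection')]
```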

@@ -81,7 +81,6 @@ class MetaAnime(MetaBase):
_, self.cn_name, _, _, _, _ = StringUtils.get_keyword(self.cn_name)
if self.cn_name:
self.cn_name = re.sub(r'%s' % self._name_nostring_re, '', self.cn_name, flags=re.IGNORECASE).strip()
self.cn_name = zhconv.convert(self.cn_name, "zh-hans")
if self.en_name:
self.en_name = re.sub(r'%s' % self._name_nostring_re, '', self.en_name, flags=re.IGNORECASE).strip().title()
self._name = StringUtils.str_title(self.en_name)


@@ -30,8 +30,8 @@ class MetaVideo(MetaBase):
_episode_re = r"EP?(\d{2,4})$|^EP?(\d{1,4})$|^S\d{1,2}EP?(\d{1,4})$|S\d{2}EP?(\d{2,4})"
_part_re = r"(^PART[0-9ABI]{0,2}$|^CD[0-9]{0,2}$|^DVD[0-9]{0,2}$|^DISK[0-9]{0,2}$|^DISC[0-9]{0,2}$)"
_roman_numerals = r"^(?=[MDCLXVI])M*(C[MD]|D?C{0,3})(X[CL]|L?X{0,3})(I[XV]|V?I{0,3})$"
_source_re = r"^BLURAY$|^HDTV$|^UHDTV$|^HDDVD$|^WEBRIP$|^DVDRIP$|^BDRIP$|^BLU$|^WEB$|^BD$|^HDRip$"
_effect_re = r"^REMUX$|^UHD$|^SDR$|^HDR\d*$|^DOLBY$|^DOVI$|^DV$|^3D$|^REPACK$"
_source_re = r"^BLURAY$|^HDTV$|^UHDTV$|^HDDVD$|^WEBRIP$|^DVDRIP$|^BDRIP$|^BLU$|^WEB$|^BD$|^HDRip$|^REMUX$|^UHD$"
_effect_re = r"^SDR$|^HDR\d*$|^DOLBY$|^DOVI$|^DV$|^3D$|^REPACK$"
_resources_type_re = r"%s|%s" % (_source_re, _effect_re)
_name_no_begin_re = r"^[\[【].+?[\]】]"
_name_no_chinese_re = r".*版|.*字幕"
@@ -524,16 +524,7 @@ class MetaVideo(MetaBase):
"""
if not self.name:
return
source_res = re.search(r"(%s)" % self._source_re, token, re.IGNORECASE)
if source_res:
self._last_token_type = "source"
self._continue_flag = False
self._stop_name_flag = True
if not self._source:
self._source = source_res.group(1)
self._last_token = self._source.upper()
return
elif token.upper() == "DL" \
if token.upper() == "DL" \
and self._last_token_type == "source" \
and self._last_token == "WEB":
self._source = "WEB-DL"
@@ -542,13 +533,37 @@ class MetaVideo(MetaBase):
elif token.upper() == "RAY" \
and self._last_token_type == "source" \
and self._last_token == "BLU":
self._source = "BluRay"
# UHD BluRay组合
if self._source == "UHD":
self._source = "UHD BluRay"
else:
self._source = "BluRay"
self._continue_flag = False
return
elif token.upper() == "WEBDL":
self._source = "WEB-DL"
self._continue_flag = False
return
# UHD REMUX组合
if token.upper() == "REMUX" \
and self._source == "BluRay":
self._source = "BluRay REMUX"
self._continue_flag = False
return
elif token.upper() == "BLURAY" \
and self._source == "UHD":
self._source = "UHD BluRay"
self._continue_flag = False
return
source_res = re.search(r"(%s)" % self._source_re, token, re.IGNORECASE)
if source_res:
self._last_token_type = "source"
self._continue_flag = False
self._stop_name_flag = True
if not self._source:
self._source = source_res.group(1)
self._last_token = self._source.upper()
return
effect_res = re.search(r"(%s)" % self._effect_re, token, re.IGNORECASE)
if effect_res:
self._last_token_type = "effect"

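The MetaVideo change moves `REMUX` and `UHD` from the effect regex into the source regex so that multi-token sources such as `UHD BluRay REMUX` can be combined while scanning tokens left to right. A simplified, hypothetical re-implementation of just that combination logic (the real parser carries more state):

```python
import re

SOURCE_RE = re.compile(
    r"^(BLURAY|HDTV|UHDTV|HDDVD|WEBRIP|DVDRIP|BDRIP|BLU|WEB|BD|HDRIP|REMUX|UHD)$",
    re.IGNORECASE)

def parse_source(tokens):
    """Fold consecutive source tokens into one label (sketch, not the real parser)."""
    source = None
    last = None
    for tok in tokens:
        up = tok.upper()
        if up == "DL" and last == "WEB":
            source = "WEB-DL"                      # WEB + DL
        elif up == "RAY" and last == "BLU":
            source = "UHD BluRay" if source == "UHD" else "BluRay"
        elif up == "REMUX" and source in ("BluRay", "UHD BluRay"):
            source += " REMUX"                     # BluRay + REMUX
        elif up == "BLURAY" and source == "UHD":
            source = "UHD BluRay"                  # UHD + BluRay
        elif SOURCE_RE.match(tok) and source is None:
            source = "UHD" if up == "UHD" else tok
        last = up
    return source

print(parse_source(["Movie", "2023", "UHD", "BluRay", "REMUX"]))  # → UHD BluRay REMUX
print(parse_source(["Show", "S01", "WEB", "DL"]))                 # → WEB-DL
```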

@@ -1,11 +1,12 @@
import traceback
from typing import Generator, Optional, Tuple, Any
from typing import Generator, Optional, Tuple, Any, Union
from app.core.config import settings
from app.core.event import eventmanager
from app.helper.module import ModuleHelper
from app.log import logger
from app.schemas.types import EventType, ModuleType
from app.schemas.types import EventType, ModuleType, DownloaderType, MediaServerType, MessageChannel, StorageSchema, \
OtherModulesType
from app.utils.object import ObjectUtils
from app.utils.singleton import Singleton
@@ -19,6 +20,8 @@ class ModuleManager(metaclass=Singleton):
_modules: dict = {}
# 运行态模块列表
_running_modules: dict = {}
# 子模块类型集合
SubType = Union[DownloaderType, MediaServerType, MessageChannel, StorageSchema, OtherModulesType]
def __init__(self):
self.load_modules()
@@ -135,6 +138,17 @@ class ModuleManager(metaclass=Singleton):
and module.get_type() == module_type:
yield module
def get_running_subtype_module(self, module_subtype: SubType) -> Generator:
"""
获取指定子类型的模块
"""
if not self._running_modules:
return []
for _, module in self._running_modules.items():
if hasattr(module, 'get_subtype') \
and module.get_subtype() == module_subtype:
yield module
def get_module(self, module_id: str) -> Any:
"""
根据模块id获取模块

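`get_running_subtype_module` duck-types on a `get_subtype` method rather than class identity, so modules without the method are silently skipped. The same pattern in isolation, with illustrative module objects rather than the real ones:

```python
def modules_by_subtype(running_modules, subtype):
    """Yield modules whose get_subtype() matches, skipping those without one."""
    for module in running_modules.values():
        if hasattr(module, "get_subtype") and module.get_subtype() == subtype:
            yield module

class Qbittorrent:
    def get_subtype(self):
        return "downloader:qbittorrent"

class Plex:
    def get_subtype(self):
        return "mediaserver:plex"

class Legacy:  # no get_subtype(); must be skipped, not crash
    pass

running = {"qb": Qbittorrent(), "plex": Plex(), "old": Legacy()}
hits = list(modules_by_subtype(running, "mediaserver:plex"))
print([type(m).__name__ for m in hits])  # → ['Plex']
```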

@@ -331,6 +331,26 @@ class PluginManager(metaclass=Singleton):
)
return sync_plugins
def install_plugin_missing_dependencies(self) -> List[str]:
"""
安装插件中缺失或不兼容的依赖项
"""
# 第一步:获取需要安装的依赖项列表
missing_dependencies = self.pluginhelper.find_missing_dependencies()
if not missing_dependencies:
return missing_dependencies
logger.debug(f"检测到缺失的依赖项: {missing_dependencies}")
logger.info(f"开始安装缺失的依赖项,共 {len(missing_dependencies)} 个...")
# 第二步:安装依赖项并返回结果
total_start_time = time.time()
success, message = self.pluginhelper.install_dependencies(missing_dependencies)
total_elapsed_time = time.time() - total_start_time
if success:
logger.info(f"已完成 {len(missing_dependencies)} 个依赖项安装,总耗时:{total_elapsed_time:.2f}")
else:
logger.warning(f"存在缺失依赖项安装失败,请尝试手动安装,总耗时:{total_elapsed_time:.2f}")
return missing_dependencies
def get_plugin_config(self, pid: str) -> dict:
"""
获取插件配置
@@ -506,7 +526,8 @@ class PluginManager(metaclass=Singleton):
"name": "服务名称",
"trigger": "触发器cron、interval、date、CronTrigger.from_crontab()",
"func": self.xxx,
"kwargs": {} # 定时器参数
"kwargs": {} # 定时器参数,
"func_kwargs": {} # 方法参数
}]
"""
ret_services = []

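`install_plugin_missing_dependencies` first asks the helper for missing requirements, then times a single batch install. The detection half can be sketched with the standard library alone; `find_missing` below is a hypothetical stand-in for `pluginhelper.find_missing_dependencies` and only does a name-presence check, not the version-specifier matching the real helper presumably performs:

```python
from importlib import metadata

def find_missing(requirements):
    """Return requirements whose distribution is not installed (name-only check)."""
    missing = []
    for req in requirements:
        # strip a simple version specifier, e.g. "foo==1.0" or "foo>=2"
        name = req.split("==")[0].split(">=")[0].strip()
        try:
            metadata.version(name)
        except metadata.PackageNotFoundError:
            missing.append(req)
    return missing

print(find_missing(["definitely-not-a-real-package-xyz==1.0"]))
# → ['definitely-not-a-real-package-xyz==1.0']
```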

@@ -1,22 +1,25 @@
from typing import Any, Generator, List, Optional, Self, Tuple
from sqlalchemy import NullPool, QueuePool, and_, create_engine, inspect
from sqlalchemy import NullPool, QueuePool, and_, create_engine, inspect, text
from sqlalchemy.orm import Session, as_declarative, declared_attr, scoped_session, sessionmaker
from app.core.config import settings
# 根据池类型设置 poolclass 和相关参数
pool_class = NullPool if settings.DB_POOL_TYPE == "NullPool" else QueuePool
connect_args = {
"timeout": settings.DB_TIMEOUT
}
# 启用 WAL 模式时的额外配置
if settings.DB_WAL_ENABLE:
connect_args["check_same_thread"] = False
kwargs = {
"url": f"sqlite:///{settings.CONFIG_PATH}/user.db",
"pool_pre_ping": settings.DB_POOL_PRE_PING,
"echo": settings.DB_ECHO,
"poolclass": pool_class,
"pool_recycle": settings.DB_POOL_RECYCLE,
"connect_args": {
# "check_same_thread": False,
"timeout": settings.DB_TIMEOUT
}
"connect_args": connect_args
}
# 当使用 QueuePool 时,添加 QueuePool 特有的参数
if pool_class == QueuePool:
@@ -27,6 +30,11 @@ if pool_class == QueuePool:
})
# 创建数据库引擎
Engine = create_engine(**kwargs)
# 根据配置设置日志模式
journal_mode = "WAL" if settings.DB_WAL_ENABLE else "DELETE"
with Engine.connect() as connection:
current_mode = connection.execute(text(f"PRAGMA journal_mode={journal_mode};")).scalar()
print(f"Database journal mode set to: {current_mode}")
# 会话工厂
SessionFactory = sessionmaker(bind=Engine)
@@ -49,11 +57,34 @@ def get_db() -> Generator:
db.close()
def perform_checkpoint(mode: str = "PASSIVE"):
"""
执行 SQLite 的 checkpoint 操作,将 WAL 文件内容写回主数据库
:param mode: checkpoint 模式,可选值包括 "PASSIVE""FULL""RESTART""TRUNCATE"
默认为 "PASSIVE",即不锁定 WAL 文件的轻量级同步
"""
if not settings.DB_WAL_ENABLE:
return
valid_modes = {"PASSIVE", "FULL", "RESTART", "TRUNCATE"}
if mode.upper() not in valid_modes:
raise ValueError(f"Invalid checkpoint mode '{mode}'. Must be one of {valid_modes}")
try:
# 使用指定的 checkpoint 模式,确保 WAL 文件数据被正确写回主数据库
with Engine.connect() as conn:
conn.execute(text(f"PRAGMA wal_checkpoint({mode.upper()});"))
except Exception as e:
print(f"Error during WAL checkpoint: {e}")
def close_database():
"""
关闭所有数据库连接
关闭所有数据库连接并清理资源
"""
Engine.dispose()
try:
# 释放连接池SQLite 会自动清空 WAL 文件,这里不单独再调用 checkpoint
Engine.dispose()
except Exception as e:
print(f"Error while disposing database connections: {e}")
def get_args_db(args: tuple, kwargs: dict) -> Optional[Session]:

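The database change above gates WAL mode behind `DB_WAL_ENABLE` and adds `perform_checkpoint` to flush the WAL file back into the main database. The raw SQLite mechanics look like this with only the stdlib `sqlite3` module (SQLAlchemy omitted to keep the sketch self-contained):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "user.db")
conn = sqlite3.connect(path)

# switch the journal mode; the PRAGMA returns the mode actually in effect
mode = conn.execute("PRAGMA journal_mode=WAL;").fetchone()[0]
print(mode)  # → wal

conn.execute("CREATE TABLE t (x)")
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()

# PASSIVE checkpoint: returns (busy, wal_pages, checkpointed_pages)
busy, wal_pages, moved = conn.execute(
    "PRAGMA wal_checkpoint(PASSIVE);").fetchone()
print(busy)  # → 0 (no writer blocked the checkpoint)
conn.close()
```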

@@ -53,7 +53,9 @@ class DownloadHistory(Base):
@staticmethod
@db_query
def get_by_hash(db: Session, download_hash: str):
return db.query(DownloadHistory).filter(DownloadHistory.download_hash == download_hash).first()
return db.query(DownloadHistory).filter(DownloadHistory.download_hash == download_hash).order_by(
DownloadHistory.date.desc()
).first()
@staticmethod
@db_query


@@ -46,11 +46,13 @@ class Site(Base):
# 流控间隔
limit_seconds = Column(Integer, default=0)
# 超时时间
timeout = Column(Integer, default=0)
timeout = Column(Integer, default=15)
# 是否启用
is_active = Column(Boolean(), default=True)
# 创建时间
lst_mod_date = Column(String, default=datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
# 下载器
downloader = Column(String)
@staticmethod
@db_query


@@ -81,6 +81,7 @@ class SiteUserData(Base):
func.max(SiteUserData.updated_day).label('latest_update_day')
)
.group_by(SiteUserData.domain)
.filter(SiteUserData.err_msg.is_(None))
.subquery()
)


@@ -54,7 +54,7 @@ class Subscribe(Base):
lack_episode = Column(Integer)
# 附加信息
note = Column(JSON)
# 状态N-新建 R-订阅中
# 状态N-新建 R-订阅中 P-待定 S-暂停
state = Column(String, nullable=False, index=True, default='N')
# 最后更新时间
last_update = Column(String)
@@ -64,6 +64,8 @@ class Subscribe(Base):
username = Column(String)
# 订阅站点
sites = Column(JSON, default=list)
# 下载器
downloader = Column(String)
# 是否洗版
best_version = Column(Integer, default=0)
# 当前优先级
@@ -96,7 +98,13 @@ class Subscribe(Base):
@staticmethod
@db_query
def get_by_state(db: Session, state: str):
result = db.query(Subscribe).filter(Subscribe.state == state).all()
# 如果 state 为空或 None返回所有订阅
if not state:
result = db.query(Subscribe).all()
else:
# 如果传入的状态不为空,拆分成多个状态
states = state.split(',')
result = db.query(Subscribe).filter(Subscribe.state.in_(states)).all()
return list(result)
@staticmethod

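`get_by_state` now accepts a comma-separated state string (e.g. `"N,R,P,S"`) and falls back to returning everything when the string is empty. The filtering contract, reduced to plain Python — the real method builds a SQLAlchemy `Subscribe.state.in_(states)` filter instead of a list comprehension:

```python
def filter_by_state(subscribes, state):
    """subscribes: list of dicts with a 'state' key; state: 'N', 'N,R', '' or None."""
    if not state:
        return list(subscribes)          # empty/None → all subscriptions
    states = state.split(",")            # e.g. "P,S" → ["P", "S"]
    return [s for s in subscribes if s["state"] in states]

subs = [{"id": 1, "state": "N"}, {"id": 2, "state": "R"},
        {"id": 3, "state": "P"}, {"id": 4, "state": "S"}]
print([s["id"] for s in filter_by_state(subs, "P,S")])  # → [3, 4]
print(len(filter_by_state(subs, None)))                 # → 4
```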

@@ -83,7 +83,8 @@ class SubscribeOper(DbOper):
更新订阅
"""
subscribe = self.get(sid)
subscribe.update(self._db, payload)
if subscribe:
subscribe.update(self._db, payload)
return subscribe
def list_by_tmdbid(self, tmdbid: int, season: int = None) -> List[Subscribe]:


@@ -16,7 +16,7 @@ class PlaywrightHelper:
"""
sync_stealth(page, pure=True)
page.goto(url)
return sync_cf_retry(page)
return sync_cf_retry(page)[0]
def action(self, url: str,
callback: Callable,


@@ -4,7 +4,8 @@ from typing import List, Optional
from app import schemas
from app.core.context import MediaInfo
from app.db.systemconfig_oper import SystemConfigOper
from app.schemas.types import SystemConfigKey, MediaType
from app.schemas.types import SystemConfigKey
from app.utils.system import SystemUtils
class DirectoryHelper:
@@ -48,46 +49,62 @@ class DirectoryHelper:
"""
return [d for d in self.get_library_dirs() if d.library_storage == "local"]
def get_dir(self, media: MediaInfo, src_path: Path = None, dest_path: Path = None,
local: bool = False) -> Optional[schemas.TransferDirectoryConf]:
def get_dir(self, media: MediaInfo, include_unsorted: bool = False,
storage: str = None, src_path: Path = None,
target_storage: str = None, dest_path: Path = None
) -> Optional[schemas.TransferDirectoryConf]:
"""
根据媒体信息获取下载目录、媒体库目录配置
:param media: 媒体信息
:param include_unsorted: 包含不整理目录
:param storage: 源存储类型
:param target_storage: 目标存储类型
:param src_path: 源目录,有值时直接匹配
:param dest_path: 目标目录,有值时直接匹配
:param local: 是否本地目录
"""
# 处理类型
if media:
media_type = media.type.value
else:
media_type = MediaType.UNKNOWN.value
if not media:
return None
# 电影/电视剧
media_type = media.type.value
dirs = self.get_dirs()
# 已匹配的目录
matched_dirs: List[schemas.TransferDirectoryConf] = []
# 按照配置顺序查找
for d in dirs:
if not d.download_path or not d.library_path:
# 没有启用整理的目录
if not d.monitor_type and not include_unsorted:
continue
# 下载目录
download_path = Path(d.download_path)
# 媒体库目录
library_path = Path(d.library_path)
# 媒体类型
# 有目录时直接匹配
if src_path and download_path != src_path:
# 源存储类型不匹配
if storage and d.storage != storage:
continue
if dest_path and library_path != dest_path:
# 目标存储类型不匹配
if target_storage and d.library_storage != target_storage:
continue
# 本地目录
if local and d.storage != "local":
# 有源目录时,源目录不匹配下载目录
if src_path and not src_path.is_relative_to(d.download_path):
continue
# 有目标目录时,目标目录不匹配媒体库目录
if dest_path and dest_path != Path(d.library_path):
continue
# 目录类型为全部的,符合条件
if not d.media_type:
return d
matched_dirs.append(d)
continue
# 目录类型相等,目录类别为全部,符合条件
if d.media_type == media_type and not d.media_category:
return d
matched_dirs.append(d)
continue
# 目录类型相等,目录类别相等,符合条件
if d.media_type == media_type and d.media_category == media.category:
return d
matched_dirs.append(d)
continue
if matched_dirs:
if src_path:
# 优先源目录同盘
for matched_dir in matched_dirs:
matched_path = Path(matched_dir.download_path)
if SystemUtils.is_same_disk(matched_path, src_path):
return matched_dir
return matched_dirs[0]
return None
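The rewritten `get_dir` above collects every qualifying directory into `matched_dirs` and only then applies the same-disk preference. A minimal sketch of that matching order, using a hypothetical `DirConf` stand-in for `schemas.TransferDirectoryConf` (the real code also filters on storage types and monitor state):

```python
from dataclasses import dataclass
from pathlib import Path
from typing import List, Optional

@dataclass
class DirConf:                # hypothetical stand-in for TransferDirectoryConf
    download_path: str
    media_type: str = ""      # empty = matches all types
    media_category: str = ""  # empty = matches all categories

def match_dir(dirs: List[DirConf], media_type: str, category: str,
              src_path: Optional[Path] = None) -> Optional[DirConf]:
    matched = []
    for d in dirs:
        # a given source path must live under the directory's download path
        if src_path and not src_path.is_relative_to(d.download_path):
            continue
        # "all types" dirs, type-only dirs, and type+category dirs all qualify
        if not d.media_type or (d.media_type == media_type
                                and d.media_category in ("", category)):
            matched.append(d)
    # the real code first prefers a matched dir on the same disk as src_path
    # (SystemUtils.is_same_disk) before falling back to the first match
    return matched[0] if matched else None

dirs = [DirConf("/dl/movies", media_type="电影"), DirConf("/dl/all")]
assert match_dir(dirs, "电影", "动画", Path("/dl/movies/a.mkv")).download_path == "/dl/movies"
assert match_dir(dirs, "电视剧", "").download_path == "/dl/all"
assert match_dir(dirs, "电视剧", "", Path("/other/a.mkv")) is None
```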

View File

@@ -9,17 +9,27 @@ class FormatParser(object):
_split_chars = r"\.|\s+|\(|\)|\[|]|-|\+|【|】|/||;|&|\||#|_|「|」|~"
def __init__(self, eformat: str, details: str = None, part: str = None,
offset: int = None, key: str = "ep"):
offset: str = None, key: str = "ep"):
"""
:params eformat: 格式化字符串
:params details: 格式化详情
:params part: 分集
:params offset: 偏移量
:params offset: 偏移量 -10/EP*2
:prams key: EP关键字
"""
self._format = eformat
self._start_ep = None
self._end_ep = None
if not offset:
self.__offset = "EP"
elif "EP" in offset:
self.__offset = offset
else:
if offset.startswith("-") or offset.startswith("+"):
self.__offset = f"EP{offset}"
else:
self.__offset = f"EP+{offset}"
self._key = key
self._part = None
if part:
self._part = part
@@ -34,8 +44,6 @@ class FormatParser(object):
self._end_ep = int(tmp[0]) if int(tmp[0]) > int(tmp[1]) else int(tmp[1])
else:
self._start_ep = self._end_ep = int(tmp[0])
self.__offset = int(offset) if offset else 0
self._key = key
@property
def format(self):
@@ -77,15 +85,21 @@ class FormatParser(object):
if self._start_ep is not None and self._start_ep == self._end_ep:
if isinstance(self._start_ep, str):
s, e = self._start_ep.split("-")
start_ep = self.__offset.replace("EP", s)
end_ep = self.__offset.replace("EP", e)
if int(s) == int(e):
return int(s) + self.__offset, None, self.part
return int(s) + self.__offset, int(e) + self.__offset, self.part
return self._start_ep + self.__offset, None, self.part
return int(eval(start_ep)), None, self.part
return int(eval(start_ep)), int(eval(end_ep)), self.part
else:
start_ep = self.__offset.replace("EP", str(self._start_ep))
return int(eval(start_ep)), None, self.part
if not self._format:
return self._start_ep, self._end_ep, self.part
s, e = self.__handle_single(file_name)
return s + self.__offset if s is not None else None, \
e + self.__offset if e is not None else None, self.part
else:
s, e = self.__handle_single(file_name)
start_ep = self.__offset.replace("EP", str(s)) if s else None
end_ep = self.__offset.replace("EP", str(e)) if e else None
return int(eval(start_ep)) if start_ep else None, int(eval(end_ep)) if end_ep else None, self.part
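The constructor change turns `offset` from an integer into an `EP` expression string ("-10" becomes "EP-10", "4" becomes "EP+4", "EP*2" is kept as-is), which is later applied by substituting the episode number for "EP" and calling `eval`. A sketch of the two halves with hypothetical helper names — note `eval` here assumes the format string is trusted, user-authored configuration:

```python
def normalize_offset(offset):
    # mirrors the constructor: None -> identity, bare numbers get EP+/EP- prefixes
    if not offset:
        return "EP"
    if "EP" in offset:
        return offset
    if offset.startswith(("-", "+")):
        return f"EP{offset}"
    return f"EP+{offset}"

def apply_offset(expr, ep):
    # substitute the episode number and evaluate the trusted expression
    return int(eval(expr.replace("EP", str(ep))))

assert apply_offset(normalize_offset("-10"), 13) == 3
assert apply_offset(normalize_offset("EP*2"), 6) == 12
assert apply_offset(normalize_offset("4"), 1) == 5
assert apply_offset(normalize_offset(None), 7) == 7
```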
def __handle_single(self, file: str) -> Tuple[Optional[int], Optional[int]]:
"""

View File

@@ -2,9 +2,12 @@ import json
import shutil
import traceback
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple
from typing import Any, Dict, List, Optional, Tuple, Set
from cachetools import TTLCache, cached
from packaging.specifiers import SpecifierSet, InvalidSpecifier
from packaging.version import Version, InvalidVersion
from pkg_resources import Requirement, working_set
from app.core.config import settings
from app.db.systemconfig_oper import SystemConfigOper
@@ -15,6 +18,8 @@ from app.utils.singleton import Singleton
from app.utils.system import SystemUtils
from app.utils.url import UrlUtils
PLUGIN_DIR = Path(settings.ROOT_PATH) / "app" / "plugins"
class PluginHelper(metaclass=Singleton):
"""
@@ -359,7 +364,7 @@ class PluginHelper(metaclass=Singleton):
requirements_txt = res.text
if requirements_txt.strip():
# 保存并安装依赖
requirements_file_path = Path(settings.ROOT_PATH) / "app" / "plugins" / pid.lower() / "requirements.txt"
requirements_file_path = PLUGIN_DIR / pid.lower() / "requirements.txt"
requirements_file_path.parent.mkdir(parents=True, exist_ok=True)
with open(requirements_file_path, "w", encoding="utf-8") as f:
f.write(requirements_txt)
@@ -376,7 +381,7 @@ class PluginHelper(metaclass=Singleton):
:return: (是否存在依赖,安装是否成功, 错误信息)
"""
# 定位插件目录和依赖文件
plugin_dir = Path(settings.ROOT_PATH) / "app" / "plugins" / pid.lower()
plugin_dir = PLUGIN_DIR / pid.lower()
requirements_file = plugin_dir / "requirements.txt"
# 检查是否存在 requirements.txt 文件
@@ -397,8 +402,8 @@ class PluginHelper(metaclass=Singleton):
:param pid: 插件 ID
:return: 备份目录路径
"""
plugin_dir = Path(settings.ROOT_PATH) / "app" / "plugins" / pid
backup_dir = Path(settings.TEMP_PATH) / "plugins_backup" / pid
plugin_dir = PLUGIN_DIR / pid
backup_dir = Path(settings.TEMP_PATH) / "plugin_backup" / pid
if plugin_dir.exists():
# 备份时清理已有的备份目录,防止残留文件影响
@@ -418,7 +423,7 @@ class PluginHelper(metaclass=Singleton):
:param pid: 插件 ID
:param backup_dir: 备份目录路径
"""
plugin_dir = Path(settings.ROOT_PATH) / "app" / "plugins" / pid
plugin_dir = PLUGIN_DIR / pid
if plugin_dir.exists():
shutil.rmtree(plugin_dir, ignore_errors=True)
logger.debug(f"{pid} 已清理插件目录 {plugin_dir}")
@@ -435,7 +440,7 @@ class PluginHelper(metaclass=Singleton):
删除旧插件
:param pid: 插件 ID
"""
plugin_dir = Path(settings.ROOT_PATH) / "app" / "plugins" / pid
plugin_dir = PLUGIN_DIR / pid
if plugin_dir.exists():
shutil.rmtree(plugin_dir, ignore_errors=True)
@@ -560,3 +565,182 @@ class PluginHelper(metaclass=Singleton):
logger.error(f"[GitHub] 所有策略均请求失败,URL: {url},请检查网络连接或 GitHub 配置")
return None
def find_missing_dependencies(self) -> List[str]:
"""
收集所有需要安装或更新的依赖项
1. 收集所有插件的依赖项,合并版本约束
2. 获取已安装的包及其版本
3. 比较已安装的包与所需的依赖项,找出需要安装或升级的包
:return: 需要安装或更新的依赖项列表,例如 ["package1>=1.0.0", "package2"]
"""
try:
# 收集所有插件的依赖项
plugin_dependencies = self.__find_plugin_dependencies() # 返回格式为 {package_name: version_specifier}
# 获取已安装的包及其版本
installed_packages = self.__get_installed_packages() # 返回格式为 {package_name: Version}
# 需要安装或更新的依赖项列表
dependencies_to_install = []
for pkg_name, version_specifier in plugin_dependencies.items():
spec_set = SpecifierSet(version_specifier)
installed_version = installed_packages.get(pkg_name)
if installed_version is None:
# 包未安装,需要安装
if version_specifier:
dependencies_to_install.append(f"{pkg_name}{version_specifier}")
else:
dependencies_to_install.append(pkg_name)
elif not spec_set.contains(installed_version, prereleases=True):
# 已安装的版本不满足版本约束,需要升级或降级
if version_specifier:
dependencies_to_install.append(f"{pkg_name}{version_specifier}")
else:
dependencies_to_install.append(pkg_name)
# 已安装的版本满足要求,无需操作
return dependencies_to_install
except Exception as e:
logger.error(f"收集所有需要安装或更新的依赖项时发生错误:{e}")
return []
def install_dependencies(self, dependencies: List[str]) -> Tuple[bool, str]:
"""
安装指定的依赖项列表
:param dependencies: 需要安装或更新的依赖项列表
:return: (success, message)
"""
if not dependencies:
return False, "没有传入需要安装的依赖项"
try:
logger.debug(f"需要安装或更新的依赖项:{dependencies}")
# 创建临时的 requirements.txt 文件用于批量安装
requirements_temp_file = Path(settings.TEMP_PATH) / "plugin_dependencies" / "requirements.txt"
requirements_temp_file.parent.mkdir(parents=True, exist_ok=True)
with open(requirements_temp_file, "w", encoding="utf-8") as f:
for dep in dependencies:
f.write(dep + "\n")
# 使用自动降级策略安装依赖
success, message = self.__pip_install_with_fallback(requirements_temp_file)
# 删除临时文件
requirements_temp_file.unlink()
return success, message
except Exception as e:
logger.error(f"安装依赖项时发生错误:{e}")
return False, f"安装依赖项时发生错误:{e}"
def __get_installed_packages(self) -> Dict[str, Version]:
"""
获取已安装的包及其版本
使用 pkg_resources 获取当前环境中已安装的包,标准化包名并转换版本信息
对于无法解析的版本,记录警告日志并跳过
:return: 已安装包的字典,格式为 {package_name: Version}
"""
installed_packages = {}
try:
for dist in working_set:
pkg_name = self.__standardize_pkg_name(dist.project_name)
try:
installed_packages[pkg_name] = Version(dist.version)
except InvalidVersion:
logger.debug(f"无法解析已安装包 '{pkg_name}' 的版本:{dist.version}")
continue
return installed_packages
except Exception as e:
logger.error(f"获取已安装的包时发生错误:{e}")
return {}
def __find_plugin_dependencies(self) -> Dict[str, str]:
"""
收集所有插件的依赖项
遍历 plugins 目录下的所有插件,查找存在 requirements.txt 的插件目录,并解析其中的依赖项,同时将所有插件的依赖项合并到字典中,方便后续统一处理
:return: 合并后的依赖项字典,格式为 {package_name: version_specifier}
"""
dependencies = {}
try:
for plugin_dir in PLUGIN_DIR.iterdir():
if plugin_dir.is_dir():
requirements_file = plugin_dir / "requirements.txt"
if requirements_file.exists():
# 解析当前插件的 requirements.txt,获取依赖项
plugin_deps = self.__parse_requirements(requirements_file)
for pkg_name, version_specifiers in plugin_deps.items():
if pkg_name in dependencies:
# 更新已存在的包的版本约束集合
dependencies[pkg_name].update(version_specifiers)
else:
# 添加新的包及其版本约束
dependencies[pkg_name] = set(version_specifiers)
return self.__merge_dependencies(dependencies)
except Exception as e:
logger.error(f"收集插件依赖项时发生错误:{e}")
return {}
def __parse_requirements(self, requirements_file: Path) -> Dict[str, List[str]]:
"""
解析 requirements.txt 文件,返回依赖项字典
使用 packaging 库解析每一行依赖项,提取包名和版本约束
对于无法解析的行,记录警告日志,便于后续检查
:param requirements_file: requirements.txt 文件的路径
:return: 依赖项字典,格式为 {package_name: [version_specifier]}
"""
dependencies = {}
try:
with open(requirements_file, "r", encoding="utf-8") as f:
for line in f:
line = line.strip()
if line and not line.startswith('#'):
# 使用 packaging 库解析依赖项
try:
req = Requirement(line)
pkg_name = self.__standardize_pkg_name(req.name)
version_specifier = str(req.specifier)
if pkg_name in dependencies:
dependencies[pkg_name].append(version_specifier)
else:
dependencies[pkg_name] = [version_specifier]
except Exception as e:
logger.debug(f"无法解析依赖项 '{line}':{e}")
return dependencies
except Exception as e:
logger.error(f"解析 requirements.txt 时发生错误:{e}")
return {}
@staticmethod
def __merge_dependencies(dependencies: Dict[str, Set[str]]) -> Dict[str, str]:
"""
合并依赖项,选择每个包的最高版本要求
对于多个插件依赖同一包的情况,合并其版本约束,取交集以满足所有插件的要求
如果交集为空,表示存在版本冲突,需要根据策略进行处理
:param dependencies: 依赖项字典,格式为 {package_name: set(version_specifiers)}
:return: 合并后的依赖项字典,格式为 {package_name: version_specifiers}
"""
try:
merged_dependencies = {}
for pkg_name, version_specifiers in dependencies.items():
# 合并版本约束
spec_set = SpecifierSet()
for specifier in version_specifiers:
try:
if specifier:
spec_set &= SpecifierSet(specifier)
except InvalidSpecifier as e:
logger.error(f"发生版本约束冲突:{e}")
# 将合并后的版本约束添加到结果字典
merged_dependencies[pkg_name] = str(spec_set) if spec_set else ''
return merged_dependencies
except Exception as e:
logger.error(f"合并依赖项时发生错误:{e}")
return {}
@staticmethod
def __standardize_pkg_name(name: str) -> str:
"""
标准化包名,将包名转换为小写并将连字符替换为下划线
:param name: 原始包名
:return: 标准化后的包名
"""
return name.lower().replace("-", "_") if name else name
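The dependency pipeline above — parse each `requirements.txt`, normalize package names, intersect the `SpecifierSet`s, then compare against installed versions — can be sketched with `packaging` directly. One caveat: the source parses lines with `pkg_resources.Requirement`, while this sketch uses `packaging.requirements.Requirement`, which exposes an equivalent `name`/`specifier` interface:

```python
from packaging.requirements import Requirement
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# collect constraints per normalized package name (as __parse_requirements does)
lines = ["requests>=2.28", "requests<3", "rich"]
collected = {}
for line in lines:
    req = Requirement(line)
    name = req.name.lower().replace("-", "_")
    collected.setdefault(name, set()).add(str(req.specifier))

# merge constraints per package by intersecting SpecifierSets (as __merge_dependencies does)
merged = {}
for name, specs in collected.items():
    spec_set = SpecifierSet()
    for s in specs:
        if s:
            spec_set &= SpecifierSet(s)
    merged[name] = spec_set

assert merged["requests"].contains(Version("2.31.0"))
assert not merged["requests"].contains(Version("3.0.0"))
assert str(merged["rich"]) == ""  # no constraint: install by bare name
```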

View File

@@ -225,12 +225,13 @@ class RssHelper:
}
@staticmethod
def parse(url, proxy: bool = False, timeout: int = 15) -> Union[List[dict], None]:
def parse(url, proxy: bool = False, timeout: int = 15, headers: dict = None) -> Union[List[dict], None]:
"""
解析RSS订阅URL获取RSS中的种子信息
:param url: RSS地址
:param proxy: 是否使用代理
:param timeout: 请求超时
:param headers: 自定义请求头
:return: 种子信息列表,如为None代表Rss过期
"""
# 开始处理
@@ -238,7 +239,8 @@ class RssHelper:
if not url:
return []
try:
ret = RequestUtils(proxies=settings.PROXY if proxy else None, timeout=timeout).get_res(url)
ret = RequestUtils(proxies=settings.PROXY if proxy else None,
timeout=timeout, headers=headers).get_res(url)
if not ret:
return []
except Exception as err:

View File

@@ -287,10 +287,10 @@ class TorrentHelper(metaclass=Singleton):
if not file:
continue
file_path = Path(file)
if file_path.suffix not in settings.RMT_MEDIAEXT:
if not file_path.suffix or file_path.suffix.lower() not in settings.RMT_MEDIAEXT:
continue
# 只使用文件名识别
meta = MetaInfo(file_path.stem)
meta = MetaInfo(file_path.name)
if not meta.begin_episode:
continue
episodes = list(set(episodes).union(set(meta.episode_list)))
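The two changed lines above matter because `Path.suffix` preserves case, and `MetaInfo` now receives the full file name rather than the stem:

```python
from pathlib import Path

# Path.suffix is case-sensitive, so the extension check must lower-case it
assert Path("Show.S01E02.MKV").suffix == ".MKV"
assert Path("Show.S01E02.MKV").suffix.lower() == ".mkv"
# a name with no dot has an empty suffix — the added guard skips those
assert Path("no_extension").suffix == ""
# .name keeps the extension, .stem drops it
assert Path("Show.S01E02.mkv").name == "Show.S01E02.mkv"
assert Path("Show.S01E02.mkv").stem == "Show.S01E02"
```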

View File

@@ -10,7 +10,7 @@ from app.log import logger
class TwoFactorAuth:
def __init__(self, code_or_secret: str):
if code_or_secret and len(code_or_secret) > 16:
if code_or_secret and len(code_or_secret) >= 16:
self.code = None
self.secret = code_or_secret
else:
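The operator change above is about the boundary case: a standard Base32 TOTP secret is often exactly 16 characters, which the old `> 16` misclassified as an OTP code. A sketch of the heuristic with a hypothetical helper:

```python
def classify(code_or_secret):
    # >= 16 (the fix): a 16-char Base32 secret is a secret, not an OTP code
    if code_or_secret and len(code_or_secret) >= 16:
        return None, code_or_secret
    return code_or_secret, None

assert classify("123456") == ("123456", None)
assert classify("JBSWY3DPEHPK3PXP") == (None, "JBSWY3DPEHPK3PXP")
assert len("JBSWY3DPEHPK3PXP") == 16  # exactly on the fixed boundary
```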

View File

@@ -2,8 +2,9 @@ from abc import abstractmethod, ABCMeta
from typing import Generic, Tuple, Union, TypeVar, Type, Dict, Optional, Callable
from app.helper.service import ServiceConfigHelper
from app.schemas import Notification, MessageChannel, NotificationConf, MediaServerConf, DownloaderConf
from app.schemas.types import ModuleType
from app.schemas import Notification, NotificationConf, MediaServerConf, DownloaderConf
from app.schemas.types import ModuleType, DownloaderType, MediaServerType, MessageChannel, StorageSchema, \
OtherModulesType
class _ModuleBase(metaclass=ABCMeta):
@@ -43,6 +44,14 @@ class _ModuleBase(metaclass=ABCMeta):
"""
pass
@staticmethod
@abstractmethod
def get_subtype() -> Union[DownloaderType, MediaServerType, MessageChannel, StorageSchema, OtherModulesType]:
"""
获取模块子类型(下载器、媒体服务器、消息通道、存储类型、其他杂项模块类型)
"""
pass
@staticmethod
@abstractmethod
def get_priority() -> int:
@@ -161,13 +170,9 @@ class ServiceBase(Generic[TService, TConf], metaclass=ABCMeta):
"""
获取默认服务配置的名称
:return: 返回第一个设置为默认的配置名称;如果没有默认配置,则返回第一个配置的名称;如果没有配置,返回 None
:return: 默认第一个配置的名称
"""
# 优先查找默认配置
for conf in self._configs.values():
if getattr(conf, "default", False):
return conf.name
# 如果没有默认配置,返回第一个配置的名称
# 默认使用第一个配置的名称
first_conf = next(iter(self._configs.values()), None)
return first_conf.name if first_conf else None
@@ -226,6 +231,33 @@ class _DownloaderBase(ServiceBase[TService, DownloaderConf]):
下载器基类
"""
def __init__(self):
"""
初始化下载器基类
"""
super().__init__()
self._default_config_name: Optional[str] = None
def get_default_config_name(self) -> Optional[str]:
"""
获取默认服务配置的名称
:return: 优先从所有下载器中查找配置了默认的下载器,如果没有配置,则获取第一个下载器名称
"""
# 优先查找默认配置
if self._default_config_name:
return self._default_config_name
configs = ServiceConfigHelper.get_downloader_configs()
for conf in configs:
if conf.default:
self._default_config_name = conf.name
return self._default_config_name
# 如果没有默认配置,返回第一个配置的名称
first_conf = next(iter(configs), None)
self._default_config_name = first_conf.name if first_conf else None
return self._default_config_name
def get_configs(self) -> Dict[str, DownloaderConf]:
"""
获取已启用的下载器的配置字典

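The new `get_subtype` stacks `@staticmethod` on top of `@abstractmethod`, so every concrete module must provide it before the class can be instantiated. A minimal illustration with hypothetical class names:

```python
from abc import ABCMeta, abstractmethod

class ModuleLike(metaclass=ABCMeta):   # illustrative stand-in for _ModuleBase
    @staticmethod
    @abstractmethod
    def get_subtype() -> str:
        ...

class BangumiLike(ModuleLike):
    @staticmethod
    def get_subtype() -> str:
        return "Bangumi"

assert BangumiLike.get_subtype() == "Bangumi"

# a subclass that omits get_subtype cannot be instantiated
instantiation_blocked = False
try:
    class Broken(ModuleLike):
        pass
    Broken()
except TypeError:
    instantiation_blocked = True
assert instantiation_blocked
```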
View File

@@ -7,7 +7,7 @@ from app.core.meta import MetaBase
from app.log import logger
from app.modules import _ModuleBase
from app.modules.bangumi.bangumi import BangumiApi
from app.schemas.types import ModuleType
from app.schemas.types import ModuleType, MediaRecognizeType
from app.utils.http import RequestUtils
@@ -44,6 +44,13 @@ class BangumiModule(_ModuleBase):
获取模块类型
"""
return ModuleType.MediaRecognize
@staticmethod
def get_subtype() -> MediaRecognizeType:
"""
获取模块子类型
"""
return MediaRecognizeType.Bangumi
@staticmethod
def get_priority() -> int:

View File

@@ -2,6 +2,7 @@ import re
from typing import List, Optional, Tuple, Union
import cn2an
import zhconv
from app import schemas
from app.core.config import settings
@@ -14,7 +15,7 @@ from app.modules.douban.apiv2 import DoubanApi
from app.modules.douban.douban_cache import DoubanCache
from app.modules.douban.scraper import DoubanScraper
from app.schemas import MediaPerson, APIRateLimitException
from app.schemas.types import MediaType, ModuleType
from app.schemas.types import MediaType, ModuleType, MediaRecognizeType
from app.utils.common import retry
from app.utils.http import RequestUtils
from app.utils.limit import rate_limit_exponential
@@ -58,6 +59,13 @@ class DoubanModule(_ModuleBase):
"""
return ModuleType.MediaRecognize
@staticmethod
def get_subtype() -> MediaRecognizeType:
"""
获取模块子类型
"""
return MediaRecognizeType.Douban
@staticmethod
def get_priority() -> int:
"""
@@ -107,8 +115,10 @@ class DoubanModule(_ModuleBase):
info = self.douban_info(doubanid=doubanid, mtype=mtype or meta.type)
elif meta:
info = {}
# 简体名称
zh_name = zhconv.convert(meta.cn_name, "zh-hans") if meta.cn_name else None
# 使用中英文名分别识别,去重去空,但要保持顺序
names = list(dict.fromkeys([k for k in [meta.cn_name, meta.en_name] if k]))
names = list(dict.fromkeys([k for k in [meta.cn_name, zh_name, meta.en_name] if k]))
for name in names:
if meta.begin_season:
logger.info(f"正在识别 {name}{meta.begin_season}季 ...")
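The Douban change adds a simplified-Chinese (`zh-hans`) variant of the name before deduplicating; `dict.fromkeys` keeps first-seen order while dropping duplicates, and the `if k` filter drops empty names. The dedup step alone (the `zhconv.convert` call is not run here, its result is assumed):

```python
cn_name, en_name = "鬥破蒼穹", None
zh_name = "斗破苍穹"  # what zhconv.convert(cn_name, "zh-hans") would return
names = list(dict.fromkeys([k for k in [cn_name, zh_name, en_name] if k]))
assert names == ["鬥破蒼穹", "斗破苍穹"]
# duplicates collapse when the name is already simplified
assert list(dict.fromkeys([k for k in ["斗破苍穹", "斗破苍穹", None] if k])) == ["斗破苍穹"]
```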

View File

@@ -3,11 +3,11 @@ import base64
import hashlib
import hmac
from datetime import datetime
from functools import lru_cache
from random import choice
from urllib import parse
import requests
from cachetools import TTLCache, cached
from app.core.config import settings
from app.utils.http import RequestUtils
@@ -160,12 +160,12 @@ class DoubanApi(metaclass=Singleton):
self._session = requests.Session()
@classmethod
def __sign(cls, url: str, ts: int, method='GET') -> str:
def __sign(cls, url: str, ts: str, method='GET') -> str:
"""
签名
"""
url_path = parse.urlparse(url).path
raw_sign = '&'.join([method.upper(), parse.quote(url_path, safe=''), str(ts)])
raw_sign = '&'.join([method.upper(), parse.quote(url_path, safe=''), ts])
return base64.b64encode(
hmac.new(
cls._api_secret_key.encode(),
@@ -174,7 +174,7 @@ class DoubanApi(metaclass=Singleton):
).digest()
).decode()
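The signature change above keeps the timestamp as a string end to end instead of converting it inside `__sign`. A standalone sketch of the signing scheme — the digest algorithm is truncated out of the hunk, so SHA-1 below is an assumption:

```python
import base64
import hashlib
import hmac
from urllib import parse

def sign(secret: str, url: str, ts: str, method: str = "GET") -> str:
    # join METHOD, percent-encoded path, and timestamp, then HMAC and base64
    url_path = parse.urlparse(url).path
    raw_sign = "&".join([method.upper(), parse.quote(url_path, safe=""), ts])
    return base64.b64encode(
        hmac.new(secret.encode(), raw_sign.encode(), hashlib.sha1).digest()
    ).decode()

sig = sign("secret", "https://frodo.douban.com/api/v2/movie/1", "1700000000")
# deterministic for identical inputs, sensitive to the timestamp
assert sig == sign("secret", "https://frodo.douban.com/api/v2/movie/1", "1700000000")
assert sig != sign("secret", "https://frodo.douban.com/api/v2/movie/1", "1700000001")
```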
@lru_cache(maxsize=settings.CACHE_CONF.get('douban'))
@cached(cache=TTLCache(maxsize=settings.CACHE_CONF["douban"], ttl=settings.CACHE_CONF["meta"]))
def __invoke(self, url: str, **kwargs) -> dict:
"""
GET请求
@@ -203,7 +203,7 @@ class DoubanApi(metaclass=Singleton):
return resp.json()
return resp.json() if resp else {}
@lru_cache(maxsize=settings.CACHE_CONF.get('douban'))
@cached(cache=TTLCache(maxsize=settings.CACHE_CONF["douban"], ttl=settings.CACHE_CONF["meta"]))
def __post(self, url: str, **kwargs) -> dict:
"""
POST请求

View File

@@ -16,7 +16,7 @@ from app.schemas.types import MediaType
lock = RLock()
CACHE_EXPIRE_TIMESTAMP_STR = "cache_expire_timestamp"
EXPIRE_TIMESTAMP = settings.CACHE_CONF.get('meta')
EXPIRE_TIMESTAMP = settings.CACHE_CONF["meta"]
class DoubanCache(metaclass=Singleton):
@@ -52,7 +52,7 @@ class DoubanCache(metaclass=Singleton):
获取缓存KEY
"""
return f"[{meta.type.value if meta.type else '未知'}]" \
f"{meta.name or meta.doubanid}-{meta.year}-{meta.begin_season}"
f"{meta.doubanid or meta.name}-{meta.year}-{meta.begin_season}"
def get(self, meta: MetaBase):
"""
@@ -77,7 +77,7 @@ class DoubanCache(metaclass=Singleton):
@return: 被删除的缓存内容
"""
with lock:
return self._meta_data.pop(key, None)
return self._meta_data.pop(key, {})
def delete_by_doubanid(self, doubanid: str) -> None:
"""

View File

@@ -2,11 +2,12 @@ from typing import Any, Generator, List, Optional, Tuple, Union
from app import schemas
from app.core.context import MediaInfo
from app.core.event import eventmanager
from app.log import logger
from app.modules import _MediaServerBase, _ModuleBase
from app.modules.emby.emby import Emby
from app.schemas.event import AuthCredentials
from app.schemas.types import MediaType, ModuleType
from app.schemas.event import AuthCredentials, AuthInterceptCredentials
from app.schemas.types import MediaType, ModuleType, ChainEventType, MediaServerType
class EmbyModule(_ModuleBase, _MediaServerBase[Emby]):
@@ -29,6 +30,13 @@ class EmbyModule(_ModuleBase, _MediaServerBase[Emby]):
"""
return ModuleType.MediaServer
@staticmethod
def get_subtype() -> MediaServerType:
"""
获取模块子类型
"""
return MediaServerType.Emby
@staticmethod
def get_priority() -> int:
"""
@@ -65,16 +73,36 @@ class EmbyModule(_ModuleBase, _MediaServerBase[Emby]):
logger.info(f"Emby服务器 {name} 连接断开,尝试重连 ...")
server.reconnect()
def user_authenticate(self, credentials: AuthCredentials) -> Optional[AuthCredentials]:
def user_authenticate(self, credentials: AuthCredentials, service_name: Optional[str] = None) \
-> Optional[AuthCredentials]:
"""
使用Emby用户辅助完成用户认证
:param credentials: 认证数据
:param service_name: 指定要认证的媒体服务器名称,若为 None 则认证所有服务
:return: 认证数据
"""
# Emby认证
if not credentials or credentials.grant_type != "password":
return None
for name, server in self.get_instances().items():
# 确定要认证的服务器列表
if service_name:
# 如果指定了服务名,获取该服务实例
servers = [(service_name, server)] if (server := self.get_instance(service_name)) else []
else:
# 如果没有指定服务名,遍历所有服务
servers = self.get_instances().items()
# 遍历要认证的服务器
for name, server in servers:
# 触发认证拦截事件
intercept_event = eventmanager.send_event(
etype=ChainEventType.AuthIntercept,
data=AuthInterceptCredentials(username=credentials.username, channel=self.get_name(),
service=name, status="triggered")
)
if intercept_event and intercept_event.event_data:
intercept_data: AuthInterceptCredentials = intercept_event.event_data
if intercept_data.cancel:
continue
token = server.authenticate(credentials.username, credentials.password)
if token:
credentials.channel = self.get_name()
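The server-selection line above uses the walrus operator to build either a one-element list (when a service name is given and resolves) or an empty list, falling back to all instances otherwise. A reduced sketch with hypothetical instance data:

```python
instances = {"Emby-Main": "server-A", "Emby-4K": "server-B"}

def pick_servers(service_name=None):
    # mirrors the hunk: named lookup via the walrus operator, else all instances
    if service_name:
        return [(service_name, server)] if (server := instances.get(service_name)) else []
    return list(instances.items())

assert pick_servers("Emby-4K") == [("Emby-4K", "server-B")]
assert pick_servers("missing") == []
assert len(pick_servers()) == 2
```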
@@ -173,10 +201,10 @@ class EmbyModule(_ModuleBase, _MediaServerBase[Emby]):
媒体数量统计
"""
if server:
server: Emby = self.get_instance(server)
if not server:
server_obj: Emby = self.get_instance(server)
if not server_obj:
return None
servers = [server]
servers = [server_obj]
else:
servers = self.get_instances().values()
media_statistics = []
@@ -194,9 +222,9 @@ class EmbyModule(_ModuleBase, _MediaServerBase[Emby]):
"""
媒体库列表
"""
server: Emby = self.get_instance(server)
if server:
return server.get_librarys(username=username, hidden=hidden)
server_obj: Emby = self.get_instance(server)
if server_obj:
return server_obj.get_librarys(username=username, hidden=hidden)
return None
def mediaserver_items(self, server: str, library_id: Union[str, int], start_index: int = 0,
@@ -211,18 +239,18 @@ class EmbyModule(_ModuleBase, _MediaServerBase[Emby]):
:return: 返回一个生成器对象,用于逐步获取媒体服务器中的项目
"""
server: Emby = self.get_instance(server)
if server:
return server.get_items(library_id, start_index, limit)
server_obj: Emby = self.get_instance(server)
if server_obj:
return server_obj.get_items(library_id, start_index, limit)
return None
def mediaserver_iteminfo(self, server: str, item_id: str) -> Optional[schemas.MediaServerItem]:
"""
媒体库项目详情
"""
server: Emby = self.get_instance(server)
if server:
return server.get_iteminfo(item_id)
server_obj: Emby = self.get_instance(server)
if server_obj:
return server_obj.get_iteminfo(item_id)
return None
def mediaserver_tv_episodes(self, server: str,
@@ -230,10 +258,10 @@ class EmbyModule(_ModuleBase, _MediaServerBase[Emby]):
"""
获取剧集信息
"""
server: Emby = self.get_instance(server)
if not server:
server_obj: Emby = self.get_instance(server)
if not server_obj:
return None
_, seasoninfo = server.get_tv_episodes(item_id=item_id)
_, seasoninfo = server_obj.get_tv_episodes(item_id=item_id)
if not seasoninfo:
return []
return [schemas.MediaServerSeasonInfo(
@@ -246,26 +274,57 @@ class EmbyModule(_ModuleBase, _MediaServerBase[Emby]):
"""
获取媒体服务器正在播放信息
"""
server: Emby = self.get_instance(server)
if not server:
server_obj: Emby = self.get_instance(server)
if not server_obj:
return []
return server.get_resume(num=count, username=username)
return server_obj.get_resume(num=count, username=username)
def mediaserver_play_url(self, server: str, item_id: Union[str, int]) -> Optional[str]:
"""
获取媒体库播放地址
"""
server: Emby = self.get_instance(server)
if not server:
server_obj: Emby = self.get_instance(server)
if not server_obj:
return None
return server.get_play_url(item_id)
return server_obj.get_play_url(item_id)
def mediaserver_latest(self, server: str,
def mediaserver_latest(self, server: str = None,
count: int = 20, username: str = None) -> List[schemas.MediaServerPlayItem]:
"""
获取媒体服务器最新入库条目
"""
server: Emby = self.get_instance(server)
if not server:
server_obj: Emby = self.get_instance(server)
if not server_obj:
return []
return server.get_latest(num=count, username=username)
return server_obj.get_latest(num=count, username=username)
def mediaserver_latest_images(self,
server: str = None,
count: int = 10,
username: str = None,
remote: bool = False
) -> List[str]:
"""
获取媒体服务器最新入库条目的图片
:param server: 媒体服务器名称
:param count: 获取数量
:param username: 用户名
:param remote: True为外网链接, False为内网链接
:return: 图片链接列表
"""
server_obj: Emby = self.get_instance(server)
if not server_obj:
return []
links = []
items: List[schemas.MediaServerPlayItem] = self.mediaserver_latest(server=server, count=count,
username=username)
for item in items:
if item.BackdropImageTags:
image_url = server_obj.get_backdrop_url(item_id=item.id,
image_tag=item.BackdropImageTags[0],
remote=remote)
if image_url:
links.append(image_url)
return links

View File

@@ -146,7 +146,8 @@ class Emby:
return []
libraries = []
for library in self.__get_emby_librarys(username) or []:
if hidden and self._sync_libraries and library.get("Id") not in self._sync_libraries:
if hidden and self._sync_libraries and "all" not in self._sync_libraries \
and library.get("Id") not in self._sync_libraries:
continue
match library.get("CollectionType"):
case "movies":
@@ -1076,20 +1077,23 @@ class Emby:
return f"{self._playhost or self._host}web/index.html#!" \
f"/item?id={item_id}&context=home&serverId={self.serverid}"
def __get_backdrop_url(self, item_id: str, image_tag: str) -> str:
def get_backdrop_url(self, item_id: str, image_tag: str, remote: bool = False) -> str:
"""
获取Emby的Backdrop图片地址
:param item_id: 在Emby中的ID
:param image_tag: 图片的tag
:param remote: 是否远程使用,TG、微信等客户端调用时应为True
"""
if not self._host or not self._apikey:
return ""
if not image_tag or not item_id:
return ""
return f"{self._host}Items/{item_id}/" \
f"Images/Backdrop?tag={image_tag}&fillWidth=666&api_key={self._apikey}"
if remote:
host_url = self._playhost or self._host
else:
host_url = self._host
return f"{host_url}Items/{item_id}/" \
f"Images/Backdrop?tag={image_tag}&api_key={self._apikey}"
def __get_local_image_by_id(self, item_id: str) -> str:
"""
@@ -1145,13 +1149,13 @@ class Emby:
subtitle = f'S{item.get("ParentIndexNumber")}:{item.get("IndexNumber")} - {item.get("Name")}'
if item_type == MediaType.MOVIE.value:
if item.get("BackdropImageTags"):
image = self.__get_backdrop_url(item_id=item.get("Id"),
image_tag=item.get("BackdropImageTags")[0])
image = self.get_backdrop_url(item_id=item.get("Id"),
image_tag=item.get("BackdropImageTags")[0])
else:
image = self.__get_local_image_by_id(item.get("Id"))
else:
image = self.__get_backdrop_url(item_id=item.get("SeriesId"),
image_tag=item.get("SeriesPrimaryImageTag"))
image = self.get_backdrop_url(item_id=item.get("SeriesId"),
image_tag=item.get("SeriesPrimaryImageTag"))
if not image:
image = self.__get_local_image_by_id(item.get("SeriesId"))
ret_resume.append(schemas.MediaServerPlayItem(
@@ -1184,7 +1188,7 @@ class Emby:
params = {
"Limit": 100,
"MediaTypes": "Video",
"Fields": "ProductionYear,Path",
"Fields": "ProductionYear,Path,BackdropImageTags",
"api_key": self._apikey
}
try:
@@ -1212,7 +1216,8 @@ class Emby:
subtitle=item.get("ProductionYear"),
type=item_type,
image=image,
link=link
link=link,
BackdropImageTags=item.get("BackdropImageTags")
))
return ret_latest
else:

View File

@@ -1,12 +1,13 @@
import re
from functools import lru_cache
from typing import Optional, Tuple, Union
from cachetools import TTLCache, cached
from app.core.context import MediaInfo, settings
from app.log import logger
from app.modules import _ModuleBase
from app.schemas.types import MediaType, ModuleType, OtherModulesType
from app.utils.http import RequestUtils
from app.schemas.types import MediaType, ModuleType
class FanartModule(_ModuleBase):
@@ -342,6 +343,13 @@ class FanartModule(_ModuleBase):
"""
return ModuleType.Other
@staticmethod
def get_subtype() -> OtherModulesType:
"""
获取模块子类型
"""
return OtherModulesType.Fanart
@staticmethod
def get_priority() -> int:
"""
@@ -376,20 +384,30 @@ class FanartModule(_ModuleBase):
continue
if not isinstance(images, list):
continue
# 按欢迎程度倒排
images.sort(key=lambda x: int(x.get('likes', 0)), reverse=True)
# 取第一张图片
image_obj = images[0]
# 图片属性xx_path
image_name = self.__name(name)
image_season = image_obj.get('season')
# 设置图片
if image_name.startswith("season") and image_season:
# 季图片格式 seasonxx-poster
image_name = f"season{str(image_season).rjust(2, '0')}-{image_name[6:]}"
if not mediainfo.get_image(image_name):
# 没有图片才设置
mediainfo.set_image(image_name, image_obj.get('url'))
if image_name.startswith("season"):
# 季图片,格式为 seasonxx-xxxx/season-specials-xxxx
for image_obj in images:
image_season = image_obj.get('season')
if image_season is not None:
# 包括poster,thumb,banner
if image_season == '0':
season_image = f"season-specials-{image_name[6:]}"
else:
season_image = f"season{str(image_season).rjust(2, '0')}-{image_name[6:]}"
# 设置图片,没有图片才设置
if not mediainfo.get_image(season_image):
mediainfo.set_image(season_image, image_obj.get('url'))
else:
# 其他图片,按欢迎程度倒排
images.sort(key=lambda x: int(x.get('likes', 0)), reverse=True)
# 取第一张图片
image_obj = images[0]
# 设置图片,没有图片才设置
if not mediainfo.get_image(image_name):
mediainfo.set_image(image_name, image_obj.get('url'))
return mediainfo
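The season branch above zero-pads the season number with `rjust` and maps season `'0'` (note: a string comparison in the source) to the `season-specials-` prefix. The naming rule in isolation, as a hypothetical helper:

```python
def season_image_name(season, kind="poster"):
    # season 0 is the specials season; others are zero-padded to two digits
    if str(season) == "0":
        return f"season-specials-{kind}"
    return f"season{str(season).rjust(2, '0')}-{kind}"

assert season_image_name("0") == "season-specials-poster"
assert season_image_name(3) == "season03-poster"
assert season_image_name(12, "banner") == "season12-banner"
```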
@@ -404,7 +422,7 @@ class FanartModule(_ModuleBase):
return result
@classmethod
@lru_cache(maxsize=settings.CACHE_CONF.get('fanart'))
@cached(cache=TTLCache(maxsize=settings.CACHE_CONF["fanart"], ttl=settings.CACHE_CONF["meta"]))
def __request_fanart(cls, media_type: MediaType, queryid: Union[str, int]) -> Optional[dict]:
if media_type == MediaType.MOVIE:
image_url = cls._movie_url % queryid

View File

@@ -1,4 +1,3 @@
import copy
import re
from pathlib import Path
from threading import Lock
@@ -8,6 +7,7 @@ from jinja2 import Template
from app.core.config import settings
from app.core.context import MediaInfo
from app.core.event import eventmanager
from app.core.meta import MetaBase
from app.core.metainfo import MetaInfo, MetaInfoPath
from app.helper.directory import DirectoryHelper
@@ -17,7 +17,8 @@ from app.log import logger
from app.modules import _ModuleBase
from app.modules.filemanager.storages import StorageBase
from app.schemas import TransferInfo, ExistMediaInfo, TmdbEpisode, TransferDirectoryConf, FileItem, StorageUsage
from app.schemas.types import MediaType, ModuleType
from app.schemas.event import TransferRenameEventData
from app.schemas.types import MediaType, ModuleType, ChainEventType, OtherModulesType
from app.utils.system import SystemUtils
lock = Lock()
@@ -51,6 +52,13 @@ class FileManagerModule(_ModuleBase):
"""
return ModuleType.Other
@staticmethod
def get_subtype() -> OtherModulesType:
"""
获取模块子类型
"""
return OtherModulesType.FileManager
@staticmethod
def get_priority() -> int:
"""
@@ -65,9 +73,8 @@ class FileManagerModule(_ModuleBase):
"""
测试模块连接性
"""
directoryhelper = DirectoryHelper()
# 检查目录
dirs = directoryhelper.get_dirs()
dirs = self.directoryhelper.get_dirs()
if not dirs:
return False, "未设置任何目录"
for d in dirs:
@@ -132,8 +139,6 @@ class FileManagerModule(_ModuleBase):
)
return str(path)
pass
def save_config(self, storage: str, conf: Dict) -> None:
"""
保存存储配置
@@ -144,7 +149,7 @@ class FileManagerModule(_ModuleBase):
return
storage_oper.set_config(conf)
def generate_qrcode(self, storage: str) -> Optional[Dict[str, str]]:
def generate_qrcode(self, storage: str) -> Optional[Tuple[dict, str]]:
"""
生成二维码
"""
@@ -197,6 +202,36 @@ class FileManagerModule(_ModuleBase):
return result
def any_files(self, fileitem: FileItem, extensions: list = None) -> Optional[bool]:
"""
查询当前目录下是否存在指定扩展名的任意文件
"""
storage_oper = self.__get_storage_oper(fileitem.storage)
if not storage_oper:
logger.error(f"不支持 {fileitem.storage} 的文件浏览")
return None
def __any_file(_item: FileItem):
"""
递归处理
"""
_items = storage_oper.list(_item)
if _items:
if not extensions:
return True
for t in _items:
if (t.type == "file"
and t.extension
and f".{t.extension.lower()}" in extensions):
return True
elif t.type == "dir":
if __any_file(t):
return True
return False
# 返回结果
return __any_file(fileitem)
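The nested `__any_file` closure recurses into subdirectories and short-circuits on the first hit. The same shape over plain dict nodes, as an illustrative sketch (the real code lists children via `storage_oper.list`):

```python
def any_file(tree, extensions=None):
    # depth-first walk; returns True on the first matching file
    for node in tree:
        if node["type"] == "file":
            ext = node.get("extension")
            if not extensions or (ext and f".{ext.lower()}" in extensions):
                return True
        elif node["type"] == "dir" and any_file(node.get("children", []), extensions):
            return True
    return False

tree = [
    {"type": "dir", "children": [{"type": "file", "extension": "MKV"}]},
    {"type": "file", "extension": "nfo"},
]
assert any_file(tree, [".mkv"]) is True   # matched case-insensitively in a subdir
assert any_file(tree, [".srt"]) is False
assert any_file(tree) is True             # no extension filter: any file counts
```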
def create_folder(self, fileitem: FileItem, name: str) -> Optional[FileItem]:
"""
创建目录
@@ -240,7 +275,7 @@ class FileManagerModule(_ModuleBase):
return None
return storage_oper.download(fileitem, path=path)
def upload_file(self, fileitem: FileItem, path: Path) -> Optional[FileItem]:
def upload_file(self, fileitem: FileItem, path: Path, new_name: str = None) -> Optional[FileItem]:
"""
上传文件
"""
@@ -248,7 +283,7 @@ class FileManagerModule(_ModuleBase):
if not storage_oper:
logger.error(f"不支持 {fileitem.storage} 的上传处理")
return None
return storage_oper.upload(fileitem, path)
return storage_oper.upload(fileitem, path, new_name)
def get_file_item(self, storage: str, path: Path) -> Optional[FileItem]:
"""
@@ -260,6 +295,16 @@ class FileManagerModule(_ModuleBase):
return None
return storage_oper.get_item(path)
def get_parent_item(self, fileitem: FileItem) -> Optional[FileItem]:
"""
获取上级目录项
"""
storage_oper = self.__get_storage_oper(fileitem.storage)
if not storage_oper:
logger.error(f"不支持 {fileitem.storage} 的文件获取")
return None
return storage_oper.get_parent(fileitem)
def snapshot_storage(self, storage: str, path: Path) -> Optional[Dict[str, float]]:
"""
快照存储
@@ -281,19 +326,24 @@ class FileManagerModule(_ModuleBase):
return storage_oper.usage()
def transfer(self, fileitem: FileItem, meta: MetaBase, mediainfo: MediaInfo,
transfer_type: str = None, target_storage: str = None, target_path: Path = None,
episodes_info: List[TmdbEpisode] = None,
scrape: bool = None) -> TransferInfo:
target_directory: TransferDirectoryConf = None,
target_storage: str = None, target_path: Path = None,
transfer_type: str = None, scrape: bool = None,
library_type_folder: bool = None, library_category_folder: bool = None,
episodes_info: List[TmdbEpisode] = None) -> TransferInfo:
"""
文件整理
:param fileitem: 文件
:param meta: 预识别的元数据,仅单文件整理时传递
:param fileitem: 文件信息
:param meta: 预识别的元数据
:param mediainfo: 识别的媒体信息
:param transfer_type: 整理方式
:param target_directory: 目标目录配置
:param target_storage: 目标存储
:param target_path: 目标路径
:param episodes_info: 当前季的全部集信息
:param transfer_type: 转移模式
:param scrape: 是否刮削元数据
:param library_type_folder: 是否按媒体类型创建目录
:param library_category_folder: 是否按媒体类别创建目录
:param episodes_info: 当前季的全部集信息
:return: {path, target_path, message}
"""
# 检查目录路径
@@ -308,37 +358,37 @@ class FileManagerModule(_ModuleBase):
fileitem=fileitem,
message=f"{target_path} 不是有效目录")
# 获取目标路径
directoryhelper = DirectoryHelper()
if target_path:
dir_info = directoryhelper.get_dir(mediainfo, dest_path=target_path)
else:
dir_info = directoryhelper.get_dir(mediainfo)
if dir_info:
# 目标存储类型
if not target_storage:
target_storage = dir_info.library_storage
if target_directory:
# 整理方式
if not transfer_type:
transfer_type = dir_info.transfer_type
transfer_type = target_directory.transfer_type
# 是否需要重命名
need_rename = target_directory.renaming
# 是否需要通知
need_notify = target_directory.notify
# 覆盖模式
overwrite_mode = target_directory.overwrite_mode
# 是否需要刮削
if scrape is None:
need_scrape = dir_info.scraping
need_scrape = target_directory.scraping
else:
need_scrape = scrape
# 是否需要重命名
need_rename = dir_info.renaming
# 覆盖模式
overwrite_mode = dir_info.overwrite_mode
# 是否需要通知
need_notify = dir_info.notify
# 目标存储类型
if not target_storage:
target_storage = target_directory.library_storage
# 拼装媒体库一、二级子目录
target_path = self.__get_dest_dir(mediainfo=mediainfo, target_dir=dir_info)
target_path = self.__get_dest_dir(mediainfo=mediainfo, target_dir=target_directory,
need_type_folder=library_type_folder,
need_category_folder=library_category_folder)
elif target_path:
# 自定义目标路径,仅适用于手动整理的场景
need_scrape = scrape or False
need_rename = True
need_notify = False
overwrite_mode = "never"
# 手动整理的场景,有自定义目标路径
target_path = self.__get_dest_path(mediainfo=mediainfo, target_path=target_path,
need_type_folder=library_type_folder,
need_category_folder=library_category_folder)
else:
# 未找到有效的媒体库目录
logger.error(
@@ -346,20 +396,25 @@ class FileManagerModule(_ModuleBase):
return TransferInfo(success=False,
fileitem=fileitem,
message="未找到有效的媒体库目录")
logger.info(f"获取整理目标路径:【{target_storage}】{target_path}")
# 整理方式
if not transfer_type:
logger.error(f"{target_directory.name} 未设置整理方式")
return TransferInfo(success=False,
fileitem=fileitem,
message=f"{target_directory.name} 未设置整理方式")
# 整理
logger.info(f"获取整理目标路径:【{target_storage}】{target_path}")
return self.transfer_media(fileitem=fileitem,
in_meta=meta,
mediainfo=mediainfo,
transfer_type=transfer_type,
overwrite_mode=overwrite_mode,
target_storage=target_storage,
target_path=target_path,
episodes_info=episodes_info,
transfer_type=transfer_type,
need_scrape=need_scrape,
need_rename=need_rename,
need_notify=need_notify)
need_notify=need_notify,
overwrite_mode=overwrite_mode,
episodes_info=episodes_info)
def __get_storage_oper(self, _storage: str, _func: str = None) -> Optional[StorageBase]:
"""
@@ -422,13 +477,15 @@ class FileManagerModule(_ModuleBase):
target_file.parent.mkdir(parents=True)
# 本地到本地
if transfer_type == "copy":
state = source_oper.copy(fileitem, target_file)
state = source_oper.copy(fileitem, target_file.parent, target_file.name)
elif transfer_type == "move":
state = source_oper.move(fileitem, target_file)
state = source_oper.move(fileitem, target_file.parent, target_file.name)
elif transfer_type == "link":
state = source_oper.link(fileitem, target_file)
elif transfer_type == "softlink":
state = source_oper.softlink(fileitem, target_file)
else:
return None, f"不支持的整理方式:{transfer_type}"
if state:
return __get_targetitem(target_file), ""
else:
@@ -444,20 +501,20 @@ class FileManagerModule(_ModuleBase):
target_fileitem = target_oper.get_folder(target_file.parent)
if target_fileitem:
# 上传文件
new_item = target_oper.upload(target_fileitem, filepath)
new_item = target_oper.upload(target_fileitem, filepath, target_file.name)
if new_item:
return new_item, ""
else:
return None, f"{fileitem.path} 上传 {target_storage} 失败"
else:
return None, f"{target_file.parent} {target_storage} 目录获取失败"
return None, f"{target_storage}{target_file.parent} 目录获取失败"
elif transfer_type == "move":
# 移动
# 根据目的路径获取文件夹
target_fileitem = target_oper.get_folder(target_file.parent)
if target_fileitem:
# 上传文件
new_item = target_oper.upload(target_fileitem, filepath)
new_item = target_oper.upload(target_fileitem, filepath, target_file.name)
if new_item:
# 删除源文件
source_oper.delete(fileitem)
@@ -465,7 +522,7 @@ class FileManagerModule(_ModuleBase):
else:
return None, f"{fileitem.path} 上传 {target_storage} 失败"
else:
return None, f"{target_file.parent} {target_storage} 目录获取失败"
return None, f"{target_storage}{target_file.parent} 目录获取失败"
elif fileitem.storage != "local" and target_storage == "local":
# 网盘到本地
if target_file.exists():
@@ -489,25 +546,28 @@ class FileManagerModule(_ModuleBase):
return None, f"{fileitem.path} {fileitem.storage} 下载失败"
elif fileitem.storage == target_storage:
# 同一网盘
# 根据目的路径获取文件夹
target_diritem = target_oper.get_folder(target_file.parent)
if target_diritem:
# 重命名文件
if target_oper.rename(fileitem, target_file.name):
# 移动文件到新目录
if source_oper.move(fileitem, target_diritem):
ret_fileitem = copy.deepcopy(fileitem)
ret_fileitem.path = target_diritem.path + "/" + target_file.name
ret_fileitem.name = target_file.name
ret_fileitem.basename = target_file.stem
ret_fileitem.parent_fileid = target_diritem.fileid
return ret_fileitem, ""
if transfer_type == "copy":
# 复制文件到新目录
target_fileitem = target_oper.get_folder(target_file.parent)
if target_fileitem:
if source_oper.move(fileitem, Path(target_fileitem.path), target_file.name):
return target_oper.get_item(target_file), ""
else:
return None, f"{fileitem.path} {target_storage} 移动文件失败"
return None, f"{target_storage}{fileitem.path} 复制文件失败"
else:
return None, f"{fileitem.path} {target_storage} 重命名文件失败"
return None, f"{target_storage}{target_file.parent} 目录获取失败"
elif transfer_type == "move":
# 移动文件到新目录
target_fileitem = target_oper.get_folder(target_file.parent)
if target_fileitem:
if source_oper.move(fileitem, Path(target_fileitem.path), target_file.name):
return target_oper.get_item(target_file), ""
else:
return None, f"{target_storage}{fileitem.path} 移动文件失败"
else:
return None, f"{target_storage}{target_file.parent} 目录获取失败"
else:
return None, f"{target_file.parent} {target_storage} 目录获取失败"
return None, f"不支持的整理方式:{transfer_type}"
return None, "未知错误"
@@ -570,16 +630,16 @@ class FileManagerModule(_ModuleBase):
if not parent_item:
return False, f"{org_path} 上级目录获取失败"
# 字幕文件列表
file_list: List[FileItem] = storage_oper.list(parent_item)
file_list: List[FileItem] = storage_oper.list(parent_item) or []
file_list = [f for f in file_list if f.type == "file" and f.extension
and f".{f.extension.lower()}" in settings.RMT_SUBEXT]
if len(file_list) == 0:
logger.debug(f"{parent_item.path} 目录下没有找到字幕文件...")
logger.info(f"{parent_item.path} 目录下没有找到字幕文件...")
else:
logger.debug("字幕文件清单:" + str(file_list))
logger.info(f"字幕文件清单:{[f.name for f in file_list]}")
# 识别文件名
metainfo = MetaInfoPath(org_path)
for sub_item in file_list:
if f".{sub_item.extension.lower()}" not in settings.RMT_SUBEXT:
continue
# 识别字幕文件名
sub_file_name = re.sub(_zhtw_sub_re,
".",
@@ -642,7 +702,7 @@ class FileManagerModule(_ModuleBase):
return False, errmsg
except Exception as error:
logger.info(f"字幕 {new_file} 出错了,原因: {str(error)}")
return False, ""
return True, ""
def __transfer_audio_track_files(self, fileitem: FileItem, target_storage: str, target_file: Path,
transfer_type: str) -> Tuple[bool, str]:
@@ -665,7 +725,9 @@ class FileManagerModule(_ModuleBase):
return False, f"{org_path} 上级目录获取失败"
file_list: List[FileItem] = storage_oper.list(parent_item)
# 匹配音轨文件
pending_file_list: List[FileItem] = [file for file in file_list if Path(file.name).stem == org_path.name
pending_file_list: List[FileItem] = [file for file in file_list
if Path(file.name).stem == org_path.stem
and file.type == "file" and file.extension
and f".{file.extension.lower()}" in settings.RMT_AUDIOEXT]
if len(pending_file_list) == 0:
return True, f"{parent_item.path} 目录下没有找到匹配的音轨文件"
@@ -770,7 +832,8 @@ class FileManagerModule(_ModuleBase):
else:
logger.info(f"正在删除已存在的文件:{target_file}")
target_file.unlink()
logger.info(f"正在整理文件:【{fileitem.storage}】{fileitem.path} 到 【{target_storage}】{target_file}")
logger.info(f"正在整理文件:【{fileitem.storage}】{fileitem.path} 到 【{target_storage}】{target_file},"
f"操作类型:{transfer_type}")
new_item, errmsg = self.__transfer_command(fileitem=fileitem,
target_storage=target_storage,
target_file=target_file,
@@ -786,26 +849,43 @@ class FileManagerModule(_ModuleBase):
return None, errmsg
@staticmethod
def __get_dest_dir(mediainfo: MediaInfo, target_dir: TransferDirectoryConf) -> Path:
def __get_dest_path(mediainfo: MediaInfo, target_path: Path,
need_type_folder: bool = False, need_category_folder: bool = False):
"""
获取目标路径
"""
if need_type_folder:
target_path = target_path / mediainfo.type.value
if need_category_folder and mediainfo.category:
target_path = target_path / mediainfo.category
return target_path
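Taken alone, `__get_dest_path` is two conditional path joins; a minimal sketch, with a hypothetical `MediaStub` standing in for `MediaInfo`:

```python
from dataclasses import dataclass
from pathlib import Path
from typing import Optional

@dataclass
class MediaStub:
    type_value: str            # 对应 mediainfo.type.value
    category: Optional[str]    # 对应 mediainfo.category

def get_dest_path(media: MediaStub, target_path: Path,
                  need_type_folder: bool = False,
                  need_category_folder: bool = False) -> Path:
    # 可选拼接一级(媒体类型)与二级(媒体类别)目录
    if need_type_folder:
        target_path = target_path / media.type_value
    if need_category_folder and media.category:
        target_path = target_path / media.category
    return target_path

print(get_dest_path(MediaStub("电影", "华语电影"),
                    Path("/library"), True, True).as_posix())
# /library/电影/华语电影
```

Note the category level is skipped when the media has no category, even if `need_category_folder` is set, matching the guard in the diff.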
@staticmethod
def __get_dest_dir(mediainfo: MediaInfo, target_dir: TransferDirectoryConf,
need_type_folder: bool = None, need_category_folder: bool = None) -> Path:
"""
根据设置拼装媒体库目录
:param mediainfo: 媒体信息
:target_dir: 媒体库根目录
:typename_dir: 是否加上类型目录
:need_type_folder: 是否需要按媒体类型创建目录
:need_category_folder: 是否需要按媒体类别创建目录
"""
if not target_dir.media_type and target_dir.library_type_folder:
if need_type_folder is None:
need_type_folder = target_dir.library_type_folder
if need_category_folder is None:
need_category_folder = target_dir.library_category_folder
if not target_dir.media_type and need_type_folder:
# 一级自动分类
library_dir = Path(target_dir.library_path) / mediainfo.type.value
elif target_dir.media_type and target_dir.library_type_folder:
elif target_dir.media_type and need_type_folder:
# 一级手动分类
library_dir = Path(target_dir.library_path) / target_dir.media_type
else:
library_dir = Path(target_dir.library_path)
if not target_dir.media_category and target_dir.library_category_folder and mediainfo.category:
if not target_dir.media_category and need_category_folder and mediainfo.category:
# 二级自动分类
library_dir = library_dir / mediainfo.category
elif target_dir.media_category and target_dir.library_category_folder:
elif target_dir.media_category and need_category_folder:
# 二级手动分类
library_dir = library_dir / target_dir.media_category
@@ -815,14 +895,14 @@ class FileManagerModule(_ModuleBase):
fileitem: FileItem,
in_meta: MetaBase,
mediainfo: MediaInfo,
transfer_type: str,
overwrite_mode: str,
target_storage: str,
target_path: Path,
episodes_info: List[TmdbEpisode] = None,
transfer_type: str,
need_scrape: bool = False,
need_rename: bool = True,
need_notify: bool = True,
overwrite_mode: str = None,
episodes_info: List[TmdbEpisode] = None,
) -> TransferInfo:
"""
识别并整理一个文件或者一个目录下的所有文件
@@ -832,11 +912,11 @@ class FileManagerModule(_ModuleBase):
:param target_storage: 目标存储
:param target_path: 目标路径
:param transfer_type: 文件整理方式
:param overwrite_mode: 覆盖模式
:param episodes_info: 当前季的全部集信息
:param need_scrape: 是否需要刮削
:param need_rename: 是否需要重命名
:param need_notify: 是否需要通知
:param overwrite_mode: 覆盖模式
:param episodes_info: 当前季的全部集信息
:return: TransferInfo、错误信息
"""
@@ -844,6 +924,18 @@ class FileManagerModule(_ModuleBase):
rename_format = settings.TV_RENAME_FORMAT \
if mediainfo.type == MediaType.TV else settings.MOVIE_RENAME_FORMAT
# 计算重命名中的文件夹层数
rename_format_level = len(rename_format.split("/")) - 1
if rename_format_level < 1:
# 重命名格式不合法
logger.error(f"重命名格式不合法:{rename_format}")
return TransferInfo(success=False,
message=f"重命名格式不合法",
fileitem=fileitem,
transfer_type=transfer_type,
need_notify=need_notify)
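The `rename_format_level` check above counts directory levels in the rename template by its `/` separators; a standalone sketch (the template strings here are illustrative, not the project's defaults):

```python
def rename_format_level(rename_format: str) -> int:
    # 模板中每个 "/" 对应一级目录;层数 = 分段数 - 1
    return len(rename_format.split("/")) - 1

movie_fmt = "{{title}} ({{year}})/{{title}} ({{year}}){{fileExt}}"
print(rename_format_level(movie_fmt))    # 1:一级媒体目录 + 文件名
print(rename_format_level("{{title}}"))  # 0:小于 1,按不合法格式拒绝
```

A level below 1 means the template renders a bare filename with no media folder, which the diff rejects up front before any transfer happens.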
# 判断是否为文件夹
if fileitem.type == "dir":
# 整理整个目录,一般为蓝光原盘
@@ -870,13 +962,16 @@ class FileManagerModule(_ModuleBase):
need_notify=need_notify)
logger.info(f"文件夹 {fileitem.path} 整理成功")
# 计算目录下所有文件大小
total_size = sum(file.stat().st_size for file in Path(fileitem.path).rglob('*') if file.is_file())
# 返回整理后的路径
return TransferInfo(success=True,
fileitem=fileitem,
target_item=new_diritem,
target_diritem=new_diritem,
total_size=fileitem.size,
total_size=total_size,
need_scrape=need_scrape,
need_notify=need_notify,
transfer_type=transfer_type)
else:
# 整理单个文件
@@ -921,9 +1016,15 @@ class FileManagerModule(_ModuleBase):
# 目的操作对象
target_oper: StorageBase = self.__get_storage_oper(target_storage)
# 目标目录
target_diritem = target_oper.get_folder(
new_file.parent) if mediainfo.type == MediaType.MOVIE else target_oper.get_folder(
new_file.parent.parent)
target_diritem = target_oper.get_folder(new_file.parents[rename_format_level - 1])
if not target_diritem:
logger.error(f"目标目录 {new_file.parents[rename_format_level - 1]} 获取失败")
return TransferInfo(success=False,
message=f"目标目录 {new_file.parents[rename_format_level - 1]} 获取失败",
fileitem=fileitem,
fail_list=[fileitem.path],
transfer_type=transfer_type,
need_notify=need_notify)
# 目标文件
target_item = target_oper.get_item(new_file)
if target_item:
@@ -1032,7 +1133,14 @@ class FileManagerModule(_ModuleBase):
if episode.episode_number == meta.begin_episode:
episode_title = episode.name
break
# 获取集播出日期
episode_date = None
if meta.begin_episode and episodes_info:
for episode in episodes_info:
if episode.episode_number == meta.begin_episode:
episode_date = episode.air_date
break
return {
# 标题
"title": __convert_invalid_characters(mediainfo.title),
@@ -1082,21 +1190,51 @@ class FileManagerModule(_ModuleBase):
"part": meta.part,
# 剧集标题
"episode_title": __convert_invalid_characters(episode_title),
# 剧集日期根据episodes_info值获取
"episode_date": episode_date,
# 文件后缀
"fileExt": file_ext,
# 自定义占位符
"customization": meta.customization
"customization": meta.customization,
# 文件元数据
"__meta__": meta,
# 识别的媒体信息
"__mediainfo__": mediainfo,
# 当前季的全部集信息
"__episodes_info__": episodes_info,
}
@staticmethod
def get_rename_path(template_string: str, rename_dict: dict, path: Path = None) -> Path:
"""
生成重命名后的完整路径
生成重命名后的完整路径,支持智能重命名事件
:param template_string: Jinja2 模板字符串
:param rename_dict: 渲染上下文,用于替换模板中的变量
:param path: 可选的基础路径,如果提供,将在其基础上拼接生成的路径
:return: 生成的完整路径
"""
# 创建jinja2模板对象
template = Template(template_string)
# 渲染生成的字符串
render_str = template.render(rename_dict)
logger.debug(f"Initial render string: {render_str}")
# 发送智能重命名事件
event_data = TransferRenameEventData(
template_string=template_string,
rename_dict=rename_dict,
render_str=render_str,
path=path
)
event = eventmanager.send_event(ChainEventType.TransferRename, event_data)
# 检查事件返回的结果
if event and event.event_data:
event_data: TransferRenameEventData = event.event_data
if event_data.updated and event_data.updated_str:
logger.debug(f"Render string updated by event: "
f"{render_str} -> {event_data.updated_str} (source: {event_data.source})")
render_str = event_data.updated_str
# 目的路径
if path:
return path / render_str
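Stripped of the `TransferRename` event hook, the rendering step is plain Jinja2 — render the template with the naming dict, then join the result onto the base path. A minimal sketch with illustrative variable values:

```python
from pathlib import Path
from jinja2 import Template

def render_path(template_string: str, rename_dict: dict, base: Path) -> Path:
    # 用渲染上下文替换模板变量,再拼接到基础路径
    render_str = Template(template_string).render(rename_dict)
    return base / render_str

p = render_path("{{title}} ({{year}})/{{title}}{{fileExt}}",
                {"title": "Inception", "year": 2010, "fileExt": ".mkv"},
                Path("/media"))
print(p.as_posix())  # /media/Inception (2010)/Inception.mkv
```

Because the rendered string may itself contain `/`, the join produces the whole media folder plus filename in one step, which is why the folder-level count is derived from the template rather than from the path.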
@@ -1122,17 +1260,19 @@ class FileManagerModule(_ModuleBase):
# 重命名格式
rename_format = settings.TV_RENAME_FORMAT \
if mediainfo.type == MediaType.TV else settings.MOVIE_RENAME_FORMAT
# 获取相对路径(重命名路径)
rel_path = self.get_rename_path(
# 计算重命名中的文件夹层数
rename_format_level = len(rename_format.split("/")) - 1
if rename_format_level < 1:
continue
# 获取路径(重命名路径)
target_path = self.get_rename_path(
path=dir_path,
template_string=rename_format,
rename_dict=self.__get_naming_dict(meta=MetaInfo(mediainfo.title),
mediainfo=mediainfo)
)
# 取相对路径的第1层目录
if rel_path.parts:
media_path = dir_path / rel_path.parts[0]
else:
continue
media_path = target_path.parents[rename_format_level - 1]
# 检索媒体文件
fileitem = storage_oper.get_item(media_path)
if not fileitem:
@@ -1150,6 +1290,9 @@ class FileManagerModule(_ModuleBase):
:param mediainfo: 识别的媒体信息
:return: 如不存在返回None;存在时返回信息,包括每季已存在的所有集:{type: movie/tv, seasons: {season: [episodes]}}
"""
if not settings.LOCAL_EXISTS_SEARCH:
return None
# 检查媒体库
fileitems = self.media_files(mediainfo)
if not fileitems:


@@ -1,6 +1,6 @@
from abc import ABCMeta, abstractmethod
from pathlib import Path
from typing import Optional, List, Union, Dict
from typing import Optional, List, Union, Dict, Tuple
from app import schemas
from app.helper.storage import StorageHelper
@@ -16,7 +16,14 @@ class StorageBase(metaclass=ABCMeta):
def __init__(self):
self.storagehelper = StorageHelper()
def generate_qrcode(self, *args, **kwargs) -> Optional[Dict[str, str]]:
@abstractmethod
def init_storage(self):
"""
初始化
"""
pass
def generate_qrcode(self, *args, **kwargs) -> Optional[Tuple[dict, str]]:
pass
def check_login(self, *args, **kwargs) -> Optional[Dict[str, str]]:
@@ -28,11 +35,19 @@ class StorageBase(metaclass=ABCMeta):
"""
return self.storagehelper.get_storage(self.schema.value)
def get_conf(self) -> dict:
"""
获取配置
"""
conf = self.get_config()
return conf.config if conf else {}
def set_config(self, conf: dict):
"""
设置配置
"""
self.storagehelper.set_storage(self.schema.value, conf)
self.init_storage()
def support_transtype(self) -> dict:
"""
@@ -64,6 +79,8 @@ class StorageBase(metaclass=ABCMeta):
def create_folder(self, fileitem: schemas.FileItem, name: str) -> Optional[schemas.FileItem]:
"""
创建目录
:param fileitem: 父目录
:param name: 目录名
"""
pass
@@ -107,16 +124,16 @@ class StorageBase(metaclass=ABCMeta):
下载文件,保存到本地,返回本地临时文件地址
:param fileitem: 文件项
:param path: 文件保存路径
"""
pass
@abstractmethod
def upload(self, fileitem: schemas.FileItem, path: Path) -> Optional[schemas.FileItem]:
def upload(self, fileitem: schemas.FileItem, path: Path, new_name: str = None) -> Optional[schemas.FileItem]:
"""
上传文件
:param fileitem: 上传目录项
:param path: 本地文件路径
:param new_name: 上传后文件名
"""
pass
@@ -128,16 +145,22 @@ class StorageBase(metaclass=ABCMeta):
pass
@abstractmethod
def copy(self, fileitem: schemas.FileItem, target: Union[schemas.FileItem, Path]) -> bool:
def copy(self, fileitem: schemas.FileItem, path: Path, new_name: str) -> bool:
"""
复制文件
:param fileitem: 文件项
:param path: 目标目录
:param new_name: 新文件名
"""
pass
@abstractmethod
def move(self, fileitem: schemas.FileItem, target: Union[schemas.FileItem, Path]) -> bool:
def move(self, fileitem: schemas.FileItem, path: Path, new_name: str) -> bool:
"""
移动文件
:param fileitem: 文件项
:param path: 目标目录
:param new_name: 新文件名
"""
pass


@@ -16,10 +16,11 @@ from app.schemas.types import StorageSchema
from app.utils.http import RequestUtils
from aligo import Aligo, BaseFile
from app.utils.singleton import Singleton
from app.utils.string import StringUtils
class AliPan(StorageBase):
class AliPan(StorageBase, metaclass=Singleton):
"""
阿里云相关操作
"""
@@ -54,17 +55,27 @@ class AliPan(StorageBase):
except FileNotFoundError:
logger.debug('未发现 aria2c')
self._has_aria2c = False
self.init_storage()
self.__init_aligo()
def __init_aligo(self):
def init_storage(self):
"""
初始化 aligo
"""
def show_qrcode(qr_link: str):
"""
显示二维码
"""
logger.info(f"请用阿里云盘 App 扫码登录:{qr_link}")
refresh_token = self.__auth_params.get("refreshToken")
if refresh_token:
self.aligo = Aligo(refresh_token=refresh_token, use_aria2=self._has_aria2c,
name="MoviePilot V2", level=logging.ERROR)
try:
self.aligo = Aligo(refresh_token=refresh_token, show=show_qrcode, use_aria2=self._has_aria2c,
name="MoviePilot V2", level=logging.ERROR, re_login=False)
except Exception as err:
logger.error(f"初始化阿里云盘失败:{str(err)}")
self.__clear_params()
@property
def __auth_params(self):
@@ -160,7 +171,7 @@ class AliPan(StorageBase):
})
self.__update_params(data)
self.__update_drives()
self.__init_aligo()
self.init_storage()
except Exception as e:
return {}, f"bizExt 解码失败:{str(e)}"
return data, ""
@@ -180,12 +191,16 @@ class AliPan(StorageBase):
"""
获取用户信息(drive_id 等)
"""
if not self.aligo:
return {}
return self.aligo.get_user()
def __update_drives(self):
"""
更新用户存储根目录
"""
if not self.aligo:
return
drivers = self.aligo.list_my_drives()
for driver in drivers:
if driver.category == "resource":
@@ -240,28 +255,9 @@ class AliPan(StorageBase):
return []
# 根目录处理
if not fileitem or not fileitem.drive_id:
return [
schemas.FileItem(
storage=self.schema.value,
fileid="root",
drive_id=self.__auth_params.get("resourceDriveId"),
parent_fileid="root",
type="dir",
path="/资源库/",
name="资源库",
basename="资源库"
),
schemas.FileItem(
storage=self.schema.value,
fileid="root",
drive_id=self.__auth_params.get("backDriveId"),
parent_fileid="root",
type="dir",
path="/备份盘/",
name="备份盘",
basename="备份盘"
)
]
items = self.aligo.get_file_list()
if items:
return [self.__get_fileitem(item) for item in items]
elif fileitem.type == "file":
# 文件处理
file = self.detail(fileitem)
@@ -276,6 +272,8 @@ class AliPan(StorageBase):
def create_folder(self, fileitem: schemas.FileItem, name: str) -> Optional[schemas.FileItem]:
"""
创建目录
:param fileitem: 父目录
:param name: 目录名
"""
if not self.aligo:
return None
@@ -283,21 +281,43 @@ class AliPan(StorageBase):
if item:
if isinstance(item, CreateFileResponse):
item = self.aligo.get_file(file_id=item.file_id, drive_id=item.drive_id)
return self.__get_fileitem(item)
return self.__get_fileitem(item, parent=fileitem.path)
return None
def get_folder(self, path: Path) -> Optional[schemas.FileItem]:
"""
根据文件路径获取目录,不存在则创建
"""
if not self.aligo:
def __find_dir(_fileitem: schemas.FileItem, _name: str) -> Optional[schemas.FileItem]:
"""
查找下级目录中匹配名称的目录
"""
for sub_folder in self.list(_fileitem):
if sub_folder.type != "dir":
continue
if sub_folder.name == _name:
return sub_folder
return None
item = self.aligo.get_folder_by_path(path=str(path), create_folder=True)
if item:
if isinstance(item, CreateFileResponse):
item = self.aligo.get_file(file_id=item.file_id, drive_id=item.drive_id)
return self.__get_fileitem(item)
return None
# 是否已存在
folder = self.get_item(path)
if folder:
return folder
# 逐级查找和创建目录
fileitem = schemas.FileItem(path="/")
for part in path.parts:
if part == "/":
continue
dir_file = __find_dir(fileitem, part)
if dir_file:
fileitem = dir_file
else:
dir_file = self.create_folder(fileitem, part)
if not dir_file:
return None
fileitem = dir_file
return fileitem
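The level-by-level find-or-create walk in `get_folder` can be modelled with a nested dict in place of the cloud API; both `tree` and the dict operations below are stand-ins for the `self.list`/`self.create_folder` calls:

```python
from pathlib import Path

# 嵌套 dict 模拟网盘目录树:键为目录名,值为子树
tree = {"media": {"movies": {}}}

def get_folder(path: Path, root: dict) -> dict:
    # 逐级查找目录,缺失的一级即时创建(对应上面的逐级查找和创建逻辑)
    node = root
    for part in path.parts:
        if part == "/":
            continue
        node = node.setdefault(part, {})
    return node

get_folder(Path("/media/tv/S01"), tree)
print(sorted(tree["media"]))  # ['movies', 'tv']
```

Walking `path.parts` and skipping the leading `"/"` mirrors the diff exactly; the difference is only that a real storage backend can fail to create a folder, which is why the original returns `None` on `create_folder` failure.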
def get_item(self, path: Path) -> Optional[schemas.FileItem]:
"""
@@ -307,7 +327,7 @@ class AliPan(StorageBase):
return None
item = self.aligo.get_file_by_path(path=str(path))
if item:
return self.__get_fileitem(item)
return self.__get_fileitem(item, parent=path.parent)
return None
def delete(self, fileitem: schemas.FileItem) -> bool:
@@ -328,7 +348,7 @@ class AliPan(StorageBase):
return None
item = self.aligo.get_file(file_id=fileitem.fileid, drive_id=fileitem.drive_id)
if item:
return self.__get_fileitem(item)
return self.__get_fileitem(item, parent=fileitem.path)
return None
def rename(self, fileitem: schemas.FileItem, name: str) -> bool:
@@ -347,41 +367,66 @@ class AliPan(StorageBase):
"""
if not self.aligo:
return None
local_path = self.aligo.download_file(file_id=fileitem.fileid, drive_id=fileitem.drive_id,
local_path = self.aligo.download_file(file_id=fileitem.fileid, drive_id=fileitem.drive_id, # noqa
local_folder=str(path or settings.TEMP_PATH))
if local_path:
return Path(local_path)
return None
def upload(self, fileitem: schemas.FileItem, path: Path) -> Optional[schemas.FileItem]:
def upload(self, fileitem: schemas.FileItem, path: Path, new_name: str = None) -> Optional[schemas.FileItem]:
"""
上传文件,并标记完成
:param fileitem: 上传目录项
:param path: 本地文件路径
:param new_name: 上传后文件名
"""
if not self.aligo:
return None
# 上传文件
result = self.aligo.upload_file(file_path=str(path), parent_file_id=fileitem.fileid,
drive_id=fileitem.drive_id, name=path.name,
drive_id=fileitem.drive_id, name=new_name or path.name,
check_name_mode="refuse")
if result:
item = self.aligo.get_file(file_id=result.file_id, drive_id=result.drive_id)
if item:
return self.__get_fileitem(item)
return self.__get_fileitem(item, parent=fileitem.path)
return None
def move(self, fileitem: schemas.FileItem, target: schemas.FileItem) -> bool:
def move(self, fileitem: schemas.FileItem, path: Path, new_name: str) -> bool:
"""
移动文件
:param fileitem: 文件项
:param path: 目标目录
:param new_name: 新文件名
"""
if not self.aligo:
return False
target = self.get_folder(path)
if not target:
return False
if self.aligo.move_file(file_id=fileitem.fileid, drive_id=fileitem.drive_id,
to_parent_file_id=target.fileid, to_drive_id=target.drive_id):
to_parent_file_id=target.fileid, to_drive_id=target.drive_id,
new_name=new_name):
return True
return False
def copy(self, fileitem: schemas.FileItem, target: schemas.FileItem) -> bool:
pass
def copy(self, fileitem: schemas.FileItem, path: Path, new_name: str) -> bool:
"""
复制文件
:param fileitem: 文件项
:param path: 目标目录
:param new_name: 新文件名
"""
if not self.aligo:
return False
target = self.get_folder(path)
if not target:
return False
if self.aligo.copy_file(file_id=fileitem.fileid, drive_id=fileitem.drive_id,
to_parent_file_id=target.fileid, to_drive_id=target.drive_id,
new_name=new_name):
return True
return False
def link(self, fileitem: schemas.FileItem, target_file: Path) -> bool:
"""


@@ -0,0 +1,741 @@
import json
import logging
from datetime import datetime
from pathlib import Path
from typing import Optional, List, Dict
from cachetools import cached, TTLCache
from requests import Response
from app import schemas
from app.core.config import settings
from app.log import logger
from app.modules.filemanager.storages import StorageBase
from app.schemas.types import StorageSchema
from app.utils.http import RequestUtils
from app.utils.singleton import Singleton
from app.utils.url import UrlUtils
class Alist(StorageBase, metaclass=Singleton):
"""
Alist相关操作
api文档:https://alist.nn.ci/zh/guide/api
"""
# 存储类型
schema = StorageSchema.Alist
# 支持的整理方式
transtype = {
"copy": "复制",
"move": "移动",
}
def __init__(self):
super().__init__()
def init_storage(self):
"""
初始化
"""
pass
@property
def __get_base_url(self) -> str:
"""
获取基础URL
"""
url = self.get_conf().get("url")
if url is None:
return ""
return UrlUtils.standardize_base_url(url)
def __get_api_url(self, path: str) -> str:
"""
获取API URL
"""
return UrlUtils.adapt_request_url(self.__get_base_url, path)
@property
def __get_valuable_toke(self) -> str:
"""
获取一个可用的token
如果设置了永久令牌,则返回永久令牌,
否则使用账号密码生成临时令牌
"""
return self.__generate_token
@property
@cached(cache=TTLCache(maxsize=1, ttl=60 * 60 * 24 * 2 - 60 * 5))
def __generate_token(self) -> str:
"""
使用账号密码生成一个临时token
缓存2天,提前5分钟更新
"""
conf = self.get_conf()
resp: Response = RequestUtils(headers={
'Content-Type': 'application/json'
}).post_res(
self.__get_api_url("/api/auth/login"),
data=json.dumps({
"username": conf.get("username"),
"password": conf.get("password"),
}),
)
"""
{
"username": "{{alist_username}}",
"password": "{{alist_password}}"
}
======================================
{
"code": 200,
"message": "success",
"data": {
"token": "abcd"
}
}
"""
if resp is None:
logger.warning("请求登录失败,无法连接alist服务")
return ""
if resp.status_code != 200:
logger.warning(f"更新令牌请求发送失败,状态码:{resp.status_code}")
return ""
result = resp.json()
if result["code"] != 200:
logger.critical(f'更新令牌,错误信息:{result["message"]}')
return ""
logger.debug("AList获取令牌成功")
return result["data"]["token"]
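The `@cached(cache=TTLCache(...))` decorator above is what keeps the login request from repeating inside the token's lifetime; the same pattern on a plain function, with a counting stub in place of the `/api/auth/login` call:

```python
from cachetools import cached, TTLCache

calls = {"n": 0}

@cached(cache=TTLCache(maxsize=1, ttl=60 * 60 * 24 * 2 - 60 * 5))
def get_token() -> str:
    # 实际实现中这里是 /api/auth/login 请求;此处用计数桩代替
    calls["n"] += 1
    return f"token-{calls['n']}"

assert get_token() == get_token()  # TTL 内第二次调用命中缓存
print(calls["n"])  # 1:登录只发生一次
```

The TTL of two days minus five minutes deliberately undercuts the token's nominal two-day validity, so a fresh login happens before the cached token can expire mid-use.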
def __get_header_with_token(self) -> dict:
"""
获取带有token的header
"""
return {"Authorization": self.__get_valuable_toke}
def check(self) -> bool:
"""
检查存储是否可用
"""
pass
def list(
self,
fileitem: schemas.FileItem,
password: str = "",
page: int = 1,
per_page: int = 0,
refresh: bool = False,
) -> Optional[List[schemas.FileItem]]:
"""
浏览文件
:param fileitem: 文件项
:param password: 路径密码
:param page: 页码
:param per_page: 每页数量
:param refresh: 是否刷新
"""
if fileitem.type == "file":
item = self.get_item(Path(fileitem.path))
if item:
return [item]
return None
resp: Response = RequestUtils(
headers=self.__get_header_with_token()
).post_res(
self.__get_api_url("/api/fs/list"),
json={
"path": fileitem.path,
"password": password,
"page": page,
"per_page": per_page,
"refresh": refresh,
},
)
"""
{
"path": "/t",
"password": "",
"page": 1,
"per_page": 0,
"refresh": false
}
======================================
{
"code": 200,
"message": "success",
"data": {
"content": [
{
"name": "Alist V3.md",
"size": 1592,
"is_dir": false,
"modified": "2024-05-17T13:47:55.4174917+08:00",
"created": "2024-05-17T13:47:47.5725906+08:00",
"sign": "",
"thumb": "",
"type": 4,
"hashinfo": "null",
"hash_info": null
}
],
"total": 1,
"readme": "",
"header": "",
"write": true,
"provider": "Local"
}
}
"""
if resp is None:
logging.warning(f"请求获取目录 {fileitem.path} 的文件列表失败,无法连接alist服务")
return
if resp.status_code != 200:
logging.warning(
f"请求获取目录 {fileitem.path} 的文件列表失败,状态码:{resp.status_code}"
)
return
result = resp.json()
if result["code"] != 200:
logging.warning(
f'获取目录 {fileitem.path} 的文件列表失败,错误信息:{result["message"]}'
)
return
return [
schemas.FileItem(
storage=self.schema.value,
type="dir" if item["is_dir"] else "file",
path=(Path(fileitem.path) / item["name"]).as_posix() + ("/" if item["is_dir"] else ""),
name=item["name"],
basename=Path(item["name"]).stem,
extension=Path(item["name"]).suffix[1:] if not item["is_dir"] else None,
size=item["size"] if not item["is_dir"] else None,
modify_time=self.__parse_timestamp(item["modified"]),
thumbnail=item["thumb"],
)
for item in result["data"]["content"] or []
]
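`__parse_timestamp` itself is outside this diff; one way to implement it for Alist's RFC 3339 strings (which carry more fractional-second digits than `datetime.fromisoformat` accepted before Python 3.11) is to truncate to microseconds first. A sketch, not the project's actual helper:

```python
from datetime import datetime
from itertools import takewhile

def parse_timestamp(value: str) -> float:
    # 截断小数秒到 6 位(微秒),保留时区偏移,再交给 fromisoformat 解析
    if "." in value:
        head, rest = value.split(".", 1)
        frac = "".join(takewhile(str.isdigit, rest))
        value = f"{head}.{frac[:6]}{rest[len(frac):]}"
    return datetime.fromisoformat(value).timestamp()

# API 文档示例中的 modified 值,带 7 位小数秒和 +08:00 偏移
ts = parse_timestamp("2024-05-17T13:47:55.4174917+08:00")
```

Keeping the offset and parsing an aware datetime means `timestamp()` returns a correct epoch value regardless of the server's timezone.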
def create_folder(
self, fileitem: schemas.FileItem, name: str
) -> Optional[schemas.FileItem]:
"""
创建目录
:param fileitem: 父目录
:param name: 目录名
"""
path = Path(fileitem.path) / name
resp: Response = RequestUtils(
headers=self.__get_header_with_token()
).post_res(
self.__get_api_url("/api/fs/mkdir"),
json={"path": path.as_posix()},
)
"""
{
"path": "/tt"
}
======================================
{
"code": 200,
"message": "success",
"data": null
}
"""
if resp is None:
logging.warning(f"请求创建目录 {path} 失败,无法连接alist服务")
return
if resp.status_code != 200:
logging.warning(f"请求创建目录 {path} 失败,状态码:{resp.status_code}")
return
result = resp.json()
if result["code"] != 200:
logging.warning(f'创建目录 {path} 失败,错误信息:{result["message"]}')
return
return self.get_item(path)
def get_folder(self, path: Path) -> Optional[schemas.FileItem]:
"""
获取目录,如目录不存在则创建
"""
folder = self.get_item(path)
if folder:
return folder
return self.create_folder(schemas.FileItem(
storage=self.schema.value,
type="dir",
path=path.parent.as_posix(),
name=path.name,
basename=path.stem
), path.name)
def get_item(
self,
path: Path,
password: str = "",
page: int = 1,
per_page: int = 0,
refresh: bool = False,
) -> Optional[schemas.FileItem]:
"""
获取文件或目录,不存在时返回None
:param path: 文件路径
:param password: 路径密码
:param page: 页码
:param per_page: 每页数量
:param refresh: 是否刷新
"""
resp: Response = RequestUtils(
headers=self.__get_header_with_token()
).post_res(
self.__get_api_url("/api/fs/get"),
json={
"path": path.as_posix(),
"password": password,
"page": page,
"per_page": per_page,
"refresh": refresh,
},
)
"""
{
"path": "/t",
"password": "",
"page": 1,
"per_page": 0,
"refresh": false
}
======================================
{
"code": 200,
"message": "success",
"data": {
"name": "Alist V3.md",
"size": 2618,
"is_dir": false,
"modified": "2024-05-17T16:05:36.4651534+08:00",
"created": "2024-05-17T16:05:29.2001008+08:00",
"sign": "",
"thumb": "",
"type": 4,
"hashinfo": "null",
"hash_info": null,
"raw_url": "http://127.0.0.1:5244/p/local/Alist%20V3.md",
"readme": "",
"header": "",
"provider": "Local",
"related": null
}
}
"""
if resp is None:
logging.warning(f"请求获取文件 {path} 失败,无法连接alist服务")
return
if resp.status_code != 200:
logging.warning(f"请求获取文件 {path} 失败,状态码:{resp.status_code}")
return
result = resp.json()
if result["code"] != 200:
logging.debug(f'获取文件 {path} 失败,错误信息:{result["message"]}')
return
return schemas.FileItem(
storage=self.schema.value,
type="dir" if result["data"]["is_dir"] else "file",
path=path.as_posix() + ("/" if result["data"]["is_dir"] else ""),
name=result["data"]["name"],
basename=Path(result["data"]["name"]).stem,
extension=Path(result["data"]["name"]).suffix[1:],
size=result["data"]["size"],
modify_time=self.__parse_timestamp(result["data"]["modified"]),
thumbnail=result["data"]["thumb"],
)
def get_parent(self, fileitem: schemas.FileItem) -> Optional[schemas.FileItem]:
"""
获取父目录
"""
return self.get_folder(Path(fileitem.path).parent)
def delete(self, fileitem: schemas.FileItem) -> bool:
"""
删除文件
"""
resp: Response = RequestUtils(
headers=self.__get_header_with_token()
).post_res(
self.__get_api_url("/api/fs/remove"),
json={
"dir": Path(fileitem.path).parent.as_posix(),
"names": [fileitem.name],
},
)
"""
{
"names": [
"string"
],
"dir": "string"
}
======================================
{
"code": 200,
"message": "success",
"data": null
}
"""
if resp is None:
logging.warning(f"请求删除文件 {fileitem.path} 失败,无法连接alist服务")
return False
if resp.status_code != 200:
logging.warning(
f"请求删除文件 {fileitem.path} 失败,状态码:{resp.status_code}"
)
return False
result = resp.json()
if result["code"] != 200:
logging.warning(
f'删除文件 {fileitem.path} 失败,错误信息:{result["message"]}'
)
return False
return True
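The alist methods above repeat the same three-step response check (connection failure, HTTP status, then alist's own business `code`). A hedged sketch of factoring that into one helper — `check_alist_resp` is a hypothetical name, not part of this diff:

```python
def check_alist_resp(resp):
    # Returns (ok, data_or_error) using the same triple check as the
    # methods above: no connection, bad HTTP status, then alist's code field.
    if resp is None:
        return False, "无法连接alist服务"
    if resp.status_code != 200:
        return False, f"状态码:{resp.status_code}"
    result = resp.json()
    if result.get("code") != 200:
        return False, result.get("message")
    return True, result.get("data")
```

Each caller would then only log its own operation name instead of restating the whole check.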
def rename(self, fileitem: schemas.FileItem, name: str) -> bool:
"""
重命名文件
"""
resp: Response = RequestUtils(
headers=self.__get_header_with_token()
).post_res(
self.__get_api_url("/api/fs/rename"),
json={
"name": name,
"path": fileitem.path,
},
)
"""
{
"name": "test3",
"path": "/阿里云盘/test2"
}
======================================
{
"code": 200,
"message": "success",
"data": null
}
"""
if not resp:
logging.warning(f"请求重命名文件 {fileitem.path} 失败,无法连接alist服务")
return False
if resp.status_code != 200:
logging.warning(
f"请求重命名文件 {fileitem.path} 失败,状态码:{resp.status_code}"
)
return False
result = resp.json()
if result["code"] != 200:
logging.warning(
f'重命名文件 {fileitem.path} 失败,错误信息:{result["message"]}'
)
return False
return True
def download(
self,
fileitem: schemas.FileItem,
path: Path = None,
password: str = "",
) -> Optional[Path]:
"""
下载文件,保存到本地,返回本地临时文件地址
:param fileitem: 文件项
:param path: 文件保存路径
:param password: 文件密码
"""
resp: Response = RequestUtils(
headers=self.__get_header_with_token()
).post_res(
self.__get_api_url("/api/fs/get"),
json={
"path": fileitem.path,
"password": password,
"page": 1,
"per_page": 0,
"refresh": False,
},
)
"""
{
"code": 200,
"message": "success",
"data": {
"name": "[ANi]輝夜姬想讓人告白~天才們的戀愛頭腦戰~[01][1080P][Baha][WEB-DL].mp4",
"size": 924933111,
"is_dir": false,
"modified": "1970-01-01T00:00:00Z",
"created": "1970-01-01T00:00:00Z",
"sign": "1v0xkMQz_uG8fkEOQ7-l58OnbB-g4GkdBlUBcrsApCQ=:0",
"thumb": "",
"type": 2,
"hashinfo": "null",
"hash_info": null,
"raw_url": "xxxxxx",
"readme": "",
"header": "",
"provider": "UrlTree",
"related": null
}
}
"""
if not resp:
logging.warning(f"请求获取文件 {fileitem.path} 失败,无法连接alist服务")
return
if resp.status_code != 200:
logging.warning(f"请求获取文件 {fileitem.path} 失败,状态码:{resp.status_code}")
return
result = resp.json()
if result["code"] != 200:
logging.warning(f'获取文件 {fileitem.path} 失败,错误信息:{result["message"]}')
return
if result["data"]["raw_url"]:
download_url = result["data"]["raw_url"]
else:
download_url = UrlUtils.adapt_request_url(self.__get_base_url, f"/d{fileitem.path}")
if result["data"]["sign"]:
download_url = download_url + "?sign=" + result["data"]["sign"]
resp = RequestUtils(
headers=self.__get_header_with_token()
).get_res(download_url)
if resp is None or resp.status_code != 200:
logging.warning(f"下载文件 {fileitem.path} 失败")
return None
if not path:
new_path = settings.TEMP_PATH / fileitem.name
else:
new_path = path / fileitem.name
with open(new_path, "wb") as f:
f.write(resp.content)
if new_path.exists():
return new_path
return None
def upload(
self, fileitem: schemas.FileItem, path: Path, new_name: str = None, task: bool = False
) -> Optional[schemas.FileItem]:
"""
上传文件
:param fileitem: 上传目录项
:param path: 本地文件路径
:param new_name: 上传后文件名
:param task: 是否为任务,默认为False,避免未完成上传时对文件进行操作
"""
encoded_path = UrlUtils.quote(fileitem.path)
headers = self.__get_header_with_token()
headers.setdefault("Content-Type", "multipart/form-data")
headers.setdefault("As-Task", str(task).lower())
headers.setdefault("File-Path", encoded_path)
with open(path, "rb") as f:
resp: Response = RequestUtils(headers=headers).put_res(
self.__get_api_url("/api/fs/form"),
data={"file": f},
)
if resp is None:
logging.warning(f"请求上传文件 {path} 失败,无法连接alist服务")
return
if resp.status_code != 200:
logging.warning(f"请求上传文件 {path} 失败,状态码:{resp.status_code}")
return
new_item = self.get_item(Path(fileitem.path) / path.name)
if new_name and new_name != path.name:
if self.rename(new_item, new_name):
return self.get_item(Path(new_item.path).with_name(new_name))
return new_item
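A note on the upload headers above: hand-setting `Content-Type: multipart/form-data` without a boundary is fragile with most HTTP clients, whereas alist's streaming endpoint (`PUT /api/fs/put`) takes the raw file body plus a percent-encoded `File-Path` header. A small sketch of that header assembly — the helper name and `token` parameter are assumptions, not part of this diff:

```python
from urllib.parse import quote

def build_upload_headers(token: str, remote_path: str, as_task: bool = False) -> dict:
    # Mirrors the header setup in upload(): the destination path must be
    # percent-encoded, and As-Task controls whether alist queues the upload.
    return {
        "Authorization": token,
        "File-Path": quote(remote_path),
        "As-Task": str(as_task).lower(),
    }
```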
def detail(self, fileitem: schemas.FileItem) -> Optional[schemas.FileItem]:
"""
获取文件详情
"""
return self.get_item(Path(fileitem.path))
def copy(self, fileitem: schemas.FileItem, path: Path, new_name: str) -> bool:
"""
复制文件
:param fileitem: 文件项
:param path: 目标目录
:param new_name: 新文件名
"""
resp: Response = RequestUtils(
headers=self.__get_header_with_token()
).post_res(
self.__get_api_url("/api/fs/copy"),
json={
"src_dir": Path(fileitem.path).parent.as_posix(),
"dst_dir": path.as_posix(),
"names": [fileitem.name],
},
)
"""
{
"src_dir": "string",
"dst_dir": "string",
"names": [
"string"
]
}
======================================
{
"code": 200,
"message": "success",
"data": null
}
"""
if resp is None:
logging.warning(
f"请求复制文件 {fileitem.path} 失败,无法连接alist服务"
)
return False
if resp.status_code != 200:
logging.warning(
f"请求复制文件 {fileitem.path} 失败,状态码:{resp.status_code}"
)
return False
result = resp.json()
if result["code"] != 200:
logging.warning(
f'复制文件 {fileitem.path} 失败,错误信息:{result["message"]}'
)
return False
# 重命名
if fileitem.name != new_name:
self.rename(
self.get_item(path / fileitem.name), new_name
)
return True
def move(self, fileitem: schemas.FileItem, path: Path, new_name: str) -> bool:
"""
移动文件
:param fileitem: 文件项
:param path: 目标目录
:param new_name: 新文件名
"""
# 先重命名
if fileitem.name != new_name:
self.rename(fileitem, new_name)
resp: Response = RequestUtils(
headers=self.__get_header_with_token()
).post_res(
self.__get_api_url("/api/fs/move"),
json={
"src_dir": Path(fileitem.path).parent.as_posix(),
"dst_dir": path.as_posix(),
"names": [new_name],
},
)
"""
{
"src_dir": "string",
"dst_dir": "string",
"names": [
"string"
]
}
======================================
{
"code": 200,
"message": "success",
"data": null
}
"""
if resp is None:
logging.warning(
f"请求移动文件 {fileitem.path} 失败,无法连接alist服务"
)
return False
if resp.status_code != 200:
logging.warning(
f"请求移动文件 {fileitem.path} 失败,状态码:{resp.status_code}"
)
return False
result = resp.json()
if result["code"] != 200:
logging.warning(
f'移动文件 {fileitem.path} 失败,错误信息:{result["message"]}'
)
return False
return True
def link(self, fileitem: schemas.FileItem, target_file: Path) -> bool:
"""
硬链接文件
"""
pass
def softlink(self, fileitem: schemas.FileItem, target_file: Path) -> bool:
"""
软链接文件
"""
pass
def usage(self) -> Optional[schemas.StorageUsage]:
"""
存储使用情况
"""
pass
def snapshot(self, path: Path) -> Dict[str, float]:
"""
快照文件系统,输出所有层级文件信息(不含目录)
"""
files_info = {}
def __snapshot_file(_fileitm: schemas.FileItem):
"""
递归获取文件信息
"""
if _fileitm.type == "dir":
for sub_file in self.list(_fileitm):
__snapshot_file(sub_file)
else:
files_info[_fileitm.path] = _fileitm.size
fileitem = self.get_item(path)
if not fileitem:
return {}
__snapshot_file(fileitem)
return files_info
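Since `snapshot` returns a flat `{path: size}` map, two snapshots can be diffed to detect files that appeared or changed size between runs. A minimal sketch — the helper name is hypothetical:

```python
def diff_snapshots(old: dict, new: dict) -> dict:
    # Paths that appeared, or whose size changed, since the previous snapshot.
    return {path: size for path, size in new.items() if old.get(path) != size}
```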
@staticmethod
def __parse_timestamp(time_str: str) -> float:
"""
直接使用 ISO 8601 格式解析时间
"""
return datetime.fromisoformat(time_str).timestamp()
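One caveat about `datetime.fromisoformat`: on Python 3.10 and earlier it rejects both a trailing `Z` and the 7-digit fractional seconds alist can return (e.g. `2024-05-17T16:05:36.4651534+08:00`). A defensive variant, assuming those inputs are possible:

```python
import re
from datetime import datetime

def parse_iso_timestamp(time_str: str) -> float:
    # Normalize "Z" to an explicit offset and truncate fractional seconds
    # to the 6 digits that fromisoformat accepts on Python <= 3.10.
    time_str = time_str.replace("Z", "+00:00")
    time_str = re.sub(r"(\.\d{6})\d+", r"\1", time_str)
    return datetime.fromisoformat(time_str).timestamp()
```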


@@ -25,13 +25,19 @@ class LocalStorage(StorageBase):
"softlink": "软链接"
}
def init_storage(self):
"""
初始化
"""
pass
def check(self) -> bool:
"""
检查存储是否可用
"""
return True
def __get_fileitem(self, path: Path):
def __get_fileitem(self, path: Path) -> schemas.FileItem:
"""
获取文件项
"""
@@ -46,7 +52,7 @@ class LocalStorage(StorageBase):
modify_time=path.stat().st_mtime,
)
def __get_diritem(self, path: Path):
def __get_diritem(self, path: Path) -> schemas.FileItem:
"""
获取目录项
"""
@@ -109,6 +115,8 @@ class LocalStorage(StorageBase):
def create_folder(self, fileitem: schemas.FileItem, name: str) -> Optional[schemas.FileItem]:
"""
创建目录
:param fileitem: 父目录
:param name: 目录名
"""
if not fileitem.path:
return None
@@ -183,28 +191,20 @@ class LocalStorage(StorageBase):
"""
return Path(fileitem.path)
def upload(self, fileitem: schemas.FileItem, path: Path) -> Optional[schemas.FileItem]:
def upload(self, fileitem: schemas.FileItem, path: Path, new_name: str = None) -> Optional[schemas.FileItem]:
"""
上传文件
:param fileitem: 上传目录项
:param path: 本地文件路径
:param new_name: 上传后文件名
"""
dir_path = Path(fileitem.path)
target_path = dir_path / path.name
target_path = dir_path / (new_name or path.name)
code, message = SystemUtils.move(path, target_path)
if code != 0:
logger.error(f"移动文件失败:{message}")
return None
return self.__get_diritem(target_path)
def copy(self, fileitem: schemas.FileItem, target_file: Path) -> bool:
"""
复制文件
"""
file_path = Path(fileitem.path)
code, message = SystemUtils.copy(file_path, target_file)
if code != 0:
logger.error(f"复制文件失败:{message}")
return False
return True
return self.get_item(target_path)
def link(self, fileitem: schemas.FileItem, target_file: Path) -> bool:
"""
@@ -222,18 +222,35 @@ class LocalStorage(StorageBase):
软链接文件
"""
file_path = Path(fileitem.path)
code, message = SystemUtils.copy(file_path, target_file)
code, message = SystemUtils.softlink(file_path, target_file)
if code != 0:
logger.error(f"软链接文件失败:{message}")
return False
return True
def move(self, fileitem: schemas.FileItem, target: Path) -> bool:
def copy(self, fileitem: schemas.FileItem, path: Path, new_name: str) -> bool:
"""
移动文件
复制文件
:param fileitem: 文件项
:param path: 目标目录
:param new_name: 新文件名
"""
file_path = Path(fileitem.path)
code, message = SystemUtils.move(file_path, target)
code, message = SystemUtils.copy(file_path, path / new_name)
if code != 0:
logger.error(f"复制文件失败:{message}")
return False
return True
def move(self, fileitem: schemas.FileItem, path: Path, new_name: str) -> bool:
"""
移动文件
:param fileitem: 文件项
:param path: 目标目录
:param new_name: 新文件名
"""
file_path = Path(fileitem.path)
code, message = SystemUtils.move(file_path, path / new_name)
if code != 0:
logger.error(f"移动文件失败:{message}")
return False


@@ -27,6 +27,12 @@ class Rclone(StorageBase):
"copy": "复制"
}
def init_storage(self):
"""
初始化
"""
pass
def set_config(self, conf: dict):
"""
设置配置
@@ -39,7 +45,7 @@ class Rclone(StorageBase):
path = Path(filepath)
if not path.parent.exists():
path.parent.mkdir(parents=True)
path.write_text(conf.get('content'))
path.write_text(conf.get('content'), encoding='utf-8')
@staticmethod
def __get_hidden_shell():
@@ -76,7 +82,7 @@ class Rclone(StorageBase):
return schemas.FileItem(
storage=self.schema.value,
type="dir",
path=f"{parent}{item.get('Name')}",
path=f"{parent}{item.get('Name')}" + "/",
name=item.get("Name"),
basename=item.get("Name"),
modify_time=StringUtils.str_to_timestamp(item.get("ModTime"))
@@ -133,6 +139,8 @@ class Rclone(StorageBase):
def create_folder(self, fileitem: schemas.FileItem, name: str) -> Optional[schemas.FileItem]:
"""
创建目录
:param fileitem: 父目录
:param name: 目录名
"""
try:
retcode = subprocess.run(
@@ -143,10 +151,7 @@ class Rclone(StorageBase):
startupinfo=self.__get_hidden_shell()
).returncode
if retcode == 0:
ret_fileitem = copy.deepcopy(fileitem)
ret_fileitem.path = f"{fileitem.path}/{name}/"
ret_fileitem.name = name
return ret_fileitem
return self.get_item(Path(f"{fileitem.path}/{name}"))
except Exception as err:
logger.error(f"rclone创建目录失败{err}")
return None
@@ -160,13 +165,17 @@ class Rclone(StorageBase):
"""
查找下级目录中匹配名称的目录
"""
for sub_file in self.list(_fileitem):
if sub_file.type != "dir":
for sub_folder in self.list(_fileitem):
if sub_folder.type != "dir":
continue
if sub_file.name == _name:
return sub_file
if sub_folder.name == _name:
return sub_folder
return None
# 是否已存在
folder = self.get_item(path)
if folder:
return folder
# 逐级查找和创建目录
fileitem = schemas.FileItem(path="/")
for part in path.parts:
@@ -260,21 +269,25 @@ class Rclone(StorageBase):
logger.error(f"rclone复制文件失败{err}")
return None
def upload(self, fileitem: schemas.FileItem, path: Path) -> Optional[schemas.FileItem]:
def upload(self, fileitem: schemas.FileItem, path: Path, new_name: str = None) -> Optional[schemas.FileItem]:
"""
上传文件
:param fileitem: 上传目录项
:param path: 本地文件路径
:param new_name: 上传后文件名
"""
try:
new_path = Path(fileitem.path) / (new_name or path.name)
retcode = subprocess.run(
[
'rclone', 'copyto',
str(path),
f'MP:{Path(fileitem.path) / path.name}'
f'MP:{new_path}'
],
startupinfo=self.__get_hidden_shell()
).returncode
if retcode == 0:
return self.__get_fileitem(path)
return self.__get_fileitem(new_path)
except Exception as err:
logger.error(f"rclone上传文件失败{err}")
return None
@@ -299,16 +312,19 @@ class Rclone(StorageBase):
logger.error(f"rclone获取文件详情失败{err}")
return None
def move(self, fileitem: schemas.FileItem, target: Path) -> bool:
def move(self, fileitem: schemas.FileItem, path: Path, new_name: str) -> bool:
"""
移动文件target_file格式rclone:path
移动文件
:param fileitem: 文件项
:param path: 目标目录
:param new_name: 新文件名
"""
try:
retcode = subprocess.run(
[
'rclone', 'moveto',
f'MP:{fileitem.path}',
f'MP:{target}'
f'MP:{path / new_name}'
],
startupinfo=self.__get_hidden_shell()
).returncode
@@ -318,8 +334,27 @@ class Rclone(StorageBase):
logger.error(f"rclone移动文件失败{err}")
return False
def copy(self, fileitem: schemas.FileItem, target_file: Path) -> bool:
pass
def copy(self, fileitem: schemas.FileItem, path: Path, new_name: str) -> bool:
"""
复制文件
:param fileitem: 文件项
:param path: 目标目录
:param new_name: 新文件名
"""
try:
retcode = subprocess.run(
[
'rclone', 'copyto',
f'MP:{fileitem.path}',
f'MP:{path / new_name}'
],
startupinfo=self.__get_hidden_shell()
).returncode
if retcode == 0:
return True
except Exception as err:
logger.error(f"rclone复制文件失败{err}")
return False
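The `copyto`/`moveto` calls above both assemble an argument list against the hard-coded `MP:` remote. A pure sketch of that assembly (helper name assumed), which is easy to unit-test without invoking rclone itself:

```python
def rclone_transfer_cmd(op: str, src_path: str, dst_dir: str, new_name: str) -> list:
    # op is "copyto" or "moveto"; "MP" is the remote name this module
    # writes into the generated rclone config.
    dst = f"{dst_dir.rstrip('/')}/{new_name}"
    return ["rclone", op, f"MP:{src_path}", f"MP:{dst}"]
```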
def link(self, fileitem: schemas.FileItem, target_file: Path) -> bool:
pass
@@ -331,11 +366,24 @@ class Rclone(StorageBase):
"""
存储使用情况
"""
conf = self.get_config()
if not conf:
return None
file_path = conf.config.get("filepath")
if not file_path or not Path(file_path).exists():
return None
# 读取rclone文件检查是否有[MP]节点配置
with open(file_path, "r", encoding="utf-8") as f:
lines = f.readlines()
if not lines:
return None
if not any("[MP]" in line.strip() for line in lines):
return None
try:
ret = subprocess.run(
[
'rclone', 'about',
'/', '--json'
'MP:/', '--json'
],
capture_output=True,
startupinfo=self.__get_hidden_shell()


@@ -1,12 +1,7 @@
import base64
import subprocess
from pathlib import Path
from typing import Optional, Tuple, List
import oss2
import py115
from py115 import Cloud
from py115.types import LoginTarget, QrcodeSession, QrcodeStatus, Credential
from p115 import P115Client, P115Path
from app import schemas
from app.core.config import settings
@@ -27,57 +22,54 @@ class U115Pan(StorageBase, metaclass=Singleton):
# 支持的整理方式
transtype = {
"move": "移动"
"move": "移动",
"copy": "复制"
}
cloud: Optional[Cloud] = None
_session: QrcodeSession = None
# 115二维码登录地址
qrcode_url = "https://qrcodeapi.115.com/api/1.0/web/1.0/token/"
# 115登录状态检查
login_check_url = "https://qrcodeapi.115.com/get/status/"
# 115登录完成 alipaymini
login_done_api = "https://passportapi.115.com/app/1.0/alipaymini/1.0/login/qrcode/"
# 是否有aria2c
_has_aria2c: bool = False
client: P115Client = None
session_info: dict = None
def __init__(self):
super().__init__()
try:
subprocess.run(['aria2c', '-h'], capture_output=True)
self._has_aria2c = True
logger.debug('发现 aria2c, 将使用 aria2c 下载文件')
except FileNotFoundError:
logger.debug('未发现 aria2c')
self._has_aria2c = False
self.init_storage()
def __init_cloud(self) -> bool:
def init_storage(self):
"""
初始化Cloud
"""
credential = self.__credential
if not credential:
logger.warn("115未登录请先登录")
return False
if not self.__credential:
return
try:
if not self.cloud:
self.cloud = py115.connect(credential)
self.client = P115Client(self.__credential, app="alipaymini",
check_for_relogin=False, console_qrcode=False)
except Exception as err:
logger.error(f"115连接失败请重新扫码登录:{str(err)}")
logger.error(f"115连接失败,请重新登录:{str(err)}")
self.__clear_credential()
return False
return True
@property
def __credential(self) -> Optional[Credential]:
def __credential(self) -> Optional[str]:
"""
获取已保存的115认证参数
获取已保存的115 Cookie
"""
cookie_dict = self.get_config()
if not cookie_dict:
conf = self.get_config()
if not conf:
return None
return Credential.from_dict(cookie_dict.dict().get("config"))
if not conf.config:
return None
return conf.config.get("cookie")
def __save_credential(self, credential: Credential):
def __save_credential(self, credential: dict):
"""
设置115认证参数
"""
self.set_config(credential.to_dict())
self.set_config(credential)
def __clear_credential(self):
"""
@@ -89,62 +81,75 @@ class U115Pan(StorageBase, metaclass=Singleton):
"""
生成二维码
"""
try:
self.cloud = py115.connect()
self._session = self.cloud.qrcode_login(LoginTarget.Web)
image_bin = self._session.image_data
if not image_bin:
res = RequestUtils(timeout=10).get_res(self.qrcode_url)
if res:
self.session_info = res.json().get("data")
qrcode_content = self.session_info.pop("qrcode")
if not qrcode_content:
logger.warn("115生成二维码失败:未获取到二维码数据")
return None
# 转换为base64图片格式
image_base64 = base64.b64encode(image_bin).decode()
return {}, ""
return {
"codeContent": f"data:image/jpeg;base64,{image_base64}"
"codeContent": qrcode_content
}, ""
except Exception as e:
logger.warn(f"115生成二维码失败{str(e)}")
return {}, f"115生成二维码失败{str(e)}"
elif res is not None:
return {}, f"115生成二维码失败{res.status_code} - {res.reason}"
return {}, "115生成二维码失败:无法连接!"
def check_login(self) -> Optional[Tuple[dict, str]]:
"""
二维码登录确认
"""
if not self._session:
if not self.session_info:
return {}, "请先生成二维码!"
try:
if not self.cloud:
return {}, "请先生成二维码!"
status = self.cloud.qrcode_poll(self._session)
if status == QrcodeStatus.Done:
# 确认完成,保存认证信息
self.__save_credential(self.cloud.export_credentail())
result = {
"status": 1,
"tip": "登录成功!"
}
elif status == QrcodeStatus.Waiting:
result = {
"status": 0,
"tip": "请使用微信或115客户端扫码"
}
elif status == QrcodeStatus.Expired:
result = {
"status": -1,
"tip": "二维码已过期,请重新刷新!"
}
self.cloud = None
elif status == QrcodeStatus.Failed:
result = {
"status": -2,
"tip": "登录失败,请重试!"
}
self.cloud = None
else:
result = {
"status": -3,
"tip": "未知错误,请重试!"
}
self.cloud = None
resp = RequestUtils(timeout=10).get_res(self.login_check_url, params=self.session_info)
if not resp:
return {}, "115登录确认失败,无法连接!"
result = resp.json()
match result["data"].get("status"):
case 0:
result = {
"status": 0,
"tip": "请使用微信或115客户端扫码"
}
case 1:
result = {
"status": 1,
"tip": "已扫码,请在客户端确认登录"
}
case 2:
# 确认完成,保存认证信息
resp = RequestUtils(timeout=10).post_res(self.login_done_api,
data={"account": self.session_info.get("uid")})
if not resp:
return {}, "115登录确认失败,无法连接!"
# 保存认证信息
result = resp.json()
cookie_dict = result["data"]["cookie"]
cookie_str = "; ".join([f"{k}={v}" for k, v in cookie_dict.items()])
cookie_dict.update({"cookie": cookie_str})
self.__save_credential(cookie_dict)
self.init_storage()
result = {
"status": 2,
"tip": "登录成功!"
}
case -1:
result = {
"status": -1,
"tip": "二维码已过期,请重新刷新!"
}
case -2:
result = {
"status": -2,
"tip": "登录失败,请重试!"
}
case _:
result = {
"status": -3,
"tip": "未知错误,请重试!"
}
return result, ""
except Exception as e:
return {}, f"115登录确认失败:{str(e)}"
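The login-confirm branch above flattens the returned cookie dict into a single header string before saving it as the credential. As a standalone sketch of that serialization:

```python
def cookies_to_header(cookie_dict: dict) -> str:
    # Same "; "-joined k=v serialization used when saving the 115 cookie.
    return "; ".join(f"{k}={v}" for k, v in cookie_dict.items())
```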
@@ -153,10 +158,12 @@ class U115Pan(StorageBase, metaclass=Singleton):
"""
获取存储空间
"""
if not self.__init_cloud():
if not self.client:
return None
try:
return self.cloud.storage().space()
usage = self.client.fs.space_summury()
if usage:
return usage['rt_space_info']['all_total']['size'], usage['rt_space_info']['all_remain']['size']
except Exception as e:
logger.error(f"115获取存储空间失败{str(e)}")
return None
@@ -165,31 +172,27 @@ class U115Pan(StorageBase, metaclass=Singleton):
"""
检查存储是否可用
"""
return True if self.list(schemas.FileItem(
fileid="0"
)) else False
return bool(self.list(schemas.FileItem()))
def list(self, fileitem: schemas.FileItem) -> Optional[List[schemas.FileItem]]:
"""
浏览文件
"""
if not self.__init_cloud():
if not self.client:
return []
try:
if fileitem.type == "file":
return [fileitem]
items = self.cloud.storage().list(dir_id=fileitem.fileid)
items: List[P115Path] = self.client.fs.list(fileitem.path)
return [schemas.FileItem(
storage=self.schema.value,
fileid=item.file_id,
parent_fileid=item.parent_id,
type="dir" if item.is_dir else "file",
path=f"{fileitem.path}{item.name}" + ("/" if item.is_dir else ""),
type="dir" if item.is_dir() else "file",
path=item.path + ("/" if item.is_dir() else ""),
name=item.name,
size=item.size,
extension=Path(item.name).suffix[1:],
modify_time=item.modified_time.timestamp() if item.modified_time else 0,
pickcode=item.pickcode
basename=item.stem,
size=item.stat().st_size,
extension=item.suffix[1:] if not item.is_dir() else None,
modify_time=item.stat().st_mtime
) for item in items if item]
except Exception as e:
logger.error(f"115浏览文件失败{str(e)}")
@@ -199,20 +202,18 @@ class U115Pan(StorageBase, metaclass=Singleton):
"""
创建目录
"""
if not self.__init_cloud():
if not self.client:
return None
try:
result = self.cloud.storage().make_dir(fileitem.fileid, name)
result = self.client.fs.makedirs(Path(fileitem.path) / name, exist_ok=True)
if result:
return schemas.FileItem(
storage=self.schema.value,
fileid=result.file_id,
parent_fileid=result.parent_id,
type="dir",
path=f"{fileitem.path}{name}/",
path=f"{result.path}/",
name=name,
modify_time=result.modified_time.timestamp() if result.modified_time else 0,
pickcode=result.pickcode
basename=Path(result.name).stem,
modify_time=result.mtime
)
except Exception as e:
logger.error(f"115创建目录失败{str(e)}")
@@ -222,73 +223,86 @@ class U115Pan(StorageBase, metaclass=Singleton):
"""
根据文件路径获取目录,不存在则创建
"""
def __find_dir(_fileitem: schemas.FileItem, _name: str) -> Optional[schemas.FileItem]:
"""
查找下级目录中匹配名称的目录
"""
for sub_file in self.list(_fileitem):
if sub_file.type != "dir":
continue
if sub_file.name == _name:
return sub_file
if not self.client:
return None
# 逐级查找和创建目录
fileitem = schemas.FileItem(fileid="0")
for part in path.parts:
if part == "/":
continue
dir_file = __find_dir(fileitem, part)
if dir_file:
fileitem = dir_file
else:
dir_file = self.create_folder(fileitem, part)
if not dir_file:
logger.warn(f"115创建目录 {fileitem.path}{part} 失败!")
return None
fileitem = dir_file
return fileitem if fileitem.fileid != "0" else None
folder = self.get_item(path)
if folder:
return folder
try:
result = self.client.fs.makedirs(path, exist_ok=True)
if result:
return schemas.FileItem(
storage=self.schema.value,
type="dir",
path=result.path + "/",
name=result.name,
basename=Path(result.name).stem,
modify_time=result.mtime
)
except Exception as e:
logger.error(f"115获取目录失败{str(e)}")
return None
def get_item(self, path: Path) -> Optional[schemas.FileItem]:
"""
获取文件或目录,不存在返回None
"""
def __find_item(_fileitem: schemas.FileItem, _name: str) -> Optional[schemas.FileItem]:
"""
查找下级目录中匹配名称的目录或文件
"""
for sub_file in self.list(_fileitem):
if sub_file.name == _name:
return sub_file
if not self.client:
return None
# 逐级查找
fileitem = schemas.FileItem(fileid="0")
for part in path.parts:
if part == "/":
continue
item = __find_item(fileitem, part)
if not item:
try:
try:
item = self.client.fs.attr(path)
except FileNotFoundError:
return None
fileitem = item
return fileitem
if item:
return schemas.FileItem(
storage=self.schema.value,
type="dir" if item.is_directory else "file",
path=item.path + ("/" if item.is_directory else ""),
name=item.name,
size=item.size,
extension=Path(item.name).suffix[1:] if not item.is_directory else None,
modify_time=item.mtime,
thumbnail=item.get("thumb")
)
except Exception as e:
logger.info(f"115获取文件失败{str(e)}")
return None
def detail(self, fileitem: schemas.FileItem) -> Optional[schemas.FileItem]:
"""
获取文件详情
"""
pass
if not self.client:
return None
try:
try:
item = self.client.fs.attr(fileitem.path)
except FileNotFoundError:
return None
if item:
return schemas.FileItem(
storage=self.schema.value,
type="dir" if item.is_directory else "file",
path=item.path + ("/" if item.is_directory else ""),
name=item.name,
size=item.size,
extension=Path(item.name).suffix[1:] if not item.is_directory else None,
modify_time=item.mtime,
thumbnail=item.get("thumb")
)
except Exception as e:
logger.error(f"115获取文件详情失败{str(e)}")
return None
def delete(self, fileitem: schemas.FileItem) -> bool:
"""
删除文件
"""
if not self.__init_cloud():
if not self.client:
return False
try:
self.cloud.storage().delete(fileitem.fileid)
self.client.fs.remove(fileitem.path)
return True
except Exception as e:
logger.error(f"115删除文件失败{str(e)}")
@@ -298,10 +312,10 @@ class U115Pan(StorageBase, metaclass=Singleton):
"""
重命名文件
"""
if not self.__init_cloud():
if not self.client:
return False
try:
self.cloud.storage().rename(fileitem.fileid, name)
self.client.fs.rename(fileitem.path, Path(fileitem.path).with_name(name))
return True
except Exception as e:
logger.error(f"115重命名文件失败{str(e)}")
@@ -311,89 +325,77 @@ class U115Pan(StorageBase, metaclass=Singleton):
"""
获取下载链接
"""
if not self.__init_cloud():
if not self.client:
return None
local_file = (path or settings.TEMP_PATH) / fileitem.name
try:
ticket = self.cloud.storage().request_download(fileitem.pickcode)
if ticket:
path = (path or settings.TEMP_PATH) / fileitem.name
res = RequestUtils(headers=ticket.headers).get_res(ticket.url)
if res:
with open(path, "wb") as f:
f.write(res.content)
return path
else:
logger.warn(f"{fileitem.path} 未获取到下载链接")
task = self.client.fs.download(fileitem.path, file=local_file)
if task:
return local_file
except Exception as e:
logger.error(f"115下载失败{str(e)}")
logger.error(f"115下载文件失败:{str(e)}")
return None
def upload(self, fileitem: schemas.FileItem, path: Path) -> Optional[schemas.FileItem]:
def upload(self, fileitem: schemas.FileItem, path: Path, new_name: str = None) -> Optional[schemas.FileItem]:
"""
上传文件
:param fileitem: 上传目录项
:param path: 本地文件路径
:param new_name: 上传后文件名
"""
if not self.__init_cloud():
if not self.client:
return None
try:
ticket = self.cloud.storage().request_upload(dir_id=fileitem.fileid, file_path=str(path))
if ticket is None:
logger.warn(f"115请求上传出错")
return None
elif ticket.is_done:
file_path = Path(fileitem.path) / path.name
logger.warn(f"115上传{file_path} 文件已存在")
return self.get_item(file_path)
else:
auth = oss2.StsAuth(**ticket.oss_token)
bucket = oss2.Bucket(
auth=auth,
endpoint=ticket.oss_endpoint,
bucket_name=ticket.bucket_name,
)
por = bucket.put_object_from_file(
key=ticket.object_key,
filename=str(path),
headers=ticket.headers,
)
result = por.resp.response.json()
new_path = Path(fileitem.path) / (new_name or path.name)
with open(path, "rb") as f:
result = self.client.fs.upload(f, new_path)
if result:
result_data = result.get('data')
logger.info(f"115上传文件成功{result_data.get('file_name')}")
return schemas.FileItem(
storage=self.schema.value,
fileid=result_data.get('file_id'),
parent_fileid=fileitem.fileid,
type="file",
name=result_data.get('file_name'),
basename=Path(result_data.get('file_name')).stem,
path=f"{fileitem.path}{result_data.get('file_name')}",
size=result_data.get('file_size'),
extension=Path(result_data.get('file_name')).suffix[1:],
pickcode=result_data.get('pickcode')
path=str(new_path),
name=result.name,
basename=Path(result.name).stem,
size=result.size,
extension=Path(result.name).suffix[1:],
modify_time=result.mtime
)
else:
logger.warn(f"115上传文件失败{por.resp.response.text}")
return None
except Exception as e:
logger.error(f"115上传文件失败{str(e)}")
return None
def move(self, fileitem: schemas.FileItem, target: schemas.FileItem) -> bool:
def copy(self, fileitem: schemas.FileItem, path: Path, new_name: str) -> bool:
"""
移动文件
复制文件
:param fileitem: 文件项
:param path: 目标目录
:param new_name: 新文件名
"""
if not self.__init_cloud():
if not self.client:
return False
try:
self.cloud.storage().move(fileitem.fileid, target.fileid)
self.client.fs.copy(fileitem.path, path / new_name)
return True
except Exception as e:
logger.error(f"115复制文件失败{str(e)}")
return False
def move(self, fileitem: schemas.FileItem, path: Path, new_name: str) -> bool:
"""
移动文件
:param fileitem: 文件项
:param path: 目标目录
:param new_name: 新文件名
"""
if not self.client:
return False
try:
self.client.fs.move(fileitem.path, path / new_name)
return True
except Exception as e:
logger.error(f"115移动文件失败{str(e)}")
return False
def copy(self, fileitem: schemas.FileItem, target_file: Path) -> bool:
pass
def link(self, fileitem: schemas.FileItem, target_file: Path) -> bool:
pass
@@ -406,9 +408,9 @@ class U115Pan(StorageBase, metaclass=Singleton):
"""
info = self.storage()
if info:
total, used = info
total, free = info
return schemas.StorageUsage(
total=total,
available=total - used
available=free
)
return schemas.StorageUsage()


@@ -7,7 +7,7 @@ from app.helper.rule import RuleHelper
from app.log import logger
from app.modules import _ModuleBase
from app.modules.filter.RuleParser import RuleParser
from app.schemas.types import ModuleType
from app.schemas.types import ModuleType, OtherModulesType
from app.utils.string import StringUtils
@@ -167,6 +167,13 @@ class FilterModule(_ModuleBase):
"""
return ModuleType.Other
@staticmethod
def get_subtype() -> OtherModulesType:
"""
获取模块子类型
"""
return OtherModulesType.Filter
@staticmethod
def get_priority() -> int:
"""
@@ -347,23 +354,22 @@ class FilterModule(_ModuleBase):
content = " ".join(match_content)
# 包含规则项
includes = self.rule_set[rule_name].get("include") or []
if isinstance(includes, str):
includes = includes.split("|")
if not isinstance(includes, list):
includes = [includes]
# 排除规则项
excludes = self.rule_set[rule_name].get("exclude") or []
if isinstance(excludes, str):
excludes = excludes.split("|")
if not isinstance(excludes, list):
excludes = [excludes]
# 大小范围规则项
size_range = self.rule_set[rule_name].get("size_range")
# 做种人数规则项
seeders = self.rule_set[rule_name].get("seeders")
# FREE规则
downloadvolumefactor = self.rule_set[rule_name].get("downloadvolumefactor")
for include in includes:
if not re.search(r"%s" % include, content, re.IGNORECASE):
# 未发现包含项
logger.debug(f"种子 {torrent.site_name} - {torrent.title} 不包含 {include}")
return False
if includes and not any(re.search(r"%s" % include, content, re.IGNORECASE) for include in includes):
# 未发现任何包含项
logger.debug(f"种子 {torrent.site_name} - {torrent.title} 不包含任何项 {includes}")
return False
for exclude in excludes:
if re.search(r"%s" % exclude, content, re.IGNORECASE):
# 发现排除项
@@ -433,26 +439,32 @@ class FilterModule(_ModuleBase):
@staticmethod
def __match_size(torrent: TorrentInfo, size_range: str) -> bool:
"""
判断种子是否匹配大小范围MB
判断种子是否匹配大小范围MB,剧集拆分为每集大小
"""
if not size_range:
return True
# 集数
meta = MetaInfo(title=torrent.title, subtitle=torrent.description)
episode_count = meta.total_episode or 1
# 每集大小
torrent_size = torrent.size / episode_count
# 大小范围
size_range = size_range.strip()
if size_range.find("-") != -1:
# 区间
size_min, size_max = size_range.split("-")
size_min = float(size_min.strip()) * 1024 * 1024
size_max = float(size_max.strip()) * 1024 * 1024
if size_min <= torrent.size <= size_max:
if size_min <= torrent_size <= size_max:
return True
elif size_range.startswith(">"):
# 大于
size_min = float(size_range[1:].strip()) * 1024 * 1024
if torrent.size >= size_min:
if torrent_size >= size_min:
return True
elif size_range.startswith("<"):
# 小于
size_max = float(size_range[1:].strip()) * 1024 * 1024
if torrent.size <= size_max:
if torrent_size <= size_max:
return True
return False
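The per-episode size check above can be exercised in isolation; a sketch with sizes in MB, mirroring the `__match_size` logic (the function name is assumed):

```python
MB = 1024 * 1024

def match_size_range(size_bytes: int, episodes: int, size_range: str) -> bool:
    # Split a season pack's total size across its episodes, then test the
    # per-episode size against a "min-max", ">n" or "<n" rule in MB.
    per_episode = size_bytes / max(episodes, 1)
    size_range = size_range.strip()
    if "-" in size_range:
        low, high = (float(p.strip()) * MB for p in size_range.split("-"))
        return low <= per_episode <= high
    if size_range.startswith(">"):
        return per_episode >= float(size_range[1:].strip()) * MB
    if size_range.startswith("<"):
        return per_episode <= float(size_range[1:].strip()) * MB
    return False
```

A 2048 MB pack with 2 episodes passes a "500-1500" rule (1024 MB per episode), while the same pack as a single file does not.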


@@ -18,7 +18,7 @@ from app.modules.indexer.spider.tnode import TNodeSpider
from app.modules.indexer.spider.torrentleech import TorrentLeech
from app.modules.indexer.spider.yema import YemaSpider
from app.schemas import SiteUserData
from app.schemas.types import MediaType, ModuleType
from app.schemas.types import MediaType, ModuleType, OtherModulesType
from app.utils.string import StringUtils
@@ -47,6 +47,13 @@ class IndexerModule(_ModuleBase):
"""
return ModuleType.Indexer
@staticmethod
def get_subtype() -> OtherModulesType:
"""
获取模块子类型
"""
return OtherModulesType.Indexer
@staticmethod
def get_priority() -> int:
"""
@@ -191,6 +198,7 @@ class IndexerModule(_ModuleBase):
site_ua=site.get("ua"),
site_proxy=site.get("proxy"),
site_order=site.get("pri"),
site_downloader=site.get("downloader"),
**result) for result in result_array]
# 去重
return __remove_duplicate(torrents)
@@ -199,7 +207,7 @@ class IndexerModule(_ModuleBase):
def __spider_search(indexer: CommentedMap,
search_word: str = None,
mtype: MediaType = None,
page: int = 0) -> (bool, List[dict]):
page: int = 0) -> Tuple[bool, List[dict]]:
"""
根据关键字搜索单个站点
:param: indexer: 站点配置


@@ -94,6 +94,7 @@ class SiteParserBase(metaclass=ABCMeta):
# 未读消息
self.message_unread = 0
self.message_unread_contents = []
self.message_read_force = False
# 全局附加请求头
self._addition_headers = None
@@ -182,7 +183,8 @@ class SiteParserBase(metaclass=ABCMeta):
)
)
# 解析用户未读消息
self._pase_unread_msgs()
if settings.SITE_MESSAGE:
self._pase_unread_msgs()
# 解析用户上传、下载、分享率等信息
if self._user_traffic_page:
self._parse_user_traffic_info(
@@ -201,7 +203,7 @@ class SiteParserBase(metaclass=ABCMeta):
:return:
"""
unread_msg_links = []
if self.message_unread > 0:
if self.message_unread > 0 or self.message_read_force:
links = {self._user_mail_unread_page, self._sys_mail_unread_page}
for link in links:
if not link:
@@ -225,7 +227,7 @@ class SiteParserBase(metaclass=ABCMeta):
)
unread_msg_links.extend(msg_links)
# 重新更新未读消息数,99999表示有消息但数量未知
if self.message_unread == 99999:
if unread_msg_links and not self.message_unread:
self.message_unread = len(unread_msg_links)
# 解析未读消息内容
for msg_link in unread_msg_links:
@@ -342,11 +344,9 @@ class SiteParserBase(metaclass=ABCMeta):
logger.warn(
f"{self._site_name} 检测到Cloudflare请更新Cookie和UA")
return ""
if re.search(r"charset=\"?utf-8\"?", res.text, re.IGNORECASE):
res.encoding = "utf-8"
else:
res.encoding = res.apparent_encoding
return res.text
return RequestUtils.get_decoded_html_content(res,
settings.ENCODING_DETECTION_PERFORMANCE_MODE,
settings.ENCODING_DETECTION_MIN_CONFIDENCE)
return ""
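The replaced branch used to sniff the charset inline; the diff centralizes that in `RequestUtils.get_decoded_html_content`. A simplified, stdlib-only sketch of the old meta-charset fallback (the real helper additionally runs confidence-based detection, which is not reproduced here):

```python
import re

def decode_html(content: bytes, fallback: str = "utf-8") -> str:
    """Prefer utf-8 when the page declares it in a meta charset, else fall
    back to a caller-supplied encoding (stand-in for apparent_encoding)."""
    # Only the document head is needed to find the charset declaration
    head = content[:2048].decode("ascii", errors="ignore")
    if re.search(r'charset="?utf-8"?', head, re.IGNORECASE):
        encoding = "utf-8"
    else:
        encoding = fallback
    return content.decode(encoding, errors="replace")
```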


@@ -91,9 +91,7 @@ class MTorrentSiteUserInfo(SiteParserBase):
self.download = int(user_info.get("memberCount", {}).get("downloaded") or '0')
self.ratio = user_info.get("memberCount", {}).get("shareRate") or 0
self.bonus = user_info.get("memberCount", {}).get("bonus") or 0
# 需要解析消息,但不确定消息条数
self.message_unread = 99999
self.message_read_force = True
self._torrent_seeding_params = {
"pageNumber": 1,
"pageSize": 200,


@@ -1,21 +1,60 @@
# -*- coding: utf-8 -*-
import re
import json
from typing import Optional
from lxml import etree
from urllib.parse import urljoin
from app.log import logger
from app.modules.indexer.parser import SiteSchema
from app.modules.indexer.parser.nexus_php import NexusPhpSiteUserInfo
from app.modules.indexer.parser import SiteParserBase
from app.utils.string import StringUtils
class NexusRabbitSiteUserInfo(NexusPhpSiteUserInfo):
class NexusRabbitSiteUserInfo(SiteParserBase):
schema = SiteSchema.NexusRabbit
def _parse_site_page(self, html_text: str):
super()._parse_site_page(html_text)
self._torrent_seeding_page = f"getusertorrentlistajax.php?page=1&limit=5000000&type=seeding&uid={self.userid}"
self._torrent_seeding_headers = {"Accept": "application/json, text/javascript, */*; q=0.01"}
html_text = self._prepare_html_text(html_text)
def _parse_user_torrent_seeding_info(self, html_text: str, multi_page: bool = False) -> Optional[str]:
user_detail = re.search(r"user.php\?id=(\d+)", html_text)
if not (user_detail and user_detail.group().strip()):
return
self.userid = user_detail.group(1)
self._user_detail_page = f"user.php?id={self.userid}"
self._user_traffic_page = None
self._torrent_seeding_page = "api/general"
self._torrent_seeding_params = {
"page": 1,
"limit": 5000000,
"action": "userTorrentsList",
"data": {"type": "seeding", "id": int(self.userid)},
}
self._torrent_seeding_headers = {
"Content-Type": "application/json",
"Accept": "application/json, text/plain, */*",
"X-Requested-With": "XMLHttpRequest", # 必须要加上这一条,不然返回的是空数据
}
self._user_mail_unread_page = None
self._sys_mail_unread_page = "api/general"
self._mail_unread_params = {
"page": 1,
"limit": 5000000,
"action": "getMessageIn",
}
self._mail_unread_headers = {
"Content-Type": "application/json",
"Accept": "application/json, text/plain, */*",
"X-Requested-With": "XMLHttpRequest",
}
def _parse_user_torrent_seeding_info(
self, html_text: str, multi_page: bool = False
) -> Optional[str]:
"""
做种相关信息
:param html_text:
@@ -24,22 +63,112 @@ class NexusRabbitSiteUserInfo(NexusPhpSiteUserInfo):
"""
try:
torrents = json.loads(html_text).get('data')
torrents = json.loads(html_text).get("data", [])
except Exception as e:
logger.error(f"解析做种信息失败: {str(e)}")
return
page_seeding_size = 0
page_seeding_info = []
seeding_size = 0
seeding_info = []
page_seeding = len(torrents)
for torrent in torrents:
seeders = int(torrent.get('seeders', 0))
size = int(torrent.get('size', 0))
page_seeding_size += int(torrent.get('size', 0))
seeders = int(torrent.get("seeders", 0))
size = StringUtils.num_filesize(torrent.get("size"))
seeding_size += size
seeding_info.append([seeders, size])
page_seeding_info.append([seeders, size])
self.seeding = len(torrents)
self.seeding_size = seeding_size
self.seeding_info = seeding_info
self.seeding += page_seeding
self.seeding_size += page_seeding_size
self.seeding_info.extend(page_seeding_info)
def _parse_message_unread_links(
self, html_text: str, msg_links: list
) -> str | None:
unread_ids = []
try:
messages = json.loads(html_text).get("data", [])
except Exception as e:
logger.error(f"解析未读消息失败: {e}")
return
for msg in messages:
msg_id, msg_unread = msg.get("id"), msg.get("unread")
if not (msg_id and msg_unread) or msg_unread == "no":
continue
unread_ids.append(msg_id)
head, date, content = msg.get("subject"), msg.get("added"), msg.get("msg")
if head and date and content:
self.message_unread_contents.append((head, date, content))
self.message_unread = len(unread_ids)
if unread_ids:
self._get_page_content(
url=urljoin(self._base_url, "api/general?loading=true"),
params={"action": "readMessage", "data": {"ids": unread_ids}},
headers={
"Content-Type": "application/json",
"Accept": "application/json, text/plain, */*",
"X-Requested-With": "XMLHttpRequest",
},
)
return None
def _parse_user_base_info(self, html_text: str):
"""只有奶糖余额才需要在 base 中获取,其它均可以在详情页拿到"""
html = etree.HTML(html_text)
if not StringUtils.is_valid_html_element(html):
return
bonus = html.xpath(
'//div[contains(text(), "奶糖余额")]/following-sibling::div[1]/text()'
)
if bonus:
self.bonus = StringUtils.str_float(bonus[0].strip())
def _parse_user_detail_info(self, html_text: str):
html = etree.HTML(html_text)
if not StringUtils.is_valid_html_element(html):
return
# 缩小一下查找范围,所有的信息都在这个 div 里
user_info = html.xpath('//div[contains(@class, "layui-hares-user-info-right")]')
if not user_info:
return
user_info = user_info[0]
# 用户名
if username := user_info.xpath(
'.//span[contains(text(), "用户名")]/a/span/text()'
):
self.username = username[0].strip()
# 等级
if user_level := user_info.xpath('.//span[contains(text(), "等级")]/b/text()'):
self.user_level = user_level[0].strip()
# 加入日期
if join_date := user_info.xpath('.//span[contains(text(), "注册日期")]/text()'):
join_date = join_date[0].strip().split("\r")[0].removeprefix("注册日期:")
self.join_at = StringUtils.unify_datetime_str(join_date)
# 上传量
if upload := user_info.xpath('.//span[contains(text(), "上传量")]/text()'):
self.upload = StringUtils.num_filesize(
upload[0].strip().removeprefix("上传量:")
)
# 下载量
if download := user_info.xpath('.//span[contains(text(), "下载量")]/text()'):
self.download = StringUtils.num_filesize(
download[0].strip().removeprefix("下载量:")
)
# 分享率
if ratio := user_info.xpath('.//span[contains(text(), "分享率")]/em/text()'):
self.ratio = StringUtils.str_float(ratio[0].strip())
def _parse_message_content(self, html_text):
"""
解析短消息内容,已经在 _parse_message_unread_links 内实现,重载防止 abstractmethod 报错
:param html_text:
:return: head: message, date: time, content: message content
"""
pass
def _parse_user_traffic_info(self, html_text: str):
"""
解析用户的上传,下载,分享率等信息,已经在 _parse_user_detail_info 内实现,重载防止 abstractmethod 报错
:param html_text:
:return:
"""
pass
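The seeding parser above switches from `int(torrent.get('size'))` to `StringUtils.num_filesize`, since the API now returns human-readable sizes. A hypothetical stand-in for that helper, shown only to illustrate the parsing it implies:

```python
import re

def num_filesize(text) -> int:
    """Parse a human-readable size like '1.5 MB' into bytes.
    Illustrative stand-in, not the project's StringUtils implementation."""
    if isinstance(text, (int, float)):
        return int(text)
    m = re.match(r"([\d.]+)\s*([KMGTP]?)I?B?", str(text).strip(), re.IGNORECASE)
    if not m:
        return 0
    units = {"": 1, "K": 1024, "M": 1024**2, "G": 1024**3,
             "T": 1024**4, "P": 1024**5}
    return int(float(m.group(1)) * units[m.group(2).upper()])
```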


@@ -36,7 +36,10 @@ class TNodeSiteUserInfo(SiteParserBase):
pass
def _parse_user_detail_info(self, html_text: str):
detail = json.loads(html_text)
try:
detail = json.loads(html_text)
except json.JSONDecodeError:
return
if detail.get("status") != 200:
return
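The try/except added above prevents a malformed payload from raising out of the parser. The guarded-parse pattern in isolation (function name is illustrative):

```python
import json

def parse_detail(html_text: str):
    """Return the parsed detail dict, or None for malformed JSON
    or a non-200 status payload, matching the diff's guard."""
    try:
        detail = json.loads(html_text)
    except json.JSONDecodeError:
        return None
    if detail.get("status") != 200:
        return None
    return detail
```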


@@ -5,7 +5,6 @@ import traceback
from typing import List
from urllib.parse import quote, urlencode, urlparse, parse_qs
import chardet
from jinja2 import Template
from pyquery import PyQuery
from ruamel.yaml import CommentedMap
@@ -250,27 +249,9 @@ class TorrentSpider:
referer=self.referer,
proxies=self.proxies
).get_res(searchurl, allow_redirects=True)
if ret is not None:
# 使用chardet检测字符编码
raw_data = ret.content
if raw_data:
try:
result = chardet.detect(raw_data)
encoding = result['encoding']
# 解码为字符串
page_source = raw_data.decode(encoding)
except Exception as e:
logger.debug(f"chardet解码失败{str(e)}")
# 探测utf-8解码
if re.search(r"charset=\"?utf-8\"?", ret.text, re.IGNORECASE):
ret.encoding = "utf-8"
else:
ret.encoding = ret.apparent_encoding
page_source = ret.text
else:
page_source = ret.text
else:
page_source = ""
page_source = RequestUtils.get_decoded_html_content(ret,
settings.ENCODING_DETECTION_PERFORMANCE_MODE,
settings.ENCODING_DETECTION_MIN_CONFIDENCE)
# 解析
return self.parse(page_source)


@@ -21,14 +21,30 @@ class YemaSpider:
_cookie = None
_ua = None
_size = 40
_searchurl = "%sapi/torrent/fetchCategoryOpenTorrentList"
_searchurl = "%sapi/torrent/fetchOpenTorrentList"
_downloadurl = "%sapi/torrent/download?id=%s"
_pageurl = "%s#/torrent/detail/%s/"
_timeout = 15
# 分类
_movie_category = 4
_tv_category = 5
_movie_category = [4]
_tv_category = [5, 13, 14, 17, 15, 6, 16]
# 标签 https://wiki.yemapt.org/developer/constants
_labels = {
"1": "禁转",
"2": "首发",
"3": "官方",
"4": "自制",
"5": "国语",
"6": "中字",
"7": "粤语",
"8": "英字",
"9": "HDR10",
"10": "杜比视界",
"11": "分集",
"12": "完结",
}
def __init__(self, indexer: CommentedMap):
self.systemconfig = SystemConfigOper()
@@ -47,14 +63,7 @@ class YemaSpider:
"""
搜索
"""
if not mtype:
categoryId = self._movie_category
elif mtype == MediaType.TV:
categoryId = self._tv_category
else:
categoryId = self._movie_category
params = {
"categoryId": categoryId,
"pageParam": {
"current": page + 1,
"pageSize": self._size,
@@ -62,6 +71,12 @@ class YemaSpider:
},
"sorter": {}
}
# 新接口可不传 categoryId 参数
# if mtype == MediaType.MOVIE:
# params.update({
# "categoryId": self._movie_category,
# })
# pass
if keyword:
params.update({
"keyword": keyword,
@@ -82,17 +97,27 @@ class YemaSpider:
results = res.json().get('data', []) or []
for result in results:
category_value = result.get('categoryId')
if category_value == self._tv_category:
if category_value in self._tv_category:
category = MediaType.TV.value
elif category_value == self._movie_category:
elif category_value in self._movie_category:
category = MediaType.MOVIE.value
else:
category = MediaType.UNKNOWN.value
pass
torrentLabelIds = result.get('tagList', []) or []
torrentLabels = []
for labelId in torrentLabelIds:
if self._labels.get(labelId) is not None:
torrentLabels.append(self._labels.get(labelId))
pass
pass
torrent = {
'title': result.get('showName'),
'description': result.get('shortDesc'),
'enclosure': self.__get_download_url(result.get('id')),
'pubdate': StringUtils.unify_datetime_str(result.get('gmtCreate')),
# 使用上架时间,而不是用户发布时间,上架时间即其他用户可见时间
'pubdate': StringUtils.unify_datetime_str(result.get('listingTime')),
'size': result.get('fileSize'),
'seeders': result.get('seedNum'),
'peers': result.get('leechNum'),
@@ -101,7 +126,7 @@ class YemaSpider:
'uploadvolumefactor': self.__get_uploadvolumefactor(result.get('uploadPromotion')),
'freedate': StringUtils.unify_datetime_str(result.get('downloadPromotionEndTime')),
'page_url': self._pageurl % (self._domain, result.get('id')),
'labels': [],
'labels': torrentLabels,
'category': category
}
torrents.append(torrent)
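The tag-mapping loop above translates Yema `tagList` ids through the `_labels` table, skipping unknown ids. A compact sketch with a subset of the table (the full id list is on the project wiki):

```python
LABELS = {"1": "禁转", "2": "首发", "6": "中字", "12": "完结"}  # subset only

def map_labels(tag_ids):
    """Translate tagList ids into labels, silently skipping unknown ids."""
    return [LABELS[tag_id] for tag_id in tag_ids if tag_id in LABELS]
```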


@@ -2,11 +2,12 @@ from typing import Any, Generator, List, Optional, Tuple, Union
from app import schemas
from app.core.context import MediaInfo
from app.core.event import eventmanager
from app.log import logger
from app.modules import _MediaServerBase, _ModuleBase
from app.modules.jellyfin.jellyfin import Jellyfin
from app.schemas.event import AuthCredentials
from app.schemas.types import MediaType, ModuleType
from app.schemas.event import AuthCredentials, AuthInterceptCredentials
from app.schemas.types import MediaType, ModuleType, ChainEventType, MediaServerType
class JellyfinModule(_ModuleBase, _MediaServerBase[Jellyfin]):
@@ -29,6 +30,13 @@ class JellyfinModule(_ModuleBase, _MediaServerBase[Jellyfin]):
"""
return ModuleType.MediaServer
@staticmethod
def get_subtype() -> MediaServerType:
"""
获取模块子类型
"""
return MediaServerType.Jellyfin
@staticmethod
def get_priority() -> int:
"""
@@ -65,16 +73,36 @@ class JellyfinModule(_ModuleBase, _MediaServerBase[Jellyfin]):
return False, f"无法连接Jellyfin服务器{name}"
return True, ""
def user_authenticate(self, credentials: AuthCredentials) -> Optional[AuthCredentials]:
def user_authenticate(self, credentials: AuthCredentials, service_name: Optional[str] = None) \
-> Optional[AuthCredentials]:
"""
使用Jellyfin用户辅助完成用户认证
:param credentials: 认证数据
:param service_name: 指定要认证的媒体服务器名称,若为 None 则认证所有服务
:return: 认证数据
"""
# Jellyfin认证
if not credentials or credentials.grant_type != "password":
return None
for name, server in self.get_instances().items():
# 确定要认证的服务器列表
if service_name:
# 如果指定了服务名,获取该服务实例
servers = [(service_name, server)] if (server := self.get_instance(service_name)) else []
else:
# 如果没有指定服务名,遍历所有服务
servers = self.get_instances().items()
# 遍历要认证的服务器
for name, server in servers:
# 触发认证拦截事件
intercept_event = eventmanager.send_event(
etype=ChainEventType.AuthIntercept,
data=AuthInterceptCredentials(username=credentials.username, channel=self.get_name(),
service=name, status="triggered")
)
if intercept_event and intercept_event.event_data:
intercept_data: AuthInterceptCredentials = intercept_event.event_data
if intercept_data.cancel:
continue
token = server.authenticate(credentials.username, credentials.password)
if token:
credentials.channel = self.get_name()
@@ -171,10 +199,10 @@ class JellyfinModule(_ModuleBase, _MediaServerBase[Jellyfin]):
媒体数量统计
"""
if server:
server: Jellyfin = self.get_instance(server)
if not server:
server_obj: Jellyfin = self.get_instance(server)
if not server_obj:
return None
servers = [server]
servers = [server_obj]
else:
servers = self.get_instances().values()
media_statistics = []
@@ -192,9 +220,9 @@ class JellyfinModule(_ModuleBase, _MediaServerBase[Jellyfin]):
"""
媒体库列表
"""
server: Jellyfin = self.get_instance(server)
if server:
return server.get_librarys(username=username, hidden=hidden)
server_obj: Jellyfin = self.get_instance(server)
if server_obj:
return server_obj.get_librarys(username=username, hidden=hidden)
return None
def mediaserver_items(self, server: str, library_id: Union[str, int], start_index: int = 0,
@@ -209,18 +237,18 @@ class JellyfinModule(_ModuleBase, _MediaServerBase[Jellyfin]):
:return: 返回一个生成器对象,用于逐步获取媒体服务器中的项目
"""
server: Jellyfin = self.get_instance(server)
if server:
return server.get_items(library_id, start_index, limit)
server_obj: Jellyfin = self.get_instance(server)
if server_obj:
return server_obj.get_items(library_id, start_index, limit)
return None
def mediaserver_iteminfo(self, server: str, item_id: str) -> Optional[schemas.MediaServerItem]:
"""
媒体库项目详情
"""
server: Jellyfin = self.get_instance(server)
if server:
return server.get_iteminfo(item_id)
server_obj: Jellyfin = self.get_instance(server)
if server_obj:
return server_obj.get_iteminfo(item_id)
return None
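The recurring `server` → `server_obj` rename in this module fixes a name-shadowing issue: the parameter is a `str`, but the old code rebound it to the server instance, confusing readers and type checkers. A minimal illustration (`FakeServer` and the dict lookup are stand-ins for `get_instance`):

```python
class FakeServer:
    def get_iteminfo(self, item_id):
        return {"id": item_id}

def mediaserver_iteminfo(server: str, instances: dict):
    # A separate name keeps the str parameter and the instance distinct,
    # which is what the diff changes throughout this module.
    server_obj = instances.get(server)
    if server_obj:
        return server_obj.get_iteminfo("42")
    return None
```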
def mediaserver_tv_episodes(self, server: str,
@@ -228,10 +256,10 @@ class JellyfinModule(_ModuleBase, _MediaServerBase[Jellyfin]):
"""
获取剧集信息
"""
server: Jellyfin = self.get_instance(server)
if not server:
server_obj: Jellyfin = self.get_instance(server)
if not server_obj:
return None
_, seasoninfo = server.get_tv_episodes(item_id=item_id)
_, seasoninfo = server_obj.get_tv_episodes(item_id=item_id)
if not seasoninfo:
return []
return [schemas.MediaServerSeasonInfo(
@@ -244,26 +272,57 @@ class JellyfinModule(_ModuleBase, _MediaServerBase[Jellyfin]):
"""
获取媒体服务器正在播放信息
"""
server: Jellyfin = self.get_instance(server)
if not server:
server_obj: Jellyfin = self.get_instance(server)
if not server_obj:
return []
return server.get_resume(num=count, username=username)
return server_obj.get_resume(num=count, username=username)
def mediaserver_play_url(self, server: str, item_id: Union[str, int]) -> Optional[str]:
"""
获取媒体库播放地址
"""
server: Jellyfin = self.get_instance(server)
if not server:
server_obj: Jellyfin = self.get_instance(server)
if not server_obj:
return None
return server.get_play_url(item_id)
return server_obj.get_play_url(item_id)
def mediaserver_latest(self, server: str,
count: int = 20, username: str = None) -> List[schemas.MediaServerPlayItem]:
def mediaserver_latest(self, server: str = None, count: int = 20,
username: str = None) -> List[schemas.MediaServerPlayItem]:
"""
获取媒体服务器最新入库条目
"""
server: Jellyfin = self.get_instance(server)
server_obj: Jellyfin = self.get_instance(server)
if not server_obj:
return []
return server_obj.get_latest(num=count, username=username)
def mediaserver_latest_images(self,
server: str = None,
count: int = 20,
username: str = None,
remote: bool = False,
) -> List[str]:
"""
获取媒体服务器最新入库条目的图片
:param server: 媒体服务器名称
:param count: 获取数量
:param username: 用户名
:param remote: True为外网链接, False为内网链接
:return: 图片链接列表
"""
server_obj: Jellyfin = self.get_instance(server)
if not server:
return []
return server.get_latest(num=count, username=username)
links = []
items: List[schemas.MediaServerPlayItem] = self.mediaserver_latest(server=server, count=count,
username=username)
for item in items:
if item.BackdropImageTags:
image_url = server_obj.get_backdrop_url(item_id=item.id,
image_tag=item.BackdropImageTags[0],
remote=remote)
if image_url:
links.append(image_url)
return links


@@ -143,7 +143,8 @@ class Jellyfin:
return []
libraries = []
for library in self.__get_jellyfin_librarys(username) or []:
if hidden and self._sync_libraries and library.get("Id") not in self._sync_libraries:
if hidden and self._sync_libraries and "all" not in self._sync_libraries \
and library.get("Id") not in self._sync_libraries:
continue
match library.get("CollectionType"):
case "movies":
@@ -467,6 +468,30 @@ class Jellyfin:
return None
return None
def get_item_path_by_id(self, item_id: str) -> Optional[str]:
"""
根据ItemId查询所在的Path
:param item_id: 在Jellyfin中的ID
:return: Path
"""
if not self._host or not self._apikey:
return None
url = f"{self._host}Items/{item_id}/PlaybackInfo"
params = {"api_key": self._apikey}
try:
res = RequestUtils(timeout=10).get_res(url, params)
if res:
media_sources = res.json().get("MediaSources")
if media_sources:
return media_sources[0].get("Path")
else:
logger.error("Items/Id/PlaybackInfo 未获取到返回数据,不设置 Path")
return None
except Exception as e:
logger.error("连接Items/Id/PlaybackInfo出错" + str(e))
return None
return None
def generate_image_link(self, item_id: str, image_type: str, host_type: bool) -> Optional[str]:
"""
根据ItemId和imageType查询本地对应图片
@@ -661,6 +686,8 @@ class Jellyfin:
item_id=eventItem.item_id,
image_type="Backdrop"
)
# jellyfin 的 webhook 不含 item_path需要单独获取
eventItem.item_path = self.get_item_path_by_id(eventItem.item_id)
return eventItem
@@ -822,20 +849,23 @@ class Jellyfin:
return ""
return "%sItems/%s/Images/Primary" % (self._host, item_id)
def __get_backdrop_url(self, item_id: str, image_tag: str) -> str:
def get_backdrop_url(self, item_id: str, image_tag: str, remote: bool = False) -> str:
"""
获取Backdrop图片地址
:param: item_id: 在Jellyfin中的ID
:param: image_tag: 图片的tag
:param: remote 是否远程使用TG微信等客户端调用应为True
:param: inner 是否NT内部调用为True是会使用NT中转
"""
if not self._host or not self._apikey:
return ""
if not image_tag or not item_id:
return ""
return f"{self._host}Items/{item_id}/" \
f"Images/Backdrop?tag={image_tag}&fillWidth=666&api_key={self._apikey}"
if remote:
host_url = self._playhost or self._host
else:
host_url = self._host
return f"{host_url}Items/{item_id}/" \
f"Images/Backdrop?tag={image_tag}&api_key={self._apikey}"
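The reworked `get_backdrop_url` above selects the external play host for remote clients (TG/WeChat) and the internal host otherwise. The host-selection rule in isolation (function and parameter names are illustrative, not the class's API):

```python
from typing import Optional

def backdrop_url(host: str, playhost: Optional[str], item_id: str,
                 image_tag: str, api_key: str, remote: bool = False) -> str:
    """Build a Jellyfin Backdrop image URL, preferring the play host
    for remote callers and falling back to the internal host."""
    base = (playhost or host) if remote else host
    return f"{base}Items/{item_id}/Images/Backdrop?tag={image_tag}&api_key={api_key}"
```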
def get_resume(self, num: int = 12, username: str = None) -> Optional[List[schemas.MediaServerPlayItem]]:
"""
@@ -874,8 +904,8 @@ class Jellyfin:
item_type = MediaType.MOVIE.value if item.get("Type") == "Movie" else MediaType.TV.value
link = self.get_play_url(item.get("Id"))
if item.get("BackdropImageTags"):
image = self.__get_backdrop_url(item_id=item.get("Id"),
image_tag=item.get("BackdropImageTags")[0])
image = self.get_backdrop_url(item_id=item.get("Id"),
image_tag=item.get("BackdropImageTags")[0])
else:
image = self.__get_local_image_by_id(item.get("Id"))
# 小部分剧集无[xxx-S01E01-thumb.jpg]图片
@@ -918,7 +948,7 @@ class Jellyfin:
params = {
"Limit": 100,
"MediaTypes": "Video",
"Fields": "ProductionYear,Path",
"Fields": "ProductionYear,Path,BackdropImageTags",
"api_key": self._apikey,
}
try:
@@ -946,7 +976,8 @@ class Jellyfin:
subtitle=item.get("ProductionYear"),
type=item_type,
image=image,
link=link
link=link,
BackdropImageTags=item.get("BackdropImageTags")
))
return ret_latest
else:


@@ -2,10 +2,12 @@ from typing import Optional, Tuple, Union, Any, List, Generator
from app import schemas
from app.core.context import MediaInfo
from app.core.event import eventmanager
from app.log import logger
from app.modules import _ModuleBase, _MediaServerBase
from app.modules.plex.plex import Plex
from app.schemas.types import MediaType, ModuleType
from app.schemas.event import AuthCredentials, AuthInterceptCredentials
from app.schemas.types import MediaType, ModuleType, ChainEventType, MediaServerType
class PlexModule(_ModuleBase, _MediaServerBase[Plex]):
@@ -28,6 +30,13 @@ class PlexModule(_ModuleBase, _MediaServerBase[Plex]):
"""
return ModuleType.MediaServer
@staticmethod
def get_subtype() -> MediaServerType:
"""
获取模块子类型
"""
return MediaServerType.Plex
@staticmethod
def get_priority() -> int:
"""
@@ -64,6 +73,47 @@ class PlexModule(_ModuleBase, _MediaServerBase[Plex]):
logger.info(f"Plex {name} 服务器连接断开,尝试重连 ...")
server.reconnect()
def user_authenticate(self, credentials: AuthCredentials, service_name: Optional[str] = None) \
-> Optional[AuthCredentials]:
"""
使用Plex用户辅助完成用户认证
:param credentials: 认证数据
:param service_name: 指定要认证的媒体服务器名称,若为 None 则认证所有服务
:return: 认证数据
"""
# Plex认证
if not credentials or credentials.grant_type != "password":
return None
# 确定要认证的服务器列表
if service_name:
# 如果指定了服务名,获取该服务实例
servers = [(service_name, server)] if (server := self.get_instance(service_name)) else []
else:
# 如果没有指定服务名,遍历所有服务
servers = self.get_instances().items()
# 遍历要认证的服务器
for name, server in servers:
# 触发认证拦截事件
intercept_event = eventmanager.send_event(
etype=ChainEventType.AuthIntercept,
data=AuthInterceptCredentials(username=credentials.username, channel=self.get_name(),
service=name, status="triggered")
)
if intercept_event and intercept_event.event_data:
intercept_data: AuthInterceptCredentials = intercept_event.event_data
if intercept_data.cancel:
continue
auth_result = server.authenticate(credentials.username, credentials.password)
if auth_result:
token, username = auth_result
credentials.channel = self.get_name()
credentials.service = name
credentials.token = token
# Plex 传入可能为邮箱,这里调整为用户名返回
credentials.username = username
return credentials
return None
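Both the Jellyfin and Plex modules now wrap per-server authentication in an AuthIntercept event, letting any handler veto a server before credentials are tried. A simplified sketch of that cancellable-event loop (all types here are stand-ins, not the project's event manager):

```python
from typing import Callable, List, Optional

class InterceptEvent:
    def __init__(self, username: str, service: str):
        self.username = username
        self.service = service
        self.cancel = False

def authenticate_all(servers, username, password,
                     handlers: List[Callable]) -> Optional[str]:
    """Try each (name, server) pair in turn; any handler may veto one
    by setting event.cancel, mirroring the AuthIntercept flow above."""
    for name, server in servers:
        event = InterceptEvent(username, name)
        for handler in handlers:
            handler(event)
        if event.cancel:
            continue  # a handler vetoed this server
        token = server.authenticate(username, password)
        if token:
            return token
    return None
```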
def webhook_parser(self, body: Any, form: Any, args: Any) -> Optional[schemas.WebhookEventInfo]:
"""
解析Webhook报文体
@@ -156,10 +206,10 @@ class PlexModule(_ModuleBase, _MediaServerBase[Plex]):
媒体数量统计
"""
if server:
server: Plex = self.get_instance(server)
if not server:
server_obj: Plex = self.get_instance(server)
if not server_obj:
return None
servers = [server]
servers = [server_obj]
else:
servers = self.get_instances().values()
media_statistics = []
@@ -176,9 +226,9 @@ class PlexModule(_ModuleBase, _MediaServerBase[Plex]):
"""
媒体库列表
"""
server: Plex = self.get_instance(server)
if server:
return server.get_librarys(hidden)
server_obj: Plex = self.get_instance(server)
if server_obj:
return server_obj.get_librarys(hidden)
return None
def mediaserver_items(self, server: str, library_id: Union[str, int], start_index: int = 0,
@@ -193,18 +243,18 @@ class PlexModule(_ModuleBase, _MediaServerBase[Plex]):
:return: 返回一个生成器对象,用于逐步获取媒体服务器中的项目
"""
server: Plex = self.get_instance(server)
if server:
return server.get_items(library_id, start_index, limit)
server_obj: Plex = self.get_instance(server)
if server_obj:
return server_obj.get_items(library_id, start_index, limit)
return None
def mediaserver_iteminfo(self, server: str, item_id: str) -> Optional[schemas.MediaServerItem]:
"""
媒体库项目详情
"""
server: Plex = self.get_instance(server)
if server:
return server.get_iteminfo(item_id)
server_obj: Plex = self.get_instance(server)
if server_obj:
return server_obj.get_iteminfo(item_id)
return None
def mediaserver_tv_episodes(self, server: str,
@@ -212,10 +262,10 @@ class PlexModule(_ModuleBase, _MediaServerBase[Plex]):
"""
获取剧集信息
"""
server: Plex = self.get_instance(server)
if not server:
server_obj: Plex = self.get_instance(server)
if not server_obj:
return None
_, seasoninfo = server.get_tv_episodes(item_id=item_id)
_, seasoninfo = server_obj.get_tv_episodes(item_id=item_id)
if not seasoninfo:
return []
return [schemas.MediaServerSeasonInfo(
@@ -227,25 +277,55 @@ class PlexModule(_ModuleBase, _MediaServerBase[Plex]):
"""
获取媒体服务器正在播放信息
"""
server: Plex = self.get_instance(server)
if not server:
server_obj: Plex = self.get_instance(server)
if not server_obj:
return []
return server.get_resume(num=count)
return server_obj.get_resume(num=count)
def mediaserver_latest(self, server: str, count: int = 20, **kwargs) -> List[schemas.MediaServerPlayItem]:
def mediaserver_latest(self, server: str = None, count: int = 20,
**kwargs) -> List[schemas.MediaServerPlayItem]:
"""
获取媒体服务器最新入库条目
"""
server: Plex = self.get_instance(server)
if not server:
server_obj: Plex = self.get_instance(server)
if not server_obj:
return []
return server.get_latest(num=count)
return server_obj.get_latest(num=count)
def mediaserver_latest_images(self,
server: str = None,
count: int = 20,
username: str = None,
**kwargs
) -> List[str]:
"""
获取媒体服务器最新入库条目的图片
:param server: 媒体服务器名称
:param count: 获取数量
:param username: 用户名
:return: 图片链接列表
"""
server_obj: Plex = self.get_instance(server)
if not server_obj:
return []
links = []
items: List[schemas.MediaServerPlayItem] = self.mediaserver_latest(server=server, count=count,
username=username)
for item in items:
link = server_obj.get_remote_image_by_id(item_id=item.id,
image_type="Backdrop",
plex_url=False)
if link:
links.append(link)
return links
def mediaserver_play_url(self, server: str, item_id: Union[str, int]) -> Optional[str]:
"""
获取媒体库播放地址
"""
server: Plex = self.get_instance(server)
if not server:
server_obj: Plex = self.get_instance(server)
if not server_obj:
return None
return server.get_play_url(item_id)
return server_obj.get_play_url(item_id)


@@ -5,6 +5,7 @@ from urllib.parse import quote_plus
from cachetools import TTLCache, cached
from plexapi import media
from plexapi.myplex import MyPlexAccount
from plexapi.server import PlexServer
from requests import Response, Session
@@ -61,6 +62,27 @@ class Plex:
self._plex = None
logger.error(f"Plex服务器连接失败{str(e)}")
def authenticate(self, username: str, password: str) -> Optional[Tuple[str, str]]:
"""
用户认证
:param username: 用户名
:param password: 密码
:return: 认证成功返回 (token, 用户名),否则返回 None
"""
if not username or not password:
return None
try:
account = MyPlexAccount(username=username, password=password, remember=False)
if account:
plex = PlexServer(self._host, account.authToken)
if not plex:
return None
return account.authToken, account.username
except Exception as e:
# 处理认证失败或网络错误等情况
logger.error(f"Authentication failed: {e}")
return None
@cached(cache=TTLCache(maxsize=100, ttl=86400))
def __get_library_images(self, library_key: str, mtype: int) -> Optional[List[str]]:
"""
@@ -112,7 +134,8 @@ class Plex:
return []
libraries = []
for library in self._libraries:
if hidden and self._sync_libraries and str(library.key) not in self._sync_libraries:
if hidden and self._sync_libraries and "all" not in self._sync_libraries \
and str(library.key) not in self._sync_libraries:
continue
match library.type:
case "movie":
@@ -139,26 +162,26 @@ class Plex:
def get_medias_count(self) -> schemas.Statistic:
"""
获得电影、电视剧、动漫媒体数量
:return: MovieCount SeriesCount SongCount
:return: movie_count tv_count episode_count
"""
if not self._plex:
return schemas.Statistic()
sections = self._plex.library.sections()
MovieCount = SeriesCount = EpisodeCount = 0
movie_count = tv_count = episode_count = 0
# 媒体库白名单
allow_library = [lib.id for lib in self.get_librarys(hidden=True)]
for sec in sections:
if str(sec.key) not in allow_library:
if sec.key not in allow_library:
continue
if sec.type == "movie":
MovieCount += sec.totalSize
movie_count += sec.totalSize
if sec.type == "show":
SeriesCount += sec.totalSize
EpisodeCount += sec.totalViewSize(libtype='episode')
tv_count += sec.totalSize
episode_count += sec.totalViewSize(libtype="episode")
return schemas.Statistic(
movie_count=MovieCount,
tv_count=SeriesCount,
episode_count=EpisodeCount
movie_count=movie_count,
tv_count=tv_count,
episode_count=episode_count
)
def get_movies(self,
@@ -270,24 +293,33 @@ class Plex:
season_episodes[episode.seasonNumber].append(episode.index)
return videos.key, season_episodes
def get_remote_image_by_id(self, item_id: str, image_type: str, depth: int = 0) -> Optional[str]:
def get_remote_image_by_id(self,
item_id: str,
image_type: str,
depth: int = 0,
plex_url: bool = True) -> Optional[str]:
"""
根据ItemId从Plex查询图片地址
:param item_id: 在Plex中的ID
:param image_type: 图片的类型Poster或者Backdrop等
:param depth: 当前递归深度默认为0
:return: 图片对应在TMDB中的URL
:param plex_url: 是否返回Plex的URL默认为True仅在配置了外网地址和Token时有效
:return: 图片对应在plex服务器或TMDB中的URL
"""
if not self._plex or depth > 2 or not item_id:
return None
try:
image_url = None
ekey = f"/library/metadata/{item_id}"
ekey = item_id
item = self._plex.fetchItem(ekey=ekey)
if not item:
return None
# 如果配置了外网播放地址以及Token则默认从Plex媒体服务器获取图片否则返回有外网地址的图片资源
if self._playhost and self._token:
# Plex外网播放地址这个框里目前可以填两种地址
# 1. Plex的官方转发地址https://app.plex.tv, 2. 自己处理的端口转发地址
# 如果使用的是1的官方转发地址,那么就不能走这个逻辑,因为官方转发地址无法获取到图片
if (self._playhost and "app.plex.tv" not in self._playhost
and self._token and plex_url):
query = {"X-Plex-Token": self._token}
if image_type == "Poster":
if item.thumb:
@@ -318,8 +350,8 @@ class Plex:
image_url = image.key
break
# 如果最后还是找不到,则递归父级进行查找
if not image_url and hasattr(item, "parentRatingKey"):
return self.get_remote_image_by_id(item_id=item.parentRatingKey,
if not image_url and hasattr(item, "parentKey"):
return self.get_remote_image_by_id(item_id=item.parentKey,
image_type=image_type,
depth=depth + 1)
return image_url
@@ -637,7 +669,7 @@ class Plex:
"S" + str(message.get('Metadata', {}).get('parentIndex')),
"E" + str(message.get('Metadata', {}).get('index')),
message.get('Metadata', {}).get('title'))
eventItem.item_id = message.get('Metadata', {}).get('ratingKey')
eventItem.item_id = message.get('Metadata', {}).get('key')
eventItem.season_id = message.get('Metadata', {}).get('parentIndex')
eventItem.episode_id = message.get('Metadata', {}).get('index')
@@ -652,7 +684,7 @@ class Plex:
eventItem.item_name = "%s %s" % (
message.get('Metadata', {}).get('title'),
"(" + str(message.get('Metadata', {}).get('year')) + ")")
eventItem.item_id = message.get('Metadata', {}).get('ratingKey')
eventItem.item_id = message.get('Metadata', {}).get('key')
if len(message.get('Metadata', {}).get('summary')) > 100:
eventItem.overview = str(message.get('Metadata', {}).get('summary'))[:100] + "..."
else:
@@ -693,7 +725,7 @@ class Plex:
if not self._plex:
return []
# 媒体库白名单
allow_library = ",".join([lib.id for lib in self.get_librarys(hidden=True)])
allow_library = ",".join(map(str, (lib.id for lib in self.get_librarys(hidden=True))))
params = {"contentDirectoryID": allow_library}
items = self._plex.fetchItems("/hubs/continueWatching/items",
container_start=0,
@@ -729,7 +761,7 @@ class Plex:
if not self._plex:
return None
# 请求参数(除黑名单)
allow_library = ",".join([lib.id for lib in self.get_librarys(hidden=True)])
allow_library = ",".join(map(str, (lib.id for lib in self.get_librarys(hidden=True))))
params = {
"contentDirectoryID": allow_library,
"count": num,

View File

@@ -1,5 +1,5 @@
from pathlib import Path
from typing import Set, Tuple, Optional, Union, List
from typing import Set, Tuple, Optional, Union, List, Dict
from qbittorrentapi import TorrentFilesList
from torrentool.torrent import Torrent
@@ -11,7 +11,7 @@ from app.log import logger
from app.modules import _ModuleBase, _DownloaderBase
from app.modules.qbittorrent.qbittorrent import Qbittorrent
from app.schemas import TransferTorrent, DownloadingTorrent
from app.schemas.types import TorrentStatus, ModuleType
from app.schemas.types import TorrentStatus, ModuleType, DownloaderType
from app.utils.string import StringUtils
@@ -35,6 +35,13 @@ class QbittorrentModule(_ModuleBase, _DownloaderBase[Qbittorrent]):
"""
return ModuleType.Downloader
@staticmethod
def get_subtype() -> DownloaderType:
"""
获取模块子类型
"""
return DownloaderType.Qbittorrent
@staticmethod
def get_priority() -> int:
"""
@@ -124,7 +131,8 @@ class QbittorrentModule(_ModuleBase, _DownloaderBase[Qbittorrent]):
is_paused=is_paused,
tag=tags,
cookie=cookie,
category=category
category=category,
ignore_category_check=False
)
if not state:
# 读取种子的名称
@@ -203,66 +211,75 @@ class QbittorrentModule(_ModuleBase, _DownloaderBase[Qbittorrent]):
:return: 下载器中符合状态的种子列表
"""
# 获取下载器
server: Qbittorrent = self.get_instance(downloader)
if not server:
return None
if downloader:
server: Qbittorrent = self.get_instance(downloader)
if not server:
return None
servers = {downloader: server}
else:
servers: Dict[str, Qbittorrent] = self.get_instances()
ret_torrents = []
if hashs:
# 按Hash获取
torrents, _ = server.get_torrents(ids=hashs, tags=settings.TORRENT_TAG)
for torrent in torrents or []:
content_path = torrent.get("content_path")
if content_path:
torrent_path = Path(content_path)
else:
torrent_path = Path(torrent.get('save_path')) / torrent.get('name')
ret_torrents.append(TransferTorrent(
title=torrent.get('name'),
path=torrent_path,
hash=torrent.get('hash'),
size=torrent.get('total_size'),
tags=torrent.get('tags')
))
for name, server in servers.items():
torrents, _ = server.get_torrents(ids=hashs, tags=settings.TORRENT_TAG)
for torrent in torrents or []:
content_path = torrent.get("content_path")
if content_path:
torrent_path = Path(content_path)
else:
torrent_path = Path(torrent.get('save_path')) / torrent.get('name')
ret_torrents.append(TransferTorrent(
downloader=name,
title=torrent.get('name'),
path=torrent_path,
hash=torrent.get('hash'),
size=torrent.get('total_size'),
tags=torrent.get('tags')
))
elif status == TorrentStatus.TRANSFER:
# 获取已完成且未整理的
torrents = server.get_completed_torrents(tags=settings.TORRENT_TAG)
for torrent in torrents or []:
tags = torrent.get("tags") or []
if "已整理" in tags:
continue
# 内容路径
content_path = torrent.get("content_path")
if content_path:
torrent_path = Path(content_path)
else:
torrent_path = torrent.get('save_path') / torrent.get('name')
ret_torrents.append(TransferTorrent(
title=torrent.get('name'),
path=torrent_path,
hash=torrent.get('hash'),
tags=torrent.get('tags')
))
for name, server in servers.items():
torrents = server.get_completed_torrents(tags=settings.TORRENT_TAG)
for torrent in torrents or []:
tags = torrent.get("tags") or []
if "已整理" in tags:
continue
# 内容路径
content_path = torrent.get("content_path")
if content_path:
torrent_path = Path(content_path)
else:
torrent_path = torrent.get('save_path') / torrent.get('name')
ret_torrents.append(TransferTorrent(
downloader=name,
title=torrent.get('name'),
path=torrent_path,
hash=torrent.get('hash'),
tags=torrent.get('tags')
))
elif status == TorrentStatus.DOWNLOADING:
# 获取正在下载的任务
torrents = server.get_downloading_torrents(tags=settings.TORRENT_TAG)
for torrent in torrents or []:
meta = MetaInfo(torrent.get('name'))
ret_torrents.append(DownloadingTorrent(
hash=torrent.get('hash'),
title=torrent.get('name'),
name=meta.name,
year=meta.year,
season_episode=meta.season_episode,
progress=torrent.get('progress') * 100,
size=torrent.get('total_size'),
state="paused" if torrent.get('state') in ("paused", "pausedDL") else "downloading",
dlspeed=StringUtils.str_filesize(torrent.get('dlspeed')),
upspeed=StringUtils.str_filesize(torrent.get('upspeed')),
left_time=StringUtils.str_secends(
(torrent.get('total_size') - torrent.get('completed')) / torrent.get('dlspeed')) if torrent.get(
'dlspeed') > 0 else ''
))
for name, server in servers.items():
torrents = server.get_downloading_torrents(tags=settings.TORRENT_TAG)
for torrent in torrents or []:
meta = MetaInfo(torrent.get('name'))
ret_torrents.append(DownloadingTorrent(
downloader=name,
hash=torrent.get('hash'),
title=torrent.get('name'),
name=meta.name,
year=meta.year,
season_episode=meta.season_episode,
progress=torrent.get('progress') * 100,
size=torrent.get('total_size'),
state="paused" if torrent.get('state') in ("paused", "pausedDL") else "downloading",
dlspeed=StringUtils.str_filesize(torrent.get('dlspeed')),
upspeed=StringUtils.str_filesize(torrent.get('upspeed')),
left_time=StringUtils.str_secends(
(torrent.get('total_size') - torrent.get('completed')) / torrent.get('dlspeed')) if torrent.get(
'dlspeed') > 0 else ''
))
else:
return None
return ret_torrents
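The `downloader` handling above (one named instance versus all instances) follows a pattern that can be sketched on its own; `resolve_servers` is a hypothetical helper, not project API:

```python
def resolve_servers(instances: dict, downloader=None):
    """Return a {name: server} mapping: a single named downloader when one
    is requested (None if it does not exist), otherwise every instance."""
    if downloader:
        server = instances.get(downloader)
        if not server:
            return None
        return {downloader: server}
    return dict(instances)
```

Normalizing both cases into one mapping lets the three torrent-status branches share a single `for name, server in servers.items()` loop.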

View File

@@ -1,4 +1,5 @@
import time
import traceback
from typing import Optional, Union, Tuple, List
import qbittorrentapi
@@ -75,8 +76,13 @@ class Qbittorrent:
REQUESTS_ARGS={'timeout': (15, 60)})
try:
qbt.auth_log_in()
except qbittorrentapi.LoginFailed as e:
logger.error(f"qbittorrent 登录失败:{str(e)}")
except (qbittorrentapi.LoginFailed, qbittorrentapi.Forbidden403Error) as e:
logger.error(f"qbittorrent 登录失败:{str(e).strip() or '请检查用户名和密码是否正确'}")
return None
except Exception as e:
stack_trace = "".join(traceback.format_exception(None, e, e.__traceback__))[:2000]
logger.error(f"qbittorrent 登录失败:{str(e)}\n{stack_trace}")
return None
return qbt
except Exception as err:
logger.error(f"qbittorrent 连接出错:{str(err)}")
@@ -245,6 +251,7 @@ class Qbittorrent:
:param category: 种子分类
:param download_dir: 下载路径
:param cookie: 站点Cookie,用于辅助下载种子
:param kwargs: 可选参数,如 ignore_category_check 以及 QB相关参数
:return: bool
"""
if not self.qbc or not content:
@@ -270,13 +277,16 @@ class Qbittorrent:
else:
tags = None
# 分类自动管理
if category and self._category:
is_auto = True
# 如果忽略分类检查,则直接使用传入的分类值,否则,仅在分类存在且启用了自动管理时才传递参数
ignore_category_check = kwargs.pop("ignore_category_check", True)
if ignore_category_check:
is_auto = self._category
else:
is_auto = False
category = None
if category and self._category:
is_auto = True
else:
is_auto = False
category = None
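The category/auto-management branching above can be sketched as a pure function (a sketch with illustrative names, not the project's API; `resolve_category` returns the effective category plus the auto-management flag):

```python
def resolve_category(category, auto_manage_enabled, ignore_category_check=True):
    """Mirror of the branching above. When the check is ignored, the
    caller's category passes through and auto-management follows the
    global switch; otherwise the category is kept only when it is set
    AND auto-management is enabled."""
    if ignore_category_check:
        return category, bool(auto_manage_enabled)
    if category and auto_manage_enabled:
        return category, True
    return None, False
```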
try:
# 添加下载
qbc_ret = self.qbc.torrents_add(urls=urls,

View File

@@ -31,6 +31,13 @@ class SlackModule(_ModuleBase, _MessageBase[Slack]):
"""
return ModuleType.Notification
@staticmethod
def get_subtype() -> MessageChannel:
"""
获取模块子类型
"""
return MessageChannel.Slack
@staticmethod
def get_priority() -> int:
"""

View File

@@ -10,7 +10,7 @@ from app.core.context import Context
from app.helper.torrent import TorrentHelper
from app.log import logger
from app.modules import _ModuleBase
from app.schemas.types import ModuleType
from app.schemas.types import ModuleType, OtherModulesType
from app.utils.http import RequestUtils
from app.utils.string import StringUtils
from app.utils.system import SystemUtils
@@ -40,6 +40,13 @@ class SubtitleModule(_ModuleBase):
"""
return ModuleType.Other
@staticmethod
def get_subtype() -> OtherModulesType:
"""
获取模块子类型
"""
return OtherModulesType.Subtitle
@staticmethod
def get_priority() -> int:
"""

View File

@@ -29,6 +29,13 @@ class SynologyChatModule(_ModuleBase, _MessageBase[SynologyChat]):
"""
return ModuleType.Notification
@staticmethod
def get_subtype() -> MessageChannel:
"""
获取模块子类型
"""
return MessageChannel.SynologyChat
@staticmethod
def get_priority() -> int:
"""

View File

@@ -1,12 +1,16 @@
import copy
import json
from typing import Optional, Union, List, Tuple, Any, Dict
from app.core.context import MediaInfo, Context
from app.core.event import eventmanager
from app.log import logger
from app.modules import _ModuleBase, _MessageBase
from app.modules.telegram.telegram import Telegram
from app.schemas import MessageChannel, CommingMessage, Notification
from app.schemas.types import ModuleType
from app.schemas.event import CommandRegisterEventData
from app.schemas.types import ModuleType, ChainEventType
from app.utils.structures import DictUtils
class TelegramModule(_ModuleBase, _MessageBase[Telegram]):
@@ -30,6 +34,13 @@ class TelegramModule(_ModuleBase, _MessageBase[Telegram]):
"""
return ModuleType.Notification
@staticmethod
def get_subtype() -> MessageChannel:
"""
获取模块子类型
"""
return MessageChannel.Telegram
@staticmethod
def get_priority() -> int:
"""
@@ -189,5 +200,41 @@ class TelegramModule(_ModuleBase, _MessageBase[Telegram]):
注册命令,实现这个函数接收系统可用的命令菜单
:param commands: 命令字典
"""
for client in self.get_instances().values():
client.register_commands(commands)
for client_config in self.get_configs().values():
client = self.get_instance(client_config.name)
if not client:
continue
# 触发事件,允许调整命令数据,这里需要进行深复制,避免实例共享
scoped_commands = copy.deepcopy(commands)
event = eventmanager.send_event(
ChainEventType.CommandRegister,
CommandRegisterEventData(commands=scoped_commands, origin="Telegram", service=client_config.name)
)
# 如果事件返回有效的 event_data,使用事件中调整后的命令
if event and event.event_data:
event_data: CommandRegisterEventData = event.event_data
# 如果事件被取消,跳过命令注册,并清理菜单
if event_data.cancel:
client.delete_commands()
logger.debug(
f"Command registration for {client_config.name} canceled by event: {event_data.source}"
)
continue
scoped_commands = event_data.commands or {}
if not scoped_commands:
logger.debug("Filtered commands are empty, skipping registration.")
client.delete_commands()
# scoped_commands 必须是 commands 的子集
filtered_scoped_commands = DictUtils.filter_keys_to_subset(scoped_commands, commands)
# 如果 filtered_scoped_commands 为空,则跳过注册
if not filtered_scoped_commands:
logger.debug("Filtered commands are empty, skipping registration.")
client.delete_commands()
continue
# 对比调整后的命令与当前命令
if filtered_scoped_commands != commands:
logger.debug(f"Command set has changed, Updating new commands: {filtered_scoped_commands}")
client.register_commands(filtered_scoped_commands)
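The subset filter applied to the event-adjusted commands can be sketched like this (a hypothetical stand-in for `DictUtils.filter_keys_to_subset`, whose exact signature is not shown in this diff):

```python
def filter_keys_to_subset(adjusted: dict, allowed: dict) -> dict:
    """Keep only keys that exist in the original command set: an event
    handler may drop or reorder commands but never introduce new ones."""
    return {k: v for k, v in adjusted.items() if k in allowed}
```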

View File

@@ -206,14 +206,15 @@ class Telegram:
"""
向Telegram发送报文
"""
if image:
res = RequestUtils(proxies=settings.PROXY).get_res(image)
if res is None:
raise Exception("获取图片失败")
if res.content:
# 使用随机标识构建图片文件的完整路径,并写入图片内容到文件
image_file = Path(settings.TEMP_PATH) / str(uuid.uuid4())
image_file = Path(settings.TEMP_PATH) / "telegram" / str(uuid.uuid4())
if not image_file.parent.exists():
image_file.parent.mkdir(parents=True, exist_ok=True)
image_file.write_bytes(res.content)
photo = InputFile(image_file)
# 发送图片到Telegram
@@ -223,8 +224,7 @@ class Telegram:
parse_mode="Markdown")
if ret is None:
raise Exception("发送图片消息失败")
if ret:
return True
return True
# 按4096分段循环发送消息
ret = None
if len(caption) > 4095:
@@ -256,6 +256,15 @@ class Telegram:
]
)
def delete_commands(self):
"""
清理菜单命令
"""
if not self._bot:
return
# 清理菜单命令
self._bot.delete_my_commands()
def stop(self):
"""
停止Telegram消息接收服务

View File

@@ -1,6 +1,7 @@
from typing import Optional, List, Tuple, Union, Dict
import cn2an
import zhconv
from app import schemas
from app.core.config import settings
@@ -13,7 +14,7 @@ from app.modules.themoviedb.scraper import TmdbScraper
from app.modules.themoviedb.tmdb_cache import TmdbCache
from app.modules.themoviedb.tmdbapi import TmdbApi
from app.schemas import MediaPerson
from app.schemas.types import MediaType, MediaImageType, ModuleType
from app.schemas.types import MediaType, MediaImageType, ModuleType, MediaRecognizeType
from app.utils.http import RequestUtils
@@ -48,6 +49,13 @@ class TheMovieDbModule(_ModuleBase):
"""
return ModuleType.MediaRecognize
@staticmethod
def get_subtype() -> MediaRecognizeType:
"""
获取模块子类型
"""
return MediaRecognizeType.TMDB
@staticmethod
def get_priority() -> int:
"""
@@ -116,8 +124,10 @@ class TheMovieDbModule(_ModuleBase):
info = self.tmdb.get_info(mtype=mtype, tmdbid=tmdbid)
elif meta:
info = {}
# 简体名称
zh_name = zhconv.convert(meta.cn_name, "zh-hans") if meta.cn_name else None
# 使用中英文名分别识别,去重去空,但要保持顺序
names = list(dict.fromkeys([k for k in [meta.cn_name, meta.en_name] if k]))
names = list(dict.fromkeys([k for k in [meta.cn_name, zh_name, meta.en_name] if k]))
for name in names:
if meta.begin_season:
logger.info(f"正在识别 {name}{meta.begin_season}季 ...")
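The ordered de-duplication of lookup names above relies on `dict.fromkeys`; it can be shown in isolation (the zh-Hans conversion is done by `zhconv` upstream, so the converted name is passed in here):

```python
def candidate_names(cn_name, zh_name, en_name) -> list:
    """Ordered, de-duplicated lookup candidates: the original Chinese
    name, its Simplified form, then the English name. dict.fromkeys
    keeps first-seen order while dropping repeats and empty values."""
    return list(dict.fromkeys(k for k in (cn_name, zh_name, en_name) if k))
```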

View File

@@ -30,9 +30,9 @@ class TmdbScraper:
# 电影元数据文件
doc = self.__gen_movie_nfo_file(mediainfo=mediainfo)
else:
if season:
if season is not None:
# 查询季信息
seasoninfo = self.tmdb.get_tv_season_detail(mediainfo.tmdb_id, meta.begin_season)
seasoninfo = self.tmdb.get_tv_season_detail(mediainfo.tmdb_id, season)
if episode:
# 集元数据文件
episodeinfo = self.__get_episode_detail(seasoninfo, meta.begin_episode)
@@ -57,7 +57,7 @@ class TmdbScraper:
:param episode: 集号
"""
images = {}
if season:
if season is not None:
# 只需要集的图片
if episode:
# 集的图片
@@ -102,8 +102,13 @@ class TmdbScraper:
ext = Path(seasoninfo.get('poster_path')).suffix
# URL
url = f"https://{settings.TMDB_IMAGE_DOMAIN}/t/p/original{seasoninfo.get('poster_path')}"
image_name = f"season{sea_seq}-poster{ext}"
# S0海报格式不同
if season == 0:
image_name = f"season-specials-poster{ext}"
else:
image_name = f"season{sea_seq}-poster{ext}"
return image_name, url
return "", ""
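The special-case naming for season 0 posters can be sketched as a small helper (illustrative only; the two-digit padding of the season sequence is an assumption, since `sea_seq` is not shown in this hunk):

```python
def season_poster_name(season: int, ext: str = ".jpg") -> str:
    """Season 0 ("Specials") uses a distinct scraper naming convention;
    other seasons follow the seasonNN-poster pattern."""
    if season == 0:
        return f"season-specials-poster{ext}"
    return f"season{season:02d}-poster{ext}"
```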
@staticmethod
def __get_episode_detail(seasoninfo: dict, episode: int) -> dict:
@@ -228,7 +233,7 @@ class TmdbScraper:
xoutline = DomUtils.add_node(doc, root, "outline")
xoutline.appendChild(doc.createCDATASection(seasoninfo.get("overview") or ""))
# 标题
DomUtils.add_node(doc, root, "title", "%s" % season)
DomUtils.add_node(doc, root, "title", seasoninfo.get("name") or "%s" % season)
# 发行日期
DomUtils.add_node(doc, root, "premiered", seasoninfo.get("air_date") or "")
DomUtils.add_node(doc, root, "releasedate", seasoninfo.get("air_date") or "")

View File

@@ -15,7 +15,7 @@ from app.schemas.types import MediaType
lock = RLock()
CACHE_EXPIRE_TIMESTAMP_STR = "cache_expire_timestamp"
EXPIRE_TIMESTAMP = settings.CACHE_CONF.get('meta')
EXPIRE_TIMESTAMP = settings.CACHE_CONF["meta"]
class TmdbCache(metaclass=Singleton):
@@ -50,7 +50,7 @@ class TmdbCache(metaclass=Singleton):
"""
获取缓存KEY
"""
return f"[{meta.type.value if meta.type else '未知'}]{meta.name or meta.tmdbid}-{meta.year}-{meta.begin_season}"
return f"[{meta.type.value if meta.type else '未知'}]{meta.tmdbid or meta.name}-{meta.year}-{meta.begin_season}"
def get(self, meta: MetaBase):
"""
@@ -75,7 +75,7 @@ class TmdbCache(metaclass=Singleton):
@return: 被删除的缓存内容
"""
with lock:
return self._meta_data.pop(key, None)
return self._meta_data.pop(key, {})
def delete_by_tmdbid(self, tmdbid: int) -> None:
"""
@@ -138,14 +138,14 @@ class TmdbCache(metaclass=Singleton):
if cache_year:
cache_year = cache_year[:4]
self._meta_data[self.__get_key(meta)] = {
"id": info.get("id"),
"type": info.get("media_type"),
"year": cache_year,
"title": cache_title,
"poster_path": info.get("poster_path"),
"backdrop_path": info.get("backdrop_path"),
CACHE_EXPIRE_TIMESTAMP_STR: int(time.time()) + EXPIRE_TIMESTAMP
}
"id": info.get("id"),
"type": info.get("media_type"),
"year": cache_year,
"title": cache_title,
"poster_path": info.get("poster_path"),
"backdrop_path": info.get("backdrop_path"),
CACHE_EXPIRE_TIMESTAMP_STR: int(time.time()) + EXPIRE_TIMESTAMP
}
elif info is not None:
# None时不缓存,此时代表网络错误,允许重复请求
self._meta_data[self.__get_key(meta)] = {'id': 0}
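The cache-key change above (preferring `tmdbid` over the display name) can be sketched as a standalone function (`cache_key` is an illustrative name; "未知" mirrors the fallback type label used in the diff):

```python
def cache_key(media_type, tmdbid, name, year, begin_season):
    """Prefer the stable tmdbid over the display name, so the same title
    recognized under different names shares one cache entry."""
    ident = tmdbid or name
    return f"[{media_type or '未知'}]{ident}-{year}-{begin_season}"
```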
@@ -164,7 +164,7 @@ class TmdbCache(metaclass=Singleton):
return
with open(self._meta_path, 'wb') as f:
pickle.dump(new_meta_data, f, pickle.HIGHEST_PROTOCOL)
pickle.dump(new_meta_data, f, pickle.HIGHEST_PROTOCOL) # type: ignore
def _random_sample(self, new_meta_data: dict) -> bool:
"""

View File

@@ -1,9 +1,9 @@
import traceback
from functools import lru_cache
from typing import Optional, List
from urllib.parse import quote
import zhconv
from cachetools import TTLCache, cached
from lxml import etree
from app.core.config import settings
@@ -27,8 +27,6 @@ class TmdbApi:
self.tmdb.domain = settings.TMDB_API_DOMAIN
# 开启缓存
self.tmdb.cache = True
# 缓存大小
self.tmdb.REQUEST_CACHE_MAXSIZE = settings.CACHE_CONF.get('tmdb')
# APIKEY
self.tmdb.api_key = settings.TMDB_API_KEY
# 语种
@@ -163,7 +161,7 @@ class TmdbApi:
season_number: int = None) -> Optional[dict]:
"""
搜索tmdb中的媒体信息,匹配返回一条尽可能正确的信息
:param name: 索的名称
:param name: 搜索的名称
:param mtype: 类型:电影、电视剧
:param year: 年份,如要是季集需要是首播年份(first_air_date)
:param season_year: 当前季集年份
@@ -466,7 +464,7 @@ class TmdbApi:
return ret_info
@lru_cache(maxsize=settings.CACHE_CONF.get('tmdb'))
@cached(cache=TTLCache(maxsize=settings.CACHE_CONF["tmdb"], ttl=settings.CACHE_CONF["meta"]))
def match_web(self, name: str, mtype: MediaType) -> Optional[dict]:
"""
搜索TMDB网站,直接抓取结果,结果只有一条时才返回
@@ -1292,7 +1290,7 @@ class TmdbApi:
for group_episode in group_episodes:
order = group_episode.get('order')
episodes = group_episode.get('episodes')
if not episodes or not order:
if not episodes:
continue
# 当前季第一季时间
first_date = episodes[0].get("air_date")
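The `lru_cache` to `cachetools.TTLCache` swap in this file is about expiry, not just size. A minimal stdlib-only stand-in (illustrative, not the `cachetools` implementation) shows the behavior the change buys:

```python
import time

class TTLCacheLite:
    """Minimal stand-in for cachetools.TTLCache: unlike lru_cache,
    entries also expire after `ttl` seconds, so stale TMDB responses
    are eventually refetched instead of being served forever."""

    def __init__(self, maxsize: int, ttl: float):
        self.maxsize, self.ttl = maxsize, ttl
        self._data = {}

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() > expires_at:
            del self._data[key]  # evict expired entry on access
            return None
        return value

    def put(self, key, value):
        if key not in self._data and len(self._data) >= self.maxsize:
            self._data.pop(next(iter(self._data)))  # drop oldest entry
        self._data[key] = (value, time.monotonic() + self.ttl)
```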

View File

@@ -4,11 +4,12 @@ import logging
import os
import time
from datetime import datetime
from functools import lru_cache
import requests
import requests.exceptions
from cachetools import TTLCache, cached
from app.core.config import settings
from app.utils.http import RequestUtils
from .exceptions import TMDbException
@@ -24,7 +25,6 @@ class TMDb(object):
TMDB_CACHE_ENABLED = "TMDB_CACHE_ENABLED"
TMDB_PROXIES = "TMDB_PROXIES"
TMDB_DOMAIN = "TMDB_DOMAIN"
REQUEST_CACHE_MAXSIZE = None
_req = None
_session = None
@@ -137,7 +137,7 @@ class TMDb(object):
def cache(self, cache):
os.environ[self.TMDB_CACHE_ENABLED] = str(cache)
@lru_cache(maxsize=REQUEST_CACHE_MAXSIZE)
@cached(cache=TTLCache(maxsize=settings.CACHE_CONF["tmdb"], ttl=settings.CACHE_CONF["meta"]))
def cached_request(self, method, url, data, json,
_ts=datetime.strftime(datetime.now(), '%Y%m%d')):
"""

View File

@@ -4,7 +4,7 @@ from app.core.config import settings
from app.log import logger
from app.modules import _ModuleBase
from app.modules.thetvdb import tvdbapi
from app.schemas.types import ModuleType
from app.schemas.types import ModuleType, MediaRecognizeType
from app.utils.http import RequestUtils
@@ -28,6 +28,13 @@ class TheTvDbModule(_ModuleBase):
"""
return ModuleType.MediaRecognize
@staticmethod
def get_subtype() -> MediaRecognizeType:
"""
获取模块子类型
"""
return MediaRecognizeType.TVDB
@staticmethod
def get_priority() -> int:
"""

View File

@@ -1,5 +1,5 @@
from pathlib import Path
from typing import Set, Tuple, Optional, Union, List
from typing import Set, Tuple, Optional, Union, List, Dict
from torrentool.torrent import Torrent
from transmission_rpc import File
@@ -11,7 +11,7 @@ from app.log import logger
from app.modules import _ModuleBase, _DownloaderBase
from app.modules.transmission.transmission import Transmission
from app.schemas import TransferTorrent, DownloadingTorrent
from app.schemas.types import TorrentStatus, ModuleType
from app.schemas.types import TorrentStatus, ModuleType, DownloaderType
from app.utils.string import StringUtils
@@ -35,6 +35,13 @@ class TransmissionModule(_ModuleBase, _DownloaderBase[Transmission]):
"""
return ModuleType.Downloader
@staticmethod
def get_subtype() -> DownloaderType:
"""
获取模块子类型
"""
return DownloaderType.Transmission
@staticmethod
def get_priority() -> int:
"""
@@ -196,60 +203,70 @@ class TransmissionModule(_ModuleBase, _DownloaderBase[Transmission]):
:return: 下载器中符合状态的种子列表
"""
# 获取下载器
server: Transmission = self.get_instance(downloader)
if not server:
return None
if downloader:
server: Transmission = self.get_instance(downloader)
if not server:
return None
servers = {downloader: server}
else:
servers: Dict[str, Transmission] = self.get_instances()
ret_torrents = []
if hashs:
# 按Hash获取
torrents, _ = server.get_torrents(ids=hashs, tags=settings.TORRENT_TAG)
for torrent in torrents or []:
ret_torrents.append(TransferTorrent(
title=torrent.name,
path=Path(torrent.download_dir) / torrent.name,
hash=torrent.hashString,
size=torrent.total_size,
tags=",".join(torrent.labels or [])
))
for name, server in servers.items():
torrents, _ = server.get_torrents(ids=hashs, tags=settings.TORRENT_TAG)
for torrent in torrents or []:
ret_torrents.append(TransferTorrent(
downloader=name,
title=torrent.name,
path=Path(torrent.download_dir) / torrent.name,
hash=torrent.hashString,
size=torrent.total_size,
tags=",".join(torrent.labels or [])
))
elif status == TorrentStatus.TRANSFER:
# 获取已完成且未整理的
torrents = server.get_completed_torrents(tags=settings.TORRENT_TAG)
for torrent in torrents or []:
# 含"已整理"tag的不处理
if "已整理" in torrent.labels or []:
continue
# 下载路径
path = torrent.download_dir
# 无法获取下载路径的不处理
if not path:
logger.debug(f"未获取到 {torrent.name} 下载保存路径")
continue
ret_torrents.append(TransferTorrent(
title=torrent.name,
path=Path(torrent.download_dir) / torrent.name,
hash=torrent.hashString,
tags=",".join(torrent.labels or [])
))
for name, server in servers.items():
torrents = server.get_completed_torrents(tags=settings.TORRENT_TAG)
for torrent in torrents or []:
# 含"已整理"tag的不处理
if "已整理" in torrent.labels or []:
continue
# 下载路径
path = torrent.download_dir
# 无法获取下载路径的不处理
if not path:
logger.debug(f"未获取到 {torrent.name} 下载保存路径")
continue
ret_torrents.append(TransferTorrent(
downloader=name,
title=torrent.name,
path=Path(torrent.download_dir) / torrent.name,
hash=torrent.hashString,
tags=",".join(torrent.labels or [])
))
elif status == TorrentStatus.DOWNLOADING:
# 获取正在下载的任务
torrents = server.get_downloading_torrents(tags=settings.TORRENT_TAG)
for torrent in torrents or []:
meta = MetaInfo(torrent.name)
dlspeed = torrent.rate_download if hasattr(torrent, "rate_download") else torrent.rateDownload
upspeed = torrent.rate_upload if hasattr(torrent, "rate_upload") else torrent.rateUpload
ret_torrents.append(DownloadingTorrent(
hash=torrent.hashString,
title=torrent.name,
name=meta.name,
year=meta.year,
season_episode=meta.season_episode,
progress=torrent.progress,
size=torrent.total_size,
state="paused" if torrent.status == "stopped" else "downloading",
dlspeed=StringUtils.str_filesize(dlspeed),
upspeed=StringUtils.str_filesize(upspeed),
left_time=StringUtils.str_secends(torrent.left_until_done / dlspeed) if dlspeed > 0 else ''
))
for name, server in servers.items():
torrents = server.get_downloading_torrents(tags=settings.TORRENT_TAG)
for torrent in torrents or []:
meta = MetaInfo(torrent.name)
dlspeed = torrent.rate_download if hasattr(torrent, "rate_download") else torrent.rateDownload
upspeed = torrent.rate_upload if hasattr(torrent, "rate_upload") else torrent.rateUpload
ret_torrents.append(DownloadingTorrent(
downloader=name,
hash=torrent.hashString,
title=torrent.name,
name=meta.name,
year=meta.year,
season_episode=meta.season_episode,
progress=torrent.progress,
size=torrent.total_size,
state="paused" if torrent.status == "stopped" else "downloading",
dlspeed=StringUtils.str_filesize(dlspeed),
upspeed=StringUtils.str_filesize(upspeed),
left_time=StringUtils.str_secends(torrent.left_until_done / dlspeed) if dlspeed > 0 else ''
))
else:
return None
return ret_torrents

View File

@@ -30,6 +30,13 @@ class VoceChatModule(_ModuleBase, _MessageBase[VoceChat]):
"""
return ModuleType.Notification
@staticmethod
def get_subtype() -> MessageChannel:
"""
获取模块子类型
"""
return MessageChannel.VoceChat
@staticmethod
def get_priority() -> int:
"""

View File

@@ -30,6 +30,13 @@ class WebPushModule(_ModuleBase, _MessageBase):
"""
return ModuleType.Notification
@staticmethod
def get_subtype() -> MessageChannel:
"""
获取模块子类型
"""
return MessageChannel.WebPush
@staticmethod
def get_priority() -> int:
"""

View File

@@ -1,14 +1,18 @@
import copy
import xml.dom.minidom
from typing import Optional, Union, List, Tuple, Any, Dict
from app.core.context import Context, MediaInfo
from app.core.event import eventmanager
from app.log import logger
from app.modules import _ModuleBase, _MessageBase
from app.modules.wechat.WXBizMsgCrypt3 import WXBizMsgCrypt
from app.modules.wechat.wechat import WeChat
from app.schemas import MessageChannel, CommingMessage, Notification
from app.schemas.types import ModuleType
from app.schemas.event import CommandRegisterEventData
from app.schemas.types import ModuleType, ChainEventType
from app.utils.dom import DomUtils
from app.utils.structures import DictUtils
class WechatModule(_ModuleBase, _MessageBase[WeChat]):
@@ -32,6 +36,13 @@ class WechatModule(_ModuleBase, _MessageBase[WeChat]):
"""
return ModuleType.Notification
@staticmethod
def get_subtype() -> MessageChannel:
"""
获取模块的子类型
"""
return MessageChannel.Wechat
@staticmethod
def get_priority() -> int:
"""
@@ -222,7 +233,42 @@ class WechatModule(_ModuleBase, _MessageBase[WeChat]):
# 如果没有配置消息解密相关参数,则也没有必要进行菜单初始化
if not client_config.config.get("WECHAT_ENCODING_AESKEY") or not client_config.config.get("WECHAT_TOKEN"):
logger.debug(f"{client_config.name} 缺少消息解密参数,跳过后续菜单初始化")
else:
client = self.get_instance(client_config.name)
if client:
client.create_menus(commands)
continue
client = self.get_instance(client_config.name)
if not client:
continue
# 触发事件,允许调整命令数据,这里需要进行深复制,避免实例共享
scoped_commands = copy.deepcopy(commands)
event = eventmanager.send_event(
ChainEventType.CommandRegister,
CommandRegisterEventData(commands=scoped_commands, origin="WeChat", service=client_config.name)
)
# 如果事件返回有效的 event_data,使用事件中调整后的命令
if event and event.event_data:
event_data: CommandRegisterEventData = event.event_data
# 如果事件被取消,跳过命令注册,并清理菜单
if event_data.cancel:
client.delete_menus()
logger.debug(
f"Command registration for {client_config.name} canceled by event: {event_data.source}"
)
continue
scoped_commands = event_data.commands or {}
if not scoped_commands:
logger.debug("Filtered commands are empty, skipping registration.")
client.delete_menus()
# scoped_commands 必须是 commands 的子集
filtered_scoped_commands = DictUtils.filter_keys_to_subset(scoped_commands, commands)
# 如果 filtered_scoped_commands 为空,则跳过注册
if not filtered_scoped_commands:
logger.debug("Filtered commands are empty, skipping registration.")
client.delete_menus()
continue
# 对比调整后的命令与当前命令
if filtered_scoped_commands != commands:
logger.debug(f"Command set has changed, Updating new commands: {filtered_scoped_commands}")
client.create_menus(filtered_scoped_commands)

View File

@@ -10,6 +10,7 @@ from app.log import logger
from app.utils.common import retry
from app.utils.http import RequestUtils
from app.utils.string import StringUtils
from app.utils.url import UrlUtils
lock = threading.Lock()
@@ -31,14 +32,16 @@ class WeChat:
_proxy = None
# 企业微信发送消息URL
_send_msg_url = "/cgi-bin/message/send?access_token=%s"
_send_msg_url = "/cgi-bin/message/send?access_token={access_token}"
# 企业微信获取TokenURL
_token_url = "/cgi-bin/gettoken?corpid=%s&corpsecret=%s"
# 企业微信创建菜单URL
_create_menu_url = "/cgi-bin/menu/create?access_token=%s&agentid=%s"
_token_url = "/cgi-bin/gettoken?corpid={corpid}&corpsecret={corpsecret}"
# 企业微信创建菜单URL
_create_menu_url = "/cgi-bin/menu/create?access_token={access_token}&agentid={agentid}"
# 企业微信删除菜单URL
_delete_menu_url = "/cgi-bin/menu/delete?access_token={access_token}&agentid={agentid}"
def __init__(self, WECHAT_CORPID: str = None, WECHAT_APP_SECRET: str = None, WECHAT_APP_ID: str = None,
WECHAT_PROXY: str = None, **kwargs):
def __init__(self, WECHAT_CORPID: str = None, WECHAT_APP_SECRET: str = None,
WECHAT_APP_ID: str = None, WECHAT_PROXY: str = None, **kwargs):
"""
初始化
"""
@@ -51,10 +54,10 @@ class WeChat:
self._proxy = WECHAT_PROXY or "https://qyapi.weixin.qq.com"
if self._proxy:
self._proxy = self._proxy.rstrip("/")
self._send_msg_url = f"{self._proxy}/{self._send_msg_url}"
self._token_url = f"{self._proxy}/{self._token_url}"
self._create_menu_url = f"{self._proxy}/{self._create_menu_url}"
self._send_msg_url = UrlUtils.adapt_request_url(self._proxy, self._send_msg_url)
self._token_url = UrlUtils.adapt_request_url(self._proxy, self._token_url)
self._create_menu_url = UrlUtils.adapt_request_url(self._proxy, self._create_menu_url)
self._delete_menu_url = UrlUtils.adapt_request_url(self._proxy, self._delete_menu_url)
if self._corpid and self._appsecret and self._appid:
self.__get_access_token()
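The URL templates above switch from `%s` formatting to named `{placeholder}` fields joined onto the proxy base. A minimal sketch of that joining step (a hypothetical stand-in for `UrlUtils.adapt_request_url`, whose real behavior may differ):

```python
def adapt_request_url(base: str, path: str) -> str:
    """Join a base URL and a path with exactly one slash between them,
    leaving any {placeholder} fields intact for a later str.format()."""
    return f"{base.rstrip('/')}/{path.lstrip('/')}"
```

Named placeholders make the later `.format(corpid=..., corpsecret=...)` calls self-documenting compared to positional `%s` substitution.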
@@ -63,7 +66,7 @@ class WeChat:
"""
获取状态
"""
return True if self.__get_access_token else False
return True if self.__get_access_token() else False
@retry(Exception, logger=logger)
def __get_access_token(self, force=False):
@@ -82,13 +85,13 @@ class WeChat:
if not self._corpid or not self._appsecret:
return None
try:
token_url = self._token_url % (self._corpid, self._appsecret)
token_url = self._token_url.format(corpid=self._corpid, corpsecret=self._appsecret)
res = RequestUtils().get_res(token_url)
if res:
ret_json = res.json()
if ret_json.get('errcode') == 0:
self._access_token = ret_json.get('access_token')
self._expires_in = ret_json.get('expires_in')
if ret_json.get("errcode") == 0:
self._access_token = ret_json.get("access_token")
self._expires_in = ret_json.get("expires_in")
self._access_token_time = datetime.now()
elif res is not None:
logger.error(f"获取微信access_token失败,错误码:{res.status_code},错误原因:{res.reason}")
@@ -100,8 +103,64 @@ class WeChat:
return None
return self._access_token
@staticmethod
def __split_content(content: str, max_bytes: int = 2048) -> List[str]:
"""
将内容分块为不超过 max_bytes 字节的块
:param content: 待拆分的内容
:param max_bytes: 最大字节数
:return: 分块后的内容列表
"""
content_chunks = []
current_chunk = bytearray()
for line in content.splitlines():
encoded_line = (line + "\n").encode("utf-8")
line_length = len(encoded_line)
if line_length > max_bytes:
# 在处理长行之前,先将 current_chunk 添加到 content_chunks
if current_chunk:
content_chunks.append(current_chunk.decode("utf-8", errors="replace").strip())
current_chunk = bytearray()
# 处理长行,拆分为多个不超过 max_bytes 的块
start = 0
while start < line_length:
end = start + max_bytes # 不再需要为 "..." 预留空间
if end >= line_length:
end = line_length
else:
# 调整以避免拆分多字节字符
while end > start and (encoded_line[end] & 0xC0) == 0x80:
end -= 1
if end == start:
# 单个字符超过了 max_bytes强制包含整个字符
end = start + 1
while end < line_length and (encoded_line[end] & 0xC0) == 0x80:
end += 1
truncated_line = encoded_line[start:end].decode("utf-8", errors="replace")
content_chunks.append(truncated_line.strip())
start = end
continue # 继续处理下一行
# 检查添加当前行后是否会超过 max_bytes
if len(current_chunk) + line_length > max_bytes:
# 将 current_chunk 添加到 content_chunks
content_chunks.append(current_chunk.decode("utf-8", errors="replace").strip())
current_chunk = bytearray()
# 将当前行添加到 current_chunk
current_chunk += encoded_line
# 处理剩余的 current_chunk
if current_chunk:
content_chunks.append(current_chunk.decode("utf-8", errors="replace").strip())
return content_chunks
def __send_message(self, title: str, text: str = None,
userid: str = None, link: str = None) -> Optional[bool]:
userid: str = None, link: str = None) -> bool:
"""
发送文本消息
:param title: 消息标题
@@ -110,62 +169,41 @@ class WeChat:
:param link: 跳转链接
:return: 发送状态,错误信息
"""
message_url = self._send_msg_url % self.__get_access_token()
if not title:
logger.error("消息标题不能为空")
return False
if text:
content = "%s\n%s" % (title, text.replace("\n\n", "\n"))
formatted_text = text.replace("\n\n", "\n")
content = f"{title}\n{formatted_text}"
else:
content = title
if link:
content = f"{content}\n点击查看:{link}"
if not userid:
userid = "@all"
# 分块处理逻辑
content_chunks = self.__split_content(content)
# 逐块发送消息
for chunk in content_chunks:
req_json = {
"touser": userid,
"msgtype": "text",
"agentid": self._appid,
"text": {
"content": content
"content": chunk
},
"safe": 0,
"enable_id_trans": 0,
"enable_duplicate_check": 0
}
try:
# 如果是超长消息,有一个发送失败就全部失败
if not self.__post_request(self._send_msg_url, req_json):
return False
except Exception as e:
logger.error(f"发送消息块失败:{e}")
return False
return True
def __send_image_message(self, title: str, text: str, image_url: str,
userid: str = None, link: str = None) -> Optional[bool]:
@@ -178,7 +216,6 @@ class WeChat:
:param link: 跳转链接
:return: 发送状态,错误信息
"""
if text:
text = text.replace("\n\n", "\n")
if not userid:
@@ -198,7 +235,11 @@ class WeChat:
]
}
}
try:
return self.__post_request(self._send_msg_url, req_json)
except Exception as e:
logger.error(f"发送图文消息失败:{e}")
return False
def send_msg(self, title: str, text: str = "", image: str = "",
userid: str = None, link: str = None) -> Optional[bool]:
@@ -211,130 +252,142 @@ class WeChat:
:param link: 跳转链接
:return: 发送状态,错误信息
"""
try:
if not self.__get_access_token():
logger.error("获取微信access_token失败请检查参数配置")
return None
if image:
ret_code = self.__send_image_message(title=title, text=text, image_url=image, userid=userid, link=link)
else:
ret_code = self.__send_message(title=title, text=text, userid=userid, link=link)
return ret_code
except Exception as e:
logger.error(f"发送消息失败:{e}")
return False
def send_medias_msg(self, medias: List[MediaInfo], userid: str = "") -> Optional[bool]:
"""
发送列表类消息
"""
try:
if not self.__get_access_token():
logger.error("获取微信access_token失败请检查参数配置")
return None
if not userid:
userid = "@all"
articles = []
index = 1
for media in medias:
if media.vote_average:
title = f"{index}. {media.title_year}\n类型:{media.type.value},评分:{media.vote_average}"
else:
title = f"{index}. {media.title_year}\n类型:{media.type.value}"
articles.append({
"title": title,
"description": "",
"picurl": media.get_message_image() if index == 1 else media.get_poster_image(),
"url": media.detail_link
})
index += 1
req_json = {
"touser": userid,
"msgtype": "news",
"agentid": self._appid,
"news": {
"articles": articles
}
}
return self.__post_request(self._send_msg_url, req_json)
except Exception as e:
logger.error(f"发送消息失败:{e}")
return False
def send_torrents_msg(self, torrents: List[Context],
userid: str = "", title: str = "", link: str = None) -> Optional[bool]:
"""
发送列表消息
"""
try:
if not self.__get_access_token():
logger.error("获取微信access_token失败请检查参数配置")
return None
# 先发送标题
if title:
self.__send_message(title=title, userid=userid, link=link)
# 发送列表
if not userid:
userid = "@all"
articles = []
index = 1
for context in torrents:
torrent = context.torrent_info
meta = MetaInfo(title=torrent.title, subtitle=torrent.description)
mediainfo = context.media_info
torrent_title = f"{index}.【{torrent.site_name}" \
f"{meta.season_episode} " \
f"{meta.resource_term} " \
f"{meta.video_term} " \
f"{meta.release_group} " \
f"{StringUtils.str_filesize(torrent.size)} " \
f"{torrent.volume_factor} " \
f"{torrent.seeders}"
torrent_title = re.sub(r"\s+", " ", torrent_title).strip()
articles.append({
"title": torrent_title,
"description": torrent.description if index == 1 else "",
"picurl": mediainfo.get_message_image() if index == 1 else "",
"url": torrent.page_url
})
index += 1
req_json = {
"touser": userid,
"msgtype": "news",
"agentid": self._appid,
"news": {
"articles": articles
}
}
return self.__post_request(self._send_msg_url, req_json)
except Exception as e:
logger.error(f"发送消息失败:{e}")
return False
@retry(Exception, logger=logger)
def __post_request(self, url: str, req_json: dict) -> bool:
"""
向微信发送请求
"""
try:
url = url.format(access_token=self.__get_access_token())
res = RequestUtils(content_type="application/json").post(
url=url,
data=json.dumps(req_json, ensure_ascii=False).encode("utf-8")
)
if res is None:
error_msg = "发送请求失败,未获取到返回信息"
raise Exception(error_msg)
if res.status_code != 200:
error_msg = f"发送请求失败,错误码:{res.status_code},错误原因:{res.reason}"
raise Exception(error_msg)
ret_json = res.json()
if ret_json.get("errcode") == 0:
return True
else:
if ret_json.get("errcode") == 42001:
self.__get_access_token(force=True)
error_msg = (f"access_token 已过期,尝试重新获取 access_token,"
f"errcode: {ret_json.get('errcode')}, errmsg: {ret_json.get('errmsg')}")
raise Exception(error_msg)
else:
logger.error(f"发送请求失败,未获取到返回信息")
logger.error(f"发送请求失败,错误信息:{ret_json.get('errmsg')}")
return False
except Exception as err:
logger.error(f"发送请求失败,错误信息:{str(err)}")
return False
def create_menus(self, commands: Dict[str, dict]):
"""
@@ -375,36 +428,53 @@ class WeChat:
]
}
"""
try:
# 请求URL
req_url = self._create_menu_url.format(access_token="{access_token}", agentid=self._appid)
# 对commands按category分组
category_dict = {}
for key, value in commands.items():
category: str = value.get("category")
if category:
if not category_dict.get(category):
category_dict[category] = {}
category_dict[category][key] = value
# 一级菜单
buttons = []
for category, menu in category_dict.items():
# 二级菜单
sub_buttons = []
for key, value in menu.items():
sub_buttons.append({
"type": "click",
"name": value.get("description"),
"key": key
})
buttons.append({
"name": category,
"sub_button": sub_buttons[:5]
})
if buttons:
# 发送请求
self.__post_request(req_url, {
"button": buttons[:3]
})
except Exception as e:
logger.error(f"创建菜单失败:{e}")
return False
def delete_menus(self):
"""
删除微信菜单
"""
try:
# 请求URL
req_url = self._delete_menu_url.format(access_token=self.__get_access_token(), agentid=self._appid)
# 发送请求
RequestUtils().get(req_url)
except Exception as e:
logger.error(f"删除菜单失败:{e}")
return False


@@ -263,28 +263,26 @@ class Monitor(metaclass=Singleton):
try:
item = self._queue.get(timeout=self._transfer_interval)
if item:
self.__handle_file(storage=item.get("storage"),
event_path=item.get("filepath"),
mon_path=item.get("mon_path"))
self.__handle_file(storage=item.get("storage"), event_path=item.get("filepath"))
except queue.Empty:
continue
except Exception as e:
logger.error(f"整理队列处理出现错误:{e}")
def __handle_file(self, storage: str, event_path: Path):
"""
整理一个文件
:param storage: 存储
:param event_path: 事件文件路径
"""
def __get_bluray_dir(_path: Path):
"""
获取BDMV目录的上级目录
"""
for p in _path.parents:
if p.name == "BDMV":
return p.parent
return None
# 全程加锁
@@ -386,7 +384,8 @@ class Monitor(metaclass=Singleton):
return
# 查询转移目的目录
dir_info = self.directoryhelper.get_dir(mediainfo, storage=storage, src_path=event_path)
if not dir_info:
logger.warn(f"{event_path.name} 未找到对应的目标目录")
return
@@ -418,6 +417,7 @@ class Monitor(metaclass=Singleton):
transferinfo: TransferInfo = self.chain.transfer(fileitem=file_item,
meta=file_meta,
mediainfo=mediainfo,
target_directory=dir_info,
episodes_info=episodes_info)
if not transferinfo:
@@ -478,8 +478,7 @@ class Monitor(metaclass=Singleton):
# 移动模式删除空目录
if transferinfo.transfer_type in ["move"]:
logger.info(f"正在删除: {file_item.storage} {file_item.path}")
self.storagechain.delete_media_file(file_item, delete_self=False)
except Exception as e:
logger.error("目录监控发生错误:%s - %s" % (str(e), traceback.format_exc()))


@@ -1,4 +1,3 @@
import threading
import traceback
from datetime import datetime, timedelta
@@ -20,19 +19,14 @@ from app.chain.transfer import TransferChain
from app.core.config import settings
from app.core.event import EventManager
from app.core.plugin import PluginManager
from app.db.systemconfig_oper import SystemConfigOper
from app.helper.sites import SitesHelper
from app.log import logger
from app.schemas import Notification, NotificationType
from app.schemas.types import EventType, SystemConfigKey
from app.utils.singleton import Singleton
from app.utils.timer import TimerUtils
class SchedulerChain(ChainBase):
pass
@@ -47,7 +41,7 @@ class Scheduler(metaclass=Singleton):
# 退出事件
_event = threading.Event()
# 锁
_lock = threading.RLock()
# 各服务的运行状态
_jobs = {}
# 用户认证失败次数
@@ -60,49 +54,6 @@ class Scheduler(metaclass=Singleton):
"""
初始化定时服务
"""
# 各服务的运行状态
self._jobs = {
"cookiecloud": {
@@ -148,12 +99,12 @@ class Scheduler(metaclass=Singleton):
},
"clear_cache": {
"name": "缓存清理",
"func": clear_cache,
"func": self.clear_cache,
"running": False,
},
"user_auth": {
"name": "用户认证检查",
"func": user_auth,
"func": self.user_auth,
"running": False,
},
"scheduler_job": {
@@ -328,7 +279,7 @@ class Scheduler(metaclass=Singleton):
"interval",
id="clear_cache",
name="缓存清理",
hours=settings.CACHE_CONF["meta"] / 3600,
kwargs={
'job_id': 'clear_cache'
}
@@ -346,17 +297,18 @@ class Scheduler(metaclass=Singleton):
}
)
# 站点数据刷新
if settings.SITEDATA_REFRESH_INTERVAL:
self._scheduler.add_job(
self.start,
"interval",
id="sitedata_refresh",
name="站点数据刷新",
minutes=settings.SITEDATA_REFRESH_INTERVAL * 60,
kwargs={
'job_id': 'sitedata_refresh'
}
)
self.init_plugin_jobs()
@@ -436,46 +388,70 @@ class Scheduler(metaclass=Singleton):
try:
sid = f"{service['id']}"
job_id = sid.split("|")[0]
self.remove_plugin_job(pid, job_id)
self._jobs[job_id] = {
"func": service["func"],
"name": service["name"],
"pid": pid,
"plugin_name": plugin_name,
"kwargs": service.get("func_kwargs") or {},
"running": False,
}
self._scheduler.add_job(
self.start,
service["trigger"],
id=sid,
name=service["name"],
**(service.get("kwargs") or {}),
kwargs={"job_id": job_id},
replace_existing=True
)
logger.info(f"注册插件{plugin_name}服务:{service['name']} - {service['trigger']}")
except Exception as e:
logger.error(f"注册插件{plugin_name}服务失败:{str(e)} - {service}")
SchedulerChain().messagehelper.put(title=f"插件 {plugin_name} 服务注册失败",
message=str(e),
role="system")
def remove_plugin_job(self, pid: str, job_id: str = None):
"""
移除定时服务,可以是单个服务(包括默认服务)或整个插件的所有服务
:param pid: 插件 ID
:param job_id: 可选,指定要移除的单个服务的 job_id。如果不提供,则移除该插件的所有服务;当移除单个服务时,默认服务也包含在内
"""
if not self._scheduler:
return
with self._lock:
if job_id:
# 移除单个服务
service = self._jobs.pop(job_id, None)
if not service:
return
jobs_to_remove = [(job_id, service)]
else:
# 移除插件的所有服务
jobs_to_remove = [
(job_id, service) for job_id, service in self._jobs.items() if service.get("pid") == pid
]
for job_id, _ in jobs_to_remove:
self._jobs.pop(job_id, None)
if not jobs_to_remove:
return
plugin_name = PluginManager().get_plugin_attr(pid, "plugin_name")
# 遍历移除任务
for job_id, service in jobs_to_remove:
try:
if service.get("pid") == pid:
self._jobs.pop(job_id, None)
try:
self._scheduler.remove_job(job_id)
except JobLookupError:
pass
# 在调度器中查找并移除对应的 job
job_removed = False
for job in list(self._scheduler.get_jobs()):
job_id_from_service = job.id.split("|")[0]
if job_id == job_id_from_service:
try:
self._scheduler.remove_job(job.id)
job_removed = True
except JobLookupError:
pass
if job_removed:
logger.info(f"移除插件服务({plugin_name}){service.get('name')}")
except Exception as e:
logger.error(f"移除插件服务失败:{str(e)} - {job_id}: {service}")
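Plugin service ids are stored as `"<job_id>|<suffix>"`, so removal matches scheduler jobs on the base id before the `|`, exactly as `job.id.split("|")[0]` does above. The matching rule in isolation (job names hypothetical):

```python
def base_id(sid: str) -> str:
    """A scheduler id like 'mytask|run1' maps to the base job id 'mytask'."""
    return sid.split("|")[0]


scheduled = ["sitedata_refresh|0", "clear_cache", "mytask|a", "mytask|b"]
to_remove = [sid for sid in scheduled if base_id(sid) == "mytask"]
assert to_remove == ["mytask|a", "mytask|b"]
assert base_id("clear_cache") == "clear_cache"  # ids without a suffix match themselves
```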
@@ -548,3 +524,50 @@ class Scheduler(metaclass=Singleton):
logger.info("定时任务停止完成")
except Exception as e:
logger.error(f"停止定时任务失败::{str(e)} - {traceback.format_exc()}")
@staticmethod
def clear_cache():
"""
清理缓存
"""
TorrentsChain().clear_cache()
SchedulerChain().clear_cache()
def user_auth(self):
"""
用户认证检查
"""
if SitesHelper().auth_level >= 2:
return
# 最大重试次数
__max_try__ = 30
if self._auth_count > __max_try__:
SchedulerChain().messagehelper.put(title=f"用户认证失败",
message="用户认证失败次数过多,将不再尝试认证!",
role="system")
return
logger.info("用户未认证,正在尝试认证...")
auth_conf = SystemConfigOper().get(SystemConfigKey.UserSiteAuthParams)
if auth_conf:
status, msg = SitesHelper().check_user(**auth_conf)
else:
status, msg = SitesHelper().check_user()
if status:
self._auth_count = 0
logger.info(f"{msg} 用户认证成功")
SchedulerChain().post_message(
Notification(
mtype=NotificationType.Manual,
title="MoviePilot用户认证成功",
text=f"使用站点:{msg}",
link=settings.MP_DOMAIN('#/site')
)
)
PluginManager().init_config()
self.init_plugin_jobs()
else:
self._auth_count += 1
logger.error(f"用户认证失败:{msg},共失败 {self._auth_count}")
if self._auth_count >= __max_try__:
logger.error("用户认证失败次数过多,将不再尝试认证!")


@@ -180,6 +180,8 @@ class TorrentInfo(BaseModel):
site_proxy: Optional[bool] = False
# 站点优先级
site_order: Optional[int] = 0
# 站点下载器
site_downloader: Optional[str] = None
# 种子名称
title: Optional[str] = None
# 种子副标题


@@ -1,7 +1,11 @@
from pathlib import Path
from typing import Optional, Dict, Any, List, Set
from pydantic import BaseModel, Field, root_validator
from app.core.context import Context
from app.schemas import MessageChannel
class BaseEventData(BaseModel):
"""
@@ -66,7 +70,7 @@ class AuthCredentials(ChainEventData):
class AuthInterceptCredentials(ChainEventData):
"""
AuthIntercept 事件的数据模型
Attributes:
# 输入参数
@@ -74,17 +78,127 @@ class AuthInterceptCredentials(ChainEventData):
channel (str): 认证渠道
service (str): 服务名称
token (str): 认证令牌
status (str): 认证状态,"triggered" 和 "completed" 两个状态
# 输出参数
source (str): 拦截源,默认值为 "未知拦截源"
cancel (bool): 是否取消认证,默认值为 False
"""
# 输入参数
username: Optional[str] = Field(..., description="用户名")
channel: str = Field(..., description="认证渠道")
service: str = Field(..., description="服务名称")
status: str = Field(..., description="认证状态, 包含 'triggered' 表示认证触发,'completed' 表示认证成功")
token: Optional[str] = Field(None, description="认证令牌")
# 输出参数
source: str = Field("未知拦截源", description="拦截源")
cancel: bool = Field(False, description="是否取消认证")
class CommandRegisterEventData(ChainEventData):
"""
CommandRegister 事件的数据模型
Attributes:
# 输入参数
commands (dict): 菜单命令
origin (str): 事件源,可以是 Chain 或具体的模块名称
service (str): 服务名称
# 输出参数
source (str): 拦截源,默认值为 "未知拦截源"
cancel (bool): 是否取消注册,默认值为 False
"""
# 输入参数
commands: Dict[str, dict] = Field(..., description="菜单命令")
origin: str = Field(..., description="事件源")
service: Optional[str] = Field(..., description="服务名称")
# 输出参数
cancel: bool = Field(False, description="是否取消注册")
source: str = Field("未知拦截源", description="拦截源")
class TransferRenameEventData(ChainEventData):
"""
TransferRename 事件的数据模型
Attributes:
# 输入参数
template_string (str): Jinja2 模板字符串
rename_dict (dict): 渲染上下文
render_str (str): 渲染生成的字符串
path (Optional[Path]): 当前文件的目标路径
# 输出参数
updated (bool): 是否已更新,默认值为 False
updated_str (str): 更新后的字符串
source (str): 拦截源,默认值为 "未知拦截源"
"""
# 输入参数
template_string: str = Field(..., description="模板字符串")
rename_dict: Dict[str, Any] = Field(..., description="渲染上下文")
path: Optional[Path] = Field(None, description="文件的目标路径")
render_str: str = Field(..., description="渲染生成的字符串")
# 输出参数
updated: bool = Field(False, description="是否已更新")
updated_str: Optional[str] = Field(None, description="更新后的字符串")
source: Optional[str] = Field("未知拦截源", description="拦截源")
class ResourceSelectionEventData(BaseModel):
"""
ResourceSelection 事件的数据模型
Attributes:
# 输入参数
contexts (List[Context]): 当前待选择的资源上下文列表
downloader (str): 下载器
# 输出参数
updated (bool): 是否已更新,默认值为 False
updated_contexts (Optional[List[Context]]): 已更新的资源上下文列表,默认值为 None
source (str): 更新源,默认值为 "未知更新源"
"""
# 输入参数
contexts: Any = Field(None, description="待选择的资源上下文列表")
downloader: Optional[str] = Field(None, description="下载器")
# 输出参数
updated: bool = Field(False, description="是否已更新")
updated_contexts: Optional[List[Context]] = Field(None, description="已更新的资源上下文列表")
source: Optional[str] = Field("未知拦截源", description="拦截源")
class ResourceDownloadEventData(ChainEventData):
"""
ResourceDownload 事件的数据模型
Attributes:
# 输入参数
context (Context): 当前资源上下文
episodes (Set[int]): 需要下载的集数
channel (MessageChannel): 通知渠道
origin (str): 来源(消息通知、Subscribe、Manual 等)
downloader (str): 下载器
options (dict): 其他参数
# 输出参数
cancel (bool): 是否取消下载,默认值为 False
source (str): 拦截源,默认值为 "未知拦截源"
reason (str): 拦截原因,描述拦截的具体原因
"""
# 输入参数
context: Any = Field(None, description="当前资源上下文")
episodes: Optional[Set[int]] = Field(None, description="需要下载的集数")
channel: Optional[MessageChannel] = Field(None, description="通知渠道")
origin: Optional[str] = Field(None, description="来源")
downloader: Optional[str] = Field(None, description="下载器")
options: Optional[dict] = Field(None, description="其他参数")
# 输出参数
cancel: bool = Field(False, description="是否取消下载")
source: str = Field("未知拦截源", description="拦截源")
reason: str = Field("", description="拦截原因")


@@ -173,3 +173,4 @@ class MediaServerPlayItem(BaseModel):
image: Optional[str] = None
link: Optional[str] = None
percent: Optional[float] = None
BackdropImageTags: Optional[list] = []


@@ -1,4 +1,4 @@
from typing import Optional, Any, Union, Dict
from pydantic import BaseModel
@@ -35,7 +35,7 @@ class Site(BaseModel):
# 备注
note: Optional[Any] = None
# 超时时间
timeout: Optional[int] = 15
# 流控单位周期
limit_interval: Optional[int] = None
# 流控次数
@@ -44,6 +44,8 @@ class Site(BaseModel):
limit_seconds: Optional[int] = None
# 是否启用
is_active: Optional[bool] = True
# 下载器
downloader: Optional[str] = None
class Config:
orm_mode = True
@@ -75,7 +77,7 @@ class SiteUserData(BaseModel):
# 用户名
username: Optional[str]
# 用户ID
userid: Optional[Union[int, str]]
# 用户等级
user_level: Optional[str]
# 加入时间
@@ -108,3 +110,8 @@ class SiteUserData(BaseModel):
updated_day: Optional[str] = None
# 更新时间
updated_time: Optional[str] = None
class SiteAuth(BaseModel):
site: Optional[str] = None
params: Optional[Dict[str, Union[int, str]]] = {}
