Compare commits

...

118 Commits

Author SHA1 Message Date
jxxghp
3be29f36a7 Merge pull request #5564 from DDSRem-Dev/dev 2026-03-11 15:24:46 +08:00
DDSRem
7638db4c3b fix(plugin): return remoteEntry path without API prefix to avoid double prefix 404
- get_plugin_remote_entry returns /plugin/file/... (relative to API root)
- Frontend already prepends API base; adding API_V1_STR caused /api/v1/api/v1/...

Made-with: Cursor
2026-03-11 15:12:40 +08:00
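The double-prefix bug described above can be reproduced with a minimal sketch (function and constant names are illustrative, not MoviePilot's actual code):

```python
import posixpath

API_V1_STR = "/api/v1"  # hypothetical name for the API prefix constant

def build_remote_entry(plugin_id: str) -> str:
    # Correct: return a path relative to the API root; the frontend
    # prepends its own API base.
    return posixpath.join("/plugin/file", plugin_id, "remoteEntry.js")

def build_remote_entry_buggy(plugin_id: str) -> str:
    # Buggy: prepending the prefix here duplicates it once the frontend
    # adds its base, producing /api/v1/api/v1/... and a 404.
    return API_V1_STR + build_remote_entry(plugin_id)

good = API_V1_STR + build_remote_entry("demo")
bad = API_V1_STR + build_remote_entry_buggy("demo")
```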
DDSRem
0312a500a6 refactor(plugin): replace deprecated pkg_resources with importlib.metadata
- Use distributions() in __get_installed_packages for installed packages
- Use packaging.requirements.Requirement, drop pkg_resources dependency
- __standardize_pkg_name: normalize dots to underscores (PEP-style)
- Keep max version when multiple distributions exist for same package

Made-with: Cursor
2026-03-11 14:53:15 +08:00
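A sketch of the `pkg_resources` to `importlib.metadata` migration described above (helper names mirror the commit message; MoviePilot's private methods differ):

```python
from importlib.metadata import distributions

def standardize_pkg_name(name: str) -> str:
    # Normalize like __standardize_pkg_name: lowercase, with hyphens
    # and dots mapped to underscores (PEP 503-style, plus the dot rule).
    return name.lower().replace("-", "_").replace(".", "_")

def installed_packages() -> dict:
    # Replacement for a pkg_resources-based scan: iterate installed
    # distributions via importlib.metadata.distributions().
    pkgs = {}
    for dist in distributions():
        name = standardize_pkg_name(dist.metadata["Name"] or "")
        ver = dist.version or "0"
        # Keep the higher version when the same package appears twice;
        # real code should compare with packaging.version, not raw strings.
        if name not in pkgs or ver > pkgs[name]:
            pkgs[name] = ver
    return pkgs
```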
jxxghp
1a88b5355a Update requirements.in 2026-03-11 12:23:09 +08:00
jxxghp
3374773de5 Update version.py 2026-03-11 07:22:09 +08:00
jxxghp
872b5fe3da Merge pull request #5559 from xiaoQQya/develop 2026-03-10 21:01:57 +08:00
xiaoQQya
be15e9871c perf: improve compatibility of user level and join-time retrieval for site hhanclub 2026-03-10 19:42:04 +08:00
jxxghp
024a6a253b Merge pull request #5531 from WongWang/feat-plugin-priority 2026-03-10 12:54:39 +08:00
jxxghp
1af662df7b Merge pull request #5558 from YuF-9468/fix/5483-history-reorganize-event 2026-03-09 22:36:17 +08:00
YuF-9468
b4f64eb593 fix: preserve download context when re-organizing from history 2026-03-09 19:33:49 +08:00
jxxghp
86aa86208c Merge pull request #5557 from eNkru/feature/panda-group 2026-03-09 15:24:07 +08:00
Howard Ju
018e814615 feat(panda): add release group for PandaPT 2026-03-09 20:21:18 +13:00
jxxghp
e4d6e5cfc7 Merge pull request #5556 from YuF-9468/fix/5554-plugin-remote-entry-prefix 2026-03-09 12:02:20 +08:00
YuF-9468
770cd77632 refactor(plugin): build remoteEntry path with posixpath.join 2026-03-09 11:53:28 +08:00
YuF-9468
9f1692b33d fix(plugin): prepend API prefix for plugin remoteEntry URL 2026-03-09 11:41:42 +08:00
jxxghp
6f63e0a5d7 feat: enhance Telegram module with new functionality and improvements. 2026-03-08 09:48:42 +08:00
jxxghp
6a90e2c796 fix ide warnings 2026-03-08 08:32:29 +08:00
jxxghp
23b90ff0f9 remove app.env 2026-03-08 08:25:07 +08:00
jxxghp
dc86af2fa4 Merge pull request #5552 from EkkoG/qqbot 2026-03-08 08:23:53 +08:00
EkkoG
425b822046 feat(qqbot): enhance message sending with Markdown support and image size detection
- Added `use_markdown` parameter to `send_proactive_c2c_message` and `send_proactive_group_message` for Markdown formatting.
- Implemented methods to escape Markdown characters and format messages accordingly.
- Introduced image size detection for Markdown image rendering.
- Updated message sending logic to fallback to plain text if Markdown is unsupported.
2026-03-07 23:51:30 +08:00
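The escape-then-fallback flow in this commit can be reduced to a few lines; the special-character set below is an assumption, not the bot's documented dialect:

```python
# Characters assumed special in the bot's Markdown dialect.
MD_SPECIALS = set(r"\*_[]()~`>#+-=|{}.!")

def escape_markdown(text: str) -> str:
    # Backslash-escape every special character.
    return "".join("\\" + c if c in MD_SPECIALS else c for c in text)

def render_message(text: str, use_markdown: bool, markdown_supported: bool) -> str:
    # Fall back to plain text when Markdown is unsupported, as the commit describes.
    if use_markdown and markdown_supported:
        return escape_markdown(text)
    return text
```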
EkkoG
65c18b1d52 feat(qqbot): implement QQ Bot notification module with API and WebSocket support
- Added QQ Bot notification module to facilitate proactive message sending and message reception via Gateway.
- Implemented API functions for sending C2C and group messages.
- Established WebSocket client for real-time message handling.
- Updated requirements to include websocket-client dependency.
- Enhanced schemas to support QQ channel capabilities and notification configurations.
2026-03-07 23:21:07 +08:00
jxxghp
1bddf3daa7 Merge pull request #5550 from wumode/fix_openlist 2026-03-07 08:21:11 +08:00
wumode
600b6af876 fix(openlist): transfer queue blocking 2026-03-06 23:21:43 +08:00
jxxghp
4bdf16331d Merge pull request #5546 from ziwiwiz/fix-docker-proxy-unauthorized-access 2026-03-06 13:19:34 +08:00
ziwiwiz
87cbda0528 fix(docker): optimize docker proxy listener config for better network isolation 2026-03-06 01:33:18 +08:00
jxxghp
9897941bf9 Merge pull request #5544 from YuF-9468/fix-issue-5495-tnode-json-guard 2026-03-05 18:08:24 +08:00
YuF-9468
31938812d0 chore: add warning logs for invalid tnode seeding payload 2026-03-05 09:35:25 +08:00
YuF-9468
19d879d3f6 fix(parser): guard invalid tnode seeding json response 2026-03-05 09:21:16 +08:00
jxxghp
cc41036c63 Merge pull request #5537 from jxxghp/copilot/optimize-message-logic 2026-03-03 20:45:00 +08:00
copilot-swe-agent[bot]
a9f2b40529 test: extend media-title detection coverage and cleanup
Co-authored-by: jxxghp <51039935+jxxghp@users.noreply.github.com>
2026-03-03 12:20:54 +00:00
copilot-swe-agent[bot]
86000ea19a feat: improve user message media-title detection
Co-authored-by: jxxghp <51039935+jxxghp@users.noreply.github.com>
2026-03-03 12:14:25 +00:00
copilot-swe-agent[bot]
0422c3b9e7 Initial plan 2026-03-03 12:08:33 +00:00
jxxghp
64c8bd5b5a Merge pull request #5535 from Seed680/v2 2026-03-03 20:00:31 +08:00
jxxghp
a7eba2c5fc Merge pull request #5534 from YuF-9468/fix-workflow-rating-float 2026-03-03 19:59:04 +08:00
YuF-9468
2b7753e43e workflow: handle zero vote threshold explicitly 2026-03-03 15:41:27 +08:00
noone
47c1e5b5b8 Merge remote-tracking branch 'origin/v2' into v2 2026-03-03 14:31:24 +08:00
noone
14ee97def0 feat(meta): add video frame-rate parsing support
- Add an fps attribute to the MetaBase base class to store frame-rate information
- Implement frame-rate recognition and parsing logic in MetaVideo
- Add frame-rate extraction to MetaAnime, consistent with MetaVideo
- Update test cases to verify frame-rate information is parsed correctly
- Add expected fps values to the metadata test data
2026-03-03 14:31:12 +08:00
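A hypothetical sketch of frame-rate extraction from a release title (the actual MetaVideo patterns are not shown in this diff):

```python
import re

# Illustrative pattern only: two or three digits, optional decimal part,
# followed by "fps" (case-insensitive), not preceded by another digit.
_FPS_RE = re.compile(r"(?<!\d)(\d{2,3}(?:\.\d+)?)\s*fps", re.IGNORECASE)

def parse_fps(title: str):
    # Return the frame rate found in a release title, or None.
    m = _FPS_RE.search(title)
    return float(m.group(1)) if m else None
```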
Seed680
92e262f732 Merge branch 'jxxghp:v2' into v2 2026-03-03 14:13:07 +08:00
noone
c46880b701 feat(meta): add video frame-rate parsing support
- Add an fps attribute to the MetaBase base class to store frame-rate information
- Implement frame-rate recognition and parsing logic in MetaVideo
- Add frame-rate extraction to MetaAnime, consistent with MetaVideo
- Update test cases to verify frame-rate information is parsed correctly
- Add expected fps values to the metadata test data
2026-03-03 14:12:06 +08:00
YuF-9468
473e9b9300 workflow: allow decimal rating in filter medias 2026-03-03 13:56:24 +08:00
Castell
28945ef153 refactor: encapsulate the duplicated media-recognition mode selection logic in download.py into a selector function 2026-03-03 01:58:49 +08:00
Castell
b6b5d9f9c4 refactor: encapsulate the duplicated media-recognition mode selection logic into a selector function 2026-03-03 01:33:44 +08:00
Castell
ba5de1ab31 fix: add missing await keyword on an async function call 2026-03-03 00:37:55 +08:00
Castell
002ebeaade refactor: simplify the if/else structure in the media-recognition mode selection logic 2026-03-03 00:21:55 +08:00
Castell
894756000c feat: add option to prefer plugin-based recognition 2026-03-02 20:58:10 +08:00
jxxghp
cdb178c503 Merge pull request #5530 from cddjr/bugfix/season-regex-capture-group 2026-03-02 12:07:37 +08:00
大虾
7c48cafc71 Update app/core/meta/metavideo.py
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2026-03-02 11:47:47 +08:00
景大侠
74d4592238 fix(meta): fix regex to correctly match the Sxx season format 2026-03-02 11:35:41 +08:00
jxxghp
0044dd104e Update version.py 2026-03-02 07:04:49 +08:00
jxxghp
05041e2eae Merge pull request #5526 from baozaodetudou/orv2 2026-03-01 12:02:49 +08:00
doumao
78908f216d Merge branch 'v2' of github.com:jxxghp/MoviePilot into orv2 2026-02-28 22:58:09 +08:00
doumao
efc68ae701 fix: support configurable SSL certificate verification for the UGREEN API 2026-02-28 22:55:47 +08:00
jxxghp
e9340a8b4b Merge pull request #5525 from baozaodetudou/orv2 2026-02-28 22:50:10 +08:00
逗猫
66e199d516 Merge pull request #1 from baozaodetudou/v2
perf: use deque to optimize UGREEN media library traversal queue performance
2026-02-28 22:15:39 +08:00
doumao
6151d8a787 perf: use deque to optimize UGREEN media library traversal queue performance 2026-02-28 22:13:54 +08:00
doumao
296261da8a feat: complete UGREEN media center integration, adding scan modes and statistics display 2026-02-28 21:58:35 +08:00
doumao
383371dd6f Merge branch 'v2' of github.com:jxxghp/MoviePilot into orv2 2026-02-28 21:57:45 +08:00
jxxghp
bb8c026bda Merge pull request #5523 from YuF-9468/fix-issue-5508-manual-transfer-auto-type 2026-02-28 17:18:23 +08:00
doumao
344993dd6f Add UGREEN API encryption/decryption utilities and unit tests 2026-02-28 15:35:27 +08:00
YuF-bot
ffb048c314 refactor(transfer): narrow manual type parse exception to ValueError 2026-02-28 13:44:06 +08:00
jxxghp
3eef9b8faa Merge pull request #5522 from YuF-9468/fix-issue-5461-filemanager-test-optional-library 2026-02-28 13:31:09 +08:00
YuF-9468
5704bb646b fix(transfer): treat auto type as unspecified in manual transfer 2026-02-28 13:29:08 +08:00
YuF-9468
fbc684b3a7 fix(filemanager): skip library path check when transfer is disabled 2026-02-28 12:58:53 +08:00
jxxghp
6529b2a9c3 Merge pull request #5521 from YuF-9468/fix-issue-5463-agent-sites-list-parse 2026-02-28 12:31:54 +08:00
YuF-9468
a1701e2edf fix(agent): accept string-form sites list in search_torrents input 2026-02-28 12:30:12 +08:00
jxxghp
eba6391de7 Merge pull request #5520 from YuF-9468/fix-issue-5211-telegram-username-fallback 2026-02-28 12:17:33 +08:00
jxxghp
9f2c3c9688 Merge pull request #5517 from wumode/fix-progress-displaying 2026-02-28 12:16:13 +08:00
YuF-bot
57f5a19d0c fix(message): fallback Telegram username to string userid when absent 2026-02-28 11:10:15 +08:00
wumode
c8d53c6964 fix(ProgressHelper): progress displaying 2026-02-27 16:13:34 +08:00
jxxghp
643cda1abe Merge pull request #5516 from shawnlu96/fix/alipan-snapshot-monitoring 2026-02-27 07:07:31 +08:00
Shawn Lu
03d118a73a fix: Aliyun Drive directory-monitoring snapshots fail to detect files
1. Add an ALIPAN_SNAPSHOT_CHECK_FOLDER_MODTIME setting for Aliyun Drive (default False)
   - An Aliyun Drive directory's updated_at does not change when its children change,
     so incremental snapshots always skipped directories and produced empty results
   - Follows the same configuration pattern as Rclone/Alist

2. Remove the file-level modify_time filter in snapshot()
   - Old logic: only include files with modify_time > last_snapshot_time
   - Problem: after the first snapshot establishes a baseline, save_snapshot sets
     timestamp to max(modify_times); in later snapshots, unchanged files are excluded
     because their modify_time is not greater than timestamp, so compare_snapshots
     cannot detect any changes
   - Also, when last_snapshot_time is None, the comparison raised a TypeError
     that was silently swallowed
   - Fix: always include every file encountered during traversal and let
     compare_snapshots handle change detection; the directory-level optimization
     is still controlled by snapshot_check_folder_modtime

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 23:43:21 +08:00
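The fixed behavior — record every traversed file and let the comparison step detect changes — can be sketched minimally (this `compare_snapshots` is a simplified stand-in, not MoviePilot's implementation):

```python
def compare_snapshots(old: dict, new: dict) -> dict:
    # Keys are file paths, values are modify times; because snapshot()
    # now records every traversed file, plain dict comparison suffices.
    return {
        "added": [p for p in new if p not in old],
        "removed": [p for p in old if p not in new],
        "changed": [p for p in new if p in old and new[p] != old[p]],
    }

base = {"/a.mkv": 100, "/b.mkv": 200}
later = {"/a.mkv": 100, "/b.mkv": 250, "/c.mkv": 300}
diff = compare_snapshots(base, later)
```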
jxxghp
51dd7f5c17 Merge pull request #5512 from cddjr/bugfix/issue-5501 2026-02-25 21:13:35 +08:00
jxxghp
af7e1e7a3c Merge pull request #5509 from xiaoQQya/develop 2026-02-25 21:13:00 +08:00
大虾
ea5d855bc3 Update app/helper/directory.py
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2026-02-25 20:21:37 +08:00
景大侠
5f74367cd6 fix: TV show scraping issue 2026-02-25 20:18:05 +08:00
jxxghp
26e41e1c14 Update version.py 2026-02-24 19:25:20 +08:00
xiaoQQya
1bb2b50043 fix: user level and join time not displayed for site hhanclub 2026-02-23 21:44:23 +08:00
jxxghp
7bdb629f03 Merge pull request #5505 from DDSRem-Dev/rtorrent 2026-02-22 16:10:39 +08:00
jxxghp
fd92f986da Merge pull request #5504 from DDSRem-Dev/fix_smb_alipan 2026-02-22 16:10:08 +08:00
DDSRem
69a1207102 chore(rtorrent): formatting code 2026-02-22 13:42:27 +08:00
DDSRem
def652c768 fix(rtorrent): address code review feedback
- Replace direct _proxy access in transfer_completed with set_torrents_tag(overwrite=True) for proper encapsulation and error logging
- Optimize episode collection by using set accumulation instead of repeated list-set conversions in loop
- Fix type hint for hashs parameter in transfer_completed (str -> Union[str, list])
- Add overwrite parameter to set_torrents_tag to support tag replacement

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-22 13:40:15 +08:00
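The set-accumulation point in the review feedback reads, in miniature (the torrent structure here is hypothetical):

```python
def collect_episodes(torrents):
    # Accumulate into one set, instead of converting list <-> set on
    # every loop iteration, per the review fix above.
    episodes = set()
    for t in torrents:
        episodes.update(t.get("episodes", []))
    return sorted(episodes)
```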
DDSRem
c35faf5356 feat(downloader): add rTorrent downloader support
Implement rTorrent downloader module via XML-RPC protocol, supporting both HTTP (nginx/ruTorrent proxy) and SCGI connection modes. Add RtorrentModule implementing _ModuleBase and _DownloaderBase interfaces with no extra dependencies.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-22 13:12:22 +08:00
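The HTTP (nginx/ruTorrent proxy) connection mode needs only the standard library, consistent with "no extra dependencies"; a sketch (URL and helper name are illustrative, and SCGI mode would require a custom transport):

```python
import xmlrpc.client

def rtorrent_http_proxy(url: str = "http://localhost/RPC2") -> xmlrpc.client.ServerProxy:
    # rTorrent's XML-RPC endpoint exposed behind an nginx/ruTorrent proxy.
    return xmlrpc.client.ServerProxy(url)

proxy = rtorrent_http_proxy()
# On a live instance, e.g. proxy.download_list() would list torrent hashes.
```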
jxxghp
0615a33206 Merge pull request #5503 from DDSRem-Dev/fix_u115 2026-02-22 13:00:16 +08:00
DDSRem
e77530bdc5 fix(storages): download directory concatenation error 2026-02-22 12:35:27 +08:00
DDSRem
8c62df63cc fix(u115): download directory concatenation error
fix: https://github.com/jxxghp/MoviePilot/issues/5429
2026-02-22 12:22:58 +08:00
jxxghp
bd36eade77 Merge pull request #5502 from DDSRem-Dev/dev 2026-02-22 12:17:33 +08:00
DDSRem
d2c023081a fix(openList): openList file upload and retrieval errors
fix https://github.com/jxxghp/MoviePilot/issues/5369
fix https://github.com/jxxghp/MoviePilot/issues/5038
2026-02-22 12:05:14 +08:00
jxxghp
63d0850b38 Merge pull request #5498 from cddjr/feat/recommend_manual_force_refresh 2026-02-13 18:39:21 +08:00
景大侠
c86659428f feat(recommend): force-refresh data when the recommend cache service is run manually 2026-02-13 18:17:42 +08:00
jxxghp
bf7cc6caf0 Merge pull request #5497 from cddjr/bugfix/glitchtip_9684 2026-02-13 17:09:04 +08:00
jxxghp
26b8be6041 Merge pull request #5496 from cddjr/bugfix/issue_5456 2026-02-13 17:08:21 +08:00
景大侠
f978f9196f fix(transfer): torrents deleted too early in move mode
- Revert part of commit 4502a9c
2026-02-13 13:28:05 +08:00
景大侠
75cb8d2a3c fix(torrents): fix "Failed to exists key: None" error caused by missing torrent links when refreshing site resources 2026-02-12 17:45:15 +08:00
jxxghp
17a21ed707 Update version.py 2026-02-12 07:09:45 +08:00
jxxghp
f390647139 fix(site): also update the domain field when updating site information 2026-02-12 06:59:13 +08:00
jxxghp
aacd91e196 Merge pull request #5487 from cddjr/bugfix/issue_5242 2026-02-11 16:02:54 +08:00
景大侠
258171c9c4 fix(telegram): fix stray ** symbols displayed when a notification title contains special characters 2026-02-11 09:20:50 +08:00
jxxghp
812c5873aa Merge pull request #5486 from cddjr/feat/shared-sync-async-cache 2026-02-10 22:11:42 +08:00
景大侠
4c3d47f1f0 feat(cache): sync/async functions can share a cache
- Cache keys support custom names, letting async and sync functions share cached results
- The in-memory cache is now a class variable, so multiple cache decorators share one cache space
- Refactor AsyncMemoryBackend to reduce duplicated code
- Add missing cache-clearing support in some modules
2026-02-10 18:46:49 +08:00
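The custom-key idea — letting a sync and an async function share one cache entry — can be reduced to the following sketch (MoviePilot's `cached` decorator has a much richer API; these names are illustrative):

```python
import asyncio

_CACHE: dict = {}  # shared, "class-level" store used by all decorators

def cached(key_name):
    # Sync decorator keyed on a custom name rather than the function object.
    def deco(fn):
        def wrapper(*args):
            k = (key_name, args)
            if k not in _CACHE:
                _CACHE[k] = fn(*args)
            return _CACHE[k]
        return wrapper
    return deco

def cached_async(key_name):
    # Async decorator using the same store and the same key scheme,
    # so sync and async callers share results.
    def deco(fn):
        async def wrapper(*args):
            k = (key_name, args)
            if k not in _CACHE:
                _CACHE[k] = await fn(*args)
            return _CACHE[k]
        return wrapper
    return deco

calls = []

@cached("square")
def square_sync(x):
    calls.append("sync")
    return x * x

@cached_async("square")
async def square_async(x):
    calls.append("async")
    return x * x
```

Because both decorators key on `("square", args)`, the async call below hits the entry the sync call populated and never recomputes.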
jxxghp
ba7b6ba869 Merge pull request #5485 from yubanmeiqin9048/patch-2 2026-02-10 17:41:51 +08:00
yubanmeiqin9048
d0471ae512 fix: transferring subtitles/audio when the target directory has no video files triggered directory deletion 2026-02-10 14:10:42 +08:00
jxxghp
636c4be9fb Update version.py 2026-02-07 08:13:43 +08:00
jxxghp
6bec765a9d Merge pull request #5474 from jxxghp/copilot/optimize-file-move-implementation 2026-02-06 22:20:11 +08:00
copilot-swe-agent[bot]
d61d16ccc4 Restore the optimization - accidentally reverted in previous commit
Co-authored-by: jxxghp <51039935+jxxghp@users.noreply.github.com>
2026-02-06 14:15:29 +00:00
copilot-swe-agent[bot]
f2a5715b24 Co-authored-by: jxxghp <51039935+jxxghp@users.noreply.github.com> 2026-02-06 14:11:15 +00:00
copilot-swe-agent[bot]
c064c3781f Optimize SystemUtils.move to avoid triggering directory monitoring
Co-authored-by: jxxghp <51039935+jxxghp@users.noreply.github.com>
2026-02-06 14:03:03 +00:00
copilot-swe-agent[bot]
bb4dffe2a4 Initial plan 2026-02-06 13:59:59 +00:00
jxxghp
37cf3eeef3 Merge pull request #5473 from cddjr/feat_transfer_files_filter 2026-02-06 21:04:52 +08:00
景大侠
40395b2999 feat: apply filtering while building the list of files to organize, simplifying later processing 2026-02-06 20:56:26 +08:00
景大侠
32afe6445f fix: organize-success event missing the history record ID 2026-02-06 20:33:13 +08:00
jxxghp
793a991913 Merge remote-tracking branch 'origin/v2' into v2 2026-02-05 14:16:55 +08:00
jxxghp
d278224ff1 fix: improve detection message for third-party plugin storage types 2026-02-05 14:16:50 +08:00
jxxghp
9b4d0ce6a8 Merge pull request #5466 from DDSRem-Dev/dev 2026-02-05 06:56:25 +08:00
DDSRem
a1829fe590 feat: u115 global rate limiting strategy 2026-02-04 23:24:14 +08:00
jxxghp
2b2b39365c Merge pull request #5464 from ChanningHe/enhance/discord 2026-02-04 18:08:38 +08:00
ChanningHe
1147930f3f fix: [slack&discord&telegram] handle special characters in config names 2026-02-04 14:09:40 +09:00
ChanningHe
636f338ed7 enhance: [discord] add _user_chat_mapping to chat in channel 2026-02-04 13:42:33 +09:00
ChanningHe
72365d00b4 enhance: discord debug information 2026-02-04 12:54:17 +09:00
79 changed files with 6684 additions and 708 deletions

.gitignore vendored
View File

@@ -27,4 +27,7 @@ venv
# Pylint
pylint-report.json
.pylint.d/
.pylint.d/
# AI
.claude/

View File

@@ -4,7 +4,7 @@ import json
import re
from typing import List, Optional, Type
from pydantic import BaseModel, Field
from pydantic import BaseModel, Field, field_validator
from app.agent.tools.base import MoviePilotTool
from app.chain.search import SearchChain
@@ -28,6 +28,28 @@ class SearchTorrentsInput(BaseModel):
filter_pattern: Optional[str] = Field(None,
description="Regular expression pattern to filter torrent titles by resolution, quality, or other keywords (e.g., '4K|2160p|UHD' for 4K content, '1080p|BluRay' for 1080p BluRay)")
@field_validator("sites", mode="before")
@classmethod
def normalize_sites(cls, value):
"""兼容字符串格式的站点列表(如 "[28]"、"28,30")"""
if value is None:
return value
if isinstance(value, str):
value = value.strip()
if not value:
return None
try:
parsed = json.loads(value)
if isinstance(parsed, list):
return parsed
except Exception:
pass
if "," in value:
return [v.strip() for v in value.split(",") if v.strip()]
if value.isdigit():
return [value]
return value
class SearchTorrentsTool(MoviePilotTool):
name: str = "search_torrents"

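Assuming pydantic v2's `field_validator(..., mode="before")` semantics, the normalization above can be exercised without the framework; a reduced, standalone version:

```python
import json

def normalize_sites(value):
    # Framework-free reduction of the field_validator in the hunk above:
    # accept string-form site lists such as "[28]" or "28,30".
    if value is None:
        return value
    if isinstance(value, str):
        value = value.strip()
        if not value:
            return None
        try:
            parsed = json.loads(value)
            if isinstance(parsed, list):
                return parsed
        except Exception:
            pass
        if "," in value:
            return [v.strip() for v in value.split(",") if v.strip()]
        if value.isdigit():
            return [value]
    return value
```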
View File

@@ -26,11 +26,17 @@ def statistic(name: Optional[str] = None, _: schemas.TokenPayload = Depends(veri
if media_statistics:
# 汇总各媒体库统计信息
ret_statistic = schemas.Statistic()
has_episode_count = False
for media_statistic in media_statistics:
ret_statistic.movie_count += media_statistic.movie_count
ret_statistic.tv_count += media_statistic.tv_count
ret_statistic.episode_count += media_statistic.episode_count
ret_statistic.user_count += media_statistic.user_count
ret_statistic.movie_count += media_statistic.movie_count or 0
ret_statistic.tv_count += media_statistic.tv_count or 0
ret_statistic.user_count += media_statistic.user_count or 0
if media_statistic.episode_count is not None:
ret_statistic.episode_count += media_statistic.episode_count or 0
has_episode_count = True
if not has_episode_count:
# 所有媒体服务都未提供剧集统计时,返回 None 供前端展示“未获取”。
ret_statistic.episode_count = None
return ret_statistic
else:
return schemas.Statistic()
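The episode-count handling above distinguishes "no media server reported a count" (None) from a genuine total of 0; as a standalone sketch:

```python
def total_episodes(counts):
    # Sum per-server episode counts; return None only when no server
    # provided a count at all, so the frontend can show "not available".
    total, seen = 0, False
    for c in counts:
        if c is not None:
            total += c
            seen = True
    return total if seen else None
```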

View File

@@ -5,6 +5,7 @@ from fastapi import APIRouter, Depends, Body
from app import schemas
from app.chain.download import DownloadChain
from app.chain.media import MediaChain
from app.core.config import settings
from app.core.context import MediaInfo, Context, TorrentInfo
from app.core.event import eventmanager
from app.core.metainfo import MetaInfo
@@ -77,13 +78,14 @@ def add(
# 元数据
metainfo = MetaInfo(title=torrent_in.title, subtitle=torrent_in.description)
# 媒体信息
mediainfo = MediaChain().recognize_media(meta=metainfo, tmdbid=tmdbid, doubanid=doubanid)
mediainfo = MediaChain().select_recognize_source(
log_name=torrent_in.title,
log_context=torrent_in.title,
native_fn=lambda: MediaChain().recognize_media(meta=metainfo, tmdbid=tmdbid, doubanid=doubanid),
plugin_fn=lambda: MediaChain().recognize_help(title=torrent_in.title, org_meta=metainfo)
)
if not mediainfo:
# 尝试使用辅助识别,如果有注册响应事件的话
if eventmanager.check(ChainEventType.NameRecognize):
mediainfo = MediaChain().recognize_help(title=torrent_in.title, org_meta=metainfo)
if not mediainfo:
return schemas.Response(success=False, message="无法识别媒体信息")
return schemas.Response(success=False, message="无法识别媒体信息")
# 种子信息
torrentinfo = TorrentInfo()
torrentinfo.from_dict(torrent_in.model_dump())

View File

@@ -92,10 +92,14 @@ async def update_site(
# 校正地址格式
_scheme, _netloc = StringUtils.get_url_netloc(site_in.url)
site_in.url = f"{_scheme}://{_netloc}/"
site_in.domain = StringUtils.get_url_domain(site_in.url)
await site.async_update(db, site_in.model_dump())
# 通知站点更新
await eventmanager.async_send_event(EventType.SiteUpdated, {
"domain": site_in.domain
"site_id": site_in.id,
"domain": site_in.domain,
"name": site_in.name,
"site_url": site_in.url
})
return schemas.Response(success=True)

View File

@@ -615,7 +615,10 @@ def run_scheduler(jobid: str,
"""
if not jobid:
return schemas.Response(success=False, message="命令不能为空!")
Scheduler().start(jobid)
if jobid in {"recommend_refresh", "cookiecloud"}:
Scheduler().start(jobid, manual=True)
else:
Scheduler().start(jobid)
return schemas.Response(success=True)
@@ -628,5 +631,8 @@ def run_scheduler2(jobid: str,
if not jobid:
return schemas.Response(success=False, message="命令不能为空!")
Scheduler().start(jobid)
if jobid in {"recommend_refresh", "cookiecloud"}:
Scheduler().start(jobid, manual=True)
else:
Scheduler().start(jobid)
return schemas.Response(success=True)

View File

@@ -93,6 +93,8 @@ def manual_transfer(transer_item: ManualTransferItem,
:param _: Token校验
"""
force = False
downloader = None
download_hash = None
target_path = Path(transer_item.target_path) if transer_item.target_path else None
if transer_item.logid:
# 查询历史记录
@@ -101,6 +103,8 @@ def manual_transfer(transer_item: ManualTransferItem,
return schemas.Response(success=False, message=f"整理记录不存在,ID:{transer_item.logid}")
# 强制转移
force = True
downloader = history.downloader
download_hash = history.download_hash
if history.status and ("move" in history.mode):
# 重新整理成功的转移,则使用成功的 dest 做 in_path
src_fileitem = FileItem(**history.dest_fileitem)
@@ -121,6 +125,7 @@ def manual_transfer(transer_item: ManualTransferItem,
transer_item.tmdbid = int(history.tmdbid) if history.tmdbid else transer_item.tmdbid
transer_item.doubanid = str(history.doubanid) if history.doubanid else transer_item.doubanid
transer_item.season = int(str(history.seasons).replace("S", "")) if history.seasons else transer_item.season
transer_item.episode_group = history.episode_group or transer_item.episode_group
if history.episodes:
if "-" in str(history.episodes):
# E01-E03多集合并
@@ -138,8 +143,14 @@ def manual_transfer(transer_item: ManualTransferItem,
else:
return schemas.Response(success=False, message=f"缺少参数")
# 类型
mtype = MediaType(transer_item.type_name) if transer_item.type_name else None
# 类型(“自动/auto/none”按未指定处理)
mtype = None
type_name = str(transer_item.type_name).strip() if transer_item.type_name else ""
if type_name and type_name.lower() not in {"自动", "auto", "none"}:
try:
mtype = MediaType(type_name)
except ValueError:
return schemas.Response(success=False, message=f"不支持的媒体类型:{type_name}")
# 自定义格式
epformat = None
if transer_item.episode_offset or transer_item.episode_part \
@@ -167,7 +178,9 @@ def manual_transfer(transer_item: ManualTransferItem,
library_type_folder=transer_item.library_type_folder,
library_category_folder=transer_item.library_category_folder,
force=force,
background=background
background=background,
downloader=downloader,
download_hash=download_hash
)
# 失败
if not state:

View File

@@ -85,21 +85,48 @@ class MediaChain(ChainBase):
"""
return self.run_module("metadata_nfo", meta=meta, mediainfo=mediainfo, season=season, episode=episode)
def select_recognize_source(self, log_name: str, log_context: str,
native_fn, plugin_fn) -> Optional[MediaInfo]:
"""
选择识别模式,插件优先或原生优先
:param log_name: 用于日志“标题:...”处的名称(如 file_path.name 或 title)
:param log_context: 用于日志“未识别到...的媒体信息”处的上下文(如 path 或 title)
:param native_fn: 原生识别函数
:param plugin_fn: 插件识别函数
"""
mediainfo = None
plugin_available = eventmanager.check(ChainEventType.NameRecognize)
if settings.RECOGNIZE_PLUGIN_FIRST and plugin_available:
# 插件优先
logger.info(f"插件优先模式已开启。请求辅助识别,标题:{log_name} ...")
mediainfo = plugin_fn()
if not mediainfo:
logger.info(f'辅助识别未识别到 {log_context} 的媒体信息,尝试使用原生识别')
mediainfo = native_fn()
else:
# 原生优先
logger.info(f"插件优先模式未开启。尝试原生识别,标题:{log_name} ...")
mediainfo = native_fn()
if not mediainfo and plugin_available:
logger.info(f'原生识别未识别到 {log_context} 的媒体信息,尝试使用辅助识别')
mediainfo = plugin_fn()
return mediainfo
def recognize_by_meta(self, metainfo: MetaBase, episode_group: Optional[str] = None) -> Optional[MediaInfo]:
"""
根据主副标题识别媒体信息
"""
title = metainfo.title
# 识别媒体信息
mediainfo: MediaInfo = self.recognize_media(meta=metainfo, episode_group=episode_group)
# 按 config 中设置的识别顺序识别
mediainfo = self.select_recognize_source(
log_name=title,
log_context=title,
native_fn=lambda: self.recognize_media(meta=metainfo, episode_group=episode_group),
plugin_fn=lambda: self.recognize_help(title=title, org_meta=metainfo)
)
if not mediainfo:
# 尝试使用辅助识别,如果有注册响应事件的话
if eventmanager.check(ChainEventType.NameRecognize):
logger.info(f'请求辅助识别,标题:{title} ...')
mediainfo = self.recognize_help(title=title, org_meta=metainfo)
if not mediainfo:
logger.warn(f'{title} 未识别到媒体信息')
return None
logger.warn(f'{title} 未识别到媒体信息')
return None
# 识别成功
logger.info(f'{title} 识别到媒体信息:{mediainfo.type.value} {mediainfo.title_year}')
# 更新媒体图片
@@ -163,16 +190,16 @@ class MediaChain(ChainBase):
file_path = Path(path)
# 元数据
file_meta = MetaInfoPath(file_path)
# 识别媒体信息
mediainfo = self.recognize_media(meta=file_meta, episode_group=episode_group)
# 按 config 中设置的识别顺序识别
mediainfo = self.select_recognize_source(
log_name=file_path.name,
log_context=path,
native_fn=lambda: self.recognize_media(meta=file_meta, episode_group=episode_group),
plugin_fn=lambda: self.recognize_help(title=path, org_meta=file_meta)
)
if not mediainfo:
# 尝试使用辅助识别,如果有注册响应事件的话
if eventmanager.check(ChainEventType.NameRecognize):
logger.info(f'请求辅助识别,标题:{file_path.name} ...')
mediainfo = self.recognize_help(title=path, org_meta=file_meta)
if not mediainfo:
logger.warn(f'{path} 未识别到媒体信息')
return Context(meta_info=file_meta)
logger.warn(f'{path} 未识别到媒体信息')
return Context(meta_info=file_meta)
logger.info(f'{path} 识别到媒体信息:{mediainfo.type.value} {mediainfo.title_year}')
# 更新媒体图片
self.obtain_images(mediainfo=mediainfo)
@@ -435,7 +462,7 @@ class MediaChain(ChainBase):
"""
列出下级文件
"""
return storagechain.list_files(fileitem=_fileitem)
return storagechain.list_files(fileitem=_fileitem) or []
def __save_file(_fileitem: schemas.FileItem, _path: Path, _content: Union[bytes, str]):
"""
@@ -668,7 +695,7 @@ class MediaChain(ChainBase):
if (
file.type == "dir"
and file.name not in settings.RENAME_FORMAT_S0_NAMES
and not file.name.lower().startswith("season")
and MetaInfo(file.name).begin_season is None
):
# 电视剧不处理非季子目录
continue
@@ -689,9 +716,13 @@ class MediaChain(ChainBase):
if filepath.name in settings.RENAME_FORMAT_S0_NAMES:
season_meta.begin_season = 0
elif season_meta.name and season_meta.begin_season is not None:
# 当前目录含有非季目录的名称,但却有季信息(通常是被辅助识别词指定了)
# 这种情况应该是剧集根目录,不能按季目录刮削,否则会导致`season_poster`的路径错误 详见issue#5373
season_meta.begin_season = None
# 目录含剧名且包含季号,需排除辅助词重新识别元数据,避免误判根目录 (issue 5501)
season_meta_no_custom = MetaInfo(
filepath.name, custom_words=["#"]
)
if season_meta_no_custom.begin_season is None:
# 季号是由辅助词指定的,按剧集根目录处理,避免`season_poster`路径错误 (issue 5373)
season_meta.begin_season = None
if season_meta.begin_season is not None:
# 检查季NFO开关
if scraping_switchs.get('season_nfo', True):
@@ -812,24 +843,58 @@ class MediaChain(ChainBase):
logger.info(f"已存在图片文件:{image_path}")
else:
logger.info(f"电视剧图片刮削已关闭,跳过:{image_name}")
else:
logger.warn("无法识别元数据,跳过")
logger.info(f"{filepath.name} 刮削完成")
async def async_select_recognize_source(self, log_name: str, log_context: str,
native_fn, plugin_fn) -> Optional[MediaInfo]:
"""
选择识别模式,插件优先或原生优先(异步版本)
:param log_name: 用于日志“标题:...”处的名称(如 file_path.name 或 title)
:param log_context: 用于日志“未识别到...的媒体信息”处的上下文(如 path 或 title)
:param native_fn: 原生识别函数
:param plugin_fn: 插件识别函数
"""
mediainfo = None
plugin_available = eventmanager.check(ChainEventType.NameRecognize)
if settings.RECOGNIZE_PLUGIN_FIRST and plugin_available:
# 插件优先
logger.info(f"插件优先模式已开启。请求辅助识别,标题:{log_name} ...")
mediainfo = await plugin_fn()
if not mediainfo:
logger.info(f'辅助识别未识别到 {log_context} 的媒体信息,尝试使用原生识别')
mediainfo = await native_fn()
else:
# 原生优先
logger.info(f"插件优先模式未开启。尝试原生识别,标题:{log_name} ...")
mediainfo = await native_fn()
if not mediainfo and plugin_available:
logger.info(f'原生识别未识别到 {log_context} 的媒体信息,尝试使用辅助识别')
mediainfo = await plugin_fn()
return mediainfo
async def async_recognize_by_meta(self, metainfo: MetaBase,
episode_group: Optional[str] = None) -> Optional[MediaInfo]:
"""
根据主副标题识别媒体信息(异步版本)
"""
title = metainfo.title
# 识别媒体信息
mediainfo: MediaInfo = await self.async_recognize_media(meta=metainfo, episode_group=episode_group)
# 定义识别函数
async def native_recognize():
return await self.async_recognize_media(meta=metainfo, episode_group=episode_group)
async def plugin_recognize():
return await self.async_recognize_help(title=title, org_meta=metainfo)
# 按 config 中设置的识别顺序识别
mediainfo = await self.async_select_recognize_source(
log_name=title,
log_context=title,
native_fn=native_recognize,
plugin_fn=plugin_recognize
)
if not mediainfo:
# 尝试使用辅助识别,如果有注册响应事件的话
if eventmanager.check(ChainEventType.NameRecognize):
logger.info(f'请求辅助识别,标题:{title} ...')
mediainfo = await self.async_recognize_help(title=title, org_meta=metainfo)
if not mediainfo:
logger.warn(f'{title} 未识别到媒体信息')
return None
logger.warn(f'{title} 未识别到媒体信息')
return None
# 识别成功
logger.info(f'{title} 识别到媒体信息:{mediainfo.type.value} {mediainfo.title_year}')
# 更新媒体图片
@@ -893,16 +958,21 @@ class MediaChain(ChainBase):
file_path = Path(path)
# 元数据
file_meta = MetaInfoPath(file_path)
# 识别媒体信息
mediainfo = await self.async_recognize_media(meta=file_meta, episode_group=episode_group)
# 定义识别函数
async def native_recognize():
return await self.async_recognize_media(meta=file_meta, episode_group=episode_group)
async def plugin_recognize():
return await self.async_recognize_help(title=path, org_meta=file_meta)
# 按 config 中设置的识别顺序识别
mediainfo = await self.async_select_recognize_source(
log_name=file_path.name,
log_context=path,
native_fn=native_recognize,
plugin_fn=plugin_recognize
)
if not mediainfo:
# 尝试使用辅助识别,如果有注册响应事件的话
if eventmanager.check(ChainEventType.NameRecognize):
logger.info(f'请求辅助识别,标题:{file_path.name} ...')
mediainfo = await self.async_recognize_help(title=path, org_meta=file_meta)
if not mediainfo:
logger.warn(f'{path} 未识别到媒体信息')
return Context(meta_info=file_meta)
logger.warn(f'{path} 未识别到媒体信息')
return Context(meta_info=file_meta)
logger.info(f'{path} 识别到媒体信息:{mediainfo.type.value} {mediainfo.title_year}')
# 更新媒体图片
await self.async_obtain_images(mediainfo=mediainfo)

View File

@@ -112,8 +112,8 @@ class MessageChain(ChainBase):
channel = info.channel
# 用户ID
userid = info.userid
# 用户名
username = info.username or userid
# 用户名(当渠道未提供公开用户名时,回退为 userid 的字符串,避免后续类型校验异常)
username = str(info.username) if info.username not in (None, "") else str(userid)
if userid is None or userid == '':
logger.debug(f'未识别到用户ID{body}{form}{args}')
return
@@ -490,18 +490,14 @@ class MessageChain(ChainBase):
# 重新搜索/下载
content = re.sub(r"(搜索|下载)[:\s]*", "", text)
action = "ReSearch"
elif text.startswith("#") \
or re.search(r"^请[问帮你]", text) \
or re.search(r"[?]$", text) \
or StringUtils.count_words(text) > 10 \
or text.find("继续") != -1:
# 聊天
content = text
action = "Chat"
elif StringUtils.is_link(text):
# 链接
content = text
action = "Link"
elif not StringUtils.is_media_title_like(text):
# 聊天
content = text
action = "Chat"
else:
# 搜索
content = text

View File

@@ -6,7 +6,7 @@ from app.chain import ChainBase
from app.chain.bangumi import BangumiChain
from app.chain.douban import DoubanChain
from app.chain.tmdb import TmdbChain
from app.core.cache import cached
from app.core.cache import cached, fresh
from app.core.config import settings, global_vars
from app.helper.image import ImageHelper
from app.log import logger
@@ -27,9 +27,11 @@ class RecommendChain(ChainBase, metaclass=Singleton):
# 推荐缓存区域
recommend_cache_region = "recommend"
def refresh_recommend(self):
def refresh_recommend(self, manual: bool = False):
"""
刷新推荐
:param manual: 手动触发
"""
logger.debug("Starting to refresh Recommend data.")
@@ -62,7 +64,9 @@ class RecommendChain(ChainBase, metaclass=Singleton):
if method in methods_finished:
continue
logger.debug(f"Fetch {method.__name__} data for page {page}.")
data = method(page=page)
# 手动触发的刷新,总是需要获取最新数据
with fresh(manual):
data = method(page=page)
if not data:
logger.debug("All recommendation methods have finished fetching data. Ending pagination early.")
methods_finished.add(method)
@@ -90,7 +94,6 @@ class RecommendChain(ChainBase, metaclass=Singleton):
poster_path = data.get("poster_path")
if poster_path:
poster_url = poster_path.replace("original", "w500")
logger.debug(f"Caching poster image: {poster_url}")
self.__fetch_and_save_image(poster_url)
@staticmethod

View File

@@ -156,7 +156,7 @@ class StorageChain(ChainBase):
"""
判断是否包含蓝光必备的文件夹
"""
required_files = ("BDMV", "CERTIFICATE")
required_files = {"BDMV", "CERTIFICATE"}
return any(
item.type == "dir" and item.name in required_files
for item in fileitems or []
@@ -166,7 +166,7 @@ class StorageChain(ChainBase):
"""
删除媒体文件,以及不含媒体文件的目录
"""
media_exts = settings.RMT_MEDIAEXT + settings.DOWNLOAD_TMPEXT
media_exts = settings.RMT_MEDIAEXT + settings.DOWNLOAD_TMPEXT + settings.RMT_SUBEXT + settings.RMT_AUDIOEXT
fileitem_path = Path(fileitem.path) if fileitem.path else Path("")
if len(fileitem_path.parts) <= 2:
logger.warn(f"{fileitem.storage}{fileitem.path} 根目录或一级目录不允许删除")

View File

@@ -265,6 +265,9 @@ class TorrentsChain(ChainBase):
for torrent in torrents:
if global_vars.is_system_stopped:
break
if not torrent.enclosure:
logger.warn(f"缺少种子链接,忽略处理: {torrent.title}")
continue
logger.info(f'处理资源:{torrent.title} ...')
# 识别
meta = MetaInfo(title=torrent.title, subtitle=torrent.description)

View File

@@ -29,6 +29,7 @@ from app.log import logger
from app.schemas import StorageOperSelectionEventData
from app.schemas import TransferInfo, Notification, EpisodeFormat, FileItem, TransferDirectoryConf, \
TransferTask, TransferQueue, TransferJob, TransferJobTask
from app.schemas.exception import OperationInterrupted
from app.schemas.types import TorrentStatus, EventType, MediaType, ProgressKey, NotificationType, MessageChannel, \
SystemConfigKey, ChainEventType, ContentType
from app.utils.mixins import ConfigReloadMixin
@@ -345,11 +346,13 @@ class JobManager:
检查指定种子的所有任务是否都已完成
"""
with job_lock:
for job in self._job_view.values():
for task in job.tasks:
if task.download_hash == download_hash:
if task.state not in ["completed", "failed"]:
return False
if any(
task.state not in {"completed", "failed"}
for job in self._job_view.values()
for task in job.tasks
if task.download_hash == download_hash
):
return False
return True
def is_torrent_success(self, download_hash: str) -> bool:
@@ -357,11 +360,13 @@ class JobManager:
检查指定种子的所有任务是否都已成功
"""
with job_lock:
for job in self._job_view.values():
for task in job.tasks:
if task.download_hash == download_hash:
if task.state not in ["completed"]:
return False
if any(
task.state != "completed"
for job in self._job_view.values()
for task in job.tasks
if task.download_hash == download_hash
):
return False
return True
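Both loop-to-`any()` refactors above follow the same shape: one generator expression filters tasks by hash and tests every match against a terminal-state set. Isolated on stand-in `Task`/`Job` dataclasses (the real schemas live in `app.schemas`):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    download_hash: str
    state: str

@dataclass
class Job:
    tasks: List[Task] = field(default_factory=list)

def is_torrent_finished(jobs, download_hash: str) -> bool:
    # Finished = no task for this hash is still outside a terminal state;
    # a hash with no tasks at all is vacuously finished
    return not any(
        task.state not in {"completed", "failed"}
        for job in jobs
        for task in job.tasks
        if task.download_hash == download_hash
    )
```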
def has_tasks(self, meta: MetaBase, mediainfo: Optional[MediaInfo] = None, season: Optional[int] = None) -> bool:
@@ -751,15 +756,18 @@ class TransferChain(ChainBase, ConfigReloadMixin, metaclass=Singleton):
if self.jobview.is_success(task):
# 所有成功的业务
tasks = self.jobview.success_tasks(task.mediainfo, task.meta.begin_season)
# 获取整理屏蔽词
transfer_exclude_words = SystemConfigOper().get(SystemConfigKey.TransferExcludeWords)
processed_hashes = set()
for t in tasks:
if t.download_hash and t.download_hash not in processed_hashes:
# 检查该种子的所有任务(跨作业)是否都已成功
if self.jobview.is_torrent_success(t.download_hash):
processed_hashes.add(t.download_hash)
# 移除种子及文件
if self.remove_torrents(t.download_hash, downloader=t.downloader):
logger.info(f"移动模式删除种子成功:{t.download_hash}")
if self._can_delete_torrent(t.download_hash, t.downloader, transfer_exclude_words):
# 移除种子及文件
if self.remove_torrents(t.download_hash, downloader=t.downloader):
logger.info(f"移动模式删除种子成功:{t.download_hash}")
if not t.download_hash and t.fileitem:
# 删除剩余空目录
StorageChain().delete_media_file(t.fileitem, delete_self=False)
@@ -947,7 +955,7 @@ class TransferChain(ChainBase, ConfigReloadMixin, metaclass=Singleton):
# 如果未开启“新增已入库媒体是否跟随TMDB信息变化”,则根据tmdbid查询之前的title
if not settings.SCRAP_FOLLOW_TMDB:
transfer_history = transferhis.get_by_type_tmdbid(tmdbid=mediainfo.tmdb_id,
transfer_history = transferhis.get_by_type_tmdbid(tmdbid=mediainfo.tmdb_id,
mtype=mediainfo.type.value)
if transfer_history and mediainfo.title != transfer_history.title:
mediainfo.title = transfer_history.title
@@ -1169,14 +1177,29 @@ class TransferChain(ChainBase, ConfigReloadMixin, metaclass=Singleton):
return True
def __get_trans_fileitems(
self, fileitem: FileItem, check: bool = True
self,
fileitem: FileItem,
predicate: Optional[Callable[[FileItem, bool], bool]],
verify_file_exists: bool = True,
) -> List[Tuple[FileItem, bool]]:
"""
获取整理目录或文件列表
获取整理文件列表
:param fileitem: 文件项
:param check: 检查文件是否存在默认为True
:param fileitem: 文件项
:param predicate: 用于筛选目录或文件项
该函数接收两个参数:
- `file_item`: 需要判断的文件项(类型为 `FileItem`)
- `is_bluray_dir`: 表示该项是否为蓝光原盘目录(布尔值)
函数应返回 `True` 表示保留该项,`False` 表示过滤掉
若 `predicate` 为 `None`,则默认保留所有项
:param verify_file_exists: 验证目录或文件是否存在,默认值为 `True`
"""
if global_vars.is_system_stopped:
raise OperationInterrupted()
storagechain = StorageChain()
def __is_bluray_sub(_path: str) -> bool:
@@ -1194,7 +1217,12 @@ class TransferChain(ChainBase, ConfigReloadMixin, metaclass=Singleton):
return storagechain.get_file_item(storage=_storage, path=p.parent)
return None
if check:
def _apply_predicate(file_item: FileItem, is_bluray_dir: bool) -> List[Tuple[FileItem, bool]]:
if predicate is None or predicate(file_item, is_bluray_dir):
return [(file_item, is_bluray_dir)]
return []
if verify_file_exists:
latest_fileitem = storagechain.get_item(fileitem)
if not latest_fileitem:
logger.warn(f"目录或文件不存在:{fileitem.path}")
@@ -1204,28 +1232,30 @@ class TransferChain(ChainBase, ConfigReloadMixin, metaclass=Singleton):
# 是否蓝光原盘子目录或文件
if __is_bluray_sub(fileitem.path):
if dir_item := __get_bluray_dir(fileitem.storage, Path(fileitem.path)):
if bluray_dir := __get_bluray_dir(fileitem.storage, Path(fileitem.path)):
# 返回该文件所在的原盘根目录
return [(dir_item, True)]
return _apply_predicate(bluray_dir, True)
# 单文件
if fileitem.type == "file":
return [(fileitem, False)]
return _apply_predicate(fileitem, False)
# 是否蓝光原盘根目录
sub_items = storagechain.list_files(fileitem, recursion=False) or []
if storagechain.contains_bluray_subdirectories(sub_items):
# 当前目录是原盘根目录,不需要递归
return [(fileitem, True)]
return _apply_predicate(fileitem, True)
# 不是原盘根目录 递归获取目录内需要整理的文件项列表
return [
item
for sub_item in sub_items
for item in (
self.__get_trans_fileitems(sub_item, check=False)
self.__get_trans_fileitems(
sub_item, predicate, verify_file_exists=False
)
if sub_item.type == "dir"
else [(sub_item, False)]
else _apply_predicate(sub_item, False)
)
]
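The predicate-driven recursion above can be sketched on a toy tree, assuming nodes are `(name, type, children)` tuples instead of storage `FileItem`s; `None` keeps every item, mirroring the default in `__get_trans_fileitems`:

```python
from typing import Callable, List, Optional

def collect(node, predicate: Optional[Callable[[str], bool]]) -> List[str]:
    name, ntype, children = node

    def _apply(n: str) -> List[str]:
        # predicate=None keeps everything; otherwise filter at the leaf
        return [n] if predicate is None or predicate(n) else []

    if ntype == "file":
        return _apply(name)
    # directory: recurse into sub-directories, filter files in place
    return [
        item
        for child in children
        for item in (
            collect(child, predicate)
            if child[1] == "dir"
            else _apply(child[0])
        )
    ]
```

Filtering during the walk (rather than collecting everything and filtering afterwards, as the removed code did) avoids materializing items that are about to be discarded.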
@@ -1275,22 +1305,47 @@ class TransferChain(ChainBase, ConfigReloadMixin, metaclass=Singleton):
transfer_exclude_words = SystemConfigOper().get(SystemConfigKey.TransferExcludeWords)
# 汇总错误信息
err_msgs: List[str] = []
# 递归获取待整理的文件/目录列表
file_items = self.__get_trans_fileitems(fileitem)
if not file_items:
logger.warn(f"{fileitem.path} 没有找到可整理的媒体文件")
return False, f"{fileitem.name} 没有找到可整理的媒体文件"
def _filter(file_item: FileItem, is_bluray_dir: bool) -> bool:
"""
过滤文件
# 有集自定义格式,过滤文件
if formaterHandler:
file_items = [f for f in file_items if formaterHandler.match(f[0].name)]
:return: True 表示保留,False 表示排除
"""
if continue_callback and not continue_callback():
raise OperationInterrupted()
# 有集自定义格式,过滤文件
if formaterHandler and not formaterHandler.match(file_item.name):
return False
# 过滤后缀和大小(蓝光目录、附加文件不过滤)
if (
not is_bluray_dir
and not self.__is_subtitle_file(file_item)
and not self.__is_audio_file(file_item)
):
if not self.__is_media_file(file_item):
return False
if not self.__is_allow_filesize(file_item, min_filesize):
return False
# 回收站及隐藏的文件不处理
if (
file_item.path.find("/@Recycle/") != -1
or file_item.path.find("/#recycle/") != -1
or file_item.path.find("/.") != -1
or file_item.path.find("/@eaDir") != -1
):
logger.debug(f"{file_item.path} 是回收站或隐藏的文件")
return False
# 整理屏蔽词不处理
if self._is_blocked_by_exclude_words(file_item.path, transfer_exclude_words):
return False
return True
# 过滤后缀和大小(蓝光目录、附加文件不过滤大小)
file_items = [f for f in file_items if f[1] or
self.__is_subtitle_file(f[0]) or
self.__is_audio_file(f[0]) or
(self.__is_media_file(f[0]) and self.__is_allow_filesize(f[0], min_filesize))]
try:
# 获取经过筛选后的待整理文件项列表
file_items = self.__get_trans_fileitems(fileitem, predicate=_filter)
except OperationInterrupted:
return False, f"{fileitem.name} 已取消"
if not file_items:
logger.warn(f"{fileitem.path} 没有找到可整理的媒体文件")
@@ -1303,21 +1358,10 @@ class TransferChain(ChainBase, ConfigReloadMixin, metaclass=Singleton):
try:
for file_item, bluray_dir in file_items:
if global_vars.is_system_stopped:
break
raise OperationInterrupted()
if continue_callback and not continue_callback():
break
raise OperationInterrupted()
file_path = Path(file_item.path)
# 回收站及隐藏的文件不处理
if file_item.path.find('/@Recycle/') != -1 \
or file_item.path.find('/#recycle/') != -1 \
or file_item.path.find('/.') != -1 \
or file_item.path.find('/@eaDir') != -1:
logger.debug(f"{file_item.path} 是回收站或隐藏的文件")
continue
# 整理屏蔽词不处理
if self._is_blocked_by_exclude_words(file_item.path, transfer_exclude_words):
continue
# 整理成功的不再处理
if not force:
@@ -1415,6 +1459,8 @@ class TransferChain(ChainBase, ConfigReloadMixin, metaclass=Singleton):
transfer_tasks.append(transfer_task)
else:
logger.debug(f"{file_path.name} 已在整理列表中,跳过")
except OperationInterrupted:
return False, f"{fileitem.name} 已取消"
finally:
file_items.clear()
del file_items
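The switch from `break` to `raise OperationInterrupted()` above lets a cancellation unwind from arbitrarily deep inside the scan (including the recursive predicate) to one `except` at the top. A reduced sketch of the pattern:

```python
class OperationInterrupted(Exception):
    """Raised when the user cancels or the system is shutting down."""

def process_items(items, continue_callback=None):
    handled = []
    try:
        for item in items:
            # raising (instead of break) also unwinds from nested helpers
            if continue_callback and not continue_callback():
                raise OperationInterrupted()
            handled.append(item)
    except OperationInterrupted:
        return False, "cancelled", handled
    return True, "", handled
```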
@@ -1588,7 +1634,9 @@ class TransferChain(ChainBase, ConfigReloadMixin, metaclass=Singleton):
library_type_folder: Optional[bool] = None,
library_category_folder: Optional[bool] = None,
force: Optional[bool] = False,
background: Optional[bool] = False) -> Tuple[bool, Union[str, list]]:
background: Optional[bool] = False,
downloader: Optional[str] = None,
download_hash: Optional[str] = None) -> Tuple[bool, Union[str, list]]:
"""
手动整理,支持复杂条件,带进度显示
:param fileitem: 文件项
@@ -1607,6 +1655,8 @@ class TransferChain(ChainBase, ConfigReloadMixin, metaclass=Singleton):
:param library_category_folder: 是否按类别建立目录
:param force: 是否强制整理
:param background: 是否后台运行
:param downloader: 下载器名称
:param download_hash: 下载任务哈希
"""
logger.info(f"手动整理:{fileitem.path} ...")
if tmdbid or doubanid:
@@ -1636,7 +1686,9 @@ class TransferChain(ChainBase, ConfigReloadMixin, metaclass=Singleton):
library_category_folder=library_category_folder,
force=force,
background=background,
manual=True
manual=True,
downloader=downloader,
download_hash=download_hash
)
if not state:
return False, errmsg
@@ -1657,7 +1709,9 @@ class TransferChain(ChainBase, ConfigReloadMixin, metaclass=Singleton):
library_category_folder=library_category_folder,
force=force,
background=background,
manual=True)
manual=True,
downloader=downloader,
download_hash=download_hash)
return state, errmsg
def send_transfer_message(self, meta: MetaBase, mediainfo: MediaInfo,
@@ -1697,3 +1751,46 @@ class TransferChain(ChainBase, ConfigReloadMixin, metaclass=Singleton):
logger.warn(f"{file_path} 命中屏蔽词 {keyword}")
return True
return False
def _can_delete_torrent(self, download_hash: str, downloader: str, transfer_exclude_words) -> bool:
"""
检查是否可以删除种子文件
:param download_hash: 种子Hash
:param downloader: 下载器名称
:param transfer_exclude_words: 整理屏蔽词
:return: 如果可以删除返回 True,否则返回 False
"""
try:
# 获取种子信息
torrents = self.list_torrents(hashs=download_hash, downloader=downloader)
if not torrents:
return False
# 未下载完成
if torrents[0].progress < 100:
return False
# 获取种子文件列表
torrent_files = self.torrent_files(download_hash, downloader)
if not torrent_files:
return False
if not isinstance(torrent_files, list):
torrent_files = torrent_files.data
# 检查是否有媒体文件未被屏蔽且存在
save_path = torrents[0].path.parent
for file in torrent_files:
file_path = save_path / file.name
# 如果存在未被屏蔽的媒体文件,则不删除种子
if (file_path.suffix in self._allowed_exts
and not self._is_blocked_by_exclude_words(file_path.as_posix(), transfer_exclude_words)
and file_path.exists()):
return False
# 所有媒体文件都被屏蔽或不存在,可以删除种子
return True
except Exception as e:
logger.error(f"检查种子 {download_hash} 是否需要删除失败:{e}")
return False

View File

@@ -27,8 +27,6 @@ DEFAULT_CACHE_SIZE = 1024
# 默认缓存有效期
DEFAULT_CACHE_TTL = 365 * 24 * 60 * 60
lock = threading.Lock()
# 上下文变量来控制缓存行为
_fresh = contextvars.ContextVar('fresh', default=False)
@@ -297,14 +295,14 @@ class AsyncCacheBackend(CacheBackend):
"""
获取所有缓存键,类似 dict.keys()(异步)
"""
async for key, _ in await self.items(region=region):
async for key, _ in self.items(region=region):
yield key
async def values(self, region: Optional[str] = DEFAULT_CACHE_REGION) -> AsyncGenerator[Any, None]:
"""
获取所有缓存值,类似 dict.values()(异步)
"""
async for _, value in await self.items(region=region):
async for _, value in self.items(region=region):
yield value
async def update(self, other: Dict[str, Any], region: Optional[str] = DEFAULT_CACHE_REGION,
@@ -332,7 +330,7 @@ class AsyncCacheBackend(CacheBackend):
弹出最后一个缓存项,类似 dict.popitem()(异步)
"""
items = []
async for item in await self.items(region=region):
async for item in self.items(region=region):
items.append(item)
if not items:
raise KeyError("popitem(): cache is empty")
@@ -364,6 +362,11 @@ class MemoryBackend(CacheBackend):
基于 `cachetools.TTLCache` 实现的缓存后端
"""
# 类变量 _region_caches 的互斥锁
_lock = threading.Lock()
# 存储各个 region 的缓存实例region -> TTLCache
_region_caches: Dict[str, Union[MemoryTTLCache, MemoryLRUCache]] = {}
def __init__(self, cache_type: Literal['ttl', 'lru'] = 'ttl',
maxsize: Optional[int] = None, ttl: Optional[int] = None):
"""
@@ -376,8 +379,6 @@ class MemoryBackend(CacheBackend):
self.cache_type = cache_type
self.maxsize = maxsize or DEFAULT_CACHE_SIZE
self.ttl = ttl or DEFAULT_CACHE_TTL
# 存储各个 region 的缓存实例region -> TTLCache
self._region_caches: Dict[str, Union[MemoryTTLCache, MemoryLRUCache]] = {}
def __get_region_cache(self, region: str) -> Optional[Union[MemoryTTLCache, MemoryLRUCache]]:
"""
@@ -400,7 +401,7 @@ class MemoryBackend(CacheBackend):
maxsize = kwargs.get("maxsize", self.maxsize)
region = self.get_region(region)
# 设置缓存值
with lock:
with self._lock:
# 如果该 key 尚未有缓存实例,则创建一个新的 TTLCache 实例
region_cache = self._region_caches.setdefault(
region,
@@ -445,7 +446,7 @@ class MemoryBackend(CacheBackend):
region_cache = self.__get_region_cache(region)
if region_cache is None:
return
with lock:
with self._lock:
del region_cache[key]
def clear(self, region: Optional[str] = DEFAULT_CACHE_REGION) -> None:
@@ -458,13 +459,13 @@ class MemoryBackend(CacheBackend):
# 清理指定缓存区
region_cache = self.__get_region_cache(region)
if region_cache:
with lock:
with self._lock:
region_cache.clear()
logger.debug(f"Cleared cache for region: {region}")
else:
# 清除所有区域的缓存
for region_cache in self._region_caches.values():
with lock:
with self._lock:
region_cache.clear()
logger.info("Cleared all cache")
@@ -480,7 +481,7 @@ class MemoryBackend(CacheBackend):
yield from ()
return
# 使用锁保护迭代过程,避免在迭代时缓存被修改
with lock:
with self._lock:
# 创建快照避免并发修改问题
items_snapshot = list(region_cache.items())
for item in items_snapshot:
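The class-level `_lock` / `_region_caches` pattern from the hunks above, reduced to essentials: all backend instances now share one region map, guarded by a lock owned by the class rather than a module-global one (a plain dict stands in for `TTLCache`):

```python
import threading
from typing import Dict

class MemoryBackend:
    # shared across all instances; guarded by a class-level lock
    _lock = threading.Lock()
    _region_caches: Dict[str, dict] = {}

    def set(self, key, value, region="default"):
        with self._lock:
            self._region_caches.setdefault(region, {})[key] = value

    def get(self, key, region="default"):
        cache = self._region_caches.get(region)
        return cache.get(key) if cache is not None else None
```

Moving the dict out of `__init__` is what allows the async backend to reuse the same storage via delegation.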
@@ -507,18 +508,7 @@ class AsyncMemoryBackend(AsyncCacheBackend):
:param maxsize: 缓存的最大条目数
:param ttl: 默认缓存存活时间,单位秒
"""
self.cache_type = cache_type
self.maxsize = maxsize or DEFAULT_CACHE_SIZE
self.ttl = ttl or DEFAULT_CACHE_TTL
# 存储各个 region 的缓存实例region -> TTLCache
self._region_caches: Dict[str, Union[MemoryTTLCache, MemoryLRUCache]] = {}
def __get_region_cache(self, region: str) -> Optional[Union[MemoryTTLCache, MemoryLRUCache]]:
"""
获取指定区域的缓存实例,如果不存在则返回 None
"""
region = self.get_region(region)
return self._region_caches.get(region)
self._backend = MemoryBackend(cache_type=cache_type, maxsize=maxsize, ttl=ttl)
async def set(self, key: str, value: Any, ttl: Optional[int] = None,
region: Optional[str] = DEFAULT_CACHE_REGION, **kwargs) -> None:
@@ -530,18 +520,7 @@ class AsyncMemoryBackend(AsyncCacheBackend):
:param ttl: 缓存的存活时间,不传入为永久缓存,单位秒
:param region: 缓存的区
"""
ttl = ttl or self.ttl
maxsize = kwargs.get("maxsize", self.maxsize)
region = self.get_region(region)
# 设置缓存值
with lock:
# 如果该 key 尚未有缓存实例,则创建一个新的 TTLCache 实例
region_cache = self._region_caches.setdefault(
region,
MemoryTTLCache(maxsize=maxsize, ttl=ttl) if self.cache_type == 'ttl'
else MemoryLRUCache(maxsize=maxsize)
)
region_cache[key] = value
return self._backend.set(key=key, value=value, ttl=ttl, region=region, **kwargs)
async def exists(self, key: str, region: Optional[str] = DEFAULT_CACHE_REGION) -> bool:
"""
@@ -551,10 +530,7 @@ class AsyncMemoryBackend(AsyncCacheBackend):
:param region: 缓存的区
:return: 存在返回 True否则返回 False
"""
region_cache = self.__get_region_cache(region)
if region_cache is None:
return False
return key in region_cache
return self._backend.exists(key=key, region=region)
async def get(self, key: str, region: Optional[str] = DEFAULT_CACHE_REGION) -> Any:
"""
@@ -564,10 +540,7 @@ class AsyncMemoryBackend(AsyncCacheBackend):
:param region: 缓存的区
:return: 返回缓存的值,如果缓存不存在返回 None
"""
region_cache = self.__get_region_cache(region)
if region_cache is None:
return None
return region_cache.get(key)
return self._backend.get(key=key, region=region)
async def delete(self, key: str, region: Optional[str] = DEFAULT_CACHE_REGION):
"""
@@ -576,11 +549,7 @@ class AsyncMemoryBackend(AsyncCacheBackend):
:param key: 缓存的键
:param region: 缓存的区
"""
region_cache = self.__get_region_cache(region)
if region_cache is None:
return
with lock:
del region_cache[key]
return self._backend.delete(key=key, region=region)
async def clear(self, region: Optional[str] = DEFAULT_CACHE_REGION) -> None:
"""
@@ -588,19 +557,7 @@ class AsyncMemoryBackend(AsyncCacheBackend):
:param region: 缓存的区,为None时清空所有区缓存
"""
if region:
# 清理指定缓存区
region_cache = self.__get_region_cache(region)
if region_cache:
with lock:
region_cache.clear()
logger.debug(f"Cleared cache for region: {region}")
else:
# 清除所有区域的缓存
for region_cache in self._region_caches.values():
with lock:
region_cache.clear()
logger.info("All cache cleared")
return self._backend.clear(region=region)
async def items(self, region: Optional[str] = DEFAULT_CACHE_REGION) -> AsyncGenerator[Tuple[str, Any], None]:
"""
@@ -609,14 +566,7 @@ class AsyncMemoryBackend(AsyncCacheBackend):
:param region: 缓存的区
:return: 返回一个字典,包含所有缓存键值对
"""
region_cache = self.__get_region_cache(region)
if region_cache is None:
return
# 使用锁保护迭代过程,避免在迭代时缓存被修改
with lock:
# 创建快照避免并发修改问题
items_snapshot = list(region_cache.items())
for item in items_snapshot:
for item in self._backend.items(region):
yield item
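The `AsyncMemoryBackend` refactor above replaces duplicated storage logic with delegation to a sync backend. In miniature, with toy backends standing in for the real classes:

```python
import asyncio

class SyncBackend:
    def __init__(self):
        self._data = {}
    def set(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)
    def items(self):
        # snapshot so callers can mutate while iterating
        yield from list(self._data.items())

class AsyncBackend:
    """Async facade: every call forwards to the sync backend."""
    def __init__(self):
        self._backend = SyncBackend()
    async def set(self, key, value):
        return self._backend.set(key, value)
    async def get(self, key):
        return self._backend.get(key)
    async def items(self):
        # re-yield the sync generator as an async generator
        for item in self._backend.items():
            yield item
```

This is why the `await self.items(...)` calls were also fixed: `items` is an async generator, so it is iterated with `async for`, not awaited.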
async def close(self) -> None:
@@ -1115,15 +1065,16 @@ def AsyncCache(cache_type: Literal['ttl', 'lru'] = 'ttl',
def cached(region: Optional[str] = None, maxsize: Optional[int] = 1024, ttl: Optional[int] = None,
skip_none: Optional[bool] = True, skip_empty: Optional[bool] = False):
skip_none: Optional[bool] = True, skip_empty: Optional[bool] = False, shared_key: Optional[str] = None):
"""
自定义缓存装饰器,支持为每个 key 动态传递 maxsize 和 ttl
:param region: 缓存
:param maxsize: 缓存的最大条目数
:param region: 缓存区域的标识符,默认根据模块名、函数名等自动生成标识
:param maxsize: 缓存区内的最大条目数
:param ttl: 缓存的存活时间,单位秒,未传入则为永久缓存
:param skip_none: 跳过 None 缓存,默认为 True
:param skip_empty: 跳过空值缓存(如 None, [], {}, "", set()),默认为 False
:param shared_key: 同步/异步函数共享缓存的键,默认使用函数名(异步函数名会标准化为同步格式,如移除 `async_` 前缀)
:return: 装饰器函数
"""
@@ -1173,6 +1124,17 @@ def cached(region: Optional[str] = None, maxsize: Optional[int] = 1024, ttl: Opt
return False
return True
def __standardize_func_name() -> str:
"""
将异步函数名标准化为同步函数的命名,以生成统一的缓存键
"""
# XXX 假设异步函数名与同步版本仅差`async_`前缀或`_async`后缀,当前MP代码大多符合,否则需通过`shared_key`参数显式指定
return (
func.__name__.removeprefix("async_").removesuffix("_async")
if is_async
else func.__name__
)
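Standalone versions of the two cache-key helpers introduced in this diff — the async-name normalization above and the `__qualname__`-based enclosing name a few lines below. `str.removeprefix`/`removesuffix` require Python 3.9+:

```python
def standardize_func_name(name: str, is_async: bool) -> str:
    # async variants share a cache key with their sync counterparts
    return name.removeprefix("async_").removesuffix("_async") if is_async else name

def enclosing_name(qualname: str) -> str:
    # class or outer-function name; "" for module-level functions
    last_dot = qualname.rfind(".")
    return qualname[:last_dot] if last_dot != -1 else ""
```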
def __get_cache_key(args, kwargs) -> str:
"""
根据函数和参数生成缓存键
@@ -1194,13 +1156,22 @@ def cached(region: Optional[str] = None, maxsize: Optional[int] = 1024, ttl: Opt
bound.arguments[param] for param in signature.parameters if param in bound.arguments
]
# 使用有序参数生成缓存键
return f"{func.__name__}_{hashkey(*keys)}"
# 获取缓存区
cache_region = region if region is not None else f"{func.__module__}.{func.__name__}"
return f"{func_name}_{hashkey(*keys)}"
# 被装饰函数的上层名称(如类名或外层函数名)
enclosing_name = (
func.__qualname__[:last_dot]
if (last_dot := func.__qualname__.rfind(".")) != -1
else ""
)
# 检查是否为异步函数
is_async = inspect.iscoroutinefunction(func)
# 生成标准化后的函数名称,用于同步/异步函数共享缓存
func_name = shared_key if shared_key else __standardize_func_name()
# 获取缓存区
cache_region = (
region if region is not None else f"{func.__module__}:{enclosing_name}:{func_name}"
)
if is_async:
# 异步函数使用异步缓存后端

View File

@@ -322,6 +322,8 @@ class ConfigModel(BaseModel):
DEFAULT_SUB: Optional[str] = "zh-cn"
# 新增已入库媒体是否跟随TMDB信息变化
SCRAP_FOLLOW_TMDB: bool = True
# 优先使用辅助识别
RECOGNIZE_PLUGIN_FIRST: bool = False
# ==================== 服务地址配置 ====================
# 服务器地址,对应 https://github.com/jxxghp/MoviePilot-Server 项目
@@ -414,6 +416,8 @@ class ConfigModel(BaseModel):
RCLONE_SNAPSHOT_CHECK_FOLDER_MODTIME: bool = True
# 对OpenList进行快照对比时是否检查文件夹的修改时间
OPENLIST_SNAPSHOT_CHECK_FOLDER_MODTIME: bool = True
# 对阿里云盘进行快照对比时,是否检查文件夹的修改时间(默认关闭,因为阿里云盘目录时间不随子文件变更而更新)
ALIPAN_SNAPSHOT_CHECK_FOLDER_MODTIME: bool = False
# ==================== Docker配置 ====================
# Docker Client API地址

View File

@@ -17,6 +17,7 @@ class MetaAnime(MetaBase):
"""
_anime_no_words = ['CHS&CHT', 'MP4', 'GB MP4', 'WEB-DL']
_name_nostring_re = r"S\d{2}\s*-\s*S\d{2}|S\d{2}|\s+S\d{1,2}|EP?\d{2,4}\s*-\s*EP?\d{2,4}|EP?\d{2,4}|\s+EP?\d{1,4}|\s+GB"
_fps_re = r"(\d{2,3})(?=FPS)"
def __init__(self, title: str, subtitle: str = None, isfile: bool = False):
super().__init__(title, subtitle, isfile)
@@ -173,6 +174,8 @@ class MetaAnime(MetaBase):
self.audio_encode = anitopy_info.get("audio_term")
if isinstance(self.audio_encode, list):
self.audio_encode = self.audio_encode[0]
# 帧率信息
self.__init_anime_fps(anitopy_info, original_title)
# 解析副标题,只要季和集
self.init_subtitle(self.org_string)
if not self._subtitle_flag and self.subtitle:
@@ -182,6 +185,20 @@ class MetaAnime(MetaBase):
except Exception as e:
logger.error(f"解析动漫信息失败:{str(e)} - {traceback.format_exc()}")
def __init_anime_fps(self, anitopy_info: dict, original_title: str):
"""
从原始标题中提取帧率信息,与MetaVideo保持完全一致的实现
"""
re_res = re.search(rf"({self._fps_re})", original_title, re.IGNORECASE)
if re_res:
fps_value = None
if re_res.group(1): # FPS格式
fps_value = re_res.group(1)
if fps_value and fps_value.isdigit():
# 只存储纯数值
self.fps = int(fps_value)
@staticmethod
def __prepare_title(title: str):
"""

View File

@@ -66,6 +66,9 @@ class MetaBase(object):
# 附加信息
tmdbid: int = None
doubanid: str = None
# 帧率信息(纯数值)
fps: Optional[int] = None
# 副标题解析
_subtitle_flag = False
@@ -448,6 +451,13 @@ class MetaBase(object):
"""
return self.audio_encode or ""
@property
def frame_rate(self) -> Optional[int]:
"""
返回帧率信息
"""
return self.fps or None
def is_in_season(self, season: Union[list, int, str]) -> bool:
"""
是否包含季
@@ -581,6 +591,9 @@ class MetaBase(object):
# 音频编码
if not self.audio_encode:
self.audio_encode = meta.audio_encode
# 帧率信息
if not self.fps:
self.fps = meta.fps
# Part
if not self.part:
self.part = meta.part

View File

@@ -53,7 +53,7 @@ class MetaVideo(MetaBase):
_resources_pix_re2 = r"(^[248]+K)"
_video_encode_re = r"^(H26[45])$|^(x26[45])$|^AVC$|^HEVC$|^VC\d?$|^MPEG\d?$|^Xvid$|^DivX$|^AV1$|^HDR\d*$|^AVS(\+|[23])$"
_audio_encode_re = r"^DTS\d?$|^DTSHD$|^DTSHDMA$|^Atmos$|^TrueHD\d?$|^AC3$|^\dAudios?$|^DDP\d?$|^DD\+\d?$|^DD\d?$|^LPCM\d?$|^AAC\d?$|^FLAC\d?$|^HD\d?$|^MA\d?$|^HR\d?$|^Opus\d?$|^Vorbis\d?$|^AV[3S]A$"
_fps_re = r"(\d{2,3})(?=FPS)"
def __init__(self, title: str, subtitle: str = None, isfile: bool = False):
"""
初始化
@@ -76,7 +76,7 @@ class MetaVideo(MetaBase):
self.type = MediaType.TV
return
# 全名为Season xx 及 Sxx 直接返回
season_full_res = re.search(r"^Season\s+(\d{1,3})$|^S(\d{1,3})$", title)
season_full_res = re.search(r"^(?:Season\s+|S)(\d{1,3})$", title, re.IGNORECASE)
if season_full_res:
self.type = MediaType.TV
season = season_full_res.group(1)
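The consolidated regex above merges the two alternatives into one non-capturing prefix, so `group(1)` is always the season digits regardless of which form matched, and `re.IGNORECASE` now covers lowercase titles. A quick standalone check:

```python
import re

# one capture group covers both "Season 2" and "S02"
SEASON_RE = re.compile(r"^(?:Season\s+|S)(\d{1,3})$", re.IGNORECASE)

def parse_season(title: str):
    m = SEASON_RE.search(title)
    return int(m.group(1)) if m else None
```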
@@ -129,6 +129,9 @@ class MetaVideo(MetaBase):
# 音频编码
if self._continue_flag:
self.__init_audio_encode(token)
# 帧率
if self._continue_flag:
self.__init_fps(token)
# 取下一个,直到没有为止
token = tokens.get_next()
self._continue_flag = True
@@ -716,3 +719,25 @@ class MetaVideo(MetaBase):
else:
self.audio_encode = "%s %s" % (self.audio_encode, token)
self._last_token = token
def __init_fps(self, token: str):
"""
识别帧率
"""
if not self.name:
return
re_res = re.search(rf"({self._fps_re})", token, re.IGNORECASE)
if re_res:
self._continue_flag = False
self._stop_name_flag = True
self._last_token_type = "fps"
# 提取帧率数值
fps_value = None
if re_res.group(1): # FPS格式
fps_value = re_res.group(1)
if fps_value and fps_value.isdigit():
# 只存储纯数值
self.fps = int(fps_value)
self._last_token = f"{self.fps}FPS"
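The FPS pattern shared by `MetaVideo` and `MetaAnime` above uses a lookahead so only the digits are captured, leaving the literal `FPS` unconsumed. Exercised in isolation:

```python
import re

# 2-3 digits immediately followed by "FPS" (lookahead, not captured)
_FPS_RE = r"(\d{2,3})(?=FPS)"

def parse_fps(token: str):
    m = re.search(_FPS_RE, token, re.IGNORECASE)
    return int(m.group(1)) if m else None
```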

View File

@@ -52,6 +52,7 @@ class ReleaseGroupsMatcher(metaclass=Singleton):
"nicept": [],
"oshen": [],
"ourbits": ['Our(?:Bits|TV)', 'FLTTH', 'Ao', 'PbK', 'MGs', 'iLove(?:HD|TV)'],
"panda": ['Panda', 'AilMWeb'],
"piggo": ['PiGo(?:NF|(?:H|WE)B)'],
"ptchina": [],
"pterclub": ['PTer(?:DIY|Game|(?:M|T)V|WEB|)'],
@@ -105,7 +106,7 @@ class ReleaseGroupsMatcher(metaclass=Singleton):
else:
groups = self.__release_groups
title = f"{title} "
groups_re = re.compile(r"(?<=[-@\[£【&])(?:(?:%s))(?=[@.\s\S\]\[】&])" % groups, re.I)
groups_re = re.compile(r"(?<=[-@\[£【&])(?:(?:%s))(?=$|[@.\s\]\[】&])" % groups, re.I)
unique_groups = []
for item in re.findall(groups_re, title):
item_str = item[0] if isinstance(item, tuple) else item
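The lookahead change above (`(?=$|[@.\s\]\[】&])` instead of a class containing `\s\S`) lets a release group at the very end of a title match without relying on the appended trailing space. Demonstrated with a single group from the table above:

```python
import re

# lookbehind: group must follow a separator; lookahead: separator or end of string
GROUPS_RE = re.compile(r"(?<=[-@\[£【&])(?:Panda)(?=$|[@.\s\]\[】&])", re.I)

def find_groups(title: str):
    return GROUPS_RE.findall(title)
```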

View File

@@ -5,6 +5,7 @@ import concurrent.futures
import importlib.util
import inspect
import os
import posixpath
import sys
import threading
import time
@@ -775,11 +776,17 @@ class PluginManager(ConfigReloadMixin, metaclass=Singleton):
:param dist_path: 插件的分发路径
:return: 远程入口地址
"""
if dist_path.startswith("/"):
dist_path = dist_path[1:]
if dist_path.endswith("/"):
dist_path = dist_path[:-1]
return f"/plugin/file/{plugin_id.lower()}/{dist_path}/remoteEntry.js"
dist_path = dist_path.strip("/")
path = posixpath.join(
"plugin",
"file",
plugin_id.lower(),
dist_path,
"remoteEntry.js",
)
if not path.startswith("/"):
path = "/" + path
return path
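The builder above, standalone: `posixpath.join` always uses forward slashes (unlike `os.path.join` on Windows), which makes it safe for URL paths, and the result stays relative to the API root so the frontend can prepend its own base:

```python
import posixpath

def plugin_remote_entry(plugin_id: str, dist_path: str) -> str:
    dist_path = dist_path.strip("/")
    path = posixpath.join(
        "plugin", "file", plugin_id.lower(), dist_path, "remoteEntry.js"
    )
    # ensure a single leading slash
    return path if path.startswith("/") else "/" + path
```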
def get_plugin_remotes(self, pid: Optional[str] = None) -> List[Dict[str, Any]]:
"""

View File

@@ -125,7 +125,7 @@ class TransferHistoryOper(DbOper):
"""
新增转移成功历史记录
"""
self.add_force(
return self.add_force(
src=fileitem.path,
src_storage=fileitem.storage,
src_fileitem=fileitem.model_dump(),

View File

@@ -151,8 +151,9 @@ class DirectoryHelper:
if not matchs:
continue
# 处理特例,有的人重命名的第一层是年份、分辨率
if any("title" in m for m in matchs):
# 找出最后一层含有标题参数的目录作为媒体根目录
if (any("title" in m for m in matchs)
and not any("season" in m for m in matchs)):
# 找出最后一层含有标题且不含季参数的目录作为媒体根目录
rename_format_level = level
break
else:

View File

@@ -25,7 +25,7 @@ class DownloaderHelper(ServiceBaseHelper[DownloaderConf]):
) -> bool:
"""
通用的下载器类型判断方法
:param service_type: 下载器的类型名称(如 'qbittorrent', 'transmission'
:param service_type: 下载器的类型名称(如 'qbittorrent', 'transmission', 'rtorrent'
:param service: 要判断的服务信息
:param name: 服务的名称
:return: 如果服务类型或实例为指定类型,返回 True否则返回 False

View File

@@ -13,9 +13,10 @@ import aiofiles
import aioshutil
import httpx
from anyio import Path as AsyncPath
from packaging.requirements import Requirement
from packaging.specifiers import SpecifierSet, InvalidSpecifier
from packaging.version import Version, InvalidVersion
from pkg_resources import Requirement, working_set
from importlib.metadata import distributions
from requests import Response
from app.core.cache import cached
@@ -729,18 +730,26 @@ class PluginHelper(metaclass=WeakSingleton):
def __get_installed_packages(self) -> Dict[str, Version]:
"""
获取已安装的包及其版本
使用 pkg_resources 获取当前环境中已安装的包,标准化包名并转换版本信息
使用 importlib.metadata 获取当前环境中已安装的包,标准化包名并转换版本信息
对于无法解析的版本,记录警告日志并跳过
:return: 已安装包的字典,格式为 {package_name: Version}
"""
installed_packages = {}
try:
for dist in working_set:
pkg_name = self.__standardize_pkg_name(dist.project_name)
for dist in distributions():
name = dist.metadata.get("Name")
if not name:
continue
pkg_name = self.__standardize_pkg_name(name)
version_str = dist.metadata.get("Version") or getattr(dist, "version", None)
if not version_str:
continue
try:
installed_packages[pkg_name] = Version(dist.version)
v = Version(version_str)
if pkg_name not in installed_packages or v > installed_packages[pkg_name]:
installed_packages[pkg_name] = v
except InvalidVersion:
logger.debug(f"无法解析已安装包 '{pkg_name}' 的版本:{dist.version}")
logger.debug(f"无法解析已安装包 '{pkg_name}' 的版本:{version_str}")
continue
return installed_packages
except Exception as e:
@@ -844,12 +853,14 @@ class PluginHelper(metaclass=WeakSingleton):
@staticmethod
def __standardize_pkg_name(name: str) -> str:
"""
标准化包名,将包名转换为小写并将连字符替换为下划线
标准化包名,将包名转换为小写,连字符与点替换为下划线(与 PEP 503 归一化风格一致)
:param name: 原始包名
:return: 标准化后的包名
"""
return name.lower().replace("-", "_") if name else name
if not name:
return name
return name.lower().replace("-", "_").replace(".", "_")
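The normalization above, combined with the keep-max-version rule from `__get_installed_packages`, on fake `(name, version)` pairs; a naive tuple-based parse stands in for `packaging.version.Version`:

```python
def standardize_pkg_name(name: str) -> str:
    if not name:
        return name
    return name.lower().replace("-", "_").replace(".", "_")

def parse_version(s: str):
    # stand-in for packaging.version.Version; None for unparseable versions
    try:
        return tuple(int(p) for p in s.split("."))
    except ValueError:
        return None

def collect_installed(dists):
    installed = {}
    for name, version_str in dists:
        v = parse_version(version_str) if name and version_str else None
        if v is None:
            continue
        pkg = standardize_pkg_name(name)
        # duplicate dist-info entries can coexist; keep the newest version
        if pkg not in installed or v > installed[pkg]:
            installed[pkg] = v
    return installed
```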
async def async_get_plugin_package_version(self, pid: str, repo_url: str,
package_version: Optional[str] = None) -> Optional[str]:

View File

@@ -3,10 +3,9 @@ from typing import Union, Optional
from app.core.cache import TTLCache
from app.schemas.types import ProgressKey
from app.utils.singleton import WeakSingleton
class ProgressHelper(metaclass=WeakSingleton):
class ProgressHelper:
"""
处理进度辅助类
"""

View File

@@ -25,7 +25,7 @@ class TorrentHelper:
"""
def __init__(self):
self._invalid_torrents = TTLCache(maxsize=128, ttl=3600 * 24)
self._invalid_torrents = TTLCache(region="invalid_torrents", maxsize=128, ttl=3600 * 24)
def download_torrent(self, url: str,
cookie: Optional[str] = None,
@@ -340,11 +340,11 @@ class TorrentHelper:
episodes = list(set(episodes).union(set(meta.episode_list)))
return episodes
def is_invalid(self, url: str) -> bool:
def is_invalid(self, url: Optional[str]) -> bool:
"""
判断种子是否是无效种子
"""
return url in self._invalid_torrents
return url in self._invalid_torrents if url else True
def add_invalid(self, url: str):
"""

View File

@@ -290,3 +290,11 @@ class BangumiModule(_ModuleBase):
if infos:
return [MediaInfo(bangumi_info=info) for info in infos]
return []
def clear_cache(self):
"""
清除缓存
"""
logger.info(f"开始清除{self.get_name()}缓存 ...")
self.bangumiapi.clear_cache()
logger.info(f"{self.get_name()}缓存清除完成")

View File

@@ -31,7 +31,7 @@ class BangumiApi(object):
self._req = RequestUtils(ua=settings.NORMAL_USER_AGENT, session=self._session)
self._async_req = AsyncRequestUtils(ua=settings.NORMAL_USER_AGENT)
@cached(maxsize=settings.CONF.bangumi, ttl=settings.CONF.meta)
@cached(maxsize=settings.CONF.bangumi, ttl=settings.CONF.meta, shared_key="get")
def __invoke(self, url, key: Optional[str] = None, **kwargs):
req_url = self._base_url + url
params = {}
@@ -47,7 +47,7 @@ class BangumiApi(object):
print(e)
return None
@cached(maxsize=settings.CONF.bangumi, ttl=settings.CONF.meta)
@cached(maxsize=settings.CONF.bangumi, ttl=settings.CONF.meta, shared_key="get")
async def __async_invoke(self, url, key: Optional[str] = None, **kwargs):
req_url = self._base_url + url
params = {}
@@ -300,6 +300,12 @@ class BangumiApi(object):
key="data",
_ts=datetime.strftime(datetime.now(), '%Y%m%d'), **kwargs)
def clear_cache(self):
"""
清除缓存
"""
self.__invoke.cache_clear()
def close(self):
if self._session:
self._session.close()

View File

@@ -139,9 +139,23 @@ class DiscordModule(_ModuleBase, _MessageBase[Discord]):
发送通知消息
:param message: 消息通知对象
"""
for conf in self.get_configs().values():
# DEBUG: Log entry and configs
configs = self.get_configs()
logger.debug(f"[Discord] post_message 被调用message.source={message.source}, "
f"message.userid={message.userid}, message.channel={message.channel}")
logger.debug(f"[Discord] 当前配置数量: {len(configs)}, 配置名称: {list(configs.keys())}")
logger.debug(f"[Discord] 当前实例数量: {len(self.get_instances())}, 实例名称: {list(self.get_instances().keys())}")
if not configs:
logger.warning("[Discord] get_configs() 返回空,没有可用的 Discord 配置")
return
for conf in configs.values():
logger.debug(f"[Discord] 检查配置: name={conf.name}, type={conf.type}, enabled={conf.enabled}")
if not self.check_message(message, conf.name):
logger.debug(f"[Discord] check_message 返回 False跳过配置: {conf.name}")
continue
logger.debug(f"[Discord] check_message 通过,准备发送到: {conf.name}")
targets = message.targets
userid = message.userid
if not userid and targets is not None:
@@ -150,13 +164,18 @@ class DiscordModule(_ModuleBase, _MessageBase[Discord]):
logger.warn("用户没有指定 Discord 用户ID消息无法发送")
return
client: Discord = self.get_instance(conf.name)
logger.debug(f"[Discord] get_instance('{conf.name}') 返回: {client is not None}")
if client:
client.send_msg(title=message.title, text=message.text,
logger.debug(f"[Discord] 调用 client.send_msg, userid={userid}, title={message.title[:50] if message.title else None}...")
result = client.send_msg(title=message.title, text=message.text,
image=message.image, userid=userid, link=message.link,
buttons=message.buttons,
original_message_id=message.original_message_id,
original_chat_id=message.original_chat_id,
mtype=message.mtype)
logger.debug(f"[Discord] send_msg 返回结果: {result}")
else:
logger.warning(f"[Discord] 未找到配置 '{conf.name}' 对应的 Discord 客户端实例")
def post_medias_message(self, message: Notification, medias: List[MediaInfo]) -> None:
"""

View File

@@ -2,6 +2,7 @@ import asyncio
import re
import threading
from typing import Optional, List, Dict, Any, Tuple, Union
from urllib.parse import quote
import discord
from discord import app_commands
@@ -33,6 +34,9 @@ class Discord:
DISCORD_GUILD_ID: Optional[Union[str, int]] = None,
DISCORD_CHANNEL_ID: Optional[Union[str, int]] = None,
**kwargs):
logger.debug(f"[Discord] 初始化 Discord 实例: name={kwargs.get('name')}, "
f"GUILD_ID={DISCORD_GUILD_ID}, CHANNEL_ID={DISCORD_CHANNEL_ID}, "
f"TOKEN={'已配置' if DISCORD_BOT_TOKEN else '未配置'}")
if not DISCORD_BOT_TOKEN:
logger.error("Discord Bot Token 未配置!")
return
@@ -40,10 +44,14 @@ class Discord:
self._token = DISCORD_BOT_TOKEN
self._guild_id = self._to_int(DISCORD_GUILD_ID)
self._channel_id = self._to_int(DISCORD_CHANNEL_ID)
logger.debug(f"[Discord] 解析后的 ID: _guild_id={self._guild_id}, _channel_id={self._channel_id}")
base_ds_url = f"http://127.0.0.1:{settings.PORT}/api/v1/message/"
self._ds_url = f"{base_ds_url}?token={settings.API_TOKEN}"
if kwargs.get("name"):
self._ds_url = f"{self._ds_url}&source={kwargs.get('name')}"
# URL encode the source name to handle special characters in config names
encoded_name = quote(kwargs.get('name'), safe='')
self._ds_url = f"{self._ds_url}&source={encoded_name}"
logger.debug(f"[Discord] 消息回调 URL: {self._ds_url}")
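The hunk above URL-encodes the configured source name before appending it as a query parameter, so names containing spaces, slashes or `#` cannot corrupt the callback URL. A minimal sketch of that encoding (the helper name and base URL here are illustrative, not the project's):

```python
from typing import Optional
from urllib.parse import quote

def build_callback_url(base_url: str, name: Optional[str]) -> str:
    """Append a URL-encoded source name, mirroring the hunk's approach."""
    if not name:
        return base_url
    # safe='' also encodes '/', so the name cannot alter the URL path
    return f"{base_url}&source={quote(name, safe='')}"

url = build_callback_url("http://127.0.0.1:3001/api/v1/message/?token=t",
                         "My Bot/配置#1")
```

Without `safe=''`, `quote` would leave `/` unescaped, which is exactly the character that could break the path.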
intents = discord.Intents.default()
intents.message_content = True
@@ -59,6 +67,7 @@ class Discord:
self._thread: Optional[threading.Thread] = None
self._ready_event = threading.Event()
self._user_dm_cache: Dict[str, discord.DMChannel] = {}
self._user_chat_mapping: Dict[str, str] = {} # userid -> chat_id mapping for reply targeting
self._broadcast_channel = None
self._bot_user_id: Optional[int] = None
@@ -86,6 +95,9 @@ class Discord:
if not self._should_process_message(message):
return
# Update user-chat mapping for reply targeting
self._update_user_chat_mapping(str(message.author.id), str(message.channel.id))
cleaned_text = self._clean_bot_mention(message.content or "")
username = message.author.display_name or message.author.global_name or message.author.name
payload = {
@@ -112,6 +124,10 @@ class Discord:
except Exception as e:
logger.error(f"处理 Discord 交互响应失败:{e}")
# Update user-chat mapping for reply targeting
if interaction.user and interaction.channel:
self._update_user_chat_mapping(str(interaction.user.id), str(interaction.channel.id))
username = (interaction.user.display_name or interaction.user.global_name or interaction.user.name) \
if interaction.user else None
payload = {
@@ -168,13 +184,19 @@ class Discord:
original_message_id: Optional[Union[int, str]] = None,
original_chat_id: Optional[str] = None,
mtype: Optional['NotificationType'] = None) -> Optional[bool]:
logger.debug(f"[Discord] send_msg 被调用: userid={userid}, title={title[:50] if title else None}...")
logger.debug(f"[Discord] get_state() = {self.get_state()}, "
f"_ready_event.is_set() = {self._ready_event.is_set()}, "
f"_client = {self._client is not None}")
if not self.get_state():
logger.warning("[Discord] get_state() 返回 False,Bot 未就绪,无法发送消息")
return False
if not title and not text:
logger.warn("标题和内容不能同时为空")
return False
try:
logger.debug(f"[Discord] 准备异步发送消息...")
future = asyncio.run_coroutine_threadsafe(
self._send_message(title=title, text=text, image=image, userid=userid,
link=link, buttons=buttons,
@@ -182,7 +204,9 @@ class Discord:
original_chat_id=original_chat_id,
mtype=mtype),
self._loop)
return future.result(timeout=30)
result = future.result(timeout=30)
logger.debug(f"[Discord] 异步发送完成,结果: {result}")
return result
except Exception as err:
logger.error(f"发送 Discord 消息失败:{err}")
return False
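`send_msg` runs on the caller's thread while the bot's event loop lives in a background thread, so the coroutine is handed over with `asyncio.run_coroutine_threadsafe` and the caller blocks on `future.result(timeout=...)`. A self-contained sketch of the same pattern (the loop and coroutine here are stand-ins, not the project's):

```python
import asyncio
import threading

# Event loop running in a background thread, as a Discord bot client does
loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()

async def _send_message(text: str) -> bool:
    await asyncio.sleep(0)  # stand-in for the actual network send
    return bool(text)

def send_msg(text: str, timeout: float = 30) -> bool:
    # Schedule the coroutine on the bot's loop from the calling thread
    future = asyncio.run_coroutine_threadsafe(_send_message(text), loop)
    try:
        return future.result(timeout=timeout)
    except Exception:
        return False

result = send_msg("hello")
loop.call_soon_threadsafe(loop.stop)  # shut the demo loop down cleanly
```

The timeout on `future.result` is what keeps a stuck send from blocking the notification pipeline indefinitely.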
@@ -254,7 +278,9 @@ class Discord:
original_message_id: Optional[Union[int, str]],
original_chat_id: Optional[str],
mtype: Optional['NotificationType'] = None) -> bool:
logger.debug(f"[Discord] _send_message: userid={userid}, original_chat_id={original_chat_id}")
channel = await self._resolve_channel(userid=userid, chat_id=original_chat_id)
logger.debug(f"[Discord] _resolve_channel 返回: {channel}, type={type(channel)}")
if not channel:
logger.error("未找到可用的 Discord 频道或私聊")
return False
@@ -264,11 +290,18 @@ class Discord:
content = None
if original_message_id and original_chat_id:
logger.debug(f"[Discord] 编辑现有消息: message_id={original_message_id}")
return await self._edit_message(chat_id=original_chat_id, message_id=original_message_id,
content=content, embed=embed, view=view)
await channel.send(content=content, embed=embed, view=view)
return True
logger.debug(f"[Discord] 发送新消息到频道: {channel}")
try:
await channel.send(content=content, embed=embed, view=view)
logger.debug("[Discord] 消息发送成功")
return True
except Exception as e:
logger.error(f"[Discord] 发送消息到频道失败: {e}")
return False
async def _send_list_message(self, embeds: List[discord.Embed],
userid: Optional[str],
@@ -515,26 +548,54 @@ class Discord:
return view
async def _resolve_channel(self, userid: Optional[str] = None, chat_id: Optional[str] = None):
# 优先使用明确的聊天 ID
"""
Resolve the channel to send messages to.
Priority order:
1. `chat_id` (original channel where user sent the message) - for contextual replies
2. `userid` mapping (channel where user last sent a message) - for contextual replies
3. Configured `_channel_id` (broadcast channel) - for system notifications
4. Any available text channel in configured guild - fallback
5. `userid` (DM) - for private conversations as a final fallback
"""
logger.debug(f"[Discord] _resolve_channel: userid={userid}, chat_id={chat_id}, "
f"_channel_id={self._channel_id}, _guild_id={self._guild_id}")
# Priority 1: Use explicit chat_id (reply to the same channel where user sent message)
if chat_id:
logger.debug(f"[Discord] 尝试通过 chat_id={chat_id} 获取原始频道")
channel = self._client.get_channel(int(chat_id))
if channel:
logger.debug(f"[Discord] 通过 get_channel 找到频道: {channel}")
return channel
try:
return await self._client.fetch_channel(int(chat_id))
channel = await self._client.fetch_channel(int(chat_id))
logger.debug(f"[Discord] 通过 fetch_channel 找到频道: {channel}")
return channel
except Exception as err:
logger.warn(f"通过 chat_id 获取 Discord 频道失败:{err}")
# 私聊
# Priority 2: Use user-chat mapping (reply to where the user last sent a message)
if userid:
dm = await self._get_dm_channel(str(userid))
if dm:
return dm
mapped_chat_id = self._get_user_chat_id(str(userid))
if mapped_chat_id:
logger.debug(f"[Discord] 从用户映射获取 chat_id={mapped_chat_id}")
channel = self._client.get_channel(int(mapped_chat_id))
if channel:
logger.debug(f"[Discord] 通过映射找到频道: {channel}")
return channel
try:
channel = await self._client.fetch_channel(int(mapped_chat_id))
logger.debug(f"[Discord] 通过 fetch_channel 找到映射频道: {channel}")
return channel
except Exception as err:
logger.warn(f"通过映射的 chat_id 获取 Discord 频道失败:{err}")
# 配置的广播频道
# Priority 3: Use configured broadcast channel (for system notifications)
if self._broadcast_channel:
logger.debug(f"[Discord] 使用缓存的广播频道: {self._broadcast_channel}")
return self._broadcast_channel
if self._channel_id:
logger.debug(f"[Discord] 尝试通过配置的 _channel_id={self._channel_id} 获取频道")
channel = self._client.get_channel(self._channel_id)
if not channel:
try:
@@ -544,9 +605,11 @@ class Discord:
channel = None
self._broadcast_channel = channel
if channel:
logger.debug(f"[Discord] 通过配置的频道ID找到频道: {channel}")
return channel
# 按 Guild 寻找一个可用文本频道
# Priority 4: Find any available text channel in guild (fallback)
logger.debug(f"[Discord] 尝试在 Guild 中寻找可用频道")
target_guilds = []
if self._guild_id:
guild = self._client.get_guild(self._guild_id)
@@ -554,22 +617,47 @@ class Discord:
target_guilds.append(guild)
else:
target_guilds = list(self._client.guilds)
logger.debug(f"[Discord] 目标 Guilds 数量: {len(target_guilds)}")
for guild in target_guilds:
for channel in guild.text_channels:
if guild.me and channel.permissions_for(guild.me).send_messages:
logger.debug(f"[Discord] 在 Guild 中找到可用频道: {channel}")
self._broadcast_channel = channel
return channel
# Priority 5: Fallback to DM (only if no channel available)
if userid:
logger.debug(f"[Discord] 回退到私聊: userid={userid}")
dm = await self._get_dm_channel(str(userid))
if dm:
logger.debug(f"[Discord] 获取到私聊频道: {dm}")
return dm
else:
logger.debug(f"[Discord] 无法获取用户 {userid} 的私聊频道")
return None
async def _get_dm_channel(self, userid: str) -> Optional[discord.DMChannel]:
logger.debug(f"[Discord] _get_dm_channel: userid={userid}")
if userid in self._user_dm_cache:
logger.debug(f"[Discord] 从缓存获取私聊频道: {self._user_dm_cache.get(userid)}")
return self._user_dm_cache.get(userid)
try:
user_obj = self._client.get_user(int(userid)) or await self._client.fetch_user(int(userid))
logger.debug(f"[Discord] 尝试获取/创建用户 {userid} 的私聊频道")
user_obj = self._client.get_user(int(userid))
logger.debug(f"[Discord] get_user 结果: {user_obj}")
if not user_obj:
user_obj = await self._client.fetch_user(int(userid))
logger.debug(f"[Discord] fetch_user 结果: {user_obj}")
if not user_obj:
logger.debug(f"[Discord] 无法找到用户 {userid}")
return None
dm = user_obj.dm_channel or await user_obj.create_dm()
dm = user_obj.dm_channel
logger.debug(f"[Discord] 用户现有 dm_channel: {dm}")
if not dm:
dm = await user_obj.create_dm()
logger.debug(f"[Discord] 创建新的 dm_channel: {dm}")
if dm:
self._user_dm_cache[userid] = dm
return dm
@@ -577,6 +665,25 @@ class Discord:
logger.error(f"获取 Discord 私聊失败:{err}")
return None
def _update_user_chat_mapping(self, userid: str, chat_id: str) -> None:
"""
Update user-chat mapping for reply targeting.
This ensures replies go to the same channel where the user sent the message.
:param userid: User ID
:param chat_id: Channel/Chat ID where the user sent the message
"""
if userid and chat_id:
self._user_chat_mapping[userid] = chat_id
logger.debug(f"[Discord] 更新用户频道映射: userid={userid} -> chat_id={chat_id}")
def _get_user_chat_id(self, userid: str) -> Optional[str]:
"""
Get the chat ID where the user last sent a message.
:param userid: User ID
:return: Chat ID or None if not found
"""
return self._user_chat_mapping.get(userid)
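Together with `_resolve_channel`, these helpers give replies a lookup chain: an explicit `chat_id` first, then the channel where the user last sent a message, then the broadcast channel. A condensed, discord-free sketch of that ordering (class and channel names are hypothetical):

```python
from typing import Dict, Optional

class ReplyRouter:
    """Mimics the hunk's priority: chat_id > user mapping > broadcast."""

    def __init__(self, broadcast_channel: Optional[str] = None):
        self._user_chat_mapping: Dict[str, str] = {}
        self._broadcast = broadcast_channel

    def update(self, userid: str, chat_id: str) -> None:
        # Called whenever a message or interaction arrives
        if userid and chat_id:
            self._user_chat_mapping[userid] = chat_id

    def resolve(self, userid: Optional[str] = None,
                chat_id: Optional[str] = None) -> Optional[str]:
        if chat_id:                                  # Priority 1
            return chat_id
        if userid and userid in self._user_chat_mapping:
            return self._user_chat_mapping[userid]   # Priority 2
        return self._broadcast                       # Priority 3+

router = ReplyRouter(broadcast_channel="general")
router.update("42", "help-desk")
```

The real implementation additionally falls back to any writable guild channel and finally a DM; this sketch only shows the mapping logic.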
def _should_process_message(self, message: discord.Message) -> bool:
if isinstance(message.channel, discord.DMChannel):
return True

View File

@@ -154,7 +154,6 @@ class DoubanApi(metaclass=WeakSingleton):
_api_url = "https://api.douban.com/v2"
def __init__(self):
self.__clear_async_cache__ = False
self._session = requests.Session()
@classmethod
@@ -225,7 +224,7 @@ class DoubanApi(metaclass=WeakSingleton):
"""
return resp.json() if resp is not None else None
@cached(maxsize=settings.CONF.douban, ttl=settings.CONF.meta, skip_none=True)
@cached(maxsize=settings.CONF.douban, ttl=settings.CONF.meta, skip_none=True, shared_key="get")
def __invoke(self, url: str, **kwargs) -> dict:
"""
GET请求
@@ -237,14 +236,11 @@ class DoubanApi(metaclass=WeakSingleton):
).get_res(url=req_url, params=params)
return self._handle_response(resp)
@cached(maxsize=settings.CONF.douban, ttl=settings.CONF.meta, skip_none=True)
@cached(maxsize=settings.CONF.douban, ttl=settings.CONF.meta, skip_none=True, shared_key="get")
async def __async_invoke(self, url: str, **kwargs) -> dict:
"""
GET请求异步版本
"""
if self.__clear_async_cache__:
self.__clear_async_cache__ = False
await self.__async_invoke.cache_clear()
req_url, params = self._prepare_get_request(url, **kwargs)
resp = await AsyncRequestUtils(
ua=choice(self._user_agents)
@@ -263,7 +259,7 @@ class DoubanApi(metaclass=WeakSingleton):
params.pop('_ts')
return req_url, params
@cached(maxsize=settings.CONF.douban, ttl=settings.CONF.meta, skip_none=True)
@cached(maxsize=settings.CONF.douban, ttl=settings.CONF.meta, skip_none=True, shared_key="post")
def __post(self, url: str, **kwargs) -> dict:
"""
POST请求
@@ -285,7 +281,7 @@ class DoubanApi(metaclass=WeakSingleton):
).post_res(url=req_url, data=params)
return self._handle_response(resp)
@cached(maxsize=settings.CONF.douban, ttl=settings.CONF.meta, skip_none=True)
@cached(maxsize=settings.CONF.douban, ttl=settings.CONF.meta, skip_none=True, shared_key="post")
async def __async_post(self, url: str, **kwargs) -> dict:
"""
POST请求异步版本
@@ -865,7 +861,7 @@ class DoubanApi(metaclass=WeakSingleton):
清空LRU缓存
"""
self.__invoke.cache_clear()
self.__clear_async_cache__ = True
self.__post.cache_clear()
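The `shared_key` argument lets the sync and async variants of the same request share one cache, which is why the `__clear_async_cache__` flag could be dropped: clearing the sync cache now clears the async one too. A toy illustration of the shared-store idea (the project's real `cached` decorator also handles TTL, `maxsize`, `skip_none` and coroutines; this only shows the sharing behaviour):

```python
import functools
from typing import Any, Callable, Dict

# One cache dict per shared key; several functions can share entries
_CACHES: Dict[str, Dict[tuple, Any]] = {}

def cached(shared_key: str) -> Callable:
    cache = _CACHES.setdefault(shared_key, {})
    def decorator(func: Callable) -> Callable:
        @functools.wraps(func)
        def wrapper(*args):
            if args not in cache:
                cache[args] = func(*args)
            return cache[args]
        wrapper.cache_clear = cache.clear  # clears the SHARED store
        return wrapper
    return decorator

calls = []

@cached(shared_key="get")
def invoke(url: str) -> str:
    calls.append(url)
    return f"resp:{url}"

@cached(shared_key="get")
def async_invoke(url: str) -> str:  # shares invoke's cache
    calls.append(url)
    return f"resp:{url}"

first = invoke("a")
second = async_invoke("a")  # cache hit, no second call
```

Sharing a key is only safe when both functions return equivalent results for the same arguments, which is exactly the sync/async pair case here.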
def close(self):
if self._session:

View File

@@ -440,7 +440,7 @@ class FanartModule(_ModuleBase):
return result
@classmethod
@cached(maxsize=settings.CONF.fanart, ttl=settings.CONF.meta)
@cached(maxsize=settings.CONF.fanart, ttl=settings.CONF.meta, shared_key="get")
def __request_fanart(cls, media_type: MediaType, queryid: Union[str, int]) -> Optional[dict]:
if media_type == MediaType.MOVIE:
image_url = cls._movie_url % queryid
@@ -456,3 +456,11 @@ class FanartModule(_ModuleBase):
except Exception as err:
logger.error(f"获取{queryid}的Fanart图片失败:{str(err)}")
return None
def clear_cache(self):
"""
清除缓存
"""
logger.info(f"开始清除{self.get_name()}缓存 ...")
self.__request_fanart.cache_clear()
logger.info(f"{self.get_name()}缓存清除完成")

View File

@@ -81,26 +81,26 @@ class FileManagerModule(_ModuleBase):
return False, f"{d.name} 的下载目录未设置"
if d.storage == "local" and not Path(download_path).exists():
return False, f"{d.name} 的下载目录 {download_path} 不存在"
# 媒体库目录
# 仅在启用整理时检查媒体库目录
library_path = d.library_path
if not library_path:
return False, f"{d.name} 的媒体库目录未设置"
if d.library_storage == "local" and not Path(library_path).exists():
return False, f"{d.name} 的媒体库目录 {library_path} 不存在"
# 硬链接
if d.transfer_type == "link" \
and d.storage == "local" \
and d.library_storage == "local" \
and not SystemUtils.is_same_disk(Path(download_path), Path(library_path)):
return False, f"{d.name} 的下载目录 {download_path} 与媒体库目录 {library_path} 不在同一磁盘,无法硬链接"
if d.transfer_type:
if not library_path:
return False, f"{d.name} 的媒体库目录未设置"
if d.library_storage == "local" and not Path(library_path).exists():
return False, f"{d.name} 的媒体库目录 {library_path} 不存在"
# 硬链接
if d.transfer_type == "link" \
and d.storage == "local" \
and d.library_storage == "local" \
and not SystemUtils.is_same_disk(Path(download_path), Path(library_path)):
return False, f"{d.name} 的下载目录 {download_path} 与媒体库目录 {library_path} 不在同一磁盘,无法硬链接"
# 存储
storage_oper = self.__get_storage_oper(d.storage)
if not storage_oper:
return False, f"{d.name} 的存储类型 {d.storage} 不支持"
if not storage_oper.check():
return False, f"{d.name} 的存储测试不通过"
if d.transfer_type and d.transfer_type not in storage_oper.support_transtype():
return False, f"{d.name} 的存储不支持 {d.transfer_type} 整理方式"
if storage_oper:
if not storage_oper.check():
return False, f"{d.name} 的存储测试不通过"
if d.transfer_type and d.transfer_type not in storage_oper.support_transtype():
return False, f"{d.name} 的存储不支持 {d.transfer_type} 整理方式"
return True, ""

View File

@@ -261,13 +261,12 @@ class StorageBase(metaclass=ABCMeta):
for sub_file in sub_files:
__snapshot_file(sub_file, current_depth + 1)
else:
# 记录文件的完整信息用于比对
if getattr(_fileitm, 'modify_time', 0) > last_snapshot_time:
files_info[_fileitm.path] = {
'size': _fileitm.size or 0,
'modify_time': getattr(_fileitm, 'modify_time', 0),
'type': _fileitm.type
}
# 记录文件的完整信息用于比对(始终包含所有文件,由 compare_snapshots 负责检测变化)
files_info[_fileitm.path] = {
'size': _fileitm.size or 0,
'modify_time': getattr(_fileitm, 'modify_time', 0),
'type': _fileitm.type
}
except Exception as e:
logger.debug(f"Snapshot error for {_fileitm.path}: {e}")
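With the time filter removed, the snapshot always records every file and change detection moves to the comparison step. A sketch of how two such snapshots can be diffed (the name `compare_snapshots` is taken from the comment in the hunk; the real signature may differ):

```python
from typing import Dict, List, Tuple

Snapshot = Dict[str, dict]  # path -> {'size', 'modify_time', 'type'}

def compare_snapshots(old: Snapshot, new: Snapshot) -> Tuple[List[str], List[str], List[str]]:
    """Return (added, modified, removed) paths between two snapshots."""
    added = [p for p in new if p not in old]
    removed = [p for p in old if p not in new]
    modified = [
        p for p in new
        if p in old and (new[p]['size'] != old[p]['size']
                         or new[p]['modify_time'] != old[p]['modify_time'])
    ]
    return added, modified, removed

old = {"/a.mkv": {"size": 1, "modify_time": 100},
       "/b.mkv": {"size": 2, "modify_time": 100}}
new = {"/a.mkv": {"size": 1, "modify_time": 100},
       "/b.mkv": {"size": 3, "modify_time": 200},
       "/c.mkv": {"size": 4, "modify_time": 300}}
```

Filtering inside the snapshot (the removed code) would have made deletions and pre-existing files invisible to this comparison, which is what the change fixes.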

View File

@@ -38,14 +38,14 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
schema = StorageSchema.Alipan
# 支持的整理方式
transtype = {
"move": "移动",
"copy": "复制"
}
transtype = {"move": "移动", "copy": "复制"}
# 基础url
base_url = "https://openapi.alipan.com"
# 阿里云盘目录时间不随子文件变更而更新,默认关闭目录修改时间检查
snapshot_check_folder_modtime = settings.ALIPAN_SNAPSHOT_CHECK_FOLDER_MODTIME
# 文件块大小,默认10MB
chunk_size = 10 * 1024 * 1024
@@ -59,9 +59,7 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
"""
初始化带速率限制的会话
"""
self.session.headers.update({
"Content-Type": "application/json"
})
self.session.headers.update({"Content-Type": "application/json"})
def _check_session(self):
"""
@@ -76,7 +74,11 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
获取默认存储桶ID
"""
conf = self.get_conf()
drive_id = conf.get("resource_drive_id") or conf.get("backup_drive_id") or conf.get("default_drive_id")
drive_id = (
conf.get("resource_drive_id")
or conf.get("backup_drive_id")
or conf.get("default_drive_id")
)
if not drive_id:
raise NoCheckInException("【阿里云盘】请先扫码登录!")
return drive_id
@@ -94,10 +96,7 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
if expires_in and refresh_time + expires_in < int(time.time()):
tokens = self.__refresh_access_token(refresh_token)
if tokens:
self.set_config({
"refresh_time": int(time.time()),
**tokens
})
self.set_config({"refresh_time": int(time.time()), **tokens})
access_token = tokens.get("access_token")
if access_token:
self.session.headers.update({"Authorization": f"Bearer {access_token}"})
@@ -115,10 +114,15 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
f"{self.base_url}/oauth/authorize/qrcode",
json={
"client_id": settings.ALIPAN_APP_ID,
"scopes": ["user:base", "file:all:read", "file:all:write", "file:share:write"],
"scopes": [
"user:base",
"file:all:read",
"file:all:write",
"file:share:write",
],
"code_challenge": code_verifier,
"code_challenge_method": "plain"
}
"code_challenge_method": "plain",
},
)
if resp is None:
return {}, "网络错误"
@@ -126,14 +130,9 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
if result.get("code"):
return {}, result.get("message")
# 持久化验证参数
self._auth_state = {
"sid": result.get("sid"),
"code_verifier": code_verifier
}
self._auth_state = {"sid": result.get("sid"), "code_verifier": code_verifier}
# 生成二维码内容
return {
"codeUrl": result.get("qrCodeUrl")
}, ""
return {"codeUrl": result.get("qrCodeUrl")}, ""
def check_login(self) -> Optional[Tuple[dict, str]]:
"""
@@ -144,7 +143,7 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
"WaitLogin": "等待登录",
"ScanSuccess": "扫码成功",
"LoginSuccess": "登录成功",
"QRCodeExpired": "二维码过期"
"QRCodeExpired": "二维码过期",
}
if not self._auth_state:
@@ -163,10 +162,7 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
self._auth_state["authCode"] = authCode
tokens = self.__get_access_token()
if tokens:
self.set_config({
"refresh_time": int(time.time()),
**tokens
})
self.set_config({"refresh_time": int(time.time()), **tokens})
self.__get_drive_id()
return {"status": status, "tip": _status_text.get(status, "未知错误")}, ""
except Exception as e:
@@ -184,14 +180,16 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
"client_id": settings.ALIPAN_APP_ID,
"grant_type": "authorization_code",
"code": self._auth_state["authCode"],
"code_verifier": self._auth_state["code_verifier"]
}
"code_verifier": self._auth_state["code_verifier"],
},
)
if resp is None:
raise SessionInvalidException("【阿里云盘】获取 access_token 失败")
result = resp.json()
if result.get("code"):
raise Exception(f"【阿里云盘】{result.get('code')} - {result.get('message')}")
raise Exception(
f"【阿里云盘】{result.get('code')} - {result.get('message')}"
)
return result
def __refresh_access_token(self, refresh_token: str) -> Optional[dict]:
@@ -205,30 +203,34 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
json={
"client_id": settings.ALIPAN_APP_ID,
"grant_type": "refresh_token",
"refresh_token": refresh_token
}
"refresh_token": refresh_token,
},
)
if resp is None:
logger.error(f"【阿里云盘】刷新 access_token 失败:refresh_token={refresh_token}")
logger.error(
f"【阿里云盘】刷新 access_token 失败:refresh_token={refresh_token}"
)
return None
result = resp.json()
if result.get("code"):
logger.warn(f"【阿里云盘】刷新 access_token 失败:{result.get('code')} - {result.get('message')}")
logger.warn(
f"【阿里云盘】刷新 access_token 失败:{result.get('code')} - {result.get('message')}"
)
return result
def __get_drive_id(self):
"""
获取默认存储桶ID
"""
resp = self.session.post(
f"{self.base_url}/adrive/v1.0/user/getDriveInfo"
)
resp = self.session.post(f"{self.base_url}/adrive/v1.0/user/getDriveInfo")
if resp is None:
logger.error("获取默认存储桶ID失败")
return None
result = resp.json()
if result.get("code"):
logger.warn(f"获取默认存储ID失败:{result.get('code')} - {result.get('message')}")
logger.warn(
f"获取默认存储ID失败:{result.get('code')} - {result.get('message')}"
)
return None
# 保存用户参数
"""
@@ -244,8 +246,9 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
self.set_config(conf)
return None
def _request_api(self, method: str, endpoint: str,
result_key: Optional[str] = None, **kwargs) -> Optional[Union[dict, list]]:
def _request_api(
self, method: str, endpoint: str, result_key: Optional[str] = None, **kwargs
) -> Optional[Union[dict, list]]:
"""
带错误处理和速率限制的API请求
"""
@@ -256,10 +259,7 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
no_error_log = kwargs.pop("no_error_log", False)
try:
resp = self.session.request(
method, f"{self.base_url}{endpoint}",
**kwargs
)
resp = self.session.request(method, f"{self.base_url}{endpoint}", **kwargs)
except requests.exceptions.RequestException as e:
logger.error(f"【阿里云盘】{method} 请求 {endpoint} 网络错误: {str(e)}")
return None
@@ -278,7 +278,9 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
ret_data = resp.json()
if ret_data.get("code"):
if not no_error_log:
logger.warn(f"【阿里云盘】{method} {endpoint} 返回:{ret_data.get('code')} {ret_data.get('message')}")
logger.warn(
f"【阿里云盘】{method} {endpoint} 返回:{ret_data.get('code')} {ret_data.get('message')}"
)
if result_key:
return ret_data.get(result_key)
@@ -328,7 +330,7 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
size: 前多少字节
"""
sha1 = hashlib.sha1()
with open(filepath, 'rb') as f:
with open(filepath, "rb") as f:
if size:
chunk = f.read(size)
sha1.update(chunk)
@@ -369,7 +371,7 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
"limit": 100,
"marker": next_marker,
"parent_file_id": parent_file_id,
}
},
)
if resp is None:
raise FileNotFoundError(f"【阿里云盘】{fileitem.path} 检索出错!")
@@ -393,7 +395,9 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
return fileitem
return None
def create_folder(self, parent_item: schemas.FileItem, name: str) -> Optional[schemas.FileItem]:
def create_folder(
self, parent_item: schemas.FileItem, name: str
) -> Optional[schemas.FileItem]:
"""
创建目录
"""
@@ -404,8 +408,8 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
"drive_id": parent_item.drive_id,
"parent_file_id": parent_item.fileid or "root",
"name": name,
"type": "folder"
}
"type": "folder",
},
)
if not resp:
return None
@@ -422,7 +426,7 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
计算文件前1KB的SHA1作为pre_hash
"""
sha1 = hashlib.sha1()
with open(file_path, 'rb') as f:
with open(file_path, "rb") as f:
data = f.read(1024)
sha1.update(data)
return sha1.hexdigest()
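The pre-hash is just the SHA1 of the file's first 1 KB, cheap enough to send with the create request so the server can rule out a rapid upload early. A runnable restatement against a temporary file:

```python
import hashlib
import tempfile
from pathlib import Path

def calculate_pre_hash(file_path: Path) -> str:
    """SHA1 of the first 1 KB only, used as AliPan's cheap pre-check."""
    sha1 = hashlib.sha1()
    with open(file_path, "rb") as f:
        sha1.update(f.read(1024))
    return sha1.hexdigest()

# Demo file: 4 KB of 'x'; only the first 1 KB enters the hash
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"x" * 4096)
    sample = Path(tmp.name)

pre_hash = calculate_pre_hash(sample)
```

If the server answers `PreHashMatched`, the full-file `content_hash` and `proof_code` are computed next, as the create-file hunk shows.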
@@ -443,7 +447,9 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
try:
tmp_int = int(hex_str, 16)
except ValueError:
raise ValueError("【阿里云盘】Invalid hex string for proof code calculation")
raise ValueError(
"【阿里云盘】Invalid hex string for proof code calculation"
)
# Step 5-7: 计算读取范围
index = tmp_int % file_size
@@ -453,7 +459,7 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
end = file_size
# Step 8: 读取文件范围数据并编码
with open(file_path, 'rb') as f:
with open(file_path, "rb") as f:
f.seek(start)
chunk = f.read(end - start)
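The proof code reads an up-to-8-byte window of the file at an offset derived from a hex string and base64-encodes it. Per the AliPan open API that hex string is the first 16 hex digits of `md5(access_token)`, but that step falls outside this hunk, so treat it as an assumption. Sketch over an in-memory buffer:

```python
import base64
import hashlib

def calculate_proof_code(data: bytes, access_token: str) -> str:
    """Sketch of proof_code v1; the md5-of-token step is assumed."""
    file_size = len(data)
    if not file_size:
        return ""
    # Assumption: hex_str comes from md5(access_token), first 16 hex digits
    hex_str = hashlib.md5(access_token.encode()).hexdigest()[:16]
    # Steps 5-8 from the hunk: offset, clamped 8-byte window, base64
    index = int(hex_str, 16) % file_size
    start, end = index, min(index + 8, file_size)
    return base64.b64encode(data[start:end]).decode()

sample = b"0123456789abcdef" * 4
code = calculate_proof_code(sample, "token")
```

The window is clamped to the file end, so files shorter than `index + 8` yield a shorter proof, matching the `end = file_size` branch above.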
@@ -465,7 +471,7 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
计算整个文件的SHA1作为content_hash
"""
sha1 = hashlib.sha1()
with open(file_path, 'rb') as f:
with open(file_path, "rb") as f:
while True:
chunk = f.read(8192)
if not chunk:
@@ -473,9 +479,15 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
sha1.update(chunk)
return sha1.hexdigest()
def _create_file(self, drive_id: str, parent_file_id: str,
file_name: str, file_path: Path, check_name_mode="refuse",
chunk_size: int = 1 * 1024 * 1024 * 1024):
def _create_file(
self,
drive_id: str,
parent_file_id: str,
file_name: str,
file_path: Path,
check_name_mode="refuse",
chunk_size: int = 1 * 1024 * 1024 * 1024,
):
"""
创建文件请求,尝试秒传
"""
@@ -495,13 +507,9 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
"check_name_mode": check_name_mode,
"size": file_size,
"pre_hash": pre_hash,
"part_info_list": part_info_list
"part_info_list": part_info_list,
}
resp = self._request_api(
"POST",
"/adrive/v1.0/openFile/create",
json=data
)
resp = self._request_api("POST", "/adrive/v1.0/openFile/create", json=data)
if not resp:
raise Exception("【阿里云盘】创建文件失败!")
if resp.get("code") == "PreHashMatched":
@@ -509,24 +517,24 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
proof_code = self._calculate_proof_code(file_path)
content_hash = self._calculate_content_hash(file_path)
data.pop("pre_hash")
data.update({
"proof_code": proof_code,
"proof_version": "v1",
"content_hash": content_hash,
"content_hash_name": "sha1",
})
resp = self._request_api(
"POST",
"/adrive/v1.0/openFile/create",
json=data
data.update(
{
"proof_code": proof_code,
"proof_version": "v1",
"content_hash": content_hash,
"content_hash_name": "sha1",
}
)
resp = self._request_api("POST", "/adrive/v1.0/openFile/create", json=data)
if not resp:
raise Exception("【阿里云盘】创建文件失败!")
if resp.get("code"):
raise Exception(resp.get("message"))
return resp
def _refresh_upload_urls(self, drive_id: str, file_id: str, upload_id: str, part_numbers: List[int]):
def _refresh_upload_urls(
self, drive_id: str, file_id: str, upload_id: str, part_numbers: List[int]
):
"""
刷新分片上传地址
"""
@@ -534,18 +542,16 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
"drive_id": drive_id,
"file_id": file_id,
"upload_id": upload_id,
"part_info_list": [{"part_number": num} for num in part_numbers]
"part_info_list": [{"part_number": num} for num in part_numbers],
}
resp = self._request_api(
"POST",
"/adrive/v1.0/openFile/getUploadUrl",
json=data
"POST", "/adrive/v1.0/openFile/getUploadUrl", json=data
)
if not resp:
raise Exception("【阿里云盘】刷新分片上传地址失败!")
if resp.get("code"):
raise Exception(resp.get("message"))
return resp.get('part_info_list', [])
return resp.get("part_info_list", [])
@staticmethod
def _upload_part(upload_url: str, data: bytes):
@@ -558,15 +564,9 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
"""
获取已上传分片列表
"""
data = {
"drive_id": drive_id,
"file_id": file_id,
"upload_id": upload_id
}
data = {"drive_id": drive_id, "file_id": file_id, "upload_id": upload_id}
resp = self._request_api(
"POST",
"/adrive/v1.0/openFile/listUploadedParts",
json=data
"POST", "/adrive/v1.0/openFile/listUploadedParts", json=data
)
if not resp:
raise Exception("【阿里云盘】获取已上传分片失败!")
@@ -576,24 +576,20 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
def _complete_upload(self, drive_id: str, file_id: str, upload_id: str):
"""标记上传完成"""
data = {
"drive_id": drive_id,
"file_id": file_id,
"upload_id": upload_id
}
resp = self._request_api(
"POST",
"/adrive/v1.0/openFile/complete",
json=data
)
data = {"drive_id": drive_id, "file_id": file_id, "upload_id": upload_id}
resp = self._request_api("POST", "/adrive/v1.0/openFile/complete", json=data)
if not resp:
raise Exception("【阿里云盘】完成上传失败!")
if resp.get("code"):
raise Exception(resp.get("message"))
return resp
def upload(self, target_dir: schemas.FileItem, local_path: Path,
new_name: Optional[str] = None) -> Optional[schemas.FileItem]:
def upload(
self,
target_dir: schemas.FileItem,
local_path: Path,
new_name: Optional[str] = None,
) -> Optional[schemas.FileItem]:
"""
文件上传:分片、支持秒传
"""
@@ -603,12 +599,14 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
# 1. 创建文件并检查秒传
chunk_size = 10 * 1024 * 1024 # 分片大小 10M
create_res = self._create_file(drive_id=target_dir.drive_id,
parent_file_id=target_dir.fileid,
file_name=target_name,
file_path=local_path,
chunk_size=chunk_size)
if create_res.get('rapid_upload', False):
create_res = self._create_file(
drive_id=target_dir.drive_id,
parent_file_id=target_dir.fileid,
file_name=target_name,
file_path=local_path,
chunk_size=chunk_size,
)
if create_res.get("rapid_upload", False):
logger.info(f"【阿里云盘】{target_name} 秒传完成!")
return self._delay_get_item(target_path)
@@ -617,33 +615,37 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
return self.get_item(target_path)
# 2. 准备分片上传参数
file_id = create_res.get('file_id')
file_id = create_res.get("file_id")
if not file_id:
logger.warn(f"【阿里云盘】创建 {target_name} 文件失败!")
return None
upload_id = create_res.get('upload_id')
part_info_list = create_res.get('part_info_list')
upload_id = create_res.get("upload_id")
part_info_list = create_res.get("part_info_list")
uploaded_parts = set()
# 3. 获取已上传分片
uploaded_info = self._list_uploaded_parts(drive_id=target_dir.drive_id, file_id=file_id, upload_id=upload_id)
for part in uploaded_info.get('uploaded_parts', []):
uploaded_parts.add(part['part_number'])
uploaded_info = self._list_uploaded_parts(
drive_id=target_dir.drive_id, file_id=file_id, upload_id=upload_id
)
for part in uploaded_info.get("uploaded_parts", []):
uploaded_parts.add(part["part_number"])
# 4. 初始化进度条
logger.info(f"【阿里云盘】开始上传: {local_path} -> {target_path},分片数:{len(part_info_list)}")
logger.info(
f"【阿里云盘】开始上传: {local_path} -> {target_path},分片数:{len(part_info_list)}"
)
progress_callback = transfer_process(local_path.as_posix())
# 5. 分片上传循环
uploaded_size = 0
with open(local_path, 'rb') as f:
with open(local_path, "rb") as f:
for part_info in part_info_list:
if global_vars.is_transfer_stopped(local_path.as_posix()):
logger.info(f"【阿里云盘】{target_name} 上传已取消!")
return None
# 计算分片参数
part_num = part_info['part_number']
part_num = part_info["part_number"]
start = (part_num - 1) * chunk_size
end = min(start + chunk_size, file_size)
current_chunk_size = end - start
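The per-part offsets follow directly from the 1-based part number and the chunk size; a small helper makes the arithmetic explicit:

```python
def part_ranges(file_size: int, chunk_size: int):
    """Yield (part_number, start, end) exactly as the upload loop computes."""
    parts = max(1, -(-file_size // chunk_size))  # ceiling division
    for part_num in range(1, parts + 1):
        start = (part_num - 1) * chunk_size
        end = min(start + chunk_size, file_size)  # last part may be short
        yield part_num, start, end

ranges = list(part_ranges(file_size=25, chunk_size=10))
```

With a 10 MB chunk size, a 25 MB file therefore uploads as three parts, the last one 5 MB.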
@@ -664,14 +666,19 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
try:
# 获取当前上传地址(可能刷新)
if attempt > 0:
new_urls = self._refresh_upload_urls(drive_id=target_dir.drive_id, file_id=file_id,
upload_id=upload_id, part_numbers=[part_num])
upload_url = new_urls[0]['upload_url']
new_urls = self._refresh_upload_urls(
drive_id=target_dir.drive_id,
file_id=file_id,
upload_id=upload_id,
part_numbers=[part_num],
)
upload_url = new_urls[0]["upload_url"]
else:
upload_url = part_info['upload_url']
upload_url = part_info["upload_url"]
# 执行上传
logger.info(
f"【阿里云盘】开始 第{attempt + 1}次 上传 {target_name} 分片 {part_num} ...")
f"【阿里云盘】开始 第{attempt + 1}次 上传 {target_name} 分片 {part_num} ..."
)
response = self._upload_part(upload_url=upload_url, data=data)
if response is None:
continue
@@ -680,9 +687,12 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
break
else:
logger.warn(
f"【阿里云盘】{target_name} 分片 {part_num}{attempt + 1} 次上传失败:{response.text}")
f"【阿里云盘】{target_name} 分片 {part_num}{attempt + 1} 次上传失败:{response.text}"
)
except Exception as e:
logger.warn(f"【阿里云盘】{target_name} 分片 {part_num} 上传异常: {str(e)}")
logger.warn(
f"【阿里云盘】{target_name} 分片 {part_num} 上传异常: {str(e)}"
)
# 处理上传结果
if success:
@@ -690,17 +700,23 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
uploaded_size += current_chunk_size
progress_callback((uploaded_size * 100) / file_size)
else:
raise Exception(f"【阿里云盘】{target_name} 分片 {part_num} 上传失败!")
raise Exception(
f"【阿里云盘】{target_name} 分片 {part_num} 上传失败!"
)
# 6. 关闭进度条
progress_callback(100)
# 7. 完成上传
result = self._complete_upload(drive_id=target_dir.drive_id, file_id=file_id, upload_id=upload_id)
result = self._complete_upload(
drive_id=target_dir.drive_id, file_id=file_id, upload_id=upload_id
)
if not result:
raise Exception("【阿里云盘】完成上传失败!")
if result.get("code"):
logger.warn(f"【阿里云盘】{target_name} 上传失败:{result.get('message')}")
logger.warn(
f"【阿里云盘】{target_name} 上传失败:{result.get('message')}"
)
return self.__get_fileitem(result, parent=target_dir.path)
def download(self, fileitem: schemas.FileItem, path: Path = None) -> Optional[Path]:
@@ -713,7 +729,7 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
json={
"drive_id": fileitem.drive_id,
"file_id": fileitem.fileid,
}
},
)
if not download_info:
logger.error(f"【阿里云盘】获取下载链接失败: {fileitem.name}")
@@ -724,7 +740,7 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
logger.error(f"【阿里云盘】下载链接为空: {fileitem.name}")
return None
local_path = path or settings.TEMP_PATH / fileitem.name
local_path = (path or settings.TEMP_PATH) / fileitem.name
# 获取文件大小
file_size = fileitem.size
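The parenthesization fix above matters because `/` on `Path` binds tighter than `or`: without the parentheses, a caller-supplied `path` was returned with no filename appended. A demonstration (paths here are illustrative):

```python
from pathlib import Path

TEMP_PATH = Path("/tmp")

def old_local_path(path, name):
    # '/' binds tighter than 'or': this is path or (TEMP_PATH / name),
    # so a caller-supplied path loses the filename
    return path or TEMP_PATH / name

def new_local_path(path, name):
    # Parenthesize first, then join: the filename is always appended
    return (path or TEMP_PATH) / name

given = Path("/downloads")
```

Both variants agree when `path` is `None`; they diverge only for the case the bug affected.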
@@ -744,7 +760,7 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
"Connection": "keep-alive",
"Sec-Fetch-Dest": "empty",
"Sec-Fetch-Mode": "cors",
"Sec-Fetch-Site": "cross-site"
"Sec-Fetch-Site": "cross-site",
}
# 如果有access_token,添加到请求头
@@ -789,10 +805,7 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
self._request_api(
"POST",
"/adrive/v1.0/openFile/recyclebin/trash",
json={
"drive_id": fileitem.drive_id,
"file_id": fileitem.fileid
}
json={"drive_id": fileitem.drive_id, "file_id": fileitem.fileid},
)
return True
except requests.exceptions.HTTPError:
@@ -808,8 +821,8 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
json={
"drive_id": fileitem.drive_id,
"file_id": fileitem.fileid,
"name": name
}
"name": name,
},
)
if not resp:
return False
@@ -828,9 +841,9 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
"/adrive/v1.0/openFile/get_by_path",
json={
"drive_id": drive_id or self._default_drive_id,
"file_path": path.as_posix()
"file_path": path.as_posix(),
},
no_error_log=True
no_error_log=True,
)
if not resp:
return None
@@ -847,7 +860,9 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
获取指定路径的文件夹,如不存在则创建
"""
def __find_dir(_fileitem: schemas.FileItem, _name: str) -> Optional[schemas.FileItem]:
def __find_dir(
_fileitem: schemas.FileItem, _name: str
) -> Optional[schemas.FileItem]:
"""
查找下级目录中匹配名称的目录
"""
@@ -863,7 +878,9 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
if folder:
return folder
# 逐级查找和创建目录
fileitem = schemas.FileItem(storage=self.schema.value, path="/", drive_id=self._default_drive_id)
fileitem = schemas.FileItem(
storage=self.schema.value, path="/", drive_id=self._default_drive_id
)
for part in path.parts[1:]:
dir_file = __find_dir(fileitem, part)
if dir_file:
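The loop above walks `path.parts` level by level, looking up each component and creating it on a miss. A hypothetical sketch of that walk, with a plain `set` standing in for the remote listing and the append standing in for `create_folder`:

```python
from pathlib import PurePosixPath

def ensure_folder(existing: set, path: str) -> list:
    """Level-by-level directory walk, as in get_folder: starting from
    the root item, look up each path component and create it when
    missing. Returns the directories that had to be created."""
    created = []
    current = PurePosixPath("/")
    for part in PurePosixPath(path).parts[1:]:
        current = current / part
        if current.as_posix() not in existing:   # __find_dir miss
            existing.add(current.as_posix())     # create_folder stand-in
            created.append(current.as_posix())
    return created
```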
@@ -901,7 +918,7 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
"file_id": fileitem.fileid,
"to_drive_id": fileitem.drive_id,
"to_parent_file_id": dest_fileitem.fileid,
}
},
)
if not resp:
return False
@@ -934,8 +951,8 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
"drive_id": fileitem.drive_id,
"file_id": src_fid,
"to_parent_file_id": target_fileitem.fileid,
"new_name": new_name
}
"new_name": new_name,
},
)
if not resp:
return False
@@ -955,18 +972,14 @@ class AliPan(StorageBase, metaclass=WeakSingleton):
获取带有企业级配额信息的存储使用情况
"""
try:
resp = self._request_api(
"POST",
"/adrive/v1.0/user/getSpaceInfo"
)
resp = self._request_api("POST", "/adrive/v1.0/user/getSpaceInfo")
if not resp:
return None
space = resp.get("personal_space_info") or {}
total_size = space.get("total_size") or 0
used_size = space.get("used_size") or 0
return schemas.StorageUsage(
total=total_size,
available=total_size - used_size
total=total_size, available=total_size - used_size
)
except NoCheckInException:
return None


@@ -9,6 +9,7 @@ from app.core.cache import cached
from app.core.config import settings, global_vars
from app.log import logger
from app.modules.filemanager.storages import StorageBase, transfer_process
from app.schemas.exception import OperationInterrupted
from app.schemas.types import StorageSchema
from app.utils.http import RequestUtils
from app.utils.singleton import WeakSingleton
@@ -17,8 +18,9 @@ from app.utils.url import UrlUtils
class Alist(StorageBase, metaclass=WeakSingleton):
"""
Alist相关操作
api文档https://oplist.org/zh/
Openlist相关操作
API 文档https://fox.oplist.org/
"""
# 存储类型
@@ -42,13 +44,19 @@ class Alist(StorageBase, metaclass=WeakSingleton):
"""
self.__generate_token.cache_clear() # noqa
def _delay_get_item(self, path: Path) -> Optional[schemas.FileItem]:
def _delay_get_item(
self, path: Path, /, refresh: bool = False
) -> Optional[schemas.FileItem]:
"""
自动延迟重试 get_item 模块
:param path: 文件路径
:param refresh: 是否刷新
:return: 文件项
"""
for _ in range(2):
time.sleep(2)
fileitem = self.get_item(path)
fileitem = self.get_item(path=path, refresh=refresh)
if fileitem:
return fileitem
return None
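`_delay_get_item` sleeps *before* each probe because a file that was just uploaded or created usually takes a moment to show up in the remote listing. The general pattern, extracted as a sketch (function name and signature are illustrative):

```python
import time
from typing import Callable, Optional, TypeVar

T = TypeVar("T")

def delay_get(fetch: Callable[[], Optional[T]],
              attempts: int = 2, delay: float = 2.0) -> Optional[T]:
    # Sleep first, then poll: checking immediately almost always misses,
    # so the wait is unconditional rather than only-on-failure.
    for _ in range(attempts):
        time.sleep(delay)
        item = fetch()
        if item is not None:
            return item
    return None
```

The new `refresh=True` flag threaded through the diff serves the same goal: force the backend to re-list instead of serving a stale cache.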
@@ -66,6 +74,9 @@ class Alist(StorageBase, metaclass=WeakSingleton):
def __get_api_url(self, path: str) -> str:
"""
获取API URL
:param path: API路径
:return: API URL
"""
return UrlUtils.adapt_request_url(self.__get_base_url, path)
@@ -88,14 +99,14 @@ class Alist(StorageBase, metaclass=WeakSingleton):
token = conf.get("token")
if token:
return str(token)
resp = RequestUtils(headers={
'Content-Type': 'application/json'
}).post_res(
resp = RequestUtils(headers={"Content-Type": "application/json"}).post_res(
self.__get_api_url("/api/auth/login"),
data=json.dumps({
"username": conf.get("username"),
"password": conf.get("password"),
}),
data=json.dumps(
{
"username": conf.get("username"),
"password": conf.get("password"),
}
),
)
"""
{
@@ -117,13 +128,15 @@ class Alist(StorageBase, metaclass=WeakSingleton):
return ""
if resp.status_code != 200:
logger.warning(f"【OpenList】更新令牌请求发送失败状态码{resp.status_code}")
logger.warning(
f"【OpenList】更新令牌请求发送失败状态码{resp.status_code}"
)
return ""
result = resp.json()
if result["code"] != 200:
logger.critical(f'【OpenList】更新令牌错误信息{result["message"]}')
logger.critical(f"【OpenList】更新令牌错误信息{result['message']}")
return ""
logger.debug("【OpenList】AList获取令牌成功")
@@ -142,12 +155,12 @@ class Alist(StorageBase, metaclass=WeakSingleton):
return True if self.__generate_token() else False
def list(
self,
fileitem: schemas.FileItem,
password: Optional[str] = "",
page: int = 1,
per_page: int = 0,
refresh: bool = False,
self,
fileitem: schemas.FileItem,
password: Optional[str] = "",
page: int = 1,
per_page: int = 0,
refresh: bool = False,
) -> List[schemas.FileItem]:
"""
浏览文件
@@ -156,15 +169,14 @@ class Alist(StorageBase, metaclass=WeakSingleton):
:param page: 页码
:param per_page: 每页数量
:param refresh: 是否刷新
:return: 文件列表
"""
if fileitem.type == "file":
item = self.get_item(Path(fileitem.path))
if item:
return [item]
return []
resp = RequestUtils(
headers=self.__get_header_with_token()
).post_res(
resp = RequestUtils(headers=self.__get_header_with_token()).post_res(
self.__get_api_url("/api/fs/list"),
json={
"path": fileitem.path,
@@ -211,7 +223,9 @@ class Alist(StorageBase, metaclass=WeakSingleton):
"""
if resp is None:
logger.warn(f"【OpenList】请求获取目录 {fileitem.path} 的文件列表失败无法连接alist服务")
logger.warn(
f"【OpenList】请求获取目录 {fileitem.path} 的文件列表失败无法连接alist服务"
)
return []
if resp.status_code != 200:
logger.warn(
@@ -223,7 +237,7 @@ class Alist(StorageBase, metaclass=WeakSingleton):
if result["code"] != 200:
logger.warn(
f'【OpenList】获取目录 {fileitem.path} 的文件列表失败,错误信息:{result["message"]}'
f"【OpenList】获取目录 {fileitem.path} 的文件列表失败,错误信息:{result['message']}"
)
return []
@@ -231,7 +245,8 @@ class Alist(StorageBase, metaclass=WeakSingleton):
schemas.FileItem(
storage=self.schema.value,
type="dir" if item["is_dir"] else "file",
path=(Path(fileitem.path) / item["name"]).as_posix() + ("/" if item["is_dir"] else ""),
path=(Path(fileitem.path) / item["name"]).as_posix()
+ ("/" if item["is_dir"] else ""),
name=item["name"],
basename=Path(item["name"]).stem,
extension=Path(item["name"]).suffix[1:] if not item["is_dir"] else None,
@@ -243,17 +258,16 @@ class Alist(StorageBase, metaclass=WeakSingleton):
]
def create_folder(
self, fileitem: schemas.FileItem, name: str
self, fileitem: schemas.FileItem, name: str
) -> Optional[schemas.FileItem]:
"""
创建目录
:param fileitem: 父目录
:param name: 目录名
:return: 目录项
"""
path = Path(fileitem.path) / name
resp = RequestUtils(
headers=self.__get_header_with_token()
).post_res(
resp = RequestUtils(headers=self.__get_header_with_token()).post_res(
self.__get_api_url("/api/fs/mkdir"),
json={"path": path.as_posix()},
)
@@ -272,40 +286,50 @@ class Alist(StorageBase, metaclass=WeakSingleton):
logger.warn(f"【OpenList】请求创建目录 {path} 失败无法连接alist服务")
return None
if resp.status_code != 200:
logger.warn(f"【OpenList】请求创建目录 {path} 失败,状态码:{resp.status_code}")
logger.warn(
f"【OpenList】请求创建目录 {path} 失败,状态码:{resp.status_code}"
)
return None
result = resp.json()
if result["code"] != 200:
logger.warn(f'【OpenList】创建目录 {path} 失败,错误信息:{result["message"]}')
logger.warn(
f"【OpenList】创建目录 {path} 失败,错误信息:{result['message']}"
)
return None
return self._delay_get_item(path)
return self._delay_get_item(path, refresh=True)
def get_folder(self, path: Path) -> Optional[schemas.FileItem]:
"""
获取目录,如目录不存在则创建
:param path: 目录路径
:return: 目录项
"""
folder = self.get_item(path)
if folder:
return folder
if not folder:
folder = self.create_folder(schemas.FileItem(
storage=self.schema.value,
type="dir",
path=path.parent.as_posix(),
name=path.name,
basename=path.stem
), path.name)
folder = self.create_folder(
schemas.FileItem(
storage=self.schema.value,
type="dir",
path=path.parent.as_posix(),
name=path.name,
basename=path.stem,
),
path.name,
)
return folder
def get_item(
self,
path: Path,
password: Optional[str] = "",
page: int = 1,
per_page: int = 0,
refresh: bool = False,
self,
path: Path,
password: Optional[str] = "",
page: int = 1,
per_page: int = 0,
refresh: bool = False,
) -> Optional[schemas.FileItem]:
"""
获取文件或目录不存在返回None
@@ -314,10 +338,9 @@ class Alist(StorageBase, metaclass=WeakSingleton):
:param page: 页码
:param per_page: 每页数量
:param refresh: 是否刷新
:return: 文件项
"""
resp = RequestUtils(
headers=self.__get_header_with_token()
).post_res(
resp = RequestUtils(headers=self.__get_header_with_token()).post_res(
self.__get_api_url("/api/fs/get"),
json={
"path": path.as_posix(),
@@ -362,12 +385,16 @@ class Alist(StorageBase, metaclass=WeakSingleton):
logger.warn(f"【OpenList】请求获取文件 {path} 失败无法连接alist服务")
return None
if resp.status_code != 200:
logger.warn(f"【OpenList】请求获取文件 {path} 失败,状态码:{resp.status_code}")
logger.warn(
f"【OpenList】请求获取文件 {path} 失败,状态码:{resp.status_code}"
)
return None
result = resp.json()
if result["code"] != 200:
logger.debug(f'【OpenList】获取文件 {path} 失败,错误信息:{result["message"]}')
logger.debug(
f"【OpenList】获取文件 {path} 失败,错误信息:{result['message']}"
)
return None
return schemas.FileItem(
@@ -385,12 +412,18 @@ class Alist(StorageBase, metaclass=WeakSingleton):
def get_parent(self, fileitem: schemas.FileItem) -> Optional[schemas.FileItem]:
"""
获取父目录
:param fileitem: 文件项
:return: 父目录项
"""
return self.get_folder(Path(fileitem.path).parent)
def __is_empty_dir(self, fileitem: schemas.FileItem) -> bool:
"""
判断目录是否为空
:param fileitem: 文件项
:return: 是否为空目录
"""
if fileitem.type != "dir":
return False
@@ -401,19 +434,22 @@ class Alist(StorageBase, metaclass=WeakSingleton):
def delete(self, fileitem: schemas.FileItem) -> bool:
"""
删除文件或目录空目录用专用API
:param fileitem: 文件项
:return: 是否删除成功
"""
# 如果是空目录,优先用 remove_empty_directory
if fileitem.type == "dir" and self.__is_empty_dir(fileitem):
resp = RequestUtils(
headers=self.__get_header_with_token()
).post_res(
resp = RequestUtils(headers=self.__get_header_with_token()).post_res(
self.__get_api_url("/api/fs/remove_empty_directory"),
json={
"src_dir": fileitem.path,
},
)
if resp is None:
logger.warn(f"【OpenList】请求删除空目录 {fileitem.path} 失败无法连接alist服务")
logger.warn(
f"【OpenList】请求删除空目录 {fileitem.path} 失败无法连接alist服务"
)
return False
if resp.status_code != 200:
logger.warn(
@@ -423,14 +459,12 @@ class Alist(StorageBase, metaclass=WeakSingleton):
result = resp.json()
if result["code"] != 200:
logger.warn(
f'【OpenList】删除空目录 {fileitem.path} 失败,错误信息:{result["message"]}'
f"【OpenList】删除空目录 {fileitem.path} 失败,错误信息:{result['message']}"
)
return False
return True
# 其它情况(文件或非空目录)
resp = RequestUtils(
headers=self.__get_header_with_token()
).post_res(
resp = RequestUtils(headers=self.__get_header_with_token()).post_res(
self.__get_api_url("/api/fs/remove"),
json={
"dir": Path(fileitem.path).parent.as_posix(),
@@ -438,7 +472,9 @@ class Alist(StorageBase, metaclass=WeakSingleton):
},
)
if resp is None:
logger.warn(f"【OpenList】请求删除文件 {fileitem.path} 失败无法连接alist服务")
logger.warn(
f"【OpenList】请求删除文件 {fileitem.path} 失败无法连接alist服务"
)
return False
if resp.status_code != 200:
logger.warn(
@@ -448,7 +484,7 @@ class Alist(StorageBase, metaclass=WeakSingleton):
result = resp.json()
if result["code"] != 200:
logger.warn(
f'【OpenList】删除文件 {fileitem.path} 失败,错误信息:{result["message"]}'
f"【OpenList】删除文件 {fileitem.path} 失败,错误信息:{result['message']}"
)
return False
return True
@@ -456,10 +492,12 @@ class Alist(StorageBase, metaclass=WeakSingleton):
def rename(self, fileitem: schemas.FileItem, name: str) -> bool:
"""
重命名文件
:param fileitem: 文件项
:param name: 新文件名
:return: 是否重命名成功
"""
resp = RequestUtils(
headers=self.__get_header_with_token()
).post_res(
resp = RequestUtils(headers=self.__get_header_with_token()).post_res(
self.__get_api_url("/api/fs/rename"),
json={
"name": name,
@@ -479,7 +517,9 @@ class Alist(StorageBase, metaclass=WeakSingleton):
}
"""
if not resp:
logger.warn(f"【OpenList】请求重命名文件 {fileitem.path} 失败无法连接alist服务")
logger.warn(
f"【OpenList】请求重命名文件 {fileitem.path} 失败无法连接alist服务"
)
return False
if resp.status_code != 200:
logger.warn(
@@ -490,27 +530,26 @@ class Alist(StorageBase, metaclass=WeakSingleton):
result = resp.json()
if result["code"] != 200:
logger.warn(
f'【OpenList】重命名文件 {fileitem.path} 失败,错误信息:{result["message"]}'
f"【OpenList】重命名文件 {fileitem.path} 失败,错误信息:{result['message']}"
)
return False
return True
def download(
self,
fileitem: schemas.FileItem,
path: Path = None,
password: Optional[str] = "",
self,
fileitem: schemas.FileItem,
path: Path = None,
password: Optional[str] = "",
) -> Optional[Path]:
"""
下载文件,保存到本地,返回本地临时文件地址
:param fileitem: 文件项
:param path: 文件保存路径
:param password: 文件密码
:return: 本地临时文件地址
"""
resp = RequestUtils(
headers=self.__get_header_with_token()
).post_res(
resp = RequestUtils(headers=self.__get_header_with_token()).post_res(
self.__get_api_url("/api/fs/get"),
json={
"path": fileitem.path,
@@ -547,18 +586,24 @@ class Alist(StorageBase, metaclass=WeakSingleton):
logger.warn(f"【OpenList】请求获取文件 {path} 失败无法连接alist服务")
return None
if resp.status_code != 200:
logger.warn(f"【OpenList】请求获取文件 {path} 失败,状态码:{resp.status_code}")
logger.warn(
f"【OpenList】请求获取文件 {path} 失败,状态码:{resp.status_code}"
)
return None
result = resp.json()
if result["code"] != 200:
logger.warn(f'【OpenList】获取文件 {path} 失败,错误信息:{result["message"]}')
logger.warn(
f"【OpenList】获取文件 {path} 失败,错误信息:{result['message']}"
)
return None
if result["data"]["raw_url"]:
download_url = result["data"]["raw_url"]
else:
download_url = UrlUtils.adapt_request_url(self.__get_base_url, f"/d{fileitem.path}")
download_url = UrlUtils.adapt_request_url(
self.__get_base_url, f"/d{fileitem.path}"
)
if result["data"]["sign"]:
download_url = download_url + "?sign=" + result["data"]["sign"]
@@ -585,7 +630,11 @@ class Alist(StorageBase, metaclass=WeakSingleton):
return local_path
def upload(
self, fileitem: schemas.FileItem, path: Path, new_name: Optional[str] = None, task: bool = False
self,
fileitem: schemas.FileItem,
path: Path,
new_name: Optional[str] = None,
task: bool = False,
) -> Optional[schemas.FileItem]:
"""
上传文件(带进度)
@@ -593,6 +642,7 @@ class Alist(StorageBase, metaclass=WeakSingleton):
:param path: 本地文件路径
:param new_name: 上传后文件名
:param task: 是否为任务默认为False避免未完成上传时对文件进行操作
:return: 上传后的文件项
"""
try:
# 获取文件大小
@@ -612,7 +662,7 @@ class Alist(StorageBase, metaclass=WeakSingleton):
# 创建自定义的文件流,支持进度回调
class ProgressFileReader:
def __init__(self, file_path: Path, callback):
self.file = open(file_path, 'rb')
self.file = open(file_path, "rb")
self.callback = callback
self.uploaded_size = 0
self.file_size = file_path.stat().st_size
@@ -623,7 +673,7 @@ class Alist(StorageBase, metaclass=WeakSingleton):
def read(self, size=-1):
if global_vars.is_transfer_stopped(path.as_posix()):
logger.info(f"【OpenList】{path} 上传已取消!")
return None
raise OperationInterrupted(f"Upload cancelled: {path}")
chunk = self.file.read(size)
if chunk:
self.uploaded_size += len(chunk)
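The `ProgressFileReader` change above swaps `return None` for `raise OperationInterrupted` on cancellation. That matters because the reader is passed as a streaming request body: returning `None` from `read()` looks like EOF, so the HTTP client would finish a *truncated* upload "successfully", whereas raising aborts the request. A self-contained sketch of the wrapper (without the cancellation hook):

```python
from pathlib import Path

class ProgressReader:
    """File-like wrapper whose read() reports cumulative progress, so it
    can be passed directly as the body of a streaming PUT. To cancel,
    raise from read() - returning None would be treated as EOF and let
    a truncated upload complete."""
    def __init__(self, file_path: Path, callback):
        self.file = open(file_path, "rb")
        self.callback = callback
        self.uploaded = 0
        self.total = file_path.stat().st_size or 1  # avoid div-by-zero

    def read(self, size: int = -1):
        chunk = self.file.read(size)
        if chunk:
            self.uploaded += len(chunk)
            self.callback(self.uploaded * 100 / self.total)
        return chunk

    def close(self):
        self.file.close()
```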
@@ -638,10 +688,12 @@ class Alist(StorageBase, metaclass=WeakSingleton):
# 使用自定义文件流上传
progress_reader = ProgressFileReader(path, progress_callback)
try:
resp = RequestUtils(headers=headers).put_res(
resp = RequestUtils(headers=headers, timeout=6000).put_res(
self.__get_api_url("/api/fs/put"),
data=progress_reader,
)
except OperationInterrupted:
return None
finally:
progress_reader.close()
@@ -649,17 +701,21 @@ class Alist(StorageBase, metaclass=WeakSingleton):
logger.warn(f"【OpenList】请求上传文件 {path} 失败")
return None
if resp.status_code != 200:
logger.warn(f"【OpenList】请求上传文件 {path} 失败,状态码:{resp.status_code}")
logger.warn(
f"【OpenList】请求上传文件 {path} 失败,状态码:{resp.status_code}"
)
return None
# 完成上传
progress_callback(100)
# 获取上传后的文件项
new_item = self._delay_get_item(target_path)
new_item = self._delay_get_item(target_path, refresh=True)
if new_item and new_name and new_name != path.name:
if self.rename(new_item, new_name):
return self._delay_get_item(Path(new_item.path).with_name(new_name))
return self._delay_get_item(
Path(new_item.path).with_name(new_name), refresh=True
)
return new_item
@@ -679,10 +735,9 @@ class Alist(StorageBase, metaclass=WeakSingleton):
:param fileitem: 文件项
:param path: 目标目录
:param new_name: 新文件名
:return: 是否复制成功
"""
resp = RequestUtils(
headers=self.__get_header_with_token()
).post_res(
resp = RequestUtils(headers=self.__get_header_with_token()).post_res(
self.__get_api_url("/api/fs/copy"),
json={
"src_dir": Path(fileitem.path).parent.as_posix(),
@@ -719,12 +774,12 @@ class Alist(StorageBase, metaclass=WeakSingleton):
result = resp.json()
if result["code"] != 200:
logger.warn(
f'【OpenList】复制文件 {fileitem.path} 失败,错误信息:{result["message"]}'
f"【OpenList】复制文件 {fileitem.path} 失败,错误信息:{result['message']}"
)
return False
# 重命名
if fileitem.name != new_name:
new_item = self._delay_get_item(path / fileitem.name)
new_item = self._delay_get_item(path / fileitem.name, refresh=True)
if new_item:
self.rename(new_item, new_name)
return True
@@ -735,13 +790,12 @@ class Alist(StorageBase, metaclass=WeakSingleton):
:param fileitem: 文件项
:param path: 目标目录
:param new_name: 新文件名
:return: 是否移动成功
"""
# 先重命名
if fileitem.name != new_name:
self.rename(fileitem, new_name)
resp = RequestUtils(
headers=self.__get_header_with_token()
).post_res(
resp = RequestUtils(headers=self.__get_header_with_token()).post_res(
self.__get_api_url("/api/fs/move"),
json={
"src_dir": Path(fileitem.path).parent.as_posix(),
@@ -778,7 +832,7 @@ class Alist(StorageBase, metaclass=WeakSingleton):
result = resp.json()
if result["code"] != 200:
logger.warn(
f'【OpenList】移动文件 {fileitem.path} 失败,错误信息:{result["message"]}'
f"【OpenList】移动文件 {fileitem.path} 失败,错误信息:{result['message']}"
)
return False
return True


@@ -5,7 +5,11 @@ from typing import List, Optional, Union
import smbclient
from smbclient import ClientConfig, register_session, reset_connection_cache
from smbprotocol.exceptions import SMBException, SMBResponseException, SMBAuthenticationError
from smbprotocol.exceptions import (
SMBException,
SMBResponseException,
SMBAuthenticationError,
)
from app import schemas
from app.core.config import settings, global_vars
@@ -22,6 +26,7 @@ class SMBConnectionError(Exception):
"""
SMB 连接错误
"""
pass
@@ -84,7 +89,7 @@ class SMB(StorageBase, metaclass=WeakSingleton):
connection_timeout=60,
port=port,
auth_protocol="negotiate", # 使用协商认证
require_secure_negotiate=False # 匿名访问时可能需要关闭安全协商
require_secure_negotiate=False, # 匿名访问时可能需要关闭安全协商
)
# 注册会话以启用连接池
@@ -94,7 +99,7 @@ class SMB(StorageBase, metaclass=WeakSingleton):
password=self._password,
port=port,
encrypt=False, # 根据需要启用加密
connection_timeout=60
connection_timeout=60,
)
# 测试连接
@@ -105,7 +110,9 @@ class SMB(StorageBase, metaclass=WeakSingleton):
if self._is_anonymous_access():
logger.info(f"【SMB】匿名连接成功{self._server_path}")
else:
logger.info(f"【SMB】认证连接成功{self._server_path} (用户:{self._username})")
logger.info(
f"【SMB】认证连接成功{self._server_path} (用户:{self._username})"
)
except Exception as e:
logger.error(f"【SMB】连接初始化失败{e}")
@@ -160,7 +167,9 @@ class SMB(StorageBase, metaclass=WeakSingleton):
else:
return self._server_path
def _create_fileitem(self, stat_result, file_path: str, name: str) -> schemas.FileItem:
def _create_fileitem(
self, stat_result, file_path: str, name: str
) -> schemas.FileItem:
"""
创建文件项
"""
@@ -189,7 +198,7 @@ class SMB(StorageBase, metaclass=WeakSingleton):
path=relative_path,
name=name,
basename=name,
modify_time=modify_time
modify_time=modify_time,
)
else:
return schemas.FileItem(
@@ -199,8 +208,8 @@ class SMB(StorageBase, metaclass=WeakSingleton):
name=name,
basename=Path(name).stem,
extension=Path(name).suffix[1:] if Path(name).suffix else None,
size=getattr(stat_result, 'st_size', 0),
modify_time=modify_time
size=getattr(stat_result, "st_size", 0),
modify_time=modify_time,
)
except Exception as e:
logger.error(f"【SMB】创建文件项失败{e}")
@@ -211,7 +220,7 @@ class SMB(StorageBase, metaclass=WeakSingleton):
path=file_path.replace(self._server_path, "").replace("\\", "/"),
name=name,
basename=Path(name).stem,
modify_time=int(time.time())
modify_time=int(time.time()),
)
def init_storage(self):
@@ -282,7 +291,9 @@ class SMB(StorageBase, metaclass=WeakSingleton):
logger.error(f"【SMB】列出文件失败: {e}")
return []
def create_folder(self, fileitem: schemas.FileItem, name: str) -> Optional[schemas.FileItem]:
def create_folder(
self, fileitem: schemas.FileItem, name: str
) -> Optional[schemas.FileItem]:
"""
创建目录
"""
@@ -302,7 +313,7 @@ class SMB(StorageBase, metaclass=WeakSingleton):
path=f"{fileitem.path.rstrip('/')}/{name}/",
name=name,
basename=name,
modify_time=int(time.time())
modify_time=int(time.time()),
)
except Exception as e:
logger.error(f"【SMB】创建目录失败: {e}")
@@ -350,7 +361,7 @@ class SMB(StorageBase, metaclass=WeakSingleton):
path="/",
name="",
basename="",
modify_time=int(time.time())
modify_time=int(time.time()),
)
smb_path = self._normalize_path(str(path).rstrip("/"))
@@ -459,8 +470,12 @@ class SMB(StorageBase, metaclass=WeakSingleton):
logger.info(f"【SMB】强制删除目录成功: {smb_path}")
except Exception as remove_error:
# 如果还是失败,记录错误并抛出异常
logger.error(f"【SMB】无法删除非空目录: {smb_path} - {remove_error}")
raise SMBConnectionError(f"无法删除非空目录 {smb_path}: {remove_error}")
logger.error(
f"【SMB】无法删除非空目录: {smb_path} - {remove_error}"
)
raise SMBConnectionError(
f"无法删除非空目录 {smb_path}: {remove_error}"
)
except SMBException as e:
logger.error(f"【SMB】SMB操作失败: {smb_path} - {e}")
raise SMBConnectionError(f"SMB操作失败 {smb_path}: {e}")
@@ -496,7 +511,7 @@ class SMB(StorageBase, metaclass=WeakSingleton):
"""
带实时进度显示的下载
"""
local_path = path or settings.TEMP_PATH / fileitem.name
local_path = (path or settings.TEMP_PATH) / fileitem.name
smb_path = self._normalize_path(fileitem.path)
try:
self._check_connection()
@@ -541,8 +556,9 @@ class SMB(StorageBase, metaclass=WeakSingleton):
local_path.unlink()
return None
def upload(self, fileitem: schemas.FileItem, path: Path,
new_name: Optional[str] = None) -> Optional[schemas.FileItem]:
def upload(
self, fileitem: schemas.FileItem, path: Path, new_name: Optional[str] = None
) -> Optional[schemas.FileItem]:
"""
带实时进度显示的上传
"""
@@ -644,22 +660,22 @@ class SMB(StorageBase, metaclass=WeakSingleton):
self._check_connection()
src_path = self._normalize_path(fileitem.path)
dst_path = self._normalize_path(target_file)
# 检查源文件是否存在
if not smbclient.path.exists(src_path):
raise FileNotFoundError(f"源文件不存在: {src_path}")
# 确保目标路径的父目录存在
dst_parent = "\\".join(dst_path.rsplit("\\", 1)[:-1])
if dst_parent and not smbclient.path.exists(dst_parent):
logger.info(f"【SMB】创建目标目录: {dst_parent}")
smbclient.makedirs(dst_parent, exist_ok=True)
# 尝试创建硬链接
smbclient.link(src_path, dst_path)
logger.info(f"【SMB】硬链接创建成功: {src_path} -> {dst_path}")
return True
except SMBResponseException as e:
# SMB协议错误可能不支持硬链接
logger.error(f"【SMB】创建硬链接失败(当前Samba服务器可能不支持硬链接): {e}")
@@ -667,8 +683,6 @@ class SMB(StorageBase, metaclass=WeakSingleton):
except Exception as e:
logger.error(f"【SMB】创建硬链接失败: {e}")
return False
def softlink(self, fileitem: schemas.FileItem, target_file: Path) -> bool:
pass
@@ -682,7 +696,7 @@ class SMB(StorageBase, metaclass=WeakSingleton):
volume_stat = smbclient.stat_volume(self._server_path)
return schemas.StorageUsage(
total=volume_stat.total_size,
available=volume_stat.caller_available_size
available=volume_stat.caller_available_size,
)
except Exception as e:


@@ -3,7 +3,7 @@ import secrets
import time
from pathlib import Path
from threading import Lock
from typing import List, Optional, Tuple, Union, Dict
from typing import List, Optional, Tuple, Union
from hashlib import sha256
import oss2
@@ -20,7 +20,7 @@ from app.modules.filemanager.storages import transfer_process
from app.schemas.types import StorageSchema
from app.utils.singleton import WeakSingleton
from app.utils.string import StringUtils
from app.utils.limit import QpsRateLimiter
from app.utils.limit import QpsRateLimiter, RateStats
lock = Lock()
@@ -46,22 +46,23 @@ class U115Pan(StorageBase, metaclass=WeakSingleton):
# 文件块大小默认10MB
chunk_size = 10 * 1024 * 1024
# 流控重试间隔时间
retry_delay = 70
# 下载接口单独限流
download_endpoint = "/open/ufile/downurl"
# 风控触发后休眠时间(秒)
limit_sleep_seconds = 3600
def __init__(self):
super().__init__()
self._auth_state = {}
self.session = httpx.Client(follow_redirects=True, timeout=20.0)
self._init_session()
self.qps_limiter: Dict[str, QpsRateLimiter] = {
"/open/ufile/files": QpsRateLimiter(4),
"/open/folder/get_info": QpsRateLimiter(3),
"/open/ufile/move": QpsRateLimiter(2),
"/open/ufile/copy": QpsRateLimiter(2),
"/open/ufile/update": QpsRateLimiter(2),
"/open/ufile/delete": QpsRateLimiter(2),
}
# 接口限流
self._download_limiter = QpsRateLimiter(1)
self._api_limiter = QpsRateLimiter(3)
self._limit_until = 0.0
self._limit_lock = Lock()
# 总体 QPS/QPM/QPH 统计
self._rate_stats = RateStats(source="115")
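The diff replaces the per-endpoint `QpsRateLimiter` map with two limiters (one for the download endpoint, one for everything else) plus a `RateStats` counter. An illustrative stand-in for what such a QPS limiter does — `acquire()` blocks so successive calls are spaced at least `1/qps` seconds apart (the real `QpsRateLimiter` implementation may differ):

```python
import threading
import time

class SimpleQpsLimiter:
    """Illustrative QPS limiter: acquire() blocks until the next slot.
    Reserving the slot under the lock keeps concurrent callers spaced
    out; the sleep itself happens outside the lock."""
    def __init__(self, qps: float):
        self.interval = 1.0 / qps
        self._lock = threading.Lock()
        self._next_at = 0.0

    def acquire(self):
        with self._lock:
            now = time.monotonic()
            wait = self._next_at - now
            self._next_at = max(now, self._next_at) + self.interval
        if wait > 0:
            time.sleep(wait)
```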
def _init_session(self):
"""
@@ -209,8 +210,7 @@ class U115Pan(StorageBase, metaclass=WeakSingleton):
try:
resp = self.session.get(
f"{settings.U115_AUTH_SERVER}/u115/token",
params={"state": state}
f"{settings.U115_AUTH_SERVER}/u115/token", params={"state": state}
)
if resp is None:
return {}, "无法连接到授权服务器"
@@ -221,12 +221,14 @@ class U115Pan(StorageBase, metaclass=WeakSingleton):
if status == "completed":
data = result.get("data", {})
if data:
self.set_config({
"refresh_time": int(time.time()),
"access_token": data.get("access_token"),
"refresh_token": data.get("refresh_token"),
"expires_in": data.get("expires_in"),
})
self.set_config(
{
"refresh_time": int(time.time()),
"access_token": data.get("access_token"),
"refresh_token": data.get("refresh_token"),
"expires_in": data.get("expires_in"),
}
)
self._auth_state = {}
return {"status": 2, "tip": "授权成功"}, ""
return {}, "授权服务器返回数据不完整"
@@ -292,11 +294,24 @@ class U115Pan(StorageBase, metaclass=WeakSingleton):
# 错误日志标志
no_error_log = kwargs.pop("no_error_log", False)
# 重试次数
retry_times = kwargs.pop("retry_limit", 5)
retry_times = kwargs.pop("retry_limit", 3)
# qps 速率限制
if endpoint in self.qps_limiter:
self.qps_limiter[endpoint].acquire()
# 按接口类型限流
if endpoint == self.download_endpoint:
self._download_limiter.acquire()
else:
self._api_limiter.acquire()
self._rate_stats.record()
# 风控冷却期间阻止所有接口调用,统一等待
with self._limit_lock:
wait_until = self._limit_until
if wait_until > time.time():
wait_secs = wait_until - time.time()
logger.info(
f"【115】风控冷却中本请求等待 {wait_secs:.0f} 秒后再调用接口..."
)
time.sleep(wait_secs)
try:
resp = self.session.request(method, f"{self.base_url}{endpoint}", **kwargs)
@@ -310,13 +325,24 @@ class U115Pan(StorageBase, metaclass=WeakSingleton):
kwargs["retry_limit"] = retry_times
# 处理速率限制
if resp.status_code == 429:
reset_time = 5 + int(resp.headers.get("X-RateLimit-Reset", 60))
logger.debug(
f"【115】{method} 请求 {endpoint} 限流,等待{reset_time}秒后重试"
self._rate_stats.log_stats("warning")
if retry_times <= 0:
logger.error(
f"【115】{method} 请求 {endpoint} 触发限流(429),重试次数用尽!"
)
return None
with self._limit_lock:
self._limit_until = max(
self._limit_until,
time.time() + self.limit_sleep_seconds,
)
logger.warning(
f"【115】触发限流(429),全体接口进入风控冷却 {self.limit_sleep_seconds} 秒,随后重试..."
)
time.sleep(reset_time)
time.sleep(self.limit_sleep_seconds)
kwargs["retry_limit"] = retry_times - 1
kwargs["no_error_log"] = no_error_log
return self._request_api(method, endpoint, result_key, **kwargs)
# 处理请求错误
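The 429 handling above moves from a short per-request backoff to a process-wide cooldown: `_limit_until` is extended under `_limit_lock`, and every request checks it before calling the API, so one rate-limited call pauses all of them. A minimal sketch of that gate (names are illustrative):

```python
import threading
import time

class CooldownGate:
    """Shared risk-control cooldown: when any request sees a 429 it
    trips the gate, and every subsequent caller waits until the
    deadline has passed before hitting the API again."""
    def __init__(self):
        self._until = 0.0
        self._lock = threading.Lock()

    def trip(self, seconds: float):
        with self._lock:
            # extend, never shorten, the current cooldown deadline
            self._until = max(self._until, time.time() + seconds)

    def wait(self):
        with self._lock:
            remaining = self._until - time.time()
        if remaining > 0:
            time.sleep(remaining)
```

Using `max()` in `trip()` means concurrent 429s stack into one deadline instead of each sleeping independently.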
@@ -329,6 +355,7 @@ class U115Pan(StorageBase, metaclass=WeakSingleton):
)
return None
kwargs["retry_limit"] = retry_times - 1
kwargs["no_error_log"] = no_error_log
sleep_duration = 2 ** (5 - retry_times + 1)
logger.info(
f"【115】{method} 请求 {endpoint} 错误 {e},等待 {sleep_duration} 秒后重试..."
@@ -339,20 +366,27 @@ class U115Pan(StorageBase, metaclass=WeakSingleton):
# 返回数据
ret_data = resp.json()
if ret_data.get("code") not in (0, 20004):
error_msg = ret_data.get("message")
error_msg = ret_data.get("message", "")
if not no_error_log:
logger.warn(f"【115】{method} 请求 {endpoint} 出错:{error_msg}")
if "已达到当前访问上限" in error_msg:
self._rate_stats.log_stats("warning")
if retry_times <= 0:
logger.error(
f"【115】{method} 请求 {endpoint} 达到访问上限,重试次数用尽!"
f"【115】{method} 请求 {endpoint} 触发风控(访问上限),重试次数用尽!"
)
return None
kwargs["retry_limit"] = retry_times - 1
logger.info(
f"【115】{method} 请求 {endpoint} 达到访问上限,等待 {self.retry_delay} 秒后重试..."
with self._limit_lock:
self._limit_until = max(
self._limit_until,
time.time() + self.limit_sleep_seconds,
)
logger.warning(
f"【115】触发风控(访问上限),全体接口进入风控冷却 {self.limit_sleep_seconds} 秒,随后重试..."
)
time.sleep(self.retry_delay)
time.sleep(self.limit_sleep_seconds)
kwargs["retry_limit"] = retry_times - 1
kwargs["no_error_log"] = no_error_log
return self._request_api(method, endpoint, result_key, **kwargs)
return None
@@ -729,7 +763,7 @@ class U115Pan(StorageBase, metaclass=WeakSingleton):
logger.error(f"【115】下载链接为空: {fileitem.name}")
return None
local_path = path or settings.TEMP_PATH / fileitem.name
local_path = (path or settings.TEMP_PATH) / fileitem.name
# 获取文件大小
file_size = detail.size
@@ -879,7 +913,7 @@ class U115Pan(StorageBase, metaclass=WeakSingleton):
def copy(self, fileitem: schemas.FileItem, path: Path, new_name: str) -> bool:
"""
企业级复制实现(支持目录递归复制)
复制
"""
if fileitem.fileid is None:
fileitem = self.get_item(Path(fileitem.path))
@@ -912,7 +946,7 @@ class U115Pan(StorageBase, metaclass=WeakSingleton):
def move(self, fileitem: schemas.FileItem, path: Path, new_name: str) -> bool:
"""
原子性移动操作实现
移动
"""
if fileitem.fileid is None:
fileitem = self.get_item(Path(fileitem.path))
@@ -950,7 +984,7 @@ class U115Pan(StorageBase, metaclass=WeakSingleton):
def usage(self) -> Optional[schemas.StorageUsage]:
"""
获取带有企业级配额信息的存储使用情况
存储使用情况
"""
try:
resp = self._request_api("GET", "/open/user/info", "data")


@@ -50,15 +50,15 @@ class NexusHhanclubSiteUserInfo(NexusPhpSiteUserInfo):
if not StringUtils.is_valid_html_element(html):
return
# 加入时间
join_at_text = html.xpath('//*[@id="mainContent"]/div/div[2]/div[4]/div[3]/span[2]/text()[1]')
join_at_text = html.xpath('//span[contains(text(), "加入日期")]/following-sibling::span/span/@title')
if join_at_text:
self.join_at = StringUtils.unify_datetime_str(join_at_text[0].split(' (')[0].strip())
self.join_at = StringUtils.unify_datetime_str(join_at_text[0].strip())
finally:
if html is not None:
del html
def _get_user_level(self, html):
super()._get_user_level(html)
user_level_path = html.xpath('//*[@id="mainContent"]/div/div[2]/div[2]/div[4]/span[2]/img/@title')
user_level_path = html.xpath('//b[contains(@class, "_Name")]/text()')
if user_level_path:
self.user_level = user_level_path[0]


@@ -3,6 +3,7 @@ import json
import re
from typing import Optional
from app.log import logger
from app.modules.indexer.parser import SiteParserBase, SiteSchema
from app.utils.string import StringUtils
@@ -63,7 +64,16 @@ class TNodeSiteUserInfo(SiteParserBase):
"""
解析用户做种信息
"""
seeding_info = json.loads(html_text)
try:
seeding_info = json.loads(html_text)
except json.JSONDecodeError as e:
logger.warning(f"{self._site_name}: Failed to decode seeding info JSON: {e}")
return None
if not isinstance(seeding_info, dict):
logger.warning(f"{self._site_name}: Seeding info payload is not a dictionary")
return None
if seeding_info.get("status") != 200:
return None
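The try/except added above guards against both malformed JSON and a well-formed payload of the wrong shape. The same defensive-parse pattern in isolation (site name and status convention as in the diff):

```python
import json
from typing import Optional

def parse_seeding(payload: str) -> Optional[dict]:
    # Bad JSON, a non-dict payload, or a non-200 status all return
    # None instead of raising into the caller.
    try:
        data = json.loads(payload)
    except (json.JSONDecodeError, TypeError):
        return None
    if not isinstance(data, dict):
        return None
    if data.get("status") != 200:
        return None
    return data
```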


@@ -29,7 +29,7 @@ class TNodeSpider(metaclass=SingletonClass):
self._ua = indexer.get('ua')
self._timeout = indexer.get('timeout') or 15
@cached(region="indexer_spider", maxsize=1, ttl=60 * 60 * 24, skip_empty=True)
@cached(region="indexer_spider", maxsize=1, ttl=60 * 60 * 24, skip_empty=True, shared_key="get_token")
def __get_token(self) -> Optional[str]:
if not self._domain:
return
@@ -43,7 +43,7 @@ class TNodeSpider(metaclass=SingletonClass):
return csrf_token.group(1)
return None
@cached(region="indexer_spider", maxsize=1, ttl=60 * 60 * 24, skip_empty=True)
@cached(region="indexer_spider", maxsize=1, ttl=60 * 60 * 24, skip_empty=True, shared_key="get_token")
async def __async_get_token(self) -> Optional[str]:
if not self._domain:
return


@@ -0,0 +1,180 @@
"""
QQ Bot 通知模块
基于 QQ 开放平台,支持主动消息推送和 Gateway 接收消息
注意:用户/群需曾与机器人交互过才能收到主动消息,且每月有配额限制
"""
import json
from typing import Optional, List, Tuple, Union, Any
from app.core.context import MediaInfo, Context
from app.log import logger
from app.modules import _ModuleBase, _MessageBase
from app.modules.qqbot.qqbot import QQBot
from app.schemas import CommingMessage, MessageChannel, Notification
from app.schemas.types import ModuleType
class QQBotModule(_ModuleBase, _MessageBase[QQBot]):
"""QQ Bot 通知模块"""
def init_module(self) -> None:
super().init_service(service_name=QQBot.__name__.lower(), service_type=QQBot)
self._channel = MessageChannel.QQ
@staticmethod
def get_name() -> str:
return "QQ"
@staticmethod
def get_type() -> ModuleType:
return ModuleType.Notification
@staticmethod
def get_subtype() -> MessageChannel:
return MessageChannel.QQ
@staticmethod
def get_priority() -> int:
return 10
def stop(self) -> None:
for client in self.get_instances().values():
if hasattr(client, "stop"):
client.stop()
def test(self) -> Optional[Tuple[bool, str]]:
if not self.get_instances():
return None
for name, client in self.get_instances().items():
if not client.get_state():
return False, f"QQ Bot {name} 未就绪"
return True, ""
def init_setting(self) -> Tuple[str, Union[str, bool]]:
pass
def message_parser(
self, source: str, body: Any, form: Any, args: Any
) -> Optional[CommingMessage]:
"""
解析 Gateway 转发的 QQ 消息
body 格式: {"type": "C2C_MESSAGE_CREATE"|"GROUP_AT_MESSAGE_CREATE", "content": "...", "author": {...}, "id": "...", ...}
"""
client_config = self.get_config(source)
if not client_config:
return None
try:
if isinstance(body, bytes):
msg_body = json.loads(body)
elif isinstance(body, dict):
msg_body = body
else:
return None
except (json.JSONDecodeError, TypeError) as err:
logger.debug(f"解析 QQ 消息失败: {err}")
return None
msg_type = msg_body.get("type")
content = (msg_body.get("content") or "").strip()
if not content:
return None
if msg_type == "C2C_MESSAGE_CREATE":
author = msg_body.get("author", {})
user_openid = author.get("user_openid", "")
if not user_openid:
return None
logger.info(f"收到 QQ 私聊消息: userid={user_openid}, text={content[:50]}...")
return CommingMessage(
channel=MessageChannel.QQ,
source=client_config.name,
userid=user_openid,
username=user_openid,
text=content,
)
elif msg_type == "GROUP_AT_MESSAGE_CREATE":
author = msg_body.get("author", {})
member_openid = author.get("member_openid", "")
group_openid = msg_body.get("group_openid", "")
# 群聊用 group:group_openid 作为 userid,便于回复时识别
userid = f"group:{group_openid}" if group_openid else member_openid
logger.info(f"收到 QQ 群消息: group={group_openid}, userid={member_openid}, text={content[:50]}...")
return CommingMessage(
channel=MessageChannel.QQ,
source=client_config.name,
userid=userid,
username=member_openid or group_openid,
text=content,
)
return None
def post_message(self, message: Notification, **kwargs) -> None:
for conf in self.get_configs().values():
if not self.check_message(message, conf.name):
continue
targets = message.targets
userid = message.userid
if not userid and targets:
userid = targets.get("qq_userid") or targets.get("qq_openid")
if not userid:
userid = targets.get("qq_group_openid") or targets.get("qq_group")
if userid:
userid = f"group:{userid}"
# 无 userid 且无默认配置时,由 client 向曾发过消息的用户/群广播
client: QQBot = self.get_instance(conf.name)
if client:
client.send_msg(
title=message.title,
text=message.text,
image=message.image,
link=message.link,
userid=userid,
targets=targets,
)
def post_medias_message(self, message: Notification, medias: List[MediaInfo]) -> None:
for conf in self.get_configs().values():
if not self.check_message(message, conf.name):
continue
targets = message.targets
userid = message.userid
if not userid and targets:
userid = targets.get("qq_userid") or targets.get("qq_openid")
if not userid:
g = targets.get("qq_group_openid") or targets.get("qq_group")
if g:
userid = f"group:{g}"
client: QQBot = self.get_instance(conf.name)
if client:
client.send_medias_msg(
medias=medias,
userid=userid,
title=message.title,
link=message.link,
targets=targets,
)
def post_torrents_message(
self, message: Notification, torrents: List[Context]
) -> None:
for conf in self.get_configs().values():
if not self.check_message(message, conf.name):
continue
targets = message.targets
userid = message.userid
if not userid and targets:
userid = targets.get("qq_userid") or targets.get("qq_openid")
if not userid:
g = targets.get("qq_group_openid") or targets.get("qq_group")
if g:
userid = f"group:{g}"
client: QQBot = self.get_instance(conf.name)
if client:
client.send_torrents_msg(
torrents=torrents,
userid=userid,
title=message.title,
link=message.link,
targets=targets,
)

app/modules/qqbot/api.py Normal file

@@ -0,0 +1,206 @@
"""
QQ Bot API - Python 实现
参考 QQ 开放平台官方 API: https://bot.q.qq.com/wiki/develop/api/
"""
import time
from typing import Optional, Literal
from app.log import logger
from app.utils.http import RequestUtils
API_BASE = "https://api.sgroup.qq.com"
TOKEN_URL = "https://bots.qq.com/app/getAppAccessToken"
# Token 缓存
_cached_token: Optional[dict] = None
def get_access_token(app_id: str, client_secret: str) -> str:
"""
获取 AccessToken(带缓存,提前 5 分钟刷新)
"""
global _cached_token
now_ms = int(time.time() * 1000)
if _cached_token and now_ms < _cached_token["expires_at"] - 5 * 60 * 1000 and _cached_token["app_id"] == app_id:
return _cached_token["token"]
if _cached_token and _cached_token["app_id"] != app_id:
_cached_token = None
try:
resp = RequestUtils(timeout=30).post_res(
TOKEN_URL,
json={"appId": app_id, "clientSecret": client_secret}, # QQ API 使用 camelCase
headers={"Content-Type": "application/json"},
)
if not resp or not resp.json():
raise ValueError("Failed to get access_token: empty response")
data = resp.json()
token = data.get("access_token")
expires_in = data.get("expires_in", 7200)
if not token:
raise ValueError(f"Failed to get access_token: {data}")
# expires_in 可能为字符串,统一转为 int
expires_in = int(expires_in) if expires_in is not None else 7200
_cached_token = {
"token": token,
"expires_at": now_ms + expires_in * 1000,
"app_id": app_id,
}
logger.debug(f"QQ API: Token cached for app_id={app_id}")
return token
except Exception as e:
logger.error(f"QQ API: get_access_token failed: {e}")
raise
def clear_token_cache() -> None:
"""清除 Token 缓存"""
global _cached_token
_cached_token = None
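The freshness check in `get_access_token` reuses a token only when it belongs to the same `app_id` and still has more than five minutes of validity left. That predicate, pulled out for clarity (hypothetical helper name):

```python
import time
from typing import Optional

REFRESH_MARGIN_MS = 5 * 60 * 1000  # refresh 5 minutes before actual expiry

def is_token_fresh(cached: Optional[dict], app_id: str,
                   now_ms: Optional[int] = None) -> bool:
    """Mirror of the cache check above: a token is reusable only if it
    belongs to the same app_id and expires more than 5 minutes from now."""
    if not cached or cached.get("app_id") != app_id:
        return False
    if now_ms is None:
        now_ms = int(time.time() * 1000)
    return now_ms < cached["expires_at"] - REFRESH_MARGIN_MS
```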
def _api_request(
access_token: str,
method: str,
path: str,
body: Optional[dict] = None,
timeout: int = 30,
) -> dict:
"""通用 API 请求"""
url = f"{API_BASE}{path}"
headers = {
"Authorization": f"QQBot {access_token}",
"Content-Type": "application/json",
}
try:
if method.upper() == "GET":
resp = RequestUtils(timeout=timeout).get_res(url, headers=headers)
else:
resp = RequestUtils(timeout=timeout).post_res(
url, json=body or {}, headers=headers
)
if not resp:
raise ValueError("Empty response")
data = resp.json()
status = getattr(resp, "status_code", 0)
if status and status >= 400:
raise ValueError(f"API Error [{path}]: {data.get('message', data)}")
return data
except Exception as e:
logger.error(f"QQ API: {method} {path} failed: {e}")
raise
def send_proactive_c2c_message(
access_token: str,
openid: str,
content: str,
use_markdown: bool = False,
) -> dict:
"""
主动发送 C2C 单聊消息(不需要 msg_id)
注意:每月限 4 条/用户,且用户必须曾与机器人交互过
:param access_token: 访问令牌
:param openid: 用户 openid
:param content: 消息内容
:param use_markdown: 是否使用 Markdown 格式(需机器人开通 Markdown 能力)
"""
if not content or not content.strip():
raise ValueError("主动消息内容不能为空")
content = content.strip()
body = {"markdown": {"content": content}, "msg_type": 2} if use_markdown else {"content": content, "msg_type": 0}
return _api_request(
access_token, "POST", f"/v2/users/{openid}/messages", body
)
def send_proactive_group_message(
access_token: str,
group_openid: str,
content: str,
use_markdown: bool = False,
) -> dict:
"""
主动发送群聊消息(不需要 msg_id)
注意:每月限 4 条/群,且群必须曾与机器人交互过
:param access_token: 访问令牌
:param group_openid: 群聊 openid
:param content: 消息内容
:param use_markdown: 是否使用 Markdown 格式(需机器人开通 Markdown 能力)
"""
if not content or not content.strip():
raise ValueError("主动消息内容不能为空")
content = content.strip()
body = {"markdown": {"content": content}, "msg_type": 2} if use_markdown else {"content": content, "msg_type": 0}
return _api_request(
access_token, "POST", f"/v2/groups/{group_openid}/messages", body
)
def send_c2c_message(
access_token: str,
openid: str,
content: str,
msg_id: Optional[str] = None,
) -> dict:
"""被动回复 C2C 单聊消息1 小时内最多 4 次)"""
body = {"content": content, "msg_type": 0, "msg_seq": 1}
if msg_id:
body["msg_id"] = msg_id
return _api_request(
access_token, "POST", f"/v2/users/{openid}/messages", body
)
def send_group_message(
access_token: str,
group_openid: str,
content: str,
msg_id: Optional[str] = None,
) -> dict:
"""被动回复群聊消息1 小时内最多 4 次)"""
body = {"content": content, "msg_type": 0, "msg_seq": 1}
if msg_id:
body["msg_id"] = msg_id
return _api_request(
access_token, "POST", f"/v2/groups/{group_openid}/messages", body
)
def get_gateway_url(access_token: str) -> str:
"""
获取 WebSocket Gateway URL
"""
data = _api_request(access_token, "GET", "/gateway")
url = data.get("url")
if not url:
raise ValueError("Gateway URL not found in response")
return url
def send_message(
access_token: str,
target: str,
content: str,
msg_type: Literal["c2c", "group"] = "c2c",
msg_id: Optional[str] = None,
) -> dict:
"""
统一发送接口
:param access_token: 访问令牌
:param target: openid(c2c)或 group_openid(group)
:param content: 消息内容
:param msg_type: c2c 单聊 / group 群聊
:param msg_id: 可选,被动回复时传入原消息 id
"""
if msg_id:
if msg_type == "c2c":
return send_c2c_message(access_token, target, content, msg_id)
return send_group_message(access_token, target, content, msg_id)
if msg_type == "c2c":
return send_proactive_c2c_message(access_token, target, content)
return send_proactive_group_message(access_token, target, content)
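`send_message` routes on two axes: the target kind picks the API path, and the presence of `msg_id` picks passive reply vs. proactive push. A pure-logic sketch of that dispatch (`resolve_endpoint` is illustrative; the real functions issue HTTP requests):

```python
from typing import Optional, Tuple

def resolve_endpoint(target: str, msg_type: str = "c2c",
                     msg_id: Optional[str] = None) -> Tuple[str, str]:
    """Return the API path and send mode the dispatcher above would use:
    'reply' when msg_id is present (passive), else 'proactive'."""
    mode = "reply" if msg_id else "proactive"
    if msg_type == "c2c":
        return f"/v2/users/{target}/messages", mode
    return f"/v2/groups/{target}/messages", mode
```

Note that both modes POST to the same path; only the presence of `msg_id` (and `msg_seq`) in the body distinguishes a passive reply from a proactive push.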


@@ -0,0 +1,196 @@
"""
QQ Bot Gateway WebSocket 客户端
连接 QQ 开放平台 Gateway,接收 C2C 和群聊消息并转发至 MP 消息链
"""
import json
import threading
import time
from typing import Callable, Optional
import websocket
from app.log import logger
# QQ Bot intents
INTENT_GROUP_AND_C2C = 1 << 25 # 群聊和 C2C 私聊
def run_gateway(
app_id: str,
app_secret: str,
config_name: str,
get_token_fn: Callable[[str, str], str],
get_gateway_url_fn: Callable[[str], str],
on_message_fn: Callable[[dict], None],
stop_event: threading.Event,
) -> None:
"""
在后台线程中运行 Gateway WebSocket 连接
:param app_id: QQ 机器人 AppID
:param app_secret: QQ 机器人 AppSecret
:param config_name: 配置名称,用于消息来源标识
:param get_token_fn: 获取 access_token 的函数 (app_id, app_secret) -> token
:param get_gateway_url_fn: 获取 gateway URL 的函数 (token) -> url
:param on_message_fn: 收到消息时的回调 (payload_dict) -> None
:param stop_event: 停止事件,set 时退出循环
"""
last_seq: Optional[int] = None
heartbeat_interval_ms: Optional[int] = None
heartbeat_timer: Optional[threading.Timer] = None
ws_ref: list = [] # 用于在闭包中保持 ws 引用
def send_heartbeat():
nonlocal heartbeat_timer
if stop_event.is_set():
return
try:
if ws_ref and ws_ref[0]:
payload = {"op": 1, "d": last_seq}
ws_ref[0].send(json.dumps(payload))
logger.debug(f"[QQ Gateway:{config_name}] Heartbeat sent, seq={last_seq}")
except Exception as err:
logger.debug(f"[QQ Gateway:{config_name}] Heartbeat error: {err}")
if heartbeat_interval_ms and not stop_event.is_set():
heartbeat_timer = threading.Timer(heartbeat_interval_ms / 1000.0, send_heartbeat)
heartbeat_timer.daemon = True
heartbeat_timer.start()
def on_ws_message(_, message):
nonlocal last_seq, heartbeat_interval_ms, heartbeat_timer
try:
payload = json.loads(message)
except json.JSONDecodeError as err:
logger.error(f"[QQ Gateway:{config_name}] Invalid JSON: {err}")
return
op = payload.get("op")
d = payload.get("d")
s = payload.get("s")
t = payload.get("t")
if s is not None:
last_seq = s
logger.debug(f"[QQ Gateway:{config_name}] op={op} t={t}")
if op == 10: # Hello
heartbeat_interval_ms = d.get("heartbeat_interval", 30000)
logger.info(f"[QQ Gateway:{config_name}] Hello received, heartbeat_interval={heartbeat_interval_ms}")
# Identify
identify = {
"op": 2,
"d": {
"token": f"QQBot {token}",
"intents": INTENT_GROUP_AND_C2C,
"shard": [0, 1],
},
}
ws_ref[0].send(json.dumps(identify))
logger.info(f"[QQ Gateway:{config_name}] Identify sent")
# 启动心跳
if heartbeat_timer:
heartbeat_timer.cancel()
heartbeat_timer = threading.Timer(heartbeat_interval_ms / 1000.0, send_heartbeat)
heartbeat_timer.daemon = True
heartbeat_timer.start()
elif op == 0: # Dispatch
if t == "READY":
session_id = d.get("session_id", "")
logger.info(f"[QQ Gateway:{config_name}] 连接成功 Ready, session_id={session_id}")
elif t == "RESUMED":
logger.info(f"[QQ Gateway:{config_name}] 连接成功 Session resumed")
elif t == "C2C_MESSAGE_CREATE":
author = d.get("author", {})
user_openid = author.get("user_openid", "")
content = d.get("content", "").strip()
msg_id = d.get("id", "")
if content:
on_message_fn({
"type": "C2C_MESSAGE_CREATE",
"content": content,
"author": {"user_openid": user_openid},
"id": msg_id,
"timestamp": d.get("timestamp", ""),
})
elif t == "GROUP_AT_MESSAGE_CREATE":
author = d.get("author", {})
member_openid = author.get("member_openid", "")
group_openid = d.get("group_openid", "")
content = d.get("content", "").strip()
msg_id = d.get("id", "")
if content:
on_message_fn({
"type": "GROUP_AT_MESSAGE_CREATE",
"content": content,
"author": {"member_openid": member_openid},
"id": msg_id,
"group_openid": group_openid,
"timestamp": d.get("timestamp", ""),
})
# 其他事件忽略
elif op == 7: # Reconnect
logger.info(f"[QQ Gateway:{config_name}] Reconnect requested")
# 当前实现不自动重连,由外层循环处理
elif op == 9: # Invalid Session
logger.warning(f"[QQ Gateway:{config_name}] Invalid session")
if ws_ref and ws_ref[0]:
ws_ref[0].close()
def on_ws_error(_, error):
logger.error(f"[QQ Gateway:{config_name}] WebSocket error: {error}")
def on_ws_close(_, close_status_code, close_msg):
logger.info(f"[QQ Gateway:{config_name}] WebSocket closed: {close_status_code} {close_msg}")
if heartbeat_timer:
heartbeat_timer.cancel()
reconnect_delays = [1, 2, 5, 10, 30, 60]
attempt = 0
while not stop_event.is_set():
try:
token = get_token_fn(app_id, app_secret)
gateway_url = get_gateway_url_fn(token)
logger.info(f"[QQ Gateway:{config_name}] Connecting to {gateway_url[:60]}...")
ws = websocket.WebSocketApp(
gateway_url,
on_message=on_ws_message,
on_error=on_ws_error,
on_close=on_ws_close,
)
ws_ref.clear()
ws_ref.append(ws)
# run_forever 会阻塞,需要传入 stop_event 的检查
# websocket-client 的 run_forever 支持 ping_interval, ping_timeout
# 我们使用自定义心跳,所以不设置 ping
ws.run_forever(
ping_interval=None,
ping_timeout=None,
skip_utf8_validation=True,
)
except Exception as e:
logger.error(f"[QQ Gateway:{config_name}] Connection error: {e}")
if stop_event.is_set():
break
delay = reconnect_delays[min(attempt, len(reconnect_delays) - 1)]
attempt += 1
logger.info(f"[QQ Gateway:{config_name}] Reconnecting in {delay}s (attempt {attempt})")
for _ in range(delay * 10):
if stop_event.is_set():
break
time.sleep(0.1)
if heartbeat_timer:
heartbeat_timer.cancel()
logger.info(f"[QQ Gateway:{config_name}] Gateway thread stopped")
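The reconnect loop above backs off through a fixed delay table, clamping at the last entry. The same lookup as a standalone helper:

```python
def backoff_delay(attempt: int) -> int:
    """Delay (seconds) before reconnect attempt N, capped at 60s,
    matching the reconnect_delays table in the gateway loop above."""
    delays = [1, 2, 5, 10, 30, 60]
    return delays[min(attempt, len(delays) - 1)]
```

The loop also sleeps in 0.1s slices so `stop_event` can interrupt a long delay promptly.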

app/modules/qqbot/qqbot.py Normal file

@@ -0,0 +1,397 @@
"""
QQ Bot 通知客户端
基于 QQ 开放平台 API,支持主动消息推送和 Gateway 接收消息
"""
import hashlib
import io
import pickle
import threading
from typing import Optional, List, Tuple
from PIL import Image
from app.chain.message import MessageChain
from app.core.cache import FileCache
from app.core.context import MediaInfo, Context
from app.core.metainfo import MetaInfo
from app.log import logger
from app.modules.qqbot.api import (
get_access_token,
get_gateway_url,
send_proactive_c2c_message,
send_proactive_group_message,
)
from app.modules.qqbot.gateway import run_gateway
from app.utils.http import RequestUtils
from app.utils.string import StringUtils
# QQ Markdown 图片默认尺寸(获取失败时使用,与 OpenClaw 对齐)
_DEFAULT_IMAGE_SIZE: Tuple[int, int] = (512, 512)
class QQBot:
"""QQ Bot 通知客户端"""
def __init__(
self,
QQ_APP_ID: Optional[str] = None,
QQ_APP_SECRET: Optional[str] = None,
QQ_OPENID: Optional[str] = None,
QQ_GROUP_OPENID: Optional[str] = None,
name: Optional[str] = None,
**kwargs,
):
"""
初始化 QQ Bot
:param QQ_APP_ID: QQ 机器人 AppID
:param QQ_APP_SECRET: QQ 机器人 AppSecret
:param QQ_OPENID: 默认接收者 openid(单聊)
:param QQ_GROUP_OPENID: 默认群组 openid(群聊,与 QQ_OPENID 二选一)
:param name: 配置名称,用于消息来源标识和 Gateway 接收
"""
if not QQ_APP_ID or not QQ_APP_SECRET:
logger.error("QQ Bot 配置不完整:缺少 AppID 或 AppSecret")
self._ready = False
return
self._app_id = QQ_APP_ID
self._app_secret = QQ_APP_SECRET
self._default_openid = QQ_OPENID
self._default_group_openid = QQ_GROUP_OPENID
self._config_name = name or "qqbot"
self._ready = True
# 曾发过消息的用户/群,用于无默认接收者时的广播 {(target_id, is_group), ...}
self._known_targets: set = set()
_safe_name = hashlib.md5(self._config_name.encode()).hexdigest()[:12]
self._cache_key = f"__qqbot_known_targets_{_safe_name}__"
self._filecache = FileCache()
self._load_known_targets()
# 已处理的消息 ID用于去重避免同一条消息重复处理
self._processed_msg_ids: set = set()
self._max_processed_ids = 1000
# Gateway 后台线程
self._gateway_stop = threading.Event()
self._gateway_thread = None
self._start_gateway()
logger.info("QQ Bot 客户端初始化完成")
def _load_known_targets(self) -> None:
"""从缓存加载曾互动的用户/群"""
try:
content = self._filecache.get(self._cache_key)
if content:
data = pickle.loads(content)
if isinstance(data, (list, set)):
self._known_targets = set(tuple(x) for x in data)
except Exception as e:
logger.debug(f"QQ Bot 加载 known_targets 失败: {e}")
def _save_known_targets(self) -> None:
"""持久化曾互动的用户/群到缓存"""
try:
self._filecache.set(self._cache_key, pickle.dumps(list(self._known_targets)))
except Exception as e:
logger.debug(f"QQ Bot 保存 known_targets 失败: {e}")
def _forward_to_message_chain(self, payload: dict) -> None:
"""直接调用消息链处理,避免 HTTP 开销"""
def _run():
try:
MessageChain().process(
body=payload,
form={},
args={"source": self._config_name},
)
except Exception as e:
logger.error(f"QQ Bot 转发消息失败: {e}")
threading.Thread(target=_run, daemon=True).start()
def _on_gateway_message(self, payload: dict) -> None:
"""Gateway 收到消息时转发至 MP 消息链,并记录发送者用于广播"""
msg_id = payload.get("id")
if msg_id:
if msg_id in self._processed_msg_ids:
logger.debug(f"QQ Bot: 跳过重复消息 id={msg_id}")
return
self._processed_msg_ids.add(msg_id)
if len(self._processed_msg_ids) > self._max_processed_ids:
self._processed_msg_ids.clear()
# 记录发送者,用于无默认接收者时的广播
msg_type = payload.get("type")
if msg_type == "C2C_MESSAGE_CREATE":
openid = (payload.get("author") or {}).get("user_openid")
if openid:
self._known_targets.add((openid, False))
self._save_known_targets()
elif msg_type == "GROUP_AT_MESSAGE_CREATE":
group_openid = payload.get("group_openid")
if group_openid:
self._known_targets.add((group_openid, True))
self._save_known_targets()
self._forward_to_message_chain(payload)
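The dedup logic above keeps a bounded set of processed message ids and simply clears it once the cap is exceeded: cheap, with the worst case being one re-processing of an old id right after a reset. As an isolated sketch (class name hypothetical):

```python
class MessageDeduper:
    """Bounded id de-duplication: remember up to max_ids message ids and
    clear the whole set once the cap is exceeded (coarse but O(1))."""
    def __init__(self, max_ids: int = 1000):
        self._seen: set = set()
        self._max_ids = max_ids

    def is_duplicate(self, msg_id: str) -> bool:
        if msg_id in self._seen:
            return True
        self._seen.add(msg_id)
        if len(self._seen) > self._max_ids:
            self._seen.clear()  # reset rather than evict oldest
        return False
```

An LRU-style eviction would avoid the post-reset window, at the cost of tracking insertion order; for gateway traffic volumes the reset is a reasonable trade-off.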
def _start_gateway(self) -> None:
"""启动 Gateway WebSocket 连接(后台线程)"""
try:
self._gateway_thread = threading.Thread(
target=run_gateway,
kwargs={
"app_id": self._app_id,
"app_secret": self._app_secret,
"config_name": self._config_name,
"get_token_fn": get_access_token,
"get_gateway_url_fn": get_gateway_url,
"on_message_fn": self._on_gateway_message,
"stop_event": self._gateway_stop,
},
daemon=True,
)
self._gateway_thread.start()
logger.info(f"QQ Bot Gateway 已启动: {self._config_name}")
except Exception as e:
logger.error(f"QQ Bot Gateway 启动失败: {e}")
def stop(self) -> None:
"""停止 Gateway 连接"""
if self._gateway_stop:
self._gateway_stop.set()
if self._gateway_thread and self._gateway_thread.is_alive():
self._gateway_thread.join(timeout=5)
def get_state(self) -> bool:
"""获取就绪状态"""
return self._ready
def _get_target(self, userid: Optional[str] = None, targets: Optional[dict] = None) -> tuple:
"""
解析发送目标
:return: (target_id, is_group)
"""
# 优先使用 userid可能是 openid
if userid:
# 格式支持group:xxx 表示群聊
if str(userid).lower().startswith("group:"):
return userid[6:].strip(), True
return str(userid), False
# 从 targets 获取
if targets:
qq_openid = targets.get("qq_userid") or targets.get("qq_openid")
qq_group = targets.get("qq_group_openid") or targets.get("qq_group")
if qq_group:
return str(qq_group), True
if qq_openid:
return str(qq_openid), False
# 使用默认配置
if self._default_group_openid:
return self._default_group_openid, True
if self._default_openid:
return self._default_openid, False
return None, False
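`_get_target` resolves the recipient with a fixed precedence: explicit `userid` (a `group:` prefix marking group chat), then the `targets` dict (group beats user), then the configured defaults. A standalone mirror of that precedence (hypothetical free function, with the instance defaults passed as parameters):

```python
from typing import Optional, Tuple

def resolve_target(userid: Optional[str] = None,
                   targets: Optional[dict] = None,
                   default_openid: Optional[str] = None,
                   default_group: Optional[str] = None) -> Tuple[Optional[str], bool]:
    """Return (target_id, is_group) following _get_target's precedence."""
    if userid:
        if str(userid).lower().startswith("group:"):
            return str(userid)[6:].strip(), True
        return str(userid), False
    if targets:
        openid = targets.get("qq_userid") or targets.get("qq_openid")
        group = targets.get("qq_group_openid") or targets.get("qq_group")
        if group:  # group targets win over user targets
            return str(group), True
        if openid:
            return str(openid), False
    if default_group:
        return default_group, True
    if default_openid:
        return default_openid, False
    return None, False
```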
def _get_broadcast_targets(self) -> list:
"""获取广播目标列表(曾发过消息的用户/群)"""
return list(self._known_targets)
@staticmethod
def _get_image_size(url: str) -> Optional[Tuple[int, int]]:
"""
从图片 URL 获取尺寸,只下载前 64KB 解析文件头(参考 OpenClaw)
:return: (width, height) 或 None
"""
try:
resp = RequestUtils(timeout=5).get_res(
url,
headers={"Range": "bytes=0-65535", "User-Agent": "QQBot-Image-Size-Detector/1.0"},
)
if not resp or not resp.content:
return None
data = resp.content[:65536] if len(resp.content) > 65536 else resp.content
with Image.open(io.BytesIO(data)) as img:
return img.width, img.height
except Exception as e:
logger.debug(f"QQ Bot 获取图片尺寸失败 ({url[:60]}...): {e}")
return None
@staticmethod
def _escape_markdown(text: str) -> str:
"""转义 Markdown 特殊字符,避免破坏格式。不转义 ()QQ 会误解析 \\( \\) 导致括号丢失或乱码"""
if not text:
return ""
text = text.replace("\\", "\\\\")
for char in ("*", "_", "[", "]", "`"):
text = text.replace(char, f"\\{char}")
return text
@staticmethod
def _format_message_markdown(
title: Optional[str] = None,
text: Optional[str] = None,
image: Optional[str] = None,
link: Optional[str] = None,
) -> tuple:
"""
将消息格式化为 QQ Markdown(类似 Telegram 处理方式)
:return: (content, use_markdown)
"""
parts = []
if title:
# 标题加粗,移除可能破坏格式的换行
safe_title = (title or "").replace("\n", " ").strip()
if safe_title:
parts.append(f"**{QQBot._escape_markdown(safe_title)}**")
if text:
parts.append(QQBot._escape_markdown((text or "").strip()))
if image:
# QQ Markdown 图片需带尺寸才能正确渲染,格式: ![#宽px #高px](url),否则会显示为 [图片] 文本
# 参考 OpenClaw,先获取图片真实尺寸,失败则用默认 512x512
img_url = (image or "").strip()
if img_url and (img_url.startswith("http://") or img_url.startswith("https://")):
size = QQBot._get_image_size(img_url)
w, h = size if size else _DEFAULT_IMAGE_SIZE
if size:
logger.debug(f"QQ Bot 图片尺寸: {w}x{h} - {img_url[:60]}...")
parts.append(f"![#{w}px #{h}px]({img_url})")
elif img_url:
parts.append(img_url)
if link:
link_url = (link or "").strip()
if link_url:
parts.append(f"[查看详情]({link_url})")
content = "\n\n".join(p for p in parts if p).strip()
return content, bool(content)
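The image branch relies on QQ Markdown's sized-image syntax: without explicit pixel sizes the image renders as a literal `[图片]` placeholder, per the comment above. The format string in isolation (hypothetical helper):

```python
def markdown_image(url: str, width: int = 512, height: int = 512) -> str:
    """Build QQ Markdown's sized-image syntax: ![#<w>px #<h>px](<url>).
    512x512 is the fallback used when probing the real size fails."""
    return f"![#{width}px #{height}px]({url})"
```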
def send_msg(
self,
title: str,
text: Optional[str] = None,
image: Optional[str] = None,
link: Optional[str] = None,
userid: Optional[str] = None,
targets: Optional[dict] = None,
**kwargs,
) -> bool:
"""
发送 QQ 消息
:param title: 标题
:param text: 正文
:param image: 图片 URL(QQ 主动消息暂不支持图片,可拼入文本)
:param link: 链接
:param userid: 目标 openid 或 group:xxx
:param targets: 目标字典
"""
if not self._ready:
return False
target, is_group = self._get_target(userid, targets)
targets_to_send = []
if target:
targets_to_send = [(target, is_group)]
else:
# 无默认接收者时,向曾发过消息的用户/群广播
broadcast = self._get_broadcast_targets()
if broadcast:
targets_to_send = broadcast
logger.debug(f"QQ Bot: 广播模式,共 {len(targets_to_send)} 个目标")
else:
logger.warn("QQ Bot: 未指定接收者且无互动用户,请在配置中设置 QQ_OPENID/QQ_GROUP_OPENID 或先让用户发消息")
return False
# 使用 Markdown 格式发送(类似 Telegram)
content, use_markdown = self._format_message_markdown(title=title, text=text, image=image, link=link)
logger.info(f"QQ Bot 发送内容 (use_markdown={use_markdown}):\n{content}")
if not content:
logger.warn("QQ Bot: 消息内容为空")
return False
success_count = 0
try:
token = get_access_token(self._app_id, self._app_secret)
for tgt, tgt_is_group in targets_to_send:
send_fn = send_proactive_group_message if tgt_is_group else send_proactive_c2c_message
try:
send_fn(token, tgt, content, use_markdown=use_markdown)
success_count += 1
logger.debug(f"QQ Bot: 消息已发送到 {'' if tgt_is_group else '用户'} {tgt}")
except Exception as e:
err_msg = str(e)
if use_markdown and ("markdown" in err_msg.lower() or "11244" in err_msg or "权限" in err_msg):
# Markdown 未开通时回退为纯文本
plain_parts = []
if title:
plain_parts.append(f"{title}")
if text:
plain_parts.append(text)
if image:
plain_parts.append(image)
if link:
plain_parts.append(link)
plain_content = "\n".join(plain_parts).strip()
if plain_content:
send_fn(token, tgt, plain_content, use_markdown=False)
success_count += 1
logger.debug(f"QQ Bot: Markdown 不可用,已回退纯文本发送至 {tgt}")
else:
logger.error(f"QQ Bot 发送失败 ({tgt}): {e}")
return success_count > 0
except Exception as e:
logger.error(f"QQ Bot 发送失败: {e}")
return False
def send_medias_msg(
self,
medias: List[MediaInfo],
userid: Optional[str] = None,
title: Optional[str] = None,
link: Optional[str] = None,
**kwargs,
) -> bool:
"""发送媒体列表(转为文本)"""
if not medias:
return False
lines = [f"{i + 1}. {m.title_year} - {m.type.value}" for i, m in enumerate(medias)]
text = "\n".join(lines)
return self.send_msg(
title=title or "媒体列表",
text=text,
link=link,
userid=userid,
**kwargs,
)
def send_torrents_msg(
self,
torrents: List[Context],
userid: Optional[str] = None,
title: Optional[str] = None,
link: Optional[str] = None,
**kwargs,
) -> bool:
"""发送种子列表(转为文本)"""
if not torrents:
return False
lines = []
for i, ctx in enumerate(torrents):
t = ctx.torrent_info
meta = MetaInfo(t.title, t.description)
name = f"{meta.season_episode} {meta.resource_term} {meta.video_term}"
name = " ".join(name.split())
lines.append(f"{i + 1}.【{t.site_name}{name} {StringUtils.str_filesize(t.size)} {t.seeders}")
text = "\n".join(lines)
return self.send_msg(
title=title or "种子列表",
text=text,
link=link,
userid=userid,
**kwargs,
)


@@ -0,0 +1,513 @@
from pathlib import Path
from typing import Set, Tuple, Optional, Union, List, Dict
from torrentool.torrent import Torrent
from app import schemas
from app.core.cache import FileCache
from app.core.config import settings
from app.core.metainfo import MetaInfo
from app.log import logger
from app.modules import _ModuleBase, _DownloaderBase
from app.modules.rtorrent.rtorrent import Rtorrent
from app.schemas import TransferTorrent, DownloadingTorrent
from app.schemas.types import TorrentStatus, ModuleType, DownloaderType
from app.utils.string import StringUtils
class RtorrentModule(_ModuleBase, _DownloaderBase[Rtorrent]):
def init_module(self) -> None:
"""
初始化模块
"""
super().init_service(
service_name=Rtorrent.__name__.lower(), service_type=Rtorrent
)
@staticmethod
def get_name() -> str:
return "Rtorrent"
@staticmethod
def get_type() -> ModuleType:
"""
获取模块类型
"""
return ModuleType.Downloader
@staticmethod
def get_subtype() -> DownloaderType:
"""
获取模块子类型
"""
return DownloaderType.Rtorrent
@staticmethod
def get_priority() -> int:
"""
获取模块优先级,数字越小优先级越高,只有同一接口下优先级才生效
"""
return 3
def stop(self):
pass
def test(self) -> Optional[Tuple[bool, str]]:
"""
测试模块连接性
"""
if not self.get_instances():
return None
for name, server in self.get_instances().items():
if server.is_inactive():
server.reconnect()
if not server.transfer_info():
return False, f"无法连接rTorrent下载器{name}"
return True, ""
def init_setting(self) -> Tuple[str, Union[str, bool]]:
pass
def scheduler_job(self) -> None:
"""
定时任务,每 10 分钟调用一次
"""
for name, server in self.get_instances().items():
if server.is_inactive():
logger.info(f"rTorrent下载器 {name} 连接断开,尝试重连 ...")
server.reconnect()
def download(
self,
content: Union[Path, str, bytes],
download_dir: Path,
cookie: str,
episodes: Set[int] = None,
category: Optional[str] = None,
label: Optional[str] = None,
downloader: Optional[str] = None,
) -> Optional[Tuple[Optional[str], Optional[str], Optional[str], str]]:
"""
根据种子文件,选择并添加下载任务
:param content: 种子文件地址或者磁力链接或种子内容
:param download_dir: 下载目录
:param cookie: cookie
:param episodes: 需要下载的集数
:param category: 分类(rTorrent 中未使用)
:param label: 标签
:param downloader: 下载器
:return: 下载器名称、种子Hash、种子文件布局、错误原因
"""
def __get_torrent_info() -> Tuple[Optional[Torrent], Optional[bytes]]:
"""
获取种子名称
"""
torrent_info, torrent_content = None, None
try:
if isinstance(content, Path):
if content.exists():
torrent_content = content.read_bytes()
else:
torrent_content = FileCache().get(
content.as_posix(), region="torrents"
)
else:
torrent_content = content
if torrent_content:
if StringUtils.is_magnet_link(torrent_content):
return None, torrent_content
else:
torrent_info = Torrent.from_string(torrent_content)
return torrent_info, torrent_content
except Exception as e:
logger.error(f"获取种子名称失败:{e}")
return None, None
if not content:
return None, None, None, "下载内容为空"
# 读取种子的名称
torrent_from_file, content = __get_torrent_info()
# 检查是否为磁力链接
is_magnet = (
isinstance(content, str)
and content.startswith("magnet:")
or isinstance(content, bytes)
and content.startswith(b"magnet:")
)
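The `is_magnet` expression depends on `and` binding tighter than `or`. An explicitly parenthesised equivalent as a standalone sketch (hypothetical helper name):

```python
def looks_like_magnet(content) -> bool:
    """Parenthesised form of the is_magnet check above: accepts either a
    str or bytes payload and tests for the magnet: URI prefix."""
    return (isinstance(content, str) and content.startswith("magnet:")) or \
           (isinstance(content, bytes) and content.startswith(b"magnet:"))
```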
if not torrent_from_file and not is_magnet:
return None, None, None, f"添加种子任务失败:无法读取种子文件"
# 获取下载器
server: Rtorrent = self.get_instance(downloader)
if not server:
return None, None, None, "获取 rTorrent 下载器实例失败"
# 生成随机Tag
tag = StringUtils.generate_random_str(10)
if label:
tags = label.split(",") + [tag]
elif settings.TORRENT_TAG:
tags = [tag, settings.TORRENT_TAG]
else:
tags = [tag]
# 如果要选择文件则先暂停
is_paused = True if episodes else False
# 添加任务
state = server.add_torrent(
content=content,
download_dir=self.normalize_path(download_dir, downloader),
is_paused=is_paused,
tags=tags,
cookie=cookie,
)
# rTorrent 始终使用原始种子布局
torrent_layout = "Original"
if not state:
# 查询所有下载器的种子
torrents, error = server.get_torrents()
if error:
return None, None, None, "无法连接rTorrent下载器"
if torrents:
try:
for torrent in torrents:
# 名称与大小相等则认为是同一个种子
if torrent.get("name") == getattr(
torrent_from_file, "name", ""
) and torrent.get("total_size") == getattr(
torrent_from_file, "total_size", 0
):
torrent_hash = torrent.get("hash")
torrent_tags = [
str(t).strip()
for t in torrent.get("tags", "").split(",")
if t.strip()
]
logger.warn(
f"下载器中已存在该种子任务:{torrent_hash} - {torrent.get('name')}"
)
# 给种子打上标签
if "已整理" in torrent_tags:
server.remove_torrents_tag(
ids=torrent_hash, tag=["已整理"]
)
if (
settings.TORRENT_TAG
and settings.TORRENT_TAG not in torrent_tags
):
logger.info(
f"给种子 {torrent_hash} 打上标签:{settings.TORRENT_TAG}"
)
server.set_torrents_tag(
ids=torrent_hash, tags=[settings.TORRENT_TAG]
)
return (
downloader or self.get_default_config_name(),
torrent_hash,
torrent_layout,
f"下载任务已存在",
)
finally:
torrents.clear()
del torrents
return None, None, None, f"添加种子任务失败:{content}"
else:
# 获取种子Hash
torrent_hash = server.get_torrent_id_by_tag(tags=tag)
if not torrent_hash:
return (
None,
None,
None,
f"下载任务添加成功但获取rTorrent任务信息失败{content}",
)
else:
if is_paused:
# 种子文件
torrent_files = server.get_files(torrent_hash)
if not torrent_files:
return (
downloader or self.get_default_config_name(),
torrent_hash,
torrent_layout,
"获取种子文件失败,下载任务可能在暂停状态",
)
# 不需要的文件ID
file_ids = []
# 需要的集清单
success_episodes = set()
try:
for torrent_file in torrent_files:
file_id = torrent_file.get("id")
file_name = torrent_file.get("name")
meta_info = MetaInfo(file_name)
if not meta_info.episode_list or not set(
meta_info.episode_list
).issubset(episodes):
file_ids.append(file_id)
else:
success_episodes.update(meta_info.episode_list)
finally:
torrent_files.clear()
del torrent_files
success_episodes = list(success_episodes)
if success_episodes and file_ids:
# 设置不需要的文件优先级为 0(不下载)
server.set_files(
torrent_hash=torrent_hash, file_ids=file_ids, priority=0
)
# 开始任务
server.start_torrents(torrent_hash)
return (
downloader or self.get_default_config_name(),
torrent_hash,
torrent_layout,
f"添加下载成功,已选择集数:{success_episodes}",
)
else:
return (
downloader or self.get_default_config_name(),
torrent_hash,
torrent_layout,
"添加下载成功",
)
def list_torrents(
self,
status: TorrentStatus = None,
hashs: Union[list, str] = None,
downloader: Optional[str] = None,
) -> Optional[List[Union[TransferTorrent, DownloadingTorrent]]]:
"""
获取下载器种子列表
:param status: 种子状态
:param hashs: 种子Hash
:param downloader: 下载器
:return: 下载器中符合状态的种子列表
"""
# 获取下载器
if downloader:
server: Rtorrent = self.get_instance(downloader)
if not server:
return None
servers = {downloader: server}
else:
servers: Dict[str, Rtorrent] = self.get_instances()
ret_torrents = []
if hashs:
# 按Hash获取
for name, server in servers.items():
torrents, _ = server.get_torrents(ids=hashs, tags=settings.TORRENT_TAG)
torrents = torrents or []
try:
for torrent in torrents:
content_path = torrent.get("content_path")
if content_path:
torrent_path = Path(content_path)
else:
torrent_path = Path(torrent.get("save_path")) / torrent.get(
"name"
)
ret_torrents.append(
TransferTorrent(
downloader=name,
title=torrent.get("name"),
path=torrent_path,
hash=torrent.get("hash"),
size=torrent.get("total_size"),
tags=torrent.get("tags"),
progress=torrent.get("progress", 0),
state="paused"
if torrent.get("state") == 0
else "downloading",
)
)
finally:
torrents.clear()
del torrents
elif status == TorrentStatus.TRANSFER:
# 获取已完成且未整理的
for name, server in servers.items():
torrents = (
server.get_completed_torrents(tags=settings.TORRENT_TAG) or []
)
try:
for torrent in torrents:
tags = torrent.get("tags") or ""
tag_list = [t.strip() for t in tags.split(",") if t.strip()]
if "已整理" in tag_list:
continue
content_path = torrent.get("content_path")
if content_path:
torrent_path = Path(content_path)
else:
torrent_path = Path(torrent.get("save_path")) / torrent.get(
"name"
)
ret_torrents.append(
TransferTorrent(
downloader=name,
title=torrent.get("name"),
path=torrent_path,
hash=torrent.get("hash"),
tags=torrent.get("tags"),
)
)
finally:
torrents.clear()
del torrents
elif status == TorrentStatus.DOWNLOADING:
# 获取正在下载的任务
for name, server in servers.items():
torrents = (
server.get_downloading_torrents(tags=settings.TORRENT_TAG) or []
)
try:
for torrent in torrents:
meta = MetaInfo(torrent.get("name"))
dlspeed = torrent.get("dlspeed", 0)
upspeed = torrent.get("upspeed", 0)
total_size = torrent.get("total_size", 0)
completed = torrent.get("completed", 0)
ret_torrents.append(
DownloadingTorrent(
downloader=name,
hash=torrent.get("hash"),
title=torrent.get("name"),
name=meta.name,
year=meta.year,
season_episode=meta.season_episode,
progress=torrent.get("progress", 0),
size=total_size,
state="paused"
if torrent.get("state") == 0
else "downloading",
dlspeed=StringUtils.str_filesize(dlspeed),
upspeed=StringUtils.str_filesize(upspeed),
left_time=StringUtils.str_secends(
(total_size - completed) / dlspeed
)
if dlspeed > 0
else "",
)
)
finally:
torrents.clear()
del torrents
else:
return None
return ret_torrents # noqa
def transfer_completed(
self, hashs: Union[str, list], downloader: Optional[str] = None
) -> None:
"""
转移完成后的处理
:param hashs: 种子Hash
:param downloader: 下载器
"""
server: Rtorrent = self.get_instance(downloader)
if not server:
return None
# 获取原标签
org_tags = server.get_torrent_tags(ids=hashs)
# 种子打上已整理标签
if org_tags:
tags = org_tags + ["已整理"]
else:
tags = ["已整理"]
# 直接设置完整标签(覆盖)
server.set_torrents_tag(ids=hashs, tags=tags, overwrite=True)
return None
def remove_torrents(
self,
hashs: Union[str, list],
delete_file: Optional[bool] = True,
downloader: Optional[str] = None,
) -> Optional[bool]:
"""
删除下载器种子
:param hashs: 种子Hash
:param delete_file: 是否删除文件
:param downloader: 下载器
:return: bool
"""
server: Rtorrent = self.get_instance(downloader)
if not server:
return None
return server.delete_torrents(delete_file=delete_file, ids=hashs)
def start_torrents(
self, hashs: Union[list, str], downloader: Optional[str] = None
) -> Optional[bool]:
"""
开始下载
:param hashs: 种子Hash
:param downloader: 下载器
:return: bool
"""
server: Rtorrent = self.get_instance(downloader)
if not server:
return None
return server.start_torrents(ids=hashs)
def stop_torrents(
self, hashs: Union[list, str], downloader: Optional[str] = None
) -> Optional[bool]:
"""
停止下载
:param hashs: 种子Hash
:param downloader: 下载器
:return: bool
"""
server: Rtorrent = self.get_instance(downloader)
if not server:
return None
return server.stop_torrents(ids=hashs)
def torrent_files(
self, tid: str, downloader: Optional[str] = None
) -> Optional[List[Dict]]:
"""
获取种子文件列表
"""
server: Rtorrent = self.get_instance(downloader)
if not server:
return None
return server.get_files(tid=tid)
def downloader_info(
self, downloader: Optional[str] = None
) -> Optional[List[schemas.DownloaderInfo]]:
"""
下载器信息
"""
if downloader:
server: Rtorrent = self.get_instance(downloader)
if not server:
return None
servers = [server]
else:
servers = self.get_instances().values()
ret_info = []
for server in servers:
info = server.transfer_info()
if not info:
continue
ret_info.append(
schemas.DownloaderInfo(
download_speed=info.get("dl_info_speed"),
upload_speed=info.get("up_info_speed"),
download_size=info.get("dl_info_data"),
upload_size=info.get("up_info_data"),
)
)
return ret_info


@@ -0,0 +1,548 @@
import socket
import traceback
import xmlrpc.client
from pathlib import Path
from typing import Optional, Union, Tuple, List, Dict
from urllib.parse import urlparse
from app.log import logger
class SCGITransport(xmlrpc.client.Transport):
"""
通过SCGI协议与rTorrent通信的Transport
"""
def single_request(self, host, handler, request_body, verbose=False):
# 建立socket连接
parsed = urlparse(f"scgi://{host}")
sock = socket.create_connection(
(parsed.hostname, parsed.port or 5000), timeout=60
)
try:
# 构造SCGI请求头
headers = (
f"CONTENT_LENGTH\x00{len(request_body)}\x00"
f"SCGI\x001\x00"
f"REQUEST_METHOD\x00POST\x00"
f"REQUEST_URI\x00/RPC2\x00"
)
# netstring格式: "len:headers,"
netstring = f"{len(headers)}:{headers},".encode()
# 发送请求
sock.sendall(netstring + request_body)
# 读取响应
response = b""
while True:
chunk = sock.recv(4096)
if not chunk:
break
response += chunk
finally:
sock.close()
# 跳过HTTP响应头
header_end = response.find(b"\r\n\r\n")
if header_end != -1:
response = response[header_end + 4 :]
# 解析XML-RPC响应
return self.parse_response(self._build_response(response))
@staticmethod
def _build_response(data: bytes):
"""
构造类文件对象用于parse_response
"""
import io
import http.client
class _FakeSocket(io.BytesIO):
def makefile(self, *args, **kwargs):
return self
raw = b"HTTP/1.0 200 OK\r\nContent-Type: text/xml\r\n\r\n" + data
response = http.client.HTTPResponse(_FakeSocket(raw)) # noqa
response.begin()
return response
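`SCGITransport` above frames the XML-RPC POST body with SCGI headers in netstring form before writing it to the socket. The framing itself can be factored into a small pure function; this sketch mirrors the header layout used above and is independent of any socket:

```python
def scgi_frame(request_body: bytes, uri: str = "/RPC2") -> bytes:
    """Wrap an XML-RPC request body in SCGI netstring framing."""
    headers = (
        f"CONTENT_LENGTH\x00{len(request_body)}\x00"
        f"SCGI\x001\x00"
        f"REQUEST_METHOD\x00POST\x00"
        f"REQUEST_URI\x00{uri}\x00"
    )
    # netstring: "<header-length>:<headers>," followed by the raw body
    return f"{len(headers)}:{headers},".encode() + request_body

frame = scgi_frame(b"<xml/>")  # header block is 62 bytes for a 6-byte body
```

Note the netstring length prefix counts only the header block, not the body; the body is appended raw after the trailing comma.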
class Rtorrent:
"""
rTorrent下载器
"""
def __init__(
self,
host: Optional[str] = None,
port: Optional[int] = None,
username: Optional[str] = None,
password: Optional[str] = None,
**kwargs,
):
self._proxy = None
if host and port:
self._host = f"{host}:{port}"
elif host:
self._host = host
else:
logger.error("rTorrent配置不完整")
return
self._username = username
self._password = password
self._proxy = self.__login_rtorrent()
def __login_rtorrent(self) -> Optional[xmlrpc.client.ServerProxy]:
"""
连接rTorrent
"""
if not self._host:
return None
try:
url = self._host
if url.startswith("scgi://"):
# SCGI直连模式ServerProxy仅接受http/https协议头这里替换协议头实际通信由SCGITransport完成
logger.info(f"正在通过SCGI连接 rTorrent{url}")
proxy = xmlrpc.client.ServerProxy(
url.replace("scgi://", "http://", 1), transport=SCGITransport()
)
else:
# HTTP模式 (通过nginx/ruTorrent代理)
if not url.startswith("http"):
url = f"http://{url}"
# 注入认证信息到URL
if self._username and self._password:
parsed = urlparse(url)
url = f"{parsed.scheme}://{self._username}:{self._password}@{parsed.hostname}"
if parsed.port:
url += f":{parsed.port}"
url += parsed.path or "/RPC2"
logger.info(
f"正在通过HTTP连接 rTorrent{url.split('@')[-1] if '@' in url else url}"
)
proxy = xmlrpc.client.ServerProxy(url)
# 测试连接
proxy.system.client_version()
return proxy
except Exception as err:
stack_trace = "".join(
traceback.format_exception(None, err, err.__traceback__)
)[:2000]
logger.error(f"rTorrent 连接出错:{str(err)}\n{stack_trace}")
return None
def is_inactive(self) -> bool:
"""
判断是否需要重连
"""
if not self._host:
return False
return not self._proxy
def reconnect(self):
"""
重连
"""
self._proxy = self.__login_rtorrent()
def get_torrents(
self,
ids: Optional[Union[str, list]] = None,
status: Optional[str] = None,
tags: Optional[Union[str, list]] = None,
) -> Tuple[List[Dict], bool]:
"""
获取种子列表
:return: 种子列表, 是否发生异常
"""
if not self._proxy:
return [], True
try:
# 使用d.multicall2获取种子列表
fields = [
"d.hash=",
"d.name=",
"d.size_bytes=",
"d.completed_bytes=",
"d.down.rate=",
"d.up.rate=",
"d.state=",
"d.complete=",
"d.directory=",
"d.custom1=",
"d.is_active=",
"d.is_open=",
"d.ratio=",
"d.base_path=",
]
# 获取所有种子
results = self._proxy.d.multicall2("", "main", *fields)
torrents = []
for r in results:
torrent = {
"hash": r[0],
"name": r[1],
"total_size": r[2],
"completed": r[3],
"dlspeed": r[4],
"upspeed": r[5],
"state": r[6], # 0=stopped, 1=started
"complete": r[7], # 0=incomplete, 1=complete
"save_path": r[8],
"tags": r[9], # d.custom1 用于标签
"is_active": r[10],
"is_open": r[11],
"ratio": int(r[12]) / 1000.0 if r[12] else 0,
"content_path": r[13], # base_path 即完整内容路径
}
# 计算进度
if torrent["total_size"] > 0:
torrent["progress"] = (
torrent["completed"] / torrent["total_size"] * 100
)
else:
torrent["progress"] = 0
# ID过滤
if ids:
if isinstance(ids, str):
ids_list = [ids.upper()]
else:
ids_list = [i.upper() for i in ids]
if torrent["hash"].upper() not in ids_list:
continue
# 标签过滤
if tags:
torrent_tags = [
t.strip() for t in torrent["tags"].split(",") if t.strip()
]
if isinstance(tags, str):
tags_list = [t.strip() for t in tags.split(",")]
else:
tags_list = tags
if not set(tags_list).issubset(set(torrent_tags)):
continue
torrents.append(torrent)
return torrents, False
except Exception as err:
logger.error(f"获取种子列表出错:{str(err)}")
return [], True
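`get_torrents` above relies on positional indexing into each `d.multicall2` row. The mapping and the derived fields (progress, ratio) can be illustrated in isolation; the sample row below is fabricated for the demo, and the field order follows the `fields` list above:

```python
def row_to_torrent(r: list) -> dict:
    """Map one d.multicall2 result row to the torrent dict used above."""
    torrent = {
        "hash": r[0],
        "name": r[1],
        "total_size": r[2],
        "completed": r[3],
        "dlspeed": r[4],
        "upspeed": r[5],
        "state": r[6],        # 0=stopped, 1=started
        "complete": r[7],     # 0=incomplete, 1=complete
        "save_path": r[8],
        "tags": r[9],         # d.custom1 carries comma-separated tags
        "is_active": r[10],
        "is_open": r[11],
        "ratio": int(r[12]) / 1000.0 if r[12] else 0,  # rTorrent reports ratio * 1000
        "content_path": r[13],
    }
    size = torrent["total_size"]
    torrent["progress"] = torrent["completed"] / size * 100 if size > 0 else 0
    return torrent

row = ["ABC", "demo", 200, 50, 0, 0, 1, 0, "/dl", "MOVIEPILOT", 1, 1, 1500, "/dl/demo"]
t = row_to_torrent(row)
```

The zero-size guard matters for freshly added torrents whose metadata has not resolved yet, where `total_size` can still be 0.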
def get_completed_torrents(
self, ids: Union[str, list] = None, tags: Union[str, list] = None
) -> Optional[List[Dict]]:
"""
获取已完成的种子
"""
if not self._proxy:
return None
torrents, error = self.get_torrents(ids=ids, tags=tags)
if error:
return None
return [t for t in torrents if t.get("complete") == 1]
def get_downloading_torrents(
self, ids: Union[str, list] = None, tags: Union[str, list] = None
) -> Optional[List[Dict]]:
"""
获取正在下载的种子
"""
if not self._proxy:
return None
torrents, error = self.get_torrents(ids=ids, tags=tags)
if error:
return None
return [t for t in torrents if t.get("complete") == 0]
def add_torrent(
self,
content: Union[str, bytes],
is_paused: Optional[bool] = False,
download_dir: Optional[str] = None,
tags: Optional[List[str]] = None,
cookie: Optional[str] = None,
**kwargs,
) -> bool:
"""
添加种子
:param content: 种子内容bytes或磁力链接/URLstr
:param is_paused: 添加后暂停
:param download_dir: 下载路径
:param tags: 标签列表
:param cookie: Cookie
:return: bool
"""
if not self._proxy or not content:
return False
try:
# 构造命令参数
commands = []
if download_dir:
commands.append(f'd.directory.set="{download_dir}"')
if tags:
tag_str = ",".join(tags)
commands.append(f'd.custom1.set="{tag_str}"')
if isinstance(content, bytes):
# 检查是否为磁力链接bytes形式
if content.startswith(b"magnet:"):
content = content.decode("utf-8", errors="ignore")
else:
# 种子文件内容使用load.raw
raw = xmlrpc.client.Binary(content)
if is_paused:
self._proxy.load.raw("", raw, *commands)
else:
self._proxy.load.raw_start("", raw, *commands)
return True
# URL或磁力链接
if is_paused:
self._proxy.load.normal("", content, *commands)
else:
self._proxy.load.start("", content, *commands)
return True
except Exception as err:
logger.error(f"添加种子出错:{str(err)}")
return False
def start_torrents(self, ids: Union[str, list]) -> bool:
"""
启动种子
"""
if not self._proxy:
return False
try:
if isinstance(ids, str):
ids = [ids]
for tid in ids:
self._proxy.d.start(tid)
return True
except Exception as err:
logger.error(f"启动种子出错:{str(err)}")
return False
def stop_torrents(self, ids: Union[str, list]) -> bool:
"""
停止种子
"""
if not self._proxy:
return False
try:
if isinstance(ids, str):
ids = [ids]
for tid in ids:
self._proxy.d.stop(tid)
return True
except Exception as err:
logger.error(f"停止种子出错:{str(err)}")
return False
def delete_torrents(self, delete_file: bool, ids: Union[str, list]) -> bool:
"""
删除种子
"""
if not self._proxy:
return False
if not ids:
return False
try:
if isinstance(ids, str):
ids = [ids]
for tid in ids:
if delete_file:
# 先获取base_path用于删除文件
try:
base_path = self._proxy.d.base_path(tid)
self._proxy.d.erase(tid)
if base_path:
import shutil
path = Path(base_path)
if path.is_dir():
shutil.rmtree(str(path), ignore_errors=True)
elif path.is_file():
path.unlink(missing_ok=True)
except Exception as e:
logger.warning(f"删除种子文件出错:{str(e)}")
self._proxy.d.erase(tid)
else:
self._proxy.d.erase(tid)
return True
except Exception as err:
logger.error(f"删除种子出错:{str(err)}")
return False
def get_files(self, tid: str) -> Optional[List[Dict]]:
"""
获取种子文件列表
"""
if not self._proxy:
return None
if not tid:
return None
try:
files = self._proxy.f.multicall(
tid,
"",
"f.path=",
"f.size_bytes=",
"f.priority=",
"f.completed_chunks=",
"f.size_chunks=",
)
result = []
for idx, f in enumerate(files):
result.append(
{
"id": idx,
"name": f[0],
"size": f[1],
"priority": f[2],
"progress": int(f[3]) / int(f[4]) * 100 if int(f[4]) > 0 else 0,
}
)
return result
except Exception as err:
logger.error(f"获取种子文件列表出错:{str(err)}")
return None
def set_files(
self, torrent_hash: str = None, file_ids: list = None, priority: int = 0
) -> bool:
"""
设置下载文件的优先级priority为0为不下载priority为1为普通
"""
if not self._proxy:
return False
if not torrent_hash or not file_ids:
return False
try:
for file_id in file_ids:
self._proxy.f.priority.set(f"{torrent_hash}:f{file_id}", priority)
# 更新种子优先级
self._proxy.d.update_priorities(torrent_hash)
return True
except Exception as err:
logger.error(f"设置种子文件状态出错:{str(err)}")
return False
def set_torrents_tag(
self, ids: Union[str, list], tags: List[str], overwrite: bool = False
) -> bool:
"""
设置种子标签使用d.custom1
:param ids: 种子Hash
:param tags: 标签列表
:param overwrite: 是否覆盖现有标签,默认为合并
"""
if not self._proxy:
return False
if not ids:
return False
try:
if isinstance(ids, str):
ids = [ids]
for tid in ids:
if overwrite:
# 直接覆盖标签
self._proxy.d.custom1.set(tid, ",".join(tags))
else:
# 获取现有标签
existing = self._proxy.d.custom1(tid)
existing_tags = (
[t.strip() for t in existing.split(",") if t.strip()]
if existing
else []
)
# 合并标签
merged = list(set(existing_tags + tags))
self._proxy.d.custom1.set(tid, ",".join(merged))
return True
except Exception as err:
logger.error(f"设置种子Tag出错{str(err)}")
return False
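`set_torrents_tag` above stores all tags as one comma-separated `d.custom1` string: merge mode unions with the existing tags, overwrite mode replaces them. A minimal sketch of that rule; it uses `dict.fromkeys` for a stable order, whereas the code above uses `set()` and so does not guarantee tag order:

```python
from typing import List

def merge_tags(existing: str, new_tags: List[str], overwrite: bool = False) -> str:
    """Combine an existing d.custom1 tag string with new tags."""
    if overwrite:
        return ",".join(new_tags)
    current = [t.strip() for t in existing.split(",") if t.strip()] if existing else []
    # de-duplicate while preserving first-seen order
    merged = list(dict.fromkeys(current + new_tags))
    return ",".join(merged)
```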
def remove_torrents_tag(self, ids: Union[str, list], tag: Union[str, list]) -> bool:
"""
移除种子标签
"""
if not self._proxy:
return False
if not ids:
return False
try:
if isinstance(ids, str):
ids = [ids]
if isinstance(tag, str):
tag = [tag]
for tid in ids:
existing = self._proxy.d.custom1(tid)
existing_tags = (
[t.strip() for t in existing.split(",") if t.strip()]
if existing
else []
)
new_tags = [t for t in existing_tags if t not in tag]
self._proxy.d.custom1.set(tid, ",".join(new_tags))
return True
except Exception as err:
logger.error(f"移除种子Tag出错{str(err)}")
return False
def get_torrent_tags(self, ids: str) -> List[str]:
"""
获取种子标签
"""
if not self._proxy:
return []
try:
existing = self._proxy.d.custom1(ids)
return (
[t.strip() for t in existing.split(",") if t.strip()]
if existing
else []
)
except Exception as err:
logger.error(f"获取种子标签出错:{str(err)}")
return []
def get_torrent_id_by_tag(
self, tags: Union[str, list], status: Optional[str] = None
) -> Optional[str]:
"""
通过标签多次尝试获取刚添加的种子ID并移除标签
"""
import time
if isinstance(tags, str):
tags = [tags]
torrent_id = None
for i in range(1, 10):
time.sleep(3)
torrents, error = self.get_torrents(tags=tags)
if not error and torrents:
torrent_id = torrents[0].get("hash")
# 移除查找标签
for tag in tags:
self.remove_torrents_tag(ids=torrent_id, tag=[tag])
break
return torrent_id
def transfer_info(self) -> Optional[Dict]:
"""
获取传输信息
"""
if not self._proxy:
return None
try:
return {
"dl_info_speed": self._proxy.throttle.global_down.rate(),
"up_info_speed": self._proxy.throttle.global_up.rate(),
"dl_info_data": self._proxy.throttle.global_down.total(),
"up_info_data": self._proxy.throttle.global_up.total(),
}
except Exception as err:
logger.error(f"获取传输信息出错:{str(err)}")
return None


@@ -1,6 +1,7 @@
import re
from threading import Lock
from typing import List, Optional
from urllib.parse import quote
import requests
from slack_bolt import App
@@ -42,7 +43,9 @@ class Slack:
# 标记消息来源
if kwargs.get("name"):
self._ds_url = f"{self._ds_url}&source={kwargs.get('name')}"
# URL encode the source name to handle special characters
encoded_name = quote(kwargs.get('name'), safe='')
self._ds_url = f"{self._ds_url}&source={encoded_name}"
# 注册消息响应
@slack_app.event("message")
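Both this Slack hunk and the Telegram hunk below apply the same fix: the channel name is passed through `quote(..., safe='')` before being appended as the `source` query parameter, so characters like `/`, `&` and non-ASCII text are percent-encoded rather than corrupting the query string. A quick demonstration (the URL is a made-up example):

```python
from urllib.parse import quote

def with_source(ds_url: str, name: str) -> str:
    """Append the message-source name as a fully percent-encoded query parameter."""
    return f"{ds_url}&source={quote(name, safe='')}"

url = with_source("http://127.0.0.1:3001/api/message?token=t", "组 A/B&C")
```

With `safe=''`, even `/` is escaped, so the whole name survives as a single query-string value.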


@@ -2,7 +2,7 @@ import asyncio
import re
import threading
from typing import Optional, List, Dict, Callable
from urllib.parse import urljoin
from urllib.parse import urljoin, quote
from telebot import TeleBot, apihelper
from telebot.types import BotCommand, InlineKeyboardMarkup, InlineKeyboardButton, InputMediaPhoto
@@ -65,7 +65,9 @@ class Telegram:
# 标记渠道来源
if kwargs.get("name"):
self._ds_url = f"{self._ds_url}&source={kwargs.get('name')}"
# URL encode the source name to handle special characters
encoded_name = quote(kwargs.get('name'), safe='')
self._ds_url = f"{self._ds_url}&source={encoded_name}"
@_bot.message_handler(commands=['start', 'help'])
def send_welcome(message):
@@ -78,6 +80,11 @@ class Telegram:
# Check if we should process this message
if self._should_process_message(message):
# 发送正在输入状态
try:
_bot.send_chat_action(message.chat.id, 'typing')
except Exception as e:
logger.error(f"发送Telegram正在输入状态失败{e}")
RequestUtils(timeout=15).post_res(self._ds_url, json=message.json)
@_bot.callback_query_handler(func=lambda call: True)
@@ -113,6 +120,12 @@ class Telegram:
# 先确认回调避免用户看到loading状态
_bot.answer_callback_query(call.id)
# 发送正在输入状态
try:
_bot.send_chat_action(call.message.chat.id, 'typing')
except Exception as e:
logger.error(f"发送Telegram正在输入状态失败{e}")
# 发送给主程序处理
RequestUtils(timeout=15).post_res(self._ds_url, json=callback_json)
@@ -235,10 +248,14 @@ class Telegram:
return False
try:
if title and text:
caption = f"**{title}**\n{text}"
elif title:
caption = f"**{title}**"
# 标准化标题后再加粗,避免**符号被显示为文本
# f-string表达式内的反斜杠转义在Python 3.12之前是语法错误先在外部去掉换行后缀
if title:
stripped_title = standardize(title).removesuffix("\n")
bold_title = f"**{stripped_title}**"
else:
bold_title = None
if bold_title and text:
caption = f"{bold_title}\n{text}"
elif bold_title:
caption = bold_title
elif text:
caption = text
else:


@@ -1625,6 +1625,9 @@ class TmdbApi:
"""
清除缓存
"""
self.match_web.cache_clear()
self.discover.discover_movies.cache_clear()
self.discover.discover_tv_shows.cache_clear()
self.tmdb.cache_clear()
# 私有异步方法


@@ -40,8 +40,6 @@ class TMDb(object):
self._reset = None
self._timeout = 15
self.__clear_async_cache__ = False
@property
def page(self):
return self._page
@@ -129,7 +127,6 @@ class TMDb(object):
return req
def cache_clear(self):
self.__clear_async_cache__ = True
return self.request.cache_clear()
def _validate_api_key(self):
@@ -200,7 +197,7 @@ class TMDb(object):
if rate_limit_result:
logger.warning("达到请求频率限制,将在 %d 秒后重试..." % rate_limit_result)
time.sleep(rate_limit_result)
return self._request_obj(action, params, call_cached, method, data, json, key)
return self._request_obj(action, params, False, method, data, json, key)
json_data = req.json()
self._process_json_response(json_data, is_async=False)
@@ -215,10 +212,6 @@ class TMDb(object):
self._validate_api_key()
url = self._build_url(action, params)
if self.__clear_async_cache__:
self.__clear_async_cache__ = False
await self.async_request.cache_clear()
async with async_fresh(not call_cached or method == "POST"):
req = await self.async_request(method, url, data, json,
_ts=datetime.strftime(datetime.now(), '%Y%m%d'))
@@ -232,7 +225,7 @@ class TMDb(object):
if rate_limit_result:
logger.warning("达到请求频率限制,将在 %d 秒后重试..." % rate_limit_result)
await asyncio.sleep(rate_limit_result)
return await self._async_request_obj(action, params, call_cached, method, data, json, key)
return await self._async_request_obj(action, params, False, method, data, json, key)
json_data = req.json()
self._process_json_response(json_data, is_async=True)


@@ -162,3 +162,12 @@ class TheTvDbModule(_ModuleBase):
except Exception as err:
logger.error(f"用标题搜索TVDB剧集失败 ({title}): {str(err)}")
return []
def clear_cache(self):
"""
清除缓存
"""
logger.info(f"开始清除{self.get_name()}缓存 ...")
if tvdb := self.tvdb:
tvdb.clear_cache()
logger.info(f"{self.get_name()}缓存清除完成")


@@ -618,3 +618,9 @@ class TVDB:
"""
url = self.url.construct('user/favorites')
return self.request.make_request(url)
def clear_cache(self):
"""
清除缓存
"""
self.request.make_request.cache_clear()


@@ -0,0 +1,358 @@
from typing import Any, Generator, List, Optional, Tuple, Union
from app import schemas
from app.core.context import MediaInfo
from app.core.event import eventmanager
from app.log import logger
from app.modules import _MediaServerBase, _ModuleBase
from app.modules.ugreen.ugreen import Ugreen
from app.schemas import AuthCredentials, AuthInterceptCredentials
from app.schemas.types import ChainEventType, MediaServerType, MediaType, ModuleType
class UgreenModule(_ModuleBase, _MediaServerBase[Ugreen]):
def init_module(self) -> None:
"""
初始化模块
"""
super().init_service(
service_name=Ugreen.__name__.lower(),
service_type=lambda conf: Ugreen(
**conf.config, sync_libraries=conf.sync_libraries
),
)
@staticmethod
def get_name() -> str:
return "绿联影视"
@staticmethod
def get_type() -> ModuleType:
"""
获取模块类型
"""
return ModuleType.MediaServer
@staticmethod
def get_subtype() -> MediaServerType:
"""
获取模块子类型
"""
return MediaServerType.Ugreen
@staticmethod
def get_priority() -> int:
"""
获取模块优先级,数字越小优先级越高,只有同一接口下优先级才生效
"""
return 5
def init_setting(self) -> Tuple[str, Union[str, bool]]:
pass
def scheduler_job(self) -> None:
"""
定时任务每10分钟调用一次
"""
for name, server in self.get_instances().items():
if server.is_configured() and server.is_inactive():
logger.info(f"绿联影视 {name} 连接断开,尝试重连 ...")
server.reconnect()
def stop(self):
for server in self.get_instances().values():
if server.is_authenticated():
server.disconnect()
def test(self) -> Optional[Tuple[bool, str]]:
"""
测试模块连接性
"""
if not self.get_instances():
return None
for name, server in self.get_instances().items():
if not server.is_configured():
return False, f"绿联影视配置不完整:{name}"
if server.is_inactive() and not server.reconnect():
return False, f"无法连接绿联影视:{name}"
return True, ""
def user_authenticate(
self, credentials: AuthCredentials, service_name: Optional[str] = None
) -> Optional[AuthCredentials]:
"""
使用绿联影视用户辅助完成用户认证
"""
if not credentials or credentials.grant_type != "password":
return None
if service_name:
servers = (
[(service_name, server)]
if (server := self.get_instance(service_name))
else []
)
else:
servers = self.get_instances().items()
for name, server in servers:
intercept_event = eventmanager.send_event(
etype=ChainEventType.AuthIntercept,
data=AuthInterceptCredentials(
username=credentials.username,
channel=self.get_name(),
service=name,
status="triggered",
),
)
if intercept_event and intercept_event.event_data:
intercept_data: AuthInterceptCredentials = intercept_event.event_data
if intercept_data.cancel:
continue
token = server.authenticate(credentials.username, credentials.password)
if token:
credentials.channel = self.get_name()
credentials.service = name
credentials.token = token
return credentials
return None
def webhook_parser(
self, body: Any, form: Any, args: Any
) -> Optional[schemas.WebhookEventInfo]:
"""
解析Webhook报文体
"""
source = args.get("source")
if source:
server: Optional[Ugreen] = self.get_instance(source)
if not server:
return None
result = server.get_webhook_message(body)
if result:
result.server_name = source
return result
for server in self.get_instances().values():
if server:
result = server.get_webhook_message(body)
if result:
return result
return None
def media_exists(
self,
mediainfo: MediaInfo,
itemid: Optional[str] = None,
server: Optional[str] = None,
) -> Optional[schemas.ExistMediaInfo]:
"""
判断媒体文件是否存在
"""
if server:
servers = [(server, self.get_instance(server))]
else:
servers = self.get_instances().items()
for name, s in servers:
if not s:
continue
if mediainfo.type == MediaType.MOVIE:
if itemid:
movie = s.get_iteminfo(itemid)
if movie:
logger.info(f"媒体库 {name} 中找到了 {movie}")
return schemas.ExistMediaInfo(
type=MediaType.MOVIE,
server_type="ugreen",
server=name,
itemid=movie.item_id,
)
movies = s.get_movies(
title=mediainfo.title,
year=mediainfo.year,
tmdb_id=mediainfo.tmdb_id,
)
if not movies:
logger.info(f"{mediainfo.title_year} 没有在媒体库 {name}")
continue
logger.info(f"媒体库 {name} 中找到了 {movies}")
return schemas.ExistMediaInfo(
type=MediaType.MOVIE,
server_type="ugreen",
server=name,
itemid=movies[0].item_id,
)
itemid, tvs = s.get_tv_episodes(
title=mediainfo.title,
year=mediainfo.year,
tmdb_id=mediainfo.tmdb_id,
item_id=itemid,
)
if not tvs:
logger.info(f"{mediainfo.title_year} 没有在媒体库 {name}")
continue
logger.info(f"{mediainfo.title_year} 在媒体库 {name} 中找到了这些季集:{tvs}")
return schemas.ExistMediaInfo(
type=MediaType.TV,
seasons=tvs,
server_type="ugreen",
server=name,
itemid=itemid,
)
return None
def media_statistic(
self, server: Optional[str] = None
) -> Optional[List[schemas.Statistic]]:
"""
媒体数量统计
"""
if server:
server_obj: Optional[Ugreen] = self.get_instance(server)
if not server_obj:
return None
servers = [server_obj]
else:
servers = self.get_instances().values()
media_statistics = []
for s in servers:
media_statistic = s.get_medias_count()
if not media_statistic:
continue
media_statistic.user_count = s.get_user_count()
media_statistics.append(media_statistic)
return media_statistics
def mediaserver_librarys(
self, server: Optional[str] = None, hidden: Optional[bool] = False, **kwargs
) -> Optional[List[schemas.MediaServerLibrary]]:
"""
媒体库列表
"""
server_obj: Optional[Ugreen] = self.get_instance(server)
if server_obj:
return server_obj.get_librarys(hidden=hidden)
return None
def mediaserver_items(
self,
server: str,
library_id: Union[str, int],
start_index: Optional[int] = 0,
limit: Optional[int] = -1,
) -> Optional[Generator]:
"""
获取媒体服务器项目列表
"""
server_obj: Optional[Ugreen] = self.get_instance(server)
if server_obj:
return server_obj.get_items(library_id, start_index, limit)
return None
def mediaserver_iteminfo(
self, server: str, item_id: str
) -> Optional[schemas.MediaServerItem]:
"""
媒体库项目详情
"""
server_obj: Optional[Ugreen] = self.get_instance(server)
if server_obj:
return server_obj.get_iteminfo(item_id)
return None
def mediaserver_tv_episodes(
self, server: str, item_id: Union[str, int]
) -> Optional[List[schemas.MediaServerSeasonInfo]]:
"""
获取剧集信息
"""
if not item_id:
return None
server_obj: Optional[Ugreen] = self.get_instance(server)
if not server_obj:
return None
_, seasoninfo = server_obj.get_tv_episodes(item_id=str(item_id))
if not seasoninfo:
return []
return [
schemas.MediaServerSeasonInfo(season=season, episodes=episodes)
for season, episodes in seasoninfo.items()
]
def mediaserver_playing(
self, server: str, count: Optional[int] = 20, **kwargs
) -> List[schemas.MediaServerPlayItem]:
"""
获取媒体服务器正在播放信息
"""
server_obj: Optional[Ugreen] = self.get_instance(server)
if not server_obj:
return []
return server_obj.get_resume(num=count) or []
def mediaserver_play_url(
self, server: str, item_id: Union[str, int]
) -> Optional[str]:
"""
获取媒体库播放地址
"""
if not item_id:
return None
server_obj: Optional[Ugreen] = self.get_instance(server)
if not server_obj:
return None
return server_obj.get_play_url(str(item_id))
def mediaserver_latest(
self,
server: Optional[str] = None,
count: Optional[int] = 20,
**kwargs,
) -> List[schemas.MediaServerPlayItem]:
"""
获取媒体服务器最新入库条目
"""
server_obj: Optional[Ugreen] = self.get_instance(server)
if not server_obj:
return []
return server_obj.get_latest(num=count) or []
def mediaserver_latest_images(
self,
server: Optional[str] = None,
count: Optional[int] = 20,
remote: Optional[bool] = False,
**kwargs,
) -> List[str]:
"""
获取媒体服务器最新入库条目的图片
"""
server_obj: Optional[Ugreen] = self.get_instance(server)
if not server_obj:
return []
return server_obj.get_latest_backdrops(num=count, remote=remote) or []
def mediaserver_image_cookies(
self,
server: Optional[str] = None,
image_url: Optional[str] = None,
**kwargs,
) -> Optional[str | dict]:
"""
获取绿联影视服务器的图片Cookies
"""
if not image_url:
return None
if server:
server_obj: Optional[Ugreen] = self.get_instance(server)
if not server_obj:
return None
return server_obj.get_image_cookies(image_url)
for server_obj in self.get_instances().values():
if cookies := server_obj.get_image_cookies(image_url):
return cookies
return None

app/modules/ugreen/api.py Normal file

@@ -0,0 +1,750 @@
import base64
import uuid
from dataclasses import dataclass
from typing import Any, Dict, Mapping, Optional, Union
from urllib.parse import urlsplit, urlunsplit
from requests import Session
from app.log import logger
from app.utils.ugreen_crypto import UgreenCrypto
from app.utils.url import UrlUtils
@dataclass
class ApiResult:
code: int = -1
msg: str = ""
data: Any = None
debug: Optional[str] = None
raw: Optional[dict] = None
@property
def success(self) -> bool:
return self.code == 200
class Api:
"""
绿联影视 API 客户端(统一加密通道)。
说明:
1. 所有业务接口调用都应走 `request()`
2. `request()` 会自动将明文查询参数加密为 `encrypt_query`
3. 若响应包含 `encrypt_resp_body`,会自动完成解密后再返回。
"""
__slots__ = (
"_host",
"_session",
"_token",
"_static_token",
"_is_ugk",
"_public_key",
"_crypto",
"_username",
"_client_id",
"_client_version",
"_language",
"_ug_agent",
"_timeout",
"_verify_ssl",
)
def __init__(
self,
host: str,
client_version: str = "76363",
language: str = "zh-CN",
ug_agent: str = "PC/WEB",
timeout: int = 20,
verify_ssl: bool = True,
):
self._host = self._normalize_base_url(host)
self._session = Session()
self._token: Optional[str] = None
self._static_token: Optional[str] = None
self._is_ugk: bool = False
self._public_key: Optional[str] = None
self._crypto: Optional[UgreenCrypto] = None
self._username: Optional[str] = None
self._client_id = f"{uuid.uuid4()}-WEB"
self._client_version = client_version
self._language = language
self._ug_agent = ug_agent
self._timeout = timeout
# 是否校验证书,默认开启;仅在用户明确配置时才应关闭。
self._verify_ssl = bool(verify_ssl)
@property
def host(self) -> str:
return self._host
@property
def token(self) -> Optional[str]:
return self._token
@property
def static_token(self) -> Optional[str]:
return self._static_token
@property
def is_ugk(self) -> bool:
return self._is_ugk
@property
def public_key(self) -> Optional[str]:
return self._public_key
def close(self):
"""
关闭底层 HTTP 会话。
"""
self._session.close()
@staticmethod
def _normalize_base_url(host: str) -> str:
if not host:
return ""
host = UrlUtils.standardize_base_url(host).rstrip("/")
parsed = urlsplit(host)
return urlunsplit((parsed.scheme, parsed.netloc, "", "", "")).rstrip("/")
@staticmethod
def _decode_public_key(raw: Optional[str]) -> Optional[str]:
if not raw:
return None
value = str(raw).strip()
if not value:
return None
if "BEGIN" in value:
return value
try:
return base64.b64decode(value).decode("utf-8")
except Exception:
return None
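`_decode_public_key` above accepts either PEM text or a base64-wrapped PEM, and fails soft to `None` on anything else. The same heuristic as a standalone function (identical logic, lifted out of the class for inspection):

```python
import base64
from typing import Optional

def decode_public_key(raw: Optional[str]) -> Optional[str]:
    """Return PEM text unchanged; otherwise try base64 -> UTF-8, or None."""
    if not raw:
        return None
    value = str(raw).strip()
    if not value:
        return None
    if "BEGIN" in value:  # already PEM, pass through
        return value
    try:
        return base64.b64decode(value).decode("utf-8")
    except Exception:
        return None

decoded = decode_public_key("aGVsbG8g5YWs6ZKl")  # → "hello 公钥"
```

Note that both invalid base64 and byte sequences that are not valid UTF-8 fall into the `except` and return `None`.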
@staticmethod
def _extract_rsa_token(resp_json: dict, headers: Mapping[str, str]) -> Optional[str]:
token = headers.get("x-rsa-token") or headers.get("X-Rsa-Token")
if token:
return token
token = resp_json.get("xRsaToken") or resp_json.get("x-rsa-token")
if token:
return token
data = resp_json.get("data") if isinstance(resp_json, Mapping) else None
if isinstance(data, Mapping):
return data.get("xRsaToken") or data.get("x-rsa-token")
return None
def _common_headers(self) -> dict[str, str]:
"""
获取绿联 Web 端通用请求头。
"""
return {
"Accept": "application/json, text/plain, */*",
"Client-Id": self._client_id,
"Client-Version": self._client_version,
"UG-Agent": self._ug_agent,
"X-Specify-Language": self._language,
}
def _request_json(
self,
url: str,
method: str = "GET",
headers: Optional[dict] = None,
params: Optional[dict] = None,
json_data: Optional[dict] = None,
) -> Optional[dict]:
"""
发送 HTTP 请求并尝试解析为 JSON。
"""
try:
method = method.upper()
if method == "POST":
resp = self._session.post(
url=url,
headers=headers,
params=params,
json=json_data,
timeout=self._timeout,
verify=self._verify_ssl,
)
else:
resp = self._session.get(
url=url,
headers=headers,
params=params,
timeout=self._timeout,
verify=self._verify_ssl,
)
return resp.json()
except Exception as err:
logger.error(f"请求绿联接口失败:{url} {err}")
return None
@staticmethod
def _build_result(payload: Any) -> ApiResult:
if not isinstance(payload, Mapping):
return ApiResult(code=-1, msg="响应格式错误", raw=None)
code = payload.get("code")
try:
code = int(code)
except Exception:
code = -1
return ApiResult(
code=code,
msg=str(payload.get("msg") or ""),
data=payload.get("data"),
debug=payload.get("debug"),
raw=dict(payload),
)
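Every response funnels through `_build_result` above, which coerces arbitrary payloads into `ApiResult`: non-mapping payloads and non-numeric codes both degrade to code -1, and only code 200 counts as success. A self-contained version of that coercion (same rule, reduced fields):

```python
from dataclasses import dataclass
from typing import Any, Mapping, Optional

@dataclass
class ApiResult:
    code: int = -1
    msg: str = ""
    data: Any = None
    raw: Optional[dict] = None

    @property
    def success(self) -> bool:
        return self.code == 200

def build_result(payload: Any) -> ApiResult:
    """Coerce a raw response payload into an ApiResult, degrading to code -1."""
    if not isinstance(payload, Mapping):
        return ApiResult(code=-1, msg="响应格式错误")
    try:
        code = int(payload.get("code"))
    except Exception:
        code = -1
    return ApiResult(code=code, msg=str(payload.get("msg") or ""),
                     data=payload.get("data"), raw=dict(payload))
```

Coercing `code` through `int()` also tolerates servers that return the status code as a string.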
def login(self, username: str, password: str, keepalive: bool = True) -> Optional[str]:
"""
登录绿联账号并初始化加密上下文。
:param username: 用户名
:param password: 密码(会先做 RSA 分段加密)
:param keepalive: 是否保持登录
:return: 登录成功返回 token
"""
if not username or not password:
return None
headers = self._common_headers()
try:
check_resp = self._session.post(
url=f"{self._host}/ugreen/v1/verify/check",
headers=headers,
json={"username": username},
timeout=self._timeout,
verify=self._verify_ssl,
)
check_json = check_resp.json()
except Exception as err:
logger.error(f"绿联获取登录公钥失败:{err}")
return None
check_result = self._build_result(check_json)
if not check_result.success:
logger.error(f"绿联获取登录公钥失败:{check_result.msg}")
return None
rsa_token = self._extract_rsa_token(check_json, check_resp.headers)
login_public_key = self._decode_public_key(rsa_token)
if not login_public_key:
logger.error("绿联获取登录公钥失败:公钥为空")
return None
encrypted_password = UgreenCrypto(public_key=login_public_key).rsa_encrypt_long(password)
login_json = self._request_json(
url=f"{self._host}/ugreen/v1/verify/login",
method="POST",
headers=headers,
json_data={
"username": username,
"password": encrypted_password,
"keepalive": keepalive,
"otp": True,
"is_simple": True,
},
)
if not login_json:
return None
login_result = self._build_result(login_json)
if not login_result.success or not isinstance(login_result.data, Mapping):
logger.error(f"绿联登录失败:{login_result.msg}")
return None
token = str(login_result.data.get("token") or "").strip()
public_key = self._decode_public_key(str(login_result.data.get("public_key") or ""))
if not token or not public_key:
logger.error("绿联登录失败:未返回 token/public_key")
return None
self._token = token
static_token = str(login_result.data.get("static_token") or "").strip()
self._static_token = static_token or self._token
self._is_ugk = bool(login_result.data.get("is_ugk"))
self._public_key = public_key
self._crypto = UgreenCrypto(
public_key=self._public_key,
token=self._token,
client_id=self._client_id,
client_version=self._client_version,
ug_agent=self._ug_agent,
language=self._language,
)
self._username = username
return self._token
def export_session_state(self) -> Optional[dict]:
"""
导出当前登录会话,供持久化存储使用。
"""
if not self._token or not self._public_key:
return None
return {
"token": self._token,
"static_token": self._static_token,
"is_ugk": self._is_ugk,
"public_key": self._public_key,
"username": self._username,
"client_id": self._client_id,
"client_version": self._client_version,
"language": self._language,
"ug_agent": self._ug_agent,
"cookies": self._session.cookies.get_dict(),
}
def import_session_state(self, state: Mapping[str, Any]) -> bool:
"""
从持久化数据恢复登录会话,避免重复登录。
"""
if not isinstance(state, Mapping):
return False
token = str(state.get("token") or "").strip()
public_key = self._decode_public_key(str(state.get("public_key") or ""))
if not token or not public_key:
return False
static_token = str(state.get("static_token") or "").strip()
is_ugk = bool(state.get("is_ugk"))
        # The session may be bound to the client_id, so restore the original client info
client_id = str(state.get("client_id") or "").strip()
if client_id:
self._client_id = client_id
client_version = str(state.get("client_version") or "").strip()
if client_version:
self._client_version = client_version
language = str(state.get("language") or "").strip()
if language:
self._language = language
ug_agent = str(state.get("ug_agent") or "").strip()
if ug_agent:
self._ug_agent = ug_agent
username = str(state.get("username") or "").strip()
self._username = username or None
cookies = state.get("cookies")
if isinstance(cookies, Mapping):
try:
self._session.cookies.update(
{
str(k): str(v)
for k, v in cookies.items()
if k is not None and v is not None
}
)
except Exception:
pass
self._token = token
self._static_token = static_token or self._token
self._is_ugk = is_ugk
self._public_key = public_key
self._crypto = UgreenCrypto(
public_key=self._public_key,
token=self._token,
client_id=self._client_id,
client_version=self._client_version,
ug_agent=self._ug_agent,
language=self._language,
)
return True
def logout(self):
"""
登出并清理本地认证状态。
"""
if not self._token or not self._crypto:
return
try:
req = self._crypto.build_encrypted_request(
url=f"{self._host}/ugreen/v1/verify/logout",
method="GET",
params={},
)
self._session.get(
req.url,
headers=req.headers,
params=req.params,
timeout=self._timeout,
verify=self._verify_ssl,
)
except Exception:
pass
self._token = None
self._static_token = None
self._is_ugk = False
self._public_key = None
self._crypto = None
self._username = None
def request(
self,
path: str,
method: str = "GET",
params: Optional[dict] = None,
data: Optional[dict] = None,
) -> ApiResult:
"""
统一请求入口。
核心行为:
1. 自动把 `params` 明文序列化并加密为 `encrypt_query`
2. 自动注入绿联安全头(`X-Ugreen-*`
3. 对 `POST/PUT/PATCH` 的 JSON 体加密;
4. 自动解密 `encrypt_resp_body`。
:param path: `/ugreen/` 后的相对路径,例如 `v1/video/homepage/media_list`
:param method: HTTP 方法
:param params: 明文查询参数(无需自己处理 encrypt_query
:param data: 明文 JSON 请求体(自动加密)
"""
if not self._crypto:
return ApiResult(code=-1, msg="未登录")
api_path = path.strip("/")
        # The crypto helper builds encrypt_query and the encrypted body automatically
req = self._crypto.build_encrypted_request(
url=f"{self._host}/ugreen/{api_path}",
method=method.upper(),
params=params or {},
data=data,
encrypt_body=method.upper() in {"POST", "PUT", "PATCH"},
)
payload = self._request_json(
url=req.url,
method=method,
headers=req.headers,
params=req.params,
json_data=req.json,
)
if payload is None:
return ApiResult(code=-1, msg="接口请求失败")
        # If the response contains encrypt_resp_body, it is decrypted here
decrypted = self._crypto.decrypt_response(payload, req.aes_key)
return self._build_result(decrypted)
def current_user(self) -> Optional[dict]:
"""
获取当前登录用户信息。
"""
result = self.request("v1/user/current/user")
if not result.success or not isinstance(result.data, Mapping):
return None
return dict(result.data)
def media_list(self) -> list[dict]:
"""
获取首页媒体库列表(`media_lib_info_list`)。
"""
result = self.request("v1/video/homepage/media_list")
if not result.success or not isinstance(result.data, Mapping):
return []
items = result.data.get("media_lib_info_list")
return items if isinstance(items, list) else []
def media_lib_users(self) -> list[dict]:
"""
获取媒体库用户列表。
"""
result = self.request("v1/video/media_lib/get_user_list")
if not result.success or not isinstance(result.data, Mapping):
return []
users = result.data.get("user_info_arr")
return users if isinstance(users, list) else []
def recently_played(self, page: int = 1, page_size: int = 12) -> Optional[dict]:
"""
获取继续观看列表。
"""
result = self.request(
"v1/video/recently_played/get",
params={
"page": page,
"page_size": page_size,
"language": self._language,
"create_time_order": "false",
},
)
return result.data if result.success and isinstance(result.data, Mapping) else None
def recently_updated(self, page: int = 1, page_size: int = 20) -> Optional[dict]:
"""
获取最近更新列表。
"""
result = self.request(
"v1/video/recently_update/get",
params={
"page": page,
"page_size": page_size,
"language": self._language,
"create_time_order": "false",
},
)
return result.data if result.success and isinstance(result.data, Mapping) else None
def recently_played_info(self, item_id: Union[str, int]) -> Optional[dict]:
"""
获取单个视频的播放状态与基础详情信息。
"""
result = self.request(
"v1/video/recently_played/info",
params={
"ug_video_info_id": item_id,
"version_control": "true",
},
)
if result.code in {200, 1303} and isinstance(result.data, Mapping):
return dict(result.data)
return None
def search(self, keyword: str, offset: int = 0, limit: int = 200) -> Optional[dict]:
"""
搜索媒体(电影/剧集)。
"""
result = self.request(
"v1/video/search",
params={
"language": self._language,
"search_type": 1,
"offset": offset,
"limit": limit,
"keyword": keyword,
},
)
return result.data if result.success and isinstance(result.data, Mapping) else None
def video_all(self, classification: int, page: int = 1, page_size: int = 20) -> Optional[dict]:
"""
获取 `v1/video/all` 分类列表。
常用分类:
-102: 电影
-103: 电视剧
"""
result = self.request(
"v1/video/all",
params={
"page": page,
"pageSize": page_size,
"classification": classification,
"sort_type": 2,
"order_type": 2,
"release_date_begin": -9999999999,
"release_date_end": -9999999999,
"identify_status": 0,
"watch_status": -1,
"ug_style_id": 0,
"ug_country_id": 0,
"clarity": -1,
},
)
return result.data if result.success and isinstance(result.data, Mapping) else None
def poster_wall_get_folder(
self,
path: Optional[str] = None,
page: int = 1,
page_size: int = 100,
sort_type: int = 1,
order_type: int = 1,
) -> Optional[dict]:
"""
获取海报墙文件夹与条目(可按目录路径递归展开)。
"""
params: Dict[str, Any] = {
"page": page,
"page_size": page_size,
"sort_type": sort_type,
"order_type": order_type,
}
if path:
params["path"] = path
result = self.request("v1/video/poster_wall/media_lib/get_folder", params=params)
return result.data if result.success and isinstance(result.data, Mapping) else None
def get_movie(
self,
item_id: Union[str, int],
media_lib_set_id: Union[str, int],
path: Optional[str] = None,
folder_path: Optional[str] = None,
) -> Optional[dict]:
"""
获取电影详情。
"""
params: Dict[str, Any] = {
"id": item_id,
"media_lib_set_id": media_lib_set_id,
"fileVersion": "true",
}
if path:
params["path"] = path
if folder_path:
params["folder_path"] = folder_path
result = self.request("v1/video/details/getMovie", params=params)
return result.data if result.success and isinstance(result.data, Mapping) else None
def get_tv(self, item_id: Union[str, int], folder_path: str = "ALL") -> Optional[dict]:
"""
获取剧集详情(含季/集信息)。
"""
result = self.request(
"v2/video/details/getTV",
params={
"ug_video_info_id": item_id,
"folder_path": folder_path,
},
)
return result.data if result.success and isinstance(result.data, Mapping) else None
def scan(self, media_lib_set_id: Union[str, int], scan_type: int = 2, op_type: int = 2) -> bool:
"""
触发媒体库扫描。
:param media_lib_set_id: 媒体库 ID
:param scan_type: 扫描类型1: 新添加和修改, 2: 补充缺失, 3: 覆盖扫描)
:param op_type: 操作类型(网页端常用 2
"""
result = self.request(
"v1/video/media_lib/scan",
params={
"op_type": op_type,
"media_lib_set_id": media_lib_set_id,
"media_lib_scan_type": scan_type,
},
)
return result.success
def scan_status(self, only_brief: bool = True) -> list[dict]:
"""
获取媒体库扫描状态。
"""
result = self.request(
"v1/video/media_lib/scan/status",
params={"only_brief": "true" if only_brief else "false"},
)
if not result.success or not isinstance(result.data, Mapping):
return []
arr = result.data.get("media_lib_scan_status_arr")
return arr if isinstance(arr, list) else []
def preferences_all(self) -> Optional[Any]:
"""
获取影视偏好设置(`v1/video/preferences/all`)。
"""
result = self.request("v1/video/preferences/all")
return result.data if result.success else None
def history_get(self, num: int = 10) -> Optional[Any]:
"""
获取历史记录(`v1/video/history/get`)。
"""
result = self.request("v1/video/history/get", params={"num": num})
return result.data if result.success else None
def data_source_get_config(self) -> Optional[Any]:
"""
获取数据源配置(`v1/video/data_source/get_config`)。
"""
result = self.request("v1/video/data_source/get_config")
return result.data if result.success else None
def homepage_slider(
self, language: Optional[str] = None, app_name: str = "web"
) -> Optional[Any]:
"""
获取首页轮播数据(`v1/video/homepage/slider`)。
"""
result = self.request(
"v1/video/homepage/slider",
params={
"language": language or self._language,
"app_name": app_name,
},
)
return result.data if result.success else None
def media_lib_guide_init(self) -> Optional[Any]:
"""
获取媒体库引导初始化信息(`v1/video/media_lib/guide_init`)。
"""
result = self.request("v1/video/media_lib/guide_init")
return result.data if result.success else None
def media_lib_filter_options(
self, media_type: int = 0, language: Optional[str] = None
) -> Optional[Any]:
"""
获取媒体库筛选项(`v1/video/media_lib/filter/options`)。
"""
result = self.request(
"v1/video/media_lib/filter/options",
params={
"type": media_type,
"language": language or self._language,
},
)
return result.data if result.success else None
def guide(self, guide_position: int = 1, client_type: int = 1) -> Optional[Any]:
"""
获取引导位数据(`v1/video/guide`)。
"""
result = self.request(
"v1/video/guide",
params={
"guide_position": guide_position,
"client_type": client_type,
},
)
return result.data if result.success else None
def homepage_v2(self, language: Optional[str] = None) -> Optional[Any]:
"""
获取新版首页聚合数据(`v2/video/homepage`)。
"""
result = self.request(
"v2/video/homepage",
params={"language": language or self._language},
)
return result.data if result.success else None
def media_lib_init_user_permission(self) -> Optional[Any]:
"""
初始化用户媒体库权限(`v1/video/media_lib/init_user_permission`)。
"""
result = self.request("v1/video/media_lib/init_user_permission")
return result.data if result.success else None
def media_lib_get_all(
self, req_type: int = 2, language: Optional[str] = None
) -> Optional[Any]:
"""
获取全部媒体库集合(`v1/video/media_lib/get_all`)。
"""
result = self.request(
"v1/video/media_lib/get_all",
params={
"mediaLib_get_all_req_type": req_type,
"language": language or self._language,
},
)
return result.data if result.success else None
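The `_build_result` normalisation that every endpoint above funnels through can be exercised standalone. A minimal sketch, assuming `ApiResult.success` simply means `code == 200` (the real `ApiResult` dataclass is defined elsewhere in this module and is stubbed here):

```python
from dataclasses import dataclass
from typing import Any, Mapping, Optional


@dataclass
class ApiResult:
    # Stand-in for the real ApiResult defined elsewhere in this module.
    code: int
    msg: str = ""
    data: Any = None
    debug: Any = None
    raw: Optional[dict] = None

    @property
    def success(self) -> bool:
        return self.code == 200


def build_result(payload: Any) -> ApiResult:
    # Non-mapping payloads (None, HTML error pages, ...) become a generic error.
    if not isinstance(payload, Mapping):
        return ApiResult(code=-1, msg="bad response format")
    try:
        code = int(payload.get("code"))
    except (TypeError, ValueError):
        code = -1
    return ApiResult(
        code=code,
        msg=str(payload.get("msg") or ""),
        data=payload.get("data"),
        debug=payload.get("debug"),
        raw=dict(payload),
    )


ok = build_result({"code": "200", "msg": "", "data": {"token": "t"}})
bad = build_result(None)
```

Coercing `code` with `int()` tolerates servers that return the status as a string, which is why the sketch feeds `"200"` rather than `200`.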


@@ -0,0 +1,966 @@
import hashlib
from collections import deque
from datetime import datetime
from pathlib import Path
from typing import Any, Dict, Generator, List, Mapping, Optional, Union
from urllib.parse import parse_qs, urlparse
from app import schemas
from app.db.systemconfig_oper import SystemConfigOper
from app.log import logger
from app.modules.ugreen.api import Api
from app.schemas import MediaType
from app.schemas.types import SystemConfigKey
from app.utils.url import UrlUtils
class Ugreen:
_username: Optional[str] = None
_password: Optional[str] = None
_userinfo: Optional[dict] = None
_host: Optional[str] = None
_playhost: Optional[str] = None
_libraries: dict[str, dict] = {}
_library_paths: dict[str, str] = {}
_sync_libraries: List[str] = []
_scan_type: int = 2
_verify_ssl: bool = True
_api: Optional[Api] = None
def __init__(
self,
host: Optional[str] = None,
username: Optional[str] = None,
password: Optional[str] = None,
play_host: Optional[str] = None,
sync_libraries: Optional[list] = None,
scan_mode: Optional[Union[str, int]] = None,
scan_type: Optional[Union[str, int]] = None,
verify_ssl: Optional[Union[bool, str, int]] = True,
**kwargs,
):
if not host or not username or not password:
logger.error("绿联影视配置不完整!!")
return
self._host = host
self._username = username
self._password = password
self._sync_libraries = sync_libraries or []
        # UGREEN media library scan modes:
        # 1 new and modified, 2 fill in missing, 3 full rescan
self._scan_type = self.__resolve_scan_type(scan_mode=scan_mode, scan_type=scan_type)
        # HTTPS certificate verification: on by default; disable only for self-signed certificates and similar setups.
self._verify_ssl = self.__resolve_verify_ssl(verify_ssl)
if play_host:
self._playhost = UrlUtils.standardize_base_url(play_host).rstrip("/")
if not self.reconnect():
logger.error(f"请检查服务端地址 {host}")
@property
def api(self) -> Optional[Api]:
return self._api
def close(self):
self.disconnect()
def is_configured(self) -> bool:
return bool(self._host and self._username and self._password)
def is_authenticated(self) -> bool:
return (
self.is_configured()
and self._api is not None
and self._api.token is not None
and self._userinfo is not None
)
def is_inactive(self) -> bool:
if not self.is_authenticated():
return True
self._userinfo = self._api.current_user() if self._api else None
return self._userinfo is None
def __session_cache_key(self) -> str:
"""
生成当前绿联实例的会话缓存键(基于 host + username
"""
normalized_host = UrlUtils.standardize_base_url(self._host or "").rstrip("/").lower()
username = (self._username or "").strip().lower()
raw = f"{normalized_host}|{username}"
return hashlib.sha256(raw.encode("utf-8")).hexdigest()
def __password_digest(self) -> str:
"""
存储密码摘要用于检测配置是否变更,避免明文落盘。
"""
return hashlib.sha256((self._password or "").encode("utf-8")).hexdigest()
@staticmethod
def __load_all_session_cache() -> dict:
sessions = SystemConfigOper().get(SystemConfigKey.UgreenSessionCache)
return sessions if isinstance(sessions, dict) else {}
@staticmethod
def __save_all_session_cache(sessions: dict):
SystemConfigOper().set(SystemConfigKey.UgreenSessionCache, sessions)
def __remove_persisted_session(self):
cache_key = self.__session_cache_key()
sessions = self.__load_all_session_cache()
if cache_key in sessions:
sessions.pop(cache_key, None)
self.__save_all_session_cache(sessions)
def __save_persisted_session(self):
if not self._api:
return
session_state = self._api.export_session_state()
if not session_state:
return
sessions = self.__load_all_session_cache()
cache_key = self.__session_cache_key()
sessions[cache_key] = {
**session_state,
"host": UrlUtils.standardize_base_url(self._host or "").rstrip("/"),
"username": self._username,
"password_digest": self.__password_digest(),
"updated_at": int(datetime.now().timestamp()),
}
self.__save_all_session_cache(sessions)
def __restore_persisted_session(self) -> bool:
cache_key = self.__session_cache_key()
sessions = self.__load_all_session_cache()
cached = sessions.get(cache_key)
if not isinstance(cached, Mapping):
return False
        # Do not reuse the old session after a configuration change (especially a password change)
if cached.get("password_digest") != self.__password_digest():
logger.info(f"绿联影视 {self._username} 检测到密码变更,清理旧会话缓存")
self.__remove_persisted_session()
return False
api = Api(host=self._host, verify_ssl=self._verify_ssl)
if not api.import_session_state(cached):
api.close()
self.__remove_persisted_session()
return False
userinfo = api.current_user()
if not userinfo:
            # Session expired: clear the cache and fall back to a normal login
api.close()
self.__remove_persisted_session()
logger.info(f"绿联影视 {self._username} 持久化会话已失效,准备重新登录")
return False
self._api = api
self._userinfo = userinfo
logger.debug(f"{self._username} 已复用绿联影视持久化会话")
return True
def reconnect(self) -> bool:
if not self.is_configured():
return False
        # Close the old connection (without logging out, to keep the session reusable)
self.disconnect(logout=False)
if self.__restore_persisted_session():
self.get_librarys()
return True
self._api = Api(host=self._host, verify_ssl=self._verify_ssl)
if self._api.login(self._username, self._password) is None:
self.__remove_persisted_session()
return False
self._userinfo = self._api.current_user()
if not self._userinfo:
self.__remove_persisted_session()
return False
        # Persist the session after a successful login so it can be reused next time
self.__save_persisted_session()
logger.debug(f"{self._username} 成功登录绿联影视")
self.get_librarys()
return True
def disconnect(self, logout: bool = False):
if self._api:
if logout:
                # On explicit logout, also clear the local cache
self._api.logout()
self.__remove_persisted_session()
self._api.close()
self._api = None
self._userinfo = None
logger.debug(f"{self._username} 已断开绿联影视")
@staticmethod
def __normalize_dir_path(path: Union[str, Path, None]) -> str:
if path is None:
return ""
value = str(path).replace("\\", "/").rstrip("/")
return value
@staticmethod
def __is_subpath(path: Union[str, Path, None], parent: Union[str, Path, None]) -> bool:
path_str = Ugreen.__normalize_dir_path(path)
parent_str = Ugreen.__normalize_dir_path(parent)
if not path_str or not parent_str:
return False
return path_str == parent_str or path_str.startswith(parent_str + "/")
def __build_image_stream_url(self, source_url: str, size: int = 1) -> Optional[str]:
"""
通过绿联 getImaStream 中转图片,规避 scraper.ugnas.com 403 问题。
"""
if not self._api:
return None
auth_token = self._api.static_token or self._api.token
if not auth_token:
return None
params = {
"app_name": "web",
"name": source_url,
"size": size,
}
if self._api.is_ugk:
params["ugk"] = auth_token
else:
params["token"] = auth_token
return UrlUtils.combine_url(
host=self._api.host,
path="/ugreen/v2/video/getImaStream",
query=params,
)
def __resolve_image(self, path: Optional[str]) -> Optional[str]:
if not path:
return None
if path.startswith("http://") or path.startswith("https://"):
parsed = urlparse(path)
if parsed.netloc.lower() == "scraper.ugnas.com":
                # Prefer rewriting scraper links to the local getImaStream to avoid 403s from expired signatures
if image_stream_url := self.__build_image_stream_url(path):
return image_stream_url
            # Images UGREEN returns from scraper.ugnas.com usually carry a time-limited auth_key
            # signature and return 403 once it expires; filter them early to avoid broken images.
if self.__is_expired_signed_image(path):
return None
return path
        # Local UGREEN image paths need extra auth headers; the MP image proxy currently only supports cookies, so local paths are ignored for now.
return None
@staticmethod
def __is_expired_signed_image(url: str) -> bool:
"""
判断绿联 scraper 签名图是否已过期。
auth_key 结构通常为:
`{过期时间戳}-{随机串}-...`
"""
try:
parsed = urlparse(url)
if parsed.netloc.lower() != "scraper.ugnas.com":
return False
auth_key = parse_qs(parsed.query).get("auth_key", [None])[0]
if not auth_key:
return False
expire_part = str(auth_key).split("-", 1)[0]
expire_ts = int(expire_part)
now_ts = int(datetime.now().timestamp())
return expire_ts <= now_ts
except Exception:
return False
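The auth_key parsing in `__is_expired_signed_image` above can be checked in isolation. This standalone sketch mirrors that logic, but takes the current timestamp as a parameter so it is deterministic; the URL and timestamps are made up:

```python
from urllib.parse import parse_qs, urlparse


def is_expired_signed_image(url: str, now_ts: int) -> bool:
    # Mirrors the check above: the first dash-separated field of auth_key
    # is a unix expiry timestamp; only scraper.ugnas.com URLs are signed.
    parsed = urlparse(url)
    if parsed.netloc.lower() != "scraper.ugnas.com":
        return False
    auth_key = parse_qs(parsed.query).get("auth_key", [None])[0]
    if not auth_key:
        return False
    try:
        expire_ts = int(str(auth_key).split("-", 1)[0])
    except ValueError:
        return False
    return expire_ts <= now_ts


# Made-up URL: expiry 1700000000 is in the past relative to now_ts below.
expired = is_expired_signed_image(
    "https://scraper.ugnas.com/img/p.jpg?auth_key=1700000000-abc123-0",
    now_ts=1800000000,
)
```

Note that unparseable or absent auth_key values are treated as "not expired", matching the conservative `except` fallback in the method above.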
@staticmethod
def __parse_year(video_info: dict) -> Optional[Union[str, int]]:
year = video_info.get("year")
if isinstance(year, int) and year > 0:
return year
release_date = video_info.get("release_date")
if isinstance(release_date, (int, float)) and release_date > 0:
try:
return datetime.fromtimestamp(release_date).year
except Exception:
return None
return None
@staticmethod
def __map_item_type(video_type: Any) -> Optional[str]:
if video_type == 2:
return "Series"
if video_type == 1:
return "Movie"
if video_type == 3:
return "Collection"
if video_type == 0:
return "Folder"
return "Video"
@staticmethod
def __build_media_server_item(video_info: dict, play_status: Optional[dict] = None):
user_state = schemas.MediaServerItemUserState()
if isinstance(play_status, dict):
progress = play_status.get("progress")
watch_status = play_status.get("watch_status")
if watch_status == 2:
user_state.played = True
if isinstance(progress, (int, float)) and progress > 0:
user_state.resume = progress < 1
user_state.percentage = progress * 100.0
last_play_time = play_status.get("last_access_time") or play_status.get("LastPlayTime")
if isinstance(last_play_time, (int, float)) and last_play_time > 0:
user_state.last_played_date = str(int(last_play_time))
tmdb_id = video_info.get("tmdb_id")
if not isinstance(tmdb_id, int) or tmdb_id <= 0:
tmdb_id = None
item_id = video_info.get("ug_video_info_id")
if item_id is None:
return None
return schemas.MediaServerItem(
server="ugreen",
library=video_info.get("media_lib_set_id"),
item_id=str(item_id),
item_type=Ugreen.__map_item_type(video_info.get("type")),
title=video_info.get("name"),
original_title=video_info.get("original_name"),
year=Ugreen.__parse_year(video_info),
tmdbid=tmdb_id,
user_state=user_state,
)
def __build_root_url(self) -> str:
"""
统一返回 NAS Web 根地址作为跳转链接,避免失效深链。
"""
host = self._playhost or (self._api.host if self._api else "")
if not host:
return ""
return f"{host.rstrip('/')}/"
def __build_play_url(self, item_id: Union[str, int], video_type: Any, media_lib_set_id: Any) -> str:
        # UGREEN deep links break on some versions, so always fall back to the NAS root address.
return self.__build_root_url()
def __build_play_item_from_wrapper(self, wrapper: dict) -> Optional[schemas.MediaServerPlayItem]:
video_info = wrapper.get("video_info") if isinstance(wrapper.get("video_info"), dict) else wrapper
if not isinstance(video_info, dict):
return None
item_id = video_info.get("ug_video_info_id")
if item_id is None:
return None
play_status = wrapper.get("play_status") if isinstance(wrapper.get("play_status"), dict) else {}
progress = play_status.get("progress") if isinstance(play_status.get("progress"), (int, float)) else 0
if video_info.get("type") == 2:
subtitle = play_status.get("tv_name") or "剧集"
media_type = MediaType.TV.value
else:
subtitle = "电影" if video_info.get("type") == 1 else "视频"
media_type = MediaType.MOVIE.value
image = self.__resolve_image(video_info.get("poster_path")) or self.__resolve_image(
video_info.get("backdrop_path")
)
return schemas.MediaServerPlayItem(
id=str(item_id),
title=video_info.get("name"),
subtitle=subtitle,
type=media_type,
image=image,
link=self.__build_play_url(item_id, video_info.get("type"), video_info.get("media_lib_set_id")),
percent=max(0.0, min(100.0, progress * 100.0)),
server_type="ugreen",
use_cookies=False,
)
@staticmethod
def __infer_library_type(name: str, path: Optional[str]) -> str:
name = name or ""
path = path or ""
if "电视剧" in path or any(key in name for key in ["", "综艺", "动漫", "纪录片"]):
return MediaType.TV.value
if "电影" in path or "电影" in name:
return MediaType.MOVIE.value
return MediaType.UNKNOWN.value
def __is_library_blocked(self, library_id: str) -> bool:
        return bool(
            self._sync_libraries
            and "all" not in self._sync_libraries
            and str(library_id) not in self._sync_libraries
        )
@staticmethod
def __resolve_scan_type(
scan_mode: Optional[Union[str, int]] = None,
scan_type: Optional[Union[str, int]] = None,
) -> int:
"""
解析绿联扫描模式并转为 `media_lib_scan_type`。
支持值:
- 1 / new_and_modified: 新添加和修改
- 2 / supplement_missing: 补充缺失
- 3 / full_override: 覆盖扫描
"""
# 优先使用显式 scan_type 数值配置。
for value in (scan_type, scan_mode):
try:
parsed = int(value) # type: ignore[arg-type]
if parsed in (1, 2, 3):
return parsed
except Exception:
pass
mode = str(scan_mode or "").strip().lower()
mode_map = {
"new_and_modified": 1,
"new_modified": 1,
"add": 1,
"added": 1,
"new": 1,
"scan_new_modified": 1,
"supplement_missing": 2,
"supplement": 2,
"additional": 2,
"missing": 2,
"scan_missing": 2,
"full_override": 3,
"override": 3,
"cover": 3,
"replace": 3,
"scan_override": 3,
}
return mode_map.get(mode, 2)
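The precedence implemented by `__resolve_scan_type` (numeric values win, then string aliases, then the default of 2) can be sketched compactly; the alias table here is trimmed to one canonical alias per mode for brevity:

```python
from typing import Optional, Union


def resolve_scan_type(
    scan_mode: Optional[Union[str, int]] = None,
    scan_type: Optional[Union[str, int]] = None,
) -> int:
    # Numeric values (1/2/3) win, scan_type before scan_mode; otherwise fall
    # back to the alias table, defaulting to 2 ("fill in missing").
    for value in (scan_type, scan_mode):
        try:
            parsed = int(value)  # type: ignore[arg-type]
            if parsed in (1, 2, 3):
                return parsed
        except (TypeError, ValueError):
            pass
    aliases = {"new_and_modified": 1, "supplement_missing": 2, "full_override": 3}
    return aliases.get(str(scan_mode or "").strip().lower(), 2)


a = resolve_scan_type(scan_mode="full_override")        # alias -> 3
b = resolve_scan_type(scan_type="1", scan_mode="full_override")  # numeric wins -> 1
c = resolve_scan_type()                                  # default -> 2
```

Catching `TypeError` alongside `ValueError` matters: `int(None)` raises the former, `int("full_override")` the latter.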
@staticmethod
def __resolve_verify_ssl(verify_ssl: Optional[Union[bool, str, int]]) -> bool:
if isinstance(verify_ssl, bool):
return verify_ssl
if verify_ssl is None:
return True
value = str(verify_ssl).strip().lower()
if value in {"1", "true", "yes", "on"}:
return True
if value in {"0", "false", "no", "off"}:
return False
return True
def __scan_library(self, library_id: str, scan_type: Optional[int] = None) -> bool:
if not self._api:
return False
return self._api.scan(
media_lib_set_id=library_id,
scan_type=scan_type or self._scan_type,
op_type=2,
)
def __load_library_paths(self) -> dict[str, str]:
if not self._api:
return {}
paths: dict[str, str] = {}
page = 1
while True:
data = self._api.poster_wall_get_folder(page=page, page_size=100)
if not data:
break
for folder in data.get("folder_arr") or []:
lib_id = folder.get("media_lib_set_id")
lib_path = folder.get("path")
if lib_id is not None and lib_path:
paths[str(lib_id)] = str(lib_path)
if data.get("is_last_page"):
break
page += 1
return paths
def get_librarys(self, hidden: Optional[bool] = False) -> List[schemas.MediaServerLibrary]:
if not self.is_authenticated() or not self._api:
return []
media_libs = self._api.media_list()
self._library_paths = self.__load_library_paths()
libraries = []
self._libraries = {}
for lib in media_libs:
lib_id = str(lib.get("media_lib_set_id"))
if hidden and self.__is_library_blocked(lib_id):
continue
lib_name = lib.get("media_name") or ""
lib_path = self._library_paths.get(lib_id)
library_type = self.__infer_library_type(lib_name, lib_path)
poster_paths = lib.get("poster_paths") or []
backdrop_paths = lib.get("backdrop_paths") or []
image_list = list(
filter(
None,
[self.__resolve_image(p) for p in [*poster_paths, *backdrop_paths]],
)
)
self._libraries[lib_id] = {
"id": lib_id,
"name": lib_name,
"path": lib_path,
"type": library_type,
"video_count": lib.get("video_count") or 0,
}
libraries.append(
schemas.MediaServerLibrary(
server="ugreen",
id=lib_id,
name=lib_name,
type=library_type,
path=lib_path,
image_list=image_list,
link=self.__build_root_url(),
server_type="ugreen",
use_cookies=False,
)
)
return libraries
def get_user_count(self) -> int:
if not self.is_authenticated() or not self._api:
return 0
users = self._api.media_lib_users()
return len(users)
def get_medias_count(self) -> schemas.Statistic:
if not self.is_authenticated() or not self._api:
return schemas.Statistic()
movie_data = self._api.video_all(classification=-102, page=1, page_size=1) or {}
tv_data = self._api.video_all(classification=-103, page=1, page_size=1) or {}
return schemas.Statistic(
movie_count=int(movie_data.get("total_num") or 0),
tv_count=int(tv_data.get("total_num") or 0),
            # UGREEN does not currently report a total episode count; return None so the frontend shows it as unavailable.
episode_count=None,
)
def authenticate(self, username: str, password: str) -> Optional[str]:
if not username or not password or not self._host:
return None
api = Api(self._host, verify_ssl=self._verify_ssl)
try:
return api.login(username, password)
finally:
api.logout()
api.close()
@staticmethod
def __extract_video_info_list(bucket: Any) -> list[dict]:
if not isinstance(bucket, Mapping):
return []
video_arr = bucket.get("video_arr")
if not isinstance(video_arr, list):
return []
result = []
for item in video_arr:
if not isinstance(item, Mapping):
continue
info = item.get("video_info")
if isinstance(info, Mapping):
result.append(dict(info))
return result
def get_movies(
self, title: str, year: Optional[str] = None, tmdb_id: Optional[int] = None
) -> Optional[List[schemas.MediaServerItem]]:
if not self.is_authenticated() or not self._api or not title:
return None
data = self._api.search(title)
if not data:
return []
movies = []
for info in self.__extract_video_info_list(data.get("movies_list")):
info_tmdb = info.get("tmdb_id")
if tmdb_id and tmdb_id != info_tmdb:
continue
if title not in [info.get("name"), info.get("original_name")]:
continue
item_year = info.get("year")
if year and str(item_year) != str(year):
continue
media_item = self.__build_media_server_item(info)
if media_item:
movies.append(media_item)
return movies
def __search_tv_item(self, title: str, year: Optional[str] = None, tmdb_id: Optional[int] = None) -> Optional[dict]:
if not self._api:
return None
data = self._api.search(title)
if not data:
return None
for info in self.__extract_video_info_list(data.get("tv_list")):
if tmdb_id and tmdb_id != info.get("tmdb_id"):
continue
if title not in [info.get("name"), info.get("original_name")]:
continue
item_year = info.get("year")
if year and str(item_year) != str(year):
continue
return info
return None
def get_tv_episodes(
self,
item_id: Optional[str] = None,
title: Optional[str] = None,
year: Optional[str] = None,
tmdb_id: Optional[int] = None,
season: Optional[int] = None,
) -> tuple[Optional[str], Optional[Dict[int, list]]]:
if not self.is_authenticated() or not self._api:
return None, None
if not item_id:
if not title:
return None, None
if not (tv_info := self.__search_tv_item(title, year, tmdb_id)):
return None, None
found_item_id = tv_info.get("ug_video_info_id")
if found_item_id is None:
return None, None
item_id = str(found_item_id)
else:
item_id = str(item_id)
item_info = self.get_iteminfo(item_id)
if not item_info:
return None, {}
if tmdb_id and item_info.tmdbid and tmdb_id != item_info.tmdbid:
return None, {}
tv_detail = self._api.get_tv(item_id, folder_path="ALL")
if not tv_detail:
return None, {}
season_map = {}
for info in tv_detail.get("season_info") or []:
if not isinstance(info, dict):
continue
category_id = info.get("category_id")
season_num = info.get("season_num")
if category_id and isinstance(season_num, int):
season_map[str(category_id)] = season_num
season_episodes: Dict[int, list] = {}
for ep in tv_detail.get("tv_info") or []:
if not isinstance(ep, dict):
continue
episode = ep.get("episode")
if not isinstance(episode, int):
continue
season_num = season_map.get(str(ep.get("category_id")), 1)
if season is not None and season_num != season:
continue
season_episodes.setdefault(season_num, []).append(episode)
for season_num in list(season_episodes.keys()):
season_episodes[season_num] = sorted(set(season_episodes[season_num]))
return item_id, season_episodes
def refresh_root_library(self, scan_mode: Optional[Union[str, int]] = None) -> Optional[bool]:
if not self.is_authenticated() or not self._api:
return None
if not self._libraries:
self.get_librarys()
scan_type = (
self.__resolve_scan_type(scan_mode=scan_mode)
if scan_mode is not None
else self._scan_type
)
results = []
for lib_id in self._libraries.keys():
logger.info(
f"刷新媒体库:{self._libraries[lib_id].get('name')}(扫描模式: {scan_type}"
)
results.append(self.__scan_library(library_id=lib_id, scan_type=scan_type))
return all(results) if results else True
def __match_library_id_by_path(self, path: Optional[Path]) -> Optional[str]:
if path is None:
return None
path_str = self.__normalize_dir_path(path)
if not self._library_paths:
self.get_librarys()
for lib_id, lib_path in self._library_paths.items():
if self.__is_subpath(path_str, lib_path):
return lib_id
return None
def refresh_library_by_items(
self,
items: List[schemas.RefreshMediaItem],
scan_mode: Optional[Union[str, int]] = None,
) -> Optional[bool]:
if not self.is_authenticated() or not self._api:
return None
scan_type = (
self.__resolve_scan_type(scan_mode=scan_mode)
if scan_mode is not None
else self._scan_type
)
library_ids = set()
for item in items:
library_id = self.__match_library_id_by_path(item.target_path)
if library_id is None:
return self.refresh_root_library(scan_mode=scan_mode)
library_ids.add(library_id)
for library_id in library_ids:
lib_name = self._libraries.get(library_id, {}).get("name", library_id)
logger.info(f"刷新媒体库:{lib_name}(扫描模式: {scan_type})")
if not self.__scan_library(library_id=library_id, scan_type=scan_type):
return self.refresh_root_library(scan_mode=scan_mode)
return True
@staticmethod
def get_webhook_message(body: Any) -> Optional[schemas.WebhookEventInfo]:
return None
def get_iteminfo(self, itemid: str) -> Optional[schemas.MediaServerItem]:
if not self.is_authenticated() or not self._api or not itemid:
return None
info = self._api.recently_played_info(itemid)
if not info:
return None
video_info = info.get("video_info") if isinstance(info.get("video_info"), dict) else None
if not video_info or not video_info.get("ug_video_info_id"):
return None
return self.__build_media_server_item(video_info, info.get("play_status"))
def _iter_library_videos(self, root_path: str, page_size: int = 100):
if not self._api or not root_path:
return
queue = deque([root_path])
visited: set[str] = set()
max_paths = 20000
while queue and len(visited) < max_paths:
current_path = queue.popleft()
if current_path in visited:
continue
visited.add(current_path)
page = 1
while True:
data = self._api.poster_wall_get_folder(
path=current_path,
page=page,
page_size=page_size,
sort_type=1,
order_type=1,
)
if not data:
break
for video in data.get("video_arr") or []:
if isinstance(video, dict):
yield video
for folder in data.get("folder_arr") or []:
if not isinstance(folder, dict):
continue
sub_path = folder.get("path")
if sub_path and sub_path not in visited:
queue.append(str(sub_path))
if data.get("is_last_page"):
break
page += 1
def get_items(
self,
parent: Union[str, int],
start_index: Optional[int] = 0,
limit: Optional[int] = -1,
) -> Generator[schemas.MediaServerItem | None | Any, Any, None]:
if not self.is_authenticated() or not self._api:
return None
library_id = str(parent)
if not self._library_paths:
self.get_librarys()
root_path = self._library_paths.get(library_id)
if not root_path:
return None
skip = max(0, start_index or 0)
remain = -1 if limit in [None, -1] else max(0, limit)
for video in self._iter_library_videos(root_path=root_path):
video_type = video.get("type")
if video_type not in [1, 2]:
continue
if skip > 0:
skip -= 1
continue
item = self.__build_media_server_item(video)
if item:
yield item
if remain != -1:
remain -= 1
if remain <= 0:
break
return None
def get_play_url(self, item_id: str) -> Optional[str]:
if not self.is_authenticated() or not self._api:
return None
info = self._api.recently_played_info(item_id)
if not info:
return None
video_info = info.get("video_info") if isinstance(info.get("video_info"), dict) else None
if not video_info:
return None
return self.__build_play_url(
item_id=item_id,
video_type=video_info.get("type"),
media_lib_set_id=video_info.get("media_lib_set_id"),
)
def get_resume(self, num: Optional[int] = 12) -> Optional[List[schemas.MediaServerPlayItem]]:
if not self.is_authenticated() or not self._api:
return None
page_size = max(1, num or 12)
data = self._api.recently_played(page=1, page_size=page_size)
if not data:
return []
ret_resume = []
for item in data.get("video_arr") or []:
if len(ret_resume) == page_size:
break
if not isinstance(item, dict):
continue
video_info = item.get("video_info") if isinstance(item.get("video_info"), dict) else {}
library_id = str(video_info.get("media_lib_set_id") or "")
if self.__is_library_blocked(library_id):
continue
play_item = self.__build_play_item_from_wrapper(item)
if play_item:
ret_resume.append(play_item)
return ret_resume
def get_latest(self, num: int = 20) -> Optional[List[schemas.MediaServerPlayItem]]:
if not self.is_authenticated() or not self._api:
return None
page_size = max(1, num)
data = self._api.recently_updated(page=1, page_size=page_size)
if not data:
return []
latest = []
for item in data.get("video_arr") or []:
if len(latest) == page_size:
break
if not isinstance(item, dict):
continue
video_info = item.get("video_info") if isinstance(item.get("video_info"), dict) else {}
library_id = str(video_info.get("media_lib_set_id") or "")
if self.__is_library_blocked(library_id):
continue
play_item = self.__build_play_item_from_wrapper(item)
if play_item:
latest.append(play_item)
return latest
def get_latest_backdrops(self, num: int = 20, remote: bool = False) -> Optional[List[str]]:
if not self.is_authenticated() or not self._api:
return None
data = self._api.recently_updated(page=1, page_size=max(1, num))
if not data:
return []
images: List[str] = []
for item in data.get("video_arr") or []:
if len(images) == num:
break
if not isinstance(item, dict):
continue
video_info = item.get("video_info") if isinstance(item.get("video_info"), dict) else {}
library_id = str(video_info.get("media_lib_set_id") or "")
if self.__is_library_blocked(library_id):
continue
image = self.__resolve_image(video_info.get("backdrop_path")) or self.__resolve_image(
video_info.get("poster_path")
)
if image:
images.append(image)
return images
@staticmethod
def get_image_cookies(image_url: str):
# 绿联图片流接口依赖加密鉴权头,当前图片代理仅支持Cookie注入。
return None
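The `_iter_library_videos` generator above walks a library tree breadth-first: a `deque` of pending folder paths, a `visited` set to avoid revisiting, and paginated folder listings per path. A minimal sketch of the same traversal pattern, with pagination omitted and a hypothetical in-memory stand-in (`FAKE_FS`, `iter_videos`) in place of `poster_wall_get_folder`:

```python
from collections import deque

# In-memory stand-in for the folder API: path -> (videos, subfolder paths)
FAKE_FS = {
    "/media": (["a.mkv"], ["/media/tv"]),
    "/media/tv": (["s01e01.mkv", "s01e02.mkv"], []),
}

def iter_videos(root: str):
    """Breadth-first traversal with a visited set, mirroring _iter_library_videos."""
    queue = deque([root])
    visited = set()
    while queue:
        path = queue.popleft()
        if path in visited:
            continue
        visited.add(path)
        videos, folders = FAKE_FS.get(path, ([], []))
        yield from videos
        for sub in folders:
            if sub not in visited:
                queue.append(sub)

print(list(iter_videos("/media")))  # → ['a.mkv', 's01e01.mkv', 's01e02.mkv']
```

The real method additionally caps traversal at 20000 paths and pages through each folder until `is_last_page`.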

View File

@@ -29,3 +29,10 @@ class RateLimitExceededException(LimitException):
这个异常通常用于本地限流逻辑(例如 RateLimiter),当系统检测到函数调用频率过高时触发限流并抛出该异常。
"""
pass
class OperationInterrupted(KeyboardInterrupt):
"""
用于表示操作被中断
"""
pass
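`OperationInterrupted` deliberately subclasses `KeyboardInterrupt` (a `BaseException`) rather than `Exception`, so it escapes broad `except Exception` handlers in library code and can abort a task from deep inside a call stack. A self-contained demonstration of that design choice:

```python
class OperationInterrupted(KeyboardInterrupt):
    """用于表示操作被中断"""
    pass

def worker():
    try:
        raise OperationInterrupted("stop")
    except Exception:
        # Never reached: KeyboardInterrupt is not a subclass of Exception,
        # so a broad handler cannot swallow the interruption.
        return "swallowed"

try:
    worker()
    caught = False
except OperationInterrupted:
    caught = True

print(caught)  # → True
```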

View File

@@ -14,7 +14,7 @@ class ExistMediaInfo(BaseModel):
type: Optional[MediaType] = None
# 季
seasons: Optional[Dict[int, list]] = Field(default_factory=dict)
# 媒体服务器类型:plex、jellyfin、emby、trimemedia
# 媒体服务器类型:plex、jellyfin、emby、trimemedia、ugreen
server_type: Optional[str] = None
# 媒体服务器名称
server: Optional[str] = None

View File

@@ -114,6 +114,8 @@ class NotificationSwitch(BaseModel):
vocechat: Optional[bool] = False
# WebPush开关
webpush: Optional[bool] = False
# QQ开关
qq: Optional[bool] = False
class Subscription(BaseModel):
@@ -270,6 +272,15 @@ class ChannelCapabilityManager:
ChannelCapability.LINKS
},
fallback_enabled=True
),
MessageChannel.QQ: ChannelCapabilities(
channel=MessageChannel.QQ,
capabilities={
ChannelCapability.RICH_TEXT,
ChannelCapability.IMAGES,
ChannelCapability.LINKS
},
fallback_enabled=True
)
}

View File

@@ -9,6 +9,7 @@ class ServiceInfo:
"""
封装服务相关信息的数据类
"""
# 名称
name: Optional[str] = None
# 实例
@@ -25,9 +26,10 @@ class MediaServerConf(BaseModel):
"""
媒体服务器配置
"""
# 名称
name: Optional[str] = None
# 类型 emby/jellyfin/plex
# 类型 emby/jellyfin/plex/trimemedia/ugreen
type: Optional[str] = None
# 配置
config: Optional[dict] = Field(default_factory=dict)
@@ -41,9 +43,10 @@ class DownloaderConf(BaseModel):
"""
下载器配置
"""
# 名称
name: Optional[str] = None
# 类型 qbittorrent/transmission
# 类型 qbittorrent/transmission/rtorrent
type: Optional[str] = None
# 是否默认
default: Optional[bool] = False
@@ -59,9 +62,10 @@ class NotificationConf(BaseModel):
"""
通知配置
"""
# 名称
name: Optional[str] = None
# 类型 telegram/wechat/vocechat/synologychat/slack/webpush
# 类型 telegram/wechat/vocechat/synologychat/slack/webpush/qqbot
type: Optional[str] = None
# 配置
config: Optional[dict] = Field(default_factory=dict)
@@ -75,16 +79,18 @@ class NotificationSwitchConf(BaseModel):
"""
通知场景开关配置
"""
# 场景名称
type: str = None
# 通知范围 all/user/admin
action: Optional[str] = 'all'
action: Optional[str] = "all"
class StorageConf(BaseModel):
"""
存储配置
"""
# 类型 local/alipan/u115/rclone/alist
type: Optional[str] = None
# 名称
@@ -97,6 +103,7 @@ class TransferDirectoryConf(BaseModel):
"""
文件整理目录配置
"""
# 名称
name: Optional[str] = None
# 优先级
@@ -116,7 +123,7 @@ class TransferDirectoryConf(BaseModel):
# 监控方式 downloader/monitor,None为不监控
monitor_type: Optional[str] = None
# 监控模式 fast / compatibility
monitor_mode: Optional[str] = 'fast'
monitor_mode: Optional[str] = "fast"
# 整理方式 move/copy/link/softlink
transfer_type: Optional[str] = None
# 文件覆盖模式 always/size/never/latest

View File

@@ -219,6 +219,8 @@ class SystemConfigKey(Enum):
PluginInstallReport = "PluginInstallReport"
# 配置向导状态
SetupWizardState = "SetupWizardState"
# 绿联影视登录会话缓存
UgreenSessionCache = "UgreenSessionCache"
# 处理进度Key字典
@@ -285,6 +287,7 @@ class MessageChannel(Enum):
VoceChat = "VoceChat"
Web = "Web"
WebPush = "WebPush"
QQ = "QQ"
# 下载器类型
@@ -293,6 +296,8 @@ class DownloaderType(Enum):
Qbittorrent = "Qbittorrent"
# Transmission
Transmission = "Transmission"
# Rtorrent
Rtorrent = "Rtorrent"
# Aria2
# Aria2 = "Aria2"
@@ -307,6 +312,8 @@ class MediaServerType(Enum):
Plex = "Plex"
# 飞牛影视
TrimeMedia = "TrimeMedia"
# 绿联影视
Ugreen = "Ugreen"
# 识别器类型

View File

@@ -98,8 +98,14 @@ class ExponentialBackoffRateLimiter(BaseRateLimiter):
每次触发限流时,等待时间会成倍增加,直到达到最大等待时间
"""
def __init__(self, base_wait: float = 60.0, max_wait: float = 600.0, backoff_factor: float = 2.0,
source: str = "", enable_logging: bool = True):
def __init__(
self,
base_wait: float = 60.0,
max_wait: float = 600.0,
backoff_factor: float = 2.0,
source: str = "",
enable_logging: bool = True,
):
"""
初始化 ExponentialBackoffRateLimiter 实例
:param base_wait: 基础等待时间(秒),默认值为 60 秒(1 分钟)
@@ -156,7 +162,9 @@ class ExponentialBackoffRateLimiter(BaseRateLimiter):
current_time = time.time()
with self.lock:
self.next_allowed_time = current_time + self.current_wait
self.current_wait = min(self.current_wait * self.backoff_factor, self.max_wait)
self.current_wait = min(
self.current_wait * self.backoff_factor, self.max_wait
)
wait_time = self.next_allowed_time - current_time
self.log_warning(f"触发限流,将在 {wait_time:.2f} 秒后允许继续调用")
@@ -168,8 +176,13 @@ class WindowRateLimiter(BaseRateLimiter):
如果超过允许的最大调用次数,则限流直到窗口期结束
"""
def __init__(self, max_calls: int, window_seconds: float,
source: str = "", enable_logging: bool = True):
def __init__(
self,
max_calls: int,
window_seconds: float,
source: str = "",
enable_logging: bool = True,
):
"""
初始化 WindowRateLimiter 实例
:param max_calls: 在时间窗口内允许的最大调用次数
@@ -190,7 +203,10 @@ class WindowRateLimiter(BaseRateLimiter):
current_time = time.time()
with self.lock:
# 清理超出时间窗口的调用记录
while self.call_times and current_time - self.call_times[0] > self.window_seconds:
while (
self.call_times
and current_time - self.call_times[0] > self.window_seconds
):
self.call_times.popleft()
if len(self.call_times) < self.max_calls:
@@ -225,8 +241,12 @@ class CompositeRateLimiter(BaseRateLimiter):
当任意一个限流策略触发限流时,都会阻止调用
"""
def __init__(self, limiters: List[BaseRateLimiter], source: str = "", enable_logging: bool = True):
def __init__(
self,
limiters: List[BaseRateLimiter],
source: str = "",
enable_logging: bool = True,
):
"""
初始化 CompositeRateLimiter 实例
:param limiters: 要组合的限流器列表
@@ -263,7 +283,9 @@ class CompositeRateLimiter(BaseRateLimiter):
# 通用装饰器:自定义限流器实例
def rate_limit_handler(limiter: BaseRateLimiter, raise_on_limit: bool = False) -> Callable:
def rate_limit_handler(
limiter: BaseRateLimiter, raise_on_limit: bool = False
) -> Callable:
"""
通用装饰器,允许用户传递自定义的限流器实例,用于处理限流逻辑
该装饰器可灵活支持任意继承自 BaseRateLimiter 的限流器
@@ -344,8 +366,14 @@ def rate_limit_handler(limiter: BaseRateLimiter, raise_on_limit: bool = False) -
# 装饰器:指数退避限流
def rate_limit_exponential(base_wait: float = 60.0, max_wait: float = 600.0, backoff_factor: float = 2.0,
raise_on_limit: bool = False, source: str = "", enable_logging: bool = True) -> Callable:
def rate_limit_exponential(
base_wait: float = 60.0,
max_wait: float = 600.0,
backoff_factor: float = 2.0,
raise_on_limit: bool = False,
source: str = "",
enable_logging: bool = True,
) -> Callable:
"""
装饰器,用于应用指数退避限流策略
通过逐渐增加调用等待时间控制调用频率。每次触发限流时,等待时间会成倍增加,直到达到最大等待时间
@@ -359,14 +387,21 @@ def rate_limit_exponential(base_wait: float = 60.0, max_wait: float = 600.0, bac
:return: 装饰器函数
"""
# 实例化 ExponentialBackoffRateLimiter并传入相关参数
limiter = ExponentialBackoffRateLimiter(base_wait, max_wait, backoff_factor, source, enable_logging)
limiter = ExponentialBackoffRateLimiter(
base_wait, max_wait, backoff_factor, source, enable_logging
)
# 使用通用装饰器逻辑包装该限流器
return rate_limit_handler(limiter, raise_on_limit)
# 装饰器:时间窗口限流
def rate_limit_window(max_calls: int, window_seconds: float,
raise_on_limit: bool = False, source: str = "", enable_logging: bool = True) -> Callable:
def rate_limit_window(
max_calls: int,
window_seconds: float,
raise_on_limit: bool = False,
source: str = "",
enable_logging: bool = True,
) -> Callable:
"""
装饰器,用于应用时间窗口限流策略
在固定的时间窗口内限制调用次数,当调用次数超过最大值时,触发限流,直到时间窗口结束
@@ -407,3 +442,63 @@ class QpsRateLimiter:
self.next_call_time = max(now, self.next_call_time) + self.interval
if sleep_duration > 0:
time.sleep(sleep_duration)
class RateStats:
"""
请求速率统计:记录时间戳,计算 QPS / QPM / QPH
"""
def __init__(self, window_seconds: float = 7200, source: str = ""):
"""
:param window_seconds: 统计窗口(秒),默认 2 小时,用于计算 QPH
:param source: 日志来源标识
"""
self._window = window_seconds
self._source = source
self._lock = threading.Lock()
self._timestamps: deque = deque()
def record(self) -> None:
"""
记录一次请求
"""
t = time.time()
with self._lock:
self._timestamps.append(t)
while self._timestamps and t - self._timestamps[0] > self._window:
self._timestamps.popleft()
def _count_since(self, seconds: float) -> int:
t = time.time()
with self._lock:
return sum(1 for ts in self._timestamps if t - ts <= seconds)
def get_qps(self) -> float:
"""
最近 1 秒内请求数
"""
return self._count_since(1.0)
def get_qpm(self) -> float:
"""
最近 1 分钟内请求数
"""
return self._count_since(60.0)
def get_qph(self) -> float:
"""
最近 1 小时内请求数
"""
return self._count_since(3600.0)
def log_stats(self, level: str = "info") -> None:
"""
输出当前 QPS/QPM/QPH
"""
qps, qpm, qph = self.get_qps(), self.get_qpm(), self.get_qph()
msg = f"QPS={qps} QPM={qpm} QPH={qph}"
if self._source:
msg = f"[{self._source}] {msg}"
log_fn = getattr(logger, level, logger.info)
log_fn(msg)
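`RateStats` above keeps a single deque of timestamps, prunes entries older than the retention window on each `record()`, and counts entries within 1 s / 60 s / 3600 s to report QPS/QPM/QPH. A standalone sketch of the same sliding-window counting, with an injectable clock so the behavior can be shown deterministically (`WindowCounter` is a hypothetical name, not part of the patch):

```python
from collections import deque

class WindowCounter:
    """Sliding-window request counter mirroring RateStats, with an injectable clock."""
    def __init__(self, window: float, clock):
        self.window = window
        self.clock = clock
        self.times = deque()

    def record(self):
        t = self.clock()
        self.times.append(t)
        # Drop timestamps that fell out of the retention window
        while self.times and t - self.times[0] > self.window:
            self.times.popleft()

    def count_since(self, seconds: float) -> int:
        t = self.clock()
        return sum(1 for ts in self.times if t - ts <= seconds)

now = [0.0]
c = WindowCounter(window=3600, clock=lambda: now[0])
for _ in range(3):
    c.record()          # three requests at t=0
now[0] = 30.0
c.record()              # one request at t=30
now[0] = 90.0
print(c.count_since(60))    # → 1  (only the t=30 request is within the last minute)
print(c.count_since(3600))  # → 4
```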

View File

@@ -23,6 +23,17 @@ _special_domains = [
_version_map = {"stable": -1, "rc": -2, "beta": -3, "alpha": -4}
# 不符合的版本号
_other_version = -5
_max_media_title_words = 10
_min_media_title_length = 2
_non_media_title_pattern = re.compile(r"^#|^请[问帮你]|[?]$|^继续$")
_chat_intent_pattern = re.compile(r"帮我|请问|怎么|如何|为什么|可以|能否|推荐|介绍|谢谢|想看|找一下|搜一下")
_media_feature_pattern = re.compile(
r"\s*[0-9一二三四五六七八九十百零]+\s*[季集]|S\d{1,2}(?:E\d{1,4})?|E\d{1,4}|(?:19|20)\d{2}",
re.IGNORECASE
)
_media_separator_pattern = re.compile(r"[\s\-_.::·'\"()\[\]【】]+")
_media_sentence_punctuation_pattern = re.compile(r"[,。!?!?,;]")
_media_title_char_pattern = re.compile(r"[\u4e00-\u9fffA-Za-z]")
class StringUtils:
@@ -531,6 +542,31 @@ class StringUtils:
return chinese_count + english_count
@staticmethod
def is_media_title_like(text: str) -> bool:
"""
判断文本是否像影视剧名称
"""
if not text:
return False
text = re.sub(r'\s+', ' ', text).strip()
if not text:
return False
if _non_media_title_pattern.search(text) \
or StringUtils.count_words(text) > _max_media_title_words:
return False
if "://" in text or text.startswith("magnet:?"):
return False
if _chat_intent_pattern.search(text):
return False
if _media_sentence_punctuation_pattern.search(text):
return False
# 先移除季/集/年份等媒体特征,再移除分隔符,只保留核心名称用于最终判定
candidate = _media_feature_pattern.sub("", text)
candidate = _media_separator_pattern.sub("", candidate)
return len(candidate) >= _min_media_title_length and _media_title_char_pattern.search(candidate) is not None
@staticmethod
def split_text(text: str, max_length: int) -> Generator:
"""

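`is_media_title_like` above rejects chat-like input before treating text as a title: it bails on URLs/magnet links, chat-intent keywords, and sentence punctuation, then strips season/episode/year features and separators and requires that at least two CJK/Latin characters remain. A condensed standalone sketch of that pipeline (the `count_words` length cap is omitted here; `looks_like_title` is a hypothetical name):

```python
import re

_chat_intent = re.compile(r"帮我|请问|怎么|如何|为什么|可以|能否|推荐|介绍|谢谢|想看|找一下|搜一下")
_features = re.compile(
    r"\s*[0-9一二三四五六七八九十百零]+\s*[季集]|S\d{1,2}(?:E\d{1,4})?|E\d{1,4}|(?:19|20)\d{2}",
    re.IGNORECASE,
)
_separators = re.compile(r"[\s\-_.::·'\"()\[\]【】]+")
_sentence_punct = re.compile(r"[,。!?!?,;]")
_title_char = re.compile(r"[\u4e00-\u9fffA-Za-z]")

def looks_like_title(text: str) -> bool:
    """Condensed version of StringUtils.is_media_title_like."""
    text = re.sub(r"\s+", " ", text or "").strip()
    if not text or "://" in text or text.startswith("magnet:?"):
        return False
    if _chat_intent.search(text) or _sentence_punct.search(text):
        return False
    # Strip media features (season/episode/year), then separators; judge the core name
    core = _separators.sub("", _features.sub("", text))
    return len(core) >= 2 and _title_char.search(core) is not None

print(looks_like_title("流浪地球 2019"))          # → True
print(looks_like_title("帮我推荐一部好看的电影"))    # → False
```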
View File

@@ -166,10 +166,8 @@ class SystemUtils:
移动
"""
try:
# 当前目录改名
temp = src.replace(src.parent / dest.name)
# 移动到目标目录
shutil.move(temp, dest)
# 直接移动到目标路径,避免中间改名步骤触发目录监控
shutil.move(src, dest)
return 0, ""
except Exception as err:
return -1, str(err)
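The change above drops the intermediate same-directory rename (which produced an extra filesystem event that directory monitors could pick up mid-move) in favor of a single `shutil.move`. A runnable sketch of the patched behavior using a temporary directory:

```python
import shutil
import tempfile
from pathlib import Path

def move(src: Path, dest: Path) -> tuple[int, str]:
    """Single-step move, as in the patched SystemUtils.move."""
    try:
        # One rename (or copy+delete across filesystems); no intermediate rename
        shutil.move(src, dest)
        return 0, ""
    except Exception as err:
        return -1, str(err)

with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp) / "a" / "video.mkv"
    src.parent.mkdir()
    src.write_text("data")
    dest_dir = Path(tmp) / "b"
    dest_dir.mkdir()
    code, msg = move(src, dest_dir / "video.mkv")
    moved = (dest_dir / "video.mkv").exists() and not src.exists()

print(code, moved)  # → 0 True
```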

app/utils/ugreen_crypto.py (new file, +242 lines)
View File

@@ -0,0 +1,242 @@
from __future__ import annotations
import base64
import hashlib
import json
import os
import uuid
from dataclasses import dataclass
from typing import Any, Mapping, Sequence
from urllib.parse import quote, urlencode, urlsplit, urlunsplit
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
@dataclass
class UgreenEncryptedRequest:
url: str
headers: dict[str, str]
params: dict[str, str]
json: dict[str, Any] | None
aes_key: str
plain_query: str
class UgreenCrypto:
"""
绿联接口请求加解密工具。
"""
def __init__(
self,
public_key: str,
token: str | None = None,
client_id: str | None = None,
client_version: str | None = "76363",
ug_agent: str | None = "PC/WEB",
language: str = "zh-CN",
) -> None:
self.public_key_pem = self.normalize_public_key(public_key)
self.public_key = serialization.load_pem_public_key(
self.public_key_pem.encode("utf-8")
)
self.token = token
self.client_id = client_id
self.client_version = client_version
self.ug_agent = ug_agent
self.language = language
@staticmethod
def normalize_public_key(public_key: str) -> str:
key = (public_key or "").strip().strip('"').replace("\\n", "\n")
if "BEGIN" in key:
return key if key.endswith("\n") else f"{key}\n"
return (
"-----BEGIN RSA PUBLIC KEY-----\n"
f"{key}\n"
"-----END RSA PUBLIC KEY-----\n"
)
@staticmethod
def generate_aes_key() -> str:
return uuid.uuid4().hex
@staticmethod
def _flatten_query(prefix: str, value: Any) -> list[tuple[str, str]]:
pairs: list[tuple[str, str]] = []
if isinstance(value, Mapping):
for key, item in value.items():
next_prefix = f"{prefix}[{key}]" if prefix else str(key)
pairs.extend(UgreenCrypto._flatten_query(next_prefix, item))
return pairs
if isinstance(value, Sequence) and not isinstance(
value, (str, bytes, bytearray)
):
for item in value:
next_prefix = f"{prefix}[]"
pairs.extend(UgreenCrypto._flatten_query(next_prefix, item))
return pairs
if isinstance(value, bool):
pairs.append((prefix, "true" if value else "false"))
return pairs
if value is None:
pairs.append((prefix, ""))
return pairs
pairs.append((prefix, str(value)))
return pairs
@classmethod
def encode_query(cls, params: Mapping[str, Any] | None) -> str:
if not params:
return ""
pairs: list[tuple[str, str]] = []
for key, value in params.items():
pairs.extend(cls._flatten_query(str(key), value))
return urlencode(pairs, doseq=False, quote_via=quote, safe="")
def rsa_encrypt_long(self, plaintext: str) -> str:
if not plaintext:
return ""
key_size = self.public_key.key_size // 8
max_chunk = key_size - 11
encrypted_chunks: list[bytes] = []
raw = plaintext.encode("utf-8")
for start in range(0, len(raw), max_chunk):
chunk = raw[start : start + max_chunk]
encrypted_chunks.append(
self.public_key.encrypt(chunk, padding.PKCS1v15())
)
return base64.b64encode(b"".join(encrypted_chunks)).decode("utf-8")
@staticmethod
def aes_gcm_encrypt(plaintext: str, aes_key: str) -> str:
iv = os.urandom(12)
cipher = AESGCM(aes_key.encode("utf-8"))
encrypted = cipher.encrypt(iv, plaintext.encode("utf-8"), None)
# encrypt 返回 ciphertext + tag
return base64.b64encode(iv + encrypted).decode("utf-8")
@staticmethod
def aes_gcm_decrypt(payload_b64: str, aes_key: str) -> str:
raw = base64.b64decode(payload_b64)
iv = raw[:12]
encrypted = raw[12:]
cipher = AESGCM(aes_key.encode("utf-8"))
plain = cipher.decrypt(iv, encrypted, None)
return plain.decode("utf-8")
@staticmethod
def build_security_key(token: str) -> str:
return hashlib.md5(token.encode("utf-8")).hexdigest()
@staticmethod
def _normalize_body(data: Any) -> str:
if isinstance(data, str):
return data
if isinstance(data, (bytes, bytearray)):
return bytes(data).decode("utf-8")
return json.dumps(data, ensure_ascii=False, separators=(",", ":"))
def encrypt_body(self, data: Any, aes_key: str) -> dict[str, str]:
plain = self._normalize_body(data)
return {
"encrypt_req_body": self.aes_gcm_encrypt(plain, aes_key),
"req_body_sha256": hashlib.sha256(plain.encode("utf-8")).hexdigest(),
}
def build_headers(
self,
aes_key: str,
token: str | None = None,
extra_headers: Mapping[str, str] | None = None,
encrypt_token: bool = True,
) -> dict[str, str]:
token_value = token if token is not None else self.token
headers: dict[str, str] = dict(extra_headers or {})
if self.client_id:
headers.setdefault("Client-Id", self.client_id)
if self.client_version:
headers.setdefault("Client-Version", self.client_version)
if self.ug_agent:
headers.setdefault("UG-Agent", self.ug_agent)
headers.setdefault("X-Specify-Language", self.language)
headers.setdefault("Accept", "application/json, text/plain, */*")
if token_value:
headers["X-Ugreen-Security-Key"] = self.build_security_key(token_value)
headers["X-Ugreen-Security-Code"] = self.rsa_encrypt_long(aes_key)
headers["X-Ugreen-Token"] = (
self.rsa_encrypt_long(token_value) if encrypt_token else token_value
)
return headers
def build_encrypted_request(
self,
url: str,
method: str = "GET",
params: Mapping[str, Any] | None = None,
data: Any | None = None,
extra_headers: Mapping[str, str] | None = None,
token: str | None = None,
encrypt_token: bool = True,
encrypt_body: bool = True,
) -> UgreenEncryptedRequest:
"""
构建绿联加密请求。
关键点:
- 传入的是明文 `params`
- 方法内部会将其序列化并加密成 `encrypt_query`
- 业务侧不需要、也不应该手工拼接 `encrypt_query`。
"""
parsed = urlsplit(url)
clean_url = urlunsplit(
(parsed.scheme, parsed.netloc, parsed.path, "", parsed.fragment)
)
url_query_plain = parsed.query
input_query_plain = self.encode_query(params)
plain_query = "&".join(filter(None, [url_query_plain, input_query_plain]))
aes_key = self.generate_aes_key()
encrypted_query = self.aes_gcm_encrypt(plain_query, aes_key)
req_json = None
if data is not None:
req_json = self.encrypt_body(data, aes_key) if encrypt_body else data
headers = self.build_headers(
aes_key=aes_key,
token=token,
extra_headers=extra_headers,
encrypt_token=encrypt_token,
)
if req_json is not None:
headers.setdefault("Content-Type", "application/json")
_ = method # 保留参数,便于上层统一调用
return UgreenEncryptedRequest(
url=clean_url,
headers=headers,
# 绿联接口约定:查询参数统一透传为 encrypt_query
params={"encrypt_query": encrypted_query},
json=req_json,
aes_key=aes_key,
plain_query=plain_query,
)
def decrypt_response(self, response_json: Any, aes_key: str) -> Any:
if not isinstance(response_json, Mapping):
return response_json
encrypted = response_json.get("encrypt_resp_body")
if not encrypted:
return response_json
plain = self.aes_gcm_decrypt(str(encrypted), aes_key)
try:
return json.loads(plain)
except json.JSONDecodeError:
return plain
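Before encryption, `encode_query` flattens nested mappings and sequences into PHP-style bracketed keys (`opt[raw]`, `ids[]`) and percent-encodes everything with `safe=""`. A standalone sketch of just that flattening step, reimplemented with the stdlib so it can be run without the `cryptography` dependency (`flatten`/`encode_query` here mirror `_flatten_query`/`encode_query`):

```python
from typing import Any, Mapping, Sequence
from urllib.parse import quote, urlencode

def flatten(prefix: str, value: Any) -> list[tuple[str, str]]:
    """Mirror of UgreenCrypto._flatten_query: dicts -> key[sub], lists -> key[]."""
    if isinstance(value, Mapping):
        pairs = []
        for k, v in value.items():
            pairs.extend(flatten(f"{prefix}[{k}]" if prefix else str(k), v))
        return pairs
    if isinstance(value, Sequence) and not isinstance(value, (str, bytes, bytearray)):
        pairs = []
        for item in value:
            pairs.extend(flatten(f"{prefix}[]", item))
        return pairs
    if isinstance(value, bool):
        return [(prefix, "true" if value else "false")]
    if value is None:
        return [(prefix, "")]
    return [(prefix, str(value))]

def encode_query(params: Mapping[str, Any]) -> str:
    pairs = []
    for k, v in params.items():
        pairs.extend(flatten(str(k), v))
    # quote_via=quote with safe="" percent-encodes brackets in keys as well
    return urlencode(pairs, quote_via=quote, safe="")

print(encode_query({"page": 1, "ids": [3, 4], "opt": {"raw": True}}))
# → page=1&ids%5B%5D=3&ids%5B%5D=4&opt%5Braw%5D=true
```

In the full flow, `build_encrypted_request` AES-GCM-encrypts this plaintext query into `encrypt_query` and RSA-encrypts the per-request AES key into the security headers.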

View File

@@ -13,7 +13,7 @@ class FilterMediasParams(ActionParams):
过滤媒体数据参数
"""
type: Optional[str] = Field(default=None, description="媒体类型 (电影/电视剧)")
vote: Optional[int] = Field(default=0, description="评分")
vote: Optional[float] = Field(default=None, description="评分(支持小数)")
year: Optional[str] = Field(default=None, description="年份")
@@ -55,7 +55,7 @@ class FilterMediasAction(BaseAction):
break
if params.type and media.type != params.type:
continue
if params.vote and media.vote_average < params.vote:
if params.vote is not None and media.vote_average < params.vote:
continue
if params.year and media.year != params.year:
continue
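The `is not None` fix matters because the old truthiness check `if params.vote` silently disabled the filter for a falsy threshold, while the new check only treats `None` as "no filter" and now also accepts decimal thresholds. A small contrast (hypothetical helper names; the negative average is contrived purely to expose the truthiness difference):

```python
from typing import Optional

def passes_old(vote_filter: Optional[float], vote_average: float) -> bool:
    # Old check: a falsy threshold (0 / 0.0) disables the filter entirely
    if vote_filter and vote_average < vote_filter:
        return False
    return True

def passes_new(vote_filter: Optional[float], vote_average: float) -> bool:
    # New check: only None disables the filter; 0 is a real threshold
    if vote_filter is not None and vote_average < vote_filter:
        return False
    return True

print(passes_new(7.5, 7.2))                      # → False (decimal threshold applies)
print(passes_new(None, 7.2))                     # → True  (no filter)
print(passes_old(0, -1.0), passes_new(0, -1.0))  # → True False
```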

View File

@@ -1 +0,0 @@
# MoviePilot V2版本大部分设置可通过后台设置界面进行配置,仅个别配置需要通过环境变量或本配置文件配置,所有可配置项参考:https://wiki.movie-pilot.org/zh/configuration

View File

@@ -13,7 +13,7 @@ http {
server unix:/var/run/docker.sock fail_timeout=0;
}
server {
listen 38379;
listen 127.0.0.1:38379;
server_name localhost;
access_log /dev/stdout combined;

View File

@@ -92,3 +92,4 @@ langchain-experimental~=0.3.4
openai~=1.108.2
google-generativeai~=0.8.5
ddgs~=9.10.0
websocket-client~=1.8.0

View File

@@ -235,6 +235,14 @@ release_group_cases = [
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-iLoveTV", "group": "iLoveTV"}
]
},
# panda 组
{
"domain": "panda",
"groups": [
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-Panda", "group": "Panda"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-AilMWeb", "group": "AilMWeb"}
]
},
# piggo 组
{
"domain": "piggo",

View File

@@ -1,5 +1,5 @@
meta_cases = [{
"title": "The Long Season 2017 2160p WEB-DL H265 AAC-XXX",
"title": "The Long Season 2017 2160p WEB-DL H265 120FPS AAC-XXX",
"subtitle": "",
"target": {
"type": "未知",
@@ -12,10 +12,11 @@ meta_cases = [{
"restype": "WEB-DL",
"pix": "2160p",
"video_codec": "H265",
"audio_codec": "AAC"
"audio_codec": "AAC",
"fps": 120
}
}, {
"title": "Cherry Season S01 2014 2160p WEB-DL H265 AAC-XXX",
"title": "Cherry Season S01 2014 2160p 60fps WEB-DL H265 AAC-XXX",
"subtitle": "",
"target": {
"type": "电视剧",
@@ -28,7 +29,8 @@ meta_cases = [{
"restype": "WEB-DL",
"pix": "2160p",
"video_codec": "H265",
"audio_codec": "AAC"
"audio_codec": "AAC",
"fps": 60
}
}, {
"title": "【爪爪字幕组】★7月新番[欢迎来到实力至上主义的教室 第二季/Youkoso Jitsuryoku Shijou Shugi no Kyoushitsu e S2][11][1080p][HEVC][GB][MP4][招募翻译校对]",
@@ -44,7 +46,8 @@ meta_cases = [{
"restype": "",
"pix": "1080p",
"video_codec": "HEVC",
"audio_codec": ""
"audio_codec": "",
"fps": None
}
}, {
"title": "National.Parks.Adventure.AKA.America.Wild:.National.Parks.Adventure.3D.2016.1080p.Blu-ray.AVC.TrueHD.7.1",
@@ -60,7 +63,8 @@ meta_cases = [{
"restype": "BluRay 3D",
"pix": "1080p",
"video_codec": "AVC",
"audio_codec": "TrueHD 7.1"
"audio_codec": "TrueHD 7.1",
"fps": None
}
}, {
"title": "[秋叶原冥途战争][Akiba Maid Sensou][2022][WEB-DL][1080][TV Series][第01话][LeagueWEB]",
@@ -76,7 +80,8 @@ meta_cases = [{
"restype": "",
"pix": "1080p",
"video_codec": "",
"audio_codec": ""
"audio_codec": "",
"fps": None
}
}, {
"title": "哆啦A梦大雄的宇宙小战争 2021 (2022) - 1080p.mp4",
@@ -92,7 +97,8 @@ meta_cases = [{
"restype": "",
"pix": "1080p",
"video_codec": "",
"audio_codec": ""
"audio_codec": "",
"fps": None
}
}, {
"title": "新精武门1991 (1991).mkv",
@@ -108,7 +114,8 @@ meta_cases = [{
"restype": "",
"pix": "",
"video_codec": "",
"audio_codec": ""
"audio_codec": "",
"fps": None
}
}, {
"title": "24 S01 1080p WEB-DL AAC2.0 H.264-BTN",
@@ -124,7 +131,8 @@ meta_cases = [{
"restype": "WEB-DL",
"pix": "1080p",
"video_codec": "H264",
"audio_codec": "AAC 2.0"
"audio_codec": "AAC 2.0",
"fps": None
}
}, {
"title": "Qi Refining for 3000 Years S01E06 2022 1080p B-Blobal WEB-DL X264 AAC-AnimeS@AdWeb",
@@ -140,7 +148,8 @@ meta_cases = [{
"restype": "WEB-DL",
"pix": "1080p",
"video_codec": "x264",
"audio_codec": "AAC"
"audio_codec": "AAC",
"fps": None
}
}, {
"title": "Noumin Kanren no Skill Bakka Agetetara Naze ka Tsuyoku Natta S01E02 2022 1080p B-Global WEB-DL X264 AAC-AnimeS@ADWeb[2022年10月新番]",
@@ -156,7 +165,8 @@ meta_cases = [{
"restype": "WEB-DL",
"pix": "1080p",
"video_codec": "x264",
"audio_codec": "AAC"
"audio_codec": "AAC",
"fps": None
}
}, {
"title": "dou luo da lu S01E229 2018 2160p WEB-DL H265 AAC-ADWeb[[国漫连载] 斗罗大陆 第229集 4k | 国语中字]",
@@ -172,7 +182,8 @@ meta_cases = [{
"restype": "WEB-DL",
"pix": "2160p",
"video_codec": "H265",
"audio_codec": "AAC"
"audio_codec": "AAC",
"fps": None
}
}, {
"title": "Thor Love and Thunder (2022) [1080p] [WEBRip] [5.1]",
@@ -188,7 +199,8 @@ meta_cases = [{
"restype": "",
"pix": "1080p",
"video_codec": "",
"audio_codec": "5.1"
"audio_codec": "5.1",
"fps": None
}
}, {
"title": "[Animations(动画片)][[诛仙][Jade Dynasty][2022][WEB-DL][2160][TV Series][TV 08][LeagueWEB]][诛仙/诛仙动画 第一季 第08集 | 类型:动画 [国语中字]][680.12 MB]",
@@ -204,7 +216,8 @@ meta_cases = [{
"restype": "",
"pix": "",
"video_codec": "",
"audio_codec": ""
"audio_codec": "",
"fps": None
}
}, {
"title": "钢铁侠2 (2010) 1080p AC3.mp4",
@@ -220,7 +233,8 @@ meta_cases = [{
"restype": "",
"pix": "1080p",
"video_codec": "",
"audio_codec": "AC3"
"audio_codec": "AC3",
"fps": None
}
}, {
"title": "Wonder Woman 1984 2020 BluRay 1080p Atmos TrueHD 7.1 X264-EPiC",
@@ -236,7 +250,8 @@ meta_cases = [{
"restype": "BluRay",
"pix": "1080p",
"video_codec": "x264",
"audio_codec": "Atmos TrueHD 7.1"
"audio_codec": "Atmos TrueHD 7.1",
"fps": None
}
}, {
"title": "9-1-1 - S04E03 - Future Tense WEBDL-1080p.mp4",
@@ -252,7 +267,8 @@ meta_cases = [{
"restype": "WEB-DL",
"pix": "1080p",
"video_codec": "",
"audio_codec": ""
"audio_codec": "",
"fps": None
}
}, {
"title": "【幻月字幕组】【22年日剧】【据幸存的六人所说】【04】【1080P】【中日双语】",
@@ -268,7 +284,8 @@ meta_cases = [{
"restype": "",
"pix": "1080p",
"video_codec": "",
"audio_codec": ""
"audio_codec": "",
"fps": None
}
}, {
"title": "【爪爪字幕组】★7月新番[即使如此依旧步步进逼/Soredemo Ayumu wa Yosetekuru][09][1080p][HEVC][GB][MP4][招募翻译校对]",
@@ -284,7 +301,8 @@ meta_cases = [{
"restype": "",
"pix": "1080p",
"video_codec": "HEVC",
"audio_codec": ""
"audio_codec": "",
"fps": None
}
}, {
"title": "[猎户不鸽发布组] 不死者之王 第四季 OVERLORD Ⅳ [02] [1080p] [简中内封] [2022年7月番]",
@@ -300,7 +318,8 @@ meta_cases = [{
"restype": "",
"pix": "1080p",
"video_codec": "",
"audio_codec": ""
"audio_codec": "",
"fps": None
}
}, {
"title": "[GM-Team][国漫][寻剑 第1季][Sword Quest Season 1][2002][02][AVC][GB][1080P]",
@@ -316,7 +335,8 @@ meta_cases = [{
"restype": "",
"pix": "1080p",
"video_codec": "AVC",
"audio_codec": ""
"audio_codec": "",
"fps": None
}
}, {
"title": " [猎户不鸽发布组] 组长女儿与照料专员 / 组长女儿与保姆 Kumichou Musume to Sewagakari [09] [1080p+] [简中内嵌] [2022年7月番]",
@@ -332,7 +352,8 @@ meta_cases = [{
"restype": "",
"pix": "1080p",
"video_codec": "",
"audio_codec": ""
"audio_codec": "",
"fps": None
}
}, {
"title": "Nande Koko ni Sensei ga!? 2019 Blu-ray Remux 1080p AVC LPCM-7³ ACG",
@@ -348,7 +369,8 @@ meta_cases = [{
"restype": "BluRay REMUX",
"pix": "1080p",
"video_codec": "AVC",
"audio_codec": "LPCM 7³"
"audio_codec": "LPCM 7³",
"fps": None
}
}, {
"title": "30.Rock.S02E01.1080p.UHD.BluRay.X264-BORDURE.mkv",
@@ -364,7 +386,8 @@ meta_cases = [{
"restype": "UHD BluRay",
"pix": "1080p",
"video_codec": "x264",
"audio_codec": ""
"audio_codec": "",
"fps": None
}
}, {
"title": "[Gal to Kyouryuu][02][BDRIP][1080P][H264_FLAC].mkv",
@@ -380,7 +403,8 @@ meta_cases = [{
"restype": "",
"pix": "1080p",
"video_codec": "H264",
"audio_codec": "FLAC"
"audio_codec": "FLAC",
"fps": None
}
}, {
"title": "[AI-Raws] 逆境無頼カイジ #13 (BD HEVC 1920x1080 yuv444p10le FLAC)[7CFEE642].mkv",
@@ -396,7 +420,8 @@ meta_cases = [{
"restype": "BD",
"pix": "1080p",
"video_codec": "HEVC",
"audio_codec": "FLAC"
"audio_codec": "FLAC",
"fps": None
}
}, {
"title": "Mr. Robot - S02E06 - eps2.4_m4ster-s1ave.aes SDTV.mp4",
@@ -412,7 +437,8 @@ meta_cases = [{
"restype": "",
"pix": "",
"video_codec": "",
"audio_codec": ""
"audio_codec": "",
"fps": None
}
}, {
"title": "[神印王座][Throne of Seal][2022][WEB-DL][2160][TV Series][TV 22][LeagueWEB] 神印王座 第一季 第22集 | 类型:动画 [国语中字][967.44 MB]",
@@ -428,7 +454,8 @@ meta_cases = [{
"restype": "",
"pix": "",
"video_codec": "",
"audio_codec": ""
"audio_codec": "",
"fps": None
}
}, {
"title": "S02E1000.mkv",
@@ -444,7 +471,8 @@ meta_cases = [{
"restype": "",
"pix": "",
"video_codec": "",
"audio_codec": ""
"audio_codec": "",
"fps": None
}
}, {
"title": "西部世界 12.mkv",
@@ -460,7 +488,8 @@ meta_cases = [{
"restype": "",
"pix": "",
"video_codec": "",
"audio_codec": ""
"audio_codec": "",
"fps": None
}
}, {
"title": "[ANi] OVERLORD 第四季 - 04 [1080P][Baha][WEB-DL][AAC AVC][CHT].mp4",
@@ -476,7 +505,8 @@ meta_cases = [{
"restype": "",
"pix": "1080p",
"video_codec": "AVC",
"audio_codec": "AAC"
"audio_codec": "AAC",
"fps": None
}
}, {
"title": "[SweetSub&LoliHouse] Made in Abyss S2 - 03v2 [WebRip 1080p HEVC-10bit AAC ASSx2].mkv",
@@ -492,7 +522,8 @@ meta_cases = [{
"restype": "",
"pix": "1080p",
"video_codec": "",
"audio_codec": "AAC"
"audio_codec": "AAC",
"fps": None
}
}, {
"title": "[GM-Team][国漫][斗破苍穹 第5季][Fights Break Sphere V][2022][05][HEVC][GB][4K]",
@@ -508,7 +539,8 @@ meta_cases = [{
"restype": "",
"pix": "2160p",
"video_codec": "HEVC",
"audio_codec": ""
"audio_codec": "",
"fps": None
}
}, {
"title": "Ousama Ranking S01E02-[1080p][BDRIP][X265.FLAC].mkv",
@@ -524,7 +556,8 @@ meta_cases = [{
"restype": "BDRIP",
"pix": "1080p",
"video_codec": "x265",
"audio_codec": "FLAC"
"audio_codec": "FLAC",
"fps": None
}
}, {
"title": "[Nekomoe kissaten&LoliHouse] Soredemo Ayumu wa Yosetekuru - 01v2 [WebRip 1080p HEVC-10bit EAC3 ASSx2].mkv",
@@ -540,7 +573,8 @@ meta_cases = [{
"restype": "",
"pix": "1080p",
"video_codec": "",
"audio_codec": "EAC3"
"audio_codec": "EAC3",
"fps": None
}
}, {
"title": "[喵萌奶茶屋&LoliHouse] 金装的薇尔梅 / Kinsou no Vermeil - 01 [WebRip 1080p HEVC-10bit AAC][简繁内封字幕]",
@@ -556,7 +590,8 @@ meta_cases = [{
"restype": "",
"pix": "1080p",
"video_codec": "",
-"audio_codec": "AAC"
+"audio_codec": "AAC",
+"fps": None
}
}, {
"title": "Hataraku.Maou-sama.S02E05.2022.1080p.CR.WEB-DL.X264.AAC-ADWeb.mkv",
@@ -572,7 +607,8 @@ meta_cases = [{
"restype": "WEB-DL",
"pix": "1080p",
"video_codec": "x264",
-"audio_codec": "AAC"
+"audio_codec": "AAC",
+"fps": None
}
}, {
"title": "The Witch Part 2The Other One 2022 1080p WEB-DL AAC5.1 H264-tG1R0",
@@ -588,7 +624,8 @@ meta_cases = [{
"restype": "WEB-DL",
"pix": "1080p",
"video_codec": "H264",
-"audio_codec": "AAC 5.1"
+"audio_codec": "AAC 5.1",
+"fps": None
}
}, {
"title": "一夜新娘 - S02E07 - 第 7 集.mp4",
@@ -604,7 +641,8 @@ meta_cases = [{
"restype": "",
"pix": "",
"video_codec": "",
-"audio_codec": ""
+"audio_codec": "",
+"fps": None
}
}, {
"title": "[ANi] 處刑少女的生存之道 - 07 [1080P][Baha][WEB-DL][AAC AVC][CHT].mp4",
@@ -620,7 +658,8 @@ meta_cases = [{
"restype": "",
"pix": "1080p",
"video_codec": "AVC",
-"audio_codec": "AAC"
+"audio_codec": "AAC",
+"fps": None
}
}, {
"title": "Stand-up.Comedy.S01E01.PartA.2022.1080p.WEB-DL.H264.AAC-TJUPT.mp4",
@@ -636,7 +675,8 @@ meta_cases = [{
"restype": "WEB-DL",
"pix": "1080p",
"video_codec": "H264",
-"audio_codec": "AAC"
+"audio_codec": "AAC",
+"fps": None
}
}, {
"title": "教父3.The.Godfather.Part.III.1990.1080p.NF.WEBRip.H264.DDP5.1-PTerWEB.mkv",
@@ -652,7 +692,8 @@ meta_cases = [{
"restype": "WEBRip",
"pix": "1080p",
"video_codec": "H264",
-"audio_codec": "DDP 5.1"
+"audio_codec": "DDP 5.1",
+"fps": None
}
}, {
"title": "A.Quiet.Place.Part.II.2020.1080p.UHD.BluRay.DD+7.1.DoVi.X265-PuTao",
@@ -668,7 +709,8 @@ meta_cases = [{
"restype": "UHD BluRay DoVi",
"pix": "1080p",
"video_codec": "x265",
-"audio_codec": "DD+ 7.1"
+"audio_codec": "DD+ 7.1",
+"fps": None
}
}, {
"title": "Childhood.In.A.Capsule.S01E16.2022.1080p.KKTV.WEB-DL.X264.AAC-ADWeb.mkv",
@@ -684,7 +726,8 @@ meta_cases = [{
"restype": "WEB-DL",
"pix": "1080p",
"video_codec": "x264",
-"audio_codec": "AAC"
+"audio_codec": "AAC",
+"fps": None
}
}, {
"title": "[桜都字幕组] 异世界归来的舅舅 / Isekai Ojisan [01][1080p][简体内嵌]",
@@ -700,7 +743,8 @@ meta_cases = [{
"restype": "",
"pix": "1080p",
"video_codec": "",
-"audio_codec": ""
+"audio_codec": "",
+"fps": None
}
}, {
"title": "【喵萌奶茶屋】★04月新番★[夏日重現/Summer Time Rendering][15][720p][繁日雙語][招募翻譯片源]",
@@ -716,7 +760,8 @@ meta_cases = [{
"restype": "",
"pix": "720p",
"video_codec": "",
-"audio_codec": ""
+"audio_codec": "",
+"fps": None
}
}, {
"title": "[NC-Raws] 打工吧!魔王大人 第二季 / Hataraku Maou-sama!! - 02 (B-Global 1920x1080 HEVC AAC MKV)",
@@ -732,7 +777,8 @@ meta_cases = [{
"restype": "",
"pix": "1080p",
"video_codec": "HEVC",
-"audio_codec": "AAC"
+"audio_codec": "AAC",
+"fps": None
}
}, {
"title": "The Witch Part 2 The Other One 2022 1080p WEB-DL AAC5.1 H.264-tG1R0",
@@ -748,7 +794,8 @@ meta_cases = [{
"restype": "WEB-DL",
"pix": "1080p",
"video_codec": "H264",
-"audio_codec": "AAC 5.1"
+"audio_codec": "AAC 5.1",
+"fps": None
}
}, {
"title": "The 355 2022 BluRay 1080p DTS-HD MA5.1 X265.10bit-BeiTai",
@@ -764,7 +811,8 @@ meta_cases = [{
"restype": "BluRay",
"pix": "1080p",
"video_codec": "x265 10bit",
-"audio_codec": "DTS-HD MA 5.1"
+"audio_codec": "DTS-HD MA 5.1",
+"fps": None
}
}, {
"title": "Sense8 s01-s02 2015-2017 1080P WEB-DL X265 AC3£cXcY@FRDS",
@@ -780,7 +828,8 @@ meta_cases = [{
"restype": "WEB-DL",
"pix": "1080p",
"video_codec": "x265",
-"audio_codec": ""
+"audio_codec": "",
+"fps": None
}
}, {
"title": "The Heart of Genius S01 13-14 2022 1080p WEB-DL H264 AAC",
@@ -796,7 +845,8 @@ meta_cases = [{
"restype": "WEB-DL",
"pix": "1080p",
"video_codec": "H264",
-"audio_codec": "AAC"
+"audio_codec": "AAC",
+"fps": None
}
}, {
"title": "The Heart of Genius E13-14 2022 1080p WEB-DL H264 AAC",
@@ -812,7 +862,8 @@ meta_cases = [{
"restype": "WEB-DL",
"pix": "1080p",
"video_codec": "H264",
-"audio_codec": "AAC"
+"audio_codec": "AAC",
+"fps": None
}
}, {
"title": "2022.8.2.Twelve.Monkeys.1995.GBR.4K.REMASTERED.BluRay.1080p.X264.DTS [3.4 GB]",
@@ -828,7 +879,8 @@ meta_cases = [{
"restype": "BluRay",
"pix": "4k",
"video_codec": "x264",
-"audio_codec": "DTS"
+"audio_codec": "DTS",
+"fps": None
}
}, {
"title": "[NC-Raws] 王者天下 第四季 - 17 (Baha 1920x1080 AVC AAC MP4) [3B1AA7BB].mp4",
@@ -844,7 +896,8 @@ meta_cases = [{
"restype": "",
"pix": "1080p",
"video_codec": "AVC",
-"audio_codec": "AAC"
+"audio_codec": "AAC",
+"fps": None
}
}, {
"title": "Sense8 S2E1 2015-2017 1080P WEB-DL X265 AC3£cXcY@FRDS",
@@ -860,7 +913,8 @@ meta_cases = [{
"restype": "WEB-DL",
"pix": "1080p",
"video_codec": "x265",
-"audio_codec": ""
+"audio_codec": "",
+"fps": None
}
}, {
"title": "[xyx98]传颂之物/Utawarerumono/うたわれるもの[BDrip][1920x1080][TV 01-26 Fin][hevc-yuv420p10 flac_ac3][ENG PGS]",
@@ -876,7 +930,8 @@ meta_cases = [{
"restype": "",
"pix": "1080p",
"video_codec": "",
-"audio_codec": "flac"
+"audio_codec": "flac",
+"fps": None
}
}, {
"title": "[云歌字幕组][7月新番][欢迎来到实力至上主义的教室 第二季][01][X264 10bit][1080p][简体中文].mp4",
@@ -892,7 +947,8 @@ meta_cases = [{
"restype": "",
"pix": "1080p",
"video_codec": "X264",
-"audio_codec": ""
+"audio_codec": "",
+"fps": None
}
}, {
"title": "[诛仙][Jade Dynasty][2022][WEB-DL][2160][TV Series][TV 04][LeagueWEB]",
@@ -908,7 +964,8 @@ meta_cases = [{
"restype": "",
"pix": "",
"video_codec": "",
-"audio_codec": ""
+"audio_codec": "",
+"fps": None
}
}, {
"title": "Rick and Morty.S06E06.JuRicksic.Mort.1080p.HMAX.WEBRip.DD5.1.X264-NTb[rartv]",
@@ -924,7 +981,8 @@ meta_cases = [{
"restype": "WEBRip",
"pix": "1080p",
"video_codec": "x264",
-"audio_codec": "DD 5.1"
+"audio_codec": "DD 5.1",
+"fps": None
}
}, {
"title": "rick and Morty.S06E05.JuRicksic.Mort.1080p.HMAX.WEBRip.DD5.1.X264-NTb[rartv]",
@@ -940,7 +998,8 @@ meta_cases = [{
"restype": "WEBRip",
"pix": "1080p",
"video_codec": "x264",
-"audio_codec": "DD 5.1"
+"audio_codec": "DD 5.1",
+"fps": None
}
}, {
"title": "[Hall_of_C] 诛仙 Zhu Xian (Jade Dynasty) - Episode 19",
@@ -956,7 +1015,8 @@ meta_cases = [{
"restype": "",
"pix": "",
"video_codec": "",
-"audio_codec": ""
+"audio_codec": "",
+"fps": None
}
}, {
"title": "I Woke Up a Vampire S02 2023 2160p NF WEB-DL DDP5.1 Atmos H 265-HHWEB",
@@ -972,7 +1032,8 @@ meta_cases = [{
"restype": "WEB-DL",
"pix": "2160p",
"video_codec": "H265",
-"audio_codec": "DDP 5.1 Atmos"
+"audio_codec": "DDP 5.1 Atmos",
+"fps": None
}
}, {
"title": "Shadows of the Void S01 2024 1080p WEB-DL H264 AAC-HHWEB",
@@ -988,7 +1049,8 @@ meta_cases = [{
"restype": "WEB-DL",
"pix": "1080p",
"video_codec": "H264",
-"audio_codec": "AAC"
+"audio_codec": "AAC",
+"fps": None
}
}, {
"title": "【极影字幕社】★1月新番 Metallic Rouge/金属口红 第13话 GB 1080P MP4字幕社招人内详",
@@ -1004,7 +1066,8 @@ meta_cases = [{
"restype": "",
"pix": "1080p",
"video_codec": "",
-"audio_codec": ""
+"audio_codec": "",
+"fps": None
}
}, {
"title": "Mai Xiang S01 2019 2160p WEB-DL H.265 DDP2.0-HHWEB",
@@ -1020,7 +1083,8 @@ meta_cases = [{
"restype": "WEB-DL",
"pix": "2160p",
"video_codec": "H265",
-"audio_codec": "DDP 2.0"
+"audio_codec": "DDP 2.0",
+"fps": None
}
}, {
"path": "/volume1/电视剧/西部世界 第二季 (2016)/5.mkv",
@@ -1035,7 +1099,8 @@ meta_cases = [{
"restype": "",
"pix": "",
"video_codec": "",
-"audio_codec": ""
+"audio_codec": "",
+"fps": None
}
}, {
"path": "/movies/The Vampire Diaries (2009) [tmdbid=18165]/The.Vampire.Diaries.S01E01.1080p.mkv",
@@ -1051,7 +1116,8 @@ meta_cases = [{
"pix": "1080p",
"video_codec": "",
"audio_codec": "",
-"tmdbid": 18165
+"tmdbid": 18165,
+"fps": None
}
}, {
"path": "/movies/Inception (2010) [tmdbid-27205]/Inception.2010.1080p.mkv",
@@ -1067,7 +1133,40 @@ meta_cases = [{
"pix": "1080p",
"video_codec": "",
"audio_codec": "",
-"tmdbid": 27205
+"tmdbid": 27205,
+"fps": None
}
}, {
"path": "/movies/Breaking Bad (2008) [tmdb=1396]/Season 2/",
"target": {
"type": "电视剧",
"cn_name": "",
"en_name": "Breaking Bad",
"year": "2008",
"part": "",
"season": "S02",
"episode": "",
"restype": "",
"pix": "",
"video_codec": "",
"audio_codec": "",
"tmdbid": 1396
}
}, {
"path": "/movies/Breaking Bad (2008) [tmdb=1396]/S2/",
"target": {
"type": "电视剧",
"cn_name": "",
"en_name": "Breaking Bad",
"year": "2008",
"part": "",
"season": "S02",
"episode": "",
"restype": "",
"pix": "",
"video_codec": "",
"audio_codec": "",
"tmdbid": 1396
}
}, {
"path": "/movies/Breaking Bad (2008) [tmdb=1396]/Season 1/Breaking.Bad.S01E01.1080p.mkv",
@@ -1083,7 +1182,8 @@ meta_cases = [{
"pix": "1080p",
"video_codec": "",
"audio_codec": "",
-"tmdbid": 1396
+"tmdbid": 1396,
+"fps": None
}
}, {
"path": "/tv/Game of Thrones (2011) {tmdb=1399}/Season 1/Game.of.Thrones.S01E01.1080p.mkv",
@@ -1099,7 +1199,8 @@ meta_cases = [{
"pix": "1080p",
"video_codec": "",
"audio_codec": "",
-"tmdbid": 1399
+"tmdbid": 1399,
+"fps": None
}
}, {
"path": "/movies/Avatar (2009) {tmdb-19995}/Avatar.2009.1080p.mkv",
@@ -1115,7 +1216,8 @@ meta_cases = [{
"pix": "1080p",
"video_codec": "",
"audio_codec": "",
-"tmdbid": 19995
+"tmdbid": 19995,
+"fps": None
}
}, {
"path": "/movies/DouBan_IMDB.TOP250.Movies.Mixed.Collection.20240501.FRDS/为奴十二年.12.Years.a.Slave.2013.BluRay.1080p.x265.10bit.2Audio.MNHD-FRDS/12.Years.a.Slave.2013.BluRay.1080p.x265.10bit.2Audio.MNHD-FRDS.mkv",


@@ -0,0 +1,299 @@
from __future__ import annotations
import argparse
import base64
import getpass
import json
import os
import sys
import uuid
from typing import Any, Mapping
from urllib.parse import urlsplit, urlunsplit
# 兼容直接运行脚本:避免 app/utils 被放在 sys.path 首位导致标准库模块被同名文件遮蔽
if __name__ == "__main__" and __package__ is None:
script_dir = os.path.dirname(os.path.abspath(__file__))
project_root = os.path.abspath(os.path.join(script_dir, "..", ".."))
if script_dir in sys.path:
sys.path.remove(script_dir)
if project_root not in sys.path:
sys.path.insert(0, project_root)
import requests
from app.utils.ugreen_crypto import UgreenCrypto
class UgreenLoginError(Exception):
pass
def _normalize_base_url(raw: str) -> str:
value = (raw or "").strip()
if not value:
raise UgreenLoginError("服务器地址不能为空")
if not value.startswith(("http://", "https://")):
value = f"http://{value}"
parsed = urlsplit(value)
if not parsed.netloc:
raise UgreenLoginError(f"无效服务器地址: {raw}")
return urlunsplit((parsed.scheme, parsed.netloc, "", "", "")).rstrip("/")
def _json_or_raise(resp: requests.Response, stage: str) -> dict[str, Any]:
try:
data = resp.json()
except Exception as exc: # pragma: no cover - 网络异常路径
raise UgreenLoginError(
f"{stage} 返回非 JSON(HTTP {resp.status_code}),响应片段: {resp.text[:200]}"
) from exc
if not isinstance(data, dict):
raise UgreenLoginError(f"{stage} 返回格式异常: {type(data).__name__}")
return data
def _decode_public_key(raw: str) -> str:
value = (raw or "").strip()
if not value:
raise UgreenLoginError("未获取到公钥")
if "BEGIN" in value:
return value
try:
return base64.b64decode(value).decode("utf-8")
except Exception as exc:
raise UgreenLoginError("公钥解码失败") from exc
def _raise_if_failed(payload: Mapping[str, Any], stage: str) -> None:
if payload.get("code") == 200:
return
raise UgreenLoginError(
f"{stage}失败: code={payload.get('code')} msg={payload.get('msg')}"
)
def _build_common_headers(
client_id: str, client_version: str, language: str
) -> dict[str, str]:
return {
"Accept": "application/json, text/plain, */*",
"Client-Id": client_id,
"Client-Version": client_version,
"UG-Agent": "PC/WEB",
"X-Specify-Language": language,
}
def _login_and_get_access(
session: requests.Session,
base_url: str,
username: str,
password: str,
keepalive: bool,
headers: Mapping[str, str],
timeout: float,
verify_ssl: bool,
) -> tuple[str, str]:
check_resp = session.post(
f"{base_url}/ugreen/v1/verify/check",
json={"username": username},
headers=dict(headers),
timeout=timeout,
verify=verify_ssl,
)
check_json = _json_or_raise(check_resp, "获取登录公钥")
_raise_if_failed(check_json, "获取登录公钥")
rsa_token = (
check_resp.headers.get("x-rsa-token")
or check_resp.headers.get("X-Rsa-Token")
or check_json.get("xRsaToken")
or check_json.get("x-rsa-token")
)
if not rsa_token:
data = check_json.get("data")
if isinstance(data, Mapping):
rsa_token = data.get("xRsaToken") or data.get("x-rsa-token")
if not rsa_token:
raise UgreenLoginError("登录公钥为空(x-rsa-token)")
login_public_key = _decode_public_key(str(rsa_token))
encrypted_password = UgreenCrypto(public_key=login_public_key).rsa_encrypt_long(
password
)
login_payload = {
"username": username,
"password": encrypted_password,
"keepalive": keepalive,
"otp": True,
"is_simple": True,
}
login_resp = session.post(
f"{base_url}/ugreen/v1/verify/login",
json=login_payload,
headers=dict(headers),
timeout=timeout,
verify=verify_ssl,
)
login_json = _json_or_raise(login_resp, "登录")
_raise_if_failed(login_json, "登录")
data = login_json.get("data")
if not isinstance(data, Mapping):
raise UgreenLoginError("登录成功但响应 data 为空")
token = str(data.get("token") or "").strip()
public_key = str(data.get("public_key") or "").strip()
if not token:
raise UgreenLoginError("登录成功但未拿到 token")
if not public_key:
raise UgreenLoginError("登录成功但未拿到 public_key")
return token, _decode_public_key(public_key)
def _fetch_media_lib(
session: requests.Session,
base_url: str,
token: str,
public_key: str,
client_id: str,
client_version: str,
language: str,
page: int,
page_size: int,
timeout: float,
verify_ssl: bool,
) -> Any:
crypto = UgreenCrypto(
public_key=public_key,
token=token,
client_id=client_id,
client_version=client_version,
ug_agent="PC/WEB",
language=language,
)
req = crypto.build_encrypted_request(
url=f"{base_url}/ugreen/v1/video/homepage/media_list",
method="GET",
params={"page": page, "page_size": page_size},
)
media_resp = session.get(
req.url,
headers=req.headers,
params=req.params,
timeout=timeout,
verify=verify_ssl,
)
media_json = _json_or_raise(media_resp, "获取媒体库")
return crypto.decrypt_response(media_json, req.aes_key)
def parse_args(argv: list[str]) -> argparse.Namespace:
parser = argparse.ArgumentParser(
description="登录绿联 NAS 并调用媒体库接口(自动处理请求加密/响应解密)"
)
parser.add_argument("--host", help="服务器地址,例如: http://192.168.20.101:9999")
parser.add_argument("--username", help="用户名")
parser.add_argument("--password", help="密码(不传则交互输入)")
parser.add_argument("--client-id", help="可选,默认自动生成 UUID-WEB")
parser.add_argument("--client-version", default="76363", help="默认: 76363")
parser.add_argument("--language", default="zh-CN", help="默认: zh-CN")
parser.add_argument("--page", type=int, default=1, help="默认: 1")
parser.add_argument("--page-size", type=int, default=50, help="默认: 50")
parser.add_argument("--timeout", type=float, default=20.0, help="默认: 20 秒")
parser.add_argument("--insecure", action="store_true", help="忽略 HTTPS 证书校验")
parser.add_argument(
"--no-keepalive",
action="store_true",
help="关闭保持登录(默认保持登录)",
)
parser.add_argument("--pretty", action="store_true", help="美化输出 JSON")
parser.add_argument("--output", help="将解密后的结果写入文件")
return parser.parse_args(argv)
def main(argv: list[str] | None = None) -> int:
args = parse_args(argv or sys.argv[1:])
host = args.host or input("服务器地址: ").strip()
username = args.username or input("用户名: ").strip()
password = args.password or getpass.getpass("密码: ")
client_id = (args.client_id or f"{uuid.uuid4()}-WEB").strip()
keepalive = not args.no_keepalive
verify_ssl = not args.insecure
try:
base_url = _normalize_base_url(host)
if args.insecure:
requests.packages.urllib3.disable_warnings() # type: ignore[attr-defined]
session = requests.Session()
headers = _build_common_headers(
client_id=client_id,
client_version=args.client_version,
language=args.language,
)
token, public_key = _login_and_get_access(
session=session,
base_url=base_url,
username=username,
password=password,
keepalive=keepalive,
headers=headers,
timeout=args.timeout,
verify_ssl=verify_ssl,
)
decoded = _fetch_media_lib(
session=session,
base_url=base_url,
token=token,
public_key=public_key,
client_id=client_id,
client_version=args.client_version,
language=args.language,
page=args.page,
page_size=args.page_size,
timeout=args.timeout,
verify_ssl=verify_ssl,
)
if isinstance(decoded, Mapping):
if decoded.get("code") != 200:
raise UgreenLoginError(
f"媒体库接口失败: code={decoded.get('code')} msg={decoded.get('msg')}"
)
media_count = None
data = decoded.get("data")
if isinstance(data, Mapping) and isinstance(data.get("media_lib_info_list"), list):
media_count = len(data["media_lib_info_list"])
print(
f"调用成功: code={decoded.get('code')} msg={decoded.get('msg')} "
f"media_lib_info_list={media_count}"
)
text = json.dumps(
decoded,
ensure_ascii=False,
indent=2 if args.pretty else None,
separators=(",", ":") if not args.pretty else None,
)
if args.output:
with open(args.output, "w", encoding="utf-8") as f:
f.write(text)
f.write("\n")
print(f"解密结果已写入: {args.output}")
else:
print(text)
return 0
except UgreenLoginError as exc:
print(f"错误: {exc}", file=sys.stderr)
return 1
except requests.RequestException as exc:
print(f"网络错误: {exc}", file=sys.stderr)
return 2
if __name__ == "__main__":
raise SystemExit(main())


@@ -30,7 +30,8 @@ class MetaInfoTest(TestCase):
"restype": meta_info.edition,
"pix": meta_info.resource_pix or "",
"video_codec": meta_info.video_encode or "",
-"audio_codec": meta_info.audio_encode or ""
+"audio_codec": meta_info.audio_encode or "",
+"fps": meta_info.fps or None
}
# 检查tmdbid

tests/test_string.py

@@ -0,0 +1,26 @@
from unittest import TestCase
from app.utils.string import StringUtils
class StringUtilsTest(TestCase):
def test_is_media_title_like_true(self):
self.assertTrue(StringUtils.is_media_title_like("盗梦空间"))
self.assertTrue(StringUtils.is_media_title_like("The Lord of the Rings"))
self.assertTrue(StringUtils.is_media_title_like("庆余年 第2季"))
self.assertTrue(StringUtils.is_media_title_like("The Office S01E01"))
self.assertTrue(StringUtils.is_media_title_like("权力的游戏 Game of Thrones"))
self.assertTrue(StringUtils.is_media_title_like("Spider-Man: No Way Home 2021"))
def test_is_media_title_like_false(self):
self.assertFalse(StringUtils.is_media_title_like(""))
self.assertFalse(StringUtils.is_media_title_like(" "))
self.assertFalse(StringUtils.is_media_title_like("a"))
self.assertFalse(StringUtils.is_media_title_like("第2季"))
self.assertFalse(StringUtils.is_media_title_like("S01E01"))
self.assertFalse(StringUtils.is_media_title_like("#推荐电影"))
self.assertFalse(StringUtils.is_media_title_like("请帮我推荐一部电影"))
self.assertFalse(StringUtils.is_media_title_like("盗梦空间怎么样?"))
self.assertFalse(StringUtils.is_media_title_like("我想看盗梦空间"))
self.assertFalse(StringUtils.is_media_title_like("继续"))
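
The assertions above pin down the expected behavior of `StringUtils.is_media_title_like`. As a rough illustration only (this is not the repository's actual implementation; the patterns and stopword list below are assumptions), a standalone heuristic satisfying these cases could look like:

```python
import re

# Hypothetical chat-like patterns and filler words; the real StringUtils
# implementation may use entirely different rules.
_CHAT_PATTERNS = [r"[??]\s*$", r"推荐", r"我想看", r"帮我", r"怎么样"]
_STOPWORDS = {"继续", "好的", "是的"}

def is_media_title_like(text: str) -> bool:
    text = (text or "").strip()
    if len(text) < 2 or text in _STOPWORDS:
        return False
    if text.startswith("#"):
        # hashtags are topics, not titles
        return False
    if any(re.search(p, text) for p in _CHAT_PATTERNS):
        # questions and requests are conversation, not titles
        return False
    # a bare season/episode marker is not a title by itself
    if re.fullmatch(r"(第\s*\d+\s*季|S\d+(E\d+)?)", text, re.IGNORECASE):
        return False
    return True
```

A season marker alongside a name (e.g. "庆余年 第2季") still passes, because only a *bare* marker is rejected via `re.fullmatch`.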


@@ -0,0 +1,54 @@
from types import ModuleType, SimpleNamespace
import sys
# The endpoint import pulls in a wide plugin/helper graph. Some optional modules are
# not present in this test environment, so stub them before importing the endpoint.
sys.modules.setdefault("app.helper.sites", ModuleType("app.helper.sites"))
setattr(sys.modules["app.helper.sites"], "SitesHelper", object)
from app.api.endpoints.transfer import manual_transfer
from app.schemas import ManualTransferItem
def test_manual_transfer_from_history_preserves_download_context(monkeypatch):
history = SimpleNamespace(
status=0,
mode="copy",
src_fileitem={"storage": "local", "path": "/downloads/test.mkv", "name": "test.mkv", "type": "file"},
dest_fileitem=None,
downloader="qbittorrent",
download_hash="abc123",
type="电视剧",
tmdbid="100",
doubanid="200",
seasons="S01",
episodes="E01-E02",
episode_group="WEB-DL",
)
captured = {}
def fake_get(_db, logid):
assert logid == 1
return history
class FakeTransferChain:
def manual_transfer(self, **kwargs):
captured.update(kwargs)
return True, ""
monkeypatch.setattr("app.api.endpoints.transfer.TransferHistory.get", fake_get)
monkeypatch.setattr("app.api.endpoints.transfer.TransferChain", FakeTransferChain)
resp = manual_transfer(
transer_item=ManualTransferItem(logid=1, from_history=True),
background=True,
db=object(),
_="token",
)
assert resp.success is True
assert captured["downloader"] == "qbittorrent"
assert captured["download_hash"] == "abc123"
assert captured["episode_group"] == "WEB-DL"
assert captured["season"] == 1

tests/test_ugreen_api.py

@@ -0,0 +1,113 @@
import unittest
from types import SimpleNamespace
from unittest.mock import patch
from app.modules.ugreen.api import Api
class _FakeResponse:
def __init__(self, payload: dict, headers: dict | None = None):
self._payload = payload
self.headers = headers or {}
def json(self):
return self._payload
class _FakeSession:
def __init__(self, get_responses=None, post_responses=None):
self._get_responses = list(get_responses or [])
self._post_responses = list(post_responses or [])
self.calls: list[tuple[str, dict]] = []
self.cookies = SimpleNamespace(
get_dict=lambda: {},
update=lambda *_args, **_kwargs: None,
)
def get(self, *args, **kwargs):
if args:
kwargs = {"url": args[0], **kwargs}
self.calls.append(("GET", kwargs))
return self._get_responses.pop(0) if self._get_responses else _FakeResponse({})
def post(self, *args, **kwargs):
if args:
kwargs = {"url": args[0], **kwargs}
self.calls.append(("POST", kwargs))
return self._post_responses.pop(0) if self._post_responses else _FakeResponse({})
@staticmethod
def close():
return None
class _FakeCrypto:
def __init__(self, *args, **kwargs):
pass
@staticmethod
def rsa_encrypt_long(raw: str) -> str:
return f"enc:{raw}"
@staticmethod
def build_encrypted_request(url: str, method: str = "GET", params=None, **kwargs):
return SimpleNamespace(url=url, headers={}, params=params or {}, json=None, aes_key="k")
@staticmethod
def decrypt_response(payload, aes_key):
return payload
class UgreenApiVerifySslTest(unittest.TestCase):
def test_request_json_default_verify_ssl_true(self):
api = Api(host="https://example.com")
fake_session = _FakeSession(
get_responses=[_FakeResponse({"code": 200})],
post_responses=[_FakeResponse({"code": 200})],
)
api._session = fake_session
api._request_json(url="https://example.com/a", method="GET")
api._request_json(url="https://example.com/b", method="POST", json_data={"x": 1})
self.assertEqual(fake_session.calls[0][1].get("verify"), True)
self.assertEqual(fake_session.calls[1][1].get("verify"), True)
def test_login_logout_follow_verify_ssl_flag(self):
api = Api(host="https://example.com", verify_ssl=False)
fake_session = _FakeSession(
get_responses=[_FakeResponse({})],
post_responses=[
_FakeResponse({"code": 200, "msg": "ok", "data": {}}, headers={"x-rsa-token": "BEGIN TEST"}),
_FakeResponse(
{
"code": 200,
"msg": "ok",
"data": {
"token": "token-value",
"public_key": "BEGIN LOGIN KEY",
"static_token": "static-token",
"is_ugk": False,
},
}
),
],
)
api._session = fake_session
with patch("app.modules.ugreen.api.UgreenCrypto", _FakeCrypto):
token = api.login("tester", "pwd")
self.assertEqual(token, "token-value")
api.logout()
self.assertEqual(len(fake_session.calls), 3)
self.assertEqual(fake_session.calls[0][0], "POST")
self.assertEqual(fake_session.calls[1][0], "POST")
self.assertEqual(fake_session.calls[2][0], "GET")
self.assertEqual(fake_session.calls[0][1].get("verify"), False)
self.assertEqual(fake_session.calls[1][1].get("verify"), False)
self.assertEqual(fake_session.calls[2][1].get("verify"), False)
if __name__ == "__main__":
unittest.main()


@@ -0,0 +1,95 @@
import base64
import hashlib
import json
import unittest
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from app.utils.ugreen_crypto import UgreenCrypto
def _generate_rsa_keys() -> tuple[str, rsa.RSAPrivateKey]:
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_pem = private_key.public_key().public_bytes(
encoding=serialization.Encoding.PEM,
format=serialization.PublicFormat.PKCS1,
).decode("utf-8")
return public_pem, private_key
def _rsa_decrypt_long(private_key: rsa.RSAPrivateKey, payload_b64: str) -> str:
encrypted = base64.b64decode(payload_b64)
chunk_size = private_key.key_size // 8
plain_chunks = []
for start in range(0, len(encrypted), chunk_size):
chunk = encrypted[start : start + chunk_size]
plain_chunks.append(private_key.decrypt(chunk, padding.PKCS1v15()))
return b"".join(plain_chunks).decode("utf-8")
class UgreenCryptoTest(unittest.TestCase):
def setUp(self):
self.public_key, self.private_key = _generate_rsa_keys()
self.token = "demo-token-for-test"
self.crypto = UgreenCrypto(
public_key=self.public_key,
token=self.token,
client_id="test-client-id",
)
def test_rsa_encrypt_long(self):
plain = "A" * 400
encrypted = self.crypto.rsa_encrypt_long(plain)
self.assertEqual(plain, _rsa_decrypt_long(self.private_key, encrypted))
def test_build_encrypted_request_and_decrypt_response(self):
req = self.crypto.build_encrypted_request(
url="http://127.0.0.1:9999/ugreen/v1/video/homepage/media_list",
params={"page": 1, "page_size": 50},
data={"foo": "bar", "count": 2},
)
self.assertEqual(
req.plain_query,
"page=1&page_size=50",
)
self.assertEqual(
req.plain_query,
self.crypto.aes_gcm_decrypt(req.params["encrypt_query"], req.aes_key),
)
self.assertEqual(
req.headers["X-Ugreen-Security-Key"],
hashlib.md5(self.token.encode("utf-8")).hexdigest(),
)
self.assertEqual(
req.aes_key,
_rsa_decrypt_long(self.private_key, req.headers["X-Ugreen-Security-Code"]),
)
self.assertEqual(
self.token,
_rsa_decrypt_long(self.private_key, req.headers["X-Ugreen-Token"]),
)
encrypted_body = req.json["encrypt_req_body"]
body_plain = self.crypto.aes_gcm_decrypt(encrypted_body, req.aes_key)
self.assertEqual(json.loads(body_plain), {"foo": "bar", "count": 2})
self.assertEqual(
req.json["req_body_sha256"],
hashlib.sha256(body_plain.encode("utf-8")).hexdigest(),
)
server_payload = {"code": 0, "msg": "ok", "data": {"items": [1, 2, 3]}}
resp = {
"encrypt_resp_body": self.crypto.aes_gcm_encrypt(
json.dumps(server_payload, ensure_ascii=False, separators=(",", ":")),
req.aes_key,
)
}
decoded = self.crypto.decrypt_response(resp, req.aes_key)
self.assertEqual(decoded, server_payload)
if __name__ == "__main__":
unittest.main()
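
The `_rsa_decrypt_long` helper above mirrors how `UgreenCrypto.rsa_encrypt_long` splits long plaintext into PKCS#1 v1.5-sized blocks. A minimal self-contained sketch of the same chunked round-trip (using only the `cryptography` library, independent of `UgreenCrypto`):

```python
from cryptography.hazmat.primitives.asymmetric import padding, rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pub = key.public_key()

def encrypt_long(plain: str) -> bytes:
    # PKCS#1 v1.5 padding costs 11 bytes per block of key_size/8
    block = key.key_size // 8 - 11
    data = plain.encode("utf-8")
    return b"".join(
        pub.encrypt(data[i:i + block], padding.PKCS1v15())
        for i in range(0, len(data), block)
    )

def decrypt_long(cipher: bytes) -> str:
    # each ciphertext block is exactly key_size/8 bytes long
    block = key.key_size // 8
    return b"".join(
        key.decrypt(cipher[i:i + block], padding.PKCS1v15())
        for i in range(0, len(cipher), block)
    ).decode("utf-8")
```

A 400-byte message exceeds the 245-byte per-block limit of a 2048-bit key, so it is carried in two blocks and reassembled on decryption.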


@@ -0,0 +1,188 @@
import unittest
from unittest.mock import patch
import importlib.util
import sys
import types
from pathlib import Path
from app import schemas
try:
from app.api.endpoints import dashboard as dashboard_endpoint
except Exception:
dashboard_endpoint = None
def _load_ugreen_class():
"""
在测试中动态加载 Ugreen,避免受可选依赖(如 pyquery/sqlalchemy)影响。
"""
module_name = "_test_ugreen_module"
if module_name in sys.modules:
return sys.modules[module_name].Ugreen
# 轻量日志桩
if "app.log" not in sys.modules:
log_module = types.ModuleType("app.log")
class _Logger:
def info(self, *_args, **_kwargs):
pass
def warning(self, *_args, **_kwargs):
pass
def error(self, *_args, **_kwargs):
pass
def debug(self, *_args, **_kwargs):
pass
log_module.logger = _Logger()
sys.modules["app.log"] = log_module
# SystemConfigOper 桩
if "app.db.systemconfig_oper" not in sys.modules:
db_module = types.ModuleType("app.db.systemconfig_oper")
class _SystemConfigOper:
@staticmethod
def get(_key):
return {}
@staticmethod
def set(_key, _value):
return None
db_module.SystemConfigOper = _SystemConfigOper
sys.modules["app.db.systemconfig_oper"] = db_module
# app.modules / app.modules.ugreen / app.modules.ugreen.api 桩
if "app.modules" not in sys.modules:
pkg = types.ModuleType("app.modules")
pkg.__path__ = []
sys.modules["app.modules"] = pkg
if "app.modules.ugreen" not in sys.modules:
subpkg = types.ModuleType("app.modules.ugreen")
subpkg.__path__ = []
sys.modules["app.modules.ugreen"] = subpkg
if "app.modules.ugreen.api" not in sys.modules:
api_module = types.ModuleType("app.modules.ugreen.api")
class _Api:
host = ""
token = None
api_module.Api = _Api
sys.modules["app.modules.ugreen.api"] = api_module
ugreen_path = Path(__file__).resolve().parents[1] / "app" / "modules" / "ugreen" / "ugreen.py"
spec = importlib.util.spec_from_file_location(module_name, ugreen_path)
module = importlib.util.module_from_spec(spec)
sys.modules[module_name] = module
assert spec and spec.loader
spec.loader.exec_module(module)
return module.Ugreen
Ugreen = _load_ugreen_class()
class _FakeUgreenApi:
host = "http://127.0.0.1:9999"
token = "test-token"
@staticmethod
def video_all(classification: int, page: int = 1, page_size: int = 1):
if classification == -102:
return {"total_num": 12}
if classification == -103:
return {"total_num": 34}
return {"total_num": 0}
class UgreenScanModeTest(unittest.TestCase):
def test_resolve_scan_type(self):
resolve = Ugreen._Ugreen__resolve_scan_type
self.assertEqual(resolve(scan_mode="new_and_modified"), 1)
self.assertEqual(resolve(scan_mode="supplement_missing"), 2)
self.assertEqual(resolve(scan_mode="full_override"), 3)
self.assertEqual(resolve(scan_mode="1"), 1)
self.assertEqual(resolve(scan_mode="2"), 2)
self.assertEqual(resolve(scan_mode="3"), 3)
self.assertEqual(resolve(scan_type=1), 1)
self.assertEqual(resolve(scan_type=2), 2)
self.assertEqual(resolve(scan_type=3), 3)
self.assertEqual(resolve(scan_mode="unknown"), 2)
self.assertEqual(resolve(), 2)
class UgreenVerifySslTest(unittest.TestCase):
def test_resolve_verify_ssl(self):
resolve = Ugreen._Ugreen__resolve_verify_ssl
self.assertEqual(resolve(True), True)
self.assertEqual(resolve(False), False)
self.assertEqual(resolve("true"), True)
self.assertEqual(resolve("1"), True)
self.assertEqual(resolve("false"), False)
self.assertEqual(resolve("0"), False)
self.assertEqual(resolve(None), True)
class UgreenStatisticTest(unittest.TestCase):
def test_get_medias_count_episode_is_none(self):
ugreen = Ugreen.__new__(Ugreen)
ugreen._host = "http://127.0.0.1:9999"
ugreen._username = "tester"
ugreen._password = "secret"
ugreen._userinfo = {"name": "tester"}
ugreen._api = _FakeUgreenApi()
stat = ugreen.get_medias_count()
self.assertEqual(stat.movie_count, 12)
self.assertEqual(stat.tv_count, 34)
self.assertIsNone(stat.episode_count)
class DashboardStatisticTest(unittest.TestCase):
@unittest.skipIf(dashboard_endpoint is None, "dashboard endpoint dependencies are missing")
def test_statistic_all_episode_missing(self):
mocked_stats = [
schemas.Statistic(movie_count=10, tv_count=20, episode_count=None, user_count=2),
schemas.Statistic(movie_count=1, tv_count=2, episode_count=None, user_count=1),
]
with patch(
"app.api.endpoints.dashboard.DashboardChain.media_statistic",
return_value=mocked_stats,
):
ret = dashboard_endpoint.statistic(name="ugreen", _=None)
self.assertEqual(ret.movie_count, 11)
self.assertEqual(ret.tv_count, 22)
self.assertEqual(ret.user_count, 3)
self.assertIsNone(ret.episode_count)
@unittest.skipIf(dashboard_endpoint is None, "dashboard endpoint dependencies are missing")
def test_statistic_mixed_episode_count(self):
mocked_stats = [
schemas.Statistic(movie_count=10, tv_count=20, episode_count=None, user_count=2),
schemas.Statistic(movie_count=1, tv_count=2, episode_count=6, user_count=1),
]
with patch(
"app.api.endpoints.dashboard.DashboardChain.media_statistic",
return_value=mocked_stats,
):
ret = dashboard_endpoint.statistic(name="all", _=None)
self.assertEqual(ret.movie_count, 11)
self.assertEqual(ret.tv_count, 22)
self.assertEqual(ret.user_count, 3)
self.assertEqual(ret.episode_count, 6)
if __name__ == "__main__":
unittest.main()


@@ -1,2 +1,2 @@
-APP_VERSION = 'v2.9.9'
-FRONTEND_VERSION = 'v2.9.9'
+APP_VERSION = 'v2.9.14'
+FRONTEND_VERSION = 'v2.9.14'