Mirror of https://github.com/jxxghp/MoviePilot.git
synced 2026-05-09 14:02:39 +08:00
Compare commits
136 Commits
Commit SHA1 values (author, date, and message columns empty in this view):

28f9756dd6 4bffe2cff1 fca478f1d8 097dff13a3 460b386004 89bf89c02d cefb60ba2c 8c78627647
51189210c2 38933d5882 4619fc4042 ee7ba28235 409abb66be 8aa8b1897b 8c256d91bd d1d3fc7f30
ae15eac0f8 1282ad5004 6f6fcc79f2 e5c64e73b5 93a19b467b 4ba8d42272 32e247b4d5 1d0d09c909
b7ee6ca8c4 4a4d93e7f9 7b096c0a09 3a93efb082 73cdd297b1 83187ea17d 6d8eed30ce 6fa48afa34
115fb40772 10b0dbb5d3 4c32ad902b 787db8f5ac df1b2067b6 f3d9f25d02 eea7e3b55f 810cb0a203
e0e21e39a2 cc31c66b93 011535fbc3 77b95d11fb 89f6164eba 70350aa39f 61a0a66c47 6fcc5c84a6
5995b3f3e8 60996be71b 49b50e5975 262bd6808b e9c8db9950 02a98f832f 9a2a241a30 04c2a1eb18
65a4b7438c 13c3c082b8 bf127d6a70 117672384c 2ae2ea8ef7 7a5e513f25 81828948dd eda73e14f7
6aec326d05 d36dd69ec3 1688063450 ae5207f0e4 f1f4743936 e09f9ad009 8d938c2273 e5f97cd299
9dababbcfd 9d8bd5044b 5d07381111 61c695b77d 1ceb8891b0 2f53fd3108 bf2d2cbd03 cb323653b8
edf3946558 6c5fae56d9 a4f2c574b0 815d83bfb3 df3294c9d2 1af5f02832 217fcfd1b2 80825584ac
10543eedd0 bf12a8679d 8cd12ab584 351de8b4da 75fca971d4 22f3244bf5 aafc4b3a39 18906e5ab2
9675d199f9 78e8faa203 d5ed9bc654 770065d9ed abc4154e2c fd6c9d5d34 dc428e7de0 0c51d79be7
1b489ba581 4d9f17b083 3c7cd2186f 5acfd683b9 6b01901a4a 1ca54afd6c 9c75c2d22e 79ec3ed2c3
7072d2cfe8 c0c08b0b84 01329195ee ad40b99313 1e338e48ab ac9c9598f4 02cb5dfc31 8109ffb445
0ecbcb89fa 8f38c06424 902394f86e 9fefd807f9 a8fb4a6d84 7806267e92 eb5e17a115 2ae98d628d
8b9dc0e77f 2f151cea64 b777e8cab1 663e37bd03 8960620883 5b892b3a63 974d5f2f49 f70881bb4f
5 .gitignore (vendored)

@@ -1,4 +1,5 @@
.idea/
.DS_Store
*.c
*.so
*.pyd
@@ -15,11 +16,15 @@ app/helper/*.bin
app/plugins/**
!app/plugins/__init__.py
config/cookies/**
config/app.env
config/user.db*
config/sites/**
config/logs/
config/temp/
config/cache/
.runtime/
public/
.moviepilot.env
*.pyc
*.log
.vscode
45 README.md

@@ -1,5 +1,7 @@
# MoviePilot

简体中文 | [English](README_EN.md)





@@ -16,17 +18,31 @@

Release channel: https://t.me/moviepilot_channel


## Key Features

- Frontend/backend separation, based on FastApi + Vue3.
- Focuses on core needs and simplifies features and settings; some settings can simply be left at their defaults.
- Redesigned user interface that is cleaner and easier to use.


## Installation

Official wiki: https://wiki.movie-pilot.org

### Add Skills for AI Agents

## Local CLI

One-command install-and-run script:

```shell
curl -fsSL https://raw.githubusercontent.com/jxxghp/MoviePilot/v2/scripts/bootstrap-local.sh | bash
```

Manage MoviePilot with the `moviepilot` command. Full CLI documentation: [`docs/cli.md`](docs/cli.md)


## Add Skills for AI Agents
```shell
npx skills add https://github.com/jxxghp/MoviePilot
```
@@ -37,32 +53,9 @@ API documentation: https://api.movie-pilot.org

MCP tool API documentation: see [docs/mcp-api.md](docs/mcp-api.md)

Running locally requires `Python 3.12` and `Node JS v20.12.1`
Development environment setup and local source-run guide: [`docs/development-setup.md`](docs/development-setup.md)

- Clone the main project [MoviePilot](https://github.com/jxxghp/MoviePilot)
```shell
git clone https://github.com/jxxghp/MoviePilot
```
- Clone the resources project [MoviePilot-Resources](https://github.com/jxxghp/MoviePilot-Resources), then copy the `.so`/`.pyd`/`.bin` library files for your platform and version from its `resources` directory into the `app/helper` directory
```shell
git clone https://github.com/jxxghp/MoviePilot-Resources
```
- Install the backend dependencies and run `main.py` to start the backend service; it listens on port `3001` by default, with API docs at `http://localhost:3001/docs`
```shell
cd MoviePilot
pip install -r requirements.txt
python3 -m app.main
```
- Clone the frontend project [MoviePilot-Frontend](https://github.com/jxxghp/MoviePilot-Frontend)
```shell
git clone https://github.com/jxxghp/MoviePilot-Frontend
```
- Install the frontend dependencies and run the frontend project, then open `http://localhost:5173`
```shell
yarn
yarn dev
```
- Follow the [plugin development guide](https://wiki.movie-pilot.org/zh/plugindev) to develop plugin code under the `app/plugins` directory
Plugin development guide: <https://wiki.movie-pilot.org/zh/plugindev>

## Related Projects

77 README_EN.md (Normal file)
@@ -0,0 +1,77 @@
# MoviePilot

[简体中文](README.md) | English










Redesigned from parts of [NAStool](https://github.com/NAStool/nas-tools), with a stronger focus on core automation scenarios while reducing issues and making the project easier to extend and maintain.

# For learning and personal communication only. Please do not promote this project on platforms in mainland China.

Release channel: https://t.me/moviepilot_channel


## Key Features

- Frontend/backend separation based on FastApi + Vue3.
- Focuses on core needs, simplifies features and settings, and allows some options to work well with sensible defaults.
- Reworked user interface for a cleaner and more practical experience.


## Installation

Official wiki: https://wiki.movie-pilot.org


## Local CLI

One-command bootstrap script:

```shell
curl -fsSL https://raw.githubusercontent.com/jxxghp/MoviePilot/v2/scripts/bootstrap-local.sh | bash
```

Manage MoviePilot with the `moviepilot` command. Full CLI documentation: [`docs/cli.md`](docs/cli.md)


## Add Skills for AI Agents
```shell
npx skills add https://github.com/jxxghp/MoviePilot
```

## Development

API documentation: https://api.movie-pilot.org

MCP tool API documentation: see [docs/mcp-api.md](docs/mcp-api.md)

Development environment setup and local source-run guide: [`docs/development-setup.md`](docs/development-setup.md)

Plugin development guide: <https://wiki.movie-pilot.org/zh/plugindev>

## Related Projects

- [MoviePilot-Frontend](https://github.com/jxxghp/MoviePilot-Frontend)
- [MoviePilot-Resources](https://github.com/jxxghp/MoviePilot-Resources)
- [MoviePilot-Plugins](https://github.com/jxxghp/MoviePilot-Plugins)
- [MoviePilot-Server](https://github.com/jxxghp/MoviePilot-Server)
- [MoviePilot-Wiki](https://github.com/jxxghp/MoviePilot-Wiki)

## Disclaimer

- This software is for learning and personal communication only. It must not be used for commercial purposes or illegal activities. The software does not know how users choose to use it, and all responsibility rests with the user.
- The source code is open source and derived from other open-source code. If someone removes the relevant restrictions and redistributes or publishes modified versions that lead to liability events, the publisher of those modifications bears full responsibility. Public releases that bypass or alter the user authentication mechanism are not recommended.
- This project does not accept donations and has not published any donation page anywhere. The software itself is free of charge and does not provide paid services. Please verify information carefully to avoid being misled.

## Contributors

<a href="https://github.com/jxxghp/MoviePilot/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=jxxghp/MoviePilot" />
</a>
@@ -1,4 +1,5 @@
import asyncio
import json
import re
import traceback
import uuid
@@ -27,15 +28,92 @@ from app.agent.prompt import prompt_manager
from app.agent.tools.factory import MoviePilotToolFactory
from app.chain import ChainBase
from app.core.config import settings
from app.db.transferhistory_oper import TransferHistoryOper
from app.helper.llm import LLMHelper
from app.log import logger
from app.schemas import Notification, NotificationType
from app.schemas.message import ChannelCapabilityManager, ChannelCapability
from app.schemas.types import MessageChannel
from app.utils.identity import SYSTEM_INTERNAL_USER_ID


class AgentChain(ChainBase):
    pass


class _ThinkTagStripper:
    """
    Helper that strips <think>...</think> tags from streamed text.
    Keeps an internal buffer to handle tags truncated across token boundaries.
    """

    def __init__(self):
        self.buffer = ""
        self.in_think_tag = False

    def reset(self):
        """Reset state"""
        self.buffer = ""
        self.in_think_tag = False

    def process(self, text: str, on_output: Callable[[str], None]):
        """
        Feed new text in; content with <think> tags stripped is emitted via on_output.
        :param text: newly received text fragment
        :param on_output: output callback receiving the filtered text
        :return: whether this call emitted anything via on_output
        """
        self.buffer += text
        emitted = False
        while self.buffer:
            if not self.in_think_tag:
                start_idx = self.buffer.find("<think>")
                if start_idx != -1:
                    if start_idx > 0:
                        on_output(self.buffer[:start_idx])
                        emitted = True
                    self.in_think_tag = True
                    self.buffer = self.buffer[start_idx + 7:]
                else:
                    # Check whether the buffer ends with an incomplete prefix of <think>
                    partial_match = False
                    for i in range(6, 0, -1):
                        if self.buffer.endswith("<think>"[:i]):
                            if len(self.buffer) > i:
                                on_output(self.buffer[:-i])
                                emitted = True
                            self.buffer = self.buffer[-i:]
                            partial_match = True
                            break
                    if not partial_match:
                        on_output(self.buffer)
                        emitted = True
                        self.buffer = ""
                    break
            else:
                end_idx = self.buffer.find("</think>")
                if end_idx != -1:
                    self.in_think_tag = False
                    self.buffer = self.buffer[end_idx + 8:]
                else:
                    # Check whether the buffer ends with an incomplete prefix of </think>
                    partial_match = False
                    for i in range(7, 0, -1):
                        if self.buffer.endswith("</think>"[:i]):
                            self.buffer = self.buffer[-i:]
                            partial_match = True
                            break
                    if not partial_match:
                        self.buffer = ""
                    break
        return emitted

    def flush(self, on_output: Callable[[str], None]):
        """At end of stream, emit any remaining non-think content left in the buffer"""
        if self.buffer and not self.in_think_tag:
            on_output(self.buffer)
            self.buffer = ""
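A minimal standalone sketch of the same buffering idea (a simplified re-implementation for illustration, not the project's class) shows how a tag split across chunk boundaries is still stripped:

```python
from typing import Callable, List


class ThinkStripper:
    """Toy stripper: removes <think>...</think> spans from a chunked stream."""

    def __init__(self) -> None:
        self.buffer = ""
        self.in_think = False

    def feed(self, text: str, out: Callable[[str], None]) -> None:
        self.buffer += text
        while self.buffer:
            if not self.in_think:
                i = self.buffer.find("<think>")
                if i != -1:
                    if i > 0:
                        out(self.buffer[:i])
                    self.in_think = True
                    self.buffer = self.buffer[i + len("<think>"):]
                    continue
                # Hold back a trailing partial "<think>" prefix for the next chunk
                keep = next((k for k in range(6, 0, -1)
                             if self.buffer.endswith("<think>"[:k])), 0)
                if keep:
                    if len(self.buffer) > keep:
                        out(self.buffer[:-keep])
                    self.buffer = self.buffer[-keep:]
                else:
                    out(self.buffer)
                    self.buffer = ""
                break
            else:
                j = self.buffer.find("</think>")
                if j != -1:
                    self.in_think = False
                    self.buffer = self.buffer[j + len("</think>"):]
                    continue
                # Think content is discarded; keep only a partial "</think>" prefix
                keep = next((k for k in range(7, 0, -1)
                             if self.buffer.endswith("</think>"[:k])), 0)
                self.buffer = self.buffer[-keep:] if keep else ""
                break

    def flush(self, out: Callable[[str], None]) -> None:
        if self.buffer and not self.in_think:
            out(self.buffer)
        self.buffer = ""


# The opening and closing tags are deliberately split across chunks
chunks = ["Hello <thi", "nk>secret", " stuff</th", "ink> world"]
pieces: List[str] = []
s = ThinkStripper()
for c in chunks:
    s.feed(c, pieces.append)
s.flush(pieces.append)
print("".join(pieces))
```

Note that stripping the tag pair leaves the surrounding whitespace intact, so "Hello " and " world" join with a double space.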


class MoviePilotAgent:
    """
    MoviePilot AI agent (based on LangChain v1 + LangGraph)
@@ -54,6 +132,12 @@ class MoviePilotAgent:
        self.channel = channel
        self.source = source
        self.username = username
        self.reply_with_voice = False
        self._tool_context: Dict[str, object] = {}
        self.output_callback: Optional[Callable[[str], None]] = None
        self.force_streaming = False
        self.suppress_user_reply = False
        self._streamed_output = ""

        # Streaming token management
        self.stream_handler = StreamingHandler()
@@ -63,14 +147,41 @@
        """
        Whether this is background-task mode (no channel info, e.g. a scheduled wake-up)
        """
        return not self.channel and not self.source
        return not self.channel or not self.source

    def _should_stream(self) -> bool:
        """
        Decide whether streaming output should be enabled:
        - Background mode: no streaming
        - Channel supports message editing: stream (push tokens in real time)
        - Channel does not support editing but verbose mode is on: also stream,
          so the agent's interim text before tool calls can be captured and sent along with tool messages
        - Otherwise: no streaming
        """
        if self.is_background:
            return self.force_streaming or callable(self.output_callback)
        if self.reply_with_voice:
            return False
        if self.force_streaming or callable(self.output_callback):
            return True
        # Verbose mode always needs streaming to capture the agent's text before tool calls
        if settings.AI_AGENT_VERBOSE:
            return True
        try:
            channel_enum = MessageChannel(self.channel)
            return ChannelCapabilityManager.supports_capability(
                channel_enum, ChannelCapability.MESSAGE_EDITING
            )
        except (ValueError, KeyError):
            return False

    @staticmethod
    def _initialize_llm():
    def _initialize_llm(streaming: bool = False):
        """
        Initialize the LLM (with streaming callbacks)
        Initialize the LLM
        :param streaming: whether to enable streaming output
        """
        return LLMHelper.get_llm(streaming=True)
        return LLMHelper.get_llm(streaming=streaming)

    @staticmethod
    def _extract_text_content(content) -> str:
@@ -105,6 +216,20 @@
            return "".join(text_parts)
        return str(content)

    def _emit_output(self, text: str):
        """
        Push the current streamed text to the external callback.
        """
        if not text:
            return
        self._streamed_output += text
        if not callable(self.output_callback):
            return
        try:
            self.output_callback(self._streamed_output)
        except Exception as e:
            logger.debug(f"智能体输出回调失败: {e}")

    def _initialize_tools(self) -> List:
        """
        Initialize the tool list
@@ -116,18 +241,23 @@
            source=self.source,
            username=self.username,
            stream_handler=self.stream_handler,
            agent_context=self._tool_context,
        )

    def _create_agent(self):
    def _create_agent(self, streaming: bool = False):
        """
        Create the LangGraph agent (using create_agent + SummarizationMiddleware)
        :param streaming: whether to enable streaming output
        """
        try:
            # System prompt
            system_prompt = prompt_manager.get_agent_prompt(channel=self.channel)
            system_prompt = prompt_manager.get_agent_prompt(
                channel=self.channel,
                prefer_voice_reply=self.reply_with_voice,
            )

            # LLM model (used for agent execution)
            llm = self._initialize_llm()
            llm = self._initialize_llm(streaming=streaming)

            # Tool list
            tools = self._initialize_tools()
@@ -174,30 +304,50 @@
            logger.error(f"创建 Agent 失败: {e}")
            raise e

    async def process(self, message: str, images: List[str] = None) -> str:
    async def process(
        self,
        message: str,
        images: List[str] = None,
        files: Optional[List[dict]] = None,
    ) -> str:
        """
        Process a user message: run streaming inference and return the agent's reply
        """
        try:
            logger.info(
                f"Agent推理: session_id={self.session_id}, input={message}, images={len(images) if images else 0}"
                f"Agent推理: session_id={self.session_id}, input={message}, "
                f"images={len(images) if images else 0}, files={len(files) if files else 0}"
            )
            self._tool_context = {
                "incoming_voice": self.reply_with_voice,
                "user_reply_sent": False,
                "reply_mode": None,
            }
            self._streamed_output = ""

            # Fetch history messages
            messages = memory_manager.get_agent_messages(
                session_id=self.session_id, user_id=self.user_id
            )

            # Build the user message content
            if images:
                content = []
                if message:
                    content.append({"type": "text", "text": message})
                for img in images:
                    content.append({"type": "image_url", "image_url": {"url": img}})
                messages.append(HumanMessage(content=content))
            else:
                messages.append(HumanMessage(content=message))
            # Build structured user message content
            request_payload = {
                "message": message or "",
                "images": [
                    {"index": index + 1, "type": "image"}
                    for index, _ in enumerate(images or [])
                ],
                "files": files or [],
            }
            content = [
                {
                    "type": "text",
                    "text": json.dumps(request_payload, ensure_ascii=False, indent=2),
                }
            ]
            for img in images or []:
                content.append({"type": "image_url", "image_url": {"url": img}})
            messages.append(HumanMessage(content=content))
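For illustration, the structured envelope built above can be serialized on its own. In the sketch below the message, image, and file values are made up for the demo; only the dict shape mirrors `request_payload` (note the image bytes are not embedded in the text block, just index placeholders — the real data rides in separate `image_url` blocks):

```python
import json

# Hypothetical sample inputs mirroring process() arguments
message = "Organize this file"
images = ["data:image/png;base64,AAAA"]
files = [{"name": "a.mkv", "size": 1024}]

# Same envelope shape as request_payload in process()
request_payload = {
    "message": message or "",
    "images": [
        {"index": index + 1, "type": "image"}
        for index, _ in enumerate(images or [])
    ],
    "files": files or [],
}
text_block = json.dumps(request_payload, ensure_ascii=False, indent=2)
print(text_block)
```

The model therefore sees a stable JSON structure even when the message text, image count, or attached-file metadata varies.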

            # Run inference
            await self._execute_agent(messages)
@@ -205,6 +355,8 @@
        except Exception as e:
            error_message = f"处理消息时发生错误: {str(e)}"
            logger.error(error_message)
            if self.suppress_user_reply:
                raise
            await self.send_agent_message(error_message)
            return error_message

@@ -218,8 +370,11 @@
        :param config: agent run configuration
        :param on_token: callback invoked for each valid token
        """
        in_think_tag = False
        buffer = ""
        stripper = _ThinkTagStripper()
        # In non-verbose mode, track the current langgraph_step to detect model output in intermediate steps.
        # "Planning/thinking" text emitted before a tool call is cleared once a tool_call is detected.
        current_model_step = -1
        has_emitted_in_step = False

        async for chunk in agent.astream(
            messages,
@@ -230,65 +385,49 @@
        ):
            if chunk["type"] == "messages":
                token, metadata = chunk["data"]
                if (
                    token
                    and hasattr(token, "tool_call_chunks")
                    and not token.tool_call_chunks
                ):
                    # Skip model thinking/reasoning content (e.g. DeepSeek R1's reasoning_content)
                    additional = getattr(token, "additional_kwargs", None)
                    if additional and additional.get("reasoning_content"):
                        continue
                    if token.content:
                        # content may be a string or a list of content blocks; filter out thinking-type blocks
                        content = self._extract_text_content(token.content)
                        if content:
                            buffer += content
                            while buffer:
                                if not in_think_tag:
                                    start_idx = buffer.find("<think>")
                                    if start_idx != -1:
                                        if start_idx > 0:
                                            on_token(buffer[:start_idx])
                                        in_think_tag = True
                                        buffer = buffer[start_idx + 7:]
                                    else:
                                        # Check for an incomplete <think> prefix at the end
                                        partial_match = False
                                        for i in range(6, 0, -1):
                                            if buffer.endswith("<think>"[:i]):
                                                if len(buffer) > i:
                                                    on_token(buffer[:-i])
                                                buffer = buffer[-i:]
                                                partial_match = True
                                                break
                                        if not partial_match:
                                            on_token(buffer)
                                            buffer = ""
                                        break
                                else:
                                    end_idx = buffer.find("</think>")
                                    if end_idx != -1:
                                        in_think_tag = False
                                        buffer = buffer[end_idx + 8:]
                                    else:
                                        # Check for an incomplete </think> prefix at the end
                                        partial_match = False
                                        for i in range(7, 0, -1):
                                            if buffer.endswith("</think>"[:i]):
                                                buffer = buffer[-i:]
                                                partial_match = True
                                                break
                                        if not partial_match:
                                            buffer = ""
                                        break
                if not token or not hasattr(token, "tool_call_chunks"):
                    continue

        if buffer and not in_think_tag:
            on_token(buffer)
                # Get current step info
                step = metadata.get("langgraph_step", -1) if metadata else -1

                if token.tool_call_chunks:
                    # Tool-call token detected: the current step is an intermediate step.
                    # In non-verbose mode, clear the "planning/thinking" text emitted earlier in this step
                    if not settings.AI_AGENT_VERBOSE and has_emitted_in_step:
                        self.stream_handler.reset()
                        stripper.reset()
                        has_emitted_in_step = False
                    continue

                # From here on, handle plain text tokens (tool_call_chunks is empty)

                # Detect step changes and reset per-step emit tracking
                if step != current_model_step:
                    current_model_step = step
                    has_emitted_in_step = False

                # Skip model thinking/reasoning content (e.g. DeepSeek R1's reasoning_content)
                additional = getattr(token, "additional_kwargs", None)
                if additional and additional.get("reasoning_content"):
                    continue

                if token.content:
                    # content may be a string or a list of content blocks; filter out thinking-type blocks
                    content = self._extract_text_content(token.content)
                    if content:
                        if stripper.process(content, on_token):
                            has_emitted_in_step = True

        stripper.flush(on_token)

    async def _execute_agent(self, messages: List[BaseMessage]):
        """
        Call the LangGraph agent and stream tokens via astream.
        Streaming output: push tokens in real time on channels that support message editing.
        Background-task mode (no channel info): no streaming; only broadcast the final result.
        Run inference with the LangGraph agent.
        Pick an execution mode based on the runtime environment:
        - Background-task mode (no channel info): non-streaming LLM + ainvoke; only broadcast the final result
        - Channel without message editing: non-streaming LLM + ainvoke; send the final reply when done
        - Channel with message editing: streaming LLM + astream; push tokens in real time
        """
        try:
            # Agent run configuration
@@ -298,11 +437,53 @@
                }
            }

            # Create the agent
            agent = self._create_agent()
            # Decide whether to enable streaming output
            use_streaming = self._should_stream()

            if self.is_background:
                # Background-task mode: run non-streaming, wait for completion, take only the last AI reply
            # Create the agent (passing a different LLM depending on streaming)
            agent = self._create_agent(streaming=use_streaming)

            if use_streaming:
                # Streaming mode: the channel supports message editing; start streaming and push tokens in real time
                await self.stream_handler.start_streaming(
                    channel=self.channel,
                    source=self.source,
                    user_id=self.user_id,
                    username=self.username,
                )

                # Stream the agent run; tokens go straight to stream_handler
                await self._stream_agent_tokens(
                    agent=agent,
                    messages={"messages": messages},
                    config=agent_config,
                    on_token=lambda token: (self.stream_handler.emit(token), self._emit_output(token)),
                )

                # Stop streaming; returns whether everything was sent via stream edits, plus the final text
                (
                    all_sent_via_stream,
                    streamed_text,
                ) = await self.stream_handler.stop_streaming()

                if not all_sent_via_stream:
                    # Streaming could not deliver everything (send failure, etc.);
                    # send the remaining content through the regular path
                    remaining_text = await self.stream_handler.take()
                    if remaining_text and not self._streamed_output:
                        self._emit_output(remaining_text)
                    if (
                        remaining_text
                        and not self.suppress_user_reply
                        and not self._tool_context.get("user_reply_sent")
                    ):
                        await self.send_agent_message(remaining_text)
                elif streamed_text:
                    # Streaming delivered everything but nothing was written to the database; save the message record
                    await self._save_agent_message_to_db(streamed_text)

            else:
                # Non-streaming mode: background task, or a channel without message editing
                await agent.ainvoke(
                    {"messages": messages},
                    config=agent_config,
@@ -325,42 +506,22 @@
                        final_text = text.strip()
                        break

                # Background tasks only broadcast the final reply, with a title
                if final_text:
                    await self.send_agent_message(final_text, title="MoviePilot助手")
                if final_text and not self._streamed_output:
                    self._emit_output(final_text)

            else:
                # Normal channel mode: start streaming output
                await self.stream_handler.start_streaming(
                    channel=self.channel,
                    source=self.source,
                    user_id=self.user_id,
                    username=self.username,
                )

                # Stream the agent run; tokens go straight to stream_handler
                await self._stream_agent_tokens(
                    agent=agent,
                    messages={"messages": messages},
                    config=agent_config,
                    on_token=self.stream_handler.emit,
                )

                # Stop streaming; returns whether everything was sent via stream edits, plus the final text
                (
                    all_sent_via_stream,
                    streamed_text,
                ) = await self.stream_handler.stop_streaming()

                if not all_sent_via_stream:
                    # Streaming could not deliver everything (channel without editing, or send failure);
                    # send the remaining content through the regular path
                    remaining_text = await self.stream_handler.take()
                    if remaining_text:
                        await self.send_agent_message(remaining_text)
                elif streamed_text:
                    # Streaming delivered everything but nothing was written to the database; save the message record
                    await self._save_agent_message_to_db(streamed_text)
                if (
                    final_text
                    and not self.suppress_user_reply
                    and not self._tool_context.get("user_reply_sent")
                ):
                    if self.is_background:
                        # Background tasks only broadcast the final reply, with a title
                        await self.send_agent_message(
                            final_text, title="MoviePilot助手"
                        )
                    else:
                        # Non-streaming channel: send the final reply
                        await self.send_agent_message(final_text)

            # Save messages
            memory_manager.save_agent_messages(
@@ -377,23 +538,18 @@
            return str(e), {}
        finally:
            # Make sure streaming is stopped
            if not self.is_background:
                await self.stream_handler.stop_streaming()
            await self.stream_handler.stop_streaming()

    async def send_agent_message(self, message: str, title: str = ""):
        """
        Send a message to the user through the original channel
        """
        user_id = self.user_id
        if self.user_id == "system":
            user_id = None

        await AgentChain().async_post_message(
            Notification(
                channel=self.channel,
                source=self.source,
                mtype=NotificationType.Agent,
                userid=user_id,
                userid=self.user_id,
                username=self.username,
                title=title,
                text=message,
@@ -437,9 +593,11 @@ class _MessageTask:
    user_id: str
    message: str
    images: Optional[List[str]] = None
    files: Optional[List[dict]] = None
    channel: Optional[str] = None
    source: Optional[str] = None
    username: Optional[str] = None
    reply_with_voice: bool = False


class AgentManager:
@@ -448,12 +606,21 @@
    Messages within the same session are processed in order; sessions do not affect each other.
    """

    # Debounce window (seconds) for batched transfer retries; failures within one window merge into a single agent call
    RETRY_TRANSFER_DEBOUNCE_SECONDS = 300

    def __init__(self):
        self.active_agents: Dict[str, MoviePilotAgent] = {}
        # Per-session message queues
        self._session_queues: Dict[str, asyncio.Queue] = {}
        # Per-session worker tasks
        self._session_workers: Dict[str, asyncio.Task] = {}
        # Debounce buffer for retried transfers: group_key -> List[history_id]
        self._retry_transfer_buffer: Dict[str, List[int]] = {}
        # Debounce timers for retried transfers: group_key -> asyncio.TimerHandle
        self._retry_transfer_timers: Dict[str, asyncio.TimerHandle] = {}
        # Lock for the retry-transfer buffer
        self._retry_transfer_lock = asyncio.Lock()

    @staticmethod
    async def initialize():
@@ -467,6 +634,11 @@
        Shut down the manager
        """
        await memory_manager.close()
        # Cancel all pending retry-transfer timers
        for timer in self._retry_transfer_timers.values():
            timer.cancel()
        self._retry_transfer_timers.clear()
        self._retry_transfer_buffer.clear()
        # Cancel all session workers
        for task in self._session_workers.values():
            task.cancel()
@@ -488,9 +660,11 @@
        user_id: str,
        message: str,
        images: List[str] = None,
        files: Optional[List[dict]] = None,
        channel: str = None,
        source: str = None,
        username: str = None,
        reply_with_voice: bool = False,
    ) -> str:
        """
        Handle a user message: put it on the session queue and process messages in order.
@@ -501,9 +675,11 @@
            user_id=user_id,
            message=message,
            images=images,
            files=files,
            channel=channel,
            source=source,
            username=username,
            reply_with_voice=reply_with_voice,
        )

        # Get or create the session queue
@@ -601,8 +777,46 @@
        agent.source = task.source
        if task.username:
            agent.username = task.username
        agent.reply_with_voice = task.reply_with_voice

        return await agent.process(task.message, images=task.images)
        return await agent.process(task.message, images=task.images, files=task.files)

    async def stop_current_task(self, session_id: str):
        """
        Emergency-stop the agent inference task currently running for a session, keeping the session and its memory.
        Unlike clear_session, this does not destroy the agent instance or clear memory,
        so the user can continue the conversation after stopping.
        """
        stopped = False

        # Cancel the session worker (triggers CancelledError inside _execute_agent)
        if session_id in self._session_workers:
            self._session_workers[session_id].cancel()
            try:
                await self._session_workers[session_id]
            except asyncio.CancelledError:
                pass
            self._session_workers.pop(session_id, None)  # noqa
            stopped = True

        # Drain pending messages from the queue
        if session_id in self._session_queues:
            queue = self._session_queues[session_id]
            while not queue.empty():
                try:
                    queue.get_nowait()
                    queue.task_done()
                except asyncio.QueueEmpty:
                    break
            self._session_queues.pop(session_id, None)
            stopped = True

        if stopped:
            logger.info(f"会话 {session_id} 的Agent推理已应急停止")
        else:
            logger.debug(f"会话 {session_id} 没有正在执行的Agent任务")

        return stopped

    async def clear_session(self, session_id: str, user_id: str):
        """
@@ -636,7 +850,7 @@
        try:
            # Use a unique session_id each time to avoid sharing context
            session_id = f"__agent_heartbeat_{uuid.uuid4().hex[:12]}__"
            user_id = "system"
            user_id = SYSTEM_INTERNAL_USER_ID

            logger.info("智能体心跳唤醒:开始检查待处理任务...")

@@ -684,30 +898,72 @@
        except Exception as e:
            logger.error(f"智能体心跳唤醒失败: {e}")

    async def retry_failed_transfer(self, history_id: int):
    async def retry_failed_transfer(self, history_id: int, group_key: str = ""):
        """
        Ask the agent to re-organize a failed transfer history record.
        Called by the file-organize module when it detects a failed transfer; runs in a dedicated session.
        Called by the file-organize module when it detects a failed transfer.
        Failures sharing a group_key are merged into one agent call within the debounce window, to avoid wasting tokens on duplicates.
        :param history_id: ID of the failed transfer history record
        :param group_key: grouping key; records with the same key are merged (e.g. download_hash, source directory)
        """
        try:
            # Use a unique session_id each time to avoid sharing context
            session_id = f"__agent_retry_transfer_{history_id}_{uuid.uuid4().hex[:8]}__"
            user_id = "system"
        if not group_key:
            group_key = f"_default_{history_id}"

            logger.info(f"智能体重试整理:开始处理失败记录 ID={history_id} ...")
        async with self._retry_transfer_lock:
            # Add the history_id to the buffer
            if group_key not in self._retry_transfer_buffer:
                self._retry_transfer_buffer[group_key] = []
            if history_id not in self._retry_transfer_buffer[group_key]:
                self._retry_transfer_buffer[group_key].append(history_id)
            logger.info(
                f"智能体重试整理:记录 ID={history_id} 已加入缓冲区 "
                f"(group={group_key}, 当前{len(self._retry_transfer_buffer[group_key])}条)"
            )

            # English prompt so the LLM understands it well
            # Cancel the group's previous timer
            if group_key in self._retry_transfer_timers:
                self._retry_transfer_timers[group_key].cancel()

            # Schedule a new delayed timer
            loop = asyncio.get_running_loop()
            self._retry_transfer_timers[group_key] = loop.call_later(
                self.RETRY_TRANSFER_DEBOUNCE_SECONDS,
                lambda gk=group_key: asyncio.ensure_future(
                    self._flush_retry_transfer(gk)
                ),
            )
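The `call_later`-based debounce above can be demonstrated in isolation. The toy class below uses hypothetical names and a millisecond-scale window instead of the 300-second production value; it shows a burst of reported IDs collapsing into a single flush per group key:

```python
import asyncio
from typing import Dict, List


class DebouncedBatcher:
    """Toy version of the retry-transfer debounce: IDs reported inside the
    window are merged and flushed as one batch per group key."""

    def __init__(self, delay: float):
        self.delay = delay
        self.buffer: Dict[str, List[int]] = {}
        self.timers: Dict[str, asyncio.TimerHandle] = {}
        self.flushed: List[List[int]] = []

    def report(self, group: str, item_id: int) -> None:
        self.buffer.setdefault(group, [])
        if item_id not in self.buffer[group]:
            self.buffer[group].append(item_id)
        # Restart the window: only the last report of a burst schedules the flush
        if group in self.timers:
            self.timers[group].cancel()
        loop = asyncio.get_running_loop()
        self.timers[group] = loop.call_later(
            self.delay, lambda g=group: asyncio.ensure_future(self._flush(g))
        )

    async def _flush(self, group: str) -> None:
        ids = self.buffer.pop(group, [])
        self.timers.pop(group, None)
        if ids:
            self.flushed.append(ids)


async def main() -> List[List[int]]:
    b = DebouncedBatcher(delay=0.05)
    b.report("hash1", 1)
    b.report("hash1", 2)
    b.report("hash1", 2)  # duplicate, merged away
    await asyncio.sleep(0.2)  # let the window expire and the flush run
    return b.flushed


print(asyncio.run(main()))
```

Because each `report` cancels the group's pending timer, three rapid reports trigger exactly one flush containing `[1, 2]`.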

    async def _flush_retry_transfer(self, group_key: str):
        """
        延迟定时器到期后,取出该分组的所有 history_id 并合并为一次 Agent 调用。
        """
        async with self._retry_transfer_lock:
            history_ids = self._retry_transfer_buffer.pop(group_key, [])
            self._retry_transfer_timers.pop(group_key, None)

        if not history_ids:
            return

        session_id = f"__agent_retry_transfer_batch_{uuid.uuid4().hex[:8]}__"
        user_id = SYSTEM_INTERNAL_USER_ID

        ids_str = ", ".join(str(i) for i in history_ids)
        logger.info(
            f"智能体重试整理:开始批量处理失败记录 IDs=[{ids_str}] (group={group_key})"
        )

        if len(history_ids) == 1:
            # 单条记录,使用原有逻辑
            retry_message = (
                f"[System Task - Transfer Failed Retry] A file transfer/organization has failed. "
                f"Please use the 'transfer-failed-retry' skill to retry the failed transfer.\n\n"
                f"Failed transfer history record ID: {history_ids[0]}\n\n"
                f"Follow these steps:\n"
                f"1. Use `query_transfer_history` with status='failed' to find the record with id={history_ids[0]} "
                f"and understand the failure details (source path, error message, media info)\n"
                f"2. Analyze the error message to determine the best retry strategy\n"
                f"3. If the source file no longer exists, skip this retry and report that the file is missing\n"
                f"4. Delete the failed history record using `delete_transfer_history` with history_id={history_ids[0]}\n"
                f"5. Re-identify the media using `recognize_media` with the source file path\n"
                f"6. If recognition fails, try `search_media` with keywords from the filename\n"
                f"7. Re-transfer using `transfer_file` with the source path and any identified media info (tmdbid, media_type)\n"
@@ -718,6 +974,36 @@ class AgentManager:
                f"Do NOT include greetings, explanations, or conversational text. "
                f"Respond in Chinese (中文)."
            )
        else:
            # 多条记录,使用批量处理逻辑
            retry_message = (
                f"[System Task - Batch Transfer Failed Retry] Multiple file transfers from the same source "
                f"have failed. These files likely belong to the SAME media (e.g., multiple episodes of the same TV show). "
                f"Please use the 'transfer-failed-retry' skill to retry them efficiently.\n\n"
                f"Failed transfer history record IDs: {ids_str}\n"
                f"Total failed records: {len(history_ids)}\n\n"
                f"Follow these steps:\n"
                f"1. Use `query_transfer_history` with status='failed' to find ALL records with these IDs "
                f"and understand the failure details\n"
                f"2. Since these files are likely from the same media, analyze the FIRST record to determine "
                f"the media identity and the best retry strategy. The root cause is usually the same for all files.\n"
                f"3. If the error is about media recognition (e.g., '未识别到媒体信息'), identify the media ONCE "
                f"using `recognize_media` or `search_media`, then reuse that result (tmdbid, media_type) for all files\n"
                f"4. For EACH failed record:\n"
                f"   a. Delete the failed history record using `delete_transfer_history`\n"
                f"   b. Re-transfer using `transfer_file` with the source path and the identified media info\n"
                f"5. Report a summary of results (how many succeeded, how many failed)\n\n"
                f"IMPORTANT OPTIMIZATION: These files share the same media identity. "
                f"Do NOT call `recognize_media` or `search_media` repeatedly for each file. "
                f"Identify the media ONCE, then apply to all files.\n\n"
                f"IMPORTANT: This is a background system task, NOT a user conversation. "
                f"Your final response will be broadcast as a notification. "
                f"Only output a brief result summary. "
                f"Do NOT include greetings, explanations, or conversational text. "
                f"Respond in Chinese (中文)."
            )

        try:
            await self.process_message(
                session_id=session_id,
@@ -739,13 +1025,106 @@ class AgentManager:
            except asyncio.CancelledError:
                pass

            logger.info(
                f"智能体重试整理:批量处理完成 IDs=[{ids_str}] (group={group_key})"
            )

            # 用完即弃,清理资源
            await self.clear_session(session_id, user_id)

        except Exception as e:
            logger.error(
                f"智能体重试整理失败 (IDs=[{ids_str}], group={group_key}): {e}"
            )

    @staticmethod
    def _build_manual_redo_prompt(history) -> str:
        """
        构建手动 AI 整理提示词。
        """
        src_fileitem = history.src_fileitem or {}
        source_path = src_fileitem.get("path") if isinstance(src_fileitem, dict) else ""
        source_path = source_path or history.src or ""
        season_episode = f"{history.seasons or ''}{history.episodes or ''}".strip()

        return "\n".join(
            [
                "[System Task - Manual Transfer Re-Organize]",
                "A user manually triggered an AI re-organize task from the transfer history page.",
                "Your goal is to directly fix ONE transfer history record by using MoviePilot tools to analyze, clean up the old history entry if necessary, and organize the source file again.",
                "",
                "IMPORTANT:",
                "1. This is NOT a normal conversation. It is a background execution task.",
                "2. Do NOT rely on previous chat context. Work only from the record below.",
                "3. You should complete the re-organize by directly using tools such as `query_transfer_history`, `recognize_media`, `search_media`, `delete_transfer_history`, and `transfer_file`.",
                "4. Your final response must be a brief Chinese result summary only.",
                "",
                "Transfer history record:",
                f"- History ID: {history.id}",
                f"- Current status: {'success' if history.status else 'failed'}",
                f"- Current recognized title: {history.title or 'unknown'}",
                f"- Media type: {history.type or 'unknown'}",
                f"- Category: {history.category or 'unknown'}",
                f"- Year: {history.year or 'unknown'}",
                f"- Season/Episode: {season_episode or 'unknown'}",
                f"- Source path: {source_path or 'unknown'}",
                f"- Source storage: {history.src_storage or 'local'}",
                f"- Destination path: {history.dest or 'unknown'}",
                f"- Destination storage: {history.dest_storage or 'unknown'}",
                f"- Transfer mode: {history.mode or 'unknown'}",
                f"- Current TMDB ID: {history.tmdbid or 'none'}",
                f"- Current Douban ID: {history.doubanid or 'none'}",
                f"- Error message: {history.errmsg or 'none'}",
                "",
                "Required workflow:",
                f"1. Use `query_transfer_history` to locate and inspect the record with id={history.id}, and verify the source path, status, media info, and failure context.",
                "2. Decide whether the current recognition is trustworthy.",
                "3. If the source file no longer exists or cannot be safely processed, stop and report the reason.",
                "4. If the current recognition is wrong or the record should be reorganized, determine the correct media identity first.",
                "5. Prefer `recognize_media` with the source path. If recognition is not reliable, use `search_media` with keywords from filename/title/year.",
                "6. Only continue when you have high confidence in the target media.",
                "7. Before re-organizing, delete the old transfer history record with `delete_transfer_history` so the system will not skip the source file.",
                "8. Then use `transfer_file` to organize the source path directly.",
                "9. When calling `transfer_file`, reuse known context when appropriate: source storage, target path, target storage, transfer mode, season, tmdbid/doubanid, and media_type.",
                "10. If this record is already correct and no re-organize is needed, do not perform destructive actions; simply report that no change is necessary.",
                "",
                "Important execution rules:",
                "- Do NOT reorganize blindly when media identity is uncertain.",
                "- If the previous record was successful but obviously identified as the wrong media, still use the tool-based flow above instead of `/redo`.",
                "- Keep the final response short, in Chinese, and focused on outcome.",
            ]
        )

    async def manual_redo_transfer(
        self,
        history_id: int,
        output_callback: Optional[Callable[[str], None]] = None,
    ) -> None:
        """
        手动触发单条历史记录的 AI 整理。
        """
        session_id = f"__agent_manual_redo_{history_id}_{uuid.uuid4().hex[:8]}__"
        user_id = SYSTEM_INTERNAL_USER_ID
        agent = MoviePilotAgent(
            session_id=session_id,
            user_id=user_id,
            channel=None,
            source=None,
            username=settings.SUPERUSER,
        )
        agent.output_callback = output_callback
        agent.force_streaming = True
        agent.suppress_user_reply = True

        try:
            history = TransferHistoryOper().get(history_id)
            if not history:
                raise ValueError(f"整理记录不存在: {history_id}")

            await agent.process(self._build_manual_redo_prompt(history))
        finally:
            await agent.cleanup()
            memory_manager.clear_memory(session_id, user_id)


# 全局智能体管理器实例

@@ -38,7 +38,7 @@ class StreamingHandler:
    """

    # 流式输出的刷新间隔(秒)
    FLUSH_INTERVAL = 0.3

    def __init__(self):
        self._lock = threading.Lock()

@@ -120,7 +120,9 @@ class StreamingHandler:
        title: str = "",
    ):
        """
        启动流式输出。
        始终标记为流式状态(用于 buffer 收集 token),
        但只有渠道支持消息编辑时才启动定时刷新任务(实时推送给用户)。
        :param channel: 消息渠道
        :param source: 消息来源
        :param user_id: 用户ID

@@ -133,16 +135,16 @@ class StreamingHandler:
        self._username = username
        self._title = title

        self._streaming_enabled = True
        self._sent_text = ""
        self._message_response = None
        self._msg_start_offset = 0

        # 检查渠道是否支持消息编辑,不支持则仅收集 token 到 buffer,不实时推送
        if not self._can_stream():
            logger.debug(f"渠道 {channel} 不支持消息编辑,仅启用 buffer 收集模式")
            return

        # 从渠道能力中获取单条消息最大长度
        try:
            channel_enum = MessageChannel(self._channel)

@@ -345,6 +347,13 @@ class StreamingHandler:
        """
        return self._streaming_enabled

    @property
    def is_auto_flushing(self) -> bool:
        """
        是否正在定时刷新(渠道支持消息编辑时自动推送 buffer 内容)
        """
        return self._flush_task is not None

    @property
    def has_sent_message(self) -> bool:
        """

@@ -47,6 +47,11 @@ class SkillMetadata(TypedDict):
    约束: Skill中文描述。
    """

    version: int
    """Skill 版本号。
    用于内置技能的版本管理,同步时比较版本号决定是否覆盖用户目录中的旧版本。
    """

    description: str
    """Skill 功能描述。
    约束: 1-1024 字符,应说明功能及适用场景。

@@ -154,9 +159,23 @@ def _parse_skill_metadata(  # noqa: C901
    )
    compatibility_str = compatibility_str[:MAX_SKILL_COMPATIBILITY_LENGTH]

    # 版本号,默认为 0(表示未设置版本)
    raw_version = frontmatter_data.get("version")
    version = 0
    if raw_version is not None:
        try:
            version = int(raw_version)
        except (ValueError, TypeError):
            logger.warning(
                "Invalid 'version' in %s (got %r), defaulting to 0",
                skill_path,
                raw_version,
            )

    return SkillMetadata(
        id=skill_id,
        name=name,
        version=version,
        description=description_str,
        path=skill_path,
        metadata=_validate_metadata(frontmatter_data.get("metadata", {}), skill_path),

@@ -287,10 +306,38 @@ Remember: Skills make you more capable and consistent. When in doubt, check if a
"""


def _extract_version(skill_md: Path) -> int:
    """从 SKILL.md 文件中快速提取 version 字段,无法提取时返回 0。"""
    try:
        content = skill_md.read_text(encoding="utf-8")
    except Exception:
        return 0
    match = re.match(r"^---\s*\n(.*?)\n---\s*\n", content, re.DOTALL)
    if not match:
        return 0
    try:
        frontmatter = yaml.safe_load(match.group(1))
    except yaml.YAMLError:
        return 0
    if not isinstance(frontmatter, dict):
        return 0
    raw = frontmatter.get("version")
    if raw is None:
        return 0
    try:
        return int(raw)
    except (ValueError, TypeError):
        return 0
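`_extract_version` reads only the YAML frontmatter between the leading `---` fences rather than doing a full metadata parse. A simplified, dependency-free sketch of the same idea (it skips the `yaml.safe_load` step, so it only understands plain `version: <int>` lines, unlike the real function):

```python
import re


def extract_version(skill_md_text: str) -> int:
    """Return the integer `version` from a SKILL.md frontmatter, else 0."""
    # Frontmatter is the block between the leading '---' fences
    match = re.match(r"^---\s*\n(.*?)\n---\s*\n", skill_md_text, re.DOTALL)
    if not match:
        return 0
    for line in match.group(1).splitlines():
        field = re.match(r"^version:\s*(\d+)\s*$", line)
        if field:
            return int(field.group(1))
    return 0
```

Any malformed input falls through to 0, mirroring the "0 means unversioned" convention used by the sync logic below.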


def _sync_bundled_skills(bundled_dir: Path, target_dir: Path) -> None:
    """将项目自带的技能同步到用户目录。

    - 目标目录中不存在对应技能子目录时,直接复制。
    - 目标目录中已存在时,比较内置与用户目录中 SKILL.md 的 version 字段:
      - 内置版本更高时,直接覆盖用户目录中的旧版本。
      - 版本相同或用户版本更高时,跳过。
      - 内置 SKILL.md 无 version 字段(视为 0)时,不覆盖。

    Parameters
    ----------

@@ -312,15 +359,43 @@ def _sync_bundled_skills(bundled_dir: Path, target_dir: Path) -> None:
            continue

        skill_dst = target_dir / skill_src.name

        if not skill_dst.exists():
            # 目标不存在,直接复制
            try:
                shutil.copytree(str(skill_src), str(skill_dst))
                logger.info(
                    "已自动复制内置技能 '%s' -> '%s'", skill_src.name, skill_dst
                )
            except Exception as e:
                logger.warning("复制内置技能 '%s' 失败: %s", skill_src.name, e)
            continue

        # 目标已存在,比较版本号
        bundled_version = _extract_version(skill_md)
        if bundled_version <= 0:
            # 内置技能无版本号,保持旧逻辑不覆盖
            continue

        user_skill_md = skill_dst / "SKILL.md"
        user_version = _extract_version(user_skill_md) if user_skill_md.is_file() else 0

        if bundled_version <= user_version:
            # 用户版本 >= 内置版本,跳过
            continue

        # 内置版本更高,删除旧版本后覆盖
        try:
            shutil.rmtree(str(skill_dst))
            shutil.copytree(str(skill_src), str(skill_dst))
            logger.info(
                "已更新内置技能 '%s' (v%d -> v%d)",
                skill_src.name,
                user_version,
                bundled_version,
            )
        except Exception as e:
            logger.warning("更新内置技能 '%s' 失败: %s", skill_src.name, e)
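The copy/skip/update branching above reduces to a small pure decision function. A sketch for illustration only (the name `sync_action` is hypothetical, not part of the codebase):

```python
def sync_action(dst_exists: bool, bundled_version: int, user_version: int) -> str:
    """Mirror the sync branching: copy new skills, update only on a higher bundled version."""
    if not dst_exists:
        return "copy"    # target missing: copy the bundled skill
    if bundled_version <= 0:
        return "skip"    # unversioned bundled skill never overwrites user edits
    if bundled_version <= user_version:
        return "skip"    # user copy is the same version or newer
    return "update"      # bundled is strictly newer: remove and re-copy
```

Keeping the decision separate from the filesystem side effects (`copytree`/`rmtree`) is what makes logic like this easy to unit-test.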


class SkillsMiddleware(AgentMiddleware[SkillsState, ContextT, ResponseT]):  # noqa

@@ -9,6 +9,8 @@ Core Capabilities:
2. Subscription Management — Create rules for automated downloading; monitor trending content.
3. Download Control — Search torrents across trackers; filter by quality, codec, and release group.
4. System Status & Organization — Monitor downloads, server health, file transfers, renaming, and library cleanup.
5. Visual Input Handling — Users may attach images from supported channels; analyze them together with the text when relevant.
6. File Context Handling — User messages may arrive as structured JSON. Treat the `message` field as the user's text. Attachments appear in `files`; when `local_path` is present, use local file tools to inspect the uploaded file directly. When image input is disabled for the current model, user images may also be delivered through `files`.

<communication>
{verbose_spec}

@@ -19,6 +21,10 @@ Core Capabilities:
- Use Markdown for structured data. Use `inline code` for media titles/paths.
- Include key details (year, rating, resolution) but do NOT over-explain.
- Do not stop for approval on read-only operations. Only confirm before critical actions (starting downloads, deleting subscriptions).
- If the current channel supports image sending and an image would materially help, you may use the `send_message` tool with `image_url` to send it.
- If the current channel supports file sending and you need to return a local image/file for the user to download, use `send_local_file`.
{button_choice_spec}
- Voice replies: {voice_reply_spec}
- NOT a coding assistant. Do not offer code snippets.
- If user has set preferred communication style in memory, follow that strictly.
</communication>

@@ -50,10 +50,13 @@ class PromptManager:
            logger.error(f"加载提示词失败: {prompt_name}, 错误: {e}")
            raise

    def get_agent_prompt(
        self, channel: str = None, prefer_voice_reply: bool = False
    ) -> str:
        """
        获取智能体提示词
        :param channel: 消息渠道(Telegram、微信、Slack等)
        :param prefer_voice_reply: 是否优先使用语音回复
        :return: 提示词内容
        """
        # 基础提示词

@@ -73,6 +76,7 @@ class PromptManager:
            caps = ChannelCapabilityManager.get_capabilities(msg_channel)
            if caps:
                markdown_spec = self._generate_formatting_instructions(caps)
            button_choice_spec = self._generate_button_choice_instructions(msg_channel)

        # 啰嗦模式
        verbose_spec = ""

@@ -87,12 +91,17 @@ class PromptManager:

        # MoviePilot系统信息
        moviepilot_info = self._get_moviepilot_info()
        voice_reply_spec = self._generate_voice_reply_instructions(
            prefer_voice_reply=prefer_voice_reply
        )

        # 始终替换占位符,避免后续 .format() 时因残留花括号报 KeyError
        base_prompt = base_prompt.format(
            markdown_spec=markdown_spec,
            verbose_spec=verbose_spec,
            moviepilot_info=moviepilot_info,
            voice_reply_spec=voice_reply_spec,
            button_choice_spec=button_choice_spec,
        )

        return base_prompt

@@ -166,6 +175,37 @@ class PromptManager:
        instructions.append("- Links: Paste URLs directly as text.")
        return "\n".join(instructions)

    @staticmethod
    def _generate_voice_reply_instructions(prefer_voice_reply: bool) -> str:
        if not prefer_voice_reply:
            return (
                "- Voice replies: Use normal text replies by default. "
                "Only call `send_voice_message` when spoken playback is clearly better than plain text."
            )
        return (
            "- Current message context: The user sent a voice message.\n"
            "- Reply preference: Prioritize calling `send_voice_message` for the main user-facing reply.\n"
            "- Fallback: If voice is unavailable on the current channel, `send_voice_message` will fall back to text.\n"
            "- Do not repeat the same full reply again after calling `send_voice_message`."
        )

    @staticmethod
    def _generate_button_choice_instructions(
        channel: MessageChannel = None,
    ) -> str:
        if channel and ChannelCapabilityManager.supports_buttons(
            channel
        ) and ChannelCapabilityManager.supports_callbacks(channel):
            return (
                "- User questions: If you need the user to choose from a few clear options, "
                "call `ask_user_choice` to send button options. After the user clicks a button, "
                "the selected value will come back as the user's next message. After calling this tool, "
                "wait for the user's selection instead of repeating the question in plain text."
            )
        return (
            "- User questions: When you truly need user input, ask briefly in plain text."
        )

    def clear_cache(self):
        """
        清空缓存

@@ -31,6 +31,7 @@ class MoviePilotTool(BaseTool, metaclass=ABCMeta):
    _username: Optional[str] = PrivateAttr(default=None)
    _stream_handler: Optional[StreamingHandler] = PrivateAttr(default=None)
    _require_admin: bool = PrivateAttr(default=False)
    _agent_context: dict = PrivateAttr(default_factory=dict)

    def __init__(self, session_id: str, user_id: str, **kwargs):
        super().__init__(**kwargs)

@@ -65,29 +66,27 @@ class MoviePilotTool(BaseTool, metaclass=ABCMeta):
        # 发送工具执行过程消息
        if self._stream_handler and self._stream_handler.is_streaming:
            if self._stream_handler.is_auto_flushing:
                # 渠道支持编辑:工具消息追加到 buffer,由定时刷新推送
                if tool_message:
                    self._stream_handler.emit(f"\n\n⚙️ => {tool_message}\n\n")
            else:
                # 渠道不支持编辑:取出 Agent 文字 + 工具消息合并独立发送
                agent_message = await self._stream_handler.take()
                messages = []
                if agent_message:
                    messages.append(agent_message)
                if tool_message:
                    messages.append(f"⚙️ => {tool_message}")
                if messages:
                    merged_message = "\n\n".join(messages)
                    await self.send_tool_message(merged_message)
        else:
            # 未启用流式传输,不发送任何工具消息内容
            pass

        logger.debug(f"Executing tool {self.name} with args: {kwargs}")

@@ -144,6 +143,12 @@ class MoviePilotTool(BaseTool, metaclass=ABCMeta):
        """
        self._stream_handler = stream_handler

    def set_agent_context(self, agent_context: Optional[dict]):
        """
        设置与当前 Agent 共享的上下文。
        """
        self._agent_context = agent_context or {}

    async def _check_permission(self) -> Optional[str]:
        """
        检查用户权限:

@@ -251,7 +256,9 @@ class MoviePilotTool(BaseTool, metaclass=ABCMeta):

        return None

    async def send_tool_message(
        self, message: str, title: str = "", image: Optional[str] = None
    ):
        """
        发送工具消息
        """

@@ -263,5 +270,6 @@ class MoviePilotTool(BaseTool, metaclass=ABCMeta):
                username=self._username,
                title=title,
                text=message,
                image=image,
            )
        )

@@ -30,6 +30,9 @@ from app.agent.tools.impl.search_torrents import SearchTorrentsTool
from app.agent.tools.impl.get_search_results import GetSearchResultsTool
from app.agent.tools.impl.search_web import SearchWebTool
from app.agent.tools.impl.send_message import SendMessageTool
from app.agent.tools.impl.ask_user_choice import AskUserChoiceTool
from app.agent.tools.impl.send_local_file import SendLocalFileTool
from app.agent.tools.impl.send_voice_message import SendVoiceMessageTool
from app.agent.tools.impl.query_schedulers import QuerySchedulersTool
from app.agent.tools.impl.run_scheduler import RunSchedulerTool
from app.agent.tools.impl.query_workflows import QueryWorkflowsTool

@@ -50,9 +53,14 @@ from app.agent.tools.impl.read_file import ReadFileTool
from app.agent.tools.impl.browse_webpage import BrowseWebpageTool
from app.agent.tools.impl.query_installed_plugins import QueryInstalledPluginsTool
from app.agent.tools.impl.query_plugin_capabilities import QueryPluginCapabilitiesTool
from app.agent.tools.impl.run_plugin_command import RunPluginCommandTool
from app.agent.tools.impl.run_slash_command import RunSlashCommandTool
from app.agent.tools.impl.list_slash_commands import ListSlashCommandsTool
from app.agent.tools.impl.query_custom_identifiers import QueryCustomIdentifiersTool
from app.agent.tools.impl.update_custom_identifiers import UpdateCustomIdentifiersTool
from app.core.plugin import PluginManager
from app.log import logger
from app.schemas.message import ChannelCapabilityManager
from app.schemas.types import MessageChannel
from .base import MoviePilotTool


@@ -61,6 +69,18 @@ class MoviePilotToolFactory:
    MoviePilot工具工厂
    """

    @staticmethod
    def _should_enable_choice_tool(channel: str = None) -> bool:
        if not channel:
            return False
        try:
            message_channel = MessageChannel(channel)
        except ValueError:
            return False
        return ChannelCapabilityManager.supports_buttons(
            message_channel
        ) and ChannelCapabilityManager.supports_callbacks(message_channel)

    @staticmethod
    def create_tools(
        session_id: str,

@@ -69,6 +89,7 @@ class MoviePilotToolFactory:
        source: str = None,
        username: str = None,
        stream_handler: Callable = None,
        agent_context: dict = None,
    ) -> List[MoviePilotTool]:
        """
        创建MoviePilot工具列表

@@ -125,13 +146,25 @@ class MoviePilotToolFactory:
            BrowseWebpageTool,
            QueryInstalledPluginsTool,
            QueryPluginCapabilitiesTool,
            RunPluginCommandTool,
            RunSlashCommandTool,
            ListSlashCommandsTool,
            QueryCustomIdentifiersTool,
            UpdateCustomIdentifiersTool,
        ]
        if MoviePilotToolFactory._should_enable_choice_tool(channel):
            tool_definitions.append(AskUserChoiceTool)
        tool_definitions.extend(
            [
                SendLocalFileTool,
                SendVoiceMessageTool,
            ]
        )
        # 创建内置工具
        for ToolClass in tool_definitions:
            tool = ToolClass(session_id=session_id, user_id=user_id)
            tool.set_message_attr(channel=channel, source=source, username=username)
            tool.set_stream_handler(stream_handler=stream_handler)
            tool.set_agent_context(agent_context=agent_context)
            tools.append(tool)

        # 加载插件提供的工具

@@ -155,6 +188,7 @@ class MoviePilotToolFactory:
                channel=channel, source=source, username=username
            )
            tool.set_stream_handler(stream_handler=stream_handler)
            tool.set_agent_context(agent_context=agent_context)
            tools.append(tool)
            plugin_tools_count += 1
            logger.debug(

app/agent/tools/impl/ask_user_choice.py (new file, 173 lines)
@@ -0,0 +1,173 @@
"""让用户通过按钮进行选择的工具。"""

from typing import List, Optional, Type

from pydantic import BaseModel, Field, model_validator

from app.agent.tools.base import MoviePilotTool, ToolChain
from app.chain.interaction import (
    AgentInteractionOption,
    agent_interaction_manager,
)
from app.log import logger
from app.schemas import Notification, NotificationType
from app.schemas.message import ChannelCapabilityManager
from app.schemas.types import MessageChannel


class UserChoiceOptionInput(BaseModel):
    """单个按钮选项。"""

    label: str = Field(..., description="Text shown on the button")
    value: str = Field(
        ...,
        description="The exact content that will be sent back to the agent after the user clicks this button",
    )

    @model_validator(mode="after")
    def validate_option(self):
        if not self.label.strip():
            raise ValueError("label 不能为空")
        if not self.value.strip():
            raise ValueError("value 不能为空")
        return self


class AskUserChoiceInput(BaseModel):
    """按钮选择工具输入。"""

    explanation: str = Field(
        ...,
        description="Clear explanation of why the agent needs the user to choose from buttons",
    )
    message: str = Field(
        ...,
        description="Question or prompt shown to the user together with the buttons",
    )
    title: Optional[str] = Field(
        None,
        description="Optional short title displayed above the question",
    )
    options: List[UserChoiceOptionInput] = Field(
        ...,
        description="Button options to show to the user",
    )

    @model_validator(mode="after")
    def validate_payload(self):
        if not self.message.strip():
            raise ValueError("message 不能为空")
        if not self.options:
            raise ValueError("options 至少需要提供一个")
        return self


class AskUserChoiceTool(MoviePilotTool):
    name: str = "ask_user_choice"
    description: str = (
        "Ask the user to choose from button options on channels that support interactive buttons. "
        "After the user clicks a button, the selected value will come back as the user's next message."
    )
    args_schema: Type[BaseModel] = AskUserChoiceInput
    require_admin: bool = False

    def get_tool_message(self, **kwargs) -> Optional[str]:
        message = kwargs.get("message", "") or ""
        if len(message) > 40:
            message = message[:40] + "..."
        return f"正在发送按钮选择: {message}"

    @staticmethod
    def _truncate_button_text(text: str, max_length: int) -> str:
        if max_length <= 0 or len(text) <= max_length:
            return text
        if max_length <= 3:
            return text[:max_length]
        return text[: max_length - 3] + "..."

    async def run(
        self,
        message: str,
        options: List[UserChoiceOptionInput],
        title: Optional[str] = None,
        **kwargs,
    ) -> str:
        if not self._channel or not self._source:
            return "当前不在可回传消息的会话中,无法发起按钮选择"

        try:
            channel = MessageChannel(self._channel)
        except ValueError:
            return f"不支持的消息渠道: {self._channel}"

        if not (
            ChannelCapabilityManager.supports_buttons(channel)
            and ChannelCapabilityManager.supports_callbacks(channel)
        ):
            return f"当前渠道 {channel.value} 不支持按钮选择"

        max_per_row = ChannelCapabilityManager.get_max_buttons_per_row(channel)
        max_rows = ChannelCapabilityManager.get_max_button_rows(channel)
        max_text_length = ChannelCapabilityManager.get_max_button_text_length(channel)
        max_options = max_per_row * max_rows
        if len(options) > max_options:
            return f"当前渠道最多支持 {max_options} 个按钮选项"

        choice_options = [
            AgentInteractionOption(
                label=option.label.strip(), value=option.value.strip()
            )
            for option in options
        ]
        request = agent_interaction_manager.create_request(
            session_id=self._session_id,
            user_id=str(self._user_id),
            channel=channel.value,
            source=self._source,
            username=self._username,
            title=title,
            prompt=message.strip(),
            options=choice_options,
        )

        buttons = []
        current_row = []
        for index, option in enumerate(choice_options, start=1):
            current_row.append(
                {
                    "text": self._truncate_button_text(option.label, max_text_length),
                    "callback_data": (
                        f"agent_interaction:choice:{request.request_id}:{index}"
                    ),
                }
            )
            if len(current_row) >= max_per_row:
                buttons.append(current_row)
                current_row = []
        if current_row:
            buttons.append(current_row)

        logger.info(
            "执行工具: %s, channel=%s, session_id=%s, options=%s",
            self.name,
            channel.value,
            self._session_id,
            len(choice_options),
        )

        await ToolChain().async_post_message(
            Notification(
                channel=channel,
                source=self._source,
                mtype=NotificationType.Agent,
                userid=self._user_id,
                username=self._username,
                title=title,
                text=message.strip(),
                buttons=buttons,
            )
        )

        self._agent_context["user_reply_sent"] = True
        self._agent_context["reply_mode"] = "button_choice"
        return f"已发送 {len(choice_options)} 个按钮选项,等待用户选择"
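The row-building loop in `AskUserChoiceTool.run` is a fixed-width chunking of the options into at most `max_per_row` buttons per row. An equivalent standalone sketch (`chunk_buttons` is an illustrative name, not part of the tool):

```python
def chunk_buttons(labels: list[str], max_per_row: int) -> list[list[str]]:
    """Split button labels into rows of at most max_per_row items each."""
    return [labels[i:i + max_per_row] for i in range(0, len(labels), max_per_row)]


# Three options at two per row yield two rows
print(chunk_buttons(["是", "否", "取消"], 2))
```

The tool builds the rows incrementally instead because each entry also needs per-button `callback_data`, but the layout rule is the same.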
|
||||
@@ -13,40 +13,48 @@ from app.schemas.types import MediaType, media_type_to_agent

 class GetRecommendationsInput(BaseModel):
     """获取推荐工具的输入参数模型"""
-    explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
-    source: Optional[str] = Field("tmdb_trending",
-                                  description="Recommendation source: "
-                                              "'tmdb_trending' for TMDB trending content, "
-                                              "'tmdb_movies' for TMDB popular movies, "
-                                              "'tmdb_tvs' for TMDB popular TV shows, "
-                                              "'douban_hot' for Douban popular content, "
-                                              "'douban_movie_hot' for Douban hot movies, "
-                                              "'douban_tv_hot' for Douban hot TV shows, "
-                                              "'douban_movie_showing' for Douban movies currently showing, "
-                                              "'douban_movies' for Douban latest movies, "
-                                              "'douban_tvs' for Douban latest TV shows, "
-                                              "'douban_movie_top250' for Douban movie TOP250, "
-                                              "'douban_tv_weekly_chinese' for Douban Chinese TV weekly chart, "
-                                              "'douban_tv_weekly_global' for Douban global TV weekly chart, "
-                                              "'douban_tv_animation' for Douban popular animation, "
-                                              "'bangumi_calendar' for Bangumi anime calendar")
-    media_type: Optional[str] = Field("all",
-                                      description="Allowed values: movie, tv, all")
-    limit: Optional[int] = Field(20,
-                                 description="Maximum number of recommendations to return (default: 20, maximum: 100)")
+
+    explanation: str = Field(
+        ...,
+        description="Clear explanation of why this tool is being used in the current context",
+    )
+    source: Optional[str] = Field(
+        "tmdb_trending",
+        description="Recommendation source: "
+        "'tmdb_trending' for TMDB trending content, "
+        "'tmdb_movies' for TMDB popular movies, "
+        "'tmdb_tvs' for TMDB popular TV shows, "
+        "'douban_hot' for Douban popular content, "
+        "'douban_movie_hot' for Douban hot movies, "
+        "'douban_tv_hot' for Douban hot TV shows, "
+        "'douban_movie_showing' for Douban movies currently showing, "
+        "'douban_movies' for Douban latest movies, "
+        "'douban_tvs' for Douban latest TV shows, "
+        "'douban_movie_top250' for Douban movie TOP250, "
+        "'douban_tv_weekly_chinese' for Douban Chinese TV weekly chart, "
+        "'douban_tv_weekly_global' for Douban global TV weekly chart, "
+        "'douban_tv_animation' for Douban popular animation, "
+        "'bangumi_calendar' for Bangumi anime calendar",
+    )
+    media_type: Optional[str] = Field(
+        "all", description="Allowed values: movie, tv, all"
+    )
+    page: Optional[int] = Field(
+        1, description="Page number for pagination (default: 1, 20 items per page)"
+    )


 class GetRecommendationsTool(MoviePilotTool):
     name: str = "get_recommendations"
-    description: str = "Get trending and popular media recommendations from various sources. Returns curated lists of popular movies, TV shows, and anime based on different criteria like trending, ratings, or calendar schedules."
+    description: str = "Get trending and popular media recommendations from various sources. Returns curated lists of popular movies, TV shows, and anime based on different criteria like trending, ratings, or calendar schedules. Supports pagination with 20 items per page."
     args_schema: Type[BaseModel] = GetRecommendationsInput

     def get_tool_message(self, **kwargs) -> Optional[str]:
         """根据推荐参数生成友好的提示消息"""
         source = kwargs.get("source", "tmdb_trending")
         media_type = kwargs.get("media_type", "all")
-        limit = kwargs.get("limit", 20)
+        page = kwargs.get("page", 1)

         source_map = {
             "tmdb_trending": "TMDB流行趋势",
             "tmdb_movies": "TMDB热门电影",
@@ -61,20 +69,29 @@ class GetRecommendationsTool(MoviePilotTool):
             "douban_tv_weekly_chinese": "豆瓣国产剧集榜",
             "douban_tv_weekly_global": "豆瓣全球剧集榜",
             "douban_tv_animation": "豆瓣热门动漫",
-            "bangumi_calendar": "番组计划"
+            "bangumi_calendar": "番组计划",
         }
         source_desc = source_map.get(source, source)

         message = f"正在获取推荐: {source_desc}"
         if media_type != "all":
             message += f" [{media_type}]"
-        message += f" (限制: {limit}条)"
+        message += f" (第{page}页)"

         return message

-    async def run(self, source: Optional[str] = "tmdb_trending",
-                  media_type: Optional[str] = "all", limit: Optional[int] = 20, **kwargs) -> str:
-        logger.info(f"执行工具: {self.name}, 参数: source={source}, media_type={media_type}, limit={limit}")
+    async def run(
+        self,
+        source: Optional[str] = "tmdb_trending",
+        media_type: Optional[str] = "all",
+        page: Optional[int] = 1,
+        **kwargs,
+    ) -> str:
+        page = max(1, page or 1)
+        page_size = 20
+        logger.info(
+            f"执行工具: {self.name}, 参数: source={source}, media_type={media_type}, page={page}"
+        )
         try:
             if media_type != "all":
                 media_type_enum = MediaType.from_agent(media_type)
@@ -85,73 +102,103 @@ class GetRecommendationsTool(MoviePilotTool):
             recommend_chain = RecommendChain()
             results = []
             if source == "tmdb_trending":
-                # async_tmdb_trending 只接受 page 参数,返回固定数量的结果
-                # 如果需要限制数量,需要在返回后截取
-                results = await recommend_chain.async_tmdb_trending(page=1)
-                if limit and limit > 0:
-                    results = results[:limit]
+                results = await recommend_chain.async_tmdb_trending(page=page)
             elif source == "tmdb_movies":
-                # async_tmdb_movies 接受 page 参数,返回固定数量的结果
-                results = await recommend_chain.async_tmdb_movies(page=1)
-                if limit and limit > 0:
-                    results = results[:limit]
+                results = await recommend_chain.async_tmdb_movies(page=page)
             elif source == "tmdb_tvs":
-                # async_tmdb_tvs 接受 page 参数,返回固定数量的结果
-                results = await recommend_chain.async_tmdb_tvs(page=1)
-                if limit and limit > 0:
-                    results = results[:limit]
+                results = await recommend_chain.async_tmdb_tvs(page=page)
             elif source == "douban_hot":
                 if media_type == "movie":
-                    results = await recommend_chain.async_douban_movie_hot(page=1, count=limit)
+                    results = await recommend_chain.async_douban_movie_hot(
+                        page=page, count=page_size
+                    )
                 elif media_type == "tv":
-                    results = await recommend_chain.async_douban_tv_hot(page=1, count=limit)
+                    results = await recommend_chain.async_douban_tv_hot(
+                        page=page, count=page_size
+                    )
                 else:  # all
-                    results.extend(await recommend_chain.async_douban_movie_hot(page=1, count=limit))
-                    results.extend(await recommend_chain.async_douban_tv_hot(page=1, count=limit))
+                    results.extend(
+                        await recommend_chain.async_douban_movie_hot(
+                            page=page, count=page_size
+                        )
+                    )
+                    results.extend(
+                        await recommend_chain.async_douban_tv_hot(
+                            page=page, count=page_size
+                        )
+                    )
             elif source == "douban_movie_hot":
-                results = await recommend_chain.async_douban_movie_hot(page=1, count=limit)
+                results = await recommend_chain.async_douban_movie_hot(
+                    page=page, count=page_size
+                )
             elif source == "douban_tv_hot":
-                results = await recommend_chain.async_douban_tv_hot(page=1, count=limit)
+                results = await recommend_chain.async_douban_tv_hot(
+                    page=page, count=page_size
+                )
             elif source == "douban_movie_showing":
-                results = await recommend_chain.async_douban_movie_showing(page=1, count=limit)
+                results = await recommend_chain.async_douban_movie_showing(
+                    page=page, count=page_size
+                )
             elif source == "douban_movies":
-                results = await recommend_chain.async_douban_movies(page=1, count=limit)
+                results = await recommend_chain.async_douban_movies(
+                    page=page, count=page_size
+                )
             elif source == "douban_tvs":
-                results = await recommend_chain.async_douban_tvs(page=1, count=limit)
+                results = await recommend_chain.async_douban_tvs(
+                    page=page, count=page_size
+                )
             elif source == "douban_movie_top250":
-                results = await recommend_chain.async_douban_movie_top250(page=1, count=limit)
+                results = await recommend_chain.async_douban_movie_top250(
+                    page=page, count=page_size
+                )
             elif source == "douban_tv_weekly_chinese":
-                results = await recommend_chain.async_douban_tv_weekly_chinese(page=1, count=limit)
+                results = await recommend_chain.async_douban_tv_weekly_chinese(
+                    page=page, count=page_size
+                )
             elif source == "douban_tv_weekly_global":
-                results = await recommend_chain.async_douban_tv_weekly_global(page=1, count=limit)
+                results = await recommend_chain.async_douban_tv_weekly_global(
+                    page=page, count=page_size
+                )
             elif source == "douban_tv_animation":
-                results = await recommend_chain.async_douban_tv_animation(page=1, count=limit)
+                results = await recommend_chain.async_douban_tv_animation(
+                    page=page, count=page_size
+                )
             elif source == "bangumi_calendar":
-                results = await recommend_chain.async_bangumi_calendar(page=1, count=limit)
+                results = await recommend_chain.async_bangumi_calendar(
+                    page=page, count=page_size
+                )
             else:
                 # 不支持的推荐来源
                 supported_sources = [
-                    "tmdb_trending", "tmdb_movies", "tmdb_tvs",
-                    "douban_hot", "douban_movie_hot", "douban_tv_hot",
-                    "douban_movie_showing", "douban_movies", "douban_tvs",
-                    "douban_movie_top250", "douban_tv_weekly_chinese",
-                    "douban_tv_weekly_global", "douban_tv_animation",
-                    "bangumi_calendar"
+                    "tmdb_trending",
+                    "tmdb_movies",
+                    "tmdb_tvs",
+                    "douban_hot",
+                    "douban_movie_hot",
+                    "douban_tv_hot",
+                    "douban_movie_showing",
+                    "douban_movies",
+                    "douban_tvs",
+                    "douban_movie_top250",
+                    "douban_tv_weekly_chinese",
+                    "douban_tv_weekly_global",
+                    "douban_tv_animation",
+                    "bangumi_calendar",
                 ]
                 return f"不支持的推荐来源: {source}。支持的来源包括: {', '.join(supported_sources)}"

             if results:
-                # 限制最多20条结果
+                # 对于TMDB来源,API自身按页返回,取前page_size条
                 total_count = len(results)
-                limited_results = results[:20]
+                page_results = results[:page_size]
                 # 精简字段,只保留关键信息
                 simplified_results = []
-                for r in limited_results:
+                for r in page_results:
                     # r 应该是字典格式(to_dict的结果),但为了安全起见进行检查
                     if not isinstance(r, dict):
                         logger.warning(f"推荐结果格式异常,跳过: {type(r)}")
                         continue

                     simplified = {
                         "title": r.get("title"),
                         "en_title": r.get("en_title"),
@@ -163,14 +210,19 @@ class GetRecommendationsTool(MoviePilotTool):
                         "douban_id": r.get("douban_id"),
                         "vote_average": r.get("vote_average"),
                         "poster_path": r.get("poster_path"),
-                        "detail_link": r.get("detail_link")
+                        "detail_link": r.get("detail_link"),
                     }
                     simplified_results.append(simplified)
-                result_json = json.dumps(simplified_results, ensure_ascii=False, indent=2)
-                # 如果结果被裁剪,添加提示信息
-                if total_count > 20:
-                    return f"注意:推荐结果共找到 {total_count} 条,为节省上下文空间,仅显示前 20 条结果。\n\n{result_json}"
-                return result_json
+                result_json = json.dumps(
+                    simplified_results, ensure_ascii=False, indent=2
+                )
+                has_more = total_count > page_size
+                payload_msg = f"第 {page} 页,当前页 {len(simplified_results)} 条结果。"
+                if has_more:
+                    payload_msg += (
+                        f" 可能有更多数据,可使用 page={page + 1} 获取下一页。"
+                    )
+                return f"{payload_msg}\n\n{result_json}"
             return "未找到推荐内容。"
         except Exception as e:
             logger.error(f"获取推荐失败: {e}", exc_info=True)
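The return path above keeps at most `page_size` items and appends a next-page hint whenever the chain returned more. A minimal sketch of that pattern (`paginate_hint` is a hypothetical helper, not a MoviePilot API):

```python
def paginate_hint(results, page, page_size=20):
    """Return the first page_size items plus a user-facing message,
    mirroring the has_more/payload_msg logic in the diff above."""
    page_items = results[:page_size]
    msg = f"page {page}: {len(page_items)} items"
    if len(results) > page_size:  # more data likely exists on the source
        msg += f", use page={page + 1} for the next page"
    return page_items, msg

items, msg = paginate_hint(list(range(25)), page=1)
print(len(items), msg)
# → 20 page 1: 20 items, use page=2 for the next page
```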
@@ -19,33 +19,60 @@ from ._torrent_search_utils import (

 class GetSearchResultsInput(BaseModel):
     """获取搜索结果工具的输入参数模型"""
-    explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
+
+    explanation: str = Field(
+        ...,
+        description="Clear explanation of why this tool is being used in the current context",
+    )
     site: Optional[List[str]] = Field(None, description="Site name filters")
     season: Optional[List[str]] = Field(None, description="Season or episode filters")
     free_state: Optional[List[str]] = Field(None, description="Promotion state filters")
     video_code: Optional[List[str]] = Field(None, description="Video codec filters")
     edition: Optional[List[str]] = Field(None, description="Edition filters")
     resolution: Optional[List[str]] = Field(None, description="Resolution filters")
-    release_group: Optional[List[str]] = Field(None, description="Release group filters")
-    title_pattern: Optional[str] = Field(None, description="Regular expression pattern to filter torrent titles (e.g., '4K|2160p|UHD', '1080p.*BluRay')")
-    show_filter_options: Optional[bool] = Field(False, description="Whether to return only optional filter options for re-checking available conditions")
+    release_group: Optional[List[str]] = Field(
+        None, description="Release group filters"
+    )
+    title_pattern: Optional[str] = Field(
+        None,
+        description="Regular expression pattern to filter torrent titles (e.g., '4K|2160p|UHD', '1080p.*BluRay')",
+    )
+    show_filter_options: Optional[bool] = Field(
+        False,
+        description="Whether to return only optional filter options for re-checking available conditions",
+    )
+    page: Optional[int] = Field(
+        1,
+        description="Page number for pagination (default: 1, each page returns up to 50 results)",
+    )


 class GetSearchResultsTool(MoviePilotTool):
     name: str = "get_search_results"
-    description: str = "Get cached torrent search results from search_torrents with optional filters. Returns at most the first 50 matches."
+    description: str = "Get cached torrent search results from search_torrents with optional filters. Supports pagination with up to 50 results per page."
     args_schema: Type[BaseModel] = GetSearchResultsInput

     def get_tool_message(self, **kwargs) -> Optional[str]:
         return "正在获取搜索结果"

-    async def run(self, site: Optional[List[str]] = None, season: Optional[List[str]] = None,
-                  free_state: Optional[List[str]] = None, video_code: Optional[List[str]] = None,
-                  edition: Optional[List[str]] = None, resolution: Optional[List[str]] = None,
-                  release_group: Optional[List[str]] = None, title_pattern: Optional[str] = None,
-                  show_filter_options: bool = False,
-                  **kwargs) -> str:
+    async def run(
+        self,
+        site: Optional[List[str]] = None,
+        season: Optional[List[str]] = None,
+        free_state: Optional[List[str]] = None,
+        video_code: Optional[List[str]] = None,
+        edition: Optional[List[str]] = None,
+        resolution: Optional[List[str]] = None,
+        release_group: Optional[List[str]] = None,
+        title_pattern: Optional[str] = None,
+        show_filter_options: bool = False,
+        page: Optional[int] = 1,
+        **kwargs,
+    ) -> str:
+        page = max(1, page or 1)
         logger.info(
-            f"执行工具: {self.name}, 参数: site={site}, season={season}, free_state={free_state}, video_code={video_code}, edition={edition}, resolution={resolution}, release_group={release_group}, title_pattern={title_pattern}, show_filter_options={show_filter_options}")
+            f"执行工具: {self.name}, 参数: site={site}, season={season}, free_state={free_state}, video_code={video_code}, edition={edition}, resolution={resolution}, release_group={release_group}, title_pattern={title_pattern}, show_filter_options={show_filter_options}, page={page}"
+        )

         try:
             items = await SearchChain().async_last_search_results() or []
@@ -79,8 +106,10 @@ class GetSearchResultsTool(MoviePilotTool):
             )
             if regex_pattern:
                 filtered_items = [
-                    item for item in filtered_items
-                    if item.torrent_info and item.torrent_info.title
+                    item
+                    for item in filtered_items
+                    if item.torrent_info
+                    and item.torrent_info.title
                     and regex_pattern.search(item.torrent_info.title)
                 ]
             if not filtered_items:
@@ -88,19 +117,37 @@ class GetSearchResultsTool(MoviePilotTool):

             total_count = len(filtered_items)
             filtered_ids = {id(item) for item in filtered_items}
-            matched_indices = [index for index, item in enumerate(items, start=1) if id(item) in filtered_ids]
-            limited_items = filtered_items[:TORRENT_RESULT_LIMIT]
-            limited_indices = matched_indices[:TORRENT_RESULT_LIMIT]
+            matched_indices = [
+                index
+                for index, item in enumerate(items, start=1)
+                if id(item) in filtered_ids
+            ]
+
+            # 分页
+            page_size = TORRENT_RESULT_LIMIT
+            start = (page - 1) * page_size
+            end = start + page_size
+            page_items = filtered_items[start:end]
+            page_indices = matched_indices[start:end]
+
+            if not page_items:
+                return f"第 {page} 页没有数据,共 {total_count} 条结果,共 {(total_count + page_size - 1) // page_size} 页。"
+
             results = [
                 simplify_search_result(item, index)
-                for item, index in zip(limited_items, limited_indices)
+                for item, index in zip(page_items, page_indices)
             ]
+            total_pages = (total_count + page_size - 1) // page_size
             payload = {
                 "total_count": total_count,
+                "page": page,
+                "total_pages": total_pages,
                 "results": results,
             }
-            if total_count > TORRENT_RESULT_LIMIT:
-                payload["message"] = f"搜索结果共找到 {total_count} 条,仅显示前 {TORRENT_RESULT_LIMIT} 条结果。"
+            if page < total_pages:
+                payload["message"] = (
+                    f"搜索结果共 {total_count} 条,当前第 {page}/{total_pages} 页,可使用 page={page + 1} 获取下一页。"
+                )
             return json.dumps(payload, ensure_ascii=False, indent=2)
         except Exception as e:
             error_message = f"获取搜索结果失败: {str(e)}"
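The search-results hunk above computes `total_pages` with integer ceiling division `(total_count + page_size - 1) // page_size` and slices the cached list at `(page - 1) * page_size`. A self-contained sketch of just that arithmetic (`slice_page` is an assumed helper name):

```python
def slice_page(items, page, page_size=50):
    """Slice one page out of a cached result list and compute total_pages
    with ceiling division, as in the get_search_results pagination above."""
    total_count = len(items)
    total_pages = (total_count + page_size - 1) // page_size  # ceil(total / size)
    start = (page - 1) * page_size
    return items[start:start + page_size], total_pages

page_items, total_pages = slice_page(list(range(120)), page=3, page_size=50)
print(len(page_items), total_pages)
# → 20 3
```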
@@ -120,8 +120,8 @@ class ListDirectoryTool(MoviePilotTool):
             result_json = json.dumps(simplified_items, ensure_ascii=False, indent=2)

             # 如果结果被裁剪,添加提示信息
-            if total_count > 20:
-                return f"注意:目录中共有 {total_count} 个项目,为节省上下文空间,仅显示前 20 个项目。\n\n{result_json}"
+            if total_count > 100:
+                return f"注意:目录中共有 {total_count} 个项目,为节省上下文空间,仅显示前 100 个项目。\n\n{result_json}"
             else:
                 return result_json
         except Exception as e:
79  app/agent/tools/impl/list_slash_commands.py  Normal file
@@ -0,0 +1,79 @@
+"""查询所有可用斜杠命令工具(系统命令 + 插件命令)"""
+
+import json
+from typing import Optional, Type
+
+from pydantic import BaseModel, Field
+
+from app.agent.tools.base import MoviePilotTool
+from app.log import logger
+
+
+class ListSlashCommandsInput(BaseModel):
+    """查询所有可用斜杠命令工具的输入参数模型"""
+
+    explanation: str = Field(
+        ...,
+        description="Clear explanation of why this tool is being used in the current context",
+    )
+
+
+class ListSlashCommandsTool(MoviePilotTool):
+    name: str = "list_slash_commands"
+    description: str = (
+        "List all available slash commands in the system, including system preset commands "
+        "(e.g. /cookiecloud, /sites, /subscribes, /downloading, /transfer, /restart, etc.) "
+        "and plugin-registered commands. "
+        "Use this tool to discover what slash commands are available before executing them with run_slash_command. "
+        "This is especially useful when the user describes an action in natural language and you need to "
+        "find the matching command to fulfill their request."
+    )
+    args_schema: Type[BaseModel] = ListSlashCommandsInput
+    require_admin: bool = True
+
+    def get_tool_message(self, **kwargs) -> Optional[str]:
+        """生成友好的提示消息"""
+        return "正在查询所有可用命令"
+
+    async def run(self, **kwargs) -> str:
+        logger.info(f"执行工具: {self.name}")
+
+        try:
+            from app.command import Command
+
+            command_obj = Command()
+            all_commands = command_obj.get_commands()
+
+            if not all_commands:
+                return "当前没有可用的命令"
+
+            commands_list = []
+            for cmd, info in all_commands.items():
+                cmd_info = {
+                    "command": cmd,
+                    "description": info.get("description", ""),
+                }
+                if info.get("category"):
+                    cmd_info["category"] = info["category"]
+                # 标识命令类型
+                if info.get("type") == "scheduler":
+                    cmd_info["type"] = "scheduler"
+                elif info.get("pid"):
+                    cmd_info["type"] = "plugin"
+                    cmd_info["plugin_id"] = info["pid"]
+                else:
+                    cmd_info["type"] = "system"
+                commands_list.append(cmd_info)
+
+            result = {
+                "total": len(commands_list),
+                "commands": commands_list,
+            }
+            return json.dumps(result, ensure_ascii=False, indent=2)
+
+        except Exception as e:
+            logger.error(f"查询可用命令失败: {e}", exc_info=True)
+            return json.dumps(
+                {"success": False, "message": f"查询可用命令时发生错误: {str(e)}"},
+                ensure_ascii=False,
+            )
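The type-detection branch in `run` above checks `type == "scheduler"` first, then falls back to the presence of a `pid` for plugin commands, and defaults to `system`. Sketched in isolation (hypothetical helper; the registry dicts mimic what `Command.get_commands()` entries look like in the diff):

```python
def classify_command(info):
    """Classify a command registry entry the way ListSlashCommandsTool does:
    explicit scheduler type wins, then plugin (has a pid), else system."""
    if info.get("type") == "scheduler":
        return "scheduler"
    if info.get("pid"):  # plugin-registered command carries its plugin id
        return "plugin"
    return "system"

print(classify_command({"pid": "CloudSync"}))
# → plugin
```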
66  app/agent/tools/impl/query_custom_identifiers.py  Normal file
@@ -0,0 +1,66 @@
+"""查询自定义识别词工具"""
+
+import json
+from typing import Optional, Type
+
+from pydantic import BaseModel, Field
+
+from app.agent.tools.base import MoviePilotTool
+from app.db.systemconfig_oper import SystemConfigOper
+from app.log import logger
+from app.schemas.types import SystemConfigKey
+
+
+class QueryCustomIdentifiersInput(BaseModel):
+    """查询自定义识别词工具的输入参数模型"""
+
+    explanation: str = Field(
+        ...,
+        description="Clear explanation of why this tool is being used in the current context",
+    )
+
+
+class QueryCustomIdentifiersTool(MoviePilotTool):
+    name: str = "query_custom_identifiers"
+    description: str = (
+        "Query all currently configured custom identifiers (自定义识别词). "
+        "Returns the list of identifier rules used for preprocessing torrent/file names before media recognition. "
+        "Use this tool to check existing rules before adding new ones to avoid duplicates."
+    )
+    args_schema: Type[BaseModel] = QueryCustomIdentifiersInput
+
+    def get_tool_message(self, **kwargs) -> Optional[str]:
+        """生成友好的提示消息"""
+        return "正在查询自定义识别词"
+
+    async def run(self, **kwargs) -> str:
+        logger.info(f"执行工具: {self.name}")
+        try:
+            system_config_oper = SystemConfigOper()
+            identifiers = system_config_oper.get(SystemConfigKey.CustomIdentifiers)
+            if identifiers:
+                return json.dumps(
+                    {
+                        "success": True,
+                        "count": len(identifiers),
+                        "identifiers": identifiers,
+                    },
+                    ensure_ascii=False,
+                    indent=2,
+                )
+            return json.dumps(
+                {
+                    "success": True,
+                    "count": 0,
+                    "identifiers": [],
+                    "message": "当前没有配置自定义识别词",
+                },
+                ensure_ascii=False,
+                indent=2,
+            )
+        except Exception as e:
+            logger.error(f"查询自定义识别词失败: {e}")
+            return json.dumps(
+                {"success": False, "message": f"查询自定义识别词时发生错误: {str(e)}"},
+                ensure_ascii=False,
+            )
@@ -58,14 +58,7 @@ class QueryInstalledPluginsTool(MoviePilotTool):
                 }
             )

-            total_count = len(plugins_list)
             result_json = json.dumps(plugins_list, ensure_ascii=False, indent=2)
-
-            if total_count > 50:
-                limited_plugins = plugins_list[:50]
-                limited_json = json.dumps(limited_plugins, ensure_ascii=False, indent=2)
-                return f"注意:共找到 {total_count} 个已安装插件,为节省上下文空间,仅显示前 50 个。\n\n{limited_json}"
-
             return result_json
         except Exception as e:
             logger.error(f"查询已安装插件失败: {e}", exc_info=True)
@@ -10,52 +10,70 @@ from app.chain.mediaserver import MediaServerChain
 from app.helper.service import ServiceConfigHelper
 from app.log import logger

+PAGE_SIZE = 20
+

 class QueryLibraryLatestInput(BaseModel):
     """查询媒体服务器最近入库影片工具的输入参数模型"""
-    explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
-    server: Optional[str] = Field(None, description="Media server name (optional, if not specified queries all enabled media servers)")
-    count: Optional[int] = Field(20, description="Number of items to return (default: 20)")
+
+    explanation: str = Field(
+        ...,
+        description="Clear explanation of why this tool is being used in the current context",
+    )
+    server: Optional[str] = Field(
+        None,
+        description="Media server name (optional, if not specified queries all enabled media servers)",
+    )
+    page: Optional[int] = Field(
+        1, description="Page number for pagination (default: 1, 20 items per page)"
+    )


 class QueryLibraryLatestTool(MoviePilotTool):
     name: str = "query_library_latest"
-    description: str = "Query the latest media items added to the media server (Plex, Emby, Jellyfin). Returns recently added movies and TV series with their titles, images, links, and other metadata."
+    description: str = "Query the latest media items added to the media server (Plex, Emby, Jellyfin). Returns recently added movies and TV series with their titles, images, links, and other metadata. Supports pagination with 20 items per page."
     args_schema: Type[BaseModel] = QueryLibraryLatestInput

     def get_tool_message(self, **kwargs) -> Optional[str]:
         """根据查询参数生成友好的提示消息"""
         server = kwargs.get("server")
-        count = kwargs.get("count", 20)
+        page = kwargs.get("page", 1)

         parts = ["正在查询媒体服务器最近入库影片"]

         if server:
             parts.append(f"服务器: {server}")
         else:
             parts.append("所有服务器")

-        parts.append(f"数量: {count}条")
+        parts.append(f"第{page}页")

         return " | ".join(parts)

-    async def run(self, server: Optional[str] = None, count: Optional[int] = 20, **kwargs) -> str:
-        logger.info(f"执行工具: {self.name}, 参数: server={server}, count={count}")
+    async def run(
+        self, server: Optional[str] = None, page: Optional[int] = 1, **kwargs
+    ) -> str:
+        page = max(1, page or 1)
+        # 为了支持分页,需要获取足够多的数据再切片
+        fetch_count = page * PAGE_SIZE
+        logger.info(f"执行工具: {self.name}, 参数: server={server}, page={page}")
         try:
             media_chain = MediaServerChain()
             results = []

             # 如果没有指定服务器,获取所有启用的媒体服务器
             if not server:
                 mediaservers = ServiceConfigHelper.get_mediaserver_configs()
                 enabled_servers = [ms.name for ms in mediaservers if ms.enabled]

                 if not enabled_servers:
                     return "未找到启用的媒体服务器"

                 # 遍历所有启用的服务器
                 for server_name in enabled_servers:
-                    latest_items = media_chain.latest(server=server_name, count=count, username=self._username)
+                    latest_items = media_chain.latest(
+                        server=server_name, count=fetch_count, username=self._username
+                    )
                     if latest_items:
                         for item in latest_items:
                             item_dict = item.model_dump(exclude_none=True)
@@ -63,24 +81,37 @@ class QueryLibraryLatestTool(MoviePilotTool):
                             results.append(item_dict)
             else:
                 # 查询指定服务器
-                latest_items = media_chain.latest(server=server, count=count, username=self._username)
+                latest_items = media_chain.latest(
+                    server=server, count=fetch_count, username=self._username
+                )
                 if latest_items:
                     for item in latest_items:
                         item_dict = item.model_dump(exclude_none=True)
                         item_dict["server"] = server
                         results.append(item_dict)

             if not results:
                 server_info = f"服务器 {server}" if server else "所有服务器"
                 return f"未找到 {server_info} 的最近入库影片"

-            # 限制返回数量,避免结果过多
-            if len(results) > count:
-                results = results[:count]
-
-            return json.dumps(results, ensure_ascii=False, indent=2)
+            # 分页
+            total_count = len(results)
+            start = (page - 1) * PAGE_SIZE
+            end = start + PAGE_SIZE
+            page_results = results[start:end]
+
+            if not page_results:
+                total_pages = (total_count + PAGE_SIZE - 1) // PAGE_SIZE
+                return f"第 {page} 页没有数据,共 {total_count} 条结果,共 {total_pages} 页。"
+
+            total_pages = (total_count + PAGE_SIZE - 1) // PAGE_SIZE
+            payload_msg = f"第 {page}/{total_pages} 页,当前页 {len(page_results)} 条结果,共 {total_count} 条。"
+            if page < total_pages:
+                payload_msg += f" 可使用 page={page + 1} 获取下一页。"
+
+            result_json = json.dumps(page_results, ensure_ascii=False, indent=2)
+            return f"{payload_msg}\n\n{result_json}"

         except Exception as e:
             logger.error(f"查询媒体服务器最近入库影片失败: {e}", exc_info=True)
             return f"查询媒体服务器最近入库影片时发生错误: {str(e)}"
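Because `media_chain.latest` only accepts a `count`, the new code above over-fetches `page * PAGE_SIZE` items and slices the requested page out. A sketch of that trade-off (the `fetch` callable is a stand-in for the real media-server query, not a MoviePilot API):

```python
PAGE_SIZE = 20

def fetch_window(fetch, page):
    """Over-fetch page * PAGE_SIZE items from a count-only API, then slice
    out just the requested page, mirroring fetch_count in the hunk above."""
    results = fetch(count=page * PAGE_SIZE)  # API has no offset, only a count
    start = (page - 1) * PAGE_SIZE
    return results[start:start + PAGE_SIZE]

fake = lambda count: list(range(min(count, 45)))  # pretend the server holds 45 items
print(len(fetch_window(fake, page=3)))
# → 5
```

Later pages get progressively more expensive with this scheme, which is the cost of paginating over an API that cannot skip ahead.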
@@ -29,7 +29,7 @@ class QueryPluginCapabilitiesTool(MoviePilotTool):
     name: str = "query_plugin_capabilities"
     description: str = (
         "Query the capabilities of installed plugins, including supported commands and scheduled services. "
-        "Commands are slash-commands (e.g. /xxx) that can be executed via the run_plugin_command tool. "
+        "Commands are slash-commands (e.g. /xxx) that can be executed via the run_slash_command tool. "
         "Scheduled services are periodic tasks that can be triggered via the run_scheduler tool. "
         "Optionally specify a plugin_id to query a specific plugin, or omit to query all running plugins."
     )
@@ -11,36 +11,61 @@ from app.db.models.subscribehistory import SubscribeHistory
 from app.log import logger
 from app.schemas.types import media_type_to_agent

+PAGE_SIZE = 20
+

 class QuerySubscribeHistoryInput(BaseModel):
     """查询订阅历史工具的输入参数模型"""
-    explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
-    media_type: Optional[str] = Field("all", description="Allowed values: movie, tv, all")
-    name: Optional[str] = Field(None, description="Filter by media name (partial match, optional)")
+
+    explanation: str = Field(
+        ...,
+        description="Clear explanation of why this tool is being used in the current context",
+    )
+    media_type: Optional[str] = Field(
+        "all", description="Allowed values: movie, tv, all"
+    )
+    name: Optional[str] = Field(
+        None, description="Filter by media name (partial match, optional)"
+    )
+    page: Optional[int] = Field(
+        1,
+        description="Page number for pagination (default: 1, 20 items per page). Ignored when name filter is provided.",
+    )


 class QuerySubscribeHistoryTool(MoviePilotTool):
     name: str = "query_subscribe_history"
-    description: str = "Query subscription history records. Shows completed subscriptions with their details including name, type, rating, completion date, and other subscription information. Supports filtering by media type and name. Returns up to 30 records."
+    description: str = "Query subscription history records. Shows completed subscriptions with their details including name, type, rating, completion date, and other subscription information. Supports filtering by media type and name. Supports pagination with 20 records per page."
     args_schema: Type[BaseModel] = QuerySubscribeHistoryInput

     def get_tool_message(self, **kwargs) -> Optional[str]:
         """根据查询参数生成友好的提示消息"""
         media_type = kwargs.get("media_type", "all")
         name = kwargs.get("name")
+        page = kwargs.get("page", 1)

         parts = ["正在查询订阅历史"]

         if media_type != "all":
             parts.append(f"类型: {media_type}")
         if name:
             parts.append(f"名称: {name}")
-
-        return " | ".join(parts) if len(parts) > 1 else parts[0]
+        else:
+            parts.append(f"第{page}页")

-    async def run(self, media_type: Optional[str] = "all",
-                  name: Optional[str] = None, **kwargs) -> str:
-        logger.info(f"执行工具: {self.name}, 参数: media_type={media_type}, name={name}")
+        return " | ".join(parts)
+
+    async def run(
+        self,
+        media_type: Optional[str] = "all",
+        name: Optional[str] = None,
+        page: Optional[int] = 1,
+        **kwargs,
+    ) -> str:
+        page = max(1, page or 1)
+        logger.info(
+            f"执行工具: {self.name}, 参数: media_type={media_type}, name={name}, page={page}"
+        )

         try:
             if media_type not in ["all", "movie", "tv"]:
@@ -48,69 +73,115 @@ class QuerySubscribeHistoryTool(MoviePilotTool):
|
||||
|
||||
# 获取数据库会话
|
||||
async with AsyncSessionFactory() as db:
|
||||
# 根据类型查询
|
||||
if media_type == "all":
|
||||
# 查询所有类型,需要分别查询电影和电视剧
|
||||
movie_history = await SubscribeHistory.async_list_by_type(db, mtype="movie", page=1, count=100)
|
||||
tv_history = await SubscribeHistory.async_list_by_type(db, mtype="tv", page=1, count=100)
|
||||
all_history = list(movie_history) + list(tv_history)
|
||||
# 按日期排序
|
||||
all_history.sort(key=lambda x: x.date or "", reverse=True)
|
||||
else:
|
||||
# 查询指定类型
|
||||
all_history = await SubscribeHistory.async_list_by_type(db, mtype=media_type, page=1, count=100)
|
||||
|
||||
# 按名称过滤
|
||||
filtered_history = []
|
||||
if name:
|
||||
# 有名称过滤时,获取足够多的记录在内存中过滤,不分页
|
||||
fetch_count = 500
|
||||
if media_type == "all":
|
||||
movie_history = await SubscribeHistory.async_list_by_type(
|
||||
db, mtype="movie", page=1, count=fetch_count
|
||||
)
|
||||
tv_history = await SubscribeHistory.async_list_by_type(
|
||||
db, mtype="tv", page=1, count=fetch_count
|
||||
)
|
||||
all_history = list(movie_history) + list(tv_history)
|
||||
all_history.sort(key=lambda x: x.date or "", reverse=True)
|
||||
else:
|
||||
all_history = list(
|
||||
await SubscribeHistory.async_list_by_type(
|
||||
db, mtype=media_type, page=1, count=fetch_count
|
||||
)
|
||||
)
|
||||
|
||||
# 按名称过滤
|
||||
name_lower = name.lower()
|
||||
for record in all_history:
|
||||
if record.name and name_lower in record.name.lower():
|
||||
filtered_history.append(record)
|
||||
filtered_history = [
|
||||
record
|
||||
for record in all_history
|
||||
if record.name and name_lower in record.name.lower()
|
||||
]
|
||||
|
||||
if not filtered_history:
|
||||
return "未找到相关订阅历史记录"
|
||||
|
||||
# 名称过滤时直接返回所有匹配结果,不分页
|
||||
simplified_records = self._simplify_records(filtered_history)
|
||||
result_json = json.dumps(
|
||||
simplified_records, ensure_ascii=False, indent=2
|
||||
)
|
||||
return result_json
|
||||
else:
|
||||
filtered_history = all_history
|
||||
|
||||
# 无名称过滤时,直接利用数据库分页
|
||||
if media_type == "all":
|
||||
movie_history = await SubscribeHistory.async_list_by_type(
|
||||
db, mtype="movie", page=1, count=page * PAGE_SIZE
|
||||
)
|
||||
tv_history = await SubscribeHistory.async_list_by_type(
|
||||
db, mtype="tv", page=1, count=page * PAGE_SIZE
|
||||
)
|
||||
all_history = list(movie_history) + list(tv_history)
|
||||
all_history.sort(key=lambda x: x.date or "", reverse=True)
|
||||
filtered_history = all_history
|
||||
else:
|
||||
filtered_history = list(
|
||||
await SubscribeHistory.async_list_by_type(
|
||||
db, mtype=media_type, page=1, count=page * PAGE_SIZE
|
||||
)
|
||||
)
|
||||
|
||||
if not filtered_history:
|
||||
return "未找到相关订阅历史记录"
|
||||
|
||||
# 限制最多30条
|
||||
|
||||
# 分页切片
|
||||
total_count = len(filtered_history)
|
||||
limited_history = filtered_history[:30]
|
||||
|
||||
# 转换为字典格式,只保留关键信息
|
||||
simplified_records = []
|
||||
for record in limited_history:
|
||||
simplified = {
|
||||
"id": record.id,
|
||||
"name": record.name,
|
||||
"year": record.year,
|
||||
"type": media_type_to_agent(record.type),
|
||||
"season": record.season,
|
||||
"tmdbid": record.tmdbid,
|
||||
"doubanid": record.doubanid,
|
||||
"bangumiid": record.bangumiid,
|
||||
"poster": record.poster,
|
||||
"vote": record.vote,
|
||||
"total_episode": record.total_episode,
|
||||
"date": record.date,
|
||||
"username": record.username
|
||||
}
|
||||
# 添加过滤规则信息(如果有)
|
||||
if record.filter:
|
||||
simplified["filter"] = record.filter
|
||||
if record.quality:
|
||||
simplified["quality"] = record.quality
|
||||
if record.resolution:
|
||||
simplified["resolution"] = record.resolution
|
||||
simplified_records.append(simplified)
|
||||
|
||||
result_json = json.dumps(simplified_records, ensure_ascii=False, indent=2)
|
||||
|
||||
# 如果结果被裁剪,添加提示信息
|
||||
if total_count > 30:
|
||||
return f"注意:查询结果共找到 {total_count} 条,为节省上下文空间,仅显示前 30 条结果。\n\n{result_json}"
|
||||
|
||||
return result_json
|
||||
start = (page - 1) * PAGE_SIZE
|
||||
end = start + PAGE_SIZE
|
||||
page_records = filtered_history[start:end]
|
||||
|
||||
if not page_records:
|
||||
return f"第 {page} 页没有数据。"
|
||||
|
||||
simplified_records = self._simplify_records(page_records)
|
||||
result_json = json.dumps(
|
||||
simplified_records, ensure_ascii=False, indent=2
|
||||
)
|
||||
|
||||
has_more = total_count > end
|
||||
payload_msg = f"第 {page} 页,当前页 {len(simplified_records)} 条结果。"
|
||||
if has_more:
|
||||
payload_msg += (
|
||||
f" 可能有更多数据,可使用 page={page + 1} 获取下一页。"
|
||||
)
|
||||
|
||||
return f"{payload_msg}\n\n{result_json}"
|
||||
except Exception as e:
|
||||
logger.error(f"查询订阅历史失败: {e}", exc_info=True)
|
||||
return f"查询订阅历史时发生错误: {str(e)}"
|
||||
|
||||
@staticmethod
|
||||
def _simplify_records(records) -> list:
|
||||
"""转换为字典格式,只保留关键信息"""
|
||||
simplified_records = []
|
||||
for record in records:
|
||||
simplified = {
|
||||
"id": record.id,
|
||||
"name": record.name,
|
||||
"year": record.year,
|
||||
"type": media_type_to_agent(record.type),
|
||||
"season": record.season,
|
||||
"tmdbid": record.tmdbid,
|
||||
"doubanid": record.doubanid,
|
||||
"bangumiid": record.bangumiid,
|
||||
"poster": record.poster,
|
||||
"vote": record.vote,
|
||||
"total_episode": record.total_episode,
|
||||
"date": record.date,
|
||||
"username": record.username,
|
||||
}
|
||||
if record.filter:
|
||||
simplified["filter"] = record.filter
|
||||
if record.quality:
|
||||
simplified["quality"] = record.quality
|
||||
if record.resolution:
|
||||
simplified["resolution"] = record.resolution
|
||||
simplified_records.append(simplified)
|
||||
return simplified_records
|
||||
|
||||
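The no-filter branch above over-fetches `page * PAGE_SIZE` rows and slices out the requested window, reporting more data whenever records remain past the slice. A minimal standalone sketch of that windowing logic (the helper name and the 20-per-page size are assumptions taken from the `page` field description, not code from the repository):

```python
# Hypothetical sketch of the page-window logic used by the history tool.
PAGE_SIZE = 20  # per-page size stated in the `page` field description

def paginate(records: list, page: int, page_size: int = PAGE_SIZE):
    """Return (page_records, has_more) for a 1-based page number."""
    page = max(1, page or 1)  # same clamping as run()
    start = (page - 1) * page_size
    end = start + page_size
    return records[start:end], len(records) > end
```

An out-of-range page simply yields an empty slice, which is why `run()` can answer "第 X 页没有数据。" without a separate bounds check.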
@@ -11,6 +11,8 @@ from app.log import logger
from app.schemas.subscribe import Subscribe as SubscribeSchema
from app.schemas.types import MediaType

PAGE_SIZE = 100

QUERY_SUBSCRIBE_OUTPUT_FIELDS = [
    "id",
    "name",
@@ -35,47 +37,76 @@ QUERY_SUBSCRIBE_OUTPUT_FIELDS = [
    "custom_words",
    "media_category",
    "filter_groups",
    "episode_group"
    "episode_group",
]


class QuerySubscribesInput(BaseModel):
    """查询订阅工具的输入参数模型"""
    explanation: str = Field(..., description="Clear explanation of why this tool is being used in the current context")
    status: Optional[str] = Field("all",
                                  description="Filter subscriptions by status: 'R' for enabled subscriptions, 'S' for paused ones, 'all' for all subscriptions")
    media_type: Optional[str] = Field("all",
                                      description="Allowed values: movie, tv, all")
    tmdb_id: Optional[int] = Field(None, description="Filter by TMDB ID to check if a specific media is already subscribed")
    douban_id: Optional[str] = Field(None, description="Filter by Douban ID to check if a specific media is already subscribed")

    explanation: str = Field(
        ...,
        description="Clear explanation of why this tool is being used in the current context",
    )
    status: Optional[str] = Field(
        "all",
        description="Filter subscriptions by status: 'R' for enabled subscriptions, 'S' for paused ones, 'all' for all subscriptions",
    )
    media_type: Optional[str] = Field(
        "all", description="Allowed values: movie, tv, all"
    )
    tmdb_id: Optional[int] = Field(
        None,
        description="Filter by TMDB ID to check if a specific media is already subscribed",
    )
    douban_id: Optional[str] = Field(
        None,
        description="Filter by Douban ID to check if a specific media is already subscribed",
    )
    page: Optional[int] = Field(
        1, description="Page number for pagination (default: 1, 100 items per page)"
    )


class QuerySubscribesTool(MoviePilotTool):
    name: str = "query_subscribes"
    description: str = "Query subscription status and list user subscriptions. Returns full subscription parameters for each matched subscription."
    description: str = "Query subscription status and list user subscriptions. Returns full subscription parameters for each matched subscription. Supports pagination with 100 items per page."
    args_schema: Type[BaseModel] = QuerySubscribesInput

    def get_tool_message(self, **kwargs) -> Optional[str]:
        """根据查询参数生成友好的提示消息"""
        status = kwargs.get("status", "all")
        media_type = kwargs.get("media_type", "all")
        page = kwargs.get("page", 1)

        parts = ["正在查询订阅"]

        # 根据状态过滤条件生成提示
        if status != "all":
            status_map = {"R": "已启用", "S": "已暂停"}
            parts.append(f"状态: {status_map.get(status, status)}")

        # 根据媒体类型过滤条件生成提示
        if media_type != "all":
            parts.append(f"类型: {media_type}")

        return " | ".join(parts) if len(parts) > 1 else parts[0]

    async def run(self, status: Optional[str] = "all", media_type: Optional[str] = "all",
                  tmdb_id: Optional[int] = None, douban_id: Optional[str] = None, **kwargs) -> str:
        logger.info(f"执行工具: {self.name}, 参数: status={status}, media_type={media_type}, tmdb_id={tmdb_id}, douban_id={douban_id}")
        parts.append(f"第{page}页")

        return " | ".join(parts)

    async def run(
        self,
        status: Optional[str] = "all",
        media_type: Optional[str] = "all",
        tmdb_id: Optional[int] = None,
        douban_id: Optional[str] = None,
        page: Optional[int] = 1,
        **kwargs,
    ) -> str:
        page = max(1, page or 1)
        logger.info(
            f"执行工具: {self.name}, 参数: status={status}, media_type={media_type}, tmdb_id={tmdb_id}, douban_id={douban_id}, page={page}"
        )
        try:
            if media_type != "all" and not MediaType.from_agent(media_type):
                return f"错误:无效的媒体类型 '{media_type}',支持的类型:'movie', 'tv', 'all'"
@@ -86,7 +117,10 @@ class QuerySubscribesTool(MoviePilotTool):
            for sub in subscribes:
                if status != "all" and sub.state != status:
                    continue
                if media_type != "all" and sub.type != MediaType.from_agent(media_type).value:
                if (
                    media_type != "all"
                    and sub.type != MediaType.from_agent(media_type).value
                ):
                    continue
                if tmdb_id is not None and sub.tmdbid != tmdb_id:
                    continue
@@ -94,21 +128,30 @@ class QuerySubscribesTool(MoviePilotTool):
                    continue
                filtered_subscribes.append(sub)
            if filtered_subscribes:
                # 限制最多50条结果
                total_count = len(filtered_subscribes)
                limited_subscribes = filtered_subscribes[:50]
                # 分页
                start = (page - 1) * PAGE_SIZE
                end = start + PAGE_SIZE
                page_subscribes = filtered_subscribes[start:end]

                if not page_subscribes:
                    total_pages = (total_count + PAGE_SIZE - 1) // PAGE_SIZE
                    return f"第 {page} 页没有数据,共 {total_count} 条结果,共 {total_pages} 页。"

                full_subscribes = [
                    SubscribeSchema.model_validate(s, from_attributes=True).model_dump(
                        include=set(QUERY_SUBSCRIBE_OUTPUT_FIELDS),
                        exclude_none=True
                        include=set(QUERY_SUBSCRIBE_OUTPUT_FIELDS), exclude_none=True
                    )
                    for s in limited_subscribes
                    for s in page_subscribes
                ]
                result_json = json.dumps(full_subscribes, ensure_ascii=False, indent=2)
                # 如果结果被裁剪,添加提示信息
                if total_count > 50:
                    return f"注意:查询结果共找到 {total_count} 条,为节省上下文空间,仅显示前 50 条结果。\n\n{result_json}"
                return result_json

                total_pages = (total_count + PAGE_SIZE - 1) // PAGE_SIZE
                payload_msg = f"第 {page}/{total_pages} 页,当前页 {len(page_subscribes)} 条结果,共 {total_count} 条。"
                if page < total_pages:
                    payload_msg += f" 可使用 page={page + 1} 获取下一页。"

                return f"{payload_msg}\n\n{result_json}"
            return "未找到相关订阅"
        except Exception as e:
            logger.error(f"查询订阅失败: {e}", exc_info=True)
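`total_pages` above is computed with the integer ceiling-division idiom `(total_count + PAGE_SIZE - 1) // PAGE_SIZE`, which rounds any partial page up without touching floats. A tiny sketch of the idiom (the function name is illustrative only):

```python
PAGE_SIZE = 100  # matches the constant introduced at the top of the file

def total_pages(total_count: int, page_size: int = PAGE_SIZE) -> int:
    """Ceiling division without floats: any partial page counts as a full page."""
    return (total_count + page_size - 1) // page_size
```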
@@ -1,4 +1,4 @@
"""运行插件命令工具"""
"""运行斜杠命令工具(系统命令 + 插件命令)"""

import json
from typing import Optional, Type
@@ -7,13 +7,12 @@ from pydantic import BaseModel, Field

from app.agent.tools.base import MoviePilotTool
from app.core.event import eventmanager
from app.core.plugin import PluginManager
from app.log import logger
from app.schemas.types import EventType, MessageChannel


class RunPluginCommandInput(BaseModel):
    """运行插件命令工具的输入参数模型"""
class RunSlashCommandInput(BaseModel):
    """运行斜杠命令工具的输入参数模型"""

    explanation: str = Field(
        ...,
@@ -23,26 +22,30 @@ class RunPluginCommandInput(BaseModel):
        ...,
        description="The slash command to execute, e.g. '/cookiecloud'. "
        "Must start with '/'. Can include arguments after the command, e.g. '/command arg1 arg2'. "
        "Use query_plugin_capabilities tool to discover available commands first.",
        "Use query_plugin_capabilities tool to discover available plugin commands, "
        "or list_slash_commands tool to discover all available commands (including system commands).",
    )


class RunPluginCommandTool(MoviePilotTool):
    name: str = "run_plugin_command"
class RunSlashCommandTool(MoviePilotTool):
    name: str = "run_slash_command"
    description: str = (
        "Execute a plugin command by sending a CommandExcute event. "
        "Plugin commands are slash-commands (starting with '/') registered by plugins. "
        "Use the query_plugin_capabilities tool first to discover available commands and their descriptions. "
        "Execute a slash command (system or plugin) by sending a CommandExcute event. "
        "This tool supports ALL registered slash commands, including: "
        "1) System preset commands (e.g. /cookiecloud, /sites, /subscribes, /downloading, /transfer, /restart, etc.) "
        "2) Plugin commands registered by installed plugins. "
        "Use the query_plugin_capabilities tool to discover plugin commands, "
        "or the list_slash_commands tool to discover all available commands. "
        "The command will be executed asynchronously. "
        "Note: This tool triggers the command execution but the actual processing happens in the background."
    )
    args_schema: Type[BaseModel] = RunPluginCommandInput
    args_schema: Type[BaseModel] = RunSlashCommandInput
    require_admin: bool = True

    def get_tool_message(self, **kwargs) -> Optional[str]:
        """生成友好的提示消息"""
        command = kwargs.get("command", "")
        return f"正在执行插件命令: {command}"
        return f"正在执行命令: {command}"

    async def run(self, command: str, **kwargs) -> str:
        logger.info(f"执行工具: {self.name}, 参数: command={command}")
@@ -52,21 +55,19 @@ class RunPluginCommandTool(MoviePilotTool):
        if not command.startswith("/"):
            command = f"/{command}"

        # 验证命令是否存在
        plugin_manager = PluginManager()
        registered_commands = plugin_manager.get_plugin_commands()
        # 从全局 Command 单例中验证命令是否存在(包含系统预设命令 + 插件命令 + 其他命令)
        from app.command import Command

        cmd_name = command.split()[0]
        matched_command = None
        for cmd in registered_commands:
            if cmd.get("cmd") == cmd_name:
                matched_command = cmd
                break
        command_obj = Command()
        matched_command = command_obj.get(cmd_name)

        if not matched_command:
            # 列出可用命令帮助用户
            # 列出所有可用命令帮助用户
            all_commands = command_obj.get_commands()
            available_cmds = [
                f"{cmd.get('cmd')} - {cmd.get('desc', '无描述')}"
                for cmd in registered_commands
                f"{cmd} - {info.get('description', '无描述')}"
                for cmd, info in all_commands.items()
            ]
            result = {
                "success": False,
@@ -99,14 +100,16 @@ class RunPluginCommandTool(MoviePilotTool):
                "success": True,
                "message": f"命令 {cmd_name} 已触发执行",
                "command": command,
                "command_desc": matched_command.get("desc", ""),
                "plugin_id": matched_command.get("pid", ""),
                "command_desc": matched_command.get("description", ""),
            }
            # 如果是插件命令,附加插件ID
            if matched_command.get("pid"):
                result["plugin_id"] = matched_command["pid"]
            return json.dumps(result, ensure_ascii=False, indent=2)

        except Exception as e:
            logger.error(f"执行插件命令失败: {e}", exc_info=True)
            logger.error(f"执行命令失败: {e}", exc_info=True)
            return json.dumps(
                {"success": False, "message": f"执行插件命令时发生错误: {str(e)}"},
                {"success": False, "message": f"执行命令时发生错误: {str(e)}"},
                ensure_ascii=False,
            )
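The lookup above normalises the input to a leading slash, then resolves only the first whitespace-separated token against the registry, so an invocation like `/sites enable 1` still matches the `/sites` entry. A self-contained sketch of that resolution step (the plain dict here is a stand-in for the real `Command` singleton, and the helper name is hypothetical):

```python
def resolve_command(command: str, registry: dict):
    """Normalise to '/name' and look up only the first token of the input."""
    if not command.startswith("/"):
        command = f"/{command}"
    cmd_name = command.split()[0]  # arguments after the name are ignored here
    return cmd_name, registry.get(cmd_name)
```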
@@ -96,8 +96,8 @@ class SearchMediaTool(MoviePilotTool):
                simplified_results.append(simplified)
            result_json = json.dumps(simplified_results, ensure_ascii=False, indent=2)
            # 如果结果被裁剪,添加提示信息
            if total_count > 30:
                return f"注意:搜索结果共找到 {total_count} 条,为节省上下文空间,仅显示前 30 条结果。\n\n{result_json}"
            if total_count > 100:
                return f"注意:搜索结果共找到 {total_count} 条,为节省上下文空间,仅显示前 100 条结果。\n\n{result_json}"
            return result_json
        else:
            return f"未找到符合条件的媒体资源: {title}"

@@ -72,8 +72,8 @@ class SearchPersonTool(MoviePilotTool):

            result_json = json.dumps(simplified_results, ensure_ascii=False, indent=2)
            # 如果结果被裁剪,添加提示信息
            if total_count > 30:
                return f"注意:搜索结果共找到 {total_count} 条,为节省上下文空间,仅显示前 30 条结果。\n\n{result_json}"
            if total_count > 50:
                return f"注意:搜索结果共找到 {total_count} 条,为节省上下文空间,仅显示前 50 条结果。\n\n{result_json}"
            return result_json
        else:
            return f"未找到相关人物信息: {name}"
@@ -27,7 +27,7 @@ class SearchWebInput(BaseModel):
        ..., description="The search query string to search for on the web"
    )
    max_results: Optional[int] = Field(
        5,
        20,
        description="Maximum number of search results to return (default: 20, max: 20)",
    )

@@ -40,10 +40,10 @@ class SearchWebTool(MoviePilotTool):
    def get_tool_message(self, **kwargs) -> Optional[str]:
        """根据搜索参数生成友好的提示消息"""
        query = kwargs.get("query", "")
        max_results = kwargs.get("max_results", 5)
        max_results = kwargs.get("max_results", 20)
        return f"正在搜索网络内容: {query} (最多返回 {max_results} 条结果)"

    async def run(self, query: str, max_results: Optional[int] = 5, **kwargs) -> str:
    async def run(self, query: str, max_results: Optional[int] = 20, **kwargs) -> str:
        """
        执行网络搜索
        """
@@ -53,7 +53,7 @@ class SearchWebTool(MoviePilotTool):

        try:
            # 限制最大结果数
            max_results = min(max(1, max_results or 5), 10)
            max_results = min(max(1, max_results or 20), 20)
            results = []

            # 1. 优先使用 Exa (如果配置了 API Key)
@@ -216,7 +216,7 @@ class SearchWebTool(MoviePilotTool):
            source = result.get("source", "Unknown")

            # 裁剪摘要
            max_snippet_length = 500  # 增加到500字符,提供更多上下文
            max_snippet_length = 1000  # 增加到1000字符,提供更多上下文
            if len(snippet) > max_snippet_length:
                snippet = snippet[:max_snippet_length] + "..."
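The snippet handling above is a plain truncate-with-ellipsis: anything past `max_snippet_length` (now 1000 characters) is cut and marked. As a standalone sketch (the helper name is an assumption; the tool does this inline):

```python
def clip_snippet(snippet: str, limit: int = 1000) -> str:
    """Trim a search snippet to `limit` characters, appending '...' when cut."""
    if len(snippet) > limit:
        return snippet[:limit] + "..."
    return snippet
```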
app/agent/tools/impl/send_local_file.py (new file, 107 lines)
@@ -0,0 +1,107 @@
"""发送本地附件工具。"""

from pathlib import Path
from typing import Optional, Type

from pydantic import BaseModel, Field, model_validator

from app.agent.tools.base import MoviePilotTool, ToolChain
from app.log import logger
from app.schemas import Notification, NotificationType
from app.schemas.message import ChannelCapabilityManager, ChannelCapability
from app.schemas.types import MessageChannel


class SendLocalFileInput(BaseModel):
    """发送本地附件工具输入。"""

    explanation: str = Field(
        ...,
        description="Clear explanation of why sending this local file helps the user",
    )
    file_path: str = Field(
        ...,
        description="Absolute path to the local image or file to send to the user",
    )
    message: Optional[str] = Field(
        None,
        description="Optional message or caption to send with the attachment",
    )
    title: Optional[str] = Field(
        None,
        description="Optional short title shown together with the attachment",
    )
    file_name: Optional[str] = Field(
        None,
        description="Optional override filename presented to the user when downloading",
    )

    @model_validator(mode="after")
    def validate_file_path(self):
        if not self.file_path:
            raise ValueError("file_path 不能为空")
        return self


class SendLocalFileTool(MoviePilotTool):
    name: str = "send_local_file"
    description: str = (
        "Send a local image or file from the server filesystem to the current user. "
        "Use this when you have generated or identified a local file the user should download."
    )
    args_schema: Type[BaseModel] = SendLocalFileInput
    require_admin: bool = False

    def get_tool_message(self, **kwargs) -> Optional[str]:
        file_path = kwargs.get("file_path", "")
        file_name = Path(file_path).name if file_path else "未知文件"
        return f"正在发送本地附件: {file_name}"

    async def run(
        self,
        file_path: str,
        message: Optional[str] = None,
        title: Optional[str] = None,
        file_name: Optional[str] = None,
        **kwargs,
    ) -> str:
        if not self._channel or not self._source:
            return "当前不在可回传消息的会话中,无法发送附件"

        try:
            channel = MessageChannel(self._channel)
        except ValueError:
            return f"不支持的消息渠道: {self._channel}"

        if not ChannelCapabilityManager.supports_capability(
            channel, ChannelCapability.FILE_SENDING
        ):
            return f"当前渠道 {channel.value} 暂不支持发送本地文件"

        resolved_path = Path(file_path).expanduser()
        if not resolved_path.is_absolute():
            resolved_path = resolved_path.resolve()
        if not resolved_path.exists() or not resolved_path.is_file():
            return f"文件不存在: {resolved_path}"

        logger.info(
            "执行工具: %s, channel=%s, file=%s",
            self.name,
            channel.value,
            resolved_path,
        )

        await ToolChain().async_post_message(
            Notification(
                channel=channel,
                source=self._source,
                mtype=NotificationType.Agent,
                userid=self._user_id,
                username=self._username,
                title=title,
                text=message,
                file_path=str(resolved_path),
                file_name=file_name or resolved_path.name,
            )
        )
        return "本地附件已发送"
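Before posting the notification, `run()` expands `~`, absolutises relative paths, and rejects anything that is not an existing regular file. That validation step, isolated as a sketch (the helper name is hypothetical; in the tool this logic is inline):

```python
from pathlib import Path
from typing import Optional

def resolve_attachment(file_path: str) -> Optional[Path]:
    """Expand ~, absolutise relative paths, and require an existing regular file."""
    resolved = Path(file_path).expanduser()
    if not resolved.is_absolute():
        resolved = resolved.resolve()
    if not resolved.exists() or not resolved.is_file():
        return None  # run() answers "文件不存在" in this case
    return resolved
```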
@@ -2,7 +2,7 @@

from typing import Optional, Type

from pydantic import BaseModel, Field
from pydantic import BaseModel, Field, model_validator

from app.agent.tools.base import MoviePilotTool
from app.log import logger
@@ -15,49 +15,64 @@ class SendMessageInput(BaseModel):
        ...,
        description="Clear explanation of why this tool is being used in the current context",
    )
    message: str = Field(
        ...,
    message: Optional[str] = Field(
        None,
        description="The message content to send to the user (should be clear and informative)",
    )
    message_type: Optional[str] = Field(
        "info",
        description="Type of message: 'info' for general information, 'success' for successful operations, 'warning' for warnings, 'error' for error messages",
    title: Optional[str] = Field(
        None,
        description="Title of the message, a short summary of the message content",
    )
    image_url: Optional[str] = Field(
        None,
        description="Optional image URL to send together with the message on channels that support images (such as Telegram and Slack)",
    )

    @model_validator(mode="after")
    def validate_payload(self):
        if not self.message and not self.title and not self.image_url:
            raise ValueError("message、title、image_url 至少需要提供一个")
        return self


class SendMessageTool(MoviePilotTool):
    name: str = "send_message"
    description: str = "Send notification message to the user through configured notification channels (Telegram, Slack, WeChat, etc.). Used to inform users about operation results, errors, or important updates."
    description: str = "Send notification message to the user through configured notification channels (Telegram, Slack, WeChat, etc.). Supports optional image_url on channels that can send images. Used to inform users about operation results, errors, important updates, or proactively send a relevant image."
    args_schema: Type[BaseModel] = SendMessageInput
    require_admin: bool = True

    def get_tool_message(self, **kwargs) -> Optional[str]:
        """根据消息参数生成友好的提示消息"""
        message = kwargs.get("message", "")
        message_type = kwargs.get("message_type", "info")

        type_map = {
            "info": "信息",
            "success": "成功",
            "warning": "警告",
            "error": "错误",
        }
        type_desc = type_map.get(message_type, message_type)
        message = kwargs.get("message", "") or ""
        title = kwargs.get("title") or ""
        image_url = kwargs.get("image_url")

        # 截断过长的消息
        if len(message) > 50:
            message = message[:50] + "..."

        return f"正在发送{type_desc}消息: {message}"
        if title and image_url:
            return f"正在发送图文消息: [{title}] {message}"
        if title:
            return f"正在发送消息: [{title}] {message}"
        if image_url:
            return f"正在发送图片消息: {message}"
        return f"正在发送消息: {message}"

    async def run(
        self, message: str, message_type: Optional[str] = None, **kwargs
        self,
        message: Optional[str] = None,
        title: Optional[str] = None,
        image_url: Optional[str] = None,
        **kwargs,
    ) -> str:
        title = title or ("图片" if image_url and not message else "")
        text = message or ""
        logger.info(
            f"执行工具: {self.name}, 参数: message={message}, message_type={message_type}"
            f"执行工具: {self.name}, 参数: title={title}, message={text}, image_url={image_url}"
        )
        try:
            await self.send_tool_message(message, title=message_type)
            await self.send_tool_message(text, title=title, image=image_url)
            return "消息已发送"
        except Exception as e:
            logger.error(f"发送消息失败: {e}")
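The `model_validator` above enforces that at least one of `message`, `title`, or `image_url` is present before the tool runs. The same check as a plain function, for illustration outside pydantic (the function name is hypothetical):

```python
def validate_payload(message=None, title=None, image_url=None):
    """Reject a notification that has no text, no title and no image."""
    if not (message or title or image_url):
        raise ValueError("message、title、image_url 至少需要提供一个")
    return message, title, image_url
```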
app/agent/tools/impl/send_voice_message.py (new file, 96 lines)
@@ -0,0 +1,96 @@
"""发送语音消息工具。"""

import asyncio
from typing import Optional, Type

from pydantic import BaseModel, Field

from app.agent.tools.base import MoviePilotTool, ToolChain
from app.core.config import settings
from app.helper.voice import VoiceHelper
from app.helper.service import ServiceConfigHelper
from app.log import logger
from app.schemas import Notification, NotificationType
from app.schemas.types import MessageChannel


class SendVoiceMessageInput(BaseModel):
    """发送语音消息工具输入。"""

    explanation: str = Field(
        ...,
        description="Clear explanation of why a voice reply is the best fit in the current context",
    )
    message: str = Field(
        ...,
        description="The spoken content to send back to the user",
    )


class SendVoiceMessageTool(MoviePilotTool):
    name: str = "send_voice_message"
    description: str = (
        "Send a voice reply to the current user. Prefer this when the user sent a voice message "
        "or when spoken playback is more natural. On channels without voice support or when TTS "
        "is unavailable, it automatically falls back to sending the same content as plain text."
    )
    args_schema: Type[BaseModel] = SendVoiceMessageInput
    require_admin: bool = False

    def get_tool_message(self, **kwargs) -> Optional[str]:
        message = kwargs.get("message") or ""
        if len(message) > 40:
            message = message[:40] + "..."
        return f"正在发送语音回复: {message}"

    def _supports_real_voice_reply(self) -> bool:
        channel = self._channel or ""
        if channel == MessageChannel.Telegram.value:
            return True
        if channel != MessageChannel.Wechat.value:
            return False
        for config in ServiceConfigHelper.get_notification_configs():
            if config.name != self._source:
                continue
            return (config.config or {}).get("WECHAT_MODE", "app") != "bot"
        return False

    async def run(self, message: str, **kwargs) -> str:
        if not message:
            return "语音回复内容不能为空"

        voice_path = None
        used_voice = False
        channel = self._channel or ""
        if self._supports_real_voice_reply() and VoiceHelper.is_available("tts"):
            voice_file = await asyncio.to_thread(VoiceHelper.synthesize_speech, message)
            if voice_file:
                voice_path = str(voice_file)
                used_voice = True

        logger.info(
            "执行工具: %s, channel=%s, use_voice=%s, text_len=%s",
            self.name,
            channel,
            used_voice,
            len(message),
        )

        await ToolChain().async_post_message(
            Notification(
                channel=self._channel,
                source=self._source,
                mtype=NotificationType.Agent,
                userid=self._user_id,
                username=self._username,
                text=message,
                voice_path=voice_path,
                voice_caption=message if settings.AI_VOICE_REPLY_WITH_TEXT else None,
            )
        )
        self._agent_context["user_reply_sent"] = True
        self._agent_context["reply_mode"] = "voice" if used_voice else "text_fallback"

        if used_voice:
            return "语音回复已发送"
        return "当前未使用语音通道,已自动回退为文字回复"
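The fallback above only sends real audio when three things line up: the channel supports voice replies (Telegram, or WeChat outside bot mode), TTS is available, and synthesis actually produced a file; otherwise the same text goes out as a plain reply. That decision, reduced to a pure function for illustration (names are assumptions, not repository code):

```python
def reply_mode(supports_voice: bool, tts_ok: bool, synthesized: bool) -> str:
    """'voice' only when the channel, TTS availability, and synthesis all cooperate."""
    if supports_voice and tts_ok and synthesized:
        return "voice"
    return "text_fallback"
```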
104
app/agent/tools/impl/update_custom_identifiers.py
Normal file
104
app/agent/tools/impl/update_custom_identifiers.py
Normal file
@@ -0,0 +1,104 @@
"""更新自定义识别词工具"""

import json
from typing import List, Optional, Type

from pydantic import BaseModel, Field

from app.agent.tools.base import MoviePilotTool
from app.db.systemconfig_oper import SystemConfigOper
from app.log import logger
from app.schemas.types import SystemConfigKey


class UpdateCustomIdentifiersInput(BaseModel):
    """更新自定义识别词工具的输入参数模型"""

    explanation: str = Field(
        ...,
        description="Clear explanation of why this tool is being used in the current context",
    )
    identifiers: List[str] = Field(
        ...,
        description=(
            "The complete list of custom identifier rules to save. "
            "This REPLACES the entire existing list. "
            "Always query existing identifiers first, merge new rules, then pass the full list. "
            "These rules are global and affect future recognition for all torrents/files. "
            "When adding a rule for a user-provided sample, prefer narrow regex patterns that include "
            "sample-specific anchors such as the title alias, year, season/episode marker, group tag, "
            "resolution, or other distinctive fragments. Avoid overly broad patterns like bare generic "
            "tags, pure episode numbers, or common release words unless the user explicitly wants a global rule."
        ),
    )


class UpdateCustomIdentifiersTool(MoviePilotTool):
    name: str = "update_custom_identifiers"
    description: str = (
        "Update the full list of custom identifiers (自定义识别词) used for preprocessing torrent/file names. "
        "This tool REPLACES all existing identifier rules with the provided list. "
        "IMPORTANT: Always use 'query_custom_identifiers' first to get existing rules, "
        "then merge new rules into the list before calling this tool to avoid accidentally deleting existing rules. "
        "IMPORTANT: New identifier rules are global. When the rule is created from a specific torrent/file name, "
        "make the regex as narrow as possible and include distinctive elements from that sample so unrelated titles "
        "are not affected. Prefer contextual replacements with capture groups/backreferences over bare block words "
        "when a generic word like REPACK, WEB-DL, 1080p, 字幕, or a simple episode marker would otherwise match too broadly. "
        "Supported rule formats (spaces around operators are required): "
        "1) Block word: just the word/regex to remove; "
        "2) Replacement: '被替换词 => 替换词'; "
        "3) Episode offset: '前定位词 <> 后定位词 >> EP±N'; "
        "4) Combined: '被替换词 => 替换词 && 前定位词 <> 后定位词 >> EP±N'; "
        "Lines starting with '#' are comments. "
        "The replacement target supports: {[tmdbid=xxx;type=movie/tv;s=xxx;e=xxx]} for direct TMDB ID matching."
    )
    args_schema: Type[BaseModel] = UpdateCustomIdentifiersInput

    def get_tool_message(self, **kwargs) -> Optional[str]:
        """生成友好的提示消息"""
        identifiers = kwargs.get("identifiers", [])
        return f"正在更新自定义识别词(共 {len(identifiers)} 条规则)"

    async def run(self, identifiers: Optional[List[str]] = None, **kwargs) -> str:
        logger.info(
            f"执行工具: {self.name}, 规则数量: {len(identifiers) if identifiers else 0}"
        )
        try:
            if identifiers is None:
                return json.dumps(
                    {"success": False, "message": "必须提供 identifiers 参数"},
                    ensure_ascii=False,
                )

            # 过滤 None 值
            identifiers = [i for i in identifiers if i is not None]

            system_config_oper = SystemConfigOper()

            # 保存
            value = identifiers if identifiers else None
            success = await system_config_oper.async_set(
                SystemConfigKey.CustomIdentifiers, value
            )
            if success:
                return json.dumps(
                    {
                        "success": True,
                        "message": f"自定义识别词已更新,共 {len(identifiers)} 条规则",
                        "count": len(identifiers),
                        "identifiers": identifiers,
                    },
                    ensure_ascii=False,
                    indent=2,
                )
            else:
                return json.dumps(
                    {"success": False, "message": "保存自定义识别词失败"},
                    ensure_ascii=False,
                )
        except Exception as e:
            logger.error(f"更新自定义识别词失败: {e}")
            return json.dumps(
                {"success": False, "message": f"更新自定义识别词时发生错误: {str(e)}"},
                ensure_ascii=False,
            )
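The rule formats listed in the tool description can be illustrated with a small standalone sketch. `apply_identifier_rules` below is a hypothetical helper written for this note, not MoviePilot's actual parser; it only handles comment lines, bare block words, and '被替换词 => 替换词' replacements, and ignores the episode-offset part of combined rules:

```python
import re


def apply_identifier_rules(title: str, rules: list) -> str:
    """Apply block-word and '被替换词 => 替换词' rules to a title (sketch)."""
    for rule in rules:
        rule = rule.strip()
        if not rule or rule.startswith("#"):
            continue  # lines starting with '#' are comments
        # drop any episode-offset tail ('&& 前定位词 <> 后定位词 >> EP±N') for this sketch
        rule = rule.split(" && ")[0]
        if " => " in rule:
            pattern, repl = rule.split(" => ", 1)
            title = re.sub(pattern, repl, title)
        else:
            # a bare block word/regex simply removes its matches
            title = re.sub(rule, "", title)
    return title


print(apply_identifier_rules(
    "MyShow.S01E02.REPACK.1080p",
    ["# demo rules", r"\.REPACK", "MyShow => My Show"],
))  # → My Show.S01E02.1080p
```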
@@ -2,7 +2,7 @@ from fastapi import APIRouter
 
 from app.api.endpoints import login, user, webhook, message, site, subscribe, \
     media, douban, search, plugin, tmdb, history, system, download, dashboard, \
-    transfer, mediaserver, bangumi, storage, discover, recommend, workflow, torrent, mcp, mfa
+    transfer, mediaserver, bangumi, storage, discover, recommend, workflow, torrent, mcp, mfa, openai, anthropic
 
 api_router = APIRouter()
 api_router.include_router(login.router, prefix="/login", tags=["login"])
@@ -30,3 +30,5 @@ api_router.include_router(recommend.router, prefix="/recommend", tags=["recommen
 api_router.include_router(workflow.router, prefix="/workflow", tags=["workflow"])
 api_router.include_router(torrent.router, prefix="/torrent", tags=["torrent"])
 api_router.include_router(mcp.router, prefix="/mcp", tags=["mcp"])
+api_router.include_router(openai.router, prefix="/openai/v1", tags=["openai"])
+api_router.include_router(anthropic.router, prefix="/anthropic/v1", tags=["anthropic"])

app/api/endpoints/anthropic.py (new file, 158 lines)
@@ -0,0 +1,158 @@
import asyncio
import json
import time
import uuid
from typing import AsyncIterator, List, Optional

from fastapi import APIRouter, Header, Security
from fastapi.responses import JSONResponse, StreamingResponse

from app import schemas
from app.api.endpoints.openai import (
    MODEL_ID,
    _CollectingMoviePilotAgent,
    _error_response as _openai_error_response,
)
from app.api.openai_utils import build_anthropic_messages, build_prompt, build_session_id
from app.core.config import settings
from app.core.security import anthropic_api_key_header
from app.schemas.types import MessageChannel

router = APIRouter()

SESSION_PREFIX = "anthropic:"


def _anthropic_error_response(
    message: str,
    status_code: int,
    error_type: str = "invalid_request_error",
) -> JSONResponse:
    return JSONResponse(
        status_code=status_code,
        content=schemas.AnthropicErrorResponse(
            error=schemas.AnthropicErrorDetail(type=error_type, message=message)
        ).model_dump(),
    )


def _check_auth(api_key: Optional[str]) -> Optional[JSONResponse]:
    if not api_key or api_key != settings.API_TOKEN:
        return _anthropic_error_response(
            "invalid x-api-key",
            401,
            error_type="authentication_error",
        )
    return None


async def _stream_anthropic_response(
    agent: _CollectingMoviePilotAgent,
    prompt: str,
    images: List[str],
) -> AsyncIterator[str]:
    event_queue: asyncio.Queue = asyncio.Queue()
    if hasattr(agent.stream_handler, "bind_queue"):
        agent.stream_handler.bind_queue(event_queue)

    message_id = f"msg_{uuid.uuid4().hex}"

    async def _run_agent():
        try:
            await agent.process(prompt, images=images, files=None)
        except Exception as exc:
            await event_queue.put({"error": str(exc)})
        finally:
            await event_queue.put(None)

    task = asyncio.create_task(_run_agent())
    try:
        yield f"event: message_start\ndata: {json.dumps({'type': 'message_start', 'message': {'id': message_id, 'type': 'message', 'role': 'assistant', 'content': [], 'model': MODEL_ID, 'stop_reason': None, 'stop_sequence': None, 'usage': {'input_tokens': 0, 'output_tokens': 0}}}, ensure_ascii=False)}\n\n"
        yield f"event: content_block_start\ndata: {json.dumps({'type': 'content_block_start', 'index': 0, 'content_block': {'type': 'text', 'text': ''}}, ensure_ascii=False)}\n\n"
        while True:
            item = await event_queue.get()
            if item is None:
                break
            if isinstance(item, dict) and item.get("error"):
                raise RuntimeError(str(item["error"]))
            text = str(item or "")
            if not text:
                continue
            yield f"event: content_block_delta\ndata: {json.dumps({'type': 'content_block_delta', 'index': 0, 'delta': {'type': 'text_delta', 'text': text}}, ensure_ascii=False)}\n\n"
        yield f"event: content_block_stop\ndata: {json.dumps({'type': 'content_block_stop', 'index': 0}, ensure_ascii=False)}\n\n"
        yield f"event: message_delta\ndata: {json.dumps({'type': 'message_delta', 'delta': {'stop_reason': 'end_turn', 'stop_sequence': None}, 'usage': {'output_tokens': 0}}, ensure_ascii=False)}\n\n"
        yield f"event: message_stop\ndata: {json.dumps({'type': 'message_stop'}, ensure_ascii=False)}\n\n"
    finally:
        if not task.done():
            task.cancel()
            try:
                await task
            except asyncio.CancelledError:
                pass


@router.post("/messages", summary="Anthropic compatible messages", response_model=schemas.AnthropicMessagesResponse)
async def messages(
    payload: schemas.AnthropicMessagesRequest,
    x_api_key: Optional[str] = Security(anthropic_api_key_header),
    anthropic_version: Optional[str] = Header(default=None, alias="anthropic-version"),
):
    auth_error = _check_auth(x_api_key)
    if auth_error:
        return auth_error

    if not settings.AI_AGENT_ENABLE:
        return _anthropic_error_response(
            "MoviePilot AI agent is disabled.",
            503,
            error_type="api_error",
        )

    normalized_messages = build_anthropic_messages(payload.system, payload.messages)
    try:
        prompt, images = build_prompt(normalized_messages, use_server_session=False)
    except ValueError as exc:
        return _anthropic_error_response(str(exc), 400)

    session_seed = anthropic_version or "anthropic"
    session_id = build_session_id(f"{session_seed}:{uuid.uuid4().hex}", SESSION_PREFIX)
    agent = _CollectingMoviePilotAgent(
        session_id=session_id,
        user_id=session_id,
        channel=MessageChannel.Web.value,
        source="anthropic",
        username="anthropic-client",
        stream_mode=payload.stream,
    )

    if payload.stream:
        return StreamingResponse(
            _stream_anthropic_response(agent=agent, prompt=prompt, images=images),
            media_type="text/event-stream",
            headers={
                "Cache-Control": "no-cache",
                "Connection": "keep-alive",
                "X-Accel-Buffering": "no",
            },
        )

    try:
        result = await agent.process(prompt, images=images, files=None)
    except Exception as exc:
        return _anthropic_error_response(str(exc), 500, error_type="api_error")

    content = "\n\n".join(
        message.strip()
        for message in agent.collected_messages
        if message and message.strip()
    ).strip()
    if not content and result:
        content = str(result).strip()
    if not content:
        content = "未获得有效回复。"

    return schemas.AnthropicMessagesResponse(
        id=f"msg_{uuid.uuid4().hex}",
        content=[schemas.AnthropicTextBlock(text=content)],
        model=MODEL_ID,
    )
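The event sequence emitted by `_stream_anthropic_response` (message_start, content_block_start, content_block_delta…, content_block_stop, message_delta, message_stop) can be consumed client-side roughly as follows. `collect_anthropic_text` is an illustrative helper for this note, not part of the codebase:

```python
import json


def collect_anthropic_text(sse_text: str) -> str:
    """Join the text fragments carried by content_block_delta events."""
    parts = []
    for line in sse_text.splitlines():
        if not line.startswith("data: "):
            continue  # skip 'event:' lines and blank separators
        event = json.loads(line[len("data: "):])
        if event.get("type") == "content_block_delta":
            parts.append(event["delta"]["text"])
    return "".join(parts)


sample = (
    'event: content_block_delta\n'
    'data: {"type": "content_block_delta", "index": 0, "delta": {"type": "text_delta", "text": "你好"}}\n\n'
    'event: message_stop\n'
    'data: {"type": "message_stop"}\n\n'
)
print(collect_anthropic_text(sample))  # → 你好
```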
@@ -1,3 +1,5 @@
+import asyncio
+import time
 from typing import List, Any, Optional
 
 import jieba
@@ -8,6 +10,7 @@ from pathlib import Path
from app import schemas
from app.chain.storage import StorageChain
from app.core.config import settings, global_vars
from app.core.event import eventmanager
from app.core.security import verify_token
from app.db import get_async_db, get_db
@@ -15,11 +18,51 @@ from app.db.models import User
from app.db.models.downloadhistory import DownloadHistory, DownloadFiles
from app.db.models.transferhistory import TransferHistory
from app.db.user_oper import get_current_active_superuser_async, get_current_active_superuser
from app.helper.progress import ProgressHelper
from app.schemas.types import EventType

router = APIRouter()


def _start_ai_redo_task(history_id: int, progress_key: str):
    from app.agent import agent_manager

    progress = ProgressHelper(progress_key)
    progress.start()
    progress.update(
        text=f"智能助手正在准备整理记录 #{history_id} ...",
        data={"history_id": history_id, "success": True},
    )

    def update_output(text: str):
        progress.update(text=text, data={"history_id": history_id})

    async def runner():
        try:
            await agent_manager.manual_redo_transfer(
                history_id=history_id,
                output_callback=update_output,
            )
            progress.update(
                text="智能助手整理完成",
                data={"history_id": history_id, "success": True, "completed": True},
            )
        except Exception as e:
            progress.update(
                text=f"智能助手整理失败:{str(e)}",
                data={
                    "history_id": history_id,
                    "success": False,
                    "completed": True,
                    "error": str(e),
                },
            )
        finally:
            progress.end()

    asyncio.run_coroutine_threadsafe(runner(), global_vars.loop)
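`_start_ai_redo_task` runs in a synchronous request handler but needs its coroutine to execute on the application's main event loop (`global_vars.loop`); `asyncio.run_coroutine_threadsafe` is the standard hand-off for that. A minimal standalone sketch of the pattern (the background loop and `job` are illustrative stand-ins):

```python
import asyncio
import threading

# A background loop standing in for the application's main loop (global_vars.loop).
loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()


async def job() -> int:
    await asyncio.sleep(0)
    return 42


# Schedule the coroutine on the running loop from this (synchronous) thread;
# the returned concurrent.futures.Future bridges the result back.
future = asyncio.run_coroutine_threadsafe(job(), loop)
print(future.result(timeout=5))  # → 42
loop.call_soon_threadsafe(loop.stop)
```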


@router.get("/download", summary="查询下载历史记录", response_model=List[schemas.DownloadHistory])
async def download_history(page: Optional[int] = 1,
                           count: Optional[int] = 30,
@@ -114,6 +157,28 @@ def delete_transfer_history(history_in: schemas.TransferHistory,
    return schemas.Response(success=True)


@router.post("/transfer/{history_id}/ai-redo", summary="智能助手重新整理", response_model=schemas.Response)
def ai_redo_transfer_history(
    history_id: int,
    db: Session = Depends(get_db),
    _: User = Depends(get_current_active_superuser),
) -> Any:
    """
    手动触发单条历史记录的 AI 重新整理,并返回进度键。
    """
    if not settings.AI_AGENT_ENABLE:
        return schemas.Response(success=False, message="MoviePilot智能助手未启用")

    history = TransferHistory.get(db, history_id)
    if not history:
        return schemas.Response(success=False, message="整理记录不存在")

    progress_key = f"ai_redo_transfer_{history_id}_{int(time.time() * 1000)}"
    _start_ai_redo_task(history_id=history_id, progress_key=progress_key)

    return schemas.Response(success=True, data={"progress_key": progress_key})


@router.get("/empty/transfer", summary="清空整理记录", response_model=schemas.Response)
async def empty_transfer_history(db: AsyncSession = Depends(get_async_db),
                                 _: User = Depends(get_current_active_superuser_async)) -> Any:
@@ -38,21 +38,69 @@ async def user_message(background_tasks: BackgroundTasks, request: Request,
    body = await request.body()
    form = await request.form()
    args = request.query_params
    source = args.get("source")
    content_type = request.headers.get("content-type", "")
    body_text = body.decode("utf-8", errors="ignore")
    image_markers = [
        marker
        for marker in (
            '"photo"',
            '"document"',
            '"files"',
            '"attachments"',
            '"url_private"',
            '"image/"',
            '"image_url"',
        )
        if marker in body_text
    ]
    logger.info(
        "消息入口收到请求: source=%s, content_type=%s, body_bytes=%s, form_keys=%s, image_markers=%s",
        source,
        content_type,
        len(body),
        list(form.keys()) if form else [],
        image_markers,
    )
    background_tasks.add_task(start_message_chain, body, form, args)
    return schemas.Response(success=True)


@router.post("/web", summary="接收WEB消息", response_model=schemas.Response)
-def web_message(text: str, current_user: User = Depends(get_current_active_superuser)):
+async def web_message(
+    request: Request,
+    text: Optional[str] = None,
+    current_user: User = Depends(get_current_active_superuser),
+):
    """
    WEB消息响应
    """
    images = None
    content_type = request.headers.get("content-type", "")
    if "application/json" in content_type:
        try:
            payload = await request.json()
        except Exception:
            payload = None
        if isinstance(payload, dict):
            text = payload.get("text", text)
            image = payload.get("image")
            images = payload.get("images")
            if image:
                if isinstance(images, list):
                    images = [*images, image]
                else:
                    images = [image]
            elif isinstance(images, str):
                images = [images]

    MessageChain().handle_message(
        channel=MessageChannel.Web,
        source=current_user.name,
        userid=current_user.name,
        username=current_user.name,
-        text=text
+        text=text or "",
+        images=images,
    )
    return schemas.Response(success=True)
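The image/images normalization inside `web_message` can be factored out for illustration. `normalize_images` is a hypothetical helper written for this note that mirrors the handler's merging logic (a single `image` field is folded into the `images` list, and a bare string becomes a one-element list):

```python
from typing import Optional


def normalize_images(payload: dict) -> Optional[list]:
    """Mirror web_message's merging of a single 'image' into 'images' (sketch)."""
    image = payload.get("image")
    images = payload.get("images")
    if image:
        # append to an existing list, otherwise start a new one
        images = [*images, image] if isinstance(images, list) else [image]
    elif isinstance(images, str):
        images = [images]  # a bare string becomes a one-element list
    return images


print(normalize_images({"image": "a.png", "images": ["b.png"]}))  # → ['b.png', 'a.png']
```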

app/api/endpoints/openai.py (new file, 426 lines)
@@ -0,0 +1,426 @@
import asyncio
import json
import time
import uuid
from typing import AsyncIterator, List, Optional, Tuple

from fastapi import APIRouter, Request, Security
from fastapi.responses import JSONResponse, StreamingResponse
from fastapi.security import HTTPAuthorizationCredentials

from app import schemas
from app.api.openai_utils import (
    build_completion_payload,
    build_prompt,
    build_responses_input,
    build_session_id,
)
from app.agent import MoviePilotAgent, StreamingHandler
from app.core.config import settings
from app.core.security import openai_bearer_scheme
from app.schemas.types import MessageChannel

router = APIRouter()

MODEL_ID = "moviepilot-agent"
SESSION_PREFIX = "openai:"


class _CollectingMoviePilotAgent(MoviePilotAgent):
    """
    捕获 Agent 最终输出,避免再通过消息渠道二次发送。
    """

    def __init__(self, *args, stream_mode: bool = False, **kwargs):
        super().__init__(*args, **kwargs)
        self.collected_messages: List[str] = []
        self.stream_mode = stream_mode
        if stream_mode:
            self.stream_handler = _OpenAIStreamingHandler()

    def _should_stream(self) -> bool:
        return self.stream_mode

    async def send_agent_message(self, message: str, title: str = ""):
        text = (message or "").strip()
        if title and text:
            text = f"{title}\n{text}"
        elif title:
            text = title.strip()
        if text:
            self.collected_messages.append(text)
            if self.stream_mode:
                self.stream_handler.emit(text)

    async def _save_agent_message_to_db(self, message: str, title: str = ""):
        return None


class _OpenAIStreamingHandler(StreamingHandler):
    """
    将 Agent 流式输出转发到 OpenAI SSE 队列,不向站内消息系统落消息。
    """

    def __init__(self):
        super().__init__()
        self._event_queue: Optional[asyncio.Queue] = None

    def bind_queue(self, queue: asyncio.Queue):
        self._event_queue = queue

    def emit(self, token: str):
        super().emit(token)
        if token and self._event_queue is not None:
            self._event_queue.put_nowait(token)

    async def start_streaming(
        self,
        channel: Optional[str] = None,
        source: Optional[str] = None,
        user_id: Optional[str] = None,
        username: Optional[str] = None,
        title: str = "",
    ):
        self._channel = channel
        self._source = source
        self._user_id = user_id
        self._username = username
        self._title = title
        self._streaming_enabled = True
        self._sent_text = ""
        self._message_response = None
        self._msg_start_offset = 0
        self._max_message_length = 0

    async def stop_streaming(self) -> Tuple[bool, str]:
        if not self._streaming_enabled:
            return False, ""
        self._streaming_enabled = False
        with self._lock:
            final_text = self._buffer
            self._buffer = ""
            self._sent_text = ""
            self._message_response = None
            self._msg_start_offset = 0
        return True, final_text


def _sse_payload(data: dict) -> str:
    return f"data: {json.dumps(data, ensure_ascii=False)}\n\n"


async def _stream_response(
    agent: _CollectingMoviePilotAgent,
    prompt: str,
    images: List[str],
) -> AsyncIterator[str]:
    event_queue: asyncio.Queue = asyncio.Queue()
    if isinstance(agent.stream_handler, _OpenAIStreamingHandler):
        agent.stream_handler.bind_queue(event_queue)

    created = int(time.time())
    completion_id = f"chatcmpl-{uuid.uuid4().hex}"
    finished = False

    async def _run_agent():
        try:
            await agent.process(prompt, images=images, files=None)
        except Exception as exc:
            await event_queue.put({"error": str(exc)})
        finally:
            await event_queue.put(None)

    task = asyncio.create_task(_run_agent())

    try:
        yield _sse_payload(
            {
                "id": completion_id,
                "object": "chat.completion.chunk",
                "created": created,
                "model": MODEL_ID,
                "choices": [
                    {
                        "index": 0,
                        "delta": {"role": "assistant"},
                        "finish_reason": None,
                    }
                ],
            }
        )

        while True:
            item = await event_queue.get()
            if item is None:
                break
            if isinstance(item, dict) and item.get("error"):
                raise RuntimeError(str(item["error"]))
            text = str(item or "")
            if not text:
                continue
            yield _sse_payload(
                {
                    "id": completion_id,
                    "object": "chat.completion.chunk",
                    "created": created,
                    "model": MODEL_ID,
                    "choices": [
                        {
                            "index": 0,
                            "delta": {"content": text},
                            "finish_reason": None,
                        }
                    ],
                }
            )

        finished = True
        yield _sse_payload(
            {
                "id": completion_id,
                "object": "chat.completion.chunk",
                "created": created,
                "model": MODEL_ID,
                "choices": [
                    {
                        "index": 0,
                        "delta": {},
                        "finish_reason": "stop",
                    }
                ],
            }
        )
        yield "data: [DONE]\n\n"
    finally:
        if not task.done():
            task.cancel()
            try:
                await task
            except asyncio.CancelledError:
                pass
        elif finished:
            await task


def _error_response(
    message: str,
    status_code: int,
    error_type: str = "invalid_request_error",
    code: Optional[str] = None,
) -> JSONResponse:
    return JSONResponse(
        status_code=status_code,
        content=schemas.OpenAIErrorResponse(
            error=schemas.OpenAIErrorDetail(
                message=message,
                type=error_type,
                code=code,
            )
        ).model_dump(),
        headers={"WWW-Authenticate": "Bearer"},
    )


def _check_auth(
    credentials: Optional[HTTPAuthorizationCredentials],
) -> Optional[JSONResponse]:
    if not credentials or credentials.scheme.lower() != "bearer":
        return _error_response(
            "Invalid bearer token.",
            401,
            error_type="authentication_error",
            code="invalid_api_key",
        )
    if credentials.credentials != settings.API_TOKEN:
        return _error_response(
            "Invalid bearer token.",
            401,
            error_type="authentication_error",
            code="invalid_api_key",
        )
    return None


@router.get("/models", summary="OpenAI compatible models", response_model=schemas.OpenAIModelListResponse)
async def list_models(
    credentials: Optional[HTTPAuthorizationCredentials] = Security(openai_bearer_scheme),
):
    auth_error = _check_auth(credentials)
    if auth_error:
        return auth_error
    now = int(time.time())
    return schemas.OpenAIModelListResponse(
        data=[schemas.OpenAIModelInfo(id=MODEL_ID, created=now)]
    )


@router.post(
    "/chat/completions",
    summary="OpenAI compatible chat completions",
    response_model=schemas.OpenAIChatCompletionResponse,
)
async def chat_completions(
    payload: schemas.OpenAIChatCompletionsRequest,
    request: Request,
    credentials: Optional[HTTPAuthorizationCredentials] = Security(openai_bearer_scheme),
):
    auth_error = _check_auth(credentials)
    if auth_error:
        return auth_error

    if not settings.AI_AGENT_ENABLE:
        return _error_response(
            "MoviePilot AI agent is disabled.",
            503,
            error_type="server_error",
            code="ai_agent_disabled",
        )

    if not payload.messages:
        return _error_response(
            "`messages` must be a non-empty array.",
            400,
            code="invalid_messages",
        )

    session_key = (
        str(payload.user or "").strip()
        or str(request.headers.get("x-session-id") or "").strip()
        or str(uuid.uuid4())
    )
    use_server_session = bool(
        str(payload.user or "").strip()
        or str(request.headers.get("x-session-id") or "").strip()
    )

    try:
        prompt, images = build_prompt(payload.messages, use_server_session=use_server_session)
    except ValueError as exc:
        return _error_response(str(exc), 400, code="invalid_messages")

    session_id = build_session_id(session_key, SESSION_PREFIX)
    username = str(payload.user or "openai-client")
    agent = _CollectingMoviePilotAgent(
        session_id=session_id,
        user_id=session_key,
        channel=MessageChannel.Web.value,
        source="openai",
        username=username,
        stream_mode=payload.stream,
    )

    if payload.stream:
        return StreamingResponse(
            _stream_response(agent=agent, prompt=prompt, images=images),
            media_type="text/event-stream",
            headers={
                "Cache-Control": "no-cache",
                "Connection": "keep-alive",
                "X-Accel-Buffering": "no",
            },
        )

    try:
        result = await agent.process(prompt, images=images, files=None)
    except Exception as exc:
        return _error_response(
            str(exc),
            500,
            error_type="server_error",
            code="agent_execution_failed",
        )

    content = "\n\n".join(
        message.strip()
        for message in agent.collected_messages
        if message and message.strip()
    ).strip()
    if not content and result:
        content = str(result).strip()
    if not content:
        content = "未获得有效回复。"

    return JSONResponse(content=build_completion_payload(content, MODEL_ID))


@router.post("/responses", summary="OpenAI compatible responses", response_model=schemas.OpenAIResponsesResponse)
async def responses(
    payload: schemas.OpenAIResponsesRequest,
    credentials: Optional[HTTPAuthorizationCredentials] = Security(openai_bearer_scheme),
):
    auth_error = _check_auth(credentials)
    if auth_error:
        return auth_error

    if not settings.AI_AGENT_ENABLE:
        return _error_response(
            "MoviePilot AI agent is disabled.",
            503,
            error_type="server_error",
            code="ai_agent_disabled",
        )

    if payload.stream:
        return _error_response(
            "Streaming is not supported for /responses yet.",
            400,
            code="unsupported_stream",
        )

    normalized_messages = build_responses_input(payload.input, instructions=payload.instructions)
    if not normalized_messages:
        return _error_response(
            "`input` must include at least one usable message.",
            400,
            code="invalid_input",
        )

    try:
        prompt, images = build_prompt(normalized_messages, use_server_session=bool(payload.user))
    except ValueError as exc:
        return _error_response(str(exc), 400, code="invalid_input")

    session_key = str(payload.user or uuid.uuid4())
    session_id = build_session_id(session_key, SESSION_PREFIX)
    agent = _CollectingMoviePilotAgent(
        session_id=session_id,
        user_id=session_key,
        channel=MessageChannel.Web.value,
        source="openai.responses",
        username=str(payload.user or "openai-client"),
        stream_mode=False,
    )

    try:
        result = await agent.process(prompt, images=images, files=None)
    except Exception as exc:
        return _error_response(
            str(exc),
            500,
            error_type="server_error",
            code="agent_execution_failed",
        )

    content = "\n\n".join(
        message.strip()
        for message in agent.collected_messages
        if message and message.strip()
    ).strip()
    if not content and result:
        content = str(result).strip()
    if not content:
        content = "未获得有效回复。"

    created_at = int(time.time())
    response_id = f"resp_{uuid.uuid4().hex}"
    output_message = schemas.OpenAIResponsesOutputMessage(
        id=f"msg_{uuid.uuid4().hex}",
        content=[schemas.OpenAIResponsesOutputText(text=content)],
    )
    return schemas.OpenAIResponsesResponse(
        id=response_id,
        created_at=created_at,
        model=MODEL_ID,
        output=[output_message],
        usage=schemas.OpenAIUsage(),
    )
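A client consuming the `/openai/v1/chat/completions` stream reassembles the reply from the `delta.content` field of each chunk until the `[DONE]` sentinel. `parse_sse_chunks` below is an illustrative helper for this note, not part of the endpoint:

```python
import json


def parse_sse_chunks(sse_text: str) -> str:
    """Concatenate delta.content from OpenAI-style chat completion chunks."""
    parts = []
    for line in sse_text.splitlines():
        if not line.startswith("data: "):
            continue
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        delta = json.loads(payload)["choices"][0]["delta"]
        if "content" in delta:
            parts.append(delta["content"])
    return "".join(parts)


sample = (
    'data: {"choices": [{"index": 0, "delta": {"role": "assistant"}}]}\n\n'
    'data: {"choices": [{"index": 0, "delta": {"content": "Hello"}}]}\n\n'
    'data: [DONE]\n\n'
)
print(parse_sse_chunks(sample))  # → Hello
```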
@@ -155,9 +155,13 @@ async def all_plugins(_: User = Depends(get_current_active_superuser_async),
 
     # 未安装的本地插件
     not_installed_plugins = [plugin for plugin in local_plugins if not plugin.installed]
+    # 本地插件仓库目录中的插件
+    local_repo_plugins = plugin_manager.get_local_repo_plugins()
     # 在线插件
     online_plugins = await plugin_manager.async_get_online_plugins(force)
-    if not online_plugins:
+    candidate_plugins = plugin_manager.process_plugins_list(online_plugins + local_repo_plugins, []) \
+        if online_plugins or local_repo_plugins else []
+    if not candidate_plugins:
         # 没有获取在线插件
         if state == "market":
             # 返回未安装的本地插件
@@ -169,7 +173,7 @@ async def all_plugins(_: User = Depends(get_current_active_superuser_async),
     # 已安装插件IDS
     _installed_ids = [plugin.id for plugin in installed_plugins]
     # 未安装的线上插件或者有更新的插件
-    for plugin in online_plugins:
+    for plugin in candidate_plugins:
         if plugin.id not in _installed_ids:
             market_plugins.append(plugin)
         elif plugin.has_update:
@@ -229,11 +233,15 @@ async def install(plugin_id: str,
     # 首先检查插件是否已经存在,并且是否强制安装,否则只进行安装统计
     plugin_helper = PluginHelper()
     if not force and plugin_id in PluginManager().get_plugin_ids():
-        await plugin_helper.async_install_reg(pid=plugin_id)
+        await plugin_helper.async_install_reg(pid=plugin_id, repo_url=repo_url)
     else:
         # 插件不存在或需要强制安装,下载安装并注册插件
         if repo_url:
-            state, msg = await plugin_helper.async_install(pid=plugin_id, repo_url=repo_url)
+            state, msg = await plugin_helper.async_install(
+                pid=plugin_id,
+                repo_url=repo_url,
+                force_install=force
+            )
             # 安装失败则直接响应
             if not state:
                 return schemas.Response(success=False, message=msg)
@@ -260,6 +268,14 @@ async def remotes(token: str) -> Any:
    return PluginManager().get_plugin_remotes()


@router.get("/sidebar_nav", summary="获取插件侧栏导航项", response_model=List[schemas.PluginSidebarNavItem])
def plugin_sidebar_nav(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
    """
    聚合已启用 Vue 插件声明的侧栏入口(get_sidebar_nav),供前端主界面侧栏展示。
    """
    return PluginManager().get_plugin_sidebar_nav()


@router.get("/form/{plugin_id}", summary="获取插件表单页面")
def plugin_form(plugin_id: str,
                _: User = Depends(get_current_active_superuser)) -> dict:
@@ -1,6 +1,8 @@
-from typing import List, Any, Optional
+import json
+from typing import List, Any, Optional, AsyncIterator
 
-from fastapi import APIRouter, Depends, Body
+from fastapi import APIRouter, Depends, Body, Request
+from fastapi.responses import StreamingResponse
 
 from app import schemas
 from app.chain.media import MediaChain
@@ -9,7 +11,7 @@ from app.chain.ai_recommend import AIRecommendChain
 from app.core.config import settings
 from app.core.event import eventmanager
 from app.core.metainfo import MetaInfo
-from app.core.security import verify_token
+from app.core.security import verify_resource_token, verify_token
 from app.log import logger
 from app.schemas import MediaRecognizeConvertEventData
 from app.schemas.types import MediaType, ChainEventType
@@ -17,6 +19,38 @@ from app.schemas.types import MediaType, ChainEventType
|
||||
router = APIRouter()
|
||||
|
||||
|
||||
+def _parse_site_list(sites: Optional[str]) -> Optional[List[int]]:
+    """
+    Parse a comma-separated list of site IDs
+    """
+    return [int(site) for site in sites.split(",") if site] if sites else None
+
+
+def _sse_event(data: dict) -> str:
+    """
+    Serialize a payload as an SSE event
+    """
+    return f"data: {json.dumps(data, ensure_ascii=False)}\n\n"
+
+
+async def _stream_search_events(request: Request, event_source: AsyncIterator[dict]):
+    """
+    Emit search SSE events
+    """
+    try:
+        async for event in event_source:
+            if await request.is_disconnected():
+                break
+            yield _sse_event(event)
+    except Exception as err:
+        logger.error(f"Progressive search failed: {err}", exc_info=True)
+        yield _sse_event({
+            "type": "error",
+            "success": False,
+            "message": str(err)
+        })
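The `_sse_event` helper above frames each JSON payload in the `data: ...\n\n` format required by the Server-Sent Events protocol. A minimal, self-contained sketch of that framing and its inverse (the `parse_sse_event` helper here is illustrative, not part of MoviePilot):

```python
import json

def sse_event(data: dict) -> str:
    # Frame a JSON payload as an SSE "data" message.
    # Each event must be terminated by a blank line (two newlines).
    return f"data: {json.dumps(data, ensure_ascii=False)}\n\n"

def parse_sse_event(raw: str) -> dict:
    # Inverse of sse_event: strip the "data: " prefix and decode the JSON body.
    body = raw.strip()
    assert body.startswith("data: ")
    return json.loads(body[len("data: "):])

event = sse_event({"type": "progress", "value": 50})
assert parse_sse_event(event) == {"type": "progress", "value": 50}
```

The blank-line terminator is what lets browsers' `EventSource` split the stream into discrete events.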
 @router.get("/last", summary="Query search results", response_model=List[schemas.Context])
 async def search_latest(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
     """
@@ -26,6 +60,139 @@ async def search_latest(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
     return [torrent.to_dict() for torrent in torrents]
 
 
+@router.get("/media/{mediaid}/stream", summary="Progressive exact resource search")
+async def search_by_id_stream(request: Request,
+                              mediaid: str,
+                              mtype: Optional[str] = None,
+                              area: Optional[str] = "title",
+                              title: Optional[str] = None,
+                              year: Optional[str] = None,
+                              season: Optional[str] = None,
+                              sites: Optional[str] = None,
+                              _: schemas.TokenPayload = Depends(verify_resource_token)) -> Any:
+    """
+    Progressively search site resources by TMDB ID / Douban ID; the response format is SSE
+    """
+    AIRecommendChain().cancel_ai_recommend()
+
+    media_type = MediaType(mtype) if mtype else None
+    media_season = int(season) if season else None
+    site_list = _parse_site_list(sites)
+    media_chain = MediaChain()
+    search_chain = SearchChain()
+
+    async def event_source():
+        nonlocal media_season
+        torrents = None
+        if mediaid.startswith("tmdb:"):
+            tmdbid = int(mediaid.replace("tmdb:", ""))
+            if settings.RECOGNIZE_SOURCE == "douban":
+                doubaninfo = await media_chain.async_get_doubaninfo_by_tmdbid(tmdbid=tmdbid, mtype=media_type)
+                if doubaninfo:
+                    torrents = search_chain.async_search_by_id_stream(doubanid=doubaninfo.get("id"),
+                                                                      mtype=media_type, area=area,
+                                                                      season=media_season, sites=site_list,
+                                                                      cache_local=True)
+                else:
+                    yield {"type": "error", "success": False, "message": "Douban media info not recognized"}
+                    return
+            else:
+                torrents = search_chain.async_search_by_id_stream(tmdbid=tmdbid, mtype=media_type, area=area,
+                                                                  season=media_season, sites=site_list,
+                                                                  cache_local=True)
+        elif mediaid.startswith("douban:"):
+            doubanid = mediaid.replace("douban:", "")
+            if settings.RECOGNIZE_SOURCE == "themoviedb":
+                tmdbinfo = await media_chain.async_get_tmdbinfo_by_doubanid(doubanid=doubanid, mtype=media_type)
+                if tmdbinfo:
+                    if tmdbinfo.get('season') and not media_season:
+                        media_season = tmdbinfo.get('season')
+                    torrents = search_chain.async_search_by_id_stream(tmdbid=tmdbinfo.get("id"),
+                                                                      mtype=media_type, area=area,
+                                                                      season=media_season, sites=site_list,
+                                                                      cache_local=True)
+                else:
+                    yield {"type": "error", "success": False, "message": "TMDB media info not recognized"}
+                    return
+            else:
+                torrents = search_chain.async_search_by_id_stream(doubanid=doubanid, mtype=media_type, area=area,
+                                                                  season=media_season, sites=site_list,
+                                                                  cache_local=True)
+        elif mediaid.startswith("bangumi:"):
+            bangumiid = int(mediaid.replace("bangumi:", ""))
+            if settings.RECOGNIZE_SOURCE == "themoviedb":
+                tmdbinfo = await media_chain.async_get_tmdbinfo_by_bangumiid(bangumiid=bangumiid)
+                if tmdbinfo:
+                    torrents = search_chain.async_search_by_id_stream(tmdbid=tmdbinfo.get("id"),
+                                                                      mtype=media_type, area=area,
+                                                                      season=media_season, sites=site_list,
+                                                                      cache_local=True)
+                else:
+                    yield {"type": "error", "success": False, "message": "TMDB media info not recognized"}
+                    return
+            else:
+                doubaninfo = await media_chain.async_get_doubaninfo_by_bangumiid(bangumiid=bangumiid)
+                if doubaninfo:
+                    torrents = search_chain.async_search_by_id_stream(doubanid=doubaninfo.get("id"),
+                                                                      mtype=media_type, area=area,
+                                                                      season=media_season, sites=site_list,
+                                                                      cache_local=True)
+                else:
+                    yield {"type": "error", "success": False, "message": "Douban media info not recognized"}
+                    return
+        else:
+            event_data = MediaRecognizeConvertEventData(
+                mediaid=mediaid,
+                convert_type=settings.RECOGNIZE_SOURCE
+            )
+            event = await eventmanager.async_send_event(ChainEventType.MediaRecognizeConvert, event_data)
+            if event and event.event_data:
+                event_data = event.event_data
+                if event_data.media_dict:
+                    search_id = event_data.media_dict.get("id")
+                    if event_data.convert_type == "themoviedb":
+                        torrents = search_chain.async_search_by_id_stream(tmdbid=search_id, mtype=media_type,
+                                                                          area=area, season=media_season,
+                                                                          sites=site_list, cache_local=True)
+                    elif event_data.convert_type == "douban":
+                        torrents = search_chain.async_search_by_id_stream(doubanid=search_id, mtype=media_type,
+                                                                          area=area, season=media_season,
+                                                                          sites=site_list, cache_local=True)
+            else:
+                if not title:
+                    yield {"type": "error", "success": False, "message": "Unknown media ID"}
+                    return
+                meta = MetaInfo(title)
+                if year:
+                    meta.year = year
+                if media_type:
+                    meta.type = media_type
+                if media_season:
+                    meta.type = MediaType.TV
+                    meta.begin_season = media_season
+                mediainfo = await media_chain.async_recognize_media(meta=meta)
+                if mediainfo:
+                    if settings.RECOGNIZE_SOURCE == "themoviedb":
+                        torrents = search_chain.async_search_by_id_stream(tmdbid=mediainfo.tmdb_id,
+                                                                          mtype=media_type, area=area,
+                                                                          season=media_season, sites=site_list,
+                                                                          cache_local=True)
+                    else:
+                        torrents = search_chain.async_search_by_id_stream(doubanid=mediainfo.douban_id,
+                                                                          mtype=media_type, area=area,
+                                                                          season=media_season, sites=site_list,
+                                                                          cache_local=True)
+
+        if not torrents:
+            yield {"type": "error", "success": False, "message": "No resources found"}
+            return
+
+        async for event in torrents:
+            yield event
+
+    return StreamingResponse(_stream_search_events(request, event_source()), media_type="text/event-stream")
+
+
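A client of the endpoint above receives a `text/event-stream` body of newline-delimited `data:` frames. A minimal sketch of how such a body might be split back into JSON events on the consumer side (`iter_sse_messages` is an illustrative helper, not part of MoviePilot):

```python
import json

def iter_sse_messages(stream_text: str):
    # Split a raw SSE body on blank lines, then decode each "data:" frame.
    for chunk in stream_text.split("\n\n"):
        chunk = chunk.strip()
        if chunk.startswith("data: "):
            yield json.loads(chunk[len("data: "):])

body = 'data: {"type": "append", "total_items": 3}\n\ndata: {"type": "done"}\n\n'
events = list(iter_sse_messages(body))
```

Real consumers would read the response incrementally rather than from a complete string, but the framing rules are the same.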
 @router.get("/media/{mediaid}", summary="Exact resource search", response_model=schemas.Response)
 async def search_by_id(mediaid: str,
                        mtype: Optional[str] = None,
@@ -156,6 +323,26 @@ async def search_by_id(mediaid: str,
     return schemas.Response(success=True, data=[torrent.to_dict() for torrent in torrents])
 
 
+@router.get("/title/stream", summary="Progressive fuzzy resource search")
+async def search_by_title_stream(request: Request,
+                                 keyword: Optional[str] = None,
+                                 page: Optional[int] = 0,
+                                 sites: Optional[str] = None,
+                                 _: schemas.TokenPayload = Depends(verify_resource_token)) -> Any:
+    """
+    Progressively fuzzy-search site resources by name; the response format is SSE
+    """
+    AIRecommendChain().cancel_ai_recommend()
+
+    event_source = SearchChain().async_search_by_title_stream(
+        title=keyword,
+        page=page,
+        sites=_parse_site_list(sites),
+        cache_local=True
+    )
+    return StreamingResponse(_stream_search_events(request, event_source), media_type="text/event-stream")
+
+
 @router.get("/title", summary="Fuzzy resource search", response_model=schemas.Response)
 async def search_by_title(keyword: Optional[str] = None,
                           page: Optional[int] = 0,
@@ -169,7 +356,7 @@ async def search_by_title(keyword: Optional[str] = None,
 
     torrents = await SearchChain().async_search_by_title(
         title=keyword, page=page,
-        sites=[int(site) for site in sites.split(",") if site] if sites else None,
+        sites=_parse_site_list(sites),
         cache_local=True
     )
     if not torrents:
@@ -399,7 +399,15 @@ async def subscribe_history(
     """
     Query movie/TV series subscription history
     """
-    return await SubscribeHistory.async_list_by_type(db, mtype=mtype, page=page, count=count)
+    histories = await SubscribeHistory.async_list_by_type(db, mtype=mtype, page=page, count=count)
+    result = []
+    for history in histories:
+        history_item = schemas.Subscribe.model_validate(history, from_attributes=True)
+        if history_item.type == MediaType.TV.value:
+            history_item.total_episode = 0
+            history_item.lack_episode = 0
+        result.append(history_item)
+    return result
 
 
 @router.delete("/history/{history_id}", summary="Delete subscription history", response_model=schemas.Response)
File diff suppressed because it is too large

app/api/openai_utils.py (new file, 177 lines)
@@ -0,0 +1,177 @@
+import hashlib
+import time
+import uuid
+from typing import Any, Dict, List, Tuple
+
+
+def _get_message_field(message: Any, field: str, default: Any = None) -> Any:
+    if isinstance(message, dict):
+        return message.get(field, default)
+    return getattr(message, field, default)
+
+
+def extract_text_and_images(content: Any) -> Tuple[str, List[str]]:
+    if content is None:
+        return "", []
+    if isinstance(content, str):
+        return content.strip(), []
+
+    text_parts: List[str] = []
+    image_urls: List[str] = []
+    if isinstance(content, list):
+        for item in content:
+            if isinstance(item, str):
+                normalized = item.strip()
+                if normalized:
+                    text_parts.append(normalized)
+                continue
+            if not isinstance(item, dict):
+                continue
+            item_type = (item.get("type") or "").lower()
+            if item_type == "text":
+                text = item.get("text")
+                if text and str(text).strip():
+                    text_parts.append(str(text).strip())
+            elif item_type == "input_text":
+                text = item.get("text")
+                if text and str(text).strip():
+                    text_parts.append(str(text).strip())
+            elif item_type == "image_url":
+                image_url = item.get("image_url")
+                url = image_url.get("url") if isinstance(image_url, dict) else image_url
+                if url and str(url).strip():
+                    image_urls.append(str(url).strip())
+            elif item_type == "input_image":
+                url = item.get("image_url")
+                if url and str(url).strip():
+                    image_urls.append(str(url).strip())
+            elif item_type == "image":
+                source = item.get("source") or {}
+                if isinstance(source, dict) and source.get("type") == "base64":
+                    data = source.get("data")
+                    media_type = source.get("media_type") or "image/png"
+                    if data and str(data).strip():
+                        image_urls.append(f"data:{media_type};base64,{str(data).strip()}")
+    return "\n".join(text_parts).strip(), image_urls
+
+
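The helper above normalizes the several multimodal content shapes that OpenAI-style clients send (plain strings, `text`/`input_text` parts, `image_url` parts). A reduced sketch covering just those cases, to make the supported shapes concrete:

```python
from typing import Any, List, Tuple

def extract_text_and_images(content: Any) -> Tuple[str, List[str]]:
    # Reduced sketch of the helper above: collect text parts and image URLs
    # from either a plain string or an OpenAI-style content-part list.
    if content is None:
        return "", []
    if isinstance(content, str):
        return content.strip(), []
    texts: List[str] = []
    images: List[str] = []
    for item in content:
        if isinstance(item, str) and item.strip():
            texts.append(item.strip())
        elif isinstance(item, dict):
            kind = (item.get("type") or "").lower()
            if kind in ("text", "input_text") and item.get("text"):
                texts.append(str(item["text"]).strip())
            elif kind == "image_url":
                url = item.get("image_url")
                url = url.get("url") if isinstance(url, dict) else url
                if url:
                    images.append(str(url).strip())
    return "\n".join(texts), images

text, images = extract_text_and_images([
    {"type": "text", "text": "describe this"},
    {"type": "image_url", "image_url": {"url": "https://example.com/a.png"}},
])
```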
+def build_prompt(messages: List[Any], use_server_session: bool) -> Tuple[str, List[str]]:
+    system_texts: List[str] = []
+    transcript: List[str] = []
+    latest_user_text = ""
+    latest_user_images: List[str] = []
+
+    for message in messages:
+        role = str(_get_message_field(message, "role", "user") or "user").lower()
+        if role == "developer":
+            role = "system"
+        text, images = extract_text_and_images(_get_message_field(message, "content"))
+        if role == "system":
+            if text:
+                system_texts.append(text)
+            continue
+        if role == "user":
+            if text or images:
+                latest_user_text = text
+                latest_user_images = images
+            if text:
+                transcript.append(f"user: {text}")
+            continue
+        if text:
+            transcript.append(f"{role}: {text}")
+
+    if not latest_user_text and not latest_user_images:
+        raise ValueError("No usable user message found in messages.")
+
+    prompt_parts: List[str] = []
+    if system_texts:
+        prompt_parts.append("System requirements:\n" + "\n\n".join(system_texts))
+
+    if not use_server_session and transcript:
+        history = transcript[:-1] if transcript[-1].startswith("user: ") else transcript
+        if history:
+            prompt_parts.append("Conversation context:\n" + "\n".join(history[-10:]))
+
+    if latest_user_text:
+        prompt_parts.append("Current user message:\n" + latest_user_text)
+    else:
+        prompt_parts.append("Current user message:\nPlease reply based on the image content.")
+
+    return "\n\n".join(part for part in prompt_parts if part).strip(), latest_user_images
+
+
+def build_session_id(session_key: str, prefix: str) -> str:
+    digest = hashlib.sha256(session_key.encode("utf-8")).hexdigest()
+    return f"{prefix}{digest[:32]}"
+
+
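`build_session_id` above derives a stable, opaque identifier from a caller-supplied key by truncating a SHA-256 digest. The same construction in isolation, to show the properties it relies on (deterministic, fixed length, distinct keys map to distinct ids in practice):

```python
import hashlib

def build_session_id(session_key: str, prefix: str) -> str:
    # Hash the caller-provided key so the session id is deterministic,
    # opaque, and bounded in length (prefix + 32 hex characters).
    digest = hashlib.sha256(session_key.encode("utf-8")).hexdigest()
    return f"{prefix}{digest[:32]}"

sid = build_session_id("user-42", "sess-")
```

Truncating to 32 hex characters (128 bits) keeps ids short while leaving collisions practically impossible for this use.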
+def build_completion_payload(content: str, model_id: str) -> Dict[str, Any]:
+    created = int(time.time())
+    return {
+        "id": f"chatcmpl-{uuid.uuid4().hex}",
+        "object": "chat.completion",
+        "created": created,
+        "model": model_id,
+        "choices": [
+            {
+                "index": 0,
+                "message": {
+                    "role": "assistant",
+                    "content": content,
+                },
+                "finish_reason": "stop",
+            }
+        ],
+        "usage": {
+            "prompt_tokens": 0,
+            "completion_tokens": 0,
+            "total_tokens": 0,
+        },
+    }
+
+
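The payload builder above emits an OpenAI-compatible `chat.completion` envelope with token usage zeroed out (the upstream backend does not report counts). A standalone sketch of the same shape, useful for checking which fields OpenAI-style clients expect:

```python
import time
import uuid
from typing import Any, Dict

def build_completion_payload(content: str, model_id: str) -> Dict[str, Any]:
    # Minimal OpenAI-compatible chat.completion envelope; usage is reported
    # as zero because real token counts are unavailable here.
    return {
        "id": f"chatcmpl-{uuid.uuid4().hex}",
        "object": "chat.completion",
        "created": int(time.time()),
        "model": model_id,
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": content},
            "finish_reason": "stop",
        }],
        "usage": {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0},
    }

payload = build_completion_payload("hello", "moviepilot-ai")
```

The `"moviepilot-ai"` model id is an illustrative placeholder, not a value defined by the source.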
+def build_responses_input(
+    input_data: Any, instructions: str | None = None
+) -> List[Dict[str, Any]]:
+    messages: List[Dict[str, Any]] = []
+    if instructions and str(instructions).strip():
+        messages.append({"role": "system", "content": str(instructions).strip()})
+
+    if isinstance(input_data, str):
+        normalized = input_data.strip()
+        if normalized:
+            messages.append({"role": "user", "content": normalized})
+        return messages
+
+    if isinstance(input_data, list):
+        for item in input_data:
+            if not isinstance(item, dict):
+                continue
+            item_type = (item.get("type") or "").lower()
+            if item_type == "message":
+                role = item.get("role") or "user"
+                content = item.get("content")
+                messages.append({"role": role, "content": content})
+            elif item.get("role") and "content" in item:
+                messages.append({"role": item.get("role"), "content": item.get("content")})
+        return messages
+
+    if isinstance(input_data, dict) and input_data.get("role") and "content" in input_data:
+        messages.append({"role": input_data.get("role"), "content": input_data.get("content")})
+
+    return messages
+
+
+def build_anthropic_messages(
+    system: Any, messages: List[Any]
+) -> List[Dict[str, Any]]:
+    normalized: List[Dict[str, Any]] = []
+    system_text, _ = extract_text_and_images(system)
+    if system_text:
+        normalized.append({"role": "system", "content": system_text})
+
+    for message in messages:
+        role = _get_message_field(message, "role", "user")
+        content = _get_message_field(message, "content")
+        normalized.append({"role": role, "content": content})
+    return normalized
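Anthropic-style requests carry the system prompt in a separate `system` field rather than as a message; the builder above folds it back in as a leading `{"role": "system"}` entry. A reduced sketch of that normalization (string-only `system`, dict messages — the real helper also accepts content-part lists):

```python
from typing import Any, Dict, List

def build_anthropic_messages(system: Any,
                             messages: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
    # Fold Anthropic's separate system prompt back into the message list
    # as a leading system message, then pass the rest through unchanged.
    normalized: List[Dict[str, Any]] = []
    if isinstance(system, str) and system.strip():
        normalized.append({"role": "system", "content": system.strip()})
    for message in messages:
        normalized.append({"role": message.get("role", "user"),
                           "content": message.get("content")})
    return normalized

result = build_anthropic_messages("be brief", [{"role": "user", "content": "hi"}])
```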
@@ -38,6 +38,7 @@ from app.schemas import (
     TransferDirectoryConf,
     MessageResponse,
 )
+from app.utils.identity import normalize_internal_user_id
 from app.schemas.category import CategoryConfig
 from app.schemas.types import (
     TorrentStatus,
@@ -119,6 +120,21 @@ class ChainBase(metaclass=ABCMeta):
         """
         self.filecache.delete(filename)
 
+    @staticmethod
+    def _normalize_notification_for_dispatch(
+        message: Notification
+    ) -> Notification:
+        """
+        Normalize a notification before it is dispatched.
+        Background tasks reuse an internal placeholder user ID as the session identity;
+        clear it here before the message is actually sent, so that it goes back through
+        the default notification routing or targets-based resolution.
+        """
+        dispatch_message = copy.deepcopy(message)
+        dispatch_message.userid = normalize_internal_user_id(
+            dispatch_message.userid
+        )
+        return dispatch_message
+
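The normalization above deep-copies the notification so the persisted copy keeps the internal placeholder userid while only the dispatched copy has it cleared. A minimal sketch of that pattern with illustrative stand-ins (`Notification`, `normalize_internal_user_id`, and the `__internal__:` prefix here are hypothetical, not MoviePilot's real definitions):

```python
import copy
from dataclasses import dataclass
from typing import Optional

INTERNAL_PREFIX = "__internal__:"  # illustrative placeholder convention

@dataclass
class Notification:
    title: str
    userid: Optional[str] = None

def normalize_internal_user_id(userid: Optional[str]) -> Optional[str]:
    # Clear placeholder ids so dispatch falls back to default routing.
    if userid and userid.startswith(INTERNAL_PREFIX):
        return None
    return userid

def normalize_for_dispatch(message: Notification) -> Notification:
    # Deep-copy first so the original (persisted) message is untouched.
    dispatch = copy.deepcopy(message)
    dispatch.userid = normalize_internal_user_id(dispatch.userid)
    return dispatch

original = Notification(title="done", userid="__internal__:job-1")
dispatched = normalize_for_dispatch(original)
```

The deep copy is the important part: mutating the dispatched message in place would also corrupt the copy already saved by `messagehelper`/`messageoper`.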
     async def async_remove_cache(self, filename: str) -> None:
         """
         Asynchronously remove a cache entry, deleting both the Redis and local copies
@@ -1119,10 +1135,13 @@ class ChainBase(metaclass=ABCMeta):
         # Save the message
         self.messagehelper.put(message, role="user", title=message.title)
         self.messageoper.add(**message.model_dump())
+        dispatch_message = self._normalize_notification_for_dispatch(message)
         # Dispatch the message according to the isolation settings
-        if not message.userid and message.mtype:
+        if not dispatch_message.userid and dispatch_message.mtype:
             # Message isolation setting
-            notify_action = ServiceConfigHelper.get_notification_switch(message.mtype)
+            notify_action = ServiceConfigHelper.get_notification_switch(
+                dispatch_message.mtype
+            )
             if notify_action:
                 # 'admin' 'user,admin' 'user' 'all'
                 actions = notify_action.split(",")
@@ -1131,7 +1150,7 @@ class ChainBase(metaclass=ABCMeta):
                 send_orignal = False
                 useroper = UserOper()
                 for action in actions:
-                    send_message = copy.deepcopy(message)
+                    send_message = copy.deepcopy(dispatch_message)
                     if action == "admin" and not admin_sended:
                         # Send to administrators only
                         logger.info(f"Messages of type {send_message.mtype} are configured to be sent to administrators")
@@ -1186,13 +1205,13 @@ class ChainBase(metaclass=ABCMeta):
             # Send the message event
             self.eventmanager.send_event(
                 etype=EventType.NoticeMessage,
-                data={**message.model_dump(), "type": message.mtype},
+                data={**dispatch_message.model_dump(), "type": dispatch_message.mtype},
             )
             # Send as the original message
             self.messagequeue.send_message(
                 "post_message",
-                message=message,
-                immediately=True if message.userid else False,
+                message=dispatch_message,
+                immediately=True if dispatch_message.userid else False,
                 **kwargs,
             )
 
@@ -1233,10 +1252,13 @@ class ChainBase(metaclass=ABCMeta):
         # Save the message
         self.messagehelper.put(message, role="user", title=message.title)
         await self.messageoper.async_add(**message.model_dump())
+        dispatch_message = self._normalize_notification_for_dispatch(message)
         # Dispatch the message according to the isolation settings
-        if not message.userid and message.mtype:
+        if not dispatch_message.userid and dispatch_message.mtype:
             # Message isolation setting
-            notify_action = ServiceConfigHelper.get_notification_switch(message.mtype)
+            notify_action = ServiceConfigHelper.get_notification_switch(
+                dispatch_message.mtype
+            )
             if notify_action:
                 # 'admin' 'user,admin' 'user' 'all'
                 actions = notify_action.split(",")
@@ -1245,7 +1267,7 @@ class ChainBase(metaclass=ABCMeta):
                 send_orignal = False
                 useroper = UserOper()
                 for action in actions:
-                    send_message = copy.deepcopy(message)
+                    send_message = copy.deepcopy(dispatch_message)
                     if action == "admin" and not admin_sended:
                         # Send to administrators only
                         logger.info(f"Messages of type {send_message.mtype} are configured to be sent to administrators")
@@ -1300,13 +1322,13 @@ class ChainBase(metaclass=ABCMeta):
             # Send the message event
             await self.eventmanager.async_send_event(
                 etype=EventType.NoticeMessage,
                 data={**dispatch_message.model_dump(), "type": dispatch_message.mtype},
             )
             # Send as the original message
             await self.messagequeue.async_send_message(
                 "post_message",
-                message=message,
-                immediately=True if message.userid else False,
+                message=dispatch_message,
+                immediately=True if dispatch_message.userid else False,
                 **kwargs,
             )
 
@@ -1324,11 +1346,12 @@ class ChainBase(metaclass=ABCMeta):
             message, role="user", note=note_list, title=message.title
         )
         self.messageoper.add(**message.model_dump(), note=note_list)
+        dispatch_message = self._normalize_notification_for_dispatch(message)
         return self.messagequeue.send_message(
             "post_medias_message",
-            message=message,
+            message=dispatch_message,
             medias=medias,
-            immediately=True if message.userid else False,
+            immediately=True if dispatch_message.userid else False,
         )
 
     def post_torrents_message(
@@ -1345,11 +1368,12 @@ class ChainBase(metaclass=ABCMeta):
             message, role="user", note=note_list, title=message.title
         )
         self.messageoper.add(**message.model_dump(), note=note_list)
+        dispatch_message = self._normalize_notification_for_dispatch(message)
         return self.messagequeue.send_message(
             "post_torrents_message",
-            message=message,
+            message=dispatch_message,
             torrents=torrents,
-            immediately=True if message.userid else False,
+            immediately=True if dispatch_message.userid else False,
         )
 
     def delete_message(
@@ -1383,6 +1407,7 @@ class ChainBase(metaclass=ABCMeta):
         chat_id: Union[str, int],
         text: str,
+        title: Optional[str] = None,
         buttons: Optional[List[List[dict]]] = None,
     ) -> bool:
         """
         Edit a message that has already been sent
@@ -1392,6 +1417,7 @@ class ChainBase(metaclass=ABCMeta):
         :param chat_id: Chat ID
         :param text: New message content
+        :param title: Message title
         :param buttons: Updated button list
         :return: Whether the edit succeeded
         """
         return self.run_module(
@@ -1402,6 +1428,7 @@ class ChainBase(metaclass=ABCMeta):
             chat_id=chat_id,
             text=text,
+            title=title,
             buttons=buttons,
         )
 
     def send_direct_message(self, message: Notification) -> Optional[MessageResponse]:
@@ -1411,7 +1438,10 @@ class ChainBase(metaclass=ABCMeta):
         :param message: Message body
         :return: Message response (contains message_id, chat_id, etc.)
         """
-        return self.run_module("send_direct_message", message=message)
+        return self.run_module(
+            "send_direct_message",
+            message=self._normalize_notification_for_dispatch(message),
+        )
 
     def metadata_img(
         self,
app/chain/interaction.py (new file, 1363 lines)
File diff suppressed because it is too large
@@ -1320,7 +1320,7 @@ class MediaChain(ChainBase, ConfigReloadMixin, metaclass=Singleton):
             mediainfo = await native_fn()
         else:
             # Native first
-            logger.info(f"Plugin-priority mode is disabled. Trying native recognition, title: {log_name} ...")
+            logger.info(f"Recognizing title: {log_name} ...")
             mediainfo = await native_fn()
             if not mediainfo and plugin_available:
                 logger.info(
app/chain/message.py (1886 lines)
File diff suppressed because it is too large
@@ -3,7 +3,7 @@ import random
 import time
 from concurrent.futures import ThreadPoolExecutor, as_completed
 from datetime import datetime
-from typing import Dict, Tuple
+from typing import AsyncIterator, Any, Dict, Tuple
 from typing import List, Optional
 
 from app.helper.sites import SitesHelper  # noqa
@@ -167,6 +167,85 @@ class SearchChain(ChainBase):
         await self.async_save_cache(contexts, self.__result_temp_file)
         return contexts
 
+    async def async_search_by_title_stream(self, title: str, page: Optional[int] = 0,
+                                           sites: List[int] = None,
+                                           cache_local: Optional[bool] = False) -> AsyncIterator[dict]:
+        """
+        Progressively search resources by title, without recognition or filtering; results are returned in the order sites finish
+        """
+        if title:
+            logger.info(f'Starting progressive resource search, keyword: {title} ...')
+        else:
+            logger.info(f'Starting progressive resource browsing, sites: {sites} ...')
+
+        contexts: List[Context] = []
+        async for event in self.__async_search_all_sites_stream(keyword=title, sites=sites, page=page):
+            result = event.pop("items", []) or []
+            batch_contexts = [
+                Context(meta_info=MetaInfo(title=torrent.title, subtitle=torrent.description),
+                        torrent_info=torrent)
+                for torrent in result
+            ]
+            if batch_contexts:
+                contexts.extend(batch_contexts)
+                yield {
+                    **event,
+                    "type": "append",
+                    "items": [context.to_dict() for context in batch_contexts],
+                    "total_items": len(contexts)
+                }
+
+        if cache_local:
+            await self.async_save_cache(contexts, self.__result_temp_file)
+
+        if not contexts:
+            logger.warn(f'No resources found for {title}')
+        yield {
+            "type": "done",
+            "text": f"Search finished, {len(contexts)} resources in total",
+            "items": [context.to_dict() for context in contexts],
+            "total_items": len(contexts)
+        }
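The stream methods above are `AsyncIterator[dict]` generators that the API layer consumes with `async for`, forwarding each event until a final `"done"` summary. A self-contained sketch of producing and consuming such a stream (`fake_search_stream` is an illustrative stand-in for the real search call):

```python
import asyncio
from typing import AsyncIterator

async def fake_search_stream(total_sites: int) -> AsyncIterator[dict]:
    # Stand-in for async_search_by_title_stream: emit one "append" event
    # per site, then a final "done" summary event.
    found = 0
    for site in range(total_sites):
        await asyncio.sleep(0)  # yield control, as real network I/O would
        found += 1
        yield {"type": "append", "finished": site + 1, "total_items": found}
    yield {"type": "done", "total_items": found}

async def collect():
    # The API layer does the same thing with `async for`, but wraps each
    # event in an SSE frame instead of collecting it into a list.
    return [event async for event in fake_search_stream(3)]

events = asyncio.run(collect())
```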
+    async def async_search_by_id_stream(self, tmdbid: Optional[int] = None, doubanid: Optional[str] = None,
+                                        mtype: MediaType = None, area: Optional[str] = "title",
+                                        season: Optional[int] = None, sites: List[int] = None,
+                                        cache_local: bool = False) -> AsyncIterator[dict]:
+        """
+        Progressively search resources by TMDB ID / Douban ID: raw site candidates are returned first, then the final results after filtering and matching
+        """
+        mediainfo = await self.async_recognize_media(tmdbid=tmdbid, doubanid=doubanid, mtype=mtype)
+        if not mediainfo:
+            logger.error(f'{tmdbid} media info recognition failed!')
+            yield {
+                "type": "error",
+                "success": False,
+                "message": "Media info recognition failed"
+            }
+            return
+
+        no_exists = None
+        if season is not None:
+            no_exists = {
+                tmdbid or doubanid: {
+                    season: NotExistMediaInfo(episodes=[])
+                }
+            }
+
+        contexts: List[Context] = []
+        async for event in self.async_process_stream(mediainfo=mediainfo, sites=sites, area=area, no_exists=no_exists):
+            if event.get("type") == "done":
+                contexts = event.get("contexts") or []
+                event = {
+                    key: value
+                    for key, value in event.items()
+                    if key != "contexts"
+                }
+            yield event
+
+        if cache_local:
+            await self.async_save_cache(contexts, self.__result_temp_file)
+
     @staticmethod
     def __prepare_params(mediainfo: MediaInfo,
                          keyword: Optional[str] = None,
@@ -503,6 +582,115 @@ class SearchChain(ChainBase):
             filter_params=filter_params
         )
 
+    async def async_process_stream(self, mediainfo: MediaInfo,
+                                   keyword: Optional[str] = None,
+                                   no_exists: Dict[int, Dict[int, NotExistMediaInfo]] = None,
+                                   sites: List[int] = None,
+                                   rule_groups: List[str] = None,
+                                   area: Optional[str] = "title",
+                                   custom_words: List[str] = None,
+                                   filter_params: Dict[str, str] = None) -> AsyncIterator[dict]:
+        """
+        Progressively search torrent resources based on media info: site candidates are returned first, then the final filtered and matched results
+        """
+
+        # Douban title handling
+        if not mediainfo.tmdb_id:
+            meta = MetaInfo(title=mediainfo.title)
+            mediainfo.title = meta.name
+            mediainfo.season = meta.begin_season
+        logger.info(f'Starting progressive resource search, keyword: {keyword or mediainfo.title} ...')
+
+        # Enrich media info
+        if not mediainfo.names:
+            mediainfo = await self.async_recognize_media(mtype=mediainfo.type,
+                                                         tmdbid=mediainfo.tmdb_id,
+                                                         doubanid=mediainfo.douban_id)
+            if not mediainfo:
+                logger.error(f'Media info recognition failed!')
+                yield {
+                    "type": "error",
+                    "success": False,
+                    "message": "Media info recognition failed"
+                }
+                return
+
+        # Prepare search parameters
+        season_episodes, keywords = self.__prepare_params(
+            mediainfo=mediainfo,
+            keyword=keyword,
+            no_exists=no_exists
+        )
+
+        torrents: List[TorrentInfo] = []
+        candidate_contexts: List[Context] = []
+        search_count = 0
+
+        for search_word in keywords:
+            if search_count > 0:
+                logger.info(f"Searched {search_count} time(s), forcing a 1-10 second sleep ...")
+                await asyncio.sleep(random.randint(1, 10))
+
+            async for event in self.__async_search_all_sites_stream(
+                    mediainfo=mediainfo,
+                    keyword=search_word,
+                    sites=sites,
+                    area=area):
+                result = event.pop("items", []) or []
+                torrents.extend(result)
+                batch_contexts = [
+                    Context(meta_info=MetaInfo(title=torrent.title, subtitle=torrent.description),
+                            media_info=mediainfo,
+                            torrent_info=torrent)
+                    for torrent in result
+                ]
+                candidate_contexts.extend(batch_contexts)
+                yield {
+                    **event,
+                    "type": "append",
+                    "stage": "searching",
+                    "items": [context.to_dict() for context in batch_contexts],
+                    "total_items": len(candidate_contexts)
+                }
+
+            search_count += 1
+            if torrents:
+                logger.info(f"Found {len(torrents)} resources in total, stopping search")
+                break
+
+        yield {
+            "type": "progress",
+            "stage": "filtering",
+            "value": 98,
+            "text": f"Filtering and matching {len(torrents)} candidate resources ..."
+        }
+
+        contexts = await run_in_threadpool(self.__parse_result,
+                                           torrents=torrents,
+                                           mediainfo=mediainfo,
+                                           keyword=keyword,
+                                           rule_groups=rule_groups,
+                                           season_episodes=season_episodes,
+                                           custom_words=custom_words,
+                                           filter_params=filter_params)
+        final_items = [context.to_dict() for context in contexts]
+        yield {
+            "type": "replace",
+            "stage": "filtered",
+            "value": 100,
+            "text": f"Filtering and matching finished, {len(contexts)} resources in total",
+            "items": final_items,
+            "total_items": len(contexts)
+        }
+        yield {
+            "type": "done",
+            "stage": "done",
+            "text": f"Search finished, {len(contexts)} resources in total",
+            "items": final_items,
+            "total_items": len(contexts),
+            "contexts": contexts
+        }
+
     def __search_all_sites(self, keyword: str,
                            mediainfo: Optional[MediaInfo] = None,
                            sites: List[int] = None,
@@ -670,6 +858,106 @@ class SearchChain(ChainBase):
         # Return
         return results
 
+    async def __async_search_all_sites_stream(self, keyword: str,
+                                              mediainfo: Optional[MediaInfo] = None,
+                                              sites: List[int] = None,
+                                              page: Optional[int] = 0,
+                                              area: Optional[str] = "title") -> AsyncIterator[Dict[str, Any]]:
+        """
+        Search multiple sites asynchronously, progressively returning results in the order sites finish
+        :param mediainfo: Recognized media info
+        :param keyword: Search keyword
+        :param sites: List of site IDs; if given, only those sites are searched, otherwise all sites
+        :param page: Search page number
+        :param area: Search area, title or imdbid
+        """
+        indexer_sites = []
+
+        if not sites:
+            sites = SystemConfigOper().get(SystemConfigKey.IndexerSites) or []
+
+        for indexer in await SitesHelper().async_get_indexers():
+            if not sites or indexer.get("id") in sites:
+                indexer_sites.append(indexer)
+        if not indexer_sites:
+            logger.warn('No valid sites enabled, unable to search for resources')
+            yield {
+                "type": "done",
+                "stage": "searching",
+                "value": 100,
+                "text": "No valid sites enabled, unable to search for resources",
+                "items": [],
+                "finished": 0,
+                "total": 0
+            }
+            return
+
+        progress = ProgressHelper(ProgressKey.Search)
+        progress.start()
+        start_time = datetime.now()
+        total_num = len(indexer_sites)
+        finish_count = 0
+        progress.update(value=0,
+                        text=f"Starting search across {total_num} sites ...")
+        yield {
+            "type": "progress",
+            "stage": "searching",
+            "value": 0,
+            "text": f"Starting search across {total_num} sites ...",
+            "items": [],
+            "finished": 0,
+            "total": total_num
+        }
+
+        async def search_site(site: dict) -> Tuple[dict, List[TorrentInfo]]:
+            if area == "imdbid":
+                result = await self.async_search_torrents(site=site,
+                                                          keyword=mediainfo.imdb_id if mediainfo else None,
+                                                          mtype=mediainfo.type if mediainfo else None,
+                                                          page=page)
+            else:
+                result = await self.async_search_torrents(site=site,
+                                                          keyword=keyword,
+                                                          mtype=mediainfo.type if mediainfo else None,
+                                                          page=page)
+            return site, result or []
+
+        tasks = [asyncio.create_task(search_site(site)) for site in indexer_sites]
+        results_count = 0
+        try:
+            for future in asyncio.as_completed(tasks):
+                if global_vars.is_system_stopped:
+                    break
+                finish_count += 1
+                site, result = await future
+                results_count += len(result)
+                logger.info(f"Site search progress: {finish_count} / {total_num}")
+                progress_value = finish_count / total_num * 100
+                progress_text = f"Searching {keyword or ''}, {finish_count} / {total_num} sites completed ..."
+                progress.update(value=progress_value, text=progress_text)
+                yield {
+                    "type": "append",
+                    "stage": "searching",
+                    "value": progress_value,
+                    "text": progress_text,
+                    "items": result,
+                    "site": site.get("name"),
+                    "site_id": site.get("id"),
+                    "finished": finish_count,
+                    "total": total_num,
+                    "total_items": results_count
+                }
+        finally:
+            for task in tasks:
+                if not task.done():
+                    task.cancel()
+
+        end_time = datetime.now()
+        progress.update(value=100,
+                        text=f"Site search finished, valid resources: {results_count}, total time {(end_time - start_time).seconds} seconds")
+        logger.info(f"Site search finished, valid resources: {results_count}, total time {(end_time - start_time).seconds} seconds")
+        progress.end()
@eventmanager.register(EventType.SiteDeleted)
|
||||
def remove_site(self, event: Event):
|
||||
"""
|
||||
|
||||
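The `__async_search_all_sites_stream` generator above fans out one asyncio task per site, consumes them with `asyncio.as_completed` so events stream back in completion order rather than submission order, and cancels stragglers in a `finally` block. A minimal standalone sketch of that pattern (site names and delays are invented for illustration):

```python
import asyncio
from typing import Any, AsyncIterator, Dict, List


async def search_site(site: str, delay: float) -> List[str]:
    # Stand-in for async_search_torrents: each site answers after its own delay
    await asyncio.sleep(delay)
    return [f"{site}-result"]


async def search_all_sites_stream(sites: Dict[str, float]) -> AsyncIterator[Dict[str, Any]]:
    async def one(name: str, delay: float):
        return name, await search_site(name, delay)

    tasks = [asyncio.create_task(one(n, d)) for n, d in sites.items()]
    finished, total = 0, len(tasks)
    try:
        # as_completed yields futures in completion order, not submission order
        for future in asyncio.as_completed(tasks):
            name, items = await future
            finished += 1
            yield {"type": "append", "site": name, "items": items,
                   "finished": finished, "total": total}
    finally:
        # Cancel anything still running if the consumer stops early
        for t in tasks:
            if not t.done():
                t.cancel()


async def main() -> List[str]:
    order = []
    async for event in search_all_sites_stream({"slow": 0.05, "fast": 0.01}):
        order.append(event["site"])
    return order


print(asyncio.run(main()))  # fastest site arrives first: ['fast', 'slow']
```

The `finally` cancellation mirrors the real method: if the consumer of the stream stops early (e.g. system shutdown), pending site searches are not left running.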
1241  app/chain/skills.py  (new file)
File diff suppressed because it is too large
@@ -61,6 +61,12 @@ class StorageChain(ChainBase):
        """
        return self.run_module("create_folder", fileitem=fileitem, name=name)

    def get_folder(self, storage: str, path: Path) -> Optional[schemas.FileItem]:
        """
        Get a directory, creating it recursively if it does not exist
        """
        return self.run_module("get_folder", storage=storage, path=path)

    def download_file(self, fileitem: schemas.FileItem, path: Path = None) -> Optional[Path]:
        """
        Download a file
@@ -593,11 +593,17 @@ class SubscribeChain(ChainBase):

            # Best-version (upgrade) mode
            if subscribe.best_version:
                # When upgrading, skip anything that is not a full season
                if torrent_mediainfo.type == MediaType.TV:
                    if torrent_meta.episode_list:
                        logger.info(f'{subscribe.name} 正在洗版,{torrent_info.title} 不是整季')
                        continue
                # When upgrading, skip torrents that do not match the subscribed episode range
                if (
                    torrent_mediainfo.type == MediaType.TV
                    and not self._is_episode_range_covered(
                        meta=torrent_meta, subscribe=subscribe
                    )
                ):
                    logger.info(
                        f"{subscribe.name} 正在洗版,{torrent_info.title} 不符合订阅集数范围"
                    )
                    continue
                # When upgrading, skip torrents whose priority is not higher than the downloaded one
                if subscribe.current_priority \
                        and torrent_info.pri_order <= subscribe.current_priority:
@@ -985,11 +991,18 @@ class SubscribeChain(ChainBase):
                    )
                    continue
            else:
                # When upgrading, skip anything that is not a full season
                if meta.type == MediaType.TV:
                    if torrent_meta.episode_list:
                        logger.debug(f'{subscribe.name} 正在洗版,{torrent_info.title} 不是整季')
                        continue
                # When upgrading, skip torrents that do not match the subscribed episode range
                if (
                    meta.type == MediaType.TV
                    and not self._is_episode_range_covered(
                        meta=torrent_meta,
                        subscribe=subscribe,
                    )
                ):
                    logger.debug(
                        f"{subscribe.name} 正在洗版,{torrent_info.title} 不符合订阅集数范围"
                    )
                    continue

            # Match additional subscription filter parameters
            if not torrenthelper.filter_torrent(torrent_info=torrent_info,
@@ -1753,6 +1766,8 @@ class SubscribeChain(ChainBase):
        - exist_flag (bool): whether the media has been fully downloaded or already exists
        - no_exists (dict): missing media info, including missing episodes or other related details
        """
        self.__refresh_total_episode_before_completion(subscribe=subscribe, mediainfo=mediainfo)

        # Not best-version mode
        if not subscribe.best_version:
            # Total episodes per season
@@ -1821,6 +1836,55 @@ class SubscribeChain(ChainBase):
        # Return a result indicating the media is not fully downloaded or present
        return False, no_exists

    @staticmethod
    def __refresh_total_episode_before_completion(subscribe: Subscribe, mediainfo: MediaInfo):
        """
        Before the completion check, correct the subscription's total episode count from the
        latest identification result, so a stale total does not cause a false completion.
        """
        if subscribe.type != MediaType.TV.value:
            return
        if subscribe.manual_total_episode:
            return
        if subscribe.season is None:
            return

        new_total_episode = len((mediainfo.seasons or {}).get(subscribe.season) or [])
        old_total_episode = subscribe.total_episode or 0
        if not new_total_episode or new_total_episode <= old_total_episode:
            return

        old_lack_episode = subscribe.lack_episode or 0
        new_lack_episode = old_lack_episode + (new_total_episode - old_total_episode)
        now = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        SubscribeOper().update(subscribe.id, {
            "total_episode": new_total_episode,
            "lack_episode": new_lack_episode,
            "last_update": now
        })
        subscribe.total_episode = new_total_episode
        subscribe.lack_episode = new_lack_episode
        subscribe.last_update = now
        logger.info(
            f"订阅 {subscribe.name} 第{subscribe.season}季 总集数更新为 {new_total_episode},缺失集数更新为 {new_lack_episode}"
        )

    @staticmethod
    def _is_episode_range_covered(meta: MetaBase, subscribe: Subscribe) -> bool:
        """
        Check whether a torrent covers the subscription's episode range
        """
        episodes = meta.episode_list
        if not episodes:
            # No episode info means the torrent is a complete pack
            return True

        min_ep = min(episodes)
        max_ep = max(episodes)
        start_ep = subscribe.start_episode or 1
        end_ep = subscribe.total_episode

        return min_ep <= start_ep and max_ep >= end_ep

    @staticmethod
    def get_states_for_search(state: str) -> str:
        """
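The `_is_episode_range_covered` check above reduces to a min/max comparison: an empty episode list is treated as a season pack, otherwise the torrent must span the subscription's full range. A standalone sketch with the `Subscribe` fields flattened into plain arguments (a simplification for illustration):

```python
from typing import List, Optional


def is_episode_range_covered(episodes: List[int],
                             start_episode: Optional[int],
                             total_episode: int) -> bool:
    if not episodes:
        # No episode info: treat the torrent as a complete pack
        return True
    start_ep = start_episode or 1
    # Covered only if the torrent spans [start_ep, total_episode] entirely
    return min(episodes) <= start_ep and max(episodes) >= total_episode


print(is_episode_range_covered([1, 2, 3, 4], None, 4))  # True: full span
print(is_episode_range_covered([2, 3, 4], None, 4))     # False: misses episode 1
print(is_episode_range_covered([], None, 4))            # True: season pack
```

Note the check is purely range-based: a torrent carrying episodes 1 and 4 but missing 2 and 3 would still pass, which matches the min/max logic in the diff.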
@@ -74,10 +74,13 @@ class JobManager:
    _job_view: Dict[Tuple, TransferJob] = {}
    # Aggregated season/episode lists
    _season_episodes: Dict[Tuple, List[int]] = {}
    # Records which media jobs a meta job migrated to, used to clean up media jobs
    # left behind after an early failure
    _meta_to_media_ids: Dict[Tuple, set[Tuple]] = {}

    def __init__(self):
        self._job_view = {}
        self._season_episodes = {}
        self._meta_to_media_ids = {}

    @staticmethod
    def __get_meta_id(meta: MetaBase = None, season: Optional[int] = None) -> Tuple:
@@ -185,6 +188,43 @@ class JobManager:
            self._season_episodes[__mediaid__] = task.meta.episode_list
        return True

    def migrate_task(self, task: TransferTask) -> bool:
        """
        Migrate a task from its meta job to its media job
        """
        curr_task, source_job_id = self.__remove_task_with_job_id(task.fileitem)
        if not self.add_task(task, state=curr_task.state if curr_task else "waiting"):
            return False
        if curr_task and task.mediainfo:
            metaid = self.__get_meta_id(
                meta=task.meta, season=task.meta.begin_season
            )
            mediaid = self.__get_id(task)
            if source_job_id == metaid and mediaid != metaid:
                with job_lock:
                    self._meta_to_media_ids.setdefault(metaid, set()).add(mediaid)
        return True

    def __is_job_done(self, job_id: Tuple) -> bool:
        """
        Check whether the given job has finished
        """
        if job_id not in self._job_view:
            return True
        return all(
            task.state in ["completed", "failed"]
            for task in self._job_view[job_id].tasks
        )

    def __pop_job(self, job_id: Tuple):
        """
        Remove the given job and its cached season/episode list
        """
        if job_id in self._season_episodes:
            self._season_episodes.pop(job_id)
        if job_id in self._job_view:
            self._job_view.pop(job_id)

    def running_task(self, task: TransferTask):
        """
        Mark the task as running
@@ -233,10 +273,39 @@ class JobManager:
                        - set(task.meta.episode_list)
                    )

    def fail_unfinished_task(self, task: TransferTask):
        """
        Mark non-terminal tasks in the job view that match the given task as failed
        """
        if not task or not task.fileitem:
            return
        with job_lock:
            for mediaid, job in self._job_view.items():
                for job_task in job.tasks:
                    if job_task.fileitem != task.fileitem:
                        continue
                    if job_task.state not in ["completed", "failed"]:
                        job_task.state = "failed"
                        if mediaid in self._season_episodes:
                            self._season_episodes[mediaid] = list(
                                set(self._season_episodes[mediaid])
                                - set(task.meta.episode_list)
                            )
                    return

    def remove_task(self, fileitem: FileItem) -> Optional[TransferJobTask]:
        """
        Remove a task by its file item
        """
        task, _ = self.__remove_task_with_job_id(fileitem)
        return task

    def __remove_task_with_job_id(
        self, fileitem: FileItem
    ) -> Tuple[Optional[TransferJobTask], Optional[Tuple]]:
        """
        Remove a task by its file item and return the ID of the job it belonged to
        """
        with job_lock:
            for mediaid in list(self._job_view):
                job = self._job_view[mediaid]
@@ -252,8 +321,8 @@ class JobManager:
                            set(self._season_episodes[mediaid])
                            - set(task.meta.episode_list)
                        )
                    return task
        return None
                    return task, mediaid
        return None, None

    def remove_job(self, task: TransferTask) -> Optional[TransferJob]:
        """
@@ -280,27 +349,20 @@ class JobManager:
            media=task.mediainfo, season=task.meta.begin_season
        )

        meta_done = True
        if __metaid__ in self._job_view:
            meta_done = all(
                t.state in ["completed", "failed"]
                for t in self._job_view[__metaid__].tasks
            )
        related_media_ids = set(self._meta_to_media_ids.get(__metaid__, set()))
        if task.mediainfo:
            related_media_ids.add(__mediaid__)

        media_done = True
        if __mediaid__ in self._job_view:
            media_done = all(
                t.state in ["completed", "failed"]
                for t in self._job_view[__mediaid__].tasks
            )
        meta_done = self.__is_job_done(__metaid__)
        media_done = all(
            self.__is_job_done(mediaid) for mediaid in related_media_ids
        )

        if meta_done and media_done:
            __id__ = self.__get_id(task)
            if __id__ in self._job_view:
                # Remove season/episode info
                if __id__ in self._season_episodes:
                    self._season_episodes.pop(__id__)
                self._job_view.pop(__id__)
            remove_ids = {__metaid__, self.__get_id(task), *related_media_ids}
            for job_id in remove_ids:
                self.__pop_job(job_id)
            self._meta_to_media_ids.pop(__metaid__, None)

    def is_done(self, task: TransferTask) -> bool:
        """
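`__is_job_done` above treats an untracked job as finished and otherwise requires every task to be in a terminal state. The same all-over-terminal-states check, sketched with a plain dict standing in for the job view:

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

TERMINAL_STATES = {"completed", "failed"}


@dataclass
class Task:
    state: str


def is_job_done(job_view: Dict[Tuple, List[Task]], job_id: Tuple) -> bool:
    # An unknown job has nothing pending, so it counts as done
    if job_id not in job_view:
        return True
    return all(task.state in TERMINAL_STATES for task in job_view[job_id])


view = {("show", 1): [Task("completed"), Task("running")]}
print(is_job_done(view, ("show", 1)))  # False: one task still running
print(is_job_done(view, ("show", 2)))  # True: job not tracked
view[("show", 1)][1].state = "failed"
print(is_job_done(view, ("show", 1)))  # True: all tasks terminal
```

Treating a missing job as done is what lets `try_remove_job` apply the same check uniformly across the meta job and every related media job without special-casing jobs that were already popped.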
@@ -780,10 +842,22 @@ class TransferChain(ChainBase, ConfigReloadMixin, metaclass=Singleton):
                Notification(
                    mtype=NotificationType.Manual,
                    title=f"{task.mediainfo.title_year} {task.meta.season_episode} 入库失败!",
                    text=f"原因:{transferinfo.message or '未知'}",
                    text="\n".join(
                        [
                            f"原因:{transferinfo.message or '未知'}",
                            (
                                f"如果按钮不可用,可回复:\n```\n/redo {history.id}\n```"
                                if history
                                else ""
                            ),
                        ]
                    ).strip(),
                    image=task.mediainfo.get_message_image(),
                    username=task.username,
                    link=settings.MP_DOMAIN("#/history"),
                    buttons=self.build_failed_transfer_buttons(
                        history.id if history else None
                    ),
                )
            )

@@ -799,8 +873,17 @@ class TransferChain(ChainBase, ConfigReloadMixin, metaclass=Singleton):
            try:
                from app.agent import agent_manager

                # Use download_hash, or the source file's parent directory, as the grouping key;
                # failed records from the same batch (e.g. the same torrent) are merged into one agent call
                group_key = (
                    task.download_hash or str(task.fileitem.path).rsplit("/", 1)[0]
                    if task.fileitem
                    else ""
                )
                asyncio.run_coroutine_threadsafe(
                    agent_manager.retry_failed_transfer(history.id),
                    agent_manager.retry_failed_transfer(
                        history.id, group_key=group_key
                    ),
                    global_vars.loop,
                )
                logger.info(f"已触发AI智能体重试整理历史记录 #{history.id}")
@@ -958,6 +1041,17 @@ class TransferChain(ChainBase, ConfigReloadMixin, metaclass=Singleton):
            return
        self.jobview.remove_task(fileitem)

    def __fail_transfer_task(self, task: TransferTask):
        """
        Mark a failed transfer task and clean up the job view
        """
        self.jobview.fail_unfinished_task(task)
        if task.download_hash and self.jobview.is_torrent_done(task.download_hash):
            self.transfer_completed(
                hashs=task.download_hash, downloader=task.downloader
            )
        self.jobview.try_remove_job(task)

    def __start_transfer(self):
        """
        Process the queue
@@ -1034,6 +1128,7 @@ class TransferChain(ChainBase, ConfigReloadMixin, metaclass=Singleton):
                logger.error(
                    f"{fileitem.name} 整理任务处理出现错误:{e} - {traceback.format_exc()}"
                )
                self.__fail_transfer_task(task)
                with task_lock:
                    self._processed_num += 1
                    self._fail_num += 1
@@ -1110,9 +1205,17 @@ class TransferChain(ChainBase, ConfigReloadMixin, metaclass=Singleton):
                Notification(
                    mtype=NotificationType.Manual,
                    title=f"{task.fileitem.name} 未识别到媒体信息,无法入库!",
                    text=f"回复:\n```\n/redo {his.id} [tmdbid]|[类型]\n```\n手动识别整理。",
                    text=(
                        "原因:未识别到媒体信息\n"
                        "如果按钮不可用,可回复:\n"
                        f"```\n/redo {his.id}\n/redo {his.id} [tmdbid]|[类型]\n```\n"
                        "自动重试或手动识别整理。"
                    ),
                    username=task.username,
                    link=settings.MP_DOMAIN("#/history"),
                    buttons=self.build_failed_transfer_buttons(
                        his.id if his else None
                    ),
                )
            )
            # Task failed; remove it directly
@@ -1127,8 +1230,17 @@ class TransferChain(ChainBase, ConfigReloadMixin, metaclass=Singleton):
            try:
                from app.agent import agent_manager

                # Use download_hash, or the source file's parent directory, as the grouping key
                group_key = (
                    task.download_hash
                    or str(task.fileitem.path).rsplit("/", 1)[0]
                    if task.fileitem
                    else ""
                )
                asyncio.run_coroutine_threadsafe(
                    agent_manager.retry_failed_transfer(his.id),
                    agent_manager.retry_failed_transfer(
                        his.id, group_key=group_key
                    ),
                    global_vars.loop,
                )
                logger.info(f"已触发AI智能体重试整理历史记录 #{his.id}")
@@ -1152,10 +1264,7 @@ class TransferChain(ChainBase, ConfigReloadMixin, metaclass=Singleton):
            # Update task info
            task.mediainfo = mediainfo
            # Update the queued task
            curr_task = self.jobview.remove_task(task.fileitem)
            self.jobview.add_task(
                task, state=curr_task.state if curr_task else "waiting"
            )
            self.jobview.migrate_task(task)

            # Fetch episode data
            if task.mediainfo.type == MediaType.TV and not task.episodes_info:
@@ -1753,9 +1862,17 @@ class TransferChain(ChainBase, ConfigReloadMixin, metaclass=Singleton):
                        "finished": finished_files,
                    },
                )
                state, err_msg = self.__handle_transfer(
                    task=transfer_task, callback=self.__default_callback
                )
                try:
                    state, err_msg = self.__handle_transfer(
                        task=transfer_task, callback=self.__default_callback
                    )
                except Exception as e:
                    logger.error(
                        f"{transfer_task.fileitem.name} 整理任务处理出现错误:"
                        f"{e} - {traceback.format_exc()}"
                    )
                    self.__fail_transfer_task(transfer_task)
                    state, err_msg = False, str(e)
                if not state:
                    all_success = False
                    logger.warn(f"{transfer_task.fileitem.name} {err_msg}")
@@ -1798,8 +1915,8 @@ class TransferChain(ChainBase, ConfigReloadMixin, metaclass=Singleton):
                Notification(
                    channel=channel,
                    source=source,
                    title="请输入正确的命令格式:/redo [id] [tmdbid/豆瓣id]|[类型],"
                          "[id]整理记录编号",
                    title="请输入正确的命令格式:/redo [id] 或 /redo [id] [tmdbid/豆瓣id]|[类型],"
                          "[id] 为整理记录编号",
                    userid=userid,
                )
            )
@@ -1808,7 +1925,7 @@ class TransferChain(ChainBase, ConfigReloadMixin, metaclass=Singleton):
            args_error()
            return
        arg_strs = str(arg_str).split()
        if len(arg_strs) != 2:
        if len(arg_strs) not in (1, 2):
            args_error()
            return
        # History record ID
@@ -1816,6 +1933,20 @@ class TransferChain(ChainBase, ConfigReloadMixin, metaclass=Singleton):
        if not logid.isdigit():
            args_error()
            return
        if len(arg_strs) == 1:
            state, errmsg = self.redo_transfer_history(int(logid))
            if not state:
                self.post_message(
                    Notification(
                        channel=channel,
                        title="手动整理失败",
                        source=source,
                        text=errmsg,
                        userid=userid,
                        link=settings.MP_DOMAIN("#/history"),
                    )
                )
            return
        # TMDB ID / Douban ID
        id_strs = arg_strs[1].split("|")
        media_id = id_strs[0]
@@ -1843,6 +1974,31 @@ class TransferChain(ChainBase, ConfigReloadMixin, metaclass=Singleton):
            )
            return

    @staticmethod
    def build_failed_transfer_buttons(
        history_id: Optional[int],
    ) -> Optional[List[List[dict]]]:
        """
        Build the action buttons for a failed-transfer notification.
        """
        if not history_id:
            return None
        return [
            [
                {"text": "重试", "callback_data": f"transfer_retry_{history_id}"},
                {
                    "text": "智能助手接管",
                    "callback_data": f"transfer_ai_retry_{history_id}",
                },
            ]
        ]

    def redo_transfer_history(self, history_id: int) -> Tuple[bool, str]:
        """
        Re-run a transfer directly from a history record, re-identifying media info automatically.
        """
        return self.__re_transfer(logid=history_id)

    def __re_transfer(
        self, logid: int, mtype: MediaType = None, mediaid: Optional[str] = None
    ) -> Tuple[bool, str]:
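The `group_key` expression in the hunks above leans on Python conditional-expression precedence: `a or b if cond else ""` parses as `(a or b) if cond else ""`, so the hash-or-parent-directory fallback only runs when a file item exists. A sketch with the task fields flattened into plain arguments (the helper name is hypothetical):

```python
from typing import Optional


def make_group_key(download_hash: Optional[str], file_path: Optional[str]) -> str:
    # Group by torrent hash when available, else by the source file's parent
    # directory; an empty key when there is no file item at all.
    return (
        download_hash or file_path.rsplit("/", 1)[0]
        if file_path
        else ""
    )


print(make_group_key("abc123", "/downloads/show/ep1.mkv"))  # 'abc123'
print(make_group_key(None, "/downloads/show/ep1.mkv"))      # '/downloads/show'
print(make_group_key(None, None))                           # ''
```

Grouping by this key is what lets failed records from the same torrent (or the same source directory) be merged into a single agent retry call instead of one call per file.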
1184  app/cli.py  (new file)
File diff suppressed because it is too large
200  app/command.py
@@ -7,6 +7,7 @@ from app.chain import ChainBase
from app.chain.download import DownloadChain
from app.chain.message import MessageChain
from app.chain.site import SiteChain
from app.chain.skills import SkillsChain
from app.chain.subscribe import SubscribeChain
from app.chain.system import SystemChain
from app.chain.transfer import TransferChain
@@ -45,109 +46,121 @@ class Command(metaclass=Singleton):
                "id": "cookiecloud",
                "type": "scheduler",
                "description": "同步站点",
                "category": "站点"
                "category": "站点",
            },
            "/sites": {
                "func": SiteChain().remote_list,
                "description": "查询站点",
                "category": "站点",
                "data": {}
                "data": {},
            },
            "/site_cookie": {
                "func": SiteChain().remote_cookie,
                "description": "更新站点Cookie",
                "data": {}
                "data": {},
            },
            "/site_statistic": {
                "func": SiteChain().remote_refresh_userdatas,
                "description": "站点数据统计",
                "data": {}
                "data": {},
            },
            "/site_enable": {
                "func": SiteChain().remote_enable,
                "description": "启用站点",
                "data": {}
                "data": {},
            },
            "/site_disable": {
                "func": SiteChain().remote_disable,
                "description": "禁用站点",
                "data": {}
                "data": {},
            },
            "/mediaserver_sync": {
                "id": "mediaserver_sync",
                "type": "scheduler",
                "description": "同步媒体服务器",
                "category": "管理"
                "category": "管理",
            },
            "/subscribes": {
                "func": SubscribeChain().remote_list,
                "description": "查询订阅",
                "category": "订阅",
                "data": {}
                "data": {},
            },
            "/subscribe_refresh": {
                "id": "subscribe_refresh",
                "type": "scheduler",
                "description": "刷新订阅",
                "category": "订阅"
                "category": "订阅",
            },
            "/subscribe_search": {
                "id": "subscribe_search",
                "type": "scheduler",
                "description": "搜索订阅",
                "category": "订阅"
                "category": "订阅",
            },
            "/subscribe_delete": {
                "func": SubscribeChain().remote_delete,
                "description": "删除订阅",
                "data": {}
                "data": {},
            },
            "/subscribe_tmdb": {
                "id": "subscribe_tmdb",
                "type": "scheduler",
                "description": "订阅元数据更新"
                "description": "订阅元数据更新",
            },
            "/downloading": {
                "func": DownloadChain().remote_downloading,
                "description": "正在下载",
                "category": "管理",
                "data": {}
                "data": {},
            },
            "/transfer": {
                "id": "transfer",
                "type": "scheduler",
                "description": "下载文件整理",
                "category": "管理"
                "category": "管理",
            },
            "/redo": {
                "func": TransferChain().remote_transfer,
                "description": "手动整理",
                "data": {}
                "data": {},
            },
            "/clear_cache": {
                "func": SystemChain().remote_clear_cache,
                "description": "清理缓存",
                "category": "管理",
                "data": {}
                "data": {},
            },
            "/restart": {
                "func": SystemChain().restart,
                "description": "重启系统",
                "category": "管理",
                "data": {}
                "data": {},
            },
            "/version": {
                "func": SystemChain().version,
                "description": "当前版本",
                "category": "管理",
                "data": {}
                "data": {},
            },
            "/clear_session": {
                "func": MessageChain().remote_clear_session,
                "description": "清除会话",
                "category": "管理",
                "data": {}
            }
                "data": {},
            },
            "/stop_agent": {
                "func": MessageChain().remote_stop_agent,
                "description": "停止推理",
                "category": "管理",
                "data": {},
            },
            "/skills": {
                "func": SkillsChain().remote_manage,
                "description": "管理技能",
                "category": "智能体",
                "data": {},
            },
        }
        # Plugin command registry
        self._plugin_commands = {}
@@ -182,7 +195,7 @@ class Command(metaclass=Singleton):
        self._commands = {
            **self._preset_commands,
            **self._plugin_commands,
            **self._other_commands
            **self._other_commands,
        }

        # Force trigger registration
@@ -195,32 +208,50 @@ class Command(metaclass=Singleton):
            event_data: CommandRegisterEventData = event.event_data
            # Skip command registration if the event was cancelled
            if event_data.cancel:
                logger.debug(f"Command initialization canceled by event: {event_data.source}")
                logger.debug(
                    f"Command initialization canceled by event: {event_data.source}"
                )
                return
            # If the intercept source matches the plugin ID, force registration here
            if pid is not None and pid == event_data.source:
                force_register = True
                initial_commands = event_data.commands or {}
                logger.debug(f"Registering command count from event: {len(initial_commands)}")
                logger.debug(
                    f"Registering command count from event: {len(initial_commands)}"
                )
            else:
                logger.debug(f"Registering initial command count: {len(initial_commands)}")
                logger.debug(
                    f"Registering initial command count: {len(initial_commands)}"
                )

        # initial_commands must be a subset of self._commands
        filtered_initial_commands = DictUtils.filter_keys_to_subset(initial_commands, self._commands)
        filtered_initial_commands = DictUtils.filter_keys_to_subset(
            initial_commands, self._commands
        )
        # Skip registration if filtered_initial_commands is empty
        if not filtered_initial_commands and not force_register:
            logger.debug("Filtered commands are empty, skipping registration.")
            return

        # Compare the adjusted commands with the current ones
        if filtered_initial_commands != self._registered_commands or force_register:
            logger.debug("Command set has changed or force registration is enabled.")
        if (
            filtered_initial_commands != self._registered_commands
            or force_register
        ):
            logger.debug(
                "Command set has changed or force registration is enabled."
            )
            self._registered_commands = filtered_initial_commands
            CommandChain().register_commands(commands=filtered_initial_commands)
        else:
            logger.debug("Command set unchanged, skipping broadcast registration.")
            logger.debug(
                "Command set unchanged, skipping broadcast registration."
            )
        except Exception as e:
            logger.error(f"Error occurred during command initialization in background: {e}", exc_info=True)
            logger.error(
                f"Error occurred during command initialization in background: {e}",
                exc_info=True,
            )

    def __trigger_register_commands_event(self) -> tuple[Optional[Event], dict]:
        """
@@ -238,7 +269,7 @@ class Command(metaclass=Singleton):
            command_data = {
                "type": command_type,
                "description": command.get("description"),
                "category": command.get("category")
                "category": command.get("category"),
            }
            # If a pid exists, add it to the command data
            plugin_id = command.get("pid")
@@ -253,7 +284,9 @@ class Command(metaclass=Singleton):
        add_commands(self._other_commands, "other")

        # Fire an event that allows commands to be intercepted and adjusted
        event_data = CommandRegisterEventData(commands=commands, origin="CommandChain", service=None)
        event_data = CommandRegisterEventData(
            commands=commands, origin="CommandChain", service=None
        )
        event = eventmanager.send_event(ChainEventType.CommandRegister, event_data)
        return event, commands

@@ -274,13 +307,19 @@ class Command(metaclass=Singleton):
                "show": command.get("show", True),
                "data": {
                    "etype": command.get("event"),
                    "data": command.get("data")
                }
                    "data": command.get("data"),
                },
            }
        return plugin_commands

    def __run_command(self, command: Dict[str, any], data_str: Optional[str] = "",
                      channel: MessageChannel = None, source: Optional[str] = None, userid: Union[str, int] = None):
    def __run_command(
        self,
        command: Dict[str, any],
        data_str: Optional[str] = "",
        channel: MessageChannel = None,
        source: Optional[str] = None,
        userid: Union[str, int] = None,
    ):
        """
        Run a scheduled service
        """
@@ -292,7 +331,7 @@ class Command(metaclass=Singleton):
                    channel=channel,
                    source=source,
                    title=f"开始执行 {command.get('description')} ...",
                    userid=userid
                    userid=userid,
                )
            )

@@ -305,33 +344,33 @@ class Command(metaclass=Singleton):
                    channel=channel,
                    source=source,
                    title=f"{command.get('description')} 执行完成",
                    userid=userid
                    userid=userid,
                )
            )
        else:
            # Command
            cmd_data = copy.deepcopy(command['data']) if command.get('data') else {}
            args_num = ObjectUtils.arguments(command['func'])
            cmd_data = copy.deepcopy(command["data"]) if command.get("data") else {}
            args_num = ObjectUtils.arguments(command["func"])
            if args_num > 0:
                if cmd_data:
                    # Use the built-in parameters directly when present
                    data = cmd_data.get("data") or {}
                    data['channel'] = channel
                    data['source'] = source
                    data['user'] = userid
                    data["channel"] = channel
                    data["source"] = source
                    data["user"] = userid
                    if data_str:
                        data['arg_str'] = data_str
                    cmd_data['data'] = data
                    command['func'](**cmd_data)
                        data["arg_str"] = data_str
                    cmd_data["data"] = data
                    command["func"](**cmd_data)
                elif args_num == 3:
                    # No input arguments; pass only the channel, user ID and message source
                    command['func'](channel, userid, source)
                    command["func"](channel, userid, source)
                elif args_num > 3:
                    # Multiple input arguments: user input, user ID
                    command['func'](data_str, channel, userid, source)
                    command["func"](data_str, channel, userid, source)
            else:
                # No arguments
                command['func']()
                command["func"]()

    def get_commands(self):
        """
@@ -345,9 +384,15 @@ class Command(metaclass=Singleton):
        """
        return self._commands.get(cmd, {})

    def register(self, cmd: str, func: Any, data: Optional[dict] = None,
                 desc: Optional[str] = None, category: Optional[str] = None,
                 show: bool = True) -> None:
    def register(
        self,
        cmd: str,
        func: Any,
        data: Optional[dict] = None,
        desc: Optional[str] = None,
        category: Optional[str] = None,
        show: bool = True,
    ) -> None:
        """
        Register a single command
        """
@@ -357,12 +402,17 @@ class Command(metaclass=Singleton):
            "description": desc,
            "category": category,
            "data": data or {},
            "show": show
            "show": show,
        }

    def execute(self, cmd: str, data_str: Optional[str] = "",
                channel: MessageChannel = None, source: Optional[str] = None,
                userid: Union[str, int] = None) -> None:
    def execute(
        self,
        cmd: str,
        data_str: Optional[str] = "",
        channel: MessageChannel = None,
        source: Optional[str] = None,
        userid: Union[str, int] = None,
    ) -> None:
        """
        Execute a command
        """
@@ -370,23 +420,32 @@ class Command(metaclass=Singleton):
        if command:
            try:
                if userid:
                    logger.info(f"用户 {userid} 开始执行:{command.get('description')} ...")
                    logger.info(
                        f"用户 {userid} 开始执行:{command.get('description')} ..."
                    )
                else:
                    logger.info(f"开始执行:{command.get('description')} ...")

                # Execute the command
                self.__run_command(command, data_str=data_str,
                                   channel=channel, source=source, userid=userid)
                self.__run_command(
                    command,
                    data_str=data_str,
                    channel=channel,
                    source=source,
                    userid=userid,
                )

                if userid:
                    logger.info(f"用户 {userid} {command.get('description')} 执行完成")
                else:
                    logger.info(f"{command.get('description')} 执行完成")
            except Exception as err:
                logger.error(f"执行命令 {cmd} 出错:{str(err)} - {traceback.format_exc()}")
                self.messagehelper.put(title=f"执行命令 {cmd} 出错",
                                       message=str(err),
                                       role="system")
                logger.error(
                    f"执行命令 {cmd} 出错:{str(err)} - {traceback.format_exc()}"
                )
                self.messagehelper.put(
                    title=f"执行命令 {cmd} 出错", message=str(err), role="system"
                )

    @staticmethod
    def send_plugin_event(etype: EventType, data: dict) -> None:
@@ -404,19 +463,24 @@ class Command(metaclass=Singleton):
        }
        """
        # Command arguments
        event_str = event.event_data.get('cmd')
        event_str = event.event_data.get("cmd")
        # Message channel
        event_channel = event.event_data.get('channel')
        event_channel = event.event_data.get("channel")
        # Message source
        event_source = event.event_data.get('source')
        event_source = event.event_data.get("source")
        # Message user
        event_user = event.event_data.get('user')
        event_user = event.event_data.get("user")
        if event_str:
            cmd = event_str.split()[0]
            args = " ".join(event_str.split()[1:])
            if self.get(cmd):
                self.execute(cmd=cmd, data_str=args,
                             channel=event_channel, source=event_source, userid=event_user)
                self.execute(
                    cmd=cmd,
                    data_str=args,
                    channel=event_channel,
                    source=event_source,
                    userid=event_user,
                )

    @eventmanager.register(EventType.ModuleReload)
    def module_reload_event(self, _: ManagerEvent) -> None:
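`__run_command` above dispatches on the handler's declared argument count (obtained via `ObjectUtils.arguments`). A simplified sketch of that arity-based dispatch, using `inspect.signature` as a stand-in and omitting the built-in-data branch:

```python
import inspect
from typing import Any, Callable, Optional


def arg_count(func: Callable) -> int:
    # Stand-in for ObjectUtils.arguments: count declared parameters
    return len(inspect.signature(func).parameters)


def run_command(func: Callable, data_str: str = "",
                channel: Any = None, userid: Optional[str] = None,
                source: Optional[str] = None) -> Any:
    n = arg_count(func)
    if n == 0:
        return func()                               # no parameters
    if n == 3:
        return func(channel, userid, source)        # context only
    return func(data_str, channel, userid, source)  # user input + context


def no_args():
    return "no-args"


def ctx_only(channel, userid, source):
    return f"ctx:{userid}"


def with_input(data_str, channel, userid, source):
    return f"input:{data_str}"


print(run_command(no_args))                        # 'no-args'
print(run_command(ctx_only, userid="u1"))          # 'ctx:u1'
print(run_command(with_input, data_str="hello"))   # 'input:hello'
```

The design trade-off is that handlers never declare which context they need explicitly; the dispatcher infers it from arity, which keeps registration simple but means a handler's parameter order is part of its contract.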
@@ -211,7 +211,7 @@ class CacheBackend(ABC):
|
||||
"""
|
||||
获取缓存的区
|
||||
"""
|
||||
return f"region:{region}" if region else "region:default"
|
||||
return f"region:{region}" if region else "region:DEFAULT"
|
||||
|
||||
@staticmethod
|
||||
def is_redis() -> bool:
|
||||
|
||||
@@ -417,6 +417,17 @@ class ConfigModel(BaseModel):
     PLUGIN_STATISTIC_SHARE: bool = True
     # 是否开启插件热加载
     PLUGIN_AUTO_RELOAD: bool = False
+    # 本地插件仓库目录,多个地址使用,分隔
+    PLUGIN_LOCAL_REPO_PATHS: Optional[str] = None
+
+    # ==================== 技能配置 ====================
+    # 技能市场仓库地址,多个地址使用,分隔
+    SKILL_MARKET: str = (
+        "https://clawhub.ai,"
+        "https://github.com/openai/skills,"
+        "https://github.com/anthropics/skills,"
+        "https://github.com/vercel-labs/agent-skills"
+    )

     # ==================== Github & PIP ====================
     # Github token,提高请求api限流阈值 ghp_****
@@ -494,6 +505,10 @@ class ConfigModel(BaseModel):
     LLM_PROVIDER: str = "deepseek"
     # LLM模型名称
     LLM_MODEL: str = "deepseek-chat"
+    # 是否尽量关闭模型的思考/推理能力(按各 provider/model 支持情况自动适配)
+    LLM_DISABLE_THINKING: bool = True
+    # LLM是否支持图片输入,开启后消息图片会按多模态输入发送给模型
+    LLM_SUPPORT_IMAGE_INPUT: bool = True
     # LLM API密钥
     LLM_API_KEY: Optional[str] = None
     # LLM基础URL(用于自定义API端点)
@@ -538,6 +553,35 @@ class ConfigModel(BaseModel):
     # AI智能体自动重试整理失败记录开关
     AI_AGENT_RETRY_TRANSFER: bool = False
+
+    # 语音能力提供商(当前仅支持 openai)
+    AI_VOICE_PROVIDER: str = "openai"
+    # 语音识别提供商,未设置时回退到 AI_VOICE_PROVIDER
+    AI_VOICE_STT_PROVIDER: Optional[str] = None
+    # 语音合成提供商,未设置时回退到 AI_VOICE_PROVIDER
+    AI_VOICE_TTS_PROVIDER: Optional[str] = None
+    # 语音能力 API 密钥,未设置且 LLM_PROVIDER=openai 时回退使用 LLM_API_KEY
+    AI_VOICE_API_KEY: Optional[str] = None
+    # 语音识别 API 密钥,未设置时回退到 AI_VOICE_API_KEY
+    AI_VOICE_STT_API_KEY: Optional[str] = None
+    # 语音合成 API 密钥,未设置时回退到 AI_VOICE_API_KEY
+    AI_VOICE_TTS_API_KEY: Optional[str] = None
+    # 语音能力基础URL,未设置且 LLM_PROVIDER=openai 时回退使用 LLM_BASE_URL
+    AI_VOICE_BASE_URL: Optional[str] = None
+    # 语音识别基础URL,未设置时回退到 AI_VOICE_BASE_URL
+    AI_VOICE_STT_BASE_URL: Optional[str] = None
+    # 语音合成基础URL,未设置时回退到 AI_VOICE_BASE_URL
+    AI_VOICE_TTS_BASE_URL: Optional[str] = None
+    # 语音转文字模型
+    AI_VOICE_STT_MODEL: str = "gpt-4o-mini-transcribe"
+    # 文字转语音模型
+    AI_VOICE_TTS_MODEL: str = "gpt-4o-mini-tts"
+    # TTS 发音人
+    AI_VOICE_TTS_VOICE: str = "alloy"
+    # 语音识别语言
+    AI_VOICE_LANGUAGE: str = "zh"
+    # 回复语音时是否同时附带文字说明
+    AI_VOICE_REPLY_WITH_TEXT: bool = False


 class Settings(BaseSettings, ConfigModel, LogConfigModel):
     """
@@ -1015,7 +1059,16 @@ class GlobalVar(object):
     # 需应急停止文件整理
     EMERGENCY_STOP_TRANSFER: List[str] = []
     # 当前事件循环
-    CURRENT_EVENT_LOOP: AbstractEventLoop = asyncio.get_event_loop()
+    CURRENT_EVENT_LOOP: AbstractEventLoop = None
+
+    @classmethod
+    def _get_event_loop(cls) -> AbstractEventLoop:
+        try:
+            return asyncio.get_event_loop()
+        except RuntimeError:
+            loop = asyncio.new_event_loop()
+            asyncio.set_event_loop(loop)
+            return loop

     def stop_system(self):
         """
@@ -1085,6 +1138,8 @@ class GlobalVar(object):
         """
         当前循环
         """
+        if self.CURRENT_EVENT_LOOP is None:
+            self.CURRENT_EVENT_LOOP = self._get_event_loop()
         return self.CURRENT_EVENT_LOOP

     def set_loop(self, loop: AbstractEventLoop):
@@ -6,6 +6,7 @@ import importlib.util
 import inspect
 import os
 import posixpath
 import shutil
 import sys
 import threading
 import time
@@ -38,7 +39,7 @@ from app.utils.system import SystemUtils


 class PluginManager(ConfigReloadMixin, metaclass=Singleton):
     """插件管理器"""
-    CONFIG_WATCH = {"DEV", "PLUGIN_AUTO_RELOAD"}
+    CONFIG_WATCH = {"DEV", "PLUGIN_AUTO_RELOAD", "PLUGIN_LOCAL_REPO_PATHS"}

     def __init__(self):
         # 插件列表
@@ -51,6 +52,8 @@ class PluginManager(ConfigReloadMixin, metaclass=Singleton):
         self._monitor_thread: Optional[threading.Thread] = None
         # 监控停止事件
         self._stop_monitor_event = threading.Event()
+        # 本地插件同步写入运行目录后的短时忽略窗口
+        self._recent_local_sync: Dict[str, float] = {}
         # 开发者模式监测插件修改
         if settings.DEV or settings.PLUGIN_AUTO_RELOAD:
             self.__start_monitor()
@@ -308,11 +311,14 @@ class PluginManager(ConfigReloadMixin, metaclass=Singleton):
         运行 watchfiles 监视器的主循环。
         """
         # 监视插件目录
-        plugins_path = str(settings.ROOT_PATH / "app" / "plugins")
+        plugin_paths = [str(settings.ROOT_PATH / "app" / "plugins")]
+        for local_repo_path in PluginHelper.get_local_repo_paths():
+            if local_repo_path.exists() and local_repo_path.is_dir():
+                plugin_paths.append(str(local_repo_path))
         logger.info(">>> 监控线程已启动,准备进入watch循环...")
         # 使用 watchfiles 监视目录变化,并响应变化事件
         # Todo: yield_on_timeout = True 时,每秒检查停止事件,会返回空集合;后续可以考虑用来做心跳之类的功能?
-        for changes in watch(plugins_path, stop_event=self._stop_monitor_event, rust_timeout=1000,
+        for changes in watch(*plugin_paths, stop_event=self._stop_monitor_event, rust_timeout=1000,
                              yield_on_timeout=True):
             # 如果收到停止事件,退出循环
             if not changes:
@@ -320,18 +326,56 @@ class PluginManager(ConfigReloadMixin, metaclass=Singleton):

             # 处理变化事件
             plugins_to_reload = set()
+            local_plugins_to_sync = {}
             for _change_type, path_str in changes:
                 event_path = Path(path_str)

-                # 跳过非 .py 文件以及 pycache 目录中的文件
-                if not event_path.name.endswith(".py") or "__pycache__" in event_path.parts:
+                # 跳过 pycache 目录中的文件
+                if "__pycache__" in event_path.parts:
                     continue

+                if event_path.name == "requirements.txt":
+                    candidate = self._get_local_plugin_candidate_from_path(event_path)
+                    if candidate:
+                        if candidate.get("compatible") is False:
+                            logger.info(
+                                f"检测到本地插件 {candidate.get('id')} 依赖文件变化,"
+                                f"但跳过处理:{candidate.get('skip_reason')}"
+                            )
+                            continue
+                        logger.warn(f"检测到本地插件 {candidate.get('id')} 依赖文件变化,请重新安装本地插件以安装依赖")
+                    continue
+
+                # 跳过非 .py 文件
+                if not event_path.name.endswith(".py"):
+                    continue
+
                 # 解析插件ID
-                pid = self._get_plugin_id_from_path(event_path)
-                # 跳过无效插件文件
-                if pid:
-                    # 收集需要重载的插件ID,自动去重,避免重复重载
+                runtime_pid = self._get_plugin_id_from_path(event_path)
+                local_candidate = self._get_local_plugin_candidate_from_path(event_path) if not runtime_pid else None
+                if runtime_pid:
+                    last_sync_time = self._recent_local_sync.get(runtime_pid)
+                    if last_sync_time and time.time() - last_sync_time < 2:
+                        logger.debug(f"忽略本地插件同步产生的运行目录变化:{runtime_pid}")
+                        continue
+                    # 运行目录变化只重载,不能反向触发本地同步。
+                    plugins_to_reload.add(runtime_pid)
+                elif local_candidate:
+                    if local_candidate.get("compatible") is False:
+                        package_version = local_candidate.get("package_version")
+                        source_root = f"plugins.{package_version}" if package_version else "plugins"
+                        logger.info(
+                            f"检测到本地插件 {local_candidate.get('id')} 文件变化,来源:{source_root},"
+                            f"文件:{event_path},但跳过同步:{local_candidate.get('skip_reason')}"
+                        )
+                        continue
+                    local_plugins_to_sync[local_candidate.get("id")] = (local_candidate, event_path)
+
+            for pid, (candidate, event_path) in local_plugins_to_sync.items():
+                package_version = candidate.get("package_version")
+                source_root = f"plugins.{package_version}" if package_version else "plugins"
+                logger.info(f"检测到本地插件 {pid} 文件变化,来源:{source_root},文件:{event_path}")
+                if self._sync_local_plugin_if_installed(pid, candidate):
+                    plugins_to_reload.add(pid)

             # 触发重载
@@ -351,6 +395,7 @@ class PluginManager(ConfigReloadMixin, metaclass=Singleton):
         :return: 插件ID字符串,如果不是有效插件文件则返回 None。
         """
         try:
             event_path = event_path.resolve()
             plugins_root = settings.ROOT_PATH / "app" / "plugins"
             # 确保修改的文件在 plugins 目录下
             if not event_path.is_relative_to(plugins_root):
@@ -389,6 +434,78 @@ class PluginManager(ConfigReloadMixin, metaclass=Singleton):
             logger.error(f"从路径解析插件ID时出错: {e}")
             return None

+    @staticmethod
+    def _get_local_plugin_candidate_from_path(event_path: Path) -> Optional[dict]:
+        """
+        根据本地插件仓库路径解析具体插件候选,保留 plugins/plugins.v2 来源差异
+        """
+        try:
+            event_path = event_path.resolve()
+            for local_repo_path in PluginHelper.get_local_repo_paths():
+                if not local_repo_path.exists() or not local_repo_path.is_dir():
+                    continue
+                if not event_path.is_relative_to(local_repo_path):
+                    continue
+                try:
+                    relative_parts = event_path.relative_to(local_repo_path).parts
+                except (ValueError, IndexError):
+                    continue
+                if len(relative_parts) < 2:
+                    continue
+                if relative_parts[0] == "plugins":
+                    package_version = ""
+                elif relative_parts[0].startswith("plugins."):
+                    package_version = relative_parts[0].split(".", 1)[1]
+                else:
+                    continue
+                plugin_dir_name = relative_parts[1]
+                candidate = PluginHelper().get_local_plugin_candidate(
+                    pid=plugin_dir_name,
+                    package_version=package_version,
+                    repo_path=local_repo_path,
+                    strict_compat=False
+                )
+                if candidate:
+                    return candidate
+            return None
+        except Exception as e:
+            logger.error(f"从本地插件仓库路径解析插件候选时出错: {e}")
+            return None
+
+    @staticmethod
+    def _sync_local_plugin_if_installed(pid: str, candidate: Optional[dict] = None) -> bool:
+        """
+        已安装本地插件源码变化时,同步到运行目录
+        """
+        installed_plugins = SystemConfigOper().get(SystemConfigKey.UserInstalledPlugins) or []
+        if pid not in installed_plugins:
+            logger.info(f"本地插件 {pid} 尚未安装,跳过自动同步和热重载")
+            return False
+
+        candidate = candidate or PluginHelper().get_local_plugin_candidate(pid)
+        if not candidate:
+            return False
+
+        source_dir = Path(candidate.get("path"))
+        dest_dir = settings.ROOT_PATH / "app" / "plugins" / pid.lower()
+        try:
+            if source_dir.resolve() == dest_dir.resolve():
+                return True
+            if dest_dir.exists():
+                shutil.rmtree(dest_dir, ignore_errors=True)
+            shutil.copytree(
+                source_dir,
+                dest_dir,
+                dirs_exist_ok=True,
+                ignore=shutil.ignore_patterns("__pycache__", "*.pyc", ".DS_Store")
+            )
+            PluginManager()._recent_local_sync[pid] = time.time()
+            logger.info(f"已同步本地插件 {pid}:{source_dir} -> {dest_dir}")
+            return True
+        except Exception as e:
+            logger.error(f"同步本地插件 {pid} 失败:{e}")
+            return False
+
     @staticmethod
     def __stop_plugin(plugin: Any):
         """
@@ -484,11 +601,14 @@ class PluginManager(ConfigReloadMixin, metaclass=Singleton):

         # 获取已安装插件列表
         install_plugins = SystemConfigOper().get(SystemConfigKey.UserInstalledPlugins) or []
-        # 获取在线插件列表
+        # 获取远程和本地仓库来源插件列表
         online_plugins = self.get_online_plugins()
+        local_repo_plugins = self.get_local_repo_plugins()
+        candidate_plugins = self.process_plugins_list(online_plugins + local_repo_plugins, []) \
+            if online_plugins or local_repo_plugins else []
         # 确定需要安装的插件
         plugins_to_install = [
-            plugin for plugin in online_plugins
+            plugin for plugin in candidate_plugins
             if plugin.id in install_plugins and not self.is_plugin_exists(plugin.id, plugin.plugin_version)
         ]
@@ -809,6 +929,64 @@ class PluginManager(ConfigReloadMixin, metaclass=Singleton):
             })
         return remotes

+    def get_plugin_sidebar_nav(self) -> List[Dict[str, Any]]:
+        """
+        聚合所有已启用 Vue 插件的侧栏导航项(get_sidebar_nav)。
+        """
+        valid_sections = {"start", "discovery", "subscribe", "organize", "system"}
+        valid_permissions = {"subscribe", "discovery", "search", "manage", "admin"}
+        items: List[Dict[str, Any]] = []
+        running_plugins_snapshot = dict(self._running_plugins)
+        for plugin_id, plugin in running_plugins_snapshot.items():
+            if not plugin.get_state():
+                continue
+            if not hasattr(plugin, "get_sidebar_nav") or not ObjectUtils.check_method(plugin.get_sidebar_nav):
+                continue
+            if not hasattr(plugin, "get_render_mode"):
+                continue
+            render_mode, _ = plugin.get_render_mode()
+            if render_mode != "vue":
+                continue
+            try:
+                nav_list = plugin.get_sidebar_nav()
+                if not nav_list:
+                    continue
+                for raw in nav_list:
+                    if not raw or not isinstance(raw, dict):
+                        continue
+                    nav_key = str(raw.get("nav_key") or raw.get("key") or "main").strip()
+                    if not nav_key or any(c in nav_key for c in ["/", "?", "#", " "]):
+                        logger.warning(f"插件[{plugin_id}]侧栏项 nav_key 无效,已跳过: {nav_key!r}")
+                        continue
+                    title = raw.get("title") or plugin.plugin_name
+                    icon = raw.get("icon") or "mdi-puzzle"
+                    section = str(raw.get("section") or "system").lower()
+                    if section not in valid_sections:
+                        section = "system"
+                    perm = raw.get("permission")
+                    if perm is not None and str(perm) not in valid_permissions:
+                        perm = None
+                    else:
+                        perm = str(perm) if perm is not None else None
+                    order = raw.get("order", 0)
+                    try:
+                        order = int(order)
+                    except (TypeError, ValueError):
+                        order = 0
+                    items.append({
+                        "plugin_id": plugin_id,
+                        "nav_key": nav_key,
+                        "title": title,
+                        "icon": icon,
+                        "section": section,
+                        "permission": perm,
+                        "order": order,
+                    })
+            except Exception as e:
+                logger.error(f"获取插件[{plugin_id}]侧栏导航出错:{str(e)}")
+        items.sort(key=lambda x: (x["section"], x["order"], x["plugin_id"], x["nav_key"]))
+        return items
+
     def get_plugin_dashboard_meta(self) -> List[Dict[str, str]]:
         """
         获取所有插件仪表盘元信息
@@ -983,7 +1161,9 @@ class PluginManager(ConfigReloadMixin, metaclass=Singleton):
         else:
             base_version_plugins.extend(plugins)  # 收集 v1 版本插件

-        return self._process_plugins_list(higher_version_plugins, base_version_plugins)
+        result = self.process_plugins_list(higher_version_plugins, base_version_plugins)
+        logger.info(f"获取到 {len(result)} 个线上插件")
+        return result

     def get_local_plugins(self) -> List[schemas.Plugin]:
         """
@@ -1058,6 +1238,38 @@ class PluginManager(ConfigReloadMixin, metaclass=Singleton):
         plugins.sort(key=lambda x: x.plugin_order if hasattr(x, "plugin_order") else 0)
         return plugins

+    def get_local_repo_plugins(self) -> List[schemas.Plugin]:
+        """
+        获取本地插件仓库目录中的插件信息
+        """
+        plugins = []
+        installed_apps = SystemConfigOper().get(SystemConfigKey.UserInstalledPlugins) or []
+        local_candidates = PluginHelper().get_local_plugin_candidates()
+        if not local_candidates:
+            return []
+        for pid, plugin_info in local_candidates.items():
+            package_version = plugin_info.get("package_version")
+            plugin = self._process_plugin_info(
+                pid=pid,
+                plugin_info=plugin_info,
+                market=PluginHelper.make_local_repo_url(
+                    pid,
+                    plugin_info.get("repo_path"),
+                    package_version
+                ),
+                installed_apps=installed_apps,
+                add_time=0,
+                package_version=package_version
+            )
+            if not plugin:
+                continue
+            plugin.is_local = True
+            plugins.append(plugin)
+
+        plugins.sort(key=lambda x: x.plugin_order if hasattr(x, "plugin_order") else 0)
+        logger.info(f"获取到 {len(plugins)} 个本地插件")
+        return plugins
+
     @staticmethod
     def is_plugin_exists(pid: str, version: str = None) -> bool:
         """
@@ -1122,8 +1334,8 @@ class PluginManager(ConfigReloadMixin, metaclass=Singleton):
         return ret_plugins

     @staticmethod
-    def _process_plugins_list(higher_version_plugins: List[schemas.Plugin],
-                              base_version_plugins: List[schemas.Plugin]) -> List[schemas.Plugin]:
+    def process_plugins_list(higher_version_plugins: List[schemas.Plugin],
+                             base_version_plugins: List[schemas.Plugin]) -> List[schemas.Plugin]:
         """
         处理插件列表:合并、去重、排序、保留最高版本
         :param higher_version_plugins: 高版本插件列表
@@ -1136,20 +1348,41 @@ class PluginManager(ConfigReloadMixin, metaclass=Singleton):
         # 将未出现在高版本插件列表中的 v1 插件加入 all_plugins
         higher_plugin_ids = {f"{p.id}{p.plugin_version}" for p in higher_version_plugins}
         all_plugins.extend([p for p in base_version_plugins if f"{p.id}{p.plugin_version}" not in higher_plugin_ids])
-        # 去重
-        all_plugins = list({f"{p.id}{p.plugin_version}": p for p in all_plugins}.values())
-        # 所有插件按 repo 在设置中的顺序排序
-        all_plugins.sort(
-            key=lambda x: settings.PLUGIN_MARKET.split(",").index(x.repo_url) if x.repo_url else 0
-        )
-        # 相同 ID 的插件保留版本号最大的版本
-        max_versions = {}
-        for p in all_plugins:
-            if p.id not in max_versions or StringUtils.compare_version(p.plugin_version, ">", max_versions[p.id]):
-                max_versions[p.id] = p.plugin_version
-        result = [p for p in all_plugins if p.plugin_version == max_versions[p.id]]
-        logger.info(f"共获取到 {len(result)} 个线上插件")
-        return result
+        markets = [item for item in settings.PLUGIN_MARKET.split(",") if item]
+
+        def repo_order(plugin: schemas.Plugin) -> int:
+            if PluginHelper.is_local_repo_url(plugin.repo_url):
+                return len(markets) + 1
+            if plugin.repo_url in markets:
+                return markets.index(plugin.repo_url)
+            return len(markets)
+
+        # 去重:同 ID + 版本优先保留市场来源,其次按来源顺序稳定保留。
+        dedup_plugins = {}
+        for plugin in sorted(all_plugins, key=repo_order):
+            key = f"{plugin.id}{plugin.plugin_version}"
+            exists = dedup_plugins.get(key)
+            if not exists:
+                dedup_plugins[key] = plugin
+                continue
+            if PluginHelper.is_local_repo_url(exists.repo_url) and not PluginHelper.is_local_repo_url(plugin.repo_url):
+                dedup_plugins[key] = plugin
+
+        # 相同 ID 的插件保留版本号最大的版本;同版本市场来源优先。
+        result_by_id = {}
+        for plugin in sorted(dedup_plugins.values(), key=repo_order):
+            exists = result_by_id.get(plugin.id)
+            if not exists:
+                result_by_id[plugin.id] = plugin
+                continue
+            if StringUtils.compare_version(plugin.plugin_version, ">", exists.plugin_version):
+                result_by_id[plugin.id] = plugin
+            elif plugin.plugin_version == exists.plugin_version \
+                    and PluginHelper.is_local_repo_url(exists.repo_url) \
+                    and not PluginHelper.is_local_repo_url(plugin.repo_url):
+                result_by_id[plugin.id] = plugin
+
+        return list(result_by_id.values())

     def _process_plugin_info(self, pid: str, plugin_info: dict, market: str,
                              installed_apps: List[str], add_time: int,
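The keep-highest-version rule in the `process_plugins_list` hunk above can be illustrated in isolation. This is a simplified sketch with hypothetical stand-ins (`Plugin`, `compare_version`) for `schemas.Plugin` and `StringUtils.compare_version`, ignoring the local-vs-market tie-breaking:

```python
from dataclasses import dataclass


@dataclass
class Plugin:  # hypothetical stand-in for schemas.Plugin
    id: str
    plugin_version: str


def compare_version(a: str, b: str) -> int:
    """Naive dotted-version comparison, standing in for StringUtils.compare_version."""
    pa = [int(x) for x in a.split(".")]
    pb = [int(x) for x in b.split(".")]
    return (pa > pb) - (pa < pb)


def keep_highest(plugins: list[Plugin]) -> list[Plugin]:
    # Same shape as result_by_id in the hunk: first seen wins, higher version replaces.
    best: dict[str, Plugin] = {}
    for p in plugins:
        exists = best.get(p.id)
        if exists is None or compare_version(p.plugin_version, exists.plugin_version) > 0:
            best[p.id] = p
    return list(best.values())


result = keep_highest([Plugin("a", "1.2"), Plugin("a", "1.10"), Plugin("b", "0.1")])
print([(p.id, p.plugin_version) for p in result])  # [('a', '1.10'), ('b', '0.1')]
```

Numeric comparison matters here: string comparison would rank "1.2" above "1.10", which is why the real code delegates to a version-aware comparator.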
@@ -1296,7 +1529,9 @@ class PluginManager(ConfigReloadMixin, metaclass=Singleton):
         else:
             base_version_plugins.extend(plugins)  # 收集 v1 版本插件

-        return self._process_plugins_list(higher_version_plugins, base_version_plugins)
+        result = self.process_plugins_list(higher_version_plugins, base_version_plugins)
+        logger.info(f"获取到 {len(result)} 个线上插件")
+        return result

     async def async_get_plugins_from_market(self, market: str,
                                             package_version: Optional[str] = None,
@@ -13,7 +13,7 @@ from Crypto.Cipher import AES
 from Crypto.Util.Padding import pad
 from cryptography.fernet import Fernet
 from fastapi import HTTPException, status, Security, Request, Response
-from fastapi.security import OAuth2PasswordBearer, APIKeyHeader, APIKeyQuery, APIKeyCookie
+from fastapi.security import OAuth2PasswordBearer, APIKeyHeader, APIKeyQuery, APIKeyCookie, HTTPBearer
 from passlib.context import CryptContext

 from app import schemas
@@ -42,6 +42,12 @@ api_key_header = APIKeyHeader(name="X-API-KEY", auto_error=False, scheme_name="a
 # API KEY 通过 QUERY 认证
 api_key_query = APIKeyQuery(name="apikey", auto_error=False, scheme_name="api_key_query")

+# OpenAI compatible Bearer Token 认证
+openai_bearer_scheme = HTTPBearer(auto_error=False)
+
+# Anthropic compatible API Key 认证
+anthropic_api_key_header = APIKeyHeader(name="x-api-key", auto_error=False, scheme_name="anthropic_api_key_header")
+

 def __get_api_token(
         token_query: Annotated[str | None, Security(api_token_query)] = None
@@ -10,6 +10,9 @@ def init_db():
     """
     初始化数据库
     """
+    # 确保所有模型都已注册到 Base.metadata 中
+    import app.db.models  # noqa: F401
+
     # 全量建表
     Base.metadata.create_all(bind=Engine)  # noqa
@@ -1,10 +1,14 @@
 from .downloadhistory import DownloadHistory, DownloadFiles
 from .mediaserver import MediaServerItem
 from .message import Message
 from .passkey import PassKey
 from .plugindata import PluginData
 from .site import Site
 from .siteicon import SiteIcon
 from .sitestatistic import SiteStatistic
 from .siteuserdata import SiteUserData
 from .subscribe import Subscribe
 from .subscribehistory import SubscribeHistory
 from .systemconfig import SystemConfig
 from .transferhistory import TransferHistory
 from .user import User
@@ -238,7 +238,7 @@ class ImageHelper(metaclass=Singleton):
         # 请求远程图片
         params = self._get_request_params(url, proxy, cookies)
         response = RequestUtils(**params).get_res(url=url)
-        if not response:
+        if response is None or response.status_code != 200:
             logger.warn(f"Failed to fetch image from URL: {url}")
             return None
@@ -274,7 +274,7 @@ class ImageHelper(metaclass=Singleton):
         # 请求远程图片
         params = self._get_request_params(url, proxy, cookies)
         response = await AsyncRequestUtils(**params).get_res(url=url)
-        if not response:
+        if response is None or response.status_code != 200:
         logger.warn(f"Failed to fetch image from URL: {url}")
             return None
@@ -1,90 +1,332 @@
 """LLM模型相关辅助功能"""

-from typing import List
+import asyncio
+import inspect
+import time
+from typing import Any, List

 from app.core.config import settings
 from app.log import logger
+
+
+class LLMTestError(RuntimeError):
+    """LLM 测试调用异常,附带请求耗时。"""
+
+    def __init__(self, message: str, duration_ms: int | None = None):
+        super().__init__(message)
+        self.duration_ms = duration_ms
+
+
+class LLMTestTimeout(TimeoutError):
+    """LLM 测试调用超时,附带请求耗时。"""
+
+    def __init__(self, message: str, duration_ms: int | None = None):
+        super().__init__(message)
+        self.duration_ms = duration_ms
+
+
+def _patch_gemini_thought_signature():
+    """
+    修复 langchain-google-genai 中 Gemini 2.5 思考模型的 thought_signature 兼容问题。
+    langchain-google-genai 的 _is_gemini_3_or_later() 仅检查 "gemini-3",
+    导致 Gemini 2.5 思考模型(如 gemini-2.5-flash、gemini-2.5-pro)在工具调用时
+    缺少 thought_signature 而报错 400。
+    此补丁将检查范围扩展到 Gemini 2.5 模型。
+    """
+    try:
+        import langchain_google_genai.chat_models as _cm
+
+        # 仅在未修补时执行
+        if getattr(_cm, "_thought_signature_patched", False):
+            return
+
+        def _patched_is_gemini_3_or_later(model_name: str) -> bool:
+            if not model_name:
+                return False
+            name = model_name.lower().replace("models/", "")
+            # Gemini 2.5 思考模型也需要 thought_signature 支持
+            return "gemini-3" in name or "gemini-2.5" in name
+
+        _cm._is_gemini_3_or_later = _patched_is_gemini_3_or_later
+        _cm._thought_signature_patched = True
+        logger.debug(
+            "已修补 langchain-google-genai thought_signature 兼容性(覆盖 Gemini 2.5 模型)"
+        )
+    except Exception as e:
+        logger.warning(f"修补 langchain-google-genai thought_signature 失败: {e}")
+
+
+def _get_httpx_proxy_key() -> str:
+    """
+    获取当前 httpx 版本支持的代理参数名。
+    httpx < 0.28 使用 "proxies"(复数),>= 0.28 使用 "proxy"(单数)。
+    google-genai SDK 会静默过滤掉不在 httpx.Client.__init__ 签名中的参数,
+    因此必须使用与当前 httpx 版本匹配的参数名。
+    """
+    try:
+        import httpx
+
+        params = inspect.signature(httpx.Client.__init__).parameters
+        if "proxy" in params:
+            return "proxy"
+        return "proxies"
+    except Exception:
+        return "proxies"
 class LLMHelper:
     """LLM模型相关辅助功能"""

     @staticmethod
-    def get_llm(streaming: bool = False):
+    def _should_disable_thinking(disable_thinking: bool | None = None) -> bool:
+        """
+        判断本次调用是否应尝试关闭模型思考能力。
+        """
+        if disable_thinking is not None:
+            return bool(disable_thinking)
+        return bool(getattr(settings, "LLM_DISABLE_THINKING", False))
+
+    @staticmethod
+    def _normalize_model_name(model_name: str | None) -> str:
+        """
+        统一清理模型名称,便于按模型族做能力映射。
+        """
+        return (model_name or "").strip().lower()
+
+    @classmethod
+    def _build_disabled_thinking_kwargs(
+        cls,
+        provider: str,
+        model: str | None,
+        disable_thinking: bool | None = None,
+    ) -> dict[str, Any]:
+        """
+        按 provider/model 生成“禁用思考”相关参数。
+
+        优先使用 LangChain/OpenAI SDK 已支持的原生字段;仅在 provider
+        明确要求自定义请求体时,才回退到 extra_body。
+        """
+        if not cls._should_disable_thinking(disable_thinking):
+            return {}
+
+        provider_name = (provider or "").strip().lower()
+        model_name = cls._normalize_model_name(model)
+        if not model_name:
+            return {}
+
+        # Moonshot Kimi K2.5/K2.6 需要在请求体显式声明 thinking.disabled。
+        if model_name.startswith(("kimi-k2.5", "kimi-k2.6")):
+            return {"extra_body": {"thinking": {"type": "disabled"}}}
+
+        # OpenAI 原生推理模型优先走 LangChain 内置 reasoning_effort。
+        if provider_name == "openai" and model_name.startswith(
+            ("gpt-5", "o1", "o3", "o4")
+        ):
+            return {"reasoning_effort": "none"}
+
+        # Gemini 使用 google-genai / langchain-google-genai 内置思考控制参数。
+        if provider_name == "google":
+            if "gemini-2.5" in model_name:
+                return {
+                    "thinking_budget": 0,
+                    "include_thoughts": False,
+                }
+            if "gemini-3" in model_name:
+                return {
+                    "thinking_level": "minimal",
+                    "include_thoughts": False,
+                }
+
+        return {}
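The capability dispatch in `_build_disabled_thinking_kwargs` above reduces to a small pure function. This is a simplified sketch of that mapping (a plain function rather than the classmethod, with the same literal values as the hunk), useful for seeing which kwargs each provider/model family receives:

```python
def disabled_thinking_kwargs(provider: str, model: str) -> dict:
    """Mirror of the dispatch in _build_disabled_thinking_kwargs (simplified)."""
    provider = (provider or "").strip().lower()
    model = (model or "").strip().lower()
    if model.startswith(("kimi-k2.5", "kimi-k2.6")):
        # Moonshot expects an explicit request-body flag.
        return {"extra_body": {"thinking": {"type": "disabled"}}}
    if provider == "openai" and model.startswith(("gpt-5", "o1", "o3", "o4")):
        # Native OpenAI reasoning models take a reasoning-effort knob.
        return {"reasoning_effort": "none"}
    if provider == "google" and "gemini-2.5" in model:
        return {"thinking_budget": 0, "include_thoughts": False}
    if provider == "google" and "gemini-3" in model:
        return {"thinking_level": "minimal", "include_thoughts": False}
    # Unknown family: add nothing and let the model behave as configured.
    return {}


print(disabled_thinking_kwargs("openai", "o3-mini"))  # {'reasoning_effort': 'none'}
```

Returning an empty dict for unrecognized families is what makes the later `**thinking_kwargs` splat in `get_llm` safe: unknown models simply get no extra arguments.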
+
+    @staticmethod
+    def supports_image_input() -> bool:
+        """
+        判断当前模型是否启用了图片输入能力。
+        """
+        return bool(settings.LLM_SUPPORT_IMAGE_INPUT)
     @staticmethod
+    def get_llm(
+        streaming: bool = False,
+        provider: str | None = None,
+        model: str | None = None,
+        disable_thinking: bool | None = None,
+        api_key: str | None = None,
+        base_url: str | None = None,
+    ):
         """
         获取LLM实例
         :param streaming: 是否启用流式输出
         :return: LLM实例
         """
-        provider = settings.LLM_PROVIDER.lower()
-        api_key = settings.LLM_API_KEY
+        provider_name = str(
+            provider if provider is not None else settings.LLM_PROVIDER
+        ).lower()
+        model_name = model if model is not None else settings.LLM_MODEL
+        api_key_value = api_key if api_key is not None else settings.LLM_API_KEY
+        base_url_value = base_url if base_url is not None else settings.LLM_BASE_URL
+        thinking_kwargs = LLMHelper._build_disabled_thinking_kwargs(
+            provider=provider_name,
+            model=model_name,
+            disable_thinking=disable_thinking,
+        )

-        if not api_key:
+        if not api_key_value:
             raise ValueError("未配置LLM API Key")

-        if provider == "google":
+        if provider_name == "google":
             # 修补 Gemini 2.5 思考模型的 thought_signature 兼容性
             _patch_gemini_thought_signature()

             # 统一使用 langchain-google-genai 原生接口
             # 不使用 OpenAI 兼容端点,因其不支持 Gemini 思考模型的 thought_signature,
             # 会导致工具调用时报错 400
             from langchain_google_genai import ChatGoogleGenerativeAI

+            client_args = None
             if settings.PROXY_HOST:
-                # 通过代理使用 Google 的 OpenAI 兼容接口
-                from langchain_openai import ChatOpenAI
+                proxy_key = _get_httpx_proxy_key()
+                client_args = {proxy_key: settings.PROXY_HOST}

-                model = ChatOpenAI(
-                    model=settings.LLM_MODEL,
-                    api_key=api_key,
-                    max_retries=3,
-                    base_url="https://generativelanguage.googleapis.com/v1beta/openai",
-                    temperature=settings.LLM_TEMPERATURE,
-                    streaming=streaming,
-                    stream_usage=True,
-                    openai_proxy=settings.PROXY_HOST,
-                )
-            else:
-                # 使用 langchain-google-genai 原生接口(v4 API 变更:google_api_key → api_key,max_retries → retries)
-                from langchain_google_genai import ChatGoogleGenerativeAI
-
-                model = ChatGoogleGenerativeAI(
-                    model=settings.LLM_MODEL,
-                    api_key=api_key,
-                    retries=3,
-                    temperature=settings.LLM_TEMPERATURE,
-                    streaming=streaming
-                )
-        elif provider == "deepseek":
+            model = ChatGoogleGenerativeAI(
+                model=model_name,
+                api_key=api_key_value,
+                retries=3,
+                temperature=settings.LLM_TEMPERATURE,
+                streaming=streaming,
+                client_args=client_args,
+                **thinking_kwargs,
+            )
+        elif provider_name == "deepseek":
             from langchain_deepseek import ChatDeepSeek

             model = ChatDeepSeek(
-                model=settings.LLM_MODEL,
-                api_key=api_key,
+                model=model_name,
+                api_key=api_key_value,
                 max_retries=3,
                 temperature=settings.LLM_TEMPERATURE,
                 streaming=streaming,
                 stream_usage=True,
+                **thinking_kwargs,
             )
         else:
             from langchain_openai import ChatOpenAI

             model = ChatOpenAI(
-                model=settings.LLM_MODEL,
-                api_key=api_key,
+                model=model_name,
+                api_key=api_key_value,
                 max_retries=3,
-                base_url=settings.LLM_BASE_URL,
+                base_url=base_url_value,
                 temperature=settings.LLM_TEMPERATURE,
                 streaming=streaming,
                 stream_usage=True,
                 openai_proxy=settings.PROXY_HOST,
+                **thinking_kwargs,
             )

         # 检查是否有profile
         if hasattr(model, "profile") and model.profile:
-            logger.info(f"使用LLM模型: {model.model},Profile: {model.profile}")
+            logger.debug(f"使用LLM模型: {model.model},Profile: {model.profile}")
         else:
             model.profile = {
-                "max_input_tokens": settings.LLM_MAX_CONTEXT_TOKENS * 1000,  # 转换为token单位
+                "max_input_tokens": settings.LLM_MAX_CONTEXT_TOKENS
+                * 1000,  # 转换为token单位
             }

         return model
+
+    @staticmethod
+    def _extract_text_content(content) -> str:
+        """
+        从响应内容中提取纯文本,仅保留真实文本块。
+        """
+        if content is None:
+            return ""
+        if isinstance(content, str):
+            return content
+        if isinstance(content, list):
+            text_parts = []
+            for block in content:
+                if isinstance(block, str):
+                    text_parts.append(block)
+                    continue
+
+                if isinstance(block, dict) or hasattr(block, "get"):
+                    block_type = block.get("type")
+                    if block.get("thought") or block_type in (
+                        "thinking",
+                        "reasoning_content",
+                        "reasoning",
+                        "thought",
+                    ):
+                        continue
+                    if block_type == "text":
+                        text_parts.append(block.get("text", ""))
+                        continue
+                    if not block_type and isinstance(block.get("text"), str):
+                        text_parts.append(block.get("text", ""))
+            return "".join(text_parts)
+        if isinstance(content, dict) or hasattr(content, "get"):
+            if content.get("thought"):
+                return ""
+            if content.get("type") == "text":
+                return content.get("text", "")
+            if not content.get("type") and isinstance(content.get("text"), str):
+                return content.get("text", "")
+        return ""
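The filtering idea behind `_extract_text_content` above is worth seeing on concrete data. This is a cut-down sketch (my own reduced version handling only the common block shapes, not the full method):

```python
def extract_text(content) -> str:
    """Keep only real text blocks, dropping thinking/reasoning ones (as above)."""
    if isinstance(content, str):
        return content
    parts = []
    for block in content or []:
        if isinstance(block, str):
            parts.append(block)
        elif isinstance(block, dict):
            # Skip any block flagged as model "thought" output.
            if block.get("thought") or block.get("type") in ("thinking", "reasoning"):
                continue
            if block.get("type") == "text":
                parts.append(block.get("text", ""))
    return "".join(parts)


mixed = [{"type": "thinking", "text": "...pondering..."}, {"type": "text", "text": "OK"}]
print(extract_text(mixed))  # OK
```

This matters for the "请只回复 OK" connectivity test: reasoning models may emit a thinking block first, and without the filter the reply preview would include internal reasoning rather than just the answer.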
    @staticmethod
    async def test_current_settings(
        prompt: str = "请只回复 OK",
        timeout: int = 20,
        provider: str | None = None,
        model: str | None = None,
        disable_thinking: bool | None = None,
        api_key: str | None = None,
        base_url: str | None = None,
    ) -> dict:
        """
        Run a single minimal LLM call using the currently saved configuration.
        """
        provider_name = provider if provider is not None else settings.LLM_PROVIDER
        model_name = model if model is not None else settings.LLM_MODEL
        api_key_value = api_key if api_key is not None else settings.LLM_API_KEY
        base_url_value = base_url if base_url is not None else settings.LLM_BASE_URL
        start = time.perf_counter()
        llm = LLMHelper.get_llm(
            streaming=False,
            provider=provider_name,
            model=model_name,
            disable_thinking=disable_thinking,
            api_key=api_key_value,
            base_url=base_url_value,
        )
        try:
            response = await asyncio.wait_for(llm.ainvoke(prompt), timeout=timeout)
        except TimeoutError as err:
            duration_ms = round((time.perf_counter() - start) * 1000)
            raise LLMTestTimeout("LLM 调用超时", duration_ms=duration_ms) from err
        except Exception as err:
            duration_ms = round((time.perf_counter() - start) * 1000)
            raise LLMTestError(str(err), duration_ms=duration_ms) from err

        reply_text = LLMHelper._extract_text_content(
            getattr(response, "content", response)
        ).strip()
        duration_ms = round((time.perf_counter() - start) * 1000)

        data = {
            "provider": provider_name,
            "model": model_name,
            "duration_ms": duration_ms,
        }
        if reply_text:
            data["reply_preview"] = reply_text[:120]
        return data

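`test_current_settings` wraps the call in `asyncio.wait_for` and reports the elapsed time whether it succeeds or times out. The pattern in isolation (a sketch; `slow_call` and the timings are stand-ins, not part of the code above):

```python
import asyncio
import time

async def slow_call() -> str:
    await asyncio.sleep(0.2)
    return "OK"

async def timed_call(timeout: float) -> dict:
    """Time the call and convert a timeout into a structured result."""
    start = time.perf_counter()
    try:
        reply = await asyncio.wait_for(slow_call(), timeout=timeout)
    except asyncio.TimeoutError:  # alias of the builtin TimeoutError on 3.11+
        return {"ok": False,
                "duration_ms": round((time.perf_counter() - start) * 1000)}
    return {"ok": True, "reply": reply,
            "duration_ms": round((time.perf_counter() - start) * 1000)}

print(asyncio.run(timed_call(0.05))["ok"])  # times out
print(asyncio.run(timed_call(1.0))["ok"])   # completes
```

Measuring `duration_ms` before re-raising, as the real method does, means even a failed probe reports how long the endpoint took to fail.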
    def get_models(
        self, provider: str, api_key: str, base_url: str = None
    ) -> List[str]:
        """Fetch the model list"""
        logger.info(f"获取 {provider} 模型列表...")
@@ -98,8 +340,18 @@ class LLMHelper:
        """Fetch the Google model list (using the google-genai SDK v1)"""
        try:
            from google import genai
            from google.genai.types import HttpOptions

            http_options = None
            if settings.PROXY_HOST:
                proxy_key = _get_httpx_proxy_key()
                proxy_args = {proxy_key: settings.PROXY_HOST}
                http_options = HttpOptions(
                    client_args=proxy_args,
                    async_client_args=proxy_args,
                )

            client = genai.Client(api_key=api_key, http_options=http_options)
            models = client.models.list()
            return [
                m.name
@@ -112,7 +364,7 @@ class LLMHelper:

    @staticmethod
    def _get_openai_compatible_models(
        provider: str, api_key: str, base_url: str = None
    ) -> List[str]:
        """Fetch the OpenAI-compatible model list"""
        try:

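`_get_httpx_proxy_key()` is referenced but not shown in this diff; presumably it picks the proxy keyword name accepted by the installed httpx (`proxies` before httpx 0.26, `proxy` afterwards). A generic, stdlib-only sketch of that idea (the function and parameter names here are hypothetical, not the project's actual helper):

```python
import inspect

def pick_kwarg(func, preferred: str, fallback: str) -> str:
    """Return `preferred` if func accepts a parameter with that name,
    else `fallback` — one way to bridge an API rename across versions."""
    params = inspect.signature(func).parameters
    return preferred if preferred in params else fallback

# Stand-ins for two client generations of a library:
def old_client(*, proxies=None): ...
def new_client(*, proxy=None): ...

print(pick_kwarg(new_client, "proxy", "proxies"))  # → proxy
print(pick_kwarg(old_client, "proxy", "proxies"))  # → proxies
```

The resolved name can then be used to build the kwargs dict, as the code above does with `{proxy_key: settings.PROXY_HOST}`.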
@@ -1,3 +1,4 @@
import asyncio
import importlib
import io
import json
@@ -8,6 +9,7 @@ import traceback
import zipfile
from pathlib import Path
from typing import Dict, List, Optional, Tuple, Set, Callable, Awaitable
from urllib.parse import parse_qs, quote, unquote, urlsplit

import aiofiles
import aioshutil
@@ -26,10 +28,12 @@ from app.log import logger
from app.schemas.types import SystemConfigKey
from app.utils.http import RequestUtils, AsyncRequestUtils
from app.utils.singleton import WeakSingleton
from app.utils.string import StringUtils
from app.utils.system import SystemUtils
from app.utils.url import UrlUtils

PLUGIN_DIR = Path(settings.ROOT_PATH) / "app" / "plugins"
LOCAL_REPO_PREFIX = "local://"


class PluginHelper(metaclass=WeakSingleton):
@@ -49,9 +53,283 @@ class PluginHelper(metaclass=WeakSingleton):
        if self.install_report():
            self.systemconfig.set(SystemConfigKey.PluginInstallReport, "1")

    @staticmethod
    def is_local_repo_url(repo_url: Optional[str]) -> bool:
        """
        Check whether the URL is a local plugin source identifier
        """
        return bool(repo_url and repo_url.startswith(LOCAL_REPO_PREFIX))

    @staticmethod
    def make_local_repo_url(pid: str, repo_path: Optional[Path] = None,
                            package_version: Optional[str] = None) -> str:
        """
        Build a local plugin install source identifier
        """
        repo_url = f"{LOCAL_REPO_PREFIX}{quote(pid, safe='')}"
        params = []
        if repo_path:
            params.append(f"path={quote(str(repo_path), safe='/:~')}")
        if package_version:
            params.append(f"version={quote(package_version, safe='')}")
        if params:
            repo_url = f"{repo_url}?{'&'.join(params)}"
        return repo_url

    @staticmethod
    def parse_local_repo_url(repo_url: str) -> Optional[str]:
        """
        Parse the plugin ID out of a local plugin source identifier
        """
        if not PluginHelper.is_local_repo_url(repo_url):
            return None
        try:
            parts = urlsplit(repo_url)
            pid = unquote(parts.netloc or parts.path.strip("/"))
        except Exception:
            pid = repo_url[len(LOCAL_REPO_PREFIX):].split("?", 1)[0].strip("/")
        return pid or None

    @staticmethod
    def parse_local_repo_path(repo_url: str) -> Optional[Path]:
        """
        Parse the repository path out of a local plugin source identifier
        """
        if not PluginHelper.is_local_repo_url(repo_url):
            return None
        try:
            values = parse_qs(urlsplit(repo_url).query).get("path")
            if not values:
                return None
            path = Path(values[0]).expanduser()
            if not path.is_absolute():
                path = settings.ROOT_PATH / path
            return path.resolve()
        except Exception:
            return None

    @staticmethod
    def parse_local_repo_package_version(repo_url: str) -> Optional[str]:
        """
        Parse the package version out of a local plugin source identifier
        """
        if not PluginHelper.is_local_repo_url(repo_url):
            return None
        try:
            values = parse_qs(urlsplit(repo_url).query).get("version")
            if not values:
                return None
            return values[0]
        except Exception:
            return None

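The `local://` identifiers above are a small URL scheme: the plugin ID rides in the netloc, the repo path and package version in the query string. A trimmed round-trip sketch (the fallback parsing and path resolution of the real methods are omitted):

```python
from pathlib import Path
from urllib.parse import parse_qs, quote, unquote, urlsplit

LOCAL_REPO_PREFIX = "local://"

def make_local_repo_url(pid, repo_path=None, package_version=None):
    url = f"{LOCAL_REPO_PREFIX}{quote(pid, safe='')}"
    params = []
    if repo_path:
        params.append(f"path={quote(str(repo_path), safe='/:~')}")
    if package_version:
        params.append(f"version={quote(package_version, safe='')}")
    return f"{url}?{'&'.join(params)}" if params else url

def parse_local_repo_url(repo_url):
    # The plugin ID lands in the netloc of a local:// URL
    parts = urlsplit(repo_url)
    return unquote(parts.netloc or parts.path.strip("/")) or None

def parse_local_repo_path(repo_url):
    values = parse_qs(urlsplit(repo_url).query).get("path")
    return Path(values[0]) if values else None

url = make_local_repo_url("MyPlugin", Path("/opt/repo"), "v2")
print(url)  # → local://MyPlugin?path=/opt/repo&version=v2
print(parse_local_repo_url(url))  # → MyPlugin
```

Quoting the ID with `safe=''` keeps `?` and `/` inside a plugin ID from corrupting the query string, while `safe='/:~'` leaves typical path characters readable.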
    @staticmethod
    def sanitize_repo_url_for_statistic(repo_url: Optional[str]) -> Optional[str]:
        """
        Sanitize repo_url before statistics reporting, to avoid leaking local repository absolute paths
        """
        if not repo_url:
            return repo_url
        if not PluginHelper.is_local_repo_url(repo_url):
            return repo_url

        pid = PluginHelper.parse_local_repo_url(repo_url)
        if not pid:
            return LOCAL_REPO_PREFIX.rstrip("/")

        return PluginHelper.make_local_repo_url(
            pid=pid,
            package_version=PluginHelper.parse_local_repo_package_version(repo_url)
        )

    @staticmethod
    def get_local_repo_paths() -> List[Path]:
        """
        Get the list of local plugin repository directories
        """
        if not settings.PLUGIN_LOCAL_REPO_PATHS:
            return []
        paths = []
        for item in settings.PLUGIN_LOCAL_REPO_PATHS.split(","):
            local_repo_path = item.strip()
            if not local_repo_path:
                continue
            path = Path(local_repo_path).expanduser()
            if not path.is_absolute():
                path = settings.ROOT_PATH / path
            paths.append(path.resolve())
        return paths

    @staticmethod
    def __get_local_package(repo_path: Path, package_version: Optional[str] = None) -> Optional[Dict[str, dict]]:
        """
        Read package.json or package.{version}.json from a local plugin repository
        """
        package_file = repo_path / (
            f"package.{package_version}.json" if package_version else "package.json"
        )
        if not package_file.exists():
            return {}
        try:
            content = package_file.read_text(encoding="utf-8")
            payload = json.loads(content)
        except Exception as e:
            logger.warn(f"读取本地插件包 {package_file} 失败:{e}")
            return None
        if not isinstance(payload, dict):
            logger.warn(f"本地插件包 {package_file} 格式不正确")
            return None
        return payload

    @staticmethod
    def __get_local_plugin_dir(repo_path: Path, pid: str, package_version: Optional[str]) -> Path:
        plugin_root = f"plugins.{package_version}" if package_version else "plugins"
        return repo_path / plugin_root / pid.lower()

    def get_local_plugin_candidates(self) -> Dict[str, dict]:
        """
        Scan local plugin repositories, keeping the highest-version candidate per plugin ID
        """
        candidates: Dict[str, dict] = {}
        for repo_order, repo_path in enumerate(self.get_local_repo_paths()):
            if not repo_path.exists() or not repo_path.is_dir():
                logger.warn(f"本地插件仓库目录不存在或不可读:{repo_path}")
                continue

            package_candidates = []
            if settings.VERSION_FLAG:
                package_candidates.append((settings.VERSION_FLAG,
                                           self.__get_local_package(repo_path,
                                                                    settings.VERSION_FLAG)))
            package_candidates.append(("", self.__get_local_package(repo_path)))

            for package_version, local_plugins in package_candidates:
                if local_plugins is None:
                    continue
                for pid, plugin_info in local_plugins.items():
                    if not isinstance(plugin_info, dict):
                        continue
                    # Legacy entries in package.json must declare compatibility with the current version.
                    if (
                        not package_version
                        and settings.VERSION_FLAG
                        and plugin_info.get(settings.VERSION_FLAG) is not True
                    ):
                        continue

                    plugin_dir = self.__get_local_plugin_dir(repo_path, pid, package_version)
                    if not plugin_dir.is_dir():
                        logger.debug(f"跳过本地插件 {pid}:插件目录不存在 {plugin_dir}")
                        continue

                    candidate = plugin_info.copy()
                    candidate["id"] = pid
                    candidate["package_version"] = package_version
                    candidate["repo_order"] = repo_order
                    candidate["repo_path"] = repo_path
                    candidate["path"] = plugin_dir
                    candidate_version = str(candidate.get("version") or "0")

                    existing = candidates.get(pid)
                    if not existing:
                        candidates[pid] = candidate
                        continue

                    existing_version = str(existing.get("version") or "0")
                    if StringUtils.compare_version(candidate_version, ">", existing_version):
                        candidates[pid] = candidate
                    elif (
                        candidate_version == existing_version
                        and repo_order < int(existing.get("repo_order", repo_order))
                    ):
                        logger.info(f"本地插件 {pid} 存在同版本来源,使用靠前目录:{repo_path}")
                        candidates[pid] = candidate

        return candidates

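The selection rule in the scan above is: per plugin ID, keep the highest version, and on a version tie prefer the repository listed earlier (lower `repo_order`). A self-contained sketch of that rule (dotted-integer comparison stands in for `StringUtils.compare_version`, which is an assumption about its semantics):

```python
def pick_candidates(entries):
    """Per id: highest version wins; ties go to the earlier repository."""
    def vkey(v):
        return tuple(int(x) for x in v.split("."))
    best = {}
    for e in entries:
        cur = best.get(e["id"])
        if cur is None:
            best[e["id"]] = e
        elif vkey(e["version"]) > vkey(cur["version"]):
            best[e["id"]] = e
        elif e["version"] == cur["version"] and e["repo_order"] < cur["repo_order"]:
            best[e["id"]] = e
    return best

entries = [
    {"id": "Demo", "version": "1.2", "repo_order": 1},
    {"id": "Demo", "version": "1.10", "repo_order": 0},
    {"id": "Demo", "version": "1.10", "repo_order": 1},
]
best = pick_candidates(entries)["Demo"]
print(best["version"], best["repo_order"])  # → 1.10 0
```

Note the tuple comparison treats `1.10` as newer than `1.2`, which a plain string comparison would get wrong.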
    def get_local_plugin_candidate(self, pid: str, package_version: Optional[str] = None,
                                   repo_path: Optional[Path] = None,
                                   strict_compat: bool = True) -> Optional[dict]:
        """
        Get the local plugin candidate for the given plugin ID
        """
        if not pid:
            return None
        if package_version is not None or repo_path is not None:
            repo_paths = [repo_path.resolve()] if repo_path else self.get_local_repo_paths()
            package_versions = [package_version] if package_version is not None else []
            if package_version is None:
                if settings.VERSION_FLAG:
                    package_versions.append(settings.VERSION_FLAG)
                package_versions.append("")
            selected_candidate = None
            for repo_order, local_repo_path in enumerate(self.get_local_repo_paths()):
                if local_repo_path not in repo_paths:
                    continue
                for current_package_version in package_versions:
                    local_plugins = self.__get_local_package(local_repo_path, current_package_version or "")
                    if not local_plugins:
                        continue
                    for candidate_pid, plugin_info in local_plugins.items():
                        if candidate_pid.lower() != pid.lower() or not isinstance(plugin_info, dict):
                            continue
                        is_compatible = not (
                            not current_package_version
                            and settings.VERSION_FLAG
                            and plugin_info.get(settings.VERSION_FLAG) is not True
                        )
                        if not is_compatible and strict_compat:
                            continue
                        plugin_dir = self.__get_local_plugin_dir(local_repo_path, candidate_pid,
                                                                 current_package_version or "")
                        if not plugin_dir.is_dir():
                            continue
                        candidate = plugin_info.copy()
                        candidate["id"] = candidate_pid
                        candidate["package_version"] = current_package_version or ""
                        candidate["repo_order"] = repo_order
                        candidate["repo_path"] = local_repo_path
                        candidate["path"] = plugin_dir
                        if not is_compatible:
                            candidate["compatible"] = False
                            candidate["skip_reason"] = f"package.json 未声明 {settings.VERSION_FLAG} 兼容"
                        if package_version is not None:
                            return candidate
                        if not selected_candidate:
                            selected_candidate = candidate
                            continue
                        selected_version = str(selected_candidate.get("version") or "0")
                        candidate_version = str(candidate.get("version") or "0")
                        if StringUtils.compare_version(candidate_version, ">", selected_version):
                            selected_candidate = candidate
            return selected_candidate

        candidates = self.get_local_plugin_candidates()
        for candidate_pid, candidate in candidates.items():
            if candidate_pid.lower() == pid.lower():
                return candidate
        return None

    @staticmethod
    def __parse_plugin_index_response(content: str) -> Optional[Dict[str, dict]]:
        """
        Parse the plugin index response; only successfully parsed dict results are cached.
        """
        try:
            payload = json.loads(content)
        except json.JSONDecodeError:
            if "404: Not Found" not in content:
                logger.warn(f"插件包数据解析失败:{content}")
            return None

        if not isinstance(payload, dict):
            logger.warn(f"插件包数据格式不正确,期望 dict,实际为 {type(payload).__name__}")
            return None

        return payload

    @cached(maxsize=128, ttl=1800)
    def get_plugins(self, repo_url: str,
                    package_version: Optional[str] = None) -> Optional[Dict[str, dict]]:
        """
        Get the latest list of all plugins from Github
        :param repo_url: Github repository URL
@@ -70,15 +348,11 @@ class PluginHelper(metaclass=WeakSingleton):
        res = self.__request_with_fallback(package_url, headers=settings.REPO_GITHUB_HEADERS(repo=f"{user}/{repo}"))
        if res is None:
            return None
        if res.status_code == 404:
            return {}
        if res.status_code != 200:
            return None
        return self.__parse_plugin_index_response(res.text)

    def get_plugin_package_version(self, pid: str, repo_url: str,
                                   package_version: Optional[str] = None) -> Optional[str]:
@@ -136,7 +410,7 @@ class PluginHelper(metaclass=WeakSingleton):
        if not settings.PLUGIN_STATISTIC_SHARE:
            return {}
        res = RequestUtils(proxies=settings.PROXY, timeout=10).get_res(self._install_statistic)
        if res is not None and res.status_code == 200:
            return res.json()
        return {}

@@ -155,9 +429,9 @@ class PluginHelper(metaclass=WeakSingleton):
            timeout=5
        ).post(install_reg_url, json={
            "plugin_id": pid,
            "repo_url": self.sanitize_repo_url_for_statistic(repo_url)
        })
        if res is not None and res.status_code == 200:
            return True
        return False

@@ -172,7 +446,10 @@ class PluginHelper(metaclass=WeakSingleton):
        if items:
            for pid, repo_url in items:
                if pid:
                    payload_plugins.append({
                        "plugin_id": pid,
                        "repo_url": self.sanitize_repo_url_for_statistic(repo_url)
                    })
        else:
            plugins = self.systemconfig.get(SystemConfigKey.UserInstalledPlugins)
            if not plugins:
@@ -182,7 +459,7 @@ class PluginHelper(metaclass=WeakSingleton):
                content_type="application/json",
                timeout=5).post(self._install_report,
                                json={"plugins": payload_plugins})
        return bool(res is not None and res.status_code == 200)

    def install(self, pid: str, repo_url: str, package_version: Optional[str] = None, force_install: bool = False) \
            -> Tuple[bool, str]:
@@ -200,6 +477,9 @@ class PluginHelper(metaclass=WeakSingleton):
        :param force_install: Whether to force-install the plugin; disabled by default. When enabled, no backup/restore is performed
        :return: (success, error message)
        """
        if self.is_local_repo_url(repo_url):
            return self.install_local(pid=pid, repo_url=repo_url, force_install=force_install)

        if SystemUtils.is_frozen():
            return False, "可执行文件模式下,只能安装本地插件"

@@ -257,6 +537,56 @@ class PluginHelper(metaclass=WeakSingleton):

        return self.__install_flow_sync(pid, force_install, prepare_filelist, repo_url)

    def install_local(self, pid: str, repo_url: str = "", force_install: bool = False) -> Tuple[bool, str]:
        """
        Install a plugin from a local plugin repository directory
        """
        local_pid = self.parse_local_repo_url(repo_url) if repo_url else pid
        if not local_pid or local_pid.lower() != pid.lower():
            return False, "本地插件来源与插件ID不匹配"

        repo_path = self.parse_local_repo_path(repo_url) if repo_url else None
        package_version = self.parse_local_repo_package_version(repo_url) if repo_url else None
        candidate = self.get_local_plugin_candidate(
            pid,
            package_version=package_version,
            repo_path=repo_path
        )
        if not candidate:
            return False, f"未找到本地插件:{pid}"

        source_dir = Path(candidate.get("path"))
        dest_dir = PLUGIN_DIR / pid.lower()
        try:
            if source_dir.resolve() == dest_dir.resolve():
                return False, "本地插件来源不能与运行目录相同"
        except Exception:
            return False, "本地插件来源路径无效"

        def prepare_local() -> Tuple[bool, str]:
            try:
                shutil.copytree(
                    source_dir,
                    dest_dir,
                    dirs_exist_ok=True,
                    ignore=shutil.ignore_patterns("__pycache__", "*.pyc", ".DS_Store")
                )
                return True, ""
            except Exception as e:
                logger.error(f"复制本地插件 {pid} 失败:{e}")
                return False, f"复制本地插件失败:{e}"

        return self.__install_flow_sync(
            pid=pid,
            force_install=force_install,
            prepare_content=prepare_local,
            repo_url=repo_url or self.make_local_repo_url(
                pid,
                candidate.get("repo_path"),
                candidate.get("package_version")
            )
        )

    def __get_file_list(self, pid: str, user_repo: str, package_version: Optional[str] = None) -> \
            Tuple[Optional[list], Optional[str]]:
        """
@@ -445,22 +775,93 @@ class PluginHelper(metaclass=WeakSingleton):
        shutil.rmtree(plugin_dir, ignore_errors=True)

    @staticmethod
    def refresh_persistent_plugin_backup(pid: str) -> bool:
        """
        Refresh the persistent plugin backup directory, used to restore after a docker reset
        """
        if not SystemUtils.is_docker():
            return True

        plugin_dir = PLUGIN_DIR / pid.lower()
        if not plugin_dir.exists():
            logger.warn(f"{pid} 插件目录不存在,跳过刷新插件备份")
            return False

        backup_root = settings.CONFIG_PATH / "plugins_backup"
        backup_dir = backup_root / pid.lower()
        try:
            backup_root.mkdir(parents=True, exist_ok=True)
            if backup_dir.exists():
                shutil.rmtree(backup_dir, ignore_errors=True)
            shutil.copytree(
                plugin_dir,
                backup_dir,
                dirs_exist_ok=True,
                ignore=shutil.ignore_patterns("__pycache__", "*.pyc", ".DS_Store")
            )
            logger.info(f"已刷新插件备份: {pid}")
            return True
        except Exception as e:
            logger.error(f"刷新插件备份失败: {pid} - {e}")
            return False

    def __collect_plugin_wheels_dirs(self) -> List[Path]:
        """
        Collect the usable wheels directories under installed plugin directories, for reuse during batch dependency installs.
        """
        wheels_dirs = []
        try:
            install_plugins = {
                plugin_id.lower()
                for plugin_id in self.systemconfig.get(SystemConfigKey.UserInstalledPlugins) or []
            }
            for plugin_id in install_plugins:
                wheels_dir = PLUGIN_DIR / plugin_id / "wheels"
                if wheels_dir.is_dir():
                    wheels_dirs.append(wheels_dir)
        except Exception as e:
            logger.error(f"收集插件 wheels 目录时发生错误:{e}")
            return []

        # Deduplicate while keeping a stable order, to avoid passing the same directory twice
        return list(dict.fromkeys(wheels_dirs))

    @staticmethod
    def pip_install_with_fallback(requirements_file: Path,
                                  find_links_dirs: Optional[List[Path]] = None) -> Tuple[bool, str]:
        """
        Install dependencies with an automatic fallback strategy, making sure newly installed packages can be imported dynamically
        :param requirements_file: path to the dependencies' requirements.txt file
        :param find_links_dirs: extra local wheels directories
        :return: (success, error message)
        """
        wheels_dir = requirements_file.parent / "wheels"
        candidate_dirs = []
        if wheels_dir.is_dir():
            candidate_dirs.append(wheels_dir)
        if find_links_dirs:
            candidate_dirs.extend(find_links_dirs)

        # Deduplicate while preserving the input order
        resolved_dirs = []
        seen_dirs = set()
        for candidate_dir in candidate_dirs:
            candidate_path = Path(candidate_dir)
            if not candidate_path.is_dir():
                continue
            candidate_key = str(candidate_path.resolve())
            if candidate_key in seen_dirs:
                continue
            seen_dirs.add(candidate_key)
            resolved_dirs.append(candidate_path)

        find_links_option = []
        if resolved_dirs:
            for local_wheels_dir in resolved_dirs:
                logger.debug(f"[PIP] 发现可用的 wheels 目录: {local_wheels_dir},将优先从本地安装。")
                find_links_option.extend(["--find-links", str(local_wheels_dir)])
        else:
            # No usable wheels directories; the empty option list leaves the command unchanged
            logger.debug("[PIP] 未发现可用的 wheels 目录,将仅使用在线源。")

        base_cmd = [sys.executable, "-m", "pip", "install"] + find_links_option + ["-r", str(requirements_file)]
        strategies = []
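The wheels handling above is dedupe-then-expand: collapse the candidate directories to a stable, order-preserving set, then turn each into a repeated pip `--find-links` option. A sketch of just that step (directory existence checks and path resolution are omitted here):

```python
def build_find_links(candidate_dirs):
    """Order-preserving dedupe, then one --find-links option per directory."""
    resolved = list(dict.fromkeys(candidate_dirs))  # dict preserves insertion order
    option = []
    for d in resolved:
        option.extend(["--find-links", d])
    return option

dirs = ["/plugins/a/wheels", "/plugins/b/wheels", "/plugins/a/wheels"]
print(build_find_links(dirs))
# → ['--find-links', '/plugins/a/wheels', '--find-links', '/plugins/b/wheels']
```

pip accepts `--find-links` multiple times, so each local wheels directory becomes an extra candidate source that is consulted before the online index.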
@@ -569,10 +970,10 @@ class PluginHelper(metaclass=WeakSingleton):
            logger.error(f"{pid} 准备插件内容失败:{message}")
            if backup_dir:
                self.__restore_plugin(pid, backup_dir)
                logger.warn(f"{pid} 插件安装失败,已还原备份插件")
            else:
                self.__remove_old_plugin(pid)
                logger.warn(f"{pid} 已清理对应插件目录,请尝试重新安装")
            return False, message

        dependencies_exist, dep_ok, dep_msg = self.__install_dependencies_if_required(pid)
@@ -580,13 +981,14 @@ class PluginHelper(metaclass=WeakSingleton):
            logger.error(f"{pid} 依赖安装失败:{dep_msg}")
            if backup_dir:
                self.__restore_plugin(pid, backup_dir)
                logger.warn(f"{pid} 插件安装失败,已还原备份插件")
            else:
                self.__remove_old_plugin(pid)
                logger.warn(f"{pid} 已清理对应插件目录,请尝试重新安装")
            return False, dep_msg

        self.install_reg(pid, repo_url)
        self.refresh_persistent_plugin_backup(pid)
        return True, ""

    def __install_from_release(self, pid: str, user_repo: str, release_tag: str) -> Tuple[bool, str]:
@@ -719,7 +1121,8 @@ class PluginHelper(metaclass=WeakSingleton):
                    f.write(dep + "\n")
        try:
            # Install dependencies with the automatic fallback strategy
            wheels_dirs = self.__collect_plugin_wheels_dirs()
            return self.pip_install_with_fallback(requirements_temp_file, wheels_dirs)
        finally:
            # Remove the temporary file
            requirements_temp_file.unlink()

@@ -922,7 +1325,7 @@ class PluginHelper(metaclass=WeakSingleton):

    @cached(maxsize=128, ttl=1800)
    async def async_get_plugins(self, repo_url: str,
                                package_version: Optional[str] = None) -> Optional[Dict[str, dict]]:
        """
        Asynchronously get the latest list of all plugins from Github
        :param repo_url: Github repository URL
@@ -942,15 +1345,11 @@ class PluginHelper(metaclass=WeakSingleton):
                                               headers=settings.REPO_GITHUB_HEADERS(repo=f"{user}/{repo}"))
        if res is None:
            return None
        if res.status_code == 404:
            return {}
        if res.status_code != 200:
            return None
        return self.__parse_plugin_index_response(res.text)

    async def async_get_statistic(self) -> Dict:
        """
@@ -959,7 +1358,7 @@ class PluginHelper(metaclass=WeakSingleton):
        if not settings.PLUGIN_STATISTIC_SHARE:
            return {}
        res = await AsyncRequestUtils(proxies=settings.PROXY, timeout=10).get_res(self._install_statistic)
        if res is not None and res.status_code == 200:
            return res.json()
        return {}

@@ -978,9 +1377,9 @@ class PluginHelper(metaclass=WeakSingleton):
            timeout=5
        ).post(install_reg_url, json={
            "plugin_id": pid,
            "repo_url": self.sanitize_repo_url_for_statistic(repo_url)
        })
        if res is not None and res.status_code == 200:
            return True
        return False

@@ -995,7 +1394,10 @@ class PluginHelper(metaclass=WeakSingleton):
        if items:
            for pid, repo_url in items:
                if pid:
                    payload_plugins.append({
                        "plugin_id": pid,
                        "repo_url": self.sanitize_repo_url_for_statistic(repo_url)
                    })
        else:
            plugins = self.systemconfig.get(SystemConfigKey.UserInstalledPlugins)
            if not plugins:
@@ -1005,7 +1407,7 @@ class PluginHelper(metaclass=WeakSingleton):
                content_type="application/json",
                timeout=5).post(self._install_report,
                                json={"plugins": payload_plugins})
        return bool(res is not None and res.status_code == 200)

    async def __async_get_file_list(self, pid: str, user_repo: str, package_version: Optional[str] = None) -> \
            Tuple[Optional[list], Optional[str]]:
@@ -1237,7 +1639,8 @@ class PluginHelper(metaclass=WeakSingleton):

        try:
            # Install dependencies with the automatic fallback strategy
            wheels_dirs = self.__collect_plugin_wheels_dirs()
            return self.pip_install_with_fallback(Path(requirements_temp_file), wheels_dirs)
        finally:
            # Remove the temporary file
            await requirements_temp_file.unlink()

@@ -1366,6 +1769,9 @@ class PluginHelper(metaclass=WeakSingleton):
        :param force_install: Whether to force-install the plugin; disabled by default. When enabled, no backup/restore is performed
        :return: (success, error message)
        """
        if self.is_local_repo_url(repo_url):
            return await asyncio.to_thread(self.install_local, pid, repo_url, force_install)

        if SystemUtils.is_frozen():
            return False, "可执行文件模式下,只能安装本地插件"

@@ -1453,10 +1859,10 @@ class PluginHelper(metaclass=WeakSingleton):
            logger.error(f"{pid} 准备插件内容失败:{message}")
            if backup_dir:
                await self.__async_restore_plugin(pid, backup_dir)
                logger.warn(f"{pid} 插件安装失败,已还原备份插件")
            else:
                await self.__async_remove_old_plugin(pid)
                logger.warn(f"{pid} 已清理对应插件目录,请尝试重新安装")
            return False, message

        dependencies_exist, dep_ok, dep_msg = await self.__async_install_dependencies_if_required(pid)
@@ -1464,13 +1870,14 @@ class PluginHelper(metaclass=WeakSingleton):
            logger.error(f"{pid} 依赖安装失败:{dep_msg}")
            if backup_dir:
                await self.__async_restore_plugin(pid, backup_dir)
                logger.warn(f"{pid} 插件安装失败,已还原备份插件")
            else:
                await self.__async_remove_old_plugin(pid)
                logger.warn(f"{pid} 已清理对应插件目录,请尝试重新安装")
            return False, dep_msg

        await self.async_install_reg(pid, repo_url)
        await asyncio.to_thread(self.refresh_persistent_plugin_backup, pid)
        return True, ""

    def __prepare_content_via_filelist_sync(self, pid: str, user_repo: str,

@@ -140,7 +140,7 @@ class RedisHelper(ConfigReloadMixin, metaclass=Singleton):
        """
        Get the cache region
        """
        return f"region:{region}" if region else "region:DEFAULT"

    def __make_redis_key(self, region: str, key: str) -> str:
        """
@@ -370,7 +370,7 @@ class AsyncRedisHelper(ConfigReloadMixin, metaclass=Singleton):
        """
        Get the cache region
        """
        return f"region:{region}" if region else "region:DEFAULT"

    def __make_redis_key(self, region: str, key: str) -> str:
        """

@@ -1,4 +1,6 @@
import json
import platform
import sys
from pathlib import Path

from app.core.config import settings
@@ -14,7 +16,7 @@ class ResourceHelper:
    """
    Detect and update resource packages
    """
    # Git repository URL for the resource packages
    _repo = f"{settings.GITHUB_PROXY}https://raw.githubusercontent.com/jxxghp/MoviePilot-Resources/main/package.v2.json"
    _files_api = f"https://api.github.com/repos/jxxghp/MoviePilot-Resources/contents/resources.v2"
    _base_dir: Path = settings.ROOT_PATH
@@ -26,6 +28,35 @@ class ResourceHelper:
    def proxies(self):
        return None if settings.GITHUB_PROXY else settings.PROXY

    @staticmethod
    def _get_python_version_tag() -> str:
        version = sys.version_info
        return f"cp{version.major}{version.minor}"

    @staticmethod
    def _get_machine_tag() -> str:
        machine = platform.machine().lower()
        if machine in {"arm64", "aarch64"}:
            return "aarch64"
        elif machine in {"x86_64", "amd64"}:
            return "x86_64"
        return machine

    @staticmethod
    def _get_needed_files() -> list[str]:
        python_version = ResourceHelper._get_python_version_tag()
        python_ver = python_version.replace("cp", "")
        system = platform.system().lower()
        machine = ResourceHelper._get_machine_tag()
        files = ["user.sites.v2.bin"]
        if system == "linux":
            files.append(f"sites.cpython-{python_ver}-{machine}-linux-gnu.so")
        elif system == "darwin":
            files.append(f"sites.cpython-{python_ver}-darwin.so")
        elif system == "windows":
            files.append(f"sites.cp{python_ver}-win_amd64.pyd")
        return files

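The tag helpers above normalize interpreter and CPU names so the right prebuilt extension module is picked. They can be exercised standalone (a sketch mirroring `_get_python_version_tag` and `_get_machine_tag`; the alias sets are taken from the code above):

```python
import platform
import sys

def python_version_tag() -> str:
    """e.g. cp311 for Python 3.11."""
    v = sys.version_info
    return f"cp{v.major}{v.minor}"

def machine_tag(machine: str) -> str:
    """Collapse platform.machine() aliases to one canonical name."""
    machine = machine.lower()
    if machine in {"arm64", "aarch64"}:
        return "aarch64"
    if machine in {"x86_64", "amd64"}:
        return "x86_64"
    return machine

print(machine_tag("AMD64"))  # → x86_64 (Windows reports AMD64)
print(machine_tag("arm64"))  # → aarch64 (macOS reports arm64)
print(python_version_tag())
```

Normalization matters because `platform.machine()` returns `AMD64` on Windows, `arm64` on Apple Silicon macOS, and `aarch64` on ARM Linux, while the resource filenames use a single spelling per architecture.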
    def check(self):
        """
        Check for updates; if any are found, download and install them
@@ -35,7 +66,9 @@ class ResourceHelper:
        if SystemUtils.is_frozen():
            return None
        logger.info("开始检测资源包版本...")
        res = RequestUtils(
            proxies=self.proxies, headers=settings.GITHUB_HEADERS, timeout=10
        ).get_res(self._repo)
        if res:
            try:
                resource_info = json.loads(res.text)
@@ -71,38 +104,50 @@ class ResourceHelper:
                    need_updates[rname] = target
        if need_updates:
            # Download the file info list
            r = RequestUtils(
                proxies=settings.PROXY,
                headers=settings.GITHUB_HEADERS,
                timeout=30,
            ).get_res(self._files_api)
            if r and not r.ok:
                return None, f"连接仓库失败:{r.status_code} - {r.reason}"
            elif not r:
                return None, "连接仓库失败"
            files_info = r.json()
            # Download the resource files
            needed_files = self._get_needed_files()
            logger.info(f"需要下载的资源文件:{needed_files}")
            success = True
            for item in files_info:
                file_name = item.get("name")
                if file_name not in needed_files:
                    continue
                save_path = need_updates.get(file_name)
                if not save_path:
                    continue
                if item.get("download_url"):
                    logger.info(f"开始更新资源文件:{file_name} ...")
                    download_url = (
                        f"{settings.GITHUB_PROXY}{item.get('download_url')}"
                    )
                    res = RequestUtils(
                        proxies=self.proxies,
                        headers=settings.GITHUB_HEADERS,
                        timeout=180,
                    ).get_res(download_url)
                    if not res:
                        logger.error(f"文件 {file_name} 下载失败!")
                        success = False
                        break
                    elif res.status_code != 200:
                        logger.error(
                            f"下载文件 {file_name} 失败:{res.status_code} - {res.reason}"
                        )
                        success = False
                        break
                    # Create the target folder
                    file_path = self._base_dir / save_path / file_name
                    if not file_path.parent.exists():
                        file_path.parent.mkdir(parents=True, exist_ok=True)
                    # Write the file
                    file_path.write_bytes(res.content)
        if success:
logger.info("资源包更新完成,开始重启服务...")
|
||||
|
||||
app/helper/skill.py | 1175 (new file)
File diff suppressed because it is too large
@@ -108,7 +108,7 @@ class SubscribeHelper(metaclass=WeakSingleton):
            return False, "连接MoviePilot服务器失败"

        # 检查响应状态
        if res and res.status_code == 200:
        if res.status_code == 200:
            # 清除缓存
            if clear_cache:
                self.get_shares.cache_clear()
@@ -126,7 +126,7 @@ class SubscribeHelper(metaclass=WeakSingleton):
        """
        处理返回List的HTTP响应
        """
        if res and res.status_code == 200:
        if res is not None and res.status_code == 200:
            return res.json()
        return []

@@ -202,7 +202,7 @@ class SubscribeHelper(metaclass=WeakSingleton):
        res = RequestUtils(proxies=settings.PROXY, timeout=5, headers={
            "Content-Type": "application/json"
        }).post_res(self._sub_reg, json=sub)
        if res and res.status_code == 200:
        if res is not None and res.status_code == 200:
            return True
        return False

@@ -216,7 +216,7 @@ class SubscribeHelper(metaclass=WeakSingleton):
        res = await AsyncRequestUtils(proxies=settings.PROXY, timeout=5, headers={
            "Content-Type": "application/json"
        }).post_res(self._sub_reg, json=sub)
        if res and res.status_code == 200:
        if res is not None and res.status_code == 200:
            return True
        return False

@@ -267,7 +267,7 @@ class SubscribeHelper(metaclass=WeakSingleton):
                sub.to_dict() for sub in subscribes
            ]
        })
        return True if res else False
        return bool(res is not None and res.status_code == 200)

    def sub_share(self, subscribe_id: int,
                  share_title: str, share_comment: str, share_user: str) -> Tuple[bool, str]:
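The hunks above replace `if res` with `if res is not None`. The distinction matters when the underlying object is a `requests.Response`, whose `__bool__` returns `self.ok`: an HTTP 4xx/5xx response object is falsy even though a response did arrive. A minimal stand-in demonstrating the pitfall (hypothetical `FakeResponse` class mimicking that truthiness, assuming `RequestUtils` ultimately returns a `requests.Response`):

```python
class FakeResponse:
    """Mimics requests.Response truthiness: falsy for status codes >= 400."""

    def __init__(self, status_code: int):
        self.status_code = status_code

    def __bool__(self) -> bool:
        return self.status_code < 400


res = FakeResponse(404)
# Truthiness short-circuits even though a response object exists,
# so "no response" and "error response" become indistinguishable:
truthy_check = bool(res and res.status_code == 200)
# The explicit None check keeps the two cases separate:
explicit_check = res is not None and res.status_code == 200
```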
@@ -1,11 +1,15 @@
import json
import os
import signal
import subprocess
import sys
import threading
import time
from pathlib import Path
from typing import Tuple
from typing import Optional, Tuple

import docker
import psutil

from app.core.config import settings
from app.log import logger
@@ -27,6 +31,8 @@ class SystemHelper(ConfigReloadMixin):
    }

    __system_flag_file = "/var/log/nginx/__moviepilot__"
    __local_backend_runtime_file = settings.TEMP_PATH / "moviepilot.runtime.json"
    __local_restart_log_file = settings.LOG_PATH / "moviepilot.restart.stdout.log"

    def on_config_changed(self):
        logger.update_loggers()
@@ -39,10 +45,74 @@ class SystemHelper(ConfigReloadMixin):
        """
        判断是否可以内部重启
        """
        return (
            Path("/var/run/docker.sock").exists()
            or settings.DOCKER_CLIENT_API != "tcp://127.0.0.1:38379"
        return SystemUtils.is_docker() or SystemHelper._is_local_cli_managed()

    @staticmethod
    def _load_runtime_file(path: Path) -> Optional[dict]:
        if not path.exists():
            return None
        try:
            payload = json.loads(path.read_text(encoding="utf-8"))
        except (OSError, json.JSONDecodeError):
            return None
        return payload if isinstance(payload, dict) else None

    @staticmethod
    def _is_local_cli_managed() -> bool:
        runtime = SystemHelper._load_runtime_file(SystemHelper.__local_backend_runtime_file)
        if not runtime:
            return False

        pid = runtime.get("pid")
        create_time = runtime.get("create_time")
        if not pid:
            return False

        try:
            pid = int(pid)
        except (TypeError, ValueError):
            return False

        if pid != os.getpid():
            return False

        if create_time is None:
            return True

        try:
            current_process = psutil.Process(os.getpid())
            return abs(current_process.create_time() - float(create_time)) <= 2
        except (psutil.Error, TypeError, ValueError):
            return False

    @staticmethod
    def _spawn_local_restart_helper() -> None:
        helper_code = (
            "import os, subprocess, sys, time;"
            "time.sleep(1.0);"
            "cmd=[sys.executable, '-m', 'app.cli', 'restart', '--force', '--stop-timeout', '30', '--start-timeout', '60'];"
            "subprocess.run(cmd, cwd=os.environ.get('MOVIEPILOT_ROOT'), env=os.environ.copy(), check=False)"
        )
        env = os.environ.copy()
        env["MOVIEPILOT_ROOT"] = str(settings.ROOT_PATH)
        env["PYTHONUNBUFFERED"] = "1"

        SystemHelper.__local_restart_log_file.parent.mkdir(parents=True, exist_ok=True)
        with SystemHelper.__local_restart_log_file.open("a", encoding="utf-8") as log_handle:
            kwargs = {
                "cwd": str(settings.ROOT_PATH),
                "stdout": log_handle,
                "stderr": subprocess.STDOUT,
                "stdin": subprocess.DEVNULL,
                "close_fds": True,
                "env": env,
            }
            if os.name == "nt":
                kwargs["creationflags"] = subprocess.CREATE_NEW_PROCESS_GROUP | subprocess.DETACHED_PROCESS
            else:
                kwargs["start_new_session"] = True
            process = subprocess.Popen([sys.executable, "-c", helper_code], **kwargs)
            logger.info(f"已创建本地 CLI 重启任务,辅助进程 PID: {process.pid}")

    @staticmethod
    def _get_container_id() -> str:
@@ -104,7 +174,14 @@ class SystemHelper(ConfigReloadMixin):
        执行Docker重启操作
        """
        if not SystemUtils.is_docker():
            return False, "非Docker环境,无法重启!"
        if not SystemHelper._is_local_cli_managed():
            return False, "当前实例不是由 moviepilot CLI 启动,无法执行内建重启!"
        try:
            SystemHelper._spawn_local_restart_helper()
            return True, ""
        except Exception as err:
            logger.error(f"本地 CLI 重启失败: {str(err)}")
            return False, f"本地 CLI 重启失败:{str(err)}"

        try:
            # 检查容器是否配置了自动重启策略
app/helper/voice.py | 197 (new file)
@@ -0,0 +1,197 @@
"""语音能力辅助功能。"""

from abc import ABC, abstractmethod
from io import BytesIO
from pathlib import Path
from typing import Dict, Optional
from uuid import uuid4

from app.core.config import settings
from app.log import logger


class VoiceProvider(ABC):
    """语音 provider 抽象层。"""

    MAX_TRANSCRIBE_BYTES = 25 * 1024 * 1024

    @property
    @abstractmethod
    def name(self) -> str:
        """provider 名称。"""

    @abstractmethod
    def is_available_for_stt(self) -> bool:
        """是否可用于语音识别。"""

    @abstractmethod
    def is_available_for_tts(self) -> bool:
        """是否可用于语音合成。"""

    @abstractmethod
    def transcribe_bytes(self, content: bytes, filename: str = "input.ogg") -> Optional[str]:
        """将音频字节转成文字。"""

    @abstractmethod
    def synthesize_speech(self, text: str) -> Optional[Path]:
        """将文字转成语音文件。"""


class OpenAIVoiceProvider(VoiceProvider):
    """OpenAI / OpenAI-compatible provider。"""

    @property
    def name(self) -> str:
        return "openai"

    @staticmethod
    def _resolve_credentials(mode: str) -> tuple[Optional[str], Optional[str]]:
        mode = mode.lower()
        provider = (
            settings.AI_VOICE_STT_PROVIDER
            if mode == "stt"
            else settings.AI_VOICE_TTS_PROVIDER
        ) or settings.AI_VOICE_PROVIDER
        provider = (provider or "").strip().lower()

        api_key = (
            settings.AI_VOICE_STT_API_KEY
            if mode == "stt"
            else settings.AI_VOICE_TTS_API_KEY
        ) or settings.AI_VOICE_API_KEY
        base_url = (
            settings.AI_VOICE_STT_BASE_URL
            if mode == "stt"
            else settings.AI_VOICE_TTS_BASE_URL
        ) or settings.AI_VOICE_BASE_URL

        if (
            not api_key
            and provider == "openai"
            and (settings.LLM_PROVIDER or "").strip().lower() == "openai"
        ):
            api_key = settings.LLM_API_KEY
            base_url = base_url or settings.LLM_BASE_URL

        return api_key, base_url

    def _get_client(self, mode: str):
        from openai import OpenAI

        api_key, base_url = self._resolve_credentials(mode)
        if not api_key:
            raise ValueError(f"{mode.upper()} provider 未配置 API Key")
        return OpenAI(api_key=api_key, base_url=base_url, max_retries=3)

    def is_available_for_stt(self) -> bool:
        api_key, _ = self._resolve_credentials("stt")
        return bool(api_key)

    def is_available_for_tts(self) -> bool:
        api_key, _ = self._resolve_credentials("tts")
        return bool(api_key)

    def transcribe_bytes(self, content: bytes, filename: str = "input.ogg") -> Optional[str]:
        if not content:
            return None
        if len(content) > self.MAX_TRANSCRIBE_BYTES:
            raise ValueError("语音文件超过 25MB,无法识别")

        try:
            client = self._get_client("stt")
            audio_file = BytesIO(content)
            audio_file.name = filename
            response = client.audio.transcriptions.create(
                model=settings.AI_VOICE_STT_MODEL,
                file=audio_file,
                language=settings.AI_VOICE_LANGUAGE or "zh",
                response_format="verbose_json",
            )
            text = getattr(response, "text", None)
            return text.strip() if text else None
        except Exception as err:
            logger.error(f"语音转文字失败: provider={self.name}, error={err}")
            return None

    def synthesize_speech(self, text: str) -> Optional[Path]:
        if not text:
            return None

        try:
            client = self._get_client("tts")
            voice_dir = settings.TEMP_PATH / "voice"
            voice_dir.mkdir(parents=True, exist_ok=True)
            output_path = voice_dir / f"{uuid4().hex}.opus"
            response = client.audio.speech.create(
                model=settings.AI_VOICE_TTS_MODEL,
                voice=settings.AI_VOICE_TTS_VOICE,
                input=text,
                response_format="opus",
            )
            response.write_to_file(output_path)
            return output_path
        except Exception as err:
            logger.error(f"文字转语音失败: provider={self.name}, error={err}")
            return None


class VoiceHelper:
    """统一语音入口,负责按 STT/TTS provider 路由。"""

    _providers: Dict[str, VoiceProvider] = {
        "openai": OpenAIVoiceProvider(),
    }

    @classmethod
    def register_provider(cls, provider: VoiceProvider) -> None:
        cls._providers[provider.name.lower()] = provider

    @staticmethod
    def _resolve_provider_name(mode: str) -> str:
        mode = mode.lower()
        provider = (
            settings.AI_VOICE_STT_PROVIDER
            if mode == "stt"
            else settings.AI_VOICE_TTS_PROVIDER
        ) or settings.AI_VOICE_PROVIDER
        return (provider or "openai").strip().lower()

    @classmethod
    def get_provider(cls, mode: str) -> Optional[VoiceProvider]:
        provider_name = cls._resolve_provider_name(mode)
        provider = cls._providers.get(provider_name)
        if provider:
            return provider
        logger.warning(f"未注册语音 provider: mode={mode}, provider={provider_name}")
        return None

    @classmethod
    def get_registered_providers(cls) -> list[str]:
        return sorted(cls._providers.keys())

    @classmethod
    def is_available(cls, mode: Optional[str] = None) -> bool:
        if mode:
            provider = cls.get_provider(mode)
            if not provider:
                return False
            return (
                provider.is_available_for_stt()
                if mode.lower() == "stt"
                else provider.is_available_for_tts()
            )
        return cls.is_available("stt") or cls.is_available("tts")

    @classmethod
    def transcribe_bytes(cls, content: bytes, filename: str = "input.ogg") -> Optional[str]:
        provider = cls.get_provider("stt")
        if not provider:
            return None
        return provider.transcribe_bytes(content=content, filename=filename)

    @classmethod
    def synthesize_speech(cls, text: str) -> Optional[Path]:
        provider = cls.get_provider("tts")
        if not provider:
            return None
        return provider.synthesize_speech(text=text)
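Both `_resolve_credentials` and `_resolve_provider_name` above apply the same fallback chain: mode-specific setting, then generic setting, then a default. The chain can be exercised standalone (hypothetical `resolve_name` helper taking plain arguments instead of reading `settings`):

```python
from typing import Optional


def resolve_name(mode_specific: Optional[str],
                 generic: Optional[str],
                 default: str = "openai") -> str:
    # None and "" both fall through to the next level via `or`
    provider = (mode_specific or generic) or default
    return provider.strip().lower()
```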
app/log.py | 12
@@ -1,5 +1,6 @@
import asyncio
import logging
import os
import queue
import sys
import threading
@@ -407,11 +408,12 @@ class LoggerManager:
        for handler in _logger.handlers:
            _logger.removeHandler(handler)

        # 只设置终端日志(文件日志由 NonBlockingFileHandler 处理)
        console_handler = logging.StreamHandler()
        console_formatter = CustomFormatter(log_settings.LOG_CONSOLE_FORMAT)
        console_handler.setFormatter(console_formatter)
        _logger.addHandler(console_handler)
        # 本地 CLI 已经有独立的 stdio 滚动日志时,不再把业务日志重复打一份到控制台。
        if os.getenv("MOVIEPILOT_DISABLE_CONSOLE_LOG") != "1":
            console_handler = logging.StreamHandler()
            console_formatter = CustomFormatter(log_settings.LOG_CONSOLE_FORMAT)
            console_handler.setFormatter(console_formatter)
            _logger.addHandler(console_handler)

        # 禁止向父级log传递
        _logger.propagate = False
app/main.py | 19
@@ -4,19 +4,32 @@ import setproctitle
import signal
import sys
import threading
from pathlib import Path

import uvicorn as uvicorn
from PIL import Image
from uvicorn import Config

from app.factory import app
from app.utils.stdio import configure_rotating_stdio
from app.utils.system import SystemUtils

# 禁用输出
if SystemUtils.is_frozen():
stdio_log_file = os.getenv("MOVIEPILOT_STDIO_LOG_FILE")
if stdio_log_file:
    # 本地 CLI 会把 stdout/stderr 切到滚动日志,避免无限追加单独的大文件。
    configure_rotating_stdio(
        log_file=Path(stdio_log_file),
        max_bytes=max(int(os.getenv("MOVIEPILOT_STDIO_LOG_MAX_BYTES", "0") or 0), 1),
        backup_count=max(
            int(os.getenv("MOVIEPILOT_STDIO_LOG_BACKUP_COUNT", "0") or 0),
            0,
        ),
    )
elif SystemUtils.is_frozen():
    sys.stdout = open(os.devnull, 'w')
    sys.stderr = open(os.devnull, 'w')

from app.factory import app
from app.core.config import settings
from app.db.init import init_db, update_db

@@ -95,4 +108,4 @@ if __name__ == '__main__':
    # 更新数据库
    update_db()
    # 启动API服务
    Server.run()
    Server.run()

@@ -39,7 +39,7 @@ class BangumiApi(object):
        params.update(kwargs)
        resp = self._req.get_res(url=req_url, params=params)
        try:
            if not resp:
            if resp is None or resp.status_code != 200:
                return None
            result = resp.json()
            return result.get(key) if key else result
@@ -55,7 +55,7 @@ class BangumiApi(object):
        params.update(kwargs)
        resp = await self._async_req.get_res(url=req_url, params=params)
        try:
            if not resp:
            if resp is None or resp.status_code != 200:
                return None
            result = resp.json()
            return result.get(key) if key else result
@@ -1,4 +1,5 @@
import json
from urllib.parse import quote, unquote
from typing import Optional, Union, List, Tuple, Any

from app.core.context import MediaInfo, Context
@@ -6,6 +7,7 @@ from app.log import logger
from app.modules import _ModuleBase, _MessageBase
from app.schemas import MessageChannel, CommingMessage, Notification, MessageResponse
from app.schemas.types import ModuleType
from app.utils.http import RequestUtils

try:
    from app.modules.discord.discord import Discord
@@ -15,6 +17,31 @@ except Exception as err:  # ImportError or other load issues


class DiscordModule(_ModuleBase, _MessageBase[Discord]):
    _IMAGE_SUFFIXES = (
        ".png",
        ".jpg",
        ".jpeg",
        ".gif",
        ".webp",
        ".bmp",
        ".tiff",
        ".svg",
    )
    _AUDIO_SUFFIXES = (
        ".mp3",
        ".m4a",
        ".wav",
        ".ogg",
        ".oga",
        ".opus",
        ".aac",
        ".amr",
        ".flac",
        ".mpga",
        ".mpeg",
        ".webm",
    )

    def init_module(self) -> None:
        """
        初始化模块
@@ -131,10 +158,14 @@ class DiscordModule(_ModuleBase, _MessageBase[Discord]):
        text = msg_json.get("text")
        chat_id = msg_json.get("chat_id")
        images = self._extract_images(msg_json)
        if (text or images) and userid:
        audio_refs = self._extract_audio_refs(msg_json)
        files = self._extract_files(msg_json)
        if (text or images or audio_refs or files) and userid:
            logger.info(
                f"收到来自 {client_config.name} 的 Discord 消息:"
                f"userid={userid}, username={username}, text={text}, images={len(images) if images else 0}"
                f"userid={userid}, username={username}, text={text}, "
                f"images={len(images) if images else 0}, audios={len(audio_refs) if audio_refs else 0}, "
                f"files={len(files) if files else 0}"
            )
            return CommingMessage(
                channel=MessageChannel.Discord,
@@ -144,11 +175,15 @@ class DiscordModule(_ModuleBase, _MessageBase[Discord]):
                text=text,
                chat_id=str(chat_id) if chat_id else None,
                images=images,
                audio_refs=audio_refs,
                files=files,
            )
        return None

    @staticmethod
    def _extract_images(msg_json: dict) -> Optional[List[str]]:
    def _extract_images(
        msg_json: dict,
    ) -> Optional[List[CommingMessage.MessageImage]]:
        """
        从Discord消息中提取图片URL
        """
@@ -157,12 +192,97 @@ class DiscordModule(_ModuleBase, _MessageBase[Discord]):
            return None
        images = []
        for attachment in attachments:
            if attachment.get("type") == "image":
                url = attachment.get("url")
                if url:
                    images.append(url)
            url = attachment.get("url") or attachment.get("proxy_url")
            if not url:
                continue
            content_type = (attachment.get("content_type") or "").lower()
            filename = (attachment.get("filename") or "").lower()
            if (
                attachment.get("type") == "image"
                or content_type.startswith("image/")
                or filename.endswith(DiscordModule._IMAGE_SUFFIXES)
            ):
                images.append(
                    CommingMessage.MessageImage(
                        ref=url,
                        name=attachment.get("filename"),
                        mime_type=attachment.get("content_type"),
                        size=attachment.get("size"),
                    )
                )
        return images if images else None

    @classmethod
    def _extract_audio_refs(cls, msg_json: dict) -> Optional[List[str]]:
        """
        从Discord消息中提取音频URL
        """
        attachments = msg_json.get("attachments", [])
        if not attachments:
            return None
        audio_refs = []
        for attachment in attachments:
            url = attachment.get("url") or attachment.get("proxy_url")
            if not url:
                continue
            content_type = (attachment.get("content_type") or "").lower()
            filename = (attachment.get("filename") or "").lower()
            if content_type.startswith("audio/") or filename.endswith(cls._AUDIO_SUFFIXES):
                audio_refs.append(f"discord://file/{quote(url, safe='')}")
        return audio_refs if audio_refs else None

    @classmethod
    def _extract_files(
        cls, msg_json: dict
    ) -> Optional[List[CommingMessage.MessageAttachment]]:
        """
        从 Discord 消息中提取非图片/非音频文件。
        """
        attachments = msg_json.get("attachments", [])
        if not attachments:
            return None

        files = []
        for attachment in attachments:
            url = attachment.get("url") or attachment.get("proxy_url")
            if not url:
                continue
            content_type = (attachment.get("content_type") or "").lower()
            filename = (attachment.get("filename") or "").lower()
            is_image = (
                attachment.get("type") == "image"
                or content_type.startswith("image/")
                or filename.endswith(cls._IMAGE_SUFFIXES)
            )
            is_audio = content_type.startswith("audio/") or filename.endswith(
                cls._AUDIO_SUFFIXES
            )
            if is_image or is_audio:
                continue
            files.append(
                CommingMessage.MessageAttachment(
                    ref=f"discord://file/{quote(url, safe='')}",
                    name=attachment.get("filename"),
                    mime_type=attachment.get("content_type"),
                    size=attachment.get("size"),
                )
            )
        return files or None
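The three extractors above share one routing rule: classify each attachment by `content_type` first, falling back to the filename suffix, and let "file" be the catch-all. A standalone sketch of that rule (hypothetical `classify` helper with trimmed suffix lists; the real methods operate on Discord attachment dicts and build schema objects):

```python
IMAGE_SUFFIXES = (".png", ".jpg", ".jpeg", ".gif", ".webp")
AUDIO_SUFFIXES = (".mp3", ".wav", ".ogg", ".opus", ".flac")


def classify(filename: str, content_type: str = "") -> str:
    # MIME type wins when present; suffix is the fallback for
    # attachments Discord reports without a content_type.
    content_type = (content_type or "").lower()
    filename = (filename or "").lower()
    if content_type.startswith("image/") or filename.endswith(IMAGE_SUFFIXES):
        return "image"
    if content_type.startswith("audio/") or filename.endswith(AUDIO_SUFFIXES):
        return "audio"
    return "file"
```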
|
||||
def download_discord_file_bytes(self, file_ref: str, source: str) -> Optional[bytes]:
|
||||
"""
|
||||
下载Discord附件并返回原始字节
|
||||
"""
|
||||
if not file_ref or not file_ref.startswith("discord://file/"):
|
||||
return None
|
||||
if not self.get_config(source):
|
||||
return None
|
||||
file_url = unquote(file_ref.replace("discord://file/", "", 1))
|
||||
resp = RequestUtils(timeout=30).get_res(file_url)
|
||||
if resp and resp.content:
|
||||
return resp.content
|
||||
return None
|
||||
|
||||
def post_message(self, message: Notification, **kwargs) -> None:
|
||||
"""
|
||||
发送通知消息
|
||||
@@ -208,19 +328,29 @@ class DiscordModule(_ModuleBase, _MessageBase[Discord]):
|
||||
)
|
||||
if client:
|
||||
logger.debug(
|
||||
f"[Discord] 调用 client.send_msg, userid={userid}, title={message.title[:50] if message.title else None}..."
|
||||
)
|
||||
result = client.send_msg(
|
||||
title=message.title,
|
||||
text=message.text,
|
||||
image=message.image,
|
||||
userid=userid,
|
||||
link=message.link,
|
||||
buttons=message.buttons,
|
||||
original_message_id=message.original_message_id,
|
||||
original_chat_id=message.original_chat_id,
|
||||
mtype=message.mtype,
|
||||
f"[Discord] 调用 client 发送, userid={userid}, title={message.title[:50] if message.title else None}..."
|
||||
)
|
||||
if message.file_path:
|
||||
result = client.send_file(
|
||||
file_path=message.file_path,
|
||||
file_name=message.file_name,
|
||||
title=message.title,
|
||||
text=message.text,
|
||||
userid=userid,
|
||||
original_chat_id=message.original_chat_id,
|
||||
)
|
||||
else:
|
||||
result = client.send_msg(
|
||||
title=message.title,
|
||||
text=message.text,
|
||||
image=message.image,
|
||||
userid=userid,
|
||||
link=message.link,
|
||||
buttons=message.buttons,
|
||||
original_message_id=message.original_message_id,
|
||||
original_chat_id=message.original_chat_id,
|
||||
mtype=message.mtype,
|
||||
)
|
||||
logger.debug(f"[Discord] send_msg 返回结果: {result}")
|
||||
else:
|
||||
logger.warning(
|
||||
@@ -309,6 +439,7 @@ class DiscordModule(_ModuleBase, _MessageBase[Discord]):
|
||||
chat_id: Union[str, int],
|
||||
text: str,
|
||||
title: Optional[str] = None,
|
||||
buttons: Optional[List[List[dict]]] = None,
|
||||
) -> bool:
|
||||
"""
|
||||
编辑消息
|
||||
@@ -318,6 +449,7 @@ class DiscordModule(_ModuleBase, _MessageBase[Discord]):
|
||||
:param chat_id: 聊天ID
|
||||
:param text: 新的消息内容
|
||||
:param title: 消息标题
|
||||
:param buttons: 新的按钮列表
|
||||
:return: 编辑是否成功
|
||||
"""
|
||||
if channel != self._channel:
|
||||
@@ -330,6 +462,7 @@ class DiscordModule(_ModuleBase, _MessageBase[Discord]):
|
||||
result = client.send_msg(
|
||||
title=title or "",
|
||||
text=text,
|
||||
buttons=buttons,
|
||||
original_message_id=message_id,
|
||||
original_chat_id=str(chat_id),
|
||||
)
|
||||
@@ -357,21 +490,37 @@ class DiscordModule(_ModuleBase, _MessageBase[Discord]):
|
||||
return None
|
||||
client: Discord = self.get_instance(conf.name)
|
||||
if client:
|
||||
result = client.send_msg(
|
||||
title=message.title or "",
|
||||
text=message.text,
|
||||
userid=userid,
|
||||
)
|
||||
if message.file_path:
|
||||
result = client.send_file(
|
||||
file_path=message.file_path,
|
||||
file_name=message.file_name,
|
||||
title=message.title,
|
||||
text=message.text,
|
||||
userid=userid,
|
||||
)
|
||||
else:
|
||||
result = client.send_msg(
|
||||
title=message.title or "",
|
||||
text=message.text,
|
||||
userid=userid,
|
||||
)
|
||||
if result:
|
||||
success, message_id = (
|
||||
success, response_data = (
|
||||
(result[0], result[1])
|
||||
if isinstance(result, tuple)
|
||||
else (result, None)
|
||||
)
|
||||
if success:
|
||||
message_id = None
|
||||
chat_id = None
|
||||
if isinstance(response_data, dict):
|
||||
message_id = response_data.get("message_id")
|
||||
chat_id = response_data.get("chat_id")
|
||||
elif response_data is not None:
|
||||
message_id = str(response_data)
|
||||
return MessageResponse(
|
||||
message_id=str(message_id) if message_id else None,
|
||||
chat_id=None,
|
||||
chat_id=str(chat_id) if chat_id else None,
|
||||
channel=MessageChannel.Discord,
|
||||
source=conf.name,
|
||||
success=True,
|
||||
|
||||
@@ -1,6 +1,7 @@
|
||||
import asyncio
|
||||
import re
|
||||
import threading
|
||||
from pathlib import Path
|
||||
from typing import Optional, List, Dict, Any, Tuple, Union
|
||||
from urllib.parse import quote
|
||||
|
||||
@@ -126,6 +127,20 @@ class Discord:
|
||||
if isinstance(message.channel, discord.DMChannel)
|
||||
else "guild",
|
||||
}
|
||||
if message.attachments:
|
||||
payload["attachments"] = [
|
||||
{
|
||||
"id": str(attachment.id),
|
||||
"filename": attachment.filename,
|
||||
"content_type": attachment.content_type,
|
||||
"url": attachment.url,
|
||||
"proxy_url": attachment.proxy_url,
|
||||
"size": attachment.size,
|
||||
"height": attachment.height,
|
||||
"width": attachment.width,
|
||||
}
|
||||
for attachment in message.attachments
|
||||
]
|
||||
await self._post_to_ds(payload)
|
||||
|
||||
@self._client.event
|
||||
@@ -259,6 +274,37 @@ class Discord:
|
||||
logger.error(f"发送 Discord 消息失败:{err}")
|
||||
return False
|
||||
|
||||
def send_file(
|
||||
self,
|
||||
file_path: str,
|
||||
title: Optional[str] = None,
|
||||
text: Optional[str] = None,
|
||||
userid: Optional[str] = None,
|
||||
file_name: Optional[str] = None,
|
||||
original_chat_id: Optional[str] = None,
|
||||
) -> Optional[bool]:
|
||||
if not self.get_state():
|
||||
return False
|
||||
if not file_path:
|
||||
return False
|
||||
|
||||
try:
|
||||
future = asyncio.run_coroutine_threadsafe(
|
||||
self._send_file(
|
||||
file_path=file_path,
|
||||
title=title,
|
||||
text=text,
|
||||
userid=userid,
|
||||
file_name=file_name,
|
||||
original_chat_id=original_chat_id,
|
||||
),
|
||||
self._loop,
|
||||
)
|
||||
return future.result(timeout=30)
|
||||
except Exception as err:
|
||||
logger.error(f"发送 Discord 文件失败:{err}")
|
||||
return False
|
||||
|
||||
def send_medias_msg(
|
||||
self,
|
||||
medias: List[MediaInfo],
|
||||
@@ -346,7 +392,7 @@ class Discord:
|
||||
original_message_id: Optional[Union[int, str]],
|
||||
original_chat_id: Optional[str],
|
||||
mtype: Optional["NotificationType"] = None,
|
||||
) -> Tuple[bool, Optional[int]]:
|
||||
) -> Tuple[bool, Optional[Dict[str, str]]]:
|
||||
logger.debug(
|
||||
f"[Discord] _send_message: userid={userid}, original_chat_id={original_chat_id}"
|
||||
)
|
||||
@@ -373,17 +419,73 @@ class Discord:
|
||||
embed=embed,
|
||||
view=view,
|
||||
)
|
||||
return success, int(original_message_id) if original_message_id else None
|
||||
return (
|
||||
success,
|
||||
{
|
||||
"message_id": str(original_message_id),
|
||||
"chat_id": str(original_chat_id),
|
||||
}
|
||||
if success and original_message_id and original_chat_id
|
||||
else None,
|
||||
)
|
||||
|
||||
logger.debug(f"[Discord] 发送新消息到频道: {channel}")
|
||||
try:
|
||||
sent_message = await channel.send(content=content, embed=embed, view=view)
|
||||
logger.debug("[Discord] 消息发送成功")
|
||||
return True, sent_message.id if sent_message else None
|
||||
return (
|
||||
True,
|
||||
{
|
||||
"message_id": str(sent_message.id),
|
||||
"chat_id": str(channel.id),
|
||||
}
|
||||
if sent_message and getattr(channel, "id", None) is not None
|
||||
else None,
|
||||
)
|
||||
except Exception as e:
|
||||
logger.error(f"[Discord] 发送消息到频道失败: {e}")
|
||||
return False, None
|
||||
|
||||
async def _send_file(
|
||||
self,
|
||||
file_path: str,
|
||||
title: Optional[str],
|
||||
text: Optional[str],
|
||||
userid: Optional[str],
|
||||
file_name: Optional[str],
|
||||
original_chat_id: Optional[str],
|
||||
) -> Tuple[bool, Optional[Dict[str, str]]]:
|
||||
channel = await self._resolve_channel(userid=userid, chat_id=original_chat_id)
|
||||
if not channel:
|
||||
logger.error("未找到可用的 Discord 频道或私聊")
|
||||
return False, None
|
||||
|
||||
local_file = Path(file_path)
|
||||
if not local_file.exists() or not local_file.is_file():
|
||||
logger.error(f"Discord发送文件失败,文件不存在: {local_file}")
|
||||
return False, None
|
||||
|
||||
content_parts = [part for part in [title, text] if part]
|
||||
content = "\n".join(content_parts) if content_parts else None
|
||||
if content and len(content) > 1900:
|
||||
content = content[:1900] + "..."
|
||||
|
||||
try:
|
||||
discord_file = discord.File(
|
||||
str(local_file), filename=file_name or local_file.name
|
||||
)
|
||||
sent_message = await channel.send(content=content, file=discord_file)
|
||||
return (
|
||||
True,
|
||||
{
|
||||
"message_id": str(sent_message.id),
|
||||
"chat_id": str(channel.id),
|
||||
},
|
||||
)
|
||||
except Exception as err:
|
||||
logger.error(f"Discord发送文件失败: {err}")
|
||||
return False, None
|
||||
|
||||
async def _send_list_message(
|
||||
self,
|
||||
embeds: List[discord.Embed],
|
||||
|
||||
@@ -1,7 +1,9 @@
import json
import subprocess
import threading
import time
from pathlib import Path
from typing import Optional, List
from typing import Optional, List, Union

from app import schemas
from app.core.config import settings
@@ -11,6 +13,9 @@ from app.schemas.types import StorageSchema
from app.utils.string import StringUtils
from app.utils.system import SystemUtils

_folder_locks: dict[str, threading.Lock] = {}
_folder_locks_guard = threading.Lock()


class Rclone(StorageBase):
    """
@@ -120,6 +125,43 @@ class Rclone(StorageBase):
            modify_time=StringUtils.str_to_timestamp(item.get("ModTime"))
        )

    @staticmethod
    def __normalize_remote_path(path: Union[Path, str]) -> str:
        """
        规范化远端路径,统一目录锁键值。
        """
        path_str = Path(str(path or "/")).as_posix()
        if not path_str.startswith("/"):
            path_str = f"/{path_str}"
        if path_str != "/":
            path_str = path_str.rstrip("/")
        return path_str or "/"

    @staticmethod
    def __get_path_lock(path: Union[Path, str]) -> threading.Lock:
        """
        获取指定远端路径的模块级锁。
        """
        normalized = Rclone.__normalize_remote_path(path)
        with _folder_locks_guard:
            if normalized not in _folder_locks:
                _folder_locks[normalized] = threading.Lock()
            return _folder_locks[normalized]

    def __wait_for_item(
        self, path: Path, retries: int = 3, delay: float = 0.2
    ) -> Optional[schemas.FileItem]:
        """
        等待目录或文件在远端可见,兼容云盘最终一致性延迟。
        """
        for attempt in range(retries):
            item = self.get_item(path)
            if item:
                return item
            if attempt < retries - 1:
                time.sleep(delay)
        return None
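The helpers above combine path normalization with a guarded per-path lock registry. They can be exercised standalone; this sketch reproduces the same pattern with shortened, illustrative names (not the actual class methods):

```python
import threading
from pathlib import Path
from typing import Union

_locks: dict[str, threading.Lock] = {}   # one lock per normalized remote path
_locks_guard = threading.Lock()          # protects the registry itself

def normalize_remote_path(path: Union[Path, str]) -> str:
    """Normalize a remote path so equivalent spellings share one lock key."""
    path_str = Path(str(path or "/")).as_posix()
    if not path_str.startswith("/"):
        path_str = f"/{path_str}"
    if path_str != "/":
        path_str = path_str.rstrip("/")
    return path_str or "/"

def get_path_lock(path: Union[Path, str]) -> threading.Lock:
    """Return the module-level lock for a remote path, creating it on first use."""
    key = normalize_remote_path(path)
    with _locks_guard:
        if key not in _locks:
            _locks[key] = threading.Lock()
        return _locks[key]
```

Because the key is normalized first, `movies`, `/movies` and `/movies/` all resolve to the same lock object.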

    def check(self) -> bool:
        """
        检查存储是否可用
@@ -163,50 +205,53 @@ class Rclone(StorageBase):
        :param fileitem: 父目录
        :param name: 目录名
        """
        path = Path(self.__normalize_remote_path(Path(fileitem.path) / name))
        try:
            retcode = subprocess.run(
                [
                    'rclone', 'mkdir',
                    f'MP:{Path(fileitem.path) / name}'
                    f'MP:{path}'
                ],
                startupinfo=self.__get_hidden_shell()
            ).returncode
            if retcode == 0:
                return self.get_item(Path(fileitem.path) / name)
                folder = self.__wait_for_item(path)
                if folder:
                    return folder
                logger.warn(f"【rclone】目录 {path} 创建成功后暂未可见")
                return None
            folder = self.__wait_for_item(path, retries=2)
            if folder:
                logger.info(f"【rclone】目录 {path} 已存在,忽略重复创建")
                return folder
        except Exception as err:
            logger.error(f"【rclone】创建目录失败:{err}")
            folder = self.__wait_for_item(path, retries=2)
            if folder:
                logger.info(f"【rclone】目录 {path} 已存在,忽略创建异常")
                return folder
        return None

    def get_folder(self, path: Path) -> Optional[schemas.FileItem]:
        """
        根据文件路程获取目录,不存在则创建
        """

        def __find_dir(_fileitem: schemas.FileItem, _name: str) -> Optional[schemas.FileItem]:
            """
            查找下级目录中匹配名称的目录
            """
            for sub_folder in self.list(_fileitem):
                if sub_folder.type != "dir":
                    continue
                if sub_folder.name == _name:
                    return sub_folder
            return None

        normalized = Path(self.__normalize_remote_path(path))

        # 是否已存在
        folder = self.get_item(path)
        folder = self.get_item(normalized)
        if folder:
            return folder
        # 逐级查找和创建目录
        fileitem = schemas.FileItem(storage=self.schema.value, path="/")
        for part in path.parts[1:]:
            dir_file = __find_dir(fileitem, part)
            if dir_file:
                fileitem = dir_file
            else:
                dir_file = self.create_folder(fileitem, part)
        fileitem = schemas.FileItem(storage=self.schema.value, type="dir", path="/")
        for part in normalized.parts[1:]:
            current_path = Path(self.__normalize_remote_path(Path(fileitem.path) / part))
            with self.__get_path_lock(current_path):
                dir_file = self.get_item(current_path)
                if not dir_file:
                    logger.warn(f"【rclone】创建目录 {fileitem.path}{part} 失败!")
                    dir_file = self.create_folder(fileitem, part)
                    if not dir_file:
                        logger.warn(f"【rclone】创建目录 {current_path} 失败!")
                        return None
            fileitem = dir_file
        return fileitem
@@ -13,8 +13,15 @@ from app.helper.directory import DirectoryHelper
from app.helper.message import TemplateHelper
from app.log import logger
from app.modules.filemanager.storages import StorageBase
from app.schemas import TransferInfo, TmdbEpisode, TransferDirectoryConf, FileItem, TransferInterceptEventData, \
    TransferRenameEventData
from app.schemas import (
    TransferInfo,
    TmdbEpisode,
    TransferDirectoryConf,
    FileItem,
    TransferInterceptEventData,
    TransferOverwriteCheckEventData,
    TransferRenameEventData,
)
from app.schemas.types import MediaType, ChainEventType
from app.utils.system import SystemUtils

@@ -51,26 +58,27 @@ class TransHandler:
            elif isinstance(current_value, bool):
                current_value = value
            elif isinstance(current_value, int):
                current_value += (value or 0)
                current_value += value or 0
            else:
                current_value = value
            setattr(result, key, current_value)

    def transfer_media(self,
                       fileitem: FileItem,
                       in_meta: MetaBase,
                       mediainfo: MediaInfo,
                       target_storage: str,
                       target_path: Path,
                       transfer_type: str,
                       source_oper: StorageBase,
                       target_oper: StorageBase,
                       need_scrape: Optional[bool] = False,
                       need_rename: Optional[bool] = True,
                       need_notify: Optional[bool] = True,
                       overwrite_mode: Optional[str] = None,
                       episodes_info: List[TmdbEpisode] = None
                       ) -> TransferInfo:
    def transfer_media(
        self,
        fileitem: FileItem,
        in_meta: MetaBase,
        mediainfo: MediaInfo,
        target_storage: str,
        target_path: Path,
        transfer_type: str,
        source_oper: StorageBase,
        target_oper: StorageBase,
        need_scrape: Optional[bool] = False,
        need_rename: Optional[bool] = True,
        need_notify: Optional[bool] = True,
        overwrite_mode: Optional[str] = None,
        episodes_info: List[TmdbEpisode] = None,
    ) -> TransferInfo:
        """
        识别并整理一个文件或者一个目录下的所有文件
        :param fileitem: 整理的文件对象,可能是一个文件也可以是一个目录
@@ -109,7 +117,9 @@ class TransHandler:
            """
            if not _fileitem.extension:
                return False
            if f".{_fileitem.extension.lower()}" in (settings.RMT_SUBEXT + settings.RMT_AUDIOEXT):
            if f".{_fileitem.extension.lower()}" in (
                settings.RMT_SUBEXT + settings.RMT_AUDIOEXT
            ):
                return True
            return False

@@ -117,7 +127,6 @@ class TransHandler:
        result = TransferInfo()

        try:

            # 重命名格式
            rename_format = settings.RENAME_FORMAT(mediainfo.type)

@@ -128,9 +137,11 @@ class TransHandler:
                new_path = self.get_rename_path(
                    path=target_path,
                    template_string=rename_format,
                    rename_dict=self.get_naming_dict(meta=in_meta,
                                                     mediainfo=mediainfo),
                    source_path=fileitem.path
                    rename_dict=self.get_naming_dict(
                        meta=in_meta, mediainfo=mediainfo
                    ),
                    source_path=fileitem.path,
                    source_item=fileitem,
                )
                new_path = DirectoryHelper.get_media_root_path(
                    rename_format, rename_path=new_path
@@ -149,40 +160,46 @@ class TransHandler:
                    new_path = target_path / fileitem.name
                # 原盘大小只计算STREAM目录内的文件大小
                if stream_fileitem := source_oper.get_item(
                        Path(fileitem.path) / "BDMV" / "STREAM"
                    Path(fileitem.path) / "BDMV" / "STREAM"
                ):
                    fileitem.size = sum(
                        file.size for file in source_oper.list(stream_fileitem) or []
                    )
                # 整理目录
                new_diritem, errmsg = self.__transfer_dir(fileitem=fileitem,
                                                          mediainfo=mediainfo,
                                                          source_oper=source_oper,
                                                          target_oper=target_oper,
                                                          target_storage=target_storage,
                                                          target_path=new_path,
                                                          transfer_type=transfer_type,
                                                          result=result)
                new_diritem, errmsg = self.__transfer_dir(
                    fileitem=fileitem,
                    mediainfo=mediainfo,
                    source_oper=source_oper,
                    target_oper=target_oper,
                    target_storage=target_storage,
                    target_path=new_path,
                    transfer_type=transfer_type,
                    result=result,
                )
                if not new_diritem:
                    logger.error(f"文件夹 {fileitem.path} 整理失败:{errmsg}")
                    self.__update_result(result=result,
                                         success=False,
                                         message=errmsg,
                                         fileitem=fileitem,
                                         transfer_type=transfer_type,
                                         need_notify=need_notify)
                    self.__update_result(
                        result=result,
                        success=False,
                        message=errmsg,
                        fileitem=fileitem,
                        transfer_type=transfer_type,
                        need_notify=need_notify,
                    )
                    return result

                logger.info(f"文件夹 {fileitem.path} 整理成功")
                # 返回整理后的路径
                self.__update_result(result=result,
                                     success=True,
                                     fileitem=fileitem,
                                     target_item=new_diritem,
                                     target_diritem=new_diritem,
                                     need_scrape=need_scrape,
                                     need_notify=need_notify,
                                     transfer_type=transfer_type)
                self.__update_result(
                    result=result,
                    success=True,
                    fileitem=fileitem,
                    target_item=new_diritem,
                    target_diritem=new_diritem,
                    need_scrape=need_scrape,
                    need_notify=need_notify,
                    transfer_type=transfer_type,
                )
                return result
            else:
                # 整理单个文件
@@ -190,13 +207,15 @@ class TransHandler:
                    # 电视剧
                    if in_meta.begin_episode is None:
                        logger.warn(f"文件 {fileitem.path} 整理失败:未识别到文件集数")
                        self.__update_result(result=result,
                                             success=False,
                                             message="未识别到文件集数",
                                             fileitem=fileitem,
                                             fail_list=[fileitem.path],
                                             transfer_type=transfer_type,
                                             need_notify=need_notify)
                        self.__update_result(
                            result=result,
                            success=False,
                            message="未识别到文件集数",
                            fileitem=fileitem,
                            fail_list=[fileitem.path],
                            transfer_type=transfer_type,
                            need_notify=need_notify,
                        )
                        return result

                    # 文件结束季为空
@@ -218,9 +237,10 @@ class TransHandler:
                        meta=in_meta,
                        mediainfo=mediainfo,
                        episodes_info=episodes_info,
                        file_ext=f".{fileitem.extension}"
                        file_ext=f".{fileitem.extension}",
                    ),
                    source_path=fileitem.path
                    source_path=fileitem.path,
                    source_item=fileitem,
                )

                # 针对字幕文件,文件名中补充额外标识信息
@@ -250,13 +270,15 @@ class TransHandler:
                target_diritem = target_oper.get_folder(folder_path)
                if not target_diritem:
                    logger.error(f"目标目录 {folder_path} 获取失败")
                    self.__update_result(result=result,
                                         success=False,
                                         message=f"目标目录 {folder_path} 获取失败",
                                         fileitem=fileitem,
                                         fail_list=[fileitem.path],
                                         transfer_type=transfer_type,
                                         need_notify=need_notify)
                    self.__update_result(
                        result=result,
                        success=False,
                        message=f"目标目录 {folder_path} 获取失败",
                        fileitem=fileitem,
                        fail_list=[fileitem.path],
                        transfer_type=transfer_type,
                        need_notify=need_notify,
                    )
                    return result

                # 判断是否要覆盖,附加文件强制覆盖
@@ -274,92 +296,172 @@ class TransHandler:
                if not overflag:
                    # 目标文件已存在
                    logger.info(
                        f"目的文件系统中已经存在同名文件 {target_file},当前整理覆盖模式设置为 {overwrite_mode}")
                    if overwrite_mode == 'always':
                        f"目的文件系统中已经存在同名文件 {target_file},当前整理覆盖模式设置为 {overwrite_mode}"
                    )
                    # 触发覆盖检查事件,允许插件提供源/目标文件真实大小
                    # 或直接给出覆盖决策(例如 .strm 文件指向网盘原始文件)
                    overwrite_event_data = TransferOverwriteCheckEventData(
                        fileitem=fileitem,
                        target_item=target_item,
                        target_storage=target_storage,
                        target_path=new_file,
                        overwrite_mode=overwrite_mode or "",
                        transfer_type=transfer_type,
                    )
                    overwrite_event = eventmanager.send_event(
                        ChainEventType.TransferOverwriteCheck,
                        overwrite_event_data,
                    )
                    plugin_overwrite: Optional[bool] = None
                    plugin_source_size: Optional[int] = None
                    plugin_target_size: Optional[int] = None
                    if overwrite_event and overwrite_event.event_data:
                        overwrite_event_data = overwrite_event.event_data
                        plugin_overwrite = overwrite_event_data.overwrite
                        plugin_source_size = overwrite_event_data.source_size
                        plugin_target_size = overwrite_event_data.target_size
                        if (
                            plugin_overwrite is not None
                            or plugin_source_size is not None
                            or plugin_target_size is not None
                        ):
                            logger.info(
                                f"覆盖检查事件由 {overwrite_event_data.source} 处理:"
                                f"overwrite={plugin_overwrite}, "
                                f"source_size={plugin_source_size}, "
                                f"target_size={plugin_target_size}, "
                                f"reason={overwrite_event_data.reason}"
                            )
                    if plugin_overwrite is True:
                        overflag = True
                    elif plugin_overwrite is False:
                        self.__update_result(
                            result=result,
                            success=False,
                            message=overwrite_event_data.reason
                            or "插件决定不覆盖已有文件",
                            fileitem=fileitem,
                            target_item=target_item,
                            target_diritem=target_diritem,
                            fail_list=[fileitem.path],
                            transfer_type=transfer_type,
                            need_notify=need_notify,
                        )
                        return result
                    elif overwrite_mode == "always":
                        # 总是覆盖同名文件
                        overflag = True
                    elif overwrite_mode == 'size':
                    elif overwrite_mode == "size":
                        # 存在时大覆盖小
                        if target_item.size < fileitem.size:
                            logger.info(f"目标文件文件大小更小,将覆盖:{new_file}")
                        source_size = (
                            plugin_source_size
                            if plugin_source_size is not None
                            else fileitem.size
                        )
                        target_size = (
                            plugin_target_size
                            if plugin_target_size is not None
                            else target_item.size
                        )
                        if target_size < source_size:
                            logger.info(
                                f"目标文件文件大小更小,将覆盖:{new_file}"
                            )
                            overflag = True
                        else:
                            self.__update_result(result=result,
                                                 success=False,
                                                 message=f"媒体库存在同名文件,且质量更好",
                                                 fileitem=fileitem,
                                                 target_item=target_item,
                                                 target_diritem=target_diritem,
                                                 fail_list=[fileitem.path],
                                                 transfer_type=transfer_type,
                                                 need_notify=need_notify)
                            self.__update_result(
                                result=result,
                                success=False,
                                message=f"媒体库存在同名文件,且质量更好",
                                fileitem=fileitem,
                                target_item=target_item,
                                target_diritem=target_diritem,
                                fail_list=[fileitem.path],
                                transfer_type=transfer_type,
                                need_notify=need_notify,
                            )
                            return result
                    elif overwrite_mode == 'never':
                    elif overwrite_mode == "never":
                        # 存在不覆盖
                        self.__update_result(result=result,
                                             success=False,
                                             message=f"媒体库存在同名文件,当前覆盖模式为不覆盖",
                                             fileitem=fileitem,
                                             target_item=target_item,
                                             target_diritem=target_diritem,
                                             fail_list=[fileitem.path],
                                             transfer_type=transfer_type,
                                             need_notify=need_notify)
                        self.__update_result(
                            result=result,
                            success=False,
                            message=f"媒体库存在同名文件,当前覆盖模式为不覆盖",
                            fileitem=fileitem,
                            target_item=target_item,
                            target_diritem=target_diritem,
                            fail_list=[fileitem.path],
                            transfer_type=transfer_type,
                            need_notify=need_notify,
                        )
                        return result
                    elif overwrite_mode == 'latest':
                    elif overwrite_mode == "latest":
                        # 仅保留最新版本
                        logger.info(f"当前整理覆盖模式设置为仅保留最新版本,将覆盖:{new_file}")
                        logger.info(
                            f"当前整理覆盖模式设置为仅保留最新版本,将覆盖:{new_file}"
                        )
                        overflag = True
                else:
                    if overwrite_mode == 'latest':
                    if overwrite_mode == "latest":
                        # 文件不存在,但仅保留最新版本
                        logger.info(
                            f"当前整理覆盖模式设置为 {overwrite_mode},仅保留最新版本,正在删除已有版本文件 ...")
                            f"当前整理覆盖模式设置为 {overwrite_mode},仅保留最新版本,正在删除已有版本文件 ..."
                        )
                        self.__delete_version_files(target_oper, new_file)
                else:
                    # 附加文件 总是需要覆盖
                    overflag = True

                # 整理文件
                new_item, err_msg = self.__transfer_file(fileitem=fileitem,
                                                         mediainfo=mediainfo,
                                                         target_storage=target_storage,
                                                         target_file=new_file,
                                                         transfer_type=transfer_type,
                                                         over_flag=overflag,
                                                         source_oper=source_oper,
                                                         target_oper=target_oper,
                                                         result=result)
                new_item, err_msg = self.__transfer_file(
                    fileitem=fileitem,
                    mediainfo=mediainfo,
                    target_storage=target_storage,
                    target_file=new_file,
                    transfer_type=transfer_type,
                    over_flag=overflag,
                    source_oper=source_oper,
                    target_oper=target_oper,
                    result=result,
                )
                if not new_item:
                    logger.error(f"文件 {fileitem.path} 整理失败:{err_msg}")
                    self.__update_result(result=result,
                                         success=False,
                                         message=err_msg,
                                         fileitem=fileitem,
                                         fail_list=[fileitem.path],
                                         transfer_type=transfer_type,
                                         need_notify=need_notify)
                    self.__update_result(
                        result=result,
                        success=False,
                        message=err_msg,
                        fileitem=fileitem,
                        fail_list=[fileitem.path],
                        transfer_type=transfer_type,
                        need_notify=need_notify,
                    )
                    return result

                logger.info(f"文件 {fileitem.path} 整理成功")
                self.__update_result(result=result,
                                     success=True,
                                     fileitem=fileitem,
                                     target_item=new_item,
                                     target_diritem=target_diritem,
                                     need_scrape=need_scrape,
                                     transfer_type=transfer_type,
                                     need_notify=need_notify)
                self.__update_result(
                    result=result,
                    success=True,
                    fileitem=fileitem,
                    target_item=new_item,
                    target_diritem=target_diritem,
                    need_scrape=need_scrape,
                    transfer_type=transfer_type,
                    need_notify=need_notify,
                )
                return result
        except Exception as e:
            logger.error(f"媒体整理出错:{e}")
            return TransferInfo(success=False, message=str(e))
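The new overwrite check above lets a plugin either decide outright, or substitute the real file sizes (for instance when a .strm stub points at a cloud file) before the `size` comparison runs. A condensed sketch of that decision order; the function name `decide_overwrite` is illustrative, not part of the codebase:

```python
from typing import Optional

def decide_overwrite(source_size: int, target_size: int,
                     plugin_overwrite: Optional[bool] = None,
                     plugin_source_size: Optional[int] = None,
                     plugin_target_size: Optional[int] = None) -> bool:
    """'size' mode: a plugin verdict wins outright; otherwise plugin-supplied
    sizes replace the locally observed ones, and the larger source
    overwrites the smaller target."""
    if plugin_overwrite is not None:
        return plugin_overwrite
    src = plugin_source_size if plugin_source_size is not None else source_size
    dst = plugin_target_size if plugin_target_size is not None else target_size
    return dst < src
```

Note the precedence: an explicit plugin decision short-circuits the size comparison entirely, which matches the `plugin_overwrite is True / is False` branches in the diff.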

    @staticmethod
    def __transfer_command(fileitem: FileItem, target_storage: str,
                           source_oper: StorageBase, target_oper: StorageBase,
                           target_file: Path, transfer_type: str,
                           ) -> Tuple[Optional[FileItem], str]:
    def __transfer_command(
        fileitem: FileItem,
        target_storage: str,
        source_oper: StorageBase,
        target_oper: StorageBase,
        target_file: Path,
        transfer_type: str,
    ) -> Tuple[Optional[FileItem], str]:
        """
        处理单个文件
        :param fileitem: 源文件
@@ -381,12 +483,15 @@ class TransHandler:
                basename=_path.stem,
                type="file",
                size=_path.stat().st_size,
                extension=_path.suffix.lstrip('.'),
                modify_time=_path.stat().st_mtime
                extension=_path.suffix.lstrip("."),
                modify_time=_path.stat().st_mtime,
            )

        if (fileitem.storage != target_storage
                and fileitem.storage != "local" and target_storage != "local"):
        if (
            fileitem.storage != target_storage
            and fileitem.storage != "local"
            and target_storage != "local"
        ):
            return None, f"不支持 {fileitem.storage} 到 {target_storage} 的文件整理"

        if fileitem.storage == "local" and target_storage == "local":
@@ -419,20 +524,27 @@ class TransHandler:
                target_fileitem = target_oper.get_folder(target_file.parent)
                if target_fileitem:
                    # 上传文件
                    new_item = target_oper.upload(target_fileitem, filepath, target_file.name)
                    new_item = target_oper.upload(
                        target_fileitem, filepath, target_file.name
                    )
                    if new_item:
                        return new_item, ""
                    else:
                        return None, f"{fileitem.path} 上传 {target_storage} 失败"
                else:
                    return None, f"【{target_storage}】{target_file.parent} 目录获取失败"
                    return (
                        None,
                        f"【{target_storage}】{target_file.parent} 目录获取失败",
                    )
            elif transfer_type == "move":
                # 移动
                # 根据目的路径获取文件夹
                target_fileitem = target_oper.get_folder(target_file.parent)
                if target_fileitem:
                    # 上传文件
                    new_item = target_oper.upload(target_fileitem, filepath, target_file.name)
                    new_item = target_oper.upload(
                        target_fileitem, filepath, target_file.name
                    )
                    if new_item:
                        # 删除源文件
                        source_oper.delete(fileitem)
@@ -440,7 +552,10 @@ class TransHandler:
                    else:
                        return None, f"{fileitem.path} 上传 {target_storage} 失败"
                else:
                    return None, f"【{target_storage}】{target_file.parent} 目录获取失败"
                    return (
                        None,
                        f"【{target_storage}】{target_file.parent} 目录获取失败",
                    )
        elif fileitem.storage != "local" and target_storage == "local":
            # 网盘到本地
            if target_file.exists():
@@ -449,7 +564,9 @@ class TransHandler:
            # 网盘到本地
            if transfer_type in ["copy", "move"]:
                # 下载
                tmp_file = source_oper.download(fileitem=fileitem, path=target_file.parent)
                tmp_file = source_oper.download(
                    fileitem=fileitem, path=target_file.parent
                )
                if tmp_file:
                    # 创建目录
                    if not target_file.parent.exists():
@@ -471,22 +588,32 @@ class TransHandler:
                # 复制文件到新目录
                target_fileitem = target_oper.get_folder(target_file.parent)
                if target_fileitem:
                    if source_oper.copy(fileitem, Path(target_fileitem.path), target_file.name):
                    if source_oper.copy(
                        fileitem, Path(target_fileitem.path), target_file.name
                    ):
                        return target_oper.get_item(target_file), ""
                    else:
                        return None, f"【{target_storage}】{fileitem.path} 复制文件失败"
                else:
                    return None, f"【{target_storage}】{target_file.parent} 目录获取失败"
                    return (
                        None,
                        f"【{target_storage}】{target_file.parent} 目录获取失败",
                    )
            elif transfer_type == "move":
                # 移动文件到新目录
                target_fileitem = target_oper.get_folder(target_file.parent)
                if target_fileitem:
                    if source_oper.move(fileitem, Path(target_fileitem.path), target_file.name):
                    if source_oper.move(
                        fileitem, Path(target_fileitem.path), target_file.name
                    ):
                        return target_oper.get_item(target_file), ""
                    else:
                        return None, f"【{target_storage}】{fileitem.path} 移动文件失败"
                else:
                    return None, f"【{target_storage}】{target_file.parent} 目录获取失败"
                    return (
                        None,
                        f"【{target_storage}】{target_file.parent} 目录获取失败",
                    )
            elif transfer_type == "link":
                if source_oper.link(fileitem, target_file):
                    return target_oper.get_item(target_file), ""
@@ -503,22 +630,28 @@ class TransHandler:
        重命名字幕文件,补充附加信息
        """
        # 字幕正则式
        _zhcn_sub_re = r"([.\[(\s](((zh[-_])?(cn|ch[si]|sg|sc))|zho?" \
                       r"|chinese|(cn|ch[si]|sg|zho?)[-_&]?(cn|ch[si]|sg|zho?|eng|jap|ja|jpn)" \
                       r"|eng[-_&]?(cn|ch[si]|sg|zho?)|(jap|ja|jpn)[-_&]?(cn|ch[si]|sg|zho?)" \
                       r"|简[体中]?)[.\])\s])" \
                       r"|([\u4e00-\u9fa5]{0,3}[中双][\u4e00-\u9fa5]{0,2}[字文语][\u4e00-\u9fa5]{0,3})" \
                       r"|简体|简中|JPSC|sc_jp" \
                       r"|(?<![a-z0-9])gb(?![a-z0-9])"
        _zhtw_sub_re = r"([.\[(\s](((zh[-_])?(hk|tw|cht|tc))" \
                       r"|cht[-_&]?(cht|eng|jap|ja|jpn)" \
                       r"|eng[-_&]?cht|(jap|ja|jpn)[-_&]?cht" \
                       r"|繁[体中]?)[.\])\s])" \
                       r"|繁体中[文字]|中[文字]繁体|繁体|JPTC|tc_jp" \
                       r"|(?<![a-z0-9])big5(?![a-z0-9])"
        _ja_sub_re = r"([.\[(\s](ja-jp|jap|ja|jpn" \
                     r"|(jap|ja|jpn)[-_&]?eng|eng[-_&]?(jap|ja|jpn))[.\])\s])" \
                     r"|日本語|日語"
        _zhcn_sub_re = (
            r"([.\[(\s](((zh[-_])?(cn|ch[si]|sg|sc))|zho?"
            r"|chinese|(cn|ch[si]|sg|zho?)[-_&]?(cn|ch[si]|sg|zho?|eng|jap|ja|jpn)"
            r"|eng[-_&]?(cn|ch[si]|sg|zho?)|(jap|ja|jpn)[-_&]?(cn|ch[si]|sg|zho?)"
            r"|简[体中]?)[.\])\s])"
            r"|([\u4e00-\u9fa5]{0,3}[中双][\u4e00-\u9fa5]{0,2}[字文语][\u4e00-\u9fa5]{0,3})"
            r"|简体|简中|JPSC|sc_jp"
            r"|(?<![a-z0-9])gb(?![a-z0-9])"
        )
        _zhtw_sub_re = (
            r"([.\[(\s](((zh[-_])?(hk|tw|cht|tc))"
            r"|cht[-_&]?(cht|eng|jap|ja|jpn)"
            r"|eng[-_&]?cht|(jap|ja|jpn)[-_&]?cht"
            r"|繁[体中]?)[.\])\s])"
            r"|繁体中[文字]|中[文字]繁体|繁体|JPTC|tc_jp"
            r"|(?<![a-z0-9])big5(?![a-z0-9])"
        )
        _ja_sub_re = (
            r"([.\[(\s](ja-jp|jap|ja|jpn"
            r"|(jap|ja|jpn)[-_&]?eng|eng[-_&]?(jap|ja|jpn))[.\])\s])"
            r"|日本語|日語"
        )
        _eng_sub_re = r"[.\[(\s]eng[.\])\s]"

        # 原文件后缀
@@ -537,20 +670,29 @@ class TransHandler:
            new_file_type = ".eng"

        # 添加默认字幕标识
        if ((settings.DEFAULT_SUB == "zh-cn" and new_file_type == ".chi.zh-cn")
                or (settings.DEFAULT_SUB == "zh-tw" and new_file_type == ".zh-tw")
                or (settings.DEFAULT_SUB == "ja" and new_file_type == ".ja")
                or (settings.DEFAULT_SUB == "eng" and new_file_type == ".eng")):
        if (
            (settings.DEFAULT_SUB == "zh-cn" and new_file_type == ".chi.zh-cn")
            or (settings.DEFAULT_SUB == "zh-tw" and new_file_type == ".zh-tw")
            or (settings.DEFAULT_SUB == "ja" and new_file_type == ".ja")
            or (settings.DEFAULT_SUB == "eng" and new_file_type == ".eng")
        ):
            new_sub_tag = ".default" + new_file_type
        else:
            new_sub_tag = new_file_type

        return new_file.with_name(new_file.stem + new_sub_tag + file_ext)
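The subtitle patterns above are matched against the file name to pick a language tag that gets appended before the extension. A trimmed sketch of that matching step; the patterns here are simplified illustrative variants, not the full expressions from the code:

```python
import re

# Simplified variants of the zh-cn / ja patterns above, for illustration only.
_ZHCN_RE = r"[.\[(\s]((zh[-_])?(cn|chs|sc))[.\])\s]|简体|简中"
_JA_RE = r"[.\[(\s](ja-jp|jap|ja|jpn)[.\])\s]|日本語|日語"

def sub_language_tag(filename: str) -> str:
    """Pick a language tag for a subtitle file from its (lowercased) name."""
    name = filename.lower()
    if re.search(_ZHCN_RE, name):
        return ".chi.zh-cn"
    if re.search(_JA_RE, name):
        return ".ja"
    return ""
```

The real code then combines this tag with a `.default` prefix when it matches `settings.DEFAULT_SUB`, producing names like `Movie.default.chi.zh-cn.srt`.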

    def __transfer_dir(self, fileitem: FileItem, mediainfo: MediaInfo,
                       source_oper: StorageBase, target_oper: StorageBase,
                       transfer_type: str, target_storage: str, target_path: Path,
                       result: TransferInfo) -> Tuple[Optional[FileItem], str]:
    def __transfer_dir(
        self,
        fileitem: FileItem,
        mediainfo: MediaInfo,
        source_oper: StorageBase,
        target_oper: StorageBase,
        transfer_type: str,
        target_storage: str,
        target_path: Path,
        result: TransferInfo,
    ) -> Tuple[Optional[FileItem], str]:
        """
        整理整个文件夹
        :param fileitem: 源文件
@@ -570,7 +712,7 @@ class TransHandler:
            mediainfo=mediainfo,
            target_storage=target_storage,
            target_path=target_path,
            transfer_type=transfer_type
            transfer_type=transfer_type,
        )
        event = eventmanager.send_event(ChainEventType.TransferIntercept, event_data)
        if event and event.event_data:
@@ -579,25 +721,34 @@ class TransHandler:
            if event_data.cancel:
                logger.debug(
                    f"Transfer dir canceled by event: {event_data.source},"
                    f"Reason: {event_data.reason}")
                    f"Reason: {event_data.reason}"
                )
                return None, event_data.reason
        # 处理所有文件
        state, errmsg = self.__transfer_dir_files(fileitem=fileitem,
                                                  target_storage=target_storage,
                                                  source_oper=source_oper,
                                                  target_oper=target_oper,
                                                  target_path=target_path,
                                                  transfer_type=transfer_type,
                                                  result=result)
        state, errmsg = self.__transfer_dir_files(
            fileitem=fileitem,
            target_storage=target_storage,
            source_oper=source_oper,
            target_oper=target_oper,
            target_path=target_path,
            transfer_type=transfer_type,
            result=result,
        )
        if state:
            return target_item, errmsg
        else:
            return None, errmsg

    def __transfer_dir_files(self, fileitem: FileItem, target_storage: str,
                             source_oper: StorageBase, target_oper: StorageBase,
                             transfer_type: str, target_path: Path,
                             result: TransferInfo) -> Tuple[bool, str]:
    def __transfer_dir_files(
        self,
        fileitem: FileItem,
        target_storage: str,
        source_oper: StorageBase,
        target_oper: StorageBase,
        transfer_type: str,
        target_path: Path,
        result: TransferInfo,
    ) -> Tuple[bool, str]:
        """
        按目录结构整理目录下所有文件
        :param fileitem: 源文件
@@ -613,24 +764,28 @@ class TransHandler:
            if item.type == "dir":
                # 递归整理目录
                new_path = target_path / item.name
                state, errmsg = self.__transfer_dir_files(fileitem=item,
                                                          target_storage=target_storage,
                                                          source_oper=source_oper,
                                                          target_oper=target_oper,
                                                          transfer_type=transfer_type,
                                                          target_path=new_path,
                                                          result=result)
                state, errmsg = self.__transfer_dir_files(
                    fileitem=item,
                    target_storage=target_storage,
                    source_oper=source_oper,
                    target_oper=target_oper,
                    transfer_type=transfer_type,
                    target_path=new_path,
                    result=result,
                )
                if not state:
                    return False, errmsg
            else:
                # 整理文件
                new_file = target_path / item.name
                new_item, errmsg = self.__transfer_command(fileitem=item,
                                                           target_storage=target_storage,
                                                           source_oper=source_oper,
                                                           target_oper=target_oper,
                                                           target_file=new_file,
                                                           transfer_type=transfer_type)
                new_item, errmsg = self.__transfer_command(
                    fileitem=item,
                    target_storage=target_storage,
                    source_oper=source_oper,
                    target_oper=target_oper,
                    target_file=new_file,
                    transfer_type=transfer_type,
                )
                if not new_item:
                    return False, errmsg
                self.__update_result(
@@ -641,11 +796,18 @@ class TransHandler:
        # 返回成功
        return True, ""

    def __transfer_file(self, fileitem: FileItem, mediainfo: MediaInfo,
                        source_oper: StorageBase, target_oper: StorageBase,
                        target_storage: str, target_file: Path,
                        transfer_type: str, result: TransferInfo,
                        over_flag: Optional[bool] = False) -> Tuple[Optional[FileItem], str]:
    def __transfer_file(
        self,
        fileitem: FileItem,
        mediainfo: MediaInfo,
        source_oper: StorageBase,
        target_oper: StorageBase,
        target_storage: str,
        target_file: Path,
        transfer_type: str,
        result: TransferInfo,
        over_flag: Optional[bool] = False,
    ) -> Tuple[Optional[FileItem], str]:
        """
        整理一个文件,同时处理其他相关文件
        :param fileitem: 原文件
@@ -659,17 +821,17 @@ class TransHandler:
        :param source_oper: 源存储操作对象
        :param target_oper: 目标存储操作对象
        """
        logger.info(f"正在整理文件:【{fileitem.storage}】{fileitem.path} 到 【{target_storage}】{target_file},"
                    f"操作类型:{transfer_type}")
        logger.info(
            f"正在整理文件:【{fileitem.storage}】{fileitem.path} 到 【{target_storage}】{target_file},"
            f"操作类型:{transfer_type}"
        )
        event_data = TransferInterceptEventData(
            fileitem=fileitem,
            mediainfo=mediainfo,
            target_storage=target_storage,
            target_path=target_file,
            transfer_type=transfer_type,
            options={
                "over_flag": over_flag
            }
            options={"over_flag": over_flag},
        )
        event = eventmanager.send_event(ChainEventType.TransferIntercept, event_data)
        if event and event.event_data:
@@ -678,9 +840,12 @@ class TransHandler:
            if event_data.cancel:
                logger.debug(
                    f"Transfer file canceled by event: {event_data.source},"
                    f"Reason: {event_data.reason}")
                    f"Reason: {event_data.reason}"
                )
                return None, event_data.reason
        if target_storage == "local" and (target_file.exists() or target_file.is_symlink()):
        if target_storage == "local" and (
            target_file.exists() or target_file.is_symlink()
        ):
            if not over_flag:
                logger.warn(f"文件已存在:{target_file}")
                return None, f"{target_file} 已存在"
@@ -694,15 +859,19 @@ class TransHandler:
                logger.warn(f"文件已存在:【{target_storage}】{target_file}")
                return None, f"【{target_storage}】{target_file} 已存在"
            else:
                logger.info(f"正在删除已存在的文件:【{target_storage}】{target_file}")
                logger.info(
                    f"正在删除已存在的文件:【{target_storage}】{target_file}"
                )
                target_oper.delete(exists_item)
        # 执行文件整理命令
        new_item, errmsg = self.__transfer_command(fileitem=fileitem,
                                                   target_storage=target_storage,
                                                   source_oper=source_oper,
                                                   target_oper=target_oper,
                                                   target_file=target_file,
                                                   transfer_type=transfer_type)
        new_item, errmsg = self.__transfer_command(
            fileitem=fileitem,
            target_storage=target_storage,
            source_oper=source_oper,
            target_oper=target_oper,
            target_file=target_file,
            transfer_type=transfer_type,
        )
        if new_item:
            self.__update_result(
                result=result,
@@ -716,8 +885,12 @@ class TransHandler:
            return None, errmsg

    @staticmethod
    def get_dest_path(mediainfo: MediaInfo, target_path: Path,
                      need_type_folder: Optional[bool] = False, need_category_folder: Optional[bool] = False):
    def get_dest_path(
        mediainfo: MediaInfo,
        target_path: Path,
        need_type_folder: Optional[bool] = False,
        need_category_folder: Optional[bool] = False,
    ):
        """
        获取目标路径
        """
@@ -728,8 +901,12 @@ class TransHandler:
        return target_path

    @staticmethod
    def get_dest_dir(mediainfo: MediaInfo, target_dir: TransferDirectoryConf,
                     need_type_folder: Optional[bool] = None, need_category_folder: Optional[bool] = None) -> Path:
    def get_dest_dir(
        mediainfo: MediaInfo,
        target_dir: TransferDirectoryConf,
        need_type_folder: Optional[bool] = None,
        need_category_folder: Optional[bool] = None,
    ) -> Path:
        """
        根据设置并装媒体库目录
        :param mediainfo: 媒体信息
@@ -749,7 +926,11 @@ class TransHandler:
            library_dir = Path(target_dir.library_path) / target_dir.media_type
        else:
            library_dir = Path(target_dir.library_path)
        if not target_dir.media_category and need_category_folder and mediainfo.category:
        if (
            not target_dir.media_category
            and need_category_folder
            and mediainfo.category
        ):
            # 二级自动分类
            library_dir = library_dir / mediainfo.category
        elif target_dir.media_category and need_category_folder:
@@ -759,8 +940,12 @@ class TransHandler:
        return library_dir
|
||||
|
||||
@staticmethod
|
||||
def get_naming_dict(meta: MetaBase, mediainfo: MediaInfo, file_ext: Optional[str] = None,
|
||||
episodes_info: List[TmdbEpisode] = None) -> dict:
|
||||
def get_naming_dict(
|
||||
meta: MetaBase,
|
||||
mediainfo: MediaInfo,
|
||||
file_ext: Optional[str] = None,
|
||||
episodes_info: List[TmdbEpisode] = None,
|
||||
) -> dict:
|
||||
"""
|
||||
根据媒体信息,返回Format字典
|
||||
:param meta: 文件元数据
|
||||
@@ -768,8 +953,12 @@ class TransHandler:
|
||||
:param file_ext: 文件扩展名
|
||||
:param episodes_info: 当前季的全部集信息
|
||||
"""
|
||||
return TemplateHelper().builder.build(meta=meta, mediainfo=mediainfo,
|
||||
file_extension=file_ext, episodes_info=episodes_info)
|
||||
return TemplateHelper().builder.build(
|
||||
meta=meta,
|
||||
mediainfo=mediainfo,
|
||||
file_extension=file_ext,
|
||||
episodes_info=episodes_info,
|
||||
)
|
||||
|
||||
@staticmethod
|
||||
def __delete_version_files(storage_oper: StorageBase, path: Path) -> bool:
|
||||
@@ -816,14 +1005,20 @@ class TransHandler:
|
||||
return True
|
||||
|
||||
@staticmethod
|
||||
def get_rename_path(template_string: str, rename_dict: dict,
|
||||
path: Path = None, source_path: str = None) -> Path:
|
||||
def get_rename_path(
|
||||
template_string: str,
|
||||
rename_dict: dict,
|
||||
path: Optional[Path] = None,
|
||||
source_path: Optional[str] = None,
|
||||
source_item: Optional[FileItem] = None,
|
||||
) -> Path:
|
||||
"""
|
||||
生成重命名后的完整路径,支持智能重命名事件
|
||||
:param template_string: Jinja2 模板字符串
|
||||
:param rename_dict: 渲染上下文,用于替换模板中的变量
|
||||
:param path: 可选的基础路径,如果提供,将在其基础上拼接生成的路径
|
||||
:param source_path: 源文件路径,即待整理的文件路径
|
||||
:param source_item: 源文件信息,即待整理的文件信息
|
||||
:return: 生成的完整路径
|
||||
"""
|
||||
# 创建jinja2模板对象
|
||||
@@ -838,15 +1033,18 @@ class TransHandler:
|
||||
rename_dict=rename_dict,
|
||||
render_str=render_str,
|
||||
path=path,
|
||||
source_path=source_path
|
||||
source_path=source_path,
|
||||
source_item=source_item,
|
||||
)
|
||||
event = eventmanager.send_event(ChainEventType.TransferRename, event_data)
|
||||
# 检查事件返回的结果
|
||||
if event and event.event_data:
|
||||
event_data: TransferRenameEventData = event.event_data
|
||||
if event_data.updated and event_data.updated_str:
|
||||
logger.debug(f"Render string updated by event: "
|
||||
f"{render_str} -> {event_data.updated_str} (source: {event_data.source})")
|
||||
logger.debug(
|
||||
f"Render string updated by event: "
|
||||
f"{render_str} -> {event_data.updated_str} (source: {event_data.source})"
|
||||
)
|
||||
render_str = event_data.updated_str
|
||||
|
||||
# 目的路径
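The `get_rename_path` hunk above renders a Jinja2 naming template into a relative path and then joins it onto an optional base directory. A minimal stand-alone sketch of that flow, using stdlib `string.Template` as a stand-in for Jinja2 (the template syntax and variable names here are illustrative, not MoviePilot's real naming keys):

```python
from pathlib import Path
from string import Template
from typing import Optional


def render_rename_path(template_string: str, rename_dict: dict,
                       base: Optional[Path] = None) -> Path:
    """Render a naming template and optionally join it onto a base path."""
    # string.Template is a stdlib stand-in for the Jinja2 rendering upstream.
    render_str = Template(template_string).safe_substitute(rename_dict)
    # Split on "/" so each template segment becomes one path component.
    parts = [part for part in render_str.split("/") if part]
    rendered = Path(*parts)
    return base / rendered if base else rendered


dest = render_rename_path(
    "$title ($year)/$title - S01E01$fileExt",
    {"title": "Demo", "year": "2024", "fileExt": ".mkv"},
    base=Path("/library"),
)
```

The `source_item` parameter added in the diff extends the `TransferRename` event payload, so event handlers can rewrite `render_str` with full knowledge of the source file before the path is assembled.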

@@ -5,6 +5,7 @@ QQ Bot 通知模块
"""

import json
from urllib.parse import quote, unquote
from typing import Optional, List, Tuple, Union, Any

from app.core.context import MediaInfo, Context
@@ -13,12 +14,39 @@ from app.modules import _ModuleBase, _MessageBase
from app.modules.qqbot.qqbot import QQBot
from app.schemas import CommingMessage, MessageChannel, Notification
from app.schemas.types import ModuleType
from app.utils.http import RequestUtils


class QQBotModule(_ModuleBase, _MessageBase[QQBot]):
"""QQ Bot 通知模块"""

_IMAGE_SUFFIXES = (
".png",
".jpg",
".jpeg",
".gif",
".webp",
".bmp",
".tiff",
".svg",
)
_AUDIO_SUFFIXES = (
".mp3",
".m4a",
".wav",
".ogg",
".oga",
".opus",
".aac",
".amr",
".flac",
".mpga",
".mpeg",
".webm",
)

def init_module(self) -> None:
self.stop()
super().init_service(service_name=QQBot.__name__.lower(), service_type=QQBot)
self._channel = MessageChannel.QQ

@@ -77,7 +105,10 @@ class QQBotModule(_ModuleBase, _MessageBase[QQBot]):

msg_type = msg_body.get("type")
content = (msg_body.get("content") or "").strip()
if not content:
images = self._extract_images(msg_body)
audio_refs = self._extract_audio_refs(msg_body)
files = self._extract_files(msg_body)
if not content and not images and not audio_refs and not files:
return None

if msg_type == "C2C_MESSAGE_CREATE":
@@ -85,13 +116,20 @@ class QQBotModule(_ModuleBase, _MessageBase[QQBot]):
user_openid = author.get("user_openid", "")
if not user_openid:
return None
logger.info(f"收到 QQ 私聊消息: userid={user_openid}, text={content[:50]}...")
logger.info(
f"收到 QQ 私聊消息: userid={user_openid}, "
f"text={(content or '')[:50]}..., images={len(images) if images else 0}, "
f"audios={len(audio_refs) if audio_refs else 0}, files={len(files) if files else 0}"
)
return CommingMessage(
channel=MessageChannel.QQ,
source=client_config.name,
userid=user_openid,
username=user_openid,
text=content,
images=images,
audio_refs=audio_refs,
files=files,
)
elif msg_type == "GROUP_AT_MESSAGE_CREATE":
author = msg_body.get("author", {})
@@ -99,16 +137,170 @@ class QQBotModule(_ModuleBase, _MessageBase[QQBot]):
group_openid = msg_body.get("group_openid", "")
# 群聊用 group:group_openid 作为 userid,便于回复时识别
userid = f"group:{group_openid}" if group_openid else member_openid
logger.info(f"收到 QQ 群消息: group={group_openid}, userid={member_openid}, text={content[:50]}...")
logger.info(
f"收到 QQ 群消息: group={group_openid}, userid={member_openid}, "
f"text={(content or '')[:50]}..., images={len(images) if images else 0}, "
f"audios={len(audio_refs) if audio_refs else 0}, files={len(files) if files else 0}"
)
return CommingMessage(
channel=MessageChannel.QQ,
source=client_config.name,
userid=userid,
username=member_openid or group_openid,
text=content,
images=images,
audio_refs=audio_refs,
files=files,
)
return None

@classmethod
def _extract_images(
cls, msg_body: dict
) -> Optional[List[CommingMessage.MessageImage]]:
images: List[CommingMessage.MessageImage] = []
attachments = msg_body.get("attachments") or []
if isinstance(attachments, list):
for attachment in attachments:
if not isinstance(attachment, dict):
continue
url = attachment.get("url") or attachment.get("proxy_url")
if not url:
continue
content_type = (
attachment.get("content_type")
or attachment.get("mime_type")
or ""
).lower()
filename = (
attachment.get("filename")
or attachment.get("name")
or ""
).lower()
if content_type.startswith("image/") or filename.endswith(cls._IMAGE_SUFFIXES):
images.append(
CommingMessage.MessageImage(
ref=url,
name=attachment.get("filename") or attachment.get("name"),
mime_type=attachment.get("content_type")
or attachment.get("mime_type"),
size=attachment.get("size"),
)
)

for key in ("image", "image_url", "pic_url"):
value = msg_body.get(key)
if isinstance(value, str) and value.startswith("http"):
images.append(CommingMessage.MessageImage(ref=value))

extra_images = msg_body.get("images")
if isinstance(extra_images, list):
for item in extra_images:
if isinstance(item, str) and item.startswith("http"):
images.append(CommingMessage.MessageImage(ref=item))
elif isinstance(item, dict):
url = item.get("url") or item.get("image_url")
if isinstance(url, str) and url.startswith("http"):
images.append(
CommingMessage.MessageImage(
ref=url,
name=item.get("name") or item.get("filename"),
mime_type=item.get("content_type")
or item.get("mime_type"),
size=item.get("size"),
)
)

deduped = []
for image in images:
if image.ref not in [item.ref for item in deduped]:
deduped.append(image)
return deduped or None
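The dedup loop at the end of `_extract_images` rescans the accumulated list for every element, which is quadratic in the number of attachments. The same order-preserving, first-wins behaviour can be had in O(n) with a seen-set; plain dicts stand in for `CommingMessage.MessageImage` in this sketch:

```python
from typing import Optional


def dedupe_by_ref(items: list) -> Optional[list]:
    """Keep the first occurrence of each ref, preserving input order."""
    seen = set()
    deduped = []
    for item in items:
        ref = item["ref"]  # dicts stand in for MessageImage objects
        if ref not in seen:
            seen.add(ref)
            deduped.append(item)
    return deduped or None  # same "empty becomes None" convention as the module


images = dedupe_by_ref([{"ref": "a"}, {"ref": "b"}, {"ref": "a"}])
```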

@classmethod
def _extract_audio_refs(cls, msg_body: dict) -> Optional[List[str]]:
audio_refs: List[str] = []
attachments = msg_body.get("attachments") or []
if isinstance(attachments, list):
for attachment in attachments:
if not isinstance(attachment, dict):
continue
url = attachment.get("url") or attachment.get("proxy_url")
if not url:
continue
content_type = (
attachment.get("content_type")
or attachment.get("mime_type")
or ""
).lower()
filename = (
attachment.get("filename")
or attachment.get("name")
or ""
).lower()
if content_type.startswith("audio/") or filename.endswith(cls._AUDIO_SUFFIXES):
audio_refs.append(f"qq://file/{quote(url, safe='')}")

deduped = []
for audio_ref in audio_refs:
if audio_ref not in deduped:
deduped.append(audio_ref)
return deduped or None

@classmethod
def _extract_files(
cls, msg_body: dict
) -> Optional[List[CommingMessage.MessageAttachment]]:
files: List[CommingMessage.MessageAttachment] = []
attachments = msg_body.get("attachments") or []
if isinstance(attachments, list):
for attachment in attachments:
if not isinstance(attachment, dict):
continue
url = attachment.get("url") or attachment.get("proxy_url")
if not url:
continue
content_type = (
attachment.get("content_type")
or attachment.get("mime_type")
or ""
).lower()
filename = (
attachment.get("filename") or attachment.get("name") or ""
).lower()
is_image = content_type.startswith("image/") or filename.endswith(
cls._IMAGE_SUFFIXES
)
is_audio = content_type.startswith("audio/") or filename.endswith(
cls._AUDIO_SUFFIXES
)
if is_image or is_audio:
continue
files.append(
CommingMessage.MessageAttachment(
ref=f"qq://file/{quote(url, safe='')}",
name=attachment.get("filename") or attachment.get("name"),
mime_type=attachment.get("content_type")
or attachment.get("mime_type"),
size=attachment.get("size"),
)
)
return files or None

def download_qq_file_bytes(self, file_ref: str, source: str) -> Optional[bytes]:
"""
下载QQ音频附件并返回原始字节
"""
if not file_ref or not file_ref.startswith("qq://file/"):
return None
if not self.get_config(source):
return None
file_url = unquote(file_ref.replace("qq://file/", "", 1))
resp = RequestUtils(timeout=30).get_res(file_url)
if resp and resp.content:
return resp.content
return None
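`_extract_audio_refs` and `_extract_files` wrap the attachment URL as `qq://file/<percent-encoded url>`, and `download_qq_file_bytes` reverses the transformation. The round trip relies on `quote(url, safe='')` encoding every reserved character (including `/` and `:`), so the URL survives as a single opaque token. A self-contained sketch of the encode/decode pair:

```python
from typing import Optional
from urllib.parse import quote, unquote


def make_file_ref(url: str) -> str:
    # safe='' percent-encodes "/" and ":" too, so the whole URL becomes
    # one opaque segment behind the qq://file/ pseudo-scheme.
    return f"qq://file/{quote(url, safe='')}"


def resolve_file_ref(file_ref: str) -> Optional[str]:
    if not file_ref or not file_ref.startswith("qq://file/"):
        return None
    return unquote(file_ref.replace("qq://file/", "", 1))


ref = make_file_ref("https://example.com/media/a b.mp3?sig=1&x=2")
```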

def post_message(self, message: Notification, **kwargs) -> None:
for conf in self.get_configs().values():
if not self.check_message(message, conf.name):

@@ -6,7 +6,7 @@ QQ Bot Gateway WebSocket 客户端
import json
import threading
import time
from typing import Callable, Optional
from typing import Callable, List, Optional

import websocket

@@ -24,6 +24,7 @@ def run_gateway(
get_gateway_url_fn: Callable[[str], str],
on_message_fn: Callable[[dict], None],
stop_event: threading.Event,
ws_holder: List,
) -> None:
"""
在后台线程中运行 Gateway WebSocket 连接
@@ -34,20 +35,20 @@ def run_gateway(
:param get_gateway_url_fn: 获取 gateway URL 的函数 (token) -> url
:param on_message_fn: 收到消息时的回调 (payload_dict) -> None
:param stop_event: 停止事件,set 时退出循环
:param ws_holder: 调用方持有的单元素列表,存放当前 WebSocketApp,供 stop() 时 close 以打断 run_forever
"""
last_seq: Optional[int] = None
heartbeat_interval_ms: Optional[int] = None
heartbeat_timer: Optional[threading.Timer] = None
ws_ref: list = [] # 用于在闭包中保持 ws 引用

def send_heartbeat():
nonlocal heartbeat_timer
if stop_event.is_set():
return
try:
if ws_ref and ws_ref[0]:
if ws_holder and ws_holder[0]:
payload = {"op": 1, "d": last_seq}
ws_ref[0].send(json.dumps(payload))
ws_holder[0].send(json.dumps(payload))
logger.debug(f"[QQ Gateway:{config_name}] Heartbeat sent, seq={last_seq}")
except Exception as err:
logger.debug(f"[QQ Gateway:{config_name}] Heartbeat error: {err}")
@@ -87,7 +88,7 @@ def run_gateway(
"shard": [0, 1],
},
}
ws_ref[0].send(json.dumps(identify))
ws_holder[0].send(json.dumps(identify))
logger.info(f"[QQ Gateway:{config_name}] Identify sent")

# 启动心跳
@@ -139,8 +140,8 @@ def run_gateway(

elif op == 9: # Invalid Session
logger.warning(f"[QQ Gateway:{config_name}] Invalid session")
if ws_ref and ws_ref[0]:
ws_ref[0].close()
if ws_holder and ws_holder[0]:
ws_holder[0].close()

def on_ws_error(_, error):
logger.error(f"[QQ Gateway:{config_name}] WebSocket error: {error}")
@@ -149,6 +150,7 @@ def run_gateway(
logger.info(f"[QQ Gateway:{config_name}] WebSocket closed: {close_status_code} {close_msg}")
if heartbeat_timer:
heartbeat_timer.cancel()
ws_holder.clear()

reconnect_delays = [1, 2, 5, 10, 30, 60]
attempt = 0
@@ -165,8 +167,8 @@ def run_gateway(
on_error=on_ws_error,
on_close=on_ws_close,
)
ws_ref.clear()
ws_ref.append(ws)
ws_holder.clear()
ws_holder.append(ws)

# run_forever 会阻塞,需要传入 stop_event 的检查
# websocket-client 的 run_forever 支持 ping_interval, ping_timeout
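The `ws_ref` to `ws_holder` change above switches the gateway loop from a closure-local list to a caller-owned one-element list, so `stop()` can `close()` the live connection and break out of the blocking `run_forever()`. A minimal sketch of the pattern, with a dummy connection standing in for `websocket.WebSocketApp`:

```python
import threading
import time


class DummyConn:
    """Stand-in for websocket.WebSocketApp: run_forever() blocks until close()."""

    def __init__(self):
        self._closed = threading.Event()

    def run_forever(self):
        self._closed.wait()

    def close(self):
        self._closed.set()


def run_loop(stop_event: threading.Event, ws_holder: list):
    while not stop_event.is_set():
        conn = DummyConn()
        # Publish the live connection through the caller-owned holder so
        # the owner can close it and unblock run_forever().
        ws_holder.clear()
        ws_holder.append(conn)
        conn.run_forever()


stop = threading.Event()
holder: list = []
worker = threading.Thread(target=run_loop, args=(stop, holder), daemon=True)
worker.start()
while not holder:  # wait for the loop to publish a connection
    time.sleep(0.01)
stop.set()         # request shutdown...
holder[0].close()  # ...and break out of the blocking run_forever()
worker.join(timeout=5)
```

Without the shared holder, `stop()` can only set the event and wait; the thread would stay parked inside `run_forever()` until the next network event.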

@@ -50,6 +50,9 @@ class QQBot:
:param QQ_GROUP_OPENID: 默认群组 openid(群聊,与 QQ_OPENID 二选一)
:param name: 配置名称,用于消息来源标识和 Gateway 接收
"""
self._gateway_stop = None
self._gateway_thread = None
self._gateway_ws_holder: list = []
if not QQ_APP_ID or not QQ_APP_SECRET:
logger.error("QQ Bot 配置不完整:缺少 AppID 或 AppSecret")
self._ready = False
@@ -151,6 +154,7 @@ class QQBot:
"get_gateway_url_fn": get_gateway_url,
"on_message_fn": self._on_gateway_message,
"stop_event": self._gateway_stop,
"ws_holder": self._gateway_ws_holder,
},
daemon=True,
)
@@ -161,10 +165,19 @@ class QQBot:

def stop(self) -> None:
"""停止 Gateway 连接"""
if self._gateway_stop:
if self._gateway_stop is not None:
self._gateway_stop.set()
if self._gateway_thread and self._gateway_thread.is_alive():
self._gateway_thread.join(timeout=5)
try:
if self._gateway_ws_holder:
self._gateway_ws_holder[0].close()
except Exception as e:
logger.debug(f"QQ Bot Gateway WebSocket close: {e}")
if self._gateway_thread is not None and self._gateway_thread.is_alive():
self._gateway_thread.join(timeout=20)
if self._gateway_thread.is_alive():
logger.warning(
"QQ Bot Gateway 线程在 stop 后仍未退出,可能存在重复收消息,请重启进程"
)

def get_state(self) -> bool:
"""获取就绪状态"""

@@ -1,5 +1,6 @@
import json
import re
from urllib.parse import quote, unquote
from typing import Optional, Union, List, Tuple, Any

from app.core.context import MediaInfo, Context
@@ -11,6 +12,21 @@ from app.schemas.types import ModuleType


class SlackModule(_ModuleBase, _MessageBase[Slack]):
_AUDIO_SUFFIXES = (
".mp3",
".m4a",
".wav",
".ogg",
".oga",
".opus",
".aac",
".amr",
".flac",
".mpga",
".mpeg",
".webm",
)

def init_module(self) -> None:
"""
初始化模块
@@ -193,17 +209,26 @@ class SlackModule(_ModuleBase, _MessageBase[Slack]):
if not client_config:
return None
try:
msg_json: dict = json.loads(body)
msg_json = json.loads(body)
while isinstance(msg_json, str):
msg_json = json.loads(msg_json)
except Exception as err:
logger.debug(f"解析Slack消息失败:{str(err)}")
return None
if not isinstance(msg_json, dict):
logger.debug(f"Slack消息格式无效:{type(msg_json)}")
return None
if msg_json:
images = None
audio_refs = None
files = None
if msg_json.get("type") == "message":
userid = msg_json.get("user")
text = msg_json.get("text")
username = msg_json.get("user")
images = self._extract_images(msg_json)
audio_refs = self._extract_audio_refs(msg_json)
files = self._extract_files(msg_json)
elif msg_json.get("type") == "block_actions":
userid = msg_json.get("user", {}).get("id")
callback_data = msg_json.get("actions")[0].get("value")
@@ -246,6 +271,8 @@ class SlackModule(_ModuleBase, _MessageBase[Slack]):
).strip()
username = ""
images = self._extract_images(msg_json.get("event", {}))
audio_refs = self._extract_audio_refs(msg_json.get("event", {}))
files = self._extract_files(msg_json.get("event", {}))
elif msg_json.get("type") == "shortcut":
userid = msg_json.get("user", {}).get("id")
text = msg_json.get("callback_id")
@@ -257,7 +284,9 @@ class SlackModule(_ModuleBase, _MessageBase[Slack]):
else:
return None
logger.info(
f"收到来自 {client_config.name} 的Slack消息:userid={userid}, username={username}, text={text}, images={len(images) if images else 0}"
f"收到来自 {client_config.name} 的Slack消息:userid={userid}, username={username}, "
f"text={text}, images={len(images) if images else 0}, audios={len(audio_refs) if audio_refs else 0}, "
f"files={len(files) if files else 0}"
)
return CommingMessage(
channel=MessageChannel.Slack,
@@ -266,11 +295,15 @@ class SlackModule(_ModuleBase, _MessageBase[Slack]):
username=username,
text=text,
images=images,
audio_refs=audio_refs,
files=files,
)
return None

@staticmethod
def _extract_images(msg_json: dict) -> Optional[List[str]]:
def _extract_images(
msg_json: dict,
) -> Optional[List[CommingMessage.MessageImage]]:
"""
从Slack消息中提取图片URL
"""
@@ -279,12 +312,131 @@ class SlackModule(_ModuleBase, _MessageBase[Slack]):
return None
images = []
for file in files:
if file.get("type") in ("image", "jpg", "jpeg", "png", "gif", "webp"):
file_type = str(file.get("type", "")).lower()
file_ext = str(file.get("filetype", "")).lower()
mime_type = str(file.get("mimetype", "")).lower()
if (
file_type == "image"
or file_ext in ("jpg", "jpeg", "png", "gif", "webp", "bmp")
or mime_type.startswith("image/")
):
url = file.get("url_private") or file.get("url_private_download")
if url:
images.append(url)
images.append(
CommingMessage.MessageImage(
ref=url,
name=file.get("name") or file.get("title"),
mime_type=file.get("mimetype"),
size=file.get("size"),
)
)
return images if images else None

@classmethod
def _extract_audio_refs(cls, msg_json: dict) -> Optional[List[str]]:
"""
从Slack消息中提取音频文件引用
"""
files = msg_json.get("files", [])
if not files:
return None
audio_refs = []
for file in files:
file_type = str(file.get("type", "")).lower()
file_ext = f".{str(file.get('filetype', '')).lower().lstrip('.')}"
mime_type = str(file.get("mimetype", "")).lower()
if (
file_type == "audio"
or mime_type.startswith("audio/")
or file_ext in cls._AUDIO_SUFFIXES
):
url = file.get("url_private_download") or file.get("url_private")
if url:
audio_refs.append(f"slack://file/{quote(url, safe='')}")
return audio_refs if audio_refs else None
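Slack reports a bare `filetype` (such as `"mp3"`) while the module's suffix table uses dotted entries (`".mp3"`); normalising with `f".{filetype.lstrip('.')}"` lets one tuple serve both this membership test and `str.endswith` callers. The classification used by `_extract_audio_refs` reduces to a small predicate (abridged suffix list for illustration):

```python
AUDIO_SUFFIXES = (".mp3", ".m4a", ".wav", ".ogg", ".flac")  # abridged


def is_audio_file(mimetype: str, filetype: str) -> bool:
    # Either the MIME type declares audio/*, or the Slack filetype
    # (a bare extension like "MP3") maps to a known audio suffix.
    ext = f".{str(filetype).lower().lstrip('.')}"
    return str(mimetype).lower().startswith("audio/") or ext in AUDIO_SUFFIXES
```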

@classmethod
def _extract_files(
cls, msg_json: dict
) -> Optional[List[CommingMessage.MessageAttachment]]:
"""
从 Slack 消息中提取非图片/非音频文件。
"""
files = msg_json.get("files", [])
if not files:
return None

attachments = []
for file in files:
file_type = str(file.get("type", "")).lower()
file_ext = f".{str(file.get('filetype', '')).lower().lstrip('.')}"
mime_type = str(file.get("mimetype", "")).lower()
is_image = (
file_type == "image"
or file_ext in (".jpg", ".jpeg", ".png", ".gif", ".webp", ".bmp")
or mime_type.startswith("image/")
)
is_audio = (
file_type == "audio"
or mime_type.startswith("audio/")
or file_ext in cls._AUDIO_SUFFIXES
)
if is_image or is_audio:
continue

url = file.get("url_private_download") or file.get("url_private")
if not url:
continue
attachments.append(
CommingMessage.MessageAttachment(
ref=f"slack://file/{quote(url, safe='')}",
name=file.get("name") or file.get("title"),
mime_type=file.get("mimetype"),
size=file.get("size"),
)
)
return attachments or None

def download_slack_file_to_data_url(self, file_url: str, source: str) -> Optional[str]:
"""
下载Slack文件并转为data URL
:param file_url: Slack私有文件URL
:param source: 来源名称
:return: data URL
"""
config = self.get_config(source)
if not config:
return None
client = self.get_instance(config.name)
if not client:
return None
file_data = client.download_file(file_url)
if file_data:
import base64

content, mime_type = file_data
return f"data:{mime_type};base64,{base64.b64encode(content).decode()}"
return None
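`download_slack_file_to_data_url` packs the downloaded bytes into an RFC 2397 data URL; the construction itself is just base64 plus the MIME prefix. A stand-alone sketch of that final step (the default MIME type here is an assumption for illustration):

```python
import base64


def to_data_url(content: bytes, mime_type: str = "application/octet-stream") -> str:
    # data:<mime>;base64,<payload> lets raw bytes travel inline as a URL.
    return f"data:{mime_type};base64,{base64.b64encode(content).decode()}"


url = to_data_url(b"hi", "text/plain")
```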
|
||||
|
||||
def download_slack_file_bytes(self, file_ref: str, source: str) -> Optional[bytes]:
|
||||
"""
|
||||
下载Slack音频文件并返回原始字节
|
||||
"""
|
||||
if not file_ref or not file_ref.startswith("slack://file/"):
|
||||
return None
|
||||
config = self.get_config(source)
|
||||
if not config:
|
||||
return None
|
||||
client = self.get_instance(config.name)
|
||||
if not client:
|
||||
return None
|
||||
file_url = unquote(file_ref.replace("slack://file/", "", 1))
|
||||
file_data = client.download_file(file_url)
|
||||
if file_data:
|
||||
content, _ = file_data
|
||||
return content
|
||||
return None
|
||||
|
||||
def post_message(self, message: Notification, **kwargs) -> None:
|
||||
"""
|
||||
发送消息
|
||||
@@ -303,16 +455,25 @@ class SlackModule(_ModuleBase, _MessageBase[Slack]):
|
||||
return
|
||||
client: Slack = self.get_instance(conf.name)
|
||||
if client:
|
||||
client.send_msg(
|
||||
title=message.title,
|
||||
text=message.text,
|
||||
image=message.image,
|
||||
userid=userid,
|
||||
link=message.link,
|
||||
buttons=message.buttons,
|
||||
original_message_id=message.original_message_id,
|
||||
original_chat_id=message.original_chat_id,
|
||||
)
|
||||
if message.file_path:
|
||||
client.send_file(
|
||||
file_path=message.file_path,
|
||||
file_name=message.file_name,
|
||||
title=message.title,
|
||||
text=message.text,
|
||||
userid=userid,
|
||||
)
|
||||
else:
|
||||
client.send_msg(
|
||||
title=message.title,
|
||||
text=message.text,
|
||||
image=message.image,
|
||||
userid=userid,
|
||||
link=message.link,
|
||||
buttons=message.buttons,
|
||||
original_message_id=message.original_message_id,
|
||||
original_chat_id=message.original_chat_id,
|
||||
)
|
||||
|
||||
def post_medias_message(
|
||||
self, message: Notification, medias: List[MediaInfo]
|
||||
@@ -396,6 +557,7 @@ class SlackModule(_ModuleBase, _MessageBase[Slack]):
|
||||
chat_id: Union[str, int],
|
||||
text: str,
|
||||
title: Optional[str] = None,
|
||||
buttons: Optional[List[List[dict]]] = None,
|
||||
) -> bool:
|
||||
"""
|
||||
编辑消息
|
||||
@@ -405,6 +567,7 @@ class SlackModule(_ModuleBase, _MessageBase[Slack]):
|
||||
:param chat_id: 聊天ID
|
||||
:param text: 新的消息内容
|
||||
:param title: 消息标题
|
||||
:param buttons: 新的按钮列表
|
||||
:return: 编辑是否成功
|
||||
"""
|
||||
if channel != self._channel:
|
||||
@@ -417,6 +580,7 @@ class SlackModule(_ModuleBase, _MessageBase[Slack]):
|
||||
result = client.send_msg(
|
||||
title=title or "",
|
||||
text=text,
|
||||
buttons=buttons,
|
||||
original_message_id=str(message_id),
|
||||
original_chat_id=str(chat_id),
|
||||
)
|
||||
@@ -442,26 +606,40 @@ class SlackModule(_ModuleBase, _MessageBase[Slack]):
|
||||
return None
|
||||
client: Slack = self.get_instance(conf.name)
|
||||
if client:
|
||||
result = client.send_msg(
|
||||
title=message.title or "",
|
||||
text=message.text,
|
||||
userid=userid,
|
||||
)
|
||||
if message.file_path:
|
||||
result = client.send_file(
|
||||
file_path=message.file_path,
|
||||
file_name=message.file_name,
|
||||
title=message.title,
|
||||
text=message.text,
|
||||
userid=userid,
|
||||
)
|
||||
else:
|
||||
result = client.send_msg(
|
||||
title=message.title or "",
|
||||
text=message.text,
|
||||
userid=userid,
|
||||
)
|
||||
if result and result[0]:
|
||||
# Slack 使用时间戳作为 message_id,chat_id 是频道ID
|
||||
# 注意:这里返回的是发送后的结果,需要获取实际的 message_id
|
||||
# 由于 Slack API 返回的是 result[1],包含完整响应,我们需要从中提取
|
||||
response_data = result[1]
|
||||
message_id = (
|
||||
response_data.get("ts")
|
||||
if isinstance(response_data, dict)
|
||||
else None
|
||||
)
|
||||
channel_id = (
|
||||
response_data.get("channel")
|
||||
if isinstance(response_data, dict)
|
||||
else None
|
||||
)
|
||||
message_id = None
|
||||
channel_id = None
|
||||
if hasattr(response_data, "get"):
|
||||
message_id = response_data.get("ts")
|
||||
channel_id = response_data.get("channel")
|
||||
if not message_id and hasattr(response_data, "data"):
|
||||
files = (response_data.data or {}).get("files") or []
|
||||
if files:
|
||||
message_id = files[0].get("id")
|
||||
shares = (
|
||||
files[0].get("shares", {})
|
||||
.get("private", {})
|
||||
)
|
||||
if shares:
|
||||
channel_id = next(iter(shares.keys()), None)
|
||||
return MessageResponse(
|
||||
message_id=message_id,
|
||||
chat_id=channel_id,
|
||||
|
||||
@@ -1,6 +1,7 @@
|
||||
import re
|
||||
from threading import Lock
|
||||
from typing import List, Optional
|
||||
from pathlib import Path
|
||||
from typing import List, Optional, Tuple
|
||||
from urllib.parse import quote
|
||||
|
||||
import requests
|
||||
@@ -12,6 +13,7 @@ from app.core.config import settings
|
||||
from app.core.context import MediaInfo, Context
|
||||
from app.core.metainfo import MetaInfo
|
||||
from app.log import logger
|
||||
from app.utils.http import RequestUtils
|
||||
from app.utils.string import StringUtils
|
||||
|
||||
lock = Lock()
|
||||
@@ -22,6 +24,7 @@ class Slack:
|
||||
_service: SocketModeHandler = None
|
||||
_ds_url = f"http://127.0.0.1:{settings.PORT}/api/v1/message?token={settings.API_TOKEN}"
|
||||
_channel = ""
|
||||
_oauth_token = ""
|
||||
|
||||
def __init__(self, SLACK_OAUTH_TOKEN: Optional[str] = None, SLACK_APP_TOKEN: Optional[str] = None,
|
||||
SLACK_CHANNEL: Optional[str] = None, **kwargs):
|
||||
@@ -40,6 +43,7 @@ class Slack:
|
||||
|
||||
self._client = slack_app.client
|
||||
self._channel = SLACK_CHANNEL
|
||||
self._oauth_token = SLACK_OAUTH_TOKEN
|
||||
|
||||
# 标记消息来源
|
||||
if kwargs.get("name"):
|
||||
@@ -102,6 +106,28 @@ class Slack:
|
||||
"""
|
||||
return True if self._client else False
|
||||
|
||||
    def download_file(self, file_url: str) -> Optional[Tuple[bytes, str]]:
        """
        Download a private Slack file
        :param file_url: Slack file URL
        :return: (file content, MIME type)
        """
        if not self._client or not self._oauth_token or not file_url:
            return None
        try:
            headers = {
                "Authorization": f"Bearer {self._oauth_token}",
                "User-Agent": settings.USER_AGENT,
                "Accept": "*/*",
            }
            resp = RequestUtils(headers=headers, timeout=30).get_res(file_url)
            if resp and resp.content:
                mime_type = resp.headers.get("Content-Type", "image/jpeg")
                return resp.content, mime_type.split(";")[0]
        except Exception as e:
            logger.error(f"Failed to download Slack file: {e}")
        return None

    def send_msg(self, title: str, text: Optional[str] = None,
                 image: Optional[str] = None, link: Optional[str] = None,
                 userid: Optional[str] = None, buttons: Optional[List[List[dict]]] = None,
@@ -221,6 +247,48 @@ class Slack:
            logger.error(f"Failed to send Slack message: {msg_e}")
            return False, str(msg_e)

    def send_file(
        self,
        file_path: str,
        title: Optional[str] = None,
        text: Optional[str] = None,
        userid: Optional[str] = None,
        file_name: Optional[str] = None,
    ):
        """
        Send a local file to Slack.
        """
        if not self._client:
            return False, "Message client is not ready"
        if not file_path:
            return False, "File path cannot be empty"

        local_file = Path(file_path)
        if not local_file.exists() or not local_file.is_file():
            return False, f"File does not exist: {local_file}"

        try:
            if userid:
                channel = userid
            else:
                channel = self.__find_public_channel()

            comment_parts = [part for part in [title, text] if part]
            initial_comment = "\n".join(comment_parts) if comment_parts else None

            with local_file.open("rb") as fp:
                result = self._client.files_upload_v2(
                    channel=channel,
                    file=fp,
                    filename=file_name or local_file.name,
                    title=title or (file_name or local_file.name),
                    initial_comment=initial_comment,
                )
            return True, result
        except Exception as err:
            logger.error(f"Failed to send Slack file: {err}")
            return False, str(err)
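The initial_comment assembly in send_file (keep only the parts that are set, then join) can be exercised on its own. A minimal sketch, independent of the Slack SDK:

```python
# Mirrors the comment assembly in send_file: drop unset parts, join the
# rest with newlines, and fall back to None when nothing is left.
def build_comment(title, text):
    parts = [part for part in [title, text] if part]
    return "\n".join(parts) if parts else None

print(build_comment("Title", None))  # Title
print(build_comment(None, None))     # None
```

Returning None (rather than an empty string) matters because the Slack upload call treats an absent initial_comment differently from an empty one.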

    def send_medias_msg(self, medias: List[MediaInfo], userid: Optional[str] = None, title: Optional[str] = None,
                        buttons: Optional[List[List[dict]]] = None,
                        original_message_id: Optional[str] = None,

@@ -162,7 +162,7 @@ class SubtitleModule(_ModuleBase):
                time.sleep(1)
            # Directory still does not exist and a folder name is available, so create it
            if not working_dir_item and folder_name:
                parent_dir_item = storageChain.get_file_item(storage, download_dir)
                parent_dir_item = storageChain.get_folder(storage, download_dir)
                if parent_dir_item:
                    working_dir_item = storageChain.create_folder(
                        parent_dir_item,

@@ -1,4 +1,6 @@
import json
from typing import Optional, Union, List, Tuple, Any
from urllib.parse import quote, unquote

from app.core.context import MediaInfo, Context
from app.log import logger
@@ -6,9 +8,34 @@ from app.modules import _ModuleBase, _MessageBase
from app.modules.synologychat.synologychat import SynologyChat
from app.schemas import MessageChannel, CommingMessage, Notification
from app.schemas.types import ModuleType
from app.utils.http import RequestUtils


class SynologyChatModule(_ModuleBase, _MessageBase[SynologyChat]):
    _IMAGE_SUFFIXES = (".png", ".jpg", ".jpeg", ".gif", ".webp", ".bmp", ".tiff", ".svg")
    _AUDIO_SUFFIXES = (".mp3", ".m4a", ".wav", ".ogg", ".oga", ".opus", ".aac",
                       ".amr", ".flac", ".mpga", ".mpeg", ".webm")

    def init_module(self) -> None:
        """
@@ -96,15 +123,189 @@ class SynologyChatModule(_ModuleBase, _MessageBase[SynologyChat]):
            user_id = int(message.get("user_id"))
            # Get the username
            user_name = message.get("username")
            if text and user_id:
                logger.info(f"Received SynologyChat message from {client_config.name}: "
                            f"userid={user_id}, username={user_name}, text={text}")
            images = self._extract_images(message)
            audio_refs = self._extract_audio_refs(message)
            files = self._extract_files(message)
            if (text or images or audio_refs or files) and user_id:
                logger.info(
                    f"Received SynologyChat message from {client_config.name}: "
                    f"userid={user_id}, username={user_name}, text={text}, "
                    f"images={len(images) if images else 0}, audios={len(audio_refs) if audio_refs else 0}, "
                    f"files={len(files) if files else 0}"
                )
                return CommingMessage(channel=MessageChannel.SynologyChat, source=client_config.name,
                                      userid=user_id, username=user_name, text=text)
                                      userid=user_id, username=user_name, text=text or "",
                                      images=images, audio_refs=audio_refs, files=files)
        except Exception as err:
            logger.debug(f"Failed to parse SynologyChat message: {str(err)}")
        return None

    @classmethod
    def _extract_images(
        cls, message: dict
    ) -> Optional[List[CommingMessage.MessageImage]]:
        images = []
        for key in ("file_url", "image_url", "pic_url"):
            value = message.get(key)
            if isinstance(value, str) and cls._looks_like_image(value):
                images.append(CommingMessage.MessageImage(ref=value))

        for key in ("attachments", "files"):
            raw_value = message.get(key)
            if not raw_value:
                continue
            try:
                parsed = json.loads(raw_value) if isinstance(raw_value, str) else raw_value
            except Exception:
                parsed = raw_value
            items = parsed if isinstance(parsed, list) else [parsed]
            for item in items:
                if isinstance(item, str) and cls._looks_like_image(item):
                    images.append(CommingMessage.MessageImage(ref=item))
                elif isinstance(item, dict):
                    url = item.get("url") or item.get("file_url") or item.get("image_url")
                    if isinstance(url, str) and cls._looks_like_image(url):
                        images.append(
                            CommingMessage.MessageImage(
                                ref=url,
                                name=item.get("name") or item.get("filename"),
                                mime_type=item.get("content_type")
                                or item.get("mime_type"),
                                size=item.get("size"),
                            )
                        )

        deduped = []
        for image in images:
            if image.ref not in [item.ref for item in deduped]:
                deduped.append(image)
        return deduped or None

    @classmethod
    def _extract_audio_refs(cls, message: dict) -> Optional[List[str]]:
        audio_refs = []
        for key in ("audio_url", "voice_url", "file_url"):
            value = message.get(key)
            if isinstance(value, str) and cls._looks_like_audio(value):
                audio_refs.append(f"synology://file/{quote(value, safe='')}")

        for key in ("attachments", "files"):
            raw_value = message.get(key)
            if not raw_value:
                continue
            try:
                parsed = json.loads(raw_value) if isinstance(raw_value, str) else raw_value
            except Exception:
                parsed = raw_value
            items = parsed if isinstance(parsed, list) else [parsed]
            for item in items:
                if isinstance(item, str) and cls._looks_like_audio(item):
                    audio_refs.append(f"synology://file/{quote(item, safe='')}")
                elif isinstance(item, dict):
                    url = item.get("url") or item.get("file_url") or item.get("audio_url")
                    if not isinstance(url, str):
                        continue
                    content_type = (
                        item.get("content_type")
                        or item.get("mime_type")
                        or ""
                    ).lower()
                    name = (
                        item.get("name")
                        or item.get("filename")
                        or ""
                    ).lower()
                    if content_type.startswith("audio/") or cls._looks_like_audio(url) or name.endswith(cls._AUDIO_SUFFIXES):
                        audio_refs.append(f"synology://file/{quote(url, safe='')}")

        deduped = []
        for audio_ref in audio_refs:
            if audio_ref not in deduped:
                deduped.append(audio_ref)
        return deduped or None

    @classmethod
    def _looks_like_image(cls, value: str) -> bool:
        if not value or not isinstance(value, str):
            return False
        lowered = value.lower()
        return lowered.startswith("http") and any(
            suffix in lowered for suffix in cls._IMAGE_SUFFIXES
        )

    @classmethod
    def _looks_like_audio(cls, value: str) -> bool:
        if not value or not isinstance(value, str):
            return False
        lowered = value.lower()
        return lowered.startswith("http") and any(
            suffix in lowered for suffix in cls._AUDIO_SUFFIXES
        )
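The `_looks_like_image` / `_looks_like_audio` heuristic can be tried in isolation. A minimal standalone sketch (module-level copy of the class constant, not the module's public API):

```python
# A ref counts as an image only when it is an http(s) URL and a known image
# suffix appears anywhere in it. Substring matching (rather than endswith)
# lets URLs that carry the filename before a query string still match.
IMAGE_SUFFIXES = (".png", ".jpg", ".jpeg", ".gif", ".webp", ".bmp", ".tiff", ".svg")

def looks_like_image(value: str) -> bool:
    if not value or not isinstance(value, str):
        return False
    lowered = value.lower()
    return lowered.startswith("http") and any(s in lowered for s in IMAGE_SUFFIXES)

print(looks_like_image("https://chat.example/file/pic.PNG?dl=1"))  # True
print(looks_like_image("/local/path/pic.png"))                     # False (no http prefix)
```

The trade-off of substring matching is that a URL merely containing ".png" in a query parameter also matches; for a chat webhook that mostly forwards direct download links this is an acceptable false-positive rate.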

    @classmethod
    def _extract_files(
        cls, message: dict
    ) -> Optional[List[CommingMessage.MessageAttachment]]:
        files = []
        for key in ("attachments", "files"):
            raw_value = message.get(key)
            if not raw_value:
                continue
            try:
                parsed = json.loads(raw_value) if isinstance(raw_value, str) else raw_value
            except Exception:
                parsed = raw_value
            items = parsed if isinstance(parsed, list) else [parsed]
            for item in items:
                if not isinstance(item, dict):
                    continue
                url = item.get("url") or item.get("file_url") or item.get("download_url")
                if not isinstance(url, str) or not url.startswith("http"):
                    continue
                content_type = (
                    item.get("content_type") or item.get("mime_type") or ""
                ).lower()
                name = (item.get("name") or item.get("filename") or "").lower()
                is_image = content_type.startswith("image/") or name.endswith(
                    cls._IMAGE_SUFFIXES
                ) or cls._looks_like_image(url)
                is_audio = content_type.startswith("audio/") or name.endswith(
                    cls._AUDIO_SUFFIXES
                ) or cls._looks_like_audio(url)
                if is_image or is_audio:
                    continue
                files.append(
                    CommingMessage.MessageAttachment(
                        ref=f"synology://file/{quote(url, safe='')}",
                        name=item.get("name") or item.get("filename"),
                        mime_type=item.get("content_type") or item.get("mime_type"),
                        size=item.get("size"),
                    )
                )

        deduped = []
        seen_refs = set()
        for file_item in files:
            if file_item.ref in seen_refs:
                continue
            seen_refs.add(file_item.ref)
            deduped.append(file_item)
        return deduped or None
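The de-duplication loop above can be sketched on its own; a set records seen refs while a list preserves first-occurrence order:

```python
# Order-preserving de-duplication, as used when collapsing repeated
# attachment refs: O(n) set membership instead of rescanning the list.
def dedupe_keep_order(refs):
    seen = set()
    deduped = []
    for ref in refs:
        if ref in seen:
            continue
        seen.add(ref)
        deduped.append(ref)
    return deduped

print(dedupe_keep_order(["a", "b", "a", "c", "b"]))  # ['a', 'b', 'c']
```

By contrast, the image variant (`image.ref not in [item.ref for item in deduped]`) rebuilds a list per element, which is O(n²); harmless for a handful of attachments, but the set form scales better.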

    def download_synologychat_file_bytes(self, file_ref: str, source: str) -> Optional[bytes]:
        """
        Download a Synology Chat audio file and return its raw bytes
        """
        if not file_ref or not file_ref.startswith("synology://file/"):
            return None
        if not self.get_config(source):
            return None
        file_url = unquote(file_ref.replace("synology://file/", "", 1))
        resp = RequestUtils(timeout=30).get_res(file_url)
        if resp and resp.content:
            return resp.content
        return None
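The `synology://file/` refs embed the original download URL percent-encoded with `quote(..., safe='')`, so the unquote above recovers it exactly. A minimal round-trip sketch (the host and path in the sample URL are illustrative only):

```python
from urllib.parse import quote, unquote

# Encode with safe='' so every reserved character (including '/' and ':')
# is percent-escaped; unquote restores the original string byte-for-byte.
PREFIX = "synology://file/"

def make_ref(url: str) -> str:
    return f"{PREFIX}{quote(url, safe='')}"

def resolve_ref(ref: str) -> str:
    return unquote(ref.replace(PREFIX, "", 1))

url = "https://nas.local:5000/webapi/entry.cgi?api=SYNO.Chat&file=voice 1.m4a"
print(resolve_ref(make_ref(url)) == url)  # True
```

Encoding everything keeps the embedded URL from being misparsed as path segments of the synthetic `synology://` scheme.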

    def post_message(self, message: Notification, **kwargs) -> None:
        """
        Send a message

@@ -131,11 +131,21 @@ class TelegramModule(_ModuleBase, _MessageBase[Telegram]):
            return None
        client: Telegram = self.get_instance(client_config.name)
        try:
            message: dict = json.loads(body)
            message = json.loads(body)
            while isinstance(message, str):
                message = json.loads(message)
        except Exception as err:
            logger.debug(f"Failed to parse Telegram message: {str(err)}")
            return None

        if not isinstance(message, dict):
            logger.debug(f"Invalid Telegram message format: {type(message)}")
            return None

        # Some forwarding chains wrap the payload in a Telegram Update shell
        if "message" in message and isinstance(message.get("message"), dict):
            message = message.get("message")
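The decode loop in the try block guards against payloads that were JSON-encoded more than once (a JSON string whose value is itself JSON). A minimal sketch of why the loop is needed:

```python
import json

# A payload that went through json.dumps() twice arrives as a string;
# decoding repeatedly while the result is still a string unwraps any depth.
def decode_nested_json(body):
    message = json.loads(body)
    while isinstance(message, str):
        message = json.loads(message)
    return message

double_encoded = json.dumps(json.dumps({"message_id": 7, "text": "hi"}))
print(decode_nested_json(double_encoded))  # {'message_id': 7, 'text': 'hi'}
```

A single `json.loads` on the double-encoded body would return a str, which is exactly the bug the old `message: dict = json.loads(body)` line masked with an annotation.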

        if message:
            # Handle button callbacks
            if "callback_query" in message:
@@ -204,17 +214,21 @@ class TelegramModule(_ModuleBase, _MessageBase[Telegram]):
            text = self._append_reply_markup_links(text, msg.get("reply_markup"))

            images = self._extract_images(msg)
            audio_refs = self._extract_audio_refs(msg)
            files = self._extract_files(msg)

            if user_id:
                if not text and not images:
                if not text and not images and not audio_refs and not files:
                    logger.debug(
                        f"Telegram message from {client_config.name} has no text or images"
                        f"Telegram message from {client_config.name} has no text, images, voice, or files"
                    )
                    return None

                logger.info(
                    f"Received Telegram message from {client_config.name}: "
                    f"userid={user_id}, username={user_name}, chat_id={chat_id}, text={text}, images={len(images) if images else 0}"
                    f"userid={user_id}, username={user_name}, chat_id={chat_id}, text={text}, "
                    f"images={len(images) if images else 0}, audios={len(audio_refs) if audio_refs else 0}, "
                    f"files={len(files) if files else 0}"
                )

                cleaned_text = (
@@ -253,11 +267,13 @@ class TelegramModule(_ModuleBase, _MessageBase[Telegram]):
                    text=cleaned_text,
                    chat_id=str(chat_id) if chat_id else None,
                    images=images if images else None,
                    audio_refs=audio_refs if audio_refs else None,
                    files=files if files else None,
                )
        return None

    @staticmethod
    def _extract_images(msg: dict) -> Optional[List[str]]:
    def _extract_images(msg: dict) -> Optional[List[CommingMessage.MessageImage]]:
        """
        Extract image file_ids from a Telegram message
        """
@@ -267,17 +283,73 @@ class TelegramModule(_ModuleBase, _MessageBase[Telegram]):
            largest_photo = photo[-1]
            file_id = largest_photo.get("file_id")
            if file_id:
                images.append(file_id)
                images.append(
                    CommingMessage.MessageImage(
                        ref=f"tg://file_id/{file_id}",
                        mime_type="image/jpeg",
                        size=largest_photo.get("file_size"),
                    )
                )

        document = msg.get("document")
        if document:
            file_id = document.get("file_id")
            mime_type = document.get("mime_type", "")
            if file_id and mime_type.startswith("image/"):
                images.append(file_id)
                images.append(
                    CommingMessage.MessageImage(
                        ref=f"tg://file_id/{file_id}",
                        name=document.get("file_name"),
                        mime_type=document.get("mime_type"),
                        size=document.get("file_size"),
                    )
                )

        return images if images else None

    @staticmethod
    def _extract_audio_refs(msg: dict) -> Optional[List[str]]:
        """
        Extract voice/audio file_ids from a Telegram message.
        """
        audio_refs = []
        voice = msg.get("voice")
        if voice:
            file_id = voice.get("file_id")
            if file_id:
                audio_refs.append(f"tg://voice_file_id/{file_id}")

        audio = msg.get("audio")
        if audio:
            file_id = audio.get("file_id")
            if file_id:
                audio_refs.append(f"tg://audio_file_id/{file_id}")

        return audio_refs if audio_refs else None

    @staticmethod
    def _extract_files(msg: dict) -> Optional[List[CommingMessage.MessageAttachment]]:
        """
        Extract non-image file attachments from a Telegram message.
        """
        document = msg.get("document")
        if not isinstance(document, dict):
            return None

        file_id = document.get("file_id")
        mime_type = (document.get("mime_type") or "").lower()
        if not file_id or mime_type.startswith("image/"):
            return None

        return [
            CommingMessage.MessageAttachment(
                ref=f"tg://document_file_id/{file_id}",
                name=document.get("file_name"),
                mime_type=document.get("mime_type"),
                size=document.get("file_size"),
            )
        ]

    @staticmethod
    def _embed_entity_links(text: str, entities: Optional[List[dict]]) -> str:
        """
@@ -379,17 +451,34 @@ class TelegramModule(_ModuleBase, _MessageBase[Telegram]):
            return
        client: Telegram = self.get_instance(conf.name)
        if client:
            client.send_msg(
                title=message.title,
                text=message.text,
                image=message.image,
                userid=userid,
                link=message.link,
                buttons=message.buttons,
                original_message_id=message.original_message_id,
                original_chat_id=message.original_chat_id,
                disable_web_page_preview=message.disable_web_page_preview,
            )
            if message.file_path:
                client.send_file(
                    file_path=message.file_path,
                    file_name=message.file_name,
                    title=message.title,
                    text=message.text,
                    userid=userid,
                    original_chat_id=message.original_chat_id,
                )
            elif message.voice_path:
                client.send_voice(
                    voice_path=message.voice_path,
                    userid=userid,
                    caption=message.voice_caption,
                    original_chat_id=message.original_chat_id,
                )
            else:
                client.send_msg(
                    title=message.title,
                    text=message.text,
                    image=message.image,
                    userid=userid,
                    link=message.link,
                    buttons=message.buttons,
                    original_message_id=message.original_message_id,
                    original_chat_id=message.original_chat_id,
                    disable_web_page_preview=message.disable_web_page_preview,
                )

    def post_medias_message(
        self, message: Notification, medias: List[MediaInfo]
@@ -475,6 +564,7 @@ class TelegramModule(_ModuleBase, _MessageBase[Telegram]):
        chat_id: Union[str, int],
        text: str,
        title: Optional[str] = None,
        buttons: Optional[List[List[dict]]] = None,
    ) -> bool:
        """
        Edit a message
@@ -484,6 +574,7 @@ class TelegramModule(_ModuleBase, _MessageBase[Telegram]):
        :param chat_id: Chat ID
        :param text: New message content
        :param title: Message title
        :param buttons: New button list
        :return: Whether the edit succeeded
        """
        if channel != self._channel:
@@ -498,6 +589,7 @@ class TelegramModule(_ModuleBase, _MessageBase[Telegram]):
            message_id=message_id,
            text=text,
            title=title,
            buttons=buttons,
        )
        if result:
            return True
@@ -521,14 +613,22 @@ class TelegramModule(_ModuleBase, _MessageBase[Telegram]):
            return None
        client: Telegram = self.get_instance(conf.name)
        if client:
            result = client.send_msg(
                title=message.title,
                text=message.text,
                image=message.image,
                userid=userid,
                link=message.link,
                disable_web_page_preview=message.disable_web_page_preview,
            )
            if message.voice_path:
                result = client.send_voice(
                    voice_path=message.voice_path,
                    userid=userid,
                    caption=message.voice_caption,
                    original_chat_id=message.original_chat_id,
                )
            else:
                result = client.send_msg(
                    title=message.title,
                    text=message.text,
                    image=message.image,
                    userid=userid,
                    link=message.link,
                    disable_web_page_preview=message.disable_web_page_preview,
                )
        if result and result.get("success"):
            return MessageResponse(
                message_id=result.get("message_id"),
@@ -591,7 +691,7 @@ class TelegramModule(_ModuleBase, _MessageBase[Telegram]):
            )
            client.register_commands(filtered_scoped_commands)

    def download_file_to_base64(self, file_id: str, source: str) -> Optional[str]:
    def download_telegram_file_to_base64(self, file_id: str, source: str) -> Optional[str]:
        """
        Download a Telegram file and convert it to base64
        :param file_id: Telegram file ID
@@ -610,3 +710,15 @@ class TelegramModule(_ModuleBase, _MessageBase[Telegram]):

            return base64.b64encode(file_content).decode()
        return None

    def download_telegram_file_bytes(self, file_id: str, source: str) -> Optional[bytes]:
        """
        Download a Telegram file and return its raw bytes.
        """
        config = self.get_config(source)
        if not config:
            return None
        client = self.get_instance(config.name)
        if not client:
            return None
        return client.download_file(file_id)

@@ -1,8 +1,10 @@
import asyncio
import json
import re
import threading
import time
from typing import Optional, List, Dict, Callable, Union
from pathlib import Path
from typing import Any, Optional, List, Dict, Callable, Union
from urllib.parse import urljoin, quote

from telebot import TeleBot, apihelper
@@ -113,7 +115,11 @@ class Telegram:
            if self._should_process_message(message):
                # Start the continuous "typing" status task
                self._start_typing_task(message.chat.id)
                RequestUtils(timeout=15).post_res(self._ds_url, json=message.json)
                payload = self._serialize_update_payload(message)
                if not payload:
                    logger.warn("Failed to serialize Telegram message, skipping forward")
                    return
                RequestUtils(timeout=15).post_res(self._ds_url, json=payload)

        @_bot.callback_query_handler(func=lambda call: True)
        def callback_query(call):
@@ -200,14 +206,48 @@ class Telegram:
            return None
        try:
            file_info = self._bot.get_file(file_id)
            file_url = f"https://api.telegram.org/file/bot{self._telegram_token}/{file_info.file_path}"
            resp = RequestUtils(timeout=30).get_res(file_url)
            file_url = apihelper.FILE_URL.format(
                self._telegram_token, file_info.file_path
            )
            resp = RequestUtils(
                proxies=apihelper.proxy, timeout=30
            ).get_res(file_url)
            if resp and resp.content:
                logger.info(
                    "Telegram image downloaded successfully: file_id=%s, file_path=%s, content_bytes=%s",
                    file_id,
                    file_info.file_path,
                    len(resp.content),
                )
                return resp.content
            logger.warn(
                "Telegram image download failed: file_id=%s, file_path=%s, file_url=%s, proxy_enabled=%s",
                file_id,
                getattr(file_info, "file_path", None),
                file_url,
                bool(apihelper.proxy),
            )
        except Exception as e:
            logger.error(f"Failed to download Telegram file: {e}")
        return None

    @staticmethod
    def _serialize_update_payload(message: Any) -> Optional[dict]:
        """
        Serialize the Telegram Message object into a stable dict so that the
        requests json parameter does not wrap it in another string layer.
        """
        try:
            if hasattr(message, "to_dict"):
                payload = message.to_dict()
            else:
                payload = getattr(message, "json", None) or message
            if isinstance(payload, str):
                payload = json.loads(payload)
            return payload if isinstance(payload, dict) else None
        except Exception as e:
            logger.error(f"Failed to serialize Telegram message: {e}")
            return None
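The fallback chain in `_serialize_update_payload` (prefer `to_dict()`, then a `.json` attribute, decoding once if the result is still a string) can be sketched with stand-in classes; `WithToDict` and `WithJsonStr` below are dummies, not telebot types:

```python
import json

# Sketch of the serializer's fallback chain: to_dict() first, then a .json
# attribute (telebot exposes it as a string), with one decode if needed.
class WithToDict:
    def to_dict(self):
        return {"text": "hi"}

class WithJsonStr:
    json = '{"text": "hi"}'

def serialize(obj):
    try:
        if hasattr(obj, "to_dict"):
            payload = obj.to_dict()
        else:
            payload = getattr(obj, "json", None) or obj
        if isinstance(payload, str):
            payload = json.loads(payload)
        return payload if isinstance(payload, dict) else None
    except Exception:
        return None

print(serialize(WithToDict()))   # {'text': 'hi'}
print(serialize(WithJsonStr()))  # {'text': 'hi'}
```

Returning None for anything that does not normalize to a dict is what lets the caller skip the forward instead of posting a double-encoded string.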

    def _update_user_chat_mapping(self, userid: int, chat_id: int) -> None:
        """
        Update the user-to-chat mapping
@@ -384,7 +424,12 @@ class Telegram:
        if original_message_id and original_chat_id:
            # Edit the message
            result = self.__edit_message(
                original_chat_id, original_message_id, caption, buttons, image
                original_chat_id,
                original_message_id,
                caption,
                buttons,
                image,
                disable_web_page_preview=disable_web_page_preview,
            )
            self._stop_typing_task(chat_id)
            return {
@@ -417,6 +462,115 @@ class Telegram:
            self._stop_typing_task(chat_id)
            return {"success": False}

    def send_voice(
        self,
        voice_path: str,
        userid: Optional[str] = None,
        caption: Optional[str] = None,
        original_chat_id: Optional[str] = None,
    ) -> Optional[dict]:
        """
        Send a Telegram voice message.
        """
        if not self._bot or not voice_path:
            return None

        chat_id = self._determine_target_chat_id(userid, original_chat_id)
        voice_file = Path(voice_path)
        if not voice_file.exists():
            logger.error(f"Voice file does not exist: {voice_file}")
            return {"success": False}

        try:
            with voice_file.open("rb") as fp:
                sent = self._bot.send_voice(
                    chat_id=chat_id,
                    voice=fp,
                    caption=standardize(caption) if caption else None,
                    parse_mode="MarkdownV2" if caption else None,
                )
            self._stop_typing_task(chat_id)
            if sent and hasattr(sent, "message_id"):
                return {
                    "success": True,
                    "message_id": sent.message_id,
                    "chat_id": sent.chat.id if hasattr(sent, "chat") else chat_id,
                }
            return {"success": bool(sent)}
        except Exception as err:
            logger.error(f"Failed to send voice message: {err}")
            self._stop_typing_task(chat_id)
            return {"success": False}
        finally:
            try:
                voice_file.unlink(missing_ok=True)
            except Exception as cleanup_err:
                logger.debug(f"Failed to clean up temporary voice file: {cleanup_err}")

    def send_file(
        self,
        file_path: str,
        userid: Optional[str] = None,
        title: Optional[str] = None,
        text: Optional[str] = None,
        file_name: Optional[str] = None,
        original_chat_id: Optional[str] = None,
    ) -> Optional[dict]:
        """
        Send a local image or file to a Telegram user.
        """
        if not self._bot or not file_path:
            return None

        local_file = Path(file_path)
        if not local_file.exists() or not local_file.is_file():
            logger.error(f"Attachment file does not exist: {local_file}")
            return {"success": False}

        chat_id = self._determine_target_chat_id(userid, original_chat_id)
        send_name = file_name or local_file.name
        suffix = local_file.suffix.lower()
        is_image = suffix in {".png", ".jpg", ".jpeg", ".gif", ".webp", ".bmp"}

        try:
            bold_title = (
                f"**{standardize(title).removesuffix('\n')}**" if title else None
            )
            if bold_title and text:
                caption = f"{bold_title}\n{text}"
            elif bold_title:
                caption = bold_title
            else:
                caption = text or ""

            with local_file.open("rb") as fp:
                if is_image:
                    sent = self._bot.send_photo(
                        chat_id=chat_id,
                        photo=fp,
                        caption=standardize(caption) if caption else None,
                        parse_mode="MarkdownV2" if caption else None,
                    )
                else:
                    sent = self._bot.send_document(
                        chat_id=chat_id,
                        document=(send_name, fp),
                        caption=standardize(caption) if caption else None,
                        parse_mode="MarkdownV2" if caption else None,
                    )
            self._stop_typing_task(chat_id)
            if sent and hasattr(sent, "message_id"):
                return {
                    "success": True,
                    "message_id": sent.message_id,
                    "chat_id": sent.chat.id if hasattr(sent, "chat") else chat_id,
                }
            return {"success": bool(sent)}
        except Exception as err:
            logger.error(f"Failed to send local attachment: {err}")
            self._stop_typing_task(chat_id)
            return {"success": False}

    def _determine_target_chat_id(
        self, userid: Optional[str] = None, original_chat_id: Optional[str] = None
    ) -> str:
@@ -681,6 +835,7 @@ class Telegram:
        message_id: Union[str, int],
        text: str,
        title: Optional[str] = None,
        buttons: Optional[List[List[dict]]] = None,
    ) -> Optional[bool]:
        """
        Edit a Telegram message (public method)
@@ -688,6 +843,7 @@ class Telegram:
        :param message_id: Message ID
        :param text: New message content
        :param title: Message title
        :param buttons: New button list
        :return: Whether the edit succeeded
        """
        if not self._bot:
@@ -707,6 +863,7 @@ class Telegram:
                chat_id=str(chat_id),
                message_id=int(message_id),
                text=caption,
                buttons=buttons,
            )
        except Exception as e:
            logger.error(f"Error editing Telegram message: {str(e)}")
@@ -719,6 +876,7 @@ class Telegram:
        text: str,
        buttons: Optional[List[List[dict]]] = None,
        image: Optional[str] = None,
        disable_web_page_preview: Optional[bool] = None,
    ) -> Optional[bool]:
        """
        Edit an already-sent message
@@ -727,6 +885,7 @@ class Telegram:
        :param text: New message content
        :param buttons: Button list
        :param image: Image URL or path
        :param disable_web_page_preview: Whether to disable link previews (effective only for text-only edits)
        :return: Whether the edit succeeded
        """
        if not self._bot:
@@ -751,13 +910,18 @@ class Telegram:
                )
            else:
                # Without an image, use edit_message_text
                self._bot.edit_message_text(
                    chat_id=chat_id,
                    message_id=message_id,
                    text=standardize(text),
                    parse_mode="MarkdownV2",
                    reply_markup=reply_markup,
                )
                edit_text_kwargs: Dict[str, Any] = {
                    "chat_id": chat_id,
                    "message_id": message_id,
                    "text": standardize(text),
                    "parse_mode": "MarkdownV2",
                    "reply_markup": reply_markup,
                }
                if disable_web_page_preview is not None:
                    edit_text_kwargs["disable_web_page_preview"] = (
                        disable_web_page_preview
                    )
                self._bot.edit_message_text(**edit_text_kwargs)
            return True
        except Exception as e:
            logger.error(f"Failed to edit message: {str(e)}")

@@ -1,4 +1,5 @@
|
||||
import json
|
||||
from urllib.parse import quote, unquote
|
||||
from typing import Optional, Union, List, Tuple, Any, Dict
|
||||
|
||||
from app.core.context import Context, MediaInfo
|
||||
@@ -10,6 +11,30 @@ from app.schemas.types import ModuleType
|
||||
|
||||
|
||||
class VoceChatModule(_ModuleBase, _MessageBase[VoceChat]):
|
||||
_IMAGE_SUFFIXES = (
|
||||
".png",
|
||||
".jpg",
|
||||
".jpeg",
|
||||
".gif",
|
||||
".webp",
|
||||
".bmp",
|
||||
".tiff",
|
||||
".svg",
|
||||
)
|
||||
_AUDIO_SUFFIXES = (
|
||||
".mp3",
|
||||
".m4a",
|
||||
".wav",
|
||||
".ogg",
|
||||
".oga",
|
||||
".opus",
|
||||
".aac",
|
||||
".amr",
|
||||
".flac",
|
||||
".mpga",
|
||||
".mpeg",
|
||||
".webm",
|
||||
)
|
||||
|
||||
def init_module(self) -> None:
|
||||
"""
|
||||
@@ -99,12 +124,19 @@ class VoceChatModule(_ModuleBase, _MessageBase[VoceChat]):
|
||||
msg_body = json.loads(body)
|
||||
# 类型
|
||||
msg_type = msg_body.get("detail", {}).get("type")
|
||||
if msg_type != "normal":
|
||||
# 非新消息
|
||||
if msg_type not in ("normal", "reply"):
|
||||
# 非新消息/回复
|
||||
return None
|
||||
logger.debug(f"收到VoceChat请求:{msg_body}")
|
||||
# 文本内容
|
||||
content = msg_body.get("detail", {}).get("content")
|
||||
detail = msg_body.get("detail", {}) or {}
|
||||
content_type = detail.get("content_type") or ""
|
||||
content = detail.get("content")
|
||||
images = self._extract_images(detail)
|
||||
audio_refs = self._extract_audio_refs(detail)
|
||||
files = self._extract_files(detail)
|
||||
text = None
|
||||
if content_type in ("text/plain", "text/markdown") and isinstance(content, str):
|
||||
text = content
|
||||
# 用户ID
|
||||
gid = msg_body.get("target", {}).get("gid")
|
||||
channel_id = client_config.config.get("channel_id")
|
||||
@@ -116,14 +148,149 @@ class VoceChatModule(_ModuleBase, _MessageBase[VoceChat]):
|
||||
userid = f"UID#{msg_body.get('from_uid')}"
|
||||
|
||||
# 处理消息内容
|
||||
if content and userid:
|
||||
logger.info(f"收到来自 {client_config.name} 的VoceChat消息:userid={userid}, text={content}")
|
||||
if (text or images or audio_refs or files) and userid:
|
||||
logger.info(
|
||||
f"收到来自 {client_config.name} 的VoceChat消息:"
|
||||
f"userid={userid}, text={text}, images={len(images) if images else 0}, "
|
||||
f"audios={len(audio_refs) if audio_refs else 0}, files={len(files) if files else 0}"
|
||||
)
|
||||
return CommingMessage(channel=MessageChannel.VoceChat, source=client_config.name,
|
||||
userid=userid, username=userid, text=content)
|
||||
userid=userid, username=userid, text=text or "",
|
||||
images=images, audio_refs=audio_refs, files=files)
|
||||
except Exception as err:
|
||||
logger.error(f"VoceChat消息处理发生错误:{str(err)}")
|
||||
return None
|
||||
|
||||
@classmethod
|
||||
def _extract_images(
|
||||
cls, detail: dict
|
||||
) -> Optional[List[CommingMessage.MessageImage]]:
|
||||
content_type = detail.get("content_type") or ""
|
||||
if content_type != "vocechat/file":
|
||||
return None
|
||||
properties = detail.get("properties") or {}
|
||||
mime_type = (
|
||||
properties.get("content_type")
|
||||
or properties.get("mime_type")
|
||||
or properties.get("contentType")
|
||||
or ""
|
||||
).lower()
|
||||
file_path = (
|
||||
properties.get("path")
|
||||
or properties.get("file_path")
|
||||
or properties.get("storage_path")
|
||||
or detail.get("content")
|
||||
)
|
||||
direct_url = (
|
||||
properties.get("url")
|
||||
or properties.get("download_url")
|
||||
or properties.get("file_url")
|
||||
)
|
||||
file_name = (
|
||||
properties.get("name")
|
||||
or properties.get("filename")
|
||||
or (str(file_path).rsplit("/", 1)[-1] if file_path else "")
|
||||
).lower()
|
||||
|
||||
is_image = mime_type.startswith("image/") or file_name.endswith(cls._IMAGE_SUFFIXES)
|
||||
if not is_image:
|
||||
return None
|
||||
if isinstance(direct_url, str) and direct_url.startswith("http"):
|
||||
return [
|
||||
CommingMessage.MessageImage(
|
||||
ref=direct_url,
|
||||
name=properties.get("name") or properties.get("filename"),
|
||||
mime_type=mime_type or None,
|
||||
size=properties.get("size"),
|
||||
)
|
||||
]
|
||||
if isinstance(file_path, str) and file_path:
|
||||
return [
|
||||
CommingMessage.MessageImage(
|
||||
ref=f"vocechat://file/{quote(file_path, safe='')}",
|
||||
name=properties.get("name") or properties.get("filename"),
|
||||
mime_type=mime_type or None,
|
||||
size=properties.get("size"),
|
||||
)
|
||||
]
|
||||
return None
|
||||
|
||||
    @classmethod
    def _extract_audio_refs(cls, detail: dict) -> Optional[List[str]]:
        content_type = detail.get("content_type") or ""
        if content_type != "vocechat/file":
            return None
        properties = detail.get("properties") or {}
        mime_type = (
            properties.get("content_type")
            or properties.get("mime_type")
            or properties.get("contentType")
            or ""
        ).lower()
        file_path = (
            properties.get("path")
            or properties.get("file_path")
            or properties.get("storage_path")
            or detail.get("content")
        )
        file_name = (
            properties.get("name")
            or properties.get("filename")
            or (str(file_path).rsplit("/", 1)[-1] if file_path else "")
        ).lower()

        is_audio = mime_type.startswith("audio/") or file_name.endswith(cls._AUDIO_SUFFIXES)
        if not is_audio:
            return None
        if isinstance(file_path, str) and file_path:
            return [f"vocechat://file/{quote(file_path, safe='')}"]
        return None

    @classmethod
    def _extract_files(
        cls, detail: dict
    ) -> Optional[List[CommingMessage.MessageAttachment]]:
        content_type = detail.get("content_type") or ""
        if content_type != "vocechat/file":
            return None
        properties = detail.get("properties") or {}
        mime_type = (
            properties.get("content_type")
            or properties.get("mime_type")
            or properties.get("contentType")
            or ""
        ).lower()
        file_path = (
            properties.get("path")
            or properties.get("file_path")
            or properties.get("storage_path")
            or detail.get("content")
        )
        file_name = (
            properties.get("name")
            or properties.get("filename")
            or (str(file_path).rsplit("/", 1)[-1] if file_path else "")
        )
        lowered_name = str(file_name).lower()
        is_image = mime_type.startswith("image/") or lowered_name.endswith(
            cls._IMAGE_SUFFIXES
        )
        is_audio = mime_type.startswith("audio/") or lowered_name.endswith(
            cls._AUDIO_SUFFIXES
        )
        if is_image or is_audio or not isinstance(file_path, str) or not file_path:
            return None
        return [
            CommingMessage.MessageAttachment(
                ref=f"vocechat://file/{quote(file_path, safe='')}",
                name=file_name,
                mime_type=properties.get("content_type")
                or properties.get("mime_type")
                or properties.get("contentType"),
                size=properties.get("size"),
            )
        ]

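The extractors above pack the VoceChat storage path into a `vocechat://file/<percent-encoded-path>` reference, which the download helpers later unwrap with `unquote`. A minimal sketch of that round trip (the scheme string comes from the code above; the helper names are illustrative):

```python
from typing import Optional
from urllib.parse import quote, unquote

SCHEME = "vocechat://file/"

def build_file_ref(path: str) -> str:
    # Percent-encode everything, including "/", so the storage path
    # travels as a single opaque token inside the ref.
    return f"{SCHEME}{quote(path, safe='')}"

def parse_file_ref(ref: str) -> Optional[str]:
    # Inverse: strip the scheme prefix, then undo the percent-encoding.
    if not ref.startswith(SCHEME):
        return None
    return unquote(ref.replace(SCHEME, "", 1))

ref = build_file_ref("2024/03/poster 01.jpg")
print(ref)                  # vocechat://file/2024%2F03%2Fposter%2001.jpg
print(parse_file_ref(ref))  # 2024/03/poster 01.jpg
```

Encoding with `safe=''` is what keeps slashes inside the path from being mistaken for ref separators.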
    def post_message(self, message: Notification, **kwargs) -> None:
        """
        发送消息
@@ -136,11 +303,11 @@ class VoceChatModule(_ModuleBase, _MessageBase[VoceChat]):
            targets = message.targets
            userid = message.userid
            if not message.userid and targets:
                userid = targets.get('telegram_userid')
                userid = targets.get('vocechat_userid')
            client: VoceChat = self.get_instance(conf.name)
            if client:
                client.send_msg(title=message.title, text=message.text,
                                userid=userid, link=message.link)
                                image=message.image, userid=userid, link=message.link)

    def post_medias_message(self, message: Notification, medias: List[MediaInfo]) -> None:
        """
@@ -182,3 +349,37 @@ class VoceChatModule(_ModuleBase, _MessageBase[VoceChat]):

    def register_commands(self, commands: Dict[str, dict]):
        pass

    def download_vocechat_image_to_data_url(self, image_ref: str, source: str) -> Optional[str]:
        """
        下载 VoceChat 图片并转换为 data URL
        """
        if not image_ref or not image_ref.startswith("vocechat://file/"):
            return None
        client_config = self.get_config(source)
        if not client_config:
            return None
        client: VoceChat = self.get_instance(client_config.name)
        if not client:
            return None
        file_path = unquote(image_ref.replace("vocechat://file/", "", 1))
        return client.download_file_to_data_url(file_path)

    def download_vocechat_file_bytes(self, file_ref: str, source: str) -> Optional[bytes]:
        """
        下载 VoceChat 文件并返回原始字节
        """
        if not file_ref or not file_ref.startswith("vocechat://file/"):
            return None
        client_config = self.get_config(source)
        if not client_config:
            return None
        client: VoceChat = self.get_instance(client_config.name)
        if not client:
            return None
        file_path = unquote(file_ref.replace("vocechat://file/", "", 1))
        file_data = client.download_file(file_path)
        if file_data:
            content, _ = file_data
            return content
        return None

@@ -1,6 +1,8 @@
import re
import threading
from typing import Optional, List
import base64
from typing import Optional, List, Tuple
from urllib.parse import quote

from app.core.context import MediaInfo, Context
from app.core.metainfo import MetaInfo
@@ -21,6 +23,7 @@ class VoceChat:
    _channel_id = None
    # 请求对象
    _client = None
    _file_client = None

    def __init__(self, VOCECHAT_HOST: Optional[str] = None, VOCECHAT_API_KEY: Optional[str] = None, VOCECHAT_CHANNEL_ID: Optional[str] = None, **kwargs):
        """
@@ -29,12 +32,11 @@ class VoceChat:
        if not VOCECHAT_HOST or not VOCECHAT_API_KEY or not VOCECHAT_CHANNEL_ID:
            logger.error("VoceChat配置不完整!")
            return
        self._host = VOCECHAT_HOST
        if self._host:
            if not self._host.endswith("/"):
                self._host += "/"
            if not self._host.startswith("http"):
                self._playhost = "http://" + self._host
        self._host = VOCECHAT_HOST.strip()
        if self._host and not self._host.startswith("http"):
            self._host = f"http://{self._host}"
        if self._host and not self._host.endswith("/"):
            self._host += "/"
        self._apikey = VOCECHAT_API_KEY
        self._channel_id = VOCECHAT_CHANNEL_ID
        if self._apikey and self._host and self._channel_id:
@@ -43,6 +45,10 @@ class VoceChat:
                "x-api-key": self._apikey,
                "accept": "application/json; charset=utf-8"
            })
            self._file_client = RequestUtils(headers={
                "x-api-key": self._apikey,
                "accept": "*/*"
            })

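The rewritten host normalization above trims whitespace, adds the scheme first, then guarantees exactly one trailing slash (the old code appended the slash before checking the scheme and wrote the result into the wrong attribute). The same logic as a standalone helper, name illustrative:

```python
def normalize_host(host: str) -> str:
    # Trim whitespace, prepend http:// when no scheme is given,
    # then guarantee exactly one trailing slash.
    host = (host or "").strip()
    if host and not host.startswith("http"):
        host = f"http://{host}"
    if host and not host.endswith("/"):
        host += "/"
    return host

print(normalize_host(" voce.example.com "))         # http://voce.example.com/
print(normalize_host("https://voce.example.com/"))  # https://voce.example.com/
```

Doing the scheme check before the slash check is what makes both inputs converge on a usable base URL.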
    def get_state(self):
        """
@@ -61,6 +67,7 @@ class VoceChat:
        return result.json()

    def send_msg(self, title: str, text: Optional[str] = None,
                 image: Optional[str] = None,
                 userid: Optional[str] = None, link: Optional[str] = None) -> Optional[bool]:
        """
        VoceChat消息发送入口,支持文本、图片、链接跳转、指定发送对象
@@ -83,6 +90,9 @@ class VoceChat:
        else:
            caption = f"**{title}**"

        if image:
            caption = f"{caption}\n"

        if link:
            caption = f"{caption}\n[查看详情]({link})"

@@ -97,6 +107,46 @@ class VoceChat:
            logger.error(f"发送消息失败:{msg_e}")
            return False

    @staticmethod
    def _guess_mime_type(content: bytes, default: str = "image/jpeg") -> str:
        if not content:
            return default
        if content.startswith(b"\x89PNG\r\n\x1a\n"):
            return "image/png"
        if content.startswith(b"\xff\xd8\xff"):
            return "image/jpeg"
        if content.startswith((b"GIF87a", b"GIF89a")):
            return "image/gif"
        if content.startswith(b"BM"):
            return "image/bmp"
        if content.startswith(b"RIFF") and b"WEBP" in content[:16]:
            return "image/webp"
        return default

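`_guess_mime_type` sniffs the MIME type from well-known file signatures ("magic bytes") instead of trusting headers. A self-contained version of the same logic, as a free function for illustration:

```python
def guess_mime_type(content: bytes, default: str = "image/jpeg") -> str:
    # Each format starts with a fixed byte signature.
    if not content:
        return default
    if content.startswith(b"\x89PNG\r\n\x1a\n"):     # PNG
        return "image/png"
    if content.startswith(b"\xff\xd8\xff"):          # JPEG/JFIF
        return "image/jpeg"
    if content.startswith((b"GIF87a", b"GIF89a")):   # GIF
        return "image/gif"
    if content.startswith(b"BM"):                    # BMP
        return "image/bmp"
    if content.startswith(b"RIFF") and b"WEBP" in content[:16]:  # WebP (RIFF container)
        return "image/webp"
    return default

print(guess_mime_type(b"\x89PNG\r\n\x1a\n" + b"\x00" * 8))  # image/png
print(guess_mime_type(b"RIFF\x00\x00\x00\x00WEBPVP8 "))     # image/webp
```

WebP needs the two-part check because the file starts with the generic RIFF container tag, with `WEBP` appearing a few bytes later.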
    def download_file(self, path: str) -> Optional[Tuple[bytes, str]]:
        """
        下载 VoceChat 文件资源
        """
        if not path or not self._file_client:
            return None
        req_url = f"{self._host}api/resource/file?path={quote(path, safe='')}"
        try:
            res = self._file_client.get_res(req_url)
        except Exception as err:
            logger.error(f"VoceChat 文件下载失败:{err}")
            return None
        if not res or not res.content:
            return None
        mime_type = (res.headers.get("Content-Type") or "").split(";")[0].strip()
        return res.content, mime_type or self._guess_mime_type(res.content)

    def download_file_to_data_url(self, path: str) -> Optional[str]:
        file_data = self.download_file(path)
        if not file_data:
            return None
        content, mime_type = file_data
        return f"data:{mime_type};base64,{base64.b64encode(content).decode()}"

    def send_medias_msg(self, title: str, medias: List[MediaInfo],
                        userid: Optional[str] = None, link: Optional[str] = None) -> Optional[bool]:
        """

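A data URL as produced by `download_file_to_data_url` is simply `data:<mime>;base64,<payload>` (RFC 2397). A minimal sketch of building one and decoding it back, helper names illustrative:

```python
import base64
from typing import Tuple

def to_data_url(content: bytes, mime_type: str) -> str:
    # Inline the bytes as base64 with an explicit MIME type.
    return f"data:{mime_type};base64,{base64.b64encode(content).decode()}"

def from_data_url(data_url: str) -> Tuple[bytes, str]:
    # Inverse: split off the header, decode the base64 payload.
    header, payload = data_url.split(",", 1)
    mime_type = header[len("data:"):].split(";")[0]
    return base64.b64decode(payload), mime_type

url = to_data_url(b"\xff\xd8\xff\xe0", "image/jpeg")
print(url)             # data:image/jpeg;base64,/9j/4A==
print(from_data_url(url))  # (b'\xff\xd8\xff\xe0', 'image/jpeg')
```

Embedding the bytes in the URL itself is what lets the image travel through the message chain without a second HTTP fetch.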
@@ -1,6 +1,9 @@
import copy
import json
import re
import xml.dom.minidom
from typing import Optional, Union, List, Tuple, Any, Dict
from urllib.parse import quote

from app.core.context import Context, MediaInfo
from app.core.event import eventmanager
@@ -103,7 +106,7 @@ class WechatModule(_ModuleBase, _MessageBase[WeChat]):
        if not client_config:
            return None
        if self._is_bot_mode(client_config.config):
            return None
            return self._parse_bot_message(source=source, body=body, client_config=client_config)
        client: WeChat = self.get_instance(client_config.name)
        # URL参数
        sVerifyMsgSig = args.get("msg_signature")
@@ -163,6 +166,10 @@ class WechatModule(_ModuleBase, _MessageBase[WeChat]):
                logger.warn(f"解析不到消息类型和用户ID")
                return None
            # 解析消息内容
            content = None
            images = None
            audio_refs = None
            files = None
            if msg_type == "event" and event == "click":
                # 校验用户有权限执行交互命令
                if client_config.config.get('WECHAT_ADMINS'):
@@ -178,17 +185,125 @@ class WechatModule(_ModuleBase, _MessageBase[WeChat]):
                # 文本消息
                content = DomUtils.tag_value(root_node, "Content", default="")
                logger.info(f"收到来自 {client_config.name} 的微信消息:userid={user_id}, text={content}")
            elif msg_type == "image":
                media_id = DomUtils.tag_value(root_node, "MediaId")
                pic_url = DomUtils.tag_value(root_node, "PicUrl")
                if media_id:
                    images = [CommingMessage.MessageImage(ref=f"wxwork://media_id/{media_id}")]
                elif pic_url:
                    images = [CommingMessage.MessageImage(ref=pic_url)]
                logger.info(
                    f"收到来自 {client_config.name} 的微信图片消息:userid={user_id}, images={len(images) if images else 0}"
                )
            elif msg_type == "voice":
                media_id = DomUtils.tag_value(root_node, "MediaId")
                recognition = DomUtils.tag_value(root_node, "Recognition", default="")
                content = (recognition or "").strip()
                if media_id:
                    audio_refs = [f"wxwork://voice_media_id/{media_id}"]
                logger.info(
                    f"收到来自 {client_config.name} 的微信语音消息:userid={user_id}, "
                    f"text={content}, audios={len(audio_refs) if audio_refs else 0}"
                )
            elif msg_type == "file":
                media_id = DomUtils.tag_value(root_node, "MediaId")
                file_name = DomUtils.tag_value(root_node, "FileName")
                if media_id:
                    files = [
                        CommingMessage.MessageAttachment(
                            ref=f"wxwork://file_media_id/{media_id}",
                            name=file_name,
                        )
                    ]
                logger.info(
                    f"收到来自 {client_config.name} 的微信文件消息:userid={user_id}, files={len(files) if files else 0}"
                )
            else:
                return None

            if content:
            if content or images or audio_refs or files:
                # 处理消息内容
                return CommingMessage(channel=MessageChannel.Wechat, source=client_config.name,
                                      userid=user_id, username=user_id, text=content)
                                      userid=user_id, username=user_id, text=content or "",
                                      images=images, audio_refs=audio_refs, files=files)
        except Exception as err:
            logger.error(f"微信消息处理发生错误:{str(err)}")
        return None

    def _parse_bot_message(self, source: str, body: Any, client_config) -> Optional[CommingMessage]:
        try:
            if isinstance(body, bytes):
                msg_json = json.loads(body)
            elif isinstance(body, dict):
                msg_json = body
            else:
                msg_json = json.loads(body)
            while isinstance(msg_json, str):
                msg_json = json.loads(msg_json)
        except Exception as err:
            logger.debug(f"解析企业微信智能机器人消息失败:{err}")
            return None

        if not isinstance(msg_json, dict):
            return None

        payload_body = msg_json.get("body") or {}
        sender = ((payload_body.get("from") or {}).get("userid") or "").strip()
        if not sender:
            return None
        if payload_body.get("chattype") == "group":
            return None

        text = WeChatBot._extract_text_from_body(payload_body)
        images = WeChatBot._extract_images_from_body(payload_body)
        audio_refs = ["wxbot://voice"] if payload_body.get("msgtype") == "voice" else None
        files = None
        if payload_body.get("msgtype") == "file":
            file_payload = payload_body.get("file") or {}
            download_url = file_payload.get("download_url")
            if download_url:
                files = [
                    CommingMessage.MessageAttachment(
                        ref=f"wxbot://file/{quote(download_url, safe='')}",
                        name=file_payload.get("name") or file_payload.get("filename"),
                        mime_type=file_payload.get("content_type")
                        or file_payload.get("mime_type"),
                        size=file_payload.get("size"),
                    )
                ]
        if text:
            text = re.sub(r"@\S+", "", text).strip()

        if text and text.startswith("/") and client_config.config.get('WECHAT_ADMINS'):
            wechat_admins = [
                admin.strip()
                for admin in client_config.config.get('WECHAT_ADMINS', '').split(',')
                if admin.strip()
            ]
            if wechat_admins and sender not in wechat_admins:
                client: WeChatBot = self.get_instance(client_config.name)
                if client:
                    client.send_msg(title="只有管理员才有权限执行此命令", userid=sender)
                return None

        if not text and not images and not audio_refs and not files:
            return None

        logger.info(
            f"收到来自 {client_config.name} 的企业微信智能机器人消息:"
            f"userid={sender}, text={text}, images={len(images) if images else 0}"
        )
        return CommingMessage(
            channel=MessageChannel.Wechat,
            source=client_config.name,
            userid=sender,
            username=sender,
            text=text or "",
            images=images,
            audio_refs=audio_refs,
            files=files,
        )

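`_parse_bot_message` tolerates payloads that arrive JSON-encoded more than once (a JSON string whose content is itself a JSON string). The unwrapping loop in isolation, function name illustrative:

```python
import json
from typing import Any

def unwrap_json(body: Any) -> Any:
    # Decode bytes/str once, then keep decoding while the result
    # is still a string, which handles double-encoded payloads.
    if isinstance(body, (bytes, str)):
        body = json.loads(body)
    while isinstance(body, str):
        body = json.loads(body)
    return body

double_encoded = json.dumps(json.dumps({"msgtype": "text"}))
print(unwrap_json(double_encoded))  # {'msgtype': 'text'}
```

The `while` (rather than a single extra `loads`) covers any depth of re-encoding without special-casing.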
    def post_message(self, message: Notification, **kwargs) -> None:
        """
        发送消息
@@ -207,8 +322,56 @@ class WechatModule(_ModuleBase, _MessageBase[WeChat]):
                return
            client: WeChat = self.get_instance(conf.name)
            if client:
                client.send_msg(title=message.title, text=message.text,
                                image=message.image, userid=userid, link=message.link)
                if message.voice_path and hasattr(client, "send_voice"):
                    sent = client.send_voice(
                        voice_path=message.voice_path,
                        userid=userid,
                    )
                    if not sent:
                        client.send_msg(title=message.title, text=message.text,
                                        image=message.image, userid=userid, link=message.link)
                else:
                    client.send_msg(title=message.title, text=message.text,
                                    image=message.image, userid=userid, link=message.link)

    def download_wechat_image_to_data_url(self, image_ref: str, source: str) -> Optional[str]:
        """
        下载企业微信渠道图片并转换为 data URL
        """
        if not image_ref:
            return None
        client_config = self.get_config(source)
        if not client_config:
            return None
        client = self.get_instance(client_config.name)
        if not client:
            return None
        if image_ref.startswith("wxwork://media_id/") and hasattr(client, "download_media_to_data_url"):
            media_id = image_ref.replace("wxwork://media_id/", "", 1)
            return client.download_media_to_data_url(media_id)
        if image_ref.startswith("wxbot://image/") and hasattr(client, "download_image_to_data_url"):
            return client.download_image_to_data_url(image_ref)
        return None

    def download_wechat_media_bytes(self, media_ref: str, source: str) -> Optional[bytes]:
        """
        下载企业微信语音媒体并返回原始字节。
        """
        if not media_ref:
            return None
        client_config = self.get_config(source)
        if not client_config:
            return None
        client = self.get_instance(client_config.name)
        if not client or not hasattr(client, "download_media_bytes"):
            return None
        if media_ref.startswith("wxwork://voice_media_id/"):
            media_id = media_ref.replace("wxwork://voice_media_id/", "", 1)
            return client.download_media_bytes(media_id)
        if media_ref.startswith("wxwork://file_media_id/"):
            media_id = media_ref.replace("wxwork://file_media_id/", "", 1)
            return client.download_media_bytes(media_id)
        return None

    def post_medias_message(self, message: Notification, medias: List[MediaInfo]) -> None:
        """

@@ -1,7 +1,10 @@
import json
import re
import threading
import base64
import subprocess
from datetime import datetime
from pathlib import Path
from typing import Optional, List, Dict

from app.core.context import MediaInfo, Context
@@ -43,6 +46,10 @@ class WeChat:
    _create_menu_url = "cgi-bin/menu/create?access_token={access_token}&agentid={agentid}"
    # 企业微信删除菜单URL
    _delete_menu_url = "cgi-bin/menu/delete?access_token={access_token}&agentid={agentid}"
    # 企业微信下载媒体URL
    _download_media_url = "cgi-bin/media/get?access_token={access_token}&media_id={media_id}"
    # 企业微信上传临时素材URL
    _upload_media_url = "cgi-bin/media/upload?access_token={access_token}&type={media_type}"

    def __init__(self, WECHAT_CORPID: Optional[str] = None, WECHAT_APP_SECRET: Optional[str] = None,
                 WECHAT_APP_ID: Optional[str] = None, WECHAT_PROXY: Optional[str] = None, **kwargs):
@@ -62,6 +69,8 @@ class WeChat:
        self._token_url = UrlUtils.adapt_request_url(self._proxy, self._token_url)
        self._create_menu_url = UrlUtils.adapt_request_url(self._proxy, self._create_menu_url)
        self._delete_menu_url = UrlUtils.adapt_request_url(self._proxy, self._delete_menu_url)
        self._download_media_url = UrlUtils.adapt_request_url(self._proxy, self._download_media_url)
        self._upload_media_url = UrlUtils.adapt_request_url(self._proxy, self._upload_media_url)

        if self._corpid and self._appsecret and self._appid:
            self.__get_access_token()
@@ -267,6 +276,220 @@ class WeChat:
            logger.error(f"发送消息失败:{e}")
            return False

    @staticmethod
    def _guess_mime_type(content: bytes, default: str = "image/jpeg") -> str:
        """
        根据文件头推断图片 MIME
        """
        if not content:
            return default
        if content.startswith(b"\x89PNG\r\n\x1a\n"):
            return "image/png"
        if content.startswith(b"\xff\xd8\xff"):
            return "image/jpeg"
        if content.startswith((b"GIF87a", b"GIF89a")):
            return "image/gif"
        if content.startswith(b"BM"):
            return "image/bmp"
        if content.startswith(b"RIFF") and b"WEBP" in content[:16]:
            return "image/webp"
        return default

    def download_media_to_data_url(self, media_id: str) -> Optional[str]:
        """
        下载企业微信媒体文件并转换为 data URL
        """
        if not media_id:
            return None
        access_token = self.__get_access_token()
        if not access_token:
            logger.error("下载企业微信媒体失败:access_token 获取失败")
            return None
        req_url = self._download_media_url.format(
            access_token=access_token,
            media_id=media_id,
        )
        try:
            res = RequestUtils(timeout=30).get_res(req_url)
        except Exception as err:
            logger.error(f"下载企业微信媒体失败:{err}")
            return None
        if not res or not res.content:
            return None

        content_type = (res.headers.get("Content-Type") or "").split(";")[0].strip()
        if content_type == "application/json":
            try:
                logger.error(f"企业微信媒体下载失败:{res.json()}")
            except Exception:
                logger.error(f"企业微信媒体下载失败:{res.text}")
            return None

        mime_type = self._guess_mime_type(res.content, content_type or "image/jpeg")
        return f"data:{mime_type};base64,{base64.b64encode(res.content).decode()}"

    def download_media_bytes(self, media_id: str) -> Optional[bytes]:
        """
        下载企业微信媒体文件并返回原始字节。
        """
        if not media_id:
            return None
        access_token = self.__get_access_token()
        if not access_token:
            logger.error("下载企业微信媒体失败:access_token 获取失败")
            return None
        req_url = self._download_media_url.format(
            access_token=access_token,
            media_id=media_id,
        )
        try:
            res = RequestUtils(timeout=30).get_res(req_url)
        except Exception as err:
            logger.error(f"下载企业微信媒体失败:{err}")
            return None
        if not res or not res.content:
            return None
        content_type = (res.headers.get("Content-Type") or "").split(";")[0].strip()
        if content_type == "application/json":
            try:
                logger.error(f"企业微信媒体下载失败:{res.json()}")
            except Exception:
                logger.error(f"企业微信媒体下载失败:{res.text}")
            return None
        return res.content

    @staticmethod
    def _convert_voice_to_amr(voice_path: str) -> Optional[Path]:
        """
        将语音文件转换为企业微信要求的 AMR 格式(<=60s)。
        """
        src_path = Path(voice_path)
        if not src_path.exists():
            logger.error(f"语音文件不存在:{src_path}")
            return None

        dst_path = src_path.with_suffix(".amr")
        cmd = [
            "ffmpeg",
            "-y",
            "-i",
            str(src_path),
            "-ar",
            "8000",
            "-ac",
            "1",
            "-t",
            "60",
            str(dst_path),
        ]
        try:
            result = subprocess.run(
                cmd,
                capture_output=True,
                text=True,
                check=False,
            )
        except Exception as err:
            logger.error(f"调用 ffmpeg 转换 AMR 失败:{err}")
            return None

        if result.returncode != 0 or not dst_path.exists():
            logger.error(
                "ffmpeg 转换 AMR 失败: returncode=%s, stderr=%s",
                result.returncode,
                (result.stderr or "").strip()[:500],
            )
            return None

        if dst_path.stat().st_size > 2 * 1024 * 1024:
            logger.error("AMR 语音文件超过 2MB,无法发送到企业微信")
            dst_path.unlink(missing_ok=True)
            return None
        return dst_path

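The ffmpeg invocation above pins the parameters the voice upload expects: 8 kHz sample rate, mono, capped at 60 seconds. Building the same argv as a pure function (name illustrative) makes the command easy to inspect and test without running ffmpeg:

```python
from pathlib import Path
from typing import List

def build_amr_cmd(src: Path, dst: Path, max_seconds: int = 60) -> List[str]:
    # -y: overwrite, -ar 8000: 8 kHz sample rate,
    # -ac 1: mono, -t: cap the clip duration.
    return [
        "ffmpeg", "-y",
        "-i", str(src),
        "-ar", "8000",
        "-ac", "1",
        "-t", str(max_seconds),
        str(dst),
    ]

cmd = build_amr_cmd(Path("note.ogg"), Path("note.amr"))
print(" ".join(cmd))  # ffmpeg -y -i note.ogg -ar 8000 -ac 1 -t 60 note.amr
```

Passing the argv as a list (rather than one shell string) to `subprocess.run` avoids any shell quoting issues with paths containing spaces.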
    def _upload_temp_media(self, media_path: Path, media_type: str = "voice") -> Optional[str]:
        """
        上传企业微信临时素材,返回 media_id。
        """
        access_token = self.__get_access_token()
        if not access_token:
            return None
        req_url = self._upload_media_url.format(
            access_token=access_token,
            media_type=media_type,
        )
        try:
            with media_path.open("rb") as media_file:
                response = RequestUtils(timeout=60).request(
                    method="post",
                    url=req_url,
                    files={
                        "media": (
                            media_path.name,
                            media_file,
                            "voice/amr" if media_type == "voice" else "application/octet-stream",
                        )
                    },
                )
        except Exception as err:
            logger.error(f"上传企业微信临时素材失败:{err}")
            return None

        if not response:
            return None

        try:
            ret_json = response.json()
        except Exception as err:
            logger.error(f"解析企业微信临时素材响应失败:{err}")
            return None

        if ret_json.get("errcode") != 0:
            logger.error(f"上传企业微信临时素材失败:{ret_json}")
            return None
        return ret_json.get("media_id")

    def send_voice(self, voice_path: str, userid: Optional[str] = None) -> Optional[bool]:
        """
        发送企业微信语音消息。仅自建应用模式支持。
        """
        if not voice_path:
            return False
        if not self.__get_access_token():
            logger.error("获取微信access_token失败,请检查参数配置")
            return None
        if not userid:
            userid = "@all"

        source_path = Path(voice_path)
        converted_path = self._convert_voice_to_amr(voice_path)
        if not converted_path:
            return False

        try:
            media_id = self._upload_temp_media(converted_path, media_type="voice")
            if not media_id:
                return False

            req_json = {
                "touser": userid,
                "msgtype": "voice",
                "agentid": self._appid,
                "voice": {
                    "media_id": media_id
                },
                "safe": 0,
                "enable_id_trans": 0,
                "enable_duplicate_check": 0
            }
            return self.__post_request(self._send_msg_url, req_json)
        except Exception as err:
            logger.error(f"发送企业微信语音消息失败:{err}")
            return False
        finally:
            converted_path.unlink(missing_ok=True)
            source_path.unlink(missing_ok=True)

    def send_medias_msg(self, medias: List[MediaInfo], userid: Optional[str] = None) -> Optional[bool]:
        """
        发送列表类消息

@@ -5,15 +5,18 @@ import re
import threading
import time
import uuid
import base64
from typing import Optional, List, Dict, Tuple, Set

import websocket
from Crypto.Cipher import AES

from app.core.cache import FileCache
from app.core.config import settings
from app.core.context import MediaInfo, Context
from app.core.metainfo import MetaInfo
from app.log import logger
from app.schemas import CommingMessage
from app.utils.http import RequestUtils
from app.utils.string import StringUtils

@@ -332,6 +335,139 @@ class WeChatBot:
        text = "\n".join(part for part in text_parts if part).strip()
        return text or None

    @staticmethod
    def _build_image_ref(image_payload: dict) -> Optional[str]:
        if not image_payload or not isinstance(image_payload, dict):
            return None
        download_url = (
            image_payload.get("download_url")
            or image_payload.get("url")
            or image_payload.get("cdnurl")
        )
        if not download_url:
            return None
        payload = {
            "url": download_url,
            "aeskey": image_payload.get("aeskey")
            or image_payload.get("encoding_aes_key")
            or image_payload.get("encrypt_key"),
            "mime_type": image_payload.get("mime_type")
            or image_payload.get("content_type"),
        }
        encoded = base64.urlsafe_b64encode(
            json.dumps(payload, ensure_ascii=False).encode("utf-8")
        ).decode("ascii").rstrip("=")
        return f"wxbot://image/{encoded}"

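`_build_image_ref` packs the download URL, AES key, and MIME type into one opaque token: JSON, then URL-safe base64 with the `=` padding stripped. The decoder later restores the padding with `"=" * (-len(s) % 4)`. A minimal round trip of that encoding, helper names illustrative:

```python
import base64
import json

def encode_ref(payload: dict) -> str:
    # JSON -> URL-safe base64, padding stripped for a cleaner token.
    raw = json.dumps(payload, ensure_ascii=False).encode("utf-8")
    return base64.urlsafe_b64encode(raw).decode("ascii").rstrip("=")

def decode_ref(encoded: str) -> dict:
    # base64 input length must be a multiple of 4,
    # so restore the stripped padding before decoding.
    padding = "=" * (-len(encoded) % 4)
    return json.loads(base64.urlsafe_b64decode((encoded + padding).encode("ascii")))

token = encode_ref({"url": "https://example.com/img", "aeskey": None})
print(decode_ref(token))  # {'url': 'https://example.com/img', 'aeskey': None}
```

The URL-safe alphabet (`-` and `_` instead of `+` and `/`) is what keeps the token usable inside the `wxbot://image/...` ref without further escaping.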
    @classmethod
    def _extract_images_from_body(
        cls, body: dict
    ) -> Optional[List["CommingMessage.MessageImage"]]:
        images: List["CommingMessage.MessageImage"] = []
        msgtype = body.get("msgtype")

        if msgtype == "image":
            image_payload = body.get("image") or {}
            image_ref = cls._build_image_ref(image_payload)
            if image_ref:
                images.append(
                    CommingMessage.MessageImage(
                        ref=image_ref,
                        mime_type=image_payload.get("mime_type")
                        or image_payload.get("content_type"),
                    )
                )
        elif msgtype == "mixed":
            for item in (body.get("mixed") or {}).get("msg_item") or []:
                if item.get("msgtype") != "image":
                    continue
                image_payload = item.get("image") or {}
                image_ref = cls._build_image_ref(image_payload)
                if image_ref:
                    images.append(
                        CommingMessage.MessageImage(
                            ref=image_ref,
                            mime_type=image_payload.get("mime_type")
                            or image_payload.get("content_type"),
                        )
                    )

        quote = body.get("quote") or {}
        if not images and quote.get("msgtype") == "image":
            image_payload = quote.get("image") or {}
            image_ref = cls._build_image_ref(image_payload)
            if image_ref:
                images.append(
                    CommingMessage.MessageImage(
                        ref=image_ref,
                        mime_type=image_payload.get("mime_type")
                        or image_payload.get("content_type"),
                    )
                )

        return images or None

    @staticmethod
    def _guess_mime_type(content: bytes, default: str = "image/jpeg") -> str:
        if not content:
            return default
        if content.startswith(b"\x89PNG\r\n\x1a\n"):
            return "image/png"
        if content.startswith(b"\xff\xd8\xff"):
            return "image/jpeg"
        if content.startswith((b"GIF87a", b"GIF89a")):
            return "image/gif"
        if content.startswith(b"BM"):
            return "image/bmp"
        if content.startswith(b"RIFF") and b"WEBP" in content[:16]:
            return "image/webp"
        return default

    def download_image_to_data_url(self, image_ref: str) -> Optional[str]:
        if not image_ref or not image_ref.startswith("wxbot://image/"):
            return None
        encoded = image_ref.replace("wxbot://image/", "", 1)
        try:
            padding = "=" * (-len(encoded) % 4)
            payload = json.loads(
                base64.urlsafe_b64decode((encoded + padding).encode("ascii")).decode(
                    "utf-8"
                )
            )
        except Exception as err:
            logger.error(f"解析企业微信智能机器人图片引用失败:{err}")
            return None

        download_url = payload.get("url")
        if not download_url:
            return None

        try:
            resp = RequestUtils(timeout=30).get_res(download_url)
        except Exception as err:
            logger.error(f"下载企业微信智能机器人图片失败:{err}")
            return None
        if not resp or not resp.content:
            return None

        content = resp.content
        aes_key = payload.get("aeskey")
        if aes_key:
            try:
                aes_bytes = base64.b64decode(aes_key + "=" * (-len(aes_key) % 4))
                cipher = AES.new(aes_bytes, AES.MODE_CBC, aes_bytes[:16])
                decrypted = cipher.decrypt(content)
                padding_len = decrypted[-1]
                if 0 < padding_len <= 32:
                    decrypted = decrypted[:-padding_len]
                content = decrypted
            except Exception as err:
                logger.error(f"解密企业微信智能机器人图片失败:{err}")
                return None

        mime_type = self._guess_mime_type(content, payload.get("mime_type") or "image/jpeg")
        return f"data:{mime_type};base64,{base64.b64encode(content).decode()}"

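After the AES-CBC decrypt above, the last byte of the plaintext gives the PKCS#7-style pad length; the code accepts 1 through 32 to be tolerant of the block-size conventions seen in this media format. The strip step in isolation, name illustrative:

```python
def strip_pkcs7(decrypted: bytes, max_pad: int = 32) -> bytes:
    # The final byte encodes how many padding bytes were appended;
    # only strip when it falls in the plausible range.
    if not decrypted:
        return decrypted
    padding_len = decrypted[-1]
    if 0 < padding_len <= max_pad:
        return decrypted[:-padding_len]
    return decrypted

print(strip_pkcs7(b"hello" + bytes([3]) * 3))  # b'hello'
```

Leaving the data untouched when the last byte is out of range is a lenient choice: a corrupted pad yields slightly dirty bytes rather than an exception.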
    def _handle_callback_message(self, payload: dict) -> None:
        body = payload.get("body") or {}
        sender = ((body.get("from") or {}).get("userid") or "").strip()
@@ -343,20 +479,24 @@ class WeChatBot:
            return

        text = self._extract_text_from_body(body)
        if not text:
            return
        images = self._extract_images_from_body(body)

        text = re.sub(r"@\S+", "", text).strip()
        if not text:
        if text:
            text = re.sub(r"@\S+", "", text).strip()

        if not text and not images:
            return

        self._remember_target(sender)

        if text.startswith("/") and self._admins and sender not in self._admins:
        if text and text.startswith("/") and self._admins and sender not in self._admins:
            self.send_msg(title="只有管理员才有权限执行此命令", userid=sender)
            return

        logger.info(f"收到来自 {self._config_name} 的企业微信智能机器人消息:userid={sender}, text={text}")
        logger.info(
            f"收到来自 {self._config_name} 的企业微信智能机器人消息:"
            f"userid={sender}, text={text}, images={len(images) if images else 0}"
        )
        self._forward_to_message_chain(payload)

    def _forward_to_message_chain(self, payload: dict) -> None:

@@ -11,6 +11,7 @@ from .monitoring import *
from .plugin import *
from .response import *
from .rule import *
from .openai import *
from .servarr import *
from .servcookie import *
from .site import *
@@ -23,4 +24,3 @@ from .transfer import *
from .user import *
from .workflow import *
from .mcp import *

@@ -11,6 +11,7 @@ class Event(BaseModel):
     """
     事件模型
     """
+
     event_type: str = Field(..., description="事件类型")
     event_data: Optional[dict] = Field(default={}, description="事件数据")
     priority: Optional[int] = Field(0, description="事件优先级")
@@ -20,6 +21,7 @@ class BaseEventData(BaseModel):
     """
     事件数据的基类,所有具体事件数据类应继承自此类
     """
+
     pass


@@ -27,11 +29,14 @@ class ConfigChangeEventData(BaseEventData):
     """
     ConfigChange 事件的数据模型
     """
+
     key: set[str] = Field(..., description="配置项的键(集合类型)")
     value: Optional[Any] = Field(default=None, description="配置项的新值")
-    change_type: str = Field(default="update", description="配置项的变更类型,如 'add', 'update', 'delete'")
+    change_type: str = Field(
+        default="update", description="配置项的变更类型,如 'add', 'update', 'delete'"
+    )

-    @field_validator('key', mode='before')
+    @field_validator("key", mode="before")
     @classmethod
     def convert_to_set(cls, v):
         """将输入的 str、list、dict.keys() 等转为 set"""
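The `convert_to_set` validator runs with `mode="before"`, i.e. before pydantic validates the declared `set[str]` type, so it can accept a single string, a list, or `dict.keys()`. A dependency-free sketch of that coercion (hypothetical helper name, mirroring what such a validator might do):

```python
def coerce_key_set(value):
    """Coerce a str, an iterable, or dict keys into a set,
    as a pydantic @field_validator(mode="before") might do
    for a `key: set[str]` field."""
    if isinstance(value, str):
        # A bare string is one key, not an iterable of characters
        return {value}
    if isinstance(value, dict):
        return set(value.keys())
    # Lists, tuples, dict_keys views, existing sets, ...
    return set(value)
```

The string special-case matters: without it, `set("APP_DOMAIN")` would explode into a set of single characters.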
@@ -55,6 +60,7 @@ class ChainEventData(BaseModel):
     """
     链式事件数据的基类,所有具体事件数据类应继承自此类
     """
+
     pass


@@ -73,12 +79,24 @@ class AuthCredentials(ChainEventData):
         channel (Optional[str]): 认证渠道
         service (Optional[str]): 服务名称
     """
+
     # 输入参数
-    username: Optional[str] = Field(None, description="用户名,适用于 'password' 认证类型")
-    password: Optional[str] = Field(None, description="用户密码,适用于 'password' 认证类型")
-    mfa_code: Optional[str] = Field(None, description="一次性密码,目前仅适用于 'password' 认证类型")
-    code: Optional[str] = Field(None, description="授权码,适用于 'authorization_code' 认证类型")
-    grant_type: str = Field(..., description="认证类型,如 'password', 'authorization_code', 'client_credentials'")
+    username: Optional[str] = Field(
+        None, description="用户名,适用于 'password' 认证类型"
+    )
+    password: Optional[str] = Field(
+        None, description="用户密码,适用于 'password' 认证类型"
+    )
+    mfa_code: Optional[str] = Field(
+        None, description="一次性密码,目前仅适用于 'password' 认证类型"
+    )
+    code: Optional[str] = Field(
+        None, description="授权码,适用于 'authorization_code' 认证类型"
+    )
+    grant_type: str = Field(
+        ...,
+        description="认证类型,如 'password', 'authorization_code', 'client_credentials'",
+    )
     # scope: List[str] = Field(default_factory=list, description="权限范围,如 ['read', 'write']")

     # 输出参数
@@ -87,7 +105,7 @@ class AuthCredentials(ChainEventData):
     channel: Optional[str] = Field(default=None, description="认证渠道")
     service: Optional[str] = Field(default=None, description="服务名称")

-    @model_validator(mode='before')
+    @model_validator(mode="before")
     @classmethod
     def check_fields_based_on_grant_type(cls, values):  # noqa
         grant_type = values.get("grant_type")
@@ -97,7 +115,9 @@ class AuthCredentials(ChainEventData):

         if grant_type == "password":
             if not values.get("username") or not values.get("password"):
-                raise ValueError("username and password are required for grant_type 'password'")
+                raise ValueError(
+                    "username and password are required for grant_type 'password'"
+                )

         elif grant_type == "authorization_code":
             if not values.get("code"):
@@ -122,11 +142,15 @@ class AuthInterceptCredentials(ChainEventData):
         source (str): 拦截源,默认值为 "未知拦截源"
         cancel (bool): 是否取消认证,默认值为 False
     """
+
     # 输入参数
     username: Optional[str] = Field(..., description="用户名")
     channel: str = Field(..., description="认证渠道")
     service: str = Field(..., description="服务名称")
-    status: str = Field(..., description="认证状态, 包含 'triggered' 表示认证触发,'completed' 表示认证成功")
+    status: str = Field(
+        ...,
+        description="认证状态, 包含 'triggered' 表示认证触发,'completed' 表示认证成功",
+    )
     token: Optional[str] = Field(default=None, description="认证令牌")

     # 输出参数
@@ -148,6 +172,7 @@ class CommandRegisterEventData(ChainEventData):
         source (str): 拦截源,默认值为 "未知拦截源"
         cancel (bool): 是否取消认证,默认值为 False
     """
+
     # 输入参数
     commands: Dict[str, dict] = Field(..., description="菜单命令")
     origin: str = Field(..., description="事件源")
@@ -169,18 +194,25 @@ class TransferRenameEventData(ChainEventData):
         render_str (str): 渲染生成的字符串
         path (Optional[Path]): 当前文件的目标路径
         source_path (Optional[str]): 源文件路径,即待整理的文件路径
+        source_item (Optional[FileItem]): 源文件信息,即待整理的文件信息

         # 输出参数
         updated (bool): 是否已更新,默认值为 False
         updated_str (str): 更新后的字符串
         source (str): 拦截源,默认值为 "未知拦截源"
     """
+
     # 输入参数
     template_string: str = Field(..., description="模板字符串")
     rename_dict: Dict[str, Any] = Field(..., description="渲染上下文")
     path: Optional[Path] = Field(None, description="文件的目标路径")
     render_str: str = Field(..., description="渲染生成的字符串")
-    source_path: Optional[str] = Field(None, description="源文件路径,即待整理的文件路径")
+    source_path: Optional[str] = Field(
+        None, description="源文件路径,即待整理的文件路径"
+    )
+    source_item: Optional[FileItem] = Field(
+        None, description="源文件信息,即待整理的文件信息"
+    )

     # 输出参数
     updated: bool = Field(default=False, description="是否已更新")
@@ -202,6 +234,7 @@ class ResourceSelectionEventData(BaseModel):
         updated_contexts (Optional[List[Context]]): 已更新的资源上下文列表,默认值为 None
         source (str): 更新源,默认值为 "未知更新源"
     """
+
     # 输入参数
     contexts: Any = Field(None, description="待选择的资源上下文列表")
     downloader: Optional[str] = Field(None, description="下载器")
@@ -209,7 +242,9 @@ class ResourceSelectionEventData(BaseModel):

     # 输出参数
     updated: bool = Field(default=False, description="是否已更新")
-    updated_contexts: Optional[List[Any]] = Field(default=None, description="已更新的资源上下文列表")
+    updated_contexts: Optional[List[Any]] = Field(
+        default=None, description="已更新的资源上下文列表"
+    )
     source: Optional[str] = Field(default="未知拦截源", description="拦截源")
@@ -231,6 +266,7 @@ class ResourceDownloadEventData(ChainEventData):
         source (str): 拦截源,默认值为 "未知拦截源"
         reason (str): 拦截原因,描述拦截的具体原因
     """
+
     # 输入参数
     context: Any = Field(None, description="当前资源上下文")
     episodes: Optional[Set[int]] = Field(None, description="需要下载的集数")
@@ -262,6 +298,7 @@ class TransferInterceptEventData(ChainEventData):
         source (str): 拦截源,默认值为 "未知拦截源"
         reason (str): 拦截原因,描述拦截的具体原因
     """
+
     # 输入参数
     fileitem: FileItem = Field(..., description="源文件")
     mediainfo: Any = Field(..., description="媒体信息")
@@ -276,16 +313,73 @@ class TransferInterceptEventData(ChainEventData):
     reason: str = Field(default="", description="拦截原因")


+class TransferOverwriteCheckEventData(ChainEventData):
+    """
+    TransferOverwriteCheck 事件的数据模型
+
+    在覆盖模式判断(如按文件大小覆盖)执行之前触发,允许插件提供源文件与
+    目标文件的真实大小(例如本地 .strm 文件指向的网盘原始文件大小),或者
+    直接给出覆盖决策。
+
+    Attributes:
+        # 输入参数
+        fileitem (FileItem): 源文件
+        target_item (FileItem): 目标文件(已存在)
+        target_storage (str): 目标存储
+        target_path (Path): 目标文件路径
+        overwrite_mode (str): 覆盖模式(always、size、never、latest)
+        transfer_type (str): 整理方式
+        options (dict): 其他参数
+
+        # 输出参数
+        source_size (Optional[int]): 由插件提供的源文件真实大小,覆盖
+            fileitem.size 用于 size 模式比较;为 None 时表示不修改
+        target_size (Optional[int]): 由插件提供的目标文件真实大小,覆盖
+            target_item.size 用于 size 模式比较;为 None 时表示不修改
+        overwrite (Optional[bool]): 由插件直接给出的覆盖决策,非 None 时
+            将完全跳过 MoviePilot 内置的 size/never/latest 等比较逻辑
+        source (str): 处理来源
+        reason (str): 处理原因,描述插件做出决策或修改的原因
+    """
+
+    # 输入参数
+    fileitem: FileItem = Field(..., description="源文件")
+    target_item: FileItem = Field(..., description="目标已存在文件")
+    target_storage: str = Field(..., description="目标存储")
+    target_path: Path = Field(..., description="目标文件路径")
+    overwrite_mode: str = Field(..., description="覆盖模式")
+    transfer_type: str = Field(..., description="整理方式")
+    options: Optional[dict] = Field(default=None, description="其他参数")
+
+    # 输出参数
+    source_size: Optional[int] = Field(
+        default=None, description="插件提供的源文件真实大小"
+    )
+    target_size: Optional[int] = Field(
+        default=None, description="插件提供的目标文件真实大小"
+    )
+    overwrite: Optional[bool] = Field(
+        default=None, description="插件直接给出的覆盖决策"
+    )
+    source: str = Field(default="未知处理源", description="处理来源")
+    reason: str = Field(default="", description="处理原因")
+
+
 class DiscoverMediaSource(BaseModel):
     """
     探索媒体数据源的基类
     """
+
     name: str = Field(..., description="数据源名称")
     mediaid_prefix: str = Field(..., description="媒体ID的前缀,不含:")
     api_path: str = Field(..., description="媒体数据源API地址")
-    filter_params: Optional[Dict[str, Any]] = Field(default=None, description="过滤参数")
+    filter_params: Optional[Dict[str, Any]] = Field(
+        default=None, description="过滤参数"
+    )
     filter_ui: Optional[List[dict]] = Field(default=[], description="过滤参数UI配置")
-    depends: Optional[Dict[str, list]] = Field(default=None, description="UI依赖关系字典")
+    depends: Optional[Dict[str, list]] = Field(
+        default=None, description="UI依赖关系字典"
+    )


 class DiscoverSourceEventData(ChainEventData):
@@ -296,14 +390,18 @@ class DiscoverSourceEventData(ChainEventData):
         # 输出参数
         extra_sources (List[DiscoverMediaSource]): 额外媒体数据源
     """
+
     # 输出参数
-    extra_sources: List[DiscoverMediaSource] = Field(default_factory=list, description="额外媒体数据源")
+    extra_sources: List[DiscoverMediaSource] = Field(
+        default_factory=list, description="额外媒体数据源"
+    )


 class RecommendMediaSource(BaseModel):
     """
     推荐媒体数据源的基类
     """
+
     name: str = Field(..., description="数据源名称")
     api_path: str = Field(..., description="媒体数据源API地址")
     type: str = Field(..., description="类型")
@@ -317,8 +415,11 @@ class RecommendSourceEventData(ChainEventData):
         # 输出参数
         extra_sources (List[RecommendMediaSource]): 额外媒体数据源
     """
+
     # 输出参数
-    extra_sources: List[RecommendMediaSource] = Field(default_factory=list, description="额外媒体数据源")
+    extra_sources: List[RecommendMediaSource] = Field(
+        default_factory=list, description="额外媒体数据源"
+    )


 class MediaRecognizeConvertEventData(ChainEventData):
@@ -333,12 +434,15 @@ class MediaRecognizeConvertEventData(ChainEventData):
         # 输出参数
         media_dict (dict): TheMovieDb/豆瓣的媒体数据
     """
+
     # 输入参数
     mediaid: str = Field(..., description="媒体ID")
     convert_type: str = Field(..., description="转换类型(themoviedb/douban)")

     # 输出参数
-    media_dict: dict = Field(default_factory=dict, description="转换后的媒体信息(TheMovieDb/豆瓣)")
+    media_dict: dict = Field(
+        default_factory=dict, description="转换后的媒体信息(TheMovieDb/豆瓣)"
+    )


 class StorageOperSelectionEventData(ChainEventData):
@@ -352,6 +456,7 @@ class StorageOperSelectionEventData(ChainEventData):
         # 输出参数
         storage_oper (Callable): 存储操作对象
     """
+
     # 输入参数
     storage: Optional[str] = Field(default=None, description="存储类型")
@@ -1,8 +1,8 @@
 from dataclasses import dataclass
 from enum import Enum
-from typing import Optional, Union, List, Dict, Set
+from typing import Optional, Union, List, Dict, Set, Any

-from pydantic import BaseModel, Field
+from pydantic import BaseModel, Field, field_validator

 from app.schemas.types import ContentType, NotificationType, MessageChannel
@@ -29,6 +29,71 @@ class CommingMessage(BaseModel):
     外来消息
     """
+
+    class MessageImage(BaseModel):
+        """
+        外来消息图片
+        """
+
+        ref: str
+        name: Optional[str] = None
+        mime_type: Optional[str] = None
+        size: Optional[int] = None
+
+        @classmethod
+        def from_value(cls, value: Any) -> Optional["CommingMessage.MessageImage"]:
+            if value is None:
+                return None
+            if isinstance(value, cls):
+                return value
+            if isinstance(value, str):
+                return cls(ref=value)
+            if isinstance(value, dict):
+                ref = (
+                    value.get("ref")
+                    or value.get("url")
+                    or value.get("image_url")
+                    or value.get("file_url")
+                )
+                if not ref:
+                    return None
+                size = value.get("size")
+                try:
+                    size = int(size) if size is not None else None
+                except (TypeError, ValueError):
+                    size = None
+                return cls(
+                    ref=ref,
+                    name=value.get("name") or value.get("filename"),
+                    mime_type=value.get("mime_type") or value.get("content_type"),
+                    size=size,
+                )
+            return None
+
+        @classmethod
+        def normalize_list(
+            cls, values: Optional[Any]
+        ) -> Optional[List["CommingMessage.MessageImage"]]:
+            if not values:
+                return None
+            if not isinstance(values, list):
+                values = [values]
+            normalized = []
+            for value in values:
+                item = cls.from_value(value)
+                if item:
+                    normalized.append(item)
+            return normalized or None
+
+    class MessageAttachment(BaseModel):
+        """
+        外来消息附件(非图片/非语音)
+        """
+
+        ref: str
+        name: Optional[str] = None
+        mime_type: Optional[str] = None
+        size: Optional[int] = None
+
     # 用户ID
     userid: Optional[Union[str, int]] = None
     # 用户名称
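`MessageImage.from_value` accepts a bare URL string, a dict carrying one of several URL-like keys, or an existing instance, and silently drops anything unusable. A dependency-free sketch of the same coercion (plain dataclass instead of pydantic; names are illustrative, not the project's API):

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class ImageRef:
    ref: str
    name: Optional[str] = None
    size: Optional[int] = None

def image_from_value(value: Any) -> Optional[ImageRef]:
    """str → bare reference; dict → first matching URL-like key; else None."""
    if isinstance(value, ImageRef):
        return value
    if isinstance(value, str):
        return ImageRef(ref=value)
    if isinstance(value, dict):
        ref = (value.get("ref") or value.get("url")
               or value.get("image_url") or value.get("file_url"))
        if not ref:
            return None
        size = value.get("size")
        try:
            # Upstream payloads may send size as a string; coerce defensively
            size = int(size) if size is not None else None
        except (TypeError, ValueError):
            size = None
        return ImageRef(ref=ref,
                        name=value.get("name") or value.get("filename"),
                        size=size)
    return None
```

The point of the lenient `mode="before"` normalization is that every upstream channel (Telegram file_ids, WeChat media URLs, plain dicts) collapses into one shape before validation.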
@@ -54,7 +119,18 @@ class CommingMessage(BaseModel):
     # 完整的回调查询信息(原始数据)
     callback_query: Optional[Dict] = None
     # 图片列表(图片URL或file_id)
-    images: Optional[List[str]] = None
+    images: Optional[List[MessageImage]] = None
+    # 语音/音频引用列表
+    audio_refs: Optional[List[str]] = None
+    # 文件附件列表
+    files: Optional[List[MessageAttachment]] = None
+
+    @field_validator("images", mode="before")
+    @classmethod
+    def _normalize_images(
+        cls, value: Any
+    ) -> Optional[List["CommingMessage.MessageImage"]]:
+        return cls.MessageImage.normalize_list(value)

     def to_dict(self):
         """
@@ -86,6 +162,14 @@ class Notification(BaseModel):
     text: Optional[str] = None
     # 图片
     image: Optional[str] = None
+    # 语音文件路径
+    voice_path: Optional[str] = None
+    # 本地文件路径
+    file_path: Optional[str] = None
+    # 发送时展示的文件名
+    file_name: Optional[str] = None
+    # 语音消息附带说明文字
+    voice_caption: Optional[str] = None
     # 链接
     link: Optional[str] = None
     # 用户ID
@@ -248,6 +332,7 @@ class ChannelCapabilityManager:
             ChannelCapability.IMAGES,
             ChannelCapability.LINKS,
             ChannelCapability.MENU_COMMANDS,
+            ChannelCapability.FILE_SENDING,
         },
         max_buttons_per_row=3,
         max_button_rows=8,
@@ -266,6 +351,7 @@ class ChannelCapabilityManager:
             ChannelCapability.RICH_TEXT,
             ChannelCapability.IMAGES,
             ChannelCapability.LINKS,
+            ChannelCapability.FILE_SENDING,
         },
         max_buttons_per_row=5,
         max_button_rows=5,
app/schemas/openai.py (new file, 156 lines)
@@ -0,0 +1,156 @@
from typing import Any, Dict, List, Optional

from pydantic import BaseModel, ConfigDict, Field


class OpenAIModelInfo(BaseModel):
    id: str
    object: str = "model"
    created: int
    owned_by: str = "moviepilot"


class OpenAIModelListResponse(BaseModel):
    object: str = "list"
    data: List[OpenAIModelInfo] = Field(default_factory=list)


class OpenAIChatMessage(BaseModel):
    role: str
    content: Any
    name: Optional[str] = None

    model_config = ConfigDict(extra="allow")


class OpenAIChatCompletionsRequest(BaseModel):
    model: Optional[str] = None
    messages: List[OpenAIChatMessage]
    user: Optional[str] = None
    stream: bool = False

    model_config = ConfigDict(extra="allow")


class OpenAIResponsesRequest(BaseModel):
    model: Optional[str] = None
    input: Any
    instructions: Optional[str] = None
    user: Optional[str] = None
    stream: bool = False

    model_config = ConfigDict(extra="allow")


class OpenAIChatChoiceMessage(BaseModel):
    role: str = "assistant"
    content: str


class OpenAIChatChoice(BaseModel):
    index: int = 0
    message: OpenAIChatChoiceMessage
    finish_reason: str = "stop"


class OpenAIUsage(BaseModel):
    prompt_tokens: int = 0
    completion_tokens: int = 0
    total_tokens: int = 0


class OpenAIChatCompletionResponse(BaseModel):
    id: str
    object: str = "chat.completion"
    created: int
    model: str
    choices: List[OpenAIChatChoice]
    usage: OpenAIUsage


class OpenAIResponsesOutputText(BaseModel):
    type: str = "output_text"
    text: str
    annotations: List[Dict[str, Any]] = Field(default_factory=list)


class OpenAIResponsesOutputMessage(BaseModel):
    id: str
    type: str = "message"
    status: str = "completed"
    role: str = "assistant"
    content: List[OpenAIResponsesOutputText] = Field(default_factory=list)


class OpenAIResponsesResponse(BaseModel):
    id: str
    object: str = "response"
    created_at: int
    status: str = "completed"
    model: str
    output: List[OpenAIResponsesOutputMessage] = Field(default_factory=list)
    error: Optional[Any] = None
    incomplete_details: Optional[Any] = None
    usage: OpenAIUsage


class OpenAIErrorDetail(BaseModel):
    message: str
    type: str = "invalid_request_error"
    param: Optional[str] = None
    code: Optional[str] = None


class OpenAIErrorResponse(BaseModel):
    error: OpenAIErrorDetail


OpenAIChatContentPart = Dict[str, Any]


class AnthropicMessage(BaseModel):
    role: str
    content: Any

    model_config = ConfigDict(extra="allow")


class AnthropicMessagesRequest(BaseModel):
    model: Optional[str] = None
    messages: List[AnthropicMessage]
    system: Optional[Any] = None
    max_tokens: Optional[int] = 1024
    stream: bool = False

    model_config = ConfigDict(extra="allow")


class AnthropicTextBlock(BaseModel):
    type: str = "text"
    text: str


class AnthropicUsage(BaseModel):
    input_tokens: int = 0
    output_tokens: int = 0


class AnthropicMessagesResponse(BaseModel):
    id: str
    type: str = "message"
    role: str = "assistant"
    content: List[AnthropicTextBlock] = Field(default_factory=list)
    model: str
    stop_reason: str = "end_turn"
    stop_sequence: Optional[str] = None
    usage: AnthropicUsage = Field(default_factory=AnthropicUsage)


class AnthropicErrorDetail(BaseModel):
    type: str = "invalid_request_error"
    message: str


class AnthropicErrorResponse(BaseModel):
    type: str = "error"
    error: AnthropicErrorDetail
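These schemas mirror the OpenAI `chat.completion` wire format, so any OpenAI-compatible client can talk to MoviePilot's endpoint. A sketch of assembling one such response by hand with plain dicts (the `chatcmpl-` id prefix is a common convention, not something this file mandates):

```python
import time

def build_chat_completion(model: str, content: str) -> dict:
    # Shape mirrors OpenAIChatCompletionResponse above: a single
    # assistant choice plus a usage block.
    now = int(time.time())
    return {
        "id": f"chatcmpl-{now}",
        "object": "chat.completion",
        "created": now,
        "model": model,
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": content},
            "finish_reason": "stop",
        }],
        "usage": {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0},
    }
```

Keeping the response byte-compatible with the public format is what lets stock SDKs be pointed at a self-hosted base URL unchanged.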
@@ -69,6 +69,24 @@ class PluginDashboard(Plugin):
     elements: Optional[List[dict]] = Field(default_factory=list)


+class PluginSidebarNavItem(BaseModel):
+    """
+    插件侧栏导航项(前端全页路由)
+    """
+    plugin_id: str = Field(description="插件 ID")
+    nav_key: str = Field(description="导航键,对应 URL 段")
+    title: str = Field(description="侧栏标题")
+    icon: str = Field(default="mdi-puzzle", description="MDI 图标名")
+    section: str = Field(
+        description="分组:start / discovery / subscribe / organize / system",
+    )
+    permission: Optional[str] = Field(
+        default=None,
+        description="权限:subscribe / discovery / search / manage / admin",
+    )
+    order: int = Field(default=0, description="同组内排序,越小越靠前")
+
+
 class PluginMemoryInfo(BaseModel):
     """插件内存信息"""
     plugin_id: str = Field(description="插件ID")
@@ -156,6 +156,8 @@ class ChainEventType(Enum):
     TransferRename = "transfer.rename"
     # 整理拦截
     TransferIntercept = "transfer.intercept"
+    # 整理覆盖检查
+    TransferOverwriteCheck = "transfer.overwrite.check"
     # 资源选择
     ResourceSelection = "resource.selection"
     # 资源下载
@@ -161,7 +161,7 @@ class RequestUtils:
             response = self.request(method=method, url=url, **kwargs)
             yield response
         finally:
-            if response:
+            if response is not None:
                 try:
                     response.close()
                 except Exception as e:
@@ -206,16 +206,18 @@ class RequestUtils:
         :return: 响应的内容,若发生RequestException则返回None
         """
         response = self.request(method="get", url=url, params=params, **kwargs)
-        if response:
-            try:
-                content = str(response.content, "utf-8")
-                return content
-            except Exception as e:
-                logger.debug(f"处理响应内容失败: {e}")
-                return None
-            finally:
+        try:
+            if response:
+                try:
+                    content = str(response.content, "utf-8")
+                    return content
+                except Exception as e:
+                    logger.debug(f"处理响应内容失败: {e}")
+                    return None
+            return None
+        finally:
+            if response is not None:
                 response.close()
-        return None

     def post(self, url: str, data: Any = None, json: dict = None, **kwargs) -> Optional[Response]:
         """
@@ -280,7 +282,7 @@ class RequestUtils:
         try:
             yield response
         finally:
-            if response:
+            if response is not None:
                 response.close()

     def post_res(self,
@@ -382,16 +384,18 @@ class RequestUtils:
         :return: JSON数据,若发生异常则返回None
         """
         response = self.request(method="get", url=url, params=params, **kwargs)
-        if response:
-            try:
-                data = response.json()
-                return data
-            except Exception as e:
-                logger.debug(f"解析JSON失败: {e}")
-                return None
-            finally:
+        try:
+            if response:
+                try:
+                    data = response.json()
+                    return data
+                except Exception as e:
+                    logger.debug(f"解析JSON失败: {e}")
+                    return None
+            return None
+        finally:
+            if response is not None:
                 response.close()
-        return None

     def post_json(self, url: str, data: Any = None, json: dict = None, **kwargs) -> Optional[dict]:
         """
@@ -405,16 +409,18 @@ class RequestUtils:
         if json is None:
             json = {}
         response = self.request(method="post", url=url, data=data, json=json, **kwargs)
-        if response:
-            try:
-                data = response.json()
-                return data
-            except Exception as e:
-                logger.debug(f"解析JSON失败: {e}")
-                return None
-            finally:
+        try:
+            if response:
+                try:
+                    data = response.json()
+                    return data
+                except Exception as e:
+                    logger.debug(f"解析JSON失败: {e}")
+                    return None
+            return None
+        finally:
+            if response is not None:
                 response.close()
-        return None

     @staticmethod
     def parse_cache_control(header: str) -> Tuple[str, Optional[int]]:
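The repeated `if response:` → `if response is not None:` change in the cleanup paths is not cosmetic: `requests.Response` defines `__bool__` from the status code (falsy for 4xx/5xx), so a truthiness check would skip `close()` on an error response and leak the connection. A minimal illustration with a stand-in class mimicking that behavior:

```python
class FakeResponse:
    """Stand-in mimicking requests.Response truthiness, which is
    False for error status codes even though the object exists."""
    def __init__(self, status_code: int):
        self.status_code = status_code
        self.closed = False

    def __bool__(self):
        return self.status_code < 400

    def close(self):
        self.closed = True

resp = FakeResponse(404)
# A truthiness check would treat this real 404 response like "no response":
skipped_by_truthiness = not bool(resp)
# The identity check closes it regardless of status:
if resp is not None:
    resp.close()
```

`is not None` keeps the two distinct cases apart: "the request never produced a response" versus "the response came back with an error status but still holds resources".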
@@ -575,7 +581,9 @@ class AsyncRequestUtils:
                  timeout: int = None,
                  referer: str = None,
                  content_type: str = None,
-                 accept_type: str = None):
+                 accept_type: str = None,
+                 verify: bool = False,
+                 follow_redirects: bool = True):
         """
         :param headers: 请求头部信息
         :param ua: User-Agent字符串
@@ -586,10 +594,14 @@ class AsyncRequestUtils:
         :param referer: Referer头部信息
         :param content_type: 请求的Content-Type,默认为 "application/x-www-form-urlencoded; charset=UTF-8"
         :param accept_type: Accept头部信息,默认为 "application/json"
+        :param verify: 是否校验证书
+        :param follow_redirects: 客户端默认是否跟随重定向
         """
         self._proxies = self._convert_proxies_for_httpx(proxies)
         self._client = client
         self._timeout = timeout or 20
+        self._verify = verify
+        self._follow_redirects = follow_redirects
         if not content_type:
             content_type = "application/x-www-form-urlencoded; charset=UTF-8"
         if headers:
@@ -654,7 +666,7 @@ class AsyncRequestUtils:
             response = await self.request(method=method, url=url, **kwargs)
             yield response
         finally:
-            if response:
+            if response is not None:
                 try:
                     await response.aclose()
                 except Exception as e:
@@ -675,8 +687,8 @@ class AsyncRequestUtils:
         async with httpx.AsyncClient(
                 proxy=self._proxies,
                 timeout=self._timeout,
-                verify=False,
-                follow_redirects=True,
+                verify=self._verify,
+                follow_redirects=self._follow_redirects,
                 cookies=self._cookies  # 在创建客户端时传入Cookie
         ) as client:
             return await self._make_request(client, method, url, raise_exception, **kwargs)
@@ -711,16 +723,18 @@ class AsyncRequestUtils:
         :return: 响应的内容,若发生RequestError则返回None
         """
         response = await self.request(method="get", url=url, params=params, **kwargs)
-        if response:
-            try:
-                content = response.text
-                return content
-            except Exception as e:
-                logger.debug(f"处理异步响应内容失败: {e}")
-                return None
-            finally:
-                await response.aclose()  # 确保连接被关闭
-        return None
+        try:
+            if response:
+                try:
+                    content = response.text
+                    return content
+                except Exception as e:
+                    logger.debug(f"处理异步响应内容失败: {e}")
+                    return None
+            return None
+        finally:
+            if response is not None:
+                await response.aclose()

     async def post(self, url: str, data: Any = None, json: dict = None, **kwargs) -> Optional[httpx.Response]:
         """
@@ -785,7 +799,7 @@ class AsyncRequestUtils:
         try:
             yield response
         finally:
-            if response:
+            if response is not None:
                 await response.aclose()

     async def post_res(self,
@@ -887,16 +901,18 @@ class AsyncRequestUtils:
         :return: JSON数据,若发生异常则返回None
         """
         response = await self.request(method="get", url=url, params=params, **kwargs)
-        if response:
-            try:
-                data = response.json()
-                return data
-            except Exception as e:
-                logger.debug(f"解析异步JSON失败: {e}")
-                return None
-            finally:
+        try:
+            if response:
+                try:
+                    data = response.json()
+                    return data
+                except Exception as e:
+                    logger.debug(f"解析异步JSON失败: {e}")
+                    return None
+            return None
+        finally:
+            if response is not None:
                 await response.aclose()
-        return None

     async def post_json(self, url: str, data: Any = None, json: dict = None, **kwargs) -> Optional[dict]:
         """
@@ -910,13 +926,15 @@ class AsyncRequestUtils:
         if json is None:
             json = {}
         response = await self.request(method="post", url=url, data=data, json=json, **kwargs)
-        if response:
-            try:
-                data = response.json()
-                return data
-            except Exception as e:
-                logger.debug(f"解析异步JSON失败: {e}")
-                return None
-            finally:
+        try:
+            if response:
+                try:
+                    data = response.json()
+                    return data
+                except Exception as e:
+                    logger.debug(f"解析异步JSON失败: {e}")
+                    return None
+            return None
+        finally:
+            if response is not None:
                 await response.aclose()
-        return None
app/utils/identity.py (new file, 27 lines)
@@ -0,0 +1,27 @@
from typing import Optional, Union

# 后台任务会话使用的内部占位用户ID。
# 它只用于在 agent/memory/session 侧标识“系统触发的任务”,
# 不能直接作为真实消息接收人下发到 Telegram/企业微信 等通知渠道。
SYSTEM_INTERNAL_USER_ID = "system"


def is_internal_user_id(userid: Optional[Union[str, int]]) -> bool:
    """
    判断是否为系统内部占位用户ID。
    """
    return (
        isinstance(userid, str)
        and userid.strip().lower() == SYSTEM_INTERNAL_USER_ID
    )


def normalize_internal_user_id(
    userid: Optional[Union[str, int]]
) -> Optional[Union[str, int]]:
    """
    将系统内部占位用户ID归一化为 None,避免被通知渠道误认为真实接收人。
    """
    if is_internal_user_id(userid):
        return None
    return userid
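The contract of these helpers is easiest to see with a few calls. A self-contained restatement (same logic as the file above, condensed for a runnable demo):

```python
from typing import Optional, Union

SYSTEM_INTERNAL_USER_ID = "system"

def is_internal_user_id(userid: Optional[Union[str, int]]) -> bool:
    # Only strings can be the placeholder; integer ids pass through untouched
    return isinstance(userid, str) and userid.strip().lower() == SYSTEM_INTERNAL_USER_ID

def normalize_internal_user_id(
    userid: Optional[Union[str, int]],
) -> Optional[Union[str, int]]:
    # The placeholder becomes None so notification channels never
    # try to deliver to a non-existent "system" recipient
    return None if is_internal_user_id(userid) else userid

print(normalize_internal_user_id(" SYSTEM "))  # → None
```

Matching case-insensitively and after stripping whitespace makes the check robust to how the placeholder was stored or round-tripped.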
app/utils/stdio.py (new file, 84 lines)
@@ -0,0 +1,84 @@
from __future__ import annotations

import io
import logging
import sys
import threading
from logging.handlers import RotatingFileHandler
from pathlib import Path


class RotatingLineStream(io.TextIOBase):
    """
    将 stdout/stderr 按行写入滚动日志文件。

    这里不复用业务 logger,避免 stdout 日志再次回流到控制台或普通业务日志文件,
    同时保证启动阶段的 print/uvicorn 输出也能按配置滚动。
    """

    def __init__(self, log_file: Path, max_bytes: int, backup_count: int):
        super().__init__()
        self._buffer = ""
        self._lock = threading.Lock()

        logger_name = f"moviepilot-stdio::{log_file}"
        self._logger = logging.getLogger(logger_name)
        self._logger.setLevel(logging.INFO)
        self._logger.propagate = False
        self._logger.handlers.clear()

        handler = RotatingFileHandler(
            filename=str(log_file),
            maxBytes=max_bytes,
            backupCount=backup_count,
            encoding="utf-8",
        )
        handler.setFormatter(logging.Formatter("%(message)s"))
        self._logger.addHandler(handler)

    @property
    def encoding(self) -> str:
        return "utf-8"

    def writable(self) -> bool:
        return True

    def isatty(self) -> bool:
        return False

    def write(self, message: str) -> int:
        if not message:
            return 0

        with self._lock:
            self._buffer += message.replace("\r\n", "\n")
            while "\n" in self._buffer:
                line, self._buffer = self._buffer.split("\n", 1)
                self._logger.info(line)
        return len(message)

    def flush(self) -> None:
        with self._lock:
            if self._buffer:
                self._logger.info(self._buffer)
                self._buffer = ""
            for handler in self._logger.handlers:
                handler.flush()


def configure_rotating_stdio(
    *, log_file: Path, max_bytes: int, backup_count: int
) -> RotatingLineStream:
    """
    将当前进程的 stdout/stderr 统一重定向到同一个滚动日志流。
    """

    log_file.parent.mkdir(parents=True, exist_ok=True)
    stream = RotatingLineStream(
        log_file=log_file,
        max_bytes=max_bytes,
        backup_count=backup_count,
    )
    sys.stdout = stream
    sys.stderr = stream
    return stream
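The core of `RotatingLineStream` is its line buffering: partial `write()` calls accumulate until a newline arrives, so interleaved `print` fragments still reach the handler as whole lines. A minimal sketch of just that splitting logic, collecting lines into a list instead of a logging handler:

```python
import io

class LineCollector(io.TextIOBase):
    """Minimal stand-in for RotatingLineStream: buffer partial writes
    and emit only complete lines (here into a list, not a log file)."""

    def __init__(self):
        super().__init__()
        self._buffer = ""
        self.lines = []

    def writable(self) -> bool:
        return True

    def write(self, message: str) -> int:
        if not message:
            return 0
        # Normalize Windows line endings, then emit every complete line
        self._buffer += message.replace("\r\n", "\n")
        while "\n" in self._buffer:
            line, self._buffer = self._buffer.split("\n", 1)
            self.lines.append(line)
        return len(message)

    def flush(self) -> None:
        # A trailing fragment without a newline is emitted on flush
        if self._buffer:
            self.lines.append(self._buffer)
            self._buffer = ""

stream = LineCollector()
stream.write("hello ")
stream.write("world\npartial")
stream.flush()
```

The real class adds a `threading.Lock` around the buffer (stdout can be written from many threads) and routes each line through a dedicated non-propagating logger with a `RotatingFileHandler`.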
@@ -85,8 +85,11 @@ RUN FRONTEND_VERSION=$(sed -n "s/^FRONTEND_VERSION\s*=\s*'\([^']*\)'/\1/p" /app/
|
||||
&& mv -f /tmp/MoviePilot-Plugins-main/plugins.v2/* /app/app/plugins/ \
|
||||
&& cat /tmp/MoviePilot-Plugins-main/package.json | jq -r 'to_entries[] | select(.value.v2 == true) | .key' | awk '{print tolower($0)}' | \
|
||||
while read -r i; do if [ ! -d "/app/app/plugins/$i" ]; then mv "/tmp/MoviePilot-Plugins-main/plugins/$i" "/app/app/plugins/"; else echo "跳过 $i"; fi; done \
|
||||
&& curl -sL "https://github.com/jxxghp/MoviePilot-Resources/archive/refs/heads/main.zip" | busybox unzip -d /tmp - \
|
||||
&& mv -f /tmp/MoviePilot-Resources-main/resources.v2/* /app/app/helper/
|
||||
&& curl -sL "https://raw.githubusercontent.com/jxxghp/MoviePilot-Resources/main/resources.v2/user.sites.v2.bin" -o /app/app/helper/user.sites.v2.bin \
|
||||
&& python_ver=$(python3 -c 'import sys; print(f"cpython-{sys.version_info.major}{sys.version_info.minor}")') \
|
||||
&& ARCH=$(uname -m) \
|
||||
&& if [ "$ARCH" = "aarch64" ]; then SUFFIX="aarch64-linux-gnu"; else SUFFIX="x86_64-linux-gnu"; fi \
|
||||
&& curl -sL "https://raw.githubusercontent.com/jxxghp/MoviePilot-Resources/main/resources.v2/sites.${python_ver}-${SUFFIX}.so" -o "/app/app/helper/sites.${python_ver}-${SUFFIX}.so"
|
||||
|
||||
# final 阶段: 安装运行时依赖和配置最终镜像
|
||||
FROM prepare_package AS final
@@ -143,14 +143,24 @@ function install_backend_and_download_resources() {
    cp -a /plugins/* /app/app/plugins/
    # Update site resources
    INFO "→ Updating site resources..."
    if ! download_and_unzip "${GITHUB_PROXY}https://github.com/jxxghp/MoviePilot-Resources/archive/refs/heads/main.zip" "Resources"; then
        cp -a /resources_bakcup/* /app/app/helper/
        rm -rf /resources_bakcup
        WARN "Site resource download failed; starting with the old resources..."
        return 1
    fi
    python_version=$(python3 -c 'import sys; print(f"cpython-{sys.version_info.major}{sys.version_info.minor}")')
    arch=$(uname -m)
    if [ "$arch" = "aarch64" ]; then
        arch_suffix="aarch64-linux-gnu"
    else
        arch_suffix="x86_64-linux-gnu"
    fi
    INFO "Current Python version: ${python_version}, architecture: ${arch}"
    # Download user.sites.v2.bin
    if ! curl ${CURL_OPTIONS} "${GITHUB_PROXY}https://raw.githubusercontent.com/jxxghp/MoviePilot-Resources/main/resources.v2/user.sites.v2.bin" -o /app/app/helper/user.sites.v2.bin; then
        cp -a /resources_bakcup/user.sites.v2.bin /app/app/helper/
        WARN "Download of user.sites.v2.bin failed; starting with the old resources..."
    fi
    # Download the sites library for the current platform
    sites_file="sites.${python_version}-${arch_suffix}.so"
    if ! curl ${CURL_OPTIONS} "${GITHUB_PROXY}https://raw.githubusercontent.com/jxxghp/MoviePilot-Resources/main/resources.v2/${sites_file}" -o "/app/app/helper/${sites_file}"; then
        WARN "Download of ${sites_file} failed; starting with the old resources..."
    fi
    # Copy the new site resources
    cp -a ${TMP_PATH}/Resources/resources.v2/* /app/app/helper/
    INFO "Site resources updated successfully"
    # Clean up the temporary directory
    rm -rf "${TMP_PATH}"
docs/cli.md (new file, 473 lines)
@@ -0,0 +1,473 @@
# MoviePilot CLI

`moviepilot` is the all-in-one entry point for MoviePilot's local source-code mode. It handles local installation, initialization, updates, and management of the frontend and backend services.

## One-Click Install

```shell
curl -fsSL https://raw.githubusercontent.com/jxxghp/MoviePilot/v2/scripts/bootstrap-local.sh | bash
```

The script automatically:

- Detects the operating system
- Checks for and, where possible, installs `git`, `curl`, and `Python 3.11+`
- Clones `MoviePilot`
- Installs backend dependencies
- Downloads `dist.zip` from the latest `MoviePilot-Frontend` release
- Downloads resources from the `MoviePilot-Resources` main branch
- Syncs `resources.v2/*` into the backend `app/helper` directory
- Downloads a local Node runtime and installs the frontend runtime dependencies
- Runs the initialization wizard
- Creates a global `moviepilot` command
- Starts the frontend and backend services by default

Notes:

- If a usable `Python 3.11+` is already present, the script reuses the local interpreter first
- If no usable `Python 3.11+` is found, the script then tries to provision the runtime automatically
- Installing system dependencies on Linux usually requires `sudo`
- When reusing an existing repository, the script now only blocks automatic updates for modified tracked source files; it is no longer tripped up by untracked files such as `.DS_Store`
If the current terminal still cannot find `moviepilot` after installation:

- Reopen the terminal
- If the script reported using `~/.local/bin`, run `source ~/.zshrc` or `source ~/.bashrc`

## Configuration Directory

The local CLI keeps the configuration directory outside the program directory by default, so deleting the program directory does not take the configuration with it.

- macOS: `~/Library/Application Support/MoviePilot`
- Linux: `${XDG_CONFIG_HOME:-~/.config}/moviepilot`

When the one-click install script runs in an interactive terminal, or when `moviepilot setup` / `moviepilot init` is run without `--config-dir`, the program first asks for the configuration directory, presenting the default paths above as the default value.

You can specify it manually during installation or initialization:

```shell
moviepilot setup --config-dir /path/to/moviepilot-config
moviepilot init --config-dir /path/to/moviepilot-config
```

To see the configuration directory actually in use:

```shell
moviepilot config path
```
## Directory Layout

- Backend code: the repository root
- External configuration directory: the `Config Dir` printed by `moviepilot config path`
- Frontend static files: `public/`
- Local Node runtime for the frontend: `.runtime/node/`
- Backend log: `<Config Dir>/logs/moviepilot.log`
- Backend startup log: `<Config Dir>/logs/moviepilot.stdout.log`
  (also governed by `LOG_MAX_FILE_SIZE` and `LOG_BACKUP_COUNT`)
- Frontend startup log: `<Config Dir>/logs/moviepilot.frontend.stdout.log`
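The `LOG_MAX_FILE_SIZE` / `LOG_BACKUP_COUNT` rotation mentioned above maps onto Python's standard `RotatingFileHandler`; a minimal sketch of the wiring (the path and limits here are illustrative, not MoviePilot's actual defaults):

```python
import logging
import tempfile
from logging.handlers import RotatingFileHandler
from pathlib import Path

# Illustrative values standing in for LOG_MAX_FILE_SIZE / LOG_BACKUP_COUNT
log_file = Path(tempfile.mkdtemp()) / "moviepilot.stdout.log"
handler = RotatingFileHandler(log_file, maxBytes=5 * 1024 * 1024, backupCount=3)
handler.setFormatter(logging.Formatter("%(asctime)s - %(message)s"))

logger = logging.getLogger("moviepilot.stdout.demo")
logger.setLevel(logging.INFO)
logger.addHandler(handler)
logger.info("startup line")  # rolls over to .1, .2, ... once maxBytes is exceeded
```

Once a handler like this backs the redirected stdout stream, every line printed by the process ends up in the rotating file instead of the terminal.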
## Help and Discovery

Top-level help:

```shell
moviepilot --help
moviepilot help
moviepilot commands
```

Per-command help:

```shell
moviepilot help install
moviepilot help init
moviepilot help setup
moviepilot help uninstall
moviepilot help update
moviepilot help agent
moviepilot help config
moviepilot help config set
moviepilot help tool
moviepilot help scheduler
```

Configuration key listing and descriptions:

```shell
moviepilot config keys
moviepilot config keys API
moviepilot config describe API_TOKEN
```

Dynamic tool listing and parameter descriptions:

```shell
moviepilot tool list
moviepilot tool show <tool_name>
```
## Full Command List

```text
moviepilot install deps
moviepilot install frontend
moviepilot install resources
moviepilot init
moviepilot setup
moviepilot uninstall
moviepilot update backend
moviepilot update frontend
moviepilot update all
moviepilot startup enable
moviepilot startup disable
moviepilot startup status
moviepilot agent
moviepilot start
moviepilot stop
moviepilot restart
moviepilot status
moviepilot logs
moviepilot version
moviepilot config path
moviepilot config list
moviepilot config get
moviepilot config set
moviepilot config keys
moviepilot config describe
moviepilot tool list
moviepilot tool show
moviepilot tool run
moviepilot scheduler list
moviepilot scheduler run
moviepilot help
moviepilot commands
```
## Install Commands

Install backend dependencies:

```shell
moviepilot install deps
moviepilot install deps --python python3.11
moviepilot install deps --venv /path/to/venv
moviepilot install deps --recreate
moviepilot install deps --config-dir /path/to/moviepilot-config
```

Notes:

- By default, a locally installed `Python 3.11+` interpreter is selected automatically

Install a frontend release:

```shell
moviepilot install frontend
moviepilot install frontend --version latest
moviepilot install frontend --version v2.9.31
moviepilot install frontend --node-version 20.12.1
moviepilot install frontend --config-dir /path/to/moviepilot-config
```

Notes:

- Downloads `dist.zip` from the latest `MoviePilot-Frontend` release by default
- Installs a local Node runtime automatically
- Installs the runtime dependencies required by `service.js` automatically

Install resource files:

```shell
moviepilot install resources
moviepilot install resources --resources-repo /path/to/MoviePilot-Resources
moviepilot install resources --resource-dir /path/to/resources.v2
moviepilot install resources --config-dir /path/to/moviepilot-config
```

Notes:

- Downloads the `MoviePilot-Resources` main-branch archive directly from GitHub by default
- Copies `resources.v2/*` wholesale into `app/helper`
- This step mirrors the Docker build process
## Initialization Commands

Initialize the local configuration:

```shell
moviepilot init
moviepilot init --wizard
moviepilot init --skip-resources
moviepilot init --force-token
moviepilot init --superuser admin --superuser-password 'ChangeMe123!'
moviepilot init --config-dir /path/to/moviepilot-config
```

All-in-one setup:

```shell
moviepilot setup
moviepilot setup --wizard
moviepilot setup --frontend-version latest
moviepilot setup --node-version 20.12.1
moviepilot setup --skip-resources
moviepilot setup --recreate
moviepilot setup --superuser admin --superuser-password 'ChangeMe123!'
moviepilot setup --config-dir /path/to/moviepilot-config
```

`moviepilot setup` runs the following steps in sequence:

1. Install backend dependencies
2. Download and install a frontend release
3. Download and sync resource files
4. Initialize the local configuration
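The sequential steps above amount to run-in-order-and-stop-on-failure orchestration; a rough sketch of the shape (the real CLI wires these stages internally, and a real runner would call `subprocess.run(cmd).returncode`):

```python
def run_steps(steps, runner):
    """Run setup steps in order, stopping at the first failure (illustrative only)."""
    completed = []
    for cmd in steps:
        if runner(cmd) != 0:
            return completed, cmd  # steps that finished, plus the failing step
        completed.append(cmd)
    return completed, None

steps = [
    ["moviepilot", "install", "deps"],
    ["moviepilot", "install", "frontend"],
    ["moviepilot", "install", "resources"],
    ["moviepilot", "init"],
]

# A fake runner that fails the resources step, to show the stop-on-failure behavior
fake_runner = lambda cmd: 1 if "resources" in cmd else 0
done, failed = run_steps(steps, fake_runner)
print(len(done), failed)  # → 2 ['moviepilot', 'install', 'resources']
```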
`--wizard` starts the interactive initialization wizard, which supports configuring:

- `API_TOKEN`
- The superuser name and password
- The database type
  (defaults to `SQLite`; can be switched to `PostgreSQL` with host, port, database name, username, and password)
- The default download and media library directories
- The AI Agent
  (optional; configures `LLM_PROVIDER`, `LLM_MODEL`, `LLM_API_KEY`, and `LLM_BASE_URL`)
- User site authentication
  (optionally select sites to authenticate against, supplying the username, UID, Passkey, and other parameters each site requires)
- Launch at login
  (optional; MoviePilot registers login autostart for the current operating system)
- Downloaders
- Media servers
- Notification channels

To preset the superuser during an automated install, the options can also be passed through the one-click install script:

```shell
curl -fsSL https://raw.githubusercontent.com/jxxghp/MoviePilot/v2/scripts/bootstrap-local.sh | \
  bash -s -- --superuser admin --superuser-password 'ChangeMe123!'
```

Notes:

- `--superuser-password` suits automation, but be aware that the command, including the password, may end up in your shell history
- The interactive `--wizard` prompts for the superuser name and password during initialization
## Startup Commands

Manage launch-at-login for the current local installation:

```shell
moviepilot startup status
moviepilot startup enable
moviepilot startup disable
moviepilot startup enable --venv /path/to/venv
moviepilot startup enable --config-dir /path/to/moviepilot-config
```

Notes:

- macOS uses a `LaunchAgent`
- Linux prefers `systemd --user` and falls back to `XDG autostart` automatically when it is unavailable
- Windows uses the current user's Startup folder
- The registered startup entry invokes the local CLI's unified start entry point, so it launches both the backend and the frontend
## Uninstall Command

Remove local installation artifacts:

```shell
moviepilot uninstall
moviepilot uninstall --venv /path/to/venv
moviepilot uninstall --config-dir /path/to/moviepilot-config
```

Notes:

- Uninstalling first stops the frontend and backend services managed by the CLI
- It removes the local virtual environment, the frontend runtime, the local Node runtime, the global `moviepilot` symlink, and the resource files synced into `app/helper`
- If launch-at-login was registered, it is unregistered as well
- You are asked whether to delete the configuration directory too; the default is no
- If the legacy in-repository `config/` directory is in use, confirming deletion also removes the configuration files inside it, such as `category.yaml`
- The whole uninstall flow includes two confirmations
- The source directory is kept; to remove the repository entirely, delete the project directory manually after confirming
## Update Commands

Update the backend:

```shell
moviepilot update backend
moviepilot update backend --ref latest
moviepilot update backend --ref v2
moviepilot update backend --ref v2.9.31
```

Update the frontend:

```shell
moviepilot update frontend
moviepilot update frontend --frontend-version latest
moviepilot update frontend --frontend-version v2.9.31
```

Update everything:

```shell
moviepilot update all
moviepilot update all --ref latest --frontend-version latest
moviepilot update all --skip-resources
```

Notes:

- `update backend` updates the Git repository and reinstalls the backend dependencies
- `update frontend` downloads and replaces the frontend release
- `update all` updates both backend and frontend, and syncs resource files by default
- Run `moviepilot stop` before updating
## Agent Command

Send a one-off request to the agent:

```shell
moviepilot agent "help me analyze why the last search failed"
moviepilot agent --user-id admin "check my current downloader configuration"
moviepilot agent --session cli-debug-1 "find out why automatic organizing is not running"
moviepilot agent --new-session "summarize any obvious problems in the current system configuration"
```

Notes:

- `moviepilot agent` issues a single agent request directly in the local environment
- By default each invocation can create a new session; a session ID can also be pinned with `--session`
- Before use, the LLM parameters must be configured correctly and `AI_AGENT_ENABLE` turned on
## Service Management Commands

`moviepilot start/stop/restart/status` manages the frontend and backend together.

Start, stop, restart, and status:

```shell
moviepilot start
moviepilot start --timeout 60
moviepilot stop
moviepilot stop --timeout 30 --force
moviepilot restart
moviepilot restart --start-timeout 60 --stop-timeout 30
moviepilot status
moviepilot version
```

Notes:

- `start` launches the backend first, then the frontend
- With `MOVIEPILOT_AUTO_UPDATE=release|true|dev` set, `start/restart` makes a best-effort local auto-update before starting; an update failure only warns and does not block the start
- Restarts triggered through the system's built-in restart entry reuse the same frontend/backend process management in local CLI mode
- The frontend listens on `NGINX_PORT`, defaulting to `3000`
- The backend listens on `PORT`, defaulting to `3001`
- The frontend proxies `/api` and `/cookiecloud` to the backend via `service.js`
- The local frontend proxy first confirms the backend is reachable at startup; if the backend stays unavailable for too long, the frontend exits as well, so you are never left with half a deployment
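The readiness check described above boils down to polling the backend port before the frontend proxy starts serving; the real check lives in `service.js`, but the idea can be sketched in a few lines of Python:

```python
import socket
import time

def wait_for_backend(host: str = "127.0.0.1", port: int = 3001, timeout: float = 30.0) -> bool:
    """Poll a TCP port until it accepts connections, or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True  # backend is accepting connections
        except OSError:
            time.sleep(0.5)  # not up yet; back off briefly and retry
    return False
```

Exiting the frontend when this kind of check times out is what prevents a half-running deployment.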
Logs:

```shell
moviepilot logs
moviepilot logs --lines 100
moviepilot logs --stdio
moviepilot logs --frontend
moviepilot logs --follow
moviepilot logs --frontend --follow
moviepilot logs --stdio --follow
```

Notes:

- `logs` shows the backend application log by default
- `--stdio` shows the backend startup standard output
- `--frontend` shows the frontend startup standard output
## Config Commands

Show the configuration path:

```shell
moviepilot config path
```

Show the current configuration:

```shell
moviepilot config list
moviepilot config list --show-secrets
```

Read and write a single setting:

```shell
moviepilot config get PORT
moviepilot config set PORT 3001
moviepilot config set NGINX_PORT 3000
moviepilot config set API_TOKEN your-token-here
```

List all configurable keys:

```shell
moviepilot config keys
moviepilot config keys DB_
moviepilot config keys --show-current
moviepilot config keys --show-current --show-secrets
moviepilot config describe PORT
moviepilot config describe API_TOKEN --show-secrets
```

Notes:

- `config list` shows the current configuration values
- `config keys` shows each key's name, type, and default value
- `config describe` shows a single key's type, default value, and current value
## Tool Commands

List all MCP tools:

```shell
moviepilot tool list
```

Show a single tool's parameters:

```shell
moviepilot tool show query_schedulers
moviepilot tool show search_torrents
```

Run a tool:

```shell
moviepilot tool run query_schedulers
moviepilot tool run search_torrents media_type=movie tmdb_id=12345
```

Notes:

- `tool list` dynamically discovers the tools callable on the current service
- `tool show` prints each parameter's name, type, and description
- `tool run` arguments always take the form `key=value`
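The fixed `key=value` convention can be parsed in a few lines; a sketch of the shape such parsing takes (values stay strings here; the real CLI may coerce types from the tool's parameter schema):

```python
def parse_tool_args(tokens: list[str]) -> dict[str, str]:
    """Parse `key=value` tokens as accepted by `moviepilot tool run` (illustrative)."""
    args = {}
    for token in tokens:
        if "=" not in token:
            raise ValueError(f"expected key=value, got: {token}")
        # split on the first '=' only, so values may themselves contain '='
        key, value = token.split("=", 1)
        args[key] = value
    return args

print(parse_tool_args(["media_type=movie", "tmdb_id=12345"]))
# → {'media_type': 'movie', 'tmdb_id': '12345'}
```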
## Scheduler Commands

List scheduled jobs:

```shell
moviepilot scheduler list
```

Run a scheduled job immediately:

```shell
moviepilot scheduler run subscribe_refresh
```
@@ -6,7 +6,7 @@

Before you begin, make sure the following software is installed on your system:

- **Python 3.11 or later**
- **pip** (the Python package manager)
- **Git** (for version control)
@@ -119,4 +119,4 @@ safety check -r requirements.txt --policy-file=safety.policy.yml > safety_report

### 5. Reference Resources

- [pip-tools official documentation](https://github.com/jazzband/pip-tools)
- [safety official documentation](https://pyup.io/safety/)