Compare commits

...

32 Commits

Author SHA1 Message Date
jxxghp
79bfeaf2af Remove stream reset before tool calls, keep model thinking text visible 2026-04-25 23:12:34 +08:00
jxxghp
4fe41ba5e9 Update base.py 2026-04-25 22:16:15 +08:00
jxxghp
14d6e2febc Refine agent prompts for concise professional replies 2026-04-25 22:04:35 +08:00
jxxghp
97c7e71207 Update Agent Prompt.txt 2026-04-25 21:51:47 +08:00
jxxghp
8f29a218ea chore: bump version to v2.10.5 2026-04-25 12:55:33 +08:00
jxxghp
4fd5aa3eb6 fix: improve DeepSeek reasoning_content payload handling and update langchain dependencies 2026-04-25 12:46:21 +08:00
jxxghp
bfc27d151c Update ask_user_choice.py 2026-04-25 11:36:36 +08:00
jxxghp
f2b56b8f40 Update ask_user_choice.py 2026-04-25 11:35:32 +08:00
jxxghp
a05ffc07d4 refactor: remove legacy LLM_DISABLE_THINKING and LLM_REASONING_EFFORT config, unify thinking_level handling
- Eliminate support for LLM_DISABLE_THINKING and LLM_REASONING_EFFORT in config, code, and tests
- Simplify LLM thinking level logic to rely solely on LLM_THINKING_LEVEL
- Refactor LLMHelper and related endpoints to remove legacy parameter handling
- Update system API and test utilities to match new configuration structure
- Minor code cleanup and formatting improvements
2026-04-25 10:42:03 +08:00
jxxghp
4a81417fb7 fix: preserve deepseek reasoning content in tool loops 2026-04-25 09:37:01 +08:00
jxxghp
c7fa3dc863 feat: unify llm thinking level controls 2026-04-24 19:50:23 +08:00
jxxghp
28f9756dd6 feat: improve skill instructions with highlighted command formatting 2026-04-22 18:12:21 +08:00
jxxghp
4bffe2cff1 chore: bump version to v2.10.4 2026-04-22 18:02:28 +08:00
jxxghp
fca478f1d8 feat: support custom skill sources in /skills 2026-04-22 18:00:57 +08:00
Sebastian
097dff13a3 feat: add ai-compatible API endpoints 2026-04-22 17:21:43 +08:00
jxxghp
460b386004 feat: add searchable skills marketplace 2026-04-22 16:49:42 +08:00
jxxghp
89bf89c02d feat: add clawhub skill registry source 2026-04-22 16:22:10 +08:00
jxxghp
cefb60ba2c refactor: unify message interactions 2026-04-22 15:18:04 +08:00
jxxghp
8c78627647 feat: add skills marketplace management 2026-04-22 14:55:00 +08:00
jxxghp
51189210c2 Update config.py 2026-04-22 10:39:25 +08:00
jxxghp
38933d5882 feat(agent): support disabling model thinking 2026-04-22 10:36:36 +08:00
jxxghp
4619fc4042 Update version.py 2026-04-21 22:25:57 +08:00
jxxghp
ee7ba28235 Allow LLM test to use request payload 2026-04-21 22:14:19 +08:00
笨笨
409abb66be test: remove absolute path from llm helper test 2026-04-21 20:39:32 +08:00
笨笨
8aa8b1897b feat: add llm test endpoint 2026-04-21 20:39:32 +08:00
jxxghp
8c256d91bd refine custom identifier skill scope 2026-04-21 17:31:37 +08:00
jxxghp
d1d3fc7f30 Update media.py 2026-04-21 14:38:16 +08:00
jxxghp
ae15eac0f8 feat: normalize internal system user ID in notification dispatch
- Add SYSTEM_INTERNAL_USER_ID constant and helpers to app.utils.identity
- Ensure internal user ID is normalized to None before dispatching notifications, preventing misrouting to external channels
- Refactor MessageChain to use normalization for all message dispatch methods
- Add tests for internal user ID normalization and notification dispatch behavior
2026-04-21 14:32:14 +08:00
jxxghp
1282ad5004 feat: improve local CLI startup management 2026-04-21 11:26:56 +08:00
笨笨
6f6fcc79f2 fix: serialize rclone folder creation during concurrent transfers 2026-04-20 21:34:35 +08:00
jxxghp
e5c64e73b5 docs: add English README 2026-04-20 19:46:34 +08:00
jxxghp
93a19b467b Add uninstall workflow to local CLI 2026-04-20 13:38:06 +08:00
56 changed files with 9875 additions and 1258 deletions

View File

@@ -1,5 +1,7 @@
# MoviePilot
简体中文 | [English](README_EN.md)
![GitHub Repo stars](https://img.shields.io/github/stars/jxxghp/MoviePilot?style=for-the-badge)
![GitHub forks](https://img.shields.io/github/forks/jxxghp/MoviePilot?style=for-the-badge)
![GitHub contributors](https://img.shields.io/github/contributors/jxxghp/MoviePilot?style=for-the-badge)

README_EN.md (new file, 77 lines)

@@ -0,0 +1,77 @@
# MoviePilot
[简体中文](README.md) | English
![GitHub Repo stars](https://img.shields.io/github/stars/jxxghp/MoviePilot?style=for-the-badge)
![GitHub forks](https://img.shields.io/github/forks/jxxghp/MoviePilot?style=for-the-badge)
![GitHub contributors](https://img.shields.io/github/contributors/jxxghp/MoviePilot?style=for-the-badge)
![GitHub repo size](https://img.shields.io/github/repo-size/jxxghp/MoviePilot?style=for-the-badge)
![GitHub issues](https://img.shields.io/github/issues/jxxghp/MoviePilot?style=for-the-badge)
![Docker Pulls](https://img.shields.io/docker/pulls/jxxghp/moviepilot?style=for-the-badge)
![Docker Pulls V2](https://img.shields.io/docker/pulls/jxxghp/moviepilot-v2?style=for-the-badge)
![Platform](https://img.shields.io/badge/platform-Windows%20%7C%20Linux%20%7C%20Synology-blue?style=for-the-badge)
Redesigned from parts of [NAStool](https://github.com/NAStool/nas-tools), with a stronger focus on core automation scenarios while reducing issues and making the project easier to extend and maintain.
# For learning and personal communication only. Please do not promote this project on platforms in mainland China.
Release channel: https://t.me/moviepilot_channel
## Key Features
- Frontend/backend separation based on FastAPI + Vue3.
- Focuses on core needs, simplifying features and settings; most options work well at their sensible defaults.
- Reworked user interface for a cleaner and more practical experience.
## Installation
Official wiki: https://wiki.movie-pilot.org
## Local CLI
One-command bootstrap script:
```shell
curl -fsSL https://raw.githubusercontent.com/jxxghp/MoviePilot/v2/scripts/bootstrap-local.sh | bash
```
Manage MoviePilot with the `moviepilot` command. Full CLI documentation: [`docs/cli.md`](docs/cli.md)
## Add Skills for AI Agents
```shell
npx skills add https://github.com/jxxghp/MoviePilot
```
## Development
API documentation: https://api.movie-pilot.org
MCP tool API documentation: see [docs/mcp-api.md](docs/mcp-api.md)
Development environment setup and local source-run guide: [`docs/development-setup.md`](docs/development-setup.md)
Plugin development guide: <https://wiki.movie-pilot.org/zh/plugindev>
## Related Projects
- [MoviePilot-Frontend](https://github.com/jxxghp/MoviePilot-Frontend)
- [MoviePilot-Resources](https://github.com/jxxghp/MoviePilot-Resources)
- [MoviePilot-Plugins](https://github.com/jxxghp/MoviePilot-Plugins)
- [MoviePilot-Server](https://github.com/jxxghp/MoviePilot-Server)
- [MoviePilot-Wiki](https://github.com/jxxghp/MoviePilot-Wiki)
## Disclaimer
- This software is for learning and personal communication only. It must not be used for commercial purposes or illegal activities. The developers cannot control how users choose to use it; all responsibility rests with the user.
- The source code is open source and derived from other open-source projects. If someone removes the relevant restrictions and redistributes or publishes modified versions, and this leads to liability, the publisher of those modifications bears full responsibility. Public releases that bypass or alter the user authentication mechanism are not recommended.
- This project does not accept donations and has not published any donation page anywhere. The software itself is free of charge and does not provide paid services. Please verify information carefully to avoid being misled.
## Contributors
<a href="https://github.com/jxxghp/MoviePilot/graphs/contributors">
<img src="https://contrib.rocks/image?repo=jxxghp/MoviePilot" />
</a>

View File

@@ -34,6 +34,7 @@ from app.log import logger
from app.schemas import Notification, NotificationType
from app.schemas.message import ChannelCapabilityManager, ChannelCapability
from app.schemas.types import MessageChannel
from app.utils.identity import SYSTEM_INTERNAL_USER_ID
class AgentChain(ChainBase):
@@ -72,7 +73,7 @@ class _ThinkTagStripper:
on_output(self.buffer[:start_idx])
emitted = True
self.in_think_tag = True
self.buffer = self.buffer[start_idx + 7:]
self.buffer = self.buffer[start_idx + 7 :]
else:
# Check whether the tail is an incomplete prefix of <think>
partial_match = False
@@ -92,7 +93,7 @@ class _ThinkTagStripper:
end_idx = self.buffer.find("</think>")
if end_idx != -1:
self.in_think_tag = False
self.buffer = self.buffer[end_idx + 8:]
self.buffer = self.buffer[end_idx + 8 :]
else:
# Check whether the tail is an incomplete prefix of </think>
partial_match = False
@@ -370,10 +371,6 @@ class MoviePilotAgent:
:param on_token: callback invoked when a valid token is received
"""
stripper = _ThinkTagStripper()
# In non-VERBOSE mode, track the current langgraph_step to detect model output from intermediate steps
# "planning/thinking" text the model emits before a tool call is cleared once a tool_call is detected
current_model_step = -1
has_emitted_in_step = False
async for chunk in agent.astream(
messages,
@@ -387,25 +384,13 @@ class MoviePilotAgent:
if not token or not hasattr(token, "tool_call_chunks"):
continue
# Get current step info
step = metadata.get("langgraph_step", -1) if metadata else -1
if token.tool_call_chunks:
# A tool-call token indicates the current step is an intermediate step
# In non-VERBOSE mode, clear the "planning/thinking" text emitted earlier in this step
if not settings.AI_AGENT_VERBOSE and has_emitted_in_step:
self.stream_handler.reset()
stripper.reset()
has_emitted_in_step = False
# Clear any leftover intermediate <think>-tag state from the stripper's internal buffer
stripper.reset()
continue
# The following handles plain-text tokens (tool_call_chunks is empty)
# Detect step changes and reset per-step emit tracking
if step != current_model_step:
current_model_step = step
has_emitted_in_step = False
# Skip model thinking/reasoning content (e.g. DeepSeek R1's reasoning_content)
additional = getattr(token, "additional_kwargs", None)
if additional and additional.get("reasoning_content"):
@@ -415,8 +400,7 @@ class MoviePilotAgent:
# content may be a string or a list of content blocks; filter out thinking-type blocks
content = self._extract_text_content(token.content)
if content:
if stripper.process(content, on_token):
has_emitted_in_step = True
stripper.process(content, on_token)
stripper.flush(on_token)
@@ -456,7 +440,10 @@ class MoviePilotAgent:
agent=agent,
messages={"messages": messages},
config=agent_config,
on_token=lambda token: (self.stream_handler.emit(token), self._emit_output(token)),
on_token=lambda token: (
self.stream_handler.emit(token),
self._emit_output(token),
),
)
# Stop streaming output; return whether all content was already sent via streamed edits, plus the final text
@@ -543,16 +530,12 @@ class MoviePilotAgent:
"""
Send a message to the user via the original channel
"""
user_id = self.user_id
if self.user_id == "system":
user_id = None
await AgentChain().async_post_message(
Notification(
channel=self.channel,
source=self.source,
mtype=NotificationType.Agent,
userid=user_id,
userid=self.user_id,
username=self.username,
title=title,
text=message,
@@ -853,7 +836,7 @@ class AgentManager:
try:
# Use a unique session_id each time to avoid sharing context
session_id = f"__agent_heartbeat_{uuid.uuid4().hex[:12]}__"
user_id = "system"
user_id = SYSTEM_INTERNAL_USER_ID
logger.info("智能体心跳唤醒:开始检查待处理任务...")
@@ -948,7 +931,7 @@ class AgentManager:
return
session_id = f"__agent_retry_transfer_batch_{uuid.uuid4().hex[:8]}__"
user_id = "system"
user_id = SYSTEM_INTERNAL_USER_ID
ids_str = ", ".join(str(i) for i in history_ids)
logger.info(
@@ -1007,7 +990,6 @@ class AgentManager:
)
try:
await self.process_message(
session_id=session_id,
user_id=user_id,
@@ -1107,7 +1089,7 @@ class AgentManager:
Manually trigger AI organization for a single history record.
"""
session_id = f"__agent_manual_redo_{history_id}_{uuid.uuid4().hex[:8]}__"
user_id = "system"
user_id = SYSTEM_INTERNAL_USER_ID
agent = MoviePilotAgent(
session_id=session_id,
user_id=user_id,
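Note: `app.utils.identity` itself is outside this compare view. Below is a minimal sketch of what the imported helpers likely provide, inferred from the description of commit ae15eac0f8; the constant's exact value and the function signature are assumptions, not code from the repository.

```python
# Hypothetical sketch of app.utils.identity (module not shown in this diff).
from typing import Optional

# Placeholder identity reused by background tasks; the exact value is assumed.
SYSTEM_INTERNAL_USER_ID = "system"


def normalize_internal_user_id(user_id: Optional[str]) -> Optional[str]:
    """Map the internal placeholder user ID to None so notifications fall
    back to default routing instead of targeting an external channel user."""
    if user_id is not None and str(user_id) == SYSTEM_INTERNAL_USER_ID:
        return None
    return user_id
```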

View File

@@ -1,107 +0,0 @@
"""Agent 客户端交互请求管理。"""
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from threading import Lock
from typing import Dict, List, Optional
import uuid
@dataclass(frozen=True)
class AgentInteractionOption:
"""交互选项。"""
label: str
value: str
@dataclass
class PendingAgentInteraction:
"""待处理的 Agent 客户端交互请求。"""
request_id: str
session_id: str
user_id: str
channel: Optional[str]
source: Optional[str]
username: Optional[str]
title: Optional[str]
prompt: str
options: List[AgentInteractionOption]
created_at: datetime = field(default_factory=datetime.now)
class AgentInteractionManager:
"""管理 Agent 发起的客户端交互请求。"""
_ttl = timedelta(hours=24)
def __init__(self):
self._pending_interactions: Dict[str, PendingAgentInteraction] = {}
self._lock = Lock()
def _cleanup_locked(self):
expire_before = datetime.now() - self._ttl
expired_ids = [
request_id
for request_id, request in self._pending_interactions.items()
if request.created_at < expire_before
]
for request_id in expired_ids:
self._pending_interactions.pop(request_id, None)
def create_request(
self,
session_id: str,
user_id: str,
channel: Optional[str],
source: Optional[str],
username: Optional[str],
title: Optional[str],
prompt: str,
options: List[AgentInteractionOption],
) -> PendingAgentInteraction:
with self._lock:
self._cleanup_locked()
request_id = uuid.uuid4().hex[:12]
while request_id in self._pending_interactions:
request_id = uuid.uuid4().hex[:12]
request = PendingAgentInteraction(
request_id=request_id,
session_id=session_id,
user_id=str(user_id),
channel=channel,
source=source,
username=username,
title=title,
prompt=prompt,
options=options,
)
self._pending_interactions[request_id] = request
return request
def resolve(
self,
request_id: str,
option_index: int,
user_id: Optional[str] = None,
) -> Optional[tuple[PendingAgentInteraction, AgentInteractionOption]]:
with self._lock:
self._cleanup_locked()
request = self._pending_interactions.get(request_id)
if not request:
return None
if user_id is not None and str(request.user_id) != str(user_id):
return None
if option_index < 1 or option_index > len(request.options):
return None
option = request.options[option_index - 1]
self._pending_interactions.pop(request_id, None)
return request, option
def clear(self):
with self._lock:
self._pending_interactions.clear()
agent_interaction_manager = AgentInteractionManager()
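For reference, a hedged usage sketch of the manager above; per the import change in `ask_user_choice.py` below, the module now lives at `app.chain.interaction`. All session and user values here are illustrative.

```python
from app.chain.interaction import AgentInteractionOption, agent_interaction_manager

request = agent_interaction_manager.create_request(
    session_id="session-demo",
    user_id="42",
    channel="telegram",
    source=None,
    username="alice",
    title="Pick a result",
    prompt="Which release should I download?",
    options=[
        AgentInteractionOption(label="1080p WEB-DL", value="t1"),
        AgentInteractionOption(label="2160p REMUX", value="t2"),
    ],
)

# option_index is 1-based; a wrong user_id, stale request_id, or
# out-of-range index returns None instead of raising.
resolved = agent_interaction_manager.resolve(request.request_id, 1, user_id="42")
if resolved:
    _, option = resolved
    print(option.value)  # "t1"
```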

View File

@@ -124,34 +124,29 @@ Default memory file: {memory_file}
</agent_memory>
<memory_onboarding>
**IMPORTANT — First-time user detected!**
First-time user detected.
The memory directory is currently empty. This means this is likely the user's first interaction, or their preferences have been reset.
The memory directory is currently empty. This likely means the user has no saved long-term preferences yet.
**Your MANDATORY first action in this conversation:**
Before doing ANYTHING else (before answering questions, before calling tools, before performing any task), you MUST proactively greet the user warmly and ask them about their preferences so you can provide personalized service going forward. Specifically, ask about:
**Behavior requirements:**
- Do NOT interrupt the current task just to collect preferences.
- Do NOT proactively greet warmly, build rapport, or ask a long onboarding questionnaire.
- Default to a concise, professional style until the user states a preference.
- Only ask for preferences when they are directly useful for the current task, or when a short follow-up question at the end would clearly help future interactions.
1. **How to address the user** — Ask what name or nickname they'd like you to call them (e.g., a real name, a nickname, or a fun title). This is the top priority for building a personal connection.
2. **Communication style preference** — Do they prefer a cute/playful tone (with emojis), a formal/professional tone, a concise/minimalist style, or something else?
3. **Media preferences** — What types of media do they primarily care about? (e.g., movies, TV shows, anime, documentaries, etc.)
4. **Quality preferences** — Do they have preferred video quality (4K, 1080p), codecs (H.265, H.264), or subtitle language preferences?
5. **Any other special requests** — Anything else they'd like you to always keep in mind?
**What to collect when useful:**
- Preferred communication style
- Media interests
- Quality / codec / subtitle preferences
- Any standing rules the user wants you to follow
**After the user replies**, you MUST immediately:
1. Use the `write_file` tool to save ALL their preferences to the memory file at: `{memory_file}`
2. Format the memory file in clean Markdown with clear sections (e.g., `## User Profile`, `## Communication Style`, `## Media Preferences`, etc.)
3. The `## User Profile` section MUST include the user's preferred name/nickname at the top
4. Only AFTER saving the preferences, proceed to help with whatever the user originally asked about (if anything)
5. From this point on, always address the user by their preferred name/nickname in conversations
6. You may also create additional `.md` files in the memory directory (`{memory_dir}`) for different topics as needed.
**When the user provides lasting preferences**, you MUST promptly save them to `{memory_file}` using `write_file` or `edit_file`.
**If the user skips the preference questions** and directly asks you to do something:
- Go ahead and help them with their request first
- But still ask about their preferences naturally at the end of the interaction
- Save whatever you learn about them (implicit or explicit) to the memory file
**Example onboarding flow:**
The greeting should introduce yourself, explain this is the first meeting, and ask the above questions in a numbered list. Adapt the tone to your persona defined in the base system prompt.
**Memory format requirements:**
- Use clean Markdown with short sections.
- Record only durable preferences and working rules.
- Do NOT invent personal details or preferred names.
- Do NOT force use of a nickname or personalized greeting.
</memory_onboarding>
<memory_guidelines>

View File

@@ -15,9 +15,12 @@ Core Capabilities:
<communication>
{verbose_spec}
- Tone: friendly, concise. Like a knowledgeable friend, not a corporate bot.
- Use emojis sparingly (1-3 per response): greetings, completions, errors.
- Tone: professional, concise, restrained.
- Be direct. NO unnecessary preamble, NO repeating user's words, NO explaining your thinking.
- Prioritize task progress over conversation. Answer only what is necessary to move the task forward.
- Do NOT flatter the user, praise the question, or use overly eager/service-oriented phrases.
- Do NOT use emojis, exclamation marks, cute language, or excessive apology.
- Prefer short declarative sentences. Default to one or two short paragraphs; use lists only when they improve scannability.
- Use Markdown for structured data. Use `inline code` for media titles/paths.
- Include key details (year, rating, resolution) but do NOT over-explain.
- Do not stop for approval on read-only operations. Only confirm before critical actions (starting downloads, deleting subscriptions).
@@ -34,6 +37,7 @@ Core Capabilities:
- NO filler phrases like "Let me help you", "Here are the results", "I found..." — skip all unnecessary preamble.
- NO repeating what user said.
- NO narrating your internal reasoning.
- NO praise, emotional cushioning, or unnecessary politeness padding.
- After task completion: one line summary only.
- When error occurs: brief acknowledgment + suggestion, then move on.
</response_format>

View File

@@ -81,9 +81,7 @@ class MoviePilotTool(BaseTool, metaclass=ABCMeta):
if messages:
merged_message = "\n\n".join(messages)
await self.send_tool_message(merged_message)
else:
# Non-VERBOSE: reset the buffer and update from scratch to keep message-edit capability
self._stream_handler.reset()
# Non-VERBOSE: do not reset the stream; keep the already-emitted model thinking text
else:
# Streaming not enabled; do not send any tool message content
pass

View File

@@ -5,7 +5,7 @@ from typing import List, Optional, Type
from pydantic import BaseModel, Field, model_validator
from app.agent.tools.base import MoviePilotTool, ToolChain
from app.agent.interaction import (
from app.chain.interaction import (
AgentInteractionOption,
agent_interaction_manager,
)
@@ -106,7 +106,7 @@ class AskUserChoiceTool(MoviePilotTool):
):
return f"当前渠道 {channel.value} 不支持按钮选择"
max_per_row = ChannelCapabilityManager.get_max_buttons_per_row(channel)
max_per_row = 1
max_rows = ChannelCapabilityManager.get_max_button_rows(channel)
max_text_length = ChannelCapabilityManager.get_max_button_text_length(channel)
max_options = max_per_row * max_rows

View File

@@ -23,7 +23,12 @@ class UpdateCustomIdentifiersInput(BaseModel):
description=(
"The complete list of custom identifier rules to save. "
"This REPLACES the entire existing list. "
"Always query existing identifiers first, merge new rules, then pass the full list."
"Always query existing identifiers first, merge new rules, then pass the full list. "
"These rules are global and affect future recognition for all torrents/files. "
"When adding a rule for a user-provided sample, prefer narrow regex patterns that include "
"sample-specific anchors such as the title alias, year, season/episode marker, group tag, "
"resolution, or other distinctive fragments. Avoid overly broad patterns like bare generic "
"tags, pure episode numbers, or common release words unless the user explicitly wants a global rule."
),
)
@@ -35,6 +40,10 @@ class UpdateCustomIdentifiersTool(MoviePilotTool):
"This tool REPLACES all existing identifier rules with the provided list. "
"IMPORTANT: Always use 'query_custom_identifiers' first to get existing rules, "
"then merge new rules into the list before calling this tool to avoid accidentally deleting existing rules. "
"IMPORTANT: New identifier rules are global. When the rule is created from a specific torrent/file name, "
"make the regex as narrow as possible and include distinctive elements from that sample so unrelated titles "
"are not affected. Prefer contextual replacements with capture groups/backreferences over bare block words "
"when a generic word like REPACK, WEB-DL, 1080p, 字幕, or a simple episode marker would otherwise match too broadly. "
"Supported rule formats (spaces around operators are required): "
"1) Block word: just the word/regex to remove; "
"2) Replacement: '被替换词 => 替换词'; "

View File

@@ -2,7 +2,7 @@ from fastapi import APIRouter
from app.api.endpoints import login, user, webhook, message, site, subscribe, \
media, douban, search, plugin, tmdb, history, system, download, dashboard, \
transfer, mediaserver, bangumi, storage, discover, recommend, workflow, torrent, mcp, mfa
transfer, mediaserver, bangumi, storage, discover, recommend, workflow, torrent, mcp, mfa, openai, anthropic
api_router = APIRouter()
api_router.include_router(login.router, prefix="/login", tags=["login"])
@@ -30,3 +30,5 @@ api_router.include_router(recommend.router, prefix="/recommend", tags=["recommen
api_router.include_router(workflow.router, prefix="/workflow", tags=["workflow"])
api_router.include_router(torrent.router, prefix="/torrent", tags=["torrent"])
api_router.include_router(mcp.router, prefix="/mcp", tags=["mcp"])
api_router.include_router(openai.router, prefix="/openai/v1", tags=["openai"])
api_router.include_router(anthropic.router, prefix="/anthropic/v1", tags=["anthropic"])

View File

@@ -0,0 +1,158 @@
import asyncio
import json
import time
import uuid
from typing import AsyncIterator, List, Optional
from fastapi import APIRouter, Header, Security
from fastapi.responses import JSONResponse, StreamingResponse
from app import schemas
from app.api.endpoints.openai import (
MODEL_ID,
_CollectingMoviePilotAgent,
_error_response as _openai_error_response,
)
from app.api.openai_utils import build_anthropic_messages, build_prompt, build_session_id
from app.core.config import settings
from app.core.security import anthropic_api_key_header
from app.schemas.types import MessageChannel
router = APIRouter()
SESSION_PREFIX = "anthropic:"
def _anthropic_error_response(
message: str,
status_code: int,
error_type: str = "invalid_request_error",
) -> JSONResponse:
return JSONResponse(
status_code=status_code,
content=schemas.AnthropicErrorResponse(
error=schemas.AnthropicErrorDetail(type=error_type, message=message)
).model_dump(),
)
def _check_auth(api_key: Optional[str]) -> Optional[JSONResponse]:
if not api_key or api_key != settings.API_TOKEN:
return _anthropic_error_response(
"invalid x-api-key",
401,
error_type="authentication_error",
)
return None
async def _stream_anthropic_response(
agent: _CollectingMoviePilotAgent,
prompt: str,
images: List[str],
) -> AsyncIterator[str]:
event_queue: asyncio.Queue = asyncio.Queue()
if hasattr(agent.stream_handler, "bind_queue"):
agent.stream_handler.bind_queue(event_queue)
message_id = f"msg_{uuid.uuid4().hex}"
async def _run_agent():
try:
await agent.process(prompt, images=images, files=None)
except Exception as exc:
await event_queue.put({"error": str(exc)})
finally:
await event_queue.put(None)
task = asyncio.create_task(_run_agent())
try:
yield f"event: message_start\ndata: {json.dumps({'type': 'message_start', 'message': {'id': message_id, 'type': 'message', 'role': 'assistant', 'content': [], 'model': MODEL_ID, 'stop_reason': None, 'stop_sequence': None, 'usage': {'input_tokens': 0, 'output_tokens': 0}}}, ensure_ascii=False)}\n\n"
yield f"event: content_block_start\ndata: {json.dumps({'type': 'content_block_start', 'index': 0, 'content_block': {'type': 'text', 'text': ''}}, ensure_ascii=False)}\n\n"
while True:
item = await event_queue.get()
if item is None:
break
if isinstance(item, dict) and item.get("error"):
raise RuntimeError(str(item["error"]))
text = str(item or "")
if not text:
continue
yield f"event: content_block_delta\ndata: {json.dumps({'type': 'content_block_delta', 'index': 0, 'delta': {'type': 'text_delta', 'text': text}}, ensure_ascii=False)}\n\n"
yield f"event: content_block_stop\ndata: {json.dumps({'type': 'content_block_stop', 'index': 0}, ensure_ascii=False)}\n\n"
yield f"event: message_delta\ndata: {json.dumps({'type': 'message_delta', 'delta': {'stop_reason': 'end_turn', 'stop_sequence': None}, 'usage': {'output_tokens': 0}}, ensure_ascii=False)}\n\n"
yield f"event: message_stop\ndata: {json.dumps({'type': 'message_stop'}, ensure_ascii=False)}\n\n"
finally:
if not task.done():
task.cancel()
try:
await task
except asyncio.CancelledError:
pass
@router.post("/messages", summary="Anthropic compatible messages", response_model=schemas.AnthropicMessagesResponse)
async def messages(
payload: schemas.AnthropicMessagesRequest,
x_api_key: Optional[str] = Security(anthropic_api_key_header),
anthropic_version: Optional[str] = Header(default=None, alias="anthropic-version"),
):
auth_error = _check_auth(x_api_key)
if auth_error:
return auth_error
if not settings.AI_AGENT_ENABLE:
return _anthropic_error_response(
"MoviePilot AI agent is disabled.",
503,
error_type="api_error",
)
normalized_messages = build_anthropic_messages(payload.system, payload.messages)
try:
prompt, images = build_prompt(normalized_messages, use_server_session=False)
except ValueError as exc:
return _anthropic_error_response(str(exc), 400)
session_seed = anthropic_version or "anthropic"
session_id = build_session_id(f"{session_seed}:{uuid.uuid4().hex}", SESSION_PREFIX)
agent = _CollectingMoviePilotAgent(
session_id=session_id,
user_id=session_id,
channel=MessageChannel.Web.value,
source="anthropic",
username="anthropic-client",
stream_mode=payload.stream,
)
if payload.stream:
return StreamingResponse(
_stream_anthropic_response(agent=agent, prompt=prompt, images=images),
media_type="text/event-stream",
headers={
"Cache-Control": "no-cache",
"Connection": "keep-alive",
"X-Accel-Buffering": "no",
},
)
try:
result = await agent.process(prompt, images=images, files=None)
except Exception as exc:
return _anthropic_error_response(str(exc), 500, error_type="api_error")
content = "\n\n".join(
message.strip()
for message in agent.collected_messages
if message and message.strip()
).strip()
if not content and result:
content = str(result).strip()
if not content:
content = "未获得有效回复。"
return schemas.AnthropicMessagesResponse(
id=f"msg_{uuid.uuid4().hex}",
content=[schemas.AnthropicTextBlock(text=content)],
model=MODEL_ID,
)
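A hedged sketch of calling the new endpoint once it is mounted at `/api/v1/anthropic/v1` (per the router registration above). The host, port, and API prefix are assumptions; the key is the MoviePilot `API_TOKEN`.

```python
import requests

resp = requests.post(
    "http://localhost:3001/api/v1/anthropic/v1/messages",  # host/port assumed
    headers={
        "x-api-key": "<your MoviePilot API_TOKEN>",
        "anthropic-version": "2023-06-01",
    },
    json={
        "model": "moviepilot-agent",
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": "List my active subscriptions"}],
    },
    timeout=120,
)
resp.raise_for_status()
# Non-streaming responses carry a single text content block.
print(resp.json()["content"][0]["text"])
```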

app/api/endpoints/openai.py (new file, 426 lines)

@@ -0,0 +1,426 @@
import asyncio
import json
import time
import uuid
from typing import AsyncIterator, List, Optional, Tuple
from fastapi import APIRouter, Request, Security
from fastapi.responses import JSONResponse, StreamingResponse
from fastapi.security import HTTPAuthorizationCredentials
from app import schemas
from app.api.openai_utils import (
build_completion_payload,
build_prompt,
build_responses_input,
build_session_id,
)
from app.agent import MoviePilotAgent, StreamingHandler
from app.core.config import settings
from app.core.security import openai_bearer_scheme
from app.schemas.types import MessageChannel
router = APIRouter()
MODEL_ID = "moviepilot-agent"
SESSION_PREFIX = "openai:"
class _CollectingMoviePilotAgent(MoviePilotAgent):
"""
Capture the Agent's final output so it is not sent a second time through the message channel.
"""
def __init__(self, *args, stream_mode: bool = False, **kwargs):
super().__init__(*args, **kwargs)
self.collected_messages: List[str] = []
self.stream_mode = stream_mode
if stream_mode:
self.stream_handler = _OpenAIStreamingHandler()
def _should_stream(self) -> bool:
return self.stream_mode
async def send_agent_message(self, message: str, title: str = ""):
text = (message or "").strip()
if title and text:
text = f"{title}\n{text}"
elif title:
text = title.strip()
if text:
self.collected_messages.append(text)
if self.stream_mode:
self.stream_handler.emit(text)
async def _save_agent_message_to_db(self, message: str, title: str = ""):
return None
class _OpenAIStreamingHandler(StreamingHandler):
"""
Forward the Agent's streaming output to the OpenAI SSE queue without persisting messages to the in-app message system.
"""
def __init__(self):
super().__init__()
self._event_queue: Optional[asyncio.Queue] = None
def bind_queue(self, queue: asyncio.Queue):
self._event_queue = queue
def emit(self, token: str):
super().emit(token)
if token and self._event_queue is not None:
self._event_queue.put_nowait(token)
async def start_streaming(
self,
channel: Optional[str] = None,
source: Optional[str] = None,
user_id: Optional[str] = None,
username: Optional[str] = None,
title: str = "",
):
self._channel = channel
self._source = source
self._user_id = user_id
self._username = username
self._title = title
self._streaming_enabled = True
self._sent_text = ""
self._message_response = None
self._msg_start_offset = 0
self._max_message_length = 0
async def stop_streaming(self) -> Tuple[bool, str]:
if not self._streaming_enabled:
return False, ""
self._streaming_enabled = False
with self._lock:
final_text = self._buffer
self._buffer = ""
self._sent_text = ""
self._message_response = None
self._msg_start_offset = 0
return True, final_text
def _sse_payload(data: dict) -> str:
return f"data: {json.dumps(data, ensure_ascii=False)}\n\n"
async def _stream_response(
agent: _CollectingMoviePilotAgent,
prompt: str,
images: List[str],
) -> AsyncIterator[str]:
event_queue: asyncio.Queue = asyncio.Queue()
if isinstance(agent.stream_handler, _OpenAIStreamingHandler):
agent.stream_handler.bind_queue(event_queue)
created = int(time.time())
completion_id = f"chatcmpl-{uuid.uuid4().hex}"
finished = False
async def _run_agent():
try:
await agent.process(prompt, images=images, files=None)
except Exception as exc:
await event_queue.put({"error": str(exc)})
finally:
await event_queue.put(None)
task = asyncio.create_task(_run_agent())
try:
yield _sse_payload(
{
"id": completion_id,
"object": "chat.completion.chunk",
"created": created,
"model": MODEL_ID,
"choices": [
{
"index": 0,
"delta": {"role": "assistant"},
"finish_reason": None,
}
],
}
)
while True:
item = await event_queue.get()
if item is None:
break
if isinstance(item, dict) and item.get("error"):
raise RuntimeError(str(item["error"]))
text = str(item or "")
if not text:
continue
yield _sse_payload(
{
"id": completion_id,
"object": "chat.completion.chunk",
"created": created,
"model": MODEL_ID,
"choices": [
{
"index": 0,
"delta": {"content": text},
"finish_reason": None,
}
],
}
)
finished = True
yield _sse_payload(
{
"id": completion_id,
"object": "chat.completion.chunk",
"created": created,
"model": MODEL_ID,
"choices": [
{
"index": 0,
"delta": {},
"finish_reason": "stop",
}
],
}
)
yield "data: [DONE]\n\n"
finally:
if not task.done():
task.cancel()
try:
await task
except asyncio.CancelledError:
pass
elif finished:
await task
def _error_response(
message: str,
status_code: int,
error_type: str = "invalid_request_error",
code: Optional[str] = None,
) -> JSONResponse:
return JSONResponse(
status_code=status_code,
content=schemas.OpenAIErrorResponse(
error=schemas.OpenAIErrorDetail(
message=message,
type=error_type,
code=code,
)
).model_dump(),
headers={"WWW-Authenticate": "Bearer"},
)
def _check_auth(
credentials: Optional[HTTPAuthorizationCredentials],
) -> Optional[JSONResponse]:
if not credentials or credentials.scheme.lower() != "bearer":
return _error_response(
"Invalid bearer token.",
401,
error_type="authentication_error",
code="invalid_api_key",
)
if credentials.credentials != settings.API_TOKEN:
return _error_response(
"Invalid bearer token.",
401,
error_type="authentication_error",
code="invalid_api_key",
)
return None
@router.get("/models", summary="OpenAI compatible models", response_model=schemas.OpenAIModelListResponse)
async def list_models(
credentials: Optional[HTTPAuthorizationCredentials] = Security(openai_bearer_scheme),
):
auth_error = _check_auth(credentials)
if auth_error:
return auth_error
now = int(time.time())
return schemas.OpenAIModelListResponse(
data=[schemas.OpenAIModelInfo(id=MODEL_ID, created=now)]
)
@router.post(
"/chat/completions",
summary="OpenAI compatible chat completions",
response_model=schemas.OpenAIChatCompletionResponse,
)
async def chat_completions(
payload: schemas.OpenAIChatCompletionsRequest,
request: Request,
credentials: Optional[HTTPAuthorizationCredentials] = Security(openai_bearer_scheme),
):
auth_error = _check_auth(credentials)
if auth_error:
return auth_error
if not settings.AI_AGENT_ENABLE:
return _error_response(
"MoviePilot AI agent is disabled.",
503,
error_type="server_error",
code="ai_agent_disabled",
)
if not payload.messages:
return _error_response(
"`messages` must be a non-empty array.",
400,
code="invalid_messages",
)
session_key = (
str(payload.user or "").strip()
or str(request.headers.get("x-session-id") or "").strip()
or str(uuid.uuid4())
)
use_server_session = bool(
str(payload.user or "").strip()
or str(request.headers.get("x-session-id") or "").strip()
)
try:
prompt, images = build_prompt(payload.messages, use_server_session=use_server_session)
except ValueError as exc:
return _error_response(str(exc), 400, code="invalid_messages")
session_id = build_session_id(session_key, SESSION_PREFIX)
username = str(payload.user or "openai-client")
agent = _CollectingMoviePilotAgent(
session_id=session_id,
user_id=session_key,
channel=MessageChannel.Web.value,
source="openai",
username=username,
stream_mode=payload.stream,
)
if payload.stream:
return StreamingResponse(
_stream_response(agent=agent, prompt=prompt, images=images),
media_type="text/event-stream",
headers={
"Cache-Control": "no-cache",
"Connection": "keep-alive",
"X-Accel-Buffering": "no",
},
)
try:
result = await agent.process(prompt, images=images, files=None)
except Exception as exc:
return _error_response(
str(exc),
500,
error_type="server_error",
code="agent_execution_failed",
)
content = "\n\n".join(
message.strip()
for message in agent.collected_messages
if message and message.strip()
).strip()
if not content and result:
content = str(result).strip()
if not content:
content = "未获得有效回复。"
return JSONResponse(content=build_completion_payload(content, MODEL_ID))
@router.post("/responses", summary="OpenAI compatible responses", response_model=schemas.OpenAIResponsesResponse)
async def responses(
payload: schemas.OpenAIResponsesRequest,
credentials: Optional[HTTPAuthorizationCredentials] = Security(openai_bearer_scheme),
):
auth_error = _check_auth(credentials)
if auth_error:
return auth_error
if not settings.AI_AGENT_ENABLE:
return _error_response(
"MoviePilot AI agent is disabled.",
503,
error_type="server_error",
code="ai_agent_disabled",
)
if payload.stream:
return _error_response(
"Streaming is not supported for /responses yet.",
400,
code="unsupported_stream",
)
normalized_messages = build_responses_input(payload.input, instructions=payload.instructions)
if not normalized_messages:
return _error_response(
"`input` must include at least one usable message.",
400,
code="invalid_input",
)
try:
prompt, images = build_prompt(normalized_messages, use_server_session=bool(payload.user))
except ValueError as exc:
return _error_response(str(exc), 400, code="invalid_input")
session_key = str(payload.user or uuid.uuid4())
session_id = build_session_id(session_key, SESSION_PREFIX)
agent = _CollectingMoviePilotAgent(
session_id=session_id,
user_id=session_key,
channel=MessageChannel.Web.value,
source="openai.responses",
username=str(payload.user or "openai-client"),
stream_mode=False,
)
try:
result = await agent.process(prompt, images=images, files=None)
except Exception as exc:
return _error_response(
str(exc),
500,
error_type="server_error",
code="agent_execution_failed",
)
content = "\n\n".join(
message.strip()
for message in agent.collected_messages
if message and message.strip()
).strip()
if not content and result:
content = str(result).strip()
if not content:
content = "未获得有效回复。"
created_at = int(time.time())
response_id = f"resp_{uuid.uuid4().hex}"
output_message = schemas.OpenAIResponsesOutputMessage(
id=f"msg_{uuid.uuid4().hex}",
content=[schemas.OpenAIResponsesOutputText(text=content)],
)
return schemas.OpenAIResponsesResponse(
id=response_id,
created_at=created_at,
model=MODEL_ID,
output=[output_message],
usage=schemas.OpenAIUsage(),
)
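A hedged sketch of a non-streaming call against the new chat completions route; the bearer token equals the MoviePilot `API_TOKEN`, while the base URL and port are assumptions.

```python
import requests

resp = requests.post(
    "http://localhost:3001/api/v1/openai/v1/chat/completions",  # host/port assumed
    headers={"Authorization": "Bearer <your MoviePilot API_TOKEN>"},
    json={
        "model": "moviepilot-agent",
        "stream": False,
        # Optional: a stable `user` (or `x-session-id` header) reuses a
        # server-side session instead of replaying the transcript each call.
        "user": "alice",
        "messages": [{"role": "user", "content": "What did you download today?"}],
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```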

View File

@@ -12,6 +12,7 @@ from anyio import Path as AsyncPath
from app.helper.sites import SitesHelper # noqa # noqa
from fastapi import APIRouter, Body, Depends, HTTPException, Header, Request, Response
from fastapi.responses import StreamingResponse
from pydantic import BaseModel
from app import schemas
from app.chain.mediaserver import MediaServerChain
@@ -29,14 +30,14 @@ from app.db.user_oper import (
get_current_active_superuser_async,
get_current_active_user_async,
)
from app.helper.llm import LLMHelper
from app.helper.image import ImageHelper
from app.helper.llm import LLMHelper, LLMTestTimeout
from app.helper.mediaserver import MediaServerHelper
from app.helper.message import MessageHelper
from app.helper.progress import ProgressHelper
from app.helper.rule import RuleHelper
from app.helper.subscribe import SubscribeHelper
from app.helper.system import SystemHelper
from app.helper.image import ImageHelper
from app.log import logger
from app.scheduler import Scheduler
from app.schemas import ConfigChangeEventData
@@ -52,6 +53,15 @@ router = APIRouter()
_NETTEST_REDIRECT_STATUS_CODES = {301, 302, 303, 307, 308}
class LlmTestRequest(BaseModel):
enabled: Optional[bool] = None
provider: Optional[str] = None
model: Optional[str] = None
thinking_level: Optional[str] = None
api_key: Optional[str] = None
base_url: Optional[str] = None
def _match_nettest_prefix(url: str, prefix: str) -> bool:
"""
Determine whether the target URL still falls within the allowed scheme, host, port, and path prefix.
@@ -259,6 +269,29 @@ def _build_nettest_rules() -> list[dict[str, Any]]:
return rules
def _sanitize_llm_test_error(message: str, api_key: Optional[str] = None) -> str:
"""
Scrub sensitive fields from error messages to avoid echoing API keys.
"""
if not message:
return "LLM 调用失败"
sanitized = message
if api_key:
sanitized = sanitized.replace(api_key, "***")
sanitized = re.sub(
r"(?i)(api[_-]?key\s*[:=]\s*)([^\s,;]+)",
r"\1***",
sanitized,
)
sanitized = re.sub(
r"(?i)authorization\s*:\s*bearer\s+[^\s,;]+",
"Authorization: ***",
sanitized,
)
return sanitized
def _validate_nettest_url(url: str) -> Optional[str]:
"""
Perform basic safety validation on the actual request URL.
@@ -327,12 +360,12 @@ async def _close_nettest_response(response: Any) -> None:
async def fetch_image(
url: str,
proxy: Optional[bool] = None,
use_cache: bool = False,
if_none_match: Optional[str] = None,
cookies: Optional[str | dict] = None,
allowed_domains: Optional[set[str]] = None,
url: str,
proxy: Optional[bool] = None,
use_cache: bool = False,
if_none_match: Optional[str] = None,
cookies: Optional[str | dict] = None,
allowed_domains: Optional[set[str]] = None,
) -> Optional[Response]:
"""
Handle image caching logic, supporting HTTP caching and disk caching
@@ -354,6 +387,7 @@ async def fetch_image(
use_cache=use_cache,
cookies=cookies,
)
if content:
# Check If-None-Match
etag = HashUtils.md5(content)
@@ -366,16 +400,17 @@ async def fetch_image(
media_type=UrlUtils.get_mime_type(url, "image/jpeg"),
headers=headers,
)
return None
@router.get("/img/{proxy}", summary="图片代理")
async def proxy_img(
imgurl: str,
proxy: bool = False,
cache: bool = False,
use_cookies: bool = False,
if_none_match: Annotated[str | None, Header()] = None,
_: schemas.TokenPayload = Depends(verify_resource_token),
imgurl: str,
proxy: bool = False,
cache: bool = False,
use_cookies: bool = False,
if_none_match: Annotated[str | None, Header()] = None,
_: schemas.TokenPayload = Depends(verify_resource_token),
) -> Response:
"""
Image proxy, optionally via a proxy server, with HTTP caching support
@@ -404,9 +439,9 @@ async def proxy_img(
@router.get("/cache/image", summary="图片缓存")
async def cache_img(
url: str,
if_none_match: Annotated[str | None, Header()] = None,
_: schemas.TokenPayload = Depends(verify_resource_token),
url: str,
if_none_match: Annotated[str | None, Header()] = None,
_: schemas.TokenPayload = Depends(verify_resource_token),
) -> Response:
"""
Cache image files locally with HTTP caching support; if the global image cache is enabled, use the disk cache
@@ -500,7 +535,7 @@ async def get_env_setting(_: User = Depends(get_current_active_user_async)):
@router.post("/env", summary="更新系统配置", response_model=schemas.Response)
async def set_env_setting(
env: dict, _: User = Depends(get_current_active_superuser_async)
env: dict, _: User = Depends(get_current_active_superuser_async)
):
"""
Update system environment variables (admin only)
@@ -535,9 +570,9 @@ async def set_env_setting(
@router.get("/progress/{process_type}", summary="实时进度")
async def get_progress(
request: Request,
process_type: str,
_: schemas.TokenPayload = Depends(verify_resource_token),
request: Request,
process_type: str,
_: schemas.TokenPayload = Depends(verify_resource_token),
):
"""
Get processing progress in real time; the response format is SSE
@@ -572,9 +607,9 @@ async def get_setting(key: str, _: User = Depends(get_current_active_user_async)
@router.post("/setting/{key}", summary="更新系统设置", response_model=schemas.Response)
async def set_setting(
key: str,
value: Annotated[Union[list, dict, bool, int, str] | None, Body()] = None,
_: User = Depends(get_current_active_superuser_async),
key: str,
value: Annotated[Union[list, dict, bool, int, str] | None, Body()] = None,
_: User = Depends(get_current_active_superuser_async),
):
"""
Update a system setting (admin only)
@@ -608,10 +643,10 @@ async def set_setting(
@router.get("/llm-models", summary="获取LLM模型列表", response_model=schemas.Response)
async def get_llm_models(
provider: str,
api_key: str,
base_url: Optional[str] = None,
_: User = Depends(get_current_active_user_async),
provider: str,
api_key: str,
base_url: Optional[str] = None,
_: User = Depends(get_current_active_user_async),
):
"""
Get the list of LLM models
@@ -625,11 +660,73 @@ async def get_llm_models(
return schemas.Response(success=False, message=str(e))
@router.post("/llm-test", summary="测试LLM调用", response_model=schemas.Response)
async def llm_test(
payload: Annotated[Optional[LlmTestRequest], Body()] = None,
_: User = Depends(get_current_active_superuser_async),
):
"""
Run a minimal LLM call using the supplied configuration or the currently saved configuration.
"""
if not payload:
return schemas.Response(success=False, message="请配置智能助手LLM相关参数后再进行测试")
if not payload.provider or not payload.model:
return schemas.Response(success=False, message="请配置LLM提供商和模型")
data = {
"provider": payload.provider,
"model": payload.model,
}
if not payload.enabled:
return schemas.Response(success=False, message="请先启用智能助手", data=data)
if not payload.api_key or not payload.api_key.strip():
return schemas.Response(
success=False,
message="请先配置 LLM API Key",
data=data,
)
if not payload.model or not payload.model.strip():
return schemas.Response(
success=False,
message="请先配置 LLM 模型",
data=data,
)
try:
result = await LLMHelper.test_current_settings(
provider=payload.provider,
model=payload.model,
thinking_level=payload.thinking_level,
api_key=payload.api_key,
base_url=payload.base_url,
)
if not result.get("reply_preview"):
return schemas.Response(
success=False,
message="模型响应为空"
)
return schemas.Response(success=True, data=result)
except (LLMTestTimeout, TimeoutError) as err:
logger.warning(err)
return schemas.Response(
success=False,
message="LLM 调用超时"
)
except Exception as err:
return schemas.Response(
success=False,
message=_sanitize_llm_test_error(str(err), payload.api_key)
)
@router.get("/message", summary="实时消息")
async def get_message(
request: Request,
role: Optional[str] = "system",
_: schemas.TokenPayload = Depends(verify_resource_token),
request: Request,
role: Optional[str] = "system",
_: schemas.TokenPayload = Depends(verify_resource_token),
):
"""
Get system messages in real time; the response format is SSE
@@ -652,10 +749,10 @@ async def get_message(
@router.get("/logging", summary="实时日志")
async def get_logging(
request: Request,
length: Optional[int] = 50,
logfile: Optional[str] = "moviepilot.log",
_: schemas.TokenPayload = Depends(verify_resource_token),
request: Request,
length: Optional[int] = 50,
logfile: Optional[str] = "moviepilot.log",
_: schemas.TokenPayload = Depends(verify_resource_token),
):
"""
Get system logs in real time
@@ -666,7 +763,7 @@ async def get_logging(
log_path = base_path / logfile
if not await SecurityUtils.async_is_safe_path(
base_path=base_path, user_path=log_path, allowed_suffixes={".log"}
base_path=base_path, user_path=log_path, allowed_suffixes={".log"}
):
raise HTTPException(status_code=404, detail="Not Found")
@@ -683,7 +780,7 @@ async def get_logging(
# Read historical logs
async with aiofiles.open(
log_path, mode="r", encoding="utf-8", errors="ignore"
log_path, mode="r", encoding="utf-8", errors="ignore"
) as f:
# Optimized reading strategy for large files
if file_size > 100 * 1024:
@@ -695,7 +792,7 @@ async def get_logging(
# Find the first complete line
first_newline = content.find("\n")
if first_newline != -1:
content = content[first_newline + 1 :]
content = content[first_newline + 1:]
else:
# Small file: read the whole content directly
content = await f.read()
@@ -703,7 +800,7 @@ async def get_logging(
# Split by line and append to the queue, keeping only non-empty lines
lines = [line.strip() for line in content.splitlines() if line.strip()]
# Keep only the last N lines
for line in lines[-max(length, 50) :]:
for line in lines[-max(length, 50):]:
lines_queue.append(line)
# Output the historical logs
@@ -712,7 +809,7 @@ async def get_logging(
# Watch new log lines in real time
async with aiofiles.open(
log_path, mode="r", encoding="utf-8", errors="ignore"
log_path, mode="r", encoding="utf-8", errors="ignore"
) as f:
# Move the file pointer to the end and keep watching for appended content
await f.seek(0, 2)
@@ -751,7 +848,7 @@ async def get_logging(
try:
# Read the file asynchronously with aiofiles
async with aiofiles.open(
log_path, mode="r", encoding="utf-8", errors="ignore"
log_path, mode="r", encoding="utf-8", errors="ignore"
) as file:
text = await file.read()
# Output in reverse order
@@ -783,10 +880,10 @@ async def latest_version(_: schemas.TokenPayload = Depends(verify_token)):
@router.get("/ruletest", summary="过滤规则测试", response_model=schemas.Response)
def ruletest(
title: str,
rulegroup_name: str,
subtitle: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token),
title: str,
rulegroup_name: str,
subtitle: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token),
):
"""
Filter rule test; rule types: 1-subscribe, 2-best-version, 3-search
@@ -841,11 +938,10 @@ async def nettest_targets(_: schemas.TokenPayload = Depends(verify_token)):
@router.get("/nettest", summary="测试网络连通性")
async def nettest(
target_id: Optional[str] = None,
url: Optional[str] = None,
proxy: Optional[bool] = None,
include: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token),
target_id: Optional[str] = None,
url: Optional[str] = None,
include: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token),
):
"""
Test network connectivity of the built-in targets.
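A hedged sketch of exercising the new `/llm-test` route with an explicit payload; the URL prefix and the superuser bearer auth are assumptions based on the dependencies shown above.

```python
import requests

payload = {
    "enabled": True,
    "provider": "openai",       # illustrative provider/model values
    "model": "gpt-4o-mini",
    "thinking_level": None,
    "api_key": "sk-...",
    "base_url": None,
}
resp = requests.post(
    "http://localhost:3001/api/v1/system/llm-test",  # host/port assumed
    headers={"Authorization": "Bearer <superuser access token>"},
    json=payload,
    timeout=60,
)
# On success: {"success": true, "data": {..., "reply_preview": "..."}};
# timeouts and provider errors come back as success=false with a sanitized message.
print(resp.json())
```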

app/api/openai_utils.py (new file, 177 lines)

@@ -0,0 +1,177 @@
import hashlib
import time
import uuid
from typing import Any, Dict, List, Tuple
def _get_message_field(message: Any, field: str, default: Any = None) -> Any:
if isinstance(message, dict):
return message.get(field, default)
return getattr(message, field, default)
def extract_text_and_images(content: Any) -> Tuple[str, List[str]]:
if content is None:
return "", []
if isinstance(content, str):
return content.strip(), []
text_parts: List[str] = []
image_urls: List[str] = []
if isinstance(content, list):
for item in content:
if isinstance(item, str):
normalized = item.strip()
if normalized:
text_parts.append(normalized)
continue
if not isinstance(item, dict):
continue
item_type = (item.get("type") or "").lower()
if item_type == "text":
text = item.get("text")
if text and str(text).strip():
text_parts.append(str(text).strip())
elif item_type == "input_text":
text = item.get("text")
if text and str(text).strip():
text_parts.append(str(text).strip())
elif item_type == "image_url":
image_url = item.get("image_url")
url = image_url.get("url") if isinstance(image_url, dict) else image_url
if url and str(url).strip():
image_urls.append(str(url).strip())
elif item_type == "input_image":
url = item.get("image_url")
if url and str(url).strip():
image_urls.append(str(url).strip())
elif item_type == "image":
source = item.get("source") or {}
if isinstance(source, dict) and source.get("type") == "base64":
data = source.get("data")
media_type = source.get("media_type") or "image/png"
if data and str(data).strip():
image_urls.append(f"data:{media_type};base64,{str(data).strip()}")
return "\n".join(text_parts).strip(), image_urls
def build_prompt(messages: List[Any], use_server_session: bool) -> Tuple[str, List[str]]:
system_texts: List[str] = []
transcript: List[str] = []
latest_user_text = ""
latest_user_images: List[str] = []
for message in messages:
role = str(_get_message_field(message, "role", "user") or "user").lower()
if role == "developer":
role = "system"
text, images = extract_text_and_images(_get_message_field(message, "content"))
if role == "system":
if text:
system_texts.append(text)
continue
if role == "user":
if text or images:
latest_user_text = text
latest_user_images = images
if text:
transcript.append(f"user: {text}")
continue
if text:
transcript.append(f"{role}: {text}")
if not latest_user_text and not latest_user_images:
raise ValueError("No usable user message found in messages.")
prompt_parts: List[str] = []
if system_texts:
prompt_parts.append("系统要求:\n" + "\n\n".join(system_texts))
if not use_server_session and transcript:
history = transcript[:-1] if transcript[-1].startswith("user: ") else transcript
if history:
prompt_parts.append("对话上下文:\n" + "\n".join(history[-10:]))
if latest_user_text:
prompt_parts.append("当前用户消息:\n" + latest_user_text)
else:
prompt_parts.append("当前用户消息:\n请结合图片内容回复。")
return "\n\n".join(part for part in prompt_parts if part).strip(), latest_user_images
def build_session_id(session_key: str, prefix: str) -> str:
digest = hashlib.sha256(session_key.encode("utf-8")).hexdigest()
return f"{prefix}{digest[:32]}"
def build_completion_payload(content: str, model_id: str) -> Dict[str, Any]:
created = int(time.time())
return {
"id": f"chatcmpl-{uuid.uuid4().hex}",
"object": "chat.completion",
"created": created,
"model": model_id,
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": content,
},
"finish_reason": "stop",
}
],
"usage": {
"prompt_tokens": 0,
"completion_tokens": 0,
"total_tokens": 0,
},
}
def build_responses_input(
input_data: Any, instructions: str | None = None
) -> List[Dict[str, Any]]:
messages: List[Dict[str, Any]] = []
if instructions and str(instructions).strip():
messages.append({"role": "system", "content": str(instructions).strip()})
if isinstance(input_data, str):
normalized = input_data.strip()
if normalized:
messages.append({"role": "user", "content": normalized})
return messages
if isinstance(input_data, list):
for item in input_data:
if not isinstance(item, dict):
continue
item_type = (item.get("type") or "").lower()
if item_type == "message":
role = item.get("role") or "user"
content = item.get("content")
messages.append({"role": role, "content": content})
elif item.get("role") and "content" in item:
messages.append({"role": item.get("role"), "content": item.get("content")})
return messages
if isinstance(input_data, dict) and input_data.get("role") and "content" in input_data:
messages.append({"role": input_data.get("role"), "content": input_data.get("content")})
return messages
def build_anthropic_messages(
system: Any, messages: List[Any]
) -> List[Dict[str, Any]]:
normalized: List[Dict[str, Any]] = []
system_text, _ = extract_text_and_images(system)
if system_text:
normalized.append({"role": "system", "content": system_text})
for message in messages:
role = _get_message_field(message, "role", "user")
content = _get_message_field(message, "content")
normalized.append({"role": role, "content": content})
return normalized
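A worked example of the helpers above, with behavior inferred from the code as shown:

```python
from app.api.openai_utils import build_prompt, build_session_id

messages = [
    {"role": "system", "content": "Answer in one sentence."},
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi."},
    {"role": "user", "content": [{"type": "text", "text": "Any new movies?"}]},
]

# Without a server-side session, the last 10 transcript lines (minus the final
# user turn) are inlined into the prompt; images is [] for this input.
prompt, images = build_prompt(messages, use_server_session=False)

# Session IDs are the prefix plus the first 32 hex chars of sha256(key).
session_id = build_session_id("alice", "openai:")
assert session_id.startswith("openai:") and len(session_id) == len("openai:") + 32
```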

View File

@@ -38,6 +38,7 @@ from app.schemas import (
TransferDirectoryConf,
MessageResponse,
)
from app.utils.identity import normalize_internal_user_id
from app.schemas.category import CategoryConfig
from app.schemas.types import (
TorrentStatus,
@@ -119,6 +120,21 @@ class ChainBase(metaclass=ABCMeta):
"""
self.filecache.delete(filename)
@staticmethod
def _normalize_notification_for_dispatch(
message: Notification
) -> Notification:
"""
Normalize a notification message before dispatch.
Background tasks reuse the internal placeholder user ID as the session identity; clear it here before actually sending,
so the message falls back to default notification routing or targets-based resolution.
"""
dispatch_message = copy.deepcopy(message)
dispatch_message.userid = normalize_internal_user_id(
dispatch_message.userid
)
return dispatch_message
async def async_remove_cache(self, filename: str) -> None:
"""
Asynchronously delete the cache, removing both the Redis and local caches
@@ -1119,10 +1135,13 @@ class ChainBase(metaclass=ABCMeta):
# Save the message
self.messagehelper.put(message, role="user", title=message.title)
self.messageoper.add(**message.model_dump())
dispatch_message = self._normalize_notification_for_dispatch(message)
# Send the message, isolated per settings
if not message.userid and message.mtype:
if not dispatch_message.userid and dispatch_message.mtype:
# Message isolation settings
notify_action = ServiceConfigHelper.get_notification_switch(message.mtype)
notify_action = ServiceConfigHelper.get_notification_switch(
dispatch_message.mtype
)
if notify_action:
# 'admin' 'user,admin' 'user' 'all'
actions = notify_action.split(",")
@@ -1131,7 +1150,7 @@ class ChainBase(metaclass=ABCMeta):
send_orignal = False
useroper = UserOper()
for action in actions:
send_message = copy.deepcopy(message)
send_message = copy.deepcopy(dispatch_message)
if action == "admin" and not admin_sended:
# Send to admin only
logger.info(f"{send_message.mtype} 的消息已设置发送给管理员")
@@ -1186,13 +1205,13 @@ class ChainBase(metaclass=ABCMeta):
# Send the message event
self.eventmanager.send_event(
etype=EventType.NoticeMessage,
data={**message.model_dump(), "type": message.mtype},
data={**dispatch_message.model_dump(), "type": dispatch_message.mtype},
)
# Send per the original message
self.messagequeue.send_message(
"post_message",
message=message,
immediately=True if message.userid else False,
message=dispatch_message,
immediately=True if dispatch_message.userid else False,
**kwargs,
)
@@ -1233,10 +1252,13 @@ class ChainBase(metaclass=ABCMeta):
# Save the message
self.messagehelper.put(message, role="user", title=message.title)
await self.messageoper.async_add(**message.model_dump())
dispatch_message = self._normalize_notification_for_dispatch(message)
# Send the message, isolated per settings
if not message.userid and message.mtype:
if not dispatch_message.userid and dispatch_message.mtype:
# Message isolation settings
notify_action = ServiceConfigHelper.get_notification_switch(message.mtype)
notify_action = ServiceConfigHelper.get_notification_switch(
dispatch_message.mtype
)
if notify_action:
# 'admin' 'user,admin' 'user' 'all'
actions = notify_action.split(",")
@@ -1245,7 +1267,7 @@ class ChainBase(metaclass=ABCMeta):
send_orignal = False
useroper = UserOper()
for action in actions:
send_message = copy.deepcopy(message)
send_message = copy.deepcopy(dispatch_message)
if action == "admin" and not admin_sended:
# Send to admin only
logger.info(f"{send_message.mtype} 的消息已设置发送给管理员")
@@ -1300,13 +1322,13 @@ class ChainBase(metaclass=ABCMeta):
# Send the message event
await self.eventmanager.async_send_event(
etype=EventType.NoticeMessage,
data={**message.model_dump(), "type": message.mtype},
data={**dispatch_message.model_dump(), "type": dispatch_message.mtype},
)
# Send per the original message
await self.messagequeue.async_send_message(
"post_message",
message=message,
immediately=True if message.userid else False,
message=dispatch_message,
immediately=True if dispatch_message.userid else False,
**kwargs,
)
@@ -1324,11 +1346,12 @@ class ChainBase(metaclass=ABCMeta):
message, role="user", note=note_list, title=message.title
)
self.messageoper.add(**message.model_dump(), note=note_list)
dispatch_message = self._normalize_notification_for_dispatch(message)
return self.messagequeue.send_message(
"post_medias_message",
message=message,
message=dispatch_message,
medias=medias,
immediately=True if message.userid else False,
immediately=True if dispatch_message.userid else False,
)
def post_torrents_message(
@@ -1345,11 +1368,12 @@ class ChainBase(metaclass=ABCMeta):
message, role="user", note=note_list, title=message.title
)
self.messageoper.add(**message.model_dump(), note=note_list)
dispatch_message = self._normalize_notification_for_dispatch(message)
return self.messagequeue.send_message(
"post_torrents_message",
message=message,
message=dispatch_message,
torrents=torrents,
immediately=True if message.userid else False,
immediately=True if dispatch_message.userid else False,
)
def delete_message(
@@ -1383,6 +1407,7 @@ class ChainBase(metaclass=ABCMeta):
chat_id: Union[str, int],
text: str,
title: Optional[str] = None,
buttons: Optional[List[List[dict]]] = None,
) -> bool:
"""
编辑已发送的消息
@@ -1392,6 +1417,7 @@ class ChainBase(metaclass=ABCMeta):
:param chat_id: 聊天ID
:param text: 新的消息内容
:param title: 消息标题
:param buttons: 更新后的按钮列表
:return: 编辑是否成功
"""
return self.run_module(
@@ -1402,6 +1428,7 @@ class ChainBase(metaclass=ABCMeta):
chat_id=chat_id,
text=text,
title=title,
buttons=buttons,
)
def send_direct_message(self, message: Notification) -> Optional[MessageResponse]:
@@ -1411,7 +1438,10 @@ class ChainBase(metaclass=ABCMeta):
:param message: 消息体
:return: 消息响应包含message_id, chat_id等
"""
return self.run_module("send_direct_message", message=message)
return self.run_module(
"send_direct_message",
message=self._normalize_notification_for_dispatch(message),
)
def metadata_img(
self,

app/chain/interaction.py — new file, 1363 lines (diff suppressed because it is too large)

@@ -1320,7 +1320,7 @@ class MediaChain(ChainBase, ConfigReloadMixin, metaclass=Singleton):
mediainfo = await native_fn()
else:
# 原生优先
logger.info(f"插件优先模式未开启。尝试原生识别标题:{log_name} ...")
logger.info(f"识别标题:{log_name} ...")
mediainfo = await native_fn()
if not mediainfo and plugin_available:
logger.info(

File diff suppressed because it is too large.

app/chain/skills.py — new file, 1241 lines (diff suppressed because it is too large)

@@ -1,5 +1,6 @@
import json
import os
import re
import shutil
import subprocess
import sys
@@ -9,7 +10,7 @@ from pathlib import Path
from typing import Any, Dict, Iterable, Optional, get_args, get_origin
from urllib.error import HTTPError, URLError
from urllib.parse import urlencode
from urllib.request import Request, urlopen
from urllib.request import ProxyHandler, Request, build_opener, urlopen
import click
import psutil
@@ -28,7 +29,11 @@ FRONTEND_VERSION_FILE = FRONTEND_DIR / "version.txt"
HEALTH_PATH = "/api/v1/system/global"
HEALTH_TOKEN = "moviepilot"
FRONTEND_HEALTH_PATH = "/version.txt"
BACKEND_RELEASES_API = "https://api.github.com/repos/jxxghp/MoviePilot/releases"
FRONTEND_RELEASES_API = "https://api.github.com/repos/jxxghp/MoviePilot-Frontend/releases"
LOCAL_HOSTS = {"0.0.0.0", "::", "::1", "", "localhost"}
MANAGED_ACTIVE_STATES = {"running", "starting"}
AUTO_UPDATE_ENABLED_VALUES = {"true", "release", "dev"}
MASKED_FIELDS = {
"API_TOKEN",
"DB_POSTGRESQL_PASSWORD",
@@ -199,6 +204,173 @@ def _frontend_health(runtime: Optional[Dict[str, Any]] = None, timeout: float =
return False, None
def _warn(message: str) -> None:
click.secho(message, fg="yellow")
def _release_prefix(version: Optional[str]) -> str:
"""
从版本号中提取主版本前缀,用于把本地自动更新限制在当前主版本线上。
"""
matched = re.match(r"^(v\d+)", str(version or "").strip())
return matched.group(1) if matched else "v2"
def _release_sort_key(tag: str) -> tuple[int, ...]:
return tuple(int(part) for part in re.findall(r"\d+", tag))
def _github_api_json(url: str, *, repo: str) -> Any:
headers = {
"Accept": "application/vnd.github+json",
"User-Agent": "MoviePilot-CLI",
}
headers.update(settings.REPO_GITHUB_HEADERS(repo))
opener = build_opener(ProxyHandler(settings.PROXY or {}))
request = Request(url=url, headers=headers, method="GET")
try:
with opener.open(request, timeout=10.0) as response:
return json.loads(response.read().decode("utf-8"))
except HTTPError as exc:
detail = exc.read().decode("utf-8", errors="ignore")
raise RuntimeError(f"访问 GitHub API 失败HTTP {exc.code}: {detail or url}") from exc
except URLError as exc:
raise RuntimeError(f"访问 GitHub API 失败:{exc.reason}") from exc
except json.JSONDecodeError as exc:
raise RuntimeError(f"GitHub API 返回了无法解析的响应:{url}") from exc
def _latest_release_tag(url: str, *, repo: str, prefix: str) -> Optional[str]:
payload = _github_api_json(url, repo=repo)
if not isinstance(payload, list):
raise RuntimeError(f"GitHub API 返回格式异常:{url}")
matched_tags = []
for item in payload:
if not isinstance(item, dict):
continue
tag_name = str(item.get("tag_name") or "").strip()
if tag_name.startswith(f"{prefix}."):
matched_tags.append(tag_name)
if not matched_tags:
return None
return sorted(matched_tags, key=_release_sort_key)[-1]
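As a quick sanity check, here is how these helpers resolve the newest release on the current major line, assuming the definitions above are importable (tag values are hypothetical):

```python
# Hypothetical tags for illustration only.
_release_prefix("v2.10.5")                                # -> "v2"
_release_prefix(None)                                     # -> "v2" (safe fallback)
_release_sort_key("v2.10.5")                              # -> (2, 10, 5)
sorted(["v2.9.9", "v2.10.5"], key=_release_sort_key)[-1]  # -> "v2.10.5"
```

The numeric tuple key avoids the lexicographic trap where `"v2.9.9"` would otherwise sort above `"v2.10.5"`.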
def _git_current_branch() -> Optional[str]:
try:
branch = subprocess.check_output(
["git", "rev-parse", "--abbrev-ref", "HEAD"],
cwd=str(_repo_root()),
text=True,
).strip()
except (OSError, subprocess.CalledProcessError):
return None
return branch or None
def _auto_update_mode() -> str:
return str(getattr(settings, "MOVIEPILOT_AUTO_UPDATE", "") or "").strip().lower()
def _resolve_auto_update_targets(mode: str) -> tuple[Optional[str], Optional[str]]:
backend_prefix = _release_prefix(APP_VERSION)
frontend_prefix = _release_prefix(_installed_frontend_version() or APP_VERSION)
if mode == "dev":
current_branch = _git_current_branch()
backend_ref = "latest"
if not current_branch or current_branch == "HEAD":
# 从 release 模式切回 dev 时detached HEAD 需要一个明确分支。
backend_ref = backend_prefix
else:
backend_ref = _latest_release_tag(
BACKEND_RELEASES_API,
repo="jxxghp/MoviePilot",
prefix=backend_prefix,
)
frontend_version = _latest_release_tag(
FRONTEND_RELEASES_API,
repo="jxxghp/MoviePilot-Frontend",
prefix=frontend_prefix,
)
return backend_ref, frontend_version
def _best_effort_auto_update() -> None:
mode = _auto_update_mode()
if mode not in AUTO_UPDATE_ENABLED_VALUES:
return
try:
backend_ref, frontend_version = _resolve_auto_update_targets(mode)
except RuntimeError as exc:
_warn(f"自动更新准备失败,继续使用当前版本启动:{exc}")
return
if not backend_ref or not frontend_version:
_warn("自动更新准备失败,未能解析当前主版本对应的远端版本,继续使用当前版本启动")
return
update_command = [
sys.executable,
str(_repo_root() / "scripts" / "local_setup.py"),
"update",
"all",
"--ref",
backend_ref,
"--frontend-version",
frontend_version,
"--venv",
str(_repo_root() / "venv"),
"--config-dir",
str(settings.CONFIG_PATH),
]
update_env = os.environ.copy()
if settings.PROXY_HOST:
update_env.setdefault("http_proxy", settings.PROXY_HOST)
update_env.setdefault("https_proxy", settings.PROXY_HOST)
update_env.setdefault("HTTP_PROXY", settings.PROXY_HOST)
update_env.setdefault("HTTPS_PROXY", settings.PROXY_HOST)
if settings.GITHUB_TOKEN:
update_env.setdefault("GITHUB_TOKEN", settings.GITHUB_TOKEN)
click.echo(f"检测到 MOVIEPILOT_AUTO_UPDATE={mode},启动前执行本地自动更新")
result = subprocess.run(
update_command,
cwd=str(_repo_root()),
env=update_env,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
text=True,
encoding="utf-8",
errors="ignore",
check=False,
)
if result.returncode == 0:
click.echo("本地自动更新完成")
return
output_lines = [line for line in (result.stdout or "").splitlines() if line.strip()]
tail = output_lines[-1] if output_lines else "未知错误"
_warn(f"本地自动更新失败,继续使用当前版本启动:{tail}")
def _ensure_frontend_not_running_alone(timeout: int) -> None:
"""
如果只检测到 CLI 管理的前端仍在运行,则先停掉它,再按统一顺序重启前后端。
"""
backend_state, _, _, _ = _managed_backend_status()
frontend_state, _, _, _ = _managed_frontend_status()
if backend_state == "stopped" and frontend_state in MANAGED_ACTIVE_STATES:
click.echo("检测到仅前端仍在运行,先停止前端后再整体启动")
_stop_frontend_service(timeout=timeout, force=True)
def _managed_backend_status() -> tuple[str, Optional[Dict[str, Any]], Optional[psutil.Process], Optional[Dict[str, Any]]]:
runtime = _backend_runtime()
process = _get_process(runtime)
@@ -431,18 +603,27 @@ def _ensure_local_api_token() -> bool:
return result is True
def _spawn_process(command: list[str], *, cwd: Path, log_file: Path, env: Optional[Dict[str, str]] = None) -> subprocess.Popen:
log_file.parent.mkdir(parents=True, exist_ok=True)
log_handle = log_file.open("a", encoding="utf-8")
def _spawn_process(
command: list[str],
*,
cwd: Path,
log_file: Optional[Path],
env: Optional[Dict[str, str]] = None,
) -> subprocess.Popen:
kwargs: Dict[str, Any] = {
"cwd": str(cwd),
"stdout": log_handle,
"stderr": subprocess.STDOUT,
"stdin": subprocess.DEVNULL,
"close_fds": True,
"env": env or os.environ.copy(),
}
if log_file:
log_file.parent.mkdir(parents=True, exist_ok=True)
log_handle = log_file.open("a", encoding="utf-8")
kwargs["stdout"] = log_handle
kwargs["stderr"] = subprocess.STDOUT
else:
kwargs["stdout"] = subprocess.DEVNULL
kwargs["stderr"] = subprocess.DEVNULL
if os.name == "nt":
kwargs["creationflags"] = subprocess.CREATE_NEW_PROCESS_GROUP | subprocess.DETACHED_PROCESS
else:
@@ -454,8 +635,19 @@ def _spawn_backend_process() -> subprocess.Popen:
return _spawn_process(
[sys.executable, "-m", "app.main"],
cwd=_repo_root(),
log_file=BACKEND_STDIO_LOG_FILE,
env={**os.environ, "PYTHONUNBUFFERED": "1"},
log_file=None,
env={
**os.environ,
"PYTHONUNBUFFERED": "1",
"MOVIEPILOT_DISABLE_CONSOLE_LOG": "1",
"MOVIEPILOT_STDIO_LOG_FILE": str(BACKEND_STDIO_LOG_FILE),
"MOVIEPILOT_STDIO_LOG_MAX_BYTES": str(
max(int(settings.LOG_MAX_FILE_SIZE or 0), 1) * 1024 * 1024
),
"MOVIEPILOT_STDIO_LOG_BACKUP_COUNT": str(
max(int(settings.LOG_BACKUP_COUNT or 0), 0)
),
},
)
@@ -649,6 +841,12 @@ def cli() -> None:
@click.option("--timeout", default=60, show_default=True, help="等待后端与前端就绪的秒数")
def start(timeout: int) -> None:
"""后台启动本地 MoviePilot 前后端服务"""
_ensure_frontend_not_running_alone(timeout=min(timeout, 15))
backend_state, _, _, _ = _managed_backend_status()
frontend_state, _, _, _ = _managed_frontend_status()
if backend_state == "stopped" and frontend_state == "stopped":
_best_effort_auto_update()
backend_result = _start_backend_service(timeout=timeout)
backend_runtime = backend_result["runtime"]
try:
@@ -699,6 +897,7 @@ def restart(start_timeout: int, stop_timeout: int, force: bool) -> None:
"""重启本地 MoviePilot 前后端服务"""
_stop_frontend_service(timeout=stop_timeout, force=force)
_stop_backend_service(timeout=stop_timeout, force=force)
_best_effort_auto_update()
backend_result = _start_backend_service(timeout=start_timeout)
frontend_result = _start_frontend_service(timeout=start_timeout, backend_port=int(backend_result["runtime"]["port"]))
click.echo("MoviePilot 已重启")


@@ -7,6 +7,7 @@ from app.chain import ChainBase
from app.chain.download import DownloadChain
from app.chain.message import MessageChain
from app.chain.site import SiteChain
from app.chain.skills import SkillsChain
from app.chain.subscribe import SubscribeChain
from app.chain.system import SystemChain
from app.chain.transfer import TransferChain
@@ -154,6 +155,12 @@ class Command(metaclass=Singleton):
"category": "管理",
"data": {},
},
"/skills": {
"func": SkillsChain().remote_manage,
"description": "管理技能",
"category": "智能体",
"data": {},
},
}
# 插件命令集合
self._plugin_commands = {}


@@ -420,6 +420,15 @@ class ConfigModel(BaseModel):
# 本地插件仓库目录,多个地址使用,分隔
PLUGIN_LOCAL_REPO_PATHS: Optional[str] = None
# ==================== 技能配置 ====================
# 技能市场仓库地址,多个地址使用,分隔
SKILL_MARKET: str = (
"https://clawhub.ai,"
"https://github.com/openai/skills,"
"https://github.com/anthropics/skills,"
"https://github.com/vercel-labs/agent-skills"
)
# ==================== Github & PIP ====================
# Github token,提高请求api限流阈值 ghp_****
GITHUB_TOKEN: Optional[str] = None
@@ -496,6 +505,8 @@ class ConfigModel(BaseModel):
LLM_PROVIDER: str = "deepseek"
# LLM模型名称
LLM_MODEL: str = "deepseek-chat"
# 思考模式/深度配置:off/auto/minimal/low/medium/high/max/xhigh
LLM_THINKING_LEVEL: Optional[str] = 'off'
# LLM是否支持图片输入,开启后消息图片会按多模态输入发送给模型
LLM_SUPPORT_IMAGE_INPUT: bool = True
# LLM API密钥


@@ -13,7 +13,7 @@ from Crypto.Cipher import AES
from Crypto.Util.Padding import pad
from cryptography.fernet import Fernet
from fastapi import HTTPException, status, Security, Request, Response
from fastapi.security import OAuth2PasswordBearer, APIKeyHeader, APIKeyQuery, APIKeyCookie
from fastapi.security import OAuth2PasswordBearer, APIKeyHeader, APIKeyQuery, APIKeyCookie, HTTPBearer
from passlib.context import CryptContext
from app import schemas
@@ -42,6 +42,12 @@ api_key_header = APIKeyHeader(name="X-API-KEY", auto_error=False, scheme_name="a
# API KEY 通过 QUERY 认证
api_key_query = APIKeyQuery(name="apikey", auto_error=False, scheme_name="api_key_query")
# OpenAI compatible Bearer Token 认证
openai_bearer_scheme = HTTPBearer(auto_error=False)
# Anthropic compatible API Key 认证
anthropic_api_key_header = APIKeyHeader(name="x-api-key", auto_error=False, scheme_name="anthropic_api_key_header")
def __get_api_token(
token_query: Annotated[str | None, Security(api_token_query)] = None


@@ -1,12 +1,34 @@
"""LLM模型相关辅助功能"""
import asyncio
import inspect
from typing import List
import json
import time
from functools import wraps
from typing import Any, List
from langchain_core.messages import AIMessage
from app.core.config import settings
from app.log import logger
class LLMTestError(RuntimeError):
"""LLM 测试调用异常,附带请求耗时。"""
def __init__(self, message: str, duration_ms: int | None = None):
super().__init__(message)
self.duration_ms = duration_ms
class LLMTestTimeout(TimeoutError):
"""LLM 测试调用超时,附带请求耗时。"""
def __init__(self, message: str, duration_ms: int | None = None):
super().__init__(message)
self.duration_ms = duration_ms
def _patch_gemini_thought_signature():
"""
修复 langchain-google-genai 中 Gemini 2.5 思考模型的 thought_signature 兼容问题。
@@ -52,13 +74,290 @@ def _get_httpx_proxy_key() -> str:
if "proxy" in params:
return "proxy"
return "proxies"
except Exception:
except Exception as e:
logger.warning(f"检测 httpx 代理参数失败,默认使用 'proxies'{e}")
return "proxies"
def _deepseek_thinking_toggle(extra_body: Any) -> bool | None:
"""
解析 DeepSeek extra_body 中显式传入的 thinking 开关。
"""
if not isinstance(extra_body, dict):
return None
thinking = extra_body.get("thinking")
if not isinstance(thinking, dict):
return None
thinking_type = str(thinking.get("type") or "").strip().lower()
if thinking_type == "enabled":
return True
if thinking_type == "disabled":
return False
return None
def _is_deepseek_thinking_enabled(model_name: str | None, extra_body: Any) -> bool:
"""
判断本次 DeepSeek 调用是否处于 thinking mode。
"""
explicit_toggle = _deepseek_thinking_toggle(extra_body)
if explicit_toggle is not None:
return explicit_toggle
normalized_model_name = str(model_name or "").strip().lower()
if normalized_model_name == "deepseek-reasoner":
return True
if normalized_model_name.startswith("deepseek-v4-"):
# DeepSeek V4 默认启用 thinking mode,除非显式关闭。
return True
return False
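Illustrative resolution order, assuming the helpers above (model names are examples; an explicit `extra_body` toggle always wins over the model-name heuristic):

```python
_is_deepseek_thinking_enabled("deepseek-reasoner", None)  # True: reasoner model
_is_deepseek_thinking_enabled("deepseek-chat", None)      # False: default off
_is_deepseek_thinking_enabled(
    "deepseek-v4-base", {"thinking": {"type": "disabled"}}
)                                                         # False: explicit toggle wins
```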
def _patch_deepseek_reasoning_content_support():
"""
修补 langchain-deepseek 在 tool-call 场景下遗漏 reasoning_content 回传的问题。
DeepSeek thinking mode 要求:若 assistant 历史消息包含 tool_calls
后续请求中必须带回该条消息的顶层 reasoning_content。
某些 langchain-deepseek 版本虽然能从响应中拿到 reasoning_content
但不会在重放消息历史时写回请求载荷,导致 400。
"""
try:
from langchain_deepseek import ChatDeepSeek
except Exception as err:
logger.debug(f"跳过 langchain-deepseek reasoning_content 修补:{err}")
return
if getattr(ChatDeepSeek, "_moviepilot_reasoning_content_patched", False):
return
original_get_request_payload = getattr(ChatDeepSeek, "_get_request_payload", None)
if not callable(original_get_request_payload):
logger.warning("langchain-deepseek 缺少 _get_request_payload无法修补 reasoning_content")
return
@wraps(original_get_request_payload)
def _patched_get_request_payload(self, input_, *, stop=None, **kwargs):
payload = original_get_request_payload(self, input_, stop=stop, **kwargs)
# Resolve original messages so we can extract reasoning_content from
# additional_kwargs. The parent's payload builder does not propagate
# this DeepSeek-specific field.
messages = self._convert_input(input_).to_messages()
for i, message in enumerate(payload["messages"]):
if message["role"] == "tool" and isinstance(message["content"], list):
message["content"] = json.dumps(message["content"])
elif message["role"] == "assistant":
if isinstance(message["content"], list):
# DeepSeek API expects assistant content to be a string,
# not a list. Extract text blocks and join them, or use
# empty string if none exist.
text_parts = [
block.get("text", "")
for block in message["content"]
if isinstance(block, dict) and block.get("type") == "text"
]
message["content"] = "".join(text_parts) if text_parts else ""
# DeepSeek reasoning models require every assistant message to
# carry a reasoning_content field (even when empty). The value
# is stored in AIMessage.additional_kwargs by
# _create_chat_result(); re-inject it into the API payload.
if (
"reasoning_content" not in message
and i < len(messages)
and isinstance(messages[i], AIMessage)
):
message["reasoning_content"] = messages[i].additional_kwargs.get(
"reasoning_content", ""
)
return payload
ChatDeepSeek._get_request_payload = _patched_get_request_payload
ChatDeepSeek._moviepilot_reasoning_content_patched = True
logger.debug("已修补 langchain-deepseek thinking tool-call 的 reasoning_content 回传兼容性")
class LLMHelper:
"""LLM模型相关辅助功能"""
_SUPPORTED_THINKING_LEVELS = frozenset(
{"off", "auto", "minimal", "low", "medium", "high", "max", "xhigh"}
)
@staticmethod
def _normalize_model_name(model_name: str | None) -> str:
"""
统一清理模型名称,便于按模型族做能力映射。
"""
return (model_name or "").strip().lower()
@classmethod
def _normalize_deepseek_reasoning_effort(
cls, thinking_level: str | None = None
) -> str | None:
"""
DeepSeek 文档当前建议使用 high/max兼容常见 effort 别名。
"""
if not thinking_level or thinking_level in {"off", "auto"}:
return None
if thinking_level in {"minimal", "low", "medium", "high"}:
return "high"
if thinking_level in {"max", "xhigh"}:
return "max"
logger.warning(f"忽略不支持的 DeepSeek reasoning_effort 配置: {thinking_level}")
return None
@classmethod
def _normalize_openai_reasoning_effort(
cls, thinking_level: str | None = None
) -> str | None:
"""
OpenAI reasoning_effort 支持更细粒度的 effort统一做最近似映射。
"""
if not thinking_level or thinking_level == "auto":
return None
if thinking_level == "off":
return "none"
if thinking_level == "max":
return "xhigh"
return thinking_level
@classmethod
def _build_google_thinking_kwargs(
cls, model_name: str, thinking_level: str
) -> dict[str, Any]:
"""
Gemini 3 使用 thinking_levelGemini 2.5 使用 thinking_budget。
"""
if not model_name or thinking_level == "auto":
return {}
if "gemini-2.5" in model_name:
if thinking_level == "off":
if "pro" in model_name:
# Gemini 2.5 Pro 官方不支持完全关闭思考,回退到最小预算。
return {
"thinking_budget": 128,
"include_thoughts": False,
}
return {
"thinking_budget": 0,
"include_thoughts": False,
}
budget_map = {
"minimal": 512,
"low": 1024,
"medium": 4096,
"high": 8192,
"max": 24576,
"xhigh": 24576,
}
budget = budget_map.get(thinking_level)
return (
{
"thinking_budget": budget,
"include_thoughts": False,
}
if budget is not None
else {}
)
if "gemini-3" in model_name:
level_map = {
"off": "minimal",
"minimal": "minimal",
"low": "low",
"medium": "medium",
"high": "high",
"max": "high",
"xhigh": "high",
}
google_level = level_map.get(thinking_level)
return (
{
"thinking_level": google_level,
"include_thoughts": False,
}
if google_level
else {}
)
return {}
@classmethod
def _build_kimi_thinking_kwargs(
cls, model_name: str, thinking_level: str
) -> dict[str, Any]:
"""
Kimi 当前公开文档仅支持思考开关,不支持显式深度调节。
"""
if model_name.startswith("kimi-k2-thinking"):
return {}
if thinking_level == "off":
return {"extra_body": {"thinking": {"type": "disabled"}}}
return {}
@classmethod
def _build_thinking_kwargs(
cls,
provider: str,
model: str | None,
thinking_level: str | None = None
) -> dict[str, Any]:
"""
按 provider/model 生成思考模式相关参数。
优先使用 LangChain/OpenAI SDK 已支持的原生字段;仅在 provider
明确要求自定义请求体时,才回退到 extra_body。
"""
provider_name = (provider or "").strip().lower()
model_name = cls._normalize_model_name(model)
if provider_name == "deepseek":
if thinking_level == "off":
return {"extra_body": {"thinking": {"type": "disabled"}}}
if thinking_level == "auto":
return {}
kwargs: dict[str, Any] = {"extra_body": {"thinking": {"type": "enabled"}}}
deepseek_effort = cls._normalize_deepseek_reasoning_effort(
thinking_level
)
if deepseek_effort:
kwargs["reasoning_effort"] = deepseek_effort
return kwargs
if model_name.startswith(("kimi-k2.5", "kimi-k2.6", "kimi-k2-thinking")):
return cls._build_kimi_thinking_kwargs(model_name, thinking_level)
if not model_name:
return {}
# OpenAI 原生推理模型优先走 LangChain 内置 reasoning_effort。
if provider_name == "openai" and model_name.startswith(
("gpt-5", "o1", "o3", "o4")
):
openai_effort = cls._normalize_openai_reasoning_effort(
thinking_level
)
return {"reasoning_effort": openai_effort} if openai_effort else {}
# Gemini 使用 google-genai / langchain-google-genai 内置思考控制参数。
if provider_name == "google":
return cls._build_google_thinking_kwargs(
model_name, thinking_level
)
return {}
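Worked examples of the resulting kwargs, assuming the class above (provider/model strings are illustrative):

```python
LLMHelper._build_thinking_kwargs("deepseek", "deepseek-chat", "high")
# -> {"extra_body": {"thinking": {"type": "enabled"}}, "reasoning_effort": "high"}
LLMHelper._build_thinking_kwargs("openai", "gpt-5-mini", "off")
# -> {"reasoning_effort": "none"}
LLMHelper._build_thinking_kwargs("google", "gemini-2.5-flash", "low")
# -> {"thinking_budget": 1024, "include_thoughts": False}
```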
@staticmethod
def supports_image_input() -> bool:
"""
@@ -67,19 +366,45 @@ class LLMHelper:
return bool(settings.LLM_SUPPORT_IMAGE_INPUT)
@staticmethod
def get_llm(streaming: bool = False):
def get_llm(
streaming: bool = False,
provider: str | None = None,
model: str | None = None,
thinking_level: str | None = None,
api_key: str | None = None,
base_url: str | None = None,
):
"""
获取LLM实例
:param streaming: 是否启用流式输出
:param provider: LLM提供商,默认为配置项LLM_PROVIDER
:param model: 模型名称,默认为配置项LLM_MODEL
:param thinking_level: 思考模式级别,默认为 None(即自动判断
是否启用思考模式)。支持的级别包括 "off"(关闭)、"auto"(自动)、"minimal"、"low"、"medium"、"high"、"max"/"xhigh"(最大)。
不同模型对思考模式的支持和表现不同,具体映射关系请
参考代码实现。对于不支持思考模式的模型,该参数将被忽略。
:param api_key: API Key,默认为
配置项LLM_API_KEY。对于某些提供商
(如 DeepSeek),可能需要同时提供 base_url。
:param base_url: API Base URL,默认为配置项LLM_BASE_URL。
:return: LLM实例
"""
provider = settings.LLM_PROVIDER.lower()
api_key = settings.LLM_API_KEY
provider_name = str(
provider if provider is not None else settings.LLM_PROVIDER
).lower()
model_name = model if model is not None else settings.LLM_MODEL
api_key_value = api_key if api_key is not None else settings.LLM_API_KEY
base_url_value = base_url if base_url is not None else settings.LLM_BASE_URL
thinking_kwargs = LLMHelper._build_thinking_kwargs(
provider=provider_name,
model=model_name,
thinking_level=thinking_level
)
if not api_key:
if not api_key_value:
raise ValueError("未配置LLM API Key")
if provider == "google":
if provider_name == "google":
# 修补 Gemini 2.5 思考模型的 thought_signature 兼容性
_patch_gemini_thought_signature()
@@ -94,36 +419,41 @@ class LLMHelper:
client_args = {proxy_key: settings.PROXY_HOST}
model = ChatGoogleGenerativeAI(
model=settings.LLM_MODEL,
api_key=api_key,
model=model_name,
api_key=api_key_value,
retries=3,
temperature=settings.LLM_TEMPERATURE,
streaming=streaming,
client_args=client_args,
**thinking_kwargs,
)
elif provider == "deepseek":
elif provider_name == "deepseek":
from langchain_deepseek import ChatDeepSeek
_patch_deepseek_reasoning_content_support()
model = ChatDeepSeek(
model=settings.LLM_MODEL,
api_key=api_key,
model=model_name,
api_key=api_key_value,
api_base=base_url_value,
max_retries=3,
temperature=settings.LLM_TEMPERATURE,
streaming=streaming,
stream_usage=True,
**thinking_kwargs,
)
else:
from langchain_openai import ChatOpenAI
model = ChatOpenAI(
model=settings.LLM_MODEL,
api_key=api_key,
model=model_name,
api_key=api_key_value,
max_retries=3,
base_url=settings.LLM_BASE_URL,
base_url=base_url_value,
temperature=settings.LLM_TEMPERATURE,
streaming=streaming,
stream_usage=True,
openai_proxy=settings.PROXY_HOST,
**thinking_kwargs,
)
# 检查是否有profile
@@ -132,13 +462,102 @@ class LLMHelper:
else:
model.profile = {
"max_input_tokens": settings.LLM_MAX_CONTEXT_TOKENS
* 1000, # 转换为token单位
* 1000, # 转换为token单位
}
return model
@staticmethod
def _extract_text_content(content) -> str:
"""
从响应内容中提取纯文本,仅保留真实文本块。
"""
if content is None:
return ""
if isinstance(content, str):
return content
if isinstance(content, list):
text_parts = []
for block in content:
if isinstance(block, str):
text_parts.append(block)
continue
if isinstance(block, dict) or hasattr(block, "get"):
block_type = block.get("type")
if block.get("thought") or block_type in (
"thinking",
"reasoning_content",
"reasoning",
"thought",
):
continue
if block_type == "text":
text_parts.append(block.get("text", ""))
continue
if not block_type and isinstance(block.get("text"), str):
text_parts.append(block.get("text", ""))
return "".join(text_parts)
if isinstance(content, dict) or hasattr(content, "get"):
if content.get("thought"):
return ""
if content.get("type") == "text":
return content.get("text", "")
if not content.get("type") and isinstance(content.get("text"), str):
return content.get("text", "")
return ""
@staticmethod
async def test_current_settings(
prompt: str = "请只回复 OK",
timeout: int = 20,
provider: str | None = None,
model: str | None = None,
thinking_level: str | None = None,
api_key: str | None = None,
base_url: str | None = None,
) -> dict:
"""
使用当前已保存配置执行一次最小 LLM 调用。
"""
provider_name = provider if provider is not None else settings.LLM_PROVIDER
model_name = model if model is not None else settings.LLM_MODEL
api_key_value = api_key if api_key is not None else settings.LLM_API_KEY
base_url_value = base_url if base_url is not None else settings.LLM_BASE_URL
start = time.perf_counter()
llm = LLMHelper.get_llm(
streaming=False,
provider=provider_name,
model=model_name,
thinking_level=thinking_level,
api_key=api_key_value,
base_url=base_url_value,
)
try:
response = await asyncio.wait_for(llm.ainvoke(prompt), timeout=timeout)
except TimeoutError as err:
duration_ms = round((time.perf_counter() - start) * 1000)
raise LLMTestTimeout("LLM 调用超时", duration_ms=duration_ms) from err
except Exception as err:
duration_ms = round((time.perf_counter() - start) * 1000)
raise LLMTestError(str(err), duration_ms=duration_ms) from err
reply_text = LLMHelper._extract_text_content(
getattr(response, "content", response)
).strip()
duration_ms = round((time.perf_counter() - start) * 1000)
data = {
"provider": provider_name,
"model": model_name,
"duration_ms": duration_ms,
}
if reply_text:
data["reply_preview"] = reply_text[:120]
return data
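A hedged usage sketch (credentials and endpoint are placeholders; a real key is required for the call to succeed):

```python
import asyncio

result = asyncio.run(
    LLMHelper.test_current_settings(
        provider="deepseek",
        model="deepseek-chat",
        api_key="sk-***",
        base_url="https://api.deepseek.com",
    )
)
# e.g. {"provider": "deepseek", "model": "deepseek-chat",
#       "duration_ms": 842, "reply_preview": "OK"}
print(result)
```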
def get_models(
self, provider: str, api_key: str, base_url: str = None
self, provider: str, api_key: str, base_url: str = None
) -> List[str]:
"""获取模型列表"""
logger.info(f"获取 {provider} 模型列表...")
@@ -176,7 +595,7 @@ class LLMHelper:
@staticmethod
def _get_openai_compatible_models(
provider: str, api_key: str, base_url: str = None
provider: str, api_key: str, base_url: str = None
) -> List[str]:
"""获取OpenAI兼容模型列表"""
try:

app/helper/skill.py — new file, 1175 lines (diff suppressed because it is too large)

@@ -1,5 +1,6 @@
import asyncio
import logging
import os
import queue
import sys
import threading
@@ -407,11 +408,12 @@ class LoggerManager:
for handler in _logger.handlers:
_logger.removeHandler(handler)
# 只设置终端日志(文件日志由 NonBlockingFileHandler 处理)
console_handler = logging.StreamHandler()
console_formatter = CustomFormatter(log_settings.LOG_CONSOLE_FORMAT)
console_handler.setFormatter(console_formatter)
_logger.addHandler(console_handler)
# 本地 CLI 已经有独立的 stdio 滚动日志时,不再把业务日志重复打一份到控制台。
if os.getenv("MOVIEPILOT_DISABLE_CONSOLE_LOG") != "1":
console_handler = logging.StreamHandler()
console_formatter = CustomFormatter(log_settings.LOG_CONSOLE_FORMAT)
console_handler.setFormatter(console_formatter)
_logger.addHandler(console_handler)
# 禁止向父级log传递
_logger.propagate = False


@@ -4,19 +4,32 @@ import setproctitle
import signal
import sys
import threading
from pathlib import Path
import uvicorn as uvicorn
from PIL import Image
from uvicorn import Config
from app.factory import app
from app.utils.stdio import configure_rotating_stdio
from app.utils.system import SystemUtils
# 禁用输出
if SystemUtils.is_frozen():
stdio_log_file = os.getenv("MOVIEPILOT_STDIO_LOG_FILE")
if stdio_log_file:
# 本地 CLI 会把 stdout/stderr 切到滚动日志,避免无限追加单独的大文件。
configure_rotating_stdio(
log_file=Path(stdio_log_file),
max_bytes=max(int(os.getenv("MOVIEPILOT_STDIO_LOG_MAX_BYTES", "0") or 0), 1),
backup_count=max(
int(os.getenv("MOVIEPILOT_STDIO_LOG_BACKUP_COUNT", "0") or 0),
0,
),
)
elif SystemUtils.is_frozen():
sys.stdout = open(os.devnull, 'w')
sys.stderr = open(os.devnull, 'w')
from app.factory import app
from app.core.config import settings
from app.db.init import init_db, update_db
@@ -95,4 +108,4 @@ if __name__ == '__main__':
# 更新数据库
update_db()
# 启动API服务
Server.run()
Server.run()


@@ -439,6 +439,7 @@ class DiscordModule(_ModuleBase, _MessageBase[Discord]):
chat_id: Union[str, int],
text: str,
title: Optional[str] = None,
buttons: Optional[List[List[dict]]] = None,
) -> bool:
"""
编辑消息
@@ -448,6 +449,7 @@ class DiscordModule(_ModuleBase, _MessageBase[Discord]):
:param chat_id: 聊天ID
:param text: 新的消息内容
:param title: 消息标题
:param buttons: 新的按钮列表
:return: 编辑是否成功
"""
if channel != self._channel:
@@ -460,6 +462,7 @@ class DiscordModule(_ModuleBase, _MessageBase[Discord]):
result = client.send_msg(
title=title or "",
text=text,
buttons=buttons,
original_message_id=message_id,
original_chat_id=str(chat_id),
)


@@ -1,7 +1,9 @@
import json
import subprocess
import threading
import time
from pathlib import Path
from typing import Optional, List
from typing import Optional, List, Union
from app import schemas
from app.core.config import settings
@@ -11,6 +13,9 @@ from app.schemas.types import StorageSchema
from app.utils.string import StringUtils
from app.utils.system import SystemUtils
_folder_locks: dict[str, threading.Lock] = {}
_folder_locks_guard = threading.Lock()
class Rclone(StorageBase):
"""
@@ -120,6 +125,43 @@ class Rclone(StorageBase):
modify_time=StringUtils.str_to_timestamp(item.get("ModTime"))
)
@staticmethod
def __normalize_remote_path(path: Union[Path, str]) -> str:
"""
规范化远端路径,统一目录锁键值。
"""
path_str = Path(str(path or "/")).as_posix()
if not path_str.startswith("/"):
path_str = f"/{path_str}"
if path_str != "/":
path_str = path_str.rstrip("/")
return path_str or "/"
@staticmethod
def __get_path_lock(path: Union[Path, str]) -> threading.Lock:
"""
获取指定远端路径的模块级锁。
"""
normalized = Rclone.__normalize_remote_path(path)
with _folder_locks_guard:
if normalized not in _folder_locks:
_folder_locks[normalized] = threading.Lock()
return _folder_locks[normalized]
def __wait_for_item(
self, path: Path, retries: int = 3, delay: float = 0.2
) -> Optional[schemas.FileItem]:
"""
等待目录或文件在远端可见,兼容云盘最终一致性延迟。
"""
for attempt in range(retries):
item = self.get_item(path)
if item:
return item
if attempt < retries - 1:
time.sleep(delay)
return None
def check(self) -> bool:
"""
检查存储是否可用
@@ -163,50 +205,53 @@ class Rclone(StorageBase):
:param fileitem: 父目录
:param name: 目录名
"""
path = Path(self.__normalize_remote_path(Path(fileitem.path) / name))
try:
retcode = subprocess.run(
[
'rclone', 'mkdir',
f'MP:{Path(fileitem.path) / name}'
f'MP:{path}'
],
startupinfo=self.__get_hidden_shell()
).returncode
if retcode == 0:
return self.get_item(Path(fileitem.path) / name)
folder = self.__wait_for_item(path)
if folder:
return folder
logger.warn(f"【rclone】目录 {path} 创建成功后暂未可见")
return None
folder = self.__wait_for_item(path, retries=2)
if folder:
logger.info(f"【rclone】目录 {path} 已存在,忽略重复创建")
return folder
except Exception as err:
logger.error(f"【rclone】创建目录失败{err}")
folder = self.__wait_for_item(path, retries=2)
if folder:
logger.info(f"【rclone】目录 {path} 已存在,忽略创建异常")
return folder
return None
def get_folder(self, path: Path) -> Optional[schemas.FileItem]:
"""
根据文件路径获取目录,不存在则创建
"""
def __find_dir(_fileitem: schemas.FileItem, _name: str) -> Optional[schemas.FileItem]:
"""
查找下级目录中匹配名称的目录
"""
for sub_folder in self.list(_fileitem):
if sub_folder.type != "dir":
continue
if sub_folder.name == _name:
return sub_folder
return None
normalized = Path(self.__normalize_remote_path(path))
# 是否已存在
folder = self.get_item(path)
folder = self.get_item(normalized)
if folder:
return folder
# 逐级查找和创建目录
fileitem = schemas.FileItem(storage=self.schema.value, path="/")
for part in path.parts[1:]:
dir_file = __find_dir(fileitem, part)
if dir_file:
fileitem = dir_file
else:
dir_file = self.create_folder(fileitem, part)
fileitem = schemas.FileItem(storage=self.schema.value, type="dir", path="/")
for part in normalized.parts[1:]:
current_path = Path(self.__normalize_remote_path(Path(fileitem.path) / part))
with self.__get_path_lock(current_path):
dir_file = self.get_item(current_path)
if not dir_file:
logger.warn(f"【rclone】创建目录 {fileitem.path}{part} 失败!")
dir_file = self.create_folder(fileitem, part)
if not dir_file:
logger.warn(f"【rclone】创建目录 {current_path} 失败!")
return None
fileitem = dir_file
return fileitem
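The serialization scheme reduces to this standalone pattern (a sketch; names mirror the diff above):

```python
import threading

_folder_locks: dict[str, threading.Lock] = {}
_folder_locks_guard = threading.Lock()

def get_path_lock(path: str) -> threading.Lock:
    # Registry guarded by a module-level lock: exactly one Lock per remote path.
    with _folder_locks_guard:
        if path not in _folder_locks:
            _folder_locks[path] = threading.Lock()
        return _folder_locks[path]

# Two transfer threads racing to create "/Movies/2024" now take turns here:
with get_path_lock("/Movies/2024"):
    pass  # check-then-create, then poll get_item() to absorb eventual consistency
```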


@@ -557,6 +557,7 @@ class SlackModule(_ModuleBase, _MessageBase[Slack]):
chat_id: Union[str, int],
text: str,
title: Optional[str] = None,
buttons: Optional[List[List[dict]]] = None,
) -> bool:
"""
编辑消息
@@ -566,6 +567,7 @@ class SlackModule(_ModuleBase, _MessageBase[Slack]):
:param chat_id: 聊天ID
:param text: 新的消息内容
:param title: 消息标题
:param buttons: 新的按钮列表
:return: 编辑是否成功
"""
if channel != self._channel:
@@ -578,6 +580,7 @@ class SlackModule(_ModuleBase, _MessageBase[Slack]):
result = client.send_msg(
title=title or "",
text=text,
buttons=buttons,
original_message_id=str(message_id),
original_chat_id=str(chat_id),
)


@@ -564,6 +564,7 @@ class TelegramModule(_ModuleBase, _MessageBase[Telegram]):
chat_id: Union[str, int],
text: str,
title: Optional[str] = None,
buttons: Optional[List[List[dict]]] = None,
) -> bool:
"""
编辑消息
@@ -573,6 +574,7 @@ class TelegramModule(_ModuleBase, _MessageBase[Telegram]):
:param chat_id: 聊天ID
:param text: 新的消息内容
:param title: 消息标题
:param buttons: 新的按钮列表
:return: 编辑是否成功
"""
if channel != self._channel:
@@ -587,6 +589,7 @@ class TelegramModule(_ModuleBase, _MessageBase[Telegram]):
message_id=message_id,
text=text,
title=title,
buttons=buttons,
)
if result:
return True


@@ -835,6 +835,7 @@ class Telegram:
message_id: Union[str, int],
text: str,
title: Optional[str] = None,
buttons: Optional[List[List[dict]]] = None,
) -> Optional[bool]:
"""
编辑Telegram消息公开方法
@@ -842,6 +843,7 @@ class Telegram:
:param message_id: 消息ID
:param text: 新的消息内容
:param title: 消息标题
:param buttons: 新的按钮列表
:return: 编辑是否成功
"""
if not self._bot:
@@ -861,6 +863,7 @@ class Telegram:
chat_id=str(chat_id),
message_id=int(message_id),
text=caption,
buttons=buttons,
)
except Exception as e:
logger.error(f"编辑Telegram消息异常: {str(e)}")


@@ -11,6 +11,7 @@ from .monitoring import *
from .plugin import *
from .response import *
from .rule import *
from .openai import *
from .servarr import *
from .servcookie import *
from .site import *
@@ -23,4 +24,3 @@ from .transfer import *
from .user import *
from .workflow import *
from .mcp import *

app/schemas/openai.py — new file, 156 lines

@@ -0,0 +1,156 @@
from typing import Any, Dict, List, Optional
from pydantic import BaseModel, ConfigDict, Field
class OpenAIModelInfo(BaseModel):
id: str
object: str = "model"
created: int
owned_by: str = "moviepilot"
class OpenAIModelListResponse(BaseModel):
object: str = "list"
data: List[OpenAIModelInfo] = Field(default_factory=list)
class OpenAIChatMessage(BaseModel):
role: str
content: Any
name: Optional[str] = None
model_config = ConfigDict(extra="allow")
class OpenAIChatCompletionsRequest(BaseModel):
model: Optional[str] = None
messages: List[OpenAIChatMessage]
user: Optional[str] = None
stream: bool = False
model_config = ConfigDict(extra="allow")
class OpenAIResponsesRequest(BaseModel):
model: Optional[str] = None
input: Any
instructions: Optional[str] = None
user: Optional[str] = None
stream: bool = False
model_config = ConfigDict(extra="allow")
class OpenAIChatChoiceMessage(BaseModel):
role: str = "assistant"
content: str
class OpenAIChatChoice(BaseModel):
index: int = 0
message: OpenAIChatChoiceMessage
finish_reason: str = "stop"
class OpenAIUsage(BaseModel):
prompt_tokens: int = 0
completion_tokens: int = 0
total_tokens: int = 0
class OpenAIChatCompletionResponse(BaseModel):
id: str
object: str = "chat.completion"
created: int
model: str
choices: List[OpenAIChatChoice]
usage: OpenAIUsage
class OpenAIResponsesOutputText(BaseModel):
type: str = "output_text"
text: str
annotations: List[Dict[str, Any]] = Field(default_factory=list)
class OpenAIResponsesOutputMessage(BaseModel):
id: str
type: str = "message"
status: str = "completed"
role: str = "assistant"
content: List[OpenAIResponsesOutputText] = Field(default_factory=list)
class OpenAIResponsesResponse(BaseModel):
id: str
object: str = "response"
created_at: int
status: str = "completed"
model: str
output: List[OpenAIResponsesOutputMessage] = Field(default_factory=list)
error: Optional[Any] = None
incomplete_details: Optional[Any] = None
usage: OpenAIUsage
class OpenAIErrorDetail(BaseModel):
message: str
type: str = "invalid_request_error"
param: Optional[str] = None
code: Optional[str] = None
class OpenAIErrorResponse(BaseModel):
error: OpenAIErrorDetail
OpenAIChatContentPart = Dict[str, Any]
class AnthropicMessage(BaseModel):
role: str
content: Any
model_config = ConfigDict(extra="allow")
class AnthropicMessagesRequest(BaseModel):
model: Optional[str] = None
messages: List[AnthropicMessage]
system: Optional[Any] = None
max_tokens: Optional[int] = 1024
stream: bool = False
model_config = ConfigDict(extra="allow")
class AnthropicTextBlock(BaseModel):
type: str = "text"
text: str
class AnthropicUsage(BaseModel):
input_tokens: int = 0
output_tokens: int = 0
class AnthropicMessagesResponse(BaseModel):
id: str
type: str = "message"
role: str = "assistant"
content: List[AnthropicTextBlock] = Field(default_factory=list)
model: str
stop_reason: str = "end_turn"
stop_sequence: Optional[str] = None
usage: AnthropicUsage = Field(default_factory=AnthropicUsage)
class AnthropicErrorDetail(BaseModel):
type: str = "invalid_request_error"
message: str
class AnthropicErrorResponse(BaseModel):
type: str = "error"
error: AnthropicErrorDetail
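A minimal sketch of assembling one of these responses (ids and token counts are placeholders):

```python
import time

response = OpenAIChatCompletionResponse(
    id="chatcmpl-local-1",
    created=int(time.time()),
    model="moviepilot-agent",
    choices=[OpenAIChatChoice(message=OpenAIChatChoiceMessage(content="OK"))],
    usage=OpenAIUsage(prompt_tokens=5, completion_tokens=1, total_tokens=6),
)
print(response.model_dump_json(indent=2))
```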

app/utils/identity.py — new file, 27 lines

@@ -0,0 +1,27 @@
from typing import Optional, Union
# 后台任务会话使用的内部占位用户ID。
# 它只用于在 agent/memory/session 侧标识“系统触发的任务”,
# 不能直接作为真实消息接收人下发到 Telegram/企业微信 等通知渠道。
SYSTEM_INTERNAL_USER_ID = "system"
def is_internal_user_id(userid: Optional[Union[str, int]]) -> bool:
"""
判断是否为系统内部占位用户ID。
"""
return (
isinstance(userid, str)
and userid.strip().lower() == SYSTEM_INTERNAL_USER_ID
)
def normalize_internal_user_id(
userid: Optional[Union[str, int]]
) -> Optional[Union[str, int]]:
"""
将系统内部占位用户ID归一化为 None避免被通知渠道误认为真实接收人。
"""
if is_internal_user_id(userid):
return None
return userid
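Worked examples of the normalization (user ids are illustrative):

```python
normalize_internal_user_id("system")    # -> None: placeholder, not a real recipient
normalize_internal_user_id(" SYSTEM ")  # -> None: case/whitespace tolerant
normalize_internal_user_id("12345")     # -> "12345": real user ids pass through
normalize_internal_user_id(10086)       # -> 10086: non-str ids are never internal
```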

app/utils/stdio.py — new file, 84 lines

@@ -0,0 +1,84 @@
from __future__ import annotations
import io
import logging
import sys
import threading
from logging.handlers import RotatingFileHandler
from pathlib import Path
class RotatingLineStream(io.TextIOBase):
"""
将 stdout/stderr 按行写入滚动日志文件。
这里不复用业务 logger避免 stdout 日志再次回流到控制台或普通业务日志文件,
同时保证启动阶段的 print/uvicorn 输出也能按配置滚动。
"""
def __init__(self, log_file: Path, max_bytes: int, backup_count: int):
super().__init__()
self._buffer = ""
self._lock = threading.Lock()
logger_name = f"moviepilot-stdio::{log_file}"
self._logger = logging.getLogger(logger_name)
self._logger.setLevel(logging.INFO)
self._logger.propagate = False
self._logger.handlers.clear()
handler = RotatingFileHandler(
filename=str(log_file),
maxBytes=max_bytes,
backupCount=backup_count,
encoding="utf-8",
)
handler.setFormatter(logging.Formatter("%(message)s"))
self._logger.addHandler(handler)
@property
def encoding(self) -> str:
return "utf-8"
def writable(self) -> bool:
return True
def isatty(self) -> bool:
return False
def write(self, message: str) -> int:
if not message:
return 0
with self._lock:
self._buffer += message.replace("\r\n", "\n")
while "\n" in self._buffer:
line, self._buffer = self._buffer.split("\n", 1)
self._logger.info(line)
return len(message)
def flush(self) -> None:
with self._lock:
if self._buffer:
self._logger.info(self._buffer)
self._buffer = ""
for handler in self._logger.handlers:
handler.flush()
def configure_rotating_stdio(
*, log_file: Path, max_bytes: int, backup_count: int
) -> RotatingLineStream:
"""
将当前进程的 stdout/stderr 统一重定向到同一个滚动日志流。
"""
log_file.parent.mkdir(parents=True, exist_ok=True)
stream = RotatingLineStream(
log_file=log_file,
max_bytes=max_bytes,
backup_count=backup_count,
)
sys.stdout = stream
sys.stderr = stream
return stream
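A usage sketch (the log path and size limits are placeholders):

```python
from pathlib import Path

# Route this process's stdout/stderr into a 10 MiB x 3 rolling log.
stream = configure_rotating_stdio(
    log_file=Path("/tmp/moviepilot.stdout.log"),
    max_bytes=10 * 1024 * 1024,
    backup_count=3,
)
print("this line lands in the rolling log, not the console")
stream.flush()
```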


@@ -41,6 +41,8 @@ curl -fsSL https://raw.githubusercontent.com/jxxghp/MoviePilot/v2/scripts/bootst
- macOS`~/Library/Application Support/MoviePilot`
- Linux`${XDG_CONFIG_HOME:-~/.config}/moviepilot`
如果在交互式终端中执行一键安装脚本,或直接执行 `moviepilot setup` / `moviepilot init` 且未传入 `--config-dir`,程序会先询问配置目录,并把上面的默认路径作为默认值展示出来。
可以在安装或初始化时手动指定:
```shell
@@ -62,6 +64,7 @@ moviepilot config path
- 前端本地 Node 运行时:`.runtime/node/`
- 后端日志:`<Config Dir>/logs/moviepilot.log`
- 后端启动日志:`<Config Dir>/logs/moviepilot.stdout.log`
该文件同样受 `LOG_MAX_FILE_SIZE` 与 `LOG_BACKUP_COUNT` 控制
- 前端启动日志:`<Config Dir>/logs/moviepilot.frontend.stdout.log`
## 帮助与发现
@@ -80,6 +83,7 @@ moviepilot commands
moviepilot help install
moviepilot help init
moviepilot help setup
moviepilot help uninstall
moviepilot help update
moviepilot help agent
moviepilot help config
@@ -111,9 +115,13 @@ moviepilot install frontend
moviepilot install resources
moviepilot init
moviepilot setup
moviepilot uninstall
moviepilot update backend
moviepilot update frontend
moviepilot update all
moviepilot startup enable
moviepilot startup disable
moviepilot startup status
moviepilot agent
moviepilot start
moviepilot stop
@@ -228,6 +236,8 @@ moviepilot setup --config-dir /path/to/moviepilot-config
可按需启用,并配置 `LLM_PROVIDER`、`LLM_MODEL`、`LLM_API_KEY`、`LLM_BASE_URL`
- 用户站点认证
可按需选择认证站点并按站点要求填写用户名、UID、Passkey 等参数
- 开机自启
可按需启用,MoviePilot 会根据当前操作系统注册登录自启动
- 下载器
- 媒体服务器
- 消息通知渠道
@@ -244,6 +254,45 @@ curl -fsSL https://raw.githubusercontent.com/jxxghp/MoviePilot/v2/scripts/bootst
- `--superuser-password` 更适合自动化场景,命令可能会出现在 shell 历史中
- 交互式 `--wizard` 会在初始化过程中提示输入超级管理员用户名和密码
## 开机自启命令
管理当前本地安装的开机自启:
```shell
moviepilot startup status
moviepilot startup enable
moviepilot startup disable
moviepilot startup enable --venv /path/to/venv
moviepilot startup enable --config-dir /path/to/moviepilot-config
```
说明:
- macOS 使用 `LaunchAgent`
- Linux 优先使用 `systemd --user`,当前环境不可用时自动回退到 `XDG autostart`
- Windows 使用当前用户的 Startup 启动目录
- 注册的启动项会调用本地 CLI 的统一启动入口,因此会同时拉起后端与前端
## 卸载命令
卸载本地安装产物:
```shell
moviepilot uninstall
moviepilot uninstall --venv /path/to/venv
moviepilot uninstall --config-dir /path/to/moviepilot-config
```
说明:
- 卸载时会先停止当前 CLI 管理的前后端服务
- 会删除本地虚拟环境、前端运行时、本地 Node 运行时、全局 `moviepilot` 软链接,以及同步到 `app/helper` 的资源文件
- 如果之前注册过开机自启,卸载时也会一并取消
- 会询问是否同时删除配置目录,默认不删除
- 如果当前使用的是仓库内 legacy `config/` 目录,确认删除后其中的 `category.yaml` 等配置文件也会一起删除
- 整个卸载流程包含两次确认
- 源码目录会保留,如需彻底移除仓库请在确认后手动删除项目目录
## 更新命令
更新后端:
@@ -315,10 +364,12 @@ moviepilot version
说明:
- `start` 会先启动后端,再启动前端
- 如果开启了 `MOVIEPILOT_AUTO_UPDATE=release|true|dev`,`start/restart` 会在启动前尽力执行一次本地自动更新;更新失败只告警,不阻断当前启动
- 通过系统内置的重启入口触发重启时,本地 CLI 安装模式也会复用同一套前后端进程管理完成重启
- 前端默认监听 `NGINX_PORT`,默认值 `3000`
- 后端默认监听 `PORT`,默认值 `3001`
- 前端通过 `service.js` 代理 `/api``/cookiecloud` 到后端
- 本地前端代理在启动时会先确认后端可用;如果后端长时间不可用,前端也会自动退出,避免只剩半套服务
日志:


@@ -14,7 +14,9 @@ Bootstrap Commands:
moviepilot install resources [--resources-repo PATH] [--resource-dir PATH] [--config-dir PATH]
moviepilot init [--skip-resources] [--force-token] [--wizard] [--superuser NAME] [--superuser-password PASSWORD] [--config-dir PATH]
moviepilot setup [--python PYTHON] [--venv PATH] [--recreate] [--frontend-version latest] [--node-version 20.12.1] [--wizard] [--superuser NAME] [--superuser-password PASSWORD] [--config-dir PATH]
moviepilot uninstall [--venv PATH] [--config-dir PATH]
moviepilot update {backend|frontend|all} [OPTIONS]
moviepilot startup {enable|disable|status} [--venv PATH] [--config-dir PATH]
moviepilot agent [OPTIONS] MESSAGE...
Runtime Commands:
@@ -27,7 +29,9 @@ Discovery Commands:
moviepilot help
moviepilot help config
moviepilot help install
moviepilot help uninstall
moviepilot help update
moviepilot help startup
moviepilot commands
Examples:
@@ -35,7 +39,9 @@ Examples:
moviepilot install frontend
moviepilot install resources
moviepilot setup --wizard
moviepilot uninstall
moviepilot update all
moviepilot startup enable
moviepilot agent 帮我分析最近一次搜索失败
moviepilot help config
moviepilot config keys
@@ -52,9 +58,13 @@ Bootstrap Commands
install resources
init
setup
uninstall
update backend
update frontend
update all
startup enable
startup disable
startup status
agent
Runtime Commands
@@ -145,6 +155,22 @@ Options:
EOF
}
show_uninstall_help() {
cat <<'EOF'
Usage: moviepilot uninstall [OPTIONS]
Options:
--venv PATH 虚拟环境目录,默认 ./venv
--config-dir PATH 指定配置目录,默认使用当前安装配置
-h, --help 显示帮助
说明:
- 默认保留配置目录,过程中会询问是否删除
- 卸载时会进行两次确认
- 源码目录不会被删除
EOF
}
show_update_help() {
cat <<'EOF'
Usage:
@@ -165,6 +191,25 @@ Options:
EOF
}
show_startup_help() {
cat <<'EOF'
Usage:
moviepilot startup enable [OPTIONS]
moviepilot startup disable [OPTIONS]
moviepilot startup status [OPTIONS]
Options:
--venv PATH 虚拟环境目录,默认 ./venv
--config-dir PATH 指定配置目录,默认使用当前安装配置
-h, --help 显示帮助
说明:
- macOS 使用 LaunchAgent
- Linux 优先使用 systemd --user,不可用时回退到 XDG autostart
- Windows 使用当前用户的 Startup 启动目录
EOF
}
show_agent_help() {
cat <<'EOF'
Usage:
@@ -296,6 +341,10 @@ show_command_help() {
show_setup_help
exit 0
;;
uninstall)
show_uninstall_help
exit 0
;;
agent)
show_agent_help
exit 0
@@ -304,6 +353,10 @@ show_command_help() {
show_update_help
exit 0
;;
startup)
show_startup_help
exit 0
;;
commands)
show_commands
exit 0
@@ -397,11 +450,22 @@ case "${1:-}" in
require_bootstrap_python
exec "$BOOTSTRAP_PYTHON" "$SETUP_SCRIPT" setup "$@"
;;
uninstall)
shift
require_bootstrap_python
COMMAND_PATH="$(command -v moviepilot 2>/dev/null || true)"
MOVIEPILOT_LAUNCH_PATH="$0" MOVIEPILOT_COMMAND_PATH="$COMMAND_PATH" exec "$BOOTSTRAP_PYTHON" "$SETUP_SCRIPT" uninstall "$@"
;;
update)
shift
require_bootstrap_python
exec "$BOOTSTRAP_PYTHON" "$SETUP_SCRIPT" update "$@"
;;
startup)
shift
require_bootstrap_python
exec "$BOOTSTRAP_PYTHON" "$SETUP_SCRIPT" startup "$@"
;;
agent)
shift
require_bootstrap_python


@@ -76,14 +76,14 @@ pympler~=1.1
smbprotocol~=1.15.0
setproctitle~=1.3.6
httpx[socks]~=0.28.1
langchain~=1.2.13
langchain-core~=1.2.20
langchain~=1.2.15
langchain-core~=1.3.2
langchain-community~=0.4.1
langchain-openai~=1.1.11
langchain-google-genai~=4.2.1
langchain-openai~=1.2.1
langchain-google-genai~=4.2.2
langchain-deepseek~=1.0.1
langgraph~=1.1.3
openai~=2.29.0
google-genai~=1.68.0
langgraph~=1.1.9
openai~=2.32.0
google-genai~=1.73.1
ddgs~=9.10.0
websocket-client~=1.8.0

File diff suppressed because it is too large.


@@ -0,0 +1,164 @@
---
name: create-moviepilot-skill
version: 1
description: >-
Use this skill when the user asks to create, scaffold, update, or review a
MoviePilot agent skill. This includes adding a new built-in skill under the
repository `skills/` directory, editing an existing built-in skill, writing
`SKILL.md` frontmatter and workflow instructions, choosing `allowed-tools`,
adding helper scripts when needed, and bumping the built-in skill `version`
so changes can sync into `config/agent/skills`.
allowed-tools: list_directory read_file write_file edit_file execute_command
---
# Create MoviePilot Skill
This skill guides you through creating or updating a built-in MoviePilot agent
skill in this repository.
## Scope
Use this workflow for repository built-in skills:
- Create or update files under `skills/<skill-id>/`
- Commit the skill as part of the MoviePilot repository
- Do not place the implementation only in `config/agent/skills` unless the user
explicitly asks for a local override instead of a built-in skill
## MoviePilot-Specific Rules
- The repository root `skills/` directory is the bundled source of truth for
built-in skills.
- On agent startup, bundled skills are synced into `config/agent/skills`.
- Sync overwrite depends on the `version` field in `SKILL.md`. If you update an
existing built-in skill, increment `version`, or users may continue using an
older copied version.
- Keep the folder name and frontmatter `name` identical. Use lowercase letters,
digits, and hyphens only.
- Prefer extending an existing skill instead of creating an overlapping
duplicate.
## Workflow
### Step 1: Understand the Request
- Determine whether the user wants a new skill or a change to an existing one.
- Extract the target task, likely trigger phrases, needed tools, and whether
helper scripts are necessary.
- If the goal is still ambiguous after reading the request and local context,
ask one focused clarification question. Otherwise proceed with a reasonable
default.
### Step 2: Check Existing Skills First
- Inspect the repository `skills/` directory before creating anything new.
- If an existing skill already covers most of the workflow, update it instead of
adding a near-duplicate.
- Reuse the repository style: concise YAML frontmatter, trigger-rich
description, and procedural body sections.
### Step 3: Choose the Skill ID and Path
- New built-in skill path: `skills/<skill-id>/SKILL.md`
- Keep `<skill-id>` short, hyphen-case, and under 64 characters.
- Use a verb-led or domain-led name that makes the trigger obvious, such as
`transfer-failed-retry`, `moviepilot-api`, or `create-moviepilot-skill`.
### Step 4: Write Frontmatter Correctly
Use this shape:
```markdown
---
name: create-moviepilot-skill
version: 1
description: >-
Explain what the skill does and exactly when to use it.
allowed-tools: list_directory read_file write_file edit_file execute_command
---
```
Rules:
- `description` is the primary trigger surface. Put concrete "when to use"
scenarios there.
- Include `version` for built-in skills. Increment it whenever you ship a new
built-in revision.
- Add `allowed-tools` when the workflow depends on a small, well-defined tool
set.
- Add `compatibility` only when environment constraints actually matter.
### Step 5: Write the Body
The body should contain:
- A short purpose statement
- MoviePilot-specific rules or guardrails
- A step-by-step workflow
- Concrete examples of matching user requests
- References to supporting files when they exist
Prefer:
- Imperative instructions
- Concrete file paths
- Examples aligned with actual MoviePilot conventions
Avoid:
- Generic theory that does not change execution
- Large duplicated documentation
- Extra files like `README.md` or `CHANGELOG.md` inside the skill directory
### Step 6: Add Supporting Files Only When They Help
- Add `scripts/` only when the same deterministic work would otherwise be
rewritten repeatedly.
- Keep helper files inside the same skill directory.
- Reference helper paths explicitly from `SKILL.md`.
- If the skill is instructions-only, keep it to a single `SKILL.md`.
### Step 7: Implement the Skill
For a new built-in skill:
1. Create `skills/<skill-id>/`
2. Create `SKILL.md`
3. Add helper scripts only if they are justified
For an existing built-in skill:
1. Edit `skills/<skill-id>/SKILL.md`
2. Increment `version`
3. Update helper files in the same directory if needed
### Step 8: Validate Before Finishing
- Re-read the frontmatter and confirm `name` matches the directory name.
- Confirm `description` mentions real trigger scenarios.
- If you changed an existing built-in skill, confirm `version` increased.
- If possible, validate the file can be parsed by the MoviePilot skills loader.
- Report the final path and note whether the agent needs a restart to sync the
latest built-in skill into `config/agent/skills`.
## Minimal Example
User request:
`给 MoviePilot agent 加一个处理站点 Cookie 更新的内置技能`
Expected outcome:
- Create or update a directory such as `skills/update-site-cookie/`
- Write `SKILL.md` with a trigger-rich `description`
- Include only the tools needed for that workflow
- Increment `version` when revising an existing built-in skill
## Final Checklist
- Is the skill under the repository `skills/` directory?
- Does the folder name equal frontmatter `name`?
- Does `description` clearly say when the skill should trigger?
- Did you avoid duplicating an existing skill unnecessarily?
- Did you increment `version` for built-in skill updates?
- Did you keep the skill lean and procedural?


@@ -1,11 +1,13 @@
---
name: generate-identifiers
version: 1
version: 2
description: >-
Use this skill when a user provides a torrent name or file name and wants to fix recognition issues,
or asks to add/manage custom identifiers (自定义识别词).
This skill generates identifier rules based on the WordsMatcher preprocessing logic,
checks for duplicates against existing rules, and saves them via MCP tools.
Because custom identifiers are global, generated rules must default to conservative,
sample-specific regex patterns instead of broad matches unless the user explicitly wants global cleanup.
Applicable scenarios include:
1) A torrent or file name is incorrectly recognized (wrong title, season, episode, etc.);
2) The user wants to block unwanted keywords from torrent names;
@@ -34,9 +36,11 @@ There are **four formats**. Operators must have spaces on both sides.
Removes matched text from the title. Supports regex.
```
REPACK
SomeUniqueAlias
```
Use a bare block word only when the token itself is specific enough globally, or when the user explicitly wants a global cleanup rule.
### 2. Replacement (被替换词 => 替换词)
Regex substitution. The left side is a regex pattern, the right side is the replacement (supports backreferences).
@@ -84,6 +88,40 @@ Lines starting with `#` are comments and will be skipped during processing.
5. **Chinese number support**: Episode offset handles Chinese numbers (一二三四五六七八九十).
6. **Empty replacement**: Using nothing after `=>` is equivalent to a block word.
## Global Scope Guardrails
Custom identifiers are **global**. A new rule affects all future torrent/file recognition, not just the sample provided by the user.
When generating a new rule, default to **the narrowest regex that still fixes the user's sample**:
- Extract the sample's unique anchors first: wrong title alias, year, season/episode marker, group tag, source, resolution, release tag, file extension, or other distinctive fragments.
- The matching side should usually contain **at least two meaningful anchors**, and one of them should normally be the title alias or another highly distinctive identifier from the user-provided sample.
- Prefer matching the **full wrong alias or a stable unique fragment** from the sample, not a short generic substring.
- Avoid generic global rules such as bare `1080p`, `WEB-DL`, `中字`, `国配`, `REPACK`, `S01E01`, or pure numbers unless the user explicitly wants a global cleanup rule.
- If the rule only needs to fix one specific naming pattern, prefer a **contextual replacement** with capture groups/backreferences over a bare block word.
- For episode offset rules, the `前定位词` and `后定位词` should use sample-specific context so the offset only runs on the intended naming pattern.
- For direct TMDB/Douban binding, the left side should match the user's specific wrong alias or naming pattern, not a broad season/episode pattern that could hit other media.
### Narrow vs Broad Examples
Bad (too broad for a global rule):
```
REPACK
1080p
S01E01 => {[tmdbid=12345;type=tv;s=1;e=1]}
```
Better (scoped to the user's sample pattern):
```
(\[SubGroup\].*?My\.Show.*?2024.*?)REPACK => \1
Some\.Weird\.Name(?:\.2024)?(?:\.S01E\d+)? => {[tmdbid=12345;type=tv;s=1]}
\[Baha\] <> \[1080P\] >> EP-12
```
Before saving, mentally test the rule against:
- the user's sample: it should match
- unrelated titles with common release tags: it should usually **not** match
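One way to run that check concretely with Python's `re` (sample names are illustrative):

```python
import re

rule = r"(\[SubGroup\].*?My\.Show.*?2024.*?)REPACK"
assert re.search(rule, "[SubGroup] My.Show.2024.1080p.REPACK.mkv")  # sample matches
assert not re.search(rule, "Other.Title.2023.REPACK.1080p.mkv")     # unrelated: no match
```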
## Workflow
### Step 1: Analyze the Problem
@@ -92,6 +130,7 @@ Parse the torrent/file name provided by the user. Identify:
- What is being incorrectly recognized (title, season, episode, year, quality, etc.)
- What the correct recognition result should be
- Which identifier format(s) will solve the problem
- Which fragments in the provided sample are unique enough to use as regex anchors, so the rule does not accidentally affect unrelated titles
### Step 2: Generate the Identifier Rule(s)
@@ -99,6 +138,7 @@ Write the rule using the appropriate format. Ensure:
- Regex special characters are properly escaped
- Add a comment line (starting with `#`) above the rule to describe what it does
- Test the regex mentally against the provided name to verify correctness
- Because the rule is global, prefer the most specific viable match; if a bare block word would be too broad, rewrite it as a contextual replacement that includes sample-specific anchors
### Step 3: Query Existing Identifiers
@@ -159,30 +199,30 @@ Tell the user:
**User**: "种子名 `My.Show.2024.REPACK.1080p.mkv`REPACK导致识别异常"
**Solution**: Block word:
**Solution**: Contextual replacement, scoped to this title pattern:
```
# 屏蔽REPACK标记
REPACK
# 仅在 My.Show.2024 命名中移除 REPACK
(My\.Show\.2024\.)REPACK(\.1080p) => \1\2
```
### Non-Standard Naming
**User**: "文件名 `[OldName] EP01.mkv`,应该识别为 NewName"
**Solution**: Replacement:
**Solution**: Replacement scoped to the wrong alias:
```
# OldName替换为NewName
OldName => NewName
# 将特定错误别名 OldName 替换为 NewName
\[OldName\] => [NewName]
```
### Force TMDB ID Recognition
**User**: "种子名 `Some.Weird.Name.S01E01.1080p.mkv`识别不到TMDB ID是12345是电视剧"
**Solution**: Direct ID specification:
**Solution**: Direct ID specification with a sample-specific alias pattern:
```
# 强制识别Some.Weird.Name,TMDB ID 12345
Some\.Weird\.Name => {[tmdbid=12345;type=tv;s=1]}
# 仅在 Some.Weird.Name 这一命名模式下强制绑定 TMDB ID 12345
Some\.Weird\.Name(?:\.S01E\d+)?(?:\.1080p)? => {[tmdbid=12345;type=tv;s=1]}
```
### Combined Fix
@@ -224,4 +264,5 @@ The `WordsMatcher.prepare()` method (in `app/core/meta/words.py`) processes each
- Always query existing rules first before updating
- Never remove existing rules unless the user explicitly asks
- Add comment lines before new rules for maintainability
- Remember that new rules are global. If a rule looks broad, rewrite it to include more sample-specific anchors before saving.
- When uncertain about the correct approach, present multiple options and let the user choose


@@ -8,7 +8,7 @@ from app.agent.tools.impl.ask_user_choice import (
AskUserChoiceTool,
UserChoiceOptionInput,
)
from app.agent.interaction import (
from app.chain.interaction import (
AgentInteractionOption,
agent_interaction_manager,
)


@@ -0,0 +1,22 @@
import unittest
from app.agent.middleware.memory import MEMORY_ONBOARDING_PROMPT
from app.agent.prompt import prompt_manager
class TestAgentPromptStyle(unittest.TestCase):
def test_agent_prompt_enforces_concise_professional_style(self):
prompt = prompt_manager.get_agent_prompt()
self.assertIn("professional, concise, restrained", prompt)
self.assertIn("Do NOT flatter the user", prompt)
self.assertIn("NO praise, emotional cushioning", prompt)
def test_memory_onboarding_does_not_force_warm_intro(self):
self.assertIn("Do NOT interrupt the current task", MEMORY_ONBOARDING_PROMPT)
self.assertIn("Do NOT proactively greet warmly", MEMORY_ONBOARDING_PROMPT)
self.assertNotIn("greet the user warmly", MEMORY_ONBOARDING_PROMPT)
if __name__ == "__main__":
unittest.main()


@@ -0,0 +1,144 @@
import importlib.util
import sys
import unittest
from pathlib import Path
from types import ModuleType
from langchain_core.messages import AIMessage, HumanMessage, ToolMessage
def _stub_module(name: str, **attrs):
module = sys.modules.get(name)
if module is None:
module = ModuleType(name)
sys.modules[name] = module
for key, value in attrs.items():
setattr(module, key, value)
return module
class _DummyLogger:
def __getattr__(self, _name):
return lambda *args, **kwargs: None
def _build_tool_call(name: str = "search", arguments: str = "{}"):
return [
{
"id": "call_1",
"type": "tool_call",
"name": name,
"args": {},
}
]
class _FakeChatDeepSeek:
def __init__(self, model_name: str, model_kwargs: dict | None = None):
self.model_name = model_name
self.model_kwargs = model_kwargs or {}
def _get_request_payload(self, input_, *, stop=None, **kwargs):
messages = []
for message in input_:
payload_message = {
"role": message.type,
"content": message.content,
}
if message.type == "human":
payload_message["role"] = "user"
elif message.type == "ai":
payload_message["role"] = "assistant"
tool_calls = getattr(message, "tool_calls", None)
if tool_calls:
payload_message["tool_calls"] = tool_calls
elif message.type == "tool":
payload_message["role"] = "tool"
payload_message["tool_call_id"] = message.tool_call_id
messages.append(payload_message)
return {"messages": messages}
_ORIGINAL_GET_REQUEST_PAYLOAD = _FakeChatDeepSeek._get_request_payload
sys.modules.pop("app.helper.llm", None)
_stub_module(
"app.core.config",
settings=ModuleType("settings"),
)
sys.modules["app.core.config"].settings.LLM_PROVIDER = "deepseek"
sys.modules["app.core.config"].settings.LLM_MODEL = "deepseek-v4-pro"
sys.modules["app.core.config"].settings.LLM_API_KEY = "sk-test"
sys.modules["app.core.config"].settings.LLM_BASE_URL = "https://api.deepseek.com"
sys.modules["app.core.config"].settings.LLM_THINKING_LEVEL = None
sys.modules["app.core.config"].settings.LLM_TEMPERATURE = 0.1
sys.modules["app.core.config"].settings.LLM_MAX_CONTEXT_TOKENS = 64
sys.modules["app.core.config"].settings.PROXY_HOST = None
_stub_module("app.log", logger=_DummyLogger())
_stub_module("langchain_deepseek", ChatDeepSeek=_FakeChatDeepSeek)
module_path = Path(__file__).resolve().parents[1] / "app" / "helper" / "llm.py"
spec = importlib.util.spec_from_file_location("test_llm_module_for_deepseek_compat", module_path)
llm_module = importlib.util.module_from_spec(spec)
assert spec and spec.loader
spec.loader.exec_module(llm_module)
class DeepSeekCompatPatchTest(unittest.TestCase):
def setUp(self):
_FakeChatDeepSeek._get_request_payload = _ORIGINAL_GET_REQUEST_PAYLOAD
if hasattr(_FakeChatDeepSeek, "_moviepilot_reasoning_content_patched"):
delattr(_FakeChatDeepSeek, "_moviepilot_reasoning_content_patched")
llm_module._patch_deepseek_reasoning_content_support()
def test_injects_reasoning_content_for_assistant_tool_calls(self):
llm = _FakeChatDeepSeek("deepseek-v4-pro")
messages = [
HumanMessage(content="天气如何?"),
AIMessage(
content="",
tool_calls=_build_tool_call(),
additional_kwargs={"reasoning_content": "先调用天气工具"},
),
ToolMessage(content="晴天", tool_call_id="call_1"),
]
payload = llm._get_request_payload(messages)
self.assertEqual(
payload["messages"][1]["reasoning_content"],
"先调用天气工具",
)
def test_falls_back_to_empty_reasoning_content_when_missing(self):
llm = _FakeChatDeepSeek("deepseek-v4-flash")
messages = [
HumanMessage(content="天气如何?"),
AIMessage(content="", tool_calls=_build_tool_call()),
ToolMessage(content="晴天", tool_call_id="call_1"),
]
payload = llm._get_request_payload(messages)
self.assertIn("reasoning_content", payload["messages"][1])
self.assertEqual(payload["messages"][1]["reasoning_content"], "")
def test_skips_injection_when_thinking_is_disabled(self):
llm = _FakeChatDeepSeek(
"deepseek-v4-pro",
model_kwargs={"extra_body": {"thinking": {"type": "disabled"}}},
)
messages = [
HumanMessage(content="天气如何?"),
AIMessage(
content="",
tool_calls=_build_tool_call(),
additional_kwargs={"reasoning_content": "先调用天气工具"},
),
ToolMessage(content="晴天", tool_call_id="call_1"),
]
payload = llm._get_request_payload(messages)
self.assertNotIn("reasoning_content", payload["messages"][1])


@@ -0,0 +1,321 @@
import asyncio
import importlib.util
import sys
import unittest
from pathlib import Path
from types import ModuleType, SimpleNamespace
from unittest.mock import Mock, patch
def _stub_module(name: str, **attrs):
module = sys.modules.get(name)
if module is None:
module = ModuleType(name)
sys.modules[name] = module
for key, value in attrs.items():
setattr(module, key, value)
return module
class _DummyLogger:
def __getattr__(self, _name):
return lambda *args, **kwargs: None
class _FakeModel:
def __init__(self, content):
self._content = content
async def ainvoke(self, _prompt):
return SimpleNamespace(content=self._content)
sys.modules.pop("app.helper.llm", None)
_stub_module(
"app.core.config",
settings=SimpleNamespace(
LLM_PROVIDER="global-provider",
LLM_MODEL="global-model",
LLM_API_KEY="global-key",
LLM_BASE_URL="https://global.example.com",
LLM_THINKING_LEVEL=None,
LLM_TEMPERATURE=0.1,
LLM_MAX_CONTEXT_TOKENS=64,
PROXY_HOST=None,
),
)
_stub_module("app.log", logger=_DummyLogger())
module_path = Path(__file__).resolve().parents[1] / "app" / "helper" / "llm.py"
spec = importlib.util.spec_from_file_location("test_llm_module", module_path)
llm_module = importlib.util.module_from_spec(spec)
assert spec and spec.loader
spec.loader.exec_module(llm_module)
class LlmHelperTestCallTest(unittest.TestCase):
def test_extract_text_content_ignores_non_text_blocks(self):
content = [
{"type": "reasoning", "text": "internal"},
{"type": "tool_use", "name": "search"},
{"type": "text", "text": "OK"},
]
result = llm_module.LLMHelper._extract_text_content(content)
self.assertEqual(result, "OK")
def test_test_current_settings_uses_explicit_snapshot(self):
fake_model = _FakeModel("OK")
get_llm_mock = Mock(return_value=fake_model)
with patch.object(llm_module.LLMHelper, "get_llm", get_llm_mock):
result = asyncio.run(
llm_module.LLMHelper.test_current_settings(
provider="deepseek",
model="deepseek-chat",
api_key="sk-test",
base_url="https://api.deepseek.com",
)
)
get_llm_mock.assert_called_once_with(
streaming=False,
provider="deepseek",
model="deepseek-chat",
thinking_level=None,
disable_thinking=None,
reasoning_effort=None,
api_key="sk-test",
base_url="https://api.deepseek.com",
)
self.assertEqual(result["provider"], "deepseek")
self.assertEqual(result["model"], "deepseek-chat")
self.assertEqual(result["reply_preview"], "OK")
def test_test_current_settings_does_not_promote_non_text_blocks(self):
fake_model = _FakeModel(
[
{"type": "tool_use", "name": "lookup"},
{"type": "reasoning", "text": "thinking"},
]
)
with patch.object(llm_module.LLMHelper, "get_llm", return_value=fake_model):
result = asyncio.run(
llm_module.LLMHelper.test_current_settings(
provider="deepseek",
model="deepseek-chat",
api_key="sk-test",
base_url="https://api.deepseek.com",
)
)
self.assertNotIn("reply_preview", result)
def test_get_llm_uses_kimi_extra_body_to_disable_thinking(self):
calls = []
class _FakeChatOpenAI:
def __init__(self, **kwargs):
calls.append(kwargs)
self.model = kwargs["model"]
self.profile = None
with patch.dict(
sys.modules,
{"langchain_openai": SimpleNamespace(ChatOpenAI=_FakeChatOpenAI)},
):
llm_module.LLMHelper.get_llm(
provider="openai",
model="kimi-k2.6",
disable_thinking=True,
api_key="sk-test",
base_url="https://kimi.example.com/v1",
)
self.assertEqual(len(calls), 1)
self.assertEqual(
calls[0].get("extra_body"),
{"thinking": {"type": "disabled"}},
)
def test_get_llm_uses_deepseek_thinking_level_controls(self):
calls = []
patch_calls = []
class _FakeChatDeepSeek:
def __init__(self, **kwargs):
calls.append(kwargs)
self.model = kwargs["model"]
self.profile = None
with patch.dict(
sys.modules,
{"langchain_deepseek": SimpleNamespace(ChatDeepSeek=_FakeChatDeepSeek)},
), patch.object(
llm_module,
"_patch_deepseek_reasoning_content_support",
side_effect=lambda: patch_calls.append(True),
):
llm_module.LLMHelper.get_llm(
provider="deepseek",
model="deepseek-v4-pro",
thinking_level="xhigh",
api_key="sk-test",
base_url="https://api.deepseek.com",
)
self.assertEqual(len(calls), 1)
self.assertEqual(
calls[0].get("extra_body"),
{"thinking": {"type": "enabled"}},
)
self.assertEqual(patch_calls, [True])
self.assertEqual(calls[0].get("reasoning_effort"), "max")
self.assertEqual(calls[0].get("api_base"), "https://api.deepseek.com")
def test_get_llm_disables_deepseek_thinking_via_thinking_level(self):
calls = []
patch_calls = []
class _FakeChatDeepSeek:
def __init__(self, **kwargs):
calls.append(kwargs)
self.model = kwargs["model"]
self.profile = None
with patch.dict(
sys.modules,
{"langchain_deepseek": SimpleNamespace(ChatDeepSeek=_FakeChatDeepSeek)},
), patch.object(
llm_module,
"_patch_deepseek_reasoning_content_support",
side_effect=lambda: patch_calls.append(True),
):
llm_module.LLMHelper.get_llm(
provider="deepseek",
model="deepseek-v4-flash",
thinking_level="off",
api_key="sk-test",
base_url="https://proxy.example.com",
)
self.assertEqual(len(calls), 1)
self.assertEqual(
calls[0].get("extra_body"),
{"thinking": {"type": "disabled"}},
)
self.assertEqual(patch_calls, [True])
self.assertIsNone(calls[0].get("reasoning_effort"))
self.assertEqual(calls[0].get("api_base"), "https://proxy.example.com")
def test_get_llm_uses_openai_reasoning_effort_none_for_off(self):
calls = []
class _FakeChatOpenAI:
def __init__(self, **kwargs):
calls.append(kwargs)
self.model = kwargs["model"]
self.profile = None
with patch.dict(
sys.modules,
{"langchain_openai": SimpleNamespace(ChatOpenAI=_FakeChatOpenAI)},
):
llm_module.LLMHelper.get_llm(
provider="openai",
model="gpt-5-mini",
thinking_level="off",
api_key="sk-test",
base_url="https://api.openai.com/v1",
)
self.assertEqual(len(calls), 1)
self.assertEqual(calls[0].get("reasoning_effort"), "none")
def test_get_llm_maps_unified_max_to_openai_xhigh(self):
calls = []
class _FakeChatOpenAI:
def __init__(self, **kwargs):
calls.append(kwargs)
self.model = kwargs["model"]
self.profile = None
with patch.dict(
sys.modules,
{"langchain_openai": SimpleNamespace(ChatOpenAI=_FakeChatOpenAI)},
):
llm_module.LLMHelper.get_llm(
provider="openai",
model="gpt-5.4",
thinking_level="max",
api_key="sk-test",
base_url="https://api.openai.com/v1",
)
self.assertEqual(len(calls), 1)
self.assertEqual(calls[0].get("reasoning_effort"), "xhigh")
def test_get_llm_uses_gemini_builtin_thinking_controls(self):
calls = []
class _FakeChatGoogleGenerativeAI:
def __init__(self, **kwargs):
calls.append(kwargs)
self.model = kwargs["model"]
self.profile = None
with patch.dict(
sys.modules,
{
"langchain_google_genai": SimpleNamespace(
ChatGoogleGenerativeAI=_FakeChatGoogleGenerativeAI
)
},
):
llm_module.LLMHelper.get_llm(
provider="google",
model="gemini-2.5-flash",
thinking_level="off",
api_key="sk-test",
base_url=None,
)
self.assertEqual(len(calls), 1)
self.assertEqual(calls[0].get("thinking_budget"), 0)
self.assertFalse(calls[0].get("include_thoughts"))
def test_get_llm_uses_gemini_3_thinking_level_controls(self):
calls = []
class _FakeChatGoogleGenerativeAI:
def __init__(self, **kwargs):
calls.append(kwargs)
self.model = kwargs["model"]
self.profile = None
with patch.dict(
sys.modules,
{
"langchain_google_genai": SimpleNamespace(
ChatGoogleGenerativeAI=_FakeChatGoogleGenerativeAI
)
},
):
llm_module.LLMHelper.get_llm(
provider="google",
model="gemini-3.1-flash",
thinking_level="xhigh",
api_key="sk-test",
base_url=None,
)
self.assertEqual(len(calls), 1)
self.assertEqual(calls[0].get("thinking_level"), "high")
self.assertFalse(calls[0].get("include_thoughts"))
if __name__ == "__main__":
unittest.main()


@@ -0,0 +1,63 @@
from __future__ import annotations
import importlib.util
import unittest
import uuid
from pathlib import Path
from unittest.mock import patch
MODULE_PATH = Path(__file__).resolve().parents[1] / "scripts" / "local_setup.py"
def load_local_setup_module():
module_name = f"moviepilot_local_setup_config_{uuid.uuid4().hex}"
spec = importlib.util.spec_from_file_location(module_name, MODULE_PATH)
module = importlib.util.module_from_spec(spec)
assert spec and spec.loader
spec.loader.exec_module(module)
return module
class LocalSetupConfigDirTests(unittest.TestCase):
def test_setup_prompts_for_config_dir_when_not_provided(self):
module = load_local_setup_module()
default_dir = Path("/tmp/default-moviepilot-config")
custom_dir = Path("/tmp/custom-moviepilot-config")
with patch.object(module, "_is_interactive", return_value=True), patch.object(
module, "resolve_config_dir", return_value=default_dir
), patch.object(
module, "_prompt_path", return_value=str(custom_dir)
):
result = module._resolve_interactive_config_dir("setup", None)
self.assertEqual(result, custom_dir)
def test_setup_keeps_default_config_dir_when_user_accepts_default(self):
module = load_local_setup_module()
default_dir = Path("/tmp/default-moviepilot-config")
with patch.object(module, "_is_interactive", return_value=True), patch.object(
module, "resolve_config_dir", return_value=default_dir
), patch.object(
module, "_prompt_path", return_value=str(default_dir)
):
result = module._resolve_interactive_config_dir("init", None)
self.assertEqual(result, default_dir)
def test_non_setup_command_does_not_prompt_for_config_dir(self):
module = load_local_setup_module()
with patch.object(module, "_is_interactive", return_value=True), patch.object(
module, "_prompt_path"
) as prompt_mock:
result = module._resolve_interactive_config_dir("install-deps", None)
self.assertIsNone(result)
prompt_mock.assert_not_called()
if __name__ == "__main__":
unittest.main()


@@ -0,0 +1,167 @@
from __future__ import annotations
import importlib.util
import tempfile
import unittest
import uuid
from contextlib import ExitStack
from pathlib import Path
from unittest.mock import patch
MODULE_PATH = Path(__file__).resolve().parents[1] / "scripts" / "local_setup.py"
def load_local_setup_module():
module_name = f"moviepilot_local_setup_{uuid.uuid4().hex}"
spec = importlib.util.spec_from_file_location(module_name, MODULE_PATH)
module = importlib.util.module_from_spec(spec)
assert spec and spec.loader
spec.loader.exec_module(module)
return module
class LocalSetupUninstallTests(unittest.TestCase):
def prepare_install_tree(self, *, legacy_config: bool = False):
module = load_local_setup_module()
temp_dir = tempfile.TemporaryDirectory()
self.addCleanup(temp_dir.cleanup)
temp_path = Path(temp_dir.name)
root_dir = temp_path / "MoviePilot"
helper_dir = root_dir / "app" / "helper"
runtime_dir = root_dir / ".runtime"
public_dir = root_dir / "public"
venv_dir = root_dir / "venv"
install_env_file = root_dir / ".moviepilot.env"
config_dir = root_dir / "config" if legacy_config else temp_path / "moviepilot-config"
temp_config_dir = config_dir / "temp"
helper_dir.mkdir(parents=True)
runtime_dir.mkdir(parents=True)
public_dir.mkdir(parents=True)
venv_dir.mkdir(parents=True)
temp_config_dir.mkdir(parents=True)
install_env_file.write_text("CONFIG_DIR=/tmp/moviepilot-config\n", encoding="utf-8")
(root_dir / "moviepilot").write_text("#!/usr/bin/env bash\n", encoding="utf-8")
(helper_dir / "sites.py").write_text("generated\n", encoding="utf-8")
(helper_dir / "user.sites.v2.bin").write_bytes(b"binary")
(temp_config_dir / "moviepilot.runtime.json").write_text("{}", encoding="utf-8")
(temp_config_dir / "moviepilot.frontend.runtime.json").write_text(
"{}", encoding="utf-8"
)
stack = ExitStack()
self.addCleanup(stack.close)
stack.enter_context(patch.object(module, "ROOT", root_dir))
stack.enter_context(patch.object(module, "HELPER_DIR", helper_dir))
stack.enter_context(patch.object(module, "RUNTIME_DIR", runtime_dir))
stack.enter_context(patch.object(module, "PUBLIC_DIR", public_dir))
stack.enter_context(patch.object(module, "INSTALL_ENV_FILE", install_env_file))
stack.enter_context(patch.object(module, "LEGACY_CONFIG_DIR", root_dir / "config"))
stack.enter_context(patch.object(module, "CONFIG_DIR", config_dir))
stack.enter_context(patch.object(module, "TEMP_DIR", temp_config_dir))
return module, root_dir, config_dir, venv_dir, install_env_file
def test_remove_config_data_deletes_legacy_config_directory(self):
module, _, config_dir, _, _ = self.prepare_install_tree(legacy_config=True)
category_file = config_dir / "category.yaml"
category_file.write_text("seed\n", encoding="utf-8")
(config_dir / "logs").mkdir(exist_ok=True)
(config_dir / "user.db").write_text("db\n", encoding="utf-8")
removed = module._remove_config_data(config_dir)
self.assertFalse(config_dir.exists())
self.assertFalse(category_file.exists())
self.assertIn(
str(config_dir.resolve()),
{str(path.resolve()) for path in removed},
)
def test_uninstall_keeps_config_by_default(self):
module, root_dir, config_dir, venv_dir, install_env_file = self.prepare_install_tree()
cli_dir = root_dir.parent / "bin"
cli_dir.mkdir()
cli_link = cli_dir / "moviepilot"
cli_link.symlink_to(root_dir / "moviepilot")
yes_no_answers = iter([False, True])
with patch.object(module, "_is_interactive", return_value=True), patch.object(
module,
"_prompt_yes_no",
side_effect=lambda label, default=False: next(yes_no_answers),
), patch.object(
module,
"_prompt_text",
side_effect=lambda label, default=None, allow_empty=False, secret=False: module.UNINSTALL_CONFIRM_TEXT,
), patch.object(module, "_stop_managed_services", return_value=None):
result = module.uninstall_local(
venv_dir=venv_dir,
config_dir=config_dir,
launch_path=str(cli_link),
)
self.assertFalse(result["cancelled"])
self.assertTrue(config_dir.exists())
self.assertTrue(install_env_file.exists())
self.assertFalse(venv_dir.exists())
self.assertFalse((root_dir / ".runtime").exists())
self.assertFalse((root_dir / "public").exists())
self.assertFalse((root_dir / "app" / "helper" / "sites.py").exists())
self.assertFalse((root_dir / "app" / "helper" / "user.sites.v2.bin").exists())
self.assertFalse(cli_link.exists())
def test_uninstall_deletes_external_config_when_requested(self):
module, _, config_dir, venv_dir, install_env_file = self.prepare_install_tree()
yes_no_answers = iter([True, True])
with patch.object(module, "_is_interactive", return_value=True), patch.object(
module,
"_prompt_yes_no",
side_effect=lambda label, default=False: next(yes_no_answers),
), patch.object(
module,
"_prompt_text",
side_effect=lambda label, default=None, allow_empty=False, secret=False: module.UNINSTALL_CONFIRM_TEXT,
), patch.object(module, "_stop_managed_services", return_value=None):
result = module.uninstall_local(
venv_dir=venv_dir,
config_dir=config_dir,
)
self.assertFalse(result["cancelled"])
self.assertTrue(result["config_deleted"])
self.assertFalse(config_dir.exists())
self.assertFalse(install_env_file.exists())
def test_uninstall_deletes_legacy_config_when_requested(self):
module, _, config_dir, venv_dir, install_env_file = self.prepare_install_tree(
legacy_config=True
)
(config_dir / "category.yaml").write_text("seed\n", encoding="utf-8")
yes_no_answers = iter([True, True])
with patch.object(module, "_is_interactive", return_value=True), patch.object(
module,
"_prompt_yes_no",
side_effect=lambda label, default=False: next(yes_no_answers),
), patch.object(
module,
"_prompt_text",
side_effect=lambda label, default=None, allow_empty=False, secret=False: module.UNINSTALL_CONFIRM_TEXT,
), patch.object(module, "_stop_managed_services", return_value=None):
result = module.uninstall_local(
venv_dir=venv_dir,
config_dir=config_dir,
)
self.assertFalse(result["cancelled"])
self.assertTrue(result["config_deleted"])
self.assertFalse(config_dir.exists())
self.assertFalse(install_env_file.exists())
if __name__ == "__main__":
unittest.main()


@@ -0,0 +1,158 @@
import sys
import unittest
from types import ModuleType
from unittest.mock import patch
sys.modules.setdefault("qbittorrentapi", ModuleType("qbittorrentapi"))
setattr(sys.modules["qbittorrentapi"], "TorrentFilesList", list)
sys.modules.setdefault("transmission_rpc", ModuleType("transmission_rpc"))
setattr(sys.modules["transmission_rpc"], "File", object)
sys.modules.setdefault("psutil", ModuleType("psutil"))
from app.chain.interaction import MediaInteractionChain, media_interaction_manager
from app.chain.message import MessageChain
from app.core.context import MediaInfo
from app.core.meta import MetaBase
from app.schemas.types import MessageChannel
class TestMediaInteraction(unittest.TestCase):
def tearDown(self):
media_interaction_manager.clear()
@staticmethod
def _build_meta(name: str) -> MetaBase:
meta = MetaBase(name)
meta.name = name
meta.begin_season = 1
return meta
def test_message_routes_text_reply_to_media_interaction_before_ai(self):
chain = MessageChain()
request = media_interaction_manager.create_or_replace(
user_id="10001",
channel=MessageChannel.Wechat,
source="wechat-test",
username="tester",
action="Search",
keyword="星际穿越",
title="星际穿越",
meta=self._build_meta("星际穿越"),
items=[MediaInfo(title="星际穿越", year="2014")],
)
self.assertIsNotNone(request)
with patch.object(chain, "_record_user_message"), patch(
"app.chain.message.MediaInteractionChain.handle_text_interaction",
return_value=True,
) as handle_text, patch.object(chain, "_handle_ai_message") as handle_ai:
chain.handle_message(
channel=MessageChannel.Wechat,
source="wechat-test",
userid="10001",
username="tester",
text="1",
)
handle_text.assert_called_once()
handle_ai.assert_not_called()
def test_callback_routes_to_media_interaction_chain(self):
chain = MessageChain()
request = media_interaction_manager.create_or_replace(
user_id="10001",
channel=MessageChannel.Telegram,
source="telegram-test",
username="tester",
action="Search",
keyword="星际穿越",
title="星际穿越",
meta=self._build_meta("星际穿越"),
items=[MediaInfo(title="星际穿越", year="2014")],
)
with patch(
"app.chain.message.MediaInteractionChain.handle_callback_interaction",
return_value=True,
) as handle_callback:
chain._handle_callback(
text=f"CALLBACK:media:{request.request_id}:page-next",
channel=MessageChannel.Telegram,
source="telegram-test",
userid="10001",
username="tester",
)
handle_callback.assert_called_once()
def test_media_interaction_starts_search_and_posts_media_list(self):
chain = MediaInteractionChain()
meta = self._build_meta("星际穿越")
medias = [
MediaInfo(title="星际穿越", year="2014"),
MediaInfo(title="Interstellar", year="2014"),
]
with patch(
"app.chain.interaction.MediaChain.search",
return_value=(meta, medias),
), patch.object(chain, "post_medias_message") as post_medias_message:
handled = chain.handle_text_interaction(
channel=MessageChannel.Telegram,
source="telegram-test",
userid="10001",
username="tester",
text="星际穿越",
)
self.assertTrue(handled)
post_medias_message.assert_called_once()
notification = post_medias_message.call_args.args[0]
self.assertTrue(notification.buttons)
self.assertTrue(
notification.buttons[0][0]["callback_data"].startswith("media:")
)
request = media_interaction_manager.get_by_user("10001")
self.assertIsNotNone(request)
self.assertEqual(request.action, "Search")
self.assertEqual(len(request.items), 2)
def test_media_interaction_legacy_page_callback_updates_existing_request(self):
chain = MediaInteractionChain()
request = media_interaction_manager.create_or_replace(
user_id="10001",
channel=MessageChannel.Telegram,
source="telegram-test",
username="tester",
action="Search",
keyword="星际穿越",
title="星际穿越",
meta=self._build_meta("星际穿越"),
items=[
MediaInfo(title=f"资源 {index}", year="2024")
for index in range(1, 11)
],
)
with patch.object(chain, "post_medias_message") as post_medias_message:
handled = chain.handle_callback_interaction(
callback_data="page_n",
channel=MessageChannel.Telegram,
source="telegram-test",
userid="10001",
username="tester",
original_message_id=123,
original_chat_id="456",
)
self.assertTrue(handled)
self.assertEqual(request.page, 1)
post_medias_message.assert_called_once()
notification = post_medias_message.call_args.args[0]
self.assertEqual(notification.original_message_id, 123)
self.assertEqual(notification.original_chat_id, "456")
if __name__ == "__main__":
unittest.main()

tests/test_openai_utils.py (new file, 120 lines)

@@ -0,0 +1,120 @@
from unittest import TestCase
from app.api.openai_utils import (
build_anthropic_messages,
build_completion_payload,
build_prompt,
build_responses_input,
build_session_id,
extract_text_and_images,
)
class OpenAIUtilsTest(TestCase):
def test_extract_text_and_images(self):
text, images = extract_text_and_images(
[
{"type": "text", "text": "你好"},
{"type": "image_url", "image_url": {"url": "https://example.com/a.png"}},
{"type": "text", "text": "世界"},
]
)
self.assertEqual(text, "你好\n世界")
self.assertEqual(images, ["https://example.com/a.png"])
def test_extract_text_and_images_with_input_image_and_base64_image(self):
text, images = extract_text_and_images(
[
{"type": "input_text", "text": "看图"},
{"type": "input_image", "image_url": "https://example.com/b.png"},
{
"type": "image",
"source": {
"type": "base64",
"media_type": "image/png",
"data": "YWJj",
},
},
]
)
self.assertEqual(text, "看图")
self.assertEqual(
images,
["https://example.com/b.png", "data:image/png;base64,YWJj"],
)
def test_build_prompt_without_server_session_keeps_recent_history(self):
prompt, images = build_prompt(
[
{"role": "system", "content": "回答简短"},
{"role": "user", "content": "第一句"},
{"role": "assistant", "content": "第一答"},
{"role": "user", "content": "第二句"},
],
use_server_session=False,
)
self.assertIn("系统要求:\n回答简短", prompt)
self.assertIn("对话上下文:\nuser: 第一句\nassistant: 第一答", prompt)
self.assertIn("当前用户消息:\n第二句", prompt)
self.assertEqual(images, [])
def test_build_prompt_with_server_session_ignores_history_block(self):
prompt, _ = build_prompt(
[
{"role": "user", "content": "历史问题"},
{"role": "assistant", "content": "历史回答"},
{"role": "user", "content": "当前问题"},
],
use_server_session=True,
)
self.assertNotIn("对话上下文:", prompt)
self.assertIn("当前用户消息:\n当前问题", prompt)
def test_build_prompt_accepts_image_only_user_message(self):
prompt, images = build_prompt(
[
{
"role": "user",
"content": [
{"type": "image_url", "image_url": {"url": "https://example.com/a.png"}}
],
}
],
use_server_session=True,
)
self.assertIn("请结合图片内容回复", prompt)
self.assertEqual(images, ["https://example.com/a.png"])
def test_build_session_id_is_stable(self):
session_id = build_session_id("user-1", "openai:")
self.assertTrue(session_id.startswith("openai:"))
self.assertEqual(session_id, build_session_id("user-1", "openai:"))
self.assertNotEqual(session_id, build_session_id("user-2", "openai:"))
def test_build_completion_payload(self):
payload = build_completion_payload("你好", "moviepilot-agent")
self.assertEqual(payload["model"], "moviepilot-agent")
self.assertEqual(payload["choices"][0]["message"]["content"], "你好")
self.assertEqual(payload["choices"][0]["finish_reason"], "stop")
def test_build_responses_input(self):
messages = build_responses_input(
[
{
"type": "message",
"role": "user",
"content": [{"type": "input_text", "text": "你好"}],
}
],
instructions="你要简短回答",
)
self.assertEqual(messages[0]["role"], "system")
self.assertEqual(messages[1]["role"], "user")
def test_build_anthropic_messages(self):
messages = build_anthropic_messages(
system=[{"type": "text", "text": "你是助手"}],
messages=[{"role": "user", "content": "你好"}],
)
self.assertEqual(messages[0]["role"], "system")
self.assertEqual(messages[1]["role"], "user")


@@ -0,0 +1,205 @@
import threading
import unittest
from pathlib import Path
from types import SimpleNamespace
from typing import Union
from unittest.mock import patch
from app import schemas
from app.modules.filemanager.storages import rclone as rclone_module
from app.modules.filemanager.storages.rclone import Rclone
class RcloneStorageTest(unittest.TestCase):
def setUp(self):
with rclone_module._folder_locks_guard:
rclone_module._folder_locks.clear()
@staticmethod
def _normalize(path: Union[Path, str]) -> str:
return Rclone._Rclone__normalize_remote_path(path)
def _make_dir_item(self, path: Union[Path, str]) -> schemas.FileItem:
normalized = self._normalize(path)
name = Path(normalized).name or "/"
return schemas.FileItem(
storage="rclone",
type="dir",
path="/" if normalized == "/" else f"{normalized}/",
name=name,
basename=name,
)
def test_get_folder_serializes_same_target_directory_creation(self):
storage = Rclone()
thread_count = 4
start_event = threading.Event()
missing_barrier = threading.Barrier(thread_count)
state_lock = threading.Lock()
existing_paths = {"/"}
mkdir_calls = []
results = []
errors = []
def fake_get_item(_self, path: Path):
normalized = self._normalize(path)
with state_lock:
exists = normalized in existing_paths
if not exists and normalized == "/Show":
try:
missing_barrier.wait(timeout=0.1)
except threading.BrokenBarrierError:
pass
with state_lock:
exists = normalized in existing_paths
if exists:
return self._make_dir_item(normalized)
return None
def fake_run(cmd, *args, **kwargs):
target = self._normalize(cmd[-1].removeprefix("MP:"))
with state_lock:
mkdir_calls.append(target)
existing_paths.add(target)
return SimpleNamespace(returncode=0)
def worker():
try:
start_event.wait()
results.append(storage.get_folder(Path("/Show/Season 1")))
except Exception as err: # pragma: no cover - only used for debugging failures
errors.append(err)
threads = [threading.Thread(target=worker) for _ in range(thread_count)]
with patch.object(Rclone, "get_item", autospec=True, side_effect=fake_get_item):
with patch(
"app.modules.filemanager.storages.rclone.subprocess.run",
side_effect=fake_run,
):
for thread in threads:
thread.start()
start_event.set()
for thread in threads:
thread.join(timeout=1)
self.assertFalse(errors)
self.assertTrue(all(not thread.is_alive() for thread in threads))
self.assertEqual(thread_count, len(results))
self.assertTrue(all(result and result.path == "/Show/Season 1/" for result in results))
self.assertEqual(1, mkdir_calls.count("/Show"))
self.assertEqual(1, mkdir_calls.count("/Show/Season 1"))
def test_get_folder_serializes_shared_parent_creation(self):
storage = Rclone()
thread_count = 4
start_event = threading.Event()
missing_barrier = threading.Barrier(thread_count)
state_lock = threading.Lock()
existing_paths = {"/"}
mkdir_calls = []
results = []
errors = []
targets = [
Path("/Show/Season 1"),
Path("/Show/Season 2"),
Path("/Show/Season 1"),
Path("/Show/Season 2"),
]
def fake_get_item(_self, path: Path):
normalized = self._normalize(path)
with state_lock:
exists = normalized in existing_paths
if not exists and normalized == "/Show":
try:
missing_barrier.wait(timeout=0.1)
except threading.BrokenBarrierError:
pass
with state_lock:
exists = normalized in existing_paths
if exists:
return self._make_dir_item(normalized)
return None
def fake_run(cmd, *args, **kwargs):
target = self._normalize(cmd[-1].removeprefix("MP:"))
with state_lock:
mkdir_calls.append(target)
existing_paths.add(target)
return SimpleNamespace(returncode=0)
def worker(target: Path):
try:
start_event.wait()
results.append(storage.get_folder(target))
except Exception as err: # pragma: no cover - only used for debugging failures
errors.append(err)
threads = [threading.Thread(target=worker, args=(target,)) for target in targets]
with patch.object(Rclone, "get_item", autospec=True, side_effect=fake_get_item):
with patch(
"app.modules.filemanager.storages.rclone.subprocess.run",
side_effect=fake_run,
):
for thread in threads:
thread.start()
start_event.set()
for thread in threads:
thread.join(timeout=1)
self.assertFalse(errors)
self.assertTrue(all(not thread.is_alive() for thread in threads))
self.assertEqual(4, len(results))
self.assertEqual(1, mkdir_calls.count("/Show"))
self.assertEqual(1, mkdir_calls.count("/Show/Season 1"))
self.assertEqual(1, mkdir_calls.count("/Show/Season 2"))
def test_create_folder_retries_visibility_after_successful_mkdir(self):
storage = Rclone()
expected = self._make_dir_item("/Show")
responses = [None, expected]
def fake_get_item(_self, path: Path):
return responses.pop(0)
with patch.object(Rclone, "get_item", autospec=True, side_effect=fake_get_item):
with patch(
"app.modules.filemanager.storages.rclone.subprocess.run",
return_value=SimpleNamespace(returncode=0),
) as run_mock:
with patch("app.modules.filemanager.storages.rclone.time.sleep", return_value=None):
folder = storage.create_folder(
schemas.FileItem(storage="rclone", type="dir", path="/"),
"Show",
)
self.assertEqual("/Show/", folder.path)
run_mock.assert_called_once()
def test_create_folder_accepts_existing_directory_after_failed_mkdir(self):
storage = Rclone()
expected = self._make_dir_item("/Show")
responses = [None, expected]
def fake_get_item(_self, path: Path):
return responses.pop(0)
with patch.object(Rclone, "get_item", autospec=True, side_effect=fake_get_item):
with patch(
"app.modules.filemanager.storages.rclone.subprocess.run",
return_value=SimpleNamespace(returncode=1),
) as run_mock:
with patch("app.modules.filemanager.storages.rclone.time.sleep", return_value=None):
folder = storage.create_folder(
schemas.FileItem(storage="rclone", type="dir", path="/"),
"Show",
)
self.assertEqual("/Show/", folder.path)
run_mock.assert_called_once()
if __name__ == "__main__":
unittest.main()


@@ -0,0 +1,712 @@
import io
import sys
import tempfile
import unittest
import zipfile
from pathlib import Path
from types import ModuleType
from unittest.mock import patch
sys.modules.setdefault("qbittorrentapi", ModuleType("qbittorrentapi"))
setattr(sys.modules["qbittorrentapi"], "TorrentFilesList", list)
sys.modules.setdefault("transmission_rpc", ModuleType("transmission_rpc"))
setattr(sys.modules["transmission_rpc"], "File", object)
sys.modules.setdefault("psutil", ModuleType("psutil"))
sys.modules.setdefault("aioshutil", ModuleType("aioshutil"))
from app.chain.message import MessageChain
from app.chain.skills import SkillsChain, skills_interaction_manager
from app.helper.skill import (
SkillHelper,
SkillInfo,
SkillMarketSource,
settings as skill_settings,
)
from app.schemas.types import MessageChannel
def _build_skill_zip(skill_dir: str, skill_name: str) -> bytes:
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
zf.writestr(
f"demo-main/{skill_dir}/SKILL.md",
(
f"---\n"
f"name: {skill_name}\n"
f"version: 1\n"
f"description: demo skill\n"
f"---\n\n"
f"# {skill_name}\n"
),
)
zf.writestr(f"demo-main/{skill_dir}/scripts/example.py", "print('ok')\n")
return buf.getvalue()
class _FakeResponse:
def __init__(self, payload=None, content: bytes = b"", status_code: int = 200):
self._payload = payload
self.content = content
self.status_code = status_code
def json(self):
return self._payload
class TestSkillsCommand(unittest.TestCase):
def tearDown(self):
skills_interaction_manager.clear()
def test_message_routes_text_reply_to_skills_interaction_before_ai(self):
chain = MessageChain()
skills_interaction_manager.create_or_replace(
user_id="10001",
channel=MessageChannel.Wechat,
source="wechat-test",
username="tester",
)
with patch.object(chain, "_record_user_message"), patch(
"app.chain.message.SkillsChain.handle_text_interaction",
return_value=True,
) as handle_text, patch.object(chain, "_handle_ai_message") as handle_ai:
chain.handle_message(
channel=MessageChannel.Wechat,
source="wechat-test",
userid="10001",
username="tester",
text="2",
)
handle_text.assert_called_once()
handle_ai.assert_not_called()
def test_callback_routes_to_skills_chain(self):
chain = MessageChain()
request = skills_interaction_manager.create_or_replace(
user_id="10001",
channel=MessageChannel.Telegram,
source="telegram-test",
username="tester",
)
with patch(
"app.chain.message.SkillsChain.handle_callback_interaction",
return_value=True,
) as handle_callback:
chain._handle_callback(
text=f"CALLBACK:skills:{request.request_id}:market",
channel=MessageChannel.Telegram,
source="telegram-test",
userid="10001",
username="tester",
)
handle_callback.assert_called_once()
def test_skillhelper_install_and_remove_market_skill(self):
helper = SkillHelper()
skill = SkillInfo(
id="demo-skill",
name="demo-skill",
description="demo",
source_type="market",
source_label="市场 · acme/demo",
repo_url="https://github.com/acme/demo",
repo_name="acme/demo",
skill_path="skills/demo-skill",
)
zip_bytes = _build_skill_zip("skills/demo-skill", "demo-skill")
with tempfile.TemporaryDirectory() as tempdir:
user_root = Path(tempdir) / "user-skills"
bundled_root = Path(tempdir) / "bundled-skills"
user_root.mkdir(parents=True, exist_ok=True)
bundled_root.mkdir(parents=True, exist_ok=True)
with patch.object(
SkillHelper, "get_user_skills_dir", return_value=user_root
), patch.object(
SkillHelper, "get_bundled_skills_dir", return_value=bundled_root
), patch.object(
helper, "_download_repo_archive", return_value=zip_bytes
):
success, message = helper.install_market_skill(skill)
self.assertTrue(success, message)
self.assertTrue((user_root / "demo-skill" / "SKILL.md").exists())
self.assertTrue(
(user_root / "demo-skill" / ".moviepilot-skill-source.json").exists()
)
local_skills = helper.list_local_skills()
self.assertEqual(len(local_skills), 1)
self.assertEqual(local_skills[0].source_type, "market")
self.assertTrue(local_skills[0].removable)
removed, remove_message = helper.remove_local_skill("demo-skill")
self.assertTrue(removed, remove_message)
self.assertFalse((user_root / "demo-skill").exists())
bundled_skill_dir = bundled_root / "builtin-skill"
bundled_skill_dir.mkdir(parents=True, exist_ok=True)
(bundled_skill_dir / "SKILL.md").write_text(
"---\nname: builtin-skill\ndescription: builtin\n---\n",
encoding="utf-8",
)
installed_builtin = user_root / "builtin-skill"
installed_builtin.mkdir(parents=True, exist_ok=True)
(installed_builtin / "SKILL.md").write_text(
"---\nname: builtin-skill\ndescription: builtin\n---\n",
encoding="utf-8",
)
removed, remove_message = helper.remove_local_skill("builtin-skill")
self.assertFalse(removed)
self.assertIn("内置技能", remove_message)
def test_skillhelper_lists_clawhub_registry_skills(self):
helper = SkillHelper()
response = _FakeResponse(
payload={
"status": "success",
"value": {
"hasMore": False,
"nextCursor": None,
"page": [
{
"ownerHandle": "openclaw",
"skill": {
"slug": "weather-forecast",
"displayName": "Weather Forecast",
"summary": "Forecast weather from ClawHub",
},
}
],
},
}
)
with patch.object(
helper,
"_discover_clawhub_runtime_env",
return_value={"convex_url": "https://wry-manatee-359.convex.cloud"},
), patch.object(helper, "_request_convex_query", return_value=response):
skills = helper._list_market_source_skills("https://clawhub.ai")
self.assertEqual(len(skills), 1)
self.assertEqual(skills[0].id, "weather-forecast")
self.assertEqual(skills[0].name, "Weather Forecast")
self.assertEqual(skills[0].source_type, "registry")
self.assertEqual(skills[0].registry_name, "ClawHub")
self.assertEqual(skills[0].source_label, "社区注册表 · ClawHub")
self.assertIn("/openclaw/weather-forecast", skills[0].path)
def test_skillhelper_filters_market_skills_by_query(self):
helper = SkillHelper()
skills = [
SkillInfo(
id="weather-forecast",
name="Weather Forecast",
description="Forecast weather from ClawHub",
source_label="社区注册表 · ClawHub",
),
SkillInfo(
id="github-tools",
name="GitHub Tools",
description="Manage pull requests",
source_label="官方仓库 · openai/skills",
),
]
filtered = helper.filter_market_skills(skills=skills, query="weather clawhub")
self.assertEqual(len(filtered), 1)
self.assertEqual(filtered[0].id, "weather-forecast")
def test_skillhelper_falls_back_to_rest_registry_listing_when_runtime_missing(self):
helper = SkillHelper()
response = _FakeResponse(
payload={
"items": [
{
"slug": "weather-forecast",
"name": "Weather Forecast",
"summary": "Forecast weather from ClawHub",
"owner": {"handle": "openclaw"},
}
]
}
)
with patch.object(
helper, "_discover_clawhub_runtime_env", return_value=None
), patch.object(helper, "_request_registry", return_value=response):
skills = helper._list_market_source_skills("https://clawhub.ai")
self.assertEqual(len(skills), 1)
self.assertEqual(skills[0].id, "weather-forecast")
self.assertEqual(skills[0].source_type, "registry")
self.assertEqual(skills[0].registry_name, "ClawHub")
self.assertEqual(skills[0].source_label, "社区注册表 · ClawHub")
self.assertIn("/openclaw/weather-forecast", skills[0].path)
def test_skillhelper_installs_registry_skill(self):
helper = SkillHelper()
skill = SkillInfo(
id="registry-demo",
name="Registry Demo",
description="registry demo",
source_type="registry",
source_label="注册表 · ClawHub",
registry_url="https://clawhub.ai",
registry_name="ClawHub",
registry_slug="registry-demo",
download_url="https://clawhub.ai/api/v1/download?slug=registry-demo",
)
zip_bytes = _build_skill_zip("package", "registry-demo")
with tempfile.TemporaryDirectory() as tempdir:
user_root = Path(tempdir) / "user-skills"
bundled_root = Path(tempdir) / "bundled-skills"
user_root.mkdir(parents=True, exist_ok=True)
bundled_root.mkdir(parents=True, exist_ok=True)
with patch.object(
SkillHelper, "get_user_skills_dir", return_value=user_root
), patch.object(
SkillHelper, "get_bundled_skills_dir", return_value=bundled_root
), patch.object(
helper, "_request_registry", return_value=_FakeResponse(content=zip_bytes)
):
success, message = helper.install_market_skill(skill)
self.assertTrue(success, message)
self.assertTrue((user_root / "registry-demo" / "SKILL.md").exists())
self.assertTrue(
(
user_root
/ "registry-demo"
/ ".moviepilot-skill-source.json"
).exists()
)
local_skills = helper.list_local_skills()
self.assertEqual(len(local_skills), 1)
self.assertEqual(local_skills[0].source_type, "registry")
self.assertEqual(local_skills[0].registry_name, "ClawHub")
self.assertEqual(local_skills[0].source_label, "社区注册表 · ClawHub")
def test_skillhelper_lists_market_sources_and_marks_custom_entries(self):
helper = SkillHelper()
with patch.object(
helper,
"get_market_sources",
return_value=[
"https://clawhub.ai",
"https://github.com/openai/skills",
"https://github.com/acme/custom-skills",
],
), patch.object(
helper,
"get_default_market_sources",
return_value=[
"https://clawhub.ai",
"https://github.com/openai/skills",
],
):
sources = helper.list_market_source_entries()
self.assertEqual(len(sources), 3)
self.assertTrue(sources[0].builtin)
self.assertTrue(sources[1].builtin)
self.assertFalse(sources[2].builtin)
self.assertTrue(sources[2].removable)
self.assertEqual(sources[2].label, "仓库来源 · acme/custom-skills")
def test_skillhelper_add_custom_market_source_updates_setting(self):
helper = SkillHelper()
with patch.object(
helper,
"get_market_sources",
return_value=["https://github.com/openai/skills"],
), patch.object(
type(skill_settings),
"update_setting",
return_value=(True, ""),
) as update_setting:
success, message = helper.add_custom_market_source("acme/custom-skills")
self.assertTrue(success)
self.assertIn("acme/custom-skills", message)
update_setting.assert_called_once_with(
key="SKILL_MARKET",
value="https://github.com/openai/skills,https://github.com/acme/custom-skills",
)
def test_skillhelper_remove_custom_market_source_blocks_builtin(self):
helper = SkillHelper()
with patch.object(
helper,
"get_default_market_sources",
return_value=["https://github.com/openai/skills"],
):
success, message = helper.remove_custom_market_source(
"https://github.com/openai/skills"
)
self.assertFalse(success)
self.assertIn("内置默认源", message)
def test_skills_chain_market_view_marks_clawhub_as_community_source(self):
chain = SkillsChain()
request = skills_interaction_manager.create_or_replace(
user_id="10001",
channel=MessageChannel.Telegram,
source="telegram-test",
username="tester",
)
request.view = "market"
with patch.object(
chain.skillhelper,
"list_market_skills",
return_value=[
SkillInfo(
id="weather-forecast",
name="Weather Forecast",
description="Forecast weather from ClawHub",
source_type="registry",
source_label="社区注册表 · ClawHub",
registry_name="ClawHub",
registry_url="https://clawhub.ai",
registry_slug="weather-forecast",
)
],
):
title, text, _buttons = chain._build_market_view(request=request)
self.assertEqual(title, "技能市场")
self.assertIn("社区注册表 · ClawHub", text)
self.assertIn("社区源,安装前请自行甄别安全性", text)
self.assertIn("ClawHub 属于社区注册表", text)
def test_skills_chain_market_view_filters_by_search_query(self):
chain = SkillsChain()
request = skills_interaction_manager.create_or_replace(
user_id="10001",
channel=MessageChannel.Telegram,
source="telegram-test",
username="tester",
)
request.view = "market"
request.market_query = "weather"
with patch.object(
chain.skillhelper,
"list_market_skills",
return_value=[
SkillInfo(
id="weather-forecast",
name="Weather Forecast",
description="Forecast weather from ClawHub",
source_type="registry",
source_label="社区注册表 · ClawHub",
registry_name="ClawHub",
registry_url="https://clawhub.ai",
registry_slug="weather-forecast",
),
SkillInfo(
id="github-tools",
name="GitHub Tools",
description="Manage pull requests",
source_type="market",
source_label="官方仓库 · openai/skills",
repo_name="openai/skills",
),
],
):
title, text, buttons = chain._build_market_view(request=request)
self.assertEqual(title, "技能市场")
self.assertIn("当前搜索weather", text)
self.assertIn("weather-forecast", text)
self.assertNotIn("github-tools", text)
self.assertTrue(buttons)
self.assertEqual(buttons[0][0]["callback_data"], f"skills:{request.request_id}:clear-search")
def test_skills_chain_root_view_uses_friendly_source_labels(self):
chain = SkillsChain()
request = skills_interaction_manager.create_or_replace(
user_id="10001",
channel=MessageChannel.Telegram,
source="telegram-test",
username="tester",
)
with patch.object(chain.skillhelper, "list_local_skills", return_value=[]), patch.object(
chain.skillhelper, "list_market_skills", return_value=[]
), patch.object(
chain.skillhelper,
"list_market_source_entries",
return_value=[
SkillMarketSource(
source="https://clawhub.ai",
label="社区注册表 · ClawHub",
builtin=True,
removable=False,
),
SkillMarketSource(
source="https://github.com/openai/skills",
label="官方仓库 · openai/skills",
builtin=True,
removable=False,
),
SkillMarketSource(
source="https://github.com/acme/custom-skills",
label="仓库来源 · acme/custom-skills",
builtin=False,
removable=True,
),
],
):
title, text, _buttons = chain._build_root_view(request=request)
self.assertEqual(title, "技能管理")
self.assertIn("社区注册表 · ClawHub", text)
self.assertIn("官方仓库 · openai/skills", text)
self.assertIn("仓库来源 · acme/custom-skills", text)
self.assertIn("3. 管理技能源", text)
def test_skills_chain_callback_enters_search_input_mode(self):
chain = SkillsChain()
request = skills_interaction_manager.create_or_replace(
user_id="10001",
channel=MessageChannel.Telegram,
source="telegram-test",
username="tester",
)
with patch.object(chain, "_render_interaction") as render:
handled = chain.handle_callback_interaction(
callback_data=f"skills:{request.request_id}:search",
channel=MessageChannel.Telegram,
source="telegram-test",
userid="10001",
username="tester",
)
self.assertTrue(handled)
self.assertEqual(request.view, "market")
self.assertEqual(request.awaiting_input, "market-search")
render.assert_called_once()
def test_skills_chain_text_search_updates_market_query(self):
chain = SkillsChain()
request = skills_interaction_manager.create_or_replace(
user_id="10001",
channel=MessageChannel.Telegram,
source="telegram-test",
username="tester",
)
request.view = "market"
with patch.object(chain, "_render_interaction") as render:
handled = chain.handle_text_interaction(
channel=MessageChannel.Telegram,
source="telegram-test",
userid="10001",
username="tester",
text="搜索 weather",
)
self.assertTrue(handled)
self.assertEqual(request.market_query, "weather")
self.assertEqual(request.market_page, 0)
self.assertIsNone(request.awaiting_input)
render.assert_called_once()
def test_skills_chain_followup_text_applies_search_when_awaiting_input(self):
chain = SkillsChain()
request = skills_interaction_manager.create_or_replace(
user_id="10001",
channel=MessageChannel.Telegram,
source="telegram-test",
username="tester",
)
request.view = "market"
request.awaiting_input = "market-search"
with patch.object(chain, "_render_interaction") as render:
handled = chain.handle_text_interaction(
channel=MessageChannel.Telegram,
source="telegram-test",
userid="10001",
username="tester",
text="calendar",
)
self.assertTrue(handled)
self.assertEqual(request.market_query, "calendar")
self.assertIsNone(request.awaiting_input)
render.assert_called_once()
def test_skills_chain_callback_enters_source_add_mode(self):
chain = SkillsChain()
request = skills_interaction_manager.create_or_replace(
user_id="10001",
channel=MessageChannel.Telegram,
source="telegram-test",
username="tester",
)
with patch.object(chain, "_render_interaction") as render:
handled = chain.handle_callback_interaction(
callback_data=f"skills:{request.request_id}:source-add",
channel=MessageChannel.Telegram,
source="telegram-test",
userid="10001",
username="tester",
)
self.assertTrue(handled)
self.assertEqual(request.view, "sources")
self.assertEqual(request.awaiting_input, "source-add")
render.assert_called_once()
def test_skills_chain_followup_text_adds_custom_market_source(self):
chain = SkillsChain()
request = skills_interaction_manager.create_or_replace(
user_id="10001",
channel=MessageChannel.Telegram,
source="telegram-test",
username="tester",
)
request.view = "sources"
request.awaiting_input = "source-add"
with patch.object(
chain.skillhelper,
"add_custom_market_source",
return_value=(True, "已添加技能源:仓库来源 · acme/custom-skills"),
) as add_source, patch.object(chain, "_render_interaction") as render, patch.object(
chain, "post_message"
) as post_message:
handled = chain.handle_text_interaction(
channel=MessageChannel.Telegram,
source="telegram-test",
userid="10001",
username="tester",
text="acme/custom-skills",
)
self.assertTrue(handled)
self.assertIsNone(request.awaiting_input)
add_source.assert_called_once_with("acme/custom-skills")
post_message.assert_called_once()
render.assert_called_once()
def test_skills_chain_text_removes_custom_market_source_by_index(self):
chain = SkillsChain()
request = skills_interaction_manager.create_or_replace(
user_id="10001",
channel=MessageChannel.Telegram,
source="telegram-test",
username="tester",
)
with patch.object(
chain,
"_remove_market_source",
return_value=(True, "已删除技能源:仓库来源 · acme/custom-skills"),
) as remove_source, patch.object(chain, "_render_interaction") as render, patch.object(
chain, "post_message"
) as post_message:
handled = chain.handle_text_interaction(
channel=MessageChannel.Telegram,
source="telegram-test",
userid="10001",
username="tester",
text="删除源 3",
)
self.assertTrue(handled)
self.assertEqual(request.view, "sources")
remove_source.assert_called_once_with(page_index=3)
post_message.assert_called_once()
render.assert_called_once()
def test_skills_chain_source_view_lists_custom_sources(self):
chain = SkillsChain()
request = skills_interaction_manager.create_or_replace(
user_id="10001",
channel=MessageChannel.Telegram,
source="telegram-test",
username="tester",
)
request.view = "sources"
with patch.object(
chain.skillhelper,
"list_market_source_entries",
return_value=[
SkillMarketSource(
source="https://clawhub.ai",
label="社区注册表 · ClawHub",
builtin=True,
removable=False,
),
SkillMarketSource(
source="https://github.com/acme/custom-skills",
label="仓库来源 · acme/custom-skills",
builtin=False,
removable=True,
),
],
):
title, text, buttons = chain._build_sources_view(request=request)
self.assertEqual(title, "技能源管理")
self.assertIn("社区注册表 · ClawHub", text)
self.assertIn("仓库来源 · acme/custom-skills", text)
self.assertIn("删除自定义源", text)
self.assertTrue(buttons)
self.assertEqual(
buttons[1][0]["callback_data"],
f"skills:{request.request_id}:source-remove:2",
)
def test_skills_chain_updates_buttons_via_edit_message(self):
chain = SkillsChain()
buttons = [[{"text": "安装 1", "callback_data": "skills:req:install:1"}]]
with patch.object(chain, "edit_message", return_value=True) as edit_message, patch.object(
chain, "post_message"
) as post_message:
chain._update_or_post_message(
channel=MessageChannel.Telegram,
source="telegram-test",
userid="10001",
username="tester",
title="技能市场",
text="请选择技能",
buttons=buttons,
original_message_id=123,
original_chat_id="456",
)
edit_message.assert_called_once_with(
channel=MessageChannel.Telegram,
source="telegram-test",
message_id=123,
chat_id="456",
title="技能市场",
text="请选择技能",
buttons=buttons,
)
post_message.assert_not_called()
if __name__ == "__main__":
unittest.main()


@@ -0,0 +1,291 @@
import asyncio
import sys
import unittest
from types import ModuleType
from unittest.mock import AsyncMock, patch
def _stub_module(name: str, **attrs):
module = sys.modules.get(name)
if module is None:
module = ModuleType(name)
sys.modules[name] = module
for key, value in attrs.items():
setattr(module, key, value)
return module
class _Dummy:
def __init__(self, *args, **kwargs):
pass
def __getattr__(self, _name):
return lambda *args, **kwargs: None
class _DummyError(Exception):
def __init__(self, message="", duration_ms=None):
super().__init__(message)
self.duration_ms = duration_ms
for _module_name in ("pillow_avif", "aiofiles", "psutil"):
_stub_module(_module_name)
_stub_module("app.helper.sites", SitesHelper=_Dummy)
_stub_module("app.chain.mediaserver", MediaServerChain=_Dummy)
_stub_module("app.chain.search", SearchChain=_Dummy)
_stub_module("app.chain.system", SystemChain=_Dummy)
_stub_module("app.core.event", eventmanager=_Dummy())
_stub_module("app.core.metainfo", MetaInfo=_Dummy)
_stub_module("app.core.module", ModuleManager=_Dummy)
_stub_module(
"app.core.security",
verify_apitoken=_Dummy,
verify_resource_token=_Dummy,
verify_token=_Dummy,
)
_stub_module("app.db.models", User=_Dummy)
_stub_module("app.db.systemconfig_oper", SystemConfigOper=_Dummy)
_stub_module(
"app.db.user_oper",
get_current_active_superuser=_Dummy,
get_current_active_superuser_async=_Dummy,
get_current_active_user_async=_Dummy,
)
_stub_module(
"app.helper.llm",
LLMHelper=_Dummy,
LLMTestError=_DummyError,
LLMTestTimeout=_DummyError,
)
_stub_module("app.helper.mediaserver", MediaServerHelper=_Dummy)
_stub_module("app.helper.message", MessageHelper=_Dummy)
_stub_module("app.helper.progress", ProgressHelper=_Dummy)
_stub_module("app.helper.rule", RuleHelper=_Dummy)
_stub_module("app.helper.subscribe", SubscribeHelper=_Dummy)
_stub_module("app.helper.system", SystemHelper=_Dummy)
_stub_module("app.helper.image", ImageHelper=_Dummy)
_stub_module("app.scheduler", Scheduler=_Dummy)
_stub_module(
"app.log",
logger=_Dummy(),
log_settings=_Dummy(),
LogConfigModel=type("LogConfigModel", (), {}),
)
_stub_module("app.utils.crypto", HashUtils=_Dummy)
_stub_module("app.utils.http", RequestUtils=_Dummy, AsyncRequestUtils=_Dummy)
_stub_module("version", APP_VERSION="test")
from app.api.endpoints import system as system_endpoint
class LlmTestEndpointTest(unittest.TestCase):
    def test_llm_test_requires_ai_agent_enabled(self):
        with patch.object(system_endpoint.settings, "AI_AGENT_ENABLE", False):
            resp = asyncio.run(system_endpoint.llm_test(_="token"))
        self.assertFalse(resp.success)
        # "请先启用智能助手" = "Please enable the AI assistant first"
        self.assertEqual(resp.message, "请先启用智能助手")

    def test_llm_test_requires_api_key(self):
        with patch.object(system_endpoint.settings, "AI_AGENT_ENABLE", True), patch.object(
            system_endpoint.settings, "LLM_API_KEY", None
        ), patch.object(system_endpoint.settings, "LLM_MODEL", "deepseek-chat"):
            resp = asyncio.run(system_endpoint.llm_test(_="token"))
        self.assertFalse(resp.success)
        # "请先配置 LLM API Key" = "Please configure the LLM API Key first"
        self.assertEqual(resp.message, "请先配置 LLM API Key")
        self.assertEqual(resp.data["model"], "deepseek-chat")

    def test_llm_test_requires_model(self):
        with patch.object(system_endpoint.settings, "AI_AGENT_ENABLE", True), patch.object(
            system_endpoint.settings, "LLM_API_KEY", "sk-test"
        ), patch.object(system_endpoint.settings, "LLM_MODEL", ""):
            resp = asyncio.run(system_endpoint.llm_test(_="token"))
        self.assertFalse(resp.success)
        # "请先配置 LLM 模型" = "Please configure the LLM model first"
        self.assertEqual(resp.message, "请先配置 LLM 模型")

    def test_llm_test_returns_successful_reply_preview(self):
        llm_test_mock = AsyncMock(
            return_value={
                "provider": "deepseek",
                "model": "deepseek-chat",
                "duration_ms": 321,
                "reply_preview": "OK",
            }
        )
        with patch.object(system_endpoint.settings, "AI_AGENT_ENABLE", True), patch.object(
            system_endpoint.settings, "LLM_PROVIDER", "deepseek"
        ), patch.object(system_endpoint.settings, "LLM_MODEL", "deepseek-chat"), patch.object(
            system_endpoint.settings, "LLM_THINKING_LEVEL", "max"
        ), patch.object(
            system_endpoint.settings, "LLM_API_KEY", "sk-test"
        ), patch.object(
            system_endpoint.settings, "LLM_BASE_URL", "https://api.deepseek.com"
        ), patch.object(
            system_endpoint.LLMHelper,
            "test_current_settings",
            llm_test_mock,
            create=True,
        ):
            resp = asyncio.run(system_endpoint.llm_test(_="token"))
        llm_test_mock.assert_awaited_once_with(
            provider="deepseek",
            model="deepseek-chat",
            thinking_level="max",
            disable_thinking=None,
            reasoning_effort=None,
            api_key="sk-test",
            base_url="https://api.deepseek.com",
        )
        self.assertTrue(resp.success)
        self.assertEqual(resp.data["provider"], "deepseek")
        self.assertEqual(resp.data["model"], "deepseek-chat")
        self.assertEqual(resp.data["duration_ms"], 321)
        self.assertEqual(resp.data["reply_preview"], "OK")

    def test_llm_test_prefers_request_payload_over_saved_settings(self):
        llm_test_mock = AsyncMock(
            return_value={
                "provider": "openai",
                "model": "gpt-4.1-mini",
                "duration_ms": 123,
                "reply_preview": "OK",
            }
        )
        payload = system_endpoint.LlmTestRequest(
            enabled=True,
            provider="openai",
            model="gpt-4.1-mini",
            thinking_level="high",
            api_key="sk-live",
            base_url="https://example.com/v1",
        )
        with patch.object(system_endpoint.settings, "AI_AGENT_ENABLE", False), patch.object(
            system_endpoint.settings, "LLM_PROVIDER", "deepseek"
        ), patch.object(system_endpoint.settings, "LLM_MODEL", "deepseek-chat"), patch.object(
            system_endpoint.settings, "LLM_API_KEY", "sk-saved"
        ), patch.object(
            system_endpoint.settings, "LLM_BASE_URL", "https://api.deepseek.com"
        ), patch.object(
            system_endpoint.LLMHelper,
            "test_current_settings",
            llm_test_mock,
            create=True,
        ):
            resp = asyncio.run(system_endpoint.llm_test(payload=payload, _="token"))
        llm_test_mock.assert_awaited_once_with(
            provider="openai",
            model="gpt-4.1-mini",
            thinking_level="high",
            disable_thinking=None,
            reasoning_effort=None,
            api_key="sk-live",
            base_url="https://example.com/v1",
        )
        self.assertTrue(resp.success)
        self.assertEqual(resp.data["provider"], "openai")
        self.assertEqual(resp.data["model"], "gpt-4.1-mini")

    def test_llm_test_supports_legacy_thinking_payload(self):
        llm_test_mock = AsyncMock(
            return_value={
                "provider": "deepseek",
                "model": "deepseek-v4-pro",
                "duration_ms": 123,
                "reply_preview": "OK",
            }
        )
        payload = system_endpoint.LlmTestRequest(
            enabled=True,
            provider="deepseek",
            model="deepseek-v4-pro",
            disable_thinking=False,
            reasoning_effort="xhigh",
            api_key="sk-live",
            base_url="https://api.deepseek.com",
        )
        with patch.object(system_endpoint.settings, "AI_AGENT_ENABLE", False), patch.object(
            system_endpoint.LLMHelper,
            "test_current_settings",
            llm_test_mock,
            create=True,
        ):
            resp = asyncio.run(system_endpoint.llm_test(payload=payload, _="token"))
        llm_test_mock.assert_awaited_once_with(
            provider="deepseek",
            model="deepseek-v4-pro",
            thinking_level=None,
            disable_thinking=False,
            reasoning_effort="xhigh",
            api_key="sk-live",
            base_url="https://api.deepseek.com",
        )
        self.assertTrue(resp.success)

    def test_llm_test_rejects_empty_reply(self):
        with patch.object(system_endpoint.settings, "AI_AGENT_ENABLE", True), patch.object(
            system_endpoint.settings, "LLM_PROVIDER", "deepseek"
        ), patch.object(system_endpoint.settings, "LLM_MODEL", "deepseek-chat"), patch.object(
            system_endpoint.settings, "LLM_API_KEY", "sk-test"
        ), patch.object(
            system_endpoint.LLMHelper,
            "test_current_settings",
            AsyncMock(return_value={"provider": "deepseek", "model": "deepseek-chat", "duration_ms": 12}),
            create=True,
        ):
            resp = asyncio.run(system_endpoint.llm_test(_="token"))
        self.assertFalse(resp.success)
        # "模型响应为空" = "The model response is empty"
        self.assertEqual(resp.message, "模型响应为空")
        self.assertEqual(resp.data["duration_ms"], 12)

    def test_llm_test_maps_timeout_error(self):
        with patch.object(system_endpoint.settings, "AI_AGENT_ENABLE", True), patch.object(
            system_endpoint.settings, "LLM_PROVIDER", "deepseek"
        ), patch.object(system_endpoint.settings, "LLM_MODEL", "deepseek-chat"), patch.object(
            system_endpoint.settings, "LLM_API_KEY", "sk-test"
        ), patch.object(
            system_endpoint.LLMHelper,
            "test_current_settings",
            AsyncMock(side_effect=TimeoutError("request timed out")),
            create=True,
        ):
            resp = asyncio.run(system_endpoint.llm_test(_="token"))
        self.assertFalse(resp.success)
        # "LLM 调用超时" = "The LLM call timed out"
        self.assertEqual(resp.message, "LLM 调用超时")

    def test_llm_test_sanitizes_error_message(self):
        raw_error = (
            "request failed api_key=sk-secret "
            "Authorization: Bearer sk-secret "
            "base error sk-secret"
        )
        with patch.object(system_endpoint.settings, "AI_AGENT_ENABLE", True), patch.object(
            system_endpoint.settings, "LLM_API_KEY", "sk-secret"
        ), patch.object(system_endpoint.settings, "LLM_PROVIDER", "deepseek"), patch.object(
            system_endpoint.settings, "LLM_MODEL", "deepseek-chat"
        ), patch.object(
            system_endpoint.LLMHelper,
            "test_current_settings",
            AsyncMock(side_effect=RuntimeError(raw_error)),
            create=True,
        ):
            resp = asyncio.run(system_endpoint.llm_test(_="token"))
        self.assertFalse(resp.success)
        self.assertNotIn("sk-secret", resp.message)
        self.assertNotIn("Authorization: Bearer", resp.message)
        self.assertIn("***", resp.message)


if __name__ == "__main__":
    unittest.main()

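Taken together, these tests fix the llm_test contract: the endpoint refuses to run without AI_AGENT_ENABLE, an API key, and a model; an explicit request payload overrides saved settings; and an empty reply, a timeout, and a raised provider error each map to a distinct failure response. The sanitization test only constrains the output indirectly, so here is a minimal sketch of a redaction helper that would satisfy it (the name _sanitize_llm_error and the exact rules are assumptions, not the project's real helper):

import re

def _sanitize_llm_error(message: str, api_key: str) -> str:
    # Remove every literal occurrence of the configured key first.
    if api_key:
        message = message.replace(api_key, "***")
    # Then collapse any Authorization header remnant, "Bearer ..." included.
    message = re.sub(r"Authorization:\s*Bearer\s*\S*", "Authorization: ***", message)
    return message

# All three assertions would hold on the test's raw_error: "sk-secret" is gone,
# "Authorization: Bearer" no longer appears, and "***" marks each redaction.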
View File

@@ -19,6 +19,15 @@ class _Dummy:
    def __init__(self, *args, **kwargs):
        pass

    def __getattr__(self, _name):
        return lambda *args, **kwargs: None


class _DummyError(Exception):
    def __init__(self, message="", duration_ms=None):
        super().__init__(message)
        self.duration_ms = duration_ms


for _module_name in ("pillow_avif", "aiofiles", "psutil"):
    _stub_module(_module_name)
@@ -44,7 +53,12 @@ _stub_module(
    get_current_active_superuser_async=_Dummy,
    get_current_active_user_async=_Dummy,
)
_stub_module("app.helper.llm", LLMHelper=_Dummy)
_stub_module(
    "app.helper.llm",
    LLMHelper=_Dummy,
    LLMTestError=_DummyError,
    LLMTestTimeout=_DummyError,
)
_stub_module("app.helper.mediaserver", MediaServerHelper=_Dummy)
_stub_module("app.helper.message", MessageHelper=_Dummy)
_stub_module("app.helper.progress", ProgressHelper=_Dummy)
@@ -53,6 +67,12 @@ _stub_module("app.helper.subscribe", SubscribeHelper=_Dummy)
_stub_module("app.helper.system", SystemHelper=_Dummy)
_stub_module("app.helper.image", ImageHelper=_Dummy)
_stub_module("app.scheduler", Scheduler=_Dummy)
_stub_module(
    "app.log",
    logger=_Dummy(),
    log_settings=_Dummy(),
    LogConfigModel=type("LogConfigModel", (), {}),
)
_stub_module("app.utils.crypto", HashUtils=_Dummy)
_stub_module("app.utils.http", RequestUtils=_Dummy, AsyncRequestUtils=_Dummy)
_stub_module("version", APP_VERSION="test")

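The hunks above extend the shared test bootstrap so that app.api.endpoints.system can be imported without pulling in its full dependency graph: _stub_module plants a lightweight placeholder in sys.modules before the first real import, and Python's import machinery then returns the stub instead of loading the package. A self-contained demonstration of the pattern (heavy_sdk is a hypothetical package name used only for illustration):

import sys
from types import ModuleType

fake = ModuleType("heavy_sdk")          # hypothetical dependency
fake.Client = lambda *a, **kw: None     # only the attribute the code under test touches
sys.modules["heavy_sdk"] = fake         # must happen before the first import

import heavy_sdk                        # resolves to the stub; nothing is installed

assert heavy_sdk.Client() is None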
View File

@@ -0,0 +1,70 @@
import sys
import unittest
from types import ModuleType
from unittest.mock import patch

sys.modules.setdefault("qbittorrentapi", ModuleType("qbittorrentapi"))
setattr(sys.modules["qbittorrentapi"], "TorrentFilesList", list)
sys.modules.setdefault("transmission_rpc", ModuleType("transmission_rpc"))
setattr(sys.modules["transmission_rpc"], "File", object)
sys.modules.setdefault("psutil", ModuleType("psutil"))

from app.chain.message import MessageChain
from app.schemas import Notification
from app.utils.identity import (
    SYSTEM_INTERNAL_USER_ID,
    is_internal_user_id,
    normalize_internal_user_id,
)


class TestSystemNotificationDispatch(unittest.TestCase):
    def test_internal_userid_identity_helpers(self):
        self.assertTrue(is_internal_user_id(SYSTEM_INTERNAL_USER_ID))
        self.assertTrue(is_internal_user_id(" System "))
        self.assertIsNone(normalize_internal_user_id(SYSTEM_INTERNAL_USER_ID))
        self.assertEqual(normalize_internal_user_id("10001"), "10001")

    def test_post_message_normalizes_internal_userid_before_queueing(self):
        chain = MessageChain()
        # "后台报告" = "Background report", "任务完成" = "Task finished"
        message = Notification(
            userid=SYSTEM_INTERNAL_USER_ID,
            username="admin",
            title="后台报告",
            text="任务完成",
        )
        with patch("app.chain.MessageTemplateHelper.render", return_value=message), patch.object(
            chain.messagehelper, "put"
        ), patch.object(chain.messageoper, "add"), patch.object(
            chain.eventmanager, "send_event"
        ) as send_event, patch.object(
            chain.messagequeue, "send_message"
        ) as send_message:
            chain.post_message(message)
        event_payload = send_event.call_args.kwargs["data"]
        queued_message = send_message.call_args.kwargs["message"]
        self.assertIsNone(event_payload["userid"])
        self.assertIsNone(queued_message.userid)
        self.assertFalse(send_message.call_args.kwargs["immediately"])

    def test_send_direct_message_normalizes_internal_userid(self):
        chain = MessageChain()
        message = Notification(
            userid=SYSTEM_INTERNAL_USER_ID,
            username="admin",
            title="后台报告",
            text="任务完成",
        )
        with patch.object(chain, "run_module") as run_module:
            chain.send_direct_message(message)
        sent_message = run_module.call_args.kwargs["message"]
        self.assertIsNone(sent_message.userid)


if __name__ == "__main__":
    unittest.main()

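These tests nail down the identity helpers' behavior: the membership check ignores case and surrounding whitespace, and normalization collapses the internal sentinel to None (so system-originated notifications are never routed to an external channel) while passing ordinary user IDs through untouched. A sketch consistent with the assertions, assuming the sentinel is a plain string constant (its exact spelling is not visible here):

SYSTEM_INTERNAL_USER_ID = "SYSTEM"  # assumed value; only the comparison rules are tested

def is_internal_user_id(userid) -> bool:
    # " System " must count as internal, so compare case-insensitively after stripping.
    if userid is None:
        return False
    return str(userid).strip().upper() == SYSTEM_INTERNAL_USER_ID.upper()

def normalize_internal_user_id(userid):
    # Internal IDs collapse to None before dispatch; real IDs pass through.
    return None if is_internal_user_id(userid) else userid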
View File

@@ -1,2 +1,2 @@
APP_VERSION = 'v2.10.2'
FRONTEND_VERSION = 'v2.10.2'
APP_VERSION = 'v2.10.5'
FRONTEND_VERSION = 'v2.10.5'