Compare commits

...

66 Commits

Author SHA1 Message Date
jxxghp
9acbcf4922 v2.1.0 2024-11-25 08:05:07 +08:00
jxxghp
8dc4290695 fix scrape bug 2024-11-25 07:58:17 +08:00
jxxghp
5c95945691 Update README.md 2024-11-24 18:16:37 +08:00
jxxghp
11115d50fb fix dockerfile 2024-11-24 18:14:09 +08:00
jxxghp
7f83d56a7e fix alipan 2024-11-24 17:55:08 +08:00
jxxghp
28805e9e17 fix alipan 2024-11-24 17:45:12 +08:00
jxxghp
88a098abc1 fix log 2024-11-24 17:35:04 +08:00
jxxghp
a3cc9830de fix scraping upload 2024-11-24 17:25:42 +08:00
jxxghp
43623efa99 fix log 2024-11-24 17:19:24 +08:00
jxxghp
ff73b2cb5d fix #3203 2024-11-24 17:11:19 +08:00
jxxghp
6cab14366c Merge pull request #3228 from YemaPT/fix-yemapt-taglist-none 2024-11-24 16:24:38 +08:00
yemapt
576d215d8c fix(yemapt): judge tag list none 2024-11-24 16:22:54 +08:00
jxxghp
a2c10c86bf Merge pull request #3226 from YemaPT/feature-yemapt-optimize 2024-11-24 14:08:04 +08:00
yemapt
21bede3f00 feat(yemapt): update search api and enrich torrent content 2024-11-24 13:45:31 +08:00
jxxghp
0a39322281 Merge pull request #3224 from wikrin/v2 2024-11-24 10:32:47 +08:00
Attente
be323d3da1 fix: reduce input parameters to broaden applicability 2024-11-24 10:22:29 +08:00
jxxghp
fa8860bf62 Merge pull request #3223 from wikrin/v2
fix: incorrect input parameter
2024-11-24 08:56:58 +08:00
Attente
a700958edb fix: incorrect input parameter 2024-11-24 08:54:59 +08:00
jxxghp
9349973d16 Merge pull request #3221 from wikrin/v2 2024-11-24 07:34:42 +08:00
Attente
c0d3637d12 refactor: change library type and category folder parameters to optional 2024-11-24 00:04:08 +08:00
jxxghp
79473ca229 Merge pull request #3196 from wikrin/fix 2024-11-23 23:01:09 +08:00
Attente
fccbe39547 Change the target_directory retrieval logic 2024-11-23 22:41:55 +08:00
Attente
85324acacc Add storage="local" argument to get_dir() in the download flow 2024-11-23 22:41:55 +08:00
Attente
9dec4d704b Remove the fileitem parameter from get_dir
- It duplicates `src_path & storage`; pass those two directly when needed
2024-11-23 22:41:55 +08:00
jxxghp
72732277a1 fix alipan 2024-11-23 21:54:03 +08:00
jxxghp
8d737f9e37 fix alipan && rclone get_folder 2024-11-23 21:43:53 +08:00
jxxghp
96b3746caa fix alist delete 2024-11-23 21:29:08 +08:00
jxxghp
c690ea3c39 fix #3214
fix #3199
2024-11-23 21:26:22 +08:00
jxxghp
3282fb88e0 Merge pull request #3219 from mackerel-12138/s0_fix 2024-11-23 20:25:08 +08:00
zhanglijun
b9c2b9a044 Rename format: support renaming S0 as Specials or SPs 2024-11-23 20:22:37 +08:00
zhanglijun
24b58dc002 Fix S0 scraping issue
Fix incorrect detection of the series root directory in some cases
2024-11-23 20:13:01 +08:00
jxxghp
42c56497c6 Merge pull request #3218 from DDS-Derek/issue_rfc 2024-11-23 12:34:52 +08:00
jxxghp
c7512d1580 Merge pull request #3217 from DDS-Derek/fix_tmp 2024-11-23 12:34:39 +08:00
jxxghp
7d25bf7b48 Merge pull request #3215 from mackerel-12138/v2 2024-11-23 12:34:04 +08:00
DDSRem
99daa3a95e chore(issue): add rfc template 2024-11-23 12:31:28 +08:00
jxxghp
0a923bced9 fix storage 2024-11-23 12:29:34 +08:00
DDSRem
06e3b0def2 fix(update): useless tmp directory when not updated 2024-11-23 12:25:46 +08:00
jxxghp
0feecc3eca fix #3204 2024-11-23 11:48:23 +08:00
jxxghp
0afbc58263 fix #3191 prefer the same disk during auto-organize 2024-11-23 11:31:56 +08:00
jxxghp
7c7561029a fix #3178 support selecting first- and second-level categories during manual organize 2024-11-23 11:19:25 +08:00
zhanglijun
65683999e1 change comment 2024-11-23 11:00:37 +08:00
zhanglijun
f72e26015f delete unused code 2024-11-23 10:58:32 +08:00
zhanglijun
b4e5c50655 Fix S0 year being None during rename
Add rename config: episode date
2024-11-23 10:55:21 +08:00
jxxghp
f395dc68c3 fix #3209 add scraping lock 2024-11-23 10:48:54 +08:00
jxxghp
27cf5bb7e6 feat: send statistics message when data is refreshed via remote interaction 2024-11-23 10:36:48 +08:00
jxxghp
9b573535cd Merge pull request #3201 from InfinityPacer/feature/event 2024-11-22 16:25:52 +08:00
jxxghp
cb32305b86 Merge pull request #3200 from cddjr/fix_subscribe_search_filter 2024-11-22 14:04:08 +08:00
景大侠
f7164450d0 fix: move subscription rule filtering earlier to avoid skipping due to imdbid match 2024-11-22 13:47:18 +08:00
InfinityPacer
344862dbd4 feat(event): support smart rename event 2024-11-22 13:41:14 +08:00
InfinityPacer
f1d0e9d50a Revert "fix #3154 avoid concurrent handling of identical events"
This reverts commit 79c637e003.
2024-11-22 12:41:14 +08:00
jxxghp
9ba9e8f41c v2.0.9 2024-11-22 08:11:07 +08:00
jxxghp
78fc5b7017 Merge pull request #3193 from wikrin/fix_any_files 2024-11-22 08:10:12 +08:00
Attente
fe07830b71 fix: media files mistakenly deleted in some cases 2024-11-22 07:45:01 +08:00
jxxghp
350f1faf2a Merge pull request #3189 from InfinityPacer/feature/module 2024-11-21 20:16:06 +08:00
InfinityPacer
103cfe0b47 fix(config): ensure accurate handling of env config updates 2024-11-21 20:08:18 +08:00
jxxghp
0953c1be16 Merge pull request #3187 from InfinityPacer/feature/scheduler 2024-11-21 17:43:29 +08:00
InfinityPacer
c299bf6f7c fix(auth): adjust auth to occur before module init 2024-11-21 17:37:48 +08:00
InfinityPacer
c0eb9d824c Revert "fix(auth): initialize plugin service only during retry auth"
This reverts commit 9f4cf530f8.
2024-11-21 16:41:56 +08:00
jxxghp
ebffdebdb2 refactor: optimize caching strategy 2024-11-21 15:52:08 +08:00
jxxghp
acd9e38477 Merge pull request #3186 from InfinityPacer/feature/scheduler 2024-11-21 14:54:01 +08:00
InfinityPacer
9f4cf530f8 fix(auth): initialize plugin service only during retry auth 2024-11-21 14:49:42 +08:00
jxxghp
84897aa592 fix #3162 2024-11-21 13:50:49 +08:00
jxxghp
23c5982f5a Merge pull request #3185 from InfinityPacer/feature/module 2024-11-21 12:42:05 +08:00
InfinityPacer
1849930b72 feat(qb): add support for ignoring category check via kwargs 2024-11-21 12:35:15 +08:00
jxxghp
4f1d3a7572 fix #3180 2024-11-21 12:13:44 +08:00
jxxghp
824c3ac5d6 fix #3176 2024-11-21 10:25:46 +08:00
46 changed files with 1042 additions and 540 deletions

.github/ISSUE_TEMPLATE/rfc.yml (vendored) — new file, 45 additions

@@ -0,0 +1,45 @@
name: 功能提案
description: Request for Comments
title: "[RFC]"
labels: ["RFC"]
body:
  - type: markdown
    attributes:
      value: |
        一份提案(RFC)定位为 **「在某功能/重构的具体开发前,用于开发者间 review 技术设计/方案的文档」**
        目的是让协作的开发者间清晰的知道「要做什么」和「具体会怎么做」,以及所有的开发者都能公开透明的参与讨论;
        以便评估和讨论产生的影响 (遗漏的考虑、向后兼容性、与现有功能的冲突)
        因此提案侧重在对解决问题的 **方案、设计、步骤** 的描述上。
        如果仅希望讨论是否添加或改进某功能本身,请使用 -> [Issue: 功能改进](https://github.com/jxxghp/MoviePilot/issues/new?assignees=&labels=feature+request&projects=&template=feature_request.yml&title=%5BFeature+Request%5D%3A+)
  - type: textarea
    id: background
    attributes:
      label: 背景 or 问题
      description: 简单描述遇到的什么问题或需要改动什么。可以引用其他 issue、讨论、文档等。
    validations:
      required: true
  - type: textarea
    id: goal
    attributes:
      label: "目标 & 方案简述"
      description: 简单描述此提案实现后,**预期的目标效果**,以及简单大致描述会采取的方案/步骤,可能会/不会产生什么影响。
    validations:
      required: true
  - type: textarea
    id: design
    attributes:
      label: "方案设计 & 实现步骤"
      description: |
        详细描述你设计的具体方案,可以考虑拆分列表或要点,一步步描述具体打算如何实现的步骤和相关细节。
        这部份不需要一次性写完整,即使在创建完此提案 issue 后,依旧可以再次编辑修改。
    validations:
      required: false
  - type: textarea
    id: alternative
    attributes:
      label: "替代方案 & 对比"
      description: |
        [可选] 为来实现目标效果,还考虑过什么其他方案,有什么对比?
    validations:
      required: false


@@ -10,10 +10,7 @@ ENV LANG="C.UTF-8" \
     UMASK=000 \
     PORT=3001 \
     NGINX_PORT=3000 \
-    PROXY_HOST="" \
-    MOVIEPILOT_AUTO_UPDATE=false \
-    AUTH_SITE="iyuu" \
-    IYUU_SIGN=""
+    MOVIEPILOT_AUTO_UPDATE=release
 WORKDIR "/app"
 RUN apt-get update -y \
     && apt-get upgrade -y \


@@ -6,6 +6,7 @@
 ![GitHub repo size](https://img.shields.io/github/repo-size/jxxghp/MoviePilot?style=for-the-badge)
 ![GitHub issues](https://img.shields.io/github/issues/jxxghp/MoviePilot?style=for-the-badge)
 ![Docker Pulls](https://img.shields.io/docker/pulls/jxxghp/moviepilot?style=for-the-badge)
+![Docker Pulls V2](https://img.shields.io/docker/pulls/jxxghp/moviepilot-v2?style=for-the-badge)
 ![Platform](https://img.shields.io/badge/platform-Windows%20%7C%20Linux%20%7C%20Synology-blue?style=for-the-badge)


@@ -9,7 +9,9 @@ from app.core.context import MediaInfo, Context, TorrentInfo
 from app.core.metainfo import MetaInfo
 from app.core.security import verify_token
 from app.db.models.user import User
+from app.db.systemconfig_oper import SystemConfigOper
 from app.db.user_oper import get_current_active_user
+from app.schemas.types import SystemConfigKey

 router = APIRouter()

@@ -111,6 +113,17 @@ def stop(hashString: str,
     return schemas.Response(success=True if ret else False)


+@router.get("/clients", summary="查询可用下载器", response_model=List[dict])
+def clients(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
+    """
+    查询可用下载器
+    """
+    downloaders: List[dict] = SystemConfigOper().get(SystemConfigKey.Downloaders)
+    if downloaders:
+        return [{"name": d.get("name"), "type": d.get("type")} for d in downloaders if d.get("enabled")]
+    return []
+
+
 @router.delete("/{hashString}", summary="删除下载任务", response_model=schemas.Response)
 def delete(hashString: str,
            _: schemas.TokenPayload = Depends(verify_token)) -> Any:
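The new `/clients` endpoint reduces the stored downloader configurations to `{name, type}` pairs for enabled entries only. A minimal standalone sketch of that filtering step (function name hypothetical; FastAPI and `SystemConfigOper` omitted):

```python
from typing import Dict, List, Optional


def enabled_clients(configs: Optional[List[Dict]]) -> List[Dict]:
    """Reduce stored client configs to name/type pairs for enabled entries."""
    if not configs:
        # SystemConfigOper().get(...) may return None when nothing is configured
        return []
    return [
        {"name": c.get("name"), "type": c.get("type")}
        for c in configs
        if c.get("enabled")
    ]
```

The same shape is reused for media servers further down; only the config key differs.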


@@ -12,8 +12,10 @@ from app.core.security import verify_token
 from app.db import get_db
 from app.db.mediaserver_oper import MediaServerOper
 from app.db.models import MediaServerItem
+from app.db.systemconfig_oper import SystemConfigOper
 from app.helper.mediaserver import MediaServerHelper
 from app.schemas import MediaType, NotExistMediaInfo
+from app.schemas.types import SystemConfigKey

 router = APIRouter()

@@ -143,3 +145,14 @@ def library(server: str, hidden: bool = False,
     获取媒体服务器媒体库列表
     """
     return MediaServerChain().librarys(server=server, username=userinfo.username, hidden=hidden) or []
+
+
+@router.get("/clients", summary="查询可用媒体服务器", response_model=List[dict])
+def clients(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
+    """
+    查询可用媒体服务器
+    """
+    mediaservers: List[dict] = SystemConfigOper().get(SystemConfigKey.MediaServers)
+    if mediaservers:
+        return [{"name": d.get("name"), "type": d.get("type")} for d in mediaservers if d.get("enabled")]
+    return []


@@ -8,6 +8,7 @@ from app import schemas
 from app.chain.site import SiteChain
 from app.chain.torrents import TorrentsChain
 from app.core.event import EventManager
+from app.core.plugin import PluginManager
 from app.core.security import verify_token
 from app.db import get_db
 from app.db.models import User

@@ -351,6 +352,8 @@ def auth_site(
         return schemas.Response(success=False, message="请输入认证站点和认证参数")
     status, msg = SitesHelper().check_user(auth_info.site, auth_info.params)
     SystemConfigOper().set(SystemConfigKey.UserSiteAuthParams, auth_info.dict())
+    PluginManager().init_config()
+    Scheduler().init_plugin_jobs()
     return schemas.Response(success=status, message=msg)


@@ -35,6 +35,8 @@ class ManualTransferItem(BaseModel):
     episode_offset: Optional[str] = None,
     min_filesize: Optional[int] = 0,
     scrape: bool = False,
+    library_type_folder: bool = False,
+    library_category_folder: bool = False,
     from_history: bool = False

@@ -148,6 +150,8 @@ def manual_transfer(transer_item: ManualTransferItem,
         epformat=epformat,
         min_filesize=transer_item.min_filesize,
         scrape=transer_item.scrape,
+        library_type_folder=transer_item.library_type_folder,
+        library_category_folder=transer_item.library_category_folder,
         force=force
     )
     # 失败


@@ -385,6 +385,7 @@ class ChainBase(metaclass=ABCMeta):
                  target_directory: TransferDirectoryConf = None,
                  target_storage: str = None, target_path: Path = None,
                  transfer_type: str = None, scrape: bool = None,
+                 library_type_folder: bool = None, library_category_folder: bool = None,
                  episodes_info: List[TmdbEpisode] = None) -> Optional[TransferInfo]:
         """
         文件转移
@@ -396,6 +397,8 @@
         :param target_path: 目标路径
         :param transfer_type: 转移模式
         :param scrape: 是否刮削元数据
+        :param library_type_folder: 是否按类型创建目录
+        :param library_category_folder: 是否按类别创建目录
         :param episodes_info: 当前季的全部集信息
         :return: {path, target_path, message}
         """
@@ -404,6 +407,8 @@
                              target_directory=target_directory,
                              target_path=target_path, target_storage=target_storage,
                              transfer_type=transfer_type, scrape=scrape,
+                             library_type_folder=library_type_folder,
+                             library_category_folder=library_category_folder,
                              episodes_info=episodes_info)

     def transfer_completed(self, hashs: str, downloader: str = None) -> None:


@@ -54,6 +54,11 @@ class CommandChain(ChainBase, metaclass=Singleton):
                 "description": "更新站点Cookie",
                 "data": {}
             },
+            "/site_statistic": {
+                "func": SiteChain().remote_refresh_userdatas,
+                "description": "站点数据统计",
+                "data": {}
+            },
             "/site_enable": {
                 "func": SiteChain().remote_enable,
                 "description": "启用站点",
@@ -402,7 +407,7 @@
                               channel=event_channel, source=event_source, userid=event_user)

     @eventmanager.register(EventType.ModuleReload)
-    def module_reload_event(self, event: ManagerEvent) -> None:
+    def module_reload_event(self, _: ManagerEvent) -> None:
         """
         注册模块重载事件
         """


@@ -256,7 +256,7 @@ class DownloadChain(ChainBase):
             download_dir = Path(save_path)
         else:
             # 根据媒体信息查询下载目录配置
-            dir_info = self.directoryhelper.get_dir(_media)
+            dir_info = self.directoryhelper.get_dir(_media, storage="local")
             # 拼装子目录
             if dir_info:
                 # 一级目录


@@ -11,12 +11,15 @@ from app.core.event import eventmanager, Event
 from app.core.meta import MetaBase
 from app.core.metainfo import MetaInfo, MetaInfoPath
 from app.log import logger
+from app.schemas import FileItem
 from app.schemas.types import EventType, MediaType, ChainEventType
 from app.utils.http import RequestUtils
 from app.utils.singleton import Singleton
 from app.utils.string import StringUtils

 recognize_lock = Lock()
+scraping_lock = Lock()
+scraping_files = []


 class MediaChain(ChainBase, metaclass=Singleton):

@@ -301,12 +304,23 @@
         if not event:
             return
         event_data = event.event_data or {}
-        fileitem = event_data.get("fileitem")
-        meta = event_data.get("meta")
-        mediainfo = event_data.get("mediainfo")
+        fileitem: FileItem = event_data.get("fileitem")
+        meta: MetaBase = event_data.get("meta")
+        mediainfo: MediaInfo = event_data.get("mediainfo")
         if not fileitem:
             return
-        self.scrape_metadata(fileitem=fileitem, meta=meta, mediainfo=mediainfo)
+        # 刮削锁
+        with scraping_lock:
+            if fileitem.path in scraping_files:
+                return
+            scraping_files.append(fileitem.path)
+        try:
+            # 执行刮削
+            self.scrape_metadata(fileitem=fileitem, meta=meta, mediainfo=mediainfo)
+        finally:
+            # 释放锁
+            with scraping_lock:
+                scraping_files.remove(fileitem.path)

     def scrape_metadata(self, fileitem: schemas.FileItem,
                         meta: MetaBase = None, mediainfo: MediaInfo = None,

@@ -322,6 +336,20 @@
         :param overwrite: 是否覆盖已有文件
         """

+        def is_bluray_folder(_fileitem: schemas.FileItem) -> bool:
+            """
+            判断是否为原盘目录
+            """
+            if not _fileitem or _fileitem.type != "dir":
+                return False
+            # 蓝光原盘目录必备的文件或文件夹
+            required_files = ['BDMV', 'CERTIFICATE']
+            # 检查目录下是否存在所需文件或文件夹
+            for item in self.storagechain.list_files(_fileitem):
+                if item.name in required_files:
+                    return True
+            return False
+
         def __list_files(_fileitem: schemas.FileItem):
             """
             列出下级文件

@@ -337,14 +365,19 @@
             """
             if not _fileitem or not _content or not _path:
                 return
+            # 保存文件到临时目录
             tmp_file = settings.TEMP_PATH / _path.name
             tmp_file.write_bytes(_content)
-            _fileitem.path = str(_path.parent)
-            item = self.storagechain.upload_file(fileitem=_fileitem, path=tmp_file)
-            if item:
-                logger.info(f"已保存文件:{Path(item.path) / item.name}")
-            if tmp_file.exists():
-                tmp_file.unlink()
+            # 获取文件的父目录
+            try:
+                item = self.storagechain.upload_file(fileitem=_fileitem, path=tmp_file, new_name=_path.name)
+                if item:
+                    logger.info(f"已保存文件:{item.path}")
+                else:
+                    logger.warn(f"文件保存失败:{item.path}")
+            finally:
+                if tmp_file.exists():
+                    tmp_file.unlink()

         def __download_image(_url: str) -> Optional[bytes]:
             """

@@ -380,25 +413,37 @@
             # 是否已存在
             nfo_path = filepath.with_suffix(".nfo")
             if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
-                logger.debug(f"已存在nfo文件:{nfo_path}")
+                logger.info(f"已存在nfo文件:{nfo_path}")
                 return
             # 电影文件
+            logger.info(f"正在生成电影nfo:{mediainfo.title_year} - {filepath.name}")
             movie_nfo = self.metadata_nfo(meta=meta, mediainfo=mediainfo)
             if not movie_nfo:
                 logger.warn(f"{filepath.name} nfo文件生成失败")
                 return
             # 保存或上传nfo文件到上级目录
+            if not parent:
+                parent = self.storagechain.get_parent_item(fileitem)
             __save_file(_fileitem=parent, _path=nfo_path, _content=movie_nfo)
         else:
             # 电影目录
-            files = __list_files(_fileitem=fileitem)
-            for file in files:
-                self.scrape_metadata(fileitem=file,
-                                     meta=meta, mediainfo=mediainfo,
-                                     init_folder=False, parent=fileitem)
+            if is_bluray_folder(fileitem):
+                # 原盘目录
+                nfo_path = filepath / "movie.nfo"
+                if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
+                    logger.info(f"已存在nfo文件:{nfo_path}")
+                    return
+                # 生成原盘nfo
+                movie_nfo = self.metadata_nfo(meta=meta, mediainfo=mediainfo)
+                if not movie_nfo:
+                    logger.warn(f"{filepath.name} nfo文件生成失败")
+                    return
+                # 保存或上传nfo文件到当前目录
+                __save_file(_fileitem=fileitem, _path=nfo_path, _content=movie_nfo)
+            else:
+                # 处理目录内的文件
+                files = __list_files(_fileitem=fileitem)
+                for file in files:
+                    self.scrape_metadata(fileitem=file,
+                                         meta=meta, mediainfo=mediainfo,
+                                         init_folder=False, parent=fileitem)
             # 生成目录内图片文件
             if init_folder:
                 # 图片

@@ -412,7 +457,7 @@
                     image_path = filepath / image_name
                     if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage,
                                                                          path=image_path):
-                        logger.debug(f"已存在图片文件:{image_path}")
+                        logger.info(f"已存在图片文件:{image_path}")
                         continue
                     # 下载图片
                     content = __download_image(_url=attr_value)

@@ -425,7 +470,7 @@
             # 是否已存在
             nfo_path = filepath.with_suffix(".nfo")
             if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
-                logger.debug(f"已存在nfo文件:{nfo_path}")
+                logger.info(f"已存在nfo文件:{nfo_path}")
                 return
             # 重新识别季集
             file_meta = MetaInfoPath(filepath)

@@ -453,7 +498,7 @@
             for episode, image_url in image_dict.items():
                 image_path = filepath.with_suffix(Path(image_url).suffix)
                 if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage, path=image_path):
-                    logger.debug(f"已存在图片文件:{image_path}")
+                    logger.info(f"已存在图片文件:{image_path}")
                     continue
                 # 下载图片
                 content = __download_image(image_url)

@@ -475,11 +520,14 @@
             if init_folder:
                 # 识别文件夹名称
                 season_meta = MetaInfo(filepath.name)
-                if season_meta.begin_season:
+                # 当前文件夹为Specials或者SPs时设置为S0
+                if filepath.name in settings.RENAME_FORMAT_S0_NAMES:
+                    season_meta.begin_season = 0
+                if season_meta.begin_season is not None:
                     # 是否已存在
                     nfo_path = filepath / "season.nfo"
                     if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
-                        logger.debug(f"已存在nfo文件:{nfo_path}")
+                        logger.info(f"已存在nfo文件:{nfo_path}")
                         return
                     # 当前目录有季号生成季nfo
                     season_nfo = self.metadata_nfo(meta=meta, mediainfo=mediainfo, season=season_meta.begin_season)

@@ -495,7 +543,7 @@
                     image_path = filepath.with_name(image_name)
                     if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage,
                                                                          path=image_path):
-                        logger.debug(f"已存在图片文件:{image_path}")
+                        logger.info(f"已存在图片文件:{image_path}")
                         continue
                     # 下载图片
                     content = __download_image(image_url)

@@ -503,11 +551,11 @@
                     if content:
                         __save_file(_fileitem=fileitem, _path=image_path, _content=content)
             # 判断当前目录是不是剧集根目录
-            if season_meta.name:
+            if not season_meta.season:
                 # 是否已存在
                 nfo_path = filepath / "tvshow.nfo"
                 if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
-                    logger.debug(f"已存在nfo文件:{nfo_path}")
+                    logger.info(f"已存在nfo文件:{nfo_path}")
                     return
                 # 当前目录有名称生成tvshow nfo 和 tv图片
                 tv_nfo = self.metadata_nfo(meta=meta, mediainfo=mediainfo)

@@ -523,7 +571,7 @@
                     image_path = filepath / image_name
                     if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage,
                                                                          path=image_path):
-                        logger.debug(f"已存在图片文件:{image_path}")
+                        logger.info(f"已存在图片文件:{image_path}")
                         continue
                     # 下载图片
                     content = __download_image(image_url)
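The first hunks in this diff implement the scraping lock from fix #3209: paths currently being scraped are tracked in a shared list guarded by a `Lock`, so the same file is never scraped twice concurrently. A reduced sketch of the pattern, independent of the surrounding chain classes (names hypothetical):

```python
from threading import Lock

scraping_lock = Lock()
scraping_files: list = []


def scrape_once(path: str, scrape) -> bool:
    """Run scrape(path) unless that path is already being scraped.

    Returns True if the scrape ran, False if it was skipped as an in-flight duplicate.
    """
    with scraping_lock:
        if path in scraping_files:
            return False          # another thread is already scraping this path
        scraping_files.append(path)
    try:
        scrape(path)
        return True
    finally:
        # always release the path, even if scrape() raised
        with scraping_lock:
            scraping_files.remove(path)
```

Note the lock only guards the membership list; the scrape itself runs outside it, so different paths can still be scraped in parallel.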


@@ -221,6 +221,12 @@ class SearchChain(ChainBase):
                                          key=ProgressKey.Search)
                 if not torrent.title:
                     continue
+                # 匹配订阅附加参数
+                if filter_params and not self.torrenthelper.filter_torrent(torrent_info=torrent,
+                                                                           filter_params=filter_params):
+                    continue
                 # 识别元数据
                 torrent_meta = MetaInfo(title=torrent.title, subtitle=torrent.description,
                                         custom_words=custom_words)

@@ -234,11 +240,6 @@
                         _match_torrents.append((torrent, torrent_meta))
                         continue
-                # 匹配订阅附加参数
-                if filter_params and not self.torrenthelper.filter_torrent(torrent_info=torrent,
-                                                                           filter_params=filter_params):
-                    continue
                 # 比对种子
                 if self.torrenthelper.match_torrent(mediainfo=mediainfo,
                                                     torrent_meta=torrent_meta,
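This change moves the subscription filter ahead of metadata recognition, so torrents rejected by the filter are never parsed and an imdbid match can no longer bypass the filter. The ordering, reduced to a sketch with hypothetical helpers:

```python
from typing import Callable, Iterable, List


def search_results(torrents: Iterable[str],
                   subscription_filter: Callable[[str], bool],
                   parse: Callable[[str], dict]) -> List[dict]:
    """Apply the subscription filter before expensive metadata parsing."""
    results = []
    for title in torrents:
        # filter first: rejected titles are never parsed, and no
        # later early-accept path (e.g. an id match) can skip the filter
        if not subscription_filter(title):
            continue
        results.append(parse(title))
    return results
```

Besides correctness, this also saves work: filtered-out torrents skip the `MetaInfo` parse entirely.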


@@ -1,7 +1,7 @@
 import base64
 import re
 from datetime import datetime
-from typing import Optional, Tuple, Union
+from typing import Optional, Tuple, Union, Dict
 from urllib.parse import urljoin

 from lxml import etree

@@ -86,14 +86,22 @@ class SiteChain(ChainBase):
                            f"{userdata.message_unread} 条新消息,请登陆查看",
                     link=site.get("url")
                 ))
+            # 低分享率警告
+            if userdata.ratio and float(userdata.ratio) < 1:
+                self.post_message(Notification(
+                    mtype=NotificationType.SiteMessage,
+                    title=f"【站点分享率低预警】",
+                    text=f"站点 {site.get('name')} 分享率 {userdata.ratio},请注意!"
+                ))
         return userdata

-    def refresh_userdatas(self) -> None:
+    def refresh_userdatas(self) -> Dict[str, SiteUserData]:
         """
         刷新所有站点的用户数据
         """
         sites = self.siteshelper.get_indexers()
         any_site_updated = False
+        result = {}
         for site in sites:
             if global_vars.is_system_stopped:
                 return

@@ -101,10 +109,12 @@
             userdata = self.refresh_userdata(site)
             if userdata:
                 any_site_updated = True
+                result[site.get("name")] = userdata
         if any_site_updated:
             EventManager().send_event(EventType.SiteRefreshed, {
                 "site_id": "*"
             })
+        return result

     def is_special_site(self, domain: str) -> bool:
         """

@@ -705,3 +715,66 @@
                               source=source,
                               title=f"【{site_info.name}】 Cookie&UA更新成功",
                               userid=userid))
+
+    def remote_refresh_userdatas(self, channel: MessageChannel,
+                                 userid: Union[str, int] = None, source: str = None):
+        """
+        刷新所有站点用户数据
+        """
+        logger.info("收到命令,开始刷新站点数据 ...")
+        self.post_message(Notification(
+            channel=channel,
+            source=source,
+            title="开始刷新站点数据 ...",
+            userid=userid
+        ))
+        # 刷新站点数据
+        site_datas = self.refresh_userdatas()
+        if site_datas:
+            # 发送消息
+            messages = {}
+            # 总上传
+            incUploads = 0
+            # 总下载
+            incDownloads = 0
+            # 今天日期
+            today_date = datetime.now().strftime("%Y-%m-%d")
+            for rand, site in enumerate(site_datas.keys()):
+                upload = int(site_datas[site].upload or 0)
+                download = int(site_datas[site].download or 0)
+                updated_date = site_datas[site].updated_day
+                if updated_date and updated_date != today_date:
+                    updated_date = f"{updated_date}"
+                else:
+                    updated_date = ""
+                if upload > 0 or download > 0:
+                    incUploads += upload
+                    incDownloads += download
+                    messages[upload + (rand / 1000)] = (
+                        f"{site}{updated_date}\n"
+                        + f"上传量:{StringUtils.str_filesize(upload)}\n"
+                        + f"下载量:{StringUtils.str_filesize(download)}\n"
+                        + "————————————"
+                    )
+            if incDownloads or incUploads:
+                sorted_messages = [messages[key] for key in sorted(messages.keys(), reverse=True)]
+                sorted_messages.insert(0, f"【汇总】\n"
+                                          f"总上传:{StringUtils.str_filesize(incUploads)}\n"
+                                          f"总下载:{StringUtils.str_filesize(incDownloads)}\n"
+                                          f"————————————")
+                self.post_message(Notification(
+                    channel=channel,
+                    source=source,
+                    title="【站点数据统计】",
+                    text="\n".join(sorted_messages),
+                    userid=userid
+                ))
+        else:
+            self.post_message(Notification(
+                channel=channel,
+                source=source,
+                title="没有刷新到任何站点数据!",
+                userid=userid
+            ))
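In `remote_refresh_userdatas`, per-site messages are keyed by `upload + (rand / 1000)`, where `rand` is the enumeration index: the fractional part keeps dictionary keys unique when two sites report identical uploads, while a reverse sort of the keys still orders messages by upload. A reduced sketch of that keying trick:

```python
def order_by_upload(stats: dict) -> list:
    """Order site names by upload, descending.

    The index-based fraction breaks ties between equal uploads without
    disturbing the overall upload ordering (assumes < 1000 sites and
    integer upload values, as in the original).
    """
    keyed = {}
    for idx, (site, upload) in enumerate(stats.items()):
        keyed[upload + idx / 1000] = site   # unique key even for equal uploads
    return [keyed[key] for key in sorted(keyed, reverse=True)]
```

Without the fraction, two sites with the same upload would collide on one dict key and one message would be silently lost.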


@@ -135,9 +135,17 @@ class StorageChain(ChainBase):
             if not self.delete_file(fileitem):
                 logger.warn(f"{fileitem.storage}{fileitem.path} 删除失败")
                 return False
-            # 处理上级目录
-            if mtype and mtype == MediaType.TV:
-                dir_item = self.get_file_item(storage=fileitem.storage, path=Path(fileitem.path).parent.parent)
+            if mtype:
+                # 重命名格式
+                rename_format = settings.TV_RENAME_FORMAT \
+                    if mtype == MediaType.TV else settings.MOVIE_RENAME_FORMAT
+                # 计算重命名中的文件夹层数
+                rename_format_level = len(rename_format.split("/")) - 1
+                if rename_format_level < 1:
+                    return True
+                # 处理上级目录
+                dir_item = self.get_file_item(storage=fileitem.storage,
+                                              path=Path(fileitem.path).parents[rename_format_level - 1])
             else:
                 dir_item = self.get_parent_item(fileitem)
             if dir_item and len(Path(dir_item.path).parts) > 2:
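This hunk derives how many parent directories belong to a media item from its rename format instead of hard-coding `parent.parent` for TV: `len(rename_format.split("/")) - 1` folder levels, then `Path(...).parents[level - 1]` walks up to the item's root. A sketch of that arithmetic with a hypothetical format string:

```python
from pathlib import Path


def media_root(file_path: str, rename_format: str) -> Path:
    """Walk up from a media file to the top directory the rename format
    created; returns the file's own path if the format has no folders."""
    level = len(rename_format.split("/")) - 1
    if level < 1:
        return Path(file_path)
    # parents[0] is the immediate parent, parents[1] the grandparent, ...
    return Path(file_path).parents[level - 1]
```

This keeps cleanup correct for custom formats with more or fewer nesting levels than the default `show/season/episode` layout.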


@@ -175,7 +175,7 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
             # 按pubdate降序排列
             torrents.sort(key=lambda x: x.pubdate or '', reverse=True)
             # 取前N条
-            torrents = torrents[:settings.CACHE_CONF.get('refresh')]
+            torrents = torrents[:settings.CACHE_CONF["refresh"]]
             if torrents:
                 # 过滤出没有处理过的种子
                 torrents = [torrent for torrent in torrents

@@ -215,8 +215,8 @@
                 else:
                     torrents_cache[domain].append(context)
                 # 如果超过了限制条数则移除掉前面的
-                if len(torrents_cache[domain]) > settings.CACHE_CONF.get('torrents'):
-                    torrents_cache[domain] = torrents_cache[domain][-settings.CACHE_CONF.get('torrents'):]
+                if len(torrents_cache[domain]) > settings.CACHE_CONF["torrents"]:
+                    torrents_cache[domain] = torrents_cache[domain][-settings.CACHE_CONF["torrents"]:]
                 # 回收资源
                 del torrents
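The switch from `.get('torrents')` to `CACHE_CONF["torrents"]` matters: if the key were missing, `.get` would return `None`, and `torrents[:None]` silently keeps the whole list instead of truncating, while direct indexing fails loudly. The bounded-append pattern itself, as a small sketch:

```python
def append_bounded(cache: list, item, limit: int) -> list:
    """Append an item and keep only the newest `limit` entries,
    dropping the oldest from the front (mirrors the per-domain cache trim)."""
    cache.append(item)
    if len(cache) > limit:
        cache = cache[-limit:]
    return cache
```

Note that `lst[:None]` equals `lst[:]` in Python, which is why a missing config key previously made the cache unbounded rather than raising.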


@@ -10,7 +10,7 @@ from app.chain.tmdb import TmdbChain
 from app.core.config import settings, global_vars
 from app.core.context import MediaInfo
 from app.core.meta import MetaBase
-from app.core.metainfo import MetaInfoPath
+from app.core.metainfo import MetaInfoPath, MetaInfo
 from app.db.downloadhistory_oper import DownloadHistoryOper
 from app.db.models.downloadhistory import DownloadHistory
 from app.db.models.transferhistory import TransferHistory
@@ -131,6 +131,7 @@ class TransferChain(ChainBase):
                     extension=file_path.suffix.lstrip('.'),
                 ),
                 mediainfo=mediainfo,
+                downloader=torrent.downloader,
                 download_hash=torrent.hash,
                 src_match=True
             )
@@ -148,8 +149,9 @@ class TransferChain(ChainBase):
                  target_directory: TransferDirectoryConf = None,
                  target_storage: str = None, target_path: Path = None,
                  transfer_type: str = None, scrape: bool = None,
-                 season: int = None, epformat: EpisodeFormat = None,
-                 min_filesize: int = 0, download_hash: str = None,
+                 library_type_folder: bool = None, library_category_folder: bool = None,
+                 season: int = None, epformat: EpisodeFormat = None, min_filesize: int = 0,
+                 downloader: str = None, download_hash: str = None,
                  force: bool = False, src_match: bool = False) -> Tuple[bool, str]:
         """
         Transfer a complex directory
@@ -161,9 +163,12 @@ class TransferChain(ChainBase):
         :param target_path: target path
         :param transfer_type: transfer type
         :param scrape: whether to scrape metadata
+        :param library_type_folder: media-library type subfolder
+        :param library_category_folder: media-library category subfolder
         :param season: season
         :param epformat: episode format
         :param min_filesize: minimum file size (MB)
+        :param downloader: downloader
         :param download_hash: download record hash
         :param force: force the transfer
         :param src_match: match by source directory
@@ -184,8 +189,6 @@ class TransferChain(ChainBase):
         # Aggregated season/episode lists
         season_episodes: Dict[Tuple, List[int]] = {}
-        # Aggregated metadata
-        metas: Dict[Tuple, MetaBase] = {}
         # Aggregated media info
         medias: Dict[Tuple, MediaInfo] = {}
         # Aggregated transfer info
@@ -389,14 +392,23 @@ class TransferChain(ChainBase):
                     download_hash = download_file.download_hash
             # Look up the transfer target directory
-            if not target_directory and not target_path:
+            if not target_directory:
                 if src_match:
                     # Match by source directory to find a more suitable directory config
-                    target_directory = self.directoryhelper.get_dir(file_mediainfo,
-                                                                    storage=file_item.storage, src_path=file_path)
+                    target_directory = self.directoryhelper.get_dir(media=file_mediainfo,
+                                                                    storage=file_item.storage,
+                                                                    src_path=file_path,
+                                                                    target_storage=target_storage)
+                elif target_path:
+                    # Target path given (manual-transfer scenario); skip source matching and match the given directory
+                    target_directory = self.directoryhelper.get_dir(media=file_mediainfo,
+                                                                    dest_path=target_path,
+                                                                    target_storage=target_storage)
                 else:
                     # No target path given; pick the target directory from the media info
-                    target_directory = self.directoryhelper.get_dir(file_mediainfo)
+                    target_directory = self.directoryhelper.get_dir(file_mediainfo,
+                                                                    storage=target_storage,
+                                                                    target_storage=target_storage)
             # Run the transfer
             transferinfo: TransferInfo = self.transfer(fileitem=file_item,
@@ -407,7 +419,9 @@ class TransferChain(ChainBase):
                                                        target_path=target_path,
                                                        transfer_type=transfer_type,
                                                        episodes_info=episodes_info,
-                                                       scrape=scrape)
+                                                       scrape=scrape,
+                                                       library_type_folder=library_type_folder,
+                                                       library_category_folder=library_category_folder)
             if not transferinfo:
                 logger.error("文件整理模块运行失败")
                 return False, "文件整理模块运行失败"
@@ -443,7 +457,6 @@ class TransferChain(ChainBase):
             mkey = (file_mediainfo.tmdb_id, file_meta.begin_season)
             if mkey not in medias:
                 # New entry
-                metas[mkey] = file_meta
                 medias[mkey] = file_mediainfo
                 season_episodes[mkey] = file_meta.episode_list
                 transfers[mkey] = transferinfo
@@ -467,6 +480,15 @@ class TransferChain(ChainBase):
                 transferinfo=transferinfo
             )
+            # Transfer-complete event
+            self.eventmanager.send_event(EventType.TransferComplete, {
+                'meta': file_meta,
+                'mediainfo': file_mediainfo,
+                'transferinfo': transferinfo,
+                'downloader': downloader,
+                'download_hash': download_hash,
+            })
             # Update progress
             processed_num += 1
             self.progress.update(value=processed_num / total_num * 100,
@@ -479,8 +501,9 @@ class TransferChain(ChainBase):
         # Post-processing
         for mkey, media in medias.items():
-            transfer_meta = metas[mkey]
             transfer_info = transfers[mkey]
+            transfer_meta = MetaInfo(transfer_info.target_diritem.name)
+            transfer_meta.begin_season = mkey[1]
             # Send notifications
             if transfer_info.need_notify:
                 se_str = None
@@ -497,19 +520,12 @@ class TransferChain(ChainBase):
                     'mediainfo': media,
                     'fileitem': transfer_info.target_diritem
                 })
-            # Transfer-complete event
-            self.eventmanager.send_event(EventType.TransferComplete, {
-                'meta': transfer_meta,
-                'mediainfo': media,
-                'transferinfo': transfer_info,
-                'download_hash': download_hash,
-            })
         # Move-mode handling
         if all_success and current_transfer_type in ["move"]:
             # Downloader hash
             if download_hash:
-                if self.remove_torrents(download_hash):
+                if self.remove_torrents(download_hash, downloader=downloader):
                     logger.info(f"移动模式删除种子成功:{download_hash} ")
             # Remove leftover directories
             if fileitem:
@@ -677,6 +693,8 @@ class TransferChain(ChainBase):
                        epformat: EpisodeFormat = None,
                        min_filesize: int = 0,
                        scrape: bool = None,
+                       library_type_folder: bool = False,
+                       library_category_folder: bool = False,
                        force: bool = False) -> Tuple[bool, Union[str, list]]:
         """
         Manual transfer with complex conditions and a progress display
@@ -691,6 +709,8 @@ class TransferChain(ChainBase):
         :param epformat: episode format
         :param min_filesize: minimum file size (MB)
         :param scrape: whether to scrape metadata
+        :param library_type_folder: create a subfolder per media type
+        :param library_category_folder: create a subfolder per media category
         :param force: force the transfer
         """
         logger.info(f"手动整理:{fileitem.path} ...")
@@ -719,6 +739,8 @@ class TransferChain(ChainBase):
             epformat=epformat,
             min_filesize=min_filesize,
             scrape=scrape,
+            library_type_folder=library_type_folder,
+            library_category_folder=library_category_folder,
             force=force,
         )
         if not state:

View File

@@ -8,7 +8,7 @@ from pathlib import Path
 from typing import Any, Dict, List, Optional, Tuple, Type
 from dotenv import set_key
-from pydantic import BaseModel, BaseSettings, validator
+from pydantic import BaseModel, BaseSettings, validator, Field
 from app.log import logger
 from app.utils.system import SystemUtils
@@ -36,7 +36,7 @@ class ConfigModel(BaseModel):
     # RESOURCE secret key
     RESOURCE_SECRET_KEY: str = secrets.token_urlsafe(32)
     # Allowed hosts
-    ALLOWED_HOSTS: list = ["*"]
+    ALLOWED_HOSTS: list = Field(default_factory=lambda: ["*"])
     # Token expiry
     ACCESS_TOKEN_EXPIRE_MINUTES: int = 60 * 24 * 8
     # RESOURCE_TOKEN expiry
@@ -114,29 +114,39 @@ class ConfigModel(BaseModel):
     # Whether to resolve domains via DoH
     DOH_ENABLE: bool = True
     # Domains resolved via DoH
-    DOH_DOMAINS: str = "api.themoviedb.org,api.tmdb.org,webservice.fanart.tv,api.github.com,github.com,raw.githubusercontent.com,api.telegram.org"
+    DOH_DOMAINS: str = ("api.themoviedb.org,"
+                        "api.tmdb.org,"
+                        "webservice.fanart.tv,"
+                        "api.github.com,"
+                        "github.com,"
+                        "raw.githubusercontent.com,"
+                        "api.telegram.org")
     # DoH resolver list
     DOH_RESOLVERS: str = "1.0.0.1,1.1.1.1,9.9.9.9,149.112.112.112"
     # Supported media file extensions
-    RMT_MEDIAEXT: list = ['.mp4', '.mkv', '.ts', '.iso',
-                          '.rmvb', '.avi', '.mov', '.mpeg',
-                          '.mpg', '.wmv', '.3gp', '.asf',
-                          '.m4v', '.flv', '.m2ts', '.strm',
-                          '.tp', '.f4v']
+    RMT_MEDIAEXT: list = Field(
+        default_factory=lambda: ['.mp4', '.mkv', '.ts', '.iso',
+                                 '.rmvb', '.avi', '.mov', '.mpeg',
+                                 '.mpg', '.wmv', '.3gp', '.asf',
+                                 '.m4v', '.flv', '.m2ts', '.strm',
+                                 '.tp', '.f4v']
+    )
     # Supported subtitle file extensions
-    RMT_SUBEXT: list = ['.srt', '.ass', '.ssa', '.sup']
+    RMT_SUBEXT: list = Field(default_factory=lambda: ['.srt', '.ass', '.ssa', '.sup'])
     # Supported audio-track file extensions
-    RMT_AUDIO_TRACK_EXT: list = ['.mka']
+    RMT_AUDIO_TRACK_EXT: list = Field(default_factory=lambda: ['.mka'])
     # Audio file extensions
-    RMT_AUDIOEXT: list = ['.aac', '.ac3', '.amr', '.caf', '.cda', '.dsf',
-                          '.dff', '.kar', '.m4a', '.mp1', '.mp2', '.mp3',
-                          '.mid', '.mod', '.mka', '.mpc', '.nsf', '.ogg',
-                          '.pcm', '.rmi', '.s3m', '.snd', '.spx', '.tak',
-                          '.tta', '.vqf', '.wav', '.wma',
-                          '.aifc', '.aiff', '.alac', '.adif', '.adts',
-                          '.flac', '.midi', '.opus', '.sfalc']
+    RMT_AUDIOEXT: list = Field(
+        default_factory=lambda: ['.aac', '.ac3', '.amr', '.caf', '.cda', '.dsf',
+                                 '.dff', '.kar', '.m4a', '.mp1', '.mp2', '.mp3',
+                                 '.mid', '.mod', '.mka', '.mpc', '.nsf', '.ogg',
+                                 '.pcm', '.rmi', '.s3m', '.snd', '.spx', '.tak',
+                                 '.tta', '.vqf', '.wav', '.wma',
+                                 '.aifc', '.aiff', '.alac', '.adif', '.adts',
+                                 '.flac', '.midi', '.opus', '.sfalc']
+    )
     # Downloader temp-file extensions
-    DOWNLOAD_TMPEXT: list = ['.!qb', '.part']
+    DOWNLOAD_TMPEXT: list = Field(default_factory=lambda: ['.!qb', '.part'])
     # Media-server sync interval (hours)
     MEDIASERVER_SYNC_INTERVAL: int = 6
     # Subscription mode
@@ -189,7 +199,10 @@ class ConfigModel(BaseModel):
     # Server address, see the https://github.com/jxxghp/MoviePilot-Server project
     MP_SERVER_HOST: str = "https://movie-pilot.org"
     # Plugin market repo addresses, comma-separated, each ending with /
-    PLUGIN_MARKET: str = "https://github.com/jxxghp/MoviePilot-Plugins,https://github.com/thsrite/MoviePilot-Plugins,https://github.com/honue/MoviePilot-Plugins,https://github.com/InfinityPacer/MoviePilot-Plugins"
+    PLUGIN_MARKET: str = ("https://github.com/jxxghp/MoviePilot-Plugins,"
+                          "https://github.com/thsrite/MoviePilot-Plugins,"
+                          "https://github.com/honue/MoviePilot-Plugins,"
+                          "https://github.com/InfinityPacer/MoviePilot-Plugins")
     # Plugin install statistics sharing
     PLUGIN_STATISTIC_SHARE: bool = True
     # Whether plugin hot reload is enabled
@@ -207,10 +220,22 @@ class ConfigModel(BaseModel):
     # Global image cache: cache media images locally
     GLOBAL_IMAGE_CACHE: bool = False
     # Allowed image-cache domains
-    SECURITY_IMAGE_DOMAINS: List[str] = ["image.tmdb.org", "static-mdb.v.geilijiasu.com", "doubanio.com", "lain.bgm.tv",
-                                         "raw.githubusercontent.com", "github.com"]
+    SECURITY_IMAGE_DOMAINS: List[str] = Field(
+        default_factory=lambda: ["image.tmdb.org",
+                                 "static-mdb.v.geilijiasu.com",
+                                 "doubanio.com",
+                                 "lain.bgm.tv",
+                                 "raw.githubusercontent.com",
+                                 "github.com"]
+    )
     # Allowed image file extensions
-    SECURITY_IMAGE_SUFFIXES: List[str] = [".jpg", ".jpeg", ".png", ".webp", ".gif", ".svg"]
+    SECURITY_IMAGE_SUFFIXES: List[str] = Field(
+        default_factory=lambda: [".jpg", ".jpeg", ".png", ".webp", ".gif", ".svg"]
+    )
+    # Season-0 aliases recognized when renaming
+    RENAME_FORMAT_S0_NAMES: List[str] = Field(
+        default_factory=lambda: ["Specials", "SPs"]
+    )
 class Settings(BaseSettings, ConfigModel):
@@ -345,10 +370,9 @@ class Settings(BaseSettings, ConfigModel):
                 logger.warning(message)
         if field.name in os.environ:
-            if is_converted:
-                message = f"配置项 '{field.name}' 已在环境变量中设置,请手动更新以保持一致性"
-                logger.warning(message)
-            return False, message
+            message = f"配置项 '{field.name}' 已在环境变量中设置,请手动更新以保持一致性"
+            logger.warning(message)
+            return False, message
         else:
             set_key(SystemUtils.get_env_path(), field.name, str(converted_value) if converted_value is not None else "")
             if is_converted:
@@ -372,7 +396,7 @@ class Settings(BaseSettings, ConfigModel):
                                             field.default, key)
                 # If no exception was raised, update consistently using converted_value
                 if needs_update or str(value) != str(converted_value):
-                    success, message = self.update_env_config(field, original_value, converted_value)
+                    success, message = self.update_env_config(field, value, converted_value)
                     # Update the in-memory value only after the config update succeeded
                     if success:
                         setattr(self, key, converted_value)
@@ -437,22 +461,32 @@ class Settings(BaseSettings, ConfigModel):
     @property
     def CACHE_CONF(self):
+        """
+        {
+            "torrents": number of torrents to cache,
+            "refresh": number of torrents processed per subscription refresh,
+            "tmdb": TMDB request cache size,
+            "douban": Douban request cache size,
+            "fanart": Fanart request cache size,
+            "meta": metadata cache TTL (seconds)
+        }
+        """
         if self.BIG_MEMORY_MODE:
             return {
+                "torrents": 200,
+                "refresh": 100,
                 "tmdb": 1024,
-                "refresh": 50,
-                "torrents": 100,
                 "douban": 512,
                 "fanart": 512,
-                "meta": (self.META_CACHE_EXPIRE or 168) * 3600
+                "meta": (self.META_CACHE_EXPIRE or 24) * 3600
             }
         return {
+            "torrents": 100,
+            "refresh": 50,
             "tmdb": 256,
-            "refresh": 30,
-            "torrents": 50,
             "douban": 256,
             "fanart": 128,
-            "meta": (self.META_CACHE_EXPIRE or 72) * 3600
+            "meta": (self.META_CACHE_EXPIRE or 2) * 3600
         }
     @property

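Several list-typed defaults above move from bare literals to `Field(default_factory=...)`. Pydantic itself guards against shared mutable defaults, so the change mainly makes the intent explicit; the dataclasses analogue below illustrates the pitfall the idiom exists for (illustrative only, not MoviePilot code):

```python
from dataclasses import dataclass, field

# dataclasses reject a bare mutable default outright (ValueError at class
# creation), forcing default_factory so each instance gets a fresh list.
@dataclass
class Config:
    allowed_hosts: list = field(default_factory=lambda: ["*"])

a, b = Config(), Config()
a.allowed_hosts.append("example.org")
print(b.allowed_hosts)  # ['*'] — b's list is unaffected by mutating a's
```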
View File

@@ -84,7 +84,6 @@ class EventManager(metaclass=Singleton):
         self.__disabled_handlers = set()  # Disabled event handlers
         self.__disabled_classes = set()  # Disabled event-handler classes
         self.__lock = threading.Lock()  # Thread lock
-        self.__processing_events = {}  # Events currently being processed {event_hash: event}
     def start(self):
         """
@@ -130,14 +129,6 @@ class EventManager(metaclass=Singleton):
             for handler in handlers.values()
         )
-    @staticmethod
-    def __get_event_hash(event: Event) -> str:
-        """
-        Compute a unique hash identifying an event
-        """
-        data_string = str(event.event_type.value) + str(event.event_data)
-        return str(uuid.uuid5(uuid.NAMESPACE_DNS, data_string))
     def send_event(self, etype: Union[EventType, ChainEventType], data: Optional[Union[Dict, ChainEventData]] = None,
                    priority: int = DEFAULT_EVENT_PRIORITY) -> Optional[Event]:
         """
@@ -148,12 +139,6 @@ class EventManager(metaclass=Singleton):
         :return: for chain events, the processed event data; otherwise None
         """
         event = Event(etype, data, priority)
-        event_hash = self.__get_event_hash(event)
-        with self.__lock:
-            if event_hash in self.__processing_events:
-                logger.debug(f"Duplicate event ignored: {event}")
-                return None
-            self.__processing_events[event_hash] = event
         if isinstance(etype, EventType):
             self.__trigger_broadcast_event(event)
         elif isinstance(etype, ChainEventType):
@@ -335,14 +320,9 @@ class EventManager(metaclass=Singleton):
         """
         Trigger a chain event: call subscribed handlers in order and record the time taken
         """
-        try:
-            logger.debug(f"Triggering synchronous chain event: {event}")
-            dispatch = self.__dispatch_chain_event(event)
-            return event if dispatch else None
-        finally:
-            event_hash = self.__get_event_hash(event)
-            with self.__lock:
-                self.__processing_events.pop(event_hash, None)
+        logger.debug(f"Triggering synchronous chain event: {event}")
+        dispatch = self.__dispatch_chain_event(event)
+        return event if dispatch else None
     def __trigger_broadcast_event(self, event: Event):
         """
@@ -383,9 +363,6 @@ class EventManager(metaclass=Singleton):
             return
         for handler_id, handler in handlers.items():
             self.__executor.submit(self.__safe_invoke_handler, handler, event)
-        event_hash = self.__get_event_hash(event)
-        with self.__lock:
-            self.__processing_events.pop(event_hash, None)
     def __safe_invoke_handler(self, handler: Callable, event: Event):
         """

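The removed guard keyed in-flight events with a deterministic uuid5 over the event type and payload, which is why two legitimate back-to-back events with identical payloads could be silently dropped. A standalone sketch of that hashing (outside the EventManager class):

```python
import uuid

def event_hash(event_type: str, event_data: dict) -> str:
    # uuid5 is deterministic: the same namespace + name always yields the same UUID
    data_string = event_type + str(event_data)
    return str(uuid.uuid5(uuid.NAMESPACE_DNS, data_string))

h1 = event_hash("TransferComplete", {"download_hash": "abc123"})
h2 = event_hash("TransferComplete", {"download_hash": "abc123"})
print(h1 == h2)  # True — identical events collide, so the second one was ignored
```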
View File

@@ -5,6 +5,7 @@ from app import schemas
 from app.core.context import MediaInfo
 from app.db.systemconfig_oper import SystemConfigOper
 from app.schemas.types import SystemConfigKey
+from app.utils.system import SystemUtils
 class DirectoryHelper:
@@ -48,16 +49,18 @@ class DirectoryHelper:
         """
         return [d for d in self.get_library_dirs() if d.library_storage == "local"]
-    def get_dir(self, media: MediaInfo, storage: str = "local",
-                src_path: Path = None, dest_path: Path = None, fileitem: schemas.FileItem = None
+    def get_dir(self, media: MediaInfo,
+                storage: str = None, src_path: Path = None,
+                target_storage: str = None, dest_path: Path = None
                 ) -> Optional[schemas.TransferDirectoryConf]:
         """
         Pick the download/library directory config for the given media info
         :param media: media info
         :param storage: storage type
-        :param fileitem: file item, matched by file path
+        :param target_storage: target storage type
         :param src_path: source directory; matched directly when given
         :param dest_path: target directory; matched directly when given
         """
         # Handle the media type
         if not media:
@@ -65,35 +68,43 @@ class DirectoryHelper:
         # Movie / TV show
         media_type = media.type.value
         dirs = self.get_dirs()
+        # Matched directories
+        matched_dirs: List[schemas.TransferDirectoryConf] = []
         # Search in configured order
         for d in dirs:
             # Skip directories without transfer enabled
             if not d.monitor_type:
                 continue
             # Storage type mismatch
             if storage and d.storage != storage:
                 continue
-            # Download directory
-            download_path = Path(d.download_path)
-            # Library directory
-            library_path = Path(d.library_path)
-            # With a source directory, skip when it is not under the download directory
-            if src_path and not src_path.is_relative_to(download_path):
+            # Target storage type mismatch
+            if target_storage and d.library_storage != target_storage:
                 continue
-            # With a file item, skip when it is not under the download directory
-            if fileitem and not Path(fileitem.path).is_relative_to(download_path):
+            # With a source directory, skip when it is not under the download directory
+            if src_path and not src_path.is_relative_to(d.download_path):
                 continue
             # With a target directory, skip when it does not match the library directory
-            if dest_path and not dest_path.is_relative_to(library_path):
+            if dest_path and dest_path != Path(d.library_path):
                 continue
             # A directory whose type is "all" matches
             if not d.media_type:
-                return d
+                matched_dirs.append(d)
+                continue
             # Type matches and category is "all"
             if d.media_type == media_type and not d.media_category:
-                return d
+                matched_dirs.append(d)
+                continue
             # Type and category both match
             if d.media_type == media_type and d.media_category == media.category:
-                return d
+                matched_dirs.append(d)
+                continue
+        if matched_dirs:
+            if src_path:
+                # Prefer a directory on the same disk as the source
+                for matched_dir in matched_dirs:
+                    matched_path = Path(matched_dir.download_path)
+                    if SystemUtils.is_same_disk(matched_path, src_path):
+                        return matched_dir
+            return matched_dirs[0]
         return None

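`SystemUtils.is_same_disk` itself is not shown in this diff; comparing the `st_dev` of the nearest existing ancestors is one common way such a check is implemented, sketched below (an assumption about the helper, not its actual code):

```python
import os
from pathlib import Path

def is_same_disk(a: Path, b: Path) -> bool:
    """Heuristic: two paths are on the same disk if their nearest
    existing ancestors report the same device ID."""
    def device_of(p: Path) -> int:
        # Walk up until we reach a path that exists on the filesystem
        while not p.exists():
            p = p.parent
        return os.stat(p).st_dev
    return device_of(a) == device_of(b)

# A move within one device is a cheap rename; across devices it is a copy,
# which is why the matcher above prefers a same-disk download directory.
print(is_same_disk(Path("."), Path("./does-not-exist/yet")))  # True
```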
View File

@@ -290,7 +290,7 @@ class TorrentHelper(metaclass=Singleton):
             if not file_path.suffix or file_path.suffix.lower() not in settings.RMT_MEDIAEXT:
                 continue
             # Identify using the file name only
-            meta = MetaInfo(file_path.stem)
+            meta = MetaInfo(file_path.name)
             if not meta.begin_episode:
                 continue
             episodes = list(set(episodes).union(set(meta.episode_list)))

View File

@@ -3,11 +3,11 @@ import base64
 import hashlib
 import hmac
 from datetime import datetime
-from functools import lru_cache
 from random import choice
 from urllib import parse
 import requests
+from cachetools import TTLCache, cached
 from app.core.config import settings
 from app.utils.http import RequestUtils
@@ -160,12 +160,12 @@ class DoubanApi(metaclass=Singleton):
         self._session = requests.Session()
     @classmethod
-    def __sign(cls, url: str, ts: int, method='GET') -> str:
+    def __sign(cls, url: str, ts: str, method='GET') -> str:
         """
         Sign the request
         """
         url_path = parse.urlparse(url).path
-        raw_sign = '&'.join([method.upper(), parse.quote(url_path, safe=''), str(ts)])
+        raw_sign = '&'.join([method.upper(), parse.quote(url_path, safe=''), ts])
         return base64.b64encode(
             hmac.new(
                 cls._api_secret_key.encode(),
@@ -174,7 +174,7 @@ class DoubanApi(metaclass=Singleton):
             ).digest()
         ).decode()
-    @lru_cache(maxsize=settings.CACHE_CONF.get('douban'))
+    @cached(cache=TTLCache(maxsize=settings.CACHE_CONF["douban"], ttl=settings.CACHE_CONF["meta"]))
     def __invoke(self, url: str, **kwargs) -> dict:
         """
         GET request
@@ -203,7 +203,7 @@ class DoubanApi(metaclass=Singleton):
             return resp.json()
         return resp.json() if resp else {}
-    @lru_cache(maxsize=settings.CACHE_CONF.get('douban'))
+    @cached(cache=TTLCache(maxsize=settings.CACHE_CONF["douban"], ttl=settings.CACHE_CONF["meta"]))
     def __post(self, url: str, **kwargs) -> dict:
        """
        POST request

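The `lru_cache` → `cached(TTLCache(...))` switch above adds time-based expiry: an LRU cache can return stale API responses indefinitely, while a TTL cache re-fetches after `ttl` seconds. A stdlib-only stand-in for what `cachetools` provides here (deliberately simplified, not the library's implementation):

```python
import time
from functools import wraps

def ttl_cached(maxsize: int, ttl: float):
    """Minimal TTL cache decorator: entries expire after `ttl` seconds."""
    def decorator(fn):
        cache = {}  # args -> (expires_at, value)
        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            entry = cache.get(args)
            if entry and entry[0] > now:
                return entry[1]          # fresh hit
            value = fn(*args)
            if len(cache) >= maxsize:
                cache.pop(next(iter(cache)))  # evict the oldest insertion
            cache[args] = (now + ttl, value)
            return value
        return wrapper
    return decorator

calls = []

@ttl_cached(maxsize=128, ttl=0.05)
def fetch(url):
    calls.append(url)
    return {"url": url}

fetch("movie/1"); fetch("movie/1")   # second call is served from cache
time.sleep(0.06)
fetch("movie/1")                     # entry expired, fetched again
print(len(calls))  # 2
```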
View File

@@ -16,7 +16,7 @@ from app.schemas.types import MediaType
 lock = RLock()
 CACHE_EXPIRE_TIMESTAMP_STR = "cache_expire_timestamp"
-EXPIRE_TIMESTAMP = settings.CACHE_CONF.get('meta')
+EXPIRE_TIMESTAMP = settings.CACHE_CONF["meta"]
 class DoubanCache(metaclass=Singleton):
@@ -77,7 +77,7 @@ class DoubanCache(metaclass=Singleton):
         @return: the deleted cache entry
         """
         with lock:
-            return self._meta_data.pop(key, None)
+            return self._meta_data.pop(key, {})
     def delete_by_doubanid(self, doubanid: str) -> None:
         """

View File

@@ -1,12 +1,13 @@
 import re
-from functools import lru_cache
 from typing import Optional, Tuple, Union
+from cachetools import TTLCache, cached
 from app.core.context import MediaInfo, settings
 from app.log import logger
 from app.modules import _ModuleBase
-from app.utils.http import RequestUtils
 from app.schemas.types import MediaType, ModuleType
+from app.utils.http import RequestUtils
 class FanartModule(_ModuleBase):
@@ -404,7 +405,7 @@ class FanartModule(_ModuleBase):
         return result
     @classmethod
-    @lru_cache(maxsize=settings.CACHE_CONF.get('fanart'))
+    @cached(cache=TTLCache(maxsize=settings.CACHE_CONF["fanart"], ttl=settings.CACHE_CONF["meta"]))
     def __request_fanart(cls, media_type: MediaType, queryid: Union[str, int]) -> Optional[dict]:
         if media_type == MediaType.MOVIE:
             image_url = cls._movie_url % queryid

View File

@@ -1,4 +1,3 @@
-import copy
 import re
 from pathlib import Path
 from threading import Lock
@@ -8,6 +7,7 @@ from jinja2 import Template
 from app.core.config import settings
 from app.core.context import MediaInfo
+from app.core.event import eventmanager
 from app.core.meta import MetaBase
 from app.core.metainfo import MetaInfo, MetaInfoPath
 from app.helper.directory import DirectoryHelper
@@ -17,7 +17,8 @@ from app.log import logger
 from app.modules import _ModuleBase
 from app.modules.filemanager.storages import StorageBase
 from app.schemas import TransferInfo, ExistMediaInfo, TmdbEpisode, TransferDirectoryConf, FileItem, StorageUsage
-from app.schemas.types import MediaType, ModuleType
+from app.schemas.event import SmartRenameEventData
+from app.schemas.types import MediaType, ModuleType, ChainEventType
 from app.utils.system import SystemUtils
 lock = Lock()
@@ -131,8 +132,6 @@ class FileManagerModule(_ModuleBase):
         )
         return str(path)
-        pass
     def save_config(self, storage: str, conf: Dict) -> None:
         """
         Save storage configuration
@@ -219,7 +218,8 @@ class FileManagerModule(_ModuleBase):
                     and f".{t.extension.lower()}" in extensions):
                 return True
             elif t.type == "dir":
-                return __any_file(t)
+                if __any_file(t):
+                    return True
         return False
     # Return the result
@@ -322,6 +322,7 @@ class FileManagerModule(_ModuleBase):
                  target_directory: TransferDirectoryConf = None,
                  target_storage: str = None, target_path: Path = None,
                  transfer_type: str = None, scrape: bool = None,
+                 library_type_folder: bool = None, library_category_folder: bool = None,
                  episodes_info: List[TmdbEpisode] = None) -> TransferInfo:
         """
         File transfer
@@ -333,6 +334,8 @@ class FileManagerModule(_ModuleBase):
         :param target_path: target path
         :param transfer_type: transfer mode
         :param scrape: whether to scrape metadata
+        :param library_type_folder: create a subfolder per media type
+        :param library_category_folder: create a subfolder per media category
         :param episodes_info: full episode info for the current season
         :return: {path, target_path, message}
         """
@@ -349,37 +352,36 @@ class FileManagerModule(_ModuleBase):
message=f"{target_path} 不是有效目录") message=f"{target_path} 不是有效目录")
# 获取目标路径 # 获取目标路径
if target_directory: if target_directory:
# 拼装媒体库一、二级子目录
target_path = self.__get_dest_dir(mediainfo=mediainfo, target_dir=target_directory)
# 目标存储类型
if not target_storage:
target_storage = target_directory.library_storage
# 整理方式 # 整理方式
if not transfer_type: if not transfer_type:
transfer_type = target_directory.transfer_type transfer_type = target_directory.transfer_type
if not transfer_type:
logger.error(f"{target_directory.name} 未设置整理方式")
return TransferInfo(success=False,
fileitem=fileitem,
message=f"{target_directory.name} 未设置整理方式")
# 是否需要刮削
if scrape is None:
need_scrape = target_directory.scraping
else:
need_scrape = scrape
# 是否需要重命名 # 是否需要重命名
need_rename = target_directory.renaming need_rename = target_directory.renaming
# 是否需要通知 # 是否需要通知
need_notify = target_directory.notify need_notify = target_directory.notify
# 覆盖模式 # 覆盖模式
overwrite_mode = target_directory.overwrite_mode overwrite_mode = target_directory.overwrite_mode
# 是否需要刮削
if scrape is None:
need_scrape = target_directory.scraping
else:
need_scrape = scrape
# 目标存储类型
if not target_storage:
target_storage = target_directory.library_storage
# 拼装媒体库一、二级子目录
target_path = self.__get_dest_dir(mediainfo=mediainfo, target_dir=target_directory,
need_type_folder=library_type_folder,
need_category_folder=library_category_folder)
elif target_path: elif target_path:
# 手动整理的场景,有自定义目标路径
need_scrape = scrape or False need_scrape = scrape or False
need_rename = True need_rename = True
need_notify = False need_notify = False
overwrite_mode = "never" overwrite_mode = "never"
logger.warn(f"{target_path} 为自定义路径, 通知将不会发送") # 手动整理的场景,有自定义目标路径
target_path = self.__get_dest_path(mediainfo=mediainfo, target_path=target_path,
need_type_folder=library_type_folder,
need_category_folder=library_category_folder)
else: else:
# 未找到有效的媒体库目录 # 未找到有效的媒体库目录
logger.error( logger.error(
@@ -387,9 +389,14 @@ class FileManagerModule(_ModuleBase):
return TransferInfo(success=False, return TransferInfo(success=False,
fileitem=fileitem, fileitem=fileitem,
message="未找到有效的媒体库目录") message="未找到有效的媒体库目录")
# 整理方式
logger.info(f"获取整理目标路径:【{target_storage}{target_path}") if not transfer_type:
logger.error(f"{target_directory.name} 未设置整理方式")
return TransferInfo(success=False,
fileitem=fileitem,
message=f"{target_directory.name} 未设置整理方式")
# 整理 # 整理
logger.info(f"获取整理目标路径:【{target_storage}{target_path}")
return self.transfer_media(fileitem=fileitem, return self.transfer_media(fileitem=fileitem,
in_meta=meta, in_meta=meta,
mediainfo=mediainfo, mediainfo=mediainfo,
@@ -463,9 +470,9 @@ class FileManagerModule(_ModuleBase):
target_file.parent.mkdir(parents=True) target_file.parent.mkdir(parents=True)
# 本地到本地 # 本地到本地
if transfer_type == "copy": if transfer_type == "copy":
state = source_oper.copy(fileitem, target_file) state = source_oper.copy(fileitem, target_file.parent, target_file.name)
elif transfer_type == "move": elif transfer_type == "move":
state = source_oper.move(fileitem, target_file) state = source_oper.move(fileitem, target_file.parent, target_file.name)
elif transfer_type == "link": elif transfer_type == "link":
state = source_oper.link(fileitem, target_file) state = source_oper.link(fileitem, target_file)
elif transfer_type == "softlink": elif transfer_type == "softlink":
@@ -493,7 +500,7 @@ class FileManagerModule(_ModuleBase):
else: else:
return None, f"{fileitem.path} 上传 {target_storage} 失败" return None, f"{fileitem.path} 上传 {target_storage} 失败"
else: else:
return None, f"{target_file.parent} {target_storage} 目录获取失败" return None, f"{target_storage}{target_file.parent} 目录获取失败"
elif transfer_type == "move": elif transfer_type == "move":
# 移动 # 移动
# 根据目的路径获取文件夹 # 根据目的路径获取文件夹
@@ -508,7 +515,7 @@ class FileManagerModule(_ModuleBase):
else: else:
return None, f"{fileitem.path} 上传 {target_storage} 失败" return None, f"{fileitem.path} 上传 {target_storage} 失败"
else: else:
return None, f"{target_file.parent} {target_storage} 目录获取失败" return None, f"{target_storage}{target_file.parent} 目录获取失败"
elif fileitem.storage != "local" and target_storage == "local": elif fileitem.storage != "local" and target_storage == "local":
# 网盘到本地 # 网盘到本地
if target_file.exists(): if target_file.exists():
@@ -532,25 +539,28 @@ class FileManagerModule(_ModuleBase):
return None, f"{fileitem.path} {fileitem.storage} 下载失败" return None, f"{fileitem.path} {fileitem.storage} 下载失败"
elif fileitem.storage == target_storage: elif fileitem.storage == target_storage:
# 同一网盘 # 同一网盘
# 根据目的路径获取文件夹 if transfer_type == "copy":
target_diritem = target_oper.get_folder(target_file.parent) # 复制文件到新目录
if target_diritem: target_fileitem = target_oper.get_folder(target_file.parent)
# 重命名文件 if target_fileitem:
if target_oper.rename(fileitem, target_file.name): if source_oper.move(fileitem, Path(target_fileitem.path), target_file.name):
# 移动文件到新目录 return target_oper.get_item(target_file), ""
if source_oper.move(fileitem, target_diritem):
ret_fileitem = copy.deepcopy(fileitem)
ret_fileitem.path = target_diritem.path + "/" + target_file.name
ret_fileitem.name = target_file.name
ret_fileitem.basename = target_file.stem
ret_fileitem.parent_fileid = target_diritem.fileid
return ret_fileitem, ""
else: else:
return None, f"{fileitem.path} {target_storage} 移动文件失败" return None, f"{target_storage}{fileitem.path} 复制文件失败"
else: else:
return None, f"{fileitem.path} {target_storage} 重命名文件失败" return None, f"{target_storage}{target_file.parent} 目录获取失败"
elif transfer_type == "move":
# 移动文件到新目录
target_fileitem = target_oper.get_folder(target_file.parent)
if target_fileitem:
if source_oper.move(fileitem, Path(target_fileitem.path), target_file.name):
return target_oper.get_item(target_file), ""
else:
return None, f"{target_storage}{fileitem.path} 移动文件失败"
else:
return None, f"{target_storage}{target_file.parent} 目录获取失败"
else: else:
return None, f"{target_file.parent} {target_storage} 目录获取失败" return None, f"不支持的整理方式:{transfer_type}"
return None, "未知错误" return None, "未知错误"
@@ -815,7 +825,8 @@ class FileManagerModule(_ModuleBase):
else: else:
logger.info(f"正在删除已存在的文件:{target_file}") logger.info(f"正在删除已存在的文件:{target_file}")
target_file.unlink() target_file.unlink()
logger.info(f"正在整理文件:【{fileitem.storage}{fileitem.path} 到 【{target_storage}{target_file}") logger.info(f"正在整理文件:【{fileitem.storage}{fileitem.path} 到 【{target_storage}{target_file}"
f"操作类型:{transfer_type}")
new_item, errmsg = self.__transfer_command(fileitem=fileitem, new_item, errmsg = self.__transfer_command(fileitem=fileitem,
target_storage=target_storage, target_storage=target_storage,
target_file=target_file, target_file=target_file,
@@ -831,26 +842,43 @@ class FileManagerModule(_ModuleBase):
return None, errmsg return None, errmsg
@staticmethod @staticmethod
def __get_dest_dir(mediainfo: MediaInfo, target_dir: TransferDirectoryConf) -> Path: def __get_dest_path(mediainfo: MediaInfo, target_path: Path,
need_type_folder: bool = False, need_category_folder: bool = False):
"""
获取目标路径
"""
if need_type_folder:
target_path = target_path / mediainfo.type.value
if need_category_folder and mediainfo.category:
target_path = target_path / mediainfo.category
return target_path
@staticmethod
def __get_dest_dir(mediainfo: MediaInfo, target_dir: TransferDirectoryConf,
need_type_folder: bool = None, need_category_folder: bool = None) -> Path:
""" """
根据设置并装媒体库目录 根据设置并装媒体库目录
:param mediainfo: 媒体信息 :param mediainfo: 媒体信息
:target_dir: 媒体库根目录 :target_dir: 媒体库根目录
:typename_dir: 是否加上类型目录 :need_type_folder: 是否需要按媒体类型创建目录
:need_category_folder: 是否需要按媒体类别创建目录
""" """
if not target_dir.media_type and target_dir.library_type_folder: if need_type_folder is None:
need_type_folder = target_dir.library_type_folder
if need_category_folder is None:
need_category_folder = target_dir.library_category_folder
if not target_dir.media_type and need_type_folder:
# 一级自动分类 # 一级自动分类
library_dir = Path(target_dir.library_path) / mediainfo.type.value library_dir = Path(target_dir.library_path) / mediainfo.type.value
elif target_dir.media_type and target_dir.library_type_folder: elif target_dir.media_type and need_type_folder:
# 一级手动分类 # 一级手动分类
library_dir = Path(target_dir.library_path) / target_dir.media_type library_dir = Path(target_dir.library_path) / target_dir.media_type
else: else:
library_dir = Path(target_dir.library_path) library_dir = Path(target_dir.library_path)
if not target_dir.media_category and need_category_folder and mediainfo.category:
if not target_dir.media_category and target_dir.library_category_folder and mediainfo.category:
# 二级自动分类 # 二级自动分类
library_dir = library_dir / mediainfo.category library_dir = library_dir / mediainfo.category
elif target_dir.media_category and target_dir.library_category_folder: elif target_dir.media_category and need_category_folder:
# 二级手动分类 # 二级手动分类
library_dir = library_dir / target_dir.media_category library_dir = library_dir / target_dir.media_category
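The new `__get_dest_path` helper above composes the library path from two flags. A standalone sketch of the same logic, with the project's `MediaInfo` fields replaced by plain strings (the guard on `type_value` is a simplification added here):

```python
from pathlib import Path

def get_dest_path(target_path: Path, type_value: str = None, category: str = None,
                  need_type_folder: bool = False, need_category_folder: bool = False) -> Path:
    # Optionally append a media-type level, then a category level
    if need_type_folder and type_value:
        target_path = target_path / type_value
    if need_category_folder and category:
        target_path = target_path / category
    return target_path

print(get_dest_path(Path("/library"), "电影", "华语电影", True, True))  # /library/电影/华语电影
```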
@@ -889,6 +917,18 @@ class FileManagerModule(_ModuleBase):
         rename_format = settings.TV_RENAME_FORMAT \
             if mediainfo.type == MediaType.TV else settings.MOVIE_RENAME_FORMAT
+        # 计算重命名中的文件夹层数
+        rename_format_level = len(rename_format.split("/")) - 1
+        if rename_format_level < 1:
+            # 重命名格式不合法
+            logger.error(f"重命名格式不合法:{rename_format}")
+            return TransferInfo(success=False,
+                                message=f"重命名格式不合法",
+                                fileitem=fileitem,
+                                transfer_type=transfer_type,
+                                need_notify=need_notify)
         # 判断是否为文件夹
         if fileitem.type == "dir":
             # 整理整个目录,一般为蓝光原盘
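The `rename_format_level` check above simply counts `/` separators in the rename template; a template with no directory part is rejected as invalid. For example (toy template strings, not the project defaults):

```python
# One "/" in the template: the renamed file sits one directory below the library root
rename_format = "{{title}} ({{year}})/{{title}} ({{year}}){{fileExt}}"
rename_format_level = len(rename_format.split("/")) - 1
print(rename_format_level)  # 1

# A bare file name has no directory level and would be rejected
flat_format = "{{title}}{{fileExt}}"
print(len(flat_format.split("/")) - 1)  # 0
```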
@@ -969,9 +1009,15 @@ class FileManagerModule(_ModuleBase):
         # 目的操作对象
         target_oper: StorageBase = self.__get_storage_oper(target_storage)
         # 目标目录
-        target_diritem = target_oper.get_folder(
-            new_file.parent) if mediainfo.type == MediaType.MOVIE else target_oper.get_folder(
-            new_file.parent.parent)
+        target_diritem = target_oper.get_folder(new_file.parents[rename_format_level - 1])
+        if not target_diritem:
+            logger.error(f"目标目录 {new_file.parents[rename_format_level - 1]} 获取失败")
+            return TransferInfo(success=False,
+                                message=f"目标目录 {new_file.parents[rename_format_level - 1]} 获取失败",
+                                fileitem=fileitem,
+                                fail_list=[fileitem.path],
+                                transfer_type=transfer_type,
+                                need_notify=need_notify)
         # 目标文件
         target_item = target_oper.get_item(new_file)
         if target_item:
@@ -1080,6 +1126,13 @@ class FileManagerModule(_ModuleBase):
                 if episode.episode_number == meta.begin_episode:
                     episode_title = episode.name
                     break
+        # 获取集播出日期
+        episode_date = None
+        if meta.begin_episode and episodes_info:
+            for episode in episodes_info:
+                if episode.episode_number == meta.begin_episode:
+                    episode_date = episode.air_date
+                    break
         return {
             # 标题
@@ -1130,21 +1183,51 @@ class FileManagerModule(_ModuleBase):
             "part": meta.part,
             # 剧集标题
             "episode_title": __convert_invalid_characters(episode_title),
+            # 剧集日期根据episodes_info值获取
+            "episode_date": episode_date,
             # 文件后缀
             "fileExt": file_ext,
             # 自定义占位符
-            "customization": meta.customization
+            "customization": meta.customization,
+            # 文件元数据
+            "__meta__": meta,
+            # 识别的媒体信息
+            "__mediainfo__": mediainfo,
+            # 当前季的全部集信息
+            "__episodes_info__": episodes_info,
         }
 
     @staticmethod
     def get_rename_path(template_string: str, rename_dict: dict, path: Path = None) -> Path:
         """
-        生成重命名后的完整路径
+        生成重命名后的完整路径,支持智能重命名事件
+        :param template_string: Jinja2 模板字符串
+        :param rename_dict: 渲染上下文,用于替换模板中的变量
+        :param path: 可选的基础路径,如果提供,将在其基础上拼接生成的路径
+        :return: 生成的完整路径
         """
         # 创建jinja2模板对象
         template = Template(template_string)
         # 渲染生成的字符串
         render_str = template.render(rename_dict)
+        logger.debug(f"Initial render string: {render_str}")
+        # 发送智能重命名事件
+        event_data = SmartRenameEventData(
+            template_string=template_string,
+            rename_dict=rename_dict,
+            render_str=render_str,
+            path=path
+        )
+        event = eventmanager.send_event(ChainEventType.SmartRename, event_data)
+        # 检查事件返回的结果
+        if event and event.event_data:
+            event_data: SmartRenameEventData = event.event_data
+            if event_data.updated and event_data.updated_str:
+                logger.debug(f"Render string updated by event: "
+                             f"{render_str} -> {event_data.updated_str} (source: {event_data.source})")
+                render_str = event_data.updated_str
         # 目的路径
         if path:
             return path / render_str
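`get_rename_path` renders the rename template with Jinja2 and joins the result onto the base path. A minimal sketch of that core step, without the `SmartRename` event round-trip (the template and dictionary values here are toy examples):

```python
from pathlib import Path
from jinja2 import Template  # third-party: pip install jinja2

rename_dict = {"title": "Some Movie", "year": "2020", "fileExt": ".mkv"}
template_string = "{{title}} ({{year}})/{{title}} ({{year}}){{fileExt}}"
# Render the template, then join the relative result onto the base path
render_str = Template(template_string).render(rename_dict)
print(Path("/library") / render_str)  # /library/Some Movie (2020)/Some Movie (2020).mkv
```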
@@ -1170,17 +1253,19 @@ class FileManagerModule(_ModuleBase):
             # 重命名格式
             rename_format = settings.TV_RENAME_FORMAT \
                 if mediainfo.type == MediaType.TV else settings.MOVIE_RENAME_FORMAT
-            # 获取相对路径(重命名路径)
-            rel_path = self.get_rename_path(
+            # 计算重命名中的文件夹层数
+            rename_format_level = len(rename_format.split("/")) - 1
+            if rename_format_level < 1:
+                continue
+            # 获取路径(重命名路径)
+            target_path = self.get_rename_path(
+                path=dir_path,
                 template_string=rename_format,
                 rename_dict=self.__get_naming_dict(meta=MetaInfo(mediainfo.title),
                                                    mediainfo=mediainfo)
             )
             # 取相对路径的第1层目录
-            if rel_path.parts:
-                media_path = dir_path / rel_path.parts[0]
-            else:
-                continue
+            media_path = target_path.parents[rename_format_level - 1]
             # 检索媒体文件
             fileitem = storage_oper.get_item(media_path)
             if not fileitem:
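Replacing the old MOVIE/TV special case, `Path.parents[rename_format_level - 1]` walks back up exactly as many levels as the rename template created, so the media root is recovered for any template depth. A worked example (paths are illustrative):

```python
from pathlib import Path

target_path = Path("/library/Show (2020)/Season 1/Show - S01E01.mkv")
rename_format_level = 2  # the template had two "/" separators: title/Season x/file
# parents[0] is the immediate parent; parents[1] is one level higher, etc.
media_path = target_path.parents[rename_format_level - 1]
print(media_path)  # /library/Show (2020)
```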


@@ -79,6 +79,8 @@ class StorageBase(metaclass=ABCMeta):
     def create_folder(self, fileitem: schemas.FileItem, name: str) -> Optional[schemas.FileItem]:
         """
         创建目录
+        :param fileitem: 父目录
+        :param name: 目录名
         """
         pass
@@ -122,7 +124,6 @@ class StorageBase(metaclass=ABCMeta):
         下载文件,保存到本地,返回本地临时文件地址
         :param fileitem: 文件项
-        :param path: 文件保存路径
         """
         pass
@@ -144,16 +145,22 @@ class StorageBase(metaclass=ABCMeta):
         pass
 
     @abstractmethod
-    def copy(self, fileitem: schemas.FileItem, target: Union[schemas.FileItem, Path]) -> bool:
+    def copy(self, fileitem: schemas.FileItem, path: Path, new_name: str) -> bool:
         """
         复制文件
+        :param fileitem: 文件项
+        :param path: 目标目录
+        :param new_name: 新文件名
         """
         pass
 
     @abstractmethod
-    def move(self, fileitem: schemas.FileItem, target: Union[schemas.FileItem, Path]) -> bool:
+    def move(self, fileitem: schemas.FileItem, path: Path, new_name: str) -> bool:
         """
         移动文件
+        :param fileitem: 文件项
+        :param path: 目标目录
+        :param new_name: 新文件名
         """
         pass
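The abstract `copy`/`move` contract now takes the destination directory and the new file name as separate parameters, so every backend can rename during the transfer in one call. A local-filesystem sketch of the same contract (`shutil` stands in here for the project's `SystemUtils`; `copy_with_rename` is a hypothetical name):

```python
import shutil
from pathlib import Path

def copy_with_rename(src: Path, path: Path, new_name: str) -> bool:
    """Copy `src` into directory `path` under `new_name`, mirroring copy(fileitem, path, new_name)."""
    try:
        # Ensure the destination directory exists, then copy with the new name
        path.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, path / new_name)
        return True
    except OSError:
        return False
```

With this shape, `copy(item, Path("/library/Movies"), "Movie (2020).mkv")` copies and renames in a single operation instead of a copy followed by a separate rename.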


@@ -61,6 +61,7 @@ class AliPan(StorageBase, metaclass=Singleton):
         """
         初始化 aligo
         """
+
         def show_qrcode(qr_link: str):
             """
             显示二维码
@@ -254,28 +255,9 @@ class AliPan(StorageBase, metaclass=Singleton):
             return []
         # 根目录处理
         if not fileitem or not fileitem.drive_id:
-            return [
-                schemas.FileItem(
-                    storage=self.schema.value,
-                    fileid="root",
-                    drive_id=self.__auth_params.get("resourceDriveId"),
-                    parent_fileid="root",
-                    type="dir",
-                    path="/资源库/",
-                    name="资源库",
-                    basename="资源库"
-                ),
-                schemas.FileItem(
-                    storage=self.schema.value,
-                    fileid="root",
-                    drive_id=self.__auth_params.get("backDriveId"),
-                    parent_fileid="root",
-                    type="dir",
-                    path="/备份盘/",
-                    name="备份盘",
-                    basename="备份盘"
-                )
-            ]
+            items = self.aligo.get_file_list()
+            if items:
+                return [self.__get_fileitem(item) for item in items]
         elif fileitem.type == "file":
             # 文件处理
             file = self.detail(fileitem)
@@ -290,6 +272,8 @@ class AliPan(StorageBase, metaclass=Singleton):
     def create_folder(self, fileitem: schemas.FileItem, name: str) -> Optional[schemas.FileItem]:
         """
         创建目录
+        :param fileitem: 父目录
+        :param name: 目录名
         """
         if not self.aligo:
             return None
@@ -297,21 +281,43 @@ class AliPan(StorageBase, metaclass=Singleton):
         if item:
             if isinstance(item, CreateFileResponse):
                 item = self.aligo.get_file(file_id=item.file_id, drive_id=item.drive_id)
-            return self.__get_fileitem(item)
+            return self.__get_fileitem(item, parent=fileitem.path)
         return None
 
     def get_folder(self, path: Path) -> Optional[schemas.FileItem]:
         """
         根据文件路程获取目录,不存在则创建
         """
-        if not self.aligo:
-            return None
-        item = self.aligo.get_folder_by_path(path=str(path), create_folder=True)
-        if item:
-            if isinstance(item, CreateFileResponse):
-                item = self.aligo.get_file(file_id=item.file_id, drive_id=item.drive_id)
-            return self.__get_fileitem(item)
-        return None
+
+        def __find_dir(_fileitem: schemas.FileItem, _name: str) -> Optional[schemas.FileItem]:
+            """
+            查找下级目录中匹配名称的目录
+            """
+            for sub_folder in self.list(_fileitem):
+                if sub_folder.type != "dir":
+                    continue
+                if sub_folder.name == _name:
+                    return sub_folder
+            return None
+
+        # 是否已存在
+        folder = self.get_item(path)
+        if folder:
+            return folder
+        # 逐级查找和创建目录
+        fileitem = schemas.FileItem(path="/")
+        for part in path.parts:
+            if part == "/":
+                continue
+            dir_file = __find_dir(fileitem, part)
+            if dir_file:
+                fileitem = dir_file
+            else:
+                dir_file = self.create_folder(fileitem, part)
+                if not dir_file:
+                    return None
+                fileitem = dir_file
+        return fileitem
 
     def get_item(self, path: Path) -> Optional[schemas.FileItem]:
         """
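The rewritten `get_folder` no longer relies on aligo's `get_folder_by_path`; it walks `path.parts` level by level, reusing each existing directory and creating only the missing ones. The same lookup-or-create loop against a toy in-memory tree (plain dict nodes stand in for the drive API, so this is only an analogy for the traversal logic):

```python
from pathlib import Path

def ensure_folder(tree: dict, path: Path) -> dict:
    # Descend one path component at a time, creating missing nodes on the way
    node = tree
    for part in path.parts:
        if part == "/":
            continue
        node = node.setdefault(part, {})
    return node

tree = {"资源库": {}}  # "资源库" already exists, the rest gets created
ensure_folder(tree, Path("/资源库/电影/华语电影"))
print(tree)  # {'资源库': {'电影': {'华语电影': {}}}}
```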
@@ -321,7 +327,7 @@ class AliPan(StorageBase, metaclass=Singleton):
             return None
         item = self.aligo.get_file_by_path(path=str(path))
         if item:
-            return self.__get_fileitem(item)
+            return self.__get_fileitem(item, parent=path.parent)
         return None
 
     def delete(self, fileitem: schemas.FileItem) -> bool:
@@ -342,7 +348,7 @@ class AliPan(StorageBase, metaclass=Singleton):
             return None
         item = self.aligo.get_file(file_id=fileitem.fileid, drive_id=fileitem.drive_id)
         if item:
-            return self.__get_fileitem(item)
+            return self.__get_fileitem(item, parent=fileitem.path)
         return None
 
     def rename(self, fileitem: schemas.FileItem, name: str) -> bool:
@@ -370,6 +376,9 @@ class AliPan(StorageBase, metaclass=Singleton):
     def upload(self, fileitem: schemas.FileItem, path: Path, new_name: str = None) -> Optional[schemas.FileItem]:
         """
         上传文件,并标记完成
+        :param fileitem: 上传目录项
+        :param path: 本地文件路径
+        :param new_name: 上传后文件名
         """
         if not self.aligo:
             return None
@@ -380,22 +389,44 @@ class AliPan(StorageBase, metaclass=Singleton):
         if result:
             item = self.aligo.get_file(file_id=result.file_id, drive_id=result.drive_id)
             if item:
-                return self.__get_fileitem(item)
+                return self.__get_fileitem(item, parent=fileitem.path)
         return None
 
-    def move(self, fileitem: schemas.FileItem, target: schemas.FileItem) -> bool:
+    def move(self, fileitem: schemas.FileItem, path: Path, new_name: str) -> bool:
         """
         移动文件
+        :param fileitem: 文件项
+        :param path: 目标目录
+        :param new_name: 新文件名
         """
         if not self.aligo:
             return False
+        target = self.get_folder(path)
+        if not target:
+            return False
         if self.aligo.move_file(file_id=fileitem.fileid, drive_id=fileitem.drive_id,
-                                to_parent_file_id=target.fileid, to_drive_id=target.drive_id):
+                                to_parent_file_id=target.fileid, to_drive_id=target.drive_id,
+                                new_name=new_name):
             return True
         return False
 
-    def copy(self, fileitem: schemas.FileItem, target: schemas.FileItem) -> bool:
-        pass
+    def copy(self, fileitem: schemas.FileItem, path: Path, new_name: str) -> bool:
+        """
+        复制文件
+        :param fileitem: 文件项
+        :param path: 目标目录
+        :param new_name: 新文件名
+        """
+        if not self.aligo:
+            return False
+        target = self.get_folder(path)
+        if not target:
+            return False
+        if self.aligo.copy_file(file_id=fileitem.fileid, drive_id=fileitem.drive_id,
+                                to_parent_file_id=target.fileid, to_drive_id=target.drive_id,
+                                new_name=new_name):
+            return True
+        return False
 
     def link(self, fileitem: schemas.FileItem, target_file: Path) -> bool:
         """


@@ -2,10 +2,10 @@ import json
 import logging
 from datetime import datetime
 from pathlib import Path
-from typing import Optional, Tuple, List, Dict, Union
+from typing import Optional, List, Dict
 
-from requests import Response
 from cachetools import cached, TTLCache
+from requests import Response
 
 from app import schemas
 from app.core.config import settings
@@ -13,10 +13,11 @@ from app.log import logger
 from app.modules.filemanager.storages import StorageBase
 from app.schemas.types import StorageSchema
 from app.utils.http import RequestUtils
+from app.utils.singleton import Singleton
 from app.utils.url import UrlUtils
 
-class Alist(StorageBase):
+class Alist(StorageBase, metaclass=Singleton):
     """
     Alist相关操作
     api文档:https://alist.nn.ci/zh/guide/api
@@ -232,6 +233,8 @@ class Alist(StorageBase):
     ) -> Optional[schemas.FileItem]:
         """
         创建目录
+        :param fileitem: 父目录
+        :param name: 目录名
         """
         path = Path(fileitem.path) / name
         resp: Response = RequestUtils(
@@ -270,14 +273,16 @@ class Alist(StorageBase):
         获取目录,如目录不存在则创建
         """
         folder = self.get_item(path)
+        if folder:
+            return folder
         if not folder:
-            folder = self.create_folder(self.get_parent(schemas.FileItem(
+            folder = self.create_folder(schemas.FileItem(
                 storage=self.schema.value,
                 type="dir",
-                path=path.as_posix() + "/",
+                path=path.parent.as_posix(),
                 name=path.name,
                 basename=path.stem
-            )), path.name)
+            ), path.name)
         return folder
 
     def get_item(
@@ -348,7 +353,7 @@ class Alist(StorageBase):
         result = resp.json()
         if result["code"] != 200:
-            logging.warning(f'获取文件 {path} 失败,错误信息:{result["message"]}')
+            logging.debug(f'获取文件 {path} 失败,错误信息:{result["message"]}')
             return
         return schemas.FileItem(
@@ -376,7 +381,7 @@ class Alist(StorageBase):
         resp: Response = RequestUtils(
             headers=self.__get_header_with_token()
         ).post_res(
-            self.__get_api_url("/api/fs/delete"),
+            self.__get_api_url("/api/fs/remove"),
             json={
                 "dir": Path(fileitem.path).parent.as_posix(),
                 "names": [fileitem.name],
@@ -576,51 +581,21 @@ class Alist(StorageBase):
         """
         return self.get_item(Path(fileitem.path))
 
-    @staticmethod
-    def __get_copy_and_move_data(
-        fileitem: schemas.FileItem, target: Union[schemas.FileItem, Path]
-    ) -> Tuple[str, str, List[str], bool]:
-        """
-        获取复制或移动文件需要的数据
-        :param fileitem: 文件项
-        :param target: 目标文件项或目标路径
-        :return: 源目录,目标目录,文件名列表,是否有效
-        """
-        name = Path(target).name
-        if fileitem.name != name:
-            return "", "", [], False
-        src_dir = Path(fileitem.path).parent.as_posix()
-        if isinstance(target, schemas.FileItem):
-            traget_dir = Path(target.path).parent.as_posix()
-        else:
-            traget_dir = target.parent.as_posix()
-        return src_dir, traget_dir, [name], True
-
-    def copy(
-        self, fileitem: schemas.FileItem, target: Union[schemas.FileItem, Path]
-    ) -> bool:
+    def copy(self, fileitem: schemas.FileItem, path: Path, new_name: str) -> bool:
         """
         复制文件
-
-        源文件名和目标文件名必须相同
+        :param fileitem: 文件项
+        :param path: 目标目录
+        :param new_name: 新文件名
         """
-        src_dir, dst_dir, names, is_valid = self.__get_copy_and_move_data(
-            fileitem, target
-        )
-        if not is_valid:
-            return False
         resp: Response = RequestUtils(
             headers=self.__get_header_with_token()
         ).post_res(
             self.__get_api_url("/api/fs/copy"),
             json={
-                "src_dir": src_dir,
-                "dst_dir": dst_dir,
-                "names": names,
+                "src_dir": Path(fileitem.path).parent.as_posix(),
+                "dst_dir": path.as_posix(),
+                "names": [fileitem.name],
             },
         )
         """
@@ -655,28 +630,31 @@ class Alist(StorageBase):
                 f'复制文件 {fileitem.path} 失败,错误信息:{result["message"]}'
             )
             return False
+        # 重命名
+        if fileitem.name != new_name:
+            self.rename(
+                self.get_item(path / fileitem.name), new_name
+            )
         return True
 
-    def move(
-        self, fileitem: schemas.FileItem, target: Union[schemas.FileItem, Path]
-    ) -> bool:
+    def move(self, fileitem: schemas.FileItem, path: Path, new_name: str) -> bool:
         """
         移动文件
+        :param fileitem: 文件项
+        :param path: 目标目录
+        :param new_name: 新文件名
         """
-        src_dir, dst_dir, names, is_valid = self.__get_copy_and_move_data(
-            fileitem, target
-        )
-        if not is_valid:
-            return False
+        # 先重命名
+        if fileitem.name != new_name:
+            self.rename(fileitem, new_name)
         resp: Response = RequestUtils(
             headers=self.__get_header_with_token()
         ).post_res(
             self.__get_api_url("/api/fs/move"),
             json={
-                "src_dir": src_dir,
-                "dst_dir": dst_dir,
-                "names": names,
+                "src_dir": Path(fileitem.path).parent.as_posix(),
+                "dst_dir": path.as_posix(),
+                "names": [new_name],
             },
        )
        """
@@ -757,15 +735,7 @@ class Alist(StorageBase):
     @staticmethod
     def __parse_timestamp(time_str: str) -> float:
-        # try:
-        #     # 尝试解析带微秒的时间格式
-        #     dt = datetime.strptime(time_str[:26], '%Y-%m-%dT%H:%M:%S.%f')
-        # except ValueError:
-        #     # 如果失败,尝试解析不带微秒的时间格式
-        #     dt = datetime.strptime(time_str, '%Y-%m-%dT%H:%M:%SZ')
-        # 直接使用 ISO 8601 格式解析时间
-        dt = datetime.fromisoformat(time_str)
-        # 返回时间戳
-        return dt.timestamp()
+        """
+        直接使用 ISO 8601 格式解析时间
+        """
+        return datetime.fromisoformat(time_str).timestamp()
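`datetime.fromisoformat` replaces both hand-rolled `strptime` patterns in one call. One caveat worth noting: before Python 3.11, `fromisoformat` rejects the trailing `Z` that the old `'%Y-%m-%dT%H:%M:%SZ'` pattern accepted, so on older interpreters a `Z`-suffixed timestamp would raise `ValueError` unless normalized first:

```python
from datetime import datetime

# Offsets and fractional seconds are handled natively
ts = datetime.fromisoformat("2024-11-24T18:14:09.123456+08:00").timestamp()

# Portable handling of a trailing "Z" (accepted natively only from Python 3.11 on)
raw = "2024-11-24T18:14:09Z"
ts2 = datetime.fromisoformat(raw.replace("Z", "+00:00")).timestamp()
```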


@@ -37,7 +37,7 @@ class LocalStorage(StorageBase):
         """
         return True
 
-    def __get_fileitem(self, path: Path):
+    def __get_fileitem(self, path: Path) -> schemas.FileItem:
         """
         获取文件项
         """
@@ -52,7 +52,7 @@ class LocalStorage(StorageBase):
             modify_time=path.stat().st_mtime,
         )
 
-    def __get_diritem(self, path: Path):
+    def __get_diritem(self, path: Path) -> schemas.FileItem:
         """
         获取目录项
         """
@@ -115,6 +115,8 @@ class LocalStorage(StorageBase):
     def create_folder(self, fileitem: schemas.FileItem, name: str) -> Optional[schemas.FileItem]:
         """
         创建目录
+        :param fileitem: 父目录
+        :param name: 目录名
         """
         if not fileitem.path:
             return None
@@ -192,6 +194,9 @@ class LocalStorage(StorageBase):
     def upload(self, fileitem: schemas.FileItem, path: Path, new_name: str = None) -> Optional[schemas.FileItem]:
         """
         上传文件
+        :param fileitem: 上传目录项
+        :param path: 本地文件路径
+        :param new_name: 上传后文件名
         """
         dir_path = Path(fileitem.path)
         target_path = dir_path / (new_name or path.name)
@@ -201,17 +206,6 @@ class LocalStorage(StorageBase):
             return None
         return self.get_item(target_path)
 
-    def copy(self, fileitem: schemas.FileItem, target_file: Path) -> bool:
-        """
-        复制文件
-        """
-        file_path = Path(fileitem.path)
-        code, message = SystemUtils.copy(file_path, target_file)
-        if code != 0:
-            logger.error(f"复制文件失败:{message}")
-            return False
-        return True
-
     def link(self, fileitem: schemas.FileItem, target_file: Path) -> bool:
         """
         硬链接文件
@@ -234,12 +228,29 @@ class LocalStorage(StorageBase):
             return False
         return True
 
-    def move(self, fileitem: schemas.FileItem, target: Path) -> bool:
+    def copy(self, fileitem: schemas.FileItem, path: Path, new_name: str) -> bool:
         """
-        移动文件
+        复制文件
+        :param fileitem: 文件项
+        :param path: 目标目录
+        :param new_name: 新文件名
         """
         file_path = Path(fileitem.path)
-        code, message = SystemUtils.move(file_path, target)
+        code, message = SystemUtils.copy(file_path, path / new_name)
+        if code != 0:
+            logger.error(f"复制文件失败:{message}")
+            return False
+        return True
+
+    def move(self, fileitem: schemas.FileItem, path: Path, new_name: str) -> bool:
+        """
+        移动文件
+        :param fileitem: 文件项
+        :param path: 目标目录
+        :param new_name: 新文件名
+        """
+        file_path = Path(fileitem.path)
+        code, message = SystemUtils.move(file_path, path / new_name)
         if code != 0:
             logger.error(f"移动文件失败:{message}")
             return False


@@ -139,6 +139,8 @@ class Rclone(StorageBase):
     def create_folder(self, fileitem: schemas.FileItem, name: str) -> Optional[schemas.FileItem]:
         """
         创建目录
+        :param fileitem: 父目录
+        :param name: 目录名
         """
         try:
             retcode = subprocess.run(
@@ -149,10 +151,7 @@ class Rclone(StorageBase):
                 startupinfo=self.__get_hidden_shell()
             ).returncode
             if retcode == 0:
-                ret_fileitem = copy.deepcopy(fileitem)
-                ret_fileitem.path = f"{fileitem.path}/{name}/"
-                ret_fileitem.name = name
-                return ret_fileitem
+                return self.get_item(Path(f"{fileitem.path}/{name}"))
         except Exception as err:
             logger.error(f"rclone创建目录失败:{err}")
         return None
@@ -166,13 +165,17 @@ class Rclone(StorageBase):
             """
             查找下级目录中匹配名称的目录
             """
-            for sub_file in self.list(_fileitem):
-                if sub_file.type != "dir":
+            for sub_folder in self.list(_fileitem):
+                if sub_folder.type != "dir":
                     continue
-                if sub_file.name == _name:
-                    return sub_file
+                if sub_folder.name == _name:
+                    return sub_folder
             return None
 
+        # 是否已存在
+        folder = self.get_item(path)
+        if folder:
+            return folder
         # 逐级查找和创建目录
         fileitem = schemas.FileItem(path="/")
         for part in path.parts:
@@ -269,6 +272,9 @@ class Rclone(StorageBase):
     def upload(self, fileitem: schemas.FileItem, path: Path, new_name: str = None) -> Optional[schemas.FileItem]:
         """
         上传文件
+        :param fileitem: 上传目录项
+        :param path: 本地文件路径
+        :param new_name: 上传后文件名
         """
         try:
             new_path = Path(fileitem.path) / (new_name or path.name)
@@ -306,16 +312,19 @@ class Rclone(StorageBase):
             logger.error(f"rclone获取文件详情失败:{err}")
         return None
 
-    def move(self, fileitem: schemas.FileItem, target: Path) -> bool:
+    def move(self, fileitem: schemas.FileItem, path: Path, new_name: str) -> bool:
         """
-        移动文件,target_file格式:rclone:path
+        移动文件
+        :param fileitem: 文件项
+        :param path: 目标目录
+        :param new_name: 新文件名
         """
         try:
             retcode = subprocess.run(
                 [
                     'rclone', 'moveto',
                     f'MP:{fileitem.path}',
-                    f'MP:{target}'
+                    f'MP:{path / new_name}'
                 ],
                 startupinfo=self.__get_hidden_shell()
             ).returncode
@@ -325,8 +334,27 @@ class Rclone(StorageBase):
             logger.error(f"rclone移动文件失败:{err}")
         return False
 
-    def copy(self, fileitem: schemas.FileItem, target_file: Path) -> bool:
-        pass
+    def copy(self, fileitem: schemas.FileItem, path: Path, new_name: str) -> bool:
+        """
+        复制文件
+        :param fileitem: 文件项
+        :param path: 目标目录
+        :param new_name: 新文件名
+        """
+        try:
+            retcode = subprocess.run(
+                [
+                    'rclone', 'copyto',
+                    f'MP:{fileitem.path}',
+                    f'MP:{path / new_name}'
+                ],
+                startupinfo=self.__get_hidden_shell()
+            ).returncode
+            if retcode == 0:
+                return True
+        except Exception as err:
+            logger.error(f"rclone复制文件失败:{err}")
+        return False
 
     def link(self, fileitem: schemas.FileItem, target_file: Path) -> bool:
         pass
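rclone's `copyto`/`moveto` subcommands (rather than `copy`/`move`) are what make the `new_name` parameter work here: they address a single file and an exact destination path, so the rename happens in the same rclone invocation. Building the command list (the `MP:` remote name follows this module's convention; `build_rclone_cmd` is an illustrative helper, not project code):

```python
from pathlib import Path

def build_rclone_cmd(verb: str, src_path: str, path: Path, new_name: str) -> list:
    # verb is "copyto" or "moveto"; the destination is the full target path, new name included
    return ['rclone', verb, f'MP:{src_path}', f'MP:{path / new_name}']

print(build_rclone_cmd('copyto', '/films/a.mkv', Path('/library/Movies'), 'A (2020).mkv'))
# ['rclone', 'copyto', 'MP:/films/a.mkv', 'MP:/library/Movies/A (2020).mkv']
```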


@@ -225,6 +225,9 @@ class U115Pan(StorageBase, metaclass=Singleton):
""" """
if not self.client: if not self.client:
return None return None
folder = self.get_item(path)
if folder:
return folder
try: try:
     result = self.client.fs.makedirs(path, exist_ok=True)
     if result:
@@ -336,6 +339,9 @@ class U115Pan(StorageBase, metaclass=Singleton):
     def upload(self, fileitem: schemas.FileItem, path: Path, new_name: str = None) -> Optional[schemas.FileItem]:
         """
         上传文件
+        :param fileitem: 上传目录项
+        :param path: 本地文件路径
+        :param new_name: 上传后文件名
         """
         if not self.client:
             return None
@@ -358,32 +364,38 @@ class U115Pan(StorageBase, metaclass=Singleton):
             logger.error(f"115上传文件失败{str(e)}")
             return None
 
-    def move(self, fileitem: schemas.FileItem, target: schemas.FileItem) -> bool:
-        """
-        移动文件
-        """
-        if not self.client:
-            return False
-        try:
-            self.client.fs.move(fileitem.path, target.path)
-            return True
-        except Exception as e:
-            logger.error(f"115移动文件失败{str(e)}")
-            return False
-
-    def copy(self, fileitem: schemas.FileItem, target_file: Path) -> bool:
+    def copy(self, fileitem: schemas.FileItem, path: Path, new_name: str) -> bool:
         """
         复制文件
+        :param fileitem: 文件项
+        :param path: 目标目录
+        :param new_name: 新文件名
         """
         if not self.client:
             return False
         try:
-            self.client.fs.copy(fileitem.path, target_file)
+            self.client.fs.copy(fileitem.path, path / new_name)
             return True
         except Exception as e:
             logger.error(f"115复制文件失败{str(e)}")
             return False
 
+    def move(self, fileitem: schemas.FileItem, path: Path, new_name: str) -> bool:
+        """
+        移动文件
+        :param fileitem: 文件项
+        :param path: 目标目录
+        :param new_name: 新文件名
+        """
+        if not self.client:
+            return False
+        try:
+            self.client.fs.move(fileitem.path, path / new_name)
+            return True
+        except Exception as e:
+            logger.error(f"115移动文件失败{str(e)}")
+            return False
+
     def link(self, fileitem: schemas.FileItem, target_file: Path) -> bool:
         pass
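The refactor above replaces the `target: FileItem` / `target_file: Path` parameters with a target directory plus a new file name, joined with pathlib's `/` operator. A minimal sketch of that path composition (the helper name is mine, not the project's):

```python
from pathlib import Path

def compose_target(path: Path, new_name: str) -> Path:
    """Join a target directory and a new file name the way the
    refactored copy()/move() now do, via pathlib's `/` operator."""
    return path / new_name

# Renaming while moving becomes a single call with a different new_name
target = compose_target(Path("/media/films"), "Movie (2024).mkv")
```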


@@ -21,14 +21,30 @@ class YemaSpider:
     _cookie = None
     _ua = None
     _size = 40
-    _searchurl = "%sapi/torrent/fetchCategoryOpenTorrentList"
+    _searchurl = "%sapi/torrent/fetchOpenTorrentList"
     _downloadurl = "%sapi/torrent/download?id=%s"
     _pageurl = "%s#/torrent/detail/%s/"
     _timeout = 15
     # 分类
-    _movie_category = 4
-    _tv_category = 5
+    _movie_category = [4]
+    _tv_category = [5, 13, 14, 17, 15, 6, 16]
+    # 标签 https://wiki.yemapt.org/developer/constants
+    _labels = {
+        "1": "禁转",
+        "2": "首发",
+        "3": "官方",
+        "4": "自制",
+        "5": "国语",
+        "6": "中字",
+        "7": "粤语",
+        "8": "英字",
+        "9": "HDR10",
+        "10": "杜比视界",
+        "11": "分集",
+        "12": "完结",
+    }
 
     def __init__(self, indexer: CommentedMap):
         self.systemconfig = SystemConfigOper()
@@ -47,14 +63,7 @@ class YemaSpider:
         """
         搜索
         """
-        if not mtype:
-            categoryId = self._movie_category
-        elif mtype == MediaType.TV:
-            categoryId = self._tv_category
-        else:
-            categoryId = self._movie_category
         params = {
-            "categoryId": categoryId,
             "pageParam": {
                 "current": page + 1,
                 "pageSize": self._size,
@@ -62,6 +71,12 @@ class YemaSpider:
             },
             "sorter": {}
         }
+        # 新接口可不传 categoryId 参数
+        # if mtype == MediaType.MOVIE:
+        #     params.update({
+        #         "categoryId": self._movie_category,
+        #     })
         if keyword:
             params.update({
                 "keyword": keyword,
@@ -82,17 +97,27 @@ class YemaSpider:
             results = res.json().get('data', []) or []
             for result in results:
                 category_value = result.get('categoryId')
-                if category_value == self._tv_category:
+                if category_value in self._tv_category:
                     category = MediaType.TV.value
-                elif category_value == self._movie_category:
+                elif category_value in self._movie_category:
                     category = MediaType.MOVIE.value
                 else:
                     category = MediaType.UNKNOWN.value
+                torrentLabelIds = result.get('tagList', []) or []
+                torrentLabels = []
+                for labelId in torrentLabelIds:
+                    if self._labels.get(labelId) is not None:
+                        torrentLabels.append(self._labels.get(labelId))
                 torrent = {
                     'title': result.get('showName'),
                     'description': result.get('shortDesc'),
                     'enclosure': self.__get_download_url(result.get('id')),
-                    'pubdate': StringUtils.unify_datetime_str(result.get('gmtCreate')),
+                    # 使用上架时间,而不是用户发布时间,上架时间即其他用户可见时间
+                    'pubdate': StringUtils.unify_datetime_str(result.get('listingTime')),
                     'size': result.get('fileSize'),
                     'seeders': result.get('seedNum'),
                     'peers': result.get('leechNum'),
@@ -101,7 +126,7 @@ class YemaSpider:
                     'uploadvolumefactor': self.__get_uploadvolumefactor(result.get('uploadPromotion')),
                     'freedate': StringUtils.unify_datetime_str(result.get('downloadPromotionEndTime')),
                     'page_url': self._pageurl % (self._domain, result.get('id')),
-                    'labels': [],
+                    'labels': torrentLabels,
                     'category': category
                 }
                 torrents.append(torrent)
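The new tag handling is a plain id-to-label lookup against the `_labels` table, skipping ids the table does not know. A compact sketch of the same lookup (the table below is a truncated stand-in for the spider's `_labels`):

```python
# Truncated stand-in for YemaSpider._labels (ids are strings in the API)
LABELS = {"1": "禁转", "2": "首发", "5": "国语", "6": "中字"}

def resolve_labels(tag_ids):
    """Translate a torrent's tagList ids into display labels,
    silently dropping unknown ids instead of raising."""
    return [LABELS[i] for i in tag_ids if i in LABELS]
```

Guarding the lookup (here with `in`, in the diff with `.get(...) is not None`) is what the `tagList`-none fix is about: an unknown or missing id must not break result parsing.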


@@ -1,5 +1,5 @@
 from pathlib import Path
-from typing import Set, Tuple, Optional, Union, List
+from typing import Set, Tuple, Optional, Union, List, Dict
 
 from qbittorrentapi import TorrentFilesList
 from torrentool.torrent import Torrent
@@ -124,7 +124,8 @@ class QbittorrentModule(_ModuleBase, _DownloaderBase[Qbittorrent]):
                                           is_paused=is_paused,
                                           tag=tags,
                                           cookie=cookie,
-                                          category=category
+                                          category=category,
+                                          ignore_category_check=False
                                           )
         if not state:
             # 读取种子的名称
@@ -203,66 +204,75 @@ class QbittorrentModule(_ModuleBase, _DownloaderBase[Qbittorrent]):
         :return: 下载器中符合状态的种子列表
         """
         # 获取下载器
-        server: Qbittorrent = self.get_instance(downloader)
-        if not server:
-            return None
+        if downloader:
+            server: Qbittorrent = self.get_instance(downloader)
+            if not server:
+                return None
+            servers = {downloader: server}
+        else:
+            servers: Dict[str, Qbittorrent] = self.get_instances()
         ret_torrents = []
         if hashs:
             # 按Hash获取
-            torrents, _ = server.get_torrents(ids=hashs, tags=settings.TORRENT_TAG)
-            for torrent in torrents or []:
-                content_path = torrent.get("content_path")
-                if content_path:
-                    torrent_path = Path(content_path)
-                else:
-                    torrent_path = Path(torrent.get('save_path')) / torrent.get('name')
-                ret_torrents.append(TransferTorrent(
-                    title=torrent.get('name'),
-                    path=torrent_path,
-                    hash=torrent.get('hash'),
-                    size=torrent.get('total_size'),
-                    tags=torrent.get('tags')
-                ))
+            for name, server in servers.items():
+                torrents, _ = server.get_torrents(ids=hashs, tags=settings.TORRENT_TAG)
+                for torrent in torrents or []:
+                    content_path = torrent.get("content_path")
+                    if content_path:
+                        torrent_path = Path(content_path)
+                    else:
+                        torrent_path = Path(torrent.get('save_path')) / torrent.get('name')
+                    ret_torrents.append(TransferTorrent(
+                        downloader=name,
+                        title=torrent.get('name'),
+                        path=torrent_path,
+                        hash=torrent.get('hash'),
+                        size=torrent.get('total_size'),
+                        tags=torrent.get('tags')
+                    ))
         elif status == TorrentStatus.TRANSFER:
             # 获取已完成且未整理的
-            torrents = server.get_completed_torrents(tags=settings.TORRENT_TAG)
-            for torrent in torrents or []:
-                tags = torrent.get("tags") or []
-                if "已整理" in tags:
-                    continue
-                # 内容路径
-                content_path = torrent.get("content_path")
-                if content_path:
-                    torrent_path = Path(content_path)
-                else:
-                    torrent_path = torrent.get('save_path') / torrent.get('name')
-                ret_torrents.append(TransferTorrent(
-                    title=torrent.get('name'),
-                    path=torrent_path,
-                    hash=torrent.get('hash'),
-                    tags=torrent.get('tags')
-                ))
+            for name, server in servers.items():
+                torrents = server.get_completed_torrents(tags=settings.TORRENT_TAG)
+                for torrent in torrents or []:
+                    tags = torrent.get("tags") or []
+                    if "已整理" in tags:
+                        continue
+                    # 内容路径
+                    content_path = torrent.get("content_path")
+                    if content_path:
+                        torrent_path = Path(content_path)
+                    else:
+                        torrent_path = torrent.get('save_path') / torrent.get('name')
+                    ret_torrents.append(TransferTorrent(
+                        downloader=name,
+                        title=torrent.get('name'),
+                        path=torrent_path,
+                        hash=torrent.get('hash'),
+                        tags=torrent.get('tags')
+                    ))
         elif status == TorrentStatus.DOWNLOADING:
             # 获取正在下载的任务
-            torrents = server.get_downloading_torrents(tags=settings.TORRENT_TAG)
-            for torrent in torrents or []:
-                meta = MetaInfo(torrent.get('name'))
-                ret_torrents.append(DownloadingTorrent(
-                    hash=torrent.get('hash'),
-                    title=torrent.get('name'),
-                    name=meta.name,
-                    year=meta.year,
-                    season_episode=meta.season_episode,
-                    progress=torrent.get('progress') * 100,
-                    size=torrent.get('total_size'),
-                    state="paused" if torrent.get('state') in ("paused", "pausedDL") else "downloading",
-                    dlspeed=StringUtils.str_filesize(torrent.get('dlspeed')),
-                    upspeed=StringUtils.str_filesize(torrent.get('upspeed')),
-                    left_time=StringUtils.str_secends(
-                        (torrent.get('total_size') - torrent.get('completed')) / torrent.get('dlspeed')) if torrent.get(
-                        'dlspeed') > 0 else ''
-                ))
+            for name, server in servers.items():
+                torrents = server.get_downloading_torrents(tags=settings.TORRENT_TAG)
+                for torrent in torrents or []:
+                    meta = MetaInfo(torrent.get('name'))
+                    ret_torrents.append(DownloadingTorrent(
+                        downloader=name,
+                        hash=torrent.get('hash'),
+                        title=torrent.get('name'),
+                        name=meta.name,
+                        year=meta.year,
+                        season_episode=meta.season_episode,
+                        progress=torrent.get('progress') * 100,
+                        size=torrent.get('total_size'),
+                        state="paused" if torrent.get('state') in ("paused", "pausedDL") else "downloading",
+                        dlspeed=StringUtils.str_filesize(torrent.get('dlspeed')),
+                        upspeed=StringUtils.str_filesize(torrent.get('upspeed')),
+                        left_time=StringUtils.str_secends(
                            (torrent.get('total_size') - torrent.get('completed')) / torrent.get('dlspeed')) if torrent.get(
                            'dlspeed') > 0 else ''
+                    ))
         else:
             return None
         return ret_torrents


@@ -251,6 +251,7 @@ class Qbittorrent:
         :param category: 种子分类
         :param download_dir: 下载路径
         :param cookie: 站点Cookie用于辅助下载种子
+        :param kwargs: 可选参数,如 ignore_category_check 以及 QB相关参数
         :return: bool
         """
         if not self.qbc or not content:
@@ -276,13 +277,16 @@ class Qbittorrent:
         else:
             tags = None
-        # 分类自动管理
-        if category and self._category:
-            is_auto = True
+        # 如果忽略分类检查,则直接使用传入的分类值,否则,仅在分类存在且启用了自动管理时才传递参数
+        ignore_category_check = kwargs.pop("ignore_category_check", True)
+        if ignore_category_check:
+            is_auto = self._category
         else:
-            is_auto = False
-            category = None
+            if category and self._category:
+                is_auto = True
+            else:
+                is_auto = False
+                category = None
         try:
             # 添加下载
             qbc_ret = self.qbc.torrents_add(urls=urls,
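`ignore_category_check` arrives through `**kwargs` and is popped with a default of `True`, so existing callers keep the old behavior while `QbittorrentModule` can opt out by passing `False`. A minimal sketch of just that decision (the function name is mine; it returns the `(category, is_auto)` pair the code would hand to `torrents_add`):

```python
def resolve_category(category, auto_managed, **kwargs):
    """When ignore_category_check is set (the default here), trust the
    caller's category and let auto-management follow the client setting;
    otherwise only pass a category that is both given and auto-managed."""
    ignore_category_check = kwargs.pop("ignore_category_check", True)
    if ignore_category_check:
        return category, auto_managed
    if category and auto_managed:
        return category, True
    return None, False
```

`kwargs.pop(..., default)` keeps the flag out of the arguments forwarded to the qBittorrent client, which would otherwise reject an unknown keyword.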


@@ -30,7 +30,7 @@ class TmdbScraper:
             # 电影元数据文件
             doc = self.__gen_movie_nfo_file(mediainfo=mediainfo)
         else:
-            if season:
+            if season is not None:
                 # 查询季信息
                 seasoninfo = self.tmdb.get_tv_season_detail(mediainfo.tmdb_id, meta.begin_season)
                 if episode:
@@ -57,7 +57,7 @@ class TmdbScraper:
         :param episode: 集号
         """
         images = {}
-        if season:
+        if season is not None:
             # 只需要集的图片
             if episode:
                 # 集的图片
@@ -104,6 +104,7 @@ class TmdbScraper:
             url = f"https://{settings.TMDB_IMAGE_DOMAIN}/t/p/original{seasoninfo.get('poster_path')}"
             image_name = f"season{sea_seq}-poster{ext}"
             return image_name, url
+        return "", ""
 
     @staticmethod
     def __get_episode_detail(seasoninfo: dict, episode: int) -> dict:
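The `if season:` to `if season is not None:` change matters because TMDB stores specials as season 0, and `0` is falsy in Python, so the bare truthiness check silently skipped the season branch for specials:

```python
def pick_branch(season):
    """Take the season branch for any real season number, including 0
    (specials); only an absent season falls through to the show branch."""
    return "season" if season is not None else "show"

assert pick_branch(0) == "season"   # `if season:` would wrongly say "show"
assert pick_branch(None) == "show"
assert pick_branch(3) == "season"
```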


@@ -15,7 +15,7 @@ from app.schemas.types import MediaType
 lock = RLock()
 
 CACHE_EXPIRE_TIMESTAMP_STR = "cache_expire_timestamp"
-EXPIRE_TIMESTAMP = settings.CACHE_CONF.get('meta')
+EXPIRE_TIMESTAMP = settings.CACHE_CONF["meta"]
 
 
 class TmdbCache(metaclass=Singleton):
@@ -75,7 +75,7 @@ class TmdbCache(metaclass=Singleton):
         @return: 被删除的缓存内容
         """
         with lock:
-            return self._meta_data.pop(key, None)
+            return self._meta_data.pop(key, {})
 
     def delete_by_tmdbid(self, tmdbid: int) -> None:
         """
@@ -138,14 +138,14 @@ class TmdbCache(metaclass=Singleton):
         if cache_year:
             cache_year = cache_year[:4]
         self._meta_data[self.__get_key(meta)] = {
             "id": info.get("id"),
             "type": info.get("media_type"),
             "year": cache_year,
             "title": cache_title,
             "poster_path": info.get("poster_path"),
             "backdrop_path": info.get("backdrop_path"),
             CACHE_EXPIRE_TIMESTAMP_STR: int(time.time()) + EXPIRE_TIMESTAMP
         }
         elif info is not None:
             # None时不缓存此时代表网络错误允许重复请求
             self._meta_data[self.__get_key(meta)] = {'id': 0}
@@ -164,7 +164,7 @@ class TmdbCache(metaclass=Singleton):
             return
         with open(self._meta_path, 'wb') as f:
-            pickle.dump(new_meta_data, f, pickle.HIGHEST_PROTOCOL)
+            pickle.dump(new_meta_data, f, pickle.HIGHEST_PROTOCOL)  # type: ignore
 
     def _random_sample(self, new_meta_data: dict) -> bool:
         """
""" """


@@ -1,9 +1,9 @@
 import traceback
-from functools import lru_cache
 from typing import Optional, List
 from urllib.parse import quote
 
 import zhconv
+from cachetools import TTLCache, cached
 from lxml import etree
 
 from app.core.config import settings
@@ -27,8 +27,6 @@ class TmdbApi:
         self.tmdb.domain = settings.TMDB_API_DOMAIN
         # 开启缓存
         self.tmdb.cache = True
-        # 缓存大小
-        self.tmdb.REQUEST_CACHE_MAXSIZE = settings.CACHE_CONF.get('tmdb')
         # APIKEY
         self.tmdb.api_key = settings.TMDB_API_KEY
         # 语种
@@ -466,7 +464,7 @@ class TmdbApi:
         return ret_info
 
-    @lru_cache(maxsize=settings.CACHE_CONF.get('tmdb'))
+    @cached(cache=TTLCache(maxsize=settings.CACHE_CONF["tmdb"], ttl=settings.CACHE_CONF["meta"]))
     def match_web(self, name: str, mtype: MediaType) -> Optional[dict]:
         """
         搜索TMDB网站直接抓取结果结果只有一条时才返回
@@ -1292,7 +1290,7 @@ class TmdbApi:
         for group_episode in group_episodes:
             order = group_episode.get('order')
             episodes = group_episode.get('episodes')
-            if not episodes or not order:
+            if not episodes:
                 continue
             # 当前季第一季时间
             first_date = episodes[0].get("air_date")
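The caching change swaps `functools.lru_cache` (size-bounded only) for cachetools' `@cached(cache=TTLCache(...))`, which additionally expires entries after a fixed lifetime. A stdlib-only sketch of what the TTL adds, as a simplified stand-in for the cachetools decorator (not its actual implementation):

```python
import time
from functools import wraps

def ttl_cached(ttl, maxsize=128):
    """Simplified stand-in for @cached(cache=TTLCache(maxsize, ttl)):
    cache by positional args, expire entries older than `ttl` seconds,
    and evict the stalest entry once `maxsize` is exceeded."""
    def deco(fn):
        cache = {}  # args -> (value, stored_at)
        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = cache.get(args)
            if hit is not None and now - hit[1] < ttl:
                return hit[0]
            value = fn(*args)
            cache[args] = (value, now)
            if len(cache) > maxsize:
                # drop the entry with the oldest timestamp
                cache.pop(min(cache, key=lambda k: cache[k][1]))
            return value
        return wrapper
    return deco

calls = []

@ttl_cached(ttl=60)
def fetch(x):
    calls.append(x)
    return x * 2

assert fetch(3) == 6 and fetch(3) == 6  # second call served from cache
assert calls == [3]
```

A TTL also sidesteps a known `lru_cache`-on-methods issue: without expiry, the bound cache keeps `self` (and stale API responses) alive for the process lifetime.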


@@ -4,11 +4,12 @@ import logging
 import os
 import time
 from datetime import datetime
-from functools import lru_cache
 
 import requests
 import requests.exceptions
+from cachetools import TTLCache, cached
 
+from app.core.config import settings
 from app.utils.http import RequestUtils
 from .exceptions import TMDbException
@@ -24,7 +25,6 @@ class TMDb(object):
     TMDB_CACHE_ENABLED = "TMDB_CACHE_ENABLED"
     TMDB_PROXIES = "TMDB_PROXIES"
     TMDB_DOMAIN = "TMDB_DOMAIN"
-    REQUEST_CACHE_MAXSIZE = None
 
     _req = None
     _session = None
@@ -137,7 +137,7 @@ class TMDb(object):
     def cache(self, cache):
         os.environ[self.TMDB_CACHE_ENABLED] = str(cache)
 
-    @lru_cache(maxsize=REQUEST_CACHE_MAXSIZE)
+    @cached(cache=TTLCache(maxsize=settings.CACHE_CONF["tmdb"], ttl=settings.CACHE_CONF["meta"]))
     def cached_request(self, method, url, data, json,
                        _ts=datetime.strftime(datetime.now(), '%Y%m%d')):
         """


@@ -1,5 +1,5 @@
 from pathlib import Path
-from typing import Set, Tuple, Optional, Union, List
+from typing import Set, Tuple, Optional, Union, List, Dict
 
 from torrentool.torrent import Torrent
 from transmission_rpc import File
@@ -196,60 +196,70 @@ class TransmissionModule(_ModuleBase, _DownloaderBase[Transmission]):
         :return: 下载器中符合状态的种子列表
         """
         # 获取下载器
-        server: Transmission = self.get_instance(downloader)
-        if not server:
-            return None
+        if downloader:
+            server: Transmission = self.get_instance(downloader)
+            if not server:
+                return None
+            servers = {downloader: server}
+        else:
+            servers: Dict[str, Transmission] = self.get_instances()
         ret_torrents = []
         if hashs:
             # 按Hash获取
-            torrents, _ = server.get_torrents(ids=hashs, tags=settings.TORRENT_TAG)
-            for torrent in torrents or []:
-                ret_torrents.append(TransferTorrent(
-                    title=torrent.name,
-                    path=Path(torrent.download_dir) / torrent.name,
-                    hash=torrent.hashString,
-                    size=torrent.total_size,
-                    tags=",".join(torrent.labels or [])
-                ))
+            for name, server in servers.items():
+                torrents, _ = server.get_torrents(ids=hashs, tags=settings.TORRENT_TAG)
+                for torrent in torrents or []:
+                    ret_torrents.append(TransferTorrent(
+                        downloader=name,
+                        title=torrent.name,
+                        path=Path(torrent.download_dir) / torrent.name,
+                        hash=torrent.hashString,
+                        size=torrent.total_size,
+                        tags=",".join(torrent.labels or [])
+                    ))
         elif status == TorrentStatus.TRANSFER:
             # 获取已完成且未整理的
-            torrents = server.get_completed_torrents(tags=settings.TORRENT_TAG)
-            for torrent in torrents or []:
-                # 含"已整理"tag的不处理
-                if "已整理" in torrent.labels or []:
-                    continue
-                # 下载路径
-                path = torrent.download_dir
-                # 无法获取下载路径的不处理
-                if not path:
-                    logger.debug(f"未获取到 {torrent.name} 下载保存路径")
-                    continue
-                ret_torrents.append(TransferTorrent(
-                    title=torrent.name,
-                    path=Path(torrent.download_dir) / torrent.name,
-                    hash=torrent.hashString,
-                    tags=",".join(torrent.labels or [])
-                ))
+            for name, server in servers.items():
+                torrents = server.get_completed_torrents(tags=settings.TORRENT_TAG)
+                for torrent in torrents or []:
+                    # 含"已整理"tag的不处理
+                    if "已整理" in torrent.labels or []:
+                        continue
+                    # 下载路径
+                    path = torrent.download_dir
+                    # 无法获取下载路径的不处理
+                    if not path:
+                        logger.debug(f"未获取到 {torrent.name} 下载保存路径")
+                        continue
+                    ret_torrents.append(TransferTorrent(
+                        downloader=name,
+                        title=torrent.name,
+                        path=Path(torrent.download_dir) / torrent.name,
+                        hash=torrent.hashString,
+                        tags=",".join(torrent.labels or [])
+                    ))
         elif status == TorrentStatus.DOWNLOADING:
             # 获取正在下载的任务
-            torrents = server.get_downloading_torrents(tags=settings.TORRENT_TAG)
-            for torrent in torrents or []:
-                meta = MetaInfo(torrent.name)
-                dlspeed = torrent.rate_download if hasattr(torrent, "rate_download") else torrent.rateDownload
-                upspeed = torrent.rate_upload if hasattr(torrent, "rate_upload") else torrent.rateUpload
-                ret_torrents.append(DownloadingTorrent(
-                    hash=torrent.hashString,
-                    title=torrent.name,
-                    name=meta.name,
-                    year=meta.year,
-                    season_episode=meta.season_episode,
-                    progress=torrent.progress,
-                    size=torrent.total_size,
-                    state="paused" if torrent.status == "stopped" else "downloading",
-                    dlspeed=StringUtils.str_filesize(dlspeed),
-                    upspeed=StringUtils.str_filesize(upspeed),
-                    left_time=StringUtils.str_secends(torrent.left_until_done / dlspeed) if dlspeed > 0 else ''
-                ))
+            for name, server in servers.items():
+                torrents = server.get_downloading_torrents(tags=settings.TORRENT_TAG)
+                for torrent in torrents or []:
+                    meta = MetaInfo(torrent.name)
+                    dlspeed = torrent.rate_download if hasattr(torrent, "rate_download") else torrent.rateDownload
+                    upspeed = torrent.rate_upload if hasattr(torrent, "rate_upload") else torrent.rateUpload
+                    ret_torrents.append(DownloadingTorrent(
+                        downloader=name,
+                        hash=torrent.hashString,
+                        title=torrent.name,
+                        name=meta.name,
+                        year=meta.year,
+                        season_episode=meta.season_episode,
+                        progress=torrent.progress,
+                        size=torrent.total_size,
+                        state="paused" if torrent.status == "stopped" else "downloading",
+                        dlspeed=StringUtils.str_filesize(dlspeed),
+                        upspeed=StringUtils.str_filesize(upspeed),
+                        left_time=StringUtils.str_secends(torrent.left_until_done / dlspeed) if dlspeed > 0 else ''
+                    ))
         else:
             return None
         return ret_torrents


@@ -280,9 +280,9 @@ class Monitor(metaclass=Singleton):
         """
         获取BDMV目录的上级目录
         """
-        for parent in _path.parents:
-            if parent.name == "BDMV":
-                return parent.parent
+        for p in _path.parents:
+            if p.name == "BDMV":
+                return p.parent
         return None
 
     # 全程加锁


@@ -174,9 +174,6 @@ class Scheduler(metaclass=Singleton):
         # 停止定时服务
         self.stop()
 
-        # 用户认证立即执行一次
-        user_auth()
-
         # 调试模式不启动定时服务
         if settings.DEV:
             return
@@ -329,7 +326,7 @@ class Scheduler(metaclass=Singleton):
             "interval",
             id="clear_cache",
             name="缓存清理",
-            hours=settings.CACHE_CONF.get("meta") / 3600,
+            hours=settings.CACHE_CONF["meta"] / 3600,
             kwargs={
                 'job_id': 'clear_cache'
             }


@@ -1,4 +1,5 @@
-from typing import Optional, Dict
+from pathlib import Path
+from typing import Optional, Dict, Any
 
 from pydantic import BaseModel, Field, root_validator
@@ -114,3 +115,31 @@ class CommandRegisterEventData(ChainEventData):
     # 输出参数
     cancel: bool = Field(False, description="是否取消注册")
     source: str = Field("未知拦截源", description="拦截源")
+
+
+class SmartRenameEventData(ChainEventData):
+    """
+    SmartRename 事件的数据模型
+
+    Attributes:
+        # 输入参数
+        template_string (str): Jinja2 模板字符串
+        rename_dict (dict): 渲染上下文
+        render_str (str): 渲染生成的字符串
+        path (Optional[Path]): 当前文件的目标路径
+
+        # 输出参数
+        updated (bool): 是否已更新,默认值为 False
+        updated_str (str): 更新后的字符串
+        source (str): 拦截源,默认值为 "未知拦截源"
+    """
+    # 输入参数
+    template_string: str = Field(..., description="模板字符串")
+    rename_dict: Dict[str, Any] = Field(..., description="渲染上下文")
+    path: Optional[Path] = Field(None, description="文件的目标路径")
+    render_str: str = Field(..., description="渲染生成的字符串")
+
+    # 输出参数
+    updated: bool = Field(False, description="是否已更新")
+    updated_str: Optional[str] = Field(None, description="更新后的字符串")
+    source: Optional[str] = Field("未知拦截源", description="拦截源")
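`SmartRenameEventData` lets a listener intercept rename-template rendering and substitute its own result via the `updated`/`updated_str` output fields. A rough sketch of that flow, using `string.Template` as a stand-in for the project's Jinja2 renderer and a dataclass instead of the pydantic model:

```python
from dataclasses import dataclass
from string import Template  # stand-in for the Jinja2 renderer
from typing import Optional

@dataclass
class SmartRenameData:
    """Simplified mirror of SmartRenameEventData."""
    template_string: str      # input: rename template
    rename_dict: dict         # input: render context
    render_str: str = ""      # input: default rendering result
    updated: bool = False     # output: a listener took over
    updated_str: Optional[str] = None  # output: replacement string
    source: str = "未知拦截源"

def smart_rename(data: SmartRenameData, listeners) -> str:
    """Render the template, then offer the event to listeners; the
    first listener that sets `updated` wins, mirroring a chain event."""
    data.render_str = Template(data.template_string).substitute(data.rename_dict)
    for listener in listeners:
        listener(data)
        if data.updated and data.updated_str:
            return data.updated_str
    return data.render_str
```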


@@ -10,6 +10,7 @@ class TransferTorrent(BaseModel):
     """
     待转移任务信息
     """
+    downloader: Optional[str] = None
     title: Optional[str] = None
     path: Optional[Path] = None
     hash: Optional[str] = None
@@ -22,6 +23,7 @@ class DownloadingTorrent(BaseModel):
     """
     下载中任务信息
     """
+    downloader: Optional[str] = None
     hash: Optional[str] = None
     title: Optional[str] = None
     name: Optional[str] = None


@@ -60,14 +60,16 @@ class EventType(Enum):
 
 # 同步链式事件
 class ChainEventType(Enum):
-    # 名称识别请求
+    # 名称识别
     NameRecognize = "name.recognize"
-    # 认证验证请求
+    # 认证验证
     AuthVerification = "auth.verification"
-    # 认证拦截请求
+    # 认证拦截
     AuthIntercept = "auth.intercept"
-    # 命令注册请求
+    # 命令注册
     CommandRegister = "command.register"
+    # 智能重命名
+    SmartRename = "SmartRename"
 
 
 # 系统配置Key字典


@@ -23,7 +23,9 @@ from app.helper.message import MessageHelper
 from app.scheduler import Scheduler
 from app.monitor import Monitor
 from app.schemas import Notification, NotificationType
+from app.schemas.types import SystemConfigKey
 from app.db import close_database
+from app.db.systemconfig_oper import SystemConfigOper
 from app.chain.command import CommandChain
@@ -72,6 +74,19 @@ def clear_temp():
     SystemUtils.clear(settings.CACHE_PATH / "images", days=7)
 
 
+def user_auth():
+    """
+    用户认证检查
+    """
+    if SitesHelper().auth_level >= 2:
+        return
+    auth_conf = SystemConfigOper().get(SystemConfigKey.UserSiteAuthParams)
+    if auth_conf:
+        SitesHelper().check_user(**auth_conf)
+    else:
+        SitesHelper().check_user()
+
+
 def check_auth():
     """
     检查认证状态
@@ -128,6 +143,8 @@ def start_modules(_: FastAPI):
     SitesHelper()
     # 资源包检测
     ResourceHelper()
+    # 用户认证
+    user_auth()
     # 加载模块
     ModuleManager()
     # 启动事件消费
# 启动事件消费 # 启动事件消费


@@ -20,16 +20,6 @@ function WARN() {
     echo -e "${WARN} ${1}"
 }
 
-TMP_PATH=$(mktemp -d)
-if [ ! -d "${TMP_PATH}" ]; then
-    # 如果自动生成 tmp 文件夹失败则手动指定,避免出现数据丢失等情况
-    TMP_PATH=/tmp/mp_update_path
-    if [ -d /tmp/mp_update_path ]; then
-        rm -rf /tmp/mp_update_path
-    fi
-    mkdir -p /tmp/mp_update_path
-fi
-
 # 下载及解压
 function download_and_unzip() {
     local retries=0
@@ -275,6 +265,15 @@ function get_priority() {
 }
 
 if [[ "${MOVIEPILOT_AUTO_UPDATE}" = "true" ]] || [[ "${MOVIEPILOT_AUTO_UPDATE}" = "release" ]] || [[ "${MOVIEPILOT_AUTO_UPDATE}" = "dev" ]]; then
+    TMP_PATH=$(mktemp -d)
+    if [ ! -d "${TMP_PATH}" ]; then
+        # 如果自动生成 tmp 文件夹失败则手动指定,避免出现数据丢失等情况
+        TMP_PATH=/tmp/mp_update_path
+        if [ -d /tmp/mp_update_path ]; then
+            rm -rf /tmp/mp_update_path
+        fi
+        mkdir -p /tmp/mp_update_path
+    fi
     # 优先级:镜像站 > 全局 > 不代理
     # pip
     retries=0
@@ -324,6 +323,9 @@ if [[ "${MOVIEPILOT_AUTO_UPDATE}" = "true" ]] || [[ "${MOVIEPILOT_AUTO_UPDATE}"
         WARN "当前版本号获取失败,继续启动..."
     fi
     fi
+    if [ -d "${TMP_PATH}" ]; then
+        rm -rf "${TMP_PATH}"
+    fi
 elif [[ "${MOVIEPILOT_AUTO_UPDATE}" = "false" ]]; then
     INFO "程序自动升级已关闭如需自动升级请在创建容器时设置环境变量MOVIEPILOT_AUTO_UPDATE=release"
 else
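Moving the `mktemp -d` setup inside the update branch and adding the trailing cleanup means the workspace only exists while an update actually runs. The pattern in isolation (the fallback path matches the script; the work step is a placeholder):

```shell
#!/bin/sh
# Create the temp workspace only when needed, fall back to a fixed path
# if mktemp fails, and remove the workspace once the work is done.
TMP_PATH=$(mktemp -d)
if [ ! -d "${TMP_PATH}" ]; then
    # mktemp failed: use a fixed, freshly recreated fallback directory
    TMP_PATH=/tmp/mp_update_path
    rm -rf "${TMP_PATH}"
    mkdir -p "${TMP_PATH}"
fi

: "update work would happen inside ${TMP_PATH} here"  # placeholder

if [ -d "${TMP_PATH}" ]; then
    rm -rf "${TMP_PATH}"
fi
```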


@@ -1,2 +1,2 @@
-APP_VERSION = 'v2.0.8'
-FRONTEND_VERSION = 'v2.0.8'
+APP_VERSION = 'v2.1.0'
+FRONTEND_VERSION = 'v2.1.0'