Compare commits


116 Commits

Author SHA1 Message Date
jxxghp
1cec6ed6d1 v2.0.8
- Fixed the cloud-drive QR-code login issue
2024-11-20 20:43:44 +08:00
jxxghp
fff75c7fe2 fix 115 2024-11-20 20:40:32 +08:00
jxxghp
81fecf1e07 fix alipan 2024-11-20 20:39:48 +08:00
jxxghp
ad8f687f8e fix alipan 2024-11-20 20:36:50 +08:00
jxxghp
a3172d7503 fix: decouple QR-code login logic from the underlying modules 2024-11-20 20:17:18 +08:00
jxxghp
8d5e0b26d5 fix: support Cookie login for 115 2024-11-20 13:14:37 +08:00
jxxghp
b1b980f550 Merge pull request #3171 from Sowevo/v2 2024-11-20 07:07:08 +08:00
Sowevo
8196589cff Merge branch 'jxxghp:v2' into v2 2024-11-19 22:43:31 +08:00
sowevo
cb9f41cb65 Use the full path consistently for the Plex item_id
When fetching images, handle the case where the external address is Plex's official relay address https://app.plex.tv
2024-11-19 22:41:55 +08:00
jxxghp
cb4981adb3 v2.0.7
- Fixed the forced-directory issue in manual organizing
- Fixed AList being unable to organize files
- Fixed torrent downloads not using the global UA
- Fixed the indexer for the 幼儿园 site
- Fixed a resource-type recognition error
- User authentication can now also be completed through the UI
2024-11-19 20:42:25 +08:00
jxxghp
6880b42a84 fix #3161 2024-11-19 20:38:06 +08:00
jxxghp
97054adc61 fix: forced directory during manual organizing 2024-11-19 20:22:31 +08:00
jxxghp
de94e5d595 fix #3166 2024-11-19 20:12:27 +08:00
jxxghp
a5a734d091 fix u115 transtype 2024-11-19 18:04:48 +08:00
jxxghp
efb607d22f Merge remote-tracking branch 'origin/v2' into v2 2024-11-19 13:31:52 +08:00
jxxghp
d0b2787a7c fix #1832 2024-11-19 13:11:54 +08:00
jxxghp
d5988ff443 Merge pull request #3165 from InfinityPacer/feature/module 2024-11-19 12:24:37 +08:00
InfinityPacer
96b4f1b575 feat(site): set default site timeout to 15 seconds 2024-11-19 11:10:01 +08:00
jxxghp
bb6b8439c7 fix siteauth scheduler 2024-11-19 08:39:39 +08:00
jxxghp
9cdce4509d fix siteauth schema 2024-11-19 08:25:12 +08:00
jxxghp
3956ab1fe8 add siteauth api 2024-11-19 08:18:26 +08:00
jxxghp
14686fdb03 Merge pull request #3159
fix: remove the redundant `subscription extra parameters` filter from resource search
2024-11-18 23:25:03 +08:00
Attente
32892ab747 fix: remove the redundant subscription-extra-parameters filter from resource search 2024-11-18 17:03:49 +08:00
jxxghp
79c637e003 fix #3154 avoid concurrent handling of identical events 2024-11-18 08:01:43 +08:00
jxxghp
d7c260715a fix 115 2024-11-17 21:22:47 +08:00
jxxghp
2dfb089a39 fix bug 2024-11-17 21:04:24 +08:00
jxxghp
e04179525b Merge pull request #3146 from InfinityPacer/feature/module
chore(qbittorrent): update qbittorrent-api to version 2024.11.69
2024-11-17 15:59:43 +08:00
jxxghp
d044364c68 fix: 115 requiring a restart after QR-code login 2024-11-17 15:58:29 +08:00
InfinityPacer
a0f912ffbe chore(qbittorrent): update qbittorrent-api to version 2024.11.69 2024-11-17 15:43:06 +08:00
jxxghp
d7c8b08d7a fix 115 2024-11-17 15:23:30 +08:00
jxxghp
f752082e1b v2.0.6 2024-11-17 15:15:42 +08:00
jxxghp
201ec21adf Improve Dev builds to pull the latest frontend 2024-11-17 15:14:00 +08:00
jxxghp
57590323b2 fix ext 2024-11-17 14:56:42 +08:00
jxxghp
4636c7ada7 fix #3141 2024-11-17 14:14:13 +08:00
jxxghp
4c86a4da5f fix alist token 2024-11-17 14:07:39 +08:00
jxxghp
8dc9acf071 fix 115 2024-11-17 14:03:03 +08:00
jxxghp
abebae3664 Merge pull request #3139 from wdmcheng/v2 2024-11-17 12:00:41 +08:00
wdmcheng
4f7d8866a0 fix: local storage recognizing a file as a folder after upload 2024-11-17 11:50:33 +08:00
jxxghp
cceb22d729 fix log level 2024-11-17 08:56:02 +08:00
jxxghp
89edbb93f5 fix #3135 2024-11-17 08:52:15 +08:00
jxxghp
4ffb406172 Update requirements.in 2024-11-17 02:23:07 +08:00
jxxghp
293e417865 feat: switch to python-115 2024-11-17 02:10:45 +08:00
jxxghp
510c20dc70 fix 2024-11-16 21:49:54 +08:00
jxxghp
8e1810955b fix #3082 2024-11-16 20:56:32 +08:00
jxxghp
73f732fe1d fix #3126 harden directory deletion 2024-11-16 20:29:17 +08:00
jxxghp
d6f5160959 fix: mteam message 99999 2024-11-16 19:55:41 +08:00
jxxghp
d64a7086dd fix #3120 2024-11-16 13:32:58 +08:00
jxxghp
825d9b768f Update version.py 2024-11-16 11:18:23 +08:00
jxxghp
f758a47f4f Merge pull request #3122 from DDS-Derek/fix_update 2024-11-16 11:02:04 +08:00
jxxghp
fc69d7e6c1 fix 2024-11-16 10:55:17 +08:00
DDSRem
edc30266c8 fix(update): clearing the tmp directory causes data loss
fix https://github.com/jxxghp/MoviePilot/issues/2996
2024-11-16 10:53:33 +08:00
jxxghp
665da9dad3 Merge pull request #3121 from DDS-Derek/fix_nginx 2024-11-16 10:37:23 +08:00
DDSRem
4048acf60e feat(docker): nginx client_max_body_size configuration
fix https://github.com/jxxghp/MoviePilot/issues/2951
fix https://github.com/jxxghp/MoviePilot/issues/2720
2024-11-16 10:23:28 +08:00
jxxghp
f116229ecc fix #3108 2024-11-16 09:50:55 +08:00
jxxghp
f6a2efb256 fix #3116 2024-11-16 09:25:46 +08:00
jxxghp
af3a50f7ea feat: subscriptions support binding a downloader 2024-11-16 09:00:18 +08:00
jxxghp
44a0e5b4a7 fix #3120 2024-11-16 08:41:30 +08:00
jxxghp
f40a1246ff Merge pull request #3118 from wikrin/database 2024-11-16 07:54:53 +08:00
jxxghp
dd890c410c Merge pull request #3117 from wikrin/site 2024-11-16 07:54:42 +08:00
Attente
8fd7f2c875 fix: downloader selected for resource-search downloads not taking effect 2024-11-16 01:44:20 +08:00
Attente
8c09b3482f Upgrade the database 2024-11-16 00:28:13 +08:00
Attente
0066247a2b feat: add downloader selection to site management 2024-11-16 00:22:04 +08:00
jxxghp
c7926fc575 Merge pull request #3113 from InfinityPacer/feature/module 2024-11-15 21:59:50 +08:00
InfinityPacer
ac5b9fd4e5 fix(rclone): specify UTF-8 encoding when saving config 2024-11-15 17:42:11 +08:00
jxxghp
42dc539df6 fix #3013 2024-11-15 16:17:51 +08:00
jxxghp
e60d785a11 fix meta re 2024-11-15 13:50:33 +08:00
jxxghp
33558d6197 Merge pull request #3102 from InfinityPacer/feature/module 2024-11-15 12:01:21 +08:00
InfinityPacer
46d2ffeb75 fix #3100 2024-11-15 09:08:32 +08:00
jxxghp
8e4bce2f95 fix #3079 2024-11-15 08:03:23 +08:00
jxxghp
00f1f06e3d fix #3079 2024-11-15 08:00:22 +08:00
jxxghp
fe37bde993 fix offset ep 2024-11-14 22:29:14 +08:00
jxxghp
6c3bb8893f Merge pull request #3097 from wdmcheng/v2 2024-11-14 21:47:59 +08:00
wdmcheng
ca4d64819d fix: Alist time parsing errors in some cases 2024-11-14 21:39:13 +08:00
jxxghp
0a53635d35 Merge pull request #3096 from rexshao/v2 2024-11-14 21:15:47 +08:00
rexshao
921e24b049 Update twofa.py
Fix the bug where 2FA could not correctly generate a code from the secret
2024-11-14 21:08:38 +08:00
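The 2FA fix above (the twofa.py change) concerns generating TOTP codes from a secret. As a rough illustration of what such code generation involves, here is a self-contained RFC 6238 sketch using only the standard library; it is not MoviePilot's actual implementation, and the 6-digit / 30-second / SHA-1 parameters are assumptions (the common defaults):

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, for_time: int = None,
         digits: int = 6, period: int = 30) -> str:
    """RFC 6238 TOTP from a base32 secret (SHA-1, the common defaults)."""
    pad = "=" * (-len(secret_b32) % 8)
    key = base64.b32decode(secret_b32.upper() + pad)
    counter = int((time.time() if for_time is None else for_time) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59s
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59))  # 287082
```

The base32 string above is the RFC test secret "12345678901234567890"; real authenticator apps accept the same encoding.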
jxxghp
24c21ed04e fix name 2024-11-14 19:58:37 +08:00
jxxghp
777785579e v2.0.4
- Fixed the directory-not-found issue in manual organizing
- Fixed site-info retrieval and login-status detection for the 白兔 site
- Fixed an indexer error
- Improved the resource download dialog
- Added a manual-organizing option to directory settings
- Added log output when QB cannot be connected
- Storage now supports mounting AList
2024-11-14 19:48:16 +08:00
jxxghp
8061a06fe4 Merge remote-tracking branch 'origin/v2' into v2 2024-11-14 18:09:49 +08:00
jxxghp
438ce6ee3e fix SiteUserData schema 2024-11-14 18:09:40 +08:00
jxxghp
77e19c3de7 Merge pull request #3095 from InfinityPacer/feature/module 2024-11-14 17:25:31 +08:00
InfinityPacer
49881c9c54 fix #2952 2024-11-14 17:21:47 +08:00
jxxghp
5da28f702f fix alist 2024-11-14 14:54:22 +08:00
jxxghp
dfbd9f3b30 add alist storage card 2024-11-14 12:57:34 +08:00
jxxghp
d6c6ee9b4e fix #3092 2024-11-14 12:38:02 +08:00
jxxghp
4b27404ee5 Merge pull request #3091 from InfinityPacer/feature/cache 2024-11-14 11:57:26 +08:00
jxxghp
3a826b343a fix #3090 2024-11-14 11:52:56 +08:00
jxxghp
851aa5f9e2 fix #3031 2024-11-14 11:49:57 +08:00
InfinityPacer
9ef1f56ea1 feat(cache): add proxy support for specific domains in image caching 2024-11-14 10:21:00 +08:00
jxxghp
78d51b7621 Merge pull request #3031 from Akimio521/feat/filemanager-alist
feat: add the Alist filemanager storages type
2024-11-14 08:12:31 +08:00
jxxghp
c12e2bdba7 fix: manual organizing bug 2024-11-14 08:04:52 +08:00
jxxghp
fda11f427c Merge pull request #3087 from amtoaer/fix_hares 2024-11-14 06:49:12 +08:00
amtoaer
d809330225 fix: site-info retrieval and login-status detection for 白兔俱乐部 2024-11-14 01:59:30 +08:00
jxxghp
ce4a2314d8 fix: directory-matching bug in manual organizing 2024-11-13 21:30:24 +08:00
amtoaer
c19e825e94 fix: 白兔俱乐部 login detection 2024-11-13 18:30:52 +08:00
jxxghp
c45d64b554 Merge pull request #3075 from wikrin/v2 2024-11-12 22:25:53 +08:00
Attente
0689b2e331 fix: episode_offset 2024-11-12 22:22:56 +08:00
jxxghp
e6105fdab5 v2.0.3
- Fixed wrong retrieval of the latest version number
- Fixed rename failures in file management
- Fixed season.nfo scraping errors when organizing multiple seasons
- Fixed Rclone storage-capacity detection errors
- Improved custom rules: episode file-size rules now filter by the average size per episode
- When organizing files in move mode, empty parent directories are deleted automatically
- Added a switch for automatically reading and sending site messages
- Added a database WAL-mode switch; enabling it improves database performance
2024-11-12 18:48:15 +08:00
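The v2.0.3 note about filtering episode file sizes by the average size per episode reduces to dividing the torrent size by the episode count before comparing against the rule's limits. A minimal sketch; the function name and MB-based thresholds are illustrative assumptions, not MoviePilot's actual rule engine:

```python
def passes_episode_size_rule(total_bytes: int, episode_count: int,
                             min_mb: float = 0,
                             max_mb: float = float("inf")) -> bool:
    """Filter a multi-episode torrent by its average size per episode."""
    avg_mb = total_bytes / max(episode_count, 1) / 1024 / 1024
    return min_mb <= avg_mb <= max_mb


# A 10 GiB season pack with 10 episodes averages 1024 MiB per episode,
# so it passes a 500-2000 MB per-episode rule that the raw total would fail.
print(passes_episode_size_rule(10 * 1024 ** 3, 10, min_mb=500, max_mb=2000))
```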
jxxghp
df34c7e2da Merge pull request #3074 from InfinityPacer/feature/db 2024-11-12 17:30:34 +08:00
InfinityPacer
24cc36033f feat(db): add support for SQLite WAL mode 2024-11-12 17:17:16 +08:00
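Commit 24cc36033f adds the SQLite WAL-mode switch mentioned in the v2.0.3 notes. A minimal sketch of what enabling WAL looks like (generic `sqlite3` usage, not MoviePilot's db layer; the `.gitignore` change from `config/user.db` to `config/user.db*` below matches WAL's extra `-wal`/`-shm` sidecar files):

```python
import os
import sqlite3
import tempfile


def enable_wal(db_path: str) -> str:
    """Switch a SQLite database to WAL journal mode and report the result."""
    conn = sqlite3.connect(db_path)
    try:
        # The PRAGMA returns the journal mode actually in effect
        return conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
    finally:
        conn.close()


db_file = os.path.join(tempfile.mkdtemp(), "user.db")
print(enable_wal(db_file))  # "wal" on filesystems that support it
```

WAL mode is persistent: once set, later connections to the same file reopen in WAL mode without re-running the PRAGMA.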
jxxghp
aafb2bc269 fix #3071 add a site-message switch 2024-11-12 13:59:13 +08:00
jxxghp
9dde56467a Update __init__.py 2024-11-12 12:24:05 +08:00
jxxghp
f9d62e7451 fix: Rclone storage-capacity detection 2024-11-12 10:10:37 +08:00
jxxghp
f1f379966a fix: retrieval of the latest V2 version number 2024-11-12 08:37:07 +08:00
jxxghp
942c9ae545 Merge pull request #3058 from wikrin/fix-scrape_metadata 2024-11-10 14:02:31 +08:00
jxxghp
89be4f6200 Merge pull request #3054 from wikrin/fix-rename 2024-11-10 14:02:01 +08:00
Attente
bcbf729fd4 Fix season.nfo scraping errors when organizing multiple seasons 2024-11-10 13:43:59 +08:00
Attente
7fc5b7678e Change the order of checks 2024-11-10 07:47:49 +08:00
Attente
e20578685a fix: rename failures 2024-11-09 23:59:58 +08:00
jxxghp
40b82d9cb6 fix #3042 delete empty folders in move mode 2024-11-09 18:23:08 +08:00
jxxghp
9b2fccee01 feat: filter episode file size by average size per episode 2024-11-09 18:01:50 +08:00
jxxghp
87bbee8c36 Merge pull request #3038 from InfinityPacer/feature/setup 2024-11-08 18:16:32 +08:00
InfinityPacer
4412ce9f17 fix(playwright): add check for HTTPS proxy 2024-11-08 18:08:45 +08:00
jxxghp
35b78b0e66 Merge pull request #3034 from lybtt/fix_update_bash 2024-11-08 16:44:55 +08:00
lvyb
d97fcc4a96 Fix version-number comparison in the update script 2024-11-08 16:37:36 +08:00
Akimio521
c8e337440e feat(storages): add Alist storage type 2024-11-08 14:32:30 +08:00
Akimio521
726e7dfbd4 feat(StringUtils): add url_eqote method 2024-11-08 14:31:08 +08:00
60 changed files with 1861 additions and 569 deletions

.gitignore

@@ -12,7 +12,7 @@ app/helper/*.bin
app/plugins/**
!app/plugins/__init__.py
config/cookies/**
config/user.db
config/user.db*
config/sites/**
config/logs/
config/temp/


@@ -116,9 +116,6 @@ def scrape(fileitem: schemas.FileItem,
if storage == "local":
if not scrape_path.exists():
return schemas.Response(success=False, message="刮削路径不存在")
else:
if not fileitem.fileid:
return schemas.Response(success=False, message="刮削文件ID无效")
# 手动刮削
chain.scrape_metadata(fileitem=fileitem, meta=meta, mediainfo=mediainfo)
return schemas.Response(success=True, message=f"{fileitem.path} 刮削完成")


@@ -331,6 +331,29 @@ def read_rss_sites(db: Session = Depends(get_db),
return rss_sites
@router.get("/auth", summary="查询认证站点", response_model=dict)
def read_auth_sites(_: schemas.TokenPayload = Depends(verify_token)) -> dict:
"""
获取可认证站点列表
"""
return SitesHelper().get_authsites()
@router.post("/auth", summary="用户站点认证", response_model=schemas.Response)
def auth_site(
auth_info: schemas.SiteAuth,
_: User = Depends(get_current_active_superuser)
) -> Any:
"""
用户站点认证
"""
if not auth_info or not auth_info.site or not auth_info.params:
return schemas.Response(success=False, message="请输入认证站点和认证参数")
status, msg = SitesHelper().check_user(auth_info.site, auth_info.params)
SystemConfigOper().set(SystemConfigKey.UserSiteAuthParams, auth_info.dict())
return schemas.Response(success=status, message=msg)
@router.get("/{site_id}", summary="站点详情", response_model=schemas.Site)
def read_site(
site_id: int,


@@ -149,8 +149,8 @@ def rename(fileitem: schemas.FileItem,
:param recursive: 是否递归修改
:param _: token
"""
if not fileitem.fileid or not new_name:
return schemas.Response(success=False)
if not new_name:
return schemas.Response(success=False, message="新名称为空")
result = StorageChain().rename_file(fileitem, new_name)
if result:
if recursive:


@@ -159,7 +159,8 @@ def cache_img(
本地缓存图片文件,支持 HTTP 缓存,如果启用全局图片缓存,则使用磁盘缓存
"""
# 如果没有启用全局图片缓存,则不使用磁盘缓存
return fetch_image(url=url, proxy=False, use_disk_cache=settings.GLOBAL_IMAGE_CACHE, if_none_match=if_none_match)
proxy = "doubanio.com" not in url
return fetch_image(url=url, proxy=proxy, use_disk_cache=settings.GLOBAL_IMAGE_CACHE, if_none_match=if_none_match)
@router.get("/global", summary="查询非敏感系统设置", response_model=schemas.Response)
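The one-line change in the `cache_img` diff above routes Douban image hosts around the proxy while letting other hosts keep using it. The decision reduces to a substring check; `should_proxy` is a hypothetical helper name for illustration, not a function in the codebase:

```python
def should_proxy(url: str) -> bool:
    """Douban image hosts are reachable directly; other hosts may need the proxy."""
    return "doubanio.com" not in url


print(should_proxy("https://img1.doubanio.com/view/photo/p1.jpg"))  # False
print(should_proxy("https://image.tmdb.org/t/p/w500/poster.jpg"))   # True
```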


@@ -32,7 +32,7 @@ class ManualTransferItem(BaseModel):
episode_format: Optional[str] = None,
episode_detail: Optional[str] = None,
episode_part: Optional[str] = None,
episode_offset: Optional[int] = 0,
episode_offset: Optional[str] = None,
min_filesize: Optional[int] = 0,
scrape: bool = False,
from_history: bool = False


@@ -180,7 +180,7 @@ class DownloadChain(ChainBase):
torrent_file, content, download_folder, files, error_msg = self.torrent.download_torrent(
url=torrent_url,
cookie=site_cookie,
ua=torrent.site_ua,
ua=torrent.site_ua or settings.USER_AGENT,
proxy=torrent.site_proxy)
if isinstance(content, str):
@@ -204,10 +204,10 @@ class DownloadChain(ChainBase):
def download_single(self, context: Context, torrent_file: Path = None,
episodes: Set[int] = None,
channel: MessageChannel = None, source: str = None,
downloader: str = None,
save_path: str = None,
userid: Union[str, int] = None,
username: str = None,
downloader: str = None,
media_category: str = None) -> Optional[str]:
"""
下载及发送通知
@@ -216,15 +216,16 @@ class DownloadChain(ChainBase):
:param episodes: 需要下载的集数
:param channel: 通知渠道
:param source: 通知来源
:param downloader: 下载器
:param save_path: 保存路径
:param userid: 用户ID
:param username: 调用下载的用户名/插件名
:param downloader: 下载器
:param media_category: 自定义媒体类别
"""
_torrent = context.torrent_info
_media = context.media_info
_meta = context.meta_info
_site_downloader = _torrent.site_downloader
# 补充完整的media数据
if not _media.genre_ids:
@@ -251,35 +252,31 @@ class DownloadChain(ChainBase):
# 下载目录
if save_path:
# 有自定义下载目录时,尝试匹配目录配置
dir_info = self.directoryhelper.get_dir(_media, src_path=Path(save_path), local=True)
else:
# 根据媒体信息查询下载目录配置
dir_info = self.directoryhelper.get_dir(_media, local=True)
# 拼装子目录
if dir_info:
# 一级目录
if not dir_info.media_type and dir_info.download_type_folder:
# 一级自动分类
download_dir = Path(dir_info.download_path) / _media.type.value
else:
# 一级不分类
download_dir = Path(dir_info.download_path)
# 二级目录
if not dir_info.media_category and dir_info.download_category_folder and _media and _media.category:
# 二级自动分类
download_dir = download_dir / _media.category
elif save_path:
# 自定义下载目录
# 下载目录使用自定义的
download_dir = Path(save_path)
else:
# 未找到下载目录,且没有自定义下载目录
logger.error(f"未找到下载目录:{_media.type.value} {_media.title_year}")
self.messagehelper.put(f"{_media.type.value} {_media.title_year} 未找到下载目录!",
title="下载失败", role="system")
return None
# 根据媒体信息查询下载目录配置
dir_info = self.directoryhelper.get_dir(_media)
# 拼装子目录
if dir_info:
# 一级目录
if not dir_info.media_type and dir_info.download_type_folder:
# 一级自动分类
download_dir = Path(dir_info.download_path) / _media.type.value
else:
# 一级不分类
download_dir = Path(dir_info.download_path)
# 二级目录
if not dir_info.media_category and dir_info.download_category_folder and _media and _media.category:
# 二级自动分类
download_dir = download_dir / _media.category
else:
# 未找到下载目录,且没有自定义下载目录
logger.error(f"未找到下载目录:{_media.type.value} {_media.title_year}")
self.messagehelper.put(f"{_media.type.value} {_media.title_year} 未找到下载目录!",
title="下载失败", role="system")
return None
# 添加下载
result: Optional[tuple] = self.download(content=content,
@@ -287,7 +284,7 @@ class DownloadChain(ChainBase):
episodes=episodes,
download_dir=download_dir,
category=_media.category,
downloader=downloader)
downloader=downloader or _site_downloader)
if result:
_downloader, _hash, error_msg = result
else:
@@ -335,7 +332,7 @@ class DownloadChain(ChainBase):
continue
# 只处理视频格式
if not Path(file).suffix \
or Path(file).suffix not in settings.RMT_MEDIAEXT:
or Path(file).suffix.lower() not in settings.RMT_MEDIAEXT:
continue
files_to_add.append({
"download_hash": _hash,
@@ -386,7 +383,8 @@ class DownloadChain(ChainBase):
source: str = None,
userid: str = None,
username: str = None,
media_category: str = None
media_category: str = None,
downloader: str = None
) -> Tuple[List[Context], Dict[Union[int, str], Dict[int, NotExistMediaInfo]]]:
"""
根据缺失数据,自动种子列表中组合择优下载
@@ -398,6 +396,7 @@ class DownloadChain(ChainBase):
:param userid: 用户ID
:param username: 调用下载的用户名/插件名
:param media_category: 自定义媒体类别
:param downloader: 下载器
:return: 已经下载的资源列表、剩余未下载到的剧集 no_exists[tmdb_id/douban_id] = {season: NotExistMediaInfo}
"""
# 已下载的项目
@@ -469,7 +468,7 @@ class DownloadChain(ChainBase):
logger.info(f"开始下载电影 {context.torrent_info.title} ...")
if self.download_single(context, save_path=save_path, channel=channel,
source=source, userid=userid, username=username,
media_category=media_category):
media_category=media_category, downloader=downloader):
# 下载成功
logger.info(f"{context.torrent_info.title} 添加下载成功")
downloaded_list.append(context)
@@ -554,7 +553,8 @@ class DownloadChain(ChainBase):
source=source,
userid=userid,
username=username,
media_category=media_category
media_category=media_category,
downloader=downloader,
)
else:
# 下载
@@ -562,7 +562,8 @@ class DownloadChain(ChainBase):
download_id = self.download_single(context, save_path=save_path,
channel=channel, source=source,
userid=userid, username=username,
media_category=media_category)
media_category=media_category,
downloader=downloader)
if download_id:
# 下载成功
@@ -633,7 +634,8 @@ class DownloadChain(ChainBase):
download_id = self.download_single(context, save_path=save_path,
channel=channel, source=source,
userid=userid, username=username,
media_category=media_category)
media_category=media_category,
downloader=downloader)
if download_id:
# 下载成功
logger.info(f"{meta.title} 添加下载成功")
@@ -722,7 +724,8 @@ class DownloadChain(ChainBase):
source=source,
userid=userid,
username=username,
media_category=media_category
media_category=media_category,
downloader=downloader
)
if not download_id:
continue


@@ -339,9 +339,10 @@ class MediaChain(ChainBase, metaclass=Singleton):
return
tmp_file = settings.TEMP_PATH / _path.name
tmp_file.write_bytes(_content)
logger.info(f"保存文件:【{_fileitem.storage}{_path}")
_fileitem.path = str(_path.parent)
self.storagechain.upload_file(fileitem=_fileitem, path=tmp_file)
item = self.storagechain.upload_file(fileitem=_fileitem, path=tmp_file)
if item:
logger.info(f"已保存文件:{Path(item.path) / item.name}")
if tmp_file.exists():
tmp_file.unlink()
@@ -376,6 +377,11 @@ class MediaChain(ChainBase, metaclass=Singleton):
if mediainfo.type == MediaType.MOVIE:
# 电影
if fileitem.type == "file":
# 是否已存在
nfo_path = filepath.with_suffix(".nfo")
if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
logger.debug(f"已存在nfo文件{nfo_path}")
return
# 电影文件
logger.info(f"正在生成电影nfo{mediainfo.title_year} - {filepath.name}")
movie_nfo = self.metadata_nfo(meta=meta, mediainfo=mediainfo)
@@ -383,10 +389,8 @@ class MediaChain(ChainBase, metaclass=Singleton):
logger.warn(f"{filepath.name} nfo文件生成失败")
return
# 保存或上传nfo文件到上级目录
nfo_path = filepath.with_suffix(".nfo")
if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
logger.info(f"已存在nfo文件{nfo_path}")
return
if not parent:
parent = self.storagechain.get_parent_item(fileitem)
__save_file(_fileitem=parent, _path=nfo_path, _content=movie_nfo)
else:
# 电影目录
@@ -408,7 +412,7 @@ class MediaChain(ChainBase, metaclass=Singleton):
image_path = filepath / image_name
if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage,
path=image_path):
logger.info(f"已存在图片文件:{image_path}")
logger.debug(f"已存在图片文件:{image_path}")
continue
# 下载图片
content = __download_image(_url=attr_value)
@@ -418,7 +422,12 @@ class MediaChain(ChainBase, metaclass=Singleton):
else:
# 电视剧
if fileitem.type == "file":
# 当前为集文件,重新识别季集
# 是否已存在
nfo_path = filepath.with_suffix(".nfo")
if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
logger.debug(f"已存在nfo文件{nfo_path}")
return
# 重新识别季集
file_meta = MetaInfoPath(filepath)
if not file_meta.begin_episode:
logger.warn(f"{filepath.name} 无法识别文件集数!")
@@ -434,10 +443,8 @@ class MediaChain(ChainBase, metaclass=Singleton):
logger.warn(f"{filepath.name} nfo生成失败")
return
# 保存或上传nfo文件到上级目录
nfo_path = filepath.with_suffix(".nfo")
if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
logger.info(f"已存在nfo文件{nfo_path}")
return
if not parent:
parent = self.storagechain.get_parent_item(fileitem)
__save_file(_fileitem=parent, _path=nfo_path, _content=episode_nfo)
# 获取集的图片
image_dict = self.metadata_img(mediainfo=file_mediainfo,
@@ -446,12 +453,14 @@ class MediaChain(ChainBase, metaclass=Singleton):
for episode, image_url in image_dict.items():
image_path = filepath.with_suffix(Path(image_url).suffix)
if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage, path=image_path):
logger.info(f"已存在图片文件:{image_path}")
logger.debug(f"已存在图片文件:{image_path}")
continue
# 下载图片
content = __download_image(image_url)
# 保存图片文件到当前目录
if content:
if not parent:
parent = self.storagechain.get_parent_item(fileitem)
__save_file(_fileitem=parent, _path=image_path, _content=content)
else:
@@ -467,16 +476,17 @@ class MediaChain(ChainBase, metaclass=Singleton):
# 识别文件夹名称
season_meta = MetaInfo(filepath.name)
if season_meta.begin_season:
# 是否已存在
nfo_path = filepath / "season.nfo"
if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
logger.debug(f"已存在nfo文件{nfo_path}")
return
# 当前目录有季号生成季nfo
season_nfo = self.metadata_nfo(meta=meta, mediainfo=mediainfo, season=meta.begin_season)
season_nfo = self.metadata_nfo(meta=meta, mediainfo=mediainfo, season=season_meta.begin_season)
if not season_nfo:
logger.warn(f"无法生成电视剧季nfo文件{meta.name}")
return
# 写入nfo到根目录
nfo_path = filepath / "season.nfo"
if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
logger.info(f"已存在nfo文件{nfo_path}")
return
__save_file(_fileitem=fileitem, _path=nfo_path, _content=season_nfo)
# TMDB季poster图片
image_dict = self.metadata_img(mediainfo=mediainfo, season=season_meta.begin_season)
@@ -485,7 +495,7 @@ class MediaChain(ChainBase, metaclass=Singleton):
image_path = filepath.with_name(image_name)
if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage,
path=image_path):
logger.info(f"已存在图片文件:{image_path}")
logger.debug(f"已存在图片文件:{image_path}")
continue
# 下载图片
content = __download_image(image_url)
@@ -494,16 +504,17 @@ class MediaChain(ChainBase, metaclass=Singleton):
__save_file(_fileitem=fileitem, _path=image_path, _content=content)
# 判断当前目录是不是剧集根目录
if season_meta.name:
# 是否已存在
nfo_path = filepath / "tvshow.nfo"
if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
logger.debug(f"已存在nfo文件{nfo_path}")
return
# 当前目录有名称生成tvshow nfo 和 tv图片
tv_nfo = self.metadata_nfo(meta=meta, mediainfo=mediainfo)
if not tv_nfo:
logger.warn(f"无法生成电视剧nfo文件{meta.name}")
return
# 写入tvshow nfo到根目录
nfo_path = filepath / "tvshow.nfo"
if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
logger.info(f"已存在nfo文件{nfo_path}")
return
__save_file(_fileitem=fileitem, _path=nfo_path, _content=tv_nfo)
# 生成目录图片
image_dict = self.metadata_img(mediainfo=mediainfo)
@@ -512,7 +523,7 @@ class MediaChain(ChainBase, metaclass=Singleton):
image_path = filepath / image_name
if not overwrite and self.storagechain.get_file_item(storage=fileitem.storage,
path=image_path):
logger.info(f"已存在图片文件:{image_path}")
logger.debug(f"已存在图片文件:{image_path}")
continue
# 下载图片
content = __download_image(image_url)


@@ -3,12 +3,12 @@ from typing import List, Union, Optional, Generator
from cachetools import cached, TTLCache
from app import schemas
from app.chain import ChainBase
from app.core.config import global_vars
from app.db.mediaserver_oper import MediaServerOper
from app.helper.service import ServiceConfigHelper
from app.log import logger
from app.schemas import MediaServerLibrary, MediaServerItem, MediaServerSeasonInfo, MediaServerPlayItem
lock = threading.Lock()
@@ -22,7 +22,7 @@ class MediaServerChain(ChainBase):
super().__init__()
self.dboper = MediaServerOper()
def librarys(self, server: str, username: str = None, hidden: bool = False) -> List[schemas.MediaServerLibrary]:
def librarys(self, server: str, username: str = None, hidden: bool = False) -> List[MediaServerLibrary]:
"""
获取媒体服务器所有媒体库
"""
@@ -70,25 +70,25 @@ class MediaServerChain(ChainBase):
yield from self.run_module("mediaserver_items", server=server, library_id=library_id,
start_index=start_index, limit=limit)
def iteminfo(self, server: str, item_id: Union[str, int]) -> schemas.MediaServerItem:
def iteminfo(self, server: str, item_id: Union[str, int]) -> MediaServerItem:
"""
获取媒体服务器项目信息
"""
return self.run_module("mediaserver_iteminfo", server=server, item_id=item_id)
def episodes(self, server: str, item_id: Union[str, int]) -> List[schemas.MediaServerSeasonInfo]:
def episodes(self, server: str, item_id: Union[str, int]) -> List[MediaServerSeasonInfo]:
"""
获取媒体服务器剧集信息
"""
return self.run_module("mediaserver_tv_episodes", server=server, item_id=item_id)
def playing(self, server: str, count: int = 20, username: str = None) -> List[schemas.MediaServerPlayItem]:
def playing(self, server: str, count: int = 20, username: str = None) -> List[MediaServerPlayItem]:
"""
获取媒体服务器正在播放信息
"""
return self.run_module("mediaserver_playing", count=count, server=server, username=username)
def latest(self, server: str, count: int = 20, username: str = None) -> List[schemas.MediaServerPlayItem]:
def latest(self, server: str, count: int = 20, username: str = None) -> List[MediaServerPlayItem]:
"""
获取媒体服务器最新入库条目
"""


@@ -4,6 +4,7 @@ from typing import Optional, Tuple, List, Dict
from app import schemas
from app.chain import ChainBase
from app.core.config import settings
from app.helper.directory import DirectoryHelper
from app.log import logger
from app.schemas import MediaType
@@ -13,6 +14,10 @@ class StorageChain(ChainBase):
存储处理链
"""
def __init__(self):
super().__init__()
self.directoryhelper = DirectoryHelper()
def save_config(self, storage: str, conf: dict) -> None:
"""
保存存储配置
@@ -37,6 +42,12 @@ class StorageChain(ChainBase):
"""
return self.run_module("list_files", fileitem=fileitem, recursion=recursion)
def any_files(self, fileitem: schemas.FileItem, extensions: list = None) -> Optional[bool]:
"""
查询当前目录下是否存在指定扩展名任意文件
"""
return self.run_module("any_files", fileitem=fileitem, extensions=extensions)
def create_folder(self, fileitem: schemas.FileItem, name: str) -> Optional[schemas.FileItem]:
"""
创建目录
@@ -51,13 +62,15 @@ class StorageChain(ChainBase):
"""
return self.run_module("download_file", fileitem=fileitem, path=path)
def upload_file(self, fileitem: schemas.FileItem, path: Path) -> Optional[bool]:
def upload_file(self, fileitem: schemas.FileItem, path: Path,
new_name: str = None) -> Optional[schemas.FileItem]:
"""
上传文件
:param fileitem: 保存目录项
:param path: 本地文件路径
:param new_name: 新文件名
"""
return self.run_module("upload_file", fileitem=fileitem, path=path)
return self.run_module("upload_file", fileitem=fileitem, path=path, new_name=new_name)
def delete_file(self, fileitem: schemas.FileItem) -> Optional[bool]:
"""
@@ -101,34 +114,44 @@ class StorageChain(ChainBase):
"""
return self.run_module("support_transtype", storage=storage)
def delete_media_file(self, fileitem: schemas.FileItem, mtype: MediaType = None) -> bool:
def delete_media_file(self, fileitem: schemas.FileItem,
mtype: MediaType = None, delete_self: bool = True) -> bool:
"""
删除媒体文件,以及不含媒体文件的目录
"""
state = self.delete_file(fileitem)
if not state:
logger.warn(f"{fileitem.storage}{fileitem.path} 删除失败")
media_exts = settings.RMT_MEDIAEXT + settings.DOWNLOAD_TMPEXT
if fileitem.path == "/" or len(Path(fileitem.path).parts) <= 2:
logger.warn(f"{fileitem.storage}{fileitem.path} 根目录或一级目录不允许删除")
return False
# 上级目录
if fileitem.type == "dir":
# 本身是目录
if self.any_files(fileitem, extensions=media_exts) is False:
logger.warn(f"{fileitem.storage}{fileitem.path} 不存在其它媒体文件,删除空目录")
return self.delete_file(fileitem)
return False
elif delete_self:
# 本身是文件
logger.warn(f"正在删除【{fileitem.storage}{fileitem.path}")
if not self.delete_file(fileitem):
logger.warn(f"{fileitem.storage}{fileitem.path} 删除失败")
return False
# 处理上级目录
if mtype and mtype == MediaType.TV:
dir_path = Path(fileitem.path).parent.parent
dir_item = self.get_file_item(storage=fileitem.storage, path=dir_path)
dir_item = self.get_file_item(storage=fileitem.storage, path=Path(fileitem.path).parent.parent)
else:
dir_item = self.get_parent_item(fileitem)
if dir_item:
files = self.list_files(dir_item, recursion=True)
# 是否存在其他媒体文件
media_file_exist = False
if files:
for file in files:
if file.extension and f".{file.extension.lower()}" in settings.RMT_MEDIAEXT:
media_file_exist = True
break
if dir_item and len(Path(dir_item.path).parts) > 2:
# 如何目录是所有下载目录、媒体库目录的上级,则不处理
for d in self.directoryhelper.get_dirs():
if d.download_path and Path(d.download_path).is_relative_to(Path(dir_item.path)):
logger.debug(f"{dir_item.storage}{dir_item.path} 是下载目录本级或上级目录,不删除")
return True
if d.library_path and Path(d.library_path).is_relative_to(Path(dir_item.path)):
logger.debug(f"{dir_item.storage}{dir_item.path} 是媒体库目录本级或上级目录,不删除")
return True
# 不存在其他媒体文件,删除空目录
if not media_file_exist:
# 返回空目录删除状态
if self.any_files(dir_item, extensions=media_exts) is False:
logger.warn(f"{dir_item.storage}{dir_item.path} 不存在其它媒体文件,删除空目录")
return self.delete_file(dir_item)
# 存在媒体文件,返回文件删除状态
return state
return True
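The hardened `delete_media_file` above refuses to touch the root or any first-level directory by counting path components before deleting. A small sketch of that guard; the helper name is hypothetical and POSIX paths are assumed:

```python
from pathlib import PurePosixPath


def is_protected(path: str) -> bool:
    """Root ('/') has one part and a first-level directory like '/media' has two,
    matching the `len(Path(...).parts) <= 2` check in the diff."""
    return path == "/" or len(PurePosixPath(path).parts) <= 2


print(is_protected("/media"))          # True
print(is_protected("/media/tv/show"))  # False
```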


@@ -159,6 +159,8 @@ class SubscribeChain(ChainBase):
"search_imdbid") else kwargs.get("search_imdbid"),
'sites': self.__get_default_subscribe_config(mediainfo.type, "sites") or None if not kwargs.get(
"sites") else kwargs.get("sites"),
'downloader': self.__get_default_subscribe_config(mediainfo.type, "downloader") if not kwargs.get(
"downloader") else kwargs.get("downloader"),
'save_path': self.__get_default_subscribe_config(mediainfo.type, "save_path") if not kwargs.get(
"save_path") else kwargs.get("save_path")
})
@@ -363,10 +365,6 @@ class SubscribeChain(ChainBase):
torrent_info = context.torrent_info
torrent_mediainfo = context.media_info
# 匹配订阅附加参数
if not self.torrenthelper.filter_torrent(torrent_info=torrent_info,
filter_params=self.get_params(subscribe)):
continue
# 洗版
if subscribe.best_version:
# 洗版时,非整季不要
@@ -394,7 +392,8 @@ class SubscribeChain(ChainBase):
userid=subscribe.username,
username=subscribe.username,
save_path=subscribe.save_path,
media_category=subscribe.media_category
media_category=subscribe.media_category,
downloader=subscribe.downloader,
)
# 判断是否应完成订阅
@@ -773,7 +772,8 @@ class SubscribeChain(ChainBase):
userid=subscribe.username,
username=subscribe.username,
save_path=subscribe.save_path,
media_category=subscribe.media_category)
media_category=subscribe.media_category,
downloader=subscribe.downloader)
# 判断是否要完成订阅
self.finish_subscribe_or_not(subscribe=subscribe, meta=meta, mediainfo=mediainfo,
downloads=downloads, lefts=lefts)
@@ -1241,6 +1241,9 @@ class SubscribeChain(ChainBase):
file_path=file.fullpath,
)
if subscribe.type == MediaType.TV.value:
season_number = file_meta.begin_season
if season_number and season_number != subscribe.season:
continue
episode_number = file_meta.begin_episode
if episode_number and episodes.get(episode_number):
episodes[episode_number].download.append(file_info)
@@ -1278,6 +1281,9 @@ class SubscribeChain(ChainBase):
file_path=fileitem.path,
)
if subscribe.type == MediaType.TV.value:
season_number = file_meta.begin_season
if season_number and season_number != subscribe.season:
continue
episode_number = file_meta.begin_episode
if episode_number and episodes.get(episode_number):
episodes[episode_number].library.append(file_info)


@@ -1,6 +1,5 @@
import json
import re
from pathlib import Path
from typing import Union
from app.chain import ChainBase
@@ -10,6 +9,7 @@ from app.schemas import Notification, MessageChannel
from app.utils.http import RequestUtils
from app.utils.singleton import Singleton
from app.utils.system import SystemUtils
from version import FRONTEND_VERSION, APP_VERSION
class SystemChain(ChainBase, metaclass=Singleton):
@@ -98,77 +98,67 @@ class SystemChain(ChainBase, metaclass=Singleton):
@staticmethod
def __get_server_release_version():
"""
获取后端最新版本
获取后端V2最新版本
"""
try:
version_res = RequestUtils(proxies=settings.PROXY, headers=settings.GITHUB_HEADERS).get_res(
"https://api.github.com/repos/jxxghp/MoviePilot/releases/latest")
if version_res:
ver_json = version_res.json()
version = f"{ver_json['tag_name']}"
return version
# 获取所有发布的版本列表
response = RequestUtils(
proxies=settings.PROXY,
headers=settings.GITHUB_HEADERS
).get_res("https://api.github.com/repos/jxxghp/MoviePilot/releases")
if response:
releases = [release['tag_name'] for release in response.json()]
v2_releases = [tag for tag in releases if re.match(r"^v2\.", tag)]
if not v2_releases:
logger.warn("获取v2后端最新版本出错")
else:
# 找到最新的v2版本
latest_v2 = sorted(v2_releases, key=lambda s: list(map(int, re.findall(r'\d+', s))))[-1]
logger.info(f"获取到后端最新版本:{latest_v2}")
return latest_v2
else:
return None
logger.error("无法获取后端版本信息,请检查网络连接或GitHub API请求。")
except Exception as err:
logger.error(f"获取后端最新版本失败:{str(err)}")
return None
return None
@staticmethod
def __get_front_release_version():
"""
获取前端最新版本
获取前端V2最新版本
"""
try:
version_res = RequestUtils(proxies=settings.PROXY, headers=settings.GITHUB_HEADERS).get_res(
"https://api.github.com/repos/jxxghp/MoviePilot-Frontend/releases/latest")
if version_res:
ver_json = version_res.json()
version = f"{ver_json['tag_name']}"
return version
# 获取所有发布的版本列表
response = RequestUtils(
proxies=settings.PROXY,
headers=settings.GITHUB_HEADERS
).get_res("https://api.github.com/repos/jxxghp/MoviePilot-Frontend/releases")
if response:
releases = [release['tag_name'] for release in response.json()]
v2_releases = [tag for tag in releases if re.match(r"^v2\.", tag)]
if not v2_releases:
logger.warn("获取v2前端最新版本出错")
else:
# 找到最新的v2版本
latest_v2 = sorted(v2_releases, key=lambda s: list(map(int, re.findall(r'\d+', s))))[-1]
logger.info(f"获取到前端最新版本:{latest_v2}")
return latest_v2
else:
return None
logger.error("无法获取前端版本信息,请检查网络连接或GitHub API请求。")
except Exception as err:
logger.error(f"获取前端最新版本失败:{str(err)}")
return None
return None
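The v2 release-picking logic above can be sketched in isolation. The tag list below is hypothetical (the diff fetches it from the GitHub releases API); the regex filter and numeric sort mirror the diff's approach, where a numeric key is needed because a plain string sort would misorder multi-digit patch versions.

```python
import re

# Hypothetical tag list; the diff pulls this from the GitHub releases API
releases = ["v1.9.16", "v2.0.7", "v2.0.10", "v2.0.8"]

# Keep only v2.x tags, as in the diff
v2_releases = [tag for tag in releases if re.match(r"^v2\.", tag)]

# Numeric sort: a plain string sort would rank "v2.0.8" above "v2.0.10"
latest_v2 = sorted(v2_releases, key=lambda s: list(map(int, re.findall(r"\d+", s))))[-1]
```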
@staticmethod
def get_server_local_version():
"""
查看当前版本
"""
version_file = settings.ROOT_PATH / "version.py"
if version_file.exists():
try:
with open(version_file, 'rb') as f:
version = f.read()
pattern = r"'([^']*)'"
match = re.search(pattern, str(version))
if match:
version = match.group(1)
return version
else:
logger.warn("未找到版本号")
return None
except Exception as err:
logger.error(f"加载版本文件 {version_file} 出错:{str(err)}")
return APP_VERSION
@staticmethod
def get_frontend_version():
"""
获取前端版本
"""
if SystemUtils.is_frozen() and SystemUtils.is_windows():
version_file = settings.CONFIG_PATH.parent / "nginx" / "html" / "version.txt"
else:
version_file = Path(settings.FRONTEND_PATH) / "version.txt"
if version_file.exists():
try:
with open(version_file, 'r') as f:
version = str(f.read()).strip()
return version
except Exception as err:
logger.error(f"加载版本文件 {version_file} 出错:{str(err)}")
else:
logger.warn("未找到前端版本文件,请正确设置 FRONTEND_PATH")
return None
return FRONTEND_VERSION


@@ -120,6 +120,7 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
site_ua=site.get("ua") or settings.USER_AGENT,
site_proxy=site.get("proxy"),
site_order=site.get("pri"),
site_downloader=site.get("downloader"),
title=item.get("title"),
enclosure=item.get("enclosure"),
page_url=item.get("link"),


@@ -120,7 +120,7 @@ class TransferChain(ChainBase):
# 非MoviePilot下载的任务按文件识别
mediainfo = None
# 执行整理
# 执行整理,匹配源目录
state, errmsg = self.__do_transfer(
fileitem=FileItem(
storage="local",
@@ -131,7 +131,8 @@ class TransferChain(ChainBase):
extension=file_path.suffix.lstrip('.'),
),
mediainfo=mediainfo,
download_hash=torrent.hash
download_hash=torrent.hash,
src_match=True
)
# 设置下载任务状态
@@ -148,7 +149,8 @@ class TransferChain(ChainBase):
target_storage: str = None, target_path: Path = None,
transfer_type: str = None, scrape: bool = None,
season: int = None, epformat: EpisodeFormat = None,
min_filesize: int = 0, download_hash: str = None, force: bool = False) -> Tuple[bool, str]:
min_filesize: int = 0, download_hash: str = None,
force: bool = False, src_match: bool = False) -> Tuple[bool, str]:
"""
执行一个复杂目录的整理操作
:param fileitem: 文件项
@@ -164,6 +166,7 @@ class TransferChain(ChainBase):
:param min_filesize: 最小文件大小(MB)
:param download_hash: 下载记录hash
:param force: 是否强制整理
:param src_match: 是否源目录匹配
返回:成功标识,错误信息
"""
@@ -202,6 +205,8 @@ class TransferChain(ChainBase):
skip_num = 0
# 本次整理方式
current_transfer_type = transfer_type
# 是否全部成功
all_success = True
# 获取待整理路径清单
trans_items = self.__get_trans_fileitems(fileitem)
@@ -281,6 +286,7 @@ class TransferChain(ChainBase):
# 计数
processed_num += 1
skip_num += 1
all_success = False
continue
# 整理成功的不再处理
@@ -291,6 +297,7 @@ class TransferChain(ChainBase):
# 计数
processed_num += 1
skip_num += 1
all_success = False
continue
# 更新进度
@@ -314,6 +321,7 @@ class TransferChain(ChainBase):
# 计数
processed_num += 1
fail_num += 1
all_success = False
continue
# 自定义识别
@@ -350,6 +358,7 @@ class TransferChain(ChainBase):
# 计数
processed_num += 1
fail_num += 1
all_success = False
continue
# 如果未开启新增已入库媒体是否跟随TMDB信息变化则根据tmdbid查询之前的title
@@ -379,6 +388,16 @@ class TransferChain(ChainBase):
if download_file:
download_hash = download_file.download_hash
# 查询整理目标目录
if not target_directory and not target_path:
if src_match:
# 按源目录匹配,以便找到更合适的目录配置
target_directory = self.directoryhelper.get_dir(file_mediainfo,
storage=file_item.storage, src_path=file_path)
else:
# 未指定目标路径,根据媒体信息获取目标目录
target_directory = self.directoryhelper.get_dir(file_mediainfo)
# 执行整理
transferinfo: TransferInfo = self.transfer(fileitem=file_item,
meta=file_meta,
@@ -416,6 +435,7 @@ class TransferChain(ChainBase):
# 计数
processed_num += 1
fail_num += 1
all_success = False
continue
# 汇总信息
@@ -486,15 +506,14 @@ class TransferChain(ChainBase):
})
# 移动模式处理
if current_transfer_type in ["move"]:
if all_success and current_transfer_type in ["move"]:
# 下载器hash
if download_hash:
if self.remove_torrents(download_hash):
logger.info(f"移动模式删除种子成功:{download_hash} ")
# 删除残留文件
# 删除残留目录
if fileitem:
logger.warn(f"删除残留文件夹:【{fileitem.storage}{fileitem.path}")
self.storagechain.delete_file(fileitem)
self.storagechain.delete_media_file(fileitem, delete_self=False)
# 结束进度
logger.info(f"{fileitem.path} 整理完成,共 {total_num} 个文件,"


@@ -69,6 +69,8 @@ class ConfigModel(BaseModel):
DB_MAX_OVERFLOW: int = 500
# SQLite 的 busy_timeout 参数,默认为 60 秒
DB_TIMEOUT: int = 60
# SQLite 是否启用 WAL 模式,默认关闭
DB_WAL_ENABLE: bool = False
# 配置文件目录
CONFIG_DIR: Optional[str] = None
# 超级管理员
@@ -134,7 +136,7 @@ class ConfigModel(BaseModel):
'.aifc', '.aiff', '.alac', '.adif', '.adts',
'.flac', '.midi', '.opus', '.sfalc']
# 下载器临时文件后缀
DOWNLOAD_TMPEXT: list = ['.!qB', '.part']
DOWNLOAD_TMPEXT: list = ['.!qb', '.part']
# 媒体服务器同步间隔(小时)
MEDIASERVER_SYNC_INTERVAL: int = 6
# 订阅模式
@@ -151,6 +153,8 @@ class ConfigModel(BaseModel):
SEARCH_MULTIPLE_NAME: bool = False
# 站点数据刷新间隔(小时)
SITEDATA_REFRESH_INTERVAL: int = 6
# 读取和发送站点消息
SITE_MESSAGE: bool = True
# 种子标签
TORRENT_TAG: str = "MOVIEPILOT"
# 下载站点字幕


@@ -23,6 +23,8 @@ class TorrentInfo:
site_proxy: bool = False
# 站点优先级
site_order: int = 0
# 站点下载器
site_downloader: str = None
# 种子名称
title: str = None
# 种子副标题


@@ -84,6 +84,7 @@ class EventManager(metaclass=Singleton):
self.__disabled_handlers = set() # 禁用的事件处理器集合
self.__disabled_classes = set() # 禁用的事件处理器类集合
self.__lock = threading.Lock() # 线程锁
self.__processing_events = {} # 用于记录当前正在处理的事件 {event_hash: event}
def start(self):
"""
@@ -129,6 +130,14 @@ class EventManager(metaclass=Singleton):
for handler in handlers.values()
)
@staticmethod
def __get_event_hash(event: Event) -> str:
"""
计算事件的唯一标识符hash
"""
data_string = str(event.event_type.value) + str(event.event_data)
return str(uuid.uuid5(uuid.NAMESPACE_DNS, data_string))
def send_event(self, etype: Union[EventType, ChainEventType], data: Optional[Union[Dict, ChainEventData]] = None,
priority: int = DEFAULT_EVENT_PRIORITY) -> Optional[Event]:
"""
@@ -139,6 +148,12 @@ class EventManager(metaclass=Singleton):
:return: 如果是链式事件,返回处理后的事件数据;否则返回 None
"""
event = Event(etype, data, priority)
event_hash = self.__get_event_hash(event)
with self.__lock:
if event_hash in self.__processing_events:
logger.debug(f"Duplicate event ignored: {event}")
return None
self.__processing_events[event_hash] = event
if isinstance(etype, EventType):
self.__trigger_broadcast_event(event)
elif isinstance(etype, ChainEventType):
@@ -320,9 +335,14 @@ class EventManager(metaclass=Singleton):
"""
触发链式事件,按顺序调用订阅的处理器,并记录处理耗时
"""
logger.debug(f"Triggering synchronous chain event: {event}")
dispatch = self.__dispatch_chain_event(event)
return event if dispatch else None
try:
logger.debug(f"Triggering synchronous chain event: {event}")
dispatch = self.__dispatch_chain_event(event)
return event if dispatch else None
finally:
event_hash = self.__get_event_hash(event)
with self.__lock:
self.__processing_events.pop(event_hash, None)
def __trigger_broadcast_event(self, event: Event):
"""
@@ -363,6 +383,9 @@ class EventManager(metaclass=Singleton):
return
for handler_id, handler in handlers.items():
self.__executor.submit(self.__safe_invoke_handler, handler, event)
event_hash = self.__get_event_hash(event)
with self.__lock:
self.__processing_events.pop(event_hash, None)
def __safe_invoke_handler(self, handler: Callable, event: Event):
"""
@@ -499,11 +522,18 @@ class EventManager(metaclass=Singleton):
def decorator(f: Callable):
# 将输入的事件类型统一转换为列表格式
if isinstance(etype, list):
event_list = etype # 传入的已经是列表,直接使用
# 传入的已经是列表,直接使用
event_list = etype
elif etype is EventType:
# 订阅所有事件
event_list = []
for et in etype:
event_list.append(et)
else:
event_list = [etype] # 不是列表则包裹成单一元素的列表
# 不是列表则包裹成单一元素的列表
event_list = [etype]
# 遍历列表,处理每个事件类型
for event in event_list:
if isinstance(event, (EventType, ChainEventType)):
self.add_event_listener(event, f)
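The duplicate-event guard added in this diff (fix #3154) keys in-flight events by a deterministic uuid5 hash of the event type plus payload. A minimal stand-alone sketch of the idea, with plain strings/dicts standing in for the real `Event` objects:

```python
import uuid

def get_event_hash(event_type: str, event_data: dict) -> str:
    # Same recipe as the diff: hash the type plus the stringified payload
    data_string = str(event_type) + str(event_data)
    return str(uuid.uuid5(uuid.NAMESPACE_DNS, data_string))

processing_events = {}

def send_event(event_type: str, event_data: dict):
    """Drop the event if an identical one is still being processed."""
    event_hash = get_event_hash(event_type, event_data)
    if event_hash in processing_events:
        return None  # duplicate in flight, ignored
    processing_events[event_hash] = (event_type, event_data)
    return event_hash
```

In the diff, the hash is removed from the registry in a `finally` block (chain events) or after all handlers are submitted (broadcast events), so the guard only suppresses concurrent duplicates, not later re-sends.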


@@ -30,8 +30,8 @@ class MetaVideo(MetaBase):
_episode_re = r"EP?(\d{2,4})$|^EP?(\d{1,4})$|^S\d{1,2}EP?(\d{1,4})$|S\d{2}EP?(\d{2,4})"
_part_re = r"(^PART[0-9ABI]{0,2}$|^CD[0-9]{0,2}$|^DVD[0-9]{0,2}$|^DISK[0-9]{0,2}$|^DISC[0-9]{0,2}$)"
_roman_numerals = r"^(?=[MDCLXVI])M*(C[MD]|D?C{0,3})(X[CL]|L?X{0,3})(I[XV]|V?I{0,3})$"
_source_re = r"^BLURAY$|^HDTV$|^UHDTV$|^HDDVD$|^WEBRIP$|^DVDRIP$|^BDRIP$|^BLU$|^WEB$|^BD$|^HDRip$|^REMUX$|^UHD$|^REPACK$"
_effect_re = r"^SDR$|^HDR\d*$|^DOLBY$|^DOVI$|^DV$|^3D$"
_source_re = r"^BLURAY$|^HDTV$|^UHDTV$|^HDDVD$|^WEBRIP$|^DVDRIP$|^BDRIP$|^BLU$|^WEB$|^BD$|^HDRip$|^REMUX$|^UHD$"
_effect_re = r"^SDR$|^HDR\d*$|^DOLBY$|^DOVI$|^DV$|^3D$|^REPACK$"
_resources_type_re = r"%s|%s" % (_source_re, _effect_re)
_name_no_begin_re = r"^[\[【].+?[\]】]"
_name_no_chinese_re = r".*版|.*字幕"
@@ -524,16 +524,7 @@ class MetaVideo(MetaBase):
"""
if not self.name:
return
source_res = re.search(r"(%s)" % self._source_re, token, re.IGNORECASE)
if source_res:
self._last_token_type = "source"
self._continue_flag = False
self._stop_name_flag = True
if not self._source:
self._source = source_res.group(1)
self._last_token = self._source.upper()
return
elif token.upper() == "DL" \
if token.upper() == "DL" \
and self._last_token_type == "source" \
and self._last_token == "WEB":
self._source = "WEB-DL"
@@ -542,13 +533,37 @@ class MetaVideo(MetaBase):
elif token.upper() == "RAY" \
and self._last_token_type == "source" \
and self._last_token == "BLU":
self._source = "BluRay"
# UHD BluRay组合
if self._source == "UHD":
self._source = "UHD BluRay"
else:
self._source = "BluRay"
self._continue_flag = False
return
elif token.upper() == "WEBDL":
self._source = "WEB-DL"
self._continue_flag = False
return
# UHD REMUX组合
if token.upper() == "REMUX" \
and self._source == "BluRay":
self._source = "BluRay REMUX"
self._continue_flag = False
return
elif token.upper() == "BLURAY" \
and self._source == "UHD":
self._source = "UHD BluRay"
self._continue_flag = False
return
source_res = re.search(r"(%s)" % self._source_re, token, re.IGNORECASE)
if source_res:
self._last_token_type = "source"
self._continue_flag = False
self._stop_name_flag = True
if not self._source:
self._source = source_res.group(1)
self._last_token = self._source.upper()
return
effect_res = re.search(r"(%s)" % self._effect_re, token, re.IGNORECASE)
if effect_res:
self._last_token_type = "effect"
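The reordered token handling above checks the combination cases (UHD + BluRay, BluRay + REMUX) before the generic `_source_re` match. A simplified sketch of that ordering, with a small token set standing in for the full regex:

```python
def parse_source(tokens):
    """Sketch: combination checks run before the generic source match."""
    source = None
    for token in (t.upper() for t in tokens):
        if token == "REMUX" and source == "BluRay":
            source = "BluRay REMUX"        # UHD REMUX combination hunk
        elif token == "BLURAY" and source == "UHD":
            source = "UHD BluRay"          # UHD BluRay combination hunk
        elif source is None and token in {"BLURAY", "UHD", "WEB", "HDTV"}:
            # Generic source match (stand-in for the _source_re search)
            source = {"BLURAY": "BluRay", "UHD": "UHD"}.get(token, token)
    return source
```

With the old ordering, the generic match consumed `BLURAY` and `REMUX` first and the combined forms were never produced.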


@@ -1,22 +1,25 @@
from typing import Any, Generator, List, Optional, Self, Tuple
from sqlalchemy import NullPool, QueuePool, and_, create_engine, inspect
from sqlalchemy import NullPool, QueuePool, and_, create_engine, inspect, text
from sqlalchemy.orm import Session, as_declarative, declared_attr, scoped_session, sessionmaker
from app.core.config import settings
# 根据池类型设置 poolclass 和相关参数
pool_class = NullPool if settings.DB_POOL_TYPE == "NullPool" else QueuePool
connect_args = {
"timeout": settings.DB_TIMEOUT
}
# 启用 WAL 模式时的额外配置
if settings.DB_WAL_ENABLE:
connect_args["check_same_thread"] = False
kwargs = {
"url": f"sqlite:///{settings.CONFIG_PATH}/user.db",
"pool_pre_ping": settings.DB_POOL_PRE_PING,
"echo": settings.DB_ECHO,
"poolclass": pool_class,
"pool_recycle": settings.DB_POOL_RECYCLE,
"connect_args": {
# "check_same_thread": False,
"timeout": settings.DB_TIMEOUT
}
"connect_args": connect_args
}
# 当使用 QueuePool 时,添加 QueuePool 特有的参数
if pool_class == QueuePool:
@@ -27,6 +30,11 @@ if pool_class == QueuePool:
})
# 创建数据库引擎
Engine = create_engine(**kwargs)
# 根据配置设置日志模式
journal_mode = "WAL" if settings.DB_WAL_ENABLE else "DELETE"
with Engine.connect() as connection:
current_mode = connection.execute(text(f"PRAGMA journal_mode={journal_mode};")).scalar()
print(f"Database journal mode set to: {current_mode}")
# 会话工厂
SessionFactory = sessionmaker(bind=Engine)
@@ -49,11 +57,34 @@ def get_db() -> Generator:
db.close()
def perform_checkpoint(mode: str = "PASSIVE"):
"""
执行 SQLite 的 checkpoint 操作,将 WAL 文件内容写回主数据库
:param mode: checkpoint 模式,可选值包括 "PASSIVE"、"FULL"、"RESTART"、"TRUNCATE"
默认为 "PASSIVE",即不锁定 WAL 文件的轻量级同步
"""
if not settings.DB_WAL_ENABLE:
return
valid_modes = {"PASSIVE", "FULL", "RESTART", "TRUNCATE"}
if mode.upper() not in valid_modes:
raise ValueError(f"Invalid checkpoint mode '{mode}'. Must be one of {valid_modes}")
try:
# 使用指定的 checkpoint 模式,确保 WAL 文件数据被正确写回主数据库
with Engine.connect() as conn:
conn.execute(text(f"PRAGMA wal_checkpoint({mode.upper()});"))
except Exception as e:
print(f"Error during WAL checkpoint: {e}")
def close_database():
"""
关闭所有数据库连接
关闭所有数据库连接并清理资源
"""
Engine.dispose()
try:
# 释放连接池SQLite 会自动清空 WAL 文件,这里不单独再调用 checkpoint
Engine.dispose()
except Exception as e:
print(f"Error while disposing database connections: {e}")
def get_args_db(args: tuple, kwargs: dict) -> Optional[Session]:
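The WAL toggle and checkpoint above can be exercised with the stdlib `sqlite3` module directly; the throwaway file path here stands in for `user.db`:

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "user.db")
conn = sqlite3.connect(db_path)

# Equivalent of DB_WAL_ENABLE=True: the PRAGMA returns the mode actually set
current_mode = conn.execute("PRAGMA journal_mode=WAL;").fetchone()[0]

conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()

# perform_checkpoint("TRUNCATE"): write WAL contents back and reset the -wal file
conn.execute("PRAGMA wal_checkpoint(TRUNCATE);")
conn.close()
```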


@@ -46,11 +46,13 @@ class Site(Base):
# 流控间隔
limit_seconds = Column(Integer, default=0)
# 超时时间
timeout = Column(Integer, default=0)
timeout = Column(Integer, default=15)
# 是否启用
is_active = Column(Boolean(), default=True)
# 创建时间
lst_mod_date = Column(String, default=datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
# 下载器
downloader = Column(String)
@staticmethod
@db_query


@@ -64,6 +64,8 @@ class Subscribe(Base):
username = Column(String)
# 订阅站点
sites = Column(JSON, default=list)
# 下载器
downloader = Column(String)
# 是否洗版
best_version = Column(Integer, default=0)
# 当前优先级


@@ -4,7 +4,7 @@ from typing import List, Optional
from app import schemas
from app.core.context import MediaInfo
from app.db.systemconfig_oper import SystemConfigOper
from app.schemas.types import SystemConfigKey, MediaType
from app.schemas.types import SystemConfigKey
class DirectoryHelper:
@@ -48,40 +48,43 @@ class DirectoryHelper:
"""
return [d for d in self.get_library_dirs() if d.library_storage == "local"]
def get_dir(self, media: MediaInfo, src_path: Path = None, dest_path: Path = None,
fileitem: schemas.FileItem = None, local: bool = False) -> Optional[schemas.TransferDirectoryConf]:
def get_dir(self, media: MediaInfo, storage: str = "local",
src_path: Path = None, dest_path: Path = None, fileitem: schemas.FileItem = None
) -> Optional[schemas.TransferDirectoryConf]:
"""
根据媒体信息获取下载目录、媒体库目录配置
:param media: 媒体信息
:param storage: 存储类型
:param src_path: 源目录,有值时直接匹配
:param dest_path: 目标目录,有值时直接匹配
:param fileitem: 文件项,使用文件路径匹配
:param local: 是否本地目录
"""
# 处理类型
if media:
media_type = media.type.value
else:
media_type = MediaType.UNKNOWN.value
if not media:
return None
# 电影/电视剧
media_type = media.type.value
dirs = self.get_dirs()
# 按照配置顺序查找
for d in dirs:
# 没有启用整理的目录
if not d.monitor_type:
continue
# 存储类型不匹配
if storage and d.storage != storage:
continue
# 下载目录
download_path = Path(d.download_path)
# 媒体库目录
library_path = Path(d.library_path)
# 下载目录不匹配, 不符合条件, 通常处理`下载`匹配
if src_path and download_path != src_path:
# 有源目录时,源目录不匹配下载目录
if src_path and not src_path.is_relative_to(download_path):
continue
# 媒体库目录不匹配, 或监控方式为None(即不自动整理), 不符合条件, 通常处理`整理`匹配
if dest_path:
if library_path != dest_path or not d.monitor_type:
continue
# 没有目录配置时起作用, 通常处理`手动整理`未选择`目标目录`的情况
# 有文件项时,文件项不匹配下载目录
if fileitem and not Path(fileitem.path).is_relative_to(download_path):
continue
# 本地目录
if local and d.storage != "local":
# 有目标目录时,目标目录不匹配媒体库目录
if dest_path and not dest_path.is_relative_to(library_path):
continue
# 目录类型为全部的,符合条件
if not d.media_type:
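The directory matching above changed from exact path equality to containment via `Path.is_relative_to` (Python 3.9+), so a source file anywhere under the configured download directory now matches. A minimal contrast with hypothetical paths:

```python
from pathlib import Path

download_path = Path("/downloads")
src_path = Path("/downloads/movies/some.file").parent

# Old behaviour: only an exact path match counted
old_match = src_path == download_path

# New behaviour: any source path under the download directory matches
new_match = src_path.is_relative_to(download_path)
```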


@@ -20,7 +20,15 @@ class FormatParser(object):
self._format = eformat
self._start_ep = None
self._end_ep = None
self.__offset = offset or "EP"
if not offset:
self.__offset = "EP"
elif "EP" in offset:
self.__offset = offset
else:
if offset.startswith("-") or offset.startswith("+"):
self.__offset = f"EP{offset}"
else:
self.__offset = f"EP+{offset}"
self._key = key
self._part = None
if part:
@@ -91,7 +99,7 @@ class FormatParser(object):
s, e = self.__handle_single(file_name)
start_ep = self.__offset.replace("EP", str(s)) if s else None
end_ep = self.__offset.replace("EP", str(e)) if e else None
return int(eval(start_ep)) if start_ep else None, int(eval(end_ep)) if end_ep else None, self.part
def __handle_single(self, file: str) -> Tuple[Optional[int], Optional[int]]:
"""
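The new offset normalization in `FormatParser.__init__` gives bare numbers an explicit `EP+` prefix so the later `eval`-based substitution always yields a valid expression. A sketch of both halves:

```python
def normalize_offset(offset):
    # Mirrors the new FormatParser.__init__: bare numbers get an explicit "EP+"
    if not offset:
        return "EP"
    if "EP" in offset:
        return offset
    if offset.startswith(("-", "+")):
        return f"EP{offset}"
    return f"EP+{offset}"

def apply_offset(offset_expr, episode):
    # As in the diff: substitute the episode number for "EP" and evaluate
    return int(eval(offset_expr.replace("EP", str(episode))))
```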


@@ -287,7 +287,7 @@ class TorrentHelper(metaclass=Singleton):
if not file:
continue
file_path = Path(file)
if file_path.suffix not in settings.RMT_MEDIAEXT:
if not file_path.suffix or file_path.suffix.lower() not in settings.RMT_MEDIAEXT:
continue
# 只使用文件名识别
meta = MetaInfo(file_path.stem)


@@ -10,7 +10,7 @@ from app.log import logger
class TwoFactorAuth:
def __init__(self, code_or_secret: str):
if code_or_secret and len(code_or_secret) > 16:
if code_or_secret and len(code_or_secret) >= 16:
self.code = None
self.secret = code_or_secret
else:


@@ -65,9 +65,8 @@ class FileManagerModule(_ModuleBase):
"""
测试模块连接性
"""
directoryhelper = DirectoryHelper()
# 检查目录
dirs = directoryhelper.get_dirs()
dirs = self.directoryhelper.get_dirs()
if not dirs:
return False, "未设置任何目录"
for d in dirs:
@@ -144,7 +143,7 @@ class FileManagerModule(_ModuleBase):
return
storage_oper.set_config(conf)
def generate_qrcode(self, storage: str) -> Optional[Dict[str, str]]:
def generate_qrcode(self, storage: str) -> Optional[Tuple[dict, str]]:
"""
生成二维码
"""
@@ -197,6 +196,35 @@ class FileManagerModule(_ModuleBase):
return result
def any_files(self, fileitem: FileItem, extensions: list = None) -> Optional[bool]:
"""
查询当前目录下是否存在指定扩展名任意文件
"""
storage_oper = self.__get_storage_oper(fileitem.storage)
if not storage_oper:
logger.error(f"不支持 {fileitem.storage} 的文件浏览")
return None
def __any_file(_item: FileItem):
"""
递归处理
"""
_items = storage_oper.list(_item)
if _items:
if not extensions:
return True
for t in _items:
if (t.type == "file"
and t.extension
and f".{t.extension.lower()}" in extensions):
return True
elif t.type == "dir" and __any_file(t):
return True
return False
# 返回结果
return __any_file(fileitem)
def create_folder(self, fileitem: FileItem, name: str) -> Optional[FileItem]:
"""
创建目录
@@ -240,7 +268,7 @@ class FileManagerModule(_ModuleBase):
return None
return storage_oper.download(fileitem, path=path)
def upload_file(self, fileitem: FileItem, path: Path) -> Optional[FileItem]:
def upload_file(self, fileitem: FileItem, path: Path, new_name: str = None) -> Optional[FileItem]:
"""
上传文件
"""
@@ -248,7 +276,7 @@ class FileManagerModule(_ModuleBase):
if not storage_oper:
logger.error(f"不支持 {fileitem.storage} 的上传处理")
return None
return storage_oper.upload(fileitem, path)
return storage_oper.upload(fileitem, path, new_name)
def get_file_item(self, storage: str, path: Path) -> Optional[FileItem]:
"""
@@ -320,14 +348,6 @@ class FileManagerModule(_ModuleBase):
fileitem=fileitem,
message=f"{target_path} 不是有效目录")
# 获取目标路径
directoryhelper = DirectoryHelper()
if not target_directory:
# 根据目的路径查找目录配置
if target_path:
target_directory = directoryhelper.get_dir(mediainfo, dest_path=target_path)
else:
target_directory = directoryhelper.get_dir(mediainfo, fileitem=fileitem)
if target_directory:
# 拼装媒体库一、二级子目录
target_path = self.__get_dest_dir(mediainfo=mediainfo, target_dir=target_directory)
@@ -337,6 +357,11 @@ class FileManagerModule(_ModuleBase):
# 整理方式
if not transfer_type:
transfer_type = target_directory.transfer_type
if not transfer_type:
logger.error(f"{target_directory.name} 未设置整理方式")
return TransferInfo(success=False,
fileitem=fileitem,
message=f"{target_directory.name} 未设置整理方式")
# 是否需要刮削
if scrape is None:
need_scrape = target_directory.scraping
@@ -349,7 +374,7 @@ class FileManagerModule(_ModuleBase):
# 覆盖模式
overwrite_mode = target_directory.overwrite_mode
elif target_path:
# 自定义目标路径,仅适用于手动整理的场景
# 手动整理的场景,有自定义目标路径
need_scrape = scrape or False
need_rename = True
need_notify = False
@@ -445,6 +470,8 @@ class FileManagerModule(_ModuleBase):
state = source_oper.link(fileitem, target_file)
elif transfer_type == "softlink":
state = source_oper.softlink(fileitem, target_file)
else:
return None, f"不支持的整理方式:{transfer_type}"
if state:
return __get_targetitem(target_file), ""
else:
@@ -460,13 +487,8 @@ class FileManagerModule(_ModuleBase):
target_fileitem = target_oper.get_folder(target_file.parent)
if target_fileitem:
# 上传文件
new_item = target_oper.upload(target_fileitem, filepath)
new_item = target_oper.upload(target_fileitem, filepath, target_file.name)
if new_item:
# 重命名为目标文件名
if new_item.name != target_file.name:
if target_oper.rename(new_item, target_file.name):
new_item.name = target_file.name
new_item.path = str(Path(new_item.path).parent / target_file.name)
return new_item, ""
else:
return None, f"{fileitem.path} 上传 {target_storage} 失败"
@@ -478,13 +500,8 @@ class FileManagerModule(_ModuleBase):
target_fileitem = target_oper.get_folder(target_file.parent)
if target_fileitem:
# 上传文件
new_item = target_oper.upload(target_fileitem, filepath)
new_item = target_oper.upload(target_fileitem, filepath, target_file.name)
if new_item:
# 重命名为目标文件名
if new_item.name != target_file.name:
if target_oper.rename(new_item, target_file.name):
new_item.name = target_file.name
new_item.path = str(Path(new_item.path).parent / target_file.name)
# 删除源文件
source_oper.delete(fileitem)
return new_item, ""
@@ -596,18 +613,16 @@ class FileManagerModule(_ModuleBase):
if not parent_item:
return False, f"{org_path} 上级目录获取失败"
# 字幕文件列表
file_list: List[FileItem] = storage_oper.list(parent_item)
file_list: List[FileItem] = storage_oper.list(parent_item) or []
file_list = [f for f in file_list if f.type == "file" and f.extension
and f".{f.extension.lower()}" in settings.RMT_SUBEXT]
if len(file_list) == 0:
logger.debug(f"{parent_item.path} 目录下没有找到字幕文件...")
logger.info(f"{parent_item.path} 目录下没有找到字幕文件...")
else:
logger.debug("字幕文件清单:" + str(file_list))
logger.info(f"字幕文件清单:{[f.name for f in file_list]}")
# 识别文件名
metainfo = MetaInfoPath(org_path)
for sub_item in file_list:
if sub_item.type == "dir" or not sub_item.extension:
continue
if f".{sub_item.extension.lower()}" not in settings.RMT_SUBEXT:
continue
# 识别字幕文件名
sub_file_name = re.sub(_zhtw_sub_re,
".",


@@ -1,6 +1,6 @@
from abc import ABCMeta, abstractmethod
from pathlib import Path
from typing import Optional, List, Union, Dict
from typing import Optional, List, Union, Dict, Tuple
from app import schemas
from app.helper.storage import StorageHelper
@@ -16,7 +16,14 @@ class StorageBase(metaclass=ABCMeta):
def __init__(self):
self.storagehelper = StorageHelper()
def generate_qrcode(self, *args, **kwargs) -> Optional[Dict[str, str]]:
@abstractmethod
def init_storage(self):
"""
初始化
"""
pass
def generate_qrcode(self, *args, **kwargs) -> Optional[Tuple[dict, str]]:
pass
def check_login(self, *args, **kwargs) -> Optional[Dict[str, str]]:
@@ -28,11 +35,19 @@ class StorageBase(metaclass=ABCMeta):
"""
return self.storagehelper.get_storage(self.schema.value)
def get_conf(self) -> dict:
"""
获取配置
"""
conf = self.get_config()
return conf.config if conf else {}
def set_config(self, conf: dict):
"""
设置配置
"""
self.storagehelper.set_storage(self.schema.value, conf)
self.init_storage()
def support_transtype(self) -> dict:
"""
@@ -112,11 +127,12 @@ class StorageBase(metaclass=ABCMeta):
pass
@abstractmethod
def upload(self, fileitem: schemas.FileItem, path: Path) -> Optional[schemas.FileItem]:
def upload(self, fileitem: schemas.FileItem, path: Path, new_name: str = None) -> Optional[schemas.FileItem]:
"""
上传文件
:param fileitem: 上传目录项
:param path: 本地文件路径
:param new_name: 上传后文件名
"""
pass


@@ -16,10 +16,11 @@ from app.schemas.types import StorageSchema
from app.utils.http import RequestUtils
from aligo import Aligo, BaseFile
from app.utils.singleton import Singleton
from app.utils.string import StringUtils
class AliPan(StorageBase):
class AliPan(StorageBase, metaclass=Singleton):
"""
阿里云相关操作
"""
@@ -54,17 +55,26 @@ class AliPan(StorageBase):
except FileNotFoundError:
logger.debug('未发现 aria2c')
self._has_aria2c = False
self.init_storage()
self.__init_aligo()
def __init_aligo(self):
def init_storage(self):
"""
初始化 aligo
"""
def show_qrcode(qr_link: str):
"""
显示二维码
"""
logger.info(f"请用阿里云盘 App 扫码登录:{qr_link}")
refresh_token = self.__auth_params.get("refreshToken")
if refresh_token:
self.aligo = Aligo(refresh_token=refresh_token, use_aria2=self._has_aria2c,
name="MoviePilot V2", level=logging.ERROR)
try:
self.aligo = Aligo(refresh_token=refresh_token, show=show_qrcode, use_aria2=self._has_aria2c,
name="MoviePilot V2", level=logging.ERROR, re_login=False)
except Exception as err:
logger.error(f"初始化阿里云盘失败:{str(err)}")
self.__clear_params()
@property
def __auth_params(self):
@@ -160,7 +170,7 @@ class AliPan(StorageBase):
})
self.__update_params(data)
self.__update_drives()
self.__init_aligo()
self.init_storage()
except Exception as e:
return {}, f"bizExt 解码失败:{str(e)}"
return data, ""
@@ -180,12 +190,16 @@ class AliPan(StorageBase):
"""
获取用户信息drive_id等
"""
if not self.aligo:
return {}
return self.aligo.get_user()
def __update_drives(self):
"""
更新用户存储根目录
"""
if not self.aligo:
return
drivers = self.aligo.list_my_drives()
for driver in drivers:
if driver.category == "resource":
@@ -347,13 +361,13 @@ class AliPan(StorageBase):
"""
if not self.aligo:
return None
local_path = self.aligo.download_file(file_id=fileitem.fileid, drive_id=fileitem.drive_id,
local_path = self.aligo.download_file(file_id=fileitem.fileid, drive_id=fileitem.drive_id, # noqa
local_folder=str(path or settings.TEMP_PATH))
if local_path:
return Path(local_path)
return None
def upload(self, fileitem: schemas.FileItem, path: Path) -> Optional[schemas.FileItem]:
def upload(self, fileitem: schemas.FileItem, path: Path, new_name: str = None) -> Optional[schemas.FileItem]:
"""
上传文件,并标记完成
"""
@@ -361,7 +375,7 @@ class AliPan(StorageBase):
return None
# 上传文件
result = self.aligo.upload_file(file_path=str(path), parent_file_id=fileitem.fileid,
drive_id=fileitem.drive_id, name=path.name,
drive_id=fileitem.drive_id, name=new_name or path.name,
check_name_mode="refuse")
if result:
item = self.aligo.get_file(file_id=result.file_id, drive_id=result.drive_id)


@@ -0,0 +1,771 @@
import json
import logging
from datetime import datetime
from pathlib import Path
from typing import Optional, Tuple, List, Dict, Union
from requests import Response
from cachetools import cached, TTLCache
from app import schemas
from app.core.config import settings
from app.log import logger
from app.modules.filemanager.storages import StorageBase
from app.schemas.types import StorageSchema
from app.utils.http import RequestUtils
from app.utils.url import UrlUtils
class Alist(StorageBase):
"""
Alist相关操作
api文档https://alist.nn.ci/zh/guide/api
"""
# 存储类型
schema = StorageSchema.Alist
# 支持的整理方式
transtype = {
"copy": "复制",
"move": "移动",
}
def __init__(self):
super().__init__()
def init_storage(self):
"""
初始化
"""
pass
@property
def __get_base_url(self) -> str:
"""
获取基础URL
"""
url = self.get_conf().get("url")
if url is None:
return ""
return UrlUtils.standardize_base_url(url)
def __get_api_url(self, path: str) -> str:
"""
获取API URL
"""
return UrlUtils.adapt_request_url(self.__get_base_url, path)
@property
def __get_valuable_toke(self) -> str:
"""
获取一个可用的token
如果设置永久令牌则返回永久令牌
否则使用账号密码生成临时令牌
"""
return self.__generate_token
@property
@cached(cache=TTLCache(maxsize=1, ttl=60 * 60 * 24 * 2 - 60 * 5))
def __generate_token(self) -> str:
"""
使用账号密码生成一个临时token
缓存2天,提前5分钟更新
"""
conf = self.get_conf()
resp: Response = RequestUtils(headers={
'Content-Type': 'application/json'
}).post_res(
self.__get_api_url("/api/auth/login"),
data=json.dumps({
"username": conf.get("username"),
"password": conf.get("password"),
}),
)
"""
{
"username": "{{alist_username}}",
"password": "{{alist_password}}"
}
======================================
{
"code": 200,
"message": "success",
"data": {
"token": "abcd"
}
}
"""
if resp is None:
logger.warning("请求登录失败,无法连接alist服务")
return ""
if resp.status_code != 200:
logger.warning(f"更新令牌请求发送失败,状态码:{resp.status_code}")
return ""
result = resp.json()
if result["code"] != 200:
logger.critical(f'更新令牌,错误信息:{result["message"]}')
return ""
logger.debug("AList获取令牌成功")
return result["data"]["token"]
def __get_header_with_token(self) -> dict:
"""
获取带有token的header
"""
return {"Authorization": self.__get_valuable_toke}
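The diff caches the Alist login token with cachetools' `TTLCache` for just under two days. The same idea with only the stdlib, where `_login` is a stand-in for `POST /api/auth/login` returning `data["token"]`:

```python
import time

class TokenCache:
    """Re-login only after the TTL expires, like the @cached TTLCache in the diff."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._token = None
        self._fetched_at = 0.0
        self.logins = 0  # counts real login requests

    def _login(self) -> str:
        # Stand-in for POST /api/auth/login returning data["token"]
        self.logins += 1
        return f"token-{self.logins}"

    def get(self) -> str:
        now = time.monotonic()
        if self._token is None or now - self._fetched_at >= self.ttl:
            self._token = self._login()
            self._fetched_at = now
        return self._token
```

Caching for slightly less than the token's two-day validity (the diff subtracts five minutes) ensures a fresh token is fetched before the old one can expire mid-request.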
def check(self) -> bool:
"""
检查存储是否可用
"""
pass
def list(
self,
fileitem: schemas.FileItem,
password: str = "",
page: int = 1,
per_page: int = 0,
refresh: bool = False,
) -> Optional[List[schemas.FileItem]]:
"""
浏览文件
:param fileitem: 文件项
:param password: 路径密码
:param page: 页码
:param per_page: 每页数量
:param refresh: 是否刷新
"""
if fileitem.type == "file":
item = self.get_item(Path(fileitem.path))
if item:
return [item]
return None
resp: Response = RequestUtils(
headers=self.__get_header_with_token()
).post_res(
self.__get_api_url("/api/fs/list"),
json={
"path": fileitem.path,
"password": password,
"page": page,
"per_page": per_page,
"refresh": refresh,
},
)
"""
{
"path": "/t",
"password": "",
"page": 1,
"per_page": 0,
"refresh": false
}
======================================
{
"code": 200,
"message": "success",
"data": {
"content": [
{
"name": "Alist V3.md",
"size": 1592,
"is_dir": false,
"modified": "2024-05-17T13:47:55.4174917+08:00",
"created": "2024-05-17T13:47:47.5725906+08:00",
"sign": "",
"thumb": "",
"type": 4,
"hashinfo": "null",
"hash_info": null
}
],
"total": 1,
"readme": "",
"header": "",
"write": true,
"provider": "Local"
}
}
"""
if resp is None:
logger.warning(f"请求获取目录 {fileitem.path} 的文件列表失败,无法连接alist服务")
return
if resp.status_code != 200:
logger.warning(
f"请求获取目录 {fileitem.path} 的文件列表失败,状态码:{resp.status_code}"
)
return
result = resp.json()
if result["code"] != 200:
logger.warning(
f'获取目录 {fileitem.path} 的文件列表失败,错误信息:{result["message"]}'
)
return
return [
schemas.FileItem(
storage=self.schema.value,
type="dir" if item["is_dir"] else "file",
path=(Path(fileitem.path) / item["name"]).as_posix() + ("/" if item["is_dir"] else ""),
name=item["name"],
basename=Path(item["name"]).stem,
extension=Path(item["name"]).suffix[1:] if not item["is_dir"] else None,
size=item["size"] if not item["is_dir"] else None,
modify_time=self.__parse_timestamp(item["modified"]),
thumbnail=item["thumb"],
)
for item in result["data"]["content"] or []
]
def create_folder(
self, fileitem: schemas.FileItem, name: str
) -> Optional[schemas.FileItem]:
"""
创建目录
"""
path = Path(fileitem.path) / name
resp: Response = RequestUtils(
headers=self.__get_header_with_token()
).post_res(
self.__get_api_url("/api/fs/mkdir"),
json={"path": path.as_posix()},
)
"""
{
"path": "/tt"
}
======================================
{
"code": 200,
"message": "success",
"data": null
}
"""
if resp is None:
logger.warning(f"请求创建目录 {path} 失败,无法连接alist服务")
return
if resp.status_code != 200:
logger.warning(f"请求创建目录 {path} 失败,状态码:{resp.status_code}")
return
result = resp.json()
if result["code"] != 200:
logger.warning(f'创建目录 {path} 失败,错误信息:{result["message"]}')
return
return self.get_item(path)
def get_folder(self, path: Path) -> Optional[schemas.FileItem]:
"""
        Get a directory, creating it if it does not exist
"""
folder = self.get_item(path)
if not folder:
folder = self.create_folder(self.get_parent(schemas.FileItem(
storage=self.schema.value,
type="dir",
path=path.as_posix() + "/",
name=path.name,
basename=path.stem
)), path.name)
return folder
def get_item(
self,
path: Path,
password: str = "",
page: int = 1,
per_page: int = 0,
refresh: bool = False,
) -> Optional[schemas.FileItem]:
"""
        Get a file or directory; returns None if it does not exist
        :param path: file path
        :param password: path password
        :param page: page number
        :param per_page: items per page
        :param refresh: whether to refresh the listing
"""
resp: Response = RequestUtils(
headers=self.__get_header_with_token()
).post_res(
self.__get_api_url("/api/fs/get"),
json={
"path": path.as_posix(),
"password": password,
"page": page,
"per_page": per_page,
"refresh": refresh,
},
)
"""
{
"path": "/t",
"password": "",
"page": 1,
"per_page": 0,
"refresh": false
}
======================================
{
"code": 200,
"message": "success",
"data": {
"name": "Alist V3.md",
"size": 2618,
"is_dir": false,
"modified": "2024-05-17T16:05:36.4651534+08:00",
"created": "2024-05-17T16:05:29.2001008+08:00",
"sign": "",
"thumb": "",
"type": 4,
"hashinfo": "null",
"hash_info": null,
"raw_url": "http://127.0.0.1:5244/p/local/Alist%20V3.md",
"readme": "",
"header": "",
"provider": "Local",
"related": null
}
}
"""
        if resp is None:
            logging.warning(f"Failed to get file {path}: unable to connect to the alist service")
            return
        if resp.status_code != 200:
            logging.warning(f"Failed to get file {path}, status code: {resp.status_code}")
            return
        result = resp.json()
        if result["code"] != 200:
            logging.warning(f'Failed to get file {path}, error: {result["message"]}')
            return
return schemas.FileItem(
storage=self.schema.value,
type="dir" if result["data"]["is_dir"] else "file",
path=path.as_posix() + ("/" if result["data"]["is_dir"] else ""),
name=result["data"]["name"],
basename=Path(result["data"]["name"]).stem,
            extension=Path(result["data"]["name"]).suffix[1:] if not result["data"]["is_dir"] else None,
            size=result["data"]["size"] if not result["data"]["is_dir"] else None,
modify_time=self.__parse_timestamp(result["data"]["modified"]),
thumbnail=result["data"]["thumb"],
)
def get_parent(self, fileitem: schemas.FileItem) -> Optional[schemas.FileItem]:
"""
        Get the parent directory
"""
return self.get_folder(Path(fileitem.path).parent)
def delete(self, fileitem: schemas.FileItem) -> bool:
"""
        Delete a file
"""
resp: Response = RequestUtils(
headers=self.__get_header_with_token()
).post_res(
self.__get_api_url("/api/fs/delete"),
json={
"dir": Path(fileitem.path).parent.as_posix(),
"names": [fileitem.name],
},
)
"""
{
"names": [
"string"
],
"dir": "string"
}
======================================
{
"code": 200,
"message": "success",
"data": null
}
"""
        if resp is None:
            logging.warning(f"Failed to delete file {fileitem.path}: unable to connect to the alist service")
            return False
        if resp.status_code != 200:
            logging.warning(
                f"Failed to delete file {fileitem.path}, status code: {resp.status_code}"
            )
            return False
        result = resp.json()
        if result["code"] != 200:
            logging.warning(
                f'Failed to delete file {fileitem.path}, error: {result["message"]}'
            )
            return False
return True
def rename(self, fileitem: schemas.FileItem, name: str) -> bool:
"""
        Rename a file
"""
resp: Response = RequestUtils(
headers=self.__get_header_with_token()
).post_res(
self.__get_api_url("/api/fs/rename"),
json={
"name": name,
"path": fileitem.path,
},
)
"""
{
"name": "test3",
"path": "/阿里云盘/test2"
}
======================================
{
"code": 200,
"message": "success",
"data": null
}
"""
        if not resp:
            logging.warning(f"Failed to rename file {fileitem.path}: unable to connect to the alist service")
            return False
        if resp.status_code != 200:
            logging.warning(
                f"Failed to rename file {fileitem.path}, status code: {resp.status_code}"
            )
            return False
        result = resp.json()
        if result["code"] != 200:
            logging.warning(
                f'Failed to rename file {fileitem.path}, error: {result["message"]}'
            )
            return False
return True
def download(
self,
fileitem: schemas.FileItem,
path: Path = None,
password: str = "",
) -> Optional[Path]:
"""
        Download a file to local storage and return the local temporary file path
        :param fileitem: file item
        :param path: directory to save the file in
        :param password: file password
"""
resp: Response = RequestUtils(
headers=self.__get_header_with_token()
).post_res(
self.__get_api_url("/api/fs/get"),
json={
"path": fileitem.path,
"password": password,
"page": 1,
"per_page": 0,
"refresh": False,
},
)
"""
{
"code": 200,
"message": "success",
"data": {
"name": "[ANi]輝夜姬想讓人告白~天才們的戀愛頭腦戰~[01][1080P][Baha][WEB-DL].mp4",
"size": 924933111,
"is_dir": false,
"modified": "1970-01-01T00:00:00Z",
"created": "1970-01-01T00:00:00Z",
"sign": "1v0xkMQz_uG8fkEOQ7-l58OnbB-g4GkdBlUBcrsApCQ=:0",
"thumb": "",
"type": 2,
"hashinfo": "null",
"hash_info": null,
"raw_url": "xxxxxx",
"readme": "",
"header": "",
"provider": "UrlTree",
"related": null
}
}
"""
        if not resp:
            logging.warning(f"Failed to get file {fileitem.path}: unable to connect to the alist service")
            return
        if resp.status_code != 200:
            logging.warning(f"Failed to get file {fileitem.path}, status code: {resp.status_code}")
            return
        result = resp.json()
        if result["code"] != 200:
            logging.warning(f'Failed to get file {fileitem.path}, error: {result["message"]}')
            return
if result["data"]["raw_url"]:
download_url = result["data"]["raw_url"]
else:
download_url = UrlUtils.adapt_request_url(self.__get_base_url, f"/d{fileitem.path}")
if result["data"]["sign"]:
download_url = download_url + "?sign=" + result["data"]["sign"]
        resp = RequestUtils(
            headers=self.__get_header_with_token()
        ).get_res(download_url)
        if not resp or resp.status_code != 200:
            logging.warning(f"Failed to download file {fileitem.path}")
            return None
        if not path:
            new_path = settings.TEMP_PATH / fileitem.name
        else:
            new_path = path / fileitem.name
        with open(new_path, "wb") as f:
            f.write(resp.content)
if new_path.exists():
return new_path
return None
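The download-URL selection above (prefer the `raw_url` returned by `/api/fs/get`, otherwise fall back to the `/d` proxy path and append the `sign` query parameter) can be sketched as a pure function. `build_download_url` is a hypothetical helper; the real code builds the fallback via `UrlUtils.adapt_request_url`:

```python
def build_download_url(base_url: str, file_path: str,
                       raw_url: str = "", sign: str = "") -> str:
    """Pick the download URL for an alist file: a storage-provided
    raw_url wins; otherwise use the /d proxy path, signed if needed."""
    if raw_url:
        return raw_url
    url = base_url.rstrip("/") + "/d" + file_path
    if sign:
        url += "?sign=" + sign
    return url
```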
def upload(
self, fileitem: schemas.FileItem, path: Path, new_name: str = None, task: bool = False
) -> Optional[schemas.FileItem]:
"""
        Upload a file
        :param fileitem: target directory item
        :param path: local file path
        :param new_name: file name after upload
        :param task: upload as a task; defaults to False to avoid operating on the file before the upload completes
"""
encoded_path = UrlUtils.quote(fileitem.path)
headers = self.__get_header_with_token()
headers.setdefault("Content-Type", "multipart/form-data")
headers.setdefault("As-Task", str(task).lower())
headers.setdefault("File-Path", encoded_path)
with open(path, "rb") as f:
resp: Response = RequestUtils(headers=headers).put_res(
self.__get_api_url("/api/fs/form"),
data={"file": f},
)
        if resp is None:
            logging.warning(f"Failed to upload file {path}: unable to connect to the alist service")
            return
        if resp.status_code != 200:
            logging.warning(f"Failed to upload file {path}, status code: {resp.status_code}")
            return
new_item = self.get_item(Path(fileitem.path) / path.name)
if new_name and new_name != path.name:
if self.rename(new_item, new_name):
return self.get_item(Path(new_item.path).with_name(new_name))
return new_item
def detail(self, fileitem: schemas.FileItem) -> Optional[schemas.FileItem]:
"""
        Get file details
"""
return self.get_item(Path(fileitem.path))
@staticmethod
def __get_copy_and_move_data(
fileitem: schemas.FileItem, target: Union[schemas.FileItem, Path]
) -> Tuple[str, str, List[str], bool]:
"""
        Get the data needed to copy or move a file
        :param fileitem: file item
        :param target: target file item or target path
        :return: source directory, target directory, name list, validity flag
"""
name = Path(target).name
if fileitem.name != name:
return "", "", [], False
        src_dir = Path(fileitem.path).parent.as_posix()
        if isinstance(target, schemas.FileItem):
            target_dir = Path(target.path).parent.as_posix()
        else:
            target_dir = target.parent.as_posix()
        return src_dir, target_dir, [name], True
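The helper's contract is easy to miss: alist's copy/move APIs take a source directory, a destination directory and a list of names, so the file name must be identical on both sides. A standalone sketch (hypothetical `get_copy_and_move_data`, operating on plain strings instead of `FileItem`):

```python
from pathlib import Path
from typing import List, Tuple


def get_copy_and_move_data(src_path: str, src_name: str,
                           target: str) -> Tuple[str, str, List[str], bool]:
    """Return (src_dir, dst_dir, names, valid) for alist copy/move;
    invalid when the source and target file names differ."""
    name = Path(target).name
    if src_name != name:
        return "", "", [], False
    return (Path(src_path).parent.as_posix(),
            Path(target).parent.as_posix(),
            [name], True)
```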
def copy(
self, fileitem: schemas.FileItem, target: Union[schemas.FileItem, Path]
) -> bool:
"""
        Copy a file
        The source and target file names must be identical
"""
src_dir, dst_dir, names, is_valid = self.__get_copy_and_move_data(
fileitem, target
)
if not is_valid:
return False
resp: Response = RequestUtils(
headers=self.__get_header_with_token()
).post_res(
self.__get_api_url("/api/fs/copy"),
json={
"src_dir": src_dir,
"dst_dir": dst_dir,
"names": names,
},
)
"""
{
"src_dir": "string",
"dst_dir": "string",
"names": [
"string"
]
}
======================================
{
"code": 200,
"message": "success",
"data": null
}
"""
        if resp is None:
            logging.warning(
                f"Failed to copy file {fileitem.path}: unable to connect to the alist service"
            )
            return False
        if resp.status_code != 200:
            logging.warning(
                f"Failed to copy file {fileitem.path}, status code: {resp.status_code}"
            )
            return False
        result = resp.json()
        if result["code"] != 200:
            logging.warning(
                f'Failed to copy file {fileitem.path}, error: {result["message"]}'
            )
            return False
return True
def move(
self, fileitem: schemas.FileItem, target: Union[schemas.FileItem, Path]
) -> bool:
"""
        Move a file
"""
src_dir, dst_dir, names, is_valid = self.__get_copy_and_move_data(
fileitem, target
)
if not is_valid:
return False
resp: Response = RequestUtils(
headers=self.__get_header_with_token()
).post_res(
self.__get_api_url("/api/fs/move"),
json={
"src_dir": src_dir,
"dst_dir": dst_dir,
"names": names,
},
)
"""
{
"src_dir": "string",
"dst_dir": "string",
"names": [
"string"
]
}
======================================
{
"code": 200,
"message": "success",
"data": null
}
"""
        if resp is None:
            logging.warning(
                f"Failed to move file {fileitem.path}: unable to connect to the alist service"
            )
            return False
        if resp.status_code != 200:
            logging.warning(
                f"Failed to move file {fileitem.path}, status code: {resp.status_code}"
            )
            return False
        result = resp.json()
        if result["code"] != 200:
            logging.warning(
                f'Failed to move file {fileitem.path}, error: {result["message"]}'
            )
            return False
return True
def link(self, fileitem: schemas.FileItem, target_file: Path) -> bool:
"""
        Hard-link a file
"""
pass
def softlink(self, fileitem: schemas.FileItem, target_file: Path) -> bool:
"""
        Create a symlink to a file
"""
pass
def usage(self) -> Optional[schemas.StorageUsage]:
"""
        Storage usage
"""
pass
def snapshot(self, path: Path) -> Dict[str, float]:
"""
        Snapshot the file system, returning info for files at every level (directories excluded)
"""
files_info = {}
def __snapshot_file(_fileitm: schemas.FileItem):
"""
            Recursively collect file info
"""
if _fileitm.type == "dir":
for sub_file in self.list(_fileitm):
__snapshot_file(sub_file)
else:
files_info[_fileitm.path] = _fileitm.size
fileitem = self.get_item(path)
if not fileitem:
return {}
__snapshot_file(fileitem)
return files_info
@staticmethod
def __parse_timestamp(time_str: str) -> float:
        # Parse the time directly as ISO 8601; normalize a trailing "Z"
        # (UTC), which datetime.fromisoformat() only accepts from Python 3.11
        dt = datetime.fromisoformat(time_str.replace("Z", "+00:00"))
        # Return the POSIX timestamp
        return dt.timestamp()
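One caveat with `fromisoformat`: the sample responses above show both a trailing `Z` ("1970-01-01T00:00:00Z") and seven fractional-second digits ("2024-05-17T13:47:55.4174917+08:00", typical of Go's `time.Time`), and before Python 3.11 `fromisoformat` rejects both. A hardened sketch (hypothetical `parse_alist_timestamp` name) that normalizes both forms first:

```python
import re
from datetime import datetime


def parse_alist_timestamp(time_str: str) -> float:
    """Parse alist's ISO 8601 timestamps portably: map "Z" to "+00:00"
    and truncate fractional seconds to microsecond precision before
    handing the string to datetime.fromisoformat()."""
    time_str = time_str.replace("Z", "+00:00")
    time_str = re.sub(r"(\.\d{6})\d+", r"\1", time_str)
    return datetime.fromisoformat(time_str).timestamp()
```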


@@ -25,6 +25,12 @@ class LocalStorage(StorageBase):
"softlink": "软链接"
}
def init_storage(self):
"""
初始化
"""
pass
def check(self) -> bool:
"""
检查存储是否可用
@@ -183,17 +189,17 @@ class LocalStorage(StorageBase):
"""
return Path(fileitem.path)
def upload(self, fileitem: schemas.FileItem, path: Path) -> Optional[schemas.FileItem]:
def upload(self, fileitem: schemas.FileItem, path: Path, new_name: str = None) -> Optional[schemas.FileItem]:
"""
上传文件
"""
dir_path = Path(fileitem.path)
target_path = dir_path / path.name
target_path = dir_path / (new_name or path.name)
code, message = SystemUtils.move(path, target_path)
if code != 0:
logger.error(f"移动文件失败:{message}")
return None
return self.__get_diritem(target_path)
return self.get_item(target_path)
def copy(self, fileitem: schemas.FileItem, target_file: Path) -> bool:
"""


@@ -27,6 +27,12 @@ class Rclone(StorageBase):
"copy": "复制"
}
def init_storage(self):
"""
初始化
"""
pass
def set_config(self, conf: dict):
"""
设置配置
@@ -39,7 +45,7 @@ class Rclone(StorageBase):
path = Path(filepath)
if not path.parent.exists():
path.parent.mkdir(parents=True)
path.write_text(conf.get('content'))
path.write_text(conf.get('content'), encoding='utf-8')
@staticmethod
def __get_hidden_shell():
@@ -76,7 +82,7 @@ class Rclone(StorageBase):
return schemas.FileItem(
storage=self.schema.value,
type="dir",
path=f"{parent}{item.get('Name')}",
path=f"{parent}{item.get('Name')}" + "/",
name=item.get("Name"),
basename=item.get("Name"),
modify_time=StringUtils.str_to_timestamp(item.get("ModTime"))
@@ -260,21 +266,22 @@ class Rclone(StorageBase):
logger.error(f"rclone复制文件失败{err}")
return None
def upload(self, fileitem: schemas.FileItem, path: Path) -> Optional[schemas.FileItem]:
def upload(self, fileitem: schemas.FileItem, path: Path, new_name: str = None) -> Optional[schemas.FileItem]:
"""
上传文件
"""
try:
new_path = Path(fileitem.path) / (new_name or path.name)
retcode = subprocess.run(
[
'rclone', 'copyto',
str(path),
f'MP:{Path(fileitem.path) / path.name}'
f'MP:{new_path}'
],
startupinfo=self.__get_hidden_shell()
).returncode
if retcode == 0:
return self.__get_fileitem(path)
return self.__get_fileitem(new_path)
except Exception as err:
logger.error(f"rclone上传文件失败{err}")
return None
@@ -331,11 +338,24 @@ class Rclone(StorageBase):
"""
存储使用情况
"""
conf = self.get_config()
if not conf:
return None
file_path = conf.config.get("filepath")
if not file_path or not Path(file_path).exists():
return None
# 读取rclone文件检查是否有[MP]节点配置
with open(file_path, "r", encoding="utf-8") as f:
lines = f.readlines()
if not lines:
return None
if not any("[MP]" in line.strip() for line in lines):
return None
try:
ret = subprocess.run(
[
'rclone', 'about',
'/', '--json'
'MP:/', '--json'
],
capture_output=True,
startupinfo=self.__get_hidden_shell()
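The `usage()` change above reads the rclone config file and returns None unless an `[MP]` remote is configured. Since rclone configs are INI-style, a configured remote named `MP` appears as a literal `[MP]` section header; the check can be sketched as a pure function (hypothetical `has_mp_remote`, slightly stricter than the substring test above because it matches the header line exactly):

```python
def has_mp_remote(config_text: str) -> bool:
    """Return True when the rclone config text declares an [MP] remote."""
    return any(line.strip() == "[MP]" for line in config_text.splitlines())
```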


@@ -1,12 +1,7 @@
import base64
import subprocess
from pathlib import Path
from typing import Optional, Tuple, List
import oss2
import py115
from py115 import Cloud
from py115.types import LoginTarget, QrcodeSession, QrcodeStatus, Credential
from p115 import P115Client, P115Path
from app import schemas
from app.core.config import settings
@@ -27,57 +22,54 @@ class U115Pan(StorageBase, metaclass=Singleton):
# 支持的整理方式
transtype = {
"move": "移动"
"move": "移动",
"copy": "复制"
}
cloud: Optional[Cloud] = None
_session: QrcodeSession = None
# 115二维码登录地址
qrcode_url = "https://qrcodeapi.115.com/api/1.0/web/1.0/token/"
# 115登录状态检查
login_check_url = "https://qrcodeapi.115.com/get/status/"
# 115登录完成 alipaymini
login_done_api = "https://passportapi.115.com/app/1.0/alipaymini/1.0/login/qrcode/"
# 是否有aria2c
_has_aria2c: bool = False
client: P115Client = None
session_info: dict = None
def __init__(self):
super().__init__()
try:
subprocess.run(['aria2c', '-h'], capture_output=True)
self._has_aria2c = True
logger.debug('发现 aria2c, 将使用 aria2c 下载文件')
except FileNotFoundError:
logger.debug('未发现 aria2c')
self._has_aria2c = False
self.init_storage()
def __init_cloud(self) -> bool:
def init_storage(self):
"""
初始化Cloud
"""
credential = self.__credential
if not credential:
logger.warn("115未登录请先登录")
return False
if not self.__credential:
return
try:
if not self.cloud:
self.cloud = py115.connect(credential)
self.client = P115Client(self.__credential, app="alipaymini",
check_for_relogin=False, console_qrcode=False)
except Exception as err:
logger.error(f"115连接失败请重新扫码登录:{str(err)}")
logger.error(f"115连接失败请重新登录{str(err)}")
self.__clear_credential()
return False
return True
@property
def __credential(self) -> Optional[Credential]:
def __credential(self) -> Optional[str]:
"""
获取已保存的115认证参数
获取已保存的115 Cookie
"""
cookie_dict = self.get_config()
if not cookie_dict:
conf = self.get_config()
if not conf:
return None
return Credential.from_dict(cookie_dict.dict().get("config"))
if not conf.config:
return None
return conf.config.get("cookie")
def __save_credential(self, credential: Credential):
def __save_credential(self, credential: dict):
"""
设置115认证参数
"""
self.set_config(credential.to_dict())
self.set_config(credential)
def __clear_credential(self):
"""
@@ -89,62 +81,75 @@ class U115Pan(StorageBase, metaclass=Singleton):
"""
生成二维码
"""
try:
self.cloud = py115.connect()
self._session = self.cloud.qrcode_login(LoginTarget.Web)
image_bin = self._session.image_data
if not image_bin:
res = RequestUtils(timeout=10).get_res(self.qrcode_url)
if res:
self.session_info = res.json().get("data")
qrcode_content = self.session_info.pop("qrcode")
if not qrcode_content:
logger.warn("115生成二维码失败,未获取到二维码数据")
return None
# 转换为base64图片格式
image_base64 = base64.b64encode(image_bin).decode()
return {}, ""
return {
"codeContent": f"data:image/jpeg;base64,{image_base64}"
"codeContent": qrcode_content
}, ""
except Exception as e:
logger.warn(f"115生成二维码失败{str(e)}")
return {}, f"115生成二维码失败{str(e)}"
elif res is not None:
return {}, f"115生成二维码失败{res.status_code} - {res.reason}"
        return {}, "115生成二维码失败,无法连接!"
def check_login(self) -> Optional[Tuple[dict, str]]:
"""
二维码登录确认
"""
if not self._session:
if not self.session_info:
return {}, "请先生成二维码!"
try:
if not self.cloud:
return {}, "请先生成二维码!"
status = self.cloud.qrcode_poll(self._session)
if status == QrcodeStatus.Done:
# 确认完成,保存认证信息
self.__save_credential(self.cloud.export_credentail())
result = {
"status": 1,
"tip": "登录成功!"
}
elif status == QrcodeStatus.Waiting:
result = {
"status": 0,
"tip": "请使用微信或115客户端扫码"
}
elif status == QrcodeStatus.Expired:
result = {
"status": -1,
"tip": "二维码已过期,请重新刷新!"
}
self.cloud = None
elif status == QrcodeStatus.Failed:
result = {
"status": -2,
"tip": "登录失败,请重试!"
}
self.cloud = None
else:
result = {
"status": -3,
"tip": "未知错误,请重试!"
}
self.cloud = None
resp = RequestUtils(timeout=10).get_res(self.login_check_url, params=self.session_info)
if not resp:
return {}, "115登录确认失败,无法连接!"
result = resp.json()
match result["data"].get("status"):
case 0:
result = {
"status": 0,
"tip": "请使用微信或115客户端扫码"
}
case 1:
result = {
"status": 1,
"tip": "已扫码,请在客户端确认登录"
}
case 2:
# 确认完成,保存认证信息
resp = RequestUtils(timeout=10).post_res(self.login_done_api,
data={"account": self.session_info.get("uid")})
if not resp:
return {}, "115登录确认失败,无法连接!"
if resp:
# 保存认证信息
result = resp.json()
cookie_dict = result["data"]["cookie"]
cookie_str = "; ".join([f"{k}={v}" for k, v in cookie_dict.items()])
cookie_dict.update({"cookie": cookie_str})
self.__save_credential(cookie_dict)
self.init_storage()
result = {
"status": 2,
"tip": "登录成功!"
}
case -1:
result = {
"status": -1,
"tip": "二维码已过期,请重新刷新!"
}
case -2:
result = {
"status": -2,
"tip": "登录失败,请重试!"
}
case _:
result = {
"status": -3,
"tip": "未知错误,请重试!"
}
return result, ""
except Exception as e:
return {}, f"115登录确认失败{str(e)}"
@@ -153,10 +158,12 @@ class U115Pan(StorageBase, metaclass=Singleton):
"""
获取存储空间
"""
if not self.__init_cloud():
if not self.client:
return None
try:
return self.cloud.storage().space()
usage = self.client.fs.space_summury()
if usage:
return usage['rt_space_info']['all_total']['size'], usage['rt_space_info']['all_remain']['size']
except Exception as e:
logger.error(f"115获取存储空间失败{str(e)}")
return None
@@ -165,31 +172,27 @@ class U115Pan(StorageBase, metaclass=Singleton):
"""
检查存储是否可用
"""
return True if self.list(schemas.FileItem(
fileid="0"
)) else False
return True if self.list(schemas.FileItem()) else False
def list(self, fileitem: schemas.FileItem) -> Optional[List[schemas.FileItem]]:
"""
浏览文件
"""
if not self.__init_cloud():
if not self.client:
return []
try:
if fileitem.type == "file":
return [fileitem]
items = self.cloud.storage().list(dir_id=fileitem.fileid)
items: List[P115Path] = self.client.fs.list(fileitem.path)
return [schemas.FileItem(
storage=self.schema.value,
fileid=item.file_id,
parent_fileid=item.parent_id,
type="dir" if item.is_dir else "file",
path=f"{fileitem.path}{item.name}" + ("/" if item.is_dir else ""),
type="dir" if item.is_dir() else "file",
path=item.path + ("/" if item.is_dir() else ""),
name=item.name,
size=item.size,
extension=Path(item.name).suffix[1:],
modify_time=item.modified_time.timestamp() if item.modified_time else 0,
pickcode=item.pickcode
basename=item.stem,
size=item.stat().st_size,
extension=item.suffix[1:] if not item.is_dir() else None,
modify_time=item.stat().st_mtime
) for item in items if item]
except Exception as e:
logger.error(f"115浏览文件失败{str(e)}")
@@ -199,20 +202,18 @@ class U115Pan(StorageBase, metaclass=Singleton):
"""
创建目录
"""
if not self.__init_cloud():
if not self.client:
return None
try:
result = self.cloud.storage().make_dir(fileitem.fileid, name)
result = self.client.fs.makedirs(Path(fileitem.path) / name, exist_ok=True)
if result:
return schemas.FileItem(
storage=self.schema.value,
fileid=result.file_id,
parent_fileid=result.parent_id,
type="dir",
path=f"{fileitem.path}{name}/",
path=f"{result.path}/",
name=name,
modify_time=result.modified_time.timestamp() if result.modified_time else 0,
pickcode=result.pickcode
basename=Path(result.name).stem,
modify_time=result.mtime
)
except Exception as e:
logger.error(f"115创建目录失败{str(e)}")
@@ -222,73 +223,83 @@ class U115Pan(StorageBase, metaclass=Singleton):
"""
根据文件路程获取目录,不存在则创建
"""
def __find_dir(_fileitem: schemas.FileItem, _name: str) -> Optional[schemas.FileItem]:
"""
查找下级目录中匹配名称的目录
"""
for sub_file in self.list(_fileitem):
if sub_file.type != "dir":
continue
if sub_file.name == _name:
return sub_file
if not self.client:
return None
# 逐级查找和创建目录
fileitem = schemas.FileItem(fileid="0")
for part in path.parts:
if part == "/":
continue
dir_file = __find_dir(fileitem, part)
if dir_file:
fileitem = dir_file
else:
dir_file = self.create_folder(fileitem, part)
if not dir_file:
logger.warn(f"115创建目录 {fileitem.path}{part} 失败!")
return None
fileitem = dir_file
return fileitem if fileitem.fileid != "0" else None
try:
result = self.client.fs.makedirs(path, exist_ok=True)
if result:
return schemas.FileItem(
storage=self.schema.value,
type="dir",
path=result.path + "/",
name=result.name,
basename=Path(result.name).stem,
modify_time=result.mtime
)
except Exception as e:
logger.error(f"115获取目录失败:{str(e)}")
return None
def get_item(self, path: Path) -> Optional[schemas.FileItem]:
"""
获取文件或目录不存在返回None
"""
def __find_item(_fileitem: schemas.FileItem, _name: str) -> Optional[schemas.FileItem]:
"""
查找下级目录中匹配名称的目录或文件
"""
for sub_file in self.list(_fileitem):
if sub_file.name == _name:
return sub_file
if not self.client:
return None
# 逐级查找
fileitem = schemas.FileItem(fileid="0")
for part in path.parts:
if part == "/":
continue
item = __find_item(fileitem, part)
if not item:
try:
try:
item = self.client.fs.attr(path)
except FileNotFoundError:
return None
fileitem = item
return fileitem
if item:
return schemas.FileItem(
storage=self.schema.value,
type="dir" if item.is_directory else "file",
path=item.path + ("/" if item.is_directory else ""),
name=item.name,
size=item.size,
extension=Path(item.name).suffix[1:] if not item.is_directory else None,
modify_time=item.mtime,
thumbnail=item.get("thumb")
)
except Exception as e:
logger.info(f"115获取文件失败{str(e)}")
return None
def detail(self, fileitem: schemas.FileItem) -> Optional[schemas.FileItem]:
"""
获取文件详情
"""
pass
if not self.client:
return None
try:
try:
item = self.client.fs.attr(fileitem.path)
except FileNotFoundError:
return None
if item:
return schemas.FileItem(
storage=self.schema.value,
type="dir" if item.is_directory else "file",
path=item.path + ("/" if item.is_directory else ""),
name=item.name,
size=item.size,
extension=Path(item.name).suffix[1:] if not item.is_directory else None,
modify_time=item.mtime,
thumbnail=item.get("thumb")
)
except Exception as e:
logger.error(f"115获取文件详情失败{str(e)}")
return None
def delete(self, fileitem: schemas.FileItem) -> bool:
"""
删除文件
"""
if not self.__init_cloud():
if not self.client:
return False
try:
self.cloud.storage().delete(fileitem.fileid)
self.client.fs.remove(fileitem.path)
return True
except Exception as e:
logger.error(f"115删除文件失败{str(e)}")
@@ -298,10 +309,10 @@ class U115Pan(StorageBase, metaclass=Singleton):
"""
重命名文件
"""
if not self.__init_cloud():
if not self.client:
return False
try:
self.cloud.storage().rename(fileitem.fileid, name)
self.client.fs.rename(fileitem.path, Path(fileitem.path).with_name(name))
return True
except Exception as e:
logger.error(f"115重命名文件失败{str(e)}")
@@ -311,69 +322,38 @@ class U115Pan(StorageBase, metaclass=Singleton):
"""
获取下载链接
"""
if not self.__init_cloud():
if not self.client:
return None
local_file = (path or settings.TEMP_PATH) / fileitem.name
try:
ticket = self.cloud.storage().request_download(fileitem.pickcode)
if ticket:
path = (path or settings.TEMP_PATH) / fileitem.name
res = RequestUtils(headers=ticket.headers).get_res(ticket.url)
if res:
with open(path, "wb") as f:
f.write(res.content)
return path
else:
logger.warn(f"{fileitem.path} 未获取到下载链接")
task = self.client.fs.download(fileitem.path, file=local_file)
if task:
return local_file
except Exception as e:
logger.error(f"115下载失败{str(e)}")
logger.error(f"115下载文件失败:{str(e)}")
return None
def upload(self, fileitem: schemas.FileItem, path: Path) -> Optional[schemas.FileItem]:
def upload(self, fileitem: schemas.FileItem, path: Path, new_name: str = None) -> Optional[schemas.FileItem]:
"""
上传文件
"""
if not self.__init_cloud():
if not self.client:
return None
try:
ticket = self.cloud.storage().request_upload(dir_id=fileitem.fileid, file_path=str(path))
if ticket is None:
logger.warn(f"115请求上传出错")
return None
elif ticket.is_done:
file_path = Path(fileitem.path) / path.name
logger.warn(f"115上传{file_path} 文件已存在")
return self.get_item(file_path)
else:
auth = oss2.StsAuth(**ticket.oss_token)
bucket = oss2.Bucket(
auth=auth,
endpoint=ticket.oss_endpoint,
bucket_name=ticket.bucket_name,
)
por = bucket.put_object_from_file(
key=ticket.object_key,
filename=str(path),
headers=ticket.headers,
)
result = por.resp.response.json()
new_path = Path(fileitem.path) / (new_name or path.name)
with open(path, "rb") as f:
result = self.client.fs.upload(f, new_path)
if result:
result_data = result.get('data')
logger.info(f"115上传文件成功{result_data.get('file_name')}")
return schemas.FileItem(
storage=self.schema.value,
fileid=result_data.get('file_id'),
parent_fileid=fileitem.fileid,
type="file",
name=result_data.get('file_name'),
basename=Path(result_data.get('file_name')).stem,
path=f"{fileitem.path}{result_data.get('file_name')}",
size=result_data.get('file_size'),
extension=Path(result_data.get('file_name')).suffix[1:],
pickcode=result_data.get('pickcode')
path=str(path),
name=result.name,
basename=Path(result.name).stem,
size=result.size,
extension=Path(result.name).suffix[1:],
modify_time=result.mtime
)
else:
logger.warn(f"115上传文件失败{por.resp.response.text}")
return None
except Exception as e:
logger.error(f"115上传文件失败{str(e)}")
return None
@@ -382,17 +362,27 @@ class U115Pan(StorageBase, metaclass=Singleton):
"""
移动文件
"""
if not self.__init_cloud():
if not self.client:
return False
try:
self.cloud.storage().move(fileitem.fileid, target.fileid)
self.client.fs.move(fileitem.path, target.path)
return True
except Exception as e:
logger.error(f"115移动文件失败{str(e)}")
return False
def copy(self, fileitem: schemas.FileItem, target_file: Path) -> bool:
pass
"""
复制文件
"""
if not self.client:
return False
try:
self.client.fs.copy(fileitem.path, target_file)
return True
except Exception as e:
logger.error(f"115复制文件失败{str(e)}")
return False
def link(self, fileitem: schemas.FileItem, target_file: Path) -> bool:
pass
@@ -406,9 +396,9 @@ class U115Pan(StorageBase, metaclass=Singleton):
"""
info = self.storage()
if info:
total, used = info
total, free = info
return schemas.StorageUsage(
total=total,
available=total - used
available=free
)
return schemas.StorageUsage()


@@ -432,26 +432,32 @@ class FilterModule(_ModuleBase):
@staticmethod
def __match_size(torrent: TorrentInfo, size_range: str) -> bool:
"""
判断种子是否匹配大小范围MB
判断种子是否匹配大小范围MB,剧集拆分为每集大小
"""
if not size_range:
return True
# 集数
meta = MetaInfo(title=torrent.title, subtitle=torrent.description)
episode_count = meta.total_episode or 1
# 每集大小
torrent_size = torrent.size / episode_count
# 大小范围
size_range = size_range.strip()
if size_range.find("-") != -1:
# 区间
size_min, size_max = size_range.split("-")
size_min = float(size_min.strip()) * 1024 * 1024
size_max = float(size_max.strip()) * 1024 * 1024
if size_min <= torrent.size <= size_max:
if size_min <= torrent_size <= size_max:
return True
elif size_range.startswith(">"):
# 大于
size_min = float(size_range[1:].strip()) * 1024 * 1024
if torrent.size >= size_min:
if torrent_size >= size_min:
return True
elif size_range.startswith("<"):
# 小于
size_max = float(size_range[1:].strip()) * 1024 * 1024
if torrent.size <= size_max:
if torrent_size <= size_max:
return True
return False
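The per-episode filtering added above divides the torrent size by its episode count before applying the size range, so a whole-season pack is judged by episode size rather than total size. A standalone sketch (hypothetical `match_size`; episode counting via `MetaInfo` is replaced by an explicit `episodes` argument):

```python
def match_size(torrent_size: int, episodes: int, size_range: str) -> bool:
    """Match a torrent against a size range in MB ("500-2000", ">500"
    or "<2000"), comparing the per-episode size as in __match_size."""
    if not size_range:
        return True
    per_episode = torrent_size / max(episodes, 1)
    mb = 1024 * 1024
    size_range = size_range.strip()
    if "-" in size_range:
        lo, hi = (float(x.strip()) * mb for x in size_range.split("-"))
        return lo <= per_episode <= hi
    if size_range.startswith(">"):
        return per_episode >= float(size_range[1:].strip()) * mb
    if size_range.startswith("<"):
        return per_episode <= float(size_range[1:].strip()) * mb
    return False
```

For example, a 12 GiB season pack of 12 episodes passes `"500-2000"` (1024 MB per episode), while the same size as a single episode does not.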


@@ -191,6 +191,7 @@ class IndexerModule(_ModuleBase):
site_ua=site.get("ua"),
site_proxy=site.get("proxy"),
site_order=site.get("pri"),
site_downloader=site.get("downloader"),
**result) for result in result_array]
# 去重
return __remove_duplicate(torrents)
@@ -199,7 +200,7 @@ class IndexerModule(_ModuleBase):
def __spider_search(indexer: CommentedMap,
search_word: str = None,
mtype: MediaType = None,
page: int = 0) -> (bool, List[dict]):
page: int = 0) -> Tuple[bool, List[dict]]:
"""
根据关键字搜索单个站点
:param: indexer: 站点配置


@@ -94,6 +94,7 @@ class SiteParserBase(metaclass=ABCMeta):
# 未读消息
self.message_unread = 0
self.message_unread_contents = []
self.message_read_force = False
# 全局附加请求头
self._addition_headers = None
@@ -182,7 +183,8 @@ class SiteParserBase(metaclass=ABCMeta):
)
)
# 解析用户未读消息
self._pase_unread_msgs()
if settings.SITE_MESSAGE:
self._pase_unread_msgs()
# 解析用户上传、下载、分享率等信息
if self._user_traffic_page:
self._parse_user_traffic_info(
@@ -201,7 +203,7 @@ class SiteParserBase(metaclass=ABCMeta):
:return:
"""
unread_msg_links = []
if self.message_unread > 0:
if self.message_unread > 0 or self.message_read_force:
links = {self._user_mail_unread_page, self._sys_mail_unread_page}
for link in links:
if not link:
@@ -225,7 +227,7 @@ class SiteParserBase(metaclass=ABCMeta):
)
unread_msg_links.extend(msg_links)
# 重新更新未读消息数99999表示有消息但数量未知
if self.message_unread == 99999:
if unread_msg_links and not self.message_unread:
self.message_unread = len(unread_msg_links)
# 解析未读消息内容
for msg_link in unread_msg_links:


@@ -91,9 +91,7 @@ class MTorrentSiteUserInfo(SiteParserBase):
self.download = int(user_info.get("memberCount", {}).get("downloaded") or '0')
self.ratio = user_info.get("memberCount", {}).get("shareRate") or 0
self.bonus = user_info.get("memberCount", {}).get("bonus") or 0
# 需要解析消息,但不确定消息条数
self.message_unread = 99999
self.message_read_force = True
self._torrent_seeding_params = {
"pageNumber": 1,
"pageSize": 200,


@@ -1,21 +1,60 @@
# -*- coding: utf-8 -*-
import re
import json
from typing import Optional
from lxml import etree
from urllib.parse import urljoin
from app.log import logger
from app.modules.indexer.parser import SiteSchema
from app.modules.indexer.parser.nexus_php import NexusPhpSiteUserInfo
from app.modules.indexer.parser import SiteParserBase
from app.utils.string import StringUtils
class NexusRabbitSiteUserInfo(NexusPhpSiteUserInfo):
class NexusRabbitSiteUserInfo(SiteParserBase):
schema = SiteSchema.NexusRabbit
def _parse_site_page(self, html_text: str):
super()._parse_site_page(html_text)
self._torrent_seeding_page = f"getusertorrentlistajax.php?page=1&limit=5000000&type=seeding&uid={self.userid}"
self._torrent_seeding_headers = {"Accept": "application/json, text/javascript, */*; q=0.01"}
html_text = self._prepare_html_text(html_text)
def _parse_user_torrent_seeding_info(self, html_text: str, multi_page: bool = False) -> Optional[str]:
user_detail = re.search(r"user.php\?id=(\d+)", html_text)
if not (user_detail and user_detail.group().strip()):
return
self.userid = user_detail.group(1)
self._user_detail_page = f"user.php?id={self.userid}"
self._user_traffic_page = None
self._torrent_seeding_page = "api/general"
self._torrent_seeding_params = {
"page": 1,
"limit": 5000000,
"action": "userTorrentsList",
"data": {"type": "seeding", "id": int(self.userid)},
}
self._torrent_seeding_headers = {
"Content-Type": "application/json",
"Accept": "application/json, text/plain, */*",
"X-Requested-With": "XMLHttpRequest", # 必须要加上这一条,不然返回的是空数据
}
self._user_mail_unread_page = None
self._sys_mail_unread_page = "api/general"
self._mail_unread_params = {
"page": 1,
"limit": 5000000,
"action": "getMessageIn",
}
self._mail_unread_headers = {
"Content-Type": "application/json",
"Accept": "application/json, text/plain, */*",
"X-Requested-With": "XMLHttpRequest",
}
def _parse_user_torrent_seeding_info(
self, html_text: str, multi_page: bool = False
) -> Optional[str]:
"""
做种相关信息
:param html_text:
@@ -24,22 +63,112 @@ class NexusRabbitSiteUserInfo(NexusPhpSiteUserInfo):
"""
try:
torrents = json.loads(html_text).get('data')
torrents = json.loads(html_text).get("data", [])
except Exception as e:
logger.error(f"解析做种信息失败: {str(e)}")
return
page_seeding_size = 0
page_seeding_info = []
seeding_size = 0
seeding_info = []
page_seeding = len(torrents)
for torrent in torrents:
seeders = int(torrent.get('seeders', 0))
size = int(torrent.get('size', 0))
page_seeding_size += int(torrent.get('size', 0))
seeders = int(torrent.get("seeders", 0))
size = StringUtils.num_filesize(torrent.get("size"))
seeding_size += size
seeding_info.append([seeders, size])
page_seeding_info.append([seeders, size])
self.seeding = len(torrents)
self.seeding_size = seeding_size
self.seeding_info = seeding_info
self.seeding += page_seeding
self.seeding_size += page_seeding_size
self.seeding_info.extend(page_seeding_info)
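The multi-page accumulation in the hunk above can be sketched on its own. `accumulate_seeding` below is an illustrative stand-in, not the project's API, and it assumes sizes arrive as raw byte counts (the real parser converts human-readable sizes with `StringUtils.num_filesize`):

```python
import json


def accumulate_seeding(pages):
    """Accumulate seeding stats across paged JSON responses.

    Each page is a JSON string like {"data": [{"seeders": 3, "size": 1024}, ...]}.
    Returns (torrent_count, total_size, [[seeders, size], ...]).
    """
    seeding = 0
    seeding_size = 0
    seeding_info = []
    for page in pages:
        torrents = json.loads(page).get("data", [])
        # extend the running totals instead of overwriting them,
        # so multiple pages add up correctly
        seeding += len(torrents)
        for t in torrents:
            seeders = int(t.get("seeders", 0))
            size = int(t.get("size", 0))
            seeding_size += size
            seeding_info.append([seeders, size])
    return seeding, seeding_size, seeding_info
```

This mirrors why the diff switches from `self.seeding = ...` to `self.seeding += ...`: assignment would discard earlier pages.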
def _parse_message_unread_links(
self, html_text: str, msg_links: list
) -> str | None:
unread_ids = []
try:
messages = json.loads(html_text).get("data", [])
except Exception as e:
logger.error(f"解析未读消息失败: {e}")
return
for msg in messages:
msg_id, msg_unread = msg.get("id"), msg.get("unread")
if not (msg_id and msg_unread) or msg_unread == "no":
continue
unread_ids.append(msg_id)
head, date, content = msg.get("subject"), msg.get("added"), msg.get("msg")
if head and date and content:
self.message_unread_contents.append((head, date, content))
self.message_unread = len(unread_ids)
if unread_ids:
self._get_page_content(
url=urljoin(self._base_url, "api/general?loading=true"),
params={"action": "readMessage", "data": {"ids": unread_ids}},
headers={
"Content-Type": "application/json",
"Accept": "application/json, text/plain, */*",
"X-Requested-With": "XMLHttpRequest",
},
)
return None
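The unread filter above keeps only messages whose `unread` flag is present and not `"no"`, then collects ids and readable contents separately. A standalone sketch (function name is illustrative):

```python
import json


def extract_unread(html_text):
    """Return (unread_ids, [(subject, added, msg), ...]) from the JSON payload."""
    messages = json.loads(html_text).get("data", [])
    unread_ids, contents = [], []
    for m in messages:
        msg_id, msg_unread = m.get("id"), m.get("unread")
        # skip read messages and malformed entries
        if not (msg_id and msg_unread) or msg_unread == "no":
            continue
        unread_ids.append(msg_id)
        head, date, content = m.get("subject"), m.get("added"), m.get("msg")
        if head and date and content:
            contents.append((head, date, content))
    return unread_ids, contents
```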
def _parse_user_base_info(self, html_text: str):
"""只有奶糖余额才需要在 base 中获取,其它均可以在详情页拿到"""
html = etree.HTML(html_text)
if not StringUtils.is_valid_html_element(html):
return
bonus = html.xpath(
'//div[contains(text(), "奶糖余额")]/following-sibling::div[1]/text()'
)
if bonus:
self.bonus = StringUtils.str_float(bonus[0].strip())
def _parse_user_detail_info(self, html_text: str):
html = etree.HTML(html_text)
if not StringUtils.is_valid_html_element(html):
return
# Narrow the search scope: all the info lives inside this div
user_info = html.xpath('//div[contains(@class, "layui-hares-user-info-right")]')
if not user_info:
return
user_info = user_info[0]
# Username
if username := user_info.xpath(
'.//span[contains(text(), "用户名")]/a/span/text()'
):
self.username = username[0].strip()
# User level
if user_level := user_info.xpath('.//span[contains(text(), "等级")]/b/text()'):
self.user_level = user_level[0].strip()
# Join date
if join_date := user_info.xpath('.//span[contains(text(), "注册日期")]/text()'):
join_date = join_date[0].strip().split("\r")[0].removeprefix("注册日期:")
self.join_at = StringUtils.unify_datetime_str(join_date)
# Upload volume
if upload := user_info.xpath('.//span[contains(text(), "上传量")]/text()'):
self.upload = StringUtils.num_filesize(
upload[0].strip().removeprefix("上传量:")
)
# Download volume
if download := user_info.xpath('.//span[contains(text(), "下载量")]/text()'):
self.download = StringUtils.num_filesize(
download[0].strip().removeprefix("下载量:")
)
# Share ratio
if ratio := user_info.xpath('.//span[contains(text(), "分享率")]/em/text()'):
self.ratio = StringUtils.str_float(ratio[0].strip())
def _parse_message_content(self, html_text):
"""
Parse private-message content; already handled in _parse_message_unread_links, overridden here only to satisfy the abstractmethod check
:param html_text:
:return: head: message, date: time, content: message content
"""
pass
def _parse_user_traffic_info(self, html_text: str):
"""
Parse the user's upload/download/ratio info; already handled in _parse_user_detail_info, overridden here only to satisfy the abstractmethod check
:param html_text:
:return:
"""
pass


@@ -36,7 +36,10 @@ class TNodeSiteUserInfo(SiteParserBase):
pass
def _parse_user_detail_info(self, html_text: str):
detail = json.loads(html_text)
try:
detail = json.loads(html_text)
except json.JSONDecodeError:
return
if detail.get("status") != 200:
return


@@ -162,26 +162,26 @@ class Plex:
def get_medias_count(self) -> schemas.Statistic:
"""
Get the movie, TV-show and anime media counts
:return: MovieCount SeriesCount SongCount
:return: movie_count tv_count episode_count
"""
if not self._plex:
return schemas.Statistic()
sections = self._plex.library.sections()
MovieCount = SeriesCount = EpisodeCount = 0
movie_count = tv_count = episode_count = 0
# Library whitelist
allow_library = [lib.id for lib in self.get_librarys(hidden=True)]
for sec in sections:
if str(sec.key) not in allow_library:
if sec.key not in allow_library:
continue
if sec.type == "movie":
MovieCount += sec.totalSize
movie_count += sec.totalSize
if sec.type == "show":
SeriesCount += sec.totalSize
EpisodeCount += sec.totalViewSize(libtype='episode')
tv_count += sec.totalSize
episode_count += sec.totalViewSize(libtype="episode")
return schemas.Statistic(
movie_count=MovieCount,
tv_count=SeriesCount,
episode_count=EpisodeCount
movie_count=movie_count,
tv_count=tv_count,
episode_count=episode_count
)
def get_movies(self,
@@ -294,7 +294,7 @@ class Plex:
return videos.key, season_episodes
def get_remote_image_by_id(self,
item_id: str,
item_id: str,
image_type: str,
depth: int = 0,
plex_url: bool = True) -> Optional[str]:
@@ -310,12 +310,16 @@ class Plex:
return None
try:
image_url = None
ekey = f"/library/metadata/{item_id}"
ekey = item_id
item = self._plex.fetchItem(ekey=ekey)
if not item:
return None
# If an external play address and token are configured, fetch images from the Plex media server by default; otherwise return image resources that carry external URLs
if self._playhost and self._token and plex_url:
# The Plex external play address field currently accepts two kinds of values:
# 1. Plex's official relay address https://app.plex.tv; 2. a self-managed port-forwarded address
# With the official relay address (1) this branch must be skipped, because images cannot be fetched through the relay
if (self._playhost and "app.plex.tv" not in self._playhost
and self._token and plex_url):
query = {"X-Plex-Token": self._token}
if image_type == "Poster":
if item.thumb:
@@ -346,8 +350,8 @@ class Plex:
image_url = image.key
break
# If still not found, recurse into the parent item
if not image_url and hasattr(item, "parentRatingKey"):
return self.get_remote_image_by_id(item_id=item.parentRatingKey,
if not image_url and hasattr(item, "parentKey"):
return self.get_remote_image_by_id(item_id=item.parentKey,
image_type=image_type,
depth=depth + 1)
return image_url
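The new `app.plex.tv` check above reduces to a small predicate: a tokenized server URL is only worth building when the external play address is not Plex's official relay. A minimal sketch (helper name is hypothetical):

```python
def can_fetch_via_playhost(playhost, token):
    """Return True only if images can be fetched through the external play
    address: app.plex.tv is Plex's official relay and cannot serve images."""
    if not playhost or not token:
        return False
    return "app.plex.tv" not in playhost
```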
@@ -665,7 +669,7 @@ class Plex:
"S" + str(message.get('Metadata', {}).get('parentIndex')),
"E" + str(message.get('Metadata', {}).get('index')),
message.get('Metadata', {}).get('title'))
eventItem.item_id = message.get('Metadata', {}).get('ratingKey')
eventItem.item_id = message.get('Metadata', {}).get('key')
eventItem.season_id = message.get('Metadata', {}).get('parentIndex')
eventItem.episode_id = message.get('Metadata', {}).get('index')
@@ -680,7 +684,7 @@ class Plex:
eventItem.item_name = "%s %s" % (
message.get('Metadata', {}).get('title'),
"(" + str(message.get('Metadata', {}).get('year')) + ")")
eventItem.item_id = message.get('Metadata', {}).get('ratingKey')
eventItem.item_id = message.get('Metadata', {}).get('key')
if len(message.get('Metadata', {}).get('summary')) > 100:
eventItem.overview = str(message.get('Metadata', {}).get('summary'))[:100] + "..."
else:
@@ -721,7 +725,7 @@ class Plex:
if not self._plex:
return []
# Library whitelist
allow_library = ",".join([lib.id for lib in self.get_librarys(hidden=True)])
allow_library = ",".join(map(str, (lib.id for lib in self.get_librarys(hidden=True))))
params = {"contentDirectoryID": allow_library}
items = self._plex.fetchItems("/hubs/continueWatching/items",
container_start=0,
@@ -757,7 +761,7 @@ class Plex:
if not self._plex:
return None
# Request parameters (excluding the blacklist)
allow_library = ",".join([lib.id for lib in self.get_librarys(hidden=True)])
allow_library = ",".join(map(str, (lib.id for lib in self.get_librarys(hidden=True))))
params = {
"contentDirectoryID": allow_library,
"count": num,


@@ -1,4 +1,5 @@
import time
import traceback
from typing import Optional, Union, Tuple, List
import qbittorrentapi
@@ -75,8 +76,13 @@ class Qbittorrent:
REQUESTS_ARGS={'timeout': (15, 60)})
try:
qbt.auth_log_in()
except qbittorrentapi.LoginFailed as e:
logger.error(f"qbittorrent 登录失败:{str(e)}")
except (qbittorrentapi.LoginFailed, qbittorrentapi.Forbidden403Error) as e:
logger.error(f"qbittorrent 登录失败:{str(e).strip() or '请检查用户名和密码是否正确'}")
return None
except Exception as e:
stack_trace = "".join(traceback.format_exception(None, e, e.__traceback__))[:2000]
logger.error(f"qbittorrent 登录失败:{str(e)}\n{stack_trace}")
return None
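The qBittorrent login change above layers two except blocks: known auth failures get a short, actionable message; anything else gets a traceback capped at 2000 characters. The same pattern, sketched with a stand-in exception class rather than qbittorrentapi's:

```python
import traceback


class LoginFailed(Exception):
    """Stand-in for the client library's auth-failure exceptions."""


def try_login(login_fn, log):
    try:
        login_fn()
    except LoginFailed as e:
        # Known auth failure: short, actionable message, with a fallback
        # when the library raises with an empty message
        log(f"login failed: {str(e).strip() or 'check username/password'}")
        return None
    except Exception as e:
        # Unknown failure: include a traceback truncated to 2000 chars
        stack = "".join(traceback.format_exception(None, e, e.__traceback__))[:2000]
        log(f"login failed: {e}\n{stack}")
        return None
    return "client"
```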
return qbt
except Exception as err:
logger.error(f"qbittorrent 连接出错:{str(err)}")


@@ -263,19 +263,17 @@ class Monitor(metaclass=Singleton):
try:
item = self._queue.get(timeout=self._transfer_interval)
if item:
self.__handle_file(storage=item.get("storage"),
event_path=item.get("filepath"),
mon_path=item.get("mon_path"))
self.__handle_file(storage=item.get("storage"), event_path=item.get("filepath"))
except queue.Empty:
continue
except Exception as e:
logger.error(f"整理队列处理出现错误:{e}")
def __handle_file(self, storage: str, event_path: Path, mon_path: Path):
def __handle_file(self, storage: str, event_path: Path):
"""
整理一个文件
:param storage: 存储
:param event_path: 事件文件路径
:param mon_path: 监控目录
"""
def __get_bluray_dir(_path: Path):
@@ -386,7 +384,7 @@ class Monitor(metaclass=Singleton):
return
# Look up the transfer destination directory
dir_info = self.directoryhelper.get_dir(mediainfo, src_path=mon_path)
dir_info = self.directoryhelper.get_dir(mediainfo, storage=storage, src_path=event_path)
if not dir_info:
logger.warn(f"{event_path.name} 未找到对应的目标目录")
return
@@ -480,8 +478,7 @@ class Monitor(metaclass=Singleton):
# In move mode, delete empty directories
if transferinfo.transfer_type in ["move"]:
logger.info(f"正在删除: {file_item.storage} {file_item.path}")
self.storagechain.delete_file(file_item)
self.storagechain.delete_media_file(file_item, delete_self=False)
except Exception as e:
logger.error("目录监控发生错误:%s - %s" % (str(e), traceback.format_exc()))


@@ -19,10 +19,11 @@ from app.chain.transfer import TransferChain
from app.core.config import settings
from app.core.event import EventManager
from app.core.plugin import PluginManager
from app.db.systemconfig_oper import SystemConfigOper
from app.helper.sites import SitesHelper
from app.log import logger
from app.schemas import Notification, NotificationType
from app.schemas.types import EventType
from app.schemas.types import EventType, SystemConfigKey
from app.utils.singleton import Singleton
from app.utils.timer import TimerUtils
@@ -74,8 +75,12 @@ class Scheduler(metaclass=Singleton):
message="用户认证失败次数过多,将不再尝试认证!",
role="system")
return
logger.info("用户未认证,正在尝试重新认证...")
status, msg = SitesHelper().check_user()
logger.info("用户未认证,正在尝试认证...")
auth_conf = SystemConfigOper().get(SystemConfigKey.UserSiteAuthParams)
if auth_conf:
status, msg = SitesHelper().check_user(**auth_conf)
else:
status, msg = SitesHelper().check_user()
if status:
self._auth_count = 0
logger.info(f"{msg} 用户认证成功")
@@ -169,6 +174,9 @@ class Scheduler(metaclass=Singleton):
# 停止定时服务
self.stop()
# Run user authentication once immediately
user_auth()
# Do not start scheduled services in debug mode
if settings.DEV:
return


@@ -180,6 +180,8 @@ class TorrentInfo(BaseModel):
site_proxy: Optional[bool] = False
# Site priority
site_order: Optional[int] = 0
# Site downloader
site_downloader: Optional[str] = None
# Torrent title
title: Optional[str] = None
# Torrent subtitle


@@ -1,4 +1,4 @@
from typing import Optional, Any
from typing import Optional, Any, Union, Dict
from pydantic import BaseModel
@@ -35,7 +35,7 @@ class Site(BaseModel):
# Note
note: Optional[Any] = None
# Timeout (seconds)
timeout: Optional[int] = 0
timeout: Optional[int] = 15
# Rate-limit interval
limit_interval: Optional[int] = None
# Rate-limit count
@@ -44,6 +44,8 @@
limit_seconds: Optional[int] = None
# Enabled flag
is_active: Optional[bool] = True
# Downloader
downloader: Optional[str] = None
class Config:
orm_mode = True
@@ -75,7 +77,7 @@ class SiteUserData(BaseModel):
# Username
username: Optional[str]
# User ID
userid: Optional[int]
userid: Optional[Union[int, str]]
# User level
user_level: Optional[str]
# Join time
@@ -108,3 +110,8 @@
updated_day: Optional[str] = None
# Update time
updated_time: Optional[str] = None
class SiteAuth(BaseModel):
site: Optional[str] = None
params: Optional[Dict[str, Union[int, str]]] = {}


@@ -54,6 +54,8 @@ class Subscribe(BaseModel):
username: Optional[str] = None
# Subscription sites
sites: Optional[List[int]] = []
# Downloader
downloader: Optional[str] = None
# Best-version ("洗版") re-download flag
best_version: Optional[int] = 0
# Current priority


@@ -83,7 +83,7 @@ class StorageConf(BaseModel):
"""
Storage configuration
"""
# Type: local/alipan/u115/rclone
# Type: local/alipan/u115/rclone/alist
type: Optional[str] = None
# Name
name: Optional[str] = None


@@ -122,6 +122,8 @@ class SystemConfigKey(Enum):
DefaultMovieSubscribeConfig = "DefaultMovieSubscribeConfig"
# Default TV-show subscription rules
DefaultTvSubscribeConfig = "DefaultTvSubscribeConfig"
# User site-authentication parameters
UserSiteAuthParams = "UserSiteAuthParams"
# Processing-progress key dictionary
@@ -187,6 +189,7 @@ class StorageSchema(Enum):
Alipan = "alipan"
U115 = "u115"
Rclone = "rclone"
Alist = "alist"
# Module type


@@ -24,7 +24,8 @@ class SiteUtils:
' or contains(@data-url, "logout")'
' or contains(@href, "mybonus") '
' or contains(@onclick, "logout")'
' or contains(@href, "usercp")]',
' or contains(@href, "usercp")'
' or contains(@lay-on, "logout")]',
'//form[contains(@action, "logout")]',
'//div[@class="user-info-side"]',
'//a[@id="myitem"]'

View File

@@ -275,6 +275,10 @@ class SystemUtils:
# Iterate over the directory
for path in directory.iterdir():
if path.is_dir():
if not SystemUtils.is_windows() and path.name.startswith("."):
continue
if path.name == "@eaDir":
continue
dirs.append(path)
return dirs
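The two skip rules added above (dot-prefixed directories on non-Windows systems, and Synology's `@eaDir` index directories) can be sketched as a standalone filter; `list_sub_dirs` is an illustrative name:

```python
import os
from pathlib import Path


def list_sub_dirs(directory: Path):
    """List child directories, skipping Unix hidden dirs (dot-prefixed,
    except on Windows) and Synology index dirs (@eaDir)."""
    dirs = []
    for path in directory.iterdir():
        if path.is_dir():
            # dot-prefix only means "hidden" on non-Windows systems
            if os.name != "nt" and path.name.startswith("."):
                continue
            # @eaDir is Synology's thumbnail/index directory
            if path.name == "@eaDir":
                continue
            dirs.append(path)
    return dirs
```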


@@ -1,6 +1,7 @@
import mimetypes
from pathlib import Path
from typing import Optional, Union
from urllib import parse
from urllib.parse import parse_qs, urlencode, urljoin, urlparse, urlunparse
from app.log import logger
@@ -95,3 +96,14 @@ class UrlUtils:
except Exception as e:
logger.debug(f"Error get_mime_type: {e}")
return default_type
@staticmethod
def quote(s: str) -> str:
"""
Encode a string into a URL-safe form.
This ensures special characters in a path (spaces, CJK characters, etc.) are encoded correctly for transmission in a URL
:param s: the string to encode
:return: the encoded string
"""
return parse.quote(s)
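`parse.quote` percent-encodes the UTF-8 bytes of characters outside the safe set, while `/` (part of the default safe set) is preserved, which is what makes CJK path segments URL-safe. For example:

```python
from urllib import parse

# Non-ASCII characters are encoded byte-by-byte as UTF-8 percent-escapes;
# spaces become %20; path separators survive unchanged
encoded = parse.quote("电影 2024/movie.mkv")
```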


@@ -15,6 +15,8 @@ DB_POOL_SIZE=100
DB_MAX_OVERFLOW=500
# The SQLite busy_timeout can be raised (e.g. to 180) to reduce lock errors
DB_TIMEOUT=60
# Whether SQLite uses WAL mode; enabling it improves read/write concurrency but may increase the risk of data loss after a crash
DB_WAL_ENABLE=false
# 【*】Superuser; once set, it is persisted to the database on restart and later edits here have no effect (the initial superuser password is generated only once: check the logs, then log in and change it)
SUPERUSER=admin
# Auxiliary authentication: allows authenticating through external services, single sign-on, and automatic user creation
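The `DB_WAL_ENABLE` switch maps to a single SQLite pragma; a minimal illustration with the stdlib driver (the path here is a throwaway temp file, not the app's database):

```python
import os
import sqlite3
import tempfile

# WAL improves read/write concurrency; one pragma enables it per database file
db_path = os.path.join(tempfile.mkdtemp(), "app.db")
conn = sqlite3.connect(db_path)
journal_mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
# journal_mode is "wal" for file-backed databases after this pragma
```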


@@ -64,7 +64,7 @@ def upgrade() -> None:
},
{
"type": "rclone",
"name": "Rclone网盘",
"name": "RClone",
"config": {}
}
])


@@ -0,0 +1,40 @@
"""2.0.6
Revision ID: a295e41830a6
Revises: ecf3c693fdf3
Create Date: 2024-11-14 12:49:13.838120
"""
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import sqlite
from app.db.systemconfig_oper import SystemConfigOper
from app.schemas.types import SystemConfigKey
# revision identifiers, used by Alembic.
revision = 'a295e41830a6'
down_revision = 'ecf3c693fdf3'
branch_labels = None
depends_on = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
# Initialize the AList storage
_systemconfig = SystemConfigOper()
_storages = _systemconfig.get(SystemConfigKey.Storages)
if _storages:
if "alist" not in [storage["type"] for storage in _storages]:
_storages.append({
"type": "alist",
"name": "AList",
"config": {}
})
_systemconfig.set(SystemConfigKey.Storages, _storages)
# ### end Alembic commands ###
def downgrade() -> None:
pass


@@ -0,0 +1,31 @@
"""2.0.7
Revision ID: eaf9cbc49027
Revises: a295e41830a6
Create Date: 2024-11-16 00:26:09.505188
"""
import contextlib
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = 'eaf9cbc49027'
down_revision = 'a295e41830a6'
branch_labels = None
depends_on = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
# Add a downloader option to site management and subscriptions
with contextlib.suppress(Exception):
op.add_column('site', sa.Column('downloader', sa.String(), nullable=True))
op.add_column('subscribe', sa.Column('downloader', sa.String(), nullable=True))
# ### end Alembic commands ###
def downgrade() -> None:
pass


@@ -3,7 +3,8 @@
# shellcheck disable=SC2016
# Use `envsubst` to substitute the actual environment variable values for ${NGINX_PORT} in the template
envsubst '${NGINX_PORT}${PORT}' < /etc/nginx/nginx.template.conf > /etc/nginx/nginx.conf
export NGINX_CLIENT_MAX_BODY_SIZE=${NGINX_CLIENT_MAX_BODY_SIZE:-10m}
envsubst '${NGINX_PORT}${PORT}${NGINX_CLIENT_MAX_BODY_SIZE}' < /etc/nginx/nginx.template.conf > /etc/nginx/nginx.conf
# Auto-update
cd /
/usr/local/bin/mp_update
@@ -21,7 +22,11 @@ chown -R moviepilot:moviepilot \
/var/log/nginx
chown moviepilot:moviepilot /etc/hosts /tmp
# Download the browser kernel
HTTPS_PROXY="${HTTPS_PROXY:-${https_proxy:-$PROXY_HOST}}" gosu moviepilot:moviepilot playwright install chromium
if [[ "$HTTPS_PROXY" =~ ^https?:// ]] || [[ "$https_proxy" =~ ^https?:// ]] || [[ "$PROXY_HOST" =~ ^https?:// ]]; then
HTTPS_PROXY="${HTTPS_PROXY:-${https_proxy:-$PROXY_HOST}}" gosu moviepilot:moviepilot playwright install chromium
else
gosu moviepilot:moviepilot playwright install chromium
fi
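The shell guard above only forwards a proxy variable to Playwright when the value actually looks like an http(s) URL. A Python rendering of that selection logic (a slight simplification: it picks the first URL-like candidate rather than replicating the shell's fallback-expansion chain):

```python
import re


def pick_proxy(*candidates):
    """Return the first candidate that looks like an http(s) URL, else None."""
    for value in candidates:
        if value and re.match(r"^https?://", value):
            return value
    return None
```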
# 启动前端nginx服务
nginx
# 启动docker http proxy nginx


@@ -17,6 +17,8 @@ http {
keepalive_timeout 3600;
client_max_body_size ${NGINX_CLIENT_MAX_BODY_SIZE};
gzip on;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
gzip_proxied any;


@@ -23,7 +23,7 @@ APScheduler~=3.10.1
cryptography~=43.0.0
pytz~=2023.3
pycryptodome~=3.20.0
qbittorrent-api==2024.9.67
qbittorrent-api==2024.11.69
plexapi~=4.15.16
transmission-rpc~=4.3.0
Jinja2~=3.1.4
@@ -58,7 +58,6 @@ pystray~=0.19.5
pyotp~=2.9.0
Pinyin2Hanzi~=0.1.1
pywebpush~=2.0.0
py115j~=0.0.7
oss2~=2.18.6
python-115~=0.0.9.8.7
aligo~=6.2.4
aiofiles~=24.1.0


@@ -345,13 +345,13 @@ meta_cases = [{
"part": "",
"season": "",
"episode": "",
"restype": "BluRay Remux",
"restype": "BluRay REMUX",
"pix": "1080p",
"video_codec": "AVC",
"audio_codec": "LPCM 7³"
}
}, {
"title": "30.Rock.S02E01.1080p.BluRay.X264-BORDURE.mkv",
"title": "30.Rock.S02E01.1080p.UHD.BluRay.X264-BORDURE.mkv",
"subtitle": "",
"target": {
"type": "电视剧",
@@ -361,7 +361,7 @@ meta_cases = [{
"part": "",
"season": "S02",
"episode": "E01",
"restype": "BluRay",
"restype": "UHD BluRay",
"pix": "1080p",
"video_codec": "X264",
"audio_codec": ""
@@ -611,7 +611,7 @@ meta_cases = [{
"subtitle": "",
"target": {
"type": "电视剧",
"cn_name": "刑少女的生存之道",
"cn_name": "刑少女的生存之道",
"en_name": "",
"year": "",
"part": "",
@@ -665,7 +665,7 @@ meta_cases = [{
"part": "",
"season": "",
"episode": "",
"restype": "BluRay DoVi UHD",
"restype": "UHD BluRay DoVi",
"pix": "1080p",
"video_codec": "X265",
"audio_codec": "DD 7.1"


@@ -20,6 +20,16 @@ function WARN() {
echo -e "${WARN} ${1}"
}
TMP_PATH=$(mktemp -d)
if [ ! -d "${TMP_PATH}" ]; then
# If auto-creating the tmp directory fails, fall back to a fixed path to avoid data loss
TMP_PATH=/tmp/mp_update_path
if [ -d /tmp/mp_update_path ]; then
rm -rf /tmp/mp_update_path
fi
mkdir -p /tmp/mp_update_path
fi
# Download and unzip
function download_and_unzip() {
local retries=0
@@ -28,9 +38,9 @@ function download_and_unzip() {
local target_dir="$2"
INFO "正在下载 ${url}..."
while [ $retries -lt $max_retries ]; do
if curl ${CURL_OPTIONS} "${url}" ${CURL_HEADERS} | busybox unzip -d /tmp - > /dev/null; then
if [ -e /tmp/MoviePilot-* ]; then
mv /tmp/MoviePilot-* /tmp/"${target_dir}"
if curl ${CURL_OPTIONS} "${url}" ${CURL_HEADERS} | busybox unzip -d ${TMP_PATH} - > /dev/null; then
if [ -e ${TMP_PATH}/MoviePilot-* ]; then
mv ${TMP_PATH}/MoviePilot-* ${TMP_PATH}/"${target_dir}"
fi
break
else
@@ -48,8 +58,6 @@ function download_and_unzip() {
# Download program resources; $1: backend version path
function install_backend_and_download_resources() {
# Clean the temp directory; a previous failed install may have left residue
rm -rf /tmp/*
# Update the backend program
if ! download_and_unzip "${GITHUB_PROXY}https://github.com/jxxghp/MoviePilot/archive/refs/${1}" "App"; then
WARN "后端程序下载失败,继续使用旧的程序来启动..."
@@ -61,16 +69,33 @@ function install_backend_and_download_resources() {
ERROR "pip 更新失败,请重新拉取镜像"
return 1
fi
if ! pip install ${PIP_OPTIONS} --root-user-action=ignore -r /tmp/App/requirements.txt > /dev/null; then
if ! pip install ${PIP_OPTIONS} --root-user-action=ignore -r ${TMP_PATH}/App/requirements.txt > /dev/null; then
ERROR "安装依赖失败,请重新拉取镜像"
return 1
fi
INFO "安装依赖成功"
# Read the frontend version number from the backend files
frontend_version=$(sed -n "s/^FRONTEND_VERSION\s*=\s*'\([^']*\)'/\1/p" /tmp/App/version.py)
if [[ "${frontend_version}" != *v* ]]; then
WARN "前端最新版本号获取失败,继续启动..."
return 1
# 如果是"heads/v2.zip"则查找v2开头的最新版本号
if [[ "${1}" == "heads/v2.zip" ]]; then
INFO "正在获取前端最新版本号..."
# Fetch the list of published releases and keep only tags starting with v2
releases=$(curl ${CURL_OPTIONS} "https://api.github.com/repos/jxxghp/MoviePilot-Frontend/releases" ${CURL_HEADERS} | jq -r '.[].tag_name' | grep "^v2\.")
if [ -z "$releases" ]; then
WARN "未找到任何v2前端版本继续启动..."
return 1
else
# Pick the latest v2 version
frontend_version=$(echo "$releases" | sort -V | tail -n 1)
fi
INFO "前端最新版本号:${frontend_version}"
else
INFO "正在获取前端版本号..."
# Read the frontend version number from the backend files
frontend_version=$(sed -n "s/^FRONTEND_VERSION\s*=\s*'\([^']*\)'/\1/p" ${TMP_PATH}/App/version.py)
if [[ "${frontend_version}" != *v* ]]; then
WARN "前端版本号获取失败,继续启动..."
return 1
fi
INFO "前端版本号:${frontend_version}"
fi
# Update the frontend program
if ! download_and_unzip "${GITHUB_PROXY}https://github.com/jxxghp/MoviePilot-Frontend/releases/download/${frontend_version}/dist.zip" "dist"; then
@@ -94,11 +119,11 @@ function install_backend_and_download_resources() {
rm -rf /app
mkdir -p /app
# Copy the new backend program
cp -a /tmp/App/* /app/
cp -a ${TMP_PATH}/App/* /app/
# Copy the new frontend program
rm -rf /public
mkdir -p /public
cp -a /tmp/dist/* /public/
cp -a ${TMP_PATH}/dist/* /public/
INFO "程序部分更新成功,前端版本:${frontend_version},后端版本:${1}"
# Restore the plugins directory
cp -a /plugins/* /app/app/plugins/
@@ -112,10 +137,10 @@ function install_backend_and_download_resources() {
fi
INFO "站点资源下载成功"
# Copy the new site resources
cp -a /tmp/Resources/resources/* /app/app/helper/
cp -a ${TMP_PATH}/Resources/resources/* /app/app/helper/
INFO "站点资源更新成功"
# Clean the temp directory
rm -rf /tmp/*
rm -rf "${TMP_PATH}"
return 0
}
@@ -212,14 +237,14 @@ function compare_versions() {
return 1
elif (( current_ver < release_ver )); then
INFO "发现新版本,开始自动升级..."
install_backend_and_download_resources "tags/${release_ver}.zip"
install_backend_and_download_resources "tags/$2.zip"
return 0
else
WARN "当前版本已是最新版本,跳过更新步骤..."
return 1
continue
fi
fi
done
WARN "当前版本已是最新版本,跳过更新步骤..."
}
# Priority conversion
@@ -287,11 +312,11 @@ if [[ "${MOVIEPILOT_AUTO_UPDATE}" = "true" ]] || [[ "${MOVIEPILOT_AUTO_UPDATE}"
# Fetch the list of published releases and keep only tags starting with v2
releases=$(curl ${CURL_OPTIONS} "https://api.github.com/repos/jxxghp/MoviePilot/releases" ${CURL_HEADERS} | jq -r '.[].tag_name' | grep "^v2\.")
if [ -z "$releases" ]; then
WARN "未找到任何v2.x版本,继续启动..."
WARN "未找到任何v2后端版本,继续启动..."
else
# Pick the latest v2 version
latest_v2=$(echo "$releases" | sort -V | tail -n 1)
INFO "最新的v2.x版本号:${latest_v2}"
INFO "最新的v2后端版本号:${latest_v2}"
# Compare with the version-comparison function and download the latest version
compare_versions "${current_version}" "${latest_v2}"
fi
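`sort -V | tail -n 1`, as used above, selects the highest version tag by numeric comparison. A Python equivalent for picking the newest v2 tag (illustrative helper, not part of the script):

```python
import re


def latest_v2_tag(tags):
    """Return the highest v2.x.y tag by numeric comparison, or None."""
    def key(tag):
        # "v2.0.10" -> (2, 0, 10), so 10 correctly outranks 8
        return tuple(int(p) for p in re.findall(r"\d+", tag))
    v2 = [t for t in tags if t.startswith("v2.")]
    return max(v2, key=key, default=None)
```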


@@ -1,2 +1,2 @@
APP_VERSION = 'v2.0.2'
FRONTEND_VERSION = 'v2.0.2'
APP_VERSION = 'v2.0.8'
FRONTEND_VERSION = 'v2.0.8'