mirror of
https://github.com/jxxghp/MoviePilot.git
synced 2026-05-10 06:22:48 +08:00
Compare commits
61 Commits
| Author | SHA1 | Date | |
|---|---|---|---|
| | 48f6a45194 | | |
| | c8ae6bcc78 | | |
| | 7f6beb2a78 | | |
| | ea160afd90 | | |
| | 29df0813fd | | |
| | b014c4a4e5 | | |
| | f173c21695 | | |
| | dc41f4946a | | |
| | fed754f03a | | |
| | 382d9ed525 | | |
| | e3707f39bb | | |
| | 9df8d3d360 | | |
| | 5b3c310cda | | |
| | 79d692771e | | |
| | f74ffed3ae | | |
| | 0325d7f4f1 | | |
| | 3926298907 | | |
| | d98376b490 | | |
| | 219690afc0 | | |
| | bcb1fc1600 | | |
| | 923be7e1e9 | | |
| | 951353ee0b | | |
| | 52bdfa7f9a | | |
| | 4af29aa76d | | |
| | 8efa6a742b | | |
| | ada5e1cca5 | | |
| | 859191203f | | |
| | cab4055315 | | |
| | cacee7abfe | | |
| | 61694f4c2b | | |
| | 9c328e3d1c | | |
| | b2fe86c744 | | |
| | 600e32d3e4 | | |
| | 3ad733bab4 | | |
| | 1799b63abb | | |
| | d71dc13e32 | | |
| | f4633788e9 | | |
| | 2250e7db39 | | |
| | b1bb0ced7a | | |
| | 28aecd79c6 | | |
| | d097ef45eb | | |
| | dac718edc8 | | |
| | 598ab23a2c | | |
| | 8be6e28933 | | |
| | bd6805be58 | | |
| | c147d36cb2 | | |
| | 7a5d210167 | | |
| | ef335f2b8e | | |
| | 19eca11d17 | | |
| | ab99bd356a | | |
| | 70f2d72532 | | |
| | 0ca995da0f | | |
| | 2a67abe62d | | |
| | 03a07ac7bf | | |
| | f104c903ec | | |
| | 6b74a8e266 | | |
| | cadd885dbf | | |
| | 7e0cad8491 | | |
| | 4c05e9fb2b | | |
| | 42311f0118 | | |
| | 951be74a21 | | |
README.md (12 changed lines)

```diff
@@ -84,7 +84,7 @@ docker pull jxxghp/moviepilot:latest
 - **SUBSCRIBE_MODE:** Subscription mode, `rss`/`spider`, default `spider`. In `rss` mode, subscriptions are matched by refreshing RSS feeds on a schedule (RSS URLs are fetched automatically and can also be maintained manually), which puts less load on sites; the refresh interval is configurable and runs around the clock, but subscription and download notifications cannot filter or display freeleech status. `rss` mode is recommended.
 - **SUBSCRIBE_RSS_INTERVAL:** RSS subscription refresh interval in minutes, default `30`, minimum 5 minutes.
 - **SUBSCRIBE_SEARCH:** Subscription search, `true`/`false`, default `false`. When enabled, a full search of all subscriptions runs every 24 hours to fill in missing episodes (normal subscriptions are usually sufficient; subscription search is only a fallback, increases site load, and is not recommended).
-- **MESSAGER:** Notification channel, supports `telegram`/`wechat`/`slack`; separate multiple channels with `,`. The corresponding channel's environment variables must also be configured; variables for unused channels can be removed. `telegram` is recommended.
+- **MESSAGER:** Notification channel, supports `telegram`/`wechat`/`slack`/`synologychat`; separate multiple channels with `,`. The corresponding channel's environment variables must also be configured; variables for unused channels can be removed. `telegram` is recommended.
 
 - `wechat` settings:
 
@@ -108,6 +108,11 @@ docker pull jxxghp/moviepilot:latest
 - **SLACK_OAUTH_TOKEN:** Slack Bot User OAuth Token
 - **SLACK_APP_TOKEN:** Slack App-Level Token
 - **SLACK_CHANNEL:** Slack channel name, default `全体`
+
+- `synologychat` settings:
+
+- **SYNOLOGYCHAT_WEBHOOK:** Create a bot in Synology Chat and use the bot's `incoming URL`
+- **SYNOLOGYCHAT_TOKEN:** The SynologyChat bot's `token`
 
 - **DOWNLOADER:** Downloader, supports `qbittorrent`/`transmission`; qBittorrent requires version >= 4.3.9 and Transmission >= 3.0. The corresponding environment variables must also be configured; variables for unused downloaders can be removed. `qbittorrent` is recommended.
@@ -145,6 +150,7 @@ docker pull jxxghp/moviepilot:latest
 - **PLEX_TOKEN:** The `X-Plex-Token` from the Plex web URL, obtained from a request URL via browser F12 -> Network
 
 - **MEDIASERVER_SYNC_INTERVAL:** Media server sync interval in hours, default `6`; leave empty to disable syncing
+- **MEDIASERVER_SYNC_BLACKLIST:** Media server sync blacklist; separate multiple library names with `,`
 
 
 ### 2. **User authentication**
@@ -228,9 +234,9 @@ docker pull jxxghp/moviepilot:latest
 - Sync sites quickly via CookieCloud; sites you do not use can be disabled in the web UI, and sites that cannot be synced can be added manually.
 - Manage through the web UI; add it to your phone's home screen for an app-like experience. Web UI port: `3000`, backend API port: `3001`.
 - Automatic organizing, importing and scraping via downloader monitoring or the directory-monitor plugin (choose one).
-- Remote management via WeChat/Telegram/Slack; WeChat/Telegram operation menus are added automatically (WeChat limits the number of menu entries, so some may not show). WeChat requires a callback URL configured on the official page; the relative path is `/api/v1/message/`.
+- Remote management via WeChat/Telegram/Slack/SynologyChat; WeChat/Telegram operation menus are added automatically (WeChat limits the number of menu entries, so some may not show). WeChat requires a callback URL configured on the official page, and SynologyChat requires the bot's incoming URL; the relative path is `/api/v1/message/`.
 - Set up a media server webhook so MoviePilot can send playback notifications. The webhook callback relative path is `/api/v1/webhook?token=moviepilot` (port `3001`), where `moviepilot` is the configured `API_TOKEN`.
-- Add MoviePilot to Overseerr or Jellyseerr as a Radarr or Sonarr server (port `3001`) to browse and subscribe via Overseerr/Jellyseerr.
+- Add MoviePilot to Overseerr or Jellyseerr as a Radarr or Sonarr server (the `API service port`) to browse and subscribe via Overseerr/Jellyseerr.
 - Map the host's docker.sock file into the container at `/var/run/docker.sock` to support the built-in restart action. Example: `-v /var/run/docker.sock:/var/run/docker.sock:ro`
 
 **Note**
```
alembic/versions/232dfa044617_1_0_6.py (new file, 40 lines)

```python
"""1.0.6

Revision ID: 232dfa044617
Revises: e734c7fe6056
Create Date: 2023-09-19 21:34:41.994617

"""
from alembic import op
import sqlalchemy as sa


# revision identifiers, used by Alembic.
revision = '232dfa044617'
down_revision = 'e734c7fe6056'
branch_labels = None
depends_on = None


def upgrade() -> None:
    # ### commands auto generated by Alembic - please adjust! ###
    # Search priority rules
    op.execute("delete from systemconfig where key = 'SearchFilterRules';")
    op.execute(
        "insert into systemconfig(key, value) VALUES('SearchFilterRules', (select value from systemconfig where key= 'FilterRules'));")
    # Subscription priority rules
    op.execute("delete from systemconfig where key = 'SubscribeFilterRules';")
    op.execute(
        "insert into systemconfig(key, value) VALUES('SubscribeFilterRules', (select value from systemconfig where key= 'FilterRules'));")
    # Best-version (upgrade) priority rules
    op.execute("delete from systemconfig where key = 'BestVersionFilterRules';")
    op.execute(
        "insert into systemconfig(key, value) VALUES('BestVersionFilterRules', (select value from systemconfig where key= 'FilterRules2'));")
    # Remove the old priority rule keys
    op.execute("delete from systemconfig where key = 'FilterRules';")
    op.execute("delete from systemconfig where key = 'FilterRules2';")
    # ### end Alembic commands ###


def downgrade() -> None:
    pass
```
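The migration above splits one shared rule set into three purpose-specific keys by copying values inside SQL. As a rough illustration only (not the project's actual schema or bootstrap code), the same `delete`-then-`insert ... select` pattern can be exercised against an in-memory SQLite database with a simplified `systemconfig` table:

```python
import sqlite3

# Simplified stand-in for the systemconfig table (illustrative only)
conn = sqlite3.connect(":memory:")
conn.execute("create table systemconfig(key text, value text)")
conn.execute("insert into systemconfig(key, value) values('FilterRules', 'specsub')")
conn.execute("insert into systemconfig(key, value) values('FilterRules2', 'bluray')")

# Copy the old single rule set into the new per-purpose keys
for new_key, old_key in [("SearchFilterRules", "FilterRules"),
                         ("SubscribeFilterRules", "FilterRules"),
                         ("BestVersionFilterRules", "FilterRules2")]:
    conn.execute(f"delete from systemconfig where key = '{new_key}'")
    conn.execute(
        f"insert into systemconfig(key, value) "
        f"values('{new_key}', (select value from systemconfig where key = '{old_key}'))")

# Drop the superseded keys, as the upgrade() above does
conn.execute("delete from systemconfig where key = 'FilterRules'")
conn.execute("delete from systemconfig where key = 'FilterRules2'")

rows = dict(conn.execute("select key, value from systemconfig"))
print(rows)
```

Running this leaves only the three new keys, each carrying the value of the rule set it replaced.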
```diff
@@ -64,13 +64,16 @@ def downloader(db: Session = Depends(get_db),
     """
     transfer_info = DashboardChain(db).downloader_info()
     free_space = SystemUtils.free_space(Path(settings.DOWNLOAD_PATH))
-    return schemas.DownloaderInfo(
-        download_speed=transfer_info.download_speed,
-        upload_speed=transfer_info.upload_speed,
-        download_size=transfer_info.download_size,
-        upload_size=transfer_info.upload_size,
-        free_space=free_space
-    )
+    if transfer_info:
+        return schemas.DownloaderInfo(
+            download_speed=transfer_info.download_speed,
+            upload_speed=transfer_info.upload_speed,
+            download_size=transfer_info.download_size,
+            upload_size=transfer_info.upload_size,
+            free_space=free_space
+        )
+    else:
+        return schemas.DownloaderInfo()
 
 
 @router.get("/schedule", summary="Background services", response_model=List[schemas.ScheduleInfo])
```
```diff
@@ -62,20 +62,22 @@ def transfer_history(title: str = None,
 
 @router.delete("/transfer", summary="Delete transfer history", response_model=schemas.Response)
 def delete_transfer_history(history_in: schemas.TransferHistory,
-                            delete_file: bool = False,
+                            deletesrc: bool = False,
+                            deletedest: bool = False,
                             db: Session = Depends(get_db),
                             _: schemas.TokenPayload = Depends(verify_token)) -> Any:
     """
     Delete a transfer history record
     """
-    # Trigger the delete event
-    if delete_file:
-        history = TransferHistory.get(db, history_in.id)
-        if not history:
-            return schemas.Response(success=False, msg="Record not found")
-        # Delete the file
-        if history.dest:
-            TransferChain(db).delete_files(Path(history.dest))
+    history = TransferHistory.get(db, history_in.id)
+    if not history:
+        return schemas.Response(success=False, msg="Record not found")
+    # Delete the media library file
+    if deletedest and history.dest:
+        TransferChain(db).delete_files(Path(history.dest))
+    # Delete the source file
+    if deletesrc and history.src:
+        TransferChain(db).delete_files(Path(history.src))
     # Delete the record
     TransferHistory.delete(db, history_in.id)
     return schemas.Response(success=True)
```
```diff
@@ -1,4 +1,3 @@
-import random
 from datetime import timedelta
 from typing import Any
 
@@ -15,7 +14,7 @@ from app.core.security import get_password_hash
 from app.db import get_db
 from app.db.models.user import User
 from app.log import logger
-from app.utils.http import RequestUtils
+from app.utils.web import WebUtils
 
 router = APIRouter()
 
@@ -67,21 +66,10 @@ def bing_wallpaper() -> Any:
     """
     Get the Bing wallpaper of the day
     """
-    url = "https://cn.bing.com/HPImageArchive.aspx?format=js&idx=0&n=1"
-    try:
-        resp = RequestUtils(timeout=5).get_res(url)
-    except Exception as err:
-        print(str(err))
-        return schemas.Response(success=False)
-    if resp and resp.status_code == 200:
-        try:
-            result = resp.json()
-            if isinstance(result, dict):
-                for image in result.get('images') or []:
-                    return schemas.Response(success=False,
-                                            message=f"https://cn.bing.com{image.get('url')}" if 'url' in image else '')
-        except Exception as err:
-            print(str(err))
+    url = WebUtils.get_bing_wallpaper()
+    if url:
+        return schemas.Response(success=False,
+                                message=url)
     return schemas.Response(success=False)
 
 
@@ -90,14 +78,10 @@ def tmdb_wallpaper(db: Session = Depends(get_db)) -> Any:
     """
     Get a TMDB movie poster
     """
-    infos = TmdbChain(db).tmdb_trending()
-    if infos:
-        # Pick a random movie
-        while True:
-            info = random.choice(infos)
-            if info and info.get("backdrop_path"):
-                return schemas.Response(
-                    success=True,
-                    message=f"https://image.tmdb.org/t/p/original{info.get('backdrop_path')}"
-                )
+    wallpager = TmdbChain(db).get_random_wallpager()
+    if wallpager:
+        return schemas.Response(
+            success=True,
+            message=wallpager
+        )
     return schemas.Response(success=False)
```
```diff
@@ -73,7 +73,9 @@ def read_switchs(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
     switchs = SystemConfigOper().get(SystemConfigKey.NotificationChannels)
     if not switchs:
         for noti in NotificationType:
-            return_list.append(NotificationSwitch(mtype=noti.value, wechat=True, telegram=True, slack=True))
+            return_list.append(NotificationSwitch(mtype=noti.value, wechat=True,
+                                                  telegram=True, slack=True,
+                                                  synologychat=True))
     else:
         for switch in switchs:
             return_list.append(NotificationSwitch(**switch))
```
```diff
@@ -234,14 +234,14 @@ def read_rss_sites(db: Session = Depends(get_db)) -> List[dict]:
     Get the site list
     """
     # Selected RSS sites
-    rss_sites = SystemConfigOper().get(SystemConfigKey.RssSites)
+    selected_sites = SystemConfigOper().get(SystemConfigKey.RssSites) or []
     # All sites
     all_site = Site.list_order_by_pri(db)
-    if not rss_sites or not all_site:
+    if not selected_sites or not all_site:
         return []
 
     # Selected RSS sites
-    rss_sites = [site for site in all_site if site and site.id in rss_sites]
+    rss_sites = [site for site in all_site if site and site.id in selected_sites]
     return rss_sites
```
```diff
@@ -25,7 +25,7 @@ router = APIRouter()
 
 
 @router.get("/env", summary="Query system environment variables", response_model=schemas.Response)
-def get_setting(_: schemas.TokenPayload = Depends(verify_token)):
+def get_env_setting(_: schemas.TokenPayload = Depends(verify_token)):
     """
     Query system environment variables, including the current version number
     """
@@ -83,7 +83,7 @@ def set_setting(key: str, value: Union[list, dict, str, int] = None,
 
 
 @router.get("/message", summary="Real-time messages")
-def get_progress(token: str):
+def get_message(token: str):
     """
     Get system messages in real time, returned as SSE
     """
@@ -169,31 +169,33 @@ def latest_version(_: schemas.TokenPayload = Depends(verify_token)):
     return schemas.Response(success=False)
 
 
-@router.get("/ruletest", summary="Filter rule test", response_model=schemas.Response)
+@router.get("/ruletest", summary="Priority rule test", response_model=schemas.Response)
 def ruletest(title: str,
              subtitle: str = None,
             ruletype: str = None,
             db: Session = Depends(get_db),
             _: schemas.TokenPayload = Depends(verify_token)):
     """
-    Filter rule test; rule types: 1 - subscription, 2 - best version
+    Filter rule test; rule types: 1 - subscription, 2 - best version, 3 - search
     """
     torrent = schemas.TorrentInfo(
         title=title,
         description=subtitle,
     )
     if ruletype == "2":
-        rule_string = SystemConfigOper().get(SystemConfigKey.FilterRules2)
+        rule_string = SystemConfigOper().get(SystemConfigKey.BestVersionFilterRules)
+    elif ruletype == "3":
+        rule_string = SystemConfigOper().get(SystemConfigKey.SearchFilterRules)
     else:
-        rule_string = SystemConfigOper().get(SystemConfigKey.FilterRules)
+        rule_string = SystemConfigOper().get(SystemConfigKey.SubscribeFilterRules)
     if not rule_string:
-        return schemas.Response(success=False, message="Filter rules are not configured!")
+        return schemas.Response(success=False, message="Priority rules are not configured!")
 
     # Filter
     result = SearchChain(db).filter_torrents(rule_string=rule_string,
                                              torrent_list=[torrent])
     if not result:
-        return schemas.Response(success=False, message="Does not match the filter rules!")
+        return schemas.Response(success=False, message="Does not match the priority rules!")
     return schemas.Response(success=True, data={
         "priority": 100 - result[0].pri_order + 1
     })
```
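The `/ruletest` response turns the internal rule order into a user-facing score via `100 - pri_order + 1`. A tiny sketch of that mapping (the helper name `display_priority` is ours, not the project's):

```python
def display_priority(pri_order: int) -> int:
    # Invert the internal rule order (1 = matched the top-priority rule)
    # into a score where a higher number means higher priority,
    # mirroring `100 - result[0].pri_order + 1` in the endpoint above.
    return 100 - pri_order + 1

print(display_priority(1))    # top rule -> highest score
print(display_priority(100))  # last rule -> lowest score
```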
```diff
@@ -223,16 +223,19 @@ class ChainBase(metaclass=ABCMeta):
 
     def filter_torrents(self, rule_string: str,
                         torrent_list: List[TorrentInfo],
-                        season_episodes: Dict[int, list] = None) -> List[TorrentInfo]:
+                        season_episodes: Dict[int, list] = None,
+                        mediainfo: MediaInfo = None) -> List[TorrentInfo]:
         """
         Filter torrent resources
         :param rule_string: filter rules
         :param torrent_list: resource list
         :param season_episodes: season/episode filter {season: [episodes]}
+        :param mediainfo: recognized media info
         :return: filtered resource list with resource priority attached
         """
         return self.run_module("filter_torrents", rule_string=rule_string,
-                               torrent_list=torrent_list, season_episodes=season_episodes)
+                               torrent_list=torrent_list, season_episodes=season_episodes,
+                               mediainfo=mediainfo)
 
     def download(self, content: Union[Path, str], download_dir: Path, cookie: str,
                  episodes: Set[int] = None, category: str = None
```
```diff
@@ -225,7 +225,7 @@ class DownloadChain(ChainBase):
             self.downloadhis.add_files(files_to_add)
 
         # Send a notification
-        self.post_download_message(meta=_meta, mediainfo=_media, torrent=_torrent, channel=channel)
+        self.post_download_message(meta=_meta, mediainfo=_media, torrent=_torrent, channel=channel, userid=userid)
         # Post-download processing
         self.download_added(context=context, download_dir=download_dir, torrent_path=torrent_file)
         # Broadcast the event
```
```diff
@@ -64,7 +64,13 @@ class MediaServerChain(ChainBase):
         total_count = 0
         # Clear the registry
         _dbOper.empty(server=settings.MEDIASERVER)
+        # Sync blacklist
+        sync_blacklist = settings.MEDIASERVER_SYNC_BLACKLIST.split(
+            ",") if settings.MEDIASERVER_SYNC_BLACKLIST else []
         for library in self.librarys():
+            if library.name in sync_blacklist:
+                # Skip blacklisted libraries
+                continue
             logger.info(f"Syncing library {library.name} ...")
             library_count = 0
             for item in self.items(library.id):
```
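The blacklist added above is parsed with a plain comma split and an empty-list fallback. As a standalone sketch (the helper name is ours):

```python
def parse_blacklist(value):
    # MEDIASERVER_SYNC_BLACKLIST is a comma-separated string of library
    # names, or unset; unset means "blacklist nothing".
    return value.split(",") if value else []

print(parse_blacklist("Library A,Library B"))
print(parse_blacklist(None))
```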
```diff
@@ -1,4 +1,5 @@
 import pickle
+import re
 from concurrent.futures import ThreadPoolExecutor, as_completed
 from datetime import datetime
 from typing import Dict
@@ -88,7 +89,7 @@ class SearchChain(ChainBase):
         :param keyword: search keyword
         :param no_exists: missing media info
         :param sites: site ID list; searches all sites when empty
-        :param filter_rule: filter rules; uses the default filter rules when empty
+        :param filter_rule: filter rules; uses the default search filter rules when empty
         :param area: search scope, title or imdbid
         """
         logger.info(f'Starting resource search, keyword: {keyword or mediainfo.title} ...')
@@ -129,18 +130,24 @@ class SearchChain(ChainBase):
             return []
         # Filter torrents
         if filter_rule is None:
-            # Use the default filter rules
-            filter_rule = self.systemconfig.get(SystemConfigKey.FilterRules)
+            # Use the search priority rules
+            filter_rule = self.systemconfig.get(SystemConfigKey.SearchFilterRules)
         if filter_rule:
             logger.info(f'Filtering resources, current rules: {filter_rule} ...')
             result: List[TorrentInfo] = self.filter_torrents(rule_string=filter_rule,
                                                              torrent_list=torrents,
-                                                             season_episodes=season_episodes)
+                                                             season_episodes=season_episodes,
+                                                             mediainfo=mediainfo)
             if result is not None:
                 torrents = result
             if not torrents:
-                logger.warn(f'{keyword or mediainfo.title} has no resources matching the filter rules')
+                logger.warn(f'{keyword or mediainfo.title} has no resources matching the priority rules')
                 return []
+        # Filter again with the default include/exclude rules
+        torrents = self.filter_torrents_by_default_rule(torrents)
+        if not torrents:
+            logger.warn(f'{keyword or mediainfo.title} has no resources matching the default filter rules')
+            return []
         # Matched resources
         _match_torrents = []
         # Total count
@@ -247,14 +254,14 @@ class SearchChain(ChainBase):
         """
         # Disabled sites are not searched
         indexer_sites = []
 
         # Configured indexer sites
-        if sites:
-            config_indexers = [str(sid) for sid in sites]
-        else:
-            config_indexers = [str(sid) for sid in self.systemconfig.get(SystemConfigKey.IndexerSites) or []]
+        if not sites:
+            sites = self.systemconfig.get(SystemConfigKey.IndexerSites) or []
 
         for indexer in self.siteshelper.get_indexers():
             # Check the site's indexing switch
-            if not config_indexers or str(indexer.get("id")) in config_indexers:
+            if not sites or indexer.get("id") in sites:
                 # Site rate limiting
                 state, msg = self.siteshelper.check(indexer.get("domain"))
                 if state:
@@ -264,6 +271,7 @@ class SearchChain(ChainBase):
         if not indexer_sites:
             logger.warn('No enabled sites available; cannot search for resources')
             return []
+
         # Start progress
         self.progress.start(ProgressKey.Search)
         # Start timing
@@ -305,3 +313,29 @@ class SearchChain(ChainBase):
         self.progress.end(ProgressKey.Search)
         # Return
         return results
+
+    def filter_torrents_by_default_rule(self,
+                                        torrents: List[TorrentInfo]) -> List[TorrentInfo]:
+
+        # Default filter rules
+        default_include_exclude = self.systemconfig.get(SystemConfigKey.DefaultIncludeExcludeFilter) or {}
+        if not default_include_exclude:
+            return torrents
+        include = default_include_exclude.get("include")
+        exclude = default_include_exclude.get("exclude")
+        new_torrents = []
+        for torrent in torrents:
+            # Include
+            if include:
+                if not re.search(r"%s" % include,
+                                 f"{torrent.title} {torrent.description}", re.I):
+                    logger.info(f"{torrent.title} does not match include rule {include}")
+                    continue
+            # Exclude
+            if exclude:
+                if re.search(r"%s" % exclude,
+                             f"{torrent.title} {torrent.description}", re.I):
+                    logger.info(f"{torrent.title} matches exclude rule {exclude}")
+                    continue
+            new_torrents.append(torrent)
+        return new_torrents
```
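The new `filter_torrents_by_default_rule` applies a case-insensitive include regex and an exclude regex to each torrent's `"title description"` string. A minimal standalone sketch of that per-torrent check (the function name and arguments are ours, not the project's API):

```python
import re

def passes_default_rule(title, description, include=None, exclude=None):
    # The torrent must match `include` (if set) and must NOT match
    # `exclude` (if set); both are case-insensitive regexes applied
    # to "title description", as in the chain method above.
    text = f"{title} {description}"
    if include and not re.search(include, text, re.I):
        return False
    if exclude and re.search(exclude, text, re.I):
        return False
    return True

print(passes_default_rule("Show.S01.2160p", "WEB-DL", include="2160p", exclude="HDR"))
print(passes_default_rule("Show.S01.2160p.HDR", "", include="2160p", exclude="HDR"))
```

The first call passes (include matched, exclude absent); the second fails on the exclude rule.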
```diff
@@ -264,9 +264,9 @@ class SubscribeChain(ChainBase):
                 sites = None
             # Filter rules
             if subscribe.best_version:
-                filter_rule = self.systemconfig.get(SystemConfigKey.FilterRules2)
+                filter_rule = self.systemconfig.get(SystemConfigKey.BestVersionFilterRules)
             else:
-                filter_rule = self.systemconfig.get(SystemConfigKey.FilterRules)
+                filter_rule = self.systemconfig.get(SystemConfigKey.SubscribeFilterRules)
             # Search; for TV shows this also filters out unneeded episodes
             contexts = self.searchchain.process(mediainfo=mediainfo,
                                                 keyword=subscribe.keyword,
@@ -285,16 +285,6 @@ class SubscribeChain(ChainBase):
                 torrent_meta = context.meta_info
                 torrent_info = context.torrent_info
                 torrent_mediainfo = context.media_info
-                # Include
-                if subscribe.include:
-                    if not re.search(r"%s" % subscribe.include,
-                                     f"{torrent_info.title} {torrent_info.description}", re.I):
-                        continue
-                # Exclude
-                if subscribe.exclude:
-                    if re.search(r"%s" % subscribe.exclude,
-                                 f"{torrent_info.title} {torrent_info.description}", re.I):
-                        continue
                 # Not best-version mode
                 if not subscribe.best_version:
                     # For TV shows, filter out episodes already downloaded
@@ -340,7 +330,7 @@ class SubscribeChain(ChainBase):
             # Update the subscription's remaining episode count and date
             update_date = True if downloads else False
             self.__update_lack_episodes(lefts=lefts, subscribe=subscribe,
-                                          mediainfo=mediainfo, update_date=update_date)
+                                        mediainfo=mediainfo, update_date=update_date)
             # Send a system message when triggered manually
             if manual:
                 if sid:
@@ -487,6 +477,10 @@ class SubscribeChain(ChainBase):
                     }
             else:
                 no_exists = {}
+            # Include/exclude rules
+            default_include_exclude = self.systemconfig.get(SystemConfigKey.DefaultIncludeExcludeFilter) or {}
+            include = subscribe.include or default_include_exclude.get("include")
+            exclude = subscribe.exclude or default_include_exclude.get("exclude")
             # Iterate over cached torrents
             _match_context = []
             for domain, contexts in torrents.items():
@@ -501,12 +495,13 @@ class SubscribeChain(ChainBase):
                     continue
                 # Filter rules
                 if subscribe.best_version:
-                    filter_rule = self.systemconfig.get(SystemConfigKey.FilterRules2)
+                    filter_rule = self.systemconfig.get(SystemConfigKey.BestVersionFilterRules)
                 else:
-                    filter_rule = self.systemconfig.get(SystemConfigKey.FilterRules)
+                    filter_rule = self.systemconfig.get(SystemConfigKey.SubscribeFilterRules)
                 result: List[TorrentInfo] = self.filter_torrents(
                     rule_string=filter_rule,
-                    torrent_list=[torrent_info])
+                    torrent_list=[torrent_info],
+                    mediainfo=torrent_mediainfo)
                 if result is not None and not result:
                     # Does not match the filter rules
                     logger.info(f"{torrent_info.title} does not match the current filter rules")
@@ -558,14 +553,16 @@ class SubscribeChain(ChainBase):
                     logger.info(f'{subscribe.name} is being upgraded; {torrent_info.title} is not a full season')
                     continue
                 # Include
-                if subscribe.include:
-                    if not re.search(r"%s" % subscribe.include,
+                if include:
+                    if not re.search(r"%s" % include,
                                      f"{torrent_info.title} {torrent_info.description}", re.I):
+                        logger.info(f"{torrent_info.title} does not match include rule {include}")
                         continue
                 # Exclude
-                if subscribe.exclude:
-                    if re.search(r"%s" % subscribe.exclude,
+                if exclude:
+                    if re.search(r"%s" % exclude,
                                  f"{torrent_info.title} {torrent_info.description}", re.I):
+                        logger.info(f"{torrent_info.title} matches exclude rule {exclude}")
                         continue
                 # Match succeeded
                 logger.info(f'{mediainfo.title_year} matched: {torrent_info.title}')
@@ -588,7 +585,7 @@ class SubscribeChain(ChainBase):
             update_date = True if downloads else False
             # Download not complete; compute remaining episodes
             self.__update_lack_episodes(lefts=lefts, subscribe=subscribe,
-                                          mediainfo=mediainfo, update_date=update_date)
+                                        mediainfo=mediainfo, update_date=update_date)
         else:
             if meta.type == MediaType.TV:
                 # No resources found, but local gaps may have changed; update remaining episode count
@@ -621,7 +618,8 @@ class SubscribeChain(ChainBase):
             if len(episodes) > (subscribe.total_episode or 0):
                 total_episode = len(episodes)
                 lack_episode = subscribe.lack_episode + (total_episode - subscribe.total_episode)
-                logger.info(f'Subscription {subscribe.name} total episode count changed; updating total to {total_episode}, missing {lack_episode} ...')
+                logger.info(
+                    f'Subscription {subscribe.name} total episode count changed; updating total to {total_episode}, missing {lack_episode} ...')
             else:
                 total_episode = subscribe.total_episode
                 lack_episode = subscribe.lack_episode
@@ -682,9 +680,9 @@ class SubscribeChain(ChainBase):
         return False
 
     def __update_lack_episodes(self, lefts: Dict[int, Dict[int, NotExistMediaInfo]],
-                                 subscribe: Subscribe,
-                                 mediainfo: MediaInfo,
-                                 update_date: bool = False):
+                               subscribe: Subscribe,
+                               mediainfo: MediaInfo,
+                               update_date: bool = False):
         """
        Update the subscription's remaining episode count
        """
```
```diff
@@ -1,11 +1,15 @@
+import random
 from typing import Optional, List
 
+from cachetools import cached, TTLCache
+
 from app import schemas
 from app.chain import ChainBase
 from app.schemas import MediaType
+from app.utils.singleton import Singleton
 
 
-class TmdbChain(ChainBase):
+class TmdbChain(ChainBase, metaclass=Singleton):
     """
     TheMovieDB processing chain
     """
@@ -106,3 +110,17 @@ class TmdbChain(ChainBase):
         :param page: page number
         """
         return self.run_module("person_credits", person_id=person_id, page=page)
+
+    @cached(cache=TTLCache(maxsize=1, ttl=3600))
+    def get_random_wallpager(self):
+        """
+        Get a random wallpaper, cached for 1 hour
+        """
+        infos = self.tmdb_trending()
+        if infos:
+            # Pick a random movie
+            while True:
+                info = random.choice(infos)
+                if info and info.get("backdrop_path"):
+                    return f"https://image.tmdb.org/t/p/original{info.get('backdrop_path')}"
+        return None
```
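`get_random_wallpager` relies on cachetools' `@cached(TTLCache(maxsize=1, ttl=3600))` so that every caller shares one random pick for an hour. The same idea can be shown with a minimal stdlib-only stand-in for that decorator (this is our sketch, not the cachetools implementation):

```python
import random
import time

def ttl_cached(ttl):
    # Minimal stand-in for @cached(cache=TTLCache(maxsize=1, ttl=...)):
    # remember one result and recompute it only after `ttl` seconds.
    def decorator(func):
        state = {"value": None, "expires": 0.0}
        def wrapper():
            now = time.monotonic()
            if now >= state["expires"]:
                state["value"] = func()
                state["expires"] = now + ttl
            return state["value"]
        return wrapper
    return decorator

@ttl_cached(ttl=3600)
def get_random_wallpaper():
    # The real method picks a random trending backdrop URL; with the
    # cache in front, all callers share one pick for an hour.
    return random.random()

first = get_random_wallpaper()
print(get_random_wallpaper() == first)  # served from the cache
```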
```diff
@@ -60,7 +60,7 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
         else:
             return self.load_cache(self._rss_file) or {}
 
-    @cached(cache=TTLCache(maxsize=128, ttl=600))
+    @cached(cache=TTLCache(maxsize=128 if settings.BIG_MEMORY_MODE else 1, ttl=600))
     def browse(self, domain: str) -> List[TorrentInfo]:
         """
         Browse a site's home page and return the torrent list; TTL-cached for 10 minutes
@@ -73,7 +73,7 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
             return []
         return self.refresh_torrents(site=site)
 
-    @cached(cache=TTLCache(maxsize=128, ttl=300))
+    @cached(cache=TTLCache(maxsize=128 if settings.BIG_MEMORY_MODE else 1, ttl=300))
     def rss(self, domain: str) -> List[TorrentInfo]:
         """
         Fetch a site's RSS feed and return the torrent list; TTL-cached for 5 minutes
@@ -129,7 +129,7 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
 
         # Sites to refresh
         if not sites:
-            sites = [str(sid) for sid in (self.systemconfig.get(SystemConfigKey.RssSites) or [])]
+            sites = self.systemconfig.get(SystemConfigKey.RssSites) or []
 
         # Read the cache
         torrents_cache = self.get_torrents()
@@ -139,7 +139,7 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
         # Iterate over cached site resources
         for indexer in indexers:
             # Skip disabled sites
-            if sites and str(indexer.get("id")) not in sites:
+            if sites and indexer.get("id") not in sites:
                 continue
             domain = StringUtils.get_url_domain(indexer.get("domain"))
             if stype == "spider":
```
```diff
@@ -1,3 +1,4 @@
+import glob
 import re
 import shutil
 import threading
@@ -299,7 +300,7 @@ class TransferChain(ChainBase):
             if not transferinfo:
                 logger.error("File transfer module failed to run")
                 return False, "File transfer module failed to run"
-            if not transferinfo.target_path:
+            if not transferinfo.success:
                 # Transfer failed
                 logger.warn(f"{file_path.name} failed to import: {transferinfo.message}")
                 err_msgs.append(f"{file_path.name} {transferinfo.message}")
@@ -489,6 +490,7 @@ class TransferChain(ChainBase):
         src_path = Path(history.src)
         if not src_path.exists():
             return False, f"Source directory does not exist: {src_path}"
+        dest_path = Path(history.dest) if history.dest else None
         # Look up media info
         mediainfo = self.recognize_media(mtype=mtype, tmdbid=tmdbid)
         if not mediainfo:
@@ -506,6 +508,7 @@ class TransferChain(ChainBase):
         state, errmsg = self.do_transfer(path=src_path,
                                          mediainfo=mediainfo,
                                          download_hash=history.download_hash,
+                                         target=dest_path,
                                          force=True)
         if not state:
             return False, errmsg
@@ -601,8 +604,10 @@ class TransferChain(ChainBase):
         if not path.exists():
             return
         if path.is_file():
-            # Delete the file
-            path.unlink()
+            # Delete the file together with its nfo and jpg
+            files = glob.glob(f"{Path(path.parent).joinpath(path.stem)}*")
+            for file in files:
+                Path(file).unlink()
             logger.warn(f"File {path} deleted")
         # Parent directory needs deleting
         elif str(path.parent) == str(path.root):
@@ -615,11 +620,24 @@ class TransferChain(ChainBase):
             # Delete the directory
             logger.warn(f"Directory {path} deleted")
-        # Parent directory needs deleting
-        # Check whether the parent directory is empty; delete it if so
-        for parent_path in path.parents:
-            if str(parent_path.parent) != str(path.root):
-                # Only delete the parent if it is not the root directory
-                files = SystemUtils.list_files(parent_path, settings.RMT_MEDIAEXT)
-                if not files:
-                    shutil.rmtree(parent_path)
-                    logger.warn(f"Directory {parent_path} deleted")
+        # If the media's parent path still contains media files, there is no need to walk the parents
+        if not SystemUtils.exits_files(path.parent, settings.RMT_MEDIAEXT):
+            # Library second-level category root names
+            library_root_names = [
+                settings.LIBRARY_MOVIE_NAME or '电影',
+                settings.LIBRARY_TV_NAME or '电视剧',
+                settings.LIBRARY_ANIME_NAME or '动漫',
+            ]
+
+            # Check whether each parent directory is empty; delete it if so
+            for parent_path in path.parents:
+                # Walk parents only up to the library category root
+                if str(parent_path.name) in library_root_names:
+                    break
+                if str(parent_path.parent) != str(path.root):
+                    # Only delete the parent if it is not the root directory
+                    if not SystemUtils.exits_files(parent_path, settings.RMT_MEDIAEXT):
+                        # Delete when the path contains no media files
+                        shutil.rmtree(parent_path)
+                        logger.warn(f"Directory {parent_path} deleted")
```
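The reworked cleanup above walks `path.parents` upward, stops at a library category root name, and removes each parent that no longer contains media files. A self-contained sketch of that walk using a temporary directory (names and the `prune_empty_parents` helper are ours, and "contains no files" stands in for the project's `SystemUtils.exits_files` media-extension check):

```python
import shutil
import tempfile
from pathlib import Path

def prune_empty_parents(path, stop_names):
    # Walk upward from a deleted file: stop at a library root name,
    # and remove each parent directory that contains no files at all.
    removed = []
    for parent in path.parents:
        if parent.name in stop_names:
            break
        if not any(p.is_file() for p in parent.rglob("*")):
            shutil.rmtree(parent)
            removed.append(parent.name)
    return removed

root = Path(tempfile.mkdtemp())
(root / "Movies" / "Some Movie (2023)").mkdir(parents=True)
target = root / "Movies" / "Some Movie (2023)" / "movie.mkv"  # already deleted

removed_dirs = prune_empty_parents(target, stop_names={"Movies"})
print(removed_dirs)
```

Only the now-empty movie folder is removed; the walk breaks before touching the `Movies` category root.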
```diff
@@ -4,7 +4,7 @@ from typing import Any
 
 from app.chain import ChainBase
 from app.schemas import Notification
 from app.schemas.types import EventType, MediaImageType, MediaType, NotificationType
-from app.utils.http import WebUtils
+from app.utils.web import WebUtils
 
 
 class WebhookChain(ChainBase):
```
```diff
@@ -106,6 +106,10 @@ class Settings(BaseSettings):
     SLACK_APP_TOKEN: str = ""
     # Slack channel name
     SLACK_CHANNEL: str = ""
+    # SynologyChat Webhook
+    SYNOLOGYCHAT_WEBHOOK: str = ""
+    # SynologyChat Token
+    SYNOLOGYCHAT_TOKEN: str = ""
     # Downloader: qbittorrent/transmission
     DOWNLOADER: str = "qbittorrent"
     # Downloader monitoring switch
@@ -144,6 +148,8 @@ class Settings(BaseSettings):
     REFRESH_MEDIASERVER: bool = True
     # Media server sync interval (hours)
     MEDIASERVER_SYNC_INTERVAL: int = 6
+    # Media server sync blacklist; separate multiple library names with ,
+    MEDIASERVER_SYNC_BLACKLIST: str = None
     # Emby server address, IP:PORT
     EMBY_HOST: str = None
     # Emby Api Key
```
```diff
@@ -1,4 +1,4 @@
-from typing import Any
+from typing import Any, Self, List
 
 from sqlalchemy.orm import as_declarative, declared_attr, Session
 
@@ -16,13 +16,13 @@ class Base:
             db.rollback()
             raise err
 
-    def create(self, db: Session):
+    def create(self, db: Session) -> Self:
         db.add(self)
         self.commit(db)
         return self
 
     @classmethod
-    def get(cls, db: Session, rid: int):
+    def get(cls, db: Session, rid: int) -> Self:
         return db.query(cls).filter(cls.id == rid).first()
 
     def update(self, db: Session, payload: dict):
@@ -42,7 +42,7 @@ class Base:
         Base.commit(db)
 
     @classmethod
-    def list(cls, db: Session):
+    def list(cls, db: Session) -> List[Self]:
         return db.query(cls).all()
 
     def to_dict(self):
```
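The annotations added above use `typing.Self` (new in Python 3.11) so that `create` and `get` are typed as returning the concrete subclass rather than `Base`. A minimal illustration of the pattern without any real ORM session (the in-memory `_rows` store and the import fallback are ours, purely for demonstration):

```python
from __future__ import annotations

try:  # typing.Self is new in Python 3.11
    from typing import Self
except ImportError:  # pragma: no cover - older interpreters
    Self = "Base"  # placeholder; annotations stay unevaluated anyway

class Base:
    # Toy in-memory store standing in for a SQLAlchemy session
    _rows = {}

    def __init__(self, rid):
        self.id = rid

    def create(self) -> Self:
        # Mirrors Base.create above: persist, then return self for chaining
        Base._rows[self.id] = self
        return self

    @classmethod
    def get(cls, rid) -> Self:
        # Mirrors Base.get above: look up one row by primary key
        return cls._rows.get(rid)

obj = Base(1).create()
print(Base.get(1) is obj)  # the same instance comes back
```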
```diff
@@ -58,6 +58,8 @@ def checkMessage(channel_type: MessageChannel):
                 return None
             if channel_type == MessageChannel.Slack and not switch.get("slack"):
                 return None
+            if channel_type == MessageChannel.SynologyChat and not switch.get("synologychat"):
+                return None
         return func(self, message, *args, **kwargs)
 
     return wrapper
```
```diff
@@ -272,8 +272,8 @@ class Emby(metaclass=Singleton):
             return None
         return ""
 
-    def get_movies(self,
-                    title: str,
+    def get_movies(self,
+                   title: str,
                    year: str = None,
                    tmdb_id: int = None) -> Optional[List[dict]]:
         """
@@ -338,10 +338,12 @@ class Emby(metaclass=Singleton):
         if not item_id:
             return {}
         # Verify that the tmdbid matches
-        item_tmdbid = (self.get_iteminfo(item_id).get("ProviderIds") or {}).get("Tmdb")
-        if tmdb_id and item_tmdbid:
-            if str(tmdb_id) != str(item_tmdbid):
-                return {}
+        item_info = self.get_iteminfo(item_id)
+        if item_info:
+            item_tmdbid = (item_info.get("ProviderIds") or {}).get("Tmdb")
+            if tmdb_id and item_tmdbid:
+                if str(tmdb_id) != str(item_tmdbid):
+                    return {}
         # /Shows/Id/Episodes queries episode info
         if not season:
             season = ""
```
@@ -43,9 +43,13 @@ class FileTransferModule(_ModuleBase):
         # 获取目标路径
         if not target:
             target = self.get_target_path(in_path=path)
+        else:
+            target = self.get_library_path(target)
         if not target:
             logger.error("未找到媒体库目录,无法转移文件")
-            return TransferInfo(message="未找到媒体库目录,无法转移文件")
+            return TransferInfo(success=False,
+                                path=path,
+                                message="未找到媒体库目录,无法转移文件")
         # 转移
         return self.transfer_media(in_path=path,
                                    in_meta=meta,
@@ -316,9 +320,11 @@ class FileTransferModule(_ModuleBase):
                               over_flag=over_flag)

     @staticmethod
-    def __get_library_dir(mediainfo: MediaInfo, target_dir: Path) -> Path:
+    def __get_dest_dir(mediainfo: MediaInfo, target_dir: Path) -> Path:
         """
         根据设置拼装媒体库目录
         :param mediainfo: 媒体信息
         :param target_dir: 媒体库根目录
         """
         if mediainfo.type == MediaType.MOVIE:
             # 电影
@@ -355,19 +361,23 @@ class FileTransferModule(_ModuleBase):
         :param in_path: 转移的路径,可能是一个文件也可以是一个目录
         :param in_meta: 预识别元数据
         :param mediainfo: 媒体信息
-        :param target_dir: 目的文件夹,非空的转移到该文件夹,为空时则按类型转移到配置文件中的媒体库文件夹
+        :param target_dir: 媒体库根目录
         :param transfer_type: 文件转移方式
         :return: TransferInfo、错误信息
         """
         # 检查目录路径
         if not in_path.exists():
-            return TransferInfo(message=f"{in_path} 路径不存在")
+            return TransferInfo(success=False,
+                                path=in_path,
+                                message=f"{in_path} 路径不存在")

         if not target_dir.exists():
-            return TransferInfo(message=f"{target_dir} 目标路径不存在")
+            return TransferInfo(success=False,
+                                path=in_path,
+                                message=f"{target_dir} 目标路径不存在")

-        # 媒体库目录
-        target_dir = self.__get_library_dir(mediainfo=mediainfo, target_dir=target_dir)
+        # 媒体库目的目录
+        target_dir = self.__get_dest_dir(mediainfo=mediainfo, target_dir=target_dir)

         # 重命名格式
         rename_format = settings.TV_RENAME_FORMAT \
@@ -393,11 +403,16 @@ class FileTransferModule(_ModuleBase):
                                        transfer_type=transfer_type)
             if retcode != 0:
                 logger.error(f"文件夹 {in_path} 转移失败,错误码:{retcode}")
-                return TransferInfo(message=f"文件夹 {in_path} 转移失败,错误码:{retcode}")
+                return TransferInfo(success=False,
+                                    message=f"文件夹 {in_path} 转移失败,错误码:{retcode}",
+                                    path=in_path,
+                                    target_path=new_path,
+                                    is_bluray=bluray_flag)

             logger.info(f"文件夹 {in_path} 转移成功")
             # 返回转移后的路径
-            return TransferInfo(path=in_path,
+            return TransferInfo(success=True,
+                                path=in_path,
                                 target_path=new_path,
                                 total_size=new_path.stat().st_size,
                                 is_bluray=bluray_flag)
@@ -440,11 +455,15 @@ class FileTransferModule(_ModuleBase):
                                          over_flag=overflag)
             if retcode != 0:
                 logger.error(f"文件 {in_path} 转移失败,错误码:{retcode}")
-                return TransferInfo(message=f"文件 {in_path.name} 转移失败,错误码:{retcode}",
+                return TransferInfo(success=False,
+                                    message=f"文件 {in_path.name} 转移失败,错误码:{retcode}",
+                                    path=in_path,
+                                    target_path=new_file,
                                     fail_list=[str(in_path)])

             logger.info(f"文件 {in_path} 转移成功")
-            return TransferInfo(path=in_path,
+            return TransferInfo(success=True,
+                                path=in_path,
                                 target_path=new_file,
                                 file_count=1,
                                 total_size=new_file.stat().st_size,
@@ -514,6 +533,28 @@ class FileTransferModule(_ModuleBase):
         else:
             return Path(render_str)

+    @staticmethod
+    def get_library_path(path: Path):
+        """
+        根据目录查询其所在的媒体库目录
+        """
+        if not path:
+            return None
+        if not settings.LIBRARY_PATHS:
+            return None
+        # 目的路径,多路径以,分隔
+        dest_paths = settings.LIBRARY_PATHS
+        if len(dest_paths) == 1:
+            return dest_paths[0]
+        for libpath in dest_paths:
+            try:
+                if path.is_relative_to(libpath):
+                    return libpath
+            except Exception as e:
+                logger.debug(f"计算媒体库路径时出错:{e}")
+                continue
+        return None
+
     @staticmethod
     def get_target_path(in_path: Path = None) -> Optional[Path]:
         """
@@ -533,7 +574,7 @@ class FileTransferModule(_ModuleBase):
         if in_path:
             for path in dest_paths:
                 try:
-                    relative = Path(in_path).relative_to(path).as_posix()
+                    relative = in_path.relative_to(path).as_posix()
                     if len(relative) > max_length:
                         max_length = len(relative)
                         target_path = path
@@ -569,7 +610,7 @@ class FileTransferModule(_ModuleBase):
                 if not target_dir:
                     continue
                 # 媒体分类路径
-                target_dir = self.__get_library_dir(mediainfo=mediainfo, target_dir=target_dir)
+                target_dir = self.__get_dest_dir(mediainfo=mediainfo, target_dir=target_dir)
                 # 重命名格式
                 rename_format = settings.TV_RENAME_FORMAT \
                     if mediainfo.type == MediaType.TV else settings.MOVIE_RENAME_FORMAT
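The new `get_library_path` helper in the diff above matches a path against the configured library roots with `Path.is_relative_to`. A standalone sketch of the same containment check (the paths are made up):

```python
from pathlib import Path
from typing import List, Optional


def pick_library(path: Path, libraries: List[Path]) -> Optional[Path]:
    """Return the first library root that contains `path`, else None."""
    if len(libraries) == 1:
        # mirror the diff's shortcut: a single library always wins
        return libraries[0]
    for lib in libraries:
        # Path.is_relative_to (Python 3.9+) answers the containment
        # question without raising, unlike relative_to()
        if path.is_relative_to(lib):
            return lib
    return None


libs = [Path("/media/movies"), Path("/media/tv")]
print(pick_library(Path("/media/tv/Show/S01/E01.mkv"), libs))
```

Note that the sibling `get_target_path` method uses the opposite strategy, picking the root whose relative path is *longest*, i.e. the deepest match.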
@@ -1,7 +1,7 @@
 import re
 from typing import List, Tuple, Union, Dict, Optional

-from app.core.context import TorrentInfo
+from app.core.context import TorrentInfo, MediaInfo
 from app.core.metainfo import MetaInfo
 from app.log import logger
 from app.modules import _ModuleBase
@@ -9,9 +9,10 @@ from app.modules.filter.RuleParser import RuleParser


 class FilterModule(_ModuleBase):

     # 规则解析器
     parser: RuleParser = None
+    # 媒体信息
+    media: MediaInfo = None

     # 内置规则集
     rule_set: Dict[str, dict] = {
@@ -37,8 +38,12 @@ class FilterModule(_ModuleBase):
         },
         # 中字
         "CNSUB": {
-            "include": [r'[中国國繁简](/|\s|\\|\|)?[繁简英粤]|[英简繁](/|\s|\\|\|)?[中繁简]|繁體|简体|[中国國][字配]|国语|國語|中文|中字'],
-            "exclude": []
+            "include": [
+                r'[中国國繁简](/|\s|\\|\|)?[繁简英粤]|[英简繁](/|\s|\\|\|)?[中繁简]|繁體|简体|[中国國][字配]|国语|國語|中文|中字'],
+            "exclude": [],
+            "tmdb": {
+                "original_language": "zh,cn"
+            }
         },
         # 特效字幕
         "SPECSUB": {
@@ -107,16 +112,19 @@ class FilterModule(_ModuleBase):

     def filter_torrents(self, rule_string: str,
                         torrent_list: List[TorrentInfo],
-                        season_episodes: Dict[int, list] = None) -> List[TorrentInfo]:
+                        season_episodes: Dict[int, list] = None,
+                        mediainfo: MediaInfo = None) -> List[TorrentInfo]:
         """
         过滤种子资源
         :param rule_string: 过滤规则
         :param torrent_list: 资源列表
         :param season_episodes: 季集数过滤 {season:[episodes]}
+        :param mediainfo: 媒体信息
         :return: 过滤后的资源列表,添加资源优先级
         """
         if not rule_string:
             return torrent_list
+        self.media = mediainfo
         # 返回种子列表
         ret_torrents = []
         for torrent in torrent_list:
@@ -215,6 +223,11 @@ class FilterModule(_ModuleBase):
         if not self.rule_set.get(rule_name):
             # 规则不存在
             return False
+        # TMDB规则
+        tmdb = self.rule_set[rule_name].get("tmdb")
+        # 符合TMDB规则的直接返回True,即不过滤
+        if tmdb and self.__match_tmdb(tmdb):
+            return True
         # 包含规则项
         includes = self.rule_set[rule_name].get("include") or []
         # 排除规则项
@@ -236,3 +249,44 @@ class FilterModule(_ModuleBase):
             # FREE规则不匹配
             return False
         return True
+
+    def __match_tmdb(self, tmdb: dict) -> bool:
+        """
+        判断种子是否匹配TMDB规则
+        """
+        def __get_media_value(key: str):
+            try:
+                return getattr(self.media, key)
+            except ValueError:
+                return ""
+
+        if not self.media:
+            return False
+
+        for attr, value in tmdb.items():
+            if not value:
+                continue
+            # 获取media信息的值
+            info_value = __get_media_value(attr)
+            if not info_value:
+                # 没有该值,不匹配
+                return False
+            elif attr == "production_countries":
+                # 国家信息
+                info_values = [str(val.get("iso_3166_1")).upper() for val in info_value]
+            else:
+                # media信息转化为数组
+                if isinstance(info_value, list):
+                    info_values = [str(val).upper() for val in info_value]
+                else:
+                    info_values = [str(info_value).upper()]
+            # 过滤值转化为数组
+            if value.find(",") != -1:
+                values = [str(val).upper() for val in value.split(",")]
+            else:
+                values = [str(value).upper()]
+            # 没有交集为不匹配
+            if not set(values).intersection(set(info_values)):
+                return False
+
+        return True
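The `__match_tmdb` rule added above boils down to a case-insensitive set intersection between a comma-separated rule value and the media attribute. The core comparison in isolation (the example values are invented; field names follow the diff's `original_language` rule):

```python
def match_rule(rule_value: str, info_value) -> bool:
    """True when the comma-separated rule shares at least one value
    (compared case-insensitively) with the media attribute."""
    # normalize the media attribute to an upper-cased list
    if not isinstance(info_value, list):
        info_value = [info_value]
    info_values = {str(v).upper() for v in info_value}
    # normalize the rule value the same way
    values = {v.upper() for v in str(rule_value).split(",")}
    # any overlap means the rule matches
    return bool(values & info_values)


# e.g. the CNSUB rule requires original_language to be zh or cn
print(match_rule("zh,cn", "zh"))   # True
print(match_rule("zh,cn", "en"))   # False
```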
@@ -278,7 +278,7 @@ class Plex(metaclass=Singleton):
                 if hasattr(lib, "locations") and lib.locations:
                     for location in lib.locations:
                         if is_subpath(path, Path(location)):
-                            return lib.key, location
+                            return lib.key, str(path)
         except Exception as err:
             logger.error(f"查找媒体库出错:{err}")
         return "", ""
app/modules/synologychat/__init__.py (new file, 85 lines)
@@ -0,0 +1,85 @@
from typing import Optional, Union, List, Tuple, Any

from app.core.context import MediaInfo, Context
from app.log import logger
from app.modules import _ModuleBase, checkMessage
from app.modules.synologychat.synologychat import SynologyChat
from app.schemas import MessageChannel, CommingMessage, Notification


class SynologyChatModule(_ModuleBase):
    synologychat: SynologyChat = None

    def init_module(self) -> None:
        self.synologychat = SynologyChat()

    def stop(self):
        pass

    def init_setting(self) -> Tuple[str, Union[str, bool]]:
        return "MESSAGER", "synologychat"

    def message_parser(self, body: Any, form: Any,
                       args: Any) -> Optional[CommingMessage]:
        """
        解析消息内容,返回字典,注意以下约定值:
        userid: 用户ID
        username: 用户名
        text: 内容
        :param body: 请求体
        :param form: 表单
        :param args: 参数
        :return: 渠道、消息体
        """
        try:
            message: dict = form
            if not message:
                return None
            # 校验token
            token = message.get("token")
            if not token or not self.synologychat.check_token(token):
                return None
            # 文本
            text = message.get("text")
            # 用户ID
            user_id = int(message.get("user_id"))
            # 获取用户名
            user_name = message.get("username")
            if text and user_id:
                logger.info(f"收到SynologyChat消息:userid={user_id}, username={user_name}, text={text}")
                return CommingMessage(channel=MessageChannel.SynologyChat,
                                      userid=user_id, username=user_name, text=text)
        except Exception as err:
            logger.debug(f"解析SynologyChat消息失败:{err}")
        return None

    @checkMessage(MessageChannel.SynologyChat)
    def post_message(self, message: Notification) -> None:
        """
        发送消息
        :param message: 消息体
        :return: 成功或失败
        """
        self.synologychat.send_msg(title=message.title, text=message.text,
                                   image=message.image, userid=message.userid)

    @checkMessage(MessageChannel.SynologyChat)
    def post_medias_message(self, message: Notification, medias: List[MediaInfo]) -> Optional[bool]:
        """
        发送媒体信息选择列表
        :param message: 消息体
        :param medias: 媒体列表
        :return: 成功或失败
        """
        return self.synologychat.send_meidas_msg(title=message.title, medias=medias,
                                                 userid=message.userid)

    @checkMessage(MessageChannel.SynologyChat)
    def post_torrents_message(self, message: Notification, torrents: List[Context]) -> Optional[bool]:
        """
        发送种子信息选择列表
        :param message: 消息体
        :param torrents: 种子列表
        :return: 成功或失败
        """
        return self.synologychat.send_torrents_msg(title=message.title, torrents=torrents, userid=message.userid)
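The `message_parser` method of the new module trusts an incoming form only after its `token` field passes `check_token`, then normalizes the fields into a message object. A minimal sketch of that handshake (the token value and form dict are fabricated examples; the real fields come from the SynologyChat outgoing-webhook form as used above):

```python
from typing import Optional

EXPECTED_TOKEN = "secret"  # stands in for settings.SYNOLOGYCHAT_TOKEN


def parse(form: dict) -> Optional[dict]:
    # reject any request whose token does not match the configured one
    if form.get("token") != EXPECTED_TOKEN:
        return None
    text, user_id = form.get("text"), form.get("user_id")
    if not (text and user_id):
        return None
    # normalize to the userid / username / text convention
    return {"userid": int(user_id), "username": form.get("username"), "text": text}


print(parse({"token": "secret", "text": "/help", "user_id": "7", "username": "u"}))
print(parse({"token": "wrong", "text": "/help", "user_id": "7"}))  # None
```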
app/modules/synologychat/synologychat.py (new file, 203 lines)
@@ -0,0 +1,203 @@
import json
import re
from typing import Optional, List
from urllib.parse import quote
from threading import Lock

from app.core.config import settings
from app.core.context import MediaInfo, Context
from app.core.metainfo import MetaInfo
from app.log import logger
from app.utils.http import RequestUtils
from app.utils.singleton import Singleton
from app.utils.string import StringUtils

lock = Lock()


class SynologyChat(metaclass=Singleton):
    def __init__(self):
        self._req = RequestUtils(content_type="application/x-www-form-urlencoded")
        self._webhook_url = settings.SYNOLOGYCHAT_WEBHOOK
        self._token = settings.SYNOLOGYCHAT_TOKEN
        if self._webhook_url:
            self._domain = StringUtils.get_base_url(self._webhook_url)

    def check_token(self, token: str) -> bool:
        return True if token == self._token else False

    def send_msg(self, title: str, text: str = "", image: str = "", userid: str = "") -> Optional[bool]:
        """
        发送SynologyChat消息
        :param title: 消息标题
        :param text: 消息内容
        :param image: 消息图片地址
        :param userid: 发送消息的目标用户ID,如有则只发消息给该用户,为空则发给所有可见用户
        """
        if not title and not text:
            logger.error("标题和内容不能同时为空")
            return False
        if not self._webhook_url or not self._token:
            return False
        try:
            # 拼装消息内容
            titles = str(title).split('\n')
            if len(titles) > 1:
                title = titles[0]
                if not text:
                    text = "\n".join(titles[1:])
                else:
                    text = "%s\n%s" % ("\n".join(titles[1:]), text)

            if text:
                caption = "*%s*\n%s" % (title, text.replace("\n\n", "\n"))
            else:
                caption = title
            payload_data = {'text': quote(caption)}
            if image:
                payload_data['file_url'] = quote(image)
            if userid:
                payload_data['user_ids'] = [int(userid)]
            else:
                userids = self.__get_bot_users()
                if not userids:
                    logger.error("SynologyChat机器人没有对任何用户可见")
                    return False
                payload_data['user_ids'] = userids

            return self.__send_request(payload_data)

        except Exception as msg_e:
            logger.error(f"SynologyChat发送消息错误:{str(msg_e)}")
            return False

    def send_meidas_msg(self, medias: List[MediaInfo], userid: str = "", title: str = "") -> Optional[bool]:
        """
        发送列表类消息
        """
        if not medias:
            return False
        if not self._webhook_url or not self._token:
            return False
        try:
            if not title or not isinstance(medias, list):
                return False
            index, image, caption = 1, "", "*%s*" % title
            for media in medias:
                if not image:
                    image = media.get_message_image()
                if media.vote_average:
                    caption = "%s\n%s. <%s|%s>\n_%s,%s_" % (caption,
                                                            index,
                                                            media.detail_link,
                                                            media.title_year,
                                                            f"类型:{media.type.value}",
                                                            f"评分:{media.vote_average}")
                else:
                    caption = "%s\n%s. <%s|%s>\n_%s_" % (caption,
                                                         index,
                                                         media.detail_link,
                                                         media.title_year,
                                                         f"类型:{media.type.value}")
                index += 1

            if userid:
                userids = [int(userid)]
            else:
                userids = self.__get_bot_users()
            payload_data = {
                "text": quote(caption),
                "user_ids": userids
            }
            return self.__send_request(payload_data)

        except Exception as msg_e:
            logger.error(f"SynologyChat发送消息错误:{str(msg_e)}")
            return False

    def send_torrents_msg(self, torrents: List[Context],
                          userid: str = "", title: str = "") -> Optional[bool]:
        """
        发送列表消息
        """
        if not self._webhook_url or not self._token:
            return None

        if not torrents:
            return False

        try:
            index, caption = 1, "*%s*" % title
            for context in torrents:
                torrent = context.torrent_info
                site_name = torrent.site_name
                meta = MetaInfo(torrent.title, torrent.description)
                link = torrent.page_url
                title = f"{meta.season_episode} " \
                        f"{meta.resource_term} " \
                        f"{meta.video_term} " \
                        f"{meta.release_group}"
                title = re.sub(r"\s+", " ", title).strip()
                free = torrent.volume_factor
                seeder = f"{torrent.seeders}↑"
                description = torrent.description
                caption = f"{caption}\n{index}.【{site_name}】<{link}|{title}> " \
                          f"{StringUtils.str_filesize(torrent.size)} {free} {seeder}\n" \
                          f"_{description}_"
                index += 1

            if userid:
                userids = [int(userid)]
            else:
                userids = self.__get_bot_users()

            payload_data = {
                "text": quote(caption),
                "user_ids": userids
            }
            return self.__send_request(payload_data)
        except Exception as msg_e:
            logger.error(f"SynologyChat发送消息错误:{str(msg_e)}")
            return False

    def __get_bot_users(self):
        """
        查询机器人可见的用户列表
        """
        if not self._domain or not self._token:
            return []
        req_url = f"{self._domain}" \
                  f"/webapi/entry.cgi?api=SYNO.Chat.External&method=user_list&version=2&token=" \
                  f"{self._token}"
        ret = self._req.get_res(url=req_url)
        if ret and ret.status_code == 200:
            users = ret.json().get("data", {}).get("users", []) or []
            return [user.get("user_id") for user in users]
        else:
            return []

    def __send_request(self, payload_data):
        """
        发送消息请求
        """
        payload = f"payload={json.dumps(payload_data)}"
        ret = self._req.post_res(url=self._webhook_url, data=payload)
        if ret and ret.status_code == 200:
            result = ret.json()
            if result:
                errno = result.get('error', {}).get('code')
                errmsg = result.get('error', {}).get('errors')
                if not errno:
                    return True
                logger.error(f"SynologyChat返回错误:{errno}-{errmsg}")
                return False
            else:
                logger.error(f"SynologyChat返回:{ret.text}")
                return False
        elif ret is not None:
            logger.error(f"SynologyChat请求失败,错误码:{ret.status_code},错误原因:{ret.reason}")
            return False
        else:
            logger.error(f"SynologyChat请求失败,未获取到返回信息")
            return False
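`__send_request` above posts the chatbot message as a form field named `payload` whose value is a JSON string, which is the format the code sends to the Synology Chat incoming webhook. A sketch of just the payload construction, with no network call (the caption text and user id are invented):

```python
import json
from urllib.parse import quote


def build_payload(caption: str, user_ids: list) -> str:
    # mirror the diff: percent-encode the text, then wrap the JSON
    # object in a form-encoded "payload=" field
    payload_data = {"text": quote(caption), "user_ids": user_ids}
    return f"payload={json.dumps(payload_data)}"


body = build_payload("*Test*\nhello", [7])
print(body)
```

The body is then sent with `Content-Type: application/x-www-form-urlencoded`, matching the `RequestUtils` instance created in `__init__`.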
@@ -52,7 +52,7 @@ class Telegram(metaclass=Singleton):
            定义线程函数来运行 infinity_polling
            """
            try:
-                _bot.infinity_polling(long_polling_timeout=10)
+                _bot.infinity_polling(long_polling_timeout=30, logger_level=None)
            except Exception as err:
                logger.error(f"Telegram消息接收服务异常:{err}")
@@ -131,7 +131,7 @@ class TransmissionModule(_ModuleBase):
                     title=torrent.name,
                     path=Path(torrent.download_dir) / torrent.name,
                     hash=torrent.hashString,
-                    tags=torrent.labels
+                    tags=",".join(torrent.labels or [])
                 ))
         elif status == TorrentStatus.DOWNLOADING:
             # 获取正在下载的任务
@@ -14,6 +14,7 @@ from ruamel.yaml import CommentedMap
 from app import schemas
 from app.core.config import settings
 from app.core.event import EventManager, eventmanager, Event
+from app.db.models.site import Site
 from app.helper.browser import PlaywrightHelper
 from app.helper.cloudflare import under_challenge
 from app.helper.module import ModuleHelper
@@ -85,11 +86,18 @@ class AutoSignIn(_PluginBase):
             self._onlyonce = config.get("onlyonce")
             self._notify = config.get("notify")
             self._queue_cnt = config.get("queue_cnt") or 5
-            self._sign_sites = config.get("sign_sites")
-            self._login_sites = config.get("login_sites")
+            self._sign_sites = config.get("sign_sites") or []
+            self._login_sites = config.get("login_sites") or []
             self._retry_keyword = config.get("retry_keyword")
             self._clean = config.get("clean")

+            # 过滤掉已删除的站点
+            all_sites = [site for site in self.sites.get_indexers() if not site.get("public")]
+            self._sign_sites = [site.get("id") for site in all_sites if site.get("id") in self._sign_sites]
+            self._login_sites = [site.get("id") for site in all_sites if site.get("id") in self._login_sites]
+            # 保存配置
+            self.__update_config()

         # 加载模块
         if self._enabled or self._onlyonce:
@@ -237,8 +245,8 @@ class AutoSignIn(_PluginBase):
         拼装插件配置页面,需要返回两块数据:1、页面配置;2、数据结构
         """
         # 站点的可选项
-        site_options = [{"title": site.get("name"), "value": site.get("id")}
-                        for site in self.sites.get_indexers()]
+        site_options = [{"title": site.name, "value": site.id}
+                        for site in Site.list_order_by_pri(self.db)]
         return [
             {
                 'component': 'VForm',
@@ -592,11 +600,6 @@ class AutoSignIn(_PluginBase):
         # 今日没数据
         if not today_history or self._clean:
             logger.info(f"今日 {today} 未{type},开始{type}已选站点")
-            # 过滤删除的站点
-            if type == "签到":
-                self._sign_sites = [site.get("id") for site in do_sites if site]
-            if type == "登录":
-                self._login_sites = [site.get("id") for site in do_sites if site]
             if self._clean:
                 # 关闭开关
                 self._clean = False
@@ -946,30 +949,25 @@ class AutoSignIn(_PluginBase):
         site_id = event.event_data.get("site_id")
         config = self.get_config()
         if config:
-            sign_sites = config.get("sign_sites")
-            if sign_sites:
-                if isinstance(sign_sites, str):
-                    sign_sites = [sign_sites]
-
-                # 删除对应站点
-                if site_id:
-                    sign_sites = [site for site in sign_sites if int(site) != int(site_id)]
-                else:
-                    # 清空
-                    sign_sites = []
-
-                # 若无站点,则停止
-                if len(sign_sites) == 0:
-                    self._enabled = False
-
-                # 保存配置
-                self.update_config(
-                    {
-                        "enabled": self._enabled,
-                        "notify": self._notify,
-                        "cron": self._cron,
-                        "onlyonce": self._onlyonce,
-                        "queue_cnt": self._queue_cnt,
-                        "sign_sites": sign_sites
-                    }
-                )
+            self._sign_sites = self.__remove_site_id(config.get("sign_sites") or [], site_id)
+            self._login_sites = self.__remove_site_id(config.get("login_sites") or [], site_id)
+            # 保存配置
+            self.__update_config()
+
+    def __remove_site_id(self, do_sites, site_id):
+        if do_sites:
+            if isinstance(do_sites, str):
+                do_sites = [do_sites]
+
+            # 删除对应站点
+            if site_id:
+                do_sites = [site for site in do_sites if int(site) != int(site_id)]
+            else:
+                # 清空
+                do_sites = []
+
+            # 若无站点,则停止
+            if len(do_sites) == 0:
+                self._enabled = False
+
+        return do_sites
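The AutoSignIn refactor above extracts one `__remove_site_id` helper so the sign-in and login site lists share the same removal logic instead of duplicating it. The helper's behavior in isolation (simplified: no plugin state, so the "disable when empty" side effect is left out):

```python
from typing import List, Optional, Union


def remove_site_id(do_sites: Union[str, List], site_id: Optional[int]) -> List:
    """Drop one site id from a list, or clear the list when no id is given."""
    if do_sites:
        # a single id may arrive as a bare string; normalize to a list
        if isinstance(do_sites, str):
            do_sites = [do_sites]
        if site_id:
            # remove only the deleted site
            do_sites = [site for site in do_sites if int(site) != int(site_id)]
        else:
            # no id means "clear everything"
            do_sites = []
    return do_sites


print(remove_site_id(["1", "2", "3"], 2))  # ['1', '3']
print(remove_site_id(["1", "2"], None))    # []
```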
@@ -1,12 +1,17 @@
|
||||
import os
|
||||
import subprocess
|
||||
import time
|
||||
import zipfile
|
||||
from datetime import datetime, timedelta
|
||||
from pathlib import Path
|
||||
from typing import List, Tuple, Dict, Any
|
||||
|
||||
import pytz
|
||||
import requests
|
||||
from apscheduler.schedulers.background import BackgroundScheduler
|
||||
from apscheduler.triggers.cron import CronTrigger
|
||||
from python_hosts import Hosts, HostsEntry
|
||||
from requests import Response
|
||||
|
||||
from app.core.config import settings
|
||||
from app.log import logger
|
||||
@@ -79,20 +84,25 @@ class CloudflareSpeedTest(_PluginBase):
|
||||
if self.get_state() or self._onlyonce:
|
||||
self._scheduler = BackgroundScheduler(timezone=settings.TZ)
|
||||
|
||||
if self.get_state() and self._cron:
|
||||
logger.info(f"Cloudflare CDN优选服务启动,周期:{self._cron}")
|
||||
self._scheduler.add_job(func=self.__cloudflareSpeedTest,
|
||||
trigger=CronTrigger.from_crontab(self._cron),
|
||||
name="Cloudflare优选")
|
||||
try:
|
||||
if self.get_state() and self._cron:
|
||||
logger.info(f"Cloudflare CDN优选服务启动,周期:{self._cron}")
|
||||
self._scheduler.add_job(func=self.__cloudflareSpeedTest,
|
||||
trigger=CronTrigger.from_crontab(self._cron),
|
||||
name="Cloudflare优选")
|
||||
|
||||
if self._onlyonce:
|
||||
logger.info(f"Cloudflare CDN优选服务启动,立即运行一次")
|
||||
self._scheduler.add_job(func=self.__cloudflareSpeedTest, trigger='date',
|
||||
run_date=datetime.now(tz=pytz.timezone(settings.TZ)) + timedelta(seconds=3),
|
||||
name="Cloudflare优选")
|
||||
# 关闭一次性开关
|
||||
self._onlyonce = False
|
||||
self.__update_config()
|
||||
if self._onlyonce:
|
||||
logger.info(f"Cloudflare CDN优选服务启动,立即运行一次")
|
||||
self._scheduler.add_job(func=self.__cloudflareSpeedTest, trigger='date',
|
||||
run_date=datetime.now(tz=pytz.timezone(settings.TZ)) + timedelta(seconds=3),
|
||||
name="Cloudflare优选")
|
||||
# 关闭一次性开关
|
||||
self._onlyonce = False
|
||||
self.__update_config()
|
||||
except Exception as err:
|
||||
logger.error(f"Cloudflare CDN优选服务出错:{str(err)}")
|
||||
self.systemmessage.put(f"Cloudflare CDN优选服务出错:{str(err)}")
|
||||
return
|
||||
|
||||
# 启动任务
|
||||
if self._scheduler.get_jobs():
|
||||
@@ -142,13 +152,35 @@ class CloudflareSpeedTest(_PluginBase):
|
||||
if err_flag:
|
||||
logger.info("正在进行CLoudflare CDN优选,请耐心等待")
|
||||
# 执行优选命令,-dd不测速
|
||||
cf_command = f'cd {self._cf_path} && chmod a+x {self._binary_name} && ./{self._binary_name} {self._additional_args} -o {self._result_file}' + (
|
||||
f' -f {self._cf_ipv4}' if self._ipv4 else '') + (f' -f {self._cf_ipv6}' if self._ipv6 else '')
|
||||
if SystemUtils.is_windows():
|
||||
cf_command = f'cd \"{self._cf_path}\" && CloudflareST {self._additional_args} -o \"{self._result_file}\"' + (
|
||||
f' -f \"{self._cf_ipv4}\"' if self._ipv4 else '') + (f' -f \"{self._cf_ipv6}\"' if self._ipv6 else '')
|
||||
else:
|
||||
cf_command = f'cd {self._cf_path} && chmod a+x {self._binary_name} && ./{self._binary_name} {self._additional_args} -o {self._result_file}' + (
|
||||
f' -f {self._cf_ipv4}' if self._ipv4 else '') + (f' -f {self._cf_ipv6}' if self._ipv6 else '')
|
||||
logger.info(f'正在执行优选命令 {cf_command}')
|
||||
os.system(cf_command)
|
||||
if SystemUtils.is_windows():
|
||||
process = subprocess.Popen(cf_command, shell=True)
|
||||
# 执行命令后无法退出 采用异步和设置超时方案
|
||||
# 设置超时时间为120秒
|
||||
if cf_command.__contains__("-dd"):
|
||||
time.sleep(120)
|
||||
else:
|
||||
time.sleep(600)
|
||||
# 如果没有在120秒内完成任务,那么杀死该进程
|
||||
if process.poll() is None:
|
||||
os.system('taskkill /F /IM CloudflareST.exe')
|
||||
else:
|
||||
os.system(cf_command)
|
||||
|
||||
# 获取优选后最优ip
|
||||
best_ip = SystemUtils.execute("sed -n '2,1p' " + self._result_file + " | awk -F, '{print $1}'")
|
||||
if SystemUtils.is_windows():
|
||||
powershell_command = f"powershell.exe -Command \"Get-Content \'{self._result_file}\' | Select-Object -Skip 1 -First 1 | Write-Output\""
|
||||
logger.info(f'正在执行powershell命令 {powershell_command}')
|
||||
best_ip = SystemUtils.execute(powershell_command)
|
||||
best_ip = best_ip.split(',')[0]
|
||||
else:
|
||||
best_ip = SystemUtils.execute("sed -n '2,1p' " + self._result_file + " | awk -F, '{print $1}'")
|
||||
logger.info(f"\n获取到最优ip==>[{best_ip}]")
|
||||
|
||||
# 替换自定义Hosts插件数据库hosts
|
||||
@@ -246,7 +278,10 @@ class CloudflareSpeedTest(_PluginBase):
|
||||
# 是否重新安装
|
||||
if self._re_install:
|
||||
install_flag = True
|
||||
os.system(f'rm -rf {self._cf_path}')
|
||||
if SystemUtils.is_windows():
|
||||
os.system(f'rd /s /q \"{self._cf_path}\"')
|
||||
else:
|
||||
os.system(f'rm -rf {self._cf_path}')
|
||||
logger.info(f'删除CloudflareSpeedTest目录 {self._cf_path},开始重新安装')
|
||||
|
||||
# 判断目录是否存在
|
||||
@@ -277,7 +312,8 @@ class CloudflareSpeedTest(_PluginBase):
|
||||
|
||||
# 重装后数据库有版本数据,但是本地没有则重装
|
||||
if not install_flag and release_version == self._version and not Path(
|
||||
f'{self._cf_path}/{self._binary_name}').exists():
|
||||
f'{self._cf_path}/{self._binary_name}').exists() and not Path(
|
||||
f'{self._cf_path}/CloudflareST.exe').exists():
|
||||
logger.warn(f"未检测到CloudflareSpeedTest本地版本,重新安装")
|
||||
install_flag = True
|
||||
|
||||
@@ -287,9 +323,11 @@ class CloudflareSpeedTest(_PluginBase):
|
||||
|
||||
# 检查环境、安装
|
||||
if SystemUtils.is_windows():
|
||||
# todo
|
||||
logger.error(f"CloudflareSpeedTest暂不支持windows平台")
|
||||
return False, None
|
||||
# windows
|
||||
cf_file_name = 'CloudflareST_windows_amd64.zip'
|
||||
download_url = f'{self._release_prefix}/{release_version}/{cf_file_name}'
|
||||
return self.__os_install(download_url, cf_file_name, release_version,
|
||||
f"ditto -V -x -k --sequesterRsrc {self._cf_path}/{cf_file_name} {self._cf_path}")
|
||||
elif SystemUtils.is_macos():
|
||||
# mac
|
||||
uname = SystemUtils.execute('uname -m')
|
||||
@@ -317,14 +355,31 @@ class CloudflareSpeedTest(_PluginBase):
|
||||
proxies = settings.PROXY
|
||||
https_proxy = proxies.get("https") if proxies and proxies.get("https") else None
|
||||
if https_proxy:
|
||||
os.system(
|
||||
f'wget -P {self._cf_path} --no-check-certificate -e use_proxy=yes -e https_proxy={https_proxy} {download_url}')
|
||||
if SystemUtils.is_windows():
|
||||
self.__get_windows_cloudflarest(download_url, proxies)
|
||||
else:
|
||||
os.system(
|
||||
f'wget -P {self._cf_path} --no-check-certificate -e use_proxy=yes -e https_proxy={https_proxy} {download_url}')
|
||||
else:
|
||||
os.system(f'wget -P {self._cf_path} https://ghproxy.com/{download_url}')
|
||||
if SystemUtils.is_windows():
|
||||
self.__get_windows_cloudflarest(download_url, proxies)
|
||||
else:
|
||||
os.system(f'wget -P {self._cf_path} https://ghproxy.com/{download_url}')
|
||||
|
||||
# 判断是否下载好安装包
|
||||
if Path(f'{self._cf_path}/{cf_file_name}').exists():
|
||||
try:
|
||||
if SystemUtils.is_windows():
|
||||
with zipfile.ZipFile(f'{self._cf_path}/{cf_file_name}', 'r') as zip_ref:
|
||||
# 解压ZIP文件中的所有文件到指定目录
|
||||
zip_ref.extractall(self._cf_path)
|
||||
if Path(f'{self._cf_path}\\CloudflareST.exe').exists():
logger.info(f"CloudflareSpeedTest安装成功,当前版本:{release_version}")
return True, release_version
else:
logger.error(f"CloudflareSpeedTest安装失败,请检查")
os.system(f'rd /s /q \"{self._cf_path}\"')
return False, None
# 解压
os.system(f'{unzip_command}')
# 删除压缩包

@@ -338,23 +393,42 @@ class CloudflareSpeedTest(_PluginBase):
return False, None
except Exception as err:
# 如果升级失败但是有可执行文件CloudflareST,则可继续运行,反之停止
if Path(f'{self._cf_path}/{self._binary_name}').exists():
if Path(f'{self._cf_path}/{self._binary_name}').exists() or \
Path(f'{self._cf_path}\\CloudflareST.exe').exists():
logger.error(f"CloudflareSpeedTest安装失败:{str(err)},继续使用现版本运行")
return True, None
else:
logger.error(f"CloudflareSpeedTest安装失败:{str(err)},无可用版本,停止运行")
os.removedirs(self._cf_path)
if SystemUtils.is_windows():
os.system(f'rd /s /q \"{self._cf_path}\"')
else:
os.removedirs(self._cf_path)
return False, None
else:
# 如果升级失败但是有可执行文件CloudflareST,则可继续运行,反之停止
if Path(f'{self._cf_path}/{self._binary_name}').exists():
if Path(f'{self._cf_path}/{self._binary_name}').exists() or \
Path(f'{self._cf_path}\\CloudflareST.exe').exists():
logger.warn(f"CloudflareSpeedTest安装失败,存在可执行版本,继续运行")
return True, None
else:
logger.error(f"CloudflareSpeedTest安装失败,无可用版本,停止运行")
os.removedirs(self._cf_path)
if SystemUtils.is_windows():
os.system(f'rd /s /q \"{self._cf_path}\"')
else:
os.removedirs(self._cf_path)
return False, None

def __get_windows_cloudflarest(self, download_url, proxies):
response = Response()
try:
response = requests.get(download_url, stream=True, proxies=proxies if proxies else None)
except requests.exceptions.RequestException as e:
logger.error(f"CloudflareSpeedTest下载失败:{str(e)}")
if response.status_code == 200:
with open(f'{self._cf_path}\\CloudflareST_windows_amd64.zip', 'wb') as file:
for chunk in response.iter_content(chunk_size=8192):
file.write(chunk)

@staticmethod
def __get_release_version():
"""
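The hunks above replace `os.removedirs` with a Windows branch that shells out to `rd /s /q`, because `os.removedirs` only removes empty directories. A portable sketch (hypothetical helper, not the plugin's code) gets the same effect on every OS with `shutil.rmtree`:

```python
import shutil
import tempfile
from pathlib import Path

def remove_install_dir(path: Path) -> bool:
    """Remove an install directory and all its contents on any OS.

    shutil.rmtree handles non-empty trees, unlike os.removedirs,
    and avoids shelling out to `rd /s /q` on Windows.
    """
    if not path.exists():
        return False
    shutil.rmtree(path, ignore_errors=True)
    return not path.exists()

# Demo against a throwaway directory
demo = Path(tempfile.mkdtemp()) / "CloudflareST"
demo.mkdir()
(demo / "CloudflareST.exe").write_bytes(b"stub")
print(remove_install_dir(demo))  # True
```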
@@ -292,7 +292,7 @@ class DirMonitor(_PluginBase):
if not transferinfo:
logger.error("文件转移模块运行失败")
return
if not transferinfo.target_path:
if not transferinfo.success:
# 转移失败
logger.warn(f"{file_path.name} 入库失败:{transferinfo.message}")
# 新增转移失败历史记录
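The DirMonitor change above switches the failure test from a missing `target_path` to an explicit `success` flag on the transfer result. A minimal sketch of that result-object pattern, with hypothetical names:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TransferResult:
    # An explicit success flag is more reliable than inferring
    # failure from a missing target_path.
    success: bool = True
    message: Optional[str] = None
    target_path: Optional[str] = None

def handle(result: TransferResult) -> str:
    if not result.success:
        return f"failed: {result.message}"
    return f"ok: {result.target_path}"

print(handle(TransferResult(success=False, message="no space left")))
```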
@@ -12,6 +12,9 @@ from ruamel.yaml import CommentedMap

from app.core.config import settings
from app.helper.sites import SitesHelper

from app.core.event import eventmanager
from app.db.models.site import Site
from app.helper.torrent import TorrentHelper
from app.log import logger
from app.modules.qbittorrent import Qbittorrent
@@ -19,6 +22,7 @@ from app.modules.transmission import Transmission
from app.plugins import _PluginBase
from app.plugins.iyuuautoseed.iyuu_helper import IyuuHelper
from app.schemas import NotificationType
from app.schemas.types import EventType
from app.utils.http import RequestUtils
from app.utils.string import StringUtils

@@ -100,7 +104,7 @@ class IYUUAutoSeed(_PluginBase):
self._cron = config.get("cron")
self._token = config.get("token")
self._downloaders = config.get("downloaders")
self._sites = config.get("sites")
self._sites = config.get("sites") or []
self._notify = config.get("notify")
self._nolabels = config.get("nolabels")
self._nopaths = config.get("nopaths")
@@ -109,6 +113,11 @@ class IYUUAutoSeed(_PluginBase):
self._error_caches = [] if self._clearcache else config.get("error_caches") or []
self._success_caches = [] if self._clearcache else config.get("success_caches") or []

# 过滤掉已删除的站点
self._sites = [site.get("id") for site in self.sites.get_indexers() if
not site.get("public") and site.get("id") in self._sites]
self.__update_config()

# 停止现有任务
self.stop_service()

@@ -166,8 +175,8 @@ class IYUUAutoSeed(_PluginBase):
拼装插件配置页面,需要返回两块数据:1、页面配置;2、数据结构
"""
# 站点的可选项
site_options = [{"title": site.get("name"), "value": site.get("id")}
for site in self.sites.get_indexers()]
site_options = [{"title": site.name, "value": site.id}
for site in Site.list_order_by_pri(self.db)]
return [
{
'component': 'VForm',
@@ -425,10 +434,6 @@ class IYUUAutoSeed(_PluginBase):
return
logger.info("开始辅种任务 ...")

# 排除已删除站点
self._sites = [site.get("id") for site in self.sites.get_indexers() if
site.get("id") in self._sites]

# 计数器初始化
self.total = 0
self.realtotal = 0
@@ -979,3 +984,31 @@ class IYUUAutoSeed(_PluginBase):
self._scheduler = None
except Exception as e:
print(str(e))

@eventmanager.register(EventType.SiteDeleted)
def site_deleted(self, event):
"""
删除对应站点选中
"""
site_id = event.event_data.get("site_id")
config = self.get_config()
if config:
sites = config.get("sites")
if sites:
if isinstance(sites, str):
sites = [sites]

# 删除对应站点
if site_id:
sites = [site for site in sites if int(site) != int(site_id)]
else:
# 清空
sites = []

# 若无站点,则停止
if len(sites) == 0:
self._enabled = False

self._sites = sites
# 保存配置
self.__update_config()
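The `site_deleted` handler above normalizes the saved `sites` value to a list, then filters out the deleted id (or clears the list when no id is given). Its core logic can be sketched as a standalone function (simplified; the plugin also persists the result and disables itself when the list empties):

```python
from typing import List, Optional, Union

def drop_site(sites: Union[str, List[str]], site_id: Optional[int]) -> List[str]:
    """Normalize to a list, then drop the deleted site id;
    a missing id clears the whole selection."""
    if isinstance(sites, str):
        sites = [sites]
    if site_id is None:
        return []
    return [s for s in sites if int(s) != int(site_id)]

print(drop_site(["1", "2", "3"], 2))  # ['1', '3']
print(drop_site("5", 5))              # []
```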
@@ -2,7 +2,6 @@ import datetime
import json
import os
import re
import shutil
import time
from pathlib import Path
from typing import List, Tuple, Dict, Any, Optional
@@ -10,11 +9,10 @@ from typing import List, Tuple, Dict, Any, Optional
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.cron import CronTrigger

from app.chain.transfer import TransferChain
from app.core.config import settings
from app.core.event import eventmanager, Event
from app.db.downloadhistory_oper import DownloadHistoryOper
from app.db.models.transferhistory import TransferHistory
from app.db.transferhistory_oper import TransferHistoryOper
from app.log import logger
from app.modules.emby import Emby
from app.modules.jellyfin import Jellyfin
@@ -23,7 +21,6 @@ from app.modules.themoviedb.tmdbv3api import Episode
from app.modules.transmission import Transmission
from app.plugins import _PluginBase
from app.schemas.types import NotificationType, EventType, MediaType
from app.utils.path_utils import PathUtils


class MediaSyncDel(_PluginBase):
@@ -58,14 +55,16 @@ class MediaSyncDel(_PluginBase):
_del_source = False
_exclude_path = None
_library_path = None
_transferchain = None
_transferhis = None
_downloadhis = None
qb = None
tr = None

def init_plugin(self, config: dict = None):
self._transferhis = TransferHistoryOper(self.db)
self._downloadhis = DownloadHistoryOper(self.db)
self._transferchain = TransferChain(self.db)
self._transferhis = self._transferchain.transferhis
self._downloadhis = self._transferchain.downloadhis
self.episode = Episode()
self.qb = Qbittorrent()
self.tr = Transmission()
@@ -534,7 +533,7 @@ class MediaSyncDel(_PluginBase):
episode_num=episode_num)

def __sync_del(self, media_type: str, media_name: str, media_path: str,
tmdb_id: int, season_num: int, episode_num: int):
tmdb_id: int, season_num: str, episode_num: str):
"""
执行删除逻辑
"""
@@ -587,10 +586,7 @@ class MediaSyncDel(_PluginBase):
if self._del_source:
# 1、直接删除源文件
if transferhis.src and Path(transferhis.src).suffix in settings.RMT_MEDIAEXT:
source_name = os.path.basename(transferhis.src)
source_path = str(transferhis.src).replace(source_name, "")
self.delete_media_file(filedir=source_path,
filename=source_name)
self._transferchain.delete_files(Path(transferhis.src))
if transferhis.download_hash:
try:
# 2、判断种子是否被删除完
@@ -654,16 +650,20 @@ class MediaSyncDel(_PluginBase):
self.save_data("history", history)

def __get_transfer_his(self, media_type: str, media_name: str, media_path: str,
tmdb_id: int, season_num: int, episode_num: int):
tmdb_id: int, season_num: str, episode_num: str):
"""
查询转移记录
"""
# 季数
if season_num:
if season_num and season_num.isdigit():
season_num = str(season_num).rjust(2, '0')
else:
season_num = None
# 集数
if episode_num:
if episode_num and episode_num.isdigit():
episode_num = str(episode_num).rjust(2, '0')
else:
episode_num = None

# 类型
mtype = MediaType.MOVIE if media_type in ["Movie", "MOV"] else MediaType.TV
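The `__get_transfer_his` change above guards the season/episode values with `isdigit()` before zero-padding, so non-numeric input no longer produces a bogus key. The padding rule in isolation (hypothetical helper name):

```python
from typing import Optional

def pad_number(num: Optional[str]) -> Optional[str]:
    """Zero-pad a season/episode string to two digits, mirroring the
    isdigit()/rjust(2, '0') guard in the diff; non-numeric input yields None."""
    if num and num.isdigit():
        return str(num).rjust(2, '0')
    return None

print(pad_number("3"))    # 03
print(pad_number("12"))   # 12
print(pad_number("All"))  # None
```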
@@ -825,10 +825,7 @@ class MediaSyncDel(_PluginBase):
if self._del_source:
# 1、直接删除源文件
if transferhis.src and Path(transferhis.src).suffix in settings.RMT_MEDIAEXT:
source_name = os.path.basename(transferhis.src)
source_path = str(transferhis.src).replace(source_name, "")
self.delete_media_file(filedir=source_path,
filename=source_name)
self._transferchain.delete_files(Path(transferhis.src))
if transferhis.download_hash:
try:
# 2、判断种子是否被删除完
@@ -1187,42 +1184,6 @@ class MediaSyncDel(_PluginBase):

return del_medias

@staticmethod
def delete_media_file(filedir: str, filename: str):
"""
删除媒体文件,空目录也会被删除
"""
filedir = os.path.normpath(filedir).replace("\\", "/")
file = os.path.join(filedir, filename)
try:
if not os.path.exists(file):
return False, f"{file} 不存在"
os.remove(file)
nfoname = f"{os.path.splitext(filename)[0]}.nfo"
nfofile = os.path.join(filedir, nfoname)
if os.path.exists(nfofile):
os.remove(nfofile)
# 检查空目录并删除
if re.findall(r"^S\d{2}|^Season", os.path.basename(filedir), re.I):
# 当前是季文件夹,判断并删除
seaon_dir = filedir
if seaon_dir.count('/') > 1 and not PathUtils.get_dir_files(seaon_dir, exts=settings.RMT_MEDIAEXT):
shutil.rmtree(seaon_dir)
# 媒体文件夹
media_dir = os.path.dirname(seaon_dir)
else:
media_dir = filedir
# 检查并删除媒体文件夹,非根目录且目录大于二级,且没有媒体文件时才会删除
if media_dir != '/' \
and media_dir.count('/') > 1 \
and not re.search(r'[a-zA-Z]:/$', media_dir) \
and not PathUtils.get_dir_files(media_dir, exts=settings.RMT_MEDIAEXT):
shutil.rmtree(media_dir)
return True, f"{file} 删除成功"
except Exception as e:
logger.error("删除源文件失败:%s" % str(e))
return True, f"{file} 删除失败"

def get_state(self):
return self._enabled
@@ -549,7 +549,7 @@ class RssSubscribe(_PluginBase):
logger.error(f"未获取到RSS数据:{url}")
return
# 过滤规则
filter_rule = self.systemconfig.get(SystemConfigKey.FilterRules)
filter_rule = self.systemconfig.get(SystemConfigKey.SubscribeFilterRules)
# 解析数据
for result in results:
try:
@@ -593,7 +593,8 @@ class RssSubscribe(_PluginBase):
if self._filter:
result = self.chain.filter_torrents(
rule_string=filter_rule,
torrent_list=[torrentinfo]
torrent_list=[torrentinfo],
mediainfo=mediainfo
)
if not result:
logger.info(f"{title} {description} 不匹配过滤规则")
@@ -15,6 +15,7 @@ from app import schemas
from app.core.config import settings
from app.core.event import Event
from app.core.event import eventmanager
from app.db.models.site import Site
from app.helper.browser import PlaywrightHelper
from app.helper.module import ModuleHelper
from app.helper.sites import SitesHelper
@@ -84,6 +85,11 @@ class SiteStatistic(_PluginBase):
self._statistic_type = config.get("statistic_type") or "all"
self._statistic_sites = config.get("statistic_sites") or []

# 过滤掉已删除的站点
self._statistic_sites = [site.get("id") for site in self.sites.get_indexers() if
not site.get("public") and site.get("id") in self._statistic_sites]
self.__update_config()

if self._enabled or self._onlyonce:
# 加载模块
self._site_schema = ModuleHelper.load('app.plugins.sitestatistic.siteuserinfo',
@@ -177,8 +183,8 @@ class SiteStatistic(_PluginBase):
拼装插件配置页面,需要返回两块数据:1、页面配置;2、数据结构
"""
# 站点的可选项
site_options = [{"title": site.get("name"), "value": site.get("id")}
for site in self.sites.get_indexers()]
site_options = [{"title": site.name, "value": site.id}
for site in Site.list_order_by_pri(self.db)]
return [
{
'component': 'VForm',
@@ -1047,10 +1053,6 @@ class SiteStatistic(_PluginBase):
else:
refresh_sites = [site for site in self.sites.get_indexers() if
site.get("id") in self._statistic_sites]

# 过滤掉已删除的站点
self._statistic_sites = [site.get("id") for site in refresh_sites if site]
self.__update_config()
if not refresh_sites:
return
@@ -731,20 +731,34 @@ class TorrentRemover(_PluginBase):
remove_torrents.append(item)
# 处理辅种
if self._samedata and remove_torrents:
remove_ids = [t.get("id") for t in remove_torrents]
remove_torrents_plus = []
for remove_torrent in remove_torrents:
name = remove_torrent.get("name")
size = remove_torrent.get("size")
for torrent in torrents:
if downloader == "qbittorrent":
item_plus = self.__get_qb_torrent(torrent)
plus_id = torrent.hash
plus_name = torrent.name
plus_size = torrent.size
plus_site = StringUtils.get_url_sld(torrent.tracker)
else:
item_plus = self.__get_tr_torrent(torrent)
if not item_plus:
continue
if item_plus.get("name") == name \
and item_plus.get("size") == size \
and item_plus.get("id") not in [t.get("id") for t in remove_torrents]:
remove_torrents_plus.append(item_plus)
remove_torrents.extend(remove_torrents_plus)
plus_id = torrent.hashString
plus_name = torrent.name
plus_size = torrent.total_size
plus_site = torrent.trackers[0].get("sitename") if torrent.trackers else ""
# 比对名称和大小
if plus_name == name \
and plus_size == size \
and plus_id not in remove_ids:
remove_torrents_plus.append(
{
"id": plus_id,
"name": plus_name,
"site": plus_site,
"size": plus_size
}
)
if remove_torrents_plus:
remove_torrents.extend(remove_torrents_plus)
return remove_torrents
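The rewritten TorrentRemover loop above pairs cross-seeded torrents by identical name and size, skipping ids already slated for removal (the precomputed `remove_ids` avoids rebuilding the id list on every comparison). A dict-based sketch of the matching step, with sample data:

```python
from typing import Dict, List

def find_cross_seeds(remove_torrents: List[Dict], all_torrents: List[Dict]) -> List[Dict]:
    """Collect torrents that share name+size with one slated for removal
    but are not themselves in the removal list yet."""
    remove_ids = {t["id"] for t in remove_torrents}
    keys = {(t["name"], t["size"]) for t in remove_torrents}
    return [t for t in all_torrents
            if (t["name"], t["size"]) in keys and t["id"] not in remove_ids]

removals = [{"id": "a1", "name": "Movie.2023", "size": 700}]
pool = [
    {"id": "a1", "name": "Movie.2023", "size": 700},
    {"id": "b2", "name": "Movie.2023", "size": 700},   # cross-seed of a1
    {"id": "c3", "name": "Other.2022", "size": 700},
]
print(find_cross_seeds(removals, pool))  # only the b2 entry
```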
@@ -51,3 +51,5 @@ class NotificationSwitch(BaseModel):
telegram: Optional[bool] = False
# Slack开关
slack: Optional[bool] = False
# SynologyChat开关
synologychat: Optional[bool] = False

@@ -35,6 +35,8 @@ class TransferInfo(BaseModel):
"""
文件转移结果信息
"""
# 是否成功标志
success: bool = True
# 转移路径
path: Optional[Path] = None
# 转移后路径

@@ -48,7 +48,7 @@ class SystemConfigKey(Enum):
UserInstalledPlugins = "UserInstalledPlugins"
# 搜索结果
SearchResults = "SearchResults"
# 索引站点范围
# 搜索站点范围
IndexerSites = "IndexerSites"
# 订阅站点范围
RssSites = "RssSites"
@@ -60,10 +60,14 @@ class SystemConfigKey(Enum):
CustomReleaseGroups = "CustomReleaseGroups"
# 自定义识别词
CustomIdentifiers = "CustomIdentifiers"
# 过滤规则
FilterRules = "FilterRules"
# 搜索优先级规则
SearchFilterRules = "SearchFilterRules"
# 订阅优先级规则
SubscribeFilterRules = "SubscribeFilterRules"
# 洗版规则
FilterRules2 = "FilterRules2"
BestVersionFilterRules = "BestVersionFilterRules"
# 默认包含与排除规则
DefaultIncludeExcludeFilter = "DefaultIncludeExcludeFilter"
# 转移屏蔽词
TransferExcludeWords = "TransferExcludeWords"

@@ -105,3 +109,4 @@ class MessageChannel(Enum):
Wechat = "微信"
Telegram = "Telegram"
Slack = "Slack"
SynologyChat = "SynologyChat"
@@ -172,39 +172,3 @@ class RequestUtils:
cookiesList.append(cookies)
return cookiesList
return cookie_dict


class WebUtils:
@staticmethod
def get_location(ip: str):
"""
https://api.mir6.com/api/ip
{
"code": 200,
"msg": "success",
"data": {
"ip": "240e:97c:2f:1::5c",
"dec": "47925092370311863177116789888333643868",
"country": "中国",
"countryCode": "CN",
"province": "广东省",
"city": "广州市",
"districts": "",
"idc": "",
"isp": "中国电信",
"net": "数据中心",
"zipcode": "510000",
"areacode": "020",
"protocol": "IPv6",
"location": "中国[CN] 广东省 广州市",
"myip": "125.89.7.89",
"time": "2023-09-01 17:28:23"
}
}
"""
try:
r = RequestUtils().get_res(f"https://api.mir6.com/api/ip?ip={ip}&type=json")
if r:
return r.json().get("data", {}).get("location") or ''
except Exception as err:
return str(err)
@@ -106,7 +106,7 @@ class SystemUtils:

if directory.is_file():
return [directory]

if not min_filesize:
min_filesize = 0

@@ -122,6 +122,36 @@ class SystemUtils:

return files

@staticmethod
def exits_files(directory: Path, extensions: list, min_filesize: int = 0) -> bool:
"""
判断目录下是否存在指定扩展名的文件
:return True存在 False不存在
"""

if not min_filesize:
min_filesize = 0

if not directory.exists():
return False

if directory.is_file():
return True

if not min_filesize:
min_filesize = 0

pattern = r".*(" + "|".join(extensions) + ")$"

# 遍历目录及子目录
for path in directory.rglob('**/*'):
if path.is_file() \
and re.match(pattern, path.name, re.IGNORECASE) \
and path.stat().st_size >= min_filesize * 1024 * 1024:
return True

return False
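`exits_files` above joins the extension list into one regex and walks the tree with `rglob`, returning on the first match instead of collecting every file. The same traversal as a standalone function, exercised against a throwaway directory (hypothetical name; the duplicated `min_filesize` guard from the diff is dropped):

```python
import re
import tempfile
from pathlib import Path

def has_media_file(directory: Path, extensions: list, min_mb: int = 0) -> bool:
    """Same traversal as exits_files: one regex over the suffix list,
    rglob over subdirectories, size threshold in MiB."""
    if not directory.exists():
        return False
    if directory.is_file():
        return True
    pattern = r".*(" + "|".join(extensions) + ")$"
    for path in directory.rglob('**/*'):
        if path.is_file() \
                and re.match(pattern, path.name, re.IGNORECASE) \
                and path.stat().st_size >= min_mb * 1024 * 1024:
            return True
    return False

root = Path(tempfile.mkdtemp())
(root / "sub").mkdir()
(root / "sub" / "a.mkv").write_bytes(b"x")
print(has_media_file(root, [".mkv", ".mp4"]))  # True
print(has_media_file(root, [".iso"]))          # False
```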
@staticmethod
def list_sub_files(directory: Path, extensions: list) -> List[Path]:
"""

56
app/utils/web.py
Normal file
@@ -0,0 +1,56 @@
from typing import Optional

from app.utils.http import RequestUtils


class WebUtils:
@staticmethod
def get_location(ip: str):
"""
https://api.mir6.com/api/ip
{
"code": 200,
"msg": "success",
"data": {
"ip": "240e:97c:2f:1::5c",
"dec": "47925092370311863177116789888333643868",
"country": "中国",
"countryCode": "CN",
"province": "广东省",
"city": "广州市",
"districts": "",
"idc": "",
"isp": "中国电信",
"net": "数据中心",
"zipcode": "510000",
"areacode": "020",
"protocol": "IPv6",
"location": "中国[CN] 广东省 广州市",
"myip": "125.89.7.89",
"time": "2023-09-01 17:28:23"
}
}
"""
try:
r = RequestUtils().get_res(f"https://api.mir6.com/api/ip?ip={ip}&type=json")
if r:
return r.json().get("data", {}).get("location") or ''
except Exception as err:
return str(err)
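`get_location` above drills into `data.location` of the documented JSON with chained `.get()` calls, so a malformed response degrades to an empty string instead of raising. The extraction step against the sample payload from the docstring (hypothetical helper, no network):

```python
# Sample payload copied from the get_location docstring above (abridged).
sample = {
    "code": 200,
    "msg": "success",
    "data": {
        "ip": "240e:97c:2f:1::5c",
        "country": "中国",
        "province": "广东省",
        "city": "广州市",
        "location": "中国[CN] 广东省 广州市",
    },
}

def extract_location(payload: dict) -> str:
    """Same chained .get() lookup the method applies to r.json()."""
    return payload.get("data", {}).get("location") or ''

print(extract_location(sample))  # 中国[CN] 广东省 广州市
print(extract_location({}))      # empty string
```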
@staticmethod
def get_bing_wallpaper() -> Optional[str]:
"""
获取Bing每日壁纸
"""
url = "https://cn.bing.com/HPImageArchive.aspx?format=js&idx=0&n=1"
resp = RequestUtils(timeout=5).get_res(url)
if resp and resp.status_code == 200:
try:
result = resp.json()
if isinstance(result, dict):
for image in result.get('images') or []:
return f"https://cn.bing.com{image.get('url')}" if 'url' in image else ''
except Exception as err:
print(str(err))
return None

@@ -1 +1 @@
APP_VERSION = 'v1.2.1'
APP_VERSION = 'v1.2.4'