Compare commits


42 Commits

Author SHA1 Message Date
jxxghp
91eac50ab9 v1.7.7
- Multi-alias search (`SEARCH_MULTIPLE_NAME`) now defaults to off; improved search handling when a site is unreachable, speeding up searches
- Fixed residual site settings (subscriptions etc.) after a site is deleted or reset
- M-Team (馒头) site data statistics switched to using the ApiKey
- Improved the cast display for the Bangumi daily calendar
- Plugins can now display download/install counts
2024-03-27 17:01:33 +08:00
jxxghp
f6468ad327 fix scraper 2024-03-27 16:01:20 +08:00
jxxghp
fb6c3a9f36 fix site test 2024-03-27 15:45:27 +08:00
jxxghp
eb751bb581 fix site test 2024-03-27 15:35:01 +08:00
jxxghp
f9069bf19b fix #1758 2024-03-27 12:22:15 +08:00
jxxghp
ef0c88a3b6 fix torrent deduplication 2024-03-27 11:37:51 +08:00
jxxghp
f1f8ccb5d6 feat:plugins statistics 2024-03-27 08:24:06 +08:00
jxxghp
2df113ad38 fix SiteDeleted 2024-03-27 07:09:00 +08:00
jxxghp
fa03232321 Merge pull request #1759 from cddjr/fix_remove_site 2024-03-27 06:24:18 +08:00
景大侠
04f50284c6 fix: deleting a site left numeric IDs in its subscriptions' site lists 2024-03-27 00:54:58 +08:00
jxxghp
9fc950c2ed Merge pull request #1751 from z3shan33/main 2024-03-26 16:41:59 +08:00
zss
9c1aeb933e fix: fetch voice-actor info via characters in Bangumi 2024-03-26 16:11:03 +08:00
jxxghp
1cee20134a fix plugin deduplication & sorting 2024-03-26 09:30:05 +08:00
jxxghp
0ca5f5bd89 fix timeout 2024-03-25 23:06:30 +08:00
jxxghp
25e0c25bc6 fix timeout 2024-03-25 23:01:50 +08:00
jxxghp
3f8453f054 fix 2024-03-25 20:14:24 +08:00
jxxghp
cf259af2d1 feat: plugin install statistics 2024-03-25 18:02:57 +08:00
jxxghp
0b70f74553 fix site test 2024-03-24 21:33:41 +08:00
jxxghp
f0bc5d737b - bug fixes 2024-03-24 15:45:20 +08:00
jxxghp
181d87f68e fix mtorrent 2024-03-24 15:31:00 +08:00
jxxghp
e37ac4da6a v1.7.6
- M-Team (馒头) search now uses the ApiKey. First create an access token under the site's `控制台` -> `实验室` (Console -> Laboratory); once the site cookie is maintained manually, the ApiKey is fetched and cached automatically. If the ApiKey is changed, a manual site edit must be triggered to clear the cache.
- Searches now merge results from multiple aliases to avoid incomplete results

Note: apart from search and download, M-Team sign-in, data statistics, ratio farming (刷流) and the like still use the cookie; assess the risk yourself.
2024-03-24 14:01:20 +08:00
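The ApiKey caching described in the note above can be sketched as follows. The `site.{host}.apikey` key format mirrors the `MTorrentSpider.__get_apikey` diff further down; the in-memory dict is an assumption standing in for the real database-backed `SystemConfigOper`:

```python
# Minimal sketch of the per-site ApiKey cache pattern; the dict-backed
# store is a stand-in for the real SystemConfigOper.
class SystemConfig:
    def __init__(self):
        self._conf = {}

    def get(self, key):
        return self._conf.get(key)

    def set(self, key, value):
        self._conf[key] = value

    def delete(self, key):
        self._conf.pop(key, None)


def get_apikey(config: SystemConfig, host: str, fetch) -> str:
    """Return the cached ApiKey for a site, fetching and caching it if absent."""
    key = f"site.{host}.apikey"
    apikey = config.get(key)
    if not apikey:
        apikey = fetch()  # e.g. POST api/apikey/getKeyList with the site cookie
        if apikey:
            config.set(key, apikey)
    return apikey


conf = SystemConfig()
first = get_apikey(conf, "m-team.cc", lambda: "abc123")  # fetched and cached
second = get_apikey(conf, "m-team.cc", lambda: "zzz")    # served from cache
# Editing the site clears the cached key, forcing a re-fetch next time:
conf.delete("site.m-team.cc.apikey")
```

This is why a changed ApiKey requires a manual site edit: nothing else invalidates the cached value.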
jxxghp
bd7ca7fa60 feat:m-team x-api-key 2024-03-24 13:38:36 +08:00
jxxghp
96de772119 fix mtorrent 2024-03-24 10:20:12 +08:00
jxxghp
72b6556c62 add SEARCH_MULTIPLE_NAME 2024-03-24 08:26:59 +08:00
jxxghp
e4bb182668 feat: search more results 2024-03-24 08:13:08 +08:00
jxxghp
595d097235 v1.7.5
- Authentication sites now include 青蛙 (qingwa) 🐸; 蝴蝶 🦋 supports IPv4 domains; adapted to the new M-Team UI
- Faster plugin-market loading
- Plugin logs are displayed in reverse order
2024-03-23 19:01:09 +08:00
jxxghp
9b53aad34f fix mtorrent 2024-03-23 13:46:06 +08:00
jxxghp
e92a2e1ff1 Merge pull request #1728 from developer-wlj/wlj0323 2024-03-23 13:38:33 +08:00
mayun110
764359c3e8 fix 2024-03-23 13:18:36 +08:00
mayun110
abd1a51863 fix: labels by mTorrent 2024-03-23 12:26:49 +08:00
jxxghp
2f05f8dc4d fix mtorrent 2024-03-23 09:50:03 +08:00
jxxghp
23c678e71e fix mtorrent 2024-03-23 09:42:11 +08:00
jxxghp
ef67b76453 fix: show the username in download messages 2024-03-22 13:26:07 +08:00
jxxghp
c4e7870f7b Merge pull request #1726 from sundxfansky/main 2024-03-22 06:53:18 +08:00
jxxghp
9cef50436a Merge pull request #1725 from Vincwnt/main 2024-03-22 06:51:40 +08:00
sundxfansky
a15aded0a0 no need to add the timestamp 2024-03-22 04:40:33 +08:00
chenyuan
8ac40dc205 fix: bulk message push failed when deleted users exist 2024-03-21 22:27:01 +08:00
jxxghp
92a5b3d227 feat: multi-threaded loading of online plugins 2024-03-21 21:30:26 +08:00
jxxghp
761f1e7a4b feat: multi-threaded loading of online plugins 2024-03-21 21:27:54 +08:00
jxxghp
ad0731e1ec Update README.md 2024-03-21 18:27:36 +08:00
jxxghp
a451f12d86 add qingwa 2024-03-21 16:55:57 +08:00
jxxghp
dcde619e77 plugin logs in reverse order & added Windows install-package guide 2024-03-21 16:28:16 +08:00
29 changed files with 485 additions and 116 deletions

View File

@@ -47,7 +47,8 @@ MoviePilot需要配套下载器和媒体服务器配合使用。
- Windows
下载 [MoviePilot.exe](https://github.com/jxxghp/MoviePilot/releases)双击运行后自动生成配置文件目录访问http://localhost:3000
1. 独立执行文件版本:下载 [MoviePilot.exe](https://github.com/jxxghp/MoviePilot/releases)双击运行后自动生成配置文件目录访问http://localhost:3000
2. 安装包版本:[Windows-MoviePilot](https://github.com/developer-wlj/Windows-MoviePilot)
- 群晖套件
@@ -81,7 +82,7 @@ MoviePilot需要配套下载器和媒体服务器配合使用。
- **❗AUTH_SITE** 认证站点(认证通过后才能使用站点相关功能),支持配置多个认证站点,使用`,`分隔,如:`iyuu,hhclub`,会依次执行认证操作,直到有一个站点认证成功。
配置`AUTH_SITE`后,需要根据下表配置对应站点的认证参数。
认证资源`v1.1.4`支持:`iyuu`/`hhclub`/`audiences`/`hddolby`/`zmpt`/`freefarm`/`hdfans`/`wintersakura`/`leaves`/`ptba` /`icc2022`/`ptlsp`/`xingtan`/`ptvicomo`/`agsvpt`/`hdkyl`
认证资源`v1.2.4+`支持:`iyuu`/`hhclub`/`audiences`/`hddolby`/`zmpt`/`freefarm`/`hdfans`/`wintersakura`/`leaves`/`ptba` /`icc2022`/`ptlsp`/`xingtan`/`ptvicomo`/`agsvpt`/`hdkyl`/`qingwa`
| 站点 | 参数 |
|:------------:|:-----------------------------------------------------:|
@@ -101,6 +102,7 @@ MoviePilot需要配套下载器和媒体服务器配合使用。
| ptvicomo | `PTVICOMO_UID`用户ID<br/>`PTVICOMO_PASSKEY`:密钥 |
| agsvpt | `AGSVPT_UID`用户ID<br/>`AGSVPT_PASSKEY`:密钥 |
| hdkyl | `HDKYL_UID`用户ID<br/>`HDKYL_PASSKEY`:密钥 |
| qingwa | `QINGWA_UID`用户ID<br/>`QINGWA_PASSKEY`:密钥 |
### 2. **环境变量 / 配置文件**
@@ -130,6 +132,8 @@ MoviePilot需要配套下载器和媒体服务器配合使用。
---
- **DOWNLOAD_SUBTITLE** 下载站点字幕,`true`/`false`,默认`true`
---
- **SEARCH_MULTIPLE_NAME** 搜索时是否使用多个名称搜索,`true`/`false`,默认`false`,开启后会使用多个名称进行搜索,搜索结果会更全面,但会增加搜索时间;关闭时只要其中一个名称搜索到结果或全部名称搜索完毕即停止
---
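The stop-early vs. merge behaviour of `SEARCH_MULTIPLE_NAME` described above can be sketched like this; the `search` callable is a hypothetical stand-in for querying one site with one name:

```python
def search_names(names, search, multiple: bool = False):
    """Query each name in turn; merge all results when `multiple` is on,
    otherwise stop at the first name that returns anything."""
    results = []
    for name in names:
        found = search(name)
        if not found:
            continue
        if multiple:
            results.extend(found)  # keep going: fuller results, more time
        else:
            results = found        # first hit wins: faster
            break
    return results


index = {"Alias A": [], "Alias B": ["t1", "t2"], "Alias C": ["t3"]}
fast = search_names(index, index.get, multiple=False)  # stops at "Alias B"
full = search_names(index, index.get, multiple=True)   # merges B and C
```

This matches the trade-off stated above: the default (`false`) is faster, while enabling it returns a more complete result set at the cost of extra queries.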
- **MOVIE_RENAME_FORMAT** 电影重命名格式基于jinjia2语法
`MOVIE_RENAME_FORMAT`支持的配置项:

View File

@@ -45,7 +45,7 @@ def download(
media_info=mediainfo,
torrent_info=torrentinfo
)
did = DownloadChain().download_single(context=context, userid=current_user.name, username=current_user.name)
did = DownloadChain().download_single(context=context, username=current_user.name)
return schemas.Response(success=True if did else False, data={
"download_id": did
})
@@ -73,7 +73,7 @@ def add(
media_info=mediainfo,
torrent_info=torrentinfo
)
did = DownloadChain().download_single(context=context, userid=current_user.name, username=current_user.name)
did = DownloadChain().download_single(context=context, username=current_user.name)
return schemas.Response(success=True if did else False, data={
"download_id": did
})

View File

@@ -67,6 +67,14 @@ def installed(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
return SystemConfigOper().get(SystemConfigKey.UserInstalledPlugins) or []
@router.get("/statistic", summary="插件安装统计", response_model=dict)
def statistic(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
插件安装统计
"""
return PluginHelper().get_statistic()
@router.get("/install/{plugin_id}", summary="安装插件", response_model=schemas.Response)
def install(plugin_id: str,
repo_url: str = "",
@@ -89,6 +97,8 @@ def install(plugin_id: str,
install_plugins.append(plugin_id)
# 保存设置
SystemConfigOper().set(SystemConfigKey.UserInstalledPlugins, install_plugins)
# 统计
PluginHelper().install_reg(plugin_id)
# 重载插件管理器
PluginManager().init_config()
# 注册插件服务

View File

@@ -57,8 +57,8 @@ def add_site(
site_in.id = None
site = Site(**site_in.dict())
site.create(db)
# 通知缓存站点图标
EventManager().send_event(EventType.CacheSiteIcon, {
# 通知站点更新
EventManager().send_event(EventType.SiteUpdated, {
"domain": domain
})
return schemas.Response(success=True)
@@ -81,8 +81,8 @@ def update_site(
_scheme, _netloc = StringUtils.get_url_netloc(site_in.url)
site_in.url = f"{_scheme}://{_netloc}/"
site.update(db, site_in.dict())
# 通知缓存站点图标
EventManager().send_event(EventType.CacheSiteIcon, {
# 通知站点更新
EventManager().send_event(EventType.SiteUpdated, {
"domain": site_in.domain
})
return schemas.Response(success=True)
@@ -130,7 +130,7 @@ def reset(db: Session = Depends(get_db),
# 插件站点删除
EventManager().send_event(EventType.SiteDeleted,
{
"site_id": None
"site_id": "*"
})
return schemas.Response(success=True, message="站点已重置!")

View File

@@ -191,6 +191,8 @@ def get_logging(token: str, length: int = 50, logfile: str = "moviepilot.log"):
return Response(content="日志文件不存在!", media_type="text/plain")
with open(log_path, 'r', encoding='utf-8') as file:
text = file.read()
# 倒序输出
text = '\n'.join(text.split('\n')[::-1])
return Response(content=text, media_type="text/plain")
else:
# 返回SSE流响应
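The one-liner added in this hunk reverses the log file line by line so the newest entries appear first; for example:

```python
# Reverse a log buffer line by line, newest entries first.
text = "line1\nline2\nline3"
reversed_text = '\n'.join(text.split('\n')[::-1])
# reversed_text == "line3\nline2\nline1"
```

One caveat: if the file ends with a trailing newline, `split('\n')` yields a final empty string, so the reversed output starts with a blank line.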

View File

@@ -34,14 +34,19 @@ class DownloadChain(ChainBase):
self.mediaserver = MediaServerOper()
def post_download_message(self, meta: MetaBase, mediainfo: MediaInfo, torrent: TorrentInfo,
channel: MessageChannel = None,
userid: str = None):
channel: MessageChannel = None, userid: str = None, username: str = None):
"""
发送添加下载的消息
:param meta: 元数据
:param mediainfo: 媒体信息
:param torrent: 种子信息
:param channel: 通知渠道
:param userid: 用户ID指定时精确发送对应用户
:param username: 通知显示的下载用户信息
"""
msg_text = ""
if userid:
msg_text = f"用户:{userid}"
if username:
msg_text = f"用户:{username}"
if torrent.site_name:
msg_text = f"{msg_text}\n站点:{torrent.site_name}"
if meta.resource_term:
@@ -73,6 +78,7 @@ class DownloadChain(ChainBase):
self.post_message(Notification(
channel=channel,
mtype=NotificationType.Download,
userid=userid,
title=f"{mediainfo.title_year} "
f"{meta.season_episode} 开始下载",
text=msg_text,
@@ -103,17 +109,27 @@ class DownloadChain(ChainBase):
# 解码参数
req_str = base64.b64decode(base64_str.encode('utf-8')).decode('utf-8')
req_params: Dict[str, dict] = json.loads(req_str)
# 是否使用cookie
if not req_params.get('cookie'):
cookie = None
# 请求头
if req_params.get('header'):
headers = req_params.get('header')
else:
headers = None
if req_params.get('method') == 'get':
# GET请求
res = RequestUtils(
ua=ua,
cookies=cookie
cookies=cookie,
headers=headers
).get_res(url, params=req_params.get('params'))
else:
# POST请求
res = RequestUtils(
ua=ua,
cookies=cookie
cookies=cookie,
headers=headers
).post_res(url, params=req_params.get('params'))
if not res:
return None
@@ -302,18 +318,20 @@ class DownloadChain(ChainBase):
self.downloadhis.add_files(files_to_add)
# 发送消息群发不带channel和userid
self.post_download_message(meta=_meta, mediainfo=_media, torrent=_torrent)
self.post_download_message(meta=_meta, mediainfo=_media, torrent=_torrent, username=username)
# 下载成功后处理
self.download_added(context=context, download_dir=download_dir, torrent_path=torrent_file)
# 广播事件
self.eventmanager.send_event(EventType.DownloadAdded, {
"hash": _hash,
"context": context
"context": context,
"username": username
})
else:
# 下载失败
logger.error(f"{_media.title_year} 添加下载任务失败:"
f"{_torrent.title} - {_torrent.enclosure}{error_msg}")
# 只发送给对应渠道和用户
self.post_message(Notification(
channel=channel,
mtype=NotificationType.Manual,

View File

@@ -115,7 +115,7 @@ class MessageChain(ChainBase):
# 用户ID
userid = info.userid
# 用户名
username = info.username
username = info.username or userid
if not userid:
logger.debug(f'未识别到用户ID{body}{form}{args}')
return
@@ -192,8 +192,8 @@ class MessageChain(ChainBase):
# 媒体库中已存在
self.post_message(
Notification(channel=channel,
title=f"{_current_media.title_year}"
f"{_current_meta.sea} 媒体库中已存在,如需重新下载请发送:搜索 XXX 或 下载 XXX",
title=f"{_current_media.title_year}"
f"{_current_meta.sea} 媒体库中已存在,如需重新下载请发送:搜索 名称 或 下载 名称】",
userid=userid))
return
elif exist_flag:
@@ -274,8 +274,8 @@ class MessageChain(ChainBase):
if exist_flag:
self.post_message(Notification(
channel=channel,
title=f"{mediainfo.title_year}"
f"{_current_meta.sea} 媒体库中已存在,如需洗版请发送:洗版 XXX",
title=f"{mediainfo.title_year}"
f"{_current_meta.sea} 媒体库中已存在,如需洗版请发送:洗版 XXX",
userid=userid))
return
else:

View File

@@ -9,6 +9,7 @@ from typing import List, Optional
from app.chain import ChainBase
from app.core.context import Context
from app.core.context import MediaInfo, TorrentInfo
from app.core.event import eventmanager, Event
from app.core.metainfo import MetaInfo
from app.db.systemconfig_oper import SystemConfigOper
from app.helper.progress import ProgressHelper
@@ -16,7 +17,7 @@ from app.helper.sites import SitesHelper
from app.helper.torrent import TorrentHelper
from app.log import logger
from app.schemas import NotExistMediaInfo
from app.schemas.types import MediaType, ProgressKey, SystemConfigKey
from app.schemas.types import MediaType, ProgressKey, SystemConfigKey, EventType
from app.utils.string import StringUtils
@@ -384,3 +385,24 @@ class SearchChain(ChainBase):
),
torrents
))
@eventmanager.register(EventType.SiteDeleted)
def remove_site(self, event: Event):
"""
从搜索站点中移除与已删除站点相关的设置
"""
if not event:
return
event_data = event.event_data or {}
site_id = event_data.get("site_id")
if not site_id:
return
if site_id == "*":
# 清空搜索站点
SystemConfigOper().set(SystemConfigKey.IndexerSites, [])
return
# 从选中的rss站点中移除
selected_sites = SystemConfigOper().get(SystemConfigKey.IndexerSites) or []
if site_id in selected_sites:
selected_sites.remove(site_id)
SystemConfigOper().set(SystemConfigKey.IndexerSites, selected_sites)
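The `SiteDeleted` handler above treats `site_id == "*"` as "all sites reset" and otherwise removes a single ID from the selected list. A self-contained sketch of that dispatch, with a plain list standing in for the SystemConfig-backed site selection:

```python
def on_site_deleted(selected_sites, site_id):
    """Return the updated site selection after a SiteDeleted event."""
    if not site_id:
        return selected_sites
    if site_id == "*":
        # Site reset: clear the whole selection
        return []
    # Single deletion: drop just that ID if it was selected
    return [s for s in selected_sites if s != site_id]


after_one = on_site_deleted([1, 2, 3], 2)    # one site deleted
after_all = on_site_deleted([1, 2, 3], "*")  # sites reset
untouched = on_site_deleted([1, 2, 3], None) # malformed event: no change
```

This is the fix for the numeric-ID leftovers mentioned in the commit list: deleted sites are now actively purged from indexer, RSS, and subscription selections.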

View File

@@ -5,6 +5,7 @@ from typing import Union
from urllib.parse import urljoin
from lxml import etree
from ruamel.yaml import CommentedMap
from app.chain import ChainBase
from app.core.config import settings
@@ -12,6 +13,7 @@ from app.core.event import eventmanager, Event, EventManager
from app.db.models.site import Site
from app.db.site_oper import SiteOper
from app.db.siteicon_oper import SiteIconOper
from app.db.systemconfig_oper import SystemConfigOper
from app.helper.browser import PlaywrightHelper
from app.helper.cloudflare import under_challenge
from app.helper.cookie import CookieHelper
@@ -41,13 +43,21 @@ class SiteChain(ChainBase):
self.cookiehelper = CookieHelper()
self.message = MessageHelper()
self.cookiecloud = CookieCloudHelper()
self.systemconfig = SystemConfigOper()
# 特殊站点登录验证
self.special_site_test = {
"zhuque.in": self.__zhuque_test,
# "m-team.io": self.__mteam_test,
"m-team.io": self.__mteam_test,
"m-team.cc": self.__mteam_test,
}
def is_special_site(self, domain: str) -> bool:
"""
判断是否特殊站点
"""
return domain in self.special_site_test
@staticmethod
def __zhuque_test(site: Site) -> Tuple[bool, str]:
"""
@@ -99,7 +109,17 @@ class SiteChain(ChainBase):
if res and res.status_code == 200:
user_info = res.json()
if user_info and user_info.get("data"):
return True, "连接成功"
# 更新最后访问时间
res = RequestUtils(cookies=site.cookie,
ua=site.ua,
timeout=60,
proxies=settings.PROXY if site.proxy else None,
referer=f"{site.url}index"
).post_res(url=urljoin(url, "api/member/updateLastBrowse"))
if res:
return True, "连接成功"
else:
return True, f"连接成功,但更新状态失败"
return False, "Cookie已失效"
@staticmethod
@@ -230,9 +250,9 @@ class SiteChain(ChainBase):
public=1 if indexer.get("public") else 0)
_add_count += 1
# 通知缓存站点图标
# 通知站点更新
if indexer:
EventManager().send_event(EventType.CacheSiteIcon, {
EventManager().send_event(EventType.SiteUpdated, {
"domain": domain,
})
# 处理完成
@@ -244,7 +264,7 @@ class SiteChain(ChainBase):
logger.info(f"CookieCloud同步成功{ret_msg}")
return True, ret_msg
@eventmanager.register(EventType.CacheSiteIcon)
@eventmanager.register(EventType.SiteUpdated)
def cache_site_icon(self, event: Event):
"""
缓存站点图标
@@ -286,24 +306,40 @@ class SiteChain(ChainBase):
else:
logger.warn(f"缓存站点 {indexer.get('name')} 图标失败")
def test(self, url: str) -> Tuple[bool, str]:
@eventmanager.register(EventType.SiteUpdated)
def clear_site_data(self, event: Event):
"""
清理站点数据
"""
if not event:
return
event_data = event.event_data or {}
# 主域名
domain = event_data.get("domain")
if not domain:
return
# 获取主域名中间那段
domain_host = StringUtils.get_url_host(domain)
# 查询以"site.domain_host"开头的配置项,并清除
site_keys = self.systemconfig.all().keys()
for key in site_keys:
if key.startswith(f"site.{domain_host}"):
logger.info(f"清理站点配置:{key}")
self.systemconfig.delete(key)
def test(self, site_info: Union[str, CommentedMap, dict]) -> Tuple[bool, str]:
"""
测试站点是否可用
:param url: 站点域名
:param site_info: 站点域名或者站点的数据对象
:return: (是否可用, 错误信息)
"""
# 检查域名是否可用
domain = StringUtils.get_url_domain(url)
site_info = self.siteoper.get_by_domain(domain)
if not site_info:
return False, f"站点【{url}】不存在"
# 模拟登录
try:
# 特殊站点测试
if self.special_site_test.get(domain):
return self.special_site_test[domain](site_info)
if isinstance(site_info, str):
url = site_info
domain = StringUtils.get_url_domain(url)
site_info = self.siteoper.get_by_domain(domain)
if not site_info:
return False, f"站点【{url}】不存在"
# 通用站点测试
site_url = site_info.url
site_cookie = site_info.cookie
@@ -312,7 +348,21 @@ class SiteChain(ChainBase):
public = site_info.public
proxies = settings.PROXY if site_info.proxy else None
proxy_server = settings.PROXY_SERVER if site_info.proxy else None
else:
# 外部站点测试
site_url = site_info.get("url")
domain = StringUtils.get_url_domain(site_url)
site_cookie = site_info.get("cookie")
ua = site_info.get("ua")
render = site_info.get("render")
public = site_info.get("public")
proxies = settings.PROXY if site_info.get("proxy") else None
proxy_server = settings.PROXY_SERVER if site_info.get("proxy") else None
# 模拟登录
try:
# 特殊站点测试
if self.special_site_test.get(domain):
return self.special_site_test[domain](site_info)
# 访问链接
if render:
page_source = PlaywrightHelper().get_page_source(url=site_url,

View File

@@ -12,6 +12,7 @@ from app.chain.search import SearchChain
from app.chain.torrents import TorrentsChain
from app.core.config import settings
from app.core.context import TorrentInfo, Context, MediaInfo
from app.core.event import eventmanager, Event
from app.core.meta import MetaBase
from app.core.metainfo import MetaInfo
from app.db.models.subscribe import Subscribe
@@ -21,7 +22,7 @@ from app.helper.message import MessageHelper
from app.helper.torrent import TorrentHelper
from app.log import logger
from app.schemas import NotExistMediaInfo, Notification
from app.schemas.types import MediaType, SystemConfigKey, MessageChannel, NotificationType
from app.schemas.types import MediaType, SystemConfigKey, MessageChannel, NotificationType, EventType
from app.utils.string import StringUtils
@@ -143,8 +144,8 @@ class SubscribeChain(ChainBase):
userid=userid))
elif message:
logger.info(f'{mediainfo.title_year} {metainfo.season} 添加订阅成功')
if username or userid:
text = f"评分:{mediainfo.vote_average},来自用户:{username or userid}"
if username:
text = f"评分:{mediainfo.vote_average},来自用户:{username}"
else:
text = f"评分:{mediainfo.vote_average}"
# 群发
@@ -956,3 +957,41 @@ class SubscribeChain(ChainBase):
start_episode=start_episode
)
return no_exists
@eventmanager.register(EventType.SiteDeleted)
def remove_site(self, event: Event):
"""
从订阅中移除与站点相关的设置
"""
if not event:
return
event_data = event.event_data or {}
site_id = event_data.get("site_id")
if not site_id:
return
if site_id == "*":
# 站点被重置
SystemConfigOper().set(SystemConfigKey.RssSites, [])
for subscribe in self.subscribeoper.list():
if not subscribe.sites:
continue
self.subscribeoper.update(subscribe.id, {
"sites": ""
})
return
# 从选中的rss站点中移除
selected_sites = SystemConfigOper().get(SystemConfigKey.RssSites) or []
if site_id in selected_sites:
selected_sites.remove(site_id)
SystemConfigOper().set(SystemConfigKey.RssSites, selected_sites)
# 查询所有订阅
for subscribe in self.subscribeoper.list():
if not subscribe.sites:
continue
sites = json.loads(subscribe.sites) or []
if site_id not in sites:
continue
sites.remove(site_id)
self.subscribeoper.update(subscribe.id, {
"sites": json.dumps(sites)
})

View File

@@ -236,6 +236,8 @@ class Settings(BaseSettings):
META_CACHE_EXPIRE: int = 0
# 是否启用DOH解析域名
DOH_ENABLE: bool = True
# 搜索多个名称
SEARCH_MULTIPLE_NAME: bool = False
@validator("SUBSCRIBE_RSS_INTERVAL",
"COOKIECLOUD_INTERVAL",

View File

@@ -1,5 +1,7 @@
import concurrent
import concurrent.futures
import traceback
from typing import List, Any, Dict, Tuple
from typing import List, Any, Dict, Tuple, Optional
from app.core.config import settings
from app.core.event import eventmanager
@@ -117,13 +119,13 @@ class PluginManager(metaclass=Singleton):
"""
if SystemUtils.is_frozen():
return
logger.info("开始安装在线插件...")
logger.info("开始安装第三方插件...")
# 已安装插件
install_plugins = self.systemconfig.get(SystemConfigKey.UserInstalledPlugins) or []
# 在线插件
online_plugins = self.get_online_plugins()
if not online_plugins:
logger.error("未获取到在线插件")
logger.error("未获取到第三方插件")
return
# 支持更新的插件自动更新
for plugin in online_plugins:
@@ -138,7 +140,7 @@ class PluginManager(metaclass=Singleton):
f"插件 {plugin.get('plugin_name')} v{plugin.get('plugin_version')} 安装失败:{msg}")
continue
logger.info(f"插件 {plugin.get('plugin_name')} 安装成功,版本:{plugin.get('plugin_version')}")
logger.info("在线插件安装完成")
logger.info("第三方插件安装完成")
def get_plugin_config(self, pid: str) -> dict:
"""
@@ -282,18 +284,15 @@ class PluginManager(metaclass=Singleton):
"""
获取所有在线插件信息
"""
# 返回值
all_confs = []
if not settings.PLUGIN_MARKET:
return all_confs
# 已安装插件
installed_apps = self.systemconfig.get(SystemConfigKey.UserInstalledPlugins) or []
# 线上插件列表
markets = settings.PLUGIN_MARKET.split(",")
for market in markets:
def __get_plugin_info(market: str) -> Optional[List[dict]]:
"""
获取插件信息
"""
online_plugins = self.pluginhelper.get_plugins(market) or {}
if not online_plugins:
logger.warn(f"获取插件库失败 {market}")
logger.warn(f"获取插件库失败{market}")
return
ret_plugins = []
for pid, plugin in online_plugins.items():
# 运行状插件
plugin_obj = self._running_plugins.get(pid)
@@ -355,11 +354,39 @@ class PluginManager(metaclass=Singleton):
# 本地标志
conf.update({"is_local": False})
# 汇总
all_confs.append(conf)
# 按插件ID去重
if all_confs:
all_confs = list({v["id"]: v for v in all_confs}.values())
return all_confs
ret_plugins.append(conf)
return ret_plugins
if not settings.PLUGIN_MARKET:
return []
# 返回值
all_plugins = []
# 已安装插件
installed_apps = self.systemconfig.get(SystemConfigKey.UserInstalledPlugins) or []
# 使用多线程获取线上插件
with concurrent.futures.ThreadPoolExecutor() as executor:
futures = []
for m in settings.PLUGIN_MARKET.split(","):
futures.append(executor.submit(__get_plugin_info, m))
for future in concurrent.futures.as_completed(futures):
plugins = future.result()
if plugins:
all_plugins.extend(plugins)
# 所有插件按repo在设置中的顺序排序
all_plugins.sort(
key=lambda x: settings.PLUGIN_MARKET.split(",").index(x.get("repo_url")) if x.get("repo_url") else 0
)
# 按插件ID和版本号去重相同插件以前面的为准
result = []
_dup = []
for p in all_plugins:
key = f"{p.get('id')}v{p.get('plugin_version')}"
if key not in _dup:
_dup.append(key)
result.append(p)
logger.info(f"共获取到 {len(result)} 个第三方插件")
return result
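After sorting by the market order in `PLUGIN_MARKET`, the loop above keeps only the first occurrence of each plugin-ID-plus-version pair, so earlier-listed repos take precedence. An equivalent sketch (using a tuple key instead of the concatenated string in the diff):

```python
def dedupe_plugins(plugins):
    """Keep the first plugin seen for each (id, version) pair."""
    seen = set()
    result = []
    for p in plugins:
        key = (p["id"], p["plugin_version"])
        if key not in seen:
            seen.add(key)
            result.append(p)
    return result


plugins = [
    {"id": "a", "plugin_version": "1.0", "repo": "repo1"},
    {"id": "a", "plugin_version": "1.0", "repo": "repo2"},  # duplicate: dropped
    {"id": "a", "plugin_version": "1.1", "repo": "repo2"},  # new version: kept
]
unique = dedupe_plugins(plugins)
```

Because the threads' results arrive in completion order, the sort-then-dedupe step is what makes the outcome deterministic regardless of which market responded first.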
def get_local_plugins(self) -> List[dict]:
"""

View File

@@ -56,6 +56,12 @@ class SystemConfigOper(DbOper, metaclass=Singleton):
return self.__SYSTEMCONF
return self.__SYSTEMCONF.get(key)
def all(self):
"""
获取所有系统设置
"""
return self.__SYSTEMCONF or {}
def delete(self, key: Union[str, SystemConfigKey]):
"""
删除系统设置

View File

@@ -61,7 +61,7 @@ class PlaywrightHelper:
ua: str = None,
proxies: dict = None,
headless: bool = False,
timeout: int = 30) -> str:
timeout: int = 20) -> str:
"""
获取网页源码
:param url: 网页地址

View File

@@ -7,7 +7,9 @@ from typing import Dict, Tuple, Optional, List
from cachetools import TTLCache, cached
from app.core.config import settings
from app.db.systemconfig_oper import SystemConfigOper
from app.log import logger
from app.schemas.types import SystemConfigKey
from app.utils.http import RequestUtils
from app.utils.singleton import Singleton
from app.utils.system import SystemUtils
@@ -20,6 +22,18 @@ class PluginHelper(metaclass=Singleton):
_base_url = "https://raw.githubusercontent.com/%s/%s/main/"
_install_reg = "https://movie-pilot.org/plugin/install/%s"
_install_report = "https://movie-pilot.org/plugin/install"
_install_statistic = "https://movie-pilot.org/plugin/statistic"
def __init__(self):
self.systemconfig = SystemConfigOper()
if not self.systemconfig.get(SystemConfigKey.PluginInstallReport):
if self.install_report():
self.systemconfig.set(SystemConfigKey.PluginInstallReport, "1")
@cached(cache=TTLCache(maxsize=100, ttl=1800))
def get_plugins(self, repo_url: str) -> Dict[str, dict]:
"""
@@ -61,6 +75,45 @@ class PluginHelper(metaclass=Singleton):
return None, None
return user, repo
@cached(cache=TTLCache(maxsize=1, ttl=1800))
def get_statistic(self) -> Dict:
"""
获取插件安装统计
"""
res = RequestUtils(timeout=10).get_res(self._install_statistic)
if res and res.status_code == 200:
return res.json()
return {}
def install_reg(self, pid: str) -> bool:
"""
安装插件统计
"""
if not pid:
return False
res = RequestUtils(timeout=5).get_res(self._install_reg % pid)
if res and res.status_code == 200:
return True
return False
def install_report(self) -> bool:
"""
上报存量插件安装统计
"""
plugins = self.systemconfig.get(SystemConfigKey.UserInstalledPlugins)
if not plugins:
return False
res = RequestUtils(content_type="application/json",
timeout=5).post(self._install_report,
json={
"plugins": [
{
"plugin_id": plugin,
} for plugin in plugins
]
})
return True if res else False
def install(self, pid: str, repo_url: str) -> Tuple[bool, str]:
"""
安装插件
@@ -154,4 +207,7 @@ class PluginHelper(metaclass=Singleton):
requirements_file = plugin_dir / "requirements.txt"
if requirements_file.exists():
SystemUtils.execute(f"pip install -r {requirements_file} > /dev/null 2>&1")
# 安装成功后统计
self.install_reg(pid)
return True, ""

View File

@@ -15,7 +15,8 @@ class BangumiApi(object):
"calendar": "calendar",
"detail": "v0/subjects/%s",
"persons": "v0/subjects/%s/persons",
"subjects": "v0/subjects/%s/subjects"
"subjects": "v0/subjects/%s/subjects",
"characters": "v0/subjects/%s/characters"
}
_base_url = "https://api.bgm.tv/"
_req = RequestUtils(session=requests.Session())
@@ -145,7 +146,17 @@ class BangumiApi(object):
"""
获取番剧人物
"""
return self.__invoke(self._urls["persons"] % bid, _ts=datetime.strftime(datetime.now(), '%Y%m%d'))
ret_list = []
result = self.__invoke(self._urls["characters"] % bid, _ts=datetime.strftime(datetime.now(), '%Y%m%d'))
if result:
for item in result:
character_id = item.get("id")
actors = item.get("actors")
if character_id and actors and actors[0]:
actor_info = actors[0]
actor_info.update({'career': [item.get('name')]})
ret_list.append(actor_info)
return ret_list
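The new `persons` implementation above walks the `characters` payload, keeps each character's first voice actor, and stores the character name in the actor's `career` field. A sketch over plain dicts (the field names mirror the Bangumi response shape assumed by the diff):

```python
def actors_from_characters(characters):
    """Pick the first voice actor per character; record the character name."""
    ret = []
    for item in characters:
        actors = item.get("actors")
        if item.get("id") and actors and actors[0]:
            actor = dict(actors[0])           # copy so the input stays intact
            actor["career"] = [item.get("name")]
            ret.append(actor)
    return ret


data = [
    {"id": 1, "name": "Char A", "actors": [{"name": "Seiyuu X"}, {"name": "Y"}]},
    {"id": 2, "name": "Char B", "actors": []},  # no cast listed: skipped
]
cast = actors_from_characters(data)
```

Characters without a listed actor are skipped entirely, which is why the cast display only improves for entries Bangumi has actually filled in.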
def subjects(self, bid: int):
"""

View File

@@ -1,4 +1,3 @@
import time
from pathlib import Path
from typing import Union
from xml.dom import minidom
@@ -31,6 +30,9 @@ class DoubanScraper:
:param force_img: 强制生成图片
"""
if not mediainfo or not file_path:
return
self._transfer_type = transfer_type
self._force_nfo = force_nfo
self._force_img = force_img
@@ -83,10 +85,6 @@ class DoubanScraper:
@staticmethod
def __gen_common_nfo(mediainfo: MediaInfo, doc, root):
# 添加时间
DomUtils.add_node(doc, root, "dateadded",
time.strftime('%Y-%m-%d %H:%M:%S',
time.localtime(time.time())))
# 简介
xplot = DomUtils.add_node(doc, root, "plot")
xplot.appendChild(doc.createCDATASection(mediainfo.overview or ""))
@@ -166,8 +164,6 @@ class DoubanScraper:
logger.info(f"正在生成季NFO文件{season_path.name}")
doc = minidom.Document()
root = DomUtils.add_node(doc, doc, "season")
# 添加时间
DomUtils.add_node(doc, root, "dateadded", time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(time.time())))
# 简介
xplot = DomUtils.add_node(doc, root, "plot")
xplot.appendChild(doc.createCDATASection(mediainfo.overview or ""))

View File

@@ -3,6 +3,7 @@ from typing import List, Optional, Tuple, Union
from ruamel.yaml import CommentedMap
from app.core.config import settings
from app.core.context import TorrentInfo
from app.helper.sites import SitesHelper
from app.log import logger
@@ -50,6 +51,18 @@ class IndexerModule(_ModuleBase):
:param page: 页码
:return: 资源列表
"""
def __remove_duplicate(_torrents: List[TorrentInfo]) -> List[TorrentInfo]:
"""
去除重复的种子
:param _torrents: 种子列表
:return: 去重后的种子列表
"""
if not settings.SEARCH_MULTIPLE_NAME:
return _torrents
# 通过encosure去重
return list({f"{t.title}_{t.description}": t for t in _torrents}.values())
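The helper above dedupes merged multi-name results by keying a dict on `title` + `description` (despite the comment in the diff mentioning enclosure); later entries overwrite earlier ones for the same key. A runnable sketch with a minimal torrent type:

```python
from dataclasses import dataclass


@dataclass
class Torrent:
    title: str
    description: str
    site: str


def remove_duplicate(torrents):
    """Dedupe by title+description; the last entry for a key wins."""
    return list({f"{t.title}_{t.description}": t for t in torrents}.values())


merged = [
    Torrent("Movie.2024.1080p", "desc", "site-a"),
    Torrent("Movie.2024.1080p", "desc", "site-b"),  # same key: replaces site-a
    Torrent("Movie.2024.2160p", "desc", "site-a"),
]
unique = remove_duplicate(merged)
```

Dicts preserve insertion order, so the deduped list keeps the original ordering of first appearances while the stored object is the last one seen for each key.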
# 确认搜索的名字
if not keywords:
# 浏览种子页
@@ -76,30 +89,38 @@ class IndexerModule(_ModuleBase):
try:
if site.get('parser') == "TNodeSpider":
error_flag, result_array = TNodeSpider(site).search(
error_flag, result = TNodeSpider(site).search(
keyword=search_word,
page=page
)
elif site.get('parser') == "TorrentLeech":
error_flag, result_array = TorrentLeech(site).search(
error_flag, result = TorrentLeech(site).search(
keyword=search_word,
page=page
)
elif site.get('parser') == "mTorrent":
error_flag, result_array = MTorrentSpider(site).search(
error_flag, result = MTorrentSpider(site).search(
keyword=search_word,
mtype=mtype,
page=page
)
else:
error_flag, result_array = self.__spider_search(
error_flag, result = self.__spider_search(
search_word=search_word,
indexer=site,
mtype=mtype,
page=page
)
# 有结果后停止
if result_array:
if error_flag:
break
if not result:
continue
if settings.SEARCH_MULTIPLE_NAME:
# 合并多个结果
result_array.extend(result)
else:
# 有结果就停止
result_array = result
break
except Exception as err:
logger.error(f"{site.get('name')} 搜索出错:{str(err)}")
@@ -113,14 +134,16 @@ class IndexerModule(_ModuleBase):
return []
else:
logger.info(f"{site.get('name')} 搜索完成,耗时 {seconds} 秒,返回数据:{len(result_array)}")
# 合并站点信息,以TorrentInfo返回
return [TorrentInfo(site=site.get("id"),
site_name=site.get("name"),
site_cookie=site.get("cookie"),
site_ua=site.get("ua"),
site_proxy=site.get("proxy"),
site_order=site.get("pri"),
**result) for result in result_array]
# TorrentInfo
torrents = [TorrentInfo(site=site.get("id"),
site_name=site.get("name"),
site_cookie=site.get("cookie"),
site_ua=site.get("ua"),
site_proxy=site.get("proxy"),
site_order=site.get("pri"),
**result) for result in result_array]
# 去重
return __remove_duplicate(torrents)
@staticmethod
def __spider_search(indexer: CommentedMap,

View File

@@ -6,6 +6,7 @@ from typing import Tuple, List
from ruamel.yaml import CommentedMap
from app.core.config import settings
from app.db.systemconfig_oper import SystemConfigOper
from app.log import logger
from app.schemas import MediaType
from app.utils.http import RequestUtils
@@ -13,6 +14,9 @@ from app.utils.string import StringUtils
class MTorrentSpider:
"""
mTorrent API需要缓存ApiKey
"""
_indexerid = None
_domain = None
_name = ""
@@ -28,14 +32,23 @@ class MTorrentSpider:
_movie_category = ['401', '419', '420', '421', '439', '405', '404']
_tv_category = ['403', '402', '435', '438', '404', '405']
# API KEY
_apikey = None
# 标签
_labels = {
0: "",
4: "中字",
6: "国配",
"0": "",
"1": "DIY",
"2": "国配",
"3": "DIY 国配",
"4": "中字",
"5": "DIY 中字",
"6": "国配 中字",
"7": "DIY 国配 中字"
}
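The expanded `_labels` table above is effectively a 3-bit mask (1 = DIY, 2 = 国配/Mandarin dub, 4 = 中字/Chinese subs) with every combination spelled out. Assuming the site's `labels` field really is that bitmask, the same strings can be derived from the bits:

```python
_FLAGS = [(1, "DIY"), (2, "国配"), (4, "中字")]


def label_names(value):
    """Decode the numeric labels field into its component label strings."""
    bits = int(value or 0)
    return [name for bit, name in _FLAGS if bits & bit]


labels7 = label_names("7")  # all three flags set
labels6 = label_names("6")  # dub + subs
labels0 = label_names(0)    # no labels
```

The lookup-table approach in the diff is equally valid and arguably safer, since it makes no assumption about values outside 0-7.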
def __init__(self, indexer: CommentedMap):
self.systemconfig = SystemConfigOper()
if indexer:
self._indexerid = indexer.get('id')
self._domain = indexer.get('domain')
@@ -46,7 +59,51 @@ class MTorrentSpider:
self._cookie = indexer.get('cookie')
self._ua = indexer.get('ua')
def __get_apikey(self) -> str:
"""
获取ApiKey
"""
domain_host = StringUtils.get_url_host(self._domain)
self._apikey = self.systemconfig.get(f"site.{domain_host}.apikey")
if not self._apikey:
try:
res = RequestUtils(
headers={
"Content-Type": "application/json",
"User-Agent": f"{self._ua}"
},
cookies=self._cookie,
ua=self._ua,
proxies=self._proxy,
referer=f"{self._domain}usercp?tab=laboratory",
timeout=15
).post_res(url=f"{self._domain}api/apikey/getKeyList")
if res and res.status_code == 200:
api_keys = res.json().get('data')
if api_keys:
logger.info(f"{self._name} 获取ApiKey成功")
# 按lastModifiedDate倒序排序
api_keys.sort(key=lambda x: x.get('lastModifiedDate'), reverse=True)
self._apikey = api_keys[0].get('apiKey')
self.systemconfig.set(f"site.{domain_host}.apikey", self._apikey)
else:
logger.warn(f"{self._name} 获取ApiKey失败请先在`控制台`->`实验室`建立存取令牌")
else:
logger.warn(f"{self._name} 获取ApiKey失败请检查Cookie是否有效")
except Exception as e:
logger.error(f"{self._name} 获取ApiKey出错{e}")
return self._apikey
def search(self, keyword: str, mtype: MediaType = None, page: int = 0) -> Tuple[bool, List[dict]]:
"""
搜索
"""
# 检查ApiKey
self.__get_apikey()
if not self._apikey:
return True, []
if not mtype:
categories = []
elif mtype == MediaType.TV:
@@ -63,31 +120,45 @@ class MTorrentSpider:
res = RequestUtils(
headers={
"Content-Type": "application/json",
"User-Agent": f"{self._ua}"
"User-Agent": f"{self._ua}",
"x-api-key": self._apikey
},
cookies=self._cookie,
proxies=self._proxy,
referer=f"{self._domain}browse",
timeout=30
timeout=15
).post_res(url=self._searchurl, json=params)
torrents = []
if res and res.status_code == 200:
results = res.json().get('data', {}).get("data") or []
for result in results:
category_value = result.get('category')
if category_value in self._tv_category \
and category_value not in self._movie_category:
category = MediaType.TV.value
elif category_value in self._movie_category:
category = MediaType.MOVIE.value
else:
category = MediaType.UNKNOWN.value
labels_value = self._labels.get(result.get('labels') or "0") or ""
if labels_value:
labels = labels_value.split()
else:
labels = []
torrent = {
'title': result.get('name'),
'description': result.get('smallDescr'),
'enclosure': self.__get_download_url(result.get('id')),
'pubdate': StringUtils.format_timestamp(result.get('createdDate')),
'size': result.get('size'),
'seeders': result.get('status', {}).get("seeders"),
'peers': result.get('status', {}).get("leechers"),
'grabs': result.get('status', {}).get("timesCompleted"),
'size': int(result.get('size') or '0'),
'seeders': int(result.get('status', {}).get("seeders") or '0'),
'peers': int(result.get('status', {}).get("leechers") or '0'),
'grabs': int(result.get('status', {}).get("timesCompleted") or '0'),
'downloadvolumefactor': self.__get_downloadvolumefactor(result.get('status', {}).get("discount")),
'uploadvolumefactor': self.__get_uploadvolumefactor(result.get('status', {}).get("discount")),
'page_url': self._pageurl % (self._domain, result.get('id')),
'imdbid': self.__find_imdbid(result.get('imdb')),
'labels': labels,
'category': category
}
torrents.append(torrent)
elif res is not None:
@@ -100,6 +171,9 @@ class MTorrentSpider:
@staticmethod
def __find_imdbid(imdb: str) -> str:
"""
Extract the imdbid from an IMDb URL
"""
if imdb:
m = re.search(r"tt\d+", imdb)
if m:
@@ -108,6 +182,9 @@ class MTorrentSpider:
@staticmethod
def __get_downloadvolumefactor(discount: str) -> float:
"""
Get the download volume factor
"""
discount_dict = {
"FREE": 0,
"PERCENT_50": 0.5,
@@ -121,6 +198,9 @@ class MTorrentSpider:
@staticmethod
def __get_uploadvolumefactor(discount: str) -> float:
"""
Get the upload volume factor
"""
uploadvolumefactor_dict = {
"_2X": 2.0,
"_2X_FREE": 2.0,
@@ -131,12 +211,22 @@ class MTorrentSpider:
return 1
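The two discount tables map M-Team's promotion codes onto download/upload multipliers, defaulting to 1 for unknown codes. A standalone sketch combining the entries visible in the hunks (the `_2X_FREE` download factor of 0 is an assumption, since the download table is truncated by the diff):

```python
# Promotion codes visible in the hunks; unknown codes fall back to 1x/1x.
# "_2X_FREE" granting free download is an assumption (the table is truncated).
DOWNLOAD_FACTOR = {
    "FREE": 0,
    "PERCENT_50": 0.5,
    "_2X_FREE": 0,
}
UPLOAD_FACTOR = {
    "_2X": 2.0,
    "_2X_FREE": 2.0,
}


def volume_factors(discount):
    """Return (downloadvolumefactor, uploadvolumefactor) for a discount code."""
    return DOWNLOAD_FACTOR.get(discount, 1), UPLOAD_FACTOR.get(discount, 1)
```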
def __get_download_url(self, torrent_id: str) -> str:
"""
Get the download URL; the request description is returned as a base64-encoded JSON string embedded in the URL
"""
url = self._downloadurl % self._domain
params = {
'method': 'post',
'cookie': False,
'params': {
'id': torrent_id
},
'header': {
'Content-Type': 'application/json',
'User-Agent': f'{self._ua}',
'Accept': 'application/json, text/plain, */*',
'x-api-key': self._apikey
},
'result': 'data'
}
# base64-encode the request parameters
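The parameter dict above is serialized and base64-encoded before being attached to the download URL. A round-trip sketch of that encoding (helper names are illustrative; the exact URL layout is not shown in this hunk):

```python
import base64
import json


def encode_download_params(params: dict) -> str:
    # Serialize the request description to JSON, then base64 for URL transport
    raw = json.dumps(params).encode("utf-8")
    return base64.b64encode(raw).decode("ascii")


def decode_download_params(token: str) -> dict:
    # Inverse operation, as a downloader-side consumer would perform it
    return json.loads(base64.b64decode(token))
```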


@@ -95,7 +95,7 @@ class TorrentSpider:
self.render = indexer.get('render')
self.domain = indexer.get('domain')
self.result_num = int(indexer.get('result_num') or 100)
self._timeout = int(indexer.get('timeout') or 15)
self.page = page
if self.domain and not str(self.domain).endswith("/"):
self.domain = self.domain + "/"


@@ -77,7 +77,7 @@ class TNodeSpider:
},
cookies=self._cookie,
proxies=self._proxy,
timeout=15
).post_res(url=self._searchurl, json=params)
torrents = []
if res and res.status_code == 200:


@@ -40,7 +40,7 @@ class TorrentLeech:
},
cookies=self._indexer.get('cookie'),
proxies=self._proxy,
timeout=15
).get_res(url)
torrents = []
if res and res.status_code == 200:


@@ -183,7 +183,7 @@ class SynologyChat:
ret = self._req.get_res(url=req_url)
if ret and ret.status_code == 200:
users = ret.json().get("data", {}).get("users", []) or []
return [user.get("user_id") for user in users if user.get("deleted", True) is False]
else:
return []
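The SynologyChat change tightens the user list: instead of returning every user, only users whose `deleted` flag is explicitly `False` are kept, and a missing flag defaults to `True` so such entries are excluded too. Extracted as a standalone helper (the function name is illustrative):

```python
def active_user_ids(users: list) -> list:
    # Keep only users explicitly marked as not deleted; a missing "deleted"
    # flag defaults to True, so such entries are excluded as well
    return [u.get("user_id") for u in users if u.get("deleted", True) is False]
```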


@@ -1,4 +1,3 @@
import traceback
from pathlib import Path
from typing import Union
@@ -37,6 +36,9 @@ class TmdbScraper:
:param force_img: whether to force regeneration of images
"""
if not mediainfo or not file_path:
return
self._transfer_type = transfer_type
self._force_nfo = force_nfo
self._force_img = force_img
@@ -151,10 +153,6 @@ class TmdbScraper:
"""
Generate the common NFO fields
"""
# TMDB
DomUtils.add_node(doc, root, "tmdbid", mediainfo.tmdb_id or "")
uniqueid_tmdb = DomUtils.add_node(doc, root, "uniqueid", mediainfo.tmdb_id or "")
@@ -267,9 +265,6 @@ class TmdbScraper:
logger.info(f"Generating season NFO file: {season_path.name}")
doc = minidom.Document()
root = DomUtils.add_node(doc, doc, "season")
# Overview
xplot = DomUtils.add_node(doc, root, "plot")
xplot.appendChild(doc.createCDATASection(seasoninfo.get("overview") or ""))
@@ -306,8 +301,6 @@ class TmdbScraper:
logger.info(f"Generating episode NFO file: {file_path.name}")
doc = minidom.Document()
root = DomUtils.add_node(doc, doc, "episodedetails")
# TMDBID
uniqueid = DomUtils.add_node(doc, root, "uniqueid", str(episodeinfo.get("id")))
uniqueid.setAttribute("type", "tmdb")


@@ -38,3 +38,5 @@ class Plugin(BaseModel):
is_local: Optional[bool] = False
# Repository URL
repo_url: Optional[str] = None
# Install count
install_count: Optional[int] = 0


@@ -40,8 +40,8 @@ class EventType(Enum):
NameRecognize = "name.recognize"
# Name recognition result
NameRecognizeResult = "name.recognize.result"
# Site updated
SiteUpdated = "site.updated"
# System configuration key dictionary
@@ -76,6 +76,8 @@ class SystemConfigKey(Enum):
DefaultSearchFilterRules = "DefaultSearchFilterRules"
# Transfer exclude words
TransferExcludeWords = "TransferExcludeWords"
# Plugin install statistics
PluginInstallReport = "PluginInstallReport"
# Processing progress key dictionary


@@ -282,6 +282,18 @@ class StringUtils:
return netloc[-2]
return netloc[0]
@staticmethod
def get_url_host(url: str) -> str:
"""
Get the main domain label of a URL (the second-to-last dot-separated part of the host)
"""
if not url:
return ""
_, netloc = StringUtils.get_url_netloc(url)
if not netloc:
return ""
return netloc.split(".")[-2]
@staticmethod
def get_base_url(url: str) -> str:
"""
@@ -588,6 +600,8 @@ class StringUtils:
# Reject strings that contain more than one colon (besides the one after the scheme)
return None, None
domain = ":".join(parts[:-1])
if domain.endswith("/"):
domain = domain[:-1]
# Check whether a port number is present
try:
port = int(parts[-1])
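The new `get_url_host` returns the second-to-last label of the host, e.g. `example` for `https://www.example.com`. A self-contained sketch using `urllib.parse` in place of the project's `get_url_netloc` helper (the single-label guard is an addition, since the version in the hunk would raise `IndexError` on hosts like `localhost`):

```python
from urllib.parse import urlparse


def get_url_host(url: str) -> str:
    # Second-to-last dot-separated label of the host,
    # e.g. "example" for "https://www.example.com/path"
    if not url:
        return ""
    netloc = urlparse(url).netloc
    if not netloc:
        return ""
    parts = netloc.split(".")
    return parts[-2] if len(parts) >= 2 else parts[0]
```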


@@ -45,3 +45,5 @@ DOWNLOAD_SUBTITLE=true
OCR_HOST=https://movie-pilot.org
# Plugin market repository addresses; separate multiple addresses with `,` and keep the trailing /
PLUGIN_MARKET=https://github.com/jxxghp/MoviePilot-Plugins
# Search multiple names (true/false). When true, the Chinese, English and original titles are all searched, giving more complete results at the cost of extra search time; when false, searching stops as soon as one name returns results or all names have been tried
SEARCH_MULTIPLE_NAME=true
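The behaviour described by `SEARCH_MULTIPLE_NAME` can be sketched as a small loop over the candidate titles (names and signatures are illustrative, not MoviePilot's actual search API):

```python
def search_names(names, search, multiple=True):
    # multiple=True: query every alias and merge all results;
    # multiple=False: stop as soon as one alias yields anything
    results = []
    for name in names:
        results.extend(search(name))
        if not multiple and results:
            break
    return results
```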


@@ -1 +1 @@
APP_VERSION = 'v1.7.7'