Mirror of https://github.com/jxxghp/MoviePilot.git, synced 2026-05-09 17:42:39 +08:00
Compare commits: 62 commits (author and date columns were not captured by the mirror)

92769b27f1, fa83168b92, f96295de3a, 6cecb3c6a6, b6486035c4, f7c1d28c0f, 47c2ae1c08, c03f24dcf5, 6e2f5762b4, 75330a08cc, 3f17e371c3, a820341ec7, c1f04f5631, a121e45b94, 885ee976b2, e6229beb94, f2a40e1ec3, 5f80aa5b7c, 14ff1e9af6, 49ab5ac709, 74c7a1927b, cbd704373c, a05724f664, 97d0fc046a, 6248e34400, a442dab85b, d4514edba6, 0c581565ad, 350def0a6f, 5b3027c0a7, e4b90ca8f7, d917b00055, cc94c6c367, 6410051e3a, aaa1b80edf, f345d94009, 550fe26d76, 7ad498b3a3, 20eb0b4635, 747dc3fafe, 4708fbb3cb, 6ba40edeb4, 79cb28faf9, 9acf05f334, d0af1bf075, f8a95cec4a, 3cd672fa8d, fe03638552, 1ae220c654, 75c7e71ee6, 4619158b99, 3f88907ba9, ae6440bd0a, 261f5fc0c6, a5d044d535, 6e607ca89f, 06e4b9ad83, c755dc9b85, 209451d5f9, 60b2d30f42, 399d26929d, f50c2e59a9
.gitignore (vendored), 2 changes

```diff
@@ -10,7 +10,9 @@ app/helper/*.pyd
 app/helper/*.bin
 app/plugins/**
 !app/plugins/__init__.py
+config/cookies/**
 config/user.db
+config/sites/**
 *.pyc
 *.log
 .vscode
```
README.md, 15 changes

```diff
@@ -21,7 +21,11 @@
 
 ### 2. **Install a CookieCloud server (optional)**
 
-MoviePilot has a built-in public CookieCloud server; if you need to self-host the service, see the [CookieCloud](https://github.com/easychen/CookieCloud) project, docker image [here](https://hub.docker.com/r/easychen/cookiecloud).
+CookieCloud can quickly sync the site data saved in your browser to MoviePilot. The following service modes are supported:
+
+- Public CookieCloud remote server (default): the server address is https://movie-pilot.org/cookiecloud
+- Built-in local cookie service: after turning on `启用本地CookieCloud服务器` under `设定` - `站点`, the built-in CookieCloud service is started at `http://localhost:${NGINX_PORT}/cookiecloud/`, with the cookie data stored encrypted in the `cookies` file under the configuration directory
+- Self-hosted CookieCloud server: see the [CookieCloud](https://github.com/easychen/CookieCloud) project, docker image [here](https://hub.docker.com/r/easychen/cookiecloud)
 
 **Disclaimer:** This project does not collect sensitive user data, and cookie syncing is implemented through the CookieCloud project rather than being a capability of this project. Technically, CookieCloud is end-to-end encrypted: as long as you do not leak your `用户KEY` and end-to-end encryption password, no third party (including the server operator) can steal any of your information. If that still worries you, do not use the public service or do not use this project, but any information leak that occurs after use is unrelated to this project!
```
```diff
@@ -106,6 +110,7 @@ MoviePilot needs to be paired with a downloader and a media server.
 - **❗SUPERUSER:** Superuser name, default `admin`; use it to log in to the admin UI after installation. **Note: changing this value after the first startup does not take effect unless the database file is deleted!**
 - **❗API_TOKEN:** API key, default `moviepilot`; it must be appended as `?token=` to media-server webhook, WeChat callback and similar URLs, and changing it to a complex string is recommended
 - **BIG_MEMORY_MODE:** Big-memory mode, default `false`; when enabled, more entries are cached, which uses more memory but responds faster
+- **DOH_ENABLE:** DNS-over-HTTPS switch, `true`/`false`, default `true`; when enabled, DoH is used to resolve domains such as api.themoviedb.org, reducing DNS pollution and improving connectivity
 - **META_CACHE_EXPIRE:** Metadata recognition cache expiry in hours, numeric; when unset or `0`, the system default applies (7 days in big-memory mode, otherwise 3 days), and raising it reduces the number of themoviedb requests
 - **GITHUB_TOKEN:** Github token that raises the rate limit for Github API requests such as auto-update and plugin installation, format: ghp_****
 - **DEV:** Developer mode, `true`/`false`, default `false`; when enabled, all scheduled jobs are paused
```
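These flags are plain environment variables; in a docker-compose deployment they would sit under `environment:`. A sketch with placeholder values (only the `SUPERUSER` and `API_TOKEN` defaults come from the text above, the rest are illustrative):

```yaml
environment:
  - SUPERUSER=admin                          # default; log in with this account
  - API_TOKEN=change-me-to-a-long-random-string
  - BIG_MEMORY_MODE=true
  - DOH_ENABLE=true                          # default true
  - META_CACHE_EXPIRE=168                    # hours; 0 or unset = system default
  - GITHUB_TOKEN=ghp_****
```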
````diff
@@ -219,6 +224,14 @@ location / {
     proxy_set_header X-Forwarded-Proto $scheme;
 }
 ```
+- When the reverse proxy uses SSL, `http2` must be enabled, otherwise log loading becomes very slow or unusable. Using `Nginx` as an example:
+```nginx configuration
+server {
+    listen 443 ssl;
+    http2 on;
+    # ...
+}
+```
 - A newly created WeCom (企业微信) app needs a proxy with a fixed public IP to receive messages; add the following to the proxy:
 ```nginx configuration
 location /cgi-bin/gettoken {
````
```diff
@@ -1,7 +1,8 @@
 from fastapi import APIRouter
 
 from app.api.endpoints import login, user, site, message, webhook, subscribe, \
-    media, douban, search, plugin, tmdb, history, system, download, dashboard, filebrowser, transfer, mediaserver
+    media, douban, search, plugin, tmdb, history, system, download, dashboard, \
+    filebrowser, transfer, mediaserver, bangumi
 
 api_router = APIRouter()
 api_router.include_router(login.router, prefix="/login", tags=["login"])
@@ -22,3 +23,5 @@ api_router.include_router(dashboard.router, prefix="/dashboard", tags=["dashboar
 api_router.include_router(filebrowser.router, prefix="/filebrowser", tags=["filebrowser"])
 api_router.include_router(transfer.router, prefix="/transfer", tags=["transfer"])
 api_router.include_router(mediaserver.router, prefix="/mediaserver", tags=["mediaserver"])
+api_router.include_router(bangumi.router, prefix="/bangumi", tags=["bangumi"])
```
app/api/endpoints/bangumi.py (new file, 64 lines)

```python
from typing import List, Any

from fastapi import APIRouter, Depends

from app import schemas
from app.chain.bangumi import BangumiChain
from app.core.context import MediaInfo
from app.core.security import verify_token

router = APIRouter()


@router.get("/calendar", summary="Bangumi每日放送", response_model=List[schemas.MediaInfo])
def calendar(page: int = 1,
             count: int = 30,
             _: schemas.TokenPayload = Depends(verify_token)) -> Any:
    """
    Browse the Bangumi daily broadcast calendar
    """
    infos = BangumiChain().calendar(page=page, count=count)
    if not infos:
        return []
    medias = [MediaInfo(bangumi_info=info) for info in infos]
    return [media.to_dict() for media in medias]


@router.get("/credits/{bangumiid}", summary="查询Bangumi演职员表", response_model=List[schemas.BangumiPerson])
def bangumi_credits(bangumiid: int,
                    page: int = 1,
                    count: int = 20,
                    _: schemas.TokenPayload = Depends(verify_token)) -> Any:
    """
    Query the Bangumi cast and crew list
    """
    persons = BangumiChain().bangumi_credits(bangumiid, page=page, count=count)
    if not persons:
        return []
    return [schemas.BangumiPerson(**person) for person in persons]


@router.get("/recommend/{bangumiid}", summary="查询Bangumi推荐", response_model=List[schemas.MediaInfo])
def bangumi_recommend(bangumiid: int,
                      _: schemas.TokenPayload = Depends(verify_token)) -> Any:
    """
    Query Bangumi recommendations
    """
    infos = BangumiChain().bangumi_recommend(bangumiid)
    if not infos:
        return []
    medias = [MediaInfo(bangumi_info=info) for info in infos]
    return [media.to_dict() for media in medias]


@router.get("/{bangumiid}", summary="查询Bangumi详情", response_model=schemas.MediaInfo)
def bangumi_info(bangumiid: int,
                 _: schemas.TokenPayload = Depends(verify_token)) -> Any:
    """
    Query Bangumi details
    """
    info = BangumiChain().bangumi_info(bangumiid)
    if info:
        return MediaInfo(bangumi_info=info).to_dict()
    else:
        return schemas.MediaInfo()
```
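For reference, a client can exercise these new endpoints with plain GET requests. The sketch below only builds the request URLs; the `/api/v1` path prefix, the port, and the parameter names passed in are assumptions for illustration, not part of the diff:

```python
from urllib.parse import urlencode


def bangumi_url(base: str, endpoint: str, **params) -> str:
    """Build a URL for one of the bangumi endpoints, e.g. calendar or credits/123."""
    path = f"{base}/api/v1/bangumi/{endpoint}"
    query = urlencode(params)
    return f"{path}?{query}" if query else path


# e.g. the daily calendar, first page of 30 entries
print(bangumi_url("http://localhost:3001", "calendar", page=1, count=30))
```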
```diff
@@ -90,7 +90,7 @@ def movie_top250(page: int = 1,
     """
     Browse Douban series information
     """
-    movies = DoubanChain().movie_top250(page=page, count=count)
+    movies = DoubanChain().movie_top250(page=page, count=count) or []
     return [MediaInfo(douban_info=movie).to_dict() for movie in movies]
@@ -101,7 +101,7 @@ def tv_weekly_chinese(page: int = 1,
     """
     Chinese weekly TV acclaim chart
     """
-    tvs = DoubanChain().tv_weekly_chinese(page=page, count=count)
+    tvs = DoubanChain().tv_weekly_chinese(page=page, count=count) or []
     return [MediaInfo(douban_info=tv).to_dict() for tv in tvs]
@@ -112,7 +112,7 @@ def tv_weekly_global(page: int = 1,
     """
     Global weekly TV acclaim chart
     """
-    tvs = DoubanChain().tv_weekly_global(page=page, count=count)
+    tvs = DoubanChain().tv_weekly_global(page=page, count=count) or []
     return [MediaInfo(douban_info=tv).to_dict() for tv in tvs]
@@ -123,7 +123,7 @@ def tv_animation(page: int = 1,
     """
     Trending animated series
     """
-    tvs = DoubanChain().tv_animation(page=page, count=count)
+    tvs = DoubanChain().tv_animation(page=page, count=count) or []
     return [MediaInfo(douban_info=tv).to_dict() for tv in tvs]
@@ -134,7 +134,7 @@ def movie_hot(page: int = 1,
     """
     Trending movies
     """
-    movies = DoubanChain().movie_hot(page=page, count=count)
+    movies = DoubanChain().movie_hot(page=page, count=count) or []
     return [MediaInfo(douban_info=movie).to_dict() for movie in movies]
@@ -145,7 +145,7 @@ def tv_hot(page: int = 1,
     """
     Trending TV series
     """
-    tvs = DoubanChain().tv_hot(page=page, count=count)
+    tvs = DoubanChain().tv_hot(page=page, count=count) or []
     return [MediaInfo(douban_info=tv).to_dict() for tv in tvs]
```
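Each hunk above applies the same defensive pattern: the chain call may return None when nothing is found, and iterating None in the list comprehension would raise a TypeError. A minimal standalone illustration of the guard:

```python
def to_dicts(items):
    # `items` may be None when the upstream chain call finds nothing
    items = items or []
    return [{"title": i} for i in items]


print(to_dicts(None))       # empty list instead of a TypeError
print(to_dicts(["Heat"]))
```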
```diff
@@ -4,6 +4,7 @@ from fastapi import APIRouter, Depends
 
 from app import schemas
 from app.chain.download import DownloadChain
+from app.chain.media import MediaChain
 from app.core.context import MediaInfo, Context, TorrentInfo
 from app.core.metainfo import MetaInfo
 from app.core.security import verify_token
@@ -14,7 +15,7 @@ router = APIRouter()
 
 
 @router.get("/", summary="正在下载", response_model=List[schemas.DownloadingTorrent])
-def read_downloading(
+def read(
         _: schemas.TokenPayload = Depends(verify_token)) -> Any:
     """
     Query the tasks currently downloading
@@ -23,11 +24,10 @@ def read_downloading(
 
 
 @router.post("/", summary="添加下载", response_model=schemas.Response)
-def add_downloading(
+def download(
         media_in: schemas.MediaInfo,
         torrent_in: schemas.TorrentInfo,
-        current_user: User = Depends(get_current_active_user),
-        _: schemas.TokenPayload = Depends(verify_token)) -> Any:
+        current_user: User = Depends(get_current_active_user)) -> Any:
     """
     Add a download task
     """
@@ -51,8 +51,36 @@ def add_downloading(
     })
 
 
+@router.post("/add", summary="添加下载", response_model=schemas.Response)
+def add(
+        torrent_in: schemas.TorrentInfo,
+        current_user: User = Depends(get_current_active_user)) -> Any:
+    """
+    Add a download task
+    """
+    # Metadata
+    metainfo = MetaInfo(title=torrent_in.title, subtitle=torrent_in.description)
+    # Media information
+    mediainfo = MediaChain().recognize_media(meta=metainfo)
+    if not mediainfo:
+        return schemas.Response(success=False, message="无法识别媒体信息")
+    # Torrent information
+    torrentinfo = TorrentInfo()
+    torrentinfo.from_dict(torrent_in.dict())
+    # Context
+    context = Context(
+        meta_info=metainfo,
+        media_info=mediainfo,
+        torrent_info=torrentinfo
+    )
+    did = DownloadChain().download_single(context=context, userid=current_user.name, username=current_user.name)
+    return schemas.Response(success=True if did else False, data={
+        "download_id": did
+    })
+
+
 @router.get("/start/{hashString}", summary="开始任务", response_model=schemas.Response)
-def start_downloading(
+def start(
         hashString: str,
         _: schemas.TokenPayload = Depends(verify_token)) -> Any:
     """
@@ -63,7 +91,7 @@ def start_downloading(
 
 
 @router.get("/stop/{hashString}", summary="暂停任务", response_model=schemas.Response)
-def stop_downloading(
+def stop(
         hashString: str,
         _: schemas.TokenPayload = Depends(verify_token)) -> Any:
     """
@@ -74,7 +102,7 @@ def stop_downloading(
 
 
 @router.delete("/{hashString}", summary="删除下载任务", response_model=schemas.Response)
-def remove_downloading(
+def info(
         hashString: str,
         _: schemas.TokenPayload = Depends(verify_token)) -> Any:
     """
```
```diff
@@ -106,28 +106,17 @@ def media_info(mediaid: str, type_name: str,
     Query themoviedb or Douban media information by media ID, type_name: 电影/电视剧
     """
     mtype = MediaType(type_name)
-    tmdbid, doubanid = None, None
+    tmdbid, doubanid, bangumiid = None, None, None
     if mediaid.startswith("tmdb:"):
         tmdbid = int(mediaid[5:])
     elif mediaid.startswith("douban:"):
         doubanid = mediaid[7:]
-    if not tmdbid and not doubanid:
+    elif mediaid.startswith("bangumi:"):
+        bangumiid = int(mediaid[8:])
+    if not tmdbid and not doubanid and not bangumiid:
         return schemas.MediaInfo()
-    if settings.RECOGNIZE_SOURCE == "themoviedb":
-        if not tmdbid and doubanid:
-            tmdbinfo = MediaChain().get_tmdbinfo_by_doubanid(doubanid=doubanid, mtype=mtype)
-            if tmdbinfo:
-                tmdbid = tmdbinfo.get("id")
-            else:
-                return schemas.MediaInfo()
-    else:
-        if not doubanid and tmdbid:
-            doubaninfo = MediaChain().get_doubaninfo_by_tmdbid(tmdbid=tmdbid, mtype=mtype)
-            if doubaninfo:
-                doubanid = doubaninfo.get("id")
-            else:
-                return schemas.MediaInfo()
-    mediainfo = MediaChain().recognize_media(tmdbid=tmdbid, doubanid=doubanid, mtype=mtype)
+    # Recognize
+    mediainfo = MediaChain().recognize_media(tmdbid=tmdbid, doubanid=doubanid, bangumiid=bangumiid, mtype=mtype)
     if mediainfo:
         MediaChain().obtain_images(mediainfo)
     return mediainfo.to_dict()
```
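The prefix convention handled above can be distilled into a small helper. This sketch mirrors the diff's parsing rules exactly; note that the TMDB and Bangumi IDs are cast to int while the Douban ID stays a string:

```python
def parse_mediaid(mediaid: str):
    """Split a 'tmdb:', 'douban:' or 'bangumi:' prefixed media ID."""
    tmdbid = doubanid = bangumiid = None
    if mediaid.startswith("tmdb:"):
        tmdbid = int(mediaid[5:])
    elif mediaid.startswith("douban:"):
        doubanid = mediaid[7:]          # Douban IDs are kept as strings
    elif mediaid.startswith("bangumi:"):
        bangumiid = int(mediaid[8:])
    return tmdbid, doubanid, bangumiid


print(parse_mediaid("bangumi:140001"))
```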
@@ -1,18 +1,24 @@
|
||||
import json
|
||||
from typing import Union, Any, List
|
||||
|
||||
from fastapi import APIRouter, BackgroundTasks, Depends
|
||||
from fastapi import Request
|
||||
from sqlalchemy.orm import Session
|
||||
from starlette.responses import PlainTextResponse
|
||||
|
||||
from app import schemas
|
||||
from app.chain.message import MessageChain
|
||||
from app.core.config import settings
|
||||
from app.core.security import verify_token
|
||||
from app.db import get_db
|
||||
from app.db.models import User
|
||||
from app.db.models.message import Message
|
||||
from app.db.systemconfig_oper import SystemConfigOper
|
||||
from app.db.userauth import get_current_active_superuser
|
||||
from app.log import logger
|
||||
from app.modules.wechat.WXBizMsgCrypt3 import WXBizMsgCrypt
|
||||
from app.schemas import NotificationSwitch
|
||||
from app.schemas.types import SystemConfigKey, NotificationType
|
||||
from app.schemas.types import SystemConfigKey, NotificationType, MessageChannel
|
||||
|
||||
router = APIRouter()
|
||||
|
||||
@@ -36,6 +42,39 @@ async def user_message(background_tasks: BackgroundTasks, request: Request):
|
||||
return schemas.Response(success=True)
|
||||
|
||||
|
||||
@router.post("/web", summary="接收WEB消息", response_model=schemas.Response)
|
||||
def web_message(text: str, current_user: User = Depends(get_current_active_superuser)):
|
||||
"""
|
||||
WEB消息响应
|
||||
"""
|
||||
MessageChain().handle_message(
|
||||
channel=MessageChannel.Web,
|
||||
userid=current_user.name,
|
||||
username=current_user.name,
|
||||
text=text
|
||||
)
|
||||
return schemas.Response(success=True)
|
||||
|
||||
|
||||
@router.get("/web", summary="获取WEB消息", response_model=List[dict])
|
||||
def get_web_message(_: schemas.TokenPayload = Depends(verify_token),
|
||||
db: Session = Depends(get_db),
|
||||
page: int = 1,
|
||||
count: int = 20):
|
||||
"""
|
||||
获取WEB消息列表
|
||||
"""
|
||||
ret_messages = []
|
||||
messages = Message.list_by_page(db, page=page, count=count)
|
||||
for message in messages:
|
||||
try:
|
||||
ret_messages.append(message.to_dict())
|
||||
except Exception as e:
|
||||
logger.error(f"获取WEB消息列表失败: {str(e)}")
|
||||
continue
|
||||
return ret_messages
|
||||
|
||||
|
||||
def wechat_verify(echostr: str, msg_signature: str,
|
||||
timestamp: Union[str, int], nonce: str) -> Any:
|
||||
"""
|
||||
@@ -103,7 +142,7 @@ def read_switchs(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
|
||||
def set_switchs(switchs: List[NotificationSwitch],
|
||||
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
|
||||
"""
|
||||
查询通知消息渠道开关
|
||||
设置通知消息渠道开关
|
||||
"""
|
||||
switch_list = []
|
||||
for switch in switchs:
|
||||
|
||||
```diff
@@ -52,6 +52,20 @@ def search_by_id(mediaid: str,
                                                       mtype=mtype, area=area)
         else:
             torrents = SearchChain().search_by_id(doubanid=doubanid, mtype=mtype, area=area)
+    elif mediaid.startswith("bangumi:"):
+        bangumiid = int(mediaid.replace("bangumi:", ""))
+        if settings.RECOGNIZE_SOURCE == "themoviedb":
+            # Recognize the TMDB ID via the Bangumi ID
+            tmdbinfo = MediaChain().get_tmdbinfo_by_bangumiid(bangumiid=bangumiid)
+            if tmdbinfo:
+                torrents = SearchChain().search_by_id(tmdbid=tmdbinfo.get("id"),
+                                                      mtype=mtype, area=area)
+        else:
+            # Recognize the Douban ID via the Bangumi ID
+            doubaninfo = MediaChain().get_doubaninfo_by_bangumiid(bangumiid=bangumiid)
+            if doubaninfo:
+                torrents = SearchChain().search_by_id(doubanid=doubaninfo.get("id"),
+                                                      mtype=mtype, area=area)
     else:
         return []
     return [torrent.to_dict() for torrent in torrents]
```
```diff
@@ -50,6 +50,9 @@ def add_site(
         return schemas.Response(success=False, message=f"{domain} 站点己存在")
     # Save the site information
     site_in.domain = domain
+    # Normalize the URL format
+    _scheme, _netloc = StringUtils.get_url_netloc(site_in.url)
+    site_in.url = f"{_scheme}://{_netloc}/"
     site_in.name = site_info.get("name")
     site_in.id = None
     site = Site(**site_in.dict())
@@ -74,6 +77,9 @@ def update_site(
     site = Site.get(db, site_in.id)
     if not site:
         return schemas.Response(success=False, message="站点不存在")
+    # Normalize the URL format
+    _scheme, _netloc = StringUtils.get_url_netloc(site_in.url)
+    site_in.url = f"{_scheme}://{_netloc}/"
     site.update(db, site_in.dict())
     # Notify to cache the site icon
     EventManager().send_event(EventType.CacheSiteIcon, {
```
```diff
@@ -65,7 +65,7 @@ def create_subscribe(
     else:
         mtype = None
     # Douban handling
-    if subscribe_in.doubanid:
+    if subscribe_in.doubanid or subscribe_in.bangumiid:
         meta = MetaInfo(subscribe_in.name)
         subscribe_in.name = meta.name
         subscribe_in.season = meta.begin_season
@@ -80,6 +80,7 @@ def create_subscribe(
         tmdbid=subscribe_in.tmdbid,
         season=subscribe_in.season,
         doubanid=subscribe_in.doubanid,
+        bangumiid=subscribe_in.bangumiid,
         username=current_user.name,
         best_version=subscribe_in.best_version,
         save_path=subscribe_in.save_path,
@@ -131,9 +132,10 @@ def subscribe_mediaid(
         db: Session = Depends(get_db),
         _: schemas.TokenPayload = Depends(verify_token)) -> Any:
     """
-    Query a subscription by TMDBID or Douban ID, tmdb:/douban:
+    Query a subscription by TMDBID/Douban ID/BangumiId, tmdb:/douban:
     """
     result = None
+    title_check = False
     if mediaid.startswith("tmdb:"):
         tmdbid = mediaid[5:]
         if not tmdbid or not str(tmdbid).isdigit():
@@ -144,14 +146,21 @@ def subscribe_mediaid(
         if not doubanid:
             return Subscribe()
         result = Subscribe.get_by_doubanid(db, doubanid)
-        # If the Douban ID lookup finds no subscription, fall back to a title search,
-        # which may also return same-named results
-        if not result and title:
-            meta = MetaInfo(title)
-            if season:
-                meta.begin_season = season
-            result = Subscribe.get_by_title(db, title=meta.name, season=meta.begin_season)
+        if not result and title:
+            title_check = True
+    elif mediaid.startswith("bangumi:"):
+        bangumiid = mediaid[8:]
+        if not bangumiid or not str(bangumiid).isdigit():
+            return Subscribe()
+        result = Subscribe.get_by_bangumiid(db, int(bangumiid))
+        if not result and title:
+            title_check = True
+    # Check the subscription by name
+    if title_check and title:
+        meta = MetaInfo(title)
+        if season:
+            meta.begin_season = season
+        result = Subscribe.get_by_title(db, title=meta.name, season=meta.begin_season)
     if result and result.sites:
         result.sites = json.loads(result.sites)
```
```diff
@@ -138,7 +138,7 @@ def set_setting(key: str, value: Union[list, dict, bool, int, str] = None,
 
 
 @router.get("/message", summary="实时消息")
-def get_message(token: str):
+def get_message(token: str, role: str = "sys"):
     """
     Fetch system messages in real time, returned in SSE format
     """
@@ -152,7 +152,7 @@ def get_message(token: str):
 
     def event_generator():
         while True:
-            detail = message.get()
+            detail = message.get(role)
             yield 'data: %s\n\n' % (detail or '')
             time.sleep(3)
```
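The generator above emits standard Server-Sent Events frames (`data: ...` followed by a blank line, every 3 seconds). A hypothetical client-side helper for splitting such frames, not part of the diff:

```python
def parse_sse(stream: str):
    """Extract the payloads from a concatenation of 'data: ...' SSE frames."""
    return [line[len("data: "):]
            for line in stream.splitlines()
            if line.startswith("data: ")]


print(parse_sse("data: first\n\ndata: second\n\n"))
```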
app/api/servcookie.py (new file, 137 lines)

```python
import gzip
import json
from hashlib import md5
from typing import Annotated, Callable
from typing import Any, Dict, Optional

from fastapi import APIRouter, Depends, HTTPException, Path, Request, Response
from fastapi.responses import PlainTextResponse
from fastapi.routing import APIRoute

from app import schemas
from app.core.config import settings
from app.log import logger
from app.utils.common import decrypt


class GzipRequest(Request):

    async def body(self) -> bytes:
        if not hasattr(self, "_body"):
            body = await super().body()
            if "gzip" in self.headers.getlist("Content-Encoding"):
                body = gzip.decompress(body)
            self._body = body
        return self._body


class GzipRoute(APIRoute):

    def get_route_handler(self) -> Callable:
        original_route_handler = super().get_route_handler()

        async def custom_route_handler(request: Request) -> Response:
            request = GzipRequest(request.scope, request.receive)
            return await original_route_handler(request)

        return custom_route_handler


async def verify_server_enabled():
    """
    Check whether the CookieCloud service routes are enabled
    """
    if not settings.COOKIECLOUD_ENABLE_LOCAL:
        raise HTTPException(status_code=400, detail="本地CookieCloud服务器未启用")
    return True


cookie_router = APIRouter(route_class=GzipRoute,
                          tags=['servcookie'],
                          dependencies=[Depends(verify_server_enabled)])


@cookie_router.get("/", response_class=PlainTextResponse)
def get_root():
    return "Hello MoviePilot! COOKIECLOUD API ROOT = /cookiecloud"


@cookie_router.post("/", response_class=PlainTextResponse)
def post_root():
    return "Hello MoviePilot! COOKIECLOUD API ROOT = /cookiecloud"


@cookie_router.post("/update")
async def update_cookie(req: schemas.CookieData):
    """
    Upload cookie data
    """
    file_path = settings.COOKIE_PATH / f"{req.uuid}.json"
    content = json.dumps({"encrypted": req.encrypted})
    with open(file_path, encoding="utf-8", mode="w") as file:
        file.write(content)
    with open(file_path, encoding="utf-8", mode="r") as file:
        read_content = file.read()
    if read_content == content:
        return {"action": "done"}
    else:
        return {"action": "error"}


def load_encrypt_data(uuid: str) -> Dict[str, Any]:
    """
    Load the locally stored encrypted raw data
    """
    file_path = settings.COOKIE_PATH / f"{uuid}.json"

    # Check whether the file exists
    if not file_path.exists():
        raise HTTPException(status_code=404, detail="Item not found")

    # Read the file
    with open(file_path, encoding="utf-8", mode="r") as file:
        read_content = file.read()
    data = json.loads(read_content.encode("utf-8"))
    return data


def get_decrypted_cookie_data(uuid: str, password: str,
                              encrypted: str) -> Optional[Dict[str, Any]]:
    """
    Load the local encrypted data and decrypt it into cookies
    """
    key_md5 = md5()
    key_md5.update((uuid + '-' + password).encode('utf-8'))
    aes_key = (key_md5.hexdigest()[:16]).encode('utf-8')

    if encrypted:
        try:
            decrypted_data = decrypt(encrypted, aes_key).decode('utf-8')
            decrypted_data = json.loads(decrypted_data)
            if 'cookie_data' in decrypted_data:
                return decrypted_data
        except Exception as e:
            logger.error(f"解密Cookie数据失败:{str(e)}")
            return None
    else:
        return None


@cookie_router.get("/get/{uuid}")
async def get_cookie(
        uuid: Annotated[str, Path(min_length=5, pattern="^[a-zA-Z0-9]+$")]):
    """
    GET download of the encrypted data
    """
    return load_encrypt_data(uuid)


@cookie_router.post("/get/{uuid}")
async def post_cookie(
        uuid: Annotated[str, Path(min_length=5, pattern="^[a-zA-Z0-9]+$")],
        request: schemas.CookiePassword):
    """
    POST download of the encrypted data
    """
    data = load_encrypt_data(uuid)
    return get_decrypted_cookie_data(uuid, request.password, data["encrypted"])
```
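The key derivation in `get_decrypted_cookie_data` follows the CookieCloud convention: the AES key is the first 16 hex characters of `md5(uuid + '-' + password)`. A standalone sketch of just that step:

```python
from hashlib import md5


def derive_aes_key(uuid: str, password: str) -> bytes:
    """First 16 hex chars of md5('<uuid>-<password>'), encoded as bytes."""
    digest = md5(f"{uuid}-{password}".encode("utf-8")).hexdigest()
    return digest[:16].encode("utf-8")


print(derive_aes_key("abc123", "secret"))
```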
```diff
@@ -15,6 +15,8 @@ from app.core.context import MediaInfo, TorrentInfo
 from app.core.event import EventManager
 from app.core.meta import MetaBase
 from app.core.module import ModuleManager
+from app.db.message_oper import MessageOper
+from app.helper.message import MessageHelper
 from app.log import logger
 from app.schemas import TransferInfo, TransferTorrent, ExistMediaInfo, DownloadingTorrent, CommingMessage, Notification, \
     WebhookEventInfo, TmdbEpisode
@@ -33,6 +35,8 @@ class ChainBase(metaclass=ABCMeta):
         """
         self.modulemanager = ModuleManager()
         self.eventmanager = EventManager()
+        self.messageoper = MessageOper()
+        self.messagehelper = MessageHelper()
 
     @staticmethod
     def load_cache(filename: str) -> Any:
@@ -115,6 +119,7 @@ class ChainBase(metaclass=ABCMeta):
                         mtype: MediaType = None,
                         tmdbid: int = None,
                         doubanid: str = None,
+                        bangumiid: int = None,
                         cache: bool = True) -> Optional[MediaInfo]:
         """
         Recognize media information
@@ -122,6 +127,7 @@ class ChainBase(metaclass=ABCMeta):
         :param mtype: media type to recognize, paired with tmdbid
         :param tmdbid: tmdbid
         :param doubanid: Douban ID
+        :param bangumiid: BangumiID
         :param cache: whether to use the cache
         :return: recognized media information, including episode info
         """
@@ -132,8 +138,12 @@ class ChainBase(metaclass=ABCMeta):
             tmdbid = meta.tmdbid
         if not doubanid and hasattr(meta, "doubanid"):
             doubanid = meta.doubanid
+        # When a tmdbid exists, do not use the other IDs
+        if tmdbid:
+            doubanid = None
+            bangumiid = None
         return self.run_module("recognize_media", meta=meta, mtype=mtype,
-                               tmdbid=tmdbid, doubanid=doubanid, cache=cache)
+                               tmdbid=tmdbid, doubanid=doubanid, bangumiid=bangumiid, cache=cache)
 
     def match_doubaninfo(self, name: str, imdbid: str = None,
                          mtype: MediaType = None, year: str = None, season: int = None) -> Optional[dict]:
@@ -210,6 +220,14 @@ class ChainBase(metaclass=ABCMeta):
         """
         return self.run_module("tmdb_info", tmdbid=tmdbid, mtype=mtype)
 
+    def bangumi_info(self, bangumiid: int) -> Optional[dict]:
+        """
+        Get Bangumi information
+        :param bangumiid: int
+        :return: Bangumi information
+        """
+        return self.run_module("bangumi_info", bangumiid=bangumiid)
+
     def message_parser(self, body: Any, form: Any,
                        args: Any) -> Optional[CommingMessage]:
         """
@@ -403,6 +421,10 @@ class ChainBase(metaclass=ABCMeta):
         :param message: message body
         :return: success or failure
         """
+        logger.info(f"发送消息:channel={message.channel},"
+                    f"title={message.title}, "
+                    f"text={message.text},"
+                    f"userid={message.userid}")
         # Send the event
         self.eventmanager.send_event(etype=EventType.NoticeMessage,
                                      data={
@@ -413,10 +435,13 @@ class ChainBase(metaclass=ABCMeta):
                                          "image": message.image,
                                          "userid": message.userid,
                                      })
-        logger.info(f"发送消息:channel={message.channel},"
-                    f"title={message.title}, "
-                    f"text={message.text},"
-                    f"userid={message.userid}")
+        # Save the message
+        self.messagehelper.put(message, role="user")
+        self.messageoper.add(channel=message.channel, mtype=message.mtype,
+                             title=message.title, text=message.text,
+                             image=message.image, link=message.link,
+                             userid=message.userid, action=1)
+        # Send
         self.run_module("post_message", message=message)
 
     def post_medias_message(self, message: Notification, medias: List[MediaInfo]) -> Optional[bool]:
@@ -426,6 +451,13 @@ class ChainBase(metaclass=ABCMeta):
         :param medias: media list
         :return: success or failure
         """
+        note_list = [media.to_dict() for media in medias]
+        self.messagehelper.put(message, role="user", note=note_list)
+        self.messageoper.add(channel=message.channel, mtype=message.mtype,
+                             title=message.title, text=message.text,
+                             image=message.image, link=message.link,
+                             userid=message.userid, action=1,
+                             note=note_list)
         return self.run_module("post_medias_message", message=message, medias=medias)
 
     def post_torrents_message(self, message: Notification, torrents: List[Context]) -> Optional[bool]:
@@ -435,6 +467,13 @@ class ChainBase(metaclass=ABCMeta):
         :param torrents: torrent list
         :return: success or failure
         """
+        note_list = [torrent.torrent_info.to_dict() for torrent in torrents]
+        self.messagehelper.put(message, role="user", note=note_list)
+        self.messageoper.add(channel=message.channel, mtype=message.mtype,
+                             title=message.title, text=message.text,
+                             image=message.image, link=message.link,
+                             userid=message.userid, action=1,
+                             note=note_list)
        return self.run_module("post_torrents_message", message=message, torrents=torrents)
 
     def scrape_metadata(self, path: Path, mediainfo: MediaInfo, transfer_type: str,
```
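The new guard in `recognize_media` gives TMDB IDs absolute precedence: when a tmdbid is present, the Douban and Bangumi IDs are dropped before the module call. The rule in isolation:

```python
def resolve_ids(tmdbid=None, doubanid=None, bangumiid=None):
    # Per the diff: when tmdbid is known, do not use the other IDs
    if tmdbid:
        doubanid = None
        bangumiid = None
    return tmdbid, doubanid, bangumiid


print(resolve_ids(tmdbid=603, doubanid="1292052", bangumiid=140001))
```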
app/chain/bangumi.py (new file, 42 lines)

```python
from typing import Optional, List

from app.chain import ChainBase
from app.utils.singleton import Singleton


class BangumiChain(ChainBase, metaclass=Singleton):
    """
    Bangumi processing chain, runs as a singleton
    """

    def calendar(self, page: int = 1, count: int = 30) -> Optional[List[dict]]:
        """
        Get the Bangumi daily broadcast calendar
        :param page: page number
        :param count: entries per page
        """
        return self.run_module("bangumi_calendar", page=page, count=count)

    def bangumi_info(self, bangumiid: int) -> Optional[dict]:
        """
        Get Bangumi information
        :param bangumiid: BangumiID
        :return: Bangumi information
        """
        return self.run_module("bangumi_info", bangumiid=bangumiid)

    def bangumi_credits(self, bangumiid: int, page: int = 1, count: int = 20) -> List[dict]:
        """
        Query the cast and crew list by BangumiID
        :param bangumiid: BangumiID
        :param page: page number
        :param count: count
        """
        return self.run_module("bangumi_credits", bangumiid=bangumiid, page=page, count=count)

    def bangumi_recommend(self, bangumiid: int) -> List[dict]:
        """
        Query recommendations by BangumiID
        :param bangumiid: BangumiID
        """
        return self.run_module("bangumi_recommend", bangumiid=bangumiid)
```
```diff
@@ -229,6 +229,32 @@ class MediaChain(ChainBase, metaclass=Singleton):
         )
         return tmdbinfo
 
+    def get_tmdbinfo_by_bangumiid(self, bangumiid: int) -> Optional[dict]:
+        """
+        Get TMDB information from a BangumiID
+        """
+        bangumiinfo = self.bangumi_info(bangumiid=bangumiid)
+        if bangumiinfo:
+            # Prefer matching by the original title
+            if bangumiinfo.get("name"):
+                meta = MetaInfo(title=bangumiinfo.get("name"))
+            else:
+                meta = MetaInfo(title=bangumiinfo.get("name_cn"))
+            # Year
+            release_date = bangumiinfo.get("date") or bangumiinfo.get("air_date")
+            if release_date:
+                year = release_date[:4]
+            else:
+                year = None
+            # Recognize the TMDB media information by name
+            return self.match_tmdbinfo(
+                name=meta.name,
+                year=year,
+                mtype=MediaType.TV,
+                season=meta.begin_season
+            )
+        return None
+
     def get_doubaninfo_by_tmdbid(self, tmdbid: int,
                                  mtype: MediaType = None, season: int = None) -> Optional[dict]:
         """
@@ -261,3 +287,29 @@ class MediaChain(ChainBase, metaclass=Singleton):
             imdbid=imdbid
         )
         return None
+
+    def get_doubaninfo_by_bangumiid(self, bangumiid: int) -> Optional[dict]:
+        """
+        Get Douban information from a BangumiID
+        """
+        bangumiinfo = self.bangumi_info(bangumiid=bangumiid)
+        if bangumiinfo:
+            # Prefer matching by the Chinese title
+            if bangumiinfo.get("name_cn"):
+                meta = MetaInfo(title=bangumiinfo.get("name_cn"))
+            else:
+                meta = MetaInfo(title=bangumiinfo.get("name"))
+            # Year
+            release_date = bangumiinfo.get("date") or bangumiinfo.get("air_date")
+            if release_date:
+                year = release_date[:4]
+            else:
+                year = None
+            # Recognize the Douban media information by name
+            return self.match_doubaninfo(
+                name=meta.name,
+                year=year,
+                mtype=MediaType.TV,
+                season=meta.begin_season
+            )
+        return None
```
||||
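Both helpers apply the same core logic: pick a title (original name for TMDB matching, Chinese name for Douban matching, falling back to the other), then slice the year out of an ISO-style date string. The fallback in isolation, as a hypothetical helper (not a function in the repo):

```python
def pick_title_and_year(info: dict, prefer_cn: bool = False):
    """Mimic the fallback above: choose a title, then slice the year
    from an ISO-style date string (illustrative helper)."""
    if prefer_cn:
        title = info.get("name_cn") or info.get("name")
    else:
        title = info.get("name") or info.get("name_cn")
    release_date = info.get("date") or info.get("air_date")
    year = release_date[:4] if release_date else None
    return title, year


info = {"name": "Shingeki no Kyojin", "name_cn": "进击的巨人", "date": "2013-04-07"}
print(pick_title_and_year(info))                  # ('Shingeki no Kyojin', '2013')
print(pick_title_and_year(info, prefer_cn=True))  # ('进击的巨人', '2013')
```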
@@ -12,9 +12,11 @@ from app.core.config import settings
 from app.core.context import MediaInfo, Context
 from app.core.event import EventManager
 from app.core.meta import MetaBase
+from app.db.message_oper import MessageOper
+from app.helper.message import MessageHelper
 from app.helper.torrent import TorrentHelper
 from app.log import logger
-from app.schemas import Notification, NotExistMediaInfo
+from app.schemas import Notification, NotExistMediaInfo, CommingMessage
 from app.schemas.types import EventType, MessageChannel, MediaType
 from app.utils.string import StringUtils
 
@@ -43,6 +45,8 @@ class MessageChain(ChainBase):
         self.mediachain = MediaChain()
         self.eventmanager = EventManager()
         self.torrenthelper = TorrentHelper()
+        self.messagehelper = MessageHelper()
+        self.messageoper = MessageOper()
 
     def __get_noexits_info(
             self,
@@ -100,10 +104,8 @@ class MessageChain(ChainBase):
 
     def process(self, body: Any, form: Any, args: Any) -> None:
         """
-        识别消息内容,执行操作
+        调用模块识别消息内容
         """
-        # 申明全局变量
-        global _current_page, _current_meta, _current_media
         # 获取消息内容
         info = self.message_parser(body=body, form=form, args=args)
         if not info:
@@ -122,10 +124,34 @@ class MessageChain(ChainBase):
         if not text:
             logger.debug(f'未识别到消息内容::{body}{form}{args}')
             return
+        # 处理消息
+        self.handle_message(channel=channel, userid=userid, username=username, text=text)
+
+    def handle_message(self, channel: MessageChannel, userid: Union[str, int], username: str, text: str) -> None:
+        """
+        识别消息内容,执行操作
+        """
+        # 申明全局变量
+        global _current_page, _current_meta, _current_media
         # 加载缓存
         user_cache: Dict[str, dict] = self.load_cache(self._cache_file) or {}
         # 处理消息
         logger.info(f'收到用户消息内容,用户:{userid},内容:{text}')
+        # 保存消息
+        self.messagehelper.put(
+            CommingMessage(
+                userid=userid,
+                username=username,
+                channel=channel,
+                text=text
+            ), role="user")
+        self.messageoper.add(
+            channel=channel,
+            userid=username or userid,
+            text=text,
+            action=0
+        )
         # 处理消息
         if text.startswith('/'):
             # 执行命令
             self.eventmanager.send_event(
@@ -40,11 +40,7 @@ class SiteChain(ChainBase):
         self.rsshelper = RssHelper()
         self.cookiehelper = CookieHelper()
         self.message = MessageHelper()
-        self.cookiecloud = CookieCloudHelper(
-            server=settings.COOKIECLOUD_HOST,
-            key=settings.COOKIECLOUD_KEY,
-            password=settings.COOKIECLOUD_PASSWORD
-        )
+        self.cookiecloud = CookieCloudHelper()
 
         # 特殊站点登录验证
         self.special_site_test = {
@@ -302,20 +298,21 @@ class SiteChain(ChainBase):
         if not site_info:
             return False, f"站点【{url}】不存在"
 
-        # 特殊站点测试
-        if self.special_site_test.get(domain):
-            return self.special_site_test[domain](site_info)
-
-        # 通用站点测试
-        site_url = site_info.url
-        site_cookie = site_info.cookie
-        ua = site_info.ua
-        render = site_info.render
-        public = site_info.public
-        proxies = settings.PROXY if site_info.proxy else None
-        proxy_server = settings.PROXY_SERVER if site_info.proxy else None
-        # 模拟登录
         try:
+            # 特殊站点测试
+            if self.special_site_test.get(domain):
+                return self.special_site_test[domain](site_info)
+
+            # 通用站点测试
+            site_url = site_info.url
+            site_cookie = site_info.cookie
+            ua = site_info.ua
+            render = site_info.render
+            public = site_info.public
+            proxies = settings.PROXY if site_info.proxy else None
+            proxy_server = settings.PROXY_SERVER if site_info.proxy else None
+
             # 访问链接
             if render:
                 page_source = PlaywrightHelper().get_page_source(url=site_url,
@@ -45,6 +45,7 @@ class SubscribeChain(ChainBase):
             mtype: MediaType = None,
             tmdbid: int = None,
             doubanid: str = None,
+            bangumiid: int = None,
             season: int = None,
             channel: MessageChannel = None,
             userid: str = None,
@@ -100,6 +101,7 @@ class SubscribeChain(ChainBase):
             mediainfo = self.recognize_media(mtype=mediainfo.type,
                                              tmdbid=mediainfo.tmdb_id,
                                              doubanid=mediainfo.douban_id,
                                              bangumiid=mediainfo.bangumi_id,
                                              cache=False)
             if not mediainfo:
                 logger.error(f"媒体信息识别失败!")
@@ -124,6 +126,8 @@ class SubscribeChain(ChainBase):
         # 合并信息
         if doubanid:
             mediainfo.douban_id = doubanid
+        if bangumiid:
+            mediainfo.bangumi_id = bangumiid
         # 添加订阅
         sid, err_msg = self.subscribeoper.add(mediainfo, season=season, username=username, **kwargs)
         if not sid:
@@ -187,6 +187,8 @@ class Settings(BaseSettings):
     PLEX_TOKEN: Optional[str] = None
     # 转移方式 link/copy/move/softlink
     TRANSFER_TYPE: str = "copy"
+    # CookieCloud是否启动本地服务
+    COOKIECLOUD_ENABLE_LOCAL: Optional[bool] = False
     # CookieCloud服务器地址
     COOKIECLOUD_HOST: str = "https://movie-pilot.org/cookiecloud"
     # CookieCloud用户KEY
@@ -232,6 +234,8 @@ class Settings(BaseSettings):
     AUTO_UPDATE_RESOURCE: bool = True
     # 元数据识别缓存过期时间(小时)
     META_CACHE_EXPIRE: int = 0
+    # 是否启用DOH解析域名
+    DOH_ENABLE: bool = True
 
     @validator("SUBSCRIBE_RSS_INTERVAL",
                "COOKIECLOUD_INTERVAL",
@@ -275,6 +279,10 @@ class Settings(BaseSettings):
     @property
     def LOG_PATH(self):
         return self.CONFIG_PATH / "logs"
 
+    @property
+    def COOKIE_PATH(self):
+        return self.CONFIG_PATH / "cookies"
+
     @property
     def CACHE_CONF(self):
@@ -397,6 +405,9 @@ class Settings(BaseSettings):
         with self.LOG_PATH as p:
             if not p.exists():
                 p.mkdir(parents=True, exist_ok=True)
+        with self.COOKIE_PATH as p:
+            if not p.exists():
+                p.mkdir(parents=True, exist_ok=True)
 
     class Config:
         case_sensitive = True
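`COOKIE_PATH` follows the same pattern as `LOG_PATH`: a property derived from `CONFIG_PATH`, created on startup with `mkdir(parents=True, exist_ok=True)` so repeated starts are idempotent. A minimal sketch of the idiom (a plain class standing in for the pydantic `Settings`, run against a temp directory):

```python
import tempfile
from pathlib import Path


class Settings:
    """Minimal stand-in for the pydantic Settings above (illustrative only)."""
    def __init__(self, config_path: Path):
        self.CONFIG_PATH = config_path

    @property
    def COOKIE_PATH(self) -> Path:
        return self.CONFIG_PATH / "cookies"


with tempfile.TemporaryDirectory() as tmp:
    s = Settings(Path(tmp))
    p = s.COOKIE_PATH
    if not p.exists():
        # parents=True also creates CONFIG_PATH itself if missing;
        # exist_ok=True makes a second startup a no-op
        p.mkdir(parents=True, exist_ok=True)
    print(p.exists())  # True
```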
@@ -153,6 +153,8 @@ class MediaInfo:
     tvdb_id: int = None
     # 豆瓣ID
     douban_id: str = None
+    # Bangumi ID
+    bangumi_id: int = None
     # 媒体原语种
     original_language: str = None
     # 媒体原发行标题
@@ -185,6 +187,8 @@ class MediaInfo:
     tmdb_info: dict = field(default_factory=dict)
     # 豆瓣 INFO
     douban_info: dict = field(default_factory=dict)
+    # Bangumi INFO
+    bangumi_info: dict = field(default_factory=dict)
     # 导演
     directors: List[dict] = field(default_factory=list)
     # 演员
@@ -240,6 +244,8 @@ class MediaInfo:
             self.set_tmdb_info(self.tmdb_info)
         if self.douban_info:
             self.set_douban_info(self.douban_info)
+        if self.bangumi_info:
+            self.set_bangumi_info(self.bangumi_info)
 
     def __setattr__(self, name: str, value: Any):
         self.__dict__[name] = value
@@ -540,6 +546,69 @@ class MediaInfo:
             if not hasattr(self, key):
                 setattr(self, key, value)
 
+    def set_bangumi_info(self, info: dict):
+        """
+        初始化Bangumi信息
+        """
+        if not info:
+            return
+        # 本体
+        self.bangumi_info = info
+        # Bangumi ID
+        self.bangumi_id = info.get("id")
+        # 类型
+        if not self.type:
+            self.type = MediaType.TV
+        # 标题
+        if not self.title:
+            self.title = info.get("name_cn") or info.get("name")
+        # 原语种标题
+        if not self.original_title:
+            self.original_title = info.get("name")
+        # 识别标题中的季
+        meta = MetaInfo(self.title)
+        # 季
+        if not self.season:
+            self.season = meta.begin_season
+        # 评分
+        if not self.vote_average:
+            rating = info.get("rating")
+            if rating:
+                vote_average = float(rating.get("score"))
+            else:
+                vote_average = 0
+            self.vote_average = vote_average
+        # 发行日期
+        if not self.release_date:
+            self.release_date = info.get("date") or info.get("air_date")
+        # 年份
+        if not self.year:
+            self.year = self.release_date[:4] if self.release_date else None
+        # 海报
+        if not self.poster_path:
+            self.poster_path = info.get("images", {}).get("large")
+        # 简介
+        if not self.overview:
+            self.overview = info.get("summary")
+        # 别名
+        if not self.names:
+            infobox = info.get("infobox")
+            if infobox:
+                akas = [item.get("value") for item in infobox if item.get("key") == "别名"]
+                if akas:
+                    self.names = [aka.get("v") for aka in akas[0]]
+        # 剧集
+        if self.type == MediaType.TV and not self.seasons:
+            meta = MetaInfo(self.title)
+            season = meta.begin_season or 1
+            episodes_count = info.get("total_episodes")
+            if episodes_count:
+                self.seasons[season] = list(range(1, episodes_count + 1))
+        # 演员
+        if not self.actors:
+            self.actors = info.get("actors") or []
 
     @property
     def title_year(self):
         if self.title:
@@ -558,6 +627,8 @@ class MediaInfo:
             return "https://www.themoviedb.org/tv/%s" % self.tmdb_id
         elif self.douban_id:
             return "https://movie.douban.com/subject/%s" % self.douban_id
+        elif self.bangumi_id:
+            return "http://bgm.tv/subject/%s" % self.bangumi_id
         return ""
 
     @property
@@ -619,6 +690,9 @@ class MediaInfo:
         dicts["type"] = self.type.value if self.type else None
         dicts["detail_link"] = self.detail_link
         dicts["title_year"] = self.title_year
+        dicts["tmdb_info"] = None
+        dicts["douban_info"] = None
+        dicts["bangumi_info"] = None
         return dicts
 
     def clear(self):
@@ -627,6 +701,7 @@ class MediaInfo:
         """
         self.tmdb_info = {}
         self.douban_info = {}
+        self.bangumi_info = {}
         self.seasons = {}
         self.genres = []
         self.season_info = []
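Because Bangumi subjects carry only a flat `total_episodes` count, `set_bangumi_info` synthesizes the `seasons` map as a single season containing episodes 1..N. The construction in isolation (illustrative helper, not a function in the repo):

```python
def build_seasons(total_episodes: int, begin_season: int = None) -> dict:
    # One season (defaulting to 1) mapped to the full episode list,
    # mirroring the list(range(1, episodes_count + 1)) line above
    if not total_episodes:
        return {}
    season = begin_season or 1
    return {season: list(range(1, total_episodes + 1))}


print(build_seasons(3))                   # {1: [1, 2, 3]}
print(build_seasons(3, begin_season=2))   # {2: [1, 2, 3]}
print(build_seasons(0))                   # {}
```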
@@ -69,8 +69,8 @@ class MetaBase(object):
     _subtitle_flag = False
     _subtitle_season_re = r"(?<![全共]\s*)[第\s]+([0-9一二三四五六七八九十S\-]+)\s*季(?!\s*[全共])"
     _subtitle_season_all_re = r"[全共]\s*([0-9一二三四五六七八九十]+)\s*季|([0-9一二三四五六七八九十]+)\s*季\s*全"
-    _subtitle_episode_re = r"(?<![全共]\s*)[第\s]+([0-9一二三四五六七八九十百零EP\-]+)\s*[集话話期](?!\s*[全共])"
-    _subtitle_episode_all_re = r"([0-9一二三四五六七八九十百零]+)\s*集\s*全|[全共]\s*([0-9一二三四五六七八九十百零]+)\s*[集话話期]"
+    _subtitle_episode_re = r"(?<![全共]\s*)[第\s]+([0-9一二三四五六七八九十百零EP\-]+)\s*[集话話期幕](?!\s*[全共])"
+    _subtitle_episode_all_re = r"([0-9一二三四五六七八九十百零]+)\s*集\s*全|[全共]\s*([0-9一二三四五六七八九十百零]+)\s*[集话話期幕]"
 
     def __init__(self, title: str, subtitle: str = None, isfile: bool = False):
         if not title:
@@ -110,7 +110,7 @@ class MetaBase(object):
         if not title_text:
             return
         title_text = f" {title_text} "
-        if re.search(r'[全第季集话話期]', title_text, re.IGNORECASE):
+        if re.search(r'[全第季集话話期幕]', title_text, re.IGNORECASE):
             # 第x季
             season_str = re.search(r'%s' % self._subtitle_season_re, title_text, re.IGNORECASE)
             if season_str:
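This change adds 幕 ("act") to the episode character classes and to the fast pre-check class, so titles like "第3幕" are now recognized as episode markers. A simplified demonstration of the new pattern (the variable-width look-behind from the original is omitted here, since `re.search` only accepts fixed-width look-behinds; this subset is for illustration only):

```python
import re

# Simplified episode pattern: "第<number><集/话/話/期/幕>"
episode_re = r"[第\s]+([0-9一二三四五六七八九十百零EP\-]+)\s*[集话話期幕]"

m = re.search(episode_re, " 某剧场版 第3幕 ")
print(m.group(1))  # 3

# The cheap gate check now also fires on 幕
print(bool(re.search(r"[全第季集话話期幕]", " 第3幕 ")))  # True
```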
app/db/message_oper.py (new file, 61 lines)
@@ -0,0 +1,61 @@
import json
import time
from typing import Optional, Union

from sqlalchemy.orm import Session

from app.db import DbOper
from app.db.models.message import Message
from app.schemas import MessageChannel, NotificationType


class MessageOper(DbOper):
    """
    消息数据管理
    """

    def __init__(self, db: Session = None):
        super().__init__(db)

    def add(self,
            channel: MessageChannel = None,
            mtype: NotificationType = None,
            title: str = None,
            text: str = None,
            image: str = None,
            link: str = None,
            userid: str = None,
            action: int = 1,
            note: Union[list, dict] = None,
            **kwargs):
        """
        新增消息数据
        :param channel: 消息渠道
        :param mtype: 消息类型
        :param title: 标题
        :param text: 文本内容
        :param image: 图片
        :param link: 链接
        :param userid: 用户ID
        :param action: 消息方向:0-接收消息,1-发送消息
        :param note: 附件json
        """
        kwargs.update({
            "channel": channel.value if channel else '',
            "mtype": mtype.value if mtype else '',
            "title": title,
            "text": text,
            "image": image,
            "link": link,
            "userid": userid,
            "action": action,
            "reg_time": time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()),
            "note": json.dumps(note) if note else ''
        })
        Message(**kwargs).create(self._db)

    def list_by_page(self, page: int = 1, count: int = 30) -> Optional[str]:
        """
        分页查询消息数据
        """
        return Message.list_by_page(self._db, page, count)

@@ -200,6 +200,7 @@ class DownloadFiles(Base):
         result = db.query(DownloadFiles).filter(DownloadFiles.savepath == savepath).all()
         return list(result)
 
     @staticmethod
+    @db_update
     def delete_by_fullpath(db: Session, fullpath: str):
         db.query(DownloadFiles).filter(DownloadFiles.fullpath == fullpath,
app/db/models/message.py (new file, 39 lines)
@@ -0,0 +1,39 @@
from sqlalchemy import Column, Integer, String, Sequence
from sqlalchemy.orm import Session

from app.db import db_query, Base


class Message(Base):
    """
    消息表
    """
    id = Column(Integer, Sequence('id'), primary_key=True, index=True)
    # 消息渠道
    channel = Column(String)
    # 消息类型
    mtype = Column(String)
    # 标题
    title = Column(String)
    # 文本内容
    text = Column(String)
    # 图片
    image = Column(String)
    # 链接
    link = Column(String)
    # 用户ID
    userid = Column(String)
    # 登记时间
    reg_time = Column(String, index=True)
    # 消息方向:0-接收消息,1-发送消息
    action = Column(Integer)
    # 附件json
    note = Column(String)

    @staticmethod
    @db_query
    def list_by_page(db: Session, page: int = 1, count: int = 30):
        result = db.query(Message).order_by(Message.reg_time.desc()).offset((page - 1) * count).limit(
            count).all()
        result.sort(key=lambda x: x.reg_time, reverse=False)
        return list(result)
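Note the two-step ordering in `list_by_page`: the query pages newest-first (`ORDER BY reg_time DESC` plus `OFFSET`/`LIMIT`), then re-sorts the fetched page ascending so each page still renders chronologically. The same arithmetic on a plain list:

```python
def page_newest_first(rows, page=1, count=3):
    # rows are ascending by time; emulate ORDER BY ... DESC + OFFSET/LIMIT
    newest_first = sorted(rows, reverse=True)
    start = (page - 1) * count
    window = newest_first[start:start + count]
    window.sort()  # re-sort the page ascending, as list_by_page does
    return window


rows = [1, 2, 3, 4, 5, 6, 7]
print(page_newest_first(rows, page=1))  # [5, 6, 7]
print(page_newest_first(rows, page=2))  # [2, 3, 4]
```

So page 1 always holds the most recent `count` rows, yet within the page older messages come first.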
@@ -1,6 +1,6 @@
 import time
 
-from sqlalchemy import Column, Integer, String, Sequence
+from sqlalchemy import Column, Integer, String, Sequence, Float
 from sqlalchemy.orm import Session
 
 from app.db import db_query, db_update, Base
@@ -23,14 +23,15 @@ class Subscribe(Base):
     imdbid = Column(String)
     tvdbid = Column(Integer)
     doubanid = Column(String, index=True)
+    bangumiid = Column(Integer, index=True)
     # 季号
     season = Column(Integer)
     # 海报
     poster = Column(String)
     # 背景图
     backdrop = Column(String)
-    # 评分
-    vote = Column(Integer)
+    # 评分,float
+    vote = Column(Float)
     # 简介
     description = Column(String)
     # 过滤规则
@@ -115,6 +116,11 @@ class Subscribe(Base):
     def get_by_doubanid(db: Session, doubanid: str):
         return db.query(Subscribe).filter(Subscribe.doubanid == doubanid).first()
 
+    @staticmethod
+    @db_query
+    def get_by_bangumiid(db: Session, bangumiid: int):
+        return db.query(Subscribe).filter(Subscribe.bangumiid == bangumiid).first()
+
     @db_update
     def delete_by_tmdbid(self, db: Session, tmdbid: int, season: int):
         subscrbies = self.get_by_tmdbid(db, tmdbid, season)

@@ -27,6 +27,7 @@ class SubscribeOper(DbOper):
             imdbid=mediainfo.imdb_id,
             tvdbid=mediainfo.tvdb_id,
             doubanid=mediainfo.douban_id,
+            bangumiid=mediainfo.bangumi_id,
             poster=mediainfo.get_poster_image(),
             backdrop=mediainfo.get_backdrop_image(),
             vote=mediainfo.vote_average,
@@ -0,0 +1,2 @@
from .doh import doh_query_json
from .cloudflare import under_challenge

@@ -1,68 +1,126 @@
-from typing import Tuple, Optional
+import json
+from hashlib import md5
+from typing import Any, Dict, Tuple, Optional
 
+from app.core.config import settings
+from app.utils.common import decrypt
 from app.utils.http import RequestUtils
 from app.utils.string import StringUtils
 
 
 class CookieCloudHelper:
 
     _ignore_cookies: list = ["CookieAutoDeleteBrowsingDataCleanup", "CookieAutoDeleteCleaningDiscarded"]
 
-    def __init__(self, server: str, key: str, password: str):
-        self._server = server
-        self._key = key
-        self._password = password
+    def __init__(self):
+        self._sync_setting()
         self._req = RequestUtils(content_type="application/json")
 
+    def _sync_setting(self):
+        self._server = settings.COOKIECLOUD_HOST
+        self._key = settings.COOKIECLOUD_KEY
+        self._password = settings.COOKIECLOUD_PASSWORD
+        self._enable_local = settings.COOKIECLOUD_ENABLE_LOCAL
+        self._local_path = settings.COOKIE_PATH
+
     def download(self) -> Tuple[Optional[dict], str]:
         """
         从CookieCloud下载数据
         :return: Cookie数据、错误信息
         """
-        if not self._server or not self._key or not self._password:
+        # 更新为最新设置
+        self._sync_setting()
+
+        if ((not self._server and not self._enable_local)
+                or not self._key
+                or not self._password):
             return None, "CookieCloud参数不正确"
-        req_url = "%s/get/%s" % (self._server, str(self._key).strip())
-        ret = self._req.post_res(url=req_url, json={"password": str(self._password).strip()})
-        if ret and ret.status_code == 200:
-            result = ret.json()
+
+        if self._enable_local:
+            # 开启本地服务时,从本地直接读取数据
+            result = self._load_local_encrypt_data(self._key)
             if not result:
-                return {}, "未下载到数据"
-            if result.get("cookie_data"):
-                contents = result.get("cookie_data")
-            else:
-                contents = result
-            # 整理数据,使用domain域名的最后两级作为分组依据
-            domain_groups = {}
-            for site, cookies in contents.items():
-                for cookie in cookies:
-                    domain_key = StringUtils.get_url_domain(cookie.get("domain"))
-                    if not domain_groups.get(domain_key):
-                        domain_groups[domain_key] = [cookie]
-                    else:
-                        domain_groups[domain_key].append(cookie)
-            # 返回错误
-            ret_cookies = {}
-            # 索引器
-            for domain, content_list in domain_groups.items():
-                if not content_list:
-                    continue
-                # 只有cf的cookie过滤掉
-                cloudflare_cookie = True
-                for content in content_list:
-                    if content["name"] != "cf_clearance":
-                        cloudflare_cookie = False
-                        break
-                if cloudflare_cookie:
-                    continue
-                # 站点Cookie
-                cookie_str = ";".join(
-                    [f"{content.get('name')}={content.get('value')}"
-                     for content in content_list
-                     if content.get("name") and content.get("name") not in self._ignore_cookies]
-                )
-                ret_cookies[domain] = cookie_str
-            return ret_cookies, ""
-        elif ret:
-            return None, f"同步CookieCloud失败,错误码:{ret.status_code}"
+                return {}, "未从本地CookieCloud服务加载到cookie数据,请检查服务器设置、用户KEY及加密密码是否正确"
         else:
-            return None, "CookieCloud请求失败,请检查服务器地址、用户KEY及加密密码是否正确"
+            req_url = "%s/get/%s" % (self._server, str(self._key).strip())
+            ret = self._req.get_res(url=req_url)
+            if ret and ret.status_code == 200:
+                try:
+                    result = ret.json()
+                    if not result:
+                        return {}, f"未从{self._server}下载到cookie数据"
+                except Exception as err:
+                    return {}, f"从{self._server}下载cookie数据错误:{str(err)}"
+            elif ret:
+                return None, f"远程同步CookieCloud失败,错误码:{ret.status_code}"
+            else:
+                return None, "CookieCloud请求失败,请检查服务器地址、用户KEY及加密密码是否正确"
+
+        encrypted = result.get("encrypted")
+        if not encrypted:
+            return {}, "未获取到cookie密文"
+        else:
+            crypt_key = self._get_crypt_key()
+            try:
+                decrypted_data = decrypt(encrypted, crypt_key).decode('utf-8')
+                result = json.loads(decrypted_data)
+            except Exception as e:
+                return {}, "cookie解密失败:" + str(e)
+
+        if not result:
+            return {}, "cookie解密为空"
+
+        if result.get("cookie_data"):
+            contents = result.get("cookie_data")
+        else:
+            contents = result
+        # 整理数据,使用domain域名的最后两级作为分组依据
+        domain_groups = {}
+        for site, cookies in contents.items():
+            for cookie in cookies:
+                domain_key = StringUtils.get_url_domain(cookie.get("domain"))
+                if not domain_groups.get(domain_key):
+                    domain_groups[domain_key] = [cookie]
+                else:
+                    domain_groups[domain_key].append(cookie)
+        # 返回错误
+        ret_cookies = {}
+        # 索引器
+        for domain, content_list in domain_groups.items():
+            if not content_list:
+                continue
+            # 只有cf的cookie过滤掉
+            cloudflare_cookie = True
+            for content in content_list:
+                if content["name"] != "cf_clearance":
+                    cloudflare_cookie = False
+                    break
+            if cloudflare_cookie:
+                continue
+            # 站点Cookie
+            cookie_str = ";".join(
+                [f"{content.get('name')}={content.get('value')}"
+                 for content in content_list
+                 if content.get("name") and content.get("name") not in self._ignore_cookies]
+            )
+            ret_cookies[domain] = cookie_str
+        return ret_cookies, ""
+
+    def _get_crypt_key(self) -> bytes:
+        """
+        使用UUID和密码生成CookieCloud的加解密密钥
+        """
+        md5_generator = md5()
+        md5_generator.update((str(self._key).strip() + '-' + str(self._password).strip()).encode('utf-8'))
+        return (md5_generator.hexdigest()[:16]).encode('utf-8')
+
+    def _load_local_encrypt_data(self, uuid: str) -> Dict[str, Any]:
+        file_path = self._local_path / f"{uuid}.json"
+        # 检查文件是否存在
+        if not file_path.exists():
+            return {}
+        # 读取文件
+        with open(file_path, encoding="utf-8", mode="r") as file:
+            read_content = file.read()
+        data = json.loads(read_content.encode("utf-8"))
+        return data
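The key derivation in `_get_crypt_key` above takes the first 16 hex characters of `md5("<key>-<password>")` as the decryption key bytes. The step in isolation:

```python
from hashlib import md5


def crypt_key(uuid: str, password: str) -> bytes:
    # First 16 hex chars of md5("<uuid>-<password>"), matching _get_crypt_key
    digest = md5(f"{uuid.strip()}-{password.strip()}".encode("utf-8")).hexdigest()
    return digest[:16].encode("utf-8")


key = crypt_key("my-uuid", "my-pass")
print(len(key))  # 16
```

The result is always 16 ASCII bytes, since it is a slice of the 32-character hex digest.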
app/helper/doh.py (new file, 156 lines)
@@ -0,0 +1,156 @@
"""
doh函数的实现。
author: https://github.com/C5H12O5/syno-videoinfo-plugin
"""
import base64
import concurrent
import concurrent.futures
import json
import socket
import struct
import urllib
import urllib.request
from typing import Dict, Optional

from app.core.config import settings
from app.log import logger

# 定义一个全局集合来存储注册的主机
_registered_hosts = {
    'api.themoviedb.org',
    'api.tmdb.org',
    'webservice.fanart.tv',
    'api.github.com',
    'github.com',
    'raw.githubusercontent.com',
    'api.telegram.org'
}

# 定义一个全局线程池执行器
_executor = concurrent.futures.ThreadPoolExecutor()

# 定义默认的DoH配置
_doh_timeout = 5
_doh_cache: Dict[str, str] = {}
_doh_resolvers = [
    # https://developers.cloudflare.com/1.1.1.1/encryption/dns-over-https
    "1.0.0.1",
    "1.1.1.1",
    # https://support.quad9.net/hc/en-us
    "9.9.9.9",
    "149.112.112.112"
]


def _patched_getaddrinfo(host, *args, **kwargs):
    """
    socket.getaddrinfo的补丁版本。
    """
    if host not in _registered_hosts:
        return _orig_getaddrinfo(host, *args, **kwargs)

    # 检查主机是否已解析
    if host in _doh_cache:
        ip = _doh_cache[host]
        logger.info("已解析 [%s] 为 [%s] (缓存)", host, ip)
        return _orig_getaddrinfo(ip, *args, **kwargs)

    # 使用DoH解析主机
    futures = []
    for resolver in _doh_resolvers:
        futures.append(_executor.submit(_doh_query, resolver, host))

    for future in concurrent.futures.as_completed(futures):
        ip = future.result()
        if ip is not None:
            logger.info("已解析 [%s] 为 [%s]", host, ip)
            _doh_cache[host] = ip
            host = ip
            break

    return _orig_getaddrinfo(host, *args, **kwargs)


# 对 socket.getaddrinfo 进行补丁
if settings.DOH_ENABLE:
    _orig_getaddrinfo = socket.getaddrinfo
    socket.getaddrinfo = _patched_getaddrinfo


def _doh_query(resolver: str, host: str) -> Optional[str]:
    """
    使用给定的DoH解析器查询给定主机的IP地址。
    """

    # 构造DNS查询消息(RFC 1035)
    header = b"".join(
        [
            b"\x00\x00",  # ID: 0
            b"\x01\x00",  # FLAGS: 标准递归查询
            b"\x00\x01",  # QDCOUNT: 1
            b"\x00\x00",  # ANCOUNT: 0
            b"\x00\x00",  # NSCOUNT: 0
            b"\x00\x00",  # ARCOUNT: 0
        ]
    )
    question = b"".join(
        [
            b"".join(
                [
                    struct.pack("B", len(item)) + item.encode("utf-8")
                    for item in host.split(".")
                ]
            )
            + b"\x00",  # QNAME: 域名序列
            b"\x00\x01",  # QTYPE: A
            b"\x00\x01",  # QCLASS: IN
        ]
    )
    message = header + question

    try:
        # 发送GET请求到DoH解析器(RFC 8484)
        b64message = base64.b64encode(message).decode("utf-8").rstrip("=")
        url = f"https://{resolver}/dns-query?dns={b64message}"
        headers = {"Content-Type": "application/dns-message"}
        logger.debug("DoH请求: %s", url)

        request = urllib.request.Request(url, headers=headers, method="GET")
        with urllib.request.urlopen(request, timeout=_doh_timeout) as response:
            logger.debug("解析器(%s)响应: %s", resolver, response.status)
            if response.status != 200:
                return None
            resp_body = response.read()

        # 解析DNS响应消息(RFC 1035)
        # name(压缩):2 + type:2 + class:2 + ttl:4 + rdlength:2 = 12字节
        first_rdata_start = len(header) + len(question) + 12
        # rdata(A记录)= 4字节
        first_rdata_end = first_rdata_start + 4
        # 将rdata转换为IP地址
        return socket.inet_ntoa(resp_body[first_rdata_start:first_rdata_end])
    except Exception as e:
        logger.error("解析器(%s)请求错误: %s", resolver, e)
        return None


def doh_query_json(resolver: str, host: str) -> Optional[str]:
    """
    使用给定的DoH解析器查询给定主机的IP地址。
    """
    url = f"https://{resolver}/dns-query?name={host}&type=A"
    headers = {"Accept": "application/dns-json"}
    logger.debug("DoH请求: %s", url)
    try:
        request = urllib.request.Request(url, headers=headers, method="GET")
        with urllib.request.urlopen(request, timeout=_doh_timeout) as response:
            logger.debug("解析器(%s)响应: %s", resolver, response.status)
            if response.status != 200:
                return None
            response_body = response.read().decode("utf-8")
            logger.debug("<== body: %s", response_body)
            answer = json.loads(response_body)["Answer"]
            return answer[0]["data"]
    except Exception as e:
        logger.error("解析器(%s)请求错误: %s", resolver, e)
        return None
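`_doh_query` hand-builds an RFC 1035 query (fixed 12-byte header, length-prefixed QNAME labels, QTYPE=A, QCLASS=IN) and sends it base64url-encoded in the `?dns=` parameter per RFC 8484, with `=` padding stripped. The message construction and encoding in isolation, without the network round trip:

```python
import base64
import struct


def build_dns_query(host: str) -> bytes:
    # 12-byte header: ID=0, FLAGS=0x0100 (recursion desired), QDCOUNT=1
    header = b"\x00\x00\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00"
    # QNAME: each label prefixed with its length, terminated by a zero byte
    qname = b"".join(
        struct.pack("B", len(label)) + label.encode("utf-8")
        for label in host.split(".")
    ) + b"\x00"
    return header + qname + b"\x00\x01" + b"\x00\x01"  # QTYPE=A, QCLASS=IN


msg = build_dns_query("example.com")
b64 = base64.b64encode(msg).decode("utf-8").rstrip("=")  # RFC 8484 strips padding
print(len(msg))  # 29  (12-byte header + 13-byte QNAME + 4)
```

The fixed 12-byte offset used later when slicing the answer's rdata works because the response echoes the question and the first answer record uses a 2-byte compressed name pointer.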
@@ -1,19 +1,46 @@
+import json
 import queue
+import time
+from typing import Optional, Any, Union
 
 from app.utils.singleton import Singleton
 
 
 class MessageHelper(metaclass=Singleton):
     """
-    消息队列管理器
+    消息队列管理器,包括系统消息和用户消息
     """
     def __init__(self):
-        self.queue = queue.Queue()
+        self.sys_queue = queue.Queue()
+        self.user_queue = queue.Queue()
 
-    def put(self, message: str):
-        self.queue.put(message)
+    def put(self, message: Any, role: str = "sys", note: Union[list, dict] = None):
+        """
+        存消息
+        :param message: 消息
+        :param role: 消息通道 sys/user
+        :param note: 附件json
+        """
+        if role == "sys":
+            self.sys_queue.put(message)
+        else:
+            if isinstance(message, str):
+                self.user_queue.put(message)
+            elif hasattr(message, "to_dict"):
+                content = message.to_dict()
+                content['date'] = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
+                content['note'] = json.dumps(note) if note else None
+                self.user_queue.put(json.dumps(content))
 
-    def get(self):
-        if not self.queue.empty():
-            return self.queue.get(block=False)
+    def get(self, role: str = "sys") -> Optional[str]:
+        """
+        取消息
+        :param role: 消息通道 sys/user
+        """
+        if role == "sys":
+            if not self.sys_queue.empty():
+                return self.sys_queue.get(block=False)
+        else:
+            if not self.user_queue.empty():
+                return self.user_queue.get(block=False)
+        return None
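The change splits the single queue into separate system and user channels, selected by `role`, with non-blocking reads that return `None` when a channel is empty. A minimal reproduction of that routing (a toy stand-in, without the JSON serialization branch):

```python
import queue


class TwoQueueHelper:
    # Minimal stand-in for MessageHelper's role-based routing
    def __init__(self):
        self.sys_queue = queue.Queue()
        self.user_queue = queue.Queue()

    def put(self, message, role="sys"):
        (self.sys_queue if role == "sys" else self.user_queue).put(message)

    def get(self, role="sys"):
        q = self.sys_queue if role == "sys" else self.user_queue
        # Non-blocking drain: empty channel yields None instead of blocking
        return q.get(block=False) if not q.empty() else None


h = TwoQueueHelper()
h.put("boot ok")             # system channel
h.put("hello", role="user")  # user channel
print(h.get())               # boot ok
print(h.get(role="user"))    # hello
print(h.get())               # None (sys channel drained)
```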
@@ -55,11 +55,7 @@ class ResourceHelper(metaclass=Singleton):
             target = resource.get("target")
             version = resource.get("version")
             # 判断平台
-            if platform and platform != SystemUtils.platform:
+            if platform and platform != SystemUtils.platform():
                 continue
-            # 判断本地是否存在
-            local_path = self._base_dir / target / rname
-            if not local_path.exists():
-                continue
             # 判断版本号
             if rtype == "auth":
@@ -53,10 +53,13 @@ def init_routers():
     """
     from app.api.apiv1 import api_router
     from app.api.servarr import arr_router
+    from app.api.servcookie import cookie_router
     # API路由
     App.include_router(api_router, prefix=settings.API_V1_STR)
     # Radarr、Sonarr路由
     App.include_router(arr_router, prefix="/api/v3")
+    # CookieCloud路由
+    App.include_router(cookie_router, prefix="/cookiecloud")
 
 
 def start_frontend():
app/modules/bangumi/__init__.py (new file, 93 lines)
@@ -0,0 +1,93 @@
from typing import List, Optional, Tuple, Union

from app.core.context import MediaInfo
from app.log import logger
from app.modules import _ModuleBase
from app.modules.bangumi.bangumi import BangumiApi
from app.utils.http import RequestUtils


class BangumiModule(_ModuleBase):
    bangumiapi: BangumiApi = None

    def init_module(self) -> None:
        self.bangumiapi = BangumiApi()

    def stop(self):
        pass

    def test(self) -> Tuple[bool, str]:
        """
        测试模块连接性
        """
        ret = RequestUtils().get_res("https://api.bgm.tv/")
        if ret and ret.status_code == 200:
            return True, ""
        elif ret:
            return False, f"无法连接Bangumi,错误码:{ret.status_code}"
        return False, "Bangumi网络连接失败"

    def init_setting(self) -> Tuple[str, Union[str, bool]]:
        pass

    def recognize_media(self, bangumiid: int = None,
                        **kwargs) -> Optional[MediaInfo]:
        """
        识别媒体信息
        :param bangumiid: 识别的Bangumi ID
        :return: 识别的媒体信息,包括剧集信息
        """
        if not bangumiid:
            return None

        # 直接查询详情
        info = self.bangumi_info(bangumiid=bangumiid)
        if info:
            # 赋值Bangumi信息并返回
            mediainfo = MediaInfo(bangumi_info=info)
            logger.info(f"{bangumiid} Bangumi识别结果:{mediainfo.type.value} "
                        f"{mediainfo.title_year}")
            return mediainfo
        else:
            logger.info(f"{bangumiid} 未匹配到Bangumi媒体信息")

        return None

    def bangumi_info(self, bangumiid: int) -> Optional[dict]:
        """
        获取Bangumi信息
        :param bangumiid: BangumiID
        :return: Bangumi信息
        """
        if not bangumiid:
            return None
        logger.info(f"开始获取Bangumi信息:{bangumiid} ...")
        return self.bangumiapi.detail(bangumiid)

    def bangumi_calendar(self, page: int = 1, count: int = 30) -> Optional[List[dict]]:
        """
        获取Bangumi每日放送
        :param page: 页码
        :param count: 每页数量
        """
        return self.bangumiapi.calendar(page, count)

    def bangumi_credits(self, bangumiid: int, page: int = 1, count: int = 20) -> List[dict]:
        """
        根据BangumiID查询电影演职员表
        :param bangumiid: BangumiID
        :param page: 页码
        :param count: 数量
        """
        persons = self.bangumiapi.persons(bangumiid) or []
        if persons:
            return persons[(page - 1) * count: page * count]
        else:
            return []

    def bangumi_recommend(self, bangumiid: int) -> List[dict]:
        """
        根据BangumiID查询推荐电影
        :param bangumiid: BangumiID
        """
        return self.bangumiapi.subjects(bangumiid) or []
app/modules/bangumi/bangumi.py  (new file, 154 lines)
@@ -0,0 +1,154 @@
from datetime import datetime
from functools import lru_cache

import requests

from app.utils.http import RequestUtils


class BangumiApi(object):
    """
    https://bangumi.github.io/api/
    """

    _urls = {
        "calendar": "calendar",
        "detail": "v0/subjects/%s",
        "persons": "v0/subjects/%s/persons",
        "subjects": "v0/subjects/%s/subjects"
    }
    _base_url = "https://api.bgm.tv/"
    _req = RequestUtils(session=requests.Session())

    def __init__(self):
        pass

    @classmethod
    @lru_cache(maxsize=128)
    def __invoke(cls, url, **kwargs):
        req_url = cls._base_url + url
        params = {}
        if kwargs:
            params.update(kwargs)
        resp = cls._req.get_res(url=req_url, params=params)
        try:
            return resp.json() if resp else None
        except Exception as e:
            print(e)
            return None

    def calendar(self, page: int = 1, count: int = 30):
        """
        获取每日放送,返回items
        """
        """
        [
            {
                "weekday": {
                    "en": "Mon",
                    "cn": "星期一",
                    "ja": "月耀日",
                    "id": 1
                },
                "items": [
                    {
                        "id": 350235,
                        "url": "http://bgm.tv/subject/350235",
                        "type": 2,
                        "name": "月が導く異世界道中 第二幕",
                        "name_cn": "月光下的异世界之旅 第二幕",
                        "summary": "",
                        "air_date": "2024-01-08",
                        "air_weekday": 1,
                        "rating": {
                            "total": 257,
                            "count": {
                                "1": 1,
                                "2": 1,
                                "3": 4,
                                "4": 15,
                                "5": 51,
                                "6": 111,
                                "7": 49,
                                "8": 13,
                                "9": 5,
                                "10": 7
                            },
                            "score": 6.1
                        },
                        "rank": 6125,
                        "images": {
                            "large": "http://lain.bgm.tv/pic/cover/l/3c/a5/350235_A0USf.jpg",
                            "common": "http://lain.bgm.tv/pic/cover/c/3c/a5/350235_A0USf.jpg",
                            "medium": "http://lain.bgm.tv/pic/cover/m/3c/a5/350235_A0USf.jpg",
                            "small": "http://lain.bgm.tv/pic/cover/s/3c/a5/350235_A0USf.jpg",
                            "grid": "http://lain.bgm.tv/pic/cover/g/3c/a5/350235_A0USf.jpg"
                        },
                        "collection": {
                            "doing": 920
                        }
                    },
                    {
                        "id": 358561,
                        "url": "http://bgm.tv/subject/358561",
                        "type": 2,
                        "name": "大宇宙时代",
                        "name_cn": "大宇宙时代",
                        "summary": "",
                        "air_date": "2024-01-22",
                        "air_weekday": 1,
                        "rating": {
                            "total": 2,
                            "count": {
                                "1": 0,
                                "2": 0,
                                "3": 0,
                                "4": 0,
                                "5": 1,
                                "6": 1,
                                "7": 0,
                                "8": 0,
                                "9": 0,
                                "10": 0
                            },
                            "score": 5.5
                        },
                        "images": {
                            "large": "http://lain.bgm.tv/pic/cover/l/71/66/358561_UzsLu.jpg",
                            "common": "http://lain.bgm.tv/pic/cover/c/71/66/358561_UzsLu.jpg",
                            "medium": "http://lain.bgm.tv/pic/cover/m/71/66/358561_UzsLu.jpg",
                            "small": "http://lain.bgm.tv/pic/cover/s/71/66/358561_UzsLu.jpg",
                            "grid": "http://lain.bgm.tv/pic/cover/g/71/66/358561_UzsLu.jpg"
                        },
                        "collection": {
                            "doing": 9
                        }
                    }
                ]
            }
        ]
        """
        ret_list = []
        result = self.__invoke(self._urls["calendar"], _ts=datetime.strftime(datetime.now(), '%Y%m%d'))
        if result:
            for item in result:
                ret_list.extend(item.get("items") or [])
        return ret_list[(page - 1) * count: page * count]

    def detail(self, bid: int):
        """
        获取番剧详情
        """
        return self.__invoke(self._urls["detail"] % bid, _ts=datetime.strftime(datetime.now(), '%Y%m%d'))

    def persons(self, bid: int):
        """
        获取番剧人物
        """
        return self.__invoke(self._urls["persons"] % bid, _ts=datetime.strftime(datetime.now(), '%Y%m%d'))

    def subjects(self, bid: int):
        """
        获取关联条目信息
        """
        return self.__invoke(self._urls["subjects"] % bid, _ts=datetime.strftime(datetime.now(), '%Y%m%d'))
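The `_ts` keyword passed to `__invoke` above is a cache-busting trick: since `_ts` is part of the `lru_cache` key, responses are reused within one calendar day and refetched the next. A minimal, stdlib-only sketch of that pattern (hypothetical `fetch` stands in for the real HTTP call):

```python
from datetime import datetime
from functools import lru_cache

calls = []  # record of "network requests" actually performed

@lru_cache(maxsize=128)
def fetch(url, _ts=None):
    # _ts is never used by the body; it only widens the cache key
    calls.append(url)
    return {"url": url}

today = datetime.strftime(datetime.now(), "%Y%m%d")
fetch("calendar", _ts=today)       # miss: performs the "request"
fetch("calendar", _ts=today)       # hit: served from the cache
fetch("calendar", _ts="19700101")  # different _ts -> new key -> miss
```

Once the date rolls over, every cached entry is effectively invalidated without any explicit expiry logic.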
@@ -57,7 +57,11 @@ class DoubanModule(_ModuleBase):
         :param cache: 是否使用缓存
         :return: 识别的媒体信息,包括剧集信息
         """
-        if settings.RECOGNIZE_SOURCE != "douban":
+        if not doubanid and not meta:
+            return None
+
+        if meta and not doubanid \
+                and settings.RECOGNIZE_SOURCE != "douban":
             return None

         if not meta:
@@ -43,7 +43,7 @@ class FileTransferModule(_ModuleBase):
                 continue
             download_path = Path(path)
             if not download_path.exists():
-                return False, f"目录 {download_path} 不存在"
+                return False, f"下载目录 {download_path} 不存在"
             download_paths.append(path)
         # 下载目录的设备ID
         download_devids = [Path(path).stat().st_dev for path in download_paths]

@@ -54,7 +54,7 @@ class FileTransferModule(_ModuleBase):
         for path in settings.LIBRARY_PATHS:
             library_path = Path(path)
             if not library_path.exists():
-                return False, f"目录不存在:{library_path}"
+                return False, f"媒体库目录不存在:{library_path}"
             if settings.DOWNLOADER_MONITOR and settings.TRANSFER_TYPE == "link":
                 if library_path.stat().st_dev not in download_devids:
                     return False, f"媒体库目录 {library_path} " \
@@ -133,7 +133,7 @@ class Qbittorrent:
         except Exception as err:
             logger.error(f"删除种子Tag出错:{str(err)}")
             return False

     def remove_torrents_tag(self, ids: Union[str, list], tag: Union[str, list]) -> bool:
         """
         移除种子Tag

@@ -148,7 +148,7 @@ class Qbittorrent:
         except Exception as err:
             logger.error(f"移除种子Tag出错:{str(err)}")
             return False

     def set_torrents_tag(self, ids: Union[str, list], tags: list):
         """
         设置种子状态为已整理,以及是否强制做种

@@ -372,6 +372,24 @@ class Qbittorrent:
             logger.error(f"设置速度限制出错:{str(err)}")
             return False

+    def get_speed_limit(self) -> Optional[Tuple[float, float]]:
+        """
+        获取QB速度
+        :return: 返回download_limit和upload_limit,默认是0
+        """
+        if not self.qbc:
+            return None
+
+        download_limit = 0
+        upload_limit = 0
+        try:
+            download_limit = self.qbc.transfer.download_limit
+            upload_limit = self.qbc.transfer.upload_limit
+        except Exception as err:
+            logger.error(f"获取速度限制出错:{str(err)}")
+
+        return download_limit / 1024, upload_limit / 1024
+
     def recheck_torrents(self, ids: Union[str, list]) -> bool:
         """
         重新校验种子
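A note on the division in `get_speed_limit` above: qBittorrent's transfer API reports limits in bytes/s (0 meaning unlimited), while the caller works in KB/s. A trivial sketch of that conversion (hypothetical helper name):

```python
# Convert raw byte/s limits, as reported by the downloader API, to KB/s.
# 0 stays 0, preserving the "unlimited" sentinel.
def limits_to_kb(download_bytes: float, upload_bytes: float) -> tuple:
    return download_bytes / 1024, upload_bytes / 1024

kb = limits_to_kb(1048576, 0)  # 1 MiB/s download limit, unlimited upload
```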
@@ -67,7 +67,11 @@ class TheMovieDbModule(_ModuleBase):
         :param cache: 是否使用缓存
         :return: 识别的媒体信息,包括剧集信息
         """
-        if settings.RECOGNIZE_SOURCE != "themoviedb":
+        if not tmdbid and not meta:
+            return None
+
+        if meta and not tmdbid \
+                and settings.RECOGNIZE_SOURCE != "themoviedb":
             return None

         if not meta:

@@ -182,7 +186,7 @@ class TheMovieDbModule(_ModuleBase):
         :param season: 季号
         """
         # 搜索
-        logger.info(f"开始使用 名称:{name}、年份:{year} 匹配TMDB信息 ...")
+        logger.info(f"开始使用 名称:{name} 年份:{year} 匹配TMDB信息 ...")
         info = self.tmdb.match(name=name,
                                year=year,
                                mtype=mtype,
@@ -189,9 +189,16 @@ class TmdbHelper:
                                                  season_year,
                                                  season_number)
             if not info:
-                logger.debug(
-                    f"正在识别{mtype.value}:{name}, 年份={year} ...")
-                info = self.__search_tv_by_name(name, year)
+                year_range = [year]
+                if year:
+                    year_range.append(str(int(year) + 1))
+                    year_range.append(str(int(year) - 1))
+                for year in year_range:
+                    logger.debug(
+                        f"正在识别{mtype.value}:{name}, 年份={year} ...")
+                    info = self.__search_tv_by_name(name, year)
+                    if info:
+                        break
             if info:
                 info['media_type'] = MediaType.TV
                 # 返回
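The hunk above widens the TV name search to neighbouring years, which helps when a show's local air date differs from TMDB's first-air year. A small stand-alone sketch of that fallback (hypothetical names; `search` stands in for `__search_tv_by_name`):

```python
# Try the given year first, then year+1 and year-1, stopping at the first hit.
def match_with_year_range(search, name, year):
    year_range = [year]
    if year:
        year_range.append(str(int(year) + 1))
        year_range.append(str(int(year) - 1))
    for y in year_range:
        info = search(name, y)
        if info:
            return info
    return None

# A show whose TMDB first-air year is one later than the guessed year:
hits = {("Foo", "2023"): {"id": 1}}
result = match_with_year_range(lambda n, y: hits.get((n, y)), "Foo", "2022")
```

`result` matches via the `year + 1` probe even though the exact-year lookup fails.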
@@ -289,6 +289,29 @@ class Transmission:
             logger.error(f"设置速度限制出错:{str(err)}")
             return False

+    def get_speed_limit(self) -> Optional[Tuple[float, float]]:
+        """
+        获取TR速度
+        :return: download_limit 下载速度,默认是0
+                 upload_limit 上传速度,默认是0
+        """
+        if not self.trc:
+            return None
+
+        download_limit = 0
+        upload_limit = 0
+        try:
+            download_limit = self.trc.get_session().get('speed_limit_down')
+            upload_limit = self.trc.get_session().get('speed_limit_up')
+        except Exception as err:
+            logger.error(f"获取速度限制出错:{str(err)}")
+
+        return (
+            download_limit,
+            upload_limit
+        )
+
     def recheck_torrents(self, ids: Union[str, list]) -> bool:
         """
         重新校验种子

@@ -372,4 +395,3 @@ class Transmission:
         except Exception as err:
             logger.error(f"修改tracker出错:{str(err)}")
             return False
@@ -5,6 +5,7 @@ from .site import *
 from .subscribe import *
 from .context import *
 from .servarr import *
+from .servcookie import *
 from .plugin import *
 from .history import *
 from .dashboard import *

@@ -13,3 +14,5 @@ from .message import *
 from .tmdb import *
 from .transfer import *
 from .file import *
+from .bangumi import *
+from .douban import *
app/schemas/bangumi.py  (new file, 12 lines)
@@ -0,0 +1,12 @@
from typing import Optional

from pydantic import BaseModel


class BangumiPerson(BaseModel):
    id: Optional[int] = None
    name: Optional[str] = None
    type: Optional[int] = 1
    career: Optional[list] = []
    images: Optional[dict] = {}
    relation: Optional[str] = None
@@ -83,6 +83,8 @@ class MediaInfo(BaseModel):
     tvdb_id: Optional[str] = None
     # 豆瓣ID
     douban_id: Optional[str] = None
+    # Bangumi ID
+    bangumi_id: Optional[int] = None
     # 媒体原语种
     original_language: Optional[str] = None
     # 媒体原发行标题
app/schemas/douban.py  (new file, 14 lines)
@@ -0,0 +1,14 @@
from typing import Optional

from pydantic import BaseModel


class DoubanPerson(BaseModel):
    id: Optional[str] = None
    name: Optional[str] = None
    roles: Optional[list] = []
    title: Optional[str] = None
    url: Optional[str] = None
    character: Optional[str] = None
    avatar: Optional[dict] = None
    latin_name: Optional[str] = None
@@ -17,6 +17,20 @@ class CommingMessage(BaseModel):
     channel: Optional[MessageChannel] = None
     # 消息体
     text: Optional[str] = None
+    # 时间
+    date: Optional[str] = None
+    # 消息方向
+    action: Optional[int] = 0
+
+    def to_dict(self):
+        """
+        转换为字典
+        """
+        items = self.dict()
+        for k, v in items.items():
+            if isinstance(v, MessageChannel):
+                items[k] = v.value
+        return items


 class Notification(BaseModel):

@@ -37,6 +51,21 @@ class Notification(BaseModel):
     link: Optional[str] = None
     # 用户ID
     userid: Optional[Union[str, int]] = None
+    # 时间
+    date: Optional[str] = None
+    # 消息方向
+    action: Optional[int] = 1
+
+    def to_dict(self):
+        """
+        转换为字典
+        """
+        items = self.dict()
+        for k, v in items.items():
+            if isinstance(v, MessageChannel) \
+                    or isinstance(v, NotificationType):
+                items[k] = v.value
+        return items


 class NotificationSwitch(BaseModel):
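The `to_dict` methods added above exist because pydantic's `.dict()` keeps `Enum` members as-is, so the models replace them with their `.value` before the payload is serialized. A stdlib-only sketch of that substitution (plain `Enum` stand-in, no pydantic required):

```python
from enum import Enum

class MessageChannel(Enum):
    Web = "Web"

def to_dict(items: dict) -> dict:
    # Mirror of the pattern in CommingMessage.to_dict: swap Enum members
    # for their .value so the dict is JSON-serializable.
    return {k: (v.value if isinstance(v, Enum) else v) for k, v in items.items()}

payload = to_dict({"channel": MessageChannel.Web, "text": "hi", "action": 0})
```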
app/schemas/servcookie.py  (new file, 11 lines)
@@ -0,0 +1,11 @@
from fastapi import Query
from pydantic import BaseModel


class CookieData(BaseModel):
    encrypted: str = Query(min_length=1, max_length=1024 * 1024 * 50)
    uuid: str = Query(min_length=5, pattern="^[a-zA-Z0-9]+$")


class CookiePassword(BaseModel):
    password: str
@@ -15,6 +15,7 @@ class Subscribe(BaseModel):
     keyword: Optional[str] = None
     tmdbid: Optional[int] = None
     doubanid: Optional[str] = None
+    bangumiid: Optional[int] = None
     # 季号
     season: Optional[int] = None
     # 海报
@@ -49,14 +49,3 @@ class TmdbPerson(BaseModel):
     popularity: Optional[float] = None
     images: Optional[dict] = {}
     biography: Optional[str] = None
-
-
-class DoubanPerson(BaseModel):
-    id: Optional[str] = None
-    name: Optional[str] = None
-    roles: Optional[list] = []
-    title: Optional[str] = None
-    url: Optional[str] = None
-    character: Optional[str] = None
-    avatar: Optional[dict] = None
-    latin_name: Optional[str] = None
@@ -117,3 +117,4 @@ class MessageChannel(Enum):
     Slack = "Slack"
     SynologyChat = "SynologyChat"
     VoceChat = "VoceChat"
+    Web = "Web"
@@ -1,6 +1,11 @@
+import base64
 import time
+from hashlib import md5
 from typing import Any
+
+from Crypto import Random
+from Crypto.Cipher import AES


 def retry(ExceptionToCheck: Any,
           tries: int = 3, delay: int = 3, backoff: int = 2, logger: Any = None):

@@ -32,3 +37,48 @@ def retry(ExceptionToCheck: Any,
         return f_retry

     return deco_retry
+
+
+def bytes_to_key(data: bytes, salt: bytes, output=48) -> bytes:
+    # extended from https://gist.github.com/gsakkis/4546068
+    assert len(salt) == 8, len(salt)
+    data += salt
+    key = md5(data).digest()
+    final_key = key
+    while len(final_key) < output:
+        key = md5(key + data).digest()
+        final_key += key
+    return final_key[:output]
+
+
+def encrypt(message: bytes, passphrase: bytes) -> bytes:
+    """
+    CryptoJS 加密原文
+
+    This is a modified copy of https://stackoverflow.com/questions/36762098/how-to-decrypt-password-from-javascript-cryptojs-aes-encryptpassword-passphras
+    """
+    salt = Random.new().read(8)
+    key_iv = bytes_to_key(passphrase, salt, 32 + 16)
+    key = key_iv[:32]
+    iv = key_iv[32:]
+    aes = AES.new(key, AES.MODE_CBC, iv)
+    length = 16 - (len(message) % 16)
+    data = message + (chr(length) * length).encode()
+    return base64.b64encode(b"Salted__" + salt + aes.encrypt(data))
+
+
+def decrypt(encrypted: str | bytes, passphrase: bytes) -> bytes:
+    """
+    CryptoJS 解密密文
+
+    来源同encrypt
+    """
+    encrypted = base64.b64decode(encrypted)
+    assert encrypted[0:8] == b"Salted__"
+    salt = encrypted[8:16]
+    key_iv = bytes_to_key(passphrase, salt, 32 + 16)
+    key = key_iv[:32]
+    iv = key_iv[32:]
+    aes = AES.new(key, AES.MODE_CBC, iv)
+    data = aes.decrypt(encrypted[16:])
+    return data[:-(data[-1] if type(data[-1]) == int else ord(data[-1]))]
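The `encrypt`/`decrypt` helpers above depend on pycryptodome, but the key-derivation step is OpenSSL's `EVP_BytesToKey` with MD5 (CryptoJS's default KDF) and needs only the stdlib. A sketch showing how the 48 derived bytes split into a 32-byte AES-256 key and a 16-byte IV (the passphrase and salt here are arbitrary examples):

```python
from hashlib import md5

def bytes_to_key(data: bytes, salt: bytes, output: int = 48) -> bytes:
    # Repeatedly hash (previous digest + passphrase + salt) until enough
    # key material is accumulated, then truncate to the requested size.
    assert len(salt) == 8
    data += salt
    key = md5(data).digest()
    final_key = key
    while len(final_key) < output:
        key = md5(key + data).digest()
        final_key += key
    return final_key[:output]

key_iv = bytes_to_key(b"password", b"\x00" * 8, 32 + 16)
key, iv = key_iv[:32], key_iv[32:]
```

The derivation is deterministic for a given passphrase and salt, which is why `decrypt` can rebuild the same key/IV from the salt embedded after the `Salted__` prefix.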
@@ -71,8 +71,8 @@ class SystemUtils:
         """
         return True if platform.machine() == 'aarch64' else False

-    @property
-    def platform(self) -> str:
+    @staticmethod
+    def platform() -> str:
         """
         获取系统平台
         """
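Why the `@property` → `@staticmethod` change above matters: reading a `@property` off the *class* (rather than an instance) yields the property object itself, so a comparison like `platform != SystemUtils.platform` compared a string to a `property` and could never match. A minimal sketch (hypothetical class names):

```python
class BrokenUtils:
    @property
    def platform(self) -> str:
        return "linux"

class FixedUtils:
    @staticmethod
    def platform() -> str:
        return "linux"

# Class-level access to a property returns the descriptor, not the value:
broken = BrokenUtils.platform      # a property object, never equal to "linux"
fixed = FixedUtils.platform()      # the actual platform string
```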
@@ -437,6 +437,8 @@ class SystemUtils:
         """
         执行Docker重启操作
         """
+        if not SystemUtils.is_docker():
+            return False, "非Docker环境,无法重启!"
         try:
             # 创建 Docker 客户端
             client = docker.DockerClient(base_url='tcp://127.0.0.1:38379')
@@ -11,6 +11,8 @@ DEV=false
 SUPERUSER=admin
 # 大内存模式,开启后会增加缓存数量,但会占用更多内存
 BIG_MEMORY_MODE=false
 # 是否启用DOH域名解析,启用后对于api.themoviedb.org等域名通过DOH解析,避免域名DNS被污染
 DOH_ENABLE=true
+# 元数据识别缓存过期时间,数字型,单位小时,0为系统默认(大内存模式为7天,否则为3天),调大该值可减少themoviedb的访问次数
+META_CACHE_EXPIRE=0
 # 自动检查和更新站点资源包(索引、认证等)

@@ -43,4 +45,3 @@ DOWNLOAD_SUBTITLE=true
 OCR_HOST=https://movie-pilot.org
 # 插件市场仓库地址,多个地址使用`,`分隔,保留最后的/
 PLUGIN_MARKET=https://github.com/jxxghp/MoviePilot-Plugins
database/versions/5813aaa7cb3a_1_0_15.py  (new file, 34 lines)
@@ -0,0 +1,34 @@
"""1.0.15

Revision ID: 5813aaa7cb3a
Revises: f94cd1217fd7
Create Date: 2024-03-17 09:04:51.785716

"""
import contextlib

from alembic import op
import sqlalchemy as sa


# revision identifiers, used by Alembic.
revision = '5813aaa7cb3a'
down_revision = 'f94cd1217fd7'
branch_labels = None
depends_on = None


def upgrade() -> None:
    # ### commands auto generated by Alembic - please adjust! ###
    with contextlib.suppress(Exception):
        with op.batch_alter_table("message") as batch_op:
            batch_op.add_column(sa.Column('note', sa.String, nullable=True))
    try:
        op.create_index('ix_message_reg_time', 'message', ['reg_time'], unique=False)
    except Exception as err:
        pass
    # ### end Alembic commands ###


def downgrade() -> None:
    pass
database/versions/d146dea51516_1_0_16.py  (new file, 34 lines)
@@ -0,0 +1,34 @@
"""1.0.16

Revision ID: d146dea51516
Revises: 5813aaa7cb3a
Create Date: 2024-03-18 18:13:38.099531

"""
import contextlib

from alembic import op
import sqlalchemy as sa


# revision identifiers, used by Alembic.
revision = 'd146dea51516'
down_revision = '5813aaa7cb3a'
branch_labels = None
depends_on = None


def upgrade() -> None:
    # ### commands auto generated by Alembic - please adjust! ###
    with contextlib.suppress(Exception):
        with op.batch_alter_table("subscribe") as batch_op:
            batch_op.add_column(sa.Column('bangumiid', sa.Integer, nullable=True))
    try:
        op.create_index('ix_subscribe_bangumiid', 'subscribe', ['bangumiid'], unique=False)
    except Exception as err:
        pass
    # ### end Alembic commands ###


def downgrade() -> None:
    pass
@@ -28,4 +28,4 @@ def upgrade() -> None:

 def downgrade() -> None:
-    # ### commands auto generated by Alembic - please adjust! ###
-    pass
+    pass
nginx.conf  (19 lines changed)
@@ -56,6 +56,25 @@ http {
             proxy_pass http://backend_api;
         }

+        location /cookiecloud {
+            # 后端cookiecloud地址
+            proxy_pass http://backend_api;
+            rewrite ^.+mock-server/?(.*)$ /$1 break;
+            proxy_http_version 1.1;
+            proxy_buffering off;
+            proxy_cache off;
+            proxy_redirect off;
+            proxy_set_header Connection "";
+            proxy_set_header Upgrade $http_upgrade;
+            proxy_set_header X-Real-IP $remote_addr;
+            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+            proxy_set_header Host $http_host;
+            proxy_set_header X-Nginx-Proxy true;
+
+            # 超时设置
+            proxy_read_timeout 600s;
+        }
+
         location ~ ^/api/v1/system/(message|progress/) {
             # SSE MIME类型设置
             default_type text/event-stream;
@@ -1 +1 @@
-APP_VERSION = 'v1.7.2'
+APP_VERSION = 'v1.7.4'