Compare commits

...

67 Commits

Author SHA1 Message Date
jxxghp
65b17e4f2b v2.4.2
- Fix regular users being unable to select sites when jumping to search from a media card; regular users cannot change search sites, so the search runs directly against the admin's preset sites
2025-04-22 17:35:30 +08:00
jxxghp
23c6898789 Update nginx.template.conf 2025-04-21 21:42:12 +08:00
jxxghp
df2a1be2a2 Update nginx.template.conf 2025-04-21 21:33:00 +08:00
jxxghp
2db628a2ba v2.4.1
This release mainly adjusts the user interface:
- Add a transparent theme style
- Redesigned bottom navigation bar in PWA mode
- Polish multiple UI details
2025-04-21 20:05:53 +08:00
jxxghp
b6c40436c9 Merge pull request #4165 from wikrin/v2 2025-04-19 22:36:48 +08:00
Attente
a8a70cac08 refactor(db): optimize download history query logic
- Use the same logic as `TransferHistory.list_by`
2025-04-19 20:22:37 +08:00
jxxghp
3eefbf97b1 Update plex.py 2025-04-19 15:14:47 +08:00
jxxghp
3c423e0838 Update jellyfin.py 2025-04-19 15:14:14 +08:00
jxxghp
99cde43954 Update emby.py 2025-04-19 15:13:33 +08:00
jxxghp
fa3a787bf7 Update mediaserver.py 2025-04-19 15:12:42 +08:00
jxxghp
c776dc8036 feat: WebhookMessage.json 2025-04-19 07:59:59 +08:00
jxxghp
1ef068351d fix docker 2025-04-17 19:36:54 +08:00
jxxghp
6abe0a1862 fix version 2025-04-17 19:15:18 +08:00
jxxghp
ff13045f52 fix build 2025-04-17 12:44:22 +08:00
jxxghp
59c09681cb fix build 2025-04-17 11:49:07 +08:00
jxxghp
f664cf6fa5 remove built-lite 2025-04-17 11:47:24 +08:00
jxxghp
01a847a9c2 test beta 2025-04-17 11:43:42 +08:00
jxxghp
6da655f67f Merge pull request #4154 from TimoYoung/v2 2025-04-16 12:41:15 +08:00
TimoYoung
21df7dced1 fix: CookieCloud site sync task failing to run 2025-04-16 10:26:43 +08:00
jxxghp
7fc257ea79 v2.4.0 2025-04-16 08:11:31 +08:00
jxxghp
24f170ff72 fix search cache 2025-04-16 08:10:48 +08:00
jxxghp
39999c9ee4 Update Dockerfile 2025-04-15 06:54:11 +08:00
jxxghp
27a5188e4e Update Dockerfile.lite 2025-04-15 06:52:53 +08:00
jxxghp
a5af0786aa - Fix UI errors 2025-04-13 16:03:40 +08:00
jxxghp
e9c9cfaa72 Merge pull request #4137 from lddsb/patch-1 2025-04-11 16:06:29 +08:00
Dee Luo
8ca4ea0f3f perf: optimize qBittorrent downloader port retrieval logic 2025-04-11 15:43:40 +08:00
jxxghp
86e1f9a9d6 Merge pull request #4136 from lddsb/patch-3 2025-04-11 11:43:26 +08:00
Dee Luo
b36ceda585 fix: Rename groups to groups.py 2025-04-11 11:22:29 +08:00
Dee Luo
27a3e6c6db feat: add unit tests for release groups 2025-04-11 11:21:39 +08:00
Dee Luo
a731327c00 feat: add unit test cases for release groups 2025-04-11 11:20:36 +08:00
Dee Luo
737c00978e perf: improve release-group matching logic, fixing some Web groups failing to match
Add release-group matching rules for two sites
2025-04-11 11:18:15 +08:00
jxxghp
18bcb3a067 fix #4118 2025-04-10 19:40:22 +08:00
jxxghp
f49f55576f Merge pull request #4128 from lddsb/patch-2 2025-04-10 11:09:12 +08:00
Dee Luo
1bef4f9a4d perf: improve how custom release groups are read so that lists containing empty strings no longer affect the final result 2025-04-10 11:00:46 +08:00
Dee Luo
ab1df59f7a fix: emptiness check misbehaving when the frontend passes an empty list such as [""] 2025-04-10 10:51:40 +08:00
jxxghp
bcd235521e v2.3.9
- Polish multiple UI details
- Fix subscription-share parameter passing and open up subscription-share management
2025-04-10 08:34:16 +08:00
jxxghp
31a2eac302 fix: subscription-share parameter passing 2025-04-10 08:19:59 +08:00
jxxghp
7e6b7e5dd5 Update subscribe.py 2025-04-09 17:32:07 +08:00
jxxghp
9ec9f48425 feat: add subscription admin #4123 2025-04-09 13:26:58 +08:00
jxxghp
a3bec43eab feat: add subscription admin #4123 2025-04-09 13:26:10 +08:00
jxxghp
f429b6397e fix RecommendMediaSource 2025-04-08 18:52:54 +08:00
jxxghp
9d6e7dc288 Merge pull request #4115 from lddsb/patch-1 2025-04-08 17:58:36 +08:00
Dee Luo
a27c09c1e8 perf: relax release-group suffix matching
Support suffix matches such as 制作组xxx
2025-04-08 16:35:38 +08:00
jxxghp
ceb0697c73 - Adapt to M-Team (馒头) API changes 2025-04-07 21:30:41 +08:00
jxxghp
6ad6a08bf1 Merge pull request #4110 from cddjr/trimemedia
Improve compatibility of 飞牛 (trimemedia) server addresses
2025-04-07 21:15:38 +08:00
jxxghp
fac6ad7116 Merge pull request #4109 from cddjr/fix_mteam
Fix incorrect M-Team (馒头) request parameters
2025-04-07 21:14:42 +08:00
景大侠
7d8cda0457 Fix incorrect M-Team request parameters 2025-04-07 21:04:21 +08:00
景大侠
33fc3fd63b Add an API for deleting media 2025-04-07 17:20:47 +08:00
景大侠
8d39cc87f7 Improve server address compatibility 2025-04-07 16:37:41 +08:00
景大侠
d0b1348c96 fix some warnings 2025-04-07 16:21:39 +08:00
jxxghp
0afc38f6b8 Merge pull request #4103 from wikrin/v2 2025-04-07 11:07:11 +08:00
Attente
264896ba17 fix: episode-group scraping 2025-04-07 09:25:06 +08:00
jxxghp
08decf0b82 feat: add default plugin repository 2025-04-07 08:06:59 +08:00
jxxghp
98381265e6 Update u115.py 2025-04-07 07:37:00 +08:00
DDSRem
d323159719 Update requirements.in 2025-04-06 13:10:56 +08:00
jxxghp
7ef21e1d1c Merge pull request #4098 from DDS-Derek/dev 2025-04-06 12:02:01 +08:00
DDSRem
2d6b2ab7d7 bump: upgrade Python environment to 3.12
links https://github.com/jxxghp/MoviePilot/issues/3543
2025-04-06 11:56:00 +08:00
jxxghp
a1e6fd88a9 Update version.py 2025-04-06 07:53:29 +08:00
jxxghp
e72ff867fc fix 115 pickcode 2025-04-05 09:29:08 +08:00
jxxghp
8512641984 Update scraper.py 2025-04-04 22:13:14 +08:00
jxxghp
f1aa64d191 fix episodes group 2025-04-04 12:17:42 +08:00
jxxghp
347262538f fix episodes group 2025-04-04 08:59:12 +08:00
jxxghp
82510d60ca Update __init__.py 2025-04-03 22:48:29 +08:00
jxxghp
6104cd04c3 Update context.py 2025-04-03 20:32:56 +08:00
jxxghp
44eb58426a feat: support recognition and scraping with a specified episode group 2025-04-03 18:43:04 +08:00
jxxghp
078b60cc1e feat: support recognition and scraping with a specified episode group 2025-04-03 18:35:02 +08:00
jxxghp
21e120a4f8 refactor: eliminate one API query 2025-04-03 10:43:31 +08:00
66 changed files with 1547 additions and 789 deletions

View File

@@ -46,7 +46,7 @@ jobs:
uses: docker/build-push-action@v5
with:
context: .
file: Dockerfile
file: docker/Dockerfile
platforms: |
linux/amd64
linux/arm64/v8

View File

@@ -1,55 +0,0 @@
name: MoviePilot Builder v2 Lite
on:
workflow_dispatch:
push:
branches:
- v2
paths:
- 'version.py'
jobs:
Docker-build:
runs-on: ubuntu-latest
name: Build Docker Image
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Release version
id: release_version
run: |
app_version=$(cat version.py |sed -ne "s/APP_VERSION\s=\s'v\(.*\)'/\1/gp")
echo "app_version=$app_version" >> $GITHUB_ENV
- name: Docker Meta
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ secrets.DOCKER_USERNAME }}/moviepilot-v2
tags: |
type=raw,value=lite-latest
- name: Set Up QEMU
uses: docker/setup-qemu-action@v3
- name: Set Up Buildx
uses: docker/setup-buildx-action@v3
- name: Login DockerHub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Build Image
uses: docker/build-push-action@v5
with:
context: .
file: Dockerfile.lite
platforms: |
linux/amd64
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha, scope=${{ github.workflow }}-docker
cache-to: type=gha, scope=${{ github.workflow }}-docker
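The `release_version` step above pulls the version number out of `version.py` with sed; the same extraction can be sketched in Python (the `version.py` content below is a hypothetical example mirroring the repo's format):

```python
import re

# hypothetical version.py content, in the format the sed command expects
version_py = "APP_VERSION = 'v2.4.2'\n"

match = re.search(r"APP_VERSION\s=\s'v(.*)'", version_py)
app_version = match.group(1) if match else None
print(app_version)  # 2.4.2
```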

View File

@@ -1,93 +0,0 @@
FROM python:3.11.4-slim-bookworm
ENV LANG="C.UTF-8" \
TZ="Asia/Shanghai" \
HOME="/moviepilot" \
CONFIG_DIR="/config" \
TERM="xterm" \
DISPLAY=:987 \
PUID=0 \
PGID=0 \
UMASK=000 \
PORT=3001 \
NGINX_PORT=3000 \
MOVIEPILOT_AUTO_UPDATE=release
WORKDIR "/app"
RUN apt-get update -y \
&& apt-get upgrade -y \
&& apt-get -y install \
musl-dev \
nginx \
gettext-base \
locales \
procps \
gosu \
bash \
wget \
curl \
busybox \
dumb-init \
jq \
fuse3 \
rsync \
ffmpeg \
nano \
&& \
if [ "$(uname -m)" = "x86_64" ]; \
then ln -s /usr/lib/x86_64-linux-musl/libc.so /lib/libc.musl-x86_64.so.1; \
elif [ "$(uname -m)" = "aarch64" ]; \
then ln -s /usr/lib/aarch64-linux-musl/libc.so /lib/libc.musl-aarch64.so.1; \
fi \
&& curl https://rclone.org/install.sh | bash \
&& curl --insecure -fsSL https://raw.githubusercontent.com/DDS-Derek/Aria2-Pro-Core/master/aria2-install.sh | bash \
&& apt-get autoremove -y \
&& apt-get clean -y \
&& rm -rf \
/tmp/* \
/moviepilot/.cache \
/var/lib/apt/lists/* \
/var/tmp/*
COPY requirements.in requirements.in
RUN apt-get update -y \
&& apt-get install -y build-essential \
&& pip install --upgrade pip \
&& pip install Cython pip-tools \
&& pip-compile requirements.in \
&& pip install -r requirements.txt \
&& playwright install-deps chromium \
&& apt-get remove -y build-essential \
&& apt-get autoremove -y \
&& apt-get clean -y \
&& rm -rf \
/tmp/* \
/moviepilot/.cache \
/var/lib/apt/lists/* \
/var/tmp/*
COPY . .
RUN cp -f /app/nginx.conf /etc/nginx/nginx.template.conf \
&& cp -f /app/update /usr/local/bin/mp_update \
&& cp -f /app/entrypoint /entrypoint \
&& cp -f /app/docker_http_proxy.conf /etc/nginx/docker_http_proxy.conf \
&& chmod +x /entrypoint /usr/local/bin/mp_update \
&& mkdir -p ${HOME} \
&& groupadd -r moviepilot -g 918 \
&& useradd -r moviepilot -g moviepilot -d ${HOME} -s /bin/bash -u 918 \
&& python_ver=$(python3 -V | awk '{print $2}') \
&& echo "/app/" > /usr/local/lib/python${python_ver%.*}/site-packages/app.pth \
&& echo 'fs.inotify.max_user_watches=5242880' >> /etc/sysctl.conf \
&& echo 'fs.inotify.max_user_instances=5242880' >> /etc/sysctl.conf \
&& locale-gen zh_CN.UTF-8 \
&& python3 /app/setup.py \
&& find /app/app -type f -name "*.py" ! -path "/app/app/main.py" -exec rm -f {} \; \
&& FRONTEND_VERSION=$(sed -n "s/^FRONTEND_VERSION\s*=\s*'\([^']*\)'/\1/p" /app/version.py) \
&& curl -sL "https://github.com/jxxghp/MoviePilot-Frontend/releases/download/${FRONTEND_VERSION}/dist.zip" | busybox unzip -d / - \
&& mv /dist /public \
&& curl -sL "https://github.com/jxxghp/MoviePilot-Plugins/archive/refs/heads/main.zip" | busybox unzip -d /tmp - \
&& mv -f /tmp/MoviePilot-Plugins-main/plugins.v2/* /app/app/plugins/ \
&& cat /tmp/MoviePilot-Plugins-main/package.json | jq -r 'to_entries[] | select(.value.v2 == true) | .key' | awk '{print tolower($0)}' | \
while read -r i; do if [ ! -d "/app/app/plugins/$i" ]; then mv "/tmp/MoviePilot-Plugins-main/plugins/$i" "/app/app/plugins/"; else echo "跳过 $i"; fi; done \
&& curl -sL "https://github.com/jxxghp/MoviePilot-Resources/archive/refs/heads/main.zip" | busybox unzip -d /tmp - \
&& mv -f /tmp/MoviePilot-Resources-main/resources/* /app/app/helper/ \
&& rm -rf /tmp/* /app/build
EXPOSE 3000
VOLUME [ "/config" ]
ENTRYPOINT [ "/entrypoint" ]

View File

@@ -28,7 +28,7 @@
## 参与开发
需要 `Python 3.11``Node JS v20.12.1`
需要 `Python 3.12``Node JS v20.12.1`
- 克隆主项目 [MoviePilot](https://github.com/jxxghp/MoviePilot)
```shell

View File

@@ -62,7 +62,7 @@ class FetchTorrentsAction(BaseAction):
params = FetchTorrentsParams(**params)
if params.search_type == "keyword":
# 按关键字搜索
torrents = self.searchchain.search_by_title(title=params.name, sites=params.sites, cache_local=False)
torrents = self.searchchain.search_by_title(title=params.name, sites=params.sites)
for torrent in torrents:
if global_vars.is_workflow_stopped(workflow_id):
break

View File

@@ -136,6 +136,24 @@ def category(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
return MediaChain().media_category() or {}
@router.get("/group/seasons/{episode_group}", summary="查询剧集组季信息", response_model=List[schemas.MediaSeason])
def group_seasons(episode_group: str, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询剧集组季信息themoviedb
"""
return TmdbChain().tmdb_group_seasons(group_id=episode_group)
@router.get("/groups/{tmdbid}", summary="查询媒体剧集组", response_model=List[dict])
def seasons(tmdbid: int, _: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询媒体剧集组列表themoviedb
"""
mediainfo = MediaChain().recognize_media(tmdbid=tmdbid, mtype=MediaType.TV)
if not mediainfo:
return []
return mediainfo.episode_groups
@router.get("/seasons", summary="查询媒体季信息", response_model=List[schemas.MediaSeason])
def seasons(mediaid: Optional[str] = None,
title: Optional[str] = None,

View File

@@ -6,8 +6,8 @@ from app import schemas
from app.core.event import eventmanager
from app.core.security import verify_token
from app.schemas.types import ChainEventType
from chain.recommend import RecommendChain
from schemas import RecommendSourceEventData
from app.chain.recommend import RecommendChain
from app.schemas import RecommendSourceEventData
router = APIRouter()

View File

@@ -58,12 +58,12 @@ def search_by_id(mediaid: str,
if doubaninfo:
torrents = SearchChain().search_by_id(doubanid=doubaninfo.get("id"),
mtype=media_type, area=area, season=media_season,
sites=site_list)
sites=site_list, cache_local=True)
else:
return schemas.Response(success=False, message="未识别到豆瓣媒体信息")
else:
torrents = SearchChain().search_by_id(tmdbid=tmdbid, mtype=media_type, area=area, season=media_season,
sites=site_list)
sites=site_list, cache_local=True)
elif mediaid.startswith("douban:"):
doubanid = mediaid.replace("douban:", "")
if settings.RECOGNIZE_SOURCE == "themoviedb":
@@ -74,12 +74,12 @@ def search_by_id(mediaid: str,
media_season = tmdbinfo.get('season')
torrents = SearchChain().search_by_id(tmdbid=tmdbinfo.get("id"),
mtype=media_type, area=area, season=media_season,
sites=site_list)
sites=site_list, cache_local=True)
else:
return schemas.Response(success=False, message="未识别到TMDB媒体信息")
else:
torrents = SearchChain().search_by_id(doubanid=doubanid, mtype=media_type, area=area, season=media_season,
sites=site_list)
sites=site_list, cache_local=True)
elif mediaid.startswith("bangumi:"):
bangumiid = int(mediaid.replace("bangumi:", ""))
if settings.RECOGNIZE_SOURCE == "themoviedb":
@@ -88,7 +88,7 @@ def search_by_id(mediaid: str,
if tmdbinfo:
torrents = SearchChain().search_by_id(tmdbid=tmdbinfo.get("id"),
mtype=media_type, area=area, season=media_season,
sites=site_list)
sites=site_list, cache_local=True)
else:
return schemas.Response(success=False, message="未识别到TMDB媒体信息")
else:
@@ -97,7 +97,7 @@ def search_by_id(mediaid: str,
if doubaninfo:
torrents = SearchChain().search_by_id(doubanid=doubaninfo.get("id"),
mtype=media_type, area=area, season=media_season,
sites=site_list)
sites=site_list, cache_local=True)
else:
return schemas.Response(success=False, message="未识别到豆瓣媒体信息")
else:
@@ -113,11 +113,11 @@ def search_by_id(mediaid: str,
if event_data.media_dict:
search_id = event_data.media_dict.get("id")
if event_data.convert_type == "themoviedb":
torrents = SearchChain().search_by_id(tmdbid=search_id,
mtype=media_type, area=area, season=media_season)
torrents = SearchChain().search_by_id(tmdbid=search_id, mtype=media_type, area=area,
season=media_season, cache_local=True)
elif event_data.convert_type == "douban":
torrents = SearchChain().search_by_id(doubanid=search_id,
mtype=media_type, area=area, season=media_season)
torrents = SearchChain().search_by_id(doubanid=search_id, mtype=media_type, area=area,
season=media_season, cache_local=True)
else:
if not title:
return schemas.Response(success=False, message="未知的媒体ID")
@@ -133,11 +133,11 @@ def search_by_id(mediaid: str,
mediainfo = MediaChain().recognize_media(meta=meta)
if mediainfo:
if settings.RECOGNIZE_SOURCE == "themoviedb":
torrents = SearchChain().search_by_id(tmdbid=mediainfo.tmdb_id,
mtype=media_type, area=area, season=media_season)
torrents = SearchChain().search_by_id(tmdbid=mediainfo.tmdb_id, mtype=media_type, area=area,
season=media_season, cache_local=True)
else:
torrents = SearchChain().search_by_id(doubanid=mediainfo.douban_id,
mtype=media_type, area=area, season=media_season)
torrents = SearchChain().search_by_id(doubanid=mediainfo.douban_id, mtype=media_type, area=area,
season=media_season, cache_local=True)
# 返回搜索结果
if not torrents:
return schemas.Response(success=False, message="未搜索到任何资源")
@@ -154,7 +154,8 @@ def search_by_title(keyword: Optional[str] = None,
根据名称模糊搜索站点资源,支持分页,关键词为空是返回首页资源
"""
torrents = SearchChain().search_by_title(title=keyword, page=page,
sites=[int(site) for site in sites.split(",") if site] if sites else None)
sites=[int(site) for site in sites.split(",") if site] if sites else None,
cache_local=True)
if not torrents:
return schemas.Response(success=False, message="未搜索到任何资源")
return schemas.Response(success=True, data=[torrent.to_dict() for torrent in torrents])

View File

@@ -75,22 +75,12 @@ def create_subscribe(
title = subscribe_in.name
else:
title = None
# 订阅用户
subscribe_in.username = current_user.name
sid, message = SubscribeChain().add(mtype=mtype,
title=title,
year=subscribe_in.year,
tmdbid=subscribe_in.tmdbid,
season=subscribe_in.season,
doubanid=subscribe_in.doubanid,
bangumiid=subscribe_in.bangumiid,
mediaid=subscribe_in.mediaid,
username=current_user.name,
best_version=subscribe_in.best_version,
save_path=subscribe_in.save_path,
search_imdbid=subscribe_in.search_imdbid,
custom_words=subscribe_in.custom_words,
media_category=subscribe_in.media_category,
filter_groups=subscribe_in.filter_groups,
exist_ok=True)
exist_ok=True,
**subscribe_in.dict())
return schemas.Response(
success=bool(sid), message=message, data={"id": sid}
)
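The change above replaces the long explicit argument list with keyword unpacking of the request model's fields. A minimal sketch of the pattern, using a dataclass as a hypothetical stand-in for the real Pydantic `SubscribeIn` schema:

```python
from dataclasses import dataclass, asdict

@dataclass
class SubscribeIn:
    # hypothetical stand-in for the real Pydantic schema
    name: str
    year: str = ""
    season: int = 1

def add(name=None, year=None, season=None, exist_ok=False, **kwargs):
    # the target method only needs to accept the same field names
    return f"{name} ({year}) S{season} exist_ok={exist_ok}"

sub = SubscribeIn(name="Demo", year="2025", season=2)
print(add(exist_ok=True, **asdict(sub)))  # Demo (2025) S2 exist_ok=True
```

With Pydantic the unpacking is `**subscribe_in.dict()` as in the diff (`**model.model_dump()` in Pydantic v2).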

View File

@@ -28,6 +28,7 @@ from app.helper.message import MessageHelper, MessageQueueManager
from app.helper.progress import ProgressHelper
from app.helper.rule import RuleHelper
from app.helper.sites import SitesHelper
from app.helper.subscribe import SubscribeHelper
from app.log import logger
from app.monitor import Monitor
from app.scheduler import Scheduler
@@ -179,9 +180,10 @@ def get_global_setting():
exclude={"SECRET_KEY", "RESOURCE_SECRET_KEY", "API_TOKEN", "TMDB_API_KEY", "TVDB_API_KEY", "FANART_API_KEY",
"COOKIECLOUD_KEY", "COOKIECLOUD_PASSWORD", "GITHUB_TOKEN", "REPO_GITHUB_TOKEN"}
)
# 追加用户唯一ID
# 追加用户唯一ID和订阅分享管理权限
info.update({
"USER_UNIQUE_ID": SystemUtils.generate_user_unique_id()
"USER_UNIQUE_ID": SubscribeHelper().get_user_uuid(),
"SUBSCRIBE_SHARE_MANAGE": SubscribeHelper().is_admin_user(),
})
return schemas.Response(success=True,
data=info)
@@ -281,6 +283,9 @@ def set_setting(key: str, value: Union[list, dict, bool, int, str] = None,
success, message = settings.update_setting(key=key, value=value)
return schemas.Response(success=success, message=message)
elif key in {item.value for item in SystemConfigKey}:
if isinstance(value, list):
value = list(filter(None, value))
value = value if value else None
SystemConfigOper().set(key, value)
return schemas.Response(success=True)
else:
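The new guard drops falsy entries before persisting the setting, which is what neutralizes empty lists like `[""]` coming from the frontend. The same normalization, sketched on its own:

```python
def normalize(value):
    # drop empty strings/None from lists, and store None instead of an empty container
    if isinstance(value, list):
        value = list(filter(None, value))
    return value if value else None

print(normalize([""]))       # None
print(normalize(["a", ""]))  # ['a']
print(normalize("keep"))     # keep
```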

View File

@@ -114,9 +114,9 @@ def tmdb_person_credits(person_id: int,
@router.get("/{tmdbid}/{season}", summary="TMDB季所有集", response_model=List[schemas.TmdbEpisode])
def tmdb_season_episodes(tmdbid: int, season: int,
def tmdb_season_episodes(tmdbid: int, season: int, episode_group: Optional[str] = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据TMDBID查询某季的所有信信息
"""
return TmdbChain().tmdb_episodes(tmdbid=tmdbid, season=season)
return TmdbChain().tmdb_episodes(tmdbid=tmdbid, season=season, episode_group=episode_group)

View File

@@ -146,6 +146,7 @@ def manual_transfer(transer_item: ManualTransferItem,
doubanid=transer_item.doubanid,
mtype=mtype,
season=transer_item.season,
episode_group=transer_item.episode_group,
transfer_type=transer_item.transfer_type,
epformat=epformat,
min_filesize=transer_item.min_filesize,

View File

@@ -150,6 +150,7 @@ class ChainBase(metaclass=ABCMeta):
tmdbid: Optional[int] = None,
doubanid: Optional[str] = None,
bangumiid: Optional[int] = None,
episode_group: Optional[str] = None,
cache: bool = True) -> Optional[MediaInfo]:
"""
识别媒体信息不含Fanart图片
@@ -158,6 +159,7 @@ class ChainBase(metaclass=ABCMeta):
:param tmdbid: tmdbid
:param doubanid: 豆瓣ID
:param bangumiid: BangumiID
:param episode_group: 剧集组
:param cache: 是否使用缓存
:return: 识别的媒体信息,包括剧集信息
"""
@@ -173,7 +175,8 @@ class ChainBase(metaclass=ABCMeta):
doubanid = None
bangumiid = None
return self.run_module("recognize_media", meta=meta, mtype=mtype,
tmdbid=tmdbid, doubanid=doubanid, bangumiid=bangumiid, cache=cache)
tmdbid=tmdbid, doubanid=doubanid, bangumiid=bangumiid,
episode_group=episode_group, cache=cache)
def match_doubaninfo(self, name: str, imdbid: Optional[str] = None,
mtype: Optional[MediaType] = None, year: Optional[str] = None, season: Optional[int] = None,

View File

@@ -209,7 +209,6 @@ class DownloadChain(ChainBase):
save_path: Optional[str] = None,
userid: Union[str, int] = None,
username: Optional[str] = None,
media_category: Optional[str] = None,
label: Optional[str] = None) -> Optional[str]:
"""
下载及发送通知
@@ -222,9 +221,13 @@ class DownloadChain(ChainBase):
:param save_path: 保存路径
:param userid: 用户ID
:param username: 调用下载的用户名/插件名
:param media_category: 自定义媒体类别
:param label: 自定义标签
"""
_torrent = context.torrent_info
_media = context.media_info
_meta = context.meta_info
_site_downloader = _torrent.site_downloader
# 发送资源下载事件,允许外部拦截下载
event_data = ResourceDownloadEventData(
context=context,
@@ -236,7 +239,7 @@ class DownloadChain(ChainBase):
"save_path": save_path,
"userid": userid,
"username": username,
"media_category": media_category
"media_category": _media.category
}
)
# 触发资源下载事件
@@ -250,15 +253,11 @@ class DownloadChain(ChainBase):
f"Reason: {event_data.reason}")
return None
_torrent = context.torrent_info
_media = context.media_info
_meta = context.meta_info
_site_downloader = _torrent.site_downloader
# 补充完整的media数据
if not _media.genre_ids:
new_media = self.recognize_media(mtype=_media.type, tmdbid=_media.tmdb_id,
doubanid=_media.douban_id, bangumiid=_media.bangumi_id)
doubanid=_media.douban_id, bangumiid=_media.bangumi_id,
episode_group=_media.episode_group)
if new_media:
_media = new_media
@@ -355,7 +354,8 @@ class DownloadChain(ChainBase):
username=username,
channel=channel.value if channel else None,
date=time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()),
media_category=media_category,
media_category=_media.category,
episode_group=_media.episode_group,
note={"source": source}
)
@@ -423,7 +423,6 @@ class DownloadChain(ChainBase):
source: Optional[str] = None,
userid: Optional[str] = None,
username: Optional[str] = None,
media_category: Optional[str] = None,
downloader: Optional[str] = None
) -> Tuple[List[Context], Dict[Union[int, str], Dict[int, NotExistMediaInfo]]]:
"""
@@ -435,7 +434,6 @@ class DownloadChain(ChainBase):
:param source: 来源(消息通知、订阅、手工下载等)
:param userid: 用户ID
:param username: 调用下载的用户名/插件名
:param media_category: 自定义媒体类别
:param downloader: 下载器
:return: 已经下载的资源列表、剩余未下载到的剧集 no_exists[tmdb_id/douban_id] = {season: NotExistMediaInfo}
"""
@@ -524,7 +522,7 @@ class DownloadChain(ChainBase):
logger.info(f"开始下载电影 {context.torrent_info.title} ...")
if self.download_single(context, save_path=save_path, channel=channel,
source=source, userid=userid, username=username,
media_category=media_category, downloader=downloader):
downloader=downloader):
# 下载成功
logger.info(f"{context.torrent_info.title} 添加下载成功")
downloaded_list.append(context)
@@ -609,8 +607,7 @@ class DownloadChain(ChainBase):
source=source,
userid=userid,
username=username,
media_category=media_category,
downloader=downloader,
downloader=downloader
)
else:
# 下载
@@ -618,7 +615,6 @@ class DownloadChain(ChainBase):
download_id = self.download_single(context, save_path=save_path,
channel=channel, source=source,
userid=userid, username=username,
media_category=media_category,
downloader=downloader)
if download_id:
@@ -690,7 +686,6 @@ class DownloadChain(ChainBase):
download_id = self.download_single(context, save_path=save_path,
channel=channel, source=source,
userid=userid, username=username,
media_category=media_category,
downloader=downloader)
if download_id:
# 下载成功
@@ -780,7 +775,6 @@ class DownloadChain(ChainBase):
source=source,
userid=userid,
username=username,
media_category=media_category,
downloader=downloader
)
if not download_id:
@@ -866,7 +860,8 @@ class DownloadChain(ChainBase):
# 补充媒体信息
mediainfo: MediaInfo = self.recognize_media(mtype=mediainfo.type,
tmdbid=mediainfo.tmdb_id,
doubanid=mediainfo.douban_id)
doubanid=mediainfo.douban_id,
episode_group=mediainfo.episode_group)
if not mediainfo:
logger.error(f"媒体信息识别失败!")
return False, {}

View File

@@ -42,13 +42,13 @@ class MediaChain(ChainBase, metaclass=Singleton):
"""
return self.run_module("metadata_nfo", meta=meta, mediainfo=mediainfo, season=season, episode=episode)
def recognize_by_meta(self, metainfo: MetaBase) -> Optional[MediaInfo]:
def recognize_by_meta(self, metainfo: MetaBase, episode_group: Optional[str] = None) -> Optional[MediaInfo]:
"""
根据主副标题识别媒体信息
"""
title = metainfo.title
# 识别媒体信息
mediainfo: MediaInfo = self.recognize_media(meta=metainfo)
mediainfo: MediaInfo = self.recognize_media(meta=metainfo, episode_group=episode_group)
if not mediainfo:
# 尝试使用辅助识别,如果有注册响应事件的话
if eventmanager.check(ChainEventType.NameRecognize):
@@ -112,7 +112,7 @@ class MediaChain(ChainBase, metaclass=Singleton):
# 重新识别
return self.recognize_media(meta=org_meta)
def recognize_by_path(self, path: str) -> Optional[Context]:
def recognize_by_path(self, path: str, episode_group: Optional[str] = None) -> Optional[Context]:
"""
根据文件路径识别媒体信息
"""
@@ -121,7 +121,7 @@ class MediaChain(ChainBase, metaclass=Singleton):
# 元数据
file_meta = MetaInfoPath(file_path)
# 识别媒体信息
mediainfo = self.recognize_media(meta=file_meta)
mediainfo = self.recognize_media(meta=file_meta, episode_group=episode_group)
if not mediainfo:
# 尝试使用辅助识别,如果有注册响应事件的话
if eventmanager.check(ChainEventType.NameRecognize):
@@ -474,7 +474,8 @@ class MediaChain(ChainBase, metaclass=Singleton):
if not file_meta.begin_episode:
logger.warn(f"{filepath.name} 无法识别文件集数!")
return
file_mediainfo = self.recognize_media(meta=file_meta, tmdbid=mediainfo.tmdb_id)
file_mediainfo = self.recognize_media(meta=file_meta, tmdbid=mediainfo.tmdb_id,
episode_group=mediainfo.episode_group)
if not file_mediainfo:
logger.warn(f"{filepath.name} 无法识别文件媒体信息!")
return
@@ -483,7 +484,8 @@ class MediaChain(ChainBase, metaclass=Singleton):
if overwrite or not self.storagechain.get_file_item(storage=fileitem.storage, path=nfo_path):
# 获取集的nfo文件
episode_nfo = self.metadata_nfo(meta=file_meta, mediainfo=file_mediainfo,
season=file_meta.begin_season, episode=file_meta.begin_episode)
season=file_meta.begin_season,
episode=file_meta.begin_episode)
if episode_nfo:
# 保存或上传nfo文件到上级目录
if not parent:

View File

@@ -36,7 +36,7 @@ class SearchChain(ChainBase):
def search_by_id(self, tmdbid: Optional[int] = None, doubanid: Optional[str] = None,
mtype: MediaType = None, area: Optional[str] = "title", season: Optional[int] = None,
sites: List[int] = None) -> List[Context]:
sites: List[int] = None, cache_local: bool = False) -> List[Context]:
"""
根据TMDBID/豆瓣ID搜索资源精确匹配不过滤本地存在的资源
:param tmdbid: TMDB ID
@@ -45,6 +45,7 @@ class SearchChain(ChainBase):
:param area: 搜索范围title or imdbid
:param season: 季数
:param sites: 站点ID列表
:param cache_local: 是否缓存到本地
"""
mediainfo = self.recognize_media(tmdbid=tmdbid, doubanid=doubanid, mtype=mtype)
if not mediainfo:
@@ -59,12 +60,12 @@ class SearchChain(ChainBase):
}
results = self.process(mediainfo=mediainfo, sites=sites, area=area, no_exists=no_exists)
# 保存到本地文件
bytes_results = pickle.dumps(results)
self.save_cache(bytes_results, self.__result_temp_file)
if cache_local:
self.save_cache(pickle.dumps(results), self.__result_temp_file)
return results
def search_by_title(self, title: str, page: Optional[int] = 0,
sites: List[int] = None, cache_local: Optional[bool] = True) -> List[Context]:
sites: List[int] = None, cache_local: Optional[bool] = False) -> List[Context]:
"""
根据标题搜索资源,不识别不过滤,直接返回站点内容
:param title: 标题,为空时返回所有站点首页内容
@@ -86,8 +87,7 @@ class SearchChain(ChainBase):
torrent_info=torrent) for torrent in torrents]
# 保存到本地文件
if cache_local:
bytes_results = pickle.dumps(contexts)
self.save_cache(bytes_results, self.__result_temp_file)
self.save_cache(pickle.dumps(contexts), self.__result_temp_file)
return contexts
def last_search_results(self) -> List[Context]:
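The diff makes local result caching opt-in via `cache_local`. The pickle save/load round trip can be sketched independently (the cache path below is a hypothetical stand-in for `__result_temp_file`, and pickle is only appropriate here because the data is written and read by the same trusted process):

```python
import os
import pickle
import tempfile

CACHE_FILE = os.path.join(tempfile.gettempdir(), "search_results.pkl")  # hypothetical path

def save_results(results, cache_local=False):
    # persist only when the caller explicitly opts in
    if cache_local:
        with open(CACHE_FILE, "wb") as f:
            f.write(pickle.dumps(results))
    return results

def last_search_results():
    # return the most recently cached results, if any
    if not os.path.exists(CACHE_FILE):
        return []
    with open(CACHE_FILE, "rb") as f:
        return pickle.loads(f.read())

save_results(["torrent-1", "torrent-2"], cache_local=True)
print(last_search_results())  # ['torrent-1', 'torrent-2']
```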

View File

@@ -1,7 +1,6 @@
import base64
import re
from datetime import datetime
from time import time
from typing import Optional, Tuple, Union, Dict
from urllib.parse import urljoin
@@ -178,12 +177,9 @@ class SiteChain(ChainBase):
domain = StringUtils.get_url_domain(site.url)
url = f"https://api.{domain}/api/member/profile"
headers = {
"Content-Type": "application/json",
"User-Agent": user_agent,
"Accept": "application/json, text/plain, */*",
"Authorization": site.token,
"x-api-key": site.apikey,
"ts": str(int(time()))
}
res = RequestUtils(
headers=headers,
@@ -193,27 +189,10 @@ class SiteChain(ChainBase):
if res is None:
return False, "无法打开网站!"
if res.status_code == 200:
state = False
message = "鉴权已过期或无效"
user_info = res.json() or {}
if user_info.get("data"):
# 更新最后访问时间
del headers["x-api-key"]
res = RequestUtils(headers=headers,
timeout=site.timeout or 15,
proxies=settings.PROXY if site.proxy else None,
referer=f"{site.url}index"
).post_res(url=f"https://api.{domain}/api/member/updateLastBrowse")
state = True
message = "连接成功,但更新状态失败"
if res and res.status_code == 200:
update_info = res.json() or {}
if "code" in update_info and int(update_info["code"]) == 0:
message = "连接成功"
elif user_info.get("message"):
# 使用馒头的错误提示
message = user_info.get("message")
return state, message
return True, "连接成功"
return False, user_info.get("message", "鉴权已过期或无效")
else:
return False, f"错误:{res.status_code} {res.reason}"
@@ -318,7 +297,7 @@ class SiteChain(ChainBase):
"""
if StringUtils.get_url_domain(inx.get("domain")) == sub_domain:
return inx.get("domain")
for ext_d in inx.get("ext_domains"):
for ext_d in inx.get("ext_domains", []):
if StringUtils.get_url_domain(ext_d) == sub_domain:
return ext_d
return sub_domain
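The one-line fix above avoids iterating over `None` when a site index entry has no `ext_domains` key: `dict.get` with a list default keeps the loop safe. A simplified illustration, with the domain comparison reduced to an `endswith` check for brevity:

```python
def resolve_domain(inx, sub_domain):
    # fall back to an empty list when "ext_domains" is missing or absent
    for ext_d in inx.get("ext_domains", []):
        if ext_d.endswith(sub_domain):
            return ext_d
    return sub_domain

print(resolve_domain({}, "example.com"))                                    # example.com
print(resolve_domain({"ext_domains": ["alt.example.com"]}, "example.com"))  # alt.example.com
```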

View File

@@ -60,6 +60,7 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
doubanid: Optional[str] = None,
bangumiid: Optional[int] = None,
mediaid: Optional[str] = None,
episode_group: Optional[str] = None,
season: Optional[int] = None,
channel: MessageChannel = None,
source: Optional[str] = None,
@@ -117,7 +118,8 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
mediainfo = __get_event_meida(mediaid, metainfo)
else:
# 使用TMDBID识别
mediainfo = self.recognize_media(meta=metainfo, mtype=mtype, tmdbid=tmdbid, cache=False)
mediainfo = self.recognize_media(meta=metainfo, mtype=mtype, tmdbid=tmdbid,
episode_group=episode_group, cache=False)
else:
if doubanid:
# 豆瓣识别模式,不使用缓存
@@ -134,7 +136,7 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
# 使用名称识别兜底
if not mediainfo:
mediainfo = self.recognize_media(meta=metainfo)
mediainfo = self.recognize_media(meta=metainfo, episode_group=episode_group)
# 识别失败
if not mediainfo:
@@ -147,12 +149,13 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
season = 1
# 总集数
if not kwargs.get('total_episode'):
if not mediainfo.seasons:
if not mediainfo.seasons or episode_group:
# 补充媒体信息
mediainfo = self.recognize_media(mtype=mediainfo.type,
tmdbid=mediainfo.tmdb_id,
doubanid=mediainfo.douban_id,
bangumiid=mediainfo.bangumi_id,
episode_group=episode_group,
cache=False)
if not mediainfo:
logger.error(f"媒体信息识别失败!")
@@ -207,8 +210,9 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
'save_path': self.__get_default_subscribe_config(mediainfo.type, "save_path") if not kwargs.get(
"save_path") else kwargs.get("save_path"),
'filter_groups': self.__get_default_subscribe_config(mediainfo.type, "filter_groups") if not kwargs.get(
"filter_groups") else kwargs.get("filter_groups"),
"filter_groups") else kwargs.get("filter_groups")
})
# 操作数据库
sid, err_msg = self.subscribeoper.add(mediainfo=mediainfo, season=season, username=username, **kwargs)
if not sid:
logger.error(f'{mediainfo.title_year} {err_msg}')
@@ -323,6 +327,7 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
mediainfo: MediaInfo = self.recognize_media(meta=meta, mtype=meta.type,
tmdbid=subscribe.tmdbid,
doubanid=subscribe.doubanid,
episode_group=subscribe.episode_group,
cache=False)
if not mediainfo:
logger.warn(
@@ -383,6 +388,11 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
logger.info(
f'{subscribe.name} 正在洗版,{torrent_info.title} 优先级低于或等于已下载优先级')
continue
# 更新订阅自定义属性
if subscribe.media_category:
torrent_mediainfo.category = subscribe.media_category
if subscribe.episode_group:
torrent_mediainfo.episode_group = subscribe.episode_group
matched_contexts.append(context)
if not matched_contexts:
@@ -398,7 +408,6 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
userid=subscribe.username,
username=subscribe.username,
save_path=subscribe.save_path,
media_category=subscribe.media_category,
downloader=subscribe.downloader,
source=self.get_subscribe_source_keyword(subscribe)
)
@@ -574,6 +583,7 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
mediainfo: MediaInfo = self.recognize_media(meta=meta, mtype=meta.type,
tmdbid=subscribe.tmdbid,
doubanid=subscribe.doubanid,
episode_group=subscribe.episode_group,
cache=False)
if not mediainfo:
logger.warn(
@@ -603,9 +613,10 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
logger.debug(f'开始匹配站点:{domain},共缓存了 {len(contexts)} 个种子...')
for context in contexts:
# 提取信息
torrent_meta = copy.deepcopy(context.meta_info)
torrent_mediainfo = copy.deepcopy(context.media_info)
torrent_info = context.torrent_info
_context = copy.deepcopy(context)
torrent_meta = _context.meta_info
torrent_mediainfo = _context.media_info
torrent_info = _context.torrent_info
# 不在订阅站点范围的不处理
sub_sites = self.get_sub_sites(subscribe)
@@ -633,7 +644,8 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
if not torrent_mediainfo \
or (not torrent_mediainfo.tmdb_id and not torrent_mediainfo.douban_id):
# 重新识别媒体信息
torrent_mediainfo = self.recognize_media(meta=torrent_meta)
torrent_mediainfo = self.recognize_media(meta=torrent_meta,
episode_group=subscribe.episode_group)
if torrent_mediainfo:
# 更新种子缓存
context.media_info = torrent_mediainfo
@@ -736,7 +748,12 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
# 匹配成功
logger.info(f'{mediainfo.title_year} 匹配成功:{torrent_info.title}')
_match_context.append(context)
# 自定义属性
if subscribe.media_category:
torrent_mediainfo.category = subscribe.media_category
if subscribe.episode_group:
torrent_mediainfo.episode_group = subscribe.episode_group
_match_context.append(_context)
if not _match_context:
# 未匹配到资源
@@ -752,7 +769,6 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
userid=subscribe.username,
username=subscribe.username,
save_path=subscribe.save_path,
media_category=subscribe.media_category,
downloader=subscribe.downloader,
source=self.get_subscribe_source_keyword(subscribe)
)
@@ -793,6 +809,7 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
mediainfo: MediaInfo = self.recognize_media(meta=meta, mtype=meta.type,
tmdbid=subscribe.tmdbid,
doubanid=subscribe.doubanid,
episode_group=subscribe.episode_group,
cache=False)
if not mediainfo:
logger.warn(
@@ -1274,7 +1291,8 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
# 查询TMDB中的集信息
tmdb_episodes = self.tmdbchain.tmdb_episodes(
tmdbid=subscribe.tmdbid,
season=subscribe.season
season=subscribe.season,
episode_group=subscribe.episode_group
)
if tmdb_episodes:
for episode in tmdb_episodes:
@@ -1336,6 +1354,7 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
mediainfo: MediaInfo = self.recognize_media(meta=meta, mtype=meta.type,
tmdbid=subscribe.tmdbid,
doubanid=subscribe.doubanid,
episode_group=subscribe.episode_group,
cache=False)
if not mediainfo:
logger.warn(


@@ -70,13 +70,21 @@ class TmdbChain(ChainBase, metaclass=Singleton):
"""
return self.run_module("tmdb_seasons", tmdbid=tmdbid)
def tmdb_episodes(self, tmdbid: int, season: int) -> List[schemas.TmdbEpisode]:
def tmdb_group_seasons(self, group_id: str) -> List[schemas.TmdbSeason]:
"""
根据剧集组ID查询themoviedb所有季集信息
:param group_id: 剧集组ID
"""
return self.run_module("tmdb_group_seasons", group_id=group_id)
def tmdb_episodes(self, tmdbid: int, season: int, episode_group: Optional[str] = None) -> List[schemas.TmdbEpisode]:
"""
根据TMDBID查询某季的所有集信息
:param tmdbid: TMDBID
:param season: 季
:param episode_group: 剧集组
"""
return self.run_module("tmdb_episodes", tmdbid=tmdbid, season=season)
return self.run_module("tmdb_episodes", tmdbid=tmdbid, season=season, episode_group=episode_group)
def movie_similar(self, tmdbid: int) -> Optional[List[MediaInfo]]:
"""


@@ -623,7 +623,8 @@ class TransferChain(ChainBase, metaclass=Singleton):
# 下载记录中已存在识别信息
mediainfo: Optional[MediaInfo] = self.recognize_media(mtype=MediaType(download_history.type),
tmdbid=download_history.tmdbid,
doubanid=download_history.doubanid)
doubanid=download_history.doubanid,
episode_group=download_history.episode_group)
if mediainfo:
# 更新自定义媒体类别
if download_history.media_category:
@@ -681,7 +682,8 @@ class TransferChain(ChainBase, metaclass=Singleton):
season_num = 1
task.episodes_info = self.tmdbchain.tmdb_episodes(
tmdbid=task.mediainfo.tmdb_id,
season=season_num
season=season_num,
episode_group=task.mediainfo.episode_group
)
# 查询整理目标目录
@@ -798,7 +800,8 @@ class TransferChain(ChainBase, metaclass=Singleton):
# 按TMDBID识别
mediainfo = self.recognize_media(mtype=mtype,
tmdbid=downloadhis.tmdbid,
doubanid=downloadhis.doubanid)
doubanid=downloadhis.doubanid,
episode_group=downloadhis.episode_group)
if mediainfo:
# 补充图片
self.obtain_images(mediainfo)
@@ -1214,12 +1217,12 @@ class TransferChain(ChainBase, metaclass=Singleton):
# 查询媒体信息
if mtype and mediaid:
mediainfo = self.recognize_media(mtype=mtype, tmdbid=int(mediaid) if str(mediaid).isdigit() else None,
doubanid=mediaid)
doubanid=mediaid, episode_group=history.episode_group)
if mediainfo:
# 更新媒体图片
self.obtain_images(mediainfo=mediainfo)
else:
mediainfo = self.mediachain.recognize_by_path(str(src_path))
mediainfo = self.mediachain.recognize_by_path(str(src_path), episode_group=history.episode_group)
if not mediainfo:
return False, f"未识别到媒体信息,类型:{mtype.value},id:{mediaid}"
# 重新执行整理
@@ -1252,6 +1255,7 @@ class TransferChain(ChainBase, metaclass=Singleton):
doubanid: Optional[str] = None,
mtype: MediaType = None,
season: Optional[int] = None,
episode_group: Optional[str] = None,
transfer_type: Optional[str] = None,
epformat: EpisodeFormat = None,
min_filesize: Optional[int] = 0,
@@ -1269,6 +1273,7 @@ class TransferChain(ChainBase, metaclass=Singleton):
:param doubanid: 豆瓣ID
:param mtype: 媒体类型
:param season: 季度
:param episode_group: 剧集组
:param transfer_type: 整理类型
:param epformat: 剧集格式
:param min_filesize: 最小文件大小(MB)
@@ -1282,7 +1287,8 @@ class TransferChain(ChainBase, metaclass=Singleton):
if tmdbid or doubanid:
# 有输入TMDBID时单个识别
# 识别媒体信息
mediainfo: MediaInfo = self.mediachain.recognize_media(tmdbid=tmdbid, doubanid=doubanid, mtype=mtype)
mediainfo: MediaInfo = self.mediachain.recognize_media(tmdbid=tmdbid, doubanid=doubanid,
mtype=mtype, episode_group=episode_group)
if not mediainfo:
return False, f"媒体信息识别失败,tmdbid:{tmdbid},doubanid:{doubanid},type:{mtype.value}"
else:


@@ -212,7 +212,8 @@ class ConfigModel(BaseModel):
PLUGIN_MARKET: str = ("https://github.com/jxxghp/MoviePilot-Plugins,"
"https://github.com/thsrite/MoviePilot-Plugins,"
"https://github.com/honue/MoviePilot-Plugins,"
"https://github.com/InfinityPacer/MoviePilot-Plugins")
"https://github.com/InfinityPacer/MoviePilot-Plugins,"
"https://github.com/DDS-Derek/MoviePilot-Plugins")
# 插件安装数据共享
PLUGIN_STATISTIC_SHARE: bool = True
# 是否开启插件热加载


@@ -264,6 +264,10 @@ class MediaInfo:
next_episode_to_air: dict = field(default_factory=dict)
# 内容分级
content_rating: str = None
# 全部剧集组
episode_groups: List[dict] = field(default_factory=list)
# 剧集组
episode_group: str = None
def __post_init__(self):
# 设置媒体信息
@@ -454,6 +458,10 @@ class MediaInfo:
air_date = seainfo.get("air_date")
if air_date:
self.season_years[season] = air_date[:4]
# 剧集组
if info.get("episode_groups"):
self.episode_groups = info.pop("episode_groups").get("results") or []
# 海报
if info.get('poster_path'):
self.poster_path = f"https://{settings.TMDB_IMAGE_DOMAIN}/t/p/original{info.get('poster_path')}"
@@ -773,6 +781,7 @@ class MediaInfo:
self.spoken_languages = []
self.networks = []
self.next_episode_to_air = {}
self.episode_groups = []
@dataclass


@@ -15,32 +15,32 @@ class ReleaseGroupsMatcher(metaclass=Singleton):
"0ff": ['FF(?:(?:A|WE)B|CD|E(?:DU|B)|TV)'],
"1pt": [],
"52pt": [],
"audiences": ['Audies', 'AD(?:Audio|E(?:|book)|Music|Web)'],
"audiences": ['Audies', 'AD(?:Audio|E(?:book|)|Music|Web)'],
"azusa": [],
"beitai": ['BeiTai'],
"btschool": ['Bts(?:CHOOL|HD|PAD|TV)', 'Zone'],
"carpt": ['CarPT'],
"chdbits": ['CHD(?:|Bits|PAD|(?:|HK)TV|WEB)', 'StBOX', 'OneHD', 'Lee', 'xiaopie'],
"chdbits": ['CHD(?:Bits|PAD|(?:|HK)TV|WEB|)', 'StBOX', 'OneHD', 'Lee', 'xiaopie'],
"discfan": [],
"dragonhd": [],
"eastgame": ['(?:(?:iNT|(?:HALFC|Mini(?:S|H|FH)D))-|)TLF'],
"filelist": [],
"gainbound": ['(?:DG|GBWE)B'],
"hares": ['Hares(?:|(?:M|T)V|Web)'],
"hares": ['Hares(?:(?:M|T)V|Web|)'],
"hd4fans": [],
"hdarea": ['HDA(?:pad|rea|TV)', 'EPiC'],
"hdatmos": [],
"hdbd": [],
"hdchina": ['HDC(?:|hina|TV)', 'k9611', 'tudou', 'iHD'],
"hdchina": ['HDC(?:hina|TV|)', 'k9611', 'tudou', 'iHD'],
"hddolby": ['D(?:ream|BTV)', '(?:HD|QHstudI)o'],
"hdfans": ['beAst(?:|TV)'],
"hdhome": ['HDH(?:|ome|Pad|TV|WEB)'],
"hdpt": ['HDPT(?:|Web)'],
"hdsky": ['HDS(?:|ky|TV|Pad|WEB)', 'AQLJ'],
"hdfans": ['beAst(?:TV|)'],
"hdhome": ['HDH(?:ome|Pad|TV|WEB|)'],
"hdpt": ['HDPT(?:Web|)'],
"hdsky": ['HDS(?:ky|TV|Pad|WEB|)', 'AQLJ'],
"hdtime": [],
"HDU": [],
"hdvideo": [],
"hdzone": ['HDZ(?:|one)'],
"hdzone": ['HDZ(?:one|)'],
"hhanclub": ['HHWEB'],
"hitpt": [],
"htpt": ['HTPT'],
@@ -48,34 +48,36 @@ class ReleaseGroupsMatcher(metaclass=Singleton):
"joyhd": [],
"keepfrds": ['FRDS', 'Yumi', 'cXcY'],
"lemonhd": ['L(?:eague(?:(?:C|H)D|(?:M|T)V|NF|WEB)|HD)', 'i18n', 'CiNT'],
"mteam": ['MTeam(?:|TV)', 'MPAD'],
"mteam": ['MTeam(?:TV|)', 'MPAD'],
"nanyangpt": [],
"nicept": [],
"oshen": [],
"ourbits": ['Our(?:Bits|TV)', 'FLTTH', 'Ao', 'PbK', 'MGs', 'iLove(?:HD|TV)'],
"piggo": ['PiGo(?:NF|(?:H|WE)B)'],
"ptchina": [],
"pterclub": ['PTer(?:|DIY|Game|(?:M|T)V|WEB)'],
"pthome": ['PTH(?:|Audio|eBook|music|ome|tv|WEB)'],
"pterclub": ['PTer(?:DIY|Game|(?:M|T)V|WEB|)'],
"pthome": ['PTH(?:Audio|eBook|music|ome|tv|WEB|)'],
"ptmsg": [],
"ptsbao": ['PTsbao', 'OPS', 'F(?:Fans(?:AIeNcE|BD|D(?:VD|IY)|TV|WEB)|HDMv)', 'SGXT'],
"pttime": [],
"putao": ['PuTao'],
"soulvoice": [],
"springsunday": ['CMCT(?:|V)'],
"sharkpt": ['Shark(?:|WEB|DIY|TV|MV)'],
"springsunday": ['CMCT(?:V|)'],
"sharkpt": ['Shark(?:WEB|DIY|TV|MV|)'],
"tccf": [],
"tjupt": ['TJUPT'],
"totheglory": ['TTG', 'WiKi', 'NGB', 'DoA', '(?:ARi|ExRE)N'],
"U2": [],
"ultrahd": [],
"others": ['B(?:MDru|eyondHD|TN)', 'C(?:fandora|trlhd|MRG)', 'DON', 'EVO', 'FLUX', 'HONE(?:|yG)',
'N(?:oGroup|T(?:b|G))', 'PandaMoon', 'SMURF', 'T(?:EPES|aengoo|rollHD )', 'UBWEB'],
"others": ['B(?:MDru|eyondHD|TN)', 'C(?:fandora|trlhd|MRG)', 'DON', 'EVO', 'FLUX', 'HONE(?:yG|)',
'N(?:oGroup|T(?:b|G))', 'PandaMoon', 'SMURF', 'T(?:EPES|aengoo|rollHD )',],
"anime": ['ANi', 'HYSUB', 'KTXP', 'LoliHouse', 'MCE', 'Nekomoe kissaten', 'SweetSub', 'MingY',
'(?:Lilith|NC)-Raws', '织梦字幕组', '枫叶字幕组', '猎户手抄部', '喵萌奶茶屋', '漫猫字幕社',
'霜庭云花Sub', '北宇治字幕组', '氢气烤肉架', '云歌字幕组', '萌樱字幕组', '极影字幕社',
'悠哈璃羽字幕社',
'❀拨雪寻春❀', '沸羊羊(?:制作|字幕组)', '(?:桜|樱)都字幕组']
'❀拨雪寻春❀', '沸羊羊(?:制作|字幕组)', '(?:桜|樱)都字幕组'],
"forge": ['FROG(?:E|Web|)'],
"ubits": ['UB(?:its|WEB|TV)'],
}
def __init__(self):
@@ -97,13 +99,15 @@ class ReleaseGroupsMatcher(metaclass=Singleton):
if not groups:
# 自定义组
custom_release_groups = self.systemconfig.get(SystemConfigKey.CustomReleaseGroups)
if isinstance(custom_release_groups, list):
custom_release_groups = list(filter(None, custom_release_groups))
if custom_release_groups:
custom_release_groups_str = '|'.join(custom_release_groups)
groups = f"{self.__release_groups}|{custom_release_groups_str}"
else:
groups = self.__release_groups
title = f"{title} "
groups_re = re.compile(r"(?<=[-@\[£【&])(?:%s)(?=[@.\s\]\[】&])" % groups, re.I)
groups_re = re.compile(r"(?<=[-@\[£【&])(?:%s)(?=[@.\s\S\]\[】&])" % groups, re.I)
# 处理一个制作组识别多次的情况,保留顺序
unique_groups = []
for item in re.findall(groups_re, title):

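The delimiter-anchored matching above can be sketched in isolation as follows. The group alternation and sample title here are illustrative only, not taken from the real site tables; the pattern mirrors the lookbehind/lookahead pair used in the code.

```python
import re

# Minimal sketch: a group name only counts when preceded by one of
# - @ [ £ 【 & and followed by a closing delimiter, dot or whitespace.
groups = "CHDBits|FRDS|HDSky"  # illustrative alternation
title = "Some.Movie.2023.1080p.BluRay.x264-CHDBits[example] "
groups_re = re.compile(r"(?<=[-@\[£【&])(?:%s)(?=[@.\s\]\[】&])" % groups, re.I)

# Keep the first occurrence only, preserving order, as the original loop does
unique_groups = []
for item in re.findall(groups_re, title):
    if item not in unique_groups:
        unique_groups.append(item)
print(unique_groups)  # ['CHDBits']
```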

@@ -113,6 +113,7 @@ class DownloadHistoryOper(DbOper):
season: Optional[str] = None, episode: Optional[str] = None, tmdbid=None) -> List[DownloadHistory]:
"""
按类型、标题、年份、季集查询下载记录
tmdbid + mtype 或 title + year
"""
return DownloadHistory.get_last_by(db=self._db,
mtype=mtype,


@@ -52,6 +52,8 @@ class DownloadHistory(Base):
note = Column(JSON)
# 自定义媒体类别
media_category = Column(String)
# 剧集组
episode_group = Column(String)
@staticmethod
@db_query
@@ -83,45 +85,54 @@ class DownloadHistory(Base):
year: Optional[str] = None, season: Optional[str] = None,
episode: Optional[str] = None, tmdbid: Optional[int] = None):
"""
据tmdbid、season、season_episode查询转移记录
据tmdbid、season、season_episode查询下载记录
tmdbid + mtype 或 title + year
"""
result = None
if tmdbid and not season and not episode:
result = db.query(DownloadHistory).filter(DownloadHistory.tmdbid == tmdbid).order_by(
# TMDBID + 类型
if tmdbid and mtype:
# 电视剧某季某集
if season and episode:
result = db.query(DownloadHistory).filter(DownloadHistory.tmdbid == tmdbid,
DownloadHistory.type == mtype,
DownloadHistory.seasons == season,
DownloadHistory.episodes == episode).order_by(
DownloadHistory.id.desc()).all()
if tmdbid and season and not episode:
result = db.query(DownloadHistory).filter(DownloadHistory.tmdbid == tmdbid,
DownloadHistory.seasons == season).order_by(
# 电视剧某季
elif season:
result = db.query(DownloadHistory).filter(DownloadHistory.tmdbid == tmdbid,
DownloadHistory.type == mtype,
DownloadHistory.seasons == season).order_by(
DownloadHistory.id.desc()).all()
if tmdbid and season and episode:
result = db.query(DownloadHistory).filter(DownloadHistory.tmdbid == tmdbid,
DownloadHistory.seasons == season,
DownloadHistory.episodes == episode).order_by(
else:
# 电视剧所有季集/电影
result = db.query(DownloadHistory).filter(DownloadHistory.tmdbid == tmdbid,
DownloadHistory.type == mtype).order_by(
DownloadHistory.id.desc()).all()
# 电视剧所有季集|电影
if not season and not episode:
result = db.query(DownloadHistory).filter(DownloadHistory.type == mtype,
DownloadHistory.title == title,
DownloadHistory.year == year).order_by(
# 标题 + 年份
elif title and year:
# 电视剧某季某集
if season and episode:
result = db.query(DownloadHistory).filter(DownloadHistory.title == title,
DownloadHistory.year == year,
DownloadHistory.seasons == season,
DownloadHistory.episodes == episode).order_by(
DownloadHistory.id.desc()).all()
# 电视剧某季
if season and not episode:
result = db.query(DownloadHistory).filter(DownloadHistory.type == mtype,
DownloadHistory.title == title,
DownloadHistory.year == year,
DownloadHistory.seasons == season).order_by(
# 电视剧某季
elif season:
result = db.query(DownloadHistory).filter(DownloadHistory.title == title,
DownloadHistory.year == year,
DownloadHistory.seasons == season).order_by(
DownloadHistory.id.desc()).all()
# 电视剧某季某集
if season and episode:
result = db.query(DownloadHistory).filter(DownloadHistory.type == mtype,
DownloadHistory.title == title,
DownloadHistory.year == year,
DownloadHistory.seasons == season,
DownloadHistory.episodes == episode).order_by(
else:
# 电视剧所有季集/电影
result = db.query(DownloadHistory).filter(DownloadHistory.title == title,
DownloadHistory.year == year).order_by(
DownloadHistory.id.desc()).all()
if result:
return list(result)
return []
@staticmethod
@db_query

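The lookup priority in the reworked `get_last_by` — tmdbid + mtype preferred, title + year as fallback, optionally narrowed by season and episode — can be paraphrased as a plain-Python sketch. Dicts stand in for `DownloadHistory` rows here; the field names are assumptions mirroring the columns above.

```python
def last_by(records, mtype=None, title=None, year=None,
            season=None, episode=None, tmdbid=None):
    """Newest-first lookup: tmdbid+mtype preferred, else title+year,
    optionally narrowed by season and episode."""
    def match(r):
        if tmdbid and mtype:
            if r.get("tmdbid") != tmdbid or r.get("type") != mtype:
                return False
        elif title and year:
            if r.get("title") != title or r.get("year") != year:
                return False
        else:
            return False
        if season and r.get("seasons") != season:
            return False
        if episode and r.get("episodes") != episode:
            return False
        return True
    # reversed() approximates ORDER BY id DESC
    return [r for r in reversed(records) if match(r)]

history = [
    {"tmdbid": 1396, "type": "电视剧", "seasons": "S01", "episodes": "E01"},
    {"tmdbid": 1396, "type": "电视剧", "seasons": "S01", "episodes": "E02"},
]
print(last_by(history, mtype="电视剧", tmdbid=1396, season="S01")[0]["episodes"])  # E02
```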

@@ -84,6 +84,8 @@ class Subscribe(Base):
media_category = Column(String)
# 过滤规则组
filter_groups = Column(JSON, default=list)
# 选择的剧集组
episode_group = Column(String)
@staticmethod
@db_query


@@ -69,6 +69,8 @@ class SubscribeHistory(Base):
media_category = Column(String)
# 过滤规则组
filter_groups = Column(JSON, default=list)
# 剧集组
episode_group = Column(String)
@staticmethod
@db_query


@@ -56,6 +56,8 @@ class TransferHistory(Base):
date = Column(String, index=True)
# 文件清单以JSON存储
files = Column(JSON, default=list)
# 剧集组
episode_group = Column(String)
@staticmethod
@db_query


@@ -20,21 +20,24 @@ class SubscribeOper(DbOper):
tmdbid=mediainfo.tmdb_id,
doubanid=mediainfo.douban_id,
season=kwargs.get('season'))
kwargs.update({
"name": mediainfo.title,
"year": mediainfo.year,
"type": mediainfo.type.value,
"tmdbid": mediainfo.tmdb_id,
"imdbid": mediainfo.imdb_id,
"tvdbid": mediainfo.tvdb_id,
"doubanid": mediainfo.douban_id,
"bangumiid": mediainfo.bangumi_id,
"episode_group": mediainfo.episode_group,
"poster": mediainfo.get_poster_image(),
"backdrop": mediainfo.get_backdrop_image(),
"vote": mediainfo.vote_average,
"description": mediainfo.overview,
"date": time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
})
if not subscribe:
subscribe = Subscribe(name=mediainfo.title,
year=mediainfo.year,
type=mediainfo.type.value,
tmdbid=mediainfo.tmdb_id,
imdbid=mediainfo.imdb_id,
tvdbid=mediainfo.tvdb_id,
doubanid=mediainfo.douban_id,
bangumiid=mediainfo.bangumi_id,
poster=mediainfo.get_poster_image(),
backdrop=mediainfo.get_backdrop_image(),
vote=mediainfo.vote_average,
description=mediainfo.overview,
date=time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()),
**kwargs)
subscribe = Subscribe(**kwargs)
subscribe.create(self._db)
# 查询订阅
subscribe = Subscribe.exists(self._db,


@@ -177,6 +177,7 @@ class TransferHistoryOper(DbOper):
image=mediainfo.get_poster_image(),
downloader=downloader,
download_hash=download_hash,
episode_group=mediainfo.episode_group,
status=0,
errmsg=transferinfo.message or '未知错误',
files=transferinfo.file_list


@@ -5,6 +5,7 @@ from app.core.cache import cached, cache_backend
from app.core.config import settings
from app.db.subscribe_oper import SubscribeOper
from app.db.systemconfig_oper import SystemConfigOper
from app.log import logger
from app.schemas.types import SystemConfigKey
from app.utils.http import RequestUtils
from app.utils.singleton import Singleton
@@ -32,13 +33,30 @@ class SubscribeHelper(metaclass=Singleton):
_shares_cache_region = "subscribe_share"
_github_user = None
_share_user_id = None
_admin_users = [
"jxxghp",
"thsrite",
"InfinityPacer",
"DDSRem",
"Aqr-K",
"Putarku",
"4Nest",
"xyswordzoro",
"wikrin"
]
def __init__(self):
self.systemconfig = SystemConfigOper()
self.share_user_id = SystemUtils.generate_user_unique_id()
if settings.SUBSCRIBE_STATISTIC_SHARE:
if not self.systemconfig.get(SystemConfigKey.SubscribeReport):
if self.sub_report():
self.systemconfig.set(SystemConfigKey.SubscribeReport, "1")
self.get_user_uuid()
self.get_github_user()
@cached(maxsize=20, ttl=1800)
def get_statistic(self, stype: str, page: Optional[int] = 1, count: Optional[int] = 30) -> List[dict]:
@@ -135,7 +153,7 @@ class SubscribeHelper(metaclass=Singleton):
"share_title": share_title,
"share_comment": share_comment,
"share_user": share_user,
"share_uid": self.share_user_id,
"share_uid": self._share_user_id,
**subscribe_dict
})
if res is None:
@@ -155,7 +173,7 @@ class SubscribeHelper(metaclass=Singleton):
return False, "当前没有开启订阅数据共享功能"
res = RequestUtils(proxies=settings.PROXY,
timeout=5).delete_res(f"{self._sub_share}/{share_id}",
params={"share_uid": self.share_user_id})
params={"share_uid": self._share_user_id})
if res is None:
return False, "连接MoviePilot服务器失败"
if res.ok:
@@ -196,3 +214,35 @@ class SubscribeHelper(metaclass=Singleton):
if res and res.status_code == 200:
return res.json()
return []
def get_user_uuid(self) -> str:
"""
获取用户uuid
"""
if not self._share_user_id:
self._share_user_id = SystemUtils.generate_user_unique_id()
logger.info(f"当前用户UUID: {self._share_user_id}")
return self._share_user_id
def get_github_user(self) -> str:
"""
获取github用户
"""
if self._github_user is None and settings.GITHUB_HEADERS:
res = RequestUtils(headers=settings.GITHUB_HEADERS,
proxies=settings.PROXY,
timeout=15).get_res(f"https://api.github.com/user")
if res:
self._github_user = res.json().get("login")
logger.info(f"当前Github用户: {self._github_user}")
return self._github_user
def is_admin_user(self) -> bool:
"""
判断是否是管理员
"""
if not self._github_user:
return False
if self._github_user in self._admin_users:
return True
return False


@@ -75,8 +75,8 @@ class DoubanModule(_ModuleBase):
def recognize_media(self, meta: MetaBase = None,
mtype: MediaType = None,
doubanid: str = None,
cache: bool = True,
doubanid: Optional[str] = None,
cache: Optional[bool] = True,
**kwargs) -> Optional[MediaInfo]:
"""
识别媒体信息


@@ -1031,6 +1031,8 @@ class Emby:
eventItem.image_url = self.get_remote_image_by_id(item_id=eventItem.item_id,
image_type="Backdrop")
eventItem.json_object = message
return eventItem
def get_data(self, url: str) -> Optional[Response]:


@@ -323,6 +323,8 @@ class U115Pan(StorageBase, metaclass=Singleton):
cid = '0'
else:
cid = fileitem.fileid
if not cid:
cid = self._path_to_id(fileitem.path)
items = []
offset = 0
@@ -410,7 +412,7 @@ class U115Pan(StorageBase, metaclass=Singleton):
return oss2.utils.b64encode_as_string(json.dumps(cb).strip())
target_name = new_name or local_path.name
target_path = str(Path(target_dir.path) / target_name)
target_path = Path(target_dir.path) / target_name
# 计算文件特征值
file_size = local_path.stat().st_size
file_sha1 = self._calc_sha1(local_path)
@@ -441,7 +443,6 @@ class U115Pan(StorageBase, metaclass=Singleton):
# 结果
init_result = init_resp.get("data")
logger.debug(f"【115】上传 Step 1 初始化结果: {init_result}")
file_id = init_result.get("file_id")
# 回调信息
bucket_name = init_result.get("bucket")
object_name = init_result.get("object")
@@ -486,27 +487,13 @@ class U115Pan(StorageBase, metaclass=Singleton):
bucket_name = init_result.get("bucket")
if not object_name:
object_name = init_result.get("object")
if not file_id:
file_id = init_result.get("file_id")
if not callback:
callback = init_result.get("callback")
# Step 3: 秒传
if init_result.get("status") == 2:
logger.info(f"【115】{target_name} 秒传成功")
return schemas.FileItem(
storage=self.schema.value,
fileid=str(file_id),
parent_fileid=target_cid,
path=target_path,
name=target_name,
basename=Path(target_name).stem,
extension=Path(target_name).suffix[1:],
size=file_size,
type="file",
pickcode=pick_code,
modify_time=int(time.time())
)
return self.get_item(target_path)
# Step 4: 获取上传凭证
token_resp = self._request_api(
@@ -617,19 +604,7 @@ class U115Pan(StorageBase, metaclass=Singleton):
logger.error(f"【115】{target_name} 上传失败: {e.status}, 错误码: {e.code}, 详情: {e.message}")
return None
# 返回结果
return schemas.FileItem(
storage=self.schema.value,
fileid=str(file_id),
parent_fileid = target_cid,
type="file",
path=target_path,
name=target_name,
basename=Path(target_name).stem,
extension=Path(target_name).suffix[1:],
size=file_size,
pickcode=pick_code,
modify_time=int(time.time())
)
return self.get_item(target_path)
def download(self, fileitem: schemas.FileItem, path: Path = None) -> Optional[Path]:
"""
@@ -722,11 +697,11 @@ class U115Pan(StorageBase, metaclass=Singleton):
return schemas.FileItem(
storage=self.schema.value,
fileid=str(resp["file_id"]),
path=str(path) + ("/" if resp["file_category"] == "1" else ""),
path=str(path) + ("/" if resp["file_category"] == "0" else ""),
type="file" if resp["file_category"] == "1" else "dir",
name=resp["file_name"],
basename=Path(resp["file_name"]).stem,
extension=Path(resp["file_name"]).suffix[1:],
extension=Path(resp["file_name"]).suffix[1:] if resp["file_category"] == "1" else None,
pickcode=resp["pick_code"],
size=StringUtils.num_filesize(resp['size']) if resp["file_category"] == "1" else None,
modify_time=resp["utime"]


@@ -191,7 +191,6 @@ class MTorrentSpider:
'id': torrent_id
},
'header': {
'Content-Type': 'application/json',
'User-Agent': f'{self._ua}',
'Accept': 'application/json, text/plain, */*',
'x-api-key': self._apikey


@@ -696,6 +696,8 @@ class Jellyfin:
# jellyfin 的 webhook 不含 item_path需要单独获取
eventItem.item_path = self.get_item_path_by_id(eventItem.item_id)
eventItem.json_object = message
return eventItem
@staticmethod


@@ -703,6 +703,8 @@ class Plex:
eventItem.image_url = self.get_remote_image_by_id(item_id=eventItem.item_id,
image_type="Backdrop")
eventItem.json_object = message
return eventItem
def get_plex(self):


@@ -1,3 +1,4 @@
import re
from typing import Optional, List, Tuple, Union, Dict
import cn2an
@@ -85,6 +86,7 @@ class TheMovieDbModule(_ModuleBase):
def recognize_media(self, meta: MetaBase = None,
mtype: MediaType = None,
tmdbid: Optional[int] = None,
episode_group: Optional[str] = None,
cache: Optional[bool] = True,
**kwargs) -> Optional[MediaInfo]:
"""
@@ -92,6 +94,7 @@ class TheMovieDbModule(_ModuleBase):
:param meta: 识别的元数据
:param mtype: 识别的媒体类型与tmdbid配套
:param tmdbid: tmdbid
:param episode_group: 剧集组
:param cache: 是否使用缓存
:return: 识别的媒体信息,包括剧集信息
"""
@@ -116,6 +119,11 @@ class TheMovieDbModule(_ModuleBase):
meta.tmdbid = tmdbid
cache_info = self.cache.get(meta)
# 查询剧集组
group_seasons = []
if episode_group:
group_seasons = self.tmdb.get_tv_group_seasons(episode_group)
# 识别匹配
if not cache_info or not cache:
info = None
@@ -143,7 +151,8 @@ class TheMovieDbModule(_ModuleBase):
year=meta.year,
mtype=meta.type,
season_year=meta.year,
season_number=meta.begin_season)
season_number=meta.begin_season,
group_seasons=group_seasons)
if not info:
# 去掉年份再查一次
info = self.tmdb.match(name=name,
@@ -157,7 +166,8 @@ class TheMovieDbModule(_ModuleBase):
if not info:
info = self.tmdb.match(name=name,
year=meta.year,
mtype=MediaType.TV)
mtype=MediaType.TV,
group_seasons=group_seasons)
if not info:
# 去掉年份和类型再查一次
info = self.tmdb.match_multi(name=name)
@@ -207,11 +217,61 @@ class TheMovieDbModule(_ModuleBase):
logger.info(f"{tmdbid} TMDB识别结果{mediainfo.type.value} "
f"{mediainfo.title_year}")
# 补充剧集年份
if mediainfo.type == MediaType.TV:
episode_years = self.tmdb.get_tv_episode_years(info.get("id"))
if episode_years:
mediainfo.season_years = episode_years
# 使用剧集组的集信息和年份
if mediainfo.type == MediaType.TV and mediainfo.episode_groups:
if group_seasons:
# 指定剧集组时
seasons = {}
season_info = []
season_years = {}
for group_season in group_seasons:
# 季
season = group_season.get("order")
# 集列表
episodes = group_season.get("episodes")
if not episodes:
continue
seasons[season] = [ep.get("episode_number") for ep in episodes]
season_info.append(group_season)
# 当前季第一季时间
first_date = episodes[0].get("air_date")
if re.match(r"^\d{4}-\d{2}-\d{2}$", first_date):
season_years[season] = str(first_date).split("-")[0]
# 每季集清单
if seasons:
mediainfo.seasons = seasons
mediainfo.number_of_seasons = len(seasons)
# 每季集详情
if season_info:
mediainfo.season_info = season_info
# 每季年份
if season_years:
mediainfo.season_years = season_years
# 所有剧集组
mediainfo.episode_group = episode_group
mediainfo.episode_groups = group_seasons
else:
# 每季年份
season_years = {}
for group in mediainfo.episode_groups:
if group.get('type') != 6:
# 只处理剧集部分
continue
group_episodes = self.tmdb.get_tv_group_seasons(group.get('id'))
if not group_episodes:
continue
for group_episode in group_episodes:
season = group_episode.get('order')
episodes = group_episode.get('episodes')
if not episodes:
continue
# 当前季第一季时间
first_date = episodes[0].get("air_date")
# 判断是不是日期格式
if re.match(r"^\d{4}-\d{2}-\d{2}$", first_date):
season_years[season] = str(first_date).split("-")[0]
if season_years:
mediainfo.season_years = season_years
return mediainfo
else:
logger.info(f"{meta.name if meta else tmdbid} 未匹配到TMDB媒体信息")
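How the hunk above derives per-season episode lists and years from episode-group data can be sketched with made-up data. The dict shapes follow TMDB's episode-group payload (`order`, `episodes`, `air_date`); the sample values are invented.

```python
import re

group_seasons = [
    {"order": 1, "episodes": [
        {"episode_number": 1, "air_date": "2019-01-10"},
        {"episode_number": 2, "air_date": "2019-01-17"},
    ]},
    {"order": 2, "episodes": [
        {"episode_number": 1, "air_date": "2020-03-05"},
    ]},
]

seasons, season_years = {}, {}
for gs in group_seasons:
    episodes = gs.get("episodes")
    if not episodes:
        continue
    # Episode number list per group season
    seasons[gs["order"]] = [ep["episode_number"] for ep in episodes]
    # Year of the season's first episode, if air_date is well-formed
    first_date = episodes[0].get("air_date") or ""
    if re.match(r"^\d{4}-\d{2}-\d{2}$", first_date):
        season_years[gs["order"]] = first_date.split("-")[0]

print(seasons)       # {1: [1, 2], 2: [1]}
print(season_years)  # {1: '2019', 2: '2020'}
```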
@@ -428,16 +488,36 @@ class TheMovieDbModule(_ModuleBase):
tmdb_info = self.tmdb.get_info(tmdbid=tmdbid, mtype=MediaType.TV)
if not tmdb_info:
return []
return [schemas.TmdbSeason(**season)
for season in tmdb_info.get("seasons", []) if season.get("season_number")]
return [schemas.TmdbSeason(**sea)
for sea in tmdb_info.get("seasons", []) if sea.get("season_number")]
def tmdb_episodes(self, tmdbid: int, season: int) -> List[schemas.TmdbEpisode]:
def tmdb_group_seasons(self, group_id: str) -> List[schemas.TmdbSeason]:
"""
根据TMDBID查询某季的所有信信息
根据剧集组ID查询themoviedb所有季集信息
:param group_id: 剧集组ID
"""
group_seasons = self.tmdb.get_tv_group_seasons(group_id)
if not group_seasons:
return []
return [schemas.TmdbSeason(
season_number=sea.get("order"),
name=sea.get("name"),
episode_count=len(sea.get("episodes") or []),
air_date=sea.get("episodes")[0].get("air_date") if sea.get("episodes") else None,
) for sea in group_seasons]
def tmdb_episodes(self, tmdbid: int, season: int, episode_group: Optional[str] = None) -> List[schemas.TmdbEpisode]:
"""
根据TMDBID查询某季的所有集信息
:param tmdbid: TMDBID
:param season: 季
:param episode_group: 剧集组
"""
season_info = self.tmdb.get_tv_season_detail(tmdbid=tmdbid, season=season)
if episode_group:
season_info = self.tmdb.get_tv_group_detail(episode_group, season=season)
else:
season_info = self.tmdb.get_tv_season_detail(tmdbid=tmdbid, season=season)
if not season_info or not season_info.get("episodes"):
return []
return [schemas.TmdbEpisode(**episode) for episode in season_info.get("episodes")]


@@ -32,7 +32,10 @@ class TmdbScraper:
else:
if season is not None:
# 查询季信息
seasoninfo = self.tmdb.get_tv_season_detail(mediainfo.tmdb_id, season)
if mediainfo.episode_group:
seasoninfo = self.tmdb.get_tv_group_detail(mediainfo.episode_group, season=season)
else:
seasoninfo = self.tmdb.get_tv_season_detail(mediainfo.tmdb_id, season=season)
if episode:
# 集元数据文件
episodeinfo = self.__get_episode_detail(seasoninfo, meta.begin_episode)
@@ -61,7 +64,10 @@ class TmdbScraper:
# 只需要集的图片
if episode:
# 集的图片
seasoninfo = self.tmdb.get_tv_season_detail(mediainfo.tmdb_id, season)
if mediainfo.episode_group:
seasoninfo = self.tmdb.get_tv_group_detail(mediainfo.episode_group, season)
else:
seasoninfo = self.tmdb.get_tv_season_detail(mediainfo.tmdb_id, season)
if seasoninfo:
episodeinfo = self.__get_episode_detail(seasoninfo, episode)
if episodeinfo and episodeinfo.get("still_path"):


@@ -1,3 +1,4 @@
import re
import traceback
from typing import Optional, List
from urllib.parse import quote
@@ -187,7 +188,8 @@ class TmdbApi:
mtype: MediaType,
year: Optional[str] = None,
season_year: Optional[str] = None,
season_number: Optional[int] = None) -> Optional[dict]:
season_number: Optional[int] = None,
group_seasons: Optional[List[dict]] = None) -> Optional[dict]:
"""
搜索tmdb中的媒体信息匹配返回一条尽可能正确的信息
:param name: 检索的名称
@@ -195,6 +197,7 @@ class TmdbApi:
:param year: 年份,如要是季集需要是首播年份(first_air_date)
:param season_year: 当前季集年份
:param season_number: 季集,整数
:param group_seasons: 集数组信息
:return: TMDB的INFO同时会将mtype赋值到media_type中
"""
if not self.search:
@@ -222,7 +225,8 @@ class TmdbApi:
f"正在识别{mtype.value}{name}, 季集={season_number}, 季集年份={season_year} ...")
info = self.__search_tv_by_season(name,
season_year,
season_number)
season_number,
group_seasons)
if not info:
year_range = [year]
if year:
@@ -332,12 +336,14 @@ class TmdbApi:
return tv
return {}
def __search_tv_by_season(self, name: str, season_year: str, season_number: int) -> Optional[dict]:
def __search_tv_by_season(self, name: str, season_year: str, season_number: int,
group_seasons: Optional[List[dict]] = None) -> Optional[dict]:
"""
根据电视剧的名称和季的年份及序号匹配TMDB
:param name: 识别的文件名或者种子名
:param season_year: 季的年份
:param season_number: 季序号
:param group_seasons: 集数组信息
:return: 匹配的媒体信息
"""
@@ -345,12 +351,25 @@ class TmdbApi:
if not tv_info:
return False
try:
seasons = self.__get_tv_seasons(tv_info)
for season, season_info in seasons.items():
if season_info.get("air_date"):
if season_info.get("air_date")[0:4] == str(_season_year) \
and season == int(season_number):
return True
if group_seasons:
for group_season in group_seasons:
season = group_season.get('order')
if season != season_number:
continue
episodes = group_season.get('episodes')
if not episodes:
continue
first_date = episodes[0].get("air_date")
if re.match(r"^\d{4}-\d{2}-\d{2}$", first_date):
if str(_season_year) == str(first_date).split("-")[0]:
return True
else:
seasons = self.__get_tv_seasons(tv_info)
for season, season_info in seasons.items():
if season_info.get("air_date"):
if season_info.get("air_date")[0:4] == str(_season_year) \
and season == int(season_number):
return True
except Exception as e1:
logger.error(f"连接TMDB出错{e1}")
print(traceback.format_exc())
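The new group-season branch above boils down to comparing the requested year against the first episode's air date. A minimal sketch, with field names assumed from the TMDB episode-group payload:

```python
import re

def season_year_matches(group_season, season_number, season_year):
    # A group season matches when its order equals the requested season and
    # its first episode aired in the requested year (air_date is YYYY-MM-DD).
    if group_season.get("order") != season_number:
        return False
    episodes = group_season.get("episodes") or []
    if not episodes:
        return False
    first_date = episodes[0].get("air_date") or ""
    if re.match(r"^\d{4}-\d{2}-\d{2}$", first_date):
        return str(season_year) == first_date.split("-")[0]
    return False

print(season_year_matches({"order": 2, "episodes": [{"air_date": "2021-04-06"}]}, 2, 2021))  # True
```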
@@ -768,11 +787,11 @@ class TmdbApi:
def __get_movie_detail(self,
tmdbid: int,
append_to_response: Optional[str] = "images,"
"credits,"
"alternative_titles,"
"translations,"
"release_dates,"
"external_ids") -> Optional[dict]:
"credits,"
"alternative_titles,"
"translations,"
"release_dates,"
"external_ids") -> Optional[dict]:
"""
获取电影的详情
:param tmdbid: TMDB ID
@@ -881,11 +900,12 @@ class TmdbApi:
def __get_tv_detail(self,
tmdbid: int,
append_to_response: Optional[str] = "images,"
"credits,"
"alternative_titles,"
"translations,"
"content_ratings,"
"external_ids") -> Optional[dict]:
"credits,"
"alternative_titles,"
"translations,"
"content_ratings,"
"external_ids,"
"episode_groups") -> Optional[dict]:
"""
获取电视剧的详情
:param tmdbid: TMDB ID
@@ -1316,6 +1336,36 @@ class TmdbApi:
logger.error(str(e))
return []
def get_tv_group_seasons(self, group_id: str) -> List[dict]:
"""
获取电视剧剧集组季集列表
"""
if not self.tv:
return []
try:
logger.debug(f"正在获取剧集组:{group_id}...")
return self.tv.group_episodes(group_id) or []
except Exception as e:
logger.error(str(e))
return []
def get_tv_group_detail(self, group_id: str, season: int) -> dict:
"""
获取剧集组某个季的信息
"""
group_seasons = self.get_tv_group_seasons(group_id)
if not group_seasons:
return {}
for group_season in group_seasons:
if group_season.get('order') == season:
# 剧集组中每个季的episode_number从1开始
for i, e in enumerate(group_season.get('episodes', []), start=1):
e['episode_number'] = i
return group_season
return {}
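The renumbering step in `get_tv_group_detail` is worth isolating: inside an episode group, episode numbers restart from 1 per season (`order`), regardless of the episodes' original numbering. A standalone sketch of just that step:

```python
def renumber_group_season(group_season: dict) -> dict:
    # Episode numbers inside an episode group restart from 1 for each season
    for i, ep in enumerate(group_season.get("episodes", []), start=1):
        ep["episode_number"] = i
    return group_season

season = {"order": 1, "episodes": [{"name": "a"}, {"name": "b"}]}
renumber_group_season(season)
# episodes are now numbered 1 and 2 whatever their original numbers were
```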
def get_person_detail(self, person_id: int) -> dict:
"""
获取人物详情
@@ -1376,38 +1426,6 @@ class TmdbApi:
"""
self.tmdb.cache_clear()
def get_tv_episode_years(self, tv_id: int) -> dict:
"""
查询剧集组年份
"""
try:
episode_groups = self.tv.episode_groups(tv_id)
if not episode_groups:
return {}
episode_years = {}
for episode_group in episode_groups:
logger.debug(f"正在获取剧集组年份:{episode_group.get('id')}...")
if episode_group.get('type') != 6:
# 只处理剧集部分
continue
group_episodes = self.tv.group_episodes(episode_group.get('id'))
if not group_episodes:
continue
for group_episode in group_episodes:
order = group_episode.get('order')
episodes = group_episode.get('episodes')
if not episodes:
continue
# 当前季第一集时间
first_date = episodes[0].get("air_date")
if not first_date or len(str(first_date).split("-")) != 3:
continue
episode_years[order] = str(first_date).split("-")[0]
return episode_years
except Exception as e:
logger.error(str(e))
return {}
def close(self):
"""
关闭连接


@@ -62,7 +62,9 @@ class TrimeMediaModule(_ModuleBase, _MediaServerBase[TrimeMedia]):
server.reconnect()
def stop(self):
pass
for server in self.get_instances().values():
if server.is_authenticated():
server.disconnect()
def test(self) -> Optional[Tuple[bool, str]]:
"""
@@ -73,7 +75,7 @@ class TrimeMediaModule(_ModuleBase, _MediaServerBase[TrimeMedia]):
for name, server in self.get_instances().items():
if not server.is_configured():
return False, f"飞牛影视配置不完整:{name}"
if server.is_inactive() and server.reconnect() != True:
if server.is_inactive() and not server.reconnect():
return False, f"无法连接飞牛影视:{name}"
return True, ""


@@ -4,7 +4,7 @@ import random
import time
from dataclasses import dataclass
from enum import Enum
from typing import Optional, Union, List
from typing import List, Optional, Union
from app.core.config import settings
from app.log import logger
@@ -19,27 +19,27 @@ class User:
class Category(Enum):
Movie = "Movie"
MOVIE = "Movie"
TV = "TV"
Mix = "Mix"
Others = "Others"
MIX = "Mix"
OTHERS = "Others"
@classmethod
def _missing_(cls, value):
return cls.Others
return cls.OTHERS
class Type(Enum):
Movie = "Movie"
MOVIE = "Movie"
TV = "TV"
Season = "Season"
Episode = "Episode"
Video = "Video"
Directory = "Directory"
SEASON = "Season"
EPISODE = "Episode"
VIDEO = "Video"
DIRECTORY = "Directory"
@classmethod
def _missing_(cls, value):
return cls.Video
return cls.VIDEO
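The renamed `Category`/`Type` members keep the `_missing_` hook, which is what lets any unrecognized server value fall back to a safe member instead of raising `ValueError`. A minimal self-contained sketch mirroring the `Category` enum above:

```python
from enum import Enum

class Category(Enum):
    MOVIE = "Movie"
    TV = "TV"
    MIX = "Mix"
    OTHERS = "Others"

    @classmethod
    def _missing_(cls, value):
        # Any value the server sends that we don't know maps to OTHERS
        return cls.OTHERS

known = Category("Movie")    # -> Category.MOVIE
unknown = Category("Music")  # -> Category.OTHERS via _missing_
```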
@dataclass
@@ -60,6 +60,13 @@ class MediaDbSummary:
total: int = 0
@dataclass
class Version:
# 飞牛影视版本
frontend: Optional[str] = None
backend: Optional[str] = None
@dataclass
class Item:
guid: str
@@ -103,6 +110,7 @@ class Api:
"_apikey",
"_api_path",
"_request_utils",
"_version",
)
@property
@@ -117,13 +125,34 @@ class Api:
def apikey(self) -> str:
return self._apikey
@property
def version(self) -> Optional[Version]:
return self._version
def __init__(self, host: str, apikey: str):
self._api_path = "/v/api/v1"
"""
:param host: 飞牛服务端地址如http://127.0.0.1:5666/v
"""
self._api_path = "/api/v1"
self._host = host.rstrip("/")
self._apikey = apikey
self._token = None
self._token: Optional[str] = None
self._version: Optional[Version] = None
self._request_utils = RequestUtils(session=requests.Session())
def sys_version(self) -> Optional[Version]:
"""
飞牛影视版本号
"""
if (res := self.__request_api("/sys/version")) and res.success:
if res.data:
self._version = Version(
frontend=res.data.get("version"),
backend=res.data.get("mediasrvVersion"),
)
return self._version
return None
def login(self, username, password) -> Optional[str]:
"""
登录飞牛影视
@@ -131,14 +160,14 @@ class Api:
:return: 成功返回token 否则返回None
"""
if (
res := self.__request_api(
"/login",
data={
"username": username,
"password": password,
"app_name": "trimemedia-web",
},
)
res := self.__request_api(
"/login",
data={
"username": username,
"password": password,
"app_name": "trimemedia-web",
},
)
) and res.success:
self._token = res.data.get("token")
return self._token
@@ -250,7 +279,7 @@ class Api:
扫描指定媒体库
"""
if (
res := self.__request_api(f"/mdb/scan/{mdb.guid}", data={})
res := self.__request_api(f"/mdb/scan/{mdb.guid}", data={})
) and res.success:
if res.data:
return True
@@ -272,22 +301,22 @@ class Api:
return item
def item_list(
self,
guid: Optional[str] = None,
type=None,
exclude_grouped_video=True,
page=1,
page_size=22,
sort_by="create_time",
sort="DESC",
self,
guid: Optional[str] = None,
types=None,
exclude_grouped_video=True,
page=1,
page_size=22,
sort_by="create_time",
sort="DESC",
) -> Optional[list[Item]]:
"""
媒体列表
"""
if type is None:
type = [Type.Movie, Type.TV, Type.Directory, Type.Video]
if types is None:
types = [Type.MOVIE, Type.TV, Type.DIRECTORY, Type.VIDEO]
post = {
"tags": {"type": type} if type else {},
"tags": {"type": types} if types else {},
"sort_type": sort,
"sort_column": sort_by,
"page": page,
@@ -307,25 +336,48 @@ class Api:
搜索影片、演员
"""
if (
res := self.__request_api("/search/list", params={"q": keywords})
res := self.__request_api("/search/list", params={"q": keywords})
) and res.success:
return [self.__build_item(info) for info in res.data]
return None
def item(self, guid: str) -> Optional[Item]:
""" """
"""
查询媒体详情
"""
if (res := self.__request_api(f"/item/{guid}")) and res.success:
return self.__build_item(res.data)
return None
def del_item(self, guid: str, delete_file: bool) -> bool:
"""
删除媒体
:param delete_file: True删除媒体文件False仅从媒体库移除
"""
if (
res := self.__request_api(
f"/item/{guid}",
method="delete",
data={"delete_file": 1 if delete_file else 0, "media_guids": []},
)
) and res.success:
if res.data:
return True
return False
def season_list(self, tv_guid: str) -> Optional[list[Item]]:
""" """
"""
查询季列表
"""
if (res := self.__request_api(f"/season/list/{tv_guid}")) and res.success:
return [self.__build_item(info) for info in res.data]
return None
def episode_list(self, season_guid: str) -> Optional[list[Item]]:
""" """
"""
查询剧集列表
"""
if (res := self.__request_api(f"/episode/list/{season_guid}")) and res.success:
return [self.__build_item(info) for info in res.data]
return None
@@ -338,12 +390,12 @@ class Api:
return [self.__build_item(info) for info in res.data]
return None
def __get_authx(self, api_path, body: Optional[str]):
def __get_authx(self, api_path: str, body: Optional[str]):
"""
计算消息签名
"""
if api_path[0] != "/":
api_path = "/" + api_path
if not api_path.startswith("/v"):
api_path = "/v" + api_path
nonce = str(random.randint(100000, 999999))
ts = str(int(time.time() * 1000))
md5 = hashlib.md5()
@@ -366,10 +418,17 @@ class Api:
return f"nonce={nonce}&timestamp={ts}&sign={sign}"
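`__get_authx` returns a `nonce=...&timestamp=...&sign=...` string, but the lines feeding `hashlib.md5()` are cut off in the hunk above. The sketch below reproduces only the visible shape; the field order hashed into `sign` (apikey, path, nonce, timestamp, body) is an assumption for illustration and may differ from the real scheme, and `APIKEY` is a placeholder:

```python
import hashlib
import random
import time

def build_authx(apikey: str, api_path: str, body: str = "") -> str:
    # nonce/timestamp generation matches the hunk above; the MD5 input
    # below is an ASSUMPTION, since the real concatenation is truncated.
    nonce = str(random.randint(100000, 999999))
    ts = str(int(time.time() * 1000))
    sign = hashlib.md5(f"{apikey}{api_path}{nonce}{ts}{body}".encode()).hexdigest()
    return f"nonce={nonce}&timestamp={ts}&sign={sign}"

header = build_authx("APIKEY", "/v/api/v1/login")
```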
def __request_api(
self, api: str, method: str = None, params: dict = None, data: dict = None
self,
api: str,
method: Optional[str] = None,
params: Optional[dict] = None,
data: Optional[dict] = None,
suppress_log=False,
):
"""
请求飞牛影视API
:param suppress_log: 是否禁止日志
"""
@dataclass
@@ -397,7 +456,7 @@ class Api:
url = self._host + api_path
if method is None:
method = "get" if data is None else "post"
if method == "post":
if method != "get":
json_body = (
json.dumps(data, allow_nan=False, cls=JsonEncoder) if data else ""
)
@@ -422,11 +481,13 @@ class Api:
resp = res.json()
msg = resp.get("msg")
if code := int(resp.get("code", -1)):
logger.error(f"请求接口 {api_path} 失败,错误码:{code} {msg}")
if not suppress_log:
logger.error(f"请求接口 {url} 失败,错误码:{code} {msg}")
return Result(code, msg)
return Result(0, msg, resp.get("data"))
else:
logger.error(f"请求接口 {api_path} 失败")
elif not suppress_log:
logger.error(f"请求接口 {url} 失败")
except Exception as e:
logger.error(f"请求接口 {api_path} 异常:" + str(e))
if not suppress_log:
logger.error(f"请求接口 {url} 异常:" + str(e))
return None


@@ -26,21 +26,58 @@ class TrimeMedia:
username: Optional[str] = None,
password: Optional[str] = None,
play_host: Optional[str] = None,
sync_libraries: list = None,
sync_libraries: Optional[list] = None,
**kwargs,
):
if not host or not username or not password:
logger.error("飞牛影视配置不完整!!")
return
host = UrlUtils.standardize_base_url(host).rstrip("/")
if play_host:
self._playhost = UrlUtils.standardize_base_url(play_host).rstrip("/")
self._username = username
self._password = password
self._sync_libraries = sync_libraries or []
self._api = fnapi.Api(host, apikey="16CCEB3D-AB42-077D-36A1-F355324E4237")
if (api := self.__create_api(host)) is None:
logger.error(f"请检查服务端地址 {host}")
return
self._api = api
if play_api := self.__create_api(play_host):
self._playhost = play_api.host
elif play_host:
logger.warning(f"请检查外网播放地址 {play_host}")
self.reconnect()
@property
def api(self) -> Optional[fnapi.Api]:
"""
获得飞牛API
"""
return self._api
def __create_api(self, host: Optional[str]) -> Optional[fnapi.Api]:
"""
创建一个飞牛API
:param host: 服务端地址
:return: 如果地址无效、不可访问则返回None
"""
if not host:
return None
api_key = "16CCEB3D-AB42-077D-36A1-F355324E4237"
host = UrlUtils.standardize_base_url(host).rstrip("/")
if not host.endswith("/v"):
# 尝试补上结尾的/v 测试能否正常访问
api = fnapi.Api(host + "/v", api_key)
if api.sys_version():
return api
# 测试用户配置的地址
api = fnapi.Api(host, api_key)
return api if api.sys_version() else None
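The fallback in `__create_api` (probe `host + "/v"` first, then the address as configured) can be sketched independently of the API client, with the reachability check injected as a callable so it is easy to exercise:

```python
from typing import Callable, Optional

def resolve_base_url(host: str, reachable: Callable[[str], bool]) -> Optional[str]:
    # Mirrors __create_api above: if the host doesn't already end in /v,
    # try appending it first, then fall back to the address as given.
    host = host.rstrip("/")
    if not host.endswith("/v") and reachable(host + "/v"):
        return host + "/v"
    return host if reachable(host) else None

# Stub probe that only answers on the "/v" path:
url = resolve_base_url("http://127.0.0.1:5666", lambda u: u.endswith("/v"))
```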
def __del__(self):
self.disconnect()
def is_configured(self) -> bool:
return self._api is not None
@@ -62,14 +99,27 @@ class TrimeMedia:
"""
if not self.is_configured():
return False
if (fnver := self._api.sys_version()) is None:
return False
# 版本号:0.8.36, 服务版本:0.8.19
logger.debug(f"版本号:{fnver.frontend}, 服务版本:{fnver.backend}")
if self._api.login(self._username, self._password) is None:
return False
self._userinfo = self._api.user_info()
if self._userinfo is None:
return False
logger.debug(f"{self._userinfo.username} 成功登录飞牛影视")
logger.debug(f"{self._username} 成功登录飞牛影视")
return True
def disconnect(self):
"""
断开与飞牛的连接
"""
if self.is_authenticated():
self._api.logout()
self._userinfo = None
logger.debug(f"{self._username} 已断开飞牛影视")
def get_librarys(
self, hidden: Optional[bool] = False
) -> List[schemas.MediaServerLibrary]:
@@ -87,11 +137,11 @@ class TrimeMedia:
for library in self._libraries.values():
if hidden and self.__is_library_blocked(library.guid):
continue
if library.category == fnapi.Category.Movie:
if library.category == fnapi.Category.MOVIE:
library_type = MediaType.MOVIE.value
elif library.category == fnapi.Category.TV:
library_type = MediaType.TV.value
elif library.category == fnapi.Category.Others:
elif library.category == fnapi.Category.OTHERS:
# 忽略这个库
continue
else:
@@ -107,7 +157,7 @@ class TrimeMedia:
f"{self._api.host}{img_path}?w=256"
for img_path in library.posters or []
],
link=f"{self._playhost or self._api.host}/v/library/{library.guid}",
link=f"{self._playhost or self._api.host}/library/{library.guid}",
)
)
return libraries
@@ -170,7 +220,7 @@ class TrimeMedia:
movies = []
items = self._api.search_list(keywords=title) or []
for item in items:
if item.type != fnapi.Type.Movie:
if item.type != fnapi.Type.MOVIE:
continue
if (
(not tmdb_id or tmdb_id == item.tmdb_id)
@@ -280,7 +330,7 @@ class TrimeMedia:
lib = self.__match_library_by_path(item.target_path)
if lib is None:
# 如果有匹配失败的,刷新整个库
return self._api.mdb_scanall()
return self.refresh_root_library()
# 媒体库去重
libraries.add(lib.guid)
@@ -290,7 +340,7 @@ class TrimeMedia:
logger.info(f"刷新媒体库:{lib.name}")
if not self._api.mdb_scan(lib):
# 如果失败,刷新整个库
return self._api.mdb_scanall()
return self.refresh_root_library()
return True
def __match_library_by_path(self, path: Path) -> Optional[fnapi.MediaDb]:
@@ -336,7 +386,7 @@ class TrimeMedia:
if item.watched:
user_state.played = True
if item.duration and item.ts is not None:
user_state.percentage = item.ts / item.duration
user_state.percentage = item.ts / item.duration * 100
user_state.resume = True
if item.type is None:
item_type = None
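The `* 100` fix above converts the 0–1 playback ratio into the 0–100 percentage the media-server schema expects. As a standalone check:

```python
def resume_percentage(ts: float, duration: float) -> float:
    # Playback position as a 0-100 percentage (the schema expects
    # percent, not a raw ratio)
    return ts / duration * 100

pct = resume_percentage(930, 3100)  # ~30% watched
```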
@@ -361,40 +411,37 @@ class TrimeMedia:
"""
拼装播放链接
"""
if item.type == fnapi.Type.Episode:
return f"{host}/v/tv/episode/{item.guid}"
elif item.type == fnapi.Type.Season:
return f"{host}/v/tv/season/{item.guid}"
elif item.type == fnapi.Type.Movie:
return f"{host}/v/movie/{item.guid}"
if item.type == fnapi.Type.EPISODE:
return f"{host}/tv/episode/{item.guid}"
elif item.type == fnapi.Type.SEASON:
return f"{host}/tv/season/{item.guid}"
elif item.type == fnapi.Type.MOVIE:
return f"{host}/movie/{item.guid}"
elif item.type == fnapi.Type.TV:
return f"{host}/v/tv/{item.guid}"
return f"{host}/tv/{item.guid}"
else:
# 其它类型走通用页面,由飞牛来判断
return f"{host}/v/other/{item.guid}"
return f"{host}/other/{item.guid}"
def __build_media_server_play_item(
self, item: fnapi.Item
) -> schemas.MediaServerPlayItem:
"""
:params use_backdrop: 是否优先使用Backdrop类型的图片
"""
if item.type == fnapi.Type.Episode:
if item.type == fnapi.Type.EPISODE:
title = item.tv_title
subtitle = f"S{item.season_number}:{item.episode_number} - {item.title}"
else:
title = item.title
subtitle = "电影" if item.type == fnapi.Type.Movie else "视频"
type = (
subtitle = "电影" if item.type == fnapi.Type.MOVIE else "视频"
types = (
MediaType.MOVIE.value
if item.type in [fnapi.Type.Movie, fnapi.Type.Video]
if item.type in [fnapi.Type.MOVIE, fnapi.Type.VIDEO]
else MediaType.TV.value
)
return schemas.MediaServerPlayItem(
id=item.guid,
title=title,
subtitle=subtitle,
type=type,
type=types,
image=f"{self._api.host}{item.poster}",
link=self.__build_play_url(self._playhost or self._api.host, item),
percent=(
@@ -421,22 +468,22 @@ class TrimeMedia:
"""
if not self.is_authenticated():
return None
if (SIZE := limit) is None:
SIZE = -1
if (page_size := limit) is None:
page_size = -1
items = (
self._api.item_list(
guid=parent,
page=start_index + 1,
page_size=SIZE,
type=[fnapi.Type.Movie, fnapi.Type.TV, fnapi.Type.Directory],
page_size=page_size,
types=[fnapi.Type.MOVIE, fnapi.Type.TV, fnapi.Type.DIRECTORY],
)
or []
)
for item in items:
if item.type == fnapi.Type.Directory:
if item.type == fnapi.Type.DIRECTORY:
for items in self.get_items(parent=item.guid):
yield items
elif item.type in [fnapi.Type.Movie, fnapi.Type.TV]:
elif item.type in [fnapi.Type.MOVIE, fnapi.Type.TV]:
yield self.__build_media_server_item(item)
return None
@@ -482,7 +529,7 @@ class TrimeMedia:
self._api.item_list(
page=1,
page_size=max(100, num * 5),
type=[fnapi.Type.Movie, fnapi.Type.TV],
types=[fnapi.Type.MOVIE, fnapi.Type.TV],
)
or []
)
@@ -505,7 +552,7 @@ class TrimeMedia:
self._api.item_list(
page=1,
page_size=max(100, num * 5),
type=[fnapi.Type.Movie, fnapi.Type.TV],
types=[fnapi.Type.MOVIE, fnapi.Type.TV],
)
or []
)
@@ -534,7 +581,7 @@ class TrimeMedia:
def __is_library_blocked(self, library_guid: str):
if library := self._libraries.get(library_guid):
if library.category == fnapi.Category.Others:
if library.category == fnapi.Category.OTHERS:
# 忽略这个库
return True
return (


@@ -170,6 +170,10 @@ class MediaInfo(BaseModel):
runtime: Optional[int] = None
# 下一集
next_episode_to_air: Optional[dict] = Field(default_factory=dict)
# 全部剧集组
episode_groups: Optional[list] = Field(default_factory=list)
# 剧集组
episode_group: Optional[str] = None
class TorrentInfo(BaseModel):


@@ -274,6 +274,7 @@ class RecommendMediaSource(BaseModel):
"""
name: str = Field(..., description="数据源名称")
api_path: str = Field(..., description="媒体数据源API地址")
type: str = Field(..., description="类型")
class RecommendSourceEventData(ChainEventData):


@@ -48,6 +48,8 @@ class DownloadHistory(BaseModel):
note: Optional[Any] = None
# 自定义媒体类别
media_category: Optional[str] = None
# 自定义剧集组
episode_group: Optional[str] = None
class Config:
orm_mode = True
@@ -86,6 +88,8 @@ class TransferHistory(BaseModel):
image: Optional[str] = None
# 下载器Hash
download_hash: Optional[str] = None
# 自定义剧集组
episode_group: Optional[str] = None
# 状态 1-成功0-失败
status: bool = True
# 失败原因


@@ -160,6 +160,7 @@ class WebhookEventInfo(BaseModel):
save_reason: Optional[str] = None
item_isvirtual: Optional[bool] = None
media_type: Optional[str] = None
json_object: Optional[dict] = {}
class MediaServerPlayItem(BaseModel):


@@ -73,6 +73,8 @@ class Subscribe(BaseModel):
media_category: Optional[str] = None
# 过滤规则组
filter_groups: Optional[List[str]] = Field(default_factory=list)
# 剧集组
episode_group: Optional[str] = None
class Config:
orm_mode = True
@@ -130,6 +132,8 @@ class SubscribeShare(BaseModel):
custom_words: Optional[str] = None
# 自定义媒体类别
media_category: Optional[str] = None
# 自定义剧集组
episode_group: Optional[str] = None
# 复用人次
count: Optional[int] = 0


@@ -200,3 +200,5 @@ class ManualTransferItem(BaseModel):
library_category_folder: Optional[bool] = None
# 复用历史识别信息
from_history: Optional[bool] = False
# 剧集组
episode_group: Optional[str] = None


@@ -642,13 +642,14 @@ class StringUtils:
if len(parts) > 3:
# 处理不希望包含多个冒号的情况(除了协议后的冒号)
return None, None
# 不含端口地址
domain = ":".join(parts[:-1]).rstrip('/')
# 端口号
try:
elif len(parts) == 3:
port = int(parts[-1])
except ValueError:
# 端口号不是整数,返回 None 表示无效
# 不含端口地址
domain = ":".join(parts[:-1]).rstrip('/')
elif len(parts) == 2:
port = 443 if address.startswith("https") else 80
domain = address
else:
return None, None
return domain, port
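The reshuffled hunk above is hard to read in this flattened view; the intended branch logic (by colon count, with scheme-based default ports) can be sketched whole. The `ValueError` handling returning `(None, None)` is carried over from the removed code as an assumption, since the new handling is cut up in the hunk:

```python
from typing import Optional, Tuple

def parse_domain_port(address: str) -> Tuple[Optional[str], Optional[int]]:
    parts = address.split(":")
    if len(parts) > 3:
        # More colons than scheme + one port allows
        return None, None
    elif len(parts) == 3:
        try:
            port = int(parts[-1])
        except ValueError:
            # Non-integer port (assumed to stay invalid, as before)
            return None, None
        # Address without the port
        domain = ":".join(parts[:-1]).rstrip("/")
    elif len(parts) == 2:
        # No explicit port: default by scheme
        port = 443 if address.startswith("https") else 80
        domain = address
    else:
        return None, None
    return domain, port
```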


@@ -1,67 +1,25 @@
#######################################################################
# 【*】为必配项,其余为选配项,选配项可以删除整项配置项或者保留配置默认值 #
#######################################################################
#######################################################################################################
# V2版本中大部分设置可通过后台设置界面进行配置本文件仅展示界面无法配置的项 这些项同样可以通过环境变量进行设置 #
#######################################################################################################
# 【*】API监听地址注意不是前端访问地址
HOST=0.0.0.0
# 是否调试模式,打开后将输出更多日志
DEBUG=false
# 是否开发模式,打开后后台服务将不会启动
DEV=false
# 日志级别DEBUG、INFO、WARNING、ERROR等当DEBUG=true时此配置项将被忽略日志级别始终为DEBUG
LOG_LEVEL=INFO
# 【*】超级管理员,设置后一但重启将固化到数据库中,修改将无效(初始化超级管理员密码仅会生成一次,请在日志中查看并自行登录系统修改)
SUPERUSER=admin
# 自动检查和更新站点资源包(索引、认证等)
AUTO_UPDATE_RESOURCE=true
# 媒体识别来源 themoviedb/douban使用themoviedb时需要确保能正常连接api.themoviedb.org使用douban时不支持二级分类
RECOGNIZE_SOURCE=themoviedb
# OCR服务器地址
OCR_HOST=https://movie-pilot.org
# 搜索多个名称true/false为true时搜索时会同时搜索中英文及原始名称搜索结果会更全面但会增加搜索时间为false时其中一个名称搜索到结果或全部名称搜索完毕即停止
SEARCH_MULTIPLE_NAME=false
# 为指定字幕添加.default后缀设置为默认字幕支持为'zh-cn''zh-tw''eng'添加默认字幕未定义或设置为None则不添加
DEFAULT_SUB=zh-cn
# 数据库连接池的大小可适当降低如20-50以减少I/O压力
DB_POOL_SIZE=100
# 数据库连接池最大溢出连接数可适当降低如0以减少I/O压力
DB_MAX_OVERFLOW=500
# SQLite 的 busy_timeout 参数可适当增加如180以减少锁定错误
DB_TIMEOUT=60
# SQLite 是否启用 WAL 模式,启用可提升读写并发性能,但可能在异常情况下增加数据丢失的风险
DB_WAL_ENABLE=false
# 【*】超级管理员,设置后一但重启将固化到数据库中,修改将无效(初始化超级管理员密码仅会生成一次,请在日志中查看并自行登录系统修改)
SUPERUSER=admin
# 辅助认证,允许通过外部服务进行认证、单点登录以及自动创建用户
AUXILIARY_AUTH_ENABLE=false
# 大内存模式,开启后会增加缓存数量,但会占用更多内存
BIG_MEMORY_MODE=false
# 是否启用DOH域名解析启用后对于api.themovie.org等域名通过DOH解析避免域名DNS被污染
DOH_ENABLE=true
# 使用 DOH 解析的域名列表,多个域名使用`,`分隔
DOH_DOMAINS=api.themoviedb.org,api.tmdb.org,webservice.fanart.tv,api.github.com,github.com,raw.githubusercontent.com,api.telegram.org
# DOH 解析服务器列表,多个服务器使用`,`分隔
DOH_RESOLVERS=1.0.0.1,1.1.1.1,9.9.9.9,149.112.112.112
# 元数据识别缓存过期时间数字型单位小时0为系统默认大内存模式为7天否则为3天调大该值可减少themoviedb的访问次数
META_CACHE_EXPIRE=0
# 自动检查和更新站点资源包(索引、认证等)
AUTO_UPDATE_RESOURCE=true
# 【*】API密钥未设置时系统将随机生成建议使用复杂字符串用于Jellyseerr/Overseerr、媒体服务器Webhook等配置以及部分支持API_TOKEN的API请求
API_TOKEN=''
# 登录页面电影海报tmdb/bing/mediaservertmdb要求能正常连接api.themoviedb.org
WALLPAPER=tmdb
# TMDB图片地址无需修改需保留默认值如果默认地址连通性不好可以尝试修改为`static-mdb.v.geilijiasu.com`
TMDB_IMAGE_DOMAIN=image.tmdb.org
# TMDB API地址无需修改需保留默认值也可配置为`api.tmdb.org`或其它中转代理服务地址,能连通即可
TMDB_API_DOMAIN=api.themoviedb.org
# 媒体识别来源 themoviedb/douban使用themoviedb时需要确保能正常连接api.themoviedb.org使用douban时不支持二级分类
RECOGNIZE_SOURCE=themoviedb
# Fanart开关
FANART_ENABLE=true
# 新增已入库媒体是否跟随TMDB信息变化true/false为false时即使TMDB信息变化时也会仍然按历史记录中已入库的信息进行刮削
SCRAP_FOLLOW_TMDB=true
# 刮削来源 themoviedb/douban使用themoviedb时需要确保能正常连接api.themoviedb.org使用douban时会缺失部分信息
SCRAP_SOURCE=themoviedb
# 电影重命名格式Jinja2语法参考https://jinja.palletsprojects.com/en/3.0.x/templates/
MOVIE_RENAME_FORMAT={{title}}{% if year %} ({{year}}){% endif %}/{{title}}{% if year %} ({{year}}){% endif %}{% if part %}-{{part}}{% endif %}{% if videoFormat %} - {{videoFormat}}{% endif %}{{fileExt}}
# 电视剧重命名格式Jinja2语法参考https://jinja.palletsprojects.com/en/3.0.x/templates/
TV_RENAME_FORMAT={{title}}{% if year %} ({{year}}){% endif %}/Season {{season}}/{{title}} - {{season_episode}}{% if part %}-{{part}}{% endif %}{% if episode %} - 第 {{episode}}{% endif %}{{fileExt}}
# 交互搜索自动下载用户ID消息通知渠道的用户ID使用,分割,设置为 all 代表所有用户自动择优下载,未设置需要用户手动选择资源或者回复`0`才自动择优下载
AUTO_DOWNLOAD_USER=
# 自动下载站点字幕(如有)
DOWNLOAD_SUBTITLE=true
# OCR服务器地址
OCR_HOST=https://movie-pilot.org
# 插件市场仓库地址,多个地址使用`,`分隔,保留最后的/
PLUGIN_MARKET=https://github.com/jxxghp/MoviePilot-Plugins,https://github.com/thsrite/MoviePilot-Plugins,https://github.com/InfinityPacer/MoviePilot-Plugins,https://github.com/honue/MoviePilot-Plugins
# 搜索多个名称true/false为true时搜索时会同时搜索中英文及原始名称搜索结果会更全面但会增加搜索时间为false时其中一个名称搜索到结果或全部名称搜索完毕即停止
SEARCH_MULTIPLE_NAME=true
# 为指定字幕添加.default后缀设置为默认字幕支持为'zh-cn''zh-tw''eng'添加默认字幕未定义或设置为None则不添加
DEFAULT_SUB=None
# 是否开发调试模式,仅开发人员使用,打开后将停止后台服务
DEV=false


@@ -0,0 +1,32 @@
"""2.1.3
Revision ID: 4b544f5d3b07
Revises: 610bb05ddeef
Create Date: 2025-04-03 11:21:42.780337
"""
import contextlib
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import sqlite
# revision identifiers, used by Alembic.
revision = '4b544f5d3b07'
down_revision = '610bb05ddeef'
branch_labels = None
depends_on = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
with contextlib.suppress(Exception):
op.add_column('downloadhistory', sa.Column('episode_group', sa.String, nullable=True))
op.add_column('subscribe', sa.Column('episode_group', sa.String, nullable=True))
op.add_column('subscribehistory', sa.Column('episode_group', sa.String, nullable=True))
op.add_column('transferhistory', sa.Column('episode_group', sa.String, nullable=True))
# ### end Alembic commands ###
def downgrade() -> None:
pass


@@ -1,4 +1,4 @@
FROM python:3.11.4-slim-bookworm
FROM python:3.12.8-slim-bookworm
ENV LANG="C.UTF-8" \
TZ="Asia/Shanghai" \
HOME="/moviepilot" \
@@ -38,7 +38,6 @@ RUN apt-get update -y \
then ln -s /usr/lib/aarch64-linux-musl/libc.so /lib/libc.musl-aarch64.so.1; \
fi \
&& curl https://rclone.org/install.sh | bash \
&& curl --insecure -fsSL https://raw.githubusercontent.com/DDS-Derek/Aria2-Pro-Core/master/aria2-install.sh | bash \
&& apt-get autoremove -y \
&& apt-get clean -y \
&& rm -rf \
@@ -46,7 +45,7 @@ RUN apt-get update -y \
/moviepilot/.cache \
/var/lib/apt/lists/* \
/var/tmp/*
COPY requirements.in requirements.in
COPY ../requirements.in requirements.in
RUN apt-get update -y \
&& apt-get install -y build-essential \
&& pip install --upgrade pip \
@@ -62,12 +61,13 @@ RUN apt-get update -y \
/moviepilot/.cache \
/var/lib/apt/lists/* \
/var/tmp/*
COPY . .
RUN cp -f /app/nginx.conf /etc/nginx/nginx.template.conf \
&& cp -f /app/update /usr/local/bin/mp_update \
&& cp -f /app/entrypoint /entrypoint \
&& cp -f /app/docker_http_proxy.conf /etc/nginx/docker_http_proxy.conf \
&& chmod +x /entrypoint /usr/local/bin/mp_update \
COPY .. .
RUN cp -f /app/docker/nginx.common.conf /etc/nginx/common.conf \
&& cp -f /app/docker/nginx.template.conf /etc/nginx/nginx.template.conf \
&& cp -f /app/docker/update.sh /usr/local/bin/mp_update.sh \
&& cp -f /app/docker/entrypoint.sh /entrypoint.sh \
&& cp -f /app/docker/docker_http_proxy.conf /etc/nginx/docker_http_proxy.conf \
&& chmod +x /entrypoint.sh /usr/local/bin/mp_update.sh \
&& mkdir -p ${HOME} \
&& groupadd -r moviepilot -g 918 \
&& useradd -r moviepilot -g moviepilot -d ${HOME} -s /bin/bash -u 918 \
@@ -88,4 +88,4 @@ RUN cp -f /app/nginx.conf /etc/nginx/nginx.template.conf \
&& rm -rf /tmp/*
EXPOSE 3000
VOLUME [ "/config" ]
ENTRYPOINT [ "/entrypoint" ]
ENTRYPOINT [ "/entrypoint.sh" ]

docker/cert.sh Normal file

@@ -0,0 +1,101 @@
#!/bin/bash
set -e
Green="\033[32m"
Red="\033[31m"
Yellow='\033[33m'
Font="\033[0m"
INFO="[${Green}INFO${Font}]"
ERROR="[${Red}ERROR${Font}]"
WARN="[${Yellow}WARN${Font}]"
function INFO() {
echo -e "${INFO} ${1}"
}
function ERROR() {
echo -e "${ERROR} ${1}"
}
function WARN() {
echo -e "${WARN} ${1}"
}
# 核心条件验证
if [ "${ENABLE_SSL}" = "true" ] && \
[ "${AUTO_ISSUE_CERT}" = "true" ] && \
[ -n "${SSL_DOMAIN}" ]; then
# 创建证书目录
mkdir -p /config/certs/"${SSL_DOMAIN}"
chown moviepilot:moviepilot /config/certs -R
# 安装acme.sh使用官方安装脚本
if [ ! -d "/config/acme.sh" ]; then
INFO "→ 安装acme.sh..."
# 生成安装参数
INSTALL_ARGS=(
"--install-online"
"--home" "/config/acme.sh"
"--config-home" "/config/acme.sh/data"
"--cert-home" "/config/certs"
)
# 添加邮箱参数(如果设置)
if [ -n "${SSL_EMAIL}" ]; then
INSTALL_ARGS+=("--accountemail" "${SSL_EMAIL}")
else
WARN "未设置SSL_EMAIL建议配置邮箱用于证书过期提醒"
fi
# 执行官方安装命令
curl -sSL https://get.acme.sh | sh -s -- "${INSTALL_ARGS[@]}"
fi
# 签发证书(仅当证书不存在时)
if [ ! -f "/config/certs/${SSL_DOMAIN}/fullchain.pem" ]; then
# 必要参数检查
REQUIRED_VARS=("DNS_PROVIDER")
for var in "${REQUIRED_VARS[@]}"; do
eval "value=\${${var}}"
[ -z "$value" ] && { ERROR "必须设置环境变量: ${var}"; exit 1; }
done
INFO "→ 签发证书: ${SSL_DOMAIN} (DNS验证方式: ${DNS_PROVIDER})"
# 加载ACME环境变量带安全过滤
INFO "正在加载ACME环境变量..."
# 在当前shell中读取避免管道子shell导致export的变量丢失
while read -r line; do
key="${line#ACME_ENV_}"
key="${key%%=*}"
value="${line#ACME_ENV_${key}=}"
# 过滤非法变量名
if [[ "$key" =~ ^[a-zA-Z_][a-zA-Z0-9_]*$ ]]; then
export "$key"="$value"
INFO "已加载环境变量: ${key}=******"
else
WARN "跳过无效变量名: ${key}"
fi
done < <(env | grep '^ACME_ENV_')
# 签发证书
/config/acme.sh/acme.sh --issue \
--dns "${DNS_PROVIDER}" \
--domain "${SSL_DOMAIN}" \
--key-file /config/certs/"${SSL_DOMAIN}"/privkey.pem \
--fullchain-file /config/certs/"${SSL_DOMAIN}"/fullchain.pem \
--reloadcmd "nginx -s reload" \
--force
# 创建稳定符号链接
ln -sf /config/certs/"${SSL_DOMAIN}" /config/certs/latest
fi
# 配置自动更新任务
INFO "→ 配置cron自动更新..."
echo "0 3 * * * /config/acme.sh/acme.sh --cron --home /config/acme.sh && nginx -s reload" > /etc/cron.d/acme
chmod 644 /etc/cron.d/acme
service cron start
elif [ "${ENABLE_SSL}" = "true" ] && [ "${AUTO_ISSUE_CERT}" = "true" ] && [ -z "${SSL_DOMAIN}" ]; then
WARN "已启用自动签发证书但未设置SSL_DOMAIN跳过证书管理"
fi

docker/entrypoint.sh Normal file

@@ -0,0 +1,97 @@
#!/bin/bash
# shellcheck shell=bash
# shellcheck disable=SC2016
# shellcheck disable=SC2155
Green="\033[32m"
Red="\033[31m"
Yellow='\033[33m'
Font="\033[0m"
INFO="[${Green}INFO${Font}]"
ERROR="[${Red}ERROR${Font}]"
WARN="[${Yellow}WARN${Font}]"
function INFO() {
echo -e "${INFO} ${1}"
}
function ERROR() {
echo -e "${ERROR} ${1}"
}
function WARN() {
echo -e "${WARN} ${1}"
}
# 生成HTTPS配置块
if [ "${ENABLE_SSL}" = "true" ]; then
export HTTPS_SERVER_CONF=$(cat <<EOF
server {
include /etc/nginx/mime.types;
default_type application/octet-stream;
listen 443 ssl;
listen [::]:443 ssl;
server_name ${SSL_DOMAIN:-moviepilot};
# SSL证书路径
ssl_certificate /config/certs/latest/fullchain.pem;
ssl_certificate_key /config/certs/latest/privkey.pem;
# SSL安全配置
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384';
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
# 公共配置
include common.conf;
}
EOF
)
else
export HTTPS_SERVER_CONF="# HTTPS未启用"
fi
# 使用 `envsubst` 将模板文件中的 ${NGINX_PORT} 替换为实际的环境变量值
export NGINX_CLIENT_MAX_BODY_SIZE=${NGINX_CLIENT_MAX_BODY_SIZE:-10m}
envsubst '${NGINX_PORT}${PORT}${NGINX_CLIENT_MAX_BODY_SIZE}${ENABLE_SSL}${HTTPS_SERVER_CONF}' < /etc/nginx/nginx.template.conf > /etc/nginx/nginx.conf
# 自动更新
cd /
source /usr/local/bin/mp_update.sh
cd /app || exit
# 更改 moviepilot userid 和 groupid
groupmod -o -g "${PGID}" moviepilot
usermod -o -u "${PUID}" moviepilot
# 更改文件权限
chown -R moviepilot:moviepilot \
"${HOME}" \
/app \
/public \
/config \
/var/lib/nginx \
/var/log/nginx
chown moviepilot:moviepilot /etc/hosts /tmp
# 下载浏览器内核
if [[ "$HTTPS_PROXY" =~ ^https?:// ]] || [[ "$https_proxy" =~ ^https?:// ]] || [[ "$PROXY_HOST" =~ ^https?:// ]]; then
HTTPS_PROXY="${HTTPS_PROXY:-${https_proxy:-$PROXY_HOST}}" gosu moviepilot:moviepilot playwright install chromium
else
gosu moviepilot:moviepilot playwright install chromium
fi
# 证书管理
source /app/docker/cert.sh
# 启动前端nginx服务
INFO "→ 启动前端nginx服务..."
nginx
# 启动docker http proxy nginx
if [ -S "/var/run/docker.sock" ]; then
INFO "→ 启动 Docker Proxy..."
nginx -c /etc/nginx/docker_http_proxy.conf
# 上面nginx是通过root启动的会将目录权限改成root所以需要重新再设置一遍权限
chown -R moviepilot:moviepilot \
/var/lib/nginx \
/var/log/nginx
fi
# 设置后端服务权限掩码
umask "${UMASK}"
# 启动后端服务
INFO "→ 启动后端服务..."
exec dumb-init gosu moviepilot:moviepilot python3 app/main.py

docker/nginx.common.conf Normal file

@@ -0,0 +1,100 @@
# 公共根目录
root /public;
# 主应用路由
location / {
expires off;
add_header Cache-Control "no-cache, no-store, must-revalidate";
try_files $uri $uri/ /index.html;
}
# 图片类静态资源
location ~* \.(png|jpg|jpeg|gif|ico|svg)$ {
expires 1y;
add_header Cache-Control "public, immutable";
}
# assets目录
location /assets {
expires 1y;
add_header Cache-Control "public, immutable";
}
# 站点图标
location /api/v1/site/icon/ {
# 站点图标缓存
proxy_cache my_cache;
# 缓存响应码为200和302的请求1小时
proxy_cache_valid 200 302 1h;
# 缓存其他响应码的请求5分钟
proxy_cache_valid any 5m;
# 缓存键的生成规则
proxy_cache_key "$scheme$request_method$host$request_uri";
proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
# 向后端API转发请求
proxy_pass http://backend_api;
}
# 本地CookieCloud
location /cookiecloud {
proxy_pass http://backend_api;
rewrite ^.+mock-server/?(.*)$ /$1 break;
proxy_http_version 1.1;
proxy_buffering off;
proxy_cache off;
proxy_redirect off;
proxy_set_header Connection "";
proxy_set_header Upgrade $http_upgrade;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-Nginx-Proxy true;
# 超时设置
proxy_read_timeout 600s;
}
# SSE特殊配置
location ~ ^/api/v1/system/(message|progress/) {
# SSE MIME类型设置
default_type text/event-stream;
# 禁用缓存
add_header Cache-Control no-cache;
add_header X-Accel-Buffering no;
proxy_buffering off;
proxy_cache off;
# 代理设置
proxy_pass http://backend_api;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# 超时设置
proxy_read_timeout 3600s;
}
# API代理配置
location /api {
proxy_pass http://backend_api;
rewrite ^.+mock-server/?(.*)$ /$1 break;
proxy_http_version 1.1;
proxy_buffering off;
proxy_cache off;
proxy_redirect off;
proxy_set_header Connection "";
proxy_set_header Upgrade $http_upgrade;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-Nginx-Proxy true;
# 超时设置
proxy_read_timeout 600s;
}


@@ -0,0 +1,50 @@
user moviepilot;
worker_processes auto;
worker_cpu_affinity auto;
events {
worker_connections 1024;
}
http {
# 设置缓存路径和缓存区大小
proxy_cache_path /tmp levels=1:2 keys_zone=my_cache:10m max_size=100m inactive=60m use_temp_path=off;
sendfile on;
keepalive_timeout 3600;
client_max_body_size ${NGINX_CLIENT_MAX_BODY_SIZE};
gzip on;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
gzip_proxied any;
gzip_min_length 256;
gzip_vary on;
gzip_comp_level 6;
# HTTP
server {
include /etc/nginx/mime.types;
default_type application/octet-stream;
listen ${NGINX_PORT};
listen [::]:${NGINX_PORT};
server_name moviepilot;
# 公共配置
include common.conf;
}
# HTTPS
${HTTPS_SERVER_CONF}
upstream backend_api {
# 后端API的地址和端口
server 127.0.0.1:${PORT};
# 可以添加更多后端服务器作为负载均衡
}
}


@@ -26,7 +26,7 @@ function download_and_unzip() {
local max_retries=3
local url="$1"
local target_dir="$2"
INFO "正在下载 ${url}..."
INFO "正在下载 ${url}..."
while [ $retries -lt $max_retries ]; do
if curl ${CURL_OPTIONS} "${url}" ${CURL_HEADERS} | busybox unzip -d ${TMP_PATH} - > /dev/null; then
if [ -e ${TMP_PATH}/MoviePilot-* ]; then
@@ -54,19 +54,19 @@ function install_backend_and_download_resources() {
return 1
fi
INFO "Backend downloaded successfully"
INFO "Installing dependencies..."
INFO "→ Installing dependencies..."
if ! pip install ${PIP_OPTIONS} --upgrade --root-user-action=ignore pip > /dev/null; then
ERROR "pip upgrade failed, please re-pull the image"
return 1
fi
if ! pip install ${PIP_OPTIONS} --root-user-action=ignore -r ${TMP_PATH}/App/requirements.txt > /dev/null; then
ERROR "Failed to install dependencies, please re-pull the image"
ERROR "Dependency installation failed, please re-pull the image"
return 1
fi
INFO "Dependencies installed"
INFO "Dependencies installed successfully"
# If "heads/v2.zip", look up the latest version number starting with v2
if [[ "${1}" == "heads/v2.zip" ]]; then
INFO "Fetching the latest frontend version number..."
INFO "Fetching the latest frontend version number..."
# Fetch the full release list and filter tags starting with v2
releases=$(curl ${CURL_OPTIONS} "https://api.github.com/repos/jxxghp/MoviePilot-Frontend/releases" ${CURL_HEADERS} | jq -r '.[].tag_name' | grep "^v2\.")
if [ -z "$releases" ]; then
@@ -78,7 +78,7 @@ function install_backend_and_download_resources() {
fi
INFO "Latest frontend version: ${frontend_version}"
else
INFO "Fetching the frontend version number..."
INFO "Fetching the frontend version number..."
# Read the frontend version number from the backend source
frontend_version=$(sed -n "s/^FRONTEND_VERSION\s*=\s*'\([^']*\)'/\1/p" ${TMP_PATH}/App/version.py)
if [[ "${frontend_version}" != *v* ]]; then
@@ -94,13 +94,13 @@ function install_backend_and_download_resources() {
fi
INFO "Frontend downloaded successfully"
# Back up the plugins directory
INFO "Backing up plugins directory..."
INFO "→ Backing up plugins directory..."
rm -rf /plugins
mkdir -p /plugins
cp -a /app/app/plugins/* /plugins/
rm -f /plugins/__init__.py
# Back up site resources
INFO "Backing up site resources directory..."
INFO "→ Backing up site resources directory..."
rm -rf /resources_bakcup
mkdir /resources_bakcup
cp -a /app/app/helper/user.sites.bin /resources_bakcup
@@ -118,14 +118,13 @@ function install_backend_and_download_resources() {
# Restore the plugins directory
cp -a /plugins/* /app/app/plugins/
# Update site resources
INFO "Updating site resources..."
INFO "Updating site resources..."
if ! download_and_unzip "${GITHUB_PROXY}https://github.com/jxxghp/MoviePilot-Resources/archive/refs/heads/main.zip" "Resources"; then
cp -a /resources_bakcup/* /app/app/helper/
rm -rf /resources_bakcup
WARN "Site resources download failed, starting with the old resources..."
return 1
fi
INFO "Site resources downloaded successfully"
# Copy the new site resources
cp -a ${TMP_PATH}/Resources/resources/* /app/app/helper/
INFO "Site resources updated successfully"
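The frontend-version lookup above hinges on a single sed expression. A standalone check of that extraction, using a scratch file (the /tmp path is for illustration only):

```shell
# Recreate a minimal version.py and run the same sed extraction used above.
printf "APP_VERSION = 'v2.4.2'\nFRONTEND_VERSION = 'v2.4.2'\n" > /tmp/version.py
frontend_version=$(sed -n "s/^FRONTEND_VERSION\s*=\s*'\([^']*\)'/\1/p" /tmp/version.py)
echo "${frontend_version}"  # prints: v2.4.2
```

Note `\s` is a GNU sed extension; the Docker image's Linux userland provides it, but BSD sed would need `[[:space:]]` instead.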


@@ -6,7 +6,7 @@
Before you begin, make sure your system has the following software installed:
- **Python 3.11 or later**
- **Python 3.12 or later** (3.11 is still supported for now; 3.12+ is recommended)
- **pip** (the Python package manager)
- **Git** (for version control)


@@ -1,43 +0,0 @@
#!/bin/bash
# shellcheck shell=bash
# shellcheck disable=SC2016
# Use `envsubst` to replace ${NGINX_PORT} etc. in the template with the actual environment variable values
export NGINX_CLIENT_MAX_BODY_SIZE=${NGINX_CLIENT_MAX_BODY_SIZE:-10m}
envsubst '${NGINX_PORT}${PORT}${NGINX_CLIENT_MAX_BODY_SIZE}' < /etc/nginx/nginx.template.conf > /etc/nginx/nginx.conf
# Auto-update
cd /
/usr/local/bin/mp_update
cd /app || exit
# Change the moviepilot user id and group id
groupmod -o -g "${PGID}" moviepilot
usermod -o -u "${PUID}" moviepilot
# Change file ownership
chown -R moviepilot:moviepilot \
"${HOME}" \
/app \
/public \
/config \
/var/lib/nginx \
/var/log/nginx
chown moviepilot:moviepilot /etc/hosts /tmp
# Download the browser engine
if [[ "$HTTPS_PROXY" =~ ^https?:// ]] || [[ "$https_proxy" =~ ^https?:// ]] || [[ "$PROXY_HOST" =~ ^https?:// ]]; then
HTTPS_PROXY="${HTTPS_PROXY:-${https_proxy:-$PROXY_HOST}}" gosu moviepilot:moviepilot playwright install chromium
else
gosu moviepilot:moviepilot playwright install chromium
fi
# Start the frontend nginx service
nginx
# Start the docker http proxy nginx
if [ -S "/var/run/docker.sock" ]; then
nginx -c /etc/nginx/docker_http_proxy.conf
# The nginx above was started as root and resets directory ownership to root, so the permissions must be set again
chown -R moviepilot:moviepilot \
/var/lib/nginx \
/var/log/nginx
fi
# Set the umask for the backend service
umask "${UMASK}"
# Start the backend service
exec dumb-init gosu moviepilot:moviepilot python3 app/main.py


@@ -1,142 +0,0 @@
user moviepilot;
worker_processes auto;
worker_cpu_affinity auto;
events {
worker_connections 1024;
}
http {
# Cache path and cache zone size
proxy_cache_path /tmp levels=1:2 keys_zone=my_cache:10m max_size=100m inactive=60m use_temp_path=off;
sendfile on;
keepalive_timeout 3600;
client_max_body_size ${NGINX_CLIENT_MAX_BODY_SIZE};
gzip on;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
gzip_proxied any;
gzip_min_length 256;
gzip_vary on;
gzip_comp_level 6;
server {
include /etc/nginx/mime.types;
default_type application/octet-stream;
listen ${NGINX_PORT};
listen [::]:${NGINX_PORT};
server_name moviepilot;
location / {
# Root directory
expires off;
add_header Cache-Control "no-cache, no-store, must-revalidate";
root /public;
try_files $uri $uri/ /index.html;
}
location ~* \.(png|jpg|jpeg|gif|ico|svg)$ {
# Static assets
expires 1y;
add_header Cache-Control "public, immutable";
root /public;
}
location /assets {
# Static assets
expires 1y;
add_header Cache-Control "public, immutable";
root /public;
}
location /api/v1/site/icon/ {
# Site icon cache
proxy_cache my_cache;
# Cache 200 and 302 responses for 1 hour
proxy_cache_valid 200 302 1h;
# Cache all other response codes for 5 minutes
proxy_cache_valid any 5m;
# Cache key generation rule
proxy_cache_key "$scheme$request_method$host$request_uri";
proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
# Forward requests to the backend API
proxy_pass http://backend_api;
}
location /cookiecloud {
# Backend cookiecloud endpoint
proxy_pass http://backend_api;
rewrite ^.+mock-server/?(.*)$ /$1 break;
proxy_http_version 1.1;
proxy_buffering off;
proxy_cache off;
proxy_redirect off;
proxy_set_header Connection "";
proxy_set_header Upgrade $http_upgrade;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-Nginx-Proxy true;
# Timeout settings
proxy_read_timeout 600s;
}
location ~ ^/api/v1/system/(message|progress/) {
# SSE MIME type
default_type text/event-stream;
# Disable caching
add_header Cache-Control no-cache;
add_header X-Accel-Buffering no;
proxy_buffering off;
proxy_cache off;
# Proxy settings
proxy_pass http://backend_api;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# Timeout settings
proxy_read_timeout 3600s;
}
location /api {
# Backend API
proxy_pass http://backend_api;
rewrite ^.+mock-server/?(.*)$ /$1 break;
proxy_http_version 1.1;
proxy_buffering off;
proxy_cache off;
proxy_redirect off;
proxy_set_header Connection "";
proxy_set_header Upgrade $http_upgrade;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-Nginx-Proxy true;
# Timeout settings
proxy_read_timeout 600s;
}
}
upstream backend_api {
# Backend API address and port
server 127.0.0.1:${PORT};
# Additional backend servers can be added here for load balancing
}
}


@@ -23,8 +23,8 @@ APScheduler~=3.10.1
cryptography~=43.0.0
pytz~=2023.3
pycryptodome~=3.20.0
qbittorrent-api==2024.11.69
plexapi~=4.15.16
qbittorrent-api==2024.11.70
plexapi~=4.16.0
transmission-rpc~=4.3.0
Jinja2~=3.1.4
pyparsing~=3.0.9
@@ -34,7 +34,7 @@ beautifulsoup4~=4.12.2
pillow~=10.4.0
pillow-avif-plugin~=1.4.6
pyTelegramBotAPI~=4.12.0
playwright~=1.37.0
playwright~=1.49.1
cf-clearance~=0.31.0
torrentool~=1.2.0
slack-bolt~=1.18.0
@@ -69,4 +69,4 @@ packaging~=24.2
cf_clearance~=0.31.0
oss2~=2.19.1
tqdm~=4.67.1
setuptools~=65.5.0
setuptools~=78.1.0

tests/cases/groups.py (new file)

@@ -0,0 +1,417 @@
release_group_cases = [
# 0ff group (sample structure)
{
"domain": "0ff",
"groups": [
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-FFAB", "group": "FFAB"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-FFWEB", "group": "FFWEB"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-FFCD", "group": "FFCD"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-FFEDU", "group": "FFEDU"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-FFEB", "group": "FFEB"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-FFTV", "group": "FFTV"}
]
},
# audiences group (sample structure)
{
"domain": "audiences",
"groups": [
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-Audies", "group": "Audies"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-ADE", "group": "ADE"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-ADAudio", "group": "ADAudio"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-ADEbook", "group": "ADEbook"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-ADMusic", "group": "ADMusic"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-ADWeb", "group": "ADWeb"}
]
},
# ---- Newly added structured cases below ----
# beitai group
{
"domain": "beitai",
"groups": [
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-BeiTai", "group": "BeiTai"}
]
},
# btschool group
{
"domain": "btschool",
"groups": [
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-BtsCHOOL", "group": "BtsCHOOL"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-BtsHD", "group": "BtsHD"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-BtsPAD", "group": "BtsPAD"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-BtsTV", "group": "BtsTV"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-Zone", "group": "Zone"}
]
},
# carpt group
{
"domain": "carpt",
"groups": [
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-CarPT", "group": "CarPT"}
]
},
# chd group
{
"domain": "chd",
"groups": [
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-CHD", "group": "CHD"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-CHDBits", "group": "CHDBits"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-CHDPAD", "group": "CHDPAD"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-CHDTV", "group": "CHDTV"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-CHDHKTV", "group": "CHDHKTV"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-CHDWEB", "group": "CHDWEB"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-StBOX", "group": "StBOX"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-OneHD", "group": "OneHD"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-Lee", "group": "Lee"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-xiaopie", "group": "xiaopie"}
]
},
# eastgame group
{
"domain": "eastgame",
"groups": [
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-TLF", "group": "TLF"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-iNT-TLF", "group": "iNT-TLF"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-HALFC-TLF", "group": "HALFC-TLF"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-MiniSD-TLF", "group": "MiniSD-TLF"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-MiniHD-TLF", "group": "MiniHD-TLF"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-MiniFHD-TLF", "group": "MiniFHD-TLF"}
]
},
# gainbound group
{
"domain": "gainbound",
"groups": [
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-DGB", "group": "DGB"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-GBWEB", "group": "GBWEB"}
]
},
# hares group
{
"domain": "hares",
"groups": [
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-Hares", "group": "Hares"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-HaresMV", "group": "HaresMV"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-HaresTV", "group": "HaresTV"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-HaresWeb", "group": "HaresWeb"}
]
},
# hdarea group
{
"domain": "hdarea",
"groups": [
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-HDApad", "group": "HDApad"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-HDArea", "group": "HDArea"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-HDATV", "group": "HDATV"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-EPiC", "group": "EPiC"}
]
},
# hdchina group
{
"domain": "hdchina",
"groups": [
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-HDC", "group": "HDC"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-HDChina", "group": "HDChina"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-HDCTV", "group": "HDCTV"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-k9611", "group": "k9611"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-tudou", "group": "tudou"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-iHD", "group": "iHD"}
]
},
# hddolby group
{
"domain": "hddolby",
"groups": [
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-Dream", "group": "Dream"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-DBTV", "group": "DBTV"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-HDo", "group": "HDo"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-QHStudIo", "group": "QHStudIo"}
]
},
# hdfans group
{
"domain": "hdfans",
"groups": [
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-beAst", "group": "beAst"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-beAstTV", "group": "beAstTV"}
]
},
# hdhome group
{
"domain": "hdhome",
"groups": [
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-HDH", "group": "HDH"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-HDHome", "group": "HDHome"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-HDHPad", "group": "HDHPad"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-HDHTV", "group": "HDHTV"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-HDHWEB", "group": "HDHWEB"}
]
},
# hdpt group
{
"domain": "hdpt",
"groups": [
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-HDPT", "group": "HDPT"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-HDPTWeb", "group": "HDPTWeb"}
]
},
# hdsky group
{
"domain": "hdsky",
"groups": [
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-HDS", "group": "HDS"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-HDSky", "group": "HDSky"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-HDSTV", "group": "HDSTV"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-HDSPad", "group": "HDSPad"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-HDSWEB", "group": "HDSWEB"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-AQLJ", "group": "AQLJ"}
]
},
# hdzone group
{
"domain": "hdzone",
"groups": [
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-HDZ", "group": "HDZ"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-HDZone", "group": "HDZone"}
]
},
# hhanclub group
{
"domain": "hhanclub",
"groups": [
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-HHWEB", "group": "HHWEB"}
]
},
# htpt group
{
"domain": "htpt",
"groups": [
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-HTPT", "group": "HTPT"}
]
},
# keepfrds group
{
"domain": "keepfrds",
"groups": [
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-FRDS", "group": "FRDS"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-Yumi@FRDS", "group": "Yumi@FRDS"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-cXcY@FRDS", "group": "cXcY@FRDS"}
]
},
# lemonhd group
{
"domain": "lemonhd",
"groups": [
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-LeagueCD", "group": "LeagueCD"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-LeagueHD", "group": "LeagueHD"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-LeagueMV", "group": "LeagueMV"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-LeagueTV", "group": "LeagueTV"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-LeagueNF", "group": "LeagueNF"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-LeagueWEB", "group": "LeagueWEB"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-LHD", "group": "LHD"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-i18n", "group": "i18n"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-CiNT", "group": "CiNT"}
]
},
# mteam group
{
"domain": "mteam",
"groups": [
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-MTeam", "group": "MTeam"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-MTeamTV", "group": "MTeamTV"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-MPAD", "group": "MPAD"}
]
},
# ourbits group
{
"domain": "ourbits",
"groups": [
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-OurBits", "group": "OurBits"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-OurTV", "group": "OurTV"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-FLTTH", "group": "FLTTH"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-Ao", "group": "Ao"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-PbK", "group": "PbK"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-MGs", "group": "MGs"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-iLoveHD", "group": "iLoveHD"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-iLoveTV", "group": "iLoveTV"}
]
},
# piggo group
{
"domain": "piggo",
"groups": [
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-PiGoNF", "group": "PiGoNF"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-PiGoHB", "group": "PiGoHB"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-PiGoWEB", "group": "PiGoWEB"}
]
},
# pterclub group
{
"domain": "pterclub",
"groups": [
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-PTer", "group": "PTer"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-PTerDIY", "group": "PTerDIY"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-PTerGame", "group": "PTerGame"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-PTerMV", "group": "PTerMV"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-PTerTV", "group": "PTerTV"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-PTerWEB", "group": "PTerWEB"}
]
},
# pthome group
{
"domain": "pthome",
"groups": [
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-PTH", "group": "PTH"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-PTHAudio", "group": "PTHAudio"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-PTHeBook", "group": "PTHeBook"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-PTHmusic", "group": "PTHmusic"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-PTHome", "group": "PTHome"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-PTHtv", "group": "PTHtv"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-PTHWEB", "group": "PTHWEB"}
]
},
# ptsbao group
{
"domain": "ptsbao",
"groups": [
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-PTsbao", "group": "PTsbao"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-OPS", "group": "OPS"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-FFansAIeNcE", "group": "FFansAIeNcE"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-FFansBD", "group": "FFansBD"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-FFansDVD", "group": "FFansDVD"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-FFansDIY", "group": "FFansDIY"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-FFansTV", "group": "FFansTV"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-FFansWEB", "group": "FFansWEB"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-FHDMv", "group": "FHDMv"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-SGXT", "group": "SGXT"}
]
},
# putao group
{
"domain": "putao",
"groups": [
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-PuTao", "group": "PuTao"}
]
},
# ssd group
{
"domain": "ssd",
"groups": [
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-CMCT", "group": "CMCT"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-CMCT@制作者", "group": "CMCT"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-CMCTV", "group": "CMCTV"}
]
},
# sharkpt group
{
"domain": "sharkpt",
"groups": [
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-Shark", "group": "Shark"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-SharkWEB", "group": "SharkWEB"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-SharkDIY", "group": "SharkDIY"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-SharkTV", "group": "SharkTV"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-SharkMV", "group": "SharkMV"}
]
},
# tjupt group
{
"domain": "tjupt",
"groups": [
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-TJUPT", "group": "TJUPT"}
]
},
# ttg group
{
"domain": "ttg",
"groups": [
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-TTG", "group": "TTG"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-WiKi", "group": "WiKi"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-NGB", "group": "NGB"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-DoA", "group": "DoA"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-ARiN", "group": "ARiN"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-ExREN", "group": "ExREN"}
]
},
# others group
{
"domain": "others",
"groups": [
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-BMDru", "group": "BMDru"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-BeyondHD", "group": "BeyondHD"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-BTN", "group": "BTN"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-Cfandora", "group": "Cfandora"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-Ctrlhd", "group": "Ctrlhd"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-CMRG", "group": "CMRG"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-DON", "group": "DON"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-EVO", "group": "EVO"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-FLUX", "group": "FLUX"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-HONE", "group": "HONE"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-HONEyG", "group": "HONEyG"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-NoGroup", "group": "NoGroup"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-NTb", "group": "NTb"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-NTG", "group": "NTG"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-PandaMoon", "group": "PandaMoon"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-SMURF", "group": "SMURF"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-TEPES", "group": "TEPES"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-Taengoo", "group": "Taengoo"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-TrollHD ", "group": "TrollHD "},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-UBWEB", "group": "UBWEB"}
]
},
# anime group
{
"domain": "anime",
"groups": [
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-ANi", "group": "ANi"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-HYSUB", "group": "HYSUB"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-KTXP", "group": "KTXP"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-LoliHouse", "group": "LoliHouse"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-MCE", "group": "MCE"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-Nekomoe kissaten", "group": "Nekomoe kissaten"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-SweetSub", "group": "SweetSub"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-MingY", "group": "MingY"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-Lilith-Raws", "group": "Lilith-Raws"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-NC-Raws", "group": "NC-Raws"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-织梦字幕组", "group": "织梦字幕组"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-枫叶字幕组", "group": "枫叶字幕组"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-猎户手抄部", "group": "猎户手抄部"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-喵萌奶茶屋", "group": "喵萌奶茶屋"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-漫猫字幕社", "group": "漫猫字幕社"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-霜庭云花Sub", "group": "霜庭云花Sub"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-北宇治字幕组", "group": "北宇治字幕组"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-氢气烤肉架", "group": "氢气烤肉架"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-云歌字幕组", "group": "云歌字幕组"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-萌樱字幕组", "group": "萌樱字幕组"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-极影字幕社", "group": "极影字幕社"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-悠哈璃羽字幕社", "group": "悠哈璃羽字幕社"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-❀拨雪寻春❀", "group": "❀拨雪寻春❀"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-沸羊羊制作", "group": "沸羊羊制作"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-沸羊羊字幕组", "group": "沸羊羊字幕组"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-桜都字幕组", "group": "桜都字幕组"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-樱都字幕组", "group": "樱都字幕组"}
]
},
# frog group
{
"domain": "frog",
"groups": [
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-FROG", "group": "FROG"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-FROGE", "group": "FROGE"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-FROGWeb", "group": "FROGWeb"},
]
},
# ubits group
{
"domain": "ubits",
"groups": [
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-UBits", "group": "UBits"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-UBWEB", "group": "UBWEB"},
{"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-UBTV", "group": "UBTV"},
]
},
]
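As a quick illustration of how these fixtures can be consumed, the list can be indexed by domain for direct lookup. The snippet inlines a two-entry subset of the data above so it runs standalone:

```python
# Two-entry subset of the release_group_cases fixtures above.
release_group_cases = [
    {"domain": "beitai", "groups": [
        {"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-BeiTai", "group": "BeiTai"},
    ]},
    {"domain": "tjupt", "groups": [
        {"title": "Bluey S03 2021 2160p WEB-DL H.265 AAC 2.0-TJUPT", "group": "TJUPT"},
    ]},
]

# Index the cases by domain so a single site's fixtures can be fetched directly.
cases_by_domain = {case["domain"]: case["groups"] for case in release_group_cases}
print(sorted(cases_by_domain))  # ['beitai', 'tjupt']
```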


@@ -0,0 +1,13 @@
from unittest import TestCase

from tests.cases.groups import release_group_cases
from app.core.meta.releasegroup import ReleaseGroupsMatcher


class MetaInfoTest(TestCase):

    def test_release_group(self):
        for info in release_group_cases:
            print(f"Testing {info.get('domain')}")
            for item in info.get('groups', []):
                release_group = ReleaseGroupsMatcher().match(item.get("title"))
                print(f"\tmatched release group: {release_group}, expected: {item.get('group')}")
                self.assertEqual(item.get("group"), release_group)
            print(f"Finished {info.get('domain')}")


@@ -1,2 +1,2 @@
APP_VERSION = 'v2.3.7'
FRONTEND_VERSION = 'v2.3.7-1'
APP_VERSION = 'v2.4.2'
FRONTEND_VERSION = 'v2.4.2'