Compare commits


161 Commits
v1.2.6 ... v

Author SHA1 Message Date
jxxghp
c37e02009f fix build 2023-10-09 19:39:19 +08:00
jxxghp
a96b8a4e07 fix build 2023-10-09 19:37:49 +08:00
jxxghp
79b4d5fb8e fix build 2023-10-09 19:33:05 +08:00
jxxghp
de128f5e6a fix 2023-10-09 15:04:54 +08:00
jxxghp
ef8ddcde07 fix 2023-10-09 14:46:23 +08:00
jxxghp
eaff557d70 windows package 2023-10-09 14:11:03 +08:00
jxxghp
38f7a31200 windows package 2023-10-09 13:40:09 +08:00
jxxghp
97f16289c9 windows package 2023-10-09 12:57:52 +08:00
jxxghp
e15f5ab93e Merge pull request #767 from thsrite/main 2023-10-09 11:50:18 +08:00
thsrite
15fd312765 fix #766 2023-10-09 11:41:59 +08:00
jxxghp
eea316865f fix #753 2023-10-09 11:05:53 +08:00
jxxghp
05bbfbbd54 Merge pull request #765 from thsrite/main
fix #701
2023-10-09 10:09:46 +08:00
thsrite
6039a9d0d5 fix 2023-10-09 10:06:04 +08:00
thsrite
0159b02916 fix 8bbd4dc9 2023-10-09 09:50:30 +08:00
thsrite
8bbd4dc913 fix #701 2023-10-09 09:37:16 +08:00
jxxghp
9e3ded6ad5 Merge pull request #764 from thsrite/main
fix download messages being sent to all users
2023-10-09 09:27:40 +08:00
jxxghp
fe63275a6b fix bug 2023-10-09 09:09:59 +08:00
jxxghp
81ed465607 fix #759 2023-10-09 09:05:48 +08:00
thsrite
d9aa281ce1 fix download messages being sent to all users 2023-10-09 09:02:01 +08:00
jxxghp
56648d664e fix README.md 2023-10-08 17:03:20 +08:00
jxxghp
da49d5577a fix app.env 2023-10-08 16:41:53 +08:00
jxxghp
f3dbdefdb1 fix README.md 2023-10-08 16:26:22 +08:00
jxxghp
d4302759e6 fix README.md 2023-10-08 16:25:27 +08:00
jxxghp
914f192fb2 test 2023-10-08 16:24:40 +08:00
jxxghp
522b554e36 fix README.md 2023-10-08 16:12:27 +08:00
jxxghp
4c54ab5319 fix README.md 2023-10-08 15:58:42 +08:00
jxxghp
d7f4ed069c Merge pull request #757 from lightolly/dev/20231008 2023-10-08 14:04:00 +08:00
olly
7ea0c5ee4c fix: cast & crew scraping improvements
1. retry Douban queries after hitting the rate limit
2. skip processing when all cast & crew names are already Chinese
2023-10-08 14:00:55 +08:00
jxxghp
e773a9d9d4 Merge pull request #755 from thsrite/customization 2023-10-08 12:22:56 +08:00
thsrite
b570542fab fix 2023-10-08 12:16:45 +08:00
thsrite
09716e98ba feat custom placeholder 2023-10-08 11:59:52 +08:00
jxxghp
9236b361e2 Merge remote-tracking branch 'origin/main' 2023-10-08 06:56:57 +08:00
jxxghp
f281d8c068 fix #749 2023-10-08 06:56:45 +08:00
jxxghp
83ed17d5c1 Merge pull request #752 from thsrite/main
feat 药丸 forum check-in
2023-10-07 20:54:25 +08:00
jxxghp
e2671dd4ed fix dockerfile 2023-10-07 05:52:43 -07:00
thsrite
4c4d640331 feat 药丸 forum check-in 2023-10-07 20:51:32 +08:00
jxxghp
6c4307c918 fix #750 2023-10-07 05:29:23 -07:00
jxxghp
5a7062c699 fix 2023-10-07 05:03:19 -07:00
jxxghp
7da01f7404 fix 2023-10-07 05:03:06 -07:00
jxxghp
2b695cb8c6 fix #748 2023-10-07 04:59:07 -07:00
jxxghp
599817eec7 test 2023-10-07 04:44:06 -07:00
jxxghp
11fa33be0a test 2023-10-07 04:33:52 -07:00
jxxghp
b5ac9d4ce4 fix app.env 2023-10-07 04:08:19 -07:00
jxxghp
78f0ac0042 fix README.md 2023-10-07 04:01:21 -07:00
jxxghp
00ecd7adc5 Update app.env 2023-10-07 18:24:02 +08:00
jxxghp
c39cb3bffc Update app.env 2023-10-07 18:22:32 +08:00
jxxghp
2fa902bfff Merge pull request #747 from thsrite/main 2023-10-07 18:09:25 +08:00
thsrite
f8bcd351ae fix dependencies 2023-10-07 18:08:33 +08:00
jxxghp
6013d99bf6 v1.2.9 2023-10-07 17:21:08 +08:00
jxxghp
e7c3977f7b fix README.md 2023-10-07 12:26:16 +08:00
jxxghp
47e1218fe0 fix #732 2023-10-07 10:31:33 +08:00
jxxghp
a71a95892f fix 2023-10-05 23:23:33 -07:00
jxxghp
b5f53e309f fix 2023-10-05 23:12:46 -07:00
jxxghp
3164ba2d98 fix #734 2023-10-05 17:57:47 -07:00
jxxghp
89854d188d fix actor thumb 2023-10-05 17:49:31 -07:00
jxxghp
79c7475435 fix tmdb lru cache 2023-10-05 17:41:02 -07:00
jxxghp
2ee477c35e fix requests session stream 2023-10-05 17:32:23 -07:00
jxxghp
5bcd90c569 fix requests session 2023-10-05 17:21:59 -07:00
jxxghp
1a49c7c59e try fix 2023-10-05 07:44:21 +08:00
jxxghp
d995932a1c fix personmeta 2023-10-04 14:34:42 +08:00
jxxghp
1b0bbbbbfd fix webhook plugin 2023-10-04 08:01:30 +08:00
jxxghp
2aa93fa341 fix webhook plugin 2023-10-04 08:01:02 +08:00
jxxghp
a970f90c6f Merge remote-tracking branch 'origin/main' 2023-10-04 07:33:38 +08:00
jxxghp
44f612fed5 v1.2.8 2023-10-04 07:33:31 +08:00
jxxghp
564a48dd8f fix 2023-10-03 16:24:27 -07:00
jxxghp
9d029de56a fix 2023-10-03 16:23:05 -07:00
jxxghp
2dd3fc5d8c fix #722 2023-10-03 16:19:43 -07:00
jxxghp
9c335dbdfb fix #724 2023-10-03 16:17:19 -07:00
jxxghp
0e30ea92f1 fix #726 2023-10-03 16:14:04 -07:00
jxxghp
a0ced4e43c auth sites: support xingtan.one 2023-10-03 16:05:50 -07:00
jxxghp
cfaaf65edc support xingtan 2023-10-04 07:03:13 +08:00
jxxghp
35be18bb1a fix 2023-10-01 21:55:49 +08:00
jxxghp
02296e1758 fix 2023-10-01 21:46:09 +08:00
jxxghp
0b84b05cdd fix #705 2023-10-01 21:36:33 +08:00
jxxghp
99e3d5acca fix #707 2023-10-01 21:33:58 +08:00
jxxghp
8001511484 fix #690 2023-10-01 21:23:41 +08:00
jxxghp
8420b2ea85 fix personmeta 2023-10-01 21:08:16 +08:00
jxxghp
9af883acbb fix personmeta 2023-10-01 18:27:26 +08:00
jxxghp
e21ba5ad51 fix personmeta 2023-10-01 18:11:01 +08:00
jxxghp
1293fafd34 fix 2023-10-01 16:47:47 +08:00
jxxghp
4bcc6bd733 fix bug 2023-10-01 14:18:56 +08:00
jxxghp
53a514feb6 fix personmeta: support Douban 2023-10-01 14:16:36 +08:00
jxxghp
e697889aad fix 2023-10-01 12:37:18 +08:00
jxxghp
8b0fba054e Merge remote-tracking branch 'origin/main' 2023-10-01 12:28:46 +08:00
jxxghp
32ff385444 fix personmeta 2023-10-01 12:28:41 +08:00
jxxghp
8456c7f4a3 Merge pull request #718 from DDS-Derek/main
feature improvement form: add type selection
2023-10-01 11:55:56 +08:00
jxxghp
fcbfb63645 fix personmeta 2023-10-01 11:52:25 +08:00
DDSDerek
1fa7d15982 fix: issue 2023-10-01 10:07:51 +08:00
DDSDerek
a173978f6b feat: optimize issue 2023-10-01 10:06:11 +08:00
jxxghp
2f069afc77 fix personmeta 2023-10-01 08:15:19 +08:00
jxxghp
ea998b4e41 fix personmeta 2023-10-01 07:53:50 +08:00
jxxghp
ba27d02854 fix 2023-09-30 20:40:48 +08:00
jxxghp
f78df58906 fix 2023-09-30 20:36:51 +08:00
jxxghp
308683a7e9 fix scraper 2023-09-30 20:27:48 +08:00
jxxghp
b3f4a6f251 fix mediaserver 2023-09-30 15:27:01 +08:00
jxxghp
d1841d8f15 fix mediaserver 2023-09-30 15:16:53 +08:00
jxxghp
c8d6de3e9b Merge pull request #706 from song-zhou/main 2023-09-29 22:04:22 +08:00
Elsie Weber
938f5c8cea Merge branch 'jxxghp:main' into main 2023-09-29 21:57:50 +08:00
songzhou
d166930b0a fix: manually triggered subscription search had no effect 2023-09-29 21:57:41 +08:00
jxxghp
e1ac3c0d15 fix personmeta 2023-09-29 12:01:00 +08:00
jxxghp
59da489e05 Merge pull request #704 from developer-wlj/wlj0909 2023-09-29 10:30:16 +08:00
developer-wlj
be12c736fb Merge branch 'jxxghp:main' into wlj0909 2023-09-29 10:14:36 +08:00
jxxghp
71c52aae7b Merge pull request #703 from DDS-Derek/main 2023-09-29 10:12:32 +08:00
mayun110
dbfe2af53c fix PersonMeta plugin: avatars not displayed in Jellyfin 2023-09-29 10:11:18 +08:00
DDSRem
cca898f5b6 feat: docker build use cache 2023-09-29 09:31:47 +08:00
jxxghp
9abd780aa2 fix PersonMeta 2023-09-29 08:34:45 +08:00
jxxghp
2e89eeca2c fix #694 multiple searches per site 2023-09-29 08:20:55 +08:00
jxxghp
dbb3bead6b fix #696 2023-09-28 22:38:11 +08:00
jxxghp
d0b88ec7f6 fix #696 2023-09-28 22:36:35 +08:00
jxxghp
5898bc7eb1 - fix issues in v1.2.7 2023-09-28 22:19:13 +08:00
jxxghp
cfe113f6c3 fix bug 2023-09-28 22:16:21 +08:00
jxxghp
83500128c9 Merge pull request #698 from song-zhou/main
fix wrong libraryId when notifying Emby
2023-09-28 22:09:32 +08:00
songzhou
2bff3a80da fix wrong libraryId when notifying Emby 2023-09-28 22:05:43 +08:00
jxxghp
3dd7b33f3e fix bug 2023-09-28 21:37:57 +08:00
jxxghp
8de487b0bf fix bug 2023-09-28 21:27:39 +08:00
jxxghp
ce88a6818f fix #693 2023-09-28 21:18:40 +08:00
jxxghp
6172832f41 fix: retry image downloads 2023-09-28 21:13:40 +08:00
jxxghp
a0ed228f4b fix actor avatars & Chinese names 2023-09-28 21:11:08 +08:00
jxxghp
01fd56a019 feat: prefer Chinese names from TMDB for cast & crew 2023-09-28 20:24:47 +08:00
jxxghp
087fcd340a fix #692 2023-09-28 20:06:03 +08:00
jxxghp
b3b09f3c03 Merge pull request #692 from DDS-Derek/main 2023-09-28 20:04:30 +08:00
DDSRem
11d17bf21a fix: https://github.com/jxxghp/MoviePilot/pull/654 2023-09-28 19:57:28 +08:00
jxxghp
b1ee80edee fix themoivedb timeout 2023-09-28 19:08:34 +08:00
jxxghp
107d496adb v1.2.7 2023-09-28 17:43:34 +08:00
jxxghp
9f1112b58d fix 2023-09-28 17:41:48 +08:00
jxxghp
989d6e3fe7 fix 2023-09-28 17:29:21 +08:00
jxxghp
3999c64853 add PersonMeta 2023-09-28 17:11:55 +08:00
jxxghp
760e3d6de0 Update __init__.py 2023-09-28 16:32:56 +08:00
jxxghp
02111a3b9f fix #684 2023-09-28 16:23:10 +08:00
jxxghp
e6af2c0f34 fix 2023-09-28 16:14:52 +08:00
jxxghp
bd4c639761 Merge pull request #688 from thsrite/main
feat scheduled media library cleanup plugin
2023-09-28 15:46:13 +08:00
thsrite
d39b7ec021 fix 2023-09-28 15:40:13 +08:00
thsrite
63ca5f5017 fix download progress push logic 2023-09-28 15:32:07 +08:00
thsrite
2202cf457b fix 2023-09-28 15:25:04 +08:00
thsrite
5d04b7abd6 feat scheduled media library cleanup plugin 2023-09-28 15:21:01 +08:00
jxxghp
0588d5d5f3 fix get_location 2023-09-28 14:49:54 +08:00
jxxghp
5a59e443d7 fix 2023-09-28 14:43:08 +08:00
jxxghp
470f4df979 fix #669 2023-09-28 14:32:34 +08:00
jxxghp
84bda71330 fix #657 2023-09-28 14:16:27 +08:00
jxxghp
ea883255cb fix #685 add resourceType 2023-09-28 13:45:06 +08:00
jxxghp
e9abb69fb5 fix 2023-09-28 12:52:32 +08:00
jxxghp
ff63390794 Merge pull request #686 from thsrite/main 2023-09-28 12:39:12 +08:00
jxxghp
78b3135276 feat MediaSyncDel plugin: also handle download tasks when source files are deleted manually 2023-09-28 12:35:41 +08:00
thsrite
15bd2c09ed fix 2023-09-28 12:28:24 +08:00
thsrite
34d44857e4 fix messageforward 2023-09-28 12:11:39 +08:00
thsrite
dccded2d3e fix: include user in download messages 2023-09-28 12:03:18 +08:00
thsrite
295cafc060 fix 2023-09-28 11:56:13 +08:00
thsrite
c792e97f67 fix: include recognized name in download progress 2023-09-28 11:41:30 +08:00
thsrite
d30a02987d feat download progress push plugin 2023-09-28 11:10:34 +08:00
jxxghp
84d4c9cf73 feat rename supports episode_title 2023-09-28 10:58:31 +08:00
jxxghp
21ecd1f708 fix #673 2023-09-28 08:34:34 +08:00
jxxghp
248b9a8e8c fix #663 2023-09-28 08:24:39 +08:00
jxxghp
3c7abfada6 fix #677 2023-09-28 08:14:22 +08:00
jxxghp
f363656e0a Merge remote-tracking branch 'origin/main' 2023-09-28 08:09:01 +08:00
jxxghp
e9ee9dbce1 fix #676 2023-09-28 08:08:55 +08:00
jxxghp
ab0b8653ab Merge pull request #674 from developer-wlj/wlj0909 2023-09-27 18:12:10 +08:00
developer-wlj
20711e17fb Merge branch 'jxxghp:main' into wlj0909 2023-09-27 18:06:51 +08:00
mayun110
a89bd8b816 Merge remote-tracking branch 'origin/wlj0909' into wlj0909 2023-09-27 18:05:46 +08:00
mayun110
3692cfea64 fix: bug where the Mandarin (国语) label could not be matched 2023-09-27 15:38:35 +08:00
jxxghp
81d9d39029 fix bug 2023-09-27 14:12:11 +08:00
jxxghp
f5a61ceff1 fix bug 2023-09-27 13:40:35 +08:00
108 changed files with 4724 additions and 1438 deletions


@@ -14,6 +14,18 @@ body:
description: 目前使用的程序版本
validations:
required: true
- type: dropdown
id: type
attributes:
label: 功能改进类型
description: 你需要在下面哪个方面改进功能
options:
- 主程序
- 插件
- Docker
- 其他
validations:
required: true
- type: textarea
id: feature-request
attributes:

.github/workflows/build-windows.yml vendored Normal file

@@ -0,0 +1,65 @@
name: MoviePilot Windows Builder
on:
workflow_dispatch:
push:
branches:
- main
paths:
- version.py
jobs:
Windows-build:
runs-on: windows-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Release Version
id: release_version
run: |
$app_version = Select-String -Path "version.py" -Pattern "APP_VERSION\s=\s'v(.*)'" | ForEach-Object { $_.Matches.Groups[1].Value }
Add-Content -Path $env:GITHUB_ENV -Value "app_version=$app_version"
- name: Init Python 3.11.4
uses: actions/setup-python@v4
with:
python-version: '3.11.4'
- name: Install Dependent Packages
run: |
python -m pip install --upgrade pip
pip install wheel pyinstaller
pip install -r requirements.txt
shell: pwsh
- name: Pyinstaller
run: |
pyinstaller windows.spec
shell: pwsh
- name: Upload Windows File
uses: actions/upload-artifact@v3
with:
name: windows
path: dist/MoviePilot.exe
- name: Generate Release
id: generate_release
uses: actions/create-release@latest
with:
tag_name: v${{ env.app_version }}
release_name: v${{ env.app_version }}
body: ${{ github.event.commits[0].message }}
draft: false
prerelease: false
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Upload Release Asset
uses: dwenegar/upload-release-assets@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
release_id: ${{ steps.generate_release.outputs.id }}
assets_path: |
dist/MoviePilot.exe
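The `Release Version` step above parses `APP_VERSION` out of `version.py`; the Docker workflow later in this diff does the same with `sed`. A minimal Python sketch of that extraction (the regex mirrors the workflows' pattern; the sample `version.py` content is illustrative):

```python
import re

def extract_app_version(version_py: str) -> str:
    """Extract the version number from version.py source text.

    Mirrors the workflow regex APP_VERSION\\s=\\s'v(.*)': it captures
    everything after the leading 'v' in the quoted version string.
    """
    match = re.search(r"APP_VERSION\s=\s'v(.*)'", version_py)
    if not match:
        raise ValueError("APP_VERSION not found")
    return match.group(1)

# The workflow would export app_version=1.2.9 for this file content.
print(extract_app_version("APP_VERSION = 'v1.2.9'"))  # -> 1.2.9
```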


@@ -1,4 +1,4 @@
name: MoviePilot Builder
name: MoviePilot Docker Builder
on:
workflow_dispatch:
push:
@@ -8,23 +8,20 @@ on:
- version.py
jobs:
build:
Docker-build:
runs-on: ubuntu-latest
name: Build Docker Image
steps:
-
name: Checkout
- name: Checkout
uses: actions/checkout@v4
-
name: Release version
- name: Release version
id: release_version
run: |
app_version=$(cat version.py |sed -ne "s/APP_VERSION\s=\s'v\(.*\)'/\1/gp")
echo "app_version=$app_version" >> $GITHUB_ENV
-
name: Docker meta
- name: Docker meta
id: meta
uses: docker/metadata-action@v5
with:
@@ -33,23 +30,19 @@ jobs:
type=raw,value=${{ env.app_version }}
type=raw,value=latest
-
name: Set Up QEMU
- name: Set Up QEMU
uses: docker/setup-qemu-action@v3
-
name: Set Up Buildx
- name: Set Up Buildx
uses: docker/setup-buildx-action@v3
-
name: Login DockerHub
- name: Login DockerHub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
-
name: Build Image
- name: Build Image
uses: docker/build-push-action@v5
with:
context: .
@@ -62,3 +55,5 @@ jobs:
MOVIEPILOT_VERSION=${{ env.app_version }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha, scope=${{ github.workflow }}
cache-to: type=gha, scope=${{ github.workflow }}


@@ -1,36 +0,0 @@
name: MoviePilot Release
on:
workflow_dispatch:
push:
branches:
- main
paths:
- version.py
jobs:
build:
runs-on: ubuntu-latest
name: Build Docker Image
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Release Version
id: release_version
run: |
app_version=$(cat version.py |sed -ne "s/APP_VERSION\s=\s'v\(.*\)'/\1/gp")
echo "app_version=$app_version" >> $GITHUB_ENV
-
name: Generate Release
uses: actions/create-release@latest
with:
tag_name: v${{ env.app_version }}
release_name: v${{ env.app_version }}
body: ${{ github.event.commits[0].message }}
draft: false
prerelease: false
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}


@@ -3,39 +3,16 @@ ARG MOVIEPILOT_VERSION
ENV LANG="C.UTF-8" \
HOME="/moviepilot" \
TERM="xterm" \
TZ="Asia/Shanghai" \
PUID=0 \
PGID=0 \
UMASK=000 \
MOVIEPILOT_AUTO_UPDATE=true \
MOVIEPILOT_AUTO_UPDATE_DEV=false \
PORT=3001 \
NGINX_PORT=3000 \
CONFIG_DIR="/config" \
API_TOKEN="moviepilot" \
AUTH_SITE="iyuu" \
DOWNLOAD_PATH="/downloads" \
DOWNLOAD_CATEGORY="false" \
TORRENT_TAG="MOVIEPILOT" \
LIBRARY_PATH="" \
LIBRARY_CATEGORY="false" \
TRANSFER_TYPE="copy" \
COOKIECLOUD_HOST="https://movie-pilot.org/cookiecloud" \
COOKIECLOUD_KEY="" \
COOKIECLOUD_PASSWORD="" \
MESSAGER="telegram" \
TELEGRAM_TOKEN="" \
TELEGRAM_CHAT_ID="" \
DOWNLOADER="qbittorrent" \
QB_HOST="127.0.0.1:8080" \
QB_USER="admin" \
QB_PASSWORD="adminadmin" \
MEDIASERVER="emby" \
EMBY_HOST="http://127.0.0.1:8096" \
EMBY_API_KEY=""
MOVIEPILOT_AUTO_UPDATE=true \
MOVIEPILOT_AUTO_UPDATE_DEV=false \
CONFIG_DIR="/config"
WORKDIR "/app"
COPY . .
RUN apt-get update \
RUN apt-get update -y \
&& apt-get -y install \
musl-dev \
nginx \
@@ -56,26 +33,20 @@ RUN apt-get update \
elif [ "$(uname -m)" = "aarch64" ]; \
then ln -s /usr/lib/aarch64-linux-musl/libc.so /lib/libc.musl-aarch64.so.1; \
fi \
&& cp -f /app/nginx.conf /etc/nginx/nginx.template.conf \
&& cp -f /app/update /usr/local/bin/mp_update \
&& cp -f /app/entrypoint /entrypoint \
&& chmod +x /entrypoint /usr/local/bin/mp_update \
&& mkdir -p ${HOME} /var/lib/haproxy/server-state \
&& groupadd -r moviepilot -g 911 \
&& useradd -r moviepilot -g moviepilot -d ${HOME} -s /bin/bash -u 911 \
&& apt-get autoremove -y \
&& apt-get clean -y \
&& rm -rf \
/tmp/* \
/moviepilot/.cache \
/var/lib/apt/lists/* \
/var/tmp/*
COPY requirements.txt requirements.txt
RUN apt-get update -y \
&& apt-get install -y build-essential \
&& pip install --upgrade pip \
&& pip install Cython \
&& pip install -r requirements.txt \
&& playwright install-deps chromium \
&& python_ver=$(python3 -V | awk '{print $2}') \
&& echo "/app/" > /usr/local/lib/python${python_ver%.*}/site-packages/app.pth \
&& echo 'fs.inotify.max_user_watches=5242880' >> /etc/sysctl.conf \
&& echo 'fs.inotify.max_user_instances=5242880' >> /etc/sysctl.conf \
&& locale-gen zh_CN.UTF-8 \
&& FRONTEND_VERSION=$(curl -sL "https://api.github.com/repos/jxxghp/MoviePilot-Frontend/releases/latest" | jq -r .tag_name) \
&& curl -sL "https://github.com/jxxghp/MoviePilot-Frontend/releases/download/${FRONTEND_VERSION}/dist.zip" | busybox unzip -d / - \
&& mv /dist /public \
&& apt-get remove -y build-essential \
&& apt-get autoremove -y \
&& apt-get clean -y \
@@ -84,6 +55,22 @@ RUN apt-get update \
/moviepilot/.cache \
/var/lib/apt/lists/* \
/var/tmp/*
COPY . .
RUN cp -f /app/nginx.conf /etc/nginx/nginx.template.conf \
&& cp -f /app/update /usr/local/bin/mp_update \
&& cp -f /app/entrypoint /entrypoint \
&& chmod +x /entrypoint /usr/local/bin/mp_update \
&& mkdir -p ${HOME} /var/lib/haproxy/server-state \
&& groupadd -r moviepilot -g 911 \
&& useradd -r moviepilot -g moviepilot -d ${HOME} -s /bin/bash -u 911 \
&& python_ver=$(python3 -V | awk '{print $2}') \
&& echo "/app/" > /usr/local/lib/python${python_ver%.*}/site-packages/app.pth \
&& echo 'fs.inotify.max_user_watches=5242880' >> /etc/sysctl.conf \
&& echo 'fs.inotify.max_user_instances=5242880' >> /etc/sysctl.conf \
&& locale-gen zh_CN.UTF-8 \
&& FRONTEND_VERSION=$(curl -sL "https://api.github.com/repos/jxxghp/MoviePilot-Frontend/releases/latest" | jq -r .tag_name) \
&& curl -sL "https://github.com/jxxghp/MoviePilot-Frontend/releases/download/${FRONTEND_VERSION}/dist.zip" | busybox unzip -d / - \
&& mv /dist /public
EXPOSE 3000
VOLUME [ "/config" ]
ENTRYPOINT [ "/entrypoint" ]
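The Dockerfile above resolves the latest MoviePilot-Frontend release tag via the GitHub API, then downloads that release's `dist.zip`. A rough Python equivalent of the curl/jq flow (illustrative only; the real build uses `curl`, `jq`, and `busybox unzip`):

```python
import json
from urllib.request import urlopen

API = "https://api.github.com/repos/jxxghp/MoviePilot-Frontend/releases/latest"

def dist_url(tag: str) -> str:
    """Build the dist.zip download URL for a given release tag."""
    return ("https://github.com/jxxghp/MoviePilot-Frontend/"
            f"releases/download/{tag}/dist.zip")

def latest_tag() -> str:
    """Equivalent of: curl -sL $API | jq -r .tag_name"""
    with urlopen(API) as resp:
        return json.load(resp)["tag_name"]
```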

README.md

@@ -15,23 +15,23 @@ Dockerhttps://hub.docker.com/r/jxxghp/moviepilot
## 安装
1. **安装CookieCloud插件**
### 1. **安装CookieCloud插件**
站点信息需要通过CookieCloud同步获取因此需要安装CookieCloud插件将浏览器中的站点Cookie数据同步到云端后再同步到MoviePilot使用。 插件下载地址请点击 [这里](https://github.com/easychen/CookieCloud/releases)。
2. **安装CookieCloud服务端可选**
### 2. **安装CookieCloud服务端可选**
MoviePilot内置了公共CookieCloud服务器如果需要自建服务可参考 [CookieCloud](https://github.com/easychen/CookieCloud) 项目进行搭建docker镜像请点击 [这里](https://hub.docker.com/r/easychen/cookiecloud)。
**声明:** 本项目不会收集用户敏感数据Cookie同步也是基于CookieCloud项目实现非本项目提供的能力。技术角度上CookieCloud采用端到端加密在个人不泄露`用户KEY``端对端加密密码`的情况下第三方无法窃取任何用户信息(包括服务器持有者)。如果你不放心,可以不使用公共服务或者不使用本项目,但如果使用后发生了任何信息泄露与本项目无关!
3. **安装配套管理软件**
### 3. **安装配套管理软件**
MoviePilot需要配套下载器和媒体服务器配合使用。
- 下载器支持qBittorrent、TransmissionQB版本号要求>= 4.3.9TR版本号要求>= 3.0推荐使用QB。
- 媒体服务器支持Jellyfin、Emby、Plex推荐使用Emby。
4. **安装MoviePilot**
### 4. **安装MoviePilot**
目前仅提供docker镜像点击 [这里](https://hub.docker.com/r/jxxghp/moviepilot) 或执行命令:
@@ -41,50 +41,56 @@ docker pull jxxghp/moviepilot:latest
## 配置
项目的所有配置均通过环境变量进行设置,部分环境建立容器后会自动显示待配置项,如未自动显示配置项则需要手动增加对应环境变量。
项目的所有配置均通过环境变量进行设置,支持两种配置方式:
- 在docker环境变量部分进行参数配置部分环境建立容器后会自动显示待配置项如未自动显示配置项则需要手动增加对应环境变量。
- 下载 [app.env](https://github.com/jxxghp/MoviePilot/raw/main/config/app.env) 文件,修改好配置后放置到配置文件映射路径根目录,配置项可根据说明自主增减。
配置文件映射路径:`/config`
配置文件映射路径:`/config`,配置项生效优先级:环境变量 > env文件 > 默认值,部分参数如路径映射、站点认证、权限端口等必须通过环境变量进行配置。
> $\color{red}{*}$ 号标识的为必填项,其它为可选项,可选项可删除配置变量从而使用默认值。
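The stated precedence (environment variable > env file > default) can be sketched as a small resolver; `resolve_setting` and its arguments are illustrative, not MoviePilot's actual settings loader:

```python
import os

def resolve_setting(name: str, env_file: dict, default=None):
    """Resolve a config value with the documented precedence:
    environment variable > app.env file > built-in default."""
    if name in os.environ:
        return os.environ[name]
    if name in env_file:
        return env_file[name]
    return default

# With no NGINX_PORT environment variable set and an empty app.env,
# the built-in default wins.
```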
### 1. **基础设置**
- **PUID**:运行程序用户的`uid`,默认`0`
- **PGID**:运行程序用户的`gid`,默认`0`
- **UMASK**:掩码权限,默认`000`,可以考虑设置为`022`
- **MOVIEPILOT_AUTO_UPDATE**:重启更新,`true`/`false`,默认`true` **注意:如果出现网络问题可以配置`PROXY_HOST`,具体看下方`PROXY_HOST`解释**
- **NGINX_PORT** WEB服务端口,默认`3000`,可自行修改不能与API服务端口冲突
- **PORT** API服务端口默认`3001`可自行修改不能与WEB服务端口冲突
- **SUPERUSER** 超级管理员用户名,默认`admin`,安装后使用该用户登录后台管理界面
- **SUPERUSER_PASSWORD** 超级管理员初始密码,默认`password`,建议修改为复杂密码
- **API_TOKEN** API密钥默认`moviepilot`在媒体服务器Webhook、微信回调等地址配置中需要加上`?token=`该值,建议修改为复杂字符串
- **PROXY_HOST** 网络代理可选访问themoviedb或者重启更新需要使用代理访问格式为`http(s)://ip:port`
- **NGINX_PORT $\color{red}{*}$ ** WEB服务端口默认`3000`可自行修改不能与API服务端口冲突仅支持环境变量配置
- **PORT $\color{red}{*}$ ** API服务端口默认`3001`可自行修改不能与WEB服务端口冲突仅支持环境变量配置
- **PUID**:运行程序用户的`uid`,默认`0`(仅支持环境变量配置)
- **PGID**:运行程序用户的`gid`,默认`0`(仅支持环境变量配置)
- **UMASK**:掩码权限,默认`000`,可以考虑设置为`022`(仅支持环境变量配置)
- **MOVIEPILOT_AUTO_UPDATE**:重启更新,`true`/`false`,默认`true` **注意:如果出现网络问题可以配置`PROXY_HOST`,具体看下方`PROXY_HOST`解释**(仅支持环境变量配置)
- **MOVIEPILOT_AUTO_UPDATE_DEV**:重启时更新到未发布的开发版本代码,`true`/`false`,默认`false`(仅支持环境变量配置)
---
- **SUPERUSER $\color{red}{*}$ ** 超级管理员用户名,默认`admin`,安装后使用该用户登录后台管理界面
- **SUPERUSER_PASSWORD $\color{red}{*}$ ** 超级管理员初始密码,默认`password`,建议修改为复杂密码
- **API_TOKEN $\color{red}{*}$ ** API密钥默认`moviepilot`在媒体服务器Webhook、微信回调等地址配置中需要加上`?token=`该值,建议修改为复杂字符串
- **PROXY_HOST** 网络代理访问themoviedb或者重启更新需要使用代理访问格式为`http(s)://ip:port``socks5://user:pass@host:port`(可选)
- **TMDB_API_DOMAIN** TMDB API地址默认`api.themoviedb.org`,也可配置为`api.tmdb.org`或其它中转代理服务地址,能连通即可
- **DOWNLOAD_PATH** 下载保存目录,**注意:需要将`moviepilot``下载器`的映射路径保持一致**,否则会导致下载文件无法转移
- **DOWNLOAD_MOVIE_PATH** 电影下载保存目录,不设置则下载到`DOWNLOAD_PATH`
- **DOWNLOAD_TV_PATH** 电视剧下载保存目录,不设置则下载到`DOWNLOAD_PATH`
- **DOWNLOAD_ANIME_PATH** 动漫下载保存目录,不设置则下载到`DOWNLOAD_PATH`
- **DOWNLOAD_CATEGORY** 下载二级分类开关,`true`/`false`,默认`false`,开启后会根据配置`category.yaml`自动在下载目录下建立二级目录分类
- **DOWNLOAD_SUBTITLE** 下载站点字幕,`true`/`false`,默认`true`
- **REFRESH_MEDIASERVER** 入库刷新媒体库,`true`/`false`,默认`true`
- **TMDB_IMAGE_DOMAIN** TMDB图片地址默认`image.tmdb.org`可配置为其它中转代理以加速TMDB图片显示`static-mdb.v.geilijiasu.com`
---
- **SCRAP_METADATA** 刮削入库的媒体文件,`true`/`false`,默认`true`
- **SCRAP_SOURCE** 刮削元数据及图片使用的数据源,`themoviedb`/`douban`,默认`themoviedb`
- **SCRAP_FOLLOW_TMDB** 新增已入库媒体是否跟随TMDB信息变化`true`/`false`,默认`true`
- **TORRENT_TAG** 种子标签,默认为`MOVIEPILOT`设置后只有MoviePilot添加的下载才会处理留空所有下载器中的任务均会处理
- **LIBRARY_PATH** 媒体库目录,多个目录使用`,`分隔
- **LIBRARY_MOVIE_NAME** 电影媒体库目录名,默认`电影`
- **LIBRARY_TV_NAME** 电视剧媒体库目录名,默认`电视剧`
- **LIBRARY_ANIME_NAME** 动漫媒体库目录,默认`电视剧/动漫`
- **LIBRARY_CATEGORY** 媒体库二级分类开关,`true`/`false`,默认`false`,开启后会根据配置`category.yaml`自动在媒体库目录下建立二级目录分类
- **TRANSFER_TYPE** 转移方式,支持`link`/`copy`/`move`/`softlink` **注意:在`link`和`softlink`转移方式下,转移后的文件会继承源文件的权限掩码,不受`UMASK`影响**
- **COOKIECLOUD_HOST** CookieCloud服务器地址格式`http(s)://ip:port`,不配置默认使用内建服务器`https://movie-pilot.org/cookiecloud`
- **COOKIECLOUD_KEY** CookieCloud用户KEY
- **COOKIECLOUD_PASSWORD** CookieCloud端对端加密密码
- **COOKIECLOUD_INTERVAL** CookieCloud同步间隔(分钟)
- **OCR_HOST** OCR识别服务器地址格式`http(s)://ip:port`用于识别站点二维码实现自动登录获取Cookie等不配置默认使用内建服务器`https://movie-pilot.org`,可使用 [这个镜像](https://hub.docker.com/r/jxxghp/moviepilot-ocr) 自行搭建。
- **USER_AGENT** CookieCloud对应的浏览器UA可选,设置后可增加连接站点的成功率,同步站点后可以在管理界面中修改
- **AUTO_DOWNLOAD_USER** 交互搜索自动下载用户ID使用,分割
---
- **TRANSFER_TYPE $\color{red}{*}$ ** 整理转移方式,支持`link`/`copy`/`move`/`softlink` **注意:在`link`和`softlink`转移方式下,转移后的文件会继承源文件的权限掩码,不受`UMASK`影响**
- **LIBRARY_PATH $\color{red}{*}$ ** 媒体库目录,多个目录使用`,`分隔
- **LIBRARY_MOVIE_NAME** 电媒体库目录名称(不是完整路径),默认`电`
- **LIBRARY_TV_NAME** 电视剧媒体库目录称(不是完整路径),默认`电视剧`
- **LIBRARY_ANIME_NAME** 动漫媒体库目录称(不是完整路径),默认`电视剧/动漫`
- **LIBRARY_CATEGORY** 媒体库二级分类开关,`true`/`false`,默认`false`,开启后会根据配置 [category.yaml](https://github.com/jxxghp/MoviePilot/raw/main/config/category.yaml) 自动在媒体库目录下建立二级目录分类
---
- **COOKIECLOUD_HOST $\color{red}{*}$ ** CookieCloud服务器地址格式`http(s)://ip:port`,不配置默认使用内建服务器`https://movie-pilot.org/cookiecloud`
- **COOKIECLOUD_KEY $\color{red}{*}$ ** CookieCloud用户KEY
- **COOKIECLOUD_PASSWORD $\color{red}{*}$ ** CookieCloud端对端加密密码
- **COOKIECLOUD_INTERVAL $\color{red}{*}$ ** CookieCloud同步间隔分钟
- **USER_AGENT $\color{red}{*}$ ** CookieCloud保存Cookie对应的浏览器UA建议配置,设置后可增加连接站点的成功率,同步站点后可以在管理界面中修改
- **OCR_HOST** OCR识别服务器地址格式`http(s)://ip:port`用于识别站点验证码实现自动登录获取Cookie等不配置默认使用内建服务器`https://movie-pilot.org`,可使用 [这个镜像](https://hub.docker.com/r/jxxghp/moviepilot-ocr) 自行搭建。
---
- **SUBSCRIBE_MODE** 订阅模式,`rss`/`spider`,默认`spider``rss`模式通过定时刷新RSS来匹配订阅RSS地址会自动获取也可手动维护对站点压力小同时可设置订阅刷新周期24小时运行但订阅和下载通知不能过滤和显示免费推荐使用rss模式。
- **SUBSCRIBE_RSS_INTERVAL** RSS订阅模式刷新时间间隔分钟默认`30`分钟不能小于5分钟。
- **SUBSCRIBE_SEARCH** 订阅搜索,`true`/`false`,默认`false`开启后会每隔24小时对所有订阅进行全量搜索以补齐缺失剧集一般情况下正常订阅即可订阅搜索只做为兜底会增加站点压力不建议开启
- **MESSAGER** 消息通知渠道,支持 `telegram`/`wechat`/`slack`/`synologychat`,开启多个渠道时使用`,`分隔。同时还需要配置对应渠道的环境变量,非对应渠道的变量可删除,推荐使用`telegram`
- **SEARCH_SOURCE** 媒体信息搜索来源,`themoviedb`/`douban`,默认`themoviedb`
---
- **AUTO_DOWNLOAD_USER** 远程交互搜索时自动择优下载的用户ID多个用户使用,分割,未设置需要选择资源或者回复`0`
- **MESSAGER $\color{red}{*}$ ** 消息通知渠道,支持 `telegram`/`wechat`/`slack`/`synologychat`,开启多个渠道时使用`,`分隔。同时还需要配置对应渠道的环境变量,非对应渠道的变量可删除,推荐使用`telegram`
- `wechat`设置项:
@@ -101,21 +107,29 @@ docker pull jxxghp/moviepilot:latest
- **TELEGRAM_TOKEN** Telegram Bot Token
- **TELEGRAM_CHAT_ID** Telegram Chat ID
- **TELEGRAM_USERS** Telegram 用户ID多个使用,分隔只有用户ID在列表中才可以使用Bot如未设置则均可以使用Bot
- **TELEGRAM_ADMINS** Telegram 管理员ID多个使用,分隔只有管理员才可以操作Bot菜单如未设置则均可以操作菜单
- **TELEGRAM_ADMINS** Telegram 管理员ID多个使用,分隔只有管理员才可以操作Bot菜单如未设置则均可以操作菜单(可选)
- `slack`设置项:
- **SLACK_OAUTH_TOKEN** Slack Bot User OAuth Token
- **SLACK_APP_TOKEN** Slack App-Level Token
- **SLACK_CHANNEL** Slack 频道名称,默认`全体`
- **SLACK_CHANNEL** Slack 频道名称,默认`全体`(可选)
- `synologychat`设置项:
- **SYNOLOGYCHAT_WEBHOOK** 在Synology Chat中创建机器人获取机器人`传入URL`
- **SYNOLOGYCHAT_TOKEN** SynologyChat机器人`令牌`
- **DOWNLOADER** 下载器,支持`qbittorrent`/`transmission`QB版本号要求>= 4.3.9TR版本号要求>= 3.0,同时还需要配置对应渠道的环境变量,非对应渠道的变量可删除,推荐使用`qbittorrent`
---
- **DOWNLOAD_PATH $\color{red}{*}$ ** 下载保存目录,**注意:需要将`moviepilot``下载器`的映射路径保持一致**,否则会导致下载文件无法转移
- **DOWNLOAD_MOVIE_PATH** 电影下载保存目录路径,不设置则下载到`DOWNLOAD_PATH`
- **DOWNLOAD_TV_PATH** 电视剧下载保存目录路径,不设置则下载到`DOWNLOAD_PATH`
- **DOWNLOAD_ANIME_PATH** 动漫下载保存目录路径,不设置则下载到`DOWNLOAD_PATH`
- **DOWNLOAD_CATEGORY** 下载二级分类开关,`true`/`false`,默认`false`,开启后会根据配置 [category.yaml](https://github.com/jxxghp/MoviePilot/raw/main/config/category.yaml) 自动在下载目录下建立二级目录分类
- **DOWNLOAD_SUBTITLE** 下载站点字幕,`true`/`false`,默认`true`
- **DOWNLOADER_MONITOR** 下载器监控,`true`/`false`,默认为`true`,开启后下载完成时才会自动整理入库
- **TORRENT_TAG** 下载器种子标签,默认为`MOVIEPILOT`设置后只有MoviePilot添加的下载才会处理留空所有下载器中的任务均会处理
- **DOWNLOADER $\color{red}{*}$ ** 下载器,支持`qbittorrent`/`transmission`QB版本号要求>= 4.3.9TR版本号要求>= 3.0,同时还需要配置对应渠道的环境变量,非对应渠道的变量可删除,推荐使用`qbittorrent`
- `qbittorrent`设置项:
@@ -130,9 +144,9 @@ docker pull jxxghp/moviepilot:latest
- **TR_USER** transmission用户名
- **TR_PASSWORD** transmission密码
- **DOWNLOADER_MONITOR** 下载器监控,`true`/`false`,默认为`true`,开启后下载完成时才会自动整理入库
- **MEDIASERVER** 媒体服务器,支持`emby`/`jellyfin`/`plex`,同时开启多个使用`,`分隔。还需要配置对应媒体服务器的环境变量,非对应媒体服务器的变量可删除,推荐使用`emby`
---
- **REFRESH_MEDIASERVER** 入库后是否刷新媒体服务器,`true`/`false`,默认`true`
- **MEDIASERVER $\color{red}{*}$ ** 媒体服务器,支持`emby`/`jellyfin`/`plex`,同时开启多个使用`,`分隔。还需要配置对应媒体服务器的环境变量,非对应媒体服务器的变量可删除,推荐使用`emby`
- `emby`设置项:
@@ -155,9 +169,9 @@ docker pull jxxghp/moviepilot:latest
### 2. **用户认证**
- **AUTH_SITE** 认证站点,支持`iyuu`/`hhclub`/`audiences`/`hddolby`/`zmpt`/`freefarm`/`hdfans`/`wintersakura`/`leaves`/`1ptba`/`icc2022`/`ptlsp`
`MoviePilot`需要认证后才能使用,配置`AUTH_SITE`后,需要根据下表配置对应站点的认证参数(**仅能通过docker环境变量配置**
`MoviePilot`需要认证后才能使用,配置`AUTH_SITE`后,需要根据下表配置对应站点的认证参数。
- **AUTH_SITE $\color{red}{*}$ ** 认证站点,支持`iyuu`/`hhclub`/`audiences`/`hddolby`/`zmpt`/`freefarm`/`hdfans`/`wintersakura`/`leaves`/`1ptba`/`icc2022`/`ptlsp`/`xingtan`
| 站点 | 参数 |
|:------------:|:-----------------------------------------------------:|
@@ -173,6 +187,7 @@ docker pull jxxghp/moviepilot:latest
| 1ptba | `1PTBA_UID`用户ID<br/>`1PTBA_PASSKEY`:密钥 |
| icc2022 | `ICC2022_UID`用户ID<br/>`ICC2022_PASSKEY`:密钥 |
| ptlsp | `PTLSP_UID`用户ID<br/>`PTLSP_PASSKEY`:密钥 |
| xingtan | `XINGTAN_UID`用户ID<br/>`XINGTAN_PASSKEY`:密钥 |
### 2. **进阶配置**
@@ -188,10 +203,12 @@ docker pull jxxghp/moviepilot:latest
> `original_title` 原语种标题
> `name` 识别名称
> `year` 年份
> `edition` 版本
> `resourceType`:资源类型
> `effect`:特效
> `edition` 版本(资源类型+特效)
> `videoFormat` 分辨率
> `releaseGroup` 制作组/字幕组
> `effect` 特效
> `customization` 自定义占位符
> `videoCodec` 视频编码
> `audioCodec` 音频编码
> `tmdbid` TMDBID
@@ -212,6 +229,7 @@ docker pull jxxghp/moviepilot:latest
> `season` 季号
> `episode` 集号
> `season_episode` 季集 SxxExx
> `episode_title` 集标题
`TV_RENAME_FORMAT`默认配置格式:
@@ -220,9 +238,7 @@ docker pull jxxghp/moviepilot:latest
```
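The placeholders listed above (`{title}`, `{year}`, `{season_episode}`, the new `{episode_title}`, etc.) suggest plain template substitution. A hedged sketch of how such a rename format could be rendered (the template string below is illustrative, not the project's actual default):

```python
def render_name(template: str, **fields) -> str:
    """Fill a rename template with recognized metadata fields."""
    return template.format(**fields)

print(render_name(
    "{title} ({year})/Season {season}/{title} - {season_episode} - {episode_title}",
    title="Example Show", year=2023, season=1,
    season_episode="S01E02", episode_title="Pilot",
))
# -> Example Show (2023)/Season 1/Example Show - S01E02 - Pilot
```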
### 3. **过滤规则**
`设定`-`规则`中设定,规则说明:
### 3. **优先级规则**
- 仅支持使用内置规则进行排列组合,内置规则有:`蓝光原盘``4K``1080P``中文字幕``特效字幕``H265``H264``杜比``HDR``REMUX``WEB-DL``免费``国语配音`
- 符合任一层级规则的资源将被标识选中,匹配成功的层级作为该资源的优先级,排越前面优先级越高
@@ -239,10 +255,9 @@ docker pull jxxghp/moviepilot:latest
- 将MoviePilot做为Radarr或Sonarr服务器添加到Overseerr或Jellyseerr`API服务端口`可使用Overseerr/Jellyseerr浏览订阅。
- 映射宿主机docker.sock文件到容器`/var/run/docker.sock`,以支持内建重启操作。实例:`-v /var/run/docker.sock:/var/run/docker.sock:ro`
**注意**
1) 容器首次启动需要下载浏览器内核,根据网络情况可能需要较长时间,此时无法登录。可映射`/moviepilot`目录避免容器重置后重新触发浏览器内核下载。
2) 使用反向代理时,需要添加以下配置,否则可能会导致部分功能无法访问(`ip:port`修改为实际值):
### **注意**
- 容器首次启动需要下载浏览器内核,根据网络情况可能需要较长时间,此时无法登录。可映射`/moviepilot`目录避免容器重置后重新触发浏览器内核下载。
- 使用反向代理时,需要添加以下配置,否则可能会导致部分功能无法访问(`ip:port`修改为实际值):
```nginx configuration
location / {
proxy_pass http://ip:port;
@@ -252,7 +267,7 @@ location / {
proxy_set_header X-Forwarded-Proto $scheme;
}
```
3) 新建的企业微信应用需要固定公网IP的代理才能收到消息代理添加以下代码
- 新建的企业微信应用需要固定公网IP的代理才能收到消息代理添加以下代码
```nginx configuration
location /cgi-bin/gettoken {
proxy_pass https://qyapi.weixin.qq.com;

app.ico (new binary file, 174 KiB)


@@ -2,7 +2,7 @@ from pathlib import Path
from typing import Any, List, Optional
from fastapi import APIRouter, Depends
from requests import Session
from sqlalchemy.orm import Session
from app import schemas
from app.chain.dashboard import DashboardChain


@@ -16,10 +16,16 @@ router = APIRouter()
IMAGE_TYPES = [".jpg", ".png", ".gif", ".bmp", ".jpeg", ".webp"]
@router.get("/list", summary="所有", response_model=List[schemas.FileItem])
def list_path(path: str, sort: str = 'time', _: schemas.TokenPayload = Depends(verify_token)) -> Any:
@router.get("/list", summary="所有目录和文", response_model=List[schemas.FileItem])
def list_path(path: str,
sort: str = 'time',
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询当前目录下所有目录和文件
:param path: 目录路径
:param sort: 排序方式name:按名称排序time:按修改时间排序
:param _: token
:return: 所有目录和文件
"""
# 返回结果
ret_items = []


@@ -6,11 +6,13 @@ from sqlalchemy.orm import Session
from app import schemas
from app.chain.transfer import TransferChain
from app.core.event import eventmanager
from app.core.security import verify_token
from app.db import get_db
from app.db.models.downloadhistory import DownloadHistory
from app.db.models.transferhistory import TransferHistory
from app.schemas import MediaType
from app.schemas.types import EventType
router = APIRouter()
@@ -78,6 +80,13 @@ def delete_transfer_history(history_in: schemas.TransferHistory,
# 删除源文件
if deletesrc and history.src:
TransferChain(db).delete_files(Path(history.src))
# 发送事件
eventmanager.send_event(
EventType.DownloadFileDeleted,
{
"src": history.src
}
)
# 删除记录
TransferHistory.delete(db, history_in.id)
return schemas.Response(success=True)
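The hunk above fires a `DownloadFileDeleted` event after removing the source file. A minimal sketch of the publish/subscribe pattern involved (class and method names other than `send_event` are assumptions, not MoviePilot's actual `eventmanager`):

```python
from collections import defaultdict

class EventManager:
    """Toy publish/subscribe hub: handlers register per event type,
    and send_event fans the payload out to them."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def register(self, event_type, handler):
        self._handlers[event_type].append(handler)

    def send_event(self, event_type, data):
        for handler in self._handlers[event_type]:
            handler(data)

manager = EventManager()
deleted = []
manager.register("DownloadFileDeleted", lambda data: deleted.append(data["src"]))
manager.send_event("DownloadFileDeleted", {"src": "/downloads/movie.mkv"})
print(deleted)  # -> ['/downloads/movie.mkv']
```

A plugin such as MediaSyncDel would subscribe to this event type to clean up the matching download task.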


@@ -162,7 +162,9 @@ def search_subscribes(
background_tasks.add_task(
Scheduler().start,
job_id="subscribe_search",
sid=None, state='R'
sid=None,
state='R',
manual=True
)
return schemas.Response(success=True)
@@ -178,7 +180,9 @@ def search_subscribe(
background_tasks.add_task(
Scheduler().start,
job_id="subscribe_search",
sid=subscribe_id, state=None
sid=subscribe_id,
state=None,
manual=True
)
return schemas.Response(success=True)


@@ -222,5 +222,8 @@ def execute_command(jobid: str,
"""
if not jobid:
return schemas.Response(success=False, message="命令不能为空!")
Scheduler().start(jobid)
return schemas.Response(success=True)
if jobid == "subscribe_search":
Scheduler().start(jobid, state='R')
else:
Scheduler().start(jobid)
return schemas.Response(success=True)


@@ -1,7 +1,7 @@
from typing import Any, List
from fastapi import APIRouter, HTTPException, Depends
from requests import Session
from sqlalchemy.orm import Session
from app import schemas
from app.chain.media import MediaChain


@@ -18,7 +18,7 @@ from app.core.meta import MetaBase
from app.core.module import ModuleManager
from app.log import logger
from app.schemas import TransferInfo, TransferTorrent, ExistMediaInfo, DownloadingTorrent, CommingMessage, Notification, \
WebhookEventInfo
WebhookEventInfo, TmdbEpisode
from app.schemas.types import TorrentStatus, MediaType, MediaImageType, EventType
from app.utils.object import ObjectUtils
@@ -115,6 +115,17 @@ class ChainBase(metaclass=ABCMeta):
"""
return self.run_module("recognize_media", meta=meta, mtype=mtype, tmdbid=tmdbid)
def match_doubaninfo(self, name: str, mtype: str = None,
year: str = None, season: int = None) -> Optional[dict]:
"""
搜索和匹配豆瓣信息
:param name: 标题
:param mtype: 类型
:param year: 年份
:param season: 季
"""
return self.run_module("match_doubaninfo", name=name, mtype=mtype, year=year, season=season)
def obtain_images(self, mediainfo: MediaInfo) -> Optional[MediaInfo]:
"""
补充抓取媒体信息图片
@@ -197,21 +208,19 @@ class ChainBase(metaclass=ABCMeta):
return self.run_module("search_medias", meta=meta)
def search_torrents(self, site: CommentedMap,
mediainfo: MediaInfo,
keyword: str = None,
page: int = 0,
area: str = "title") -> List[TorrentInfo]:
keywords: List[str],
mtype: MediaType = None,
page: int = 0) -> List[TorrentInfo]:
"""
搜索一个站点的种子资源
:param site: 站点
:param mediainfo: 识别的媒体信息
:param keyword: 搜索关键词,如有按关键词搜索,否则按媒体信息名称搜索
:param keywords: 搜索关键词列表
:param mtype: 媒体类型
:param page: 页码
:param area: 搜索区域
:return: 资源列表
"""
return self.run_module("search_torrents", mediainfo=mediainfo, site=site,
keyword=keyword, page=page, area=area)
return self.run_module("search_torrents", site=site, keywords=keywords,
mtype=mtype, page=page)
def refresh_torrents(self, site: CommentedMap) -> List[TorrentInfo]:
"""
@@ -274,7 +283,8 @@ class ChainBase(metaclass=ABCMeta):
return self.run_module("list_torrents", status=status, hashs=hashs)
def transfer(self, path: Path, meta: MetaBase, mediainfo: MediaInfo,
transfer_type: str, target: Path = None) -> Optional[TransferInfo]:
transfer_type: str, target: Path = None,
episodes_info: List[TmdbEpisode] = None) -> Optional[TransferInfo]:
"""
文件转移
:param path: 文件路径
@@ -282,10 +292,12 @@ class ChainBase(metaclass=ABCMeta):
:param mediainfo: 识别的媒体信息
:param transfer_type: 转移模式
:param target: 转移目标路径
:param episodes_info: 当前季的全部集信息
:return: {path, target_path, message}
"""
return self.run_module("transfer", path=path, meta=meta, mediainfo=mediainfo,
transfer_type=transfer_type, target=target)
transfer_type=transfer_type, target=target,
episodes_info=episodes_info)
def transfer_completed(self, hashs: Union[str, list], path: Path = None) -> None:
"""


@@ -1,6 +1,7 @@
import base64
import json
import re
import time
from pathlib import Path
from typing import List, Optional, Tuple, Set, Dict, Union
@@ -39,8 +40,10 @@ class DownloadChain(ChainBase):
发送添加下载的消息
"""
msg_text = ""
if userid:
msg_text = f"用户:{userid}"
if torrent.site_name:
msg_text = f"站点:{torrent.site_name}"
msg_text = f"{msg_text}\n站点:{torrent.site_name}"
if meta.resource_term:
msg_text = f"{msg_text}\n质量:{meta.resource_term}"
if torrent.size:
@@ -71,8 +74,7 @@ class DownloadChain(ChainBase):
title=f"{mediainfo.title_year} "
f"{meta.season_episode} 开始下载",
text=msg_text,
image=mediainfo.get_message_image(),
userid=userid))
image=mediainfo.get_message_image()))
def download_torrent(self, torrent: TorrentInfo,
channel: MessageChannel = None,
@@ -266,7 +268,10 @@ class DownloadChain(ChainBase):
download_hash=_hash,
torrent_name=_torrent.title,
torrent_description=_torrent.description,
torrent_site=_torrent.site_name
torrent_site=_torrent.site_name,
userid=userid,
channel=channel.value if channel else None,
date=time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
)
# 登记下载文件
@@ -318,7 +323,7 @@ class DownloadChain(ChainBase):
contexts: List[Context],
no_exists: Dict[int, Dict[int, NotExistMediaInfo]] = None,
save_path: str = None,
channel: str = None,
channel: MessageChannel = None,
userid: str = None) -> Tuple[List[Context], Dict[int, Dict[int, NotExistMediaInfo]]]:
"""
根据缺失数据,自动从种子列表中组合择优下载


@@ -28,12 +28,18 @@ class MediaServerChain(ChainBase):
"""
return self.run_module("mediaserver_librarys", server=server)
def items(self, server: str, library_id: Union[str, int]) -> Generator:
def items(self, server: str, library_id: Union[str, int]) -> List[schemas.MediaServerItem]:
"""
获取媒体服务器所有项目
"""
return self.run_module("mediaserver_items", server=server, library_id=library_id)
def iteminfo(self, server: str, item_id: Union[str, int]) -> schemas.MediaServerItem:
"""
获取媒体服务器项目信息
"""
return self.run_module("mediaserver_iteminfo", server=server, item_id=item_id)
def episodes(self, server: str, item_id: Union[str, int]) -> List[schemas.MediaServerSeasonInfo]:
"""
获取媒体服务器剧集信息


@@ -187,7 +187,7 @@ class MessageChain(ChainBase):
# 下载种子
context: Context = cache_list[int(text) - 1]
# 下载
self.downloadchain.download_single(context, userid=userid)
self.downloadchain.download_single(context, userid=userid, channel=channel)
elif text.lower() == "p":
# 上一页


@@ -62,7 +62,7 @@ class SearchChain(ChainBase):
else:
logger.info(f'开始浏览资源,站点:{site} ...')
# 搜索
return self.__search_all_sites(keyword=title, sites=[site] if site else None, page=page) or []
return self.__search_all_sites(keywords=[title], sites=[site] if site else None, page=page) or []
def last_search_results(self) -> List[Context]:
"""
@@ -117,16 +117,12 @@ class SearchChain(ChainBase):
else:
keywords = [mediainfo.title]
# 执行搜索
torrents: List[TorrentInfo] = []
for keyword in keywords:
torrents = self.__search_all_sites(
mediainfo=mediainfo,
keyword=keyword,
sites=sites,
area=area
)
if torrents:
break
torrents: List[TorrentInfo] = self.__search_all_sites(
mediainfo=mediainfo,
keywords=keywords,
sites=sites,
area=area
)
if not torrents:
logger.warn(f'{keywords or mediainfo.title} 未搜索到资源')
return []
@@ -241,15 +237,15 @@ class SearchChain(ChainBase):
# 返回
return contexts
def __search_all_sites(self, mediainfo: Optional[MediaInfo] = None,
keyword: str = None,
def __search_all_sites(self, keywords: List[str],
mediainfo: Optional[MediaInfo] = None,
sites: List[int] = None,
page: int = 0,
area: str = "title") -> Optional[List[TorrentInfo]]:
"""
多线程搜索多个站点
:param mediainfo: 识别的媒体信息
:param keyword: 搜索关键词,如有按关键词搜索,否则按媒体信息名称搜索
:param keywords: 搜索关键词列表
:param sites: 指定站点ID列表如有则只搜索指定站点否则搜索所有站点
:param page: 搜索页码
:param area: 搜索区域 title or imdbid
@@ -291,8 +287,18 @@ class SearchChain(ChainBase):
executor = ThreadPoolExecutor(max_workers=len(indexer_sites))
all_task = []
for site in indexer_sites:
task = executor.submit(self.search_torrents, mediainfo=mediainfo,
site=site, keyword=keyword, page=page, area=area)
if area == "imdbid":
# 搜索IMDBID
task = executor.submit(self.search_torrents, site=site,
keywords=[mediainfo.imdb_id] if mediainfo else None,
mtype=mediainfo.type if mediainfo else None,
page=page)
else:
# 搜索标题
task = executor.submit(self.search_torrents, site=site,
keywords=keywords,
mtype=mediainfo.type if mediainfo else None,
page=page)
all_task.append(task)
# 结果集
results = []
@@ -303,7 +309,7 @@ class SearchChain(ChainBase):
results.extend(result)
logger.info(f"站点搜索进度:{finish_count} / {total_num}")
self.progress.update(value=finish_count / total_num * 100,
text=f"正在搜索{keyword or ''},已完成 {finish_count} / {total_num} 个站点 ...",
text=f"正在搜索{keywords or ''},已完成 {finish_count} / {total_num} 个站点 ...",
key=ProgressKey.Search)
# 计算耗时
end_time = datetime.now()
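`__search_all_sites` fans one task per site out over a `ThreadPoolExecutor` and folds results in as each site finishes. A self-contained sketch of that loop, with a stand-in `search_site` in place of the real `search_torrents` module call:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed


def search_site(site, keywords):
    # Stand-in for ChainBase.search_torrents: returns fake torrent titles
    return [f"{site}:{kw}" for kw in keywords]


def search_all_sites(sites, keywords):
    """Submit one keyword-list search per site and collect results as
    each task completes, mirroring the executor loop in the hunk above."""
    results, finished = [], 0
    with ThreadPoolExecutor(max_workers=len(sites)) as executor:
        tasks = [executor.submit(search_site, site, keywords) for site in sites]
        for task in as_completed(tasks):
            finished += 1  # the real code updates a progress bar here
            results.extend(task.result() or [])
    return results


hits = search_all_sites(["siteA", "siteB"], ["Title 2023"])
```

Using `as_completed` lets the progress counter advance in finish order rather than submission order.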


@@ -3,7 +3,7 @@ import re
from datetime import datetime
from typing import Dict, List, Optional, Union, Tuple
from requests import Session
from sqlalchemy.orm import Session
from app.chain import ChainBase
from app.chain.download import DownloadChain


@@ -5,6 +5,7 @@ from cachetools import cached, TTLCache
from app import schemas
from app.chain import ChainBase
from app.core.config import settings
from app.schemas import MediaType
from app.utils.singleton import Singleton
@@ -122,5 +123,5 @@ class TmdbChain(ChainBase, metaclass=Singleton):
while True:
info = random.choice(infos)
if info and info.get("backdrop_path"):
return f"https://image.tmdb.org/t/p/original{info.get('backdrop_path')}"
return f"https://{settings.TMDB_IMAGE_DOMAIN}/t/p/original{info.get('backdrop_path')}"
return None


@@ -9,6 +9,7 @@ from sqlalchemy.orm import Session
from app.chain import ChainBase
from app.chain.media import MediaChain
from app.chain.tmdb import TmdbChain
from app.core.config import settings
from app.core.context import MediaInfo
from app.core.meta import MetaBase
@@ -41,6 +42,7 @@ class TransferChain(ChainBase):
self.transferhis = TransferHistoryOper(self._db)
self.progress = ProgressHelper()
self.mediachain = MediaChain(self._db)
self.tmdbchain = TmdbChain(self._db)
self.systemconfig = SystemConfigOper()
def process(self) -> bool:
@@ -110,17 +112,6 @@ class TransferChain(ChainBase):
logger.warn(f"{path.name} 没有找到可转移的媒体文件")
return False, f"{path.name} 没有找到可转移的媒体文件"
# 汇总错误信息
err_msgs: List[str] = []
# 汇总季集清单
season_episodes: Dict[Tuple, List[int]] = {}
# 汇总元数据
metas: Dict[Tuple, MetaBase] = {}
# 汇总媒体信息
medias: Dict[Tuple, MediaInfo] = {}
# 汇总转移信息
transfers: Dict[Tuple, TransferInfo] = {}
# 有集自定义格式
formaterHandler = FormatParser(eformat=epformat.format,
details=epformat.detail,
@@ -129,17 +120,24 @@ class TransferChain(ChainBase):
# 开始进度
self.progress.start(ProgressKey.FileTransfer)
# 总数
# 目录所有文件清单
transfer_files = SystemUtils.list_files(directory=path,
extensions=settings.RMT_MEDIAEXT,
min_filesize=min_filesize)
if formaterHandler:
# 有集自定义格式,过滤文件
transfer_files = [f for f in transfer_files if formaterHandler.match(f.name)]
# 总数
# 汇总错误信息
err_msgs: List[str] = []
# 总文件数
total_num = len(transfer_files)
# 已处理数量
processed_num = 0
# 失败数量
fail_num = 0
# 跳过数量
skip_num = 0
self.progress.update(value=0,
text=f"开始转移 {path},共 {total_num} 个文件 ...",
key=ProgressKey.FileTransfer)
@@ -149,6 +147,15 @@ class TransferChain(ChainBase):
# 处理所有待转移目录或文件,默认一个转移路径或文件只有一个媒体信息
for trans_path in trans_paths:
# 汇总季集清单
season_episodes: Dict[Tuple, List[int]] = {}
# 汇总元数据
metas: Dict[Tuple, MetaBase] = {}
# 汇总媒体信息
medias: Dict[Tuple, MediaInfo] = {}
# 汇总转移信息
transfers: Dict[Tuple, TransferInfo] = {}
# 如果是目录且不是蓝光原盘,获取所有文件并转移
if (not trans_path.is_file()
and not SystemUtils.is_bluray_dir(trans_path)):
@@ -165,7 +172,6 @@ class TransferChain(ChainBase):
# 转移所有文件
for file_path in file_paths:
# 回收站及隐藏的文件不处理
file_path_str = str(file_path)
if file_path_str.find('/@Recycle/') != -1 \
@@ -173,6 +179,9 @@ class TransferChain(ChainBase):
or file_path_str.find('/.') != -1 \
or file_path_str.find('/@eaDir') != -1:
logger.debug(f"{file_path_str} 是回收站或隐藏的文件")
# 计数
processed_num += 1
skip_num += 1
continue
# 整理屏蔽词不处理
@@ -187,6 +196,9 @@ class TransferChain(ChainBase):
break
if is_blocked:
err_msgs.append(f"{file_path.name} 命中整理屏蔽词")
# 计数
processed_num += 1
skip_num += 1
continue
# 转移成功的不再处理
@@ -194,6 +206,9 @@ class TransferChain(ChainBase):
transferd = self.transferhis.get_by_src(file_path_str)
if transferd and transferd.status:
logger.info(f"{file_path} 已成功转移过,如需重新处理,请删除历史记录。")
# 计数
processed_num += 1
skip_num += 1
continue
# 更新进度
@@ -214,6 +229,9 @@ class TransferChain(ChainBase):
if not file_meta:
logger.error(f"{file_path} 无法识别有效信息")
err_msgs.append(f"{file_path} 无法识别有效信息")
# 计数
processed_num += 1
fail_num += 1
continue
# 自定义识别
@@ -237,7 +255,7 @@ class TransferChain(ChainBase):
# 新增转移失败历史记录
his = self.transferhis.add_fail(
src_path=file_path,
mode=settings.TRANSFER_TYPE,
mode=transfer_type,
meta=file_meta,
download_hash=download_hash
)
@@ -246,6 +264,9 @@ class TransferChain(ChainBase):
title=f"{file_path.name} 未识别到媒体信息,无法入库!\n"
f"回复:```\n/redo {his.id} [tmdbid]|[类型]\n``` 手动识别转移。"
))
# 计数
processed_num += 1
fail_num += 1
continue
# 如果未开启新增已入库媒体是否跟随TMDB信息变化则根据tmdbid查询之前的title
@@ -257,31 +278,17 @@ class TransferChain(ChainBase):
logger.info(f"{file_path.name} 识别为:{file_mediainfo.type.value} {file_mediainfo.title_year}")
# 电视剧没有集无法转移
if file_mediainfo.type == MediaType.TV and not file_meta.episode:
# 转移失败
logger.warn(f"{file_path.name} 入库失败:未识别到集数")
err_msgs.append(f"{file_path.name} 未识别到集数")
# 新增转移失败历史记录
self.transferhis.add_fail(
src_path=file_path,
mode=settings.TRANSFER_TYPE,
download_hash=download_hash,
meta=file_meta,
mediainfo=file_mediainfo
)
# 发送消息
self.post_message(Notification(
mtype=NotificationType.Manual,
title=f"{file_path.name} 入库失败!",
text=f"原因:未识别到集数",
image=file_mediainfo.get_message_image()
))
continue
# 更新媒体图片
self.obtain_images(mediainfo=file_mediainfo)
# 获取集数据
if file_mediainfo.type == MediaType.TV:
episodes_info = self.tmdbchain.tmdb_episodes(tmdbid=file_mediainfo.tmdb_id,
season=file_meta.begin_season or 1)
else:
episodes_info = None
# 获取下载hash
if not download_hash:
download_file = self.downloadhis.get_file_by_fullpath(file_path_str)
if download_file:
@@ -292,7 +299,8 @@ class TransferChain(ChainBase):
mediainfo=file_mediainfo,
path=file_path,
transfer_type=transfer_type,
target=target)
target=target,
episodes_info=episodes_info)
if not transferinfo:
logger.error("文件转移模块运行失败")
return False, "文件转移模块运行失败"
@@ -303,7 +311,7 @@ class TransferChain(ChainBase):
# 新增转移失败历史记录
self.transferhis.add_fail(
src_path=file_path,
mode=settings.TRANSFER_TYPE,
mode=transfer_type,
download_hash=download_hash,
meta=file_meta,
mediainfo=file_mediainfo,
@@ -316,6 +324,9 @@ class TransferChain(ChainBase):
text=f"原因:{transferinfo.message or '未知'}",
image=file_mediainfo.get_message_image()
))
# 计数
processed_num += 1
fail_num += 1
continue
# 汇总信息
@@ -339,7 +350,7 @@ class TransferChain(ChainBase):
# 新增转移成功历史记录
self.transferhis.add_success(
src_path=file_path,
mode=settings.TRANSFER_TYPE,
mode=transfer_type,
download_hash=download_hash,
meta=file_meta,
mediainfo=file_mediainfo,
@@ -355,8 +366,7 @@ class TransferChain(ChainBase):
key=ProgressKey.FileTransfer)
# 目录或文件转移完成
self.progress.update(value=100,
text=f"所有文件转移完成,正在执行后续处理 ...",
self.progress.update(text=f"{trans_path} 转移完成,正在执行后续处理 ...",
key=ProgressKey.FileTransfer)
# 执行后续处理
@@ -383,10 +393,16 @@ class TransferChain(ChainBase):
'mediainfo': media,
'transferinfo': transfer_info
})
# 结束进度
logger.info(f"{path} 转移完成,共 {total_num} 个文件,"
f"成功 {total_num - len(err_msgs)} 个,失败 {len(err_msgs)}")
self.progress.end(ProgressKey.FileTransfer)
# 结束进度
logger.info(f"{path} 转移完成,共 {total_num} 个文件,"
f"失败 {fail_num} 个,跳过 {skip_num} 个")
self.progress.update(value=100,
text=f"{path} 转移完成,共 {total_num} 个文件,"
f"失败 {fail_num} 个,跳过 {skip_num} 个",
key=ProgressKey.FileTransfer)
self.progress.end(ProgressKey.FileTransfer)
return True, "\n".join(err_msgs)
@@ -600,13 +616,15 @@ class TransferChain(ChainBase):
def delete_files(path: Path):
"""
删除转移后的文件以及空目录
:param path: 文件路径
"""
logger.info(f"开始删除文件以及空目录:{path} ...")
if not path.exists():
return
if path.is_file():
# 删除文件、nfo、jpg
files = glob.glob(f"{Path(path.parent).joinpath(path.stem)}*")
# 删除文件、nfo、jpg等同名文件
pattern = path.stem.replace('[', '?').replace(']', '?')
files = path.parent.glob(f"{pattern}.*")
for file in files:
Path(file).unlink()
logger.warn(f"文件 {path} 已删除")
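The bracket replacement above works around `glob` semantics: `[...]` is a character class, so a stem like `[Grp] Show` would never match its own files; swapping each bracket for the single-character wildcard `?` keeps the match. A runnable sketch (the file names are made up):

```python
import tempfile
from pathlib import Path


def sibling_files(path: Path):
    """Glob for same-stem siblings (.nfo, .jpg, ...) while neutralising
    [ and ], which glob would otherwise parse as a character class."""
    pattern = path.stem.replace('[', '?').replace(']', '?')
    return sorted(p.name for p in path.parent.glob(f"{pattern}.*"))


with tempfile.TemporaryDirectory() as tmp:
    base = Path(tmp)
    for name in ("[Grp] Show.mkv", "[Grp] Show.nfo", "Other.jpg"):
        (base / name).touch()
    # "[Grp] Show" becomes the pattern "?Grp? Show", matching both siblings
    names = sibling_files(base / "[Grp] Show.mkv")
```

The `?` wildcard is slightly looser than a literal bracket (it matches any single character), which is an acceptable trade-off for deleting leftover sidecar files.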


@@ -1,9 +1,13 @@
import os
import secrets
import sys
from pathlib import Path
from typing import List
from pydantic import BaseSettings
from app.utils.system import SystemUtils
class Settings(BaseSettings):
# 项目名称
@@ -208,7 +212,11 @@ class Settings(BaseSettings):
def CONFIG_PATH(self):
if self.CONFIG_DIR:
return Path(self.CONFIG_DIR)
return self.INNER_CONFIG_PATH
elif SystemUtils.is_docker():
return Path("/config")
elif SystemUtils.is_frozen():
return Path(sys.executable).parent / "config"
return self.ROOT_PATH / "config"
@property
def TEMP_PATH(self):
@@ -268,11 +276,14 @@ class Settings(BaseSettings):
return [Path(path) for path in self.LIBRARY_PATH.split(",")]
return []
def __init__(self):
super().__init__()
def __init__(self, **kwargs):
super().__init__(**kwargs)
with self.CONFIG_PATH as p:
if not p.exists():
p.mkdir(parents=True, exist_ok=True)
if SystemUtils.is_frozen():
if not (p / "app.env").exists():
SystemUtils.copy(self.INNER_CONFIG_PATH / "app.env", p / "app.env")
with self.TEMP_PATH as p:
if not p.exists():
p.mkdir(parents=True, exist_ok=True)
@@ -284,4 +295,7 @@ class Settings(BaseSettings):
case_sensitive = True
settings = Settings()
settings = Settings(
_env_file=Settings().CONFIG_PATH / "app.env",
_env_file_encoding="utf-8"
)
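The two-stage construction above boots `Settings()` once with defaults purely to locate `CONFIG_PATH`, then rebuilds it with `_env_file` pointing at the `app.env` inside that directory. A dependency-free sketch of the same bootstrap (this stand-in class mimics, and does not reproduce, pydantic's `BaseSettings`):

```python
import tempfile
from pathlib import Path


class Settings:
    """Stand-in for the pydantic BaseSettings subclass in the diff:
    values come from an optional env file, else from defaults."""

    def __init__(self, env_file: Path = None, config_dir: Path = None):
        self.values = {"APP_NAME": "demo"}
        self.config_path = Path(config_dir) if config_dir else Path(tempfile.gettempdir())
        if env_file and env_file.exists():
            for line in env_file.read_text().splitlines():
                key, sep, val = line.partition("=")
                if sep and not line.startswith("#"):
                    self.values[key.strip()] = val.strip()


with tempfile.TemporaryDirectory() as tmp:
    # First pass only locates the config directory; the second pass
    # re-reads settings from the app.env inside it, as the diff does.
    bootstrap = Settings(config_dir=Path(tmp))
    env = bootstrap.config_path / "app.env"
    env.write_text("APP_NAME=moviepilot\n")
    settings = Settings(env_file=env, config_dir=Path(tmp))
```

The double construction is the price of letting an env var (`CONFIG_DIR`) decide where the env file itself lives.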


@@ -1,6 +1,6 @@
import re
from dataclasses import dataclass, field, asdict
from typing import List, Dict, Any
from typing import List, Dict, Any, Tuple
from app.core.config import settings
from app.core.meta import MetaBase
@@ -272,7 +272,7 @@ class MediaInfo:
初始化媒体信息
"""
def __directors_actors(tmdbinfo: dict):
def __directors_actors(tmdbinfo: dict) -> Tuple[List[dict], List[dict]]:
"""
查询导演和演员
:param tmdbinfo: TMDB元数据


@@ -0,0 +1,47 @@
import regex as re
from app.db.systemconfig_oper import SystemConfigOper
from app.schemas.types import SystemConfigKey
from app.utils.singleton import Singleton
class CustomizationMatcher(metaclass=Singleton):
"""
识别自定义占位符
"""
customization = None
custom_separator = None
def __init__(self):
self.systemconfig = SystemConfigOper()
self.customization = None
self.custom_separator = None
def match(self, title=None):
"""
:param title: 资源标题或文件名
:return: 匹配结果
"""
if not title:
return ""
if not self.customization:
# 自定义占位符
customization = self.systemconfig.get(SystemConfigKey.Customization)
if not customization:
return ""
if isinstance(customization, str):
customization = customization.replace("\n", ";").replace("|", ";").strip(";").split(";")
self.customization = "|".join([f"({item})" for item in customization])
customization_re = re.compile(r"%s" % self.customization)
# 处理重复多次的情况,保留先后顺序(按添加自定义占位符的顺序)
unique_customization = {}
for item in re.findall(customization_re, title):
if not isinstance(item, tuple):
item = (item,)
for i in range(len(item)):
if item[i] and unique_customization.get(item[i]) is None:
unique_customization[item[i]] = i
unique_customization = list(dict(sorted(unique_customization.items(), key=lambda x: x[1])).keys())
separator = self.custom_separator or "@"
return separator.join(unique_customization)
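`CustomizationMatcher.match` joins the configured terms into one alternation pattern, then de-duplicates hits while preserving the order the terms were configured in. A simplified standalone version using stdlib `re` (the module above uses the third-party `regex` package and pulls its terms from `SystemConfigOper` rather than a parameter):

```python
import re


def match_customization(title: str, terms, separator: str = "@") -> str:
    """Build one alternation pattern from the configured terms, then keep
    each hit once, ordered by its group index (i.e. the order the terms
    were configured in), as the matcher above does."""
    if not title or not terms:
        return ""
    pattern = re.compile("|".join(f"({t})" for t in terms))
    ordered = {}
    for groups in pattern.findall(title):
        if not isinstance(groups, tuple):
            groups = (groups,)  # single-group patterns yield bare strings
        for idx, hit in enumerate(groups):
            if hit and hit not in ordered:
                ordered[hit] = idx
    return separator.join(sorted(ordered, key=ordered.get))


tag = match_customization("Show.2023.DIY.HDR.DIY.mkv", ["DIY", "HDR"])
```

Repeated occurrences ("DIY" appears twice in the title) collapse to one, and the output order follows the configured term order, not the order of appearance.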


@@ -1,6 +1,7 @@
import re
import zhconv
import anitopy
from app.core.meta.customization import CustomizationMatcher
from app.core.meta.metabase import MetaBase
from app.core.meta.releasegroup import ReleaseGroupsMatcher
from app.utils.string import StringUtils
@@ -144,6 +145,8 @@ class MetaAnime(MetaBase):
self.resource_team = \
ReleaseGroupsMatcher().match(title=original_title) or \
anitopy_info_origin.get("release_group") or None
# 自定义占位符
self.customization = CustomizationMatcher().match(title=original_title) or None
# 视频编码
self.video_encode = anitopy_info.get("video_term")
if isinstance(self.video_encode, list):

View File

@@ -51,6 +51,8 @@ class MetaBase(object):
resource_pix: Optional[str] = None
# 识别的制作组/字幕组
resource_team: Optional[str] = None
# 识别的自定义占位符
customization: Optional[str] = None
# 视频编码
video_encode: Optional[str] = None
# 音频编码
@@ -492,6 +494,9 @@ class MetaBase(object):
# 制作组/字幕组
if not self.resource_team:
self.resource_team = meta.resource_team
# 自定义占位符
if not self.customization:
self.customization = meta.customization
# 特效
if not self.resource_effect:
self.resource_effect = meta.resource_effect

View File

@@ -2,6 +2,7 @@ import re
from pathlib import Path
from app.core.config import settings
from app.core.meta.customization import CustomizationMatcher
from app.core.meta.metabase import MetaBase
from app.core.meta.releasegroup import ReleaseGroupsMatcher
from app.utils.string import StringUtils
@@ -130,6 +131,8 @@ class MetaVideo(MetaBase):
self.part = None
# 制作组/字幕组
self.resource_team = ReleaseGroupsMatcher().match(title=original_title) or None
# 自定义占位符
self.customization = CustomizationMatcher().match(title=original_title) or None
def __fix_name(self, name: str):
if not name:


@@ -74,6 +74,16 @@ class DownloadHistoryOper(DbOper):
"""
DownloadFiles.delete_by_fullpath(self._db, fullpath)
def get_hash_by_fullpath(self, fullpath: str) -> str:
"""
按fullpath查询下载文件记录hash
:param fullpath: 数据key
"""
fileinfo: DownloadFiles = DownloadFiles.get_by_fullpath(self._db, fullpath)
if fileinfo:
return fileinfo.download_hash
return ""
def list_by_page(self, page: int = 1, count: int = 30) -> List[DownloadHistory]:
"""
分页查询下载历史
@@ -98,3 +108,11 @@ class DownloadHistoryOper(DbOper):
season=season,
episode=episode,
tmdbid=tmdbid)
def list_by_user_date(self, date: str, userid: str = None) -> List[DownloadHistory]:
"""
查询某用户某时间之后的下载历史
"""
return DownloadHistory.list_by_user_date(db=self._db,
date=date,
userid=userid)


@@ -39,7 +39,7 @@ def update_db():
更新数据库
"""
db_location = settings.CONFIG_PATH / 'user.db'
script_location = settings.ROOT_PATH / 'alembic'
script_location = settings.ROOT_PATH / 'database'
try:
alembic_cfg = Config()
alembic_cfg.set_main_option('script_location', str(script_location))


@@ -35,6 +35,12 @@ class DownloadHistory(Base):
torrent_description = Column(String)
# 种子站点
torrent_site = Column(String)
# 下载用户
userid = Column(String)
# 下载渠道
channel = Column(String)
# 创建时间
date = Column(String)
# 附加信息
note = Column(String)
@@ -90,6 +96,19 @@ class DownloadHistory(Base):
DownloadHistory.episodes == episode).order_by(
DownloadHistory.id.desc()).all()
@staticmethod
def list_by_user_date(db: Session, date: str, userid: str = None):
"""
查询某用户某时间之后的下载历史
"""
if userid:
return db.query(DownloadHistory).filter(DownloadHistory.date < date,
DownloadHistory.userid == userid).order_by(
DownloadHistory.id.desc()).all()
else:
return db.query(DownloadHistory).filter(DownloadHistory.date < date).order_by(
DownloadHistory.id.desc()).all()
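Because `DownloadHistory.date` is stored as a zero-padded `"%Y-%m-%d %H:%M:%S"` string, the `DownloadHistory.date < date` filter above compares lexicographically, which coincides with chronological order. A plain-Python sketch of the same query logic (no SQLAlchemy; the row dicts are hypothetical) — note it mirrors the filter as written, selecting rows strictly *earlier* than the cutoff:

```python
def list_by_user_date(rows, date, userid=None):
    """Filter rows with date strictly before the cutoff (string compare
    equals chronological compare for zero-padded timestamps), optionally
    restricted to one user, newest id first."""
    hits = [r for r in rows
            if r["date"] < date and (userid is None or r["userid"] == userid)]
    return sorted(hits, key=lambda r: r["id"], reverse=True)


history_rows = [
    {"id": 1, "userid": "u1", "date": "2023-10-01 08:00:00"},
    {"id": 2, "userid": "u2", "date": "2023-10-05 08:00:00"},
    {"id": 3, "userid": "u1", "date": "2023-10-09 08:00:00"},
]
history = list_by_user_date(history_rows, "2023-10-08 00:00:00", userid="u1")
```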
class DownloadFiles(Base):
"""


@@ -65,6 +65,10 @@ class TransferHistory(Base):
def get_by_src(db: Session, src: str):
return db.query(TransferHistory).filter(TransferHistory.src == src).first()
@staticmethod
def list_by_hash(db: Session, download_hash: str):
return db.query(TransferHistory).filter(TransferHistory.download_hash == download_hash).all()
@staticmethod
def statistic(db: Session, days: int = 7):
"""


@@ -36,6 +36,13 @@ class TransferHistoryOper(DbOper):
"""
return TransferHistory.get_by_src(self._db, src)
def list_by_hash(self, download_hash: str) -> List[TransferHistory]:
"""
按种子hash查询转移记录
:param download_hash: 种子hash
"""
return TransferHistory.list_by_hash(self._db, download_hash)
def add(self, **kwargs) -> TransferHistory:
"""
新增转移历史


@@ -2,12 +2,15 @@ from pyvirtualdisplay import Display
from app.log import logger
from app.utils.singleton import Singleton
from app.utils.system import SystemUtils
class DisplayHelper(metaclass=Singleton):
_display: Display = None
def __init__(self):
if not SystemUtils.is_docker():
return
try:
self._display = Display(visible=False, size=(1024, 768))
self._display.start()


@@ -1,5 +1,6 @@
import xml.etree.ElementTree as ET
from pathlib import Path
from typing import List, Optional
class NfoReader:
@@ -8,6 +9,9 @@ class NfoReader:
self.tree = ET.parse(xml_file_path)
self.root = self.tree.getroot()
def get_element_value(self, element_path):
def get_element_value(self, element_path) -> Optional[str]:
element = self.root.find(element_path)
return element.text if element is not None else None
def get_elements(self, element_path) -> List[ET.Element]:
return self.root.findall(element_path)

Binary file not shown.


@@ -1,3 +1,4 @@
from datetime import datetime
from pathlib import Path
from typing import List, Optional, Tuple, Union
@@ -10,11 +11,11 @@ from app.modules import _ModuleBase
from app.modules.douban.apiv2 import DoubanApi
from app.modules.douban.scraper import DoubanScraper
from app.schemas.types import MediaType
from app.utils.common import retry
from app.utils.system import SystemUtils
class DoubanModule(_ModuleBase):
doubanapi: DoubanApi = None
scraper: DoubanScraper = None
@@ -34,6 +35,271 @@ class DoubanModule(_ModuleBase):
:param doubanid: 豆瓣ID
:return: 豆瓣信息
"""
"""
{
"rating": {
"count": 287365,
"max": 10,
"star_count": 3.5,
"value": 6.6
},
"lineticket_url": "",
"controversy_reason": "",
"pubdate": [
"2021-10-29(中国大陆)"
],
"last_episode_number": null,
"interest_control_info": null,
"pic": {
"large": "https://img9.doubanio.com/view/photo/m_ratio_poster/public/p2707553644.webp",
"normal": "https://img9.doubanio.com/view/photo/s_ratio_poster/public/p2707553644.webp"
},
"vendor_count": 6,
"body_bg_color": "f4f5f9",
"is_tv": false,
"head_info": null,
"album_no_interact": false,
"ticket_price_info": "",
"webisode_count": 0,
"year": "2021",
"card_subtitle": "2021 / 英国 美国 / 动作 惊悚 冒险 / 凯瑞·福永 / 丹尼尔·克雷格 蕾雅·赛杜",
"forum_info": null,
"webisode": null,
"id": "20276229",
"gallery_topic_count": 0,
"languages": [
"英语",
"法语",
"意大利语",
"俄语",
"西班牙语"
],
"genres": [
"动作",
"惊悚",
"冒险"
],
"review_count": 926,
"title": "007无暇赴死",
"intro": "世界局势波诡云谲,再度出山的邦德(丹尼尔·克雷格 饰)面临有史以来空前的危机,传奇特工007的故事在本片中达到高潮。新老角色集结亮相,蕾雅·赛杜回归,二度饰演邦女郎玛德琳。系列最恐怖反派萨芬(拉米·马雷克 饰)重磅登场,毫不留情地展示了自己狠辣的一面,不仅揭开了玛德琳身上隐藏的秘密,还酝酿着危及数百万人性命的阴谋,幽灵党的身影也似乎再次浮出水面。半路杀出的新00号特工(拉什纳·林奇 饰)与神秘女子(安娜·德·阿玛斯 饰)看似与邦德同阵作战,但其真实目的依然成谜。关乎邦德生死的新仇旧怨接踵而至,暗潮汹涌之下他能否拯救世界?",
"interest_cmt_earlier_tip_title": "发布于上映前",
"has_linewatch": true,
"ugc_tabs": [
{
"source": "reviews",
"type": "review",
"title": "影评"
},
{
"source": "forum_topics",
"type": "forum",
"title": "讨论"
}
],
"forum_topic_count": 857,
"ticket_promo_text": "",
"webview_info": {},
"is_released": true,
"actors": [
{
"name": "丹尼尔·克雷格",
"roles": [
"演员",
"制片人",
"配音"
],
"title": "丹尼尔·克雷格(同名)英国,英格兰,柴郡,切斯特影视演员",
"url": "https://movie.douban.com/celebrity/1025175/",
"user": null,
"character": "饰 詹姆斯·邦德 James Bond 007",
"uri": "douban://douban.com/celebrity/1025175?subject_id=27230907",
"avatar": {
"large": "https://qnmob3.doubanio.com/view/celebrity/raw/public/p42588.jpg?imageView2/2/q/80/w/600/h/3000/format/webp",
"normal": "https://qnmob3.doubanio.com/view/celebrity/raw/public/p42588.jpg?imageView2/2/q/80/w/200/h/300/format/webp"
},
"sharing_url": "https://www.douban.com/doubanapp/dispatch?uri=/celebrity/1025175/",
"type": "celebrity",
"id": "1025175",
"latin_name": "Daniel Craig"
}
],
"interest": null,
"vendor_icons": [
"https://img9.doubanio.com/f/frodo/fbc90f355fc45d5d2056e0d88c697f9414b56b44/pics/vendors/tencent.png",
"https://img2.doubanio.com/f/frodo/8286b9b5240f35c7e59e1b1768cd2ccf0467cde5/pics/vendors/migu_video.png",
"https://img9.doubanio.com/f/frodo/88a62f5e0cf9981c910e60f4421c3e66aac2c9bc/pics/vendors/bilibili.png"
],
"episodes_count": 0,
"color_scheme": {
"is_dark": true,
"primary_color_light": "868ca5",
"_base_color": [
0.6333333333333333,
0.18867924528301885,
0.20784313725490197
],
"secondary_color": "f4f5f9",
"_avg_color": [
0.059523809523809625,
0.09790209790209795,
0.5607843137254902
],
"primary_color_dark": "676c7f"
},
"type": "movie",
"null_rating_reason": "",
"linewatches": [
{
"url": "http://v.youku.com/v_show/id_XNTIwMzM2NDg5Mg==.html?tpa=dW5pb25faWQ9MzAwMDA4XzEwMDAwMl8wMl8wMQ&refer=esfhz_operation.xuka.xj_00003036_000000_FNZfau_19010900",
"source": {
"literal": "youku",
"pic": "https://img1.doubanio.com/img/files/file-1432869267.png",
"name": "优酷视频"
},
"source_uri": "youku://play?vid=XNTIwMzM2NDg5Mg==&source=douban&refer=esfhz_operation.xuka.xj_00003036_000000_FNZfau_19010900",
"free": false
}
],
"info_url": "https://www.douban.com/doubanapp//h5/movie/20276229/desc",
"tags": [],
"durations": [
"163分钟"
],
"comment_count": 97204,
"cover": {
"description": "",
"author": {
"loc": {
"id": "108288",
"name": "北京",
"uid": "beijing"
},
"kind": "user",
"name": "雨落下",
"reg_time": "2020-08-11 16:22:48",
"url": "https://www.douban.com/people/221011676/",
"uri": "douban://douban.com/user/221011676",
"id": "221011676",
"avatar_side_icon_type": 3,
"avatar_side_icon_id": "234",
"avatar": "https://img2.doubanio.com/icon/up221011676-2.jpg",
"is_club": false,
"type": "user",
"avatar_side_icon": "https://img2.doubanio.com/view/files/raw/file-1683625971.png",
"uid": "221011676"
},
"url": "https://movie.douban.com/photos/photo/2707553644/",
"image": {
"large": {
"url": "https://img9.doubanio.com/view/photo/l/public/p2707553644.webp",
"width": 1082,
"height": 1600,
"size": 0
},
"raw": null,
"small": {
"url": "https://img9.doubanio.com/view/photo/s/public/p2707553644.webp",
"width": 405,
"height": 600,
"size": 0
},
"normal": {
"url": "https://img9.doubanio.com/view/photo/m/public/p2707553644.webp",
"width": 405,
"height": 600,
"size": 0
},
"is_animated": false
},
"uri": "douban://douban.com/photo/2707553644",
"create_time": "2021-10-26 15:05:01",
"position": 0,
"owner_uri": "douban://douban.com/movie/20276229",
"type": "photo",
"id": "2707553644",
"sharing_url": "https://www.douban.com/doubanapp/dispatch?uri=/photo/2707553644/"
},
"cover_url": "https://img9.doubanio.com/view/photo/m_ratio_poster/public/p2707553644.webp",
"restrictive_icon_url": "",
"header_bg_color": "676c7f",
"is_douban_intro": false,
"ticket_vendor_icons": [
"https://img9.doubanio.com/view/dale-online/dale_ad/public/0589a62f2f2d7c2.jpg"
],
"honor_infos": [],
"sharing_url": "https://movie.douban.com/subject/20276229/",
"subject_collections": [],
"wechat_timeline_share": "screenshot",
"countries": [
"英国",
"美国"
],
"url": "https://movie.douban.com/subject/20276229/",
"release_date": null,
"original_title": "No Time to Die",
"uri": "douban://douban.com/movie/20276229",
"pre_playable_date": null,
"episodes_info": "",
"subtype": "movie",
"directors": [
{
"name": "凯瑞·福永",
"roles": [
"导演",
"制片人",
"编剧",
"摄影",
"演员"
],
"title": "凯瑞·福永(同名)美国,加利福尼亚州,奥克兰影视演员",
"url": "https://movie.douban.com/celebrity/1009531/",
"user": null,
"character": "导演",
"uri": "douban://douban.com/celebrity/1009531?subject_id=27215222",
"avatar": {
"large": "https://qnmob3.doubanio.com/view/celebrity/raw/public/p1392285899.57.jpg?imageView2/2/q/80/w/600/h/3000/format/webp",
"normal": "https://qnmob3.doubanio.com/view/celebrity/raw/public/p1392285899.57.jpg?imageView2/2/q/80/w/200/h/300/format/webp"
},
"sharing_url": "https://www.douban.com/doubanapp/dispatch?uri=/celebrity/1009531/",
"type": "celebrity",
"id": "1009531",
"latin_name": "Cary Fukunaga"
}
],
"is_show": false,
"in_blacklist": false,
"pre_release_desc": "",
"video": null,
"aka": [
"007生死有时(港)",
"007生死交战(台)",
"007间不容死",
"邦德25",
"007没空去死(豆友译名)",
"James Bond 25",
"Never Dream of Dying",
"Shatterhand"
],
"is_restrictive": false,
"trailer": {
"sharing_url": "https://www.douban.com/doubanapp/dispatch?uri=/movie/20276229/trailer%3Ftrailer_id%3D282585%26trailer_type%3DA",
"video_url": "https://vt1.doubanio.com/202310011325/3b1f5827e91dde7826dc20930380dfc2/view/movie/M/402820585.mp4",
"title": "中国预告片:终极决战版 (中文字幕)",
"uri": "douban://douban.com/movie/20276229/trailer?trailer_id=282585&trailer_type=A",
"cover_url": "https://img1.doubanio.com/img/trailer/medium/2712944408.jpg",
"term_num": 0,
"n_comments": 21,
"create_time": "2021-11-01",
"subject_title": "007无暇赴死",
"file_size": 10520074,
"runtime": "00:42",
"type": "A",
"id": "282585",
"desc": ""
},
"interest_cmt_earlier_tip_desc": "该短评的发布时间早于公开上映时间,作者可能通过其他渠道提前观看,请谨慎参考。其评分将不计入总评分。"
}
"""
if not doubanid:
return None
logger.info(f"开始获取豆瓣信息:{doubanid} ...")
@@ -129,22 +395,45 @@ class DoubanModule(_ModuleBase):
return ret_medias
def __match(self, name: str, year: str, season: int = None) -> dict:
@retry(Exception, 5, 3, 3, logger=logger)
def match_doubaninfo(self, name: str, mtype: str = None,
year: str = None, season: int = None) -> dict:
"""
搜索和匹配豆瓣信息
:param name: 名称
:param mtype: 类型 电影/电视剧
:param year: 年份
:param season: 季号
"""
result = self.doubanapi.search(f"{name} {year or ''}")
result = self.doubanapi.search(f"{name} {year or ''}".strip(),
ts=datetime.strftime(datetime.now(), '%Y%m%d%H%M%S'))
if not result:
logger.warn(f"未找到 {name} 的豆瓣信息")
return {}
# 触发rate limit
if "search_access_rate_limit" in result.values():
logger.warn(f"触发豆瓣API速率限制,错误信息:{result} ...")
raise Exception("触发豆瓣API速率限制")
for item_obj in result.get("items"):
if item_obj.get("type_name") not in (MediaType.TV.value, MediaType.MOVIE.value):
type_name = item_obj.get("type_name")
if type_name not in [MediaType.TV.value, MediaType.MOVIE.value]:
continue
title = item_obj.get("title")
if mtype and mtype != type_name:
continue
if mtype == MediaType.TV and not season:
season = 1
item = item_obj.get("target")
title = item.get("title")
if not title:
continue
meta = MetaInfo(title)
if meta.name == name and (not season or meta.begin_season == season):
return item_obj
if type_name == MediaType.TV.value:
meta.type = MediaType.TV
meta.begin_season = meta.begin_season or 1
if meta.name == name \
and ((not season and not meta.begin_season) or meta.begin_season == season) \
and (not year or item.get('year') == year):
return item
return {}
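The `@retry(Exception, 5, 3, 3, logger=logger)` decorator applied to `match_doubaninfo` lives elsewhere in the project; a minimal sketch of the retry-with-backoff behavior it implies (names, signature, and the simulated rate-limit error below are illustrative, not the project's actual implementation) could look like:

```python
import time

def retry(ExceptionToCheck, tries=3, delay=1, backoff=2, logger=None):
    """Re-invoke the wrapped function when it raises, sleeping between
    attempts with exponential backoff; the final attempt propagates."""
    def deco(func):
        def wrapper(*args, **kwargs):
            _tries, _delay = tries, delay
            while _tries > 1:
                try:
                    return func(*args, **kwargs)
                except ExceptionToCheck as err:
                    if logger:
                        logger.warning(f"{err}, retrying in {_delay}s ...")
                    time.sleep(_delay)
                    _tries -= 1
                    _delay *= backoff
            return func(*args, **kwargs)
        return wrapper
    return deco

calls = []

@retry(Exception, tries=3, delay=0, backoff=1)
def flaky_search():
    # the first call simulates a "search_access_rate_limit" response
    calls.append(1)
    if len(calls) < 2:
        raise Exception("Douban API rate limit hit")
    return {"items": []}

result = flaky_search()
```

This mirrors how the method above raises on `search_access_rate_limit` so the decorator re-runs the whole search.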
def movie_top250(self, page: int = 1, count: int = 30) -> List[dict]:
@@ -173,7 +462,10 @@ class DoubanModule(_ModuleBase):
if not meta.name:
return
# 根据名称查询豆瓣数据
doubaninfo = self.__match(name=mediainfo.title, year=mediainfo.year, season=meta.begin_season)
doubaninfo = self.match_doubaninfo(name=mediainfo.title,
mtype=mediainfo.type.value,
year=mediainfo.year,
season=meta.begin_season)
if not doubaninfo:
logger.warn(f"未找到 {mediainfo.title} 的豆瓣信息")
return
@@ -192,9 +484,10 @@ class DoubanModule(_ModuleBase):
if not meta.name:
continue
# 根据名称查询豆瓣数据
doubaninfo = self.__match(name=mediainfo.title,
year=mediainfo.year,
season=meta.begin_season)
doubaninfo = self.match_doubaninfo(name=mediainfo.title,
mtype=mediainfo.type.value,
year=mediainfo.year,
season=meta.begin_season)
if not doubaninfo:
logger.warn(f"未找到 {mediainfo.title} 的豆瓣信息")
break


@@ -146,72 +146,113 @@ class DoubanApi(metaclass=Singleton):
_api_secret_key = "bf7dddc7c9cfe6f7"
_api_key = "0dad551ec0f84ed02907ff5c42e8ec70"
_base_url = "https://frodo.douban.com/api/v2"
_session = requests.Session()
_session = None
def __init__(self):
pass
self._session = requests.Session()
@classmethod
def __sign(cls, url: str, ts: int, method='GET') -> str:
url_path = parse.urlparse(url).path
raw_sign = '&'.join([method.upper(), parse.quote(url_path, safe=''), str(ts)])
return base64.b64encode(hmac.new(cls._api_secret_key.encode(), raw_sign.encode(), hashlib.sha1).digest()
).decode()
return base64.b64encode(
hmac.new(
cls._api_secret_key.encode(),
raw_sign.encode(),
hashlib.sha1
).digest()
).decode()
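The signing scheme above is self-contained enough to sketch end to end: the raw string is `METHOD&<percent-encoded path>&<ts>`, HMAC-SHA1'd with the secret key and base64-encoded. A standalone reproduction (using the `_api_secret_key` shown above):

```python
import base64
import hashlib
import hmac
from urllib import parse

API_SECRET_KEY = "bf7dddc7c9cfe6f7"  # _api_secret_key from the class above

def sign(url: str, ts, method: str = "GET") -> str:
    """Reproduce __sign: base64(HMAC-SHA1(secret, 'METHOD&<quoted path>&<ts>'))."""
    url_path = parse.urlparse(url).path
    raw_sign = "&".join([method.upper(), parse.quote(url_path, safe=""), str(ts)])
    return base64.b64encode(
        hmac.new(API_SECRET_KEY.encode(), raw_sign.encode(), hashlib.sha1).digest()
    ).decode()

sig = sign("https://frodo.douban.com/api/v2/search", "20231009")
```

The signature is deterministic for a given path and timestamp, which is why `_ts` must be sent along as a request parameter.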
@classmethod
@lru_cache(maxsize=settings.CACHE_CONF.get('douban'))
def __invoke(cls, url, **kwargs):
req_url = cls._base_url + url
def __invoke(self, url, **kwargs):
req_url = self._base_url + url
params = {'apiKey': cls._api_key}
params = {'apiKey': self._api_key}
if kwargs:
params.update(kwargs)
ts = params.pop('_ts', int(datetime.strftime(datetime.now(), '%Y%m%d')))
params.update({'os_rom': 'android', 'apiKey': cls._api_key, '_ts': ts, '_sig': cls.__sign(url=req_url, ts=ts)})
resp = RequestUtils(ua=choice(cls._user_agents), session=cls._session).get_res(url=req_url, params=params)
ts = params.pop(
'_ts',
datetime.strftime(datetime.now(), '%Y%m%d')
)
params.update({
'os_rom': 'android',
'apiKey': self._api_key,
'_ts': ts,
'_sig': self.__sign(url=req_url, ts=ts)
})
resp = RequestUtils(
ua=choice(self._user_agents),
session=self._session
).get_res(url=req_url, params=params)
if resp is not None and resp.status_code == 400 and "rate_limit" in resp.text:
return resp.json()
return resp.json() if resp else {}
def search(self, keyword, start=0, count=20, ts=datetime.strftime(datetime.now(), '%Y%m%d')):
return self.__invoke(self._urls["search"], q=keyword, start=start, count=count, _ts=ts)
def search(self, keyword, start=0, count=20,
ts=datetime.strftime(datetime.now(), '%Y%m%d')):
return self.__invoke(self._urls["search"], q=keyword,
start=start, count=count, _ts=ts)
def movie_search(self, keyword, start=0, count=20, ts=datetime.strftime(datetime.now(), '%Y%m%d')):
return self.__invoke(self._urls["movie_search"], q=keyword, start=start, count=count, _ts=ts)
def movie_search(self, keyword, start=0, count=20,
ts=datetime.strftime(datetime.now(), '%Y%m%d')):
return self.__invoke(self._urls["movie_search"], q=keyword,
start=start, count=count, _ts=ts)
def tv_search(self, keyword, start=0, count=20, ts=datetime.strftime(datetime.now(), '%Y%m%d')):
return self.__invoke(self._urls["tv_search"], q=keyword, start=start, count=count, _ts=ts)
def tv_search(self, keyword, start=0, count=20,
ts=datetime.strftime(datetime.now(), '%Y%m%d')):
return self.__invoke(self._urls["tv_search"], q=keyword,
start=start, count=count, _ts=ts)
def book_search(self, keyword, start=0, count=20, ts=datetime.strftime(datetime.now(), '%Y%m%d')):
return self.__invoke(self._urls["book_search"], q=keyword, start=start, count=count, _ts=ts)
def book_search(self, keyword, start=0, count=20,
ts=datetime.strftime(datetime.now(), '%Y%m%d')):
return self.__invoke(self._urls["book_search"], q=keyword,
start=start, count=count, _ts=ts)
def group_search(self, keyword, start=0, count=20, ts=datetime.strftime(datetime.now(), '%Y%m%d')):
return self.__invoke(self._urls["group_search"], q=keyword, start=start, count=count, _ts=ts)
def group_search(self, keyword, start=0, count=20,
ts=datetime.strftime(datetime.now(), '%Y%m%d')):
return self.__invoke(self._urls["group_search"], q=keyword,
start=start, count=count, _ts=ts)
def movie_showing(self, start=0, count=20, ts=datetime.strftime(datetime.now(), '%Y%m%d')):
return self.__invoke(self._urls["movie_showing"], start=start, count=count, _ts=ts)
def movie_showing(self, start=0, count=20,
ts=datetime.strftime(datetime.now(), '%Y%m%d')):
return self.__invoke(self._urls["movie_showing"],
start=start, count=count, _ts=ts)
def movie_soon(self, start=0, count=20, ts=datetime.strftime(datetime.now(), '%Y%m%d')):
return self.__invoke(self._urls["movie_soon"], start=start, count=count, _ts=ts)
def movie_soon(self, start=0, count=20,
ts=datetime.strftime(datetime.now(), '%Y%m%d')):
return self.__invoke(self._urls["movie_soon"],
start=start, count=count, _ts=ts)
def movie_hot_gaia(self, start=0, count=20, ts=datetime.strftime(datetime.now(), '%Y%m%d')):
return self.__invoke(self._urls["movie_hot_gaia"], start=start, count=count, _ts=ts)
def movie_hot_gaia(self, start=0, count=20,
ts=datetime.strftime(datetime.now(), '%Y%m%d')):
return self.__invoke(self._urls["movie_hot_gaia"],
start=start, count=count, _ts=ts)
def tv_hot(self, start=0, count=20, ts=datetime.strftime(datetime.now(), '%Y%m%d')):
return self.__invoke(self._urls["tv_hot"], start=start, count=count, _ts=ts)
def tv_hot(self, start=0, count=20,
ts=datetime.strftime(datetime.now(), '%Y%m%d')):
return self.__invoke(self._urls["tv_hot"],
start=start, count=count, _ts=ts)
def tv_animation(self, start=0, count=20, ts=datetime.strftime(datetime.now(), '%Y%m%d')):
return self.__invoke(self._urls["tv_animation"], start=start, count=count, _ts=ts)
def tv_animation(self, start=0, count=20,
ts=datetime.strftime(datetime.now(), '%Y%m%d')):
return self.__invoke(self._urls["tv_animation"],
start=start, count=count, _ts=ts)
def tv_variety_show(self, start=0, count=20, ts=datetime.strftime(datetime.now(), '%Y%m%d')):
return self.__invoke(self._urls["tv_variety_show"], start=start, count=count, _ts=ts)
def tv_variety_show(self, start=0, count=20,
ts=datetime.strftime(datetime.now(), '%Y%m%d')):
return self.__invoke(self._urls["tv_variety_show"],
start=start, count=count, _ts=ts)
def tv_rank_list(self, start=0, count=20, ts=datetime.strftime(datetime.now(), '%Y%m%d')):
return self.__invoke(self._urls["tv_rank_list"], start=start, count=count, _ts=ts)
def tv_rank_list(self, start=0, count=20,
ts=datetime.strftime(datetime.now(), '%Y%m%d')):
return self.__invoke(self._urls["tv_rank_list"],
start=start, count=count, _ts=ts)
def show_hot(self, start=0, count=20, ts=datetime.strftime(datetime.now(), '%Y%m%d')):
return self.__invoke(self._urls["show_hot"], start=start, count=count, _ts=ts)
def show_hot(self, start=0, count=20,
ts=datetime.strftime(datetime.now(), '%Y%m%d')):
return self.__invoke(self._urls["show_hot"],
start=start, count=count, _ts=ts)
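A note on the `ts=datetime.strftime(datetime.now(), '%Y%m%d')` defaults used throughout these signatures: a Python default argument is evaluated once, at function definition time, so a long-running process would keep signing with the import-day date unless the caller passes `ts` explicitly (as the updated `match_doubaninfo` call now does). A minimal illustration of the gotcha:

```python
from datetime import datetime

def stale_ts(ts=datetime.strftime(datetime.now(), '%Y%m%d')):
    # the default is computed ONCE, when the function is defined
    return ts

def fresh_ts(ts=None):
    # computing inside the body re-evaluates on every call
    return ts or datetime.strftime(datetime.now(), '%Y%m%d')

frozen = stale_ts()
explicit = fresh_ts(ts="20231009")
```

Passing `ts` per call, or defaulting to `None` and resolving inside the body, sidesteps the stale default.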
def movie_detail(self, subject_id):
return self.__invoke(self._urls["movie_detail"] + subject_id)
@@ -228,20 +269,30 @@ class DoubanApi(metaclass=Singleton):
def book_detail(self, subject_id):
return self.__invoke(self._urls["book_detail"] + subject_id)
def movie_top250(self, start=0, count=20, ts=datetime.strftime(datetime.now(), '%Y%m%d')):
return self.__invoke(self._urls["movie_top250"], start=start, count=count, _ts=ts)
def movie_top250(self, start=0, count=20,
ts=datetime.strftime(datetime.now(), '%Y%m%d')):
return self.__invoke(self._urls["movie_top250"],
start=start, count=count, _ts=ts)
def movie_recommend(self, tags='', sort='R', start=0, count=20, ts=datetime.strftime(datetime.now(), '%Y%m%d')):
return self.__invoke(self._urls["movie_recommend"], tags=tags, sort=sort, start=start, count=count, _ts=ts)
def movie_recommend(self, tags='', sort='R', start=0, count=20,
ts=datetime.strftime(datetime.now(), '%Y%m%d')):
return self.__invoke(self._urls["movie_recommend"], tags=tags, sort=sort,
start=start, count=count, _ts=ts)
def tv_recommend(self, tags='', sort='R', start=0, count=20, ts=datetime.strftime(datetime.now(), '%Y%m%d')):
return self.__invoke(self._urls["tv_recommend"], tags=tags, sort=sort, start=start, count=count, _ts=ts)
def tv_recommend(self, tags='', sort='R', start=0, count=20,
ts=datetime.strftime(datetime.now(), '%Y%m%d')):
return self.__invoke(self._urls["tv_recommend"], tags=tags, sort=sort,
start=start, count=count, _ts=ts)
def tv_chinese_best_weekly(self, start=0, count=20, ts=datetime.strftime(datetime.now(), '%Y%m%d')):
return self.__invoke(self._urls["tv_chinese_best_weekly"], start=start, count=count, _ts=ts)
def tv_chinese_best_weekly(self, start=0, count=20,
ts=datetime.strftime(datetime.now(), '%Y%m%d')):
return self.__invoke(self._urls["tv_chinese_best_weekly"],
start=start, count=count, _ts=ts)
def tv_global_best_weekly(self, start=0, count=20, ts=datetime.strftime(datetime.now(), '%Y%m%d')):
return self.__invoke(self._urls["tv_global_best_weekly"], start=start, count=count, _ts=ts)
def tv_global_best_weekly(self, start=0, count=20,
ts=datetime.strftime(datetime.now(), '%Y%m%d')):
return self.__invoke(self._urls["tv_global_best_weekly"],
start=start, count=count, _ts=ts)
def doulist_detail(self, subject_id):
"""
@@ -250,7 +301,8 @@ class DoubanApi(metaclass=Singleton):
"""
return self.__invoke(self._urls["doulist"] + subject_id)
def doulist_items(self, subject_id, start=0, count=20, ts=datetime.strftime(datetime.now(), '%Y%m%d')):
def doulist_items(self, subject_id, start=0, count=20,
ts=datetime.strftime(datetime.now(), '%Y%m%d')):
"""
Doulist item list
:param subject_id: doulist id
@@ -258,4 +310,9 @@ class DoubanApi(metaclass=Singleton):
:param count: 数量
:param ts: 时间戳
"""
return self.__invoke(self._urls["doulist_items"] % subject_id, start=start, count=count, _ts=ts)
return self.__invoke(self._urls["doulist_items"] % subject_id,
start=start, count=count, _ts=ts)
def __del__(self):
if self._session:
self._session.close()
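`DoubanApi` now creates its `requests.Session` per instance, and because the class is declared with the project's `Singleton` metaclass there is effectively one instance (and one session) per process, closed in `__del__`. A minimal sketch of such a metaclass (assumption: the project's own `Singleton` may differ in detail):

```python
class Singleton(type):
    """Minimal sketch of a Singleton metaclass like the one DoubanApi uses."""
    _instances = {}

    def __call__(cls, *args, **kwargs):
        # create the instance once, then always hand back the same object
        if cls not in cls._instances:
            cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]

class Api(metaclass=Singleton):
    def __init__(self):
        self.session = object()  # stands in for requests.Session()

first = Api()
second = Api()
```

Every construction returns the same object, so instance attributes like the session are shared process-wide.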


@@ -6,7 +6,6 @@ from app.core.context import MediaInfo
from app.log import logger
from app.modules import _ModuleBase
from app.modules.emby.emby import Emby
from app.schemas import ExistMediaInfo, RefreshMediaItem, WebhookEventInfo
from app.schemas.types import MediaType
@@ -28,7 +27,7 @@ class EmbyModule(_ModuleBase):
"""
# 定时重连
if not self.emby.is_inactive():
self.emby = Emby()
self.emby.reconnect()
def user_authenticate(self, name: str, password: str) -> Optional[str]:
"""
@@ -40,7 +39,7 @@ class EmbyModule(_ModuleBase):
# Emby认证
return self.emby.authenticate(name, password)
def webhook_parser(self, body: Any, form: Any, args: Any) -> Optional[WebhookEventInfo]:
def webhook_parser(self, body: Any, form: Any, args: Any) -> Optional[schemas.WebhookEventInfo]:
"""
解析Webhook报文体
:param body: 请求体
@@ -50,7 +49,7 @@ class EmbyModule(_ModuleBase):
"""
return self.emby.get_webhook_message(form, args)
def media_exists(self, mediainfo: MediaInfo, itemid: str = None) -> Optional[ExistMediaInfo]:
def media_exists(self, mediainfo: MediaInfo, itemid: str = None) -> Optional[schemas.ExistMediaInfo]:
"""
判断媒体文件是否存在
:param mediainfo: 识别的媒体信息
@@ -62,25 +61,40 @@ class EmbyModule(_ModuleBase):
movie = self.emby.get_iteminfo(itemid)
if movie:
logger.info(f"媒体库中已存在:{movie}")
return ExistMediaInfo(type=MediaType.MOVIE)
movies = self.emby.get_movies(title=mediainfo.title, year=mediainfo.year, tmdb_id=mediainfo.tmdb_id)
return schemas.ExistMediaInfo(
type=MediaType.MOVIE,
server="emby",
itemid=movie.item_id
)
movies = self.emby.get_movies(title=mediainfo.title,
year=mediainfo.year,
tmdb_id=mediainfo.tmdb_id)
if not movies:
logger.info(f"{mediainfo.title_year} 在媒体库中不存在")
return None
else:
logger.info(f"媒体库中已存在:{movies}")
return ExistMediaInfo(type=MediaType.MOVIE)
return schemas.ExistMediaInfo(
type=MediaType.MOVIE,
server="emby",
itemid=movies[0].item_id
)
else:
tvs = self.emby.get_tv_episodes(title=mediainfo.title,
year=mediainfo.year,
tmdb_id=mediainfo.tmdb_id,
item_id=itemid)
itemid, tvs = self.emby.get_tv_episodes(title=mediainfo.title,
year=mediainfo.year,
tmdb_id=mediainfo.tmdb_id,
item_id=itemid)
if not tvs:
logger.info(f"{mediainfo.title_year} 在媒体库中不存在")
return None
else:
logger.info(f"{mediainfo.title_year} 媒体库中已存在:{tvs}")
return ExistMediaInfo(type=MediaType.TV, seasons=tvs)
return schemas.ExistMediaInfo(
type=MediaType.TV,
seasons=tvs,
server="emby",
itemid=itemid
)
def refresh_mediaserver(self, mediainfo: MediaInfo, file_path: Path) -> None:
"""
@@ -90,7 +104,7 @@ class EmbyModule(_ModuleBase):
:return: 成功或失败
"""
items = [
RefreshMediaItem(
schemas.RefreshMediaItem(
title=mediainfo.title,
year=mediainfo.year,
type=mediainfo.type,
@@ -105,13 +119,8 @@ class EmbyModule(_ModuleBase):
媒体数量统计
"""
media_statistic = self.emby.get_medias_count()
user_count = self.emby.get_user_count()
return [schemas.Statistic(
movie_count=media_statistic.get("MovieCount") or 0,
tv_count=media_statistic.get("SeriesCount") or 0,
episode_count=media_statistic.get("EpisodeCount") or 0,
user_count=user_count or 0
)]
media_statistic.user_count = self.emby.get_user_count()
return [media_statistic]
def mediaserver_librarys(self, server: str) -> Optional[List[schemas.MediaServerLibrary]]:
"""
@@ -119,16 +128,7 @@ class EmbyModule(_ModuleBase):
"""
if server != "emby":
return None
librarys = self.emby.get_librarys()
if not librarys:
return []
return [schemas.MediaServerLibrary(
server="emby",
id=library.get("id"),
name=library.get("name"),
type=library.get("type"),
path=library.get("path")
) for library in librarys]
return self.emby.get_librarys()
def mediaserver_items(self, server: str, library_id: str) -> Optional[Generator]:
"""
@@ -136,21 +136,15 @@ class EmbyModule(_ModuleBase):
"""
if server != "emby":
return None
items = self.emby.get_items(library_id)
for item in items:
yield schemas.MediaServerItem(
server="emby",
library=item.get("library"),
item_id=item.get("id"),
item_type=item.get("type"),
title=item.get("title"),
original_title=item.get("original_title"),
year=item.get("year"),
tmdbid=int(item.get("tmdbid")) if item.get("tmdbid") else None,
imdbid=item.get("imdbid"),
tvdbid=item.get("tvdbid"),
path=item.get("path"),
)
return self.emby.get_items(library_id)
def mediaserver_iteminfo(self, server: str, item_id: str) -> Optional[schemas.MediaServerItem]:
"""
媒体库项目详情
"""
if server != "emby":
return None
return self.emby.get_iteminfo(item_id)
def mediaserver_tv_episodes(self, server: str,
item_id: Union[str, int]) -> Optional[List[schemas.MediaServerSeasonInfo]]:
@@ -159,7 +153,7 @@ class EmbyModule(_ModuleBase):
"""
if server != "emby":
return None
seasoninfo = self.emby.get_tv_episodes(item_id=item_id)
_, seasoninfo = self.emby.get_tv_episodes(item_id=item_id)
if not seasoninfo:
return []
return [schemas.MediaServerSeasonInfo(


@@ -1,17 +1,16 @@
import json
import re
from pathlib import Path
from typing import List, Optional, Union, Dict, Generator
from typing import List, Optional, Union, Dict, Generator, Tuple
from requests import Response
from app import schemas
from app.core.config import settings
from app.log import logger
from app.schemas import RefreshMediaItem, WebhookEventInfo
from app.schemas.types import MediaType
from app.utils.http import RequestUtils
from app.utils.singleton import Singleton
from app.utils.string import StringUtils
class Emby(metaclass=Singleton):
@@ -35,6 +34,13 @@ class Emby(metaclass=Singleton):
return False
return not self.user
def reconnect(self):
"""
Reconnect
"""
self.user = self.get_user()
self.folders = self.get_emby_folders()
def get_emby_folders(self) -> List[dict]:
"""
获取Emby媒体库路径列表
@@ -71,7 +77,7 @@ class Emby(metaclass=Singleton):
logger.error(f"连接User/Views 出错:" + str(e))
return []
def get_librarys(self):
def get_librarys(self) -> List[schemas.MediaServerLibrary]:
"""
获取媒体服务器所有媒体库列表
"""
@@ -86,12 +92,15 @@ class Emby(metaclass=Singleton):
library_type = MediaType.TV.value
case _:
continue
libraries.append({
"id": library.get("Id"),
"name": library.get("Name"),
"path": library.get("Path"),
"type": library_type
})
libraries.append(
schemas.MediaServerLibrary(
server="emby",
id=library.get("Id"),
name=library.get("Name"),
path=library.get("Path"),
type=library_type
)
)
return libraries
def get_user(self, user_name: str = None) -> Optional[Union[str, int]]:
@@ -193,59 +202,29 @@ class Emby(metaclass=Singleton):
logger.error(f"连接Users/Query出错" + str(e))
return 0
def get_activity_log(self, num: int = 30) -> List[dict]:
"""
获取Emby活动记录
"""
if not self._host or not self._apikey:
return []
req_url = "%semby/System/ActivityLog/Entries?api_key=%s&" % (self._host, self._apikey)
ret_array = []
try:
res = RequestUtils().get_res(req_url)
if res:
ret_json = res.json()
items = ret_json.get('Items')
for item in items:
if item.get("Type") == "AuthenticationSucceeded":
event_type = "LG"
event_date = StringUtils.get_time(item.get("Date"))
event_str = "%s, %s" % (item.get("Name"), item.get("ShortOverview"))
activity = {"type": event_type, "event": event_str, "date": event_date}
ret_array.append(activity)
if item.get("Type") in ["VideoPlayback", "VideoPlaybackStopped"]:
event_type = "PL"
event_date = StringUtils.get_time(item.get("Date"))
event_str = item.get("Name")
activity = {"type": event_type, "event": event_str, "date": event_date}
ret_array.append(activity)
else:
logger.error(f"System/ActivityLog/Entries 未获取到返回数据")
return []
except Exception as e:
logger.error(f"连接System/ActivityLog/Entries出错" + str(e))
return []
return ret_array[:num]
def get_medias_count(self) -> dict:
def get_medias_count(self) -> schemas.Statistic:
"""
获得电影、电视剧、动漫媒体数量
:return: MovieCount SeriesCount SongCount
"""
if not self._host or not self._apikey:
return {}
return schemas.Statistic()
req_url = "%semby/Items/Counts?api_key=%s" % (self._host, self._apikey)
try:
res = RequestUtils().get_res(req_url)
if res:
return res.json()
result = res.json()
return schemas.Statistic(
movie_count=result.get("MovieCount") or 0,
tv_count=result.get("SeriesCount") or 0,
episode_count=result.get("EpisodeCount") or 0
)
else:
logger.error(f"Items/Counts 未获取到返回数据")
return {}
return schemas.Statistic()
except Exception as e:
logger.error(f"连接Items/Counts出错" + str(e))
return {}
return schemas.Statistic()
def __get_emby_series_id_by_name(self, name: str, year: str) -> Optional[str]:
"""
@@ -256,7 +235,15 @@ class Emby(metaclass=Singleton):
"""
if not self._host or not self._apikey:
return None
req_url = "%semby/Items?IncludeItemTypes=Series&Fields=ProductionYear&StartIndex=0&Recursive=true&SearchTerm=%s&Limit=10&IncludeSearchTypes=false&api_key=%s" % (
req_url = ("%semby/Items?"
"IncludeItemTypes=Series"
"&Fields=ProductionYear"
"&StartIndex=0"
"&Recursive=true"
"&SearchTerm=%s"
"&Limit=10"
"&IncludeSearchTypes=false"
"&api_key=%s") % (
self._host, name, self._apikey)
try:
res = RequestUtils().get_res(req_url)
@@ -275,7 +262,7 @@ class Emby(metaclass=Singleton):
def get_movies(self,
title: str,
year: str = None,
tmdb_id: int = None) -> Optional[List[dict]]:
tmdb_id: int = None) -> Optional[List[schemas.MediaServerItem]]:
"""
根据标题和年份检查电影是否在Emby中存在存在则返回列表
:param title: 标题
@@ -296,17 +283,28 @@ class Emby(metaclass=Singleton):
ret_movies = []
for res_item in res_items:
item_tmdbid = res_item.get("ProviderIds", {}).get("Tmdb")
mediaserver_item = schemas.MediaServerItem(
server="emby",
library=res_item.get("ParentId"),
item_id=res_item.get("Id"),
item_type=res_item.get("Type"),
title=res_item.get("Name"),
original_title=res_item.get("OriginalTitle"),
year=res_item.get("ProductionYear"),
tmdbid=int(item_tmdbid) if item_tmdbid else None,
imdbid=res_item.get("ProviderIds", {}).get("Imdb"),
tvdbid=res_item.get("ProviderIds", {}).get("Tvdb"),
path=res_item.get("Path")
)
if tmdb_id and item_tmdbid:
if str(item_tmdbid) != str(tmdb_id):
continue
else:
ret_movies.append(
{'title': res_item.get('Name'), 'year': str(res_item.get('ProductionYear'))})
ret_movies.append(mediaserver_item)
continue
if res_item.get('Name') == title and (
not year or str(res_item.get('ProductionYear')) == str(year)):
ret_movies.append(
{'title': res_item.get('Name'), 'year': str(res_item.get('ProductionYear'))})
if (mediaserver_item.title == title
and (not year or str(mediaserver_item.year) == str(year))):
ret_movies.append(mediaserver_item)
return ret_movies
except Exception as e:
logger.error(f"连接Items出错" + str(e))
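The selection logic in `get_movies` above can be condensed into a predicate: when both sides carry a TMDB id, that comparison alone decides; otherwise it falls back to an exact title match plus an optional year check. A sketch (plain dict keys stand in for the `MediaServerItem` attributes used above):

```python
def movie_matches(item: dict, title: str, year=None, tmdb_id=None) -> bool:
    """Sketch of get_movies' matching: TMDB id wins when available on
    both sides; otherwise exact title plus optional year."""
    if tmdb_id and item.get("tmdbid"):
        return str(item["tmdbid"]) == str(tmdb_id)
    return (item.get("title") == title
            and (not year or str(item.get("year")) == str(year)))

hit = movie_matches({"title": "Inception", "year": 2010, "tmdbid": 27205},
                    title="Whatever", tmdb_id=27205)
miss = movie_matches({"title": "Inception", "year": 2010},
                     title="Inception", year="2011")
```

Note the id comparison goes through `str()` on both sides, matching the `str(item_tmdbid) != str(tmdb_id)` guard above.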
@@ -318,7 +316,8 @@ class Emby(metaclass=Singleton):
title: str = None,
year: str = None,
tmdb_id: int = None,
season: int = None) -> Optional[Dict[int, list]]:
season: int = None
) -> Tuple[Optional[str], Optional[Dict[int, list]]]:
"""
根据标题和年份和季返回Emby中的剧集列表
:param item_id: Emby中的ID
@@ -329,22 +328,21 @@ class Emby(metaclass=Singleton):
:return: 每一季的已有集数
"""
if not self._host or not self._apikey:
return None
return None, None
# 电视剧
if not item_id:
item_id = self.__get_emby_series_id_by_name(title, year)
if item_id is None:
return None
return None, None
if not item_id:
return {}
return None, {}
# 验证tmdbid是否相同
item_info = self.get_iteminfo(item_id)
if item_info:
item_tmdbid = (item_info.get("ProviderIds") or {}).get("Tmdb")
if tmdb_id and item_tmdbid:
if str(tmdb_id) != str(item_tmdbid):
return {}
# /Shows/Id/Episodes 查集的信息
if tmdb_id and item_info.tmdbid:
if str(tmdb_id) != str(item_info.tmdbid):
return None, {}
# 查集的信息
if not season:
season = ""
try:
@@ -352,7 +350,8 @@ class Emby(metaclass=Singleton):
self._host, item_id, season, self._apikey)
res_json = RequestUtils().get_res(req_url)
if res_json:
res_items = res_json.json().get("Items")
tv_item = res_json.json()
res_items = tv_item.get("Items")
season_episodes = {}
for res_item in res_items:
season_index = res_item.get("ParentIndexNumber")
@@ -367,11 +366,11 @@ class Emby(metaclass=Singleton):
season_episodes[season_index] = []
season_episodes[season_index].append(episode_index)
# 返回
return season_episodes
return tv_item.get("Id"), season_episodes
except Exception as e:
logger.error(f"连接Shows/Id/Episodes出错" + str(e))
return None
return {}
return None, None
return None, {}
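The `season_episodes` aggregation inside `get_tv_episodes` (bucketing each item's `IndexNumber` under its `ParentIndexNumber`) can be sketched in isolation; the sample items below are illustrative:

```python
from typing import Dict, List

def group_episodes(items: List[dict]) -> Dict[int, list]:
    """Sketch of the season_episodes aggregation in get_tv_episodes:
    bucket each episode's IndexNumber under its ParentIndexNumber."""
    season_episodes: Dict[int, list] = {}
    for item in items:
        season = item.get("ParentIndexNumber")
        episode = item.get("IndexNumber")
        if season is None or episode is None:
            continue
        season_episodes.setdefault(season, []).append(episode)
    return season_episodes

grouped = group_episodes([
    {"ParentIndexNumber": 1, "IndexNumber": 1},
    {"ParentIndexNumber": 1, "IndexNumber": 2},
    {"ParentIndexNumber": 2, "IndexNumber": 1},
    {"IndexNumber": 3},  # no season number -> skipped
])
```

The new tuple return pairs this dict with the series item id, which is why every early exit above now returns two values.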
def get_remote_image_by_id(self, item_id: str, image_type: str) -> Optional[str]:
"""
@@ -434,7 +433,7 @@ class Emby(metaclass=Singleton):
return False
return False
def refresh_library_by_items(self, items: List[RefreshMediaItem]) -> bool:
def refresh_library_by_items(self, items: List[schemas.RefreshMediaItem]) -> bool:
"""
按类型、名称、年份来刷新媒体库
:param items: 已识别的需要刷新媒体库的媒体信息列表
@@ -456,7 +455,7 @@ class Emby(metaclass=Singleton):
return self.__refresh_emby_library_by_id(library_id)
logger.info(f"Emby媒体库刷新完成")
def __get_emby_library_id_by_item(self, item: RefreshMediaItem) -> Optional[str]:
def __get_emby_library_id_by_item(self, item: schemas.RefreshMediaItem) -> Optional[str]:
"""
根据媒体信息查询在哪个媒体库返回要刷新的位置的ID
:param item: {title, year, type, category, target_path}
@@ -474,17 +473,18 @@ class Emby(metaclass=Singleton):
return None
# 查找需要刷新的媒体库ID
item_path = Path(item.target_path)
# 匹配子目录
for folder in self.folders:
# 匹配子目录
for subfolder in folder.get("SubFolders"):
try:
# 匹配子目录
subfolder_path = Path(subfolder.get("Path"))
if item_path.is_relative_to(subfolder_path):
return subfolder.get("Id")
return folder.get("Id")
except Exception as err:
print(str(err))
# 如果找不到,只要路径中有分类目录名就命中
# 如果找不到,只要路径中有分类目录名就命中
for folder in self.folders:
for subfolder in folder.get("SubFolders"):
if subfolder.get("Path") and re.search(r"[/\\]%s" % item.category,
subfolder.get("Path")):
@@ -492,31 +492,45 @@ class Emby(metaclass=Singleton):
# 刷新根目录
return "/"
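The sub-folder matching in `__get_emby_library_id_by_item` relies on `Path.is_relative_to` (Python 3.9+). A standalone sketch of that lookup, with illustrative folder data and POSIX paths for determinism:

```python
from pathlib import PurePosixPath

def match_library_id(folders, item_path: str) -> str:
    """Sketch of __get_emby_library_id_by_item's matching: return the Id
    of the sub-folder whose Path contains the item, else refresh the root."""
    path = PurePosixPath(item_path)
    for folder in folders:
        for subfolder in folder.get("SubFolders", []):
            sub = subfolder.get("Path")
            if sub and path.is_relative_to(PurePosixPath(sub)):
                return subfolder.get("Id")
    return "/"  # fall back to refreshing the root

folders = [{"Id": "10", "SubFolders": [
    {"Id": "11", "Path": "/media/tv"},
    {"Id": "12", "Path": "/media/movies"},
]}]
lib = match_library_id(folders, "/media/movies/Inception (2010)/Inception.mkv")
```

The real method additionally falls back to a regex search for the category name in the sub-folder path before resorting to the root.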
def get_iteminfo(self, itemid: str) -> dict:
def get_iteminfo(self, itemid: str) -> Optional[schemas.MediaServerItem]:
"""
获取单个项目详情
"""
if not itemid:
return {}
return None
if not self._host or not self._apikey:
return {}
return None
req_url = "%semby/Users/%s/Items/%s?api_key=%s" % (self._host, self.user, itemid, self._apikey)
try:
res = RequestUtils().get_res(req_url)
if res and res.status_code == 200:
return res.json()
item = res.json()
tmdbid = item.get("ProviderIds", {}).get("Tmdb")
return schemas.MediaServerItem(
server="emby",
library=item.get("ParentId"),
item_id=item.get("Id"),
item_type=item.get("Type"),
title=item.get("Name"),
original_title=item.get("OriginalTitle"),
year=item.get("ProductionYear"),
tmdbid=int(tmdbid) if tmdbid else None,
imdbid=item.get("ProviderIds", {}).get("Imdb"),
tvdbid=item.get("ProviderIds", {}).get("Tvdb"),
path=item.get("Path")
)
except Exception as e:
logger.error(f"连接Items/Id出错" + str(e))
return {}
return None
def get_items(self, parent: str) -> Generator:
"""
获取媒体服务器所有媒体库列表
"""
if not parent:
yield {}
yield None
if not self._host or not self._apikey:
yield {}
yield None
req_url = "%semby/Users/%s/Items?ParentId=%s&api_key=%s" % (self._host, self.user, parent, self._apikey)
try:
res = RequestUtils().get_res(req_url)
@@ -526,26 +540,15 @@ class Emby(metaclass=Singleton):
if not result:
continue
if result.get("Type") in ["Movie", "Series"]:
item_info = self.get_iteminfo(result.get("Id"))
yield {"id": result.get("Id"),
"library": item_info.get("ParentId"),
"type": item_info.get("Type"),
"title": item_info.get("Name"),
"original_title": item_info.get("OriginalTitle"),
"year": item_info.get("ProductionYear"),
"tmdbid": item_info.get("ProviderIds", {}).get("Tmdb"),
"imdbid": item_info.get("ProviderIds", {}).get("Imdb"),
"tvdbid": item_info.get("ProviderIds", {}).get("Tvdb"),
"path": item_info.get("Path"),
"json": str(item_info)}
yield self.get_iteminfo(result.get("Id"))
elif "Folder" in result.get("Type"):
for item in self.get_items(parent=result.get('Id')):
yield item
except Exception as e:
logger.error(f"连接Users/Items出错" + str(e))
yield {}
yield None
def get_webhook_message(self, form: any, args: dict) -> Optional[WebhookEventInfo]:
def get_webhook_message(self, form: any, args: dict) -> Optional[schemas.WebhookEventInfo]:
"""
解析Emby Webhook报文
电影:
@@ -798,7 +801,7 @@ class Emby(metaclass=Singleton):
if not eventType:
return None
logger.info(f"接收到emby webhook{message}")
eventItem = WebhookEventInfo(event=eventType, channel="emby")
eventItem = schemas.WebhookEventInfo(event=eventType, channel="emby")
if message.get('Item'):
if message.get('Item', {}).get('Type') == 'Episode':
eventItem.item_type = "TV"
@@ -864,16 +867,36 @@ class Emby(metaclass=Singleton):
def get_data(self, url: str) -> Optional[Response]:
"""
自定义URL从媒体服务器获取数据其中{HOST}{APIKEY}{USER}会被替换成实际的值
自定义URL从媒体服务器获取数据其中[HOST]、[APIKEY]、[USER]会被替换成实际的值
:param url: 请求地址
"""
if not self._host or not self._apikey:
return None
url = url.replace("{HOST}", self._host) \
.replace("{APIKEY}", self._apikey) \
.replace("{USER}", self.user)
url = url.replace("[HOST]", self._host) \
.replace("[APIKEY]", self._apikey) \
.replace("[USER]", self.user)
try:
return RequestUtils().get_res(url=url)
return RequestUtils(content_type="application/json").get_res(url=url)
except Exception as e:
logger.error(f"连接Emby出错" + str(e))
return None
def post_data(self, url: str, data: str = None, headers: dict = None) -> Optional[Response]:
"""
自定义URL从媒体服务器获取数据其中[HOST]、[APIKEY]、[USER]会被替换成实际的值
:param url: 请求地址
:param data: 请求数据
:param headers: 请求头
"""
if not self._host or not self._apikey:
return None
url = url.replace("[HOST]", self._host) \
.replace("[APIKEY]", self._apikey) \
.replace("[USER]", self.user)
try:
return RequestUtils(
headers=headers,
).post_res(url=url, data=data)
except Exception as e:
logger.error(f"连接Emby出错" + str(e))
return None
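The `[HOST]`/`[APIKEY]`/`[USER]` substitution shared by `get_data` and `post_data` is simple string replacement; note that `self._host` carries a trailing slash in the `"%semby/..."` URLs elsewhere in this class. A sketch with illustrative values:

```python
def fill_placeholders(url: str, host: str, apikey: str, user: str) -> str:
    """Sketch of the [HOST]/[APIKEY]/[USER] substitution in get_data/post_data."""
    return (url.replace("[HOST]", host)
               .replace("[APIKEY]", apikey)
               .replace("[USER]", user))

url = fill_placeholders("[HOST]emby/Users/[USER]/Items?api_key=[APIKEY]",
                        host="http://emby:8096/", apikey="abc123", user="42")
```

Switching the placeholder delimiters from `{...}` to `[...]` (as this commit does) avoids colliding with literal braces in user-supplied URLs.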


@@ -329,7 +329,11 @@ class FanartModule(_ModuleBase):
if mediainfo.type == MediaType.MOVIE:
result = self.__request_fanart(mediainfo.type, mediainfo.tmdb_id)
else:
result = self.__request_fanart(mediainfo.type, mediainfo.tvdb_id)
if mediainfo.tvdb_id:
result = self.__request_fanart(mediainfo.type, mediainfo.tvdb_id)
else:
logger.info(f"{mediainfo.title_year} 没有tvdbid无法获取Fanart图片")
return
if not result or result.get('status') == 'error':
logger.warn(f"没有获取到 {mediainfo.title_year} 的Fanart图片数据")
return
@@ -351,6 +355,7 @@ class FanartModule(_ModuleBase):
# 季图片格式 seasonxx-poster
image_name = f"season{str(image_season).rjust(2, '0')}-{image_name[6:]}"
if not mediainfo.get_image(image_name):
# 没有图片才设置
mediainfo.set_image(image_name, image_obj.get('url'))
return mediainfo
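The season-image renaming above (`f"season{str(image_season).rjust(2, '0')}-{image_name[6:]}"`) strips the `season` prefix and zero-pads the season number. Isolated as a helper:

```python
def season_image_name(image_name: str, image_season) -> str:
    """Sketch of the Fanart season-poster renaming above: drop the
    'season' prefix from the image name and zero-pad the season number."""
    return f"season{str(image_season).rjust(2, '0')}-{image_name[6:]}"

renamed = season_image_name("seasonposter", 1)
```

So a `seasonposter` image for season 1 is stored under the `season01-poster` key, and only set when `mediainfo` has no image for that key yet.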

View File

@@ -11,7 +11,7 @@ from app.core.meta import MetaBase
from app.core.metainfo import MetaInfo
from app.log import logger
from app.modules import _ModuleBase
from app.schemas import TransferInfo, ExistMediaInfo
from app.schemas import TransferInfo, ExistMediaInfo, TmdbEpisode
from app.schemas.types import MediaType
from app.utils.system import SystemUtils
@@ -30,7 +30,8 @@ class FileTransferModule(_ModuleBase):
pass
def transfer(self, path: Path, meta: MetaBase, mediainfo: MediaInfo,
transfer_type: str, target: Path = None) -> TransferInfo:
transfer_type: str, target: Path = None,
episodes_info: List[TmdbEpisode] = None) -> TransferInfo:
"""
文件转移
:param path: 文件路径
@@ -38,6 +39,7 @@ class FileTransferModule(_ModuleBase):
:param mediainfo: 识别的媒体信息
:param transfer_type: 转移方式
:param target: 目标路径
:param episodes_info: 当前季的全部集信息
:return: {path, target_path, message}
"""
# 获取目标路径
@@ -49,13 +51,14 @@ class FileTransferModule(_ModuleBase):
logger.error("未找到媒体库目录,无法转移文件")
return TransferInfo(success=False,
path=path,
message="未找到媒体库目录,无法转移文件")
message="未找到媒体库目录")
# 转移
return self.transfer_media(in_path=path,
in_meta=meta,
mediainfo=mediainfo,
transfer_type=transfer_type,
target_dir=target)
target_dir=target,
episodes_info=episodes_info)
@staticmethod
def __transfer_command(file_item: Path, target_file: Path, transfer_type: str) -> int:
@@ -355,6 +358,7 @@ class FileTransferModule(_ModuleBase):
mediainfo: MediaInfo,
transfer_type: str,
target_dir: Path,
episodes_info: List[TmdbEpisode] = None
) -> TransferInfo:
"""
识别并转移一个文件或者一个目录下的所有文件
@@ -363,6 +367,7 @@ class FileTransferModule(_ModuleBase):
:param mediainfo: 媒体信息
:param target_dir: 媒体库根目录
:param transfer_type: 文件转移方式
:param episodes_info: 当前季的全部集信息
:return: TransferInfo、错误信息
"""
# 检查目录路径
@@ -404,7 +409,7 @@ class FileTransferModule(_ModuleBase):
if retcode != 0:
logger.error(f"文件夹 {in_path} 转移失败,错误码:{retcode}")
return TransferInfo(success=False,
message=f"文件夹 {in_path} 转移失败,错误码:{retcode}",
message=f"错误码:{retcode}",
path=in_path,
target_path=new_path,
is_bluray=bluray_flag)
@@ -418,17 +423,24 @@ class FileTransferModule(_ModuleBase):
is_bluray=bluray_flag)
else:
# 转移单个文件
# 文件结束季为空
in_meta.end_season = None
if mediainfo.type == MediaType.TV:
# 电视剧
if in_meta.begin_episode is None:
logger.warn(f"文件 {in_path} 转移失败:未识别到文件集数")
return TransferInfo(success=False,
message=f"未识别到文件集数",
path=in_path,
fail_list=[str(in_path)])
# 文件总季数为1
if in_meta.total_season:
in_meta.total_season = 1
# 文件不可能有多集
if in_meta.total_episode > 2:
in_meta.total_episode = 1
in_meta.end_episode = None
# 文件结束季为空
in_meta.end_season = None
# 文件总季数为1
if in_meta.total_season:
in_meta.total_season = 1
# 文件不可能超过2集
if in_meta.total_episode > 2:
in_meta.total_episode = 1
in_meta.end_episode = None
# 目的文件名
new_file = self.get_rename_path(
@@ -437,6 +449,7 @@ class FileTransferModule(_ModuleBase):
rename_dict=self.__get_naming_dict(
meta=in_meta,
mediainfo=mediainfo,
episodes_info=episodes_info,
file_ext=in_path.suffix
)
)
@@ -456,7 +469,7 @@ class FileTransferModule(_ModuleBase):
if retcode != 0:
logger.error(f"文件 {in_path} 转移失败,错误码:{retcode}")
return TransferInfo(success=False,
message=f"文件 {in_path.name} 转移失败,错误码:{retcode}",
message=f"错误码:{retcode}",
path=in_path,
target_path=new_file,
fail_list=[str(in_path)])
@@ -472,13 +485,23 @@ class FileTransferModule(_ModuleBase):
file_list_new=[str(new_file)])
@staticmethod
def __get_naming_dict(meta: MetaBase, mediainfo: MediaInfo, file_ext: str = None) -> dict:
def __get_naming_dict(meta: MetaBase, mediainfo: MediaInfo, file_ext: str = None,
episodes_info: List[TmdbEpisode] = None) -> dict:
"""
根据媒体信息返回Format字典
:param meta: 文件元数据
:param mediainfo: 识别的媒体信息
:param file_ext: 文件扩展名
:param episodes_info: 当前季的全部集信息
"""
# 获取集标题
episode_title = None
if meta.begin_episode and episodes_info:
for episode in episodes_info:
if episode.episode_number == meta.begin_episode:
episode_title = episode.name
break
return {
# 标题
"title": mediainfo.title,
@@ -490,14 +513,16 @@ class FileTransferModule(_ModuleBase):
"name": meta.name,
# 年份
"year": mediainfo.year or meta.year,
# 资源类型
"resourceType": meta.resource_type,
# 特效
"effect": meta.resource_effect,
# 版本
"edition": meta.edition,
# 分辨率
"videoFormat": meta.resource_pix,
# 制作组/字幕组
"releaseGroup": meta.resource_team,
# 特效
"effect": meta.resource_effect,
# 视频编码
"videoCodec": meta.video_encode,
# 音频编码
@@ -514,8 +539,12 @@ class FileTransferModule(_ModuleBase):
"season_episode": "%s%s" % (meta.season, meta.episodes),
# 段/节
"part": meta.part,
# 剧集标题
"episode_title": episode_title,
# 文件后缀
"fileExt": file_ext
"fileExt": file_ext,
# 自定义占位符
"customization": meta.customization
}
@staticmethod
@@ -613,9 +642,10 @@ class FileTransferModule(_ModuleBase):
rename_format = settings.TV_RENAME_FORMAT \
if mediainfo.type == MediaType.TV else settings.MOVIE_RENAME_FORMAT
# 相对路径
meta = MetaInfo(mediainfo.title)
rel_path = self.get_rename_path(
template_string=rename_format,
rename_dict=self.__get_naming_dict(meta=MetaInfo(mediainfo.title),
rename_dict=self.__get_naming_dict(meta=meta,
mediainfo=mediainfo)
)
# 取相对路径的第1层目录
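The hunks above thread `episodes_info` through to `__get_naming_dict` so the rename template can resolve a new `episode_title` placeholder from the season's TMDB episode list. A minimal sketch of that lookup, using a stand-in dataclass in place of the real `TmdbEpisode` schema (field names here mirror the hunk but are assumptions):

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Episode:
    # stand-in for TmdbEpisode: only the two fields the lookup needs
    episode_number: int
    name: str


def get_episode_title(begin_episode: Optional[int],
                      episodes_info: Optional[List[Episode]]) -> Optional[str]:
    """Return the name of the episode matching begin_episode, else None."""
    if not begin_episode or not episodes_info:
        return None
    for episode in episodes_info:
        if episode.episode_number == begin_episode:
            return episode.name
    return None


eps = [Episode(1, "Pilot"), Episode(2, "The Heist")]
print(get_episode_title(2, eps))   # The Heist
print(get_episode_title(9, eps))   # None
```

The result then lands in the naming dict under `"episode_title"`, alongside the new `"customization"` key, and stays `None` when the file's episode number was never recognized.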

View File

@@ -96,7 +96,7 @@ class FilterModule(_ModuleBase):
},
# 国语配音
"CNVOI": {
"include": [r'[国國][语語]配音|[国國]配'],
"include": [r'[国國][语語]配音|[国國]配|[国國][语語]'],
"exclude": []
}
}
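The CNVOI filter's `include` pattern is widened here to also match a bare "国语/國語" marker, not just "国语配音" and "国配". A quick check of the widened regex:

```python
import re

# the new include pattern from the hunk above
CNVOI = re.compile(r'[国國][语語]配音|[国國]配|[国國][语語]')

titles = [
    "Movie.2023.国语配音.1080p",   # matched by the first alternative
    "Movie.2023.国配.1080p",       # matched by the second
    "Movie.2023.国语中字.1080p",   # only matched now, via the new third alternative
]
for title in titles:
    assert CNVOI.search(title)
assert not CNVOI.search("Movie.2023.1080p.BluRay")
```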

View File

@@ -3,7 +3,7 @@ from typing import List, Optional, Tuple, Union
from ruamel.yaml import CommentedMap
from app.core.context import MediaInfo, TorrentInfo
from app.core.context import TorrentInfo
from app.log import logger
from app.modules import _ModuleBase
from app.modules.indexer.mtorrent import MTorrentSpider
@@ -28,69 +28,71 @@ class IndexerModule(_ModuleBase):
def init_setting(self) -> Tuple[str, Union[str, bool]]:
return "INDEXER", "builtin"
def search_torrents(self, site: CommentedMap, mediainfo: MediaInfo = None,
keyword: str = None, page: int = 0, area: str = "title") -> List[TorrentInfo]:
def search_torrents(self, site: CommentedMap,
keywords: List[str] = None,
mtype: MediaType = None,
page: int = 0) -> List[TorrentInfo]:
"""
搜索一个站点
:param mediainfo: 识别的媒体信息
:param site: 站点
:param keyword: 搜索关键词,如有按关键词搜索,否则按媒体信息名称搜索
:param keywords: 搜索关键词列表
:param mtype: 媒体类型
:param page: 页码
:param area: 搜索区域 title or imdbid
:return: 资源列表
"""
# 确认搜索的名字
if keyword:
search_word = keyword
elif mediainfo:
search_word = mediainfo.title
else:
search_word = None
if search_word \
and site.get('language') == "en" \
and StringUtils.is_chinese(search_word):
# 不支持中文
logger.warn(f"{site.get('name')} 不支持中文搜索")
return []
# 去除搜索关键字中的特殊字符
if search_word:
search_word = StringUtils.clear(search_word, replace_word=" ", allow_space=True)
if not keywords:
# 浏览种子页
keywords = [None]
# 开始索引
result_array = []
# 开始计时
start_time = datetime.now()
try:
imdbid = mediainfo.imdb_id if mediainfo and area == "imdbid" else None
if site.get('parser') == "TNodeSpider":
error_flag, result_array = TNodeSpider(site).search(
keyword=search_word,
imdbid=imdbid,
page=page
)
elif site.get('parser') == "TorrentLeech":
error_flag, result_array = TorrentLeech(site).search(
keyword=search_word,
page=page
)
elif site.get('parser') == "mTorrent":
error_flag, result_array = MTorrentSpider(site).search(
keyword=search_word,
mtype=mediainfo.type if mediainfo else None,
page=page
)
else:
error_flag, result_array = self.__spider_search(
keyword=search_word,
imdbid=imdbid,
indexer=site,
mtype=mediainfo.type if mediainfo else None,
page=page
)
except Exception as err:
logger.error(f"{site.get('name')} 搜索出错:{err}")
# 搜索多个关键字
for search_word in keywords:
# 可能为关键字或ttxxxx
if search_word \
and site.get('language') == "en" \
and StringUtils.is_chinese(search_word):
# 不支持中文
logger.warn(f"{site.get('name')} 不支持中文搜索")
continue
# 去除搜索关键字中的特殊字符
if search_word:
search_word = StringUtils.clear(search_word, replace_word=" ", allow_space=True)
try:
if site.get('parser') == "TNodeSpider":
error_flag, result_array = TNodeSpider(site).search(
keyword=search_word,
page=page
)
elif site.get('parser') == "TorrentLeech":
error_flag, result_array = TorrentLeech(site).search(
keyword=search_word,
page=page
)
elif site.get('parser') == "mTorrent":
error_flag, result_array = MTorrentSpider(site).search(
keyword=search_word,
mtype=mtype,
page=page
)
else:
error_flag, result_array = self.__spider_search(
search_word=search_word,
indexer=site,
mtype=mtype,
page=page
)
# 有结果后停止
if result_array:
break
except Exception as err:
logger.error(f"{site.get('name')} 搜索出错:{err}")
# 索引花费的时间
seconds = round((datetime.now() - start_time).seconds, 1)
@@ -112,15 +114,13 @@ class IndexerModule(_ModuleBase):
@staticmethod
def __spider_search(indexer: CommentedMap,
keyword: str = None,
imdbid: str = None,
search_word: str = None,
mtype: MediaType = None,
page: int = 0) -> (bool, List[dict]):
"""
根据关键字搜索单个站点
:param: indexer: 站点配置
:param: keyword: 关键字
:param: imdbid: imdbid
:param: search_word: 关键字
:param: page: 页码
:param: mtype: 媒体类型
:param: timeout: 超时时间
@@ -128,8 +128,7 @@ class IndexerModule(_ModuleBase):
"""
_spider = TorrentSpider(indexer=indexer,
mtype=mtype,
keyword=keyword,
imdbid=imdbid,
keyword=search_word,
page=page)
return _spider.is_error, _spider.get_torrents()
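The refactor above replaces the single `keyword`/`imdbid` pair with a `keywords` list: each term is tried in order, Chinese terms are skipped on English-only sites, and the loop stops at the first term that yields results (an empty list means "browse the listing page"). A simplified sketch of that control flow, with stand-ins for `StringUtils.is_chinese` and the per-site searcher:

```python
import re
from typing import Callable, List, Optional


def is_chinese(text: str) -> bool:
    # stand-in for StringUtils.is_chinese: any CJK character counts
    return bool(re.search(r'[\u4e00-\u9fff]', text))


def search_site(keywords: Optional[List[str]],
                site_language: str,
                do_search: Callable[[Optional[str]], list]) -> list:
    results: list = []
    if not keywords:
        keywords = [None]  # no keyword: browse the torrent listing page
    for word in keywords:
        if word and site_language == "en" and is_chinese(word):
            continue  # site does not support Chinese search
        results = do_search(word)
        if results:
            break  # stop once any keyword returns results
    return results


calls = []

def fake_search(word):
    calls.append(word)
    return ["hit"] if word == "tt0133093" else []

out = search_site(["黑客帝国", "The Matrix", "tt0133093"], "en", fake_search)
# out == ["hit"]; the Chinese keyword was skipped, the others tried in order
```

This matches the hunk's behaviour of logging and continuing on unsupported keywords rather than returning early for the whole site.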

View File

@@ -40,8 +40,6 @@ class TorrentSpider:
referer: str = None
# 搜索关键字
keyword: str = None
# 搜索IMDBID
imdbid: str = None
# 媒体类型
mtype: MediaType = None
# 搜索路径、方式配置
@@ -68,7 +66,6 @@ class TorrentSpider:
def __init__(self,
indexer: CommentedMap,
keyword: [str, list] = None,
imdbid: str = None,
page: int = 0,
referer: str = None,
mtype: MediaType = None):
@@ -76,7 +73,6 @@ class TorrentSpider:
设置查询参数
:param indexer: 索引器
:param keyword: 搜索关键字,如果数组则为批量搜索
:param imdbid: IMDB ID
:param page: 页码
:param referer: Referer
:param mtype: 媒体类型
@@ -84,7 +80,6 @@ class TorrentSpider:
if not indexer:
return
self.keyword = keyword
self.imdbid = imdbid
self.mtype = mtype
self.indexerid = indexer.get('id')
self.indexername = indexer.get('name')
@@ -159,20 +154,17 @@ class TorrentSpider:
# 搜索URL
indexer_params = self.search.get("params") or {}
if indexer_params:
# 支持IMDBID时优先使用IMDBID搜索
search_area = indexer_params.get("search_area") or 0
if self.imdbid and search_area:
search_word = self.imdbid
else:
search_word = self.keyword
# 不启用IMDBID搜索时需要将search_area移除
if search_area:
indexer_params.pop('search_area')
search_area = indexer_params.get('search_area')
# search_area非0表示支持imdbid搜索
if (search_area and
(not self.keyword or not self.keyword.startswith('tt'))):
# 支持imdbid搜索但关键字不是imdbid时不启用imdbid搜索
indexer_params.pop('search_area')
# 变量字典
inputs_dict = {
"keyword": search_word
}
# 查询参数
# 查询参数,默认查询标题
params = {
"search_mode": search_mode,
"search_area": 0,

View File

@@ -49,16 +49,16 @@ class TNodeSpider:
if csrf_token:
self._token = csrf_token.group(1)
def search(self, keyword: str, imdbid: str = None, page: int = 0) -> Tuple[bool, List[dict]]:
def search(self, keyword: str, page: int = 0) -> Tuple[bool, List[dict]]:
if not self._token:
logger.warn(f"{self._name} 未获取到token无法搜索")
return True, []
search_type = "imdbid" if imdbid else "title"
search_type = "imdbid" if (keyword and keyword.startswith('tt')) else "title"
params = {
"page": int(page) + 1,
"size": self._size,
"type": search_type,
"keyword": imdbid or keyword or "",
"keyword": keyword or "",
"sorter": "id",
"order": "desc",
"tags": [],

View File

@@ -1,4 +1,3 @@
import json
from pathlib import Path
from typing import Optional, Tuple, Union, Any, List, Generator
@@ -7,7 +6,6 @@ from app.core.context import MediaInfo
from app.log import logger
from app.modules import _ModuleBase
from app.modules.jellyfin.jellyfin import Jellyfin
from app.schemas import ExistMediaInfo, WebhookEventInfo
from app.schemas.types import MediaType
@@ -26,7 +24,7 @@ class JellyfinModule(_ModuleBase):
"""
# 定时重连
if not self.jellyfin.is_inactive():
self.jellyfin = Jellyfin()
self.jellyfin.reconnect()
def stop(self):
pass
@@ -41,7 +39,7 @@ class JellyfinModule(_ModuleBase):
# Jellyfin认证
return self.jellyfin.authenticate(name, password)
def webhook_parser(self, body: Any, form: Any, args: Any) -> Optional[WebhookEventInfo]:
def webhook_parser(self, body: Any, form: Any, args: Any) -> Optional[schemas.WebhookEventInfo]:
"""
解析Webhook报文体
:param body: 请求体
@@ -51,7 +49,7 @@ class JellyfinModule(_ModuleBase):
"""
return self.jellyfin.get_webhook_message(body)
def media_exists(self, mediainfo: MediaInfo, itemid: str = None) -> Optional[ExistMediaInfo]:
def media_exists(self, mediainfo: MediaInfo, itemid: str = None) -> Optional[schemas.ExistMediaInfo]:
"""
判断媒体文件是否存在
:param mediainfo: 识别的媒体信息
@@ -63,25 +61,38 @@ class JellyfinModule(_ModuleBase):
movie = self.jellyfin.get_iteminfo(itemid)
if movie:
logger.info(f"媒体库中已存在:{movie}")
return ExistMediaInfo(type=MediaType.MOVIE)
return schemas.ExistMediaInfo(
type=MediaType.MOVIE,
server="jellyfin",
itemid=movie.item_id
)
movies = self.jellyfin.get_movies(title=mediainfo.title, year=mediainfo.year, tmdb_id=mediainfo.tmdb_id)
if not movies:
logger.info(f"{mediainfo.title_year} 在媒体库中不存在")
return None
else:
logger.info(f"媒体库中已存在:{movies}")
return ExistMediaInfo(type=MediaType.MOVIE)
return schemas.ExistMediaInfo(
type=MediaType.MOVIE,
server="jellyfin",
itemid=movies[0].item_id
)
else:
tvs = self.jellyfin.get_tv_episodes(title=mediainfo.title,
year=mediainfo.year,
tmdb_id=mediainfo.tmdb_id,
item_id=itemid)
itemid, tvs = self.jellyfin.get_tv_episodes(title=mediainfo.title,
year=mediainfo.year,
tmdb_id=mediainfo.tmdb_id,
item_id=itemid)
if not tvs:
logger.info(f"{mediainfo.title_year} 在媒体库中不存在")
return None
else:
logger.info(f"{mediainfo.title_year} 媒体库中已存在:{tvs}")
return ExistMediaInfo(type=MediaType.TV, seasons=tvs)
return schemas.ExistMediaInfo(
type=MediaType.TV,
seasons=tvs,
server="jellyfin",
itemid=itemid
)
def refresh_mediaserver(self, mediainfo: MediaInfo, file_path: Path) -> None:
"""
@@ -97,13 +108,8 @@ class JellyfinModule(_ModuleBase):
媒体数量统计
"""
media_statistic = self.jellyfin.get_medias_count()
user_count = self.jellyfin.get_user_count()
return [schemas.Statistic(
movie_count=media_statistic.get("MovieCount") or 0,
tv_count=media_statistic.get("SeriesCount") or 0,
episode_count=media_statistic.get("EpisodeCount") or 0,
user_count=user_count or 0
)]
media_statistic.user_count = self.jellyfin.get_user_count()
return [media_statistic]
def mediaserver_librarys(self, server: str) -> Optional[List[schemas.MediaServerLibrary]]:
"""
@@ -111,16 +117,7 @@ class JellyfinModule(_ModuleBase):
"""
if server != "jellyfin":
return None
librarys = self.jellyfin.get_librarys()
if not librarys:
return []
return [schemas.MediaServerLibrary(
server="jellyfin",
id=library.get("id"),
name=library.get("name"),
type=library.get("type"),
path=library.get("path")
) for library in librarys]
return self.jellyfin.get_librarys()
def mediaserver_items(self, server: str, library_id: str) -> Optional[Generator]:
"""
@@ -128,21 +125,15 @@ class JellyfinModule(_ModuleBase):
"""
if server != "jellyfin":
return None
items = self.jellyfin.get_items(library_id)
for item in items:
yield schemas.MediaServerItem(
server="jellyfin",
library=item.get("library"),
item_id=item.get("id"),
item_type=item.get("type"),
title=item.get("title"),
original_title=item.get("original_title"),
year=item.get("year"),
tmdbid=item.get("tmdbid"),
imdbid=item.get("imdbid"),
tvdbid=item.get("tvdbid"),
path=item.get("path"),
)
return self.jellyfin.get_items(library_id)
def mediaserver_iteminfo(self, server: str, item_id: str) -> Optional[schemas.MediaServerItem]:
"""
媒体库项目详情
"""
if server != "jellyfin":
return None
return self.jellyfin.get_iteminfo(item_id)
def mediaserver_tv_episodes(self, server: str,
item_id: Union[str, int]) -> Optional[List[schemas.MediaServerSeasonInfo]]:
@@ -151,7 +142,7 @@ class JellyfinModule(_ModuleBase):
"""
if server != "jellyfin":
return None
seasoninfo = self.jellyfin.get_tv_episodes(item_id=item_id)
_, seasoninfo = self.jellyfin.get_tv_episodes(item_id=item_id)
if not seasoninfo:
return []
return [schemas.MediaServerSeasonInfo(

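One detail in this file: the scheduler job stops assigning `self.jellyfin = Jellyfin()` and calls a new `reconnect()` instead. Under the project's `Singleton` metaclass, re-instantiating returns the same cached object, so construction is a no-op and the state has to be refreshed on the existing instance. A small sketch of why (the metaclass shape is the common recipe, assumed to match `app.utils.singleton`):

```python
class Singleton(type):
    _instances: dict = {}

    def __call__(cls, *args, **kwargs):
        # return the cached instance instead of building a new one
        if cls not in cls._instances:
            cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]


class Jellyfin(metaclass=Singleton):
    def __init__(self):
        self.user = None

    def reconnect(self):
        # re-fetch session state on the existing instance
        self.user = "admin"


a = Jellyfin()
b = Jellyfin()   # same object: "reconnecting" via re-construction did nothing
assert a is b
b.reconnect()    # this actually refreshes the shared state
assert a.user == "admin"
```

The same fix is applied to `PlexModule.scheduler_job` further down.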
View File

@@ -1,15 +1,14 @@
import json
import re
from typing import List, Union, Optional, Dict, Generator
from typing import List, Union, Optional, Dict, Generator, Tuple
from requests import Response
from app import schemas
from app.core.config import settings
from app.log import logger
from app.schemas import MediaType, WebhookEventInfo
from app.schemas import MediaType
from app.utils.http import RequestUtils
from app.utils.singleton import Singleton
from app.utils.string import StringUtils
class Jellyfin(metaclass=Singleton):
@@ -33,6 +32,13 @@ class Jellyfin(metaclass=Singleton):
return False
return True if not self.user else False
def reconnect(self):
"""
重连
"""
self.user = self.get_user()
self.serverid = self.get_server_id()
def __get_jellyfin_librarys(self) -> List[dict]:
"""
获取Jellyfin媒体库的信息
@@ -66,12 +72,14 @@ class Jellyfin(metaclass=Singleton):
library_type = MediaType.TV.value
case _:
continue
libraries.append({
"id": library.get("Id"),
"name": library.get("Name"),
"path": library.get("Path"),
"type": library_type
})
libraries.append(
schemas.MediaServerLibrary(
server="jellyfin",
id=library.get("Id"),
name=library.get("Name"),
path=library.get("Path"),
type=library_type
))
return libraries
def get_user_count(self) -> int:
@@ -172,59 +180,29 @@ class Jellyfin(metaclass=Singleton):
logger.error(f"连接System/Info出错" + str(e))
return None
def get_activity_log(self, num: int = 30) -> List[dict]:
"""
获取Jellyfin活动记录
"""
if not self._host or not self._apikey:
return []
req_url = "%sSystem/ActivityLog/Entries?api_key=%s&Limit=%s" % (self._host, self._apikey, num)
ret_array = []
try:
res = RequestUtils().get_res(req_url)
if res:
ret_json = res.json()
items = ret_json.get('Items')
for item in items:
if item.get("Type") == "SessionStarted":
event_type = "LG"
event_date = re.sub(r'\dZ', 'Z', item.get("Date"))
event_str = "%s, %s" % (item.get("Name"), item.get("ShortOverview"))
activity = {"type": event_type, "event": event_str,
"date": StringUtils.get_time(event_date)}
ret_array.append(activity)
if item.get("Type") in ["VideoPlayback", "VideoPlaybackStopped"]:
event_type = "PL"
event_date = re.sub(r'\dZ', 'Z', item.get("Date"))
activity = {"type": event_type, "event": item.get("Name"),
"date": StringUtils.get_time(event_date)}
ret_array.append(activity)
else:
logger.error(f"System/ActivityLog/Entries 未获取到返回数据")
return []
except Exception as e:
logger.error(f"连接System/ActivityLog/Entries出错" + str(e))
return []
return ret_array
def get_medias_count(self) -> Optional[dict]:
def get_medias_count(self) -> schemas.Statistic:
"""
获得电影、电视剧、动漫媒体数量
:return: MovieCount SeriesCount SongCount
"""
if not self._host or not self._apikey:
return None
return schemas.Statistic()
req_url = "%sItems/Counts?api_key=%s" % (self._host, self._apikey)
try:
res = RequestUtils().get_res(req_url)
if res:
return res.json()
result = res.json()
return schemas.Statistic(
movie_count=result.get("MovieCount") or 0,
tv_count=result.get("SeriesCount") or 0,
episode_count=result.get("EpisodeCount") or 0
)
else:
logger.error(f"Items/Counts 未获取到返回数据")
return {}
return schemas.Statistic()
except Exception as e:
logger.error(f"连接Items/Counts出错" + str(e))
return {}
return schemas.Statistic()
def __get_jellyfin_series_id_by_name(self, name: str, year: str) -> Optional[str]:
"""
@@ -232,7 +210,8 @@ class Jellyfin(metaclass=Singleton):
"""
if not self._host or not self._apikey or not self.user:
return None
req_url = "%sUsers/%s/Items?api_key=%s&searchTerm=%s&IncludeItemTypes=Series&Limit=10&Recursive=true" % (
req_url = ("%sUsers/%s/Items?"
"api_key=%s&searchTerm=%s&IncludeItemTypes=Series&Limit=10&Recursive=true") % (
self._host, self.user, self._apikey, name)
try:
res = RequestUtils().get_res(req_url)
@@ -251,7 +230,7 @@ class Jellyfin(metaclass=Singleton):
def get_movies(self,
title: str,
year: str = None,
tmdb_id: int = None) -> Optional[List[dict]]:
tmdb_id: int = None) -> Optional[List[schemas.MediaServerItem]]:
"""
根据标题和年份检查电影是否在Jellyfin中存在存在则返回列表
:param title: 标题
@@ -261,7 +240,8 @@ class Jellyfin(metaclass=Singleton):
"""
if not self._host or not self._apikey or not self.user:
return None
req_url = "%sUsers/%s/Items?api_key=%s&searchTerm=%s&IncludeItemTypes=Movie&Limit=10&Recursive=true" % (
req_url = ("%sUsers/%s/Items?"
"api_key=%s&searchTerm=%s&IncludeItemTypes=Movie&Limit=10&Recursive=true") % (
self._host, self.user, self._apikey, title)
try:
res = RequestUtils().get_res(req_url)
@@ -269,19 +249,30 @@ class Jellyfin(metaclass=Singleton):
res_items = res.json().get("Items")
if res_items:
ret_movies = []
for res_item in res_items:
item_tmdbid = res_item.get("ProviderIds", {}).get("Tmdb")
for item in res_items:
item_tmdbid = item.get("ProviderIds", {}).get("Tmdb")
mediaserver_item = schemas.MediaServerItem(
server="jellyfin",
library=item.get("ParentId"),
item_id=item.get("Id"),
item_type=item.get("Type"),
title=item.get("Name"),
original_title=item.get("OriginalTitle"),
year=item.get("ProductionYear"),
tmdbid=int(item_tmdbid) if item_tmdbid else None,
imdbid=item.get("ProviderIds", {}).get("Imdb"),
tvdbid=item.get("ProviderIds", {}).get("Tvdb"),
path=item.get("Path")
)
if tmdb_id and item_tmdbid:
if str(item_tmdbid) != str(tmdb_id):
continue
else:
ret_movies.append(
{'title': res_item.get('Name'), 'year': str(res_item.get('ProductionYear'))})
ret_movies.append(mediaserver_item)
continue
if res_item.get('Name') == title and (
not year or str(res_item.get('ProductionYear')) == str(year)):
ret_movies.append(
{'title': res_item.get('Name'), 'year': str(res_item.get('ProductionYear'))})
if mediaserver_item.title == title and (
not year or str(mediaserver_item.year) == str(year)):
ret_movies.append(mediaserver_item)
return ret_movies
except Exception as e:
logger.error(f"连接Items出错" + str(e))
@@ -293,7 +284,7 @@ class Jellyfin(metaclass=Singleton):
title: str = None,
year: str = None,
tmdb_id: int = None,
season: int = None) -> Optional[Dict[int, list]]:
season: int = None) -> Tuple[Optional[str], Optional[Dict[int, list]]]:
"""
根据标题和年份和季返回Jellyfin中的剧集列表
:param item_id: Jellyfin中的Id
@@ -304,19 +295,20 @@ class Jellyfin(metaclass=Singleton):
:return: 集号的列表
"""
if not self._host or not self._apikey or not self.user:
return None
return None, None
# 查TVID
if not item_id:
item_id = self.__get_jellyfin_series_id_by_name(title, year)
if item_id is None:
return None
return None, None
if not item_id:
return {}
return None, {}
# 验证tmdbid是否相同
item_tmdbid = (self.get_iteminfo(item_id).get("ProviderIds") or {}).get("Tmdb")
if tmdb_id and item_tmdbid:
if str(tmdb_id) != str(item_tmdbid):
return {}
item_info = self.get_iteminfo(item_id)
if item_info:
if tmdb_id and item_info.tmdbid:
if str(tmdb_id) != str(item_info.tmdbid):
return None, {}
if not season:
season = ""
try:
@@ -324,7 +316,8 @@ class Jellyfin(metaclass=Singleton):
self._host, item_id, season, self.user, self._apikey)
res_json = RequestUtils().get_res(req_url)
if res_json:
res_items = res_json.json().get("Items")
tv_info = res_json.json()
res_items = tv_info.get("Items")
# 返回的季集信息
season_episodes = {}
for res_item in res_items:
@@ -339,11 +332,11 @@ class Jellyfin(metaclass=Singleton):
if not season_episodes.get(season_index):
season_episodes[season_index] = []
season_episodes[season_index].append(episode_index)
return season_episodes
return tv_info.get('Id'), season_episodes
except Exception as e:
logger.error(f"连接Shows/Id/Episodes出错" + str(e))
return None
return {}
return None, None
return None, {}
def get_remote_image_by_id(self, item_id: str, image_type: str) -> Optional[str]:
"""
@@ -387,7 +380,7 @@ class Jellyfin(metaclass=Singleton):
logger.error(f"连接Library/Refresh出错" + str(e))
return False
def get_webhook_message(self, body: any) -> Optional[WebhookEventInfo]:
def get_webhook_message(self, body: any) -> Optional[schemas.WebhookEventInfo]:
"""
解析Jellyfin报文
{
@@ -463,7 +456,7 @@ class Jellyfin(metaclass=Singleton):
eventType = message.get('NotificationType')
if not eventType:
return None
eventItem = WebhookEventInfo(
eventItem = schemas.WebhookEventInfo(
event=eventType,
channel="jellyfin"
)
@@ -499,32 +492,46 @@ class Jellyfin(metaclass=Singleton):
return eventItem
def get_iteminfo(self, itemid: str) -> dict:
def get_iteminfo(self, itemid: str) -> Optional[schemas.MediaServerItem]:
"""
获取单个项目详情
"""
if not itemid:
return {}
return None
if not self._host or not self._apikey:
return {}
return None
req_url = "%sUsers/%s/Items/%s?api_key=%s" % (
self._host, self.user, itemid, self._apikey)
try:
res = RequestUtils().get_res(req_url)
if res and res.status_code == 200:
return res.json()
item = res.json()
tmdbid = item.get("ProviderIds", {}).get("Tmdb")
return schemas.MediaServerItem(
server="jellyfin",
library=item.get("ParentId"),
item_id=item.get("Id"),
item_type=item.get("Type"),
title=item.get("Name"),
original_title=item.get("OriginalTitle"),
year=item.get("ProductionYear"),
tmdbid=int(tmdbid) if tmdbid else None,
imdbid=item.get("ProviderIds", {}).get("Imdb"),
tvdbid=item.get("ProviderIds", {}).get("Tvdb"),
path=item.get("Path")
)
except Exception as e:
logger.error(f"连接Users/Items出错" + str(e))
return {}
return None
def get_items(self, parent: str) -> Generator:
"""
获取媒体服务器所有媒体库列表
"""
if not parent:
yield {}
yield None
if not self._host or not self._apikey:
yield {}
yield None
req_url = "%sUsers/%s/Items?parentId=%s&api_key=%s" % (self._host, self.user, parent, self._apikey)
try:
res = RequestUtils().get_res(req_url)
@@ -534,37 +541,46 @@ class Jellyfin(metaclass=Singleton):
if not result:
continue
if result.get("Type") in ["Movie", "Series"]:
item_info = self.get_iteminfo(result.get("Id"))
yield {"id": result.get("Id"),
"library": item_info.get("ParentId"),
"type": item_info.get("Type"),
"title": item_info.get("Name"),
"original_title": item_info.get("OriginalTitle"),
"year": item_info.get("ProductionYear"),
"tmdbid": item_info.get("ProviderIds", {}).get("Tmdb"),
"imdbid": item_info.get("ProviderIds", {}).get("Imdb"),
"tvdbid": item_info.get("ProviderIds", {}).get("Tvdb"),
"path": item_info.get("Path"),
"json": str(item_info)}
yield self.get_iteminfo(result.get("Id"))
elif "Folder" in result.get("Type"):
for item in self.get_items(result.get("Id")):
yield item
except Exception as e:
logger.error(f"连接Users/Items出错" + str(e))
yield {}
yield None
def get_data(self, url: str) -> Optional[Response]:
"""
自定义URL从媒体服务器获取数据其中{HOST}{APIKEY}{USER}会被替换成实际的值
自定义URL从媒体服务器获取数据其中[HOST]、[APIKEY]、[USER]会被替换成实际的值
:param url: 请求地址
"""
if not self._host or not self._apikey:
return None
url = url.replace("{HOST}", self._host) \
.replace("{APIKEY}", self._apikey) \
.replace("{USER}", self.user)
url = url.replace("[HOST]", self._host) \
.replace("[APIKEY]", self._apikey) \
.replace("[USER]", self.user)
try:
return RequestUtils().get_res(url=url)
return RequestUtils(accept_type="application/json").get_res(url=url)
except Exception as e:
logger.error(f"连接Jellyfin出错" + str(e))
return None
def post_data(self, url: str, data: str = None, headers: dict = None) -> Optional[Response]:
"""
自定义URL从媒体服务器获取数据其中[HOST]、[APIKEY]、[USER]会被替换成实际的值
:param url: 请求地址
:param data: 请求数据
:param headers: 请求头
"""
if not self._host or not self._apikey:
return None
url = url.replace("[HOST]", self._host) \
.replace("[APIKEY]", self._apikey) \
.replace("[USER]", self.user)
try:
return RequestUtils(
headers=headers
).post_res(url=url, data=data)
except Exception as e:
logger.error(f"连接Jellyfin出错" + str(e))
return None
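Across this file, raw dicts give way to typed schemas: `get_medias_count` now returns a `schemas.Statistic` (empty on every failure path instead of `{}`/`None`), so the module layer can just set `user_count` and return the object. A sketch of that mapping with a stand-in dataclass (the real schema lives in `app.schemas`; field names follow the hunk):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Statistic:
    # stand-in for schemas.Statistic
    movie_count: int = 0
    tv_count: int = 0
    episode_count: int = 0
    user_count: int = 0


def parse_counts(payload: Optional[dict]) -> Statistic:
    """Map a Jellyfin Items/Counts payload to Statistic; empty on failure."""
    if not payload:
        return Statistic()
    return Statistic(
        movie_count=payload.get("MovieCount") or 0,
        tv_count=payload.get("SeriesCount") or 0,
        episode_count=payload.get("EpisodeCount") or 0,
    )


stat = parse_counts({"MovieCount": 12, "SeriesCount": 3, "EpisodeCount": 40})
stat.user_count = 2  # the module layer fills in the user count afterwards
```

Returning an empty `Statistic()` rather than `{}` means callers never need to branch on the return type.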

View File

@@ -6,12 +6,10 @@ from app.core.context import MediaInfo
from app.log import logger
from app.modules import _ModuleBase
from app.modules.plex.plex import Plex
from app.schemas import ExistMediaInfo, RefreshMediaItem, WebhookEventInfo
from app.schemas.types import MediaType
class PlexModule(_ModuleBase):
plex: Plex = None
def init_module(self) -> None:
@@ -29,9 +27,9 @@ class PlexModule(_ModuleBase):
"""
# 定时重连
if not self.plex.is_inactive():
self.plex = Plex()
self.plex.reconnect()
def webhook_parser(self, body: Any, form: Any, args: Any) -> Optional[WebhookEventInfo]:
def webhook_parser(self, body: Any, form: Any, args: Any) -> Optional[schemas.WebhookEventInfo]:
"""
解析Webhook报文体
:param body: 请求体
@@ -41,7 +39,7 @@ class PlexModule(_ModuleBase):
"""
return self.plex.get_webhook_message(form)
def media_exists(self, mediainfo: MediaInfo, itemid: str = None) -> Optional[ExistMediaInfo]:
def media_exists(self, mediainfo: MediaInfo, itemid: str = None) -> Optional[schemas.ExistMediaInfo]:
"""
判断媒体文件是否存在
:param mediainfo: 识别的媒体信息
@@ -53,29 +51,42 @@ class PlexModule(_ModuleBase):
movie = self.plex.get_iteminfo(itemid)
if movie:
logger.info(f"媒体库中已存在:{movie}")
return ExistMediaInfo(type=MediaType.MOVIE)
return schemas.ExistMediaInfo(
type=MediaType.MOVIE,
server="plex",
itemid=movie.item_id
)
movies = self.plex.get_movies(title=mediainfo.title,
original_title=mediainfo.original_title,
year=mediainfo.year,
original_title=mediainfo.original_title,
year=mediainfo.year,
tmdb_id=mediainfo.tmdb_id)
if not movies:
logger.info(f"{mediainfo.title_year} 在媒体库中不存在")
return None
else:
logger.info(f"媒体库中已存在:{movies}")
return ExistMediaInfo(type=MediaType.MOVIE)
return schemas.ExistMediaInfo(
type=MediaType.MOVIE,
server="plex",
itemid=movies[0].item_id
)
else:
tvs = self.plex.get_tv_episodes(title=mediainfo.title,
original_title=mediainfo.original_title,
year=mediainfo.year,
tmdb_id=mediainfo.tmdb_id,
item_id=itemid)
item_id, tvs = self.plex.get_tv_episodes(title=mediainfo.title,
original_title=mediainfo.original_title,
year=mediainfo.year,
tmdb_id=mediainfo.tmdb_id,
item_id=itemid)
if not tvs:
logger.info(f"{mediainfo.title_year} 在媒体库中不存在")
return None
else:
logger.info(f"{mediainfo.title_year} 媒体库中已存在:{tvs}")
return ExistMediaInfo(type=MediaType.TV, seasons=tvs)
return schemas.ExistMediaInfo(
type=MediaType.TV,
seasons=tvs,
server="plex",
itemid=item_id
)
def refresh_mediaserver(self, mediainfo: MediaInfo, file_path: Path) -> None:
"""
@@ -85,7 +96,7 @@ class PlexModule(_ModuleBase):
:return: 成功或失败
"""
items = [
RefreshMediaItem(
schemas.RefreshMediaItem(
title=mediainfo.title,
year=mediainfo.year,
type=mediainfo.type,
@@ -100,12 +111,8 @@ class PlexModule(_ModuleBase):
媒体数量统计
"""
media_statistic = self.plex.get_medias_count()
return [schemas.Statistic(
movie_count=media_statistic.get("MovieCount") or 0,
tv_count=media_statistic.get("SeriesCount") or 0,
episode_count=media_statistic.get("EpisodeCount") or 0,
user_count=1
)]
media_statistic.user_count = 1
return [media_statistic]
def mediaserver_librarys(self, server: str) -> Optional[List[schemas.MediaServerLibrary]]:
"""
@@ -113,16 +120,7 @@ class PlexModule(_ModuleBase):
"""
if server != "plex":
return None
librarys = self.plex.get_librarys()
if not librarys:
return []
return [schemas.MediaServerLibrary(
server="plex",
id=library.get("id"),
name=library.get("name"),
type=library.get("type"),
path=library.get("path")
) for library in librarys]
return self.plex.get_librarys()
def mediaserver_items(self, server: str, library_id: str) -> Optional[Generator]:
"""
@@ -130,21 +128,15 @@ class PlexModule(_ModuleBase):
"""
if server != "plex":
return None
items = self.plex.get_items(library_id)
for item in items:
yield schemas.MediaServerItem(
server="plex",
library=item.get("library"),
item_id=item.get("id"),
item_type=item.get("type"),
title=item.get("title"),
original_title=item.get("original_title"),
year=item.get("year"),
tmdbid=item.get("tmdbid"),
imdbid=item.get("imdbid"),
tvdbid=item.get("tvdbid"),
path=item.get("path"),
)
return self.plex.get_items(library_id)
def mediaserver_iteminfo(self, server: str, item_id: str) -> Optional[schemas.MediaServerItem]:
"""
媒体库项目详情
"""
if server != "plex":
return None
return self.plex.get_iteminfo(item_id)
def mediaserver_tv_episodes(self, server: str,
item_id: Union[str, int]) -> Optional[List[schemas.MediaServerSeasonInfo]]:
@@ -153,7 +145,7 @@ class PlexModule(_ModuleBase):
"""
if server != "plex":
return None
seasoninfo = self.plex.get_tv_episodes(item_id=item_id)
_, seasoninfo = self.plex.get_tv_episodes(item_id=item_id)
if not seasoninfo:
return []
return [schemas.MediaServerSeasonInfo(

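Both media-server backends now return a `(item_id, season_episodes)` pair from `get_tv_episodes`, so `media_exists` can keep the server item id while `mediaserver_tv_episodes` unpacks and discards it. A sketch of the two calling patterns (the signature is simplified from the hunks; return values are illustrative):

```python
from typing import Dict, List, Optional, Tuple


def get_tv_episodes(item_id: Optional[str] = None
                    ) -> Tuple[Optional[str], Optional[Dict[int, List[int]]]]:
    # stand-in: a successful lookup returns the server item id plus a
    # {season: [episode, ...]} map; failures return (None, None) or (None, {})
    if item_id == "unknown":
        return None, None
    return "series-42", {1: [1, 2, 3], 2: [1]}


# media_exists keeps the id so it can be stored on ExistMediaInfo
itemid, tvs = get_tv_episodes(item_id="abc")

# mediaserver_tv_episodes only wants the season map
_, seasoninfo = get_tv_episodes(item_id="abc")
```

This is why every old `seasoninfo = ...get_tv_episodes(...)` call site in the diff gains a leading `_,` or `item_id,`.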
View File

@@ -6,9 +6,10 @@ from urllib.parse import quote_plus
from plexapi import media
from plexapi.server import PlexServer
from app import schemas
from app.core.config import settings
from app.log import logger
from app.schemas import RefreshMediaItem, MediaType, WebhookEventInfo
from app.schemas import MediaType
from app.utils.singleton import Singleton
@@ -38,7 +39,18 @@ class Plex(metaclass=Singleton):
return False
return True if not self._plex else False
def get_librarys(self):
def reconnect(self):
"""
重连
"""
try:
self._plex = PlexServer(self._host, self._token)
self._libraries = self._plex.library.sections()
except Exception as e:
self._plex = None
logger.error(f"Plex服务器连接失败{str(e)}")
def get_librarys(self) -> List[schemas.MediaServerLibrary]:
"""
获取媒体服务器所有媒体库列表
"""
@@ -58,81 +70,42 @@ class Plex(metaclass=Singleton):
library_type = MediaType.TV.value
case _:
continue
libraries.append({
"id": library.key,
"name": library.title,
"path": library.locations,
"type": library_type
})
libraries.append(
schemas.MediaServerLibrary(
id=library.key,
name=library.title,
path=library.locations,
type=library_type
)
)
return libraries
def get_activity_log(self, num: int = 30) -> Optional[List[dict]]:
"""
获取Plex活动记录
"""
if not self._plex:
return []
ret_array = []
try:
# type的含义: 1 电影 4 剧集单集 详见 plexapi/utils.py中SEARCHTYPES的定义
# 根据最后播放时间倒序获取数据
historys = self._plex.library.search(sort='lastViewedAt:desc', limit=num, type='1,4')
for his in historys:
# 过滤掉最后播放时间为空的
if his.lastViewedAt:
if his.type == "episode":
event_title = "%s %s%s %s" % (
his.grandparentTitle,
"S" + str(his.parentIndex),
"E" + str(his.index),
his.title
)
event_str = "开始播放剧集 %s" % event_title
else:
event_title = "%s %s" % (
his.title, "(" + str(his.year) + ")")
event_str = "开始播放电影 %s" % event_title
event_type = "PL"
event_date = his.lastViewedAt.strftime('%Y-%m-%d %H:%M:%S')
activity = {"type": event_type, "event": event_str, "date": event_date}
ret_array.append(activity)
except Exception as e:
logger.error(f"连接System/ActivityLog/Entries出错" + str(e))
return []
if ret_array:
ret_array = sorted(ret_array, key=lambda x: x['date'], reverse=True)
return ret_array
def get_medias_count(self) -> dict:
def get_medias_count(self) -> schemas.Statistic:
"""
获得电影、电视剧、动漫媒体数量
:return: MovieCount SeriesCount SongCount
"""
if not self._plex:
return {}
return schemas.Statistic()
sections = self._plex.library.sections()
MovieCount = SeriesCount = SongCount = EpisodeCount = 0
MovieCount = SeriesCount = EpisodeCount = 0
for sec in sections:
if sec.type == "movie":
MovieCount += sec.totalSize
if sec.type == "show":
SeriesCount += sec.totalSize
EpisodeCount += sec.totalViewSize(libtype='episode')
if sec.type == "artist":
SongCount += sec.totalSize
return {
"MovieCount": MovieCount,
"SeriesCount": SeriesCount,
"SongCount": SongCount,
"EpisodeCount": EpisodeCount
}
return schemas.Statistic(
movie_count=MovieCount,
tv_count=SeriesCount,
episode_count=EpisodeCount
)
def get_movies(self,
title: str,
def get_movies(self,
title: str,
original_title: str = None,
year: str = None,
tmdb_id: int = None) -> Optional[List[dict]]:
tmdb_id: int = None) -> Optional[List[schemas.MediaServerItem]]:
"""
根据标题和年份检查电影是否在Plex中存在存在则返回列表
:param title: 标题
@@ -145,20 +118,43 @@ class Plex(metaclass=Singleton):
return None
ret_movies = []
if year:
movies = self._plex.library.search(title=title, year=year, libtype="movie")
movies = self._plex.library.search(title=title,
year=year,
libtype="movie")
# 根据原标题再查一遍
if original_title and str(original_title) != str(title):
movies.extend(self._plex.library.search(title=original_title,
year=year,
libtype="movie"))
else:
movies = self._plex.library.search(title=title,
libtype="movie")
if original_title and str(original_title) != str(title):
movies.extend(self._plex.library.search(title=original_title,
libtype="movie"))
for item in set(movies):
ids = self.__get_ids(item.guids)
if tmdb_id and ids['tmdb_id']:
if str(ids['tmdb_id']) != str(tmdb_id):
continue
path = None
if item.locations:
path = item.locations[0]
ret_movies.append(
schemas.MediaServerItem(
server="plex",
library=item.librarySectionID,
item_id=item.key,
item_type=item.type,
title=item.title,
original_title=item.originalTitle,
year=item.year,
tmdbid=ids['tmdb_id'],
imdbid=ids['imdb_id'],
tvdbid=ids['tvdb_id'],
path=path,
)
)
return ret_movies
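
Both the movie and episode lookups compare TMDB ids only after coercing both sides to `str`, since plexapi GUIDs yield string ids while callers may pass ints. A tiny sketch of the comparison (helper name invented):

```python
def same_tmdb_id(found, wanted):
    # ids may arrive as int or str depending on the source; normalize both
    return str(found) == str(wanted)
```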
def get_tv_episodes(self,
@@ -167,7 +163,7 @@ class Plex(metaclass=Singleton):
original_title: str = None,
year: str = None,
tmdb_id: int = None,
season: int = None) -> Tuple[Optional[str], Optional[Dict[int, list]]]:
"""
Query all episode info of a TV series by title, year and season
:param item_id: media item ID
@@ -179,22 +175,28 @@ class Plex(metaclass=Singleton):
:return: the show's key and a dict mapping season number to episode index list
"""
if not self._plex:
return None, {}
if item_id:
videos = self._plex.fetchItem(item_id)
else:
# 根据标题和年份模糊搜索,该结果不够准确
videos = self._plex.library.search(title=title,
year=year,
libtype="show")
if (not videos
and original_title
and str(original_title) != str(title)):
videos = self._plex.library.search(title=original_title,
year=year,
libtype="show")
if not videos:
return None, {}
if isinstance(videos, list):
videos = videos[0]
video_tmdbid = self.__get_ids(videos.guids).get('tmdb_id')
if tmdb_id and video_tmdbid:
if str(video_tmdbid) != str(tmdb_id):
return None, {}
episodes = videos.episodes()
season_episodes = {}
for episode in episodes:
@@ -203,7 +205,7 @@ class Plex(metaclass=Singleton):
if episode.seasonNumber not in season_episodes:
season_episodes[episode.seasonNumber] = []
season_episodes[episode.seasonNumber].append(episode.index)
return videos.key, season_episodes
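
The episode grouping above builds a season-to-episodes dict before returning it alongside the show key. A self-contained sketch with stand-in episode objects (only `seasonNumber` and `index` are modeled):

```python
# Minimal stand-in for plexapi episode objects.
class FakeEpisode:
    def __init__(self, season, index):
        self.seasonNumber = season
        self.index = index

episodes = [FakeEpisode(1, 1), FakeEpisode(1, 2), FakeEpisode(2, 1)]
season_episodes = {}
for episode in episodes:
    if episode.seasonNumber not in season_episodes:
        season_episodes[episode.seasonNumber] = []
    season_episodes[episode.seasonNumber].append(episode.index)
```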
def get_remote_image_by_id(self, item_id: str, image_type: str) -> Optional[str]:
"""
@@ -216,9 +218,11 @@ class Plex(metaclass=Singleton):
return None
try:
if image_type == "Poster":
images = self._plex.fetchItems('/library/metadata/%s/posters' % item_id,
cls=media.Poster)
else:
images = self._plex.fetchItems('/library/metadata/%s/arts' % item_id,
cls=media.Art)
for image in images:
if hasattr(image, 'key') and image.key.startswith('http'):
return image.key
@@ -234,7 +238,7 @@ class Plex(metaclass=Singleton):
return False
return self._plex.library.update()
def refresh_library_by_items(self, items: List[schemas.RefreshMediaItem]) -> bool:
"""
按路径刷新媒体库 item: target_path
"""
@@ -283,19 +287,34 @@ class Plex(metaclass=Singleton):
logger.error(f"查找媒体库出错:{err}")
return "", ""
def get_iteminfo(self, itemid: str) -> Optional[schemas.MediaServerItem]:
"""
获取单个项目详情
"""
if not self._plex:
return None
try:
item = self._plex.fetchItem(itemid)
ids = self.__get_ids(item.guids)
path = None
if item.locations:
path = item.locations[0]
return schemas.MediaServerItem(
server="plex",
library=item.librarySectionID,
item_id=item.key,
item_type=item.type,
title=item.title,
original_title=item.originalTitle,
year=item.year,
tmdbid=ids['tmdb_id'],
imdbid=ids['imdb_id'],
tvdbid=ids['tvdb_id'],
path=path,
)
except Exception as err:
logger.error(f"获取项目详情出错:{err}")
return None
@staticmethod
def __get_ids(guids: List[Any]) -> dict:
@@ -326,9 +345,9 @@ class Plex(metaclass=Singleton):
获取媒体服务器所有媒体库列表
"""
if not parent:
yield None
if not self._plex:
yield None
try:
section = self._plex.library.sectionByID(int(parent))
if section:
@@ -339,21 +358,24 @@ class Plex(metaclass=Singleton):
path = None
if item.locations:
path = item.locations[0]
yield schemas.MediaServerItem(
server="plex",
library=item.librarySectionID,
item_id=item.key,
item_type=item.type,
title=item.title,
original_title=item.originalTitle,
year=item.year,
tmdbid=ids['tmdb_id'],
imdbid=ids['imdb_id'],
tvdbid=ids['tvdb_id'],
path=path,
)
except Exception as err:
logger.error(f"获取媒体库列表出错:{err}")
yield None
def get_webhook_message(self, form: any) -> Optional[schemas.WebhookEventInfo]:
"""
解析Plex报文
eventItem 字段的含义
@@ -402,7 +424,7 @@ class Plex(metaclass=Singleton):
"parentTitle": "Combat Shadow Fighting Saga / Great Prison Battle Saga",
"originalTitle": "Baki Hanma",
"contentRating": "TV-MA",
"summary": "The world is shaken by news",
"index": 1,
"parentIndex": 1,
"audienceRating": 8.5,
@@ -471,7 +493,7 @@ class Plex(metaclass=Singleton):
if not eventType:
return None
logger.info(f"接收到plex webhook:{message}")
eventItem = schemas.WebhookEventInfo(event=eventType, channel="plex")
if message.get('Metadata'):
if message.get('Metadata', {}).get('type') == 'episode':
eventItem.item_type = "TV"
@@ -484,14 +506,17 @@ class Plex(metaclass=Singleton):
eventItem.season_id = message.get('Metadata', {}).get('parentIndex')
eventItem.episode_id = message.get('Metadata', {}).get('index')
if (message.get('Metadata', {}).get('summary')
and len(message.get('Metadata', {}).get('summary')) > 100):
eventItem.overview = str(message.get('Metadata', {}).get('summary'))[:100] + "..."
else:
eventItem.overview = message.get('Metadata', {}).get('summary')
else:
eventItem.item_type = "MOV" if message.get('Metadata',
{}).get('type') == 'movie' else "SHOW"
eventItem.item_name = "%s %s" % (
message.get('Metadata', {}).get('title'),
"(" + str(message.get('Metadata', {}).get('year')) + ")")
eventItem.item_id = message.get('Metadata', {}).get('ratingKey')
if len(message.get('Metadata', {}).get('summary') or '') > 100:
eventItem.overview = str(message.get('Metadata', {}).get('summary'))[:100] + "..."
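
The overview handling truncates summaries longer than 100 characters, but calling `len()` on a missing summary would raise `TypeError`. A hedged sketch of a safe version (helper name invented):

```python
def truncate_overview(summary, limit=100):
    # Guard against a missing summary before taking len()
    summary = summary or ""
    if len(summary) > limit:
        return summary[:limit] + "..."
    return summary
```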


@@ -34,7 +34,7 @@ class QbittorrentModule(_ModuleBase):
"""
# 定时重连
if self.qbittorrent.is_inactive():
self.qbittorrent.reconnect()
def download(self, content: Union[Path, str], download_dir: Path, cookie: str,
episodes: Set[int] = None, category: str = None) -> Optional[Tuple[Optional[str], str]]:
@@ -225,6 +225,8 @@ class QbittorrentModule(_ModuleBase):
"""
# 调用Qbittorrent API查询实时信息
info = self.qbittorrent.transfer_info()
if not info:
return schemas.DownloaderInfo()
return schemas.DownloaderInfo(
download_speed=info.get("dl_info_speed"),
upload_speed=info.get("up_info_speed"),


@@ -35,6 +35,12 @@ class Qbittorrent(metaclass=Singleton):
return False
return not self.qbc
def reconnect(self):
"""
重连
"""
self.qbc = self.__login_qbittorrent()
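
Both downloader wrappers now expose `reconnect()` so the module re-runs login on the existing Singleton instead of constructing a new instance (which, under a Singleton metaclass, would return the same object without re-running login). A sketch of the pattern with a hypothetical client:

```python
class Downloader:
    """Sketch of the reconnect pattern; __login stands in for the real client login."""
    def __init__(self):
        self.client = self.__login()

    def __login(self):
        # pretend login; the real code returns a downloader Client or None
        return object()

    def is_inactive(self):
        # no client means the connection was lost
        return self.client is None

    def reconnect(self):
        # re-run login on the existing instance rather than rebuilding it
        self.client = self.__login()
```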
def __login_qbittorrent(self) -> Optional[Client]:
"""
连接qbittorrent


@@ -158,10 +158,8 @@ class Telegram(metaclass=Singleton):
title = re.sub(r"\s+", " ", title).strip()
free = torrent.volume_factor
seeder = f"{torrent.seeders}"
caption = f"{caption}\n{index}.【{site_name}】[{title}]({link}) " \
f"{StringUtils.str_filesize(torrent.size)} {free} {seeder}"
index += 1
if userid:


@@ -345,7 +345,7 @@ class TheMovieDbModule(_ModuleBase):
image_path = seasoninfo.get(image_type.value)
if image_path:
return f"https://{settings.TMDB_IMAGE_DOMAIN}/t/p/{image_prefix}{image_path}"
return None
def movie_similar(self, tmdbid: int) -> List[dict]:


@@ -159,14 +159,16 @@ class TmdbScraper:
xdirector.setAttribute("tmdbid", str(director.get("id") or ""))
# 演员
for actor in mediainfo.actors:
xactor = DomUtils.add_node(doc, root, "actor")
DomUtils.add_node(doc, xactor, "name", actor.get("name") or "")
DomUtils.add_node(doc, xactor, "type", "Actor")
DomUtils.add_node(doc, xactor, "role", actor.get("character") or actor.get("role") or "")
DomUtils.add_node(doc, xactor, "order", actor.get("order") if actor.get("order") is not None else "")
DomUtils.add_node(doc, xactor, "tmdbid", actor.get("id") or "")
DomUtils.add_node(doc, xactor, "thumb",
f"https://{settings.TMDB_IMAGE_DOMAIN}/t/p/original{actor.get('profile_path')}")
DomUtils.add_node(doc, xactor, "profile",
f"https://www.themoviedb.org/person/{actor.get('id')}")
# 风格
genres = mediainfo.genres or []
for genre in genres:
@@ -241,7 +243,8 @@ class TmdbScraper:
doc = minidom.Document()
root = DomUtils.add_node(doc, doc, "season")
# 添加时间
DomUtils.add_node(doc, root, "dateadded",
time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(time.time())))
# 简介
xplot = DomUtils.add_node(doc, root, "plot")
xplot.appendChild(doc.createCDATASection(seasoninfo.get("overview") or ""))
@@ -253,7 +256,8 @@ class TmdbScraper:
DomUtils.add_node(doc, root, "premiered", seasoninfo.get("air_date") or "")
DomUtils.add_node(doc, root, "releasedate", seasoninfo.get("air_date") or "")
# 发行年份
DomUtils.add_node(doc, root, "year",
seasoninfo.get("air_date")[:4] if seasoninfo.get("air_date") else "")
# seasonnumber
DomUtils.add_node(doc, root, "seasonnumber", str(season))
# 保存
@@ -317,6 +321,10 @@ class TmdbScraper:
DomUtils.add_node(doc, xactor, "name", actor.get("name") or "")
DomUtils.add_node(doc, xactor, "type", "Actor")
DomUtils.add_node(doc, xactor, "tmdbid", actor.get("id") or "")
DomUtils.add_node(doc, xactor, "thumb",
f"https://{settings.TMDB_IMAGE_DOMAIN}/t/p/original{actor.get('profile_path')}")
DomUtils.add_node(doc, xactor, "profile",
f"https://www.themoviedb.org/person/{actor.get('id')}")
# 保存文件
self.__save_nfo(doc, file_path.with_suffix(".nfo"))
@@ -336,6 +344,8 @@ class TmdbScraper:
logger.info(f"图片已保存:{file_path}")
else:
logger.info(f"{file_path.stem}图片下载失败,请检查网络连通性")
except RequestException as err:
raise err
except Exception as err:
logger.error(f"{file_path.stem}图片下载失败:{err}")


@@ -1136,6 +1136,26 @@ class TmdbHelper:
def get_person_detail(self, person_id: int) -> dict:
"""
获取人物详情
{
"adult": false,
"also_known_as": [
"Michael Chen",
"Chen He",
"陈赫"
],
"biography": "陈赫xxx",
"birthday": "1985-11-09",
"deathday": null,
"gender": 2,
"homepage": "https://movie.douban.com/celebrity/1313841/",
"id": 1397016,
"imdb_id": "nm4369305",
"known_for_department": "Acting",
"name": "Chen He",
"place_of_birth": "Fuzhou, Fujian Province, China",
"popularity": 9.228,
"profile_path": "/2Bk39zVuoHUNHtpZ7LVg7OgkDd4.jpg"
}
"""
if not self.person:
return {}


@@ -3,18 +3,19 @@
import logging
import os
import time
from datetime import datetime
from functools import lru_cache
import requests
import requests.exceptions
from app.utils.http import RequestUtils
from .exceptions import TMDbException
logger = logging.getLogger(__name__)
class TMDb(object):
TMDB_API_KEY = "TMDB_API_KEY"
TMDB_LANGUAGE = "TMDB_LANGUAGE"
TMDB_SESSION_ID = "TMDB_SESSION_ID"
@@ -25,11 +26,18 @@ class TMDb(object):
TMDB_DOMAIN = "TMDB_DOMAIN"
REQUEST_CACHE_MAXSIZE = None
_req = None
_session = None
def __init__(self, obj_cached=True, session=None):
if session is not None:
self._req = RequestUtils(session=session, proxies=self.proxies)
else:
self._session = requests.Session()
self._req = RequestUtils(session=self._session, proxies=self.proxies)
self._remaining = 40
self._reset = None
self._timeout = 15
self.obj_cached = obj_cached
if os.environ.get(self.TMDB_LANGUAGE) is None:
os.environ[self.TMDB_LANGUAGE] = "en-US"
@@ -53,7 +61,7 @@ class TMDb(object):
@property
def domain(self):
return os.environ.get(self.TMDB_DOMAIN)
@property
def proxies(self):
proxy = os.environ.get(self.TMDB_PROXIES)
@@ -130,13 +138,24 @@ class TMDb(object):
os.environ[self.TMDB_CACHE_ENABLED] = str(cache)
@lru_cache(maxsize=REQUEST_CACHE_MAXSIZE)
def cached_request(self, method, url, data, json,
_ts=datetime.strftime(datetime.now(), '%Y%m%d')):
"""
缓存请求时间默认1天
"""
return self.request(method, url, data, json)
def request(self, method, url, data, json):
if method == "GET":
return self._req.get_res(url, params=data, json=json)
else:
return self._req.post_res(url, data=data, json=json)
def cache_clear(self):
return self.cached_request.cache_clear()
def _request_obj(self, action, params="", call_cached=True,
method="GET", data=None, json=None, key=None):
if self.api_key is None or self.api_key == "":
raise TMDbException("No API key found.")
@@ -151,7 +170,7 @@ class TMDb(object):
if self.cache and self.obj_cached and call_cached and method != "POST":
req = self.cached_request(method, url, data, json)
else:
req = self.request(method, url, data, json)
headers = req.headers
@@ -196,3 +215,7 @@ class TMDb(object):
if key:
return json.get(key)
return json
def __del__(self):
if self._session:
self._session.close()


@@ -34,7 +34,7 @@ class TransmissionModule(_ModuleBase):
"""
# 定时重连
if self.transmission.is_inactive():
self.transmission.reconnect()
def download(self, content: Union[Path, str], download_dir: Path, cookie: str,
episodes: Set[int] = None, category: str = None) -> Optional[Tuple[Optional[str], str]]:
@@ -211,6 +211,8 @@ class TransmissionModule(_ModuleBase):
下载器信息
"""
info = self.transmission.transfer_info()
if not info:
return schemas.DownloaderInfo()
return schemas.DownloaderInfo(
download_speed=info.download_speed,
upload_speed=info.upload_speed,


@@ -56,6 +56,12 @@ class Transmission(metaclass=Singleton):
return False
return not self.trc
def reconnect(self):
"""
重连
"""
self.trc = self.__login_transmission()
def get_torrents(self, ids: Union[str, list] = None, status: Union[str, list] = None,
tags: Union[str, list] = None) -> Tuple[List[Torrent], bool]:
"""


@@ -0,0 +1,551 @@
import time
from collections import defaultdict
from datetime import datetime, timedelta
from pathlib import Path
import pytz
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.cron import CronTrigger
from app.chain.transfer import TransferChain
from app.core.config import settings
from app.core.event import eventmanager
from app.db.downloadhistory_oper import DownloadHistoryOper
from app.db.transferhistory_oper import TransferHistoryOper
from app.plugins import _PluginBase
from typing import Any, List, Dict, Tuple, Optional
from app.log import logger
from app.schemas import NotificationType, DownloadHistory
from app.schemas.types import EventType
class AutoClean(_PluginBase):
# 插件名称
plugin_name = "定时清理媒体库"
# 插件描述
plugin_desc = "定时清理用户下载的种子、源文件、媒体库文件。"
# 插件图标
plugin_icon = "clean.png"
# 主题色
plugin_color = "#3377ed"
# 插件版本
plugin_version = "1.0"
# 插件作者
plugin_author = "thsrite"
# 作者主页
author_url = "https://github.com/thsrite"
# 插件配置项ID前缀
plugin_config_prefix = "autoclean_"
# 加载顺序
plugin_order = 23
# 可使用的用户级别
auth_level = 2
# 私有属性
_enabled = False
# 任务执行间隔
_cron = None
_type = None
_onlyonce = False
_notify = False
_cleantype = None
_cleanuser = None
_cleandate = None
_downloadhis = None
_transferhis = None
# 定时器
_scheduler: Optional[BackgroundScheduler] = None
def init_plugin(self, config: dict = None):
# 停止现有任务
self.stop_service()
if config:
self._enabled = config.get("enabled")
self._cron = config.get("cron")
self._onlyonce = config.get("onlyonce")
self._notify = config.get("notify")
self._cleantype = config.get("cleantype")
self._cleanuser = config.get("cleanuser")
self._cleandate = config.get("cleandate")
# 加载模块
if self._enabled:
self._downloadhis = DownloadHistoryOper(self.db)
self._transferhis = TransferHistoryOper(self.db)
# 定时服务
self._scheduler = BackgroundScheduler(timezone=settings.TZ)
if self._cron:
try:
self._scheduler.add_job(func=self.__clean,
trigger=CronTrigger.from_crontab(self._cron),
name="定时清理媒体库")
except Exception as err:
logger.error(f"定时任务配置错误:{err}")
if self._onlyonce:
logger.info(f"定时清理媒体库服务启动,立即运行一次")
self._scheduler.add_job(func=self.__clean, trigger='date',
run_date=datetime.now(tz=pytz.timezone(settings.TZ)) + timedelta(seconds=3),
name="定时清理媒体库")
# 关闭一次性开关
self._onlyonce = False
self.update_config({
"onlyonce": False,
"cron": self._cron,
"cleantype": self._cleantype,
"enabled": self._enabled,
"cleanuser": self._cleanuser,
"cleandate": self._cleandate,
"notify": self._notify,
})
# 启动任务
if self._scheduler.get_jobs():
self._scheduler.print_jobs()
self._scheduler.start()
def __clean(self):
"""
定时清理媒体库
"""
if not self._cleandate:
logger.error("未配置清理媒体库时间,停止运行")
return
# 清理日期
current_time = datetime.now()
days_ago = current_time - timedelta(days=int(self._cleandate))
clean_date = days_ago.strftime("%Y-%m-%d")
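
The cleanup cutoff is computed by subtracting `cleandate` days from the current time and formatting as `%Y-%m-%d`. A sketch with `now` injected so the result can be verified deterministically (helper name invented):

```python
from datetime import datetime, timedelta

def cutoff_date(now, days):
    # records downloaded on or after this date are candidates for cleaning;
    # days may arrive as str from the config form, hence int()
    return (now - timedelta(days=int(days))).strftime("%Y-%m-%d")
```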
# 查询用户清理日期之后的下载历史
if not self._cleanuser:
downloadhis_list = self._downloadhis.list_by_user_date(date=clean_date)
logger.info(f'获取到日期 {clean_date} 之后的下载历史 {len(downloadhis_list)}')
self.__clean_history(date=clean_date, downloadhis_list=downloadhis_list)
else:
for userid in str(self._cleanuser).split(","):
downloadhis_list = self._downloadhis.list_by_user_date(date=clean_date,
userid=userid)
logger.info(
f'获取到用户 {userid} 日期 {clean_date} 之后的下载历史 {len(downloadhis_list)}')
self.__clean_history(date=clean_date, downloadhis_list=downloadhis_list, userid=userid)
def __clean_history(self, date: str, downloadhis_list: List[DownloadHistory], userid: str = None):
"""
清理下载历史、转移记录
"""
if not downloadhis_list:
logger.warn(f"未获取到日期 {date} 之后的下载记录,停止运行")
return
# 读取历史记录
history = self.get_data('history') or []
# 创建一个字典来保存分组结果
downloadhis_grouped_dict: Dict[tuple, List[DownloadHistory]] = defaultdict(list)
# 遍历DownloadHistory对象列表
for downloadhis in downloadhis_list:
# 获取type和tmdbid的值
dtype = downloadhis.type
tmdbid = downloadhis.tmdbid
# 将DownloadHistory对象添加到对应分组的列表中
downloadhis_grouped_dict[(dtype, tmdbid)].append(downloadhis)
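
The grouping step above collects download-history rows under a `(type, tmdbid)` key via `defaultdict(list)`, so each media item is cleaned once regardless of how many torrents it produced. A self-contained sketch (sample records invented):

```python
from collections import defaultdict

records = [
    {"type": "电影", "tmdbid": 603, "title": "The Matrix"},
    {"type": "电视剧", "tmdbid": 1396, "title": "Breaking Bad S01"},
    {"type": "电视剧", "tmdbid": 1396, "title": "Breaking Bad S02"},
]
grouped = defaultdict(list)
for rec in records:
    # missing keys start as an empty list automatically
    grouped[(rec["type"], rec["tmdbid"])].append(rec)
```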
# 输出分组结果
for key, downloadhis_list in downloadhis_grouped_dict.items():
logger.info(f"开始清理 {key}")
del_transferhis_cnt = 0
del_media_name = downloadhis_list[0].title
del_media_user = downloadhis_list[0].userid
del_media_type = downloadhis_list[0].type
del_media_year = downloadhis_list[0].year
del_media_season = downloadhis_list[0].seasons
del_media_episode = downloadhis_list[0].episodes
del_image = downloadhis_list[0].image
for downloadhis in downloadhis_list:
if not downloadhis.download_hash:
logger.debug(f'下载历史 {downloadhis.id} {downloadhis.title} 未获取到download_hash跳过处理')
continue
# 根据hash获取转移记录
transferhis_list = self._transferhis.list_by_hash(download_hash=downloadhis.download_hash)
if not transferhis_list:
logger.warn(f"下载历史 {downloadhis.download_hash} 未查询到转移记录,跳过处理")
continue
for transferhis in transferhis_list:
# 删除媒体库文件
if self._cleantype in ("dest", "all"):
TransferChain(self.db).delete_files(Path(transferhis.dest))
# 删除记录
self._transferhis.delete(transferhis.id)
# 删除源文件
if self._cleantype in ("src", "all"):
TransferChain(self.db).delete_files(Path(transferhis.src))
# 发送事件
eventmanager.send_event(
EventType.DownloadFileDeleted,
{
"src": transferhis.src
}
)
# 累加删除数量
del_transferhis_cnt += len(transferhis_list)
# 发送消息
if self._notify:
self.post_message(
mtype=NotificationType.MediaServer,
title="【定时清理媒体库任务完成】",
text=f"清理媒体名称 {del_media_name}\n"
f"下载媒体用户 {del_media_user}\n"
f"删除历史记录 {del_transferhis_cnt}",
userid=userid)
history.append({
"type": del_media_type,
"title": del_media_name,
"year": del_media_year,
"season": del_media_season,
"episode": del_media_episode,
"image": del_image,
"del_time": time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(time.time()))
})
# 保存历史
self.save_data("history", history)
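
A pitfall worth noting in the clean-type checks: wrapping a comparison in `str(...)`, as in `str(self._cleantype == "dest")`, yields `"True"` or `"False"`, both non-empty strings and therefore truthy, so such a branch always executes. A membership test expresses the intent; a sketch (helper name invented):

```python
def should_clean_dest(cleantype):
    # membership test: True only for the intended values
    return cleantype in ("dest", "all")

# the pitfall: str() of a boolean is a non-empty string, hence truthy
always_truthy = str("src" == "dest")
```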
def get_state(self) -> bool:
return self._enabled
@staticmethod
def get_command() -> List[Dict[str, Any]]:
pass
def get_api(self) -> List[Dict[str, Any]]:
pass
def get_form(self) -> Tuple[List[dict], Dict[str, Any]]:
"""
拼装插件配置页面需要返回两块数据1、页面配置2、数据结构
"""
return [
{
'component': 'VForm',
'content': [
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'enabled',
'label': '启用插件',
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'onlyonce',
'label': '立即运行一次',
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'notify',
'label': '开启通知',
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VTextField',
'props': {
'model': 'cron',
'label': '执行周期',
'placeholder': '0 0 ? ? ?'
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSelect',
'props': {
'model': 'cleantype',
'label': '清理方式',
'items': [
{'title': '媒体库文件', 'value': 'dest'},
{'title': '源文件', 'value': 'src'},
{'title': '所有文件', 'value': 'all'},
]
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VTextField',
'props': {
'model': 'cleandate',
'label': '清理媒体日期',
'placeholder': '清理多少天之前的下载记录(天)'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
},
'content': [
{
'component': 'VTextField',
'props': {
'model': 'cleanuser',
'label': '清理下载用户',
'placeholder': '多个用户,分割'
}
}
]
}
]
}
]
}
], {
"enabled": False,
"onlyonce": False,
"notify": False,
"cleantype": "dest",
"cron": "",
"cleanuser": "",
"cleandate": 30
}
def get_page(self) -> List[dict]:
"""
拼装插件详情页面,需要返回页面配置,同时附带数据
"""
# 查询同步详情
historys = self.get_data('history')
if not historys:
return [
{
'component': 'div',
'text': '暂无数据',
'props': {
'class': 'text-center',
}
}
]
# 数据按时间降序排序
historys = sorted(historys, key=lambda x: x.get('del_time'), reverse=True)
# 拼装页面
contents = []
for history in historys:
htype = history.get("type")
title = history.get("title")
year = history.get("year")
season = history.get("season")
episode = history.get("episode")
image = history.get("image")
del_time = history.get("del_time")
if season:
sub_contents = [
{
'component': 'VCardText',
'props': {
'class': 'pa-0 px-2'
},
'text': f'类型:{htype}'
},
{
'component': 'VCardText',
'props': {
'class': 'pa-0 px-2'
},
'text': f'标题:{title}'
},
{
'component': 'VCardText',
'props': {
'class': 'pa-0 px-2'
},
'text': f'年份:{year}'
},
{
'component': 'VCardText',
'props': {
'class': 'pa-0 px-2'
},
'text': f'季:{season}'
},
{
'component': 'VCardText',
'props': {
'class': 'pa-0 px-2'
},
'text': f'集:{episode}'
},
{
'component': 'VCardText',
'props': {
'class': 'pa-0 px-2'
},
'text': f'时间:{del_time}'
}
]
else:
sub_contents = [
{
'component': 'VCardText',
'props': {
'class': 'pa-0 px-2'
},
'text': f'类型:{htype}'
},
{
'component': 'VCardText',
'props': {
'class': 'pa-0 px-2'
},
'text': f'标题:{title}'
},
{
'component': 'VCardText',
'props': {
'class': 'pa-0 px-2'
},
'text': f'年份:{year}'
},
{
'component': 'VCardText',
'props': {
'class': 'pa-0 px-2'
},
'text': f'时间:{del_time}'
}
]
contents.append(
{
'component': 'VCard',
'content': [
{
'component': 'div',
'props': {
'class': 'd-flex justify-space-start flex-nowrap flex-row',
},
'content': [
{
'component': 'div',
'content': [
{
'component': 'VImg',
'props': {
'src': image,
'height': 120,
'width': 80,
'aspect-ratio': '2/3',
'class': 'object-cover shadow ring-gray-500',
'cover': True
}
}
]
},
{
'component': 'div',
'content': sub_contents
}
]
}
]
}
)
return [
{
'component': 'div',
'props': {
'class': 'grid gap-3 grid-info-card',
},
'content': contents
}
]
def stop_service(self):
"""
退出插件
"""
try:
if self._scheduler:
self._scheduler.remove_all_jobs()
if self._scheduler.running:
self._scheduler.shutdown()
self._scheduler = None
except Exception as e:
logger.error("退出插件失败:%s" % str(e))


@@ -420,8 +420,7 @@ class BestFilmVersion(_PluginBase):
item_info_resp = Emby().get_iteminfo(itemid=data.get('Id'))
else:
item_info_resp = self.plex_get_iteminfo(itemid=data.get('Id'))
logger.debug(f'BestFilmVersion插件 item打印 {item_info_resp}')
if not item_info_resp:
continue
@@ -430,41 +429,35 @@ class BestFilmVersion(_PluginBase):
continue
# 获取tmdb_id
tmdb_id = item_info_resp.tmdbid
if not tmdb_id:
continue
# 识别媒体信息
mediainfo: MediaInfo = self.chain.recognize_media(tmdbid=tmdb_id, mtype=MediaType.MOVIE)
if not mediainfo:
logger.warn(f'未识别到媒体信息,标题:{data.get("Name")}tmdbid{tmdb_id}')
continue
# 添加订阅
self.subscribechain.add(mtype=MediaType.MOVIE,
title=mediainfo.title,
year=mediainfo.year,
tmdbid=mediainfo.tmdb_id,
best_version=True,
username="收藏洗版",
exist_ok=True)
# 加入缓存
caches.append(data.get('Name'))
# 存储历史记录
if mediainfo.tmdb_id not in [h.get("tmdbid") for h in history]:
history.append({
"title": mediainfo.title,
"type": mediainfo.type.value,
"year": mediainfo.year,
"poster": mediainfo.get_poster_image(),
"overview": mediainfo.overview,
"tmdbid": mediainfo.tmdb_id,
"time": datetime.now().strftime("%Y-%m-%d %H:%M:%S")
})
# 保存历史记录
self.save_data('history', history)
# 保存缓存
@@ -474,7 +467,7 @@ class BestFilmVersion(_PluginBase):
def jellyfin_get_items(self) -> List[dict]:
# 获取所有user
users_url = "[HOST]Users?&apikey=[APIKEY]"
users = self.get_users(Jellyfin().get_data(users_url))
if not users:
logger.info(f"bestfilmversion/users_url: {users_url}")
@@ -482,7 +475,7 @@ class BestFilmVersion(_PluginBase):
all_items = []
for user in users:
# 根据加入日期 降序排序
url = "[HOST]Users/" + user + "/Items?SortBy=DateCreated%2CSortName" \
"&SortOrder=Descending" \
"&Filters=IsFavorite" \
"&Recursive=true" \
@@ -491,7 +484,7 @@ class BestFilmVersion(_PluginBase):
"&ExcludeLocationTypes=Virtual" \
"&EnableTotalRecordCount=false" \
"&Limit=20" \
"&apikey=[APIKEY]"
resp = self.get_items(Jellyfin().get_data(url))
if not resp:
continue
@@ -500,14 +493,14 @@ class BestFilmVersion(_PluginBase):
def emby_get_items(self) -> List[dict]:
# 获取所有user
get_users_url = "[HOST]Users?&api_key=[APIKEY]"
users = self.get_users(Emby().get_data(get_users_url))
if not users:
return []
all_items = []
for user in users:
# 根据加入日期 降序排序
url = "[HOST]emby/Users/" + user + "/Items?SortBy=DateCreated%2CSortName" \
"&SortOrder=Descending" \
"&Filters=IsFavorite" \
"&Recursive=true" \
@@ -515,7 +508,7 @@ class BestFilmVersion(_PluginBase):
"&CollapseBoxSetItems=false" \
"&ExcludeLocationTypes=Virtual" \
"&EnableTotalRecordCount=false" \
"&Limit=20&api_key=[APIKEY]"
resp = self.get_items(Emby().get_data(url))
if not resp:
continue
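
The URL templates above switch from `{HOST}`-style to `[HOST]`-style placeholders, presumably so literal braces cannot collide with `str.format`-style substitution introduced by the custom-placeholder feature. A sketch of filling such a template (host and key values invented):

```python
def fill_url(template, host, apikey):
    # square-bracket placeholders cannot be confused with str.format fields
    return template.replace("[HOST]", host).replace("[APIKEY]", apikey)

url = fill_url("[HOST]Users?&api_key=[APIKEY]", "http://emby:8096/", "secret")
```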
@@ -634,52 +627,34 @@ class BestFilmVersion(_PluginBase):
if not _is_lock:
return
try:
if not data.tmdb_id:
info = None
if (data.channel == 'jellyfin'
and data.save_reason == 'UpdateUserRating'
and data.item_favorite):
info = Jellyfin().get_iteminfo(itemid=data.item_id)
elif data.channel == 'emby' and data.event == 'item.rate':
info = Emby().get_iteminfo(itemid=data.item_id)
elif data.channel == 'plex' and data.event == 'item.rate':
info = Plex().get_iteminfo(itemid=data.item_id)
logger.debug(f'BestFilmVersion/webhook_message_action item打印{info}')
if not info:
return
if info.item_type not in ['Movie', 'MOV', 'movie']:
return
tmdb_id = info.tmdbid
else:
tmdb_id = data.tmdb_id
if (data.channel == 'jellyfin'
and (data.save_reason != 'UpdateUserRating' or not data.item_favorite)):
return
if data.item_type not in ['Movie', 'MOV', 'movie']:
return
# 识别媒体信息
mediainfo = self.chain.recognize_media(tmdbid=tmdb_id, mtype=MediaType.MOVIE)
if not mediainfo:
logger.warn(f'未识别到媒体信息,标题:{data.item_name},tmdbID:{tmdb_id}')
return
# 读取缓存
caches = self._cache_path.read_text().split("\n") if self._cache_path.exists() else []
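The hunk above replaces the inline `ExternalUrls` walk with `info.tmdbid`, but the original extraction trick (reverse the split URL, take the first segment) is worth a standalone look. A minimal sketch with a hypothetical helper name — the payload shape mirrors Emby/Jellyfin `ExternalUrls`:

```python
def extract_tmdb_id(external_urls):
    """Pick the TheMovieDb entry and take the trailing URL path segment as the id."""
    for item in external_urls or []:
        if item.get("Name") != "TheMovieDb":
            continue
        parts = str(item.get("Url")).split("/")
        parts.reverse()          # trailing segment first
        return parts[0]
    return None

urls = [{"Name": "TheMovieDb", "Url": "https://www.themoviedb.org/movie/603"}]
print(extract_tmdb_id(urls))  # 603
```

Reversing and taking index 0 is equivalent to `split("/")[-1]`; the plugin's version just makes the "last segment" intent explicit.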

View File

@@ -49,6 +49,7 @@ class BrushFlow(_PluginBase):
siteshelper = None
siteoper = None
torrents = None
sites = None
qb = None
tr = None
# 添加种子定时
@@ -88,6 +89,7 @@ class BrushFlow(_PluginBase):
self.siteshelper = SitesHelper()
self.siteoper = SiteOper()
self.torrents = TorrentsChain()
self.sites = SitesHelper()
if config:
self._enabled = config.get("enabled")
self._notify = config.get("notify")
@@ -115,11 +117,21 @@ class BrushFlow(_PluginBase):
self._save_path = config.get("save_path")
self._clear_task = config.get("clear_task")
# 过滤掉已删除的站点
self._brushsites = [site.get("id") for site in self.sites.get_indexers() if
not site.get("public") and site.get("id") in self._brushsites]
# 保存配置
self.__update_config()
if self._clear_task:
# 清除统计数据
self.save_data("statistic", {})
# 清除种子记录
self.save_data("torrents", {})
# 关闭一次性开关
self._clear_task = False
self.__update_config()
# 停止现有任务
self.stop_service()
@@ -225,7 +237,7 @@ class BrushFlow(_PluginBase):
self._scheduler.add_job(self.brush, 'interval', minutes=self._cron)
except Exception as e:
logger.error(f"站点刷流服务启动失败:{e}")
self.systemmessage(f"站点刷流服务启动失败:{e}")
self.systemmessage.put(f"站点刷流服务启动失败:{e}")
return
if self._onlyonce:
logger.info(f"站点刷流服务启动,立即运行一次")
@@ -729,6 +741,7 @@ class BrushFlow(_PluginBase):
"enabled": False,
"notify": True,
"onlyonce": False,
"clear_task": False,
"freeleech": "free"
}
@@ -1109,7 +1122,7 @@ class BrushFlow(_PluginBase):
{
'component': 'thead',
'props': {
'class': 'text-no-wrap'
'class': 'text-no-wrap'
},
'content': [
{
@@ -1218,7 +1231,8 @@ class BrushFlow(_PluginBase):
"seed_inactivetime": self._seed_inactivetime,
"up_speed": self._up_speed,
"dl_speed": self._dl_speed,
"save_path": self._save_path
"save_path": self._save_path,
"clear_task": self._clear_task
})
def brush(self):
@@ -1285,10 +1299,10 @@ class BrushFlow(_PluginBase):
else:
end_size = 0
if begin_size and not end_size \
and torrent.size > float(begin_size) * 1024**3:
and torrent.size > float(begin_size) * 1024 ** 3:
continue
elif begin_size and end_size \
and not float(begin_size) * 1024**3 <= torrent.size <= float(end_size) * 1024**3:
and not float(begin_size) * 1024 ** 3 <= torrent.size <= float(end_size) * 1024 ** 3:
continue
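The size gate above only reformats spacing around `1024 ** 3`, but its semantics are easy to misread: bounds are configured in GiB while torrent sizes arrive in bytes, and a lone `begin_size` acts as a maximum. A sketch mirroring that logic (the function name is hypothetical, not plugin API):

```python
def passes_size_filter(size_bytes: int, begin_gb=None, end_gb=None) -> bool:
    """Mirror of the hunk's check: GiB bounds against a byte count."""
    if begin_gb and not end_gb:
        # A single bound is treated as an upper limit
        return size_bytes <= float(begin_gb) * 1024 ** 3
    if begin_gb and end_gb:
        return float(begin_gb) * 1024 ** 3 <= size_bytes <= float(end_gb) * 1024 ** 3
    return True  # no bounds configured

print(passes_size_filter(5 * 1024 ** 3, begin_gb=10))              # True
print(passes_size_filter(15 * 1024 ** 3, begin_gb=10, end_gb=20))  # True
```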
# 做种人数
if self._seeder:
@@ -1345,7 +1359,7 @@ class BrushFlow(_PluginBase):
break
# 保种体积GB
if self._disksize \
and (torrents_size + torrent.size) > float(self._disksize) * 1024**3:
and (torrents_size + torrent.size) > float(self._disksize) * 1024 ** 3:
logger.warn(f"当前做种体积 {StringUtils.str_filesize(torrents_size)} "
f"已超过保种体积 {self._disksize},停止新增任务")
break
@@ -1367,6 +1381,7 @@ class BrushFlow(_PluginBase):
"deleted": False,
}
# 统计数据
torrents_size += torrent.size
statistic_info["count"] += 1
# 发送消息
self.__send_add_message(torrent)
@@ -1767,19 +1782,22 @@ class BrushFlow(_PluginBase):
"""
发送删除种子的消息
"""
if self._notify:
self.chain.post_message(Notification(
mtype=NotificationType.SiteMessage,
title=f"【刷流任务删种】",
text=f"站点:{site_name}\n"
f"标题:{torrent_title}\n"
f"原因:{reason}"
))
if not self._notify:
return
self.chain.post_message(Notification(
mtype=NotificationType.SiteMessage,
title=f"【刷流任务删种】",
text=f"站点:{site_name}\n"
f"标题:{torrent_title}\n"
f"原因:{reason}"
))
def __send_add_message(self, torrent: TorrentInfo):
"""
发送添加下载的消息
"""
if not self._notify:
return
msg_text = ""
if torrent.site_name:
msg_text = f"站点:{torrent.site_name}"
@@ -1819,25 +1837,29 @@ class BrushFlow(_PluginBase):
def __get_downloader_info(self) -> schemas.DownloaderInfo:
"""
获取下载器实时信息
获取下载器实时信息(所有下载器)
"""
if self._downloader == "qbittorrent":
# 调用Qbittorrent API查询实时信息
ret_info = schemas.DownloaderInfo()
# Qbittorrent
if self.qb:
info = self.qb.transfer_info()
return schemas.DownloaderInfo(
download_speed=info.get("dl_info_speed"),
upload_speed=info.get("up_info_speed"),
download_size=info.get("dl_info_data"),
upload_size=info.get("up_info_data")
)
else:
if info:
ret_info.download_speed += info.get("dl_info_speed")
ret_info.upload_speed += info.get("up_info_speed")
ret_info.download_size += info.get("dl_info_data")
ret_info.upload_size += info.get("up_info_data")
# Transmission
if self.tr:
info = self.tr.transfer_info()
return schemas.DownloaderInfo(
download_speed=info.download_speed,
upload_speed=info.upload_speed,
download_size=info.current_stats.downloaded_bytes,
upload_size=info.current_stats.uploaded_bytes
)
if info:
ret_info.download_speed += info.download_speed
ret_info.upload_speed += info.upload_speed
ret_info.download_size += info.current_stats.downloaded_bytes
ret_info.upload_size += info.current_stats.uploaded_bytes
return ret_info
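The `__get_downloader_info` rewrite above switches from "return the first configured downloader's stats" to "sum across every downloader that responds". A self-contained sketch of that accumulation pattern, with a stand-in dataclass in place of `schemas.DownloaderInfo`:

```python
from dataclasses import dataclass

@dataclass
class DownloaderInfo:  # stand-in for schemas.DownloaderInfo
    download_speed: float = 0
    upload_speed: float = 0
    download_size: float = 0
    upload_size: float = 0

def aggregate(infos) -> DownloaderInfo:
    total = DownloaderInfo()
    for info in infos:
        if not info:
            continue  # a downloader that answered nothing contributes zero
        total.download_speed += info.download_speed
        total.upload_speed += info.upload_speed
        total.download_size += info.download_size
        total.upload_size += info.upload_size
    return total

print(aggregate([DownloaderInfo(10, 5, 100, 50), None,
                 DownloaderInfo(2, 1, 20, 10)]).download_speed)  # 12
```

Starting from zero-valued fields means a `None` response simply drops out instead of raising, which is the behavioral point of the change.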
def __get_downloading_count(self) -> int:
"""
@@ -1848,7 +1870,7 @@ class BrushFlow(_PluginBase):
return 0
torrents = downlader.get_downloading_torrents()
return len(torrents) or 0
@staticmethod
def __get_pubminutes(pubdate: str) -> int:
"""
@@ -1860,8 +1882,7 @@ class BrushFlow(_PluginBase):
pubdate = pubdate.replace("T", " ").replace("Z", "")
pubdate = datetime.strptime(pubdate, "%Y-%m-%d %H:%M:%S")
now = datetime.now()
return (now - pubdate).seconds // 60
return (now - pubdate).total_seconds() // 60
except Exception as e:
print(str(e))
return 0
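The `__get_pubminutes` fix above deserves spelling out: `timedelta.seconds` is only the sub-day remainder, so any torrent published more than a day ago had its age under-reported, while `total_seconds()` covers the whole span. A minimal stdlib demonstration:

```python
from datetime import datetime, timedelta

pub = datetime(2023, 10, 1, 12, 0, 0)
now = pub + timedelta(days=2, minutes=5)
delta = now - pub
print(delta.seconds // 60)               # 5, the days component is silently dropped
print(int(delta.total_seconds() // 60))  # 2885, the full elapsed minutes
```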

View File

@@ -4,7 +4,7 @@ from typing import List, Tuple, Dict, Any
from app.core.config import settings
from app.core.context import MediaInfo
from app.core.event import eventmanager
from app.core.event import eventmanager, Event
from app.log import logger
from app.plugins import _PluginBase
from app.schemas import TransferInfo
@@ -183,7 +183,7 @@ class ChineseSubFinder(_PluginBase):
pass
@eventmanager.register(EventType.TransferComplete)
def download(self, event):
def download(self, event: Event):
"""
调用ChineseSubFinder下载字幕
"""

View File

@@ -725,9 +725,9 @@ class CloudflareSpeedTest(_PluginBase):
new_entrys.append(host_entry)
except Exception as err:
err_hosts.append(host + "\n")
logger.error(f"{host} 格式转换错误:{str(err)}")
logger.error(f"[HOST] 格式转换错误:{str(err)}")
# 推送实时消息
self.systemmessage.put(f"{host} 格式转换错误:{str(err)}")
self.systemmessage.put(f"[HOST] 格式转换错误:{str(err)}")
# 写入系统hosts
if new_entrys:

View File

@@ -199,9 +199,9 @@ class CustomHosts(_PluginBase):
new_entrys.append(host_entry)
except Exception as err:
err_hosts.append(host + "\n")
logger.error(f"{host} 格式转换错误:{str(err)}")
logger.error(f"[HOST] 格式转换错误:{str(err)}")
# 推送实时消息
self.systemmessage.put(f"{host} 格式转换错误:{str(err)}")
self.systemmessage.put(f"[HOST] 格式转换错误:{str(err)}")
# 写入系统hosts
if new_entrys:

View File

@@ -12,6 +12,7 @@ from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer
from watchdog.observers.polling import PollingObserver
from app.chain.tmdb import TmdbChain
from app.chain.transfer import TransferChain
from app.core.config import settings
from app.core.context import MediaInfo
@@ -74,6 +75,7 @@ class DirMonitor(_PluginBase):
transferhis = None
downloadhis = None
transferchian = None
tmdbchain = None
_observer = []
_enabled = False
_notify = False
@@ -85,6 +87,8 @@ class DirMonitor(_PluginBase):
_exclude_keywords = ""
# 存储源目录与目的目录关系
_dirconf: Dict[str, Path] = {}
# 存储源目录转移方式
_transferconf: Dict[str, str] = {}
_medias = {}
# 退出事件
_event = Event()
@@ -93,9 +97,10 @@ class DirMonitor(_PluginBase):
self.transferhis = TransferHistoryOper(self.db)
self.downloadhis = DownloadHistoryOper(self.db)
self.transferchian = TransferChain(self.db)
self.tmdbchain = TmdbChain(self.db)
# 清空配置
self._dirconf = {}
self._transferconf = {}
# 读取配置
if config:
@@ -130,6 +135,13 @@ class DirMonitor(_PluginBase):
paths = [mon_path]
else:
paths = mon_path.split(":")
# 自定义转移方式
if mon_path.count("#") == 1:
self._transferconf[mon_path] = mon_path.split("#")[1]
else:
self._transferconf[mon_path] = self._transfer_type
target_path = None
if len(paths) > 1:
mon_path = paths[0]
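The config parsing above splits each monitor line twice: `#` carves off an optional per-path transfer mode, `:` separates monitor dir from target dir. A hedged sketch of that format — function name and the `copy` default are illustrative, and Windows drive-letter paths (which also contain `:`) are deliberately ignored here as the plugin handles them separately:

```python
def parse_monitor_line(line: str, default_mode: str = "copy"):
    """Parse '<monitor dir>[:<target dir>][#<transfer mode>]' (hypothetical helper)."""
    mode = default_mode
    if line.count("#") == 1:
        line, mode = line.split("#")   # per-path mode overrides the global one
    parts = line.split(":")
    mon = parts[0]
    target = parts[1] if len(parts) > 1 else None
    return mon, target, mode

print(parse_monitor_line("/downloads:/media#move"))  # ('/downloads', '/media', 'move')
print(parse_monitor_line("/downloads"))              # ('/downloads', None, 'copy')
```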
@@ -245,6 +257,8 @@ class DirMonitor(_PluginBase):
# 查询转移目的目录
target: Path = self._dirconf.get(mon_path)
# 查询转移方式
transfer_type = self._transferconf.get(mon_path)
# 识别媒体信息
mediainfo: MediaInfo = self.chain.recognize_media(meta=file_meta)
@@ -258,7 +272,7 @@ class DirMonitor(_PluginBase):
# 新增转移成功历史记录
self.transferhis.add_fail(
src_path=file_path,
mode=self._transfer_type,
mode=transfer_type,
meta=file_meta
)
return
@@ -274,15 +288,23 @@ class DirMonitor(_PluginBase):
# 更新媒体图片
self.chain.obtain_images(mediainfo=mediainfo)
# 获取集数据
if mediainfo.type == MediaType.TV:
episodes_info = self.tmdbchain.tmdb_episodes(tmdbid=mediainfo.tmdb_id,
season=file_meta.begin_season or 1)
else:
episodes_info = None
# 获取downloadhash
download_hash = self.get_download_hash(src=str(file_path))
# 转移
transferinfo: TransferInfo = self.chain.transfer(mediainfo=mediainfo,
path=file_path,
transfer_type=self._transfer_type,
transfer_type=transfer_type,
target=target,
meta=file_meta)
meta=file_meta,
episodes_info=episodes_info)
if not transferinfo:
logger.error("文件转移模块运行失败")
@@ -293,7 +315,7 @@ class DirMonitor(_PluginBase):
# 新增转移失败历史记录
self.transferhis.add_fail(
src_path=file_path,
mode=self._transfer_type,
mode=transfer_type,
download_hash=download_hash,
meta=file_meta,
mediainfo=mediainfo,
@@ -310,7 +332,7 @@ class DirMonitor(_PluginBase):
# 新增转移成功历史记录
self.transferhis.add_success(
src_path=file_path,
mode=self._transfer_type,
mode=transfer_type,
download_hash=download_hash,
meta=file_meta,
mediainfo=mediainfo,
@@ -392,7 +414,7 @@ class DirMonitor(_PluginBase):
})
# 移动模式删除空目录
if self._transfer_type == "move":
if transfer_type == "move":
for file_dir in file_path.parents:
if len(str(file_dir)) <= len(str(Path(mon_path))):
# 重要,删除到监控目录为止
@@ -591,9 +613,10 @@ class DirMonitor(_PluginBase):
'model': 'monitor_dirs',
'label': '监控目录',
'rows': 5,
'placeholder': '每一行一个目录,支持两种配置方式:\n'
'placeholder': '每一行一个目录,支持三种配置方式:\n'
'监控目录\n'
'监控目录:转移目的目录(需同时在媒体库目录中配置该目的目录)'
'监控目录:转移目的目录(需同时在媒体库目录中配置该目的目录)\n'
'监控目录:转移目的目录#转移方式(move|copy|link|softlink)'
}
}
]

View File

@@ -4,6 +4,7 @@ import xml.dom.minidom
from threading import Event
from typing import Tuple, List, Dict, Any, Optional
import pytz
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.cron import CronTrigger
@@ -100,11 +101,19 @@ class DoubanRank(_PluginBase):
logger.error(f"豆瓣榜单订阅服务启动失败,错误信息:{str(e)}")
self.systemmessage.put(f"豆瓣榜单订阅服务启动失败,错误信息:{str(e)}")
else:
self._scheduler.add_job(func=self.__refresh_rss,
trigger=CronTrigger.from_crontab("0 8 * * *"),
name="豆瓣榜单订阅")
self._scheduler.add_job(func=self.__refresh_rss, trigger='date',
run_date=datetime.datetime.now(
tz=pytz.timezone(settings.TZ)) + datetime.timedelta(seconds=3)
)
logger.info("豆瓣榜单订阅服务启动,周期:每天 08:00")
if self._onlyonce:
logger.info("豆瓣榜单订阅服务启动,立即运行一次")
self._scheduler.add_job(func=self.__refresh_rss, trigger='date',
run_date=datetime.datetime.now(
tz=pytz.timezone(settings.TZ)) + datetime.timedelta(seconds=3)
)
if self._onlyonce or self._clear:
# 关闭一次性开关
self._onlyonce = False

View File

@@ -0,0 +1,320 @@
from apscheduler.schedulers.background import BackgroundScheduler
from app.chain.download import DownloadChain
from app.chain.media import MediaChain
from app.core.config import settings
from app.db.downloadhistory_oper import DownloadHistoryOper
from app.plugins import _PluginBase
from typing import Any, List, Dict, Tuple, Optional, Union
from app.log import logger
from app.schemas import NotificationType, TransferTorrent, DownloadingTorrent
from app.schemas.types import TorrentStatus, MessageChannel
from app.utils.string import StringUtils
class DownloadingMsg(_PluginBase):
# 插件名称
plugin_name = "下载进度推送"
# 插件描述
plugin_desc = "定时推送正在下载进度。"
# 插件图标
plugin_icon = "downloadmsg.png"
# 主题色
plugin_color = "#3DE75D"
# 插件版本
plugin_version = "1.0"
# 插件作者
plugin_author = "thsrite"
# 作者主页
author_url = "https://github.com/thsrite"
# 插件配置项ID前缀
plugin_config_prefix = "downloading_"
# 加载顺序
plugin_order = 22
# 可使用的用户级别
auth_level = 2
# 私有属性
_enabled = False
# 任务执行间隔
_seconds = None
_type = None
_adminuser = None
_downloadhis = None
# 定时器
_scheduler: Optional[BackgroundScheduler] = None
def init_plugin(self, config: dict = None):
# 停止现有任务
self.stop_service()
if config:
self._enabled = config.get("enabled")
self._seconds = config.get("seconds") or 300
self._type = config.get("type") or 'admin'
self._adminuser = config.get("adminuser")
# 加载模块
if self._enabled:
self._downloadhis = DownloadHistoryOper(self.db)
# 定时服务
self._scheduler = BackgroundScheduler(timezone=settings.TZ)
if self._seconds:
try:
self._scheduler.add_job(func=self.__downloading,
trigger='interval',
seconds=int(self._seconds),
name="下载进度推送")
except Exception as err:
logger.error(f"定时任务配置错误:{err}")
# 启动任务
if self._scheduler.get_jobs():
self._scheduler.print_jobs()
self._scheduler.start()
def __downloading(self):
"""
定时推送正在下载进度
"""
# 正在下载种子
torrents = DownloadChain(self.db).list_torrents(status=TorrentStatus.DOWNLOADING)
if not torrents:
logger.info("当前没有正在下载的任务!")
return
# 推送用户
if self._type == "admin" or self._type == "both":
if not self._adminuser:
logger.error("未配置管理员用户")
return
for userid in str(self._adminuser).split(","):
self.__send_msg(torrents=torrents, userid=userid)
if self._type == "user" or self._type == "both":
user_torrents = {}
# 根据正在下载种子hash获取下载历史
for torrent in torrents:
downloadhis = self._downloadhis.get_by_hash(download_hash=torrent.hash)
if not downloadhis:
logger.warn(f"种子 {torrent.hash} 未获取到MoviePilot下载历史,无法推送下载进度")
continue
if not downloadhis.userid:
logger.debug(f"种子 {torrent.hash} 未获取到下载用户记录,无法推送下载进度")
continue
user_torrent = user_torrents.get(downloadhis.userid) or []
user_torrent.append(torrent)
user_torrents[downloadhis.userid] = user_torrent
if not user_torrents or not user_torrents.keys():
logger.warn("未获取到用户下载记录,无法推送下载进度")
return
# 推送用户下载任务进度
for userid in list(user_torrents.keys()):
if not userid:
continue
# 如果用户是管理员,无需重复推送
if self._type == "admin" or self._type == "both" and self._adminuser and userid in str(
self._adminuser).split(","):
logger.debug("管理员已推送")
continue
user_torrent = user_torrents.get(userid)
if not user_torrent:
logger.warn(f"未获取到用户 {userid} 下载任务")
continue
self.__send_msg(torrents=user_torrent,
userid=userid)
if self._type == "all":
self.__send_msg(torrents=torrents)
def __send_msg(self, torrents: Optional[List[Union[TransferTorrent, DownloadingTorrent]]], userid: str = None):
"""
发送消息
"""
title = f"{len(torrents)} 个任务正在下载:"
messages = []
index = 1
channel_value = None
for torrent in torrents:
year = None
name = None
se = None
ep = None
# 先查询下载记录,没有再识别
downloadhis = self._downloadhis.get_by_hash(download_hash=torrent.hash)
if downloadhis:
name = downloadhis.title
year = downloadhis.year
se = downloadhis.seasons
ep = downloadhis.episodes
if not channel_value:
channel_value = downloadhis.channel
else:
try:
context = MediaChain(self.db).recognize_by_title(title=torrent.title)
if not context or not context.media_info:
continue
media_info = context.media_info
year = media_info.year
name = media_info.title
if media_info.number_of_seasons:
se = f"S{str(media_info.number_of_seasons).rjust(2, '0')}"
if media_info.number_of_episodes:
ep = f"E{str(media_info.number_of_episodes).rjust(2, '0')}"
except Exception as e:
print(str(e))
# 拼装标题
if year:
media_name = "%s (%s) %s%s" % (name, year, se, ep)
elif name:
media_name = "%s %s%s" % (name, se, ep)
else:
media_name = torrent.title
messages.append(f"{index}. {media_name}\n"
f"{torrent.title} "
f"{StringUtils.str_filesize(torrent.size)} "
f"{round(torrent.progress, 1)}%")
index += 1
# 用户消息渠道
if channel_value:
channel = next(
(channel for channel in MessageChannel.__members__.values() if channel.value == channel_value), None)
else:
channel = None
self.post_message(mtype=NotificationType.Download,
channel=channel,
title=title,
text="\n".join(messages),
userid=userid)
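Above, the stored channel string is mapped back to a `MessageChannel` enum member with `next(...)` over `__members__.values()`, falling back to `None` for unknown values. A self-contained sketch with a stand-in enum (the member names here are illustrative, not the app's actual channel list):

```python
from enum import Enum

class MessageChannel(Enum):  # stand-in for app.schemas.types.MessageChannel
    Telegram = "telegram"
    Wechat = "wechat"

def lookup(value):
    # First member whose .value matches, else None instead of a ValueError
    return next((c for c in MessageChannel.__members__.values() if c.value == value),
                None)

print(lookup("wechat").name)  # Wechat
print(lookup("unknown"))      # None
```

`MessageChannel("wechat")` would do the same for valid values, but raises `ValueError` on unknown input, so the generator-with-default form suits data read back from history records.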
def get_state(self) -> bool:
return self._enabled
@staticmethod
def get_command() -> List[Dict[str, Any]]:
pass
def get_api(self) -> List[Dict[str, Any]]:
pass
def get_form(self) -> Tuple[List[dict], Dict[str, Any]]:
"""
拼装插件配置页面,需要返回两块数据:1、页面配置;2、数据结构
"""
return [
{
'component': 'VForm',
'content': [
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'enabled',
'label': '启用插件',
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VTextField',
'props': {
'model': 'seconds',
'label': '执行间隔',
'placeholder': '单位(秒)'
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VTextField',
'props': {
'model': 'adminuser',
'label': '管理员用户',
'placeholder': '多个用户,分割'
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSelect',
'props': {
'model': 'type',
'label': '推送类型',
'items': [
{'title': '管理员', 'value': 'admin'},
{'title': '下载用户', 'value': 'user'},
{'title': '管理员和下载用户', 'value': 'both'},
{'title': '所有用户', 'value': 'all'}
]
}
}
]
}
]
}
]
}
], {
"enabled": False,
"seconds": 300,
"adminuser": "",
"type": "admin"
}
def get_page(self) -> List[dict]:
pass
def stop_service(self):
"""
退出插件
"""
try:
if self._scheduler:
self._scheduler.remove_all_jobs()
if self._scheduler.running:
self._scheduler.shutdown()
self._scheduler = None
except Exception as e:
logger.error("退出插件失败:%s" % str(e))

View File

@@ -0,0 +1,292 @@
import json
import re
from datetime import datetime, timedelta
import pytz
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.cron import CronTrigger
from app.core.config import settings
from app.plugins import _PluginBase
from typing import Any, List, Dict, Tuple, Optional
from app.log import logger
from app.schemas import NotificationType
from app.utils.http import RequestUtils
class InvitesSignin(_PluginBase):
# 插件名称
plugin_name = "药丸签到"
# 插件描述
plugin_desc = "药丸论坛签到。"
# 插件图标
plugin_icon = "invites.png"
# 主题色
plugin_color = "#4FB647"
# 插件版本
plugin_version = "1.0"
# 插件作者
plugin_author = "thsrite"
# 作者主页
author_url = "https://github.com/thsrite"
# 插件配置项ID前缀
plugin_config_prefix = "invitessignin"
# 加载顺序
plugin_order = 24
# 可使用的用户级别
auth_level = 2
# 私有属性
_enabled = False
# 任务执行间隔
_cron = None
_cookie = None
_onlyonce = False
_notify = False
# 定时器
_scheduler: Optional[BackgroundScheduler] = None
def init_plugin(self, config: dict = None):
# 停止现有任务
self.stop_service()
if config:
self._enabled = config.get("enabled")
self._cron = config.get("cron")
self._cookie = config.get("cookie")
self._notify = config.get("notify")
self._onlyonce = config.get("onlyonce")
# 加载模块
if self._enabled:
# 定时服务
self._scheduler = BackgroundScheduler(timezone=settings.TZ)
if self._cron:
try:
self._scheduler.add_job(func=self.__signin,
trigger=CronTrigger.from_crontab(self._cron),
name="药丸签到")
except Exception as err:
logger.error(f"定时任务配置错误:{err}")
if self._onlyonce:
logger.info(f"药丸签到服务启动,立即运行一次")
self._scheduler.add_job(func=self.__signin, trigger='date',
run_date=datetime.now(tz=pytz.timezone(settings.TZ)) + timedelta(seconds=3),
name="药丸签到")
# 关闭一次性开关
self._onlyonce = False
self.update_config({
"onlyonce": False,
"cron": self._cron,
"enabled": self._enabled,
"cookie": self._cookie,
"notify": self._notify,
})
# 启动任务
if self._scheduler.get_jobs():
self._scheduler.print_jobs()
self._scheduler.start()
def __signin(self):
"""
药丸签到
"""
res = RequestUtils(cookies=self._cookie).get_res(url="https://invites.fun")
if not res or res.status_code != 200:
logger.error("请求药丸错误")
return
# 获取csrfToken
pattern = r'"csrfToken":"(.*?)"'
csrfToken = re.findall(pattern, res.text)
if not csrfToken:
logger.error("请求csrfToken失败")
return
csrfToken = csrfToken[0]
logger.info(f"获取csrfToken成功 {csrfToken}")
# 获取userid
pattern = r'"userId":(\d+)'
match = re.search(pattern, res.text)
if match:
userId = match.group(1)
logger.info(f"获取userid成功 {userId}")
else:
logger.error("未找到userId")
return
headers = {
"X-Csrf-Token": csrfToken,
"X-Http-Method-Override": "PATCH",
"Cookie": self._cookie
}
data = {
"data": {
"type": "users",
"attributes": {
"canCheckin": False,
"totalContinuousCheckIn": 2
},
"id": userId
}
}
# 开始签到
res = RequestUtils(headers=headers).post_res(url=f"https://invites.fun/api/users/{userId}", json=data)
if not res or res.status_code != 200:
logger.error("药丸签到失败")
return
sign_dict = json.loads(res.text)
money = sign_dict['data']['attributes']['money']
totalContinuousCheckIn = sign_dict['data']['attributes']['totalContinuousCheckIn']
# 发送通知
if self._notify:
self.post_message(
mtype=NotificationType.SiteMessage,
title="【药丸签到任务完成】",
text=f"累计签到 {totalContinuousCheckIn} \n"
f"剩余药丸 {money}")
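The sign-in flow above scrapes `csrfToken` and `userId` out of the forum page with two regexes before issuing the check-in request. A quick standalone check of those patterns against a fabricated sample payload (the JSON snippet is illustrative, not a real invites.fun response):

```python
import re

# Shaped like the page's embedded bootstrap JSON, values invented
html = '{"csrfToken":"abc123","userId":42,"canCheckin":true}'

token_match = re.findall(r'"csrfToken":"(.*?)"', html)
csrf_token = token_match[0] if token_match else None

user_match = re.search(r'"userId":(\d+)', html)
user_id = user_match.group(1) if user_match else None

print(csrf_token, user_id)  # abc123 42
```

Note both captures come back as strings; the plugin interpolates `user_id` straight into the PATCH URL, so no conversion is needed.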
def get_state(self) -> bool:
return self._enabled
@staticmethod
def get_command() -> List[Dict[str, Any]]:
pass
def get_api(self) -> List[Dict[str, Any]]:
pass
def get_form(self) -> Tuple[List[dict], Dict[str, Any]]:
"""
拼装插件配置页面,需要返回两块数据:1、页面配置;2、数据结构
"""
return [
{
'component': 'VForm',
'content': [
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'enabled',
'label': '启用插件',
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'notify',
'label': '开启通知',
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 4
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'onlyonce',
'label': '立即运行一次',
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VTextField',
'props': {
'model': 'cron',
'label': '签到周期'
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VTextField',
'props': {
'model': 'cookie',
'label': '药丸cookie'
}
}
]
}
]
}
]
}
], {
"enabled": False,
"onlyonce": False,
"notify": False,
"cookie": "",
"cron": "0 9 * * *"
}
def get_page(self) -> List[dict]:
pass
def stop_service(self):
"""
退出插件
"""
try:
if self._scheduler:
self._scheduler.remove_all_jobs()
if self._scheduler.running:
self._scheduler.shutdown()
self._scheduler = None
except Exception as e:
logger.error("退出插件失败:%s" % str(e))

View File

@@ -109,7 +109,7 @@ class IYUUAutoSeed(_PluginBase):
self._nolabels = config.get("nolabels")
self._nopaths = config.get("nopaths")
self._clearcache = config.get("clearcache")
self._permanent_error_caches = config.get("permanent_error_caches") or []
self._permanent_error_caches = [] if self._clearcache else config.get("permanent_error_caches") or []
self._error_caches = [] if self._clearcache else config.get("error_caches") or []
self._success_caches = [] if self._clearcache else config.get("success_caches") or []

View File

@@ -25,9 +25,9 @@ from app.schemas.types import NotificationType, EventType, MediaType
class MediaSyncDel(_PluginBase):
# 插件名称
plugin_name = "媒体同步删除"
plugin_name = "媒体文件同步删除"
# 插件描述
plugin_desc = "媒体库删除媒体后同步删除历史记录、源文件和下载任务。"
plugin_desc = "同步删除历史记录、源文件和下载任务。"
# 插件图标
plugin_icon = "mediasyncdel.png"
# 主题色
@@ -187,9 +187,9 @@ class MediaSyncDel(_PluginBase):
'component': 'VSelect',
'props': {
'model': 'sync_type',
'label': '同步方式',
'label': '媒体库同步方式',
'items': [
{'title': 'webhook', 'value': 'webhook'},
{'title': 'Webhook', 'value': 'webhook'},
{'title': '日志', 'value': 'log'},
{'title': 'Scripter X', 'value': 'plugin'}
]
@@ -208,7 +208,7 @@ class MediaSyncDel(_PluginBase):
'component': 'VTextField',
'props': {
'model': 'cron',
'label': '执行周期',
'label': '日志检查周期',
'placeholder': '5位cron表达式,留空自动'
}
}
@@ -246,7 +246,7 @@ class MediaSyncDel(_PluginBase):
'props': {
'model': 'library_path',
'rows': '2',
'label': '媒体库路径',
'label': '媒体库路径映射',
'placeholder': '媒体服务器路径:MoviePilot路径,一行一个'
}
}
@@ -266,11 +266,11 @@ class MediaSyncDel(_PluginBase):
{
'component': 'VAlert',
'props': {
'text': '同步方式分为webhook、日志同步和Scripter X:'
'webhook需要Emby4.8.0.45及以上,开启媒体删除的webhook'
'(建议使用媒体库刮削插件覆盖元数据,重新刮削剧集路径);'
'日志同步需要配置执行周期,默认30分钟执行一次;'
'Scripter X方式需要emby安装并配置Scripter X插件,无需配置执行周期'
'text': '媒体库同步方式分为Webhook、日志同步和Scripter X:'
'1、Webhook需要Emby4.8.0.45及以上,开启媒体删除的Webhook;'
'2、日志同步需要配置检查周期,默认30分钟执行一次;'
'3、Scripter X方式需要emby安装并配置Scripter X插件,无需配置执行周期;'
'4、启用该插件后,非媒体服务器触发的源文件删除,也会同步处理下载器中的下载任务'
}
}
]
@@ -673,6 +673,8 @@ class MediaSyncDel(_PluginBase):
paths = self._library_path.split("\n")
for path in paths:
sub_paths = path.split(":")
if len(sub_paths) < 2:
continue
media_path = media_path.replace(sub_paths[0], sub_paths[1]).replace('\\', '/')
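The `len(sub_paths) < 2` guard added above keeps a malformed mapping line from raising an `IndexError`. A sketch of the whole mapping step with a hypothetical helper name — the `server:MoviePilot` one-per-line format matches the plugin's `library_path` setting:

```python
def map_library_path(media_path: str, library_path_conf: str) -> str:
    """Apply '<server path>:<MoviePilot path>' mappings, one per line."""
    for line in library_path_conf.split("\n"):
        sub_paths = line.split(":")
        if len(sub_paths) < 2:
            continue  # skip malformed lines instead of crashing
        media_path = media_path.replace(sub_paths[0], sub_paths[1]).replace("\\", "/")
    return media_path

print(map_library_path("/data/Movies/Heat (1995)", "/data/Movies:/media/movies"))
# /media/movies/Heat (1995)
```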
# 删除电影
@@ -739,7 +741,11 @@ class MediaSyncDel(_PluginBase):
return
# 遍历删除
last_del_time = None
for del_media in del_medias:
# 删除时间
del_time = del_media.get("time")
last_del_time = del_time
# 媒体类型 Movie|Series|Season|Episode
media_type = del_media.get("type")
# 媒体名称 蜀山战纪
@@ -765,6 +771,8 @@ class MediaSyncDel(_PluginBase):
paths = self._library_path.split("\n")
for path in paths:
sub_paths = path.split(":")
if len(sub_paths) < 2:
continue
media_path = media_path.replace(sub_paths[0], sub_paths[1]).replace('\\', '/')
# 获取删除的记录
@@ -877,7 +885,7 @@ class MediaSyncDel(_PluginBase):
# 保存历史
self.save_data("history", history)
self.save_data("last_time", datetime.datetime.now())
self.save_data("last_time", last_del_time or datetime.datetime.now())
def handle_torrent(self, src: str, torrent_hash: str):
"""
@@ -1043,146 +1051,207 @@ class MediaSyncDel(_PluginBase):
@staticmethod
def parse_emby_log(last_time):
log_url = "{HOST}System/Logs/embyserver.txt?api_key={APIKEY}"
log_res = Emby().get_data(log_url)
if not log_res or log_res.status_code != 200:
logger.error("获取emby日志失败,请检查服务器配置")
return []
"""
获取emby日志列表、解析emby日志
"""
# 正则解析删除的媒体信息
pattern = r'(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}.\d{3}) Info App: Removing item from database, Type: (\w+), Name: (.*), Path: (.*), Id: (\d+)'
matches = re.findall(pattern, log_res.text)
def __parse_log(file_name: str, del_list: list):
"""
解析emby日志
"""
log_url = f"[HOST]System/Logs/{file_name}?api_key=[APIKEY]"
log_res = Emby().get_data(log_url)
if not log_res or log_res.status_code != 200:
logger.error("获取emby日志失败,请检查服务器配置")
return del_list
# 正则解析删除的媒体信息
pattern = r'(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}.\d{3}) Info App: Removing item from database, Type: (\w+), Name: (.*), Path: (.*), Id: (\d+)'
matches = re.findall(pattern, log_res.text)
# 循环获取媒体信息
for match in matches:
mtime = match[0]
# 排除已处理的媒体信息
if last_time and mtime < last_time:
continue
mtype = match[1]
name = match[2]
path = match[3]
year = None
year_pattern = r'\(\d+\)'
year_match = re.search(year_pattern, path)
if year_match:
year = year_match.group()[1:-1]
season = None
episode = None
if mtype == 'Episode' or mtype == 'Season':
name_pattern = r"\/([\u4e00-\u9fa5]+)(?= \()"
season_pattern = r"Season\s*(\d+)"
episode_pattern = r"S\d+E(\d+)"
name_match = re.search(name_pattern, path)
season_match = re.search(season_pattern, path)
episode_match = re.search(episode_pattern, path)
if name_match:
name = name_match.group(1)
if season_match:
season = season_match.group(1)
if int(season) < 10:
season = f'S0{season}'
else:
season = f'S{season}'
else:
season = None
if episode_match:
episode = episode_match.group(1)
episode = f'E{episode}'
else:
episode = None
media = {
"time": mtime,
"type": mtype,
"name": name,
"year": year,
"path": path,
"season": season,
"episode": episode,
}
logger.debug(f"解析到删除媒体:{json.dumps(media)}")
del_list.append(media)
return del_list
log_files = []
try:
# 获取所有emby日志
log_list_url = "[HOST]System/Logs/Query?Limit=3&api_key=[APIKEY]"
log_list_res = Emby().get_data(log_list_url)
if log_list_res and log_list_res.status_code == 200:
log_files_dict = json.loads(log_list_res.text)
for item in log_files_dict.get("Items"):
if str(item.get('Name')).startswith("embyserver"):
log_files.append(str(item.get('Name')))
except Exception as e:
print(str(e))
if not log_files:
log_files.append("embyserver.txt")
del_medias = []
# 循环获取媒体信息
for match in matches:
mtime = match[0]
# 排除已处理的媒体信息
if last_time and mtime < last_time:
continue
mtype = match[1]
name = match[2]
path = match[3]
year = None
year_pattern = r'\(\d+\)'
year_match = re.search(year_pattern, path)
if year_match:
year = year_match.group()[1:-1]
season = None
episode = None
if mtype == 'Episode' or mtype == 'Season':
name_pattern = r"\/([\u4e00-\u9fa5]+)(?= \()"
season_pattern = r"Season\s*(\d+)"
episode_pattern = r"S\d+E(\d+)"
name_match = re.search(name_pattern, path)
season_match = re.search(season_pattern, path)
episode_match = re.search(episode_pattern, path)
if name_match:
name = name_match.group(1)
if season_match:
season = season_match.group(1)
if int(season) < 10:
season = f'S0{season}'
else:
season = f'S{season}'
else:
season = None
if episode_match:
episode = episode_match.group(1)
episode = f'E{episode}'
else:
episode = None
media = {
"time": mtime,
"type": mtype,
"name": name,
"year": year,
"path": path,
"season": season,
"episode": episode,
}
logger.debug(f"解析到删除媒体:{json.dumps(media)}")
del_medias.append(media)
log_files.reverse()
for log_file in log_files:
del_medias = __parse_log(log_file, del_medias)
return del_medias
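The Emby parser above hinges on one regex over `Removing item from database` log lines. A quick check of that exact pattern against a fabricated sample line (the media name and id are invented):

```python
import re

line = ("2023-10-09 10:00:00.123 Info App: Removing item from database, "
        "Type: Movie, Name: Heat, Path: /media/Heat (1995)/Heat.mkv, Id: 101")
pattern = (r'(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}.\d{3}) Info App: '
           r'Removing item from database, Type: (\w+), Name: (.*), '
           r'Path: (.*), Id: (\d+)')
for mtime, mtype, name, path, item_id in re.findall(pattern, line):
    print(mtime, mtype, name, item_id)
```

The greedy `(.*)` captures for `Name` and `Path` work here because each line contains the `, Path: ` and `, Id: ` literals exactly once, so backtracking lands on the right boundaries even when the path itself contains commas or parentheses.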
@staticmethod
def parse_jellyfin_log(last_time: datetime):
# 根据加入日期 降序排序
log_url = "{HOST}System/Logs/Log?name=log_%s.log&api_key={APIKEY}" % datetime.date.today().strftime("%Y%m%d")
log_res = Jellyfin().get_data(log_url)
if not log_res or log_res.status_code != 200:
logger.error("获取jellyfin日志失败,请检查服务器配置")
return []
"""
获取jellyfin日志列表、解析jellyfin日志
"""
# 正则解析删除的媒体信息
pattern = r'\[(.*?)\].*?Removing item, Type: "(.*?)", Name: "(.*?)", Path: "(.*?)"'
matches = re.findall(pattern, log_res.text)
def __parse_log(file_name: str, del_list: list):
"""
解析jellyfin日志
"""
log_url = f"[HOST]System/Logs/Log?name={file_name}&api_key=[APIKEY]"
log_res = Jellyfin().get_data(log_url)
if not log_res or log_res.status_code != 200:
logger.error("获取jellyfin日志失败,请检查服务器配置")
return del_list
# 正则解析删除的媒体信息
pattern = r'\[(.*?)\].*?Removing item, Type: "(.*?)", Name: "(.*?)", Path: "(.*?)"'
matches = re.findall(pattern, log_res.text)
# 循环获取媒体信息
for match in matches:
mtime = match[0]
# 排除已处理的媒体信息
if last_time and mtime < last_time:
continue
mtype = match[1]
name = match[2]
path = match[3]
year = None
year_pattern = r'\(\d+\)'
year_match = re.search(year_pattern, path)
if year_match:
year = year_match.group()[1:-1]
season = None
episode = None
if mtype == 'Episode' or mtype == 'Season':
name_pattern = r"\/([\u4e00-\u9fa5]+)(?= \()"
season_pattern = r"Season\s*(\d+)"
episode_pattern = r"S\d+E(\d+)"
name_match = re.search(name_pattern, path)
season_match = re.search(season_pattern, path)
episode_match = re.search(episode_pattern, path)
if name_match:
name = name_match.group(1)
if season_match:
season = season_match.group(1)
if int(season) < 10:
season = f'S0{season}'
else:
season = f'S{season}'
else:
season = None
if episode_match:
episode = episode_match.group(1)
episode = f'E{episode}'
else:
episode = None
media = {
"time": mtime,
"type": mtype,
"name": name,
"year": year,
"path": path,
"season": season,
"episode": episode,
}
logger.debug(f"解析到删除媒体:{json.dumps(media)}")
del_list.append(media)
return del_list
log_files = []
try:
# 获取所有jellyfin日志
log_list_url = "[HOST]System/Logs?api_key=[APIKEY]"
log_list_res = Jellyfin().get_data(log_list_url)
if log_list_res and log_list_res.status_code == 200:
log_files_dict = json.loads(log_list_res.text)
for item in log_files_dict:
if str(item.get('Name')).startswith("log_"):
log_files.append(str(item.get('Name')))
except Exception as e:
print(str(e))
if not log_files:
log_files.append("log_%s.log" % datetime.date.today().strftime("%Y%m%d"))
del_medias = []
# 循环获取媒体信息
for match in matches:
mtime = match[0]
# 排除已处理的媒体信息
if last_time and mtime < last_time:
continue
mtype = match[1]
name = match[2]
path = match[3]
year = None
year_pattern = r'\(\d+\)'
year_match = re.search(year_pattern, path)
if year_match:
year = year_match.group()[1:-1]
season = None
episode = None
if mtype == 'Episode' or mtype == 'Season':
name_pattern = r"\/([\u4e00-\u9fa5]+)(?= \()"
season_pattern = r"Season\s*(\d+)"
episode_pattern = r"S\d+E(\d+)"
name_match = re.search(name_pattern, path)
season_match = re.search(season_pattern, path)
episode_match = re.search(episode_pattern, path)
if name_match:
name = name_match.group(1)
if season_match:
season = season_match.group(1)
season = f'S{int(season):02d}'
else:
season = None
if episode_match:
episode = episode_match.group(1)
episode = f'E{episode}'
else:
episode = None
media = {
"time": mtime,
"type": mtype,
"name": name,
"year": year,
"path": path,
"season": season,
"episode": episode,
}
logger.debug(f"解析到删除媒体:{json.dumps(media)}")
del_medias.append(media)
log_files.reverse()
for log_file in log_files:
del_medias = __parse_log(log_file, del_medias)
return del_medias
@@ -1202,21 +1271,25 @@ class MediaSyncDel(_PluginBase):
except Exception as e:
logger.error("退出插件失败:%s" % str(e))
@eventmanager.register(EventType.MediaDeleted)
def remote_sync_del(self, event: Event):
@eventmanager.register(EventType.DownloadFileDeleted)
def downloadfile_del_sync(self, event: Event):
"""
媒体库同步删除
下载文件删除处理事件
"""
if event:
logger.info("收到命令,开始执行媒体库同步删除 ...")
self.post_message(channel=event.event_data.get("channel"),
title="开始媒体库同步删除 ...",
userid=event.event_data.get("user"))
self.sync_del_by_log()
if event:
self.post_message(channel=event.event_data.get("channel"),
title="媒体库同步删除完成!", userid=event.event_data.get("user"))
if not self._enabled:
return
if not event:
return
event_data = event.event_data
src = event_data.get("src")
if not src:
return
# 查询下载hash
download_hash = self._downloadhis.get_hash_by_fullpath(src)
if download_hash:
self.handle_torrent(src=src, torrent_hash=download_hash)
else:
logger.warn(f"未查询到文件 {src} 对应的下载记录")
@staticmethod
def get_tmdbimage_url(path: str, prefix="w500"):


@@ -325,6 +325,9 @@ class MessageForward(_PluginBase):
logger.info(f"转发消息 {title} 成功")
return True
else:
if ret_json.get('errcode') == 81013:
return False
logger.error(f"转发消息 {title} 失败,错误信息:{ret_json}")
if ret_json.get('errcode') == 42001 or ret_json.get('errcode') == 40014:
logger.info("token已过期,正在重新刷新token重试")
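The errcode handling above — 81013 aborts, while 42001/40014 trigger a token refresh and a retry — can be sketched with a hypothetical `send` callable standing in for the WeChat Work API call:

```python
def forward_with_retry(send, refresh_token, payload) -> bool:
    """Retry once on an expired/invalid access_token; give up on a bad recipient."""
    ret = send(payload)
    code = ret.get("errcode")
    if code == 0:
        return True
    if code == 81013:           # recipient does not exist: retrying cannot help
        return False
    if code in (42001, 40014):  # access_token expired / invalid: refresh and retry
        refresh_token()
        return send(payload).get("errcode") == 0
    return False

# Demo with fakes: the first call reports an expired token, the retry succeeds
calls = {"sent": 0, "refreshed": 0}
def fake_send(_payload):
    calls["sent"] += 1
    return {"errcode": 42001} if calls["sent"] == 1 else {"errcode": 0}
def fake_refresh():
    calls["refreshed"] += 1
print(forward_with_retry(fake_send, fake_refresh, {}))  # → True
```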


@@ -3,6 +3,7 @@ import os
import sqlite3
from datetime import datetime
from app.core.config import settings
from app.db.downloadhistory_oper import DownloadHistoryOper
from app.db.plugindata_oper import PluginDataOper
from app.db.transferhistory_oper import TransferHistoryOper
@@ -157,15 +158,16 @@ class NAStoolSync(_PluginBase):
# 替换value
if isinstance(plugin_value, str):
plugin_value = json.loads(plugin_value)
if str(plugin_value.get("to_download")).isdigit() and int(
plugin_value.get("to_download")) == int(sub_downloaders[0]):
plugin_value["to_download"] = sub_downloaders[1]
_value: dict = json.loads(plugin_value)
elif isinstance(plugin_value, dict):
if str(plugin_value.get("to_download")).isdigit() and int(
plugin_value.get("to_download")) == int(sub_downloaders[0]):
plugin_value["to_download"] = sub_downloaders[1]
# 替换辅种记录
if str(plugin_id) == "IYUUAutoSeed":
if isinstance(plugin_value, str):
plugin_value = json.loads(plugin_value)
plugin_value: list = json.loads(plugin_value)
if not isinstance(plugin_value, list):
plugin_value = [plugin_value]
for value in plugin_value:
@@ -213,6 +215,7 @@ class NAStoolSync(_PluginBase):
mtorrent = history[9]
mdesc = history[10]
msite = history[11]
mdate = history[12]
# 处理站点映射
if self._site:
@@ -234,7 +237,9 @@ class NAStoolSync(_PluginBase):
download_hash=mdownload_hash,
torrent_name=mtorrent,
torrent_description=mdesc,
torrent_site=msite
torrent_site=msite,
userid=settings.SUPERUSER,
date=mdate
)
cnt += 1
if cnt % 100 == 0:
@@ -358,7 +363,8 @@ class NAStoolSync(_PluginBase):
DOWNLOAD_ID,
TORRENT,
DESC,
SITE
SITE,
DATE
FROM
DOWNLOAD_HISTORY
WHERE


@@ -0,0 +1,980 @@
import base64
import copy
import datetime
import json
import re
import threading
import time
from pathlib import Path
from typing import Any, List, Dict, Tuple, Optional
import pytz
import zhconv
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.cron import CronTrigger
from requests import RequestException
from app.chain.mediaserver import MediaServerChain
from app.chain.tmdb import TmdbChain
from app.core.config import settings
from app.core.event import eventmanager, Event
from app.core.meta import MetaBase
from app.log import logger
from app.modules.emby import Emby
from app.modules.jellyfin import Jellyfin
from app.modules.plex import Plex
from app.plugins import _PluginBase
from app.schemas import MediaInfo, MediaServerItem
from app.schemas.types import EventType, MediaType
from app.utils.common import retry
from app.utils.http import RequestUtils
from app.utils.string import StringUtils
class PersonMeta(_PluginBase):
# 插件名称
plugin_name = "演职人员刮削"
# 插件描述
plugin_desc = "刮削演职人员图片以及中文名称。"
# 插件图标
plugin_icon = "actor.png"
# 主题色
plugin_color = "#E66E72"
# 插件版本
plugin_version = "1.0"
# 插件作者
plugin_author = "jxxghp"
# 作者主页
author_url = "https://github.com/jxxghp"
# 插件配置项ID前缀
plugin_config_prefix = "personmeta_"
# 加载顺序
plugin_order = 24
# 可使用的用户级别
auth_level = 1
# 退出事件
_event = threading.Event()
# 私有属性
_scheduler = None
tmdbchain = None
mschain = None
_enabled = False
_onlyonce = False
_cron = None
_delay = 0
_remove_nozh = False
def init_plugin(self, config: dict = None):
self.tmdbchain = TmdbChain(self.db)
self.mschain = MediaServerChain(self.db)
if config:
self._enabled = config.get("enabled")
self._onlyonce = config.get("onlyonce")
self._cron = config.get("cron")
self._delay = config.get("delay") or 0
self._remove_nozh = config.get("remove_nozh") or False
# 停止现有任务
self.stop_service()
# 启动服务
if self._enabled or self._onlyonce:
self._scheduler = BackgroundScheduler(timezone=settings.TZ)
if self._cron or self._onlyonce:
if self._cron:
try:
self._scheduler.add_job(func=self.scrap_library,
trigger=CronTrigger.from_crontab(self._cron),
name="演职人员刮削")
logger.info(f"演职人员刮削服务启动,周期:{self._cron}")
except Exception as e:
logger.error(f"演职人员刮削服务启动失败,错误信息:{str(e)}")
self.systemmessage.put(f"演职人员刮削服务启动失败,错误信息:{str(e)}")
if self._onlyonce:
self._scheduler.add_job(func=self.scrap_library, trigger='date',
run_date=datetime.datetime.now(
tz=pytz.timezone(settings.TZ)) + datetime.timedelta(seconds=3)
)
logger.info(f"演职人员刮削服务启动,立即运行一次")
# 关闭一次性开关
self._onlyonce = False
# 保存配置
self.__update_config()
if self._scheduler.get_jobs():
# 启动服务
self._scheduler.print_jobs()
self._scheduler.start()
def __update_config(self):
"""
更新配置
"""
self.update_config({
"enabled": self._enabled,
"onlyonce": self._onlyonce,
"cron": self._cron,
"delay": self._delay,
"remove_nozh": self._remove_nozh
})
def get_state(self) -> bool:
return self._enabled
@staticmethod
def get_command() -> List[Dict[str, Any]]:
pass
def get_api(self) -> List[Dict[str, Any]]:
pass
def get_form(self) -> Tuple[List[dict], Dict[str, Any]]:
"""
拼装插件配置页面,需要返回两块数据:1、页面配置;2、数据结构
"""
return [
{
'component': 'VForm',
'content': [
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'enabled',
'label': '启用插件',
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'onlyonce',
'label': '立即运行一次',
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VTextField',
'props': {
'model': 'cron',
'label': '媒体库扫描周期',
'placeholder': '5位cron表达式'
}
}
]
},
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VTextField',
'props': {
'model': 'delay',
'label': '入库延迟时间(秒)',
'placeholder': '30'
}
}
]
}
]
},
{
'component': 'VRow',
'content': [
{
'component': 'VCol',
'props': {
'cols': 12,
'md': 6
},
'content': [
{
'component': 'VSwitch',
'props': {
'model': 'remove_nozh',
'label': '删除非中文演员',
}
}
]
}
]
}
]
}
], {
"enabled": False,
"onlyonce": False,
"cron": "",
"delay": 30,
"remove_nozh": False
}
def get_page(self) -> List[dict]:
pass
@eventmanager.register(EventType.TransferComplete)
def scrap_rt(self, event: Event):
"""
根据事件实时刮削演员信息
"""
if not self._enabled:
return
# 事件数据
mediainfo: MediaInfo = event.event_data.get("mediainfo")
meta: MetaBase = event.event_data.get("meta")
if not mediainfo or not meta:
return
# 延迟
if self._delay:
time.sleep(int(self._delay))
# 查询媒体服务器中的条目
existsinfo = self.chain.media_exists(mediainfo=mediainfo)
if not existsinfo or not existsinfo.itemid:
logger.warn(f"演职人员刮削 {mediainfo.title_year} 在媒体库中不存在")
return
# 查询条目详情
iteminfo = self.mschain.iteminfo(server=existsinfo.server, item_id=existsinfo.itemid)
if not iteminfo:
logger.warn(f"演职人员刮削 {mediainfo.title_year} 条目详情获取失败")
return
# 刮削演职人员信息
self.__update_item(server=existsinfo.server, item=iteminfo,
mediainfo=mediainfo, season=meta.begin_season)
def scrap_library(self):
"""
扫描整个媒体库,刮削演员信息
"""
# 所有媒体服务器
if not settings.MEDIASERVER:
return
for server in settings.MEDIASERVER.split(","):
# 扫描所有媒体库
logger.info(f"开始刮削服务器 {server} 的演员信息 ...")
for library in self.mschain.librarys(server):
logger.info(f"开始刮削媒体库 {library.name} 的演员信息 ...")
for item in self.mschain.items(server, library.id):
if not item:
continue
if not item.item_id:
continue
if "Series" not in item.item_type \
and "Movie" not in item.item_type:
continue
if self._event.is_set():
logger.info(f"演职人员刮削服务停止")
return
# 处理条目
logger.info(f"开始刮削 {item.title} 的演员信息 ...")
self.__update_item(server=server, item=item)
logger.info(f"{item.title} 的演员信息刮削完成")
logger.info(f"媒体库 {library.name} 的演员信息刮削完成")
logger.info(f"服务器 {server} 的演员信息刮削完成")
def __update_peoples(self, server: str, itemid: str, iteminfo: dict, douban_actors):
# 处理媒体项中的人物信息
"""
"People": [
{
"Name": "丹尼尔·克雷格",
"Id": "33625",
"Role": "James Bond",
"Type": "Actor",
"PrimaryImageTag": "bef4f764540f10577f804201d8d27918"
}
]
"""
peoples = []
# 更新当前媒体项人物
for people in iteminfo["People"] or []:
if self._event.is_set():
logger.info(f"演职人员刮削服务停止")
return
if not people.get("Name"):
continue
if StringUtils.is_chinese(people.get("Name")) \
and StringUtils.is_chinese(people.get("Role")):
peoples.append(people)
continue
info = self.__update_people(server=server, people=people,
douban_actors=douban_actors)
if info:
peoples.append(info)
elif not self._remove_nozh:
peoples.append(people)
# 保存媒体项信息
if peoples:
iteminfo["People"] = peoples
self.set_iteminfo(server=server, itemid=itemid, iteminfo=iteminfo)
def __update_item(self, server: str, item: MediaServerItem,
mediainfo: MediaInfo = None, season: int = None):
"""
更新媒体服务器中的条目
"""
def __need_trans_actor(_item):
# 是否需要处理人物信息
_peoples = [x for x in _item.get("People", []) if
(x.get("Name") and not StringUtils.is_chinese(x.get("Name")))
or (x.get("Role") and not StringUtils.is_chinese(x.get("Role")))]
if _peoples:
return True
return False
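The `__need_trans_actor` predicate reduces to a single `any()` over the credited people. A sketch, with a regex stand-in for `StringUtils.is_chinese` (the real helper may be stricter):

```python
import re

def is_chinese(text: str) -> bool:
    # Stand-in for StringUtils.is_chinese: does the string contain CJK characters?
    return bool(text) and re.search(r"[\u4e00-\u9fa5]", text) is not None

def need_trans_actor(item: dict) -> bool:
    """True if any credited person still carries a non-Chinese name or role."""
    return any(
        (p.get("Name") and not is_chinese(p["Name"]))
        or (p.get("Role") and not is_chinese(p["Role"]))
        for p in item.get("People", [])
    )

print(need_trans_actor({"People": [{"Name": "Daniel Craig", "Role": "James Bond"}]}))  # → True
```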
# 识别媒体信息
if not mediainfo:
if not item.tmdbid:
logger.warn(f"{item.title} 未找到tmdbid无法识别媒体信息")
return
mtype = MediaType.TV if item.item_type in ['Series', 'show'] else MediaType.MOVIE
mediainfo = self.chain.recognize_media(mtype=mtype, tmdbid=item.tmdbid)
if not mediainfo:
logger.warn(f"{item.title} 未识别到媒体信息")
return
# 获取媒体项
iteminfo = self.get_iteminfo(server=server, itemid=item.item_id)
if not iteminfo:
logger.warn(f"{item.title} 未找到媒体项")
return
if __need_trans_actor(iteminfo):
# 获取豆瓣演员信息
logger.info(f"开始获取 {item.title} 的豆瓣演员信息 ...")
douban_actors = self.__get_douban_actors(mediainfo=mediainfo, season=season)
self.__update_peoples(server=server, itemid=item.item_id, iteminfo=iteminfo, douban_actors=douban_actors)
else:
logger.info(f"{item.title} 的人物信息已是中文,无需更新")
# 处理季和集人物
if iteminfo.get("Type") and "Series" in iteminfo["Type"]:
# 获取季媒体项
seasons = self.get_items(server=server, parentid=item.item_id, mtype="Season")
if not seasons:
logger.warn(f"{item.title} 未找到季媒体项")
return
for season in seasons["Items"]:
# 获取豆瓣演员信息
season_actors = self.__get_douban_actors(mediainfo=mediainfo, season=season.get("IndexNumber"))
# 如果是Jellyfin更新季的人物Emby/Plex季没有人物
if server == "jellyfin":
seasoninfo = self.get_iteminfo(server=server, itemid=season.get("Id"))
if not seasoninfo:
logger.warn(f"{item.title} 未找到季媒体项:{season.get('Id')}")
continue
if __need_trans_actor(seasoninfo):
# 更新季媒体项人物
self.__update_peoples(server=server, itemid=season.get("Id"), iteminfo=seasoninfo,
douban_actors=season_actors)
logger.info(f"{seasoninfo.get('Id')} 的人物信息更新完成")
else:
logger.info(f"{seasoninfo.get('Id')} 的人物信息已是中文,无需更新")
# 获取集媒体项
episodes = self.get_items(server=server, parentid=season.get("Id"), mtype="Episode")
if not episodes:
logger.warn(f"{item.title} 未找到集媒体项")
continue
# 更新集媒体项人物
for episode in episodes["Items"]:
# 获取集媒体项详情
episodeinfo = self.get_iteminfo(server=server, itemid=episode.get("Id"))
if not episodeinfo:
logger.warn(f"{item.title} 未找到集媒体项:{episode.get('Id')}")
continue
if __need_trans_actor(episodeinfo):
# 更新集媒体项人物
self.__update_peoples(server=server, itemid=episode.get("Id"), iteminfo=episodeinfo,
douban_actors=season_actors)
logger.info(f"{episodeinfo.get('Id')} 的人物信息更新完成")
else:
logger.info(f"{episodeinfo.get('Id')} 的人物信息已是中文,无需更新")
def __update_people(self, server: str, people: dict, douban_actors: list = None) -> Optional[dict]:
"""
更新人物信息,返回替换后的人物信息
"""
def __get_peopleid(p: dict) -> Tuple[Optional[str], Optional[str]]:
"""
获取人物的TMDBID、IMDBID
"""
if not p.get("ProviderIds"):
return None, None
peopletmdbid, peopleimdbid = None, None
if "Tmdb" in p["ProviderIds"]:
peopletmdbid = p["ProviderIds"]["Tmdb"]
if "tmdb" in p["ProviderIds"]:
peopletmdbid = p["ProviderIds"]["tmdb"]
if "Imdb" in p["ProviderIds"]:
peopleimdbid = p["ProviderIds"]["Imdb"]
if "imdb" in p["ProviderIds"]:
peopleimdbid = p["ProviderIds"]["imdb"]
return peopletmdbid, peopleimdbid
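`__get_peopleid` tolerates both key casings in `ProviderIds`. A compact sketch — note that in the original, the lowercase key wins when both are present, which the `or` chains below preserve:

```python
from typing import Optional, Tuple

def get_people_ids(person: dict) -> Tuple[Optional[str], Optional[str]]:
    """Pull TMDB/IMDb person ids out of ProviderIds, accepting either key casing."""
    ids = person.get("ProviderIds") or {}
    tmdbid = ids.get("tmdb") or ids.get("Tmdb")  # lowercase takes precedence
    imdbid = ids.get("imdb") or ids.get("Imdb")
    return tmdbid, imdbid

print(get_people_ids({"ProviderIds": {"Tmdb": "8784", "Imdb": "nm0185819"}}))
# → ('8784', 'nm0185819')
```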
# 返回的人物信息
ret_people = copy.deepcopy(people)
try:
# 查询媒体库人物详情
personinfo = self.get_iteminfo(server=server, itemid=people.get("Id"))
if not personinfo:
logger.warn(f"未找到人物 {people.get('Name')} 的信息")
return None
# 是否更新标志
updated_name = False
updated_overview = False
update_character = False
profile_path = None
# 从TMDB信息中更新人物信息
person_tmdbid, person_imdbid = __get_peopleid(personinfo)
if person_tmdbid:
person_tmdbinfo = self.tmdbchain.person_detail(int(person_tmdbid))
if person_tmdbinfo:
cn_name = self.__get_chinese_name(person_tmdbinfo)
if cn_name:
# 更新中文名
logger.info(f"{people.get('Name')} 从TMDB获取到中文名{cn_name}")
personinfo["Name"] = cn_name
ret_people["Name"] = cn_name
updated_name = True
# 更新中文描述
biography = person_tmdbinfo.get("biography")
if biography and StringUtils.is_chinese(biography):
logger.info(f"{people.get('Name')} 从TMDB获取到中文描述")
personinfo["Overview"] = biography
updated_overview = True
# 图片
profile_path = person_tmdbinfo.get('profile_path')
if profile_path:
logger.info(f"{people.get('Name')} 从TMDB获取到图片{profile_path}")
profile_path = f"https://{settings.TMDB_IMAGE_DOMAIN}/t/p/original{profile_path}"
# 从豆瓣信息中更新人物信息
"""
{
"name": "丹尼尔·克雷格",
"roles": [
"演员",
"制片人",
"配音"
],
"title": "丹尼尔·克雷格(同名)英国,英格兰,柴郡,切斯特影视演员",
"url": "https://movie.douban.com/celebrity/1025175/",
"user": null,
"character": "饰 詹姆斯·邦德 James Bond 007",
"uri": "douban://douban.com/celebrity/1025175?subject_id=27230907",
"avatar": {
"large": "https://qnmob3.doubanio.com/view/celebrity/raw/public/p42588.jpg?imageView2/2/q/80/w/600/h/3000/format/webp",
"normal": "https://qnmob3.doubanio.com/view/celebrity/raw/public/p42588.jpg?imageView2/2/q/80/w/200/h/300/format/webp"
},
"sharing_url": "https://www.douban.com/doubanapp/dispatch?uri=/celebrity/1025175/",
"type": "celebrity",
"id": "1025175",
"latin_name": "Daniel Craig"
}
"""
if douban_actors and (not updated_name
or not updated_overview
or not update_character):
# 从豆瓣演员中匹配中文名称、角色和简介
for douban_actor in douban_actors:
if douban_actor.get("latin_name") == people.get("Name") \
or douban_actor.get("name") == people.get("Name"):
# 名称
if not updated_name:
logger.info(f"{people.get('Name')} 从豆瓣中获取到中文名:{douban_actor.get('name')}")
personinfo["Name"] = douban_actor.get("name")
ret_people["Name"] = douban_actor.get("name")
updated_name = True
# 描述
if not updated_overview:
if douban_actor.get("title"):
logger.info(f"{people.get('Name')} 从豆瓣中获取到中文描述:{douban_actor.get('title')}")
personinfo["Overview"] = douban_actor.get("title")
updated_overview = True
# 饰演角色
if not update_character:
if douban_actor.get("character"):
# "饰 詹姆斯·邦德 James Bond 007"
character = re.sub(r"\s+", "",
douban_actor.get("character"))
character = re.sub("演员", "",
character)
if character:
logger.info(f"{people.get('Name')} 从豆瓣中获取到饰演角色:{character}")
ret_people["Role"] = character
update_character = True
# 图片
if not profile_path:
avatar = douban_actor.get("avatar") or {}
if avatar.get("large"):
logger.info(f"{people.get('Name')} 从豆瓣中获取到图片:{avatar.get('large')}")
profile_path = avatar.get("large")
break
# 更新人物图片
if profile_path:
logger.info(f"更新人物 {people.get('Name')} 的图片:{profile_path}")
self.set_item_image(server=server, itemid=people.get("Id"), imageurl=profile_path)
# 锁定人物信息
if updated_name:
if "Name" not in personinfo["LockedFields"]:
personinfo["LockedFields"].append("Name")
if updated_overview:
if "Overview" not in personinfo["LockedFields"]:
personinfo["LockedFields"].append("Overview")
# 更新人物信息
if updated_name or updated_overview or update_character:
logger.info(f"更新人物 {people.get('Name')} 的信息:{personinfo}")
ret = self.set_iteminfo(server=server, itemid=people.get("Id"), iteminfo=personinfo)
if ret:
return ret_people
else:
logger.info(f"人物 {people.get('Name')} 未找到中文数据")
except Exception as err:
logger.error(f"更新人物信息失败:{err}")
return None
def __get_douban_actors(self, mediainfo: MediaInfo, season: int = None) -> List[dict]:
"""
获取豆瓣演员信息
"""
# 休眠1-5秒(按当前时间取模)
time.sleep(1 + int(time.time()) % 5)
# 匹配豆瓣信息
doubaninfo = self.chain.match_doubaninfo(name=mediainfo.title,
mtype=mediainfo.type.value,
year=mediainfo.year,
season=season)
# 豆瓣演员
if doubaninfo:
doubanitem = self.chain.douban_info(doubaninfo.get("id")) or {}
return (doubanitem.get("actors") or []) + (doubanitem.get("directors") or [])
else:
logger.warn(f"未找到豆瓣信息:{mediainfo.title_year}")
return []
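Matching a library person against the Douban cast list compares both the `latin_name` and the Chinese `name`, as in `__update_people` above. A minimal sketch over the Douban record shape shown earlier:

```python
from typing import Optional

def match_douban_actor(people_name: str, douban_actors: list) -> Optional[dict]:
    """Return the first Douban actor whose name or latin_name equals people_name."""
    for actor in douban_actors:
        if people_name in (actor.get("name"), actor.get("latin_name")):
            return actor
    return None

actors = [{"name": "丹尼尔·克雷格", "latin_name": "Daniel Craig",
           "character": "饰 詹姆斯·邦德 James Bond 007"}]
print(match_douban_actor("Daniel Craig", actors)["name"])  # → 丹尼尔·克雷格
```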
@staticmethod
def get_iteminfo(server: str, itemid: str) -> dict:
"""
获得媒体项详情
"""
def __get_emby_iteminfo() -> dict:
"""
获得Emby媒体项详情
"""
try:
url = f'[HOST]emby/Users/[USER]/Items/{itemid}?' \
f'Fields=ChannelMappingInfo&api_key=[APIKEY]'
res = Emby().get_data(url=url)
if res:
return res.json()
except Exception as err:
logger.error(f"获取Emby媒体项详情失败{err}")
return {}
def __get_jellyfin_iteminfo() -> dict:
"""
获得Jellyfin媒体项详情
"""
try:
url = f'[HOST]Users/[USER]/Items/{itemid}?Fields=ChannelMappingInfo&api_key=[APIKEY]'
res = Jellyfin().get_data(url=url)
if res:
result = res.json()
if result:
if result.get('Path'):
result['FileName'] = Path(result['Path']).name
return result
except Exception as err:
logger.error(f"获取Jellyfin媒体项详情失败{err}")
return {}
def __get_plex_iteminfo() -> dict:
"""
获得Plex媒体项详情
"""
iteminfo = {}
try:
plexitem = Plex().get_plex().library.fetchItem(ekey=itemid)
if 'movie' in plexitem.METADATA_TYPE:
iteminfo['Type'] = 'Movie'
iteminfo['IsFolder'] = False
elif 'episode' in plexitem.METADATA_TYPE:
iteminfo['Type'] = 'Series'
iteminfo['IsFolder'] = False
if 'show' in plexitem.TYPE:
iteminfo['ChildCount'] = plexitem.childCount
iteminfo['Name'] = plexitem.title
iteminfo['Id'] = plexitem.key
iteminfo['ProductionYear'] = plexitem.year
iteminfo['ProviderIds'] = {}
for guid in plexitem.guids:
idlist = str(guid.id).split(sep='://')
if len(idlist) < 2:
continue
iteminfo['ProviderIds'][idlist[0]] = idlist[1]
for location in plexitem.locations:
iteminfo['Path'] = location
iteminfo['FileName'] = Path(location).name
iteminfo['Overview'] = plexitem.summary
iteminfo['CommunityRating'] = plexitem.audienceRating
return iteminfo
except Exception as err:
logger.error(f"获取Plex媒体项详情失败{err}")
return {}
if server == "emby":
return __get_emby_iteminfo()
elif server == "jellyfin":
return __get_jellyfin_iteminfo()
else:
return __get_plex_iteminfo()
@staticmethod
def get_items(server: str, parentid: str, mtype: str = None) -> dict:
"""
获得媒体的所有子媒体项
"""
def __get_emby_items() -> dict:
"""
获得Emby媒体的所有子媒体项
"""
try:
if parentid:
url = f'[HOST]emby/Users/[USER]/Items?ParentId={parentid}&api_key=[APIKEY]'
else:
url = '[HOST]emby/Users/[USER]/Items?api_key=[APIKEY]'
res = Emby().get_data(url=url)
if res:
return res.json()
except Exception as err:
logger.error(f"获取Emby媒体的所有子媒体项失败{err}")
return {}
def __get_jellyfin_items() -> dict:
"""
获得Jellyfin媒体的所有子媒体项
"""
try:
if parentid:
url = f'[HOST]Users/[USER]/Items?ParentId={parentid}&api_key=[APIKEY]'
else:
url = '[HOST]Users/[USER]/Items?api_key=[APIKEY]'
res = Jellyfin().get_data(url=url)
if res:
return res.json()
except Exception as err:
logger.error(f"获取Jellyfin媒体的所有子媒体项失败{err}")
return {}
def __get_plex_items(t: str) -> dict:
"""
获得Plex媒体的所有子媒体项
"""
items = {}
try:
plex = Plex().get_plex()
items['Items'] = []
if parentid:
if mtype and 'Season' in t:
plexitem = plex.library.fetchItem(ekey=parentid)
items['Items'] = []
for season in plexitem.seasons():
item = {
'Name': season.title,
'Id': season.key,
'IndexNumber': season.seasonNumber,
'Overview': season.summary
}
items['Items'].append(item)
elif mtype and 'Episode' in t:
plexitem = plex.library.fetchItem(ekey=parentid)
items['Items'] = []
for episode in plexitem.episodes():
item = {
'Name': episode.title,
'Id': episode.key,
'IndexNumber': episode.episodeNumber,
'Overview': episode.summary,
'CommunityRating': episode.audienceRating
}
items['Items'].append(item)
else:
plexitems = plex.library.sectionByID(sectionID=parentid)
for plexitem in plexitems.all():
item = {}
if 'movie' in plexitem.METADATA_TYPE:
item['Type'] = 'Movie'
item['IsFolder'] = False
elif 'episode' in plexitem.METADATA_TYPE:
item['Type'] = 'Series'
item['IsFolder'] = False
item['Name'] = plexitem.title
item['Id'] = plexitem.key
items['Items'].append(item)
else:
plexitems = plex.library.sections()
for plexitem in plexitems:
item = {}
if 'Directory' in plexitem.TAG:
item['Type'] = 'Folder'
item['IsFolder'] = True
elif 'movie' in plexitem.METADATA_TYPE:
item['Type'] = 'Movie'
item['IsFolder'] = False
elif 'episode' in plexitem.METADATA_TYPE:
item['Type'] = 'Series'
item['IsFolder'] = False
item['Name'] = plexitem.title
item['Id'] = plexitem.key
items['Items'].append(item)
return items
except Exception as err:
logger.error(f"获取Plex媒体的所有子媒体项失败{err}")
return {}
if server == "emby":
return __get_emby_items()
elif server == "jellyfin":
return __get_jellyfin_items()
else:
return __get_plex_items(mtype)
@staticmethod
def set_iteminfo(server: str, itemid: str, iteminfo: dict):
"""
更新媒体项详情
"""
def __set_emby_iteminfo():
"""
更新Emby媒体项详情
"""
try:
res = Emby().post_data(
url=f'[HOST]emby/Items/{itemid}?api_key=[APIKEY]&reqformat=json',
data=json.dumps(iteminfo),
headers={
"Content-Type": "application/json"
}
)
if res and res.status_code in [200, 204]:
return True
else:
logger.error(f"更新Emby媒体项详情失败错误码{res.status_code}")
return False
except Exception as err:
logger.error(f"更新Emby媒体项详情失败{err}")
return False
def __set_jellyfin_iteminfo():
"""
更新Jellyfin媒体项详情
"""
try:
res = Jellyfin().post_data(
url=f'[HOST]Items/{itemid}?api_key=[APIKEY]',
data=json.dumps(iteminfo),
headers={
"Content-Type": "application/json"
}
)
if res and res.status_code in [200, 204]:
return True
else:
logger.error(f"更新Jellyfin媒体项详情失败错误码{res.status_code}")
return False
except Exception as err:
logger.error(f"更新Jellyfin媒体项详情失败{err}")
return False
def __set_plex_iteminfo():
"""
更新Plex媒体项详情
"""
try:
plexitem = Plex().get_plex().library.fetchItem(ekey=itemid)
if 'CommunityRating' in iteminfo:
edits = {
'audienceRating.value': iteminfo['CommunityRating'],
'audienceRating.locked': 1
}
plexitem.edit(**edits)
plexitem.editTitle(iteminfo['Name']).editSummary(iteminfo['Overview']).reload()
return True
except Exception as err:
logger.error(f"更新Plex媒体项详情失败{err}")
return False
if server == "emby":
return __set_emby_iteminfo()
elif server == "jellyfin":
return __set_jellyfin_iteminfo()
else:
return __set_plex_iteminfo()
@staticmethod
@retry(RequestException, logger=logger)
def set_item_image(server: str, itemid: str, imageurl: str):
"""
更新媒体项图片
"""
def __download_image():
"""
下载图片
"""
try:
if "doubanio.com" in imageurl:
r = RequestUtils(headers={
'Referer': "https://movie.douban.com/"
}, ua=settings.USER_AGENT).get_res(url=imageurl, raise_exception=True)
else:
r = RequestUtils().get_res(url=imageurl, raise_exception=True)
if r:
return base64.b64encode(r.content).decode()
else:
logger.info(f"{imageurl} 图片下载失败,请检查网络连通性")
except Exception as err:
logger.error(f"下载图片失败:{err}")
return None
def __set_emby_item_image(_base64: str):
"""
更新Emby媒体项图片
"""
try:
url = f'[HOST]emby/Items/{itemid}/Images/Primary?api_key=[APIKEY]'
res = Emby().post_data(
url=url,
data=_base64,
headers={
"Content-Type": "image/png"
}
)
if res and res.status_code in [200, 204]:
return True
else:
logger.error(f"更新Emby媒体项图片失败错误码{res.status_code}")
return False
except Exception as result:
logger.error(f"更新Emby媒体项图片失败{result}")
return False
def __set_jellyfin_item_image():
"""
更新Jellyfin媒体项图片
# FIXME 改为预下载图片
"""
try:
url = f'[HOST]Items/{itemid}/RemoteImages/Download?' \
f'Type=Primary&ImageUrl={imageurl}&ProviderName=TheMovieDb&api_key=[APIKEY]'
res = Jellyfin().post_data(url=url)
if res and res.status_code in [200, 204]:
return True
else:
logger.error(f"更新Jellyfin媒体项图片失败错误码{res.status_code}")
return False
except Exception as err:
logger.error(f"更新Jellyfin媒体项图片失败{err}")
return False
def __set_plex_item_image():
"""
更新Plex媒体项图片
# FIXME 改为预下载图片
"""
try:
plexitem = Plex().get_plex().library.fetchItem(ekey=itemid)
plexitem.uploadPoster(url=imageurl)
return True
except Exception as err:
logger.error(f"更新Plex媒体项图片失败{err}")
return False
if server == "emby":
# 下载图片获取base64
image_base64 = __download_image()
if image_base64:
return __set_emby_item_image(image_base64)
elif server == "jellyfin":
return __set_jellyfin_item_image()
else:
return __set_plex_item_image()
return None
@staticmethod
def __get_chinese_name(personinfo: dict) -> str:
"""
获取TMDB别名中的中文名
"""
try:
also_known_as = personinfo.get("also_known_as") or []
if also_known_as:
for name in also_known_as:
if name and StringUtils.is_chinese(name):
# 使用zhconv将繁体转化为简体
return zhconv.convert(name, "zh-hans")
except Exception as err:
logger.error(f"获取人物中文名失败:{err}")
return ""
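`__get_chinese_name` just scans `also_known_as` for the first Chinese alias. A sketch with an inline CJK check — the plugin additionally normalizes the result to Simplified via `zhconv`, which this sketch skips:

```python
import re

def get_chinese_name(personinfo: dict) -> str:
    """First Chinese alias from a TMDB person's also_known_as list, else ""."""
    for name in personinfo.get("also_known_as") or []:
        if name and re.search(r"[\u4e00-\u9fa5]", name):
            return name
    return ""

print(get_chinese_name({"also_known_as": ["Daniel Craig", "丹尼尔·克雷格"]}))
# → 丹尼尔·克雷格
```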
def stop_service(self):
"""
停止服务
"""
try:
if self._scheduler:
self._scheduler.remove_all_jobs()
if self._scheduler.running:
self._event.set()
self._scheduler.shutdown()
self._event.clear()
self._scheduler = None
except Exception as e:
logger.error(str(e))


@@ -841,87 +841,88 @@ class SiteStatistic(_PluginBase):
url = site_info.get("url")
proxy = site_info.get("proxy")
ua = site_info.get("ua")
session = requests.Session()
proxies = settings.PROXY if proxy else None
proxy_server = settings.PROXY_SERVER if proxy else None
render = site_info.get("render")
# 会话管理
with requests.Session() as session:
proxies = settings.PROXY if proxy else None
proxy_server = settings.PROXY_SERVER if proxy else None
render = site_info.get("render")
logger.debug(f"站点 {site_name} url={url} site_cookie={site_cookie} ua={ua}")
if render:
# 渲染模式
html_text = PlaywrightHelper().get_page_source(url=url,
cookies=site_cookie,
ua=ua,
proxies=proxy_server)
else:
# 普通模式
res = RequestUtils(cookies=site_cookie,
session=session,
ua=ua,
proxies=proxies
).get_res(url=url)
if res and res.status_code == 200:
if re.search(r"charset=\"?utf-8\"?", res.text, re.IGNORECASE):
res.encoding = "utf-8"
else:
res.encoding = res.apparent_encoding
html_text = res.text
# 第一次登录反爬
if html_text.find("title") == -1:
i = html_text.find("window.location")
if i == -1:
return None
tmp_url = url + html_text[i:html_text.find(";")] \
.replace("\"", "") \
.replace("+", "") \
.replace(" ", "") \
.replace("window.location=", "")
res = RequestUtils(cookies=site_cookie,
session=session,
ua=ua,
proxies=proxies
).get_res(url=tmp_url)
if res and res.status_code == 200:
if "charset=utf-8" in res.text or "charset=UTF-8" in res.text:
res.encoding = "UTF-8"
else:
res.encoding = res.apparent_encoding
html_text = res.text
if not html_text:
return None
else:
logger.error("站点 %s 被反爬限制:%s, 状态码:%s" % (site_name, url, res.status_code))
return None
# 兼容假首页情况,假首页通常没有 <link rel="search" 属性
if '"search"' not in html_text and '"csrf-token"' not in html_text:
res = RequestUtils(cookies=site_cookie,
session=session,
ua=ua,
proxies=proxies
).get_res(url=url + "/index.php")
if res and res.status_code == 200:
if re.search(r"charset=\"?utf-8\"?", res.text, re.IGNORECASE):
res.encoding = "utf-8"
else:
res.encoding = res.apparent_encoding
html_text = res.text
if not html_text:
return None
elif res is not None:
logger.error(f"站点 {site_name} 连接失败,状态码:{res.status_code}")
return None
logger.debug(f"站点 {site_name} url={url} site_cookie={site_cookie} ua={ua}")
if render:
# 渲染模式
html_text = PlaywrightHelper().get_page_source(url=url,
cookies=site_cookie,
ua=ua,
proxies=proxy_server)
else:
logger.error(f"站点 {site_name} 无法访问:{url}")
return None
# 解析站点类型
if html_text:
site_schema = self.__build_class(html_text)
if not site_schema:
logger.error("站点 %s 无法识别站点类型" % site_name)
return None
return site_schema(site_name, url, site_cookie, html_text, session=session, ua=ua, proxy=proxy)
return None
# 普通模式
res = RequestUtils(cookies=site_cookie,
session=session,
ua=ua,
proxies=proxies
).get_res(url=url)
if res and res.status_code == 200:
if re.search(r"charset=\"?utf-8\"?", res.text, re.IGNORECASE):
res.encoding = "utf-8"
else:
res.encoding = res.apparent_encoding
html_text = res.text
# 第一次登录反爬
if html_text.find("title") == -1:
i = html_text.find("window.location")
if i == -1:
return None
tmp_url = url + html_text[i:html_text.find(";")] \
.replace("\"", "") \
.replace("+", "") \
.replace(" ", "") \
.replace("window.location=", "")
res = RequestUtils(cookies=site_cookie,
session=session,
ua=ua,
proxies=proxies
).get_res(url=tmp_url)
if res and res.status_code == 200:
if "charset=utf-8" in res.text or "charset=UTF-8" in res.text:
res.encoding = "UTF-8"
else:
res.encoding = res.apparent_encoding
html_text = res.text
if not html_text:
return None
else:
logger.error("站点 %s 被反爬限制:%s, 状态码:%s" % (site_name, url, res.status_code))
return None
# 兼容假首页情况,假首页通常没有 <link rel="search" 属性
if '"search"' not in html_text and '"csrf-token"' not in html_text:
res = RequestUtils(cookies=site_cookie,
session=session,
ua=ua,
proxies=proxies
).get_res(url=url + "/index.php")
if res and res.status_code == 200:
if re.search(r"charset=\"?utf-8\"?", res.text, re.IGNORECASE):
res.encoding = "utf-8"
else:
res.encoding = res.apparent_encoding
html_text = res.text
if not html_text:
return None
elif res is not None:
logger.error(f"站点 {site_name} 连接失败,状态码:{res.status_code}")
return None
else:
logger.error(f"站点 {site_name} 无法访问:{url}")
return None
# 解析站点类型
if html_text:
site_schema = self.__build_class(html_text)
if not site_schema:
logger.error("站点 %s 无法识别站点类型" % site_name)
return None
return site_schema(site_name, url, site_cookie, html_text, session=session, ua=ua, proxy=proxy)
return None
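The charset probe repeated throughout this method — force utf-8 when the page declares it, otherwise trust `requests`' `apparent_encoding` — could be factored into one helper. A sketch:

```python
import re

def pick_encoding(html_text: str, apparent_encoding: str) -> str:
    """Prefer a declared utf-8 charset; fall back to requests' apparent_encoding."""
    if re.search(r'charset="?utf-8"?', html_text, re.IGNORECASE):
        return "utf-8"
    return apparent_encoding

print(pick_encoding('<meta charset="utf-8">', "gbk"))  # → utf-8
```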
def refresh_by_domain(self, domain: str) -> schemas.Response:
"""


@@ -6,7 +6,6 @@ from enum import Enum
from typing import Optional
from urllib.parse import urljoin, urlsplit
import requests
from requests import Session
from app.core.config import settings
@@ -107,7 +106,7 @@ class ISiteUserInfo(metaclass=ABCMeta):
self._base_url = f"{split_url.scheme}://{split_url.netloc}"
self._site_cookie = site_cookie
self._index_html = index_html
self._session = session if session else requests.Session()
self._session = session if session else None
self._ua = ua
self._emulate = emulate


@@ -73,15 +73,19 @@ class SpeedLimiter(_PluginBase):
try:
# 总带宽
self._bandwidth = int(float(config.get("bandwidth") or 0)) * 1000000
# 自动限速开关
if self._bandwidth > 0:
# 自动限速开关
self._auto_limit = True
else:
self._auto_limit = False
except Exception as e:
logger.error(f"智能限速上行带宽设置错误:{str(e)}")
self._bandwidth = 0
# 限速服务开关
self._limit_enabled = True if self._play_up_speed or self._play_down_speed or self._auto_limit else False
self._limit_enabled = True if (self._play_up_speed
or self._play_down_speed
or self._auto_limit) else False
self._allocation_ratio = config.get("allocation_ratio") or ""
# 不限速地址
self._unlimited_ips["ipv4"] = config.get("ipv4") or ""
@@ -379,7 +383,14 @@ class SpeedLimiter(_PluginBase):
return
if event:
event_data: WebhookEventInfo = event.event_data
if event_data.event not in ["playback.start", "PlaybackStart", "media.play"]:
if event_data.event not in [
"playback.start",
"PlaybackStart",
"media.play",
"media.stop",
"PlaybackStop",
"playback.stop"
]:
return
# Total bitrate of current playback sessions
total_bit_rate = 0
@@ -392,7 +403,7 @@
# Query sessions that are currently playing
playing_sessions = []
if media_server == "emby":
req_url = "{HOST}emby/Sessions?api_key={APIKEY}"
req_url = "[HOST]emby/Sessions?api_key=[APIKEY]"
try:
res = Emby().get_data(req_url)
if res and res.status_code == 200:
@@ -415,7 +426,7 @@ class SpeedLimiter(_PluginBase):
and session.get("NowPlayingItem", {}).get("MediaType") == "Video":
total_bit_rate += int(session.get("NowPlayingItem", {}).get("Bitrate") or 0)
elif media_server == "jellyfin":
req_url = "{HOST}Sessions?api_key={APIKEY}"
req_url = "[HOST]Sessions?api_key=[APIKEY]"
try:
res = Jellyfin().get_data(req_url)
if res and res.status_code == 200:
@@ -479,17 +490,13 @@ class SpeedLimiter(_PluginBase):
self.__set_limiter(limit_type="not playing", upload_limit=self._noplay_up_speed,
download_limit=self._noplay_down_speed)
def __calc_limit(self, total_bit_rate):
def __calc_limit(self, total_bit_rate: float) -> float:
"""
Calculate the smart upload speed limit
"""
residual_bandwidth = (self._bandwidth - total_bit_rate)
if residual_bandwidth < 0:
play_up_speed = 10
else:
play_up_speed = round(residual_bandwidth / 8 / 1024, 2)
return play_up_speed
if not self._bandwidth:
return 10
return round((self._bandwidth - total_bit_rate) / 8 / 1024, 2)
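The rewritten `__calc_limit` reduces to one piece of arithmetic (KB/s derived from a bits-per-second headroom). A standalone sketch, with the observation that, unlike the old branch, a negative headroom is no longer clamped to 10:

```python
def calc_limit(bandwidth: float, total_bit_rate: float) -> float:
    # Upload limit in KB/s: headroom in bits/s -> bytes (/8) -> KB (/1024).
    # A falsy bandwidth falls back to the 10 KB/s floor, as above.
    if not bandwidth:
        return 10
    return round((bandwidth - total_bit_rate) / 8 / 1024, 2)

print(calc_limit(100_000_000, 20_000_000))  # 9765.62
print(calc_limit(0, 20_000_000))            # 10
```

Note the design change: the removed code returned the 10 KB/s floor whenever the residual bandwidth went negative; the new version only guards against an unset bandwidth, so an overloaded link now yields a negative value.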
def __set_limiter(self, limit_type: str, upload_limit: float, download_limit: float):
"""
@@ -572,7 +579,7 @@ class SpeedLimiter(_PluginBase):
logger.error(f"设置限速失败:{str(e)}")
@staticmethod
def __allow_access(allow_ips, ip):
def __allow_access(allow_ips: dict, ip: str) -> bool:
"""
Check whether an IP address is allowed
:param allow_ips: allowed IP ranges {"ipv4":, "ipv6":}

View File

@@ -100,7 +100,7 @@ class TorrentTransfer(_PluginBase):
return
if self._fromdownloader == self._todownloader:
logger.error(f"Source and destination downloaders must not be the same")
self.systemmessage(f"Source and destination downloaders must not be the same")
self.systemmessage.put(f"Source and destination downloaders must not be the same")
return
self._scheduler = BackgroundScheduler(timezone=settings.TZ)
if self._cron:
@@ -110,7 +110,7 @@ class TorrentTransfer(_PluginBase):
CronTrigger.from_crontab(self._cron))
except Exception as e:
logger.error(f"Failed to start the torrent-transfer service: {e}")
self.systemmessage(f"Failed to start the torrent-transfer service: {e}")
self.systemmessage.put(f"Failed to start the torrent-transfer service: {e}")
return
if self._onlyonce:
logger.info(f"Torrent-transfer service started; running once immediately")

View File

@@ -137,13 +137,29 @@ class WebHook(_PluginBase):
return
def __to_dict(_event):
result = {}
for key, value in _event.items():
if hasattr(value, 'to_dict'):
result[key] = value.to_dict()
else:
result[key] = str(value)
return result
"""
Recursively convert an object into a dict
"""
if isinstance(_event, dict):
for k, v in _event.items():
_event[k] = __to_dict(v)
return _event
elif isinstance(_event, list):
for i in range(len(_event)):
_event[i] = __to_dict(_event[i])
return _event
elif isinstance(_event, tuple):
return tuple(__to_dict(list(_event)))
elif isinstance(_event, set):
return set(__to_dict(list(_event)))
elif hasattr(_event, 'to_dict'):
return __to_dict(_event.to_dict())
elif hasattr(_event, '__dict__'):
return __to_dict(_event.__dict__)
elif isinstance(_event, (int, float, str, bool, type(None))):
return _event
else:
return str(_event)
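The recursive conversion above can be condensed into a small standalone sketch (simplified relative to the original: every sequence type comes back as a list, and custom objects are unpacked via `__dict__`):

```python
def to_dict(obj):
    # Recursively flatten an object graph into JSON-friendly primitives.
    if isinstance(obj, dict):
        return {k: to_dict(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple, set)):
        return [to_dict(i) for i in obj]
    if isinstance(obj, (int, float, str, bool, type(None))):
        return obj
    if hasattr(obj, "to_dict"):
        return to_dict(obj.to_dict())
    if hasattr(obj, "__dict__"):
        return to_dict(vars(obj))
    return str(obj)

class Torrent:  # illustrative stand-in for an event payload object
    def __init__(self):
        self.title = "Inception"
        self.size = 2048

print(to_dict({"torrent": Torrent(), "tags": ("bluray", "x264")}))
# {'torrent': {'title': 'Inception', 'size': 2048}, 'tags': ['bluray', 'x264']}
```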
event_info = {
"type": event.event_type,

View File

@@ -11,6 +11,7 @@ from app.chain import ChainBase
from app.chain.cookiecloud import CookieCloudChain
from app.chain.mediaserver import MediaServerChain
from app.chain.subscribe import SubscribeChain
from app.chain.tmdb import TmdbChain
from app.chain.transfer import TransferChain
from app.core.config import settings
from app.db import SessionFactory
@@ -183,6 +184,14 @@ class Scheduler(metaclass=Singleton):
}
)
# Refresh TMDB wallpapers in the background
self._scheduler.add_job(
TmdbChain(self._db).get_random_wallpager,
"interval",
minutes=30,
next_run_time=datetime.now(pytz.timezone(settings.TZ)) + timedelta(seconds=3)
)
# Common scheduled services
self._scheduler.add_job(
SchedulerChain(self._db).scheduler_job,

View File

@@ -36,6 +36,12 @@ class DownloadHistory(BaseModel):
torrent_description: Optional[str] = None
# Site
torrent_site: Optional[str] = None
# Downloading user
userid: Optional[str] = None
# Download channel
channel: Optional[str] = None
# Creation time
date: Optional[str] = None
# Note
note: Optional[str] = None

View File

@@ -14,6 +14,10 @@ class ExistMediaInfo(BaseModel):
type: Optional[MediaType]
# Seasons
seasons: Optional[Dict[int, list]] = {}
# Media server
server: Optional[str] = None
# Media item ID
itemid: Optional[Union[str, int]] = None
class NotExistMediaInfo(BaseModel):

View File

@@ -34,8 +34,8 @@ class EventType(Enum):
DownloadAdded = "download.added"
# History record deleted
HistoryDeleted = "history.deleted"
# Media library file deleted
MediaDeleted = "media.deleted"
# Download source file deleted
DownloadFileDeleted = "downloadfile.deleted"
# Incoming user message
UserMessage = "user.message"
# Notification message
@@ -58,6 +58,8 @@ class SystemConfigKey(Enum):
NotificationChannels = "NotificationChannels"
# Custom release groups / subtitle groups
CustomReleaseGroups = "CustomReleaseGroups"
# Custom placeholders
Customization = "Customization"
# Custom identifier words
CustomIdentifiers = "CustomIdentifiers"
# Search priority rules

View File

@@ -59,7 +59,8 @@ class RequestUtils:
headers=self._headers,
proxies=self._proxies,
timeout=self._timeout,
json=json)
json=json,
stream=False)
else:
return requests.post(url,
data=data,
@@ -67,7 +68,8 @@ class RequestUtils:
headers=self._headers,
proxies=self._proxies,
timeout=self._timeout,
json=json)
json=json,
stream=False)
except requests.exceptions.RequestException:
return None
@@ -91,27 +93,38 @@ class RequestUtils:
except requests.exceptions.RequestException:
return None
def get_res(self, url: str, params: dict = None,
allow_redirects: bool = True, raise_exception: bool = False) -> Optional[Response]:
def get_res(self, url: str,
params: dict = None,
data: Any = None,
json: dict = None,
allow_redirects: bool = True,
raise_exception: bool = False
) -> Optional[Response]:
try:
if self._session:
return self._session.get(url,
params=params,
data=data,
json=json,
verify=False,
headers=self._headers,
proxies=self._proxies,
cookies=self._cookies,
timeout=self._timeout,
allow_redirects=allow_redirects)
allow_redirects=allow_redirects,
stream=False)
else:
return requests.get(url,
params=params,
data=data,
json=json,
verify=False,
headers=self._headers,
proxies=self._proxies,
cookies=self._cookies,
timeout=self._timeout,
allow_redirects=allow_redirects)
allow_redirects=allow_redirects,
stream=False)
except requests.exceptions.RequestException:
if raise_exception:
raise requests.exceptions.RequestException
@@ -120,7 +133,8 @@ class RequestUtils:
def post_res(self, url: str, data: Any = None, params: dict = None,
allow_redirects: bool = True,
files: Any = None,
json: dict = None) -> Optional[Response]:
json: dict = None,
raise_exception: bool = False) -> Optional[Response]:
try:
if self._session:
return self._session.post(url,
@@ -133,7 +147,8 @@ class RequestUtils:
timeout=self._timeout,
allow_redirects=allow_redirects,
files=files,
json=json)
json=json,
stream=False)
else:
return requests.post(url,
data=data,
@@ -145,8 +160,11 @@ class RequestUtils:
timeout=self._timeout,
allow_redirects=allow_redirects,
files=files,
json=json)
json=json,
stream=False)
except requests.exceptions.RequestException:
if raise_exception:
raise requests.exceptions.RequestException
return None
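The error-handling convention these request methods share (swallow the request exception and return None, unless the caller sets `raise_exception`) is a general pattern; a generic sketch with illustrative names, not code from the repo:

```python
from typing import Callable, Optional, TypeVar

T = TypeVar("T")

def safe_call(fn: Callable[[], T], raise_exception: bool = False) -> Optional[T]:
    # Mirror of the get_res/post_res error handling: failures become
    # None unless the caller explicitly asks for the exception.
    try:
        return fn()
    except Exception:
        if raise_exception:
            raise
        return None

print(safe_call(lambda: 1 // 0))    # None
print(safe_call(lambda: "200 OK"))  # 200 OK
```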
@staticmethod

View File

@@ -1,155 +0,0 @@
import os
class PathUtils:
@staticmethod
def get_dir_files(in_path, exts="", filesize=0, episode_format=None):
"""
Get the list of media files under a directory, filtered by extension, size, and episode format
"""
if not in_path:
return []
if not os.path.exists(in_path):
return []
ret_list = []
if os.path.isdir(in_path):
for root, dirs, files in os.walk(in_path):
for file in files:
cur_path = os.path.join(root, file)
# Skip paths that cannot be processed
if PathUtils.is_invalid_path(cur_path):
continue
# Check the episode-format match
if episode_format and not episode_format.match(file):
continue
# Check the extension
if exts and os.path.splitext(file)[-1].lower() not in exts:
continue
# Check the file size
if filesize and os.path.getsize(cur_path) < filesize:
continue
# Hit
if cur_path not in ret_list:
ret_list.append(cur_path)
else:
# Skip paths that cannot be processed
if PathUtils.is_invalid_path(in_path):
return []
# Check the extension
if exts and os.path.splitext(in_path)[-1].lower() not in exts:
return []
# Check the episode format
if episode_format and not episode_format.match(os.path.basename(in_path)):
return []
# Check the file size
if filesize and os.path.getsize(in_path) < filesize:
return []
ret_list.append(in_path)
return ret_list
@staticmethod
def get_dir_level1_files(in_path, exts=""):
"""
List the files directly under a directory (one level only)
"""
ret_list = []
if not os.path.exists(in_path):
return []
for file in os.listdir(in_path):
path = os.path.join(in_path, file)
if os.path.isfile(path):
if not exts or os.path.splitext(file)[-1].lower() in exts:
ret_list.append(path)
return ret_list
@staticmethod
def get_dir_level1_medias(in_path, exts=""):
"""
Return all files and folders under a directory matching the extensions (one level only)
"""
ret_list = []
if not os.path.exists(in_path):
return []
if os.path.isdir(in_path):
for file in os.listdir(in_path):
path = os.path.join(in_path, file)
if os.path.isfile(path):
if not exts or os.path.splitext(file)[-1].lower() in exts:
ret_list.append(path)
else:
ret_list.append(path)
else:
ret_list.append(in_path)
return ret_list
@staticmethod
def is_invalid_path(path):
"""
Check whether a path cannot be processed
"""
if not path:
return True
if path.find('/@Recycle/') != -1 or path.find('/#recycle/') != -1 or path.find('/.') != -1 or path.find(
'/@eaDir') != -1:
return True
return False
@staticmethod
def is_path_in_path(path1, path2):
"""
Check the containment relationship between two paths: whether path2 is under path1
"""
if not path1 or not path2:
return False
path1 = os.path.normpath(path1).replace("\\", "/")
path2 = os.path.normpath(path2).replace("\\", "/")
if path1 == path2:
return True
path = os.path.dirname(path2)
while True:
if path == path1:
return True
path = os.path.dirname(path)
if path == os.path.dirname(path):
break
return False
@staticmethod
def get_bluray_dir(path):
"""
Check whether a path is a Blu-ray disc directory; return the disc root if so, otherwise None
"""
if not path or not os.path.exists(path):
return None
if os.path.isdir(path):
if os.path.exists(os.path.join(path, "BDMV", "index.bdmv")):
return path
elif os.path.normpath(path).endswith("BDMV") \
and os.path.exists(os.path.join(path, "index.bdmv")):
return os.path.dirname(path)
elif os.path.normpath(path).endswith("STREAM") \
and os.path.exists(os.path.join(os.path.dirname(path), "index.bdmv")):
return PathUtils.get_parent_paths(path, 2)
else:
# A TV-series disc set may contain multiple directories, e.g. Spider Man 2021/Disc1, Spider Man 2021/Disc2
for level1 in PathUtils.get_dir_level1_medias(path):
if os.path.exists(os.path.join(level1, "BDMV", "index.bdmv")):
return path
return None
else:
if str(os.path.splitext(path)[-1]).lower() in [".m2ts", ".ts"] \
and os.path.normpath(os.path.dirname(path)).endswith("STREAM") \
and os.path.exists(os.path.join(PathUtils.get_parent_paths(path, 2), "index.bdmv")):
return PathUtils.get_parent_paths(path, 3)
else:
return None
@staticmethod
def get_parent_paths(path, level: int = 1):
"""
Get an ancestor directory path; level is the number of levels to walk up
"""
for lv in range(0, level):
path = os.path.dirname(path)
return path
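With PathUtils being removed, its containment check maps naturally onto pathlib. A sketch of an equivalent (hypothetical replacement, POSIX paths for illustration; the repo may use a different substitute):

```python
from pathlib import PurePosixPath

def is_path_in_path(path1: str, path2: str) -> bool:
    # True when path2 equals path1 or lies somewhere under it,
    # matching the removed PathUtils.is_path_in_path semantics.
    p1, p2 = PurePosixPath(path1), PurePosixPath(path2)
    return p1 == p2 or p1 in p2.parents

print(is_path_in_path("/media/movies", "/media/movies/Inception/file.mkv"))  # True
print(is_path_in_path("/media/tv", "/media/movies"))                          # False
```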

View File

@@ -68,6 +68,8 @@ class StringUtils:
"""
Check whether the text contains Chinese characters
"""
if not word:
return False
if isinstance(word, list):
word = " ".join(word)
chn = re.compile(r'[\u4e00-\u9fff]')
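The added guard matters because `re.search` raises `TypeError` on None input; a standalone sketch of the complete check (the function name is illustrative):

```python
import re

def is_chinese(word) -> bool:
    # Empty or None input is never Chinese; this guard is what the
    # added `if not word` line above provides.
    if not word:
        return False
    if isinstance(word, list):
        word = " ".join(word)
    return bool(re.search(r"[\u4e00-\u9fff]", word))

print(is_chinese(["Inception", "盗梦空间"]))  # True
print(is_chinese(None))                       # False
```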

View File

@@ -3,6 +3,7 @@ import os
import platform
import re
import shutil
import sys
from pathlib import Path
from typing import List, Union, Tuple
@@ -27,20 +28,39 @@ class SystemUtils:
@staticmethod
def is_docker() -> bool:
"""
判断是否为Docker环境
"""
return Path("/.dockerenv").exists()
@staticmethod
def is_synology() -> bool:
"""
判断是否为群晖系统
"""
if SystemUtils.is_windows():
return False
return True if "synology" in SystemUtils.execute('uname -a') else False
@staticmethod
def is_windows() -> bool:
"""
判断是否为Windows系统
"""
return True if os.name == "nt" else False
@staticmethod
def is_frozen() -> bool:
"""
判断是否为冻结的二进制文件
"""
return True if getattr(sys, 'frozen', False) else False
@staticmethod
def is_macos() -> bool:
"""
判断是否为MacOS系统
"""
return True if platform.system() == 'Darwin' else False
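The new `is_frozen` keys off the `sys.frozen` attribute that bundlers such as PyInstaller set on packaged executables; the `True if x else False` pattern used throughout is equivalent to a plain `bool()` cast:

```python
import sys

def is_frozen() -> bool:
    # sys.frozen only exists inside packaged (frozen) executables,
    # so getattr with a False default covers normal interpreters.
    return bool(getattr(sys, "frozen", False))

print(is_frozen())  # False under a normal interpreter
```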
@staticmethod
@@ -77,7 +97,7 @@ class SystemUtils:
"""
try:
# Hard-link next to the source and rename
tmp_path = src.parent / dest.name
tmp_path = (src.parent / dest.name).with_suffix(".mp")
tmp_path.hardlink_to(src)
# 移动到目标目录
shutil.move(tmp_path, dest)
@@ -347,6 +367,7 @@ class SystemUtils:
# Create a Docker client
client = docker.DockerClient(base_url='tcp://127.0.0.1:38379')
# Get the current container's ID
container_id = None
with open('/proc/self/mountinfo', 'r') as f:
data = f.read()
index_resolv_conf = data.find("resolv.conf")
@@ -354,6 +375,12 @@ class SystemUtils:
index_second_slash = data.rfind("/", 0, index_resolv_conf)
index_first_slash = data.rfind("/", 0, index_second_slash) + 1
container_id = data[index_first_slash:index_second_slash]
if len(container_id) < 20:
index_resolv_conf = data.find("/sys/fs/cgroup/devices")
if index_resolv_conf != -1:
index_second_slash = data.rfind(" ", 0, index_resolv_conf)
index_first_slash = data.rfind("/", 0, index_second_slash) + 1
container_id = data[index_first_slash:index_second_slash]
if not container_id:
return False, "Failed to get the container ID"
# Restart the current container
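The first-pass extraction from /proc/self/mountinfo can be isolated like this (the sample line and container id are made up for illustration; real mountinfo content varies by runtime, which is why the code above adds a second pass):

```python
def parse_container_id(mountinfo: str) -> str:
    # Walk back from the "resolv.conf" mount entry to the two slashes
    # that bracket the container id, as in the first pass above.
    idx = mountinfo.find("resolv.conf")
    if idx == -1:
        return ""
    second = mountinfo.rfind("/", 0, idx)
    first = mountinfo.rfind("/", 0, second) + 1
    return mountinfo[first:second]

sample = "520 519 8:1 /var/lib/docker/containers/3f2a9c4d1b/resolv.conf /etc/resolv.conf rw"
print(parse_container_id(sample))  # 3f2a9c4d1b
```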

View File

@@ -4,8 +4,16 @@ from app.utils.http import RequestUtils
class WebUtils:
@staticmethod
def get_location(ip: str):
"""
Look up the region an IP address belongs to
"""
return WebUtils.get_location1(ip) or WebUtils.get_location2(ip)
@staticmethod
def get_location1(ip: str):
"""
https://api.mir6.com/api/ip
{
@@ -36,7 +44,33 @@ class WebUtils:
if r:
return r.json().get("data", {}).get("location") or ''
except Exception as err:
return str(err)
print(str(err))
return ""
@staticmethod
def get_location2(ip: str):
"""
https://whois.pconline.com.cn/ipJson.jsp?json=true&ip=
{
"ip": "122.8.12.22",
"pro": "上海市",
"proCode": "310000",
"city": "上海市",
"cityCode": "310000",
"region": "",
"regionCode": "0",
"addr": "上海市 铁通",
"regionNames": "",
"err": ""
}
"""
try:
r = RequestUtils().get_res(f"https://whois.pconline.com.cn/ipJson.jsp?json=true&ip={ip}")
if r:
return r.json().get("addr") or ''
except Exception as err:
print(str(err))
return ""
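`get_location` now chains two providers with `or`, falling through to the second whenever the first returns an empty string. The pattern generalizes to any ordered list of lookups (illustrative names, not repo code):

```python
def first_truthy(*providers):
    # Try each provider in order; swallow errors and return the first
    # non-empty result, like get_location1(ip) or get_location2(ip).
    for provider in providers:
        try:
            result = provider()
        except Exception:
            result = ""
        if result:
            return result
    return ""

print(first_truthy(lambda: "", lambda: "Shanghai CMCC"))  # Shanghai CMCC
```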
@staticmethod
def get_bing_wallpaper() -> Optional[str]:

config/app.env (new file, 187 lines)
View File

@@ -0,0 +1,187 @@
#######################################################################
# Items marked 【*】 are required; all others are optional and may be  #
# deleted entirely or left at their default values.                   #
#######################################################################
####################################
#          Basic settings          #
####################################
# Time zone
TZ=Asia/Shanghai
# 【*】API listen address
HOST=0.0.0.0
# Debug mode
DEBUG=false
# Development mode
DEV=false
# 【*】Superuser
SUPERUSER=admin
# 【*】Initial superuser password
SUPERUSER_PASSWORD=password
# 【*】API token; change this to a complex string
API_TOKEN=moviepilot
# Network proxy, IP:PORT
PROXY_HOST=
# TMDB image domain; keep the default unless you need to change it
TMDB_IMAGE_DOMAIN=image.tmdb.org
# TMDB API domain; keep the default unless you need to change it
TMDB_API_DOMAIN=api.themoviedb.org
# Big-memory mode
BIG_MEMORY_MODE=false
####################################
#   Media recognition & scraping   #
####################################
# Media info search source: themoviedb/douban
SEARCH_SOURCE=themoviedb
# Scrape metadata for imported media files: true/false
SCRAP_METADATA=true
# Whether already-imported media follow TMDB info changes
SCRAP_FOLLOW_TMDB=true
# Scraping source: themoviedb/douban
SCRAP_SOURCE=themoviedb
####################################
#           Media library          #
####################################
# 【*】Transfer method: link/copy/move/softlink
TRANSFER_TYPE=copy
# 【*】Media library directories; separate multiple directories with ,
LIBRARY_PATH=
# Movie library directory name (default: 电影)
LIBRARY_MOVIE_NAME=
# TV library directory name (default: 电视剧)
LIBRARY_TV_NAME=
# Anime library directory name (default: 电视剧/动漫)
LIBRARY_ANIME_NAME=
# Secondary categorization
LIBRARY_CATEGORY=true
# Movie rename format
MOVIE_RENAME_FORMAT={{title}}{% if year %} ({{year}}){% endif %}/{{title}}{% if year %} ({{year}}){% endif %}{% if part %}-{{part}}{% endif %}{% if videoFormat %} - {{videoFormat}}{% endif %}{{fileExt}}
# TV rename format
TV_RENAME_FORMAT={{title}}{% if year %} ({{year}}){% endif %}/Season {{season}}/{{title}} - {{season_episode}}{% if part %}-{{part}}{% endif %}{% if episode %} - 第 {{episode}}{% endif %}{{fileExt}}
####################################
#              Sites               #
####################################
# 【*】CookieCloud server address; defaults to the public server
COOKIECLOUD_HOST=https://movie-pilot.org/cookiecloud
# 【*】CookieCloud user KEY
COOKIECLOUD_KEY=
# 【*】CookieCloud end-to-end encryption password
COOKIECLOUD_PASSWORD=
# 【*】CookieCloud sync interval (minutes)
COOKIECLOUD_INTERVAL=1440
# OCR server address
OCR_HOST=https://movie-pilot.org
# 【*】Browser UA matching the CookieCloud cookies
USER_AGENT=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36 Edg/113.0.1774.57
####################################
#      Subscription & search       #
####################################
# Subscription mode: spider/rss
SUBSCRIBE_MODE=spider
# Refresh interval in RSS subscription mode (minutes)
SUBSCRIBE_RSS_INTERVAL=30
# Subscription search switch
SUBSCRIBE_SEARCH=false
# User IDs whose interactive searches auto-download; separate with ,
AUTO_DOWNLOAD_USER=
####################################
#          Notifications           #
####################################
# 【*】Notification channels: telegram/wechat/slack; separate multiple channels with ,
MESSAGER=telegram
# WeChat corp ID
WECHAT_CORPID=
# WeChat app secret
WECHAT_APP_SECRET=
# WeChat app ID
WECHAT_APP_ID=
# WeChat proxy server; keep the default if no proxy is needed
WECHAT_PROXY=https://qyapi.weixin.qq.com
# WeChat Token
WECHAT_TOKEN=
# WeChat EncodingAESKey
WECHAT_ENCODING_AESKEY=
# WeChat administrators
WECHAT_ADMINS=
# Telegram Bot Token
TELEGRAM_TOKEN=
# Telegram Chat ID
TELEGRAM_CHAT_ID=
# Telegram user IDs; separate with ,
TELEGRAM_USERS=
# Telegram admin IDs; separate with ,
TELEGRAM_ADMINS=
# Slack Bot User OAuth Token
SLACK_OAUTH_TOKEN=
# Slack App-Level Token
SLACK_APP_TOKEN=
# Slack channel name
SLACK_CHANNEL=
# SynologyChat Webhook
SYNOLOGYCHAT_WEBHOOK=
# SynologyChat Token
SYNOLOGYCHAT_TOKEN=
####################################
#             Download             #
####################################
# 【*】Downloader: qbittorrent/transmission
DOWNLOADER=qbittorrent
# Downloader monitor switch
DOWNLOADER_MONITOR=true
# qBittorrent address, IP:PORT
QB_HOST=
# qBittorrent username
QB_USER=
# qBittorrent password
QB_PASSWORD=
# qBittorrent automatic category management
QB_CATEGORY=false
# Transmission address, IP:PORT
TR_HOST=
# Transmission username
TR_USER=
# Transmission password
TR_PASSWORD=
# Torrent tag
TORRENT_TAG=MOVIEPILOT
# 【*】Download directory; the path mapped into the container must match
DOWNLOAD_PATH=/downloads
# Movie download directory; the path mapped into the container must match
DOWNLOAD_MOVIE_PATH=
# TV download directory; the path mapped into the container must match
DOWNLOAD_TV_PATH=
# Anime download directory; the path mapped into the container must match
DOWNLOAD_ANIME_PATH=
# Secondary categorization in the download directory
DOWNLOAD_CATEGORY=false
# Download site subtitles
DOWNLOAD_SUBTITLE=true
####################################
#           Media servers          #
####################################
# 【*】Media servers: emby/jellyfin/plex; separate multiple servers with ,
MEDIASERVER=emby
# Refresh the media library after import
REFRESH_MEDIASERVER=true
# Media server sync interval (hours)
MEDIASERVER_SYNC_INTERVAL=6
# Media server sync blacklist; separate multiple library names with ,
MEDIASERVER_SYNC_BLACKLIST=
# Emby server address, IP:PORT
EMBY_HOST=
# Emby API Key
EMBY_API_KEY=
# Jellyfin server address, IP:PORT
JELLYFIN_HOST=
# Jellyfin API Key
JELLYFIN_API_KEY=
# Plex server address, IP:PORT
PLEX_HOST=
# Plex Token
PLEX_TOKEN=
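A config like app.env above is plain KEY=VALUE lines with `#` comments; a tiny parser sketch (illustrative only, not how MoviePilot actually loads it, which presumably goes through its settings layer):

```python
def parse_env(text: str) -> dict:
    # Skip blanks and comments; split each remaining line on the first '='.
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

sample = "# Time zone\nTZ=Asia/Shanghai\nQB_HOST=\nDOWNLOAD_PATH=/downloads"
print(parse_env(sample))
# {'TZ': 'Asia/Shanghai', 'QB_HOST': '', 'DOWNLOAD_PATH': '/downloads'}
```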

File diff suppressed because one or more lines are too long

View File

@@ -12,7 +12,7 @@ for module in Path(__file__).with_name("models").glob("*.py"):
db_version = input("Enter the version number: ")
db_location = settings.CONFIG_PATH / 'user.db'
script_location = settings.ROOT_PATH / 'alembic'
script_location = settings.ROOT_PATH / 'database'
alembic_cfg = AlembicConfig()
alembic_cfg.set_main_option('script_location', str(script_location))
alembic_cfg.set_main_option('sqlalchemy.url', f"sqlite:///{db_location}")

Some files were not shown because too many files have changed in this diff.