Compare commits


40 Commits
1.1.1 ... v1.x

Author      SHA1        Message                                                                  Date
amtoaer     0b3434f5fd  chore: bump version from 1.1.8 to 1.1.9                                  2024-03-02 14:09:37 +08:00
amtoaer     85e2b2be50  fix: ignore timeout exceptions when they occur                           2024-03-02 14:08:22 +08:00
amtoaer     b666378c00  chore: bump version from 1.1.7 to 1.1.8                                  2024-02-26 23:56:54 +08:00
amtoaer     28ed22dc1b  fix: prevent refresh from re-fetching pages of historical videos after multi-page handling is enabled; fix missing page images  2024-02-26 23:56:26 +08:00
amtoaer     911ce84f5a  fix: escape text content in xml to avoid issues with special characters  2024-02-26 13:26:07 +08:00
amtoaer     e25ed452b4  chore: bump version from 1.1.6 to 1.1.7                                  2024-02-25 01:11:53 +08:00
amtoaer     2f36220582  chore: add a one-step release make command                               2024-02-25 01:11:13 +08:00
amtoaer     f6a5238b6e  fix: fix an execution error                                              2024-02-25 00:47:03 +08:00
amtoaer     ec5776a0ed  feat: adapt recheck for multi-page videos; specify batch_size for all bulk database operations  2024-02-24 21:37:34 +08:00
ᴀᴍᴛᴏᴀᴇʀ     c21da25c6f  feat: extract some stream-selection parameters for video downloads into the config (#47)  2024-02-24 17:36:56 +08:00
amtoaer     bde142a896  doc: fix some wording                                                    2024-02-24 03:52:58 +08:00
amtoaer     af8cd0d819  refactor: save files asynchronously during refresh                       2024-02-24 03:49:28 +08:00
ᴀᴍᴛᴏᴀᴇʀ     a4c362d8ab  feat: support downloading multi-page videos, pending further testing (#24)  2024-02-24 03:38:08 +08:00
amtoaer     1dd760d445  chore: switch code formatter, remove unused dependencies                 2024-02-21 23:54:39 +08:00
amtoaer     0bc7b831de  chore: bump version from 1.1.5 to 1.1.6                                  2024-02-02 22:28:21 +08:00
amtoaer     fe2056ae33  fix: fix download failures for videos without an audio stream            2024-02-02 22:28:09 +08:00
amtoaer     8a7a7e370b  chore: bump version from 1.1.4 to 1.1.5                                  2024-02-02 17:29:13 +08:00
amtoaer     6ce143647c  chore: update upstream dependencies                                      2024-02-02 17:29:07 +08:00
amtoaer     668c67da53  chore: bump version from 1.1.3 to 1.1.4                                  2024-01-20 15:50:12 +08:00
ᴀᴍᴛᴏᴀᴇʀ     9204bbb4ad  fix: fix new config options not being written to the config file; raise the line-length limit (#33)  2024-01-20 15:37:43 +08:00
ᴀᴍᴛᴏᴀᴇʀ     d467750d4f  feat: support specifying codec priority (#32)                            2024-01-20 15:16:48 +08:00
amtoaer     641cc3f48b  chore: optimize dockerfile to shrink image size                          2024-01-06 02:13:00 +08:00
amtoaer     345c764463  fix: fix resources not being released on docker exit                     2024-01-06 00:41:28 +08:00
amtoaer     85b7d3dc9b  chore: restore the previous dockerfile approach; cache reuse is harder but the container is smaller  2024-01-05 23:34:32 +08:00
amtoaer     f1ada17f30  chore: bump version from 1.1.2 to 1.1.3                                  2024-01-05 01:15:12 +08:00
amtoaer     cb0ac7eb67  chore: enable auto-commit and auto-tagging                               2024-01-05 01:13:36 +08:00
amtoaer     31efedbde9  chore: fix dependency issues, streamline the dockerfile                  2024-01-05 01:11:10 +08:00
amtoaer     3defb07325  chore: store the version number and add an entry point to make cross-version migration logic easy to trigger  2024-01-04 22:13:03 +08:00
amtoaer     e36f829e70  chore: introduce bump-version and set the version number correctly       2024-01-04 22:04:10 +08:00
amtoaer     c20b579523  chore: sort dependencies                                                 2024-01-04 21:54:27 +08:00
amtoaer     ceec222604  chore: update upstream dependencies, fix cookie refresh failures         2024-01-04 21:50:28 +08:00
amtoaer     60ea7795ae  chore: change the base image tag                                         2024-01-04 21:07:08 +08:00
DDSDerek    6cbacbd127  chore: Optimization docker (#17)                                         2024-01-04 20:51:03 +08:00
                        * feat: docker build adds cache
                        * fix: dockerfile optimization
                        * doc: dockerhub pictures are not displayed properly
                        Co-authored-by: DDSRem <1448139087@qq.com>
DDSDerek    8ea2fbe0f9  fix: docker meta username error (#16)                                    2023-12-30 14:31:48 +08:00
                        Co-authored-by: DDSRem <1448139087@qq.com>
DDSDerek    e3fded16ac  feat: support arm64 architecture (#15)                                   2023-12-30 14:22:26 +08:00
                        Co-authored-by: DDSRem <1448139087@qq.com>
amtoaer     961913c4fb  doc: add subtitle-related documentation                                  2023-12-07 22:11:37 +08:00
amtoaer     fa20e5efee  feat: expose danmaku settings                                            2023-12-07 21:45:18 +08:00
amtoaer     38fb0a4560  fix: remove config options safely                                        2023-12-07 21:29:57 +08:00
amtoaer     9e94e3b73e  chore: split try/except into blocks, remove unused settings              2023-12-07 21:15:40 +08:00
amtoaer     b955a9fe45  chore: replace methods marked as deprecated                              2023-12-06 18:17:17 +08:00
19 changed files with 1665 additions and 1183 deletions

View File

@@ -12,18 +12,37 @@ jobs:
       -
         name: Checkout
         uses: actions/checkout@v3
+      -
+        name: Docker meta
+        id: meta
+        uses: docker/metadata-action@v5
+        with:
+          images: ${{ secrets.DOCKERHUB_USERNAME }}/bili-sync
+          tags: |
+            type=raw,value=debug
+      -
+        name: Set Up QEMU
+        uses: docker/setup-qemu-action@v3
+      -
+        name: Set Up Buildx
+        uses: docker/setup-buildx-action@v3
       -
         name: Login to DockerHub
         uses: docker/login-action@v3
         with:
           username: ${{ secrets.DOCKERHUB_USERNAME }}
           password: ${{ secrets.DOCKERHUB_TOKEN }}
       -
         name: Build and push images
         uses: docker/build-push-action@v5
         with:
           context: .
-          file: ./Dockerfile
+          file: Dockerfile
+          platforms: |
+            linux/amd64
+            linux/arm64/v8
           push: true
-          tags: |
-            ${{ secrets.DOCKERHUB_USERNAME }}/bili-sync:debug
+          tags: ${{ steps.meta.outputs.tags }}
+          labels: ${{ steps.meta.outputs.labels }}
+          cache-from: type=gha, scope=${{ github.workflow }}
+          cache-to: type=gha, scope=${{ github.workflow }}

View File

@@ -12,22 +12,41 @@ jobs:
       -
         name: Checkout
         uses: actions/checkout@v3
+      -
+        name: Docker meta
+        id: meta
+        uses: docker/metadata-action@v5
+        with:
+          images: ${{ secrets.DOCKERHUB_USERNAME }}/bili-sync
+          tags: |
+            type=raw,value=${{ github.ref_name }}
+            type=raw,value=latest
+      -
+        name: Set Up QEMU
+        uses: docker/setup-qemu-action@v3
+      -
+        name: Set Up Buildx
+        uses: docker/setup-buildx-action@v3
       -
         name: Login to DockerHub
         uses: docker/login-action@v3
         with:
           username: ${{ secrets.DOCKERHUB_USERNAME }}
           password: ${{ secrets.DOCKERHUB_TOKEN }}
       -
         name: Build and push images
         uses: docker/build-push-action@v5
         with:
           context: .
-          file: ./Dockerfile
+          file: Dockerfile
+          platforms: |
+            linux/amd64
+            linux/arm64/v8
           push: true
-          tags: |
-            ${{ secrets.DOCKERHUB_USERNAME }}/bili-sync:${{ github.ref_name }}
-            ${{ secrets.DOCKERHUB_USERNAME }}/bili-sync:latest
+          tags: ${{ steps.meta.outputs.tags }}
+          labels: ${{ steps.meta.outputs.labels }}
+          cache-from: type=gha, scope=${{ github.workflow }}
+          cache-to: type=gha, scope=${{ github.workflow }}
       -
         name: Update DockerHub description
         uses: peter-evans/dockerhub-description@v3

View File

@@ -1,22 +1,41 @@
FROM python:3.11.6-alpine3.18 AS base FROM python:3.11.7-alpine3.19 as base
WORKDIR /app WORKDIR /app
ENV BILI_IN_DOCKER=true ENV LANG=zh_CN.UTF-8 \
TZ=Asia/Shanghai \
BILI_IN_DOCKER=true
RUN apk add --no-cache ffmpeg tini \
&& apk add --no-cache --virtual .build-deps \
gcc \
musl-dev \
libffi-dev \
openssl-dev \
&& pip install poetry==1.7.1 pip3-autoremove==1.2.0
COPY poetry.lock pyproject.toml ./ COPY poetry.lock pyproject.toml ./
RUN apk add ffmpeg \ RUN poetry config virtualenvs.create false \
&& apk add --no-cache --virtual .build-deps \ && poetry install --only main --no-root \
gcc \ && pip3-autoremove -y poetry pip3-autoremove \
musl-dev \ && apk del .build-deps \
libffi-dev \ && rm -rf \
openssl-dev \ /root/.cache \
&& pip install poetry \ /tmp/*
&& poetry config virtualenvs.create false \
&& poetry install --no-dev --no-interaction --no-ansi \
&& apk del .build-deps
COPY . . COPY . .
ENTRYPOINT [ "python", "entry.py" ] FROM scratch
WORKDIR /app
ENV LANG=zh_CN.UTF-8 \
TZ=Asia/Shanghai \
BILI_IN_DOCKER=true
COPY --from=base / /
ENTRYPOINT [ "tini", "python", "entry.py" ]
VOLUME [ "/app/config", "/app/data", "/app/thumb", "/Videos/Bilibilis" ]

View File

@@ -1,4 +1,4 @@
-.PHONY: install fmt start-daemon start-once db-init db-migrate db-upgrade sync-conf
+.PHONY: install fmt start-daemon start-once db-init db-migrate db-upgrade sync-conf release

 install:
 	@echo "Installing dependencies..."
@@ -6,8 +6,8 @@ install:

 fmt:
 	@echo "Formatting..."
-	@poetry run black .
-	@poetry run ruff --fix .
+	@poetry run ruff format .
+	@poetry run ruff check --fix .

 start-daemon:
 	@poetry run python entry.py
@@ -28,4 +28,12 @@ sync-conf:
 	@echo "Syncing config..."
 	@cp ${CONFIG_SRC} ./config/
 	@cp ${DB_SRC} ./data/
 	@echo "Done."
+
+release:
+	@echo "Releasing..."
+	@git checkout main
+	@bump-my-version bump patch
+	@git push origin main
+	@git push origin --tags
+	@echo "Done."

View File

@@ -13,15 +13,23 @@
 ## Screenshots

-![Downloading videos](asset/run.png)
-![EMBY recognition](asset/emby.png)
+![Downloading videos](https://raw.githubusercontent.com/amtoaer/bili-sync/main/asset/run.png)
+![EMBY recognition](https://raw.githubusercontent.com/amtoaer/bili-sync/main/asset/emby.png)

 ## Configuration

 For the first five configuration options, see the [credential acquisition guide](https://nemo2011.github.io/bilibili-api/#/get-credential).

 ```python
+@dataclass
+class SubtitleConfig(DataClassJsonMixin):
+    font_name: str = "微软雅黑,黑体"  # font
+    font_size: float = 40  # font size
+    alpha: float = 0.8  # opacity
+    fly_time: float = 5  # scrolling danmaku duration
+    static_time: float = 10  # static danmaku duration
+
 class Config(DataClassJsonMixin):
     sessdata: str = ""
     bili_jct: str = ""
@@ -29,8 +37,8 @@ class Config(DataClassJsonMixin):
     dedeuserid: str = ""
     ac_time_value: str = ""
     interval: int = 20  # interval between task runs
-    favorite_ids: list[int] = field(default_factory=list)  # favorite list ids
     path_mapper: dict[int, str] = field(default_factory=dict)  # mapping from favorite list id to storage directory
+    subtitle: SubtitleConfig = field(default_factory=SubtitleConfig)  # subtitle settings
 ```

 By default the program stores the config file at `${program path}/config/config.json` and the database file at `${program path}/data/data.db`; if they do not exist, they are created and written with the initial configuration.

@@ -48,7 +56,7 @@ services:
   bili-sync:
     image: amtoaer/bili-sync:latest
     user: 1000:1000  # the user whose permissions the container runs with; defaults to root if unset, setting it is recommended
-    tty: true  # add this line to get colored logs
+    tty: true  # add this line so supporting terminals show colored logs (remove it if logs come out garbled)
     volumes:
       - /home/amtoaer/Videos/Bilibilis/:/Videos/Bilibilis/  # video files
      - /home/amtoaer/.config/nas/bili-sync/config/:/app/config/  # config file
@@ -75,11 +83,15 @@ services:
     "dedeuserid": "xxxxxxxxxxxxxxxxxx",
     "ac_time_value": "xxxxxxxxxxxxxxxxxx",
     "interval": 20,
-    "favorite_ids": [
-        711322958
-    ],
     "path_mapper": {
         "711322958": "/Videos/Bilibilis/Bilibili-711322958/"
-    }
+    },
+    "subtitle": {
+        "font_name": "微软雅黑,黑体",
+        "font_size": 40.0,
+        "alpha": 0.8,
+        "fly_time": 5.0,
+        "static_time": 10.0
+    }
 }
 ```
@@ -117,7 +129,7 @@ services:
 - [x] credential authentication
 - [x] stream selection
 - [x] video download
 - [x] concurrent downloads
 - [x] run as a daemon
 - [x] build nfo and poster files for single-episode import into emby
 - [x] support favorite-list pagination, download full history

View File

@@ -13,30 +13,32 @@ from utils import aexists, aremove

 async def recheck():
     """Refresh the status of videos in the database; files found missing are marked as not downloaded so the next run re-downloads them. Call this after manually deleting files."""
+    async def is_ok(item: FavoriteItem) -> bool:
+        if len(item.pages):
+            # a multi-page video only counts as present if every page exists
+            return all(await asyncio.gather(*[aexists(page.video_path) for page in item.pages]))
+        return await aexists(item.video_path)
+
     items = await FavoriteItem.filter(
-        type=MediaType.VIDEO,
-        status=MediaStatus.NORMAL,
-        downloaded=True,
-    )
-    exists = await asyncio.gather(*[aexists(item.video_path) for item in items])
-    for item, exist in zip(items, exists):
-        if isinstance(exist, Exception):
-            logger.error(
-                "Error when checking file {} {}: {}",
-                item.bvid,
-                item.name,
-                exist,
-            )
+        type=MediaType.VIDEO, status=MediaStatus.NORMAL, downloaded=True
+    ).prefetch_related("pages")
+    items_to_update = []
+    for item in items:
+        for page in item.pages:
+            # appears to be a tortoise bug: prefetch_related does not populate the reverse reference, patch it manually
+            page.favorite_item = item
+    items_ok = await asyncio.gather(*[is_ok(item) for item in items], return_exceptions=True)
+    for item, ok in zip(items, items_ok):
+        if isinstance(ok, Exception):
+            logger.error("Error when checking file {} {}: {}.", item.bvid, item.name, ok)
             continue
-        if not exist:
-            logger.info(
-                "File {} {} not exists, mark as not downloaded.",
-                item.bvid,
-                item.name,
-            )
+        if not ok:
+            logger.info("Lack of file detected for {} {}, mark as not downloaded.", item.bvid, item.name)
             item.downloaded = False
+            items_to_update.append(item)
     logger.info("Updating database...")
-    await FavoriteItem.bulk_update(items, fields=["downloaded"])
+    await FavoriteItem.bulk_update(items_to_update, fields=["downloaded"], batch_size=300)
     logger.info("Database updated.")
@@ -52,10 +54,7 @@ async def _refresh_favorite_item_info(
     items = await FavoriteItem.filter(downloaded=True).prefetch_related("upper")
     if force:
         # on a forced refresh, delete all existing content first
-        await asyncio.gather(
-            *[aremove(path) for item in items for path in path_getter(item)],
-            return_exceptions=True,
-        )
+        await asyncio.gather(*[aremove(path) for item in items for path in path_getter(item)], return_exceptions=True)
     await asyncio.gather(
         *[
             process_favorite_item(
@@ -65,6 +64,7 @@ async def _refresh_favorite_item_info(
                 process_nfo=process_nfo,
                 process_upper=process_upper,
                 process_subtitle=process_subtitle,
+                refresh_mode=True,
             )
             for item in items
         ],
@@ -72,30 +72,14 @@ async def _refresh_favorite_item_info(
     )

-refresh_nfo = functools.partial(
-    _refresh_favorite_item_info, lambda item: [item.nfo_path], process_nfo=True
-)
-refresh_poster = functools.partial(
-    _refresh_favorite_item_info,
-    lambda item: [item.poster_path],
-    process_poster=True,
-)
-refresh_video = functools.partial(
-    _refresh_favorite_item_info,
-    lambda item: [item.video_path],
-    process_video=True,
-)
-refresh_upper = functools.partial(
-    _refresh_favorite_item_info,
-    lambda item: item.upper_path,
-    process_upper=True,
-)
+refresh_nfo = functools.partial(_refresh_favorite_item_info, lambda item: [item.nfo_path], process_nfo=True)
+refresh_poster = functools.partial(_refresh_favorite_item_info, lambda item: [item.poster_path], process_poster=True)
+refresh_video = functools.partial(_refresh_favorite_item_info, lambda item: [item.video_path], process_video=True)
+refresh_upper = functools.partial(_refresh_favorite_item_info, lambda item: item.upper_path, process_upper=True)
 refresh_subtitle = functools.partial(
-    _refresh_favorite_item_info,
-    lambda item: [item.subtitle_path],
-    process_subtitle=True,
+    _refresh_favorite_item_info, lambda item: [item.subtitle_path], process_subtitle=True
 )

View File

@@ -4,11 +4,7 @@ from pathlib import Path

 def get_base(dir_name: str) -> Path:
-    path = (
-        Path(base)
-        if (base := os.getenv(f"{dir_name.upper()}_PATH"))
-        else Path(__file__).parent / dir_name
-    )
+    path = Path(base) if (base := os.getenv(f"{dir_name.upper()}_PATH")) else Path(__file__).parent / dir_name
     path.mkdir(parents=True, exist_ok=True)
     return path
@@ -37,20 +33,18 @@ class MediaStatus(IntEnum):
     @property
     def text(self) -> str:
-        return {
-            MediaStatus.NORMAL: "normal",
-            MediaStatus.INVISIBLE: "invisible",
-            MediaStatus.DELETED: "deleted",
-        }[self]
+        return {MediaStatus.NORMAL: "normal", MediaStatus.INVISIBLE: "invisible", MediaStatus.DELETED: "deleted"}[self]


+class NfoMode(IntEnum):
+    MOVIE = 1
+    TVSHOW = 2
+    EPISODE = 3
+    UPPER = 4


 TORTOISE_ORM = {
     "connections": {"default": f"sqlite://{DEFAULT_DATABASE_PATH}"},
-    "apps": {
-        "models": {
-            "models": ["models", "aerich.models"],
-            "default_connection": "default",
-        },
-    },
+    "apps": {"models": {"models": ["models", "aerich.models"], "default_connection": "default"}},
     "use_tz": True,
 }

View File

@@ -6,28 +6,18 @@ from settings import settings

 class PersistedCredential(Credential):
     def __init__(self) -> None:
         super().__init__(
-            settings.sessdata,
-            settings.bili_jct,
-            settings.buvid3,
-            settings.dedeuserid,
-            settings.ac_time_value,
+            settings.sessdata, settings.bili_jct, settings.buvid3, settings.dedeuserid, settings.ac_time_value
         )

     async def refresh(self) -> None:
         await super().refresh()
-        (
-            settings.sessdata,
-            settings.bili_jct,
-            settings.dedeuserid,
-            settings.ac_time_value,
-        ) = (
+        (settings.sessdata, settings.bili_jct, settings.dedeuserid, settings.ac_time_value) = (
             self.sessdata,
             self.bili_jct,
             self.dedeuserid,
             self.ac_time_value,
         )
-        # use a synchronous call for now
-        settings.save()
+        await settings.asave()


 credential = PersistedCredential()

View File

@@ -1,17 +1,12 @@
import asyncio import asyncio
import os
import signal
import sys import sys
import uvloop import uvloop
from loguru import logger from loguru import logger
from commands import ( from commands import recheck, refresh_nfo, refresh_poster, refresh_subtitle, refresh_upper, refresh_video
recheck,
refresh_nfo,
refresh_poster,
refresh_subtitle,
refresh_upper,
refresh_video,
)
from models import init_model from models import init_model
from processor import cleanup, process from processor import cleanup, process
from settings import settings from settings import settings
@@ -45,8 +40,16 @@ async def entry() -> None:
if __name__ == "__main__": if __name__ == "__main__":
# 确保 docker 退出时正确触发资源释放
signal.signal(signal.SIGTERM, lambda *_: os.kill(os.getpid(), signal.SIGINT))
with asyncio.Runner() as runner: with asyncio.Runner() as runner:
try: try:
runner.run(entry()) runner.run(entry())
except Exception:
logger.exception("Unexpected error occurred, exiting...")
except KeyboardInterrupt:
logger.error("Exit Signal Received, exiting...")
finally: finally:
logger.info("Cleaning up resources...")
runner.run(cleanup()) runner.run(cleanup())
logger.info("Done, exited.")
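The SIGTERM line added here exists because `docker stop` sends SIGTERM, while the program's shutdown path is driven by SIGINT/KeyboardInterrupt. A minimal, self-contained sketch of that forwarding trick (Unix-only):

```python
import os
import signal
import time

# Forward SIGTERM (what `docker stop` sends) to SIGINT, so the normal
# KeyboardInterrupt-based cleanup path runs instead of a hard kill.
signal.signal(signal.SIGTERM, lambda *_: os.kill(os.getpid(), signal.SIGINT))

caught = False
try:
    os.kill(os.getpid(), signal.SIGTERM)  # simulate `docker stop`
    time.sleep(0.5)  # give the handler chain a chance to fire
except KeyboardInterrupt:
    caught = True
```

After the forwarding handler is installed, a SIGTERM surfaces in the main thread as an ordinary KeyboardInterrupt, which `try`/`finally` blocks can handle.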

View File

@@ -0,0 +1,14 @@
from tortoise import BaseDBAsyncClient
async def upgrade(db: BaseDBAsyncClient) -> str:
return """
CREATE TABLE IF NOT EXISTS "program" (
"id" INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
"version" VARCHAR(20) NOT NULL
);"""
async def downgrade(db: BaseDBAsyncClient) -> str:
return """
DROP TABLE IF EXISTS "program";"""

View File

@@ -0,0 +1,21 @@
from tortoise import BaseDBAsyncClient
async def upgrade(db: BaseDBAsyncClient) -> str:
return """
CREATE TABLE IF NOT EXISTS "favoriteitempage" (
"id" INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
"cid" INT NOT NULL,
"page" INT NOT NULL,
"name" VARCHAR(255) NOT NULL,
"image" TEXT NOT NULL,
"status" SMALLINT NOT NULL DEFAULT 1 /* NORMAL: 1\nINVISIBLE: 2\nDELETED: 3 */,
"downloaded" INT NOT NULL DEFAULT 0,
"favorite_item_id" INT NOT NULL REFERENCES "favoriteitem" ("id") ON DELETE CASCADE,
CONSTRAINT "uid_favoriteite_favorit_c3b50e" UNIQUE ("favorite_item_id", "page")
) /* pages of a favorite item */;"""
async def downgrade(db: BaseDBAsyncClient) -> str:
return """
DROP TABLE IF EXISTS "favoriteitempage";"""

models.py (157)
View File

@@ -3,17 +3,12 @@ from asyncio import create_subprocess_exec
 from pathlib import Path

 from tortoise import Tortoise, fields
+from tortoise.fields import Field
 from tortoise.models import Model

-from constants import (
-    DEFAULT_THUMB_PATH,
-    MIGRATE_COMMAND,
-    TORTOISE_ORM,
-    MediaStatus,
-    MediaType,
-)
+from constants import DEFAULT_THUMB_PATH, MIGRATE_COMMAND, TORTOISE_ORM, MediaStatus, MediaType
 from settings import settings
-from utils import aopen
+from version import VERSION


 class FavoriteList(Model):
@@ -40,31 +35,11 @@ class Upper(Model):
     @property
     def thumb_path(self) -> Path:
-        return (
-            DEFAULT_THUMB_PATH / str(self.mid)[0] / f"{self.mid}" / "folder.jpg"
-        )
+        return DEFAULT_THUMB_PATH / str(self.mid)[0] / f"{self.mid}" / "folder.jpg"

     @property
     def meta_path(self) -> Path:
-        return (
-            DEFAULT_THUMB_PATH / str(self.mid)[0] / f"{self.mid}" / "person.nfo"
-        )
-
-    async def save_metadata(self):
-        async with aopen(self.meta_path, "w") as f:
-            await f.write(
-                f"""
-<?xml version="1.0" encoding="utf-8" standalone="yes"?>
-<person>
-  <plot />
-  <outline />
-  <lockdata>false</lockdata>
-  <dateadded>{self.created_at.strftime("%Y-%m-%d %H:%M:%S")}</dateadded>
-  <title>{self.mid}</title>
-  <sorttitle>{self.mid}</sorttitle>
-</person>
-""".strip()
-            )
+        return DEFAULT_THUMB_PATH / str(self.mid)[0] / f"{self.mid}" / "person.nfo"


 class FavoriteItem(Model):
@@ -73,17 +48,13 @@ class FavoriteItem(Model):
     id = fields.IntField(pk=True)
     name = fields.CharField(max_length=255)
     type = fields.IntEnumField(enum_type=MediaType)
-    status = fields.IntEnumField(
-        enum_type=MediaStatus, default=MediaStatus.NORMAL
-    )
+    status = fields.IntEnumField(enum_type=MediaStatus, default=MediaStatus.NORMAL)
     bvid = fields.CharField(max_length=255)
     desc = fields.TextField()
     cover = fields.TextField()
     tags = fields.JSONField(null=True)
-    favorite_list = fields.ForeignKeyField(
-        "models.FavoriteList", related_name="items"
-    )
-    upper = fields.ForeignKeyField("models.Upper", related_name="uploads")
+    favorite_list: Field[FavoriteList] = fields.ForeignKeyField("models.FavoriteList", related_name="items")
+    upper: Field[Upper] = fields.ForeignKeyField("models.Upper", related_name="uploads")
     ctime = fields.DatetimeField()
     pubtime = fields.DatetimeField()
     fav_time = fields.DatetimeField()
@@ -95,65 +66,129 @@ class FavoriteItem(Model):
         unique_together = (("bvid", "favorite_list_id"),)

     @property
-    def safe_name(self) -> str:
-        return self.name.replace("/", "_")
-
-    @property
     def tmp_video_path(self) -> Path:
-        return (
-            Path(settings.path_mapper[self.favorite_list_id])
-            / f"tmp_{self.bvid}_video"
-        )
+        return Path(settings.path_mapper[self.favorite_list_id]) / f"tmp_{self.bvid}_video"

     @property
     def tmp_audio_path(self) -> Path:
-        return (
-            Path(settings.path_mapper[self.favorite_list_id])
-            / f"tmp_{self.bvid}_audio"
-        )
+        return Path(settings.path_mapper[self.favorite_list_id]) / f"tmp_{self.bvid}_audio"

     @property
     def video_path(self) -> Path:
-        return (
-            Path(settings.path_mapper[self.favorite_list_id])
-            / f"{self.bvid}.mp4"
-        )
+        return Path(settings.path_mapper[self.favorite_list_id]) / f"{self.bvid}.mp4"

     @property
     def nfo_path(self) -> Path:
-        return (
-            Path(settings.path_mapper[self.favorite_list_id])
-            / f"{self.bvid}.nfo"
-        )
+        return Path(settings.path_mapper[self.favorite_list_id]) / f"{self.bvid}.nfo"

     @property
     def poster_path(self) -> Path:
-        return (
-            Path(settings.path_mapper[self.favorite_list_id])
-            / f"{self.bvid}-poster.jpg"
-        )
+        return Path(settings.path_mapper[self.favorite_list_id]) / f"{self.bvid}-poster.jpg"

     @property
     def upper_path(self) -> list[Path]:
-        return [
-            self.upper.thumb_path,
-            self.upper.meta_path,
-        ]
+        return [self.upper.thumb_path, self.upper.meta_path]

     @property
     def subtitle_path(self) -> Path:
-        return (
-            Path(settings.path_mapper[self.favorite_list_id])
-            / f"{self.bvid}.zh-CN.default.ass"
-        )
+        return Path(settings.path_mapper[self.favorite_list_id]) / f"{self.bvid}.zh-CN.default.ass"
+
+    @property
+    def tvshow_nfo_path(self) -> Path:
+        """used for multi-page videos"""
+        return Path(settings.path_mapper[self.favorite_list_id]) / self.bvid / "tvshow.nfo"
+
+    @property
+    def tvshow_poster_path(self) -> Path:
+        """used for multi-page videos"""
+        return Path(settings.path_mapper[self.favorite_list_id]) / self.bvid / "poster.jpg"
+
+
+class FavoriteItemPage(Model):
+    """a single page of a favorite item"""
+
+    id = fields.IntField(pk=True)
+    favorite_item: Field[FavoriteItem] = fields.ForeignKeyField("models.FavoriteItem", related_name="pages")
+    cid = fields.IntField()
+    page = fields.IntField()
+    name = fields.CharField(max_length=255)
+    image = fields.TextField()
+    status = fields.IntEnumField(enum_type=MediaStatus, default=MediaStatus.NORMAL)
+    downloaded = fields.BooleanField(default=False)
+
+    class Meta:
+        unique_together = (("favorite_item_id", "page"),)
+
+    @property
+    def tmp_video_path(self) -> Path:
+        return (
+            Path(settings.path_mapper[self.favorite_item.favorite_list_id])
+            / self.favorite_item.bvid
+            / "Season 1"
+            / f"tmp_{self.favorite_item.bvid} - S01E{f'{self.page:02d}'}_video"
+        )
+
+    @property
+    def tmp_audio_path(self) -> Path:
+        return (
+            Path(settings.path_mapper[self.favorite_item.favorite_list_id])
+            / self.favorite_item.bvid
+            / "Season 1"
+            / f"tmp_{self.favorite_item.bvid} - S01E{f'{self.page:02d}'}_audio"
+        )
+
+    @property
+    def video_path(self) -> Path:
+        return (
+            Path(settings.path_mapper[self.favorite_item.favorite_list_id])
+            / self.favorite_item.bvid
+            / "Season 1"
+            / f"{self.favorite_item.bvid} - S01E{f'{self.page:02d}'}.mp4"
+        )
+
+    @property
+    def nfo_path(self) -> Path:
+        return (
+            Path(settings.path_mapper[self.favorite_item.favorite_list_id])
+            / self.favorite_item.bvid
+            / "Season 1"
+            / f"{self.favorite_item.bvid} - S01E{f'{self.page:02d}'}.nfo"
+        )
+
+    @property
+    def poster_path(self) -> Path:
+        return (
+            Path(settings.path_mapper[self.favorite_item.favorite_list_id])
+            / self.favorite_item.bvid
+            / "Season 1"
+            / f"{self.favorite_item.bvid} - S01E{f'{self.page:02d}'}-thumb.jpg"
+        )
+
+    @property
+    def subtitle_path(self) -> Path:
+        return (
+            Path(settings.path_mapper[self.favorite_item.favorite_list_id])
+            / self.favorite_item.bvid
+            / "Season 1"
+            / f"{self.favorite_item.bvid} - S01E{f'{self.page:02d}'}.zh-CN.default.ass"
+        )
+
+
+class Program(Model):
+    id = fields.IntField(pk=True)
+    version = fields.CharField(max_length=20)


 async def init_model() -> None:
     await Tortoise.init(config=TORTOISE_ORM)
     migrate_commands = (
-        [MIGRATE_COMMAND, "upgrade"]
-        if os.getenv("BILI_IN_DOCKER")
-        else ["poetry", "run", MIGRATE_COMMAND, "upgrade"]
+        [MIGRATE_COMMAND, "upgrade"] if os.getenv("BILI_IN_DOCKER") else ["poetry", "run", MIGRATE_COMMAND, "upgrade"]
     )
     process = await create_subprocess_exec(*migrate_commands)
     await process.communicate()
+    program, created = await Program.get_or_create(defaults={"version": VERSION})
+    if created or program.version != VERSION:
+        # put migration logic for new versions here
+        pass
+    program.version = VERSION
+    await program.save()

nfo.py (151)
View File

@@ -1,28 +1,78 @@
 import datetime
+from abc import abstractmethod
 from dataclasses import dataclass
 from pathlib import Path

+from models import FavoriteItem, FavoriteItemPage, Upper
 from utils import aopen


 @dataclass
-class Actor:
+class Base:
+    """base class with shared helpers"""
+
+    @abstractmethod
+    def to_xml(self) -> str:
+        ...
+
+    @staticmethod
+    def escape(s: str) -> str:
+        """escape xml special characters"""
+        return s.translate(str.maketrans({"<": "&lt;", ">": "&gt;", "&": "&amp;", "'": "&apos;", '"': "&quot;"}))
+
+    async def to_file(self, path: Path) -> None:
+        """write the xml to a file"""
+        async with aopen(path, "w", encoding="utf-8") as f:
+            await f.write(self.to_xml())
+
+
+@dataclass
+class EpisodeInfo(Base):
+    """per-episode info for a multi-page video"""
+
+    title: str
+    season: int
+    episode: int
+
+    @staticmethod
+    def from_favorite_item_page(page: FavoriteItemPage) -> "EpisodeInfo":
+        return EpisodeInfo(title=page.name, season=1, episode=page.page)
+
+    def to_xml(self) -> str:
+        return f"""
+<?xml version="1.0" encoding="utf-8" standalone="yes"?>
+<episodedetails>
+  <plot />
+  <outline />
+  <title>{self.escape(self.title)}</title>
+  <season>{self.season}</season>
+  <episode>{self.episode}</episode>
+</episodedetails>
+""".strip()
+
+
+@dataclass
+class Actor(Base):
     name: str
     role: str

+    @staticmethod
+    def from_upper(upper: Upper) -> "Actor":
+        return Actor(name=upper.mid, role=upper.name)
+
     def to_xml(self) -> str:
         return f"""
 <actor>
   <name>{self.name}</name>
-  <role>{self.role}</role>
+  <role>{self.escape(self.role)}</role>
 </actor>
-""".strip(
-            "\n"
-        )
+""".strip()


 @dataclass
-class EpisodeInfo:
+class MovieInfo(Base):
+    """info for a single-page video"""
+
     title: str
     plot: str
     tags: list[str]
@@ -30,29 +80,94 @@
     bvid: str
     aired: datetime.datetime

-    async def write_nfo(self, path: Path) -> None:
-        async with aopen(path, "w", encoding="utf-8") as f:
-            await f.write(self.to_xml())
+    @staticmethod
+    def from_favorite_item(fav_item: FavoriteItem) -> "MovieInfo":
+        return MovieInfo(
+            title=fav_item.name,
+            plot=fav_item.desc,
+            actor=[Actor.from_upper(fav_item.upper)],
+            tags=fav_item.tags,
+            bvid=fav_item.bvid,
+            aired=fav_item.ctime,
+        )

     def to_xml(self) -> str:
         actor = "\n".join(_.to_xml() for _ in self.actor)
         tags = (
-            "\n".join(f"  <genre>{_}</genre>" for _ in self.tags)
-            if isinstance(self.tags, list)
-            else ""
+            "\n".join(f"  <genre>{self.escape(_)}</genre>" for _ in self.tags) if isinstance(self.tags, list) else ""
         )
         return f"""
 <?xml version="1.0" encoding="utf-8" standalone="yes"?>
-<episodedetails>
-    <plot><![CDATA[{self.plot}]]></plot>
+<movie>
+    <plot><![CDATA[{self.escape(self.plot)}]]></plot>
     <outline />
-    <title>{self.title}</title>
+    <title>{self.escape(self.title)}</title>
     {actor}
     <year>{self.aired.year}</year>
     {tags}
     <uniqueid type="bilibili">{self.bvid}</uniqueid>
     <aired>{self.aired.strftime("%Y-%m-%d")}</aired>
-</episodedetails>
-""".strip(
-            "\n"
-        )
+</movie>
+""".strip()
+
+
+@dataclass
+class TVShowInfo(Base):
+    title: str
+    plot: str
+    tags: list[str]
+    actor: list[Actor]
+    bvid: str
+    aired: datetime.datetime
+
+    @staticmethod
+    def from_favorite_item(fav_item: FavoriteItem) -> "TVShowInfo":
+        return TVShowInfo(
+            title=fav_item.name,
+            plot=fav_item.desc,
+            actor=[Actor.from_upper(fav_item.upper)],
+            tags=fav_item.tags,
+            bvid=fav_item.bvid,
+            aired=fav_item.ctime,
+        )
+
+    def to_xml(self) -> str:
+        actor = "\n".join(_.to_xml() for _ in self.actor)
+        tags = (
+            "\n".join(f"  <genre>{self.escape(_)}</genre>" for _ in self.tags) if isinstance(self.tags, list) else ""
+        )
+        return f"""
+<?xml version="1.0" encoding="utf-8" standalone="yes"?>
+<tvshow>
+    <plot><![CDATA[{self.escape(self.plot)}]]></plot>
+    <outline />
+    <title>{self.escape(self.title)}</title>
+    {actor}
+    <year>{self.aired.year}</year>
+    {tags}
+    <uniqueid type="bilibili">{self.bvid}</uniqueid>
+    <aired>{self.aired.strftime("%Y-%m-%d")}</aired>
+</tvshow>
+""".strip()
+
+
+@dataclass
+class UpperInfo(Base):
+    mid: int
+    created_at: datetime.datetime
+
+    @staticmethod
+    def from_upper(upper: Upper) -> "UpperInfo":
+        return UpperInfo(mid=upper.mid, created_at=upper.created_at)
+
+    def to_xml(self) -> str:
+        return f"""
+<?xml version="1.0" encoding="utf-8" standalone="yes"?>
+<person>
+  <plot />
+  <outline />
+  <lockdata>false</lockdata>
+  <dateadded>{self.created_at.strftime("%Y-%m-%d %H:%M:%S")}</dateadded>
+  <title>{self.mid}</title>
+  <sorttitle>{self.mid}</sorttitle>
+</person>
+""".strip()

poetry.lock (generated): diff suppressed because it is too large
@@ -1,17 +1,21 @@
 import asyncio
+import contextlib
 import datetime
 from asyncio import Semaphore, create_subprocess_exec
-from asyncio.subprocess import DEVNULL
+from asyncio.subprocess import PIPE
+from pathlib import Path
 
 from bilibili_api import ass, favorite_list, video
 from bilibili_api.exceptions import ResponseCodeException
 from loguru import logger
-from tortoise import Tortoise
+from tortoise.connection import connections
+from tortoise.models import Model
 
-from constants import FFMPEG_COMMAND, MediaStatus, MediaType
+from constants import FFMPEG_COMMAND, MediaStatus, MediaType, NfoMode
 from credential import credential
-from models import FavoriteItem, FavoriteList, Upper
-from nfo import Actor, EpisodeInfo
+from models import FavoriteItem, FavoriteItemPage, FavoriteList, Upper
+from nfo import Base as NfoBase
+from nfo import EpisodeInfo, MovieInfo, TVShowInfo, UpperInfo
 from settings import settings
 from utils import aexists, amakedirs, client, download_content
@@ -20,10 +24,11 @@ anchor = datetime.date.today()
 async def cleanup() -> None:
     await client.aclose()
-    await Tortoise.close_connections()
+    await connections.close_all()
 
 
 def concurrent_decorator(concurrency: int) -> callable:
+    """A simple concurrency limiter: at most `concurrency` decorated calls may run at once"""
     sem = Semaphore(value=concurrency)
 
     def decorator(func: callable) -> callable:
@@ -36,18 +41,12 @@ def concurrent_decorator(concurrency: int) -> callable:
     return decorator
 
 
-async def manage_model(medias: list[dict], fav_list: FavoriteList) -> None:
+async def update_favorite_item(medias: list[dict], fav_list: FavoriteList) -> None:
+    """Update database records from the favorite list's video list"""
     uppers = [
-        Upper(
-            mid=media["upper"]["mid"],
-            name=media["upper"]["name"],
-            thumb=media["upper"]["face"],
-        )
-        for media in medias
+        Upper(mid=media["upper"]["mid"], name=media["upper"]["name"], thumb=media["upper"]["face"]) for media in medias
     ]
-    await Upper.bulk_create(
-        uppers, on_conflict=["mid"], update_fields=["name", "thumb"]
-    )
+    await Upper.bulk_create(uppers, on_conflict=["mid"], update_fields=["name", "thumb"], batch_size=300)
     items = [
         FavoriteItem(
             name=media["title"],
@@ -67,15 +66,20 @@ async def manage_model(medias: list[dict], fav_list: FavoriteList) -> None:
     await FavoriteItem.bulk_create(
         items,
         on_conflict=["bvid", "favorite_list_id"],
-        update_fields=[
-            "name",
-            "type",
-            "desc",
-            "cover",
-            "ctime",
-            "pubtime",
-            "fav_time",
-        ],
+        update_fields=["name", "type", "desc", "cover", "ctime", "pubtime", "fav_time"],
+        batch_size=300,
     )
+
+
+async def update_favorite_item_page(pages: list[dict], item: FavoriteItem):
+    pages = [
+        FavoriteItemPage(
+            favorite_item=item, cid=page["cid"], page=page["page"], name=page["part"], image=page.get("first_frame", "")
+        )
+        for page in pages
+    ]
+    await FavoriteItemPage.bulk_create(
+        pages, on_conflict=["favorite_item_id", "page"], update_fields=["cid", "name", "image"], batch_size=300
+    )
@@ -91,13 +95,11 @@ async def process() -> None:
     except Exception:
         logger.exception("Failed to refresh credential.")
         return
-    for favorite_id in settings.favorite_ids:
-        if favorite_id not in settings.path_mapper:
-            logger.warning(
-                f"Favorite {favorite_id} not in path mapper, ignored."
-            )
-            continue
-        await process_favorite(favorite_id)
+    for favorite_id in settings.path_mapper:
+        try:
+            await process_favorite(favorite_id)
+        except asyncio.TimeoutError:
+            logger.exception("Process favorite {} timeout.", favorite_id)
 async def process_favorite(favorite_id: int) -> None:
@@ -106,11 +108,7 @@ async def process_favorite(favorite_id: int) -> None:
         favorite_id, page=1, credential=credential
     )
     title = favorite_video_list["info"]["title"]
-    logger.info(
-        "Start to process favorite {}: {}",
-        favorite_id,
-        title,
-    )
+    logger.info("Start to process favorite {}: {}.", favorite_id, title)
     fav_list, _ = await FavoriteList.get_or_create(
         id=favorite_id, defaults={"name": favorite_video_list["info"]["title"]}
     )
@@ -119,43 +117,28 @@ async def process_favorite(favorite_id: int) -> None:
     while True:
         page += 1
         if page > 1:
-            favorite_video_list = (
-                await favorite_list.get_video_favorite_list_content(
-                    favorite_id, page=page, credential=credential
-                )
+            favorite_video_list = await favorite_list.get_video_favorite_list_content(
+                favorite_id, page=page, credential=credential
             )
         # First check whether records for these bvids already exist
         existed_items = await FavoriteItem.filter(
-            favorite_list=fav_list,
-            bvid__in=[media["bvid"] for media in favorite_video_list["medias"]],
+            favorite_list=fav_list, bvid__in=[media["bvid"] for media in favorite_video_list["medias"]]
         )
         # Record the bvid and fav_time of every entry in the fetched list
-        media_info = {
-            (media["bvid"], media["fav_time"])
-            for media in favorite_video_list["medias"]
-        }
+        media_info = {(media["bvid"], media["fav_time"]) for media in favorite_video_list["medias"]}
         # A record matching on both bvid and fav_time means we have reached the position processed last time
-        continue_flag = not media_info & {
-            (item.bvid, int(item.fav_time.timestamp()))
-            for item in existed_items
-        }
-        await manage_model(favorite_video_list["medias"], fav_list)
+        continue_flag = not media_info & {(item.bvid, int(item.fav_time.timestamp())) for item in existed_items}
+        await update_favorite_item(favorite_video_list["medias"], fav_list)
         if not (continue_flag and favorite_video_list["has_more"]):
             break
     all_unprocessed_items = await FavoriteItem.filter(
-        favorite_list=fav_list,
-        type=MediaType.VIDEO,
-        status=MediaStatus.NORMAL,
-        downloaded=False,
+        favorite_list=fav_list, type=MediaType.VIDEO, status=MediaStatus.NORMAL, downloaded=False
     ).prefetch_related("upper")
-    await asyncio.gather(
-        *[process_favorite_item(item) for item in all_unprocessed_items],
-        return_exceptions=True,
-    )
-    logger.info("Favorite {} {} processed successfully.", favorite_id, title)
+    await asyncio.gather(*[process_favorite_item(item) for item in all_unprocessed_items], return_exceptions=True)
+    logger.info("Favorite {} {} has been processed.", favorite_id, title)
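The `continue_flag` expression above is the incremental-sync stop condition: paging halts as soon as a `(bvid, fav_time)` pair fetched from the API intersects the pairs already persisted in the database. The check in isolation, with hypothetical data standing in for one API page and for stored rows:

```python
# Hypothetical API page and previously persisted (bvid, fav_time) rows.
page_medias = [
    {"bvid": "BV1xx", "fav_time": 1700000000},
    {"bvid": "BV1yy", "fav_time": 1700000100},
]
existing_rows = [("BV1yy", 1700000100)]

media_info = {(m["bvid"], m["fav_time"]) for m in page_medias}
# A non-empty intersection means this page overlaps videos handled in a
# previous run, so there is no need to request further pages.
continue_flag = not (media_info & set(existing_rows))
print(continue_flag)  # False: a known pair was seen, stop paging
```

Comparing on the pair rather than on `bvid` alone lets a re-favorited video (same `bvid`, new `fav_time`) still be picked up.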
-@concurrent_decorator(4)
+@concurrent_decorator(concurrency=4)
 async def process_favorite_item(
     fav_item: FavoriteItem,
     process_poster=True,
@@ -163,170 +146,309 @@ async def process_favorite_item(
     process_nfo=True,
     process_upper=True,
     process_subtitle=True,
+    refresh_mode=False,
 ) -> None:
-    logger.info("Start to process video {} {}", fav_item.bvid, fav_item.name)
+    logger.info("Start to process video {} {}.", fav_item.bvid, fav_item.name)
     if fav_item.type != MediaType.VIDEO:
-        logger.warning("Media {} is not a video, skipped.", fav_item.name)
+        logger.warning("Media {} {} is not a video, skipped.", fav_item.bvid, fav_item.name)
         return
     v = video.Video(fav_item.bvid, credential=credential)
-    try:
-        if process_upper:
-            # write the upper's avatar
-            if not all(
-                await asyncio.gather(
-                    aexists(fav_item.upper.thumb_path),
-                    aexists(fav_item.upper.meta_path),
-                )
-            ):
-                await amakedirs(fav_item.upper.thumb_path.parent, exist_ok=True)
-                await asyncio.gather(
-                    fav_item.upper.save_metadata(),
-                    download_content(
-                        fav_item.upper.thumb, fav_item.upper.thumb_path
-                    ),
-                    return_exceptions=True,
-                )
-            else:
-                logger.info(
-                    "Upper {} {} already exists, skipped.",
-                    fav_item.upper.mid,
-                    fav_item.upper.name,
-                )
-        if process_nfo:
-            if not await aexists(fav_item.nfo_path):
-                if fav_item.tags is None:
-                    try:
-                        fav_item.tags = [
-                            _["tag_name"] for _ in await v.get_tags()
-                        ]
-                    except Exception:
-                        logger.exception(
-                            "Failed to get tags of video {} {}",
-                            fav_item.bvid,
-                            fav_item.name,
-                        )
-                # write the nfo
-                await EpisodeInfo(
-                    title=fav_item.name,
-                    plot=fav_item.desc,
-                    actor=[
-                        Actor(
-                            name=fav_item.upper.mid,
-                            role=fav_item.upper.name,
-                        )
-                    ],
-                    tags=fav_item.tags,
-                    bvid=fav_item.bvid,
-                    aired=fav_item.ctime,
-                ).write_nfo(fav_item.nfo_path)
-            else:
-                logger.info(
-                    "NFO of {} {} already exists, skipped.",
-                    fav_item.bvid,
-                    fav_item.name,
-                )
-        if process_poster:
-            # write the poster
-            if not await aexists(fav_item.poster_path):
-                await download_content(fav_item.cover, fav_item.poster_path)
-            else:
-                logger.info(
-                    "Poster of {} {} already exists, skipped.",
-                    fav_item.bvid,
-                    fav_item.name,
-                )
-        if process_subtitle:
-            if not await aexists(fav_item.subtitle_path):
-                await ass.make_ass_file_danmakus_protobuf(
-                    v, 0, str(fav_item.subtitle_path.resolve())
-                )
-            else:
-                logger.info(
-                    "Subtitle of {} {} already exists, skipped.",
-                    fav_item.bvid,
-                    fav_item.name,
-                )
-        if process_video:
-            if await aexists(fav_item.video_path):
-                fav_item.downloaded = True
-                logger.info(
-                    "Video {} {} already exists, skipped.",
-                    fav_item.bvid,
-                    fav_item.name,
-                )
-            else:
-                # start processing the video content
-                detector = video.VideoDownloadURLDataDetecter(
-                    await v.get_download_url(page_index=0)
-                )
-                streams = detector.detect_best_streams()
-                if detector.check_flv_stream():
-                    await download_content(
-                        streams[0].url, fav_item.tmp_video_path
-                    )
-                    process = await create_subprocess_exec(
-                        FFMPEG_COMMAND,
-                        "-i",
-                        str(fav_item.tmp_video_path),
-                        str(fav_item.video_path),
-                        stdout=DEVNULL,
-                        stderr=DEVNULL,
-                    )
-                    await process.communicate()
-                    fav_item.tmp_video_path.unlink()
-                else:
-                    await asyncio.gather(
-                        download_content(
-                            streams[0].url, fav_item.tmp_video_path
-                        ),
-                        download_content(
-                            streams[1].url, fav_item.tmp_audio_path
-                        ),
-                    )
-                    process = await create_subprocess_exec(
-                        FFMPEG_COMMAND,
-                        "-i",
-                        str(fav_item.tmp_video_path),
-                        "-i",
-                        str(fav_item.tmp_audio_path),
-                        "-c",
-                        "copy",
-                        str(fav_item.video_path),
-                        stdout=DEVNULL,
-                        stderr=DEVNULL,
-                    )
-                    await process.communicate()
-                    fav_item.tmp_video_path.unlink()
-                    fav_item.tmp_audio_path.unlink()
-                fav_item.downloaded = True
-                logger.info(
-                    "{} {} processed successfully.",
-                    fav_item.bvid,
-                    fav_item.name,
-                )
-    except ResponseCodeException as e:
-        match e.code:
-            case 62002:
-                fav_item.status = MediaStatus.INVISIBLE
-            case -404:
-                fav_item.status = MediaStatus.DELETED
-            case _:
-                logger.exception(
-                    "Failed to process video {} {}, error_code: {}",
-                    fav_item.bvid,
-                    fav_item.name,
-                    e.code,
-                )
-                return
-        logger.error(
-            "Video {} {} is not available, marked as {}",
-            fav_item.bvid,
-            fav_item.name,
-            fav_item.status.text,
-        )
-    except Exception:
-        logger.exception(
-            "Failed to process video {} {}", fav_item.bvid, fav_item.name
-        )
-    finally:
-        await fav_item.save()
+    # If tags were never fetched, try fetching them; it is not critical, so errors are ignored
+    with contextlib.suppress(Exception):
+        if fav_item.tags is None:
+            fav_item.tags = [_["tag_name"] for _ in await v.get_tags()]
+    # Handling the upper is independent of whether the video is multi-page, so do it first
+    if process_upper:
+        result = await asyncio.gather(
+            get_file(fav_item.upper.thumb, fav_item.upper.thumb_path),
+            get_nfo(fav_item.upper.meta_path, obj=fav_item.upper, mode=NfoMode.UPPER),
+            return_exceptions=True,
+        )
+        if any(isinstance(_, FileExistsError) for _ in result):
+            logger.info("Upper {} {} already exists, skipped.", fav_item.upper.mid, fav_item.upper.name)
+        elif any(isinstance(_, Exception) for _ in result):
+            logger.exception("Failed to process upper {} {}.", fav_item.upper.mid, fav_item.upper.name)
+    single_page = False
+    if settings.paginated_video:
+        pages = None
+        if not refresh_mode:
+            # When not triggered manually, refresh the pages
+            try:
+                tmp_pages = await v.get_pages()
+                if len(tmp_pages) <= 1:
+                    single_page = True
+                else:
+                    await update_favorite_item_page(tmp_pages, fav_item)
+            except Exception:
+                logger.exception("Failed to get pages of video {} {}.", fav_item.bvid, fav_item.name)
+        # Load the pages back from the table
+        pages = await FavoriteItemPage.filter(favorite_item=fav_item).order_by("page")
+        for page in pages:
+            page.favorite_item = fav_item
+        if pages and not single_page:
+            if process_nfo:
+                try:
+                    await get_nfo(fav_item.tvshow_nfo_path, obj=fav_item, mode=NfoMode.TVSHOW)
+                except FileExistsError:
+                    logger.info("Nfo of {} {} already exists, skipped.", fav_item.bvid, fav_item.name)
+                except Exception:
+                    logger.exception("Failed to process nfo of video {} {}.", fav_item.bvid, fav_item.name)
+            if process_poster:
+                try:
+                    await get_file(fav_item.cover, fav_item.tvshow_poster_path)
+                except FileExistsError:
+                    logger.info("Poster of {} {} already exists, skipped.", fav_item.bvid, fav_item.name)
+                except Exception:
+                    logger.exception("Failed to process poster of video {} {}.", fav_item.bvid, fav_item.name)
+            await asyncio.gather(
+                *[
+                    process_favorite_item_page(page, v, process_poster, process_video, process_nfo, process_subtitle)
+                    for page in pages
+                ],
+                return_exceptions=True,
+            )
+            fav_item.downloaded = all(page.downloaded for page in pages)
+            page_status = {page.status for page in pages}
+            if MediaStatus.INVISIBLE in page_status:
+                fav_item.status = MediaStatus.INVISIBLE
+            elif MediaStatus.DELETED in page_status:
+                fav_item.status = MediaStatus.DELETED
+            else:
+                fav_item.status = MediaStatus.NORMAL
+    if single_page or not settings.paginated_video:
+        if process_nfo:
+            try:
+                await get_nfo(fav_item.nfo_path, obj=fav_item, mode=NfoMode.MOVIE)
+            except FileExistsError:
+                logger.info("NFO of {} {} already exists, skipped.", fav_item.bvid, fav_item.name)
+            except Exception:
+                logger.exception("Failed to process nfo of video {} {}.", fav_item.bvid, fav_item.name)
+        if process_poster:
+            try:
+                await get_file(fav_item.cover, fav_item.poster_path)
+            except FileExistsError:
+                logger.info("Poster of {} {} already exists, skipped.", fav_item.bvid, fav_item.name)
+            except Exception:
+                logger.exception("Failed to process poster of video {} {}.", fav_item.bvid, fav_item.name)
+        if process_subtitle:
+            try:
+                await get_subtitle(v, 0, fav_item.subtitle_path)
+            except FileExistsError:
+                logger.info("Subtitle of {} {} already exists, skipped.", fav_item.bvid, fav_item.name)
+            except Exception:
+                logger.exception("Failed to process subtitle of video {} {}.", fav_item.bvid, fav_item.name)
+        if process_video:
+            try:
+                await get_video(v, 0, fav_item.tmp_video_path, fav_item.tmp_audio_path, fav_item.video_path)
+                fav_item.downloaded = True
+            except FileExistsError:
+                logger.info("Video {} {} already exists, skipped.", fav_item.bvid, fav_item.name)
+                fav_item.downloaded = True
+            except Exception as e:
+                errcode_status = {62002: MediaStatus.INVISIBLE, -404: MediaStatus.DELETED}
+                if not (isinstance(e, ResponseCodeException) and (status := errcode_status.get(e.code))):
+                    logger.exception("Failed to process video {} {}.", fav_item.bvid, fav_item.name)
+                else:
+                    fav_item.status = status
+                    logger.error(
+                        "Video {} {} is not available, marked as {}.",
+                        fav_item.bvid,
+                        fav_item.name,
+                        fav_item.status.text,
+                    )
+    await fav_item.save()
+    logger.info("{} {} has been processed.", fav_item.bvid, fav_item.name)
+
+
+@concurrent_decorator(concurrency=4)
+async def process_favorite_item_page(
+    fav_page: FavoriteItemPage,
+    v: video.Video,
+    process_poster=True,
+    process_video=True,
+    process_nfo=True,
+    process_subtitle=True,
+):
+    logger.info(
+        "Start to process video {} {} page {}.", fav_page.favorite_item.bvid, fav_page.favorite_item.name, fav_page.page
+    )
+    if process_nfo:
+        try:
+            await get_nfo(fav_page.nfo_path, obj=fav_page, mode=NfoMode.EPISODE)
+        except FileExistsError:
+            logger.info(
+                "NFO of {} {} page {} already exists, skipped.",
+                fav_page.favorite_item.bvid,
+                fav_page.favorite_item.name,
+                fav_page.page,
+            )
+        except Exception:
+            logger.exception(
+                "Failed to process nfo of video {} {} page {}.",
+                fav_page.favorite_item.bvid,
+                fav_page.favorite_item.name,
+                fav_page.page,
+            )
+    if process_poster:
+        try:
+            await get_file(fav_page.image or fav_page.favorite_item.cover, fav_page.poster_path)
+        except FileExistsError:
+            logger.info(
+                "Poster of {} {} page {} already exists, skipped.",
+                fav_page.favorite_item.bvid,
+                fav_page.favorite_item.name,
+                fav_page.page,
+            )
+        except Exception:
+            logger.exception(
+                "Failed to process poster of video {} {} page {}.",
+                fav_page.favorite_item.bvid,
+                fav_page.favorite_item.name,
+                fav_page.page,
+            )
+    if process_subtitle:
+        try:
+            await get_subtitle(v, fav_page.page - 1, fav_page.subtitle_path)
+        except FileExistsError:
+            logger.info(
+                "Subtitle of {} {} page {} already exists, skipped.",
+                fav_page.favorite_item.bvid,
+                fav_page.favorite_item.name,
+                fav_page.page,
+            )
+        except Exception:
+            logger.exception(
+                "Failed to process subtitle of video {} {} page {}.",
+                fav_page.favorite_item.bvid,
+                fav_page.favorite_item.name,
+                fav_page.page,
+            )
+    if process_video:
+        try:
+            await get_video(v, fav_page.page - 1, fav_page.tmp_video_path, fav_page.tmp_audio_path, fav_page.video_path)
+            fav_page.downloaded = True
+        except FileExistsError:
+            logger.info(
+                "Video {} {} page {} already exists, skipped.",
+                fav_page.favorite_item.bvid,
+                fav_page.favorite_item.name,
+                fav_page.page,
+            )
+            fav_page.downloaded = True
+        except Exception as e:
+            errcode_status = {62002: MediaStatus.INVISIBLE, -404: MediaStatus.DELETED}
+            if not (isinstance(e, ResponseCodeException) and (status := errcode_status.get(e.code))):
+                logger.exception(
+                    "Failed to process video {} {} page {}.",
+                    fav_page.favorite_item.bvid,
+                    fav_page.favorite_item.name,
+                    fav_page.page,
+                )
+            else:
+                fav_page.status = status
+                logger.error(
+                    "Video {} {} page {} is not available, marked as {}.",
+                    fav_page.favorite_item.bvid,
+                    fav_page.favorite_item.name,
+                    fav_page.page,
+                    fav_page.status.text,
+                )
+    await fav_page.save()
+    logger.info(
+        "{} {} page {} has been processed.", fav_page.favorite_item.bvid, fav_page.favorite_item.name, fav_page.page
+    )
+
+
+async def get_video(v: video.Video, page_id: int, tmp_video_path: Path, tmp_audio_path: Path, video_path: Path) -> None:
+    """Download one page of the video, given temporary video/audio paths and the target video path"""
+    if await aexists(video_path):
+        # The target video already exists; skip it
+        raise FileExistsError
+    await amakedirs(video_path.parent, exist_ok=True)
+    # Analyze the streams of the given page
+    detector = video.VideoDownloadURLDataDetecter(await v.get_download_url(page_index=page_id))
+    streams = detector.detect_best_streams(**settings.stream.model_dump())
+    if detector.check_flv_stream():
+        # For flv, download directly
+        await download_content(streams[0].url, tmp_video_path)
+        process = await create_subprocess_exec(
+            FFMPEG_COMMAND, "-i", tmp_video_path, video_path, stdout=PIPE, stderr=PIPE
+        )
+        stdout, stderr = await process.communicate()
+        tmp_video_path.unlink(missing_ok=True)
+    else:
+        # For non-flv, the video stream has to be downloaded first
+        paths, tasks = ([tmp_video_path], [download_content(streams[0].url, tmp_video_path)])
+        if streams[1]:
+            # If there is an audio stream, download it as well
+            paths.append(tmp_audio_path)
+            tasks.append(download_content(streams[1].url, tmp_audio_path))
+        await asyncio.gather(*tasks)
+        process = await create_subprocess_exec(
+            FFMPEG_COMMAND,
+            *sum([["-i", path] for path in paths], []),
+            "-c",
+            "copy",
+            video_path,
+            stdout=PIPE,
+            stderr=PIPE,
+        )
+        stdout, stderr = await process.communicate()
+        for path in paths:
+            path.unlink(missing_ok=True)
+    if process.returncode != 0:
+        raise RuntimeError(
+            f"{FFMPEG_COMMAND} exited with non-zero code {process.returncode}."
+            f"\nstdout:\n{stdout.decode()}"
+            f"\nstderr:\n{stderr.decode()}"
+        )
+
+
+async def get_file(url: str, path: Path) -> None:
+    """A thin download wrapper, used for covers and similar assets"""
+    if await aexists(path):
+        # The target file already exists; skip it
+        raise FileExistsError
+    await amakedirs(path.parent, exist_ok=True)
+    await download_content(url, path)
+
+
+async def get_subtitle(v: video.Video, page_id: int, subtitle_path: Path) -> None:
+    """Download the danmaku subtitle of one page of the video to the target subtitle file"""
+    if await aexists(subtitle_path):
+        # The target subtitle already exists; skip it
+        raise FileExistsError
+    await amakedirs(subtitle_path.parent, exist_ok=True)
+    await ass.make_ass_file_danmakus_protobuf(
+        v,
+        page_id,
+        str(subtitle_path.resolve()),
+        credential=credential,
+        font_name=settings.subtitle.font_name,
+        font_size=settings.subtitle.font_size,
+        alpha=settings.subtitle.alpha,
+        fly_time=settings.subtitle.fly_time,
+        static_time=settings.subtitle.static_time,
+    )
+
+
+async def get_nfo(nfo_path: Path, *, obj: Model, mode: NfoMode) -> None:
+    """Write the nfo information for the given object and mode to the given nfo path"""
+    if await aexists(nfo_path):
+        # The target nfo already exists; skip it
+        raise FileExistsError
+    await amakedirs(nfo_path.parent, exist_ok=True)
+    # Generate a different nfo depending on the mode
+    nfo: NfoBase = None
+    match obj, mode:
+        case FavoriteItem(), NfoMode.MOVIE:
+            nfo = MovieInfo.from_favorite_item(obj)
+        case FavoriteItem(), NfoMode.TVSHOW:
+            nfo = TVShowInfo.from_favorite_item(obj)
+        case FavoriteItemPage(), NfoMode.EPISODE:
+            nfo = EpisodeInfo.from_favorite_item_page(obj)
+        case Upper(), NfoMode.UPPER:
+            nfo = UpperInfo.from_upper(obj)
+        case _:
+            raise ValueError
+    await nfo.to_file(nfo_path)
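The non-flv branch of `get_video` builds its ffmpeg argument list by flattening `["-i", path]` pairs with `sum(..., [])`, so one or two input flags are emitted depending on whether a separate audio stream exists. The flattening step on its own, with hypothetical file names:

```python
from pathlib import Path

# Hypothetical temporary stream files; real paths come from the FavoriteItem.
paths = [Path("video.m4s"), Path("audio.m4s")]

# sum() with an empty-list start concatenates the ["-i", path] pairs
# into one flat argument list for create_subprocess_exec.
args = sum([["-i", str(p)] for p in paths], [])
print(args)  # ['-i', 'video.m4s', '-i', 'audio.m4s']
```

With a single entry in `paths` (no audio stream) the same expression yields just `['-i', 'video.m4s']`, which is why the copy-mux command needs no special casing.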


@@ -1,6 +1,6 @@
 [tool.poetry]
 name = "bili-sync"
-version = "1.0.1"
+version = "1.1.9"
 description = ""
 authors = ["amtoaer <amtoaer@gmail.com>"]
 license = "GPL-3.0"
@@ -8,25 +8,25 @@ readme = "README.md"
 [tool.poetry.dependencies]
 python = "^3.11"
-bilibili-api-python = { git = "https://github.com/amtoaer/bilibili-api.git", rev = "dev" }
-dataclasses-json = "0.6.2"
-tortoise-orm = "0.20.0"
-loguru = "0.7.2"
-uvloop = "0.19.0"
-aiofiles = "23.2.1"
 aerich = "0.7.2"
+aiofiles = "23.2.1"
+bilibili-api-python = {git = "https://github.com/Nemo2011/bilibili-api.git", rev = "16.2.0b2"}
+loguru = "0.7.2"
+pydantic = "2.5.3"
+tortoise-orm = "0.20.0"
+uvloop = "0.19.0"
 
 [tool.poetry.group.dev.dependencies]
-black = "23.11.0"
-ruff = "0.1.6"
+bump-my-version = "0.15.4"
 ipython = "8.17.2"
+ruff = "0.2.2"
 
 [tool.black]
-line-length = 80
+line-length = 100
 
 [tool.ruff]
-line-length = 80
-select = [
+line-length = 120
+lint.select = [
     "F", # https://beta.ruff.rs/docs/rules/#pyflakes-f
     "E",
     "W", # https://beta.ruff.rs/docs/rules/#pycodestyle-e-w
@@ -50,9 +50,11 @@ select = [
     "NPY", # https://beta.ruff.rs/docs/rules/#numpy-specific-rules-npy
     "RUF100", # https://beta.ruff.rs/docs/configuration/#automatic-noqa-management
 ]
-ignore = [
+lint.ignore = [
     "A003", # Class attribute `id` is shadowing a Python builtin
 ]
+lint.isort.split-on-trailing-comma = false
+format.skip-magic-trailing-comma = true
 exclude = ["migrations"]
 
 [tool.aerich]
@@ -60,6 +62,28 @@ tortoise_orm = "constants.TORTOISE_ORM"
 location = "./migrations"
 src_folder = "./."
 
+[tool.bumpversion]
+commit = true
+message = "chore: bump version from {current_version} to {new_version}"
+tag = true
+tag_name = "{new_version}"
+tag_message = ""
+current_version = "1.1.9"
+parse = "(?P<major>\\d+)\\.(?P<minor>\\d+)\\.(?P<patch>\\d+)"
+
+[[tool.bumpversion.files]]
+filename = "version.py"
+
+[[tool.bumpversion.files]]
+filename = "pyproject.toml"
+
 [build-system]
 requires = ["poetry-core"]
 build-backend = "poetry.core.masonry.api"


@@ -1,63 +1,100 @@
-from dataclasses import dataclass, field, fields
 from pathlib import Path
-from typing import Self
 
-from dataclasses_json import DataClassJsonMixin
+from bilibili_api.video import AudioQuality, VideoCodecs, VideoQuality
+from pydantic import BaseModel, Field, field_validator, root_validator
+from pydantic_core import PydanticCustomError
+from typing_extensions import Annotated
 
 from constants import DEFAULT_CONFIG_PATH
+from utils import amakedirs, aopen
 
 
-@dataclass
-class Config(DataClassJsonMixin):
-    sessdata: str = ""
-    bili_jct: str = ""
-    buvid3: str = ""
-    dedeuserid: str = ""
-    ac_time_value: str = ""
+class SubtitleConfig(BaseModel):
+    font_name: str = "微软雅黑,黑体"  # font
+    font_size: float = 40  # font size
+    alpha: float = 0.8  # opacity
+    fly_time: float = 5  # display time of scrolling danmaku
+    static_time: float = 10  # display time of static danmaku
+
+
+class StreamConfig(BaseModel):
+    video_max_quality: VideoQuality = VideoQuality._8K
+    audio_max_quality: AudioQuality = AudioQuality._192K
+    video_min_quality: VideoQuality = VideoQuality._360P
+    audio_min_quality: AudioQuality = AudioQuality._64K
+    codecs: list[VideoCodecs] = Field(
+        default_factory=lambda: [VideoCodecs.AV1, VideoCodecs.AVC, VideoCodecs.HEV], min_length=1
+    )
+    no_dolby_video: bool = False
+    no_dolby_audio: bool = False
+    no_hdr: bool = False
+    no_hires: bool = False
+
+    @field_validator("codecs", mode="after")
+    def codec_validator(cls, codecs: list[VideoCodecs]) -> list[VideoCodecs]:
+        if len(codecs) != len(set(codecs)):
+            raise PydanticCustomError("unique_list", "List must be unique")
+        return codecs
+
+
+class Config(BaseModel):
+    sessdata: Annotated[str, Field(min_length=1)] = ""
+    bili_jct: Annotated[str, Field(min_length=1)] = ""
+    buvid3: Annotated[str, Field(min_length=1)] = ""
+    dedeuserid: Annotated[str, Field(min_length=1)] = ""
+    ac_time_value: Annotated[str, Field(min_length=1)] = ""
     interval: int = 20
-    favorite_ids: list[int] = field(default_factory=list)
-    path_mapper: dict[int, str] = field(default_factory=dict)
+    path_mapper: dict[int, str] = Field(default_factory=dict)
+    subtitle: SubtitleConfig = Field(default_factory=SubtitleConfig)
+    stream: StreamConfig = Field(default_factory=StreamConfig)
+    paginated_video: bool = False
 
-    def validate(self) -> Self:
-        """All values must be set"""
-        if not all(getattr(self, f.name) for f in fields(self)):
-            raise ValueError("Some config values are not set.")
-        return self
+    @root_validator(pre=True)
+    def migrate(cls, values: dict) -> dict:
+        # Migrate the legacy top-level codec option into codecs inside stream
+        if "codec" in values and "stream" not in values:
+            values["stream"] = {"codecs": values.pop("codec")}
+        return values
 
     @staticmethod
-    def load(path: Path | None = None) -> Self:
+    def load(path: Path | None = None) -> "Config":
         if not path:
             path = DEFAULT_CONFIG_PATH
         try:
             with path.open("r") as f:
-                return Config.schema().loads(f.read())
+                return Config.model_validate_json(f.read())
         except Exception as e:
             raise RuntimeError(f"Failed to load config file: {path}") from e
 
-    def save(self, path: Path | None = None) -> Self:
+    def save(self, path: Path | None = None) -> "Config":
         if not path:
             path = DEFAULT_CONFIG_PATH
         try:
             path.parent.mkdir(parents=True, exist_ok=True)
             with path.open("w") as f:
-                f.write(
-                    Config.schema().dumps(self, indent=4, ensure_ascii=False)
-                )
+                f.write(Config.model_dump_json(self, indent=4))
+            return self
+        except Exception as e:
+            raise RuntimeError(f"Failed to save config file: {path}") from e
+
+    async def asave(self, path: Path | None = None) -> "Config":
+        if not path:
+            path = DEFAULT_CONFIG_PATH
+        try:
+            await amakedirs(path.parent, exist_ok=True)
+            async with aopen(path, "w") as f:
+                await f.write(Config.model_dump_json(self, indent=4))
             return self
         except Exception as e:
             raise RuntimeError(f"Failed to save config file: {path}") from e
 
 
 def init_settings() -> Config:
-    return (
-        (
-            Config.load(DEFAULT_CONFIG_PATH)
-            if DEFAULT_CONFIG_PATH.exists()
-            else Config()
-        )
-        .save(DEFAULT_CONFIG_PATH)
-        .validate()
-    )
+    if not DEFAULT_CONFIG_PATH.exists():
+        # If the config file does not exist, write empty defaults
+        Config().save(DEFAULT_CONFIG_PATH)
+    # Load the config file; validation errors raise, and on success the file is
+    # re-saved so that defaults for newly added options get written back
+    return Config.load(DEFAULT_CONFIG_PATH).save()
 
 
 settings = init_settings()
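The `migrate` pre-validator keeps configs written by older releases loadable by folding the legacy top-level `codec` list into `stream.codecs` before validation runs. Stripped of pydantic, the transformation is just:

```python
def migrate(values: dict) -> dict:
    """Fold the legacy top-level "codec" list into the newer "stream" section."""
    # Only rewrite when the old key is present and the new section is absent,
    # so configs already in the new format pass through untouched.
    if "codec" in values and "stream" not in values:
        values["stream"] = {"codecs": values.pop("codec")}
    return values


old_style = {"interval": 20, "codec": ["hev", "avc"]}
print(migrate(old_style))
# {'interval': 20, 'stream': {'codecs': ['hev', 'avc']}}
```

Running this before field validation is what `pre=True` achieves here; with pydantic 2 the non-deprecated spelling of the same hook is `@model_validator(mode="before")`.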


@@ -27,9 +27,7 @@ async def amakedirs(path: Path, exist_ok=False) -> None:
     await makedirs(path, exist_ok=exist_ok)
 
 
-def aopen(
-    path: Path, mode: str = "r", **kwargs
-) -> AiofilesContextManager[None, None, AsyncTextIOWrapper]:
+def aopen(path: Path, mode: str = "r", **kwargs) -> AiofilesContextManager[None, None, AsyncTextIOWrapper]:
     return aiofiles.open(path, mode, **kwargs)

version.py (new file)

@@ -0,0 +1 @@
+VERSION = "1.1.9"