Compare commits


41 Commits
1.1.0 ... v1.x

Author SHA1 Message Date
amtoaer
0b3434f5fd chore: bump version from 1.1.8 to 1.1.9 2024-03-02 14:09:37 +08:00
amtoaer
85e2b2be50 fix: ignore timeout exceptions when they occur 2024-03-02 14:08:22 +08:00
amtoaer
b666378c00 chore: bump version from 1.1.7 to 1.1.8 2024-02-26 23:56:54 +08:00
amtoaer
28ed22dc1b fix: stop refresh from re-processing pages of historical videos after multi-page handling is enabled; fix missing page images 2024-02-26 23:56:26 +08:00
amtoaer
911ce84f5a fix: escape text content in xml to avoid issues with special characters 2024-02-26 13:26:07 +08:00
amtoaer
e25ed452b4 chore: bump version from 1.1.6 to 1.1.7 2024-02-25 01:11:53 +08:00
amtoaer
2f36220582 chore: add a make target for one-step releases 2024-02-25 01:11:13 +08:00
amtoaer
f6a5238b6e fix: fix an execution error 2024-02-25 00:47:03 +08:00
amtoaer
ec5776a0ed feat: adapt recheck to multi-page videos; set batch_size for all bulk database operations 2024-02-24 21:37:34 +08:00
ᴀᴍᴛᴏᴀᴇʀ
c21da25c6f feat: extract some stream-selection parameters for video downloads into the config (#47) 2024-02-24 17:36:56 +08:00
amtoaer
bde142a896 doc: fix some wording 2024-02-24 03:52:58 +08:00
amtoaer
af8cd0d819 refactor: save files asynchronously during refresh 2024-02-24 03:49:28 +08:00
ᴀᴍᴛᴏᴀᴇʀ
a4c362d8ab feat: support multi-page video downloads, pending further testing (#24) 2024-02-24 03:38:08 +08:00
amtoaer
1dd760d445 chore: switch code formatter, remove unused dependencies 2024-02-21 23:54:39 +08:00
amtoaer
0bc7b831de chore: bump version from 1.1.5 to 1.1.6 2024-02-02 22:28:21 +08:00
amtoaer
fe2056ae33 fix: fix download failures for videos without an audio stream 2024-02-02 22:28:09 +08:00
amtoaer
8a7a7e370b chore: bump version from 1.1.4 to 1.1.5 2024-02-02 17:29:13 +08:00
amtoaer
6ce143647c chore: update upstream dependencies 2024-02-02 17:29:07 +08:00
amtoaer
668c67da53 chore: bump version from 1.1.3 to 1.1.4 2024-01-20 15:50:12 +08:00
ᴀᴍᴛᴏᴀᴇʀ
9204bbb4ad fix: fix new config options not being written to the config file; raise the line-length limit (#33) 2024-01-20 15:37:43 +08:00
ᴀᴍᴛᴏᴀᴇʀ
d467750d4f feat: support specifying codec priority (#32) 2024-01-20 15:16:48 +08:00
amtoaer
641cc3f48b chore: optimize the dockerfile to shrink the image 2024-01-06 02:13:00 +08:00
amtoaer
345c764463 fix: fix resources not being released when docker exits 2024-01-06 00:41:28 +08:00
amtoaer
85b7d3dc9b chore: restore the previous dockerfile style; cache reuse is harder but the image is smaller 2024-01-05 23:34:32 +08:00
amtoaer
f1ada17f30 chore: bump version from 1.1.2 to 1.1.3 2024-01-05 01:15:12 +08:00
amtoaer
cb0ac7eb67 chore: enable auto-commit and auto-tagging 2024-01-05 01:13:36 +08:00
amtoaer
31efedbde9 chore: fix a dependency issue, streamline the dockerfile flow 2024-01-05 01:11:10 +08:00
amtoaer
3defb07325 chore: store the version number and add an entry point, making cross-version migration logic easy to trigger 2024-01-04 22:13:03 +08:00
amtoaer
e36f829e70 chore: introduce bump-version and set the version number correctly 2024-01-04 22:04:10 +08:00
amtoaer
c20b579523 chore: sort dependencies 2024-01-04 21:54:27 +08:00
amtoaer
ceec222604 chore: update upstream dependencies, fix a cookie-refresh failure 2024-01-04 21:50:28 +08:00
amtoaer
60ea7795ae chore: change the base image tag 2024-01-04 21:07:08 +08:00
DDSDerek
6cbacbd127 chore: Optimization docker (#17)
* feat: docker build adds cache

* fix: dockerfile optimization

* doc: dockerhub pictures are not displayed properly

---------

Co-authored-by: DDSRem <1448139087@qq.com>
2024-01-04 20:51:03 +08:00
DDSDerek
8ea2fbe0f9 fix: docker meta username error (#16)
Co-authored-by: DDSRem <1448139087@qq.com>
2023-12-30 14:31:48 +08:00
DDSDerek
e3fded16ac feat: support arm64 architecture (#15)
Co-authored-by: DDSRem <1448139087@qq.com>
2023-12-30 14:22:26 +08:00
amtoaer
961913c4fb doc: add subtitle documentation 2023-12-07 22:11:37 +08:00
amtoaer
fa20e5efee feat: expose danmaku settings 2023-12-07 21:45:18 +08:00
amtoaer
38fb0a4560 fix: remove config options safely 2023-12-07 21:29:57 +08:00
amtoaer
9e94e3b73e chore: split try/except by block, remove unused settings 2023-12-07 21:15:40 +08:00
amtoaer
b955a9fe45 chore: replace methods marked deprecated 2023-12-06 18:17:17 +08:00
amtoaer
9d151b4731 feat: commands no longer overwrite existing content by default; update docs 2023-12-06 01:19:08 +08:00
19 changed files with 1690 additions and 1187 deletions

View File

@@ -12,18 +12,37 @@ jobs:
-
name: Checkout
uses: actions/checkout@v3
-
name: Docker meta
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ secrets.DOCKERHUB_USERNAME }}/bili-sync
tags: |
type=raw,value=debug
-
name: Set Up QEMU
uses: docker/setup-qemu-action@v3
-
name: Set Up Buildx
uses: docker/setup-buildx-action@v3
-
name: Login to DockerHub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
-
-
name: Build and push images
uses: docker/build-push-action@v5
with:
context: .
file: ./Dockerfile
file: Dockerfile
platforms: |
linux/amd64
linux/arm64/v8
push: true
tags: |
${{ secrets.DOCKERHUB_USERNAME }}/bili-sync:debug
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha, scope=${{ github.workflow }}
cache-to: type=gha, scope=${{ github.workflow }}

View File

@@ -12,22 +12,41 @@ jobs:
-
name: Checkout
uses: actions/checkout@v3
-
name: Docker meta
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ secrets.DOCKERHUB_USERNAME }}/bili-sync
tags: |
type=raw,value=${{ github.ref_name }}
type=raw,value=latest
-
name: Set Up QEMU
uses: docker/setup-qemu-action@v3
-
name: Set Up Buildx
uses: docker/setup-buildx-action@v3
-
name: Login to DockerHub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
-
-
name: Build and push images
uses: docker/build-push-action@v5
with:
context: .
file: ./Dockerfile
file: Dockerfile
platforms: |
linux/amd64
linux/arm64/v8
push: true
tags: |
${{ secrets.DOCKERHUB_USERNAME }}/bili-sync:${{ github.ref_name }}
${{ secrets.DOCKERHUB_USERNAME }}/bili-sync:latest
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha, scope=${{ github.workflow }}
cache-to: type=gha, scope=${{ github.workflow }}
-
name: Update DockerHub description
uses: peter-evans/dockerhub-description@v3

View File

@@ -1,22 +1,41 @@
FROM python:3.11.6-alpine3.18 AS base
FROM python:3.11.7-alpine3.19 as base
WORKDIR /app
ENV BILI_IN_DOCKER=true
ENV LANG=zh_CN.UTF-8 \
TZ=Asia/Shanghai \
BILI_IN_DOCKER=true
RUN apk add --no-cache ffmpeg tini \
&& apk add --no-cache --virtual .build-deps \
gcc \
musl-dev \
libffi-dev \
openssl-dev \
&& pip install poetry==1.7.1 pip3-autoremove==1.2.0
COPY poetry.lock pyproject.toml ./
RUN apk add ffmpeg \
&& apk add --no-cache --virtual .build-deps \
gcc \
musl-dev \
libffi-dev \
openssl-dev \
&& pip install poetry \
&& poetry config virtualenvs.create false \
&& poetry install --no-dev --no-interaction --no-ansi \
&& apk del .build-deps
RUN poetry config virtualenvs.create false \
&& poetry install --only main --no-root \
&& pip3-autoremove -y poetry pip3-autoremove \
&& apk del .build-deps \
&& rm -rf \
/root/.cache \
/tmp/*
COPY . .
ENTRYPOINT [ "python", "entry.py" ]
FROM scratch
WORKDIR /app
ENV LANG=zh_CN.UTF-8 \
TZ=Asia/Shanghai \
BILI_IN_DOCKER=true
COPY --from=base / /
ENTRYPOINT [ "tini", "python", "entry.py" ]
VOLUME [ "/app/config", "/app/data", "/app/thumb", "/Videos/Bilibilis" ]

View File

@@ -1,4 +1,4 @@
.PHONY: install fmt start-daemon start-once db-init db-migrate db-upgrade sync-conf
.PHONY: install fmt start-daemon start-once db-init db-migrate db-upgrade sync-conf release
install:
@echo "Installing dependencies..."
@@ -6,8 +6,8 @@ install:
fmt:
@echo "Formatting..."
@poetry run black .
@poetry run ruff --fix .
@poetry run ruff format .
@poetry run ruff check --fix .
start-daemon:
@poetry run python entry.py
@@ -28,4 +28,12 @@ sync-conf:
@echo "Syncing config..."
@cp ${CONFIG_SRC} ./config/
@cp ${DB_SRC} ./data/
@echo "Done."
@echo "Done."
release:
@echo "Releasing..."
@git checkout main
@bump-my-version bump patch
@git push origin main
@git push origin --tags
@echo "Done."

View File

@@ -13,15 +13,23 @@
## Screenshots
![Downloading videos](asset/run.png)
![Downloading videos](https://raw.githubusercontent.com/amtoaer/bili-sync/main/asset/run.png)
![EMBY recognition](asset/emby.png)
![EMBY recognition](https://raw.githubusercontent.com/amtoaer/bili-sync/main/asset/emby.png)
## Configuration
For the first five configuration options, see the [credential acquisition guide](https://nemo2011.github.io/bilibili-api/#/get-credential).
```python
@dataclass
class SubtitleConfig(DataClassJsonMixin):
font_name: str = "微软雅黑,黑体" # font
font_size: float = 40 # font size
alpha: float = 0.8 # opacity
fly_time: float = 5 # scrolling danmaku duration
static_time: float = 10 # static danmaku duration
class Config(DataClassJsonMixin):
sessdata: str = ""
bili_jct: str = ""
@@ -29,8 +37,8 @@ class Config(DataClassJsonMixin):
dedeuserid: str = ""
ac_time_value: str = ""
interval: int = 20 # interval between task runs
favorite_ids: list[int] = field(default_factory=list) # favorite list ids
path_mapper: dict[int, str] = field(default_factory=dict) # mapping from favorite list id to storage directory
subtitle: SubtitleConfig = field(default_factory=SubtitleConfig) # subtitle settings
```
By default the program stores the configuration at `${program path}/config/config.json` and the database at `${program path}/data/data.db`; if they do not exist, they are created and seeded with the initial configuration.
@@ -48,7 +56,7 @@ services:
bili-sync:
image: amtoaer/bili-sync:latest
user: 1000:1000 # run with this user's permissions; defaults to root if omitted, but setting it is recommended
tty: true # add this line to get colored logs
tty: true # add this line so supporting terminals show colored logs (remove it if the logs look garbled)
volumes:
- /home/amtoaer/Videos/Bilibilis/:/Videos/Bilibilis/ # video files
- /home/amtoaer/.config/nas/bili-sync/config/:/app/config/ # config files
@@ -75,11 +83,15 @@ services:
"dedeuserid": "xxxxxxxxxxxxxxxxxx",
"ac_time_value": "xxxxxxxxxxxxxxxxxx",
"interval": 20,
"favorite_ids": [
711322958
],
"path_mapper": {
"711322958": "/Videos/Bilibilis/Bilibili-711322958/"
},
"subtitle": {
"font_name": "微软雅黑,黑体",
"font_size": 40.0,
"alpha": 0.8,
"fly_time": 5.0,
"static_time": 10.0
}
}
```
@@ -88,22 +100,36 @@ services:
To cover these needs, the application ships several standalone commands, which can be run from the program directory as `python entry.py ${command name}`.
1. `once`
Process favorite lists exactly as a regular scheduled run would, but only once.
2. `recheck`
Mark video files that no longer exist locally as not downloaded; they will be re-downloaded on the next scheduled run.
3. `upper_thumb`
3. `refresh_poster`
Manually trigger a full download of uploader avatars, adding avatars for videos downloaded by old versions that lack them
Update the posters of local videos.
3. `refresh_upper`
Update the avatars and metadata of local uploaders.
3. `refresh_nfo`
Update the metadata of local videos (tags, titles, and so on).
3. `refresh_video`
Update the local video source files.
3. `refresh_subtitle`
Update the local danmaku files.
**Every command starting with refresh supports a --force flag: with --force the corresponding content is fully regenerated; without it, only the missing parts are filled in by default.**
## Roadmap
- [x] Credential authentication
- [x] Stream selection
- [x] Video download
- [x] Concurrent downloads
- [x] Run as a daemon
- [x] Build nfo and poster files so episodes can be imported into emby individually
- [x] Support favorite list pagination, downloading all historical videos

View File

@@ -13,30 +13,32 @@ from utils import aexists, aremove
async def recheck():
"""Refresh the status of videos in the database: files that no longer exist are marked as not downloaded so the next run re-downloads them. Call this after deleting files manually."""
async def is_ok(item: FavoriteItem) -> bool:
if len(item.pages):
# a multi-page video only counts as present if every page exists
return all(await asyncio.gather(*[aexists(page.video_path) for page in item.pages]))
return await aexists(item.video_path)
items = await FavoriteItem.filter(
type=MediaType.VIDEO,
status=MediaStatus.NORMAL,
downloaded=True,
)
exists = await asyncio.gather(*[aexists(item.video_path) for item in items])
for item, exist in zip(items, exists):
if isinstance(exist, Exception):
logger.error(
"Error when checking file {} {}: {}",
item.bvid,
item.name,
exist,
)
type=MediaType.VIDEO, status=MediaStatus.NORMAL, downloaded=True
).prefetch_related("pages")
items_to_update = []
for item in items:
for page in item.pages:
# likely a tortoise bug: prefetch_related does not update the back-reference field, so set it manually here
page.favorite_item = item
items_ok = await asyncio.gather(*[is_ok(item) for item in items], return_exceptions=True)
for item, ok in zip(items, items_ok):
if isinstance(ok, Exception):
logger.error("Error when checking file {} {}: {}.", item.bvid, item.name, ok)
continue
if not exist:
logger.info(
"File {} {} not exists, mark as not downloaded.",
item.bvid,
item.name,
)
if not ok:
logger.info("Lack of file detected for {} {}, mark as not downloaded.", item.bvid, item.name)
item.downloaded = False
items_to_update.append(item)
logger.info("Updating database...")
await FavoriteItem.bulk_update(items, fields=["downloaded"])
await FavoriteItem.bulk_update(items_to_update, fields=["downloaded"], batch_size=300)
logger.info("Database updated.")
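The check-and-collect pattern recheck uses above can be sketched on its own. The names below (`check_all`, `exists`) are illustrative, not the project's: all existence checks run concurrently with `return_exceptions=True`, and failed checks are then separated from genuinely missing files.

```python
import asyncio

# Minimal sketch of the recheck pattern: gather all checks at once,
# then split raised exceptions from real "file missing" results.
async def check_all(paths, exists):
    results = await asyncio.gather(*[exists(p) for p in paths], return_exceptions=True)
    missing, errors = [], []
    for path, result in zip(paths, results):
        if isinstance(result, Exception):  # the check itself failed
            errors.append((path, result))
        elif not result:  # file genuinely absent
            missing.append(path)
    return missing, errors
```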
@@ -47,12 +49,12 @@ async def _refresh_favorite_item_info(
process_nfo: bool = False,
process_upper: bool = False,
process_subtitle: bool = False,
force: bool = False,
):
items = await FavoriteItem.filter(downloaded=True).prefetch_related("upper")
await asyncio.gather(
*[aremove(path) for item in items for path in path_getter(item)],
return_exceptions=True,
)
if force:
# on a forced refresh, delete all existing content first
await asyncio.gather(*[aremove(path) for item in items for path in path_getter(item)], return_exceptions=True)
await asyncio.gather(
*[
process_favorite_item(
@@ -62,6 +64,7 @@ async def _refresh_favorite_item_info(
process_nfo=process_nfo,
process_upper=process_upper,
process_subtitle=process_subtitle,
refresh_mode=True,
)
for item in items
],
@@ -69,30 +72,14 @@ async def _refresh_favorite_item_info(
)
refresh_nfo = functools.partial(
_refresh_favorite_item_info, lambda item: [item.nfo_path], process_nfo=True
)
refresh_nfo = functools.partial(_refresh_favorite_item_info, lambda item: [item.nfo_path], process_nfo=True)
refresh_poster = functools.partial(
_refresh_favorite_item_info,
lambda item: [item.poster_path],
process_poster=True,
)
refresh_poster = functools.partial(_refresh_favorite_item_info, lambda item: [item.poster_path], process_poster=True)
refresh_video = functools.partial(
_refresh_favorite_item_info,
lambda item: [item.video_path],
process_video=True,
)
refresh_video = functools.partial(_refresh_favorite_item_info, lambda item: [item.video_path], process_video=True)
refresh_upper = functools.partial(
_refresh_favorite_item_info,
lambda item: item.upper_path,
process_upper=True,
)
refresh_upper = functools.partial(_refresh_favorite_item_info, lambda item: item.upper_path, process_upper=True)
refresh_subtitle = functools.partial(
_refresh_favorite_item_info,
lambda item: [item.subtitle_path],
process_subtitle=True,
_refresh_favorite_item_info, lambda item: [item.subtitle_path], process_subtitle=True
)

View File

@@ -4,11 +4,7 @@ from pathlib import Path
def get_base(dir_name: str) -> Path:
path = (
Path(base)
if (base := os.getenv(f"{dir_name.upper()}_PATH"))
else Path(__file__).parent / dir_name
)
path = Path(base) if (base := os.getenv(f"{dir_name.upper()}_PATH")) else Path(__file__).parent / dir_name
path.mkdir(parents=True, exist_ok=True)
return path
@@ -37,20 +33,18 @@ class MediaStatus(IntEnum):
@property
def text(self) -> str:
return {
MediaStatus.NORMAL: "normal",
MediaStatus.INVISIBLE: "invisible",
MediaStatus.DELETED: "deleted",
}[self]
return {MediaStatus.NORMAL: "normal", MediaStatus.INVISIBLE: "invisible", MediaStatus.DELETED: "deleted"}[self]
class NfoMode(IntEnum):
MOVIE = 1
TVSHOW = 2
EPISODE = 3
UPPER = 4
TORTOISE_ORM = {
"connections": {"default": f"sqlite://{DEFAULT_DATABASE_PATH}"},
"apps": {
"models": {
"models": ["models", "aerich.models"],
"default_connection": "default",
},
},
"apps": {"models": {"models": ["models", "aerich.models"], "default_connection": "default"}},
"use_tz": True,
}

View File

@@ -6,28 +6,18 @@ from settings import settings
class PersistedCredential(Credential):
def __init__(self) -> None:
super().__init__(
settings.sessdata,
settings.bili_jct,
settings.buvid3,
settings.dedeuserid,
settings.ac_time_value,
settings.sessdata, settings.bili_jct, settings.buvid3, settings.dedeuserid, settings.ac_time_value
)
async def refresh(self) -> None:
await super().refresh()
(
settings.sessdata,
settings.bili_jct,
settings.dedeuserid,
settings.ac_time_value,
) = (
(settings.sessdata, settings.bili_jct, settings.dedeuserid, settings.ac_time_value) = (
self.sessdata,
self.bili_jct,
self.dedeuserid,
self.ac_time_value,
)
# use a synchronous call for now
settings.save()
await settings.asave()
credential = PersistedCredential()

View File

@@ -1,17 +1,12 @@
import asyncio
import os
import signal
import sys
import uvloop
from loguru import logger
from commands import (
recheck,
refresh_nfo,
refresh_poster,
refresh_subtitle,
refresh_upper,
refresh_video,
)
from commands import recheck, refresh_nfo, refresh_poster, refresh_subtitle, refresh_upper, refresh_video
from models import init_model
from processor import cleanup, process
from settings import settings
@@ -21,6 +16,7 @@ asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
async def entry() -> None:
await init_model()
force = any("force" in _ for _ in sys.argv)
for command, func in (
("once", process),
("recheck", recheck),
@@ -32,7 +28,10 @@ async def entry() -> None:
):
if any(command in _ for _ in sys.argv):
logger.info("Running {}...", command)
await func()
if command.startswith("refresh"):
await func(force=force)
else:
await func()
return
logger.info("Running daemon...")
while True:
@@ -41,8 +40,16 @@ async def entry() -> None:
if __name__ == "__main__":
# make sure resource cleanup is triggered correctly when docker exits
signal.signal(signal.SIGTERM, lambda *_: os.kill(os.getpid(), signal.SIGINT))
with asyncio.Runner() as runner:
try:
runner.run(entry())
except Exception:
logger.exception("Unexpected error occurred, exiting...")
except KeyboardInterrupt:
logger.error("Exit Signal Received, exiting...")
finally:
logger.info("Cleaning up resources...")
runner.run(cleanup())
logger.info("Done, exited.")

View File

@@ -0,0 +1,14 @@
from tortoise import BaseDBAsyncClient
async def upgrade(db: BaseDBAsyncClient) -> str:
return """
CREATE TABLE IF NOT EXISTS "program" (
"id" INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
"version" VARCHAR(20) NOT NULL
);"""
async def downgrade(db: BaseDBAsyncClient) -> str:
return """
DROP TABLE IF EXISTS "program";"""

View File

@@ -0,0 +1,21 @@
from tortoise import BaseDBAsyncClient
async def upgrade(db: BaseDBAsyncClient) -> str:
return """
CREATE TABLE IF NOT EXISTS "favoriteitempage" (
"id" INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
"cid" INT NOT NULL,
"page" INT NOT NULL,
"name" VARCHAR(255) NOT NULL,
"image" TEXT NOT NULL,
"status" SMALLINT NOT NULL DEFAULT 1 /* NORMAL: 1\nINVISIBLE: 2\nDELETED: 3 */,
"downloaded" INT NOT NULL DEFAULT 0,
"favorite_item_id" INT NOT NULL REFERENCES "favoriteitem" ("id") ON DELETE CASCADE,
CONSTRAINT "uid_favoriteite_favorit_c3b50e" UNIQUE ("favorite_item_id", "page")
) /* pages of a favorite item */;"""
async def downgrade(db: BaseDBAsyncClient) -> str:
return """
DROP TABLE IF EXISTS "favoriteitempage";"""
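The migration SQL above can be exercised standalone against an in-memory SQLite database; the parent `favoriteitem` table here is a minimal stand-in, not the project's full schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Minimal stand-in parent table, then the migration's table as written.
conn.executescript("""
CREATE TABLE "favoriteitem" ("id" INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL);
CREATE TABLE IF NOT EXISTS "favoriteitempage" (
    "id" INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
    "cid" INT NOT NULL,
    "page" INT NOT NULL,
    "name" VARCHAR(255) NOT NULL,
    "image" TEXT NOT NULL,
    "status" SMALLINT NOT NULL DEFAULT 1,
    "downloaded" INT NOT NULL DEFAULT 0,
    "favorite_item_id" INT NOT NULL REFERENCES "favoriteitem" ("id") ON DELETE CASCADE,
    CONSTRAINT "uid_favoriteite_favorit_c3b50e" UNIQUE ("favorite_item_id", "page")
);
""")
# First insert for (favorite_item_id=1, page=1) succeeds.
conn.execute(
    "INSERT INTO favoriteitempage (cid, page, name, image, favorite_item_id) VALUES (1, 1, 'p1', '', 1)"
)
```

The `UNIQUE ("favorite_item_id", "page")` constraint is what lets the upsert-style `bulk_create(..., on_conflict=["favorite_item_id", "page"])` calls elsewhere in this diff work.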

models.py
View File

@@ -3,17 +3,12 @@ from asyncio import create_subprocess_exec
from pathlib import Path
from tortoise import Tortoise, fields
from tortoise.fields import Field
from tortoise.models import Model
from constants import (
DEFAULT_THUMB_PATH,
MIGRATE_COMMAND,
TORTOISE_ORM,
MediaStatus,
MediaType,
)
from constants import DEFAULT_THUMB_PATH, MIGRATE_COMMAND, TORTOISE_ORM, MediaStatus, MediaType
from settings import settings
from utils import aopen
from version import VERSION
class FavoriteList(Model):
@@ -40,31 +35,11 @@ class Upper(Model):
@property
def thumb_path(self) -> Path:
return (
DEFAULT_THUMB_PATH / str(self.mid)[0] / f"{self.mid}" / "folder.jpg"
)
return DEFAULT_THUMB_PATH / str(self.mid)[0] / f"{self.mid}" / "folder.jpg"
@property
def meta_path(self) -> Path:
return (
DEFAULT_THUMB_PATH / str(self.mid)[0] / f"{self.mid}" / "person.nfo"
)
async def save_metadata(self):
async with aopen(self.meta_path, "w") as f:
await f.write(
f"""
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<person>
<plot />
<outline />
<lockdata>false</lockdata>
<dateadded>{self.created_at.strftime("%Y-%m-%d %H:%M:%S")}</dateadded>
<title>{self.mid}</title>
<sorttitle>{self.mid}</sorttitle>
</person>
""".strip()
)
return DEFAULT_THUMB_PATH / str(self.mid)[0] / f"{self.mid}" / "person.nfo"
class FavoriteItem(Model):
@@ -73,17 +48,13 @@ class FavoriteItem(Model):
id = fields.IntField(pk=True)
name = fields.CharField(max_length=255)
type = fields.IntEnumField(enum_type=MediaType)
status = fields.IntEnumField(
enum_type=MediaStatus, default=MediaStatus.NORMAL
)
status = fields.IntEnumField(enum_type=MediaStatus, default=MediaStatus.NORMAL)
bvid = fields.CharField(max_length=255)
desc = fields.TextField()
cover = fields.TextField()
tags = fields.JSONField(null=True)
favorite_list = fields.ForeignKeyField(
"models.FavoriteList", related_name="items"
)
upper = fields.ForeignKeyField("models.Upper", related_name="uploads")
favorite_list: Field[FavoriteList] = fields.ForeignKeyField("models.FavoriteList", related_name="items")
upper: Field[Upper] = fields.ForeignKeyField("models.Upper", related_name="uploads")
ctime = fields.DatetimeField()
pubtime = fields.DatetimeField()
fav_time = fields.DatetimeField()
@@ -95,65 +66,129 @@ class FavoriteItem(Model):
unique_together = (("bvid", "favorite_list_id"),)
@property
def safe_name(self) -> str:
return self.name.replace("/", "_")
def tmp_video_path(self) -> Path:
return Path(settings.path_mapper[self.favorite_list_id]) / f"tmp_{self.bvid}_video"
@property
def tmp_audio_path(self) -> Path:
return Path(settings.path_mapper[self.favorite_list_id]) / f"tmp_{self.bvid}_audio"
@property
def video_path(self) -> Path:
return Path(settings.path_mapper[self.favorite_list_id]) / f"{self.bvid}.mp4"
@property
def nfo_path(self) -> Path:
return Path(settings.path_mapper[self.favorite_list_id]) / f"{self.bvid}.nfo"
@property
def poster_path(self) -> Path:
return Path(settings.path_mapper[self.favorite_list_id]) / f"{self.bvid}-poster.jpg"
@property
def upper_path(self) -> list[Path]:
return [self.upper.thumb_path, self.upper.meta_path]
@property
def subtitle_path(self) -> Path:
return Path(settings.path_mapper[self.favorite_list_id]) / f"{self.bvid}.zh-CN.default.ass"
@property
def tvshow_nfo_path(self) -> Path:
"""Used for multi-page videos"""
return Path(settings.path_mapper[self.favorite_list_id]) / self.bvid / "tvshow.nfo"
@property
def tvshow_poster_path(self) -> Path:
"""Used for multi-page videos"""
return Path(settings.path_mapper[self.favorite_list_id]) / self.bvid / "poster.jpg"
class FavoriteItemPage(Model):
"""A single page of a favorite item"""
id = fields.IntField(pk=True)
favorite_item: Field[FavoriteItem] = fields.ForeignKeyField("models.FavoriteItem", related_name="pages")
cid = fields.IntField()
page = fields.IntField()
name = fields.CharField(max_length=255)
image = fields.TextField()
status = fields.IntEnumField(enum_type=MediaStatus, default=MediaStatus.NORMAL)
downloaded = fields.BooleanField(default=False)
class Meta:
unique_together = (("favorite_item_id", "page"),)
@property
def tmp_video_path(self) -> Path:
return (
Path(settings.path_mapper[self.favorite_list_id])
/ f"tmp_{self.bvid}_video"
Path(settings.path_mapper[self.favorite_item.favorite_list_id])
/ self.favorite_item.bvid
/ "Season 1"
/ f"tmp_{self.favorite_item.bvid} - S01E{f'{self.page:02d}'}_video"
)
@property
def tmp_audio_path(self) -> Path:
return (
Path(settings.path_mapper[self.favorite_list_id])
/ f"tmp_{self.bvid}_audio"
Path(settings.path_mapper[self.favorite_item.favorite_list_id])
/ self.favorite_item.bvid
/ "Season 1"
/ f"tmp_{self.favorite_item.bvid} - S01E{f'{self.page:02d}'}_audio"
)
@property
def video_path(self) -> Path:
return (
Path(settings.path_mapper[self.favorite_list_id])
/ f"{self.bvid}.mp4"
Path(settings.path_mapper[self.favorite_item.favorite_list_id])
/ self.favorite_item.bvid
/ "Season 1"
/ f"{self.favorite_item.bvid} - S01E{f'{self.page:02d}'}.mp4"
)
@property
def nfo_path(self) -> Path:
return (
Path(settings.path_mapper[self.favorite_list_id])
/ f"{self.bvid}.nfo"
Path(settings.path_mapper[self.favorite_item.favorite_list_id])
/ self.favorite_item.bvid
/ "Season 1"
/ f"{self.favorite_item.bvid} - S01E{f'{self.page:02d}'}.nfo"
)
@property
def poster_path(self) -> Path:
return (
Path(settings.path_mapper[self.favorite_list_id])
/ f"{self.bvid}-poster.jpg"
Path(settings.path_mapper[self.favorite_item.favorite_list_id])
/ self.favorite_item.bvid
/ "Season 1"
/ f"{self.favorite_item.bvid} - S01E{f'{self.page:02d}'}-thumb.jpg"
)
@property
def upper_path(self) -> list[Path]:
return [
self.upper.thumb_path,
self.upper.meta_path,
]
@property
def subtitle_path(self) -> Path:
return (
Path(settings.path_mapper[self.favorite_list_id])
/ f"{self.bvid}.zh-CN.default.ass"
Path(settings.path_mapper[self.favorite_item.favorite_list_id])
/ self.favorite_item.bvid
/ "Season 1"
/ f"{self.favorite_item.bvid} - S01E{f'{self.page:02d}'}.zh-CN.default.ass"
)
class Program(Model):
id = fields.IntField(pk=True)
version = fields.CharField(max_length=20)
async def init_model() -> None:
await Tortoise.init(config=TORTOISE_ORM)
migrate_commands = (
[MIGRATE_COMMAND, "upgrade"]
if os.getenv("BILI_IN_DOCKER")
else ["poetry", "run", MIGRATE_COMMAND, "upgrade"]
[MIGRATE_COMMAND, "upgrade"] if os.getenv("BILI_IN_DOCKER") else ["poetry", "run", MIGRATE_COMMAND, "upgrade"]
)
process = await create_subprocess_exec(*migrate_commands)
await process.communicate()
program, created = await Program.get_or_create(defaults={"version": VERSION})
if created or program.version != VERSION:
# put migration logic for new versions here
pass
program.version = VERSION
await program.save()

nfo.py
View File

@@ -1,28 +1,78 @@
import datetime
from abc import abstractmethod
from dataclasses import dataclass
from pathlib import Path
from models import FavoriteItem, FavoriteItemPage, Upper
from utils import aopen
@dataclass
class Actor:
class Base:
"""Base class with a couple of utility methods"""
@abstractmethod
def to_xml(self) -> str:
...
@staticmethod
def escape(s: str) -> str:
"""Escape xml special characters"""
return s.translate(str.maketrans({"<": "&lt;", ">": "&gt;", "&": "&amp;", "'": "&apos;", '"': "&quot;"}))
async def to_file(self, path: Path) -> None:
"""Write the xml to a file"""
async with aopen(path, "w", encoding="utf-8") as f:
await f.write(self.to_xml())
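The translate-based escape above has a useful property worth noting: `str.translate` is a single pass over the string, so the entities it inserts are never re-escaped, unlike a chain of `str.replace` calls that handles `&` in the wrong order. A minimal standalone version (not the project's exact class):

```python
# One translation table covering the five xml special characters.
_XML_ESCAPES = str.maketrans(
    {"<": "&lt;", ">": "&gt;", "&": "&amp;", "'": "&apos;", '"': "&quot;"}
)

def escape(s: str) -> str:
    # Single-pass substitution: the '&' in '&lt;' is never escaped again.
    return s.translate(_XML_ESCAPES)
```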
@dataclass
class EpisodeInfo(Base):
"""Episode info for one page of a multi-page video"""
title: str
season: int
episode: int
@staticmethod
def from_favorite_item_page(page: FavoriteItemPage) -> "EpisodeInfo":
return EpisodeInfo(title=page.name, season=1, episode=page.page)
def to_xml(self) -> str:
return f"""
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<episodedetails>
<plot />
<outline />
<title>{self.escape(self.title)}</title>
<season>{self.season}</season>
<episode>{self.episode}</episode>
</episodedetails>
""".strip()
@dataclass
class Actor(Base):
name: str
role: str
@staticmethod
def from_upper(upper: Upper) -> "Actor":
return Actor(name=upper.mid, role=upper.name)
def to_xml(self) -> str:
return f"""
<actor>
<name>{self.name}</name>
<role>{self.role}</role>
<role>{self.escape(self.role)}</role>
</actor>
""".strip(
"\n"
)
""".strip()
@dataclass
class EpisodeInfo:
class MovieInfo(Base):
"""Video info for a single-page video"""
title: str
plot: str
tags: list[str]
@@ -30,29 +80,94 @@ class EpisodeInfo:
bvid: str
aired: datetime.datetime
async def write_nfo(self, path: Path) -> None:
async with aopen(path, "w", encoding="utf-8") as f:
await f.write(self.to_xml())
@staticmethod
def from_favorite_item(fav_item: FavoriteItem) -> "MovieInfo":
return MovieInfo(
title=fav_item.name,
plot=fav_item.desc,
actor=[Actor.from_upper(fav_item.upper)],
tags=fav_item.tags,
bvid=fav_item.bvid,
aired=fav_item.ctime,
)
def to_xml(self) -> str:
actor = "\n".join(_.to_xml() for _ in self.actor)
tags = (
"\n".join(f" <genre>{_}</genre>" for _ in self.tags)
if isinstance(self.tags, list)
else ""
"\n".join(f" <genre>{self.escape(_)}</genre>" for _ in self.tags) if isinstance(self.tags, list) else ""
)
return f"""
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<episodedetails>
<plot><![CDATA[{self.plot}]]></plot>
<movie>
<plot><![CDATA[{self.escape(self.plot)}]]></plot>
<outline />
<title>{self.title}</title>
<title>{self.escape(self.title)}</title>
{actor}
<year>{self.aired.year}</year>
{tags}
<uniqueid type="bilibili">{self.bvid}</uniqueid>
<aired>{self.aired.strftime("%Y-%m-%d")}</aired>
</episodedetails>
""".strip(
"\n"
</movie>
""".strip()
@dataclass
class TVShowInfo(Base):
title: str
plot: str
tags: list[str]
actor: list[Actor]
bvid: str
aired: datetime.datetime
@staticmethod
def from_favorite_item(fav_item: FavoriteItem) -> "TVShowInfo":
return TVShowInfo(
title=fav_item.name,
plot=fav_item.desc,
actor=[Actor.from_upper(fav_item.upper)],
tags=fav_item.tags,
bvid=fav_item.bvid,
aired=fav_item.ctime,
)
def to_xml(self) -> str:
actor = "\n".join(_.to_xml() for _ in self.actor)
tags = (
"\n".join(f" <genre>{self.escape(_)}</genre>" for _ in self.tags) if isinstance(self.tags, list) else ""
)
return f"""
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<tvshow>
<plot><![CDATA[{self.escape(self.plot)}]]></plot>
<outline />
<title>{self.escape(self.title)}</title>
{actor}
<year>{self.aired.year}</year>
{tags}
<uniqueid type="bilibili">{self.bvid}</uniqueid>
<aired>{self.aired.strftime("%Y-%m-%d")}</aired>
</tvshow>
""".strip()
@dataclass
class UpperInfo(Base):
mid: int
created_at: datetime.datetime
def from_upper(upper: Upper) -> "UpperInfo":
return UpperInfo(mid=upper.mid, created_at=upper.created_at)
def to_xml(self) -> str:
return f"""
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<person>
<plot />
<outline />
<lockdata>false</lockdata>
<dateadded>{self.created_at.strftime("%Y-%m-%d %H:%M:%S")}</dateadded>
<title>{self.mid}</title>
<sorttitle>{self.mid}</sorttitle>
</person>
""".strip()

poetry.lock generated

File diff suppressed because it is too large

View File

@@ -1,17 +1,21 @@
import asyncio
import contextlib
import datetime
from asyncio import Semaphore, create_subprocess_exec
from asyncio.subprocess import DEVNULL
from asyncio.subprocess import PIPE
from pathlib import Path
from bilibili_api import ass, favorite_list, video
from bilibili_api.exceptions import ResponseCodeException
from loguru import logger
from tortoise import Tortoise
from tortoise.connection import connections
from tortoise.models import Model
from constants import FFMPEG_COMMAND, MediaStatus, MediaType
from constants import FFMPEG_COMMAND, MediaStatus, MediaType, NfoMode
from credential import credential
from models import FavoriteItem, FavoriteList, Upper
from nfo import Actor, EpisodeInfo
from models import FavoriteItem, FavoriteItemPage, FavoriteList, Upper
from nfo import Base as NfoBase
from nfo import EpisodeInfo, MovieInfo, TVShowInfo, UpperInfo
from settings import settings
from utils import aexists, amakedirs, client, download_content
@@ -20,10 +24,11 @@ anchor = datetime.date.today()
async def cleanup() -> None:
await client.aclose()
await Tortoise.close_connections()
await connections.close_all()
def concurrent_decorator(concurrency: int) -> callable:
"""A simple concurrency-limiting decorator: at most `concurrency` decorated calls can run at once"""
sem = Semaphore(value=concurrency)
def decorator(func: callable) -> callable:
@@ -36,18 +41,12 @@ def concurrent_decorator(concurrency: int) -> callable:
return decorator
async def manage_model(medias: list[dict], fav_list: FavoriteList) -> None:
async def update_favorite_item(medias: list[dict], fav_list: FavoriteList) -> None:
"""Update database records from the list of videos in a favorite list"""
uppers = [
Upper(
mid=media["upper"]["mid"],
name=media["upper"]["name"],
thumb=media["upper"]["face"],
)
for media in medias
Upper(mid=media["upper"]["mid"], name=media["upper"]["name"], thumb=media["upper"]["face"]) for media in medias
]
await Upper.bulk_create(
uppers, on_conflict=["mid"], update_fields=["name", "thumb"]
)
await Upper.bulk_create(uppers, on_conflict=["mid"], update_fields=["name", "thumb"], batch_size=300)
items = [
FavoriteItem(
name=media["title"],
@@ -67,15 +66,20 @@ async def manage_model(medias: list[dict], fav_list: FavoriteList) -> None:
await FavoriteItem.bulk_create(
items,
on_conflict=["bvid", "favorite_list_id"],
update_fields=[
"name",
"type",
"desc",
"cover",
"ctime",
"pubtime",
"fav_time",
],
update_fields=["name", "type", "desc", "cover", "ctime", "pubtime", "fav_time"],
batch_size=300,
)
async def update_favorite_item_page(pages: list[dict], item: FavoriteItem):
pages = [
FavoriteItemPage(
favorite_item=item, cid=page["cid"], page=page["page"], name=page["part"], image=page.get("first_frame", "")
)
for page in pages
]
await FavoriteItemPage.bulk_create(
pages, on_conflict=["favorite_item_id", "page"], update_fields=["cid", "name", "image"], batch_size=300
)
@@ -91,13 +95,11 @@ async def process() -> None:
except Exception:
logger.exception("Failed to refresh credential.")
return
for favorite_id in settings.favorite_ids:
if favorite_id not in settings.path_mapper:
logger.warning(
f"Favorite {favorite_id} not in path mapper, ignored."
)
continue
await process_favorite(favorite_id)
for favorite_id in settings.path_mapper:
try:
await process_favorite(favorite_id)
except asyncio.TimeoutError:
logger.exception("Process favorite {} timeout.", favorite_id)
async def process_favorite(favorite_id: int) -> None:
@@ -106,11 +108,7 @@ async def process_favorite(favorite_id: int) -> None:
favorite_id, page=1, credential=credential
)
title = favorite_video_list["info"]["title"]
logger.info("Start to process favorite {}: {}.", favorite_id, title)
fav_list, _ = await FavoriteList.get_or_create(
id=favorite_id, defaults={"name": favorite_video_list["info"]["title"]}
)
@@ -119,43 +117,28 @@ async def process_favorite(favorite_id: int) -> None:
while True:
page += 1
if page > 1:
favorite_video_list = await favorite_list.get_video_favorite_list_content(
favorite_id, page=page, credential=credential
)
# First, check whether records with these bvids already exist
existed_items = await FavoriteItem.filter(
favorite_list=fav_list, bvid__in=[media["bvid"] for media in favorite_video_list["medias"]]
)
# Record the bvid and fav_time of each entry in the fetched list
media_info = {(media["bvid"], media["fav_time"]) for media in favorite_video_list["medias"]}
# A record matching on both bvid and fav_time means we have reached the position processed last time
continue_flag = not media_info & {(item.bvid, int(item.fav_time.timestamp())) for item in existed_items}
await update_favorite_item(favorite_video_list["medias"], fav_list)
if not (continue_flag and favorite_video_list["has_more"]):
break
all_unprocessed_items = await FavoriteItem.filter(
favorite_list=fav_list, type=MediaType.VIDEO, status=MediaStatus.NORMAL, downloaded=False
).prefetch_related("upper")
await asyncio.gather(*[process_favorite_item(item) for item in all_unprocessed_items], return_exceptions=True)
logger.info("Favorite {} {} has been processed.", favorite_id, title)
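The `continue_flag` check above stops pagination as soon as any `(bvid, fav_time)` pair from the current page is already in the database. The set-intersection logic can be sketched stand-alone (pure Python, with hypothetical data):

```python
def should_continue(page_medias: list[dict], existed: set[tuple[str, int]]) -> bool:
    # Collect (bvid, fav_time) pairs from the current page; a non-empty
    # intersection with known records means we reached the position
    # handled on the previous run, so paging should stop.
    media_info = {(media["bvid"], media["fav_time"]) for media in page_medias}
    return not (media_info & existed)

# A fresh page keeps pagination going; a previously seen pair stops it.
page = [{"bvid": "BV1xx", "fav_time": 100}, {"bvid": "BV1yy", "fav_time": 90}]
```

Comparing on the pair rather than bvid alone matters: re-favoriting a video changes `fav_time`, so it is treated as new.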
@concurrent_decorator(concurrency=4)
async def process_favorite_item(
fav_item: FavoriteItem,
process_poster=True,
@@ -163,170 +146,309 @@ async def process_favorite_item(
process_nfo=True,
process_upper=True,
process_subtitle=True,
refresh_mode=False,
) -> None:
logger.info("Start to process video {} {}.", fav_item.bvid, fav_item.name)
if fav_item.type != MediaType.VIDEO:
logger.warning("Media {} {} is not a video, skipped.", fav_item.bvid, fav_item.name)
return
v = video.Video(fav_item.bvid, credential=credential)
# If tags have not been fetched yet, try to fetch them; this is not critical, so errors are ignored
with contextlib.suppress(Exception):
if fav_item.tags is None:
fav_item.tags = [_["tag_name"] for _ in await v.get_tags()]
# Handling uploader info is unrelated to whether the video is multi-page, so do it up front
if process_upper:
result = await asyncio.gather(
get_file(fav_item.upper.thumb, fav_item.upper.thumb_path),
get_nfo(fav_item.upper.meta_path, obj=fav_item.upper, mode=NfoMode.UPPER),
return_exceptions=True,
)
if any(isinstance(_, FileExistsError) for _ in result):
logger.info("Upper {} {} already exists, skipped.", fav_item.upper.mid, fav_item.upper.name)
elif any(isinstance(_, Exception) for _ in result):
logger.exception("Failed to process upper {} {}.", fav_item.upper.mid, fav_item.upper.name)
single_page = False
if settings.paginated_video:
pages = None
if not refresh_mode:
# When not triggered manually, refresh the pages first
try:
tmp_pages = await v.get_pages()
if len(tmp_pages) <= 1:
single_page = True
else:
await update_favorite_item_page(tmp_pages, fav_item)
except Exception:
logger.exception("Failed to get pages of video {} {}.", fav_item.bvid, fav_item.name)
# Load the pages from the table
pages = await FavoriteItemPage.filter(favorite_item=fav_item).order_by("page")
for page in pages:
page.favorite_item = fav_item
if pages and not single_page:
if process_nfo:
try:
await get_nfo(fav_item.tvshow_nfo_path, obj=fav_item, mode=NfoMode.TVSHOW)
except FileExistsError:
logger.info("Nfo of {} {} already exists, skipped.", fav_item.bvid, fav_item.name)
except Exception:
logger.exception("Failed to process nfo of video {} {}.", fav_item.bvid, fav_item.name)
if process_poster:
try:
await get_file(fav_item.cover, fav_item.tvshow_poster_path)
except FileExistsError:
logger.info("Poster of {} {} already exists, skipped.", fav_item.bvid, fav_item.name)
except Exception:
logger.exception("Failed to process poster of video {} {}.", fav_item.bvid, fav_item.name)
await asyncio.gather(
*[
process_favorite_item_page(page, v, process_poster, process_video, process_nfo, process_subtitle)
for page in pages
],
return_exceptions=True,
)
fav_item.downloaded = all(page.downloaded for page in pages)
page_status = {page.status for page in pages}
if MediaStatus.INVISIBLE in page_status:
fav_item.status = MediaStatus.INVISIBLE
elif MediaStatus.DELETED in page_status:
fav_item.status = MediaStatus.DELETED
else:
fav_item.status = MediaStatus.NORMAL
if single_page or not settings.paginated_video:
if process_nfo:
try:
await get_nfo(fav_item.nfo_path, obj=fav_item, mode=NfoMode.MOVIE)
except FileExistsError:
logger.info("NFO of {} {} already exists, skipped.", fav_item.bvid, fav_item.name)
except Exception:
logger.exception("Failed to process nfo of video {} {}.", fav_item.bvid, fav_item.name)
if process_poster:
try:
await get_file(fav_item.cover, fav_item.poster_path)
except FileExistsError:
logger.info("Poster of {} {} already exists, skipped.", fav_item.bvid, fav_item.name)
except Exception:
logger.exception("Failed to process poster of video {} {}.", fav_item.bvid, fav_item.name)
if process_subtitle:
try:
await get_subtitle(v, 0, fav_item.subtitle_path)
except FileExistsError:
logger.info("Subtitle of {} {} already exists, skipped.", fav_item.bvid, fav_item.name)
except Exception:
logger.exception("Failed to process subtitle of video {} {}.", fav_item.bvid, fav_item.name)
if process_video:
try:
await get_video(v, 0, fav_item.tmp_video_path, fav_item.tmp_audio_path, fav_item.video_path)
fav_item.downloaded = True
except FileExistsError:
logger.info("Video {} {} already exists, skipped.", fav_item.bvid, fav_item.name)
fav_item.downloaded = True
except Exception as e:
errcode_status = {62002: MediaStatus.INVISIBLE, -404: MediaStatus.DELETED}
if not (isinstance(e, ResponseCodeException) and (status := errcode_status.get(e.code))):
logger.exception("Failed to process video {} {}.", fav_item.bvid, fav_item.name)
else:
fav_item.status = status
logger.error(
"Video {} {} is not available, marked as {}.",
fav_item.bvid,
fav_item.name,
fav_item.status.text,
)
await fav_item.save()
logger.info("{} {} has been processed.", fav_item.bvid, fav_item.name)
@concurrent_decorator(concurrency=4)
async def process_favorite_item_page(
fav_page: FavoriteItemPage,
v: video.Video,
process_poster=True,
process_video=True,
process_nfo=True,
process_subtitle=True,
):
logger.info(
"Start to process video {} {} page {}.", fav_page.favorite_item.bvid, fav_page.favorite_item.name, fav_page.page
)
if process_nfo:
try:
await get_nfo(fav_page.nfo_path, obj=fav_page, mode=NfoMode.EPISODE)
except FileExistsError:
logger.info(
"NFO of {} {} page {} already exists, skipped.",
fav_page.favorite_item.bvid,
fav_page.favorite_item.name,
fav_page.page,
)
except Exception:
logger.exception(
"Failed to process nfo of video {} {} page {}.",
fav_page.favorite_item.bvid,
fav_page.favorite_item.name,
fav_page.page,
)
if process_poster:
try:
await get_file(fav_page.image or fav_page.favorite_item.cover, fav_page.poster_path)
except FileExistsError:
logger.info(
"Poster of {} {} page {} already exists, skipped.",
fav_page.favorite_item.bvid,
fav_page.favorite_item.name,
fav_page.page,
)
except Exception:
logger.exception(
"Failed to process poster of video {} {} page {}.",
fav_page.favorite_item.bvid,
fav_page.favorite_item.name,
fav_page.page,
)
if process_subtitle:
try:
await get_subtitle(v, fav_page.page - 1, fav_page.subtitle_path)
except FileExistsError:
logger.info(
"Subtitle of {} {} page {} already exists, skipped.",
fav_page.favorite_item.bvid,
fav_page.favorite_item.name,
fav_page.page,
)
except Exception:
logger.exception(
"Failed to process subtitle of video {} {} page {}.",
fav_page.favorite_item.bvid,
fav_page.favorite_item.name,
fav_page.page,
)
if process_video:
try:
await get_video(v, fav_page.page - 1, fav_page.tmp_video_path, fav_page.tmp_audio_path, fav_page.video_path)
fav_page.downloaded = True
except FileExistsError:
logger.info(
"Video {} {} page {} already exists, skipped.",
fav_page.favorite_item.bvid,
fav_page.favorite_item.name,
fav_page.page,
)
fav_page.downloaded = True
except Exception as e:
errcode_status = {62002: MediaStatus.INVISIBLE, -404: MediaStatus.DELETED}
if not (isinstance(e, ResponseCodeException) and (status := errcode_status.get(e.code))):
logger.exception(
"Failed to process video {} {} page {}.",
fav_page.favorite_item.bvid,
fav_page.favorite_item.name,
fav_page.page,
)
else:
fav_page.status = status
logger.error(
"Video {} {} page {} is not available, marked as {}.",
fav_page.favorite_item.bvid,
fav_page.favorite_item.name,
fav_page.page,
fav_page.status.text,
)
await fav_page.save()
logger.info(
"{} {} page {} has been processed.", fav_page.favorite_item.bvid, fav_page.favorite_item.name, fav_page.page
)
async def get_video(v: video.Video, page_id: int, tmp_video_path: Path, tmp_audio_path: Path, video_path: Path) -> None:
"""Download one page of a video, given the temporary video/audio paths and the target video path"""
if await aexists(video_path):
# The target video already exists; skip it
raise FileExistsError
await amakedirs(video_path.parent, exist_ok=True)
# Detect the video streams of the corresponding page
detector = video.VideoDownloadURLDataDetecter(await v.get_download_url(page_index=page_id))
streams = detector.detect_best_streams(**settings.stream.model_dump())
if detector.check_flv_stream():
# For FLV, download directly
await download_content(streams[0].url, tmp_video_path)
process = await create_subprocess_exec(
FFMPEG_COMMAND, "-i", tmp_video_path, video_path, stdout=PIPE, stderr=PIPE
)
stdout, stderr = await process.communicate()
tmp_video_path.unlink(missing_ok=True)
else:
# For non-FLV, the video stream must be downloaded first
paths, tasks = ([tmp_video_path], [download_content(streams[0].url, tmp_video_path)])
if streams[1]:
# If there is an audio stream, download it as well
paths.append(tmp_audio_path)
tasks.append(download_content(streams[1].url, tmp_audio_path))
await asyncio.gather(*tasks)
process = await create_subprocess_exec(
FFMPEG_COMMAND,
*sum([["-i", path] for path in paths], []),
"-c",
"copy",
video_path,
stdout=PIPE,
stderr=PIPE,
)
stdout, stderr = await process.communicate()
for path in paths:
path.unlink(missing_ok=True)
if process.returncode != 0:
raise RuntimeError(
f"{FFMPEG_COMMAND} exited with non-zero code {process.returncode}."
f"\nstdout:\n{stdout.decode()}"
f"\nstderr:\n{stderr.decode()}"
)
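A note on the `*sum([["-i", path] for path in paths], [])` expression in `get_video`: it flattens the per-input flag pairs into a single argument list for the copy-mux invocation. A pure-Python sketch (the helper name is hypothetical):

```python
from pathlib import Path

def build_mux_args(ffmpeg: str, paths: list[Path], video_path: Path) -> list[str]:
    # Flatten [["-i", p1], ["-i", p2]] into ["-i", p1, "-i", p2], as the
    # sum(..., []) expression does, then append the stream-copy output.
    input_args = sum([["-i", str(path)] for path in paths], [])
    return [ffmpeg, *input_args, "-c", "copy", str(video_path)]

args = build_mux_args("ffmpeg", [Path("v.m4s"), Path("a.m4s")], Path("out.mkv"))
```

With a single video-only input the same expression degrades gracefully to one `-i` pair, which is why the audio path is appended to `paths` only when an audio stream exists.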
async def get_file(url: str, path: Path) -> None:
"""A simple download wrapper, used for covers and similar content"""
if await aexists(path):
# The target file already exists; skip it
raise FileExistsError
await amakedirs(path.parent, exist_ok=True)
await download_content(url, path)
async def get_subtitle(v: video.Video, page_id: int, subtitle_path: Path) -> None:
"""Download the danmaku subtitle of one page of a video to the given subtitle path"""
if await aexists(subtitle_path):
# The target subtitle already exists; skip it
raise FileExistsError
await amakedirs(subtitle_path.parent, exist_ok=True)
await ass.make_ass_file_danmakus_protobuf(
v,
page_id,
str(subtitle_path.resolve()),
credential=credential,
font_name=settings.subtitle.font_name,
font_size=settings.subtitle.font_size,
alpha=settings.subtitle.alpha,
fly_time=settings.subtitle.fly_time,
static_time=settings.subtitle.static_time,
)
async def get_nfo(nfo_path: Path, *, obj: Model, mode: NfoMode) -> None:
"""Write the corresponding NFO info to a file, given the NFO path, object, and mode"""
if await aexists(nfo_path):
# The target NFO already exists; skip it
raise FileExistsError
await amakedirs(nfo_path.parent, exist_ok=True)
# Generate a different NFO depending on the mode
nfo: NfoBase | None = None
match obj, mode:
case FavoriteItem(), NfoMode.MOVIE:
nfo = MovieInfo.from_favorite_item(obj)
case FavoriteItem(), NfoMode.TVSHOW:
nfo = TVShowInfo.from_favorite_item(obj)
case FavoriteItemPage(), NfoMode.EPISODE:
nfo = EpisodeInfo.from_favorite_item_page(obj)
case Upper(), NfoMode.UPPER:
nfo = UpperInfo.from_upper(obj)
case _:
raise ValueError
await nfo.to_file(nfo_path)


@@ -1,6 +1,6 @@
[tool.poetry]
name = "bili-sync"
version = "1.1.9"
description = ""
authors = ["amtoaer <amtoaer@gmail.com>"]
license = "GPL-3.0"
@@ -8,25 +8,25 @@ readme = "README.md"
[tool.poetry.dependencies]
python = "^3.11"
aerich = "0.7.2"
aiofiles = "23.2.1"
bilibili-api-python = {git = "https://github.com/Nemo2011/bilibili-api.git", rev = "16.2.0b2"}
loguru = "0.7.2"
pydantic = "2.5.3"
tortoise-orm = "0.20.0"
uvloop = "0.19.0"
[tool.poetry.group.dev.dependencies]
black = "23.11.0"
bump-my-version = "0.15.4"
ipython = "8.17.2"
ruff = "0.2.2"
[tool.black]
line-length = 100
[tool.ruff]
line-length = 120
lint.select = [
"F", # https://beta.ruff.rs/docs/rules/#pyflakes-f
"E",
"W", # https://beta.ruff.rs/docs/rules/#pycodestyle-e-w
@@ -50,9 +50,11 @@ select = [
"NPY", # https://beta.ruff.rs/docs/rules/#numpy-specific-rules-npy
"RUF100", # https://beta.ruff.rs/docs/configuration/#automatic-noqa-management
]
lint.ignore = [
"A003", # Class attribute `id` is shadowing a Python builtin
]
lint.isort.split-on-trailing-comma = false
format.skip-magic-trailing-comma = true
exclude = ["migrations"]
[tool.aerich]
@@ -60,6 +62,28 @@ tortoise_orm = "constants.TORTOISE_ORM"
location = "./migrations"
src_folder = "./."
[tool.bumpversion]
commit = true
message = "chore: bump version from {current_version} to {new_version}"
tag = true
tag_name = "{new_version}"
tag_message = ""
current_version = "1.1.9"
parse = "(?P<major>\\d+)\\.(?P<minor>\\d+)\\.(?P<patch>\\d+)"
[[tool.bumpversion.files]]
filename = "version.py"
[[tool.bumpversion.files]]
filename = "pyproject.toml"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
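The `parse` pattern under `[tool.bumpversion]` is a plain regex that splits a semver-style string into major/minor/patch named groups. A quick stdlib check (the helper name is an assumption, not part of bump-my-version):

```python
import re

# The parse pattern from [tool.bumpversion] above, as a Python raw string.
VERSION_PATTERN = re.compile(r"(?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)")

def parse_version(version: str) -> dict[str, int]:
    # fullmatch rejects trailing garbage such as pre-release suffixes,
    # which this pattern does not model.
    match = VERSION_PATTERN.fullmatch(version)
    if match is None:
        raise ValueError(f"not a valid version: {version!r}")
    return {k: int(v) for k, v in match.groupdict().items()}
```

bump-my-version uses these named groups to rewrite `current_version` here and `VERSION` in `version.py`, the two files listed in `[[tool.bumpversion.files]]`.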


@@ -1,63 +1,100 @@
from pathlib import Path
from bilibili_api.video import AudioQuality, VideoCodecs, VideoQuality
from pydantic import BaseModel, Field, field_validator, root_validator
from pydantic_core import PydanticCustomError
from typing_extensions import Annotated
from constants import DEFAULT_CONFIG_PATH
from utils import amakedirs, aopen
class SubtitleConfig(BaseModel):
font_name: str = "微软雅黑,黑体"  # font
font_size: float = 40  # font size
alpha: float = 0.8  # opacity
fly_time: float = 5  # display duration of scrolling danmaku
static_time: float = 10  # display duration of static danmaku
class StreamConfig(BaseModel):
video_max_quality: VideoQuality = VideoQuality._8K
audio_max_quality: AudioQuality = AudioQuality._192K
video_min_quality: VideoQuality = VideoQuality._360P
audio_min_quality: AudioQuality = AudioQuality._64K
codecs: list[VideoCodecs] = Field(
default_factory=lambda: [VideoCodecs.AV1, VideoCodecs.AVC, VideoCodecs.HEV], min_length=1
)
no_dolby_video: bool = False
no_dolby_audio: bool = False
no_hdr: bool = False
no_hires: bool = False
@field_validator("codecs", mode="after")
def codec_validator(cls, codecs: list[VideoCodecs]) -> list[VideoCodecs]:
if len(codecs) != len(set(codecs)):
raise PydanticCustomError("unique_list", "List must be unique")
return codecs
class Config(BaseModel):
sessdata: Annotated[str, Field(min_length=1)] = ""
bili_jct: Annotated[str, Field(min_length=1)] = ""
buvid3: Annotated[str, Field(min_length=1)] = ""
dedeuserid: Annotated[str, Field(min_length=1)] = ""
ac_time_value: Annotated[str, Field(min_length=1)] = ""
interval: int = 20
path_mapper: dict[int, str] = Field(default_factory=dict)
subtitle: SubtitleConfig = Field(default_factory=SubtitleConfig)
stream: StreamConfig = Field(default_factory=StreamConfig)
paginated_video: bool = False
@root_validator(pre=True)
def migrate(cls, values: dict) -> dict:
# Migrate the legacy top-level codec option into stream.codecs
if "codec" in values and "stream" not in values:
values["stream"] = {"codecs": values.pop("codec")}
return values
@staticmethod
def load(path: Path | None = None) -> "Config":
if not path:
path = DEFAULT_CONFIG_PATH
try:
with path.open("r") as f:
return Config.model_validate_json(f.read())
except Exception as e:
raise RuntimeError(f"Failed to load config file: {path}") from e
def save(self, path: Path | None = None) -> "Config":
if not path:
path = DEFAULT_CONFIG_PATH
try:
path.parent.mkdir(parents=True, exist_ok=True)
with path.open("w") as f:
f.write(Config.model_dump_json(self, indent=4))
return self
except Exception as e:
raise RuntimeError(f"Failed to save config file: {path}") from e
async def asave(self, path: Path | None = None) -> "Config":
if not path:
path = DEFAULT_CONFIG_PATH
try:
await amakedirs(path.parent, exist_ok=True)
async with aopen(path, "w") as f:
await f.write(Config.model_dump_json(self, indent=4))
return self
except Exception as e:
raise RuntimeError(f"Failed to save config file: {path}") from e
def init_settings() -> Config:
if not DEFAULT_CONFIG_PATH.exists():
# If the config file does not exist, write empty default values
Config().save(DEFAULT_CONFIG_PATH)
# Load the config file; validation failures raise, and on success re-save the file (writing defaults for newly added options)
return Config.load(DEFAULT_CONFIG_PATH).save()
settings = init_settings()
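The `migrate` root validator above lifts the legacy top-level `codec` option into the new nested `stream.codecs` field before validation runs. The same transformation on a plain dict, as a stand-alone sketch without pydantic:

```python
def migrate_config(values: dict) -> dict:
    # Mirror of Config.migrate: move the old "codec" list under the new
    # "stream" key, but never clobber an explicitly provided "stream".
    if "codec" in values and "stream" not in values:
        values["stream"] = {"codecs": values.pop("codec")}
    return values
```

Running this in a `pre`-phase validator means old config files keep loading unchanged, and the subsequent `save()` in `init_settings` persists the migrated shape.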


@@ -27,9 +27,7 @@ async def amakedirs(path: Path, exist_ok=False) -> None:
await makedirs(path, exist_ok=exist_ok)
def aopen(path: Path, mode: str = "r", **kwargs) -> AiofilesContextManager[None, None, AsyncTextIOWrapper]:
return aiofiles.open(path, mode, **kwargs)

version.py (new file)

@@ -0,0 +1 @@
VERSION = "1.1.9"