Compare commits


45 Commits
1.0.5 ... 1.1.5

Author SHA1 Message Date
amtoaer
8a7a7e370b chore: bump version from 1.1.4 to 1.1.5 2024-02-02 17:29:13 +08:00
amtoaer
6ce143647c chore: update upstream dependencies 2024-02-02 17:29:07 +08:00
amtoaer
668c67da53 chore: bump version from 1.1.3 to 1.1.4 2024-01-20 15:50:12 +08:00
ᴀᴍᴛᴏᴀᴇʀ
9204bbb4ad fix: fix new config options not being written to the config file; raise the single-line character limit (#33) 2024-01-20 15:37:43 +08:00
ᴀᴍᴛᴏᴀᴇʀ
d467750d4f feat: support specifying codec priority (#32) 2024-01-20 15:16:48 +08:00
amtoaer
641cc3f48b chore: optimize the Dockerfile to shrink the image size 2024-01-06 02:13:00 +08:00
amtoaer
345c764463 fix: release resources properly on Docker exit 2024-01-06 00:41:28 +08:00
amtoaer
85b7d3dc9b chore: restore the previous Dockerfile layout; cache reuse is harder but the image is smaller 2024-01-05 23:34:32 +08:00
amtoaer
f1ada17f30 chore: bump version from 1.1.2 to 1.1.3 2024-01-05 01:15:12 +08:00
amtoaer
cb0ac7eb67 chore: enable auto-commit and auto-tagging 2024-01-05 01:13:36 +08:00
amtoaer
31efedbde9 chore: fix broken dependencies; streamline the Dockerfile build 2024-01-05 01:11:10 +08:00
amtoaer
3defb07325 chore: store the version number and add an entry point to ease cross-version migrations 2024-01-04 22:13:03 +08:00
amtoaer
e36f829e70 chore: introduce bump-version and set the version number correctly 2024-01-04 22:04:10 +08:00
amtoaer
c20b579523 chore: sort the dependencies 2024-01-04 21:54:27 +08:00
amtoaer
ceec222604 chore: update upstream dependencies; fix the cookie-refresh failure 2024-01-04 21:50:28 +08:00
amtoaer
60ea7795ae chore: change the base image tag 2024-01-04 21:07:08 +08:00
DDSDerek
6cbacbd127 chore: Optimization docker (#17)
* feat: docker build adds cache

* fix: dockerfile optimization

* doc: dockerhub pictures are not displayed properly

---------

Co-authored-by: DDSRem <1448139087@qq.com>
2024-01-04 20:51:03 +08:00
DDSDerek
8ea2fbe0f9 fix: docker meta username error (#16)
Co-authored-by: DDSRem <1448139087@qq.com>
2023-12-30 14:31:48 +08:00
DDSDerek
e3fded16ac feat: support arm64 architecture (#15)
Co-authored-by: DDSRem <1448139087@qq.com>
2023-12-30 14:22:26 +08:00
amtoaer
961913c4fb doc: add subtitle-related documentation 2023-12-07 22:11:37 +08:00
amtoaer
fa20e5efee feat: expose all danmaku settings 2023-12-07 21:45:18 +08:00
amtoaer
38fb0a4560 fix: remove config options safely 2023-12-07 21:29:57 +08:00
amtoaer
9e94e3b73e chore: split try/except by block; remove unused settings 2023-12-07 21:15:40 +08:00
amtoaer
b955a9fe45 chore: replace methods marked as deprecated 2023-12-06 18:17:17 +08:00
amtoaer
9d151b4731 feat: commands no longer overwrite existing content by default; update docs 2023-12-06 01:19:08 +08:00
amtoaer
1686c1a8df feat: support danmaku download 2023-12-06 00:39:46 +08:00
amtoaer
de6eaeb4a6 chore: tidy up code logic; leave an entry point for subtitle download 2023-12-06 00:00:42 +08:00
amtoaer
46d1810e7c chore: hoist the success log one level up 2023-12-04 01:50:31 +08:00
amtoaer
89e2567fef feat: tag fetch failures no longer break the main flow 2023-12-04 01:35:29 +08:00
amtoaer
38caf1f0d6 fix: fix a runtime error 2023-12-04 01:02:04 +08:00
amtoaer
6877171f4d fix: fix a parameter error 2023-12-04 00:54:04 +08:00
amtoaer
29d06a040b fix: register the command 2023-12-04 00:50:22 +08:00
amtoaer
ceec5d6780 feat: tentatively support video tags 2023-12-04 00:39:42 +08:00
amtoaer
650498d4a1 fix: check the credential only once per day 2023-12-03 23:14:15 +08:00
amtoaer
96ff84391d doc: fix overly long image paths 2023-12-02 01:29:33 +08:00
amtoaer
44e8a2c97d doc: describe the extra commands; add images 2023-12-02 01:26:55 +08:00
amtoaer
c3bfb3c2e5 doc: uploader avatar issue resolved; update README 2023-12-02 01:12:59 +08:00
amtoaer
ec91cbf3ed fix: continue fixing field errors 2023-12-02 00:51:21 +08:00
amtoaer
f174a3b898 fix: fix field errors; start credential refresh from the next day 2023-12-02 00:46:21 +08:00
amtoaer
c8fca7fcca style: format code 2023-12-02 00:40:55 +08:00
amtoaer
6ef25d6409 fix: fix type errors; add some logging 2023-12-02 00:40:35 +08:00
amtoaer
f10fc9dd97 fix: fix a field-value error 2023-12-02 00:30:56 +08:00
amtoaer
d21f14d851 ci: build a debug image on pushes to main 2023-12-02 00:30:47 +08:00
amtoaer
012b3f9f31 fix: attempt to fix the Emby avatar path; make all file operations async 2023-12-02 00:26:19 +08:00
amtoaer
bbde9d6ba6 fix: fix a condition check 2023-11-30 18:12:15 +08:00
18 changed files with 1474 additions and 853 deletions


@@ -0,0 +1,48 @@
name: Docker Image CI (DEBUG)
on:
push:
branches:
- main
jobs:
build:
runs-on: ubuntu-latest
steps:
-
name: Checkout
uses: actions/checkout@v3
-
name: Docker meta
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ secrets.DOCKERHUB_USERNAME }}/bili-sync
tags: |
type=raw,value=debug
-
name: Set Up QEMU
uses: docker/setup-qemu-action@v3
-
name: Set Up Buildx
uses: docker/setup-buildx-action@v3
-
name: Login to DockerHub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
-
name: Build and push images
uses: docker/build-push-action@v5
with:
context: .
file: Dockerfile
platforms: |
linux/amd64
linux/arm64/v8
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha, scope=${{ github.workflow }}
cache-to: type=gha, scope=${{ github.workflow }}


@@ -12,22 +12,41 @@ jobs:
-
name: Checkout
uses: actions/checkout@v3
-
name: Docker meta
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ secrets.DOCKERHUB_USERNAME }}/bili-sync
tags: |
type=raw,value=${{ github.ref_name }}
type=raw,value=latest
-
name: Set Up QEMU
uses: docker/setup-qemu-action@v3
-
name: Set Up Buildx
uses: docker/setup-buildx-action@v3
-
name: Login to DockerHub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
-
-
name: Build and push images
uses: docker/build-push-action@v5
with:
context: .
file: ./Dockerfile
file: Dockerfile
platforms: |
linux/amd64
linux/arm64/v8
push: true
tags: |
${{ secrets.DOCKERHUB_USERNAME }}/bili-sync:${{ github.ref_name }}
${{ secrets.DOCKERHUB_USERNAME }}/bili-sync:latest
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha, scope=${{ github.workflow }}
cache-to: type=gha, scope=${{ github.workflow }}
-
name: Update DockerHub description
uses: peter-evans/dockerhub-description@v3

.gitignore

@@ -4,7 +4,7 @@ debug.py
videos
config.test.json
database.test.db*
example.json
example*.json
thumbs.test
config
data


@@ -1,22 +1,41 @@
FROM python:3.11.6-alpine3.18 AS base
FROM python:3.11.7-alpine3.19 as base
WORKDIR /app
ENV BILI_IN_DOCKER=true
ENV LANG=zh_CN.UTF-8 \
TZ=Asia/Shanghai \
BILI_IN_DOCKER=true
RUN apk add --no-cache ffmpeg tini \
&& apk add --no-cache --virtual .build-deps \
gcc \
musl-dev \
libffi-dev \
openssl-dev \
&& pip install poetry==1.7.1 pip3-autoremove==1.2.0
COPY poetry.lock pyproject.toml ./
RUN apk add ffmpeg \
&& apk add --no-cache --virtual .build-deps \
gcc \
musl-dev \
libffi-dev \
openssl-dev \
&& pip install poetry \
&& poetry config virtualenvs.create false \
&& poetry install --no-dev --no-interaction --no-ansi \
&& apk del .build-deps
RUN poetry config virtualenvs.create false \
&& poetry install --only main --no-root \
&& pip3-autoremove -y poetry pip3-autoremove \
&& apk del .build-deps \
&& rm -rf \
/root/.cache \
/tmp/*
COPY . .
ENTRYPOINT [ "python", "entry.py" ]
FROM scratch
WORKDIR /app
ENV LANG=zh_CN.UTF-8 \
TZ=Asia/Shanghai \
BILI_IN_DOCKER=true
COPY --from=base / /
ENTRYPOINT [ "tini", "python", "entry.py" ]
VOLUME [ "/app/config", "/app/data", "/app/thumb", "/Videos/Bilibilis" ]


@@ -1,4 +1,4 @@
.PHONY: install fmt start-daemon start-once
.PHONY: install fmt start-daemon start-once db-init db-migrate db-upgrade sync-conf
install:
@echo "Installing dependencies..."
@@ -22,4 +22,10 @@ db-migrate:
@poetry run aerich migrate
db-upgrade:
@poetry run aerich upgrade
@poetry run aerich upgrade
sync-conf:
@echo "Syncing config..."
@cp ${CONFIG_SRC} ./config/
@cp ${DB_SRC} ./data/
@echo "Done."


@@ -1,4 +1,6 @@
# bili-sync
![bili-sync](https://socialify.git.ci/amtoaer/bili-sync/image?description=1&font=KoHo&issues=1&language=1&logo=https%3A%2F%2Fs2.loli.net%2F2023%2F12%2F02%2F9EwT2yInOu1d3zm.png&name=1&owner=1&pattern=Signal&pulls=1&stargazers=1&theme=Light)
## Introduction
A Bilibili favorites sync tool written for NAS users; synced videos can be conveniently imported into media library tools such as Emby for browsing.
@@ -11,15 +13,23 @@
## Screenshots
![Downloading videos](asset/run.png)
![Downloading videos](https://raw.githubusercontent.com/amtoaer/bili-sync/main/asset/run.png)
![Emby recognition](asset/emby.png)
![Emby recognition](https://raw.githubusercontent.com/amtoaer/bili-sync/main/asset/emby.png)
## Configuration
For the first five configuration items, see the [credential retrieval guide](https://nemo2011.github.io/bilibili-api/#/get-credential).
```python
@dataclass
class SubtitleConfig(DataClassJsonMixin):
font_name: str = "微软雅黑,黑体" # font
font_size: float = 40 # font size
alpha: float = 0.8 # opacity
fly_time: float = 5 # scrolling danmaku duration
static_time: float = 10 # static danmaku duration
class Config(DataClassJsonMixin):
sessdata: str = ""
bili_jct: str = ""
@@ -27,8 +37,8 @@ class Config(DataClassJsonMixin):
dedeuserid: str = ""
ac_time_value: str = ""
interval: int = 20 # interval between scheduled runs
favorite_ids: list[int] = field(default_factory=list) # favorite list ids
path_mapper: dict[int, str] = field(default_factory=dict) # favorite list id -> storage directory mapping
subtitle: SubtitleConfig = field(default_factory=SubtitleConfig) # subtitle settings
```
By default the program stores its config file at `${program path}/config/config.json` and its database at `${program path}/data/data.db`; if they do not exist, they are created and populated with the initial configuration.
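A minimal, self-contained sketch of that first-run behaviour (the helper name `load_or_init` is hypothetical; the real project loads the config through its settings module):

```python
import json
from pathlib import Path

# Sketch of the described first-run behaviour: write an initial config if
# none exists, otherwise load the existing one. Field names mirror the
# Config model shown above.
DEFAULT_CONFIG = {
    "sessdata": "",
    "bili_jct": "",
    "buvid3": "",
    "dedeuserid": "",
    "ac_time_value": "",
    "interval": 20,
    "path_mapper": {},
}

def load_or_init(path: Path) -> dict:
    if not path.exists():
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(json.dumps(DEFAULT_CONFIG, indent=2), encoding="utf-8")
    return json.loads(path.read_text(encoding="utf-8"))
```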
@@ -37,15 +47,6 @@ class Config(DataClassJsonMixin):
In other words: run the program once, let it write the initial config and exit with a configuration error, then edit `config.json` and run it again.
## About uploader avatars
The global environment variable `THUMB_PATH` sets where uploader avatars are stored.
When downloading a video, if the uploader's avatar does not exist yet, it is downloaded to `THUMB_PATH`, and the avatar's absolute path is written into the video's NFO file.
In practice, however, Emby does not seem to read local avatar paths from NFO files; this will be fixed once a workaround is found.
> That said, the basic logic is: for the absolute avatar path that `bili-sync` writes into the NFO to be readable by Emby, the path must be identical inside both containers. So even though avatars cannot be loaded yet, it is still recommended to set `THUMB_PATH` and make sure it points to the same folder in both the `bili-sync` and `emby` containers (i.e. mount one folder at `THUMB_PATH` in both containers).
## Docker example
@@ -60,17 +61,16 @@ services:
- /home/amtoaer/Videos/Bilibilis/:/Videos/Bilibilis/ # video files
- /home/amtoaer/.config/nas/bili-sync/config/:/app/config/ # config files
- /home/amtoaer/.config/nas/bili-sync/data/:/app/data/ # database
# note: to see uploader avatars in Emby, mount Emby's metadata/people/ directory at the container's /app/thumb/
- /home/amtoaer/.config/nas/emby/metadata/people/:/app/thumb/
environment:
- THUMB_PATH=/Videos/Bilibilis/thumb/ # put avatars in the thumb folder next to the videos
- TZ=Asia/Shanghai
restart: always
network_mode: bridge
hostname: bili-sync
container_name: bili-sync
logging:
driver: "json-file"
options:
max-size: "30m"
driver: "local"
```
The corresponding config file:
@@ -83,18 +83,46 @@ services:
"dedeuserid": "xxxxxxxxxxxxxxxxxx",
"ac_time_value": "xxxxxxxxxxxxxxxxxx",
"interval": 20,
"favorite_ids": [
711322958
],
"path_mapper": {
"711322958": "/Videos/Bilibilis/Bilibili-711322958/"
},
"subtitle": {
"font_name": "微软雅黑,黑体",
"font_size": 40.0,
"alpha": 0.8,
"fly_time": 5.0,
"static_time": 10.0
}
}
```
## Current issues
## Supported extra commands
- [ ] Investigate NFO handling to load local actor avatars properly
To cover specific needs, the application includes several standalone commands, run from the program directory with `python entry.py ${command name}`.
1. `once`
Process the favorite lists exactly as a scheduled run would, but only once.
2. `recheck`
Mark video files that no longer exist locally as not downloaded; they will be downloaded again on the next scheduled run.
3. `refresh_poster`
Refresh the posters of local videos.
4. `refresh_upper`
Refresh local uploaders' avatars and metadata.
5. `refresh_nfo`
Refresh the metadata of local videos (tags, titles, and so on).
6. `refresh_video`
Refresh the local video source files.
7. `refresh_subtitle`
Refresh the local danmaku subtitle files.
**All commands starting with refresh accept a --force flag: with --force the corresponding content is fully overwritten; without it, only missing parts are filled in by default.**
## Roadmap


@@ -1,10 +1,14 @@
import asyncio
import functools
from pathlib import Path
from typing import Callable
from aiofiles.os import path
from loguru import logger
from constants import MediaStatus, MediaType
from models import FavoriteItem
from processor import process_favorite_item
from utils import aexists, aremove
async def recheck():
@@ -14,9 +18,7 @@ async def recheck():
status=MediaStatus.NORMAL,
downloaded=True,
)
exists = await asyncio.gather(
*[path.exists(item.video_path) for item in items]
)
exists = await asyncio.gather(*[aexists(item.video_path) for item in items])
for item, exist in zip(items, exists):
if isinstance(exist, Exception):
logger.error(
@@ -36,3 +38,64 @@ async def recheck():
logger.info("Updating database...")
await FavoriteItem.bulk_update(items, fields=["downloaded"])
logger.info("Database updated.")
async def _refresh_favorite_item_info(
path_getter: Callable[[FavoriteItem], list[Path]],
process_poster: bool = False,
process_video: bool = False,
process_nfo: bool = False,
process_upper: bool = False,
process_subtitle: bool = False,
force: bool = False,
):
items = await FavoriteItem.filter(downloaded=True).prefetch_related("upper")
if force:
# on a forced refresh, delete all existing content first
await asyncio.gather(
*[aremove(path) for item in items for path in path_getter(item)],
return_exceptions=True,
)
await asyncio.gather(
*[
process_favorite_item(
item,
process_poster=process_poster,
process_video=process_video,
process_nfo=process_nfo,
process_upper=process_upper,
process_subtitle=process_subtitle,
)
for item in items
],
return_exceptions=True,
)
refresh_nfo = functools.partial(
_refresh_favorite_item_info, lambda item: [item.nfo_path], process_nfo=True
)
refresh_poster = functools.partial(
_refresh_favorite_item_info,
lambda item: [item.poster_path],
process_poster=True,
)
refresh_video = functools.partial(
_refresh_favorite_item_info,
lambda item: [item.video_path],
process_video=True,
)
refresh_upper = functools.partial(
_refresh_favorite_item_info,
lambda item: item.upper_path,
process_upper=True,
)
refresh_subtitle = functools.partial(
_refresh_favorite_item_info,
lambda item: [item.subtitle_path],
process_subtitle=True,
)


@@ -1,10 +1,19 @@
import asyncio
import os
import signal
import sys
import uvloop
from loguru import logger
from commands import recheck
from commands import (
recheck,
refresh_nfo,
refresh_poster,
refresh_subtitle,
refresh_upper,
refresh_video,
)
from models import init_model
from processor import cleanup, process
from settings import settings
@@ -14,16 +23,23 @@ asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
async def entry() -> None:
await init_model()
if any("once" in _ for _ in sys.argv):
# run once
logger.info("Running once...")
await process()
return
if any("recheck" in _ for _ in sys.argv):
# recheck
logger.info("Rechecking...")
await recheck()
return
force = any("force" in _ for _ in sys.argv)
for command, func in (
("once", process),
("recheck", recheck),
("refresh_poster", refresh_poster),
("refresh_upper", refresh_upper),
("refresh_nfo", refresh_nfo),
("refresh_video", refresh_video),
("refresh_subtitle", refresh_subtitle),
):
if any(command in _ for _ in sys.argv):
logger.info("Running {}...", command)
if command.startswith("refresh"):
await func(force=force)
else:
await func()
return
logger.info("Running daemon...")
while True:
await process()
@@ -31,8 +47,16 @@ async def entry() -> None:
if __name__ == "__main__":
# make sure resource cleanup is triggered correctly when Docker stops the container
signal.signal(signal.SIGTERM, lambda *_: os.kill(os.getpid(), signal.SIGINT))
with asyncio.Runner() as runner:
try:
runner.run(entry())
except Exception:
logger.exception("Unexpected error occurred, exiting...")
except KeyboardInterrupt:
logger.error("Exit Signal Received, exiting...")
finally:
logger.info("Cleaning up resources...")
runner.run(cleanup())
logger.info("Done, exited.")
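The `signal.signal(...)` line above translates SIGTERM into SIGINT so that the `KeyboardInterrupt` branch, and therefore the `finally` cleanup, also runs under `docker stop`. The pattern in isolation:

```python
import os
import signal
import time

# Translate SIGTERM (what `docker stop` sends) into SIGINT so Python's
# default KeyboardInterrupt handling and any finally-blocks run on shutdown.
def install_sigterm_bridge() -> None:
    signal.signal(signal.SIGTERM, lambda *_: os.kill(os.getpid(), signal.SIGINT))
```

Combined with `tini` as the entrypoint in the Dockerfile, this is what makes container shutdown release resources gracefully.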


@@ -0,0 +1,11 @@
from tortoise import BaseDBAsyncClient
async def upgrade(db: BaseDBAsyncClient) -> str:
return """
ALTER TABLE "favoriteitem" ADD "tags" JSON;"""
async def downgrade(db: BaseDBAsyncClient) -> str:
return """
ALTER TABLE "favoriteitem" DROP COLUMN "tags";"""


@@ -0,0 +1,14 @@
from tortoise import BaseDBAsyncClient
async def upgrade(db: BaseDBAsyncClient) -> str:
return """
CREATE TABLE IF NOT EXISTS "program" (
"id" INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
"version" VARCHAR(20) NOT NULL
);"""
async def downgrade(db: BaseDBAsyncClient) -> str:
return """
DROP TABLE IF EXISTS "program";"""


@@ -13,6 +13,8 @@ from constants import (
MediaType,
)
from settings import settings
from utils import aopen
from version import VERSION
class FavoriteList(Model):
@@ -39,7 +41,27 @@ class Upper(Model):
@property
def thumb_path(self) -> Path:
return DEFAULT_THUMB_PATH / f"{self.mid}.jpg"
return DEFAULT_THUMB_PATH / str(self.mid)[0] / f"{self.mid}" / "folder.jpg"
@property
def meta_path(self) -> Path:
return DEFAULT_THUMB_PATH / str(self.mid)[0] / f"{self.mid}" / "person.nfo"
async def save_metadata(self):
async with aopen(self.meta_path, "w") as f:
await f.write(
f"""
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<person>
<plot />
<outline />
<lockdata>false</lockdata>
<dateadded>{self.created_at.strftime("%Y-%m-%d %H:%M:%S")}</dateadded>
<title>{self.mid}</title>
<sorttitle>{self.mid}</sorttitle>
</person>
""".strip()
)
class FavoriteItem(Model):
@@ -48,15 +70,12 @@ class FavoriteItem(Model):
id = fields.IntField(pk=True)
name = fields.CharField(max_length=255)
type = fields.IntEnumField(enum_type=MediaType)
status = fields.IntEnumField(
enum_type=MediaStatus, default=MediaStatus.NORMAL
)
status = fields.IntEnumField(enum_type=MediaStatus, default=MediaStatus.NORMAL)
bvid = fields.CharField(max_length=255)
desc = fields.TextField()
cover = fields.TextField()
favorite_list = fields.ForeignKeyField(
"models.FavoriteList", related_name="items"
)
tags = fields.JSONField(null=True)
favorite_list = fields.ForeignKeyField("models.FavoriteList", related_name="items")
upper = fields.ForeignKeyField("models.Upper", related_name="uploads")
ctime = fields.DatetimeField()
pubtime = fields.DatetimeField()
@@ -74,38 +93,39 @@ class FavoriteItem(Model):
@property
def tmp_video_path(self) -> Path:
return (
Path(settings.path_mapper[self.favorite_list_id])
/ f"tmp_{self.bvid}_video"
)
return Path(settings.path_mapper[self.favorite_list_id]) / f"tmp_{self.bvid}_video"
@property
def tmp_audio_path(self) -> Path:
return (
Path(settings.path_mapper[self.favorite_list_id])
/ f"tmp_{self.bvid}_audio"
)
return Path(settings.path_mapper[self.favorite_list_id]) / f"tmp_{self.bvid}_audio"
@property
def video_path(self) -> Path:
return (
Path(settings.path_mapper[self.favorite_list_id])
/ f"{self.bvid}.mp4"
)
return Path(settings.path_mapper[self.favorite_list_id]) / f"{self.bvid}.mp4"
@property
def nfo_path(self) -> Path:
return (
Path(settings.path_mapper[self.favorite_list_id])
/ f"{self.bvid}.nfo"
)
return Path(settings.path_mapper[self.favorite_list_id]) / f"{self.bvid}.nfo"
@property
def poster_path(self) -> Path:
return (
Path(settings.path_mapper[self.favorite_list_id])
/ f"{self.bvid}-poster.jpg"
)
return Path(settings.path_mapper[self.favorite_list_id]) / f"{self.bvid}-poster.jpg"
@property
def upper_path(self) -> list[Path]:
return [
self.upper.thumb_path,
self.upper.meta_path,
]
@property
def subtitle_path(self) -> Path:
return Path(settings.path_mapper[self.favorite_list_id]) / f"{self.bvid}.zh-CN.default.ass"
class Program(Model):
id = fields.IntField(pk=True)
version = fields.CharField(max_length=20)
async def init_model() -> None:
@@ -117,3 +137,13 @@ async def init_model() -> None:
)
process = await create_subprocess_exec(*migrate_commands)
await process.communicate()
program, created = await Program.get_or_create(
defaults={
"version": VERSION,
}
)
if created or program.version != VERSION:
# put migration logic for new versions here
pass
program.version = VERSION
await program.save()
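The new `thumb_path`/`meta_path` properties above follow Emby's `metadata/people` convention (first character of the name as a bucket directory). A sketch of the path computation in isolation:

```python
from pathlib import Path

# Sketch of the avatar/metadata layout used by Upper above:
# <thumb root>/<first char of mid>/<mid>/folder.jpg and person.nfo,
# matching Emby's metadata/people directory convention.
def upper_paths(thumb_root: Path, mid: int) -> tuple[Path, Path]:
    base = thumb_root / str(mid)[0] / str(mid)
    return base / "folder.jpg", base / "person.nfo"
```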

nfo.py

@@ -2,19 +2,19 @@ import datetime
from dataclasses import dataclass
from pathlib import Path
from utils import aopen
@dataclass
class Actor:
name: str
role: str
thumb: Path
def to_xml(self) -> str:
return f"""
<actor>
<name>{self.name}</name>
<role>{self.role}</role>
<thumb>{self.thumb.resolve()}</thumb>
</actor>
""".strip(
"\n"
@@ -25,16 +25,22 @@ class Actor:
class EpisodeInfo:
title: str
plot: str
tags: list[str]
actor: list[Actor]
bvid: str
aired: datetime.datetime
def write_nfo(self, path: Path) -> None:
with path.open("w", encoding="utf-8") as f:
f.write(self.to_xml())
async def write_nfo(self, path: Path) -> None:
async with aopen(path, "w", encoding="utf-8") as f:
await f.write(self.to_xml())
def to_xml(self) -> str:
actor = "\n".join(_.to_xml() for _ in self.actor)
tags = (
"\n".join(f" <genre>{_}</genre>" for _ in self.tags)
if isinstance(self.tags, list)
else ""
)
return f"""
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<episodedetails>
@@ -43,6 +49,7 @@ class EpisodeInfo:
<title>{self.title}</title>
{actor}
<year>{self.aired.year}</year>
{tags}
<uniqueid type="bilibili">{self.bvid}</uniqueid>
<aired>{self.aired.strftime("%Y-%m-%d")}</aired>
</episodedetails>

poetry.lock

File diff suppressed because it is too large.


@@ -2,29 +2,25 @@ import asyncio
import datetime
from asyncio import Semaphore, create_subprocess_exec
from asyncio.subprocess import DEVNULL
from pathlib import Path
import aiofiles
import httpx
from bilibili_api import HEADERS, favorite_list, video
from bilibili_api import ass, favorite_list, video
from bilibili_api.exceptions import ResponseCodeException
from loguru import logger
from tortoise import Tortoise
from tortoise.connection import connections
from constants import FFMPEG_COMMAND, MediaStatus, MediaType
from credential import credential
from models import FavoriteItem, FavoriteList, Upper
from nfo import Actor, EpisodeInfo
from settings import settings
from utils import aexists, amakedirs, client, download_content
client = httpx.AsyncClient(headers=HEADERS)
anchor = datetime.datetime.today()
anchor = datetime.date.today()
async def cleanup() -> None:
await client.aclose()
await Tortoise.close_connections()
await connections.close_all()
def concurrent_decorator(concurrency: int) -> callable:
@@ -40,16 +36,6 @@ def concurrent_decorator(concurrency: int) -> callable:
return decorator
async def download_content(url: str, path: Path) -> None:
async with client.stream("GET", url) as resp, aiofiles.open(
path, "wb"
) as f:
async for chunk in resp.aiter_bytes(40960):
if not chunk:
return
await f.write(chunk)
async def manage_model(medias: list[dict], fav_list: FavoriteList) -> None:
uppers = [
Upper(
@@ -59,9 +45,7 @@ async def manage_model(medias: list[dict], fav_list: FavoriteList) -> None:
)
for media in medias
]
await Upper.bulk_create(
uppers, on_conflict=["mid"], update_fields=["name", "thumb"]
)
await Upper.bulk_create(uppers, on_conflict=["mid"], update_fields=["name", "thumb"])
items = [
FavoriteItem(
name=media["title"],
@@ -95,20 +79,17 @@ async def manage_model(medias: list[dict], fav_list: FavoriteList) -> None:
async def process() -> None:
global anchor
if datetime.datetime.now() > anchor and await credential.check_refresh():
try:
await credential.refresh()
anchor = datetime.datetime.today() + datetime.timedelta(days=1)
logger.info("Credential refreshed.")
except Exception:
logger.exception("Failed to refresh credential.")
return
for favorite_id in settings.favorite_ids:
if favorite_id not in settings.path_mapper:
logger.warning(
f"Favorite {favorite_id} not in path mapper, ignored."
)
continue
if (today := datetime.date.today()) > anchor:
anchor = today
logger.info("Check credential.")
if await credential.check_refresh():
try:
await credential.refresh()
logger.info("Credential refreshed.")
except Exception:
logger.exception("Failed to refresh credential.")
return
for favorite_id in settings.path_mapper:
await process_favorite(favorite_id)
@@ -131,10 +112,8 @@ async def process_favorite(favorite_id: int) -> None:
while True:
page += 1
if page > 1:
favorite_video_list = (
await favorite_list.get_video_favorite_list_content(
favorite_id, page=page, credential=credential
)
favorite_video_list = await favorite_list.get_video_favorite_list_content(
favorite_id, page=page, credential=credential
)
# first check whether records for these bvids already exist
existed_items = await FavoriteItem.filter(
@@ -142,14 +121,10 @@ async def process_favorite(favorite_id: int) -> None:
bvid__in=[media["bvid"] for media in favorite_video_list["medias"]],
)
# record the bvid and fav_time of every entry on the fetched page
media_info = {
(media["bvid"], media["fav_time"])
for media in favorite_video_list["medias"]
}
media_info = {(media["bvid"], media["fav_time"]) for media in favorite_video_list["medias"]}
# a record matching on both bvid and fav_time means we have reached the position processed last time
continue_flag = not media_info & {
(item.bvid, int(item.fav_time.timestamp()))
for item in existed_items
(item.bvid, int(item.fav_time.timestamp())) for item in existed_items
}
await manage_model(favorite_video_list["medias"], fav_list)
if not (continue_flag and favorite_video_list["has_more"]):
@@ -161,112 +136,216 @@ async def process_favorite(favorite_id: int) -> None:
downloaded=False,
).prefetch_related("upper")
await asyncio.gather(
*[process_video(item) for item in all_unprocessed_items],
*[process_favorite_item(item) for item in all_unprocessed_items],
return_exceptions=True,
)
logger.info("Favorite {} {} processed successfully.", favorite_id, title)
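The paging loop above stops once a fetched page contains a `(bvid, fav_time)` pair that already exists locally; the check itself reduces to a set intersection:

```python
# Sketch of the incremental-sync stop condition from process_favorite:
# keep paging only while the fetched page shares no (bvid, fav_time)
# pair with locally known items.
def should_fetch_next_page(page_medias: list[dict], known: set[tuple[str, int]]) -> bool:
    page_info = {(m["bvid"], m["fav_time"]) for m in page_medias}
    return not (page_info & known)
```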
@concurrent_decorator(4)
async def process_video(fav_item: FavoriteItem) -> None:
async def process_favorite_item(
fav_item: FavoriteItem,
process_poster=True,
process_video=True,
process_nfo=True,
process_upper=True,
process_subtitle=True,
) -> None:
logger.info("Start to process video {} {}", fav_item.bvid, fav_item.name)
if fav_item.type != MediaType.VIDEO:
logger.warning("Media {} is not a video, skipped.", fav_item.name)
return
v = video.Video(fav_item.bvid, credential=credential)
# if tags have not been fetched yet, try fetching them
try:
if fav_item.video_path.exists():
fav_item.downloaded = True
await fav_item.save()
logger.info(
"{} {} already exists, skipped.", fav_item.bvid, fav_item.name
)
return
# write the uploader avatar
if not fav_item.upper.thumb_path.exists():
await download_content(
fav_item.upper.thumb, fav_item.upper.thumb_path
)
# write the NFO
EpisodeInfo(
title=fav_item.name,
plot=fav_item.desc,
actor=[
Actor(
name=fav_item.upper.mid,
role=fav_item.upper.name,
thumb=fav_item.upper.thumb_path,
)
],
bvid=fav_item.bvid,
aired=fav_item.ctime,
).write_nfo(fav_item.nfo_path)
# write the poster
await download_content(fav_item.cover, fav_item.poster_path)
# start processing the video content
v = video.Video(fav_item.bvid, credential=credential)
detector = video.VideoDownloadURLDataDetecter(
await v.get_download_url(page_index=0)
)
streams = detector.detect_best_streams()
if detector.check_flv_stream():
await download_content(streams[0].url, fav_item.tmp_video_path)
process = await create_subprocess_exec(
FFMPEG_COMMAND,
"-i",
str(fav_item.tmp_video_path),
str(fav_item.video_path),
stdout=DEVNULL,
stderr=DEVNULL,
)
await process.communicate()
fav_item.tmp_video_path.unlink()
else:
await asyncio.gather(
download_content(streams[0].url, fav_item.tmp_video_path),
download_content(streams[1].url, fav_item.tmp_audio_path),
)
process = await create_subprocess_exec(
FFMPEG_COMMAND,
"-i",
str(fav_item.tmp_video_path),
"-i",
str(fav_item.tmp_audio_path),
"-c",
"copy",
str(fav_item.video_path),
stdout=DEVNULL,
stderr=DEVNULL,
)
await process.communicate()
fav_item.tmp_video_path.unlink()
fav_item.tmp_audio_path.unlink()
fav_item.downloaded = True
await fav_item.save()
logger.info(
"{} {} processed successfully.", fav_item.bvid, fav_item.name
)
except ResponseCodeException as e:
match e.code:
case 62002:
fav_item.status = MediaStatus.INVISIBLE
case -404:
fav_item.status = MediaStatus.DELETED
case _:
logger.exception(
"Failed to process video {} {}, error_code: {}",
fav_item.bvid,
fav_item.name,
e.code,
)
return
await fav_item.save()
logger.error(
"Video {} {} is not available, marked as {}",
fav_item.bvid,
fav_item.name,
fav_item.status.text,
)
if fav_item.tags is None:
fav_item.tags = [_["tag_name"] for _ in await v.get_tags()]
except Exception:
logger.exception(
"Failed to process video {} {}", fav_item.bvid, fav_item.name
"Failed to get tags of video {} {}",
fav_item.bvid,
fav_item.name,
)
if process_upper:
try:
if not all(
await asyncio.gather(
aexists(fav_item.upper.thumb_path),
aexists(fav_item.upper.meta_path),
)
):
await amakedirs(fav_item.upper.thumb_path.parent, exist_ok=True)
await asyncio.gather(
fav_item.upper.save_metadata(),
download_content(fav_item.upper.thumb, fav_item.upper.thumb_path),
return_exceptions=True,
)
else:
logger.info(
"Upper {} {} already exists, skipped.",
fav_item.upper.mid,
fav_item.upper.name,
)
except Exception:
logger.exception(
"Failed to process upper {} {}",
fav_item.upper.mid,
fav_item.upper.name,
)
if process_nfo:
try:
if not await aexists(fav_item.nfo_path):
await EpisodeInfo(
title=fav_item.name,
plot=fav_item.desc,
actor=[
Actor(
name=fav_item.upper.mid,
role=fav_item.upper.name,
)
],
tags=fav_item.tags,
bvid=fav_item.bvid,
aired=fav_item.ctime,
).write_nfo(fav_item.nfo_path)
else:
logger.info(
"NFO of {} {} already exists, skipped.",
fav_item.bvid,
fav_item.name,
)
except Exception:
logger.exception(
"Failed to process nfo of video {} {}",
fav_item.bvid,
fav_item.name,
)
if process_poster:
try:
if not await aexists(fav_item.poster_path):
try:
await download_content(fav_item.cover, fav_item.poster_path)
except Exception:
logger.exception(
"Failed to download poster of video {} {}",
fav_item.bvid,
fav_item.name,
)
else:
logger.info(
"Poster of {} {} already exists, skipped.",
fav_item.bvid,
fav_item.name,
)
except Exception:
logger.exception(
"Failed to process poster of video {} {}",
fav_item.bvid,
fav_item.name,
)
if process_subtitle:
try:
if not await aexists(fav_item.subtitle_path):
await ass.make_ass_file_danmakus_protobuf(
v,
0,
str(fav_item.subtitle_path.resolve()),
credential=credential,
font_name=settings.subtitle.font_name,
font_size=settings.subtitle.font_size,
alpha=settings.subtitle.alpha,
fly_time=settings.subtitle.fly_time,
static_time=settings.subtitle.static_time,
)
else:
logger.info(
"Subtitle of {} {} already exists, skipped.",
fav_item.bvid,
fav_item.name,
)
except Exception:
logger.exception(
"Failed to process subtitle of video {} {}",
fav_item.bvid,
fav_item.name,
)
if process_video:
try:
if await aexists(fav_item.video_path):
fav_item.downloaded = True
logger.info(
"Video {} {} already exists, skipped.",
fav_item.bvid,
fav_item.name,
)
else:
# start processing the video content
detector = video.VideoDownloadURLDataDetecter(
await v.get_download_url(page_index=0)
)
streams = detector.detect_best_streams(codecs=settings.codec)
if detector.check_flv_stream():
await download_content(streams[0].url, fav_item.tmp_video_path)
process = await create_subprocess_exec(
FFMPEG_COMMAND,
"-i",
str(fav_item.tmp_video_path),
str(fav_item.video_path),
stdout=DEVNULL,
stderr=DEVNULL,
)
await process.communicate()
fav_item.tmp_video_path.unlink()
else:
await asyncio.gather(
download_content(streams[0].url, fav_item.tmp_video_path),
download_content(streams[1].url, fav_item.tmp_audio_path),
)
process = await create_subprocess_exec(
FFMPEG_COMMAND,
"-i",
str(fav_item.tmp_video_path),
"-i",
str(fav_item.tmp_audio_path),
"-c",
"copy",
str(fav_item.video_path),
stdout=DEVNULL,
stderr=DEVNULL,
)
await process.communicate()
fav_item.tmp_video_path.unlink()
fav_item.tmp_audio_path.unlink()
fav_item.downloaded = True
except ResponseCodeException as e:
match e.code:
case 62002:
fav_item.status = MediaStatus.INVISIBLE
case -404:
fav_item.status = MediaStatus.DELETED
case _:
logger.exception(
"Failed to process video {} {}, error_code: {}",
fav_item.bvid,
fav_item.name,
e.code,
)
if fav_item.status != MediaStatus.NORMAL:
logger.error(
"Video {} {} is not available, marked as {}",
fav_item.bvid,
fav_item.name,
fav_item.status.text,
)
except Exception:
logger.exception("Failed to process video {} {}", fav_item.bvid, fav_item.name)
await fav_item.save()
logger.info(
"{} {} is processed successfully.",
fav_item.bvid,
fav_item.name,
)


@@ -1,6 +1,6 @@
[tool.poetry]
name = "bili-sync"
version = "1.0.1"
version = "1.1.5"
description = ""
authors = ["amtoaer <amtoaer@gmail.com>"]
license = "GPL-3.0"
@@ -8,24 +8,26 @@ readme = "README.md"
[tool.poetry.dependencies]
python = "^3.11"
bilibili-api-python = { git = "https://github.com/amtoaer/bilibili-api.git", rev = "dev" }
dataclasses-json = "0.6.2"
tortoise-orm = "0.20.0"
loguru = "0.7.2"
uvloop = "0.19.0"
aiofiles = "23.2.1"
aerich = "0.7.2"
aiofiles = "23.2.1"
bilibili-api-python = {git = "https://github.com/Nemo2011/bilibili-api.git", rev = "16.2.0b2"}
dataclasses-json = "0.6.2"
loguru = "0.7.2"
pydantic = "2.5.3"
tortoise-orm = "0.20.0"
uvloop = "0.19.0"
[tool.poetry.group.dev.dependencies]
black = "23.11.0"
ruff = "0.1.6"
bump-my-version = "0.15.4"
ipython = "8.17.2"
ruff = "0.1.6"
[tool.black]
line-length = 80
line-length = 100
[tool.ruff]
line-length = 80
line-length = 100
select = [
"F", # https://beta.ruff.rs/docs/rules/#pyflakes-f
"E",
@@ -60,6 +62,24 @@ tortoise_orm = "constants.TORTOISE_ORM"
location = "./migrations"
src_folder = "./."
[tool.bumpversion]
commit = true
message = "chore: bump version from {current_version} to {new_version}"
tag = true
tag_name = "{new_version}"
tag_message = ""
current_version = "1.1.5"
parse = "(?P<major>\\d+)\\.(?P<minor>\\d+)\\.(?P<patch>\\d+)"
[[tool.bumpversion.files]]
filename = "version.py"
[[tool.bumpversion.files]]
filename = "pyproject.toml"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
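The `[tool.bumpversion]` `parse` pattern above is the named-group regex bump-my-version uses to split the current version into components. Its behavior can be checked in plain Python (the `split_version` helper is illustrative, not part of the project):

```python
import re

# Same pattern as [tool.bumpversion].parse in pyproject.toml
PARSE = r"(?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)"


def split_version(version: str) -> dict[str, int]:
    # fullmatch so trailing garbage like "1.1.5rc1" is rejected
    m = re.fullmatch(PARSE, version)
    if m is None:
        raise ValueError(f"not a major.minor.patch version: {version}")
    return {name: int(value) for name, value in m.groupdict().items()}
```

With `commit = true` and `tag = true`, bumping the patch component rewrites `version.py` and `pyproject.toml`, commits with the configured message, and tags the result.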

config.py

@@ -1,63 +1,73 @@
from dataclasses import dataclass, field, fields
from pathlib import Path
from typing import Self
from dataclasses_json import DataClassJsonMixin
from bilibili_api.video import VideoCodecs
from pydantic import BaseModel, Field, field_validator
from pydantic_core import PydanticCustomError
from typing_extensions import Annotated
from constants import DEFAULT_CONFIG_PATH
@dataclass
class Config(DataClassJsonMixin):
sessdata: str = ""
bili_jct: str = ""
buvid3: str = ""
dedeuserid: str = ""
ac_time_value: str = ""
interval: int = 20
favorite_ids: list[int] = field(default_factory=list)
path_mapper: dict[int, str] = field(default_factory=dict)
class SubtitleConfig(BaseModel):
font_name: str = "微软雅黑,黑体"  # font family
font_size: float = 40  # font size
alpha: float = 0.8  # opacity
fly_time: float = 5  # display duration of scrolling danmaku
static_time: float = 10  # display duration of static danmaku
def validate(self) -> Self:
"""所有值必须被设置"""
if not all(getattr(self, f.name) for f in fields(self)):
raise ValueError("Some config values are not set.")
return self
class Config(BaseModel):
sessdata: Annotated[str, Field(min_length=1)] = ""
bili_jct: Annotated[str, Field(min_length=1)] = ""
buvid3: Annotated[str, Field(min_length=1)] = ""
dedeuserid: Annotated[str, Field(min_length=1)] = ""
ac_time_value: Annotated[str, Field(min_length=1)] = ""
interval: int = 20
path_mapper: dict[int, str] = Field(default_factory=dict)
subtitle: SubtitleConfig = Field(default_factory=SubtitleConfig)
codec: list[VideoCodecs] = Field(
default_factory=lambda: [
VideoCodecs.AV1,
VideoCodecs.AVC,
VideoCodecs.HEV,
],
min_length=1,
)
@field_validator("codec", mode="after")
def codec_validator(cls, codecs: list[VideoCodecs]) -> list[VideoCodecs]:
if len(codecs) != len(set(codecs)):
raise PydanticCustomError("unique_list", "List must be unique")
return codecs
@staticmethod
def load(path: Path | None = None) -> Self:
def load(path: Path | None = None) -> "Config":
if not path:
path = DEFAULT_CONFIG_PATH
try:
with path.open("r") as f:
return Config.schema().loads(f.read())
return Config.model_validate_json(f.read())
except Exception as e:
raise RuntimeError(f"Failed to load config file: {path}") from e
def save(self, path: Path | None = None) -> Self:
def save(self, path: Path | None = None) -> "Config":
if not path:
path = DEFAULT_CONFIG_PATH
try:
path.parent.mkdir(parents=True, exist_ok=True)
with path.open("w") as f:
f.write(
Config.schema().dumps(self, indent=4, ensure_ascii=False)
)
f.write(self.model_dump_json(indent=4))
return self
except Exception as e:
raise RuntimeError(f"Failed to save config file: {path}") from e
def init_settings() -> Config:
return (
(
Config.load(DEFAULT_CONFIG_PATH)
if DEFAULT_CONFIG_PATH.exists()
else Config()
)
.save(DEFAULT_CONFIG_PATH)
.validate()
)
if not DEFAULT_CONFIG_PATH.exists():
# If the config file does not exist, write out empty defaults first
Config().save(DEFAULT_CONFIG_PATH)
# Load the config file; validation errors raise an exception, and on success
# the file is re-saved so defaults for newly added options are written back
return Config.load(DEFAULT_CONFIG_PATH).save()
settings = init_settings()
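The `codec_validator` above rejects duplicate codecs; the core check is just comparing the list's length against its deduplicated set, which can be demonstrated without pydantic (the `ensure_unique` helper is an illustrative stand-in for the validator body):

```python
def ensure_unique(items: list[str]) -> list[str]:
    # Same idea as codec_validator: converting to a set collapses
    # duplicates, so differing lengths mean the list is not unique.
    if len(items) != len(set(items)):
        raise ValueError("List must be unique")
    return items
```

In the pydantic version, raising `PydanticCustomError("unique_list", ...)` inside a `@field_validator(..., mode="after")` turns this into a regular validation error on the `codec` field.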

utils.py Normal file

@@ -0,0 +1,37 @@
from pathlib import Path
import aiofiles
import httpx
from aiofiles.base import AiofilesContextManager
from aiofiles.os import makedirs, remove
from aiofiles.ospath import exists
from aiofiles.threadpool.text import AsyncTextIOWrapper
from bilibili_api import HEADERS
client = httpx.AsyncClient(headers=HEADERS)
async def download_content(url: str, path: Path) -> None:
async with client.stream("GET", url) as resp, aopen(path, "wb") as f:
async for chunk in resp.aiter_bytes(40960):
if not chunk:
return
await f.write(chunk)
async def aexists(path: Path) -> bool:
return await exists(path)
async def amakedirs(path: Path, exist_ok=False) -> None:
await makedirs(path, exist_ok=exist_ok)
def aopen(
path: Path, mode: str = "r", **kwargs
) -> AiofilesContextManager[None, None, AsyncTextIOWrapper]:
return aiofiles.open(path, mode, **kwargs)
async def aremove(path: Path) -> None:
await remove(path)
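`download_content` streams the response body in 40 KiB chunks instead of buffering the whole file in memory. The same chunked-copy pattern can be shown with stdlib file objects, no httpx or aiofiles required (a synchronous sketch, not the project's actual async code):

```python
import io


def copy_in_chunks(src: io.BufferedIOBase, dst: io.BufferedIOBase, chunk_size: int = 40960) -> int:
    """Copy src to dst in fixed-size chunks; returns total bytes copied."""
    total = 0
    while True:
        # Mirrors the aiter_bytes(40960) loop: read a chunk, stop on EOF
        chunk = src.read(chunk_size)
        if not chunk:
            break
        dst.write(chunk)
        total += len(chunk)
    return total
```

Keeping a single module-level `httpx.AsyncClient` with the Bilibili `HEADERS`, as the file does, lets all downloads share one connection pool instead of opening a new client per request.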

version.py Normal file

@@ -0,0 +1 @@
VERSION = "1.1.5"