Compare commits


312 Commits
v2.0.3 ... main

Author SHA1 Message Date
amtoaer
791dd57f23 chore: release bili-sync 2.11.1 2026-05-07 14:47:45 +08:00
ᴀᴍᴛᴏᴀᴇʀ
c4b227e26e feat: support text-only messages for the Telegram notification channel (#701) 2026-04-08 00:26:46 +08:00
ᴀᴍᴛᴏᴀᴇʀ
744bb536b3 feat: show the latest video time on the video source page (#700) 2026-04-07 18:38:02 +08:00
ᴀᴍᴛᴏᴀᴇʀ
91ab64a068 feat: support custom headers for webhook requests; update the description text (#693) 2026-03-31 01:49:32 +08:00
amtoaer
55dde84f96 chore: release bili-sync 2.11.0 2026-03-26 20:39:41 +08:00
ᴀᴍᴛᴏᴀᴇʀ
eea233e576 ci: fix CI failures on Windows (#690) 2026-03-25 16:57:58 +08:00
ᴀᴍᴛᴏᴀᴇʀ
72bf2b6a4d ci: update the actions used in workflows to avoid warnings about Node versions below 24 (#689) 2026-03-25 16:50:47 +08:00
wanlala
47ce8f148b Add an armv7l build (#688)
* Add workflow_dispatch trigger for build binary

* Ready for pull request from build-binary.yaml

* Add support for armv7l architecture in Dockerfile

* Add support for linux/armv7l platform in release build

* Update build configuration for Linux-armv7 target

* Change armv7l to armv7 in release build workflow

* Update ARM platform tarball extraction in Dockerfile

* Fix platform

---------

Co-authored-by: amtoaer <amtoaer@gmail.com>
2026-03-25 14:29:02 +08:00
ᴀᴍᴛᴏᴀᴇʀ
1c68f13c54 perf: avoid string copies in some common scenarios, slightly improving performance (#687) 2026-03-25 12:21:11 +08:00
ᴀᴍᴛᴏᴀᴇʀ
2a4c1313b0 chore: upgrade Rust to 1.94.0 (#685) 2026-03-24 23:08:31 +08:00
amtoaer
ec44798523 chore: tweak the placeholder hint text 2026-03-24 22:59:54 +08:00
ᴀᴍᴛᴏᴀᴇʀ
8cb59d6b2a feat: add total video duration and collaborative uploads to the filter rules (#684) 2026-03-24 22:58:20 +08:00
ᴀᴍᴛᴏᴀᴇʀ
3a2df55314 perf: remove an unnecessary Vec, slightly improving performance (#682) 2026-03-24 17:15:11 +08:00
ᴀᴍᴛᴏᴀᴇʀ
04448c6d8f feat: support parsing collaborative uploads (#681) 2026-03-24 16:25:42 +08:00
ᴀᴍᴛᴏᴀᴇʀ
09604fd283 fix: skip deleting empty paths during clear-reset and full refresh; tweak frontend styling (#679) 2026-03-17 00:35:19 +08:00
ᴀᴍᴛᴏᴀᴇʀ
29f36238e3 feat: support manually triggering a full update that removes redundant local video entries and files (#678) 2026-03-16 02:50:55 +08:00
ᴀᴍᴛᴏᴀᴇʀ
980779d5c5 fix: an empty first page of a video source is no longer treated as an error (#677) 2026-03-15 22:38:01 +08:00
ᴀᴍᴛᴏᴀᴇʀ
dd96a32b35 feat: show which video source a video belongs to on the video page (#676) 2026-03-15 21:53:15 +08:00
ᴀᴍᴛᴏᴀᴇʀ
d39cce043c feat: support filtering videos by validity (#673) 2026-03-15 16:44:48 +08:00
ᴀᴍᴛᴏᴀᴇʀ
e97fa73542 feat: update the notifier to report the number of successful tasks (#672) 2026-03-15 03:31:41 +08:00
ᴀᴍᴛᴏᴀᴇʀ
2bd660efc9 feat: add a toggle to allow attempting downloads of charging-exclusive (supporter-only) videos (#666) 2026-02-28 22:55:01 +08:00
amtoaer
fe13029e84 chore: release bili-sync 2.10.4 2026-02-25 11:11:50 +08:00
ᴀᴍᴛᴏᴀᴇʀ
bdf4ab58f2 docs: update screenshots and documentation links; change the frontend domain (#659) 2026-02-25 10:51:53 +08:00
ᴀᴍᴛᴏᴀᴇʀ
681617cf02 fix: use the dunce crate to normalize paths, removing the hand-written normalization logic (#658) 2026-02-24 23:24:51 +08:00
ᴀᴍᴛᴏᴀᴇʀ
b6c5b547a3 fix: handle folder paths on Windows, ensuring they don't end with a space (#657) 2026-02-24 22:04:22 +08:00
ᴀᴍᴛᴏᴀᴇʀ
8aba906904 fix: attempt to fix out-of-order charts when the browser resumes from sleep (#656) 2026-02-24 01:54:04 +08:00
ᴀᴍᴛᴏᴀᴇʀ
3e465d9b71 fix: tolerate flac/audio fields that are present but null (#655) 2026-02-23 12:34:12 +08:00
ᴀᴍᴛᴏᴀᴇʀ
1930a57edd feat: add debouncing to improve auto-scroll on the log page (#654) 2026-02-21 23:37:30 +08:00
ᴀᴍᴛᴏᴀᴇʀ
bb1576a0df perf: use itertools' join to avoid the extra allocation of collecting into a Vec (#652) 2026-02-19 19:04:10 +08:00
ᴀᴍᴛᴏᴀᴇʀ
5350d3491b chore: upgrade Rust to 1.93.1; remove some unused variables in ws (#650) 2026-02-15 16:31:31 +08:00
ᴀᴍᴛᴏᴀᴇʀ
e130f14c13 fix: fix incorrect status display on the detail page (#649) 2026-02-15 16:28:41 +08:00
ᴀᴍᴛᴏᴀᴇʀ
980f74a242 fix: fix the valid check for videos in certain favorite folders (#648) 2026-02-15 15:09:22 +08:00
ᴀᴍᴛᴏᴀᴇʀ
8c04dc6564 chore: auto-sort frontend imports; merge icon imports and replace deprecated ones (#642) 2026-02-07 09:27:20 +08:00
ᴀᴍᴛᴏᴀᴇʀ
c49ec81d51 fix: fix several minor frontend issues (#641) 2026-02-06 14:12:18 +08:00
ᴀᴍᴛᴏᴀᴇʀ
580a66eb17 feat: broaden risk-control detection, treating HTTP 403 or 412 as rate limiting (#640) 2026-02-05 17:13:25 +08:00
ᴀᴍᴛᴏᴀᴇʀ
295d4105aa feat: support a custom ffmpeg path (#639) 2026-02-05 15:58:33 +08:00
ApliNi
151251719b feat: add a config-directory environment variable (#632)
* feat: add a config-directory environment variable

* feat: add a config-directory command-line argument

* feat: add a short flag for the config directory

* refactor: adjust the implementation

---------

Co-authored-by: amtoaer <amtoaer@gmail.com>
2026-02-03 13:42:16 +08:00
amtoaer
e51fed984b chore: release bili-sync 2.10.3 2026-01-29 13:59:42 +08:00
ᴀᴍᴛᴏᴀᴇʀ
716c78b1e3 chore: pin the project's Rust version to 1.93.0; adjust CI to read the config (#626) 2026-01-28 18:56:54 +08:00
ᴀᴍᴛᴏᴀᴇʀ
22bc6bb3e8 feat: adjust the video source page UI for better readability (#623) 2026-01-26 20:11:38 +08:00
ᴀᴍᴛᴏᴀᴇʀ
fedbd4cdb1 feat: adjust video codec priority, defaulting to AVC (#622) 2026-01-26 18:23:31 +08:00
amtoaer
c1d9dc8b87 chore: release bili-sync 2.10.2 2026-01-16 15:25:33 +08:00
ᴀᴍᴛᴏᴀᴇʀ
7f09a98d6c feat: implement failed-only, succeeded-only, and pending-only filters (#610) 2026-01-16 15:10:39 +08:00
ᴀᴍᴛᴏᴀᴇʀ
269647ac22 chore: use ring instead of aws-lc-rs (#609) 2026-01-15 14:39:16 +08:00
amtoaer
e0189c5b36 chore: remove sea-orm's TLS dependency 2026-01-14 16:54:18 +08:00
开心
4c1abcf48c feat: add a failed-only filter option to the videos page (#605)
* add a failed-only filter option to the videos page

* compute the failure flag only when filtering by failed-only, avoiding an extra paginated query

* remove redundant logic for the failed-only filter

* refactor: backend adjustments: 1) add an intermediate layer for status -> SQL to ease extension; 2) change Option<bool> to a bool with a default; 3) rename failed to failed_only

* refactor: frontend adjustments: 1) also rename to failed_only on the frontend; 2) fix many places that didn't read failedOnly before loadVideo; 3) slightly adjust frontend styling

* format

---------

Co-authored-by: kaixin1995 <admin@haokaikai.cn>
Co-authored-by: amtoaer <amtoaer@gmail.com>
2026-01-13 22:28:10 +08:00
amtoaer
c05463285b chore: release bili-sync 2.10.1 2026-01-12 11:25:01 +08:00
ᴀᴍᴛᴏᴀᴇʀ
264de2487e fix: fix status-editor buttons being unclickable after the Svelte upgrade (#603) 2026-01-12 11:22:48 +08:00
amtoaer
ea575b04e6 chore: release bili-sync 2.10.0 2026-01-11 23:17:34 +08:00
ᴀᴍᴛᴏᴀᴇʀ
f122b9756b feat: moderately increase the history log capacity (#602) 2026-01-11 21:42:31 +08:00
ᴀᴍᴛᴏᴀᴇʀ
26514f7174 feat: support clear-reset to simplify refreshing paginated videos (#596) 2026-01-11 15:03:31 +08:00
ᴀᴍᴛᴏᴀᴇʀ
5944298f10 Add QR-code login (#601)
* feat: add QR-code login, generating a QR code and polling the login status

* feat: strengthen the QR-code login tests; improve the doc comments for QR generation and status polling

* refactor: backend changes: 1) split login into credential; 2) reuse the same extract function for QR login and credential refresh; 3) trim comments

* refactor: frontend changes: 1) handle scanning in a dedicated dialog; 2) use the same layout for every status to avoid layout jumps on state changes

* format

---------

Co-authored-by: zkl <i@zkl2333.com>
2026-01-11 12:59:48 +08:00
ᴀᴍᴛᴏᴀᴇʀ
64eecaa822 fix: fix chart display anomalies in some edge cases (#592) 2026-01-09 18:14:32 +08:00
amtoaer
18d06c51ba chore: ignore warnings from the frontend shadcn-svelte components 2026-01-05 13:30:09 +08:00
amtoaer
ffa5c1e860 refactor: centralize the default values of configuration options 2026-01-05 13:01:56 +08:00
ᴀᴍᴛᴏᴀᴇʀ
97e1b6285e feat: fall back to the default address when bind_address fails to bind, so the web service can still start (#590) 2026-01-05 12:13:50 +08:00
ᴀᴍᴛᴏᴀᴇʀ
e2a24eff29 chore: update frontend and backend dependency versions (#589) 2026-01-05 11:46:04 +08:00
ᴀᴍᴛᴏᴀᴇʀ
56f5ed8e01 feat: support searching followed uploaders (#588) 2026-01-05 00:39:45 +08:00
ᴀᴍᴛᴏᴀᴇʀ
0b5ae3d664 fix: fix parallel downloads not being triggered correctly; handle files differently depending on whether they are streams (#586) 2025-12-31 11:52:38 +08:00
amtoaer
f24ee97b28 chore: release bili-sync 2.9.4 2025-12-26 21:21:36 +08:00
ᴀᴍᴛᴏᴀᴇʀ
96c11bb077 fix: fix incorrect behavior when upgrading directly from versions below 2.6.0 (#583) 2025-12-26 21:21:03 +08:00
ᴀᴍᴛᴏᴀᴇʀ
2455f7c83d fix: move toasts to top-center to avoid covering interactive components (#582) 2025-12-26 18:12:41 +08:00
ᴀᴍᴛᴏᴀᴇʀ
4faf5a7cf9 fix: fix flags not being reset correctly; support resetting tasks with any failure count (#581) 2025-12-26 17:43:40 +08:00
ᴀᴍᴛᴏᴀᴇʀ
c2c732093d fix: fix 404 not found errors when downloading certain videos (#579) 2025-12-26 14:24:52 +08:00
amtoaer
4103122f6b chore: release bili-sync 2.9.3 2025-12-20 00:43:27 +08:00
amtoaer
14b8f877cf refactor: fix clippy warnings 2025-12-20 00:42:47 +08:00
welann
8dfc7ddf5c fix: use unique ids for the filter/skip option Switches and fix their Label associations (#575) 2025-12-20 00:40:39 +08:00
amtoaer
9a63e1eb6f chore: release bili-sync 2.9.2 2025-12-12 14:13:13 +08:00
ᴀᴍᴛᴏᴀᴇʀ
d1b279ed7f fix: change the filtering logic to avoid storage volumes being wrongly filtered out when disk-type detection fails (#568) 2025-12-11 11:35:36 +08:00
amtoaer
128ca49225 chore: release bili-sync 2.9.1 2025-12-09 12:40:42 +08:00
ᴀᴍᴛᴏᴀᴇʀ
8c2e8da2b0 fix: filter for SSD/HDD and dedupe by name when fetching disk space, preventing double counting (#563) 2025-12-09 12:39:49 +08:00
amtoaer
5dd7486b12 chore: release bili-sync 2.9.0 2025-12-08 00:54:24 +08:00
amtoaer
b7d9e5dc0c fix: the cursor should become a pointer when hovering over the theme toggle button 2025-12-07 00:38:54 +08:00
ᴀᴍᴛᴏᴀᴇʀ
d1eac3e298 feat: support disabling the credential check/refresh task, letting users maintain credential validity themselves (#560) 2025-12-06 23:26:06 +08:00
ᴀᴍᴛᴏᴀᴇʀ
3f047771cb feat: add a case-insensitive "contains" filter to the video rules section (#559) 2025-12-06 22:00:14 +08:00
ᴀᴍᴛᴏᴀᴇʀ
f1703096fd feat: support batch-editing the download status of videos matching filter conditions (#558) 2025-12-06 19:47:16 +08:00
ᴀᴍᴛᴏᴀᴇʀ
930660045f feat: support a dark theme (#557) 2025-12-06 01:44:13 +08:00
ᴀᴍᴛᴏᴀᴇʀ
6391aa67c0 feat: support searching by BV id (#554) 2025-12-05 21:52:31 +08:00
ᴀᴍᴛᴏᴀᴇʀ
b5ef76b0ed fix: correctly handle favorite-folder entries under "followed collections / favorites", plus some style and text adjustments (#553) 2025-12-05 16:38:10 +08:00
ᴀᴍᴛᴏᴀᴇʀ
f37d9af678 fix: tolerate string-typed timestamps returned by the API (#552) 2025-12-05 01:56:18 +08:00
ᴀᴍᴛᴏᴀᴇʀ
7ef38a38ed feat: support custom webhook templates and sending test messages (#551) 2025-12-05 00:21:36 +08:00
amtoaer
e76673d076 chore: release bili-sync 2.8.0 2025-12-01 23:15:07 +08:00
Naomi
f3822dd536 feat: improve NFO time fields and actor thumbnails (#542) 2025-11-29 01:22:26 +08:00
amtoaer
688c8cec6a feat: add some context to credential refresh for easier debugging 2025-11-21 10:50:52 +08:00
amtoaer
c854e4e889 fix: attempt to fix timestamp issues caused by overly fast execution 2025-11-20 15:04:39 +08:00
amtoaer
645e686822 fix: ensure error types appearing in streams are preserved correctly 2025-11-11 14:42:05 +08:00
ᴀᴍᴛᴏᴀᴇʀ
670f21a725 refactor: reorganize the download-task scheduling code for readability and robustness (#531) 2025-11-11 01:29:52 +08:00
amtoaer
8931cb5d2a feat: stop scrollbars from causing layout shifts; improve chart colors 2025-11-09 21:48:05 +08:00
amtoaer
66996a77c6 chore: print the offending stream info when FLAC stream parsing fails, to aid future fixes 2025-11-09 19:11:46 +08:00
ᴀᴍᴛᴏᴀᴇʀ
170bd14fe3 feat: refactor how video download tasks are triggered, migrating from a simple tokio::sleep to a scheduler (#529) 2025-11-09 01:11:42 +08:00
ᴀᴍᴛᴏᴀᴇʀ
c69a88f1da feat: refine the details of risk-control handling (#527) 2025-11-08 00:41:07 +08:00
ᴀᴍᴛᴏᴀᴇʀ
8ac6829e61 feat: support configurable notifiers that send a notification when a video source or the whole download task errors (#526) 2025-11-07 20:37:09 +08:00
ᴀᴍᴛᴏᴀᴇʀ
a871db655f feat: support deleting video sources (#525) 2025-11-07 15:15:03 +08:00
ᴀᴍᴛᴏᴀᴇʀ
854d39cf88 feat: improve global config handling; adjust the download path filling logic (#523) 2025-11-06 17:25:26 +08:00
amtoaer
b6cba69e11 chore: report the specific error when a video stream fails 2025-11-02 00:43:07 +08:00
ᴀᴍᴛᴏᴀᴇʀ
ff6db0ad97 feat: switch some APIs, refactor the wbi signing implementation, and add extra risk-control detection (#503) 2025-10-15 02:01:41 +08:00
ᴀᴍᴛᴏᴀᴇʀ
84d353365a feat: support a default path for quick subscriptions (#502) 2025-10-14 18:44:33 +08:00
ᴀᴍᴛᴏᴀᴇʀ
c7e0d31811 chore: remove migration logic for the legacy config file (#501) 2025-10-14 16:32:40 +08:00
ᴀᴍᴛᴏᴀᴇʀ
2fff5134cf fix: fix occasionally abnormally high initial sysinfo values (#499) 2025-10-14 01:38:26 +08:00
ᴀᴍᴛᴏᴀᴇʀ
8a1569d085 refactor: refactor WebSocket handling, tidying the logic and improving performance (#498) 2025-10-13 20:15:30 +08:00
ᴀᴍᴛᴏᴀᴇʀ
de702435af feat: refactor the download module to download files to a temporary directory before moving them to the final path (#495) 2025-10-13 01:59:50 +08:00
ᴀᴍᴛᴏᴀᴇʀ
eb2606f120 feat: add detection for charging-exclusive videos, bangumi, and films; also fix category being wrongly overwritten (#494) 2025-10-12 03:01:04 +08:00
ᴀᴍᴛᴏᴀᴇʀ
02c42861ab feat: support skipping certain processing steps for a video (#492) 2025-10-11 20:45:44 +08:00
ᴀᴍᴛᴏᴀᴇʀ
ed54ca13b8 feat: support fetching uploads via the dynamic API, which also returns dynamic-feed videos (#485) 2025-10-10 18:52:07 +08:00
ᴀᴍᴛᴏᴀᴇʀ
4d6669a48a refactor: simplify logic with RETURNING (#488) 2025-10-10 13:57:53 +08:00
ᴀᴍᴛᴏᴀᴇʀ
eadb464363 chore: update Rust dependencies (#486) 2025-10-10 12:49:11 +08:00
amtoaer
2b046362d7 chore: release bili-sync 2.7.0 2025-09-25 00:51:59 +08:00
ᴀᴍᴛᴏᴀᴇʀ
61c9e7de88 chore: minor frontend tweaks; add Windows to the random UA pool (#470) 2025-09-25 00:50:17 +08:00
ᴀᴍᴛᴏᴀᴇʀ
3d25c6b321 chore: run a pass of auto-correct (#468) 2025-09-24 18:50:47 +08:00
ᴀᴍᴛᴏᴀᴇʀ
d35858790b chore: clippy should reject warnings (#466) 2025-09-24 17:58:04 +08:00
ᴀᴍᴛᴏᴀᴇʀ
b441f04cdf chore: fix new clippy warnings (#467) 2025-09-24 17:36:20 +08:00
ᴀᴍᴛᴏᴀᴇʀ
4db7e6763a feat: support re-evaluating historical videos; show each video's rule-evaluation status on the frontend (#465) 2025-09-24 17:08:04 +08:00
ᴀᴍᴛᴏᴀᴇʀ
bbbb7d0c5b feat: use ETags to save content transfer; write lifetimes explicitly (#464) 2025-09-24 02:03:06 +08:00
ᴀᴍᴛᴏᴀᴇʀ
210c94398a feat: implement video filtering rules (#457) 2025-09-24 00:42:27 +08:00
ᴀᴍᴛᴏᴀᴇʀ
6c7d295fe6 fix: fix subtitle risk-control errors (#463) 2025-09-23 08:27:14 +08:00
ᴀᴍᴛᴏᴀᴇʀ
71519af2f3 chore: remove the unnecessary image-proxy (#451) 2025-08-28 18:51:23 +08:00
Thomas Yang
8ed2fbae24 feat: use a random value for the User-Agent request header (#447) 2025-08-27 10:27:23 +08:00
amtoaer
fd90bc8b73 chore: stop printing a long string of URLs on download failure 2025-08-08 20:23:40 +08:00
amtoaer
66bd3d6a41 chore: add an explanatory note when ffmpeg fails 2025-08-07 15:11:29 +08:00
amtoaer
5ef23a678f chore: release bili-sync 2.6.3 2025-08-07 12:41:48 +08:00
ᴀᴍᴛᴏᴀᴇʀ
66079f3adc feat: enable SQLite WAL, remove unnecessary Arcs, and release the database properly (#421) 2025-08-06 17:20:06 +08:00
ᴀᴍᴛᴏᴀᴇʀ
4f780faf64 fix: add busy_timeout, minimize transaction blocks, and increase the pages processed per batch (#420) 2025-08-06 14:08:07 +08:00
ᴀᴍᴛᴏᴀᴇʀ
dbcb1fa78b fix: add a concurrency limit when filling video details to avoid database contention (#419) 2025-08-06 10:37:06 +08:00
amtoaer
386dac7735 chore: format backend code 2025-08-05 23:11:55 +08:00
Xinyu Bao
5537c621be Add error messages for the case in which the database initialization fails (#415) 2025-08-05 23:11:11 +08:00
amtoaer
c7978e20da chore: release bili-sync 2.6.2 2025-07-23 22:48:30 +08:00
ᴀᴍᴛᴏᴀᴇʀ
6e4af47bda fix: fix collection_type deserialization errors (#403) 2025-07-23 22:46:39 +08:00
ᴀᴍᴛᴏᴀᴇʀ
791e4997a0 docs: fix configuration option descriptions (#396) 2025-07-13 16:59:06 +08:00
amtoaer
05ab83fc93 chore: release bili-sync 2.6.1 2025-07-13 00:29:52 +08:00
ᴀᴍᴛᴏᴀᴇʀ
18ed9e09b1 fix: fix WebSocket creation failures in Chromium-based browsers (#395) 2025-07-13 00:28:24 +08:00
amtoaer
e196afa8ce chore: release bili-sync 2.6.0 2025-07-12 19:32:58 +08:00
amtoaer
9b2da75391 chore: also give the frontend a version number, updated on release 2025-07-12 19:32:15 +08:00
amtoaer
664e1d9f21 docs: add the management page to the README 2025-07-12 19:28:10 +08:00
amtoaer
31c26f033e docs: update docs to follow the latest code changes 2025-07-12 19:23:42 +08:00
ᴀᴍᴛᴏᴀᴇʀ
29d78dabdd perf: optimize dashboard query performance (#393) 2025-07-12 16:06:16 +08:00
ᴀᴍᴛᴏᴀᴇʀ
87fb597ba4 fix: fix several issues found in local testing (#392) 2025-07-12 15:17:54 +08:00
ᴀᴍᴛᴏᴀᴇʀ
c8f7a2267d chore: update Rust dependencies (#391) 2025-07-11 20:44:38 +08:00
ᴀᴍᴛᴏᴀᴇʀ
2837bb5234 feat: make WebSocket connect use a Promise, ensuring sendMessage happens after connect (#390) 2025-07-11 20:00:15 +08:00
ᴀᴍᴛᴏᴀᴇʀ
0990a276ff fix: auto-collapse the mobile sidebar after a tap (#389) 2025-07-11 19:15:13 +08:00
ᴀᴍᴛᴏᴀᴇʀ
adc2e32e58 feat: support a force parameter (disabled by default) when resetting task status (#388) 2025-07-11 19:01:01 +08:00
ᴀᴍᴛᴏᴀᴇʀ
267e9373f9 feat: add the missing items to the settings page; allow toggling password visibility in the form (#387) 2025-07-11 01:53:03 +08:00
ᴀᴍᴛᴏᴀᴇʀ
dd23d1db58 feat: switch event push from SSE to WebSocket (#386) 2025-07-11 00:14:20 +08:00
ᴀᴍᴛᴏᴀᴇʀ
cc25749445 feat: add a download-status card to the frontend (#385) 2025-07-10 15:13:25 +08:00
ᴀᴍᴛᴏᴀᴇʀ
655b4389b7 feat: support an "open on bilibili" quick action, plus some detail polish (#384) 2025-07-10 01:46:34 +08:00
ᴀᴍᴛᴏᴀᴇʀ
486dab5355 chore: add frontend compression (#383) 2025-07-10 00:03:16 +08:00
ᴀᴍᴛᴏᴀᴇʀ
74a45526f0 fix: fix the auto-scroll issue on the log page (#382) 2025-07-09 23:34:50 +08:00
ᴀᴍᴛᴏᴀᴇʀ
ce60838244 fix: fix ineffective filter queries (#381) 2025-07-09 21:50:16 +08:00
ᴀᴍᴛᴏᴀᴇʀ
35866888e8 fix: newly added subscriptions should be enabled by default (#380) 2025-07-08 15:29:57 +08:00
ᴀᴍᴛᴏᴀᴇʀ
fbb7623ee1 fix: attempt to fix download errors (#379) 2025-07-08 14:37:31 +08:00
ᴀᴍᴛᴏᴀᴇʀ
1affe4d594 feat: change the interaction flow; support viewing logs from the frontend (#378) 2025-07-08 12:48:51 +08:00
ᴀᴍᴛᴏᴀᴇʀ
7c73a2f01a feat: add a dashboard page (#377) 2025-07-07 23:32:46 +08:00
ᴀᴍᴛᴏᴀᴇʀ
a627584fb0 refactor: split the API by path to keep single files from growing too large (#376) 2025-07-07 01:51:40 +08:00
ᴀᴍᴛᴏᴀᴇʀ
636a843bda chore: remove utoipa (#375) 2025-07-07 01:01:15 +08:00
ᴀᴍᴛᴏᴀᴇʀ
7bb4e7bc44 feat: support manually adding subscriptions by ID from the frontend (#374) 2025-07-06 22:49:17 +08:00
ᴀᴍᴛᴏᴀᴇʀ
e50318870e feat: support editing and submitting the Config from the frontend (#370) 2025-06-18 16:50:16 +08:00
ᴀᴍᴛᴏᴀᴇʀ
28971c3ff3 feat: add a video source management page supporting changes to path and enabled state (#369) 2025-06-17 18:55:45 +08:00
ᴀᴍᴛᴏᴀᴇʀ
f47ce92a51 chore: speed up debug builds by stripping debuginfo from dependencies (#368) 2025-06-17 13:56:36 +08:00
ᴀᴍᴛᴏᴀᴇʀ
a35794ed7a refactor: handle field mapping and the invalid check on the backend (#367) 2025-06-17 13:44:23 +08:00
ᴀᴍᴛᴏᴀᴇʀ
bad00af147 chore: remove unused dependencies (#366) 2025-06-17 02:45:48 +08:00
ᴀᴍᴛᴏᴀᴇʀ
4539e9379d feat: migrate all configuration to the database with runtime reload support (#364) 2025-06-17 02:15:11 +08:00
ᴀᴍᴛᴏᴀᴇʀ
a46c2572b1 chore: add an enabled field to video sources (#362) 2025-06-13 12:00:10 +08:00
ᴀᴍᴛᴏᴀᴇʀ
a41efdbe78 chore: remove the single-row max-width limit on subscription cards so they can fill the screen (#359) 2025-06-09 12:17:19 +08:00
ᴀᴍᴛᴏᴀᴇʀ
a98e49347b feat: support loading the user's subscriptions and favorites in the web UI, with one-click subscribe (#357) 2025-06-09 11:16:33 +08:00
ᴀᴍᴛᴏᴀᴇʀ
586d5ec4ee chore: dramatically shrink the size of the built binaries (#356) 2025-06-06 23:34:46 +08:00
ᴀᴍᴛᴏᴀᴇʀ
65a047b0fa feat: support manually editing the status of a video or page; tidy some code (#355) 2025-06-06 07:39:17 +08:00
ᴀᴍᴛᴏᴀᴇʀ
c0ed37750f refactor: skip boxing for fixed-size tasks and use tokio::join! directly (#354) 2025-06-05 16:30:09 +08:00
ᴀᴍᴛᴏᴀᴇʀ
0e98f484ef chore: run a frontend format and lint pass; try adding a frontend lint check to CI (#353) 2025-06-04 21:37:26 +08:00
ᴀᴍᴛᴏᴀᴇʀ
6226fa7c4d fix: fix some small issues and polish the experience (#352) 2025-06-04 21:15:19 +08:00
ᴀᴍᴛᴏᴀᴇʀ
c528152986 feat: refactor and optimize some APIs; support resetting all failed tasks (#351) 2025-06-04 17:04:15 +08:00
ᴀᴍᴛᴏᴀᴇʀ
45849957ff refactor: optimize performance when filling video details (#350) 2025-06-02 00:56:02 +08:00
ᴀᴍᴛᴏᴀᴇʀ
8510aa318e feat: support fetching my favorite folders, favorited collections, and followed uploaders (#349) 2025-06-02 00:15:21 +08:00
ᴀᴍᴛᴏᴀᴇʀ
c07e475fe6 chore: switch to a prettier, more modern frontend (#348) 2025-06-01 13:42:10 +08:00
ᴀᴍᴛᴏᴀᴇʀ
a574d005c3 refactor: refactor nfo for extensibility and readability, easing future changes (#345) 2025-05-30 17:28:42 +08:00
ᴀᴍᴛᴏᴀᴇʀ
e9d1c9eadb refactor: remove the pointless bvid-to-aid conversion (#344) 2025-05-30 14:28:14 +08:00
ᴀᴍᴛᴏᴀᴇʀ
a9f604a07d feat: support concurrent downloading of a single file (#343) 2025-05-30 02:19:23 +08:00
amtoaer
6383730706 ci: use the stable toolchain for everything except fmt 2025-05-29 01:56:11 +08:00
ᴀᴍᴛᴏᴀᴇʀ
34d3e47b2d refactor: adjust the scan logic for video lists/collections to improve performance (#342) 2025-05-29 01:50:06 +08:00
amtoaer
d7ec0584bc chore: release bili-sync 2.5.1 2025-05-19 22:54:40 +08:00
ᴀᴍᴛᴏᴀᴇʀ
1ec015856b fix: fix Dolby Vision degrading to plain HDR after merging (#333)
* fix: dolby hdr download not correct

* chore: keep only the parameters valid for Dolby Vision

---------

Co-authored-by: njzydark <njzydark@gmail.com>
2025-05-19 20:57:42 +08:00
amtoaer
99d4d900e6 build: pin git2 to 0.20.2 to try to fix the Windows build 2025-05-19 18:51:29 +08:00
amtoaer
f85f105e69 refactor: fix the odd if/else ordering 2025-05-19 17:06:28 +08:00
ᴀᴍᴛᴏᴀᴇʀ
8a1395458c fix: improve video stream codec detection (#332)
* fix: improve video stream codec detection

* test: add new unit tests to ensure HDR and Dolby Vision are fetched correctly
2025-05-19 17:04:48 +08:00
amtoaer
bafb4af8dd chore: upgrade dependencies; fix lints from newly introduced clippy rules 2025-05-19 16:53:40 +08:00
amtoaer
f52724b974 chore: release bili-sync 2.5.0 2025-02-27 14:04:40 +08:00
amtoaer
4e1e0c40cf docs: update docs to follow the latest code changes 2025-02-27 14:03:47 +08:00
ᴀᴍᴛᴏᴀᴇʀ
439513e5ab chore: change the error check to consider the error chain (#291) 2025-02-27 13:39:00 +08:00
ᴀᴍᴛᴏᴀᴇʀ
33a61ec08d fix: switch collections/video lists to full fetches to ensure correct updates (#290) 2025-02-25 20:55:50 +08:00
ᴀᴍᴛᴏᴀᴇʀ
a6d0d6b777 feat: consider backup_url when downloading; support sorting by CDN priority (#288) 2025-02-24 19:48:07 +08:00
amtoaer
ae685cbe61 ci: only trigger release on tag push, skipping the commit build 2025-02-21 21:44:36 +08:00
amtoaer
16e14fc371 chore: release bili-sync 2.4.1 2025-02-21 21:22:52 +08:00
amtoaer
b4a5dee236 ci: make CI builds carry a version tag 2025-02-21 21:15:10 +08:00
ᴀᴍᴛᴏᴀᴇʀ
2b3e6f9547 chore: print a welcome message on startup; adjust logging and the build pipeline (#285) 2025-02-21 21:04:39 +08:00
ᴀᴍᴛᴏᴀᴇʀ
f8b93d2c76 fix: fix config initialization detection (#284) 2025-02-21 19:32:46 +08:00
ᴀᴍᴛᴏᴀᴇʀ
94462ca706 chore: update the Rust edition to 2024; update dependencies (#283) 2025-02-21 17:47:49 +08:00
amtoaer
9cbefc26ab chore: release bili-sync 2.4.0 2025-02-19 22:20:39 +08:00
ᴀᴍᴛᴏᴀᴇʀ
2bfd69c15e docs: update docs to follow the latest code changes (#275) 2025-02-19 22:12:47 +08:00
ᴀᴍᴛᴏᴀᴇʀ
4765d6f50a fix: the API TOKEN input should use the password type (#274) 2025-02-19 21:22:08 +08:00
ᴀᴍᴛᴏᴀᴇʀ
bf306dfec3 chore: add the missing error_for_status call; fix a clippy formatting error (#273) 2025-02-19 20:40:40 +08:00
ᴀᴍᴛᴏᴀᴇʀ
a6425f11a2 fix: fix how per-page download status is set in video (#272) 2025-02-19 19:04:51 +08:00
ᴀᴍᴛᴏᴀᴇʀ
395ef0013a ci: run CI uniformly on Ubuntu 24.04 (20.04 is being deprecated) (#271) 2025-02-19 17:28:04 +08:00
ᴀᴍᴛᴏᴀᴇʀ
ab0533210f chore: errors now print more detailed information; correct the detection of common errors (#270) 2025-02-19 16:53:26 +08:00
ᴀᴍᴛᴏᴀᴇʀ
3eb2f0b14d ci: fix and optimize the CI pipeline (#269) 2025-02-19 14:33:47 +08:00
ᴀᴍᴛᴏᴀᴇʀ
42272b1294 ci: adjust the build pipeline to also build binaries on commit (#266) 2025-02-19 04:21:33 +08:00
ᴀᴍᴛᴏᴀᴇʀ
d1168f35f3 build: show detailed build information in version (#265)
* build: show detailed build information in version

* chore: tweaks
2025-02-19 03:47:01 +08:00
ᴀᴍᴛᴏᴀᴇʀ
bc27778366 chore: let the frontend clear the video-source filter (click the source twice); move the API TOKEN field (#264) 2025-02-19 02:18:20 +08:00
ᴀᴍᴛᴏᴀᴇʀ
9c5f3452e9 fix: fix a reset execution issue (#263) 2025-02-19 01:52:18 +08:00
ᴀᴍᴛᴏᴀᴇʀ
d3b4559b2d feat: add a rough-and-ready frontend (#262) 2025-02-19 01:47:09 +08:00
ᴀᴍᴛᴏᴀᴇʀ
59305c0bb4 feat: reset_failed supports correcting flag bits, letting users manually trigger new subtasks (#261) 2025-02-18 23:36:44 +08:00
ᴀᴍᴛᴏᴀᴇʀ
32214d5d5f chore: rename video list model / video list to video source (#260) 2025-02-18 22:36:25 +08:00
ᴀᴍᴛᴏᴀᴇʀ
315ad13703 feat: ignore some common errors during status updates (#259) 2025-02-18 22:22:29 +08:00
ᴀᴍᴛᴏᴀᴇʀ
e12a9cda95 feat: add an API to reset a single video's status; video endpoints return download status (#258) 2025-02-18 19:24:55 +08:00
ᴀᴍᴛᴏᴀᴇʀ
c995b3bf72 feat: add Swagger docs with detailed type annotations (#257) 2025-02-18 01:55:54 +08:00
ᴀᴍᴛᴏᴀᴇʀ
1467c262a1 feat: add some simple APIs; adjust program-entry initialization accordingly (#251) 2025-02-17 16:58:51 +08:00
amtoaer
7251802202 chore: format code 2025-02-16 03:56:47 +08:00
dragonlanc
e1285ff49a chore: fix typo seprate -> separate (#253) 2025-02-16 03:38:19 +08:00
ᴀᴍᴛᴏᴀᴇʀ
e01a22136e refactor: constrain status with const generics (#250) 2025-02-13 21:41:05 +08:00
ᴀᴍᴛᴏᴀᴇʀ
eba69ff82a chore: split the main function; support responding to termination signals (#247)
* chore: split the main function; support responding to Ctrl+C

* chore: Unix should also handle SIGTERM
2025-02-12 03:34:17 +08:00
amtoaer
5af6fe5e6e chore: remove extra whitespace 2025-02-12 01:36:08 +08:00
ᴀᴍᴛᴏᴀᴇʀ
9d8e398cbe refactor: replace the hand-rolled download code with tokio's wrappers (#245) 2025-02-05 02:33:15 +08:00
ᴀᴍᴛᴏᴀᴇʀ
7097b2a6b9 fix: fix a typo (#244) 2025-02-05 02:28:52 +08:00
ᴀᴍᴛᴏᴀᴇʀ
acf7359d56 chore: simplify uploader handling; support updating uploader info (#243) 2025-02-04 23:59:51 +08:00
ᴀᴍᴛᴏᴀᴇʀ
7c514b2dcc feat: put the video's original URL into its description (#241) 2025-02-04 23:25:54 +08:00
ᴀᴍᴛᴏᴀᴇʀ
2c4fa441e7 fix: await task completion (#238) 2025-02-01 20:13:58 +08:00
ᴀᴍᴛᴏᴀᴇʀ
51672e8607 chore: run the main task with tokio::spawn (#237) 2025-02-01 18:47:27 +08:00
ᴀᴍᴛᴏᴀᴇʀ
cc7f773300 feat: support downloading CC subtitles (#234) 2025-01-30 01:20:53 +08:00
amtoaer
802565e4f6 chore: release bili-sync 2.3.0 2025-01-25 00:34:47 +08:00
amtoaer
4984026017 docs: update docs to follow the latest code changes 2025-01-25 00:29:12 +08:00
amtoaer
2a98359085 chore: hide target and adjust wording to shorten log lines 2025-01-25 00:11:22 +08:00
amtoaer
979294bb94 fix: fix video path not being set correctly 2025-01-24 14:05:16 +08:00
ᴀᴍᴛᴏᴀᴇʀ
40cf22a7fa refactor: introduce enum_dispatch for static dispatch, improving performance (#232) 2025-01-24 13:44:27 +08:00
ᴀᴍᴛᴏᴀᴇʀ
9e5a8b0573 feat: ensure the video stream returns Err when an error occurs (#231) 2025-01-24 13:17:12 +08:00
ᴀᴍᴛᴏᴀᴇʀ
7c220f0d2b refactor: trim code and unify logic (#229) 2025-01-24 01:11:59 +08:00
amtoaer
aa88f97eff refactor: try rewriting task processing as streams; add comments 2025-01-23 17:13:51 +08:00
ᴀᴍᴛᴏᴀᴇʀ
b4177d4ffc feat: introduce a more robust method for detecting new videos (#228)
* feat: add a latest_row_at column to each video list table

* chore: add the new column to the models

* feat: implement the new interruption condition (untested)

* test: update tests
2025-01-22 23:53:18 +08:00
amtoaer
b888db6a61 refactor: the data chunk is already in memory, so use write_all directly 2025-01-22 01:52:32 +08:00
amtoaer
6ae87364b4 feat: add flush and content-length checks to downloads 2025-01-22 00:18:04 +08:00
amtoaer
18c966a0f9 refactor: avoid some unnecessary to_string calls 2025-01-21 22:59:16 +08:00
amtoaer
ab84a8dad1 refactor: use String only as needed when signing 2025-01-21 22:54:20 +08:00
amtoaer
1a32e38dc3 refactor: replace ok_or and ok_or_else with context 2025-01-21 18:06:54 +08:00
amtoaer
0f25923c52 refactor: continue tidying code; remove all unwraps from the main code paths 2025-01-21 17:17:14 +08:00
amtoaer
cdc30e1b32 refactor: tidy some code; remove a batch of unwraps 2025-01-21 03:12:45 +08:00
NKDark
c10c14c125 chore: change the config file write logic (#222) 2025-01-21 01:39:48 +08:00
amtoaer
60604aeb33 docs: update descriptions; simplify collection/video list configuration 2025-01-17 17:53:32 +08:00
amtoaer
276fb5b3e4 chore: release bili-sync 2.2.0 2025-01-14 19:12:47 +08:00
ᴀᴍᴛᴏᴀᴇʀ
e05f58b8a1 docs: update docs to follow the latest code changes (#217) 2025-01-14 18:16:15 +08:00
amtoaer
8dfc96e1dc chore: add a hint message 2025-01-14 05:18:04 +08:00
amtoaer
cdc639cf75 fix: fix a code semantics error; trim some unnecessary code 2025-01-14 02:21:15 +08:00
amtoaer
847c3115cd chore: stop logging when codecs don't match 2025-01-14 01:19:03 +08:00
amtoaer
7dc049ffe5 chore: set a default request rate limit, manually adjustable by users 2025-01-14 00:08:38 +08:00
amtoaer
265fe630dd fix: fix a type issue in the uploader info API 2025-01-14 00:07:51 +08:00
ᴀᴍᴛᴏᴀᴇʀ
f31900e6c7 deps: update project dependencies (#214) 2025-01-13 19:39:08 +08:00
ᴀᴍᴛᴏᴀᴇʀ
54b46c150e refactor: assorted small refactors (#213) 2025-01-13 18:57:08 +08:00
ᴀᴍᴛᴏᴀᴇʀ
7d9999d6aa feat: adjust and refactor video/audio stream selection, which should improve performance somewhat (#212)
* feat: adjust and refactor video/audio stream selection, which should improve performance somewhat

* test: add a few unit tests
2025-01-13 13:51:16 +08:00
amtoaer
05aa30119e ci: run check with the latest nightly 2025-01-12 03:13:59 +08:00
amtoaer
368b9ef735 style: clear clippy warnings 2025-01-11 23:36:59 +08:00
ᴀᴍᴛᴏᴀᴇʀ
0113bf704d chore: support rate-limiting requests with leaky-bucket (#211)
* chore: remove the previously introduced delay

* feat: support configuring a rate limit for bilibili requests
2025-01-11 23:24:01 +08:00
ᴀᴍᴛᴏᴀᴇʀ
66a7b1394e test: fix Windows unit test failures (#164) 2024-08-09 00:02:56 +08:00
ᴀᴍᴛᴏᴀᴇʀ
ae05cad22f feat: allow the platform's path separator in video_name and page_name (#163) 2024-08-08 23:53:22 +08:00
amtoaer
be3abab13f chore: remove redundant info logs 2024-08-08 22:01:52 +08:00
ᴀᴍᴛᴏᴀᴇʀ
c432a282a7 fix: fix database insert failures when a video has too many pages (#162) 2024-08-03 23:49:00 +08:00
ᴀᴍᴛᴏᴀᴇʀ
e9e20ace93 build: upgrade dependencies (#160) 2024-07-28 15:38:42 +08:00
ᴀᴍᴛᴏᴀᴇʀ
6187827e1b fix: ensure temporary files are deleted at the end, regardless of the download result (#159) 2024-07-28 15:34:00 +08:00
ᴀᴍᴛᴏᴀᴇʀ
8a4a95e343 feat: support configuring download concurrency for videos and pages (#157) 2024-07-28 02:32:02 +08:00
ᴀᴍᴛᴏᴀᴇʀ
401fcdc630 refactor: vendor filenamify locally; make the regular expressions static (#156) 2024-07-28 01:51:37 +08:00
ᴀᴍᴛᴏᴀᴇʀ
b2d22253c5 feat: support downloading an uploader's submitted videos (#155) 2024-07-27 22:35:20 +08:00
ᴀᴍᴛᴏᴀᴇʀ
29bfc2efce refactor: refactor some code; move functions around (#154) 2024-07-25 00:05:29 +08:00
ᴀᴍᴛᴏᴀᴇʀ
75de39dfbb feat: support time format strings; allow using time in video_name and page_name (#152) 2024-07-24 21:06:40 +08:00
ᴀᴍᴛᴏᴀᴇʀ
8f37fdf841 refactor: lift the loop to the outer layer and extract common code (#151) 2024-07-24 00:36:19 +08:00
ᴀᴍᴛᴏᴀᴇʀ
20e3ac2129 build: upgrade the time dependency (#150) 2024-07-23 22:38:52 +08:00
ᴀᴍᴛᴏᴀᴇʀ
3a8f33d273 feat: support configurable delays after each kind of task (#148) 2024-07-23 22:29:25 +08:00
ᴀᴍᴛᴏᴀᴇʀ
d46881aea6 docs: support click-to-zoom for images in the docs (#149) 2024-07-23 04:13:05 -07:00
ᴀᴍᴛᴏᴀᴇʀ
e25339c53c docs: convert images to WebP and compress them, greatly reducing their size (#147) 2024-07-22 22:12:42 +08:00
ᴀᴍᴛᴏᴀᴇʀ
5102999676 docs: fix the incorrect description of the config file location (#145) 2024-07-22 12:53:41 +08:00
amtoaer
991ce3ea3c chore: release bili-sync 2.1.2 2024-07-21 23:40:30 +08:00
amtoaer
e4fb096d0c build: update project dependencies 2024-07-21 22:51:56 +08:00
ᴀᴍᴛᴏᴀᴇʀ
28070aa7d8 docs: add a "How it works" section (#135) 2024-07-21 21:34:52 +08:00
ᴀᴍᴛᴏᴀᴇʀ
33e758bd91 refactor: remove unnecessary markers and code blocks; unify use formatting (#144) 2024-07-21 19:16:52 +08:00
ᴀᴍᴛᴏᴀᴇʀ
86e858082d feat: add wbi signing to the video download API (#143) 2024-07-21 18:47:09 +08:00
ᴀᴍᴛᴏᴀᴇʀ
2ffe432f37 feat: implement wbi signing for the collection API (#140) 2024-07-21 16:49:53 +08:00
A1ca7raz
6ef9ecaee0 chore: fix the license file name (#141) 2024-07-19 20:33:36 +08:00
amtoaer
9ef88e1b2b docs: update some wording and the current feature list 2024-07-11 19:27:02 +08:00
amtoaer
6e7c6061b2 chore: release bili-sync 2.1.1 2024-07-11 18:09:31 +08:00
amtoaer
40b3f77748 docs: add docs for watch-later in 2.1.1 2024-07-11 18:08:13 +08:00
ᴀᴍᴛᴏᴀᴇʀ
c27d1a2381 feat: support scanning and downloading watch-later (#131)
* WIP

* more WIP

* feat: support watch-later

* chore: remove prints
2024-07-10 22:46:01 -07:00
ᴀᴍᴛᴏᴀᴇʀ
4c5d1b6ea1 fix: fix exist_labels possibly being evaluated incorrectly (#132) 2024-07-09 22:47:07 +08:00
amtoaer
0b6fd72682 chore: release bili-sync 2.1.0 2024-07-05 22:44:56 +08:00
amtoaer
e65cd36b2e chore: pin the CNAME file; change the commit message for the gh-pages branch 2024-07-05 22:35:34 +08:00
ᴀᴍᴛᴏᴀᴇʀ
352282f277 docs: revise descriptions globally; include the version in the docs and auto-replace it on release (#128)
* WIP

* chore: misc changes
2024-07-05 22:31:26 +08:00
amtoaer
fa2bc7b5e8 docs: use a custom domain and drop the base directory 2024-07-05 18:26:39 +08:00
amtoaer
bb90f0c6f2 docs: change the docs base directory 2024-07-05 16:58:18 +08:00
amtoaer
90f2a1d4ed docs: add a workflow to build the online docs; fix some issues 2024-07-05 16:45:42 +08:00
amtoaer
e2b65746dd docs: add a standalone docs site; remove the related descriptions from the README (#127) 2024-07-05 02:17:42 +08:00
amtoaer
24d0da0bf3 chore: fix the casing of as to avoid a warning 2024-07-04 01:49:04 +08:00
ᴀᴍᴛᴏᴀᴇʀ
ff1150e863 fix: fix several bugs introduced by the refactor (#126) 2024-07-04 01:00:41 +08:00
amtoaer
940abd4f3b build: change the existing version number; add release-related options 2024-07-03 22:11:31 +08:00
ᴀᴍᴛᴏᴀᴇʀ
4c9ad2318c feat: large-scale refactor; support downloading video collections (#97) 2024-07-03 03:57:12 -07:00
ᴀᴍᴛᴏᴀᴇʀ
097f885050 build: update dependencies (#125) 2024-06-28 03:07:45 -07:00
ᴀᴍᴛᴏᴀᴇʀ
6ebef0a414 ci: disable workflows for PRs in draft state (#123) 2024-06-28 00:04:30 +08:00
ᴀᴍᴛᴏᴀᴇʀ
4818e62414 refactor: introduce clap to handle environment variables and command-line arguments (#119) 2024-06-08 10:57:08 +08:00
ᴀᴍᴛᴏᴀᴇʀ
1744f8647b chore: restructure the project paths, organizing packages as a workspace (#118) 2024-06-08 01:56:53 +08:00
ᴀᴍᴛᴏᴀᴇʀ
c4db12b154 fix: fix numeric overflow caused by a type error (#115) 2024-06-01 03:21:23 +08:00
ᴀᴍᴛᴏᴀᴇʀ
2ef99a20c9 feat: support customizing the video time in NFO files, optionally using the favorited-at time or the publish time (#114)
* feat: support customizing the video time in NFO files, optionally using the favorited-at time or the publish time

* chore: use lowercase
2024-06-01 03:01:39 +08:00
ᴀᴍᴛᴏᴀᴇʀ
67de151234 ci: use an older Rust nightly to avoid compile failures from language changes (#113) 2024-06-01 01:51:19 +08:00
ᴀᴍᴛᴏᴀᴇʀ
73f97f937f feat: check login status before each run to avoid unexpected behavior from expired credentials (#112)
* feat: check login status before each run to avoid unexpected behavior from expired credentials

* refactor: shorten the code
2024-06-01 01:46:15 +08:00
ky0utarou
8fee6fb97a Update README.md - specify user in compose, with a brief note (#102)
* Update README.md - specify user in compose

* Update README.md - a brief note on specifying user in compose
2024-05-08 19:11:32 +08:00
ᴀᴍᴛᴏᴀᴇʀ
e5e5b07978 fix: fix ffmpeg hanging when the target file already exists (#99) 2024-05-05 17:22:35 +08:00
ᴀᴍᴛᴏᴀᴇʀ
cd2bd9cbb3 chore: reduce the download concurrency and the read_timeout value (#96)
* chore: reduce the download concurrency and the read_timeout value

* chore: fix comments
2024-05-03 12:48:53 +08:00
ᴀᴍᴛᴏᴀᴇʀ
f044b18337 chore: replace env_logger with tracing (#93) 2024-05-02 03:00:16 +08:00
amtoaer
d3bfca42f6 ci: install dependencies before copying the binary to make use of the Docker cache 2024-05-02 00:45:47 +08:00
ky0utarou
10ccb47790 ci: Dockerfile - keep tzdata (#91)
* Dockerfile - keep tzdata for correct time

* Dockerfile - install tzdata only for correct logging time

refer to https://stackoverflow.com/a/68996528
2024-05-01 21:21:22 +08:00
ᴀᴍᴛᴏᴀᴇʀ
e732e7d616 feat: relax the database connection pool's max connections and acquire timeout to avoid timeout errors (#87) 2024-04-29 13:46:22 +08:00
amtoaer
f81d9fc6eb chore: bump the version and add a license 2024-04-29 00:59:50 +08:00
399 changed files with 33765 additions and 9024 deletions

.github/workflows/build-binary.yaml (vendored, new file)

@@ -0,0 +1,111 @@
name: Build Binary
on:
workflow_call:
jobs:
build-frontend:
name: Build frontend
runs-on: ubuntu-24.04
defaults:
run:
working-directory: web
steps:
- name: Checkout repo
uses: actions/checkout@v6
- name: Setup bun
uses: oven-sh/setup-bun@v2
with:
bun-version: latest
- name: Install dependencies
run: bun install --frozen-lockfile
- name: Cache dependencies
uses: actions/cache@v5
with:
path: ~/.bun/install/cache
key: ${{ runner.os }}-bun-${{ hashFiles('docs/bun.lockb') }}
restore-keys: |
${{ runner.os }}-bun-
- name: Build Frontend
run: bun run build
- name: Upload Web Build Artifact
uses: actions/upload-artifact@v6
with:
name: web-build
path: web/build
build:
name: Build bili-sync-rs for ${{ matrix.platform.release_for }}
needs: build-frontend
runs-on: ${{ matrix.platform.os }}
strategy:
matrix:
platform:
- release_for: Linux-armv7
os: ubuntu-24.04
target: armv7-unknown-linux-musleabihf
bin: bili-sync-rs
name: bili-sync-rs-Linux-armv7-musl.tar.gz
- release_for: Linux-x86_64
os: ubuntu-24.04
target: x86_64-unknown-linux-musl
bin: bili-sync-rs
name: bili-sync-rs-Linux-x86_64-musl.tar.gz
- release_for: Linux-aarch64
os: ubuntu-24.04
target: aarch64-unknown-linux-musl
bin: bili-sync-rs
name: bili-sync-rs-Linux-aarch64-musl.tar.gz
- release_for: macOS-x86_64
os: macOS-latest
target: x86_64-apple-darwin
bin: bili-sync-rs
name: bili-sync-rs-Darwin-x86_64.tar.gz
- release_for: macOS-aarch64
os: macOS-latest
target: aarch64-apple-darwin
bin: bili-sync-rs
name: bili-sync-rs-Darwin-aarch64.tar.gz
- release_for: Windows-x86_64
os: windows-latest
target: x86_64-pc-windows-msvc
bin: bili-sync-rs.exe
name: bili-sync-rs-Windows-x86_64.zip
steps:
- name: Checkout repo
uses: actions/checkout@v6
with:
fetch-depth: 0
- name: Download Web Build Artifact
uses: actions/download-artifact@v8
with:
name: web-build
path: web/build
- name: Read Toolchain Version
id: read_rust_toolchain
shell: bash
run: |
channel=$(grep '^channel' rust-toolchain.toml | sed 's/.*= *"\(.*\)"/\1/')
echo "value=$channel" >> $GITHUB_OUTPUT
- name: Build binary
uses: houseabsolute/actions-rust-cross@v1
with:
command: build
target: ${{ matrix.platform.target }}
toolchain: ${{ steps.read_rust_toolchain.outputs.value }}
args: "--locked --release"
strip: true
- name: Package as archive
shell: bash
run: |
cd target/${{ matrix.platform.target }}/release
if [[ "${{ matrix.platform.target }}" == "x86_64-pc-windows-msvc" ]]; then
7z a ../../../${{ matrix.platform.name }} ${{ matrix.platform.bin }}
else
tar czvf ../../../${{ matrix.platform.name }} ${{ matrix.platform.bin }}
fi
- name: Upload release artifact
uses: actions/upload-artifact@v6
with:
name: bili-sync-rs-${{ matrix.platform.release_for }}
path: |
${{ github.workspace }}/${{ matrix.platform.name }}
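
The `Read Toolchain Version` step in the workflow above derives the toolchain from `rust-toolchain.toml` with a grep/sed pipeline. A minimal local sketch of that extraction (the TOML contents below are an assumption for illustration; the real file lives at the repo root):

```shell
# Create a sample rust-toolchain.toml (hypothetical contents, for illustration only).
cat > rust-toolchain.toml <<'EOF'
[toolchain]
channel = "1.94.0"
EOF

# Same pipeline as the workflow step: take the `channel` line and
# strip everything except the quoted value.
channel=$(grep '^channel' rust-toolchain.toml | sed 's/.*= *"\(.*\)"/\1/')
echo "$channel"   # prints 1.94.0
```

In CI the value is written to `$GITHUB_OUTPUT` so the cross-compilation action builds with the same pinned toolchain.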

.github/workflows/build-doc.yaml (vendored, new file)

@@ -0,0 +1,41 @@
name: Build Main Docs
on:
push:
branches:
- main
paths:
- "docs/**"
jobs:
doc:
name: Build documentation
runs-on: ubuntu-24.04
defaults:
run:
working-directory: docs
steps:
- name: Checkout repo
uses: actions/checkout@v6
- name: Setup bun
uses: oven-sh/setup-bun@v2
with:
bun-version: latest
- name: Install dependencies
run: bun install --frozen-lockfile
- name: Cache dependencies
uses: actions/cache@v5
with:
path: ~/.bun/install/cache
key: ${{ runner.os }}-bun-${{ hashFiles('docs/bun.lockb') }}
restore-keys: |
${{ runner.os }}-bun-
- name: Build documentation
run: bun run docs:build
- name: Deploy Github Pages
uses: peaceiris/actions-gh-pages@v4
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: docs/.vitepress/dist
force_orphan: true
commit_message: 部署来自 main 的最新文档变更:

(deleted file: the previous Check workflow)

@@ -1,43 +0,0 @@
name: Check
on:
push:
branches:
- main
pull_request:
branches:
- "**"
concurrency:
# Allow only one workflow per any non-`main` branch.
group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}
cancel-in-progress: true
env:
CARGO_TERM_COLOR: always
CARGO_INCREMENTAL: 0
RUST_BACKTRACE: 1
jobs:
tests:
name: Run Clippy and tests
runs-on: ubuntu-latest
steps:
- name: Checkout repo
uses: actions/checkout@v4
- run: rustup toolchain install nightly && rustup default nightly && rustup component add rustfmt clippy
- name: Cache dependencies
uses: swatinem/rust-cache@v2
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
- name: cargo fmt check
run: cargo fmt --check
- name: cargo clippy
run: cargo clippy
- name: cargo test
run: cargo test

.github/workflows/commit-build.yaml (vendored, new file)

@@ -0,0 +1,11 @@
name: Build Main Binary
on:
push:
branches:
- main
jobs:
build-binary:
if: ${{ !startsWith(github.ref, 'refs/tags/') }}
uses: amtoaer/bili-sync/.github/workflows/build-binary.yaml@main

.github/workflows/pr-check.yaml (vendored, new file)

@@ -0,0 +1,68 @@
name: Check
on:
push:
branches:
- main
pull_request:
types: ["opened", "reopened", "synchronize", "ready_for_review"]
concurrency:
# Allow only one workflow per any non-`main` branch.
group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}
cancel-in-progress: true
env:
CARGO_TERM_COLOR: always
CARGO_INCREMENTAL: 0
RUST_BACKTRACE: 1
jobs:
check-backend:
name: Run backend checks
runs-on: ubuntu-24.04
if: ${{ github.event_name == 'push' || !github.event.pull_request.draft }}
steps:
- name: Checkout repo
uses: actions/checkout@v6
- run: rustup install && rustup component add rustfmt --toolchain nightly
- name: Cache dependencies
uses: swatinem/rust-cache@v2
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
- name: cargo fmt check
run: cargo +nightly fmt --check
- name: cargo clippy
run: cargo clippy -- -D warnings
- name: cargo test
run: cargo test
check-frontend:
name: Run frontend checks
runs-on: ubuntu-24.04
if: ${{ github.event_name == 'push' || !github.event.pull_request.draft }}
defaults:
run:
working-directory: web
steps:
- name: Checkout repo
uses: actions/checkout@v6
- name: Setup bun
uses: oven-sh/setup-bun@v2
with:
bun-version: latest
- name: Install dependencies
run: bun install --frozen-lockfile
- name: Cache dependencies
uses: actions/cache@v5
with:
path: ~/.bun/install/cache
key: ${{ runner.os }}-bun-${{ hashFiles('docs/bun.lockb') }}
restore-keys: |
${{ runner.os }}-bun-
- name: Check Frontend
run: bun run lint

.github/workflows/release-build.yaml vendored Normal file

@@ -0,0 +1,79 @@
name: Build Main Binary And Release
on:
push:
tags:
- v*
jobs:
build-binary:
uses: amtoaer/bili-sync/.github/workflows/build-binary.yaml@main
github-release:
name: Create GitHub Release
needs: build-binary
runs-on: ubuntu-24.04
permissions:
contents: write
steps:
- name: Checkout repo
uses: actions/checkout@v6
- name: Download release artifact
uses: actions/download-artifact@v8
with:
merge-multiple: true
- name: Publish GitHub release
uses: softprops/action-gh-release@v2
with:
files: bili-sync-rs*
tag_name: ${{ github.ref_name }}
draft: true
docker-release:
name: Create Docker Image
needs: build-binary
runs-on: ubuntu-24.04
permissions:
contents: write
steps:
- name: Checkout repo
uses: actions/checkout@v6
- name: Download release artifact
uses: actions/download-artifact@v8
with:
merge-multiple: true
- name: Docker Meta
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ secrets.DOCKERHUB_USERNAME }}/bili-sync-rs
tags: |
type=raw,value=latest
type=raw,value=${{ github.ref_name }}
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to DockerHub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build and push Docker image
uses: docker/build-push-action@v5
with:
context: .
file: Dockerfile
platforms: |
linux/amd64
linux/arm64
linux/arm/v7
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha, scope=${{ github.workflow }}
cache-to: type=gha, scope=${{ github.workflow }}
- name: Update DockerHub description
uses: peter-evans/dockerhub-description@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
repository: ${{ secrets.DOCKERHUB_USERNAME }}/bili-sync-rs


@@ -1,129 +0,0 @@
name: Build Binary And Release
on:
push:
tags:
- v*
jobs:
build:
name: Release for ${{ matrix.platform.release_for }}
runs-on: ${{ matrix.platform.os }}
strategy:
matrix:
platform:
- release_for: Linux-x86_64
os: ubuntu-20.04
target: x86_64-unknown-linux-musl
bin: bili-sync-rs
name: bili-sync-rs-Linux-x86_64-musl.tar.gz
- release_for: Linux-aarch64
os: ubuntu-20.04
target: aarch64-unknown-linux-musl
bin: bili-sync-rs
name: bili-sync-rs-Linux-aarch64-musl.tar.gz
- release_for: macOS-x86_64
os: macOS-latest
target: x86_64-apple-darwin
bin: bili-sync-rs
name: bili-sync-rs-Darwin-x86_64.tar.gz
- release_for: macOS-aarch64
os: macOS-latest
target: aarch64-apple-darwin
bin: bili-sync-rs
name: bili-sync-rs-Darwin-aarch64.tar.gz
- release_for: Windows-x86_64
os: windows-latest
target: x86_64-pc-windows-msvc
bin: bili-sync-rs.exe
name: bili-sync-rs-Windows-x86_64.zip
steps:
- name: Checkout repo
uses: actions/checkout@v4
- name: Cache dependencies
uses: Swatinem/rust-cache@v2
- name: Install musl-tools
run: sudo apt-get update --yes && sudo apt-get install --yes musl-tools
if: contains(matrix.platform.target, 'musl')
- name: Build binary
uses: houseabsolute/actions-rust-cross@v0
with:
command: build
target: ${{ matrix.platform.target }}
toolchain: stable
args: "--locked --release"
strip: true
- name: Package as archive
shell: bash
run: |
cp target/${{ matrix.platform.target }}/release/${{ matrix.platform.bin }} ${{ matrix.platform.release_for }}-${{ matrix.platform.bin }}
cd target/${{ matrix.platform.target }}/release
if [[ "${{ matrix.platform.target }}" == "x86_64-pc-windows-msvc" ]]; then
7z a ../../../${{ matrix.platform.name }} ${{ matrix.platform.bin }}
else
tar czvf ../../../${{ matrix.platform.name }} ${{ matrix.platform.bin }}
fi
- name: Upload release artifact
uses: actions/upload-artifact@v4
with:
name: bili-sync-rs-${{ matrix.platform.release_for }}
# contains raw binary and compressed archive
path: |
${{ github.workspace }}/${{ matrix.platform.release_for }}-${{ matrix.platform.bin }}
${{ github.workspace }}/${{ matrix.platform.name }}
release:
name: Create GitHub Release & Docker Image
needs: build
runs-on: ubuntu-20.04
permissions:
contents: write
steps:
- name: Checkout repo
uses: actions/checkout@v4
- name: Download release artifact
uses: actions/download-artifact@v4
with:
merge-multiple: true
- name: Publish GitHub release
uses: softprops/action-gh-release@v2
with:
files: bili-sync-rs*
tag_name: ${{ github.ref_name }}
draft: true
- name: Docker Meta
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ secrets.DOCKERHUB_USERNAME }}/bili-sync-rs
tags: |
type=raw,value=latest
type=raw,value=${{ github.ref_name }}
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to DockerHub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build and push Docker image
uses: docker/build-push-action@v5
with:
context: .
file: Dockerfile
platforms: |
linux/amd64
linux/arm64
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha, scope=${{ github.workflow }}
cache-to: type=gha, scope=${{ github.workflow }}
- name: Update DockerHub description
uses: peter-evans/dockerhub-description@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
repository: ${{ secrets.DOCKERHUB_USERNAME }}/bili-sync-rs

.gitignore vendored

@@ -1,6 +1,7 @@
**/target
auth_data
*.sqlite
*.json
video
*.sqlite*
debug*
node_modules
docs/.vitepress/cache
docs/.vitepress/dist

Cargo.lock (generated, 3746 lines changed): file diff suppressed because it is too large.


@@ -1,54 +1,106 @@
[package]
name = "bili-sync-rs"
version = "2.0.0"
edition = "2021"
[dependencies]
anyhow = { version = "1.0.81", features = ["backtrace"] }
arc-swap = { version = "1.7", features = ["serde"] }
async-stream = "0.3.5"
chrono = { version = "0.4.35", features = ["serde"] }
cookie = "0.18.0"
dirs = "5.0.1"
entity = { path = "entity" }
env_logger = "0.11.3"
filenamify = "0.1.0"
float-ord = "0.3.2"
futures = "0.3.30"
handlebars = "5.1.2"
hex = "0.4.3"
log = "0.4.21"
memchr = "2.5.0"
migration = { path = "migration" }
once_cell = "1.19.0"
prost = "0.12.4"
quick-xml = { version = "0.31.0", features = ["async-tokio"] }
rand = "0.8.5"
regex = "1.10.3"
reqwest = { version = "0.12.4", features = [
"json",
"stream",
"cookies",
"gzip",
"charset",
"http2",
"rustls-tls",
], default-features = false }
rsa = { version = "0.9.6", features = ["sha2"] }
sea-orm = { version = "0.12", features = [
"sqlx-sqlite",
"runtime-tokio-rustls",
"macros",
] }
serde = { version = "1.0.197", features = ["derive"] }
serde_json = "1.0"
strum = { version = "0.26", features = ["derive"] }
thiserror = "1.0.58"
tokio = { version = "1", features = ["full"] }
toml = "0.8.12"
[workspace]
members = [".", "entity", "migration"]
members = ["crates/*"]
default-members = ["crates/bili_sync"]
resolver = "2"
[workspace.package]
version = "2.11.1"
authors = ["amtoaer <amtoaer@gmail.com>"]
license = "MIT"
description = "由 Rust & Tokio 驱动的哔哩哔哩同步工具"
edition = "2024"
publish = false
[workspace.dependencies]
bili_sync_entity = { path = "crates/bili_sync_entity" }
bili_sync_migration = { path = "crates/bili_sync_migration" }
anyhow = { version = "1.0.100", features = ["backtrace"] }
arc-swap = { version = "1.8.0", features = ["serde"] }
async-stream = "0.3.6"
async-tempfile = { version = "0.7.0", features = ["uuid"] }
async-trait = "0.1.89"
axum = { version = "0.8.8", features = ["macros", "ws"] }
base64 = "0.22.1"
built = { version = "0.7.7", features = ["git2", "chrono"] }
chrono = { version = "0.4.42", features = ["serde"] }
clap = { version = "4.5.54", features = ["env", "string"] }
cookie = "0.18.1"
croner = "3.0.1"
dashmap = "6.1.0"
derivative = "2.2.0"
dirs = "6.0.0"
dunce = "1.0.5"
either = "1.15.0"
enum_dispatch = "0.3.13"
float-ord = "0.3.2"
futures = "0.3.31"
git2 = { version = "0.20.3", features = [], default-features = false }
handlebars = "6.4.0"
hex = "0.4.3"
itertools = "0.14.0"
leaky-bucket = "1.1.2"
md5 = "0.8.0"
memchr = "2.7.6"
once_cell = "1.21.3"
parking_lot = "0.12.5"
prost = "0.14.1"
quick-xml = { version = "0.38.4", features = ["async-tokio"] }
rand = "0.9.2"
regex = "1.12.2"
reqwest = { version = "0.13.1", features = [
"query",
"form",
"charset",
"cookies",
"gzip",
"http2",
"json",
"rustls-no-provider",
"stream",
], default-features = false }
rsa = { version = "0.10.0-rc.9", features = ["sha2"] }
rust-embed-for-web = { git = "https://github.com/amtoaer/rust-embed-for-web", tag = "v1.0.0" }
rustls = { version = "0.23.36", default-features = false, features = ["ring"] }
sea-orm = { version = "1.1.19", features = [
"macros",
"runtime-tokio",
"sqlx-sqlite",
"sqlite-use-returning-for-3_35",
] }
sea-orm-migration = { version = "1.1.19", features = [] }
serde = { version = "1.0.228", features = ["derive"] }
serde_json = "1.0.148"
serde_urlencoded = "0.7.1"
strum = { version = "0.27.2", features = ["derive"] }
sysinfo = "0.37.2"
thiserror = "2.0.17"
tokio = { version = "1.49.0", features = ["full"] }
tokio-cron-scheduler = "0.15.1"
tokio-stream = { version = "0.1.18", features = ["sync"] }
tokio-util = { version = "0.7.18", features = ["io", "rt"] }
toml = "0.9.10"
tower = "0.5.2"
tracing = "0.1.44"
tracing-subscriber = { version = "0.3.22", features = ["chrono", "json"] }
ua_generator = { version = "0.5.42", default-features = false }
uuid = { version = "1.19.0", features = ["v4"] }
validator = { version = "0.20.0", features = ["derive"] }
[workspace.metadata.release]
release = false
tag-message = ""
tag-prefix = ""
pre-release-commit-message = "chore: 发布 bili-sync {{version}}"
publish = false
pre-release-replacements = [
{ file = "../../docs/.vitepress/config.mts", search = "\"v[0-9\\.]+\"", replace = "\"v{{version}}\"", exactly = 1 },
{ file = "../../docs/introduction.md", search = " v[0-9\\.]+", replace = " v{{version}}", exactly = 1 },
{ file = "../../web/package.json", search = "\"version\": \"[0-9\\.]+\"", replace = "\"version\": \"{{version}}\"", exactly = 1 },
]
[profile.dev.package."*"]
debug = false
[profile.release]
strip = true


@@ -1,23 +1,22 @@
FROM alpine as base
FROM alpine AS base
ARG TARGETPLATFORM
WORKDIR /app
COPY ./*-bili-sync-rs ./targets/
RUN apk update && apk add --no-cache \
ca-certificates \
tzdata \
ffmpeg \
&& cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime \
&& echo "Asia/Shanghai" > /etc/timezone \
&& apk del tzdata
ffmpeg
COPY ./bili-sync-rs-Linux-*.tar.gz ./targets/
RUN if [ "$TARGETPLATFORM" = "linux/amd64" ]; then \
mv ./targets/Linux-x86_64-bili-sync-rs ./bili-sync-rs; \
tar xzvf ./targets/bili-sync-rs-Linux-x86_64-musl.tar.gz -C ./; \
elif [ "$TARGETPLATFORM" = "linux/arm/v7" ]; then \
tar xzvf ./targets/bili-sync-rs-Linux-armv7-musl.tar.gz -C ./; \
else \
mv ./targets/Linux-aarch64-bili-sync-rs ./bili-sync-rs; \
tar xzvf ./targets/bili-sync-rs-Linux-aarch64-musl.tar.gz -C ./; \
fi
RUN rm -rf ./targets && chmod +x ./bili-sync-rs
@@ -37,4 +36,3 @@ COPY --from=base / /
ENTRYPOINT [ "/app/bili-sync-rs" ]
VOLUME [ "/app/.config/bili-sync" ]


@@ -1,10 +1,24 @@
clean:
rm -rf ./*-bili-sync-rs
rm -rf ./bili-sync-rs-Linux*.tar.gz
build:
build-frontend:
cd ./web && bun run build && cd ..
build: build-frontend
cargo build --target x86_64-unknown-linux-musl --release
build-debug: build-frontend
cargo build --target x86_64-unknown-linux-musl
build-docker: build
cp target/x86_64-unknown-linux-musl/release/bili-sync-rs ./Linux-x86_64-bili-sync-rs
tar czvf ./bili-sync-rs-Linux-x86_64-musl.tar.gz -C ./target/x86_64-unknown-linux-musl/release/ ./bili-sync-rs
docker build . -t bili-sync-rs-local --build-arg="TARGETPLATFORM=linux/amd64"
just clean
just clean
build-docker-debug: build-debug
tar czvf ./bili-sync-rs-Linux-x86_64-musl.tar.gz -C ./target/x86_64-unknown-linux-musl/debug/ ./bili-sync-rs
docker build . -t bili-sync-rs-local --build-arg="TARGETPLATFORM=linux/amd64"
just clean
debug: build-frontend
cargo run

License Normal file

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2024 ᴀᴍᴛᴏᴀᴇʀ
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

README.md

@@ -3,173 +3,41 @@
## Introduction
> [!NOTE]
> This is the documentation for v2.x; for the v1.x documentation, see [here](https://github.com/amtoaer/bili-sync/tree/v1.x).
A BILIBILI favorites-sync tool written for NAS users; the result can be browsed with media-library tools such as EMBY.
Supports displaying video covers, titles, date added, tags, pages, and more.
> [Read the docs](https://bili-sync.amto.cc/) [Join the Telegram group](https://t.me/+nuYrt8q6uEo4MWI1)
bili-sync is a Bilibili sync tool built for NAS users, powered by Rust & Tokio.
## Demo
**Note: since single-page and multi-page videos may coexist, select "Mixed content" as the media-library content type.**
### Overview
![Overview](./assets/overview.png)
### Detail
![Detail](./assets/detail.png)
### Management page
![Management page](./assets/webui.webp)
### Media library overview
![Media library overview](./assets/overview.webp)
### Media library detail
![Media library detail](./assets/detail.webp)
### Playback (using infuse)
![Playback](./assets/play.png)
![Playback](./assets/play.webp)
### File layout
![Files](./assets/dir.png)
![Files](./assets/dir.webp)
## Configuration
> [!NOTE]
> In a Docker environment, `~` expands to `/app`.
## Features & roadmap
By default, the program stores its configuration at `~/.config/bili-sync/config.toml` and its database at `~/.config/bili-sync/data.sqlite`; if they do not exist, they are created and filled with defaults.
- [x] Authenticates with user-provided credentials and refreshes them automatically when necessary
- [x] Supports downloading favorites as well as video lists/collections
- [x] Automatically selects the best video and audio streams within the configured range and merges them with FFmpeg after download
- [x] Uses Tokio and Reqwest to download videos and video pages asynchronously and concurrently
- [x] Uses media-server-compatible file naming so the output can be imported as a media library in one step
- [x] Retries failures from the current round in the next round; entries that fail too often are dropped automatically
- [x] Stores media information in a database to avoid repeated requests for the same video
- [x] Logs its activity and aborts automatically when rate limiting is detected, waiting for the next round
- [x] Provides binaries for multiple platforms and ready-to-use Docker images for Linux
- [x] Supports automatic scanning and downloading of "Watch Later" videos
- [x] Supports automatic scanning and downloading of an uploader's submissions
- [x] Supports limiting task parallelism and API request rates
- [x] Supports chunked parallel download of a single file
- [x] Supports configuration through a Web UI for viewing and managing videos and video sources
The configuration file is validated on load; the default configuration does not pass validation, so the program exits with an error.
You can run the program as downloaded, read the error message, adjust the default configuration accordingly, and it will then run normally.
For the `credential` section, see the [credential acquisition guide](https://nemo2011.github.io/bilibili-api/#/get-credential).
`video_name` and `page_name` in the configuration support templates; see the example for the replacement syntax. Template placeholders are substituted dynamically at run time.
video_name supports bvid (video ID), title (video title), upper_name (uploader name), and upper_mid (uploader ID);
page_name supports all of video_name's parameters plus ptitle (page title) and pid (page number).
Under each favorite_list download path, the program creates folders as follows:
1. Single-page video:
```bash
├── {video_name}
│   ├── {page_name}.mp4
│   ├── {page_name}.nfo
│   └── {page_name}-poster.jpg
```
2. Multi-page video:
```bash
├── {video_name}
│   ├── poster.jpg
│   ├── Season 1
│   │   ├── {page_name} - S01E01.mp4
│   │   ├── {page_name} - S01E01.nfo
│   │   ├── {page_name} - S01E01-thumb.jpg
│   │   ├── {page_name} - S01E02.mp4
│   │   ├── {page_name} - S01E02.nfo
│   │   └── {page_name} - S01E02-thumb.jpg
│   └── tvshow.nfo
```
For the possible values of filter_option, see [analyzer.rs](https://github.com/amtoaer/bili-sync/blob/main/src/bilibili/analyzer.rs).
For the meaning of the danmaku_option fields, see [danmaku/mod.rs](https://github.com/amtoaer/bili-sync/blob/main/src/bilibili/danmaku/canvas/mod.rs) and [the upstream project's notes](https://github.com/gwy15/danmu2ass?tab=readme-ov-file#%E5%91%BD%E4%BB%A4%E8%A1%8C).
## Example configuration file
```toml
# Name of the folder each video is stored in
video_name = "{{title}}"
# Naming of each video page file
page_name = "{{bvid}}"
# Interval between scan runs (seconds)
interval = 1200
# Where Emby actor metadata is stored
upper_path = "/home/amtoaer/.config/nas/emby/metadata/people/"
[credential]
# Bilibili web credentials; required to download high-definition videos
sessdata = ""
bili_jct = ""
buvid3 = ""
dedeuserid = ""
ac_time_value = ""
[filter_option]
# Video/audio stream filter options; the program uses the highest-quality stream within the range
# A range that is too narrow may leave no matching stream; prefer adjusting only the quality ceiling and codec priority
video_max_quality = "Quality8k"
video_min_quality = "Quality360p"
audio_max_quality = "QualityHiRES"
audio_min_quality = "Quality64k"
codecs = [
"AV1",
"HEV",
"AVC",
]
no_dolby_video = false
no_dolby_audio = false
no_hdr = false
no_hires = false
[danmaku_option]
# Danmaku rendering options: font, font size, opacity, on-screen duration, bold, etc.
duration = 12.0
font = "黑体"
font_size = 25
width_ratio = 1.2
horizontal_gap = 20.0
lane_size = 32
float_percentage = 0.5
bottom_percentage = 0.3
opacity = 76
bold = true
outline = 0.8
time_offset = 0.0
[favorite_list]
# favorite list ID = storage location
52642258 = "/home/amtoaer/HDDs/Videos/Bilibilis/混剪"
```
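The `{{title}}` / `{{bvid}}` placeholders above use handlebars-style syntax (the project depends on the `handlebars` crate). A dependency-free sketch of the substitution behavior, with a naive hand-rolled replacer standing in for the real template engine (illustration only, not bili-sync's actual renderer):

```rust
use std::collections::HashMap;

/// Naive `{{key}}` substitution standing in for the handlebars engine;
/// illustration only, not bili-sync's actual renderer.
fn render(template: &str, vars: &HashMap<&str, &str>) -> String {
    let mut out = template.to_string();
    for (key, value) in vars {
        // `{{{{` and `}}}}` are format! escapes producing literal `{{` / `}}`.
        out = out.replace(&format!("{{{{{key}}}}}"), value);
    }
    out
}

fn main() {
    let vars = HashMap::from([
        ("bvid", "BV1xx411c7mD"),
        ("title", "示例视频"),
        ("ptitle", "P1"),
    ]);
    // video_name = "{{title}}" names the video folder after its title.
    assert_eq!(render("{{title}}", &vars), "示例视频");
    // page_name may mix several placeholders.
    assert_eq!(render("{{bvid}} - {{ptitle}}", &vars), "BV1xx411c7mD - P1");
    println!("ok");
}
```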
## Docker Compose example
The project provides Docker images for `Linux/amd64` and `Linux/arm64`.
The Docker image contains the platform's executable (at `/app/bili-sync-rs`) with FFmpeg preinstalled; usage is otherwise identical to the plain version. See [the Dockerfile used to build the image](./Dockerfile).
Here is an example Docker Compose file:
```yaml
services:
bili-sync-rs:
image: amtoaer/bili-sync-rs:v2.0.0
restart: unless-stopped
network_mode: bridge
tty: true # enable only when the log terminal supports colored output, otherwise the logs may contain garbled characters
hostname: bili-sync-rs
container_name: bili-sync-rs
volumes:
- /home/amtoaer/.config/nas/bili-sync-rs:/app/.config/bili-sync
# plus any other required mounts; make sure they match the bili-sync-rs configuration
# ...
logging:
driver: "local"
```
## Roadmap
- [x] Credential authentication
- [x] Stream selection
- [x] Video download
- [x] Concurrent downloads
- [x] Running as a daemon
- [x] Generating nfo and poster files so episodes can be imported into Emby individually
- [x] Paging through favorites to download all historical videos
- [x] Database integration to check ahead and download only what is needed
- [x] Danmaku download
- [x] Better error handling
- [x] Better logging
- [x] Workaround for rate limiting caused by requesting too quickly
- [x] Simple, ready-to-use packaging (e.g. Docker)
- [ ] Support downloading uploader collections
## References & acknowledgements
@@ -177,4 +45,4 @@ services:
+ [bilibili-API-collect](https://github.com/SocialSisterYi/bilibili-API-collect) third-party documentation of Bilibili's APIs
+ [bilibili-api](https://github.com/Nemo2011/bilibili-api) reference implementation of calling the APIs from Python
+ [danmu2ass](https://github.com/gwy15/danmu2ass) the project this tool's danmaku download feature was adapted from
+ [danmu2ass](https://github.com/gwy15/danmu2ass) the project this tool's danmaku download feature was adapted from

Binary assets replaced (PNG screenshots swapped for smaller WebP versions):
- assets/detail.png removed (6.1 MiB); assets/detail.webp added (342 KiB)
- assets/dir.png removed (1015 KiB); assets/dir.webp added (130 KiB)
- assets/overview.png removed (4.6 MiB); assets/overview.webp added (270 KiB)
- assets/play.png removed (2.4 MiB); assets/play.webp added (216 KiB)
- assets/webui.webp added (138 KiB)


@@ -0,0 +1,75 @@
[package]
name = "bili_sync"
version = { workspace = true }
edition = { workspace = true }
authors = { workspace = true }
license = { workspace = true }
description = { workspace = true }
publish = { workspace = true }
readme = "../../README.md"
build = "build.rs"
[dependencies]
anyhow = { workspace = true }
arc-swap = { workspace = true }
async-stream = { workspace = true }
async-tempfile = { workspace = true }
axum = { workspace = true }
base64 = { workspace = true }
bili_sync_entity = { workspace = true }
bili_sync_migration = { workspace = true }
chrono = { workspace = true }
clap = { workspace = true }
cookie = { workspace = true }
croner = { workspace = true }
dashmap = { workspace = true }
dirs = { workspace = true }
dunce = { workspace = true }
enum_dispatch = { workspace = true }
float-ord = { workspace = true }
futures = { workspace = true }
handlebars = { workspace = true }
hex = { workspace = true }
itertools = { workspace = true }
leaky-bucket = { workspace = true }
md5 = { workspace = true }
memchr = { workspace = true }
once_cell = { workspace = true }
parking_lot = { workspace = true }
prost = { workspace = true }
quick-xml = { workspace = true }
rand = { workspace = true }
regex = { workspace = true }
reqwest = { workspace = true }
rsa = { workspace = true }
rust-embed-for-web = { workspace = true }
rustls = { workspace = true }
sea-orm = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
serde_urlencoded = { workspace = true }
strum = { workspace = true }
sysinfo = { workspace = true }
thiserror = { workspace = true }
tokio = { workspace = true }
tokio-cron-scheduler = { workspace = true }
tokio-stream = { workspace = true }
tokio-util = { workspace = true }
toml = { workspace = true }
tower = { workspace = true }
tracing = { workspace = true }
tracing-subscriber = { workspace = true }
ua_generator = { workspace = true }
uuid = { workspace = true }
validator = { workspace = true }
[build-dependencies]
built = { workspace = true }
git2 = { workspace = true }
[package.metadata.release]
release = true
[[bin]]
name = "bili-sync-rs"
path = "src/main.rs"


@@ -0,0 +1,3 @@
fn main() {
built::write_built_file().expect("Failed to acquire build-time information");
}


@@ -0,0 +1,118 @@
use std::borrow::Cow;
use std::path::Path;
use std::pin::Pin;
use anyhow::{Result, ensure};
use bili_sync_entity::rule::Rule;
use bili_sync_entity::*;
use chrono::Utc;
use futures::Stream;
use sea_orm::ActiveValue::Set;
use sea_orm::entity::prelude::*;
use sea_orm::sea_query::SimpleExpr;
use sea_orm::{DatabaseConnection, Unchanged};
use crate::adapter::{_ActiveModel, VideoSource, VideoSourceEnum};
use crate::bilibili::{BiliClient, Collection, CollectionItem, CollectionType, Credential, VideoInfo};
impl VideoSource for collection::Model {
fn display_name(&self) -> Cow<'static, str> {
format!("{}{}", CollectionType::from_expected(self.r#type), self.name).into()
}
fn filter_expr(&self) -> SimpleExpr {
video::Column::CollectionId.eq(self.id)
}
fn set_relation_id(&self, video_model: &mut video::ActiveModel) {
video_model.collection_id = Set(Some(self.id));
}
fn path(&self) -> &Path {
Path::new(self.path.as_str())
}
fn get_latest_row_at(&self) -> DateTime {
self.latest_row_at
}
fn update_latest_row_at(&self, datetime: DateTime) -> _ActiveModel {
_ActiveModel::Collection(collection::ActiveModel {
id: Unchanged(self.id),
latest_row_at: Set(datetime),
..Default::default()
})
}
fn should_take(
&self,
_idx: usize,
_release_datetime: &chrono::DateTime<Utc>,
_latest_row_at: &chrono::DateTime<Utc>,
) -> bool {
// The items a collection (video collection / video list) returns do not appear to be strictly sorted by time,
// and different collections sort differently. To keep the program correct, a collection never breaks early
// based on time; it is always fetched in full.
true
}
fn should_filter(
&self,
_idx: usize,
video_info: Result<VideoInfo, anyhow::Error>,
latest_row_at: &chrono::DateTime<Utc>,
) -> Option<VideoInfo> {
// Since a collection's videos have no fixed time order, should_take cannot cut the fetch short, so should_filter has to do this extra filtering
if let Ok(video_info) = video_info
&& video_info.release_datetime() > latest_row_at
{
return Some(video_info);
}
None
}
fn rule(&self) -> &Option<Rule> {
&self.rule
}
async fn refresh<'a>(
self,
bili_client: &'a BiliClient,
credential: &'a Credential,
connection: &'a DatabaseConnection,
) -> Result<(
VideoSourceEnum,
Pin<Box<dyn Stream<Item = Result<VideoInfo>> + Send + 'a>>,
)> {
let collection = Collection::new(
bili_client,
CollectionItem {
sid: self.s_id.to_string(),
mid: self.m_id.to_string(),
collection_type: CollectionType::from_expected(self.r#type),
},
credential,
);
let collection_info = collection.get_info().await?;
ensure!(
collection_info.sid == self.s_id
&& collection_info.mid == self.m_id
&& collection_info.collection_type == CollectionType::from_expected(self.r#type),
"collection info mismatch: {:?} != {:?}",
collection_info,
collection.collection
);
let updated_model = collection::ActiveModel {
id: Unchanged(self.id),
name: Set(collection_info.name),
..Default::default()
}
.update(connection)
.await?;
Ok((updated_model.into(), Box::pin(collection.into_video_stream())))
}
async fn delete_from_db(self, conn: &impl ConnectionTrait) -> Result<()> {
self.delete(conn).await?;
Ok(())
}
}
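Because a collection is always fetched in full, the dedup burden falls entirely on `should_filter`'s timestamp comparison. A simplified sketch of that keep/skip decision, with plain integers standing in for `chrono` timestamps (an assumption for illustration):

```rust
/// Simplified analogue of collection's should_filter: keep only items strictly
/// newer than the source's recorded `latest_row_at`. Integers stand in for
/// chrono::DateTime here; illustration only.
fn keep_item(release_ts: i64, latest_row_at: i64) -> bool {
    release_ts > latest_row_at
}

fn main() {
    let latest_row_at = 100;
    let fetched = [120, 90, 130, 100]; // unordered, as collections may be
    let new_items: Vec<i64> = fetched
        .into_iter()
        .filter(|&ts| keep_item(ts, latest_row_at))
        .collect();
    // Only strictly newer items survive; order within the fetch is preserved.
    assert_eq!(new_items, vec![120, 130]);
    println!("ok");
}
```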


@@ -0,0 +1,81 @@
use std::borrow::Cow;
use std::path::Path;
use std::pin::Pin;
use anyhow::{Result, ensure};
use bili_sync_entity::rule::Rule;
use bili_sync_entity::*;
use futures::Stream;
use sea_orm::ActiveValue::Set;
use sea_orm::entity::prelude::*;
use sea_orm::sea_query::SimpleExpr;
use sea_orm::{DatabaseConnection, Unchanged};
use crate::adapter::{_ActiveModel, VideoSource, VideoSourceEnum};
use crate::bilibili::{BiliClient, Credential, FavoriteList, VideoInfo};
impl VideoSource for favorite::Model {
fn display_name(&self) -> Cow<'static, str> {
format!("收藏夹「{}」", self.name).into()
}
fn filter_expr(&self) -> SimpleExpr {
video::Column::FavoriteId.eq(self.id)
}
fn set_relation_id(&self, video_model: &mut video::ActiveModel) {
video_model.favorite_id = Set(Some(self.id));
}
fn path(&self) -> &Path {
Path::new(self.path.as_str())
}
fn get_latest_row_at(&self) -> DateTime {
self.latest_row_at
}
fn update_latest_row_at(&self, datetime: DateTime) -> _ActiveModel {
_ActiveModel::Favorite(favorite::ActiveModel {
id: Unchanged(self.id),
latest_row_at: Set(datetime),
..Default::default()
})
}
fn rule(&self) -> &Option<Rule> {
&self.rule
}
async fn refresh<'a>(
self,
bili_client: &'a BiliClient,
credential: &'a Credential,
connection: &'a DatabaseConnection,
) -> Result<(
VideoSourceEnum,
Pin<Box<dyn Stream<Item = Result<VideoInfo>> + Send + 'a>>,
)> {
let favorite = FavoriteList::new(bili_client, self.f_id.to_string(), credential);
let favorite_info = favorite.get_info().await?;
ensure!(
favorite_info.id == self.f_id,
"favorite id mismatch: {} != {}",
favorite_info.id,
self.f_id
);
let updated_model = favorite::ActiveModel {
id: Unchanged(self.id),
name: Set(favorite_info.title),
..Default::default()
}
.update(connection)
.await?;
Ok((updated_model.into(), Box::pin(favorite.into_video_stream())))
}
async fn delete_from_db(self, conn: &impl ConnectionTrait) -> Result<()> {
self.delete(conn).await?;
Ok(())
}
}


@@ -0,0 +1,158 @@
mod collection;
mod favorite;
mod submission;
mod watch_later;
use std::borrow::Cow;
use std::path::Path;
use std::pin::Pin;
use anyhow::{Context, Result};
use chrono::Utc;
use enum_dispatch::enum_dispatch;
use futures::Stream;
use sea_orm::ActiveValue::Set;
use sea_orm::DatabaseConnection;
use sea_orm::entity::prelude::*;
use sea_orm::sea_query::SimpleExpr;
#[rustfmt::skip]
use bili_sync_entity::collection::Model as Collection;
use bili_sync_entity::favorite::Model as Favorite;
use bili_sync_entity::rule::Rule;
use bili_sync_entity::submission::Model as Submission;
use bili_sync_entity::watch_later::Model as WatchLater;
use crate::bilibili::{BiliClient, Credential, VideoInfo};
#[enum_dispatch]
pub enum VideoSourceEnum {
Favorite,
Collection,
Submission,
WatchLater,
}
#[enum_dispatch(VideoSourceEnum)]
pub trait VideoSource {
/// Returns the display name of this video source
fn display_name(&self) -> Cow<'static, str>;
/// Returns the filter condition selecting the videos belonging to this source
fn filter_expr(&self) -> SimpleExpr;
// Sets this source's relation id on video_model
fn set_relation_id(&self, video_model: &mut bili_sync_entity::video::ActiveModel);
// Returns the path this source's videos are saved under
fn path(&self) -> &Path;
/// Returns the latest timestamp recorded in this source's model
fn get_latest_row_at(&self) -> DateTime;
/// Updates the latest timestamp recorded in this source's model. This returns the ActiveModel that needs updating; the caller then invokes save to persist it.
/// Each VideoSource returns a different concrete type, and impl Trait cannot be used here because VideoSource must stay object safe.
/// Box<dyn ActiveModelTrait> fails too, since ActiveModelTrait itself is not object safe, so a hand-written enum provides static dispatch instead.
fn update_latest_row_at(&self, datetime: DateTime) -> _ActiveModel;
// Decides whether video fetching should continue
fn should_take(
&self,
_idx: usize,
release_datetime: &chrono::DateTime<Utc>,
latest_row_at: &chrono::DateTime<Utc>,
) -> bool {
release_datetime > latest_row_at
}
fn should_filter(
&self,
_idx: usize,
video_info: Result<VideoInfo, anyhow::Error>,
_latest_row_at: &chrono::DateTime<Utc>,
) -> Option<VideoInfo> {
// Videos are fetched in time order; should_take has already taken every video that needs processing, so should_filter needs no extra handling
video_info.ok()
}
fn rule(&self) -> &Option<Rule>;
fn log_refresh_video_start(&self) {
info!("开始扫描{}..", self.display_name());
}
fn log_refresh_video_end(&self, count: usize) {
info!("扫描{}完成,获取到 {} 条新视频", self.display_name(), count);
}
fn log_fetch_video_start(&self) {
info!("开始填充{}视频详情..", self.display_name());
}
fn log_fetch_video_end(&self) {
info!("填充{}视频详情完成", self.display_name());
}
fn log_download_video_start(&self) {
info!("开始下载{}视频..", self.display_name());
}
fn log_download_video_end(&self) {
info!("下载{}视频完成", self.display_name());
}
async fn refresh<'a>(
self,
bili_client: &'a BiliClient,
credential: &'a Credential,
connection: &'a DatabaseConnection,
) -> Result<(
VideoSourceEnum,
Pin<Box<dyn Stream<Item = Result<VideoInfo>> + Send + 'a>>,
)>;
async fn create_dir_all(&self) -> Result<()> {
let video_source_path = self.path();
tokio::fs::create_dir_all(video_source_path).await.with_context(|| {
format!(
"failed to create video source directory {}",
video_source_path.display()
)
})?;
Ok(())
}
async fn delete_from_db(self, conn: &impl ConnectionTrait) -> Result<()>;
}
pub enum _ActiveModel {
Favorite(bili_sync_entity::favorite::ActiveModel),
Collection(bili_sync_entity::collection::ActiveModel),
Submission(bili_sync_entity::submission::ActiveModel),
WatchLater(bili_sync_entity::watch_later::ActiveModel),
}
impl _ActiveModel {
pub async fn save(self, connection: &DatabaseConnection) -> Result<()> {
match self {
_ActiveModel::Favorite(model) => {
model.save(connection).await?;
}
_ActiveModel::Collection(model) => {
model.save(connection).await?;
}
_ActiveModel::Submission(model) => {
model.save(connection).await?;
}
_ActiveModel::WatchLater(mut model) => {
if model.id.is_not_set() {
model.id = Set(1);
model.insert(connection).await?;
} else {
model.save(connection).await?;
}
}
}
Ok(())
}
}
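The `_ActiveModel` enum above is the hand-rolled static dispatch the trait comment describes: `ActiveModelTrait` is not object safe, so instead of boxing a trait object, each variant wraps a concrete model and `save` matches on the variant. A stripped-down sketch of the same pattern (hypothetical stand-in types, not the sea-orm ones):

```rust
// Stand-ins for the concrete ActiveModel types; hypothetical, not sea-orm.
struct FavoriteModel(&'static str);
struct CollectionModel(&'static str);

/// Enum-based static dispatch over types that cannot share a trait object.
enum AnyModel {
    Favorite(FavoriteModel),
    Collection(CollectionModel),
}

impl AnyModel {
    fn save(self) -> String {
        // Each arm calls the concrete type's own behavior; no vtable involved.
        match self {
            AnyModel::Favorite(m) => format!("saved favorite {}", m.0),
            AnyModel::Collection(m) => format!("saved collection {}", m.0),
        }
    }
}

fn main() {
    assert_eq!(AnyModel::Favorite(FavoriteModel("f1")).save(), "saved favorite f1");
    assert_eq!(AnyModel::Collection(CollectionModel("c1")).save(), "saved collection c1");
    println!("ok");
}
```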


@@ -0,0 +1,122 @@
use std::path::Path;
use std::pin::Pin;
use anyhow::{Result, ensure};
use bili_sync_entity::rule::Rule;
use bili_sync_entity::*;
use futures::Stream;
use sea_orm::ActiveValue::Set;
use sea_orm::entity::prelude::*;
use sea_orm::sea_query::SimpleExpr;
use sea_orm::{DatabaseConnection, Unchanged};
use crate::adapter::{_ActiveModel, VideoSource, VideoSourceEnum};
use crate::bilibili::{BiliClient, Credential, Dynamic, Submission, VideoInfo};
impl VideoSource for submission::Model {
fn display_name(&self) -> std::borrow::Cow<'static, str> {
format!("「{}」投稿", self.upper_name).into()
}
fn filter_expr(&self) -> SimpleExpr {
video::Column::SubmissionId.eq(self.id)
}
fn set_relation_id(&self, video_model: &mut video::ActiveModel) {
video_model.submission_id = Set(Some(self.id));
}
fn path(&self) -> &Path {
Path::new(self.path.as_str())
}
fn get_latest_row_at(&self) -> DateTime {
self.latest_row_at
}
fn update_latest_row_at(&self, datetime: DateTime) -> _ActiveModel {
_ActiveModel::Submission(submission::ActiveModel {
id: Unchanged(self.id),
latest_row_at: Set(datetime),
..Default::default()
})
}
fn should_take(
&self,
idx: usize,
release_datetime: &chrono::DateTime<chrono::Utc>,
latest_row_at: &chrono::DateTime<chrono::Utc>,
) -> bool {
// When the dynamic API is used, the user may have pinned a very old video
// to the top of their dynamic feed. In that case fetching must keep going
// instead of stopping just because the first entry fails the cutoff;
// the non-pinned entries after it are sorted newest-first as usual.
if idx == 0 && self.use_dynamic_api {
return true;
}
release_datetime > latest_row_at
}
fn should_filter(
&self,
idx: usize,
video_info: Result<VideoInfo, anyhow::Error>,
latest_row_at: &chrono::DateTime<chrono::Utc>,
) -> Option<VideoInfo> {
if idx == 0 && self.use_dynamic_api {
// Likewise, the first entry from the dynamic API may be a pinned old video,
// so it gets its own filter. Skipping it would not break correctness, since
// a later conflicting insert is ignored anyway; this is purely a performance
// measure to avoid unnecessary database work.
if let Ok(video_info) = video_info
&& video_info.release_datetime() > latest_row_at
{
return Some(video_info);
}
None
} else {
video_info.ok()
}
}
fn rule(&self) -> &Option<Rule> {
&self.rule
}
async fn refresh<'a>(
self,
bili_client: &'a BiliClient,
credential: &'a Credential,
connection: &'a DatabaseConnection,
) -> Result<(
VideoSourceEnum,
Pin<Box<dyn Stream<Item = Result<VideoInfo>> + Send + 'a>>,
)> {
let submission = Submission::new(bili_client, self.upper_id.to_string(), credential);
let upper = submission.get_info().await?;
ensure!(
upper.mid == submission.upper_id,
"submission upper id mismatch: {} != {}",
upper.mid,
submission.upper_id
);
let updated_model = submission::ActiveModel {
id: Unchanged(self.id),
upper_name: Set(upper.name),
..Default::default()
}
.update(connection)
.await?;
let video_stream = if self.use_dynamic_api {
// dyn must be written out explicitly; otherwise Rust infers impl Trait and treats the if/else arms as having mismatched types
Box::pin(Dynamic::from(submission).into_video_stream()) as Pin<Box<dyn Stream<Item = _> + Send + 'a>>
} else {
Box::pin(submission.into_video_stream())
};
Ok((updated_model.into(), video_stream))
}
async fn delete_from_db(self, conn: &impl ConnectionTrait) -> Result<()> {
self.delete(conn).await?;
Ok(())
}
}
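The `should_take` rule above can be restated as a minimal, self-contained sketch. The timestamps are plain i64 seconds here instead of chrono DateTimes; the logic is otherwise the same: with the dynamic API, the first entry may be a pinned old video, so it must never stop the paging early.

```rust
// Simplified restatement of submission::Model::should_take.
fn should_take(idx: usize, use_dynamic_api: bool, release_ts: i64, latest_row_ts: i64) -> bool {
    if idx == 0 && use_dynamic_api {
        return true; // possibly pinned entry: always take, keep paging
    }
    release_ts > latest_row_ts // normal newest-first cutoff
}

fn main() {
    // A pinned old video at index 0 is still taken under the dynamic API.
    assert!(should_take(0, true, 100, 200));
    // The same old video at index 1 stops the scan.
    assert!(!should_take(1, true, 100, 200));
    // Without the dynamic API, index 0 follows the normal cutoff.
    assert!(!should_take(0, false, 100, 200));
}
```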


@@ -0,0 +1,66 @@
use std::path::Path;
use std::pin::Pin;
use anyhow::Result;
use bili_sync_entity::rule::Rule;
use bili_sync_entity::*;
use futures::Stream;
use sea_orm::ActiveValue::Set;
use sea_orm::entity::prelude::*;
use sea_orm::sea_query::SimpleExpr;
use sea_orm::{DatabaseConnection, Unchanged};
use crate::adapter::{_ActiveModel, VideoSource, VideoSourceEnum};
use crate::bilibili::{BiliClient, Credential, VideoInfo, WatchLater};
impl VideoSource for watch_later::Model {
fn display_name(&self) -> std::borrow::Cow<'static, str> {
"稍后再看".into()
}
fn filter_expr(&self) -> SimpleExpr {
video::Column::WatchLaterId.eq(self.id)
}
fn set_relation_id(&self, video_model: &mut video::ActiveModel) {
video_model.watch_later_id = Set(Some(self.id));
}
fn path(&self) -> &Path {
Path::new(self.path.as_str())
}
fn get_latest_row_at(&self) -> DateTime {
self.latest_row_at
}
fn update_latest_row_at(&self, datetime: DateTime) -> _ActiveModel {
_ActiveModel::WatchLater(watch_later::ActiveModel {
id: Unchanged(self.id),
latest_row_at: Set(datetime),
..Default::default()
})
}
fn rule(&self) -> &Option<Rule> {
&self.rule
}
async fn refresh<'a>(
self,
bili_client: &'a BiliClient,
credential: &'a Credential,
_connection: &'a DatabaseConnection,
) -> Result<(
VideoSourceEnum,
Pin<Box<dyn Stream<Item = Result<VideoInfo>> + Send + 'a>>,
)> {
let watch_later = WatchLater::new(bili_client, credential);
Ok((self.into(), Box::pin(watch_later.into_video_stream())))
}
async fn delete_from_db(self, conn: &impl ConnectionTrait) -> Result<()> {
self.delete(conn).await?;
Ok(())
}
}


@@ -0,0 +1,9 @@
use thiserror::Error;
#[derive(Error, Debug)]
pub enum InnerApiError {
#[error("Primary key not found: {0}")]
NotFound(i32),
#[error("Bad request: {0}")]
BadRequest(String),
}


@@ -0,0 +1,149 @@
use std::borrow::Borrow;
use bili_sync_entity::video;
use bili_sync_migration::SimpleExpr;
use itertools::Itertools;
use sea_orm::{ColumnTrait, Condition, ConnectionTrait, DatabaseTransaction};
use crate::api::request::{StatusFilter, ValidationFilter};
use crate::api::response::{PageInfo, SimplePageInfo, SimpleVideoInfo, VideoInfo};
use crate::utils::status::VideoStatus;
impl StatusFilter {
pub fn to_video_query(&self) -> Condition {
let query_builder = VideoStatus::query_builder();
match self {
Self::Failed => query_builder.failed(),
Self::Succeeded => query_builder.succeeded(),
Self::Waiting => query_builder.waiting(),
}
}
}
impl ValidationFilter {
pub fn to_video_query(&self) -> SimpleExpr {
match self {
ValidationFilter::Invalid => video::Column::Valid.eq(false),
ValidationFilter::Skipped => video::Column::Valid
.eq(true)
.and(video::Column::ShouldDownload.eq(false)),
ValidationFilter::Normal => video::Column::Valid
.eq(true)
.and(video::Column::ShouldDownload.eq(true)),
}
}
}
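The three `ValidationFilter` arms above partition videos by the pair (valid, should_download). A plain-Rust restatement of that partition, without SeaORM:

```rust
// Mirrors the three SQL expressions built by ValidationFilter::to_video_query.
#[derive(Debug, PartialEq)]
enum Validation {
    Invalid, // valid == false
    Skipped, // valid == true, should_download == false
    Normal,  // valid == true, should_download == true
}

fn classify(valid: bool, should_download: bool) -> Validation {
    match (valid, should_download) {
        (false, _) => Validation::Invalid,
        (true, false) => Validation::Skipped,
        (true, true) => Validation::Normal,
    }
}

fn main() {
    assert_eq!(classify(false, true), Validation::Invalid);
    assert_eq!(classify(true, false), Validation::Skipped);
    assert_eq!(classify(true, true), Validation::Normal);
}
```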
pub trait VideoRecord {
fn as_id_status_tuple(&self) -> (i32, u32);
}
pub trait PageRecord {
fn as_id_status_tuple(&self) -> (i32, u32);
}
impl VideoRecord for VideoInfo {
fn as_id_status_tuple(&self) -> (i32, u32) {
(self.id, self.download_status)
}
}
impl VideoRecord for SimpleVideoInfo {
fn as_id_status_tuple(&self) -> (i32, u32) {
(self.id, self.download_status)
}
}
impl PageRecord for PageInfo {
fn as_id_status_tuple(&self) -> (i32, u32) {
(self.id, self.download_status)
}
}
impl PageRecord for SimplePageInfo {
fn as_id_status_tuple(&self) -> (i32, u32) {
(self.id, self.download_status)
}
}
pub async fn update_video_download_status<T>(
txn: &DatabaseTransaction,
videos: &[impl Borrow<T>],
batch_size: Option<usize>,
) -> Result<(), sea_orm::DbErr>
where
T: VideoRecord,
{
if videos.is_empty() {
return Ok(());
}
if let Some(size) = batch_size {
for chunk in videos.chunks(size) {
execute_video_update_batch(txn, chunk.iter().map(|v| v.borrow().as_id_status_tuple())).await?;
}
} else {
execute_video_update_batch(txn, videos.iter().map(|v| v.borrow().as_id_status_tuple())).await?;
}
Ok(())
}
pub async fn update_page_download_status<T>(
txn: &DatabaseTransaction,
pages: &[impl Borrow<T>],
batch_size: Option<usize>,
) -> Result<(), sea_orm::DbErr>
where
T: PageRecord,
{
if pages.is_empty() {
return Ok(());
}
if let Some(size) = batch_size {
for chunk in pages.chunks(size) {
execute_page_update_batch(txn, chunk.iter().map(|v| v.borrow().as_id_status_tuple())).await?;
}
} else {
execute_page_update_batch(txn, pages.iter().map(|v| v.borrow().as_id_status_tuple())).await?;
}
Ok(())
}
async fn execute_video_update_batch(
txn: &DatabaseTransaction,
videos: impl Iterator<Item = (i32, u32)>,
) -> Result<(), sea_orm::DbErr> {
let values = videos.map(|v| format!("({}, {})", v.0, v.1)).join(", ");
if values.is_empty() {
return Ok(());
}
let sql = format!(
"WITH tempdata(id, download_status) AS (VALUES {}) \
UPDATE video \
SET download_status = tempdata.download_status \
FROM tempdata \
WHERE video.id = tempdata.id",
values
);
txn.execute_unprepared(&sql).await?;
Ok(())
}
async fn execute_page_update_batch(
txn: &DatabaseTransaction,
pages: impl Iterator<Item = (i32, u32)>,
) -> Result<(), sea_orm::DbErr> {
let values = pages.map(|p| format!("({}, {})", p.0, p.1)).join(", ");
if values.is_empty() {
return Ok(());
}
let sql = format!(
"WITH tempdata(id, download_status) AS (VALUES {}) \
UPDATE page \
SET download_status = tempdata.download_status \
FROM tempdata \
WHERE page.id = tempdata.id",
values
);
txn.execute_unprepared(&sql).await?;
Ok(())
}


@@ -0,0 +1,8 @@
mod error;
mod helper;
mod request;
mod response;
mod routes;
mod wrapper;
pub use routes::{LogHelper, MAX_HISTORY_LOGS, router};


@@ -0,0 +1,157 @@
use bili_sync_entity::rule::Rule;
use serde::{Deserialize, Serialize};
use validator::Validate;
use crate::bilibili::CollectionType;
#[derive(Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum StatusFilter {
Failed,
Succeeded,
Waiting,
}
#[derive(Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum ValidationFilter {
Skipped,
Invalid,
Normal,
}
#[derive(Deserialize)]
pub struct VideosRequest {
pub collection: Option<i32>,
pub favorite: Option<i32>,
pub submission: Option<i32>,
pub watch_later: Option<i32>,
pub query: Option<String>,
pub status_filter: Option<StatusFilter>,
pub validation_filter: Option<ValidationFilter>,
pub page: Option<u64>,
pub page_size: Option<u64>,
}
#[derive(Deserialize)]
pub struct ResetVideoStatusRequest {
#[serde(default)]
pub force: bool,
}
#[derive(Deserialize)]
pub struct ResetFilteredVideoStatusRequest {
pub collection: Option<i32>,
pub favorite: Option<i32>,
pub submission: Option<i32>,
pub watch_later: Option<i32>,
pub query: Option<String>,
pub status_filter: Option<StatusFilter>,
pub validation_filter: Option<ValidationFilter>,
#[serde(default)]
pub force: bool,
}
#[derive(Deserialize, Validate)]
pub struct StatusUpdate {
#[validate(range(min = 0, max = 4))]
pub status_index: usize,
#[validate(custom(function = "crate::utils::validation::validate_status_value"))]
pub status_value: u32,
}
#[derive(Deserialize, Validate)]
pub struct PageStatusUpdate {
pub page_id: i32,
#[validate(nested)]
pub updates: Vec<StatusUpdate>,
}
#[derive(Deserialize, Validate)]
pub struct UpdateVideoStatusRequest {
#[serde(default)]
#[validate(nested)]
pub video_updates: Vec<StatusUpdate>,
#[serde(default)]
#[validate(nested)]
pub page_updates: Vec<PageStatusUpdate>,
}
#[derive(Deserialize, Validate)]
pub struct UpdateFilteredVideoStatusRequest {
pub collection: Option<i32>,
pub favorite: Option<i32>,
pub submission: Option<i32>,
pub watch_later: Option<i32>,
pub query: Option<String>,
pub status_filter: Option<StatusFilter>,
pub validation_filter: Option<ValidationFilter>,
#[serde(default)]
#[validate(nested)]
pub video_updates: Vec<StatusUpdate>,
#[serde(default)]
#[validate(nested)]
pub page_updates: Vec<StatusUpdate>,
}
#[derive(Deserialize)]
pub struct FollowedCollectionsRequest {
pub page_num: Option<i32>,
pub page_size: Option<i32>,
}
#[derive(Deserialize)]
pub struct FollowedUppersRequest {
pub page_num: Option<i32>,
pub page_size: Option<i32>,
pub name: Option<String>,
}
#[derive(Deserialize, Validate)]
pub struct InsertFavoriteRequest {
pub fid: i64,
#[validate(custom(function = "crate::utils::validation::validate_path"))]
pub path: String,
}
#[derive(Deserialize, Validate)]
pub struct InsertCollectionRequest {
pub sid: i64,
pub mid: i64,
#[serde(default)]
pub collection_type: CollectionType,
#[validate(custom(function = "crate::utils::validation::validate_path"))]
pub path: String,
}
#[derive(Deserialize, Validate)]
pub struct InsertSubmissionRequest {
pub upper_id: i64,
#[validate(custom(function = "crate::utils::validation::validate_path"))]
pub path: String,
}
#[derive(Deserialize, Validate)]
#[serde(rename_all = "camelCase")]
pub struct UpdateVideoSourceRequest {
#[validate(custom(function = "crate::utils::validation::validate_path"))]
pub path: String,
pub enabled: bool,
pub rule: Option<Rule>,
pub use_dynamic_api: Option<bool>,
}
#[derive(Serialize, Deserialize)]
pub struct DefaultPathRequest {
pub name: String,
}
#[derive(Debug, Deserialize)]
pub struct PollQrcodeRequest {
pub qrcode_key: String,
}
#[derive(Debug, Deserialize)]
pub struct FullSyncVideoSourceRequest {
pub delete_local: bool,
}


@@ -0,0 +1,239 @@
use bili_sync_entity::rule::Rule;
use bili_sync_entity::*;
use sea_orm::prelude::DateTime;
use sea_orm::{DerivePartialModel, FromQueryResult};
use serde::Serialize;
use crate::bilibili::{PollStatus, Qrcode};
use crate::utils::status::{PageStatus, VideoStatus};
#[derive(Serialize)]
pub struct VideoSourcesResponse {
pub collection: Vec<VideoSource>,
pub favorite: Vec<VideoSource>,
pub submission: Vec<VideoSource>,
pub watch_later: Vec<VideoSource>,
}
#[derive(Serialize)]
pub struct VideosResponse {
pub videos: Vec<VideoInfo>,
pub total_count: u64,
}
#[derive(Serialize)]
pub struct VideoResponse {
pub video: VideoInfo,
pub pages: Vec<PageInfo>,
}
#[derive(Serialize)]
pub struct ResetVideoResponse {
pub resetted: bool,
pub video: VideoInfo,
pub pages: Vec<PageInfo>,
}
#[derive(Serialize)]
pub struct ClearAndResetVideoStatusResponse {
pub warning: Option<String>,
pub video: VideoInfo,
}
#[derive(Serialize)]
pub struct ResetFilteredVideosResponse {
pub resetted: bool,
pub resetted_videos_count: usize,
pub resetted_pages_count: usize,
}
#[derive(Serialize)]
pub struct UpdateVideoStatusResponse {
pub success: bool,
pub video: VideoInfo,
pub pages: Vec<PageInfo>,
}
#[derive(Serialize)]
pub struct UpdateFilteredVideoStatusResponse {
pub success: bool,
pub updated_videos_count: usize,
pub updated_pages_count: usize,
}
#[derive(FromQueryResult, Serialize)]
pub struct VideoSource {
pub id: i32,
pub name: String,
}
#[derive(Serialize, DerivePartialModel, FromQueryResult)]
#[sea_orm(entity = "video::Entity")]
pub struct VideoInfo {
pub id: i32,
pub bvid: String,
pub name: String,
pub upper_name: String,
pub valid: bool,
pub should_download: bool,
#[serde(serialize_with = "serde_video_download_status")]
pub download_status: u32,
pub collection_id: Option<i32>,
pub favorite_id: Option<i32>,
pub submission_id: Option<i32>,
pub watch_later_id: Option<i32>,
}
#[derive(Serialize, DerivePartialModel, FromQueryResult)]
#[sea_orm(entity = "page::Entity")]
pub struct PageInfo {
pub id: i32,
pub video_id: i32,
pub pid: i32,
pub name: String,
#[serde(serialize_with = "serde_page_download_status")]
pub download_status: u32,
}
#[derive(Serialize, DerivePartialModel, FromQueryResult, Clone, Copy)]
#[sea_orm(entity = "video::Entity")]
pub struct SimpleVideoInfo {
pub id: i32,
pub download_status: u32,
}
#[derive(Serialize, DerivePartialModel, FromQueryResult, Clone, Copy)]
#[sea_orm(entity = "page::Entity")]
pub struct SimplePageInfo {
pub id: i32,
pub video_id: i32,
pub download_status: u32,
}
fn serde_video_download_status<S>(status: &u32, serializer: S) -> Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
let status: [u32; 5] = VideoStatus::from(*status).into();
status.serialize(serializer)
}
fn serde_page_download_status<S>(status: &u32, serializer: S) -> Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
let status: [u32; 5] = PageStatus::from(*status).into();
status.serialize(serializer)
}
#[derive(Serialize)]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum Followed {
Favorite {
title: String,
media_count: i64,
fid: i64,
mid: i64,
invalid: bool,
subscribed: bool,
},
Collection {
title: String,
sid: i64,
mid: i64,
media_count: i64,
invalid: bool,
subscribed: bool,
},
Upper {
mid: i64,
uname: String,
face: String,
sign: String,
invalid: bool,
subscribed: bool,
},
}
#[derive(Serialize)]
pub struct FavoritesResponse {
pub favorites: Vec<Followed>,
}
#[derive(Serialize)]
pub struct CollectionsResponse {
pub collections: Vec<Followed>,
pub total: i64,
}
#[derive(Serialize)]
pub struct UppersResponse {
pub uppers: Vec<Followed>,
pub total: i64,
}
#[derive(Serialize)]
pub struct VideoSourcesDetailsResponse {
pub collections: Vec<VideoSourceDetail>,
pub favorites: Vec<VideoSourceDetail>,
pub submissions: Vec<VideoSourceDetail>,
pub watch_later: Vec<VideoSourceDetail>,
}
#[derive(Serialize, FromQueryResult)]
pub struct DayCountPair {
pub day: String,
pub cnt: i64,
}
#[derive(Serialize)]
pub struct DashBoardResponse {
pub enabled_favorites: u64,
pub enabled_collections: u64,
pub enabled_submissions: u64,
pub enable_watch_later: bool,
pub videos_by_day: Vec<DayCountPair>,
}
#[derive(Serialize, Clone, Copy)]
pub struct SysInfo {
pub timestamp: i64,
pub total_memory: u64,
pub used_memory: u64,
pub process_memory: u64,
pub used_cpu: f32,
pub process_cpu: f32,
pub total_disk: u64,
pub available_disk: u64,
}
#[derive(Serialize, FromQueryResult)]
#[serde(rename_all = "camelCase")]
pub struct VideoSourceDetail {
pub id: i32,
pub name: String,
pub path: String,
pub rule: Option<Rule>,
#[serde(default)]
pub rule_display: Option<String>,
#[serde(default)]
pub use_dynamic_api: Option<bool>,
pub enabled: bool,
pub latest_row_at: Option<DateTime>,
}
#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
pub struct UpdateVideoSourceResponse {
pub rule_display: Option<String>,
}
pub type GenerateQrcodeResponse = Qrcode;
pub type PollQrcodeResponse = PollStatus;
#[derive(Serialize)]
pub struct FullSyncVideoSourceResponse {
pub removed_count: usize,
pub warnings: Option<Vec<String>>,
}


@@ -0,0 +1,50 @@
use std::sync::Arc;
use anyhow::Result;
use axum::extract::Extension;
use axum::routing::{get, post};
use axum::{Json, Router};
use sea_orm::DatabaseConnection;
use crate::api::wrapper::{ApiError, ApiResponse, ValidatedJson};
use crate::bilibili::BiliClient;
use crate::config::{Config, VersionedConfig};
use crate::notifier::{Message, Notifier};
pub(super) fn router() -> Router {
Router::new()
.route("/config", get(get_config).put(update_config))
.route("/config/notifiers/ping", post(ping_notifiers))
}
/// Fetch the global config
pub async fn get_config() -> Result<ApiResponse<Arc<Config>>, ApiError> {
Ok(ApiResponse::ok(VersionedConfig::get().snapshot()))
}
/// Update the global config
pub async fn update_config(
Extension(db): Extension<DatabaseConnection>,
ValidatedJson(config): ValidatedJson<Config>,
) -> Result<ApiResponse<Arc<Config>>, ApiError> {
config.check()?;
let new_config = VersionedConfig::get().update(config, &db).await?;
Ok(ApiResponse::ok(new_config))
}
pub async fn ping_notifiers(
Extension(bili_client): Extension<Arc<BiliClient>>,
Json(mut notifier): Json<Notifier>,
) -> Result<ApiResponse<()>, ApiError> {
// For webhook notifier tests, set the ignore_cache tag to force real-time rendering
if let Notifier::Webhook { ignore_cache, .. } = &mut notifier {
*ignore_cache = Some(());
}
notifier
.notify(bili_client.inner_client(), Message {
message: "This is a test notification from BiliSync.".into(),
image_url: Some("https://socialify.git.ci/amtoaer/bili-sync/image?description=1&font=KoHo&issues=1&language=1&logo=https%3A%2F%2Fs2.loli.net%2F2023%2F12%2F02%2F9EwT2yInOu1d3zm.png&name=1&owner=1&pattern=Signal&pulls=1&stargazers=1&theme=Light".to_owned()),
})
.await?;
Ok(ApiResponse::ok(()))
}


@@ -0,0 +1,65 @@
use axum::routing::get;
use axum::{Extension, Router};
use bili_sync_entity::*;
use sea_orm::entity::prelude::*;
use sea_orm::{FromQueryResult, Statement};
use crate::api::response::{DashBoardResponse, DayCountPair};
use crate::api::wrapper::{ApiError, ApiResponse};
pub(super) fn router() -> Router {
Router::new().route("/dashboard", get(get_dashboard))
}
async fn get_dashboard(
Extension(db): Extension<DatabaseConnection>,
) -> Result<ApiResponse<DashBoardResponse>, ApiError> {
let (enabled_favorites, enabled_collections, enabled_submissions, enabled_watch_later, videos_by_day) = tokio::try_join!(
favorite::Entity::find()
.filter(favorite::Column::Enabled.eq(true))
.count(&db),
collection::Entity::find()
.filter(collection::Column::Enabled.eq(true))
.count(&db),
submission::Entity::find()
.filter(submission::Column::Enabled.eq(true))
.count(&db),
watch_later::Entity::find()
.filter(watch_later::Column::Enabled.eq(true))
.count(&db),
DayCountPair::find_by_statement(Statement::from_string(
db.get_database_backend(),
// doing this with SeaORM is too convoluted, so write raw SQL directly
"
SELECT
dates.day AS day,
COUNT(video.id) AS cnt
FROM
(
SELECT
STRFTIME('%Y-%m-%d', DATE('now', '-' || n || ' days', 'localtime')) AS day,
DATETIME(DATE('now', '-' || n || ' days', 'localtime'), 'utc') AS start_utc_datetime,
DATETIME(DATE('now', '-' || n || ' days', '+1 day', 'localtime'), 'utc') AS end_utc_datetime
FROM
(
SELECT 0 AS n UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6
)
) AS dates
LEFT JOIN
video ON video.created_at >= dates.start_utc_datetime AND video.created_at < dates.end_utc_datetime
GROUP BY
dates.day
ORDER BY
dates.day;
"
))
.all(&db),
)?;
Ok(ApiResponse::ok(DashBoardResponse {
enabled_favorites,
enabled_collections,
enabled_submissions,
enable_watch_later: enabled_watch_later > 0,
videos_by_day,
}))
}


@@ -0,0 +1,34 @@
use std::sync::Arc;
use anyhow::Result;
use axum::Router;
use axum::extract::{Extension, Query};
use axum::routing::{get, post};
use crate::api::request::PollQrcodeRequest;
use crate::api::response::{GenerateQrcodeResponse, PollQrcodeResponse};
use crate::api::wrapper::{ApiError, ApiResponse};
use crate::bilibili::{BiliClient, Credential};
pub(super) fn router() -> Router {
Router::new()
.route("/login/qrcode/generate", post(generate_qrcode))
.route("/login/qrcode/poll", get(poll_qrcode))
}
/// Generate a QR code for scan-based login
pub async fn generate_qrcode(
Extension(bili_client): Extension<Arc<BiliClient>>,
) -> Result<ApiResponse<GenerateQrcodeResponse>, ApiError> {
Ok(ApiResponse::ok(Credential::generate_qrcode(&bili_client.client).await?))
}
/// Poll the scan-login status of the QR code
pub async fn poll_qrcode(
Extension(bili_client): Extension<Arc<BiliClient>>,
Query(params): Query<PollQrcodeRequest>,
) -> Result<ApiResponse<PollQrcodeResponse>, ApiError> {
Ok(ApiResponse::ok(
Credential::poll_qrcode(&bili_client.client, &params.qrcode_key).await?,
))
}


@@ -0,0 +1,189 @@
use std::collections::HashSet;
use std::sync::Arc;
use anyhow::Result;
use axum::Router;
use axum::extract::{Extension, Query};
use axum::routing::get;
use bili_sync_entity::*;
use itertools::{Either, Itertools};
use sea_orm::{ColumnTrait, DatabaseConnection, EntityTrait, QueryFilter, QuerySelect};
use crate::api::request::{FollowedCollectionsRequest, FollowedUppersRequest};
use crate::api::response::{CollectionsResponse, FavoritesResponse, Followed, UppersResponse};
use crate::api::wrapper::{ApiError, ApiResponse};
use crate::bilibili::{BiliClient, Me};
use crate::config::VersionedConfig;
pub(super) fn router() -> Router {
Router::new()
.route("/me/favorites", get(get_created_favorites))
.route("/me/collections", get(get_followed_collections))
.route("/me/uppers", get(get_followed_uppers))
}
/// Fetch the favorite folders created by the current user
pub async fn get_created_favorites(
Extension(db): Extension<DatabaseConnection>,
Extension(bili_client): Extension<Arc<BiliClient>>,
) -> Result<ApiResponse<FavoritesResponse>, ApiError> {
let credential = &VersionedConfig::get().read().credential;
let me = Me::new(bili_client.as_ref(), credential);
let bili_favorites = me.get_created_favorites().await?;
let favorites = if let Some(bili_favorites) = bili_favorites {
// The so-called "fid" used by Bilibili's favorite-folder APIs is actually the id here, i.e. fid with the last two digits of mid appended
let bili_fids: Vec<_> = bili_favorites.iter().map(|fav| fav.id).collect();
let subscribed_fids: HashSet<i64> = favorite::Entity::find()
.select_only()
.column(favorite::Column::FId)
.filter(favorite::Column::FId.is_in(bili_fids))
.into_tuple()
.all(&db)
.await?
.into_iter()
.collect();
bili_favorites
.into_iter()
.map(|fav| Followed::Favorite {
title: fav.title,
media_count: fav.media_count,
// the id returned by the API is the real fid
fid: fav.id,
mid: fav.mid,
invalid: false,
subscribed: subscribed_fids.contains(&fav.id),
})
.collect()
} else {
vec![]
};
Ok(ApiResponse::ok(FavoritesResponse { favorites }))
}
/// Fetch the collections/favorite folders followed by the current user
pub async fn get_followed_collections(
Extension(db): Extension<DatabaseConnection>,
Extension(bili_client): Extension<Arc<BiliClient>>,
Query(params): Query<FollowedCollectionsRequest>,
) -> Result<ApiResponse<CollectionsResponse>, ApiError> {
let credential = &VersionedConfig::get().read().credential;
let me = Me::new(bili_client.as_ref(), credential);
let (page_num, page_size) = (params.page_num.unwrap_or(1), params.page_size.unwrap_or(50));
let bili_collections = me.get_followed_collections(page_num, page_size).await?;
let collections = if let Some(collection_list) = bili_collections.list {
// Entries in collection_list may be either collections or favorite folders
// and must be told apart; the clearest signal observed so far is that a
// collection's fid is 0
let (bili_fids, bili_sids): (Vec<_>, Vec<_>) = collection_list.iter().partition_map(|col| {
if col.fid != 0 {
Either::Left(col.id)
} else {
Either::Right(col.id)
}
});
let (subscribed_fids, subscribed_sids): (HashSet<i64>, HashSet<i64>) = tokio::try_join!(
async {
Result::<_, anyhow::Error>::Ok(
favorite::Entity::find()
.select_only()
.column(favorite::Column::FId)
.filter(favorite::Column::FId.is_in(bili_fids))
.into_tuple()
.all(&db)
.await?
.into_iter()
.collect(),
)
},
async {
Ok(collection::Entity::find()
.select_only()
.column(collection::Column::SId)
.filter(collection::Column::SId.is_in(bili_sids))
.into_tuple()
.all(&db)
.await?
.into_iter()
.collect())
}
)?;
collection_list
.into_iter()
.map(|col| {
if col.fid != 0 {
Followed::Favorite {
title: col.title,
media_count: col.media_count,
fid: col.id,
mid: col.mid,
invalid: col.state == 1,
subscribed: subscribed_fids.contains(&col.id),
}
} else {
Followed::Collection {
title: col.title,
sid: col.id,
mid: col.mid,
media_count: col.media_count,
invalid: col.state == 1,
subscribed: subscribed_sids.contains(&col.id),
}
}
})
.collect()
} else {
vec![]
};
Ok(ApiResponse::ok(CollectionsResponse {
collections,
total: bili_collections.count,
}))
}
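The partition inside `get_followed_collections` hinges on one observed detail: entries whose `fid` is 0 are collections, everything else is a favorite folder. A std-only restatement of that split (the real code uses itertools' `partition_map`):

```rust
// items are (id, fid) pairs; returns (favorite fids, collection sids).
fn split_followed(items: &[(i64, i64)]) -> (Vec<i64>, Vec<i64>) {
    let mut fids = Vec::new();
    let mut sids = Vec::new();
    for &(id, fid) in items {
        if fid != 0 {
            fids.push(id); // favorite folder
        } else {
            sids.push(id); // collection
        }
    }
    (fids, sids)
}

fn main() {
    let (fids, sids) = split_followed(&[(10, 3), (20, 0), (30, 5)]);
    assert_eq!(fids, vec![10, 30]);
    assert_eq!(sids, vec![20]);
}
```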
/// Fetch the uppers followed by the current user
pub async fn get_followed_uppers(
Extension(db): Extension<DatabaseConnection>,
Extension(bili_client): Extension<Arc<BiliClient>>,
Query(params): Query<FollowedUppersRequest>,
) -> Result<ApiResponse<UppersResponse>, ApiError> {
let credential = &VersionedConfig::get().read().credential;
let me = Me::new(bili_client.as_ref(), credential);
let (page_num, page_size) = (params.page_num.unwrap_or(1), params.page_size.unwrap_or(20));
let bili_uppers = me
.get_followed_uppers(page_num, page_size, params.name.as_deref())
.await?;
let bili_uid: Vec<_> = bili_uppers.list.iter().map(|upper| upper.mid).collect();
let subscribed_ids: Vec<i64> = submission::Entity::find()
.select_only()
.column(submission::Column::UpperId)
.filter(submission::Column::UpperId.is_in(bili_uid))
.into_tuple()
.all(&db)
.await?;
let subscribed_set: HashSet<i64> = subscribed_ids.into_iter().collect();
let uppers = bili_uppers
.list
.into_iter()
.map(|upper| Followed::Upper {
mid: upper.mid,
// there is no official field for this, but it can be roughly inferred this way
invalid: upper.uname == "账号已注销" && upper.face == "https://i0.hdslb.com/bfs/face/member/noface.jpg",
uname: upper.uname,
face: upper.face,
sign: upper.sign,
subscribed: subscribed_set.contains(&upper.mid),
})
.collect();
Ok(ApiResponse::ok(UppersResponse {
uppers,
total: bili_uppers.total,
}))
}


@@ -0,0 +1,62 @@
use axum::extract::Request;
use axum::http::HeaderMap;
use axum::middleware::Next;
use axum::response::{IntoResponse, Response};
use axum::{Router, middleware};
use base64::Engine;
use base64::prelude::BASE64_URL_SAFE_NO_PAD;
use reqwest::StatusCode;
use crate::api::wrapper::ApiResponse;
use crate::config::VersionedConfig;
mod config;
mod dashboard;
mod login;
mod me;
mod task;
mod video_sources;
mod videos;
mod ws;
pub use ws::{LogHelper, MAX_HISTORY_LOGS};
pub fn router() -> Router {
Router::new().nest(
"/api",
config::router()
.merge(me::router())
.merge(login::router())
.merge(video_sources::router())
.merge(videos::router())
.merge(dashboard::router())
.merge(ws::router())
.merge(task::router())
.layer(middleware::from_fn(auth)),
)
}
/// Middleware: authenticate requests with the auth token
pub async fn auth(mut headers: HeaderMap, request: Request, next: Next) -> Result<Response, StatusCode> {
let config = VersionedConfig::get().read();
let token = config.auth_token.as_str();
if headers
.get("Authorization")
.and_then(|v| v.to_str().ok())
.is_some_and(|s| s == token)
{
return Ok(next.run(request).await);
}
if let Some(protocol) = headers.remove("Sec-WebSocket-Protocol")
&& protocol
.to_str()
.ok()
.and_then(|s| BASE64_URL_SAFE_NO_PAD.decode(s).ok())
.is_some_and(|s| s == token.as_bytes())
{
let mut resp = next.run(request).await;
resp.headers_mut().insert("Sec-WebSocket-Protocol", protocol);
return Ok(resp);
}
Ok(ApiResponse::<()>::unauthorized("auth token does not match").into_response())
}


@@ -0,0 +1,15 @@
use anyhow::Result;
use axum::Router;
use axum::routing::post;
use crate::api::wrapper::{ApiError, ApiResponse};
use crate::task::DownloadTaskManager;
pub(super) fn router() -> Router {
Router::new().route("/task/download", post(new_download_task))
}
pub async fn new_download_task() -> Result<ApiResponse<bool>, ApiError> {
DownloadTaskManager::get().download_once().await?;
Ok(ApiResponse::ok(true))
}


@@ -0,0 +1,522 @@
use std::collections::HashSet;
use std::sync::Arc;
use anyhow::{Context, Result};
use axum::extract::{Extension, Path, Query};
use axum::routing::{get, post, put};
use axum::{Json, Router};
use bili_sync_entity::rule::Rule;
use bili_sync_entity::*;
use bili_sync_migration::Expr;
use futures::stream::FuturesUnordered;
use futures::{StreamExt, TryStreamExt};
use itertools::Itertools;
use sea_orm::ActiveValue::Set;
use sea_orm::entity::prelude::*;
use sea_orm::{ColumnTrait, DatabaseConnection, EntityTrait, QuerySelect, QueryTrait, TransactionTrait};
use crate::adapter::{_ActiveModel, VideoSource as _, VideoSourceEnum};
use crate::api::error::InnerApiError;
use crate::api::request::{
DefaultPathRequest, FullSyncVideoSourceRequest, InsertCollectionRequest, InsertFavoriteRequest,
InsertSubmissionRequest, UpdateVideoSourceRequest,
};
use crate::api::response::{
FullSyncVideoSourceResponse, UpdateVideoSourceResponse, VideoSource, VideoSourceDetail,
VideoSourcesDetailsResponse, VideoSourcesResponse,
};
use crate::api::wrapper::{ApiError, ApiResponse, ValidatedJson};
use crate::bilibili::{BiliClient, Collection, CollectionItem, FavoriteList, Submission};
use crate::config::{PathSafeTemplate, TEMPLATE, VersionedConfig};
use crate::utils::rule::FieldEvaluatable;
pub(super) fn router() -> Router {
Router::new()
.route("/video-sources", get(get_video_sources))
.route("/video-sources/details", get(get_video_sources_details))
.route(
"/video-sources/{type}/default-path",
get(get_video_sources_default_path),
) // only used by the frontend to fetch the default path
.route(
"/video-sources/{type}/{id}",
put(update_video_source).delete(remove_video_source),
)
.route("/video-sources/{type}/{id}/evaluate", post(evaluate_video_source))
.route("/video-sources/{type}/{id}/full-sync", post(full_sync_video_source))
.route("/video-sources/favorites", post(insert_favorite))
.route("/video-sources/collections", post(insert_collection))
.route("/video-sources/submissions", post(insert_submission))
}
/// List all video sources
pub async fn get_video_sources(
Extension(db): Extension<DatabaseConnection>,
) -> Result<ApiResponse<VideoSourcesResponse>, ApiError> {
let (collection, favorite, submission, mut watch_later) = tokio::try_join!(
collection::Entity::find()
.select_only()
.columns([collection::Column::Id, collection::Column::Name])
.into_model::<VideoSource>()
.all(&db),
favorite::Entity::find()
.select_only()
.columns([favorite::Column::Id, favorite::Column::Name])
.into_model::<VideoSource>()
.all(&db),
submission::Entity::find()
.select_only()
.column(submission::Column::Id)
.column_as(submission::Column::UpperName, "name")
.into_model::<VideoSource>()
.all(&db),
watch_later::Entity::find()
.select_only()
.column(watch_later::Column::Id)
.column_as(Expr::value("稍后再看"), "name")
.into_model::<VideoSource>()
.all(&db)
)?;
// watch_later is a special video source; add a default entry if none exists
if watch_later.is_empty() {
watch_later.push(VideoSource {
id: 1,
name: "稍后再看".to_string(),
});
}
Ok(ApiResponse::ok(VideoSourcesResponse {
collection,
favorite,
submission,
watch_later,
}))
}
/// Fetch video source details
pub async fn get_video_sources_details(
Extension(db): Extension<DatabaseConnection>,
) -> Result<ApiResponse<VideoSourcesDetailsResponse>, ApiError> {
let (mut collections, mut favorites, mut submissions, mut watch_later) = tokio::try_join!(
collection::Entity::find()
.select_only()
.columns([
collection::Column::Id,
collection::Column::Name,
collection::Column::Path,
collection::Column::Rule,
collection::Column::Enabled,
collection::Column::LatestRowAt
])
.into_model::<VideoSourceDetail>()
.all(&db),
favorite::Entity::find()
.select_only()
.columns([
favorite::Column::Id,
favorite::Column::Name,
favorite::Column::Path,
favorite::Column::Rule,
favorite::Column::Enabled,
favorite::Column::LatestRowAt
])
.into_model::<VideoSourceDetail>()
.all(&db),
submission::Entity::find()
.select_only()
.column_as(submission::Column::UpperName, "name")
.columns([
submission::Column::Id,
submission::Column::Path,
submission::Column::Enabled,
submission::Column::Rule,
submission::Column::UseDynamicApi,
submission::Column::LatestRowAt
])
.into_model::<VideoSourceDetail>()
.all(&db),
watch_later::Entity::find()
.select_only()
.column_as(Expr::value("稍后再看"), "name")
.columns([
watch_later::Column::Id,
watch_later::Column::Path,
watch_later::Column::Enabled,
watch_later::Column::Rule,
watch_later::Column::LatestRowAt
])
.into_model::<VideoSourceDetail>()
.all(&db)
)?;
if watch_later.is_empty() {
watch_later.push(VideoSourceDetail {
id: 1,
name: "稍后再看".to_string(),
path: String::new(),
rule: None,
rule_display: None,
use_dynamic_api: None,
enabled: false,
latest_row_at: None,
})
}
for sources in [&mut collections, &mut favorites, &mut submissions, &mut watch_later] {
sources.iter_mut().for_each(|item| {
if let Some(rule) = &item.rule {
item.rule_display = Some(rule.to_string());
}
item.latest_row_at = item.latest_row_at.filter(|dt| dt.and_utc().timestamp() != 0);
});
}
Ok(ApiResponse::ok(VideoSourcesDetailsResponse {
collections,
favorites,
submissions,
watch_later,
}))
}
pub async fn get_video_sources_default_path(
Path(source_type): Path<String>,
Query(params): Query<DefaultPathRequest>,
) -> Result<ApiResponse<String>, ApiError> {
let template_name = match source_type.as_str() {
"favorites" => "favorite_default_path",
"collections" => "collection_default_path",
"submissions" => "submission_default_path",
_ => return Err(InnerApiError::BadRequest("Invalid video source type".to_string()).into()),
};
let template = TEMPLATE.read();
Ok(ApiResponse::ok(
template.path_safe_render(template_name, &serde_json::to_value(params)?)?,
))
}
/// Update a video source
pub async fn update_video_source(
Path((source_type, id)): Path<(String, i32)>,
Extension(db): Extension<DatabaseConnection>,
ValidatedJson(request): ValidatedJson<UpdateVideoSourceRequest>,
) -> Result<ApiResponse<UpdateVideoSourceResponse>, ApiError> {
let rule_display = request.rule.as_ref().map(|rule| rule.to_string());
let active_model = match source_type.as_str() {
"collections" => collection::Entity::find_by_id(id).one(&db).await?.map(|model| {
let mut active_model: collection::ActiveModel = model.into();
active_model.path = Set(request.path);
active_model.enabled = Set(request.enabled);
active_model.rule = Set(request.rule);
_ActiveModel::Collection(active_model)
}),
"favorites" => favorite::Entity::find_by_id(id).one(&db).await?.map(|model| {
let mut active_model: favorite::ActiveModel = model.into();
active_model.path = Set(request.path);
active_model.enabled = Set(request.enabled);
active_model.rule = Set(request.rule);
_ActiveModel::Favorite(active_model)
}),
"submissions" => submission::Entity::find_by_id(id).one(&db).await?.map(|model| {
let mut active_model: submission::ActiveModel = model.into();
active_model.path = Set(request.path);
active_model.enabled = Set(request.enabled);
active_model.rule = Set(request.rule);
if let Some(use_dynamic_api) = request.use_dynamic_api {
active_model.use_dynamic_api = Set(use_dynamic_api);
}
_ActiveModel::Submission(active_model)
}),
"watch_later" => match watch_later::Entity::find_by_id(id).one(&db).await? {
// watch_later needs special handling: when no record exists, get returns a fake entry with id 1,
// so this request may be either an update or an insert; handle both cases
Some(model) => {
// A record exists: update it
let mut active_model: watch_later::ActiveModel = model.into();
active_model.path = Set(request.path);
active_model.enabled = Set(request.enabled);
active_model.rule = Set(request.rule);
Some(_ActiveModel::WatchLater(active_model))
}
None => {
if id != 1 {
None
} else {
// No record and id is 1: insert a new watch_later record
Some(_ActiveModel::WatchLater(watch_later::ActiveModel {
path: Set(request.path),
enabled: Set(request.enabled),
rule: Set(request.rule),
..Default::default()
}))
}
}
},
_ => return Err(InnerApiError::BadRequest("Invalid video source type".to_string()).into()),
};
let Some(active_model) = active_model else {
return Err(InnerApiError::NotFound(id).into());
};
active_model.save(&db).await?;
Ok(ApiResponse::ok(UpdateVideoSourceResponse { rule_display }))
}
pub async fn remove_video_source(
Path((source_type, id)): Path<(String, i32)>,
Extension(db): Extension<DatabaseConnection>,
) -> Result<ApiResponse<bool>, ApiError> {
// Deleting watch_later is not allowed
let video_source: Option<VideoSourceEnum> = match source_type.as_str() {
"collections" => collection::Entity::find_by_id(id).one(&db).await?.map(Into::into),
"favorites" => favorite::Entity::find_by_id(id).one(&db).await?.map(Into::into),
"submissions" => submission::Entity::find_by_id(id).one(&db).await?.map(Into::into),
_ => return Err(InnerApiError::BadRequest("Invalid video source type".to_string()).into()),
};
let Some(video_source) = video_source else {
return Err(InnerApiError::NotFound(id).into());
};
let txn = db.begin().await?;
page::Entity::delete_many()
.filter(
page::Column::VideoId.in_subquery(
video::Entity::find()
.filter(video_source.filter_expr())
.select_only()
.column(video::Column::Id)
.as_query()
.to_owned(),
),
)
.exec(&txn)
.await?;
video::Entity::delete_many()
.filter(video_source.filter_expr())
.exec(&txn)
.await?;
video_source.delete_from_db(&txn).await?;
txn.commit().await?;
Ok(ApiResponse::ok(true))
}
pub async fn evaluate_video_source(
Path((source_type, id)): Path<(String, i32)>,
Extension(db): Extension<DatabaseConnection>,
) -> Result<ApiResponse<bool>, ApiError> {
// Look up the source's rule and the corresponding video filter condition
let (rule, filter_condition) = match source_type.as_str() {
"collections" => (
collection::Entity::find_by_id(id)
.select_only()
.column(collection::Column::Rule)
.into_tuple::<Option<Rule>>()
.one(&db)
.await?
.and_then(|r| r),
video::Column::CollectionId.eq(id),
),
"favorites" => (
favorite::Entity::find_by_id(id)
.select_only()
.column(favorite::Column::Rule)
.into_tuple::<Option<Rule>>()
.one(&db)
.await?
.and_then(|r| r),
video::Column::FavoriteId.eq(id),
),
"submissions" => (
submission::Entity::find_by_id(id)
.select_only()
.column(submission::Column::Rule)
.into_tuple::<Option<Rule>>()
.one(&db)
.await?
.and_then(|r| r),
video::Column::SubmissionId.eq(id),
),
"watch_later" => (
watch_later::Entity::find_by_id(id)
.select_only()
.column(watch_later::Column::Rule)
.into_tuple::<Option<Rule>>()
.one(&db)
.await?
.and_then(|r| r),
video::Column::WatchLaterId.eq(id),
),
_ => return Err(InnerApiError::BadRequest("Invalid video source type".to_string()).into()),
};
let videos: Vec<(video::Model, Vec<page::Model>)> = video::Entity::find()
.filter(filter_condition)
.find_with_related(page::Entity)
.all(&db)
.await?;
let video_should_download_pairs = videos
.into_iter()
.map(|(video, pages)| (video.id, rule.evaluate_model(&video, &pages)))
.collect::<Vec<(i32, bool)>>();
let txn = db.begin().await?;
for chunk in video_should_download_pairs.chunks(500) {
let sql = format!(
"WITH tempdata(id, should_download) AS (VALUES {}) \
UPDATE video \
SET should_download = tempdata.should_download \
FROM tempdata \
WHERE video.id = tempdata.id",
chunk.iter().map(|item| format!("({}, {})", item.0, item.1)).join(", ")
);
txn.execute_unprepared(&sql).await?;
}
txn.commit().await?;
Ok(ApiResponse::ok(true))
}
pub async fn full_sync_video_source(
Path((source_type, id)): Path<(String, i32)>,
Extension(db): Extension<DatabaseConnection>,
Extension(bili_client): Extension<Arc<BiliClient>>,
Json(request): Json<FullSyncVideoSourceRequest>,
) -> Result<ApiResponse<FullSyncVideoSourceResponse>, ApiError> {
let video_source: Option<VideoSourceEnum> = match source_type.as_str() {
"collections" => collection::Entity::find_by_id(id).one(&db).await?.map(Into::into),
"favorites" => favorite::Entity::find_by_id(id).one(&db).await?.map(Into::into),
"submissions" => submission::Entity::find_by_id(id).one(&db).await?.map(Into::into),
"watch_later" => watch_later::Entity::find_by_id(id).one(&db).await?.map(Into::into),
_ => return Err(InnerApiError::BadRequest("Invalid video source type".to_string()).into()),
};
let Some(video_source) = video_source else {
return Err(InnerApiError::NotFound(id).into());
};
let credential = &VersionedConfig::get().read().credential;
let filter_expr = video_source.filter_expr();
let (_, video_streams) = video_source.refresh(&bili_client, credential, &db).await?;
let all_videos = video_streams
.try_collect::<Vec<_>>()
.await
.context("failed to read all videos from video stream")?;
let all_bvids = all_videos.into_iter().map(|v| v.bvid_owned()).collect::<HashSet<_>>();
let videos_to_remove = video::Entity::find()
.filter(video::Column::Bvid.is_not_in(all_bvids).and(filter_expr))
.select_only()
.columns([video::Column::Id, video::Column::Path])
.into_tuple::<(i32, String)>()
.all(&db)
.await?;
if videos_to_remove.is_empty() {
return Ok(ApiResponse::ok(FullSyncVideoSourceResponse {
removed_count: 0,
warnings: None,
}));
}
let remove_count = videos_to_remove.len();
let (video_ids, video_paths): (Vec<i32>, Vec<String>) = videos_to_remove.into_iter().unzip();
let txn = db.begin().await?;
page::Entity::delete_many()
.filter(page::Column::VideoId.is_in(video_ids.iter().copied()))
.exec(&txn)
.await?;
video::Entity::delete_many()
.filter(video::Column::Id.is_in(video_ids))
.exec(&txn)
.await?;
txn.commit().await?;
let warnings = if request.delete_local {
let tasks = video_paths
.into_iter()
.filter_map(|path| {
if path.is_empty() {
None
} else {
Some(async move {
tokio::fs::remove_dir_all(&path)
.await
.with_context(|| format!("failed to remove {path}"))?;
Result::<_, anyhow::Error>::Ok(())
})
}
})
.collect::<FuturesUnordered<_>>();
Some(
tasks
.filter_map(|res| futures::future::ready(res.err().map(|e| format!("{:#}", e))))
.collect::<Vec<_>>()
.await,
)
} else {
None
};
Ok(ApiResponse::ok(FullSyncVideoSourceResponse {
removed_count: remove_count,
warnings,
}))
}
/// Add a favorite-list subscription
pub async fn insert_favorite(
Extension(db): Extension<DatabaseConnection>,
Extension(bili_client): Extension<Arc<BiliClient>>,
ValidatedJson(request): ValidatedJson<InsertFavoriteRequest>,
) -> Result<ApiResponse<bool>, ApiError> {
let credential = &VersionedConfig::get().read().credential;
let favorite = FavoriteList::new(bili_client.as_ref(), request.fid.to_string(), credential);
let favorite_info = favorite.get_info().await?;
favorite::Entity::insert(favorite::ActiveModel {
f_id: Set(favorite_info.id),
name: Set(favorite_info.title.clone()),
path: Set(request.path),
enabled: Set(false),
..Default::default()
})
.exec(&db)
.await?;
Ok(ApiResponse::ok(true))
}
/// Add a collection/series subscription
pub async fn insert_collection(
Extension(db): Extension<DatabaseConnection>,
Extension(bili_client): Extension<Arc<BiliClient>>,
ValidatedJson(request): ValidatedJson<InsertCollectionRequest>,
) -> Result<ApiResponse<bool>, ApiError> {
let credential = &VersionedConfig::get().read().credential;
let collection = Collection::new(
bili_client.as_ref(),
CollectionItem {
sid: request.sid.to_string(),
mid: request.mid.to_string(),
collection_type: request.collection_type,
},
credential,
);
let collection_info = collection.get_info().await?;
collection::Entity::insert(collection::ActiveModel {
s_id: Set(collection_info.sid),
m_id: Set(collection_info.mid),
r#type: Set(collection_info.collection_type.into()),
name: Set(collection_info.name.clone()),
path: Set(request.path),
enabled: Set(false),
..Default::default()
})
.exec(&db)
.await?;
Ok(ApiResponse::ok(true))
}
/// Add an uploader-submission subscription
pub async fn insert_submission(
Extension(db): Extension<DatabaseConnection>,
Extension(bili_client): Extension<Arc<BiliClient>>,
ValidatedJson(request): ValidatedJson<InsertSubmissionRequest>,
) -> Result<ApiResponse<bool>, ApiError> {
let credential = &VersionedConfig::get().read().credential;
let submission = Submission::new(bili_client.as_ref(), request.upper_id.to_string(), credential);
let upper = submission.get_info().await?;
submission::Entity::insert(submission::ActiveModel {
upper_id: Set(upper.mid.parse()?),
upper_name: Set(upper.name),
path: Set(request.path),
enabled: Set(false),
..Default::default()
})
.exec(&db)
.await?;
Ok(ApiResponse::ok(true))
}
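The chunked status update in `evaluate_video_source` above builds a plain-SQL batched UPDATE from `(id, should_download)` pairs. A minimal standalone sketch of that string construction (the helper name `build_update_sql` is hypothetical, not part of the codebase; the original uses itertools' `join`, replaced here with the stdlib equivalent):

```rust
// Sketch: build the VALUES-based batched UPDATE used in `evaluate_video_source`.
// `build_update_sql` is a hypothetical helper name for illustration only.
fn build_update_sql(chunk: &[(i32, bool)]) -> String {
    let values = chunk
        .iter()
        .map(|(id, flag)| format!("({id}, {flag})"))
        .collect::<Vec<_>>()
        .join(", ");
    format!(
        "WITH tempdata(id, should_download) AS (VALUES {values}) \
         UPDATE video \
         SET should_download = tempdata.should_download \
         FROM tempdata \
         WHERE video.id = tempdata.id"
    )
}

fn main() {
    // One round-trip per chunk of 500 pairs instead of one UPDATE per row.
    println!("{}", build_update_sql(&[(1, true), (2, false)]));
}
```

The CTE keeps the statement a single round-trip per 500-row chunk, which is why the handler loops over `chunks(500)` rather than issuing one UPDATE per video.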


@@ -0,0 +1,419 @@
use std::collections::HashSet;
use anyhow::{Context, Result};
use axum::extract::{Extension, Path, Query};
use axum::routing::{get, post};
use axum::{Json, Router};
use bili_sync_entity::*;
use sea_orm::ActiveValue::Set;
use sea_orm::{
ActiveModelTrait, ColumnTrait, DatabaseConnection, EntityTrait, IntoActiveModel, PaginatorTrait, QueryFilter,
QueryOrder, TransactionTrait, TryIntoModel,
};
use crate::api::error::InnerApiError;
use crate::api::helper::{update_page_download_status, update_video_download_status};
use crate::api::request::{
ResetFilteredVideoStatusRequest, ResetVideoStatusRequest, UpdateFilteredVideoStatusRequest,
UpdateVideoStatusRequest, VideosRequest,
};
use crate::api::response::{
ClearAndResetVideoStatusResponse, PageInfo, ResetFilteredVideosResponse, ResetVideoResponse, SimplePageInfo,
SimpleVideoInfo, UpdateFilteredVideoStatusResponse, UpdateVideoStatusResponse, VideoInfo, VideoResponse,
VideosResponse,
};
use crate::api::wrapper::{ApiError, ApiResponse, ValidatedJson};
use crate::utils::status::{PageStatus, VideoStatus};
pub(super) fn router() -> Router {
Router::new()
.route("/videos", get(get_videos))
.route("/videos/{id}", get(get_video))
.route(
"/videos/{id}/clear-and-reset-status",
post(clear_and_reset_video_status),
)
.route("/videos/{id}/reset-status", post(reset_video_status))
.route("/videos/{id}/update-status", post(update_video_status))
.route("/videos/reset-status", post(reset_filtered_video_status))
.route("/videos/update-status", post(update_filtered_video_status))
}
/// List basic video info, with filtering by video source, name search, and pagination
pub async fn get_videos(
Extension(db): Extension<DatabaseConnection>,
Query(params): Query<VideosRequest>,
) -> Result<ApiResponse<VideosResponse>, ApiError> {
let mut query = video::Entity::find();
for (field, column) in [
(params.collection, video::Column::CollectionId),
(params.favorite, video::Column::FavoriteId),
(params.submission, video::Column::SubmissionId),
(params.watch_later, video::Column::WatchLaterId),
] {
if let Some(id) = field {
query = query.filter(column.eq(id));
}
}
if let Some(query_word) = params.query {
query = query.filter(
video::Column::Name
.contains(&query_word)
.or(video::Column::Bvid.contains(query_word)),
);
}
if let Some(status_filter) = params.status_filter {
query = query.filter(status_filter.to_video_query());
}
if let Some(validation_filter) = params.validation_filter {
query = query.filter(validation_filter.to_video_query());
}
let total_count = query.clone().count(&db).await?;
let (page, page_size) = if let (Some(page), Some(page_size)) = (params.page, params.page_size) {
(page, page_size)
} else {
(0, 10)
};
Ok(ApiResponse::ok(VideosResponse {
videos: query
.order_by_desc(video::Column::Id)
.into_partial_model::<VideoInfo>()
.paginate(&db, page_size)
.fetch_page(page)
.await?,
total_count,
}))
}
pub async fn get_video(
Path(id): Path<i32>,
Extension(db): Extension<DatabaseConnection>,
) -> Result<ApiResponse<VideoResponse>, ApiError> {
let (video_info, pages_info) = tokio::try_join!(
video::Entity::find_by_id(id).into_partial_model::<VideoInfo>().one(&db),
page::Entity::find()
.filter(page::Column::VideoId.eq(id))
.order_by_asc(page::Column::Cid)
.into_partial_model::<PageInfo>()
.all(&db)
)?;
let Some(video_info) = video_info else {
return Err(InnerApiError::NotFound(id).into());
};
Ok(ApiResponse::ok(VideoResponse {
video: video_info,
pages: pages_info,
}))
}
pub async fn reset_video_status(
Path(id): Path<i32>,
Extension(db): Extension<DatabaseConnection>,
Json(request): Json<ResetVideoStatusRequest>,
) -> Result<ApiResponse<ResetVideoResponse>, ApiError> {
let (video_info, pages_info) = tokio::try_join!(
video::Entity::find_by_id(id).into_partial_model::<VideoInfo>().one(&db),
page::Entity::find()
.filter(page::Column::VideoId.eq(id))
.order_by_asc(page::Column::Cid)
.into_partial_model::<PageInfo>()
.all(&db)
)?;
let Some(mut video_info) = video_info else {
return Err(InnerApiError::NotFound(id).into());
};
let resetted_pages_info = pages_info
.into_iter()
.filter_map(|mut page_info| {
let mut page_status = PageStatus::from(page_info.download_status);
if (request.force && page_status.force_reset_failed()) || page_status.reset_failed() {
page_info.download_status = page_status.into();
Some(page_info)
} else {
None
}
})
.collect::<Vec<_>>();
let mut video_status = VideoStatus::from(video_info.download_status);
let mut video_resetted = (request.force && video_status.force_reset_failed()) || video_status.reset_failed();
if !resetted_pages_info.is_empty() {
video_status.set(4, 0); // reset the "page download" status to 0
video_resetted = true;
}
let resetted_videos_info = if video_resetted {
video_info.download_status = video_status.into();
vec![&video_info]
} else {
vec![]
};
let resetted = !resetted_videos_info.is_empty() || !resetted_pages_info.is_empty();
if resetted {
let txn = db.begin().await?;
if !resetted_videos_info.is_empty() {
// at most one element, so no batching needed
update_video_download_status::<VideoInfo>(&txn, &resetted_videos_info, None).await?;
}
if !resetted_pages_info.is_empty() {
update_page_download_status(&txn, &resetted_pages_info, Some(500)).await?;
}
txn.commit().await?;
}
Ok(ApiResponse::ok(ResetVideoResponse {
resetted,
video: video_info,
pages: resetted_pages_info,
}))
}
pub async fn clear_and_reset_video_status(
Path(id): Path<i32>,
Extension(db): Extension<DatabaseConnection>,
) -> Result<ApiResponse<ClearAndResetVideoStatusResponse>, ApiError> {
let video_info = video::Entity::find_by_id(id).one(&db).await?;
let Some(video_info) = video_info else {
return Err(InnerApiError::NotFound(id).into());
};
let txn = db.begin().await?;
let mut video_info = video_info.into_active_model();
video_info.single_page = Set(None);
video_info.download_status = Set(0);
video_info.valid = Set(true);
let video_info = video_info.update(&txn).await?;
page::Entity::delete_many()
.filter(page::Column::VideoId.eq(id))
.exec(&txn)
.await?;
txn.commit().await?;
let video_info = video_info.try_into_model()?;
let warning = if video_info.path.is_empty() {
None
} else {
tokio::fs::remove_dir_all(&video_info.path)
.await
.context(format!("删除本地路径「{}」失败", video_info.path))
.err()
.map(|e| format!("{:#}", e))
};
Ok(ApiResponse::ok(ClearAndResetVideoStatusResponse {
warning,
video: VideoInfo {
id: video_info.id,
bvid: video_info.bvid,
name: video_info.name,
upper_name: video_info.upper_name,
valid: video_info.valid,
should_download: video_info.should_download,
download_status: video_info.download_status,
collection_id: video_info.collection_id,
favorite_id: video_info.favorite_id,
submission_id: video_info.submission_id,
watch_later_id: video_info.watch_later_id,
},
}))
}
pub async fn reset_filtered_video_status(
Extension(db): Extension<DatabaseConnection>,
Json(request): Json<ResetFilteredVideoStatusRequest>,
) -> Result<ApiResponse<ResetFilteredVideosResponse>, ApiError> {
let mut query = video::Entity::find();
for (field, column) in [
(request.collection, video::Column::CollectionId),
(request.favorite, video::Column::FavoriteId),
(request.submission, video::Column::SubmissionId),
(request.watch_later, video::Column::WatchLaterId),
] {
if let Some(id) = field {
query = query.filter(column.eq(id));
}
}
if let Some(query_word) = request.query {
query = query.filter(
video::Column::Name
.contains(&query_word)
.or(video::Column::Bvid.contains(query_word)),
);
}
if let Some(status_filter) = request.status_filter {
query = query.filter(status_filter.to_video_query());
}
if let Some(validation_filter) = request.validation_filter {
query = query.filter(validation_filter.to_video_query());
}
let all_videos = query.into_partial_model::<SimpleVideoInfo>().all(&db).await?;
let all_pages = page::Entity::find()
.filter(page::Column::VideoId.is_in(all_videos.iter().map(|v| v.id)))
.into_partial_model::<SimplePageInfo>()
.all(&db)
.await?;
let resetted_pages_info = all_pages
.into_iter()
.filter_map(|mut page_info| {
let mut page_status = PageStatus::from(page_info.download_status);
if (request.force && page_status.force_reset_failed()) || page_status.reset_failed() {
page_info.download_status = page_status.into();
Some(page_info)
} else {
None
}
})
.collect::<Vec<_>>();
let video_ids_with_resetted_pages: HashSet<i32> = resetted_pages_info.iter().map(|page| page.video_id).collect();
let resetted_videos_info = all_videos
.into_iter()
.filter_map(|mut video_info| {
let mut video_status = VideoStatus::from(video_info.download_status);
let mut video_resetted =
(request.force && video_status.force_reset_failed()) || video_status.reset_failed();
if video_ids_with_resetted_pages.contains(&video_info.id) {
video_status.set(4, 0); // reset the "page download" status to 0
video_resetted = true;
}
if video_resetted {
video_info.download_status = video_status.into();
Some(video_info)
} else {
None
}
})
.collect::<Vec<_>>();
let has_video_updates = !resetted_videos_info.is_empty();
let has_page_updates = !resetted_pages_info.is_empty();
if has_video_updates || has_page_updates {
let txn = db.begin().await?;
if has_video_updates {
update_video_download_status(&txn, &resetted_videos_info, Some(500)).await?;
}
if has_page_updates {
update_page_download_status(&txn, &resetted_pages_info, Some(500)).await?;
}
txn.commit().await?;
}
Ok(ApiResponse::ok(ResetFilteredVideosResponse {
resetted: has_video_updates || has_page_updates,
resetted_videos_count: resetted_videos_info.len(),
resetted_pages_count: resetted_pages_info.len(),
}))
}
pub async fn update_video_status(
Path(id): Path<i32>,
Extension(db): Extension<DatabaseConnection>,
ValidatedJson(request): ValidatedJson<UpdateVideoStatusRequest>,
) -> Result<ApiResponse<UpdateVideoStatusResponse>, ApiError> {
let (video_info, mut pages_info) = tokio::try_join!(
video::Entity::find_by_id(id).into_partial_model::<VideoInfo>().one(&db),
page::Entity::find()
.filter(page::Column::VideoId.eq(id))
.order_by_asc(page::Column::Cid)
.into_partial_model::<PageInfo>()
.all(&db)
)?;
let Some(mut video_info) = video_info else {
return Err(InnerApiError::NotFound(id).into());
};
let mut video_status = VideoStatus::from(video_info.download_status);
for update in &request.video_updates {
video_status.set(update.status_index, update.status_value);
}
video_info.download_status = video_status.into();
let mut updated_pages_info = Vec::new();
let mut page_id_map = pages_info
.iter_mut()
.map(|page| (page.id, page))
.collect::<std::collections::HashMap<_, _>>();
for page_update in &request.page_updates {
if let Some(page_info) = page_id_map.remove(&page_update.page_id) {
let mut page_status = PageStatus::from(page_info.download_status);
for update in &page_update.updates {
page_status.set(update.status_index, update.status_value);
}
page_info.download_status = page_status.into();
updated_pages_info.push(page_info);
}
}
let has_video_updates = !request.video_updates.is_empty();
let has_page_updates = !updated_pages_info.is_empty();
if has_video_updates || has_page_updates {
let txn = db.begin().await?;
if has_video_updates {
update_video_download_status::<VideoInfo>(&txn, &[&video_info], None).await?;
}
if has_page_updates {
update_page_download_status::<PageInfo>(&txn, &updated_pages_info, None).await?;
}
txn.commit().await?;
}
Ok(ApiResponse::ok(UpdateVideoStatusResponse {
success: has_video_updates || has_page_updates,
video: video_info,
pages: pages_info,
}))
}
pub async fn update_filtered_video_status(
Extension(db): Extension<DatabaseConnection>,
ValidatedJson(request): ValidatedJson<UpdateFilteredVideoStatusRequest>,
) -> Result<ApiResponse<UpdateFilteredVideoStatusResponse>, ApiError> {
let mut query = video::Entity::find();
for (field, column) in [
(request.collection, video::Column::CollectionId),
(request.favorite, video::Column::FavoriteId),
(request.submission, video::Column::SubmissionId),
(request.watch_later, video::Column::WatchLaterId),
] {
if let Some(id) = field {
query = query.filter(column.eq(id));
}
}
if let Some(query_word) = request.query {
query = query.filter(
video::Column::Name
.contains(&query_word)
.or(video::Column::Bvid.contains(query_word)),
);
}
if let Some(status_filter) = request.status_filter {
query = query.filter(status_filter.to_video_query());
}
if let Some(validation_filter) = request.validation_filter {
query = query.filter(validation_filter.to_video_query());
}
let mut all_videos = query.into_partial_model::<SimpleVideoInfo>().all(&db).await?;
let mut all_pages = page::Entity::find()
.filter(page::Column::VideoId.is_in(all_videos.iter().map(|v| v.id)))
.into_partial_model::<SimplePageInfo>()
.all(&db)
.await?;
for video_info in all_videos.iter_mut() {
let mut video_status = VideoStatus::from(video_info.download_status);
for update in &request.video_updates {
video_status.set(update.status_index, update.status_value);
}
video_info.download_status = video_status.into();
}
for page_info in all_pages.iter_mut() {
let mut page_status = PageStatus::from(page_info.download_status);
for update in &request.page_updates {
page_status.set(update.status_index, update.status_value);
}
page_info.download_status = page_status.into();
}
let has_video_updates = !all_videos.is_empty();
let has_page_updates = !all_pages.is_empty();
if has_video_updates || has_page_updates {
let txn = db.begin().await?;
if has_video_updates {
update_video_download_status(&txn, &all_videos, Some(500)).await?;
}
if has_page_updates {
update_page_download_status(&txn, &all_pages, Some(500)).await?;
}
txn.commit().await?;
}
Ok(ApiResponse::ok(UpdateFilteredVideoStatusResponse {
success: has_video_updates || has_page_updates,
updated_videos_count: all_videos.len(),
updated_pages_count: all_pages.len(),
}))
}
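The list handlers above all assemble their query the same way: each optional request field contributes one predicate, chained onto the base query. A minimal sketch of that pattern with a stand-in query type (the `Query` and `build` names here are hypothetical placeholders, not sea_orm's API):

```rust
// Sketch of the optional-filter chaining used by get_videos and friends.
// `Query` stands in for sea_orm's query builder; names are hypothetical.
#[derive(Default)]
struct Query {
    predicates: Vec<String>,
}

impl Query {
    fn filter(mut self, pred: String) -> Self {
        self.predicates.push(pred);
        self
    }
}

fn build(collection: Option<i32>, favorite: Option<i32>, word: Option<&str>) -> Query {
    let mut q = Query::default();
    // Each (Option, column) pair adds a predicate only when the field is set.
    for (field, column) in [(collection, "collection_id"), (favorite, "favorite_id")] {
        if let Some(id) = field {
            q = q.filter(format!("{column} = {id}"));
        }
    }
    if let Some(w) = word {
        q = q.filter(format!("name LIKE '%{w}%'"));
    }
    q
}

fn main() {
    let q = build(Some(7), None, Some("rust"));
    println!("{:?}", q.predicates);
}
```

Unset fields simply contribute nothing, so one code path serves every combination of filters.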


@@ -0,0 +1,54 @@
use std::collections::VecDeque;
use std::sync::Arc;
use parking_lot::RwLock;
use tokio::sync::broadcast;
use tracing_subscriber::fmt::MakeWriter;
pub const MAX_HISTORY_LOGS: usize = 200;
/// LogHelper holds the log broadcast sender and a buffer of log history
pub struct LogHelper {
pub sender: broadcast::Sender<String>,
pub log_history: Arc<RwLock<VecDeque<String>>>,
}
impl LogHelper {
pub fn new(sender: broadcast::Sender<String>, log_history: Arc<RwLock<VecDeque<String>>>) -> Self {
LogHelper { sender, log_history }
}
}
impl<'a> MakeWriter<'a> for LogHelper {
type Writer = Self;
fn make_writer(&'a self) -> Self::Writer {
self.clone()
}
}
impl std::io::Write for LogHelper {
fn write(&mut self, buf: &[u8]) -> std::io::Result<usize> {
let log_message = String::from_utf8_lossy(buf).to_string();
let _ = self.sender.send(log_message.clone());
let mut history = self.log_history.write();
history.push_back(log_message);
if history.len() > MAX_HISTORY_LOGS {
history.pop_front();
}
Ok(buf.len())
}
fn flush(&mut self) -> std::io::Result<()> {
Ok(())
}
}
impl Clone for LogHelper {
fn clone(&self) -> Self {
LogHelper {
sender: self.sender.clone(),
log_history: self.log_history.clone(),
}
}
}
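`LogHelper::write` above keeps the history bounded: push the new line, then trim the front once the buffer exceeds `MAX_HISTORY_LOGS`. The same behavior in isolation, with the capacity shrunk from 200 to 3 for the demo (`push_log` is a hypothetical free-function rendering of that logic):

```rust
use std::collections::VecDeque;

// Sketch of the bounded-history push used by LogHelper::write.
// Capacity shrunk to 3 (the real MAX_HISTORY_LOGS is 200) for demonstration.
const MAX_HISTORY: usize = 3;

fn push_log(history: &mut VecDeque<String>, msg: String) {
    history.push_back(msg);
    if history.len() > MAX_HISTORY {
        history.pop_front(); // drop the oldest line
    }
}

fn main() {
    let mut history = VecDeque::new();
    for i in 0..5 {
        push_log(&mut history, format!("log {i}"));
    }
    // Only the newest MAX_HISTORY entries survive.
    println!("{history:?}");
}
```

This is why a client subscribing to logs mid-run (see `new_log_handler` in the websocket module) can still be served the most recent lines before the live stream takes over.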


@@ -0,0 +1,338 @@
mod log_helper;
use std::sync::{Arc, LazyLock};
use std::time::Duration;
use axum::extract::WebSocketUpgrade;
use axum::extract::ws::{Message, WebSocket};
use axum::response::IntoResponse;
use axum::routing::any;
use axum::{Extension, Router};
use dashmap::DashMap;
use futures::stream::{SplitSink, SplitStream};
use futures::{SinkExt, StreamExt, future};
use itertools::Itertools;
pub use log_helper::{LogHelper, MAX_HISTORY_LOGS};
use parking_lot::RwLock;
use serde::{Deserialize, Serialize};
use sysinfo::{
CpuRefreshKind, DiskRefreshKind, Disks, MemoryRefreshKind, Pid, ProcessRefreshKind, ProcessesToUpdate, System,
get_current_pid,
};
use tokio::sync::mpsc;
use tokio::{pin, select};
use tokio_stream::wrappers::{BroadcastStream, WatchStream};
use tokio_util::future::FutureExt;
use tokio_util::sync::CancellationToken;
use uuid::Uuid;
use crate::api::response::SysInfo;
use crate::task::{DownloadTaskManager, TaskStatus};
static WEBSOCKET_HANDLER: LazyLock<WebSocketHandler> = LazyLock::new(WebSocketHandler::new);
pub(super) fn router() -> Router {
Router::new().route("/ws", any(websocket_handler))
}
async fn websocket_handler(ws: WebSocketUpgrade, Extension(log_writer): Extension<LogHelper>) -> impl IntoResponse {
ws.on_upgrade(|socket| handle_socket(socket, log_writer))
}
// Event type enum
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
enum EventType {
Logs,
Tasks,
SysInfo,
}
#[derive(Deserialize)]
#[serde(rename_all = "camelCase")]
enum ClientEvent {
Subscribe(EventType),
Unsubscribe(EventType),
}
#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
enum ServerEvent {
Logs(String),
Tasks(TaskStatus),
SysInfo(SysInfo),
}
struct WebSocketHandler {
sysinfo_subscribers: Arc<DashMap<Uuid, mpsc::Sender<ServerEvent>>>,
sysinfo_cancel: RwLock<Option<CancellationToken>>,
}
impl WebSocketHandler {
fn new() -> Self {
Self {
sysinfo_subscribers: Arc::new(DashMap::new()),
sysinfo_cancel: RwLock::new(None),
}
}
/// Push messages to the client
async fn handle_sender(&self, mut sender: SplitSink<WebSocket, Message>, mut rx: mpsc::Receiver<ServerEvent>) {
while let Some(event) = rx.recv().await {
let text = match serde_json::to_string(&event) {
Ok(text) => text,
Err(e) => {
error!("Failed to serialize event: {:?}", e);
continue;
}
};
if let Err(e) = sender.send(Message::Text(text.into())).await {
error!("Failed to send message: {:?}", e);
break;
}
}
}
/// Receive messages from the client
async fn handle_receiver(
&self,
mut receiver: SplitStream<WebSocket>,
tx: mpsc::Sender<ServerEvent>,
uuid: Uuid,
log_writer: LogHelper,
) {
// Log and task-status pushing is stream-driven, so each ws connection can keep its own handler tasks.
// System info, however, is polled server-side and then pushed; per-connection handlers would make every
// connection poll independently, so a global subscriber registry is used instead and all connections
// share a single system-info polling task.
let (mut log_cancel, mut task_cancel) = (None, None);
while let Some(Ok(msg)) = receiver.next().await {
let Message::Text(text) = msg else {
continue;
};
let client_event = match serde_json::from_str::<ClientEvent>(&text) {
Ok(event) => event,
Err(e) => {
error!("Failed to parse client message: {:?}, error: {:?}", text, e);
continue;
}
};
match client_event {
ClientEvent::Subscribe(EventType::Logs) => {
if log_cancel.is_none() {
log_cancel = Some(self.new_log_handler(tx.clone(), &log_writer));
}
}
ClientEvent::Unsubscribe(EventType::Logs) => {
if let Some(cancel) = log_cancel.take() {
cancel.cancel();
}
}
ClientEvent::Subscribe(EventType::Tasks) => {
if task_cancel.is_none() {
task_cancel = Some(self.new_task_handler(tx.clone()));
}
}
ClientEvent::Unsubscribe(EventType::Tasks) => {
if let Some(cancel) = task_cancel.take() {
cancel.cancel();
}
}
ClientEvent::Subscribe(EventType::SysInfo) => {
self.add_sysinfo_subscriber(uuid, tx.clone());
}
ClientEvent::Unsubscribe(EventType::SysInfo) => {
self.remove_sysinfo_subscriber(uuid);
}
}
}
// Connection closed: cancel any tasks still running
if let Some(cancel) = log_cancel {
cancel.cancel();
}
if let Some(cancel) = task_cancel {
cancel.cancel();
}
self.remove_sysinfo_subscriber(uuid);
}
/// Add a global system-info subscriber
fn add_sysinfo_subscriber(&self, uuid: Uuid, sender: mpsc::Sender<ServerEvent>) {
self.sysinfo_subscribers.insert(uuid, sender);
if self.sysinfo_cancel.read().is_none() {
let mut sys_info_cancel = self.sysinfo_cancel.write();
if sys_info_cancel.is_some() {
return;
}
*sys_info_cancel = Some(self.new_sysinfo_handler(self.sysinfo_subscribers.clone()));
}
}
/// Remove a global system-info subscriber
fn remove_sysinfo_subscriber(&self, uuid: Uuid) {
self.sysinfo_subscribers.remove(&uuid);
if self.sysinfo_subscribers.is_empty()
&& let Some(token) = self.sysinfo_cancel.write().take()
{
token.cancel();
}
}
/// Spawn the async log-push task, returning its cancellation token
fn new_log_handler(&self, tx: mpsc::Sender<ServerEvent>, log_writer: &LogHelper) -> CancellationToken {
let cancel_token = CancellationToken::new();
// Read the history logs
let history = log_writer.log_history.read();
let history_logs = history.iter().cloned().collect::<Vec<String>>();
drop(history);
// Get a receiver for the log broadcast channel
let log_rx = log_writer.sender.subscribe();
tokio::spawn(
async move {
// Chain the history logs with the live log stream
let log_stream = futures::stream::iter(history_logs)
.chain(BroadcastStream::new(log_rx).filter_map(async |msg| msg.ok()))
.map(ServerEvent::Logs);
pin!(log_stream);
while let Some(event) = log_stream.next().await {
if let Err(e) = tx.send(event).await {
error!("Failed to send log event: {:?}", e);
break;
}
}
}
.with_cancellation_token_owned(cancel_token.clone()),
);
cancel_token
}
/// Spawn the async task-status push task, returning its cancellation token
fn new_task_handler(&self, tx: mpsc::Sender<ServerEvent>) -> CancellationToken {
let cancel_token = CancellationToken::new();
tokio::spawn(
async move {
let mut stream = WatchStream::new(DownloadTaskManager::get().subscribe()).map(ServerEvent::Tasks);
while let Some(event) = stream.next().await {
if let Err(e) = tx.send(event).await {
error!("Failed to send task status: {:?}", e);
break;
}
}
}
.with_cancellation_token_owned(cancel_token.clone()),
);
cancel_token
}
/// Spawn the async system-info push task and return its cancellation token
fn new_sysinfo_handler(
&self,
sysinfo_subscribers: Arc<DashMap<Uuid, mpsc::Sender<ServerEvent>>>,
) -> CancellationToken {
let cancel_token = CancellationToken::new();
let cancel_token_clone = cancel_token.clone();
tokio::spawn(async move {
let (tx, mut rx) = mpsc::channel(10);
let (tick_tx, mut tick_rx) = mpsc::channel(3);
// Poll system info on a blocking thread to avoid stalling the async runtime
tokio::task::spawn_blocking(move || {
// On linux/mac/windows this method always returns Some(pid), so the expect is essentially safe
let self_pid = get_current_pid().expect("Unsupported platform");
let mut system = System::new();
let mut disks = Disks::new();
while tick_rx.blocking_recv().is_some() {
system.refresh_needed(self_pid);
disks.refresh_needed(self_pid);
let process = match system.process(self_pid) {
Some(p) => p,
None => continue,
};
let (available, total) = disks
.iter()
.filter(|d| {
d.available_space() > 0
&& d.total_space() > 0
// Roughly filter out some virtual filesystems
&& !["overlay", "tmpfs", "sysfs", "proc"]
.contains(&d.file_system().to_string_lossy().as_ref())
})
.unique_by(|d| d.name())
.fold((0, 0), |(mut available, mut total), d| {
available += d.available_space();
total += d.total_space();
(available, total)
});
let sys_info = SysInfo {
timestamp: chrono::Utc::now().timestamp_millis(),
total_memory: system.total_memory(),
used_memory: system.used_memory(),
process_memory: process.memory(),
used_cpu: system.global_cpu_usage(),
process_cpu: process.cpu_usage() / system.cpus().len() as f32,
total_disk: total,
available_disk: available,
};
if tx.blocking_send(sys_info).is_err() {
break;
}
}
});
// The async side receives the system info sent over by the blocking thread and pushes it to all subscribers.
// On cancellation, the tick sender is dropped so the blocking thread exits cleanly.
let mut interval = tokio::time::interval(Duration::from_secs(2));
loop {
select! {
_ = cancel_token_clone.cancelled() => {
drop(tick_tx);
break;
}
_ = interval.tick() => {
let _ = tick_tx.send(()).await;
}
Some(sys_info) = rx.recv() => {
future::join_all(sysinfo_subscribers.iter().map(async |subscriber| {
if let Err(e) = subscriber.send(ServerEvent::SysInfo(sys_info)).await {
error!(
"Failed to send sysinfo event to subscriber {}: {:?}",
subscriber.key(),
e
);
}
}))
.await;
}
}
}
});
cancel_token
}
}
async fn handle_socket(socket: WebSocket, log_writer: LogHelper) {
let (ws_sender, ws_receiver) = socket.split();
let uuid = Uuid::new_v4();
let (tx, rx) = tokio::sync::mpsc::channel(100);
tokio::spawn(WEBSOCKET_HANDLER.handle_sender(ws_sender, rx));
tokio::spawn(WEBSOCKET_HANDLER.handle_receiver(ws_receiver, tx, uuid, log_writer));
}
trait SysInfoExt {
fn refresh_needed(&mut self, self_pid: Pid);
}
impl SysInfoExt for System {
fn refresh_needed(&mut self, self_pid: Pid) {
self.refresh_memory_specifics(MemoryRefreshKind::nothing().with_ram());
self.refresh_cpu_specifics(CpuRefreshKind::nothing().with_cpu_usage());
self.refresh_processes_specifics(
ProcessesToUpdate::Some(&[self_pid]),
true,
ProcessRefreshKind::nothing().with_cpu().with_memory(),
);
}
}
impl SysInfoExt for Disks {
fn refresh_needed(&mut self, _self_pid: Pid) {
self.refresh_specifics(true, DiskRefreshKind::nothing().with_storage());
}
}
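The `add_sysinfo_subscriber`/`sysinfo_cancel` pair above uses a double-checked locking pattern: a cheap read-lock probe, then a write-lock re-check before spawning the handler. A minimal standalone sketch of that pattern with `std::sync::RwLock` (the crate itself uses `parking_lot`; names here are illustrative, not from the codebase):

```rust
use std::sync::RwLock;

// Double-checked initialization: read-lock probe first, then a write-lock
// re-check, so two racing callers cannot both run the init closure.
fn ensure_started(slot: &RwLock<Option<u64>>, start: impl FnOnce() -> u64) {
    if slot.read().unwrap().is_none() {
        let mut guard = slot.write().unwrap();
        if guard.is_some() {
            // another thread won the race between our read and our write
            return;
        }
        *guard = Some(start());
    }
}

fn main() {
    let slot = RwLock::new(None);
    ensure_started(&slot, || 1);
    ensure_started(&slot, || 2); // slot already filled; closure is not called
    assert_eq!(*slot.read().unwrap(), Some(1));
}
```

The re-check under the write lock is what makes this safe: the read lock is released before the write lock is taken, so another thread may initialize the slot in between.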


@@ -0,0 +1,119 @@
use std::borrow::Cow;
use anyhow::Error;
use axum::Json;
use axum::extract::rejection::JsonRejection;
use axum::extract::{FromRequest, Request};
use axum::response::IntoResponse;
use reqwest::StatusCode;
use serde::Serialize;
use serde::de::DeserializeOwned;
use validator::Validate;
use crate::api::error::InnerApiError;
#[derive(Serialize)]
pub struct ApiResponse<T: Serialize> {
status_code: u16,
#[serde(skip_serializing_if = "Option::is_none")]
data: Option<T>,
#[serde(skip_serializing_if = "Option::is_none")]
message: Option<Cow<'static, str>>,
}
impl<T: Serialize> ApiResponse<T> {
pub fn ok(data: T) -> Self {
Self {
status_code: 200,
data: Some(data),
message: None,
}
}
pub fn bad_request(message: impl Into<Cow<'static, str>>) -> Self {
Self {
status_code: 400,
data: None,
message: Some(message.into()),
}
}
pub fn unauthorized(message: impl Into<Cow<'static, str>>) -> Self {
Self {
status_code: 401,
data: None,
message: Some(message.into()),
}
}
pub fn not_found(message: impl Into<Cow<'static, str>>) -> Self {
Self {
status_code: 404,
data: None,
message: Some(message.into()),
}
}
pub fn internal_server_error(message: impl Into<Cow<'static, str>>) -> Self {
Self {
status_code: 500,
data: None,
message: Some(message.into()),
}
}
}
impl<T: Serialize> IntoResponse for ApiResponse<T> {
fn into_response(self) -> axum::response::Response {
(
StatusCode::from_u16(self.status_code).expect("invalid Http Status Code"),
Json(self),
)
.into_response()
}
}
pub struct ApiError(Error);
impl<E> From<E> for ApiError
where
E: Into<anyhow::Error>,
{
fn from(value: E) -> Self {
Self(value.into())
}
}
impl IntoResponse for ApiError {
fn into_response(self) -> axum::response::Response {
if let Some(inner_error) = self.0.downcast_ref::<InnerApiError>() {
match inner_error {
InnerApiError::NotFound(_) => return ApiResponse::<()>::not_found(self.0.to_string()).into_response(),
InnerApiError::BadRequest(_) => {
return ApiResponse::<()>::bad_request(self.0.to_string()).into_response();
}
}
}
ApiResponse::<()>::internal_server_error(self.0.to_string()).into_response()
}
}
#[derive(Debug, Clone, Copy, Default)]
pub struct ValidatedJson<T>(pub T);
impl<T, S> FromRequest<S> for ValidatedJson<T>
where
T: DeserializeOwned + Validate,
S: Send + Sync,
Json<T>: FromRequest<S, Rejection = JsonRejection>,
{
type Rejection = ApiError;
async fn from_request(req: Request, state: &S) -> Result<Self, Self::Rejection> {
let Json(value) = Json::<T>::from_request(req, state).await?;
value
.validate()
.map_err(|e| ApiError::from(InnerApiError::BadRequest(e.to_string())))?;
Ok(ValidatedJson(value))
}
}
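The error-to-status mapping in `ApiError::into_response` can be summarized as a small sketch. The enum below is a hypothetical stand-in, not the crate's real `InnerApiError`:

```rust
// Known error variants map to dedicated statuses; anything that is not a
// recognized API error falls through to 500, matching ApiError::into_response.
#[derive(Clone, Copy)]
enum KnownError {
    NotFound,
    BadRequest,
}

fn status_for(err: Option<KnownError>) -> u16 {
    match err {
        Some(KnownError::NotFound) => 404,
        Some(KnownError::BadRequest) => 400,
        None => 500, // opaque anyhow::Error: internal server error
    }
}

fn main() {
    assert_eq!(status_for(Some(KnownError::NotFound)), 404);
    assert_eq!(status_for(Some(KnownError::BadRequest)), 400);
    assert_eq!(status_for(None), 500);
}
```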


@@ -0,0 +1,484 @@
use anyhow::{Context, Result, bail};
use serde::{Deserialize, Serialize};
use crate::bilibili::error::BiliError;
pub struct PageAnalyzer {
pub(crate) info: serde_json::Value,
}
#[derive(Debug, strum::FromRepr, PartialEq, Eq, PartialOrd, Ord, Serialize, Deserialize, Clone)]
pub enum VideoQuality {
Quality360p = 16,
Quality480p = 32,
Quality720p = 64,
Quality1080p = 80,
Quality1080pPLUS = 112,
Quality1080p60 = 116,
Quality4k = 120,
QualityHdr = 125,
QualityDolby = 126,
Quality8k = 127,
}
#[derive(Debug, Clone, Copy, strum::FromRepr, PartialEq, Eq, Serialize, Deserialize)]
pub enum AudioQuality {
Quality64k = 30216,
Quality132k = 30232,
QualityDolby = 30250,
QualityHiRES = 30251,
Quality192k = 30280,
}
impl Ord for AudioQuality {
fn cmp(&self, other: &Self) -> std::cmp::Ordering {
self.as_sort_key().cmp(&other.as_sort_key())
}
}
impl PartialOrd for AudioQuality {
fn partial_cmp(&self, other: &AudioQuality) -> Option<std::cmp::Ordering> {
Some(self.cmp(other))
}
}
impl AudioQuality {
pub fn as_sort_key(&self) -> isize {
match self {
// This sorts Dolby and Hi-RES after 192k while keeping their relative order unchanged
Self::QualityHiRES | Self::QualityDolby => (*self as isize) + 40,
_ => *self as isize,
}
}
}
#[allow(clippy::upper_case_acronyms)]
#[derive(
Debug, strum::EnumString, strum::Display, strum::AsRefStr, PartialEq, PartialOrd, Serialize, Deserialize, Clone,
)]
pub enum VideoCodecs {
#[strum(serialize = "hev")]
HEV,
#[strum(serialize = "avc")]
AVC,
#[strum(serialize = "av01")]
AV1,
}
impl TryFrom<u64> for VideoCodecs {
type Error = anyhow::Error;
fn try_from(value: u64) -> std::result::Result<Self, Self::Error> {
// https://socialsisteryi.github.io/bilibili-API-collect/docs/video/videostream_url.html#%E8%A7%86%E9%A2%91%E7%BC%96%E7%A0%81%E4%BB%A3%E7%A0%81
match value {
7 => Ok(Self::AVC),
12 => Ok(Self::HEV),
13 => Ok(Self::AV1),
_ => bail!("invalid video codecs id: {}", value),
}
}
}
// Filtering preferences for video streams
#[derive(Serialize, Deserialize, Clone)]
pub struct FilterOption {
pub video_max_quality: VideoQuality,
pub video_min_quality: VideoQuality,
pub audio_max_quality: AudioQuality,
pub audio_min_quality: AudioQuality,
pub codecs: Vec<VideoCodecs>,
pub no_dolby_video: bool,
pub no_dolby_audio: bool,
pub no_hdr: bool,
pub no_hires: bool,
}
impl Default for FilterOption {
fn default() -> Self {
Self {
video_max_quality: VideoQuality::Quality8k,
video_min_quality: VideoQuality::Quality360p,
audio_max_quality: AudioQuality::QualityHiRES,
audio_min_quality: AudioQuality::Quality64k,
codecs: vec![VideoCodecs::AVC, VideoCodecs::HEV, VideoCodecs::AV1],
no_dolby_video: false,
no_dolby_audio: false,
no_hdr: false,
no_hires: false,
}
}
}
// The five stream types from the upstream project; in practice only Flv, DashVideo and DashAudio appear to be used
#[derive(Debug, PartialEq, PartialOrd)]
pub enum Stream {
Flv(String),
Html5Mp4(String),
EpisodeTryMp4(String),
DashVideo {
url: String,
backup_url: Vec<String>,
quality: VideoQuality,
codecs: VideoCodecs,
},
DashAudio {
url: String,
backup_url: Vec<String>,
quality: AudioQuality,
},
}
// A generic way to obtain stream URLs, handed off to the Downloader
impl Stream {
pub fn urls(&self, enable_cdn_sorting: bool) -> Vec<&str> {
match self {
Self::Flv(url) | Self::Html5Mp4(url) | Self::EpisodeTryMp4(url) => vec![url],
Self::DashVideo { url, backup_url, .. } | Self::DashAudio { url, backup_url, .. } => {
let mut urls = std::iter::once(url.as_str())
.chain(backup_url.iter().map(|s| s.as_str()))
.collect::<Vec<_>>();
if enable_cdn_sorting {
urls.sort_by_key(|u| {
if u.contains("upos-") {
0 // vendor CDN
} else if u.contains("cn-") {
1 // self-hosted CDN
} else if u.contains("mcdn") {
2 // mcdn
} else {
3 // pcdn or others
}
});
}
urls
}
}
}
}
/// The best filtered result for a video's streams. Two shapes are possible:
/// 1. A single mixed stream, returned as Mixed
/// 2. Separate video and audio, returned as VideoAudio, where the audio stream may be absent (for silent videos such as BV1J7411H7KQ)
#[derive(Debug)]
pub enum BestStream {
VideoAudio { video: Stream, audio: Option<Stream> },
Mixed(Stream),
}
impl PageAnalyzer {
pub fn new(info: serde_json::Value) -> Self {
Self { info }
}
fn is_flv_stream(&self) -> bool {
self.info.get("durl").is_some() && self.info["format"].as_str().is_some_and(|f| f.starts_with("flv"))
}
fn is_html5_mp4_stream(&self) -> bool {
self.info.get("durl").is_some()
&& self.info["format"].as_str().is_some_and(|f| f.starts_with("mp4"))
&& self.info["is_html5"].as_bool().is_some_and(|b| b)
}
fn is_episode_try_mp4_stream(&self) -> bool {
self.info.get("durl").is_some()
&& self.info["format"].as_str().is_some_and(|f| f.starts_with("mp4"))
&& self.info["is_html5"].as_bool().is_none_or(|b| !b)
}
/// 获取所有的视频、音频流,并根据条件筛选
fn streams(&mut self, filter_option: &FilterOption) -> Result<Vec<Stream>> {
if self.is_flv_stream() {
return Ok(vec![Stream::Flv(
self.info["durl"][0]["url"]
.as_str()
.context("invalid flv stream")?
.to_string(),
)]);
}
if self.is_html5_mp4_stream() {
return Ok(vec![Stream::Html5Mp4(
self.info["durl"][0]["url"]
.as_str()
.context("invalid html5 mp4 stream")?
.to_string(),
)]);
}
if self.is_episode_try_mp4_stream() {
return Ok(vec![Stream::EpisodeTryMp4(
self.info["durl"][0]["url"]
.as_str()
.context("invalid episode try mp4 stream")?
.to_string(),
)]);
}
let mut streams: Vec<Stream> = Vec::new();
for video in self
.info
.pointer_mut("/dash/video")
.and_then(|v| v.as_array_mut())
.ok_or(BiliError::VideoStreamsEmpty)?
.iter_mut()
{
let (Some(url), Some(quality), Some(codecs_id)) = (
video["baseUrl"].as_str(),
video["id"].as_u64(),
video["codecid"].as_u64(),
) else {
continue;
};
let quality = VideoQuality::from_repr(quality as usize).context("invalid video stream quality")?;
let Ok(codecs) = codecs_id.try_into() else {
continue;
};
if !filter_option.codecs.contains(&codecs)
|| quality < filter_option.video_min_quality
|| quality > filter_option.video_max_quality
|| (quality == VideoQuality::QualityHdr && filter_option.no_hdr)
|| (quality == VideoQuality::QualityDolby && filter_option.no_dolby_video)
{
continue;
}
streams.push(Stream::DashVideo {
url: url.to_string(),
backup_url: serde_json::from_value(video["backupUrl"].take()).unwrap_or_default(),
quality,
codecs,
});
}
if let Some(audios) = self.info.pointer_mut("/dash/audio").and_then(|a| a.as_array_mut()) {
for audio in audios.iter_mut() {
let (Some(url), Some(quality)) = (audio["baseUrl"].as_str(), audio["id"].as_u64()) else {
continue;
};
let quality = AudioQuality::from_repr(quality as usize).context("invalid audio stream quality")?;
if quality < filter_option.audio_min_quality || quality > filter_option.audio_max_quality {
continue;
}
streams.push(Stream::DashAudio {
url: url.to_string(),
backup_url: serde_json::from_value(audio["backupUrl"].take()).unwrap_or_default(),
quality,
});
}
}
if !filter_option.no_hires
&& let Some(flac) = self
.info
.pointer_mut("/dash/flac/audio")
.and_then(|f| f.as_object_mut())
{
let (Some(url), Some(quality)) = (flac["baseUrl"].as_str(), flac["id"].as_u64()) else {
bail!("invalid flac stream, flac content: {:?}", flac);
};
let quality = AudioQuality::from_repr(quality as usize).context("invalid flac stream quality")?;
if quality >= filter_option.audio_min_quality && quality <= filter_option.audio_max_quality {
streams.push(Stream::DashAudio {
url: url.to_string(),
backup_url: serde_json::from_value(flac["backupUrl"].take()).unwrap_or_default(),
quality,
});
}
}
if !filter_option.no_dolby_audio
&& let Some(dolby_audio) = self
.info
.pointer_mut("/dash/dolby/audio/0")
.and_then(|a| a.as_object_mut())
{
let (Some(url), Some(quality)) = (dolby_audio["baseUrl"].as_str(), dolby_audio["id"].as_u64()) else {
bail!("invalid dolby audio stream");
};
let quality = AudioQuality::from_repr(quality as usize).context("invalid dolby audio stream quality")?;
if quality >= filter_option.audio_min_quality && quality <= filter_option.audio_max_quality {
streams.push(Stream::DashAudio {
url: url.to_string(),
backup_url: serde_json::from_value(dolby_audio["backupUrl"].take()).unwrap_or_default(),
quality,
});
}
}
Ok(streams)
}
pub fn best_stream(&mut self, filter_option: &FilterOption) -> Result<BestStream> {
let streams = self.streams(filter_option)?;
if self.is_flv_stream() || self.is_html5_mp4_stream() || self.is_episode_try_mp4_stream() {
// Per the assumptions in streams(), these three cases yield exactly one stream, so take it directly
return Ok(BestStream::Mixed(
streams.into_iter().next().context("no stream found")?,
));
}
let (videos, audios): (Vec<Stream>, Vec<Stream>) =
streams.into_iter().partition(|s| matches!(s, Stream::DashVideo { .. }));
Ok(BestStream::VideoAudio {
video: videos
.into_iter()
.max_by(|a, b| match (a, b) {
(
Stream::DashVideo {
quality: a_quality,
codecs: a_codecs,
..
},
Stream::DashVideo {
quality: b_quality,
codecs: b_codecs,
..
},
) => {
if a_quality != b_quality {
return a_quality.cmp(b_quality);
};
filter_option
.codecs
.iter()
.position(|c| c == b_codecs)
.cmp(&filter_option.codecs.iter().position(|c| c == a_codecs))
}
_ => unreachable!(),
})
.context("no video stream found")?,
audio: audios.into_iter().max_by(|a, b| match (a, b) {
(Stream::DashAudio { quality: a_quality, .. }, Stream::DashAudio { quality: b_quality, .. }) => {
a_quality.cmp(b_quality)
}
_ => unreachable!(),
}),
})
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::bilibili::{BiliClient, Video};
use crate::config::VersionedConfig;
#[test]
fn test_quality_order() {
assert!(
[
VideoQuality::Quality360p,
VideoQuality::Quality480p,
VideoQuality::Quality720p,
VideoQuality::Quality1080p,
VideoQuality::Quality1080pPLUS,
VideoQuality::Quality1080p60,
VideoQuality::Quality4k,
VideoQuality::QualityHdr,
VideoQuality::QualityDolby,
VideoQuality::Quality8k
]
.is_sorted()
);
assert!(
[
AudioQuality::Quality64k,
AudioQuality::Quality132k,
AudioQuality::Quality192k,
AudioQuality::QualityDolby,
AudioQuality::QualityHiRES,
]
.is_sorted()
);
}
#[ignore = "only for manual test"]
#[tokio::test]
async fn test_best_stream() {
let testcases = [
// An arbitrary 8k + Hi-RES video
(
"BV1xRChYUE2R",
VideoQuality::Quality8k,
VideoCodecs::HEV,
Some(AudioQuality::QualityHiRES),
),
// A silent, video-only upload
("BV1J7411H7KQ", VideoQuality::Quality720p, VideoCodecs::HEV, None),
// A Dolby Atmos demo clip
(
"BV1Mm4y1P7JV",
VideoQuality::QualityDolby,
VideoCodecs::HEV,
Some(AudioQuality::QualityDolby),
),
// A Dolby Vision video from 影视飓风 (MediaStorm)
(
"BV1HEf2YWEvs",
VideoQuality::QualityDolby,
VideoCodecs::HEV,
Some(AudioQuality::QualityDolby),
),
// A Bocchi the Rock (孤独摇滚) video with Dolby Vision + Hi-RES + Dolby Atmos
(
"BV1YDVYzeE39",
VideoQuality::QualityDolby,
VideoCodecs::HEV,
Some(AudioQuality::QualityHiRES),
),
// A Violet Evergarden (京紫) HDR video
(
"BV1cZ4y1b7iB",
VideoQuality::QualityHdr,
VideoCodecs::HEV,
Some(AudioQuality::Quality192k),
),
];
let config = VersionedConfig::get().read();
for (bvid, video_quality, video_codec, audio_quality) in testcases.into_iter() {
let client = BiliClient::new();
let video = Video::new(&client, bvid, &config.credential);
let pages = video.get_pages().await.expect("failed to get pages");
let first_page = pages.into_iter().next().expect("no page found");
let best_stream = video
.get_page_analyzer(&first_page)
.await
.expect("failed to get page analyzer")
.best_stream(&config.filter_option)
.expect("failed to get best stream");
dbg!(bvid, &best_stream);
match best_stream {
BestStream::VideoAudio {
video: Stream::DashVideo { quality, codecs, .. },
audio,
} => {
assert_eq!(quality, video_quality);
assert_eq!(codecs, video_codec);
assert_eq!(
audio.map(|audio_stream| match audio_stream {
Stream::DashAudio { quality, .. } => quality,
_ => unreachable!(),
}),
audio_quality,
);
}
_ => unreachable!(),
}
}
}
#[test]
fn test_url_sort() {
let stream = Stream::DashVideo {
url: "https://xy116x207x155x163xy240ey95dy1010y700yy8dxy.mcdn.bilivideo.cn:4483".to_owned(),
backup_url: vec![
"https://upos-sz-mirrorcos.bilivideo.com".to_owned(),
"https://cn-tj-cu-01-11.bilivideo.com".to_owned(),
"https://xxx.v1d.szbdys.com".to_owned(),
],
quality: VideoQuality::Quality1080p,
codecs: VideoCodecs::AVC,
};
assert_eq!(
stream.urls(true),
vec![
"https://upos-sz-mirrorcos.bilivideo.com",
"https://cn-tj-cu-01-11.bilivideo.com",
"https://xy116x207x155x163xy240ey95dy1010y700yy8dxy.mcdn.bilivideo.cn:4483",
"https://xxx.v1d.szbdys.com"
]
);
}
}
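The `AudioQuality::as_sort_key` trick above can be checked in isolation: adding 40 to the Dolby (30250) and Hi-RES (30251) discriminants pushes both past 192k (30280) without reordering them relative to each other. A standalone sketch over the raw discriminant values:

```rust
// Mirror of as_sort_key: Dolby (30250) and Hi-RES (30251) get +40 so both
// sort after 192k (30280) while keeping their own relative order.
fn sort_key(id: isize) -> isize {
    match id {
        30250 | 30251 => id + 40,
        _ => id,
    }
}

fn main() {
    let mut ids = [30280, 30251, 30216, 30250, 30232];
    ids.sort_by_key(|&id| sort_key(id));
    // 64k, 132k, 192k, Dolby, Hi-RES
    assert_eq!(ids, [30216, 30232, 30280, 30250, 30251]);
}
```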


@@ -0,0 +1,148 @@
use std::sync::Arc;
use std::time::Duration;
use anyhow::{Result, bail};
use leaky_bucket::RateLimiter;
use parking_lot::Once;
use reqwest::{Method, header};
use ua_generator::ua;
use crate::bilibili::Credential;
use crate::bilibili::credential::WbiImg;
use crate::config::{RateLimit, VersionedCache};
// A thin wrapper around reqwest::Client for Bilibili requests
#[derive(Clone)]
pub struct Client(reqwest::Client);
impl Client {
pub fn new() -> Self {
static INIT: Once = Once::new();
INIT.call_once(|| {
rustls::crypto::ring::default_provider()
.install_default()
.expect("Failed to install rustls crypto provider");
});
// Headers required for normal API access, attached to every request as defaults
let mut headers = header::HeaderMap::new();
headers.insert(
header::USER_AGENT,
header::HeaderValue::from_static(ua::spoof_chrome_ua()),
);
headers.insert(
header::REFERER,
header::HeaderValue::from_static("https://www.bilibili.com/"),
);
Self(
reqwest::Client::builder()
.default_headers(headers)
.gzip(true)
.connect_timeout(std::time::Duration::from_secs(10))
.read_timeout(std::time::Duration::from_secs(10))
.build()
.expect("failed to build reqwest client"),
)
}
// a wrapper of reqwest::Client::request to add credential to the request
pub fn request(&self, method: Method, url: &str, credential: Option<&Credential>) -> reqwest::RequestBuilder {
let mut req = self.0.request(method, url);
// If a credential is present, convert it into cookies on the request headers
if let Some(credential) = credential {
req = req
.header(header::COOKIE, format!("SESSDATA={}", credential.sessdata))
.header(header::COOKIE, format!("bili_jct={}", credential.bili_jct))
.header(header::COOKIE, format!("buvid3={}", credential.buvid3))
.header(header::COOKIE, format!("DedeUserID={}", credential.dedeuserid))
.header(header::COOKIE, format!("ac_time_value={}", credential.ac_time_value));
}
req
}
}
// clippy suggests implementing the Default trait
impl Default for Client {
fn default() -> Self {
Self::new()
}
}
enum Limiter {
Latest(VersionedCache<Option<RateLimiter>>),
Snapshot(Arc<Option<RateLimiter>>),
}
pub struct BiliClient {
pub client: Client,
limiter: Limiter,
}
impl BiliClient {
pub fn new() -> Self {
let client = Client::new();
let limiter = Limiter::Latest(
VersionedCache::new(|config| {
Ok(config
.concurrent_limit
.rate_limit
.as_ref()
.map(|RateLimit { limit, duration }| {
RateLimiter::builder()
.initial(*limit)
.refill(*limit)
.max(*limit)
.interval(Duration::from_millis(*duration))
.build()
}))
})
.expect("failed to create rate limiter"),
);
Self { client, limiter }
}
/// Take a snapshot of the current BiliClient; the rate limiter inside the snapshot is frozen
pub fn snapshot(&self) -> Result<Self> {
let Limiter::Latest(inner) = &self.limiter else {
// Syntactically valid, but semantically a snapshot must not be snapshotted again
bail!("cannot snapshot a snapshot BiliClient");
};
Ok(Self {
client: self.client.clone(),
limiter: Limiter::Snapshot(inner.snapshot()),
})
}
/// Get a pre-built request; obtaining one this way checks and waits on the rate limit
pub async fn request(&self, method: Method, url: &str, credential: &Credential) -> reqwest::RequestBuilder {
match &self.limiter {
Limiter::Latest(inner) => {
if let Some(limiter) = inner.read().as_ref() {
limiter.acquire_one().await;
}
}
Limiter::Snapshot(inner) => {
if let Some(limiter) = inner.as_ref() {
limiter.acquire_one().await;
}
}
}
self.client.request(method, url, Some(credential))
}
/// Check and refresh the Credential; returns Ok(None) if no refresh is needed, Ok(Some(new_credential)) if it is
pub async fn check_refresh(&self, credential: &Credential) -> Result<Option<Credential>> {
if !credential.need_refresh(&self.client).await? {
return Ok(None);
}
Ok(Some(credential.refresh(&self.client).await?))
}
/// Fetch the wbi img, used for signing requests
pub async fn wbi_img(&self, credential: &Credential) -> Result<WbiImg> {
credential.wbi_img(&self.client).await
}
pub fn inner_client(&self) -> &reqwest::Client {
&self.client.0
}
}


@@ -0,0 +1,306 @@
use std::fmt::{Display, Formatter};
use anyhow::{Context, Result, anyhow};
use async_stream::try_stream;
use futures::Stream;
use reqwest::Method;
use serde::Deserialize;
use serde_json::Value;
use crate::bilibili::{BiliClient, Credential, ErrorForStatusExt, Validate, VideoInfo};
#[derive(PartialEq, Eq, Hash, Clone, Debug, Default, Copy)]
pub enum CollectionType {
Series,
#[default]
Season,
}
impl<'de> serde::Deserialize<'de> for CollectionType {
fn deserialize<D>(deserializer: D) -> std::result::Result<Self, D::Error>
where
D: serde::Deserializer<'de>,
{
let v = i32::deserialize(deserializer)?;
CollectionType::try_from(v).map_err(serde::de::Error::custom)
}
}
impl From<CollectionType> for i32 {
fn from(v: CollectionType) -> Self {
match v {
CollectionType::Series => 1,
CollectionType::Season => 2,
}
}
}
impl TryFrom<i32> for CollectionType {
type Error = anyhow::Error;
fn try_from(v: i32) -> Result<Self, Self::Error> {
match v {
1 => Ok(CollectionType::Series),
2 => Ok(CollectionType::Season),
v => Err(anyhow!("got invalid collection type {}", v)),
}
}
}
impl CollectionType {
pub fn from_expected(v: i32) -> Self {
Self::try_from(v).expect("invalid collection type")
}
}
impl Display for CollectionType {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
let s = match self {
CollectionType::Series => "列表",
CollectionType::Season => "合集",
};
write!(f, "{}", s)
}
}
#[derive(PartialEq, Eq, Hash, Debug)]
pub struct CollectionItem {
pub mid: String,
pub sid: String,
pub collection_type: CollectionType,
}
pub struct Collection<'a> {
client: &'a BiliClient,
pub collection: CollectionItem,
credential: &'a Credential,
}
#[derive(Debug, PartialEq)]
pub struct CollectionInfo {
pub name: String,
pub mid: i64,
pub sid: i64,
pub collection_type: CollectionType,
}
impl<'de> Deserialize<'de> for CollectionInfo {
fn deserialize<D>(deserializer: D) -> std::result::Result<Self, D::Error>
where
D: serde::Deserializer<'de>,
{
#[derive(Deserialize)]
struct CollectionInfoRaw {
mid: i64,
name: String,
season_id: Option<i64>,
series_id: Option<i64>,
}
let raw = CollectionInfoRaw::deserialize(deserializer)?;
let (sid, collection_type) = match (raw.season_id, raw.series_id) {
(Some(sid), None) => (sid, CollectionType::Season),
(None, Some(sid)) => (sid, CollectionType::Series),
_ => return Err(serde::de::Error::custom("invalid collection info")),
};
Ok(CollectionInfo {
mid: raw.mid,
name: raw.name,
sid,
collection_type,
})
}
}
impl<'a> Collection<'a> {
pub fn new(client: &'a BiliClient, collection: CollectionItem, credential: &'a Credential) -> Self {
Self {
client,
collection,
credential,
}
}
pub async fn get_info(&self) -> Result<CollectionInfo> {
let meta = match self.collection.collection_type {
// There is no dedicated endpoint for Season info, so fetch the first page and take the meta from it
CollectionType::Season => self.get_videos(1).await?["data"]["meta"].take(),
CollectionType::Series => self.get_series_info().await?["data"]["meta"].take(),
};
Ok(serde_json::from_value(meta)?)
}
async fn get_series_info(&self) -> Result<Value> {
self.client
.request(Method::GET, "https://api.bilibili.com/x/series/series", self.credential)
.await
.query(&[("series_id", self.collection.sid.as_str())])
.send()
.await?
.error_for_status_ext()?
.json::<Value>()
.await?
.validate()
}
async fn get_videos(&self, page: i32) -> Result<Value> {
let req = match self.collection.collection_type {
CollectionType::Series => self
.client
.request(
Method::GET,
"https://api.bilibili.com/x/series/archives",
self.credential,
)
.await
.query(&[("pn", page)])
.query(&[
("mid", self.collection.mid.as_str()),
("series_id", self.collection.sid.as_str()),
("only_normal", "true"),
("sort", "desc"),
("ps", "30"),
]),
CollectionType::Season => self
.client
.request(
Method::GET,
"https://api.bilibili.com/x/polymer/web-space/seasons_archives_list",
self.credential,
)
.await
.query(&[("page_num", page)])
.query(&[
("mid", self.collection.mid.as_str()),
("season_id", self.collection.sid.as_str()),
("sort_reverse", "true"),
("page_size", "30"),
]),
};
req.send()
.await?
.error_for_status_ext()?
.json::<Value>()
.await?
.validate()
}
pub fn into_video_stream(self) -> impl Stream<Item = Result<VideoInfo>> + 'a {
try_stream! {
let mut page = 1;
loop {
let mut videos = self.get_videos(page).await.with_context(|| {
format!(
"failed to get videos of collection {:?} page {}",
self.collection, page
)
})?;
let archives = &mut videos["data"]["archives"];
if archives.as_array().is_none_or(|v| v.is_empty()) {
if page == 1 {
break;
}
Err(anyhow!(
"no videos found in collection {:?} page {}",
self.collection,
page
))?;
}
let videos_info: Vec<VideoInfo> = serde_json::from_value(archives.take()).with_context(|| {
format!(
"failed to parse videos of collection {:?} page {}",
self.collection, page
)
})?;
for video_info in videos_info {
yield video_info;
}
let page_info = &videos["data"]["page"];
let fields = match self.collection.collection_type {
CollectionType::Series => ["num", "size", "total"],
CollectionType::Season => ["page_num", "page_size", "total"],
};
let values = fields
.iter()
.map(|f| page_info[f].as_i64())
.collect::<Vec<Option<i64>>>();
if let [Some(num), Some(size), Some(total)] = values[..] {
if num * size < total {
page += 1;
continue;
}
} else {
Err(anyhow!(
"invalid page info of collection {:?} page {}: read {:?} from {}",
self.collection,
page,
fields,
page_info
))?;
}
break;
}
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_collection_info_parse() {
let testcases = vec![
(
r#"
{
"category": 0,
"cover": "https://archive.biliimg.com/bfs/archive/a6fbf7a7b9f4af09d9cf40482268634df387ef68.jpg",
"description": "",
"mid": 521722088,
"name": "合集·【命运方舟全剧情解说】",
"ptime": 1714701600,
"season_id": 1987140,
"total": 10
}
"#,
CollectionInfo {
mid: 521722088,
name: "合集·【命运方舟全剧情解说】".to_owned(),
sid: 1987140,
collection_type: CollectionType::Season,
},
),
(
r#"
{
"series_id": 387212,
"mid": 521722088,
"name": "提瓦特冒险记",
"description": "原神沙雕般的游戏体验",
"keywords": [
""
],
"creator": "",
"state": 2,
"last_update_ts": 1633167320,
"total": 3,
"ctime": 1633167320,
"mtime": 1633167320,
"raw_keywords": "",
"category": 1
}
"#,
CollectionInfo {
mid: 521722088,
name: "提瓦特冒险记".to_owned(),
sid: 387212,
collection_type: CollectionType::Series,
},
),
];
for (json, expect) in testcases {
let info: CollectionInfo = serde_json::from_str(json).unwrap();
assert_eq!(info, expect);
}
}
}
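The pagination loop in `into_video_stream` keeps fetching while `num * size < total`. That termination condition can be sketched on its own (a hypothetical helper, not part of the crate):

```rust
// Keep fetching while page_number * page_size is still below the total.
fn has_next_page(num: i64, size: i64, total: i64) -> bool {
    num * size < total
}

fn main() {
    // 100 videos at 30 per page: after page 3 only 90 are covered,
    // so one more fetch is needed; after page 4 the loop stops.
    assert!(has_next_page(1, 30, 100));
    assert!(has_next_page(3, 30, 100));
    assert!(!has_next_page(4, 30, 100));
}
```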


@@ -0,0 +1,443 @@
use std::collections::HashSet;
use anyhow::{Context, Result, bail, ensure};
use cookie::Cookie;
use regex::Regex;
use reqwest::{Method, header};
use rsa::pkcs8::DecodePublicKey;
use rsa::sha2::Sha256;
use rsa::{Oaep, RsaPublicKey};
use serde::{Deserialize, Serialize};
use crate::bilibili::{BiliError, Client, ErrorForStatusExt, Validate};
const MIXIN_KEY_ENC_TAB: [usize; 64] = [
46, 47, 18, 2, 53, 8, 23, 32, 15, 50, 10, 31, 58, 3, 45, 35, 27, 43, 5, 49, 33, 9, 42, 19, 29, 28, 14, 39, 12, 38,
41, 13, 37, 48, 7, 16, 24, 55, 40, 61, 26, 17, 0, 1, 60, 51, 30, 4, 22, 25, 54, 21, 56, 59, 6, 63, 57, 62, 11, 36,
20, 34, 44, 52,
];
mod qrcode_status_code {
pub const SUCCESS: i64 = 0;
pub const NOT_SCANNED: i64 = 86101;
pub const SCANNED_UNCONFIRMED: i64 = 86090;
pub const EXPIRED: i64 = 86038;
}
#[derive(Default, Debug, Clone, Serialize, Deserialize)]
pub struct Credential {
pub sessdata: String,
pub bili_jct: String,
pub buvid3: String,
pub dedeuserid: String,
pub ac_time_value: String,
}
#[derive(Debug, Deserialize)]
pub struct WbiImg {
pub(crate) img_url: String,
pub(crate) sub_url: String,
}
#[derive(Debug, Serialize, Deserialize)]
pub struct Qrcode {
pub url: String,
pub qrcode_key: String,
}
#[derive(Debug, Serialize, Deserialize)]
#[serde(tag = "status", rename_all = "snake_case")]
pub enum PollStatus {
Success {
credential: Credential,
},
Pending {
message: String,
#[serde(default)]
scanned: bool,
},
Expired {
message: String,
},
}
impl WbiImg {
pub fn into_mixin_key(self) -> Option<String> {
let key = match (get_filename(self.img_url.as_str()), get_filename(self.sub_url.as_str())) {
(Some(img_key), Some(sub_key)) => img_key.to_string() + sub_key,
_ => return None,
};
let key = key.as_bytes();
Some(MIXIN_KEY_ENC_TAB.iter().take(32).map(|&x| key[x] as char).collect())
}
}
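The mixin-key derivation in `into_mixin_key` concatenates the two 32-character keys extracted from the wbi img/sub URLs and then selects 32 characters by the first 32 indices of `MIXIN_KEY_ENC_TAB`. A self-contained sketch of that selection, with hypothetical keys (real ones come from the nav API):

```rust
// First 32 entries of MIXIN_KEY_ENC_TAB; each indexes into the 64-byte
// concatenation img_key + sub_key.
const ENC_TAB_HEAD: [usize; 32] = [
    46, 47, 18, 2, 53, 8, 23, 32, 15, 50, 10, 31, 58, 3, 45, 35, 27, 43, 5, 49,
    33, 9, 42, 19, 29, 28, 14, 39, 12, 38, 41, 13,
];

fn mixin_key(img_key: &str, sub_key: &str) -> String {
    let key = format!("{img_key}{sub_key}");
    let bytes = key.as_bytes();
    ENC_TAB_HEAD.iter().map(|&i| bytes[i] as char).collect()
}

fn main() {
    // hypothetical 32-character keys, for shape only
    let img = "0123456789abcdef0123456789abcdef";
    let sub = "fedcba9876543210fedcba9876543210";
    let key = mixin_key(img, sub);
    assert_eq!(key.len(), 32);
}
```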
impl Credential {
pub async fn wbi_img(&self, client: &Client) -> Result<WbiImg> {
let mut res = client
.request(Method::GET, "https://api.bilibili.com/x/web-interface/nav", Some(self))
.send()
.await?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?
.validate()?;
Ok(serde_json::from_value(res["data"]["wbi_img"].take())?)
}
pub async fn generate_qrcode(client: &Client) -> Result<Qrcode> {
let mut res = client
.request(
Method::GET,
"https://passport.bilibili.com/x/passport-login/web/qrcode/generate",
None,
)
.send()
.await?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?
.validate()?;
Ok(serde_json::from_value(res["data"].take())?)
}
pub async fn poll_qrcode(client: &Client, qrcode_key: &str) -> Result<PollStatus> {
let mut resp = client
.request(
Method::GET,
"https://passport.bilibili.com/x/passport-login/web/qrcode/poll",
None,
)
.query(&[("qrcode_key", qrcode_key)])
.send()
.await?
.error_for_status_ext()?;
let headers = std::mem::take(resp.headers_mut());
let json = resp.json::<serde_json::Value>().await?.validate()?;
let code = json["data"]["code"].as_i64().context("missing 'code' field in data")?;
match code {
qrcode_status_code::SUCCESS => {
let mut credential = Self::extract(headers, json)?;
credential.buvid3 = Self::get_buvid3(client).await?;
Ok(PollStatus::Success { credential })
}
qrcode_status_code::NOT_SCANNED => Ok(PollStatus::Pending {
message: "未扫描".to_owned(),
scanned: false,
}),
qrcode_status_code::SCANNED_UNCONFIRMED => Ok(PollStatus::Pending {
message: "已扫描,请在手机上确认登录".to_owned(),
scanned: true,
}),
qrcode_status_code::EXPIRED => Ok(PollStatus::Expired {
message: "二维码已过期".to_owned(),
}),
_ => {
bail!(BiliError::InvalidResponse(json.to_string()));
}
}
}
/// Fetch the buvid3 browser fingerprint
///
/// See https://github.com/SocialSisterYi/bilibili-API-collect/blob/master/docs/misc/buvid3_4.md
async fn get_buvid3(client: &Client) -> Result<String> {
let resp = client
.request(Method::GET, "https://api.bilibili.com/x/web-frontend/getbuvid", None)
.send()
.await?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?
.validate()?;
resp["data"]["buvid"]
.as_str()
.context("missing 'buvid' field in data")
.map(|s| s.to_string())
}
/// Check whether the credential needs to be refreshed
pub async fn need_refresh(&self, client: &Client) -> Result<bool> {
let res = client
.request(
Method::GET,
"https://passport.bilibili.com/x/passport-login/web/cookie/info",
Some(self),
)
.send()
.await?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?
.validate()?;
res["data"]["refresh"].as_bool().context("check refresh failed")
}
pub async fn refresh(&self, client: &Client) -> Result<Self> {
let correspond_path = Self::get_correspond_path();
let csrf = self
.get_refresh_csrf(client, correspond_path)
.await
.context("获取 refresh_csrf 失败")?;
let new_credential = self
.get_new_credential(client, &csrf)
.await
.context("刷新 Credential 失败")?;
self.confirm_refresh(client, &new_credential)
.await
.context("确认更新 Credential 失败")?;
Ok(new_credential)
}
fn get_correspond_path() -> String {
// Called infrequently, so constructing the key inside the function has little impact
let key = RsaPublicKey::from_public_key_pem(
"-----BEGIN PUBLIC KEY-----
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDLgd2OAkcGVtoE3ThUREbio0Eg
Uc/prcajMKXvkCKFCWhJYJcLkcM2DKKcSeFpD/j6Boy538YXnR6VhcuUJOhH2x71
nzPjfdTcqMz7djHum0qSZA0AyCBDABUqCrfNgCiJ00Ra7GmRj+YCK1NJEuewlb40
JNrRuoEUXpabUzGB8QIDAQAB
-----END PUBLIC KEY-----",
)
.expect("fail to decode public key");
// A millisecond-precision timestamp may run ahead of the server clock; back off 20s just in case
let ts = chrono::Local::now().timestamp_millis() - 20000;
let data = format!("refresh_{}", ts).into_bytes();
let encrypted = key
.encrypt(&mut rand::rng(), Oaep::new::<Sha256>(), &data)
.expect("fail to encrypt");
hex::encode(encrypted)
}
async fn get_refresh_csrf(&self, client: &Client, correspond_path: String) -> Result<String> {
let res = client
.request(
Method::GET,
format!("https://www.bilibili.com/correspond/1/{}", correspond_path).as_str(),
Some(self),
)
.header(header::COOKIE, "Domain=.bilibili.com")
.send()
.await?
.error_for_status_ext()?;
regex_find(r#"<div id="1-name">(.+?)</div>"#, res.text().await?.as_str())
}
async fn get_new_credential(&self, client: &Client, csrf: &str) -> Result<Credential> {
let mut resp = client
.request(
Method::POST,
"https://passport.bilibili.com/x/passport-login/web/cookie/refresh",
Some(self),
)
.header(header::COOKIE, "Domain=.bilibili.com")
.form(&[
// Note: this is form data, not JSON
("csrf", self.bili_jct.as_str()),
("refresh_csrf", csrf),
("refresh_token", self.ac_time_value.as_str()),
("source", "main_web"),
])
.send()
.await?
.error_for_status_ext()?;
let headers = std::mem::take(resp.headers_mut());
let json = resp.json::<serde_json::Value>().await?.validate()?;
let mut credential = Self::extract(headers, json)?;
credential.buvid3 = self.buvid3.clone();
Ok(credential)
}
async fn confirm_refresh(&self, client: &Client, new_credential: &Credential) -> Result<()> {
client
.request(
Method::POST,
"https://passport.bilibili.com/x/passport-login/web/confirm/refresh",
// The new credential is used here
Some(new_credential),
)
.form(&[
("csrf", new_credential.bili_jct.as_str()),
("refresh_token", self.ac_time_value.as_str()),
])
.send()
.await?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?
.validate()?;
Ok(())
}
/// Parse the headers and JSON into a Credential with every field except buvid3 populated
fn extract(headers: header::HeaderMap, json: serde_json::Value) -> Result<Credential> {
let mut credential = Credential::default();
let required_cookies = HashSet::from(["SESSDATA", "bili_jct", "DedeUserID"]);
let cookies: Vec<Cookie> = headers
.get_all(header::SET_COOKIE)
.iter()
.filter_map(|x| x.to_str().ok())
.filter_map(|x| Cookie::parse(x).ok())
.filter(|x| required_cookies.contains(x.name()))
.collect();
ensure!(
cookies.len() == required_cookies.len(),
"not all required cookies found"
);
for cookie in cookies {
match cookie.name() {
"SESSDATA" => credential.sessdata = cookie.value().to_string(),
"bili_jct" => credential.bili_jct = cookie.value().to_string(),
"DedeUserID" => credential.dedeuserid = cookie.value().to_string(),
_ => unreachable!(),
}
}
match json["data"]["refresh_token"].as_str() {
Some(token) => credential.ac_time_value = token.to_string(),
None => bail!("refresh_token not found"),
}
Ok(credential)
}
}
// Search doc with the given regex pattern and return the first capture group of the first match
fn regex_find(pattern: &str, doc: &str) -> Result<String> {
let re = Regex::new(pattern)?;
Ok(re
.captures(doc)
.context("no match found")?
.get(1)
.context("no capture found")?
.as_str()
.to_string())
}
fn get_filename(url: &str) -> Option<&str> {
url.rsplit_once('/')
.and_then(|(_, s)| s.rsplit_once('.'))
.map(|(s, _)| s)
}
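The helper above strips both the path and the extension with two `rsplit_once` calls. A self-contained sketch of the same technique, with the edge cases spelled out (`filename_stem` is an illustrative name, not from the source):

```rust
/// Illustrative re-implementation of the split-twice technique used by
/// `get_filename` above: take the segment after the last '/', then drop
/// the extension after the last '.'.
fn filename_stem(url: &str) -> Option<&str> {
    url.rsplit_once('/')
        .and_then(|(_, segment)| segment.rsplit_once('.'))
        .map(|(stem, _ext)| stem)
}

fn main() {
    // The stem survives; scheme, path, and extension are dropped.
    assert_eq!(filename_stem("https://example.com/a/video.mp4"), Some("video"));
    // Missing '/' or missing '.' yields None rather than a partial name.
    assert_eq!(filename_stem("video.mp4"), None);
    assert_eq!(filename_stem("https://example.com/a/video"), None);
}
```

Note that both splits must succeed: a URL without a dot in its final segment maps to `None`, matching the original's behavior.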
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_parse_and_find() {
let doc = r#"
<html lang="zh-Hans">
<body>
<div id="1-name">b0cc8411ded2f9db2cff2edb3123acac</div>
</body>
</html>
"#;
assert_eq!(
regex_find(r#"<div id="1-name">(.+?)</div>"#, doc).unwrap(),
"b0cc8411ded2f9db2cff2edb3123acac",
);
}
#[test]
fn test_encode_query() {
let query = vec![
("bar", "五一四".to_string()),
("baz", "1919810".to_string()),
("foo", "one one four".to_string()),
];
assert_eq!(
serde_urlencoded::to_string(query).unwrap().replace('+', "%20"),
"bar=%E4%BA%94%E4%B8%80%E5%9B%9B&baz=1919810&foo=one%20one%20four"
);
}
#[test]
fn test_extract_credential_success() {
let mut headers = header::HeaderMap::new();
headers.append(
header::SET_COOKIE,
"SESSDATA=test_sessdata; Path=/; Domain=bilibili.com".parse().unwrap(),
);
headers.append(
header::SET_COOKIE,
"bili_jct=test_jct; Path=/; Domain=bilibili.com".parse().unwrap(),
);
headers.append(
header::SET_COOKIE,
"DedeUserID=123456; Path=/; Domain=bilibili.com".parse().unwrap(),
);
let json = serde_json::json!({
"data": {
"refresh_token": "test_refresh_token"
}
});
let credential = Credential::extract(headers, json).unwrap();
assert_eq!(credential.sessdata, "test_sessdata");
assert_eq!(credential.bili_jct, "test_jct");
assert_eq!(credential.dedeuserid, "123456");
assert_eq!(credential.ac_time_value, "test_refresh_token");
assert!(credential.buvid3.is_empty());
}
#[test]
fn test_extract_credential_missing_sessdata() {
let headers = header::HeaderMap::new();
let json = serde_json::json!({
"data": {
"refresh_token": "test_refresh_token"
}
});
assert!(Credential::extract(headers, json).is_err());
}
#[test]
fn test_extract_credential_missing_refresh_token() {
let mut headers = header::HeaderMap::new();
headers.append(header::SET_COOKIE, "SESSDATA=test_sessdata".parse().unwrap());
headers.append(header::SET_COOKIE, "bili_jct=test_jct".parse().unwrap());
headers.append(header::SET_COOKIE, "DedeUserID=123456".parse().unwrap());
let json = serde_json::json!({
"data": {}
});
assert!(Credential::extract(headers, json).is_err());
}
#[ignore = "requires manual testing with real QR code scan"]
#[tokio::test]
async fn test_qrcode_login_flow() -> Result<()> {
let client = Client::new();
// 1. Generate the QR code
let qr_response = Credential::generate_qrcode(&client).await?;
println!("二维码 URL: {}", qr_response.url);
println!("qrcode_key: {}", qr_response.qrcode_key);
println!("\n请使用 B 站 APP 扫描二维码...\n");
// 2. Poll the login status (up to 90 polls, one every 2 seconds, 180 seconds total)
for i in 1..=90 {
println!("{} 次轮询...", i);
let status = Credential::poll_qrcode(&client, &qr_response.qrcode_key).await?;
match status {
PollStatus::Success { credential } => {
println!("\n登录成功!");
println!("SESSDATA: {}", credential.sessdata);
println!("bili_jct: {}", credential.bili_jct);
println!("buvid3: {}", credential.buvid3);
println!("DedeUserID: {}", credential.dedeuserid);
println!("ac_time_value: {}", credential.ac_time_value);
return Ok(());
}
PollStatus::Pending { message, scanned } => {
println!("状态: {}, 已扫描: {}", message, scanned);
}
PollStatus::Expired { message } => {
println!("\n二维码已过期: {}", message);
anyhow::bail!("二维码过期");
}
}
tokio::time::sleep(tokio::time::Duration::from_secs(2)).await;
}
bail!("轮询超时")
}
}


@@ -88,14 +88,14 @@ impl fmt::Display for CanvasStyles {
}
}
pub struct AssWriter<W: AsyncWrite> {
pub struct AssWriter<'a, W: AsyncWrite> {
f: Pin<Box<BufWriter<W>>>,
title: String,
canvas_config: CanvasConfig,
canvas_config: CanvasConfig<'a>,
}
impl<W: AsyncWrite> AssWriter<W> {
pub fn new(f: W, title: String, canvas_config: CanvasConfig) -> Self {
impl<'a, W: AsyncWrite> AssWriter<'a, W> {
pub fn new(f: W, title: String, canvas_config: CanvasConfig<'a>) -> Self {
AssWriter {
// For HDDs, Docker, and similar setups disk IO is a major bottleneck; use a large buffer
f: Box::pin(BufWriter::with_capacity(10 << 20, f)),
@@ -104,7 +104,7 @@ impl<W: AsyncWrite> AssWriter<W> {
}
}
pub async fn construct(f: W, title: String, canvas_config: CanvasConfig) -> Result<Self> {
pub async fn construct(f: W, title: String, canvas_config: CanvasConfig<'a>) -> Result<Self> {
let mut res = Self::new(f, title, canvas_config);
res.init().await?;
Ok(res)
@@ -184,7 +184,7 @@ impl<W: AsyncWrite> AssWriter<W> {
}
}
fn escape_text(text: &str) -> Cow<str> {
fn escape_text(text: &'_ str) -> Cow<'_, str> {
let text = text.trim();
if memchr::memchr(b'\n', text.as_bytes()).is_some() {
Cow::from(text.replace('\n', "\\N"))


@@ -1,5 +1,5 @@
use crate::bilibili::danmaku::canvas::CanvasConfig;
use crate::bilibili::danmaku::Danmu;
use crate::bilibili::danmaku::canvas::CanvasConfig;
pub enum Collision {
// Drifts further and further apart
@@ -18,7 +18,7 @@ pub struct Lane {
}
impl Lane {
pub fn draw(danmu: &Danmu, config: &CanvasConfig) -> Self {
pub fn draw(danmu: &Danmu, config: &CanvasConfig<'_>) -> Self {
Lane {
last_shoot_time: danmu.timeline_s,
last_length: danmu.length(config),
@@ -26,7 +26,7 @@ impl Lane {
}
/// Whether this lane can fire another danmaku; returns the possible collision case
pub fn available_for(&self, other: &Danmu, config: &CanvasConfig) -> Collision {
pub fn available_for(&self, other: &Danmu, config: &CanvasConfig<'_>) -> Collision {
#[allow(non_snake_case)]
let T = config.danmaku_option.duration;
#[allow(non_snake_case)]


@@ -5,12 +5,12 @@ use anyhow::Result;
use float_ord::FloatOrd;
use lane::Lane;
use crate::bilibili::PageInfo;
use crate::bilibili::danmaku::canvas::lane::Collision;
use crate::bilibili::danmaku::danmu::DanmuType;
use crate::bilibili::danmaku::{Danmu, DrawEffect, Drawable};
use crate::bilibili::PageInfo;
#[derive(Debug, serde::Deserialize, serde::Serialize)]
#[derive(Debug, Clone, serde::Deserialize, serde::Serialize)]
pub struct DanmakuOption {
pub duration: f64,
pub font: String,
@@ -26,7 +26,7 @@ pub struct DanmakuOption {
pub bottom_percentage: f64,
/// Opacity (0-255)
pub opacity: u8,
/// 是否加粗1代表是0代表否
/// 是否加粗1 代表是0 代表否
pub bold: bool,
/// 描边
pub outline: f64,
@@ -54,13 +54,13 @@ impl Default for DanmakuOption {
}
#[derive(Clone)]
pub struct CanvasConfig {
pub struct CanvasConfig<'a> {
pub width: u64,
pub height: u64,
pub danmaku_option: &'static DanmakuOption,
pub danmaku_option: &'a DanmakuOption,
}
impl CanvasConfig {
pub fn new(danmaku_option: &'static DanmakuOption, page: &PageInfo) -> Self {
impl<'a> CanvasConfig<'a> {
pub fn new(danmaku_option: &'a DanmakuOption, page: &PageInfo) -> Self {
let (width, height) = Self::dimension(page);
Self {
width,
@@ -86,7 +86,7 @@ impl CanvasConfig {
((720.0 / height as f64 * width as f64) as u64, 720)
}
pub fn canvas(self) -> Canvas {
pub fn canvas(self) -> Canvas<'a> {
let float_lanes_cnt =
(self.danmaku_option.float_percentage * self.height as f64 / self.danmaku_option.lane_size as f64) as usize;
@@ -97,12 +97,12 @@ impl CanvasConfig {
}
}
pub struct Canvas {
pub config: CanvasConfig,
pub struct Canvas<'a> {
pub config: CanvasConfig<'a>,
pub float_lanes: Vec<Option<Lane>>,
}
impl Canvas {
impl<'a> Canvas<'a> {
pub fn draw(&mut self, mut danmu: Danmu) -> Result<Option<Drawable>> {
danmu.timeline_s += self.config.danmaku_option.time_offset;
if danmu.timeline_s < 0.0 {


@@ -1,5 +1,5 @@
//! 一个弹幕实例,但是没有位置信息
use anyhow::{bail, Result};
use anyhow::{Result, bail};
use crate::bilibili::danmaku::canvas::CanvasConfig;
@@ -39,8 +39,8 @@ pub struct Danmu {
impl Danmu {
/// Compute the danmaku's "pixel length", multiplied by a scale factor
///
/// 汉字算一个全宽英文算2/3宽
pub fn length(&self, config: &CanvasConfig) -> f64 {
/// 汉字算一个全宽,英文算 2/3
pub fn length(&self, config: &CanvasConfig<'_>) -> f64 {
let pts = config.danmaku_option.font_size
* self
.content
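The width rule documented above (a CJK character counts as one full width, an ASCII character as 2/3) can be sketched in pure std. This is an approximation: the real `length()` also multiplies by the configured font size and a scale factor, and classifying by `is_ascii` is my simplification:

```rust
/// Approximate sketch of the danmaku width rule: CJK characters count as one
/// full width, ASCII characters as 2/3 of a width. (Assumption: the real code
/// also applies font size and a scale factor; `is_ascii` stands in for the
/// actual character classification.)
fn display_width(text: &str) -> f64 {
    text.chars()
        .map(|c| if c.is_ascii() { 2.0 / 3.0 } else { 1.0 })
        .sum()
}

fn main() {
    // Two CJK characters: 2 full widths.
    assert!((display_width("弹幕") - 2.0).abs() < 1e-9);
    // Three ASCII characters: 3 * 2/3 = 2 widths.
    assert!((display_width("abc") - 2.0).abs() < 1e-9);
}
```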


@@ -5,8 +5,7 @@ use tokio::fs::{self, File};
use crate::bilibili::danmaku::canvas::CanvasConfig;
use crate::bilibili::danmaku::{AssWriter, Danmu};
use crate::bilibili::PageInfo;
use crate::config::CONFIG;
use crate::bilibili::{DanmakuOption, PageInfo};
pub struct DanmakuWriter<'a> {
page: &'a PageInfo,
@@ -18,11 +17,11 @@ impl<'a> DanmakuWriter<'a> {
DanmakuWriter { page, danmaku }
}
pub async fn write(self, path: PathBuf) -> Result<()> {
pub async fn write(self, path: PathBuf, danmaku_option: &DanmakuOption) -> Result<()> {
if let Some(parent) = path.parent() {
fs::create_dir_all(parent).await?;
}
let canvas_config = CanvasConfig::new(&CONFIG.danmaku_option, self.page);
let canvas_config = CanvasConfig::new(danmaku_option, self.page);
let mut writer =
AssWriter::construct(File::create(path).await?, self.page.name.clone(), canvas_config.clone()).await?;
let mut canvas = canvas_config.canvas();


@@ -0,0 +1,97 @@
use anyhow::{Context, Result, anyhow};
use async_stream::try_stream;
use chrono::DateTime;
use futures::Stream;
use reqwest::Method;
use serde_json::Value;
use crate::bilibili::{BiliClient, Credential, ErrorForStatusExt, MIXIN_KEY, Validate, VideoInfo, WbiSign};
pub struct Dynamic<'a> {
client: &'a BiliClient,
pub upper_id: String,
credential: &'a Credential,
}
impl<'a> Dynamic<'a> {
pub fn new(client: &'a BiliClient, upper_id: String, credential: &'a Credential) -> Self {
Self {
client,
upper_id,
credential,
}
}
pub async fn get_dynamics(&self, offset: Option<String>) -> Result<Value> {
self.client
.request(
Method::GET,
"https://api.bilibili.com/x/polymer/web-dynamic/v1/feed/space",
self.credential,
)
.await
.query(&[
("host_mid", self.upper_id.as_str()),
("offset", offset.as_deref().unwrap_or("")),
("type", "video"),
])
.wbi_sign(MIXIN_KEY.load().as_deref())?
.send()
.await?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?
.validate()
}
pub fn into_video_stream(self) -> impl Stream<Item = Result<VideoInfo>> + 'a {
try_stream! {
let mut offset = None;
loop {
let mut res = self
.get_dynamics(offset.take())
.await
.with_context(|| "failed to get dynamics")?;
let items = match res["data"]["items"].as_array_mut() {
Some(items) if !items.is_empty() => items,
_ => {
if offset.is_none() {
break;
}
Err(anyhow!("no dynamics found in offset {:?}", offset))?
}
};
for item in items.iter_mut() {
if item["type"].as_str().is_none_or(|t| t != "DYNAMIC_TYPE_AV") {
continue;
}
let pub_ts = item["modules"]["module_author"]["pub_ts"].take();
let pub_dt = pub_ts
.as_i64()
.or_else(|| pub_ts.as_str().and_then(|s| s.parse::<i64>().ok()))
.and_then(DateTime::from_timestamp_secs)
.with_context(|| format!("invalid pub_ts: {:?}", pub_ts))?;
let mut video_info: VideoInfo =
serde_json::from_value(item["modules"]["module_dynamic"]["major"]["archive"].take())?;
// let-else is not used here because the try_stream! macro does not support it
if let VideoInfo::Dynamic { ref mut pubtime, .. } = video_info {
*pubtime = pub_dt;
yield video_info;
} else {
Err(anyhow!("video info is not dynamic"))?;
}
}
if let (Some(has_more), Some(new_offset)) =
(res["data"]["has_more"].as_bool(), res["data"]["offset"].as_str())
{
if !has_more {
break;
}
offset = Some(new_offset.to_string());
} else {
Err(anyhow!("no has_more or offset found"))?;
}
}
}
}
}


@@ -0,0 +1,24 @@
use thiserror::Error;
#[derive(Error, Debug, Clone)]
pub enum BiliError {
#[error("response missing 'code' or 'message' field, full response: {0}")]
InvalidResponse(String),
#[error("API returned error code {0}, full response: {1}")]
ErrorResponse(i64, String),
#[error("risk control triggered by server, full response: {0}")]
RiskControlOccurred(String),
#[error("invalid HTTP response code {0}, reason: {1}")]
InvalidStatusCode(u16, &'static str),
#[error("no video streams available (may indicate risk control)")]
VideoStreamsEmpty,
}
impl BiliError {
pub fn is_risk_control_related(&self) -> bool {
matches!(
self,
BiliError::RiskControlOccurred(_) | BiliError::VideoStreamsEmpty | BiliError::InvalidStatusCode(_, _)
)
}
}


@@ -0,0 +1,105 @@
use anyhow::{Context, Result, anyhow};
use async_stream::try_stream;
use futures::Stream;
use serde_json::Value;
use crate::bilibili::{BiliClient, Credential, ErrorForStatusExt, Validate, VideoInfo};
pub struct FavoriteList<'a> {
client: &'a BiliClient,
fid: String,
credential: &'a Credential,
}
#[derive(Debug, serde::Deserialize)]
pub struct FavoriteListInfo {
pub id: i64,
pub title: String,
}
impl<'a> FavoriteList<'a> {
pub fn new(client: &'a BiliClient, fid: String, credential: &'a Credential) -> Self {
Self {
client,
fid,
credential,
}
}
pub async fn get_info(&self) -> Result<FavoriteListInfo> {
let mut res = self
.client
.request(
reqwest::Method::GET,
"https://api.bilibili.com/x/v3/fav/folder/info",
self.credential,
)
.await
.query(&[("media_id", &self.fid)])
.send()
.await?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?
.validate()?;
Ok(serde_json::from_value(res["data"].take())?)
}
async fn get_videos(&self, page: u32) -> Result<Value> {
self.client
.request(
reqwest::Method::GET,
"https://api.bilibili.com/x/v3/fav/resource/list",
self.credential,
)
.await
.query(&[
("media_id", self.fid.as_str()),
("pn", page.to_string().as_str()),
("ps", "20"),
("order", "mtime"),
("type", "0"),
("tid", "0"),
])
.send()
.await?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?
.validate()
}
// Takes ownership of the favorite list and returns a stream of its videos
pub fn into_video_stream(self) -> impl Stream<Item = Result<VideoInfo>> + 'a {
try_stream! {
let mut page = 1;
loop {
let mut videos = self
.get_videos(page)
.await
.with_context(|| format!("failed to get videos of favorite {} page {}", self.fid, page))?;
let medias = &mut videos["data"]["medias"];
if medias.as_array().is_none_or(|v| v.is_empty()) {
if page == 1 {
break;
}
Err(anyhow!("no medias found in favorite {} page {}", self.fid, page))?;
}
let videos_info: Vec<VideoInfo> = serde_json::from_value(medias.take())
.with_context(|| format!("failed to parse videos of favorite {} page {}", self.fid, page))?;
for video_info in videos_info {
yield video_info;
}
let has_more = &videos["data"]["has_more"];
if let Some(v) = has_more.as_bool() {
if v {
page += 1;
continue;
}
} else {
Err(anyhow!("has_more is not a bool"))?;
}
break;
}
}
}
}


@@ -0,0 +1,138 @@
use anyhow::{Result, ensure};
use reqwest::Method;
use crate::bilibili::{BiliClient, Credential, ErrorForStatusExt, Validate};
pub struct Me<'a> {
client: &'a BiliClient,
credential: &'a Credential,
}
impl<'a> Me<'a> {
pub fn new(client: &'a BiliClient, credential: &'a Credential) -> Self {
Self { client, credential }
}
pub async fn get_created_favorites(&self) -> Result<Option<Vec<FavoriteItem>>> {
ensure!(
!self.mid().is_empty(),
"未获取到用户 ID请确保填写设置中的 B 站认证信息"
);
let mut resp = self
.client
.request(
Method::GET,
"https://api.bilibili.com/x/v3/fav/folder/created/list-all",
self.credential,
)
.await
.query(&[("up_mid", &self.mid())])
.send()
.await?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?
.validate()?;
Ok(serde_json::from_value(resp["data"]["list"].take())?)
}
pub async fn get_followed_collections(&self, page_num: i32, page_size: i32) -> Result<Collections> {
ensure!(
!self.mid().is_empty(),
"未获取到用户 ID请确保填写设置中的 B 站认证信息"
);
let mut resp = self
.client
.request(
Method::GET,
"https://api.bilibili.com/x/v3/fav/folder/collected/list",
self.credential,
)
.await
.query(&[("up_mid", self.mid()), ("platform", "web")])
.query(&[("pn", page_num), ("ps", page_size)])
.send()
.await?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?
.validate()?;
Ok(serde_json::from_value(resp["data"].take())?)
}
pub async fn get_followed_uppers(
&self,
page_num: i32,
page_size: i32,
name: Option<&str>,
) -> Result<FollowedUppers> {
ensure!(
!self.mid().is_empty(),
"未获取到用户 ID请确保填写设置中的 B 站认证信息"
);
let url = if name.is_some() {
"https://api.bilibili.com/x/relation/followings/search"
} else {
"https://api.bilibili.com/x/relation/followings"
};
let mut request = self
.client
.request(Method::GET, url, self.credential)
.await
.query(&[("vmid", self.mid())])
.query(&[("pn", page_num), ("ps", page_size)]);
if let Some(name) = name {
request = request.query(&[("name", name)]);
}
let mut resp = request
.send()
.await?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?
.validate()?;
Ok(serde_json::from_value(resp["data"].take())?)
}
fn mid(&self) -> &str {
&self.credential.dedeuserid
}
}
#[derive(Debug, serde::Deserialize)]
pub struct FavoriteItem {
pub title: String,
pub media_count: i64,
pub id: i64,
pub mid: i64,
}
#[derive(Debug, serde::Deserialize)]
pub struct CollectionItem {
pub id: i64,
pub fid: i64,
pub mid: i64,
pub state: i32,
pub title: String,
pub media_count: i64,
}
#[derive(Debug, serde::Deserialize)]
pub struct Collections {
pub count: i64,
pub list: Option<Vec<CollectionItem>>,
}
#[derive(Debug, serde::Deserialize)]
pub struct FollowedUppers {
pub total: i64,
pub list: Vec<FollowedUpper>,
}
#[derive(Debug, serde::Deserialize)]
pub struct FollowedUpper {
pub mid: i64,
pub uname: String,
pub face: String,
pub sign: String,
}


@@ -0,0 +1,441 @@
use std::borrow::Cow;
use std::sync::Arc;
pub use analyzer::{BestStream, FilterOption};
use anyhow::{Context, Result, bail, ensure};
use arc_swap::ArcSwapOption;
use bili_sync_entity::upper_vec::Upper;
use chrono::serde::ts_seconds;
use chrono::{DateTime, Utc};
pub use client::{BiliClient, Client};
pub use collection::{Collection, CollectionItem, CollectionType};
pub use credential::{Credential, PollStatus, Qrcode};
pub use danmaku::DanmakuOption;
pub use dynamic::Dynamic;
pub use error::BiliError;
pub use favorite_list::FavoriteList;
pub use me::Me;
use once_cell::sync::Lazy;
use reqwest::{RequestBuilder, StatusCode};
pub use submission::Submission;
pub use video::{Dimension, PageInfo, Video};
pub use watch_later::WatchLater;
mod analyzer;
mod client;
mod collection;
mod credential;
mod danmaku;
mod dynamic;
mod error;
mod favorite_list;
mod me;
mod submission;
mod subtitle;
mod video;
mod watch_later;
static MIXIN_KEY: Lazy<ArcSwapOption<String>> = Lazy::new(Default::default);
pub(crate) fn set_global_mixin_key(key: String) {
MIXIN_KEY.store(Some(Arc::new(key)));
}
pub(crate) trait Validate {
type Output;
fn validate(self) -> Result<Self::Output>;
}
pub(crate) trait ErrorForStatusExt {
type Output;
fn error_for_status_ext(self) -> Result<Self::Output>;
}
impl Validate for serde_json::Value {
type Output = serde_json::Value;
fn validate(self) -> Result<Self::Output> {
let code = self["code"]
.as_i64()
.with_context(|| BiliError::InvalidResponse(self.to_string()))?;
if code == -352 || !self["data"]["v_voucher"].is_null() {
bail!(BiliError::RiskControlOccurred(self.to_string()));
}
ensure!(code == 0, BiliError::ErrorResponse(code, self.to_string()));
Ok(self)
}
}
impl ErrorForStatusExt for reqwest::Response {
type Output = reqwest::Response;
fn error_for_status_ext(self) -> Result<Self::Output> {
let status = self.status();
// 412 is caused by excessive request frequency, which is definitely risk control
// 403 currently appears occasionally when downloading video/audio streams; since it is intermittent and clears after a while, it is also treated as risk control for now
if status == StatusCode::PRECONDITION_FAILED || status == StatusCode::FORBIDDEN {
bail!(BiliError::InvalidStatusCode(
status.as_u16(),
status.canonical_reason().unwrap_or("Unknown")
));
}
Ok(self.error_for_status()?)
}
}
pub(crate) trait WbiSign {
type Output;
fn wbi_sign(self, mixin_key: Option<impl AsRef<str>>) -> Result<Self::Output>;
}
impl WbiSign for RequestBuilder {
type Output = RequestBuilder;
fn wbi_sign(self, mixin_key: Option<impl AsRef<str>>) -> Result<Self::Output> {
let Some(mixin_key) = mixin_key else {
return Ok(self);
};
let (client, req) = self.build_split();
let mut req = req?;
sign_request(&mut req, mixin_key.as_ref(), chrono::Utc::now().timestamp())?;
Ok(RequestBuilder::from_parts(client, req))
}
}
fn sign_request(req: &mut reqwest::Request, mixin_key: &str, timestamp: i64) -> Result<()> {
let mut query_pairs = req.url().query_pairs().collect::<Vec<_>>();
let timestamp = timestamp.to_string();
query_pairs.push(("wts".into(), Cow::Borrowed(timestamp.as_str())));
query_pairs.sort_by(|a, b| a.0.cmp(&b.0));
let query_str = serde_urlencoded::to_string(query_pairs)?.replace('+', "%20");
let w_rid = format!("{:x}", md5::compute(query_str + mixin_key));
req.url_mut()
.query_pairs_mut()
.extend_pairs([("w_rid", w_rid), ("wts", timestamp)]);
Ok(())
}
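A pure-std sketch of the parameter-ordering step in `sign_request` above, using the query values from this file's own test vector. Percent-encoding is skipped here (the real code goes through `serde_urlencoded`), so this only holds for values with no reserved characters; `sorted_wbi_query` is an illustrative name:

```rust
/// Sketch of the WBI ordering step: append `wts`, sort all pairs by key, and
/// join into the canonical string whose md5 (together with the mixin key)
/// becomes `w_rid`. Percent-encoding is omitted (assumed unnecessary for
/// these values), so this is illustrative rather than a drop-in replacement.
fn sorted_wbi_query(mut pairs: Vec<(String, String)>, wts: i64) -> String {
    pairs.push(("wts".to_string(), wts.to_string()));
    pairs.sort_by(|a, b| a.0.cmp(&b.0));
    pairs
        .iter()
        .map(|(k, v)| format!("{k}={v}"))
        .collect::<Vec<_>>()
        .join("&")
}

fn main() {
    let pairs = vec![
        ("foo".to_string(), "114".to_string()),
        ("bar".to_string(), "514".to_string()),
        ("zab".to_string(), "1919810".to_string()),
    ];
    // Keys sort as bar < foo < wts < zab, matching the test vector below.
    assert_eq!(
        sorted_wbi_query(pairs, 1702204169),
        "bar=514&foo=114&wts=1702204169&zab=1919810"
    );
}
```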
#[derive(Debug, serde::Deserialize)]
#[serde(untagged)]
/// Note: the variant order here matters, because serde matches an untagged enum against each variant in order
/// > There is no explicit tag identifying which variant the data contains.
/// > Serde will try to match the data against each variant in order and the first one that deserializes successfully is the one returned.
pub enum VideoInfo {
/// Video info from the video detail API
Detail {
title: String,
bvid: String,
#[serde(rename = "desc")]
intro: String,
#[serde(rename = "pic")]
cover: String,
#[serde(rename = "owner")]
upper: Upper<i64, String>,
#[serde(default)]
staff: Option<Vec<Upper<i64, String>>>,
#[serde(with = "ts_seconds")]
ctime: DateTime<Utc>,
#[serde(rename = "pubdate", with = "ts_seconds")]
pubtime: DateTime<Utc>,
is_upower_exclusive: bool,
is_upower_play: bool,
redirect_url: Option<String>,
pages: Vec<PageInfo>,
state: i32,
},
/// Video info from the favorite list API
Favorite {
title: String,
#[serde(rename = "type")]
vtype: i32,
bvid: String,
intro: String,
cover: String,
upper: Upper<i64, String>,
#[serde(with = "ts_seconds")]
ctime: DateTime<Utc>,
#[serde(with = "ts_seconds")]
fav_time: DateTime<Utc>,
#[serde(with = "ts_seconds")]
pubtime: DateTime<Utc>,
attr: i32,
},
/// Video info from the watch-later API
WatchLater {
title: String,
bvid: String,
#[serde(rename = "desc")]
intro: String,
#[serde(rename = "pic")]
cover: String,
#[serde(rename = "owner")]
upper: Upper<i64, String>,
#[serde(with = "ts_seconds")]
ctime: DateTime<Utc>,
#[serde(rename = "add_at", with = "ts_seconds")]
fav_time: DateTime<Utc>,
#[serde(rename = "pubdate", with = "ts_seconds")]
pubtime: DateTime<Utc>,
state: i32,
},
/// Video info from the collection / series list API
Collection {
bvid: String,
#[serde(rename = "pic")]
cover: String,
#[serde(with = "ts_seconds")]
ctime: DateTime<Utc>,
#[serde(rename = "pubdate", with = "ts_seconds")]
pubtime: DateTime<Utc>,
},
// Video info from the user submission API
Submission {
title: String,
bvid: String,
#[serde(rename = "description")]
intro: String,
#[serde(rename = "pic")]
cover: String,
#[serde(rename = "created", with = "ts_seconds")]
ctime: DateTime<Utc>,
},
// Video info from dynamics (pubtime is absent from this structure, so it uses default + manual assignment)
Dynamic {
title: String,
bvid: String,
desc: String,
cover: String,
#[serde(default)]
pubtime: DateTime<Utc>,
},
}
#[cfg(test)]
mod tests {
use std::path::Path;
use anyhow::Context;
use futures::StreamExt;
use reqwest::Method;
use super::*;
use crate::bilibili::credential::WbiImg;
use crate::config::VersionedConfig;
use crate::database::setup_database;
use crate::utils::init_logger;
#[ignore = "only for manual test"]
#[tokio::test]
async fn test_video_info_type() -> Result<()> {
VersionedConfig::init_for_test(&setup_database(Path::new("./test.sqlite")).await?).await?;
let credential = &VersionedConfig::get().read().credential;
init_logger("None,bili_sync=debug", None);
let bili_client = BiliClient::new();
// Fetching an uploader's videos requires the mixin key to sign the request parameters; otherwise the API reports insufficient permission and returns empty results
let mixin_key = bili_client
.wbi_img(credential)
.await?
.into_mixin_key()
.context("no mixin key")?;
set_global_mixin_key(mixin_key);
let collection = Collection::new(
&bili_client,
CollectionItem {
mid: "521722088".to_string(),
sid: "4523".to_string(),
collection_type: CollectionType::Season,
},
&credential,
);
let videos = collection
.into_video_stream()
.take(20)
.filter_map(|v| futures::future::ready(v.ok()))
.collect::<Vec<_>>()
.await;
assert!(videos.iter().all(|v| matches!(v, VideoInfo::Collection { .. })));
assert!(videos.iter().rev().is_sorted_by_key(|v| v.release_datetime()));
// Test favorite list
let favorite = FavoriteList::new(&bili_client, "3144336058".to_string(), &credential);
let videos = favorite
.into_video_stream()
.take(20)
.filter_map(|v| futures::future::ready(v.ok()))
.collect::<Vec<_>>()
.await;
assert!(videos.iter().all(|v| matches!(v, VideoInfo::Favorite { .. })));
assert!(videos.iter().rev().is_sorted_by_key(|v| v.release_datetime()));
// Test watch-later
let watch_later = WatchLater::new(&bili_client, &credential);
let videos = watch_later
.into_video_stream()
.take(20)
.filter_map(|v| futures::future::ready(v.ok()))
.collect::<Vec<_>>()
.await;
assert!(videos.iter().all(|v| matches!(v, VideoInfo::WatchLater { .. })));
assert!(videos.iter().rev().is_sorted_by_key(|v| v.release_datetime()));
// Test submissions
let submission = Submission::new(&bili_client, "956761".to_string(), &credential);
let videos = submission
.into_video_stream()
.take(20)
.filter_map(|v| futures::future::ready(v.ok()))
.collect::<Vec<_>>()
.await;
assert!(videos.iter().all(|v| matches!(v, VideoInfo::Submission { .. })));
assert!(videos.iter().rev().is_sorted_by_key(|v| v.release_datetime()));
// Test dynamics
let dynamic = Dynamic::new(&bili_client, "659898".to_string(), &credential);
let videos = dynamic
.into_video_stream()
.take(20)
.filter_map(|v| futures::future::ready(v.ok()))
.collect::<Vec<_>>()
.await;
assert!(videos.iter().all(|v| matches!(v, VideoInfo::Dynamic { .. })));
assert!(videos.iter().skip(1).rev().is_sorted_by_key(|v| v.release_datetime()));
Ok(())
}
#[ignore = "only for manual test"]
#[tokio::test]
async fn test_subtitle_parse() -> Result<()> {
VersionedConfig::init_for_test(&setup_database(Path::new("./test.sqlite")).await?).await?;
let credential = &VersionedConfig::get().read().credential;
let bili_client = BiliClient::new();
let mixin_key = bili_client
.wbi_img(credential)
.await?
.into_mixin_key()
.context("no mixin key")?;
set_global_mixin_key(mixin_key);
let video = Video::new(&bili_client, "BV1gLfnY8E6D", &credential);
let pages = video.get_pages().await?;
println!("pages: {:?}", pages);
let subtitles = video.get_subtitles(&pages[0]).await?;
for subtitle in subtitles {
println!(
"{}: {}",
subtitle.lan,
subtitle.body.to_string().chars().take(200).collect::<String>()
);
}
Ok(())
}
#[ignore = "only for manual test"]
#[tokio::test]
async fn test_upower_parse() -> Result<()> {
VersionedConfig::init_for_test(&setup_database(Path::new("./test.sqlite")).await?).await?;
let credential = &VersionedConfig::get().read().credential;
let bili_client = BiliClient::new();
let mixin_key = bili_client
.wbi_img(credential)
.await?
.into_mixin_key()
.context("no mixin key")?;
set_global_mixin_key(mixin_key);
for (bvid, (upower_exclusive, upower_play)) in [
("BV1HxXwYEEqt", (true, false)), // upower-exclusive, no permission to watch
("BV16w41187fx", (true, true)), // upower-exclusive, permitted to watch
("BV1n34jzPEYq", (false, false)), // regular video
] {
let video = Video::new(&bili_client, bvid, credential);
let info = video.get_view_info().await?;
let VideoInfo::Detail {
is_upower_exclusive,
is_upower_play,
..
} = info
else {
unreachable!();
};
assert_eq!(is_upower_exclusive, upower_exclusive, "bvid: {}", bvid);
assert_eq!(is_upower_play, upower_play, "bvid: {}", bvid);
}
Ok(())
}
#[ignore = "only for manual test"]
#[tokio::test]
async fn test_ep_parse() -> Result<()> {
VersionedConfig::init_for_test(&setup_database(Path::new("./test.sqlite")).await?).await?;
let credential = &VersionedConfig::get().read().credential;
let bili_client = BiliClient::new();
let mixin_key = bili_client
.wbi_img(credential)
.await?
.into_mixin_key()
.context("no mixin key")?;
set_global_mixin_key(mixin_key);
for (bvid, redirect_is_none) in [
("BV1SF411g796", false), // EP
("BV13xtnzPEye", false), // bangumi
("BV1kT4NzTEZj", true), // regular video
] {
let video = Video::new(&bili_client, bvid, credential);
let info = video.get_view_info().await?;
let VideoInfo::Detail { redirect_url, .. } = info else {
unreachable!();
};
assert_eq!(redirect_url.is_none(), redirect_is_none, "bvid: {}", bvid);
}
Ok(())
}
#[test]
fn test_wbi_key() -> Result<()> {
let key = WbiImg {
img_url: "https://i0.hdslb.com/bfs/wbi/7cd084941338484aae1ad9425b84077c.png".to_string(),
sub_url: "https://i0.hdslb.com/bfs/wbi/4932caff0ff746eab6f01bf08b70ac45.png".to_string(),
};
let key = key.into_mixin_key().context("no mixin key")?;
assert_eq!(key.as_str(), "ea1db124af3c7062474693fa704f4ff8");
let client = Client::new();
let mut req = client
.request(Method::GET, "https://www.baidu.com/", None)
.query(&[("foo", "114"), ("bar", "514")])
.query(&[("zab", "1919810")])
.build()?;
sign_request(&mut req, key.as_str(), 1702204169).unwrap();
let query: Vec<_> = req.url().query_pairs().collect();
assert_eq!(
query,
vec![
("foo".into(), "114".into()),
("bar".into(), "514".into()),
("zab".into(), "1919810".into()),
("w_rid".into(), "8f6f2b5b3d485fe1886cec6a0be8c5d4".into()),
("wts".into(), "1702204169".into()),
]
);
let key = WbiImg {
img_url: "https://i0.hdslb.com/bfs/wbi/7cd084941338484aae1ad9425b84077c.png".to_string(),
sub_url: "https://i0.hdslb.com/bfs/wbi/4932caff0ff746eab6f01bf08b70ac45.png".to_string(),
};
let key = key.into_mixin_key().context("no mixin key")?;
let mut req = client
.request(Method::GET, "https://www.baidu.com/", None)
.query(&[("mid", "11997177"), ("token", "")])
.query(&[("platform", "web"), ("web_location", "1550101")])
.build()?;
sign_request(&mut req, key.as_str(), 1703513649).unwrap();
let query: Vec<_> = req.url().query_pairs().collect();
assert_eq!(
query,
vec![
("mid".into(), "11997177".into()),
("token".into(), "".into()),
("platform".into(), "web".into()),
("web_location".into(), "1550101".into()),
("w_rid".into(), "7d4428b3f2f9ee2811e116ec6fd41a4f".into()),
("wts".into(), "1703513649".into()),
]
);
Ok(())
}
}


@@ -0,0 +1,108 @@
use anyhow::{Context, Result, anyhow};
use async_stream::try_stream;
use bili_sync_entity::upper_vec::Upper;
use futures::Stream;
use reqwest::Method;
use serde_json::Value;
use crate::bilibili::{BiliClient, Credential, Dynamic, ErrorForStatusExt, MIXIN_KEY, Validate, VideoInfo, WbiSign};
pub struct Submission<'a> {
client: &'a BiliClient,
pub upper_id: String,
credential: &'a Credential,
}
impl<'a> From<Submission<'a>> for Dynamic<'a> {
fn from(submission: Submission<'a>) -> Self {
Dynamic::new(submission.client, submission.upper_id, submission.credential)
}
}
impl<'a> Submission<'a> {
pub fn new(client: &'a BiliClient, upper_id: String, credential: &'a Credential) -> Self {
Self {
client,
upper_id,
credential,
}
}
pub async fn get_info(&self) -> Result<Upper<String, String>> {
let mut res = self
.client
.request(
Method::GET,
"https://api.bilibili.com/x/web-interface/card",
self.credential,
)
.await
.query(&[("mid", self.upper_id.as_str())])
.send()
.await?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?
.validate()?;
Ok(serde_json::from_value(res["data"]["card"].take())?)
}
async fn get_videos(&self, page: i32) -> Result<Value> {
self.client
.request(
Method::GET,
"https://api.bilibili.com/x/space/wbi/arc/search",
self.credential,
)
.await
.query(&[
("mid", self.upper_id.as_str()),
("order", "pubdate"),
("order_avoided", "true"),
("platform", "web"),
("web_location", "1550101"),
("ps", "30"),
])
.query(&[("pn", page)])
.wbi_sign(MIXIN_KEY.load().as_deref())?
.send()
.await?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?
.validate()
}
pub fn into_video_stream(self) -> impl Stream<Item = Result<VideoInfo>> + 'a {
try_stream! {
let mut page = 1;
loop {
let mut videos = self
.get_videos(page)
.await
.with_context(|| format!("failed to get videos of upper {} page {}", self.upper_id, page))?;
let vlist = &mut videos["data"]["list"]["vlist"];
if vlist.as_array().is_none_or(|v| v.is_empty()) {
if page == 1 {
break;
}
Err(anyhow!("no medias found in upper {} page {}", self.upper_id, page))?;
}
let videos_info: Vec<VideoInfo> = serde_json::from_value(vlist.take())
.with_context(|| format!("failed to parse videos of upper {} page {}", self.upper_id, page))?;
for video_info in videos_info {
yield video_info;
}
let count = &videos["data"]["page"]["count"];
if let Some(v) = count.as_i64() {
if v > (page * 30) as i64 {
page += 1;
continue;
}
} else {
Err(anyhow!("count is not an i64"))?;
}
break;
}
}
}
}


@@ -0,0 +1,75 @@
use std::fmt::Display;
#[derive(Debug, serde::Deserialize)]
pub struct SubTitlesInfo {
pub subtitles: Vec<SubTitleInfo>,
}
#[derive(Debug, serde::Deserialize)]
pub struct SubTitleInfo {
pub lan: String,
pub subtitle_url: String,
}
pub struct SubTitle {
pub lan: String,
pub body: SubTitleBody,
}
#[derive(Debug, serde::Deserialize)]
pub struct SubTitleBody(pub Vec<SubTitleItem>);
#[derive(Debug, serde::Deserialize)]
pub struct SubTitleItem {
from: f64,
to: f64,
content: String,
}
impl SubTitleInfo {
pub fn is_ai_sub(&self) -> bool {
// AI subtitles: aisubtitle.hdslb.com/bfs/ai_subtitle/xxxx
// non-AI subtitles: aisubtitle.hdslb.com/bfs/subtitle/xxxx
self.subtitle_url.contains("ai_subtitle")
}
}
impl Display for SubTitleBody {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
for (idx, item) in self.0.iter().enumerate() {
writeln!(f, "{}", idx)?;
writeln!(f, "{} --> {}", format_time(item.from), format_time(item.to))?;
writeln!(f, "{}", item.content)?;
writeln!(f)?;
}
Ok(())
}
}
fn format_time(time: f64) -> String {
let (second, millisecond) = (time.trunc(), (time.fract() * 1e3) as u32);
let (hour, minute, second) = (
(second / 3600.0) as u32,
((second % 3600.0) / 60.0) as u32,
(second % 60.0) as u32,
);
format!("{:02}:{:02}:{:02},{:03}", hour, minute, second, millisecond)
}
#[cfg(test)]
mod tests {
#[test]
fn test_format_time() {
// Parsing floats loses some precision, but an error of a few milliseconds shouldn't matter.
// Being fully robust would require hand-parsing the JSON to split seconds and milliseconds and handle them separately.
let testcases = [
(0.0, "00:00:00,000"),
(1.5, "00:00:01,500"),
(206.45, "00:03:26,449"),
(360001.23, "100:00:01,229"),
];
for (time, expect) in testcases.iter() {
assert_eq!(super::format_time(*time), *expect);
}
}
}
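
The float-precision caveat noted in the test comment can be reproduced in isolation. This standalone sketch (not part of the source tree) copies the `format_time` logic and shows why `206.45` truncates to 449 ms:

```rust
// Standalone copy of the format_time logic above, illustrating the
// float-precision caveat from the test comment: 206.45 has no exact
// f64 representation (it sits slightly below 206.45), so truncating
// the fractional part yields 449 ms instead of 450.
fn format_time(time: f64) -> String {
    let (second, millisecond) = (time.trunc(), (time.fract() * 1e3) as u32);
    let (hour, minute, second) = (
        (second / 3600.0) as u32,
        ((second % 3600.0) / 60.0) as u32,
        (second % 60.0) as u32,
    );
    format!("{:02}:{:02}:{:02},{:03}", hour, minute, second, millisecond)
}

fn main() {
    // truncation drops the last millisecond for inexact inputs
    assert_eq!(format_time(206.45), "00:03:26,449");
    // rounding instead would recover the intuitively expected 450 ms
    assert_eq!((206.45f64.fract() * 1e3).round() as u32, 450);
}
```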


@@ -0,0 +1,217 @@
use anyhow::{Context, Result, ensure};
use futures::TryStreamExt;
use futures::stream::FuturesUnordered;
use prost::Message;
use reqwest::Method;
use serde_json::Value;
use crate::bilibili::analyzer::PageAnalyzer;
use crate::bilibili::client::BiliClient;
use crate::bilibili::danmaku::{DanmakuElem, DanmakuWriter, DmSegMobileReply};
use crate::bilibili::subtitle::{SubTitle, SubTitleBody, SubTitleInfo, SubTitlesInfo};
use crate::bilibili::{Credential, ErrorForStatusExt, MIXIN_KEY, Validate, VideoInfo, WbiSign};
pub struct Video<'a> {
client: &'a BiliClient,
pub bvid: &'a str,
credential: &'a Credential,
}
#[derive(Debug, serde::Deserialize, Default)]
pub struct PageInfo {
pub cid: i64,
pub page: i32,
#[serde(rename = "part")]
pub name: String,
pub duration: u32,
pub first_frame: Option<String>,
pub dimension: Option<Dimension>,
}
#[derive(Debug, serde::Deserialize, Default)]
pub struct Dimension {
pub width: u32,
pub height: u32,
pub rotate: u32,
}
impl<'a> Video<'a> {
pub fn new(client: &'a BiliClient, bvid: &'a str, credential: &'a Credential) -> Self {
Self {
client,
bvid,
credential,
}
}
/// Call the video info API directly to get detailed video info, which includes the video's page (part) list
pub async fn get_view_info(&self) -> Result<VideoInfo> {
let mut res = self
.client
.request(
Method::GET,
"https://api.bilibili.com/x/web-interface/wbi/view",
self.credential,
)
.await
.query(&[("bvid", &self.bvid)])
.wbi_sign(MIXIN_KEY.load().as_deref())?
.send()
.await?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?
.validate()?;
Ok(serde_json::from_value(res["data"].take())?)
}
#[cfg(test)]
pub async fn get_pages(&self) -> Result<Vec<PageInfo>> {
let mut res = self
.client
.request(
Method::GET,
"https://api.bilibili.com/x/player/pagelist",
self.credential,
)
.await
.query(&[("bvid", &self.bvid)])
.send()
.await?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?
.validate()?;
Ok(serde_json::from_value(res["data"].take())?)
}
pub async fn get_tags(&self) -> Result<Vec<String>> {
let mut res = self
.client
.request(
Method::GET,
"https://api.bilibili.com/x/web-interface/view/detail/tag",
self.credential,
)
.await
.query(&[("bvid", &self.bvid)])
.send()
.await?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?
.validate()?;
Ok(res["data"]
.as_array_mut()
.context("tags is not an array")?
.iter_mut()
.filter_map(|v| if let Value::String(s) = v.take() { Some(s) } else { None })
.collect())
}
pub async fn get_danmaku_writer(&self, page: &'a PageInfo) -> Result<DanmakuWriter<'a>> {
let tasks = FuturesUnordered::new();
for i in 1..=page.duration.div_ceil(360) {
tasks.push(self.get_danmaku_segment(page, i as i64));
}
let result: Vec<Vec<DanmakuElem>> = tasks.try_collect().await?;
let mut result: Vec<DanmakuElem> = result.into_iter().flatten().collect();
result.sort_by_key(|d| d.progress);
Ok(DanmakuWriter::new(page, result.into_iter().map(|x| x.into()).collect()))
}
async fn get_danmaku_segment(&self, page: &PageInfo, segment_idx: i64) -> Result<Vec<DanmakuElem>> {
let mut res = self
.client
.request(
Method::GET,
"https://api.bilibili.com/x/v2/dm/wbi/web/seg.so",
self.credential,
)
.await
.query(&[("type", 1), ("oid", page.cid), ("segment_index", segment_idx)])
.wbi_sign(MIXIN_KEY.load().as_deref())?
.send()
.await?
.error_for_status_ext()?;
let headers = std::mem::take(res.headers_mut());
let content_type = headers.get("content-type");
ensure!(
content_type.is_some_and(|v| v == "application/octet-stream"),
"unexpected content type: {:?}, body: {:?}",
content_type,
res.text().await
);
Ok(DmSegMobileReply::decode(res.bytes().await?)?.elems)
}
pub async fn get_page_analyzer(&self, page: &PageInfo) -> Result<PageAnalyzer> {
let mut res = self
.client
.request(
Method::GET,
"https://api.bilibili.com/x/player/wbi/playurl",
self.credential,
)
.await
.query(&[
("bvid", self.bvid),
("qn", "127"),
("otype", "json"),
("fnval", "4048"),
("fourk", "1"),
])
.query(&[("cid", page.cid)])
.wbi_sign(MIXIN_KEY.load().as_deref())?
.send()
.await?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?
.validate()?;
Ok(PageAnalyzer::new(res["data"].take()))
}
pub async fn get_subtitles(&self, page: &PageInfo) -> Result<Vec<SubTitle>> {
let mut res = self
.client
.request(Method::GET, "https://api.bilibili.com/x/player/wbi/v2", self.credential)
.await
.query(&[("bvid", self.bvid)])
.query(&[("cid", page.cid)])
.wbi_sign(MIXIN_KEY.load().as_deref())?
.send()
.await?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?
.validate()?;
// The response contains a list of subtitles, each with its language and a JSON download URL
match serde_json::from_value::<Option<SubTitlesInfo>>(res["data"]["subtitle"].take())? {
Some(subtitles_info) => {
let tasks = subtitles_info
.subtitles
.into_iter()
.filter(|v| !v.is_ai_sub())
.map(|v| self.get_subtitle(v))
.collect::<FuturesUnordered<_>>();
tasks.try_collect().await
}
None => Ok(vec![]),
}
}
async fn get_subtitle(&self, info: SubTitleInfo) -> Result<SubTitle> {
let mut res = self
.client
.client // the inner client can be used directly here because this request needs no authentication
.request(Method::GET, format!("https:{}", &info.subtitle_url).as_str(), None)
.send()
.await?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?;
let body: SubTitleBody = serde_json::from_value(res["body"].take())?;
Ok(SubTitle { lan: info.lan, body })
}
}
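
As a side note on the segment loop in `get_danmaku_writer`: danmaku are fetched in 360-second segments, so the request count is `duration.div_ceil(360)`. A tiny standalone sketch of that arithmetic:

```rust
// Standalone sketch of the segment-count arithmetic used in
// get_danmaku_writer above: danmaku are served in 6-minute (360 s)
// segments, so a page of `duration` seconds needs ceil(duration / 360)
// requests (and a zero-length page needs none).
fn segment_count(duration: u32) -> u32 {
    duration.div_ceil(360)
}

fn main() {
    assert_eq!(segment_count(0), 0);
    assert_eq!(segment_count(360), 1);
    assert_eq!(segment_count(361), 2);
    assert_eq!(segment_count(725), 3);
}
```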


@@ -0,0 +1,50 @@
use anyhow::{Context, Result};
use async_stream::try_stream;
use futures::Stream;
use serde_json::Value;
use crate::bilibili::{BiliClient, Credential, ErrorForStatusExt, Validate, VideoInfo};
pub struct WatchLater<'a> {
client: &'a BiliClient,
credential: &'a Credential,
}
impl<'a> WatchLater<'a> {
pub fn new(client: &'a BiliClient, credential: &'a Credential) -> Self {
Self { client, credential }
}
async fn get_videos(&self) -> Result<Value> {
self.client
.request(
reqwest::Method::GET,
"https://api.bilibili.com/x/v2/history/toview",
self.credential,
)
.await
.send()
.await?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?
.validate()
}
pub fn into_video_stream(self) -> impl Stream<Item = Result<VideoInfo>> + 'a {
try_stream! {
let mut videos = self
.get_videos()
.await
.with_context(|| "Failed to get watch later list")?;
let list = &mut videos["data"]["list"];
if list.as_array().is_none_or(|v| v.is_empty()) {
return;
}
let videos_info: Vec<VideoInfo> =
serde_json::from_value(list.take()).with_context(|| "Failed to parse watch later list")?;
for video_info in videos_info {
yield video_info;
}
}
}
}


@@ -0,0 +1,54 @@
use std::borrow::Cow;
use std::path::PathBuf;
use std::sync::LazyLock;
use clap::Parser;
pub static ARGS: LazyLock<Args> = LazyLock::new(Args::parse);
#[derive(Parser)]
#[command(name = "Bili-Sync", version = detail_version(), about, long_about = None)]
pub struct Args {
#[arg(short, long, env = "SCAN_ONLY")]
pub scan_only: bool,
#[arg(short, long, default_value = "None,bili_sync=info", env = "RUST_LOG")]
pub log_level: String,
#[arg(short, long, env = "DISABLE_CREDENTIAL_REFRESH")]
pub disable_credential_refresh: bool,
#[arg(short, long, env = "BILI_SYNC_CONFIG_DIR")]
pub config_dir: Option<PathBuf>,
#[arg(short, long, env = "BILI_SYNC_FFMPEG_PATH")]
pub ffmpeg_path: Option<String>,
}
mod built_info {
include!(concat!(env!("OUT_DIR"), "/built.rs"));
}
pub fn version() -> Cow<'static, str> {
if let (Some(git_version), Some(git_dirty)) = (built_info::GIT_VERSION, built_info::GIT_DIRTY) {
Cow::Owned(format!("{}{}", git_version, if git_dirty { "-dirty" } else { "" }))
} else {
Cow::Borrowed(built_info::PKG_VERSION)
}
}
fn detail_version() -> String {
format!(
"{}
Architecture: {}-{}
Author: {}
Built Time: {}
Rustc Version: {}",
version(),
built_info::CFG_OS,
built_info::CFG_TARGET_ARCH,
built_info::PKG_AUTHORS,
built_info::BUILT_TIME_UTC,
built_info::RUSTC_VERSION,
)
}


@@ -0,0 +1,140 @@
use std::path::PathBuf;
use std::sync::{Arc, LazyLock};
use anyhow::{Result, bail};
use croner::parser::CronParser;
use itertools::Itertools;
use sea_orm::DatabaseConnection;
use serde::{Deserialize, Serialize};
use validator::Validate;
use crate::bilibili::{Credential, DanmakuOption, FilterOption};
use crate::config::args::ARGS;
use crate::config::default::{
default_auth_token, default_bind_address, default_collection_path, default_favorite_path, default_submission_path,
default_time_format,
};
use crate::config::item::{ConcurrentLimit, NFOTimeType, SkipOption, Trigger};
use crate::notifier::Notifier;
use crate::utils::model::{load_db_config, save_db_config};
pub static CONFIG_DIR: LazyLock<PathBuf> = LazyLock::new(|| {
ARGS.config_dir
.clone()
.or_else(|| dirs::config_dir().map(|dir| dir.join("bili-sync")))
.expect("No config path found")
});
#[derive(Serialize, Deserialize, Validate, Clone)]
pub struct Config {
pub auth_token: String,
pub bind_address: String,
pub credential: Credential,
pub filter_option: FilterOption,
pub danmaku_option: DanmakuOption,
#[serde(default)]
pub skip_option: SkipOption,
pub video_name: String,
pub page_name: String,
#[serde(default)]
pub notifiers: Option<Arc<Vec<Notifier>>>,
#[serde(default = "default_favorite_path")]
pub favorite_default_path: String,
#[serde(default = "default_collection_path")]
pub collection_default_path: String,
#[serde(default = "default_submission_path")]
pub submission_default_path: String,
pub interval: Trigger,
pub upper_path: PathBuf,
pub nfo_time_type: NFOTimeType,
pub concurrent_limit: ConcurrentLimit,
pub time_format: String,
pub cdn_sorting: bool,
#[serde(default)]
pub try_upower_anyway: bool,
pub version: u64,
}
impl Config {
pub async fn load_from_database(connection: &DatabaseConnection) -> Result<Option<Result<Self>>> {
load_db_config(connection).await
}
pub async fn save_to_database(&self, connection: &DatabaseConnection) -> Result<()> {
save_db_config(self, connection).await
}
pub fn check(&self) -> Result<()> {
let mut errors = Vec::new();
if !self.upper_path.is_absolute() {
errors.push("up 主头像保存的路径应为绝对路径");
}
if self.video_name.is_empty() {
errors.push("未设置 video_name 模板");
}
if self.page_name.is_empty() {
errors.push("未设置 page_name 模板");
}
let credential = &self.credential;
if credential.sessdata.is_empty()
|| credential.bili_jct.is_empty()
|| credential.buvid3.is_empty()
|| credential.dedeuserid.is_empty()
|| credential.ac_time_value.is_empty()
{
errors.push("Credential 信息不完整,请确保填写完整");
}
if !(self.concurrent_limit.video > 0 && self.concurrent_limit.page > 0) {
errors.push("video 和 page 允许的并发数必须大于 0");
}
match &self.interval {
Trigger::Interval(secs) => {
if *secs <= 60 {
errors.push("下载任务执行间隔时间必须大于 60 秒");
}
}
Trigger::Cron(cron) => {
if CronParser::builder()
.seconds(croner::parser::Seconds::Required)
.dom_and_dow(true)
.build()
.parse(cron)
.is_err()
{
errors.push("Cron 表达式无效,正确格式为“秒 分 时 日 月 周”");
}
}
};
if !errors.is_empty() {
bail!(errors.into_iter().map(|e| format!("- {}", e)).join("\n"));
}
Ok(())
}
}
impl Default for Config {
fn default() -> Self {
Self {
auth_token: default_auth_token(),
bind_address: default_bind_address(),
credential: Credential::default(),
filter_option: FilterOption::default(),
danmaku_option: DanmakuOption::default(),
skip_option: SkipOption::default(),
video_name: "{{title}}".to_owned(),
page_name: "{{bvid}}".to_owned(),
notifiers: None,
favorite_default_path: default_favorite_path(),
collection_default_path: default_collection_path(),
submission_default_path: default_submission_path(),
interval: Trigger::default(),
upper_path: CONFIG_DIR.join("upper_face"),
nfo_time_type: NFOTimeType::FavTime,
concurrent_limit: ConcurrentLimit::default(),
time_format: default_time_format(),
cdn_sorting: false,
try_upower_anyway: false,
version: 0,
}
}
}


@@ -0,0 +1,30 @@
use rand::seq::IndexedRandom;
/// Default auth_token implementation: generates a random 16-character string
pub(super) fn default_auth_token() -> String {
let byte_choices = b"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!@#$%^&*()_+-=";
let mut rng = rand::rng();
(0..16)
.map(|_| *(byte_choices.choose(&mut rng).expect("choose byte failed")) as char)
.collect()
}
pub(crate) fn default_bind_address() -> String {
"0.0.0.0:12345".to_string()
}
pub(super) fn default_time_format() -> String {
"%Y-%m-%d".to_string()
}
pub fn default_favorite_path() -> String {
"收藏夹/{{name}}".to_owned()
}
pub fn default_collection_path() -> String {
"合集/{{name}}".to_owned()
}
pub fn default_submission_path() -> String {
"投稿/{{name}}".to_owned()
}


@@ -0,0 +1,102 @@
use std::sync::LazyLock;
use anyhow::Result;
use handlebars::handlebars_helper;
use crate::config::versioned_cache::VersionedCache;
use crate::config::{Config, PathSafeTemplate};
use crate::notifier::{Notifier, webhook_template_content, webhook_template_key};
pub static TEMPLATE: LazyLock<VersionedCache<handlebars::Handlebars<'static>>> =
LazyLock::new(|| VersionedCache::new(create_template).expect("Failed to create handlebars template"));
fn create_template(config: &Config) -> Result<handlebars::Handlebars<'static>> {
let mut handlebars = handlebars::Handlebars::new();
handlebars.register_helper("truncate", Box::new(truncate));
handlebars.path_safe_register("video", config.video_name.clone())?;
handlebars.path_safe_register("page", config.page_name.clone())?;
handlebars.path_safe_register("favorite_default_path", config.favorite_default_path.clone())?;
handlebars.path_safe_register("collection_default_path", config.collection_default_path.clone())?;
handlebars.path_safe_register("submission_default_path", config.submission_default_path.clone())?;
if let Some(notifiers) = &config.notifiers {
for notifier in notifiers.iter() {
if let Notifier::Webhook { url, template, .. } = notifier {
handlebars.register_template_string(&webhook_template_key(url), webhook_template_content(template))?;
}
}
}
Ok(handlebars)
}
handlebars_helper!(truncate: |s: String, len: usize| {
if s.chars().count() > len {
s.chars().take(len).collect::<String>()
} else {
s.to_string()
}
});
#[cfg(test)]
mod tests {
use serde_json::json;
use super::*;
#[test]
fn test_template_usage() {
let mut template = handlebars::Handlebars::new();
template.register_helper("truncate", Box::new(truncate));
let _ = template.path_safe_register("video", "test{{bvid}}test");
let _ = template.path_safe_register("test_truncate", "哈哈,{{ truncate title 30 }}");
let _ = template.path_safe_register("test_path_unix", "{{ truncate title 7 }}/test/a");
let _ = template.path_safe_register("test_path_windows", r"{{ truncate title 7 }}\\test\\a");
#[cfg(not(windows))]
{
assert_eq!(
template
.path_safe_render("test_path_unix", &json!({"title": "关注/永雏塔菲喵"}))
.unwrap(),
"关注_永雏塔菲/test/a"
);
assert_eq!(
template
.path_safe_render("test_path_windows", &json!({"title": "关注/永雏塔菲喵"}))
.unwrap(),
"关注_永雏塔菲_test_a"
);
}
#[cfg(windows)]
{
assert_eq!(
template
.path_safe_render("test_path_unix", &json!({"title": "关注/永雏塔菲喵"}))
.unwrap(),
"关注_永雏塔菲_test_a"
);
assert_eq!(
template
.path_safe_render("test_path_windows", &json!({"title": "关注/永雏塔菲喵"}))
.unwrap(),
r"关注_永雏塔菲\\test\\a"
);
}
assert_eq!(
template
.path_safe_render("video", &json!({"bvid": "BV1b5411h7g7"}))
.unwrap(),
"testBV1b5411h7g7test"
);
assert_eq!(
template
.path_safe_render(
"test_truncate",
&json!({"title": "你说得对,但是 Rust 是由 Mozilla 自主研发的一款全新的编译期格斗游戏。\
编译将发生在一个被称作「Cargo」的构建系统中。在这里被引用的指针将被授予「生命周期」之力导引对象安全。\
你将扮演一位名为「Rustacean」的神秘角色在与「Rustc」的搏斗中邂逅各种骨骼惊奇的傲娇报错。\
征服她们、通过编译同时逐步发掘「C++」程序崩溃的真相。"})
)
.unwrap(),
"哈哈,你说得对,但是 Rust 是由 Mozilla 自主研发的一"
);
}
}
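
The `truncate` helper above counts characters rather than bytes, which matters for the CJK strings in these tests. A standalone sketch (not part of the source tree) of the same logic:

```rust
// Standalone copy of the truncate helper's logic: it counts chars,
// not bytes, so multi-byte CJK text is cut safely at a character
// boundary (naive byte-indexed slicing like &s[..3] would panic here).
fn truncate(s: &str, len: usize) -> String {
    if s.chars().count() > len {
        s.chars().take(len).collect()
    } else {
        s.to_string()
    }
}

fn main() {
    assert_eq!(truncate("哈哈,你说得对", 3), "哈哈,");
    assert_eq!(truncate("short", 10), "short");
}
```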


@@ -0,0 +1,100 @@
use anyhow::Result;
use serde::{Deserialize, Serialize};
use crate::utils::filenamify::filenamify;
/// Time type used in NFO files
#[derive(Serialize, Deserialize, Default, Clone, Copy)]
#[serde(rename_all = "lowercase")]
pub enum NFOTimeType {
#[default]
FavTime,
PubTime,
}
/// Configuration for concurrent downloads
#[derive(Serialize, Deserialize, Clone)]
pub struct ConcurrentLimit {
pub video: usize,
pub page: usize,
pub rate_limit: Option<RateLimit>,
#[serde(default)]
pub download: ConcurrentDownloadLimit,
}
#[derive(Serialize, Deserialize, Clone)]
pub struct ConcurrentDownloadLimit {
pub enable: bool,
pub concurrency: usize,
pub threshold: u64,
}
impl Default for ConcurrentDownloadLimit {
fn default() -> Self {
Self {
enable: true,
concurrency: 4,
threshold: 20 * (1 << 20), // 20 MB
}
}
}
#[derive(Serialize, Deserialize, Clone)]
pub struct RateLimit {
pub limit: usize,
pub duration: u64,
}
impl Default for ConcurrentLimit {
fn default() -> Self {
Self {
video: 3,
page: 2,
// default rate limit: 4 requests allowed every 250 ms
rate_limit: Some(RateLimit {
limit: 4,
duration: 250,
}),
download: ConcurrentDownloadLimit::default(),
}
}
}
#[derive(Serialize, Deserialize, Clone, Default)]
pub struct SkipOption {
pub no_poster: bool,
pub no_video_nfo: bool,
pub no_upper: bool,
pub no_danmaku: bool,
pub no_subtitle: bool,
}
#[derive(Serialize, Deserialize, Clone)]
#[serde(untagged)]
pub enum Trigger {
Interval(u64),
Cron(String),
}
impl Default for Trigger {
fn default() -> Self {
Trigger::Interval(1200)
}
}
pub trait PathSafeTemplate {
fn path_safe_register(&mut self, name: &'static str, template: impl Into<String>) -> Result<()>;
fn path_safe_render(&self, name: &'static str, data: &serde_json::Value) -> Result<String>;
}
/// Preserves path separators written in the template itself by swapping them for a placeholder string before sanitization
impl PathSafeTemplate for handlebars::Handlebars<'_> {
fn path_safe_register(&mut self, name: &'static str, template: impl Into<String>) -> Result<()> {
let template = template.into();
Ok(self.register_template_string(name, template.replace(std::path::MAIN_SEPARATOR_STR, "__SEP__"))?)
}
fn path_safe_render(&self, name: &'static str, data: &serde_json::Value) -> Result<String> {
Ok(filenamify(&self.render(name, data)?).replace("__SEP__", std::path::MAIN_SEPARATOR_STR))
}
}
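
The `__SEP__` round-trip can be shown without handlebars. This sketch uses a toy sanitizer (a stand-in for the real `filenamify`) and plain string substitution in place of template rendering; the `path_safe` helper name is ours, not the crate's:

```rust
// Toy illustration of the path-safe template round-trip: separators
// written in the template are protected as "__SEP__" at registration,
// the rendered result is sanitized (here: '/' -> '_', standing in for
// filenamify), and the placeholder is restored afterwards. Separators
// coming from user data are stripped; separators from the template survive.
fn path_safe(template: &str, title: &str) -> String {
    // register step: protect the template's own separator
    let protected = template.replace('/', "__SEP__");
    // render step: naive substitution standing in for handlebars
    let rendered = protected.replace("{title}", title);
    // sanitize the whole result, then restore the protected separators
    rendered.replace('/', "_").replace("__SEP__", "/")
}

fn main() {
    assert_eq!(path_safe("{title}/video", "关注/永雏塔菲"), "关注_永雏塔菲/video");
}
```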


@@ -0,0 +1,15 @@
mod args;
mod current;
mod default;
mod handlebar;
mod item;
mod versioned_cache;
mod versioned_config;
pub use crate::config::args::{ARGS, version};
pub use crate::config::current::{CONFIG_DIR, Config};
pub(crate) use crate::config::default::default_bind_address;
pub use crate::config::handlebar::TEMPLATE;
pub use crate::config::item::{ConcurrentDownloadLimit, NFOTimeType, PathSafeTemplate, RateLimit, Trigger};
pub use crate::config::versioned_cache::VersionedCache;
pub use crate::config::versioned_config::VersionedConfig;


@@ -0,0 +1,56 @@
use std::sync::Arc;
use anyhow::Result;
use arc_swap::{ArcSwap, Guard};
use tokio_util::future::FutureExt;
use tokio_util::sync::CancellationToken;
use crate::config::{Config, VersionedConfig};
pub struct VersionedCache<T> {
inner: Arc<ArcSwap<T>>,
cancel_token: CancellationToken,
}
/// A cache that refreshes itself automatically whenever the global config changes
impl<T: Send + Sync + 'static> VersionedCache<T> {
pub fn new(builder: fn(&Config) -> Result<T>) -> Result<Self> {
let mut rx = VersionedConfig::get().subscribe();
let initial_value = builder(&rx.borrow_and_update())?;
let cancel_token = CancellationToken::new();
let inner = Arc::new(ArcSwap::from_pointee(initial_value));
let inner_clone = inner.clone();
tokio::spawn(
async move {
while rx.changed().await.is_ok() {
match builder(&rx.borrow()) {
Ok(new_value) => {
inner_clone.store(Arc::new(new_value));
}
Err(e) => {
error!("Failed to update versioned cache: {:?}", e);
}
}
}
}
.with_cancellation_token_owned(cancel_token.clone()),
);
Ok(Self { inner, cancel_token })
}
/// Get a temporary read-only guard
pub fn read(&self) -> Guard<Arc<T>> {
self.inner.load()
}
/// Get a full snapshot of the current cached value
pub fn snapshot(&self) -> Arc<T> {
self.inner.load_full()
}
}
impl<T> Drop for VersionedCache<T> {
fn drop(&mut self) {
self.cancel_token.cancel();
}
}


@@ -0,0 +1,126 @@
use std::sync::Arc;
use anyhow::{Result, bail};
use arc_swap::{ArcSwap, Guard};
use sea_orm::DatabaseConnection;
use tokio::sync::{OnceCell, watch};
use crate::bilibili::Credential;
use crate::config::Config;
static VERSIONED_CONFIG: OnceCell<VersionedConfig> = OnceCell::const_new();
pub struct VersionedConfig {
inner: ArcSwap<Config>,
update_lock: tokio::sync::Mutex<()>,
tx: watch::Sender<Arc<Config>>,
rx: watch::Receiver<Arc<Config>>,
}
impl VersionedConfig {
/// Initialize the global `VersionedConfig`; returns an error if initialization fails or has already happened
pub async fn init(connection: &DatabaseConnection) -> Result<&'static VersionedConfig> {
VERSIONED_CONFIG
.get_or_try_init(|| async move {
let mut config = match Config::load_from_database(connection).await? {
Some(Ok(config)) => config,
Some(Err(e)) => bail!("解析数据库配置失败: {}", e),
None => {
let config = Config::default();
warn!(
"生成 auth_token:{},可使用该 token 登录 web UI,该信息仅在首次运行时打印",
config.auth_token
config.save_to_database(connection).await?;
config
}
};
// `version` carries no meaning by itself; it only guards concurrent updates, so it can be reset on initialization
config.version = 0;
Ok(VersionedConfig::new(config))
})
.await
}
#[cfg(test)]
/// Test-only: loads the config from the test database and stores it in the global VERSIONED_CONFIG
pub async fn init_for_test(connection: &DatabaseConnection) -> Result<&'static VersionedConfig> {
VERSIONED_CONFIG
.get_or_try_init(|| async move {
let Some(Ok(config)) = Config::load_from_database(connection).await? else {
bail!("no config found in test database");
};
Ok(VersionedConfig::new(config))
})
.await
}
#[cfg(not(test))]
/// Get the global `VersionedConfig`; panics if it has not been initialized
pub fn get() -> &'static VersionedConfig {
VERSIONED_CONFIG.get().expect("VERSIONED_CONFIG is not initialized")
}
#[cfg(test)]
/// Try to get the global `VersionedConfig`, falling back to the default config if uninitialized
pub fn get() -> &'static VersionedConfig {
use std::sync::LazyLock;
static FALLBACK_CONFIG: LazyLock<VersionedConfig> = LazyLock::new(|| VersionedConfig::new(Config::default()));
// prefer the global instance; fall back to the default config if it is uninitialized
VERSIONED_CONFIG.get().unwrap_or_else(|| &FALLBACK_CONFIG)
}
fn new(config: Config) -> Self {
let inner = ArcSwap::from_pointee(config);
let (tx, rx) = watch::channel(inner.load_full());
Self {
inner,
update_lock: tokio::sync::Mutex::new(()),
tx,
rx,
}
}
pub fn read(&self) -> Guard<Arc<Config>> {
self.inner.load()
}
pub fn snapshot(&self) -> Arc<Config> {
self.inner.load_full()
}
pub fn subscribe(&self) -> watch::Receiver<Arc<Config>> {
self.rx.clone()
}
pub async fn update_credential(
&self,
new_credential: Credential,
connection: &DatabaseConnection,
) -> Result<Arc<Config>> {
let _lock = self.update_lock.lock().await;
let mut new_config = self.inner.load().as_ref().clone();
new_config.credential = new_credential;
new_config.version += 1;
new_config.save_to_database(connection).await?;
let new_config = Arc::new(new_config);
self.inner.store(new_config.clone());
self.tx.send(new_config.clone())?;
Ok(new_config)
}
/// Called by the external API; returns an error directly if the update fails
pub async fn update(&self, mut new_config: Config, connection: &DatabaseConnection) -> Result<Arc<Config>> {
let _lock = self.update_lock.lock().await;
let old_config = self.inner.load();
if old_config.version != new_config.version {
bail!("配置版本不匹配,请刷新页面修改后重新提交");
}
new_config.version += 1;
new_config.save_to_database(connection).await?;
let new_config = Arc::new(new_config);
self.inner.store(new_config.clone());
self.tx.send(new_config.clone())?;
Ok(new_config)
}
}
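
The version check in `update` is plain optimistic concurrency control. A std-only sketch of the same idea, with the `Versioned` type and its field names invented here for illustration:

```rust
// Std-only sketch of the optimistic version check used by `update`
// above: an update built against a stale version is rejected, and a
// successful update bumps the version so concurrent editors conflict.
use std::sync::Mutex;

struct Versioned {
    inner: Mutex<(u64, String)>, // (version, payload)
}

impl Versioned {
    fn update(&self, base_version: u64, payload: String) -> Result<u64, &'static str> {
        let mut guard = self.inner.lock().unwrap();
        if guard.0 != base_version {
            return Err("version mismatch, please reload and retry");
        }
        *guard = (guard.0 + 1, payload);
        Ok(guard.0)
    }
}

fn main() {
    let cfg = Versioned { inner: Mutex::new((0, "a".into())) };
    assert_eq!(cfg.update(0, "b".into()), Ok(1));
    // a second editor still holding version 0 is rejected
    assert!(cfg.update(0, "c".into()).is_err());
}
```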


@@ -0,0 +1,71 @@
use std::path::Path;
use std::time::Duration;
use anyhow::{Context, Result, bail};
use bili_sync_migration::{Migrator, MigratorTrait};
use sea_orm::sqlx::sqlite::{SqliteConnectOptions, SqliteJournalMode, SqliteSynchronous};
use sea_orm::sqlx::{ConnectOptions as SqlxConnectOptions, Sqlite};
use sea_orm::{ConnectOptions, ConnectionTrait, Database, DatabaseConnection, SqlxSqliteConnector, Statement};
fn database_url(path: &Path) -> String {
format!("sqlite://{}?mode=rwc", path.to_string_lossy())
}
async fn database_connection(database_url: &str) -> Result<DatabaseConnection> {
let mut option = ConnectOptions::new(database_url);
option
.max_connections(50)
.min_connections(5)
.acquire_timeout(Duration::from_secs(90));
let connect_option = option
.get_url()
.parse::<SqliteConnectOptions>()
.context("Failed to parse database URL")?
.disable_statement_logging()
.busy_timeout(Duration::from_secs(90))
.journal_mode(SqliteJournalMode::Wal)
.synchronous(SqliteSynchronous::Normal)
.optimize_on_close(true, None);
Ok(SqlxSqliteConnector::from_sqlx_sqlite_pool(
option
.sqlx_pool_options::<Sqlite>()
.connect_with(connect_option)
.await?,
))
}
async fn migrate_database(database_url: &str) -> Result<()> {
// Note: this deliberately uses an internally constructed DatabaseConnection instead of one from database_connection(),
// because a multi-connection pool causes odd migration-ordering issues that the default connect options avoid
let connection = Database::connect(database_url).await?;
// To avoid https://github.com/amtoaer/bili-sync/issues/571, check the current version against the migration table before migrating.
// If the user upgrades directly from a version below 2.6.0, the migrations cannot handle it, so fail instead of migrating.
if connection
.query_one(Statement::from_string(
connection.get_database_backend(),
"SELECT 1 FROM seaql_migrations WHERE version = 'm20250613_043257_add_config';",
))
.await
.is_ok_and(|res| res.is_none())
{
// the query succeeded with an empty result, i.e. m20250613_043257_add_config is missing, so the version is below 2.6.0
bail!("该版本仅支持从 2.6.x 以上的版本升级,请先升级至 2.6.x 或 2.7.x 完成配置迁移,再升级至最新版本。");
}
Ok(Migrator::up(&connection, None).await?)
}
/// Run database migrations and return a database connection for external use
pub async fn setup_database(path: &Path) -> Result<DatabaseConnection> {
if let Some(parent) = path.parent() {
tokio::fs::create_dir_all(parent).await.context(
"Failed to create config directory. Please check if you have granted necessary permissions to your folder.",
)?;
}
let database_url = database_url(path);
migrate_database(&database_url)
.await
.context("Failed to migrate database")?;
database_connection(&database_url)
.await
.context("Failed to connect to database")
}
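
For reference, the URL shape that `database_url` produces can be checked in isolation; `mode=rwc` asks SQLite to open the file read/write and create it if missing:

```rust
// Standalone copy of database_url above; "mode=rwc" requests
// read/write access with create-if-missing semantics.
use std::path::Path;

fn database_url(path: &Path) -> String {
    format!("sqlite://{}?mode=rwc", path.to_string_lossy())
}

fn main() {
    assert_eq!(
        database_url(Path::new("/data/bili-sync.sqlite")),
        "sqlite:///data/bili-sync.sqlite?mode=rwc"
    );
}
```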


@@ -0,0 +1,344 @@
use core::str;
use std::io::SeekFrom;
use std::path::Path;
use std::sync::Arc;
use anyhow::{Context, Result, bail, ensure};
use async_tempfile::TempFile;
use futures::TryStreamExt;
use reqwest::{Method, StatusCode, header};
use tokio::fs::{self};
use tokio::io::{AsyncSeekExt, AsyncWriteExt};
use tokio::process::Command;
use tokio::task::JoinSet;
use tokio_util::io::StreamReader;
use crate::bilibili::{Client, ErrorForStatusExt};
use crate::config::{ARGS, ConcurrentDownloadLimit};
pub struct Downloader {
client: Client,
}
impl Downloader {
// Downloader is built with a Client that carries default headers.
// Downloading from the resolved URL needs no cookies for authentication,
// but without the default headers the download hits 403 Forbidden.
pub fn new(client: Client) -> Self {
Self { client }
}
pub async fn fetch(&self, url: &str, path: &Path, concurrent_download: &ConcurrentDownloadLimit) -> Result<()> {
let mut temp_file = TempFile::new().await?;
self.fetch_internal(url, &mut temp_file, false, concurrent_download)
.await?;
if let Some(parent) = path.parent() {
fs::create_dir_all(parent).await?;
}
fs::copy(temp_file.file_path(), path).await?;
// Dropping temp_file requires std::fs::remove_file.
// Letting Rust run it implicitly is correct but briefly blocks the async context,
// so call it explicitly: on the normal path, file removal then runs via spawn_blocking on a dedicated thread.
temp_file.drop_async().await;
Ok(())
}
pub async fn multi_fetch(
&self,
urls: &[&str],
path: &Path,
concurrent_download: &ConcurrentDownloadLimit,
) -> Result<()> {
let temp_file = self.multi_fetch_internal(urls, true, concurrent_download).await?;
if let Some(parent) = path.parent() {
fs::create_dir_all(parent).await?;
}
fs::copy(temp_file.file_path(), path).await?;
temp_file.drop_async().await;
Ok(())
}
pub async fn multi_fetch_and_merge(
&self,
video_urls: &[&str],
audio_urls: &[&str],
path: &Path,
concurrent_download: &ConcurrentDownloadLimit,
) -> Result<()> {
let (video_temp_file, audio_temp_file) = tokio::try_join!(
self.multi_fetch_internal(video_urls, true, concurrent_download),
self.multi_fetch_internal(audio_urls, true, concurrent_download)
)?;
let final_temp_file = TempFile::new().await?;
let output = Command::new(ARGS.ffmpeg_path.as_deref().unwrap_or("ffmpeg"))
.args([
"-i",
video_temp_file.file_path().to_string_lossy().as_ref(),
"-i",
audio_temp_file.file_path().to_string_lossy().as_ref(),
"-c",
"copy",
"-strict",
"unofficial",
"-f",
"mp4",
"-y",
final_temp_file.file_path().to_string_lossy().as_ref(),
])
.output()
.await
.context("failed to run ffmpeg")?;
if !output.status.success() {
bail!("ffmpeg error: {}", str::from_utf8(&output.stderr).unwrap_or("unknown"));
}
if let Some(parent) = path.parent() {
fs::create_dir_all(parent).await?;
}
fs::copy(final_temp_file.file_path(), path).await?;
tokio::join!(
video_temp_file.drop_async(),
audio_temp_file.drop_async(),
final_temp_file.drop_async()
);
Ok(())
}
async fn multi_fetch_internal(
&self,
urls: &[&str],
is_stream: bool,
concurrent_download: &ConcurrentDownloadLimit,
) -> Result<TempFile> {
if urls.is_empty() {
bail!("no urls provided");
}
let mut temp_file = TempFile::new().await?;
for (idx, url) in urls.iter().enumerate() {
match self
.fetch_internal(url, &mut temp_file, is_stream, concurrent_download)
.await
{
Ok(_) => return Ok(temp_file),
Err(e) => {
if idx == urls.len() - 1 {
temp_file.drop_async().await;
return Err(e).with_context(|| format!("failed to download file from all {} urls", urls.len()));
}
temp_file.set_len(0).await?;
temp_file.rewind().await?;
}
}
}
unreachable!()
}
async fn fetch_internal(
&self,
url: &str,
file: &mut TempFile,
is_stream: bool,
concurrent_download: &ConcurrentDownloadLimit,
) -> Result<()> {
if concurrent_download.enable {
self.fetch_parallel(url, file, is_stream, concurrent_download).await
} else {
self.fetch_serial(url, file).await
}
}
async fn fetch_serial(&self, url: &str, file: &mut TempFile) -> Result<()> {
let resp = self
.client
.request(Method::GET, url, None)
.send()
.await?
.error_for_status_ext()?;
let expected = resp.header_content_length();
let mut stream_reader = StreamReader::new(resp.bytes_stream().map_err(std::io::Error::other));
let received = tokio::io::copy(&mut stream_reader, file).await?;
file.flush().await?;
if let Some(expected) = expected {
ensure!(
received == expected,
"downloaded bytes mismatch: expected {}, got {}",
expected,
received
);
}
Ok(())
}
async fn fetch_parallel(
&self,
url: &str,
file: &mut TempFile,
is_stream: bool,
concurrent_download: &ConcurrentDownloadLimit,
) -> Result<()> {
let (concurrency, threshold) = (concurrent_download.concurrency, concurrent_download.threshold);
let file_size = if is_stream {
// Bilibili video/audio streams may return 404 for HEAD while GET works fine; assume range support here and probe directly with a GET request carrying a Range header
let resp = self
.client
.request(Method::GET, url, None)
.header(header::RANGE, "bytes=0-0")
.send()
.await?
.error_for_status_ext()?;
if resp.status() != StatusCode::PARTIAL_CONTENT {
return self.fetch_serial(url, file).await;
}
resp.header_file_size()
} else {
// For ordinary files, probe directly with a regular HEAD request
let resp = self
.client
.request(Method::HEAD, url, None)
.send()
.await?
.error_for_status_ext()?;
if resp
.headers()
.get(header::ACCEPT_RANGES)
// https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Accept-Ranges#none
.is_none_or(|v| v.to_str().unwrap_or_default() == "none")
{
return self.fetch_serial(url, file).await;
}
resp.header_content_length()
};
let Some(file_size) = file_size else {
return self.fetch_serial(url, file).await;
};
let chunk_size = file_size / concurrency as u64;
if chunk_size < threshold {
return self.fetch_serial(url, file).await;
}
file.set_len(file_size).await?;
let mut tasks = JoinSet::new();
let url = Arc::new(url.to_string());
for i in 0..concurrency {
let start = i as u64 * chunk_size;
let end = if i == concurrency - 1 {
file_size
} else {
start + chunk_size
} - 1;
let (url_clone, client_clone) = (url.clone(), self.client.clone());
let mut file_clone = file.open_rw().await?;
tasks.spawn(async move {
file_clone.seek(SeekFrom::Start(start)).await?;
let range_header = format!("bytes={}-{}", start, end);
let resp = client_clone
.request(Method::GET, &url_clone, None)
.header(header::RANGE, &range_header)
.send()
.await?
.error_for_status_ext()?;
if let Some(content_length) = resp.header_content_length() {
ensure!(
content_length == end - start + 1,
"content length mismatch: expected {}, got {}",
end - start + 1,
content_length
);
}
let mut stream_reader = StreamReader::new(resp.bytes_stream().map_err(std::io::Error::other));
let received = tokio::io::copy(&mut stream_reader, &mut file_clone).await?;
file_clone.flush().await?;
ensure!(
received == end - start + 1,
"downloaded bytes mismatch: expected {}, got {}",
end - start + 1,
received,
);
Ok(())
});
}
while let Some(res) = tasks.join_next().await {
res??;
}
Ok(())
}
}
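The byte-range partitioning inside `fetch_parallel` can be isolated into a pure function for illustration. The sketch below (`chunk_ranges` is an illustrative name, not part of the crate) reproduces the same arithmetic: each of `concurrency` workers gets `file_size / concurrency` bytes, the last chunk absorbs the remainder, and ranges are inclusive to match the `bytes=start-end` Range header:

```rust
/// Illustrative sketch (not crate code): fetch_parallel's inclusive
/// byte-range split. The last chunk absorbs the remainder.
fn chunk_ranges(file_size: u64, concurrency: usize) -> Vec<(u64, u64)> {
    let chunk_size = file_size / concurrency as u64;
    (0..concurrency)
        .map(|i| {
            let start = i as u64 * chunk_size;
            let end = if i == concurrency - 1 {
                file_size // last chunk runs to the end of the file
            } else {
                start + chunk_size
            } - 1; // inclusive end, as in "bytes=start-end"
            (start, end)
        })
        .collect()
}

fn main() {
    // 10 bytes over 3 workers: lengths 3 + 3 + 4 cover the whole file.
    assert_eq!(chunk_ranges(10, 3), vec![(0, 2), (3, 5), (6, 9)]);
}
```

Note that the real code bails out to `fetch_serial` before this point whenever `chunk_size` falls below the configured threshold.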
/// reqwest's content_length() surprisingly refers to the body size rather than the Content-Length header, so we have to implement it ourselves
/// https://github.com/seanmonstar/reqwest/issues/1814
trait ResponseExt {
/// Get the value of the Content-Length header
fn header_content_length(&self) -> Option<u64>;
/// Get the total-file-size portion of the Content-Range header
fn header_file_size(&self) -> Option<u64>;
}
impl ResponseExt for reqwest::Response {
fn header_content_length(&self) -> Option<u64> {
self.headers()
.get(header::CONTENT_LENGTH)
.and_then(|v| v.to_str().ok())
.and_then(|s| s.parse::<u64>().ok())
}
fn header_file_size(&self) -> Option<u64> {
self.headers()
.get(header::CONTENT_RANGE)
.and_then(|v| v.to_str().ok())
.and_then(|s| {
// Content-Range: bytes 0-0/800946
s.rsplit_once('/')
})
.and_then(|(_, size_str)| size_str.parse::<u64>().ok())
}
}
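The Content-Range parsing above boils down to an `rsplit_once('/')` plus a parse of the tail; pulled out of the reqwest header plumbing it looks like this (standalone sketch, `parse_content_range_size` is an illustrative name):

```rust
/// Illustrative sketch: extract the total size from a Content-Range
/// value such as "bytes 0-0/800946". An unknown size ("*") yields None.
fn parse_content_range_size(value: &str) -> Option<u64> {
    value
        .rsplit_once('/')
        .and_then(|(_, size)| size.parse::<u64>().ok())
}

fn main() {
    assert_eq!(parse_content_range_size("bytes 0-0/800946"), Some(800946));
    assert_eq!(parse_content_range_size("bytes 0-0/*"), None);
}
```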
#[cfg(test)]
mod tests {
use std::path::Path;
use anyhow::Result;
use crate::bilibili::{BestStream, BiliClient, Video};
use crate::config::VersionedConfig;
use crate::database::setup_database;
use crate::downloader::Downloader;
#[ignore = "only for manual test"]
#[tokio::test(flavor = "multi_thread")]
async fn test_parse_and_download_video() -> Result<()> {
VersionedConfig::init_for_test(&setup_database(Path::new("./test.sqlite")).await?).await?;
let config = VersionedConfig::get().read();
let client = BiliClient::new();
let video = Video::new(&client, "BV1QJmaYKEv4", &config.credential);
let pages = video.get_pages().await.expect("failed to get pages");
let first_page = pages.into_iter().next().expect("no page found");
let mut page_analyzer = video
.get_page_analyzer(&first_page)
.await
.expect("failed to get page analyzer");
let json_info = serde_json::to_string_pretty(&page_analyzer.info)?;
tokio::fs::write("./debug_playurl.json", json_info).await?;
let best_stream = page_analyzer
.best_stream(&config.filter_option)
.expect("failed to get best stream");
let BestStream::VideoAudio {
video,
audio: Some(audio),
} = best_stream
else {
panic!("best stream is not video & audio");
};
dbg!(&video);
dbg!(&audio);
let downloader = Downloader::new(client.client);
downloader
.multi_fetch_and_merge(
&video.urls(true),
&audio.urls(true),
Path::new("./output.mp4"),
&config.concurrent_limit.download,
)
.await
.expect("failed to download video");
Ok(())
}
}


@@ -0,0 +1,50 @@
use std::io;
use anyhow::Result;
pub enum ExecutionStatus {
Skipped,
Succeeded,
Ignored(anyhow::Error),
Failed(anyhow::Error),
// A task can return this status to pin its own status value
Fixed(u32),
}
// Stable Rust does not yet seem to support the ? operator on custom types, so the return value has to use Result first and then get wrapped like this
impl From<Result<ExecutionStatus>> for ExecutionStatus {
fn from(res: Result<ExecutionStatus>) -> Self {
match res {
Ok(status) => status,
Err(err) => {
for cause in err.chain() {
if let Some(io_err) = cause.downcast_ref::<io::Error>() {
// Permission error
if io_err.kind() == io::ErrorKind::PermissionDenied {
return ExecutionStatus::Ignored(err);
}
// reqwest::Error wrapped in an io::Error
if io_err.kind() == io::ErrorKind::Other
&& io_err.get_ref().is_some_and(|e| {
e.downcast_ref::<reqwest::Error>().is_some_and(is_ignored_reqwest_error)
})
{
return ExecutionStatus::Ignored(err);
}
}
// Unwrapped reqwest::Error
if let Some(error) = cause.downcast_ref::<reqwest::Error>()
&& is_ignored_reqwest_error(error)
{
return ExecutionStatus::Ignored(err);
}
}
ExecutionStatus::Failed(err)
}
}
}
}
fn is_ignored_reqwest_error(err: &reqwest::Error) -> bool {
err.is_decode() || err.is_body() || err.is_timeout()
}


@@ -0,0 +1,134 @@
#[macro_use]
extern crate tracing;
mod adapter;
mod api;
mod bilibili;
mod config;
mod database;
mod downloader;
mod error;
mod notifier;
mod task;
mod utils;
mod workflow;
use std::collections::VecDeque;
use std::fmt::Debug;
use std::future::Future;
use std::sync::Arc;
use anyhow::{Context, Result, bail};
use bilibili::BiliClient;
use parking_lot::RwLock;
use sea_orm::DatabaseConnection;
use task::{http_server, video_downloader};
use tokio::process::Command;
use tokio_util::sync::CancellationToken;
use tokio_util::task::TaskTracker;
use crate::api::{LogHelper, MAX_HISTORY_LOGS};
use crate::config::{ARGS, CONFIG_DIR, VersionedConfig};
use crate::database::setup_database;
use crate::utils::init_logger;
use crate::utils::signal::terminate;
#[tokio::main]
async fn main() {
let (bili_client, connection, log_writer) = match init().await {
Ok(res) => res,
Err(e) => {
error!("初始化失败:{:#}", e);
return;
}
};
let token = CancellationToken::new();
let tracker = TaskTracker::new();
spawn_task(
"HTTP 服务",
http_server(connection.clone(), bili_client.clone(), log_writer),
&tracker,
token.clone(),
);
spawn_task(
"定时下载",
video_downloader(connection.clone(), bili_client),
&tracker,
token.clone(),
);
tracker.close();
handle_shutdown(connection, tracker, token).await
}
fn spawn_task(
task_name: &'static str,
task: impl Future<Output = impl Debug> + Send + 'static,
tracker: &TaskTracker,
token: CancellationToken,
) {
tracker.spawn(async move {
tokio::select! {
res = task => {
error!("「{}」异常结束,返回结果为:「{:?}」,取消其它仍在执行的任务..", task_name, res);
token.cancel();
},
_ = token.cancelled() => {
info!("「{}」接收到取消信号,终止运行..", task_name);
}
}
});
}
/// Initialize the logging system, print the welcome message, and initialize the database connection and global config
async fn init() -> Result<(Arc<BiliClient>, DatabaseConnection, LogHelper)> {
let (tx, _rx) = tokio::sync::broadcast::channel(30);
let log_history = Arc::new(RwLock::new(VecDeque::with_capacity(MAX_HISTORY_LOGS + 1)));
let log_writer = LogHelper::new(tx, log_history.clone());
init_logger(&ARGS.log_level, Some(log_writer.clone()));
info!("欢迎使用 Bili-Sync,当前程序版本:{}", config::version());
info!("项目地址:https://github.com/amtoaer/bili-sync");
let ffmpeg_path = ARGS.ffmpeg_path.as_deref().unwrap_or("ffmpeg");
let ffmpeg_exists = Command::new(ffmpeg_path)
.arg("-version")
.output()
.await
.map(|output| output.status.success())
.unwrap_or(false);
if !ffmpeg_exists {
bail!("ffmpeg 不存在或无法执行,请确保已正确安装 ffmpeg,并且 {ffmpeg_path} 命令可用");
}
let connection = setup_database(&CONFIG_DIR.join("data.sqlite"))
.await
.context("数据库初始化失败")?;
info!("数据库初始化完成");
VersionedConfig::init(&connection).await.context("配置初始化失败")?;
info!("配置初始化完成");
Ok((Arc::new(BiliClient::new()), connection, log_writer))
}
async fn handle_shutdown(connection: DatabaseConnection, tracker: TaskTracker, token: CancellationToken) {
tokio::select! {
_ = tracker.wait() => {
error!("所有任务均已终止..")
}
_ = terminate() => {
info!("接收到终止信号,开始终止任务..");
token.cancel();
tracker.wait().await;
info!("所有任务均已终止..");
}
}
info!("正在关闭数据库连接..");
match connection.close().await {
Ok(()) => info!("数据库连接已关闭,程序结束"),
Err(e) => error!("关闭数据库连接时遇到错误:{:#},程序异常结束", e),
}
}


@@ -0,0 +1,67 @@
use bili_sync_entity::video;
use crate::utils::status::{STATUS_OK, VideoStatus};
pub enum DownloadNotifyInfo {
List {
source: String,
img_url: Option<String>,
titles: Vec<String>,
},
Summary {
source: String,
img_url: Option<String>,
count: usize,
},
}
impl DownloadNotifyInfo {
pub fn new(source: String) -> Self {
Self::List {
source,
img_url: None,
titles: Vec::with_capacity(10),
}
}
pub fn record(&mut self, models: &[video::ActiveModel]) {
let success_models = models
.iter()
.filter(|m| {
let sub_task_status: [u32; 5] = VideoStatus::from(*m.download_status.as_ref()).into();
sub_task_status.into_iter().all(|s| s == STATUS_OK)
})
.collect::<Vec<_>>();
match self {
Self::List {
source,
img_url,
titles,
} => {
let count = success_models.len() + titles.len();
if count > 10 {
*self = Self::Summary {
source: std::mem::take(source),
img_url: std::mem::take(img_url),
count,
};
} else {
if img_url.is_none() {
*img_url = success_models.first().map(|m| m.cover.as_ref().clone());
}
titles.extend(success_models.into_iter().map(|m| m.name.as_ref().clone()));
}
}
Self::Summary { count, .. } => *count += success_models.len(),
}
}
pub fn should_notify(&self) -> bool {
if let Self::List { titles, .. } = self
&& titles.is_empty()
{
return false;
}
true
}
}
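The List → Summary switchover in `record()` can be shown in reduced, standalone form (the `Notify` enum below is illustrative and drops the sea-orm ActiveModel status filtering): once the accumulated title count exceeds 10, the variant collapses to a bare count and only counts from then on.

```rust
/// Reduced sketch of DownloadNotifyInfo's record() state machine,
/// without the ActiveModel download-status filtering.
enum Notify {
    List(Vec<String>),
    Summary(usize),
}

impl Notify {
    fn record(&mut self, new_titles: Vec<String>) {
        match self {
            Notify::List(titles) => {
                let count = titles.len() + new_titles.len();
                if count > 10 {
                    // Too many entries for a list: collapse to a count.
                    *self = Notify::Summary(count);
                } else {
                    titles.extend(new_titles);
                }
            }
            // Once summarized, only the count is maintained.
            Notify::Summary(count) => *count += new_titles.len(),
        }
    }
}

fn main() {
    let mut n = Notify::List(Vec::new());
    n.record(vec!["a".to_string(); 4]);
    assert!(matches!(&n, Notify::List(t) if t.len() == 4));
    n.record(vec!["b".to_string(); 8]); // 4 + 8 > 10
    assert!(matches!(n, Notify::Summary(12)));
}
```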


@@ -0,0 +1,59 @@
use std::borrow::Cow;
use itertools::Itertools;
use serde::Serialize;
use crate::notifier::DownloadNotifyInfo;
#[derive(Serialize)]
pub struct Message<'a> {
pub message: Cow<'a, str>,
pub image_url: Option<String>,
}
impl<'a> From<&'a str> for Message<'a> {
fn from(message: &'a str) -> Self {
Self {
message: Cow::Borrowed(message),
image_url: None,
}
}
}
impl From<String> for Message<'_> {
fn from(message: String) -> Self {
Self {
message: message.into(),
image_url: None,
}
}
}
impl From<DownloadNotifyInfo> for Message<'_> {
fn from(info: DownloadNotifyInfo) -> Self {
match info {
DownloadNotifyInfo::List {
source,
img_url,
titles,
} => Self {
message: format!(
"{}的 {} 条新视频已入库:\n{}",
source,
titles.len(),
titles
.into_iter()
.enumerate()
.map(|(i, title)| format!("{}. {title}", i + 1))
.join("\n")
)
.into(),
image_url: img_url,
},
DownloadNotifyInfo::Summary { source, img_url, count } => Self {
message: format!("{}的 {} 条新视频已入库,快去看看吧!", source, count).into(),
image_url: img_url,
},
}
}
}


@@ -0,0 +1,116 @@
mod info;
mod message;
use std::collections::HashMap;
use anyhow::Result;
use futures::future;
pub use info::DownloadNotifyInfo;
pub use message::Message;
use reqwest::header;
use serde::{Deserialize, Serialize};
use crate::config::TEMPLATE;
#[derive(Debug, Clone, Deserialize, Serialize)]
#[serde(rename_all = "camelCase", tag = "type")]
pub enum Notifier {
Telegram {
bot_token: String,
chat_id: String,
#[serde(default)]
skip_image: bool,
},
Webhook {
url: String,
template: Option<String>,
#[serde(default)]
headers: Option<HashMap<String, String>>,
#[serde(skip)]
// An internal helper field deciding whether to force-render the current template; used in tests
ignore_cache: Option<()>,
},
}
pub fn webhook_template_key(url: &str) -> String {
format!("payload_{}", url)
}
pub fn webhook_template_content(template: &Option<String>) -> &str {
template
.as_deref()
.filter(|t| !t.trim().is_empty())
.unwrap_or(r#"{"text": "{{{message}}}"}"#)
}
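Worth noting about the fallback behavior: a missing, empty, or whitespace-only template all fall back to the built-in payload. The function is small enough to run standalone (copied verbatim below for demonstration):

```rust
/// Copy of webhook_template_content above, runnable standalone: a
/// missing, empty, or whitespace-only template falls back to the
/// default payload.
fn webhook_template_content(template: &Option<String>) -> &str {
    template
        .as_deref()
        .filter(|t| !t.trim().is_empty())
        .unwrap_or(r#"{"text": "{{{message}}}"}"#)
}

fn main() {
    let default = r#"{"text": "{{{message}}}"}"#;
    assert_eq!(webhook_template_content(&None), default);
    assert_eq!(webhook_template_content(&Some("   ".into())), default);
    assert_eq!(
        webhook_template_content(&Some("{{message}}".into())),
        "{{message}}"
    );
}
```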
pub trait NotifierAllExt {
async fn notify_all<'a>(&self, client: &reqwest::Client, message: impl Into<Message<'a>>) -> Result<()>;
}
impl NotifierAllExt for Vec<Notifier> {
async fn notify_all<'a>(&self, client: &reqwest::Client, message: impl Into<Message<'a>>) -> Result<()> {
let message = message.into();
future::join_all(self.iter().map(|notifier| notifier.notify_internal(client, &message))).await;
Ok(())
}
}
impl Notifier {
pub async fn notify<'a>(&self, client: &reqwest::Client, message: impl Into<Message<'a>>) -> Result<()> {
self.notify_internal(client, &message.into()).await
}
async fn notify_internal<'a>(&self, client: &reqwest::Client, message: &Message<'a>) -> Result<()> {
match self {
Notifier::Telegram {
bot_token,
chat_id,
skip_image,
} => {
if let Some(img_url) = &message.image_url
&& !*skip_image
{
let url = format!("https://api.telegram.org/bot{}/sendPhoto", bot_token);
let params = [
("chat_id", chat_id.as_str()),
("photo", img_url.as_str()),
("caption", message.message.as_ref()),
];
client.post(&url).form(&params).send().await?;
} else {
let url = format!("https://api.telegram.org/bot{}/sendMessage", bot_token);
let params = [("chat_id", chat_id.as_str()), ("text", message.message.as_ref())];
client.post(&url).form(&params).send().await?;
}
}
Notifier::Webhook {
url,
template,
headers,
ignore_cache,
} => {
let key = webhook_template_key(url);
let handlebar = TEMPLATE.read();
let payload = match ignore_cache {
Some(_) => handlebar.render_template(webhook_template_content(template), &message)?,
None => handlebar.render(&key, &message)?,
};
let mut headers_map = header::HeaderMap::new();
headers_map.insert(header::CONTENT_TYPE, "application/json".try_into()?);
if let Some(custom_headers) = headers {
for (key, value) in custom_headers {
if let (Ok(key), Ok(value)) =
(header::HeaderName::try_from(key), header::HeaderValue::try_from(value))
{
headers_map.insert(key, value);
}
}
}
client.post(url).headers(headers_map).body(payload).send().await?;
}
}
Ok(())
}
}


@@ -0,0 +1,117 @@
use std::collections::HashSet;
use std::sync::Arc;
use anyhow::{Context, Result};
use axum::extract::Request;
use axum::http::header;
use axum::response::IntoResponse;
use axum::routing::get;
use axum::{Extension, ServiceExt};
use reqwest::StatusCode;
use rust_embed_for_web::{EmbedableFile, RustEmbed};
use sea_orm::DatabaseConnection;
use crate::api::{LogHelper, router};
use crate::bilibili::BiliClient;
use crate::config::{VersionedConfig, default_bind_address};
#[derive(RustEmbed)]
#[preserve_source = false]
#[folder = "../../web/build"]
struct Asset;
pub async fn http_server(
database_connection: DatabaseConnection,
bili_client: Arc<BiliClient>,
log_writer: LogHelper,
) -> Result<()> {
let app = router()
.fallback_service(get(frontend_files).head(frontend_files))
.layer(Extension(database_connection))
.layer(Extension(bili_client))
.layer(Extension(log_writer));
let (bind_address, listener) = {
let bind_address = VersionedConfig::get().read().bind_address.to_owned();
let listen_res = tokio::net::TcpListener::bind(&bind_address)
.await
.context("bind address failed");
match listen_res {
Ok(listener) => (bind_address, listener),
Err(e) => {
let default_bind_address = default_bind_address();
if default_bind_address == bind_address {
return Err(e);
}
warn!(
"绑定到地址 {} 失败:{:#},尝试绑定到默认地址 {}",
bind_address, e, default_bind_address
);
let listener = tokio::net::TcpListener::bind(&default_bind_address)
.await
.context("bind default address failed")?;
(default_bind_address, listener)
}
}
};
info!("开始运行管理页:http://{}", bind_address);
Ok(axum::serve(listener, ServiceExt::<Request>::into_make_service(app)).await?)
}
async fn frontend_files(request: Request) -> impl IntoResponse {
let mut path = request.uri().path().trim_start_matches('/');
if path.is_empty() || Asset::get(path).is_none() {
path = "index.html";
}
let Some(content) = Asset::get(path) else {
return (StatusCode::NOT_FOUND, "404 Not Found").into_response();
};
let mime_type = content.mime_type();
let content_type = mime_type.as_deref().unwrap_or("application/octet-stream");
let default_headers = [
(header::CONTENT_TYPE, content_type),
(header::CACHE_CONTROL, "no-cache"),
(header::ETAG, &content.hash()),
];
if let Some(if_none_match) = request.headers().get(header::IF_NONE_MATCH)
&& let Ok(client_etag) = if_none_match.to_str()
&& client_etag == content.hash()
{
return (StatusCode::NOT_MODIFIED, default_headers).into_response();
}
if request.method() == axum::http::Method::HEAD {
return (StatusCode::OK, default_headers).into_response();
}
if cfg!(debug_assertions) {
// safety: `RustEmbed` returns uncompressed files directly from the filesystem in debug mode
return (StatusCode::OK, default_headers, content.data().unwrap()).into_response();
}
let accepted_encodings = request
.headers()
.get(header::ACCEPT_ENCODING)
.and_then(|v| v.to_str().ok())
.map(|s| s.split(',').map(str::trim).collect::<HashSet<_>>())
.unwrap_or_default();
for (encoding, data) in [("br", content.data_br()), ("gzip", content.data_gzip())] {
if accepted_encodings.contains(encoding)
&& let Some(data) = data
{
return (
StatusCode::OK,
[
(header::CONTENT_TYPE, content_type),
(header::CACHE_CONTROL, "no-cache"),
(header::ETAG, &content.hash()),
(header::CONTENT_ENCODING, encoding),
],
data,
)
.into_response();
}
}
(
StatusCode::NOT_ACCEPTABLE,
"Client must support gzip or brotli compression",
)
.into_response()
}


@@ -0,0 +1,5 @@
mod http_server;
mod video_downloader;
pub use http_server::http_server;
pub use video_downloader::{DownloadTaskManager, TaskStatus, video_downloader};


@@ -0,0 +1,373 @@
use std::pin::Pin;
use std::sync::Arc;
use std::time::Duration;
use anyhow::{Context, Result, bail};
use sea_orm::DatabaseConnection;
use serde::Serialize;
use tokio::sync::{OnceCell, watch};
use tokio_cron_scheduler::{Job, JobScheduler};
use crate::adapter::VideoSource;
use crate::bilibili::{self, BiliClient, BiliError};
use crate::config::{ARGS, Config, TEMPLATE, Trigger, VersionedConfig};
use crate::utils::model::get_enabled_video_sources;
use crate::utils::notify::error_and_notify;
use crate::workflow::process_video_source;
static INSTANCE: OnceCell<DownloadTaskManager> = OnceCell::const_new();
/// Start the periodic video download task
pub async fn video_downloader(connection: DatabaseConnection, bili_client: Arc<BiliClient>) -> Result<()> {
let task_manager = DownloadTaskManager::init(connection, bili_client).await?;
task_manager.start().await
}
pub struct DownloadTaskManager {
sched: Arc<tokio::sync::Mutex<JobScheduler>>,
cx: Arc<TaskContext>,
shutdown_rx: watch::Receiver<Result<()>>,
}
#[derive(Serialize, Default, Clone, Copy, Debug)]
pub struct TaskStatus {
is_running: bool,
last_run: Option<chrono::DateTime<chrono::Local>>,
last_finish: Option<chrono::DateTime<chrono::Local>>,
next_run: Option<chrono::DateTime<chrono::Local>>,
}
struct TaskContext {
connection: DatabaseConnection,
bili_client: Arc<BiliClient>,
running: tokio::sync::Mutex<()>,
status_tx: watch::Sender<TaskStatus>,
status_rx: watch::Receiver<TaskStatus>,
video_task_id: tokio::sync::Mutex<Option<uuid::Uuid>>, // Stores the UUID of the current video download job
}
impl DownloadTaskManager {
/// Initialize the DownloadTaskManager singleton
pub async fn init(
connection: DatabaseConnection,
bili_client: Arc<BiliClient>,
) -> Result<&'static DownloadTaskManager> {
INSTANCE
.get_or_try_init(|| DownloadTaskManager::new(connection, bili_client))
.await
}
/// Get the DownloadTaskManager singleton; panics if not initialized
pub fn get() -> &'static DownloadTaskManager {
INSTANCE.get().expect("DownloadTaskManager is not initialized")
}
/// Subscribe to status updates of the download task
pub fn subscribe(&self) -> watch::Receiver<TaskStatus> {
self.cx.status_rx.clone()
}
/// Manually run the download task once
pub async fn download_once(&self) -> Result<()> {
let _ = self
.sched
.lock()
.await
.add(Job::new_one_shot_async(
Duration::from_secs(0),
DownloadTaskManager::download_video_task(self.cx.clone()),
)?)
.await?;
Ok(())
}
/// Start the job scheduler
async fn start(&self) -> Result<()> {
self.sched.lock().await.start().await?;
let mut shutdown_rx = self.shutdown_rx.clone();
shutdown_rx.changed().await?;
self.sched.lock().await.shutdown().await.context("任务调度器关闭失败")?;
if let Err(e) = &*shutdown_rx.borrow() {
bail!("{:#}", e);
}
Ok(())
}
/// Private scheduler constructor
async fn new(connection: DatabaseConnection, bili_client: Arc<BiliClient>) -> Result<Self> {
let sched = Arc::new(tokio::sync::Mutex::new(JobScheduler::new().await?));
let (status_tx, status_rx) = watch::channel(TaskStatus::default());
let (running, video_task_id) = (tokio::sync::Mutex::new(()), tokio::sync::Mutex::new(None));
let cx = Arc::new(TaskContext {
connection,
bili_client,
running,
status_tx,
status_rx,
video_task_id,
});
// Read the initial config
let mut rx = VersionedConfig::get().subscribe();
let initial_config = rx.borrow_and_update().clone();
if ARGS.disable_credential_refresh {
warn!("已禁用凭据检查与刷新任务,bili-sync 将不会自动检查刷新 Credential,需要用户自行维护");
} else {
// Initialize the credential check-and-refresh job; it must succeed, otherwise exit directly
sched
.lock()
.await
.add(Job::new_async_tz(
"0 0 1 * * *",
chrono::Local,
DownloadTaskManager::check_and_refresh_credential_task(cx.clone()),
)?)
.await?;
}
// Initialize and add the video download job, saving the job ID into the TaskManager
let video_task_id = async {
let job_run = DownloadTaskManager::download_video_task(cx.clone());
let job = match &initial_config.interval {
Trigger::Interval(interval) => Job::new_repeated_async(Duration::from_secs(*interval), job_run)?,
Trigger::Cron(cron) => Job::new_async_tz(cron, chrono::Local, job_run)?,
};
Result::<_, anyhow::Error>::Ok(sched.lock().await.add(job).await?)
}
.await;
let video_task_id = match video_task_id {
Ok(id) => Some(id),
Err(err) => {
error_and_notify(
&initial_config,
&cx.bili_client,
format!("初始化视频下载任务失败:{:#}", err),
);
None
}
};
*cx.video_task_id.lock().await = video_task_id;
// Fire a one-shot job to refresh the next-run time
if let Some(video_task_id) = video_task_id {
sched
.lock()
.await
.add(Job::new_one_shot_async(
Duration::from_secs(0),
DownloadTaskManager::refresh_next_run(video_task_id, cx.clone()),
)?)
.await?;
}
// Spawn a new task that watches for config changes and dynamically updates the video download job
let cx_clone = cx.clone();
let sched_clone = sched.clone();
let (shutdown_tx, shutdown_rx) = tokio::sync::watch::channel(Ok(()));
tokio::spawn(async move {
let update_task_result = async {
while rx.changed().await.is_ok() {
let new_config = rx.borrow().clone();
let cx = cx_clone.clone();
let mut video_task_id = cx.video_task_id.lock().await;
if let Some(old_video_task_id) = *video_task_id {
// This must succeed, otherwise the job would be added again below, duplicating it
sched_clone
.lock()
.await
.remove(&old_video_task_id)
.await
.context("移除旧的视频下载任务失败")?;
}
let new_video_task_id = async {
let job_run = DownloadTaskManager::download_video_task(cx.clone());
let job = match &new_config.interval {
Trigger::Interval(interval) => {
Job::new_repeated_async(Duration::from_secs(*interval), job_run)?
}
Trigger::Cron(cron) => Job::new_async_tz(cron, chrono::Local, job_run)?,
};
Result::<_, anyhow::Error>::Ok(sched_clone.lock().await.add(job).await?)
}
.await;
let new_video_task_id = match new_video_task_id {
Ok(id) => Some(id),
Err(err) => {
error_and_notify(
&initial_config,
&cx.bili_client,
format!("重载视频下载任务失败:{:#}", err),
);
None
}
};
*video_task_id = new_video_task_id;
if let Some(video_task_id) = new_video_task_id {
sched_clone
.lock()
.await
.add(Job::new_one_shot_async(
Duration::from_secs(0),
DownloadTaskManager::refresh_next_run(video_task_id, cx.clone()),
)?)
.await?;
}
}
Result::<(), anyhow::Error>::Ok(())
}
.await;
// Under normal execution the loop above should never exit
let _ = shutdown_tx.send(update_task_result);
});
Ok(Self { sched, cx, shutdown_rx })
}
fn check_and_refresh_credential_task(
cx: Arc<TaskContext>,
) -> impl FnMut(uuid::Uuid, JobScheduler) -> Pin<Box<dyn Future<Output = ()> + Send>> {
move |_uuid, _l| {
let cx = cx.clone();
Box::pin(async move {
let _lock = cx.running.lock().await;
let config = VersionedConfig::get().read();
info!("开始执行本轮凭据检查与刷新任务..");
match check_and_refresh_credential(&cx.connection, &cx.bili_client, &config).await {
Ok(_) => info!("本轮凭据检查与刷新任务执行完毕"),
Err(e) => {
error_and_notify(
&config,
&cx.bili_client,
format!("本轮凭据检查与刷新任务执行遇到错误:{:#}", e),
);
}
}
})
}
}
fn refresh_next_run(
video_task_id: uuid::Uuid,
cx: Arc<TaskContext>,
) -> impl FnMut(uuid::Uuid, JobScheduler) -> Pin<Box<dyn Future<Output = ()> + Send>> {
move |_uuid, mut l| {
let cx = cx.clone();
Box::pin(async move {
let old_status = *cx.status_rx.borrow();
let next_run = l
.next_tick_for_job(video_task_id)
.await
.ok()
.flatten()
.map(|dt| dt.with_timezone(&chrono::Local));
let _ = cx.status_tx.send(TaskStatus { next_run, ..old_status });
})
}
}
fn download_video_task(
cx: Arc<TaskContext>,
) -> impl FnMut(uuid::Uuid, JobScheduler) -> Pin<Box<dyn Future<Output = ()> + Send>> {
move |uuid, mut l| {
let cx = cx.clone();
Box::pin(async move {
let Ok(_lock) = cx.running.try_lock() else {
warn!("上一次视频下载任务尚未结束,跳过本次执行..");
return;
};
let _ = cx.status_tx.send(TaskStatus {
is_running: true,
last_run: Some(chrono::Local::now()),
last_finish: None,
next_run: None,
});
info!("开始执行本轮视频下载任务..");
let mut config = VersionedConfig::get().snapshot();
match download_video(&cx.connection, &cx.bili_client, &mut config).await {
Ok(_) => info!("本轮视频下载任务执行完毕"),
Err(e) => {
error_and_notify(
&config,
&cx.bili_client,
format!("本轮视频下载任务执行遇到错误:{:#}", e),
);
}
}
// Note: prefer reading the uuid from the stored video_task_id, because the current job may be a one-shot job that has no next_tick
let task_uuid = (*cx.video_task_id.lock().await).unwrap_or(uuid);
let next_run = l
.next_tick_for_job(task_uuid)
.await
.ok()
.flatten()
.map(|dt| dt.with_timezone(&chrono::Local));
let last_status = *cx.status_rx.borrow();
let _ = cx.status_tx.send(TaskStatus {
is_running: false,
last_run: last_status.last_run,
last_finish: Some(chrono::Local::now()),
next_run,
});
})
}
}
}
async fn check_and_refresh_credential(
connection: &DatabaseConnection,
bili_client: &BiliClient,
config: &Config,
) -> Result<()> {
match bili_client
.check_refresh(&config.credential)
.await
.context("检查刷新 Credential 失败")?
{
None => {
info!("Credential 无需刷新");
}
Some(new_credential) => {
VersionedConfig::get()
.update_credential(new_credential, connection)
.await
.context("新 Credential 持久化失败")?;
info!("Credential 已刷新并保存");
}
}
Ok(())
}
async fn download_video(
connection: &DatabaseConnection,
bili_client: &BiliClient,
config: &mut Arc<Config>,
) -> Result<()> {
config.check().context("配置检查失败")?;
let mixin_key = bili_client
.wbi_img(&config.credential)
.await
.context("获取 wbi_img 失败")?
.into_mixin_key()
.context("解析 mixin key 失败")?;
bilibili::set_global_mixin_key(mixin_key);
let template = TEMPLATE.snapshot();
let bili_client = bili_client.snapshot()?;
let video_sources = get_enabled_video_sources(connection)
.await
.context("获取视频源列表失败")?;
if video_sources.is_empty() {
bail!("没有可用的视频源");
}
for video_source in video_sources {
let display_name = video_source.display_name();
if let Err(e) = process_video_source(video_source, &bili_client, connection, &template, config).await {
error_and_notify(
config,
&bili_client,
format!("处理 {} 时遇到错误:{:#},跳过该视频源", display_name, e),
);
if let Ok(e) = e.downcast::<BiliError>()
&& e.is_risk_control_related()
{
warn!("检测到风控,终止此轮视频下载任务..");
break;
}
}
}
Ok(())
}


@@ -0,0 +1,225 @@
use chrono::{DateTime, NaiveDateTime, Utc};
use sea_orm::ActiveValue::{NotSet, Set};
use sea_orm::IntoActiveModel;
use crate::bilibili::{PageInfo, VideoInfo};
impl VideoInfo {
/// When checking for video updates, this method converts a VideoInfo into a simple ActiveModel; only basic info is filled here and will later be overwritten with the details
pub fn into_simple_model(self) -> bili_sync_entity::video::ActiveModel {
let default = bili_sync_entity::video::ActiveModel {
id: NotSet,
created_at: NotSet,
should_download: NotSet,
// ActiveModel::default() is not used here so that the remaining fields get default values
..bili_sync_entity::video::Model::default().into_active_model()
};
match self {
VideoInfo::Collection {
bvid,
cover,
ctime,
pubtime,
} => bili_sync_entity::video::ActiveModel {
bvid: Set(bvid),
cover: Set(cover),
ctime: Set(ctime.naive_utc()),
pubtime: Set(pubtime.naive_utc()),
category: Set(2), // content in a video collection can only be a video
valid: Set(true),
..default
},
VideoInfo::Favorite {
title,
vtype,
bvid,
intro,
cover,
upper,
ctime,
fav_time,
pubtime,
attr,
} => bili_sync_entity::video::ActiveModel {
bvid: Set(bvid),
name: Set(title),
category: Set(vtype),
intro: Set(intro),
cover: Set(cover),
ctime: Set(ctime.naive_utc()),
pubtime: Set(pubtime.naive_utc()),
favtime: Set(fav_time.naive_utc()),
download_status: Set(0),
valid: Set(attr == 0 || attr == 4),
upper_id: Set(upper.mid),
upper_name: Set(upper.name),
upper_face: Set(upper.face),
..default
},
VideoInfo::WatchLater {
title,
bvid,
intro,
cover,
upper,
ctime,
fav_time,
pubtime,
state,
} => bili_sync_entity::video::ActiveModel {
bvid: Set(bvid),
name: Set(title),
category: Set(2), // content in Watch Later can only be a video
intro: Set(intro),
cover: Set(cover),
ctime: Set(ctime.naive_utc()),
pubtime: Set(pubtime.naive_utc()),
favtime: Set(fav_time.naive_utc()),
download_status: Set(0),
valid: Set(state == 0),
upper_id: Set(upper.mid),
upper_name: Set(upper.name),
upper_face: Set(upper.face),
..default
},
VideoInfo::Submission {
title,
bvid,
intro,
cover,
ctime,
} => bili_sync_entity::video::ActiveModel {
bvid: Set(bvid),
name: Set(title),
intro: Set(intro),
cover: Set(cover),
ctime: Set(ctime.naive_utc()),
category: Set(2), // submitted works are always videos
valid: Set(true),
..default
},
VideoInfo::Dynamic {
title,
bvid,
desc,
cover,
pubtime,
} => bili_sync_entity::video::ActiveModel {
bvid: Set(bvid),
name: Set(title),
intro: Set(desc),
cover: Set(cover),
pubtime: Set(pubtime.naive_utc()),
category: Set(2), // videos from the dynamic feed are always videos
valid: Set(true),
..default
},
VideoInfo::Detail { .. } => unreachable!(),
}
}
/// Called when filling in video details; attaches the detail info to the existing Model.
/// As a special case, if favtime was recorded while scanning for updates it is kept as-is; otherwise it is filled with pubtime.
/// If try_upower_anyway is enabled, the charging (upower) check is skipped when marking video validity and the video always proceeds to the download stage.
pub fn into_detail_model(
self,
base_model: bili_sync_entity::video::Model,
try_upower_anyway: bool,
) -> bili_sync_entity::video::ActiveModel {
match self {
VideoInfo::Detail {
title,
bvid,
intro,
cover,
upper,
staff,
ctime,
pubtime,
state,
is_upower_exclusive,
is_upower_play,
redirect_url,
..
} => bili_sync_entity::video::ActiveModel {
bvid: Set(bvid),
name: Set(title),
intro: Set(intro),
cover: Set(cover),
ctime: Set(ctime.naive_utc()),
pubtime: Set(pubtime.naive_utc()),
favtime: if base_model.favtime != NaiveDateTime::default() {
Set(base_model.favtime) // favtime was set earlier; reuse it (equivalent to leaving it unset, but setting it keeps later rule matching working)
} else {
Set(pubtime.naive_utc()) // favtime was never set; fall back to pubtime
},
download_status: Set(0),
// state == 0 means the video is publicly viewable
// is_upower_exclusive and is_upower_play are equal in two cases:
// 1. both true: the video is charging-exclusive but the user has charged and may watch it
// 2. both false: the video is not a charging video
// redirect_url is only set for special videos (bangumi, films, documentaries, ...); empty means a regular video
// the video is considered downloadable only when all three conditions hold
valid: Set(state == 0
&& (try_upower_anyway || (is_upower_exclusive == is_upower_play))
&& redirect_url.is_none()),
upper_id: Set(upper.mid),
upper_name: Set(upper.name),
upper_face: Set(upper.face),
staff: Set(staff.map(Into::into)),
..base_model.into_active_model()
},
_ => unreachable!(),
}
}
/// Returns the video's release time, used for time-based filtering when checking for new videos
pub fn release_datetime(&self) -> &DateTime<Utc> {
match self {
VideoInfo::Collection { pubtime: time, .. }
| VideoInfo::Favorite { fav_time: time, .. }
| VideoInfo::WatchLater { fav_time: time, .. }
| VideoInfo::Submission { ctime: time, .. }
| VideoInfo::Dynamic { pubtime: time, .. } => time,
VideoInfo::Detail { .. } => unreachable!(),
}
}
pub fn bvid_owned(self) -> String {
match self {
VideoInfo::Collection { bvid, .. }
| VideoInfo::Favorite { bvid, .. }
| VideoInfo::WatchLater { bvid, .. }
| VideoInfo::Submission { bvid, .. }
| VideoInfo::Dynamic { bvid, .. }
| VideoInfo::Detail { bvid, .. } => bvid,
}
}
}
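The `valid` computation in `into_detail_model` above condenses to a small predicate. A standalone restatement (the function and parameter names are illustrative, not part of the crate):

```rust
// Standalone restatement of the `valid` computation above.
fn is_downloadable(
    state: i32,                 // 0 means publicly viewable
    is_upower_exclusive: bool,  // charging-exclusive video
    is_upower_play: bool,       // viewer has charged
    try_upower_anyway: bool,    // config switch: skip the charging check
    redirect_url: Option<&str>, // set only for bangumi/films/documentaries
) -> bool {
    state == 0
        && (try_upower_anyway || is_upower_exclusive == is_upower_play)
        && redirect_url.is_none()
}
```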
impl PageInfo {
pub fn into_active_model(self, video_model_id: i32) -> bili_sync_entity::page::ActiveModel {
let (width, height) = match &self.dimension {
Some(d) => {
if d.rotate == 0 {
(Some(d.width), Some(d.height))
} else {
(Some(d.height), Some(d.width))
}
}
None => (None, None),
};
bili_sync_entity::page::ActiveModel {
video_id: Set(video_model_id),
cid: Set(self.cid),
pid: Set(self.page),
name: Set(self.name),
width: Set(width),
height: Set(height),
duration: Set(self.duration),
image: Set(self.first_frame),
download_status: Set(0),
..Default::default()
}
}
}
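The dimension handling in `PageInfo::into_active_model` above swaps width and height whenever `rotate != 0`. A minimal std-only sketch of that rule:

```rust
// Non-zero rotate means the stored width/height are swapped; a missing
// dimension yields (None, None). Types here are illustrative.
fn oriented_dimensions(dim: Option<(u32, u32, i32)>) -> (Option<u32>, Option<u32>) {
    match dim {
        Some((width, height, 0)) => (Some(width), Some(height)),
        Some((width, height, _)) => (Some(height), Some(width)),
        None => (None, None),
    }
}
```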

View File

@@ -0,0 +1,36 @@
use sea_orm::DatabaseConnection;
use crate::adapter::VideoSourceEnum;
use crate::bilibili::BiliClient;
use crate::config::Config;
use crate::downloader::Downloader;
#[derive(Clone, Copy)]
pub struct DownloadContext<'a> {
pub bili_client: &'a BiliClient,
pub video_source: &'a VideoSourceEnum,
pub template: &'a handlebars::Handlebars<'a>,
pub connection: &'a DatabaseConnection,
pub downloader: &'a Downloader,
pub config: &'a Config,
}
impl<'a> DownloadContext<'a> {
pub fn new(
bili_client: &'a BiliClient,
video_source: &'a VideoSourceEnum,
template: &'a handlebars::Handlebars<'a>,
connection: &'a DatabaseConnection,
downloader: &'a Downloader,
config: &'a Config,
) -> Self {
Self {
bili_client,
video_source,
template,
connection,
downloader,
config,
}
}
}
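`DownloadContext` can derive `Clone, Copy` even though `Config` and the other referenced types are not `Copy`: a shared reference `&T` is always `Copy`, so a struct holding only references is too, and the context can be handed to multiple callees by value. A minimal sketch with a made-up `Ctx` type:

```rust
// &T is Copy for any T, so a struct of shared references can itself
// be Copy. Ctx is a hypothetical stand-in for DownloadContext.
#[derive(Clone, Copy)]
struct Ctx<'a> {
    name: &'a str,
    retries: &'a u32,
}

fn describe(ctx: Ctx) -> String {
    format!("{} (retries: {})", ctx.name, ctx.retries)
}
```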


@@ -0,0 +1,61 @@
macro_rules! regex {
($re:literal $(,)?) => {{
static RE: once_cell::sync::OnceCell<regex::Regex> = once_cell::sync::OnceCell::new();
RE.get_or_init(|| regex::Regex::new($re).expect("invalid regex"))
}};
}
pub fn filenamify<S: AsRef<str>>(input: S) -> String {
let reserved = regex!("[<>:\"/\\\\|?*\u{0000}-\u{001F}\u{007F}\u{0080}-\u{009F}]+");
let windows_reserved = regex!("^(con|prn|aux|nul|com\\d|lpt\\d)$");
let outer_periods = regex!("^\\.+|\\.+$");
let replacement = "_";
// strip outer periods before collapsing reserved characters; with the opposite
// order, "../../foo/bar" would lose a separator (see the test below)
let input = outer_periods.replace_all(input.as_ref(), replacement);
let input = reserved.replace_all(input.as_ref(), replacement);
let mut result = input.into_owned();
if windows_reserved.is_match(result.as_str()) {
result.push_str(replacement);
}
result
}
#[cfg(test)]
mod tests {
use super::filenamify;
#[test]
fn test_filenamify() {
assert_eq!(filenamify("foo/bar"), "foo_bar");
assert_eq!(filenamify("foo//bar"), "foo_bar");
assert_eq!(filenamify("//foo//bar//"), "_foo_bar_");
assert_eq!(filenamify("foo\\bar"), "foo_bar");
assert_eq!(filenamify("foo\\\\\\bar"), "foo_bar");
assert_eq!(filenamify(r"foo\\bar"), "foo_bar");
assert_eq!(filenamify(r"foo\\\\\\bar"), "foo_bar");
assert_eq!(filenamify("////foo////bar////"), "_foo_bar_");
assert_eq!(filenamify("foo\u{0000}bar"), "foo_bar");
assert_eq!(filenamify("\"foo<>bar*"), "_foo_bar_");
assert_eq!(filenamify("."), "_");
assert_eq!(filenamify(".."), "_");
assert_eq!(filenamify("./"), "__");
assert_eq!(filenamify("../"), "__");
assert_eq!(filenamify("../../foo/bar"), "__.._foo_bar");
assert_eq!(filenamify("foo.bar."), "foo.bar_");
assert_eq!(filenamify("foo.bar.."), "foo.bar_");
assert_eq!(filenamify("foo.bar..."), "foo.bar_");
assert_eq!(filenamify("con"), "con_");
assert_eq!(filenamify("com1"), "com1_");
assert_eq!(filenamify(":nul|"), "_nul_");
assert_eq!(filenamify("foo/bar/nul"), "foo_bar_nul");
assert_eq!(filenamify("file:///file.tar.gz"), "file_file.tar.gz");
assert_eq!(filenamify("http://www.google.com"), "http_www.google.com");
assert_eq!(
filenamify("https://www.youtube.com/watch?v=dQw4w9WgXcQ"),
"https_www.youtube.com_watch_v=dQw4w9WgXcQ"
);
}
}


@@ -0,0 +1,29 @@
use serde_json::json;
pub fn video_format_args(video_model: &bili_sync_entity::video::Model, time_format: &str) -> serde_json::Value {
json!({
"bvid": &video_model.bvid,
"title": &video_model.name,
"upper_name": &video_model.upper_name,
"upper_mid": &video_model.upper_id,
"pubtime": &video_model.pubtime.and_utc().format(time_format).to_string(),
"fav_time": &video_model.favtime.and_utc().format(time_format).to_string(),
})
}
pub fn page_format_args(
video_model: &bili_sync_entity::video::Model,
page_model: &bili_sync_entity::page::Model,
time_format: &str,
) -> serde_json::Value {
json!({
"bvid": &video_model.bvid,
"title": &video_model.name,
"upper_name": &video_model.upper_name,
"upper_mid": &video_model.upper_id,
"ptitle": &page_model.name,
"pid": page_model.pid,
"pubtime": video_model.pubtime.and_utc().format(time_format).to_string(),
"fav_time": video_model.favtime.and_utc().format(time_format).to_string(),
})
}
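The `video_format_args`/`page_format_args` helpers above build the JSON arguments that a handlebars template later renders into file paths. A std-only sketch of the substitution idea (naive `{{key}}` replacement, not the handlebars engine itself):

```rust
// Illustrative placeholder substitution, standing in for the handlebars
// rendering that consumes the JSON args built above.
fn render(template: &str, args: &[(&str, &str)]) -> String {
    let mut out = template.to_string();
    for (key, value) in args {
        out = out.replace(&format!("{{{{{key}}}}}"), value);
    }
    out
}
```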


@@ -0,0 +1,43 @@
pub mod convert;
pub mod download_context;
pub mod filenamify;
pub mod format_arg;
pub mod model;
pub mod nfo;
pub mod notify;
pub mod rule;
pub mod signal;
pub mod status;
pub mod validation;
use tracing_subscriber::fmt;
use tracing_subscriber::layer::SubscriberExt;
use tracing_subscriber::util::SubscriberInitExt;
use crate::api::LogHelper;
pub fn init_logger(log_level: &str, log_writer: Option<LogHelper>) {
let log = tracing_subscriber::fmt::Subscriber::builder()
.compact()
.with_env_filter(tracing_subscriber::EnvFilter::builder().parse_lossy(log_level))
.with_target(false)
.with_timer(tracing_subscriber::fmt::time::ChronoLocal::new(
"%b %d %H:%M:%S".to_owned(),
))
.finish();
if let Some(writer) = log_writer {
log.with(
fmt::layer()
.with_ansi(false)
.with_timer(tracing_subscriber::fmt::time::ChronoLocal::new(
"%b %d %H:%M:%S".to_owned(),
))
.json()
.flatten_event(true)
.with_writer(writer),
)
.try_init()
.expect("初始化日志失败");
} else {
log.try_init().expect("初始化日志失败");
}
}


@@ -0,0 +1,171 @@
use anyhow::{Context, Result, anyhow};
use bili_sync_entity::*;
use rand::seq::SliceRandom;
use sea_orm::ActiveValue::Set;
use sea_orm::DatabaseTransaction;
use sea_orm::entity::prelude::*;
use sea_orm::sea_query::{OnConflict, SimpleExpr};
use crate::adapter::{VideoSource, VideoSourceEnum};
use crate::bilibili::VideoInfo;
use crate::config::Config;
use crate::utils::status::STATUS_COMPLETED;
/// Select videos whose details have not been filled in yet
pub async fn filter_unfilled_videos(
additional_expr: SimpleExpr,
conn: &DatabaseConnection,
) -> Result<Vec<video::Model>> {
video::Entity::find()
.filter(
video::Column::Valid
.eq(true)
.and(video::Column::DownloadStatus.eq(0))
.and(video::Column::Category.eq(2))
.and(video::Column::SinglePage.is_null())
.and(additional_expr),
)
.all(conn)
.await
.context("filter unfilled videos failed")
}
/// Select videos and their pages that have not finished processing
pub async fn filter_unhandled_video_pages(
additional_expr: SimpleExpr,
connection: &DatabaseConnection,
) -> Result<Vec<(video::Model, Vec<page::Model>)>> {
video::Entity::find()
.filter(
video::Column::Valid
.eq(true)
.and(video::Column::DownloadStatus.lt(STATUS_COMPLETED))
.and(video::Column::Category.eq(2))
.and(video::Column::SinglePage.is_not_null())
.and(video::Column::ShouldDownload.eq(true))
.and(additional_expr),
)
.find_with_related(page::Entity)
.all(connection)
.await
.context("filter unhandled video pages failed")
}
/// Try to create the video models, ignoring conflicts
pub async fn create_videos(
videos_info: Vec<VideoInfo>,
video_source: &VideoSourceEnum,
connection: &DatabaseConnection,
) -> Result<()> {
let video_models = videos_info
.into_iter()
.map(|v| {
let mut model = v.into_simple_model();
video_source.set_relation_id(&mut model);
model
})
.collect::<Vec<_>>();
video::Entity::insert_many(video_models)
// Ideally this would name the conflict index, but sea-orm's API seems to accept only column names, not index names; luckily leaving it empty achieves the same effect
.on_conflict(OnConflict::new().do_nothing().to_owned())
.do_nothing()
.exec(connection)
.await?;
Ok(())
}
/// Try to create the page models, ignoring conflicts
pub async fn create_pages(pages_model: Vec<page::ActiveModel>, connection: &DatabaseTransaction) -> Result<()> {
for page_chunk in pages_model.chunks(200) {
page::Entity::insert_many(page_chunk.to_vec())
.on_conflict(
OnConflict::columns([page::Column::VideoId, page::Column::Pid])
.do_nothing()
.to_owned(),
)
.do_nothing()
.exec(connection)
.await?;
}
Ok(())
}
/// Update the download status of video models
pub async fn update_videos_model(videos: Vec<video::ActiveModel>, connection: &DatabaseConnection) -> Result<()> {
video::Entity::insert_many(videos)
.on_conflict(
OnConflict::column(video::Column::Id)
.update_columns([video::Column::DownloadStatus, video::Column::Path])
.to_owned(),
)
.exec(connection)
.await?;
Ok(())
}
/// Update the download status of page models
pub async fn update_pages_model(pages: Vec<page::ActiveModel>, connection: &DatabaseConnection) -> Result<()> {
let query = page::Entity::insert_many(pages).on_conflict(
OnConflict::column(page::Column::Id)
.update_columns([page::Column::DownloadStatus, page::Column::Path])
.to_owned(),
);
query.exec(connection).await?;
Ok(())
}
/// Fetch all enabled video sources
pub async fn get_enabled_video_sources(connection: &DatabaseConnection) -> Result<Vec<VideoSourceEnum>> {
let (favorite, watch_later, submission, collection) = tokio::try_join!(
favorite::Entity::find()
.filter(favorite::Column::Enabled.eq(true))
.all(connection),
watch_later::Entity::find()
.filter(watch_later::Column::Enabled.eq(true))
.all(connection),
submission::Entity::find()
.filter(submission::Column::Enabled.eq(true))
.all(connection),
collection::Entity::find()
.filter(collection::Column::Enabled.eq(true))
.all(connection),
)?;
let mut sources = Vec::with_capacity(favorite.len() + watch_later.len() + submission.len() + collection.len());
sources.extend(favorite.into_iter().map(VideoSourceEnum::from));
sources.extend(watch_later.into_iter().map(VideoSourceEnum::from));
sources.extend(submission.into_iter().map(VideoSourceEnum::from));
sources.extend(collection.into_iter().map(VideoSourceEnum::from));
// Shuffle the sources so that, probabilistically, every source gets a chance to run first, avoiding long-term starvation of sources near the end of the list
sources.shuffle(&mut rand::rng());
Ok(sources)
}
/// Load the config from the database
pub async fn load_db_config(connection: &DatabaseConnection) -> Result<Option<Result<Config>>> {
Ok(bili_sync_entity::config::Entity::find_by_id(1)
.one(connection)
.await?
.map(|model| {
serde_json::from_str(&model.data).map_err(|e| anyhow!("Failed to deserialize config data: {}", e))
}))
}
/// Save the config to the database
pub async fn save_db_config(config: &Config, connection: &DatabaseConnection) -> Result<()> {
let data = serde_json::to_string(config).context("Failed to serialize config data")?;
let model = bili_sync_entity::config::ActiveModel {
id: Set(1),
data: Set(data),
..Default::default()
};
bili_sync_entity::config::Entity::insert(model)
.on_conflict(
OnConflict::column(bili_sync_entity::config::Column::Id)
.update_column(bili_sync_entity::config::Column::Data)
.to_owned(),
)
.exec(connection)
.await
.context("Failed to save config to database")?;
Ok(())
}
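`create_pages` above inserts at most 200 rows per statement, presumably to keep each `insert_many` within the database's bind-parameter limits. The batching itself is just `slice::chunks`:

```rust
// How the page models above are split into batches of at most `chunk` items.
fn batch_sizes<T>(items: &[T], chunk: usize) -> Vec<usize> {
    items.chunks(chunk).map(|c| c.len()).collect()
}
```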


@@ -0,0 +1,412 @@
use anyhow::Result;
use bili_sync_entity::upper_vec::Upper as EntityUpper;
use bili_sync_entity::*;
use chrono::NaiveDateTime;
use quick_xml::Error;
use quick_xml::events::{BytesCData, BytesText};
use quick_xml::writer::Writer;
use tokio::io::{AsyncWriteExt, BufWriter};
use crate::config::NFOTimeType;
#[allow(clippy::upper_case_acronyms)]
pub enum NFO<'a> {
Movie(Movie<'a>),
TVShow(TVShow<'a>),
Upper(Upper),
Episode(Episode<'a>),
}
pub struct Movie<'a> {
pub name: &'a str,
pub intro: &'a str,
pub bvid: &'a str,
pub uppers: Vec<EntityUpper<i64, &'a str>>,
pub premiered: NaiveDateTime,
pub tags: Option<Vec<String>>,
}
pub struct TVShow<'a> {
pub name: &'a str,
pub intro: &'a str,
pub bvid: &'a str,
pub uppers: Vec<EntityUpper<i64, &'a str>>,
pub premiered: NaiveDateTime,
pub tags: Option<Vec<String>>,
}
pub struct Upper {
pub upper_id: String,
pub pubtime: NaiveDateTime,
}
pub struct Episode<'a> {
pub name: &'a str,
pub pid: String,
}
impl NFO<'_> {
pub async fn generate_nfo(self) -> Result<String> {
let mut buffer = r#"<?xml version="1.0" encoding="utf-8" standalone="yes"?>
"#
.as_bytes()
.to_vec();
let mut tokio_buffer = BufWriter::new(&mut buffer);
let writer = Writer::new_with_indent(&mut tokio_buffer, b' ', 4);
match self {
NFO::Movie(movie) => {
Self::write_movie_nfo(writer, movie).await?;
}
NFO::TVShow(tvshow) => {
Self::write_tvshow_nfo(writer, tvshow).await?;
}
NFO::Upper(upper) => {
Self::write_upper_nfo(writer, upper).await?;
}
NFO::Episode(episode) => {
Self::write_episode_nfo(writer, episode).await?;
}
}
tokio_buffer.flush().await?;
Ok(String::from_utf8(buffer)?)
}
async fn write_movie_nfo(mut writer: Writer<&mut BufWriter<&mut Vec<u8>>>, movie: Movie<'_>) -> Result<()> {
writer
.create_element("movie")
.write_inner_content_async::<_, _, Error>(|writer| async move {
writer
.create_element("plot")
.write_cdata_content_async(BytesCData::new(Self::format_plot(movie.bvid, movie.intro)))
.await?;
writer.create_element("outline").write_empty_async().await?;
writer
.create_element("title")
.write_text_content_async(BytesText::new(movie.name))
.await?;
for upper in movie.uppers {
writer
.create_element("actor")
.write_inner_content_async::<_, _, Error>(|writer| async move {
writer
.create_element("name")
.write_text_content_async(BytesText::new(&upper.mid.to_string()))
.await?;
writer
.create_element("role")
.write_text_content_async(BytesText::new(upper.role().as_ref()))
.await?;
writer
.create_element("thumb")
.write_text_content_async(BytesText::new(upper.face))
.await?;
Ok(writer)
})
.await?;
}
writer
.create_element("year")
.write_text_content_async(BytesText::new(&movie.premiered.format("%Y").to_string()))
.await?;
if let Some(tags) = movie.tags {
for tag in tags {
writer
.create_element("genre")
.write_text_content_async(BytesText::new(&tag))
.await?;
}
}
writer
.create_element("uniqueid")
.with_attribute(("type", "bilibili"))
.write_text_content_async(BytesText::new(movie.bvid))
.await?;
writer
.create_element("premiered")
.write_text_content_async(BytesText::new(&movie.premiered.format("%Y-%m-%d").to_string()))
.await?;
Ok(writer)
})
.await?;
Ok(())
}
async fn write_tvshow_nfo(mut writer: Writer<&mut BufWriter<&mut Vec<u8>>>, tvshow: TVShow<'_>) -> Result<()> {
writer
.create_element("tvshow")
.write_inner_content_async::<_, _, Error>(|writer| async move {
writer
.create_element("plot")
.write_cdata_content_async(BytesCData::new(Self::format_plot(tvshow.bvid, tvshow.intro)))
.await?;
writer.create_element("outline").write_empty_async().await?;
writer
.create_element("title")
.write_text_content_async(BytesText::new(tvshow.name))
.await?;
for upper in tvshow.uppers {
writer
.create_element("actor")
.write_inner_content_async::<_, _, Error>(|writer| async move {
writer
.create_element("name")
.write_text_content_async(BytesText::new(&upper.mid.to_string()))
.await?;
writer
.create_element("role")
.write_text_content_async(BytesText::new(upper.role().as_ref()))
.await?;
writer
.create_element("thumb")
.write_text_content_async(BytesText::new(upper.face))
.await?;
Ok(writer)
})
.await?;
}
writer
.create_element("year")
.write_text_content_async(BytesText::new(&tvshow.premiered.format("%Y").to_string()))
.await?;
if let Some(tags) = tvshow.tags {
for tag in tags {
writer
.create_element("genre")
.write_text_content_async(BytesText::new(&tag))
.await?;
}
}
writer
.create_element("uniqueid")
.with_attribute(("type", "bilibili"))
.write_text_content_async(BytesText::new(tvshow.bvid))
.await?;
writer
.create_element("premiered")
.write_text_content_async(BytesText::new(&tvshow.premiered.format("%Y-%m-%d").to_string()))
.await?;
Ok(writer)
})
.await?;
Ok(())
}
async fn write_upper_nfo(mut writer: Writer<&mut BufWriter<&mut Vec<u8>>>, upper: Upper) -> Result<()> {
writer
.create_element("person")
.write_inner_content_async::<_, _, Error>(|writer| async move {
writer.create_element("plot").write_empty_async().await?;
writer.create_element("outline").write_empty_async().await?;
writer
.create_element("lockdata")
.write_text_content_async(BytesText::new("false"))
.await?;
writer
.create_element("dateadded")
.write_text_content_async(BytesText::new(&upper.pubtime.format("%Y-%m-%d %H:%M:%S").to_string()))
.await?;
writer
.create_element("title")
.write_text_content_async(BytesText::new(&upper.upper_id))
.await?;
writer
.create_element("sorttitle")
.write_text_content_async(BytesText::new(&upper.upper_id))
.await?;
Ok(writer)
})
.await?;
Ok(())
}
async fn write_episode_nfo(mut writer: Writer<&mut BufWriter<&mut Vec<u8>>>, episode: Episode<'_>) -> Result<()> {
writer
.create_element("episodedetails")
.write_inner_content_async::<_, _, Error>(|writer| async move {
writer.create_element("plot").write_empty_async().await?;
writer.create_element("outline").write_empty_async().await?;
writer
.create_element("title")
.write_text_content_async(BytesText::new(episode.name))
.await?;
writer
.create_element("season")
.write_text_content_async(BytesText::new("1"))
.await?;
writer
.create_element("episode")
.write_text_content_async(BytesText::new(&episode.pid))
.await?;
Ok(writer)
})
.await?;
Ok(())
}
#[inline]
fn format_plot(bvid: &str, intro: &str) -> String {
format!(
r#"原始视频:<a href="https://www.bilibili.com/video/{}/">{}</a><br/><br/>{}"#,
bvid, bvid, intro,
)
}
}
#[cfg(test)]
mod tests {
use super::*;
#[tokio::test]
async fn test_generate_nfo() {
let video = video::Model {
intro: "intro".to_string(),
name: "name".to_string(),
upper_id: 1,
upper_name: "upper_name".to_string(),
upper_face: "https://i1.hdslb.com/bfs/face/72e8f33cadc72e022fc34624cc69e1b12ebb72c0.jpg".to_string(),
favtime: chrono::NaiveDateTime::new(
chrono::NaiveDate::from_ymd_opt(2022, 2, 2).unwrap(),
chrono::NaiveTime::from_hms_opt(2, 2, 2).unwrap(),
),
pubtime: chrono::NaiveDateTime::new(
chrono::NaiveDate::from_ymd_opt(2033, 3, 3).unwrap(),
chrono::NaiveTime::from_hms_opt(3, 3, 3).unwrap(),
),
bvid: "BV1nWcSeeEkV".to_string(),
tags: Some(vec!["tag1".to_owned(), "tag2".to_owned()].into()),
..Default::default()
};
assert_eq!(
NFO::Movie((&video).to_nfo(NFOTimeType::FavTime))
.generate_nfo()
.await
.unwrap(),
r#"<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<movie>
<plot><![CDATA[原始视频:<a href="https://www.bilibili.com/video/BV1nWcSeeEkV/">BV1nWcSeeEkV</a><br/><br/>intro]]></plot>
<outline/>
<title>name</title>
<actor>
<name>1</name>
<role>upper_name</role>
<thumb>https://i1.hdslb.com/bfs/face/72e8f33cadc72e022fc34624cc69e1b12ebb72c0.jpg</thumb>
</actor>
<year>2022</year>
<genre>tag1</genre>
<genre>tag2</genre>
<uniqueid type="bilibili">BV1nWcSeeEkV</uniqueid>
<premiered>2022-02-02</premiered>
</movie>"#,
);
assert_eq!(
NFO::TVShow((&video).to_nfo(NFOTimeType::FavTime))
.generate_nfo()
.await
.unwrap(),
r#"<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<tvshow>
<plot><![CDATA[原始视频:<a href="https://www.bilibili.com/video/BV1nWcSeeEkV/">BV1nWcSeeEkV</a><br/><br/>intro]]></plot>
<outline/>
<title>name</title>
<actor>
<name>1</name>
<role>upper_name</role>
<thumb>https://i1.hdslb.com/bfs/face/72e8f33cadc72e022fc34624cc69e1b12ebb72c0.jpg</thumb>
</actor>
<year>2022</year>
<genre>tag1</genre>
<genre>tag2</genre>
<uniqueid type="bilibili">BV1nWcSeeEkV</uniqueid>
<premiered>2022-02-02</premiered>
</tvshow>"#,
);
assert_eq!(
NFO::Upper((&video, &video.uppers().next().unwrap()).to_nfo(NFOTimeType::FavTime))
.generate_nfo()
.await
.unwrap(),
r#"<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<person>
<plot/>
<outline/>
<lockdata>false</lockdata>
<dateadded>2033-03-03 03:03:03</dateadded>
<title>1</title>
<sorttitle>1</sorttitle>
</person>"#,
);
let page = page::Model {
name: "name".to_string(),
pid: 3,
..Default::default()
};
assert_eq!(
NFO::Episode((&page).to_nfo(NFOTimeType::FavTime))
.generate_nfo()
.await
.unwrap(),
r#"<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<episodedetails>
<plot/>
<outline/>
<title>name</title>
<season>1</season>
<episode>3</episode>
</episodedetails>"#,
);
}
}
pub trait ToNFO<'a, T> {
fn to_nfo(&'a self, nfo_time_type: NFOTimeType) -> T;
}
impl<'a> ToNFO<'a, Movie<'a>> for &'a video::Model {
fn to_nfo(&'a self, nfo_time_type: NFOTimeType) -> Movie<'a> {
Movie {
name: &self.name,
intro: &self.intro,
bvid: &self.bvid,
uppers: self.uppers().collect(),
premiered: match nfo_time_type {
NFOTimeType::FavTime => self.favtime,
NFOTimeType::PubTime => self.pubtime,
},
tags: self.tags.as_ref().map(|tags| tags.clone().into()),
}
}
}
impl<'a> ToNFO<'a, TVShow<'a>> for &'a video::Model {
fn to_nfo(&'a self, nfo_time_type: NFOTimeType) -> TVShow<'a> {
TVShow {
name: &self.name,
intro: &self.intro,
bvid: &self.bvid,
uppers: self.uppers().collect(),
premiered: match nfo_time_type {
NFOTimeType::FavTime => self.favtime,
NFOTimeType::PubTime => self.pubtime,
},
tags: self.tags.as_ref().map(|tags| tags.clone().into()),
}
}
}
impl<'a> ToNFO<'a, Upper> for (&video::Model, &EntityUpper<i64, &str>) {
fn to_nfo(&'a self, _nfo_time_type: NFOTimeType) -> Upper {
Upper {
upper_id: self.1.mid.to_string(),
pubtime: self.0.pubtime,
}
}
}
impl<'a> ToNFO<'a, Episode<'a>> for &'a page::Model {
fn to_nfo(&'a self, _nfo_time_type: NFOTimeType) -> Episode<'a> {
Episode {
name: &self.name,
pid: self.pid.to_string(),
}
}
}


@@ -0,0 +1,23 @@
use crate::bilibili::BiliClient;
use crate::config::Config;
use crate::notifier::{Message, NotifierAllExt};
pub fn notify(config: &Config, bili_client: &BiliClient, msg: impl Into<Message<'static>>) {
if let Some(notifiers) = &config.notifiers
&& !notifiers.is_empty()
{
let (notifiers, inner_client) = (notifiers.clone(), bili_client.inner_client().clone());
let msg = msg.into();
tokio::spawn(async move { notifiers.notify_all(&inner_client, msg).await });
}
}
pub fn error_and_notify(config: &Config, bili_client: &BiliClient, msg: String) {
error!("{msg}");
if let Some(notifiers) = &config.notifiers
&& !notifiers.is_empty()
{
let (notifiers, inner_client) = (notifiers.clone(), bili_client.inner_client().clone());
tokio::spawn(async move { notifiers.notify_all(&inner_client, msg).await });
}
}


@@ -0,0 +1,288 @@
use bili_sync_entity::rule::{AndGroup, Condition, Rule, RuleTarget};
use bili_sync_entity::{page, video};
use chrono::{Local, NaiveDateTime};
pub(crate) trait Evaluatable<T> {
fn evaluate(&self, value: T) -> bool;
}
pub(crate) trait FieldEvaluatable {
fn evaluate(&self, video: &video::ActiveModel, pages: &[page::ActiveModel]) -> bool;
fn evaluate_model(&self, video: &video::Model, pages: &[page::Model]) -> bool;
}
impl Evaluatable<&str> for Condition<String> {
fn evaluate(&self, value: &str) -> bool {
match self {
Condition::Equals(expected) => expected == value,
Condition::Contains(substring) => value.contains(substring),
Condition::IContains(substring) => value.to_lowercase().contains(&substring.to_lowercase()),
Condition::Prefix(prefix) => value.starts_with(prefix),
Condition::Suffix(suffix) => value.ends_with(suffix),
Condition::MatchesRegex(_, regex) => regex.is_match(value),
_ => false,
}
}
}
impl Evaluatable<usize> for Condition<usize> {
fn evaluate(&self, value: usize) -> bool {
match self {
Condition::Equals(expected) => *expected == value,
Condition::GreaterThan(threshold) => value > *threshold,
Condition::LessThan(threshold) => value < *threshold,
Condition::Between(start, end) => value > *start && value < *end,
_ => false,
}
}
}
impl Evaluatable<NaiveDateTime> for Condition<NaiveDateTime> {
fn evaluate(&self, value: NaiveDateTime) -> bool {
match self {
Condition::Equals(expected) => *expected == value,
Condition::GreaterThan(threshold) => value > *threshold,
Condition::LessThan(threshold) => value < *threshold,
Condition::Between(start, end) => value > *start && value < *end,
_ => false,
}
}
}
impl Evaluatable<bool> for Condition<bool> {
fn evaluate(&self, value: bool) -> bool {
match self {
Condition::Equals(expected) => *expected == value,
_ => false,
}
}
}
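Note that `Condition::Between` in the impls above is exclusive on both bounds (`value > start && value < end`), so boundary values do not match. A standalone restatement:

```rust
// Between semantics as implemented above: strictly exclusive on both ends.
fn between(value: usize, start: usize, end: usize) -> bool {
    value > start && value < end
}
```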
impl FieldEvaluatable for RuleTarget {
/// Evaluation right after the models are modified; only the unsaved ActiveModel is available at that point, so it is evaluated in place
fn evaluate(&self, video: &video::ActiveModel, pages: &[page::ActiveModel]) -> bool {
match self {
RuleTarget::Title(cond) => video.name.try_as_ref().is_some_and(|title| cond.evaluate(title)),
// Every condition is currently evaluated with "any" over all tags, e.g. Prefix("a") && Suffix("b") means any(tag.Prefix("a")) && any(tag.Suffix("b")), not any(tag.Prefix("a") && tag.Suffix("b")).
// This may not match user expectations, but it should rarely matter in practice; revisit if complex tag filters become common.
RuleTarget::Tags(cond) => video
.tags
.try_as_ref()
.and_then(|t| t.as_ref())
.is_some_and(|tags| tags.0.iter().any(|tag| cond.evaluate(tag))),
RuleTarget::FavTime(cond) => video
.favtime
.try_as_ref()
.map(|fav_time| fav_time.and_utc().with_timezone(&Local).naive_local()) // timestamps are stored as UTC in the database; convert to local time before comparing
.is_some_and(|fav_time| cond.evaluate(fav_time)),
RuleTarget::PubTime(cond) => video
.pubtime
.try_as_ref()
.map(|pub_time| pub_time.and_utc().with_timezone(&Local).naive_local())
.is_some_and(|pub_time| cond.evaluate(pub_time)),
RuleTarget::PageCount(cond) => cond.evaluate(pages.len()),
RuleTarget::SumVideoLength(cond) => pages
.iter()
.try_fold(0usize, |acc, page| {
page.duration.try_as_ref().map(|d| acc + *d as usize).ok_or(())
})
.is_ok_and(|total_length| cond.evaluate(total_length)),
RuleTarget::MultiUpper(cond) => cond.evaluate(video.staff.as_ref().is_some()),
RuleTarget::Not(inner) => !inner.evaluate(video, pages),
}
}
/// Manually triggered evaluation of historical videos; the plain Model is available here and used directly
fn evaluate_model(&self, video: &video::Model, pages: &[page::Model]) -> bool {
match self {
RuleTarget::Title(cond) => cond.evaluate(&video.name),
// Every condition is currently evaluated with "any" over all tags, e.g. Prefix("a") && Suffix("b") means any(tag.Prefix("a")) && any(tag.Suffix("b")), not any(tag.Prefix("a") && tag.Suffix("b")).
// This may not match user expectations, but it should rarely matter in practice; revisit if complex tag filters become common.
RuleTarget::Tags(cond) => video
.tags
.as_ref()
.is_some_and(|tags| tags.0.iter().any(|tag| cond.evaluate(tag))),
RuleTarget::FavTime(cond) => cond.evaluate(video.favtime.and_utc().with_timezone(&Local).naive_local()),
RuleTarget::PubTime(cond) => cond.evaluate(video.pubtime.and_utc().with_timezone(&Local).naive_local()),
RuleTarget::PageCount(cond) => cond.evaluate(pages.len()),
RuleTarget::SumVideoLength(cond) => {
cond.evaluate(pages.iter().fold(0usize, |acc, page| acc + page.duration as usize))
}
RuleTarget::MultiUpper(cond) => cond.evaluate(video.staff.is_some()),
RuleTarget::Not(inner) => !inner.evaluate_model(video, pages),
}
}
}
impl FieldEvaluatable for AndGroup {
fn evaluate(&self, video: &video::ActiveModel, pages: &[page::ActiveModel]) -> bool {
self.iter().all(|target| target.evaluate(video, pages))
}
fn evaluate_model(&self, video: &video::Model, pages: &[page::Model]) -> bool {
self.iter().all(|target| target.evaluate_model(video, pages))
}
}
impl FieldEvaluatable for Rule {
fn evaluate(&self, video: &video::ActiveModel, pages: &[page::ActiveModel]) -> bool {
if self.0.is_empty() {
return true;
}
self.0.iter().any(|group| group.evaluate(video, pages))
}
fn evaluate_model(&self, video: &video::Model, pages: &[page::Model]) -> bool {
if self.0.is_empty() {
return true;
}
self.0.iter().any(|group| group.evaluate_model(video, pages))
}
}
/// For Option<Rule>, a missing rule is treated as passing the evaluation
impl FieldEvaluatable for Option<Rule> {
fn evaluate(&self, video: &video::ActiveModel, pages: &[page::ActiveModel]) -> bool {
self.as_ref().is_none_or(|rule| rule.evaluate(video, pages))
}
fn evaluate_model(&self, video: &video::Model, pages: &[page::Model]) -> bool {
self.as_ref().is_none_or(|rule| rule.evaluate_model(video, pages))
}
}
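The three impls above give `Rule` OR-of-AND semantics: a rule is a disjunction of `AndGroup`s, each group a conjunction of targets, and an empty (or missing) rule passes everything. Sketched over plain string predicates:

```rust
// OR-of-AND evaluation as implemented above: any group may pass,
// every predicate within a group must pass, and an empty rule passes.
type Pred = fn(&str) -> bool;

fn evaluate(rule: &[Vec<Pred>], value: &str) -> bool {
    rule.is_empty() || rule.iter().any(|group| group.iter().all(|p| p(value)))
}
```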
#[cfg(test)]
mod tests {
use bili_sync_entity::page;
use chrono::NaiveDate;
use sea_orm::ActiveValue::Set;
use super::*;
#[test]
fn test_display() {
let test_cases = vec![
(
Rule(vec![vec![RuleTarget::Title(Condition::Contains("唐氏".to_string()))]]),
"「(标题包含“唐氏”)」",
),
(
Rule(vec![vec![
RuleTarget::Title(Condition::Prefix("街霸".to_string())),
RuleTarget::Tags(Condition::Contains("套路".to_string())),
]]),
"「(标题以“街霸”开头)且(标签包含“套路”)」",
),
(
Rule(vec![
vec![
RuleTarget::Title(Condition::Contains("Rust".to_string())),
RuleTarget::PageCount(Condition::GreaterThan(5)),
],
vec![
RuleTarget::Tags(Condition::Suffix("入门".to_string())),
RuleTarget::PubTime(Condition::GreaterThan(
NaiveDate::from_ymd_opt(2023, 1, 1)
.unwrap()
.and_hms_opt(0, 0, 0)
.unwrap(),
)),
],
]),
"「(标题包含“Rust”)且(视频分页数量大于“5”)」或「(标签以“入门”结尾)且(发布时间大于“2023-01-01 00:00:00”)」",
),
(
Rule(vec![vec![
RuleTarget::Not(Box::new(RuleTarget::Title(Condition::Contains("广告".to_string())))),
RuleTarget::PageCount(Condition::LessThan(10)),
]]),
"「(标题不包含“广告”)且(视频分页数量小于“10”)」",
),
(
Rule(vec![vec![
RuleTarget::FavTime(Condition::Between(
NaiveDate::from_ymd_opt(2023, 6, 1)
.unwrap()
.and_hms_opt(0, 0, 0)
.unwrap(),
NaiveDate::from_ymd_opt(2023, 12, 31)
.unwrap()
.and_hms_opt(23, 59, 59)
.unwrap(),
)),
// autocorrect-disable
RuleTarget::Tags(Condition::MatchesRegex(
"技术|教程".to_string(),
regex::Regex::new("技术|教程").unwrap(),
)),
]]),
"「(收藏时间在“2023-06-01 00:00:00”和“2023-12-31 23:59:59”之间)且(标签匹配“技术|教程”)」",
// autocorrect-enable
),
];
for (rule, expected) in test_cases {
assert_eq!(rule.to_string(), expected);
}
}
#[test]
fn test_evaluate() {
let test_cases = vec![
(
(
video::ActiveModel {
name: Set("骂谁唐氏呢!!!".to_string()),
..Default::default()
},
vec![],
),
Rule(vec![vec![RuleTarget::Title(Condition::Contains("唐氏".to_string()))]]),
true,
),
(
(
video::ActiveModel::default(),
vec![page::ActiveModel::default(); 2],
),
Rule(vec![vec![RuleTarget::PageCount(Condition::Equals(1))]]),
false,
),
(
(
video::ActiveModel {
tags: Set(Some(vec!["原神".to_owned(), "永雏塔菲".to_owned(), "虚拟主播".to_owned()].into())),
..Default::default()
},
vec![],
),
Rule(vec![vec![RuleTarget::Not(Box::new(RuleTarget::Tags(Condition::Equals(
"原神".to_string(),
))))]]),
false,
),
(
(
video::ActiveModel {
name: Set(
"万字怒扒网易《归唐》底裤中国首款大厂买断制单机靠谱吗——全网最全官方非独家幕后关于《归唐》PV 的所有秘密~都在这里了~".to_owned(),
),
..Default::default()
},
vec![],
),
Rule(vec![vec![RuleTarget::Not(Box::new(RuleTarget::Title(Condition::MatchesRegex(
r"^\S+字(解析|怒扒|拆解)".to_owned(),
regex::Regex::new(r"^\S+字(解析|怒扒|拆解)").unwrap(),
))))]]),
false,
),
];
for ((video, pages), rule, expected) in test_cases {
assert_eq!(rule.evaluate(&video, &pages), expected);
}
}
}
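The test cases above exercise the rule shape `Rule(Vec<Vec<RuleTarget>>)`: judging from the rendered descriptions, the outer list is an OR over groups and each inner list is an AND over conditions. A minimal standalone sketch of that evaluation shape (the function and condition names here are illustrative, not the crate's actual API):

```rust
// Minimal sketch of the OR-of-AND shape behind Rule(Vec<Vec<RuleTarget>>):
// the outer list is an OR over groups, each inner list is an AND over
// conditions. Names and conditions here are illustrative, not the crate's API.
fn evaluate(groups: &[Vec<fn(&str) -> bool>], title: &str) -> bool {
    groups.iter().any(|conds| conds.iter().all(|c| c(title)))
}

fn contains_rust(t: &str) -> bool {
    t.contains("Rust")
}

fn ends_with_intro(t: &str) -> bool {
    t.ends_with("入门")
}

fn main() {
    // "title contains Rust AND ends with 入门" OR "title contains Rust"
    let rule: Vec<Vec<fn(&str) -> bool>> =
        vec![vec![contains_rust, ends_with_intro], vec![contains_rust]];
    assert!(evaluate(&rule, "Rust 从零入门"));
    assert!(!evaluate(&rule, "广告视频"));
    println!("ok");
}
```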


@@ -0,0 +1,21 @@
use std::io;
use tokio::signal;
#[cfg(target_family = "windows")]
pub async fn terminate() -> io::Result<()> {
signal::ctrl_c().await
}
/// Ctrl+C sends SIGINT, while docker stop sends SIGTERM; both need to be handled
#[cfg(target_family = "unix")]
pub async fn terminate() -> io::Result<()> {
use tokio::select;
let mut term = signal::unix::signal(signal::unix::SignalKind::terminate())?;
let mut int = signal::unix::signal(signal::unix::SignalKind::interrupt())?;
select! {
_ = term.recv() => Ok(()),
_ = int.recv() => Ok(()),
}
}


@@ -0,0 +1,351 @@
use std::marker::PhantomData;
use bili_sync_entity::{page, video};
use bili_sync_migration::{ExprTrait, IntoCondition};
use sea_orm::sea_query::Expr;
use sea_orm::{ColumnTrait, Condition};
use crate::error::ExecutionStatus;
pub static STATUS_NOT_STARTED: u32 = 0b000;
pub(super) static STATUS_MAX_RETRY: u32 = 0b100;
pub static STATUS_OK: u32 = 0b111;
pub static STATUS_COMPLETED: u32 = 1 << 31;
/// Represents the download status. To avoid adding too many columns, a single u32 is used.
/// Starting from the lowest bit, each group of three bits encodes the state of one subtask.
/// A subtask's state starts at 0b000 and is incremented on every failure, up to 0b100 (i.e. 4 retries are allowed), defined as STATUS_MAX_RETRY.
/// When a subtask succeeds, its state is set to 0b111, defined as STATUS_OK.
/// A subtask is considered finished once it reaches the maximum failure count or succeeds.
/// When all subtasks are finished, the highest bit is set to 1, marking the whole download task as completed.
#[derive(Clone, Copy)]
pub struct Status<const N: usize, C>(u32, PhantomData<C>);
impl<const N: usize, C> Default for Status<N, C> {
fn default() -> Self {
Self(0, PhantomData)
}
}
impl<const N: usize, C> Status<N, C> {
pub(crate) const LEN: usize = N;
/// Returns the completion flag stored in the highest bit
pub fn get_completed(&self) -> bool {
self.0 >> 31 == 1
}
/// Checks in order whether each subtask should still run, returning a bool array
pub fn should_run(&self) -> [bool; N] {
let mut result = [false; N];
for (i, item) in result.iter_mut().enumerate() {
*item = self.check_continue(i);
}
result
}
/// Resets all failed states back to 0b000; the return value indicates whether the status changed
pub fn reset_failed(&mut self) -> bool {
let mut changed = false;
for i in 0..N {
let status = self.get_status(i);
if status != STATUS_NOT_STARTED && status != STATUS_OK {
self.set_status(i, STATUS_NOT_STARTED);
changed = true;
}
}
if changed {
self.set_completed(false);
}
changed
}
/// Resets all failed states back to 0b000; the return value indicates whether the status changed.
/// On top of the plain version, the force variant additionally checks whether any task still needs to run, and if so corrects the completed flag to "not completed".
/// The typical use case is resetting historical videos after a new task state is introduced, allowing them to run the newly added task.
pub fn force_reset_failed(&mut self) -> bool {
let mut changed = self.reset_failed();
// In theory the changed flag above is sufficient: the completed flag only changes when a subtask state changes, so no subtask change means no completed change.
// But consider the special case where a new version introduces an additional subtask: a subtask may still be pending while the completed flag remains true.
// The completed flag could be reset globally in a migration for the new version, but that feels too invasive.
// Doing this extra check here covers that case, correcting the completed flag in user-triggered reset_failed calls.
if !changed && self.get_completed() && self.should_run().into_iter().any(|x| x) {
changed = true;
self.set_completed(false);
}
changed
}
/// Overwrites the state of a single subtask
pub fn set(&mut self, offset: usize, status: u32) {
assert!(status < 0b1000, "status should be less than 0b1000");
self.set_status(offset, status);
if self.should_run().into_iter().all(|x| !x) {
self.set_completed(true);
} else {
self.set_completed(false);
}
}
/// Updates the status from task results; the results are an array of Results that must correspond one-to-one with the subtasks.
/// If all subtasks are finished, the completion flag in the highest bit is set.
pub fn update_status(&mut self, result: &[ExecutionStatus]) {
assert!(result.len() == N, "result length should be equal to N");
for (i, res) in result.iter().enumerate() {
self.set_result(res, i);
}
if self.should_run().into_iter().all(|x| !x) {
self.set_completed(true);
} else {
self.set_completed(false);
}
}
/// Sets the completion flag in the highest bit
fn set_completed(&mut self, completed: bool) {
if completed {
self.0 |= 1 << 31;
} else {
self.0 &= !(1 << 31);
}
}
/// Returns the state of a single subtask
fn get_status(&self, offset: usize) -> u32 {
(self.0 >> (offset * 3)) & 0b111
}
/// Sets the state of a single subtask
fn set_status(&mut self, offset: usize, status: u32) {
self.0 = (self.0 & !(0b111 << (offset * 3))) | (status << (offset * 3));
}
// Increments the state of a subtask by one (used when the task fails)
fn plus_one(&mut self, offset: usize) {
self.0 += 1 << (3 * offset);
}
// Sets the state of a subtask to STATUS_OK (used when the task succeeds)
fn set_ok(&mut self, offset: usize) {
self.0 |= STATUS_OK << (3 * offset);
}
/// Checks whether a subtask should still run, i.e. whether its state is below STATUS_MAX_RETRY
fn check_continue(&self, offset: usize) -> bool {
self.get_status(offset) < STATUS_MAX_RETRY
}
/// Updates a subtask's state from its execution result
fn set_result(&mut self, result: &ExecutionStatus, offset: usize) {
// If the task returns a Fixed status, the state is set to the Fixed value regardless of the previous state
if let ExecutionStatus::Fixed(status) = result {
assert!(*status < 0b1000, "status should be less than 0b1000");
self.set_status(offset, *status);
} else if self.get_status(offset) < STATUS_MAX_RETRY {
match result {
ExecutionStatus::Succeeded | ExecutionStatus::Skipped => self.set_ok(offset),
ExecutionStatus::Failed(_) => self.plus_one(offset),
_ => {}
}
}
}
}
impl<const N: usize, C> From<u32> for Status<N, C> {
fn from(status: u32) -> Self {
Status(status, PhantomData)
}
}
impl<const N: usize, C> From<Status<N, C>> for u32 {
fn from(status: Status<N, C>) -> Self {
status.0
}
}
impl<const N: usize, C> From<Status<N, C>> for [u32; N] {
fn from(status: Status<N, C>) -> Self {
let mut result = [0; N];
for (i, item) in result.iter_mut().enumerate() {
*item = status.get_status(i);
}
result
}
}
impl<const N: usize, C> From<[u32; N]> for Status<N, C> {
fn from(status: [u32; N]) -> Self {
let mut result = Self::default();
for (i, item) in status.iter().enumerate() {
assert!(*item < 0b1000, "status should be less than 0b1000");
result.set_status(i, *item);
}
if result.should_run().iter().all(|x| !x) {
result.set_completed(true);
}
result
}
}
/// Contains five subtasks, in order: video poster, video info, uploader avatar, uploader info, and page download
pub type VideoStatus = Status<5, video::Column>;
impl VideoStatus {
pub fn query_builder() -> StatusQueryBuilder<{ Self::LEN }, video::Column> {
StatusQueryBuilder::new(video::Column::DownloadStatus)
}
}
/// Contains five subtasks, in order: cover, video content, video info, danmaku, and subtitles
pub type PageStatus = Status<5, page::Column>;
impl PageStatus {
pub fn query_builder() -> StatusQueryBuilder<{ Self::LEN }, page::Column> {
StatusQueryBuilder::new(page::Column::DownloadStatus)
}
}
pub struct StatusQueryBuilder<const N: usize, C: ColumnTrait> {
column: C,
}
impl<const N: usize, C: ColumnTrait> StatusQueryBuilder<N, C> {
fn new(column: C) -> Self {
Self { column }
}
/// Succeeded: every subtask state is success
pub fn succeeded(&self) -> Condition {
let mut condition = Condition::all();
for offset in 0..N as i32 {
condition = condition.add(Expr::col(self.column).right_shift(offset * 3).bit_and(7).eq(7))
}
condition
}
/// Failed: at least one subtask has failed
pub fn failed(&self) -> Condition {
let mut condition = Condition::any();
for offset in 0..N as i32 {
condition = condition.add(
Expr::col(self.column)
.right_shift(offset * 3)
.bit_and(7)
.is_not_in([0, 7]),
)
}
condition
}
/// Waiting: no subtask has failed, and at least one has not started
pub fn waiting(&self) -> Condition {
let mut condition = Condition::any();
for offset in 0..N as i32 {
condition = condition.add(Expr::col(self.column).right_shift(offset * 3).bit_and(7).eq(0))
}
condition.and(self.failed().not()).into_condition()
}
}
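The three conditions above compile down to per-offset bit tests on the stored u32, of the form `(status >> offset*3) & 7`. The same predicates can be sketched in plain Rust, independent of sea-orm (illustrative only):

```rust
// Plain-Rust versions of the bit tests that StatusQueryBuilder builds as SQL:
// each subtask's 3-bit state is extracted with (status >> offset*3) & 7.
fn succeeded(status: u32, n: usize) -> bool {
    (0..n).all(|o| (status >> (o * 3)) & 7 == 7)
}

fn failed(status: u32, n: usize) -> bool {
    (0..n).any(|o| ![0, 7].contains(&((status >> (o * 3)) & 7)))
}

fn waiting(status: u32, n: usize) -> bool {
    (0..n).any(|o| (status >> (o * 3)) & 7 == 0) && !failed(status, n)
}

fn main() {
    let all_ok = 7 | (7 << 3) | (7 << 6);
    assert!(succeeded(all_ok, 3));
    let one_pending = 7 | (7 << 6); // subtask 1 is still 0b000
    assert!(waiting(one_pending, 3) && !failed(one_pending, 3));
    let one_failed = 7 | (3 << 3) | (7 << 6); // subtask 1 failed three times
    assert!(failed(one_failed, 3) && !succeeded(one_failed, 3));
    println!("ok");
}
```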
#[cfg(test)]
mod tests {
use anyhow::anyhow;
use super::*;
#[test]
fn test_status_update() {
let mut status = Status::<3, video::Column>::default();
assert_eq!(status.should_run(), [true, true, true]);
for _ in 0..3 {
status.update_status(&[
ExecutionStatus::Failed(anyhow!("")),
ExecutionStatus::Succeeded,
ExecutionStatus::Succeeded,
]);
assert_eq!(status.should_run(), [true, false, false]);
}
status.update_status(&[
ExecutionStatus::Failed(anyhow!("")),
ExecutionStatus::Succeeded,
ExecutionStatus::Succeeded,
]);
assert_eq!(status.should_run(), [false, false, false]);
assert!(status.get_completed());
status.update_status(&[
ExecutionStatus::Fixed(1),
ExecutionStatus::Fixed(4),
ExecutionStatus::Fixed(7),
]);
assert_eq!(status.should_run(), [true, false, false]);
assert!(!status.get_completed());
assert_eq!(<[u32; 3]>::from(status), [1, 4, 7]);
}
#[test]
fn test_status_convert() {
let testcases = [[0, 0, 1], [1, 2, 3], [3, 1, 2], [3, 0, 7]];
for testcase in testcases.iter() {
let status = Status::<3, video::Column>::from(testcase.clone());
assert_eq!(<[u32; 3]>::from(status), *testcase);
}
}
#[test]
fn test_status_convert_and_update() {
let testcases = [([0, 0, 1], [1, 7, 7]), ([3, 4, 3], [4, 4, 7]), ([3, 1, 7], [4, 7, 7])];
for (before, after) in testcases.iter() {
let mut status = Status::<3, video::Column>::from(before.clone());
status.update_status(&[
ExecutionStatus::Failed(anyhow!("")),
ExecutionStatus::Succeeded,
ExecutionStatus::Succeeded,
]);
assert_eq!(<[u32; 3]>::from(status), *after);
}
}
#[test]
fn test_status_reset_failed() {
// Reset a task with partial failures that still has retries left; all failed states go back to 0
let mut status = Status::<3, video::Column>::from([3, 4, 7]);
assert!(!status.get_completed());
assert!(status.reset_failed());
assert!(!status.get_completed());
assert_eq!(<[u32; 3]>::from(status), [0, 0, 7]);
// Nothing needs resetting, but the completed flag is wrong (simulating a newly added subtask state).
// Here reset_failed does not correct the completed flag, while force_reset_failed does.
status.set_completed(true);
assert!(status.get_completed());
assert!(!status.reset_failed());
assert!(status.get_completed());
assert!(status.force_reset_failed());
assert!(!status.get_completed());
// Resetting an already-succeeded task changes neither the states nor the flag
let mut status = Status::<3, video::Column>::from([7, 7, 7]);
assert!(status.get_completed());
assert!(!status.reset_failed());
assert!(status.get_completed());
// Resetting a fully failed task changes both the states and the flag
let mut status = Status::<3, video::Column>::from([4, 4, 4]);
assert!(status.get_completed());
assert!(status.reset_failed());
assert!(!status.get_completed());
assert_eq!(<[u32; 3]>::from(status), [0, 0, 0]);
}
#[test]
fn test_status_set() {
// Set a substate, going from completed to uncompleted
let mut status = Status::<5, video::Column>::from([7, 7, 7, 7, 7]);
assert!(status.get_completed());
status.set(4, 0);
assert!(!status.get_completed());
assert_eq!(<[u32; 5]>::from(status), [7, 7, 7, 7, 0]);
// Set a substate, going from uncompleted to completed
let mut status = Status::<5, video::Column>::from([4, 7, 7, 7, 0]);
assert!(!status.get_completed());
status.set(4, 7);
assert!(status.get_completed());
assert_eq!(<[u32; 5]>::from(status), [4, 7, 7, 7, 7]);
}
}
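The 3-bits-per-subtask packing described in the doc comment can be checked with plain arithmetic. A standalone sketch, independent of the `Status` type above:

```rust
// Standalone sketch of the bit layout used by Status: each subtask occupies
// three bits starting from the lowest bit, and bit 31 is the completed flag.
fn get_status(packed: u32, offset: usize) -> u32 {
    (packed >> (offset * 3)) & 0b111
}

fn set_status(packed: u32, offset: usize, status: u32) -> u32 {
    (packed & !(0b111 << (offset * 3))) | (status << (offset * 3))
}

fn main() {
    // Pack the per-subtask states [1, 4, 7] into a single u32.
    let mut packed = 0u32;
    for (i, s) in [1u32, 4, 7].into_iter().enumerate() {
        packed = set_status(packed, i, s);
    }
    assert_eq!(packed, 1 | (4 << 3) | (7 << 6));
    assert_eq!(get_status(packed, 1), 4);
    // Mark the whole task completed by setting the highest bit.
    packed |= 1 << 31;
    assert_eq!(packed >> 31, 1);
    println!("ok");
}
```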


@@ -0,0 +1,23 @@
use std::path::Path;
use validator::ValidationError;
use crate::utils::status::{STATUS_NOT_STARTED, STATUS_OK};
pub fn validate_status_value(value: u32) -> Result<(), ValidationError> {
if value == STATUS_OK || value == STATUS_NOT_STARTED {
Ok(())
} else {
Err(ValidationError::new(
"status_value must be either STATUS_OK or STATUS_NOT_STARTED",
))
}
}
pub fn validate_path(path: &str) -> Result<(), ValidationError> {
if path.is_empty() || !Path::new(path).is_absolute() {
Err(ValidationError::new("path must be a non-empty absolute path"))
} else {
Ok(())
}
}
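`validate_path` relies on `std::path::Path::is_absolute`, which on a Unix target is true exactly for paths starting with `/`. A quick usage sketch (assuming a Unix target; the example paths are made up):

```rust
use std::path::Path;

fn main() {
    // validate_path above rejects empty or relative paths; on Unix an
    // absolute path is one that begins with '/'.
    assert!(Path::new("/downloads/bili-sync").is_absolute());
    assert!(!Path::new("downloads/bili-sync").is_absolute());
    assert!(!Path::new("").is_absolute());
    println!("ok");
}
```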


@@ -0,0 +1,799 @@
use std::collections::HashSet;
use std::path::{Path, PathBuf};
use std::pin::Pin;
use anyhow::{Context, Result, anyhow, bail};
use bili_sync_entity::upper_vec::Upper;
use bili_sync_entity::*;
use futures::stream::FuturesUnordered;
use futures::{Stream, StreamExt, TryStreamExt};
use sea_orm::ActiveValue::Set;
use sea_orm::TransactionTrait;
use sea_orm::entity::prelude::*;
use tokio::fs;
use tokio::sync::Semaphore;
use crate::adapter::{VideoSource, VideoSourceEnum};
use crate::bilibili::{BestStream, BiliClient, BiliError, Dimension, PageInfo, Video, VideoInfo};
use crate::config::{ARGS, Config, PathSafeTemplate};
use crate::downloader::Downloader;
use crate::error::ExecutionStatus;
use crate::notifier::DownloadNotifyInfo;
use crate::utils::download_context::DownloadContext;
use crate::utils::format_arg::{page_format_args, video_format_args};
use crate::utils::model::{
create_pages, create_videos, filter_unfilled_videos, filter_unhandled_video_pages, update_pages_model,
update_videos_model,
};
use crate::utils::nfo::{NFO, ToNFO};
use crate::utils::notify::notify;
use crate::utils::rule::FieldEvaluatable;
use crate::utils::status::{PageStatus, STATUS_OK, VideoStatus};
/// Fully processes a single video source
pub async fn process_video_source(
video_source: VideoSourceEnum,
bili_client: &BiliClient,
connection: &DatabaseConnection,
template: &handlebars::Handlebars<'_>,
config: &Config,
) -> Result<()> {
// Pre-create the video source directory to detect early whether it is writable
video_source.create_dir_all().await?;
// Obtain the video list Model and the video stream from the arguments
let (video_source, video_streams) = video_source
.refresh(bili_client, &config.credential, connection)
.await?;
// Take the basic info of new videos from the stream and write it to the database
refresh_video_source(&video_source, video_streams, connection).await?;
// Request the video detail API separately to fetch full details and all pages, then write them to the database
fetch_video_details(bili_client, &video_source, connection, config).await?;
if ARGS.scan_only {
warn!("已开启仅扫描模式,跳过视频下载..");
} else {
// Find all undownloaded videos and pages in the database, then download and process them
let download_notify_info =
download_unprocessed_videos(bili_client, &video_source, connection, template, config).await?;
if download_notify_info.should_notify() {
notify(config, bili_client, download_notify_info);
}
}
Ok(())
}
/// Requests the API for all newly added videos in the list and writes them to the database
pub async fn refresh_video_source<'a>(
video_source: &VideoSourceEnum,
video_streams: Pin<Box<dyn Stream<Item = Result<VideoInfo>> + 'a + Send>>,
connection: &DatabaseConnection,
) -> Result<()> {
video_source.log_refresh_video_start();
let latest_row_at = video_source.get_latest_row_at().and_utc();
let mut max_datetime = latest_row_at;
let mut error = Ok(());
let mut video_streams = video_streams
.enumerate()
.take_while(|(idx, res)| {
match res {
Err(e) => {
// The e obtained here is a reference, so ownership cannot be moved out directly.
// For BiliError, clone the inner error and attach the original context so callers can inspect the error type.
// For other errors, only the string message is kept as a hint.
if let Some(inner) = e.downcast_ref::<BiliError>() {
error = Err(inner.clone()).context(e.to_string());
} else {
error = Err(anyhow!("{:#}", e));
}
futures::future::ready(false)
}
Ok(v) => {
// Although video_streams goes from newest to oldest, the requests are paginated; in the extreme case two full pages of videos could be inserted while the first page is being fetched.
// The second page would then contain videos newer than the first, so to be correct the first video of every page should be time-compared.
// Under the stream abstraction the page boundaries are invisible, so every video is compared instead; the performance cost should be negligible.
let release_datetime = v.release_datetime();
if release_datetime > &max_datetime {
max_datetime = *release_datetime;
}
futures::future::ready(video_source.should_take(*idx, release_datetime, &latest_row_at))
}
}
})
.filter_map(|(idx, res)| futures::future::ready(video_source.should_filter(idx, res, &latest_row_at)))
.chunks(10);
let mut count = 0;
while let Some(videos_info) = video_streams.next().await {
count += videos_info.len();
create_videos(videos_info, video_source, connection).await?;
}
// If an error occurred while fetching video pages, return here without updating latest_row_at
error?;
if max_datetime != latest_row_at {
video_source
.update_latest_row_at(max_datetime.naive_utc())
.save(connection)
.await?;
}
video_source.log_refresh_video_end(count);
Ok(())
}
/// Selects all videos whose info is incomplete and tries to fill in their details
pub async fn fetch_video_details(
bili_client: &BiliClient,
video_source: &VideoSourceEnum,
connection: &DatabaseConnection,
config: &Config,
) -> Result<()> {
video_source.log_fetch_video_start();
let videos_model = filter_unfilled_videos(video_source.filter_expr(), connection).await?;
let semaphore = Semaphore::new(config.concurrent_limit.video);
let semaphore_ref = &semaphore;
let tasks = videos_model
.into_iter()
.map(|video_model| async move {
let _permit = semaphore_ref.acquire().await.context("acquire semaphore failed")?;
let video = Video::new(bili_client, video_model.bvid.as_str(), &config.credential);
let info: Result<_> = async { Ok((video.get_tags().await?, video.get_view_info().await?)) }.await;
match info {
Err(e) => {
error!(
"获取视频 {} - {} 的详细信息失败,错误为:{:#}",
&video_model.bvid, &video_model.name, e
);
if let Some(BiliError::ErrorResponse(-404, _)) = e.downcast_ref::<BiliError>() {
let mut video_active_model: bili_sync_entity::video::ActiveModel = video_model.into();
video_active_model.valid = Set(false);
video_active_model.save(connection).await?;
}
}
Ok((tags, mut view_info)) => {
let VideoInfo::Detail { pages, .. } = &mut view_info else {
unreachable!()
};
// Build the page models
let pages = std::mem::take(pages);
let pages = pages
.into_iter()
.map(|p| p.into_active_model(video_model.id))
.collect::<Vec<page::ActiveModel>>();
// Update the relevant fields of the video model
let mut video_active_model = view_info.into_detail_model(video_model, config.try_upower_anyway);
video_source.set_relation_id(&mut video_active_model);
video_active_model.single_page = Set(Some(pages.len() == 1));
video_active_model.tags = Set(Some(tags.into()));
video_active_model.should_download = Set(video_source.rule().evaluate(&video_active_model, &pages));
let txn = connection.begin().await?;
create_pages(pages, &txn).await?;
video_active_model.save(&txn).await?;
txn.commit().await?;
}
};
Ok::<_, anyhow::Error>(())
})
.collect::<FuturesUnordered<_>>();
tasks.try_collect::<()>().await?;
video_source.log_fetch_video_end();
Ok(())
}
/// Downloads all videos that have not yet been processed successfully
pub async fn download_unprocessed_videos(
bili_client: &BiliClient,
video_source: &VideoSourceEnum,
connection: &DatabaseConnection,
template: &handlebars::Handlebars<'_>,
config: &Config,
) -> Result<DownloadNotifyInfo> {
video_source.log_download_video_start();
let semaphore = Semaphore::new(config.concurrent_limit.video);
let downloader = Downloader::new(bili_client.client.clone());
let cx = DownloadContext::new(bili_client, video_source, template, connection, &downloader, config);
let unhandled_videos_pages = filter_unhandled_video_pages(video_source.filter_expr(), connection).await?;
let mut assigned_upper_ids = HashSet::new();
let tasks = unhandled_videos_pages
.into_iter()
.map(|(video_model, pages_model)| {
// In principle assigned_uppers could be captured here directly, but rustc incorrectly concludes that it borrows a local variable and rejects the code.
// As a workaround, extract an owned list of upper ids first and filter again inside the task.
let task_uids = video_model
.uppers()
.map(|u| u.mid)
.filter(|uid| assigned_upper_ids.insert(*uid))
.collect::<Vec<_>>();
download_video_pages(video_model, pages_model, &semaphore, task_uids, cx)
})
.collect::<FuturesUnordered<_>>();
let mut risk_control_related_error = None;
let mut stream = tasks
// On risk control, set the download_aborted flag and terminate the stream
.take_while(|res| {
if let Err(e) = res
&& let Some(e) = e.downcast_ref::<BiliError>()
&& e.is_risk_control_related()
{
risk_control_related_error = Some(e.clone());
}
futures::future::ready(risk_control_related_error.is_none())
})
// Filter out ordinary Errs that did not trigger risk control, keeping only successfully returned Models
.filter_map(|res| futures::future::ready(res.ok()))
// Batch the successfully returned Models in groups of ten
.chunks(10);
let mut download_notify_info = DownloadNotifyInfo::new(video_source.display_name().into());
while let Some(models) = stream.next().await {
download_notify_info.record(&models);
update_videos_model(models, connection).await?;
}
if let Some(e) = risk_control_related_error {
bail!(e);
}
video_source.log_download_video_end();
Ok(download_notify_info)
}
pub async fn download_video_pages(
video_model: video::Model,
page_models: Vec<page::Model>,
semaphore: &Semaphore,
upper_uids: Vec<i64>,
cx: DownloadContext<'_>,
) -> Result<video::ActiveModel> {
let _permit = semaphore.acquire().await.context("acquire semaphore failed")?;
let mut status = VideoStatus::from(video_model.download_status);
let separate_status = status.should_run();
// Fill in the path when none is recorded; otherwise use the existing one
let base_path = if !video_model.path.is_empty() {
PathBuf::from(&video_model.path)
} else {
cx.video_source.path().join(
cx.template
.path_safe_render("video", &video_format_args(&video_model, &cx.config.time_format))?,
)
};
fs::create_dir_all(&base_path).await?;
let base_path = dunce::canonicalize(base_path).context("canonicalize video path failed")?;
let is_single_page = video_model.single_page.context("single_page is null")?;
let uppers_with_path = video_model
.uppers()
.filter_map(|u| {
if !upper_uids.contains(&u.mid) {
None
} else {
let id_string = u.mid.to_string();
Some((
u,
cx.config
.upper_path
.join(id_string.chars().next()?.to_string())
.join(id_string),
))
}
})
.collect::<Vec<_>>();
// For a single-page video, the page download alone is sufficient.
// For a multi-page video, the page download only covers the episodes, so the video-level poster and tvshow.nfo must be added on top.
let (res_1, res_2, res_3, res_4, res_5) = tokio::join!(
// Download the video poster
fetch_video_poster(
separate_status[0] && !is_single_page && !cx.config.skip_option.no_poster,
&video_model,
base_path.join("poster.jpg"),
base_path.join("fanart.jpg"),
cx
),
// Generate the video info nfo
generate_video_nfo(
separate_status[1] && !is_single_page && !cx.config.skip_option.no_video_nfo,
&video_model,
base_path.join("tvshow.nfo"),
cx
),
// Download the uploader avatars
fetch_upper_face(
separate_status[2] && !cx.config.skip_option.no_upper,
&uppers_with_path,
cx
),
// Generate the uploader info nfos
generate_upper_nfo(
separate_status[3] && !cx.config.skip_option.no_upper,
&video_model,
&uppers_with_path,
cx,
),
// Dispatch and run the page download tasks
dispatch_download_page(separate_status[4], &video_model, page_models, &base_path, cx)
);
let results = [res_1.into(), res_2.into(), res_3.into(), res_4.into(), res_5.into()];
status.update_status(&results);
results
.iter()
.take(4)
.zip(["封面", "详情", "作者头像", "作者详情"])
.for_each(|(res, task_name)| match res {
ExecutionStatus::Skipped => info!("处理视频「{}」{}已成功过,跳过", &video_model.name, task_name),
ExecutionStatus::Succeeded => info!("处理视频「{}」{}成功", &video_model.name, task_name),
ExecutionStatus::Ignored(e) => {
error!(
"处理视频「{}」{}出现常见错误,已忽略:{:#}",
&video_model.name, task_name, e
)
}
ExecutionStatus::Failed(e) => {
error!("处理视频「{}」{}失败:{:#}", &video_model.name, task_name, e)
}
ExecutionStatus::Fixed(_) => unreachable!(),
});
for result in results {
if let ExecutionStatus::Failed(e) = result
&& let Ok(e) = e.downcast::<BiliError>()
&& e.is_risk_control_related()
{
bail!(e);
}
}
let mut video_active_model: video::ActiveModel = video_model.into();
video_active_model.download_status = Set(status.into());
video_active_model.path = Set(base_path.to_string_lossy().to_string());
Ok(video_active_model)
}
/// Dispatches and runs the page download tasks; returns Ok only when every page either downloads successfully or reaches the retry limit, otherwise returns an error matching the failure cause
pub async fn dispatch_download_page(
should_run: bool,
video_model: &video::Model,
page_models: Vec<page::Model>,
base_path: &Path,
cx: DownloadContext<'_>,
) -> Result<ExecutionStatus> {
if !should_run {
return Ok(ExecutionStatus::Skipped);
}
let child_semaphore = Semaphore::new(cx.config.concurrent_limit.page);
let tasks = page_models
.into_iter()
.map(|page_model| download_page(video_model, page_model, &child_semaphore, base_path, cx))
.collect::<FuturesUnordered<_>>();
let (mut risk_control_related_error, mut target_status) = (None, STATUS_OK);
let mut stream = tasks
.take_while(|res| {
match res {
Ok(model) => {
// The download status of every page of this video is returned here; the video-level "page download" subtask state must be derived from them.
// The old implementation only looked at the highest flag bit of page_download_status, treating the task as done whenever that bit was set.
// That meant the video-level page-download state could be reported as Succeeded even when some page had failed up to MAX_RETRY, which was inaccurate.
// The new implementation takes the minimum across all subtask states, so the video-level state is Succeeded only when every subtask of every page succeeded.
let page_download_status = model.download_status.try_as_ref().expect("download_status must be set");
let separate_status: [u32; 5] = PageStatus::from(*page_download_status).into();
for status in separate_status {
target_status = target_status.min(status);
}
}
Err(e) => {
if let Some(e) = e.downcast_ref::<BiliError>()
&& e.is_risk_control_related()
{
risk_control_related_error = Some(e.clone());
}
}
}
// Terminate the stream only on risk control; otherwise keep going
futures::future::ready(risk_control_related_error.is_none())
})
.filter_map(|res| futures::future::ready(res.ok()))
.chunks(10);
while let Some(models) = stream.next().await {
update_pages_model(models, cx.connection).await?;
}
if let Some(e) = risk_control_related_error {
bail!(e);
}
// The video-level "page download" task state always matches the minimum state across all pages
Ok(ExecutionStatus::Fixed(target_status))
}
/// Downloads a single page; when no risk control is triggered and things run normally, returns Ok(Page::ActiveModel) whose status field holds the new download state; on risk control, returns a DownloadAbortError
pub async fn download_page(
video_model: &video::Model,
page_model: page::Model,
semaphore: &Semaphore,
base_path: &Path,
cx: DownloadContext<'_>,
) -> Result<page::ActiveModel> {
let _permit = semaphore.acquire().await.context("acquire semaphore failed")?;
let mut status = PageStatus::from(page_model.download_status);
let separate_status = status.should_run();
let is_single_page = video_model.single_page.context("single_page is null")?;
// Fill in the path when none is recorded; otherwise use the existing one
let (base_path, base_name) = if let Some(old_video_path) = &page_model.path
&& !old_video_path.is_empty()
{
let old_video_path = Path::new(old_video_path);
let old_video_filename = old_video_path
.file_name()
.context("invalid page path format")?
.to_string_lossy();
if is_single_page {
// The single-page path is {base_path}/{base_name}.mp4
(
old_video_path.parent().context("invalid page path format")?,
old_video_filename.trim_end_matches(".mp4").to_string(),
)
} else {
// The multi-page path is {base_path}/Season 1/{base_name} - S01Exx.mp4
(
old_video_path
.parent()
.and_then(|p| p.parent())
.context("invalid page path format")?,
old_video_filename
.rsplit_once(" - ")
.context("invalid page path format")?
.0
.to_string(),
)
}
} else {
(
base_path,
cx.template.path_safe_render(
"page",
&page_format_args(video_model, &page_model, &cx.config.time_format),
)?,
)
};
let base_path = dunce::canonicalize(base_path).context("canonicalize base path failed")?;
let (poster_path, video_path, nfo_path, danmaku_path, fanart_path, subtitle_path) = if is_single_page {
(
base_path.join(format!("{}-poster.jpg", &base_name)),
base_path.join(format!("{}.mp4", &base_name)),
base_path.join(format!("{}.nfo", &base_name)),
base_path.join(format!("{}.zh-CN.default.ass", &base_name)),
Some(base_path.join(format!("{}-fanart.jpg", &base_name))),
base_path.join(format!("{}.srt", &base_name)),
)
} else {
(
base_path
.join("Season 1")
.join(format!("{} - S01E{:0>2}-thumb.jpg", &base_name, page_model.pid)),
base_path
.join("Season 1")
.join(format!("{} - S01E{:0>2}.mp4", &base_name, page_model.pid)),
base_path
.join("Season 1")
.join(format!("{} - S01E{:0>2}.nfo", &base_name, page_model.pid)),
base_path
.join("Season 1")
.join(format!("{} - S01E{:0>2}.zh-CN.default.ass", &base_name, page_model.pid)),
// For multi-page videos the show-level fanart is fetched in the earlier fetch_video_poster step, so there is no need to download one per episode
None,
base_path
.join("Season 1")
.join(format!("{} - S01E{:0>2}.srt", &base_name, page_model.pid)),
)
};
let dimension = match (page_model.width, page_model.height) {
(Some(width), Some(height)) => Some(Dimension {
width,
height,
rotate: 0,
}),
_ => None,
};
let page_info = PageInfo {
cid: page_model.cid,
duration: page_model.duration,
dimension,
..Default::default()
};
let (res_1, res_2, res_3, res_4, res_5) = tokio::join!(
// Download the page poster
fetch_page_poster(
separate_status[0] && !cx.config.skip_option.no_poster,
video_model,
&page_model,
poster_path,
fanart_path,
cx
),
// Download the page video
fetch_page_video(separate_status[1], video_model, &page_info, &video_path, cx),
// Generate the page info nfo
generate_page_nfo(
separate_status[2] && !cx.config.skip_option.no_video_nfo,
video_model,
&page_model,
nfo_path,
cx,
),
// Download the page danmaku
fetch_page_danmaku(
separate_status[3] && !cx.config.skip_option.no_danmaku,
video_model,
&page_info,
danmaku_path,
cx,
),
// Download the page subtitles
fetch_page_subtitle(
separate_status[4] && !cx.config.skip_option.no_subtitle,
video_model,
&page_info,
&subtitle_path,
cx
)
);
let results = [res_1.into(), res_2.into(), res_3.into(), res_4.into(), res_5.into()];
status.update_status(&results);
results
.iter()
.zip(["封面", "视频", "详情", "弹幕", "字幕"])
.for_each(|(res, task_name)| match res {
ExecutionStatus::Skipped => info!(
"处理视频「{}」第 {} 页{}已成功过,跳过",
&video_model.name, page_model.pid, task_name
),
ExecutionStatus::Succeeded => info!(
"处理视频「{}」第 {} 页{}成功",
&video_model.name, page_model.pid, task_name
),
ExecutionStatus::Ignored(e) => {
error!(
"处理视频「{}」第 {} 页{}出现常见错误,已忽略:{:#}",
&video_model.name, page_model.pid, task_name, e
)
}
ExecutionStatus::Failed(e) => error!(
"处理视频「{}」第 {} 页{}失败:{:#}",
&video_model.name, page_model.pid, task_name, e
),
ExecutionStatus::Fixed(_) => unreachable!(),
});
for result in results {
if let ExecutionStatus::Failed(e) = result
&& let Ok(e) = e.downcast::<BiliError>()
&& e.is_risk_control_related()
{
bail!(e);
}
}
let mut page_active_model: page::ActiveModel = page_model.into();
page_active_model.download_status = Set(status.into());
page_active_model.path = Set(Some(video_path.to_string_lossy().to_string()));
Ok(page_active_model)
}
pub async fn fetch_page_poster(
should_run: bool,
video_model: &video::Model,
page_model: &page::Model,
poster_path: PathBuf,
fanart_path: Option<PathBuf>,
cx: DownloadContext<'_>,
) -> Result<ExecutionStatus> {
if !should_run {
return Ok(ExecutionStatus::Skipped);
}
let single_page = video_model.single_page.context("single_page is null")?;
let url = if single_page {
// A single-page video uses the video cover directly
video_model.cover.as_str()
} else {
// For a multi-page video, fall back to the video cover when the page has none
match &page_model.image {
Some(url) => url.as_str(),
None => video_model.cover.as_str(),
}
};
cx.downloader
.fetch(url, &poster_path, &cx.config.concurrent_limit.download)
.await?;
if let Some(fanart_path) = fanart_path {
fs::copy(&poster_path, &fanart_path).await?;
}
Ok(ExecutionStatus::Succeeded)
}
pub async fn fetch_page_video(
should_run: bool,
video_model: &video::Model,
page_info: &PageInfo,
page_path: &Path,
cx: DownloadContext<'_>,
) -> Result<ExecutionStatus> {
if !should_run {
return Ok(ExecutionStatus::Skipped);
}
let bili_video = Video::new(cx.bili_client, video_model.bvid.as_str(), &cx.config.credential);
let streams = bili_video
.get_page_analyzer(page_info)
.await?
.best_stream(&cx.config.filter_option)?;
match streams {
BestStream::Mixed(mix_stream) => {
cx.downloader
.multi_fetch(
&mix_stream.urls(cx.config.cdn_sorting),
page_path,
&cx.config.concurrent_limit.download,
)
.await?
}
BestStream::VideoAudio {
video: video_stream,
audio: None,
} => {
cx.downloader
.multi_fetch(
&video_stream.urls(cx.config.cdn_sorting),
page_path,
&cx.config.concurrent_limit.download,
)
.await?
}
BestStream::VideoAudio {
video: video_stream,
audio: Some(audio_stream),
} => {
cx.downloader
.multi_fetch_and_merge(
&video_stream.urls(cx.config.cdn_sorting),
&audio_stream.urls(cx.config.cdn_sorting),
page_path,
&cx.config.concurrent_limit.download,
)
.await?
}
}
Ok(ExecutionStatus::Succeeded)
}
pub async fn fetch_page_danmaku(
should_run: bool,
video_model: &video::Model,
page_info: &PageInfo,
danmaku_path: PathBuf,
cx: DownloadContext<'_>,
) -> Result<ExecutionStatus> {
if !should_run {
return Ok(ExecutionStatus::Skipped);
}
let bili_video = Video::new(cx.bili_client, video_model.bvid.as_str(), &cx.config.credential);
bili_video
.get_danmaku_writer(page_info)
.await?
.write(danmaku_path, &cx.config.danmaku_option)
.await?;
Ok(ExecutionStatus::Succeeded)
}
pub async fn fetch_page_subtitle(
should_run: bool,
video_model: &video::Model,
page_info: &PageInfo,
subtitle_path: &Path,
cx: DownloadContext<'_>,
) -> Result<ExecutionStatus> {
if !should_run {
return Ok(ExecutionStatus::Skipped);
}
let bili_video = Video::new(cx.bili_client, video_model.bvid.as_str(), &cx.config.credential);
let subtitles = bili_video.get_subtitles(page_info).await?;
let tasks = subtitles
.into_iter()
.map(|subtitle| async move {
let path = subtitle_path.with_extension(format!("{}.srt", subtitle.lan));
tokio::fs::write(path, subtitle.body.to_string()).await
})
.collect::<FuturesUnordered<_>>();
tasks.try_collect::<()>().await?;
Ok(ExecutionStatus::Succeeded)
}
pub async fn generate_page_nfo(
should_run: bool,
video_model: &video::Model,
page_model: &page::Model,
nfo_path: PathBuf,
cx: DownloadContext<'_>,
) -> Result<ExecutionStatus> {
if !should_run {
return Ok(ExecutionStatus::Skipped);
}
let single_page = video_model.single_page.context("single_page is null")?;
let nfo = if single_page {
NFO::Movie(video_model.to_nfo(cx.config.nfo_time_type))
} else {
NFO::Episode(page_model.to_nfo(cx.config.nfo_time_type))
};
generate_nfo(nfo, nfo_path).await?;
Ok(ExecutionStatus::Succeeded)
}
pub async fn fetch_video_poster(
should_run: bool,
video_model: &video::Model,
poster_path: PathBuf,
fanart_path: PathBuf,
cx: DownloadContext<'_>,
) -> Result<ExecutionStatus> {
if !should_run {
return Ok(ExecutionStatus::Skipped);
}
cx.downloader
.fetch(&video_model.cover, &poster_path, &cx.config.concurrent_limit.download)
.await?;
fs::copy(&poster_path, &fanart_path).await?;
Ok(ExecutionStatus::Succeeded)
}
pub async fn fetch_upper_face(
should_run: bool,
uppers_with_path: &[(Upper<i64, &str>, PathBuf)],
cx: DownloadContext<'_>,
) -> Result<ExecutionStatus> {
if !should_run || uppers_with_path.is_empty() {
return Ok(ExecutionStatus::Skipped);
}
let tasks = uppers_with_path
.iter()
.map(|(upper, base_path)| async move {
cx.downloader
.fetch(
upper.face,
&base_path.join("folder.jpg"),
&cx.config.concurrent_limit.download,
)
.await?;
Ok::<(), anyhow::Error>(())
})
.collect::<FuturesUnordered<_>>();
tasks.try_collect::<()>().await?;
Ok(ExecutionStatus::Succeeded)
}
pub async fn generate_upper_nfo(
should_run: bool,
video_model: &video::Model,
uppers_with_path: &[(Upper<i64, &str>, PathBuf)],
cx: DownloadContext<'_>,
) -> Result<ExecutionStatus> {
if !should_run {
return Ok(ExecutionStatus::Skipped);
}
let tasks = uppers_with_path
.iter()
.map(|(upper, base_path)| {
generate_nfo(
NFO::Upper((video_model, upper).to_nfo(cx.config.nfo_time_type)),
base_path.join("person.nfo"),
)
})
.collect::<FuturesUnordered<_>>();
tasks.try_collect::<()>().await?;
Ok(ExecutionStatus::Succeeded)
}
pub async fn generate_video_nfo(
should_run: bool,
video_model: &video::Model,
nfo_path: PathBuf,
cx: DownloadContext<'_>,
) -> Result<ExecutionStatus> {
if !should_run {
return Ok(ExecutionStatus::Skipped);
}
generate_nfo(NFO::TVShow(video_model.to_nfo(cx.config.nfo_time_type)), nfo_path).await?;
Ok(ExecutionStatus::Succeeded)
}
/// Creates the parent directory of nfo_path, then writes the nfo file
async fn generate_nfo(nfo: NFO<'_>, nfo_path: PathBuf) -> Result<()> {
if let Some(parent) = nfo_path.parent() {
fs::create_dir_all(parent).await?;
}
fs::write(nfo_path, nfo.generate_nfo().await?.as_bytes()).await?;
Ok(())
}
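The aggregation rule described in dispatch_download_page (the video-level page-download status is the minimum over every subtask state of every page) can be illustrated standalone (hypothetical data, not the crate's types):

```rust
const STATUS_OK: u32 = 0b111;

// Minimum over all subtask states of all pages: only when every subtask of
// every page reached STATUS_OK does the video-level task count as OK.
fn aggregate(pages: &[[u32; 5]]) -> u32 {
    pages
        .iter()
        .flat_map(|p| p.iter().copied())
        .fold(STATUS_OK, |acc, s| acc.min(s))
}

fn main() {
    // All pages fully succeeded: the video-level status is STATUS_OK (7).
    assert_eq!(aggregate(&[[7, 7, 7, 7, 7], [7, 7, 7, 7, 7]]), 7);
    // One page has a subtask stuck at 4 failures: the minimum (4) wins.
    assert_eq!(aggregate(&[[7, 7, 7, 7, 7], [7, 4, 7, 7, 7]]), 4);
    println!("ok");
}
```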


@@ -0,0 +1,13 @@
[package]
name = "bili_sync_entity"
version = { workspace = true }
edition = { workspace = true }
publish = { workspace = true }
[dependencies]
derivative = { workspace = true }
either = { workspace = true }
regex = { workspace = true }
sea-orm = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }


@@ -0,0 +1,3 @@
pub mod rule;
pub mod string_vec;
pub mod upper_vec;

Some files were not shown because too many files have changed in this diff.