Compare commits


118 Commits
v2.6.3 ... main

Author SHA1 Message Date
ᴀᴍᴛᴏᴀᴇʀ
c4b227e26e feat: telegram notification channel supports sending text only (#701) 2026-04-08 00:26:46 +08:00
ᴀᴍᴛᴏᴀᴇʀ
744bb536b3 feat: show the latest video time on the video source page (#700) 2026-04-07 18:38:02 +08:00
ᴀᴍᴛᴏᴀᴇʀ
91ab64a068 feat: support custom headers for webhook requests, update the documentation (#693) 2026-03-31 01:49:32 +08:00
amtoaer
55dde84f96 chore: release bili-sync 2.11.0 2026-03-26 20:39:41 +08:00
ᴀᴍᴛᴏᴀᴇʀ
eea233e576 ci: fix CI failing on windows (#690) 2026-03-25 16:57:58 +08:00
ᴀᴍᴛᴏᴀᴇʀ
72bf2b6a4d ci: update the actions used in workflows to avoid warnings about node versions below 24 (#689) 2026-03-25 16:50:47 +08:00
wanlala
47ce8f148b Add an armv7l build (#688)
* Add workflow_dispatch trigger for build binary

* Ready for pull request from build-binary.yaml

* Add support for armv7l architecture in Dockerfile

* Add support for linux/armv7l platform in release build

* Update build configuration for Linux-armv7 target

* Change armv7l to armv7 in release build workflow

* Update ARM platform tarball extraction in Dockerfile

* Fix platform

---------

Co-authored-by: amtoaer <amtoaer@gmail.com>
2026-03-25 14:29:02 +08:00
ᴀᴍᴛᴏᴀᴇʀ
1c68f13c54 perf: avoid string copies in some common scenarios, slightly improving performance (#687) 2026-03-25 12:21:11 +08:00
ᴀᴍᴛᴏᴀᴇʀ
2a4c1313b0 chore: upgrade rust to 1.94.0 (#685) 2026-03-24 23:08:31 +08:00
amtoaer
ec44798523 chore: tweak the placeholder hint text 2026-03-24 22:59:54 +08:00
ᴀᴍᴛᴏᴀᴇʀ
8cb59d6b2a feat: add total video duration and collaborative uploads to the filter rules (#684) 2026-03-24 22:58:20 +08:00
ᴀᴍᴛᴏᴀᴇʀ
3a2df55314 perf: remove an unnecessary Vec, slightly improving performance (#682) 2026-03-24 17:15:11 +08:00
ᴀᴍᴛᴏᴀᴇʀ
04448c6d8f feat: support parsing collaborative uploads (#681) 2026-03-24 16:25:42 +08:00
ᴀᴍᴛᴏᴀᴇʀ
09604fd283 fix: skip deleting empty paths during clear-reset and full refresh, tweak frontend styles (#679) 2026-03-17 00:35:19 +08:00
ᴀᴍᴛᴏᴀᴇʀ
29f36238e3 feat: support manually triggering a full refresh that removes stale local video entries and files (#678) 2026-03-16 02:50:55 +08:00
ᴀᴍᴛᴏᴀᴇʀ
980779d5c5 fix: an empty first page of a video source is no longer treated as an error (#677) 2026-03-15 22:38:01 +08:00
ᴀᴍᴛᴏᴀᴇʀ
dd96a32b35 feat: show which video source a video belongs to on the video page (#676) 2026-03-15 21:53:15 +08:00
ᴀᴍᴛᴏᴀᴇʀ
d39cce043c feat: support filtering videos by validity (#673) 2026-03-15 16:44:48 +08:00
ᴀᴍᴛᴏᴀᴇʀ
e97fa73542 feat: update the notifier to report the number of successful tasks (#672) 2026-03-15 03:31:41 +08:00
ᴀᴍᴛᴏᴀᴇʀ
2bd660efc9 feat: add a switch allowing attempts to download charging-exclusive videos (#666) 2026-02-28 22:55:01 +08:00
amtoaer
fe13029e84 chore: release bili-sync 2.10.4 2026-02-25 11:11:50 +08:00
ᴀᴍᴛᴏᴀᴇʀ
bdf4ab58f2 docs: update screenshots and documentation links, change the frontend domain (#659) 2026-02-25 10:51:53 +08:00
ᴀᴍᴛᴏᴀᴇʀ
681617cf02 fix: use the dunce crate for path normalization, removing the hand-written logic (#658) 2026-02-24 23:24:51 +08:00
ᴀᴍᴛᴏᴀᴇʀ
b6c5b547a3 fix: handle folder paths on windows, ensuring they do not end with a space (#657) 2026-02-24 22:04:22 +08:00
ᴀᴍᴛᴏᴀᴇʀ
8aba906904 fix: attempt to fix charts rendering out of order when the browser resumes from sleep (#656) 2026-02-24 01:54:04 +08:00
ᴀᴍᴛᴏᴀᴇʀ
3e465d9b71 fix: tolerate flac/audio fields that are present but null (#655) 2026-02-23 12:34:12 +08:00
ᴀᴍᴛᴏᴀᴇʀ
1930a57edd feat: add debouncing to improve auto-scrolling on the log page (#654) 2026-02-21 23:37:30 +08:00
ᴀᴍᴛᴏᴀᴇʀ
bb1576a0df perf: use itertools' join to avoid the extra allocation of collecting into a Vec (#652) 2026-02-19 19:04:10 +08:00
ᴀᴍᴛᴏᴀᴇʀ
5350d3491b chore: upgrade rust to 1.93.1, remove some unused variables in ws (#650) 2026-02-15 16:31:31 +08:00
ᴀᴍᴛᴏᴀᴇʀ
e130f14c13 fix: fix the status display error on the detail page (#649) 2026-02-15 16:28:41 +08:00
ᴀᴍᴛᴏᴀᴇʀ
980f74a242 fix: fix the valid check for videos in certain favorite folders (#648) 2026-02-15 15:09:22 +08:00
ᴀᴍᴛᴏᴀᴇʀ
8c04dc6564 chore: auto-sort frontend imports, merge icon imports and replace deprecated ones (#642) 2026-02-07 09:27:20 +08:00
ᴀᴍᴛᴏᴀᴇʀ
c49ec81d51 fix: fix several minor frontend issues (#641) 2026-02-06 14:12:18 +08:00
ᴀᴍᴛᴏᴀᴇʀ
580a66eb17 feat: broaden risk-control detection, treating HTTP 403 or 412 responses as risk control (#640) 2026-02-05 17:13:25 +08:00
ᴀᴍᴛᴏᴀᴇʀ
295d4105aa feat: support a custom ffmpeg path (#639) 2026-02-05 15:58:33 +08:00
ApliNi
151251719b feat: add a config directory environment variable (#632)
* feat: add a config directory environment variable

* feat: add a config directory command-line argument

* feat: add a short flag for the config directory

* refactor: adjust the implementation

---------

Co-authored-by: amtoaer <amtoaer@gmail.com>
2026-02-03 13:42:16 +08:00
amtoaer
e51fed984b chore: release bili-sync 2.10.3 2026-01-29 13:59:42 +08:00
ᴀᴍᴛᴏᴀᴇʀ
716c78b1e3 chore: pin the project rust version to 1.93.0, adjust CI to read the config (#626) 2026-01-28 18:56:54 +08:00
ᴀᴍᴛᴏᴀᴇʀ
22bc6bb3e8 feat: adjust the video source page UI to improve readability (#623) 2026-01-26 20:11:38 +08:00
ᴀᴍᴛᴏᴀᴇʀ
fedbd4cdb1 feat: adjust video codec priority, defaulting to AVC (#622) 2026-01-26 18:23:31 +08:00
amtoaer
c1d9dc8b87 chore: release bili-sync 2.10.2 2026-01-16 15:25:33 +08:00
ᴀᴍᴛᴏᴀᴇʀ
7f09a98d6c feat: implement failed-only, succeeded-only, and pending-only filters (#610) 2026-01-16 15:10:39 +08:00
ᴀᴍᴛᴏᴀᴇʀ
269647ac22 chore: use ring instead of aws-lc-rs (#609) 2026-01-15 14:39:16 +08:00
amtoaer
e0189c5b36 chore: remove sea-orm's tls dependency 2026-01-14 16:54:18 +08:00
开心
4c1abcf48c feat: add a failed-only filter to the videos page (#605)
* add a failed-only filter option to the videos page

* compute the failure flag only when filtering by failed, avoiding an extra paginated query

* remove redundant logic for the failed-only filter

* refactor: backend changes: 1) add an intermediate layer for status -> sql to ease extension; 2) change Option<bool> to a bool with a default; 3) rename failed to failed_only consistently

* refactor: frontend changes: 1) also rename to failed_only on the frontend; 2) fix many places that did not read failedOnly before loadVideo; 3) slightly adjust frontend styles

* format

---------

Co-authored-by: kaixin1995 <admin@haokaikai.cn>
Co-authored-by: amtoaer <amtoaer@gmail.com>
2026-01-13 22:28:10 +08:00
amtoaer
c05463285b chore: release bili-sync 2.10.1 2026-01-12 11:25:01 +08:00
ᴀᴍᴛᴏᴀᴇʀ
264de2487e fix: fix the status-editor button being unclickable after the svelte upgrade (#603) 2026-01-12 11:22:48 +08:00
amtoaer
ea575b04e6 chore: release bili-sync 2.10.0 2026-01-11 23:17:34 +08:00
ᴀᴍᴛᴏᴀᴇʀ
f122b9756b feat: moderately enlarge the history log capacity (#602) 2026-01-11 21:42:31 +08:00
ᴀᴍᴛᴏᴀᴇʀ
26514f7174 feat: support clear-and-reset to make refreshing paginated videos easier (#596) 2026-01-11 15:03:31 +08:00
ᴀᴍᴛᴏᴀᴇʀ
5944298f10 Add QR-code login (#601)
* feat: add QR-code login, generating a QR code and polling the login status

* feat: strengthen the QR-code login tests, improve doc comments for QR generation and status polling

* refactor: backend changes: 1) split login out into credential; 2) reuse the same extract function for QR login and credential refresh; 3) trim comments.

* refactor: frontend changes: 1) handle scanning in a dedicated dialog; 2) use the same layout across statuses to avoid layout jumps on state changes

* format

---------

Co-authored-by: zkl <i@zkl2333.com>
2026-01-11 12:59:48 +08:00
ᴀᴍᴛᴏᴀᴇʀ
64eecaa822 fix: fix chart display issues in some edge cases (#592) 2026-01-09 18:14:32 +08:00
amtoaer
18d06c51ba chore: ignore warnings from frontend shadcn-svelte components 2026-01-05 13:30:09 +08:00
amtoaer
ffa5c1e860 refactor: centralize the default values of config options 2026-01-05 13:01:56 +08:00
ᴀᴍᴛᴏᴀᴇʀ
97e1b6285e feat: fall back to the default address when binding bind_address fails, so the web service can still start (#590) 2026-01-05 12:13:50 +08:00
ᴀᴍᴛᴏᴀᴇʀ
e2a24eff29 chore: update frontend and backend dependency versions (#589) 2026-01-05 11:46:04 +08:00
ᴀᴍᴛᴏᴀᴇʀ
56f5ed8e01 feat: support searching followed uploaders (#588) 2026-01-05 00:39:45 +08:00
ᴀᴍᴛᴏᴀᴇʀ
0b5ae3d664 fix: fix parallel downloads not triggering correctly, handle files differently depending on whether they are streams (#586) 2025-12-31 11:52:38 +08:00
amtoaer
f24ee97b28 chore: release bili-sync 2.9.4 2025-12-26 21:21:36 +08:00
ᴀᴍᴛᴏᴀᴇʀ
96c11bb077 fix: fix incorrect behavior when upgrading directly from versions below 2.6.0 (#583) 2025-12-26 21:21:03 +08:00
ᴀᴍᴛᴏᴀᴇʀ
2455f7c83d fix: move toasts to the top center to avoid covering interactive components (#582) 2025-12-26 18:12:41 +08:00
ᴀᴍᴛᴏᴀᴇʀ
4faf5a7cf9 fix: fix flags not being reset correctly, support resetting tasks regardless of failure count (#581) 2025-12-26 17:43:40 +08:00
ᴀᴍᴛᴏᴀᴇʀ
c2c732093d fix: fix 404 not found errors when downloading certain videos (#579) 2025-12-26 14:24:52 +08:00
amtoaer
4103122f6b chore: release bili-sync 2.9.3 2025-12-20 00:43:27 +08:00
amtoaer
14b8f877cf refactor: fix a clippy warning 2025-12-20 00:42:47 +08:00
welann
8dfc7ddf5c fix: use unique ids for the filter/skip Switches and fix their Label associations (#575) 2025-12-20 00:40:39 +08:00
amtoaer
9a63e1eb6f chore: release bili-sync 2.9.2 2025-12-12 14:13:13 +08:00
ᴀᴍᴛᴏᴀᴇʀ
d1b279ed7f fix: adjust the filtering logic so storage volumes are no longer wrongly filtered out when disk type detection fails (#568) 2025-12-11 11:35:36 +08:00
amtoaer
128ca49225 chore: release bili-sync 2.9.1 2025-12-09 12:40:42 +08:00
ᴀᴍᴛᴏᴀᴇʀ
8c2e8da2b0 fix: filter for SSD/HDD and deduplicate by name when reading disk space, preventing double counting (#563) 2025-12-09 12:39:49 +08:00
amtoaer
5dd7486b12 chore: release bili-sync 2.9.0 2025-12-08 00:54:24 +08:00
amtoaer
b7d9e5dc0c fix: the cursor should become a pointer when hovering over the theme toggle button 2025-12-07 00:38:54 +08:00
ᴀᴍᴛᴏᴀᴇʀ
d1eac3e298 feat: support disabling the credential check/refresh task, leaving credential validity to the user (#560) 2025-12-06 23:26:06 +08:00
ᴀᴍᴛᴏᴀᴇʀ
3f047771cb feat: add a case-insensitive "contains" filter to the video rules (#559) 2025-12-06 22:00:14 +08:00
ᴀᴍᴛᴏᴀᴇʀ
f1703096fd feat: support batch-editing the download status of videos matching filter conditions (#558) 2025-12-06 19:47:16 +08:00
ᴀᴍᴛᴏᴀᴇʀ
930660045f feat: support a dark theme (#557) 2025-12-06 01:44:13 +08:00
ᴀᴍᴛᴏᴀᴇʀ
6391aa67c0 feat: support searching by BV id (#554) 2025-12-05 21:52:31 +08:00
ᴀᴍᴛᴏᴀᴇʀ
b5ef76b0ed fix: correctly handle favorite-folder entries in "followed collections / favorites", plus some style and text tweaks (#553) 2025-12-05 16:38:10 +08:00
ᴀᴍᴛᴏᴀᴇʀ
f37d9af678 fix: tolerate string-typed timestamps returned by the API (#552) 2025-12-05 01:56:18 +08:00
ᴀᴍᴛᴏᴀᴇʀ
7ef38a38ed feat: support custom webhook templates and sending test messages (#551) 2025-12-05 00:21:36 +08:00
amtoaer
e76673d076 chore: release bili-sync 2.8.0 2025-12-01 23:15:07 +08:00
Naomi
f3822dd536 feat: improve nfo time fields and actor thumbnails (#542) 2025-11-29 01:22:26 +08:00
amtoaer
688c8cec6a feat: add some context to the credential refresh flow for easier debugging 2025-11-21 10:50:52 +08:00
amtoaer
c854e4e889 fix: try to fix timestamp issues caused by overly fast execution 2025-11-20 15:04:39 +08:00
amtoaer
645e686822 fix: ensure error types surfaced in streams are preserved correctly 2025-11-11 14:42:05 +08:00
ᴀᴍᴛᴏᴀᴇʀ
670f21a725 refactor: restructure the download task scheduling code for readability and robustness (#531) 2025-11-11 01:29:52 +08:00
amtoaer
8931cb5d2a feat: stop the scrollbar from causing layout shifts, improve chart colors 2025-11-09 21:48:05 +08:00
amtoaer
66996a77c6 chore: log the stream info when flac stream parsing fails, to aid future fixes 2025-11-09 19:11:46 +08:00
ᴀᴍᴛᴏᴀᴇʀ
170bd14fe3 feat: rework the video download trigger, migrating from a plain tokio::sleep to a scheduler (#529) 2025-11-09 01:11:42 +08:00
ᴀᴍᴛᴏᴀᴇʀ
c69a88f1da feat: polish the details of risk-control handling (#527) 2025-11-08 00:41:07 +08:00
ᴀᴍᴛᴏᴀᴇʀ
8ac6829e61 feat: support configuring notifiers that fire when a video source or the whole download task errors (#526) 2025-11-07 20:37:09 +08:00
ᴀᴍᴛᴏᴀᴇʀ
a871db655f feat: support deleting video sources (#525) 2025-11-07 15:15:03 +08:00
ᴀᴍᴛᴏᴀᴇʀ
854d39cf88 feat: improve global config handling, adjust the download path filling logic (#523) 2025-11-06 17:25:26 +08:00
amtoaer
b6cba69e11 chore: report the specific error when a video stream fails 2025-11-02 00:43:07 +08:00
ᴀᴍᴛᴏᴀᴇʀ
ff6db0ad97 feat: switch some APIs, rewrite the wbi signing implementation, add extra risk-control detection (#503) 2025-10-15 02:01:41 +08:00
ᴀᴍᴛᴏᴀᴇʀ
84d353365a feat: support setting a default path for quick subscriptions (#502) 2025-10-14 18:44:33 +08:00
ᴀᴍᴛᴏᴀᴇʀ
c7e0d31811 chore: remove the migration logic for the legacy config file (#501) 2025-10-14 16:32:40 +08:00
ᴀᴍᴛᴏᴀᴇʀ
2fff5134cf fix: fix occasionally abnormally high initial sysinfo values (#499) 2025-10-14 01:38:26 +08:00
ᴀᴍᴛᴏᴀᴇʀ
8a1569d085 refactor: restructure the WebSocket handling, tidying the logic and improving performance (#498) 2025-10-13 20:15:30 +08:00
ᴀᴍᴛᴏᴀᴇʀ
de702435af feat: rework the download module to download files into a temporary directory before moving them to the final path (#495) 2025-10-13 01:59:50 +08:00
ᴀᴍᴛᴏᴀᴇʀ
eb2606f120 feat: detect charging-exclusive videos and bangumi/films, and fix category being wrongly overwritten (#494) 2025-10-12 03:01:04 +08:00
ᴀᴍᴛᴏᴀᴇʀ
02c42861ab feat: support skipping certain processing steps for videos (#492) 2025-10-11 20:45:44 +08:00
ᴀᴍᴛᴏᴀᴇʀ
ed54ca13b8 feat: support fetching submissions via the dynamics API, which also returns dynamic-only videos (#485) 2025-10-10 18:52:07 +08:00
ᴀᴍᴛᴏᴀᴇʀ
4d6669a48a refactor: simplify logic using RETURNING (#488) 2025-10-10 13:57:53 +08:00
ᴀᴍᴛᴏᴀᴇʀ
eadb464363 chore: update rust dependencies (#486) 2025-10-10 12:49:11 +08:00
amtoaer
2b046362d7 chore: release bili-sync 2.7.0 2025-09-25 00:51:59 +08:00
ᴀᴍᴛᴏᴀᴇʀ
61c9e7de88 chore: small frontend tweaks, add windows to the random UA pool (#470) 2025-09-25 00:50:17 +08:00
ᴀᴍᴛᴏᴀᴇʀ
3d25c6b321 chore: run auto-correct once (#468) 2025-09-24 18:50:47 +08:00
ᴀᴍᴛᴏᴀᴇʀ
d35858790b chore: clippy should reject warnings (#466) 2025-09-24 17:58:04 +08:00
ᴀᴍᴛᴏᴀᴇʀ
b441f04cdf chore: fix new clippy warnings (#467) 2025-09-24 17:36:20 +08:00
ᴀᴍᴛᴏᴀᴇʀ
4db7e6763a feat: support re-evaluating historical videos, show each video's rule evaluation status on the frontend (#465) 2025-09-24 17:08:04 +08:00
ᴀᴍᴛᴏᴀᴇʀ
bbbb7d0c5b feat: use etag to save content transfer, write lifetimes explicitly (#464) 2025-09-24 02:03:06 +08:00
ᴀᴍᴛᴏᴀᴇʀ
210c94398a feat: implement video filter rules (#457) 2025-09-24 00:42:27 +08:00
ᴀᴍᴛᴏᴀᴇʀ
6c7d295fe6 fix: fix the subtitle risk-control error (#463) 2025-09-23 08:27:14 +08:00
ᴀᴍᴛᴏᴀᴇʀ
71519af2f3 chore: remove the unnecessary image-proxy (#451) 2025-08-28 18:51:23 +08:00
Thomas Yang
8ed2fbae24 feat: use a random User-Agent header in requests (#447) 2025-08-27 10:27:23 +08:00
amtoaer
fd90bc8b73 chore: stop printing long URLs when a download fails 2025-08-08 20:23:40 +08:00
amtoaer
66bd3d6a41 chore: add an explanatory note when ffmpeg fails 2025-08-07 15:11:29 +08:00
163 changed files with 15456 additions and 4235 deletions


@@ -12,7 +12,7 @@ jobs:
working-directory: web
steps:
- name: Checkout repo
uses: actions/checkout@v4
uses: actions/checkout@v6
- name: Setup bun
uses: oven-sh/setup-bun@v2
with:
@@ -20,7 +20,7 @@ jobs:
- name: Install dependencies
run: bun install --frozen-lockfile
- name: Cache dependencies
uses: actions/cache@v4
uses: actions/cache@v5
with:
path: ~/.bun/install/cache
key: ${{ runner.os }}-bun-${{ hashFiles('docs/bun.lockb') }}
@@ -29,7 +29,7 @@ jobs:
- name: Build Frontend
run: bun run build
- name: Upload Web Build Artifact
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v6
with:
name: web-build
path: web/build
@@ -40,6 +40,11 @@ jobs:
strategy:
matrix:
platform:
- release_for: Linux-armv7
os: ubuntu-24.04
target: armv7-unknown-linux-musleabihf
bin: bili-sync-rs
name: bili-sync-rs-Linux-armv7-musl.tar.gz
- release_for: Linux-x86_64
os: ubuntu-24.04
target: x86_64-unknown-linux-musl
@@ -67,25 +72,26 @@ jobs:
name: bili-sync-rs-Windows-x86_64.zip
steps:
- name: Checkout repo
uses: actions/checkout@v4
uses: actions/checkout@v6
with:
fetch-depth: 0
- name: Download Web Build Artifact
uses: actions/download-artifact@v4
uses: actions/download-artifact@v8
with:
name: web-build
path: web/build
- name: Cache dependencies
uses: Swatinem/rust-cache@v2
- name: Install musl-tools
run: sudo apt-get update --yes && sudo apt-get install --yes musl-tools
if: contains(matrix.platform.target, 'musl')
- name: Read Toolchain Version
id: read_rust_toolchain
shell: bash
run: |
channel=$(grep '^channel' rust-toolchain.toml | sed 's/.*= *"\(.*\)"/\1/')
echo "value=$channel" >> $GITHUB_OUTPUT
- name: Build binary
uses: houseabsolute/actions-rust-cross@v0
uses: houseabsolute/actions-rust-cross@v1
with:
command: build
target: ${{ matrix.platform.target }}
toolchain: stable
toolchain: ${{ steps.read_rust_toolchain.outputs.value }}
args: "--locked --release"
strip: true
- name: Package as archive
@@ -98,7 +104,7 @@ jobs:
tar czvf ../../../${{ matrix.platform.name }} ${{ matrix.platform.bin }}
fi
- name: Upload release artifact
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v6
with:
name: bili-sync-rs-${{ matrix.platform.release_for }}
path: |


@@ -5,7 +5,7 @@ on:
branches:
- main
paths:
- 'docs/**'
- "docs/**"
jobs:
doc:
@@ -16,7 +16,7 @@ jobs:
working-directory: docs
steps:
- name: Checkout repo
uses: actions/checkout@v4
uses: actions/checkout@v6
- name: Setup bun
uses: oven-sh/setup-bun@v2
with:
@@ -24,7 +24,7 @@ jobs:
- name: Install dependencies
run: bun install --frozen-lockfile
- name: Cache dependencies
uses: actions/cache@v4
uses: actions/cache@v5
with:
path: ~/.bun/install/cache
key: ${{ runner.os }}-bun-${{ hashFiles('docs/bun.lockb') }}
@@ -38,4 +38,4 @@ jobs:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: docs/.vitepress/dist
force_orphan: true
commit_message: 部署来自 main 的最新文档变更:
commit_message: 部署来自 main 的最新文档变更:


@@ -24,9 +24,9 @@ jobs:
if: ${{ github.event_name == 'push' || !github.event.pull_request.draft }}
steps:
- name: Checkout repo
uses: actions/checkout@v4
uses: actions/checkout@v6
- run: rustup default stable && rustup component add clippy && rustup component add rustfmt --toolchain nightly
- run: rustup install && rustup component add rustfmt --toolchain nightly
- name: Cache dependencies
uses: swatinem/rust-cache@v2
@@ -37,7 +37,7 @@ jobs:
run: cargo +nightly fmt --check
- name: cargo clippy
run: cargo clippy
run: cargo clippy -- -D warnings
- name: cargo test
run: cargo test
@@ -50,7 +50,7 @@ jobs:
working-directory: web
steps:
- name: Checkout repo
uses: actions/checkout@v4
uses: actions/checkout@v6
- name: Setup bun
uses: oven-sh/setup-bun@v2
with:
@@ -58,7 +58,7 @@ jobs:
- name: Install dependencies
run: bun install --frozen-lockfile
- name: Cache dependencies
uses: actions/cache@v4
uses: actions/cache@v5
with:
path: ~/.bun/install/cache
key: ${{ runner.os }}-bun-${{ hashFiles('docs/bun.lockb') }}


@@ -16,9 +16,9 @@ jobs:
contents: write
steps:
- name: Checkout repo
uses: actions/checkout@v4
uses: actions/checkout@v6
- name: Download release artifact
uses: actions/download-artifact@v4
uses: actions/download-artifact@v8
with:
merge-multiple: true
- name: Publish GitHub release
@@ -35,9 +35,9 @@ jobs:
contents: write
steps:
- name: Checkout repo
uses: actions/checkout@v4
uses: actions/checkout@v6
- name: Download release artifact
uses: actions/download-artifact@v4
uses: actions/download-artifact@v8
with:
merge-multiple: true
- name: Docker Meta
@@ -65,6 +65,7 @@ jobs:
platforms: |
linux/amd64
linux/arm64
linux/arm/v7
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}

2
.gitignore vendored

@@ -1,6 +1,6 @@
**/target
auth_data
*.sqlite
*.sqlite*
debug*
node_modules
docs/.vitepress/cache

2342
Cargo.lock generated

File diff suppressed because it is too large


@@ -4,7 +4,7 @@ default-members = ["crates/bili_sync"]
resolver = "2"
[workspace.package]
version = "2.6.3"
version = "2.11.0"
authors = ["amtoaer <amtoaer@gmail.com>"]
license = "MIT"
description = "由 Rust & Tokio 驱动的哔哩哔哩同步工具"
@@ -15,67 +15,76 @@ publish = false
bili_sync_entity = { path = "crates/bili_sync_entity" }
bili_sync_migration = { path = "crates/bili_sync_migration" }
anyhow = { version = "1.0.98", features = ["backtrace"] }
arc-swap = { version = "1.7.1", features = ["serde"] }
assert_matches = "1.5.0"
async-std = { version = "1.13.1", features = ["attributes", "tokio1"] }
anyhow = { version = "1.0.100", features = ["backtrace"] }
arc-swap = { version = "1.8.0", features = ["serde"] }
async-stream = "0.3.6"
async-trait = "0.1.88"
axum = { version = "0.8.4", features = ["macros", "ws"] }
async-tempfile = { version = "0.7.0", features = ["uuid"] }
async-trait = "0.1.89"
axum = { version = "0.8.8", features = ["macros", "ws"] }
base64 = "0.22.1"
built = { version = "0.7.7", features = ["git2", "chrono"] }
chrono = { version = "0.4.41", features = ["serde"] }
clap = { version = "4.5.41", features = ["env", "string"] }
chrono = { version = "0.4.42", features = ["serde"] }
clap = { version = "4.5.54", features = ["env", "string"] }
cookie = "0.18.1"
cow-utils = "0.1.3"
croner = "3.0.1"
dashmap = "6.1.0"
derivative = "2.2.0"
dirs = "6.0.0"
dunce = "1.0.5"
either = "1.15.0"
enum_dispatch = "0.3.13"
float-ord = "0.3.2"
futures = "0.3.31"
git2 = { version = "0.20.2", features = [], default-features = false }
handlebars = "6.3.2"
git2 = { version = "0.20.3", features = [], default-features = false }
handlebars = "6.4.0"
hex = "0.4.3"
itertools = "0.14.0"
leaky-bucket = "1.1.2"
md5 = "0.8.0"
memchr = "2.7.5"
memchr = "2.7.6"
once_cell = "1.21.3"
parking_lot = "0.12.4"
parking_lot = "0.12.5"
prost = "0.14.1"
quick-xml = { version = "0.38.0", features = ["async-tokio"] }
rand = "0.9.1"
regex = "1.11.1"
reqwest = { version = "0.12.22", features = [
quick-xml = { version = "0.38.4", features = ["async-tokio"] }
rand = "0.9.2"
regex = "1.12.2"
reqwest = { version = "0.13.1", features = [
"query",
"form",
"charset",
"cookies",
"gzip",
"http2",
"json",
"rustls-tls",
"rustls-no-provider",
"stream",
], default-features = false }
rsa = { version = "0.10.0-rc.3", features = ["sha2"] }
rsa = { version = "0.10.0-rc.9", features = ["sha2"] }
rust-embed-for-web = { git = "https://github.com/amtoaer/rust-embed-for-web", tag = "v1.0.0" }
sea-orm = { version = "1.1.13", features = [
rustls = { version = "0.23.36", default-features = false, features = ["ring"] }
sea-orm = { version = "1.1.19", features = [
"macros",
"runtime-tokio-rustls",
"runtime-tokio",
"sqlx-sqlite",
"sqlite-use-returning-for-3_35",
] }
sea-orm-migration = { version = "1.1.13", features = [] }
serde = { version = "1.0.219", features = ["derive"] }
serde_json = "1.0.140"
sea-orm-migration = { version = "1.1.19", features = [] }
serde = { version = "1.0.228", features = ["derive"] }
serde_json = "1.0.148"
serde_urlencoded = "0.7.1"
strum = { version = "0.27.1", features = ["derive"] }
sysinfo = "0.36.0"
thiserror = "2.0.12"
tokio = { version = "1.46.1", features = ["full"] }
tokio-stream = { version = "0.1.17", features = ["sync"] }
tokio-util = { version = "0.7.15", features = ["io", "rt"] }
toml = "0.9.1"
strum = { version = "0.27.2", features = ["derive"] }
sysinfo = "0.37.2"
thiserror = "2.0.17"
tokio = { version = "1.49.0", features = ["full"] }
tokio-cron-scheduler = "0.15.1"
tokio-stream = { version = "0.1.18", features = ["sync"] }
tokio-util = { version = "0.7.18", features = ["io", "rt"] }
toml = "0.9.10"
tower = "0.5.2"
tracing = "0.1.41"
tracing-subscriber = { version = "0.3.19", features = ["chrono", "json"] }
uuid = { version = "1.17.0", features = ["v4"] }
tracing = "0.1.44"
tracing-subscriber = { version = "0.3.22", features = ["chrono", "json"] }
ua_generator = { version = "0.5.42", default-features = false }
uuid = { version = "1.19.0", features = ["v4"] }
validator = { version = "0.20.0", features = ["derive"] }
[workspace.metadata.release]


@@ -13,6 +13,8 @@ COPY ./bili-sync-rs-Linux-*.tar.gz ./targets/
RUN if [ "$TARGETPLATFORM" = "linux/amd64" ]; then \
tar xzvf ./targets/bili-sync-rs-Linux-x86_64-musl.tar.gz -C ./; \
elif [ "$TARGETPLATFORM" = "linux/arm/v7" ]; then \
tar xzvf ./targets/bili-sync-rs-Linux-armv7-musl.tar.gz -C ./; \
else \
tar xzvf ./targets/bili-sync-rs-Linux-aarch64-musl.tar.gz -C ./; \
fi
@@ -34,4 +36,3 @@ COPY --from=base / /
ENTRYPOINT [ "/app/bili-sync-rs" ]
VOLUME [ "/app/.config/bili-sync" ]


@@ -3,14 +3,14 @@
## Introduction
> [!NOTE]
> [Click here](https://bili-sync.allwens.work/) to view the documentation
> [View the documentation](https://bili-sync.amto.cc/) [Join the Telegram group](https://t.me/+nuYrt8q6uEo4MWI1)
bili-sync is a Bilibili sync tool written for NAS users, powered by Rust & Tokio.
## Demo
### Admin page
![Admin page](/assets/webui.webp)
![Admin page](./assets/webui.webp)
### Library overview
![Library overview](./assets/overview.webp)
### Library details
Binary file not shown (image asset; size before: 95 KiB, after: 138 KiB)


@@ -13,6 +13,7 @@ build = "build.rs"
anyhow = { workspace = true }
arc-swap = { workspace = true }
async-stream = { workspace = true }
async-tempfile = { workspace = true }
axum = { workspace = true }
base64 = { workspace = true }
bili_sync_entity = { workspace = true }
@@ -20,14 +21,16 @@ bili_sync_migration = { workspace = true }
chrono = { workspace = true }
clap = { workspace = true }
cookie = { workspace = true }
cow-utils = { workspace = true }
croner = { workspace = true }
dashmap = { workspace = true }
dirs = { workspace = true }
dunce = { workspace = true }
enum_dispatch = { workspace = true }
float-ord = { workspace = true }
futures = { workspace = true }
handlebars = { workspace = true }
hex = { workspace = true }
itertools = { workspace = true }
leaky-bucket = { workspace = true }
md5 = { workspace = true }
memchr = { workspace = true }
@@ -40,6 +43,7 @@ regex = { workspace = true }
reqwest = { workspace = true }
rsa = { workspace = true }
rust-embed-for-web = { workspace = true }
rustls = { workspace = true }
sea-orm = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
@@ -48,18 +52,17 @@ strum = { workspace = true }
sysinfo = { workspace = true }
thiserror = { workspace = true }
tokio = { workspace = true }
tokio-cron-scheduler = { workspace = true }
tokio-stream = { workspace = true }
tokio-util = { workspace = true }
toml = { workspace = true }
tower = { workspace = true }
tracing = { workspace = true }
tracing-subscriber = { workspace = true }
ua_generator = { workspace = true }
uuid = { workspace = true }
validator = { workspace = true }
[dev-dependencies]
assert_matches = { workspace = true }
[build-dependencies]
built = { workspace = true }
git2 = { workspace = true }


@@ -2,7 +2,8 @@ use std::borrow::Cow;
use std::path::Path;
use std::pin::Pin;
use anyhow::{Context, Result, ensure};
use anyhow::{Result, ensure};
use bili_sync_entity::rule::Rule;
use bili_sync_entity::*;
use chrono::Utc;
use futures::Stream;
@@ -12,7 +13,7 @@ use sea_orm::sea_query::SimpleExpr;
use sea_orm::{DatabaseConnection, Unchanged};
use crate::adapter::{_ActiveModel, VideoSource, VideoSourceEnum};
use crate::bilibili::{BiliClient, Collection, CollectionItem, CollectionType, VideoInfo};
use crate::bilibili::{BiliClient, Collection, CollectionItem, CollectionType, Credential, VideoInfo};
impl VideoSource for collection::Model {
fn display_name(&self) -> Cow<'static, str> {
@@ -43,7 +44,12 @@ impl VideoSource for collection::Model {
})
}
fn should_take(&self, _release_datetime: &chrono::DateTime<Utc>, _latest_row_at: &chrono::DateTime<Utc>) -> bool {
fn should_take(
&self,
_idx: usize,
_release_datetime: &chrono::DateTime<Utc>,
_latest_row_at: &chrono::DateTime<Utc>,
) -> bool {
// collection (video collections / video lists) does not seem to return items in strict time order, and different collections can be sorted differently
// To keep the program correct, collections never break early based on time; every refresh pulls the full list
true
@@ -51,21 +57,27 @@ impl VideoSource for collection::Model {
fn should_filter(
&self,
_idx: usize,
video_info: Result<VideoInfo, anyhow::Error>,
latest_row_at: &chrono::DateTime<Utc>,
) -> Option<VideoInfo> {
// Because collection videos have no fixed time order, should_take cannot stop fetching early, so should_filter must do extra filtering
if let Ok(video_info) = video_info {
if video_info.release_datetime() > latest_row_at {
return Some(video_info);
}
if let Ok(video_info) = video_info
&& video_info.release_datetime() > latest_row_at
{
return Some(video_info);
}
None
}
fn rule(&self) -> &Option<Rule> {
&self.rule
}
async fn refresh<'a>(
self,
bili_client: &'a BiliClient,
credential: &'a Credential,
connection: &'a DatabaseConnection,
) -> Result<(
VideoSourceEnum,
@@ -78,6 +90,7 @@ impl VideoSource for collection::Model {
mid: self.m_id.to_string(),
collection_type: CollectionType::from_expected(self.r#type),
},
credential,
);
let collection_info = collection.get_info().await?;
ensure!(
@@ -88,21 +101,18 @@ impl VideoSource for collection::Model {
collection_info,
collection.collection
);
collection::ActiveModel {
let updated_model = collection::ActiveModel {
id: Unchanged(self.id),
name: Set(collection_info.name.clone()),
name: Set(collection_info.name),
..Default::default()
}
.save(connection)
.update(connection)
.await?;
Ok((
collection::Entity::find()
.filter(collection::Column::Id.eq(self.id))
.one(connection)
.await?
.context("collection not found")?
.into(),
Box::pin(collection.into_video_stream()),
))
Ok((updated_model.into(), Box::pin(collection.into_video_stream())))
}
async fn delete_from_db(self, conn: &impl ConnectionTrait) -> Result<()> {
self.delete(conn).await?;
Ok(())
}
}
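The comments in should_filter above note that collection items carry no fixed time order, so the adapter filters each item individually instead of breaking early. The rule can be sketched as a small pure function; this is a std-only illustration (not the project's actual API), with i64 Unix timestamps standing in for chrono::DateTime<Utc> and String for anyhow::Error:

```rust
/// Keep an item only if it was fetched successfully AND is strictly
/// newer than the newest row already stored locally.
fn should_filter_collection(video_info: Result<i64, String>, latest_row_at: i64) -> Option<i64> {
    match video_info {
        Ok(release_datetime) if release_datetime > latest_row_at => Some(release_datetime),
        // fetch errors and already-seen items are both dropped
        _ => None,
    }
}
```

Because no early break ever happens for collections, every refresh walks the full list and relies on this filter (plus insert-conflict handling) to stay idempotent.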


@@ -2,7 +2,8 @@ use std::borrow::Cow;
use std::path::Path;
use std::pin::Pin;
use anyhow::{Context, Result, ensure};
use anyhow::{Result, ensure};
use bili_sync_entity::rule::Rule;
use bili_sync_entity::*;
use futures::Stream;
use sea_orm::ActiveValue::Set;
@@ -11,7 +12,7 @@ use sea_orm::sea_query::SimpleExpr;
use sea_orm::{DatabaseConnection, Unchanged};
use crate::adapter::{_ActiveModel, VideoSource, VideoSourceEnum};
use crate::bilibili::{BiliClient, FavoriteList, VideoInfo};
use crate::bilibili::{BiliClient, Credential, FavoriteList, VideoInfo};
impl VideoSource for favorite::Model {
fn display_name(&self) -> Cow<'static, str> {
@@ -42,15 +43,20 @@ impl VideoSource for favorite::Model {
})
}
fn rule(&self) -> &Option<Rule> {
&self.rule
}
async fn refresh<'a>(
self,
bili_client: &'a BiliClient,
credential: &'a Credential,
connection: &'a DatabaseConnection,
) -> Result<(
VideoSourceEnum,
Pin<Box<dyn Stream<Item = Result<VideoInfo>> + Send + 'a>>,
)> {
let favorite = FavoriteList::new(bili_client, self.f_id.to_string());
let favorite = FavoriteList::new(bili_client, self.f_id.to_string(), credential);
let favorite_info = favorite.get_info().await?;
ensure!(
favorite_info.id == self.f_id,
@@ -58,21 +64,18 @@ impl VideoSource for favorite::Model {
favorite_info.id,
self.f_id
);
favorite::ActiveModel {
let updated_model = favorite::ActiveModel {
id: Unchanged(self.id),
name: Set(favorite_info.title.clone()),
name: Set(favorite_info.title),
..Default::default()
}
.save(connection)
.update(connection)
.await?;
Ok((
favorite::Entity::find()
.filter(favorite::Column::Id.eq(self.id))
.one(connection)
.await?
.context("favorite not found")?
.into(),
Box::pin(favorite.into_video_stream()),
))
Ok((updated_model.into(), Box::pin(favorite.into_video_stream())))
}
async fn delete_from_db(self, conn: &impl ConnectionTrait) -> Result<()> {
self.delete(conn).await?;
Ok(())
}
}


@@ -7,7 +7,7 @@ use std::borrow::Cow;
use std::path::Path;
use std::pin::Pin;
use anyhow::Result;
use anyhow::{Context, Result};
use chrono::Utc;
use enum_dispatch::enum_dispatch;
use futures::Stream;
@@ -19,10 +19,11 @@ use sea_orm::sea_query::SimpleExpr;
#[rustfmt::skip]
use bili_sync_entity::collection::Model as Collection;
use bili_sync_entity::favorite::Model as Favorite;
use bili_sync_entity::rule::Rule;
use bili_sync_entity::submission::Model as Submission;
use bili_sync_entity::watch_later::Model as WatchLater;
use crate::bilibili::{BiliClient, VideoInfo};
use crate::bilibili::{BiliClient, Credential, VideoInfo};
#[enum_dispatch]
pub enum VideoSourceEnum {
@@ -55,12 +56,18 @@ pub trait VideoSource {
fn update_latest_row_at(&self, datetime: DateTime) -> _ActiveModel;
// 判断是否应该继续拉取视频
fn should_take(&self, release_datetime: &chrono::DateTime<Utc>, latest_row_at: &chrono::DateTime<Utc>) -> bool {
fn should_take(
&self,
_idx: usize,
release_datetime: &chrono::DateTime<Utc>,
latest_row_at: &chrono::DateTime<Utc>,
) -> bool {
release_datetime > latest_row_at
}
fn should_filter(
&self,
_idx: usize,
video_info: Result<VideoInfo, anyhow::Error>,
_latest_row_at: &chrono::DateTime<Utc>,
) -> Option<VideoInfo> {
@@ -68,6 +75,8 @@ pub trait VideoSource {
video_info.ok()
}
fn rule(&self) -> &Option<Rule>;
fn log_refresh_video_start(&self) {
info!("开始扫描{}..", self.display_name());
}
@@ -95,11 +104,25 @@ pub trait VideoSource {
async fn refresh<'a>(
self,
bili_client: &'a BiliClient,
credential: &'a Credential,
connection: &'a DatabaseConnection,
) -> Result<(
VideoSourceEnum,
Pin<Box<dyn Stream<Item = Result<VideoInfo>> + Send + 'a>>,
)>;
async fn create_dir_all(&self) -> Result<()> {
let video_source_path = self.path();
tokio::fs::create_dir_all(video_source_path).await.with_context(|| {
format!(
"failed to create video source directory {}",
video_source_path.display()
)
})?;
Ok(())
}
async fn delete_from_db(self, conn: &impl ConnectionTrait) -> Result<()>;
}
pub enum _ActiveModel {
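The create_dir_all default method added to the trait above wraps the io error with the failing path via anyhow's with_context. A std-only approximation of the same idea, using map_err and format! so the example compiles without the anyhow crate:

```rust
use std::path::Path;

/// Create the video source directory, embedding the failing path in the
/// error message the way `with_context` does in the trait above.
fn create_dir_all_with_context(path: &Path) -> Result<(), String> {
    std::fs::create_dir_all(path)
        .map_err(|e| format!("failed to create video source directory {}: {e}", path.display()))
}
```

The point of the context is diagnosability: a bare io::Error says "permission denied" but not which of several video source paths failed.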


@@ -1,7 +1,8 @@
use std::path::Path;
use std::pin::Pin;
use anyhow::{Context, Result, ensure};
use anyhow::{Result, ensure};
use bili_sync_entity::rule::Rule;
use bili_sync_entity::*;
use futures::Stream;
use sea_orm::ActiveValue::Set;
@@ -10,7 +11,7 @@ use sea_orm::sea_query::SimpleExpr;
use sea_orm::{DatabaseConnection, Unchanged};
use crate::adapter::{_ActiveModel, VideoSource, VideoSourceEnum};
use crate::bilibili::{BiliClient, Submission, VideoInfo};
use crate::bilibili::{BiliClient, Credential, Dynamic, Submission, VideoInfo};
impl VideoSource for submission::Model {
fn display_name(&self) -> std::borrow::Cow<'static, str> {
@@ -41,15 +42,56 @@ impl VideoSource for submission::Model {
})
}
fn should_take(
&self,
idx: usize,
release_datetime: &chrono::DateTime<chrono::Utc>,
latest_row_at: &chrono::DateTime<chrono::Utc>,
) -> bool {
// When using the dynamics API, the user may have pinned a very old video at the top of their feed
// In that case we must keep fetching instead of stopping just because the first item fails the check
// Subsequent non-pinned items are sorted from new to old as usual and can be handled the regular way
if idx == 0 && self.use_dynamic_api {
return true;
}
release_datetime > latest_row_at
}
fn should_filter(
&self,
idx: usize,
video_info: Result<VideoInfo, anyhow::Error>,
latest_row_at: &chrono::DateTime<chrono::Utc>,
) -> Option<VideoInfo> {
if idx == 0 && self.use_dynamic_api {
// Likewise, the first item from the dynamics API may be a pinned old video, so filter it separately
// Skipping this filter would not break correctness, since a conflicting insert is ignored anyway
// It is mainly a performance measure to avoid unnecessary database operations
if let Ok(video_info) = video_info
&& video_info.release_datetime() > latest_row_at
{
return Some(video_info);
}
None
} else {
video_info.ok()
}
}
fn rule(&self) -> &Option<Rule> {
&self.rule
}
async fn refresh<'a>(
self,
bili_client: &'a BiliClient,
credential: &'a Credential,
connection: &'a DatabaseConnection,
) -> Result<(
VideoSourceEnum,
Pin<Box<dyn Stream<Item = Result<VideoInfo>> + Send + 'a>>,
)> {
let submission = Submission::new(bili_client, self.upper_id.to_string());
let submission = Submission::new(bili_client, self.upper_id.to_string(), credential);
let upper = submission.get_info().await?;
ensure!(
upper.mid == submission.upper_id,
@@ -57,21 +99,24 @@ impl VideoSource for submission::Model {
upper.mid,
submission.upper_id
);
submission::ActiveModel {
let updated_model = submission::ActiveModel {
id: Unchanged(self.id),
upper_name: Set(upper.name),
..Default::default()
}
.save(connection)
.update(connection)
.await?;
Ok((
submission::Entity::find()
.filter(submission::Column::Id.eq(self.id))
.one(connection)
.await?
.context("submission not found")?
.into(),
Box::pin(submission.into_video_stream()),
))
let video_stream = if self.use_dynamic_api {
// The dyn must be written explicitly; otherwise rust infers impl and considers the if/else arms to have different types
Box::pin(Dynamic::from(submission).into_video_stream()) as Pin<Box<dyn Stream<Item = _> + Send + 'a>>
} else {
Box::pin(submission.into_video_stream())
};
Ok((updated_model.into(), video_stream))
}
async fn delete_from_db(self, conn: &impl ConnectionTrait) -> Result<()> {
self.delete(conn).await?;
Ok(())
}
}
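The explicit `as Pin<Box<dyn Stream ...>>` cast above is a general pattern: when an `if`/`else` is bound to a variable, the first arm fixes the inferred type, so it must be the trait object for the second arm to coerce. A std-only sketch with iterators standing in for the stream types (the `numbers` helper is illustrative, not part of the codebase):

```rust
// Two branches producing different concrete iterator types only unify
// once the first arm is cast to the trait object; without the explicit
// `as Box<dyn ...>`, rustc infers Box<Map<...>> and rejects the else arm.
fn numbers(doubled: bool) -> Vec<i32> {
    let iter = if doubled {
        Box::new((1..=3).map(|x| x * 2)) as Box<dyn Iterator<Item = i32>>
    } else {
        Box::new(1..=3)
    };
    iter.collect()
}
```

Annotating the `let` binding with the trait-object type would work just as well; the cast on the first arm is simply the more local fix.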


@@ -2,6 +2,7 @@ use std::path::Path;
use std::pin::Pin;
use anyhow::Result;
use bili_sync_entity::rule::Rule;
use bili_sync_entity::*;
use futures::Stream;
use sea_orm::ActiveValue::Set;
@@ -10,7 +11,7 @@ use sea_orm::sea_query::SimpleExpr;
use sea_orm::{DatabaseConnection, Unchanged};
use crate::adapter::{_ActiveModel, VideoSource, VideoSourceEnum};
use crate::bilibili::{BiliClient, VideoInfo, WatchLater};
use crate::bilibili::{BiliClient, Credential, VideoInfo, WatchLater};
impl VideoSource for watch_later::Model {
fn display_name(&self) -> std::borrow::Cow<'static, str> {
@@ -41,15 +42,25 @@ impl VideoSource for watch_later::Model {
})
}
fn rule(&self) -> &Option<Rule> {
&self.rule
}
async fn refresh<'a>(
self,
bili_client: &'a BiliClient,
credential: &'a Credential,
_connection: &'a DatabaseConnection,
) -> Result<(
VideoSourceEnum,
Pin<Box<dyn Stream<Item = Result<VideoInfo>> + Send + 'a>>,
)> {
let watch_later = WatchLater::new(bili_client);
let watch_later = WatchLater::new(bili_client, credential);
Ok((self.into(), Box::pin(watch_later.into_video_stream())))
}
async fn delete_from_db(self, conn: &impl ConnectionTrait) -> Result<()> {
self.delete(conn).await?;
Ok(())
}
}


@@ -1,49 +1,119 @@
use std::borrow::Borrow;
use sea_orm::{ConnectionTrait, DatabaseTransaction};
use bili_sync_entity::video;
use bili_sync_migration::SimpleExpr;
use itertools::Itertools;
use sea_orm::{ColumnTrait, Condition, ConnectionTrait, DatabaseTransaction};
use crate::api::response::{PageInfo, VideoInfo};
use crate::api::request::{StatusFilter, ValidationFilter};
use crate::api::response::{PageInfo, SimplePageInfo, SimpleVideoInfo, VideoInfo};
use crate::utils::status::VideoStatus;
pub async fn update_video_download_status(
impl StatusFilter {
pub fn to_video_query(&self) -> Condition {
let query_builder = VideoStatus::query_builder();
match self {
Self::Failed => query_builder.failed(),
Self::Succeeded => query_builder.succeeded(),
Self::Waiting => query_builder.waiting(),
}
}
}
impl ValidationFilter {
pub fn to_video_query(&self) -> SimpleExpr {
match self {
ValidationFilter::Invalid => video::Column::Valid.eq(false),
ValidationFilter::Skipped => video::Column::Valid
.eq(true)
.and(video::Column::ShouldDownload.eq(false)),
ValidationFilter::Normal => video::Column::Valid
.eq(true)
.and(video::Column::ShouldDownload.eq(true)),
}
}
}
pub trait VideoRecord {
fn as_id_status_tuple(&self) -> (i32, u32);
}
pub trait PageRecord {
fn as_id_status_tuple(&self) -> (i32, u32);
}
impl VideoRecord for VideoInfo {
fn as_id_status_tuple(&self) -> (i32, u32) {
(self.id, self.download_status)
}
}
impl VideoRecord for SimpleVideoInfo {
fn as_id_status_tuple(&self) -> (i32, u32) {
(self.id, self.download_status)
}
}
impl PageRecord for PageInfo {
fn as_id_status_tuple(&self) -> (i32, u32) {
(self.id, self.download_status)
}
}
impl PageRecord for SimplePageInfo {
fn as_id_status_tuple(&self) -> (i32, u32) {
(self.id, self.download_status)
}
}
pub async fn update_video_download_status<T>(
txn: &DatabaseTransaction,
videos: &[impl Borrow<VideoInfo>],
videos: &[impl Borrow<T>],
batch_size: Option<usize>,
) -> Result<(), sea_orm::DbErr> {
) -> Result<(), sea_orm::DbErr>
where
T: VideoRecord,
{
if videos.is_empty() {
return Ok(());
}
let videos = videos.iter().map(|v| v.borrow()).collect::<Vec<_>>();
if let Some(size) = batch_size {
for chunk in videos.chunks(size) {
execute_video_update_batch(txn, chunk).await?;
execute_video_update_batch(txn, chunk.iter().map(|v| v.borrow().as_id_status_tuple())).await?;
}
} else {
execute_video_update_batch(txn, &videos).await?;
execute_video_update_batch(txn, videos.iter().map(|v| v.borrow().as_id_status_tuple())).await?;
}
Ok(())
}
pub async fn update_page_download_status(
pub async fn update_page_download_status<T>(
txn: &DatabaseTransaction,
pages: &[impl Borrow<PageInfo>],
pages: &[impl Borrow<T>],
batch_size: Option<usize>,
) -> Result<(), sea_orm::DbErr> {
) -> Result<(), sea_orm::DbErr>
where
T: PageRecord,
{
if pages.is_empty() {
return Ok(());
}
let pages = pages.iter().map(|v| v.borrow()).collect::<Vec<_>>();
if let Some(size) = batch_size {
for chunk in pages.chunks(size) {
execute_page_update_batch(txn, chunk).await?;
execute_page_update_batch(txn, chunk.iter().map(|v| v.borrow().as_id_status_tuple())).await?;
}
} else {
execute_page_update_batch(txn, &pages).await?;
execute_page_update_batch(txn, pages.iter().map(|v| v.borrow().as_id_status_tuple())).await?;
}
Ok(())
}
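The optional `batch_size` path in both helpers is plain slice chunking: `chunks` hands out at most `size` items per round, with a shorter final chunk, and yields nothing for an empty slice (which is why the empty-input early return is only an optimization). A trivial sketch:

```rust
// Report the batch sizes `chunks` would produce, mirroring the optional
// batch_size handling in the update helpers above.
fn batch_sizes(ids: &[i32], size: usize) -> Vec<usize> {
    ids.chunks(size).map(|c| c.len()).collect()
}
```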
async fn execute_video_update_batch(txn: &DatabaseTransaction, videos: &[&VideoInfo]) -> Result<(), sea_orm::DbErr> {
if videos.is_empty() {
async fn execute_video_update_batch(
txn: &DatabaseTransaction,
videos: impl Iterator<Item = (i32, u32)>,
) -> Result<(), sea_orm::DbErr> {
let values = videos.map(|v| format!("({}, {})", v.0, v.1)).join(", ");
if values.is_empty() {
return Ok(());
}
let sql = format!(
@@ -52,18 +122,18 @@ async fn execute_video_update_batch(txn: &DatabaseTransaction, videos: &[&VideoI
SET download_status = tempdata.download_status \
FROM tempdata \
WHERE video.id = tempdata.id",
videos
.iter()
.map(|v| format!("({}, {})", v.id, v.download_status))
.collect::<Vec<_>>()
.join(", ")
values
);
txn.execute_unprepared(&sql).await?;
Ok(())
}
async fn execute_page_update_batch(txn: &DatabaseTransaction, pages: &[&PageInfo]) -> Result<(), sea_orm::DbErr> {
if pages.is_empty() {
async fn execute_page_update_batch(
txn: &DatabaseTransaction,
pages: impl Iterator<Item = (i32, u32)>,
) -> Result<(), sea_orm::DbErr> {
let values = pages.map(|p| format!("({}, {})", p.0, p.1)).join(", ");
if values.is_empty() {
return Ok(());
}
let sql = format!(
@@ -72,11 +142,7 @@ async fn execute_page_update_batch(txn: &DatabaseTransaction, pages: &[&PageInfo
SET download_status = tempdata.download_status \
FROM tempdata \
WHERE page.id = tempdata.id",
pages
.iter()
.map(|p| format!("({}, {})", p.id, p.download_status))
.collect::<Vec<_>>()
.join(", ")
values
);
txn.execute_unprepared(&sql).await?;
Ok(())

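The rewritten batch helpers reduce each record to an `(id, status)` tuple via a small trait and splice the tuples into a single VALUES list for the `UPDATE ... FROM tempdata` statement. A std-only sketch of the two pieces, assuming the same trait shape as `VideoRecord` (names here are illustrative, and plain `Vec::join` replaces itertools' `Itertools::join`):

```rust
// Records of any shape expose the (id, download_status) pair.
trait Record {
    fn as_id_status_tuple(&self) -> (i32, u32);
}

struct Row {
    id: i32,
    download_status: u32,
}

impl Record for Row {
    fn as_id_status_tuple(&self) -> (i32, u32) {
        (self.id, self.download_status)
    }
}

// Build the "(id, status), (id, status)" fragment that is interpolated
// into the tempdata VALUES clause of the batch UPDATE.
fn values_clause<T: Record>(rows: &[T]) -> String {
    rows.iter()
        .map(|r| {
            let (id, status) = r.as_id_status_tuple();
            format!("({}, {})", id, status)
        })
        .collect::<Vec<_>>()
        .join(", ")
}
```

An empty input yields an empty string, which the real helpers treat as "nothing to do" and short-circuit before issuing any SQL.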

@@ -1,8 +1,25 @@
use serde::Deserialize;
use bili_sync_entity::rule::Rule;
use serde::{Deserialize, Serialize};
use validator::Validate;
use crate::bilibili::CollectionType;
#[derive(Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum StatusFilter {
Failed,
Succeeded,
Waiting,
}
#[derive(Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum ValidationFilter {
Skipped,
Invalid,
Normal,
}
#[derive(Deserialize)]
pub struct VideosRequest {
pub collection: Option<i32>,
@@ -10,12 +27,27 @@ pub struct VideosRequest {
pub submission: Option<i32>,
pub watch_later: Option<i32>,
pub query: Option<String>,
pub status_filter: Option<StatusFilter>,
pub validation_filter: Option<ValidationFilter>,
pub page: Option<u64>,
pub page_size: Option<u64>,
}
#[derive(Deserialize)]
pub struct ResetRequest {
pub struct ResetVideoStatusRequest {
#[serde(default)]
pub force: bool,
}
#[derive(Deserialize)]
pub struct ResetFilteredVideoStatusRequest {
pub collection: Option<i32>,
pub favorite: Option<i32>,
pub submission: Option<i32>,
pub watch_later: Option<i32>,
pub query: Option<String>,
pub status_filter: Option<StatusFilter>,
pub validation_filter: Option<ValidationFilter>,
#[serde(default)]
pub force: bool,
}
@@ -45,6 +77,23 @@ pub struct UpdateVideoStatusRequest {
pub page_updates: Vec<PageStatusUpdate>,
}
#[derive(Deserialize, Validate)]
pub struct UpdateFilteredVideoStatusRequest {
pub collection: Option<i32>,
pub favorite: Option<i32>,
pub submission: Option<i32>,
pub watch_later: Option<i32>,
pub query: Option<String>,
pub status_filter: Option<StatusFilter>,
pub validation_filter: Option<ValidationFilter>,
#[serde(default)]
#[validate(nested)]
pub video_updates: Vec<StatusUpdate>,
#[serde(default)]
#[validate(nested)]
pub page_updates: Vec<StatusUpdate>,
}
#[derive(Deserialize)]
pub struct FollowedCollectionsRequest {
pub page_num: Option<i32>,
@@ -55,6 +104,7 @@ pub struct FollowedCollectionsRequest {
pub struct FollowedUppersRequest {
pub page_num: Option<i32>,
pub page_size: Option<i32>,
pub name: Option<String>,
}
#[derive(Deserialize, Validate)]
@@ -81,14 +131,27 @@ pub struct InsertSubmissionRequest {
pub path: String,
}
#[derive(Deserialize)]
pub struct ImageProxyParams {
pub url: String,
}
#[derive(Deserialize, Validate)]
#[serde(rename_all = "camelCase")]
pub struct UpdateVideoSourceRequest {
#[validate(custom(function = "crate::utils::validation::validate_path"))]
pub path: String,
pub enabled: bool,
pub rule: Option<Rule>,
pub use_dynamic_api: Option<bool>,
}
#[derive(Serialize, Deserialize)]
pub struct DefaultPathRequest {
pub name: String,
}
#[derive(Debug, Deserialize)]
pub struct PollQrcodeRequest {
pub qrcode_key: String,
}
#[derive(Debug, Deserialize)]
pub struct FullSyncVideoSourceRequest {
pub delete_local: bool,
}


@@ -1,7 +1,10 @@
use bili_sync_entity::rule::Rule;
use bili_sync_entity::*;
use sea_orm::prelude::DateTime;
use sea_orm::{DerivePartialModel, FromQueryResult};
use serde::Serialize;
use crate::bilibili::{PollStatus, Qrcode};
use crate::utils::status::{PageStatus, VideoStatus};
#[derive(Serialize)]
@@ -32,7 +35,13 @@ pub struct ResetVideoResponse {
}
#[derive(Serialize)]
pub struct ResetAllVideosResponse {
pub struct ClearAndResetVideoStatusResponse {
pub warning: Option<String>,
pub video: VideoInfo,
}
#[derive(Serialize)]
pub struct ResetFilteredVideosResponse {
pub resetted: bool,
pub resetted_videos_count: usize,
pub resetted_pages_count: usize,
@@ -45,6 +54,13 @@ pub struct UpdateVideoStatusResponse {
pub pages: Vec<PageInfo>,
}
#[derive(Serialize)]
pub struct UpdateFilteredVideoStatusResponse {
pub success: bool,
pub updated_videos_count: usize,
pub updated_pages_count: usize,
}
#[derive(FromQueryResult, Serialize)]
pub struct VideoSource {
pub id: i32,
@@ -58,8 +74,14 @@ pub struct VideoInfo {
pub bvid: String,
pub name: String,
pub upper_name: String,
pub valid: bool,
pub should_download: bool,
#[serde(serialize_with = "serde_video_download_status")]
pub download_status: u32,
pub collection_id: Option<i32>,
pub favorite_id: Option<i32>,
pub submission_id: Option<i32>,
pub watch_later_id: Option<i32>,
}
#[derive(Serialize, DerivePartialModel, FromQueryResult)]
@@ -73,6 +95,21 @@ pub struct PageInfo {
pub download_status: u32,
}
#[derive(Serialize, DerivePartialModel, FromQueryResult, Clone, Copy)]
#[sea_orm(entity = "video::Entity")]
pub struct SimpleVideoInfo {
pub id: i32,
pub download_status: u32,
}
#[derive(Serialize, DerivePartialModel, FromQueryResult, Clone, Copy)]
#[sea_orm(entity = "page::Entity")]
pub struct SimplePageInfo {
pub id: i32,
pub video_id: i32,
pub download_status: u32,
}
fn serde_video_download_status<S>(status: &u32, serializer: S) -> Result<S::Ok, S::Error>
where
S: serde::Serializer,
@@ -90,47 +127,48 @@ where
}
#[derive(Serialize)]
pub struct FavoriteWithSubscriptionStatus {
pub title: String,
pub media_count: i64,
pub fid: i64,
pub mid: i64,
pub subscribed: bool,
}
#[derive(Serialize)]
pub struct CollectionWithSubscriptionStatus {
pub title: String,
pub sid: i64,
pub mid: i64,
pub invalid: bool,
pub subscribed: bool,
}
#[derive(Serialize)]
pub struct UpperWithSubscriptionStatus {
pub mid: i64,
pub uname: String,
pub face: String,
pub sign: String,
pub invalid: bool,
pub subscribed: bool,
#[serde(tag = "type", rename_all = "snake_case")]
pub enum Followed {
Favorite {
title: String,
media_count: i64,
fid: i64,
mid: i64,
invalid: bool,
subscribed: bool,
},
Collection {
title: String,
sid: i64,
mid: i64,
media_count: i64,
invalid: bool,
subscribed: bool,
},
Upper {
mid: i64,
uname: String,
face: String,
sign: String,
invalid: bool,
subscribed: bool,
},
}
#[derive(Serialize)]
pub struct FavoritesResponse {
pub favorites: Vec<FavoriteWithSubscriptionStatus>,
pub favorites: Vec<Followed>,
}
#[derive(Serialize)]
pub struct CollectionsResponse {
pub collections: Vec<CollectionWithSubscriptionStatus>,
pub collections: Vec<Followed>,
pub total: i64,
}
#[derive(Serialize)]
pub struct UppersResponse {
pub uppers: Vec<UpperWithSubscriptionStatus>,
pub uppers: Vec<Followed>,
pub total: i64,
}
@@ -157,8 +195,9 @@ pub struct DashBoardResponse {
pub videos_by_day: Vec<DayCountPair>,
}
#[derive(Serialize)]
#[derive(Serialize, Clone, Copy)]
pub struct SysInfo {
pub timestamp: i64,
pub total_memory: u64,
pub used_memory: u64,
pub process_memory: u64,
@@ -169,9 +208,32 @@ pub struct SysInfo {
}
#[derive(Serialize, FromQueryResult)]
#[serde(rename_all = "camelCase")]
pub struct VideoSourceDetail {
pub id: i32,
pub name: String,
pub path: String,
pub rule: Option<Rule>,
#[serde(default)]
pub rule_display: Option<String>,
#[serde(default)]
pub use_dynamic_api: Option<bool>,
pub enabled: bool,
pub latest_row_at: Option<DateTime>,
}
#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
pub struct UpdateVideoSourceResponse {
pub rule_display: Option<String>,
}
pub type GenerateQrcodeResponse = Qrcode;
pub type PollQrcodeResponse = PollStatus;
#[derive(Serialize)]
pub struct FullSyncVideoSourceResponse {
pub removed_count: usize,
pub warnings: Option<Vec<String>>,
}


@@ -1,23 +1,25 @@
use std::sync::Arc;
use anyhow::Result;
use axum::Router;
use axum::extract::Extension;
use axum::routing::get;
use axum::routing::{get, post};
use axum::{Json, Router};
use sea_orm::DatabaseConnection;
use crate::api::error::InnerApiError;
use crate::api::wrapper::{ApiError, ApiResponse, ValidatedJson};
use crate::bilibili::BiliClient;
use crate::config::{Config, VersionedConfig};
use crate::utils::task_notifier::TASK_STATUS_NOTIFIER;
use crate::notifier::{Message, Notifier};
pub(super) fn router() -> Router {
Router::new().route("/config", get(get_config).put(update_config))
Router::new()
.route("/config", get(get_config).put(update_config))
.route("/config/notifiers/ping", post(ping_notifiers))
}
/// Fetch the global configuration
pub async fn get_config() -> Result<ApiResponse<Arc<Config>>, ApiError> {
Ok(ApiResponse::ok(VersionedConfig::get().load_full()))
Ok(ApiResponse::ok(VersionedConfig::get().snapshot()))
}
/// Update the global configuration
@@ -25,12 +27,24 @@ pub async fn update_config(
Extension(db): Extension<DatabaseConnection>,
ValidatedJson(config): ValidatedJson<Config>,
) -> Result<ApiResponse<Arc<Config>>, ApiError> {
let Some(_lock) = TASK_STATUS_NOTIFIER.detect_running() else {
// A simple guard against possible inconsistencies
return Err(InnerApiError::BadRequest("下载任务正在运行,无法修改配置".to_string()).into());
};
config.check()?;
let new_config = VersionedConfig::get().update(config, &db).await?;
drop(_lock);
Ok(ApiResponse::ok(new_config))
}
pub async fn ping_notifiers(
Extension(bili_client): Extension<Arc<BiliClient>>,
Json(mut notifier): Json<Notifier>,
) -> Result<ApiResponse<()>, ApiError> {
// For webhook-type notifier tests, set the ignore_cache tag to force real-time rendering
if let Notifier::Webhook { ignore_cache, .. } = &mut notifier {
*ignore_cache = Some(());
}
notifier
.notify(bili_client.inner_client(), Message {
message: "This is a test notification from BiliSync.".into(),
image_url: Some("https://socialify.git.ci/amtoaer/bili-sync/image?description=1&font=KoHo&issues=1&language=1&logo=https%3A%2F%2Fs2.loli.net%2F2023%2F12%2F02%2F9EwT2yInOu1d3zm.png&name=1&owner=1&pattern=Signal&pulls=1&stargazers=1&theme=Light".to_owned()),
})
.await?;
Ok(ApiResponse::ok(()))
}


@@ -55,11 +55,11 @@ ORDER BY
))
.all(&db),
)?;
return Ok(ApiResponse::ok(DashBoardResponse {
Ok(ApiResponse::ok(DashBoardResponse {
enabled_favorites,
enabled_collections,
enabled_submissions,
enable_watch_later: enabled_watch_later > 0,
videos_by_day,
}));
}))
}


@@ -0,0 +1,34 @@
use std::sync::Arc;
use anyhow::Result;
use axum::Router;
use axum::extract::{Extension, Query};
use axum::routing::{get, post};
use crate::api::request::PollQrcodeRequest;
use crate::api::response::{GenerateQrcodeResponse, PollQrcodeResponse};
use crate::api::wrapper::{ApiError, ApiResponse};
use crate::bilibili::{BiliClient, Credential};
pub(super) fn router() -> Router {
Router::new()
.route("/login/qrcode/generate", post(generate_qrcode))
.route("/login/qrcode/poll", get(poll_qrcode))
}
/// Generate a QR code for scan-to-login
pub async fn generate_qrcode(
Extension(bili_client): Extension<Arc<BiliClient>>,
) -> Result<ApiResponse<GenerateQrcodeResponse>, ApiError> {
Ok(ApiResponse::ok(Credential::generate_qrcode(&bili_client.client).await?))
}
/// Poll the QR code login status
pub async fn poll_qrcode(
Extension(bili_client): Extension<Arc<BiliClient>>,
Query(params): Query<PollQrcodeRequest>,
) -> Result<ApiResponse<PollQrcodeResponse>, ApiError> {
Ok(ApiResponse::ok(
Credential::poll_qrcode(&bili_client.client, &params.qrcode_key).await?,
))
}


@@ -6,15 +6,14 @@ use axum::Router;
use axum::extract::{Extension, Query};
use axum::routing::get;
use bili_sync_entity::*;
use itertools::{Either, Itertools};
use sea_orm::{ColumnTrait, DatabaseConnection, EntityTrait, QueryFilter, QuerySelect};
use crate::api::request::{FollowedCollectionsRequest, FollowedUppersRequest};
use crate::api::response::{
CollectionWithSubscriptionStatus, CollectionsResponse, FavoriteWithSubscriptionStatus, FavoritesResponse,
UpperWithSubscriptionStatus, UppersResponse,
};
use crate::api::response::{CollectionsResponse, FavoritesResponse, Followed, UppersResponse};
use crate::api::wrapper::{ApiError, ApiResponse};
use crate::bilibili::{BiliClient, Me};
use crate::config::VersionedConfig;
pub(super) fn router() -> Router {
Router::new()
@@ -28,31 +27,33 @@ pub async fn get_created_favorites(
Extension(db): Extension<DatabaseConnection>,
Extension(bili_client): Extension<Arc<BiliClient>>,
) -> Result<ApiResponse<FavoritesResponse>, ApiError> {
let me = Me::new(bili_client.as_ref());
let credential = &VersionedConfig::get().read().credential;
let me = Me::new(bili_client.as_ref(), credential);
let bili_favorites = me.get_created_favorites().await?;
let favorites = if let Some(bili_favorites) = bili_favorites {
// The so-called "fid" used by Bilibili's favorite-list APIs is actually the id here, i.e. fid with the last two digits of mid appended
let bili_fids: Vec<_> = bili_favorites.iter().map(|fav| fav.id).collect();
let subscribed_fids: Vec<i64> = favorite::Entity::find()
let subscribed_fids: HashSet<i64> = favorite::Entity::find()
.select_only()
.column(favorite::Column::FId)
.filter(favorite::Column::FId.is_in(bili_fids))
.into_tuple()
.all(&db)
.await?;
let subscribed_set: HashSet<i64> = subscribed_fids.into_iter().collect();
.await?
.into_iter()
.collect();
bili_favorites
.into_iter()
.map(|fav| FavoriteWithSubscriptionStatus {
.map(|fav| Followed::Favorite {
title: fav.title,
media_count: fav.media_count,
// The id returned by the API is the real fid
fid: fav.id,
mid: fav.mid,
subscribed: subscribed_set.contains(&fav.id),
invalid: false,
subscribed: subscribed_fids.contains(&fav.id),
})
.collect()
} else {
@@ -62,36 +63,75 @@ pub async fn get_created_favorites(
Ok(ApiResponse::ok(FavoritesResponse { favorites }))
}
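The `Vec<i64>` → `HashSet<i64>` change above turns each `subscribed` membership check into an O(1) lookup instead of a linear scan. A std-only sketch of the lookup step (the `mark_subscribed` helper is illustrative):

```rust
use std::collections::HashSet;

// Pair each fetched id with whether it already exists locally,
// using a HashSet so every membership test is O(1).
fn mark_subscribed(fetched: &[i64], local: &[i64]) -> Vec<(i64, bool)> {
    let local: HashSet<i64> = local.iter().copied().collect();
    fetched.iter().map(|&id| (id, local.contains(&id))).collect()
}
```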
/// Fetch the collections followed by the current user
/// Fetch the collections/favorites followed by the current user
pub async fn get_followed_collections(
Extension(db): Extension<DatabaseConnection>,
Extension(bili_client): Extension<Arc<BiliClient>>,
Query(params): Query<FollowedCollectionsRequest>,
) -> Result<ApiResponse<CollectionsResponse>, ApiError> {
let me = Me::new(bili_client.as_ref());
let credential = &VersionedConfig::get().read().credential;
let me = Me::new(bili_client.as_ref(), credential);
let (page_num, page_size) = (params.page_num.unwrap_or(1), params.page_size.unwrap_or(50));
let bili_collections = me.get_followed_collections(page_num, page_size).await?;
let collections = if let Some(collection_list) = bili_collections.list {
let bili_sids: Vec<_> = collection_list.iter().map(|col| col.id).collect();
let subscribed_ids: Vec<i64> = collection::Entity::find()
.select_only()
.column(collection::Column::SId)
.filter(collection::Column::SId.is_in(bili_sids))
.into_tuple()
.all(&db)
.await?;
let subscribed_set: HashSet<i64> = subscribed_ids.into_iter().collect();
// Entries in collection_list may be either collections or favorites and need to be told apart
// The most reliable distinction observed so far is that a collection's fid is 0
let (bili_fids, bili_sids): (Vec<_>, Vec<_>) = collection_list.iter().partition_map(|col| {
if col.fid != 0 {
Either::Left(col.id)
} else {
Either::Right(col.id)
}
});
let (subscribed_fids, subscribed_sids): (HashSet<i64>, HashSet<i64>) = tokio::try_join!(
async {
Result::<_, anyhow::Error>::Ok(
favorite::Entity::find()
.select_only()
.column(favorite::Column::FId)
.filter(favorite::Column::FId.is_in(bili_fids))
.into_tuple()
.all(&db)
.await?
.into_iter()
.collect(),
)
},
async {
Ok(collection::Entity::find()
.select_only()
.column(collection::Column::SId)
.filter(collection::Column::SId.is_in(bili_sids))
.into_tuple()
.all(&db)
.await?
.into_iter()
.collect())
}
)?;
collection_list
.into_iter()
.map(|col| CollectionWithSubscriptionStatus {
title: col.title,
sid: col.id,
mid: col.mid,
invalid: col.state == 1,
subscribed: subscribed_set.contains(&col.id),
.map(|col| {
if col.fid != 0 {
Followed::Favorite {
title: col.title,
media_count: col.media_count,
fid: col.id,
mid: col.mid,
invalid: col.state == 1,
subscribed: subscribed_fids.contains(&col.id),
}
} else {
Followed::Collection {
title: col.title,
sid: col.id,
mid: col.mid,
media_count: col.media_count,
invalid: col.state == 1,
subscribed: subscribed_sids.contains(&col.id),
}
}
})
.collect()
} else {
@@ -110,9 +150,12 @@ pub async fn get_followed_uppers(
Extension(bili_client): Extension<Arc<BiliClient>>,
Query(params): Query<FollowedUppersRequest>,
) -> Result<ApiResponse<UppersResponse>, ApiError> {
let me = Me::new(bili_client.as_ref());
let credential = &VersionedConfig::get().read().credential;
let me = Me::new(bili_client.as_ref(), credential);
let (page_num, page_size) = (params.page_num.unwrap_or(1), params.page_size.unwrap_or(20));
let bili_uppers = me.get_followed_uppers(page_num, page_size).await?;
let bili_uppers = me
.get_followed_uppers(page_num, page_size, params.name.as_deref())
.await?;
let bili_uid: Vec<_> = bili_uppers.list.iter().map(|upper| upper.mid).collect();
@@ -128,7 +171,7 @@ pub async fn get_followed_uppers(
let uppers = bili_uppers
.list
.into_iter()
.map(|upper| UpperWithSubscriptionStatus {
.map(|upper| Followed::Upper {
mid: upper.mid,
// The API provides no official field, but this simple heuristic works
invalid: upper.uname == "账号已注销" && upper.face == "https://i0.hdslb.com/bfs/face/member/noface.jpg",

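The favorite/collection split above hinges on a single discriminator (`fid != 0`). The same partition can be sketched with std's `Iterator::partition` in place of itertools' `partition_map` (the `Entry` struct and `split_ids` helper are illustrative stand-ins for the API response items):

```rust
struct Entry {
    id: i64,
    fid: i64,
}

// Split mixed entries into favorite ids (fid != 0) and collection ids
// (fid == 0), the observed discriminator between the two kinds.
fn split_ids(entries: &[Entry]) -> (Vec<i64>, Vec<i64>) {
    let (favs, cols): (Vec<&Entry>, Vec<&Entry>) = entries.iter().partition(|e| e.fid != 0);
    (
        favs.into_iter().map(|e| e.id).collect(),
        cols.into_iter().map(|e| e.id).collect(),
    )
}
```

`partition_map` avoids the intermediate `Vec<&Entry>` allocations by mapping each element to `Either` while splitting, which is why the diff reaches for itertools here.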

@@ -1,25 +1,20 @@
use std::collections::HashSet;
use std::sync::Arc;
use axum::body::Body;
use axum::extract::{Extension, Query, Request};
use axum::extract::Request;
use axum::http::HeaderMap;
use axum::middleware::Next;
use axum::response::{IntoResponse, Response};
use axum::routing::get;
use axum::{Router, middleware};
use base64::Engine;
use base64::prelude::BASE64_URL_SAFE_NO_PAD;
use reqwest::{Method, StatusCode, header};
use reqwest::StatusCode;
use super::request::ImageProxyParams;
use crate::api::wrapper::ApiResponse;
use crate::bilibili::BiliClient;
use crate::config::VersionedConfig;
mod config;
mod dashboard;
mod login;
mod me;
mod task;
mod video_sources;
mod videos;
mod ws;
@@ -27,21 +22,23 @@ mod ws;
pub use ws::{LogHelper, MAX_HISTORY_LOGS};
pub fn router() -> Router {
Router::new().route("/image-proxy", get(image_proxy)).nest(
Router::new().nest(
"/api",
config::router()
.merge(me::router())
.merge(login::router())
.merge(video_sources::router())
.merge(videos::router())
.merge(dashboard::router())
.merge(ws::router())
.merge(task::router())
.layer(middleware::from_fn(auth)),
)
}
/// Middleware: authenticate requests with the auth token
pub async fn auth(mut headers: HeaderMap, request: Request, next: Next) -> Result<Response, StatusCode> {
let config = VersionedConfig::get().load();
let config = VersionedConfig::get().read();
let token = config.auth_token.as_str();
if headers
.get("Authorization")
@@ -50,57 +47,16 @@ pub async fn auth(mut headers: HeaderMap, request: Request, next: Next) -> Resul
{
return Ok(next.run(request).await);
}
if let Some(protocol) = headers.remove("Sec-WebSocket-Protocol") {
if protocol
if let Some(protocol) = headers.remove("Sec-WebSocket-Protocol")
&& protocol
.to_str()
.ok()
.and_then(|s| BASE64_URL_SAFE_NO_PAD.decode(s).ok())
.is_some_and(|s| s == token.as_bytes())
{
let mut resp = next.run(request).await;
resp.headers_mut().insert("Sec-WebSocket-Protocol", protocol);
return Ok(resp);
}
{
let mut resp = next.run(request).await;
resp.headers_mut().insert("Sec-WebSocket-Protocol", protocol);
return Ok(resp);
}
Ok(ApiResponse::<()>::unauthorized("auth token does not match").into_response())
}
/// Bilibili's image servers check the referer, so proxy the request with a forged one; otherwise they return 403 outright
pub async fn image_proxy(
Extension(bili_client): Extension<Arc<BiliClient>>,
Query(params): Query<ImageProxyParams>,
) -> Response {
let resp = bili_client.client.request(Method::GET, &params.url, None).send().await;
let whitelist = [
header::CONTENT_TYPE,
header::CONTENT_LENGTH,
header::CACHE_CONTROL,
header::EXPIRES,
header::LAST_MODIFIED,
header::ETAG,
header::CONTENT_DISPOSITION,
header::CONTENT_ENCODING,
header::ACCEPT_RANGES,
header::ACCESS_CONTROL_ALLOW_ORIGIN,
]
.into_iter()
.collect::<HashSet<_>>();
let builder = Response::builder();
let response = match resp {
Err(e) => builder.status(StatusCode::BAD_GATEWAY).body(Body::new(e.to_string())),
Ok(res) => {
let mut response = builder.status(res.status());
for (k, v) in res.headers() {
if whitelist.contains(k) {
response = response.header(k, v);
}
}
let streams = res.bytes_stream();
response.body(Body::from_stream(streams))
}
};
// Safety: all previously configured headers are taken from a valid response, ensuring the response is safe to build
response.unwrap()
}


@@ -0,0 +1,15 @@
use anyhow::Result;
use axum::Router;
use axum::routing::post;
use crate::api::wrapper::{ApiError, ApiResponse};
use crate::task::DownloadTaskManager;
pub(super) fn router() -> Router {
Router::new().route("/task/download", post(new_download_task))
}
pub async fn new_download_task() -> Result<ApiResponse<bool>, ApiError> {
DownloadTaskManager::get().download_once().await?;
Ok(ApiResponse::ok(true))
}


@@ -1,28 +1,49 @@
use std::collections::HashSet;
use std::sync::Arc;
use anyhow::Result;
use axum::Router;
use axum::extract::{Extension, Path};
use anyhow::{Context, Result};
use axum::extract::{Extension, Path, Query};
use axum::routing::{get, post, put};
use axum::{Json, Router};
use bili_sync_entity::rule::Rule;
use bili_sync_entity::*;
use bili_sync_migration::Expr;
use futures::stream::FuturesUnordered;
use futures::{StreamExt, TryStreamExt};
use itertools::Itertools;
use sea_orm::ActiveValue::Set;
use sea_orm::{DatabaseConnection, EntityTrait, QuerySelect};
use sea_orm::entity::prelude::*;
use sea_orm::{ColumnTrait, DatabaseConnection, EntityTrait, QuerySelect, QueryTrait, TransactionTrait};
use crate::adapter::_ActiveModel;
use crate::adapter::{_ActiveModel, VideoSource as _, VideoSourceEnum};
use crate::api::error::InnerApiError;
use crate::api::request::{
InsertCollectionRequest, InsertFavoriteRequest, InsertSubmissionRequest, UpdateVideoSourceRequest,
DefaultPathRequest, FullSyncVideoSourceRequest, InsertCollectionRequest, InsertFavoriteRequest,
InsertSubmissionRequest, UpdateVideoSourceRequest,
};
use crate::api::response::{
FullSyncVideoSourceResponse, UpdateVideoSourceResponse, VideoSource, VideoSourceDetail,
VideoSourcesDetailsResponse, VideoSourcesResponse,
};
use crate::api::response::{VideoSource, VideoSourceDetail, VideoSourcesDetailsResponse, VideoSourcesResponse};
use crate::api::wrapper::{ApiError, ApiResponse, ValidatedJson};
use crate::bilibili::{BiliClient, Collection, CollectionItem, FavoriteList, Submission};
use crate::config::{PathSafeTemplate, TEMPLATE, VersionedConfig};
use crate::utils::rule::FieldEvaluatable;
pub(super) fn router() -> Router {
Router::new()
.route("/video-sources", get(get_video_sources))
.route("/video-sources/details", get(get_video_sources_details))
.route("/video-sources/{type}/{id}", put(update_video_source))
.route(
"/video-sources/{type}/default-path",
get(get_video_sources_default_path),
) // Only used by the frontend to fetch the default path
.route(
"/video-sources/{type}/{id}",
put(update_video_source).delete(remove_video_source),
)
.route("/video-sources/{type}/{id}/evaluate", post(evaluate_video_source))
.route("/video-sources/{type}/{id}/full-sync", post(full_sync_video_source))
.route("/video-sources/favorites", post(insert_favorite))
.route("/video-sources/collections", post(insert_collection))
.route("/video-sources/submissions", post(insert_submission))
@@ -75,14 +96,16 @@ pub async fn get_video_sources(
pub async fn get_video_sources_details(
Extension(db): Extension<DatabaseConnection>,
) -> Result<ApiResponse<VideoSourcesDetailsResponse>, ApiError> {
let (collections, favorites, submissions, mut watch_later) = tokio::try_join!(
let (mut collections, mut favorites, mut submissions, mut watch_later) = tokio::try_join!(
collection::Entity::find()
.select_only()
.columns([
collection::Column::Id,
collection::Column::Name,
collection::Column::Path,
collection::Column::Enabled
collection::Column::Rule,
collection::Column::Enabled,
collection::Column::LatestRowAt
])
.into_model::<VideoSourceDetail>()
.all(&db),
@@ -92,22 +115,35 @@ pub async fn get_video_sources_details(
favorite::Column::Id,
favorite::Column::Name,
favorite::Column::Path,
favorite::Column::Enabled
favorite::Column::Rule,
favorite::Column::Enabled,
favorite::Column::LatestRowAt
])
.into_model::<VideoSourceDetail>()
.all(&db),
submission::Entity::find()
.select_only()
.column(submission::Column::Id)
.column_as(submission::Column::UpperName, "name")
.columns([submission::Column::Path, submission::Column::Enabled])
.columns([
submission::Column::Id,
submission::Column::Path,
submission::Column::Enabled,
submission::Column::Rule,
submission::Column::UseDynamicApi,
submission::Column::LatestRowAt
])
.into_model::<VideoSourceDetail>()
.all(&db),
watch_later::Entity::find()
.select_only()
.column(watch_later::Column::Id)
.column_as(Expr::value("稍后再看"), "name")
.columns([watch_later::Column::Path, watch_later::Column::Enabled])
.columns([
watch_later::Column::Id,
watch_later::Column::Path,
watch_later::Column::Enabled,
watch_later::Column::Rule,
watch_later::Column::LatestRowAt
])
.into_model::<VideoSourceDetail>()
.all(&db)
)?;
@@ -116,9 +152,21 @@ pub async fn get_video_sources_details(
id: 1,
name: "稍后再看".to_string(),
path: String::new(),
rule: None,
rule_display: None,
use_dynamic_api: None,
enabled: false,
latest_row_at: None,
})
}
for sources in [&mut collections, &mut favorites, &mut submissions, &mut watch_later] {
sources.iter_mut().for_each(|item| {
if let Some(rule) = &item.rule {
item.rule_display = Some(rule.to_string());
}
item.latest_row_at = item.latest_row_at.filter(|dt| dt.and_utc().timestamp() != 0);
});
}
Ok(ApiResponse::ok(VideoSourcesDetailsResponse {
collections,
favorites,
@@ -127,29 +175,52 @@ pub async fn get_video_sources_details(
}))
}
pub async fn get_video_sources_default_path(
Path(source_type): Path<String>,
Query(params): Query<DefaultPathRequest>,
) -> Result<ApiResponse<String>, ApiError> {
let template_name = match source_type.as_str() {
"favorites" => "favorite_default_path",
"collections" => "collection_default_path",
"submissions" => "submission_default_path",
_ => return Err(InnerApiError::BadRequest("Invalid video source type".to_string()).into()),
};
let template = TEMPLATE.read();
Ok(ApiResponse::ok(
template.path_safe_render(template_name, &serde_json::to_value(params)?)?,
))
}
/// Update a video source
pub async fn update_video_source(
Path((source_type, id)): Path<(String, i32)>,
Extension(db): Extension<DatabaseConnection>,
ValidatedJson(request): ValidatedJson<UpdateVideoSourceRequest>,
) -> Result<ApiResponse<bool>, ApiError> {
) -> Result<ApiResponse<UpdateVideoSourceResponse>, ApiError> {
let rule_display = request.rule.as_ref().map(|rule| rule.to_string());
let active_model = match source_type.as_str() {
"collections" => collection::Entity::find_by_id(id).one(&db).await?.map(|model| {
let mut active_model: collection::ActiveModel = model.into();
active_model.path = Set(request.path);
active_model.enabled = Set(request.enabled);
active_model.rule = Set(request.rule);
_ActiveModel::Collection(active_model)
}),
"favorites" => favorite::Entity::find_by_id(id).one(&db).await?.map(|model| {
let mut active_model: favorite::ActiveModel = model.into();
active_model.path = Set(request.path);
active_model.enabled = Set(request.enabled);
active_model.rule = Set(request.rule);
_ActiveModel::Favorite(active_model)
}),
"submissions" => submission::Entity::find_by_id(id).one(&db).await?.map(|model| {
let mut active_model: submission::ActiveModel = model.into();
active_model.path = Set(request.path);
active_model.enabled = Set(request.enabled);
active_model.rule = Set(request.rule);
if let Some(use_dynamic_api) = request.use_dynamic_api {
active_model.use_dynamic_api = Set(use_dynamic_api);
}
_ActiveModel::Submission(active_model)
}),
"watch_later" => match watch_later::Entity::find_by_id(id).one(&db).await? {
@@ -160,6 +231,7 @@ pub async fn update_video_source(
let mut active_model: watch_later::ActiveModel = model.into();
active_model.path = Set(request.path);
active_model.enabled = Set(request.enabled);
active_model.rule = Set(request.rule);
Some(_ActiveModel::WatchLater(active_model))
}
None => {
@@ -170,6 +242,7 @@ pub async fn update_video_source(
Some(_ActiveModel::WatchLater(watch_later::ActiveModel {
path: Set(request.path),
enabled: Set(request.enabled),
rule: Set(request.rule),
..Default::default()
}))
}
@@ -181,22 +254,213 @@ pub async fn update_video_source(
return Err(InnerApiError::NotFound(id).into());
};
active_model.save(&db).await?;
Ok(ApiResponse::ok(UpdateVideoSourceResponse { rule_display }))
}
pub async fn remove_video_source(
Path((source_type, id)): Path<(String, i32)>,
Extension(db): Extension<DatabaseConnection>,
) -> Result<ApiResponse<bool>, ApiError> {
// The watch-later list cannot be removed
let video_source: Option<VideoSourceEnum> = match source_type.as_str() {
"collections" => collection::Entity::find_by_id(id).one(&db).await?.map(Into::into),
"favorites" => favorite::Entity::find_by_id(id).one(&db).await?.map(Into::into),
"submissions" => submission::Entity::find_by_id(id).one(&db).await?.map(Into::into),
_ => return Err(InnerApiError::BadRequest("Invalid video source type".to_string()).into()),
};
let Some(video_source) = video_source else {
return Err(InnerApiError::NotFound(id).into());
};
let txn = db.begin().await?;
page::Entity::delete_many()
.filter(
page::Column::VideoId.in_subquery(
video::Entity::find()
.filter(video_source.filter_expr())
.select_only()
.column(video::Column::Id)
.as_query()
.to_owned(),
),
)
.exec(&txn)
.await?;
video::Entity::delete_many()
.filter(video_source.filter_expr())
.exec(&txn)
.await?;
video_source.delete_from_db(&txn).await?;
txn.commit().await?;
Ok(ApiResponse::ok(true))
}
pub async fn evaluate_video_source(
Path((source_type, id)): Path<(String, i32)>,
Extension(db): Extension<DatabaseConnection>,
) -> Result<ApiResponse<bool>, ApiError> {
// Look up the rule and the video filter condition for the given source
let (rule, filter_condition) = match source_type.as_str() {
"collections" => (
collection::Entity::find_by_id(id)
.select_only()
.column(collection::Column::Rule)
.into_tuple::<Option<Rule>>()
.one(&db)
.await?
.and_then(|r| r),
video::Column::CollectionId.eq(id),
),
"favorites" => (
favorite::Entity::find_by_id(id)
.select_only()
.column(favorite::Column::Rule)
.into_tuple::<Option<Rule>>()
.one(&db)
.await?
.and_then(|r| r),
video::Column::FavoriteId.eq(id),
),
"submissions" => (
submission::Entity::find_by_id(id)
.select_only()
.column(submission::Column::Rule)
.into_tuple::<Option<Rule>>()
.one(&db)
.await?
.and_then(|r| r),
video::Column::SubmissionId.eq(id),
),
"watch_later" => (
watch_later::Entity::find_by_id(id)
.select_only()
.column(watch_later::Column::Rule)
.into_tuple::<Option<Rule>>()
.one(&db)
.await?
.and_then(|r| r),
video::Column::WatchLaterId.eq(id),
),
_ => return Err(InnerApiError::BadRequest("Invalid video source type".to_string()).into()),
};
let videos: Vec<(video::Model, Vec<page::Model>)> = video::Entity::find()
.filter(filter_condition)
.find_with_related(page::Entity)
.all(&db)
.await?;
let video_should_download_pairs = videos
.into_iter()
.map(|(video, pages)| (video.id, rule.evaluate_model(&video, &pages)))
.collect::<Vec<(i32, bool)>>();
let txn = db.begin().await?;
for chunk in video_should_download_pairs.chunks(500) {
let sql = format!(
"WITH tempdata(id, should_download) AS (VALUES {}) \
UPDATE video \
SET should_download = tempdata.should_download \
FROM tempdata \
WHERE video.id = tempdata.id",
chunk.iter().map(|item| format!("({}, {})", item.0, item.1)).join(", ")
);
txn.execute_unprepared(&sql).await?;
}
txn.commit().await?;
Ok(ApiResponse::ok(true))
}
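The chunked update above rewrites `should_download` for many rows in one statement by joining against an inline `VALUES` table. A minimal sketch of that SQL-building step, using only the standard library (`bulk_update_sql` is a hypothetical helper, not part of the crate):

```rust
// Build a single UPDATE ... FROM (VALUES ...) statement for a chunk of
// (id, should_download) pairs, mirroring the handler's batched write.
fn bulk_update_sql(pairs: &[(i32, bool)]) -> String {
    let values = pairs
        .iter()
        .map(|(id, flag)| format!("({}, {})", id, flag))
        .collect::<Vec<_>>()
        .join(", ");
    format!(
        "WITH tempdata(id, should_download) AS (VALUES {values}) \
         UPDATE video \
         SET should_download = tempdata.should_download \
         FROM tempdata \
         WHERE video.id = tempdata.id"
    )
}

fn main() {
    // One statement covers the whole chunk instead of one UPDATE per row.
    println!("{}", bulk_update_sql(&[(1, true), (2, false)]));
}
```

Chunking at 500 pairs keeps each statement well under typical SQL length and parameter limits while still amortizing round-trips.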
pub async fn full_sync_video_source(
Path((source_type, id)): Path<(String, i32)>,
Extension(db): Extension<DatabaseConnection>,
Extension(bili_client): Extension<Arc<BiliClient>>,
Json(request): Json<FullSyncVideoSourceRequest>,
) -> Result<ApiResponse<FullSyncVideoSourceResponse>, ApiError> {
let video_source: Option<VideoSourceEnum> = match source_type.as_str() {
"collections" => collection::Entity::find_by_id(id).one(&db).await?.map(Into::into),
"favorites" => favorite::Entity::find_by_id(id).one(&db).await?.map(Into::into),
"submissions" => submission::Entity::find_by_id(id).one(&db).await?.map(Into::into),
"watch_later" => watch_later::Entity::find_by_id(id).one(&db).await?.map(Into::into),
_ => return Err(InnerApiError::BadRequest("Invalid video source type".to_string()).into()),
};
let Some(video_source) = video_source else {
return Err(InnerApiError::NotFound(id).into());
};
let credential = &VersionedConfig::get().read().credential;
let filter_expr = video_source.filter_expr();
let (_, video_streams) = video_source.refresh(&bili_client, credential, &db).await?;
let all_videos = video_streams
.try_collect::<Vec<_>>()
.await
.context("failed to read all videos from video stream")?;
let all_bvids = all_videos.into_iter().map(|v| v.bvid_owned()).collect::<HashSet<_>>();
let videos_to_remove = video::Entity::find()
.filter(video::Column::Bvid.is_not_in(all_bvids).and(filter_expr))
.select_only()
.columns([video::Column::Id, video::Column::Path])
.into_tuple::<(i32, String)>()
.all(&db)
.await?;
if videos_to_remove.is_empty() {
return Ok(ApiResponse::ok(FullSyncVideoSourceResponse {
removed_count: 0,
warnings: None,
}));
}
let remove_count = videos_to_remove.len();
let (video_ids, video_paths): (Vec<i32>, Vec<String>) = videos_to_remove.into_iter().unzip();
let txn = db.begin().await?;
page::Entity::delete_many()
.filter(page::Column::VideoId.is_in(video_ids.iter().copied()))
.exec(&txn)
.await?;
video::Entity::delete_many()
.filter(video::Column::Id.is_in(video_ids))
.exec(&txn)
.await?;
txn.commit().await?;
let warnings = if request.delete_local {
let tasks = video_paths
.into_iter()
.filter_map(|path| {
if path.is_empty() {
None
} else {
Some(async move {
tokio::fs::remove_dir_all(&path)
.await
.with_context(|| format!("failed to remove {path}"))?;
Result::<_, anyhow::Error>::Ok(())
})
}
})
.collect::<FuturesUnordered<_>>();
Some(
tasks
.filter_map(|res| futures::future::ready(res.err().map(|e| format!("{:#}", e))))
.collect::<Vec<_>>()
.await,
)
} else {
None
};
Ok(ApiResponse::ok(FullSyncVideoSourceResponse {
removed_count: remove_count,
warnings,
}))
}
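The local-file cleanup above deliberately collects failures as warnings instead of failing the whole request on the first error. A simplified synchronous sketch of that pattern (the real handler runs the removals concurrently with `FuturesUnordered`; `remove_all` is a hypothetical helper):

```rust
use std::fs;

// Attempt every removal, skip empty paths (as the handler does), and keep
// only the error messages so the caller can report partial failures.
fn remove_all(paths: &[&str]) -> Vec<String> {
    paths
        .iter()
        .filter(|p| !p.is_empty())
        .filter_map(|p| {
            fs::remove_dir_all(p)
                .err()
                .map(|e| format!("failed to remove {p}: {e}"))
        })
        .collect()
}
```

Returning the warnings alongside `removed_count` lets the frontend show which directories could not be deleted without aborting the database cleanup that already succeeded.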
/// Subscribe to a new favorite list
pub async fn insert_favorite(
Extension(db): Extension<DatabaseConnection>,
Extension(bili_client): Extension<Arc<BiliClient>>,
ValidatedJson(request): ValidatedJson<InsertFavoriteRequest>,
) -> Result<ApiResponse<bool>, ApiError> {
let favorite = FavoriteList::new(bili_client.as_ref(), request.fid.to_string());
let credential = &VersionedConfig::get().read().credential;
let favorite = FavoriteList::new(bili_client.as_ref(), request.fid.to_string(), credential);
let favorite_info = favorite.get_info().await?;
favorite::Entity::insert(favorite::ActiveModel {
f_id: Set(favorite_info.id),
name: Set(favorite_info.title.clone()),
path: Set(request.path),
enabled: Set(true),
enabled: Set(false),
..Default::default()
})
.exec(&db)
@@ -210,6 +474,7 @@ pub async fn insert_collection(
Extension(bili_client): Extension<Arc<BiliClient>>,
ValidatedJson(request): ValidatedJson<InsertCollectionRequest>,
) -> Result<ApiResponse<bool>, ApiError> {
let credential = &VersionedConfig::get().read().credential;
let collection = Collection::new(
bili_client.as_ref(),
CollectionItem {
@@ -217,6 +482,7 @@ pub async fn insert_collection(
mid: request.mid.to_string(),
collection_type: request.collection_type,
},
credential,
);
let collection_info = collection.get_info().await?;
collection::Entity::insert(collection::ActiveModel {
@@ -225,7 +491,7 @@ pub async fn insert_collection(
r#type: Set(collection_info.collection_type.into()),
name: Set(collection_info.name.clone()),
path: Set(request.path),
enabled: Set(true),
enabled: Set(false),
..Default::default()
})
.exec(&db)
@@ -240,13 +506,14 @@ pub async fn insert_submission(
Extension(bili_client): Extension<Arc<BiliClient>>,
ValidatedJson(request): ValidatedJson<InsertSubmissionRequest>,
) -> Result<ApiResponse<bool>, ApiError> {
let submission = Submission::new(bili_client.as_ref(), request.upper_id.to_string());
let credential = &VersionedConfig::get().read().credential;
let submission = Submission::new(bili_client.as_ref(), request.upper_id.to_string(), credential);
let upper = submission.get_info().await?;
submission::Entity::insert(submission::ActiveModel {
upper_id: Set(upper.mid.parse()?),
upper_name: Set(upper.name),
path: Set(request.path),
enabled: Set(true),
enabled: Set(false),
..Default::default()
})
.exec(&db)


@@ -1,19 +1,25 @@
use std::collections::HashSet;
use anyhow::Result;
use anyhow::{Context, Result};
use axum::extract::{Extension, Path, Query};
use axum::routing::{get, post};
use axum::{Json, Router};
use bili_sync_entity::*;
use sea_orm::ActiveValue::Set;
use sea_orm::{
ColumnTrait, DatabaseConnection, EntityTrait, PaginatorTrait, QueryFilter, QueryOrder, TransactionTrait,
ActiveModelTrait, ColumnTrait, DatabaseConnection, EntityTrait, IntoActiveModel, PaginatorTrait, QueryFilter,
QueryOrder, TransactionTrait, TryIntoModel,
};
use crate::api::error::InnerApiError;
use crate::api::helper::{update_page_download_status, update_video_download_status};
use crate::api::request::{ResetRequest, UpdateVideoStatusRequest, VideosRequest};
use crate::api::request::{
ResetFilteredVideoStatusRequest, ResetVideoStatusRequest, UpdateFilteredVideoStatusRequest,
UpdateVideoStatusRequest, VideosRequest,
};
use crate::api::response::{
PageInfo, ResetAllVideosResponse, ResetVideoResponse, UpdateVideoStatusResponse, VideoInfo, VideoResponse,
ClearAndResetVideoStatusResponse, PageInfo, ResetFilteredVideosResponse, ResetVideoResponse, SimplePageInfo,
SimpleVideoInfo, UpdateFilteredVideoStatusResponse, UpdateVideoStatusResponse, VideoInfo, VideoResponse,
VideosResponse,
};
use crate::api::wrapper::{ApiError, ApiResponse, ValidatedJson};
@@ -23,9 +29,14 @@ pub(super) fn router() -> Router {
Router::new()
.route("/videos", get(get_videos))
.route("/videos/{id}", get(get_video))
.route("/videos/{id}/reset", post(reset_video))
.route("/videos/reset-all", post(reset_all_videos))
.route(
"/videos/{id}/clear-and-reset-status",
post(clear_and_reset_video_status),
)
.route("/videos/{id}/reset-status", post(reset_video_status))
.route("/videos/{id}/update-status", post(update_video_status))
.route("/videos/reset-status", post(reset_filtered_video_status))
.route("/videos/update-status", post(update_filtered_video_status))
}
/// List basic video info; supports filtering by source, name search, and pagination
@@ -45,7 +56,17 @@ pub async fn get_videos(
}
}
if let Some(query_word) = params.query {
query = query.filter(video::Column::Name.contains(query_word));
query = query.filter(
video::Column::Name
.contains(&query_word)
.or(video::Column::Bvid.contains(query_word)),
);
}
if let Some(status_filter) = params.status_filter {
query = query.filter(status_filter.to_video_query());
}
if let Some(validation_filter) = params.validation_filter {
query = query.filter(validation_filter.to_video_query());
}
let total_count = query.clone().count(&db).await?;
let (page, page_size) = if let (Some(page), Some(page_size)) = (params.page, params.page_size) {
@@ -85,10 +106,10 @@ pub async fn get_video(
}))
}
pub async fn reset_video(
pub async fn reset_video_status(
Path(id): Path<i32>,
Extension(db): Extension<DatabaseConnection>,
Json(request): Json<ResetRequest>,
Json(request): Json<ResetVideoStatusRequest>,
) -> Result<ApiResponse<ResetVideoResponse>, ApiError> {
let (video_info, pages_info) = tokio::try_join!(
video::Entity::find_by_id(id).into_partial_model::<VideoInfo>().one(&db),
@@ -130,7 +151,7 @@ pub async fn reset_video(
let txn = db.begin().await?;
if !resetted_videos_info.is_empty() {
// At most one element is possible, so no batching is needed
update_video_download_status(&txn, &resetted_videos_info, None).await?;
update_video_download_status::<VideoInfo>(&txn, &resetted_videos_info, None).await?;
}
if !resetted_pages_info.is_empty() {
update_page_download_status(&txn, &resetted_pages_info, Some(500)).await?;
@@ -144,15 +165,87 @@ pub async fn reset_video(
}))
}
pub async fn reset_all_videos(
pub async fn clear_and_reset_video_status(
Path(id): Path<i32>,
Extension(db): Extension<DatabaseConnection>,
Json(request): Json<ResetRequest>,
) -> Result<ApiResponse<ResetAllVideosResponse>, ApiError> {
// First query all video and page data
let (all_videos, all_pages) = tokio::try_join!(
video::Entity::find().into_partial_model::<VideoInfo>().all(&db),
page::Entity::find().into_partial_model::<PageInfo>().all(&db)
)?;
) -> Result<ApiResponse<ClearAndResetVideoStatusResponse>, ApiError> {
let video_info = video::Entity::find_by_id(id).one(&db).await?;
let Some(video_info) = video_info else {
return Err(InnerApiError::NotFound(id).into());
};
let txn = db.begin().await?;
let mut video_info = video_info.into_active_model();
video_info.single_page = Set(None);
video_info.download_status = Set(0);
video_info.valid = Set(true);
let video_info = video_info.update(&txn).await?;
page::Entity::delete_many()
.filter(page::Column::VideoId.eq(id))
.exec(&txn)
.await?;
txn.commit().await?;
let video_info = video_info.try_into_model()?;
let warning = if video_info.path.is_empty() {
None
} else {
tokio::fs::remove_dir_all(&video_info.path)
.await
.context(format!("删除本地路径「{}」失败", video_info.path))
.err()
.map(|e| format!("{:#}", e))
};
Ok(ApiResponse::ok(ClearAndResetVideoStatusResponse {
warning,
video: VideoInfo {
id: video_info.id,
bvid: video_info.bvid,
name: video_info.name,
upper_name: video_info.upper_name,
valid: video_info.valid,
should_download: video_info.should_download,
download_status: video_info.download_status,
collection_id: video_info.collection_id,
favorite_id: video_info.favorite_id,
submission_id: video_info.submission_id,
watch_later_id: video_info.watch_later_id,
},
}))
}
pub async fn reset_filtered_video_status(
Extension(db): Extension<DatabaseConnection>,
Json(request): Json<ResetFilteredVideoStatusRequest>,
) -> Result<ApiResponse<ResetFilteredVideosResponse>, ApiError> {
let mut query = video::Entity::find();
for (field, column) in [
(request.collection, video::Column::CollectionId),
(request.favorite, video::Column::FavoriteId),
(request.submission, video::Column::SubmissionId),
(request.watch_later, video::Column::WatchLaterId),
] {
if let Some(id) = field {
query = query.filter(column.eq(id));
}
}
if let Some(query_word) = request.query {
query = query.filter(
video::Column::Name
.contains(&query_word)
.or(video::Column::Bvid.contains(query_word)),
);
}
if let Some(status_filter) = request.status_filter {
query = query.filter(status_filter.to_video_query());
}
if let Some(validation_filter) = request.validation_filter {
query = query.filter(validation_filter.to_video_query());
}
let all_videos = query.into_partial_model::<SimpleVideoInfo>().all(&db).await?;
let all_pages = page::Entity::find()
.filter(page::Column::VideoId.is_in(all_videos.iter().map(|v| v.id)))
.into_partial_model::<SimplePageInfo>()
.all(&db)
.await?;
let resetted_pages_info = all_pages
.into_iter()
.filter_map(|mut page_info| {
@@ -196,7 +289,7 @@ pub async fn reset_all_videos(
}
txn.commit().await?;
}
Ok(ApiResponse::ok(ResetAllVideosResponse {
Ok(ApiResponse::ok(ResetFilteredVideosResponse {
resetted: has_video_updates || has_page_updates,
resetted_videos_count: resetted_videos_info.len(),
resetted_pages_count: resetted_pages_info.len(),
@@ -244,10 +337,10 @@ pub async fn update_video_status(
if has_video_updates || has_page_updates {
let txn = db.begin().await?;
if has_video_updates {
update_video_download_status(&txn, &[&video_info], None).await?;
update_video_download_status::<VideoInfo>(&txn, &[&video_info], None).await?;
}
if has_page_updates {
update_page_download_status(&txn, &updated_pages_info, None).await?;
update_page_download_status::<PageInfo>(&txn, &updated_pages_info, None).await?;
}
txn.commit().await?;
}
@@ -257,3 +350,70 @@ pub async fn update_video_status(
pages: pages_info,
}))
}
pub async fn update_filtered_video_status(
Extension(db): Extension<DatabaseConnection>,
ValidatedJson(request): ValidatedJson<UpdateFilteredVideoStatusRequest>,
) -> Result<ApiResponse<UpdateFilteredVideoStatusResponse>, ApiError> {
let mut query = video::Entity::find();
for (field, column) in [
(request.collection, video::Column::CollectionId),
(request.favorite, video::Column::FavoriteId),
(request.submission, video::Column::SubmissionId),
(request.watch_later, video::Column::WatchLaterId),
] {
if let Some(id) = field {
query = query.filter(column.eq(id));
}
}
if let Some(query_word) = request.query {
query = query.filter(
video::Column::Name
.contains(&query_word)
.or(video::Column::Bvid.contains(query_word)),
);
}
if let Some(status_filter) = request.status_filter {
query = query.filter(status_filter.to_video_query());
}
if let Some(validation_filter) = request.validation_filter {
query = query.filter(validation_filter.to_video_query());
}
let mut all_videos = query.into_partial_model::<SimpleVideoInfo>().all(&db).await?;
let mut all_pages = page::Entity::find()
.filter(page::Column::VideoId.is_in(all_videos.iter().map(|v| v.id)))
.into_partial_model::<SimplePageInfo>()
.all(&db)
.await?;
for video_info in all_videos.iter_mut() {
let mut video_status = VideoStatus::from(video_info.download_status);
for update in &request.video_updates {
video_status.set(update.status_index, update.status_value);
}
video_info.download_status = video_status.into();
}
for page_info in all_pages.iter_mut() {
let mut page_status = PageStatus::from(page_info.download_status);
for update in &request.page_updates {
page_status.set(update.status_index, update.status_value);
}
page_info.download_status = page_status.into();
}
let has_video_updates = !all_videos.is_empty();
let has_page_updates = !all_pages.is_empty();
if has_video_updates || has_page_updates {
let txn = db.begin().await?;
if has_video_updates {
update_video_download_status(&txn, &all_videos, Some(500)).await?;
}
if has_page_updates {
update_page_download_status(&txn, &all_pages, Some(500)).await?;
}
txn.commit().await?;
}
Ok(ApiResponse::ok(UpdateFilteredVideoStatusResponse {
success: has_video_updates || has_page_updates,
updated_videos_count: all_videos.len(),
updated_pages_count: all_pages.len(),
}))
}
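The loop above applies each `(status_index, status_value)` update through `VideoStatus::set` / `PageStatus::set` on a value decoded from the packed `download_status` integer. A hypothetical sketch of such a packed status, assuming fixed-width slots per sub-task (the crate's actual `VideoStatus`/`PageStatus` layout may differ):

```rust
// A packed status: each of 8 slots holds a 4-bit value inside one u32.
#[derive(Clone, Copy)]
struct Status(u32);

impl Status {
    // Overwrite a single slot, leaving the other slots untouched.
    fn set(&mut self, index: usize, value: u32) {
        let shift = index * 4;
        self.0 = (self.0 & !(0xF << shift)) | ((value & 0xF) << shift);
    }

    // Read a single slot back out.
    fn get(&self, index: usize) -> u32 {
        (self.0 >> (index * 4)) & 0xF
    }
}
```

Packing the sub-task states into one integer is what makes the filtered bulk update cheap: every row change is a single column write.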


@@ -1,20 +1,20 @@
use std::collections::VecDeque;
use std::sync::Arc;
use parking_lot::Mutex;
use parking_lot::RwLock;
use tokio::sync::broadcast;
use tracing_subscriber::fmt::MakeWriter;
pub const MAX_HISTORY_LOGS: usize = 30;
pub const MAX_HISTORY_LOGS: usize = 200;
/// LogHelper holds the log sender and a buffer of log history
pub struct LogHelper {
pub sender: broadcast::Sender<String>,
pub log_history: Arc<Mutex<VecDeque<String>>>,
pub log_history: Arc<RwLock<VecDeque<String>>>,
}
impl LogHelper {
pub fn new(sender: broadcast::Sender<String>, log_history: Arc<Mutex<VecDeque<String>>>) -> Self {
pub fn new(sender: broadcast::Sender<String>, log_history: Arc<RwLock<VecDeque<String>>>) -> Self {
LogHelper { sender, log_history }
}
}
@@ -31,7 +31,7 @@ impl std::io::Write for LogHelper {
fn write(&mut self, buf: &[u8]) -> std::io::Result<usize> {
let log_message = String::from_utf8_lossy(buf).to_string();
let _ = self.sender.send(log_message.clone());
let mut history = self.log_history.lock();
let mut history = self.log_history.write();
history.push_back(log_message);
if history.len() > MAX_HISTORY_LOGS {
history.pop_front();
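The history buffer above is a simple bounded ring: push to the back, drop from the front once the cap is exceeded, so subscribers joining late still receive the most recent `MAX_HISTORY_LOGS` lines. A minimal standalone sketch (`push_log` is a hypothetical helper):

```rust
use std::collections::VecDeque;

const MAX_HISTORY_LOGS: usize = 200;

// Append a line and evict the oldest one if the buffer is over capacity.
fn push_log(history: &mut VecDeque<String>, line: String) {
    history.push_back(line);
    if history.len() > MAX_HISTORY_LOGS {
        history.pop_front();
    }
}
```

Both `push_back` and `pop_front` are O(1) on `VecDeque`, so the cost per log line stays constant regardless of buffer size.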


@@ -11,19 +11,23 @@ use axum::{Extension, Router};
use dashmap::DashMap;
use futures::stream::{SplitSink, SplitStream};
use futures::{SinkExt, StreamExt, future};
use itertools::Itertools;
pub use log_helper::{LogHelper, MAX_HISTORY_LOGS};
use parking_lot::RwLock;
use serde::{Deserialize, Serialize};
use sysinfo::{
CpuRefreshKind, DiskRefreshKind, Disks, MemoryRefreshKind, ProcessRefreshKind, RefreshKind, System, get_current_pid,
CpuRefreshKind, DiskRefreshKind, Disks, MemoryRefreshKind, Pid, ProcessRefreshKind, ProcessesToUpdate, System,
get_current_pid,
};
use tokio::pin;
use tokio::task::JoinHandle;
use tokio_stream::wrappers::{BroadcastStream, IntervalStream, WatchStream};
use tokio::sync::mpsc;
use tokio::{pin, select};
use tokio_stream::wrappers::{BroadcastStream, WatchStream};
use tokio_util::future::FutureExt;
use tokio_util::sync::CancellationToken;
use uuid::Uuid;
use crate::api::response::SysInfo;
use crate::utils::task_notifier::{TASK_STATUS_NOTIFIER, TaskStatus};
use crate::task::{DownloadTaskManager, TaskStatus};
static WEBSOCKET_HANDLER: LazyLock<WebSocketHandler> = LazyLock::new(WebSocketHandler::new);
@@ -55,191 +59,251 @@ enum ClientEvent {
#[serde(rename_all = "camelCase")]
enum ServerEvent {
Logs(String),
Tasks(Arc<TaskStatus>),
SysInfo(Arc<SysInfo>),
Tasks(TaskStatus),
SysInfo(SysInfo),
}
struct WebSocketHandler {
sysinfo_subscribers: Arc<DashMap<Uuid, tokio::sync::mpsc::Sender<ServerEvent>>>,
sysinfo_handles: RwLock<Option<JoinHandle<()>>>,
sysinfo_subscribers: Arc<DashMap<Uuid, mpsc::Sender<ServerEvent>>>,
sysinfo_cancel: RwLock<Option<CancellationToken>>,
}
impl WebSocketHandler {
fn new() -> Self {
Self {
sysinfo_subscribers: Arc::new(DashMap::new()),
sysinfo_handles: RwLock::new(None),
sysinfo_cancel: RwLock::new(None),
}
}
async fn handle_sender(
&self,
mut sender: SplitSink<WebSocket, Message>,
mut rx: tokio::sync::mpsc::Receiver<ServerEvent>,
) {
/// Push messages to the client
async fn handle_sender(&self, mut sender: SplitSink<WebSocket, Message>, mut rx: mpsc::Receiver<ServerEvent>) {
while let Some(event) = rx.recv().await {
match serde_json::to_string(&event) {
Ok(text) => {
if let Err(e) = sender.send(Message::Text(text.into())).await {
error!("Failed to send message: {:?}", e);
break;
}
}
let text = match serde_json::to_string(&event) {
Ok(text) => text,
Err(e) => {
error!("Failed to serialize event: {:?}", e);
continue;
}
};
if let Err(e) = sender.send(Message::Text(text.into())).await {
error!("Failed to send message: {:?}", e);
break;
}
}
}
/// Receive messages from the client
async fn handle_receiver(
&self,
mut receiver: SplitStream<WebSocket>,
tx: tokio::sync::mpsc::Sender<ServerEvent>,
tx: mpsc::Sender<ServerEvent>,
uuid: Uuid,
log_writer: LogHelper,
) {
// Log and task-status handling are stream-driven, so each ws connection can keep its own independent handler tasks.
// System info, however, is polled server-side and then pushed; a per-connection poller would make every connection poll independently and waste work.
// Subscribers are therefore managed globally, and all connections share a single sysinfo polling task.
let (mut log_handle, mut task_handle) = (None, None);
let (mut log_cancel, mut task_cancel) = (None, None);
while let Some(Ok(msg)) = receiver.next().await {
if let Message::Text(text) = msg {
match serde_json::from_str::<ClientEvent>(&text) {
Ok(ClientEvent::Subscribe(event_type)) => match event_type {
EventType::Logs => {
if log_handle.as_ref().is_none_or(|h: &JoinHandle<()>| h.is_finished()) {
let log_writer_clone = log_writer.clone();
let tx_clone = tx.clone();
let history = log_writer_clone.log_history.lock();
let history_logs: Vec<String> = history.iter().cloned().collect();
drop(history);
log_handle = Some(tokio::spawn(async move {
let rx = log_writer_clone.sender.subscribe();
let log_stream = futures::stream::iter(history_logs.into_iter())
.chain(BroadcastStream::new(rx).filter_map(async |msg| msg.ok()))
.map(|msg| ServerEvent::Logs(msg));
pin!(log_stream);
while let Some(event) = log_stream.next().await {
if let Err(e) = tx_clone.send(event).await {
error!("Failed to send log event: {:?}", e);
break;
}
}
}));
}
}
EventType::Tasks => {
if task_handle.as_ref().is_none_or(|h: &JoinHandle<()>| h.is_finished()) {
let tx_clone = tx.clone();
task_handle = Some(tokio::spawn(async move {
let mut stream = WatchStream::new(TASK_STATUS_NOTIFIER.subscribe())
.map(|status| ServerEvent::Tasks(status));
while let Some(event) = stream.next().await {
if let Err(e) = tx_clone.send(event).await {
error!("Failed to send task status: {:?}", e);
break;
}
}
}));
}
}
EventType::SysInfo => self.add_sysinfo_subscriber(uuid, tx.clone()).await,
},
Ok(ClientEvent::Unsubscribe(event_type)) => match event_type {
EventType::Logs => {
if let Some(handle) = log_handle.take() {
handle.abort();
}
}
EventType::Tasks => {
if let Some(handle) = task_handle.take() {
handle.abort();
}
}
EventType::SysInfo => {
self.remove_sysinfo_subscriber(uuid).await;
}
},
Err(e) => {
error!("Failed to parse client message: {:?}", e);
let Message::Text(text) = msg else {
continue;
};
let client_event = match serde_json::from_str::<ClientEvent>(&text) {
Ok(event) => event,
Err(e) => {
error!("Failed to parse client message: {:?}, error: {:?}", text, e);
continue;
}
};
match client_event {
ClientEvent::Subscribe(EventType::Logs) => {
if log_cancel.is_none() {
log_cancel = Some(self.new_log_handler(tx.clone(), &log_writer));
}
}
ClientEvent::Unsubscribe(EventType::Logs) => {
if let Some(cancel) = log_cancel.take() {
cancel.cancel();
}
}
ClientEvent::Subscribe(EventType::Tasks) => {
if task_cancel.is_none() {
task_cancel = Some(self.new_task_handler(tx.clone()));
}
}
ClientEvent::Unsubscribe(EventType::Tasks) => {
if let Some(cancel) = task_cancel.take() {
cancel.cancel();
}
}
ClientEvent::Subscribe(EventType::SysInfo) => {
self.add_sysinfo_subscriber(uuid, tx.clone());
}
ClientEvent::Unsubscribe(EventType::SysInfo) => {
self.remove_sysinfo_subscriber(uuid);
}
}
}
// The connection is closed; clean up any tasks still left over
if let Some(cancel) = log_cancel {
cancel.cancel();
}
if let Some(cancel) = task_cancel {
cancel.cancel();
}
self.remove_sysinfo_subscriber(uuid);
}
/// Add a global sysinfo subscriber
fn add_sysinfo_subscriber(&self, uuid: Uuid, sender: mpsc::Sender<ServerEvent>) {
self.sysinfo_subscribers.insert(uuid, sender);
if self.sysinfo_cancel.read().is_none() {
let mut sys_info_cancel = self.sysinfo_cancel.write();
if sys_info_cancel.is_some() {
return;
}
*sys_info_cancel = Some(self.new_sysinfo_handler(self.sysinfo_subscribers.clone()));
}
}
/// Remove a global sysinfo subscriber
fn remove_sysinfo_subscriber(&self, uuid: Uuid) {
self.sysinfo_subscribers.remove(&uuid);
if self.sysinfo_subscribers.is_empty()
&& let Some(token) = self.sysinfo_cancel.write().take()
{
token.cancel();
}
}
/// Spawn the async log-push task and return its cancellation token
fn new_log_handler(&self, tx: mpsc::Sender<ServerEvent>, log_writer: &LogHelper) -> CancellationToken {
let cancel_token = CancellationToken::new();
// Read the log history
let history = log_writer.log_history.read();
let history_logs = history.iter().cloned().collect::<Vec<String>>();
drop(history);
// Subscribe to the log broadcast channel
let log_rx = log_writer.sender.subscribe();
tokio::spawn(
async move {
// Chain the history onto the live log stream
let log_stream = futures::stream::iter(history_logs)
.chain(BroadcastStream::new(log_rx).filter_map(async |msg| msg.ok()))
.map(ServerEvent::Logs);
pin!(log_stream);
while let Some(event) = log_stream.next().await {
if let Err(e) = tx.send(event).await {
error!("Failed to send log event: {:?}", e);
break;
}
}
}
}
if let Some(handle) = log_handle {
handle.abort();
}
if let Some(handle) = task_handle {
handle.abort();
}
self.remove_sysinfo_subscriber(uuid).await;
.with_cancellation_token_owned(cancel_token.clone()),
);
cancel_token
}
// Add a subscriber
async fn add_sysinfo_subscriber(&self, uuid: Uuid, sender: tokio::sync::mpsc::Sender<ServerEvent>) {
self.sysinfo_subscribers.insert(uuid, sender);
if self.sysinfo_subscribers.len() > 0
&& self
.sysinfo_handles
.read()
.as_ref()
.is_none_or(|h: &JoinHandle<()>| h.is_finished())
{
let sysinfo_subscribers = self.sysinfo_subscribers.clone();
let mut write_guard = self.sysinfo_handles.write();
if write_guard.as_ref().is_some_and(|h: &JoinHandle<()>| !h.is_finished()) {
return;
/// Spawn the async task-status push task and return its cancellation token
fn new_task_handler(&self, tx: mpsc::Sender<ServerEvent>) -> CancellationToken {
let cancel_token = CancellationToken::new();
tokio::spawn(
async move {
let mut stream = WatchStream::new(DownloadTaskManager::get().subscribe()).map(ServerEvent::Tasks);
while let Some(event) = stream.next().await {
if let Err(e) = tx.send(event).await {
error!("Failed to send task status: {:?}", e);
break;
}
}
}
*write_guard = Some(tokio::spawn(async move {
let mut system = System::new();
let mut disks = Disks::new();
let sys_refresh_kind = sys_refresh_kind();
let disk_refresh_kind = disk_refresh_kind();
.with_cancellation_token_owned(cancel_token.clone()),
);
cancel_token
}
/// Spawn the async sysinfo push task and return its cancellation token
fn new_sysinfo_handler(
&self,
sysinfo_subscribers: Arc<DashMap<Uuid, mpsc::Sender<ServerEvent>>>,
) -> CancellationToken {
let cancel_token = CancellationToken::new();
let cancel_token_clone = cancel_token.clone();
tokio::spawn(async move {
let (tx, mut rx) = mpsc::channel(10);
let (tick_tx, mut tick_rx) = mpsc::channel(3);
// Poll system info on a blocking thread so the async runtime is not stalled
tokio::task::spawn_blocking(move || {
// On linux/mac/windows this call always succeeds, so the expect is essentially safe
let self_pid = get_current_pid().expect("Unsupported platform");
let mut stream =
IntervalStream::new(tokio::time::interval(Duration::from_secs(2))).filter_map(move |_| {
system.refresh_specifics(sys_refresh_kind);
disks.refresh_specifics(true, disk_refresh_kind);
let process = match system.process(self_pid) {
Some(p) => p,
None => return futures::future::ready(None),
};
futures::future::ready(Some(SysInfo {
total_memory: system.total_memory(),
used_memory: system.used_memory(),
process_memory: process.memory(),
used_cpu: system.global_cpu_usage(),
process_cpu: process.cpu_usage() / system.cpus().len() as f32,
total_disk: disks.iter().map(|d| d.total_space()).sum(),
available_disk: disks.iter().map(|d| d.available_space()).sum(),
}))
});
while let Some(sys_info) = stream.next().await {
let sys_info = Arc::new(sys_info);
future::join_all(sysinfo_subscribers.iter().map(async |subscriber| {
if let Err(e) = subscriber.send(ServerEvent::SysInfo(sys_info.clone())).await {
error!(
"Failed to send sysinfo event to subscriber {}: {:?}",
subscriber.key(),
e
);
}
}))
.await;
let mut system = System::new();
let mut disks = Disks::new();
while tick_rx.blocking_recv().is_some() {
system.refresh_needed(self_pid);
disks.refresh_needed(self_pid);
let process = match system.process(self_pid) {
Some(p) => p,
None => continue,
};
let (available, total) = disks
.iter()
.filter(|d| {
d.available_space() > 0
&& d.total_space() > 0
// Crudely filter out some virtual filesystems
&& !["overlay", "tmpfs", "sysfs", "proc"]
.contains(&d.file_system().to_string_lossy().as_ref())
})
.unique_by(|d| d.name())
.fold((0, 0), |(mut available, mut total), d| {
available += d.available_space();
total += d.total_space();
(available, total)
});
let sys_info = SysInfo {
timestamp: chrono::Utc::now().timestamp_millis(),
total_memory: system.total_memory(),
used_memory: system.used_memory(),
process_memory: process.memory(),
used_cpu: system.global_cpu_usage(),
process_cpu: process.cpu_usage() / system.cpus().len() as f32,
total_disk: total,
available_disk: available,
};
if tx.blocking_send(sys_info).is_err() {
break;
}
}
});
// The async side receives system info from the blocking thread and pushes it to all subscribers.
// On cancellation, drop the tick sender so the blocking thread exits cleanly.
let mut interval = tokio::time::interval(Duration::from_secs(2));
loop {
select! {
_ = cancel_token_clone.cancelled() => {
drop(tick_tx);
break;
}
_ = interval.tick() => {
let _ = tick_tx.send(()).await;
}
Some(sys_info) = rx.recv() => {
future::join_all(sysinfo_subscribers.iter().map(async |subscriber| {
if let Err(e) = subscriber.send(ServerEvent::SysInfo(sys_info)).await {
error!(
"Failed to send sysinfo event to subscriber {}: {:?}",
subscriber.key(),
e
);
}
}))
.await;
}
}
}));
}
}
async fn remove_sysinfo_subscriber(&self, uuid: Uuid) {
self.sysinfo_subscribers.remove(&uuid);
if self.sysinfo_subscribers.is_empty() {
if let Some(handle) = self.sysinfo_handles.write().take() {
handle.abort();
}
}
});
cancel_token
}
}
@@ -251,13 +315,24 @@ async fn handle_socket(socket: WebSocket, log_writer: LogHelper) {
tokio::spawn(WEBSOCKET_HANDLER.handle_receiver(ws_receiver, tx, uuid, log_writer));
}
fn sys_refresh_kind() -> RefreshKind {
RefreshKind::nothing()
.with_cpu(CpuRefreshKind::nothing().with_cpu_usage())
.with_memory(MemoryRefreshKind::nothing().with_ram())
.with_processes(ProcessRefreshKind::nothing().with_cpu().with_memory())
trait SysInfoExt {
fn refresh_needed(&mut self, self_pid: Pid);
}
fn disk_refresh_kind() -> DiskRefreshKind {
DiskRefreshKind::nothing().with_storage()
impl SysInfoExt for System {
fn refresh_needed(&mut self, self_pid: Pid) {
self.refresh_memory_specifics(MemoryRefreshKind::nothing().with_ram());
self.refresh_cpu_specifics(CpuRefreshKind::nothing().with_cpu_usage());
self.refresh_processes_specifics(
ProcessesToUpdate::Some(&[self_pid]),
true,
ProcessRefreshKind::nothing().with_cpu().with_memory(),
);
}
}
impl SysInfoExt for Disks {
fn refresh_needed(&mut self, _self_pid: Pid) {
self.refresh_specifics(true, DiskRefreshKind::nothing().with_storage());
}
}

View File

@@ -2,10 +2,9 @@ use anyhow::{Context, Result, bail};
use serde::{Deserialize, Serialize};
use crate::bilibili::error::BiliError;
use crate::config::VersionedConfig;
pub struct PageAnalyzer {
info: serde_json::Value,
pub(crate) info: serde_json::Value,
}
#[derive(Debug, strum::FromRepr, PartialEq, Eq, PartialOrd, Ord, Serialize, Deserialize, Clone)]
@@ -101,7 +100,7 @@ impl Default for FilterOption {
video_min_quality: VideoQuality::Quality360p,
audio_max_quality: AudioQuality::QualityHiRES,
audio_min_quality: AudioQuality::Quality64k,
codecs: vec![VideoCodecs::AV1, VideoCodecs::HEV, VideoCodecs::AVC],
codecs: vec![VideoCodecs::AVC, VideoCodecs::HEV, VideoCodecs::AV1],
no_dolby_video: false,
no_dolby_audio: false,
no_hdr: false,
@@ -131,14 +130,14 @@ pub enum Stream {
// Generic method for obtaining stream URLs, used by the Downloader
impl Stream {
pub fn urls(&self) -> Vec<&str> {
pub fn urls(&self, enable_cdn_sorting: bool) -> Vec<&str> {
match self {
Self::Flv(url) | Self::Html5Mp4(url) | Self::EpisodeTryMp4(url) => vec![url],
Self::DashVideo { url, backup_url, .. } | Self::DashAudio { url, backup_url, .. } => {
let mut urls = std::iter::once(url.as_str())
.chain(backup_url.iter().map(|s| s.as_str()))
.collect::<Vec<_>>();
if VersionedConfig::get().load().cdn_sorting {
if enable_cdn_sorting {
urls.sort_by_key(|u| {
if u.contains("upos-") {
0 // vendor CDN
@@ -218,7 +217,7 @@ impl PageAnalyzer {
.info
.pointer_mut("/dash/video")
.and_then(|v| v.as_array_mut())
.ok_or(BiliError::RiskControlOccurred)?
.ok_or(BiliError::VideoStreamsEmpty)?
.iter_mut()
{
let (Some(url), Some(quality), Some(codecs_id)) = (
@@ -263,39 +262,40 @@ impl PageAnalyzer {
});
}
}
if !filter_option.no_hires {
if let Some(flac) = self.info.pointer_mut("/dash/flac/audio") {
let (Some(url), Some(quality)) = (flac["baseUrl"].as_str(), flac["id"].as_u64()) else {
bail!("invalid flac stream");
};
let quality = AudioQuality::from_repr(quality as usize).context("invalid flac stream quality")?;
if quality >= filter_option.audio_min_quality && quality <= filter_option.audio_max_quality {
streams.push(Stream::DashAudio {
url: url.to_string(),
backup_url: serde_json::from_value(flac["backupUrl"].take()).unwrap_or_default(),
quality,
});
}
if !filter_option.no_hires
&& let Some(flac) = self
.info
.pointer_mut("/dash/flac/audio")
.and_then(|f| f.as_object_mut())
{
let (Some(url), Some(quality)) = (flac["baseUrl"].as_str(), flac["id"].as_u64()) else {
bail!("invalid flac stream, flac content: {:?}", flac);
};
let quality = AudioQuality::from_repr(quality as usize).context("invalid flac stream quality")?;
if quality >= filter_option.audio_min_quality && quality <= filter_option.audio_max_quality {
streams.push(Stream::DashAudio {
url: url.to_string(),
backup_url: serde_json::from_value(flac["backupUrl"].take()).unwrap_or_default(),
quality,
});
}
}
if !filter_option.no_dolby_audio {
if let Some(dolby_audio) = self
if !filter_option.no_dolby_audio
&& let Some(dolby_audio) = self
.info
.pointer_mut("/dash/dolby/audio/0")
.and_then(|a| a.as_object_mut())
{
let (Some(url), Some(quality)) = (dolby_audio["baseUrl"].as_str(), dolby_audio["id"].as_u64()) else {
bail!("invalid dolby audio stream");
};
let quality =
AudioQuality::from_repr(quality as usize).context("invalid dolby audio stream quality")?;
if quality >= filter_option.audio_min_quality && quality <= filter_option.audio_max_quality {
streams.push(Stream::DashAudio {
url: url.to_string(),
backup_url: serde_json::from_value(dolby_audio["backupUrl"].take()).unwrap_or_default(),
quality,
});
}
{
let (Some(url), Some(quality)) = (dolby_audio["baseUrl"].as_str(), dolby_audio["id"].as_u64()) else {
bail!("invalid dolby audio stream");
};
let quality = AudioQuality::from_repr(quality as usize).context("invalid dolby audio stream quality")?;
if quality >= filter_option.audio_min_quality && quality <= filter_option.audio_max_quality {
streams.push(Stream::DashAudio {
url: url.to_string(),
backup_url: serde_json::from_value(dolby_audio["backupUrl"].take()).unwrap_or_default(),
quality,
});
}
}
Ok(streams)
@@ -426,16 +426,17 @@ mod tests {
Some(AudioQuality::Quality192k),
),
];
let config = VersionedConfig::get().read();
for (bvid, video_quality, video_codec, audio_quality) in testcases.into_iter() {
let client = BiliClient::new();
let video = Video::new(&client, bvid.to_owned());
let video = Video::new(&client, bvid, &config.credential);
let pages = video.get_pages().await.expect("failed to get pages");
let first_page = pages.into_iter().next().expect("no page found");
let best_stream = video
.get_page_analyzer(&first_page)
.await
.expect("failed to get page analyzer")
.best_stream(&VersionedConfig::get().load().filter_option)
.best_stream(&config.filter_option)
.expect("failed to get best stream");
dbg!(bvid, &best_stream);
match best_stream {
@@ -471,7 +472,7 @@ mod tests {
codecs: VideoCodecs::AVC,
};
assert_eq!(
stream.urls(),
stream.urls(true),
vec![
"https://upos-sz-mirrorcos.bilivideo.com",
"https://cn-tj-cu-01-11.bilivideo.com",

View File

@@ -1,13 +1,15 @@
use std::sync::Arc;
use std::time::Duration;
use anyhow::Result;
use anyhow::{Result, bail};
use leaky_bucket::RateLimiter;
use parking_lot::Once;
use reqwest::{Method, header};
use sea_orm::DatabaseConnection;
use ua_generator::ua;
use crate::bilibili::Credential;
use crate::bilibili::credential::WbiImg;
use crate::config::{RateLimit, VersionedCache, VersionedConfig};
use crate::config::{RateLimit, VersionedCache};
// A thin wrapper around reqwest::Client for Bilibili requests
#[derive(Clone)]
@@ -15,17 +17,21 @@ pub struct Client(reqwest::Client);
impl Client {
pub fn new() -> Self {
static INIT: Once = Once::new();
INIT.call_once(|| {
rustls::crypto::ring::default_provider()
.install_default()
.expect("Failed to install rustls crypto provider");
});
// Headers required for normal API access, added to every request as defaults
let mut headers = header::HeaderMap::new();
headers.insert(
header::USER_AGENT,
header::HeaderValue::from_static(
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36",
),
header::HeaderValue::from_static(ua::spoof_chrome_ua()),
);
headers.insert(
header::REFERER,
header::HeaderValue::from_static("https://www.bilibili.com"),
header::HeaderValue::from_static("https://www.bilibili.com/"),
);
Self(
reqwest::Client::builder()
@@ -61,56 +67,82 @@ impl Default for Client {
}
}
enum Limiter {
Latest(VersionedCache<Option<RateLimiter>>),
Snapshot(Arc<Option<RateLimiter>>),
}
pub struct BiliClient {
pub client: Client,
limiter: VersionedCache<Option<RateLimiter>>,
limiter: Limiter,
}
impl BiliClient {
pub fn new() -> Self {
let client = Client::new();
let limiter = VersionedCache::new(|config| {
Ok(config
.concurrent_limit
.rate_limit
.as_ref()
.map(|RateLimit { limit, duration }| {
RateLimiter::builder()
.initial(*limit)
.refill(*limit)
.max(*limit)
.interval(Duration::from_millis(*duration))
.build()
}))
})
.expect("failed to create rate limiter");
let limiter = Limiter::Latest(
VersionedCache::new(|config| {
Ok(config
.concurrent_limit
.rate_limit
.as_ref()
.map(|RateLimit { limit, duration }| {
RateLimiter::builder()
.initial(*limit)
.refill(*limit)
.max(*limit)
.interval(Duration::from_millis(*duration))
.build()
}))
})
.expect("failed to create rate limiter"),
);
Self { client, limiter }
}
/// Take a snapshot of the current BiliClient; the rate limiter inside the snapshot is fixed
pub fn snapshot(&self) -> Result<Self> {
let Limiter::Latest(inner) = &self.limiter else {
// Syntactically fine, but semantically taking a snapshot of a snapshot is not allowed
bail!("cannot snapshot a snapshot BiliClient");
};
Ok(Self {
client: self.client.clone(),
limiter: Limiter::Snapshot(inner.snapshot()),
})
}
/// Get a pre-built request; obtaining a request this way checks and waits on the rate limit
pub async fn request(&self, method: Method, url: &str) -> reqwest::RequestBuilder {
if let Some(limiter) = self.limiter.load().as_ref() {
limiter.acquire_one().await;
pub async fn request(&self, method: Method, url: &str, credential: &Credential) -> reqwest::RequestBuilder {
match &self.limiter {
Limiter::Latest(inner) => {
if let Some(limiter) = inner.read().as_ref() {
limiter.acquire_one().await;
}
}
Limiter::Snapshot(inner) => {
if let Some(limiter) = inner.as_ref() {
limiter.acquire_one().await;
}
}
}
let credential = &VersionedConfig::get().load().credential;
self.client.request(method, url, Some(credential))
}
pub async fn check_refresh(&self, connection: &DatabaseConnection) -> Result<()> {
let credential = &VersionedConfig::get().load().credential;
/// Check and refresh the Credential; returns Ok(None) if no refresh is needed, Ok(Some(new_credential)) if it is
pub async fn check_refresh(&self, credential: &Credential) -> Result<Option<Credential>> {
if !credential.need_refresh(&self.client).await? {
return Ok(());
return Ok(None);
}
let new_credential = credential.refresh(&self.client).await?;
VersionedConfig::get()
.update_credential(new_credential, connection)
.await?;
Ok(())
Ok(Some(credential.refresh(&self.client).await?))
}
/// Get the wbi img, used to generate request signatures
pub async fn wbi_img(&self) -> Result<WbiImg> {
let credential = &VersionedConfig::get().load().credential;
pub async fn wbi_img(&self, credential: &Credential) -> Result<WbiImg> {
credential.wbi_img(&self.client).await
}
pub fn inner_client(&self) -> &reqwest::Client {
&self.client.0
}
}

View File

@@ -7,8 +7,7 @@ use reqwest::Method;
use serde::Deserialize;
use serde_json::Value;
use crate::bilibili::credential::encoded_query;
use crate::bilibili::{BiliClient, MIXIN_KEY, Validate, VideoInfo};
use crate::bilibili::{BiliClient, Credential, ErrorForStatusExt, Validate, VideoInfo};
#[derive(PartialEq, Eq, Hash, Clone, Debug, Default, Copy)]
pub enum CollectionType {
@@ -74,6 +73,7 @@ pub struct CollectionItem {
pub struct Collection<'a> {
client: &'a BiliClient,
pub collection: CollectionItem,
credential: &'a Credential,
}
#[derive(Debug, PartialEq)]
@@ -112,8 +112,12 @@ impl<'de> Deserialize<'de> for CollectionInfo {
}
impl<'a> Collection<'a> {
pub fn new(client: &'a BiliClient, collection: CollectionItem) -> Self {
Self { client, collection }
pub fn new(client: &'a BiliClient, collection: CollectionItem, credential: &'a Credential) -> Self {
Self {
client,
collection,
credential,
}
}
pub async fn get_info(&self) -> Result<CollectionInfo> {
@@ -127,55 +131,54 @@ impl<'a> Collection<'a> {
async fn get_series_info(&self) -> Result<Value> {
self.client
.request(Method::GET, "https://api.bilibili.com/x/series/series")
.request(Method::GET, "https://api.bilibili.com/x/series/series", self.credential)
.await
.query(&[("series_id", self.collection.sid.as_str())])
.send()
.await?
.error_for_status()?
.error_for_status_ext()?
.json::<Value>()
.await?
.validate()
}
async fn get_videos(&self, page: i32) -> Result<Value> {
let page = page.to_string();
let (url, query) = match self.collection.collection_type {
CollectionType::Series => (
"https://api.bilibili.com/x/series/archives",
encoded_query(
vec![
("mid", self.collection.mid.as_str()),
("series_id", self.collection.sid.as_str()),
("only_normal", "true"),
("sort", "desc"),
("pn", page.as_str()),
("ps", "30"),
],
MIXIN_KEY.load().as_deref(),
),
),
CollectionType::Season => (
"https://api.bilibili.com/x/polymer/web-space/seasons_archives_list",
encoded_query(
vec![
("mid", self.collection.mid.as_str()),
("season_id", self.collection.sid.as_str()),
("sort_reverse", "true"),
("page_num", page.as_str()),
("page_size", "30"),
],
MIXIN_KEY.load().as_deref(),
),
),
let req = match self.collection.collection_type {
CollectionType::Series => self
.client
.request(
Method::GET,
"https://api.bilibili.com/x/series/archives",
self.credential,
)
.await
.query(&[("pn", page)])
.query(&[
("mid", self.collection.mid.as_str()),
("series_id", self.collection.sid.as_str()),
("only_normal", "true"),
("sort", "desc"),
("ps", "30"),
]),
CollectionType::Season => self
.client
.request(
Method::GET,
"https://api.bilibili.com/x/polymer/web-space/seasons_archives_list",
self.credential,
)
.await
.query(&[("page_num", page)])
.query(&[
("mid", self.collection.mid.as_str()),
("season_id", self.collection.sid.as_str()),
("sort_reverse", "true"),
("page_size", "30"),
]),
};
self.client
.request(Method::GET, url)
.await
.query(&query)
.send()
req.send()
.await?
.error_for_status()?
.error_for_status_ext()?
.json::<Value>()
.await?
.validate()
@@ -193,6 +196,9 @@ impl<'a> Collection<'a> {
})?;
let archives = &mut videos["data"]["archives"];
if archives.as_array().is_none_or(|v| v.is_empty()) {
if page == 1 {
break;
}
Err(anyhow!(
"no videos found in collection {:?} page {}",
self.collection,

View File

@@ -1,9 +1,7 @@
use std::borrow::Cow;
use std::collections::HashSet;
use anyhow::{Context, Result, bail, ensure};
use cookie::Cookie;
use cow_utils::CowUtils;
use regex::Regex;
use reqwest::{Method, header};
use rsa::pkcs8::DecodePublicKey;
@@ -11,7 +9,7 @@ use rsa::sha2::Sha256;
use rsa::{Oaep, RsaPublicKey};
use serde::{Deserialize, Serialize};
use crate::bilibili::{Client, Validate};
use crate::bilibili::{BiliError, Client, ErrorForStatusExt, Validate};
const MIXIN_KEY_ENC_TAB: [usize; 64] = [
46, 47, 18, 2, 53, 8, 23, 32, 15, 50, 10, 31, 58, 3, 45, 35, 27, 43, 5, 49, 33, 9, 42, 19, 29, 28, 14, 39, 12, 38,
@@ -19,6 +17,13 @@ const MIXIN_KEY_ENC_TAB: [usize; 64] = [
20, 34, 44, 52,
];
mod qrcode_status_code {
pub const SUCCESS: i64 = 0;
pub const NOT_SCANNED: i64 = 86101;
pub const SCANNED_UNCONFIRMED: i64 = 86090;
pub const EXPIRED: i64 = 86038;
}
#[derive(Default, Debug, Clone, Serialize, Deserialize)]
pub struct Credential {
pub sessdata: String,
@@ -30,17 +35,35 @@ pub struct Credential {
#[derive(Debug, Deserialize)]
pub struct WbiImg {
img_url: String,
sub_url: String,
pub(crate) img_url: String,
pub(crate) sub_url: String,
}
impl From<WbiImg> for Option<String> {
/// Try to convert WbiImg into a mixin_key
fn from(value: WbiImg) -> Self {
let key = match (
get_filename(value.img_url.as_str()),
get_filename(value.sub_url.as_str()),
) {
#[derive(Debug, Serialize, Deserialize)]
pub struct Qrcode {
pub url: String,
pub qrcode_key: String,
}
#[derive(Debug, Serialize, Deserialize)]
#[serde(tag = "status", rename_all = "snake_case")]
pub enum PollStatus {
Success {
credential: Credential,
},
Pending {
message: String,
#[serde(default)]
scanned: bool,
},
Expired {
message: String,
},
}
impl WbiImg {
pub fn into_mixin_key(self) -> Option<String> {
let key = match (get_filename(self.img_url.as_str()), get_filename(self.sub_url.as_str())) {
(Some(img_key), Some(sub_key)) => img_key.to_string() + sub_key,
_ => return None,
};
@@ -55,13 +78,85 @@ impl Credential {
.request(Method::GET, "https://api.bilibili.com/x/web-interface/nav", Some(self))
.send()
.await?
.error_for_status()?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?
.validate()?;
Ok(serde_json::from_value(res["data"]["wbi_img"].take())?)
}
pub async fn generate_qrcode(client: &Client) -> Result<Qrcode> {
let mut res = client
.request(
Method::GET,
"https://passport.bilibili.com/x/passport-login/web/qrcode/generate",
None,
)
.send()
.await?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?
.validate()?;
Ok(serde_json::from_value(res["data"].take())?)
}
pub async fn poll_qrcode(client: &Client, qrcode_key: &str) -> Result<PollStatus> {
let mut resp = client
.request(
Method::GET,
"https://passport.bilibili.com/x/passport-login/web/qrcode/poll",
None,
)
.query(&[("qrcode_key", qrcode_key)])
.send()
.await?
.error_for_status_ext()?;
let headers = std::mem::take(resp.headers_mut());
let json = resp.json::<serde_json::Value>().await?.validate()?;
let code = json["data"]["code"].as_i64().context("missing 'code' field in data")?;
match code {
qrcode_status_code::SUCCESS => {
let mut credential = Self::extract(headers, json)?;
credential.buvid3 = Self::get_buvid3(client).await?;
Ok(PollStatus::Success { credential })
}
qrcode_status_code::NOT_SCANNED => Ok(PollStatus::Pending {
message: "未扫描".to_owned(),
scanned: false,
}),
qrcode_status_code::SCANNED_UNCONFIRMED => Ok(PollStatus::Pending {
message: "已扫描,请在手机上确认登录".to_owned(),
scanned: true,
}),
qrcode_status_code::EXPIRED => Ok(PollStatus::Expired {
message: "二维码已过期".to_owned(),
}),
_ => {
bail!(BiliError::InvalidResponse(json.to_string()));
}
}
}
/// Fetch the buvid3 browser fingerprint
///
/// See https://github.com/SocialSisterYi/bilibili-API-collect/blob/master/docs/misc/buvid3_4.md
async fn get_buvid3(client: &Client) -> Result<String> {
let resp = client
.request(Method::GET, "https://api.bilibili.com/x/web-frontend/getbuvid", None)
.send()
.await?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?
.validate()?;
resp["data"]["buvid"]
.as_str()
.context("missing 'buvid' field in data")
.map(|s| s.to_string())
}
/// 检查凭据是否有效
pub async fn need_refresh(&self, client: &Client) -> Result<bool> {
let res = client
@@ -72,7 +167,7 @@ impl Credential {
)
.send()
.await?
.error_for_status()?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?
.validate()?;
@@ -81,9 +176,17 @@ impl Credential {
pub async fn refresh(&self, client: &Client) -> Result<Self> {
let correspond_path = Self::get_correspond_path();
let csrf = self.get_refresh_csrf(client, correspond_path).await?;
let new_credential = self.get_new_credential(client, &csrf).await?;
self.confirm_refresh(client, &new_credential).await?;
let csrf = self
.get_refresh_csrf(client, correspond_path)
.await
.context("获取 refresh_csrf 失败")?;
let new_credential = self
.get_new_credential(client, &csrf)
.await
.context("刷新 Credential 失败")?;
self.confirm_refresh(client, &new_credential)
.await
.context("确认更新 Credential 失败")?;
Ok(new_credential)
}
@@ -98,11 +201,11 @@ JNrRuoEUXpabUzGB8QIDAQAB
-----END PUBLIC KEY-----",
)
.expect("fail to decode public key");
let ts = chrono::Local::now().timestamp_millis();
// A millisecond-precision timestamp may run ahead of the server clock, so back off 20s just in case
let ts = chrono::Local::now().timestamp_millis() - 20000;
let data = format!("refresh_{}", ts).into_bytes();
let mut rng = rand::rng();
let encrypted = key
.encrypt(&mut rng, Oaep::new::<Sha256>(), &data)
.encrypt(&mut rand::rng(), Oaep::new::<Sha256>(), &data)
.expect("fail to encrypt");
hex::encode(encrypted)
}
@@ -117,12 +220,12 @@ JNrRuoEUXpabUzGB8QIDAQAB
.header(header::COOKIE, "Domain=.bilibili.com")
.send()
.await?
.error_for_status()?;
.error_for_status_ext()?;
regex_find(r#"<div id="1-name">(.+?)</div>"#, res.text().await?.as_str())
}
async fn get_new_credential(&self, client: &Client, csrf: &str) -> Result<Credential> {
let mut res = client
let mut resp = client
.request(
Method::POST,
"https://passport.bilibili.com/x/passport-login/web/cookie/refresh",
@@ -138,38 +241,11 @@ JNrRuoEUXpabUzGB8QIDAQAB
])
.send()
.await?
.error_for_status()?;
// The headers must be taken out before .json, otherwise res gets consumed
let headers = std::mem::take(res.headers_mut());
let res = res.json::<serde_json::Value>().await?.validate()?;
let set_cookies = headers.get_all(header::SET_COOKIE);
let mut credential = Self {
buvid3: self.buvid3.clone(),
..Self::default()
};
let required_cookies = HashSet::from(["SESSDATA", "bili_jct", "DedeUserID"]);
let cookies: Vec<Cookie> = set_cookies
.iter()
.filter_map(|x| x.to_str().ok())
.filter_map(|x| Cookie::parse(x).ok())
.filter(|x| required_cookies.contains(x.name()))
.collect();
ensure!(
cookies.len() == required_cookies.len(),
"not all required cookies found"
);
for cookie in cookies {
match cookie.name() {
"SESSDATA" => credential.sessdata = cookie.value().to_string(),
"bili_jct" => credential.bili_jct = cookie.value().to_string(),
"DedeUserID" => credential.dedeuserid = cookie.value().to_string(),
_ => unreachable!(),
}
}
match res["data"]["refresh_token"].as_str() {
Some(token) => credential.ac_time_value = token.to_string(),
None => bail!("refresh_token not found"),
}
.error_for_status_ext()?;
let headers = std::mem::take(resp.headers_mut());
let json = resp.json::<serde_json::Value>().await?.validate()?;
let mut credential = Self::extract(headers, json)?;
credential.buvid3 = self.buvid3.clone();
Ok(credential)
}
@@ -187,12 +263,42 @@ JNrRuoEUXpabUzGB8QIDAQAB
])
.send()
.await?
.error_for_status()?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?
.validate()?;
Ok(())
}
/// Parse the headers and json into a Credential with every field filled except buvid3
fn extract(headers: header::HeaderMap, json: serde_json::Value) -> Result<Credential> {
let mut credential = Credential::default();
let required_cookies = HashSet::from(["SESSDATA", "bili_jct", "DedeUserID"]);
let cookies: Vec<Cookie> = headers
.get_all(header::SET_COOKIE)
.iter()
.filter_map(|x| x.to_str().ok())
.filter_map(|x| Cookie::parse(x).ok())
.filter(|x| required_cookies.contains(x.name()))
.collect();
ensure!(
cookies.len() == required_cookies.len(),
"not all required cookies found"
);
for cookie in cookies {
match cookie.name() {
"SESSDATA" => credential.sessdata = cookie.value().to_string(),
"bili_jct" => credential.bili_jct = cookie.value().to_string(),
"DedeUserID" => credential.dedeuserid = cookie.value().to_string(),
_ => unreachable!(),
}
}
match json["data"]["refresh_token"].as_str() {
Some(token) => credential.ac_time_value = token.to_string(),
None => bail!("refresh_token not found"),
}
Ok(credential)
}
}
// 用指定的 pattern 正则表达式在 doc 中查找,返回第一个匹配的捕获组
@@ -213,47 +319,8 @@ fn get_filename(url: &str) -> Option<&str> {
.map(|(s, _)| s)
}
pub fn encoded_query<'a>(
params: Vec<(&'a str, impl Into<Cow<'a, str>>)>,
mixin_key: Option<impl AsRef<str>>,
) -> Vec<(&'a str, Cow<'a, str>)> {
match mixin_key {
Some(key) => _encoded_query(params, key.as_ref(), chrono::Local::now().timestamp().to_string()),
None => params.into_iter().map(|(k, v)| (k, v.into())).collect(),
}
}
fn _encoded_query<'a>(
params: Vec<(&'a str, impl Into<Cow<'a, str>>)>,
mixin_key: &str,
timestamp: String,
) -> Vec<(&'a str, Cow<'a, str>)> {
let disallowed = ['!', '\'', '(', ')', '*'];
let mut params: Vec<(&'a str, Cow<'a, str>)> = params
.into_iter()
.map(|(k, v)| {
(
k,
match Into::<Cow<'a, str>>::into(v) {
Cow::Borrowed(v) => v.cow_replace(&disallowed[..], ""),
Cow::Owned(v) => v.replace(&disallowed[..], "").into(),
},
)
})
.collect();
params.push(("wts", timestamp.into()));
params.sort_by(|a, b| a.0.cmp(b.0));
let query = serde_urlencoded::to_string(&params)
.expect("fail to encode query")
.replace('+', "%20");
params.push(("w_rid", format!("{:x}", md5::compute(query.clone() + mixin_key)).into()));
params
}
#[cfg(test)]
mod tests {
use assert_matches::assert_matches;
use super::*;
#[test]
@@ -285,54 +352,92 @@ mod tests {
}
#[test]
fn test_wbi_key() {
let key = WbiImg {
img_url: "https://i0.hdslb.com/bfs/wbi/7cd084941338484aae1ad9425b84077c.png".to_string(),
sub_url: "https://i0.hdslb.com/bfs/wbi/4932caff0ff746eab6f01bf08b70ac45.png".to_string(),
};
let key = Option::<String>::from(key).expect("fail to convert key");
assert_eq!(key.as_str(), "ea1db124af3c7062474693fa704f4ff8");
// No special characters
assert_matches!(
&_encoded_query(
vec![("foo", "114"), ("bar", "514"), ("zab", "1919810")],
key.as_str(),
"1702204169".to_string(),
)[..],
[
("bar", Cow::Borrowed(a)),
("foo", Cow::Borrowed(b)),
("wts", Cow::Owned(c)),
("zab", Cow::Borrowed(d)),
("w_rid", Cow::Owned(e)),
] => {
assert_eq!(*a, "514");
assert_eq!(*b, "114");
assert_eq!(c, "1702204169");
assert_eq!(*d, "1919810");
assert_eq!(e, "8f6f2b5b3d485fe1886cec6a0be8c5d4");
}
fn test_extract_credential_success() {
let mut headers = header::HeaderMap::new();
headers.append(
header::SET_COOKIE,
"SESSDATA=test_sessdata; Path=/; Domain=bilibili.com".parse().unwrap(),
);
// With special characters
assert_matches!(
&_encoded_query(
vec![("foo", "'1(1)4'"), ("bar", "!5*1!14"), ("zab", "1919810")],
key.as_str(),
"1702204169".to_string(),
)[..],
[
("bar", Cow::Owned(a)),
("foo", Cow::Owned(b)),
("wts", Cow::Owned(c)),
("zab", Cow::Borrowed(d)),
("w_rid", Cow::Owned(e)),
] => {
assert_eq!(a, "5114");
assert_eq!(b, "114");
assert_eq!(c, "1702204169");
assert_eq!(*d, "1919810");
assert_eq!(e, "6a2c86c4b0648ce062ba0dac2de91a85");
}
headers.append(
header::SET_COOKIE,
"bili_jct=test_jct; Path=/; Domain=bilibili.com".parse().unwrap(),
);
headers.append(
header::SET_COOKIE,
"DedeUserID=123456; Path=/; Domain=bilibili.com".parse().unwrap(),
);
let json = serde_json::json!({
"data": {
"refresh_token": "test_refresh_token"
}
});
let credential = Credential::extract(headers, json).unwrap();
assert_eq!(credential.sessdata, "test_sessdata");
assert_eq!(credential.bili_jct, "test_jct");
assert_eq!(credential.dedeuserid, "123456");
assert_eq!(credential.ac_time_value, "test_refresh_token");
assert!(credential.buvid3.is_empty());
}
#[test]
fn test_extract_credential_missing_sessdata() {
let headers = header::HeaderMap::new();
let json = serde_json::json!({
"data": {
"refresh_token": "test_refresh_token"
}
});
assert!(Credential::extract(headers, json).is_err());
}
#[test]
fn test_extract_credential_missing_refresh_token() {
let mut headers = header::HeaderMap::new();
headers.append(header::SET_COOKIE, "SESSDATA=test_sessdata".parse().unwrap());
headers.append(header::SET_COOKIE, "bili_jct=test_jct".parse().unwrap());
headers.append(header::SET_COOKIE, "DedeUserID=123456".parse().unwrap());
let json = serde_json::json!({
"data": {}
});
assert!(Credential::extract(headers, json).is_err());
}
#[ignore = "requires manual testing with real QR code scan"]
#[tokio::test]
async fn test_qrcode_login_flow() -> Result<()> {
let client = Client::new();
// 1. Generate the QR code
let qr_response = Credential::generate_qrcode(&client).await?;
println!("二维码 URL: {}", qr_response.url);
println!("qrcode_key: {}", qr_response.qrcode_key);
println!("\n请使用 B 站 APP 扫描二维码...\n");
// 2. Poll the login status (up to 90 attempts, one every 2 seconds, 180 seconds total)
for i in 1..=90 {
println!("{} 次轮询...", i);
let status = Credential::poll_qrcode(&client, &qr_response.qrcode_key).await?;
match status {
PollStatus::Success { credential } => {
println!("\n登录成功!");
println!("SESSDATA: {}", credential.sessdata);
println!("bili_jct: {}", credential.bili_jct);
println!("buvid3: {}", credential.buvid3);
println!("DedeUserID: {}", credential.dedeuserid);
println!("ac_time_value: {}", credential.ac_time_value);
return Ok(());
}
PollStatus::Pending { message, scanned } => {
println!("状态: {}, 已扫描: {}", message, scanned);
}
PollStatus::Expired { message } => {
println!("\n二维码已过期: {}", message);
anyhow::bail!("二维码过期");
}
}
tokio::time::sleep(tokio::time::Duration::from_secs(2)).await;
}
bail!("轮询超时")
}
}

View File

@@ -184,7 +184,7 @@ impl<'a, W: AsyncWrite> AssWriter<'a, W> {
}
}
fn escape_text(text: &str) -> Cow<str> {
fn escape_text(text: &'_ str) -> Cow<'_, str> {
let text = text.trim();
if memchr::memchr(b'\n', text.as_bytes()).is_some() {
Cow::from(text.replace('\n', "\\N"))

View File

@@ -26,7 +26,7 @@ pub struct DanmakuOption {
pub bottom_percentage: f64,
/// Opacity (0-255)
pub opacity: u8,
/// 是否加粗1代表是0代表否
/// 是否加粗1 代表是0 代表否
pub bold: bool,
/// 描边
pub outline: f64,

View File

@@ -39,7 +39,7 @@ pub struct Danmu {
impl Danmu {
/// Compute the "pixel length" of a danmaku, multiplied by a scaling factor
///
/// 汉字算一个全宽英文算2/3宽
/// 汉字算一个全宽,英文算 2/3
pub fn length(&self, config: &CanvasConfig<'_>) -> f64 {
let pts = config.danmaku_option.font_size
* self

View File

@@ -3,10 +3,9 @@ use std::path::PathBuf;
use anyhow::Result;
use tokio::fs::{self, File};
use crate::bilibili::PageInfo;
use crate::bilibili::danmaku::canvas::CanvasConfig;
use crate::bilibili::danmaku::{AssWriter, Danmu};
use crate::config::VersionedConfig;
use crate::bilibili::{DanmakuOption, PageInfo};
pub struct DanmakuWriter<'a> {
page: &'a PageInfo,
@@ -18,12 +17,11 @@ impl<'a> DanmakuWriter<'a> {
DanmakuWriter { page, danmaku }
}
pub async fn write(self, path: PathBuf) -> Result<()> {
pub async fn write(self, path: PathBuf, danmaku_option: &DanmakuOption) -> Result<()> {
if let Some(parent) = path.parent() {
fs::create_dir_all(parent).await?;
}
let config = VersionedConfig::get().load_full();
let canvas_config = CanvasConfig::new(&config.danmaku_option, self.page);
let canvas_config = CanvasConfig::new(danmaku_option, self.page);
let mut writer =
AssWriter::construct(File::create(path).await?, self.page.name.clone(), canvas_config.clone()).await?;
let mut canvas = canvas_config.canvas();

View File

@@ -0,0 +1,97 @@
use anyhow::{Context, Result, anyhow};
use async_stream::try_stream;
use chrono::DateTime;
use futures::Stream;
use reqwest::Method;
use serde_json::Value;
use crate::bilibili::{BiliClient, Credential, ErrorForStatusExt, MIXIN_KEY, Validate, VideoInfo, WbiSign};
pub struct Dynamic<'a> {
client: &'a BiliClient,
pub upper_id: String,
credential: &'a Credential,
}
impl<'a> Dynamic<'a> {
pub fn new(client: &'a BiliClient, upper_id: String, credential: &'a Credential) -> Self {
Self {
client,
upper_id,
credential,
}
}
pub async fn get_dynamics(&self, offset: Option<String>) -> Result<Value> {
self.client
.request(
Method::GET,
"https://api.bilibili.com/x/polymer/web-dynamic/v1/feed/space",
self.credential,
)
.await
.query(&[
("host_mid", self.upper_id.as_str()),
("offset", offset.as_deref().unwrap_or("")),
("type", "video"),
])
.wbi_sign(MIXIN_KEY.load().as_deref())?
.send()
.await?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?
.validate()
}
pub fn into_video_stream(self) -> impl Stream<Item = Result<VideoInfo>> + 'a {
try_stream! {
let mut offset = None;
loop {
let mut res = self
.get_dynamics(offset.take())
.await
.with_context(|| "failed to get dynamics")?;
let items = match res["data"]["items"].as_array_mut() {
Some(items) if !items.is_empty() => items,
_ => {
if offset.is_none() {
break;
}
Err(anyhow!("no dynamics found in offset {:?}", offset))?
}
};
for item in items.iter_mut() {
if item["type"].as_str().is_none_or(|t| t != "DYNAMIC_TYPE_AV") {
continue;
}
let pub_ts = item["modules"]["module_author"]["pub_ts"].take();
let pub_dt = pub_ts
.as_i64()
.or_else(|| pub_ts.as_str().and_then(|s| s.parse::<i64>().ok()))
.and_then(DateTime::from_timestamp_secs)
.with_context(|| format!("invalid pub_ts: {:?}", pub_ts))?;
let mut video_info: VideoInfo =
serde_json::from_value(item["modules"]["module_dynamic"]["major"]["archive"].take())?;
// let-else is not used here because the try_stream! macro does not support it
if let VideoInfo::Dynamic { ref mut pubtime, .. } = video_info {
*pubtime = pub_dt;
yield video_info;
} else {
Err(anyhow!("video info is not dynamic"))?;
}
}
if let (Some(has_more), Some(new_offset)) =
(res["data"]["has_more"].as_bool(), res["data"]["offset"].as_str())
{
if !has_more {
break;
}
offset = Some(new_offset.to_string());
} else {
Err(anyhow!("no has_more or offset found"))?;
}
}
}
}
}

View File

@@ -1,9 +1,24 @@
use thiserror::Error;
#[derive(Error, Debug)]
#[derive(Error, Debug, Clone)]
pub enum BiliError {
#[error("risk control occurred")]
RiskControlOccurred,
#[error("request failed, status code: {0}, message: {1}")]
RequestFailed(i64, String),
#[error("response missing 'code' or 'message' field, full response: {0}")]
InvalidResponse(String),
#[error("API returned error code {0}, full response: {1}")]
ErrorResponse(i64, String),
#[error("risk control triggered by server, full response: {0}")]
RiskControlOccurred(String),
#[error("invalid HTTP response code {0}, reason: {1}")]
InvalidStatusCode(u16, &'static str),
#[error("no video streams available (may indicate risk control)")]
VideoStreamsEmpty,
}
impl BiliError {
pub fn is_risk_control_related(&self) -> bool {
matches!(
self,
BiliError::RiskControlOccurred(_) | BiliError::VideoStreamsEmpty | BiliError::InvalidStatusCode(_, _)
)
}
}

View File
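The `is_risk_control_related` helper in the error diff above groups the variants that point at server-side risk control. A trimmed-down sketch (the enum here is a reduced stand-in for the real `BiliError`):

```rust
// Sketch of the risk-control classification above: explicit risk control,
// empty stream lists, and the special-cased HTTP status codes all count as
// risk-control-related; ordinary API error codes do not.
#[derive(Debug)]
enum BiliError {
    ErrorResponse(i64, String),
    RiskControlOccurred(String),
    InvalidStatusCode(u16, &'static str),
    VideoStreamsEmpty,
}

impl BiliError {
    fn is_risk_control_related(&self) -> bool {
        matches!(
            self,
            BiliError::RiskControlOccurred(_) | BiliError::VideoStreamsEmpty | BiliError::InvalidStatusCode(_, _)
        )
    }
}

fn main() {
    assert!(BiliError::RiskControlOccurred("resp".to_string()).is_risk_control_related());
    assert!(BiliError::VideoStreamsEmpty.is_risk_control_related());
    assert!(BiliError::InvalidStatusCode(412, "Precondition Failed").is_risk_control_related());
    assert!(!BiliError::ErrorResponse(-404, "resp".to_string()).is_risk_control_related());
    println!("ok");
}
```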

@@ -3,10 +3,11 @@ use async_stream::try_stream;
use futures::Stream;
use serde_json::Value;
use crate::bilibili::{BiliClient, Validate, VideoInfo};
use crate::bilibili::{BiliClient, Credential, ErrorForStatusExt, Validate, VideoInfo};
pub struct FavoriteList<'a> {
client: &'a BiliClient,
fid: String,
credential: &'a Credential,
}
#[derive(Debug, serde::Deserialize)]
@@ -15,26 +16,28 @@ pub struct FavoriteListInfo {
pub title: String,
}
#[derive(Debug, serde::Deserialize)]
pub struct Upper<T> {
pub mid: T,
pub name: String,
pub face: String,
}
impl<'a> FavoriteList<'a> {
pub fn new(client: &'a BiliClient, fid: String) -> Self {
Self { client, fid }
pub fn new(client: &'a BiliClient, fid: String, credential: &'a Credential) -> Self {
Self {
client,
fid,
credential,
}
}
pub async fn get_info(&self) -> Result<FavoriteListInfo> {
let mut res = self
.client
.request(reqwest::Method::GET, "https://api.bilibili.com/x/v3/fav/folder/info")
.request(
reqwest::Method::GET,
"https://api.bilibili.com/x/v3/fav/folder/info",
self.credential,
)
.await
.query(&[("media_id", &self.fid)])
.send()
.await?
.error_for_status()?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?
.validate()?;
@@ -43,7 +46,11 @@ impl<'a> FavoriteList<'a> {
async fn get_videos(&self, page: u32) -> Result<Value> {
self.client
.request(reqwest::Method::GET, "https://api.bilibili.com/x/v3/fav/resource/list")
.request(
reqwest::Method::GET,
"https://api.bilibili.com/x/v3/fav/resource/list",
self.credential,
)
.await
.query(&[
("media_id", self.fid.as_str()),
@@ -55,7 +62,7 @@ impl<'a> FavoriteList<'a> {
])
.send()
.await?
.error_for_status()?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?
.validate()
@@ -72,6 +79,9 @@ impl<'a> FavoriteList<'a> {
.with_context(|| format!("failed to get videos of favorite {} page {}", self.fid, page))?;
let medias = &mut videos["data"]["medias"];
if medias.as_array().is_none_or(|v| v.is_empty()) {
if page == 1 {
break;
}
Err(anyhow!("no medias found in favorite {} page {}", self.fid, page))?;
}
let videos_info: Vec<VideoInfo> = serde_json::from_value(medias.take())

View File

@@ -1,31 +1,35 @@
use anyhow::{Result, ensure};
use reqwest::Method;
use crate::bilibili::{BiliClient, Validate};
use crate::config::VersionedConfig;
use crate::bilibili::{BiliClient, Credential, ErrorForStatusExt, Validate};
pub struct Me<'a> {
client: &'a BiliClient,
mid: String,
credential: &'a Credential,
}
impl<'a> Me<'a> {
pub fn new(client: &'a BiliClient) -> Self {
Self {
client,
mid: Self::my_id(),
}
pub fn new(client: &'a BiliClient, credential: &'a Credential) -> Self {
Self { client, credential }
}
pub async fn get_created_favorites(&self) -> Result<Option<Vec<FavoriteItem>>> {
ensure!(!self.mid.is_empty(), "未获取到用户 ID,请确保填写设置中的 B 站认证信息");
ensure!(
!self.mid().is_empty(),
"未获取到用户 ID,请确保填写设置中的 B 站认证信息"
);
let mut resp = self
.client
.request(Method::GET, "https://api.bilibili.com/x/v3/fav/folder/created/list-all")
.request(
Method::GET,
"https://api.bilibili.com/x/v3/fav/folder/created/list-all",
self.credential,
)
.await
.query(&[("up_mid", &self.mid)])
.query(&[("up_mid", &self.mid())])
.send()
.await?
.error_for_status()?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?
.validate()?;
@@ -33,48 +37,65 @@ impl<'a> Me<'a> {
}
pub async fn get_followed_collections(&self, page_num: i32, page_size: i32) -> Result<Collections> {
ensure!(!self.mid.is_empty(), "未获取到用户 ID,请确保填写设置中的 B 站认证信息");
ensure!(
!self.mid().is_empty(),
"未获取到用户 ID,请确保填写设置中的 B 站认证信息"
);
let mut resp = self
.client
.request(Method::GET, "https://api.bilibili.com/x/v3/fav/folder/collected/list")
.request(
Method::GET,
"https://api.bilibili.com/x/v3/fav/folder/collected/list",
self.credential,
)
.await
.query(&[
("up_mid", self.mid.as_str()),
("pn", page_num.to_string().as_str()),
("ps", page_size.to_string().as_str()),
("platform", "web"),
])
.query(&[("up_mid", self.mid()), ("platform", "web")])
.query(&[("pn", page_num), ("ps", page_size)])
.send()
.await?
.error_for_status()?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?
.validate()?;
Ok(serde_json::from_value(resp["data"].take())?)
}
pub async fn get_followed_uppers(&self, page_num: i32, page_size: i32) -> Result<FollowedUppers> {
ensure!(!self.mid.is_empty(), "未获取到用户 ID,请确保填写设置中的 B 站认证信息");
let mut resp = self
pub async fn get_followed_uppers(
&self,
page_num: i32,
page_size: i32,
name: Option<&str>,
) -> Result<FollowedUppers> {
ensure!(
!self.mid().is_empty(),
"未获取到用户 ID,请确保填写设置中的 B 站认证信息"
);
let url = if name.is_some() {
"https://api.bilibili.com/x/relation/followings/search"
} else {
"https://api.bilibili.com/x/relation/followings"
};
let mut request = self
.client
.request(Method::GET, "https://api.bilibili.com/x/relation/followings")
.request(Method::GET, url, self.credential)
.await
.query(&[
("vmid", self.mid.as_str()),
("pn", page_num.to_string().as_str()),
("ps", page_size.to_string().as_str()),
])
.query(&[("vmid", self.mid())])
.query(&[("pn", page_num), ("ps", page_size)]);
if let Some(name) = name {
request = request.query(&[("name", name)]);
}
let mut resp = request
.send()
.await?
.error_for_status()?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?
.validate()?;
Ok(serde_json::from_value(resp["data"].take())?)
}
fn my_id() -> String {
VersionedConfig::get().load().credential.dedeuserid.clone()
fn mid(&self) -> &str {
&self.credential.dedeuserid
}
}
@@ -89,9 +110,11 @@ pub struct FavoriteItem {
#[derive(Debug, serde::Deserialize)]
pub struct CollectionItem {
pub id: i64,
pub fid: i64,
pub mid: i64,
pub state: i32,
pub title: String,
pub media_count: i64,
}
#[derive(Debug, serde::Deserialize)]

View File

@@ -1,19 +1,22 @@
use std::borrow::Cow;
use std::sync::Arc;
pub use analyzer::{BestStream, FilterOption};
use anyhow::{Result, bail, ensure};
use anyhow::{Context, Result, bail, ensure};
use arc_swap::ArcSwapOption;
use bili_sync_entity::upper_vec::Upper;
use chrono::serde::ts_seconds;
use chrono::{DateTime, Utc};
pub use client::{BiliClient, Client};
pub use collection::{Collection, CollectionItem, CollectionType};
pub use credential::Credential;
pub use credential::{Credential, PollStatus, Qrcode};
pub use danmaku::DanmakuOption;
pub use dynamic::Dynamic;
pub use error::BiliError;
pub use favorite_list::FavoriteList;
use favorite_list::Upper;
pub use me::Me;
use once_cell::sync::Lazy;
use reqwest::{RequestBuilder, StatusCode};
pub use submission::Submission;
pub use video::{Dimension, PageInfo, Video};
pub use watch_later::WatchLater;
@@ -23,6 +26,7 @@ mod client;
mod collection;
mod credential;
mod danmaku;
mod dynamic;
mod error;
mod favorite_list;
mod me;
@@ -43,19 +47,77 @@ pub(crate) trait Validate {
fn validate(self) -> Result<Self::Output>;
}
pub(crate) trait ErrorForStatusExt {
type Output;
fn error_for_status_ext(self) -> Result<Self::Output>;
}
impl Validate for serde_json::Value {
type Output = serde_json::Value;
fn validate(self) -> Result<Self::Output> {
let (code, msg) = match (self["code"].as_i64(), self["message"].as_str()) {
(Some(code), Some(msg)) => (code, msg),
_ => bail!("no code or message found"),
};
ensure!(code == 0, BiliError::RequestFailed(code, msg.to_owned()));
let code = self["code"]
.as_i64()
.with_context(|| BiliError::InvalidResponse(self.to_string()))?;
if code == -352 || !self["data"]["v_voucher"].is_null() {
bail!(BiliError::RiskControlOccurred(self.to_string()));
}
ensure!(code == 0, BiliError::ErrorResponse(code, self.to_string()));
Ok(self)
}
}
impl ErrorForStatusExt for reqwest::Response {
type Output = reqwest::Response;
fn error_for_status_ext(self) -> Result<Self::Output> {
let status = self.status();
// 412 is caused by an excessive request rate and is definitely a risk-control issue
// 403 currently shows up occasionally when downloading video/audio streams; since it is intermittent and clears after a while, treat it as risk control for now
if status == StatusCode::PRECONDITION_FAILED || status == StatusCode::FORBIDDEN {
bail!(BiliError::InvalidStatusCode(
status.as_u16(),
status.canonical_reason().unwrap_or("Unknown")
));
}
Ok(self.error_for_status()?)
}
}
pub(crate) trait WbiSign {
type Output;
fn wbi_sign(self, mixin_key: Option<impl AsRef<str>>) -> Result<Self::Output>;
}
impl WbiSign for RequestBuilder {
type Output = RequestBuilder;
fn wbi_sign(self, mixin_key: Option<impl AsRef<str>>) -> Result<Self::Output> {
let Some(mixin_key) = mixin_key else {
return Ok(self);
};
let (client, req) = self.build_split();
let mut req = req?;
sign_request(&mut req, mixin_key.as_ref(), chrono::Utc::now().timestamp())?;
Ok(RequestBuilder::from_parts(client, req))
}
}
fn sign_request(req: &mut reqwest::Request, mixin_key: &str, timestamp: i64) -> Result<()> {
let mut query_pairs = req.url().query_pairs().collect::<Vec<_>>();
let timestamp = timestamp.to_string();
query_pairs.push(("wts".into(), Cow::Borrowed(timestamp.as_str())));
query_pairs.sort_by(|a, b| a.0.cmp(&b.0));
let query_str = serde_urlencoded::to_string(query_pairs)?.replace('+', "%20");
let w_rid = format!("{:x}", md5::compute(query_str + mixin_key));
req.url_mut()
.query_pairs_mut()
.extend_pairs([("w_rid", w_rid), ("wts", timestamp)]);
Ok(())
}
#[derive(Debug, serde::Deserialize)]
#[serde(untagged)]
/// Note: the variant order here matters, because for an untagged enum serde tries the variants in declaration order
@@ -71,11 +133,16 @@ pub enum VideoInfo {
#[serde(rename = "pic")]
cover: String,
#[serde(rename = "owner")]
upper: Upper<i64>,
upper: Upper<i64, String>,
#[serde(default)]
staff: Option<Vec<Upper<i64, String>>>,
#[serde(with = "ts_seconds")]
ctime: DateTime<Utc>,
#[serde(rename = "pubdate", with = "ts_seconds")]
pubtime: DateTime<Utc>,
is_upower_exclusive: bool,
is_upower_play: bool,
redirect_url: Option<String>,
pages: Vec<PageInfo>,
state: i32,
},
@@ -87,7 +154,7 @@ pub enum VideoInfo {
bvid: String,
intro: String,
cover: String,
upper: Upper<i64>,
upper: Upper<i64, String>,
#[serde(with = "ts_seconds")]
ctime: DateTime<Utc>,
#[serde(with = "ts_seconds")]
@@ -105,7 +172,7 @@ pub enum VideoInfo {
#[serde(rename = "pic")]
cover: String,
#[serde(rename = "owner")]
upper: Upper<i64>,
upper: Upper<i64, String>,
#[serde(with = "ts_seconds")]
ctime: DateTime<Utc>,
#[serde(rename = "add_at", with = "ts_seconds")]
@@ -135,24 +202,44 @@ pub enum VideoInfo {
#[serde(rename = "created", with = "ts_seconds")]
ctime: DateTime<Utc>,
},
// Video info fetched from dynamics (pubtime is absent from this structure, hence default + manual assignment)
Dynamic {
title: String,
bvid: String,
desc: String,
cover: String,
#[serde(default)]
pubtime: DateTime<Utc>,
},
}
#[cfg(test)]
mod tests {
use std::path::Path;
use anyhow::Context;
use futures::StreamExt;
use reqwest::Method;
use super::*;
use crate::bilibili::credential::WbiImg;
use crate::config::VersionedConfig;
use crate::database::setup_database;
use crate::utils::init_logger;
#[ignore = "only for manual test"]
#[tokio::test]
async fn test_video_info_type() {
async fn test_video_info_type() -> Result<()> {
VersionedConfig::init_for_test(&setup_database(Path::new("./test.sqlite")).await?).await?;
let credential = &VersionedConfig::get().read().credential;
init_logger("None,bili_sync=debug", None);
let bili_client = BiliClient::new();
// Fetching an uploader's videos requires the mixin key to sign the request parameters; without it the API reports insufficient permissions and returns an empty result
let Ok(Some(mixin_key)) = bili_client.wbi_img().await.map(|wbi_img| wbi_img.into()) else {
panic!("获取 mixin key 失败");
};
let mixin_key = bili_client
.wbi_img(credential)
.await?
.into_mixin_key()
.context("no mixin key")?;
set_global_mixin_key(mixin_key);
let collection = Collection::new(
&bili_client,
@@ -161,6 +248,7 @@ mod tests {
sid: "4523".to_string(),
collection_type: CollectionType::Season,
},
&credential,
);
let videos = collection
.into_video_stream()
@@ -171,7 +259,7 @@ mod tests {
assert!(videos.iter().all(|v| matches!(v, VideoInfo::Collection { .. })));
assert!(videos.iter().rev().is_sorted_by_key(|v| v.release_datetime()));
// 测试收藏夹
let favorite = FavoriteList::new(&bili_client, "3144336058".to_string());
let favorite = FavoriteList::new(&bili_client, "3144336058".to_string(), &credential);
let videos = favorite
.into_video_stream()
.take(20)
@@ -181,7 +269,7 @@ mod tests {
assert!(videos.iter().all(|v| matches!(v, VideoInfo::Favorite { .. })));
assert!(videos.iter().rev().is_sorted_by_key(|v| v.release_datetime()));
// 测试稍后再看
let watch_later = WatchLater::new(&bili_client);
let watch_later = WatchLater::new(&bili_client, &credential);
let videos = watch_later
.into_video_stream()
.take(20)
@@ -191,7 +279,7 @@ mod tests {
assert!(videos.iter().all(|v| matches!(v, VideoInfo::WatchLater { .. })));
assert!(videos.iter().rev().is_sorted_by_key(|v| v.release_datetime()));
// 测试投稿
let submission = Submission::new(&bili_client, "956761".to_string());
let submission = Submission::new(&bili_client, "956761".to_string(), &credential);
let videos = submission
.into_video_stream()
.take(20)
@@ -200,17 +288,32 @@ mod tests {
.await;
assert!(videos.iter().all(|v| matches!(v, VideoInfo::Submission { .. })));
assert!(videos.iter().rev().is_sorted_by_key(|v| v.release_datetime()));
// 测试动态
let dynamic = Dynamic::new(&bili_client, "659898".to_string(), &credential);
let videos = dynamic
.into_video_stream()
.take(20)
.filter_map(|v| futures::future::ready(v.ok()))
.collect::<Vec<_>>()
.await;
assert!(videos.iter().all(|v| matches!(v, VideoInfo::Dynamic { .. })));
assert!(videos.iter().skip(1).rev().is_sorted_by_key(|v| v.release_datetime()));
Ok(())
}
#[ignore = "only for manual test"]
#[tokio::test]
async fn test_subtitle_parse() -> Result<()> {
VersionedConfig::init_for_test(&setup_database(Path::new("./test.sqlite")).await?).await?;
let credential = &VersionedConfig::get().read().credential;
let bili_client = BiliClient::new();
let Ok(Some(mixin_key)) = bili_client.wbi_img().await.map(|wbi_img| wbi_img.into()) else {
panic!("获取 mixin key 失败");
};
let mixin_key = bili_client
.wbi_img(credential)
.await?
.into_mixin_key()
.context("no mixin key")?;
set_global_mixin_key(mixin_key);
let video = Video::new(&bili_client, "BV1gLfnY8E6D".to_string());
let video = Video::new(&bili_client, "BV1gLfnY8E6D", &credential);
let pages = video.get_pages().await?;
println!("pages: {:?}", pages);
let subtitles = video.get_subtitles(&pages[0]).await?;
@@ -223,4 +326,116 @@ mod tests {
}
Ok(())
}
#[ignore = "only for manual test"]
#[tokio::test]
async fn test_upower_parse() -> Result<()> {
VersionedConfig::init_for_test(&setup_database(Path::new("./test.sqlite")).await?).await?;
let credential = &VersionedConfig::get().read().credential;
let bili_client = BiliClient::new();
let mixin_key = bili_client
.wbi_img(credential)
.await?
.into_mixin_key()
.context("no mixin key")?;
set_global_mixin_key(mixin_key);
for (bvid, (upower_exclusive, upower_play)) in [
("BV1HxXwYEEqt", (true, false)), // upower-exclusive, no permission to watch
("BV16w41187fx", (true, true)),  // upower-exclusive, but watchable
("BV1n34jzPEYq", (false, false)), // regular video
] {
let video = Video::new(&bili_client, bvid, credential);
let info = video.get_view_info().await?;
let VideoInfo::Detail {
is_upower_exclusive,
is_upower_play,
..
} = info
else {
unreachable!();
};
assert_eq!(is_upower_exclusive, upower_exclusive, "bvid: {}", bvid);
assert_eq!(is_upower_play, upower_play, "bvid: {}", bvid);
}
Ok(())
}
#[ignore = "only for manual test"]
#[tokio::test]
async fn test_ep_parse() -> Result<()> {
VersionedConfig::init_for_test(&setup_database(Path::new("./test.sqlite")).await?).await?;
let credential = &VersionedConfig::get().read().credential;
let bili_client = BiliClient::new();
let mixin_key = bili_client
.wbi_img(credential)
.await?
.into_mixin_key()
.context("no mixin key")?;
set_global_mixin_key(mixin_key);
for (bvid, redirect_is_none) in [
("BV1SF411g796", false), // EP
("BV13xtnzPEye", false), // bangumi
("BV1kT4NzTEZj", true),  // regular video
] {
let video = Video::new(&bili_client, bvid, credential);
let info = video.get_view_info().await?;
let VideoInfo::Detail { redirect_url, .. } = info else {
unreachable!();
};
assert_eq!(redirect_url.is_none(), redirect_is_none, "bvid: {}", bvid);
}
Ok(())
}
#[test]
fn test_wbi_key() -> Result<()> {
let key = WbiImg {
img_url: "https://i0.hdslb.com/bfs/wbi/7cd084941338484aae1ad9425b84077c.png".to_string(),
sub_url: "https://i0.hdslb.com/bfs/wbi/4932caff0ff746eab6f01bf08b70ac45.png".to_string(),
};
let key = key.into_mixin_key().context("no mixin key")?;
assert_eq!(key.as_str(), "ea1db124af3c7062474693fa704f4ff8");
let client = Client::new();
let mut req = client
.request(Method::GET, "https://www.baidu.com/", None)
.query(&[("foo", "114"), ("bar", "514")])
.query(&[("zab", "1919810")])
.build()?;
sign_request(&mut req, key.as_str(), 1702204169).unwrap();
let query: Vec<_> = req.url().query_pairs().collect();
assert_eq!(
query,
vec![
("foo".into(), "114".into()),
("bar".into(), "514".into()),
("zab".into(), "1919810".into()),
("w_rid".into(), "8f6f2b5b3d485fe1886cec6a0be8c5d4".into()),
("wts".into(), "1702204169".into()),
]
);
let key = WbiImg {
img_url: "https://i0.hdslb.com/bfs/wbi/7cd084941338484aae1ad9425b84077c.png".to_string(),
sub_url: "https://i0.hdslb.com/bfs/wbi/4932caff0ff746eab6f01bf08b70ac45.png".to_string(),
};
let key = key.into_mixin_key().context("no mixin key")?;
let mut req = client
.request(Method::GET, "https://www.baidu.com/", None)
.query(&[("mid", "11997177"), ("token", "")])
.query(&[("platform", "web"), ("web_location", "1550101")])
.build()?;
sign_request(&mut req, key.as_str(), 1703513649).unwrap();
let query: Vec<_> = req.url().query_pairs().collect();
assert_eq!(
query,
vec![
("mid".into(), "11997177".into()),
("token".into(), "".into()),
("platform".into(), "web".into()),
("web_location".into(), "1550101".into()),
("w_rid".into(), "7d4428b3f2f9ee2811e116ec6fd41a4f".into()),
("wts".into(), "1703513649".into()),
]
);
Ok(())
}
}

View File
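The `sign_request` helper in the diff above canonicalizes the query before hashing: append `wts`, sort pairs by key, and URL-encode with `+` rewritten to `%20`. A std-only sketch of just that canonicalization step (the final md5 `w_rid` is omitted here since it needs the external `md5` crate, and percent-encoding is simplified to a space replacement):

```rust
// Sketch of the WBI query canonicalization above: push the `wts`
// timestamp, sort all pairs by key, then join as k=v with spaces encoded
// as %20. The md5(query + mixin_key) step that produces w_rid is omitted.
fn canonicalize(mut pairs: Vec<(String, String)>, timestamp: i64) -> String {
    pairs.push(("wts".to_string(), timestamp.to_string()));
    pairs.sort_by(|a, b| a.0.cmp(&b.0));
    pairs
        .iter()
        .map(|(k, v)| format!("{}={}", k, v.replace(' ', "%20")))
        .collect::<Vec<_>>()
        .join("&")
}

fn main() {
    let pairs = vec![
        ("foo".to_string(), "114".to_string()),
        ("bar".to_string(), "514".to_string()),
        ("zab".to_string(), "1919810".to_string()),
    ];
    // Keys come out in sorted order with wts interleaved alphabetically.
    let canonical = canonicalize(pairs, 1702204169);
    assert_eq!(canonical, "bar=514&foo=114&wts=1702204169&zab=1919810");
    println!("{}", canonical);
}
```

The real implementation uses `serde_urlencoded` for encoding, so reserved characters are handled properly; this sketch only shows the ordering contract that the `test_wbi_key` test above relies on.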

@@ -1,31 +1,45 @@
use anyhow::{Context, Result, anyhow};
use async_stream::try_stream;
use bili_sync_entity::upper_vec::Upper;
use futures::Stream;
use reqwest::Method;
use serde_json::Value;
use crate::bilibili::credential::encoded_query;
use crate::bilibili::favorite_list::Upper;
use crate::bilibili::{BiliClient, MIXIN_KEY, Validate, VideoInfo};
use crate::bilibili::{BiliClient, Credential, Dynamic, ErrorForStatusExt, MIXIN_KEY, Validate, VideoInfo, WbiSign};
pub struct Submission<'a> {
client: &'a BiliClient,
pub upper_id: String,
credential: &'a Credential,
}
impl<'a> From<Submission<'a>> for Dynamic<'a> {
fn from(submission: Submission<'a>) -> Self {
Dynamic::new(submission.client, submission.upper_id, submission.credential)
}
}
impl<'a> Submission<'a> {
pub fn new(client: &'a BiliClient, upper_id: String) -> Self {
Self { client, upper_id }
pub fn new(client: &'a BiliClient, upper_id: String, credential: &'a Credential) -> Self {
Self {
client,
upper_id,
credential,
}
}
pub async fn get_info(&self) -> Result<Upper<String>> {
pub async fn get_info(&self) -> Result<Upper<String, String>> {
let mut res = self
.client
.request(Method::GET, "https://api.bilibili.com/x/web-interface/card")
.request(
Method::GET,
"https://api.bilibili.com/x/web-interface/card",
self.credential,
)
.await
.query(&[("mid", self.upper_id.as_str())])
.send()
.await?
.error_for_status()?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?
.validate()?;
@@ -34,23 +48,25 @@ impl<'a> Submission<'a> {
async fn get_videos(&self, page: i32) -> Result<Value> {
self.client
.request(Method::GET, "https://api.bilibili.com/x/space/wbi/arc/search")
.request(
Method::GET,
"https://api.bilibili.com/x/space/wbi/arc/search",
self.credential,
)
.await
.query(&encoded_query(
vec![
("mid", self.upper_id.as_str()),
("order", "pubdate"),
("order_avoided", "true"),
("platform", "web"),
("web_location", "1550101"),
("pn", page.to_string().as_str()),
("ps", "30"),
],
MIXIN_KEY.load().as_deref(),
))
.query(&[
("mid", self.upper_id.as_str()),
("order", "pubdate"),
("order_avoided", "true"),
("platform", "web"),
("web_location", "1550101"),
("ps", "30"),
])
.query(&[("pn", page)])
.wbi_sign(MIXIN_KEY.load().as_deref())?
.send()
.await?
.error_for_status()?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?
.validate()
@@ -66,6 +82,9 @@ impl<'a> Submission<'a> {
.with_context(|| format!("failed to get videos of upper {} page {}", self.upper_id, page))?;
let vlist = &mut videos["data"]["list"]["vlist"];
if vlist.as_array().is_none_or(|v| v.is_empty()) {
if page == 1 {
break;
}
Err(anyhow!("no medias found in upper {} page {}", self.upper_id, page))?;
}
let videos_info: Vec<VideoInfo> = serde_json::from_value(vlist.take())

View File
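The submission diff above introduces the same empty-page rule as the favorites source: an empty first page means the source simply has no videos, while an empty later page is unexpected. A minimal sketch of that rule, with a hypothetical `handle_page` standing in for the stream logic:

```rust
// Sketch of the empty-page rule above: page 1 empty -> end the stream
// cleanly; a later page empty -> treat as an error, since pagination
// claimed there was more data.
fn handle_page(page: u32, items: &[&str]) -> Result<bool, String> {
    if items.is_empty() {
        if page == 1 {
            return Ok(false); // nothing to iterate, stop without error
        }
        return Err(format!("no medias found in page {}", page));
    }
    Ok(true) // keep consuming this page
}

fn main() {
    assert_eq!(handle_page(1, &[]), Ok(false));
    assert!(handle_page(2, &[]).is_err());
    assert_eq!(handle_page(1, &["BV1xx"]), Ok(true));
    println!("ok");
}
```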

@@ -29,7 +29,7 @@ pub struct SubTitleItem {
impl SubTitleInfo {
pub fn is_ai_sub(&self) -> bool {
// AI     aisubtitle.hdslb.com/bfs/ai_subtitle/xxxx
// non-AI aisubtitle.hdslb.com/bfs/subtitle/xxxx
// non-AI: aisubtitle.hdslb.com/bfs/subtitle/xxxx
self.subtitle_url.contains("ai_subtitle")
}
}

View File
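The `is_ai_sub` check above distinguishes the two subtitle URL layouts with a simple substring test, which the comment diff documents:

```rust
// Sketch of the AI-subtitle detection above: AI-generated subtitles live
// under /bfs/ai_subtitle/ while human subtitles live under /bfs/subtitle/,
// so a substring check on the URL path is sufficient.
fn is_ai_sub(subtitle_url: &str) -> bool {
    subtitle_url.contains("ai_subtitle")
}

fn main() {
    assert!(is_ai_sub("aisubtitle.hdslb.com/bfs/ai_subtitle/xxxx"));
    assert!(!is_ai_sub("aisubtitle.hdslb.com/bfs/subtitle/xxxx"));
    println!("ok");
}
```

Note the host name `aisubtitle.hdslb.com` contains "aisubtitle" but not "ai_subtitle", so the underscore in the needle is what keeps the check precise.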

@@ -1,34 +1,22 @@
use anyhow::{Result, ensure};
use anyhow::{Context, Result, ensure};
use futures::TryStreamExt;
use futures::stream::FuturesUnordered;
use prost::Message;
use reqwest::Method;
use serde_json::Value;
use crate::bilibili::analyzer::PageAnalyzer;
use crate::bilibili::client::BiliClient;
use crate::bilibili::credential::encoded_query;
use crate::bilibili::danmaku::{DanmakuElem, DanmakuWriter, DmSegMobileReply};
use crate::bilibili::subtitle::{SubTitle, SubTitleBody, SubTitleInfo, SubTitlesInfo};
use crate::bilibili::{MIXIN_KEY, Validate, VideoInfo};
use crate::bilibili::{Credential, ErrorForStatusExt, MIXIN_KEY, Validate, VideoInfo, WbiSign};
pub struct Video<'a> {
client: &'a BiliClient,
pub bvid: String,
pub bvid: &'a str,
credential: &'a Credential,
}
#[derive(Debug, serde::Deserialize)]
pub struct Tag {
pub tag_name: String,
}
impl serde::Serialize for Tag {
fn serialize<S>(&self, serializer: S) -> core::result::Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
serializer.serialize_str(&self.tag_name)
}
}
#[derive(Debug, serde::Deserialize, Default)]
pub struct PageInfo {
pub cid: i64,
@@ -48,58 +36,80 @@ pub struct Dimension {
}
impl<'a> Video<'a> {
pub fn new(client: &'a BiliClient, bvid: String) -> Self {
Self { client, bvid }
pub fn new(client: &'a BiliClient, bvid: &'a str, credential: &'a Credential) -> Self {
Self {
client,
bvid,
credential,
}
}
/// Calls the video info API directly to get detailed video info, which includes the video's page list
pub async fn get_view_info(&self) -> Result<VideoInfo> {
let mut res = self
.client
.request(Method::GET, "https://api.bilibili.com/x/web-interface/view")
.request(
Method::GET,
"https://api.bilibili.com/x/web-interface/wbi/view",
self.credential,
)
.await
.query(&[("bvid", &self.bvid)])
.wbi_sign(MIXIN_KEY.load().as_deref())?
.send()
.await?
.error_for_status()?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?
.validate()?;
Ok(serde_json::from_value(res["data"].take())?)
}
#[allow(dead_code)]
#[cfg(test)]
pub async fn get_pages(&self) -> Result<Vec<PageInfo>> {
let mut res = self
.client
.request(Method::GET, "https://api.bilibili.com/x/player/pagelist")
.request(
Method::GET,
"https://api.bilibili.com/x/player/pagelist",
self.credential,
)
.await
.query(&[("bvid", &self.bvid)])
.send()
.await?
.error_for_status()?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?
.validate()?;
Ok(serde_json::from_value(res["data"].take())?)
}
pub async fn get_tags(&self) -> Result<Vec<Tag>> {
pub async fn get_tags(&self) -> Result<Vec<String>> {
let mut res = self
.client
.request(Method::GET, "https://api.bilibili.com/x/web-interface/view/detail/tag")
.request(
Method::GET,
"https://api.bilibili.com/x/web-interface/view/detail/tag",
self.credential,
)
.await
.query(&[("bvid", &self.bvid)])
.send()
.await?
.error_for_status()?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?
.validate()?;
Ok(serde_json::from_value(res["data"].take())?)
Ok(res["data"]
.as_array_mut()
.context("tags is not an array")?
.iter_mut()
.filter_map(|v| if let Value::String(s) = v.take() { Some(s) } else { None })
.collect())
}
pub async fn get_danmaku_writer(&self, page: &'a PageInfo) -> Result<DanmakuWriter> {
pub async fn get_danmaku_writer(&self, page: &'a PageInfo) -> Result<DanmakuWriter<'a>> {
let tasks = FuturesUnordered::new();
for i in 1..=page.duration.div_ceil(360) {
tasks.push(self.get_danmaku_segment(page, i as i64));
@@ -113,12 +123,17 @@ impl<'a> Video<'a> {
async fn get_danmaku_segment(&self, page: &PageInfo, segment_idx: i64) -> Result<Vec<DanmakuElem>> {
let mut res = self
.client
.request(Method::GET, "http://api.bilibili.com/x/v2/dm/web/seg.so")
.request(
Method::GET,
"https://api.bilibili.com/x/v2/dm/wbi/web/seg.so",
self.credential,
)
.await
.query(&[("type", 1), ("oid", page.cid), ("segment_index", segment_idx)])
.wbi_sign(MIXIN_KEY.load().as_deref())?
.send()
.await?
.error_for_status()?;
.error_for_status_ext()?;
let headers = std::mem::take(res.headers_mut());
let content_type = headers.get("content-type");
ensure!(
@@ -133,22 +148,24 @@ impl<'a> Video<'a> {
pub async fn get_page_analyzer(&self, page: &PageInfo) -> Result<PageAnalyzer> {
let mut res = self
.client
.request(Method::GET, "https://api.bilibili.com/x/player/wbi/playurl")
.request(
Method::GET,
"https://api.bilibili.com/x/player/wbi/playurl",
self.credential,
)
.await
.query(&encoded_query(
vec![
("bvid", self.bvid.as_str()),
("cid", page.cid.to_string().as_str()),
("qn", "127"),
("otype", "json"),
("fnval", "4048"),
("fourk", "1"),
],
MIXIN_KEY.load().as_deref(),
))
.query(&[
("bvid", self.bvid),
("qn", "127"),
("otype", "json"),
("fnval", "4048"),
("fourk", "1"),
])
.query(&[("cid", page.cid)])
.wbi_sign(MIXIN_KEY.load().as_deref())?
.send()
.await?
.error_for_status()?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?
.validate()?;
@@ -158,27 +175,30 @@ impl<'a> Video<'a> {
pub async fn get_subtitles(&self, page: &PageInfo) -> Result<Vec<SubTitle>> {
let mut res = self
.client
.request(Method::GET, "https://api.bilibili.com/x/player/wbi/v2")
.request(Method::GET, "https://api.bilibili.com/x/player/wbi/v2", self.credential)
.await
.query(&encoded_query(
vec![("cid", &page.cid.to_string()), ("bvid", &self.bvid)],
MIXIN_KEY.load().as_deref(),
))
.query(&[("bvid", self.bvid)])
.query(&[("cid", page.cid)])
.wbi_sign(MIXIN_KEY.load().as_deref())?
.send()
.await?
.error_for_status()?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?
.validate()?;
// The API response contains a list of subtitles, each with its language and a JSON download URL
let subtitles_info: SubTitlesInfo = serde_json::from_value(res["data"]["subtitle"].take())?;
let tasks = subtitles_info
.subtitles
.into_iter()
.filter(|v| !v.is_ai_sub())
.map(|v| self.get_subtitle(v))
.collect::<FuturesUnordered<_>>();
tasks.try_collect().await
match serde_json::from_value::<Option<SubTitlesInfo>>(res["data"]["subtitle"].take())? {
Some(subtitles_info) => {
let tasks = subtitles_info
.subtitles
.into_iter()
.filter(|v| !v.is_ai_sub())
.map(|v| self.get_subtitle(v))
.collect::<FuturesUnordered<_>>();
tasks.try_collect().await
}
None => Ok(vec![]),
}
}
async fn get_subtitle(&self, info: SubTitleInfo) -> Result<SubTitle> {
@@ -188,7 +208,7 @@ impl<'a> Video<'a> {
.request(Method::GET, format!("https:{}", &info.subtitle_url).as_str(), None)
.send()
.await?
.error_for_status()?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?;
let body: SubTitleBody = serde_json::from_value(res["body"].take())?;

View File
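In the video diff above, `get_danmaku_writer` spawns one `get_danmaku_segment` task per 360 seconds of the page via `duration.div_ceil(360)`, since the seg.so API serves danmaku in six-minute windows. The segment math in isolation:

```rust
// Sketch of the danmaku segment count above: the seg.so endpoint returns
// danmaku in 6-minute (360 s) windows, so a page of `duration` seconds
// needs ceil(duration / 360) segment requests.
fn segment_count(duration: u32) -> u32 {
    duration.div_ceil(360)
}

fn main() {
    assert_eq!(segment_count(1), 1);    // anything under 6 minutes is one segment
    assert_eq!(segment_count(360), 1);  // exactly 6 minutes still fits one segment
    assert_eq!(segment_count(361), 2);  // one second over spills into a second segment
    println!("{}", segment_count(3600));
}
```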

@@ -1,25 +1,30 @@
use anyhow::{Context, Result, anyhow};
use anyhow::{Context, Result};
use async_stream::try_stream;
use futures::Stream;
use serde_json::Value;
use crate::bilibili::{BiliClient, Validate, VideoInfo};
use crate::bilibili::{BiliClient, Credential, ErrorForStatusExt, Validate, VideoInfo};
pub struct WatchLater<'a> {
client: &'a BiliClient,
credential: &'a Credential,
}
impl<'a> WatchLater<'a> {
pub fn new(client: &'a BiliClient) -> Self {
Self { client }
pub fn new(client: &'a BiliClient, credential: &'a Credential) -> Self {
Self { client, credential }
}
async fn get_videos(&self) -> Result<Value> {
self.client
.request(reqwest::Method::GET, "https://api.bilibili.com/x/v2/history/toview")
.request(
reqwest::Method::GET,
"https://api.bilibili.com/x/v2/history/toview",
self.credential,
)
.await
.send()
.await?
.error_for_status()?
.error_for_status_ext()?
.json::<serde_json::Value>()
.await?
.validate()
@@ -33,7 +38,7 @@ impl<'a> WatchLater<'a> {
.with_context(|| "Failed to get watch later list")?;
let list = &mut videos["data"]["list"];
if list.as_array().is_none_or(|v| v.is_empty()) {
Err(anyhow!("No videos found in watch later list"))?;
return;
}
let videos_info: Vec<VideoInfo> =
serde_json::from_value(list.take()).with_context(|| "Failed to parse watch later list")?;

View File

@@ -1,4 +1,5 @@
use std::borrow::Cow;
use std::path::PathBuf;
use std::sync::LazyLock;
use clap::Parser;
@@ -13,6 +14,15 @@ pub struct Args {
#[arg(short, long, default_value = "None,bili_sync=info", env = "RUST_LOG")]
pub log_level: String,
#[arg(short, long, env = "DISABLE_CREDENTIAL_REFRESH")]
pub disable_credential_refresh: bool,
#[arg(short, long, env = "BILI_SYNC_CONFIG_DIR")]
pub config_dir: Option<PathBuf>,
#[arg(short, long, env = "BILI_SYNC_FFMPEG_PATH")]
pub ffmpeg_path: Option<String>,
}
mod built_info {

View File

@@ -1,19 +1,29 @@
use std::path::PathBuf;
use std::sync::LazyLock;
use std::sync::{Arc, LazyLock};
use anyhow::{Result, bail};
use croner::parser::CronParser;
use itertools::Itertools;
use sea_orm::DatabaseConnection;
use serde::{Deserialize, Serialize};
use validator::Validate;
use crate::bilibili::{Credential, DanmakuOption, FilterOption};
use crate::config::LegacyConfig;
use crate::config::default::{default_auth_token, default_bind_address, default_time_format};
use crate::config::item::{ConcurrentLimit, NFOTimeType};
use crate::config::args::ARGS;
use crate::config::default::{
default_auth_token, default_bind_address, default_collection_path, default_favorite_path, default_submission_path,
default_time_format,
};
use crate::config::item::{ConcurrentLimit, NFOTimeType, SkipOption, Trigger};
use crate::notifier::Notifier;
use crate::utils::model::{load_db_config, save_db_config};
pub static CONFIG_DIR: LazyLock<PathBuf> =
LazyLock::new(|| dirs::config_dir().expect("No config path found").join("bili-sync"));
pub static CONFIG_DIR: LazyLock<PathBuf> = LazyLock::new(|| {
ARGS.config_dir
.clone()
.or_else(|| dirs::config_dir().map(|dir| dir.join("bili-sync")))
.expect("No config path found")
});
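The `CONFIG_DIR` change above makes an explicit `--config-dir` / `BILI_SYNC_CONFIG_DIR` override take precedence, falling back to the platform config directory joined with `bili-sync`. A std-only sketch of that precedence, where `platform_dir` stands in for `dirs::config_dir()` (an external crate):

```rust
use std::path::PathBuf;

// Sketch of the CONFIG_DIR precedence above: an explicit override wins;
// otherwise use <platform config dir>/bili-sync. `platform_dir` is a
// stand-in for dirs::config_dir() in this sketch.
fn resolve_config_dir(override_dir: Option<PathBuf>, platform_dir: Option<PathBuf>) -> Option<PathBuf> {
    override_dir.or_else(|| platform_dir.map(|dir| dir.join("bili-sync")))
}

fn main() {
    let explicit = resolve_config_dir(Some(PathBuf::from("/etc/bili")), Some(PathBuf::from("/home/u/.config")));
    assert_eq!(explicit, Some(PathBuf::from("/etc/bili")));
    let fallback = resolve_config_dir(None, Some(PathBuf::from("/home/u/.config")));
    assert_eq!(fallback, Some(PathBuf::from("/home/u/.config/bili-sync")));
    println!("ok");
}
```

If both are `None`, the result is `None`, which the real static turns into a panic via `expect("No config path found")`.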
#[derive(Serialize, Deserialize, Validate, Clone)]
pub struct Config {
@@ -22,14 +32,26 @@ pub struct Config {
pub credential: Credential,
pub filter_option: FilterOption,
pub danmaku_option: DanmakuOption,
#[serde(default)]
pub skip_option: SkipOption,
pub video_name: String,
pub page_name: String,
pub interval: u64,
#[serde(default)]
pub notifiers: Option<Arc<Vec<Notifier>>>,
#[serde(default = "default_favorite_path")]
pub favorite_default_path: String,
#[serde(default = "default_collection_path")]
pub collection_default_path: String,
#[serde(default = "default_submission_path")]
pub submission_default_path: String,
pub interval: Trigger,
pub upper_path: PathBuf,
pub nfo_time_type: NFOTimeType,
pub concurrent_limit: ConcurrentLimit,
pub time_format: String,
pub cdn_sorting: bool,
#[serde(default)]
pub try_upower_anyway: bool,
pub version: u64,
}
@@ -65,25 +87,29 @@ impl Config {
if !(self.concurrent_limit.video > 0 && self.concurrent_limit.page > 0) {
errors.push("video 和 page 允许的并发数必须大于 0");
}
match &self.interval {
Trigger::Interval(secs) => {
if *secs <= 60 {
errors.push("下载任务执行间隔时间必须大于 60 秒");
}
}
Trigger::Cron(cron) => {
if CronParser::builder()
.seconds(croner::parser::Seconds::Required)
.dom_and_dow(true)
.build()
.parse(cron)
.is_err()
{
errors.push("Cron 表达式无效,正确格式为“秒 分 时 日 月 周”");
}
}
};
if !errors.is_empty() {
bail!(
errors
.into_iter()
.map(|e| format!("- {}", e))
.collect::<Vec<_>>()
.join("\n")
);
bail!(errors.into_iter().map(|e| format!("- {}", e)).join("\n"));
}
Ok(())
}
#[cfg(test)]
pub(super) fn test_default() -> Self {
Self {
cdn_sorting: true,
..Default::default()
}
}
}
impl Default for Config {
@@ -94,35 +120,20 @@ impl Default for Config {
credential: Credential::default(),
filter_option: FilterOption::default(),
danmaku_option: DanmakuOption::default(),
skip_option: SkipOption::default(),
video_name: "{{title}}".to_owned(),
page_name: "{{bvid}}".to_owned(),
interval: 1200,
notifiers: None,
favorite_default_path: default_favorite_path(),
collection_default_path: default_collection_path(),
submission_default_path: default_submission_path(),
interval: Trigger::default(),
upper_path: CONFIG_DIR.join("upper_face"),
nfo_time_type: NFOTimeType::FavTime,
concurrent_limit: ConcurrentLimit::default(),
time_format: default_time_format(),
cdn_sorting: false,
version: 0,
}
}
}
impl From<LegacyConfig> for Config {
fn from(legacy: LegacyConfig) -> Self {
Self {
auth_token: legacy.auth_token,
bind_address: legacy.bind_address,
credential: legacy.credential,
filter_option: legacy.filter_option,
danmaku_option: legacy.danmaku_option,
video_name: legacy.video_name,
page_name: legacy.page_name,
interval: legacy.interval,
upper_path: legacy.upper_path,
nfo_time_type: legacy.nfo_time_type,
concurrent_limit: legacy.concurrent_limit,
time_format: legacy.time_format,
cdn_sorting: legacy.cdn_sorting,
try_upower_anyway: false,
version: 0,
}
}

View File

@@ -1,9 +1,5 @@
use rand::seq::IndexedRandom;
pub(super) fn default_time_format() -> String {
"%Y-%m-%d".to_string()
}
/// 默认的 auth_token 实现,生成随机 16 位字符串
pub(super) fn default_auth_token() -> String {
let byte_choices = b"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!@#$%^&*()_+-=";
@@ -13,6 +9,22 @@ pub(super) fn default_auth_token() -> String {
.collect()
}
pub(super) fn default_bind_address() -> String {
pub(crate) fn default_bind_address() -> String {
"0.0.0.0:12345".to_string()
}
pub(super) fn default_time_format() -> String {
"%Y-%m-%d".to_string()
}
pub fn default_favorite_path() -> String {
"收藏夹/{{name}}".to_owned()
}
pub fn default_collection_path() -> String {
"合集/{{name}}".to_owned()
}
pub fn default_submission_path() -> String {
"投稿/{{name}}".to_owned()
}

View File

@@ -5,6 +5,7 @@ use handlebars::handlebars_helper;
use crate::config::versioned_cache::VersionedCache;
use crate::config::{Config, PathSafeTemplate};
use crate::notifier::{Notifier, webhook_template_content, webhook_template_key};
pub static TEMPLATE: LazyLock<VersionedCache<handlebars::Handlebars<'static>>> =
LazyLock::new(|| VersionedCache::new(create_template).expect("Failed to create handlebars template"));
@@ -12,8 +13,18 @@ pub static TEMPLATE: LazyLock<VersionedCache<handlebars::Handlebars<'static>>> =
fn create_template(config: &Config) -> Result<handlebars::Handlebars<'static>> {
let mut handlebars = handlebars::Handlebars::new();
handlebars.register_helper("truncate", Box::new(truncate));
handlebars.path_safe_register("video", config.video_name.to_owned())?;
handlebars.path_safe_register("page", config.page_name.to_owned())?;
handlebars.path_safe_register("video", config.video_name.clone())?;
handlebars.path_safe_register("page", config.page_name.clone())?;
handlebars.path_safe_register("favorite_default_path", config.favorite_default_path.clone())?;
handlebars.path_safe_register("collection_default_path", config.collection_default_path.clone())?;
handlebars.path_safe_register("submission_default_path", config.submission_default_path.clone())?;
if let Some(notifiers) = &config.notifiers {
for notifier in notifiers.iter() {
if let Notifier::Webhook { url, template, .. } = notifier {
handlebars.register_template_string(&webhook_template_key(url), webhook_template_content(template))?;
}
}
}
Ok(handlebars)
}
@@ -81,7 +92,7 @@ mod tests {
"test_truncate",
&json!({"title": "你说得对,但是 Rust 是由 Mozilla 自主研发的一款全新的编译期格斗游戏。\
编译将发生在一个被称作「Cargo」的构建系统中。在这里被引用的指针将被授予「生命周期」之力导引对象安全。\
你将扮演一位名为「Rustacean」的神秘角色, 在与「Rustc」的搏斗中邂逅各种骨骼惊奇的傲娇报错。\
你将扮演一位名为「Rustacean」的神秘角色在与「Rustc」的搏斗中邂逅各种骨骼惊奇的傲娇报错。\
征服她们、通过编译同时逐步发掘「C++」程序崩溃的真相。"})
)
.unwrap(),

View File

@@ -1,19 +1,10 @@
use std::path::PathBuf;
use anyhow::Result;
use serde::{Deserialize, Serialize};
use crate::utils::filenamify::filenamify;
/// 稍后再看的配置
#[derive(Serialize, Deserialize, Default)]
pub struct WatchLaterConfig {
pub enabled: bool,
pub path: PathBuf,
}
/// NFO 文件使用的时间类型
#[derive(Serialize, Deserialize, Default, Clone)]
#[derive(Serialize, Deserialize, Default, Clone, Copy)]
#[serde(rename_all = "lowercase")]
pub enum NFOTimeType {
#[default]
@@ -69,6 +60,28 @@ impl Default for ConcurrentLimit {
}
}
#[derive(Serialize, Deserialize, Clone, Default)]
pub struct SkipOption {
pub no_poster: bool,
pub no_video_nfo: bool,
pub no_upper: bool,
pub no_danmaku: bool,
pub no_subtitle: bool,
}
#[derive(Serialize, Deserialize, Clone)]
#[serde(untagged)]
pub enum Trigger {
Interval(u64),
Cron(String),
}
impl Default for Trigger {
fn default() -> Self {
Trigger::Interval(1200)
}
}
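With `#[serde(untagged)]`, `Trigger` deserializes from either a bare integer (interval seconds) or a cron string, with variants tried in declaration order. A minimal std-only sketch of that first-match-wins fallback (the `parse_trigger` helper is hypothetical, not part of this diff):

```rust
// Sketch: mimic serde(untagged)'s resolution order for Trigger —
// try the integer variant first, fall back to treating the input as a cron string.
#[derive(Debug, PartialEq)]
pub enum Trigger {
    Interval(u64),
    Cron(String),
}

pub fn parse_trigger(raw: &str) -> Trigger {
    match raw.trim().parse::<u64>() {
        Ok(secs) => Trigger::Interval(secs),
        Err(_) => Trigger::Cron(raw.trim().to_owned()),
    }
}

fn main() {
    assert_eq!(parse_trigger("1200"), Trigger::Interval(1200));
    assert_eq!(parse_trigger("0 0 3 * * *"), Trigger::Cron("0 0 3 * * *".to_owned()));
}
```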
pub trait PathSafeTemplate {
fn path_safe_register(&mut self, name: &'static str, template: impl Into<String>) -> Result<()>;
fn path_safe_render(&self, name: &'static str, data: &serde_json::Value) -> Result<String>;

View File

@@ -1,134 +0,0 @@
use std::collections::HashMap;
use std::path::{Path, PathBuf};
use anyhow::Result;
use sea_orm::DatabaseConnection;
use serde::de::{Deserializer, MapAccess, Visitor};
use serde::ser::SerializeMap;
use serde::{Deserialize, Serialize};
use crate::bilibili::{CollectionItem, CollectionType, Credential, DanmakuOption, FilterOption};
use crate::config::Config;
use crate::config::default::{default_auth_token, default_bind_address, default_time_format};
use crate::config::item::{ConcurrentLimit, NFOTimeType, WatchLaterConfig};
use crate::utils::model::migrate_legacy_config;
#[derive(Serialize, Deserialize)]
pub struct LegacyConfig {
#[serde(default = "default_auth_token")]
pub auth_token: String,
#[serde(default = "default_bind_address")]
pub bind_address: String,
pub credential: Credential,
pub filter_option: FilterOption,
#[serde(default)]
pub danmaku_option: DanmakuOption,
pub favorite_list: HashMap<String, PathBuf>,
#[serde(
default,
serialize_with = "serialize_collection_list",
deserialize_with = "deserialize_collection_list"
)]
pub collection_list: HashMap<CollectionItem, PathBuf>,
#[serde(default)]
pub submission_list: HashMap<String, PathBuf>,
#[serde(default)]
pub watch_later: WatchLaterConfig,
pub video_name: String,
pub page_name: String,
pub interval: u64,
pub upper_path: PathBuf,
#[serde(default)]
pub nfo_time_type: NFOTimeType,
#[serde(default)]
pub concurrent_limit: ConcurrentLimit,
#[serde(default = "default_time_format")]
pub time_format: String,
#[serde(default)]
pub cdn_sorting: bool,
}
impl LegacyConfig {
async fn load_from_file(path: &Path) -> Result<Self> {
let legacy_config_str = tokio::fs::read_to_string(path).await?;
Ok(toml::from_str(&legacy_config_str)?)
}
pub async fn migrate_from_file(path: &Path, connection: &DatabaseConnection) -> Result<Config> {
let legacy_config = Self::load_from_file(path).await?;
migrate_legacy_config(&legacy_config, connection).await?;
Ok(legacy_config.into())
}
}
/*
后面是用于自定义 Collection 的序列化、反序列化的样板代码
*/
pub(super) fn serialize_collection_list<S>(
collection_list: &HashMap<CollectionItem, PathBuf>,
serializer: S,
) -> Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
let mut map = serializer.serialize_map(Some(collection_list.len()))?;
for (k, v) in collection_list {
let prefix = match k.collection_type {
CollectionType::Series => "series",
CollectionType::Season => "season",
};
map.serialize_entry(&[prefix, &k.mid, &k.sid].join(":"), v)?;
}
map.end()
}
pub(super) fn deserialize_collection_list<'de, D>(deserializer: D) -> Result<HashMap<CollectionItem, PathBuf>, D::Error>
where
D: Deserializer<'de>,
{
struct CollectionListVisitor;
impl<'de> Visitor<'de> for CollectionListVisitor {
type Value = HashMap<CollectionItem, PathBuf>;
fn expecting(&self, formatter: &mut std::fmt::Formatter) -> std::fmt::Result {
formatter.write_str("a map of collection list")
}
fn visit_map<A>(self, mut map: A) -> Result<Self::Value, A::Error>
where
A: MapAccess<'de>,
{
let mut collection_list = HashMap::new();
while let Some((key, value)) = map.next_entry::<String, PathBuf>()? {
let collection_item = match key.split(':').collect::<Vec<&str>>().as_slice() {
[prefix, mid, sid] => {
let collection_type = match *prefix {
"series" => CollectionType::Series,
"season" => CollectionType::Season,
_ => {
return Err(serde::de::Error::custom(
"invalid collection type, should be series or season",
));
}
};
CollectionItem {
mid: mid.to_string(),
sid: sid.to_string(),
collection_type,
}
}
_ => {
return Err(serde::de::Error::custom(
"invalid collection key, should be series:mid:sid or season:mid:sid",
));
}
};
collection_list.insert(collection_item, value);
}
Ok(collection_list)
}
}
deserializer.deserialize_map(CollectionListVisitor)
}
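The legacy collection keys above are flat `type:mid:sid` strings. A std-only sketch of the same split-and-validate step performed by the visitor (hypothetical `parse_collection_key` helper):

```rust
// Sketch: split a legacy "series:mid:sid" / "season:mid:sid" key the way the
// deserializer's visitor does, rejecting unknown prefixes and malformed shapes.
fn parse_collection_key(key: &str) -> Result<(&str, &str, &str), &'static str> {
    let parts: Vec<&str> = key.split(':').collect();
    match parts.as_slice() {
        &[prefix, mid, sid] => match prefix {
            "series" | "season" => Ok((prefix, mid, sid)),
            _ => Err("invalid collection type, should be series or season"),
        },
        _ => Err("invalid collection key, should be series:mid:sid or season:mid:sid"),
    }
}

fn main() {
    assert_eq!(parse_collection_key("season:12:34"), Ok(("season", "12", "34")));
    assert!(parse_collection_key("album:12:34").is_err());
    assert!(parse_collection_key("series:12").is_err());
}
```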

View File

@@ -3,14 +3,13 @@ mod current;
mod default;
mod handlebar;
mod item;
mod legacy;
mod versioned_cache;
mod versioned_config;
pub use crate::config::args::{ARGS, version};
pub use crate::config::current::{CONFIG_DIR, Config};
pub(crate) use crate::config::default::default_bind_address;
pub use crate::config::handlebar::TEMPLATE;
pub use crate::config::item::{NFOTimeType, PathSafeTemplate, RateLimit};
pub use crate::config::legacy::LegacyConfig;
pub use crate::config::item::{ConcurrentDownloadLimit, NFOTimeType, PathSafeTemplate, RateLimit, Trigger};
pub use crate::config::versioned_cache::VersionedCache;
pub use crate::config::versioned_config::VersionedConfig;

View File

@@ -1,54 +1,56 @@
use std::sync::Arc;
use std::sync::atomic::{AtomicU64, Ordering};
use anyhow::Result;
use arc_swap::{ArcSwap, Guard};
use tokio_util::future::FutureExt;
use tokio_util::sync::CancellationToken;
use crate::config::{Config, VersionedConfig};
pub struct VersionedCache<T> {
inner: ArcSwap<T>,
version: AtomicU64,
builder: fn(&Config) -> Result<T>,
mutex: parking_lot::Mutex<()>,
inner: Arc<ArcSwap<T>>,
cancel_token: CancellationToken,
}
impl<T> VersionedCache<T> {
/// 一个跟随全局配置变化自动更新的缓存
impl<T: Send + Sync + 'static> VersionedCache<T> {
pub fn new(builder: fn(&Config) -> Result<T>) -> Result<Self> {
let current_config = VersionedConfig::get().load();
let current_version = current_config.version;
let initial_value = builder(&current_config)?;
Ok(Self {
inner: ArcSwap::from_pointee(initial_value),
version: AtomicU64::new(current_version),
builder,
mutex: parking_lot::Mutex::new(()),
})
let mut rx = VersionedConfig::get().subscribe();
let initial_value = builder(&rx.borrow_and_update())?;
let cancel_token = CancellationToken::new();
let inner = Arc::new(ArcSwap::from_pointee(initial_value));
let inner_clone = inner.clone();
tokio::spawn(
async move {
while rx.changed().await.is_ok() {
match builder(&rx.borrow()) {
Ok(new_value) => {
inner_clone.store(Arc::new(new_value));
}
Err(e) => {
error!("Failed to update versioned cache: {:?}", e);
}
}
}
}
.with_cancellation_token_owned(cancel_token.clone()),
);
Ok(Self { inner, cancel_token })
}
pub fn load(&self) -> Guard<Arc<T>> {
self.reload_if_needed();
/// 获取一个临时的只读引用
pub fn read(&self) -> Guard<Arc<T>> {
self.inner.load()
}
fn reload_if_needed(&self) {
let current_config = VersionedConfig::get().load();
let current_version = current_config.version;
let version = self.version.load(Ordering::Relaxed);
if version < current_version {
let _lock = self.mutex.lock();
if self.version.load(Ordering::Relaxed) >= current_version {
return;
}
match (self.builder)(&current_config) {
Err(e) => {
error!("Failed to rebuild versioned cache: {:?}", e);
}
Ok(new_value) => {
self.inner.store(Arc::new(new_value));
self.version.store(current_version, Ordering::Relaxed);
}
}
}
/// 获取当前缓存的完整快照
pub fn snapshot(&self) -> Arc<T> {
self.inner.load_full()
}
}
impl<T> Drop for VersionedCache<T> {
fn drop(&mut self) {
self.cancel_token.cancel();
}
}
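The rewritten `VersionedCache` replaces lazy version polling with a push model: a spawned task waits on `watch::Receiver::changed` and rebuilds the cached value each time a new config is published. A rough std-only sketch of that subscriber loop, with `mpsc` plus a `Mutex` standing in for `tokio::sync::watch` and `ArcSwap`:

```rust
use std::sync::{Arc, Mutex, mpsc};
use std::thread;

// Sketch: feed config versions to a background subscriber that rebuilds the
// cached value for each one; dropping the sender ends the loop, playing the
// role of the CancellationToken in the real code.
fn run_cache(versions: &[u64]) -> String {
    let cache = Arc::new(Mutex::new(String::from("built from v0")));
    let (tx, rx) = mpsc::channel::<u64>();
    let cache_clone = Arc::clone(&cache);
    let subscriber = thread::spawn(move || {
        for version in rx {
            // `builder(&config)` in the real code; a trivial rebuild here
            *cache_clone.lock().unwrap() = format!("built from v{}", version);
        }
    });
    for &v in versions {
        tx.send(v).unwrap();
    }
    drop(tx);
    subscriber.join().unwrap();
    let result = cache.lock().unwrap().clone();
    result
}

fn main() {
    assert_eq!(run_cache(&[1, 2]), "built from v2");
    assert_eq!(run_cache(&[]), "built from v0");
}
```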

View File

@@ -1,63 +1,58 @@
use std::sync::Arc;
use anyhow::{Result, anyhow, bail};
use anyhow::{Result, bail};
use arc_swap::{ArcSwap, Guard};
use sea_orm::DatabaseConnection;
use tokio::sync::OnceCell;
use tokio::sync::{OnceCell, watch};
use crate::bilibili::Credential;
use crate::config::{CONFIG_DIR, Config, LegacyConfig};
use crate::config::Config;
pub static VERSIONED_CONFIG: OnceCell<VersionedConfig> = OnceCell::const_new();
static VERSIONED_CONFIG: OnceCell<VersionedConfig> = OnceCell::const_new();
pub struct VersionedConfig {
inner: ArcSwap<Config>,
update_lock: tokio::sync::Mutex<()>,
tx: watch::Sender<Arc<Config>>,
rx: watch::Receiver<Arc<Config>>,
}
impl VersionedConfig {
/// 初始化全局的 `VersionedConfig`,初始化失败或者已初始化过则返回错误
pub async fn init(connection: &DatabaseConnection) -> Result<()> {
let mut config = match Config::load_from_database(connection).await? {
Some(Ok(config)) => config,
Some(Err(e)) => bail!("解析数据库配置失败: {}", e),
None => {
let config = match LegacyConfig::migrate_from_file(&CONFIG_DIR.join("config.toml"), connection).await {
Ok(config) => config,
Err(e) => {
if e.downcast_ref::<std::io::Error>()
.is_none_or(|e| e.kind() != std::io::ErrorKind::NotFound)
{
bail!("未成功读取并迁移旧版本配置:{:#}", e);
} else {
let config = Config::default();
warn!(
"生成 auth_token{},可使用该 token 登录 web UI,该信息仅在首次运行时打印",
config.auth_token
);
config
}
pub async fn init(connection: &DatabaseConnection) -> Result<&'static VersionedConfig> {
VERSIONED_CONFIG
.get_or_try_init(|| async move {
let mut config = match Config::load_from_database(connection).await? {
Some(Ok(config)) => config,
Some(Err(e)) => bail!("解析数据库配置失败: {}", e),
None => {
let config = Config::default();
warn!(
"生成 auth_token{},可使用该 token 登录 web UI,该信息仅在首次运行时打印",
config.auth_token
);
config.save_to_database(connection).await?;
config
}
};
config.save_to_database(connection).await?;
config
}
};
// version 本身不具有实际意义,仅用于并发更新时的版本控制,在初始化时可以直接清空
config.version = 0;
let versioned_config = VersionedConfig::new(config);
VERSIONED_CONFIG
.set(versioned_config)
.map_err(|e| anyhow!("VERSIONED_CONFIG has already been initialized: {}", e))?;
Ok(())
// version 本身不具有实际意义,仅用于并发更新时的版本控制,在初始化时可以直接清空
config.version = 0;
Ok(VersionedConfig::new(config))
})
.await
}
#[cfg(test)]
/// 单元测试直接使用测试专用的配置即可
pub fn get() -> &'static VersionedConfig {
use std::sync::LazyLock;
static TEST_CONFIG: LazyLock<VersionedConfig> = LazyLock::new(|| VersionedConfig::new(Config::test_default()));
return &TEST_CONFIG;
/// 仅在测试环境使用,该方法会尝试从测试数据库中加载配置并写入到全局的 VERSIONED_CONFIG
pub async fn init_for_test(connection: &DatabaseConnection) -> Result<&'static VersionedConfig> {
VERSIONED_CONFIG
.get_or_try_init(|| async move {
let Some(Ok(config)) = Config::load_from_database(&connection).await? else {
bail!("no config found in test database");
};
Ok(VersionedConfig::new(config))
})
.await
}
#[cfg(not(test))]
@@ -66,37 +61,52 @@ impl VersionedConfig {
VERSIONED_CONFIG.get().expect("VERSIONED_CONFIG is not initialized")
}
pub fn new(config: Config) -> Self {
#[cfg(test)]
/// 尝试获取全局的 `VersionedConfig`,如果未初始化则退回默认配置
pub fn get() -> &'static VersionedConfig {
use std::sync::LazyLock;
static FALLBACK_CONFIG: LazyLock<VersionedConfig> = LazyLock::new(|| VersionedConfig::new(Config::default()));
// 优先从全局变量获取,未初始化则退回默认配置
return VERSIONED_CONFIG.get().unwrap_or_else(|| &FALLBACK_CONFIG);
}
fn new(config: Config) -> Self {
let inner = ArcSwap::from_pointee(config);
let (tx, rx) = watch::channel(inner.load_full());
Self {
inner: ArcSwap::from_pointee(config),
inner,
update_lock: tokio::sync::Mutex::new(()),
tx,
rx,
}
}
pub fn load(&self) -> Guard<Arc<Config>> {
pub fn read(&self) -> Guard<Arc<Config>> {
self.inner.load()
}
pub fn load_full(&self) -> Arc<Config> {
pub fn snapshot(&self) -> Arc<Config> {
self.inner.load_full()
}
pub async fn update_credential(&self, new_credential: Credential, connection: &DatabaseConnection) -> Result<()> {
// 确保更新内容与写入数据库的操作是原子性的
pub fn subscribe(&self) -> watch::Receiver<Arc<Config>> {
self.rx.clone()
}
pub async fn update_credential(
&self,
new_credential: Credential,
connection: &DatabaseConnection,
) -> Result<Arc<Config>> {
let _lock = self.update_lock.lock().await;
loop {
let old_config = self.inner.load();
let mut new_config = old_config.as_ref().clone();
new_config.credential = new_credential.clone();
new_config.version += 1;
if Arc::ptr_eq(
&old_config,
&self.inner.compare_and_swap(&old_config, Arc::new(new_config)),
) {
break;
}
}
self.inner.load().save_to_database(connection).await
let mut new_config = self.inner.load().as_ref().clone();
new_config.credential = new_credential;
new_config.version += 1;
new_config.save_to_database(connection).await?;
let new_config = Arc::new(new_config);
self.inner.store(new_config.clone());
self.tx.send(new_config.clone())?;
Ok(new_config)
}
/// 外部 API 会调用这个方法,如果更新失败直接返回错误
@@ -107,14 +117,10 @@ impl VersionedConfig {
bail!("配置版本不匹配,请刷新页面修改后重新提交");
}
new_config.version += 1;
let new_config = Arc::new(new_config);
if !Arc::ptr_eq(
&old_config,
&self.inner.compare_and_swap(&old_config, new_config.clone()),
) {
bail!("配置版本不匹配,请刷新页面修改后重新提交");
}
new_config.save_to_database(connection).await?;
let new_config = Arc::new(new_config);
self.inner.store(new_config.clone());
self.tx.send(new_config.clone())?;
Ok(new_config)
}
}
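The update path above enforces optimistic concurrency: the submitted config must carry the version it was loaded at, and a successful write bumps the version before publishing. A std-only sketch of just that check (hypothetical `try_update`; database persistence and the watch broadcast are elided):

```rust
// Sketch: optimistic version check — reject a config edited against a stale
// snapshot, bump the version on success.
#[derive(Clone, Debug, PartialEq)]
struct Config {
    version: u64,
    interval: u64,
}

fn try_update(current: &mut Config, mut new_config: Config) -> Result<(), &'static str> {
    if new_config.version != current.version {
        return Err("config version mismatch, reload and resubmit");
    }
    new_config.version += 1;
    *current = new_config;
    Ok(())
}

fn main() {
    let mut current = Config { version: 3, interval: 1200 };
    // edit based on the current snapshot: accepted, version bumped to 4
    assert!(try_update(&mut current, Config { version: 3, interval: 3600 }).is_ok());
    assert_eq!(current.version, 4);
    // edit based on a stale snapshot (version 3 again): rejected
    assert!(try_update(&mut current, Config { version: 3, interval: 600 }).is_err());
}
```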

View File

@@ -1,19 +1,18 @@
use std::path::Path;
use std::time::Duration;
use anyhow::{Context, Result};
use anyhow::{Context, Result, bail};
use bili_sync_migration::{Migrator, MigratorTrait};
use sea_orm::sqlx::sqlite::{SqliteConnectOptions, SqliteJournalMode, SqliteSynchronous};
use sea_orm::sqlx::{ConnectOptions as SqlxConnectOptions, Sqlite};
use sea_orm::{ConnectOptions, Database, DatabaseConnection, SqlxSqliteConnector};
use sea_orm::{ConnectOptions, ConnectionTrait, Database, DatabaseConnection, SqlxSqliteConnector, Statement};
use crate::config::CONFIG_DIR;
fn database_url() -> String {
format!("sqlite://{}?mode=rwc", CONFIG_DIR.join("data.sqlite").to_string_lossy())
fn database_url(path: &Path) -> String {
format!("sqlite://{}?mode=rwc", path.to_string_lossy())
}
async fn database_connection() -> Result<DatabaseConnection> {
let mut option = ConnectOptions::new(database_url());
async fn database_connection(database_url: &str) -> Result<DatabaseConnection> {
let mut option = ConnectOptions::new(database_url);
option
.max_connections(50)
.min_connections(5)
@@ -35,18 +34,38 @@ async fn database_connection() -> Result<DatabaseConnection> {
))
}
async fn migrate_database() -> Result<()> {
async fn migrate_database(database_url: &str) -> Result<()> {
// 注意此处使用内部构造的 DatabaseConnection,而不是通过 database_connection() 获取
// 这是因为使用多个连接的 Connection 会导致奇怪的迁移顺序问题,而使用默认的连接选项不会
let connection = Database::connect(database_url()).await?;
let connection = Database::connect(database_url).await?;
// 避免 https://github.com/amtoaer/bili-sync/issues/571 问题,迁移前根据 migration 确认当前版本
// 如果用户从 2.6.0 以下版本直接升级,migration 不满足需求,直接报错而不执行迁移
if connection
.query_one(Statement::from_string(
connection.get_database_backend(),
"SELECT 1 FROM seaql_migrations WHERE version = 'm20250613_043257_add_config';",
))
.await
.is_ok_and(|res| res.is_none())
{
// 查询成功且结果为空,即没有 m20250613_043257_add_config,说明版本低于 2.6.0
bail!("该版本仅支持从 2.6.x 以上的版本升级,请先升级至 2.6.x 或 2.7.x 完成配置迁移,再升级至最新版本。");
}
Ok(Migrator::up(&connection, None).await?)
}
/// 进行数据库迁移并获取数据库连接,供外部使用
pub async fn setup_database() -> Result<DatabaseConnection> {
tokio::fs::create_dir_all(CONFIG_DIR.as_path()).await.context(
"Failed to create config directory. Please check if you have granted necessary permissions to your folder.",
)?;
migrate_database().await.context("Failed to migrate database")?;
database_connection().await.context("Failed to connect to database")
pub async fn setup_database(path: &Path) -> Result<DatabaseConnection> {
if let Some(parent) = path.parent() {
tokio::fs::create_dir_all(parent).await.context(
"Failed to create config directory. Please check if you have granted necessary permissions to your folder.",
)?;
}
let database_url = database_url(path);
migrate_database(&database_url)
.await
.context("Failed to migrate database")?;
database_connection(&database_url)
.await
.context("Failed to connect to database")
}

View File

@@ -4,15 +4,18 @@ use std::path::Path;
use std::sync::Arc;
use anyhow::{Context, Result, bail, ensure};
use async_tempfile::TempFile;
use futures::TryStreamExt;
use reqwest::{Method, header};
use tokio::fs::{self, File, OpenOptions};
use reqwest::{Method, StatusCode, header};
use tokio::fs::{self};
use tokio::io::{AsyncSeekExt, AsyncWriteExt};
use tokio::process::Command;
use tokio::task::JoinSet;
use tokio_util::io::StreamReader;
use crate::bilibili::Client;
use crate::config::VersionedConfig;
use crate::bilibili::{Client, ErrorForStatusExt};
use crate::config::{ARGS, ConcurrentDownloadLimit};
pub struct Downloader {
client: Client,
}
@@ -25,28 +28,134 @@ impl Downloader {
Self { client }
}
pub async fn fetch(&self, url: &str, path: &Path) -> Result<()> {
if VersionedConfig::get().load().concurrent_limit.download.enable {
self.fetch_parallel(url, path).await
pub async fn fetch(&self, url: &str, path: &Path, concurrent_download: &ConcurrentDownloadLimit) -> Result<()> {
let mut temp_file = TempFile::new().await?;
self.fetch_internal(url, &mut temp_file, false, concurrent_download)
.await?;
if let Some(parent) = path.parent() {
fs::create_dir_all(parent).await?;
}
fs::copy(temp_file.file_path(), path).await?;
// temp_file 的 drop 需要 std::fs::remove_file
// 如果交由 rust 自动执行,虽然逻辑正确但会略微阻塞异步上下文
// 尽量主动调用,保证正常执行的情况下文件清除操作由 spawn_blocking 在专门线程中完成
temp_file.drop_async().await;
Ok(())
}
pub async fn multi_fetch(
&self,
urls: &[&str],
path: &Path,
concurrent_download: &ConcurrentDownloadLimit,
) -> Result<()> {
let temp_file = self.multi_fetch_internal(urls, true, concurrent_download).await?;
if let Some(parent) = path.parent() {
fs::create_dir_all(parent).await?;
}
fs::copy(temp_file.file_path(), path).await?;
temp_file.drop_async().await;
Ok(())
}
pub async fn multi_fetch_and_merge(
&self,
video_urls: &[&str],
audio_urls: &[&str],
path: &Path,
concurrent_download: &ConcurrentDownloadLimit,
) -> Result<()> {
let (video_temp_file, audio_temp_file) = tokio::try_join!(
self.multi_fetch_internal(video_urls, true, concurrent_download),
self.multi_fetch_internal(audio_urls, true, concurrent_download)
)?;
let final_temp_file = TempFile::new().await?;
let output = Command::new(ARGS.ffmpeg_path.as_deref().unwrap_or("ffmpeg"))
.args([
"-i",
video_temp_file.file_path().to_string_lossy().as_ref(),
"-i",
audio_temp_file.file_path().to_string_lossy().as_ref(),
"-c",
"copy",
"-strict",
"unofficial",
"-f",
"mp4",
"-y",
final_temp_file.file_path().to_string_lossy().as_ref(),
])
.output()
.await
.context("failed to run ffmpeg")?;
if !output.status.success() {
bail!("ffmpeg error: {}", str::from_utf8(&output.stderr).unwrap_or("unknown"));
}
if let Some(parent) = path.parent() {
fs::create_dir_all(parent).await?;
}
fs::copy(final_temp_file.file_path(), path).await?;
tokio::join!(
video_temp_file.drop_async(),
audio_temp_file.drop_async(),
final_temp_file.drop_async()
);
Ok(())
}
async fn multi_fetch_internal(
&self,
urls: &[&str],
is_stream: bool,
concurrent_download: &ConcurrentDownloadLimit,
) -> Result<TempFile> {
if urls.is_empty() {
bail!("no urls provided");
}
let mut temp_file = TempFile::new().await?;
for (idx, url) in urls.iter().enumerate() {
match self
.fetch_internal(url, &mut temp_file, is_stream, concurrent_download)
.await
{
Ok(_) => return Ok(temp_file),
Err(e) => {
if idx == urls.len() - 1 {
temp_file.drop_async().await;
return Err(e).with_context(|| format!("failed to download file from all {} urls", urls.len()));
}
temp_file.set_len(0).await?;
temp_file.rewind().await?;
}
}
}
unreachable!()
}
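`multi_fetch_internal` tries each URL in order into the same temp file, truncating and rewinding between failed attempts and surfacing only the last error. A std-only sketch of that retry shape (hypothetical `fetch_with_fallback` over an in-memory buffer, where `clear` plays the role of `set_len(0)` plus `rewind`):

```rust
// Sketch: try sources in order, resetting the shared buffer after each
// failure; only the final failure is surfaced, wrapped with the total count.
fn fetch_with_fallback(
    sources: &[(&str, bool)], // (payload, whether this source succeeds)
    buf: &mut Vec<u8>,
) -> Result<(), String> {
    if sources.is_empty() {
        return Err("no urls provided".to_owned());
    }
    for (idx, &(payload, ok)) in sources.iter().enumerate() {
        buf.extend_from_slice(payload.as_bytes()); // a partial write happens either way
        if ok {
            return Ok(());
        }
        if idx == sources.len() - 1 {
            return Err(format!("failed to download file from all {} urls", sources.len()));
        }
        buf.clear(); // set_len(0) + rewind in the real code
    }
    unreachable!()
}

fn main() {
    let mut buf = Vec::new();
    // first source fails mid-write, second succeeds; the buffer keeps only the good payload
    assert!(fetch_with_fallback(&[("bad", false), ("good", true)], &mut buf).is_ok());
    assert_eq!(buf, b"good");
    assert!(fetch_with_fallback(&[], &mut Vec::new()).is_err());
}
```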
async fn fetch_internal(
&self,
url: &str,
file: &mut TempFile,
is_stream: bool,
concurrent_download: &ConcurrentDownloadLimit,
) -> Result<()> {
if concurrent_download.enable {
self.fetch_parallel(url, file, is_stream, concurrent_download).await
} else {
self.fetch_serial(url, path).await
self.fetch_serial(url, file).await
}
}
async fn fetch_serial(&self, url: &str, path: &Path) -> Result<()> {
async fn fetch_serial(&self, url: &str, file: &mut TempFile) -> Result<()> {
let resp = self
.client
.request(Method::GET, url, None)
.send()
.await?
.error_for_status()?;
.error_for_status_ext()?;
let expected = resp.header_content_length();
if let Some(parent) = path.parent() {
fs::create_dir_all(parent).await?;
}
let mut file = File::create(path).await?;
let mut stream_reader = StreamReader::new(resp.bytes_stream().map_err(std::io::Error::other));
let received = tokio::io::copy(&mut stream_reader, &mut file).await?;
let received = tokio::io::copy(&mut stream_reader, file).await?;
file.flush().await?;
if let Some(expected) = expected {
ensure!(
@@ -59,39 +168,55 @@ impl Downloader {
Ok(())
}
async fn fetch_parallel(&self, url: &str, path: &Path) -> Result<()> {
let (concurrency, threshold) = {
let config = VersionedConfig::get().load();
(
config.concurrent_limit.download.concurrency,
config.concurrent_limit.download.threshold,
)
async fn fetch_parallel(
&self,
url: &str,
file: &mut TempFile,
is_stream: bool,
concurrent_download: &ConcurrentDownloadLimit,
) -> Result<()> {
let (concurrency, threshold) = (concurrent_download.concurrency, concurrent_download.threshold);
let file_size = if is_stream {
// B 站视频、音频流存在 HEAD 为 404 但 GET 正常的情况,此处假设支持分块,直接使用携带 Range 头的 GET 请求探测
let resp = self
.client
.request(Method::GET, url, None)
.header(header::RANGE, "bytes=0-0")
.send()
.await?
.error_for_status_ext()?;
if resp.status() != StatusCode::PARTIAL_CONTENT {
return self.fetch_serial(url, file).await;
}
resp.header_file_size()
} else {
// 对于普通文件,直接使用常规的 HEAD 请求探测
let resp = self
.client
.request(Method::HEAD, url, None)
.send()
.await?
.error_for_status_ext()?;
if resp
.headers()
.get(header::ACCEPT_RANGES)
// https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Accept-Ranges#none
.is_none_or(|v| v.to_str().unwrap_or_default() == "none")
{
return self.fetch_serial(url, file).await;
}
resp.header_content_length()
};
let Some(file_size) = file_size else {
return self.fetch_serial(url, file).await;
};
let resp = self
.client
.request(Method::HEAD, url, None)
.send()
.await?
.error_for_status()?;
let file_size = resp.header_content_length().unwrap_or_default();
let chunk_size = file_size / concurrency as u64;
if resp
.headers()
.get(header::ACCEPT_RANGES)
.is_none_or(|v| v.to_str().unwrap_or_default() == "none") // https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Accept-Ranges#none
|| chunk_size < threshold
{
return self.fetch_serial(url, path).await;
if chunk_size < threshold {
return self.fetch_serial(url, file).await;
}
if let Some(parent) = path.parent() {
fs::create_dir_all(parent).await?;
}
let file = File::create(path).await?;
file.set_len(file_size).await?;
drop(file);
let mut tasks = JoinSet::new();
let url = Arc::new(url.to_string());
let path = Arc::new(path.to_path_buf());
for i in 0..concurrency {
let start = i as u64 * chunk_size;
let end = if i == concurrency - 1 {
@@ -99,17 +224,17 @@ impl Downloader {
} else {
start + chunk_size
} - 1;
let (url_clone, path_clone, client_clone) = (url.clone(), path.clone(), self.client.clone());
let (url_clone, client_clone) = (url.clone(), self.client.clone());
let mut file_clone = file.open_rw().await?;
tasks.spawn(async move {
let mut file = OpenOptions::new().write(true).open(path_clone.as_ref()).await?;
file.seek(SeekFrom::Start(start)).await?;
file_clone.seek(SeekFrom::Start(start)).await?;
let range_header = format!("bytes={}-{}", start, end);
let resp = client_clone
.request(Method::GET, &url_clone, None)
.header(header::RANGE, &range_header)
.send()
.await?
.error_for_status()?;
.error_for_status_ext()?;
if let Some(content_length) = resp.header_content_length() {
ensure!(
content_length == end - start + 1,
@@ -119,8 +244,8 @@ impl Downloader {
);
}
let mut stream_reader = StreamReader::new(resp.bytes_stream().map_err(std::io::Error::other));
let received = tokio::io::copy(&mut stream_reader, &mut file).await?;
file.flush().await?;
let received = tokio::io::copy(&mut stream_reader, &mut file_clone).await?;
file_clone.flush().await?;
ensure!(
received == end - start + 1,
"downloaded bytes mismatch: expected {}, got {}",
@@ -135,50 +260,15 @@ impl Downloader {
}
Ok(())
}
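The parallel path splits `file_size` into `concurrency` chunks of `file_size / concurrency` bytes, expresses each as an inclusive `bytes=start-end` range, and lets the last chunk absorb the division remainder. A std-only sketch of that math (hypothetical `chunk_ranges` helper, not part of the diff):

```rust
// Sketch: inclusive byte ranges as computed in fetch_parallel — equal chunks,
// with the final chunk stretched to the end of the file.
fn chunk_ranges(file_size: u64, concurrency: usize) -> Vec<(u64, u64)> {
    let chunk_size = file_size / concurrency as u64;
    (0..concurrency)
        .map(|i| {
            let start = i as u64 * chunk_size;
            let end = if i == concurrency - 1 { file_size } else { start + chunk_size } - 1;
            (start, end)
        })
        .collect()
}

fn main() {
    // 10 bytes over 3 workers: the last worker takes the 4-byte remainder chunk
    assert_eq!(chunk_ranges(10, 3), vec![(0, 2), (3, 5), (6, 9)]);
    assert_eq!(chunk_ranges(8, 2), vec![(0, 3), (4, 7)]);
}
```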
pub async fn fetch_with_fallback(&self, urls: &[&str], path: &Path) -> Result<()> {
if urls.is_empty() {
bail!("no urls provided");
}
let mut res = Ok(());
for url in urls {
match self.fetch(url, path).await {
Ok(_) => return Ok(()),
Err(err) => {
res = Err(err);
}
}
}
res.with_context(|| format!("failed to download from {:?}", urls))
}
pub async fn merge(&self, video_path: &Path, audio_path: &Path, output_path: &Path) -> Result<()> {
let output = tokio::process::Command::new("ffmpeg")
.args([
"-i",
video_path.to_string_lossy().as_ref(),
"-i",
audio_path.to_string_lossy().as_ref(),
"-c",
"copy",
"-strict",
"unofficial",
"-y",
output_path.to_string_lossy().as_ref(),
])
.output()
.await?;
if !output.status.success() {
bail!("ffmpeg error: {}", str::from_utf8(&output.stderr).unwrap_or("unknown"));
}
Ok(())
}
}
/// reqwest.content_length() 居然指的是 body_size 而非 content-length header没办法自己实现一下
/// https://github.com/seanmonstar/reqwest/issues/1814
trait ResponseExt {
/// 获取 Content-Length 头的值
fn header_content_length(&self) -> Option<u64>;
/// 获取 Content-Range 头中的文件总大小部分
fn header_file_size(&self) -> Option<u64>;
}
impl ResponseExt for reqwest::Response {
@@ -188,4 +278,67 @@ impl ResponseExt for reqwest::Response {
.and_then(|v| v.to_str().ok())
.and_then(|s| s.parse::<u64>().ok())
}
fn header_file_size(&self) -> Option<u64> {
self.headers()
.get(header::CONTENT_RANGE)
.and_then(|v| v.to_str().ok())
.and_then(|s| {
// Content-Range: bytes 0-0/800946
s.rsplit_once('/')
})
.and_then(|(_, size_str)| size_str.parse::<u64>().ok())
}
}
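`header_file_size` recovers the total size from a `Content-Range` value such as `bytes 0-0/800946` by splitting on the last `/`. The parsing itself is plain std; `parse_total_size` below is a hypothetical standalone version of the same logic:

```rust
// Sketch: pull the total file size out of a Content-Range header value;
// an unknown size ("bytes */*") fails the numeric parse and yields None.
fn parse_total_size(content_range: &str) -> Option<u64> {
    content_range
        .rsplit_once('/')
        .and_then(|(_, total)| total.parse::<u64>().ok())
}

fn main() {
    assert_eq!(parse_total_size("bytes 0-0/800946"), Some(800946));
    assert_eq!(parse_total_size("bytes */*"), None);
    assert_eq!(parse_total_size("garbage"), None);
}
```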
#[cfg(test)]
mod tests {
use std::path::Path;
use anyhow::Result;
use crate::bilibili::{BestStream, BiliClient, Video};
use crate::config::VersionedConfig;
use crate::database::setup_database;
use crate::downloader::Downloader;
#[ignore = "only for manual test"]
#[tokio::test(flavor = "multi_thread")]
async fn test_parse_and_download_video() -> Result<()> {
VersionedConfig::init_for_test(&setup_database(Path::new("./test.sqlite")).await?).await?;
let config = VersionedConfig::get().read();
let client = BiliClient::new();
let video = Video::new(&client, "BV1QJmaYKEv4", &config.credential);
let pages = video.get_pages().await.expect("failed to get pages");
let first_page = pages.into_iter().next().expect("no page found");
let mut page_analyzer = video
.get_page_analyzer(&first_page)
.await
.expect("failed to get page analyzer");
let json_info = serde_json::to_string_pretty(&page_analyzer.info)?;
tokio::fs::write("./debug_playurl.json", json_info).await?;
let best_stream = page_analyzer
.best_stream(&config.filter_option)
.expect("failed to get best stream");
let BestStream::VideoAudio {
video,
audio: Some(audio),
} = best_stream
else {
panic!("best stream is not video & audio");
};
dbg!(&video);
dbg!(&audio);
let downloader = Downloader::new(client.client);
downloader
.multi_fetch_and_merge(
&video.urls(true),
&audio.urls(true),
Path::new("./output.mp4"),
&config.concurrent_limit.download,
)
.await
.expect("failed to download video");
Ok(())
}
}

View File

@@ -1,15 +1,6 @@
use std::io;
use anyhow::Result;
use thiserror::Error;
#[derive(Error, Debug)]
#[error("Request too frequently")]
pub struct DownloadAbortError();
#[derive(Error, Debug)]
#[error("Process page error")]
pub struct ProcessPageError();
pub enum ExecutionStatus {
Skipped,
@@ -17,7 +8,7 @@ pub enum ExecutionStatus {
Ignored(anyhow::Error),
Failed(anyhow::Error),
// 任务可以返回该状态固定自己的 status
FixedFailed(u32, anyhow::Error),
Fixed(u32),
}
// 目前 stable rust 似乎不支持自定义类型使用 ? 运算符,只能先在返回值使用 Result,再这样套层娃
@@ -42,10 +33,10 @@ impl From<Result<ExecutionStatus>> for ExecutionStatus {
}
}
// 未包裹的 reqwest::Error
if let Some(error) = cause.downcast_ref::<reqwest::Error>() {
if is_ignored_reqwest_error(error) {
return ExecutionStatus::Ignored(err);
}
if let Some(error) = cause.downcast_ref::<reqwest::Error>()
&& is_ignored_reqwest_error(error)
{
return ExecutionStatus::Ignored(err);
}
}
ExecutionStatus::Failed(err)
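The hunk above flattens a nested `if` into a let-chain (`if let … && cond { … }`), which requires the Rust 2024 edition. An edition-independent sketch of the underlying downcast check, with a hypothetical `Timeout` type standing in for `reqwest::Error` and its predicate for `is_ignored_reqwest_error`:

```rust
use std::error::Error;
use std::fmt;

// Hypothetical stand-in for `reqwest::Error` in the diff above.
#[derive(Debug)]
struct Timeout;

impl fmt::Display for Timeout {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "request timed out")
    }
}

impl Error for Timeout {}

// Walk a type-erased error and decide whether it is an "ignorable" kind.
fn should_ignore(err: &(dyn Error + 'static)) -> bool {
    if let Some(timeout) = err.downcast_ref::<Timeout>() {
        // Here the real code would additionally call is_ignored_reqwest_error(timeout).
        let _ = timeout;
        return true;
    }
    false
}

fn main() {
    let err: Box<dyn Error> = Box::new(Timeout);
    assert!(should_ignore(err.as_ref()));
    let other: Box<dyn Error> = Box::new(fmt::Error);
    assert!(!should_ignore(other.as_ref()));
    println!("ok");
}
```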

View File

@@ -8,6 +8,7 @@ mod config;
mod database;
mod downloader;
mod error;
mod notifier;
mod task;
mod utils;
mod workflow;
@@ -17,23 +18,30 @@ use std::fmt::Debug;
use std::future::Future;
use std::sync::Arc;
use anyhow::{Context, Result, bail};
use bilibili::BiliClient;
use parking_lot::Mutex;
use parking_lot::RwLock;
use sea_orm::DatabaseConnection;
use task::{http_server, video_downloader};
use tokio::process::Command;
use tokio_util::sync::CancellationToken;
use tokio_util::task::TaskTracker;
use crate::api::{LogHelper, MAX_HISTORY_LOGS};
use crate::config::{ARGS, VersionedConfig};
use crate::config::{ARGS, CONFIG_DIR, VersionedConfig};
use crate::database::setup_database;
use crate::utils::init_logger;
use crate::utils::signal::terminate;
#[tokio::main]
async fn main() {
let (connection, log_writer) = init().await;
let bili_client = Arc::new(BiliClient::new());
let (bili_client, connection, log_writer) = match init().await {
Ok(res) => res,
Err(e) => {
error!("初始化失败:{:#}", e);
return;
}
};
let token = CancellationToken::new();
let tracker = TaskTracker::new();
@@ -44,14 +52,13 @@ async fn main() {
&tracker,
token.clone(),
);
if !cfg!(debug_assertions) {
spawn_task(
"定时下载",
video_downloader(connection.clone(), bili_client),
&tracker,
token.clone(),
);
}
spawn_task(
"定时下载",
video_downloader(connection.clone(), bili_client),
&tracker,
token.clone(),
);
tracker.close();
handle_shutdown(connection, tracker, token).await
@@ -77,20 +84,34 @@ fn spawn_task(
}
/// 初始化日志系统、打印欢迎信息,初始化数据库连接和全局配置
async fn init() -> (DatabaseConnection, LogHelper) {
async fn init() -> Result<(Arc<BiliClient>, DatabaseConnection, LogHelper)> {
let (tx, _rx) = tokio::sync::broadcast::channel(30);
let log_history = Arc::new(Mutex::new(VecDeque::with_capacity(MAX_HISTORY_LOGS + 1)));
let log_history = Arc::new(RwLock::new(VecDeque::with_capacity(MAX_HISTORY_LOGS + 1)));
let log_writer = LogHelper::new(tx, log_history.clone());
init_logger(&ARGS.log_level, Some(log_writer.clone()));
info!("欢迎使用 Bili-Sync,当前程序版本:{}", config::version());
info!("项目地址:https://github.com/amtoaer/bili-sync");
let connection = setup_database().await.expect("数据库初始化失败");
let ffmpeg_path = ARGS.ffmpeg_path.as_deref().unwrap_or("ffmpeg");
let ffmpeg_exists = Command::new(ffmpeg_path)
.arg("-version")
.output()
.await
.map(|output| output.status.success())
.unwrap_or(false);
if !ffmpeg_exists {
bail!("ffmpeg 不存在或无法执行,请确保已正确安装 ffmpeg,并且 {ffmpeg_path} 命令可用");
}
let connection = setup_database(&CONFIG_DIR.join("data.sqlite"))
.await
.context("数据库初始化失败")?;
info!("数据库初始化完成");
VersionedConfig::init(&connection).await.expect("配置初始化失败");
VersionedConfig::init(&connection).await.context("配置初始化失败")?;
info!("配置初始化完成");
(connection, log_writer)
Ok((Arc::new(BiliClient::new()), connection, log_writer))
}
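The new startup probe treats any spawn failure or non-zero exit as "ffmpeg is missing". A standalone sketch of the same check, with `std::process::Command` in place of tokio's async variant (the binary names below are placeholders, not from the project):

```rust
use std::process::Command;

// Probe whether `program arg` runs successfully; a spawn error (e.g. the
// binary is not on PATH) is folded into `false`, mirroring the diff.
fn command_works(program: &str, arg: &str) -> bool {
    Command::new(program)
        .arg(arg)
        .output()
        .map(|output| output.status.success())
        .unwrap_or(false)
}

fn main() {
    assert!(!command_works("no-such-binary-bili-sync-test", "-version"));
    assert!(command_works("echo", "-version")); // echo ignores its argument
    println!("ok");
}
```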
async fn handle_shutdown(connection: DatabaseConnection, tracker: TaskTracker, token: CancellationToken) {

View File

@@ -0,0 +1,67 @@
use bili_sync_entity::video;
use crate::utils::status::{STATUS_OK, VideoStatus};
pub enum DownloadNotifyInfo {
List {
source: String,
img_url: Option<String>,
titles: Vec<String>,
},
Summary {
source: String,
img_url: Option<String>,
count: usize,
},
}
impl DownloadNotifyInfo {
pub fn new(source: String) -> Self {
Self::List {
source,
img_url: None,
titles: Vec::with_capacity(10),
}
}
pub fn record(&mut self, models: &[video::ActiveModel]) {
let success_models = models
.iter()
.filter(|m| {
let sub_task_status: [u32; 5] = VideoStatus::from(*m.download_status.as_ref()).into();
sub_task_status.into_iter().all(|s| s == STATUS_OK)
})
.collect::<Vec<_>>();
match self {
Self::List {
source,
img_url,
titles,
} => {
let count = success_models.len() + titles.len();
if count > 10 {
*self = Self::Summary {
source: std::mem::take(source),
img_url: std::mem::take(img_url),
count,
};
} else {
if img_url.is_none() {
*img_url = success_models.first().map(|m| m.cover.as_ref().clone());
}
titles.extend(success_models.into_iter().map(|m| m.name.as_ref().clone()));
}
}
Self::Summary { count, .. } => *count += success_models.len(),
}
}
pub fn should_notify(&self) -> bool {
if let Self::List { titles, .. } = self
&& titles.is_empty()
{
return false;
}
true
}
}
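`DownloadNotifyInfo::record` upgrades itself from a titled list to a bare count once more than 10 successful videos accumulate. A simplified standalone model of that state machine — plain strings replace `video::ActiveModel`, and the source/image fields are dropped:

```rust
// Simplified sketch of the List -> Summary upgrade in the diff above.
enum NotifyInfo {
    List { titles: Vec<String> },
    Summary { count: usize },
}

impl NotifyInfo {
    fn record(&mut self, new_titles: &[&str]) {
        match self {
            NotifyInfo::List { titles } => {
                let count = titles.len() + new_titles.len();
                if count > 10 {
                    // Past the threshold, collapse the list into a bare count.
                    *self = NotifyInfo::Summary { count };
                } else {
                    titles.extend(new_titles.iter().map(|s| s.to_string()));
                }
            }
            NotifyInfo::Summary { count } => *count += new_titles.len(),
        }
    }
}

fn main() {
    let mut info = NotifyInfo::List { titles: Vec::new() };
    info.record(&["a", "b", "c"]);
    assert!(matches!(&info, NotifyInfo::List { titles } if titles.len() == 3));
    info.record(&["d", "e", "f", "g", "h", "i", "j", "k"]); // 3 + 8 = 11 > 10
    assert!(matches!(info, NotifyInfo::Summary { count: 11 }));
    println!("ok");
}
```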

View File

@@ -0,0 +1,59 @@
use std::borrow::Cow;
use itertools::Itertools;
use serde::Serialize;
use crate::notifier::DownloadNotifyInfo;
#[derive(Serialize)]
pub struct Message<'a> {
pub message: Cow<'a, str>,
pub image_url: Option<String>,
}
impl<'a> From<&'a str> for Message<'a> {
fn from(message: &'a str) -> Self {
Self {
message: Cow::Borrowed(message),
image_url: None,
}
}
}
impl From<String> for Message<'_> {
fn from(message: String) -> Self {
Self {
message: message.into(),
image_url: None,
}
}
}
impl From<DownloadNotifyInfo> for Message<'_> {
fn from(info: DownloadNotifyInfo) -> Self {
match info {
DownloadNotifyInfo::List {
source,
img_url,
titles,
} => Self {
message: format!(
"{}的 {} 条新视频已入库:\n{}",
source,
titles.len(),
titles
.into_iter()
.enumerate()
.map(|(i, title)| format!("{}. {title}", i + 1))
.join("\n")
)
.into(),
image_url: img_url,
},
DownloadNotifyInfo::Summary { source, img_url, count } => Self {
message: format!("{}的 {} 条新视频已入库,快去看看吧!", source, count).into(),
image_url: img_url,
},
}
}
}
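`Message` stores its text as `Cow<'a, str>` so that `&str` callers avoid an allocation while `String` callers hand over ownership — the same idea behind the "避免字符串拷贝" perf commits. A minimal sketch of the pattern outside the project:

```rust
use std::borrow::Cow;

// `Cow` keeps borrowed input borrowed and owned input owned; field names
// mirror the diff, everything else is simplified.
struct Message<'a> {
    message: Cow<'a, str>,
}

impl<'a> From<&'a str> for Message<'a> {
    fn from(message: &'a str) -> Self {
        Self { message: Cow::Borrowed(message) } // no allocation
    }
}

impl From<String> for Message<'_> {
    fn from(message: String) -> Self {
        Self { message: message.into() } // Cow::Owned, no copy
    }
}

fn main() {
    let borrowed: Message = "hello".into();
    assert!(matches!(borrowed.message, Cow::Borrowed(_)));
    let owned: Message = String::from("hello").into();
    assert!(matches!(owned.message, Cow::Owned(_)));
    println!("ok");
}
```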

View File

@@ -0,0 +1,116 @@
mod info;
mod message;
use std::collections::HashMap;
use anyhow::Result;
use futures::future;
pub use info::DownloadNotifyInfo;
pub use message::Message;
use reqwest::header;
use serde::{Deserialize, Serialize};
use crate::config::TEMPLATE;
#[derive(Debug, Clone, Deserialize, Serialize)]
#[serde(rename_all = "camelCase", tag = "type")]
pub enum Notifier {
Telegram {
bot_token: String,
chat_id: String,
#[serde(default)]
skip_image: bool,
},
Webhook {
url: String,
template: Option<String>,
#[serde(default)]
headers: Option<HashMap<String, String>>,
#[serde(skip)]
// 一个内部辅助字段,用于决定是否强制渲染当前模板,在测试时使用
ignore_cache: Option<()>,
},
}
pub fn webhook_template_key(url: &str) -> String {
format!("payload_{}", url)
}
pub fn webhook_template_content(template: &Option<String>) -> &str {
template
.as_deref()
.filter(|t| !t.trim().is_empty())
.unwrap_or(r#"{"text": "{{{message}}}"}"#)
}
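`webhook_template_content` treats a missing or whitespace-only template as "use the default payload". The fallback chain can be reproduced with std only (the default JSON body is copied from the diff):

```rust
// Same Option<String> -> &str fallback as webhook_template_content above.
fn template_content(template: &Option<String>) -> &str {
    template
        .as_deref()
        .filter(|t| !t.trim().is_empty()) // blank templates count as unset
        .unwrap_or(r#"{"text": "{{{message}}}"}"#)
}

fn main() {
    assert_eq!(template_content(&None), r#"{"text": "{{{message}}}"}"#);
    assert_eq!(template_content(&Some("   ".to_string())), r#"{"text": "{{{message}}}"}"#);
    assert_eq!(template_content(&Some(r#"{"x":1}"#.to_string())), r#"{"x":1}"#);
    println!("ok");
}
```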
pub trait NotifierAllExt {
async fn notify_all<'a>(&self, client: &reqwest::Client, message: impl Into<Message<'a>>) -> Result<()>;
}
impl NotifierAllExt for Vec<Notifier> {
async fn notify_all<'a>(&self, client: &reqwest::Client, message: impl Into<Message<'a>>) -> Result<()> {
let message = message.into();
future::join_all(self.iter().map(|notifier| notifier.notify_internal(client, &message))).await;
Ok(())
}
}
impl Notifier {
pub async fn notify<'a>(&self, client: &reqwest::Client, message: impl Into<Message<'a>>) -> Result<()> {
self.notify_internal(client, &message.into()).await
}
async fn notify_internal<'a>(&self, client: &reqwest::Client, message: &Message<'a>) -> Result<()> {
match self {
Notifier::Telegram {
bot_token,
chat_id,
skip_image,
} => {
if let Some(img_url) = &message.image_url
&& !*skip_image
{
let url = format!("https://api.telegram.org/bot{}/sendPhoto", bot_token);
let params = [
("chat_id", chat_id.as_str()),
("photo", img_url.as_str()),
("caption", message.message.as_ref()),
];
client.post(&url).form(&params).send().await?;
} else {
let url = format!("https://api.telegram.org/bot{}/sendMessage", bot_token);
let params = [("chat_id", chat_id.as_str()), ("text", message.message.as_ref())];
client.post(&url).form(&params).send().await?;
}
}
Notifier::Webhook {
url,
template,
headers,
ignore_cache,
} => {
let key = webhook_template_key(url);
let handlebar = TEMPLATE.read();
let payload = match ignore_cache {
Some(_) => handlebar.render_template(webhook_template_content(template), &message)?,
None => handlebar.render(&key, &message)?,
};
let mut headers_map = header::HeaderMap::new();
headers_map.insert(header::CONTENT_TYPE, "application/json".try_into()?);
if let Some(custom_headers) = headers {
for (key, value) in custom_headers {
if let (Ok(key), Ok(value)) =
(header::HeaderName::try_from(key), header::HeaderValue::try_from(value))
{
headers_map.insert(key, value);
}
}
}
client.post(url).headers(headers_map).body(payload).send().await?;
}
}
Ok(())
}
}
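With the new `skip_image` option, the Telegram branch only calls `sendPhoto` when an image URL exists *and* images are not skipped; otherwise it falls back to `sendMessage`. A tiny dispatch sketch (no network; the endpoint names come from the Bot API as used in the diff):

```rust
// Decide which Telegram Bot API method the notifier would hit.
fn telegram_endpoint(image_url: Option<&str>, skip_image: bool) -> &'static str {
    match image_url {
        Some(_) if !skip_image => "sendPhoto",
        _ => "sendMessage",
    }
}

fn main() {
    assert_eq!(telegram_endpoint(Some("https://i.example/cover.jpg"), false), "sendPhoto");
    assert_eq!(telegram_endpoint(Some("https://i.example/cover.jpg"), true), "sendMessage");
    assert_eq!(telegram_endpoint(None, false), "sendMessage");
    println!("ok");
}
```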

View File

@@ -13,7 +13,7 @@ use sea_orm::DatabaseConnection;
use crate::api::{LogHelper, router};
use crate::bilibili::BiliClient;
use crate::config::VersionedConfig;
use crate::config::{VersionedConfig, default_bind_address};
#[derive(RustEmbed)]
#[preserve_source = false]
@@ -26,15 +26,34 @@ pub async fn http_server(
log_writer: LogHelper,
) -> Result<()> {
let app = router()
.fallback_service(get(frontend_files))
.fallback_service(get(frontend_files).head(frontend_files))
.layer(Extension(database_connection))
.layer(Extension(bili_client))
.layer(Extension(log_writer));
let config = VersionedConfig::get().load_full();
let listener = tokio::net::TcpListener::bind(&config.bind_address)
.await
.context("bind address failed")?;
info!("开始运行管理页: http://{}", config.bind_address);
let (bind_address, listener) = {
let bind_address = VersionedConfig::get().read().bind_address.to_owned();
let listen_res = tokio::net::TcpListener::bind(&bind_address)
.await
.context("bind address failed");
match listen_res {
Ok(listener) => (bind_address, listener),
Err(e) => {
let default_bind_address = default_bind_address();
if default_bind_address == bind_address {
return Err(e);
}
warn!(
"绑定到地址 {} 失败:{:#},尝试绑定到默认地址 {}",
bind_address, e, default_bind_address
);
let listener = tokio::net::TcpListener::bind(&default_bind_address)
.await
.context("bind default address failed")?;
(default_bind_address, listener)
}
}
};
info!("开始运行管理页:http://{}", bind_address);
Ok(axum::serve(listener, ServiceExt::<Request>::into_make_service(app)).await?)
}
@@ -48,34 +67,51 @@ async fn frontend_files(request: Request) -> impl IntoResponse {
};
let mime_type = content.mime_type();
let content_type = mime_type.as_deref().unwrap_or("application/octet-stream");
if cfg!(debug_assertions) {
(
[(header::CONTENT_TYPE, content_type)],
// safety: `RustEmbed` returns uncompressed files directly from the filesystem in debug mode
content.data().unwrap(),
)
.into_response()
} else {
let accepted_encodings = request
.headers()
.get(header::ACCEPT_ENCODING)
.and_then(|v| v.to_str().ok())
.map(|s| s.split(',').map(str::trim).collect::<HashSet<_>>())
.unwrap_or_default();
for (encoding, data) in [("br", content.data_br()), ("gzip", content.data_gzip())] {
if accepted_encodings.contains(encoding) {
if let Some(data) = data {
return (
[
(header::CONTENT_TYPE, content_type),
(header::CONTENT_ENCODING, encoding),
],
data,
)
.into_response();
}
}
}
"Unsupported Encoding".into_response()
let default_headers = [
(header::CONTENT_TYPE, content_type),
(header::CACHE_CONTROL, "no-cache"),
(header::ETAG, &content.hash()),
];
if let Some(if_none_match) = request.headers().get(header::IF_NONE_MATCH)
&& let Ok(client_etag) = if_none_match.to_str()
&& client_etag == content.hash()
{
return (StatusCode::NOT_MODIFIED, default_headers).into_response();
}
if request.method() == axum::http::Method::HEAD {
return (StatusCode::OK, default_headers).into_response();
}
if cfg!(debug_assertions) {
// safety: `RustEmbed` returns uncompressed files directly from the filesystem in debug mode
return (StatusCode::OK, default_headers, content.data().unwrap()).into_response();
}
let accepted_encodings = request
.headers()
.get(header::ACCEPT_ENCODING)
.and_then(|v| v.to_str().ok())
.map(|s| s.split(',').map(str::trim).collect::<HashSet<_>>())
.unwrap_or_default();
for (encoding, data) in [("br", content.data_br()), ("gzip", content.data_gzip())] {
if accepted_encodings.contains(encoding)
&& let Some(data) = data
{
return (
StatusCode::OK,
[
(header::CONTENT_TYPE, content_type),
(header::CACHE_CONTROL, "no-cache"),
(header::ETAG, &content.hash()),
(header::CONTENT_ENCODING, encoding),
],
data,
)
.into_response();
}
}
(
StatusCode::NOT_ACCEPTABLE,
"Client must support gzip or brotli compression",
)
.into_response()
}
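The rewritten handler parses `Accept-Encoding` into a `HashSet` before probing the precompressed variants. A standalone sketch of that parsing; note that, like the original, it ignores q-values such as `gzip;q=0.8`:

```rust
use std::collections::HashSet;

// Split the Accept-Encoding header on commas, trim whitespace, and collect
// into a set for O(1) membership checks, as in the diff above.
fn accepted_encodings(header: Option<&str>) -> HashSet<&str> {
    header
        .map(|s| s.split(',').map(str::trim).collect())
        .unwrap_or_default()
}

fn main() {
    let set = accepted_encodings(Some("gzip, br, zstd"));
    assert!(set.contains("br") && set.contains("gzip"));
    assert!(!set.contains("deflate"));
    assert!(accepted_encodings(None).is_empty());
    println!("ok");
}
```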

View File

@@ -2,4 +2,4 @@ mod http_server;
mod video_downloader;
pub use http_server::http_server;
pub use video_downloader::video_downloader;
pub use video_downloader::{DownloadTaskManager, TaskStatus, video_downloader};

View File

@@ -1,62 +1,373 @@
use std::pin::Pin;
use std::sync::Arc;
use std::time::Duration;
use anyhow::{Context, Result, bail};
use sea_orm::DatabaseConnection;
use tokio::time;
use serde::Serialize;
use tokio::sync::{OnceCell, watch};
use tokio_cron_scheduler::{Job, JobScheduler};
use crate::adapter::VideoSource;
use crate::bilibili::{self, BiliClient};
use crate::config::VersionedConfig;
use crate::bilibili::{self, BiliClient, BiliError};
use crate::config::{ARGS, Config, TEMPLATE, Trigger, VersionedConfig};
use crate::utils::model::get_enabled_video_sources;
use crate::utils::task_notifier::TASK_STATUS_NOTIFIER;
use crate::utils::notify::error_and_notify;
use crate::workflow::process_video_source;
static INSTANCE: OnceCell<DownloadTaskManager> = OnceCell::const_new();
/// 启动周期下载视频的任务
pub async fn video_downloader(connection: DatabaseConnection, bili_client: Arc<BiliClient>) {
let mut anchor = chrono::Local::now().date_naive();
loop {
info!("开始执行本轮视频下载任务..");
let _lock = TASK_STATUS_NOTIFIER.start_running().await;
let config = VersionedConfig::get().load_full();
'inner: {
if let Err(e) = config.check() {
error!("配置检查失败,跳过本轮执行:\n{:#}", e);
break 'inner;
}
match bili_client.wbi_img().await.map(|wbi_img| wbi_img.into()) {
Ok(Some(mixin_key)) => bilibili::set_global_mixin_key(mixin_key),
Ok(_) => {
error!("解析 mixin key 失败,等待下一轮执行");
break 'inner;
}
Err(e) => {
error!("获取 mixin key 遇到错误:{:#},等待下一轮执行", e);
break 'inner;
}
};
if anchor != chrono::Local::now().date_naive() {
if let Err(e) = bili_client.check_refresh(&connection).await {
error!("检查刷新 Credential 遇到错误:{:#},等待下一轮执行", e);
break 'inner;
}
anchor = chrono::Local::now().date_naive();
}
let Ok(video_sources) = get_enabled_video_sources(&connection).await else {
error!("获取视频源列表失败,等待下一轮执行");
break 'inner;
};
if video_sources.is_empty() {
info!("没有可用的视频源,等待下一轮执行");
break 'inner;
}
for video_source in video_sources {
let display_name = video_source.display_name();
if let Err(e) = process_video_source(video_source, &bili_client, &connection).await {
error!("处理 {} 时遇到错误:{:#},等待下一轮执行", display_name, e);
}
}
info!("本轮任务执行完毕,等待下一轮执行");
pub async fn video_downloader(connection: DatabaseConnection, bili_client: Arc<BiliClient>) -> Result<()> {
let task_manager = DownloadTaskManager::init(connection, bili_client).await?;
task_manager.start().await
}
pub struct DownloadTaskManager {
sched: Arc<tokio::sync::Mutex<JobScheduler>>,
cx: Arc<TaskContext>,
shutdown_rx: watch::Receiver<Result<()>>,
}
#[derive(Serialize, Default, Clone, Copy, Debug)]
pub struct TaskStatus {
is_running: bool,
last_run: Option<chrono::DateTime<chrono::Local>>,
last_finish: Option<chrono::DateTime<chrono::Local>>,
next_run: Option<chrono::DateTime<chrono::Local>>,
}
struct TaskContext {
connection: DatabaseConnection,
bili_client: Arc<BiliClient>,
running: tokio::sync::Mutex<()>,
status_tx: watch::Sender<TaskStatus>,
status_rx: watch::Receiver<TaskStatus>,
video_task_id: tokio::sync::Mutex<Option<uuid::Uuid>>, // 存储当前视频下载任务的 UUID
}
impl DownloadTaskManager {
/// 初始化 DownloadTaskManager 单例
pub async fn init(
connection: DatabaseConnection,
bili_client: Arc<BiliClient>,
) -> Result<&'static DownloadTaskManager> {
INSTANCE
.get_or_try_init(|| DownloadTaskManager::new(connection, bili_client))
.await
}
/// 获取 DownloadTaskManager 单例,未初始化时直接 panic
pub fn get() -> &'static DownloadTaskManager {
INSTANCE.get().expect("DownloadTaskManager is not initialized")
}
/// 订阅下载任务的状态更新
pub fn subscribe(&self) -> watch::Receiver<TaskStatus> {
self.cx.status_rx.clone()
}
/// 手动执行一次下载任务
pub async fn download_once(&self) -> Result<()> {
let _ = self
.sched
.lock()
.await
.add(Job::new_one_shot_async(
Duration::from_secs(0),
DownloadTaskManager::download_video_task(self.cx.clone()),
)?)
.await?;
Ok(())
}
/// 启动任务调度器
async fn start(&self) -> Result<()> {
self.sched.lock().await.start().await?;
let mut shutdown_rx = self.shutdown_rx.clone();
shutdown_rx.changed().await?;
self.sched.lock().await.shutdown().await.context("任务调度器关闭失败")?;
if let Err(e) = &*shutdown_rx.borrow() {
bail!("{:#}", e);
}
Ok(())
}
/// 私有的调度器构造函数
async fn new(connection: DatabaseConnection, bili_client: Arc<BiliClient>) -> Result<Self> {
let sched = Arc::new(tokio::sync::Mutex::new(JobScheduler::new().await?));
let (status_tx, status_rx) = watch::channel(TaskStatus::default());
let (running, video_task_id) = (tokio::sync::Mutex::new(()), tokio::sync::Mutex::new(None));
let cx = Arc::new(TaskContext {
connection,
bili_client,
running,
status_tx,
status_rx,
video_task_id,
});
// 读取初始配置
let mut rx = VersionedConfig::get().subscribe();
let initial_config = rx.borrow_and_update().clone();
if ARGS.disable_credential_refresh {
warn!("已禁用凭据检查与刷新任务,bili-sync 将不会自动检查刷新 Credential,需要用户自行维护");
} else {
// 初始化凭据检查与刷新任务,该任务必须成功,否则直接退出
sched
.lock()
.await
.add(Job::new_async_tz(
"0 0 1 * * *",
chrono::Local,
DownloadTaskManager::check_and_refresh_credential_task(cx.clone()),
)?)
.await?;
}
// 初始化并添加视频下载任务,将任务 ID 保存到 TaskManager 中
let video_task_id = async {
let job_run = DownloadTaskManager::download_video_task(cx.clone());
let job = match &initial_config.interval {
Trigger::Interval(interval) => Job::new_repeated_async(Duration::from_secs(*interval), job_run)?,
Trigger::Cron(cron) => Job::new_async_tz(cron, chrono::Local, job_run)?,
};
Result::<_, anyhow::Error>::Ok(sched.lock().await.add(job).await?)
}
.await;
let video_task_id = match video_task_id {
Ok(id) => Some(id),
Err(err) => {
error_and_notify(
&initial_config,
&cx.bili_client,
format!("初始化视频下载任务失败:{:#}", err),
);
None
}
};
*cx.video_task_id.lock().await = video_task_id;
// 发起一个一次性的任务,更新一下下次运行的时间
if let Some(video_task_id) = video_task_id {
sched
.lock()
.await
.add(Job::new_one_shot_async(
Duration::from_secs(0),
DownloadTaskManager::refresh_next_run(video_task_id, cx.clone()),
)?)
.await?;
}
// 发起一个新任务,用来监听配置变更,动态更新视频下载任务
let cx_clone = cx.clone();
let sched_clone = sched.clone();
let (shutdown_tx, shutdown_rx) = tokio::sync::watch::channel(Ok(()));
tokio::spawn(async move {
let update_task_result = async {
while rx.changed().await.is_ok() {
let new_config = rx.borrow().clone();
let cx = cx_clone.clone();
let mut video_task_id = cx.video_task_id.lock().await;
if let Some(old_video_task_id) = *video_task_id {
// 这里必须成功,不然后面会重复添加任务
sched_clone
.lock()
.await
.remove(&old_video_task_id)
.await
.context("移除旧的视频下载任务失败")?;
}
let new_video_task_id = async {
let job_run = DownloadTaskManager::download_video_task(cx.clone());
let job = match &new_config.interval {
Trigger::Interval(interval) => {
Job::new_repeated_async(Duration::from_secs(*interval), job_run)?
}
Trigger::Cron(cron) => Job::new_async_tz(cron, chrono::Local, job_run)?,
};
Result::<_, anyhow::Error>::Ok(sched_clone.lock().await.add(job).await?)
}
.await;
let new_video_task_id = match new_video_task_id {
Ok(id) => Some(id),
Err(err) => {
error_and_notify(
&new_config,
&cx.bili_client,
format!("重载视频下载任务失败:{:#}", err),
);
None
}
};
*video_task_id = new_video_task_id;
if let Some(video_task_id) = new_video_task_id {
sched_clone
.lock()
.await
.add(Job::new_one_shot_async(
Duration::from_secs(0),
DownloadTaskManager::refresh_next_run(video_task_id, cx.clone()),
)?)
.await?;
}
}
Result::<(), anyhow::Error>::Ok(())
}
.await;
// 如果执行正常,上面应该是永远不会退出的
let _ = shutdown_tx.send(update_task_result);
});
Ok(Self { sched, cx, shutdown_rx })
}
fn check_and_refresh_credential_task(
cx: Arc<TaskContext>,
) -> impl FnMut(uuid::Uuid, JobScheduler) -> Pin<Box<dyn Future<Output = ()> + Send>> {
move |_uuid, _l| {
let cx = cx.clone();
Box::pin(async move {
let _lock = cx.running.lock().await;
let config = VersionedConfig::get().read();
info!("开始执行本轮凭据检查与刷新任务..");
match check_and_refresh_credential(&cx.connection, &cx.bili_client, &config).await {
Ok(_) => info!("本轮凭据检查与刷新任务执行完毕"),
Err(e) => {
error_and_notify(
&config,
&cx.bili_client,
format!("本轮凭据检查与刷新任务执行遇到错误:{:#}", e),
);
}
}
})
}
}
fn refresh_next_run(
video_task_id: uuid::Uuid,
cx: Arc<TaskContext>,
) -> impl FnMut(uuid::Uuid, JobScheduler) -> Pin<Box<dyn Future<Output = ()> + Send>> {
move |_uuid, mut l| {
let cx = cx.clone();
Box::pin(async move {
let old_status = *cx.status_rx.borrow();
let next_run = l
.next_tick_for_job(video_task_id)
.await
.ok()
.flatten()
.map(|dt| dt.with_timezone(&chrono::Local));
let _ = cx.status_tx.send(TaskStatus { next_run, ..old_status });
})
}
}
fn download_video_task(
cx: Arc<TaskContext>,
) -> impl FnMut(uuid::Uuid, JobScheduler) -> Pin<Box<dyn Future<Output = ()> + Send>> {
move |uuid, mut l| {
let cx = cx.clone();
Box::pin(async move {
let Ok(_lock) = cx.running.try_lock() else {
warn!("上一次视频下载任务尚未结束,跳过本次执行..");
return;
};
let _ = cx.status_tx.send(TaskStatus {
is_running: true,
last_run: Some(chrono::Local::now()),
last_finish: None,
next_run: None,
});
info!("开始执行本轮视频下载任务..");
let mut config = VersionedConfig::get().snapshot();
match download_video(&cx.connection, &cx.bili_client, &mut config).await {
Ok(_) => info!("本轮视频下载任务执行完毕"),
Err(e) => {
error_and_notify(
&config,
&cx.bili_client,
format!("本轮视频下载任务执行遇到错误:{:#}", e),
);
}
}
// 注意此处尽量从 updating 中读取 uuid,因为当前任务可能是不存在 next_tick 的 oneshot 任务
let task_uuid = (*cx.video_task_id.lock().await).unwrap_or(uuid);
let next_run = l
.next_tick_for_job(task_uuid)
.await
.ok()
.flatten()
.map(|dt| dt.with_timezone(&chrono::Local));
let last_status = *cx.status_rx.borrow();
let _ = cx.status_tx.send(TaskStatus {
is_running: false,
last_run: last_status.last_run,
last_finish: Some(chrono::Local::now()),
next_run,
});
})
}
TASK_STATUS_NOTIFIER.finish_running(_lock);
time::sleep(time::Duration::from_secs(config.interval)).await;
}
}
async fn check_and_refresh_credential(
connection: &DatabaseConnection,
bili_client: &BiliClient,
config: &Config,
) -> Result<()> {
match bili_client
.check_refresh(&config.credential)
.await
.context("检查刷新 Credential 失败")?
{
None => {
info!("Credential 无需刷新");
}
Some(new_credential) => {
VersionedConfig::get()
.update_credential(new_credential, connection)
.await
.context("新 Credential 持久化失败")?;
info!("Credential 已刷新并保存");
}
}
Ok(())
}
async fn download_video(
connection: &DatabaseConnection,
bili_client: &BiliClient,
config: &mut Arc<Config>,
) -> Result<()> {
config.check().context("配置检查失败")?;
let mixin_key = bili_client
.wbi_img(&config.credential)
.await
.context("获取 wbi_img 失败")?
.into_mixin_key()
.context("解析 mixin key 失败")?;
bilibili::set_global_mixin_key(mixin_key);
let template = TEMPLATE.snapshot();
let bili_client = bili_client.snapshot()?;
let video_sources = get_enabled_video_sources(connection)
.await
.context("获取视频源列表失败")?;
if video_sources.is_empty() {
bail!("没有可用的视频源");
}
for video_source in video_sources {
let display_name = video_source.display_name();
if let Err(e) = process_video_source(video_source, &bili_client, connection, &template, config).await {
error_and_notify(
config,
&bili_client,
format!("处理 {} 时遇到错误:{:#},跳过该视频源", display_name, e),
);
if let Ok(e) = e.downcast::<BiliError>()
&& e.is_risk_control_related()
{
warn!("检测到风控,终止此轮视频下载任务..");
break;
}
}
}
Ok(())
}
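Several jobs above guard against overlap with `running.try_lock()`, skipping a tick while the previous one is still executing. The same skip-if-busy pattern, with `std::sync::Mutex` standing in for tokio's async mutex:

```rust
use std::sync::Mutex;

// Run the job body only if no other run holds the lock; otherwise record a
// skip, mirroring `cx.running.try_lock()` in the diff above.
fn run_once(running: &Mutex<()>, log: &mut Vec<&'static str>) {
    let Ok(_lock) = running.try_lock() else {
        log.push("skipped: previous run still in progress");
        return;
    };
    log.push("ran");
}

fn main() {
    let running = Mutex::new(());
    let mut log = Vec::new();
    run_once(&running, &mut log);
    {
        let _held = running.lock().unwrap(); // simulate a run in progress
        run_once(&running, &mut log);
    }
    assert_eq!(log, ["ran", "skipped: previous run still in progress"]);
    println!("ok");
}
```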

View File

@@ -10,6 +10,7 @@ impl VideoInfo {
let default = bili_sync_entity::video::ActiveModel {
id: NotSet,
created_at: NotSet,
should_download: NotSet,
// 此处不使用 ActiveModel::default() 是为了让其它字段有默认值
..bili_sync_entity::video::Model::default().into_active_model()
};
@@ -49,7 +50,7 @@ impl VideoInfo {
pubtime: Set(pubtime.naive_utc()),
favtime: Set(fav_time.naive_utc()),
download_status: Set(0),
valid: Set(attr == 0),
valid: Set(attr == 0 || attr == 4),
upper_id: Set(upper.mid),
upper_name: Set(upper.name),
upper_face: Set(upper.face),
@@ -97,13 +98,34 @@ impl VideoInfo {
valid: Set(true),
..default
},
_ => unreachable!(),
VideoInfo::Dynamic {
title,
bvid,
desc,
cover,
pubtime,
} => bili_sync_entity::video::ActiveModel {
bvid: Set(bvid),
name: Set(title),
intro: Set(desc),
cover: Set(cover),
pubtime: Set(pubtime.naive_utc()),
category: Set(2), // 动态里的视频内容类型肯定是视频
valid: Set(true),
..default
},
VideoInfo::Detail { .. } => unreachable!(),
}
}
/// 填充视频详情时调用,该方法会将视频详情附加到原有的 Model 上
/// 特殊地,如果在检测视频更新时记录了 favtime那么 favtime 会维持原样,否则会使用 pubtime 填充
pub fn into_detail_model(self, base_model: bili_sync_entity::video::Model) -> bili_sync_entity::video::ActiveModel {
/// 如果开启 try_upower_anyway,标记视频状态时不再检测是否充电,一律进入后面的下载环节
pub fn into_detail_model(
self,
base_model: bili_sync_entity::video::Model,
try_upower_anyway: bool,
) -> bili_sync_entity::video::ActiveModel {
match self {
VideoInfo::Detail {
title,
@@ -111,28 +133,40 @@ impl VideoInfo {
intro,
cover,
upper,
staff,
ctime,
pubtime,
state,
is_upower_exclusive,
is_upower_play,
redirect_url,
..
} => bili_sync_entity::video::ActiveModel {
bvid: Set(bvid),
name: Set(title),
category: Set(2),
intro: Set(intro),
cover: Set(cover),
ctime: Set(ctime.naive_utc()),
pubtime: Set(pubtime.naive_utc()),
favtime: if base_model.favtime != NaiveDateTime::default() {
NotSet // 之前设置了 favtime,不覆盖
Set(base_model.favtime) // 之前设置了 favtime,使用之前的值(等价于 unset,但设置上以支持后续的规则匹配)
} else {
Set(pubtime.naive_utc()) // 未设置过 favtime,使用 pubtime 填充
},
download_status: Set(0),
valid: Set(state == 0),
// state == 0 表示开放浏览
// is_upower_exclusive 和 is_upower_play 相等有两种情况:
// 1. 都为 true表示视频是充电专享但是已经充过电有权观看
// 2. 都为 false表示视频是非充电视频
// redirect_url 仅在视频为番剧、影视、纪录片等特殊视频时才会有值,如果为空说明是普通视频
// 仅在三种条件都满足时,才认为视频是可下载的
valid: Set(state == 0
&& (try_upower_anyway || (is_upower_exclusive == is_upower_play))
&& redirect_url.is_none()),
upper_id: Set(upper.mid),
upper_name: Set(upper.name),
upper_face: Set(upper.face),
staff: Set(staff.map(Into::into)),
..base_model.into_active_model()
},
_ => unreachable!(),
@@ -145,8 +179,20 @@ impl VideoInfo {
VideoInfo::Collection { pubtime: time, .. }
| VideoInfo::Favorite { fav_time: time, .. }
| VideoInfo::WatchLater { fav_time: time, .. }
| VideoInfo::Submission { ctime: time, .. } => time,
_ => unreachable!(),
| VideoInfo::Submission { ctime: time, .. }
| VideoInfo::Dynamic { pubtime: time, .. } => time,
VideoInfo::Detail { .. } => unreachable!(),
}
}
pub fn bvid_owned(self) -> String {
match self {
VideoInfo::Collection { bvid, .. }
| VideoInfo::Favorite { bvid, .. }
| VideoInfo::WatchLater { bvid, .. }
| VideoInfo::Submission { bvid, .. }
| VideoInfo::Dynamic { bvid, .. }
| VideoInfo::Detail { bvid, .. } => bvid,
}
}
}
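The `valid` expression built in `into_detail_model` combines three independent checks spelled out in the comments above. Restated as a free function, with parameter names mirroring the diff (this is an illustration, not the project's real API):

```rust
// state == 0: video is publicly viewable.
// is_upower_exclusive == is_upower_play: either a non-upower video, or an
// upower-exclusive video the account has already charged for.
// redirect_url present: bangumi/movie/documentary, never downloaded here.
fn is_valid(
    state: i32,
    is_upower_exclusive: bool,
    is_upower_play: bool,
    try_upower_anyway: bool,
    redirect_url: Option<&str>,
) -> bool {
    state == 0
        && (try_upower_anyway || is_upower_exclusive == is_upower_play)
        && redirect_url.is_none()
}

fn main() {
    // Open video, not upower-exclusive, no redirect: downloadable.
    assert!(is_valid(0, false, false, false, None));
    // Upower-exclusive without having charged: skipped unless forced.
    assert!(!is_valid(0, true, false, false, None));
    assert!(is_valid(0, true, false, true, None));
    // Redirected special videos are never downloaded by this path.
    assert!(!is_valid(0, false, false, false, Some("https://example.com/ep1")));
    println!("ok");
}
```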

View File

@@ -0,0 +1,36 @@
use sea_orm::DatabaseConnection;
use crate::adapter::VideoSourceEnum;
use crate::bilibili::BiliClient;
use crate::config::Config;
use crate::downloader::Downloader;
#[derive(Clone, Copy)]
pub struct DownloadContext<'a> {
pub bili_client: &'a BiliClient,
pub video_source: &'a VideoSourceEnum,
pub template: &'a handlebars::Handlebars<'a>,
pub connection: &'a DatabaseConnection,
pub downloader: &'a Downloader,
pub config: &'a Config,
}
impl<'a> DownloadContext<'a> {
pub fn new(
bili_client: &'a BiliClient,
video_source: &'a VideoSourceEnum,
template: &'a handlebars::Handlebars<'a>,
connection: &'a DatabaseConnection,
downloader: &'a Downloader,
config: &'a Config,
) -> Self {
Self {
bili_client,
video_source,
template,
connection,
downloader,
config,
}
}
}

View File

@@ -1,24 +1,21 @@
use serde_json::json;
use crate::config::VersionedConfig;
pub fn video_format_args(video_model: &bili_sync_entity::video::Model) -> serde_json::Value {
let config = VersionedConfig::get().load();
pub fn video_format_args(video_model: &bili_sync_entity::video::Model, time_format: &str) -> serde_json::Value {
json!({
"bvid": &video_model.bvid,
"title": &video_model.name,
"upper_name": &video_model.upper_name,
"upper_mid": &video_model.upper_id,
"pubtime": &video_model.pubtime.and_utc().format(&config.time_format).to_string(),
"fav_time": &video_model.favtime.and_utc().format(&config.time_format).to_string(),
"pubtime": &video_model.pubtime.and_utc().format(time_format).to_string(),
"fav_time": &video_model.favtime.and_utc().format(time_format).to_string(),
})
}
pub fn page_format_args(
video_model: &bili_sync_entity::video::Model,
page_model: &bili_sync_entity::page::Model,
time_format: &str,
) -> serde_json::Value {
let config = VersionedConfig::get().load();
json!({
"bvid": &video_model.bvid,
"title": &video_model.name,
@@ -26,7 +23,7 @@ pub fn page_format_args(
"upper_mid": &video_model.upper_id,
"ptitle": &page_model.name,
"pid": page_model.pid,
"pubtime": video_model.pubtime.and_utc().format(&config.time_format).to_string(),
"fav_time": video_model.favtime.and_utc().format(&config.time_format).to_string(),
"pubtime": video_model.pubtime.and_utc().format(time_format).to_string(),
"fav_time": video_model.favtime.and_utc().format(time_format).to_string(),
})
}

View File

@@ -1,11 +1,13 @@
pub mod convert;
pub mod download_context;
pub mod filenamify;
pub mod format_arg;
pub mod model;
pub mod nfo;
pub mod notify;
pub mod rule;
pub mod signal;
pub mod status;
pub mod task_notifier;
pub mod validation;
use tracing_subscriber::fmt;
use tracing_subscriber::layer::SubscriberExt;

View File

@@ -1,13 +1,14 @@
use anyhow::{Context, Result, anyhow};
use bili_sync_entity::*;
use rand::seq::SliceRandom;
use sea_orm::ActiveValue::Set;
use sea_orm::DatabaseTransaction;
use sea_orm::entity::prelude::*;
use sea_orm::sea_query::{OnConflict, SimpleExpr};
use sea_orm::{DatabaseTransaction, TransactionTrait};
use crate::adapter::{VideoSource, VideoSourceEnum};
use crate::bilibili::VideoInfo;
use crate::config::{Config, LegacyConfig};
use crate::config::Config;
use crate::utils::status::STATUS_COMPLETED;
/// 筛选未填充的视频
@@ -41,6 +42,7 @@ pub async fn filter_unhandled_video_pages(
.and(video::Column::DownloadStatus.lt(STATUS_COMPLETED))
.and(video::Column::Category.eq(2))
.and(video::Column::SinglePage.is_not_null())
.and(video::Column::ShouldDownload.eq(true))
.and(additional_expr),
)
.find_with_related(page::Entity)
@@ -133,6 +135,8 @@ pub async fn get_enabled_video_sources(connection: &DatabaseConnection) -> Resul
sources.extend(watch_later.into_iter().map(VideoSourceEnum::from));
sources.extend(submission.into_iter().map(VideoSourceEnum::from));
sources.extend(collection.into_iter().map(VideoSourceEnum::from));
// 此处将视频源随机打乱顺序,从概率上确保每个视频源都有机会优先执行,避免后面视频源的长期饥饿问题
sources.shuffle(&mut rand::rng());
Ok(sources)
}
@@ -165,69 +169,3 @@ pub async fn save_db_config(config: &Config, connection: &DatabaseConnection) ->
.context("Failed to save config to database")?;
Ok(())
}
/// 迁移旧版本配置(即将所有相关联的内容设置为 enabled)
pub async fn migrate_legacy_config(config: &LegacyConfig, connection: &DatabaseConnection) -> Result<()> {
let transaction = connection.begin().await.context("Failed to begin transaction")?;
tokio::try_join!(
migrate_favorite(config, &transaction),
migrate_watch_later(config, &transaction),
migrate_submission(config, &transaction),
migrate_collection(config, &transaction)
)?;
transaction.commit().await.context("Failed to commit transaction")?;
Ok(())
}
async fn migrate_favorite(config: &LegacyConfig, connection: &DatabaseTransaction) -> Result<()> {
favorite::Entity::update_many()
.filter(favorite::Column::FId.is_in(config.favorite_list.keys().collect::<Vec<_>>()))
.col_expr(favorite::Column::Enabled, Expr::value(true))
.exec(connection)
.await
.context("Failed to migrate favorite config")?;
Ok(())
}
async fn migrate_watch_later(config: &LegacyConfig, connection: &DatabaseTransaction) -> Result<()> {
if config.watch_later.enabled {
watch_later::Entity::update_many()
.col_expr(watch_later::Column::Enabled, Expr::value(true))
.exec(connection)
.await
.context("Failed to migrate watch later config")?;
}
Ok(())
}
async fn migrate_submission(config: &LegacyConfig, connection: &DatabaseTransaction) -> Result<()> {
submission::Entity::update_many()
.filter(submission::Column::UpperId.is_in(config.submission_list.keys().collect::<Vec<_>>()))
.col_expr(submission::Column::Enabled, Expr::value(true))
.exec(connection)
.await
.context("Failed to migrate submission config")?;
Ok(())
}
async fn migrate_collection(config: &LegacyConfig, connection: &DatabaseTransaction) -> Result<()> {
let tuples: Vec<(i64, i64, i32)> = config
.collection_list
.keys()
.filter_map(|key| Some((key.sid.parse().ok()?, key.mid.parse().ok()?, key.collection_type.into())))
.collect();
collection::Entity::update_many()
.filter(
Expr::tuple([
Expr::column(collection::Column::SId),
Expr::column(collection::Column::MId),
Expr::column(collection::Column::Type),
])
.in_tuples(tuples),
)
.col_expr(collection::Column::Enabled, Expr::value(true))
.exec(connection)
.await
.context("Failed to migrate collection config")?;
Ok(())
}


@@ -1,4 +1,5 @@
use anyhow::Result;
use bili_sync_entity::upper_vec::Upper as EntityUpper;
use bili_sync_entity::*;
use chrono::NaiveDateTime;
use quick_xml::Error;
@@ -6,7 +7,7 @@ use quick_xml::events::{BytesCData, BytesText};
use quick_xml::writer::Writer;
use tokio::io::{AsyncWriteExt, BufWriter};
use crate::config::{NFOTimeType, VersionedConfig};
use crate::config::NFOTimeType;
#[allow(clippy::upper_case_acronyms)]
pub enum NFO<'a> {
@@ -20,9 +21,8 @@ pub struct Movie<'a> {
pub name: &'a str,
pub intro: &'a str,
pub bvid: &'a str,
pub upper_id: i64,
pub upper_name: &'a str,
pub aired: NaiveDateTime,
pub uppers: Vec<EntityUpper<i64, &'a str>>,
pub premiered: NaiveDateTime,
pub tags: Option<Vec<String>>,
}
@@ -30,9 +30,8 @@ pub struct TVShow<'a> {
pub name: &'a str,
pub intro: &'a str,
pub bvid: &'a str,
pub upper_id: i64,
pub upper_name: &'a str,
pub aired: NaiveDateTime,
pub uppers: Vec<EntityUpper<i64, &'a str>>,
pub premiered: NaiveDateTime,
pub tags: Option<Vec<String>>,
}
@@ -85,23 +84,29 @@ impl NFO<'_> {
.create_element("title")
.write_text_content_async(BytesText::new(movie.name))
.await?;
writer
.create_element("actor")
.write_inner_content_async::<_, _, Error>(|writer| async move {
writer
.create_element("name")
.write_text_content_async(BytesText::new(&movie.upper_id.to_string()))
.await?;
writer
.create_element("role")
.write_text_content_async(BytesText::new(movie.upper_name))
.await?;
Ok(writer)
})
.await?;
for upper in movie.uppers {
writer
.create_element("actor")
.write_inner_content_async::<_, _, Error>(|writer| async move {
writer
.create_element("name")
.write_text_content_async(BytesText::new(&upper.mid.to_string()))
.await?;
writer
.create_element("role")
.write_text_content_async(BytesText::new(upper.role().as_ref()))
.await?;
writer
.create_element("thumb")
.write_text_content_async(BytesText::new(upper.face))
.await?;
Ok(writer)
})
.await?;
}
writer
.create_element("year")
.write_text_content_async(BytesText::new(&movie.aired.format("%Y").to_string()))
.write_text_content_async(BytesText::new(&movie.premiered.format("%Y").to_string()))
.await?;
if let Some(tags) = movie.tags {
for tag in tags {
@@ -117,8 +122,8 @@ impl NFO<'_> {
.write_text_content_async(BytesText::new(movie.bvid))
.await?;
writer
.create_element("aired")
.write_text_content_async(BytesText::new(&movie.aired.format("%Y-%m-%d").to_string()))
.create_element("premiered")
.write_text_content_async(BytesText::new(&movie.premiered.format("%Y-%m-%d").to_string()))
.await?;
Ok(writer)
})
@@ -139,23 +144,29 @@ impl NFO<'_> {
.create_element("title")
.write_text_content_async(BytesText::new(tvshow.name))
.await?;
writer
.create_element("actor")
.write_inner_content_async::<_, _, Error>(|writer| async move {
writer
.create_element("name")
.write_text_content_async(BytesText::new(&tvshow.upper_id.to_string()))
.await?;
writer
.create_element("role")
.write_text_content_async(BytesText::new(tvshow.upper_name))
.await?;
Ok(writer)
})
.await?;
for upper in tvshow.uppers {
writer
.create_element("actor")
.write_inner_content_async::<_, _, Error>(|writer| async move {
writer
.create_element("name")
.write_text_content_async(BytesText::new(&upper.mid.to_string()))
.await?;
writer
.create_element("role")
.write_text_content_async(BytesText::new(upper.role().as_ref()))
.await?;
writer
.create_element("thumb")
.write_text_content_async(BytesText::new(upper.face))
.await?;
Ok(writer)
})
.await?;
}
writer
.create_element("year")
.write_text_content_async(BytesText::new(&tvshow.aired.format("%Y").to_string()))
.write_text_content_async(BytesText::new(&tvshow.premiered.format("%Y").to_string()))
.await?;
if let Some(tags) = tvshow.tags {
for tag in tags {
@@ -171,8 +182,8 @@ impl NFO<'_> {
.write_text_content_async(BytesText::new(tvshow.bvid))
.await?;
writer
.create_element("aired")
.write_text_content_async(BytesText::new(&tvshow.aired.format("%Y-%m-%d").to_string()))
.create_element("premiered")
.write_text_content_async(BytesText::new(&tvshow.premiered.format("%Y-%m-%d").to_string()))
.await?;
Ok(writer)
})
@@ -252,6 +263,7 @@ mod tests {
name: "name".to_string(),
upper_id: 1,
upper_name: "upper_name".to_string(),
upper_face: "https://i1.hdslb.com/bfs/face/72e8f33cadc72e022fc34624cc69e1b12ebb72c0.jpg".to_string(),
favtime: chrono::NaiveDateTime::new(
chrono::NaiveDate::from_ymd_opt(2022, 2, 2).unwrap(),
chrono::NaiveTime::from_hms_opt(2, 2, 2).unwrap(),
@@ -261,11 +273,14 @@ mod tests {
chrono::NaiveTime::from_hms_opt(3, 3, 3).unwrap(),
),
bvid: "BV1nWcSeeEkV".to_string(),
tags: Some(serde_json::json!(["tag1", "tag2"])),
tags: Some(vec!["tag1".to_owned(), "tag2".to_owned()].into()),
..Default::default()
};
assert_eq!(
NFO::Movie((&video).into()).generate_nfo().await.unwrap(),
NFO::Movie((&video).to_nfo(NFOTimeType::FavTime))
.generate_nfo()
.await
.unwrap(),
r#"<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<movie>
<plot><![CDATA[原始视频:<a href="https://www.bilibili.com/video/BV1nWcSeeEkV/">BV1nWcSeeEkV</a><br/><br/>intro]]></plot>
@@ -274,16 +289,20 @@ mod tests {
<actor>
<name>1</name>
<role>upper_name</role>
<thumb>https://i1.hdslb.com/bfs/face/72e8f33cadc72e022fc34624cc69e1b12ebb72c0.jpg</thumb>
</actor>
<year>2022</year>
<genre>tag1</genre>
<genre>tag2</genre>
<uniqueid type="bilibili">BV1nWcSeeEkV</uniqueid>
<aired>2022-02-02</aired>
<premiered>2022-02-02</premiered>
</movie>"#,
);
assert_eq!(
NFO::TVShow((&video).into()).generate_nfo().await.unwrap(),
NFO::TVShow((&video).to_nfo(NFOTimeType::FavTime))
.generate_nfo()
.await
.unwrap(),
r#"<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<tvshow>
<plot><![CDATA[原始视频:<a href="https://www.bilibili.com/video/BV1nWcSeeEkV/">BV1nWcSeeEkV</a><br/><br/>intro]]></plot>
@@ -292,16 +311,20 @@ mod tests {
<actor>
<name>1</name>
<role>upper_name</role>
<thumb>https://i1.hdslb.com/bfs/face/72e8f33cadc72e022fc34624cc69e1b12ebb72c0.jpg</thumb>
</actor>
<year>2022</year>
<genre>tag1</genre>
<genre>tag2</genre>
<uniqueid type="bilibili">BV1nWcSeeEkV</uniqueid>
<aired>2022-02-02</aired>
<premiered>2022-02-02</premiered>
</tvshow>"#,
);
assert_eq!(
NFO::Upper((&video).into()).generate_nfo().await.unwrap(),
NFO::Upper(((&video, &video.uppers().next().unwrap())).to_nfo(NFOTimeType::FavTime))
.generate_nfo()
.await
.unwrap(),
r#"<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<person>
<plot/>
@@ -318,7 +341,10 @@ mod tests {
..Default::default()
};
assert_eq!(
NFO::Episode((&page).into()).generate_nfo().await.unwrap(),
NFO::Episode((&page).to_nfo(NFOTimeType::FavTime))
.generate_nfo()
.await
.unwrap(),
r#"<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<episodedetails>
<plot/>
@@ -331,60 +357,56 @@ mod tests {
}
}
impl<'a> From<&'a video::Model> for Movie<'a> {
fn from(video: &'a video::Model) -> Self {
Self {
name: &video.name,
intro: &video.intro,
bvid: &video.bvid,
upper_id: video.upper_id,
upper_name: &video.upper_name,
aired: match VersionedConfig::get().load().nfo_time_type {
NFOTimeType::FavTime => video.favtime,
NFOTimeType::PubTime => video.pubtime,
pub trait ToNFO<'a, T> {
fn to_nfo(&'a self, nfo_time_type: NFOTimeType) -> T;
}
impl<'a> ToNFO<'a, Movie<'a>> for &'a video::Model {
fn to_nfo(&'a self, nfo_time_type: NFOTimeType) -> Movie<'a> {
Movie {
name: &self.name,
intro: &self.intro,
bvid: &self.bvid,
uppers: self.uppers().collect(),
premiered: match nfo_time_type {
NFOTimeType::FavTime => self.favtime,
NFOTimeType::PubTime => self.pubtime,
},
tags: video
.tags
.as_ref()
.and_then(|tags| serde_json::from_value(tags.clone()).ok()),
tags: self.tags.as_ref().map(|tags| tags.clone().into()),
}
}
}
impl<'a> From<&'a video::Model> for TVShow<'a> {
fn from(video: &'a video::Model) -> Self {
Self {
name: &video.name,
intro: &video.intro,
bvid: &video.bvid,
upper_id: video.upper_id,
upper_name: &video.upper_name,
aired: match VersionedConfig::get().load().nfo_time_type {
NFOTimeType::FavTime => video.favtime,
NFOTimeType::PubTime => video.pubtime,
impl<'a> ToNFO<'a, TVShow<'a>> for &'a video::Model {
fn to_nfo(&'a self, nfo_time_type: NFOTimeType) -> TVShow<'a> {
TVShow {
name: &self.name,
intro: &self.intro,
bvid: &self.bvid,
uppers: self.uppers().collect(),
premiered: match nfo_time_type {
NFOTimeType::FavTime => self.favtime,
NFOTimeType::PubTime => self.pubtime,
},
tags: video
.tags
.as_ref()
.and_then(|tags| serde_json::from_value(tags.clone()).ok()),
tags: self.tags.as_ref().map(|tags| tags.clone().into()),
}
}
}
impl<'a> From<&'a video::Model> for Upper {
fn from(video: &'a video::Model) -> Self {
Self {
upper_id: video.upper_id.to_string(),
pubtime: video.pubtime,
impl<'a> ToNFO<'a, Upper> for (&video::Model, &EntityUpper<i64, &str>) {
fn to_nfo(&'a self, _nfo_time_type: NFOTimeType) -> Upper {
Upper {
upper_id: self.1.mid.to_string(),
pubtime: self.0.pubtime,
}
}
}
impl<'a> From<&'a page::Model> for Episode<'a> {
fn from(page: &'a page::Model) -> Self {
Self {
name: &page.name,
pid: page.pid.to_string(),
impl<'a> ToNFO<'a, Episode<'a>> for &'a page::Model {
fn to_nfo(&'a self, _nfo_time_type: NFOTimeType) -> Episode<'a> {
Episode {
name: &self.name,
pid: self.pid.to_string(),
}
}
}


@@ -0,0 +1,23 @@
use crate::bilibili::BiliClient;
use crate::config::Config;
use crate::notifier::{Message, NotifierAllExt};
pub fn notify(config: &Config, bili_client: &BiliClient, msg: impl Into<Message<'static>>) {
if let Some(notifiers) = &config.notifiers
&& !notifiers.is_empty()
{
let (notifiers, inner_client) = (notifiers.clone(), bili_client.inner_client().clone());
let msg = msg.into();
tokio::spawn(async move { notifiers.notify_all(&inner_client, msg).await });
}
}
pub fn error_and_notify(config: &Config, bili_client: &BiliClient, msg: String) {
error!("{msg}");
if let Some(notifiers) = &config.notifiers
&& !notifiers.is_empty()
{
let (notifiers, inner_client) = (notifiers.clone(), bili_client.inner_client().clone());
tokio::spawn(async move { notifiers.notify_all(&inner_client, msg).await });
}
}
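Both helpers above follow the same pattern: clone the owned data the background task needs, then fire and forget so the scan loop never blocks on a notifier. The ownership move can be sketched with `std::thread` standing in for `tokio::spawn` (function and variable names here are illustrative, not the repo's API):

```rust
use std::thread;

// Clone-owned data moves into the spawned task; the caller keeps no borrows.
// The real code spawns on tokio and sends over HTTP; this sketch only shows
// the ownership pattern.
fn notify_all(targets: Vec<String>, msg: String) -> Vec<String> {
    thread::spawn(move || targets.iter().map(|t| format!("{t}: {msg}")).collect())
        .join()
        .expect("notifier task panicked")
}

fn main() {
    let sent = notify_all(vec!["telegram".into(), "webhook".into()], "scan done".into());
    assert_eq!(sent, ["telegram: scan done", "webhook: scan done"]);
    println!("ok");
}
```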


@@ -0,0 +1,288 @@
use bili_sync_entity::rule::{AndGroup, Condition, Rule, RuleTarget};
use bili_sync_entity::{page, video};
use chrono::{Local, NaiveDateTime};
pub(crate) trait Evaluatable<T> {
fn evaluate(&self, value: T) -> bool;
}
pub(crate) trait FieldEvaluatable {
fn evaluate(&self, video: &video::ActiveModel, pages: &[page::ActiveModel]) -> bool;
fn evaluate_model(&self, video: &video::Model, pages: &[page::Model]) -> bool;
}
impl Evaluatable<&str> for Condition<String> {
fn evaluate(&self, value: &str) -> bool {
match self {
Condition::Equals(expected) => expected == value,
Condition::Contains(substring) => value.contains(substring),
Condition::IContains(substring) => value.to_lowercase().contains(&substring.to_lowercase()),
Condition::Prefix(prefix) => value.starts_with(prefix),
Condition::Suffix(suffix) => value.ends_with(suffix),
Condition::MatchesRegex(_, regex) => regex.is_match(value),
_ => false,
}
}
}
impl Evaluatable<usize> for Condition<usize> {
fn evaluate(&self, value: usize) -> bool {
match self {
Condition::Equals(expected) => *expected == value,
Condition::GreaterThan(threshold) => value > *threshold,
Condition::LessThan(threshold) => value < *threshold,
Condition::Between(start, end) => value > *start && value < *end,
_ => false,
}
}
}
impl Evaluatable<NaiveDateTime> for Condition<NaiveDateTime> {
fn evaluate(&self, value: NaiveDateTime) -> bool {
match self {
Condition::Equals(expected) => *expected == value,
Condition::GreaterThan(threshold) => value > *threshold,
Condition::LessThan(threshold) => value < *threshold,
Condition::Between(start, end) => value > *start && value < *end,
_ => false,
}
}
}
impl Evaluatable<bool> for Condition<bool> {
fn evaluate(&self, value: bool) -> bool {
match self {
Condition::Equals(expected) => *expected == value,
_ => false,
}
}
}
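A standalone sketch of the string-condition matching above (the real `Condition` lives in `bili_sync_entity::rule` and is generic with more variants; this cut-down version keeps only the string cases to show the matching semantics):

```rust
// Minimal string-only condition enum mirroring the variants handled above.
enum Condition {
    Equals(String),
    Contains(String),
    IContains(String),
    Prefix(String),
    Suffix(String),
}

impl Condition {
    fn evaluate(&self, value: &str) -> bool {
        match self {
            Condition::Equals(expected) => expected == value,
            Condition::Contains(sub) => value.contains(sub.as_str()),
            // case-insensitive containment via lowercase on both sides
            Condition::IContains(sub) => value.to_lowercase().contains(&sub.to_lowercase()),
            Condition::Prefix(prefix) => value.starts_with(prefix.as_str()),
            Condition::Suffix(suffix) => value.ends_with(suffix.as_str()),
        }
    }
}

fn main() {
    assert!(Condition::IContains("RUST".into()).evaluate("learning rust today"));
    assert!(!Condition::Prefix("abc".into()).evaluate("zabc"));
    println!("ok");
}
```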
impl FieldEvaluatable for RuleTarget {
/// Evaluation right after models are modified; at that point only the unsaved ActiveModel is available, so evaluate against the ActiveModel in place
fn evaluate(&self, video: &video::ActiveModel, pages: &[page::ActiveModel]) -> bool {
match self {
RuleTarget::Title(cond) => video.name.try_as_ref().is_some_and(|title| cond.evaluate(title)),
// Every condition is currently evaluated as an any() over all tags independently: Prefix("a") && Suffix("b") means any(tag.Prefix("a")) && any(tag.Suffix("b")), not any(tag.Prefix("a") && tag.Suffix("b"))
// This may not match user expectations, but it should rarely matter; revisit if complex tag filters turn out to be common
RuleTarget::Tags(cond) => video
.tags
.try_as_ref()
.and_then(|t| t.as_ref())
.is_some_and(|tags| tags.0.iter().any(|tag| cond.evaluate(tag))),
RuleTarget::FavTime(cond) => video
.favtime
.try_as_ref()
.map(|fav_time| fav_time.and_utc().with_timezone(&Local).naive_local()) // timestamps are stored as UTC; convert to local time before comparing
.is_some_and(|fav_time| cond.evaluate(fav_time)),
RuleTarget::PubTime(cond) => video
.pubtime
.try_as_ref()
.map(|pub_time| pub_time.and_utc().with_timezone(&Local).naive_local())
.is_some_and(|pub_time| cond.evaluate(pub_time)),
RuleTarget::PageCount(cond) => cond.evaluate(pages.len()),
RuleTarget::SumVideoLength(cond) => pages
.iter()
.try_fold(0usize, |acc, page| {
page.duration.try_as_ref().map(|d| acc + *d as usize).ok_or(())
})
.is_ok_and(|total_length| cond.evaluate(total_length)),
RuleTarget::MultiUpper(cond) => cond.evaluate(video.staff.as_ref().is_some()),
RuleTarget::Not(inner) => !inner.evaluate(video, pages),
}
}
/// Manually triggered evaluation of historical videos; here we get the original Model and can use it directly
fn evaluate_model(&self, video: &video::Model, pages: &[page::Model]) -> bool {
match self {
RuleTarget::Title(cond) => cond.evaluate(&video.name),
// Every condition is currently evaluated as an any() over all tags independently: Prefix("a") && Suffix("b") means any(tag.Prefix("a")) && any(tag.Suffix("b")), not any(tag.Prefix("a") && tag.Suffix("b"))
// This may not match user expectations, but it should rarely matter; revisit if complex tag filters turn out to be common
RuleTarget::Tags(cond) => video
.tags
.as_ref()
.is_some_and(|tags| tags.0.iter().any(|tag| cond.evaluate(tag))),
RuleTarget::FavTime(cond) => cond.evaluate(video.favtime.and_utc().with_timezone(&Local).naive_local()),
RuleTarget::PubTime(cond) => cond.evaluate(video.pubtime.and_utc().with_timezone(&Local).naive_local()),
RuleTarget::PageCount(cond) => cond.evaluate(pages.len()),
RuleTarget::SumVideoLength(cond) => {
cond.evaluate(pages.iter().fold(0usize, |acc, page| acc + page.duration as usize))
}
RuleTarget::MultiUpper(cond) => cond.evaluate(video.staff.is_some()),
RuleTarget::Not(inner) => !inner.evaluate_model(video, pages),
}
}
}
impl FieldEvaluatable for AndGroup {
fn evaluate(&self, video: &video::ActiveModel, pages: &[page::ActiveModel]) -> bool {
self.iter().all(|target| target.evaluate(video, pages))
}
fn evaluate_model(&self, video: &video::Model, pages: &[page::Model]) -> bool {
self.iter().all(|target| target.evaluate_model(video, pages))
}
}
impl FieldEvaluatable for Rule {
fn evaluate(&self, video: &video::ActiveModel, pages: &[page::ActiveModel]) -> bool {
if self.0.is_empty() {
return true;
}
self.0.iter().any(|group| group.evaluate(video, pages))
}
fn evaluate_model(&self, video: &video::Model, pages: &[page::Model]) -> bool {
if self.0.is_empty() {
return true;
}
self.0.iter().any(|group| group.evaluate_model(video, pages))
}
}
/// For Option<Rule>, a missing rule is treated as passing the evaluation
impl FieldEvaluatable for Option<Rule> {
fn evaluate(&self, video: &video::ActiveModel, pages: &[page::ActiveModel]) -> bool {
self.as_ref().is_none_or(|rule| rule.evaluate(video, pages))
}
fn evaluate_model(&self, video: &video::Model, pages: &[page::Model]) -> bool {
self.as_ref().is_none_or(|rule| rule.evaluate_model(video, pages))
}
}
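The layering above — a `Rule` is an OR over `AndGroup`s, each group an AND over targets, and an empty or missing rule passes everything — can be sketched with plain predicates over a string (all names here are illustrative):

```rust
// A target is modeled as a bare predicate; an AndGroup as Vec<Pred>;
// a Rule as a slice of groups.
type Pred = fn(&str) -> bool;

fn eval_rule(groups: &[Vec<Pred>], value: &str) -> bool {
    if groups.is_empty() {
        return true; // an empty rule accepts everything
    }
    // OR across groups, AND within each group
    groups.iter().any(|group| group.iter().all(|pred| pred(value)))
}

fn eval_opt_rule(rule: Option<&[Vec<Pred>]>, value: &str) -> bool {
    // no rule configured: accept
    rule.map_or(true, |groups| eval_rule(groups, value))
}

fn main() {
    let has_rust: Pred = |s| s.contains("rust");
    let short: Pred = |s| s.len() < 20;
    let groups = vec![vec![has_rust, short]];
    assert!(eval_rule(&groups, "rust intro"));
    assert!(!eval_rule(&groups, "a very long title about rust and more"));
    assert!(eval_opt_rule(None, "anything"));
    println!("ok");
}
```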
#[cfg(test)]
mod tests {
use bili_sync_entity::page;
use chrono::NaiveDate;
use sea_orm::ActiveValue::Set;
use super::*;
#[test]
fn test_display() {
let test_cases = vec![
(
Rule(vec![vec![RuleTarget::Title(Condition::Contains("唐氏".to_string()))]]),
"「(标题包含“唐氏”)」",
),
(
Rule(vec![vec![
RuleTarget::Title(Condition::Prefix("街霸".to_string())),
RuleTarget::Tags(Condition::Contains("套路".to_string())),
]]),
"「(标题以“街霸”开头)且(标签包含“套路”)」",
),
(
Rule(vec![
vec![
RuleTarget::Title(Condition::Contains("Rust".to_string())),
RuleTarget::PageCount(Condition::GreaterThan(5)),
],
vec![
RuleTarget::Tags(Condition::Suffix("入门".to_string())),
RuleTarget::PubTime(Condition::GreaterThan(
NaiveDate::from_ymd_opt(2023, 1, 1)
.unwrap()
.and_hms_opt(0, 0, 0)
.unwrap(),
)),
],
]),
"「(标题包含“Rust”)且(视频分页数量大于“5”)」或「(标签以“入门”结尾)且(发布时间大于“2023-01-01 00:00:00”)」",
),
(
Rule(vec![vec![
RuleTarget::Not(Box::new(RuleTarget::Title(Condition::Contains("广告".to_string())))),
RuleTarget::PageCount(Condition::LessThan(10)),
]]),
"「(标题不包含“广告”)且(视频分页数量小于“10”)」",
),
(
Rule(vec![vec![
RuleTarget::FavTime(Condition::Between(
NaiveDate::from_ymd_opt(2023, 6, 1)
.unwrap()
.and_hms_opt(0, 0, 0)
.unwrap(),
NaiveDate::from_ymd_opt(2023, 12, 31)
.unwrap()
.and_hms_opt(23, 59, 59)
.unwrap(),
)),
// autocorrect-disable
RuleTarget::Tags(Condition::MatchesRegex(
"技术|教程".to_string(),
regex::Regex::new("技术|教程").unwrap(),
)),
]]),
"「(收藏时间在“2023-06-01 00:00:00”和“2023-12-31 23:59:59”之间)且(标签匹配“技术|教程”)」",
// autocorrect-enable
),
];
for (rule, expected) in test_cases {
assert_eq!(rule.to_string(), expected);
}
}
#[test]
fn test_evaluate() {
let test_cases = vec![
(
(
video::ActiveModel {
name: Set("骂谁唐氏呢!!!".to_string()),
..Default::default()
},
vec![],
),
Rule(vec![vec![RuleTarget::Title(Condition::Contains("唐氏".to_string()))]]),
true,
),
(
(
video::ActiveModel::default(),
vec![page::ActiveModel::default(); 2],
),
Rule(vec![vec![RuleTarget::PageCount(Condition::Equals(1))]]),
false,
),
(
(
video::ActiveModel{
tags: Set(Some(vec!["原神".to_owned(),"永雏塔菲".to_owned(),"虚拟主播".to_owned()].into())),
..Default::default()
},
vec![],
),
Rule (vec![vec![RuleTarget::Not(Box::new(RuleTarget::Tags(Condition::Equals(
"原神".to_string(),
))))]],
),
false,
),
(
(
video::ActiveModel {
name: Set(
"万字怒扒网易《归唐》底裤中国首款大厂买断制单机靠谱吗——全网最全官方非独家幕后关于《归唐》PV 的所有秘密~都在这里了~".to_owned(),
),
..Default::default()
},
vec![],
),
Rule(vec![vec![RuleTarget::Not(Box::new(RuleTarget::Title(Condition::MatchesRegex(
r"^\S+字(解析|怒扒|拆解)".to_owned(),
regex::Regex::new(r"^\S+字(解析|怒扒)").unwrap(),
))))]],
),
false,
),
];
for ((video, pages), rule, expected) in test_cases {
assert_eq!(rule.evaluate(&video, &pages), expected);
}
}
}


@@ -1,3 +1,10 @@
use std::marker::PhantomData;
use bili_sync_entity::{page, video};
use bili_sync_migration::{ExprTrait, IntoCondition};
use sea_orm::sea_query::Expr;
use sea_orm::{ColumnTrait, Condition};
use crate::error::ExecutionStatus;
pub static STATUS_NOT_STARTED: u32 = 0b000;
@@ -11,10 +18,17 @@ pub static STATUS_COMPLETED: u32 = 1 << 31;
/// When a subtask succeeds, its status is set to 0b111, defined as STATUS_OK.
/// A subtask is considered finished once it either succeeds or reaches the maximum retry count.
/// When all subtasks are finished, the highest bit is set to 1 to mark the whole download task as completed.
#[derive(Clone, Copy, Default)]
pub struct Status<const N: usize>(u32);
#[derive(Clone, Copy)]
pub struct Status<const N: usize, C>(u32, PhantomData<C>);
impl<const N: usize> Status<N> {
impl<const N: usize, C> Default for Status<N, C> {
fn default() -> Self {
Self(0, PhantomData)
}
}
impl<const N: usize, C> Status<N, C> {
pub(crate) const LEN: usize = N;
// Read the completed flag from the highest bit
pub fn get_completed(&self) -> bool {
self.0 >> 31 == 1
@@ -34,11 +48,14 @@ impl<const N: usize> Status<N> {
let mut changed = false;
for i in 0..N {
let status = self.get_status(i);
if !(status < STATUS_MAX_RETRY || status == STATUS_OK) {
if status != STATUS_NOT_STARTED && status != STATUS_OK {
self.set_status(i, STATUS_NOT_STARTED);
changed = true;
}
}
if changed {
self.set_completed(false);
}
changed
}
@@ -51,8 +68,8 @@ impl<const N: usize> Status<N> {
// Edge case: a new version may introduce an extra subtask, leaving rows where some subtask has not run yet but the completed flag is still true
// Globally resetting the completed flag in a migration would work, but the blast radius feels too large
// The extra check below handles that case instead, fixing the completed flag inside a user-triggered reset_failed call
if self.should_run().into_iter().any(|x| x) {
changed |= self.get_completed();
if !changed && self.get_completed() && self.should_run().into_iter().any(|x| x) {
changed = true;
self.set_completed(false);
}
changed
@@ -119,8 +136,8 @@ impl<const N: usize> Status<N> {
/// 根据子任务执行结果更新子任务的状态
fn set_result(&mut self, result: &ExecutionStatus, offset: usize) {
// If the task returns a FixedFailed status, set the status to that FixedFailed value regardless of the previous state
if let ExecutionStatus::FixedFailed(status, _) = result {
// If the task returns a Fixed status, set the status to that Fixed value regardless of the previous state
if let ExecutionStatus::Fixed(status) = result {
assert!(*status < 0b1000, "status should be less than 0b1000");
self.set_status(offset, *status);
} else if self.get_status(offset) < STATUS_MAX_RETRY {
@@ -133,20 +150,20 @@ impl<const N: usize> Status<N> {
}
}
impl<const N: usize> From<u32> for Status<N> {
impl<const N: usize, C> From<u32> for Status<N, C> {
fn from(status: u32) -> Self {
Status(status)
Status(status, PhantomData)
}
}
impl<const N: usize> From<Status<N>> for u32 {
fn from(status: Status<N>) -> Self {
impl<const N: usize, C> From<Status<N, C>> for u32 {
fn from(status: Status<N, C>) -> Self {
status.0
}
}
impl<const N: usize> From<Status<N>> for [u32; N] {
fn from(status: Status<N>) -> Self {
impl<const N: usize, C> From<Status<N, C>> for [u32; N] {
fn from(status: Status<N, C>) -> Self {
let mut result = [0; N];
for (i, item) in result.iter_mut().enumerate() {
*item = status.get_status(i);
@@ -155,9 +172,9 @@ impl<const N: usize> From<Status<N>> for [u32; N] {
}
}
impl<const N: usize> From<[u32; N]> for Status<N> {
impl<const N: usize, C> From<[u32; N]> for Status<N, C> {
fn from(status: [u32; N]) -> Self {
let mut result = Status::<N>::default();
let mut result = Self::default();
for (i, item) in status.iter().enumerate() {
assert!(*item < 0b1000, "status should be less than 0b1000");
result.set_status(i, *item);
@@ -170,20 +187,74 @@ impl<const N: usize> From<[u32; N]> for Status<N> {
}
/// Contains five subtasks, in order: video cover, video info, uploader avatar, uploader info, page download
pub type VideoStatus = Status<5>;
pub type VideoStatus = Status<5, video::Column>;
impl VideoStatus {
pub fn query_builder() -> StatusQueryBuilder<{ Self::LEN }, video::Column> {
StatusQueryBuilder::new(video::Column::DownloadStatus)
}
}
/// Contains five subtasks, in order: video cover, video content, video info, video danmaku, video subtitles
pub type PageStatus = Status<5>;
pub type PageStatus = Status<5, page::Column>;
impl PageStatus {
pub fn query_builder() -> StatusQueryBuilder<{ Self::LEN }, page::Column> {
StatusQueryBuilder::new(page::Column::DownloadStatus)
}
}
pub struct StatusQueryBuilder<const N: usize, C: ColumnTrait> {
column: C,
}
impl<const N: usize, C: ColumnTrait> StatusQueryBuilder<N, C> {
fn new(column: C) -> Self {
Self { column }
}
/// Succeeded: every subtask status is success
pub fn succeeded(&self) -> Condition {
let mut condition = Condition::all();
for offset in 0..N as i32 {
condition = condition.add(Expr::col(self.column).right_shift(offset * 3).bit_and(7).eq(7))
}
condition
}
/// Failed: at least one subtask has failed
pub fn failed(&self) -> Condition {
let mut condition = Condition::any();
for offset in 0..N as i32 {
condition = condition.add(
Expr::col(self.column)
.right_shift(offset * 3)
.bit_and(7)
.is_not_in([0, 7]),
)
}
condition
}
/// Waiting: no subtask has failed and at least one has not started
pub fn waiting(&self) -> Condition {
let mut condition = Condition::any();
for offset in 0..N as i32 {
condition = condition.add(Expr::col(self.column).right_shift(offset * 3).bit_and(7).eq(0))
}
condition.and(self.failed().not()).into_condition()
}
}
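All three SQL conditions built above reduce to `(col >> 3*i) & 7` per subtask. The same classification in plain Rust makes the `succeeded`/`failed`/`waiting` semantics concrete (a sketch of the predicate logic, not the sea-orm query code):

```rust
// 3-bit status of subtask i inside the packed u32.
fn sub(status: u32, i: usize) -> u32 {
    (status >> (i * 3)) & 7
}

// Every subtask reached STATUS_OK (0b111).
fn succeeded(status: u32, n: usize) -> bool {
    (0..n).all(|i| sub(status, i) == 7)
}

// Some subtask is neither untouched (0) nor succeeded (7).
fn failed(status: u32, n: usize) -> bool {
    (0..n).any(|i| !matches!(sub(status, i), 0 | 7))
}

// Nothing failed, but at least one subtask has not started.
fn waiting(status: u32, n: usize) -> bool {
    !failed(status, n) && (0..n).any(|i| sub(status, i) == 0)
}

fn main() {
    let all_ok = 0b111_111_111;
    assert!(succeeded(all_ok, 3));
    let one_pending = 0b111_000_111; // middle subtask not started
    assert!(waiting(one_pending, 3) && !failed(one_pending, 3));
    let one_failed = 0b111_100_111; // middle subtask at 4 retries
    assert!(failed(one_failed, 3));
    println!("ok");
}
```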
#[cfg(test)]
mod test {
mod tests {
use anyhow::anyhow;
use super::*;
#[test]
fn test_status_update() {
let mut status = Status::<3>::default();
let mut status = Status::<3, video::Column>::default();
assert_eq!(status.should_run(), [true, true, true]);
for _ in 0..3 {
status.update_status(&[
@@ -201,9 +272,9 @@ mod test {
assert_eq!(status.should_run(), [false, false, false]);
assert!(status.get_completed());
status.update_status(&[
ExecutionStatus::FixedFailed(1, anyhow!("")),
ExecutionStatus::FixedFailed(4, anyhow!("")),
ExecutionStatus::FixedFailed(7, anyhow!("")),
ExecutionStatus::Fixed(1),
ExecutionStatus::Fixed(4),
ExecutionStatus::Fixed(7),
]);
assert_eq!(status.should_run(), [true, false, false]);
assert!(!status.get_completed());
@@ -214,7 +285,7 @@ mod test {
fn test_status_convert() {
let testcases = [[0, 0, 1], [1, 2, 3], [3, 1, 2], [3, 0, 7]];
for testcase in testcases.iter() {
let status = Status::<3>::from(testcase.clone());
let status = Status::<3, video::Column>::from(testcase.clone());
assert_eq!(<[u32; 3]>::from(status), *testcase);
}
}
@@ -223,7 +294,7 @@ mod test {
fn test_status_convert_and_update() {
let testcases = [([0, 0, 1], [1, 7, 7]), ([3, 4, 3], [4, 4, 7]), ([3, 1, 7], [4, 7, 7])];
for (before, after) in testcases.iter() {
let mut status = Status::<3>::from(before.clone());
let mut status = Status::<3, video::Column>::from(before.clone());
status.update_status(&[
ExecutionStatus::Failed(anyhow!("")),
ExecutionStatus::Succeeded,
@@ -235,12 +306,12 @@ mod test {
#[test]
fn test_status_reset_failed() {
// Reset a task that has already failed
let mut status = Status::<3>::from([3, 4, 7]);
// Reset a task with partial failures that still has retries left, setting every failed status back to 0
let mut status = Status::<3, video::Column>::from([3, 4, 7]);
assert!(!status.get_completed());
assert!(status.reset_failed());
assert!(!status.get_completed());
assert_eq!(<[u32; 3]>::from(status), [3, 0, 7]);
assert_eq!(<[u32; 3]>::from(status), [0, 0, 7]);
// Nothing to reset, but the completed flag is wrong (simulating a newly added subtask status)
// In this case reset_failed does not fix the completed flag, while force_reset_failed does
status.set_completed(true);
@@ -250,22 +321,28 @@ mod test {
assert!(status.force_reset_failed());
assert!(!status.get_completed());
// Resetting a fully succeeded task changes neither the statuses nor the flag
let mut status = Status::<3>::from([7, 7, 7]);
let mut status = Status::<3, video::Column>::from([7, 7, 7]);
assert!(status.get_completed());
assert!(!status.reset_failed());
assert!(status.get_completed());
// Resetting a task whose subtasks all failed changes both the statuses and the flag
let mut status = Status::<3, video::Column>::from([4, 4, 4]);
assert!(status.get_completed());
assert!(status.reset_failed());
assert!(!status.get_completed());
assert_eq!(<[u32; 3]>::from(status), [0, 0, 0]);
}
#[test]
fn test_status_set() {
// Set a sub-status, going from completed to uncompleted
let mut status = Status::<5>::from([7, 7, 7, 7, 7]);
let mut status = Status::<5, video::Column>::from([7, 7, 7, 7, 7]);
assert!(status.get_completed());
status.set(4, 0);
assert!(!status.get_completed());
assert_eq!(<[u32; 5]>::from(status), [7, 7, 7, 7, 0]);
// Set a sub-status, going from uncompleted to completed
let mut status = Status::<5>::from([4, 7, 7, 7, 0]);
let mut status = Status::<5, video::Column>::from([4, 7, 7, 7, 0]);
assert!(!status.get_completed());
status.set(4, 7);
assert!(status.get_completed());


@@ -1,79 +0,0 @@
use std::sync::{Arc, LazyLock};
use serde::Serialize;
use tokio::sync::MutexGuard;
use crate::config::VersionedConfig;
pub static TASK_STATUS_NOTIFIER: LazyLock<TaskStatusNotifier> = LazyLock::new(TaskStatusNotifier::new);
#[derive(Serialize)]
pub struct TaskStatus {
is_running: bool,
last_run: Option<chrono::DateTime<chrono::Local>>,
last_finish: Option<chrono::DateTime<chrono::Local>>,
next_run: Option<chrono::DateTime<chrono::Local>>,
}
pub struct TaskStatusNotifier {
mutex: tokio::sync::Mutex<()>,
tx: tokio::sync::watch::Sender<Arc<TaskStatus>>,
rx: tokio::sync::watch::Receiver<Arc<TaskStatus>>,
}
impl Default for TaskStatus {
fn default() -> Self {
Self {
is_running: false,
last_run: None,
last_finish: None,
next_run: None,
}
}
}
impl TaskStatusNotifier {
pub fn new() -> Self {
let (tx, rx) = tokio::sync::watch::channel(Arc::new(TaskStatus::default()));
Self {
mutex: tokio::sync::Mutex::const_new(()),
tx,
rx,
}
}
pub async fn start_running(&self) -> MutexGuard<()> {
let lock = self.mutex.lock().await;
let _ = self.tx.send(Arc::new(TaskStatus {
is_running: true,
last_run: Some(chrono::Local::now()),
last_finish: None,
next_run: None,
}));
lock
}
pub fn finish_running(&self, _lock: MutexGuard<()>) {
let last_status = self.tx.borrow();
let last_run = last_status.last_run.clone();
drop(last_status);
let config = VersionedConfig::get().load();
let now = chrono::Local::now();
let _ = self.tx.send(Arc::new(TaskStatus {
is_running: false,
last_run,
last_finish: Some(now),
next_run: now.checked_add_signed(chrono::Duration::seconds(config.interval as i64)),
}));
}
/// Precisely probe the task execution state, guaranteeing that if "not running" is observed, the task cannot start until the returned guard is released
pub fn detect_running(&self) -> Option<MutexGuard<'_, ()>> {
self.mutex.try_lock().ok()
}
pub fn subscribe(&self) -> tokio::sync::watch::Receiver<Arc<TaskStatus>> {
self.rx.clone()
}
}


@@ -3,6 +3,7 @@ use std::path::{Path, PathBuf};
use std::pin::Pin;
use anyhow::{Context, Result, anyhow, bail};
use bili_sync_entity::upper_vec::Upper;
use bili_sync_entity::*;
use futures::stream::FuturesUnordered;
use futures::{Stream, StreamExt, TryStreamExt};
@@ -14,15 +15,19 @@ use tokio::sync::Semaphore;
use crate::adapter::{VideoSource, VideoSourceEnum};
use crate::bilibili::{BestStream, BiliClient, BiliError, Dimension, PageInfo, Video, VideoInfo};
use crate::config::{ARGS, PathSafeTemplate, TEMPLATE, VersionedConfig};
use crate::config::{ARGS, Config, PathSafeTemplate};
use crate::downloader::Downloader;
use crate::error::{DownloadAbortError, ExecutionStatus, ProcessPageError};
use crate::error::ExecutionStatus;
use crate::notifier::DownloadNotifyInfo;
use crate::utils::download_context::DownloadContext;
use crate::utils::format_arg::{page_format_args, video_format_args};
use crate::utils::model::{
create_pages, create_videos, filter_unfilled_videos, filter_unhandled_video_pages, update_pages_model,
update_videos_model,
};
use crate::utils::nfo::NFO;
use crate::utils::nfo::{NFO, ToNFO};
use crate::utils::notify::notify;
use crate::utils::rule::FieldEvaluatable;
use crate::utils::status::{PageStatus, STATUS_OK, VideoStatus};
/// 完整地处理某个视频来源
@@ -30,18 +35,28 @@ pub async fn process_video_source(
video_source: VideoSourceEnum,
bili_client: &BiliClient,
connection: &DatabaseConnection,
template: &handlebars::Handlebars<'_>,
config: &Config,
) -> Result<()> {
// Pre-create the video source directory to detect unwritable paths early
video_source.create_dir_all().await?;
// Get the video list Model and the video stream from the arguments
let (video_source, video_streams) = video_source.refresh(bili_client, connection).await?;
let (video_source, video_streams) = video_source
.refresh(bili_client, &config.credential, connection)
.await?;
// Pull brief info for new videos from the stream and write it to the database
refresh_video_source(&video_source, video_streams, connection).await?;
// Query the video detail API separately for full details and all pages, writing them to the database
fetch_video_details(bili_client, &video_source, connection).await?;
fetch_video_details(bili_client, &video_source, connection, config).await?;
if ARGS.scan_only {
warn!("已开启仅扫描模式,跳过视频下载..");
} else {
// Find all undownloaded videos and pages in the database, then download and process them
download_unprocessed_videos(bili_client, &video_source, connection).await?;
let download_notify_info =
download_unprocessed_videos(bili_client, &video_source, connection, template, config).await?;
if download_notify_info.should_notify() {
notify(config, bili_client, download_notify_info);
}
}
Ok(())
}
@@ -57,10 +72,18 @@ pub async fn refresh_video_source<'a>(
let mut max_datetime = latest_row_at;
let mut error = Ok(());
let mut video_streams = video_streams
.take_while(|res| {
.enumerate()
.take_while(|(idx, res)| {
match res {
Err(e) => {
error = Err(anyhow!(e.to_string()));
// e here is a reference, so ownership cannot be moved out directly
// For BiliError, clone the inner error and attach the original context so callers can still inspect the error type
// For other errors, keep only the string message as a hint
if let Some(inner) = e.downcast_ref::<BiliError>() {
error = Err(inner.clone()).context(e.to_string());
} else {
error = Err(anyhow!("{:#}", e));
}
futures::future::ready(false)
}
Ok(v) => {
@@ -71,11 +94,11 @@ pub async fn refresh_video_source<'a>(
if release_datetime > &max_datetime {
max_datetime = *release_datetime;
}
futures::future::ready(video_source.should_take(release_datetime, &latest_row_at))
futures::future::ready(video_source.should_take(*idx, release_datetime, &latest_row_at))
}
}
})
.filter_map(|res| futures::future::ready(video_source.should_filter(res, &latest_row_at)))
.filter_map(|(idx, res)| futures::future::ready(video_source.should_filter(idx, res, &latest_row_at)))
.chunks(10);
let mut count = 0;
while let Some(videos_info) = video_streams.next().await {
@@ -99,16 +122,17 @@ pub async fn fetch_video_details(
bili_client: &BiliClient,
video_source: &VideoSourceEnum,
connection: &DatabaseConnection,
config: &Config,
) -> Result<()> {
video_source.log_fetch_video_start();
let videos_model = filter_unfilled_videos(video_source.filter_expr(), connection).await?;
let semaphore = Semaphore::new(VersionedConfig::get().load().concurrent_limit.video);
let semaphore = Semaphore::new(config.concurrent_limit.video);
let semaphore_ref = &semaphore;
let tasks = videos_model
.into_iter()
.map(|video_model| async move {
let _permit = semaphore_ref.acquire().await.context("acquire semaphore failed")?;
let video = Video::new(bili_client, video_model.bvid.clone());
let video = Video::new(bili_client, video_model.bvid.as_str(), &config.credential);
let info: Result<_> = async { Ok((video.get_tags().await?, video.get_view_info().await?)) }.await;
match info {
Err(e) => {
@@ -116,7 +140,7 @@ pub async fn fetch_video_details(
"获取视频 {} - {} 的详细信息失败,错误为:{:#}",
&video_model.bvid, &video_model.name, e
);
if let Some(BiliError::RequestFailed(-404, _)) = e.downcast_ref::<BiliError>() {
if let Some(BiliError::ErrorResponse(-404, _)) = e.downcast_ref::<BiliError>() {
let mut video_active_model: bili_sync_entity::video::ActiveModel = video_model.into();
video_active_model.valid = Set(false);
video_active_model.save(connection).await?;
@@ -133,10 +157,11 @@ pub async fn fetch_video_details(
.map(|p| p.into_active_model(video_model.id))
.collect::<Vec<page::ActiveModel>>();
// 更新 video model 的各项有关属性
let mut video_active_model = view_info.into_detail_model(video_model);
let mut video_active_model = view_info.into_detail_model(video_model, config.try_upower_anyway);
video_source.set_relation_id(&mut video_active_model);
video_active_model.single_page = Set(Some(pages.len() == 1));
video_active_model.tags = Set(Some(serde_json::to_value(tags)?));
video_active_model.tags = Set(Some(tags.into()));
video_active_model.should_download = Set(video_source.rule().evaluate(&video_active_model, &pages));
let txn = connection.begin().await?;
create_pages(pages, &txn).await?;
video_active_model.save(&txn).await?;
@@ -146,7 +171,7 @@ pub async fn fetch_video_details(
Ok::<_, anyhow::Error>(())
})
.collect::<FuturesUnordered<_>>();
tasks.try_collect::<Vec<_>>().await?;
tasks.try_collect::<()>().await?;
video_source.log_fetch_video_end();
Ok(())
}
@@ -156,126 +181,131 @@ pub async fn download_unprocessed_videos(
bili_client: &BiliClient,
video_source: &VideoSourceEnum,
connection: &DatabaseConnection,
) -> Result<()> {
template: &handlebars::Handlebars<'_>,
config: &Config,
) -> Result<DownloadNotifyInfo> {
video_source.log_download_video_start();
let semaphore = Semaphore::new(VersionedConfig::get().load().concurrent_limit.video);
let semaphore = Semaphore::new(config.concurrent_limit.video);
let downloader = Downloader::new(bili_client.client.clone());
let cx = DownloadContext::new(bili_client, video_source, template, connection, &downloader, config);
let unhandled_videos_pages = filter_unhandled_video_pages(video_source.filter_expr(), connection).await?;
let mut assigned_upper = HashSet::new();
let mut assigned_upper_ids = HashSet::new();
let tasks = unhandled_videos_pages
.into_iter()
.map(|(video_model, pages_model)| {
let should_download_upper = !assigned_upper.contains(&video_model.upper_id);
assigned_upper.insert(video_model.upper_id);
download_video_pages(
bili_client,
video_source,
video_model,
pages_model,
connection,
&semaphore,
&downloader,
should_download_upper,
)
// In principle we could use assigned_upper_ids directly here, but rustc would incorrectly
// treat the future as borrowing a local variable and fail to compile. As a workaround,
// extract an owned list of upper ids per task and filter inside the task instead.
let task_uids = video_model
.uppers()
.map(|u| u.mid)
.filter(|uid| assigned_upper_ids.insert(*uid))
.collect::<Vec<_>>();
download_video_pages(video_model, pages_model, &semaphore, task_uids, cx)
})
.collect::<FuturesUnordered<_>>();
let mut download_aborted = false;
let mut risk_control_related_error = None;
let mut stream = tasks
// On risk control, record the error and terminate the stream
.take_while(|res| {
if res
.as_ref()
.is_err_and(|e| e.downcast_ref::<DownloadAbortError>().is_some())
if let Err(e) = res
&& let Some(e) = e.downcast_ref::<BiliError>()
&& e.is_risk_control_related()
{
download_aborted = true;
risk_control_related_error = Some(e.clone());
}
futures::future::ready(!download_aborted)
futures::future::ready(risk_control_related_error.is_none())
})
// Filter out ordinary (non risk-control) Errs, keeping only successfully returned Models
.filter_map(|res| futures::future::ready(res.ok()))
// Batch the successfully returned Models into groups of ten
.chunks(10);
let mut download_notify_info = DownloadNotifyInfo::new(video_source.display_name().into());
while let Some(models) = stream.next().await {
download_notify_info.record(&models);
update_videos_model(models, connection).await?;
}
if download_aborted {
error!("下载触发风控,已终止所有任务,等待下一轮执行");
if let Some(e) = risk_control_related_error {
bail!(e);
}
video_source.log_download_video_end();
Ok(())
Ok(download_notify_info)
}
#[allow(clippy::too_many_arguments)]
pub async fn download_video_pages(
bili_client: &BiliClient,
video_source: &VideoSourceEnum,
video_model: video::Model,
pages: Vec<page::Model>,
connection: &DatabaseConnection,
page_models: Vec<page::Model>,
semaphore: &Semaphore,
downloader: &Downloader,
should_download_upper: bool,
upper_uids: Vec<i64>,
cx: DownloadContext<'_>,
) -> Result<video::ActiveModel> {
let _permit = semaphore.acquire().await.context("acquire semaphore failed")?;
let mut status = VideoStatus::from(video_model.download_status);
let separate_status = status.should_run();
let base_path = video_source.path().join(
TEMPLATE
.load()
.path_safe_render("video", &video_format_args(&video_model))?,
);
let upper_id = video_model.upper_id.to_string();
let base_upper_path = VersionedConfig::get()
.load()
.upper_path
.join(upper_id.chars().next().context("upper_id is empty")?.to_string())
.join(upper_id);
// Fill in the path when none has been recorded; reuse the existing one otherwise
let base_path = if !video_model.path.is_empty() {
PathBuf::from(&video_model.path)
} else {
cx.video_source.path().join(
cx.template
.path_safe_render("video", &video_format_args(&video_model, &cx.config.time_format))?,
)
};
fs::create_dir_all(&base_path).await?;
let base_path = dunce::canonicalize(base_path).context("canonicalize video path failed")?;
let is_single_page = video_model.single_page.context("single_page is null")?;
let uppers_with_path = video_model
.uppers()
.filter_map(|u| {
if !upper_uids.contains(&u.mid) {
None
} else {
let id_string = u.mid.to_string();
Some((
u,
cx.config
.upper_path
.join(id_string.chars().next()?.to_string())
.join(id_string),
))
}
})
.collect::<Vec<_>>();
// For single-page videos, downloading the page is sufficient.
// For multi-page videos, the page download only covers the episodes; the video-level poster and tvshow.nfo must be added separately.
let (res_1, res_2, res_3, res_4, res_5) = tokio::join!(
// Download the video poster
fetch_video_poster(
separate_status[0] && !is_single_page,
separate_status[0] && !is_single_page && !cx.config.skip_option.no_poster,
&video_model,
downloader,
base_path.join("poster.jpg"),
base_path.join("fanart.jpg"),
cx
),
// Generate the video's nfo
generate_video_nfo(
separate_status[1] && !is_single_page,
separate_status[1] && !is_single_page && !cx.config.skip_option.no_video_nfo,
&video_model,
base_path.join("tvshow.nfo"),
cx
),
// Download the uploaders' avatars
fetch_upper_face(
separate_status[2] && should_download_upper,
&video_model,
downloader,
base_upper_path.join("folder.jpg"),
separate_status[2] && !cx.config.skip_option.no_upper,
&uppers_with_path,
cx
),
// Generate the uploaders' nfo files
generate_upper_nfo(
separate_status[3] && should_download_upper,
separate_status[3] && !cx.config.skip_option.no_upper,
&video_model,
base_upper_path.join("person.nfo"),
&uppers_with_path,
cx,
),
// Dispatch and run the per-page download tasks
dispatch_download_page(
separate_status[4],
bili_client,
&video_model,
pages,
connection,
downloader,
&base_path
)
dispatch_download_page(separate_status[4], &video_model, page_models, &base_path, cx)
);
let results = [res_1, res_2, res_3, res_4, res_5]
.into_iter()
.map(Into::into)
.collect::<Vec<_>>();
let results = [res_1.into(), res_2.into(), res_3.into(), res_4.into(), res_5.into()];
status.update_status(&results);
results
.iter()
@@ -286,17 +316,21 @@ pub async fn download_video_pages(
ExecutionStatus::Succeeded => info!("处理视频「{}」{}成功", &video_model.name, task_name),
ExecutionStatus::Ignored(e) => {
error!(
"处理视频「{}」{}出现常见错误,已忽略: {:#}",
"处理视频「{}」{}出现常见错误,已忽略{:#}",
&video_model.name, task_name, e
)
}
ExecutionStatus::Failed(e) | ExecutionStatus::FixedFailed(_, e) => {
error!("处理视频「{}」{}失败: {:#}", &video_model.name, task_name, e)
ExecutionStatus::Failed(e) => {
error!("处理视频「{}」{}失败{:#}", &video_model.name, task_name, e)
}
ExecutionStatus::Fixed(_) => unreachable!(),
});
if let ExecutionStatus::Failed(e) = results.into_iter().nth(4).context("page download result not found")? {
if e.downcast_ref::<DownloadAbortError>().is_some() {
return Err(e);
for result in results {
if let ExecutionStatus::Failed(e) = result
&& let Ok(e) = e.downcast::<BiliError>()
&& e.is_risk_control_related()
{
bail!(e);
}
}
let mut video_active_model: video::ActiveModel = video_model.into();
@@ -308,31 +342,20 @@ pub async fn download_video_pages(
/// Dispatch and run the per-page download tasks. Returns Ok if and only if every page either downloads successfully or reaches the maximum retry count; otherwise returns an error corresponding to the failure cause
pub async fn dispatch_download_page(
should_run: bool,
bili_client: &BiliClient,
video_model: &video::Model,
pages: Vec<page::Model>,
connection: &DatabaseConnection,
downloader: &Downloader,
page_models: Vec<page::Model>,
base_path: &Path,
cx: DownloadContext<'_>,
) -> Result<ExecutionStatus> {
if !should_run {
return Ok(ExecutionStatus::Skipped);
}
let child_semaphore = Semaphore::new(VersionedConfig::get().load().concurrent_limit.page);
let tasks = pages
let child_semaphore = Semaphore::new(cx.config.concurrent_limit.page);
let tasks = page_models
.into_iter()
.map(|page_model| {
download_page(
bili_client,
video_model,
page_model,
&child_semaphore,
downloader,
base_path,
)
})
.map(|page_model| download_page(video_model, page_model, &child_semaphore, base_path, cx))
.collect::<FuturesUnordered<_>>();
let (mut download_aborted, mut target_status) = (false, STATUS_OK);
let (mut risk_control_related_error, mut target_status) = (None, STATUS_OK);
let mut stream = tasks
.take_while(|res| {
match res {
@@ -348,45 +371,79 @@ pub async fn dispatch_download_page(
}
}
Err(e) => {
if e.downcast_ref::<DownloadAbortError>().is_some() {
download_aborted = true;
if let Some(e) = e.downcast_ref::<BiliError>()
&& e.is_risk_control_related()
{
risk_control_related_error = Some(e.clone());
}
}
}
// Terminate the stream only when risk control occurs; continue otherwise
futures::future::ready(!download_aborted)
futures::future::ready(risk_control_related_error.is_none())
})
.filter_map(|res| futures::future::ready(res.ok()))
.chunks(10);
while let Some(models) = stream.next().await {
update_pages_model(models, connection).await?;
update_pages_model(models, cx.connection).await?;
}
if download_aborted {
error!("下载视频「{}」的分页时触发风控,将异常向上传递..", &video_model.name);
bail!(DownloadAbortError());
if let Some(e) = risk_control_related_error {
bail!(e);
}
if target_status != STATUS_OK {
return Ok(ExecutionStatus::FixedFailed(target_status, ProcessPageError().into()));
}
Ok(ExecutionStatus::Succeeded)
// The video-level "page download" task status always matches the minimum status across all pages
Ok(ExecutionStatus::Fixed(target_status))
}
/// Download a single page. When no risk control is triggered and execution is normal, returns Ok(page::ActiveModel) whose status field stores the new download status; when risk control is triggered, the underlying risk-control error is propagated
pub async fn download_page(
bili_client: &BiliClient,
video_model: &video::Model,
page_model: page::Model,
semaphore: &Semaphore,
downloader: &Downloader,
base_path: &Path,
cx: DownloadContext<'_>,
) -> Result<page::ActiveModel> {
let _permit = semaphore.acquire().await.context("acquire semaphore failed")?;
let mut status = PageStatus::from(page_model.download_status);
let separate_status = status.should_run();
let is_single_page = video_model.single_page.context("single_page is null")?;
let base_name = TEMPLATE
.load()
.path_safe_render("page", &page_format_args(video_model, &page_model))?;
// Fill in the path when none has been recorded; reuse the existing one otherwise
let (base_path, base_name) = if let Some(old_video_path) = &page_model.path
&& !old_video_path.is_empty()
{
let old_video_path = Path::new(old_video_path);
let old_video_filename = old_video_path
.file_name()
.context("invalid page path format")?
.to_string_lossy();
if is_single_page {
// Single-page path format: {base_path}/{base_name}.mp4
(
old_video_path.parent().context("invalid page path format")?,
old_video_filename.trim_end_matches(".mp4").to_string(),
)
} else {
// Multi-page path format: {base_path}/Season 1/{base_name} - S01Exx.mp4
(
old_video_path
.parent()
.and_then(|p| p.parent())
.context("invalid page path format")?,
old_video_filename
.rsplit_once(" - ")
.context("invalid page path format")?
.0
.to_string(),
)
}
} else {
(
base_path,
cx.template.path_safe_render(
"page",
&page_format_args(video_model, &page_model, &cx.config.time_format),
)?,
)
};
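The branch above reverse-engineers `base_path` and `base_name` from a previously recorded file path. A std-only sketch of the two recoverable layouts (the function name and sample paths are illustrative):

```rust
use std::path::Path;

// Recover (base_path, base_name) from a recorded page path.
// Single page:  {base_path}/{base_name}.mp4
// Multi page:   {base_path}/Season 1/{base_name} - S01Exx.mp4
fn split_page_path(recorded: &str, single_page: bool) -> Option<(&Path, String)> {
    let path = Path::new(recorded);
    let file_name = path.file_name()?.to_str()?;
    if single_page {
        // Strip only the .mp4 extension; the parent is the base path.
        Some((path.parent()?, file_name.trim_end_matches(".mp4").to_string()))
    } else {
        // Strip the "Season 1" directory and the " - S01Exx.mp4" suffix.
        let base_path = path.parent()?.parent()?;
        let (base_name, _) = file_name.rsplit_once(" - ")?;
        Some((base_path, base_name.to_string()))
    }
}
```

Any path that does not match either layout yields `None`, which corresponds to the `context("invalid page path format")?` failures in the real code.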
let base_path = dunce::canonicalize(base_path).context("canonicalize base path failed")?;
let (poster_path, video_path, nfo_path, danmaku_path, fanart_path, subtitle_path) = if is_single_page {
(
base_path.join(format!("{}-poster.jpg", &base_name)),
@@ -434,33 +491,41 @@ pub async fn download_page(
let (res_1, res_2, res_3, res_4, res_5) = tokio::join!(
// Download the page poster
fetch_page_poster(
separate_status[0],
separate_status[0] && !cx.config.skip_option.no_poster,
video_model,
&page_model,
downloader,
poster_path,
fanart_path
fanart_path,
cx
),
// Download the page video
fetch_page_video(
separate_status[1],
bili_client,
video_model,
downloader,
&page_info,
&video_path
),
fetch_page_video(separate_status[1], video_model, &page_info, &video_path, cx),
// Generate the page's nfo
generate_page_nfo(separate_status[2], video_model, &page_model, nfo_path),
generate_page_nfo(
separate_status[2] && !cx.config.skip_option.no_video_nfo,
video_model,
&page_model,
nfo_path,
cx,
),
// Download the page danmaku
fetch_page_danmaku(separate_status[3], bili_client, video_model, &page_info, danmaku_path),
fetch_page_danmaku(
separate_status[3] && !cx.config.skip_option.no_danmaku,
video_model,
&page_info,
danmaku_path,
cx,
),
// Download the page subtitles
fetch_page_subtitle(separate_status[4], bili_client, video_model, &page_info, &subtitle_path)
fetch_page_subtitle(
separate_status[4] && !cx.config.skip_option.no_subtitle,
video_model,
&page_info,
&subtitle_path,
cx
)
);
let results = [res_1, res_2, res_3, res_4, res_5]
.into_iter()
.map(Into::into)
.collect::<Vec<_>>();
let results = [res_1.into(), res_2.into(), res_3.into(), res_4.into(), res_5.into()];
status.update_status(&results);
results
.iter()
@@ -476,19 +541,22 @@ pub async fn download_page(
),
ExecutionStatus::Ignored(e) => {
error!(
"处理视频「{}」第 {} 页{}出现常见错误,已忽略: {:#}",
"处理视频「{}」第 {} 页{}出现常见错误,已忽略{:#}",
&video_model.name, page_model.pid, task_name, e
)
}
ExecutionStatus::Failed(e) | ExecutionStatus::FixedFailed(_, e) => error!(
"处理视频「{}」第 {} 页{}失败: {:#}",
ExecutionStatus::Failed(e) => error!(
"处理视频「{}」第 {} 页{}失败{:#}",
&video_model.name, page_model.pid, task_name, e
),
ExecutionStatus::Fixed(_) => unreachable!(),
});
// If any task failed due to risk control, propagate the error upward
if let ExecutionStatus::Failed(e) = results.into_iter().nth(1).context("video download result not found")? {
if let Ok(BiliError::RiskControlOccurred) = e.downcast::<BiliError>() {
bail!(DownloadAbortError());
for result in results {
if let ExecutionStatus::Failed(e) = result
&& let Ok(e) = e.downcast::<BiliError>()
&& e.is_risk_control_related()
{
bail!(e);
}
}
let mut page_active_model: page::ActiveModel = page_model.into();
@@ -501,9 +569,9 @@ pub async fn fetch_page_poster(
should_run: bool,
video_model: &video::Model,
page_model: &page::Model,
downloader: &Downloader,
poster_path: PathBuf,
fanart_path: Option<PathBuf>,
cx: DownloadContext<'_>,
) -> Result<ExecutionStatus> {
if !should_run {
return Ok(ExecutionStatus::Skipped);
@@ -519,7 +587,9 @@ pub async fn fetch_page_poster(
None => video_model.cover.as_str(),
}
};
downloader.fetch(url, &poster_path).await?;
cx.downloader
.fetch(url, &poster_path, &cx.config.concurrent_limit.download)
.await?;
if let Some(fanart_path) = fanart_path {
fs::copy(&poster_path, &fanart_path).await?;
}
@@ -528,47 +598,53 @@ pub async fn fetch_page_poster(
pub async fn fetch_page_video(
should_run: bool,
bili_client: &BiliClient,
video_model: &video::Model,
downloader: &Downloader,
page_info: &PageInfo,
page_path: &Path,
cx: DownloadContext<'_>,
) -> Result<ExecutionStatus> {
if !should_run {
return Ok(ExecutionStatus::Skipped);
}
let bili_video = Video::new(bili_client, video_model.bvid.clone());
let bili_video = Video::new(cx.bili_client, video_model.bvid.as_str(), &cx.config.credential);
let streams = bili_video
.get_page_analyzer(page_info)
.await?
.best_stream(&VersionedConfig::get().load().filter_option)?;
.best_stream(&cx.config.filter_option)?;
match streams {
BestStream::Mixed(mix_stream) => downloader.fetch_with_fallback(&mix_stream.urls(), page_path).await?,
BestStream::Mixed(mix_stream) => {
cx.downloader
.multi_fetch(
&mix_stream.urls(cx.config.cdn_sorting),
page_path,
&cx.config.concurrent_limit.download,
)
.await?
}
BestStream::VideoAudio {
video: video_stream,
audio: None,
} => downloader.fetch_with_fallback(&video_stream.urls(), page_path).await?,
} => {
cx.downloader
.multi_fetch(
&video_stream.urls(cx.config.cdn_sorting),
page_path,
&cx.config.concurrent_limit.download,
)
.await?
}
BestStream::VideoAudio {
video: video_stream,
audio: Some(audio_stream),
} => {
let (tmp_video_path, tmp_audio_path) = (
page_path.with_extension("tmp_video"),
page_path.with_extension("tmp_audio"),
);
let res = async {
downloader
.fetch_with_fallback(&video_stream.urls(), &tmp_video_path)
.await?;
downloader
.fetch_with_fallback(&audio_stream.urls(), &tmp_audio_path)
.await?;
downloader.merge(&tmp_video_path, &tmp_audio_path, page_path).await
}
.await;
let _ = fs::remove_file(tmp_video_path).await;
let _ = fs::remove_file(tmp_audio_path).await;
res?
cx.downloader
.multi_fetch_and_merge(
&video_stream.urls(cx.config.cdn_sorting),
&audio_stream.urls(cx.config.cdn_sorting),
page_path,
&cx.config.concurrent_limit.download,
)
.await?
}
}
Ok(ExecutionStatus::Succeeded)
@@ -576,34 +652,34 @@ pub async fn fetch_page_video(
pub async fn fetch_page_danmaku(
should_run: bool,
bili_client: &BiliClient,
video_model: &video::Model,
page_info: &PageInfo,
danmaku_path: PathBuf,
cx: DownloadContext<'_>,
) -> Result<ExecutionStatus> {
if !should_run {
return Ok(ExecutionStatus::Skipped);
}
let bili_video = Video::new(bili_client, video_model.bvid.clone());
let bili_video = Video::new(cx.bili_client, video_model.bvid.as_str(), &cx.config.credential);
bili_video
.get_danmaku_writer(page_info)
.await?
.write(danmaku_path)
.write(danmaku_path, &cx.config.danmaku_option)
.await?;
Ok(ExecutionStatus::Succeeded)
}
pub async fn fetch_page_subtitle(
should_run: bool,
bili_client: &BiliClient,
video_model: &video::Model,
page_info: &PageInfo,
subtitle_path: &Path,
cx: DownloadContext<'_>,
) -> Result<ExecutionStatus> {
if !should_run {
return Ok(ExecutionStatus::Skipped);
}
let bili_video = Video::new(bili_client, video_model.bvid.clone());
let bili_video = Video::new(cx.bili_client, video_model.bvid.as_str(), &cx.config.credential);
let subtitles = bili_video.get_subtitles(page_info).await?;
let tasks = subtitles
.into_iter()
@@ -612,7 +688,7 @@ pub async fn fetch_page_subtitle(
tokio::fs::write(path, subtitle.body.to_string()).await
})
.collect::<FuturesUnordered<_>>();
tasks.try_collect::<Vec<()>>().await?;
tasks.try_collect::<()>().await?;
Ok(ExecutionStatus::Succeeded)
}
@@ -621,15 +697,16 @@ pub async fn generate_page_nfo(
video_model: &video::Model,
page_model: &page::Model,
nfo_path: PathBuf,
cx: DownloadContext<'_>,
) -> Result<ExecutionStatus> {
if !should_run {
return Ok(ExecutionStatus::Skipped);
}
let single_page = video_model.single_page.context("single_page is null")?;
let nfo = if single_page {
NFO::Movie(video_model.into())
NFO::Movie(video_model.to_nfo(cx.config.nfo_time_type))
} else {
NFO::Episode(page_model.into())
NFO::Episode(page_model.to_nfo(cx.config.nfo_time_type))
};
generate_nfo(nfo, nfo_path).await?;
Ok(ExecutionStatus::Succeeded)
@@ -638,40 +715,64 @@ pub async fn generate_page_nfo(
pub async fn fetch_video_poster(
should_run: bool,
video_model: &video::Model,
downloader: &Downloader,
poster_path: PathBuf,
fanart_path: PathBuf,
cx: DownloadContext<'_>,
) -> Result<ExecutionStatus> {
if !should_run {
return Ok(ExecutionStatus::Skipped);
}
downloader.fetch(&video_model.cover, &poster_path).await?;
cx.downloader
.fetch(&video_model.cover, &poster_path, &cx.config.concurrent_limit.download)
.await?;
fs::copy(&poster_path, &fanart_path).await?;
Ok(ExecutionStatus::Succeeded)
}
pub async fn fetch_upper_face(
should_run: bool,
video_model: &video::Model,
downloader: &Downloader,
upper_face_path: PathBuf,
uppers_with_path: &[(Upper<i64, &str>, PathBuf)],
cx: DownloadContext<'_>,
) -> Result<ExecutionStatus> {
if !should_run {
if !should_run || uppers_with_path.is_empty() {
return Ok(ExecutionStatus::Skipped);
}
downloader.fetch(&video_model.upper_face, &upper_face_path).await?;
let tasks = uppers_with_path
.iter()
.map(|(upper, base_path)| async move {
cx.downloader
.fetch(
upper.face,
&base_path.join("folder.jpg"),
&cx.config.concurrent_limit.download,
)
.await?;
Ok::<(), anyhow::Error>(())
})
.collect::<FuturesUnordered<_>>();
tasks.try_collect::<()>().await?;
Ok(ExecutionStatus::Succeeded)
}
pub async fn generate_upper_nfo(
should_run: bool,
video_model: &video::Model,
nfo_path: PathBuf,
uppers_with_path: &[(Upper<i64, &str>, PathBuf)],
cx: DownloadContext<'_>,
) -> Result<ExecutionStatus> {
if !should_run {
return Ok(ExecutionStatus::Skipped);
}
generate_nfo(NFO::Upper(video_model.into()), nfo_path).await?;
let tasks = uppers_with_path
.iter()
.map(|(upper, base_path)| {
generate_nfo(
NFO::Upper((video_model, upper).to_nfo(cx.config.nfo_time_type)),
base_path.join("person.nfo"),
)
})
.collect::<FuturesUnordered<_>>();
tasks.try_collect::<()>().await?;
Ok(ExecutionStatus::Succeeded)
}
@@ -679,11 +780,12 @@ pub async fn generate_video_nfo(
should_run: bool,
video_model: &video::Model,
nfo_path: PathBuf,
cx: DownloadContext<'_>,
) -> Result<ExecutionStatus> {
if !should_run {
return Ok(ExecutionStatus::Skipped);
}
generate_nfo(NFO::TVShow(video_model.into()), nfo_path).await?;
generate_nfo(NFO::TVShow(video_model.to_nfo(cx.config.nfo_time_type)), nfo_path).await?;
Ok(ExecutionStatus::Succeeded)
}
@@ -5,5 +5,9 @@ edition = { workspace = true }
publish = { workspace = true }
[dependencies]
derivative = { workspace = true }
either = { workspace = true }
regex = { workspace = true }
sea-orm = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
@@ -0,0 +1,3 @@
pub mod rule;
pub mod string_vec;
pub mod upper_vec;
@@ -0,0 +1,129 @@
use std::fmt::Display;
use derivative::Derivative;
use sea_orm::FromJsonQueryResult;
use sea_orm::prelude::DateTime;
use serde::{Deserialize, Deserializer, Serialize, Serializer};
#[derive(Clone, Debug, Serialize, Deserialize, Derivative)]
#[derivative(PartialEq, Eq)]
#[serde(rename_all = "camelCase", tag = "operator", content = "value")]
pub enum Condition<T: Serialize + Display> {
Equals(T),
Contains(T),
#[serde(rename = "icontains")]
IContains(T),
#[serde(deserialize_with = "deserialize_regex", serialize_with = "serialize_regex")]
MatchesRegex(String, #[derivative(PartialEq = "ignore")] regex::Regex),
Prefix(T),
Suffix(T),
GreaterThan(T),
LessThan(T),
Between(T, T),
}
#[derive(Clone, Debug, PartialEq, Eq, Serialize, Deserialize, FromJsonQueryResult)]
#[serde(rename_all = "camelCase", tag = "field", content = "rule")]
pub enum RuleTarget {
Title(Condition<String>),
Tags(Condition<String>),
FavTime(Condition<DateTime>),
PubTime(Condition<DateTime>),
PageCount(Condition<usize>),
SumVideoLength(Condition<usize>),
MultiUpper(Condition<bool>),
Not(Box<RuleTarget>),
}
pub type AndGroup = Vec<RuleTarget>;
#[derive(Clone, Debug, PartialEq, Eq, Serialize, Deserialize, FromJsonQueryResult)]
pub struct Rule(pub Vec<AndGroup>);
impl<T: Serialize + Display> Display for Condition<T> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
Condition::Equals(v) => write!(f, "等于“{}”", v),
Condition::Contains(v) => write!(f, "包含“{}”", v),
Condition::IContains(v) => write!(f, "包含(不区分大小写)“{}”", v),
Condition::MatchesRegex(pat, _) => write!(f, "匹配“{}”", pat),
Condition::Prefix(v) => write!(f, "以“{}”开头", v),
Condition::Suffix(v) => write!(f, "以“{}”结尾", v),
Condition::GreaterThan(v) => write!(f, "大于“{}”", v),
Condition::LessThan(v) => write!(f, "小于“{}”", v),
Condition::Between(start, end) => write!(f, "在“{}”和“{}”之间", start, end),
}
}
}
impl Display for RuleTarget {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
fn get_field_name(rt: &RuleTarget, depth: usize) -> &'static str {
match rt {
RuleTarget::Title(_) => "标题",
RuleTarget::Tags(_) => "标签",
RuleTarget::FavTime(_) => "收藏时间",
RuleTarget::PubTime(_) => "发布时间",
RuleTarget::PageCount(_) => "视频分页数量",
RuleTarget::SumVideoLength(_) => "视频总时长",
RuleTarget::MultiUpper(_) => "联合投稿",
RuleTarget::Not(inner) => {
if depth == 0 {
get_field_name(inner, depth + 1)
} else {
"格式化失败"
}
}
}
}
let field_name = get_field_name(self, 0);
match self {
RuleTarget::Not(inner) => match inner.as_ref() {
RuleTarget::Title(cond) | RuleTarget::Tags(cond) => write!(f, "{}不{}", field_name, cond),
RuleTarget::FavTime(cond) | RuleTarget::PubTime(cond) => {
write!(f, "{}不{}", field_name, cond)
}
RuleTarget::PageCount(cond) | RuleTarget::SumVideoLength(cond) => write!(f, "{}不{}", field_name, cond),
RuleTarget::MultiUpper(cond) => write!(f, "{}不{}", field_name, cond),
RuleTarget::Not(_) => write!(f, "格式化失败"),
},
RuleTarget::Title(cond) | RuleTarget::Tags(cond) => write!(f, "{}{}", field_name, cond),
RuleTarget::FavTime(cond) | RuleTarget::PubTime(cond) => {
write!(f, "{}{}", field_name, cond)
}
RuleTarget::PageCount(cond) | RuleTarget::SumVideoLength(cond) => write!(f, "{}{}", field_name, cond),
RuleTarget::MultiUpper(cond) => write!(f, "{}{}", field_name, cond),
}
}
}
impl Display for Rule {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let groups: Vec<String> = self
.0
.iter()
.map(|group| {
let conditions: Vec<String> = group.iter().map(|target| format!("{}", target)).collect();
format!("{}", conditions.join(""))
})
.collect();
write!(f, "{}", groups.join(""))
}
}
fn deserialize_regex<'de, D>(deserializer: D) -> Result<(String, regex::Regex), D::Error>
where
D: Deserializer<'de>,
{
let pattern = String::deserialize(deserializer)?;
// Precompile the regex during deserialization to optimize matching performance
let regex = regex::Regex::new(&pattern).map_err(serde::de::Error::custom)?;
Ok((pattern, regex))
}
fn serialize_regex<S>(pattern: &str, _regex: &regex::Regex, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
serializer.serialize_str(pattern)
}
@@ -0,0 +1,20 @@
use sea_orm::FromJsonQueryResult;
use serde::{Deserialize, Serialize};
// reference: https://www.sea-ql.org/SeaORM/docs/generate-entity/column-types/#json-column
// A bare Vec in an entity is only supported on Postgres, where sea-orm maps it to a Postgres array.
// A cross-database array must be wrapped in a wrapper type instead.
#[derive(Clone, Debug, PartialEq, Eq, Serialize, Deserialize, FromJsonQueryResult)]
pub struct StringVec(pub Vec<String>);
impl From<Vec<String>> for StringVec {
fn from(value: Vec<String>) -> Self {
Self(value)
}
}
impl From<StringVec> for Vec<String> {
fn from(value: StringVec) -> Self {
value.0
}
}
@@ -0,0 +1,48 @@
use std::borrow::Cow;
use sea_orm::FromJsonQueryResult;
use serde::{Deserialize, Serialize};
#[derive(Clone, Debug, PartialEq, Eq, Serialize, Deserialize)]
pub struct Upper<T, S> {
pub mid: T,
pub name: S,
pub face: S,
pub title: Option<S>,
}
#[derive(Clone, Debug, PartialEq, Eq, Serialize, Deserialize, FromJsonQueryResult)]
pub struct UpperVec(pub Vec<Upper<i64, String>>);
impl From<Vec<Upper<i64, String>>> for UpperVec {
fn from(value: Vec<Upper<i64, String>>) -> Self {
Self(value)
}
}
impl From<UpperVec> for Vec<Upper<i64, String>> {
fn from(value: UpperVec) -> Self {
value.0
}
}
impl<T: Copy> Upper<T, String> {
pub fn as_ref(&self) -> Upper<T, &str> {
Upper {
mid: self.mid,
name: self.name.as_str(),
face: self.face.as_str(),
title: self.title.as_deref(),
}
}
}
impl<T, S: AsRef<str>> Upper<T, S> {
pub fn role(&self) -> Cow<'_, str> {
if let Some(title) = &self.title {
Cow::Owned(format!("{}{}", self.name.as_ref(), title.as_ref()))
} else {
Cow::Borrowed(self.name.as_ref())
}
}
}
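`role()` above borrows when no title is present and allocates only when it must concatenate. A std-only, free-function illustration of the same `Cow` pattern (the function signature is illustrative):

```rust
use std::borrow::Cow;

// Mirror of Upper::role(): borrow the name when there is no title,
// allocate a combined string only when a title must be appended.
fn role<'a>(name: &'a str, title: Option<&str>) -> Cow<'a, str> {
    match title {
        Some(t) => Cow::Owned(format!("{}{}", name, t)),
        None => Cow::Borrowed(name),
    }
}
```

Callers that merely read the value pay no allocation in the common no-title case; `matches!(v, Cow::Borrowed(_))` can confirm that no copy happened.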
@@ -2,6 +2,8 @@
use sea_orm::entity::prelude::*;
use crate::rule::Rule;
#[derive(Clone, Debug, PartialEq, DeriveEntityModel, Eq)]
#[sea_orm(table_name = "collection")]
pub struct Model {
@@ -14,6 +16,7 @@ pub struct Model {
pub path: String,
pub created_at: String,
pub latest_row_at: DateTime,
pub rule: Option<Rule>,
pub enabled: bool,
}
@@ -2,6 +2,8 @@
use sea_orm::entity::prelude::*;
use crate::rule::Rule;
#[derive(Clone, Debug, PartialEq, DeriveEntityModel, Eq)]
#[sea_orm(table_name = "favorite")]
pub struct Model {
@@ -13,6 +15,7 @@ pub struct Model {
pub path: String,
pub created_at: String,
pub latest_row_at: DateTime,
pub rule: Option<Rule>,
pub enabled: bool,
}
@@ -2,6 +2,8 @@
use sea_orm::entity::prelude::*;
use crate::rule::Rule;
#[derive(Clone, Debug, PartialEq, DeriveEntityModel, Eq)]
#[sea_orm(table_name = "submission")]
pub struct Model {
@@ -11,7 +13,9 @@ pub struct Model {
pub upper_name: String,
pub path: String,
pub created_at: String,
pub use_dynamic_api: bool,
pub latest_row_at: DateTime,
pub rule: Option<Rule>,
pub enabled: bool,
}
@@ -1,7 +1,11 @@
//! `SeaORM` Entity. Generated by sea-orm-codegen 0.12.15
use either::Either;
use sea_orm::entity::prelude::*;
use crate::string_vec::StringVec;
use crate::upper_vec::{Upper, UpperVec};
#[derive(Clone, Debug, PartialEq, DeriveEntityModel, Eq, Default)]
#[sea_orm(table_name = "video")]
pub struct Model {
@@ -14,6 +18,7 @@ pub struct Model {
pub upper_id: i64,
pub upper_name: String,
pub upper_face: String,
pub staff: Option<UpperVec>,
pub name: String,
pub path: String,
pub category: i32,
@@ -25,11 +30,27 @@ pub struct Model {
pub favtime: DateTime,
pub download_status: u32,
pub valid: bool,
pub tags: Option<serde_json::Value>,
pub should_download: bool,
pub tags: Option<StringVec>,
pub single_page: Option<bool>,
pub created_at: String,
}
impl Model {
pub fn uppers(&self) -> Either<impl Iterator<Item = Upper<i64, &str>>, impl Iterator<Item = Upper<i64, &str>>> {
if let Some(staff) = self.staff.as_ref() {
Either::Left(staff.0.iter().map(|u| u.as_ref()))
} else {
Either::Right(std::iter::once(Upper::<i64, &str> {
mid: self.upper_id,
name: self.upper_name.as_str(),
face: self.upper_face.as_str(),
title: None,
}))
}
}
}
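`uppers()` returns `Either` so both the staff list and the single-upper fallback can be iterated without boxing. A std-only sketch of why that works: an enum over two iterator types is itself an iterator, so both `match` arms share one concrete return type (the real code uses the `either` crate's ready-made `Iterator` impl; `TwoIter` and `upper_ids` here are illustrative names):

```rust
// Minimal version of either::Either's Iterator impl: each arm holds a
// different concrete iterator, and the enum dispatches to whichever arm
// it contains, with no Box<dyn Iterator> allocation needed.
enum TwoIter<L, R> {
    Left(L),
    Right(R),
}

impl<L, R, T> Iterator for TwoIter<L, R>
where
    L: Iterator<Item = T>,
    R: Iterator<Item = T>,
{
    type Item = T;
    fn next(&mut self) -> Option<T> {
        match self {
            TwoIter::Left(l) => l.next(),
            TwoIter::Right(r) => r.next(),
        }
    }
}

// Analogue of Model::uppers(): many staff ids, or one fallback id.
fn upper_ids(staff: Option<Vec<i64>>, fallback: i64) -> impl Iterator<Item = i64> {
    match staff {
        Some(ids) => TwoIter::Left(ids.into_iter()),
        None => TwoIter::Right(std::iter::once(fallback)),
    }
}
```

Both arms produce the same `TwoIter<_, _>` type, which is why `uppers()` can return `impl Iterator`-style values from two differently shaped sources.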
#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {
#[sea_orm(has_many = "super::page::Entity")]
@@ -2,6 +2,8 @@
use sea_orm::entity::prelude::*;
use crate::rule::Rule;
#[derive(Clone, Debug, PartialEq, DeriveEntityModel, Eq)]
#[sea_orm(table_name = "watch_later")]
pub struct Model {
@@ -10,6 +12,7 @@ pub struct Model {
pub path: String,
pub created_at: String,
pub latest_row_at: DateTime,
pub rule: Option<Rule>,
pub enabled: bool,
}
@@ -1,2 +1,5 @@
mod custom_type;
mod entities;
pub use custom_type::*;
pub use entities::*;
@@ -5,5 +5,4 @@ edition = { workspace = true }
publish = { workspace = true }
[dependencies]
async-std = { workspace = true }
sea-orm-migration = { workspace = true }
@@ -8,6 +8,9 @@ mod m20250122_062926_add_latest_row_at;
mod m20250612_090826_add_enabled;
mod m20250613_043257_add_config;
mod m20250712_080013_add_video_created_at_index;
mod m20250903_094454_add_rule_and_should_download;
mod m20251009_123713_add_use_dynamic_api;
mod m20260324_055217_add_staff;
pub struct Migrator;
@@ -23,6 +26,9 @@ impl MigratorTrait for Migrator {
Box::new(m20250612_090826_add_enabled::Migration),
Box::new(m20250613_043257_add_config::Migration),
Box::new(m20250712_080013_add_video_created_at_index::Migration),
Box::new(m20250903_094454_add_rule_and_should_download::Migration),
Box::new(m20251009_123713_add_use_dynamic_api::Migration),
Box::new(m20260324_055217_add_staff::Migration),
]
}
}


@@ -0,0 +1,124 @@
use sea_orm_migration::prelude::*;
use sea_orm_migration::schema::*;
#[derive(DeriveMigrationName)]
pub struct Migration;
#[async_trait::async_trait]
impl MigrationTrait for Migration {
async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
manager
.alter_table(
Table::alter()
.table(Video::Table)
.add_column(boolean(Video::ShouldDownload).default(true))
.to_owned(),
)
.await?;
manager
.alter_table(
Table::alter()
.table(WatchLater::Table)
.add_column(text_null(WatchLater::Rule))
.to_owned(),
)
.await?;
manager
.alter_table(
Table::alter()
.table(Submission::Table)
.add_column(text_null(Submission::Rule))
.to_owned(),
)
.await?;
manager
.alter_table(
Table::alter()
.table(Favorite::Table)
.add_column(text_null(Favorite::Rule))
.to_owned(),
)
.await?;
manager
.alter_table(
Table::alter()
.table(Collection::Table)
.add_column(text_null(Collection::Rule))
.to_owned(),
)
.await
}
async fn down(&self, manager: &SchemaManager) -> Result<(), DbErr> {
manager
.alter_table(
Table::alter()
.table(Video::Table)
.drop_column(Video::ShouldDownload)
.to_owned(),
)
.await?;
manager
.alter_table(
Table::alter()
.table(WatchLater::Table)
.drop_column(WatchLater::Rule)
.to_owned(),
)
.await?;
manager
.alter_table(
Table::alter()
.table(Submission::Table)
.drop_column(Submission::Rule)
.to_owned(),
)
.await?;
manager
.alter_table(
Table::alter()
.table(Favorite::Table)
.drop_column(Favorite::Rule)
.to_owned(),
)
.await?;
manager
.alter_table(
Table::alter()
.table(Collection::Table)
.drop_column(Collection::Rule)
.to_owned(),
)
.await
}
}
#[derive(DeriveIden)]
enum Video {
Table,
ShouldDownload,
}
#[derive(DeriveIden)]
enum WatchLater {
Table,
Rule,
}
#[derive(DeriveIden)]
enum Submission {
Table,
Rule,
}
#[derive(DeriveIden)]
enum Favorite {
Table,
Rule,
}
#[derive(DeriveIden)]
enum Collection {
Table,
Rule,
}


@@ -0,0 +1,36 @@
use sea_orm_migration::prelude::*;
use sea_orm_migration::schema::*;
#[derive(DeriveMigrationName)]
pub struct Migration;
#[async_trait::async_trait]
impl MigrationTrait for Migration {
async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
manager
.alter_table(
Table::alter()
.table(Submission::Table)
.add_column(boolean(Submission::UseDynamicApi).default(false))
.to_owned(),
)
.await
}
async fn down(&self, manager: &SchemaManager) -> Result<(), DbErr> {
manager
.alter_table(
Table::alter()
.table(Submission::Table)
.drop_column(Submission::UseDynamicApi)
.to_owned(),
)
.await
}
}
#[derive(DeriveIden)]
enum Submission {
Table,
UseDynamicApi,
}


@@ -0,0 +1,30 @@
use sea_orm_migration::prelude::*;
use sea_orm_migration::schema::*;
#[derive(DeriveMigrationName)]
pub struct Migration;
#[async_trait::async_trait]
impl MigrationTrait for Migration {
async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
manager
.alter_table(
Table::alter()
.table(Video::Table)
.add_column(text_null(Video::Staff))
.to_owned(),
)
.await
}
async fn down(&self, manager: &SchemaManager) -> Result<(), DbErr> {
manager
.alter_table(Table::alter().table(Video::Table).drop_column(Video::Staff).to_owned())
.await
}
}
#[derive(DeriveIden)]
enum Video {
Table,
Staff,
}


@@ -1,6 +0,0 @@
use sea_orm_migration::prelude::*;
#[async_std::main]
async fn main() {
cli::run_cli(bili_sync_migration::Migrator).await;
}


@@ -21,7 +21,7 @@ export default defineConfig({
nav: [
{ text: "主页", link: "/" },
{
text: "v2.6.3",
text: "v2.11.0",
items: [
{
text: "程序更新",


@@ -32,7 +32,7 @@
EMBY 的一般结构是: `媒体库 - 文件夹 - 电影/电视剧 - 分季/分集`,方便起见,我采用了如下的对应关系:
1. **文件夹**:对应 b 站的 video source
2. **电视剧** 对应 b 站的 video
2. **电视剧**:对应 b 站的 video
3. **第一季的所有分集**:对应 b 站的 page。
特别的,当 video 仅有一个 page 时为了避免过多的层级bili-sync 会将 page 展开到第二层级,变成与电视剧同级的电影。


@@ -1,7 +1,7 @@
# bili-sync 是什么?
> [!TIP]
> 当前最新程序版本为 v2.6.3,文档将始终与最新程序版本保持一致。
> 当前最新程序版本为 v2.11.0,文档将始终与最新程序版本保持一致。
bili-sync 是一款专为 NAS 用户编写的哔哩哔哩同步工具。


@@ -1 +1 @@
bili-sync.allwens.work
bili-sync.amto.cc


@@ -13,7 +13,7 @@
在[程序发布页](https://github.com/amtoaer/bili-sync/releases)选择最新版本中对应机器架构的压缩包,解压后会获取一个名为 `bili-sync-rs` 的可执行文件,直接双击执行。
### 其二: 使用 Docker Compose 运行
### 其二:使用 Docker Compose 运行
Linux/amd64 与 Linux/arm64 两个平台可直接使用 Docker 或 Docker Compose 运行,此处以 Compose 为例:
> 请注意其中的注释,有不清楚的地方可以先继续往下看。
@@ -88,9 +88,9 @@ Jul 12 16:11:10 INFO 开始运行管理页: http://0.0.0.0:12345
认证后会看到一系列的配置,除绑定地址外的选项**基本都会实时生效**。为避免意料外的情况,建议将配置文件一次修改完毕后再点击保存。
如无特殊需求一般仅需修改“B站认证”与“视频质量”两个标签下的配置。
如无特殊需求一般仅需修改“B 站认证”与“视频质量”两个标签下的配置。
其中“B站认证”在一次填写后即可忽略程序会在**每日第一次运行视频下载任务**时检查认证状态,并在有必要时自动刷新。
其中“B 站认证”在一次填写后即可忽略,程序会在**每日第一次运行视频下载任务**时检查认证状态,并在有必要时自动刷新。
对于这些设置项的含义,请参考[配置说明](./configuration.md),可善用右侧导航在不同配置项间跳转。
@@ -98,7 +98,7 @@ Jul 12 16:11:10 INFO 开始运行管理页: http://0.0.0.0:12345
配置完毕后,我们便可以随时添加视频源订阅。
用户在正确填写“B站认证”后可以在“快捷订阅”部分查看自己创建的收藏夹、关注的合集与 UP 主一键订阅,也可以在“视频源”页手动添加并管理。
用户在正确填写“B 站认证”后可以在“快捷订阅”部分查看自己创建的收藏夹、关注的合集与 UP 主一键订阅,也可以在“视频源”页手动添加并管理。
对于手动添加的视频源,可参考如下页面获取所需的参数:

rust-toolchain.toml

@@ -0,0 +1,3 @@
[toolchain]
channel = "1.94.0"
components = ["clippy"]


@@ -15,7 +15,7 @@ from pathlib import Path
def main():
if len(sys.argv) <= 1:
print("用法: python 2.0.3_add_fanart.py <path1> <path2> ...")
print("用法python 2.0.3_add_fanart.py <path1> <path2> ...")
exit(1)
paths = [Path(path) for path in sys.argv[1:]]
for path in paths:

Some files were not shown because too many files have changed in this diff.