Compare commits

...

206 Commits

Author SHA1 Message Date
amtoaer
2b046362d7 chore: release bili-sync 2.7.0 2025-09-25 00:51:59 +08:00
ᴀᴍᴛᴏᴀᴇʀ
61c9e7de88 chore: minor frontend tweaks; add Windows to the random UA pool (#470) 2025-09-25 00:50:17 +08:00
ᴀᴍᴛᴏᴀᴇʀ
3d25c6b321 chore: run auto-correct over the codebase (#468) 2025-09-24 18:50:47 +08:00
ᴀᴍᴛᴏᴀᴇʀ
d35858790b chore: clippy should reject warnings (#466) 2025-09-24 17:58:04 +08:00
ᴀᴍᴛᴏᴀᴇʀ
b441f04cdf chore: fix new clippy warnings (#467) 2025-09-24 17:36:20 +08:00
ᴀᴍᴛᴏᴀᴇʀ
4db7e6763a feat: support re-evaluating historical videos; show each video's rule-evaluation status in the frontend (#465) 2025-09-24 17:08:04 +08:00
ᴀᴍᴛᴏᴀᴇʀ
bbbb7d0c5b feat: use ETags to save content transfer; write lifetimes explicitly (#464) 2025-09-24 02:03:06 +08:00
ᴀᴍᴛᴏᴀᴇʀ
210c94398a feat: implement video filtering rules (#457) 2025-09-24 00:42:27 +08:00
ᴀᴍᴛᴏᴀᴇʀ
6c7d295fe6 fix: fix errors caused by subtitle risk control (#463) 2025-09-23 08:27:14 +08:00
ᴀᴍᴛᴏᴀᴇʀ
71519af2f3 chore: remove the unnecessary image-proxy (#451) 2025-08-28 18:51:23 +08:00
Thomas Yang
8ed2fbae24 feat: use a random User-Agent header in requests (#447) 2025-08-27 10:27:23 +08:00
amtoaer
fd90bc8b73 chore: stop printing a long string of URLs on download failure 2025-08-08 20:23:40 +08:00
amtoaer
66bd3d6a41 chore: add an explanatory note when ffmpeg execution fails 2025-08-07 15:11:29 +08:00
amtoaer
5ef23a678f chore: release bili-sync 2.6.3 2025-08-07 12:41:48 +08:00
ᴀᴍᴛᴏᴀᴇʀ
66079f3adc feat: enable WAL for SQLite, remove unnecessary Arc, release the database properly (#421) 2025-08-06 17:20:06 +08:00
ᴀᴍᴛᴏᴀᴇʀ
4f780faf64 fix: add busy_timeout, minimize transaction blocks, increase pages processed per batch (#420) 2025-08-06 14:08:07 +08:00
ᴀᴍᴛᴏᴀᴇʀ
dbcb1fa78b fix: add a concurrency limit when filling video details to avoid database contention (#419) 2025-08-06 10:37:06 +08:00
amtoaer
386dac7735 chore: format backend code 2025-08-05 23:11:55 +08:00
Xinyu Bao
5537c621be Add error messages for the case in which the database initialization fails (#415) 2025-08-05 23:11:11 +08:00
amtoaer
c7978e20da chore: release bili-sync 2.6.2 2025-07-23 22:48:30 +08:00
ᴀᴍᴛᴏᴀᴇʀ
6e4af47bda fix: fix collection_type deserialization error (#403) 2025-07-23 22:46:39 +08:00
ᴀᴍᴛᴏᴀᴇʀ
791e4997a0 docs: fix configuration-option descriptions (#396) 2025-07-13 16:59:06 +08:00
amtoaer
05ab83fc93 chore: release bili-sync 2.6.1 2025-07-13 00:29:52 +08:00
ᴀᴍᴛᴏᴀᴇʀ
18ed9e09b1 fix: fix WebSocket creation failures in Chromium-based browsers (#395) 2025-07-13 00:28:24 +08:00
amtoaer
e196afa8ce chore: release bili-sync 2.6.0 2025-07-12 19:32:58 +08:00
amtoaer
9b2da75391 chore: version the frontend as well, bumping it on release 2025-07-12 19:32:15 +08:00
amtoaer
664e1d9f21 docs: add the admin page to the README 2025-07-12 19:28:10 +08:00
amtoaer
31c26f033e docs: update docs to match the latest code 2025-07-12 19:23:42 +08:00
ᴀᴍᴛᴏᴀᴇʀ
29d78dabdd perf: optimize dashboard query performance (#393) 2025-07-12 16:06:16 +08:00
ᴀᴍᴛᴏᴀᴇʀ
87fb597ba4 fix: fix several issues found in local testing (#392) 2025-07-12 15:17:54 +08:00
ᴀᴍᴛᴏᴀᴇʀ
c8f7a2267d chore: update Rust dependencies (#391) 2025-07-11 20:44:38 +08:00
ᴀᴍᴛᴏᴀᴇʀ
2837bb5234 feat: make WebSocket connect return a Promise, ensuring sendMessage happens after connect (#390) 2025-07-11 20:00:15 +08:00
ᴀᴍᴛᴏᴀᴇʀ
0990a276ff fix: auto-collapse the mobile sidebar after a tap (#389) 2025-07-11 19:15:13 +08:00
ᴀᴍᴛᴏᴀᴇʀ
adc2e32e58 feat: support a force parameter (off by default) when resetting task state (#388) 2025-07-11 19:01:01 +08:00
ᴀᴍᴛᴏᴀᴇʀ
267e9373f9 feat: add missing options to the settings page; allow toggling password-field visibility (#387) 2025-07-11 01:53:03 +08:00
ᴀᴍᴛᴏᴀᴇʀ
dd23d1db58 feat: switch event push from SSE to WebSocket (#386) 2025-07-11 00:14:20 +08:00
ᴀᴍᴛᴏᴀᴇʀ
cc25749445 feat: add a download-status card to the frontend (#385) 2025-07-10 15:13:25 +08:00
ᴀᴍᴛᴏᴀᴇʀ
655b4389b7 feat: support an "open on Bilibili" shortcut, with assorted polish (#384) 2025-07-10 01:46:34 +08:00
ᴀᴍᴛᴏᴀᴇʀ
486dab5355 chore: add frontend compression (#383) 2025-07-10 00:03:16 +08:00
ᴀᴍᴛᴏᴀᴇʀ
74a45526f0 fix: fix auto-scroll on the log page (#382) 2025-07-09 23:34:50 +08:00
ᴀᴍᴛᴏᴀᴇʀ
ce60838244 fix: fix filter queries having no effect (#381) 2025-07-09 21:50:16 +08:00
ᴀᴍᴛᴏᴀᴇʀ
35866888e8 fix: enable newly added subscriptions by default (#380) 2025-07-08 15:29:57 +08:00
ᴀᴍᴛᴏᴀᴇʀ
fbb7623ee1 fix: attempt to fix download errors (#379) 2025-07-08 14:37:31 +08:00
ᴀᴍᴛᴏᴀᴇʀ
1affe4d594 feat: rework interaction logic; support viewing logs in the frontend (#378) 2025-07-08 12:48:51 +08:00
ᴀᴍᴛᴏᴀᴇʀ
7c73a2f01a feat: add a dashboard page (#377) 2025-07-07 23:32:46 +08:00
ᴀᴍᴛᴏᴀᴇʀ
a627584fb0 refactor: split the API by path to avoid oversized single files (#376) 2025-07-07 01:51:40 +08:00
ᴀᴍᴛᴏᴀᴇʀ
636a843bda chore: remove utoipa (#375) 2025-07-07 01:01:15 +08:00
ᴀᴍᴛᴏᴀᴇʀ
7bb4e7bc44 feat: support manually adding subscriptions by ID in the frontend (#374) 2025-07-06 22:49:17 +08:00
ᴀᴍᴛᴏᴀᴇʀ
e50318870e feat: support editing and submitting Config from the frontend (#370) 2025-06-18 16:50:16 +08:00
ᴀᴍᴛᴏᴀᴇʀ
28971c3ff3 feat: add a video-source management page; support editing path and enabled state (#369) 2025-06-17 18:55:45 +08:00
ᴀᴍᴛᴏᴀᴇʀ
f47ce92a51 chore: speed up debug builds by dropping dependency debuginfo (#368) 2025-06-17 13:56:36 +08:00
ᴀᴍᴛᴏᴀᴇʀ
a35794ed7a refactor: handle field mapping and invalid checks on the backend (#367) 2025-06-17 13:44:23 +08:00
ᴀᴍᴛᴏᴀᴇʀ
bad00af147 chore: remove unused dependencies (#366) 2025-06-17 02:45:48 +08:00
ᴀᴍᴛᴏᴀᴇʀ
4539e9379d feat: migrate all configuration to the database, with runtime reload support (#364) 2025-06-17 02:15:11 +08:00
ᴀᴍᴛᴏᴀᴇʀ
a46c2572b1 chore: add an enabled field to video sources (#362) 2025-06-13 12:00:10 +08:00
ᴀᴍᴛᴏᴀᴇʀ
a41efdbe78 chore: remove the single-row max-width limit on subscription cards so they fill the screen (#359) 2025-06-09 12:17:19 +08:00
ᴀᴍᴛᴏᴀᴇʀ
a98e49347b feat: let the web UI load the user's subscriptions and favorites, with one-click subscribing (#357) 2025-06-09 11:16:33 +08:00
ᴀᴍᴛᴏᴀᴇʀ
586d5ec4ee chore: drastically shrink the built binary size (#356) 2025-06-06 23:34:46 +08:00
ᴀᴍᴛᴏᴀᴇʀ
65a047b0fa feat: support manually editing a video's and its pages' state; tidy some code (#355) 2025-06-06 07:39:17 +08:00
ᴀᴍᴛᴏᴀᴇʀ
c0ed37750f refactor: skip boxing for fixed-size task sets and use tokio::join! directly (#354) 2025-06-05 16:30:09 +08:00
ᴀᴍᴛᴏᴀᴇʀ
0e98f484ef chore: run format and lint over the frontend; try adding a frontend lint check to CI (#353) 2025-06-04 21:37:26 +08:00
ᴀᴍᴛᴏᴀᴇʀ
6226fa7c4d fix: fix some small issues and polish details (#352) 2025-06-04 21:15:19 +08:00
ᴀᴍᴛᴏᴀᴇʀ
c528152986 feat: refactor and optimize some APIs; support resetting all failed tasks (#351) 2025-06-04 17:04:15 +08:00
ᴀᴍᴛᴏᴀᴇʀ
45849957ff refactor: improve performance when filling video details (#350) 2025-06-02 00:56:02 +08:00
ᴀᴍᴛᴏᴀᴇʀ
8510aa318e feat: support fetching my favorite folders, favorited collections, and followed uploaders (#349) 2025-06-02 00:15:21 +08:00
ᴀᴍᴛᴏᴀᴇʀ
c07e475fe6 chore: switch to a cleaner, more modern frontend (#348) 2025-06-01 13:42:10 +08:00
ᴀᴍᴛᴏᴀᴇʀ
a574d005c3 refactor: rework NFO generation for extensibility and readability, easing future changes (#345) 2025-05-30 17:28:42 +08:00
ᴀᴍᴛᴏᴀᴇʀ
e9d1c9eadb refactor: remove the pointless bvid-to-aid conversion (#344) 2025-05-30 14:28:14 +08:00
ᴀᴍᴛᴏᴀᴇʀ
a9f604a07d feat: support concurrent download of a single file (#343) 2025-05-30 02:19:23 +08:00
amtoaer
6383730706 ci: use the stable toolchain for everything except fmt 2025-05-29 01:56:11 +08:00
ᴀᴍᴛᴏᴀᴇʀ
34d3e47b2d refactor: adjust the scan logic for video lists/collections, improving performance (#342) 2025-05-29 01:50:06 +08:00
amtoaer
d7ec0584bc chore: release bili-sync 2.5.1 2025-05-19 22:54:40 +08:00
ᴀᴍᴛᴏᴀᴇʀ
1ec015856b fix: fix Dolby Vision degrading to plain HDR after merging (#333)
* fix: dolby hdr download not correct

* chore: keep only the parameters that matter for Dolby Vision

---------

Co-authored-by: njzydark <njzydark@gmail.com>
2025-05-19 20:57:42 +08:00
amtoaer
99d4d900e6 build: pin git2 to 0.20.2 to try to fix the Windows build 2025-05-19 18:51:29 +08:00
amtoaer
f85f105e69 refactor: fix an odd if/else ordering 2025-05-19 17:06:28 +08:00
ᴀᴍᴛᴏᴀᴇʀ
8a1395458c fix: improve video-stream codec detection (#332)
* fix: improve video-stream codec detection

* test: add new unit tests ensuring HDR and Dolby Vision are fetched correctly
2025-05-19 17:04:48 +08:00
amtoaer
bafb4af8dd chore: upgrade dependencies and fix newly introduced clippy lints 2025-05-19 16:53:40 +08:00
amtoaer
f52724b974 chore: release bili-sync 2.5.0 2025-02-27 14:04:40 +08:00
amtoaer
4e1e0c40cf docs: update docs to match the latest code 2025-02-27 14:03:47 +08:00
ᴀᴍᴛᴏᴀᴇʀ
439513e5ab chore: adjust error matching to consider the error chain (#291) 2025-02-27 13:39:00 +08:00
ᴀᴍᴛᴏᴀᴇʀ
33a61ec08d fix: switch collections/video lists to full fetches to ensure correct updates (#290) 2025-02-25 20:55:50 +08:00
ᴀᴍᴛᴏᴀᴇʀ
a6d0d6b777 feat: consider backup_url when downloading; support sorting by CDN priority (#288) 2025-02-24 19:48:07 +08:00
amtoaer
ae685cbe61 ci: on tags, trigger only the release and skip the commit build 2025-02-21 21:44:36 +08:00
amtoaer
16e14fc371 chore: release bili-sync 2.4.1 2025-02-21 21:22:52 +08:00
amtoaer
b4a5dee236 ci: make CI builds carry a version tag 2025-02-21 21:15:10 +08:00
ᴀᴍᴛᴏᴀᴇʀ
2b3e6f9547 chore: print a welcome message at startup; adjust logging and the build pipeline (#285) 2025-02-21 21:04:39 +08:00
ᴀᴍᴛᴏᴀᴇʀ
f8b93d2c76 fix: fix detection of configuration initialization (#284) 2025-02-21 19:32:46 +08:00
ᴀᴍᴛᴏᴀᴇʀ
94462ca706 chore: update the Rust edition to 2024; bump dependencies (#283) 2025-02-21 17:47:49 +08:00
amtoaer
9cbefc26ab chore: release bili-sync 2.4.0 2025-02-19 22:20:39 +08:00
ᴀᴍᴛᴏᴀᴇʀ
2bfd69c15e docs: update docs to match the latest code (#275) 2025-02-19 22:12:47 +08:00
ᴀᴍᴛᴏᴀᴇʀ
4765d6f50a fix: the API TOKEN input should use the password type (#274) 2025-02-19 21:22:08 +08:00
ᴀᴍᴛᴏᴀᴇʀ
bf306dfec3 chore: add the missing error_for_status call; fix a clippy formatting issue (#273) 2025-02-19 20:40:40 +08:00
ᴀᴍᴛᴏᴀᴇʀ
a6425f11a2 fix: fix setting per-page download status within a video (#272) 2025-02-19 19:04:51 +08:00
ᴀᴍᴛᴏᴀᴇʀ
395ef0013a ci: run all CI on Ubuntu 24.04 (20.04 is being deprecated) (#271) 2025-02-19 17:28:04 +08:00
ᴀᴍᴛᴏᴀᴇʀ
ab0533210f chore: print more detailed error info; fix detection of common errors (#270) 2025-02-19 16:53:26 +08:00
ᴀᴍᴛᴏᴀᴇʀ
3eb2f0b14d ci: fix and streamline the CI pipeline (#269) 2025-02-19 14:33:47 +08:00
ᴀᴍᴛᴏᴀᴇʀ
42272b1294 ci: adjust the build pipeline to also build binaries on commits (#266) 2025-02-19 04:21:33 +08:00
ᴀᴍᴛᴏᴀᴇʀ
d1168f35f3 build: show detailed build info in version (#265)
* build: show detailed build info in version

* chore: tweaks
2025-02-19 03:47:01 +08:00
ᴀᴍᴛᴏᴀᴇʀ
bc27778366 chore: allow clearing the video-source filter (click the source twice); move the API TOKEN input (#264) 2025-02-19 02:18:20 +08:00
ᴀᴍᴛᴏᴀᴇʀ
9c5f3452e9 fix: fix reset execution (#263) 2025-02-19 01:52:18 +08:00
ᴀᴍᴛᴏᴀᴇʀ
d3b4559b2d feat: add a bare-bones frontend (#262) 2025-02-19 01:47:09 +08:00
ᴀᴍᴛᴏᴀᴇʀ
59305c0bb4 feat: let reset_failed correct status bits, allowing users to manually trigger new subtasks (#261) 2025-02-18 23:36:44 +08:00
ᴀᴍᴛᴏᴀᴇʀ
32214d5d5f chore: rename video list model / video list to video source (#260) 2025-02-18 22:36:25 +08:00
ᴀᴍᴛᴏᴀᴇʀ
315ad13703 feat: ignore some common errors during status updates (#259) 2025-02-18 22:22:29 +08:00
ᴀᴍᴛᴏᴀᴇʀ
e12a9cda95 feat: add an API to reset a single video's state; return download status from the video endpoint (#258) 2025-02-18 19:24:55 +08:00
ᴀᴍᴛᴏᴀᴇʀ
c995b3bf72 feat: add Swagger docs with detailed type annotations (#257) 2025-02-18 01:55:54 +08:00
ᴀᴍᴛᴏᴀᴇʀ
1467c262a1 feat: add some simple APIs; adjust entry-point initialization accordingly (#251) 2025-02-17 16:58:51 +08:00
amtoaer
7251802202 chore: format code 2025-02-16 03:56:47 +08:00
dragonlanc
e1285ff49a chore: fix typo seprate -> separate (#253) 2025-02-16 03:38:19 +08:00
ᴀᴍᴛᴏᴀᴇʀ
e01a22136e refactor: constrain status with const generics (#250) 2025-02-13 21:41:05 +08:00
ᴀᴍᴛᴏᴀᴇʀ
eba69ff82a chore: split up main; respond to termination signals (#247)
* chore: split up main; handle the Ctrl + C signal

* chore: Unix should also handle SIGTERM
2025-02-12 03:34:17 +08:00
amtoaer
5af6fe5e6e chore: remove a stray space 2025-02-12 01:36:08 +08:00
ᴀᴍᴛᴏᴀᴇʀ
9d8e398cbe refactor: use tokio's wrappers for downloads instead of a hand-rolled implementation (#245) 2025-02-05 02:33:15 +08:00
ᴀᴍᴛᴏᴀᴇʀ
7097b2a6b9 fix: fix a misspelling (#244) 2025-02-05 02:28:52 +08:00
ᴀᴍᴛᴏᴀᴇʀ
acf7359d56 chore: simplify uploader handling; support updating uploader info (#243) 2025-02-04 23:59:51 +08:00
ᴀᴍᴛᴏᴀᴇʀ
7c514b2dcc feat: put the video's original URL into its description (#241) 2025-02-04 23:25:54 +08:00
ᴀᴍᴛᴏᴀᴇʀ
2c4fa441e7 fix: wait for the task to finish (#238) 2025-02-01 20:13:58 +08:00
ᴀᴍᴛᴏᴀᴇʀ
51672e8607 chore: run the main task with tokio::spawn (#237) 2025-02-01 18:47:27 +08:00
ᴀᴍᴛᴏᴀᴇʀ
cc7f773300 feat: support downloading CC subtitles (#234) 2025-01-30 01:20:53 +08:00
amtoaer
802565e4f6 chore: release bili-sync 2.3.0 2025-01-25 00:34:47 +08:00
amtoaer
4984026017 docs: update docs to match the latest code 2025-01-25 00:29:12 +08:00
amtoaer
2a98359085 chore: hide target, adjust wording, shorten log lines 2025-01-25 00:11:22 +08:00
amtoaer
979294bb94 fix: fix video path not being set correctly 2025-01-24 14:05:16 +08:00
ᴀᴍᴛᴏᴀᴇʀ
40cf22a7fa refactor: introduce enum_dispatch for static dispatch, improving performance (#232) 2025-01-24 13:44:27 +08:00
ᴀᴍᴛᴏᴀᴇʀ
9e5a8b0573 feat: ensure the video stream returns Err on error (#231) 2025-01-24 13:17:12 +08:00
ᴀᴍᴛᴏᴀᴇʀ
7c220f0d2b refactor: trim code and unify logic (#229) 2025-01-24 01:11:59 +08:00
amtoaer
aa88f97eff refactor: rework task handling into stream style; add comments 2025-01-23 17:13:51 +08:00
ᴀᴍᴛᴏᴀᴇʀ
b4177d4ffc feat: introduce a more robust new-video detection method (#228)
* feat: add a latest_row_at column to each video list table

* chore: add the new column to the models

* feat: implement the new stop condition (untested)

* test: update tests
2025-01-22 23:53:18 +08:00
amtoaer
b888db6a61 refactor: the chunk is already in memory, so just use write_all 2025-01-22 01:52:32 +08:00
amtoaer
6ae87364b4 feat: add flush and content-length checks to downloads 2025-01-22 00:18:04 +08:00
amtoaer
18c966a0f9 refactor: avoid some unnecessary to_string calls 2025-01-21 22:59:16 +08:00
amtoaer
ab84a8dad1 refactor: allocate Strings only as needed when signing 2025-01-21 22:54:20 +08:00
amtoaer
1a32e38dc3 refactor: use context instead of ok_or and ok_or_else 2025-01-21 18:06:54 +08:00
amtoaer
0f25923c52 refactor: continue tidying code; remove all unwraps from the core code 2025-01-21 17:17:14 +08:00
amtoaer
cdc30e1b32 refactor: tidy some code; remove a batch of unwraps 2025-01-21 03:12:45 +08:00
NKDark
c10c14c125 chore: change the config-file write logic (#222) 2025-01-21 01:39:48 +08:00
amtoaer
60604aeb33 docs: update descriptions; simplify collection/video-list configuration 2025-01-17 17:53:32 +08:00
amtoaer
276fb5b3e4 chore: release bili-sync 2.2.0 2025-01-14 19:12:47 +08:00
ᴀᴍᴛᴏᴀᴇʀ
e05f58b8a1 docs: update docs to match the latest code (#217) 2025-01-14 18:16:15 +08:00
amtoaer
8dfc96e1dc chore: add a hint message 2025-01-14 05:18:04 +08:00
amtoaer
cdc639cf75 fix: fix a semantic error; trim some unnecessary code 2025-01-14 02:21:15 +08:00
amtoaer
847c3115cd chore: stop logging when the codec doesn't match 2025-01-14 01:19:03 +08:00
amtoaer
7dc049ffe5 chore: set a default request rate limit, adjustable by the user 2025-01-14 00:08:38 +08:00
amtoaer
265fe630dd fix: fix a type issue in the uploader-info API 2025-01-14 00:07:51 +08:00
ᴀᴍᴛᴏᴀᴇʀ
f31900e6c7 deps: update project dependencies (#214) 2025-01-13 19:39:08 +08:00
ᴀᴍᴛᴏᴀᴇʀ
54b46c150e refactor: assorted small refactors (#213) 2025-01-13 18:57:08 +08:00
ᴀᴍᴛᴏᴀᴇʀ
7d9999d6aa feat: adjust and refactor video/audio stream selection; should improve performance a bit (#212)
* feat: adjust and refactor video/audio stream selection; should improve performance a bit

* test: add a few unit tests
2025-01-13 13:51:16 +08:00
amtoaer
05aa30119e ci: run checks with the latest nightly 2025-01-12 03:13:59 +08:00
amtoaer
368b9ef735 style: clear all clippy warnings 2025-01-11 23:36:59 +08:00
ᴀᴍᴛᴏᴀᴇʀ
0113bf704d chore: support rate-limiting requests with leaky-bucket (#211)
* chore: remove the previously introduced delay

* feat: support configuring a rate limit for Bilibili requests
2025-01-11 23:24:01 +08:00
ᴀᴍᴛᴏᴀᴇʀ
66a7b1394e test: fix Windows unit-test failures (#164) 2024-08-09 00:02:56 +08:00
ᴀᴍᴛᴏᴀᴇʀ
ae05cad22f feat: allow the platform's path separator in video_name and page_name (#163) 2024-08-08 23:53:22 +08:00
amtoaer
be3abab13f chore: remove a redundant info log 2024-08-08 22:01:52 +08:00
ᴀᴍᴛᴏᴀᴇʀ
c432a282a7 fix: fix database insert failures when a video has too many pages (#162) 2024-08-03 23:49:00 +08:00
ᴀᴍᴛᴏᴀᴇʀ
e9e20ace93 build: upgrade dependencies (#160) 2024-07-28 15:38:42 +08:00
ᴀᴍᴛᴏᴀᴇʀ
6187827e1b fix: always delete temp files at the end, whatever the download outcome (#159) 2024-07-28 15:34:00 +08:00
ᴀᴍᴛᴏᴀᴇʀ
8a4a95e343 feat: support configuring download concurrency for videos and pages (#157) 2024-07-28 02:32:02 +08:00
ᴀᴍᴛᴏᴀᴇʀ
401fcdc630 refactor: vendor filenamify locally; make the regexes static (#156) 2024-07-28 01:51:37 +08:00
ᴀᴍᴛᴏᴀᴇʀ
b2d22253c5 feat: support downloading an uploader's submitted videos (#155) 2024-07-27 22:35:20 +08:00
ᴀᴍᴛᴏᴀᴇʀ
29bfc2efce refactor: restructure some code; move functions around (#154) 2024-07-25 00:05:29 +08:00
ᴀᴍᴛᴏᴀᴇʀ
75de39dfbb feat: support a time format string; allow using time in video_name and page_name (#152) 2024-07-24 21:06:40 +08:00
ᴀᴍᴛᴏᴀᴇʀ
8f37fdf841 refactor: hoist the loop outward; extract shared code (#151) 2024-07-24 00:36:19 +08:00
ᴀᴍᴛᴏᴀᴇʀ
20e3ac2129 build: upgrade the time dependency (#150) 2024-07-23 22:38:52 +08:00
ᴀᴍᴛᴏᴀᴇʀ
3a8f33d273 feat: support configuring a delay after each kind of task (#148) 2024-07-23 22:29:25 +08:00
ᴀᴍᴛᴏᴀᴇʀ
d46881aea6 docs: support click-to-zoom for images in the docs (#149) 2024-07-23 04:13:05 -07:00
ᴀᴍᴛᴏᴀᴇʀ
e25339c53c docs: convert images to WebP and compress them, greatly reducing size (#147) 2024-07-22 22:12:42 +08:00
ᴀᴍᴛᴏᴀᴇʀ
5102999676 docs: fix the incorrect description of the config-file location (#145) 2024-07-22 12:53:41 +08:00
amtoaer
991ce3ea3c chore: release bili-sync 2.1.2 2024-07-21 23:40:30 +08:00
amtoaer
e4fb096d0c build: update project dependencies 2024-07-21 22:51:56 +08:00
ᴀᴍᴛᴏᴀᴇʀ
28070aa7d8 docs: add a "How it works" section (#135) 2024-07-21 21:34:52 +08:00
ᴀᴍᴛᴏᴀᴇʀ
33e758bd91 refactor: remove unneeded markers and code blocks; unify use formatting (#144) 2024-07-21 19:16:52 +08:00
ᴀᴍᴛᴏᴀᴇʀ
86e858082d feat: add WBI signing to the video-download API (#143) 2024-07-21 18:47:09 +08:00
ᴀᴍᴛᴏᴀᴇʀ
2ffe432f37 feat: implement WBI signing for the collection API (#140) 2024-07-21 16:49:53 +08:00
A1ca7raz
6ef9ecaee0 chore: correct the license file name (#141) 2024-07-19 20:33:36 +08:00
amtoaer
9ef88e1b2b docs: update some wording and the current feature list 2024-07-11 19:27:02 +08:00
amtoaer
6e7c6061b2 chore: release bili-sync 2.1.1 2024-07-11 18:09:31 +08:00
amtoaer
40b3f77748 docs: document Watch Later, added in 2.1.1 2024-07-11 18:08:13 +08:00
ᴀᴍᴛᴏᴀᴇʀ
c27d1a2381 feat: support scanning and downloading Watch Later (#131)
* WIP

* more WIP

* feat: support Watch Later

* chore: drop the prints
2024-07-10 22:46:01 -07:00
ᴀᴍᴛᴏᴀᴇʀ
4c5d1b6ea1 fix: fix a possible misjudgment of exist_labels (#132) 2024-07-09 22:47:07 +08:00
amtoaer
0b6fd72682 chore: release bili-sync 2.1.0 2024-07-05 22:44:56 +08:00
amtoaer
e65cd36b2e chore: pin the CNAME file; change the gh-pages branch commit message 2024-07-05 22:35:34 +08:00
ᴀᴍᴛᴏᴀᴇʀ
352282f277 docs: revise descriptions globally; embed the version in the docs and auto-replace it on release (#128)
* WIP

* chore: misc tweaks
2024-07-05 22:31:26 +08:00
amtoaer
fa2bc7b5e8 docs: switch to a custom domain; remove the base path 2024-07-05 18:26:39 +08:00
amtoaer
bb90f0c6f2 docs: change the docs base path 2024-07-05 16:58:18 +08:00
amtoaer
90f2a1d4ed docs: add a workflow to build the online docs; fix some issues 2024-07-05 16:45:42 +08:00
amtoaer
e2b65746dd docs: add a standalone docs site; remove the related sections from the README (#127) 2024-07-05 02:17:42 +08:00
amtoaer
24d0da0bf3 chore: fix the casing of AS to avoid a warning 2024-07-04 01:49:04 +08:00
ᴀᴍᴛᴏᴀᴇʀ
ff1150e863 fix: fix several bugs introduced by the refactor (#126) 2024-07-04 01:00:41 +08:00
amtoaer
940abd4f3b build: adjust the existing version numbers; add release-related options 2024-07-03 22:11:31 +08:00
ᴀᴍᴛᴏᴀᴇʀ
4c9ad2318c feat: large-scale refactor; support downloading video collections (#97) 2024-07-03 03:57:12 -07:00
ᴀᴍᴛᴏᴀᴇʀ
097f885050 build: update dependencies (#125) 2024-06-28 03:07:45 -07:00
ᴀᴍᴛᴏᴀᴇʀ
6ebef0a414 ci: disable workflows for draft PRs (#123) 2024-06-28 00:04:30 +08:00
ᴀᴍᴛᴏᴀᴇʀ
4818e62414 refactor: use clap for environment variables and CLI arguments (#119) 2024-06-08 10:57:08 +08:00
ᴀᴍᴛᴏᴀᴇʀ
1744f8647b chore: restructure the repo; organize crates with a workspace (#118) 2024-06-08 01:56:53 +08:00
ᴀᴍᴛᴏᴀᴇʀ
c4db12b154 fix: fix a numeric overflow caused by a type error (#115) 2024-06-01 03:21:23 +08:00
ᴀᴍᴛᴏᴀᴇʀ
2ef99a20c9 feat: make the video time in NFO files configurable, optionally using the favorited time or the publish time (#114)
* feat: make the video time in NFO files configurable, optionally using the favorited time or the publish time

* chore: use lowercase
2024-06-01 03:01:39 +08:00
ᴀᴍᴛᴏᴀᴇʀ
67de151234 ci: pin an older Rust nightly to avoid compile failures from language changes (#113) 2024-06-01 01:51:19 +08:00
ᴀᴍᴛᴏᴀᴇʀ
73f97f937f feat: check login state before each run to avoid unexpected behavior from expired credentials (#112)
* feat: check login state before each run to avoid unexpected behavior from expired credentials

* refactor: shorten the code
2024-06-01 01:46:15 +08:00
ky0utarou
8fee6fb97a Update README.md - specify user in compose, with a brief explanation (#102)
* Update README.md - specify user in compose

* Update README.md - a brief explanation of specifying user in compose
2024-05-08 19:11:32 +08:00
ᴀᴍᴛᴏᴀᴇʀ
e5e5b07978 fix: fix ffmpeg hanging when the target file already exists (#99) 2024-05-05 17:22:35 +08:00
ᴀᴍᴛᴏᴀᴇʀ
cd2bd9cbb3 chore: reduce download concurrency and the read_timeout value (#96)
* chore: reduce download concurrency and the read_timeout value

* chore: fix a comment
2024-05-03 12:48:53 +08:00
ᴀᴍᴛᴏᴀᴇʀ
f044b18337 chore: replace env_logger with tracing (#93) 2024-05-02 03:00:16 +08:00
amtoaer
d3bfca42f6 ci: install dependencies before copying the binary to keep the Docker cache warm 2024-05-02 00:45:47 +08:00
ky0utarou
10ccb47790 ci: Dockerfile - 保留tzdata (#91)
* Dockerfile - keep tzdata for correct time

* Dockerfile - install tzdata only for correct logging time

refer to https://stackoverflow.com/a/68996528
2024-05-01 21:21:22 +08:00
ᴀᴍᴛᴏᴀᴇʀ
e732e7d616 feat: relax the database pool's connection count and acquire timeout to avoid timeout errors (#87) 2024-04-29 13:46:22 +08:00
amtoaer
f81d9fc6eb chore: bump the version and add a license 2024-04-29 00:59:50 +08:00
379 changed files with 23281 additions and 8381 deletions


@@ -1,24 +1,52 @@
name: Build Binary And Release
name: Build Binary
on:
push:
tags:
- v*
workflow_call:
jobs:
build-frontend:
name: Build frontend
runs-on: ubuntu-24.04
defaults:
run:
working-directory: web
steps:
- name: Checkout repo
uses: actions/checkout@v4
- name: Setup bun
uses: oven-sh/setup-bun@v2
with:
bun-version: latest
- name: Install dependencies
run: bun install --frozen-lockfile
- name: Cache dependencies
uses: actions/cache@v4
with:
path: ~/.bun/install/cache
key: ${{ runner.os }}-bun-${{ hashFiles('docs/bun.lockb') }}
restore-keys: |
${{ runner.os }}-bun-
- name: Build Frontend
run: bun run build
- name: Upload Web Build Artifact
uses: actions/upload-artifact@v4
with:
name: web-build
path: web/build
build:
name: Release for ${{ matrix.platform.release_for }}
name: Build bili-sync-rs for ${{ matrix.platform.release_for }}
needs: build-frontend
runs-on: ${{ matrix.platform.os }}
strategy:
matrix:
platform:
- release_for: Linux-x86_64
os: ubuntu-20.04
os: ubuntu-24.04
target: x86_64-unknown-linux-musl
bin: bili-sync-rs
name: bili-sync-rs-Linux-x86_64-musl.tar.gz
- release_for: Linux-aarch64
os: ubuntu-20.04
os: ubuntu-24.04
target: aarch64-unknown-linux-musl
bin: bili-sync-rs
name: bili-sync-rs-Linux-aarch64-musl.tar.gz
@@ -37,10 +65,16 @@ jobs:
target: x86_64-pc-windows-msvc
bin: bili-sync-rs.exe
name: bili-sync-rs-Windows-x86_64.zip
steps:
- name: Checkout repo
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Download Web Build Artifact
uses: actions/download-artifact@v4
with:
name: web-build
path: web/build
- name: Cache dependencies
uses: Swatinem/rust-cache@v2
- name: Install musl-tools
@@ -57,7 +91,6 @@ jobs:
- name: Package as archive
shell: bash
run: |
cp target/${{ matrix.platform.target }}/release/${{ matrix.platform.bin }} ${{ matrix.platform.release_for }}-${{ matrix.platform.bin }}
cd target/${{ matrix.platform.target }}/release
if [[ "${{ matrix.platform.target }}" == "x86_64-pc-windows-msvc" ]]; then
7z a ../../../${{ matrix.platform.name }} ${{ matrix.platform.bin }}
@@ -68,62 +101,5 @@ jobs:
uses: actions/upload-artifact@v4
with:
name: bili-sync-rs-${{ matrix.platform.release_for }}
# contains raw binary and compressed archive
path: |
${{ github.workspace }}/${{ matrix.platform.release_for }}-${{ matrix.platform.bin }}
${{ github.workspace }}/${{ matrix.platform.name }}
release:
name: Create GitHub Release & Docker Image
needs: build
runs-on: ubuntu-20.04
permissions:
contents: write
steps:
- name: Checkout repo
uses: actions/checkout@v4
- name: Download release artifact
uses: actions/download-artifact@v4
with:
merge-multiple: true
- name: Publish GitHub release
uses: softprops/action-gh-release@v2
with:
files: bili-sync-rs*
tag_name: ${{ github.ref_name }}
draft: true
- name: Docker Meta
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ secrets.DOCKERHUB_USERNAME }}/bili-sync-rs
tags: |
type=raw,value=latest
type=raw,value=${{ github.ref_name }}
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to DockerHub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build and push Docker image
uses: docker/build-push-action@v5
with:
context: .
file: Dockerfile
platforms: |
linux/amd64
linux/arm64
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha, scope=${{ github.workflow }}
cache-to: type=gha, scope=${{ github.workflow }}
- name: Update DockerHub description
uses: peter-evans/dockerhub-description@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
repository: ${{ secrets.DOCKERHUB_USERNAME }}/bili-sync-rs
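The "Package as archive" step in the workflow above can be dry-run locally. A minimal sketch, with a stub file standing in for the real release binary (names taken from the Linux-x86_64 matrix entry; run it in a scratch directory):

```shell
set -eu
target="x86_64-unknown-linux-musl"
bin="bili-sync-rs"
name="bili-sync-rs-Linux-x86_64-musl.tar.gz"
# stub binary standing in for the real `cargo build --release` output
mkdir -p "target/$target/release"
printf 'stub' > "target/$target/release/$bin"
cd "target/$target/release"
if [ "$target" = "x86_64-pc-windows-msvc" ]; then
  7z a "../../../$name" "$bin"    # Windows runners zip with 7z
else
  tar czvf "../../../$name" "$bin" # everything else uses tar
fi
cd ../../..
ls "$name"
```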

.github/workflows/build-doc.yaml (new file)

@@ -0,0 +1,41 @@
name: Build Main Docs
on:
push:
branches:
- main
paths:
- 'docs/**'
jobs:
doc:
name: Build documentation
runs-on: ubuntu-24.04
defaults:
run:
working-directory: docs
steps:
- name: Checkout repo
uses: actions/checkout@v4
- name: Setup bun
uses: oven-sh/setup-bun@v2
with:
bun-version: latest
- name: Install dependencies
run: bun install --frozen-lockfile
- name: Cache dependencies
uses: actions/cache@v4
with:
path: ~/.bun/install/cache
key: ${{ runner.os }}-bun-${{ hashFiles('docs/bun.lockb') }}
restore-keys: |
${{ runner.os }}-bun-
- name: Build documentation
run: bun run docs:build
- name: Deploy Github Pages
uses: peaceiris/actions-gh-pages@v4
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: docs/.vitepress/dist
force_orphan: true
commit_message: 部署来自 main 的最新文档变更:


@@ -1,43 +0,0 @@
name: Check
on:
push:
branches:
- main
pull_request:
branches:
- "**"
concurrency:
# Allow only one workflow per any non-`main` branch.
group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}
cancel-in-progress: true
env:
CARGO_TERM_COLOR: always
CARGO_INCREMENTAL: 0
RUST_BACKTRACE: 1
jobs:
tests:
name: Run Clippy and tests
runs-on: ubuntu-latest
steps:
- name: Checkout repo
uses: actions/checkout@v4
- run: rustup toolchain install nightly && rustup default nightly && rustup component add rustfmt clippy
- name: Cache dependencies
uses: swatinem/rust-cache@v2
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
- name: cargo fmt check
run: cargo fmt --check
- name: cargo clippy
run: cargo clippy
- name: cargo test
run: cargo test

.github/workflows/commit-build.yaml (new file)

@@ -0,0 +1,11 @@
name: Build Main Binary
on:
push:
branches:
- main
jobs:
build-binary:
if: ${{ !startsWith(github.ref, 'refs/tags/') }}
uses: amtoaer/bili-sync/.github/workflows/build-binary.yaml@main

.github/workflows/pr-check.yaml (new file)

@@ -0,0 +1,68 @@
name: Check
on:
push:
branches:
- main
pull_request:
types: ["opened", "reopened", "synchronize", "ready_for_review"]
concurrency:
# Allow only one workflow per any non-`main` branch.
group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}
cancel-in-progress: true
env:
CARGO_TERM_COLOR: always
CARGO_INCREMENTAL: 0
RUST_BACKTRACE: 1
jobs:
check-backend:
name: Run backend checks
runs-on: ubuntu-24.04
if: ${{ github.event_name == 'push' || !github.event.pull_request.draft }}
steps:
- name: Checkout repo
uses: actions/checkout@v4
- run: rustup default stable && rustup component add clippy && rustup component add rustfmt --toolchain nightly
- name: Cache dependencies
uses: swatinem/rust-cache@v2
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
- name: cargo fmt check
run: cargo +nightly fmt --check
- name: cargo clippy
run: cargo clippy -- -D warnings
- name: cargo test
run: cargo test
check-frontend:
name: Run frontend checks
runs-on: ubuntu-24.04
if: ${{ github.event_name == 'push' || !github.event.pull_request.draft }}
defaults:
run:
working-directory: web
steps:
- name: Checkout repo
uses: actions/checkout@v4
- name: Setup bun
uses: oven-sh/setup-bun@v2
with:
bun-version: latest
- name: Install dependencies
run: bun install --frozen-lockfile
- name: Cache dependencies
uses: actions/cache@v4
with:
path: ~/.bun/install/cache
key: ${{ runner.os }}-bun-${{ hashFiles('docs/bun.lockb') }}
restore-keys: |
${{ runner.os }}-bun-
- name: Check Frontend
run: bun run lint
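The `concurrency` expression near the top of this workflow is worth unpacking: non-`main` branches all resolve to the same group (so a new push cancels the in-flight run), while pushes to `main` get a per-commit group and never cancel each other. A rough shell equivalent, with illustrative values for the GitHub-provided variables:

```shell
set -eu
# Illustrative stand-ins for the github.* context values:
GITHUB_WORKFLOW="Check"
GITHUB_REF_NAME="feature-x"   # any non-main branch
GITHUB_SHA="abc1234"
# Mirrors: ${{ github.ref_name == 'main' && github.sha || 'anysha' }}
if [ "$GITHUB_REF_NAME" = "main" ]; then
  sha_part="$GITHUB_SHA"      # main: unique group per commit, no cancellation
else
  sha_part="anysha"           # branches: one shared group, older runs cancelled
fi
group="$GITHUB_WORKFLOW-$GITHUB_REF_NAME-$sha_part"
echo "$group"
```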

.github/workflows/release-build.yaml (new file)

@@ -0,0 +1,78 @@
name: Build Main Binary And Release
on:
push:
tags:
- v*
jobs:
build-binary:
uses: amtoaer/bili-sync/.github/workflows/build-binary.yaml@main
github-release:
name: Create GitHub Release
needs: build-binary
runs-on: ubuntu-24.04
permissions:
contents: write
steps:
- name: Checkout repo
uses: actions/checkout@v4
- name: Download release artifact
uses: actions/download-artifact@v4
with:
merge-multiple: true
- name: Publish GitHub release
uses: softprops/action-gh-release@v2
with:
files: bili-sync-rs*
tag_name: ${{ github.ref_name }}
draft: true
docker-release:
name: Create Docker Image
needs: build-binary
runs-on: ubuntu-24.04
permissions:
contents: write
steps:
- name: Checkout repo
uses: actions/checkout@v4
- name: Download release artifact
uses: actions/download-artifact@v4
with:
merge-multiple: true
- name: Docker Meta
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ secrets.DOCKERHUB_USERNAME }}/bili-sync-rs
tags: |
type=raw,value=latest
type=raw,value=${{ github.ref_name }}
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to DockerHub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build and push Docker image
uses: docker/build-push-action@v5
with:
context: .
file: Dockerfile
platforms: |
linux/amd64
linux/arm64
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha, scope=${{ github.workflow }}
cache-to: type=gha, scope=${{ github.workflow }}
- name: Update DockerHub description
uses: peter-evans/dockerhub-description@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
repository: ${{ secrets.DOCKERHUB_USERNAME }}/bili-sync-rs
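The Docker Meta step above resolves to exactly two image tags: `latest` and the tag name that triggered the workflow. A rough shell equivalent (the image name is a placeholder, not the repo's actual `DOCKERHUB_USERNAME` secret):

```shell
set -eu
GITHUB_REF_NAME="v2.7.0"                 # hypothetical tag being released
IMAGE="docker.io/example/bili-sync-rs"   # placeholder image name
# type=raw,value=latest and type=raw,value=${{ github.ref_name }}
tags="$IMAGE:latest $IMAGE:$GITHUB_REF_NAME"
echo "$tags"
```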

.gitignore

@@ -1,6 +1,7 @@
**/target
auth_data
*.sqlite
*.json
video
debug*
node_modules
docs/.vitepress/cache
docs/.vitepress/dist

Cargo.lock generated

File diff suppressed because it is too large


@@ -1,54 +1,99 @@
[package]
name = "bili-sync-rs"
version = "2.0.0"
edition = "2021"
[dependencies]
anyhow = { version = "1.0.81", features = ["backtrace"] }
arc-swap = { version = "1.7", features = ["serde"] }
async-stream = "0.3.5"
chrono = { version = "0.4.35", features = ["serde"] }
cookie = "0.18.0"
dirs = "5.0.1"
entity = { path = "entity" }
env_logger = "0.11.3"
filenamify = "0.1.0"
float-ord = "0.3.2"
futures = "0.3.30"
handlebars = "5.1.2"
hex = "0.4.3"
log = "0.4.21"
memchr = "2.5.0"
migration = { path = "migration" }
once_cell = "1.19.0"
prost = "0.12.4"
quick-xml = { version = "0.31.0", features = ["async-tokio"] }
rand = "0.8.5"
regex = "1.10.3"
reqwest = { version = "0.12.4", features = [
"json",
"stream",
"cookies",
"gzip",
"charset",
"http2",
"rustls-tls",
], default-features = false }
rsa = { version = "0.9.6", features = ["sha2"] }
sea-orm = { version = "0.12", features = [
"sqlx-sqlite",
"runtime-tokio-rustls",
"macros",
] }
serde = { version = "1.0.197", features = ["derive"] }
serde_json = "1.0"
strum = { version = "0.26", features = ["derive"] }
thiserror = "1.0.58"
tokio = { version = "1", features = ["full"] }
toml = "0.8.12"
[workspace]
members = [".", "entity", "migration"]
members = ["crates/*"]
default-members = ["crates/bili_sync"]
resolver = "2"
[workspace.package]
version = "2.7.0"
authors = ["amtoaer <amtoaer@gmail.com>"]
license = "MIT"
description = "由 Rust & Tokio 驱动的哔哩哔哩同步工具"
edition = "2024"
publish = false
[workspace.dependencies]
bili_sync_entity = { path = "crates/bili_sync_entity" }
bili_sync_migration = { path = "crates/bili_sync_migration" }
anyhow = { version = "1.0.98", features = ["backtrace"] }
arc-swap = { version = "1.7.1", features = ["serde"] }
assert_matches = "1.5.0"
async-std = { version = "1.13.1", features = ["attributes", "tokio1"] }
async-stream = "0.3.6"
async-trait = "0.1.88"
axum = { version = "0.8.4", features = ["macros", "ws"] }
base64 = "0.22.1"
built = { version = "0.7.7", features = ["git2", "chrono"] }
chrono = { version = "0.4.41", features = ["serde"] }
clap = { version = "4.5.41", features = ["env", "string"] }
cookie = "0.18.1"
cow-utils = "0.1.3"
dashmap = "6.1.0"
derivative = "2.2.0"
dirs = "6.0.0"
enum_dispatch = "0.3.13"
float-ord = "0.3.2"
futures = "0.3.31"
git2 = { version = "0.20.2", features = [], default-features = false }
handlebars = "6.3.2"
hex = "0.4.3"
leaky-bucket = "1.1.2"
md5 = "0.8.0"
memchr = "2.7.5"
once_cell = "1.21.3"
parking_lot = "0.12.4"
prost = "0.14.1"
quick-xml = { version = "0.38.0", features = ["async-tokio"] }
rand = "0.9.1"
regex = "1.11.1"
reqwest = { version = "0.12.22", features = [
"charset",
"cookies",
"gzip",
"http2",
"json",
"rustls-tls",
"stream",
], default-features = false }
rsa = { version = "0.10.0-rc.3", features = ["sha2"] }
rust-embed-for-web = { git = "https://github.com/amtoaer/rust-embed-for-web", tag = "v1.0.0" }
sea-orm = { version = "1.1.13", features = [
"macros",
"runtime-tokio-rustls",
"sqlx-sqlite",
] }
sea-orm-migration = { version = "1.1.13", features = [] }
serde = { version = "1.0.219", features = ["derive"] }
serde_json = "1.0.140"
serde_urlencoded = "0.7.1"
strum = { version = "0.27.1", features = ["derive"] }
sysinfo = "0.36.0"
thiserror = "2.0.12"
tokio = { version = "1.46.1", features = ["full"] }
tokio-stream = { version = "0.1.17", features = ["sync"] }
tokio-util = { version = "0.7.15", features = ["io", "rt"] }
toml = "0.9.1"
tower = "0.5.2"
tracing = "0.1.41"
tracing-subscriber = { version = "0.3.19", features = ["chrono", "json"] }
ua_generator = "0.5.22"
uuid = { version = "1.17.0", features = ["v4"] }
validator = { version = "0.20.0", features = ["derive"] }
[workspace.metadata.release]
release = false
tag-message = ""
tag-prefix = ""
pre-release-commit-message = "chore: 发布 bili-sync {{version}}"
publish = false
pre-release-replacements = [
{ file = "../../docs/.vitepress/config.mts", search = "\"v[0-9\\.]+\"", replace = "\"v{{version}}\"", exactly = 1 },
{ file = "../../docs/introduction.md", search = " v[0-9\\.]+", replace = " v{{version}}", exactly = 1 },
{ file = "../../web/package.json", search = "\"version\": \"[0-9\\.]+\"", replace = "\"version\": \"{{version}}\"", exactly = 1 },
]
[profile.dev.package."*"]
debug = false
[profile.release]
strip = true
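From the new `members = ["crates/*"]` glob and the two path dependencies, the restructured workspace layout can be inferred roughly as follows (a sketch; the actual crate directories may contain more):

```
crates/
├── bili_sync             # default member, builds the bili-sync-rs binary
├── bili_sync_entity      # path dependency "crates/bili_sync_entity"
└── bili_sync_migration   # path dependency "crates/bili_sync_migration"
```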


@@ -1,23 +1,20 @@
FROM alpine as base
FROM alpine AS base
ARG TARGETPLATFORM
WORKDIR /app
COPY ./*-bili-sync-rs ./targets/
RUN apk update && apk add --no-cache \
ca-certificates \
tzdata \
ffmpeg \
&& cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime \
&& echo "Asia/Shanghai" > /etc/timezone \
&& apk del tzdata
ffmpeg
COPY ./bili-sync-rs-Linux-*.tar.gz ./targets/
RUN if [ "$TARGETPLATFORM" = "linux/amd64" ]; then \
mv ./targets/Linux-x86_64-bili-sync-rs ./bili-sync-rs; \
tar xzvf ./targets/bili-sync-rs-Linux-x86_64-musl.tar.gz -C ./; \
else \
mv ./targets/Linux-aarch64-bili-sync-rs ./bili-sync-rs; \
tar xzvf ./targets/bili-sync-rs-Linux-aarch64-musl.tar.gz -C ./; \
fi
RUN rm -rf ./targets && chmod +x ./bili-sync-rs
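The Dockerfile's `TARGETPLATFORM` branch can be exercised outside Docker with a fabricated stub archive. A sketch (file names mirror the Dockerfile above; the archive contents are fake):

```shell
set -eu
TARGETPLATFORM="linux/amd64"   # Docker injects this build arg automatically
work=$(mktemp -d)
mkdir -p "$work/targets" "$work/stage"
# stand-in for the CI-built binary, packed the way the workflow uploads it
printf 'stub' > "$work/stage/bili-sync-rs"
tar czf "$work/targets/bili-sync-rs-Linux-x86_64-musl.tar.gz" -C "$work/stage" bili-sync-rs
cd "$work"
if [ "$TARGETPLATFORM" = "linux/amd64" ]; then
  tar xzf ./targets/bili-sync-rs-Linux-x86_64-musl.tar.gz -C ./
else
  tar xzf ./targets/bili-sync-rs-Linux-aarch64-musl.tar.gz -C ./
fi
rm -rf ./targets && chmod +x ./bili-sync-rs
```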


@@ -1,10 +1,24 @@
clean:
rm -rf ./*-bili-sync-rs
rm -rf ./bili-sync-rs-Linux*.tar.gz
build:
build-frontend:
cd ./web && bun run build && cd ..
build: build-frontend
cargo build --target x86_64-unknown-linux-musl --release
build-debug: build-frontend
cargo build --target x86_64-unknown-linux-musl
build-docker: build
cp target/x86_64-unknown-linux-musl/release/bili-sync-rs ./Linux-x86_64-bili-sync-rs
tar czvf ./bili-sync-rs-Linux-x86_64-musl.tar.gz -C ./target/x86_64-unknown-linux-musl/release/ ./bili-sync-rs
docker build . -t bili-sync-rs-local --build-arg="TARGETPLATFORM=linux/amd64"
just clean
just clean
build-docker-debug: build-debug
tar czvf ./bili-sync-rs-Linux-x86_64-musl.tar.gz -C ./target/x86_64-unknown-linux-musl/debug/ ./bili-sync-rs
docker build . -t bili-sync-rs-local --build-arg="TARGETPLATFORM=linux/amd64"
just clean
debug: build-frontend
cargo run

License (new file, 21 lines)

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2024 ᴀᴍᴛᴏᴀᴇʀ
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

README.md

@@ -3,173 +3,41 @@
## Introduction
> [!NOTE]
> This is the documentation for v2.x; for the v1.x documentation, see [here](https://github.com/amtoaer/bili-sync/tree/v1.x).
A BILIBILI favorites-sync tool written for NAS users; the downloaded library can be browsed with media-server tools such as EMBY.
Supports displaying video covers, titles, date added, tags, pagination, and more.
> [Click here](https://bili-sync.allwens.work/) to view the documentation
bili-sync is a Bilibili sync tool built for NAS users, powered by Rust & Tokio.
## Demo
**Note: because single-page and multi-page videos may coexist, choose "Mixed content" as the media-library type.**
### Overview
![Overview](./assets/overview.png)
### Detail
![Detail](./assets/detail.png)
### Management page
![Management page](/assets/webui.webp)
### Media-library overview
![Media-library overview](./assets/overview.webp)
### Media-library detail
![Media-library detail](./assets/detail.webp)
### Playback (with Infuse)
![Playback](./assets/play.png)
![Playback](./assets/play.webp)
### File layout
![Files](./assets/dir.png)
![Files](./assets/dir.webp)
## Configuration file
> [!NOTE]
> In a Docker environment, `~` is expanded to `/app`.
## Features & Roadmap
By default the program stores its configuration at `~/.config/bili-sync/config.toml` and its database at `~/.config/bili-sync/data.sqlite`; if they do not exist, they are created and filled with defaults.
- [x] Authenticate with user-provided credentials and refresh them automatically when needed
- [x] Support downloading favorites lists as well as video lists/collections
- [x] Automatically pick the best video and audio streams within the user-configured range, and merge them with FFmpeg after downloading
- [x] Download videos and video pages asynchronously and concurrently with Tokio and Reqwest
- [x] Use media-server-friendly file naming so the output can be imported as a library in one step
- [x] Retry failed downloads in the next round, and drop items automatically after too many failures
- [x] Persist media information in a database to avoid repeated requests for the same video
- [x] Log progress, and abort automatically when requests trigger rate-limiting, waiting for the next round
- [x] Provide multi-platform binaries, plus a ready-to-use Docker image for Linux
- [x] Support automatic scanning and downloading of videos in "Watch later"
- [x] Support automatic scanning and downloading of an uploader's submissions
- [x] Support limiting task parallelism and API request rate
- [x] Support chunked parallel download of a single file
- [x] Support configuring, viewing, and managing videos and video sources through a Web UI
The configuration file is validated on load; the default configuration fails validation, so the program exits with an error.
You can run the program right after downloading it, then edit the default configuration based on the error messages; once corrected, it runs normally.
For the `credential` section, see the [credential acquisition guide](https://nemo2011.github.io/bilibili-api/#/get-credential).
`video_name` and `page_name` support templates; see the example for the substitution syntax. Template placeholders are replaced dynamically at run time.
video_name supports bvid (video ID), title (video title), upper_name (uploader name), and upper_mid (uploader ID);
page_name supports all of video_name's parameters plus ptitle (page title) and pid (page number).
Under each favorite_list download path, the program creates folders as follows:
1. Single-page video:
```bash
├── {video_name}
│   ├── {page_name}.mp4
│   ├── {page_name}.nfo
│   └── {page_name}-poster.jpg
```
2. Multi-page video:
```bash
├── {video_name}
│   ├── poster.jpg
│   ├── Season 1
│   │   ├── {page_name} - S01E01.mp4
│   │   ├── {page_name} - S01E01.nfo
│   │   ├── {page_name} - S01E01-thumb.jpg
│   │   ├── {page_name} - S01E02.mp4
│   │   ├── {page_name} - S01E02.nfo
│   │   └── {page_name} - S01E02-thumb.jpg
│   └── tvshow.nfo
```
For the possible values of filter_option, see [analyzer.rs](https://github.com/amtoaer/bili-sync/blob/main/src/bilibili/analyzer.rs).
For the meaning of the danmaku_option entries, see [danmaku/mod.rs](https://github.com/amtoaer/bili-sync/blob/main/src/bilibili/danmaku/canvas/mod.rs) and [the original project's notes](https://github.com/gwy15/danmu2ass?tab=readme-ov-file#%E5%91%BD%E4%BB%A4%E8%A1%8C).
## Example configuration file
```toml
# Name of the folder a video is stored in
video_name = "{{title}}"
# Naming of a video's page files
page_name = "{{bvid}}"
# Interval between scan runs (in seconds)
interval = 1200
# Where emby actor metadata is stored
upper_path = "/home/amtoaer/.config/nas/emby/metadata/people/"
[credential]
# Bilibili web credentials; required to download high-definition videos
sessdata = ""
bili_jct = ""
buvid3 = ""
dedeuserid = ""
ac_time_value = ""
[filter_option]
# Video/audio stream filter options; the program uses the highest-quality stream within the range
# Note: too narrow a range may leave no matching stream; adjusting only the quality ceiling and codec priority is recommended
video_max_quality = "Quality8k"
video_min_quality = "Quality360p"
audio_max_quality = "QualityHiRES"
audio_min_quality = "Quality64k"
codecs = [
"AV1",
"HEV",
"AVC",
]
no_dolby_video = false
no_dolby_audio = false
no_hdr = false
no_hires = false
[danmaku_option]
# Danmaku options: font, font size, opacity, on-screen duration, bold, etc.
duration = 12.0
font = "黑体"
font_size = 25
width_ratio = 1.2
horizontal_gap = 20.0
lane_size = 32
float_percentage = 0.5
bottom_percentage = 0.3
opacity = 76
bold = true
outline = 0.8
time_offset = 0.0
[favorite_list]
# favorites list ID = storage path
52642258 = "/home/amtoaer/HDDs/Videos/Bilibilis/混剪"
```
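The `{{title}}` / `{{bvid}}` placeholders above are Handlebars-style templates. As a rough illustration of the substitution only (a minimal hypothetical sketch; the project's actual renderer uses the handlebars crate):

```rust
use std::collections::HashMap;

/// Naive `{{key}}` substitution, illustrating how `video_name`/`page_name`
/// templates are expanded. Not the real renderer, which uses handlebars.
fn render(template: &str, vars: &HashMap<&str, &str>) -> String {
    let mut out = template.to_string();
    for (key, value) in vars {
        // "{{{{{}}}}}" renders as "{{key}}" after brace escaping.
        out = out.replace(&format!("{{{{{}}}}}", key), value);
    }
    out
}

fn main() {
    let mut vars = HashMap::new();
    vars.insert("bvid", "BV1xx411c7mD");
    vars.insert("title", "Example Video");
    // video_name = "{{title}}", page_name = "{{bvid}}"
    assert_eq!(render("{{title}}", &vars), "Example Video");
    assert_eq!(render("{{bvid}}", &vars), "BV1xx411c7mD");
    println!("{}", render("{{title}} - {{bvid}}", &vars));
}
```

A template like `"{{upper_name}}/{{title}}"` would expand the same way, one placeholder per available parameter.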
## Example Docker Compose file
The project ships Docker images for `Linux/amd64` and `Linux/arm64`.
The Docker image contains the platform's executable (at `/app/bili-sync-rs`) and bundles FFmpeg; otherwise usage is identical to the plain binary. See the [Dockerfile used to build the image](./Dockerfile).
An example Docker Compose file:
```yaml
services:
bili-sync-rs:
image: amtoaer/bili-sync-rs:v2.0.0
restart: unless-stopped
network_mode: bridge
tty: true # enable only if your log terminal supports color output; otherwise logs may contain garbled characters
hostname: bili-sync-rs
container_name: bili-sync-rs
volumes:
- /home/amtoaer/.config/nas/bili-sync-rs:/app/.config/bili-sync
# plus any other required mounts; make sure they match the bili-sync-rs configuration
# ...
logging:
driver: "local"
```
## Roadmap
- [x] Credential authentication
- [x] Stream selection
- [x] Video download
- [x] Concurrent downloads
- [x] Run as a daemon
- [x] Generate nfo and poster files so episodes can be imported into emby individually
- [x] Paginate through favorites to download all historical videos
- [x] Database integration: check ahead of time, download only what is needed
- [x] Danmaku download
- [x] Better error handling
- [x] Better logging
- [x] Workaround for rate-limiting triggered by overly fast requests
- [x] Simple, easy-to-use packaging (e.g. docker)
- [ ] Support downloading uploader collections
## References & Acknowledgements
@@ -177,4 +45,4 @@ services:
+ [bilibili-API-collect](https://github.com/SocialSisterYi/bilibili-API-collect) Third-party documentation of Bilibili's APIs
+ [bilibili-api](https://github.com/Nemo2011/bilibili-api) A Python reference implementation for calling the APIs
+ [danmu2ass](https://github.com/gwy15/danmu2ass) The source this project's danmaku download feature is adapted from

Binary assets updated: assets/detail.webp, assets/dir.webp, assets/overview.webp, assets/play.webp, and assets/webui.webp added (WebP replacements for the previous, much larger PNG screenshots).


@@ -0,0 +1,73 @@
[package]
name = "bili_sync"
version = { workspace = true }
edition = { workspace = true }
authors = { workspace = true }
license = { workspace = true }
description = { workspace = true }
publish = { workspace = true }
readme = "../../README.md"
build = "build.rs"
[dependencies]
anyhow = { workspace = true }
arc-swap = { workspace = true }
async-stream = { workspace = true }
axum = { workspace = true }
base64 = { workspace = true }
bili_sync_entity = { workspace = true }
bili_sync_migration = { workspace = true }
chrono = { workspace = true }
clap = { workspace = true }
cookie = { workspace = true }
cow-utils = { workspace = true }
dashmap = { workspace = true }
dirs = { workspace = true }
enum_dispatch = { workspace = true }
float-ord = { workspace = true }
futures = { workspace = true }
handlebars = { workspace = true }
hex = { workspace = true }
leaky-bucket = { workspace = true }
md5 = { workspace = true }
memchr = { workspace = true }
once_cell = { workspace = true }
parking_lot = { workspace = true }
prost = { workspace = true }
quick-xml = { workspace = true }
rand = { workspace = true }
regex = { workspace = true }
reqwest = { workspace = true }
rsa = { workspace = true }
rust-embed-for-web = { workspace = true }
sea-orm = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
serde_urlencoded = { workspace = true }
strum = { workspace = true }
sysinfo = { workspace = true }
thiserror = { workspace = true }
tokio = { workspace = true }
tokio-stream = { workspace = true }
tokio-util = { workspace = true }
toml = { workspace = true }
tower = { workspace = true }
tracing = { workspace = true }
tracing-subscriber = { workspace = true }
ua_generator = { workspace = true }
uuid = { workspace = true }
validator = { workspace = true }
[dev-dependencies]
assert_matches = { workspace = true }
[build-dependencies]
built = { workspace = true }
git2 = { workspace = true }
[package.metadata.release]
release = true
[[bin]]
name = "bili-sync-rs"
path = "src/main.rs"


@@ -0,0 +1,3 @@
fn main() {
built::write_built_file().expect("Failed to acquire build-time information");
}


@@ -0,0 +1,114 @@
use std::borrow::Cow;
use std::path::Path;
use std::pin::Pin;
use anyhow::{Context, Result, ensure};
use bili_sync_entity::rule::Rule;
use bili_sync_entity::*;
use chrono::Utc;
use futures::Stream;
use sea_orm::ActiveValue::Set;
use sea_orm::entity::prelude::*;
use sea_orm::sea_query::SimpleExpr;
use sea_orm::{DatabaseConnection, Unchanged};
use crate::adapter::{_ActiveModel, VideoSource, VideoSourceEnum};
use crate::bilibili::{BiliClient, Collection, CollectionItem, CollectionType, VideoInfo};
impl VideoSource for collection::Model {
fn display_name(&self) -> Cow<'static, str> {
format!("{}{}", CollectionType::from_expected(self.r#type), self.name).into()
}
fn filter_expr(&self) -> SimpleExpr {
video::Column::CollectionId.eq(self.id)
}
fn set_relation_id(&self, video_model: &mut video::ActiveModel) {
video_model.collection_id = Set(Some(self.id));
}
fn path(&self) -> &Path {
Path::new(self.path.as_str())
}
fn get_latest_row_at(&self) -> DateTime {
self.latest_row_at
}
fn update_latest_row_at(&self, datetime: DateTime) -> _ActiveModel {
_ActiveModel::Collection(collection::ActiveModel {
id: Unchanged(self.id),
latest_row_at: Set(datetime),
..Default::default()
})
}
fn should_take(&self, _release_datetime: &chrono::DateTime<Utc>, _latest_row_at: &chrono::DateTime<Utc>) -> bool {
// Items returned by a collection (video collection / video list) do not appear to be strictly time-ordered,
// and different collections sort differently. For correctness, collections never break early by time; the full list is fetched every run.
true
}
fn should_filter(
&self,
video_info: Result<VideoInfo, anyhow::Error>,
latest_row_at: &chrono::DateTime<Utc>,
) -> Option<VideoInfo> {
// Because collection videos have no fixed time order, should_take cannot stop fetching early, so should_filter must do the extra filtering
if let Ok(video_info) = video_info
&& video_info.release_datetime() > latest_row_at
{
return Some(video_info);
}
None
}
fn rule(&self) -> &Option<Rule> {
&self.rule
}
async fn refresh<'a>(
self,
bili_client: &'a BiliClient,
connection: &'a DatabaseConnection,
) -> Result<(
VideoSourceEnum,
Pin<Box<dyn Stream<Item = Result<VideoInfo>> + Send + 'a>>,
)> {
let collection = Collection::new(
bili_client,
CollectionItem {
sid: self.s_id.to_string(),
mid: self.m_id.to_string(),
collection_type: CollectionType::from_expected(self.r#type),
},
);
let collection_info = collection.get_info().await?;
ensure!(
collection_info.sid == self.s_id
&& collection_info.mid == self.m_id
&& collection_info.collection_type == CollectionType::from_expected(self.r#type),
"collection info mismatch: {:?} != {:?}",
collection_info,
collection.collection
);
collection::ActiveModel {
id: Unchanged(self.id),
name: Set(collection_info.name.clone()),
..Default::default()
}
.save(connection)
.await?;
Ok((
collection::Entity::find()
.filter(collection::Column::Id.eq(self.id))
.one(connection)
.await?
.context("collection not found")?
.into(),
Box::pin(collection.into_video_stream()),
))
}
}
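The `should_take`/`should_filter` split above boils down to "fetch everything, then keep only items newer than the last recorded timestamp". A standalone sketch of that filtering (hypothetical types; plain integer timestamps stand in for `chrono::DateTime`):

```rust
/// For sources with no stable time ordering (like collections), fetch everything
/// and keep only items strictly newer than the last recorded timestamp.
fn filter_new(items: &[(u32, i64)], latest_row_at: i64) -> Vec<u32> {
    items
        .iter()
        .filter(|(_, release)| *release > latest_row_at)
        .map(|(id, _)| *id)
        .collect()
}

fn main() {
    // Items arrive out of order, so the loop cannot break on the first old item.
    let items = [(1, 100), (2, 300), (3, 50), (4, 250)];
    assert_eq!(filter_new(&items, 200), vec![2, 4]);
    println!("{:?}", filter_new(&items, 200));
}
```

A time-ordered source (like favorites) can instead stop at the first item at or before `latest_row_at`, which is exactly what the default `should_take` does.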


@@ -0,0 +1,83 @@
use std::borrow::Cow;
use std::path::Path;
use std::pin::Pin;
use anyhow::{Context, Result, ensure};
use bili_sync_entity::rule::Rule;
use bili_sync_entity::*;
use futures::Stream;
use sea_orm::ActiveValue::Set;
use sea_orm::entity::prelude::*;
use sea_orm::sea_query::SimpleExpr;
use sea_orm::{DatabaseConnection, Unchanged};
use crate::adapter::{_ActiveModel, VideoSource, VideoSourceEnum};
use crate::bilibili::{BiliClient, FavoriteList, VideoInfo};
impl VideoSource for favorite::Model {
fn display_name(&self) -> Cow<'static, str> {
format!("收藏夹「{}」", self.name).into()
}
fn filter_expr(&self) -> SimpleExpr {
video::Column::FavoriteId.eq(self.id)
}
fn set_relation_id(&self, video_model: &mut video::ActiveModel) {
video_model.favorite_id = Set(Some(self.id));
}
fn path(&self) -> &Path {
Path::new(self.path.as_str())
}
fn get_latest_row_at(&self) -> DateTime {
self.latest_row_at
}
fn update_latest_row_at(&self, datetime: DateTime) -> _ActiveModel {
_ActiveModel::Favorite(favorite::ActiveModel {
id: Unchanged(self.id),
latest_row_at: Set(datetime),
..Default::default()
})
}
fn rule(&self) -> &Option<Rule> {
&self.rule
}
async fn refresh<'a>(
self,
bili_client: &'a BiliClient,
connection: &'a DatabaseConnection,
) -> Result<(
VideoSourceEnum,
Pin<Box<dyn Stream<Item = Result<VideoInfo>> + Send + 'a>>,
)> {
let favorite = FavoriteList::new(bili_client, self.f_id.to_string());
let favorite_info = favorite.get_info().await?;
ensure!(
favorite_info.id == self.f_id,
"favorite id mismatch: {} != {}",
favorite_info.id,
self.f_id
);
favorite::ActiveModel {
id: Unchanged(self.id),
name: Set(favorite_info.title.clone()),
..Default::default()
}
.save(connection)
.await?;
Ok((
favorite::Entity::find()
.filter(favorite::Column::Id.eq(self.id))
.one(connection)
.await?
.context("favorite not found")?
.into(),
Box::pin(favorite.into_video_stream()),
))
}
}


@@ -0,0 +1,138 @@
mod collection;
mod favorite;
mod submission;
mod watch_later;
use std::borrow::Cow;
use std::path::Path;
use std::pin::Pin;
use anyhow::Result;
use chrono::Utc;
use enum_dispatch::enum_dispatch;
use futures::Stream;
use sea_orm::ActiveValue::Set;
use sea_orm::DatabaseConnection;
use sea_orm::entity::prelude::*;
use sea_orm::sea_query::SimpleExpr;
#[rustfmt::skip]
use bili_sync_entity::collection::Model as Collection;
use bili_sync_entity::favorite::Model as Favorite;
use bili_sync_entity::rule::Rule;
use bili_sync_entity::submission::Model as Submission;
use bili_sync_entity::watch_later::Model as WatchLater;
use crate::bilibili::{BiliClient, VideoInfo};
#[enum_dispatch]
pub enum VideoSourceEnum {
Favorite,
Collection,
Submission,
WatchLater,
}
#[enum_dispatch(VideoSourceEnum)]
pub trait VideoSource {
/// Returns the display name of the video source
fn display_name(&self) -> Cow<'static, str>;
/// Returns the filter condition for this source's video list
fn filter_expr(&self) -> SimpleExpr;
// Sets this source's relation id on video_model
fn set_relation_id(&self, video_model: &mut bili_sync_entity::video::ActiveModel);
// Returns the storage path of the video list
fn path(&self) -> &Path;
/// Returns the latest timestamp recorded in the video model
fn get_latest_row_at(&self) -> DateTime;
/// Updates the latest timestamp recorded in the video model; returns the ActiveModel to update, on which save is then called.
/// Different VideoSources return different types, and impl Trait cannot be used without losing VideoSource's object safety;
/// Box<dyn ActiveModelTrait> fails because ActiveModelTrait is not object safe, so a hand-written enum does the static dispatch.
fn update_latest_row_at(&self, datetime: DateTime) -> _ActiveModel;
// Decides whether fetching should continue
fn should_take(&self, release_datetime: &chrono::DateTime<Utc>, latest_row_at: &chrono::DateTime<Utc>) -> bool {
release_datetime > latest_row_at
}
fn should_filter(
&self,
video_info: Result<VideoInfo, anyhow::Error>,
_latest_row_at: &chrono::DateTime<Utc>,
) -> Option<VideoInfo> {
// Videos are fetched in time order; should_take has already taken every video that needs processing, so should_filter has nothing extra to do
video_info.ok()
}
fn rule(&self) -> &Option<Rule>;
fn log_refresh_video_start(&self) {
info!("开始扫描{}..", self.display_name());
}
fn log_refresh_video_end(&self, count: usize) {
info!("扫描{}完成,获取到 {} 条新视频", self.display_name(), count);
}
fn log_fetch_video_start(&self) {
info!("开始填充{}视频详情..", self.display_name());
}
fn log_fetch_video_end(&self) {
info!("填充{}视频详情完成", self.display_name());
}
fn log_download_video_start(&self) {
info!("开始下载{}视频..", self.display_name());
}
fn log_download_video_end(&self) {
info!("下载{}视频完成", self.display_name());
}
async fn refresh<'a>(
self,
bili_client: &'a BiliClient,
connection: &'a DatabaseConnection,
) -> Result<(
VideoSourceEnum,
Pin<Box<dyn Stream<Item = Result<VideoInfo>> + Send + 'a>>,
)>;
}
pub enum _ActiveModel {
Favorite(bili_sync_entity::favorite::ActiveModel),
Collection(bili_sync_entity::collection::ActiveModel),
Submission(bili_sync_entity::submission::ActiveModel),
WatchLater(bili_sync_entity::watch_later::ActiveModel),
}
impl _ActiveModel {
pub async fn save(self, connection: &DatabaseConnection) -> Result<()> {
match self {
_ActiveModel::Favorite(model) => {
model.save(connection).await?;
}
_ActiveModel::Collection(model) => {
model.save(connection).await?;
}
_ActiveModel::Submission(model) => {
model.save(connection).await?;
}
_ActiveModel::WatchLater(mut model) => {
if model.id.is_not_set() {
model.id = Set(1);
model.insert(connection).await?;
} else {
model.save(connection).await?;
}
}
}
Ok(())
}
}
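The hand-rolled static dispatch that `_ActiveModel` performs, wrapping each concrete type in an enum and matching, can be reduced to a standalone sketch (hypothetical `Save` trait and unit structs; the real code dispatches to SeaORM ActiveModels):

```rust
// When a trait cannot go behind `dyn` (here, ActiveModelTrait is not object
// safe), wrap each concrete implementor in an enum and match to dispatch.
trait Save {
    fn save(self) -> String;
}

struct Favorite;
struct Collection;

impl Save for Favorite {
    fn save(self) -> String {
        "saved favorite".to_string()
    }
}
impl Save for Collection {
    fn save(self) -> String {
        "saved collection".to_string()
    }
}

enum ActiveModel {
    Favorite(Favorite),
    Collection(Collection),
}

impl ActiveModel {
    // Static dispatch: each match arm calls the concrete method directly.
    fn save(self) -> String {
        match self {
            ActiveModel::Favorite(m) => m.save(),
            ActiveModel::Collection(m) => m.save(),
        }
    }
}

fn main() {
    assert_eq!(ActiveModel::Favorite(Favorite).save(), "saved favorite");
    println!("{}", ActiveModel::Collection(Collection).save());
}
```

The enum_dispatch crate used elsewhere in this file generates essentially this pattern automatically.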


@@ -0,0 +1,82 @@
use std::path::Path;
use std::pin::Pin;
use anyhow::{Context, Result, ensure};
use bili_sync_entity::rule::Rule;
use bili_sync_entity::*;
use futures::Stream;
use sea_orm::ActiveValue::Set;
use sea_orm::entity::prelude::*;
use sea_orm::sea_query::SimpleExpr;
use sea_orm::{DatabaseConnection, Unchanged};
use crate::adapter::{_ActiveModel, VideoSource, VideoSourceEnum};
use crate::bilibili::{BiliClient, Submission, VideoInfo};
impl VideoSource for submission::Model {
fn display_name(&self) -> std::borrow::Cow<'static, str> {
format!("「{}」投稿", self.upper_name).into()
}
fn filter_expr(&self) -> SimpleExpr {
video::Column::SubmissionId.eq(self.id)
}
fn set_relation_id(&self, video_model: &mut video::ActiveModel) {
video_model.submission_id = Set(Some(self.id));
}
fn path(&self) -> &Path {
Path::new(self.path.as_str())
}
fn get_latest_row_at(&self) -> DateTime {
self.latest_row_at
}
fn update_latest_row_at(&self, datetime: DateTime) -> _ActiveModel {
_ActiveModel::Submission(submission::ActiveModel {
id: Unchanged(self.id),
latest_row_at: Set(datetime),
..Default::default()
})
}
fn rule(&self) -> &Option<Rule> {
&self.rule
}
async fn refresh<'a>(
self,
bili_client: &'a BiliClient,
connection: &'a DatabaseConnection,
) -> Result<(
VideoSourceEnum,
Pin<Box<dyn Stream<Item = Result<VideoInfo>> + Send + 'a>>,
)> {
let submission = Submission::new(bili_client, self.upper_id.to_string());
let upper = submission.get_info().await?;
ensure!(
upper.mid == submission.upper_id,
"submission upper id mismatch: {} != {}",
upper.mid,
submission.upper_id
);
submission::ActiveModel {
id: Unchanged(self.id),
upper_name: Set(upper.name),
..Default::default()
}
.save(connection)
.await?;
Ok((
submission::Entity::find()
.filter(submission::Column::Id.eq(self.id))
.one(connection)
.await?
.context("submission not found")?
.into(),
Box::pin(submission.into_video_stream()),
))
}
}


@@ -0,0 +1,60 @@
use std::path::Path;
use std::pin::Pin;
use anyhow::Result;
use bili_sync_entity::rule::Rule;
use bili_sync_entity::*;
use futures::Stream;
use sea_orm::ActiveValue::Set;
use sea_orm::entity::prelude::*;
use sea_orm::sea_query::SimpleExpr;
use sea_orm::{DatabaseConnection, Unchanged};
use crate::adapter::{_ActiveModel, VideoSource, VideoSourceEnum};
use crate::bilibili::{BiliClient, VideoInfo, WatchLater};
impl VideoSource for watch_later::Model {
fn display_name(&self) -> std::borrow::Cow<'static, str> {
"稍后再看".into()
}
fn filter_expr(&self) -> SimpleExpr {
video::Column::WatchLaterId.eq(self.id)
}
fn set_relation_id(&self, video_model: &mut video::ActiveModel) {
video_model.watch_later_id = Set(Some(self.id));
}
fn path(&self) -> &Path {
Path::new(self.path.as_str())
}
fn get_latest_row_at(&self) -> DateTime {
self.latest_row_at
}
fn update_latest_row_at(&self, datetime: DateTime) -> _ActiveModel {
_ActiveModel::WatchLater(watch_later::ActiveModel {
id: Unchanged(self.id),
latest_row_at: Set(datetime),
..Default::default()
})
}
fn rule(&self) -> &Option<Rule> {
&self.rule
}
async fn refresh<'a>(
self,
bili_client: &'a BiliClient,
_connection: &'a DatabaseConnection,
) -> Result<(
VideoSourceEnum,
Pin<Box<dyn Stream<Item = Result<VideoInfo>> + Send + 'a>>,
)> {
let watch_later = WatchLater::new(bili_client);
Ok((self.into(), Box::pin(watch_later.into_video_stream())))
}
}


@@ -0,0 +1,9 @@
use thiserror::Error;
#[derive(Error, Debug)]
pub enum InnerApiError {
#[error("Primary key not found: {0}")]
NotFound(i32),
#[error("Bad request: {0}")]
BadRequest(String),
}


@@ -0,0 +1,83 @@
use std::borrow::Borrow;
use sea_orm::{ConnectionTrait, DatabaseTransaction};
use crate::api::response::{PageInfo, VideoInfo};
pub async fn update_video_download_status(
txn: &DatabaseTransaction,
videos: &[impl Borrow<VideoInfo>],
batch_size: Option<usize>,
) -> Result<(), sea_orm::DbErr> {
if videos.is_empty() {
return Ok(());
}
let videos = videos.iter().map(|v| v.borrow()).collect::<Vec<_>>();
if let Some(size) = batch_size {
for chunk in videos.chunks(size) {
execute_video_update_batch(txn, chunk).await?;
}
} else {
execute_video_update_batch(txn, &videos).await?;
}
Ok(())
}
pub async fn update_page_download_status(
txn: &DatabaseTransaction,
pages: &[impl Borrow<PageInfo>],
batch_size: Option<usize>,
) -> Result<(), sea_orm::DbErr> {
if pages.is_empty() {
return Ok(());
}
let pages = pages.iter().map(|v| v.borrow()).collect::<Vec<_>>();
if let Some(size) = batch_size {
for chunk in pages.chunks(size) {
execute_page_update_batch(txn, chunk).await?;
}
} else {
execute_page_update_batch(txn, &pages).await?;
}
Ok(())
}
async fn execute_video_update_batch(txn: &DatabaseTransaction, videos: &[&VideoInfo]) -> Result<(), sea_orm::DbErr> {
if videos.is_empty() {
return Ok(());
}
let sql = format!(
"WITH tempdata(id, download_status) AS (VALUES {}) \
UPDATE video \
SET download_status = tempdata.download_status \
FROM tempdata \
WHERE video.id = tempdata.id",
videos
.iter()
.map(|v| format!("({}, {})", v.id, v.download_status))
.collect::<Vec<_>>()
.join(", ")
);
txn.execute_unprepared(&sql).await?;
Ok(())
}
async fn execute_page_update_batch(txn: &DatabaseTransaction, pages: &[&PageInfo]) -> Result<(), sea_orm::DbErr> {
if pages.is_empty() {
return Ok(());
}
let sql = format!(
"WITH tempdata(id, download_status) AS (VALUES {}) \
UPDATE page \
SET download_status = tempdata.download_status \
FROM tempdata \
WHERE page.id = tempdata.id",
pages
.iter()
.map(|p| format!("({}, {})", p.id, p.download_status))
.collect::<Vec<_>>()
.join(", ")
);
txn.execute_unprepared(&sql).await?;
Ok(())
}
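The `VALUES`-CTE bulk update built above can be sketched in isolation: given (id, status) pairs, emit one statement per chunk so each statement stays bounded. A hypothetical standalone version (the real code runs the statements through a SeaORM transaction):

```rust
/// Builds one UPDATE ... FROM (VALUES ...) statement per chunk of
/// (id, download_status) pairs, mirroring execute_video_update_batch.
fn build_update_sql(table: &str, rows: &[(i32, u32)], batch_size: usize) -> Vec<String> {
    rows.chunks(batch_size)
        .map(|chunk| {
            let values = chunk
                .iter()
                .map(|(id, status)| format!("({}, {})", id, status))
                .collect::<Vec<_>>()
                .join(", ");
            format!(
                "WITH tempdata(id, download_status) AS (VALUES {}) \
                 UPDATE {} SET download_status = tempdata.download_status \
                 FROM tempdata WHERE {}.id = tempdata.id",
                values, table, table
            )
        })
        .collect()
}

fn main() {
    let sqls = build_update_sql("video", &[(1, 0), (2, 7), (3, 7)], 2);
    // Three rows with batch_size 2 -> two statements.
    assert_eq!(sqls.len(), 2);
    assert!(sqls[0].contains("(VALUES (1, 0), (2, 7))"));
    println!("{}", sqls[1]);
}
```

Interpolating integers directly is safe here because both fields are numeric; string data would need bound parameters instead.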


@@ -0,0 +1,8 @@
mod error;
mod helper;
mod request;
mod response;
mod routes;
mod wrapper;
pub use routes::{LogHelper, MAX_HISTORY_LOGS, router};


@@ -0,0 +1,91 @@
use bili_sync_entity::rule::Rule;
use serde::Deserialize;
use validator::Validate;
use crate::bilibili::CollectionType;
#[derive(Deserialize)]
pub struct VideosRequest {
pub collection: Option<i32>,
pub favorite: Option<i32>,
pub submission: Option<i32>,
pub watch_later: Option<i32>,
pub query: Option<String>,
pub page: Option<u64>,
pub page_size: Option<u64>,
}
#[derive(Deserialize)]
pub struct ResetRequest {
#[serde(default)]
pub force: bool,
}
#[derive(Deserialize, Validate)]
pub struct StatusUpdate {
#[validate(range(min = 0, max = 4))]
pub status_index: usize,
#[validate(custom(function = "crate::utils::validation::validate_status_value"))]
pub status_value: u32,
}
#[derive(Deserialize, Validate)]
pub struct PageStatusUpdate {
pub page_id: i32,
#[validate(nested)]
pub updates: Vec<StatusUpdate>,
}
#[derive(Deserialize, Validate)]
pub struct UpdateVideoStatusRequest {
#[serde(default)]
#[validate(nested)]
pub video_updates: Vec<StatusUpdate>,
#[serde(default)]
#[validate(nested)]
pub page_updates: Vec<PageStatusUpdate>,
}
#[derive(Deserialize)]
pub struct FollowedCollectionsRequest {
pub page_num: Option<i32>,
pub page_size: Option<i32>,
}
#[derive(Deserialize)]
pub struct FollowedUppersRequest {
pub page_num: Option<i32>,
pub page_size: Option<i32>,
}
#[derive(Deserialize, Validate)]
pub struct InsertFavoriteRequest {
pub fid: i64,
#[validate(custom(function = "crate::utils::validation::validate_path"))]
pub path: String,
}
#[derive(Deserialize, Validate)]
pub struct InsertCollectionRequest {
pub sid: i64,
pub mid: i64,
#[serde(default)]
pub collection_type: CollectionType,
#[validate(custom(function = "crate::utils::validation::validate_path"))]
pub path: String,
}
#[derive(Deserialize, Validate)]
pub struct InsertSubmissionRequest {
pub upper_id: i64,
#[validate(custom(function = "crate::utils::validation::validate_path"))]
pub path: String,
}
#[derive(Deserialize, Validate)]
pub struct UpdateVideoSourceRequest {
#[validate(custom(function = "crate::utils::validation::validate_path"))]
pub path: String,
pub enabled: bool,
pub rule: Option<Rule>,
}


@@ -0,0 +1,189 @@
use bili_sync_entity::rule::Rule;
use bili_sync_entity::*;
use sea_orm::{DerivePartialModel, FromQueryResult};
use serde::Serialize;
use crate::utils::status::{PageStatus, VideoStatus};
#[derive(Serialize)]
pub struct VideoSourcesResponse {
pub collection: Vec<VideoSource>,
pub favorite: Vec<VideoSource>,
pub submission: Vec<VideoSource>,
pub watch_later: Vec<VideoSource>,
}
#[derive(Serialize)]
pub struct VideosResponse {
pub videos: Vec<VideoInfo>,
pub total_count: u64,
}
#[derive(Serialize)]
pub struct VideoResponse {
pub video: VideoInfo,
pub pages: Vec<PageInfo>,
}
#[derive(Serialize)]
pub struct ResetVideoResponse {
pub resetted: bool,
pub video: VideoInfo,
pub pages: Vec<PageInfo>,
}
#[derive(Serialize)]
pub struct ResetAllVideosResponse {
pub resetted: bool,
pub resetted_videos_count: usize,
pub resetted_pages_count: usize,
}
#[derive(Serialize)]
pub struct UpdateVideoStatusResponse {
pub success: bool,
pub video: VideoInfo,
pub pages: Vec<PageInfo>,
}
#[derive(FromQueryResult, Serialize)]
pub struct VideoSource {
pub id: i32,
pub name: String,
}
#[derive(Serialize, DerivePartialModel, FromQueryResult)]
#[sea_orm(entity = "video::Entity")]
pub struct VideoInfo {
pub id: i32,
pub bvid: String,
pub name: String,
pub upper_name: String,
pub should_download: bool,
#[serde(serialize_with = "serde_video_download_status")]
pub download_status: u32,
}
#[derive(Serialize, DerivePartialModel, FromQueryResult)]
#[sea_orm(entity = "page::Entity")]
pub struct PageInfo {
pub id: i32,
pub video_id: i32,
pub pid: i32,
pub name: String,
#[serde(serialize_with = "serde_page_download_status")]
pub download_status: u32,
}
fn serde_video_download_status<S>(status: &u32, serializer: S) -> Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
let status: [u32; 5] = VideoStatus::from(*status).into();
status.serialize(serializer)
}
fn serde_page_download_status<S>(status: &u32, serializer: S) -> Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
let status: [u32; 5] = PageStatus::from(*status).into();
status.serialize(serializer)
}
#[derive(Serialize)]
pub struct FavoriteWithSubscriptionStatus {
pub title: String,
pub media_count: i64,
pub fid: i64,
pub mid: i64,
pub subscribed: bool,
}
#[derive(Serialize)]
pub struct CollectionWithSubscriptionStatus {
pub title: String,
pub sid: i64,
pub mid: i64,
pub invalid: bool,
pub subscribed: bool,
}
#[derive(Serialize)]
pub struct UpperWithSubscriptionStatus {
pub mid: i64,
pub uname: String,
pub face: String,
pub sign: String,
pub invalid: bool,
pub subscribed: bool,
}
#[derive(Serialize)]
pub struct FavoritesResponse {
pub favorites: Vec<FavoriteWithSubscriptionStatus>,
}
#[derive(Serialize)]
pub struct CollectionsResponse {
pub collections: Vec<CollectionWithSubscriptionStatus>,
pub total: i64,
}
#[derive(Serialize)]
pub struct UppersResponse {
pub uppers: Vec<UpperWithSubscriptionStatus>,
pub total: i64,
}
#[derive(Serialize)]
pub struct VideoSourcesDetailsResponse {
pub collections: Vec<VideoSourceDetail>,
pub favorites: Vec<VideoSourceDetail>,
pub submissions: Vec<VideoSourceDetail>,
pub watch_later: Vec<VideoSourceDetail>,
}
#[derive(Serialize, FromQueryResult)]
pub struct DayCountPair {
pub day: String,
pub cnt: i64,
}
#[derive(Serialize)]
pub struct DashBoardResponse {
pub enabled_favorites: u64,
pub enabled_collections: u64,
pub enabled_submissions: u64,
pub enable_watch_later: bool,
pub videos_by_day: Vec<DayCountPair>,
}
#[derive(Serialize)]
pub struct SysInfo {
pub total_memory: u64,
pub used_memory: u64,
pub process_memory: u64,
pub used_cpu: f32,
pub process_cpu: f32,
pub total_disk: u64,
pub available_disk: u64,
}
#[derive(Serialize, FromQueryResult)]
#[serde(rename_all = "camelCase")]
pub struct VideoSourceDetail {
pub id: i32,
pub name: String,
pub path: String,
pub rule: Option<Rule>,
#[serde(default)]
pub rule_display: Option<String>,
pub enabled: bool,
}
#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
pub struct UpdateVideoSourceResponse {
pub rule_display: Option<String>,
}


@@ -0,0 +1,36 @@
use std::sync::Arc;
use anyhow::Result;
use axum::Router;
use axum::extract::Extension;
use axum::routing::get;
use sea_orm::DatabaseConnection;
use crate::api::error::InnerApiError;
use crate::api::wrapper::{ApiError, ApiResponse, ValidatedJson};
use crate::config::{Config, VersionedConfig};
use crate::utils::task_notifier::TASK_STATUS_NOTIFIER;
pub(super) fn router() -> Router {
Router::new().route("/config", get(get_config).put(update_config))
}
/// Fetch the global configuration
pub async fn get_config() -> Result<ApiResponse<Arc<Config>>, ApiError> {
Ok(ApiResponse::ok(VersionedConfig::get().load_full()))
}
/// Update the global configuration
pub async fn update_config(
Extension(db): Extension<DatabaseConnection>,
ValidatedJson(config): ValidatedJson<Config>,
) -> Result<ApiResponse<Arc<Config>>, ApiError> {
let Some(_lock) = TASK_STATUS_NOTIFIER.detect_running() else {
// A simple guard against potential inconsistency
return Err(InnerApiError::BadRequest("下载任务正在运行,无法修改配置".to_string()).into());
};
config.check()?;
let new_config = VersionedConfig::get().update(config, &db).await?;
drop(_lock);
Ok(ApiResponse::ok(new_config))
}
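The `detect_running` guard above follows a try-lock pattern: reject the request immediately if the download task holds the lock, rather than blocking. A standalone sketch with std's Mutex (hypothetical; the project's TASK_STATUS_NOTIFIER is its own type):

```rust
use std::sync::Mutex;

/// Returns an error when the task lock is already held, instead of waiting;
/// mirrors rejecting a config update while a download round is running.
fn try_update_config(lock: &Mutex<()>) -> Result<&'static str, &'static str> {
    match lock.try_lock() {
        // _guard is dropped at the end of this arm, releasing the lock.
        Ok(_guard) => Ok("config updated"),
        Err(_) => Err("task running, config update rejected"),
    }
}

fn main() {
    let task_lock = Mutex::new(());
    assert_eq!(try_update_config(&task_lock), Ok("config updated"));

    // Simulate a running task holding the lock.
    let _running = task_lock.lock().unwrap();
    assert_eq!(
        try_update_config(&task_lock),
        Err("task running, config update rejected")
    );
}
```

Holding the guard for the duration of the update (as `update_config` does before `drop(_lock)`) also keeps a task from starting mid-update.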


@@ -0,0 +1,65 @@
use axum::routing::get;
use axum::{Extension, Router};
use bili_sync_entity::*;
use sea_orm::entity::prelude::*;
use sea_orm::{FromQueryResult, Statement};
use crate::api::response::{DashBoardResponse, DayCountPair};
use crate::api::wrapper::{ApiError, ApiResponse};
pub(super) fn router() -> Router {
Router::new().route("/dashboard", get(get_dashboard))
}
async fn get_dashboard(
Extension(db): Extension<DatabaseConnection>,
) -> Result<ApiResponse<DashBoardResponse>, ApiError> {
let (enabled_favorites, enabled_collections, enabled_submissions, enabled_watch_later, videos_by_day) = tokio::try_join!(
favorite::Entity::find()
.filter(favorite::Column::Enabled.eq(true))
.count(&db),
collection::Entity::find()
.filter(collection::Column::Enabled.eq(true))
.count(&db),
submission::Entity::find()
.filter(submission::Column::Enabled.eq(true))
.count(&db),
watch_later::Entity::find()
.filter(watch_later::Column::Enabled.eq(true))
.count(&db),
DayCountPair::find_by_statement(Statement::from_string(
db.get_database_backend(),
// Expressing this in SeaORM is too convoluted; just write raw SQL
"
SELECT
dates.day AS day,
COUNT(video.id) AS cnt
FROM
(
SELECT
STRFTIME('%Y-%m-%d', DATE('now', '-' || n || ' days', 'localtime')) AS day,
DATETIME(DATE('now', '-' || n || ' days', 'localtime'), 'utc') AS start_utc_datetime,
DATETIME(DATE('now', '-' || n || ' days', '+1 day', 'localtime'), 'utc') AS end_utc_datetime
FROM
(
SELECT 0 AS n UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6
)
) AS dates
LEFT JOIN
video ON video.created_at >= dates.start_utc_datetime AND video.created_at < dates.end_utc_datetime
GROUP BY
dates.day
ORDER BY
dates.day;
"
))
.all(&db),
)?;
Ok(ApiResponse::ok(DashBoardResponse {
enabled_favorites,
enabled_collections,
enabled_submissions,
enable_watch_later: enabled_watch_later > 0,
videos_by_day,
}))
}
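The raw SQL above generates a seven-day calendar table and LEFT JOINs videos onto it, so days with no videos still appear with a count of 0. A minimal std-only Rust sketch of the same zero-filled bucketing, using hypothetical ordinal day numbers in place of real dates:

```rust
use std::collections::BTreeMap;

/// Count items per day over the last `window` days, emitting 0 for empty days
/// (mirrors the LEFT JOIN against the generated date table in the SQL above).
fn counts_per_day(days: &[u32], today: u32, window: u32) -> Vec<(u32, u64)> {
    // Pre-seed every day in the window with 0, like the derived `dates` table.
    let mut buckets: BTreeMap<u32, u64> = (today + 1 - window..=today).map(|d| (d, 0u64)).collect();
    for &d in days {
        if let Some(cnt) = buckets.get_mut(&d) {
            *cnt += 1;
        }
    }
    buckets.into_iter().collect()
}

fn main() {
    // Videos on day 100 and twice on day 102; day 101 still shows up as 0.
    let counts = counts_per_day(&[100, 102, 102], 102, 3);
    assert_eq!(counts, vec![(100, 1), (101, 0), (102, 2)]);
}
```

The real query additionally converts local-day boundaries to UTC before comparing against `video.created_at`; that timezone handling is omitted here.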

@@ -0,0 +1,146 @@
use std::collections::HashSet;
use std::sync::Arc;
use anyhow::Result;
use axum::Router;
use axum::extract::{Extension, Query};
use axum::routing::get;
use bili_sync_entity::*;
use sea_orm::{ColumnTrait, DatabaseConnection, EntityTrait, QueryFilter, QuerySelect};
use crate::api::request::{FollowedCollectionsRequest, FollowedUppersRequest};
use crate::api::response::{
CollectionWithSubscriptionStatus, CollectionsResponse, FavoriteWithSubscriptionStatus, FavoritesResponse,
UpperWithSubscriptionStatus, UppersResponse,
};
use crate::api::wrapper::{ApiError, ApiResponse};
use crate::bilibili::{BiliClient, Me};
pub(super) fn router() -> Router {
Router::new()
.route("/me/favorites", get(get_created_favorites))
.route("/me/collections", get(get_followed_collections))
.route("/me/uppers", get(get_followed_uppers))
}
/// Get the favorite lists created by the current user
pub async fn get_created_favorites(
Extension(db): Extension<DatabaseConnection>,
Extension(bili_client): Extension<Arc<BiliClient>>,
) -> Result<ApiResponse<FavoritesResponse>, ApiError> {
let me = Me::new(bili_client.as_ref());
let bili_favorites = me.get_created_favorites().await?;
let favorites = if let Some(bili_favorites) = bili_favorites {
// The so-called "fid" used by Bilibili's favorites APIs is actually this `id`,
// i.e. the fid with the last two digits of the mid appended
let bili_fids: Vec<_> = bili_favorites.iter().map(|fav| fav.id).collect();
let subscribed_fids: Vec<i64> = favorite::Entity::find()
.select_only()
.column(favorite::Column::FId)
.filter(favorite::Column::FId.is_in(bili_fids))
.into_tuple()
.all(&db)
.await?;
let subscribed_set: HashSet<i64> = subscribed_fids.into_iter().collect();
bili_favorites
.into_iter()
.map(|fav| FavoriteWithSubscriptionStatus {
title: fav.title,
media_count: fav.media_count,
// the `id` returned by the API is the real fid
fid: fav.id,
mid: fav.mid,
subscribed: subscribed_set.contains(&fav.id),
})
.collect()
} else {
vec![]
};
Ok(ApiResponse::ok(FavoritesResponse { favorites }))
}
/// Get the collections followed by the current user
pub async fn get_followed_collections(
Extension(db): Extension<DatabaseConnection>,
Extension(bili_client): Extension<Arc<BiliClient>>,
Query(params): Query<FollowedCollectionsRequest>,
) -> Result<ApiResponse<CollectionsResponse>, ApiError> {
let me = Me::new(bili_client.as_ref());
let (page_num, page_size) = (params.page_num.unwrap_or(1), params.page_size.unwrap_or(50));
let bili_collections = me.get_followed_collections(page_num, page_size).await?;
let collections = if let Some(collection_list) = bili_collections.list {
let bili_sids: Vec<_> = collection_list.iter().map(|col| col.id).collect();
let subscribed_ids: Vec<i64> = collection::Entity::find()
.select_only()
.column(collection::Column::SId)
.filter(collection::Column::SId.is_in(bili_sids))
.into_tuple()
.all(&db)
.await?;
let subscribed_set: HashSet<i64> = subscribed_ids.into_iter().collect();
collection_list
.into_iter()
.map(|col| CollectionWithSubscriptionStatus {
title: col.title,
sid: col.id,
mid: col.mid,
invalid: col.state == 1,
subscribed: subscribed_set.contains(&col.id),
})
.collect()
} else {
vec![]
};
Ok(ApiResponse::ok(CollectionsResponse {
collections,
total: bili_collections.count,
}))
}
/// Get the uppers (creators) followed by the current user
pub async fn get_followed_uppers(
Extension(db): Extension<DatabaseConnection>,
Extension(bili_client): Extension<Arc<BiliClient>>,
Query(params): Query<FollowedUppersRequest>,
) -> Result<ApiResponse<UppersResponse>, ApiError> {
let me = Me::new(bili_client.as_ref());
let (page_num, page_size) = (params.page_num.unwrap_or(1), params.page_size.unwrap_or(20));
let bili_uppers = me.get_followed_uppers(page_num, page_size).await?;
let bili_uid: Vec<_> = bili_uppers.list.iter().map(|upper| upper.mid).collect();
let subscribed_ids: Vec<i64> = submission::Entity::find()
.select_only()
.column(submission::Column::UpperId)
.filter(submission::Column::UpperId.is_in(bili_uid))
.into_tuple()
.all(&db)
.await?;
let subscribed_set: HashSet<i64> = subscribed_ids.into_iter().collect();
let uppers = bili_uppers
.list
.into_iter()
.map(|upper| UpperWithSubscriptionStatus {
mid: upper.mid,
// The API exposes no dedicated field, but this heuristic is a simple way to tell
invalid: upper.uname == "账号已注销" && upper.face == "https://i0.hdslb.com/bfs/face/member/noface.jpg",
uname: upper.uname,
face: upper.face,
sign: upper.sign,
subscribed: subscribed_set.contains(&upper.mid),
})
.collect();
Ok(ApiResponse::ok(UppersResponse {
uppers,
total: bili_uppers.total,
}))
}
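All three handlers above share one pattern: fetch the remote list, query the locally subscribed ids in a single `is_in` query, collect them into a `HashSet`, and flag each remote item with an O(1) lookup. A self-contained sketch of that pattern (names are illustrative, not from the codebase):

```rust
use std::collections::HashSet;

/// Given ids fetched from the remote API and ids already stored locally,
/// flag each remote item with its local subscription status.
fn mark_subscribed(remote_ids: &[i64], local_ids: &[i64]) -> Vec<(i64, bool)> {
    // One pass to build the set, one pass to annotate: avoids an O(n*m) scan.
    let subscribed: HashSet<i64> = local_ids.iter().copied().collect();
    remote_ids
        .iter()
        .map(|id| (*id, subscribed.contains(id)))
        .collect()
}

fn main() {
    let marked = mark_subscribed(&[1, 2, 3], &[2]);
    assert_eq!(marked, vec![(1, false), (2, true), (3, false)]);
}
```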

@@ -0,0 +1,58 @@
use axum::extract::Request;
use axum::http::HeaderMap;
use axum::middleware::Next;
use axum::response::{IntoResponse, Response};
use axum::{Router, middleware};
use base64::Engine;
use base64::prelude::BASE64_URL_SAFE_NO_PAD;
use reqwest::StatusCode;
use crate::api::wrapper::ApiResponse;
use crate::config::VersionedConfig;
mod config;
mod dashboard;
mod me;
mod video_sources;
mod videos;
mod ws;
pub use ws::{LogHelper, MAX_HISTORY_LOGS};
pub fn router() -> Router {
Router::new().nest(
"/api",
config::router()
.merge(me::router())
.merge(video_sources::router())
.merge(videos::router())
.merge(dashboard::router())
.merge(ws::router())
.layer(middleware::from_fn(auth)),
)
}
/// Middleware: authenticate requests with the auth token
pub async fn auth(mut headers: HeaderMap, request: Request, next: Next) -> Result<Response, StatusCode> {
let config = VersionedConfig::get().load();
let token = config.auth_token.as_str();
if headers
.get("Authorization")
.and_then(|v| v.to_str().ok())
.is_some_and(|s| s == token)
{
return Ok(next.run(request).await);
}
if let Some(protocol) = headers.remove("Sec-WebSocket-Protocol")
&& protocol
.to_str()
.ok()
.and_then(|s| BASE64_URL_SAFE_NO_PAD.decode(s).ok())
.is_some_and(|s| s == token.as_bytes())
{
let mut resp = next.run(request).await;
resp.headers_mut().insert("Sec-WebSocket-Protocol", protocol);
return Ok(resp);
}
Ok(ApiResponse::<()>::unauthorized("auth token does not match").into_response())
}

@@ -0,0 +1,363 @@
use std::sync::Arc;
use anyhow::Result;
use axum::Router;
use axum::extract::{Extension, Path};
use axum::routing::{get, post, put};
use bili_sync_entity::rule::Rule;
use bili_sync_entity::*;
use bili_sync_migration::Expr;
use sea_orm::ActiveValue::Set;
use sea_orm::entity::prelude::*;
use sea_orm::{ColumnTrait, DatabaseConnection, EntityTrait, QuerySelect, TransactionTrait};
use crate::adapter::_ActiveModel;
use crate::api::error::InnerApiError;
use crate::api::request::{
InsertCollectionRequest, InsertFavoriteRequest, InsertSubmissionRequest, UpdateVideoSourceRequest,
};
use crate::api::response::{
UpdateVideoSourceResponse, VideoSource, VideoSourceDetail, VideoSourcesDetailsResponse, VideoSourcesResponse,
};
use crate::api::wrapper::{ApiError, ApiResponse, ValidatedJson};
use crate::bilibili::{BiliClient, Collection, CollectionItem, FavoriteList, Submission};
use crate::utils::rule::FieldEvaluatable;
pub(super) fn router() -> Router {
Router::new()
.route("/video-sources", get(get_video_sources))
.route("/video-sources/details", get(get_video_sources_details))
.route("/video-sources/{type}/{id}", put(update_video_source))
.route("/video-sources/{type}/{id}/evaluate", post(evaluate_video_source))
.route("/video-sources/favorites", post(insert_favorite))
.route("/video-sources/collections", post(insert_collection))
.route("/video-sources/submissions", post(insert_submission))
}
/// List all video sources
pub async fn get_video_sources(
Extension(db): Extension<DatabaseConnection>,
) -> Result<ApiResponse<VideoSourcesResponse>, ApiError> {
let (collection, favorite, submission, mut watch_later) = tokio::try_join!(
collection::Entity::find()
.select_only()
.columns([collection::Column::Id, collection::Column::Name])
.into_model::<VideoSource>()
.all(&db),
favorite::Entity::find()
.select_only()
.columns([favorite::Column::Id, favorite::Column::Name])
.into_model::<VideoSource>()
.all(&db),
submission::Entity::find()
.select_only()
.column(submission::Column::Id)
.column_as(submission::Column::UpperName, "name")
.into_model::<VideoSource>()
.all(&db),
watch_later::Entity::find()
.select_only()
.column(watch_later::Column::Id)
.column_as(Expr::value("稍后再看"), "name")
.into_model::<VideoSource>()
.all(&db)
)?;
// watch_later is a special video source; add a default entry if none exists
if watch_later.is_empty() {
watch_later.push(VideoSource {
id: 1,
name: "稍后再看".to_string(),
});
}
Ok(ApiResponse::ok(VideoSourcesResponse {
collection,
favorite,
submission,
watch_later,
}))
}
/// Get video source details
pub async fn get_video_sources_details(
Extension(db): Extension<DatabaseConnection>,
) -> Result<ApiResponse<VideoSourcesDetailsResponse>, ApiError> {
let (mut collections, mut favorites, mut submissions, mut watch_later) = tokio::try_join!(
collection::Entity::find()
.select_only()
.columns([
collection::Column::Id,
collection::Column::Name,
collection::Column::Path,
collection::Column::Rule,
collection::Column::Enabled
])
.into_model::<VideoSourceDetail>()
.all(&db),
favorite::Entity::find()
.select_only()
.columns([
favorite::Column::Id,
favorite::Column::Name,
favorite::Column::Path,
favorite::Column::Rule,
favorite::Column::Enabled
])
.into_model::<VideoSourceDetail>()
.all(&db),
submission::Entity::find()
.select_only()
.column_as(submission::Column::UpperName, "name")
.columns([
submission::Column::Id,
submission::Column::Path,
submission::Column::Enabled,
submission::Column::Rule
])
.into_model::<VideoSourceDetail>()
.all(&db),
watch_later::Entity::find()
.select_only()
.column_as(Expr::value("稍后再看"), "name")
.columns([
watch_later::Column::Id,
watch_later::Column::Path,
watch_later::Column::Enabled,
watch_later::Column::Rule
])
.into_model::<VideoSourceDetail>()
.all(&db)
)?;
if watch_later.is_empty() {
watch_later.push(VideoSourceDetail {
id: 1,
name: "稍后再看".to_string(),
path: String::new(),
rule: None,
rule_display: None,
enabled: false,
})
}
for sources in [&mut collections, &mut favorites, &mut submissions, &mut watch_later] {
sources.iter_mut().for_each(|item| {
if let Some(rule) = &item.rule {
item.rule_display = Some(rule.to_string());
}
});
}
Ok(ApiResponse::ok(VideoSourcesDetailsResponse {
collections,
favorites,
submissions,
watch_later,
}))
}
/// Update a video source
pub async fn update_video_source(
Path((source_type, id)): Path<(String, i32)>,
Extension(db): Extension<DatabaseConnection>,
ValidatedJson(request): ValidatedJson<UpdateVideoSourceRequest>,
) -> Result<ApiResponse<UpdateVideoSourceResponse>, ApiError> {
let rule_display = request.rule.as_ref().map(|rule| rule.to_string());
let active_model = match source_type.as_str() {
"collections" => collection::Entity::find_by_id(id).one(&db).await?.map(|model| {
let mut active_model: collection::ActiveModel = model.into();
active_model.path = Set(request.path);
active_model.enabled = Set(request.enabled);
active_model.rule = Set(request.rule);
_ActiveModel::Collection(active_model)
}),
"favorites" => favorite::Entity::find_by_id(id).one(&db).await?.map(|model| {
let mut active_model: favorite::ActiveModel = model.into();
active_model.path = Set(request.path);
active_model.enabled = Set(request.enabled);
active_model.rule = Set(request.rule);
_ActiveModel::Favorite(active_model)
}),
"submissions" => submission::Entity::find_by_id(id).one(&db).await?.map(|model| {
let mut active_model: submission::ActiveModel = model.into();
active_model.path = Set(request.path);
active_model.enabled = Set(request.enabled);
active_model.rule = Set(request.rule);
_ActiveModel::Submission(active_model)
}),
"watch_later" => match watch_later::Entity::find_by_id(id).one(&db).await? {
// watch_later needs special handling: if no record exists, the GET handler returns a fake record with id 1,
// so this request may be either an update or an insert; handle both cases
Some(model) => {
// A record exists: update the record matching the id
let mut active_model: watch_later::ActiveModel = model.into();
active_model.path = Set(request.path);
active_model.enabled = Set(request.enabled);
active_model.rule = Set(request.rule);
Some(_ActiveModel::WatchLater(active_model))
}
None => {
if id != 1 {
None
} else {
// No record exists and id is 1: insert a new watch-later record
Some(_ActiveModel::WatchLater(watch_later::ActiveModel {
path: Set(request.path),
enabled: Set(request.enabled),
rule: Set(request.rule),
..Default::default()
}))
}
}
},
_ => return Err(InnerApiError::BadRequest("Invalid video source type".to_string()).into()),
};
let Some(active_model) = active_model else {
return Err(InnerApiError::NotFound(id).into());
};
active_model.save(&db).await?;
Ok(ApiResponse::ok(UpdateVideoSourceResponse { rule_display }))
}
pub async fn evaluate_video_source(
Path((source_type, id)): Path<(String, i32)>,
Extension(db): Extension<DatabaseConnection>,
) -> Result<ApiResponse<bool>, ApiError> {
// Look up the source's rule and the video filter condition
let (rule, filter_condition) = match source_type.as_str() {
"collections" => (
collection::Entity::find_by_id(id)
.select_only()
.column(collection::Column::Rule)
.into_tuple::<Option<Rule>>()
.one(&db)
.await?
.and_then(|r| r),
video::Column::CollectionId.eq(id),
),
"favorites" => (
favorite::Entity::find_by_id(id)
.select_only()
.column(favorite::Column::Rule)
.into_tuple::<Option<Rule>>()
.one(&db)
.await?
.and_then(|r| r),
video::Column::FavoriteId.eq(id),
),
"submissions" => (
submission::Entity::find_by_id(id)
.select_only()
.column(submission::Column::Rule)
.into_tuple::<Option<Rule>>()
.one(&db)
.await?
.and_then(|r| r),
video::Column::SubmissionId.eq(id),
),
"watch_later" => (
watch_later::Entity::find_by_id(id)
.select_only()
.column(watch_later::Column::Rule)
.into_tuple::<Option<Rule>>()
.one(&db)
.await?
.and_then(|r| r),
video::Column::WatchLaterId.eq(id),
),
_ => return Err(InnerApiError::BadRequest("Invalid video source type".to_string()).into()),
};
let videos: Vec<(video::Model, Vec<page::Model>)> = video::Entity::find()
.filter(filter_condition)
.find_with_related(page::Entity)
.all(&db)
.await?;
let video_should_download_pairs = videos
.into_iter()
.map(|(video, pages)| (video.id, rule.evaluate_model(&video, &pages)))
.collect::<Vec<(i32, bool)>>();
let txn = db.begin().await?;
for chunk in video_should_download_pairs.chunks(500) {
let sql = format!(
"WITH tempdata(id, should_download) AS (VALUES {}) \
UPDATE video \
SET should_download = tempdata.should_download \
FROM tempdata \
WHERE video.id = tempdata.id",
chunk
.iter()
.map(|item| format!("({}, {})", item.0, item.1))
.collect::<Vec<_>>()
.join(", ")
);
txn.execute_unprepared(&sql).await?;
}
txn.commit().await?;
Ok(ApiResponse::ok(true))
}
/// Add a favorite list subscription
pub async fn insert_favorite(
Extension(db): Extension<DatabaseConnection>,
Extension(bili_client): Extension<Arc<BiliClient>>,
ValidatedJson(request): ValidatedJson<InsertFavoriteRequest>,
) -> Result<ApiResponse<bool>, ApiError> {
let favorite = FavoriteList::new(bili_client.as_ref(), request.fid.to_string());
let favorite_info = favorite.get_info().await?;
favorite::Entity::insert(favorite::ActiveModel {
f_id: Set(favorite_info.id),
name: Set(favorite_info.title.clone()),
path: Set(request.path),
enabled: Set(false),
..Default::default()
})
.exec(&db)
.await?;
Ok(ApiResponse::ok(true))
}
/// Add a collection/series subscription
pub async fn insert_collection(
Extension(db): Extension<DatabaseConnection>,
Extension(bili_client): Extension<Arc<BiliClient>>,
ValidatedJson(request): ValidatedJson<InsertCollectionRequest>,
) -> Result<ApiResponse<bool>, ApiError> {
let collection = Collection::new(
bili_client.as_ref(),
CollectionItem {
sid: request.sid.to_string(),
mid: request.mid.to_string(),
collection_type: request.collection_type,
},
);
let collection_info = collection.get_info().await?;
collection::Entity::insert(collection::ActiveModel {
s_id: Set(collection_info.sid),
m_id: Set(collection_info.mid),
r#type: Set(collection_info.collection_type.into()),
name: Set(collection_info.name.clone()),
path: Set(request.path),
enabled: Set(false),
..Default::default()
})
.exec(&db)
.await?;
Ok(ApiResponse::ok(true))
}
/// Add a subscription to an upper's submissions
pub async fn insert_submission(
Extension(db): Extension<DatabaseConnection>,
Extension(bili_client): Extension<Arc<BiliClient>>,
ValidatedJson(request): ValidatedJson<InsertSubmissionRequest>,
) -> Result<ApiResponse<bool>, ApiError> {
let submission = Submission::new(bili_client.as_ref(), request.upper_id.to_string());
let upper = submission.get_info().await?;
submission::Entity::insert(submission::ActiveModel {
upper_id: Set(upper.mid.parse()?),
upper_name: Set(upper.name),
path: Set(request.path),
enabled: Set(false),
..Default::default()
})
.exec(&db)
.await?;
Ok(ApiResponse::ok(true))
}
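`evaluate_video_source` above batches its writes: each chunk of `(id, should_download)` pairs becomes a single `WITH tempdata(...) AS (VALUES ...) UPDATE ... FROM tempdata` statement, keeping each statement bounded (the handler uses chunks of 500). A simplified sketch of that SQL construction; the helper name and chunk size here are illustrative:

```rust
/// Build one batched UPDATE statement per chunk of (id, should_download) pairs.
fn batch_update_sql(pairs: &[(i32, bool)], chunk_size: usize) -> Vec<String> {
    pairs
        .chunks(chunk_size)
        .map(|chunk| {
            // Render each pair as a VALUES row, e.g. "(1, true)".
            let values = chunk
                .iter()
                .map(|(id, flag)| format!("({}, {})", id, flag))
                .collect::<Vec<_>>()
                .join(", ");
            format!(
                "WITH tempdata(id, should_download) AS (VALUES {}) \
                 UPDATE video SET should_download = tempdata.should_download \
                 FROM tempdata WHERE video.id = tempdata.id",
                values
            )
        })
        .collect()
}

fn main() {
    let sqls = batch_update_sql(&[(1, true), (2, false), (3, true)], 2);
    assert_eq!(sqls.len(), 2);
    assert!(sqls[0].contains("(VALUES (1, true), (2, false))"));
}
```

Interpolating values directly is safe here only because the ids and flags come from the database itself, not from user input; the handler likewise uses `execute_unprepared`.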

@@ -0,0 +1,259 @@
use std::collections::HashSet;
use anyhow::Result;
use axum::extract::{Extension, Path, Query};
use axum::routing::{get, post};
use axum::{Json, Router};
use bili_sync_entity::*;
use sea_orm::{
ColumnTrait, DatabaseConnection, EntityTrait, PaginatorTrait, QueryFilter, QueryOrder, TransactionTrait,
};
use crate::api::error::InnerApiError;
use crate::api::helper::{update_page_download_status, update_video_download_status};
use crate::api::request::{ResetRequest, UpdateVideoStatusRequest, VideosRequest};
use crate::api::response::{
PageInfo, ResetAllVideosResponse, ResetVideoResponse, UpdateVideoStatusResponse, VideoInfo, VideoResponse,
VideosResponse,
};
use crate::api::wrapper::{ApiError, ApiResponse, ValidatedJson};
use crate::utils::status::{PageStatus, VideoStatus};
pub(super) fn router() -> Router {
Router::new()
.route("/videos", get(get_videos))
.route("/videos/{id}", get(get_video))
.route("/videos/{id}/reset", post(reset_video))
.route("/videos/reset-all", post(reset_all_videos))
.route("/videos/{id}/update-status", post(update_video_status))
}
/// List basic video info, with filtering by video source, name search and pagination
pub async fn get_videos(
Extension(db): Extension<DatabaseConnection>,
Query(params): Query<VideosRequest>,
) -> Result<ApiResponse<VideosResponse>, ApiError> {
let mut query = video::Entity::find();
for (field, column) in [
(params.collection, video::Column::CollectionId),
(params.favorite, video::Column::FavoriteId),
(params.submission, video::Column::SubmissionId),
(params.watch_later, video::Column::WatchLaterId),
] {
if let Some(id) = field {
query = query.filter(column.eq(id));
}
}
if let Some(query_word) = params.query {
query = query.filter(video::Column::Name.contains(query_word));
}
let total_count = query.clone().count(&db).await?;
let (page, page_size) = if let (Some(page), Some(page_size)) = (params.page, params.page_size) {
(page, page_size)
} else {
(0, 10)
};
Ok(ApiResponse::ok(VideosResponse {
videos: query
.order_by_desc(video::Column::Id)
.into_partial_model::<VideoInfo>()
.paginate(&db, page_size)
.fetch_page(page)
.await?,
total_count,
}))
}
pub async fn get_video(
Path(id): Path<i32>,
Extension(db): Extension<DatabaseConnection>,
) -> Result<ApiResponse<VideoResponse>, ApiError> {
let (video_info, pages_info) = tokio::try_join!(
video::Entity::find_by_id(id).into_partial_model::<VideoInfo>().one(&db),
page::Entity::find()
.filter(page::Column::VideoId.eq(id))
.order_by_asc(page::Column::Cid)
.into_partial_model::<PageInfo>()
.all(&db)
)?;
let Some(video_info) = video_info else {
return Err(InnerApiError::NotFound(id).into());
};
Ok(ApiResponse::ok(VideoResponse {
video: video_info,
pages: pages_info,
}))
}
pub async fn reset_video(
Path(id): Path<i32>,
Extension(db): Extension<DatabaseConnection>,
Json(request): Json<ResetRequest>,
) -> Result<ApiResponse<ResetVideoResponse>, ApiError> {
let (video_info, pages_info) = tokio::try_join!(
video::Entity::find_by_id(id).into_partial_model::<VideoInfo>().one(&db),
page::Entity::find()
.filter(page::Column::VideoId.eq(id))
.order_by_asc(page::Column::Cid)
.into_partial_model::<PageInfo>()
.all(&db)
)?;
let Some(mut video_info) = video_info else {
return Err(InnerApiError::NotFound(id).into());
};
let resetted_pages_info = pages_info
.into_iter()
.filter_map(|mut page_info| {
let mut page_status = PageStatus::from(page_info.download_status);
if (request.force && page_status.force_reset_failed()) || page_status.reset_failed() {
page_info.download_status = page_status.into();
Some(page_info)
} else {
None
}
})
.collect::<Vec<_>>();
let mut video_status = VideoStatus::from(video_info.download_status);
let mut video_resetted = (request.force && video_status.force_reset_failed()) || video_status.reset_failed();
if !resetted_pages_info.is_empty() {
video_status.set(4, 0); // reset the "page download" status to 0
video_resetted = true;
}
let resetted_videos_info = if video_resetted {
video_info.download_status = video_status.into();
vec![&video_info]
} else {
vec![]
};
let resetted = !resetted_videos_info.is_empty() || !resetted_pages_info.is_empty();
if resetted {
let txn = db.begin().await?;
if !resetted_videos_info.is_empty() {
// At most one element here, so no batching is needed
update_video_download_status(&txn, &resetted_videos_info, None).await?;
}
if !resetted_pages_info.is_empty() {
update_page_download_status(&txn, &resetted_pages_info, Some(500)).await?;
}
txn.commit().await?;
}
Ok(ApiResponse::ok(ResetVideoResponse {
resetted,
video: video_info,
pages: resetted_pages_info,
}))
}
pub async fn reset_all_videos(
Extension(db): Extension<DatabaseConnection>,
Json(request): Json<ResetRequest>,
) -> Result<ApiResponse<ResetAllVideosResponse>, ApiError> {
// Fetch all video and page data first
let (all_videos, all_pages) = tokio::try_join!(
video::Entity::find().into_partial_model::<VideoInfo>().all(&db),
page::Entity::find().into_partial_model::<PageInfo>().all(&db)
)?;
let resetted_pages_info = all_pages
.into_iter()
.filter_map(|mut page_info| {
let mut page_status = PageStatus::from(page_info.download_status);
if (request.force && page_status.force_reset_failed()) || page_status.reset_failed() {
page_info.download_status = page_status.into();
Some(page_info)
} else {
None
}
})
.collect::<Vec<_>>();
let video_ids_with_resetted_pages: HashSet<i32> = resetted_pages_info.iter().map(|page| page.video_id).collect();
let resetted_videos_info = all_videos
.into_iter()
.filter_map(|mut video_info| {
let mut video_status = VideoStatus::from(video_info.download_status);
let mut video_resetted =
(request.force && video_status.force_reset_failed()) || video_status.reset_failed();
if video_ids_with_resetted_pages.contains(&video_info.id) {
video_status.set(4, 0); // reset the "page download" status to 0
video_resetted = true;
}
if video_resetted {
video_info.download_status = video_status.into();
Some(video_info)
} else {
None
}
})
.collect::<Vec<_>>();
let has_video_updates = !resetted_videos_info.is_empty();
let has_page_updates = !resetted_pages_info.is_empty();
if has_video_updates || has_page_updates {
let txn = db.begin().await?;
if has_video_updates {
update_video_download_status(&txn, &resetted_videos_info, Some(500)).await?;
}
if has_page_updates {
update_page_download_status(&txn, &resetted_pages_info, Some(500)).await?;
}
txn.commit().await?;
}
Ok(ApiResponse::ok(ResetAllVideosResponse {
resetted: has_video_updates || has_page_updates,
resetted_videos_count: resetted_videos_info.len(),
resetted_pages_count: resetted_pages_info.len(),
}))
}
pub async fn update_video_status(
Path(id): Path<i32>,
Extension(db): Extension<DatabaseConnection>,
ValidatedJson(request): ValidatedJson<UpdateVideoStatusRequest>,
) -> Result<ApiResponse<UpdateVideoStatusResponse>, ApiError> {
let (video_info, mut pages_info) = tokio::try_join!(
video::Entity::find_by_id(id).into_partial_model::<VideoInfo>().one(&db),
page::Entity::find()
.filter(page::Column::VideoId.eq(id))
.order_by_asc(page::Column::Cid)
.into_partial_model::<PageInfo>()
.all(&db)
)?;
let Some(mut video_info) = video_info else {
return Err(InnerApiError::NotFound(id).into());
};
let mut video_status = VideoStatus::from(video_info.download_status);
for update in &request.video_updates {
video_status.set(update.status_index, update.status_value);
}
video_info.download_status = video_status.into();
let mut updated_pages_info = Vec::new();
let mut page_id_map = pages_info
.iter_mut()
.map(|page| (page.id, page))
.collect::<std::collections::HashMap<_, _>>();
for page_update in &request.page_updates {
if let Some(page_info) = page_id_map.remove(&page_update.page_id) {
let mut page_status = PageStatus::from(page_info.download_status);
for update in &page_update.updates {
page_status.set(update.status_index, update.status_value);
}
page_info.download_status = page_status.into();
updated_pages_info.push(page_info);
}
}
let has_video_updates = !request.video_updates.is_empty();
let has_page_updates = !updated_pages_info.is_empty();
if has_video_updates || has_page_updates {
let txn = db.begin().await?;
if has_video_updates {
update_video_download_status(&txn, &[&video_info], None).await?;
}
if has_page_updates {
update_page_download_status(&txn, &updated_pages_info, None).await?;
}
txn.commit().await?;
}
Ok(ApiResponse::ok(UpdateVideoStatusResponse {
success: has_video_updates || has_page_updates,
video: video_info,
pages: pages_info,
}))
}
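`reset_video` and `update_video_status` above treat the download status as a packed word of sub-task states addressed via `set(index, value)`. A minimal sketch of such a packed status, assuming (hypothetically) one byte per sub-task in a `u64`; the real `VideoStatus`/`PageStatus` layout may differ:

```rust
/// Hypothetical packed download status: sub-task `index` occupies one byte.
struct Status(u64);

impl Status {
    /// Read the state of sub-task `index`.
    fn get(&self, index: u32) -> u8 {
        ((self.0 >> (index * 8)) & 0xFF) as u8
    }

    /// Overwrite the state of sub-task `index`, leaving the others untouched.
    fn set(&mut self, index: u32, value: u8) {
        let shift = index * 8;
        self.0 = (self.0 & !(0xFFu64 << shift)) | ((value as u64) << shift);
    }
}

fn main() {
    let mut status = Status(0);
    status.set(4, 7);
    assert_eq!(status.get(4), 7);
    status.set(4, 0); // reset the "page download" sub-status, as in reset_video
    assert_eq!(status.get(4), 0);
}
```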

@@ -0,0 +1,54 @@
use std::collections::VecDeque;
use std::sync::Arc;
use parking_lot::Mutex;
use tokio::sync::broadcast;
use tracing_subscriber::fmt::MakeWriter;
pub const MAX_HISTORY_LOGS: usize = 30;
/// LogHelper holds the log broadcast sender and a buffer of recent log history
pub struct LogHelper {
pub sender: broadcast::Sender<String>,
pub log_history: Arc<Mutex<VecDeque<String>>>,
}
impl LogHelper {
pub fn new(sender: broadcast::Sender<String>, log_history: Arc<Mutex<VecDeque<String>>>) -> Self {
LogHelper { sender, log_history }
}
}
impl<'a> MakeWriter<'a> for LogHelper {
type Writer = Self;
fn make_writer(&'a self) -> Self::Writer {
self.clone()
}
}
impl std::io::Write for LogHelper {
fn write(&mut self, buf: &[u8]) -> std::io::Result<usize> {
let log_message = String::from_utf8_lossy(buf).to_string();
let _ = self.sender.send(log_message.clone());
let mut history = self.log_history.lock();
history.push_back(log_message);
if history.len() > MAX_HISTORY_LOGS {
history.pop_front();
}
Ok(buf.len())
}
fn flush(&mut self) -> std::io::Result<()> {
Ok(())
}
}
impl Clone for LogHelper {
fn clone(&self) -> Self {
LogHelper {
sender: self.sender.clone(),
log_history: self.log_history.clone(),
}
}
}

@@ -0,0 +1,263 @@
mod log_helper;
use std::sync::{Arc, LazyLock};
use std::time::Duration;
use axum::extract::WebSocketUpgrade;
use axum::extract::ws::{Message, WebSocket};
use axum::response::IntoResponse;
use axum::routing::any;
use axum::{Extension, Router};
use dashmap::DashMap;
use futures::stream::{SplitSink, SplitStream};
use futures::{SinkExt, StreamExt, future};
pub use log_helper::{LogHelper, MAX_HISTORY_LOGS};
use parking_lot::RwLock;
use serde::{Deserialize, Serialize};
use sysinfo::{
CpuRefreshKind, DiskRefreshKind, Disks, MemoryRefreshKind, ProcessRefreshKind, RefreshKind, System, get_current_pid,
};
use tokio::pin;
use tokio::task::JoinHandle;
use tokio_stream::wrappers::{BroadcastStream, IntervalStream, WatchStream};
use uuid::Uuid;
use crate::api::response::SysInfo;
use crate::utils::task_notifier::{TASK_STATUS_NOTIFIER, TaskStatus};
static WEBSOCKET_HANDLER: LazyLock<WebSocketHandler> = LazyLock::new(WebSocketHandler::new);
pub(super) fn router() -> Router {
Router::new().route("/ws", any(websocket_handler))
}
async fn websocket_handler(ws: WebSocketUpgrade, Extension(log_writer): Extension<LogHelper>) -> impl IntoResponse {
ws.on_upgrade(|socket| handle_socket(socket, log_writer))
}
// Event type enum
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
enum EventType {
Logs,
Tasks,
SysInfo,
}
#[derive(Deserialize)]
#[serde(rename_all = "camelCase")]
enum ClientEvent {
Subscribe(EventType),
Unsubscribe(EventType),
}
#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
enum ServerEvent {
Logs(String),
Tasks(Arc<TaskStatus>),
SysInfo(Arc<SysInfo>),
}
struct WebSocketHandler {
sysinfo_subscribers: Arc<DashMap<Uuid, tokio::sync::mpsc::Sender<ServerEvent>>>,
sysinfo_handles: RwLock<Option<JoinHandle<()>>>,
}
impl WebSocketHandler {
fn new() -> Self {
Self {
sysinfo_subscribers: Arc::new(DashMap::new()),
sysinfo_handles: RwLock::new(None),
}
}
async fn handle_sender(
&self,
mut sender: SplitSink<WebSocket, Message>,
mut rx: tokio::sync::mpsc::Receiver<ServerEvent>,
) {
while let Some(event) = rx.recv().await {
match serde_json::to_string(&event) {
Ok(text) => {
if let Err(e) = sender.send(Message::Text(text.into())).await {
error!("Failed to send message: {:?}", e);
break;
}
}
Err(e) => {
error!("Failed to serialize event: {:?}", e);
}
}
}
}
async fn handle_receiver(
&self,
mut receiver: SplitStream<WebSocket>,
tx: tokio::sync::mpsc::Sender<ServerEvent>,
uuid: Uuid,
log_writer: LogHelper,
) {
// Log and task-status handling is driven by streams, so each ws connection can keep its own handler tasks.
// System info, however, is polled server-side and then pushed; per-connection polling would make every
// connection poll independently and waste work, so a global subscriber registry is used instead and all
// connections share a single system-info polling task.
let (mut log_handle, mut task_handle) = (None, None);
while let Some(Ok(msg)) = receiver.next().await {
if let Message::Text(text) = msg {
match serde_json::from_str::<ClientEvent>(&text) {
Ok(ClientEvent::Subscribe(event_type)) => match event_type {
EventType::Logs => {
if log_handle.as_ref().is_none_or(|h: &JoinHandle<()>| h.is_finished()) {
let log_writer_clone = log_writer.clone();
let tx_clone = tx.clone();
let history = log_writer_clone.log_history.lock();
let history_logs: Vec<String> = history.iter().cloned().collect();
drop(history);
log_handle = Some(tokio::spawn(async move {
let rx = log_writer_clone.sender.subscribe();
let log_stream = futures::stream::iter(history_logs.into_iter())
.chain(BroadcastStream::new(rx).filter_map(async |msg| msg.ok()))
.map(ServerEvent::Logs);
pin!(log_stream);
while let Some(event) = log_stream.next().await {
if let Err(e) = tx_clone.send(event).await {
error!("Failed to send log event: {:?}", e);
break;
}
}
}));
}
}
EventType::Tasks => {
if task_handle.as_ref().is_none_or(|h: &JoinHandle<()>| h.is_finished()) {
let tx_clone = tx.clone();
task_handle = Some(tokio::spawn(async move {
let mut stream =
WatchStream::new(TASK_STATUS_NOTIFIER.subscribe()).map(ServerEvent::Tasks);
while let Some(event) = stream.next().await {
if let Err(e) = tx_clone.send(event).await {
error!("Failed to send task status: {:?}", e);
break;
}
}
}));
}
}
EventType::SysInfo => self.add_sysinfo_subscriber(uuid, tx.clone()).await,
},
Ok(ClientEvent::Unsubscribe(event_type)) => match event_type {
EventType::Logs => {
if let Some(handle) = log_handle.take() {
handle.abort();
}
}
EventType::Tasks => {
if let Some(handle) = task_handle.take() {
handle.abort();
}
}
EventType::SysInfo => {
self.remove_sysinfo_subscriber(uuid).await;
}
},
Err(e) => {
error!("Failed to parse client message: {:?}", e);
}
}
}
}
if let Some(handle) = log_handle {
handle.abort();
}
if let Some(handle) = task_handle {
handle.abort();
}
self.remove_sysinfo_subscriber(uuid).await;
}
// Add a subscriber
async fn add_sysinfo_subscriber(&self, uuid: Uuid, sender: tokio::sync::mpsc::Sender<ServerEvent>) {
self.sysinfo_subscribers.insert(uuid, sender);
if !self.sysinfo_subscribers.is_empty()
&& self
.sysinfo_handles
.read()
.as_ref()
.is_none_or(|h: &JoinHandle<()>| h.is_finished())
{
let sysinfo_subscribers = self.sysinfo_subscribers.clone();
let mut write_guard = self.sysinfo_handles.write();
if write_guard.as_ref().is_some_and(|h: &JoinHandle<()>| !h.is_finished()) {
return;
}
*write_guard = Some(tokio::spawn(async move {
let mut system = System::new();
let mut disks = Disks::new();
let sys_refresh_kind = sys_refresh_kind();
let disk_refresh_kind = disk_refresh_kind();
// On linux/mac/windows this call always returns the pid, so the expect is essentially safe
let self_pid = get_current_pid().expect("Unsupported platform");
let mut stream =
IntervalStream::new(tokio::time::interval(Duration::from_secs(2))).filter_map(move |_| {
system.refresh_specifics(sys_refresh_kind);
disks.refresh_specifics(true, disk_refresh_kind);
let process = match system.process(self_pid) {
Some(p) => p,
None => return futures::future::ready(None),
};
futures::future::ready(Some(SysInfo {
total_memory: system.total_memory(),
used_memory: system.used_memory(),
process_memory: process.memory(),
used_cpu: system.global_cpu_usage(),
process_cpu: process.cpu_usage() / system.cpus().len() as f32,
total_disk: disks.iter().map(|d| d.total_space()).sum(),
available_disk: disks.iter().map(|d| d.available_space()).sum(),
}))
});
while let Some(sys_info) = stream.next().await {
let sys_info = Arc::new(sys_info);
future::join_all(sysinfo_subscribers.iter().map(async |subscriber| {
if let Err(e) = subscriber.send(ServerEvent::SysInfo(sys_info.clone())).await {
error!(
"Failed to send sysinfo event to subscriber {}: {:?}",
subscriber.key(),
e
);
}
}))
.await;
}
}));
}
}
async fn remove_sysinfo_subscriber(&self, uuid: Uuid) {
self.sysinfo_subscribers.remove(&uuid);
if self.sysinfo_subscribers.is_empty()
&& let Some(handle) = self.sysinfo_handles.write().take()
{
handle.abort();
}
}
}
async fn handle_socket(socket: WebSocket, log_writer: LogHelper) {
let (ws_sender, ws_receiver) = socket.split();
let uuid = Uuid::new_v4();
let (tx, rx) = tokio::sync::mpsc::channel(100);
tokio::spawn(WEBSOCKET_HANDLER.handle_sender(ws_sender, rx));
tokio::spawn(WEBSOCKET_HANDLER.handle_receiver(ws_receiver, tx, uuid, log_writer));
}
fn sys_refresh_kind() -> RefreshKind {
RefreshKind::nothing()
.with_cpu(CpuRefreshKind::nothing().with_cpu_usage())
.with_memory(MemoryRefreshKind::nothing().with_ram())
.with_processes(ProcessRefreshKind::nothing().with_cpu().with_memory())
}
fn disk_refresh_kind() -> DiskRefreshKind {
DiskRefreshKind::nothing().with_storage()
}


@@ -0,0 +1,119 @@
use std::borrow::Cow;
use anyhow::Error;
use axum::Json;
use axum::extract::rejection::JsonRejection;
use axum::extract::{FromRequest, Request};
use axum::response::IntoResponse;
use reqwest::StatusCode;
use serde::Serialize;
use serde::de::DeserializeOwned;
use validator::Validate;
use crate::api::error::InnerApiError;
#[derive(Serialize)]
pub struct ApiResponse<T: Serialize> {
status_code: u16,
#[serde(skip_serializing_if = "Option::is_none")]
data: Option<T>,
#[serde(skip_serializing_if = "Option::is_none")]
message: Option<Cow<'static, str>>,
}
impl<T: Serialize> ApiResponse<T> {
pub fn ok(data: T) -> Self {
Self {
status_code: 200,
data: Some(data),
message: None,
}
}
pub fn bad_request(message: impl Into<Cow<'static, str>>) -> Self {
Self {
status_code: 400,
data: None,
message: Some(message.into()),
}
}
pub fn unauthorized(message: impl Into<Cow<'static, str>>) -> Self {
Self {
status_code: 401,
data: None,
message: Some(message.into()),
}
}
pub fn not_found(message: impl Into<Cow<'static, str>>) -> Self {
Self {
status_code: 404,
data: None,
message: Some(message.into()),
}
}
pub fn internal_server_error(message: impl Into<Cow<'static, str>>) -> Self {
Self {
status_code: 500,
data: None,
message: Some(message.into()),
}
}
}
impl<T: Serialize> IntoResponse for ApiResponse<T> {
fn into_response(self) -> axum::response::Response {
(
StatusCode::from_u16(self.status_code).expect("invalid HTTP status code"),
Json(self),
)
.into_response()
}
}
pub struct ApiError(Error);
impl<E> From<E> for ApiError
where
E: Into<anyhow::Error>,
{
fn from(value: E) -> Self {
Self(value.into())
}
}
impl IntoResponse for ApiError {
fn into_response(self) -> axum::response::Response {
if let Some(inner_error) = self.0.downcast_ref::<InnerApiError>() {
match inner_error {
InnerApiError::NotFound(_) => return ApiResponse::<()>::not_found(self.0.to_string()).into_response(),
InnerApiError::BadRequest(_) => {
return ApiResponse::<()>::bad_request(self.0.to_string()).into_response();
}
}
}
ApiResponse::<()>::internal_server_error(self.0.to_string()).into_response()
}
}
#[derive(Debug, Clone, Copy, Default)]
pub struct ValidatedJson<T>(pub T);
impl<T, S> FromRequest<S> for ValidatedJson<T>
where
T: DeserializeOwned + Validate,
S: Send + Sync,
Json<T>: FromRequest<S, Rejection = JsonRejection>,
{
type Rejection = ApiError;
async fn from_request(req: Request, state: &S) -> Result<Self, Self::Rejection> {
let Json(value) = Json::<T>::from_request(req, state).await?;
value
.validate()
.map_err(|e| ApiError::from(InnerApiError::BadRequest(e.to_string())))?;
Ok(ValidatedJson(value))
}
}


@@ -0,0 +1,481 @@
use anyhow::{Context, Result, bail};
use serde::{Deserialize, Serialize};
use crate::bilibili::error::BiliError;
use crate::config::VersionedConfig;
pub struct PageAnalyzer {
info: serde_json::Value,
}
#[derive(Debug, strum::FromRepr, PartialEq, Eq, PartialOrd, Ord, Serialize, Deserialize, Clone)]
pub enum VideoQuality {
Quality360p = 16,
Quality480p = 32,
Quality720p = 64,
Quality1080p = 80,
Quality1080pPLUS = 112,
Quality1080p60 = 116,
Quality4k = 120,
QualityHdr = 125,
QualityDolby = 126,
Quality8k = 127,
}
#[derive(Debug, Clone, Copy, strum::FromRepr, PartialEq, Eq, Serialize, Deserialize)]
pub enum AudioQuality {
Quality64k = 30216,
Quality132k = 30232,
QualityDolby = 30250,
QualityHiRES = 30251,
Quality192k = 30280,
}
impl Ord for AudioQuality {
fn cmp(&self, other: &Self) -> std::cmp::Ordering {
self.as_sort_key().cmp(&other.as_sort_key())
}
}
impl PartialOrd for AudioQuality {
fn partial_cmp(&self, other: &AudioQuality) -> Option<std::cmp::Ordering> {
Some(self.cmp(other))
}
}
impl AudioQuality {
pub fn as_sort_key(&self) -> isize {
match self {
// This sorts Dolby and Hi-RES after 192k while keeping their relative order unchanged
Self::QualityHiRES | Self::QualityDolby => (*self as isize) + 40,
_ => *self as isize,
}
}
}
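With the +40 offset above, Dolby and Hi-RES get effective keys of 30290 and 30291, both past 192k's 30280. A minimal standalone sketch of that ordering (the helper `sort_key` is illustrative, not part of the codebase):

```rust
// Mirror of AudioQuality::as_sort_key: Dolby/Hi-RES get a +40 offset so they
// sort after 192k while keeping their relative order.
fn sort_key(repr: isize, is_dolby_or_hires: bool) -> isize {
    if is_dolby_or_hires { repr + 40 } else { repr }
}

fn main() {
    let k_192k = sort_key(30280, false); // Quality192k
    let k_dolby = sort_key(30250, true); // QualityDolby -> 30290
    let k_hires = sort_key(30251, true); // QualityHiRES -> 30291
    assert!(k_192k < k_dolby && k_dolby < k_hires);
}
```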
#[allow(clippy::upper_case_acronyms)]
#[derive(
Debug, strum::EnumString, strum::Display, strum::AsRefStr, PartialEq, PartialOrd, Serialize, Deserialize, Clone,
)]
pub enum VideoCodecs {
#[strum(serialize = "hev")]
HEV,
#[strum(serialize = "avc")]
AVC,
#[strum(serialize = "av01")]
AV1,
}
impl TryFrom<u64> for VideoCodecs {
type Error = anyhow::Error;
fn try_from(value: u64) -> std::result::Result<Self, Self::Error> {
// https://socialsisteryi.github.io/bilibili-API-collect/docs/video/videostream_url.html#%E8%A7%86%E9%A2%91%E7%BC%96%E7%A0%81%E4%BB%A3%E7%A0%81
match value {
7 => Ok(Self::AVC),
12 => Ok(Self::HEV),
13 => Ok(Self::AV1),
_ => bail!("invalid video codecs id: {}", value),
}
}
}
// Filtering preferences for video/audio streams
#[derive(Serialize, Deserialize, Clone)]
pub struct FilterOption {
pub video_max_quality: VideoQuality,
pub video_min_quality: VideoQuality,
pub audio_max_quality: AudioQuality,
pub audio_min_quality: AudioQuality,
pub codecs: Vec<VideoCodecs>,
pub no_dolby_video: bool,
pub no_dolby_audio: bool,
pub no_hdr: bool,
pub no_hires: bool,
}
impl Default for FilterOption {
fn default() -> Self {
Self {
video_max_quality: VideoQuality::Quality8k,
video_min_quality: VideoQuality::Quality360p,
audio_max_quality: AudioQuality::QualityHiRES,
audio_min_quality: AudioQuality::Quality64k,
codecs: vec![VideoCodecs::AV1, VideoCodecs::HEV, VideoCodecs::AVC],
no_dolby_video: false,
no_dolby_audio: false,
no_hdr: false,
no_hires: false,
}
}
}
// The five stream types from the upstream project; in practice only Flv, DashVideo and DashAudio appear to be used
#[derive(Debug, PartialEq, PartialOrd)]
pub enum Stream {
Flv(String),
Html5Mp4(String),
EpisodeTryMp4(String),
DashVideo {
url: String,
backup_url: Vec<String>,
quality: VideoQuality,
codecs: VideoCodecs,
},
DashAudio {
url: String,
backup_url: Vec<String>,
quality: AudioQuality,
},
}
// Generic accessor for stream URLs, used by the Downloader
impl Stream {
pub fn urls(&self) -> Vec<&str> {
match self {
Self::Flv(url) | Self::Html5Mp4(url) | Self::EpisodeTryMp4(url) => vec![url],
Self::DashVideo { url, backup_url, .. } | Self::DashAudio { url, backup_url, .. } => {
let mut urls = std::iter::once(url.as_str())
.chain(backup_url.iter().map(|s| s.as_str()))
.collect::<Vec<_>>();
if VersionedConfig::get().load().cdn_sorting {
urls.sort_by_key(|u| {
if u.contains("upos-") {
0 // vendor CDN
} else if u.contains("cn-") {
1 // self-hosted CDN
} else if u.contains("mcdn") {
2 // mcdn
} else {
3 // pcdn or others
}
});
}
urls
}
}
}
}
/// Best-stream selection result; there are two possibilities:
/// 1. A single mixed stream, returned as Mixed
/// 2. Separate video and audio streams, returned as VideoAudio, where the audio stream may be absent (for silent videos such as BV1J7411H7KQ)
#[derive(Debug)]
pub enum BestStream {
VideoAudio { video: Stream, audio: Option<Stream> },
Mixed(Stream),
}
impl PageAnalyzer {
pub fn new(info: serde_json::Value) -> Self {
Self { info }
}
fn is_flv_stream(&self) -> bool {
self.info.get("durl").is_some() && self.info["format"].as_str().is_some_and(|f| f.starts_with("flv"))
}
fn is_html5_mp4_stream(&self) -> bool {
self.info.get("durl").is_some()
&& self.info["format"].as_str().is_some_and(|f| f.starts_with("mp4"))
&& self.info["is_html5"].as_bool().is_some_and(|b| b)
}
fn is_episode_try_mp4_stream(&self) -> bool {
self.info.get("durl").is_some()
&& self.info["format"].as_str().is_some_and(|f| f.starts_with("mp4"))
&& self.info["is_html5"].as_bool().is_none_or(|b| !b)
}
/// Collect all video and audio streams, filtered by the given options
fn streams(&mut self, filter_option: &FilterOption) -> Result<Vec<Stream>> {
if self.is_flv_stream() {
return Ok(vec![Stream::Flv(
self.info["durl"][0]["url"]
.as_str()
.context("invalid flv stream")?
.to_string(),
)]);
}
if self.is_html5_mp4_stream() {
return Ok(vec![Stream::Html5Mp4(
self.info["durl"][0]["url"]
.as_str()
.context("invalid html5 mp4 stream")?
.to_string(),
)]);
}
if self.is_episode_try_mp4_stream() {
return Ok(vec![Stream::EpisodeTryMp4(
self.info["durl"][0]["url"]
.as_str()
.context("invalid episode try mp4 stream")?
.to_string(),
)]);
}
let mut streams: Vec<Stream> = Vec::new();
for video in self
.info
.pointer_mut("/dash/video")
.and_then(|v| v.as_array_mut())
.ok_or(BiliError::RiskControlOccurred)?
.iter_mut()
{
let (Some(url), Some(quality), Some(codecs_id)) = (
video["baseUrl"].as_str(),
video["id"].as_u64(),
video["codecid"].as_u64(),
) else {
continue;
};
let quality = VideoQuality::from_repr(quality as usize).context("invalid video stream quality")?;
let Ok(codecs) = codecs_id.try_into() else {
continue;
};
if !filter_option.codecs.contains(&codecs)
|| quality < filter_option.video_min_quality
|| quality > filter_option.video_max_quality
|| (quality == VideoQuality::QualityHdr && filter_option.no_hdr)
|| (quality == VideoQuality::QualityDolby && filter_option.no_dolby_video)
{
continue;
}
streams.push(Stream::DashVideo {
url: url.to_string(),
backup_url: serde_json::from_value(video["backupUrl"].take()).unwrap_or_default(),
quality,
codecs,
});
}
if let Some(audios) = self.info.pointer_mut("/dash/audio").and_then(|a| a.as_array_mut()) {
for audio in audios.iter_mut() {
let (Some(url), Some(quality)) = (audio["baseUrl"].as_str(), audio["id"].as_u64()) else {
continue;
};
let quality = AudioQuality::from_repr(quality as usize).context("invalid audio stream quality")?;
if quality < filter_option.audio_min_quality || quality > filter_option.audio_max_quality {
continue;
}
streams.push(Stream::DashAudio {
url: url.to_string(),
backup_url: serde_json::from_value(audio["backupUrl"].take()).unwrap_or_default(),
quality,
});
}
}
if !filter_option.no_hires
&& let Some(flac) = self.info.pointer_mut("/dash/flac/audio")
{
let (Some(url), Some(quality)) = (flac["baseUrl"].as_str(), flac["id"].as_u64()) else {
bail!("invalid flac stream");
};
let quality = AudioQuality::from_repr(quality as usize).context("invalid flac stream quality")?;
if quality >= filter_option.audio_min_quality && quality <= filter_option.audio_max_quality {
streams.push(Stream::DashAudio {
url: url.to_string(),
backup_url: serde_json::from_value(flac["backupUrl"].take()).unwrap_or_default(),
quality,
});
}
}
if !filter_option.no_dolby_audio
&& let Some(dolby_audio) = self
.info
.pointer_mut("/dash/dolby/audio/0")
.and_then(|a| a.as_object_mut())
{
let (Some(url), Some(quality)) = (dolby_audio["baseUrl"].as_str(), dolby_audio["id"].as_u64()) else {
bail!("invalid dolby audio stream");
};
let quality = AudioQuality::from_repr(quality as usize).context("invalid dolby audio stream quality")?;
if quality >= filter_option.audio_min_quality && quality <= filter_option.audio_max_quality {
streams.push(Stream::DashAudio {
url: url.to_string(),
backup_url: serde_json::from_value(dolby_audio["backupUrl"].take()).unwrap_or_default(),
quality,
});
}
}
Ok(streams)
}
pub fn best_stream(&mut self, filter_option: &FilterOption) -> Result<BestStream> {
let streams = self.streams(filter_option)?;
if self.is_flv_stream() || self.is_html5_mp4_stream() || self.is_episode_try_mp4_stream() {
// Per the assumption in streams, each of these three cases yields exactly one stream, so take it directly
return Ok(BestStream::Mixed(
streams.into_iter().next().context("no stream found")?,
));
}
let (videos, audios): (Vec<Stream>, Vec<Stream>) =
streams.into_iter().partition(|s| matches!(s, Stream::DashVideo { .. }));
Ok(BestStream::VideoAudio {
video: videos
.into_iter()
.max_by(|a, b| match (a, b) {
(
Stream::DashVideo {
quality: a_quality,
codecs: a_codecs,
..
},
Stream::DashVideo {
quality: b_quality,
codecs: b_codecs,
..
},
) => {
if a_quality != b_quality {
return a_quality.cmp(b_quality);
};
filter_option
.codecs
.iter()
.position(|c| c == b_codecs)
.cmp(&filter_option.codecs.iter().position(|c| c == a_codecs))
}
_ => unreachable!(),
})
.context("no video stream found")?,
audio: audios.into_iter().max_by(|a, b| match (a, b) {
(Stream::DashAudio { quality: a_quality, .. }, Stream::DashAudio { quality: b_quality, .. }) => {
a_quality.cmp(b_quality)
}
_ => unreachable!(),
}),
})
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::bilibili::{BiliClient, Video};
use crate::config::VersionedConfig;
#[test]
fn test_quality_order() {
assert!(
[
VideoQuality::Quality360p,
VideoQuality::Quality480p,
VideoQuality::Quality720p,
VideoQuality::Quality1080p,
VideoQuality::Quality1080pPLUS,
VideoQuality::Quality1080p60,
VideoQuality::Quality4k,
VideoQuality::QualityHdr,
VideoQuality::QualityDolby,
VideoQuality::Quality8k
]
.is_sorted()
);
assert!(
[
AudioQuality::Quality64k,
AudioQuality::Quality132k,
AudioQuality::Quality192k,
AudioQuality::QualityDolby,
AudioQuality::QualityHiRES,
]
.is_sorted()
);
}
#[ignore = "only for manual test"]
#[tokio::test]
async fn test_best_stream() {
let testcases = [
// An arbitrary 8K + Hi-RES video
(
"BV1xRChYUE2R",
VideoQuality::Quality8k,
VideoCodecs::HEV,
Some(AudioQuality::QualityHiRES),
),
// A silent video with no audio track
("BV1J7411H7KQ", VideoQuality::Quality720p, VideoCodecs::HEV, None),
// A Dolby Atmos demo clip
(
"BV1Mm4y1P7JV",
VideoQuality::QualityDolby,
VideoCodecs::HEV,
Some(AudioQuality::QualityDolby),
),
// A Dolby Vision video from 影视飓风
(
"BV1HEf2YWEvs",
VideoQuality::QualityDolby,
VideoCodecs::HEV,
Some(AudioQuality::QualityDolby),
),
// A Dolby Vision + Hi-RES + Dolby Atmos video from 孤独摇滚
(
"BV1YDVYzeE39",
VideoQuality::QualityDolby,
VideoCodecs::HEV,
Some(AudioQuality::QualityHiRES),
),
// An HDR video from 京紫
(
"BV1cZ4y1b7iB",
VideoQuality::QualityHdr,
VideoCodecs::HEV,
Some(AudioQuality::Quality192k),
),
];
for (bvid, video_quality, video_codec, audio_quality) in testcases.into_iter() {
let client = BiliClient::new();
let video = Video::new(&client, bvid.to_owned());
let pages = video.get_pages().await.expect("failed to get pages");
let first_page = pages.into_iter().next().expect("no page found");
let best_stream = video
.get_page_analyzer(&first_page)
.await
.expect("failed to get page analyzer")
.best_stream(&VersionedConfig::get().load().filter_option)
.expect("failed to get best stream");
dbg!(bvid, &best_stream);
match best_stream {
BestStream::VideoAudio {
video: Stream::DashVideo { quality, codecs, .. },
audio,
} => {
assert_eq!(quality, video_quality);
assert_eq!(codecs, video_codec);
assert_eq!(
audio.map(|audio_stream| match audio_stream {
Stream::DashAudio { quality, .. } => quality,
_ => unreachable!(),
}),
audio_quality,
);
}
_ => unreachable!(),
}
}
}
#[test]
fn test_url_sort() {
let stream = Stream::DashVideo {
url: "https://xy116x207x155x163xy240ey95dy1010y700yy8dxy.mcdn.bilivideo.cn:4483".to_owned(),
backup_url: vec![
"https://upos-sz-mirrorcos.bilivideo.com".to_owned(),
"https://cn-tj-cu-01-11.bilivideo.com".to_owned(),
"https://xxx.v1d.szbdys.com".to_owned(),
],
quality: VideoQuality::Quality1080p,
codecs: VideoCodecs::AVC,
};
assert_eq!(
stream.urls(),
vec![
"https://upos-sz-mirrorcos.bilivideo.com",
"https://cn-tj-cu-01-11.bilivideo.com",
"https://xy116x207x155x163xy240ey95dy1010y700yy8dxy.mcdn.bilivideo.cn:4483",
"https://xxx.v1d.szbdys.com"
]
);
}
}
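The CDN preference implemented in `Stream::urls` above can be sketched in isolation; the URLs and expected ordering below are the ones from `test_url_sort`, and `cdn_rank` is a hypothetical free-function version of the inline closure:

```rust
// Rank URLs the way Stream::urls does when cdn_sorting is enabled:
// vendor CDN ("upos-") first, then self-hosted ("cn-"), then mcdn, then the rest.
fn cdn_rank(url: &str) -> u8 {
    if url.contains("upos-") {
        0 // vendor CDN
    } else if url.contains("cn-") {
        1 // self-hosted CDN
    } else if url.contains("mcdn") {
        2 // mcdn
    } else {
        3 // pcdn or others
    }
}

fn main() {
    let mut urls = vec![
        "https://xy116x207x155x163xy240ey95dy1010y700yy8dxy.mcdn.bilivideo.cn:4483",
        "https://upos-sz-mirrorcos.bilivideo.com",
        "https://cn-tj-cu-01-11.bilivideo.com",
        "https://xxx.v1d.szbdys.com",
    ];
    // sort_by_key is stable, so equally ranked URLs keep their original order
    urls.sort_by_key(|u| cdn_rank(u));
    assert_eq!(urls[0], "https://upos-sz-mirrorcos.bilivideo.com");
    assert_eq!(urls[3], "https://xxx.v1d.szbdys.com");
}
```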


@@ -1,10 +1,14 @@
use std::sync::Arc;
use std::time::Duration;
use anyhow::Result;
use reqwest::{header, Method};
use leaky_bucket::RateLimiter;
use reqwest::{Method, header};
use sea_orm::DatabaseConnection;
use ua_generator::ua;
use crate::bilibili::Credential;
use crate::config::CONFIG;
use crate::bilibili::credential::WbiImg;
use crate::config::{RateLimit, VersionedCache, VersionedConfig};
// A thin wrapper around reqwest::Client for Bilibili requests
#[derive(Clone)]
@@ -16,9 +20,7 @@ impl Client {
let mut headers = header::HeaderMap::new();
headers.insert(
header::USER_AGENT,
header::HeaderValue::from_static(
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36",
),
header::HeaderValue::from_static(ua::spoof_chrome_ua()),
);
headers.insert(
header::REFERER,
@@ -29,9 +31,9 @@ impl Client {
.default_headers(headers)
.gzip(true)
.connect_timeout(std::time::Duration::from_secs(10))
.read_timeout(std::time::Duration::from_secs(30))
.read_timeout(std::time::Duration::from_secs(10))
.build()
.unwrap(),
.expect("failed to build reqwest client"),
)
}
@@ -60,29 +62,54 @@ impl Default for Client {
pub struct BiliClient {
pub client: Client,
limiter: VersionedCache<Option<RateLimiter>>,
}
impl BiliClient {
pub fn new() -> Self {
let client = Client::new();
Self { client }
let limiter = VersionedCache::new(|config| {
Ok(config
.concurrent_limit
.rate_limit
.as_ref()
.map(|RateLimit { limit, duration }| {
RateLimiter::builder()
.initial(*limit)
.refill(*limit)
.max(*limit)
.interval(Duration::from_millis(*duration))
.build()
}))
})
.expect("failed to create rate limiter");
Self { client, limiter }
}
pub fn request(&self, method: Method, url: &str) -> reqwest::RequestBuilder {
let credential = CONFIG.credential.load();
self.client.request(method, url, credential.as_deref())
/// Get a pre-built request; this method checks and waits for the rate limit before returning
pub async fn request(&self, method: Method, url: &str) -> reqwest::RequestBuilder {
if let Some(limiter) = self.limiter.load().as_ref() {
limiter.acquire_one().await;
}
let credential = &VersionedConfig::get().load().credential;
self.client.request(method, url, Some(credential))
}
pub async fn check_refresh(&self) -> Result<()> {
let credential = CONFIG.credential.load();
let Some(credential) = credential.as_deref() else {
return Ok(());
};
pub async fn check_refresh(&self, connection: &DatabaseConnection) -> Result<()> {
let credential = &VersionedConfig::get().load().credential;
if !credential.need_refresh(&self.client).await? {
return Ok(());
}
let new_credential = credential.refresh(&self.client).await?;
CONFIG.credential.store(Some(Arc::new(new_credential)));
CONFIG.save()
VersionedConfig::get()
.update_credential(new_credential, connection)
.await?;
Ok(())
}
/// Fetch the wbi img, used to generate request signatures
pub async fn wbi_img(&self) -> Result<WbiImg> {
let credential = &VersionedConfig::get().load().credential;
credential.wbi_img(&self.client).await
}
}


@@ -0,0 +1,300 @@
use std::fmt::{Display, Formatter};
use anyhow::{Context, Result, anyhow};
use async_stream::try_stream;
use futures::Stream;
use reqwest::Method;
use serde::Deserialize;
use serde_json::Value;
use crate::bilibili::credential::encoded_query;
use crate::bilibili::{BiliClient, MIXIN_KEY, Validate, VideoInfo};
#[derive(PartialEq, Eq, Hash, Clone, Debug, Default, Copy)]
pub enum CollectionType {
Series,
#[default]
Season,
}
impl<'de> serde::Deserialize<'de> for CollectionType {
fn deserialize<D>(deserializer: D) -> std::result::Result<Self, D::Error>
where
D: serde::Deserializer<'de>,
{
let v = i32::deserialize(deserializer)?;
CollectionType::try_from(v).map_err(serde::de::Error::custom)
}
}
impl From<CollectionType> for i32 {
fn from(v: CollectionType) -> Self {
match v {
CollectionType::Series => 1,
CollectionType::Season => 2,
}
}
}
impl TryFrom<i32> for CollectionType {
type Error = anyhow::Error;
fn try_from(v: i32) -> Result<Self, Self::Error> {
match v {
1 => Ok(CollectionType::Series),
2 => Ok(CollectionType::Season),
v => Err(anyhow!("got invalid collection type {}", v)),
}
}
}
impl CollectionType {
pub fn from_expected(v: i32) -> Self {
Self::try_from(v).expect("invalid collection type")
}
}
impl Display for CollectionType {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
let s = match self {
CollectionType::Series => "列表",
CollectionType::Season => "合集",
};
write!(f, "{}", s)
}
}
#[derive(PartialEq, Eq, Hash, Debug)]
pub struct CollectionItem {
pub mid: String,
pub sid: String,
pub collection_type: CollectionType,
}
pub struct Collection<'a> {
client: &'a BiliClient,
pub collection: CollectionItem,
}
#[derive(Debug, PartialEq)]
pub struct CollectionInfo {
pub name: String,
pub mid: i64,
pub sid: i64,
pub collection_type: CollectionType,
}
impl<'de> Deserialize<'de> for CollectionInfo {
fn deserialize<D>(deserializer: D) -> std::result::Result<Self, D::Error>
where
D: serde::Deserializer<'de>,
{
#[derive(Deserialize)]
struct CollectionInfoRaw {
mid: i64,
name: String,
season_id: Option<i64>,
series_id: Option<i64>,
}
let raw = CollectionInfoRaw::deserialize(deserializer)?;
let (sid, collection_type) = match (raw.season_id, raw.series_id) {
(Some(sid), None) => (sid, CollectionType::Season),
(None, Some(sid)) => (sid, CollectionType::Series),
_ => return Err(serde::de::Error::custom("invalid collection info")),
};
Ok(CollectionInfo {
mid: raw.mid,
name: raw.name,
sid,
collection_type,
})
}
}
impl<'a> Collection<'a> {
pub fn new(client: &'a BiliClient, collection: CollectionItem) -> Self {
Self { client, collection }
}
pub async fn get_info(&self) -> Result<CollectionInfo> {
let meta = match self.collection.collection_type {
// There is no dedicated API for Season info, so fetch the first page and take the meta from it
CollectionType::Season => self.get_videos(1).await?["data"]["meta"].take(),
CollectionType::Series => self.get_series_info().await?["data"]["meta"].take(),
};
Ok(serde_json::from_value(meta)?)
}
async fn get_series_info(&self) -> Result<Value> {
self.client
.request(Method::GET, "https://api.bilibili.com/x/series/series")
.await
.query(&[("series_id", self.collection.sid.as_str())])
.send()
.await?
.error_for_status()?
.json::<Value>()
.await?
.validate()
}
async fn get_videos(&self, page: i32) -> Result<Value> {
let page = page.to_string();
let (url, query) = match self.collection.collection_type {
CollectionType::Series => (
"https://api.bilibili.com/x/series/archives",
encoded_query(
vec![
("mid", self.collection.mid.as_str()),
("series_id", self.collection.sid.as_str()),
("only_normal", "true"),
("sort", "desc"),
("pn", page.as_str()),
("ps", "30"),
],
MIXIN_KEY.load().as_deref(),
),
),
CollectionType::Season => (
"https://api.bilibili.com/x/polymer/web-space/seasons_archives_list",
encoded_query(
vec![
("mid", self.collection.mid.as_str()),
("season_id", self.collection.sid.as_str()),
("sort_reverse", "true"),
("page_num", page.as_str()),
("page_size", "30"),
],
MIXIN_KEY.load().as_deref(),
),
),
};
self.client
.request(Method::GET, url)
.await
.query(&query)
.send()
.await?
.error_for_status()?
.json::<Value>()
.await?
.validate()
}
pub fn into_video_stream(self) -> impl Stream<Item = Result<VideoInfo>> + 'a {
try_stream! {
let mut page = 1;
loop {
let mut videos = self.get_videos(page).await.with_context(|| {
format!(
"failed to get videos of collection {:?} page {}",
self.collection, page
)
})?;
let archives = &mut videos["data"]["archives"];
if archives.as_array().is_none_or(|v| v.is_empty()) {
Err(anyhow!(
"no videos found in collection {:?} page {}",
self.collection,
page
))?;
}
let videos_info: Vec<VideoInfo> = serde_json::from_value(archives.take()).with_context(|| {
format!(
"failed to parse videos of collection {:?} page {}",
self.collection, page
)
})?;
for video_info in videos_info {
yield video_info;
}
let page_info = &videos["data"]["page"];
let fields = match self.collection.collection_type {
CollectionType::Series => ["num", "size", "total"],
CollectionType::Season => ["page_num", "page_size", "total"],
};
let values = fields
.iter()
.map(|f| page_info[f].as_i64())
.collect::<Vec<Option<i64>>>();
if let [Some(num), Some(size), Some(total)] = values[..] {
if num * size < total {
page += 1;
continue;
}
} else {
Err(anyhow!(
"invalid page info of collection {:?} page {}: read {:?} from {}",
self.collection,
page,
fields,
page_info
))?;
}
break;
}
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_collection_info_parse() {
let testcases = vec![
(
r#"
{
"category": 0,
"cover": "https://archive.biliimg.com/bfs/archive/a6fbf7a7b9f4af09d9cf40482268634df387ef68.jpg",
"description": "",
"mid": 521722088,
"name": "合集·【命运方舟全剧情解说】",
"ptime": 1714701600,
"season_id": 1987140,
"total": 10
}
"#,
CollectionInfo {
mid: 521722088,
name: "合集·【命运方舟全剧情解说】".to_owned(),
sid: 1987140,
collection_type: CollectionType::Season,
},
),
(
r#"
{
"series_id": 387212,
"mid": 521722088,
"name": "提瓦特冒险记",
"description": "原神沙雕般的游戏体验",
"keywords": [
""
],
"creator": "",
"state": 2,
"last_update_ts": 1633167320,
"total": 3,
"ctime": 1633167320,
"mtime": 1633167320,
"raw_keywords": "",
"category": 1
}
"#,
CollectionInfo {
mid: 521722088,
name: "提瓦特冒险记".to_owned(),
sid: 387212,
collection_type: CollectionType::Series,
},
),
];
for (json, expect) in testcases {
let info: CollectionInfo = serde_json::from_str(json).unwrap();
assert_eq!(info, expect);
}
}
}


@@ -0,0 +1,338 @@
use std::borrow::Cow;
use std::collections::HashSet;
use anyhow::{Context, Result, bail, ensure};
use cookie::Cookie;
use cow_utils::CowUtils;
use regex::Regex;
use reqwest::{Method, header};
use rsa::pkcs8::DecodePublicKey;
use rsa::sha2::Sha256;
use rsa::{Oaep, RsaPublicKey};
use serde::{Deserialize, Serialize};
use crate::bilibili::{Client, Validate};
const MIXIN_KEY_ENC_TAB: [usize; 64] = [
46, 47, 18, 2, 53, 8, 23, 32, 15, 50, 10, 31, 58, 3, 45, 35, 27, 43, 5, 49, 33, 9, 42, 19, 29, 28, 14, 39, 12, 38,
41, 13, 37, 48, 7, 16, 24, 55, 40, 61, 26, 17, 0, 1, 60, 51, 30, 4, 22, 25, 54, 21, 56, 59, 6, 63, 57, 62, 11, 36,
20, 34, 44, 52,
];
#[derive(Default, Debug, Clone, Serialize, Deserialize)]
pub struct Credential {
pub sessdata: String,
pub bili_jct: String,
pub buvid3: String,
pub dedeuserid: String,
pub ac_time_value: String,
}
#[derive(Debug, Deserialize)]
pub struct WbiImg {
img_url: String,
sub_url: String,
}
impl From<WbiImg> for Option<String> {
/// Try to convert a WbiImg into a mixin_key
fn from(value: WbiImg) -> Self {
let key = match (
get_filename(value.img_url.as_str()),
get_filename(value.sub_url.as_str()),
) {
(Some(img_key), Some(sub_key)) => img_key.to_string() + sub_key,
_ => return None,
};
let key = key.as_bytes();
Some(MIXIN_KEY_ENC_TAB.iter().take(32).map(|&x| key[x] as char).collect())
}
}
impl Credential {
pub async fn wbi_img(&self, client: &Client) -> Result<WbiImg> {
let mut res = client
.request(Method::GET, "https://api.bilibili.com/x/web-interface/nav", Some(self))
.send()
.await?
.error_for_status()?
.json::<serde_json::Value>()
.await?
.validate()?;
Ok(serde_json::from_value(res["data"]["wbi_img"].take())?)
}
/// Check whether the credential is still valid, i.e. whether it needs a refresh
pub async fn need_refresh(&self, client: &Client) -> Result<bool> {
let res = client
.request(
Method::GET,
"https://passport.bilibili.com/x/passport-login/web/cookie/info",
Some(self),
)
.send()
.await?
.error_for_status()?
.json::<serde_json::Value>()
.await?
.validate()?;
res["data"]["refresh"].as_bool().context("check refresh failed")
}
pub async fn refresh(&self, client: &Client) -> Result<Self> {
let correspond_path = Self::get_correspond_path();
let csrf = self.get_refresh_csrf(client, correspond_path).await?;
let new_credential = self.get_new_credential(client, &csrf).await?;
self.confirm_refresh(client, &new_credential).await?;
Ok(new_credential)
}
fn get_correspond_path() -> String {
// This is called rarely, so constructing the key inside the function has little impact
let key = RsaPublicKey::from_public_key_pem(
"-----BEGIN PUBLIC KEY-----
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDLgd2OAkcGVtoE3ThUREbio0Eg
Uc/prcajMKXvkCKFCWhJYJcLkcM2DKKcSeFpD/j6Boy538YXnR6VhcuUJOhH2x71
nzPjfdTcqMz7djHum0qSZA0AyCBDABUqCrfNgCiJ00Ra7GmRj+YCK1NJEuewlb40
JNrRuoEUXpabUzGB8QIDAQAB
-----END PUBLIC KEY-----",
)
.expect("fail to decode public key");
let ts = chrono::Local::now().timestamp_millis();
let data = format!("refresh_{}", ts).into_bytes();
let mut rng = rand::rng();
let encrypted = key
.encrypt(&mut rng, Oaep::new::<Sha256>(), &data)
.expect("fail to encrypt");
hex::encode(encrypted)
}
async fn get_refresh_csrf(&self, client: &Client, correspond_path: String) -> Result<String> {
let res = client
.request(
Method::GET,
format!("https://www.bilibili.com/correspond/1/{}", correspond_path).as_str(),
Some(self),
)
.header(header::COOKIE, "Domain=.bilibili.com")
.send()
.await?
.error_for_status()?;
regex_find(r#"<div id="1-name">(.+?)</div>"#, res.text().await?.as_str())
}
async fn get_new_credential(&self, client: &Client, csrf: &str) -> Result<Credential> {
let mut res = client
.request(
Method::POST,
"https://passport.bilibili.com/x/passport-login/web/cookie/refresh",
Some(self),
)
.header(header::COOKIE, "Domain=.bilibili.com")
.form(&[
// This is form data, not JSON
("csrf", self.bili_jct.as_str()),
("refresh_csrf", csrf),
("refresh_token", self.ac_time_value.as_str()),
("source", "main_web"),
])
.send()
.await?
.error_for_status()?;
// The headers must be taken out before calling .json, which consumes res
let headers = std::mem::take(res.headers_mut());
let res = res.json::<serde_json::Value>().await?.validate()?;
let set_cookies = headers.get_all(header::SET_COOKIE);
let mut credential = Self {
buvid3: self.buvid3.clone(),
..Self::default()
};
let required_cookies = HashSet::from(["SESSDATA", "bili_jct", "DedeUserID"]);
let cookies: Vec<Cookie> = set_cookies
.iter()
.filter_map(|x| x.to_str().ok())
.filter_map(|x| Cookie::parse(x).ok())
.filter(|x| required_cookies.contains(x.name()))
.collect();
ensure!(
cookies.len() == required_cookies.len(),
"not all required cookies found"
);
for cookie in cookies {
match cookie.name() {
"SESSDATA" => credential.sessdata = cookie.value().to_string(),
"bili_jct" => credential.bili_jct = cookie.value().to_string(),
"DedeUserID" => credential.dedeuserid = cookie.value().to_string(),
_ => unreachable!(),
}
}
match res["data"]["refresh_token"].as_str() {
Some(token) => credential.ac_time_value = token.to_string(),
None => bail!("refresh_token not found"),
}
Ok(credential)
}
async fn confirm_refresh(&self, client: &Client, new_credential: &Credential) -> Result<()> {
client
.request(
Method::POST,
"https://passport.bilibili.com/x/passport-login/web/confirm/refresh",
// The new credential is used here
Some(new_credential),
)
.form(&[
("csrf", new_credential.bili_jct.as_str()),
("refresh_token", self.ac_time_value.as_str()),
])
.send()
.await?
.error_for_status()?
.json::<serde_json::Value>()
.await?
.validate()?;
Ok(())
}
}
// Search doc with the given regex pattern and return the first capture group of the first match
fn regex_find(pattern: &str, doc: &str) -> Result<String> {
let re = Regex::new(pattern)?;
Ok(re
.captures(doc)
.context("no match found")?
.get(1)
.context("no capture found")?
.as_str()
.to_string())
}
fn get_filename(url: &str) -> Option<&str> {
url.rsplit_once('/')
.and_then(|(_, s)| s.rsplit_once('.'))
.map(|(s, _)| s)
}
pub fn encoded_query<'a>(
params: Vec<(&'a str, impl Into<Cow<'a, str>>)>,
mixin_key: Option<impl AsRef<str>>,
) -> Vec<(&'a str, Cow<'a, str>)> {
match mixin_key {
Some(key) => _encoded_query(params, key.as_ref(), chrono::Local::now().timestamp().to_string()),
None => params.into_iter().map(|(k, v)| (k, v.into())).collect(),
}
}
fn _encoded_query<'a>(
params: Vec<(&'a str, impl Into<Cow<'a, str>>)>,
mixin_key: &str,
timestamp: String,
) -> Vec<(&'a str, Cow<'a, str>)> {
let disallowed = ['!', '\'', '(', ')', '*'];
let mut params: Vec<(&'a str, Cow<'a, str>)> = params
.into_iter()
.map(|(k, v)| {
(
k,
match Into::<Cow<'a, str>>::into(v) {
Cow::Borrowed(v) => v.cow_replace(&disallowed[..], ""),
Cow::Owned(v) => v.replace(&disallowed[..], "").into(),
},
)
})
.collect();
params.push(("wts", timestamp.into()));
params.sort_by(|a, b| a.0.cmp(b.0));
let query = serde_urlencoded::to_string(&params)
.expect("fail to encode query")
.replace('+', "%20");
params.push(("w_rid", format!("{:x}", md5::compute(query.clone() + mixin_key)).into()));
params
}
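For reference, the signing pipeline in `_encoded_query` above — strip the disallowed characters, append a `wts` timestamp, and sort parameters by key — can be sketched with the standard library alone. The final `w_rid` MD5 step is omitted here, since it needs the `md5` crate; `sanitize_and_sort` is a hypothetical helper name, not part of the diff:

```rust
// Mirror of the sanitize/append/sort steps: drop the characters the WBI
// algorithm disallows, append the `wts` timestamp, and sort by key.
fn sanitize_and_sort(params: &[(&str, &str)], wts: &str) -> Vec<(String, String)> {
    let disallowed = ['!', '\'', '(', ')', '*'];
    let mut out: Vec<(String, String)> = params
        .iter()
        .map(|(k, v)| (k.to_string(), v.chars().filter(|c| !disallowed.contains(c)).collect()))
        .collect();
    out.push(("wts".to_string(), wts.to_string()));
    out.sort_by(|a, b| a.0.cmp(&b.0));
    out
}

fn main() {
    // Matches the expectations in `test_wbi_key` below: `foo` and `bar`
    // lose their special characters before the query is signed.
    let q = sanitize_and_sort(&[("foo", "'1(1)4'"), ("bar", "!5*1!14")], "1702204169");
    assert_eq!(q[0], ("bar".to_string(), "5114".to_string()));
    assert_eq!(q[1], ("foo".to_string(), "114".to_string()));
    assert_eq!(q[2], ("wts".to_string(), "1702204169".to_string()));
    println!("{:?}", q);
}
```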
#[cfg(test)]
mod tests {
use assert_matches::assert_matches;
use super::*;
#[test]
fn test_parse_and_find() {
let doc = r#"
<html lang="zh-Hans">
<body>
<div id="1-name">b0cc8411ded2f9db2cff2edb3123acac</div>
</body>
</html>
"#;
assert_eq!(
regex_find(r#"<div id="1-name">(.+?)</div>"#, doc).unwrap(),
"b0cc8411ded2f9db2cff2edb3123acac",
);
}
#[test]
fn test_encode_query() {
let query = vec![
("bar", "五一四".to_string()),
("baz", "1919810".to_string()),
("foo", "one one four".to_string()),
];
assert_eq!(
serde_urlencoded::to_string(query).unwrap().replace('+', "%20"),
"bar=%E4%BA%94%E4%B8%80%E5%9B%9B&baz=1919810&foo=one%20one%20four"
);
}
#[test]
fn test_wbi_key() {
let key = WbiImg {
img_url: "https://i0.hdslb.com/bfs/wbi/7cd084941338484aae1ad9425b84077c.png".to_string(),
sub_url: "https://i0.hdslb.com/bfs/wbi/4932caff0ff746eab6f01bf08b70ac45.png".to_string(),
};
let key = Option::<String>::from(key).expect("fail to convert key");
assert_eq!(key.as_str(), "ea1db124af3c7062474693fa704f4ff8");
// without special characters
assert_matches!(
&_encoded_query(
vec![("foo", "114"), ("bar", "514"), ("zab", "1919810")],
key.as_str(),
"1702204169".to_string(),
)[..],
[
("bar", Cow::Borrowed(a)),
("foo", Cow::Borrowed(b)),
("wts", Cow::Owned(c)),
("zab", Cow::Borrowed(d)),
("w_rid", Cow::Owned(e)),
] => {
assert_eq!(*a, "514");
assert_eq!(*b, "114");
assert_eq!(c, "1702204169");
assert_eq!(*d, "1919810");
assert_eq!(e, "8f6f2b5b3d485fe1886cec6a0be8c5d4");
}
);
// with special characters
assert_matches!(
&_encoded_query(
vec![("foo", "'1(1)4'"), ("bar", "!5*1!14"), ("zab", "1919810")],
key.as_str(),
"1702204169".to_string(),
)[..],
[
("bar", Cow::Owned(a)),
("foo", Cow::Owned(b)),
("wts", Cow::Owned(c)),
("zab", Cow::Borrowed(d)),
("w_rid", Cow::Owned(e)),
] => {
assert_eq!(a, "5114");
assert_eq!(b, "114");
assert_eq!(c, "1702204169");
assert_eq!(*d, "1919810");
assert_eq!(e, "6a2c86c4b0648ce062ba0dac2de91a85");
}
);
}
}


@@ -88,14 +88,14 @@ impl fmt::Display for CanvasStyles {
}
}
pub struct AssWriter<W: AsyncWrite> {
pub struct AssWriter<'a, W: AsyncWrite> {
f: Pin<Box<BufWriter<W>>>,
title: String,
canvas_config: CanvasConfig,
canvas_config: CanvasConfig<'a>,
}
impl<W: AsyncWrite> AssWriter<W> {
pub fn new(f: W, title: String, canvas_config: CanvasConfig) -> Self {
impl<'a, W: AsyncWrite> AssWriter<'a, W> {
pub fn new(f: W, title: String, canvas_config: CanvasConfig<'a>) -> Self {
AssWriter {
// For HDD or docker scenarios, disk IO is a major bottleneck, so use a large buffer
f: Box::pin(BufWriter::with_capacity(10 << 20, f)),
@@ -104,7 +104,7 @@ impl<W: AsyncWrite> AssWriter<W> {
}
}
pub async fn construct(f: W, title: String, canvas_config: CanvasConfig) -> Result<Self> {
pub async fn construct(f: W, title: String, canvas_config: CanvasConfig<'a>) -> Result<Self> {
let mut res = Self::new(f, title, canvas_config);
res.init().await?;
Ok(res)
@@ -184,7 +184,7 @@ impl<W: AsyncWrite> AssWriter<W> {
}
}
fn escape_text(text: &str) -> Cow<str> {
fn escape_text(text: &'_ str) -> Cow<'_, str> {
let text = text.trim();
if memchr::memchr(b'\n', text.as_bytes()).is_some() {
Cow::from(text.replace('\n', "\\N"))


@@ -1,5 +1,5 @@
use crate::bilibili::danmaku::canvas::CanvasConfig;
use crate::bilibili::danmaku::Danmu;
use crate::bilibili::danmaku::canvas::CanvasConfig;
pub enum Collision {
// will drift further and further apart
@@ -18,7 +18,7 @@ pub struct Lane {
}
impl Lane {
pub fn draw(danmu: &Danmu, config: &CanvasConfig) -> Self {
pub fn draw(danmu: &Danmu, config: &CanvasConfig<'_>) -> Self {
Lane {
last_shoot_time: danmu.timeline_s,
last_length: danmu.length(config),
@@ -26,7 +26,7 @@ impl Lane {
}
/// Whether this lane can fire another danmaku; returns the possible collision case
pub fn available_for(&self, other: &Danmu, config: &CanvasConfig) -> Collision {
pub fn available_for(&self, other: &Danmu, config: &CanvasConfig<'_>) -> Collision {
#[allow(non_snake_case)]
let T = config.danmaku_option.duration;
#[allow(non_snake_case)]


@@ -5,12 +5,12 @@ use anyhow::Result;
use float_ord::FloatOrd;
use lane::Lane;
use crate::bilibili::PageInfo;
use crate::bilibili::danmaku::canvas::lane::Collision;
use crate::bilibili::danmaku::danmu::DanmuType;
use crate::bilibili::danmaku::{Danmu, DrawEffect, Drawable};
use crate::bilibili::PageInfo;
#[derive(Debug, serde::Deserialize, serde::Serialize)]
#[derive(Debug, Clone, serde::Deserialize, serde::Serialize)]
pub struct DanmakuOption {
pub duration: f64,
pub font: String,
@@ -26,7 +26,7 @@ pub struct DanmakuOption {
pub bottom_percentage: f64,
/// Opacity, 0-255
pub opacity: u8,
/// Whether bold: 1 means yes, 0 means no
pub bold: bool,
/// 描边
pub outline: f64,
@@ -54,13 +54,13 @@ impl Default for DanmakuOption {
}
#[derive(Clone)]
pub struct CanvasConfig {
pub struct CanvasConfig<'a> {
pub width: u64,
pub height: u64,
pub danmaku_option: &'static DanmakuOption,
pub danmaku_option: &'a DanmakuOption,
}
impl CanvasConfig {
pub fn new(danmaku_option: &'static DanmakuOption, page: &PageInfo) -> Self {
impl<'a> CanvasConfig<'a> {
pub fn new(danmaku_option: &'a DanmakuOption, page: &PageInfo) -> Self {
let (width, height) = Self::dimension(page);
Self {
width,
@@ -86,7 +86,7 @@ impl CanvasConfig {
((720.0 / height as f64 * width as f64) as u64, 720)
}
pub fn canvas(self) -> Canvas {
pub fn canvas(self) -> Canvas<'a> {
let float_lanes_cnt =
(self.danmaku_option.float_percentage * self.height as f64 / self.danmaku_option.lane_size as f64) as usize;
@@ -97,12 +97,12 @@ impl CanvasConfig {
}
}
pub struct Canvas {
pub config: CanvasConfig,
pub struct Canvas<'a> {
pub config: CanvasConfig<'a>,
pub float_lanes: Vec<Option<Lane>>,
}
impl Canvas {
impl<'a> Canvas<'a> {
pub fn draw(&mut self, mut danmu: Danmu) -> Result<Option<Drawable>> {
danmu.timeline_s += self.config.danmaku_option.time_offset;
if danmu.timeline_s < 0.0 {


@@ -1,5 +1,5 @@
//! 一个弹幕实例,但是没有位置信息
use anyhow::{bail, Result};
use anyhow::{Result, bail};
use crate::bilibili::danmaku::canvas::CanvasConfig;
@@ -39,8 +39,8 @@ pub struct Danmu {
impl Danmu {
/// Compute the danmaku's "pixel length", multiplied by a scaling factor
///
/// A CJK character counts as full width; a Latin character counts as 2/3 width
pub fn length(&self, config: &CanvasConfig) -> f64 {
pub fn length(&self, config: &CanvasConfig<'_>) -> f64 {
let pts = config.danmaku_option.font_size
* self
.content


@@ -3,10 +3,10 @@ use std::path::PathBuf;
use anyhow::Result;
use tokio::fs::{self, File};
use crate::bilibili::PageInfo;
use crate::bilibili::danmaku::canvas::CanvasConfig;
use crate::bilibili::danmaku::{AssWriter, Danmu};
use crate::bilibili::PageInfo;
use crate::config::CONFIG;
use crate::config::VersionedConfig;
pub struct DanmakuWriter<'a> {
page: &'a PageInfo,
@@ -22,7 +22,8 @@ impl<'a> DanmakuWriter<'a> {
if let Some(parent) = path.parent() {
fs::create_dir_all(parent).await?;
}
let canvas_config = CanvasConfig::new(&CONFIG.danmaku_option, self.page);
let config = VersionedConfig::get().load_full();
let canvas_config = CanvasConfig::new(&config.danmaku_option, self.page);
let mut writer =
AssWriter::construct(File::create(path).await?, self.page.name.clone(), canvas_config.clone()).await?;
let mut canvas = canvas_config.canvas();


@@ -0,0 +1,95 @@
use anyhow::{Context, Result, anyhow};
use async_stream::try_stream;
use futures::Stream;
use serde_json::Value;
use crate::bilibili::{BiliClient, Validate, VideoInfo};
pub struct FavoriteList<'a> {
client: &'a BiliClient,
fid: String,
}
#[derive(Debug, serde::Deserialize)]
pub struct FavoriteListInfo {
pub id: i64,
pub title: String,
}
#[derive(Debug, serde::Deserialize)]
pub struct Upper<T> {
pub mid: T,
pub name: String,
pub face: String,
}
impl<'a> FavoriteList<'a> {
pub fn new(client: &'a BiliClient, fid: String) -> Self {
Self { client, fid }
}
pub async fn get_info(&self) -> Result<FavoriteListInfo> {
let mut res = self
.client
.request(reqwest::Method::GET, "https://api.bilibili.com/x/v3/fav/folder/info")
.await
.query(&[("media_id", &self.fid)])
.send()
.await?
.error_for_status()?
.json::<serde_json::Value>()
.await?
.validate()?;
Ok(serde_json::from_value(res["data"].take())?)
}
async fn get_videos(&self, page: u32) -> Result<Value> {
self.client
.request(reqwest::Method::GET, "https://api.bilibili.com/x/v3/fav/resource/list")
.await
.query(&[
("media_id", self.fid.as_str()),
("pn", page.to_string().as_str()),
("ps", "20"),
("order", "mtime"),
("type", "0"),
("tid", "0"),
])
.send()
.await?
.error_for_status()?
.json::<serde_json::Value>()
.await?
.validate()
}
// Takes ownership of the favorite list and returns a stream of its videos
pub fn into_video_stream(self) -> impl Stream<Item = Result<VideoInfo>> + 'a {
try_stream! {
let mut page = 1;
loop {
let mut videos = self
.get_videos(page)
.await
.with_context(|| format!("failed to get videos of favorite {} page {}", self.fid, page))?;
let medias = &mut videos["data"]["medias"];
if medias.as_array().is_none_or(|v| v.is_empty()) {
Err(anyhow!("no medias found in favorite {} page {}", self.fid, page))?;
}
let videos_info: Vec<VideoInfo> = serde_json::from_value(medias.take())
.with_context(|| format!("failed to parse videos of favorite {} page {}", self.fid, page))?;
for video_info in videos_info {
yield video_info;
}
let has_more = &videos["data"]["has_more"];
if let Some(v) = has_more.as_bool() {
if v {
page += 1;
continue;
}
} else {
Err(anyhow!("has_more is not a bool"))?;
}
break;
}
}
}
}
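The `into_video_stream` paging loop above (fetch a page, yield its items, advance while `has_more`) can be sketched synchronously over mock pages; the data below is made up for illustration, not from the API:

```rust
// Each mock page is (items, has_more), mirroring the favorite-list API's paging fields.
fn collect_all(pages: &[(&[i32], bool)]) -> Vec<i32> {
    let mut out = Vec::new();
    let mut page = 0usize;
    loop {
        let (items, has_more) = pages[page];
        out.extend_from_slice(items);
        if has_more {
            // more pages remain: advance and fetch again
            page += 1;
            continue;
        }
        break;
    }
    out
}

fn main() {
    let pages = [(&[1, 2][..], true), (&[3][..], false)];
    assert_eq!(collect_all(&pages), vec![1, 2, 3]);
    println!("{:?}", collect_all(&pages));
}
```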


@@ -0,0 +1,115 @@
use anyhow::{Result, ensure};
use reqwest::Method;
use crate::bilibili::{BiliClient, Validate};
use crate::config::VersionedConfig;
pub struct Me<'a> {
client: &'a BiliClient,
mid: String,
}
impl<'a> Me<'a> {
pub fn new(client: &'a BiliClient) -> Self {
Self {
client,
mid: Self::my_id(),
}
}
pub async fn get_created_favorites(&self) -> Result<Option<Vec<FavoriteItem>>> {
ensure!(!self.mid.is_empty(), "未获取到用户 ID，请确保填写设置中的 B 站认证信息");
let mut resp = self
.client
.request(Method::GET, "https://api.bilibili.com/x/v3/fav/folder/created/list-all")
.await
.query(&[("up_mid", &self.mid)])
.send()
.await?
.error_for_status()?
.json::<serde_json::Value>()
.await?
.validate()?;
Ok(serde_json::from_value(resp["data"]["list"].take())?)
}
pub async fn get_followed_collections(&self, page_num: i32, page_size: i32) -> Result<Collections> {
ensure!(!self.mid.is_empty(), "未获取到用户 ID，请确保填写设置中的 B 站认证信息");
let mut resp = self
.client
.request(Method::GET, "https://api.bilibili.com/x/v3/fav/folder/collected/list")
.await
.query(&[
("up_mid", self.mid.as_str()),
("pn", page_num.to_string().as_str()),
("ps", page_size.to_string().as_str()),
("platform", "web"),
])
.send()
.await?
.error_for_status()?
.json::<serde_json::Value>()
.await?
.validate()?;
Ok(serde_json::from_value(resp["data"].take())?)
}
pub async fn get_followed_uppers(&self, page_num: i32, page_size: i32) -> Result<FollowedUppers> {
ensure!(!self.mid.is_empty(), "未获取到用户 ID，请确保填写设置中的 B 站认证信息");
let mut resp = self
.client
.request(Method::GET, "https://api.bilibili.com/x/relation/followings")
.await
.query(&[
("vmid", self.mid.as_str()),
("pn", page_num.to_string().as_str()),
("ps", page_size.to_string().as_str()),
])
.send()
.await?
.error_for_status()?
.json::<serde_json::Value>()
.await?
.validate()?;
Ok(serde_json::from_value(resp["data"].take())?)
}
fn my_id() -> String {
VersionedConfig::get().load().credential.dedeuserid.clone()
}
}
#[derive(Debug, serde::Deserialize)]
pub struct FavoriteItem {
pub title: String,
pub media_count: i64,
pub id: i64,
pub mid: i64,
}
#[derive(Debug, serde::Deserialize)]
pub struct CollectionItem {
pub id: i64,
pub mid: i64,
pub state: i32,
pub title: String,
}
#[derive(Debug, serde::Deserialize)]
pub struct Collections {
pub count: i64,
pub list: Option<Vec<CollectionItem>>,
}
#[derive(Debug, serde::Deserialize)]
pub struct FollowedUppers {
pub total: i64,
pub list: Vec<FollowedUpper>,
}
#[derive(Debug, serde::Deserialize)]
pub struct FollowedUpper {
pub mid: i64,
pub uname: String,
pub face: String,
pub sign: String,
}


@@ -0,0 +1,226 @@
use std::sync::Arc;
pub use analyzer::{BestStream, FilterOption};
use anyhow::{Result, bail, ensure};
use arc_swap::ArcSwapOption;
use chrono::serde::ts_seconds;
use chrono::{DateTime, Utc};
pub use client::{BiliClient, Client};
pub use collection::{Collection, CollectionItem, CollectionType};
pub use credential::Credential;
pub use danmaku::DanmakuOption;
pub use error::BiliError;
pub use favorite_list::FavoriteList;
use favorite_list::Upper;
pub use me::Me;
use once_cell::sync::Lazy;
pub use submission::Submission;
pub use video::{Dimension, PageInfo, Video};
pub use watch_later::WatchLater;
mod analyzer;
mod client;
mod collection;
mod credential;
mod danmaku;
mod error;
mod favorite_list;
mod me;
mod submission;
mod subtitle;
mod video;
mod watch_later;
static MIXIN_KEY: Lazy<ArcSwapOption<String>> = Lazy::new(Default::default);
pub(crate) fn set_global_mixin_key(key: String) {
MIXIN_KEY.store(Some(Arc::new(key)));
}
pub(crate) trait Validate {
type Output;
fn validate(self) -> Result<Self::Output>;
}
impl Validate for serde_json::Value {
type Output = serde_json::Value;
fn validate(self) -> Result<Self::Output> {
let (code, msg) = match (self["code"].as_i64(), self["message"].as_str()) {
(Some(code), Some(msg)) => (code, msg),
_ => bail!("no code or message found"),
};
ensure!(code == 0, BiliError::RequestFailed(code, msg.to_owned()));
Ok(self)
}
}
#[derive(Debug, serde::Deserialize)]
#[serde(untagged)]
/// Note: the variant order here matters, because for an untagged enum serde matches variants in order:
/// > There is no explicit tag identifying which variant the data contains.
/// > Serde will try to match the data against each variant in order and the first one that deserializes successfully is the one returned.
pub enum VideoInfo {
/// Video info from the video detail API
Detail {
title: String,
bvid: String,
#[serde(rename = "desc")]
intro: String,
#[serde(rename = "pic")]
cover: String,
#[serde(rename = "owner")]
upper: Upper<i64>,
#[serde(with = "ts_seconds")]
ctime: DateTime<Utc>,
#[serde(rename = "pubdate", with = "ts_seconds")]
pubtime: DateTime<Utc>,
pages: Vec<PageInfo>,
state: i32,
},
/// Video info from the favorite list API
Favorite {
title: String,
#[serde(rename = "type")]
vtype: i32,
bvid: String,
intro: String,
cover: String,
upper: Upper<i64>,
#[serde(with = "ts_seconds")]
ctime: DateTime<Utc>,
#[serde(with = "ts_seconds")]
fav_time: DateTime<Utc>,
#[serde(with = "ts_seconds")]
pubtime: DateTime<Utc>,
attr: i32,
},
/// Video info from the watch-later API
WatchLater {
title: String,
bvid: String,
#[serde(rename = "desc")]
intro: String,
#[serde(rename = "pic")]
cover: String,
#[serde(rename = "owner")]
upper: Upper<i64>,
#[serde(with = "ts_seconds")]
ctime: DateTime<Utc>,
#[serde(rename = "add_at", with = "ts_seconds")]
fav_time: DateTime<Utc>,
#[serde(rename = "pubdate", with = "ts_seconds")]
pubtime: DateTime<Utc>,
state: i32,
},
/// Video info from the collection/series API
Collection {
bvid: String,
#[serde(rename = "pic")]
cover: String,
#[serde(with = "ts_seconds")]
ctime: DateTime<Utc>,
#[serde(rename = "pubdate", with = "ts_seconds")]
pubtime: DateTime<Utc>,
},
// Video info from the user submission API
Submission {
title: String,
bvid: String,
#[serde(rename = "description")]
intro: String,
#[serde(rename = "pic")]
cover: String,
#[serde(rename = "created", with = "ts_seconds")]
ctime: DateTime<Utc>,
},
}
#[cfg(test)]
mod tests {
use futures::StreamExt;
use super::*;
use crate::utils::init_logger;
#[ignore = "only for manual test"]
#[tokio::test]
async fn test_video_info_type() {
init_logger("None,bili_sync=debug", None);
let bili_client = BiliClient::new();
// Requesting an uploader's videos requires the mixin key, which signs the request parameters; otherwise the API reports insufficient permission and returns empty data
let Ok(Some(mixin_key)) = bili_client.wbi_img().await.map(|wbi_img| wbi_img.into()) else {
panic!("获取 mixin key 失败");
};
set_global_mixin_key(mixin_key);
let collection = Collection::new(
&bili_client,
CollectionItem {
mid: "521722088".to_string(),
sid: "4523".to_string(),
collection_type: CollectionType::Season,
},
);
let videos = collection
.into_video_stream()
.take(20)
.filter_map(|v| futures::future::ready(v.ok()))
.collect::<Vec<_>>()
.await;
assert!(videos.iter().all(|v| matches!(v, VideoInfo::Collection { .. })));
assert!(videos.iter().rev().is_sorted_by_key(|v| v.release_datetime()));
// test favorite list
let favorite = FavoriteList::new(&bili_client, "3144336058".to_string());
let videos = favorite
.into_video_stream()
.take(20)
.filter_map(|v| futures::future::ready(v.ok()))
.collect::<Vec<_>>()
.await;
assert!(videos.iter().all(|v| matches!(v, VideoInfo::Favorite { .. })));
assert!(videos.iter().rev().is_sorted_by_key(|v| v.release_datetime()));
// test watch-later
let watch_later = WatchLater::new(&bili_client);
let videos = watch_later
.into_video_stream()
.take(20)
.filter_map(|v| futures::future::ready(v.ok()))
.collect::<Vec<_>>()
.await;
assert!(videos.iter().all(|v| matches!(v, VideoInfo::WatchLater { .. })));
assert!(videos.iter().rev().is_sorted_by_key(|v| v.release_datetime()));
// test submissions
let submission = Submission::new(&bili_client, "956761".to_string());
let videos = submission
.into_video_stream()
.take(20)
.filter_map(|v| futures::future::ready(v.ok()))
.collect::<Vec<_>>()
.await;
assert!(videos.iter().all(|v| matches!(v, VideoInfo::Submission { .. })));
assert!(videos.iter().rev().is_sorted_by_key(|v| v.release_datetime()));
}
#[ignore = "only for manual test"]
#[tokio::test]
async fn test_subtitle_parse() -> Result<()> {
let bili_client = BiliClient::new();
let Ok(Some(mixin_key)) = bili_client.wbi_img().await.map(|wbi_img| wbi_img.into()) else {
panic!("获取 mixin key 失败");
};
set_global_mixin_key(mixin_key);
let video = Video::new(&bili_client, "BV1gLfnY8E6D".to_string());
let pages = video.get_pages().await?;
println!("pages: {:?}", pages);
let subtitles = video.get_subtitles(&pages[0]).await?;
for subtitle in subtitles {
println!(
"{}: {}",
subtitle.lan,
subtitle.body.to_string().chars().take(200).collect::<String>()
);
}
Ok(())
}
}


@@ -0,0 +1,89 @@
use anyhow::{Context, Result, anyhow};
use async_stream::try_stream;
use futures::Stream;
use reqwest::Method;
use serde_json::Value;
use crate::bilibili::credential::encoded_query;
use crate::bilibili::favorite_list::Upper;
use crate::bilibili::{BiliClient, MIXIN_KEY, Validate, VideoInfo};
pub struct Submission<'a> {
client: &'a BiliClient,
pub upper_id: String,
}
impl<'a> Submission<'a> {
pub fn new(client: &'a BiliClient, upper_id: String) -> Self {
Self { client, upper_id }
}
pub async fn get_info(&self) -> Result<Upper<String>> {
let mut res = self
.client
.request(Method::GET, "https://api.bilibili.com/x/web-interface/card")
.await
.query(&[("mid", self.upper_id.as_str())])
.send()
.await?
.error_for_status()?
.json::<serde_json::Value>()
.await?
.validate()?;
Ok(serde_json::from_value(res["data"]["card"].take())?)
}
async fn get_videos(&self, page: i32) -> Result<Value> {
self.client
.request(Method::GET, "https://api.bilibili.com/x/space/wbi/arc/search")
.await
.query(&encoded_query(
vec![
("mid", self.upper_id.as_str()),
("order", "pubdate"),
("order_avoided", "true"),
("platform", "web"),
("web_location", "1550101"),
("pn", page.to_string().as_str()),
("ps", "30"),
],
MIXIN_KEY.load().as_deref(),
))
.send()
.await?
.error_for_status()?
.json::<serde_json::Value>()
.await?
.validate()
}
pub fn into_video_stream(self) -> impl Stream<Item = Result<VideoInfo>> + 'a {
try_stream! {
let mut page = 1;
loop {
let mut videos = self
.get_videos(page)
.await
.with_context(|| format!("failed to get videos of upper {} page {}", self.upper_id, page))?;
let vlist = &mut videos["data"]["list"]["vlist"];
if vlist.as_array().is_none_or(|v| v.is_empty()) {
Err(anyhow!("no medias found in upper {} page {}", self.upper_id, page))?;
}
let videos_info: Vec<VideoInfo> = serde_json::from_value(vlist.take())
.with_context(|| format!("failed to parse videos of upper {} page {}", self.upper_id, page))?;
for video_info in videos_info {
yield video_info;
}
let count = &videos["data"]["page"]["count"];
if let Some(v) = count.as_i64() {
if v > (page * 30) as i64 {
page += 1;
continue;
}
} else {
Err(anyhow!("count is not an i64"))?;
}
break;
}
}
}
}


@@ -0,0 +1,75 @@
use std::fmt::Display;
#[derive(Debug, serde::Deserialize)]
pub struct SubTitlesInfo {
pub subtitles: Vec<SubTitleInfo>,
}
#[derive(Debug, serde::Deserialize)]
pub struct SubTitleInfo {
pub lan: String,
pub subtitle_url: String,
}
pub struct SubTitle {
pub lan: String,
pub body: SubTitleBody,
}
#[derive(Debug, serde::Deserialize)]
pub struct SubTitleBody(pub Vec<SubTitleItem>);
#[derive(Debug, serde::Deserialize)]
pub struct SubTitleItem {
from: f64,
to: f64,
content: String,
}
impl SubTitleInfo {
pub fn is_ai_sub(&self) -> bool {
// AI subtitle: aisubtitle.hdslb.com/bfs/ai_subtitle/xxxx
// non-AI: aisubtitle.hdslb.com/bfs/subtitle/xxxx
self.subtitle_url.contains("ai_subtitle")
}
}
impl Display for SubTitleBody {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
for (idx, item) in self.0.iter().enumerate() {
writeln!(f, "{}", idx)?;
writeln!(f, "{} --> {}", format_time(item.from), format_time(item.to))?;
writeln!(f, "{}", item.content)?;
writeln!(f)?;
}
Ok(())
}
}
fn format_time(time: f64) -> String {
let (second, millisecond) = (time.trunc(), (time.fract() * 1e3) as u32);
let (hour, minute, second) = (
(second / 3600.0) as u32,
((second % 3600.0) / 60.0) as u32,
(second % 60.0) as u32,
);
format!("{:02}:{:02}:{:02},{:03}", hour, minute, second, millisecond)
}
#[cfg(test)]
mod tests {
#[test]
fn test_format_time() {
// float parsing has precision issues, but an error of a few milliseconds should not matter much
// to be more robust we would have to hand-parse the JSON, splitting seconds and milliseconds and handling them separately
let testcases = [
(0.0, "00:00:00,000"),
(1.5, "00:00:01,500"),
(206.45, "00:03:26,449"),
(360001.23, "100:00:01,229"),
];
for (time, expect) in testcases.iter() {
assert_eq!(super::format_time(*time), *expect);
}
}
}
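The `Display` impl and `format_time` above together emit SRT-style cues (index line, `HH:MM:SS,mmm --> HH:MM:SS,mmm`, content, blank line). A standalone sketch of that shape, where `cue` is a hypothetical helper rather than part of the diff:

```rust
// Split seconds into h/m/s plus milliseconds, formatted the SRT way.
fn format_time(time: f64) -> String {
    let (second, millisecond) = (time.trunc(), (time.fract() * 1e3) as u32);
    let (hour, minute, second) = (
        (second / 3600.0) as u32,
        ((second % 3600.0) / 60.0) as u32,
        (second % 60.0) as u32,
    );
    format!("{:02}:{:02}:{:02},{:03}", hour, minute, second, millisecond)
}

// One SRT cue: index, time range, content, trailing blank line.
fn cue(idx: usize, from: f64, to: f64, content: &str) -> String {
    format!("{}\n{} --> {}\n{}\n", idx, format_time(from), format_time(to), content)
}

fn main() {
    assert_eq!(cue(0, 1.5, 3.0, "hello"), "0\n00:00:01,500 --> 00:00:03,000\nhello\n");
    println!("{}", cue(0, 1.5, 3.0, "hello"));
}
```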


@@ -0,0 +1,193 @@
use anyhow::{Context, Result, ensure};
use futures::TryStreamExt;
use futures::stream::FuturesUnordered;
use prost::Message;
use reqwest::Method;
use crate::bilibili::analyzer::PageAnalyzer;
use crate::bilibili::client::BiliClient;
use crate::bilibili::credential::encoded_query;
use crate::bilibili::danmaku::{DanmakuElem, DanmakuWriter, DmSegMobileReply};
use crate::bilibili::subtitle::{SubTitle, SubTitleBody, SubTitleInfo, SubTitlesInfo};
use crate::bilibili::{MIXIN_KEY, Validate, VideoInfo};
pub struct Video<'a> {
client: &'a BiliClient,
pub bvid: String,
}
#[derive(Debug, serde::Deserialize, Default)]
pub struct PageInfo {
pub cid: i64,
pub page: i32,
#[serde(rename = "part")]
pub name: String,
pub duration: u32,
pub first_frame: Option<String>,
pub dimension: Option<Dimension>,
}
#[derive(Debug, serde::Deserialize, Default)]
pub struct Dimension {
pub width: u32,
pub height: u32,
pub rotate: u32,
}
impl<'a> Video<'a> {
pub fn new(client: &'a BiliClient, bvid: String) -> Self {
Self { client, bvid }
}
/// Call the video detail API directly for the full video info, which includes the video's page list
pub async fn get_view_info(&self) -> Result<VideoInfo> {
let mut res = self
.client
.request(Method::GET, "https://api.bilibili.com/x/web-interface/view")
.await
.query(&[("bvid", &self.bvid)])
.send()
.await?
.error_for_status()?
.json::<serde_json::Value>()
.await?
.validate()?;
Ok(serde_json::from_value(res["data"].take())?)
}
#[allow(dead_code)]
pub async fn get_pages(&self) -> Result<Vec<PageInfo>> {
let mut res = self
.client
.request(Method::GET, "https://api.bilibili.com/x/player/pagelist")
.await
.query(&[("bvid", &self.bvid)])
.send()
.await?
.error_for_status()?
.json::<serde_json::Value>()
.await?
.validate()?;
Ok(serde_json::from_value(res["data"].take())?)
}
pub async fn get_tags(&self) -> Result<Vec<String>> {
let res = self
.client
.request(Method::GET, "https://api.bilibili.com/x/web-interface/view/detail/tag")
.await
.query(&[("bvid", &self.bvid)])
.send()
.await?
.error_for_status()?
.json::<serde_json::Value>()
.await?
.validate()?;
Ok(res["data"]
.as_array()
.context("tags is not an array")?
.iter()
.filter_map(|v| v["tag_name"].as_str().map(String::from))
.collect())
}
pub async fn get_danmaku_writer(&self, page: &'a PageInfo) -> Result<DanmakuWriter<'a>> {
let tasks = FuturesUnordered::new();
for i in 1..=page.duration.div_ceil(360) {
tasks.push(self.get_danmaku_segment(page, i as i64));
}
let result: Vec<Vec<DanmakuElem>> = tasks.try_collect().await?;
let mut result: Vec<DanmakuElem> = result.into_iter().flatten().collect();
result.sort_by_key(|d| d.progress);
Ok(DanmakuWriter::new(page, result.into_iter().map(|x| x.into()).collect()))
}
async fn get_danmaku_segment(&self, page: &PageInfo, segment_idx: i64) -> Result<Vec<DanmakuElem>> {
let mut res = self
.client
.request(Method::GET, "http://api.bilibili.com/x/v2/dm/web/seg.so")
.await
.query(&[("type", 1), ("oid", page.cid), ("segment_index", segment_idx)])
.send()
.await?
.error_for_status()?;
let headers = std::mem::take(res.headers_mut());
let content_type = headers.get("content-type");
ensure!(
content_type.is_some_and(|v| v == "application/octet-stream"),
"unexpected content type: {:?}, body: {:?}",
content_type,
res.text().await
);
Ok(DmSegMobileReply::decode(res.bytes().await?)?.elems)
}
pub async fn get_page_analyzer(&self, page: &PageInfo) -> Result<PageAnalyzer> {
let mut res = self
.client
.request(Method::GET, "https://api.bilibili.com/x/player/wbi/playurl")
.await
.query(&encoded_query(
vec![
("bvid", self.bvid.as_str()),
("cid", page.cid.to_string().as_str()),
("qn", "127"),
("otype", "json"),
("fnval", "4048"),
("fourk", "1"),
],
MIXIN_KEY.load().as_deref(),
))
.send()
.await?
.error_for_status()?
.json::<serde_json::Value>()
.await?
.validate()?;
Ok(PageAnalyzer::new(res["data"].take()))
}
pub async fn get_subtitles(&self, page: &PageInfo) -> Result<Vec<SubTitle>> {
let mut res = self
.client
.request(Method::GET, "https://api.bilibili.com/x/player/wbi/v2")
.await
.query(&encoded_query(
vec![("cid", &page.cid.to_string()), ("bvid", &self.bvid)],
MIXIN_KEY.load().as_deref(),
))
.send()
.await?
.error_for_status()?
.json::<serde_json::Value>()
.await?
.validate()?;
// the API response contains a list of subtitles; each entry has the subtitle language and a JSON download URL
match serde_json::from_value::<Option<SubTitlesInfo>>(res["data"]["subtitle"].take())? {
Some(subtitles_info) => {
let tasks = subtitles_info
.subtitles
.into_iter()
.filter(|v| !v.is_ai_sub())
.map(|v| self.get_subtitle(v))
.collect::<FuturesUnordered<_>>();
tasks.try_collect().await
}
None => Ok(vec![]),
}
}
async fn get_subtitle(&self, info: SubTitleInfo) -> Result<SubTitle> {
let mut res = self
.client
.client // the inner client can be used directly here, since this request needs no authentication
.request(Method::GET, format!("https:{}", &info.subtitle_url).as_str(), None)
.send()
.await?
.error_for_status()?
.json::<serde_json::Value>()
.await?;
let body: SubTitleBody = serde_json::from_value(res["body"].take())?;
Ok(SubTitle { lan: info.lan, body })
}
}


@@ -0,0 +1,45 @@
use anyhow::{Context, Result, anyhow};
use async_stream::try_stream;
use futures::Stream;
use serde_json::Value;
use crate::bilibili::{BiliClient, Validate, VideoInfo};
pub struct WatchLater<'a> {
client: &'a BiliClient,
}
impl<'a> WatchLater<'a> {
pub fn new(client: &'a BiliClient) -> Self {
Self { client }
}
async fn get_videos(&self) -> Result<Value> {
self.client
.request(reqwest::Method::GET, "https://api.bilibili.com/x/v2/history/toview")
.await
.send()
.await?
.error_for_status()?
.json::<serde_json::Value>()
.await?
.validate()
}
pub fn into_video_stream(self) -> impl Stream<Item = Result<VideoInfo>> + 'a {
try_stream! {
let mut videos = self
.get_videos()
.await
.with_context(|| "Failed to get watch later list")?;
let list = &mut videos["data"]["list"];
if list.as_array().is_none_or(|v| v.is_empty()) {
Err(anyhow!("No videos found in watch later list"))?;
}
let videos_info: Vec<VideoInfo> =
serde_json::from_value(list.take()).with_context(|| "Failed to parse watch later list")?;
for video_info in videos_info {
yield video_info;
}
}
}
}


@@ -0,0 +1,44 @@
use std::borrow::Cow;
use std::sync::LazyLock;
use clap::Parser;
pub static ARGS: LazyLock<Args> = LazyLock::new(Args::parse);
#[derive(Parser)]
#[command(name = "Bili-Sync", version = detail_version(), about, long_about = None)]
pub struct Args {
#[arg(short, long, env = "SCAN_ONLY")]
pub scan_only: bool,
#[arg(short, long, default_value = "None,bili_sync=info", env = "RUST_LOG")]
pub log_level: String,
}
mod built_info {
include!(concat!(env!("OUT_DIR"), "/built.rs"));
}
pub fn version() -> Cow<'static, str> {
if let (Some(git_version), Some(git_dirty)) = (built_info::GIT_VERSION, built_info::GIT_DIRTY) {
Cow::Owned(format!("{}{}", git_version, if git_dirty { "-dirty" } else { "" }))
} else {
Cow::Borrowed(built_info::PKG_VERSION)
}
}
fn detail_version() -> String {
format!(
"{}
Architecture: {}-{}
Author: {}
Built Time: {}
Rustc Version: {}",
version(),
built_info::CFG_OS,
built_info::CFG_TARGET_ARCH,
built_info::PKG_AUTHORS,
built_info::BUILT_TIME_UTC,
built_info::RUSTC_VERSION,
)
}


@@ -0,0 +1,129 @@
use std::path::PathBuf;
use std::sync::LazyLock;
use anyhow::{Result, bail};
use sea_orm::DatabaseConnection;
use serde::{Deserialize, Serialize};
use validator::Validate;
use crate::bilibili::{Credential, DanmakuOption, FilterOption};
use crate::config::LegacyConfig;
use crate::config::default::{default_auth_token, default_bind_address, default_time_format};
use crate::config::item::{ConcurrentLimit, NFOTimeType};
use crate::utils::model::{load_db_config, save_db_config};
pub static CONFIG_DIR: LazyLock<PathBuf> =
LazyLock::new(|| dirs::config_dir().expect("No config path found").join("bili-sync"));
#[derive(Serialize, Deserialize, Validate, Clone)]
pub struct Config {
pub auth_token: String,
pub bind_address: String,
pub credential: Credential,
pub filter_option: FilterOption,
pub danmaku_option: DanmakuOption,
pub video_name: String,
pub page_name: String,
pub interval: u64,
pub upper_path: PathBuf,
pub nfo_time_type: NFOTimeType,
pub concurrent_limit: ConcurrentLimit,
pub time_format: String,
pub cdn_sorting: bool,
pub version: u64,
}
impl Config {
pub async fn load_from_database(connection: &DatabaseConnection) -> Result<Option<Result<Self>>> {
load_db_config(connection).await
}
pub async fn save_to_database(&self, connection: &DatabaseConnection) -> Result<()> {
save_db_config(self, connection).await
}
pub fn check(&self) -> Result<()> {
let mut errors = Vec::new();
if !self.upper_path.is_absolute() {
errors.push("up 主头像保存的路径应为绝对路径");
}
if self.video_name.is_empty() {
errors.push("未设置 video_name 模板");
}
if self.page_name.is_empty() {
errors.push("未设置 page_name 模板");
}
let credential = &self.credential;
if credential.sessdata.is_empty()
|| credential.bili_jct.is_empty()
|| credential.buvid3.is_empty()
|| credential.dedeuserid.is_empty()
|| credential.ac_time_value.is_empty()
{
errors.push("Credential 信息不完整,请确保填写完整");
}
if !(self.concurrent_limit.video > 0 && self.concurrent_limit.page > 0) {
errors.push("video 和 page 允许的并发数必须大于 0");
}
if !errors.is_empty() {
bail!(
errors
.into_iter()
.map(|e| format!("- {}", e))
.collect::<Vec<_>>()
.join("\n")
);
}
Ok(())
}
#[cfg(test)]
pub(super) fn test_default() -> Self {
Self {
cdn_sorting: true,
..Default::default()
}
}
}
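`Config::check` above collects every validation failure before bailing with one joined `- …` list, rather than stopping at the first error. A minimal standalone sketch of that pattern, using simplified hypothetical fields in place of the real config:

```rust
// Collect all failures, then report them together as a bulleted list.
fn check(upper_path_absolute: bool, video_name: &str) -> Result<(), String> {
    let mut errors = Vec::new();
    if !upper_path_absolute {
        errors.push("upper_path must be absolute");
    }
    if video_name.is_empty() {
        errors.push("video_name template is empty");
    }
    if errors.is_empty() {
        Ok(())
    } else {
        Err(errors.into_iter().map(|e| format!("- {}", e)).collect::<Vec<_>>().join("\n"))
    }
}

fn main() {
    assert!(check(true, "{{title}}").is_ok());
    let err = check(false, "").unwrap_err();
    assert_eq!(err, "- upper_path must be absolute\n- video_name template is empty");
    println!("{}", err);
}
```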
impl Default for Config {
fn default() -> Self {
Self {
auth_token: default_auth_token(),
bind_address: default_bind_address(),
credential: Credential::default(),
filter_option: FilterOption::default(),
danmaku_option: DanmakuOption::default(),
video_name: "{{title}}".to_owned(),
page_name: "{{bvid}}".to_owned(),
interval: 1200,
upper_path: CONFIG_DIR.join("upper_face"),
nfo_time_type: NFOTimeType::FavTime,
concurrent_limit: ConcurrentLimit::default(),
time_format: default_time_format(),
cdn_sorting: false,
version: 0,
}
}
}
impl From<LegacyConfig> for Config {
fn from(legacy: LegacyConfig) -> Self {
Self {
auth_token: legacy.auth_token,
bind_address: legacy.bind_address,
credential: legacy.credential,
filter_option: legacy.filter_option,
danmaku_option: legacy.danmaku_option,
video_name: legacy.video_name,
page_name: legacy.page_name,
interval: legacy.interval,
upper_path: legacy.upper_path,
nfo_time_type: legacy.nfo_time_type,
concurrent_limit: legacy.concurrent_limit,
time_format: legacy.time_format,
cdn_sorting: legacy.cdn_sorting,
version: 0,
}
}
}


@@ -0,0 +1,18 @@
use rand::seq::IndexedRandom;
pub(super) fn default_time_format() -> String {
"%Y-%m-%d".to_string()
}
/// Default auth_token implementation: generates a random 16-character string
pub(super) fn default_auth_token() -> String {
let byte_choices = b"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!@#$%^&*()_+-=";
let mut rng = rand::rng();
(0..16)
.map(|_| *(byte_choices.choose(&mut rng).expect("choose byte failed")) as char)
.collect()
}
pub(super) fn default_bind_address() -> String {
"0.0.0.0:12345".to_string()
}

@@ -0,0 +1,91 @@
use std::sync::LazyLock;
use anyhow::Result;
use handlebars::handlebars_helper;
use crate::config::versioned_cache::VersionedCache;
use crate::config::{Config, PathSafeTemplate};
pub static TEMPLATE: LazyLock<VersionedCache<handlebars::Handlebars<'static>>> =
LazyLock::new(|| VersionedCache::new(create_template).expect("Failed to create handlebars template"));
fn create_template(config: &Config) -> Result<handlebars::Handlebars<'static>> {
let mut handlebars = handlebars::Handlebars::new();
handlebars.register_helper("truncate", Box::new(truncate));
handlebars.path_safe_register("video", config.video_name.to_owned())?;
handlebars.path_safe_register("page", config.page_name.to_owned())?;
Ok(handlebars)
}
handlebars_helper!(truncate: |s: String, len: usize| {
if s.chars().count() > len {
s.chars().take(len).collect::<String>()
} else {
s.to_string()
}
});
#[cfg(test)]
mod tests {
use serde_json::json;
use super::*;
#[test]
fn test_template_usage() {
let mut template = handlebars::Handlebars::new();
template.register_helper("truncate", Box::new(truncate));
let _ = template.path_safe_register("video", "test{{bvid}}test");
let _ = template.path_safe_register("test_truncate", "哈哈,{{ truncate title 30 }}");
let _ = template.path_safe_register("test_path_unix", "{{ truncate title 7 }}/test/a");
let _ = template.path_safe_register("test_path_windows", r"{{ truncate title 7 }}\\test\\a");
#[cfg(not(windows))]
{
assert_eq!(
template
.path_safe_render("test_path_unix", &json!({"title": "关注/永雏塔菲喵"}))
.unwrap(),
"关注_永雏塔菲/test/a"
);
assert_eq!(
template
.path_safe_render("test_path_windows", &json!({"title": "关注/永雏塔菲喵"}))
.unwrap(),
"关注_永雏塔菲_test_a"
);
}
#[cfg(windows)]
{
assert_eq!(
template
.path_safe_render("test_path_unix", &json!({"title": "关注/永雏塔菲喵"}))
.unwrap(),
"关注_永雏塔菲_test_a"
);
assert_eq!(
template
.path_safe_render("test_path_windows", &json!({"title": "关注/永雏塔菲喵"}))
.unwrap(),
r"关注_永雏塔菲\\test\\a"
);
}
assert_eq!(
template
.path_safe_render("video", &json!({"bvid": "BV1b5411h7g7"}))
.unwrap(),
"testBV1b5411h7g7test"
);
assert_eq!(
template
.path_safe_render(
"test_truncate",
&json!({"title": "你说得对,但是 Rust 是由 Mozilla 自主研发的一款全新的编译期格斗游戏。\
编译将发生在一个被称作「Cargo」的构建系统中。在这里被引用的指针将被授予「生命周期」之力导引对象安全。\
你将扮演一位名为「Rustacean」的神秘角色在与「Rustc」的搏斗中邂逅各种骨骼惊奇的傲娇报错。\
征服她们、通过编译同时逐步发掘「C++」程序崩溃的真相。"})
)
.unwrap(),
"哈哈,你说得对,但是 Rust 是由 Mozilla 自主研发的一"
);
}
}

@@ -0,0 +1,87 @@
use std::path::PathBuf;
use anyhow::Result;
use serde::{Deserialize, Serialize};
use crate::utils::filenamify::filenamify;
/// Configuration for the "watch later" list
#[derive(Serialize, Deserialize, Default)]
pub struct WatchLaterConfig {
pub enabled: bool,
pub path: PathBuf,
}
/// Time type used in NFO files
#[derive(Serialize, Deserialize, Default, Clone)]
#[serde(rename_all = "lowercase")]
pub enum NFOTimeType {
#[default]
FavTime,
PubTime,
}
/// Configuration for concurrent downloads
#[derive(Serialize, Deserialize, Clone)]
pub struct ConcurrentLimit {
pub video: usize,
pub page: usize,
pub rate_limit: Option<RateLimit>,
#[serde(default)]
pub download: ConcurrentDownloadLimit,
}
#[derive(Serialize, Deserialize, Clone)]
pub struct ConcurrentDownloadLimit {
pub enable: bool,
pub concurrency: usize,
pub threshold: u64,
}
impl Default for ConcurrentDownloadLimit {
fn default() -> Self {
Self {
enable: true,
concurrency: 4,
threshold: 20 * (1 << 20), // 20 MB
}
}
}
#[derive(Serialize, Deserialize, Clone)]
pub struct RateLimit {
pub limit: usize,
pub duration: u64,
}
impl Default for ConcurrentLimit {
fn default() -> Self {
Self {
video: 3,
page: 2,
// Default rate-limit configuration: 4 requests allowed per 250 ms
rate_limit: Some(RateLimit {
limit: 4,
duration: 250,
}),
download: ConcurrentDownloadLimit::default(),
}
}
}
pub trait PathSafeTemplate {
fn path_safe_register(&mut self, name: &'static str, template: impl Into<String>) -> Result<()>;
fn path_safe_render(&self, name: &'static str, data: &serde_json::Value) -> Result<String>;
}
/// Preserves separators written in the template string by swapping them for a custom placeholder before registration
impl PathSafeTemplate for handlebars::Handlebars<'_> {
fn path_safe_register(&mut self, name: &'static str, template: impl Into<String>) -> Result<()> {
let template = template.into();
Ok(self.register_template_string(name, template.replace(std::path::MAIN_SEPARATOR_STR, "__SEP__"))?)
}
fn path_safe_render(&self, name: &'static str, data: &serde_json::Value) -> Result<String> {
Ok(filenamify(&self.render(name, data)?).replace("__SEP__", std::path::MAIN_SEPARATOR_STR))
}
}
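The placeholder round-trip above can be sketched without handlebars: separators the template author wrote are protected before rendering, then restored after sanitizing. This is a minimal, std-only sketch assuming a Unix separator; `sanitize` is a simplified stand-in for the real `filenamify`.

```rust
// Simplified stand-in for `filenamify`: replace filesystem-reserved
// characters with an underscore.
fn sanitize(input: &str) -> String {
    input
        .chars()
        .map(|c| {
            if matches!(c, '/' | '\\' | ':' | '*' | '?' | '"' | '<' | '>' | '|') {
                '_'
            } else {
                c
            }
        })
        .collect()
}

// Registration step: protect separators the template author wrote,
// so sanitizing the rendered output cannot destroy them.
fn protect_separators(template: &str) -> String {
    template.replace(std::path::MAIN_SEPARATOR_STR, "__SEP__")
}

// Render step: sanitize the rendered string (which may contain
// separators coming from user data), then restore the protected ones.
fn restore_separators(rendered: &str) -> String {
    sanitize(rendered).replace("__SEP__", std::path::MAIN_SEPARATOR_STR)
}
```

Separators injected by user data (e.g. a `/` inside a video title) are flattened to `_`, while separators the template declared survive the round trip.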

@@ -0,0 +1,134 @@
use std::collections::HashMap;
use std::path::{Path, PathBuf};
use anyhow::Result;
use sea_orm::DatabaseConnection;
use serde::de::{Deserializer, MapAccess, Visitor};
use serde::ser::SerializeMap;
use serde::{Deserialize, Serialize};
use crate::bilibili::{CollectionItem, CollectionType, Credential, DanmakuOption, FilterOption};
use crate::config::Config;
use crate::config::default::{default_auth_token, default_bind_address, default_time_format};
use crate::config::item::{ConcurrentLimit, NFOTimeType, WatchLaterConfig};
use crate::utils::model::migrate_legacy_config;
#[derive(Serialize, Deserialize)]
pub struct LegacyConfig {
#[serde(default = "default_auth_token")]
pub auth_token: String,
#[serde(default = "default_bind_address")]
pub bind_address: String,
pub credential: Credential,
pub filter_option: FilterOption,
#[serde(default)]
pub danmaku_option: DanmakuOption,
pub favorite_list: HashMap<String, PathBuf>,
#[serde(
default,
serialize_with = "serialize_collection_list",
deserialize_with = "deserialize_collection_list"
)]
pub collection_list: HashMap<CollectionItem, PathBuf>,
#[serde(default)]
pub submission_list: HashMap<String, PathBuf>,
#[serde(default)]
pub watch_later: WatchLaterConfig,
pub video_name: String,
pub page_name: String,
pub interval: u64,
pub upper_path: PathBuf,
#[serde(default)]
pub nfo_time_type: NFOTimeType,
#[serde(default)]
pub concurrent_limit: ConcurrentLimit,
#[serde(default = "default_time_format")]
pub time_format: String,
#[serde(default)]
pub cdn_sorting: bool,
}
impl LegacyConfig {
async fn load_from_file(path: &Path) -> Result<Self> {
let legacy_config_str = tokio::fs::read_to_string(path).await?;
Ok(toml::from_str(&legacy_config_str)?)
}
pub async fn migrate_from_file(path: &Path, connection: &DatabaseConnection) -> Result<Config> {
let legacy_config = Self::load_from_file(path).await?;
migrate_legacy_config(&legacy_config, connection).await?;
Ok(legacy_config.into())
}
}
/*
Boilerplate below for custom serialization and deserialization of the collection list
*/
pub(super) fn serialize_collection_list<S>(
collection_list: &HashMap<CollectionItem, PathBuf>,
serializer: S,
) -> Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
let mut map = serializer.serialize_map(Some(collection_list.len()))?;
for (k, v) in collection_list {
let prefix = match k.collection_type {
CollectionType::Series => "series",
CollectionType::Season => "season",
};
map.serialize_entry(&[prefix, &k.mid, &k.sid].join(":"), v)?;
}
map.end()
}
pub(super) fn deserialize_collection_list<'de, D>(deserializer: D) -> Result<HashMap<CollectionItem, PathBuf>, D::Error>
where
D: Deserializer<'de>,
{
struct CollectionListVisitor;
impl<'de> Visitor<'de> for CollectionListVisitor {
type Value = HashMap<CollectionItem, PathBuf>;
fn expecting(&self, formatter: &mut std::fmt::Formatter) -> std::fmt::Result {
formatter.write_str("a map of collection list")
}
fn visit_map<A>(self, mut map: A) -> Result<Self::Value, A::Error>
where
A: MapAccess<'de>,
{
let mut collection_list = HashMap::new();
while let Some((key, value)) = map.next_entry::<String, PathBuf>()? {
let collection_item = match key.split(':').collect::<Vec<&str>>().as_slice() {
[prefix, mid, sid] => {
let collection_type = match *prefix {
"series" => CollectionType::Series,
"season" => CollectionType::Season,
_ => {
return Err(serde::de::Error::custom(
"invalid collection type, should be series or season",
));
}
};
CollectionItem {
mid: mid.to_string(),
sid: sid.to_string(),
collection_type,
}
}
_ => {
return Err(serde::de::Error::custom(
"invalid collection key, should be series:mid:sid or season:mid:sid",
));
}
};
collection_list.insert(collection_item, value);
}
Ok(collection_list)
}
}
deserializer.deserialize_map(CollectionListVisitor)
}
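The visitor's key handling boils down to splitting `prefix:mid:sid` and mapping the prefix onto a collection type. A std-only sketch with local stand-in types (the real code uses `CollectionItem`/`CollectionType` and serde errors instead of `Option`):

```rust
#[derive(Debug, PartialEq)]
enum CollectionKind {
    Series,
    Season,
}

// Parse a legacy-config key of the form "series:mid:sid" or
// "season:mid:sid"; any other shape or prefix is rejected.
fn parse_collection_key(key: &str) -> Option<(CollectionKind, String, String)> {
    match key.split(':').collect::<Vec<_>>().as_slice() {
        [prefix, mid, sid] => {
            let kind = match *prefix {
                "series" => CollectionKind::Series,
                "season" => CollectionKind::Season,
                _ => return None,
            };
            Some((kind, mid.to_string(), sid.to_string()))
        }
        _ => None,
    }
}
```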

@@ -0,0 +1,16 @@
mod args;
mod current;
mod default;
mod handlebar;
mod item;
mod legacy;
mod versioned_cache;
mod versioned_config;
pub use crate::config::args::{ARGS, version};
pub use crate::config::current::{CONFIG_DIR, Config};
pub use crate::config::handlebar::TEMPLATE;
pub use crate::config::item::{NFOTimeType, PathSafeTemplate, RateLimit};
pub use crate::config::legacy::LegacyConfig;
pub use crate::config::versioned_cache::VersionedCache;
pub use crate::config::versioned_config::VersionedConfig;

@@ -0,0 +1,54 @@
use std::sync::Arc;
use std::sync::atomic::{AtomicU64, Ordering};
use anyhow::Result;
use arc_swap::{ArcSwap, Guard};
use crate::config::{Config, VersionedConfig};
pub struct VersionedCache<T> {
inner: ArcSwap<T>,
version: AtomicU64,
builder: fn(&Config) -> Result<T>,
mutex: parking_lot::Mutex<()>,
}
impl<T> VersionedCache<T> {
pub fn new(builder: fn(&Config) -> Result<T>) -> Result<Self> {
let current_config = VersionedConfig::get().load();
let current_version = current_config.version;
let initial_value = builder(&current_config)?;
Ok(Self {
inner: ArcSwap::from_pointee(initial_value),
version: AtomicU64::new(current_version),
builder,
mutex: parking_lot::Mutex::new(()),
})
}
pub fn load(&self) -> Guard<Arc<T>> {
self.reload_if_needed();
self.inner.load()
}
fn reload_if_needed(&self) {
let current_config = VersionedConfig::get().load();
let current_version = current_config.version;
let version = self.version.load(Ordering::Relaxed);
if version < current_version {
let _lock = self.mutex.lock();
if self.version.load(Ordering::Relaxed) >= current_version {
return;
}
match (self.builder)(&current_config) {
Err(e) => {
error!("Failed to rebuild versioned cache: {:?}", e);
}
Ok(new_value) => {
self.inner.store(Arc::new(new_value));
self.version.store(current_version, Ordering::Relaxed);
}
}
}
}
}
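The double-checked rebuild in `reload_if_needed` can be reproduced with std primitives alone. In this sketch an `RwLock<Arc<T>>` stands in for `ArcSwap`, and the source-of-truth version is passed in as a parameter rather than read from `VersionedConfig`:

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::{Arc, Mutex, RwLock};

struct Cache<T> {
    value: RwLock<Arc<T>>,
    version: AtomicU64,
    rebuild_lock: Mutex<()>,
}

impl<T> Cache<T> {
    fn new(initial: T, version: u64) -> Self {
        Self {
            value: RwLock::new(Arc::new(initial)),
            version: AtomicU64::new(version),
            rebuild_lock: Mutex::new(()),
        }
    }

    // Fast path: version matches, just hand out the cached Arc.
    // Slow path: take the rebuild lock, re-check the version (another
    // thread may have rebuilt while we waited), then rebuild once.
    fn load(&self, current_version: u64, builder: impl Fn() -> T) -> Arc<T> {
        if self.version.load(Ordering::Relaxed) < current_version {
            let _guard = self.rebuild_lock.lock().unwrap();
            if self.version.load(Ordering::Relaxed) < current_version {
                *self.value.write().unwrap() = Arc::new(builder());
                self.version.store(current_version, Ordering::Relaxed);
            }
        }
        self.value.read().unwrap().clone()
    }
}
```

The second version check under the lock is what keeps concurrent readers from rebuilding the value more than once per config bump.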

@@ -0,0 +1,120 @@
use std::sync::Arc;
use anyhow::{Result, anyhow, bail};
use arc_swap::{ArcSwap, Guard};
use sea_orm::DatabaseConnection;
use tokio::sync::OnceCell;
use crate::bilibili::Credential;
use crate::config::{CONFIG_DIR, Config, LegacyConfig};
pub static VERSIONED_CONFIG: OnceCell<VersionedConfig> = OnceCell::const_new();
pub struct VersionedConfig {
inner: ArcSwap<Config>,
update_lock: tokio::sync::Mutex<()>,
}
impl VersionedConfig {
/// Initialize the global `VersionedConfig`; returns an error if initialization fails or has already happened
pub async fn init(connection: &DatabaseConnection) -> Result<()> {
let mut config = match Config::load_from_database(connection).await? {
Some(Ok(config)) => config,
Some(Err(e)) => bail!("解析数据库配置失败: {}", e),
None => {
let config = match LegacyConfig::migrate_from_file(&CONFIG_DIR.join("config.toml"), connection).await {
Ok(config) => config,
Err(e) => {
if e.downcast_ref::<std::io::Error>()
.is_none_or(|e| e.kind() != std::io::ErrorKind::NotFound)
{
bail!("未成功读取并迁移旧版本配置:{:#}", e);
} else {
let config = Config::default();
warn!(
"生成 auth_token:{},可使用该 token 登录 web UI,该信息仅在首次运行时打印",
config.auth_token
);
config
}
}
};
config.save_to_database(connection).await?;
config
}
};
// The version field has no intrinsic meaning; it only guards concurrent updates, so it can simply be reset at initialization
config.version = 0;
let versioned_config = VersionedConfig::new(config);
VERSIONED_CONFIG
.set(versioned_config)
.map_err(|e| anyhow!("VERSIONED_CONFIG has already been initialized: {}", e))?;
Ok(())
}
#[cfg(test)]
/// Unit tests use a dedicated test config directly
pub fn get() -> &'static VersionedConfig {
use std::sync::LazyLock;
static TEST_CONFIG: LazyLock<VersionedConfig> = LazyLock::new(|| VersionedConfig::new(Config::test_default()));
&TEST_CONFIG
}
#[cfg(not(test))]
/// Get the global `VersionedConfig`; panics if it has not been initialized
pub fn get() -> &'static VersionedConfig {
VERSIONED_CONFIG.get().expect("VERSIONED_CONFIG is not initialized")
}
pub fn new(config: Config) -> Self {
Self {
inner: ArcSwap::from_pointee(config),
update_lock: tokio::sync::Mutex::new(()),
}
}
pub fn load(&self) -> Guard<Arc<Config>> {
self.inner.load()
}
pub fn load_full(&self) -> Arc<Config> {
self.inner.load_full()
}
pub async fn update_credential(&self, new_credential: Credential, connection: &DatabaseConnection) -> Result<()> {
// Ensure updating the value and writing it to the database happen atomically
let _lock = self.update_lock.lock().await;
loop {
let old_config = self.inner.load();
let mut new_config = old_config.as_ref().clone();
new_config.credential = new_credential.clone();
new_config.version += 1;
if Arc::ptr_eq(
&old_config,
&self.inner.compare_and_swap(&old_config, Arc::new(new_config)),
) {
break;
}
}
self.inner.load().save_to_database(connection).await
}
/// Called by external APIs; returns an error directly if the update fails
pub async fn update(&self, mut new_config: Config, connection: &DatabaseConnection) -> Result<Arc<Config>> {
let _lock = self.update_lock.lock().await;
let old_config = self.inner.load();
if old_config.version != new_config.version {
bail!("配置版本不匹配,请刷新页面修改后重新提交");
}
new_config.version += 1;
let new_config = Arc::new(new_config);
if !Arc::ptr_eq(
&old_config,
&self.inner.compare_and_swap(&old_config, new_config.clone()),
) {
bail!("配置版本不匹配,请刷新页面修改后重新提交");
}
new_config.save_to_database(connection).await?;
Ok(new_config)
}
}

@@ -0,0 +1,52 @@
use std::time::Duration;
use anyhow::{Context, Result};
use bili_sync_migration::{Migrator, MigratorTrait};
use sea_orm::sqlx::sqlite::{SqliteConnectOptions, SqliteJournalMode, SqliteSynchronous};
use sea_orm::sqlx::{ConnectOptions as SqlxConnectOptions, Sqlite};
use sea_orm::{ConnectOptions, Database, DatabaseConnection, SqlxSqliteConnector};
use crate::config::CONFIG_DIR;
fn database_url() -> String {
format!("sqlite://{}?mode=rwc", CONFIG_DIR.join("data.sqlite").to_string_lossy())
}
async fn database_connection() -> Result<DatabaseConnection> {
let mut option = ConnectOptions::new(database_url());
option
.max_connections(50)
.min_connections(5)
.acquire_timeout(Duration::from_secs(90));
let connect_option = option
.get_url()
.parse::<SqliteConnectOptions>()
.context("Failed to parse database URL")?
.disable_statement_logging()
.busy_timeout(Duration::from_secs(90))
.journal_mode(SqliteJournalMode::Wal)
.synchronous(SqliteSynchronous::Normal)
.optimize_on_close(true, None);
Ok(SqlxSqliteConnector::from_sqlx_sqlite_pool(
option
.sqlx_pool_options::<Sqlite>()
.connect_with(connect_option)
.await?,
))
}
async fn migrate_database() -> Result<()> {
// Note: an internally constructed DatabaseConnection is used here rather than one from database_connection(),
// because a multi-connection pool causes odd migration-ordering problems while the default connection options do not
let connection = Database::connect(database_url()).await?;
Ok(Migrator::up(&connection, None).await?)
}
/// Run database migrations and return a connection for external use
pub async fn setup_database() -> Result<DatabaseConnection> {
tokio::fs::create_dir_all(CONFIG_DIR.as_path()).await.context(
"Failed to create config directory. Please check if you have granted necessary permissions to your folder.",
)?;
migrate_database().await.context("Failed to migrate database")?;
database_connection().await.context("Failed to connect to database")
}

@@ -0,0 +1,192 @@
use core::str;
use std::io::SeekFrom;
use std::path::Path;
use std::sync::Arc;
use anyhow::{Context, Result, bail, ensure};
use futures::TryStreamExt;
use reqwest::{Method, header};
use tokio::fs::{self, File, OpenOptions};
use tokio::io::{AsyncSeekExt, AsyncWriteExt};
use tokio::task::JoinSet;
use tokio_util::io::StreamReader;
use crate::bilibili::Client;
use crate::config::VersionedConfig;
pub struct Downloader {
client: Client,
}
impl Downloader {
// The Downloader is built from a Client that carries default headers.
// Once the URL is obtained, downloading needs no cookies as credentials,
// but without the default headers the download fails with 403 Forbidden.
pub fn new(client: Client) -> Self {
Self { client }
}
pub async fn fetch(&self, url: &str, path: &Path) -> Result<()> {
if VersionedConfig::get().load().concurrent_limit.download.enable {
self.fetch_parallel(url, path).await
} else {
self.fetch_serial(url, path).await
}
}
async fn fetch_serial(&self, url: &str, path: &Path) -> Result<()> {
let resp = self
.client
.request(Method::GET, url, None)
.send()
.await?
.error_for_status()?;
let expected = resp.header_content_length();
if let Some(parent) = path.parent() {
fs::create_dir_all(parent).await?;
}
let mut file = File::create(path).await?;
let mut stream_reader = StreamReader::new(resp.bytes_stream().map_err(std::io::Error::other));
let received = tokio::io::copy(&mut stream_reader, &mut file).await?;
file.flush().await?;
if let Some(expected) = expected {
ensure!(
received == expected,
"downloaded bytes mismatch: expected {}, got {}",
expected,
received
);
}
Ok(())
}
async fn fetch_parallel(&self, url: &str, path: &Path) -> Result<()> {
let (concurrency, threshold) = {
let config = VersionedConfig::get().load();
(
config.concurrent_limit.download.concurrency,
config.concurrent_limit.download.threshold,
)
};
let resp = self
.client
.request(Method::HEAD, url, None)
.send()
.await?
.error_for_status()?;
let file_size = resp.header_content_length().unwrap_or_default();
let chunk_size = file_size / concurrency as u64;
if resp
.headers()
.get(header::ACCEPT_RANGES)
.is_none_or(|v| v.to_str().unwrap_or_default() == "none") // https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Accept-Ranges#none
|| chunk_size < threshold
{
return self.fetch_serial(url, path).await;
}
if let Some(parent) = path.parent() {
fs::create_dir_all(parent).await?;
}
let file = File::create(path).await?;
file.set_len(file_size).await?;
drop(file);
let mut tasks = JoinSet::new();
let url = Arc::new(url.to_string());
let path = Arc::new(path.to_path_buf());
for i in 0..concurrency {
let start = i as u64 * chunk_size;
let end = if i == concurrency - 1 {
file_size
} else {
start + chunk_size
} - 1;
let (url_clone, path_clone, client_clone) = (url.clone(), path.clone(), self.client.clone());
tasks.spawn(async move {
let mut file = OpenOptions::new().write(true).open(path_clone.as_ref()).await?;
file.seek(SeekFrom::Start(start)).await?;
let range_header = format!("bytes={}-{}", start, end);
let resp = client_clone
.request(Method::GET, &url_clone, None)
.header(header::RANGE, &range_header)
.send()
.await?
.error_for_status()?;
if let Some(content_length) = resp.header_content_length() {
ensure!(
content_length == end - start + 1,
"content length mismatch: expected {}, got {}",
end - start + 1,
content_length
);
}
let mut stream_reader = StreamReader::new(resp.bytes_stream().map_err(std::io::Error::other));
let received = tokio::io::copy(&mut stream_reader, &mut file).await?;
file.flush().await?;
ensure!(
received == end - start + 1,
"downloaded bytes mismatch: expected {}, got {}",
end - start + 1,
received,
);
Ok(())
});
}
while let Some(res) = tasks.join_next().await {
res??;
}
Ok(())
}
pub async fn fetch_with_fallback(&self, urls: &[&str], path: &Path) -> Result<()> {
if urls.is_empty() {
bail!("no urls provided");
}
let mut res = Ok(());
for url in urls {
match self.fetch(url, path).await {
Ok(_) => return Ok(()),
Err(err) => {
res = Err(err);
}
}
}
res.context("failed to download file")
}
pub async fn merge(&self, video_path: &Path, audio_path: &Path, output_path: &Path) -> Result<()> {
let output = tokio::process::Command::new("ffmpeg")
.args([
"-i",
video_path.to_string_lossy().as_ref(),
"-i",
audio_path.to_string_lossy().as_ref(),
"-c",
"copy",
"-strict",
"unofficial",
"-y",
output_path.to_string_lossy().as_ref(),
])
.output()
.await
.context("failed to run ffmpeg")?;
if !output.status.success() {
bail!("ffmpeg error: {}", str::from_utf8(&output.stderr).unwrap_or("unknown"));
}
Ok(())
}
}
/// reqwest's content_length() surprisingly returns the body size rather than the Content-Length header, so implement it here
/// https://github.com/seanmonstar/reqwest/issues/1814
trait ResponseExt {
fn header_content_length(&self) -> Option<u64>;
}
impl ResponseExt for reqwest::Response {
fn header_content_length(&self) -> Option<u64> {
self.headers()
.get(header::CONTENT_LENGTH)
.and_then(|v| v.to_str().ok())
.and_then(|s| s.parse::<u64>().ok())
}
}
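The byte-range split inside `fetch_parallel` can be isolated into a pure helper: each worker gets an inclusive range (matching the HTTP `Range: bytes=start-end` header), and the last worker absorbs the remainder. A std-only sketch:

```rust
// Split `file_size` bytes into `concurrency` inclusive ranges.
// Mirrors the arithmetic in fetch_parallel: equal chunks, with the
// final chunk extended to the end of the file.
fn chunk_ranges(file_size: u64, concurrency: usize) -> Vec<(u64, u64)> {
    let chunk_size = file_size / concurrency as u64;
    (0..concurrency)
        .map(|i| {
            let start = i as u64 * chunk_size;
            let end = if i == concurrency - 1 {
                file_size
            } else {
                start + chunk_size
            } - 1;
            (start, end)
        })
        .collect()
}
```

The ranges are contiguous and cover the file exactly, which is what the per-chunk `content_length == end - start + 1` check in the real code relies on.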

@@ -0,0 +1,59 @@
use std::io;
use anyhow::Result;
use thiserror::Error;
#[derive(Error, Debug)]
#[error("Request too frequently")]
pub struct DownloadAbortError();
#[derive(Error, Debug)]
#[error("Process page error")]
pub struct ProcessPageError();
pub enum ExecutionStatus {
Skipped,
Succeeded,
Ignored(anyhow::Error),
Failed(anyhow::Error),
// A task can return this status to pin its own status value
FixedFailed(u32, anyhow::Error),
}
// Stable Rust doesn't yet support the `?` operator on custom types, so the functions return Result and get wrapped like this
impl From<Result<ExecutionStatus>> for ExecutionStatus {
fn from(res: Result<ExecutionStatus>) -> Self {
match res {
Ok(status) => status,
Err(err) => {
for cause in err.chain() {
if let Some(io_err) = cause.downcast_ref::<io::Error>() {
// Permission errors
if io_err.kind() == io::ErrorKind::PermissionDenied {
return ExecutionStatus::Ignored(err);
}
// reqwest::Error wrapped in io::Error
if io_err.kind() == io::ErrorKind::Other
&& io_err.get_ref().is_some_and(|e| {
e.downcast_ref::<reqwest::Error>().is_some_and(is_ignored_reqwest_error)
})
{
return ExecutionStatus::Ignored(err);
}
}
// A bare reqwest::Error
if let Some(error) = cause.downcast_ref::<reqwest::Error>()
&& is_ignored_reqwest_error(error)
{
return ExecutionStatus::Ignored(err);
}
}
ExecutionStatus::Failed(err)
}
}
}
}
fn is_ignored_reqwest_error(err: &reqwest::Error) -> bool {
err.is_decode() || err.is_body() || err.is_timeout()
}
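The chain walk in `From<Result<ExecutionStatus>>` follows the same pattern as std's error sources. A std-only sketch without `anyhow` or `reqwest`: walk every cause via `source()` and decide whether the failure should be ignored (here, a `PermissionDenied` io::Error anywhere in the chain):

```rust
use std::error::Error as StdError;
use std::io;

// Walk the error chain and report whether any cause is a
// PermissionDenied io::Error, analogous to how the real code
// classifies errors as Ignored vs Failed.
fn is_ignorable(err: &(dyn StdError + 'static)) -> bool {
    let mut cause: Option<&(dyn StdError + 'static)> = Some(err);
    while let Some(e) = cause {
        if let Some(io_err) = e.downcast_ref::<io::Error>() {
            if io_err.kind() == io::ErrorKind::PermissionDenied {
                return true;
            }
        }
        cause = e.source();
    }
    false
}
```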

@@ -0,0 +1,113 @@
#[macro_use]
extern crate tracing;
mod adapter;
mod api;
mod bilibili;
mod config;
mod database;
mod downloader;
mod error;
mod task;
mod utils;
mod workflow;
use std::collections::VecDeque;
use std::fmt::Debug;
use std::future::Future;
use std::sync::Arc;
use bilibili::BiliClient;
use parking_lot::Mutex;
use sea_orm::DatabaseConnection;
use task::{http_server, video_downloader};
use tokio_util::sync::CancellationToken;
use tokio_util::task::TaskTracker;
use crate::api::{LogHelper, MAX_HISTORY_LOGS};
use crate::config::{ARGS, VersionedConfig};
use crate::database::setup_database;
use crate::utils::init_logger;
use crate::utils::signal::terminate;
#[tokio::main]
async fn main() {
let (connection, log_writer) = init().await;
let bili_client = Arc::new(BiliClient::new());
let token = CancellationToken::new();
let tracker = TaskTracker::new();
spawn_task(
"HTTP 服务",
http_server(connection.clone(), bili_client.clone(), log_writer),
&tracker,
token.clone(),
);
if !cfg!(debug_assertions) {
spawn_task(
"定时下载",
video_downloader(connection.clone(), bili_client),
&tracker,
token.clone(),
);
}
tracker.close();
handle_shutdown(connection, tracker, token).await
}
fn spawn_task(
task_name: &'static str,
task: impl Future<Output = impl Debug> + Send + 'static,
tracker: &TaskTracker,
token: CancellationToken,
) {
tracker.spawn(async move {
tokio::select! {
res = task => {
error!("「{}」异常结束,返回结果为:「{:?}」,取消其它仍在执行的任务..", task_name, res);
token.cancel();
},
_ = token.cancelled() => {
info!("「{}」接收到取消信号,终止运行..", task_name);
}
}
});
}
/// Initialize the logging system, print the welcome message, and set up the database connection and global config
async fn init() -> (DatabaseConnection, LogHelper) {
let (tx, _rx) = tokio::sync::broadcast::channel(30);
let log_history = Arc::new(Mutex::new(VecDeque::with_capacity(MAX_HISTORY_LOGS + 1)));
let log_writer = LogHelper::new(tx, log_history.clone());
init_logger(&ARGS.log_level, Some(log_writer.clone()));
info!("欢迎使用 Bili-Sync,当前程序版本:{}", config::version());
info!("项目地址:https://github.com/amtoaer/bili-sync");
let connection = setup_database().await.expect("数据库初始化失败");
info!("数据库初始化完成");
VersionedConfig::init(&connection).await.expect("配置初始化失败");
info!("配置初始化完成");
(connection, log_writer)
}
async fn handle_shutdown(connection: DatabaseConnection, tracker: TaskTracker, token: CancellationToken) {
tokio::select! {
_ = tracker.wait() => {
error!("所有任务均已终止..")
}
_ = terminate() => {
info!("接收到终止信号,开始终止任务..");
token.cancel();
tracker.wait().await;
info!("所有任务均已终止..");
}
}
info!("正在关闭数据库连接..");
match connection.close().await {
Ok(()) => info!("数据库连接已关闭,程序结束"),
Err(e) => error!("关闭数据库连接时遇到错误:{:#},程序异常结束", e),
}
}

@@ -0,0 +1,98 @@
use std::collections::HashSet;
use std::sync::Arc;
use anyhow::{Context, Result};
use axum::extract::Request;
use axum::http::header;
use axum::response::IntoResponse;
use axum::routing::get;
use axum::{Extension, ServiceExt};
use reqwest::StatusCode;
use rust_embed_for_web::{EmbedableFile, RustEmbed};
use sea_orm::DatabaseConnection;
use crate::api::{LogHelper, router};
use crate::bilibili::BiliClient;
use crate::config::VersionedConfig;
#[derive(RustEmbed)]
#[preserve_source = false]
#[folder = "../../web/build"]
struct Asset;
pub async fn http_server(
database_connection: DatabaseConnection,
bili_client: Arc<BiliClient>,
log_writer: LogHelper,
) -> Result<()> {
let app = router()
.fallback_service(get(frontend_files).head(frontend_files))
.layer(Extension(database_connection))
.layer(Extension(bili_client))
.layer(Extension(log_writer));
let config = VersionedConfig::get().load_full();
let listener = tokio::net::TcpListener::bind(&config.bind_address)
.await
.context("bind address failed")?;
info!("开始运行管理页:http://{}", config.bind_address);
Ok(axum::serve(listener, ServiceExt::<Request>::into_make_service(app)).await?)
}
async fn frontend_files(request: Request) -> impl IntoResponse {
let mut path = request.uri().path().trim_start_matches('/');
if path.is_empty() || Asset::get(path).is_none() {
path = "index.html";
}
let Some(content) = Asset::get(path) else {
return (StatusCode::NOT_FOUND, "404 Not Found").into_response();
};
let mime_type = content.mime_type();
let content_type = mime_type.as_deref().unwrap_or("application/octet-stream");
let default_headers = [
(header::CONTENT_TYPE, content_type),
(header::CACHE_CONTROL, "no-cache"),
(header::ETAG, &content.hash()),
];
if let Some(if_none_match) = request.headers().get(header::IF_NONE_MATCH)
&& let Ok(client_etag) = if_none_match.to_str()
&& client_etag == content.hash()
{
return (StatusCode::NOT_MODIFIED, default_headers).into_response();
}
if request.method() == axum::http::Method::HEAD {
return (StatusCode::OK, default_headers).into_response();
}
if cfg!(debug_assertions) {
// safety: `RustEmbed` returns uncompressed files directly from the filesystem in debug mode
return (StatusCode::OK, default_headers, content.data().unwrap()).into_response();
}
let accepted_encodings = request
.headers()
.get(header::ACCEPT_ENCODING)
.and_then(|v| v.to_str().ok())
.map(|s| s.split(',').map(str::trim).collect::<HashSet<_>>())
.unwrap_or_default();
for (encoding, data) in [("br", content.data_br()), ("gzip", content.data_gzip())] {
if accepted_encodings.contains(encoding)
&& let Some(data) = data
{
return (
StatusCode::OK,
[
(header::CONTENT_TYPE, content_type),
(header::CACHE_CONTROL, "no-cache"),
(header::ETAG, &content.hash()),
(header::CONTENT_ENCODING, encoding),
],
data,
)
.into_response();
}
}
(
StatusCode::NOT_ACCEPTABLE,
"Client must support gzip or brotli compression",
)
.into_response()
}
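The content-negotiation step above reduces to parsing `Accept-Encoding` into a set and probing it in preference order. A std-only sketch (helper names are illustrative, not from the source):

```rust
use std::collections::HashSet;

// Split a raw Accept-Encoding header value into a trimmed set,
// as frontend_files does before checking for "br" and "gzip".
fn accepted_encodings(header: Option<&str>) -> HashSet<&str> {
    header
        .map(|s| s.split(',').map(str::trim).collect())
        .unwrap_or_default()
}

// Probe the supported encodings in preference order: brotli first,
// then gzip; None means the client accepts neither.
fn pick_encoding(header: Option<&str>) -> Option<&'static str> {
    let accepted = accepted_encodings(header);
    ["br", "gzip"].into_iter().find(|e| accepted.contains(e))
}
```

Note this sketch ignores quality values (`gzip;q=0.5`), which the real handler also does not parse.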

@@ -0,0 +1,5 @@
mod http_server;
mod video_downloader;
pub use http_server::http_server;
pub use video_downloader::video_downloader;

@@ -0,0 +1,62 @@
use std::sync::Arc;
use sea_orm::DatabaseConnection;
use tokio::time;
use crate::adapter::VideoSource;
use crate::bilibili::{self, BiliClient};
use crate::config::VersionedConfig;
use crate::utils::model::get_enabled_video_sources;
use crate::utils::task_notifier::TASK_STATUS_NOTIFIER;
use crate::workflow::process_video_source;
/// Start the periodic video-download task
pub async fn video_downloader(connection: DatabaseConnection, bili_client: Arc<BiliClient>) {
let mut anchor = chrono::Local::now().date_naive();
loop {
info!("开始执行本轮视频下载任务..");
let _lock = TASK_STATUS_NOTIFIER.start_running().await;
let config = VersionedConfig::get().load_full();
'inner: {
if let Err(e) = config.check() {
error!("配置检查失败,跳过本轮执行:\n{:#}", e);
break 'inner;
}
match bili_client.wbi_img().await.map(|wbi_img| wbi_img.into()) {
Ok(Some(mixin_key)) => bilibili::set_global_mixin_key(mixin_key),
Ok(_) => {
error!("解析 mixin key 失败,等待下一轮执行");
break 'inner;
}
Err(e) => {
error!("获取 mixin key 遇到错误:{:#},等待下一轮执行", e);
break 'inner;
}
};
if anchor != chrono::Local::now().date_naive() {
if let Err(e) = bili_client.check_refresh(&connection).await {
error!("检查刷新 Credential 遇到错误:{:#},等待下一轮执行", e);
break 'inner;
}
anchor = chrono::Local::now().date_naive();
}
let Ok(video_sources) = get_enabled_video_sources(&connection).await else {
error!("获取视频源列表失败,等待下一轮执行");
break 'inner;
};
if video_sources.is_empty() {
info!("没有可用的视频源,等待下一轮执行");
break 'inner;
}
for video_source in video_sources {
let display_name = video_source.display_name();
if let Err(e) = process_video_source(video_source, &bili_client, &connection).await {
error!("处理 {} 时遇到错误:{:#},等待下一轮执行", display_name, e);
}
}
info!("本轮任务执行完毕,等待下一轮执行");
}
TASK_STATUS_NOTIFIER.finish_running(_lock);
time::sleep(time::Duration::from_secs(config.interval)).await;
}
}

@@ -0,0 +1,179 @@
use chrono::{DateTime, NaiveDateTime, Utc};
use sea_orm::ActiveValue::{NotSet, Set};
use sea_orm::IntoActiveModel;
use crate::bilibili::{PageInfo, VideoInfo};
impl VideoInfo {
/// When checking for video updates, this converts a VideoInfo into a minimal ActiveModel; only basic fields are filled here and are later overwritten with details
pub fn into_simple_model(self) -> bili_sync_entity::video::ActiveModel {
let default = bili_sync_entity::video::ActiveModel {
id: NotSet,
created_at: NotSet,
// ActiveModel::default() is not used here so that the other fields get default values
..bili_sync_entity::video::Model::default().into_active_model()
};
match self {
VideoInfo::Collection {
bvid,
cover,
ctime,
pubtime,
} => bili_sync_entity::video::ActiveModel {
bvid: Set(bvid),
cover: Set(cover),
ctime: Set(ctime.naive_utc()),
pubtime: Set(pubtime.naive_utc()),
category: Set(2), // Items in a video collection are always videos
valid: Set(true),
..default
},
VideoInfo::Favorite {
title,
vtype,
bvid,
intro,
cover,
upper,
ctime,
fav_time,
pubtime,
attr,
} => bili_sync_entity::video::ActiveModel {
bvid: Set(bvid),
name: Set(title),
category: Set(vtype),
intro: Set(intro),
cover: Set(cover),
ctime: Set(ctime.naive_utc()),
pubtime: Set(pubtime.naive_utc()),
favtime: Set(fav_time.naive_utc()),
download_status: Set(0),
valid: Set(attr == 0),
upper_id: Set(upper.mid),
upper_name: Set(upper.name),
upper_face: Set(upper.face),
..default
},
VideoInfo::WatchLater {
title,
bvid,
intro,
cover,
upper,
ctime,
fav_time,
pubtime,
state,
} => bili_sync_entity::video::ActiveModel {
bvid: Set(bvid),
name: Set(title),
category: Set(2), // Items in "watch later" are always videos
intro: Set(intro),
cover: Set(cover),
ctime: Set(ctime.naive_utc()),
pubtime: Set(pubtime.naive_utc()),
favtime: Set(fav_time.naive_utc()),
download_status: Set(0),
valid: Set(state == 0),
upper_id: Set(upper.mid),
upper_name: Set(upper.name),
upper_face: Set(upper.face),
..default
},
VideoInfo::Submission {
title,
bvid,
intro,
cover,
ctime,
} => bili_sync_entity::video::ActiveModel {
bvid: Set(bvid),
name: Set(title),
intro: Set(intro),
cover: Set(cover),
ctime: Set(ctime.naive_utc()),
category: Set(2), // Submitted items are always videos
valid: Set(true),
..default
},
_ => unreachable!(),
}
}
/// Called when filling in video details; attaches the details onto the existing Model.
/// Specially, if favtime was recorded when checking for updates it is kept as-is; otherwise it is filled with pubtime
pub fn into_detail_model(self, base_model: bili_sync_entity::video::Model) -> bili_sync_entity::video::ActiveModel {
match self {
VideoInfo::Detail {
title,
bvid,
intro,
cover,
upper,
ctime,
pubtime,
state,
..
} => bili_sync_entity::video::ActiveModel {
bvid: Set(bvid),
name: Set(title),
category: Set(2),
intro: Set(intro),
cover: Set(cover),
ctime: Set(ctime.naive_utc()),
pubtime: Set(pubtime.naive_utc()),
favtime: if base_model.favtime != NaiveDateTime::default() {
Set(base_model.favtime) // favtime was set earlier; keep the old value (equivalent to unset, but set explicitly to support later rule matching)
} else {
Set(pubtime.naive_utc()) // favtime was never set; fill it with pubtime
},
download_status: Set(0),
valid: Set(state == 0),
upper_id: Set(upper.mid),
upper_name: Set(upper.name),
upper_face: Set(upper.face),
..base_model.into_active_model()
},
_ => unreachable!(),
}
}
/// Returns the video's release time, used for time-based filtering when checking for new videos
pub fn release_datetime(&self) -> &DateTime<Utc> {
match self {
VideoInfo::Collection { pubtime: time, .. }
| VideoInfo::Favorite { fav_time: time, .. }
| VideoInfo::WatchLater { fav_time: time, .. }
| VideoInfo::Submission { ctime: time, .. } => time,
_ => unreachable!(),
}
}
}
impl PageInfo {
pub fn into_active_model(self, video_model_id: i32) -> bili_sync_entity::page::ActiveModel {
let (width, height) = match &self.dimension {
Some(d) => {
if d.rotate == 0 {
(Some(d.width), Some(d.height))
} else {
(Some(d.height), Some(d.width))
}
}
None => (None, None),
};
bili_sync_entity::page::ActiveModel {
video_id: Set(video_model_id),
cid: Set(self.cid),
pid: Set(self.page),
name: Set(self.name),
width: Set(width),
height: Set(height),
duration: Set(self.duration),
image: Set(self.first_frame),
download_status: Set(0),
..Default::default()
}
}
}
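The rotate-aware branch in `into_active_model` above decides which dimensions get stored. As a std-only sketch of just that logic (the `Dimension` struct here is a simplified, hypothetical stand-in for the API type):

```rust
// Simplified stand-in for the API's dimension payload (hypothetical).
struct Dimension {
    width: u32,
    height: u32,
    rotate: u32,
}

// Mirrors the match in PageInfo::into_active_model: when rotate != 0
// the frame is rotated, so width and height are swapped before storing.
fn oriented_size(d: Option<&Dimension>) -> (Option<u32>, Option<u32>) {
    match d {
        Some(d) if d.rotate == 0 => (Some(d.width), Some(d.height)),
        Some(d) => (Some(d.height), Some(d.width)),
        None => (None, None),
    }
}

fn main() {
    // A rotated 1080x1920 video is stored as 1920x1080.
    let portrait = Dimension { width: 1080, height: 1920, rotate: 1 };
    assert_eq!(oriented_size(Some(&portrait)), (Some(1920), Some(1080)));
    assert_eq!(oriented_size(None), (None, None));
}
```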


@@ -0,0 +1,61 @@
macro_rules! regex {
($re:literal $(,)?) => {{
static RE: once_cell::sync::OnceCell<regex::Regex> = once_cell::sync::OnceCell::new();
RE.get_or_init(|| regex::Regex::new($re).expect("invalid regex"))
}};
}
pub fn filenamify<S: AsRef<str>>(input: S) -> String {
let reserved = regex!("[<>:\"/\\\\|?*\u{0000}-\u{001F}\u{007F}\u{0080}-\u{009F}]+");
let windows_reserved = regex!("^(con|prn|aux|nul|com\\d|lpt\\d)$");
let outer_periods = regex!("^\\.+|\\.+$");
let replacement = "_";
let input = reserved.replace_all(input.as_ref(), replacement);
let input = outer_periods.replace_all(input.as_ref(), replacement);
let mut result = input.into_owned();
if windows_reserved.is_match(result.as_str()) {
result.push_str(replacement);
}
result
}
#[cfg(test)]
mod tests {
use super::filenamify;
#[test]
fn test_filenamify() {
assert_eq!(filenamify("foo/bar"), "foo_bar");
assert_eq!(filenamify("foo//bar"), "foo_bar");
assert_eq!(filenamify("//foo//bar//"), "_foo_bar_");
assert_eq!(filenamify("foo\\bar"), "foo_bar");
assert_eq!(filenamify("foo\\\\\\bar"), "foo_bar");
assert_eq!(filenamify(r"foo\\bar"), "foo_bar");
assert_eq!(filenamify(r"foo\\\\\\bar"), "foo_bar");
assert_eq!(filenamify("////foo////bar////"), "_foo_bar_");
assert_eq!(filenamify("foo\u{0000}bar"), "foo_bar");
assert_eq!(filenamify("\"foo<>bar*"), "_foo_bar_");
assert_eq!(filenamify("."), "_");
assert_eq!(filenamify(".."), "_");
assert_eq!(filenamify("./"), "__");
assert_eq!(filenamify("../"), "__");
assert_eq!(filenamify("../../foo/bar"), "__.._foo_bar");
assert_eq!(filenamify("foo.bar."), "foo.bar_");
assert_eq!(filenamify("foo.bar.."), "foo.bar_");
assert_eq!(filenamify("foo.bar..."), "foo.bar_");
assert_eq!(filenamify("con"), "con_");
assert_eq!(filenamify("com1"), "com1_");
assert_eq!(filenamify(":nul|"), "_nul_");
assert_eq!(filenamify("foo/bar/nul"), "foo_bar_nul");
assert_eq!(filenamify("file:///file.tar.gz"), "file_file.tar.gz");
assert_eq!(filenamify("http://www.google.com"), "http_www.google.com");
assert_eq!(
filenamify("https://www.youtube.com/watch?v=dQw4w9WgXcQ"),
"https_www.youtube.com_watch_v=dQw4w9WgXcQ"
);
}
}
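For illustration, the reserved-character pass of `filenamify` can be approximated without the `regex` crate. This std-only sketch covers only the reserved-character class (not the Windows device names or the leading/trailing periods), collapsing runs into one replacement just like the `+` quantifier in the regex version:

```rust
// std-only approximation of filenamify's reserved-character pass:
// runs of reserved characters collapse into a single '_', matching
// the `+` quantifier in the regex-based implementation above.
fn sanitize(input: &str) -> String {
    let reserved = |c: char| {
        matches!(c, '<' | '>' | ':' | '"' | '/' | '\\' | '|' | '?' | '*')
            || c <= '\u{001F}'
            || ('\u{007F}'..='\u{009F}').contains(&c)
    };
    let mut out = String::with_capacity(input.len());
    let mut in_run = false;
    for c in input.chars() {
        if reserved(c) {
            // Only emit one replacement per run of reserved characters.
            if !in_run {
                out.push('_');
                in_run = true;
            }
        } else {
            out.push(c);
            in_run = false;
        }
    }
    out
}

fn main() {
    assert_eq!(sanitize("foo//bar"), "foo_bar");
    assert_eq!(sanitize("\"foo<>bar*"), "_foo_bar_");
    assert_eq!(sanitize("foo\u{0000}bar"), "foo_bar");
}
```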


@@ -0,0 +1,32 @@
use serde_json::json;
use crate::config::VersionedConfig;
pub fn video_format_args(video_model: &bili_sync_entity::video::Model) -> serde_json::Value {
let config = VersionedConfig::get().load();
json!({
"bvid": &video_model.bvid,
"title": &video_model.name,
"upper_name": &video_model.upper_name,
"upper_mid": &video_model.upper_id,
"pubtime": &video_model.pubtime.and_utc().format(&config.time_format).to_string(),
"fav_time": &video_model.favtime.and_utc().format(&config.time_format).to_string(),
})
}
pub fn page_format_args(
video_model: &bili_sync_entity::video::Model,
page_model: &bili_sync_entity::page::Model,
) -> serde_json::Value {
let config = VersionedConfig::get().load();
json!({
"bvid": &video_model.bvid,
"title": &video_model.name,
"upper_name": &video_model.upper_name,
"upper_mid": &video_model.upper_id,
"ptitle": &page_model.name,
"pid": page_model.pid,
"pubtime": video_model.pubtime.and_utc().format(&config.time_format).to_string(),
"fav_time": video_model.favtime.and_utc().format(&config.time_format).to_string(),
})
}


@@ -0,0 +1,42 @@
pub mod convert;
pub mod filenamify;
pub mod format_arg;
pub mod model;
pub mod nfo;
pub mod rule;
pub mod signal;
pub mod status;
pub mod task_notifier;
pub mod validation;
use tracing_subscriber::fmt;
use tracing_subscriber::layer::SubscriberExt;
use tracing_subscriber::util::SubscriberInitExt;
use crate::api::LogHelper;
pub fn init_logger(log_level: &str, log_writer: Option<LogHelper>) {
let log = tracing_subscriber::fmt::Subscriber::builder()
.compact()
.with_env_filter(tracing_subscriber::EnvFilter::builder().parse_lossy(log_level))
.with_target(false)
.with_timer(tracing_subscriber::fmt::time::ChronoLocal::new(
"%b %d %H:%M:%S".to_owned(),
))
.finish();
if let Some(writer) = log_writer {
log.with(
fmt::layer()
.with_ansi(false)
.with_timer(tracing_subscriber::fmt::time::ChronoLocal::new(
"%b %d %H:%M:%S".to_owned(),
))
.json()
.flatten_event(true)
.with_writer(writer),
)
.try_init()
.expect("failed to initialize logger");
} else {
log.try_init().expect("failed to initialize logger");
}
}


@@ -0,0 +1,234 @@
use anyhow::{Context, Result, anyhow};
use bili_sync_entity::*;
use sea_orm::ActiveValue::Set;
use sea_orm::entity::prelude::*;
use sea_orm::sea_query::{OnConflict, SimpleExpr};
use sea_orm::{DatabaseTransaction, TransactionTrait};
use crate::adapter::{VideoSource, VideoSourceEnum};
use crate::bilibili::VideoInfo;
use crate::config::{Config, LegacyConfig};
use crate::utils::status::STATUS_COMPLETED;
/// Select videos whose details have not been filled in yet
pub async fn filter_unfilled_videos(
additional_expr: SimpleExpr,
conn: &DatabaseConnection,
) -> Result<Vec<video::Model>> {
video::Entity::find()
.filter(
video::Column::Valid
.eq(true)
.and(video::Column::DownloadStatus.eq(0))
.and(video::Column::Category.eq(2))
.and(video::Column::SinglePage.is_null())
.and(additional_expr),
)
.all(conn)
.await
.context("filter unfilled videos failed")
}
/// Select videos and their pages that have not finished processing
pub async fn filter_unhandled_video_pages(
additional_expr: SimpleExpr,
connection: &DatabaseConnection,
) -> Result<Vec<(video::Model, Vec<page::Model>)>> {
video::Entity::find()
.filter(
video::Column::Valid
.eq(true)
.and(video::Column::DownloadStatus.lt(STATUS_COMPLETED))
.and(video::Column::Category.eq(2))
.and(video::Column::SinglePage.is_not_null())
.and(video::Column::ShouldDownload.eq(true))
.and(additional_expr),
)
.find_with_related(page::Entity)
.all(connection)
.await
.context("filter unhandled video pages failed")
}
/// Try to create video models, ignoring conflicts
pub async fn create_videos(
videos_info: Vec<VideoInfo>,
video_source: &VideoSourceEnum,
connection: &DatabaseConnection,
) -> Result<()> {
let video_models = videos_info
.into_iter()
.map(|v| {
let mut model = v.into_simple_model();
video_source.set_relation_id(&mut model);
model
})
.collect::<Vec<_>>();
video::Entity::insert_many(video_models)
// Ideally this would name the index to conflict on, but sea-orm's API only seems to accept column names, not index names; luckily leaving it empty achieves the same effect
.on_conflict(OnConflict::new().do_nothing().to_owned())
.do_nothing()
.exec(connection)
.await?;
Ok(())
}
/// Try to create page models, ignoring conflicts
pub async fn create_pages(pages_model: Vec<page::ActiveModel>, connection: &DatabaseTransaction) -> Result<()> {
for page_chunk in pages_model.chunks(200) {
page::Entity::insert_many(page_chunk.to_vec())
.on_conflict(
OnConflict::columns([page::Column::VideoId, page::Column::Pid])
.do_nothing()
.to_owned(),
)
.do_nothing()
.exec(connection)
.await?;
}
Ok(())
}
/// Update the download status of video models
pub async fn update_videos_model(videos: Vec<video::ActiveModel>, connection: &DatabaseConnection) -> Result<()> {
video::Entity::insert_many(videos)
.on_conflict(
OnConflict::column(video::Column::Id)
.update_columns([video::Column::DownloadStatus, video::Column::Path])
.to_owned(),
)
.exec(connection)
.await?;
Ok(())
}
/// Update the download status of page models
pub async fn update_pages_model(pages: Vec<page::ActiveModel>, connection: &DatabaseConnection) -> Result<()> {
let query = page::Entity::insert_many(pages).on_conflict(
OnConflict::column(page::Column::Id)
.update_columns([page::Column::DownloadStatus, page::Column::Path])
.to_owned(),
);
query.exec(connection).await?;
Ok(())
}
/// Fetch all enabled video sources
pub async fn get_enabled_video_sources(connection: &DatabaseConnection) -> Result<Vec<VideoSourceEnum>> {
let (favorite, watch_later, submission, collection) = tokio::try_join!(
favorite::Entity::find()
.filter(favorite::Column::Enabled.eq(true))
.all(connection),
watch_later::Entity::find()
.filter(watch_later::Column::Enabled.eq(true))
.all(connection),
submission::Entity::find()
.filter(submission::Column::Enabled.eq(true))
.all(connection),
collection::Entity::find()
.filter(collection::Column::Enabled.eq(true))
.all(connection),
)?;
let mut sources = Vec::with_capacity(favorite.len() + watch_later.len() + submission.len() + collection.len());
sources.extend(favorite.into_iter().map(VideoSourceEnum::from));
sources.extend(watch_later.into_iter().map(VideoSourceEnum::from));
sources.extend(submission.into_iter().map(VideoSourceEnum::from));
sources.extend(collection.into_iter().map(VideoSourceEnum::from));
Ok(sources)
}
/// Load the config from the database
pub async fn load_db_config(connection: &DatabaseConnection) -> Result<Option<Result<Config>>> {
Ok(bili_sync_entity::config::Entity::find_by_id(1)
.one(connection)
.await?
.map(|model| {
serde_json::from_str(&model.data).map_err(|e| anyhow!("Failed to deserialize config data: {}", e))
}))
}
/// Save the config to the database
pub async fn save_db_config(config: &Config, connection: &DatabaseConnection) -> Result<()> {
let data = serde_json::to_string(config).context("Failed to serialize config data")?;
let model = bili_sync_entity::config::ActiveModel {
id: Set(1),
data: Set(data),
..Default::default()
};
bili_sync_entity::config::Entity::insert(model)
.on_conflict(
OnConflict::column(bili_sync_entity::config::Column::Id)
.update_column(bili_sync_entity::config::Column::Data)
.to_owned(),
)
.exec(connection)
.await
.context("Failed to save config to database")?;
Ok(())
}
/// Migrate legacy config (i.e. mark all associated sources as enabled)
pub async fn migrate_legacy_config(config: &LegacyConfig, connection: &DatabaseConnection) -> Result<()> {
let transaction = connection.begin().await.context("Failed to begin transaction")?;
tokio::try_join!(
migrate_favorite(config, &transaction),
migrate_watch_later(config, &transaction),
migrate_submission(config, &transaction),
migrate_collection(config, &transaction)
)?;
transaction.commit().await.context("Failed to commit transaction")?;
Ok(())
}
async fn migrate_favorite(config: &LegacyConfig, connection: &DatabaseTransaction) -> Result<()> {
favorite::Entity::update_many()
.filter(favorite::Column::FId.is_in(config.favorite_list.keys().collect::<Vec<_>>()))
.col_expr(favorite::Column::Enabled, Expr::value(true))
.exec(connection)
.await
.context("Failed to migrate favorite config")?;
Ok(())
}
async fn migrate_watch_later(config: &LegacyConfig, connection: &DatabaseTransaction) -> Result<()> {
if config.watch_later.enabled {
watch_later::Entity::update_many()
.col_expr(watch_later::Column::Enabled, Expr::value(true))
.exec(connection)
.await
.context("Failed to migrate watch later config")?;
}
Ok(())
}
async fn migrate_submission(config: &LegacyConfig, connection: &DatabaseTransaction) -> Result<()> {
submission::Entity::update_many()
.filter(submission::Column::UpperId.is_in(config.submission_list.keys().collect::<Vec<_>>()))
.col_expr(submission::Column::Enabled, Expr::value(true))
.exec(connection)
.await
.context("Failed to migrate submission config")?;
Ok(())
}
async fn migrate_collection(config: &LegacyConfig, connection: &DatabaseTransaction) -> Result<()> {
let tuples: Vec<(i64, i64, i32)> = config
.collection_list
.keys()
.filter_map(|key| Some((key.sid.parse().ok()?, key.mid.parse().ok()?, key.collection_type.into())))
.collect();
collection::Entity::update_many()
.filter(
Expr::tuple([
Expr::column(collection::Column::SId),
Expr::column(collection::Column::MId),
Expr::column(collection::Column::Type),
])
.in_tuples(tuples),
)
.col_expr(collection::Column::Enabled, Expr::value(true))
.exec(connection)
.await
.context("Failed to migrate collection config")?;
Ok(())
}


@@ -0,0 +1,384 @@
use anyhow::Result;
use bili_sync_entity::*;
use chrono::NaiveDateTime;
use quick_xml::Error;
use quick_xml::events::{BytesCData, BytesText};
use quick_xml::writer::Writer;
use tokio::io::{AsyncWriteExt, BufWriter};
use crate::config::{NFOTimeType, VersionedConfig};
#[allow(clippy::upper_case_acronyms)]
pub enum NFO<'a> {
Movie(Movie<'a>),
TVShow(TVShow<'a>),
Upper(Upper),
Episode(Episode<'a>),
}
pub struct Movie<'a> {
pub name: &'a str,
pub intro: &'a str,
pub bvid: &'a str,
pub upper_id: i64,
pub upper_name: &'a str,
pub aired: NaiveDateTime,
pub tags: Option<Vec<String>>,
}
pub struct TVShow<'a> {
pub name: &'a str,
pub intro: &'a str,
pub bvid: &'a str,
pub upper_id: i64,
pub upper_name: &'a str,
pub aired: NaiveDateTime,
pub tags: Option<Vec<String>>,
}
pub struct Upper {
pub upper_id: String,
pub pubtime: NaiveDateTime,
}
pub struct Episode<'a> {
pub name: &'a str,
pub pid: String,
}
impl NFO<'_> {
pub async fn generate_nfo(self) -> Result<String> {
let mut buffer = r#"<?xml version="1.0" encoding="utf-8" standalone="yes"?>
"#
.as_bytes()
.to_vec();
let mut tokio_buffer = BufWriter::new(&mut buffer);
let writer = Writer::new_with_indent(&mut tokio_buffer, b' ', 4);
match self {
NFO::Movie(movie) => {
Self::write_movie_nfo(writer, movie).await?;
}
NFO::TVShow(tvshow) => {
Self::write_tvshow_nfo(writer, tvshow).await?;
}
NFO::Upper(upper) => {
Self::write_upper_nfo(writer, upper).await?;
}
NFO::Episode(episode) => {
Self::write_episode_nfo(writer, episode).await?;
}
}
tokio_buffer.flush().await?;
Ok(String::from_utf8(buffer)?)
}
async fn write_movie_nfo(mut writer: Writer<&mut BufWriter<&mut Vec<u8>>>, movie: Movie<'_>) -> Result<()> {
writer
.create_element("movie")
.write_inner_content_async::<_, _, Error>(|writer| async move {
writer
.create_element("plot")
.write_cdata_content_async(BytesCData::new(Self::format_plot(movie.bvid, movie.intro)))
.await?;
writer.create_element("outline").write_empty_async().await?;
writer
.create_element("title")
.write_text_content_async(BytesText::new(movie.name))
.await?;
writer
.create_element("actor")
.write_inner_content_async::<_, _, Error>(|writer| async move {
writer
.create_element("name")
.write_text_content_async(BytesText::new(&movie.upper_id.to_string()))
.await?;
writer
.create_element("role")
.write_text_content_async(BytesText::new(movie.upper_name))
.await?;
Ok(writer)
})
.await?;
writer
.create_element("year")
.write_text_content_async(BytesText::new(&movie.aired.format("%Y").to_string()))
.await?;
if let Some(tags) = movie.tags {
for tag in tags {
writer
.create_element("genre")
.write_text_content_async(BytesText::new(&tag))
.await?;
}
}
writer
.create_element("uniqueid")
.with_attribute(("type", "bilibili"))
.write_text_content_async(BytesText::new(movie.bvid))
.await?;
writer
.create_element("aired")
.write_text_content_async(BytesText::new(&movie.aired.format("%Y-%m-%d").to_string()))
.await?;
Ok(writer)
})
.await?;
Ok(())
}
async fn write_tvshow_nfo(mut writer: Writer<&mut BufWriter<&mut Vec<u8>>>, tvshow: TVShow<'_>) -> Result<()> {
writer
.create_element("tvshow")
.write_inner_content_async::<_, _, Error>(|writer| async move {
writer
.create_element("plot")
.write_cdata_content_async(BytesCData::new(Self::format_plot(tvshow.bvid, tvshow.intro)))
.await?;
writer.create_element("outline").write_empty_async().await?;
writer
.create_element("title")
.write_text_content_async(BytesText::new(tvshow.name))
.await?;
writer
.create_element("actor")
.write_inner_content_async::<_, _, Error>(|writer| async move {
writer
.create_element("name")
.write_text_content_async(BytesText::new(&tvshow.upper_id.to_string()))
.await?;
writer
.create_element("role")
.write_text_content_async(BytesText::new(tvshow.upper_name))
.await?;
Ok(writer)
})
.await?;
writer
.create_element("year")
.write_text_content_async(BytesText::new(&tvshow.aired.format("%Y").to_string()))
.await?;
if let Some(tags) = tvshow.tags {
for tag in tags {
writer
.create_element("genre")
.write_text_content_async(BytesText::new(&tag))
.await?;
}
}
writer
.create_element("uniqueid")
.with_attribute(("type", "bilibili"))
.write_text_content_async(BytesText::new(tvshow.bvid))
.await?;
writer
.create_element("aired")
.write_text_content_async(BytesText::new(&tvshow.aired.format("%Y-%m-%d").to_string()))
.await?;
Ok(writer)
})
.await?;
Ok(())
}
async fn write_upper_nfo(mut writer: Writer<&mut BufWriter<&mut Vec<u8>>>, upper: Upper) -> Result<()> {
writer
.create_element("person")
.write_inner_content_async::<_, _, Error>(|writer| async move {
writer.create_element("plot").write_empty_async().await?;
writer.create_element("outline").write_empty_async().await?;
writer
.create_element("lockdata")
.write_text_content_async(BytesText::new("false"))
.await?;
writer
.create_element("dateadded")
.write_text_content_async(BytesText::new(&upper.pubtime.format("%Y-%m-%d %H:%M:%S").to_string()))
.await?;
writer
.create_element("title")
.write_text_content_async(BytesText::new(&upper.upper_id))
.await?;
writer
.create_element("sorttitle")
.write_text_content_async(BytesText::new(&upper.upper_id))
.await?;
Ok(writer)
})
.await?;
Ok(())
}
async fn write_episode_nfo(mut writer: Writer<&mut BufWriter<&mut Vec<u8>>>, episode: Episode<'_>) -> Result<()> {
writer
.create_element("episodedetails")
.write_inner_content_async::<_, _, Error>(|writer| async move {
writer.create_element("plot").write_empty_async().await?;
writer.create_element("outline").write_empty_async().await?;
writer
.create_element("title")
.write_text_content_async(BytesText::new(episode.name))
.await?;
writer
.create_element("season")
.write_text_content_async(BytesText::new("1"))
.await?;
writer
.create_element("episode")
.write_text_content_async(BytesText::new(&episode.pid))
.await?;
Ok(writer)
})
.await?;
Ok(())
}
#[inline]
fn format_plot(bvid: &str, intro: &str) -> String {
format!(
r#"原始视频:<a href="https://www.bilibili.com/video/{}/">{}</a><br/><br/>{}"#,
bvid, bvid, intro,
)
}
}
#[cfg(test)]
mod tests {
use super::*;
#[tokio::test]
async fn test_generate_nfo() {
let video = video::Model {
intro: "intro".to_string(),
name: "name".to_string(),
upper_id: 1,
upper_name: "upper_name".to_string(),
favtime: chrono::NaiveDateTime::new(
chrono::NaiveDate::from_ymd_opt(2022, 2, 2).unwrap(),
chrono::NaiveTime::from_hms_opt(2, 2, 2).unwrap(),
),
pubtime: chrono::NaiveDateTime::new(
chrono::NaiveDate::from_ymd_opt(2033, 3, 3).unwrap(),
chrono::NaiveTime::from_hms_opt(3, 3, 3).unwrap(),
),
bvid: "BV1nWcSeeEkV".to_string(),
tags: Some(vec!["tag1".to_owned(), "tag2".to_owned()].into()),
..Default::default()
};
assert_eq!(
NFO::Movie((&video).into()).generate_nfo().await.unwrap(),
r#"<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<movie>
<plot><![CDATA[原始视频:<a href="https://www.bilibili.com/video/BV1nWcSeeEkV/">BV1nWcSeeEkV</a><br/><br/>intro]]></plot>
<outline/>
<title>name</title>
<actor>
<name>1</name>
<role>upper_name</role>
</actor>
<year>2022</year>
<genre>tag1</genre>
<genre>tag2</genre>
<uniqueid type="bilibili">BV1nWcSeeEkV</uniqueid>
<aired>2022-02-02</aired>
</movie>"#,
);
assert_eq!(
NFO::TVShow((&video).into()).generate_nfo().await.unwrap(),
r#"<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<tvshow>
<plot><![CDATA[原始视频:<a href="https://www.bilibili.com/video/BV1nWcSeeEkV/">BV1nWcSeeEkV</a><br/><br/>intro]]></plot>
<outline/>
<title>name</title>
<actor>
<name>1</name>
<role>upper_name</role>
</actor>
<year>2022</year>
<genre>tag1</genre>
<genre>tag2</genre>
<uniqueid type="bilibili">BV1nWcSeeEkV</uniqueid>
<aired>2022-02-02</aired>
</tvshow>"#,
);
assert_eq!(
NFO::Upper((&video).into()).generate_nfo().await.unwrap(),
r#"<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<person>
<plot/>
<outline/>
<lockdata>false</lockdata>
<dateadded>2033-03-03 03:03:03</dateadded>
<title>1</title>
<sorttitle>1</sorttitle>
</person>"#,
);
let page = page::Model {
name: "name".to_string(),
pid: 3,
..Default::default()
};
assert_eq!(
NFO::Episode((&page).into()).generate_nfo().await.unwrap(),
r#"<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<episodedetails>
<plot/>
<outline/>
<title>name</title>
<season>1</season>
<episode>3</episode>
</episodedetails>"#,
);
}
}
impl<'a> From<&'a video::Model> for Movie<'a> {
fn from(video: &'a video::Model) -> Self {
Self {
name: &video.name,
intro: &video.intro,
bvid: &video.bvid,
upper_id: video.upper_id,
upper_name: &video.upper_name,
aired: match VersionedConfig::get().load().nfo_time_type {
NFOTimeType::FavTime => video.favtime,
NFOTimeType::PubTime => video.pubtime,
},
tags: video.tags.as_ref().map(|tags| tags.clone().into()),
}
}
}
impl<'a> From<&'a video::Model> for TVShow<'a> {
fn from(video: &'a video::Model) -> Self {
Self {
name: &video.name,
intro: &video.intro,
bvid: &video.bvid,
upper_id: video.upper_id,
upper_name: &video.upper_name,
aired: match VersionedConfig::get().load().nfo_time_type {
NFOTimeType::FavTime => video.favtime,
NFOTimeType::PubTime => video.pubtime,
},
tags: video.tags.as_ref().map(|tags| tags.clone().into()),
}
}
}
impl<'a> From<&'a video::Model> for Upper {
fn from(video: &'a video::Model) -> Self {
Self {
upper_id: video.upper_id.to_string(),
pubtime: video.pubtime,
}
}
}
impl<'a> From<&'a page::Model> for Episode<'a> {
fn from(page: &'a page::Model) -> Self {
Self {
name: &page.name,
pid: page.pid.to_string(),
}
}
}


@@ -0,0 +1,267 @@
use bili_sync_entity::rule::{AndGroup, Condition, Rule, RuleTarget};
use bili_sync_entity::{page, video};
use chrono::{Local, NaiveDateTime};
pub(crate) trait Evaluatable<T> {
fn evaluate(&self, value: T) -> bool;
}
pub(crate) trait FieldEvaluatable {
fn evaluate(&self, video: &video::ActiveModel, pages: &[page::ActiveModel]) -> bool;
fn evaluate_model(&self, video: &video::Model, pages: &[page::Model]) -> bool;
}
impl Evaluatable<&str> for Condition<String> {
fn evaluate(&self, value: &str) -> bool {
match self {
Condition::Equals(expected) => expected == value,
Condition::Contains(substring) => value.contains(substring),
Condition::Prefix(prefix) => value.starts_with(prefix),
Condition::Suffix(suffix) => value.ends_with(suffix),
Condition::MatchesRegex(_, regex) => regex.is_match(value),
_ => false,
}
}
}
impl Evaluatable<usize> for Condition<usize> {
fn evaluate(&self, value: usize) -> bool {
match self {
Condition::Equals(expected) => *expected == value,
Condition::GreaterThan(threshold) => value > *threshold,
Condition::LessThan(threshold) => value < *threshold,
Condition::Between(start, end) => value > *start && value < *end,
_ => false,
}
}
}
impl Evaluatable<&NaiveDateTime> for Condition<NaiveDateTime> {
fn evaluate(&self, value: &NaiveDateTime) -> bool {
match self {
Condition::Equals(expected) => expected == value,
Condition::GreaterThan(threshold) => value > threshold,
Condition::LessThan(threshold) => value < threshold,
Condition::Between(start, end) => value > start && value < end,
_ => false,
}
}
}
impl FieldEvaluatable for RuleTarget {
/// Evaluated after the model is modified; only the unsaved ActiveModel is available at that point, so evaluate against it in place
fn evaluate(&self, video: &video::ActiveModel, pages: &[page::ActiveModel]) -> bool {
match self {
RuleTarget::Title(cond) => video.name.try_as_ref().is_some_and(|title| cond.evaluate(title)),
// Every condition currently runs an any() pass over all tags independently, e.g. Prefix("a") && Suffix("b") means any(tag.Prefix("a")) && any(tag.Suffix("b")), not any(tag.Prefix("a") && tag.Suffix("b"))
// This may not match user expectations, but it should rarely matter; revisit if complex tag filters become common
RuleTarget::Tags(cond) => video
.tags
.try_as_ref()
.and_then(|t| t.as_ref())
.is_some_and(|tags| tags.0.iter().any(|tag| cond.evaluate(tag))),
RuleTarget::FavTime(cond) => video
.favtime
.try_as_ref()
.map(|fav_time| fav_time.and_utc().with_timezone(&Local).naive_local()) // timestamps are stored in the database as UTC; convert to local time before comparing
.is_some_and(|fav_time| cond.evaluate(&fav_time)),
RuleTarget::PubTime(cond) => video
.pubtime
.try_as_ref()
.map(|pub_time| pub_time.and_utc().with_timezone(&Local).naive_local())
.is_some_and(|pub_time| cond.evaluate(&pub_time)),
RuleTarget::PageCount(cond) => cond.evaluate(pages.len()),
RuleTarget::Not(inner) => !inner.evaluate(video, pages),
}
}
/// Manually triggered evaluation of historical videos; the raw Model is available here and used directly
fn evaluate_model(&self, video: &video::Model, pages: &[page::Model]) -> bool {
match self {
RuleTarget::Title(cond) => cond.evaluate(&video.name),
// Every condition currently runs an any() pass over all tags independently, e.g. Prefix("a") && Suffix("b") means any(tag.Prefix("a")) && any(tag.Suffix("b")), not any(tag.Prefix("a") && tag.Suffix("b"))
// This may not match user expectations, but it should rarely matter; revisit if complex tag filters become common
RuleTarget::Tags(cond) => video
.tags
.as_ref()
.is_some_and(|tags| tags.0.iter().any(|tag| cond.evaluate(tag))),
RuleTarget::FavTime(cond) => cond.evaluate(&video.favtime.and_utc().with_timezone(&Local).naive_local()),
RuleTarget::PubTime(cond) => cond.evaluate(&video.pubtime.and_utc().with_timezone(&Local).naive_local()),
RuleTarget::PageCount(cond) => cond.evaluate(pages.len()),
RuleTarget::Not(inner) => !inner.evaluate_model(video, pages),
}
}
}
impl FieldEvaluatable for AndGroup {
fn evaluate(&self, video: &video::ActiveModel, pages: &[page::ActiveModel]) -> bool {
self.iter().all(|target| target.evaluate(video, pages))
}
fn evaluate_model(&self, video: &video::Model, pages: &[page::Model]) -> bool {
self.iter().all(|target| target.evaluate_model(video, pages))
}
}
impl FieldEvaluatable for Rule {
fn evaluate(&self, video: &video::ActiveModel, pages: &[page::ActiveModel]) -> bool {
if self.0.is_empty() {
return true;
}
self.0.iter().any(|group| group.evaluate(video, pages))
}
fn evaluate_model(&self, video: &video::Model, pages: &[page::Model]) -> bool {
if self.0.is_empty() {
return true;
}
self.0.iter().any(|group| group.evaluate_model(video, pages))
}
}
/// For Option<Rule>, a missing rule is treated as passing the evaluation
impl FieldEvaluatable for Option<Rule> {
fn evaluate(&self, video: &video::ActiveModel, pages: &[page::ActiveModel]) -> bool {
self.as_ref().is_none_or(|rule| rule.evaluate(video, pages))
}
fn evaluate_model(&self, video: &video::Model, pages: &[page::Model]) -> bool {
self.as_ref().is_none_or(|rule| rule.evaluate_model(video, pages))
}
}
#[cfg(test)]
mod tests {
use bili_sync_entity::page;
use chrono::NaiveDate;
use sea_orm::ActiveValue::Set;
use super::*;
#[test]
fn test_display() {
let test_cases = vec![
(
Rule(vec![vec![RuleTarget::Title(Condition::Contains("唐氏".to_string()))]]),
"「(标题包含“唐氏”)」",
),
(
Rule(vec![vec![
RuleTarget::Title(Condition::Prefix("街霸".to_string())),
RuleTarget::Tags(Condition::Contains("套路".to_string())),
]]),
"「(标题以“街霸”开头)且(标签包含“套路”)」",
),
(
Rule(vec![
vec![
RuleTarget::Title(Condition::Contains("Rust".to_string())),
RuleTarget::PageCount(Condition::GreaterThan(5)),
],
vec![
RuleTarget::Tags(Condition::Suffix("入门".to_string())),
RuleTarget::PubTime(Condition::GreaterThan(
NaiveDate::from_ymd_opt(2023, 1, 1)
.unwrap()
.and_hms_opt(0, 0, 0)
.unwrap(),
)),
],
]),
"「(标题包含“Rust”)且(视频分页数量大于“5”)」或「(标签以“入门”结尾)且(发布时间大于“2023-01-01 00:00:00”)」",
),
(
Rule(vec![vec![
RuleTarget::Not(Box::new(RuleTarget::Title(Condition::Contains("广告".to_string())))),
RuleTarget::PageCount(Condition::LessThan(10)),
]]),
"「(标题不包含“广告”)且(视频分页数量小于“10”)」",
),
(
Rule(vec![vec![
RuleTarget::FavTime(Condition::Between(
NaiveDate::from_ymd_opt(2023, 6, 1)
.unwrap()
.and_hms_opt(0, 0, 0)
.unwrap(),
NaiveDate::from_ymd_opt(2023, 12, 31)
.unwrap()
.and_hms_opt(23, 59, 59)
.unwrap(),
)),
// autocorrect-disable
RuleTarget::Tags(Condition::MatchesRegex(
"技术|教程".to_string(),
regex::Regex::new("技术|教程").unwrap(),
)),
]]),
"「(收藏时间在“2023-06-01 00:00:00”和“2023-12-31 23:59:59”之间)且(标签匹配“技术|教程”)」",
// autocorrect-enable
),
];
for (rule, expected) in test_cases {
assert_eq!(rule.to_string(), expected);
}
}
#[test]
fn test_evaluate() {
let test_cases = vec![
(
(
video::ActiveModel {
name: Set("骂谁唐氏呢!!!".to_string()),
..Default::default()
},
vec![],
),
Rule(vec![vec![RuleTarget::Title(Condition::Contains("唐氏".to_string()))]]),
true,
),
(
(
video::ActiveModel::default(),
vec![page::ActiveModel::default(); 2],
),
Rule(vec![vec![RuleTarget::PageCount(Condition::Equals(1))]]),
false,
),
(
(
video::ActiveModel {
tags: Set(Some(vec!["原神".to_owned(),"永雏塔菲".to_owned(),"虚拟主播".to_owned()].into())),
..Default::default()
},
vec![],
),
Rule(vec![vec![RuleTarget::Not(Box::new(RuleTarget::Tags(Condition::Equals(
"原神".to_string(),
))))]]),
false,
),
(
(
video::ActiveModel {
name: Set(
"万字怒扒网易《归唐》底裤中国首款大厂买断制单机靠谱吗——全网最全官方非独家幕后关于《归唐》PV 的所有秘密~都在这里了~".to_owned(),
),
..Default::default()
},
vec![],
),
Rule(vec![vec![RuleTarget::Not(Box::new(RuleTarget::Title(Condition::MatchesRegex(
r"^\S+字(解析|怒扒|拆解)".to_owned(),
regex::Regex::new(r"^\S+字(解析|怒扒)").unwrap(),
))))]]),
false,
),
];
for ((video, pages), rule, expected) in test_cases {
assert_eq!(rule.evaluate(&video, &pages), expected);
}
}
}
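As the tests above exercise, a `Rule` is a disjunction of `AndGroup`s: any group whose conditions all hold makes the rule pass, and an empty rule passes everything. That shape can be sketched std-only, with plain predicates standing in for `RuleTarget`:

```rust
// Minimal std-only sketch of the OR-of-AND rule shape: a rule is a
// slice of groups, each group a Vec of predicates. An empty rule
// always passes, mirroring Rule::evaluate above.
type Pred = fn(&str) -> bool;

fn evaluate(rule: &[Vec<Pred>], title: &str) -> bool {
    if rule.is_empty() {
        return true;
    }
    // any() over groups (OR), all() within a group (AND).
    rule.iter().any(|group| group.iter().all(|p| p(title)))
}

fn main() {
    let contains_rust: Pred = |t| t.contains("Rust");
    let long_title: Pred = |t| t.chars().count() > 10;
    // "(contains Rust)" OR "(length > 10)"
    let rule = vec![vec![contains_rust], vec![long_title]];
    assert!(evaluate(&rule, "Rust 入门"));
    assert!(!evaluate(&rule, "hello"));
    assert!(evaluate(&[], "anything")); // empty rule passes
}
```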


@@ -0,0 +1,21 @@
use std::io;
use tokio::signal;
#[cfg(target_family = "windows")]
pub async fn terminate() -> io::Result<()> {
signal::ctrl_c().await
}
/// Ctrl+C sends SIGINT and docker stop sends SIGTERM; both need to be handled
#[cfg(target_family = "unix")]
pub async fn terminate() -> io::Result<()> {
use tokio::select;
let mut term = signal::unix::signal(signal::unix::SignalKind::terminate())?;
let mut int = signal::unix::signal(signal::unix::SignalKind::interrupt())?;
select! {
_ = term.recv() => Ok(()),
_ = int.recv() => Ok(()),
}
}


@@ -0,0 +1,274 @@
use crate::error::ExecutionStatus;
pub static STATUS_NOT_STARTED: u32 = 0b000;
pub(super) static STATUS_MAX_RETRY: u32 = 0b100;
pub static STATUS_OK: u32 = 0b111;
pub static STATUS_COMPLETED: u32 = 1 << 31;
/// Represents the download status; to avoid adding lots of columns, a single u32 is used.
/// Starting from the low bits, each group of three bits holds the status of one subtask.
/// A subtask starts at 0b000 and is incremented on each failure, up to 0b100 (i.e. 4 retries allowed), defined as STATUS_MAX_RETRY.
/// On success a subtask's status is set to 0b111, defined as STATUS_OK.
/// A subtask is considered finished once it either succeeds or reaches the retry limit.
/// When all subtasks are finished, the highest bit is set to 1 to mark the whole download task as completed.
#[derive(Clone, Copy, Default)]
pub struct Status<const N: usize>(u32);
impl<const N: usize> Status<N> {
// Read the completion flag in the highest bit
pub fn get_completed(&self) -> bool {
self.0 >> 31 == 1
}
/// Check in order whether each subtask should still run, returning a bool array
pub fn should_run(&self) -> [bool; N] {
let mut result = [false; N];
for (i, item) in result.iter_mut().enumerate() {
*item = self.check_continue(i);
}
result
}
/// Reset all failed subtasks to 0b000; the return value indicates whether the status changed
pub fn reset_failed(&mut self) -> bool {
let mut changed = false;
for i in 0..N {
let status = self.get_status(i);
if !(status < STATUS_MAX_RETRY || status == STATUS_OK) {
self.set_status(i, STATUS_NOT_STARTED);
changed = true;
}
}
changed
}
/// Reset all failed subtasks to 0b000; the return value indicates whether the status changed.
/// On top of the plain version, the force variant also checks whether any task still needs to run, and if so clears the completed flag.
/// The typical use case is resetting historical videos after a new task type is introduced, so they can run the newly added task.
pub fn force_reset_failed(&mut self) -> bool {
let mut changed = self.reset_failed();
// In theory the changed flag above is sufficient: the completed bit only changes when a subtask status changes, so no subtask change means no completed change.
// But consider the special case where a new version introduces a new subtask: a subtask may be unexecuted while the completed bit is still true.
// We could globally reset the completed bit in a migration, but that would affect too much.
// The extra check below covers this case, fixing the completed bit during user-triggered reset_failed calls.
if self.should_run().into_iter().any(|x| x) {
changed |= self.get_completed();
self.set_completed(false);
}
changed
}
/// Overwrite the status of one subtask
pub fn set(&mut self, offset: usize, status: u32) {
assert!(status < 0b1000, "status should be less than 0b1000");
self.set_status(offset, status);
if self.should_run().into_iter().all(|x| !x) {
self.set_completed(true);
} else {
self.set_completed(false);
}
}
/// Update the status from task results, a slice that must correspond one-to-one with the subtasks.
/// If all subtasks are finished, set the completion flag in the highest bit.
pub fn update_status(&mut self, result: &[ExecutionStatus]) {
assert!(result.len() == N, "result length should be equal to N");
for (i, res) in result.iter().enumerate() {
self.set_result(res, i);
}
if self.should_run().into_iter().all(|x| !x) {
self.set_completed(true);
} else {
self.set_completed(false);
}
}
/// Set the completion flag in the highest bit
fn set_completed(&mut self, completed: bool) {
if completed {
self.0 |= 1 << 31;
} else {
self.0 &= !(1 << 31);
}
}
/// Get the status of one subtask
fn get_status(&self, offset: usize) -> u32 {
(self.0 >> (offset * 3)) & 0b111
}
/// Set the status of a single subtask
fn set_status(&mut self, offset: usize, status: u32) {
self.0 = (self.0 & !(0b111 << (offset * 3))) | (status << (offset * 3));
}
// Increment the status of a single subtask (used when a task fails)
fn plus_one(&mut self, offset: usize) {
self.0 += 1 << (3 * offset);
}
// Set the status of a single subtask to STATUS_OK (used when a task succeeds)
fn set_ok(&mut self, offset: usize) {
self.0 |= STATUS_OK << (3 * offset);
}
/// Check whether a subtask should still run, i.e. whether its status is below STATUS_MAX_RETRY
fn check_continue(&self, offset: usize) -> bool {
self.get_status(offset) < STATUS_MAX_RETRY
}
/// Update a subtask's status from its execution result
fn set_result(&mut self, result: &ExecutionStatus, offset: usize) {
// If the task returned FixedFailed, set the status to the FixedFailed value regardless of the previous status
if let ExecutionStatus::FixedFailed(status, _) = result {
assert!(*status < 0b1000, "status should be less than 0b1000");
self.set_status(offset, *status);
} else if self.get_status(offset) < STATUS_MAX_RETRY {
match result {
ExecutionStatus::Succeeded | ExecutionStatus::Skipped => self.set_ok(offset),
ExecutionStatus::Failed(_) => self.plus_one(offset),
_ => {}
}
}
}
}
impl<const N: usize> From<u32> for Status<N> {
fn from(status: u32) -> Self {
Status(status)
}
}
impl<const N: usize> From<Status<N>> for u32 {
fn from(status: Status<N>) -> Self {
status.0
}
}
impl<const N: usize> From<Status<N>> for [u32; N] {
fn from(status: Status<N>) -> Self {
let mut result = [0; N];
for (i, item) in result.iter_mut().enumerate() {
*item = status.get_status(i);
}
result
}
}
impl<const N: usize> From<[u32; N]> for Status<N> {
fn from(status: [u32; N]) -> Self {
let mut result = Status::<N>::default();
for (i, item) in status.iter().enumerate() {
assert!(*item < 0b1000, "status should be less than 0b1000");
result.set_status(i, *item);
}
if result.should_run().iter().all(|x| !x) {
result.set_completed(true);
}
result
}
}
/// Contains five subtasks, in order: video poster, video info, uploader avatar, uploader info, page download
pub type VideoStatus = Status<5>;
/// Contains five subtasks, in order: page poster, video content, video info, video danmaku, video subtitles
pub type PageStatus = Status<5>;
#[cfg(test)]
mod tests {
use anyhow::anyhow;
use super::*;
#[test]
fn test_status_update() {
let mut status = Status::<3>::default();
assert_eq!(status.should_run(), [true, true, true]);
for _ in 0..3 {
status.update_status(&[
ExecutionStatus::Failed(anyhow!("")),
ExecutionStatus::Succeeded,
ExecutionStatus::Succeeded,
]);
assert_eq!(status.should_run(), [true, false, false]);
}
status.update_status(&[
ExecutionStatus::Failed(anyhow!("")),
ExecutionStatus::Succeeded,
ExecutionStatus::Succeeded,
]);
assert_eq!(status.should_run(), [false, false, false]);
assert!(status.get_completed());
status.update_status(&[
ExecutionStatus::FixedFailed(1, anyhow!("")),
ExecutionStatus::FixedFailed(4, anyhow!("")),
ExecutionStatus::FixedFailed(7, anyhow!("")),
]);
assert_eq!(status.should_run(), [true, false, false]);
assert!(!status.get_completed());
assert_eq!(<[u32; 3]>::from(status), [1, 4, 7]);
}
#[test]
fn test_status_convert() {
let testcases = [[0, 0, 1], [1, 2, 3], [3, 1, 2], [3, 0, 7]];
for testcase in testcases.iter() {
let status = Status::<3>::from(testcase.clone());
assert_eq!(<[u32; 3]>::from(status), *testcase);
}
}
#[test]
fn test_status_convert_and_update() {
let testcases = [([0, 0, 1], [1, 7, 7]), ([3, 4, 3], [4, 4, 7]), ([3, 1, 7], [4, 7, 7])];
for (before, after) in testcases.iter() {
let mut status = Status::<3>::from(before.clone());
status.update_status(&[
ExecutionStatus::Failed(anyhow!("")),
ExecutionStatus::Succeeded,
ExecutionStatus::Succeeded,
]);
assert_eq!(<[u32; 3]>::from(status), *after);
}
}
#[test]
fn test_status_reset_failed() {
// Reset a task that has already failed
let mut status = Status::<3>::from([3, 4, 7]);
assert!(!status.get_completed());
assert!(status.reset_failed());
assert!(!status.get_completed());
assert_eq!(<[u32; 3]>::from(status), [3, 0, 7]);
// Nothing needs resetting, but the completed flag is wrong (simulating a newly added subtask status)
// Here reset_failed does not fix the completed flag, while force_reset_failed does
status.set_completed(true);
assert!(status.get_completed());
assert!(!status.reset_failed());
assert!(status.get_completed());
assert!(status.force_reset_failed());
assert!(!status.get_completed());
// Resetting an already-successful task changes no status and does not touch the flag
let mut status = Status::<3>::from([7, 7, 7]);
assert!(status.get_completed());
assert!(!status.reset_failed());
assert!(status.get_completed());
}
#[test]
fn test_status_set() {
// Set a substatus, going from completed to uncompleted
let mut status = Status::<5>::from([7, 7, 7, 7, 7]);
assert!(status.get_completed());
status.set(4, 0);
assert!(!status.get_completed());
assert_eq!(<[u32; 5]>::from(status), [7, 7, 7, 7, 0]);
// Set a substatus, going from uncompleted to completed
let mut status = Status::<5>::from([4, 7, 7, 7, 0]);
assert!(!status.get_completed());
status.set(4, 7);
assert!(status.get_completed());
assert_eq!(<[u32; 5]>::from(status), [4, 7, 7, 7, 7]);
}
}
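The bit layout behind `Status<N>` packs each subtask's status into 3 bits of a `u32`, with bit 31 reserved for the completed flag. A minimal standalone sketch of the same layout — the free-function names are illustrative, and the `0b100` retry cap is an assumption inferred from the tests above rather than a quoted constant:

```rust
// Assumed constants mirroring the tests above: 0b111 means succeeded,
// and a subtask stops retrying once its counter reaches 0b100.
const STATUS_OK: u32 = 0b111;
const STATUS_MAX_RETRY: u32 = 0b100;

fn get_status(packed: u32, offset: usize) -> u32 {
    (packed >> (offset * 3)) & 0b111
}

fn set_status(packed: u32, offset: usize, status: u32) -> u32 {
    assert!(status < 0b1000);
    (packed & !(0b111 << (offset * 3))) | (status << (offset * 3))
}

fn should_run(packed: u32, offset: usize) -> bool {
    get_status(packed, offset) < STATUS_MAX_RETRY
}

fn get_completed(packed: u32) -> bool {
    packed >> 31 == 1
}

fn main() {
    // pack statuses [1, 4, 7]: one failure so far, failed out, succeeded
    let mut packed = 0u32;
    for (i, s) in [1u32, 4, 7].into_iter().enumerate() {
        packed = set_status(packed, i, s);
    }
    assert_eq!(get_status(packed, 0), 1);
    assert_eq!(get_status(packed, 1), 4);
    assert_eq!(get_status(packed, 2), STATUS_OK);
    // only the first subtask should still run
    assert!(should_run(packed, 0));
    assert!(!should_run(packed, 1));
    assert!(!should_run(packed, 2));
    // the completed flag lives in bit 31, independent of the 3-bit fields
    assert!(!get_completed(packed));
    assert!(get_completed(packed | 1 << 31));
}
```

With 3 bits per subtask and one flag bit, a `u32` comfortably holds the five subtasks of `VideoStatus` and `PageStatus` (15 bits) plus the completed bit.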


@@ -0,0 +1,68 @@
use std::sync::{Arc, LazyLock};
use serde::Serialize;
use tokio::sync::MutexGuard;
use crate::config::VersionedConfig;
pub static TASK_STATUS_NOTIFIER: LazyLock<TaskStatusNotifier> = LazyLock::new(TaskStatusNotifier::new);
#[derive(Serialize, Default)]
pub struct TaskStatus {
is_running: bool,
last_run: Option<chrono::DateTime<chrono::Local>>,
last_finish: Option<chrono::DateTime<chrono::Local>>,
next_run: Option<chrono::DateTime<chrono::Local>>,
}
pub struct TaskStatusNotifier {
mutex: tokio::sync::Mutex<()>,
tx: tokio::sync::watch::Sender<Arc<TaskStatus>>,
rx: tokio::sync::watch::Receiver<Arc<TaskStatus>>,
}
impl TaskStatusNotifier {
pub fn new() -> Self {
let (tx, rx) = tokio::sync::watch::channel(Arc::new(TaskStatus::default()));
Self {
mutex: tokio::sync::Mutex::const_new(()),
tx,
rx,
}
}
pub async fn start_running(&'_ self) -> MutexGuard<'_, ()> {
let lock = self.mutex.lock().await;
let _ = self.tx.send(Arc::new(TaskStatus {
is_running: true,
last_run: Some(chrono::Local::now()),
last_finish: None,
next_run: None,
}));
lock
}
pub fn finish_running(&self, _lock: MutexGuard<()>) {
let last_status = self.tx.borrow();
let last_run = last_status.last_run;
drop(last_status);
let config = VersionedConfig::get().load();
let now = chrono::Local::now();
let _ = self.tx.send(Arc::new(TaskStatus {
is_running: false,
last_run,
last_finish: Some(now),
next_run: now.checked_add_signed(chrono::Duration::seconds(config.interval as i64)),
}));
}
/// Precisely probe the task execution state, guaranteeing that if "not running" is observed, no task can start before the lock is released
pub fn detect_running(&self) -> Option<MutexGuard<'_, ()>> {
self.mutex.try_lock().ok()
}
pub fn subscribe(&self) -> tokio::sync::watch::Receiver<Arc<TaskStatus>> {
self.rx.clone()
}
}
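`detect_running` leans on `try_lock`: the run loop holds the mutex for the whole task, so a successful `try_lock` proves no task is in flight, and holding the returned guard keeps it that way until dropped. A std-only sketch of that idea, with tokio's async mutex swapped for `std::sync::Mutex` purely for illustration:

```rust
use std::sync::{Mutex, MutexGuard};

struct Notifier {
    mutex: Mutex<()>,
}

impl Notifier {
    // The task holds this guard for its entire run.
    fn start_running(&self) -> MutexGuard<'_, ()> {
        self.mutex.lock().unwrap()
    }
    // Some(guard) means "not running", and no task can start until the guard drops.
    fn detect_running(&self) -> Option<MutexGuard<'_, ()>> {
        self.mutex.try_lock().ok()
    }
}

fn main() {
    let n = Notifier { mutex: Mutex::new(()) };
    // idle: the probe succeeds
    let probe = n.detect_running().expect("should be idle");
    drop(probe);
    // while a "task" holds the lock, the probe reports running
    let running = n.start_running();
    assert!(n.detect_running().is_none());
    drop(running);
    assert!(n.detect_running().is_some());
}
```

Returning the guard (rather than a plain bool) is what makes the probe race-free: the caller decides how long "not running" must stay true.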


@@ -0,0 +1,23 @@
use std::path::Path;
use validator::ValidationError;
use crate::utils::status::{STATUS_NOT_STARTED, STATUS_OK};
pub fn validate_status_value(value: u32) -> Result<(), ValidationError> {
if value == STATUS_OK || value == STATUS_NOT_STARTED {
Ok(())
} else {
Err(ValidationError::new(
"status_value must be either STATUS_OK or STATUS_NOT_STARTED",
))
}
}
pub fn validate_path(path: &str) -> Result<(), ValidationError> {
if path.is_empty() || !Path::new(path).is_absolute() {
Err(ValidationError::new("path must be a non-empty absolute path"))
} else {
Ok(())
}
}


@@ -0,0 +1,699 @@
use std::collections::HashSet;
use std::path::{Path, PathBuf};
use std::pin::Pin;
use anyhow::{Context, Result, anyhow, bail};
use bili_sync_entity::*;
use futures::stream::FuturesUnordered;
use futures::{Stream, StreamExt, TryStreamExt};
use sea_orm::ActiveValue::Set;
use sea_orm::TransactionTrait;
use sea_orm::entity::prelude::*;
use tokio::fs;
use tokio::sync::Semaphore;
use crate::adapter::{VideoSource, VideoSourceEnum};
use crate::bilibili::{BestStream, BiliClient, BiliError, Dimension, PageInfo, Video, VideoInfo};
use crate::config::{ARGS, PathSafeTemplate, TEMPLATE, VersionedConfig};
use crate::downloader::Downloader;
use crate::error::{DownloadAbortError, ExecutionStatus, ProcessPageError};
use crate::utils::format_arg::{page_format_args, video_format_args};
use crate::utils::model::{
create_pages, create_videos, filter_unfilled_videos, filter_unhandled_video_pages, update_pages_model,
update_videos_model,
};
use crate::utils::nfo::NFO;
use crate::utils::rule::FieldEvaluatable;
use crate::utils::status::{PageStatus, STATUS_OK, VideoStatus};
/// Fully process a single video source
pub async fn process_video_source(
video_source: VideoSourceEnum,
bili_client: &BiliClient,
connection: &DatabaseConnection,
) -> Result<()> {
// Get the video list Model and video stream from the arguments
let (video_source, video_streams) = video_source.refresh(bili_client, connection).await?;
// Take brief info of new videos from the stream and write it to the database
refresh_video_source(&video_source, video_streams, connection).await?;
// Separately request the video detail API for full details and all pages, writing them to the database
fetch_video_details(bili_client, &video_source, connection).await?;
if ARGS.scan_only {
warn!("已开启仅扫描模式,跳过视频下载..");
} else {
// Look up all undownloaded videos and pages in the database, then download and process them
download_unprocessed_videos(bili_client, &video_source, connection).await?;
}
Ok(())
}
/// Request the API for all newly added videos in the list and write them to the database
pub async fn refresh_video_source<'a>(
video_source: &VideoSourceEnum,
video_streams: Pin<Box<dyn Stream<Item = Result<VideoInfo>> + 'a + Send>>,
connection: &DatabaseConnection,
) -> Result<()> {
video_source.log_refresh_video_start();
let latest_row_at = video_source.get_latest_row_at().and_utc();
let mut max_datetime = latest_row_at;
let mut error = Ok(());
let mut video_streams = video_streams
.take_while(|res| {
match res {
Err(e) => {
error = Err(anyhow!(e.to_string()));
futures::future::ready(false)
}
Ok(v) => {
// Although video_streams runs from newest to oldest, the requests are paginated; in the extreme case, two full pages of videos could be inserted while the first page is being fetched
// The second page would then contain videos newer than the first, so strictly speaking we should compare timestamps on the first video of each page
// But under the stream abstraction we cannot tell where pages split, so for now every video is compared; the performance cost should be negligible
let release_datetime = v.release_datetime();
if release_datetime > &max_datetime {
max_datetime = *release_datetime;
}
futures::future::ready(video_source.should_take(release_datetime, &latest_row_at))
}
}
})
.filter_map(|res| futures::future::ready(video_source.should_filter(res, &latest_row_at)))
.chunks(10);
let mut count = 0;
while let Some(videos_info) = video_streams.next().await {
count += videos_info.len();
create_videos(videos_info, video_source, connection).await?;
}
// If an error occurred while paging through videos, return here without updating latest_row_at
error?;
if max_datetime != latest_row_at {
video_source
.update_latest_row_at(max_datetime.naive_utc())
.save(connection)
.await?;
}
video_source.log_refresh_video_end(count);
Ok(())
}
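refresh_video_source walks a newest-first stream, remembering the maximum release time seen while cutting off at the stored watermark. The same bookkeeping over a plain iterator, as a sketch — plain `i64` timestamps stand in for the crate's chrono values, and `take_new` is an illustrative name:

```rust
/// Returns (videos newer than the watermark, the new watermark).
fn take_new(stream: &[i64], latest_row_at: i64) -> (Vec<i64>, i64) {
    let mut max_datetime = latest_row_at;
    let taken: Vec<i64> = stream
        .iter()
        .copied()
        // every item is compared, mirroring the per-video comparison above
        .inspect(|&t| max_datetime = max_datetime.max(t))
        .take_while(|&t| t > latest_row_at)
        .collect();
    (taken, max_datetime)
}

fn main() {
    // newest-first stream; watermark 100 cuts off at the first item <= 100
    let (taken, watermark) = take_new(&[130, 120, 110, 100, 90], 100);
    assert_eq!(taken, vec![130, 120, 110]);
    assert_eq!(watermark, 130);
    // nothing new: the watermark stays unchanged, so no DB write is needed
    let (taken, watermark) = take_new(&[100, 90], 100);
    assert!(taken.is_empty());
    assert_eq!(watermark, 100);
}
```

The `watermark != latest_row_at` comparison at the end of the real function is what gates the `update_latest_row_at` write.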
/// Filter out videos with incomplete information and try to fill in their details
pub async fn fetch_video_details(
bili_client: &BiliClient,
video_source: &VideoSourceEnum,
connection: &DatabaseConnection,
) -> Result<()> {
video_source.log_fetch_video_start();
let videos_model = filter_unfilled_videos(video_source.filter_expr(), connection).await?;
let semaphore = Semaphore::new(VersionedConfig::get().load().concurrent_limit.video);
let semaphore_ref = &semaphore;
let tasks = videos_model
.into_iter()
.map(|video_model| async move {
let _permit = semaphore_ref.acquire().await.context("acquire semaphore failed")?;
let video = Video::new(bili_client, video_model.bvid.clone());
let info: Result<_> = async { Ok((video.get_tags().await?, video.get_view_info().await?)) }.await;
match info {
Err(e) => {
error!(
"获取视频 {} - {} 的详细信息失败,错误为:{:#}",
&video_model.bvid, &video_model.name, e
);
if let Some(BiliError::RequestFailed(-404, _)) = e.downcast_ref::<BiliError>() {
let mut video_active_model: bili_sync_entity::video::ActiveModel = video_model.into();
video_active_model.valid = Set(false);
video_active_model.save(connection).await?;
}
}
Ok((tags, mut view_info)) => {
let VideoInfo::Detail { pages, .. } = &mut view_info else {
unreachable!()
};
// Build the page models
let pages = std::mem::take(pages);
let pages = pages
.into_iter()
.map(|p| p.into_active_model(video_model.id))
.collect::<Vec<page::ActiveModel>>();
// Update the relevant fields on the video model
let mut video_active_model = view_info.into_detail_model(video_model);
video_source.set_relation_id(&mut video_active_model);
video_active_model.single_page = Set(Some(pages.len() == 1));
video_active_model.tags = Set(Some(tags.into()));
video_active_model.should_download = Set(video_source.rule().evaluate(&video_active_model, &pages));
let txn = connection.begin().await?;
create_pages(pages, &txn).await?;
video_active_model.save(&txn).await?;
txn.commit().await?;
}
};
Ok::<_, anyhow::Error>(())
})
.collect::<FuturesUnordered<_>>();
tasks.try_collect::<Vec<_>>().await?;
video_source.log_fetch_video_end();
Ok(())
}
/// Download all videos that have not been fully processed
pub async fn download_unprocessed_videos(
bili_client: &BiliClient,
video_source: &VideoSourceEnum,
connection: &DatabaseConnection,
) -> Result<()> {
video_source.log_download_video_start();
let semaphore = Semaphore::new(VersionedConfig::get().load().concurrent_limit.video);
let downloader = Downloader::new(bili_client.client.clone());
let unhandled_videos_pages = filter_unhandled_video_pages(video_source.filter_expr(), connection).await?;
let mut assigned_upper = HashSet::new();
let tasks = unhandled_videos_pages
.into_iter()
.map(|(video_model, pages_model)| {
let should_download_upper = !assigned_upper.contains(&video_model.upper_id);
assigned_upper.insert(video_model.upper_id);
download_video_pages(
bili_client,
video_source,
video_model,
pages_model,
connection,
&semaphore,
&downloader,
should_download_upper,
)
})
.collect::<FuturesUnordered<_>>();
let mut download_aborted = false;
let mut stream = tasks
// On risk control, set the download_aborted flag and terminate the stream
.take_while(|res| {
if res
.as_ref()
.is_err_and(|e| e.downcast_ref::<DownloadAbortError>().is_some())
{
download_aborted = true;
}
futures::future::ready(!download_aborted)
})
// Filter out ordinary Errs that did not trigger risk control, keeping only successfully returned Models
.filter_map(|res| futures::future::ready(res.ok()))
// Batch the successfully returned Models in groups of ten
.chunks(10);
while let Some(models) = stream.next().await {
update_videos_model(models, connection).await?;
}
if download_aborted {
error!("下载触发风控,已终止所有任务,等待下一轮执行");
}
video_source.log_download_video_end();
Ok(())
}
#[allow(clippy::too_many_arguments)]
pub async fn download_video_pages(
bili_client: &BiliClient,
video_source: &VideoSourceEnum,
video_model: video::Model,
pages: Vec<page::Model>,
connection: &DatabaseConnection,
semaphore: &Semaphore,
downloader: &Downloader,
should_download_upper: bool,
) -> Result<video::ActiveModel> {
let _permit = semaphore.acquire().await.context("acquire semaphore failed")?;
let mut status = VideoStatus::from(video_model.download_status);
let separate_status = status.should_run();
let base_path = video_source.path().join(
TEMPLATE
.load()
.path_safe_render("video", &video_format_args(&video_model))?,
);
let upper_id = video_model.upper_id.to_string();
let base_upper_path = VersionedConfig::get()
.load()
.upper_path
.join(upper_id.chars().next().context("upper_id is empty")?.to_string())
.join(upper_id);
let is_single_page = video_model.single_page.context("single_page is null")?;
// For single-page videos, the page download alone is sufficient
// For multi-page videos, the page download only covers the episodes; the video-level poster and tvshow.nfo must be added separately
let (res_1, res_2, res_3, res_4, res_5) = tokio::join!(
// Download the video poster
fetch_video_poster(
separate_status[0] && !is_single_page,
&video_model,
downloader,
base_path.join("poster.jpg"),
base_path.join("fanart.jpg"),
),
// Generate the video info nfo
generate_video_nfo(
separate_status[1] && !is_single_page,
&video_model,
base_path.join("tvshow.nfo"),
),
// Download the uploader's avatar
fetch_upper_face(
separate_status[2] && should_download_upper,
&video_model,
downloader,
base_upper_path.join("folder.jpg"),
),
// Generate the uploader info nfo
generate_upper_nfo(
separate_status[3] && should_download_upper,
&video_model,
base_upper_path.join("person.nfo"),
),
// Dispatch and run the page download tasks
dispatch_download_page(
separate_status[4],
bili_client,
&video_model,
pages,
connection,
downloader,
&base_path
)
);
let results = [res_1, res_2, res_3, res_4, res_5]
.into_iter()
.map(Into::into)
.collect::<Vec<_>>();
status.update_status(&results);
results
.iter()
.take(4)
.zip(["封面", "详情", "作者头像", "作者详情"])
.for_each(|(res, task_name)| match res {
ExecutionStatus::Skipped => info!("处理视频「{}」{}已成功过,跳过", &video_model.name, task_name),
ExecutionStatus::Succeeded => info!("处理视频「{}」{}成功", &video_model.name, task_name),
ExecutionStatus::Ignored(e) => {
error!(
"处理视频「{}」{}出现常见错误,已忽略:{:#}",
&video_model.name, task_name, e
)
}
ExecutionStatus::Failed(e) | ExecutionStatus::FixedFailed(_, e) => {
error!("处理视频「{}」{}失败:{:#}", &video_model.name, task_name, e)
}
});
if let ExecutionStatus::Failed(e) = results.into_iter().nth(4).context("page download result not found")?
&& e.downcast_ref::<DownloadAbortError>().is_some()
{
return Err(e);
}
let mut video_active_model: video::ActiveModel = video_model.into();
video_active_model.download_status = Set(status.into());
video_active_model.path = Set(base_path.to_string_lossy().to_string());
Ok(video_active_model)
}
/// Dispatch and run page download tasks; returns Ok if and only if every page downloads successfully or hits the max retry count, otherwise returns an error matching the failure cause
pub async fn dispatch_download_page(
should_run: bool,
bili_client: &BiliClient,
video_model: &video::Model,
pages: Vec<page::Model>,
connection: &DatabaseConnection,
downloader: &Downloader,
base_path: &Path,
) -> Result<ExecutionStatus> {
if !should_run {
return Ok(ExecutionStatus::Skipped);
}
let child_semaphore = Semaphore::new(VersionedConfig::get().load().concurrent_limit.page);
let tasks = pages
.into_iter()
.map(|page_model| {
download_page(
bili_client,
video_model,
page_model,
&child_semaphore,
downloader,
base_path,
)
})
.collect::<FuturesUnordered<_>>();
let (mut download_aborted, mut target_status) = (false, STATUS_OK);
let mut stream = tasks
.take_while(|res| {
match res {
Ok(model) => {
// The download statuses of all this video's pages come back here; they determine the video-level "page download" subtask status
// The old implementation only looked at the highest flag bit of page_download_status, treating the subtask as done whenever that bit was true
// That meant a page failing up to MAX_RETRY could still leave the video-level page download marked Succeeded, which was inaccurate
// The new implementation takes the minimum of all substatuses, so the video-level page download only counts as Succeeded when every page subtask succeeded
let page_download_status = model.download_status.try_as_ref().expect("download_status must be set");
let separate_status: [u32; 5] = PageStatus::from(*page_download_status).into();
for status in separate_status {
target_status = target_status.min(status);
}
}
Err(e) => {
if e.downcast_ref::<DownloadAbortError>().is_some() {
download_aborted = true;
}
}
}
// Only terminate the stream on risk control; keep going otherwise
futures::future::ready(!download_aborted)
})
.filter_map(|res| futures::future::ready(res.ok()))
.chunks(10);
while let Some(models) = stream.next().await {
update_pages_model(models, connection).await?;
}
if download_aborted {
error!("下载视频「{}」的分页时触发风控,将异常向上传递..", &video_model.name);
bail!(DownloadAbortError());
}
if target_status != STATUS_OK {
return Ok(ExecutionStatus::FixedFailed(target_status, ProcessPageError().into()));
}
Ok(ExecutionStatus::Succeeded)
}
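The minimum-substatus aggregation described in the comments of dispatch_download_page can be sketched on its own — the 3-bit field width and `0b111` for STATUS_OK mirror the `Status` layout above, while the function names are illustrative:

```rust
const STATUS_OK: u32 = 0b111;

// Unpack the five 3-bit substatus fields of a packed page status.
fn unpack(packed: u32) -> [u32; 5] {
    let mut out = [0u32; 5];
    for (i, slot) in out.iter_mut().enumerate() {
        *slot = (packed >> (i * 3)) & 0b111;
    }
    out
}

/// Fold every page's five substatuses down to the single worst (smallest) one.
fn aggregate(page_statuses: &[u32]) -> u32 {
    let mut target = STATUS_OK;
    for &packed in page_statuses {
        for status in unpack(packed) {
            target = target.min(status);
        }
    }
    target
}

fn main() {
    // all pages fully OK (five 0b111 fields) -> the aggregate is STATUS_OK
    let all_ok = 0b111_111_111_111_111;
    assert_eq!(aggregate(&[all_ok, all_ok]), STATUS_OK);
    // one page with a subtask stuck at retry count 4 drags the video down to 4
    let one_failed = 0b111_111_100_111_111;
    assert_eq!(aggregate(&[all_ok, one_failed]), 4);
}
```

A non-OK aggregate becomes the `FixedFailed(target_status, …)` result, so the video-level field records exactly how far the worst page got.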
/// Download a single page; without risk control and under normal operation returns Ok(page::ActiveModel) whose status field stores the new download status, and returns DownloadAbortError when risk control is triggered
pub async fn download_page(
bili_client: &BiliClient,
video_model: &video::Model,
page_model: page::Model,
semaphore: &Semaphore,
downloader: &Downloader,
base_path: &Path,
) -> Result<page::ActiveModel> {
let _permit = semaphore.acquire().await.context("acquire semaphore failed")?;
let mut status = PageStatus::from(page_model.download_status);
let separate_status = status.should_run();
let is_single_page = video_model.single_page.context("single_page is null")?;
let base_name = TEMPLATE
.load()
.path_safe_render("page", &page_format_args(video_model, &page_model))?;
let (poster_path, video_path, nfo_path, danmaku_path, fanart_path, subtitle_path) = if is_single_page {
(
base_path.join(format!("{}-poster.jpg", &base_name)),
base_path.join(format!("{}.mp4", &base_name)),
base_path.join(format!("{}.nfo", &base_name)),
base_path.join(format!("{}.zh-CN.default.ass", &base_name)),
Some(base_path.join(format!("{}-fanart.jpg", &base_name))),
base_path.join(format!("{}.srt", &base_name)),
)
} else {
(
base_path
.join("Season 1")
.join(format!("{} - S01E{:0>2}-thumb.jpg", &base_name, page_model.pid)),
base_path
.join("Season 1")
.join(format!("{} - S01E{:0>2}.mp4", &base_name, page_model.pid)),
base_path
.join("Season 1")
.join(format!("{} - S01E{:0>2}.nfo", &base_name, page_model.pid)),
base_path
.join("Season 1")
.join(format!("{} - S01E{:0>2}.zh-CN.default.ass", &base_name, page_model.pid)),
// For multi-page videos, the show-level fanart was already fetched in the earlier fetch_video_poster step, so no per-episode fanart is downloaded here
None,
base_path
.join("Season 1")
.join(format!("{} - S01E{:0>2}.srt", &base_name, page_model.pid)),
)
};
let dimension = match (page_model.width, page_model.height) {
(Some(width), Some(height)) => Some(Dimension {
width,
height,
rotate: 0,
}),
_ => None,
};
let page_info = PageInfo {
cid: page_model.cid,
duration: page_model.duration,
dimension,
..Default::default()
};
let (res_1, res_2, res_3, res_4, res_5) = tokio::join!(
// Download the page poster
fetch_page_poster(
separate_status[0],
video_model,
&page_model,
downloader,
poster_path,
fanart_path
),
// Download the page video
fetch_page_video(
separate_status[1],
bili_client,
video_model,
downloader,
&page_info,
&video_path
),
// Generate the page info nfo
generate_page_nfo(separate_status[2], video_model, &page_model, nfo_path),
// Download the page danmaku
fetch_page_danmaku(separate_status[3], bili_client, video_model, &page_info, danmaku_path),
// Download the page subtitles
fetch_page_subtitle(separate_status[4], bili_client, video_model, &page_info, &subtitle_path)
);
let results = [res_1, res_2, res_3, res_4, res_5]
.into_iter()
.map(Into::into)
.collect::<Vec<_>>();
status.update_status(&results);
results
.iter()
.zip(["封面", "视频", "详情", "弹幕", "字幕"])
.for_each(|(res, task_name)| match res {
ExecutionStatus::Skipped => info!(
"处理视频「{}」第 {} 页{}已成功过,跳过",
&video_model.name, page_model.pid, task_name
),
ExecutionStatus::Succeeded => info!(
"处理视频「{}」第 {} 页{}成功",
&video_model.name, page_model.pid, task_name
),
ExecutionStatus::Ignored(e) => {
error!(
"处理视频「{}」第 {} 页{}出现常见错误,已忽略:{:#}",
&video_model.name, page_model.pid, task_name, e
)
}
ExecutionStatus::Failed(e) | ExecutionStatus::FixedFailed(_, e) => error!(
"处理视频「{}」第 {} 页{}失败:{:#}",
&video_model.name, page_model.pid, task_name, e
),
});
// If downloading the video triggered risk control, return DownloadAbortError directly
if let ExecutionStatus::Failed(e) = results.into_iter().nth(1).context("video download result not found")?
&& let Ok(BiliError::RiskControlOccurred) = e.downcast::<BiliError>()
{
bail!(DownloadAbortError());
}
let mut page_active_model: page::ActiveModel = page_model.into();
page_active_model.download_status = Set(status.into());
page_active_model.path = Set(Some(video_path.to_string_lossy().to_string()));
Ok(page_active_model)
}
pub async fn fetch_page_poster(
should_run: bool,
video_model: &video::Model,
page_model: &page::Model,
downloader: &Downloader,
poster_path: PathBuf,
fanart_path: Option<PathBuf>,
) -> Result<ExecutionStatus> {
if !should_run {
return Ok(ExecutionStatus::Skipped);
}
let single_page = video_model.single_page.context("single_page is null")?;
let url = if single_page {
// Single-page videos use the video's poster directly
video_model.cover.as_str()
} else {
// Multi-page videos fall back to the video's poster when the page has none
match &page_model.image {
Some(url) => url.as_str(),
None => video_model.cover.as_str(),
}
};
downloader.fetch(url, &poster_path).await?;
if let Some(fanart_path) = fanart_path {
fs::copy(&poster_path, &fanart_path).await?;
}
Ok(ExecutionStatus::Succeeded)
}
pub async fn fetch_page_video(
should_run: bool,
bili_client: &BiliClient,
video_model: &video::Model,
downloader: &Downloader,
page_info: &PageInfo,
page_path: &Path,
) -> Result<ExecutionStatus> {
if !should_run {
return Ok(ExecutionStatus::Skipped);
}
let bili_video = Video::new(bili_client, video_model.bvid.clone());
let streams = bili_video
.get_page_analyzer(page_info)
.await?
.best_stream(&VersionedConfig::get().load().filter_option)?;
match streams {
BestStream::Mixed(mix_stream) => downloader.fetch_with_fallback(&mix_stream.urls(), page_path).await?,
BestStream::VideoAudio {
video: video_stream,
audio: None,
} => downloader.fetch_with_fallback(&video_stream.urls(), page_path).await?,
BestStream::VideoAudio {
video: video_stream,
audio: Some(audio_stream),
} => {
let (tmp_video_path, tmp_audio_path) = (
page_path.with_extension("tmp_video"),
page_path.with_extension("tmp_audio"),
);
let res = async {
downloader
.fetch_with_fallback(&video_stream.urls(), &tmp_video_path)
.await?;
downloader
.fetch_with_fallback(&audio_stream.urls(), &tmp_audio_path)
.await?;
downloader.merge(&tmp_video_path, &tmp_audio_path, page_path).await
}
.await;
let _ = fs::remove_file(tmp_video_path).await;
let _ = fs::remove_file(tmp_audio_path).await;
res?
}
}
Ok(ExecutionStatus::Succeeded)
}
pub async fn fetch_page_danmaku(
should_run: bool,
bili_client: &BiliClient,
video_model: &video::Model,
page_info: &PageInfo,
danmaku_path: PathBuf,
) -> Result<ExecutionStatus> {
if !should_run {
return Ok(ExecutionStatus::Skipped);
}
let bili_video = Video::new(bili_client, video_model.bvid.clone());
bili_video
.get_danmaku_writer(page_info)
.await?
.write(danmaku_path)
.await?;
Ok(ExecutionStatus::Succeeded)
}
pub async fn fetch_page_subtitle(
should_run: bool,
bili_client: &BiliClient,
video_model: &video::Model,
page_info: &PageInfo,
subtitle_path: &Path,
) -> Result<ExecutionStatus> {
if !should_run {
return Ok(ExecutionStatus::Skipped);
}
let bili_video = Video::new(bili_client, video_model.bvid.clone());
let subtitles = bili_video.get_subtitles(page_info).await?;
let tasks = subtitles
.into_iter()
.map(|subtitle| async move {
let path = subtitle_path.with_extension(format!("{}.srt", subtitle.lan));
tokio::fs::write(path, subtitle.body.to_string()).await
})
.collect::<FuturesUnordered<_>>();
tasks.try_collect::<Vec<()>>().await?;
Ok(ExecutionStatus::Succeeded)
}
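fetch_page_subtitle derives each language's file name via `with_extension`, which replaces only the final extension — so the language tag lands between the stem and `.srt`. A quick check of that behavior (`en` is an illustrative language code, not one from the API):

```rust
use std::path::Path;

fn main() {
    let subtitle_path = Path::new("Season 1/Foo - S01E01.srt");
    // with_extension strips the trailing ".srt" and appends "en.srt",
    // yielding "Foo - S01E01.en.srt"
    let localized = subtitle_path.with_extension("en.srt");
    assert_eq!(localized, Path::new("Season 1/Foo - S01E01.en.srt"));
}
```

This is why the base path is pre-built with an `.srt` suffix: each subtitle language swaps that suffix for its own tagged one.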
pub async fn generate_page_nfo(
should_run: bool,
video_model: &video::Model,
page_model: &page::Model,
nfo_path: PathBuf,
) -> Result<ExecutionStatus> {
if !should_run {
return Ok(ExecutionStatus::Skipped);
}
let single_page = video_model.single_page.context("single_page is null")?;
let nfo = if single_page {
NFO::Movie(video_model.into())
} else {
NFO::Episode(page_model.into())
};
generate_nfo(nfo, nfo_path).await?;
Ok(ExecutionStatus::Succeeded)
}
pub async fn fetch_video_poster(
should_run: bool,
video_model: &video::Model,
downloader: &Downloader,
poster_path: PathBuf,
fanart_path: PathBuf,
) -> Result<ExecutionStatus> {
if !should_run {
return Ok(ExecutionStatus::Skipped);
}
downloader.fetch(&video_model.cover, &poster_path).await?;
fs::copy(&poster_path, &fanart_path).await?;
Ok(ExecutionStatus::Succeeded)
}
pub async fn fetch_upper_face(
should_run: bool,
video_model: &video::Model,
downloader: &Downloader,
upper_face_path: PathBuf,
) -> Result<ExecutionStatus> {
if !should_run {
return Ok(ExecutionStatus::Skipped);
}
downloader.fetch(&video_model.upper_face, &upper_face_path).await?;
Ok(ExecutionStatus::Succeeded)
}
pub async fn generate_upper_nfo(
should_run: bool,
video_model: &video::Model,
nfo_path: PathBuf,
) -> Result<ExecutionStatus> {
if !should_run {
return Ok(ExecutionStatus::Skipped);
}
generate_nfo(NFO::Upper(video_model.into()), nfo_path).await?;
Ok(ExecutionStatus::Succeeded)
}
pub async fn generate_video_nfo(
should_run: bool,
video_model: &video::Model,
nfo_path: PathBuf,
) -> Result<ExecutionStatus> {
if !should_run {
return Ok(ExecutionStatus::Skipped);
}
generate_nfo(NFO::TVShow(video_model.into()), nfo_path).await?;
Ok(ExecutionStatus::Succeeded)
}
/// Create nfo_path's parent directory, then write the nfo file
async fn generate_nfo(nfo: NFO<'_>, nfo_path: PathBuf) -> Result<()> {
if let Some(parent) = nfo_path.parent() {
fs::create_dir_all(parent).await?;
}
fs::write(nfo_path, nfo.generate_nfo().await?.as_bytes()).await?;
Ok(())
}


@@ -0,0 +1,12 @@
[package]
name = "bili_sync_entity"
version = { workspace = true }
edition = { workspace = true }
publish = { workspace = true }
[dependencies]
derivative = { workspace = true }
sea-orm = { workspace = true }
regex = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }


@@ -0,0 +1,2 @@
pub mod rule;
pub mod string_vec;


@@ -0,0 +1,120 @@
use std::fmt::Display;
use derivative::Derivative;
use sea_orm::FromJsonQueryResult;
use sea_orm::prelude::DateTime;
use serde::{Deserialize, Deserializer, Serialize, Serializer};
#[derive(Clone, Debug, Serialize, Deserialize, Derivative)]
#[derivative(PartialEq, Eq)]
#[serde(rename_all = "camelCase", tag = "operator", content = "value")]
pub enum Condition<T: Serialize + Display> {
Equals(T),
Contains(T),
#[serde(deserialize_with = "deserialize_regex", serialize_with = "serialize_regex")]
MatchesRegex(String, #[derivative(PartialEq = "ignore")] regex::Regex),
Prefix(T),
Suffix(T),
GreaterThan(T),
LessThan(T),
Between(T, T),
}
#[derive(Clone, Debug, PartialEq, Eq, Serialize, Deserialize, FromJsonQueryResult)]
#[serde(rename_all = "camelCase", tag = "field", content = "rule")]
pub enum RuleTarget {
Title(Condition<String>),
Tags(Condition<String>),
FavTime(Condition<DateTime>),
PubTime(Condition<DateTime>),
PageCount(Condition<usize>),
Not(Box<RuleTarget>),
}
pub type AndGroup = Vec<RuleTarget>;
#[derive(Clone, Debug, PartialEq, Eq, Serialize, Deserialize, FromJsonQueryResult)]
pub struct Rule(pub Vec<AndGroup>);
impl<T: Serialize + Display> Display for Condition<T> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
Condition::Equals(v) => write!(f, "等于“{}”", v),
Condition::Contains(v) => write!(f, "包含“{}”", v),
Condition::MatchesRegex(pat, _) => write!(f, "匹配“{}”", pat),
Condition::Prefix(v) => write!(f, "以“{}”开头", v),
Condition::Suffix(v) => write!(f, "以“{}”结尾", v),
Condition::GreaterThan(v) => write!(f, "大于“{}”", v),
Condition::LessThan(v) => write!(f, "小于“{}”", v),
Condition::Between(start, end) => write!(f, "在“{}”和“{}”之间", start, end),
}
}
}
impl Display for RuleTarget {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
fn get_field_name(rt: &RuleTarget, depth: usize) -> &'static str {
match rt {
RuleTarget::Title(_) => "标题",
RuleTarget::Tags(_) => "标签",
RuleTarget::FavTime(_) => "收藏时间",
RuleTarget::PubTime(_) => "发布时间",
RuleTarget::PageCount(_) => "视频分页数量",
RuleTarget::Not(inner) => {
if depth == 0 {
get_field_name(inner, depth + 1)
} else {
"格式化失败"
}
}
}
}
let field_name = get_field_name(self, 0);
match self {
RuleTarget::Not(inner) => match inner.as_ref() {
RuleTarget::Title(cond) | RuleTarget::Tags(cond) => write!(f, "{}不{}", field_name, cond),
RuleTarget::FavTime(cond) | RuleTarget::PubTime(cond) => {
write!(f, "{}不{}", field_name, cond)
}
RuleTarget::PageCount(cond) => write!(f, "{}不{}", field_name, cond),
RuleTarget::Not(_) => write!(f, "格式化失败"),
},
RuleTarget::Title(cond) | RuleTarget::Tags(cond) => write!(f, "{}{}", field_name, cond),
RuleTarget::FavTime(cond) | RuleTarget::PubTime(cond) => {
write!(f, "{}{}", field_name, cond)
}
RuleTarget::PageCount(cond) => write!(f, "{}{}", field_name, cond),
}
}
}
impl Display for Rule {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let groups: Vec<String> = self
.0
.iter()
.map(|group| {
let conditions: Vec<String> = group.iter().map(|target| format!("{}", target)).collect();
conditions.join(" 且 ")
})
.collect();
write!(f, "{}", groups.join(" 或 "))
}
}
fn deserialize_regex<'de, D>(deserializer: D) -> Result<(String, regex::Regex), D::Error>
where
D: Deserializer<'de>,
{
let pattern = String::deserialize(deserializer)?;
// Precompile the regex during deserialization to optimize performance
let regex = regex::Regex::new(&pattern).map_err(serde::de::Error::custom)?;
Ok((pattern, regex))
}
fn serialize_regex<S>(pattern: &str, _regex: &regex::Regex, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
serializer.serialize_str(pattern)
}
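`Rule` is a disjunction of conjunctions: the outer `Vec` is OR across `AndGroup`s, and each group is AND across its `RuleTarget`s. A reduced sketch of that evaluation with string conditions only — the condition set and `evaluate` helper are simplified stand-ins, since the crate also supports regex, ranges, dates, and negation:

```rust
enum Cond {
    Equals(String),
    Contains(String),
}

impl Cond {
    fn matches(&self, title: &str) -> bool {
        match self {
            Cond::Equals(v) => title == v,
            Cond::Contains(v) => title.contains(v.as_str()),
        }
    }
}

/// OR over groups, AND within each group — mirrors Rule(Vec<AndGroup>).
fn evaluate(rule: &[Vec<Cond>], title: &str) -> bool {
    rule.iter().any(|group| group.iter().all(|c| c.matches(title)))
}

fn main() {
    let rule = vec![
        vec![Cond::Contains("Rust".into()), Cond::Contains("async".into())],
        vec![Cond::Equals("weekly digest".into())],
    ];
    assert!(evaluate(&rule, "Rust async deep dive")); // first group matches
    assert!(evaluate(&rule, "weekly digest"));        // second group matches
    assert!(!evaluate(&rule, "Rust sync basics"));    // neither group fully matches
}
```

An empty outer `Vec` under these semantics matches nothing, while a single empty group matches everything — worth keeping in mind when building rules from user input.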


@@ -0,0 +1,20 @@
use sea_orm::FromJsonQueryResult;
use serde::{Deserialize, Serialize};
// reference: https://www.sea-ql.org/SeaORM/docs/generate-entity/column-types/#json-column
// A bare Vec in an entity is only supported on postgres, where sea-orm maps it to a postgres array
// To get a cross-database array, it must be wrapped in a wrapper type
#[derive(Clone, Debug, PartialEq, Eq, Serialize, Deserialize, FromJsonQueryResult)]
pub struct StringVec(pub Vec<String>);
impl From<Vec<String>> for StringVec {
fn from(value: Vec<String>) -> Self {
Self(value)
}
}
impl From<StringVec> for Vec<String> {
fn from(value: StringVec) -> Self {
value.0
}
}


@@ -0,0 +1,26 @@
//! `SeaORM` Entity. Generated by sea-orm-codegen 0.12.15
use sea_orm::entity::prelude::*;
use crate::rule::Rule;
#[derive(Clone, Debug, PartialEq, DeriveEntityModel, Eq)]
#[sea_orm(table_name = "collection")]
pub struct Model {
#[sea_orm(primary_key)]
pub id: i32,
pub s_id: i64,
pub m_id: i64,
pub name: String,
pub r#type: i32,
pub path: String,
pub created_at: String,
pub latest_row_at: DateTime,
pub rule: Option<Rule>,
pub enabled: bool,
}
#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {}
impl ActiveModelBehavior for ActiveModel {}


@@ -0,0 +1,17 @@
//! `SeaORM` Entity. Generated by sea-orm-codegen 0.12.15
use sea_orm::entity::prelude::*;
#[derive(Clone, Debug, PartialEq, DeriveEntityModel, Eq)]
#[sea_orm(table_name = "config")]
pub struct Model {
#[sea_orm(primary_key)]
pub id: i32,
pub data: String,
pub created_at: String,
}
#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {}
impl ActiveModelBehavior for ActiveModel {}


@@ -2,6 +2,8 @@
use sea_orm::entity::prelude::*;
use crate::rule::Rule;
#[derive(Clone, Debug, PartialEq, DeriveEntityModel, Eq)]
#[sea_orm(table_name = "favorite")]
pub struct Model {
@@ -12,6 +14,9 @@ pub struct Model {
pub name: String,
pub path: String,
pub created_at: String,
pub latest_row_at: DateTime,
pub rule: Option<Rule>,
pub enabled: bool,
}
#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]


@@ -2,6 +2,10 @@
pub mod prelude;
pub mod collection;
pub mod config;
pub mod favorite;
pub mod page;
pub mod submission;
pub mod video;
pub mod watch_later;


@@ -8,7 +8,7 @@ pub struct Model {
#[sea_orm(primary_key)]
pub id: i32,
pub video_id: i32,
pub cid: i32,
pub cid: i64,
pub pid: i32,
pub name: String,
pub width: Option<u32>,

Some files were not shown because too many files have changed in this diff.