Compare commits


143 Commits

Author SHA1 Message Date
jxxghp
b8f4cd5fea v2.2.9
- Upgraded resource packages to improve security
- Optimized the page data refresh mechanism

Note: this upgrade clears the torrent recognition cache once by default
2025-02-14 19:35:49 +08:00
jxxghp
aa1557ad9e fix setup 2025-02-14 17:37:10 +08:00
jxxghp
f03da6daca fix setup 2025-02-14 17:17:16 +08:00
jxxghp
30eb4385d4 fix sites 2025-02-14 13:44:18 +08:00
jxxghp
4c9afcc1a8 fix 2025-02-14 13:32:20 +08:00
jxxghp
dd47432a45 fix 2025-02-14 12:55:32 +08:00
jxxghp
0ba6974bd6 fix #3843
fix #3829
2025-02-13 08:08:13 +08:00
jxxghp
827d8f6d84 add workflow framework 2025-02-12 17:49:01 +08:00
jxxghp
943a462c69 Merge pull request #3885 from InfinityPacer/feature/security 2025-02-11 17:21:04 +08:00
InfinityPacer
a1bc773fb5 feat(security): add AVIF support 2025-02-11 17:10:50 +08:00
InfinityPacer
ac169b7d22 feat(security): add cache default extension for files without suffix 2025-02-11 17:09:43 +08:00
jxxghp
eecbbfea3a Update version.py 2025-02-10 22:28:06 +08:00
jxxghp
635ddb044e add depends for DiscoverMediaSource 2025-02-10 22:05:56 +08:00
jxxghp
1a6123489d Update config.py 2025-02-10 07:52:40 +08:00
jxxghp
4e69195a8d Merge pull request #3876 from InfinityPacer/feature/security 2025-02-10 07:11:28 +08:00
InfinityPacer
e48c8ee652 Revert "fix is_safe_url"
This reverts commit 5e2ad34864.
2025-02-10 02:22:53 +08:00
InfinityPacer
7df07b86b9 feat(security): add cmvideo image for http with port 2025-02-10 02:19:08 +08:00
jxxghp
5e2ad34864 fix is_safe_url 2025-02-09 22:08:21 +08:00
jxxghp
e9a147d43c Update config.py 2025-02-09 16:30:47 +08:00
jxxghp
a340ee045e Update config.py 2025-02-09 16:28:41 +08:00
jxxghp
12405f3c34 v2.2.8
- Recommendations can now be extended via plugins, with support for showing/hiding charts
- Improved support for discover-type plugins and strengthened plugin UI configuration capabilities
- Optimized the genre filter conditions for TheMovieDB discovery
- The DOH option now defaults to off
2025-02-09 12:14:40 +08:00
jxxghp
1e465ee231 refactor: optimize API structure 2025-02-09 11:44:53 +08:00
jxxghp
f06c24c23e refactor apis 2025-02-08 21:47:43 +08:00
jxxghp
4b93ee4843 Update version.py 2025-02-08 20:19:39 +08:00
jxxghp
c022e05ab9 DOH_ENABLE => false 2025-02-08 16:50:51 +08:00
jxxghp
c2a0d9d657 add media/seasons api 2025-02-08 12:44:47 +08:00
jxxghp
6fcf2c2f1f add SECURITY_IMAGE_DOMAINS 2025-02-08 08:18:58 +08:00
jxxghp
bc37daef58 - Add image security domains to support discover plugins 2025-02-07 18:23:25 +08:00
jxxghp
fab5995c4e feat: add security domain thetvdb.com 2025-02-07 18:11:36 +08:00
jxxghp
0ba8aa75f5 v2.2.7 2025-02-07 08:12:01 +08:00
jxxghp
e24b3ed07a feat: fall back to title and year for conversion 2025-02-06 20:31:37 +08:00
jxxghp
f9bddcb406 feat: subscriptions support generic mediaid 2025-02-06 19:19:43 +08:00
jxxghp
247b3b24a1 fix DiscoverMediaSource 2025-02-06 18:03:27 +08:00
jxxghp
759c18acda feat: add DiscoverSource and MediaRecognizeConvert events 2025-02-06 17:35:58 +08:00
jxxghp
b2462c5950 fix 2025-02-06 11:48:56 +08:00
jxxghp
3d947f712c feat: lift media library type restrictions 2025-02-06 11:45:10 +08:00
jxxghp
89d917e487 fix 2025-02-05 17:41:30 +08:00
jxxghp
28b0a20b26 Merge pull request #3852 from zouyonghao/v2 2025-02-05 15:59:29 +08:00
Yonghao Zou
6d4396f4ba fix(jellyfin): support audio event 2025-02-05 15:23:01 +08:00
jxxghp
75dd0f27cf Update version.py 2025-02-04 13:30:02 +08:00
jxxghp
cb9be86c10 Update version.py 2025-02-03 11:57:21 +08:00
jxxghp
0b8f021505 Merge pull request #3845 from InfinityPacer/feature/event 2025-02-03 07:40:58 +08:00
InfinityPacer
f2d3b1c13f feat(event): add mediainfo field for TransferInterceptEventData 2025-02-03 01:46:22 +08:00
InfinityPacer
6f24c6ba49 fix(event): reorder code execution 2025-02-03 00:14:15 +08:00
jxxghp
c5a9df88dc Merge pull request #3841 from InfinityPacer/feature/cache 2025-02-02 12:24:52 +08:00
InfinityPacer
20b2df364a chore(deps): add async_timeout~=5.0.1 for redis if Python < 3.11.3 2025-02-02 12:04:09 +08:00
jxxghp
e89103b96f Merge pull request #3840 from InfinityPacer/feature/cache 2025-02-02 11:30:04 +08:00
InfinityPacer
49f1c9c10b fix(cache): check cache existence when skip_none is False 2025-02-02 11:18:02 +08:00
InfinityPacer
b320c84c4c fix(cache): refine caching behavior for Fanart requests 2025-02-02 11:17:36 +08:00
jxxghp
e916b84ee5 Merge pull request #3839 from InfinityPacer/feature/site 2025-02-02 07:03:14 +08:00
InfinityPacer
18633a3b41 fix(site): update seeding parse for audiences 2025-02-02 01:06:43 +08:00
jxxghp
0683498497 fix #3833 2025-01-31 18:40:12 +08:00
jxxghp
7468fa4f1e Merge pull request #3833 from cddjr/fix_site_test
fix: during network errors, site test falsely reports auth failure or Cookie expiry
2025-01-31 18:27:03 +08:00
景大侠
ab2b33a9fd fix: during network errors, site test falsely reports auth failure or Cookie expiry 2025-01-31 16:53:40 +08:00
InfinityPacer
8bedac023b Merge pull request #3831 from InfinityPacer/feature/event
fix(event): update event type to TransferIntercept
2025-01-31 13:45:05 +08:00
InfinityPacer
7893b41175 fix(event): update event type to TransferIntercept 2025-01-31 13:43:57 +08:00
jxxghp
ab73dbb3cd Update version.py 2025-01-31 12:36:35 +08:00
jxxghp
cb042dbe68 Merge pull request #3830 from InfinityPacer/feature/event 2025-01-31 07:27:30 +08:00
InfinityPacer
bba0d363d7 feat(event): update comment 2025-01-31 01:40:15 +08:00
InfinityPacer
8635d8c53f feat(event): add TransferIntercept event for cancel transfer 2025-01-31 01:37:14 +08:00
jxxghp
dae6894e8b Merge pull request #3829 from cddjr/fix_missing_episodes_info 2025-01-30 21:25:05 +08:00
景大侠
b76991a027 fix: file organizing can miss episode info in certain cases 2025-01-30 21:14:34 +08:00
jxxghp
de61c43db4 fix #3828 2025-01-30 20:10:15 +08:00
jxxghp
890afc2a72 fix bug 2025-01-30 20:04:33 +08:00
jxxghp
8d4e1f3af6 Update user_oper.py 2025-01-30 09:45:30 +08:00
jxxghp
85507a4fff feat: convert to MP username when subscribing via messages 2025-01-30 08:37:35 +08:00
jxxghp
6d395f9866 add UserOper list 2025-01-29 19:55:46 +08:00
jxxghp
c589f42181 fix 2025-01-29 19:02:40 +08:00
jxxghp
87bb121060 Merge pull request #3824 from cddjr/feat_tmdb_content_rating 2025-01-29 17:34:56 +08:00
景大侠
42cd35ab3c feat(TMDB): add content rating scraping 2025-01-29 16:01:44 +08:00
jxxghp
669da0d882 Merge pull request #3821 from InfinityPacer/feature/subscribe 2025-01-29 07:03:41 +08:00
InfinityPacer
9ac1346f80 fix(subscribe): support default filter group when add 2025-01-28 23:44:26 +08:00
jxxghp
f6981734d0 Update version.py 2025-01-28 16:06:03 +08:00
jxxghp
cb6aa61b6b fix apis 2025-01-27 17:56:32 +08:00
jxxghp
2ed9cfcc9a fix api 2025-01-27 17:08:22 +08:00
jxxghp
2e796f41cb fix api 2025-01-27 13:45:57 +08:00
jxxghp
7d13e43c6f fix apis 2025-01-27 11:09:05 +08:00
jxxghp
db684de6e9 Merge pull request #3815 from Akimio521/fix/transfer-background 2025-01-27 08:16:22 +08:00
Akimio521
510ef59aa0 fix: some fileitem.size values are None when calculating tasks 2025-01-27 00:35:41 +08:00
jxxghp
d56083a29e Merge pull request #3810 from Akimio521/feat/alist-token 2025-01-26 08:54:14 +08:00
Akimio521
8aed2b334e feat: support authentication with permanent tokens 2025-01-25 22:31:53 +08:00
jxxghp
3bf27f224c Merge pull request #3808 from InfinityPacer/feature/plugin 2025-01-25 07:42:44 +08:00
InfinityPacer
dc9a54e74f fix(command): ensure command data isolation by using deepcopy 2025-01-25 00:32:42 +08:00
InfinityPacer
79dc194dd6 feat(plugin): add kwargs support for post_message 2025-01-25 00:18:08 +08:00
jxxghp
8e12249201 Merge pull request #3804 from InfinityPacer/feature/subscribe 2025-01-24 17:46:00 +08:00
InfinityPacer
4fa8f5b248 feat(event): use latest subscribe_info in SubscribeModified 2025-01-24 17:26:54 +08:00
InfinityPacer
3089c0c524 feat(event): add old_subscribe_info to event and update triggers 2025-01-24 17:24:29 +08:00
jxxghp
ba1ca0819e fix: check history records when following a subscription 2025-01-23 13:16:32 +08:00
jxxghp
4666b9051d Update version.py 2025-01-23 07:11:50 +08:00
jxxghp
56c524a822 Merge pull request #3792 from InfinityPacer/feature/cache 2025-01-23 06:55:31 +08:00
jxxghp
43e8df1b9f Merge pull request #3791 from InfinityPacer/feature/subscribe 2025-01-23 06:54:09 +08:00
InfinityPacer
dbc465f6e5 fix(cache): update tmdb match_web base_wait to 5 for finer control 2025-01-23 00:11:36 +08:00
InfinityPacer
bfbd3c527c fix(cache): ensure consistent parameter ordering in get_cache_key 2025-01-22 23:53:19 +08:00
InfinityPacer
412405f69b fix(subscribe): optimize site list retrieval in get_sub_sites 2025-01-22 23:22:15 +08:00
jxxghp
12b74eb04f Update subscribe.py 2025-01-22 22:50:51 +08:00
jxxghp
2305a6287a fix #3777 2025-01-22 22:23:15 +08:00
jxxghp
68245be081 fix meta 2025-01-22 22:20:17 +08:00
jxxghp
29e01294bd Merge pull request #3789 from InfinityPacer/feature/cache
fix(cache): enhance fanart image caching
2025-01-22 22:16:37 +08:00
InfinityPacer
d35bee54a6 fix(limit): log accurate wait time after triggering limit 2025-01-22 21:34:59 +08:00
InfinityPacer
bf63be18e4 fix(cache): enhance fanart image caching 2025-01-22 21:34:44 +08:00
jxxghp
3dc7adc61a Update scheduler.py 2025-01-22 19:39:02 +08:00
jxxghp
047d1e0afd Merge pull request #3788 from InfinityPacer/feature/cache
feat(cache): optimize serialization with type-based caching
2025-01-22 18:47:08 +08:00
InfinityPacer
7c017faf31 feat(cache): optimize serialization with type-based caching 2025-01-22 17:41:11 +08:00
jxxghp
7a59565761 fix: reduce the recognition workload for subscription matching 2025-01-22 16:37:49 +08:00
jxxghp
9afb904d40 Merge pull request #3785 from InfinityPacer/feature/cache
fix(cache): enhance tmdb match_web rate-limiting and caching
2025-01-22 15:27:08 +08:00
jxxghp
8189de589a fix #3775 2025-01-22 15:21:10 +08:00
jxxghp
c458d7525d fix #3778 2025-01-22 15:01:24 +08:00
InfinityPacer
5c7bd95f6b fix(cache): enhance tmdb match_web rate-limiting and caching 2025-01-22 14:58:56 +08:00
InfinityPacer
70c4509682 feat(cache): add exists to check key presence in cache backends 2025-01-22 14:25:30 +08:00
jxxghp
f34e36c571 feat: follow a subscription sharer 2025-01-22 13:32:13 +08:00
jxxghp
5054ffe7e4 Merge pull request #3776 from kiri-to/patch-1 2025-01-21 19:38:30 +08:00
kiri-to
ed30933ca2 Update nexus_php.py
Fix potential 429 errors during 'site data refresh'
2025-01-21 19:10:52 +08:00
jxxghp
2a4111ecce Update version.py 2025-01-21 12:56:09 +08:00
jxxghp
5bc8709605 fix: episode numbers not recognized for "all X episodes" releases 2025-01-21 08:16:20 +08:00
jxxghp
efa2edf869 fix 2025-01-20 18:28:06 +08:00
jxxghp
5c1e972feb Update __init__.py 2025-01-20 16:02:53 +08:00
jxxghp
8c23e7a7b7 fix #3760 2025-01-20 08:30:29 +08:00
jxxghp
57183f8cdc Merge pull request #3759 from wikrin/v2 2025-01-20 07:13:26 +08:00
Attente
0481b49c04 refactor(app/helper): optimize module reloading mechanism 2025-01-19 22:40:07 +08:00
jxxghp
7eb9b5e92d Merge pull request #3755 from InfinityPacer/feature/cache 2025-01-19 12:56:56 +08:00
InfinityPacer
2a409d83d4 feat(redis): update redis maxmemory 2025-01-19 12:41:30 +08:00
jxxghp
785a3f5de8 Merge pull request #3752 from InfinityPacer/feature/cache 2025-01-19 08:06:50 +08:00
InfinityPacer
7c17c1c73b feat(redis): update comments 2025-01-19 05:18:49 +08:00
InfinityPacer
0ea429782c feat(redis): optimize serialize 2025-01-19 05:13:31 +08:00
InfinityPacer
7a8f880dbe feat(redis): optimize memory limit and cache cleanup 2025-01-19 04:28:16 +08:00
InfinityPacer
0a86b72110 feat(redis): add encoding for keys and optimize deletion 2025-01-19 04:28:16 +08:00
InfinityPacer
cb5c06ee7e feat(redis): add Redis support 2025-01-19 04:28:15 +08:00
InfinityPacer
9f22ce5cc0 feat(cache): remove maxsize from recommend_cache 2025-01-19 01:26:33 +08:00
jxxghp
86e1fbc28a Merge pull request #3751 from InfinityPacer/feature/cache 2025-01-19 01:02:54 +08:00
InfinityPacer
a5c5f7c718 feat(cache): enhance cache region and decorator 2025-01-19 00:55:45 +08:00
InfinityPacer
ff5d94782f fix(TMDB): adjust trending cache maxsize to 1024 2025-01-18 23:45:03 +08:00
jxxghp
58a1bd2c86 Merge pull request #3750 from wikrin/v2 2025-01-18 07:08:18 +08:00
jxxghp
f78ba6afb0 Merge pull request #3749 from InfinityPacer/feature/cache 2025-01-18 07:07:52 +08:00
Attente
331f3455f8 fix: specify episode numbers 2025-01-18 02:52:26 +08:00
InfinityPacer
ad0241b7f1 feat(cache): set default skip_empty to False 2025-01-18 02:44:56 +08:00
InfinityPacer
d9508533e1 feat(cache): add cache region support 2025-01-18 02:32:08 +08:00
InfinityPacer
6d2059447e feat(cache): enhance get_plugins to skip empty during network errors 2025-01-18 02:14:01 +08:00
InfinityPacer
11d4f27268 feat(cache): migrate cachetools usage to unified cache backend 2025-01-18 02:12:20 +08:00
InfinityPacer
a29f987649 feat(cache): add cache backend implementations and decorator support 2025-01-18 02:10:17 +08:00
jxxghp
3e692c790e Merge remote-tracking branch 'origin/v2' into v2 2025-01-17 20:31:40 +08:00
jxxghp
35cc214492 fix #3743 2025-01-17 20:31:23 +08:00
jxxghp
bae7bff70d fix #3744 2025-01-17 16:41:01 +08:00
jxxghp
71ef6f6a61 fix media_files Exception 2025-01-17 12:32:55 +08:00
85 changed files with 2237 additions and 1353 deletions

app/actions/__init__.py

@@ -0,0 +1,25 @@
from abc import ABC, abstractmethod

from pydantic.main import BaseModel

from app.schemas import ActionContext


class BaseAction(BaseModel, ABC):
    """
    工作流动作基类
    """

    @property
    @abstractmethod
    def name(self) -> str:
        pass

    @property
    @abstractmethod
    def description(self) -> str:
        pass

    @abstractmethod
    async def execute(self, params: dict, context: ActionContext) -> ActionContext:
        raise NotImplementedError
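The "add workflow framework" commit introduces this base class: a concrete action only has to supply `name`, `description`, and an async `execute`. A minimal runnable sketch of a subclass (the `EchoAction` name and behavior are hypothetical, and `ActionContext` is stubbed here as a plain dataclass because the real pydantic schema lives in `app.schemas`):

```python
import asyncio
from abc import ABC, abstractmethod
from dataclasses import dataclass, field


@dataclass
class ActionContext:
    # Stand-in for app.schemas.ActionContext (assumed shape, illustration only)
    medias: list = field(default_factory=list)


class BaseAction(ABC):
    """Simplified stand-in for the base class above (pydantic omitted)."""

    @property
    @abstractmethod
    def name(self) -> str:
        ...

    @property
    @abstractmethod
    def description(self) -> str:
        ...

    @abstractmethod
    async def execute(self, params: dict, context: ActionContext) -> ActionContext:
        ...


class EchoAction(BaseAction):
    """Hypothetical concrete action: appends a marker to the context."""

    @property
    def name(self) -> str:
        return "echo"

    @property
    def description(self) -> str:
        return "Append a marker to the context media list"

    async def execute(self, params: dict, context: ActionContext) -> ActionContext:
        context.medias.append(params.get("marker", "done"))
        return context


ctx = asyncio.run(EchoAction().execute({"marker": "ok"}, ActionContext()))
print(ctx.medias)  # ['ok']
```

Real subclasses would inherit from the pydantic-based `BaseAction` shown in the diff; the stub keeps only the abstract-method contract.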


@@ -2,7 +2,7 @@ from fastapi import APIRouter
from app.api.endpoints import login, user, site, message, webhook, subscribe, \
media, douban, search, plugin, tmdb, history, system, download, dashboard, \
transfer, mediaserver, bangumi, storage
transfer, mediaserver, bangumi, storage, discover, recommend, workflow
api_router = APIRouter()
api_router.include_router(login.router, prefix="/login", tags=["login"])
@@ -24,3 +24,6 @@ api_router.include_router(storage.router, prefix="/storage", tags=["storage"])
api_router.include_router(transfer.router, prefix="/transfer", tags=["transfer"])
api_router.include_router(mediaserver.router, prefix="/mediaserver", tags=["mediaserver"])
api_router.include_router(bangumi.router, prefix="/bangumi", tags=["bangumi"])
api_router.include_router(discover.router, prefix="/discover", tags=["discover"])
api_router.include_router(recommend.router, prefix="/recommend", tags=["recommend"])
api_router.include_router(workflow.router, prefix="/workflow", tags=["workflow"])


@@ -4,23 +4,12 @@ from fastapi import APIRouter, Depends
from app import schemas
from app.chain.bangumi import BangumiChain
from app.chain.recommend import RecommendChain
from app.core.context import MediaInfo
from app.core.security import verify_token
router = APIRouter()
@router.get("/calendar", summary="Bangumi每日放送", response_model=List[schemas.MediaInfo])
def calendar(page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览Bangumi每日放送
"""
return RecommendChain().bangumi_calendar(page=page, count=count)
@router.get("/credits/{bangumiid}", summary="查询Bangumi演职员表", response_model=List[schemas.MediaPerson])
def bangumi_credits(bangumiid: int,
page: int = 1,
@@ -61,13 +50,14 @@ def bangumi_person(person_id: int,
@router.get("/person/credits/{person_id}", summary="人物参演作品", response_model=List[schemas.MediaInfo])
def bangumi_person_credits(person_id: int,
page: int = 1,
count: int = 20,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据人物ID查询人物参演作品
"""
medias = BangumiChain().person_credits(person_id=person_id)
if medias:
return [media.to_dict() for media in medias[(page - 1) * 20: page * 20]]
return [media.to_dict() for media in medias[(page - 1) * count: page * count]]
return []
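The change above replaces a hardcoded page size of 20 with the `count` parameter. The slice arithmetic, extracted into a standalone helper for illustration (the `paginate` name is hypothetical, not a function from this codebase):

```python
def paginate(items: list, page: int = 1, count: int = 20) -> list:
    """Return the page-th chunk of items, count entries per page (1-indexed)."""
    return items[(page - 1) * count: page * count]


medias = list(range(50))
print(paginate(medias, page=2, count=20))  # items 20..39
print(paginate(medias, page=3, count=20))  # items 40..49 (short last page)
```

With the old hardcoded slice, passing `count=30` would still have returned 20-item pages; the fix makes the page size follow the query parameter.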


@@ -0,0 +1,130 @@
from typing import Any, List
from fastapi import APIRouter, Depends
from app import schemas
from app.core.event import eventmanager
from app.core.security import verify_token
from app.schemas import DiscoverSourceEventData
from app.schemas.types import ChainEventType, MediaType
from app.chain.bangumi import BangumiChain
from app.chain.douban import DoubanChain
from app.chain.tmdb import TmdbChain
router = APIRouter()
@router.get("/source", summary="获取探索数据源", response_model=List[schemas.DiscoverMediaSource])
def source(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
获取探索数据源
"""
# 广播事件,请示额外的探索数据源支持
event_data = DiscoverSourceEventData()
event = eventmanager.send_event(ChainEventType.DiscoverSource, event_data)
# 使用事件返回的上下文数据
if event and event.event_data:
event_data: DiscoverSourceEventData = event.event_data
if event_data.extra_sources:
return event_data.extra_sources
return []
@router.get("/bangumi", summary="探索Bangumi", response_model=List[schemas.MediaInfo])
def bangumi(type: int = 2,
cat: int = None,
sort: str = 'rank',
year: int = None,
page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
探索Bangumi
"""
medias = BangumiChain().discover(type=type, cat=cat, sort=sort, year=year,
limit=count, offset=(page - 1) * count)
if medias:
return [media.to_dict() for media in medias]
return []
@router.get("/douban_movies", summary="探索豆瓣电影", response_model=List[schemas.MediaInfo])
def douban_movies(sort: str = "R",
tags: str = "",
page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览豆瓣电影信息
"""
movies = DoubanChain().douban_discover(mtype=MediaType.MOVIE,
sort=sort, tags=tags, page=page, count=count)
return [media.to_dict() for media in movies] if movies else []
@router.get("/douban_tvs", summary="探索豆瓣剧集", response_model=List[schemas.MediaInfo])
def douban_tvs(sort: str = "R",
tags: str = "",
page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览豆瓣剧集信息
"""
tvs = DoubanChain().douban_discover(mtype=MediaType.TV,
sort=sort, tags=tags, page=page, count=count)
return [media.to_dict() for media in tvs] if tvs else []
@router.get("/tmdb_movies", summary="探索TMDB电影", response_model=List[schemas.MediaInfo])
def tmdb_movies(sort_by: str = "popularity.desc",
with_genres: str = "",
with_original_language: str = "",
with_keywords: str = "",
with_watch_providers: str = "",
vote_average: float = 0,
vote_count: int = 0,
release_date: str = "",
page: int = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览TMDB电影信息
"""
movies = TmdbChain().tmdb_discover(mtype=MediaType.MOVIE,
sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
return [movie.to_dict() for movie in movies] if movies else []
@router.get("/tmdb_tvs", summary="探索TMDB剧集", response_model=List[schemas.MediaInfo])
def tmdb_tvs(sort_by: str = "popularity.desc",
with_genres: str = "",
with_original_language: str = "",
with_keywords: str = "",
with_watch_providers: str = "",
vote_average: float = 0,
vote_count: int = 0,
release_date: str = "",
page: int = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览TMDB剧集信息
"""
tvs = TmdbChain().tmdb_discover(mtype=MediaType.TV,
sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
return [tv.to_dict() for tv in tvs] if tvs else []
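The `/source` endpoint above follows one pattern: broadcast a chain event, then return whatever `extra_sources` listeners appended. A toy stand-in for that contract (the `EventManager` and data class here are simplified stubs, not the project's real `eventmanager` or schema implementations):

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class DiscoverSourceEventData:
    # Stub of the event payload: listeners append extra sources here
    extra_sources: List[dict] = field(default_factory=list)


class EventManager:
    """Toy synchronous event bus mirroring the send_event pattern above."""

    def __init__(self):
        self._listeners: List[Callable] = []

    def register(self, listener: Callable) -> Callable:
        self._listeners.append(listener)
        return listener

    def send_event(self, data):
        # Deliver the payload to every listener, then hand it back to the caller
        for listener in self._listeners:
            listener(data)
        return data


eventmanager = EventManager()


@eventmanager.register
def provide_extra_source(data: DiscoverSourceEventData):
    # A hypothetical plugin contributing one extra discover source
    data.extra_sources.append({"name": "MySource", "mediaid_prefix": "mysource"})


event_data = eventmanager.send_event(DiscoverSourceEventData())
print(event_data.extra_sources)
```

The endpoint then returns `event_data.extra_sources` if any listener populated it, otherwise an empty list.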


@@ -4,7 +4,6 @@ from fastapi import APIRouter, Depends
from app import schemas
from app.chain.douban import DoubanChain
from app.chain.recommend import RecommendChain
from app.core.context import MediaInfo
from app.core.security import verify_token
from app.schemas import MediaType
@@ -34,100 +33,6 @@ def douban_person_credits(person_id: int,
return []
@router.get("/showing", summary="豆瓣正在热映", response_model=List[schemas.MediaInfo])
def movie_showing(page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览豆瓣正在热映
"""
return RecommendChain().douban_movie_showing(page=page, count=count)
@router.get("/movies", summary="豆瓣电影", response_model=List[schemas.MediaInfo])
def douban_movies(sort: str = "R",
tags: str = "",
page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览豆瓣电影信息
"""
return RecommendChain().douban_movies(sort=sort, tags=tags, page=page, count=count)
@router.get("/tvs", summary="豆瓣剧集", response_model=List[schemas.MediaInfo])
def douban_tvs(sort: str = "R",
tags: str = "",
page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览豆瓣剧集信息
"""
return RecommendChain().douban_tvs(sort=sort, tags=tags, page=page, count=count)
@router.get("/movie_top250", summary="豆瓣电影TOP250", response_model=List[schemas.MediaInfo])
def movie_top250(page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览豆瓣剧集信息
"""
return RecommendChain().douban_movie_top250(page=page, count=count)
@router.get("/tv_weekly_chinese", summary="豆瓣国产剧集周榜", response_model=List[schemas.MediaInfo])
def tv_weekly_chinese(page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
中国每周剧集口碑榜
"""
return RecommendChain().douban_tv_weekly_chinese(page=page, count=count)
@router.get("/tv_weekly_global", summary="豆瓣全球剧集周榜", response_model=List[schemas.MediaInfo])
def tv_weekly_global(page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
全球每周剧集口碑榜
"""
return RecommendChain().douban_tv_weekly_global(page=page, count=count)
@router.get("/tv_animation", summary="豆瓣动画剧集", response_model=List[schemas.MediaInfo])
def tv_animation(page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
热门动画剧集
"""
return RecommendChain().douban_tv_animation(page=page, count=count)
@router.get("/movie_hot", summary="豆瓣热门电影", response_model=List[schemas.MediaInfo])
def movie_hot(page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
热门电影
"""
return RecommendChain().douban_movie_hot(page=page, count=count)
@router.get("/tv_hot", summary="豆瓣热门电视剧", response_model=List[schemas.MediaInfo])
def tv_hot(page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
热门电视剧
"""
return RecommendChain().douban_tv_hot(page=page, count=count)
@router.get("/credits/{doubanid}/{type_name}", summary="豆瓣演员阵容", response_model=List[schemas.MediaPerson])
def douban_credits(doubanid: str,
type_name: str,


@@ -5,11 +5,14 @@ from fastapi import APIRouter, Depends
from app import schemas
from app.chain.media import MediaChain
from app.chain.tmdb import TmdbChain
from app.core.config import settings
from app.core.context import Context
from app.core.event import eventmanager
from app.core.metainfo import MetaInfo, MetaInfoPath
from app.core.security import verify_token, verify_apitoken
from app.schemas import MediaType
from app.schemas import MediaType, MediaRecognizeConvertEventData
from app.schemas.types import ChainEventType
router = APIRouter()
@@ -72,6 +75,7 @@ def search(title: str,
"""
模糊搜索媒体/人物信息列表 media媒体信息person人物信息
"""
def __get_source(obj: Union[schemas.MediaInfo, schemas.MediaPerson, dict]):
"""
获取对象属性
@@ -131,25 +135,90 @@ def category(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
return MediaChain().media_category() or {}
@router.get("/seasons", summary="查询媒体季信息", response_model=List[schemas.MediaSeason])
def seasons(mediaid: str = None,
title: str = None,
year: int = None,
season: int = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询媒体季信息
"""
if mediaid:
if mediaid.startswith("tmdb:"):
tmdbid = int(mediaid[5:])
seasons_info = TmdbChain().tmdb_seasons(tmdbid=tmdbid)
if seasons_info:
if season:
return [sea for sea in seasons_info if sea.season_number == season]
return seasons_info
if title:
meta = MetaInfo(title)
if year:
meta.year = year
mediainfo = MediaChain().recognize_media(meta, mtype=MediaType.TV)
if mediainfo:
if settings.RECOGNIZE_SOURCE == "themoviedb":
seasons_info = TmdbChain().tmdb_seasons(tmdbid=mediainfo.tmdb_id)
if seasons_info:
if season:
return [sea for sea in seasons_info if sea.season_number == season]
return seasons_info
else:
sea = season or 1
return schemas.MediaSeason(
season_number=sea,
poster_path=mediainfo.poster_path,
name=f"{sea}",
air_date=mediainfo.release_date,
overview=mediainfo.overview,
vote_average=mediainfo.vote_average,
episode_count=mediainfo.number_of_episodes
)
return []
@router.get("/{mediaid}", summary="查询媒体详情", response_model=schemas.MediaInfo)
def media_info(mediaid: str, type_name: str,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
def detail(mediaid: str, type_name: str, title: str = None, year: int = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
根据媒体ID查询themoviedb或豆瓣媒体信息type_name: 电影/电视剧
"""
mtype = MediaType(type_name)
tmdbid, doubanid, bangumiid = None, None, None
mediainfo = None
if mediaid.startswith("tmdb:"):
tmdbid = int(mediaid[5:])
mediainfo = MediaChain().recognize_media(tmdbid=int(mediaid[5:]), mtype=mtype)
elif mediaid.startswith("douban:"):
doubanid = mediaid[7:]
mediainfo = MediaChain().recognize_media(doubanid=mediaid[7:], mtype=mtype)
elif mediaid.startswith("bangumi:"):
bangumiid = int(mediaid[8:])
if not tmdbid and not doubanid and not bangumiid:
return schemas.MediaInfo()
mediainfo = MediaChain().recognize_media(bangumiid=int(mediaid[8:]), mtype=mtype)
else:
# 广播事件解析媒体信息
event_data = MediaRecognizeConvertEventData(
mediaid=mediaid,
convert_type=settings.RECOGNIZE_SOURCE
)
event = eventmanager.send_event(ChainEventType.MediaRecognizeConvert, event_data)
# 使用事件返回的上下文数据
if event and event.event_data:
event_data: MediaRecognizeConvertEventData = event.event_data
if event_data.media_dict:
new_id = event_data.media_dict.get("id")
if event_data.convert_type == "themoviedb":
mediainfo = MediaChain().recognize_media(tmdbid=new_id, mtype=mtype)
elif event_data.convert_type == "douban":
mediainfo = MediaChain().recognize_media(doubanid=new_id, mtype=mtype)
elif title:
# 使用名称识别兜底
meta = MetaInfo(title)
if year:
meta.year = year
if mtype:
meta.type = mtype
mediainfo = MediaChain().recognize_media(meta=meta)
# 识别
mediainfo = MediaChain().recognize_media(tmdbid=tmdbid, doubanid=doubanid, bangumiid=bangumiid, mtype=mtype)
if mediainfo:
MediaChain().obtain_images(mediainfo)
return mediainfo.to_dict()
return schemas.MediaInfo()
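Both the `/seasons` and detail endpoints above branch on a `mediaid` prefix such as `tmdb:`, `douban:`, or `bangumi:`. The convention can be sketched as a standalone helper (hypothetical, not a function from this codebase):

```python
from typing import Optional, Tuple


def parse_mediaid(mediaid: str) -> Tuple[Optional[str], str]:
    """Split a prefixed media ID such as 'tmdb:123' into (source, raw id).

    Unknown prefixes return (None, mediaid) so callers can fall back to
    event-based conversion or title recognition, as the endpoints above do.
    """
    for prefix in ("tmdb", "douban", "bangumi"):
        if mediaid.startswith(f"{prefix}:"):
            return prefix, mediaid[len(prefix) + 1:]
    return None, mediaid


print(parse_mediaid("tmdb:550"))        # ('tmdb', '550')
print(parse_mediaid("bangumi:12"))      # ('bangumi', '12')
print(parse_mediaid("imdb:tt0137523"))  # (None, 'imdb:tt0137523')
```

In the diff, the `None` case is what triggers the `MediaRecognizeConvert` event broadcast and, failing that, the title/year recognition fallback.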


@@ -0,0 +1,191 @@
from typing import Any, List
from fastapi import APIRouter, Depends
from app import schemas
from app.core.event import eventmanager
from app.core.security import verify_token
from app.schemas.types import ChainEventType
from app.chain.recommend import RecommendChain
from app.schemas import RecommendSourceEventData
router = APIRouter()
@router.get("/source", summary="获取推荐数据源", response_model=List[schemas.RecommendMediaSource])
def source(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
获取推荐数据源
"""
# 广播事件,请示额外的推荐数据源支持
event_data = RecommendSourceEventData()
event = eventmanager.send_event(ChainEventType.RecommendSource, event_data)
# 使用事件返回的上下文数据
if event and event.event_data:
event_data: RecommendSourceEventData = event.event_data
if event_data.extra_sources:
return event_data.extra_sources
return []
@router.get("/bangumi_calendar", summary="Bangumi每日放送", response_model=List[schemas.MediaInfo])
def bangumi_calendar(page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览Bangumi每日放送
"""
return RecommendChain().bangumi_calendar(page=page, count=count)
@router.get("/douban_showing", summary="豆瓣正在热映", response_model=List[schemas.MediaInfo])
def douban_showing(page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览豆瓣正在热映
"""
return RecommendChain().douban_movie_showing(page=page, count=count)
@router.get("/douban_movies", summary="豆瓣电影", response_model=List[schemas.MediaInfo])
def douban_movies(sort: str = "R",
tags: str = "",
page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览豆瓣电影信息
"""
return RecommendChain().douban_movies(sort=sort, tags=tags, page=page, count=count)
@router.get("/douban_tvs", summary="豆瓣剧集", response_model=List[schemas.MediaInfo])
def douban_tvs(sort: str = "R",
tags: str = "",
page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览豆瓣剧集信息
"""
return RecommendChain().douban_tvs(sort=sort, tags=tags, page=page, count=count)
@router.get("/douban_movie_top250", summary="豆瓣电影TOP250", response_model=List[schemas.MediaInfo])
def douban_movie_top250(page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览豆瓣剧集信息
"""
return RecommendChain().douban_movie_top250(page=page, count=count)
@router.get("/douban_tv_weekly_chinese", summary="豆瓣国产剧集周榜", response_model=List[schemas.MediaInfo])
def douban_tv_weekly_chinese(page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
中国每周剧集口碑榜
"""
return RecommendChain().douban_tv_weekly_chinese(page=page, count=count)
@router.get("/douban_tv_weekly_global", summary="豆瓣全球剧集周榜", response_model=List[schemas.MediaInfo])
def douban_tv_weekly_global(page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
全球每周剧集口碑榜
"""
return RecommendChain().douban_tv_weekly_global(page=page, count=count)
@router.get("/douban_tv_animation", summary="豆瓣动画剧集", response_model=List[schemas.MediaInfo])
def douban_tv_animation(page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
热门动画剧集
"""
return RecommendChain().douban_tv_animation(page=page, count=count)
@router.get("/douban_movie_hot", summary="豆瓣热门电影", response_model=List[schemas.MediaInfo])
def douban_movie_hot(page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
热门电影
"""
return RecommendChain().douban_movie_hot(page=page, count=count)
@router.get("/douban_tv_hot", summary="豆瓣热门电视剧", response_model=List[schemas.MediaInfo])
def douban_tv_hot(page: int = 1,
count: int = 30,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
热门电视剧
"""
return RecommendChain().douban_tv_hot(page=page, count=count)
@router.get("/tmdb_movies", summary="TMDB电影", response_model=List[schemas.MediaInfo])
def tmdb_movies(sort_by: str = "popularity.desc",
with_genres: str = "",
with_original_language: str = "",
with_keywords: str = "",
with_watch_providers: str = "",
vote_average: float = 0,
vote_count: int = 0,
release_date: str = "",
page: int = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览TMDB电影信息
"""
return RecommendChain().tmdb_movies(sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
@router.get("/tmdb_tvs", summary="TMDB剧集", response_model=List[schemas.MediaInfo])
def tmdb_tvs(sort_by: str = "popularity.desc",
with_genres: str = "",
with_original_language: str = "",
with_keywords: str = "",
with_watch_providers: str = "",
vote_average: float = 0,
vote_count: int = 0,
release_date: str = "",
page: int = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览TMDB剧集信息
"""
return RecommendChain().tmdb_tvs(sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
@router.get("/tmdb_trending", summary="TMDB流行趋势", response_model=List[schemas.MediaInfo])
def tmdb_trending(page: int = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
TMDB流行趋势
"""
return RecommendChain().tmdb_trending(page=page)


@@ -6,8 +6,11 @@ from app import schemas
from app.chain.media import MediaChain
from app.chain.search import SearchChain
from app.core.config import settings
from app.core.event import eventmanager
from app.core.metainfo import MetaInfo
from app.core.security import verify_token
from app.schemas.types import MediaType
from app.schemas import MediaRecognizeConvertEventData
from app.schemas.types import MediaType, ChainEventType
router = APIRouter()
@@ -25,6 +28,8 @@ def search_latest(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
def search_by_id(mediaid: str,
mtype: str = None,
area: str = "title",
title: str = None,
year: int = None,
season: str = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
@@ -34,6 +39,8 @@ def search_by_id(mediaid: str,
mtype = MediaType(mtype)
if season:
season = int(season)
torrents = None
# 根据前缀识别媒体ID
if mediaid.startswith("tmdb:"):
tmdbid = int(mediaid.replace("tmdb:", ""))
if settings.RECOGNIZE_SOURCE == "douban":
@@ -79,8 +86,44 @@ def search_by_id(mediaid: str,
else:
return schemas.Response(success=False, message="未识别到豆瓣媒体信息")
else:
return schemas.Response(success=False, message="未知的媒体ID")
# 未知前缀,广播事件解析媒体信息
event_data = MediaRecognizeConvertEventData(
mediaid=mediaid,
convert_type=settings.RECOGNIZE_SOURCE
)
event = eventmanager.send_event(ChainEventType.MediaRecognizeConvert, event_data)
# 使用事件返回的上下文数据
if event and event.event_data:
event_data: MediaRecognizeConvertEventData = event.event_data
if event_data.media_dict:
search_id = event_data.media_dict.get("id")
if event_data.convert_type == "themoviedb":
torrents = SearchChain().search_by_id(tmdbid=search_id,
mtype=mtype, area=area, season=season)
elif event_data.convert_type == "douban":
torrents = SearchChain().search_by_id(doubanid=search_id,
mtype=mtype, area=area, season=season)
else:
if not title:
return schemas.Response(success=False, message="未知的媒体ID")
# 使用名称识别兜底
meta = MetaInfo(title)
if year:
meta.year = year
if mtype:
meta.type = mtype
if season:
meta.type = MediaType.TV
meta.begin_season = season
mediainfo = MediaChain().recognize_media(meta=meta)
if mediainfo:
if settings.RECOGNIZE_SOURCE == "themoviedb":
torrents = SearchChain().search_by_id(tmdbid=mediainfo.tmdb_id,
mtype=mtype, area=area, season=season)
else:
torrents = SearchChain().search_by_id(doubanid=mediainfo.douban_id,
mtype=mtype, area=area, season=season)
# 返回搜索结果
if not torrents:
return schemas.Response(success=False, message="未搜索到任何资源")
else:
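When the mediaid carries an unknown prefix, the hunk above broadcasts a `MediaRecognizeConvert` event and lets any plugin fill in `media_dict` before falling back to title-based recognition. A stripped-down sketch of that broadcast-and-mutate pattern — the class names and the `imdb:` resolver are illustrative stand-ins, not the project's real `eventmanager` API:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional

@dataclass
class ConvertEventData:
    mediaid: str
    convert_type: str
    media_dict: Optional[dict] = None  # filled in by a listener

class EventManager:
    def __init__(self):
        self._listeners: Dict[str, List[Callable]] = {}

    def register(self, event_type: str, handler: Callable) -> None:
        self._listeners.setdefault(event_type, []).append(handler)

    def send_event(self, event_type: str, data):
        # Each listener may mutate the shared event data in place
        for handler in self._listeners.get(event_type, []):
            handler(data)
        return data

manager = EventManager()

def imdb_resolver(data: ConvertEventData) -> None:
    # Hypothetical plugin: resolve "imdb:" ids into a TMDB id
    if data.mediaid.startswith("imdb:") and data.convert_type == "themoviedb":
        data.media_dict = {"id": 550}

manager.register("MediaRecognizeConvert", imdb_resolver)
```

The caller then checks `event.media_dict` exactly as the diff does: if no listener resolved the id, the dict stays `None` and the title fallback runs.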

View File

@@ -15,10 +15,11 @@ from app.db import get_db
from app.db.models.subscribe import Subscribe
from app.db.models.subscribehistory import SubscribeHistory
from app.db.models.user import User
from app.db.systemconfig_oper import SystemConfigOper
from app.db.user_oper import get_current_active_user
from app.helper.subscribe import SubscribeHelper
from app.scheduler import Scheduler
from app.schemas.types import MediaType, EventType
from app.schemas.types import MediaType, EventType, SystemConfigKey
router = APIRouter()
@@ -81,6 +82,7 @@ def create_subscribe(
season=subscribe_in.season,
doubanid=subscribe_in.doubanid,
bangumiid=subscribe_in.bangumiid,
mediaid=subscribe_in.mediaid,
username=current_user.name,
best_version=subscribe_in.best_version,
save_path=subscribe_in.save_path,
@@ -108,6 +110,7 @@ def update_subscribe(
if not subscribe:
return schemas.Response(success=False, message="订阅不存在")
# 避免更新缺失集数
old_subscribe_dict = subscribe.to_dict()
subscribe_dict = subscribe_in.dict()
if not subscribe_in.lack_episode:
# 没有缺失集数时缺失集数清空避免更新为0
@@ -125,7 +128,8 @@ def update_subscribe(
# 发送订阅调整事件
eventmanager.send_event(EventType.SubscribeModified, {
"subscribe_id": subscribe.id,
"subscribe_info": subscribe_dict,
"old_subscribe_info": old_subscribe_dict,
"subscribe_info": subscribe.to_dict(),
})
return schemas.Response(success=True)
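Several handlers in this file now snapshot the subscription with `to_dict()` before mutating it, then emit a `SubscribeModified` event carrying both the old and the new state. A self-contained sketch of that before/after pattern, with the event plumbing reduced to a plain list of emitted payloads:

```python
from dataclasses import dataclass, asdict
from typing import List

@dataclass
class Subscribe:
    id: int
    state: str = "N"
    lack_episode: int = 0

    def to_dict(self) -> dict:
        return asdict(self)

events: List[dict] = []  # stand-in for eventmanager.send_event

def update_state(subscribe: Subscribe, state: str) -> None:
    old = subscribe.to_dict()  # snapshot BEFORE mutation
    subscribe.state = state
    events.append({
        "subscribe_id": subscribe.id,
        "old_subscribe_info": old,
        "subscribe_info": subscribe.to_dict(),
    })
```

Taking the snapshot before `update()` is the whole point of the change: listeners can diff the two dicts to see exactly which fields moved.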
@@ -145,9 +149,16 @@ def update_subscribe_status(
valid_states = ["R", "P", "S"]
if state not in valid_states:
return schemas.Response(success=False, message="无效的订阅状态")
old_subscribe_dict = subscribe.to_dict()
subscribe.update(db, {
"state": state
})
# 发送订阅调整事件
eventmanager.send_event(EventType.SubscribeModified, {
"subscribe_id": subscribe.id,
"old_subscribe_info": old_subscribe_dict,
"subscribe_info": subscribe.to_dict(),
})
return schemas.Response(success=True)
@@ -161,7 +172,6 @@ def subscribe_mediaid(
"""
根据 TMDBID/豆瓣ID/BangumiId 查询订阅(tmdb:/douban:)
"""
result = None
title_check = False
if mediaid.startswith("tmdb:"):
tmdbid = mediaid[5:]
@@ -182,6 +192,10 @@ def subscribe_mediaid(
result = Subscribe.get_by_bangumiid(db, int(bangumiid))
if not result and title:
title_check = True
else:
result = Subscribe.get_by_mediaid(db, mediaid)
if not result and title:
title_check = True
# 使用名称检查订阅
if title_check and title:
meta = MetaInfo(title)
@@ -212,11 +226,18 @@ def reset_subscribes(
"""
subscribe = Subscribe.get(db, subid)
if subscribe:
old_subscribe_dict = subscribe.to_dict()
subscribe.update(db, {
"note": [],
"lack_episode": subscribe.total_episode,
"state": "R"
})
# 发送订阅调整事件
eventmanager.send_event(EventType.SubscribeModified, {
"subscribe_id": subscribe.id,
"old_subscribe_info": old_subscribe_dict,
"subscribe_info": subscribe.to_dict(),
})
return schemas.Response(success=True)
return schemas.Response(success=False, message="订阅不存在")
@@ -294,6 +315,10 @@ def delete_subscribe_by_mediaid(
subscribe = Subscribe().get_by_doubanid(db, doubanid)
if subscribe:
delete_subscribes.append(subscribe)
else:
subscribe = Subscribe().get_by_mediaid(db, mediaid)
if subscribe:
delete_subscribes.append(subscribe)
for subscribe in delete_subscribes:
Subscribe().delete(db, subscribe.id)
# 发送事件
@@ -497,6 +522,42 @@ def subscribe_fork(
return result
@router.get("/follow", summary="查询已Follow的订阅分享人", response_model=List[str])
def followed_subscribers(_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
查询已Follow的订阅分享人
"""
return SystemConfigOper().get(SystemConfigKey.FollowSubscribers) or []
@router.post("/follow", summary="Follow订阅分享人", response_model=schemas.Response)
def follow_subscriber(
share_uid: str = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
Follow订阅分享人
"""
subscribers = SystemConfigOper().get(SystemConfigKey.FollowSubscribers) or []
if share_uid and share_uid not in subscribers:
subscribers.append(share_uid)
SystemConfigOper().set(SystemConfigKey.FollowSubscribers, subscribers)
return schemas.Response(success=True)
@router.delete("/follow", summary="取消Follow订阅分享人", response_model=schemas.Response)
def unfollow_subscriber(
share_uid: str = None,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
取消Follow订阅分享人
"""
subscribers = SystemConfigOper().get(SystemConfigKey.FollowSubscribers) or []
if share_uid and share_uid in subscribers:
subscribers.remove(share_uid)
SystemConfigOper().set(SystemConfigKey.FollowSubscribers, subscribers)
return schemas.Response(success=True)
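The follow/unfollow endpoints above keep the followed-sharer list under a single system-config key and modify it idempotently: following twice or unfollowing a non-member is a no-op. A sketch of the same read-modify-write pattern with the `SystemConfigOper` store replaced by an in-memory dict for illustration:

```python
_config: dict = {}  # stand-in for the SystemConfig store

def follow(share_uid: str) -> list:
    subscribers = _config.get("FollowSubscribers") or []
    if share_uid and share_uid not in subscribers:
        subscribers.append(share_uid)
        _config["FollowSubscribers"] = subscribers
    return subscribers

def unfollow(share_uid: str) -> list:
    subscribers = _config.get("FollowSubscribers") or []
    if share_uid and share_uid in subscribers:
        subscribers.remove(share_uid)
        _config["FollowSubscribers"] = subscribers
    return subscribers
```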
@router.get("/shares", summary="查询分享的订阅", response_model=List[schemas.SubscribeShare])
def popular_subscribes(
name: str = None,

View File

@@ -8,6 +8,7 @@ from pathlib import Path
from typing import Optional, Union
import aiofiles
import pillow_avif # noqa 用于自动注册AVIF支持
from PIL import Image
from fastapi import APIRouter, Depends, HTTPException, Header, Request, Response
from fastapi.responses import StreamingResponse
@@ -50,7 +51,6 @@ def fetch_image(
"""
处理图片缓存逻辑支持HTTP缓存和磁盘缓存
"""
if not url:
raise HTTPException(status_code=404, detail="URL not provided")
@@ -68,6 +68,10 @@ def fetch_image(
sanitized_path = SecurityUtils.sanitize_url_path(url)
cache_path = settings.CACHE_PATH / "images" / sanitized_path
# 没有文件类型,则添加后缀,在恶意文件类型和实际需求下的折衷选择
if not cache_path.suffix:
cache_path = cache_path.with_suffix(".jpg")
# 确保缓存路径和文件类型合法
if not SecurityUtils.is_safe_path(settings.CACHE_PATH, cache_path, settings.SECURITY_IMAGE_SUFFIXES):
raise HTTPException(status_code=400, detail="Invalid cache path or file type")
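The image-cache hunk above does three things: sanitize the URL into a path under `CACHE_PATH`, default the suffix to `.jpg` when the URL has none (the stated compromise between blocking malicious types and still caching suffix-less image URLs), and reject anything that escapes the cache root or uses a disallowed extension. A stdlib-only sketch of those checks — `SecurityUtils` is approximated with `pathlib`, and the suffix whitelist is an assumption:

```python
from pathlib import Path, PurePosixPath
from urllib.parse import urlparse

SAFE_SUFFIXES = {".jpg", ".jpeg", ".png", ".webp", ".avif"}  # assumed whitelist

def cache_path_for(url: str, cache_root: Path) -> Path:
    """Map a remote image URL to a safe on-disk cache path."""
    parsed = urlparse(url)
    # Drop the leading "/" and any ".." components from the URL path
    parts = [p for p in PurePosixPath(parsed.path).parts if p not in ("/", "..")]
    cache_path = cache_root.joinpath(parsed.netloc, *parts)
    # Files without an extension get a default one
    if not cache_path.suffix:
        cache_path = cache_path.with_suffix(".jpg")
    resolved = cache_path.resolve()
    if cache_root.resolve() not in resolved.parents:
        raise ValueError("cache path escapes the cache root")
    if cache_path.suffix.lower() not in SAFE_SUFFIXES:
        raise ValueError("disallowed file type")
    return cache_path
```

Defaulting the suffix *before* the safety check matters: otherwise a suffix-less path would fail the extension whitelist and never be cached at all.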
@@ -88,7 +92,8 @@ def fetch_image(
# 请求远程图片
referer = "https://movie.douban.com/" if "doubanio.com" in url else None
proxies = settings.PROXY if proxy else None
response = RequestUtils(ua=settings.USER_AGENT, proxies=proxies, referer=referer).get_res(url=url)
response = RequestUtils(ua=settings.USER_AGENT, proxies=proxies, referer=referer,
accept_type="image/avif,image/webp,image/apng,*/*").get_res(url=url)
if not response:
raise HTTPException(status_code=502, detail="Failed to fetch the image from the remote server")

View File

@@ -3,7 +3,6 @@ from typing import List, Any
from fastapi import APIRouter, Depends
from app import schemas
from app.chain.recommend import RecommendChain
from app.chain.tmdb import TmdbChain
from app.core.security import verify_token
from app.schemas.types import MediaType
@@ -114,45 +113,6 @@ def tmdb_person_credits(person_id: int,
return []
@router.get("/movies", summary="TMDB电影", response_model=List[schemas.MediaInfo])
def tmdb_movies(sort_by: str = "popularity.desc",
with_genres: str = "",
with_original_language: str = "",
page: int = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览TMDB电影信息
"""
return RecommendChain().tmdb_movies(sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
page=page)
@router.get("/tvs", summary="TMDB剧集", response_model=List[schemas.MediaInfo])
def tmdb_tvs(sort_by: str = "popularity.desc",
with_genres: str = "",
with_original_language: str = "",
page: int = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
浏览TMDB剧集信息
"""
return RecommendChain().tmdb_tvs(sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
page=page)
@router.get("/trending", summary="TMDB流行趋势", response_model=List[schemas.MediaInfo])
def tmdb_trending(page: int = 1,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:
"""
TMDB流行趋势
"""
return RecommendChain().tmdb_trending(page=page)
@router.get("/{tmdbid}/{season}", summary="TMDB季所有集", response_model=List[schemas.TmdbEpisode])
def tmdb_season_episodes(tmdbid: int, season: int,
_: schemas.TokenPayload = Depends(verify_token)) -> Any:

View File

@@ -0,0 +1,3 @@
from fastapi import APIRouter
router = APIRouter()

View File

@@ -680,6 +680,14 @@ def arr_add_series(tv: schemas.SonarrSeries,
)
@arr_router.put("/series", summary="更新剧集订阅")
def arr_update_series(tv: schemas.SonarrSeries) -> Any:
"""
更新Sonarr剧集订阅
"""
return arr_add_series(tv)
@arr_router.delete("/series/{tid}", summary="删除剧集订阅")
def arr_remove_series(tid: int, db: Session = Depends(get_db), _: str = Depends(verify_apikey)) -> Any:
"""

View File

@@ -7,7 +7,6 @@ from pathlib import Path
from typing import Optional, Any, Tuple, List, Set, Union, Dict
from qbittorrentapi import TorrentFilesList
from ruamel.yaml import CommentedMap
from transmission_rpc import File
from app.core.config import settings
@@ -77,7 +76,7 @@ class ChainBase(metaclass=ABCMeta):
"""
cache_path = settings.TEMP_PATH / filename
if cache_path.exists():
Path(cache_path).unlink()
cache_path.unlink()
def run_module(self, method: str, *args, **kwargs) -> Any:
"""
@@ -308,7 +307,7 @@ class ChainBase(metaclass=ABCMeta):
"""
return self.run_module("search_collections", name=name)
def search_torrents(self, site: CommentedMap,
def search_torrents(self, site: dict,
keywords: List[str],
mtype: MediaType = None,
page: int = 0) -> List[TorrentInfo]:
@@ -323,7 +322,7 @@ class ChainBase(metaclass=ABCMeta):
return self.run_module("search_torrents", site=site, keywords=keywords,
mtype=mtype, page=page)
def refresh_torrents(self, site: CommentedMap) -> List[TorrentInfo]:
def refresh_torrents(self, site: dict) -> List[TorrentInfo]:
"""
获取站点最新一页的种子,多个站点需要多线程处理
:param site: 站点
@@ -531,6 +530,9 @@ class ChainBase(metaclass=ABCMeta):
# 管理员发过了,此消息不发了
logger.info(f"用户 {send_message.username} 不存在,消息无法发送到对应用户")
continue
elif send_message.username == settings.SUPERUSER:
# 管理员同名已发送
admin_sended = True
else:
# 按原消息发送全体
if not admin_sended:

View File

@@ -17,6 +17,12 @@ class BangumiChain(ChainBase, metaclass=Singleton):
"""
return self.run_module("bangumi_calendar")
def discover(self, **kwargs) -> Optional[List[MediaInfo]]:
"""
发现Bangumi番剧
"""
return self.run_module("bangumi_discover", **kwargs)
def bangumi_info(self, bangumiid: int) -> Optional[dict]:
"""
获取Bangumi信息

View File

@@ -444,7 +444,8 @@ class MediaChain(ChainBase, metaclass=Singleton):
for file in files:
self.scrape_metadata(fileitem=file,
meta=meta, mediainfo=mediainfo,
init_folder=False, parent=fileitem)
init_folder=False, parent=fileitem,
overwrite=overwrite)
# 生成目录内图片文件
if init_folder:
# 图片
@@ -515,7 +516,8 @@ class MediaChain(ChainBase, metaclass=Singleton):
self.scrape_metadata(fileitem=file,
meta=meta, mediainfo=mediainfo,
parent=fileitem if file.type == "file" else None,
init_folder=True if file.type == "dir" else False)
init_folder=True if file.type == "dir" else False,
overwrite=overwrite)
# 生成目录的nfo和图片
if init_folder:
# 识别文件夹名称

View File

@@ -1,9 +1,8 @@
import threading
from typing import List, Union, Optional, Generator
from cachetools import cached, TTLCache
from app.chain import ChainBase
from app.core.cache import cached
from app.core.config import global_vars
from app.db.mediaserver_oper import MediaServerOper
from app.helper.service import ServiceConfigHelper
@@ -94,7 +93,7 @@ class MediaServerChain(ChainBase):
"""
return self.run_module("mediaserver_latest", count=count, server=server, username=username)
@cached(cache=TTLCache(maxsize=1, ttl=3600))
@cached(maxsize=1, ttl=3600)
def get_latest_wallpapers(self, server: str = None, count: int = 10,
remote: bool = True, username: str = None) -> List[str]:
"""

View File

@@ -295,6 +295,8 @@ class MessageChain(ChainBase):
return
else:
best_version = True
# 转换用户名
mp_name = self.useroper.get_name(**{f"{channel.name.lower()}_userid": userid}) if channel else None
# 添加订阅状态为N
self.subscribechain.add(title=mediainfo.title,
year=mediainfo.year,
@@ -304,7 +306,7 @@ class MessageChain(ChainBase):
channel=channel,
source=source,
userid=userid,
username=username,
username=mp_name or username,
best_version=best_version)
elif cache_type == "Torrent":
if int(text) == 0:
@@ -505,6 +507,8 @@ class MessageChain(ChainBase):
note = downloaded
else:
note = None
# 转换用户名
mp_name = self.useroper.get_name(**{f"{channel.name.lower()}_userid": userid}) if channel else None
# 添加订阅状态为R
self.subscribechain.add(title=_current_media.title,
year=_current_media.year,
@@ -514,7 +518,7 @@ class MessageChain(ChainBase):
channel=channel,
source=source,
userid=userid,
username=username,
username=mp_name or username,
state="R",
note=note)
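Both branches in this file map a channel-specific user id to the MoviePilot display name by building a keyword argument dynamically (`telegram_userid=...`, `wechat_userid=...`, depending on the channel), then fall back to the raw username. A sketch of that lookup pattern — `get_name`'s signature and the sample data are assumptions:

```python
def get_name(telegram_userid=None, wechat_userid=None, **_):
    """Hypothetical user lookup keyed by per-channel ids."""
    users = {("telegram", "123"): "alice"}
    if telegram_userid:
        return users.get(("telegram", telegram_userid))
    if wechat_userid:
        return users.get(("wechat", wechat_userid))
    return None

def resolve_username(channel_name: str, userid: str, fallback: str) -> str:
    # Build the keyword dynamically, e.g. telegram_userid="123"
    mp_name = get_name(**{f"{channel_name.lower()}_userid": userid}) if channel_name else None
    return mp_name or fallback
```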

View File

@@ -1,18 +1,16 @@
import inspect
import io
import tempfile
from functools import wraps
from pathlib import Path
from typing import Any, Callable, List
from typing import List
import pillow_avif # noqa 用于自动注册AVIF支持
from PIL import Image
from cachetools import TTLCache
from cachetools.keys import hashkey
from app.chain import ChainBase
from app.chain.bangumi import BangumiChain
from app.chain.douban import DoubanChain
from app.chain.tmdb import TmdbChain
from app.core.cache import cache_backend, cached
from app.core.config import settings, global_vars
from app.log import logger
from app.schemas import MediaType
@@ -23,42 +21,7 @@ from app.utils.singleton import Singleton
# 推荐相关的专用缓存
recommend_ttl = 24 * 3600
recommend_cache = TTLCache(maxsize=256, ttl=recommend_ttl)
# 推荐缓存装饰器,避免偶发网络获取数据为空时,页面由于缓存为空长时间渲染异常问题
def cached_with_empty_check(func: Callable):
"""
缓存装饰器,用于缓存函数的返回结果,仅在结果非空时进行缓存
:param func: 被装饰的函数
:return: 包装后的函数
"""
@wraps(func)
def wrapper(*args, **kwargs):
signature = inspect.signature(func)
resolved_kwargs = {}
# 获取默认值并结合传递的参数(如果有)
for param, value in signature.parameters.items():
if param in kwargs:
# 使用显式传递的参数
resolved_kwargs[param] = kwargs[param]
elif value.default is not inspect.Parameter.empty:
# 没有传递参数时使用默认值
resolved_kwargs[param] = value.default
# 使用 cachetools 缓存,构造缓存键
cache_key = f"{func.__name__}_{hashkey(*args, **resolved_kwargs)}"
if cache_key in recommend_cache:
return recommend_cache[cache_key]
result = func(*args, **kwargs)
# 如果返回值为空,则不缓存
if result in [None, [], {}]:
return result
recommend_cache[cache_key] = result
return result
return wrapper
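The bespoke `cached_with_empty_check` decorator removed above was replaced by the project-wide `cached(ttl=..., region=...)` backend. A minimal stdlib sketch of what such a decorator has to do — key by call arguments, expire by TTL, and (as the old code did) refuse to cache empty results so a transient upstream failure does not poison the page for a whole TTL. This is an illustrative reimplementation, not the project's `app.core.cache` backend:

```python
import time
from functools import wraps

def cached(ttl: int = 3600, region: str = "default"):
    """TTL cache decorator that skips caching empty results."""
    store: dict = {}

    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            key = (region, func.__name__, args, tuple(sorted(kwargs.items())))
            hit = store.get(key)
            if hit is not None:
                value, expires_at = hit
                if time.monotonic() < expires_at:
                    return value
            result = func(*args, **kwargs)
            # Empty results are returned but never cached
            if result not in (None, [], {}):
                store[key] = (result, time.monotonic() + ttl)
            return result
        return wrapper
    return decorator
```

A `region` string lets `refresh_recommend()` clear one namespace (`cache_backend.clear(region=...)`) without touching caches owned by other chains.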
recommend_cache_region = "recommend"
class RecommendChain(ChainBase, metaclass=Singleton):
@@ -78,7 +41,7 @@ class RecommendChain(ChainBase, metaclass=Singleton):
刷新推荐
"""
logger.debug("Starting to refresh Recommend data.")
recommend_cache.clear()
cache_backend.clear(region=recommend_cache_region)
logger.debug("Recommend Cache has been cleared.")
# 推荐来源方法
@@ -154,6 +117,10 @@ class RecommendChain(ChainBase, metaclass=Singleton):
sanitized_path = SecurityUtils.sanitize_url_path(url)
cache_path = settings.CACHE_PATH / "images" / sanitized_path
# 没有文件类型,则添加后缀,在恶意文件类型和实际需求下的折衷选择
if not cache_path.suffix:
cache_path = cache_path.with_suffix(".jpg")
# 确保缓存路径和文件类型合法
if not SecurityUtils.is_safe_path(settings.CACHE_PATH, cache_path, settings.SECURITY_IMAGE_SUFFIXES):
logger.debug(f"Invalid cache path or file type for URL: {url}, sanitized path: {sanitized_path}")
@@ -194,9 +161,16 @@ class RecommendChain(ChainBase, metaclass=Singleton):
logger.debug(f"Failed to write cache file {cache_path} for URL {url}: {e}")
@log_execution_time(logger=logger)
@cached_with_empty_check
def tmdb_movies(self, sort_by: str = "popularity.desc", with_genres: str = "",
with_original_language: str = "", page: int = 1) -> Any:
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def tmdb_movies(self, sort_by: str = "popularity.desc",
with_genres: str = "",
with_original_language: str = "",
with_keywords: str = "",
with_watch_providers: str = "",
vote_average: float = 0,
vote_count: int = 0,
release_date: str = "",
page: int = 1) -> List[dict]:
"""
TMDB热门电影
"""
@@ -204,13 +178,25 @@ class RecommendChain(ChainBase, metaclass=Singleton):
sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
return [movie.to_dict() for movie in movies] if movies else []
@log_execution_time(logger=logger)
@cached_with_empty_check
def tmdb_tvs(self, sort_by: str = "popularity.desc", with_genres: str = "",
with_original_language: str = "zh|en|ja|ko", page: int = 1) -> Any:
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def tmdb_tvs(self, sort_by: str = "popularity.desc",
with_genres: str = "",
with_original_language: str = "zh|en|ja|ko",
with_keywords: str = "",
with_watch_providers: str = "",
vote_average: float = 0,
vote_count: int = 0,
release_date: str = "",
page: int = 1) -> List[dict]:
"""
TMDB热门电视剧
"""
@@ -218,12 +204,17 @@ class RecommendChain(ChainBase, metaclass=Singleton):
sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
return [tv.to_dict() for tv in tvs] if tvs else []
@log_execution_time(logger=logger)
@cached_with_empty_check
def tmdb_trending(self, page: int = 1) -> Any:
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def tmdb_trending(self, page: int = 1) -> List[dict]:
"""
TMDB流行趋势
"""
@@ -231,8 +222,8 @@ class RecommendChain(ChainBase, metaclass=Singleton):
return [info.to_dict() for info in infos] if infos else []
@log_execution_time(logger=logger)
@cached_with_empty_check
def bangumi_calendar(self, page: int = 1, count: int = 30) -> Any:
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def bangumi_calendar(self, page: int = 1, count: int = 30) -> List[dict]:
"""
Bangumi每日放送
"""
@@ -240,8 +231,8 @@ class RecommendChain(ChainBase, metaclass=Singleton):
return [media.to_dict() for media in medias[(page - 1) * count: page * count]] if medias else []
@log_execution_time(logger=logger)
@cached_with_empty_check
def douban_movie_showing(self, page: int = 1, count: int = 30) -> Any:
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def douban_movie_showing(self, page: int = 1, count: int = 30) -> List[dict]:
"""
豆瓣正在热映
"""
@@ -249,8 +240,8 @@ class RecommendChain(ChainBase, metaclass=Singleton):
return [media.to_dict() for media in movies] if movies else []
@log_execution_time(logger=logger)
@cached_with_empty_check
def douban_movies(self, sort: str = "R", tags: str = "", page: int = 1, count: int = 30) -> Any:
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def douban_movies(self, sort: str = "R", tags: str = "", page: int = 1, count: int = 30) -> List[dict]:
"""
豆瓣最新电影
"""
@@ -259,8 +250,8 @@ class RecommendChain(ChainBase, metaclass=Singleton):
return [media.to_dict() for media in movies] if movies else []
@log_execution_time(logger=logger)
@cached_with_empty_check
def douban_tvs(self, sort: str = "R", tags: str = "", page: int = 1, count: int = 30) -> Any:
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def douban_tvs(self, sort: str = "R", tags: str = "", page: int = 1, count: int = 30) -> List[dict]:
"""
豆瓣最新电视剧
"""
@@ -269,8 +260,8 @@ class RecommendChain(ChainBase, metaclass=Singleton):
return [media.to_dict() for media in tvs] if tvs else []
@log_execution_time(logger=logger)
@cached_with_empty_check
def douban_movie_top250(self, page: int = 1, count: int = 30) -> Any:
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def douban_movie_top250(self, page: int = 1, count: int = 30) -> List[dict]:
"""
豆瓣电影TOP250
"""
@@ -278,8 +269,8 @@ class RecommendChain(ChainBase, metaclass=Singleton):
return [media.to_dict() for media in movies] if movies else []
@log_execution_time(logger=logger)
@cached_with_empty_check
def douban_tv_weekly_chinese(self, page: int = 1, count: int = 30) -> Any:
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def douban_tv_weekly_chinese(self, page: int = 1, count: int = 30) -> List[dict]:
"""
豆瓣国产剧集榜
"""
@@ -287,8 +278,8 @@ class RecommendChain(ChainBase, metaclass=Singleton):
return [media.to_dict() for media in tvs] if tvs else []
@log_execution_time(logger=logger)
@cached_with_empty_check
def douban_tv_weekly_global(self, page: int = 1, count: int = 30) -> Any:
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def douban_tv_weekly_global(self, page: int = 1, count: int = 30) -> List[dict]:
"""
豆瓣全球剧集榜
"""
@@ -296,8 +287,8 @@ class RecommendChain(ChainBase, metaclass=Singleton):
return [media.to_dict() for media in tvs] if tvs else []
@log_execution_time(logger=logger)
@cached_with_empty_check
def douban_tv_animation(self, page: int = 1, count: int = 30) -> Any:
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def douban_tv_animation(self, page: int = 1, count: int = 30) -> List[dict]:
"""
豆瓣热门动漫
"""
@@ -305,8 +296,8 @@ class RecommendChain(ChainBase, metaclass=Singleton):
return [media.to_dict() for media in tvs] if tvs else []
@log_execution_time(logger=logger)
@cached_with_empty_check
def douban_movie_hot(self, page: int = 1, count: int = 30) -> Any:
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def douban_movie_hot(self, page: int = 1, count: int = 30) -> List[dict]:
"""
豆瓣热门电影
"""
@@ -314,8 +305,8 @@ class RecommendChain(ChainBase, metaclass=Singleton):
return [media.to_dict() for media in movies] if movies else []
@log_execution_time(logger=logger)
@cached_with_empty_check
def douban_tv_hot(self, page: int = 1, count: int = 30) -> Any:
@cached(ttl=recommend_ttl, region=recommend_cache_region)
def douban_tv_hot(self, page: int = 1, count: int = 30) -> List[dict]:
"""
豆瓣热门电视剧
"""

View File

@@ -37,7 +37,7 @@ class SearchChain(ChainBase):
def search_by_id(self, tmdbid: int = None, doubanid: str = None,
mtype: MediaType = None, area: str = "title", season: int = None) -> List[Context]:
"""
根据TMDBID/豆瓣ID搜索资源精确匹配但不不过滤本地存在的资源
根据TMDBID/豆瓣ID搜索资源精确匹配不过滤本地存在的资源
:param tmdbid: TMDB ID
:param doubanid: 豆瓣 ID
:param mtype: 媒体,电影 or 电视剧

View File

@@ -6,7 +6,6 @@ from typing import Optional, Tuple, Union, Dict
from urllib.parse import urljoin
from lxml import etree
from ruamel.yaml import CommentedMap
from app.chain import ChainBase
from app.core.config import global_vars, settings
@@ -55,7 +54,7 @@ class SiteChain(ChainBase):
"yemapt.org": self.__yema_test,
}
def refresh_userdata(self, site: CommentedMap = None) -> Optional[SiteUserData]:
def refresh_userdata(self, site: dict = None) -> Optional[SiteUserData]:
"""
刷新站点的用户数据
:param site: 站点
@@ -138,10 +137,14 @@ class SiteChain(ChainBase):
proxies=settings.PROXY if site.proxy else None,
timeout=site.timeout or 15
).get_res(url=site.url)
if res and res.status_code == 200:
if res is None:
return False, "无法打开网站!"
if res.status_code == 200:
csrf_token = re.search(r'<meta name="x-csrf-token" content="(.+?)">', res.text)
if csrf_token:
token = csrf_token.group(1)
else:
return False, f"错误:{res.status_code} {res.reason}"
if not token:
return False, "无法获取Token"
# 调用查询用户信息接口
@@ -155,11 +158,15 @@ class SiteChain(ChainBase):
proxies=settings.PROXY if site.proxy else None,
timeout=site.timeout or 15
).get_res(url=f"{site.url}api/user/getInfo")
if user_res and user_res.status_code == 200:
if user_res is None:
return False, "无法打开网站!"
if user_res.status_code == 200:
user_info = user_res.json()
if user_info and user_info.get("data"):
return True, "连接成功"
return False, "Cookie已失效"
return False, "Cookie已失效"
else:
return False, f"错误:{user_res.status_code} {user_res.reason}"
@staticmethod
def __mteam_test(site: Site) -> Tuple[bool, str]:
@@ -182,9 +189,11 @@ class SiteChain(ChainBase):
proxies=settings.PROXY if site.proxy else None,
timeout=site.timeout or 15
).post_res(url=url)
state = False
message = "鉴权已过期或无效"
if res and res.status_code == 200:
if res is None:
return False, "无法打开网站!"
if res.status_code == 200:
state = False
message = "鉴权已过期或无效"
user_info = res.json() or {}
if user_info.get("data"):
# 更新最后访问时间
@@ -203,7 +212,9 @@ class SiteChain(ChainBase):
elif user_info.get("message"):
# 使用馒头的错误提示
message = user_info.get("message")
return state, message
return state, message
else:
return False, f"错误:{res.status_code} {res.reason}"
@staticmethod
def __yema_test(site: Site) -> Tuple[bool, str]:
@@ -223,11 +234,15 @@ class SiteChain(ChainBase):
proxies=settings.PROXY if site.proxy else None,
timeout=site.timeout or 15
).get_res(url=url)
if res and res.status_code == 200:
if res is None:
return False, "无法打开网站!"
if res.status_code == 200:
user_info = res.json()
if user_info and user_info.get("success"):
return True, "连接成功"
return False, "Cookie已过期"
return False, "Cookie已过期"
else:
return False, f"错误:{res.status_code} {res.reason}"
def __indexphp_test(self, site: Site) -> Tuple[bool, str]:
"""
@@ -557,12 +572,12 @@ class SiteChain(ChainBase):
elif res.status_code == 200:
msg = "Cookie已失效"
else:
msg = f"状态码{res.status_code}"
msg = f"错误{res.status_code} {res.reason}"
return False, f"{msg}"
elif public and res.status_code != 200:
return False, f"状态码{res.status_code}"
return False, f"错误{res.status_code} {res.reason}"
elif res is not None:
return False, f"状态码{res.status_code}"
return False, f"错误{res.status_code} {res.reason}"
else:
return False, f"无法打开网站!"
return True, "连接成功"
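Each `__*_test` helper in this file was rewritten to distinguish three outcomes explicitly: no response at all ("无法打开网站!"), HTTP 200 (parse the body), and any other status (report code and reason). That ladder can be factored into one helper; a sketch, where `FakeResponse` stands in for a `requests.Response` and the messages mirror the ones in the diff:

```python
from typing import Optional, Tuple

class FakeResponse:
    """Minimal stand-in for a requests.Response in this sketch."""
    def __init__(self, status_code: int, reason: str = ""):
        self.status_code = status_code
        self.reason = reason

def check_response(res: Optional[FakeResponse]) -> Tuple[bool, str]:
    """Return (ok, message) following the pattern used by the site tests."""
    if res is None:
        return False, "无法打开网站!"
    if res.status_code == 200:
        return True, "连接成功"
    return False, f"错误:{res.status_code} {res.reason}"
```

The key behavioral change in the diff is the same as here: the old `if res and res.status_code == 200` silently lumped "no response" and "non-200" together, so users saw a generic Cookie error for what was really a network failure.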

View File

@@ -6,6 +6,7 @@ import time
from datetime import datetime
from typing import Dict, List, Optional, Union, Tuple
from app import schemas
from app.chain import ChainBase
from app.chain.download import DownloadChain
from app.chain.media import MediaChain
@@ -27,9 +28,8 @@ from app.helper.message import MessageHelper
from app.helper.subscribe import SubscribeHelper
from app.helper.torrent import TorrentHelper
from app.log import logger
from app.schemas import NotExistMediaInfo, Notification, SubscrbieInfo, SubscribeEpisodeInfo, SubscribeDownloadFileInfo, \
SubscribeLibraryFileInfo
from app.schemas.types import MediaType, SystemConfigKey, MessageChannel, NotificationType, EventType
from app.schemas import MediaRecognizeConvertEventData
from app.schemas.types import MediaType, SystemConfigKey, MessageChannel, NotificationType, EventType, ChainEventType
from app.utils.singleton import Singleton
@@ -59,6 +59,7 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
tmdbid: int = None,
doubanid: str = None,
bangumiid: int = None,
mediaid: str = None,
season: int = None,
channel: MessageChannel = None,
source: str = None,
@@ -70,7 +71,29 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
"""
识别媒体信息并添加订阅
"""
def __get_event_meida(_mediaid: str, _meta: MetaBase) -> Optional[MediaInfo]:
"""
广播事件解析媒体信息
"""
event_data = MediaRecognizeConvertEventData(
mediaid=_mediaid,
convert_type=settings.RECOGNIZE_SOURCE
)
event = eventmanager.send_event(ChainEventType.MediaRecognizeConvert, event_data)
# 使用事件返回的上下文数据
if event and event.event_data:
event_data: MediaRecognizeConvertEventData = event.event_data
if event_data.media_dict:
new_id = event_data.media_dict.get("id")
if event_data.convert_type == "themoviedb":
return self.mediachain.recognize_media(meta=_meta, tmdbid=new_id)
elif event_data.convert_type == "douban":
return self.mediachain.recognize_media(meta=_meta, doubanid=new_id)
return None
logger.info(f'开始添加订阅,标题:{title} ...')
mediainfo = None
metainfo = MetaInfo(title)
if year:
@@ -83,27 +106,41 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
# 识别媒体信息
if settings.RECOGNIZE_SOURCE == "themoviedb":
# TMDB识别模式
if not tmdbid and doubanid:
# 将豆瓣信息转换为TMDB信息
tmdbinfo = self.mediachain.get_tmdbinfo_by_doubanid(doubanid=doubanid, mtype=mtype)
if tmdbinfo:
mediainfo = MediaInfo(tmdb_info=tmdbinfo)
if not tmdbid:
if doubanid:
# 将豆瓣信息转换为TMDB信息
tmdbinfo = self.mediachain.get_tmdbinfo_by_doubanid(doubanid=doubanid, mtype=mtype)
if tmdbinfo:
mediainfo = MediaInfo(tmdb_info=tmdbinfo)
elif mediaid:
# 未知前缀,广播事件解析媒体信息
mediainfo = __get_event_meida(mediaid, metainfo)
else:
# 识别TMDB信息,不使用缓存
# 使用TMDBID识别
mediainfo = self.recognize_media(meta=metainfo, mtype=mtype, tmdbid=tmdbid, cache=False)
else:
# 豆瓣识别模式,不使用缓存
mediainfo = self.recognize_media(meta=metainfo, mtype=mtype, doubanid=doubanid, cache=False)
if doubanid:
# 豆瓣识别模式,不使用缓存
mediainfo = self.recognize_media(meta=metainfo, mtype=mtype, doubanid=doubanid, cache=False)
elif mediaid:
# 未知前缀,广播事件解析媒体信息
mediainfo = __get_event_meida(mediaid, metainfo)
if mediainfo:
# 豆瓣标题处理
meta = MetaInfo(mediainfo.title)
mediainfo.title = meta.name
if not season:
season = meta.begin_season
# 使用名称识别兜底
if not mediainfo:
mediainfo = self.recognize_media(meta=metainfo)
# 识别失败
if not mediainfo:
logger.warn(f'未识别到媒体信息,标题:{title},tmdbid:{tmdbid},doubanid:{doubanid}')
return None, "未识别到媒体信息"
# 总集数
if mediainfo.type == MediaType.TV:
if not season:
@@ -138,6 +175,7 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
else:
# 避免season为0的问题
season = None
# 更新媒体图片
self.obtain_images(mediainfo=mediainfo)
# 合并信息
@@ -145,6 +183,7 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
mediainfo.douban_id = doubanid
if bangumiid:
mediainfo.bangumi_id = bangumiid
# 添加订阅
kwargs.update({
'quality': self.__get_default_subscribe_config(mediainfo.type, "quality") if not kwargs.get(
@@ -166,21 +205,23 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
'downloader': self.__get_default_subscribe_config(mediainfo.type, "downloader") if not kwargs.get(
"downloader") else kwargs.get("downloader"),
'save_path': self.__get_default_subscribe_config(mediainfo.type, "save_path") if not kwargs.get(
"save_path") else kwargs.get("save_path")
"save_path") else kwargs.get("save_path"),
'filter_groups': self.__get_default_subscribe_config(mediainfo.type, "filter_groups") if not kwargs.get(
"filter_groups") else kwargs.get("filter_groups"),
})
sid, err_msg = self.subscribeoper.add(mediainfo=mediainfo, season=season, username=username, **kwargs)
if not sid:
logger.error(f'{mediainfo.title_year} {err_msg}')
if not exist_ok and message:
# 失败发回原用户
self.post_message(Notification(channel=channel,
source=source,
mtype=NotificationType.Subscribe,
title=f"{mediainfo.title_year} {metainfo.season} "
f"添加订阅失败!",
text=f"{err_msg}",
image=mediainfo.get_message_image(),
userid=userid))
self.post_message(schemas.Notification(channel=channel,
source=source,
mtype=NotificationType.Subscribe,
title=f"{mediainfo.title_year} {metainfo.season} "
f"添加订阅失败!",
text=f"{err_msg}",
image=mediainfo.get_message_image(),
userid=userid))
return None, err_msg
elif message:
logger.info(f'{mediainfo.title_year} {metainfo.season} 添加订阅成功')
@@ -193,12 +234,12 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
else:
link = settings.MP_DOMAIN('#/subscribe/movie?tab=mysub')
# 订阅成功按规则发送消息
self.post_message(Notification(mtype=NotificationType.Subscribe,
title=f"{mediainfo.title_year} {metainfo.season} 已添加订阅",
text=text,
image=mediainfo.get_message_image(),
link=link,
username=username))
self.post_message(schemas.Notification(mtype=NotificationType.Subscribe,
title=f"{mediainfo.title_year} {metainfo.season} 已添加订阅",
text=text,
image=mediainfo.get_message_image(),
link=link,
username=username))
# 发送事件
EventManager().send_event(EventType.SubscribeAdded, {
"subscribe_id": sid,
@@ -409,7 +450,7 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
def finish_subscribe_or_not(self, subscribe: Subscribe, meta: MetaBase, mediainfo: MediaInfo,
downloads: List[Context] = None,
lefts: Dict[Union[int | str], Dict[int, NotExistMediaInfo]] = None,
lefts: Dict[Union[int | str], Dict[int, schemas.NotExistMediaInfo]] = None,
force: bool = False):
"""
判断是否应完成订阅
@@ -464,18 +505,16 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
"""
# 从系统配置获取默认订阅站点
default_sites = self.systemconfig.get(SystemConfigKey.RssSites) or []
# 如果订阅未指定站点信息,直接返回默认站点
# 如果订阅未指定站点,直接返回默认站点
if not subscribe.sites:
return default_sites
# 如果默认订阅站点未设置,直接返回订阅指定站点
if not default_sites:
return subscribe.sites or []
# 尝试解析订阅中的站点数据
user_sites = subscribe.sites
# 计算 user_sites 和 default_sites 的交集
intersection_sites = [site for site in user_sites if site in default_sites]
# 如果交集与原始订阅不一致,更新数据库
if set(intersection_sites) != set(user_sites):
self.subscribeoper.update(subscribe.id, {
"sites": intersection_sites
})
# 如果交集为空,返回默认站点
return intersection_sites if intersection_sites else default_sites
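The fallback order above (no subscribe sites → defaults; no defaults → subscribe sites; otherwise intersection, falling back to defaults when empty) can be sketched standalone. The site-ID values are hypothetical and the database update step is omitted:

```python
def effective_sites(subscribe_sites, default_sites):
    # 订阅未指定站点:使用系统默认订阅站点
    if not subscribe_sites:
        return default_sites
    # 未配置默认站点:使用订阅自己的站点
    if not default_sites:
        return subscribe_sites
    # 只保留仍在默认站点中的订阅站点
    intersection = [site for site in subscribe_sites if site in default_sites]
    # 交集为空时回退到默认站点
    return intersection or default_sites

assert effective_sites([1, 2, 5], [2, 3, 5]) == [2, 5]
assert effective_sites([], [7]) == [7]
assert effective_sites([9], [1, 2]) == [1, 2]
```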
@@ -574,9 +613,9 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
continue
# 有自定义识别词时,需要判断是否需要重新识别
apply_words = None
if custom_words_list:
_, apply_words = WordsMatcher().prepare(torrent_info.title,
# 使用org_string应用一次后理论上不能再次应用
_, apply_words = WordsMatcher().prepare(torrent_meta.org_string,
custom_words=custom_words_list)
if apply_words:
logger.info(
@@ -584,6 +623,8 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
# 重新识别元数据
torrent_meta = MetaInfo(title=torrent_info.title, subtitle=torrent_info.description,
custom_words=custom_words_list)
# 更新元数据缓存
context.meta_info = torrent_meta
# 媒体信息需要重新识别
torrent_mediainfo = None
@@ -594,8 +635,7 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
torrent_mediainfo = self.recognize_media(meta=torrent_meta)
if torrent_mediainfo:
# 更新种子缓存
if not apply_words:
context.media_info = torrent_mediainfo
context.media_info = torrent_mediainfo
else:
# 通过标题匹配兜底
logger.warn(
@@ -607,9 +647,7 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
logger.info(
f'{mediainfo.title_year} 通过标题匹配到可选资源:{torrent_info.site_name} - {torrent_info.title}')
torrent_mediainfo = mediainfo
# 更新种子缓存
if not apply_words:
context.media_info = mediainfo
context.media_info = torrent_mediainfo
else:
continue
@@ -784,6 +822,68 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
})
logger.info(f'{subscribe.name} 订阅元数据更新完成')
def follow(self):
"""
刷新follow的用户分享并自动添加订阅
"""
follow_users: List[str] = self.systemconfig.get(SystemConfigKey.FollowSubscribers)
if not follow_users:
return
share_subs = self.subscribehelper.get_shares()
logger.info(f'开始刷新follow用户分享订阅 ...')
success_count = 0
for share_sub in share_subs:
uid = share_sub.get("share_uid")
if uid and uid in follow_users:
# 订阅已存在则跳过
if self.subscribeoper.exists(tmdbid=share_sub.get("tmdbid"),
doubanid=share_sub.get("doubanid"),
season=share_sub.get("season")):
continue
# 订阅历史中已存在则跳过
if self.subscribeoper.exist_history(tmdbid=share_sub.get("tmdbid"),
doubanid=share_sub.get("doubanid"),
season=share_sub.get("season")):
continue
# 去除无效属性
for key in list(share_sub.keys()):
if not hasattr(schemas.Subscribe(), key):
share_sub.pop(key)
# 类型转换
subscribe_in = schemas.Subscribe(**share_sub)
mtype = MediaType(subscribe_in.type)
# 豆瓣/Bangumi 标题处理
if subscribe_in.doubanid or subscribe_in.bangumiid:
meta = MetaInfo(subscribe_in.name)
subscribe_in.name = meta.name
subscribe_in.season = meta.begin_season
# 标题转换
title = subscribe_in.name or None
sid, message = SubscribeChain().add(mtype=mtype,
title=title,
year=subscribe_in.year,
tmdbid=subscribe_in.tmdbid,
season=subscribe_in.season,
doubanid=subscribe_in.doubanid,
bangumiid=subscribe_in.bangumiid,
username="订阅分享",
best_version=subscribe_in.best_version,
save_path=subscribe_in.save_path,
search_imdbid=subscribe_in.search_imdbid,
custom_words=subscribe_in.custom_words,
media_category=subscribe_in.media_category,
filter_groups=subscribe_in.filter_groups,
exist_ok=True)
if sid:
success_count += 1
logger.info(f'follow用户分享订阅 {title} 添加成功')
else:
logger.error(f'follow用户分享订阅 {title} 添加失败:{message}')
logger.info(f'follow用户分享订阅刷新完成,共添加 {success_count} 个订阅')
def __update_subscribe_note(self, subscribe: Subscribe, downloads: List[Context]):
"""
更新已下载信息到note字段
@@ -840,7 +940,7 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
return note
return []
def __update_lack_episodes(self, lefts: Dict[Union[int, str], Dict[int, NotExistMediaInfo]],
def __update_lack_episodes(self, lefts: Dict[Union[int, str], Dict[int, schemas.NotExistMediaInfo]],
subscribe: Subscribe,
mediainfo: MediaInfo,
update_date: bool = False):
@@ -895,11 +995,11 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
else:
link = settings.MP_DOMAIN('#/subscribe/movie?tab=mysub')
# 完成订阅按规则发送消息
self.post_message(Notification(mtype=NotificationType.Subscribe,
title=f'{mediainfo.title_year} {meta.season} 已完成{msgstr}',
image=mediainfo.get_message_image(),
link=link,
username=subscribe.username))
self.post_message(schemas.Notification(mtype=NotificationType.Subscribe,
title=f'{mediainfo.title_year} {meta.season} 已完成{msgstr}',
image=mediainfo.get_message_image(),
link=link,
username=subscribe.username))
# 发送事件
EventManager().send_event(EventType.SubscribeComplete, {
"subscribe_id": subscribe.id,
@@ -919,9 +1019,9 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
"""
subscribes = self.subscribeoper.list()
if not subscribes:
self.post_message(Notification(channel=channel,
source=source,
title='没有任何订阅!', userid=userid))
self.post_message(schemas.Notification(channel=channel,
source=source,
title='没有任何订阅!', userid=userid))
return
title = f"共有 {len(subscribes)} 个订阅,回复对应指令操作: " \
f"\n- 删除订阅:/subscribe_delete [id]" \
@@ -937,8 +1037,8 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
f"[{subscribe.total_episode - (subscribe.lack_episode or subscribe.total_episode)}"
f"/{subscribe.total_episode}]")
# 发送列表
self.post_message(Notification(channel=channel, source=source,
title=title, text='\n'.join(messages), userid=userid))
self.post_message(schemas.Notification(channel=channel, source=source,
title=title, text='\n'.join(messages), userid=userid))
def remote_delete(self, arg_str: str, channel: MessageChannel,
userid: Union[str, int] = None, source: str = None):
@@ -946,9 +1046,9 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
删除订阅
"""
if not arg_str:
self.post_message(Notification(channel=channel, source=source,
title="请输入正确的命令格式:/subscribe_delete [id]"
"[id]为订阅编号", userid=userid))
self.post_message(schemas.Notification(channel=channel, source=source,
title="请输入正确的命令格式:/subscribe_delete [id]"
"[id]为订阅编号", userid=userid))
return
arg_strs = str(arg_str).split()
for arg_str in arg_strs:
@@ -958,8 +1058,8 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
subscribe_id = int(arg_str)
subscribe = self.subscribeoper.get(subscribe_id)
if not subscribe:
self.post_message(Notification(channel=channel, source=source,
title=f"订阅编号 {subscribe_id} 不存在!", userid=userid))
self.post_message(schemas.Notification(channel=channel, source=source,
title=f"订阅编号 {subscribe_id} 不存在!", userid=userid))
return
# 删除订阅
self.subscribeoper.delete(subscribe_id)
@@ -973,13 +1073,13 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
@staticmethod
def __get_subscribe_no_exits(subscribe_name: str,
no_exists: Dict[Union[int, str], Dict[int, NotExistMediaInfo]],
no_exists: Dict[Union[int, str], Dict[int, schemas.NotExistMediaInfo]],
mediakey: Union[str, int],
begin_season: int,
total_episode: int,
start_episode: int,
downloaded_episodes: List[int] = None
) -> Tuple[bool, Dict[Union[int, str], Dict[int, NotExistMediaInfo]]]:
) -> Tuple[bool, Dict[Union[int, str], Dict[int, schemas.NotExistMediaInfo]]]:
"""
根据订阅开始集数和总集数结合TMDB信息计算当前订阅的缺失集数
:param subscribe_name: 订阅名称
@@ -1029,7 +1129,7 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
# 与原集列表取交集
episodes = list(set(episode_list).intersection(set(new_episodes)))
# 更新集合
no_exists[mediakey][begin_season] = NotExistMediaInfo(
no_exists[mediakey][begin_season] = schemas.NotExistMediaInfo(
season=begin_season,
episodes=episodes,
total_episode=total_episode,
@@ -1056,7 +1156,7 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
if not episodes:
return True, {}
# 更新集合
no_exists[mediakey][begin_season] = NotExistMediaInfo(
no_exists[mediakey][begin_season] = schemas.NotExistMediaInfo(
season=begin_season,
episodes=episodes,
total_episode=total,
@@ -1070,7 +1170,7 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
# 如果存在已下载剧集,则差集为空时,说明所有均已存在
if not episodes:
return True, {}
no_exists[mediakey][begin_season] = NotExistMediaInfo(
no_exists[mediakey][begin_season] = schemas.NotExistMediaInfo(
season=begin_season,
episodes=episodes,
total_episode=total_episode,
@@ -1157,7 +1257,7 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
"min_seeders_time": default_rule.get("min_seeders_time"),
}.items() if value is not None}
def subscribe_files_info(self, subscribe: Subscribe) -> Optional[SubscrbieInfo]:
def subscribe_files_info(self, subscribe: Subscribe) -> Optional[schemas.SubscrbieInfo]:
"""
订阅相关的下载和文件信息
"""
@@ -1165,10 +1265,10 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
return
# 返回订阅数据
subscribe_info = SubscrbieInfo()
subscribe_info = schemas.SubscrbieInfo()
# 所有集的数据
episodes: Dict[int, SubscribeEpisodeInfo] = {}
episodes: Dict[int, schemas.SubscribeEpisodeInfo] = {}
if subscribe.tmdbid and subscribe.type == MediaType.TV.value:
# 查询TMDB中的集信息
tmdb_episodes = self.tmdbchain.tmdb_episodes(
@@ -1177,7 +1277,7 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
)
if tmdb_episodes:
for episode in tmdb_episodes:
info = SubscribeEpisodeInfo()
info = schemas.SubscribeEpisodeInfo()
info.title = episode.name
info.description = episode.overview
info.backdrop = f"https://{settings.TMDB_IMAGE_DOMAIN}/t/p/w500{episode.still_path}"
@@ -1185,12 +1285,12 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
elif subscribe.type == MediaType.TV.value:
# 根据开始结束集计算集信息
for i in range(subscribe.start_episode or 1, subscribe.total_episode + 1):
info = SubscribeEpisodeInfo()
info = schemas.SubscribeEpisodeInfo()
info.title = f'{i}'
episodes[i] = info
else:
# 电影
info = SubscribeEpisodeInfo()
info = schemas.SubscribeEpisodeInfo()
info.title = subscribe.name
episodes[0] = info
@@ -1205,7 +1305,7 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
# 识别文件名
file_meta = MetaInfo(file.filepath)
# 下载文件信息
file_info = SubscribeDownloadFileInfo(
file_info = schemas.SubscribeDownloadFileInfo(
torrent_title=his.torrent_name,
site_name=his.torrent_site,
downloader=file.downloader,
@@ -1248,7 +1348,7 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
# 识别文件名
file_meta = MetaInfo(fileitem.path)
# 媒体库文件信息
file_info = SubscribeLibraryFileInfo(
file_info = schemas.SubscribeLibraryFileInfo(
storage=fileitem.storage,
file_path=fileitem.path,
)
@@ -1308,7 +1408,7 @@ class SubscribeChain(ChainBase, metaclass=Singleton):
# 对于电视剧,构造缺失的媒体信息
no_exists = {
mediakey: {
subscribe.season: NotExistMediaInfo(
subscribe.season: schemas.NotExistMediaInfo(
season=subscribe.season,
episodes=[],
total_episode=subscribe.total_episode,


@@ -1,10 +1,9 @@
import random
from typing import Optional, List
from cachetools import cached, TTLCache
from app import schemas
from app.chain import ChainBase
from app.core.cache import cached
from app.core.context import MediaInfo
from app.schemas import MediaType
from app.utils.singleton import Singleton
@@ -15,19 +14,38 @@ class TmdbChain(ChainBase, metaclass=Singleton):
TheMovieDB处理链单例运行
"""
def tmdb_discover(self, mtype: MediaType, sort_by: str, with_genres: str,
with_original_language: str, page: int = 1) -> Optional[List[MediaInfo]]:
def tmdb_discover(self, mtype: MediaType,
sort_by: str,
with_genres: str,
with_original_language: str,
with_keywords: str,
with_watch_providers: str,
vote_average: float,
vote_count: int,
release_date: str,
page: int = 1) -> Optional[List[MediaInfo]]:
"""
:param mtype: 媒体类型
:param sort_by: 排序方式
:param with_genres: 类型
:param with_original_language: 语言
:param with_keywords: 关键字
:param with_watch_providers: 提供商
:param vote_average: 评分
:param vote_count: 评分人数
:param release_date: 上映日期
:param page: 页码
:return: 媒体信息列表
"""
return self.run_module("tmdb_discover", mtype=mtype,
sort_by=sort_by, with_genres=with_genres,
sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
with_keywords=with_keywords,
with_watch_providers=with_watch_providers,
vote_average=vote_average,
vote_count=vote_count,
release_date=release_date,
page=page)
def tmdb_trending(self, page: int = 1) -> Optional[List[MediaInfo]]:
@@ -119,7 +137,7 @@ class TmdbChain(ChainBase, metaclass=Singleton):
"""
return self.run_module("tmdb_person_credits", person_id=person_id, page=page)
@cached(cache=TTLCache(maxsize=1, ttl=3600))
@cached(maxsize=1, ttl=3600)
def get_random_wallpager(self) -> Optional[str]:
"""
获取随机壁纸缓存1个小时
@@ -133,7 +151,7 @@ class TmdbChain(ChainBase, metaclass=Singleton):
return info.backdrop_path
return None
@cached(cache=TTLCache(maxsize=1, ttl=3600))
@cached(maxsize=1, ttl=3600)
def get_trending_wallpapers(self, num: int = 10) -> List[str]:
"""
获取所有流行壁纸


@@ -326,7 +326,7 @@ class JobManager:
# 计算状态为完成的任务数
if __mediaid__ not in self._job_view:
return 0
return sum([task.fileitem.size for task in self._job_view[__mediaid__].tasks if task.state == "completed"])
return sum([task.fileitem.size for task in self._job_view[__mediaid__].tasks if task.state == "completed" and task.fileitem.size is not None])
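The fixed summation skips completed tasks whose size is still unknown; a minimal sketch with dicts standing in for the real task/fileitem objects (hypothetical shape):

```python
# 以 dict 模拟任务对象(真实对象为 task.state / task.fileitem.size)
tasks = [
    {"state": "completed", "size": 100},
    {"state": "completed", "size": None},  # 尚未获取到大小
    {"state": "running", "size": 50},
]
# 仅统计已完成且大小已知的任务,避免 None 参与求和时抛出 TypeError
done_size = sum(t["size"] for t in tasks
                if t["state"] == "completed" and t["size"] is not None)
assert done_size == 100
```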
def total(self) -> int:
"""
@@ -453,9 +453,12 @@ class TransferChain(ChainBase, metaclass=Singleton):
if transferinfo.transfer_type in ["move"]:
# 所有成功的业务
tasks = self.jobview.success_tasks(task.mediainfo, task.meta.begin_season)
# 记录已处理的种子hash
processed_hashes = set()
for t in tasks:
# 下载器hash
if t.download_hash:
if t.download_hash and t.download_hash not in processed_hashes:
processed_hashes.add(t.download_hash)
if self.remove_torrents(t.download_hash, downloader=t.downloader):
logger.info(f"移动模式删除种子成功:{t.download_hash} ")
# 删除残留目录
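The `processed_hashes` set ensures the torrent removal fires at most once per download hash, even when several successful tasks came from the same torrent; a standalone sketch with a stubbed `remove_torrents` (hypothetical, not the real downloader call):

```python
removed = []

def remove_torrents(download_hash):
    # 下载器删除种子的桩函数,仅记录调用
    removed.append(download_hash)
    return True

tasks = [{"hash": "abc"}, {"hash": "abc"}, {"hash": "def"}, {"hash": None}]
# 记录已处理的种子hash
processed_hashes = set()
for t in tasks:
    h = t["hash"]
    if h and h not in processed_hashes:
        processed_hashes.add(h)
        remove_torrents(h)

assert removed == ["abc", "def"]
```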
@@ -660,22 +663,19 @@ class TransferChain(ChainBase, metaclass=Singleton):
if transfer_history:
mediainfo.title = transfer_history.title
# 获取集数据
if not task.episodes_info and mediainfo.type == MediaType.TV:
if task.meta.begin_season is None:
task.meta.begin_season = 1
mediainfo.season = mediainfo.season or task.meta.begin_season
task.episodes_info = self.tmdbchain.tmdb_episodes(
tmdbid=mediainfo.tmdb_id,
season=mediainfo.season
)
# 更新任务信息
task.mediainfo = mediainfo
# 更新队列任务
curr_task = self.jobview.remove_task(task.fileitem)
self.jobview.add_task(task, state=curr_task.state if curr_task else "waiting")
# 获取集数据
if task.mediainfo.type == MediaType.TV and not task.episodes_info:
task.episodes_info = self.tmdbchain.tmdb_episodes(
tmdbid=task.mediainfo.tmdb_id,
season=task.mediainfo.season or task.meta.begin_season or 1
)
# 查询整理目标目录
if not task.target_directory:
if task.target_path:


@@ -1,3 +1,4 @@
import copy
import threading
import traceback
from typing import Any, Union, Dict, Optional
@@ -303,7 +304,7 @@ class Command(metaclass=Singleton):
)
else:
# 命令
cmd_data = command['data'] if command.get('data') else {}
cmd_data = copy.deepcopy(command['data']) if command.get('data') else {}
args_num = ObjectUtils.arguments(command['func'])
if args_num > 0:
if cmd_data:
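The new `copy.deepcopy` matters because the registered command's `data` dict is a shared template reused across invocations; a handler that mutates its argument would otherwise poison later calls. A minimal illustration (hypothetical command shape):

```python
import copy

# 注册的命令模板,在多次调用间共享(形状为假设)
command = {"data": {"channel": None}}

def handler(data):
    # 一个会原地修改入参的处理函数
    data["channel"] = "telegram"

# 传入深拷贝,模板保持不变
handler(copy.deepcopy(command["data"]))
assert command["data"]["channel"] is None

# 直接传入模板,后续调用将受到污染
handler(command["data"])
assert command["data"]["channel"] == "telegram"
```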

app/core/cache.py Normal file

@@ -0,0 +1,567 @@
import inspect
import json
import pickle
from abc import ABC, abstractmethod
from functools import wraps
from typing import Any, Dict, Optional
from urllib.parse import quote
import redis
from cachetools import TTLCache
from cachetools.keys import hashkey
from app.core.config import settings
from app.log import logger
# 默认缓存区
DEFAULT_CACHE_REGION = "DEFAULT"
class CacheBackend(ABC):
"""
缓存后端基类,定义通用的缓存接口
"""
@abstractmethod
def set(self, key: str, value: Any, ttl: int, region: str = DEFAULT_CACHE_REGION, **kwargs) -> None:
"""
设置缓存
:param key: 缓存的键
:param value: 缓存的值
:param ttl: 缓存的存活时间,单位秒
:param region: 缓存的区
:param kwargs: 其他参数
"""
pass
@abstractmethod
def exists(self, key: str, region: str = DEFAULT_CACHE_REGION) -> bool:
"""
判断缓存键是否存在
:param key: 缓存的键
:param region: 缓存的区
:return: 存在返回 True,否则返回 False
"""
pass
@abstractmethod
def get(self, key: str, region: str = DEFAULT_CACHE_REGION) -> Any:
"""
获取缓存
:param key: 缓存的键
:param region: 缓存的区
:return: 返回缓存的值,如果缓存不存在返回 None
"""
pass
@abstractmethod
def delete(self, key: str, region: str = DEFAULT_CACHE_REGION) -> None:
"""
删除缓存
:param key: 缓存的键
:param region: 缓存的区
"""
pass
@abstractmethod
def clear(self, region: Optional[str] = None) -> None:
"""
清除指定区域的缓存或全部缓存
:param region: 缓存的区
"""
pass
@abstractmethod
def close(self) -> None:
"""
关闭缓存连接
"""
pass
@staticmethod
def get_region(region: str = DEFAULT_CACHE_REGION):
"""
获取缓存的区
"""
return f"region:{region}" if region else "region:default"
@staticmethod
def get_cache_key(func, args, kwargs):
"""
获取缓存的键,通过哈希函数对函数的参数进行处理
:param func: 被装饰的函数
:param args: 位置参数
:param kwargs: 关键字参数
:return: 缓存键
"""
signature = inspect.signature(func)
# 绑定传入的参数并应用默认值
bound = signature.bind(*args, **kwargs)
bound.apply_defaults()
# 忽略第一个参数,如果它是实例(self)或类(cls)
parameters = list(signature.parameters.keys())
if parameters and parameters[0] in ("self", "cls"):
bound.arguments.pop(parameters[0], None)
# 按照函数签名顺序提取参数值列表
keys = [
bound.arguments[param] for param in signature.parameters if param in bound.arguments
]
# 使用有序参数生成缓存键
return f"{func.__name__}_{hashkey(*keys)}"
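The key derivation above normalizes call styles by binding arguments to the function signature, so positional, keyword, and defaulted calls with the same logical arguments yield one cache key. A simplified sketch (a plain tuple stands in for `cachetools.keys.hashkey`; `fetch` is a hypothetical function):

```python
import inspect

def make_key(func, args, kwargs):
    # 绑定传入的参数并应用默认值,统一各种调用方式
    sig = inspect.signature(func)
    bound = sig.bind(*args, **kwargs)
    bound.apply_defaults()
    params = list(sig.parameters)
    # 与真实实现一致:忽略 self/cls
    if params and params[0] in ("self", "cls"):
        bound.arguments.pop(params[0], None)
    keys = tuple(bound.arguments[p] for p in sig.parameters if p in bound.arguments)
    return f"{func.__name__}_{keys}"

def fetch(tmdbid, season=1):
    ...

# 位置传参、关键字传参、使用默认值,得到同一个缓存键
assert make_key(fetch, (42, 1), {}) == make_key(fetch, (42,), {"season": 1})
assert make_key(fetch, (42,), {}) == make_key(fetch, (42, 1), {})
```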
class CacheToolsBackend(CacheBackend):
"""
基于 `cachetools.TTLCache` 实现的缓存后端
特性:
- 支持动态设置缓存的 TTL(Time To Live,存活时间)和最大条目数(Maxsize)
- 缓存实例按区域(region)划分,不同 region 拥有独立的缓存实例
- 同一 region 共享相同的 TTL 和 Maxsize,设置时只能作用于整个 region
限制:
- 不支持按 `key` 独立隔离 TTL 和 Maxsize,仅支持作用于 region 级别
"""
def __init__(self, maxsize: int = 1000, ttl: int = 1800):
"""
初始化缓存实例
:param maxsize: 缓存的最大条目数
:param ttl: 默认缓存存活时间,单位秒
"""
self.maxsize = maxsize
self.ttl = ttl
# 存储各个 region 的缓存实例region -> TTLCache
self._region_caches: Dict[str, TTLCache] = {}
def __get_region_cache(self, region: str) -> Optional[TTLCache]:
"""
获取指定区域的缓存实例,如果不存在则返回 None
"""
region = self.get_region(region)
return self._region_caches.get(region)
def set(self, key: str, value: Any, ttl: int = None, region: str = DEFAULT_CACHE_REGION, **kwargs) -> None:
"""
设置缓存值(TTL 和 Maxsize 仅在该 region 的缓存实例首次创建时生效)
:param key: 缓存的键
:param value: 缓存的值
:param ttl: 缓存的存活时间,单位秒,如果未传入则使用默认值
:param region: 缓存的区
:param kwargs: maxsize: 缓存的最大条目数,如果未传入则使用默认值
"""
ttl = ttl or self.ttl
maxsize = kwargs.get("maxsize", self.maxsize)
region = self.get_region(region)
# 如果该 region 尚未有缓存实例,则创建一个新的 TTLCache 实例
region_cache = self._region_caches.setdefault(region, TTLCache(maxsize=maxsize, ttl=ttl))
# 设置缓存值
region_cache[key] = value
def exists(self, key: str, region: str = DEFAULT_CACHE_REGION) -> bool:
"""
判断缓存键是否存在
:param key: 缓存的键
:param region: 缓存的区
:return: 存在返回 True,否则返回 False
"""
region_cache = self.__get_region_cache(region)
if region_cache is None:
return False
return key in region_cache
def get(self, key: str, region: str = DEFAULT_CACHE_REGION) -> Any:
"""
获取缓存的值
:param key: 缓存的键
:param region: 缓存的区
:return: 返回缓存的值,如果缓存不存在返回 None
"""
region_cache = self.__get_region_cache(region)
if region_cache is None:
return None
return region_cache.get(key)
def delete(self, key: str, region: str = DEFAULT_CACHE_REGION) -> None:
"""
删除缓存
:param key: 缓存的键
:param region: 缓存的区
"""
region_cache = self.__get_region_cache(region)
if region_cache is None:
return None
# 使用 pop 避免键不存在时抛出 KeyError
region_cache.pop(key, None)
def clear(self, region: Optional[str] = None) -> None:
"""
清除指定区域的缓存或全部缓存
:param region: 缓存的区
"""
if region:
# 清理指定缓存区
region_cache = self.__get_region_cache(region)
if region_cache:
region_cache.clear()
logger.info(f"Cleared cache for region: {region}")
else:
# 清除所有区域的缓存
for region_cache in self._region_caches.values():
region_cache.clear()
logger.info("Cleared all cache")
def close(self) -> None:
"""
内存缓存不需要关闭资源
"""
pass
class RedisBackend(CacheBackend):
"""
基于 Redis 实现的缓存后端,支持通过 Redis 存储缓存
特性:
- 支持动态设置缓存的 TTL(Time To Live,存活时间)
- 支持分区域(region)管理缓存,不同的 region 采用独立的命名空间
- 支持自定义最大内存限制(maxmemory)和内存淘汰策略(如 allkeys-lru)
限制:
- 由于 Redis 的分布式特性,写入和读取可能受到网络延迟的影响
- Pickle 反序列化可能存在安全风险,需进一步重构调用来源,避免复杂对象缓存
"""
# 类型缓存集合,针对非容器简单类型
_complex_serializable_types = set()
_simple_serializable_types = set()
def __init__(self, redis_url: str = "redis://localhost", ttl: int = 1800):
"""
初始化 Redis 缓存实例
:param redis_url: Redis 服务的 URL
:param ttl: 缓存的存活时间,单位秒
"""
self.redis_url = redis_url
self.ttl = ttl
try:
self.client = redis.Redis.from_url(
redis_url,
decode_responses=False,
socket_timeout=30,
socket_connect_timeout=5,
health_check_interval=60,
)
# 测试连接,确保 Redis 可用
self.client.ping()
logger.debug("Successfully connected to Redis")
self.set_memory_limit()
except Exception as e:
logger.error(f"Failed to connect to Redis: {e}")
raise RuntimeError("Redis connection failed") from e
def set_memory_limit(self, policy: str = "allkeys-lru"):
"""
动态设置 Redis 最大内存和内存淘汰策略
:param policy: 淘汰策略(如 'allkeys-lru')
"""
try:
# 如果有显式值,则直接使用,为 0 时说明不限制,如果未配置,开启 BIG_MEMORY_MODE 时为 "1024mb",未开启时为 "256mb"
maxmemory = settings.CACHE_REDIS_MAXMEMORY or ("1024mb" if settings.BIG_MEMORY_MODE else "256mb")
self.client.config_set("maxmemory", maxmemory)
self.client.config_set("maxmemory-policy", policy)
logger.debug(f"Redis maxmemory set to {maxmemory}, policy: {policy}")
except Exception as e:
logger.error(f"Failed to set Redis maxmemory or policy: {e}")
@staticmethod
def is_container_type(t):
return t in (list, dict, tuple, set)
@classmethod
def serialize(cls, value: Any) -> bytes:
"""
将值序列化为二进制数据,根据序列化方式标识格式
"""
vt = type(value)
# 针对非容器类型使用缓存策略
if not cls.is_container_type(vt):
# 如果已知需要复杂序列化
if vt in cls._complex_serializable_types:
return b"PICKLE" + b"\x00" + pickle.dumps(value)
# 如果已知可以简单序列化
if vt in cls._simple_serializable_types:
json_data = json.dumps(value).encode("utf-8")
return b"JSON" + b"\x00" + json_data
# 对于未知的非容器类型,尝试简单序列化,如抛出异常,再使用复杂序列化
try:
json_data = json.dumps(value).encode("utf-8")
cls._simple_serializable_types.add(vt)
return b"JSON" + b"\x00" + json_data
except TypeError:
cls._complex_serializable_types.add(vt)
return b"PICKLE" + b"\x00" + pickle.dumps(value)
# 针对容器类型,每次尝试简单序列化,不使用缓存
else:
try:
json_data = json.dumps(value).encode("utf-8")
return b"JSON" + b"\x00" + json_data
except TypeError:
return b"PICKLE" + b"\x00" + pickle.dumps(value)
@classmethod
def deserialize(cls, value: bytes) -> Any:
"""
将二进制数据反序列化为原始值,根据格式标识区分序列化方式
"""
format_marker, data = value.split(b"\x00", 1)
if format_marker == b"JSON":
return json.loads(data.decode("utf-8"))
elif format_marker == b"PICKLE":
return pickle.loads(data)
else:
raise ValueError("Unknown serialization format")
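The `JSON`/`PICKLE` prefix plus `b"\x00"` separator makes each stored blob self-describing; this sketch keeps only the format branching and drops the per-type caching sets:

```python
import json
import pickle

def serialize(value):
    # 优先尝试 JSON,失败时回退到 Pickle,并带上格式标识
    try:
        return b"JSON\x00" + json.dumps(value).encode("utf-8")
    except TypeError:
        return b"PICKLE\x00" + pickle.dumps(value)

def deserialize(blob):
    # 根据格式标识选择反序列化方式
    marker, data = blob.split(b"\x00", 1)
    if marker == b"JSON":
        return json.loads(data.decode("utf-8"))
    if marker == b"PICKLE":
        return pickle.loads(data)
    raise ValueError("Unknown serialization format")

assert deserialize(serialize({"a": 1})) == {"a": 1}    # JSON 路径
assert deserialize(serialize({1, 2, 3})) == {1, 2, 3}  # set 无法 JSON 序列化 → Pickle 路径
```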
# @staticmethod
# def serialize(value: Any) -> bytes:
# return msgpack.packb(value, use_bin_type=True)
#
# @staticmethod
# def deserialize(value: bytes) -> Any:
# return msgpack.unpackb(value, raw=False)
def get_redis_key(self, region: str, key: str) -> str:
"""
获取缓存 Key
"""
# 使用 region 作为缓存键的一部分
region = self.get_region(quote(region))
return f"{region}:key:{quote(key)}"
def set(self, key: str, value: Any, ttl: int = None, region: str = DEFAULT_CACHE_REGION, **kwargs) -> None:
"""
设置缓存
:param key: 缓存的键
:param value: 缓存的值
:param ttl: 缓存的存活时间,单位秒,如果未传入则使用默认值
:param region: 缓存的区
:param kwargs: kwargs
"""
try:
ttl = ttl or self.ttl
redis_key = self.get_redis_key(region, key)
# 对值进行序列化
serialized_value = self.serialize(value)
kwargs.pop("maxsize", None)
self.client.set(redis_key, serialized_value, ex=ttl, **kwargs)
except Exception as e:
logger.error(f"Failed to set key: {key} in region: {region}, error: {e}")
def exists(self, key: str, region: str = DEFAULT_CACHE_REGION) -> bool:
"""
判断缓存键是否存在
:param key: 缓存的键
:param region: 缓存的区
:return: 存在返回 True,否则返回 False
"""
try:
redis_key = self.get_redis_key(region, key)
return self.client.exists(redis_key) == 1
except Exception as e:
logger.error(f"Failed to check existence of key: {key} in region: {region}, error: {e}")
return False
def get(self, key: str, region: str = DEFAULT_CACHE_REGION) -> Optional[Any]:
"""
获取缓存的值
:param key: 缓存的键
:param region: 缓存的区
:return: 返回缓存的值,如果缓存不存在返回 None
"""
try:
redis_key = self.get_redis_key(region, key)
value = self.client.get(redis_key)
if value is not None:
return self.deserialize(value) # noqa
return None
except Exception as e:
logger.error(f"Failed to get key: {key} in region: {region}, error: {e}")
return None
def delete(self, key: str, region: str = DEFAULT_CACHE_REGION) -> None:
"""
删除缓存
:param key: 缓存的键
:param region: 缓存的区
"""
try:
redis_key = self.get_redis_key(region, key)
self.client.delete(redis_key)
except Exception as e:
logger.error(f"Failed to delete key: {key} in region: {region}, error: {e}")
def clear(self, region: Optional[str] = None) -> None:
"""
清除指定区域的缓存或全部缓存
:param region: 缓存的区
"""
try:
if region:
cache_region = self.get_region(quote(region))
redis_key = f"{cache_region}:key:*"
# self.client.delete(*self.client.keys(redis_key))
with self.client.pipeline() as pipe:
for key in self.client.scan_iter(redis_key):
pipe.delete(key)
pipe.execute()
logger.info(f"Cleared Redis cache for region: {region}")
else:
self.client.flushdb()
logger.info("Cleared all Redis cache")
except Exception as e:
logger.error(f"Failed to clear cache, region: {region}, error: {e}")
def close(self) -> None:
"""
关闭 Redis 客户端的连接池
"""
if self.client:
self.client.close()
def get_cache_backend(maxsize: int = 1000, ttl: int = 1800) -> CacheBackend:
"""
根据配置获取缓存后端实例
:param maxsize: 缓存的最大条目数
:param ttl: 缓存的默认存活时间,单位秒
:return: 返回缓存后端实例
"""
cache_type = settings.CACHE_BACKEND_TYPE
logger.debug(f"Cache backend type from settings: {cache_type}")
if cache_type == "redis":
redis_url = settings.CACHE_BACKEND_URL
if redis_url:
try:
logger.debug(f"Attempting to use RedisBackend with URL: {redis_url}, TTL: {ttl}")
return RedisBackend(redis_url=redis_url, ttl=ttl)
except RuntimeError:
logger.warning("Falling back to CacheToolsBackend due to Redis connection failure.")
else:
logger.debug("Cache backend type is redis, but no valid CACHE_BACKEND_URL found. "
"Falling back to CacheToolsBackend.")
# 如果不是 Redis回退到内存缓存
logger.debug(f"Using CacheToolsBackend with default maxsize: {maxsize}, TTL: {ttl}")
return CacheToolsBackend(maxsize=maxsize, ttl=ttl)
def cached(region: Optional[str] = None, maxsize: int = 1000, ttl: int = 1800,
skip_none: bool = True, skip_empty: bool = False):
"""
自定义缓存装饰器,支持为每个 key 动态传递 maxsize 和 ttl
:param region: 缓存的区
:param maxsize: 缓存的最大条目数,默认值为 1000
:param ttl: 缓存的存活时间,单位秒,默认值为 1800
:param skip_none: 跳过 None 缓存,默认为 True
:param skip_empty: 跳过空值缓存(如 None, [], {}, "", set()),默认为 False
:return: 装饰器函数
"""
def should_cache(value: Any) -> bool:
"""
判断是否应该缓存结果,如果返回值是 None 或空值则不缓存
:param value: 要判断的缓存值
:return: 是否缓存结果
"""
if skip_none and value is None:
return False
# if skip_empty and value in [None, [], {}, "", set()]:
if skip_empty and not value:
return False
return True
def is_valid_cache_value(cache_key: str, cached_value: Any, cache_region: str) -> bool:
"""
判断指定的值是否为一个有效的缓存值
:param cache_key: 缓存的键
:param cached_value: 缓存的值
:param cache_region: 缓存的区
:return: 若值是有效的缓存值返回 True,否则返回 False
"""
# 如果 skip_none 为 False,且 value 为 None,需要判断缓存实际是否存在
if not skip_none and cached_value is None:
if not cache_backend.exists(key=cache_key, region=cache_region):
return False
return True
def decorator(func):
# 获取缓存区
cache_region = region if region is not None else f"{func.__module__}.{func.__name__}"
@wraps(func)
def wrapper(*args, **kwargs):
# 获取缓存键
cache_key = cache_backend.get_cache_key(func, args, kwargs)
# 尝试获取缓存
cached_value = cache_backend.get(cache_key, region=cache_region)
if should_cache(cached_value) and is_valid_cache_value(cache_key, cached_value, cache_region):
return cached_value
# 执行函数并缓存结果
result = func(*args, **kwargs)
# 判断是否需要缓存
if not should_cache(result):
return result
# 设置缓存(如果有传入的 maxsize 和 ttl,则覆盖默认值)
cache_backend.set(cache_key, result, ttl=ttl, maxsize=maxsize, region=cache_region)
return result
def cache_clear():
"""
清理缓存区
"""
# 清理缓存区
cache_backend.clear(region=cache_region)
wrapper.cache_region = cache_region
wrapper.cache_clear = cache_clear
return wrapper
return decorator
# 缓存后端实例
cache_backend = get_cache_backend()
def close_cache() -> None:
"""
关闭缓存后端连接并清理资源
"""
try:
if cache_backend:
cache_backend.close()
logger.info("Cache backend closed successfully.")
except Exception as e:
logger.error(f"Error while closing cache backend: {e}")
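The `skip_none`/`skip_empty` gate in the decorator reduces to a simple truthiness check; extracted here as a standalone sketch:

```python
def should_cache(value, skip_none=True, skip_empty=False):
    # None 默认不缓存;空值(如 [], {}, "", set())仅在 skip_empty 开启时跳过
    if skip_none and value is None:
        return False
    if skip_empty and not value:
        return False
    return True

assert should_cache([1, 2]) is True
assert should_cache(None) is False
assert should_cache([], skip_empty=True) is False
assert should_cache([]) is True  # 未开启 skip_empty 时空值仍会缓存
```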


@@ -71,6 +71,12 @@ class ConfigModel(BaseModel):
DB_TIMEOUT: int = 60
# SQLite 是否启用 WAL 模式,默认关闭
DB_WAL_ENABLE: bool = False
# 缓存类型,支持 cachetools 和 redis,默认使用 cachetools
CACHE_BACKEND_TYPE: str = "cachetools"
# 缓存连接字符串,仅外部缓存(如 Redis、Memcached)需要
CACHE_BACKEND_URL: Optional[str] = None
# Redis 缓存最大内存限制,未配置时,如开启大内存模式时为 "1024mb",未开启时为 "256mb"
CACHE_REDIS_MAXMEMORY: Optional[str] = None
# 配置文件目录
CONFIG_DIR: Optional[str] = None
# 超级管理员
@@ -112,7 +118,7 @@ class ConfigModel(BaseModel):
# 自动检查和更新站点资源包(站点索引、认证等)
AUTO_UPDATE_RESOURCE: bool = True
# 是否启用DOH解析域名
DOH_ENABLE: bool = True
DOH_ENABLE: bool = False
# 使用 DOH 解析的域名列表
DOH_DOMAINS: str = ("api.themoviedb.org,"
"api.tmdb.org,"
@@ -230,11 +236,18 @@ class ConfigModel(BaseModel):
"doubanio.com",
"lain.bgm.tv",
"raw.githubusercontent.com",
"github.com"]
"github.com",
"thetvdb.com",
"cctvpic.com",
"iqiyipic.com",
"hdslb.com",
"cmvideo.cn",
"ykimg.com",
"qpic.cn"]
)
# 允许的图片文件后缀格式
SECURITY_IMAGE_SUFFIXES: List[str] = Field(
default_factory=lambda: [".jpg", ".jpeg", ".png", ".webp", ".gif", ".svg"]
default_factory=lambda: [".jpg", ".jpeg", ".png", ".webp", ".gif", ".svg", ".avif"]
)
# 重命名时支持的S0别名
RENAME_FORMAT_S0_NAMES: List[str] = Field(
@@ -351,7 +364,7 @@ class Settings(BaseSettings, ConfigModel, LogConfigModel):
return default, True
@validator('*', pre=True, always=True)
def generic_type_validator(cls, value: Any, field): # noqa
def generic_type_validator(cls, value: Any, field): # noqa
"""
通用校验器,尝试将配置值转换为期望的类型
"""


@@ -262,6 +262,8 @@ class MediaInfo:
runtime: int = None
# 下一集
next_episode_to_air: dict = field(default_factory=dict)
# 内容分级
content_rating: str = None
def __post_init__(self):
# 设置媒体信息


@@ -69,7 +69,7 @@ class MetaBase(object):
_subtitle_flag = False
_title_episodel_re = r"Episode\s+(\d{1,4})"
_subtitle_season_re = r"(?<![全共]\s*)[第\s]+([0-9一二三四五六七八九十S\-]+)\s*季(?!\s*[全共])"
_subtitle_season_all_re = r"[全共]\s*([0-9一二三四五六七八九十]+)\s*季|([0-9一二三四五六七八九十]+)\s*季\s*全"
_subtitle_season_all_re = r"[全共]\s*([0-9一二三四五六七八九十]+)\s*季"
_subtitle_episode_re = r"(?<![全共]\s*)[第\s]+([0-9一二三四五六七八九十百零EP]+)\s*[集话話期幕](?!\s*[全共])"
_subtitle_episode_between_re = r"[第]*\s*([0-9一二三四五六七八九十百零]+)\s*[集话話期幕]?\s*-\s*第*\s*([0-9一二三四五六七八九十百零]+)\s*[集话話期幕]"
_subtitle_episode_all_re = r"([0-9一二三四五六七八九十百零]+)\s*集\s*全|[全共]\s*([0-9一二三四五六七八九十百零]+)\s*[集话話期幕]"
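The tightened `_subtitle_season_all_re` drops the trailing 「x季全」 alternative and now only matches the 「全x季 / 共x季」 prefix form; a quick check of the new pattern:

```python
import re

# 与 MetaBase._subtitle_season_all_re 的新定义一致
season_all_re = r"[全共]\s*([0-9一二三四五六七八九十]+)\s*季"

assert re.search(season_all_re, "全4季").group(1) == "4"
assert re.search(season_all_re, "共10季").group(1) == "10"
assert re.search(season_all_re, "4季全") is None  # 后缀形式不再匹配
```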
@@ -247,7 +247,7 @@ class MetaBase(object):
self.type = MediaType.TV
self._subtitle_flag = True
return
# x集全
# x集全/全x集
episode_all_str = re.search(r'%s' % self._subtitle_episode_all_re, title_text, re.IGNORECASE)
if episode_all_str:
episode_all = episode_all_str.group(1)
@@ -259,8 +259,6 @@ class MetaBase(object):
except Exception as err:
logger.debug(f'识别集失败:{str(err)} - {traceback.format_exc()}')
return
self.begin_episode = None
self.end_episode = None
self.type = MediaType.TV
self._subtitle_flag = True
return


@@ -70,7 +70,7 @@ class ReleaseGroupsMatcher(metaclass=Singleton):
"U2": [],
"ultrahd": [],
"others": ['B(?:MDru|eyondHD|TN)', 'C(?:fandora|trlhd|MRG)', 'DON', 'EVO', 'FLUX', 'HONE(?:|yG)',
'N(?:oGroup|T(?:b|G))', 'PandaMoon', 'SMURF', 'T(?:EPES|aengoo|rollHD )'],
'N(?:oGroup|T(?:b|G))', 'PandaMoon', 'SMURF', 'T(?:EPES|aengoo|rollHD )', 'UBWEB'],
"anime": ['ANi', 'HYSUB', 'KTXP', 'LoliHouse', 'MCE', 'Nekomoe kissaten', 'SweetSub', 'MingY',
'(?:Lilith|NC)-Raws', '织梦字幕组', '枫叶字幕组', '猎户手抄部', '喵萌奶茶屋', '漫猫字幕社',
'霜庭云花Sub', '北宇治字幕组', '氢气烤肉架', '云歌字幕组', '萌樱字幕组', '极影字幕社',

app/core/workflow.py Normal file

@@ -0,0 +1,24 @@
class WorkFlowManager:
"""
工作流管理器
"""
def __init__(self):
self.workflows = {}
def register(self, workflow):
"""
注册工作流
:param workflow: 工作流对象
:return:
"""
self.workflows[workflow.name] = workflow
def get_workflow(self, name):
"""
获取工作流
:param name: 工作流名称
:return:
"""
return self.workflows.get(name)
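The new `WorkFlowManager` keys workflows by name. A minimal usage sketch (the class is restated so the snippet runs standalone; any object with a `name` attribute can serve as a workflow here):

```python
from types import SimpleNamespace

class WorkFlowManager:
    """Restatement of the manager above, for illustration only."""
    def __init__(self):
        self.workflows = {}

    def register(self, workflow):
        self.workflows[workflow.name] = workflow

    def get_workflow(self, name):
        return self.workflows.get(name)

mgr = WorkFlowManager()
mgr.register(SimpleNamespace(name="auto_subscribe", actions=[]))
assert mgr.get_workflow("auto_subscribe").actions == []
assert mgr.get_workflow("missing") is None  # unknown names return None, no KeyError
```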

View File

@@ -24,6 +24,7 @@ class Subscribe(Base):
tvdbid = Column(Integer)
doubanid = Column(String, index=True)
bangumiid = Column(Integer, index=True)
mediaid = Column(String, index=True)
# 季号
season = Column(Integer)
# 海报
@@ -107,6 +108,14 @@ class Subscribe(Base):
result = db.query(Subscribe).filter(Subscribe.state.in_(states)).all()
return list(result)
@staticmethod
@db_query
def get_by_title(db: Session, title: str, season: int = None):
if season:
return db.query(Subscribe).filter(Subscribe.name == title,
Subscribe.season == season).first()
return db.query(Subscribe).filter(Subscribe.name == title).first()
@staticmethod
@db_query
def get_by_tmdbid(db: Session, tmdbid: int, season: int = None):
@@ -117,14 +126,6 @@ class Subscribe(Base):
result = db.query(Subscribe).filter(Subscribe.tmdbid == tmdbid).all()
return list(result)
@staticmethod
@db_query
def get_by_title(db: Session, title: str, season: int = None):
if season:
return db.query(Subscribe).filter(Subscribe.name == title,
Subscribe.season == season).first()
return db.query(Subscribe).filter(Subscribe.name == title).first()
@staticmethod
@db_query
def get_by_doubanid(db: Session, doubanid: str):
@@ -135,6 +136,11 @@ class Subscribe(Base):
def get_by_bangumiid(db: Session, bangumiid: int):
return db.query(Subscribe).filter(Subscribe.bangumiid == bangumiid).first()
@staticmethod
@db_query
def get_by_mediaid(db: Session, mediaid: str):
return db.query(Subscribe).filter(Subscribe.mediaid == mediaid).first()
@db_update
def delete_by_tmdbid(self, db: Session, tmdbid: int, season: int):
subscrbies = self.get_by_tmdbid(db, tmdbid, season)
@@ -149,6 +155,13 @@ class Subscribe(Base):
subscribe.delete(db, subscribe.id)
return True
@db_update
def delete_by_mediaid(self, db: Session, mediaid: str):
subscribe = self.get_by_mediaid(db, mediaid)
if subscribe:
subscribe.delete(db, subscribe.id)
return True
@staticmethod
@db_query
def list_by_username(db: Session, username: str, state: str = None, mtype: str = None):

View File

@@ -22,6 +22,7 @@ class SubscribeHistory(Base):
tvdbid = Column(Integer)
doubanid = Column(String, index=True)
bangumiid = Column(Integer, index=True)
mediaid = Column(String, index=True)
# 季号
season = Column(Integer)
# 海报
@@ -73,6 +74,18 @@ class SubscribeHistory(Base):
result = db.query(SubscribeHistory).filter(
SubscribeHistory.type == mtype
).order_by(
SubscribeHistory.date.desc()
SubscribeHistory.date.desc()
).offset((page - 1) * count).limit(count).all()
return list(result)
@staticmethod
@db_query
def exists(db: Session, tmdbid: int = None, doubanid: str = None, season: int = None):
if tmdbid:
if season:
return db.query(SubscribeHistory).filter(SubscribeHistory.tmdbid == tmdbid,
SubscribeHistory.season == season).first()
return db.query(SubscribeHistory).filter(SubscribeHistory.tmdbid == tmdbid).first()
elif doubanid:
return db.query(SubscribeHistory).filter(SubscribeHistory.doubanid == doubanid).first()
return None

35
app/db/models/workflow.py Normal file
View File

@@ -0,0 +1,35 @@
from datetime import datetime
from sqlalchemy import Column, Integer, JSON, Sequence, String
from app.db import Base
class Workflow(Base):
"""
工作流表
"""
# ID
id = Column(Integer, Sequence('id'), primary_key=True, index=True)
# 名称
name = Column(String, index=True, nullable=False)
# 描述
description = Column(String)
# 定时器
timer = Column(String)
# 状态:N-新建 R-运行中 P-暂停 S-成功 F-失败
state = Column(String, nullable=False, index=True, default='N')
# 当前执行动作
current_action = Column(String)
# 任务执行结果
result = Column(String)
# 已执行次数
run_count = Column(Integer, default=0)
# 任务列表
actions = Column(JSON, default=list)
# 执行上下文
context = Column(JSON, default=dict)
# 创建时间
add_time = Column(String, default=datetime.now().strftime('%Y-%m-%d %H:%M:%S'))
# 最后执行时间
last_time = Column(String)
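One caveat on the `add_time` column above: a `default=` built from `datetime.now()` is evaluated once, at class-definition (import) time, so every later insert gets the same frozen timestamp; SQLAlchemy also accepts a callable, which defers evaluation to insert time. A plain-Python illustration of the difference:

```python
import time
from datetime import datetime

fmt = '%Y-%m-%d %H:%M:%S'

# Evaluated once, when this line runs -- analogous to default=datetime.now().strftime(...)
frozen_default = datetime.now().strftime(fmt)

def callable_default():
    # Evaluated on every call -- analogous to passing a callable as default=
    return datetime.now().strftime(fmt)

first = frozen_default
time.sleep(1.1)
assert frozen_default == first        # still the definition-time value
assert callable_default() >= first    # re-evaluated, moves forward
```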

View File

@@ -118,3 +118,16 @@ class SubscribeOper(DbOper):
kwargs.pop("id")
subscribe = SubscribeHistory(**kwargs)
subscribe.create(self._db)
def exist_history(self, tmdbid: int = None, doubanid: str = None, season: int = None):
"""
判断是否存在订阅历史
"""
if tmdbid:
if season:
return True if SubscribeHistory.exists(self._db, tmdbid=tmdbid, season=season) else False
else:
return True if SubscribeHistory.exists(self._db, tmdbid=tmdbid) else False
elif doubanid:
return True if SubscribeHistory.exists(self._db, doubanid=doubanid) else False
return False

View File

@@ -1,4 +1,4 @@
from typing import Optional
from typing import Optional, List
from fastapi import Depends, HTTPException
from sqlalchemy.orm import Session
@@ -51,6 +51,12 @@ class UserOper(DbOper):
用户管理
"""
def list(self) -> List[User]:
"""
获取用户列表
"""
return User.list(self._db)
def add(self, **kwargs):
"""
新增用户
@@ -67,27 +73,6 @@ class UserOper(DbOper):
def get_permissions(self, name: str) -> dict:
"""
获取用户权限
{
"admin": "管理员",
"usermanage": "用户管理",
"dashboard": "仪表板",
"ranking": "推荐榜单",
"resource": {
"search": "搜索站点资源",
"download": "下载站点资源",
},
"subscribe": {
"request": "提交订阅请求",
"autopass": "订阅请求自动批准"
"approve": "审批订阅请求",
"calendar": "查看订阅日历",
"manage": "管理所有订阅"
},
"downloading": {
"view": "查看正在下载任务",
"manager": "管理正在下载任务"
}
}
"""
user = User.get_by_name(self._db, name)
if user:
@@ -111,3 +96,16 @@ class UserOper(DbOper):
if settings:
return settings.get(key)
return None
def get_name(self, **kwargs) -> Optional[str]:
"""
根据绑定账号获取用户名称
"""
users = self.list()
for user in users:
user_setting = user.settings
if user_setting:
for k, v in kwargs.items():
if user_setting.get(k) == str(v):
return user.name
return None
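`UserOper.get_name` scans each user's settings and compares the bound account value as a string. A standalone mirror of that lookup (the setting key `wechat_userid` below is illustrative only):

```python
class _User:
    """Stand-in for the User model: only name and settings are needed here."""
    def __init__(self, name, settings):
        self.name = name
        self.settings = settings

def get_name(users, **kwargs):
    # Mirrors UserOper.get_name: first user whose settings bind the given value
    for user in users:
        user_setting = user.settings
        if user_setting:
            for k, v in kwargs.items():
                if user_setting.get(k) == str(v):
                    return user.name
    return None

users = [_User("admin", {"wechat_userid": "wx123"}), _User("guest", None)]
assert get_name(users, wechat_userid="wx123") == "admin"
assert get_name(users, wechat_userid="nobody") is None
```

Note the `str(v)` coercion: an integer account id still matches a string-valued setting.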

View File

@@ -99,7 +99,7 @@ class FormatParser(object):
# `details` 格式为 `X`
start_ep = self.__offset.replace("EP", str(self._start_ep))
return int(eval(start_ep)), None, self.part
else:
elif not self._format:
# `details` 格式为 `X,X`
start_ep = self.__offset.replace("EP", str(self._start_ep))
end_ep = self.__offset.replace("EP", str(self._end_ep))
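Context for the `elif not self._format` fix: the offset is a small arithmetic expression with `EP` as a placeholder, substituted and then `eval`'d. A sketch of that substitution (the offset string is illustrative):

```python
# The offset uses EP as a placeholder for the parsed episode number;
# FormatParser substitutes it before eval(), e.g. to map absolute numbering.
offset = "EP+12"
start_ep = 1
assert int(eval(offset.replace("EP", str(start_ep)))) == 13
```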

View File

@@ -64,13 +64,12 @@ class ModuleHelper:
def reload_sub_modules(parent_module, parent_module_name):
"""重新加载一级子模块"""
for sub_importer, sub_module_name, sub_is_pkg in pkgutil.walk_packages(parent_module.__path__):
full_sub_module_name = f'{parent_module_name}.{sub_module_name}'
for sub_importer, sub_module_name, sub_is_pkg in pkgutil.walk_packages(parent_module.__path__, parent_module_name+'.'):
try:
full_sub_module = importlib.import_module(full_sub_module_name)
full_sub_module = importlib.import_module(sub_module_name)
importlib.reload(full_sub_module)
except Exception as sub_err:
logger.debug(f'加载子模块 {full_sub_module_name} 失败:{str(sub_err)} - {traceback.format_exc()}')
logger.debug(f'加载子模块 {sub_module_name} 失败:{str(sub_err)} - {traceback.format_exc()}')
# 遍历包中的所有子模块
for importer, package_name, is_pkg in pkgutil.iter_modules(packages.__path__):
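The reload simplification relies on `pkgutil.walk_packages` accepting a prefix: with it, the yielded names are fully qualified and can go straight into `importlib.import_module` without rebuilding `f'{parent}.{name}'` by hand. Standalone check, using the stdlib `json` package as a stand-in parent:

```python
import importlib
import json
import pkgutil

# With the prefix argument, walk_packages yields fully qualified module names.
names = [name for _, name, _ in pkgutil.walk_packages(json.__path__, json.__name__ + ".")]
assert "json.encoder" in names
# The names are directly importable, as the refactored loop assumes.
assert importlib.import_module("json.encoder").__name__ == "json.encoder"
```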

View File

@@ -4,11 +4,11 @@ import traceback
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple, Set
from cachetools import TTLCache, cached
from packaging.specifiers import SpecifierSet, InvalidSpecifier
from packaging.version import Version, InvalidVersion
from pkg_resources import Requirement, working_set
from app.core.cache import cached
from app.core.config import settings
from app.db.systemconfig_oper import SystemConfigOper
from app.log import logger
@@ -38,24 +38,26 @@ class PluginHelper(metaclass=Singleton):
if self.install_report():
self.systemconfig.set(SystemConfigKey.PluginInstallReport, "1")
@cached(cache=TTLCache(maxsize=1000, ttl=1800))
def get_plugins(self, repo_url: str, package_version: str = None) -> Dict[str, dict]:
@cached(maxsize=1000, ttl=1800)
def get_plugins(self, repo_url: str, package_version: str = None) -> Optional[Dict[str, dict]]:
"""
获取Github所有最新插件列表
:param repo_url: Github仓库地址
:param package_version: 首选插件版本 (如 "v2", "v3"),如果不指定则获取 v1 版本
"""
if not repo_url:
return {}
return None
user, repo = self.get_repo_info(repo_url)
if not user or not repo:
return {}
return None
raw_url = self._base_url.format(user=user, repo=repo)
package_url = f"{raw_url}package.{package_version}.json" if package_version else f"{raw_url}package.json"
res = self.__request_with_fallback(package_url, headers=settings.REPO_GITHUB_HEADERS(repo=f"{user}/{repo}"))
if res is None:
return None
if res:
try:
return json.loads(res.text)
@@ -113,7 +115,7 @@ class PluginHelper(metaclass=Singleton):
return None, None
return user, repo
@cached(cache=TTLCache(maxsize=1, ttl=1800))
@cached(maxsize=1, ttl=1800)
def get_statistic(self) -> Dict:
"""
获取插件安装统计

View File

@@ -1,8 +1,7 @@
from threading import Thread
from typing import List, Tuple
from cachetools import TTLCache, cached
from app.core.cache import cached, cache_backend
from app.core.config import settings
from app.db.subscribe_oper import SubscribeOper
from app.db.systemconfig_oper import SystemConfigOper
@@ -31,7 +30,7 @@ class SubscribeHelper(metaclass=Singleton):
_sub_fork = f"{settings.MP_SERVER_HOST}/subscribe/fork/%s"
_shares_cache = TTLCache(maxsize=20, ttl=1800)
_shares_cache_region = "subscribe_share"
def __init__(self):
self.systemconfig = SystemConfigOper()
@@ -41,7 +40,7 @@ class SubscribeHelper(metaclass=Singleton):
if self.sub_report():
self.systemconfig.set(SystemConfigKey.SubscribeReport, "1")
@cached(cache=TTLCache(maxsize=20, ttl=1800))
@cached(maxsize=20, ttl=1800)
def get_statistic(self, stype: str, page: int = 1, count: int = 30) -> List[dict]:
"""
获取订阅统计数据
@@ -129,6 +128,7 @@ class SubscribeHelper(metaclass=Singleton):
return False, "订阅不存在"
subscribe_dict = subscribe.to_dict()
subscribe_dict.pop("id")
cache_backend.clear(region=self._shares_cache_region)
res = RequestUtils(proxies=settings.PROXY, content_type="application/json",
timeout=10).post(self._sub_share,
json={
@@ -142,7 +142,7 @@ class SubscribeHelper(metaclass=Singleton):
return False, "连接MoviePilot服务器失败"
if res.ok:
# 清除 get_shares 的缓存,以便实时看到结果
self._shares_cache.clear()
cache_backend.clear(region=self._shares_cache_region)
return True, ""
else:
return False, res.json().get("message")
@@ -160,7 +160,7 @@ class SubscribeHelper(metaclass=Singleton):
return False, "连接MoviePilot服务器失败"
if res.ok:
# 清除 get_shares 的缓存,以便实时看到结果
self._shares_cache.clear()
cache_backend.clear(region=self._shares_cache_region)
return True, ""
else:
return False, res.json().get("message")
@@ -181,8 +181,8 @@ class SubscribeHelper(metaclass=Singleton):
else:
return False, res.json().get("message")
@cached(cache=_shares_cache)
def get_shares(self, name: str, page: int = 1, count: int = 30) -> List[dict]:
@cached(region=_shares_cache_region)
def get_shares(self, name: str = None, page: int = 1, count: int = 30) -> List[dict]:
"""
获取订阅分享数据
"""

View File

@@ -165,3 +165,12 @@ class BangumiModule(_ModuleBase):
if credits_info:
return [MediaInfo(bangumi_info=credit) for credit in credits_info]
return []
def bangumi_discover(self, **kwargs) -> Optional[List[MediaInfo]]:
"""
发现Bangumi番剧
"""
infos = self.bangumiapi.discover(**kwargs)
if infos:
return [MediaInfo(bangumi_info=info) for info in infos]
return []

View File

@@ -1,8 +1,8 @@
from datetime import datetime
import requests
from cachetools import TTLCache, cached
from app.core.cache import cached
from app.core.config import settings
from app.utils.http import RequestUtils
@@ -13,6 +13,7 @@ class BangumiApi(object):
"""
_urls = {
"discover": "v0/subjects",
"search": "search/subjects/%s?type=2",
"calendar": "calendar",
"detail": "v0/subjects/%s",
@@ -29,15 +30,18 @@ class BangumiApi(object):
pass
@classmethod
@cached(cache=TTLCache(maxsize=settings.CACHE_CONF["bangumi"], ttl=settings.CACHE_CONF["meta"]))
def __invoke(cls, url, **kwargs):
@cached(maxsize=settings.CACHE_CONF["bangumi"], ttl=settings.CACHE_CONF["meta"])
def __invoke(cls, url, key: str = None, **kwargs):
req_url = cls._base_url + url
params = {}
if kwargs:
params.update(kwargs)
resp = cls._req.get_res(url=req_url, params=params)
try:
return resp.json() if resp else None
if not resp:
return None
result = resp.json()
return result.get(key) if key else result
except Exception as e:
print(e)
return None
@@ -188,8 +192,17 @@ class BangumiApi(object):
获取人物参演作品
"""
ret_list = []
result = self.__invoke(self._urls["person_credits"] % person_id, _ts=datetime.strftime(datetime.now(), '%Y%m%d'))
result = self.__invoke(self._urls["person_credits"] % person_id,
_ts=datetime.strftime(datetime.now(), '%Y%m%d'))
if result:
for item in result:
ret_list.append(item)
return ret_list
def discover(self, **kwargs):
"""
发现
"""
return self.__invoke(self._urls["discover"],
key="data",
_ts=datetime.strftime(datetime.now(), '%Y%m%d'), **kwargs)
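The `_ts` argument passed to `__invoke` is a cache-busting trick: because the day stamp participates in the memoisation key, a long-TTL cache entry still rolls over at midnight. Illustration of the key construction (the function name is illustrative):

```python
from datetime import datetime

def daily_cache_key(url, **params):
    # The day stamp becomes part of the key, so entries expire daily
    # even when the TTL alone would keep them longer.
    params.setdefault("_ts", datetime.now().strftime("%Y%m%d"))
    return (url, tuple(sorted(params.items())))

assert daily_cache_key("v0/subjects", type=2) == daily_cache_key("v0/subjects", type=2)
assert daily_cache_key("v0/subjects", type=2, _ts="19700101") != daily_cache_key("v0/subjects", type=2)
```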

View File

@@ -7,8 +7,8 @@ from random import choice
from urllib import parse
import requests
from cachetools import TTLCache, cached
from app.core.cache import cached
from app.core.config import settings
from app.utils.http import RequestUtils
from app.utils.singleton import Singleton
@@ -174,14 +174,14 @@ class DoubanApi(metaclass=Singleton):
).digest()
).decode()
@cached(cache=TTLCache(maxsize=settings.CACHE_CONF["douban"], ttl=settings.CACHE_CONF["meta"]))
@cached(maxsize=settings.CACHE_CONF["douban"], ttl=settings.CACHE_CONF["meta"])
def __invoke_recommend(self, url: str, **kwargs) -> dict:
"""
推荐/发现类API
"""
return self.__invoke(url, **kwargs)
@cached(cache=TTLCache(maxsize=settings.CACHE_CONF["douban"], ttl=settings.CACHE_CONF["meta"]))
@cached(maxsize=settings.CACHE_CONF["douban"], ttl=settings.CACHE_CONF["meta"])
def __invoke_search(self, url: str, **kwargs) -> dict:
"""
搜索类API
@@ -216,7 +216,7 @@ class DoubanApi(metaclass=Singleton):
return resp.json()
return resp.json() if resp else {}
@cached(cache=TTLCache(maxsize=settings.CACHE_CONF["douban"], ttl=settings.CACHE_CONF["meta"]))
@cached(maxsize=settings.CACHE_CONF["douban"], ttl=settings.CACHE_CONF["meta"])
def __post(self, url: str, **kwargs) -> dict:
"""
POST请求

View File

@@ -155,7 +155,7 @@ class Emby:
case "tvshows":
library_type = MediaType.TV.value
case _:
continue
library_type = MediaType.UNKNOWN.value
image = self.__get_local_image_by_id(library.get("Id"))
libraries.append(
schemas.MediaServerLibrary(

View File

@@ -1,8 +1,7 @@
import re
from typing import Optional, Tuple, Union
from cachetools import TTLCache, cached
from app.core.cache import cached
from app.core.context import MediaInfo, settings
from app.log import logger
from app.modules import _ModuleBase
@@ -11,7 +10,6 @@ from app.utils.http import RequestUtils
class FanartModule(_ModuleBase):
"""
{
"name": "The Wheel of Time",
@@ -384,7 +382,7 @@ class FanartModule(_ModuleBase):
continue
if not isinstance(images, list):
continue
# 图片属性xx_path
image_name = self.__name(name)
if image_name.startswith("season"):
@@ -422,16 +420,19 @@ class FanartModule(_ModuleBase):
return result
@classmethod
@cached(cache=TTLCache(maxsize=settings.CACHE_CONF["fanart"], ttl=settings.CACHE_CONF["meta"]))
@cached(maxsize=settings.CACHE_CONF["fanart"], ttl=settings.CACHE_CONF["meta"])
def __request_fanart(cls, media_type: MediaType, queryid: Union[str, int]) -> Optional[dict]:
if media_type == MediaType.MOVIE:
image_url = cls._movie_url % queryid
else:
image_url = cls._tv_url % queryid
try:
ret = RequestUtils(proxies=cls._proxies, timeout=10).get_res(image_url)
ret = RequestUtils(proxies=cls._proxies, timeout=10).get_res(image_url, raise_exception=True)
if ret:
return ret.json()
else:
logger.debug(f"未能获取到 {queryid} 的Fanart图片")
return {}
except Exception as err:
logger.error(f"获取{queryid}的Fanart图片失败{str(err)}")
return None
return None

View File

@@ -16,7 +16,8 @@ from app.helper.module import ModuleHelper
from app.log import logger
from app.modules import _ModuleBase
from app.modules.filemanager.storages import StorageBase
from app.schemas import TransferInfo, ExistMediaInfo, TmdbEpisode, TransferDirectoryConf, FileItem, StorageUsage, TransferRenameEventData
from app.schemas import TransferInfo, ExistMediaInfo, TmdbEpisode, TransferDirectoryConf, FileItem, StorageUsage, \
TransferRenameEventData, TransferInterceptEventData
from app.schemas.types import MediaType, ModuleType, ChainEventType, OtherModulesType
from app.utils.system import SystemUtils
@@ -368,7 +369,7 @@ class FileManagerModule(_ModuleBase):
# 覆盖模式
overwrite_mode = target_directory.overwrite_mode
# 是否需要刮削
need_scrape = scrape or target_directory.scraping
need_scrape = target_directory.scraping if scrape is None else scrape
# 目标存储类型
if not target_storage:
target_storage = target_directory.library_storage
@@ -745,11 +746,12 @@ class FileManagerModule(_ModuleBase):
logger.error(f"音轨文件 {org_path.name} 整理失败:{str(error)}")
return True, ""
def __transfer_dir(self, fileitem: FileItem, transfer_type: str,
def __transfer_dir(self, fileitem: FileItem, mediainfo: MediaInfo, transfer_type: str,
target_storage: str, target_path: Path) -> Tuple[Optional[FileItem], str]:
"""
整理整个文件夹
:param fileitem: 源文件
:param mediainfo: 媒体信息
:param transfer_type: 整理方式
:param target_storage: 目标存储
:param target_path: 目标路径
@@ -763,6 +765,22 @@ class FileManagerModule(_ModuleBase):
target_item = target_oper.get_folder(target_path)
if not target_item:
return None, f"获取目标目录失败:{target_path}"
event_data = TransferInterceptEventData(
fileitem=fileitem,
mediainfo=mediainfo,
target_storage=target_storage,
target_path=target_path,
transfer_type=transfer_type
)
event = eventmanager.send_event(ChainEventType.TransferIntercept, event_data)
if event and event.event_data:
event_data = event.event_data
# 如果事件被取消,跳过文件整理
if event_data.cancel:
logger.debug(
f"Transfer dir canceled by event: {event_data.source},"
f"Reason: {event_data.reason}")
return None, event_data.reason
# 处理所有文件
state, errmsg = self.__transfer_dir_files(fileitem=fileitem,
target_storage=target_storage,
@@ -811,16 +829,38 @@ class FileManagerModule(_ModuleBase):
# 返回成功
return True, ""
def __transfer_file(self, fileitem: FileItem, target_storage: str, target_file: Path,
def __transfer_file(self, fileitem: FileItem, mediainfo: MediaInfo, target_storage: str, target_file: Path,
transfer_type: str, over_flag: bool = False) -> Tuple[Optional[FileItem], str]:
"""
整理一个文件,同时处理其他相关文件
:param fileitem: 原文件
:param mediainfo: 媒体信息
:param target_storage: 目标存储
:param target_file: 新文件
:param transfer_type: 整理方式
:param over_flag: 是否覆盖,为True时会先删除再整理
"""
logger.info(f"正在整理文件:【{fileitem.storage}{fileitem.path} 到 【{target_storage}{target_file}"
f"操作类型:{transfer_type}")
event_data = TransferInterceptEventData(
fileitem=fileitem,
mediainfo=mediainfo,
target_storage=target_storage,
target_path=target_file,
transfer_type=transfer_type,
options={
"over_flag": over_flag
}
)
event = eventmanager.send_event(ChainEventType.TransferIntercept, event_data)
if event and event.event_data:
event_data = event.event_data
# 如果事件被取消,跳过文件整理
if event_data.cancel:
logger.debug(
f"Transfer file canceled by event: {event_data.source},"
f"Reason: {event_data.reason}")
return None, event_data.reason
if target_storage == "local" and (target_file.exists() or target_file.is_symlink()):
if not over_flag:
logger.warn(f"文件已存在:{target_file}")
@@ -828,8 +868,6 @@ class FileManagerModule(_ModuleBase):
else:
logger.info(f"正在删除已存在的文件:{target_file}")
target_file.unlink()
logger.info(f"正在整理文件:【{fileitem.storage}{fileitem.path} 到 【{target_storage}{target_file}"
f"操作类型:{transfer_type}")
new_item, errmsg = self.__transfer_command(fileitem=fileitem,
target_storage=target_storage,
target_file=target_file,
@@ -920,18 +958,6 @@ class FileManagerModule(_ModuleBase):
rename_format = settings.TV_RENAME_FORMAT \
if mediainfo.type == MediaType.TV else settings.MOVIE_RENAME_FORMAT
# 计算重命名中的文件夹层数
rename_format_level = len(rename_format.split("/")) - 1
if rename_format_level < 1:
# 重命名格式不合法
logger.error(f"重命名格式不合法:{rename_format}")
return TransferInfo(success=False,
message=f"重命名格式不合法",
fileitem=fileitem,
transfer_type=transfer_type,
need_notify=need_notify)
# 判断是否为文件夹
if fileitem.type == "dir":
# 整理整个目录,一般为蓝光原盘
@@ -946,6 +972,7 @@ class FileManagerModule(_ModuleBase):
new_path = target_path / fileitem.name
# 整理目录
new_diritem, errmsg = self.__transfer_dir(fileitem=fileitem,
mediainfo=mediainfo,
target_storage=target_storage,
target_path=new_path,
transfer_type=transfer_type)
@@ -1011,12 +1038,15 @@ class FileManagerModule(_ModuleBase):
overflag = False
# 目的操作对象
target_oper: StorageBase = self.__get_storage_oper(target_storage)
# 计算重命名中的文件夹层级
rename_format_level = len(rename_format.split("/")) - 1
folder_path = new_file.parents[rename_format_level - 1]
# 目标目录
target_diritem = target_oper.get_folder(new_file.parents[rename_format_level - 1])
target_diritem = target_oper.get_folder(folder_path)
if not target_diritem:
logger.error(f"目标目录 {new_file.parents[rename_format_level - 1]} 获取失败")
logger.error(f"目标目录 {folder_path} 获取失败")
return TransferInfo(success=False,
message=f"目标目录 {new_file.parents[rename_format_level - 1]} 获取失败",
message=f"目标目录 {folder_path} 获取失败",
fileitem=fileitem,
fail_list=[fileitem.path],
transfer_type=transfer_type,
@@ -1072,6 +1102,7 @@ class FileManagerModule(_ModuleBase):
self.__delete_version_files(target_storage, new_file)
# 整理文件
new_item, err_msg = self.__transfer_file(fileitem=fileitem,
mediainfo=mediainfo,
target_storage=target_storage,
target_file=new_file,
transfer_type=transfer_type,
@@ -1136,7 +1167,7 @@ class FileManagerModule(_ModuleBase):
if episode.episode_number == meta.begin_episode:
episode_date = episode.air_date
break
return {
# 标题
"title": __convert_invalid_characters(mediainfo.title),
@@ -1256,10 +1287,6 @@ class FileManagerModule(_ModuleBase):
# 重命名格式
rename_format = settings.TV_RENAME_FORMAT \
if mediainfo.type == MediaType.TV else settings.MOVIE_RENAME_FORMAT
# 计算重命名中的文件夹层数
rename_format_level = len(rename_format.split("/")) - 1
if rename_format_level < 1:
continue
# 获取路径(重命名路径)
target_path = self.get_rename_path(
path=dir_path,
@@ -1267,13 +1294,19 @@ class FileManagerModule(_ModuleBase):
rename_dict=self.__get_naming_dict(meta=MetaInfo(mediainfo.title),
mediainfo=mediainfo)
)
# 计算重命名中的文件夹层数
rename_format_level = len(rename_format.split("/")) - 1
# 取相对路径的第1层目录
media_path = target_path.parents[rename_format_level - 1]
# 检索媒体文件
fileitem = storage_oper.get_item(media_path)
if not fileitem:
continue
media_files = self.list_files(fileitem, True)
try:
media_files = self.list_files(fileitem, True)
except Exception as e:
logger.debug(f"获取媒体文件列表失败:{str(e)}")
continue
if media_files:
for media_file in media_files:
if f".{media_file.extension.lower()}" in settings.RMT_MEDIAEXT:

View File

@@ -102,7 +102,7 @@ class StorageBase(metaclass=ABCMeta):
"""
获取父目录
"""
return self.get_folder(Path(fileitem.path).parent)
return self.get_item(Path(fileitem.path).parent)
@abstractmethod
def delete(self, fileitem: schemas.FileItem) -> bool:

View File

@@ -4,10 +4,10 @@ from datetime import datetime
from pathlib import Path
from typing import Optional, List, Dict
from cachetools import cached, TTLCache
from requests import Response
from app import schemas
from app.core.cache import cached
from app.core.config import settings
from app.log import logger
from app.modules.filemanager.storages import StorageBase
@@ -67,13 +67,16 @@ class Alist(StorageBase, metaclass=Singleton):
return self.__generate_token
@property
@cached(cache=TTLCache(maxsize=1, ttl=60 * 60 * 24 * 2 - 60 * 5))
@cached(maxsize=1, ttl=60 * 60 * 24 * 2 - 60 * 5)
def __generate_token(self) -> str:
"""
使用账号密码生成一个临时token
如果设置永久令牌则返回永久令牌,否则使用账号密码生成一个临时 token
缓存2天提前5分钟更新
"""
conf = self.get_conf()
token = conf.get("token")
if token:
return str(token)
resp: Response = RequestUtils(headers={
'Content-Type': 'application/json'
}).post_res(

View File

@@ -1,17 +1,14 @@
from datetime import datetime
from typing import List, Optional, Tuple, Union
from ruamel.yaml import CommentedMap
from app.core.config import settings
from app.core.context import TorrentInfo
from app.db.site_oper import SiteOper
from app.helper.module import ModuleHelper
from app.helper.sites import SitesHelper
from app.helper.sites import SitesHelper, SiteSpider
from app.log import logger
from app.modules import _ModuleBase
from app.modules.indexer.parser import SiteParserBase
from app.modules.indexer.spider import TorrentSpider
from app.modules.indexer.spider.haidan import HaiDanSpider
from app.modules.indexer.spider.mtorrent import MTorrentSpider
from app.modules.indexer.spider.tnode import TNodeSpider
@@ -76,7 +73,7 @@ class IndexerModule(_ModuleBase):
def init_setting(self) -> Tuple[str, Union[str, bool]]:
pass
def search_torrents(self, site: CommentedMap,
def search_torrents(self, site: dict,
keywords: List[str] = None,
mtype: MediaType = None,
page: int = 0) -> List[TorrentInfo]:
@@ -204,7 +201,7 @@ class IndexerModule(_ModuleBase):
return __remove_duplicate(torrents)
@staticmethod
def __spider_search(indexer: CommentedMap,
def __spider_search(indexer: dict,
search_word: str = None,
mtype: MediaType = None,
page: int = 0) -> Tuple[bool, List[dict]]:
@@ -217,14 +214,14 @@ class IndexerModule(_ModuleBase):
:param: timeout: 超时时间
:return: 是否发生错误, 种子列表
"""
_spider = TorrentSpider(indexer=indexer,
mtype=mtype,
keyword=search_word,
page=page)
_spider = SiteSpider(indexer=indexer,
mtype=mtype,
keyword=search_word,
page=page)
return _spider.is_error, _spider.get_torrents()
def refresh_torrents(self, site: CommentedMap) -> Optional[List[TorrentInfo]]:
def refresh_torrents(self, site: dict) -> Optional[List[TorrentInfo]]:
"""
获取站点最新一页的种子,多个站点需要多线程处理
:param site: 站点
@@ -232,7 +229,7 @@ class IndexerModule(_ModuleBase):
"""
return self.search_torrents(site=site)
def refresh_userdata(self, site: CommentedMap) -> Optional[SiteUserData]:
def refresh_userdata(self, site: dict) -> Optional[SiteUserData]:
"""
刷新站点的用户数据
:param site: 站点

View File

@@ -1,17 +1,34 @@
# -*- coding: utf-8 -*-
from urllib.parse import urljoin
from lxml import etree
from app.modules.indexer.parser import SiteSchema
from app.modules.indexer.parser.nexus_php import NexusPhpSiteUserInfo
from app.utils.string import StringUtils
class NexusAudiencesSiteUserInfo(NexusPhpSiteUserInfo):
schema = SiteSchema.NexusAudiences
def _parse_site_page(self, html_text: str):
super()._parse_site_page(html_text)
self._torrent_seeding_page = f"usertorrentlist.php?userid={self.userid}&type=seeding"
def _parse_seeding_pages(self):
if not self._torrent_seeding_page:
return
self._torrent_seeding_headers = {"Referer": urljoin(self._base_url, self._user_detail_page)}
super()._parse_seeding_pages()
html_text = self._get_page_content(
url=urljoin(self._base_url, self._torrent_seeding_page),
params=self._torrent_seeding_params,
headers=self._torrent_seeding_headers
)
if not html_text:
return
html = etree.HTML(html_text)
if not StringUtils.is_valid_html_element(html):
return
total_row = html.xpath('//table[@class="table table-bordered"]//tr[td[1][normalize-space()="Total"]]')
if not total_row:
return
seeding_count = total_row[0].xpath('./td[2]/text()')
seeding_size = total_row[0].xpath('./td[3]/text()')
self.seeding = StringUtils.str_int(seeding_count[0]) if seeding_count else 0
self.seeding_size = StringUtils.num_filesize(seeding_size[0].strip()) if seeding_size else 0
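The rewritten seeding-page parser keys off a `Total` summary row rather than paginating. The XPath from the hunk, exercised on a minimal table (requires `lxml`, which the module already imports; the table contents are made up):

```python
from lxml import etree

HTML = """
<table class="table table-bordered">
  <tr><td>Category</td><td>Torrents</td><td>Size</td></tr>
  <tr><td>Movies</td><td>10</td><td>500 GB</td></tr>
  <tr><td> Total </td><td>42</td><td>1.5 TB</td></tr>
</table>
"""

html = etree.HTML(HTML)
# normalize-space() trims the cell, so " Total " still matches.
total_row = html.xpath('//table[@class="table table-bordered"]'
                       '//tr[td[1][normalize-space()="Total"]]')
assert len(total_row) == 1
assert total_row[0].xpath('./td[2]/text()')[0] == "42"            # seeding count
assert total_row[0].xpath('./td[3]/text()')[0].strip() == "1.5 TB"  # seeding size
```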

View File

@@ -208,9 +208,16 @@ class NexusPhpSiteUserInfo(SiteParserBase):
# 是否存在下页数据
next_page = None
next_page_text = html.xpath('//a[contains(.//text(), "下一页") or contains(.//text(), "下一頁") or contains(.//text(), ">")]/@href')
if next_page_text:
next_page = next_page_text[-1].strip()
# fix up page url
# 防止识别到详情页
while next_page_text:
next_page = next_page_text.pop().strip()
if not next_page.startswith('details.php'):
break
next_page = None
# fix up page url
if next_page:
if self.userid not in next_page:
next_page = f'{next_page}&userid={self.userid}&type=seeding'

View File

@@ -1,742 +0,0 @@
import copy
import datetime
import re
import traceback
from typing import List
from urllib.parse import quote, urlencode, urlparse, parse_qs
from jinja2 import Template
from pyquery import PyQuery
from ruamel.yaml import CommentedMap
from app.core.config import settings
from app.helper.browser import PlaywrightHelper
from app.log import logger
from app.schemas.types import MediaType
from app.utils.http import RequestUtils
from app.utils.string import StringUtils
class TorrentSpider:
# 是否出现错误
is_error: bool = False
# 索引器ID
indexerid: int = None
# 索引器名称
indexername: str = None
# 站点域名
domain: str = None
# 站点Cookie
cookie: str = None
# 站点UA
ua: str = None
# Requests 代理
proxies: dict = None
# playwright 代理
proxy_server: dict = None
# 是否渲染
render: bool = False
# Referer
referer: str = None
# 搜索关键字
keyword: str = None
# 媒体类型
mtype: MediaType = None
# 搜索路径、方式配置
search: dict = {}
# 批量搜索配置
batch: dict = {}
# 浏览配置
browse: dict = {}
# 站点分类配置
category: dict = {}
# 站点种子列表配置
list: dict = {}
# 站点种子字段配置
fields: dict = {}
# 页码
page: int = 0
# 搜索条数, 默认: 100条
result_num: int = 100
# 单个种子信息
torrents_info: dict = {}
# 种子列表
torrents_info_array: list = []
# 搜索超时, 默认: 15秒
_timeout = 15
def __init__(self,
indexer: CommentedMap,
keyword: [str, list] = None,
page: int = 0,
referer: str = None,
mtype: MediaType = None):
"""
设置查询参数
:param indexer: 索引器
:param keyword: 搜索关键字,如果数组则为批量搜索
:param page: 页码
:param referer: Referer
:param mtype: 媒体类型
"""
if not indexer:
return
self.keyword = keyword
self.mtype = mtype
self.indexerid = indexer.get('id')
self.indexername = indexer.get('name')
self.search = indexer.get('search')
self.batch = indexer.get('batch')
self.browse = indexer.get('browse')
self.category = indexer.get('category')
self.list = indexer.get('torrents').get('list', {})
self.fields = indexer.get('torrents').get('fields')
self.render = indexer.get('render')
self.domain = indexer.get('domain')
self.result_num = int(indexer.get('result_num') or 100)
self._timeout = int(indexer.get('timeout') or 15)
self.page = page
if self.domain and not str(self.domain).endswith("/"):
self.domain = self.domain + "/"
if indexer.get('ua'):
self.ua = indexer.get('ua') or settings.USER_AGENT
else:
self.ua = settings.USER_AGENT
if indexer.get('proxy'):
self.proxies = settings.PROXY
self.proxy_server = settings.PROXY_SERVER
if indexer.get('cookie'):
self.cookie = indexer.get('cookie')
if referer:
self.referer = referer
self.torrents_info_array = []
def get_torrents(self) -> List[dict]:
"""
开始请求
"""
if not self.search or not self.domain:
return []
# 种子搜索相对路径
paths = self.search.get('paths', [])
torrentspath = ""
if len(paths) == 1:
torrentspath = paths[0].get('path', '')
else:
for path in paths:
if path.get("type") == "all" and not self.mtype:
torrentspath = path.get('path')
break
elif path.get("type") == "movie" and self.mtype == MediaType.MOVIE:
torrentspath = path.get('path')
break
elif path.get("type") == "tv" and self.mtype == MediaType.TV:
torrentspath = path.get('path')
break
# 精确搜索
if self.keyword:
if isinstance(self.keyword, list):
# 批量查询
if self.batch:
delimiter = self.batch.get('delimiter') or ' '
space_replace = self.batch.get('space_replace') or ' '
search_word = delimiter.join([str(k).replace(' ',
space_replace) for k in self.keyword])
else:
search_word = " ".join(self.keyword)
# 查询模式:或
search_mode = "1"
else:
# 单个查询
search_word = self.keyword
# 查询模式与
search_mode = "0"
# 搜索URL
indexer_params = self.search.get("params", {}).copy()
if indexer_params:
search_area = indexer_params.get('search_area')
# search_area非0表示支持imdbid搜索
if (search_area and
(not self.keyword or not self.keyword.startswith('tt'))):
# 支持imdbid搜索但关键字不是imdbid时不启用imdbid搜索
indexer_params.pop('search_area')
# 变量字典
inputs_dict = {
"keyword": search_word
}
# 查询参数,默认查询标题
params = {
"search_mode": search_mode,
"search_area": 0,
"page": self.page or 0,
"notnewword": 1
}
# 额外参数
for key, value in indexer_params.items():
params.update({
"%s" % key: str(value).format(**inputs_dict)
})
# 分类条件
if self.category:
if self.mtype == MediaType.TV:
cats = self.category.get("tv") or []
elif self.mtype == MediaType.MOVIE:
cats = self.category.get("movie") or []
else:
cats = (self.category.get("movie") or []) + (self.category.get("tv") or [])
for cat in cats:
if self.category.get("field"):
value = params.get(self.category.get("field"), "")
params.update({
"%s" % self.category.get("field"): value + self.category.get("delimiter",
' ') + cat.get("id")
})
else:
params.update({
"cat%s" % cat.get("id"): 1
})
searchurl = self.domain + torrentspath + "?" + urlencode(params)
else:
# 变量字典
inputs_dict = {
"keyword": quote(search_word),
"page": self.page or 0
}
# 无额外参数
searchurl = self.domain + str(torrentspath).format(**inputs_dict)
# 列表浏览
else:
# 变量字典
inputs_dict = {
"page": self.page or 0,
"keyword": ""
}
# 有单独浏览路径
if self.browse:
torrentspath = self.browse.get("path")
if self.browse.get("start"):
start_page = int(self.browse.get("start")) + int(self.page or 0)
inputs_dict.update({
"page": start_page
})
elif self.page:
torrentspath = torrentspath + f"?page={self.page}"
# 搜索Url
searchurl = self.domain + str(torrentspath).format(**inputs_dict)
logger.info(f"开始请求:{searchurl}")
if self.render:
# 浏览器仿真
page_source = PlaywrightHelper().get_page_source(
url=searchurl,
cookies=self.cookie,
ua=self.ua,
proxies=self.proxy_server,
timeout=self._timeout
)
else:
# requests请求
ret = RequestUtils(
ua=self.ua,
cookies=self.cookie,
timeout=self._timeout,
referer=self.referer,
proxies=self.proxies
).get_res(searchurl, allow_redirects=True)
page_source = RequestUtils.get_decoded_html_content(ret,
settings.ENCODING_DETECTION_PERFORMANCE_MODE,
settings.ENCODING_DETECTION_MIN_CONFIDENCE)
# 解析
return self.parse(page_source)
def __get_title(self, torrent):
# title default text
if 'title' not in self.fields:
return
selector = self.fields.get('title', {})
if 'selector' in selector:
title = torrent(selector.get('selector', '')).clone()
self.__remove(title, selector)
items = self.__attribute_or_text(title, selector)
self.torrents_info['title'] = self.__index(items, selector)
elif 'text' in selector:
render_dict = {}
if "title_default" in self.fields:
title_default_selector = self.fields.get('title_default', {})
title_default_item = torrent(title_default_selector.get('selector', '')).clone()
self.__remove(title_default_item, title_default_selector)
items = self.__attribute_or_text(title_default_item, selector)
title_default = self.__index(items, title_default_selector)
render_dict.update({'title_default': title_default})
if "title_optional" in self.fields:
title_optional_selector = self.fields.get('title_optional', {})
title_optional_item = torrent(title_optional_selector.get('selector', '')).clone()
self.__remove(title_optional_item, title_optional_selector)
items = self.__attribute_or_text(title_optional_item, title_optional_selector)
title_optional = self.__index(items, title_optional_selector)
render_dict.update({'title_optional': title_optional})
self.torrents_info['title'] = Template(selector.get('text')).render(fields=render_dict)
self.torrents_info['title'] = self.__filter_text(self.torrents_info.get('title'),
selector.get('filters'))
def __get_description(self, torrent):
# description text
if 'description' not in self.fields:
return
selector = self.fields.get('description', {})
if "selector" in selector \
or "selectors" in selector:
description = torrent(selector.get('selector', selector.get('selectors', ''))).clone()
if description:
self.__remove(description, selector)
items = self.__attribute_or_text(description, selector)
self.torrents_info['description'] = self.__index(items, selector)
elif "text" in selector:
render_dict = {}
if "tags" in self.fields:
tags_selector = self.fields.get('tags', {})
tags_item = torrent(tags_selector.get('selector', '')).clone()
self.__remove(tags_item, tags_selector)
items = self.__attribute_or_text(tags_item, tags_selector)
tag = self.__index(items, tags_selector)
render_dict.update({'tags': tag})
if "subject" in self.fields:
subject_selector = self.fields.get('subject', {})
subject_item = torrent(subject_selector.get('selector', '')).clone()
self.__remove(subject_item, subject_selector)
items = self.__attribute_or_text(subject_item, subject_selector)
subject = self.__index(items, subject_selector)
render_dict.update({'subject': subject})
if "description_free_forever" in self.fields:
description_free_forever_selector = self.fields.get("description_free_forever", {})
description_free_forever_item = torrent(description_free_forever_selector.get("selector", '')).clone()
self.__remove(description_free_forever_item, description_free_forever_selector)
items = self.__attribute_or_text(description_free_forever_item, description_free_forever_selector)
description_free_forever = self.__index(items, description_free_forever_selector)
render_dict.update({"description_free_forever": description_free_forever})
if "description_normal" in self.fields:
description_normal_selector = self.fields.get("description_normal", {})
description_normal_item = torrent(description_normal_selector.get("selector", '')).clone()
self.__remove(description_normal_item, description_normal_selector)
items = self.__attribute_or_text(description_normal_item, description_normal_selector)
description_normal = self.__index(items, description_normal_selector)
render_dict.update({"description_normal": description_normal})
self.torrents_info['description'] = Template(selector.get('text')).render(fields=render_dict)
self.torrents_info['description'] = self.__filter_text(self.torrents_info.get('description'),
selector.get('filters'))
def __get_detail(self, torrent):
# details page text
if 'details' not in self.fields:
return
selector = self.fields.get('details', {})
details = torrent(selector.get('selector', '')).clone()
self.__remove(details, selector)
items = self.__attribute_or_text(details, selector)
item = self.__index(items, selector)
detail_link = self.__filter_text(item, selector.get('filters'))
if detail_link:
if not detail_link.startswith("http"):
if detail_link.startswith("//"):
self.torrents_info['page_url'] = self.domain.split(":")[0] + ":" + detail_link
elif detail_link.startswith("/"):
self.torrents_info['page_url'] = self.domain + detail_link[1:]
else:
self.torrents_info['page_url'] = self.domain + detail_link
else:
self.torrents_info['page_url'] = detail_link
def __get_download(self, torrent):
# download link text
if 'download' not in self.fields:
return
selector = self.fields.get('download', {})
download = torrent(selector.get('selector', '')).clone()
self.__remove(download, selector)
items = self.__attribute_or_text(download, selector)
item = self.__index(items, selector)
download_link = self.__filter_text(item, selector.get('filters'))
if download_link:
if not download_link.startswith("http") \
and not download_link.startswith("magnet"):
_scheme, _domain = StringUtils.get_url_netloc(self.domain)
if _domain in download_link:
if download_link.startswith("/"):
self.torrents_info['enclosure'] = f"{_scheme}:{download_link}"
else:
self.torrents_info['enclosure'] = f"{_scheme}://{download_link}"
else:
if download_link.startswith("/"):
self.torrents_info['enclosure'] = f"{self.domain}{download_link[1:]}"
else:
self.torrents_info['enclosure'] = f"{self.domain}{download_link}"
else:
self.torrents_info['enclosure'] = download_link
def __get_imdbid(self, torrent):
# imdbid
if "imdbid" not in self.fields:
return
selector = self.fields.get('imdbid', {})
imdbid = torrent(selector.get('selector', '')).clone()
self.__remove(imdbid, selector)
items = self.__attribute_or_text(imdbid, selector)
item = self.__index(items, selector)
self.torrents_info['imdbid'] = item
self.torrents_info['imdbid'] = self.__filter_text(self.torrents_info.get('imdbid'),
selector.get('filters'))
def __get_size(self, torrent):
# torrent size int
if 'size' not in self.fields:
return
selector = self.fields.get('size', {})
size = torrent(selector.get('selector', selector.get("selectors", ''))).clone()
self.__remove(size, selector)
items = self.__attribute_or_text(size, selector)
item = self.__index(items, selector)
if item:
size_val = item.replace("\n", "").strip()
size_val = self.__filter_text(size_val,
selector.get('filters'))
self.torrents_info['size'] = StringUtils.num_filesize(size_val)
else:
self.torrents_info['size'] = 0
def __get_leechers(self, torrent):
# torrent leechers int
if 'leechers' not in self.fields:
return
selector = self.fields.get('leechers', {})
leechers = torrent(selector.get('selector', '')).clone()
self.__remove(leechers, selector)
items = self.__attribute_or_text(leechers, selector)
item = self.__index(items, selector)
if item:
peers_val = item.split("/")[0]
peers_val = peers_val.replace(",", "")
peers_val = self.__filter_text(peers_val,
selector.get('filters'))
self.torrents_info['peers'] = int(peers_val) if peers_val and peers_val.isdigit() else 0
else:
self.torrents_info['peers'] = 0
def __get_seeders(self, torrent):
# torrent seeders int
if 'seeders' not in self.fields:
return
selector = self.fields.get('seeders', {})
seeders = torrent(selector.get('selector', '')).clone()
self.__remove(seeders, selector)
items = self.__attribute_or_text(seeders, selector)
item = self.__index(items, selector)
if item:
seeders_val = item.split("/")[0]
seeders_val = seeders_val.replace(",", "")
seeders_val = self.__filter_text(seeders_val,
selector.get('filters'))
self.torrents_info['seeders'] = int(seeders_val) if seeders_val and seeders_val.isdigit() else 0
else:
self.torrents_info['seeders'] = 0
def __get_grabs(self, torrent):
# torrent grabs int
if 'grabs' not in self.fields:
return
selector = self.fields.get('grabs', {})
grabs = torrent(selector.get('selector', '')).clone()
self.__remove(grabs, selector)
items = self.__attribute_or_text(grabs, selector)
item = self.__index(items, selector)
if item:
grabs_val = item.split("/")[0]
grabs_val = grabs_val.replace(",", "")
grabs_val = self.__filter_text(grabs_val,
selector.get('filters'))
self.torrents_info['grabs'] = int(grabs_val) if grabs_val and grabs_val.isdigit() else 0
else:
self.torrents_info['grabs'] = 0
def __get_pubdate(self, torrent):
# torrent pubdate yyyy-mm-dd hh:mm:ss
if 'date_added' not in self.fields:
return
selector = self.fields.get('date_added', {})
pubdate = torrent(selector.get('selector', '')).clone()
self.__remove(pubdate, selector)
items = self.__attribute_or_text(pubdate, selector)
pubdate_str = self.__index(items, selector)
if pubdate_str:
pubdate_str = pubdate_str.replace('\n', ' ').strip()
self.torrents_info['pubdate'] = self.__filter_text(pubdate_str,
selector.get('filters'))
def __get_date_elapsed(self, torrent):
# torrent date elapsed text
if 'date_elapsed' not in self.fields:
return
selector = self.fields.get('date_elapsed', {})
date_elapsed = torrent(selector.get('selector', '')).clone()
self.__remove(date_elapsed, selector)
items = self.__attribute_or_text(date_elapsed, selector)
self.torrents_info['date_elapsed'] = self.__index(items, selector)
self.torrents_info['date_elapsed'] = self.__filter_text(self.torrents_info.get('date_elapsed'),
selector.get('filters'))
def __get_downloadvolumefactor(self, torrent):
# downloadvolumefactor int
selector = self.fields.get('downloadvolumefactor', {})
if not selector:
return
self.torrents_info['downloadvolumefactor'] = 1
if 'case' in selector:
for downloadvolumefactorselector in list(selector.get('case', {}).keys()):
downloadvolumefactor = torrent(downloadvolumefactorselector)
if len(downloadvolumefactor) > 0:
self.torrents_info['downloadvolumefactor'] = selector.get('case', {}).get(
downloadvolumefactorselector)
break
elif "selector" in selector:
downloadvolume = torrent(selector.get('selector', '')).clone()
self.__remove(downloadvolume, selector)
items = self.__attribute_or_text(downloadvolume, selector)
item = self.__index(items, selector)
if item:
downloadvolumefactor = re.search(r'(\d+\.?\d*)', item)
if downloadvolumefactor:
# 注意:系数可能是小数(如0.5),用float避免int("0.5")抛出ValueError
self.torrents_info['downloadvolumefactor'] = float(downloadvolumefactor.group(1))
def __get_uploadvolumefactor(self, torrent):
# uploadvolumefactor int
selector = self.fields.get('uploadvolumefactor', {})
if not selector:
return
self.torrents_info['uploadvolumefactor'] = 1
if 'case' in selector:
for uploadvolumefactorselector in list(selector.get('case', {}).keys()):
uploadvolumefactor = torrent(uploadvolumefactorselector)
if len(uploadvolumefactor) > 0:
self.torrents_info['uploadvolumefactor'] = selector.get('case', {}).get(
uploadvolumefactorselector)
break
elif "selector" in selector:
uploadvolume = torrent(selector.get('selector', '')).clone()
self.__remove(uploadvolume, selector)
items = self.__attribute_or_text(uploadvolume, selector)
item = self.__index(items, selector)
if item:
uploadvolumefactor = re.search(r'(\d+\.?\d*)', item)
if uploadvolumefactor:
# 注意:系数可能是小数,用float避免int("0.5")抛出ValueError
self.torrents_info['uploadvolumefactor'] = float(uploadvolumefactor.group(1))
def __get_labels(self, torrent):
# labels ['label1', 'label2']
if 'labels' not in self.fields:
return
selector = self.fields.get('labels', {})
labels = torrent(selector.get("selector", "")).clone()
self.__remove(labels, selector)
items = self.__attribute_or_text(labels, selector)
if items:
self.torrents_info['labels'] = [item for item in items if item]
else:
self.torrents_info['labels'] = []
def __get_free_date(self, torrent):
# free date yyyy-mm-dd hh:mm:ss
if 'freedate' not in self.fields:
return
selector = self.fields.get('freedate', {})
freedate = torrent(selector.get('selector', '')).clone()
self.__remove(freedate, selector)
items = self.__attribute_or_text(freedate, selector)
self.torrents_info['freedate'] = self.__index(items, selector)
self.torrents_info['freedate'] = self.__filter_text(self.torrents_info.get('freedate'),
selector.get('filters'))
def __get_hit_and_run(self, torrent):
# hitandrun True/False
if 'hr' not in self.fields:
return
selector = self.fields.get('hr', {})
hit_and_run = torrent(selector.get('selector', ''))
if hit_and_run:
self.torrents_info['hit_and_run'] = True
else:
self.torrents_info['hit_and_run'] = False
def __get_category(self, torrent):
# category 电影/电视剧
if 'category' not in self.fields:
return
selector = self.fields.get('category', {})
category = torrent(selector.get('selector', '')).clone()
self.__remove(category, selector)
items = self.__attribute_or_text(category, selector)
category_value = self.__index(items, selector)
category_value = self.__filter_text(category_value,
selector.get('filters'))
if category_value and self.category:
tv_cats = [str(cat.get("id")) for cat in self.category.get("tv") or []]
movie_cats = [str(cat.get("id")) for cat in self.category.get("movie") or []]
if category_value in tv_cats \
and category_value not in movie_cats:
self.torrents_info['category'] = MediaType.TV.value
elif category_value in movie_cats:
self.torrents_info['category'] = MediaType.MOVIE.value
else:
self.torrents_info['category'] = MediaType.UNKNOWN.value
else:
self.torrents_info['category'] = MediaType.UNKNOWN.value
def get_info(self, torrent) -> dict:
"""
解析单条种子数据
"""
self.torrents_info = {}
try:
# 标题
self.__get_title(torrent)
# 描述
self.__get_description(torrent)
# 详情页面
self.__get_detail(torrent)
# 下载链接
self.__get_download(torrent)
# 完成数
self.__get_grabs(torrent)
# 下载数
self.__get_leechers(torrent)
# 做种数
self.__get_seeders(torrent)
# 大小
self.__get_size(torrent)
# IMDBID
self.__get_imdbid(torrent)
# 下载系数
self.__get_downloadvolumefactor(torrent)
# 上传系数
self.__get_uploadvolumefactor(torrent)
# 发布时间
self.__get_pubdate(torrent)
# 已发布时间
self.__get_date_elapsed(torrent)
# 免费截止时间
self.__get_free_date(torrent)
# 标签
self.__get_labels(torrent)
# HR
self.__get_hit_and_run(torrent)
# 分类
self.__get_category(torrent)
except Exception as err:
logger.error("%s 搜索出现错误:%s" % (self.indexername, str(err)))
return self.torrents_info
@staticmethod
def __filter_text(text: str, filters: list):
"""
对文本进行处理
"""
if not text or not filters or not isinstance(filters, list):
return text
if not isinstance(text, str):
text = str(text)
for filter_item in filters:
if not text:
break
method_name = filter_item.get("name")
try:
args = filter_item.get("args")
if method_name == "re_search" and isinstance(args, list):
rematch = re.search(r"%s" % args[0], text)
if rematch:
text = rematch.group(args[-1])
elif method_name == "split" and isinstance(args, list):
text = text.split(r"%s" % args[0])[args[-1]]
elif method_name == "replace" and isinstance(args, list):
text = text.replace(r"%s" % args[0], r"%s" % args[-1])
elif method_name == "dateparse" and isinstance(args, str):
text = text.replace("\n", " ").strip()
text = datetime.datetime.strptime(text, r"%s" % args)
elif method_name == "strip":
text = text.strip()
elif method_name == "appendleft":
text = f"{args}{text}"
elif method_name == "querystring":
parsed_url = urlparse(str(text))
query_params = parse_qs(parsed_url.query)
param_value = query_params.get(args)
text = param_value[0] if param_value else ''
except Exception as err:
logger.debug(f'过滤器 {method_name} 处理失败:{str(err)} - {traceback.format_exc()}')
# dateparse过滤器会把text转换为datetime,此时不能再strip
return text.strip() if isinstance(text, str) else text
@staticmethod
def __remove(item, selector):
"""
移除元素
"""
if selector and "remove" in selector:
removelist = selector.get('remove', '').split(', ')
for v in removelist:
item.remove(v)
@staticmethod
def __attribute_or_text(item, selector: dict):
if not selector:
return item
if not item:
return []
if 'attribute' in selector:
items = [i.attr(selector.get('attribute')) for i in item.items() if i]
else:
items = [i.text() for i in item.items() if i]
return items
@staticmethod
def __index(items: list, selector: dict):
if not items:
return None
if selector:
if "contents" in selector \
and len(items) > int(selector.get("contents")):
items = items[0].split("\n")[selector.get("contents")]
elif "index" in selector \
and len(items) > int(selector.get("index")):
items = items[int(selector.get("index"))]
if isinstance(items, list):
items = items[0]
return items
def parse(self, html_text: str) -> List[dict]:
"""
解析整个页面
"""
if not html_text:
self.is_error = True
return []
# 清空旧结果
self.torrents_info_array = []
try:
# 解析站点文本对象
html_doc = PyQuery(html_text)
# 种子筛选器
torrents_selector = self.list.get('selector', '')
# 遍历种子html列表
for torn in html_doc(torrents_selector):
self.torrents_info_array.append(copy.deepcopy(self.get_info(PyQuery(torn))))
if len(self.torrents_info_array) >= int(self.result_num):
break
return self.torrents_info_array
except Exception as err:
self.is_error = True
logger.warn(f"错误:{self.indexername} {str(err)}")
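For reference, the filter chain implemented by `__filter_text` above can be exercised in isolation. The sketch below is a condensed standalone version (only a few of the filter names are reproduced, and the function name `apply_filters` is illustrative, not part of the module):

```python
import re


def apply_filters(text: str, filters: list) -> str:
    """Apply a chain of named filter rules, mirroring the spider's __filter_text."""
    for f in filters or []:
        name, args = f.get("name"), f.get("args")
        if name == "re_search" and isinstance(args, list):
            # args = [pattern, group_index]
            m = re.search(args[0], text)
            if m:
                text = m.group(args[-1])
        elif name == "split" and isinstance(args, list):
            # args = [separator, index]
            text = text.split(args[0])[args[-1]]
        elif name == "replace" and isinstance(args, list):
            text = text.replace(args[0], args[-1])
        elif name == "strip":
            text = text.strip()
    return text


# e.g. pull a numeric id out of an href-like string
print(apply_filters("details.php?id=12345&hit=1",
                    [{"name": "re_search", "args": [r"id=(\d+)", 1]}]))  # -> 12345
```

Failed filters in the real method are swallowed and logged at debug level, so a site definition with a bad rule degrades to the unfiltered text rather than aborting the parse.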


@@ -1,8 +1,6 @@
import urllib.parse
from typing import Tuple, List
from ruamel.yaml import CommentedMap
from app.core.config import settings
from app.db.systemconfig_oper import SystemConfigOper
from app.log import logger
@@ -51,7 +49,7 @@ class HaiDanSpider:
"7": 1
}
def __init__(self, indexer: CommentedMap):
def __init__(self, indexer: dict):
self.systemconfig = SystemConfigOper()
if indexer:
self._indexerid = indexer.get('id')


@@ -3,8 +3,6 @@ import json
import re
from typing import Tuple, List
from ruamel.yaml import CommentedMap
from app.core.config import settings
from app.db.systemconfig_oper import SystemConfigOper
from app.log import logger
@@ -51,7 +49,7 @@ class MTorrentSpider:
"7": "DIY 国配 中字"
}
def __init__(self, indexer: CommentedMap):
def __init__(self, indexer: dict):
self.systemconfig = SystemConfigOper()
if indexer:
self._indexerid = indexer.get('id')


@@ -1,8 +1,6 @@
import re
from typing import Tuple, List
from ruamel.yaml import CommentedMap
from app.core.config import settings
from app.log import logger
from app.utils.http import RequestUtils
@@ -23,7 +21,7 @@ class TNodeSpider:
_downloadurl = "%sapi/torrent/download/%s"
_pageurl = "%storrent/info/%s"
def __init__(self, indexer: CommentedMap):
def __init__(self, indexer: dict):
if indexer:
self._indexerid = indexer.get('id')
self._domain = indexer.get('domain')


@@ -1,8 +1,6 @@
from typing import List, Tuple
from urllib.parse import quote
from ruamel.yaml import CommentedMap
from app.core.config import settings
from app.log import logger
from app.utils.http import RequestUtils
@@ -19,7 +17,7 @@ class TorrentLeech:
_pageurl = "%storrent/%s"
_timeout = 15
def __init__(self, indexer: CommentedMap):
def __init__(self, indexer: dict):
self._indexer = indexer
if indexer.get('proxy'):
self._proxy = settings.PROXY


@@ -1,7 +1,5 @@
from typing import Tuple, List
from ruamel.yaml import CommentedMap
from app.core.config import settings
from app.db.systemconfig_oper import SystemConfigOper
from app.log import logger
@@ -46,7 +44,7 @@ class YemaSpider:
"12": "完结",
}
def __init__(self, indexer: CommentedMap):
def __init__(self, indexer: dict):
self.systemconfig = SystemConfigOper()
if indexer:
self._indexerid = indexer.get('id')


@@ -149,16 +149,17 @@ class Jellyfin:
match library.get("CollectionType"):
case "movies":
library_type = MediaType.MOVIE.value
link = f"{self._playhost or self._host}web/index.html#!" \
f"/movies.html?topParentId={library.get('Id')}"
case "tvshows":
library_type = MediaType.TV.value
link = f"{self._playhost or self._host}web/index.html#!" \
f"/tv.html?topParentId={library.get('Id')}"
case _:
continue
library_type = MediaType.UNKNOWN.value
link = f"{self._playhost or self._host}web/index.html#!" \
f"/library.html?topParentId={library.get('Id')}"
image = self.__get_local_image_by_id(library.get("Id"))
link = f"{self._playhost or self._host}web/index.html#!" \
f"/movies.html?topParentId={library.get('Id')}" \
if library_type == MediaType.MOVIE.value \
else f"{self._playhost or self._host}web/index.html#!" \
f"/tv.html?topParentId={library.get('Id')}"
libraries.append(
schemas.MediaServerLibrary(
server="jellyfin",
@@ -668,6 +669,12 @@ class Jellyfin:
"S" + str(eventItem.season_id),
"E" + str(eventItem.episode_id),
message.get('Name'))
elif message.get("ItemType") == 'Audio':
# 音乐
eventItem.item_type = "AUD"
eventItem.item_name = message.get('Album')
eventItem.overview = message.get('Name')
eventItem.item_id = message.get('ItemId')
else:
# 电影
eventItem.item_type = "MOV"


@@ -3,13 +3,13 @@ from pathlib import Path
from typing import List, Optional, Dict, Tuple, Generator, Any, Union
from urllib.parse import quote_plus
from cachetools import TTLCache, cached
from plexapi import media
from plexapi.myplex import MyPlexAccount
from plexapi.server import PlexServer
from requests import Response, Session
from app import schemas
from app.core.cache import cached
from app.log import logger
from app.schemas import MediaType
from app.utils.http import RequestUtils
@@ -83,7 +83,7 @@ class Plex:
logger.error(f"Authentication failed: {e}")
return None
@cached(cache=TTLCache(maxsize=100, ttl=86400))
@cached(maxsize=100, ttl=86400)
def __get_library_images(self, library_key: str, mtype: int) -> Optional[List[str]]:
"""
获取媒体服务器最近添加的媒体的图片列表
@@ -293,7 +293,7 @@ class Plex:
season_episodes[episode.seasonNumber].append(episode.index)
return videos.key, season_episodes
def get_remote_image_by_id(self,
def get_remote_image_by_id(self,
item_id: str,
image_type: str,
depth: int = 0,


@@ -340,7 +340,7 @@ class Slack:
return ""
conversation_id = ""
try:
for result in self._client.conversations_list():
for result in self._client.conversations_list(types="public_channel,private_channel"):
if conversation_id:
break
for channel in result["channels"]:


@@ -356,26 +356,52 @@ class TheMovieDbModule(_ModuleBase):
return None
return self.scraper.get_metadata_img(mediainfo=mediainfo, season=season, episode=episode)
def tmdb_discover(self, mtype: MediaType, sort_by: str, with_genres: str, with_original_language: str,
def tmdb_discover(self, mtype: MediaType, sort_by: str,
with_genres: str,
with_original_language: str,
with_keywords: str,
with_watch_providers: str,
vote_average: float,
vote_count: int,
release_date: str,
page: int = 1) -> Optional[List[MediaInfo]]:
"""
:param mtype: 媒体类型
:param sort_by: 排序方式
:param with_genres: 类型
:param with_original_language: 语言
:param with_keywords: 关键字
:param with_watch_providers: 提供商
:param vote_average: 评分
:param vote_count: 评分人数
:param release_date: 发布日期
:param page: 页码
:return: 媒体信息列表
"""
if mtype == MediaType.MOVIE:
infos = self.tmdb.discover_movies(sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
page=page)
infos = self.tmdb.discover_movies({
"sort_by": sort_by,
"with_genres": with_genres,
"with_original_language": with_original_language,
"with_keywords": with_keywords,
"with_watch_providers": with_watch_providers,
"vote_average.gte": vote_average,
"vote_count.gte": vote_count,
"release_date.gte": release_date,
"page": page
})
elif mtype == MediaType.TV:
infos = self.tmdb.discover_tvs(sort_by=sort_by,
with_genres=with_genres,
with_original_language=with_original_language,
page=page)
infos = self.tmdb.discover_tvs({
"sort_by": sort_by,
"with_genres": with_genres,
"with_original_language": with_original_language,
"with_keywords": with_keywords,
"with_watch_providers": with_watch_providers,
"vote_average.gte": vote_average,
"vote_count.gte": vote_count,
"first_air_date.gte": release_date,
"page": page
})
else:
return []
if infos:


@@ -170,6 +170,9 @@ class TmdbScraper:
DomUtils.add_node(doc, root, "genre", genre.get("name") or "")
# 评分
DomUtils.add_node(doc, root, "rating", mediainfo.vote_average or "0")
# 内容分级
if content_rating := mediainfo.content_rating:
DomUtils.add_node(doc, root, "mpaa", content_rating)
return doc


@@ -3,13 +3,15 @@ from typing import Optional, List
from urllib.parse import quote
import zhconv
from cachetools import TTLCache, cached
from lxml import etree
from app.core.cache import cached
from app.core.config import settings
from app.log import logger
from app.schemas import APIRateLimitException
from app.schemas.types import MediaType
from app.utils.http import RequestUtils
from app.utils.limit import rate_limit_exponential
from app.utils.string import StringUtils
from .tmdbv3api import TMDb, Search, Movie, TV, Season, Episode, Discover, Trending, Person, Collection
from .tmdbv3api.exceptions import TMDbException
@@ -491,7 +493,8 @@ class TmdbApi:
return ret_info
@cached(cache=TTLCache(maxsize=settings.CACHE_CONF["tmdb"], ttl=settings.CACHE_CONF["meta"]))
@cached(maxsize=settings.CACHE_CONF["tmdb"], ttl=settings.CACHE_CONF["meta"])
@rate_limit_exponential(source="match_tmdb_web", base_wait=5, max_wait=1800, enable_logging=True)
def match_web(self, name: str, mtype: MediaType) -> Optional[dict]:
"""
搜索TMDB网站直接抓取结果,结果只有一条时才返回
@@ -504,51 +507,56 @@ class TmdbApi:
return {}
logger.info("正在从TheMovieDb网站查询%s ..." % name)
tmdb_url = "https://www.themoviedb.org/search?query=%s" % quote(name)
res = RequestUtils(timeout=5, ua=settings.USER_AGENT).get_res(url=tmdb_url)
if res and res.status_code == 200:
html_text = res.text
if not html_text:
return None
try:
tmdb_links = []
html = etree.HTML(html_text)
if mtype == MediaType.TV:
links = html.xpath("//a[@data-id and @data-media-type='tv']/@href")
else:
links = html.xpath("//a[@data-id]/@href")
for link in links:
if not link or (not link.startswith("/tv") and not link.startswith("/movie")):
continue
if link not in tmdb_links:
tmdb_links.append(link)
if len(tmdb_links) == 1:
tmdbinfo = self.get_info(
mtype=MediaType.TV if tmdb_links[0].startswith("/tv") else MediaType.MOVIE,
tmdbid=tmdb_links[0].split("/")[-1])
if tmdbinfo:
if mtype == MediaType.TV and tmdbinfo.get('media_type') != MediaType.TV:
return {}
if tmdbinfo.get('media_type') == MediaType.MOVIE:
logger.info("%s 从WEB识别到 电影TMDBID=%s, 名称=%s, 上映日期=%s" % (
name,
tmdbinfo.get('id'),
tmdbinfo.get('title'),
tmdbinfo.get('release_date')))
else:
logger.info("%s 从WEB识别到 电视剧TMDBID=%s, 名称=%s, 首播日期=%s" % (
name,
tmdbinfo.get('id'),
tmdbinfo.get('name'),
tmdbinfo.get('first_air_date')))
return tmdbinfo
elif len(tmdb_links) > 1:
logger.info("%s TMDB网站返回数据过多%s" % (name, len(tmdb_links)))
else:
logger.info("%s TMDB网站未查询到媒体信息" % name)
except Exception as err:
logger.error(f"从TheMovieDb网站查询出错:{str(err)}")
return None
return None
res = RequestUtils(timeout=5, ua=settings.USER_AGENT, proxies=settings.PROXY).get_res(url=tmdb_url)
if res is None:
return None
if res.status_code == 429:
raise APIRateLimitException("触发TheMovieDb网站限流,获取媒体信息失败")
if res.status_code != 200:
return {}
html_text = res.text
if not html_text:
return {}
try:
tmdb_links = []
html = etree.HTML(html_text)
if mtype == MediaType.TV:
links = html.xpath("//a[@data-id and @data-media-type='tv']/@href")
else:
links = html.xpath("//a[@data-id]/@href")
for link in links:
if not link or (not link.startswith("/tv") and not link.startswith("/movie")):
continue
if link not in tmdb_links:
tmdb_links.append(link)
if len(tmdb_links) == 1:
tmdbinfo = self.get_info(
mtype=MediaType.TV if tmdb_links[0].startswith("/tv") else MediaType.MOVIE,
tmdbid=tmdb_links[0].split("/")[-1])
if tmdbinfo:
if mtype == MediaType.TV and tmdbinfo.get('media_type') != MediaType.TV:
return {}
if tmdbinfo.get('media_type') == MediaType.MOVIE:
logger.info("%s 从WEB识别到 电影TMDBID=%s, 名称=%s, 上映日期=%s" % (
name,
tmdbinfo.get('id'),
tmdbinfo.get('title'),
tmdbinfo.get('release_date')))
else:
logger.info("%s 从WEB识别到 电视剧TMDBID=%s, 名称=%s, 首播日期=%s" % (
name,
tmdbinfo.get('id'),
tmdbinfo.get('name'),
tmdbinfo.get('first_air_date')))
return tmdbinfo
elif len(tmdb_links) > 1:
logger.info("%s TMDB网站返回数据过多%s" % (name, len(tmdb_links)))
else:
logger.info("%s TMDB网站未查询到媒体信息" % name)
except Exception as err:
logger.error(f"从TheMovieDb网站查询出错:{str(err)}")
return {}
return {}
def get_info(self,
mtype: MediaType,
@@ -593,6 +601,8 @@ class TmdbApi:
tmdb_info['genre_ids'] = __get_genre_ids(tmdb_info.get('genres'))
# 别名和译名
tmdb_info['names'] = self.__get_names(tmdb_info)
# 内容分级
tmdb_info['content_rating'] = self.__get_content_rating(tmdb_info)
# 转换多语种标题
self.__update_tmdbinfo_extra_title(tmdb_info)
# 转换中文标题
@@ -600,6 +610,68 @@ class TmdbApi:
return tmdb_info
@staticmethod
def __get_content_rating(tmdb_info: dict) -> Optional[str]:
"""
获得tmdb中的内容评级
:param tmdb_info: TMDB信息
:return: 内容评级
"""
if not tmdb_info:
return None
# dict[地区:分级]
ratings = {}
if results := (tmdb_info.get("release_dates") or {}).get("results"):
"""
[
{
"iso_3166_1": "AR",
"release_dates": [
{
"certification": "+13",
"descriptors": [],
"iso_639_1": "",
"note": "",
"release_date": "2025-01-23T00:00:00.000Z",
"type": 3
}
]
}
]
"""
for item in results:
iso_3166_1 = item.get("iso_3166_1")
if not iso_3166_1:
continue
dates = item.get("release_dates")
if not dates:
continue
certification = dates[0].get("certification")
if not certification:
continue
ratings[iso_3166_1] = certification
elif results := (tmdb_info.get("content_ratings") or {}).get("results"):
"""
[
{
"descriptors": [],
"iso_3166_1": "US",
"rating": "TV-MA"
}
]
"""
for item in results:
iso_3166_1 = item.get("iso_3166_1")
if not iso_3166_1:
continue
rating = item.get("rating")
if not rating:
continue
ratings[iso_3166_1] = rating
if not ratings:
return None
return ratings.get("CN") or ratings.get("US")
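The selection order above (movie `release_dates` first, TV `content_ratings` as the other shape, then CN preferred over US) can be condensed into a standalone sketch. This is slightly simplified: it falls back whenever the first pass yields nothing, whereas the original branches on which payload key is present; the sample payloads are illustrative:

```python
def pick_rating(tmdb_info: dict):
    """Collect {region: certification} and prefer CN, then US."""
    ratings = {}
    # movie shape: release_dates.results[].release_dates[0].certification
    for item in (tmdb_info.get("release_dates") or {}).get("results") or []:
        dates = item.get("release_dates") or []
        if item.get("iso_3166_1") and dates and dates[0].get("certification"):
            ratings[item["iso_3166_1"]] = dates[0]["certification"]
    if not ratings:
        # TV shape: content_ratings.results[].rating
        for item in (tmdb_info.get("content_ratings") or {}).get("results") or []:
            if item.get("iso_3166_1") and item.get("rating"):
                ratings[item["iso_3166_1"]] = item["rating"]
    return ratings.get("CN") or ratings.get("US")


tv_info = {"content_ratings": {"results": [{"iso_3166_1": "US", "rating": "TV-MA"}]}}
print(pick_rating(tv_info))  # -> TV-MA
```

Regions other than CN/US are collected but discarded, so a title rated only in, say, DE yields `None`.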
@staticmethod
def __update_tmdbinfo_cn_title(tmdb_info: dict):
"""
@@ -678,20 +750,21 @@ class TmdbApi:
else:
en_title = __get_tmdb_lang_title(tmdb_info, "US")
tmdb_info['en_title'] = en_title or org_title
# 查找香港台湾译名
tmdb_info['hk_title'] = __get_tmdb_lang_title(tmdb_info, "HK")
tmdb_info['tw_title'] = __get_tmdb_lang_title(tmdb_info, "TW")
# 查找新加坡名(用于替代中文名)
tmdb_info['sg_title'] = __get_tmdb_lang_title(tmdb_info, "SG") or org_title
def __get_movie_detail(self,
tmdbid: int,
append_to_response: str = "images,"
"credits,"
"alternative_titles,"
"translations,"
"release_dates,"
"external_ids") -> Optional[dict]:
"""
获取电影的详情
@@ -804,6 +877,7 @@ class TmdbApi:
"credits,"
"alternative_titles,"
"translations,"
"content_ratings,"
"external_ids") -> Optional[dict]:
"""
获取电视剧的详情
@@ -1072,18 +1146,17 @@ class TmdbApi:
logger.error(str(e))
return {}
def discover_movies(self, **kwargs) -> List[dict]:
def discover_movies(self, params: dict) -> List[dict]:
"""
发现电影
:param kwargs:
:param params: 参数
:return:
"""
if not self.discover:
return []
try:
logger.debug(f"正在发现电影:{kwargs}...")
params_tuple = tuple(kwargs.items())
tmdbinfo = self.discover.discover_movies(params_tuple)
logger.debug(f"正在发现电影:{params}...")
tmdbinfo = self.discover.discover_movies(tuple(params.items()))
if tmdbinfo:
for info in tmdbinfo:
info['media_type'] = MediaType.MOVIE
@@ -1092,18 +1165,17 @@ class TmdbApi:
logger.error(str(e))
return []
def discover_tvs(self, **kwargs) -> List[dict]:
def discover_tvs(self, params: dict) -> List[dict]:
"""
Discover TV series
:param kwargs:
:param params: query parameters
:return:
"""
if not self.discover:
return []
try:
logger.debug(f"正在发现电视剧:{kwargs}...")
params_tuple = tuple(kwargs.items())
tmdbinfo = self.discover.discover_tv_shows(params_tuple)
logger.debug(f"正在发现电视剧:{params}...")
tmdbinfo = self.discover.discover_tv_shows(tuple(params.items()))
if tmdbinfo:
for info in tmdbinfo:
info['media_type'] = MediaType.TV
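The refactor above replaces `**kwargs` with an explicit `params` dict and feeds `tuple(params.items())` into the cached discover call. A likely reason is hashability: cache decorators key on their arguments, and dicts are unhashable. A minimal sketch using the standard library's `functools.lru_cache` (assumed to behave like the project's `@cached` decorator in this respect; `discover` is a stand-in function):

```python
from functools import lru_cache

@lru_cache(maxsize=1)
def discover(params_tuple):
    # Rebuild the dict inside the cached function, as Discover.discover_movies does
    params = dict(params_tuple)
    return {"query": params, "results": []}

params = {"with_genres": "35", "page": 1}
first = discover(tuple(params.items()))
second = discover(tuple(params.items()))  # identical tuple key -> served from cache
assert first is second
# Passing the dict itself would raise TypeError: unhashable type: 'dict'
```

Note that the tuple key is order-sensitive, so two dicts with the same entries in a different insertion order produce different cache keys.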


@@ -1,5 +1,5 @@
from app.core.cache import cached
from ..tmdb import TMDb
from cachetools import cached, TTLCache
try:
from urllib import urlencode
@@ -13,7 +13,7 @@ class Discover(TMDb):
"tv": "/discover/tv"
}
@cached(cache=TTLCache(maxsize=1, ttl=43200))
@cached(maxsize=1, ttl=43200)
def discover_movies(self, params_tuple):
"""
Discover movies by different types of data like average rating, number of votes, genres and certifications.
@@ -23,7 +23,7 @@ class Discover(TMDb):
params = dict(params_tuple)
return self._request_obj(self._urls["movies"], urlencode(params), key="results", call_cached=False)
@cached(cache=TTLCache(maxsize=1, ttl=43200))
@cached(maxsize=1, ttl=43200)
def discover_tv_shows(self, params_tuple):
"""
Discover TV shows by different types of data like average rating, number of votes, genres,


@@ -1,4 +1,4 @@
from cachetools import cached, TTLCache
from app.core.cache import cached
from ..tmdb import TMDb
@@ -6,7 +6,7 @@ from ..tmdb import TMDb
class Trending(TMDb):
_urls = {"trending": "/trending/%s/%s"}
@cached(cache=TTLCache(maxsize=1, ttl=43200))
@cached(maxsize=1024, ttl=43200)
def _trending(self, media_type="all", time_window="day", page=1):
"""
Get trending, TTLCache 12 hours


@@ -7,8 +7,8 @@ from datetime import datetime
import requests
import requests.exceptions
from cachetools import TTLCache, cached
from app.core.cache import cached
from app.core.config import settings
from app.utils.http import RequestUtils
from .exceptions import TMDbException
@@ -137,7 +137,7 @@ class TMDb(object):
def cache(self, cache):
os.environ[self.TMDB_CACHE_ENABLED] = str(cache)
@cached(cache=TTLCache(maxsize=settings.CACHE_CONF["tmdb"], ttl=settings.CACHE_CONF["meta"]))
@cached(maxsize=settings.CACHE_CONF["tmdb"], ttl=settings.CACHE_CONF["meta"])
def cached_request(self, method, url, data, json,
_ts=datetime.strftime(datetime.now(), '%Y%m%d')):
"""


@@ -68,7 +68,7 @@ class WebPushModule(_ModuleBase, _MessageBase):
webpush_users = conf.config.get("WEBPUSH_USERNAME") or ""
if webpush_users:
# When recipient users are configured, messages for other users are not delivered
if not message.userid or message.userid not in webpush_users.split(","):
if not message.username or message.username not in webpush_users.split(","):
continue
if not message.title and not message.text:
logger.warn("标题和内容不能同时为空")


@@ -225,7 +225,8 @@ class _PluginBase(metaclass=ABCMeta):
return self.plugindata.del_data(plugin_id, key)
def post_message(self, channel: MessageChannel = None, mtype: NotificationType = None, title: str = None,
text: str = None, image: str = None, link: str = None, userid: str = None, username: str = None):
text: str = None, image: str = None, link: str = None, userid: str = None, username: str = None,
**kwargs):
"""
Send a message
"""
@@ -233,7 +234,7 @@ class _PluginBase(metaclass=ABCMeta):
link = settings.MP_DOMAIN(f"#/plugins?tab=installed&id={self.__class__.__name__}")
self.chain.post_message(Notification(
channel=channel, mtype=mtype, title=title, text=text,
image=image, link=link, userid=userid, username=username
image=image, link=link, userid=userid, username=username, **kwargs
))
def close(self):


@@ -15,7 +15,6 @@ from app.chain.recommend import RecommendChain
from app.chain.site import SiteChain
from app.chain.subscribe import SubscribeChain
from app.chain.tmdb import TmdbChain
from app.chain.torrents import TorrentsChain
from app.chain.transfer import TransferChain
from app.core.config import settings
from app.core.event import EventManager
@@ -93,6 +92,11 @@ class Scheduler(metaclass=Singleton):
"func": SubscribeChain().refresh,
"running": False,
},
"subscribe_follow": {
"name": "关注的订阅分享",
"func": SubscribeChain().follow,
"running": False,
},
"transfer": {
"name": "下载文件整理",
"func": TransferChain().process,
@@ -242,6 +246,18 @@ class Scheduler(metaclass=Singleton):
}
)
# Followed subscription shares, every 1 hour
self._scheduler.add_job(
self.start,
"interval",
id="subscribe_follow",
name="关注的订阅分享",
hours=1,
kwargs={
'job_id': 'subscribe_follow'
}
)
# Downloader file transfer, every 5 minutes
self._scheduler.add_job(
self.start,
@@ -549,7 +565,6 @@ class Scheduler(metaclass=Singleton):
"""
Clear caches
"""
TorrentsChain().clear_cache()
SchedulerChain().clear_cache()
def user_auth(self):


@@ -19,3 +19,4 @@ from .file import *
from .exception import *
from .system import *
from .event import *
from .workflow import *


@@ -79,8 +79,6 @@ class MediaInfo(BaseModel):
title_year: Optional[str] = None
# Currently specified season, if any
season: Optional[int] = None
# Collection and similar IDs
collection_id: Optional[int] = None
# TMDB ID
tmdb_id: Optional[int] = None
# IMDB ID
@@ -91,6 +89,12 @@ class MediaInfo(BaseModel):
douban_id: Optional[str] = None
# Bangumi ID
bangumi_id: Optional[int] = None
# Collection ID
collection_id: Optional[int] = None
# Prefix for other media IDs
mediaid_prefix: Optional[str] = None
# Value for other media IDs
media_id: Optional[str] = None
# Original language of the media
original_language: Optional[str] = None
# Original release title of the media
@@ -238,6 +242,19 @@ class Context(BaseModel):
torrent_info: Optional[TorrentInfo] = None
class MediaSeason(BaseModel):
"""
Season information
"""
air_date: Optional[str] = None
episode_count: Optional[int] = None
name: Optional[str] = None
overview: Optional[str] = None
poster_path: Optional[str] = None
season_number: Optional[int] = None
vote_average: Optional[float] = None
class MediaPerson(BaseModel):
"""
Media person information


@@ -3,7 +3,7 @@ from typing import Optional, Dict, Any, List, Set
from pydantic import BaseModel, Field, root_validator
from app.schemas import MessageChannel
from app.schemas import MessageChannel, FileItem
class BaseEventData(BaseModel):
@@ -50,7 +50,7 @@ class AuthCredentials(ChainEventData):
service: Optional[str] = Field(default=None, description="服务名称")
@root_validator(pre=True)
def check_fields_based_on_grant_type(cls, values): # noqa
def check_fields_based_on_grant_type(cls, values): # noqa
grant_type = values.get("grant_type")
if not grant_type:
values["grant_type"] = "password"
@@ -202,3 +202,98 @@ class ResourceDownloadEventData(ChainEventData):
cancel: bool = Field(default=False, description="是否取消下载")
source: str = Field(default="未知拦截源", description="拦截源")
reason: str = Field(default="", description="拦截原因")
class TransferInterceptEventData(ChainEventData):
"""
TransferIntercept 事件的数据模型
Attributes:
# 输入参数
fileitem (FileItem): 源文件
target_storage (str): 目标存储
target_path (Path): 目标路径
transfer_type (str): 整理方式copy、move、link、softlink等
options (dict): 其他参数
# 输出参数
cancel (bool): 是否取消下载,默认值为 False
source (str): 拦截源,默认值为 "未知拦截源"
reason (str): 拦截原因,描述拦截的具体原因
"""
# Input parameters
fileitem: FileItem = Field(..., description="源文件")
mediainfo: Any = Field(..., description="媒体信息")
target_storage: str = Field(..., description="目标存储")
target_path: Path = Field(..., description="目标路径")
transfer_type: str = Field(..., description="整理方式")
options: Optional[dict] = Field(default=None, description="其他参数")
# Output parameters
cancel: bool = Field(default=False, description="是否取消整理")
source: str = Field(default="未知拦截源", description="拦截源")
reason: str = Field(default="", description="拦截原因")
class DiscoverMediaSource(BaseModel):
"""
Base class for discover media data sources
"""
name: str = Field(..., description="数据源名称")
mediaid_prefix: str = Field(..., description="媒体ID的前缀(不含:)")
api_path: str = Field(..., description="媒体数据源API地址")
filter_params: Optional[Dict[str, Any]] = Field(default=None, description="过滤参数")
filter_ui: Optional[List[dict]] = Field(default=[], description="过滤参数UI配置")
depends: Optional[Dict[str, list]] = Field(default=None, description="UI依赖关系字典")
class DiscoverSourceEventData(ChainEventData):
"""
Data model for the DiscoverSource event
Attributes:
# Output parameters
extra_sources (List[DiscoverMediaSource]): additional media data sources
"""
# Output parameters
extra_sources: List[DiscoverMediaSource] = Field(default_factory=list, description="额外媒体数据源")
class RecommendMediaSource(BaseModel):
"""
Base class for recommend media data sources
"""
name: str = Field(..., description="数据源名称")
api_path: str = Field(..., description="媒体数据源API地址")
class RecommendSourceEventData(ChainEventData):
"""
Data model for the RecommendSource event
Attributes:
# Output parameters
extra_sources (List[RecommendMediaSource]): additional media data sources
"""
# Output parameters
extra_sources: List[RecommendMediaSource] = Field(default_factory=list, description="额外媒体数据源")
class MediaRecognizeConvertEventData(ChainEventData):
"""
Data model for the MediaRecognizeConvert event
Attributes:
# Input parameters
mediaid (str): media ID in the form `prefix:id`, e.g. tmdb:12345, douban:1234567
convert_type (str): conversion type, only themoviedb/douban are supported; convert to the corresponding media data and return it
# Output parameters
media_dict (dict): TheMovieDb/Douban media data
"""
# Input parameters
mediaid: str = Field(..., description="媒体ID")
convert_type: str = Field(..., description="转换类型(themoviedb/douban)")
# Output parameters
media_dict: dict = Field(default_factory=dict, description="转换后的媒体信息(TheMovieDb/豆瓣)")
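For a mutable default like `media_dict`, `default_factory` is the safe spelling: `Field(default=dict)` would store the `dict` type object itself as the default rather than a fresh empty dict, and a shared mutable default would leak state between instances. A stdlib `dataclasses` sketch of the same idea (pydantic's `Field(default_factory=dict)` is the analogous form):

```python
from dataclasses import dataclass, field

@dataclass
class EventData:
    # field(default_factory=dict) builds a fresh dict per instance
    media_dict: dict = field(default_factory=dict)

a = EventData()
b = EventData()
a.media_dict["title"] = "Fight Club"
assert b.media_dict == {}  # no state shared between instances
```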


@@ -16,6 +16,7 @@ class Subscribe(BaseModel):
tmdbid: Optional[int] = None
doubanid: Optional[str] = None
bangumiid: Optional[int] = None
mediaid: Optional[str] = None
# Season number
season: Optional[int] = None
# Poster


@@ -75,10 +75,18 @@ class ChainEventType(Enum):
CommandRegister = "command.register"
# Transfer rename
TransferRename = "transfer.rename"
# Transfer intercept
TransferIntercept = "transfer.intercept"
# Resource selection
ResourceSelection = "resource.selection"
# Resource download
ResourceDownload = "resource.download"
# Discover data source
DiscoverSource = "discover.source"
# Media recognition conversion
MediaRecognizeConvert = "media.recognize.convert"
# Recommend data source
RecommendSource = "recommend.source"
# System configuration key dictionary
@@ -135,6 +143,8 @@ class SystemConfigKey(Enum):
DefaultTvSubscribeConfig = "DefaultTvSubscribeConfig"
# User site authentication parameters
UserSiteAuthParams = "UserSiteAuthParams"
# Followed subscription sharers
FollowSubscribers = "FollowSubscribers"
# Processing progress key dictionary

app/schemas/workflow.py Normal file

@@ -0,0 +1,35 @@
from abc import ABC, abstractmethod
from typing import Optional
from pydantic import BaseModel, Field
class Workflow(BaseModel):
"""
Workflow information
"""
name: Optional[str] = Field(None, description="工作流名称")
description: Optional[str] = Field(None, description="工作流描述")
timer: Optional[str] = Field(None, description="定时器")
state: Optional[str] = Field(None, description="状态")
current_action: Optional[str] = Field(None, description="当前执行动作")
result: Optional[str] = Field(None, description="任务执行结果")
run_count: Optional[int] = Field(0, description="已执行次数")
actions: Optional[list] = Field([], description="任务列表")
add_time: Optional[str] = Field(None, description="创建时间")
last_time: Optional[str] = Field(None, description="最后执行时间")
class Action(BaseModel):
"""
Action information
"""
name: Optional[str] = Field(None, description="动作名称")
description: Optional[str] = Field(None, description="动作描述")
class ActionContext(BaseModel, ABC):
"""
Action context
"""
pass


@@ -2,6 +2,7 @@ import sys
from fastapi import FastAPI
from app.core.cache import close_cache
from app.core.config import global_vars, settings
from app.core.module import ModuleManager
from app.log import logger
@@ -129,6 +130,8 @@ def shutdown_modules(_: FastAPI):
Monitor().stop()
# Shut down the thread pool
ThreadHelper().shutdown()
# Close cache connections
close_cache()
# Close database connections
close_database()
# Stop the frontend service


@@ -156,7 +156,8 @@ class ExponentialBackoffRateLimiter(BaseRateLimiter):
with self.lock:
self.next_allowed_time = current_time + self.current_wait
self.current_wait = min(self.current_wait * self.backoff_factor, self.max_wait)
self.log_warning(f"触发限流,将在 {self.current_wait} 秒后允许继续调用")
wait_time = self.next_allowed_time - current_time
self.log_warning(f"触发限流,将在 {wait_time:.2f} 秒后允许继续调用")
# Time-window rate limiter
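The fix above logs the wait derived from `next_allowed_time` rather than `current_wait`, because `current_wait` has already been doubled in preparation for the *next* violation. A minimal sketch of that ordering (a hypothetical class mirroring only the arithmetic, not the project's full limiter):

```python
class BackoffLimiter:
    """Exponential backoff: each trigger doubles the wait, capped at max_wait."""

    def __init__(self, base_wait=1.0, backoff_factor=2.0, max_wait=8.0):
        self.current_wait = base_wait
        self.backoff_factor = backoff_factor
        self.max_wait = max_wait
        self.next_allowed_time = 0.0

    def trigger(self, now):
        # Schedule the next allowed call, then grow the wait for future violations
        self.next_allowed_time = now + self.current_wait
        self.current_wait = min(self.current_wait * self.backoff_factor, self.max_wait)
        # Report the wait actually imposed, not the already-doubled current_wait
        return self.next_allowed_time - now

limiter = BackoffLimiter()
assert limiter.trigger(0.0) == 1.0  # first trigger waits base_wait
assert limiter.trigger(0.0) == 2.0  # doubled for the second violation
assert limiter.trigger(0.0) == 4.0  # doubled again, heading toward the cap
```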


@@ -42,7 +42,7 @@ class SecurityUtils:
@staticmethod
def is_safe_url(url: str, allowed_domains: Union[Set[str], List[str]], strict: bool = False) -> bool:
"""
验证URL是否在允许的域名列表中包括带有端口的域名
验证URL是否在允许的域名列表中包括带有端口的域名
:param url: the URL to validate
:param allowed_domains: set of allowed domains; a domain may include a port


@@ -1,6 +1,6 @@
from typing import Optional, List
from cachetools import TTLCache, cached
from app.core.cache import cached
from app.utils.http import RequestUtils
@@ -75,7 +75,7 @@ class WebUtils:
return ""
@staticmethod
@cached(cache=TTLCache(maxsize=1, ttl=3600))
@cached(maxsize=1, ttl=3600)
def get_bing_wallpaper() -> Optional[str]:
"""
Get the Bing daily wallpaper
@@ -93,7 +93,7 @@ class WebUtils:
return None
@staticmethod
@cached(cache=TTLCache(maxsize=1, ttl=3600))
@cached(maxsize=1, ttl=3600)
def get_bing_wallpapers(num: int = 7) -> List[str]:
"""
Get the Bing daily wallpapers for 7 days


@@ -0,0 +1,24 @@
"""2.1.1
Revision ID: 279a949d81b6
Revises: ca5461f314f2
Create Date: 2025-02-14 19:02:24.989349
"""
from app.chain.torrents import TorrentsChain
# revision identifiers, used by Alembic.
revision = '279a949d81b6'
down_revision = 'ca5461f314f2'
branch_labels = None
depends_on = None
def upgrade() -> None:
# Clear the cache once
TorrentsChain().clear_torrents()
def downgrade() -> None:
pass


@@ -0,0 +1,32 @@
"""2.1.0
Revision ID: ca5461f314f2
Revises: 55390f1f77c1
Create Date: 2025-02-06 18:28:00.644571
"""
import contextlib
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import sqlite
# revision identifiers, used by Alembic.
revision = 'ca5461f314f2'
down_revision = '55390f1f77c1'
branch_labels = None
depends_on = None
def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
# Add mediaid to subscribe
with contextlib.suppress(Exception):
op.add_column('subscribe', sa.Column('mediaid', sa.String(), nullable=True))
op.create_index('ix_subscribe_mediaid', 'subscribe', ['mediaid'], unique=False)
op.add_column('subscribehistory', sa.Column('mediaid', sa.String(), nullable=True))
# ### end Alembic commands ###
def downgrade() -> None:
pass
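The migration above wraps its schema changes in `contextlib.suppress(Exception)` so that re-running it against a database that already has the `mediaid` column does not fail. One caveat worth knowing: `suppress` exits the whole `with` block on the first exception, so later statements in the block are skipped too. A self-contained sketch (the `add_column` helper and `existing_columns` set here are hypothetical stand-ins for the Alembic operations):

```python
import contextlib

applied = []
existing_columns = {"mediaid"}  # hypothetical: the column already exists

def add_column(name):
    # Stand-in for op.add_column: raises if the column is already present
    if name in existing_columns:
        raise ValueError(f"column {name} already exists")
    existing_columns.add(name)
    applied.append(name)

with contextlib.suppress(Exception):
    add_column("mediaid")   # raises -> suppressed, block exits here
    add_column("newfield")  # never reached

assert applied == []
assert "newfield" not in existing_columns
```

If each operation must be attempted independently, each one needs its own `suppress` block.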


@@ -32,6 +32,7 @@ func_timeout==4.3.5
bs4~=0.0.1
beautifulsoup4~=4.12.2
pillow~=10.4.0
pillow-avif-plugin~=1.4.6
pyTelegramBotAPI~=4.12.0
playwright~=1.37.0
cf-clearance~=0.31.0
@@ -64,4 +65,6 @@ python-cookietools==0.0.2.1
aligo~=6.2.4
aiofiles~=24.1.0
jieba~=0.42.1
rsa~=4.9
rsa~=4.9
redis~=5.2.1
async_timeout~=5.0.1; python_full_version < "3.11.3"


@@ -1,20 +1,22 @@
from distutils.core import setup
from Cython.Build import cythonize
module_list = ['app/helper/sites.py']
setup(
name="",
author="",
zip_safe=False,
include_package_data=True,
ext_modules=cythonize(
module_list=module_list,
nthreads=0,
compiler_directives={"language_level": "3"},
),
script_args=["build_ext", "-j", '2', "--inplace"],
)
name="MoviePilot",
author="jxxghp",
zip_safe=False,
include_package_data=True,
ext_modules=cythonize(
module_list=module_list,
nthreads=0,
compiler_directives={
"language_level": "3",
"binding": False,
"nonecheck": False
},
),
script_args=["build_ext", "-j", '2', "--inplace"],
)


@@ -968,7 +968,7 @@ meta_cases = [{
"year": "2023",
"part": "",
"season": "S02",
"episode": "",
"episode": "E01-E08",
"restype": "WEB-DL",
"pix": "2160p",
"video_codec": "H265",
@@ -1016,7 +1016,7 @@ meta_cases = [{
"year": "2019",
"part": "",
"season": "S01",
"episode": "",
"episode": "E01-E36",
"restype": "WEB-DL",
"pix": "2160p",
"video_codec": "H265",


@@ -1,2 +1,2 @@
APP_VERSION = 'v2.2.2'
FRONTEND_VERSION = 'v2.2.2'
APP_VERSION = 'v2.2.9'
FRONTEND_VERSION = 'v2.2.9'