Compare commits

...

29 Commits

Author SHA1 Message Date
jxxghp
c86a21d11d Merge pull request #604 from WithdewHua/subscribe 2023-09-16 20:31:42 +08:00
WithdewHua
3fb02f6490 feat: add API to update subscription TMDB info 2023-09-16 19:36:49 +08:00
WithdewHua
ca2c0392bb fix: adjust API route order to avoid mis-matching 2023-09-16 18:43:33 +08:00
WithdewHua
b8663ee735 fix: also update movie subscription info; fix typo 2023-09-16 16:16:39 +08:00
WithdewHua
4ab60423c1 feat: query the media server (Plex) by original title 2023-09-16 15:48:22 +08:00
jxxghp
1ea80e6870 Update README.md 2023-09-16 10:58:33 +08:00
jxxghp
6f1d4754be Merge pull request #600 from DDS-Derek/main 2023-09-16 08:28:56 +08:00
DDSRem
52288d98c0 bump: action jobs version
docker/metadata-action@v5
docker/setup-qemu-action@v3
docker/setup-buildx-action@v3
docker/login-action@v3
docker/build-push-action@v5

Co-Authored-By: DDSDerek <108336573+DDSDerek@users.noreply.github.com>
Co-Authored-By: DDSTomo <142158217+ddstomo@users.noreply.github.com>
2023-09-15 20:18:28 +08:00
jxxghp
d1368c4f84 fix bug 2023-09-15 17:28:35 +08:00
jxxghp
4367c53bb0 fix bug 2023-09-15 17:24:22 +08:00
jxxghp
d87f69da35 fix azusa 2023-09-15 16:07:01 +08:00
jxxghp
5ece44090e fix 2023-09-15 15:38:30 +08:00
jxxghp
01be4f9549 need test 2023-09-15 15:37:05 +08:00
jxxghp
94077917f3 Merge remote-tracking branch 'origin/main' 2023-09-15 15:22:19 +08:00
jxxghp
8af981738c fix README.md 2023-09-15 15:22:11 +08:00
jxxghp
4d7982803e Merge pull request #596 from thsrite/main
fix: cross-seeding plugin, add excluded (no-seed) paths
2023-09-15 15:15:55 +08:00
thsrite
a1bba6da4a fix: cross-seeding plugin, add excluded (no-seed) paths 2023-09-15 15:08:15 +08:00
jxxghp
4eb3e16b37 v1.2.1
- Fixed the menu bar needing two taps on iOS
- Fixed duplicate downloads when re-downloading movies for a better version
- Added site support for ptlsp and azusa
- Added ptlsp as a supported authentication site
- Emulated sign-in now checks the sign-in status
2023-09-15 15:04:18 +08:00
jxxghp
1f0b40fe05 support ptlsp 2023-09-15 14:29:15 +08:00
jxxghp
29e92a17e7 support azusa 2023-09-15 14:01:12 +08:00
jxxghp
8cc4469282 fix #591 2023-09-15 10:59:46 +08:00
jxxghp
a5e66071ba support PTLSP 2023-09-15 10:46:54 +08:00
jxxghp
fb4e817993 fix #594 2023-09-15 10:38:15 +08:00
jxxghp
8f26110e65 Merge pull request #590 from thsrite/main 2023-09-14 16:19:46 +08:00
thsrite
9f65a088c0 fix: add channel field to plugin interactive commands 2023-09-14 16:09:56 +08:00
jxxghp
15c15388b6 Merge pull request #589 from thsrite/main 2023-09-14 15:34:49 +08:00
thsrite
950a43e001 fix: daily sign-in record storage bug 2023-09-14 15:28:06 +08:00
jxxghp
9a28f8c365 Merge pull request #588 from thsrite/main 2023-09-14 15:18:43 +08:00
thsrite
32cb96fc44 fix: emulated sign-in checks whether already signed in 2023-09-14 15:17:30 +08:00
20 changed files with 240 additions and 106 deletions

View File

@@ -26,7 +26,7 @@ jobs:
       -
         name: Docker meta
         id: meta
-        uses: docker/metadata-action@v4
+        uses: docker/metadata-action@v5
         with:
           images: ${{ secrets.DOCKER_USERNAME }}/moviepilot
           tags: |
@@ -35,22 +35,22 @@ jobs:
       -
         name: Set Up QEMU
-        uses: docker/setup-qemu-action@v2
+        uses: docker/setup-qemu-action@v3
       -
         name: Set Up Buildx
-        uses: docker/setup-buildx-action@v2
+        uses: docker/setup-buildx-action@v3
       -
         name: Login DockerHub
-        uses: docker/login-action@v2
+        uses: docker/login-action@v3
         with:
           username: ${{ secrets.DOCKER_USERNAME }}
           password: ${{ secrets.DOCKER_PASSWORD }}
       -
         name: Build Image
-        uses: docker/build-push-action@v4
+        uses: docker/build-push-action@v5
         with:
           context: .
           file: Dockerfile

View File

@@ -81,7 +81,7 @@ docker pull jxxghp/moviepilot:latest
 - **OCR_HOST** OCR识别服务器地址格式`http(s)://ip:port`用于识别站点二维码实现自动登录获取Cookie等不配置默认使用内建服务器`https://movie-pilot.org`,可使用 [这个镜像](https://hub.docker.com/r/jxxghp/moviepilot-ocr) 自行搭建。
 - **USER_AGENT** CookieCloud对应的浏览器UA可选设置后可增加连接站点的成功率同步站点后可以在管理界面中修改
 - **AUTO_DOWNLOAD_USER** 交互搜索自动下载用户ID使用,分割
-- **SUBSCRIBE_MODE** 订阅模式,`rss`/`spider`,默认`spider``rss`模式通过定时刷新RSS来匹配订阅RSS地址会自动获取也可手动维护对站点压力小同时可设置订阅刷新周期24小时运行推荐使用模式。
+- **SUBSCRIBE_MODE** 订阅模式,`rss`/`spider`,默认`spider``rss`模式通过定时刷新RSS来匹配订阅RSS地址会自动获取也可手动维护对站点压力小同时可设置订阅刷新周期24小时运行但订阅和下载通知不能过滤和显示免费,推荐使用rss模式。
 - **SUBSCRIBE_RSS_INTERVAL** RSS订阅模式刷新时间间隔分钟默认`30`分钟不能小于5分钟。
 - **SUBSCRIBE_SEARCH** 订阅搜索,`true`/`false`,默认`false`开启后会每隔24小时对所有订阅进行全量搜索以补齐缺失剧集一般情况下正常订阅即可订阅搜索只做为兜底会增加站点压力不建议开启
 - **MESSAGER** 消息通知渠道,支持 `telegram`/`wechat`/`slack`,开启多个渠道时使用`,`分隔。同时还需要配置对应渠道的环境变量,非对应渠道的变量可删除,推荐使用`telegram`
@@ -149,23 +149,24 @@ docker pull jxxghp/moviepilot:latest
 ### 2. **用户认证**
-- **AUTH_SITE** 认证站点,支持`hhclub`/`audiences`/`hddolby`/`zmpt`/`freefarm`/`hdfans`/`wintersakura`/`leaves`/`1ptba`/`icc2022`/`iyuu`
+- **AUTH_SITE** 认证站点,支持`iyuu`/`hhclub`/`audiences`/`hddolby`/`zmpt`/`freefarm`/`hdfans`/`wintersakura`/`leaves`/`1ptba`/`icc2022`/`ptlsp`
 `MoviePilot`需要认证后才能使用,配置`AUTH_SITE`后,需要根据下表配置对应站点的认证参数。
 | 站点 | 参数 |
-|:--:|:-----------------------------------------------------:|
+|:------------:|:-----------------------------------------------------:|
 | iyuu | `IYUU_SIGN`IYUU登录令牌 |
 | hhclub | `HHCLUB_USERNAME`:用户名<br/>`HHCLUB_PASSKEY`:密钥 |
 | audiences | `AUDIENCES_UID`用户ID<br/>`AUDIENCES_PASSKEY`:密钥 |
 | hddolby | `HDDOLBY_ID`用户ID<br/>`HDDOLBY_PASSKEY`:密钥 |
 | zmpt | `ZMPT_UID`用户ID<br/>`ZMPT_PASSKEY`:密钥 |
 | freefarm | `FREEFARM_UID`用户ID<br/>`FREEFARM_PASSKEY`:密钥 |
 | hdfans | `HDFANS_UID`用户ID<br/>`HDFANS_PASSKEY`:密钥 |
 | wintersakura | `WINTERSAKURA_UID`用户ID<br/>`WINTERSAKURA_PASSKEY`:密钥 |
 | leaves | `LEAVES_UID`用户ID<br/>`LEAVES_PASSKEY`:密钥 |
 | 1ptba | `1PTBA_UID`用户ID<br/>`1PTBA_PASSKEY`:密钥 |
 | icc2022 | `ICC2022_UID`用户ID<br/>`ICC2022_PASSKEY`:密钥 |
+| ptlsp | `PTLSP_UID`用户ID<br/>`PTLSP_PASSKEY`:密钥 |
 ### 2. **进阶配置**
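With `ptlsp` added to the accepted `AUTH_SITE` values, a matching environment configuration might look like this (the UID and passkey values below are placeholders, not real credentials):

```shell
# Hypothetical values - substitute your own user ID and passkey from the site.
AUTH_SITE=ptlsp
PTLSP_UID=123456
PTLSP_PASSKEY=0123456789abcdef
```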

View File

@@ -138,6 +138,53 @@ def subscribe_mediaid(
     return result if result else Subscribe()


+@router.get("/refresh", summary="刷新订阅", response_model=schemas.Response)
+def refresh_subscribes(
+        db: Session = Depends(get_db),
+        _: schemas.TokenPayload = Depends(verify_token)) -> Any:
+    """
+    刷新所有订阅
+    """
+    SubscribeChain(db).refresh()
+    return schemas.Response(success=True)
+
+
+@router.get("/check", summary="刷新订阅 TMDB 信息", response_model=schemas.Response)
+def check_subscribes(
+        db: Session = Depends(get_db),
+        _: schemas.TokenPayload = Depends(verify_token)) -> Any:
+    """
+    刷新所有订阅
+    """
+    SubscribeChain(db).check()
+    return schemas.Response(success=True)
+
+
+@router.get("/search", summary="搜索所有订阅", response_model=schemas.Response)
+def search_subscribes(
+        background_tasks: BackgroundTasks,
+        db: Session = Depends(get_db),
+        _: schemas.TokenPayload = Depends(verify_token)) -> Any:
+    """
+    搜索所有订阅
+    """
+    background_tasks.add_task(start_subscribe_search, db=db, sid=None, state='R')
+    return schemas.Response(success=True)
+
+
+@router.get("/search/{subscribe_id}", summary="搜索订阅", response_model=schemas.Response)
+def search_subscribe(
+        subscribe_id: int,
+        background_tasks: BackgroundTasks,
+        db: Session = Depends(get_db),
+        _: schemas.TokenPayload = Depends(verify_token)) -> Any:
+    """
+    根据订阅编号搜索订阅
+    """
+    background_tasks.add_task(start_subscribe_search, db=db, sid=subscribe_id, state=None)
+    return schemas.Response(success=True)
+
+
 @router.get("/{subscribe_id}", summary="订阅详情", response_model=schemas.Subscribe)
 def read_subscribe(
         subscribe_id: int,
@@ -243,39 +290,3 @@ async def seerr_subscribe(request: Request, background_tasks: BackgroundTasks,
                           username=user_name)
     return schemas.Response(success=True)
-
-
-@router.get("/refresh", summary="刷新订阅", response_model=schemas.Response)
-def refresh_subscribes(
-        db: Session = Depends(get_db),
-        _: schemas.TokenPayload = Depends(verify_token)) -> Any:
-    """
-    刷新所有订阅
-    """
-    SubscribeChain(db).refresh()
-    return schemas.Response(success=True)
-
-
-@router.get("/search/{subscribe_id}", summary="搜索订阅", response_model=schemas.Response)
-def search_subscribe(
-        subscribe_id: int,
-        background_tasks: BackgroundTasks,
-        db: Session = Depends(get_db),
-        _: schemas.TokenPayload = Depends(verify_token)) -> Any:
-    """
-    搜索所有订阅
-    """
-    background_tasks.add_task(start_subscribe_search, db=db, sid=subscribe_id, state=None)
-    return schemas.Response(success=True)
-
-
-@router.get("/search", summary="搜索所有订阅", response_model=schemas.Response)
-def search_subscribes(
-        background_tasks: BackgroundTasks,
-        db: Session = Depends(get_db),
-        _: schemas.TokenPayload = Depends(verify_token)) -> Any:
-    """
-    搜索所有订阅
-    """
-    background_tasks.add_task(start_subscribe_search, db=db, sid=None, state='R')
-    return schemas.Response(success=True)
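The reordering above moves the literal routes (`/refresh`, `/check`, `/search`) ahead of the path-parameter route `/{subscribe_id}`, because routes are tried in registration order. A standalone sketch of that first-match behavior (the patterns and handler names are illustrative, not the project's actual routing code):

```python
import re

# Ordered route table: the first pattern that matches wins, mirroring how
# routers try routes in registration order.
routes = [
    (re.compile(r"^/subscribe/refresh$"), "refresh_subscribes"),
    (re.compile(r"^/subscribe/(?P<subscribe_id>[^/]+)$"), "read_subscribe"),
]

def dispatch(path: str) -> str:
    """Return the name of the first handler whose pattern matches the path."""
    for pattern, handler in routes:
        if pattern.match(path):
            return handler
    return "404"

# Registered first, the literal /refresh route wins; registered after the
# path-parameter route it would be captured as subscribe_id="refresh".
print(dispatch("/subscribe/refresh"))  # refresh_subscribes
print(dispatch("/subscribe/12"))       # read_subscribe
```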

View File

@@ -185,19 +185,30 @@ class SearchChain(ChainBase):
                             str(int(mediainfo.year) + 1)]:
                     logger.warn(f'{torrent.site_name} - {torrent.title} 年份不匹配')
                     continue
-                # 比对标题
+                # 比对标题和原语种标题
                 meta_name = StringUtils.clear_upper(torrent_meta.name)
                 if meta_name in [
                     StringUtils.clear_upper(mediainfo.title),
                     StringUtils.clear_upper(mediainfo.original_title)
                 ]:
-                    logger.info(f'{mediainfo.title} 匹配到资源:{torrent.site_name} - {torrent.title}')
+                    logger.info(f'{mediainfo.title} 通过标题匹配到资源:{torrent.site_name} - {torrent.title}')
                     _match_torrents.append(torrent)
                     continue
+                # 在副标题中判断是否存在标题与原语种标题
+                if torrent.description:
+                    subtitle = torrent.description.split()
+                    if (StringUtils.is_chinese(mediainfo.title)
+                        and str(mediainfo.title) in subtitle) \
+                            or (StringUtils.is_chinese(mediainfo.original_title)
+                                and str(mediainfo.original_title) in subtitle):
+                        logger.info(f'{mediainfo.title} 通过副标题匹配到资源:{torrent.site_name} - {torrent.title}'
+                                    f'副标题:{torrent.description}')
+                        _match_torrents.append(torrent)
+                        continue
                 # 比对别名和译名
                 for name in mediainfo.names:
                     if StringUtils.clear_upper(name) == meta_name:
-                        logger.info(f'{mediainfo.title} 匹配到资源:{torrent.site_name} - {torrent.title}')
+                        logger.info(f'{mediainfo.title} 通过别名或译名匹配到资源:{torrent.site_name} - {torrent.title}')
                         _match_torrents.append(torrent)
                         break
                 else:

View File

@@ -277,7 +277,7 @@ class SubscribeChain(ChainBase):
                 logger.warn(f'订阅 {subscribe.keyword or subscribe.name} 未搜索到资源')
                 if meta.type == MediaType.TV:
                     # 未搜索到资源,但本地缺失可能有变化,更新订阅剩余集数
-                    self.__upate_lack_episodes(lefts=no_exists, subscribe=subscribe, mediainfo=mediainfo)
+                    self.__update_lack_episodes(lefts=no_exists, subscribe=subscribe, mediainfo=mediainfo)
                 continue
             # 过滤
             matched_contexts = []
@@ -308,12 +308,17 @@ class SubscribeChain(ChainBase):
                     if torrent_meta.episode_list:
                         logger.info(f'{subscribe.name} 正在洗版,{torrent_info.title} 不是整季')
                         continue
+                    # 优先级小于已下载优先级的不要
+                    if subscribe.current_priority \
+                            and torrent_info.pri_order < subscribe.current_priority:
+                        logger.info(f'{subscribe.name} 正在洗版,{torrent_info.title} 优先级低于已下载优先级')
+                        continue
                 matched_contexts.append(context)
             if not matched_contexts:
                 logger.warn(f'订阅 {subscribe.name} 没有符合过滤条件的资源')
                 # 非洗版未搜索到资源,但本地缺失可能有变化,更新订阅剩余集数
                 if meta.type == MediaType.TV and not subscribe.best_version:
-                    self.__upate_lack_episodes(lefts=no_exists, subscribe=subscribe, mediainfo=mediainfo)
+                    self.__update_lack_episodes(lefts=no_exists, subscribe=subscribe, mediainfo=mediainfo)
                 continue
             # 自动下载
             downloads, lefts = self.downloadchain.batch_download(contexts=matched_contexts,
@@ -330,18 +335,18 @@ class SubscribeChain(ChainBase):
                                             mediainfo=mediainfo, downloads=downloads)
             else:
                 # 未完成下载
                 logger.info(f'{mediainfo.title_year} 未下载完整,继续订阅 ...')
                 if meta.type == MediaType.TV and not subscribe.best_version:
                     # 更新订阅剩余集数和时间
                     update_date = True if downloads else False
-                    self.__upate_lack_episodes(lefts=lefts, subscribe=subscribe,
-                                               mediainfo=mediainfo, update_date=update_date)
+                    self.__update_lack_episodes(lefts=lefts, subscribe=subscribe,
+                                                mediainfo=mediainfo, update_date=update_date)
         # 手动触发时发送系统消息
         if manual:
             if sid:
                 self.message.put(f'订阅 {subscribes[0].name} 搜索完成!')
             else:
-                self.message.put(f'所有订阅搜索完成!')
+                self.message.put('所有订阅搜索完成!')

     def finish_subscribe_or_not(self, subscribe: Subscribe, meta: MetaInfo,
                                 mediainfo: MediaInfo, downloads: List[Context]):
@@ -504,11 +509,13 @@ class SubscribeChain(ChainBase):
                                              torrent_list=[torrent_info])
                 if result is not None and not result:
                     # 不符合过滤规则
+                    logger.info(f"{torrent_info.title} 不匹配当前过滤规则")
                     continue
                 # 不在订阅站点范围的不处理
                 if subscribe.sites:
                     sub_sites = json.loads(subscribe.sites)
                     if sub_sites and torrent_info.site not in sub_sites:
+                        logger.info(f"{torrent_info.title} 不符合 {torrent_mediainfo.title_year} 订阅站点要求")
                         continue
                 # 如果是电视剧
                 if torrent_mediainfo.type == MediaType.TV:
@@ -580,12 +587,12 @@ class SubscribeChain(ChainBase):
             if meta.type == MediaType.TV and not subscribe.best_version:
                 update_date = True if downloads else False
                 # 未完成下载,计算剩余集数
-                self.__upate_lack_episodes(lefts=lefts, subscribe=subscribe,
-                                           mediainfo=mediainfo, update_date=update_date)
+                self.__update_lack_episodes(lefts=lefts, subscribe=subscribe,
+                                            mediainfo=mediainfo, update_date=update_date)
             else:
                 if meta.type == MediaType.TV:
                     # 未搜索到资源,但本地缺失可能有变化,更新订阅剩余集数
-                    self.__upate_lack_episodes(lefts=no_exists, subscribe=subscribe, mediainfo=mediainfo)
+                    self.__update_lack_episodes(lefts=no_exists, subscribe=subscribe, mediainfo=mediainfo)

     def check(self):
         """
@@ -609,18 +616,15 @@ class SubscribeChain(ChainBase):
             if not mediainfo:
                 logger.warn(f'未识别到媒体信息,标题:{subscribe.name}tmdbid{subscribe.tmdbid}')
                 continue
-            if not mediainfo.seasons:
-                continue
-            # 获取当前季的总集数
+            # 对于电视剧,获取当前季的总集数
             episodes = mediainfo.seasons.get(subscribe.season) or []
-            if len(episodes) > subscribe.total_episode or 0:
+            if len(episodes) > (subscribe.total_episode or 0):
                 total_episode = len(episodes)
                 lack_episode = subscribe.lack_episode + (total_episode - subscribe.total_episode)
                 logger.info(f'订阅 {subscribe.name} 总集数变化,更新总集数为{total_episode},缺失集数为{lack_episode} ...')
             else:
                 total_episode = subscribe.total_episode
                 lack_episode = subscribe.lack_episode
+                logger.info(f'订阅 {subscribe.name} 总集数未变化')
             # 更新TMDB信息
             self.subscribeoper.update(subscribe.id, {
                 "name": mediainfo.title,
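The added parentheses in `len(episodes) > (subscribe.total_episode or 0)` fix an operator-precedence bug: comparison binds tighter than `or`, so the old form evaluated `(len(episodes) > subscribe.total_episode) or 0` and compared against `None` whenever the field was unset. A quick self-contained illustration:

```python
episodes = [1, 2, 3]
total_episode = None  # field not yet populated

# Old form: the comparison runs first and blows up on None.
try:
    len(episodes) > total_episode or 0
except TypeError as exc:
    print(f"unparenthesized form raises: {exc}")

# Fixed form: None is defaulted to 0 before the comparison.
print(len(episodes) > (total_episode or 0))  # True
```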
@@ -677,7 +681,7 @@ class SubscribeChain(ChainBase):
                 return True
         return False

-    def __upate_lack_episodes(self, lefts: Dict[int, Dict[int, NotExistMediaInfo]],
+    def __update_lack_episodes(self, lefts: Dict[int, Dict[int, NotExistMediaInfo]],
                               subscribe: Subscribe,
                               mediainfo: MediaInfo,
                               update_date: bool = False):
@@ -765,7 +769,7 @@ class SubscribeChain(ChainBase):
                              total_episode: int,
                              start_episode: int):
         """
         根据订阅开始集数和总结合TMDB信息计算当前订阅的缺失集数
         :param no_exists: 缺失季集列表
         :param tmdb_id: TMDB ID
         :param begin_season: 开始季

View File

@@ -108,7 +108,6 @@ class TorrentsChain(ChainBase, metaclass=Singleton):
                     site_proxy=site.get("proxy"),
                     site_order=site.get("pri"),
                     title=item.get("title"),
-                    description=item.get("description"),
                     enclosure=item.get("enclosure"),
                     page_url=item.get("link"),
                     size=item.get("size"),

View File

@@ -222,6 +222,10 @@ class Command(metaclass=Singleton):
                 if args_num > 0:
                     if cmd_data:
                         # 有内置参数直接使用内置参数
+                        data = cmd_data.get("data") or {}
+                        data['channel'] = channel
+                        data['user'] = userid
+                        cmd_data['data'] = data
                         command['func'](**cmd_data)
                     elif args_num == 2:
                         # 没有输入参数只输入渠道和用户ID

View File

@@ -72,7 +72,7 @@ class Settings(BaseSettings):
     SUBSCRIBE_RSS_INTERVAL: int = 30
     # 订阅搜索开关
     SUBSCRIBE_SEARCH: bool = False
-    # 用户认证站点 hhclub/audiences/hddolby/zmpt/freefarm/hdfans/wintersakura/leaves/1ptba/icc2022/iyuu
+    # 用户认证站点
     AUTH_SITE: str = ""
     # 交互搜索自动下载用户ID使用,分割
     AUTO_DOWNLOAD_USER: str = None

Binary file not shown.

View File

@@ -54,7 +54,10 @@ class PlexModule(_ModuleBase):
             if movie:
                 logger.info(f"媒体库中已存在:{movie}")
                 return ExistMediaInfo(type=MediaType.MOVIE)
-            movies = self.plex.get_movies(title=mediainfo.title, year=mediainfo.year, tmdb_id=mediainfo.tmdb_id)
+            movies = self.plex.get_movies(title=mediainfo.title,
+                                          original_title=mediainfo.original_title,
+                                          year=mediainfo.year,
+                                          tmdb_id=mediainfo.tmdb_id)
             if not movies:
                 logger.info(f"{mediainfo.title_year} 在媒体库中不存在")
                 return None
@@ -63,6 +66,7 @@ class PlexModule(_ModuleBase):
                 return ExistMediaInfo(type=MediaType.MOVIE)
         else:
             tvs = self.plex.get_tv_episodes(title=mediainfo.title,
+                                            original_title=mediainfo.original_title,
                                             year=mediainfo.year,
                                             tmdb_id=mediainfo.tmdb_id,
                                             item_id=itemid)

View File

@@ -130,11 +130,13 @@ class Plex(metaclass=Singleton):

     def get_movies(self,
                    title: str,
+                   original_title: str = None,
                    year: str = None,
                    tmdb_id: int = None) -> Optional[List[dict]]:
         """
         根据标题和年份检查电影是否在Plex中存在存在则返回列表
         :param title: 标题
+        :param original_title: 原产地标题
         :param year: 年份,为空则不过滤
         :param tmdb_id: TMDB ID
         :return: 含title、year属性的字典列表
@@ -144,9 +146,14 @@ class Plex(metaclass=Singleton):
         ret_movies = []
         if year:
             movies = self._plex.library.search(title=title, year=year, libtype="movie")
+            # 根据原标题再查一遍
+            if original_title and str(original_title) != str(title):
+                movies.extend(self._plex.library.search(title=original_title, year=year, libtype="movie"))
         else:
             movies = self._plex.library.search(title=title, libtype="movie")
-        for movie in movies:
+            if original_title and str(original_title) != str(title):
+                movies.extend(self._plex.library.search(title=original_title, year=year, libtype="movie"))
+        for movie in set(movies):
             movie_tmdbid = self.__get_ids(movie.guids).get("tmdb_id")
             if tmdb_id and movie_tmdbid:
                 if str(movie_tmdbid) != str(tmdb_id):
@@ -157,6 +164,7 @@ class Plex(metaclass=Singleton):
     def get_tv_episodes(self,
                         item_id: str = None,
                         title: str = None,
+                        original_title: str = None,
                         year: str = None,
                         tmdb_id: int = None,
                         season: int = None) -> Optional[Dict[int, list]]:
@@ -164,6 +172,7 @@ class Plex(metaclass=Singleton):
         根据标题、年份、季查询电视剧所有集信息
         :param item_id: 媒体ID
         :param title: 标题
+        :param original_title: 原产地标题
         :param year: 年份,可以为空,为空时不按年份过滤
         :param tmdb_id: TMDB ID
         :param season: 季号,数字
@@ -176,6 +185,8 @@ class Plex(metaclass=Singleton):
         else:
             # 根据标题和年份模糊搜索,该结果不够准确
             videos = self._plex.library.search(title=title, year=year, libtype="show")
+            if not videos and original_title and str(original_title) != str(title):
+                videos = self._plex.library.search(title=original_title, year=year, libtype="show")
         if not videos:
             return {}
         if isinstance(videos, list):

View File

@@ -575,6 +575,7 @@ class AutoSignIn(_PluginBase):
         yesterday_str = yesterday.strftime('%Y-%m-%d')
         # 删除昨天历史
         self.del_data(key=type + "-" + yesterday_str)
+        self.del_data(key=f"{yesterday.month}{yesterday.day}")

         # 查看今天有没有签到|登录历史
         today = today.strftime('%Y-%m-%d')
@@ -634,11 +635,22 @@ class AutoSignIn(_PluginBase):
         logger.info(f"站点{type}任务完成!")
         # 获取今天的日期
         key = f"{datetime.now().month}{datetime.now().day}"
+        today_data = self.get_data(key)
+        if today_data:
+            if not isinstance(today_data, list):
+                today_data = [today_data]
+            for s in status:
+                today_data.append({
+                    "site": s[0],
+                    "status": s[1]
+                })
+        else:
+            today_data = [{
+                "site": s[0],
+                "status": s[1]
+            } for s in status]
         # 保存数据
-        self.save_data(key, [{
-            "site": s[0],
-            "status": s[1]
-        } for s in status])
+        self.save_data(key, today_data)

         # 命中重试词的站点id
         retry_sites = []
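The change above switches the plugin from overwriting the day's sign-in records to appending to them, so several runs in one day accumulate. A minimal sketch of that merge logic against a plain dict standing in for the plugin's data store (the key and sample data are hypothetical):

```python
def merge_today(store: dict, key: str, status: list) -> list:
    """Append today's sign-in results to any previously saved ones."""
    today_data = store.get(key)
    if today_data:
        if not isinstance(today_data, list):
            today_data = [today_data]  # normalize legacy single-record data
        today_data.extend({"site": s[0], "status": s[1]} for s in status)
    else:
        today_data = [{"site": s[0], "status": s[1]} for s in status]
    store[key] = today_data
    return today_data

store = {}
merge_today(store, "916", [("SiteA", "ok")])     # first run of the day
merge_today(store, "916", [("SiteB", "fail")])   # second run appends
print(len(store["916"]))  # 2
```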
@@ -801,6 +813,10 @@ class AutoSignIn(_PluginBase):
                     return f"无法通过Cloudflare"
                 return f"仿真登录失败Cookie已失效"
             else:
+                # 判断是否已签到
+                if re.search(r'已签|签到已得', page_source, re.IGNORECASE) \
+                        or SiteUtils.is_checkin(page_source):
+                    return f"签到成功"
                 return "仿真签到成功"
         else:
             res = RequestUtils(cookies=site_cookie,

View File

@@ -1,3 +1,4 @@
+import os
 import re
 from datetime import datetime, timedelta
 from threading import Event
@@ -60,6 +61,7 @@ class IYUUAutoSeed(_PluginBase):
     _sites = []
     _notify = False
     _nolabels = None
+    _nopaths = None
     _clearcache = False
     # 退出事件
     _event = Event()
@@ -101,6 +103,7 @@ class IYUUAutoSeed(_PluginBase):
             self._sites = config.get("sites")
             self._notify = config.get("notify")
             self._nolabels = config.get("nolabels")
+            self._nopaths = config.get("nopaths")
             self._clearcache = config.get("clearcache")
             self._permanent_error_caches = config.get("permanent_error_caches") or []
             self._error_caches = [] if self._clearcache else config.get("error_caches") or []
@@ -242,22 +245,6 @@ class IYUUAutoSeed(_PluginBase):
                                 }
                             ]
                         },
-                        {
-                            'component': 'VCol',
-                            'props': {
-                                'cols': 12
-                            },
-                            'content': [
-                                {
-                                    'component': 'VTextField',
-                                    'props': {
-                                        'model': 'nolabels',
-                                        'label': '不辅种标签',
-                                        'placeholder': '使用,分隔多个标签'
-                                    }
-                                }
-                            ]
-                        }
                     ]
                 },
                 {
@@ -309,6 +296,44 @@ class IYUUAutoSeed(_PluginBase):
                             }
                         ]
                     },
+                    {
+                        'component': 'VRow',
+                        'content': [
+                            {
+                                'component': 'VCol',
+                                'props': {
+                                    'cols': 12
+                                },
+                                'content': [
+                                    {
+                                        'component': 'VTextField',
+                                        'props': {
+                                            'model': 'nolabels',
+                                            'label': '不辅种标签',
+                                            'placeholder': '使用,分隔多个标签'
+                                        }
+                                    }
+                                ]
+                            },
+                            {
+                                'component': 'VCol',
+                                'props': {
+                                    'cols': 12
+                                },
+                                'content': [
+                                    {
+                                        'component': 'VTextarea',
+                                        'props': {
+                                            'model': 'nopaths',
+                                            'label': '不辅种数据文件目录',
+                                            'rows': 3,
+                                            'placeholder': '每一行一个目录'
+                                        }
+                                    }
+                                ]
+                            }
+                        ]
+                    },
                     {
                         'component': 'VRow',
                         'content': [
@@ -357,6 +382,7 @@ class IYUUAutoSeed(_PluginBase):
             "token": "",
             "downloaders": [],
             "sites": [],
+            "nopaths": "",
             "nolabels": ""
         }
@@ -374,6 +400,7 @@ class IYUUAutoSeed(_PluginBase):
             "sites": self._sites,
             "notify": self._notify,
             "nolabels": self._nolabels,
+            "nopaths": self._nopaths,
             "success_caches": self._success_caches,
             "error_caches": self._error_caches,
             "permanent_error_caches": self._permanent_error_caches
@@ -431,13 +458,25 @@ class IYUUAutoSeed(_PluginBase):
                 logger.info(f"种子 {hash_str} 辅种失败且已缓存,跳过 ...")
                 continue
             save_path = self.__get_save_path(torrent, downloader)
+            if self._nopaths and save_path:
+                # 过滤不需要转移的路径
+                nopath_skip = False
+                for nopath in self._nopaths.split('\n'):
+                    if os.path.normpath(save_path).startswith(os.path.normpath(nopath)):
+                        logger.info(f"种子 {hash_str} 保存路径 {save_path} 不需要辅种,跳过 ...")
+                        nopath_skip = True
+                        break
+                if nopath_skip:
+                    continue
             # 获取种子标签
             torrent_labels = self.__get_label(torrent, downloader)
             if torrent_labels and self._nolabels:
                 is_skip = False
                 for label in self._nolabels.split(','):
                     if label in torrent_labels:
-                        logger.info(f"种子 {hash_str} 含有不转移标签 {label},跳过 ...")
+                        logger.info(f"种子 {hash_str} 含有不辅种标签 {label},跳过 ...")
                         is_skip = True
                         break
                 if is_skip:
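The `nopaths` filter above relies on a normalized path-prefix comparison. A small standalone sketch of the same logic (the function name and sample paths are illustrative):

```python
import os

def is_excluded(save_path: str, nopaths: str) -> bool:
    """Return True when save_path falls under any excluded directory.

    nopaths holds one directory per line, matching the plugin's
    textarea input format.
    """
    for nopath in nopaths.split("\n"):
        if not nopath.strip():
            continue  # ignore blank lines
        if os.path.normpath(save_path).startswith(os.path.normpath(nopath)):
            return True
    return False

print(is_excluded("/downloads/movies/a.mkv", "/downloads/movies\n/tmp"))  # True
print(is_excluded("/downloads/tv/b.mkv", "/downloads/movies\n/tmp"))      # False
```

Note that a plain `startswith` prefix check also matches sibling directories that merely share a prefix (e.g. `/downloads/movies2` under an exclusion of `/downloads/movies`); comparing whole path components would be stricter.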

View File

@@ -8,6 +8,7 @@ from apscheduler.schedulers.background import BackgroundScheduler
 from apscheduler.triggers.cron import CronTrigger

 from app.core.config import settings
+from app.core.meta import MetaBase
 from app.core.metainfo import MetaInfo
 from app.db.transferhistory_oper import TransferHistoryOper
 from app.helper.nfo import NfoReader
@@ -295,18 +296,21 @@ class LibraryScraper(_PluginBase):
                 continue
             # 开始刮削目录
             if sub_path.is_dir():
+                # 判断目录是不是媒体目录
+                dir_meta = MetaInfo(sub_path.name)
+                if not dir_meta.name or not dir_meta.year:
+                    logger.warn(f"{sub_path} 可能不是媒体目录,请检查刮削目录配置,跳过 ...")
+                    continue
                 logger.info(f"开始刮削目录:{sub_path} ...")
-                self.__scrape_dir(sub_path)
+                self.__scrape_dir(path=sub_path, dir_meta=dir_meta)
                 logger.info(f"目录 {sub_path} 刮削完成")
         logger.info(f"媒体库 {path} 刮削完成")

-    def __scrape_dir(self, path: Path):
+    def __scrape_dir(self, path: Path, dir_meta: MetaBase):
         """
         削刮一个目录,该目录必须是媒体文件目录
         """
-        # 目录识别
-        dir_meta = MetaInfo(path.name)
         # 媒体信息
         mediainfo = None
@@ -318,14 +322,15 @@ class LibraryScraper(_PluginBase):
                 return

             # 识别元数据
-            meta_info = MetaInfo(file.name)
+            meta_info = MetaInfo(file.stem)
             # 合并
             meta_info.merge(dir_meta)
             # 是否刮削
             scrap_metadata = settings.SCRAP_METADATA
-            # 识别媒体信息
-            if not mediainfo:
+            # 没有媒体信息或者名字出现变化时,需要重新识别
+            if not mediainfo \
+                    or meta_info.name != dir_meta.name:
                 # 优先读取本地nfo文件
                 tmdbid = None
                 if meta_info.type == MediaType.MOVIE:
View File

@@ -31,3 +31,32 @@ class SiteUtils:
                 return True
         return False
+
+    @classmethod
+    def is_checkin(cls, html_text: str) -> bool:
+        """
+        判断站点是否已经签到
+        :return True已签到 False未签到
+        """
+        html = etree.HTML(html_text)
+        if not html:
+            return False
+        # 站点签到支持的识别XPATH
+        xpaths = [
+            '//a[@id="signed"]',
+            '//a[contains(@href, "attendance")]',
+            '//a[contains(text(), "签到")]',
+            '//a/b[contains(text(), "签 到")]',
+            '//span[@id="sign_in"]/a',
+            '//a[contains(@href, "addbonus")]',
+            '//input[@class="dt_button"][contains(@value, "打卡")]',
+            '//a[contains(@href, "sign_in")]',
+            '//a[contains(@onclick, "do_signin")]',
+            '//a[@id="do-attendance"]',
+            '//shark-icon-button[@href="attendance.php"]'
+        ]
+        for xpath in xpaths:
+            if html.xpath(xpath):
+                return False
+        return True
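Note the inverted logic in `is_checkin`: finding a sign-in control on the page means the user has not signed in yet, so any XPath hit returns `False`, and only a page with no sign-in control returns `True`. A rough stdlib approximation of the same idea (regex markers instead of XPath, purely illustrative):

```python
import re

# Markers whose presence indicates the sign-in action is still available.
# These are sample patterns, not the plugin's actual XPath list.
SIGNIN_MARKERS = [
    r'id="signed"',
    r'href="[^"]*attendance',
    r'>签到<',
]

def looks_signed_in(html_text: str) -> bool:
    """Return True when no sign-in control is found on the page."""
    for marker in SIGNIN_MARKERS:
        if re.search(marker, html_text):
            return False  # a sign-in link is still present: not signed in
    return True

print(looks_signed_in('<a href="attendance.php">签到</a>'))  # False
print(looks_signed_in('<p>今日已签到</p>'))                    # True
```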

File diff suppressed because one or more lines are too long

View File

@@ -1 +1 @@
-APP_VERSION = 'v1.2.0'
+APP_VERSION = 'v1.2.1'