Mirror of https://github.com/DrizzleTime/Foxel.git (synced 2026-05-08 01:03:20 +08:00)
Compare commits
35 Commits
| Author | SHA1 | Date |
|---|---|---|
|  | 51326dea08 |  |
|  | ac6d8ff7ad |  |
|  | 029aa2574d |  |
|  | eeb0e6aa70 |  |
|  | d1ceb7ddba |  |
|  | 63b54458e9 |  |
|  | f7e6815265 |  |
|  | 4d6e0b86ad |  |
|  | 77a4749fec |  |
|  | 8eaa025f7e |  |
|  | 11799cd97c |  |
|  | c14224827d |  |
|  | 130a304f25 |  |
|  | bc595310a6 |  |
|  | bf83187d8c |  |
|  | 02cc31d296 |  |
|  | c66ca181c6 |  |
|  | 5815e6a545 |  |
|  | 7cf335ab19 |  |
|  | 36365d7410 |  |
|  | 90ddeef027 |  |
|  | 8ac3acebb4 |  |
|  | 5625f2d8bf |  |
|  | 7f33eb85ba |  |
|  | 0da64b8d9c |  |
|  | 7caa602d93 |  |
|  | a4af9475ef |  |
|  | ee6e570ccb |  |
|  | ce45fca8bd |  |
|  | 77058f3535 |  |
|  | 738f3c9718 |  |
|  | f3d9220569 |  |
|  | da41393db3 |  |
|  | 0399011406 |  |
|  | 00462f2259 |  |
.github/ISSUE_TEMPLATE/bug_report.yml (vendored, new file, 75 lines)

```yaml
name: Bug Report / 缺陷报告
description: Report reproducible defects with clear context / 请提供可复现的缺陷信息
title: "[Bug] "
labels:
  - bug
body:
  - type: markdown
    attributes:
      value: |
        Thanks for helping us improve Foxel! / 感谢你帮助改进 Foxel!
        Please confirm the checklist below before filing. / 在提交前请确认以下事项。
  - type: checkboxes
    id: validations
    attributes:
      label: Pre-flight Check / 提交前检查
      options:
        - label: I searched existing issues and docs / 我已搜索现有 Issue 与文档
          required: true
        - label: This is not a question or feature request / 这不是问题咨询或功能需求
          required: true
  - type: textarea
    id: summary
    attributes:
      label: Bug Summary / 缺陷摘要
      description: Briefly describe what is wrong / 简要说明出现了什么问题
      placeholder: e.g. Upload fails with 500 error / 例如:上传时报 500 错误
    validations:
      required: true
  - type: textarea
    id: steps
    attributes:
      label: Steps to Reproduce / 复现步骤
      description: List numbered steps to trigger the bug / 列出触发问题的步骤
      placeholder: |
        1. ...
        2. ...
        3. ...
    validations:
      required: true
  - type: textarea
    id: expected
    attributes:
      label: Expected Behavior / 预期行为
      description: What should happen instead? / 期望看到什么结果?
    validations:
      required: true
  - type: textarea
    id: actual
    attributes:
      label: Actual Behavior / 实际行为
      description: What actually happens? Include messages or screenshots / 实际发生了什么?可附报错或截图
    validations:
      required: true
  - type: input
    id: version
    attributes:
      label: Version / 版本信息
      description: Git commit, tag, or build number / 提供 Git 提交、标签或构建号
    validations:
      required: false
  - type: textarea
    id: environment
    attributes:
      label: Environment / 运行环境
      description: OS, browser, API server config, etc. / 操作系统、浏览器、服务端配置等
    validations:
      required: false
  - type: textarea
    id: logs
    attributes:
      label: Logs & Attachments / 日志与附件
      description: Paste relevant logs, stack traces, screenshots / 粘贴相关日志、堆栈或截图
      render: shell
    validations:
      required: false
```
.github/ISSUE_TEMPLATE/feature_request.yml (vendored, new file, 56 lines)

```yaml
name: Feature Request / 功能需求
description: Suggest enhancements or new capabilities / 提出改进或新增能力
title: "[Feature] "
labels:
  - enhancement
body:
  - type: markdown
    attributes:
      value: |
        Tell us about your idea! / 欢迎分享你的想法!
        Please complete the sections below so we can evaluate it quickly. / 请完整填写以下信息,便于快速评估。
  - type: checkboxes
    id: prechecks
    attributes:
      label: Pre-flight Check / 提交前检查
      options:
        - label: I searched existing issues and roadmap / 我已搜索现有 Issue 与路线图
          required: true
        - label: This is not a bug report or question / 这不是缺陷或问题咨询
          required: true
  - type: textarea
    id: summary
    attributes:
      label: Feature Summary / 功能概述
      description: What do you want to build? / 希望新增什么能力?
      placeholder: e.g. Support sharing download links / 例如:支持分享下载链接
    validations:
      required: true
  - type: textarea
    id: motivation
    attributes:
      label: Motivation / 背景与价值
      description: Why is this feature important? Who benefits? / 为什么重要?受益者是谁?
    validations:
      required: true
  - type: textarea
    id: scope
    attributes:
      label: Proposed Solution / 建议方案
      description: Outline how the feature might work, including API or UI hints / 描述可能的实现方式,包含 API 或 UI 提示
    validations:
      required: false
  - type: textarea
    id: alternatives
    attributes:
      label: Alternatives / 可选方案
      description: List any alternatives considered / 如有考虑过其他方案请列出
    validations:
      required: false
  - type: textarea
    id: extra
    attributes:
      label: Additional Context / 补充信息
      description: Diagrams, sketches, links, constraints, etc. / 可附上草图、链接或约束
    validations:
      required: false
```
.github/ISSUE_TEMPLATE/question.yml (vendored, new file, 42 lines)

```yaml
name: Question / 问题咨询
description: Ask about usage, configuration, or clarification / 用于使用、配置或澄清问题
title: "[Question] "
labels:
  - question
body:
  - type: markdown
    attributes:
      value: |
        Need help? You're in the right place. / 需要帮助?请按以下提示填写。
        Check the docs before filing. / 提交前请先查阅文档。
  - type: checkboxes
    id: prechecks
    attributes:
      label: Pre-flight Check / 提交前检查
      options:
        - label: I searched existing issues and discussions / 我已搜索现有 Issue 和讨论
          required: true
        - label: I read the relevant documentation / 我已阅读相关文档
          required: true
  - type: textarea
    id: question
    attributes:
      label: Question Details / 问题详情
      description: What do you need help with? Be specific. / 具体说明需要帮助的内容
      placeholder: Describe the scenario, expectation, and blockers / 说明场景、期望结果与阻碍
    validations:
      required: true
  - type: textarea
    id: tried
    attributes:
      label: What You Tried / 已尝试方案
      description: List commands, configs, or steps attempted / 列出尝试过的命令、配置或步骤
    validations:
      required: false
  - type: textarea
    id: context
    attributes:
      label: Additional Context / 补充信息
      description: Environment details, logs, screenshots / 可补充运行环境、日志或截图
    validations:
      required: false
```
.github/workflows/docker.yml (vendored, 4 lines changed)

```diff
@@ -2,6 +2,8 @@ name: Build and Push Docker image
 
 on:
   push:
     branches:
       - main
+    tags:
+      - 'v*.*.*'
   workflow_dispatch:
@@ -48,4 +50,4 @@ jobs:
           context: .
           platforms: linux/amd64,linux/arm64
           push: true
-          tags: ${{ env.DOCKER_TAGS }}
+          tags: ${{ env.DOCKER_TAGS }}
```
Dockerfile

```diff
@@ -13,7 +13,9 @@ FROM python:3.13-slim
 
 WORKDIR /app
 
-RUN apt-get update && apt-get install -y nginx git && rm -rf /var/lib/apt/lists/*
+RUN apt-get update \
+    && apt-get install -y --no-install-recommends nginx git ffmpeg \
+    && rm -rf /var/lib/apt/lists/*
 
 RUN pip install uv
 COPY pyproject.toml uv.lock ./
@@ -35,4 +37,4 @@ EXPOSE 80
 COPY entrypoint.sh /entrypoint.sh
 RUN chmod +x /entrypoint.sh
 
-CMD ["/entrypoint.sh"]
+CMD ["/entrypoint.sh"]
```
```diff
@@ -1,6 +1,6 @@
 from fastapi import FastAPI
 
-from .routes import adapters, virtual_fs, auth, config, processors, tasks, logs, share, backup, search, vector_db, offline_downloads
+from .routes import adapters, virtual_fs, auth, config, processors, tasks, logs, share, backup, search, vector_db, offline_downloads, ai_providers
 from .routes import webdav
 from .routes import plugins
 
@@ -18,6 +18,7 @@ def include_routers(app: FastAPI):
     app.include_router(share.public_router)
     app.include_router(backup.router)
     app.include_router(vector_db.router)
+    app.include_router(ai_providers.router)
     app.include_router(plugins.router)
     app.include_router(webdav.router)
     app.include_router(offline_downloads.router)
```
api/routes/ai_providers.py (new file, 177 lines)

```python
from typing import Annotated, Dict, Optional

import httpx
from fastapi import APIRouter, Depends, HTTPException, Path

from api.response import success
from schemas.ai import (
    AIDefaultsUpdate,
    AIModelCreate,
    AIModelUpdate,
    AIProviderCreate,
    AIProviderUpdate,
)
from services.ai_providers import AIProviderService
from services.auth import User, get_current_active_user
from services.vector_db import VectorDBService


router = APIRouter(prefix="/api/ai", tags=["ai"])
service = AIProviderService()


@router.get("/providers")
async def list_providers(
    current_user: Annotated[User, Depends(get_current_active_user)]
):
    providers = await service.list_providers()
    return success({"providers": providers})


@router.post("/providers")
async def create_provider(
    payload: AIProviderCreate,
    current_user: Annotated[User, Depends(get_current_active_user)]
):
    provider = await service.create_provider(payload.dict())
    return success(provider)


@router.get("/providers/{provider_id}")
async def get_provider(
    provider_id: Annotated[int, Path(..., gt=0)],
    current_user: Annotated[User, Depends(get_current_active_user)],
):
    provider = await service.get_provider(provider_id, with_models=True)
    return success(provider)


@router.put("/providers/{provider_id}")
async def update_provider(
    provider_id: Annotated[int, Path(..., gt=0)],
    payload: AIProviderUpdate,
    current_user: Annotated[User, Depends(get_current_active_user)],
):
    data = {k: v for k, v in payload.dict().items() if v is not None}
    if not data:
        raise HTTPException(status_code=400, detail="No fields to update")
    provider = await service.update_provider(provider_id, data)
    return success(provider)


@router.delete("/providers/{provider_id}")
async def delete_provider(
    provider_id: Annotated[int, Path(..., gt=0)],
    current_user: Annotated[User, Depends(get_current_active_user)],
):
    await service.delete_provider(provider_id)
    return success({"id": provider_id})


@router.post("/providers/{provider_id}/sync-models")
async def sync_models(
    provider_id: Annotated[int, Path(..., gt=0)],
    current_user: Annotated[User, Depends(get_current_active_user)],
):
    try:
        result = await service.sync_models(provider_id)
    except (httpx.RequestError, httpx.HTTPStatusError) as exc:
        raise HTTPException(status_code=502, detail=f"Failed to synchronize models: {exc}") from exc
    except ValueError as exc:
        raise HTTPException(status_code=400, detail=str(exc)) from exc

    return success(result)


@router.get("/providers/{provider_id}/remote-models")
async def fetch_remote_models(
    provider_id: Annotated[int, Path(..., gt=0)],
    current_user: Annotated[User, Depends(get_current_active_user)],
):
    try:
        models = await service.fetch_remote_models(provider_id)
    except (httpx.RequestError, httpx.HTTPStatusError) as exc:
        raise HTTPException(status_code=502, detail=f"Failed to pull models: {exc}") from exc
    except ValueError as exc:
        raise HTTPException(status_code=400, detail=str(exc)) from exc

    return success({"models": models})


@router.get("/providers/{provider_id}/models")
async def list_models(
    provider_id: Annotated[int, Path(..., gt=0)],
    current_user: Annotated[User, Depends(get_current_active_user)],
):
    models = await service.list_models(provider_id)
    return success({"models": models})


@router.post("/providers/{provider_id}/models")
async def create_model(
    provider_id: Annotated[int, Path(..., gt=0)],
    payload: AIModelCreate,
    current_user: Annotated[User, Depends(get_current_active_user)],
):
    model = await service.create_model(provider_id, payload.dict())
    return success(model)


@router.put("/models/{model_id}")
async def update_model(
    model_id: Annotated[int, Path(..., gt=0)],
    payload: AIModelUpdate,
    current_user: Annotated[User, Depends(get_current_active_user)],
):
    data = {k: v for k, v in payload.dict().items() if v is not None}
    if not data:
        raise HTTPException(status_code=400, detail="No fields to update")
    model = await service.update_model(model_id, data)
    return success(model)


@router.delete("/models/{model_id}")
async def delete_model(
    model_id: Annotated[int, Path(..., gt=0)],
    current_user: Annotated[User, Depends(get_current_active_user)],
):
    await service.delete_model(model_id)
    return success({"id": model_id})


def _get_embedding_dimension(entry: Optional[Dict]) -> Optional[int]:
    if not entry:
        return None
    value = entry.get("embedding_dimensions")
    return int(value) if value is not None else None


@router.get("/defaults")
async def get_defaults(
    current_user: Annotated[User, Depends(get_current_active_user)],
):
    defaults = await service.get_default_models()
    return success(defaults)


@router.put("/defaults")
async def update_defaults(
    payload: AIDefaultsUpdate,
    current_user: Annotated[User, Depends(get_current_active_user)],
):
    previous = await service.get_default_models()
    try:
        updated = await service.set_default_models(payload.as_mapping())
    except ValueError as exc:
        raise HTTPException(status_code=400, detail=str(exc)) from exc

    prev_dim = _get_embedding_dimension(previous.get("embedding"))
    next_dim = _get_embedding_dimension(updated.get("embedding"))

    if prev_dim and next_dim and prev_dim != next_dim:
        try:
            await VectorDBService().clear_all_data()
        except Exception as exc:  # noqa: BLE001
            raise HTTPException(status_code=500, detail=f"Failed to clear vector database: {exc}") from exc

    return success(updated)
```
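The defaults endpoint compares the embedding dimension of the old and new default embedding model and wipes the vector store only when the dimension actually changes. A minimal client sketch of that flow, assuming a local dev server, bearer-token auth, and model id 7 as the new default (all illustrative, not taken from this diff):

```python
import httpx

headers = {"Authorization": "Bearer <token>"}  # auth scheme assumed

with httpx.Client(base_url="http://localhost:8000", headers=headers) as client:
    # Read current defaults, then point the "embedding" ability at model 7.
    before = client.get("/api/ai/defaults").json()
    resp = client.put("/api/ai/defaults", json={"embedding": 7})
    resp.raise_for_status()
    # If the old and new models report different embedding_dimensions, the
    # server has already cleared the vector store, so previously indexed
    # files will need to be re-embedded.
```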
```diff
@@ -66,7 +66,7 @@ async def get_me(current_user: Annotated[User, Depends(get_current_active_user)]
     """
     email = (current_user.email or "").strip().lower()
     md5_hash = hashlib.md5(email.encode("utf-8")).hexdigest()
-    gravatar_url = f"https://www.gravatar.com/avatar/{md5_hash}?s=64&d=identicon"
+    gravatar_url = f"https://cn.cravatar.com/avatar/{md5_hash}?s=64&d=identicon"
     return success({
         "id": current_user.id,
         "username": current_user.username,
```
```diff
@@ -1,11 +1,10 @@
 import httpx
 import time
-from fastapi import APIRouter, Depends, Form, HTTPException
+from fastapi import APIRouter, Depends, Form
 from typing import Annotated
 from services.config import ConfigCenter, VERSION
 from services.auth import get_current_active_user, User, has_users
 from api.response import success
-from services.vector_db import VectorDBService
 router = APIRouter(prefix="/api/config", tags=["config"])
 
 
@@ -24,27 +23,8 @@ async def set_config(
     key: str = Form(...),
     value: str = Form(...)
 ):
-    original_value = await ConfigCenter.get(key)
-    value_to_save = value
-    if key == "AI_EMBED_DIM":
-        try:
-            parsed_value = int(value)
-        except (TypeError, ValueError):
-            raise HTTPException(status_code=400, detail="AI_EMBED_DIM must be an integer")
-        if parsed_value <= 0:
-            raise HTTPException(status_code=400, detail="AI_EMBED_DIM must be greater than zero")
-        value_to_save = str(parsed_value)
-
-    await ConfigCenter.set(key, value_to_save)
-
-    if key == "AI_EMBED_DIM" and str(original_value) != value_to_save:
-        try:
-            service = VectorDBService()
-            await service.clear_all_data()
-        except Exception as exc:
-            raise HTTPException(status_code=500, detail=f"Failed to clear vector database: {exc}")
-
-    return success({"key": key, "value": value_to_save})
+    await ConfigCenter.set(key, value)
+    return success({"key": key, "value": value})
 
 
 @router.get("/all")
```
```diff
@@ -3,6 +3,8 @@ from fastapi import APIRouter, Depends, Body, HTTPException
 from fastapi.concurrency import run_in_threadpool
 from typing import Annotated
 from services.processors.registry import (
+    get,
+    get_config_schema,
     get_config_schemas,
     get_module_path,
     reload_processors,
@@ -11,7 +13,8 @@ from services.task_queue import task_queue_service
 from services.auth import get_current_active_user, User
 from api.response import success
 from pydantic import BaseModel
-from services.virtual_fs import path_is_directory
+from services.virtual_fs import path_is_directory, resolve_adapter_and_rel
+from typing import List, Optional, Tuple
 
 router = APIRouter(prefix="/api/processors", tags=["processors"])
 
@@ -42,6 +45,15 @@ class ProcessRequest(BaseModel):
     overwrite: bool = False
 
 
+class ProcessDirectoryRequest(BaseModel):
+    path: str
+    processor_type: str
+    config: dict
+    overwrite: bool = True
+    max_depth: Optional[int] = None
+    suffix: Optional[str] = None
+
+
 class UpdateSourceRequest(BaseModel):
     source: str
 
@@ -69,6 +81,128 @@ async def process_file_with_processor(
     return success({"task_id": task.id})
 
 
+@router.post("/process-directory")
+async def process_directory_with_processor(
+    current_user: Annotated[User, Depends(get_current_active_user)],
+    req: ProcessDirectoryRequest = Body(...)
+):
+    if req.max_depth is not None and req.max_depth < 0:
+        raise HTTPException(400, detail="max_depth must be >= 0")
+
+    is_dir = await path_is_directory(req.path)
+    if not is_dir:
+        raise HTTPException(400, detail="Path must be a directory")
+
+    schema = get_config_schema(req.processor_type)
+    _processor = get(req.processor_type)
+    if not schema or not _processor:
+        raise HTTPException(404, detail="Processor not found")
+
+    produces_file = bool(schema.get("produces_file"))
+    raw_suffix = req.suffix if req.suffix is not None else None
+    if raw_suffix is not None and raw_suffix.strip() == "":
+        raw_suffix = None
+    suffix = raw_suffix
+    overwrite = req.overwrite
+
+    if produces_file:
+        if not overwrite and not suffix:
+            raise HTTPException(400, detail="Suffix is required when not overwriting files")
+    else:
+        overwrite = False
+        suffix = None
+
+    supported_exts = schema.get("supported_exts") or []
+    allowed_exts = {
+        ext.lower().lstrip('.')
+        for ext in supported_exts
+        if isinstance(ext, str)
+    }
+
+    def matches_extension(file_rel: str) -> bool:
+        if not allowed_exts:
+            return True
+        if '.' not in file_rel:
+            return '' in allowed_exts
+        ext = file_rel.rsplit('.', 1)[-1].lower()
+        return ext in allowed_exts or f'.{ext}' in allowed_exts
+
+    adapter_instance, adapter_model, root, rel = await resolve_adapter_and_rel(req.path)
+    rel = rel.rstrip('/')
+
+    list_dir = getattr(adapter_instance, "list_dir", None)
+    if not callable(list_dir):
+        raise HTTPException(501, detail="Adapter does not implement list_dir")
+
+    def build_absolute_path(mount_path: str, rel_path: str) -> str:
+        rel_norm = rel_path.lstrip('/')
+        mount_norm = mount_path.rstrip('/')
+        if not mount_norm:
+            return '/' + rel_norm if rel_norm else '/'
+        return f"{mount_norm}/{rel_norm}" if rel_norm else mount_norm
+
+    def apply_suffix(path_str: str, suffix_str: str) -> str:
+        path_obj = Path(path_str)
+        name = path_obj.name
+        if not name:
+            return path_str
+        if '.' in name:
+            base, ext = name.rsplit('.', 1)
+            new_name = f"{base}{suffix_str}.{ext}"
+        else:
+            new_name = f"{name}{suffix_str}"
+        return str(path_obj.with_name(new_name))
+
+    scheduled_tasks: List[str] = []
+    stack: List[Tuple[str, int]] = [(rel, 0)]
+    page_size = 200
+
+    while stack:
+        current_rel, depth = stack.pop()
+        page = 1
+        while True:
+            entries, total = await list_dir(root, current_rel, page, page_size, "name", "asc")
+            entries = entries or []
+            if not entries and (total or 0) == 0:
+                break
+
+            for entry in entries:
+                name = entry.get("name")
+                if not name:
+                    continue
+                child_rel = f"{current_rel}/{name}" if current_rel else name
+                if entry.get("is_dir"):
+                    if req.max_depth is None or depth < req.max_depth:
+                        stack.append((child_rel.rstrip('/'), depth + 1))
+                    continue
+                if not matches_extension(child_rel):
+                    continue
+                absolute_path = build_absolute_path(adapter_model.path, child_rel)
+                save_to = None
+                if produces_file and not overwrite and suffix:
+                    save_to = apply_suffix(absolute_path, suffix)
+                task = await task_queue_service.add_task(
+                    "process_file",
+                    {
+                        "path": absolute_path,
+                        "processor_type": req.processor_type,
+                        "config": req.config,
+                        "save_to": save_to,
+                        "overwrite": overwrite,
+                    },
+                )
+                scheduled_tasks.append(task.id)
+
+            if total is None or page * page_size >= total:
+                break
+            page += 1
+
+    return success({
+        "task_ids": scheduled_tasks,
+        "scheduled": len(scheduled_tasks),
+    })
+
+
 @router.get("/source/{processor_type}")
 async def get_processor_source(
     processor_type: str,
```
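process-directory walks the mount iteratively (a stack of (rel, depth) pairs, paging list_dir 200 entries at a time) and enqueues one process_file task per matching file, so a single request can fan out into many queued tasks. A hedged request sketch; the server address, token, and the "thumbnail" processor name are illustrative:

```python
import httpx

# Enqueue every supported file directly under /photos (max_depth=0 stops
# recursion at the top level); results are written alongside the originals
# with a "_thumb" suffix instead of overwriting them.
payload = {
    "path": "/photos",
    "processor_type": "thumbnail",   # illustrative processor name
    "config": {},
    "overwrite": False,
    "max_depth": 0,
    "suffix": "_thumb",
}
resp = httpx.post(
    "http://localhost:8000/api/processors/process-directory",
    json=payload,
    headers={"Authorization": "Bearer <token>"},  # auth scheme assumed
)
resp.raise_for_status()
# The body carries the scheduled task ids ("task_ids") and their count
# ("scheduled"); the exact envelope produced by success() is assumed here.
```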
```diff
@@ -1,4 +1,7 @@
+from typing import Any, Dict, List, Tuple
+
 from fastapi import APIRouter, Depends, Query
+
 from schemas.fs import SearchResultItem
 from services.auth import get_current_active_user, User
 from services.ai import get_text_embedding
@@ -6,24 +9,96 @@ from services.vector_db import VectorDBService
 
 router = APIRouter(prefix="/api/search", tags=["search"])
 
-async def search_files_by_vector(q: str, top_k: int):
-    embedding = await get_text_embedding(q)
-    vector_db = VectorDBService()
-    results = await vector_db.search_vectors("vector_collection", embedding, top_k)
-    items = [
-        SearchResultItem(id=res["id"], path=res["entity"]["path"], score=res["distance"])
-        for res in results[0]
-    ]
-    return {"items": items, "query": q}
-
-async def search_files_by_name(q: str, top_k: int):
-    vector_db = VectorDBService()
-    results = await vector_db.search_by_path("vector_collection", q, top_k)
-    items = [
-        SearchResultItem(id=idx, path=res["entity"]["path"], score=res["distance"])
-        for idx, res in enumerate(results[0])
-    ]
-    return {"items": items, "query": q}
+def _normalize_result(raw: Dict[str, Any], source: str, fallback_score: float = 0.0) -> SearchResultItem:
+    entity = dict(raw.get("entity") or {})
+    source_path = entity.get("source_path")
+    stored_path = entity.get("path")
+    path = source_path or stored_path or ""
+    chunk_id_value = entity.get("chunk_id")
+    chunk_id = str(chunk_id_value) if chunk_id_value is not None else None
+    snippet = entity.get("text") or entity.get("description") or entity.get("name")
+    mime = entity.get("mime")
+    start_offset = entity.get("start_offset")
+    end_offset = entity.get("end_offset")
+    raw_score = raw.get("distance")
+    score = float(raw_score) if raw_score is not None else fallback_score
+
+    metadata = {
+        "retrieval_source": source,
+        "raw_distance": raw_score,
+    }
+    if stored_path and stored_path != path:
+        metadata["stored_path"] = stored_path
+    vector_id = entity.get("vector_id")
+    if vector_id:
+        metadata["vector_id"] = vector_id
+
+    return SearchResultItem(
+        id=str(raw.get("id")),
+        path=path,
+        score=score,
+        chunk_id=chunk_id,
+        snippet=snippet,
+        mime=mime,
+        source_type=entity.get("type") or source,
+        start_offset=start_offset,
+        end_offset=end_offset,
+        metadata=metadata,
+    )
+
+
+async def _vector_search(query: str, top_k: int) -> List[SearchResultItem]:
+    vector_db = VectorDBService()
+    try:
+        embedding = await get_text_embedding(query)
+    except Exception:
+        embedding = None
+    if not embedding:
+        return []
+
+    try:
+        raw_results = await vector_db.search_vectors("vector_collection", embedding, max(top_k, 10))
+    except Exception:
+        return []
+
+    results: List[SearchResultItem] = []
+    for bucket in raw_results or []:
+        for record in bucket or []:
+            results.append(_normalize_result(record, "vector"))
+    return results
+
+
+async def _filename_search(query: str, page: int, page_size: int) -> Tuple[List[SearchResultItem], bool]:
+    vector_db = VectorDBService()
+    limit = max(page * page_size + 1, page_size * (page + 2))
+    limit = min(limit, 2000)
+    try:
+        raw_results = await vector_db.search_by_path("vector_collection", query, limit)
+    except Exception:
+        return [], False
+
+    records = raw_results[0] if raw_results else []
+    deduped: List[SearchResultItem] = []
+    seen_paths: set[str] = set()
+    for record in records or []:
+        item = _normalize_result(record, "filename", fallback_score=1.0)
+        stored_path = item.metadata.get("stored_path") if item.metadata else None
+        key = item.path or stored_path or ""
+        if key in seen_paths:
+            continue
+        seen_paths.add(key)
+        deduped.append(item)
+
+    start = max(page - 1, 0) * page_size
+    end = start + page_size
+    page_items = deduped[start:end]
+    for offset, item in enumerate(page_items):
+        if item.metadata is None:
+            item.metadata = {}
+        item.metadata.setdefault("retrieval_rank", start + offset)
+    has_more = len(deduped) > end
+    return page_items, has_more
 
 
 @router.get("")
@@ -31,11 +106,32 @@ async def search_files(
     q: str = Query(..., description="搜索查询"),
     top_k: int = Query(10, description="返回结果数量"),
     mode: str = Query("vector", description="搜索模式: 'vector' 或 'filename'"),
+    page: int = Query(1, description="分页页码,仅在文件名搜索模式下生效"),
+    page_size: int = Query(10, description="分页大小,仅在文件名搜索模式下生效"),
     user: User = Depends(get_current_active_user),
 ):
+    if not q.strip():
+        return {"items": [], "query": q}
+
+    top_k = max(top_k, 1)
+    page = max(page, 1)
+    page_size = max(min(page_size, 100), 1)
+
     if mode == "vector":
-        return await search_files_by_vector(q, top_k)
+        items = (await _vector_search(q, top_k))[:top_k]
     elif mode == "filename":
-        return await search_files_by_name(q, top_k)
+        items, has_more = await _filename_search(q, page, page_size)
+        return {
+            "items": items,
+            "query": q,
+            "mode": mode,
+            "pagination": {
+                "page": page,
+                "page_size": page_size,
+                "has_more": has_more,
+            },
+        }
     else:
-        return {"items": [], "query": q, "error": "Invalid search mode"}
+        items = (await _vector_search(q, top_k))[:top_k]
+
+    return {"items": items, "query": q, "mode": mode}
```
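The rewritten endpoint keeps vector mode as a flat top-k list (and falls back to vector search for unknown modes), while filename mode pages through path matches deduplicated by path and reports has_more. A sketch of both calls, assuming a local server and bearer auth:

```python
import httpx

headers = {"Authorization": "Bearer <token>"}  # auth scheme assumed

# Semantic search: the query is embedded, then "vector_collection" is searched.
r = httpx.get(
    "http://localhost:8000/api/search",
    params={"q": "vacation photos", "mode": "vector", "top_k": 5},
    headers=headers,
)

# Filename search: server-side pagination with per-path deduplication.
r = httpx.get(
    "http://localhost:8000/api/search",
    params={"q": "report", "mode": "filename", "page": 2, "page_size": 20},
    headers=headers,
)
print(r.json()["pagination"]["has_more"])  # True if another page exists
```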
```diff
@@ -24,8 +24,6 @@ class VectorDBConfigPayload(BaseModel):
 
 @router.post("/clear-all", summary="清空向量数据库")
 async def clear_vector_db(user: UserAccount = Depends(get_current_active_user)):
-    if user.username != 'admin':
-        raise HTTPException(status_code=403, detail="仅管理员可操作")
     try:
         service = VectorDBService()
         await service.clear_all_data()
@@ -36,8 +34,6 @@ async def clear_vector_db(user: UserAccount = Depends(get_current_active_user)):
 
 @router.get("/stats", summary="获取向量数据库统计")
 async def get_vector_db_stats(user: UserAccount = Depends(get_current_active_user)):
-    if user.username != 'admin':
-        raise HTTPException(status_code=403, detail="仅管理员可操作")
     try:
         service = VectorDBService()
         data = await service.get_all_stats()
@@ -48,15 +44,11 @@ async def get_vector_db_stats(user: UserAccount = Depends(get_current_active_use
 
 @router.get("/providers", summary="列出可用向量数据库提供者")
 async def list_vector_providers(user: UserAccount = Depends(get_current_active_user)):
-    if user.username != 'admin':
-        raise HTTPException(status_code=403, detail="仅管理员可操作")
     return success(list_providers())
 
 
 @router.get("/config", summary="获取当前向量数据库配置")
 async def get_vector_db_config(user: UserAccount = Depends(get_current_active_user)):
-    if user.username != 'admin':
-        raise HTTPException(status_code=403, detail="仅管理员可操作")
     service = VectorDBService()
     data = await service.current_provider()
     return success(data)
@@ -64,18 +56,17 @@ async def get_vector_db_config(user: UserAccount = Depends(get_current_active_us
 
 @router.post("/config", summary="更新向量数据库配置")
 async def update_vector_db_config(payload: VectorDBConfigPayload, user: UserAccount = Depends(get_current_active_user)):
-    if user.username != 'admin':
-        raise HTTPException(status_code=403, detail="仅管理员可操作")
-
     entry = get_provider_entry(payload.type)
     if not entry:
-        raise HTTPException(status_code=400, detail=f"未知的向量数据库类型: {payload.type}")
+        raise HTTPException(
+            status_code=400, detail=f"未知的向量数据库类型: {payload.type}")
     if not entry.get("enabled", True):
         raise HTTPException(status_code=400, detail="该向量数据库类型暂不可用")
 
     provider_cls = get_provider_class(payload.type)
     if not provider_cls:
-        raise HTTPException(status_code=400, detail=f"未找到类型 {payload.type} 对应的实现")
+        raise HTTPException(
+            status_code=400, detail=f"未找到类型 {payload.type} 对应的实现")
 
     # Try to establish a connection first to make sure the config is valid
     test_provider = provider_cls(payload.config)
```
```diff
@@ -15,8 +15,9 @@ from services.virtual_fs import (
     stream_file,
     generate_temp_link_token,
     verify_temp_link_token,
+    maybe_redirect_download,
 )
-from services.thumbnail import is_image_filename, get_or_create_thumb, is_raw_filename
+from services.thumbnail import is_image_filename, get_or_create_thumb, is_raw_filename, is_video_filename
 from schemas import MkdirRequest, MoveRequest
 from api.response import success
 from services.config import ConfigCenter
@@ -50,6 +51,12 @@ async def get_file(
         except Exception as e:
             raise HTTPException(500, detail=f"RAW file processing failed: {e}")
 
+    adapter_instance, adapter_model, root, rel = await resolve_adapter_and_rel(full_path)
+
+    redirect_response = await maybe_redirect_download(adapter_instance, adapter_model, root, rel)
+    if redirect_response is not None:
+        return redirect_response
+
     try:
         content = await read_file(full_path)
     except FileNotFoundError:
@@ -114,8 +121,8 @@ async def get_thumb(
     adapter, mount, root, rel = await resolve_adapter_and_rel(full_path)
     if not rel or rel.endswith('/'):
         raise HTTPException(400, detail="Not a file")
-    if not is_image_filename(rel):
-        raise HTTPException(404, detail="Not an image")
+    if not (is_image_filename(rel) or is_video_filename(rel)):
+        raise HTTPException(404, detail="Not an image or video")
     # type: ignore
     data, mime, key = await get_or_create_thumb(adapter, mount.id, root, rel, w, h, fit)
     headers = {
```
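get_thumb now serves thumbnails for video files as well as images (ffmpeg was added to the Docker image in this same set of changes). A hedged sketch; only the w, h, and fit parameters come from the visible handler signature, and the URL shape is assumed:

```python
import httpx

# Before this change, any non-image name got 404 "Not an image"; now a
# video such as clip.mp4 is accepted and thumbnailed server-side.
r = httpx.get(
    "http://localhost:8000/api/fs/thumb/media/clip.mp4",  # route shape assumed
    params={"w": 256, "h": 256, "fit": "cover"},
    headers={"Authorization": "Bearer <token>"},          # auth scheme assumed
)
```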
entrypoint.sh

```diff
@@ -2,4 +2,4 @@
 set -e
 python migrate/run.py
 nginx -g 'daemon off;' &
-exec gunicorn -k uvicorn.workers.UvicornWorker -w 2 -b 0.0.0.0:8000 main:app
+exec gunicorn -k uvicorn.workers.UvicornWorker -w 1 -b 0.0.0.0:8000 main:app
```
```diff
@@ -36,6 +36,81 @@ class Configuration(Model):
         table = "configurations"
 
 
+class AIProvider(Model):
+    id = fields.IntField(pk=True)
+    name = fields.CharField(max_length=100)
+    identifier = fields.CharField(max_length=100, unique=True)
+    provider_type = fields.CharField(max_length=50, null=True)
+    api_format = fields.CharField(max_length=20)
+    base_url = fields.CharField(max_length=512, null=True)
+    api_key = fields.CharField(max_length=512, null=True)
+    logo_url = fields.CharField(max_length=512, null=True)
+    extra_config = fields.JSONField(null=True)
+    created_at = fields.DatetimeField(auto_now_add=True)
+    updated_at = fields.DatetimeField(auto_now=True)
+
+    class Meta:
+        table = "ai_providers"
+
+
+class AIModel(Model):
+    id = fields.IntField(pk=True)
+    provider: fields.ForeignKeyRelation[AIProvider] = fields.ForeignKeyField(
+        "models.AIProvider", related_name="models", on_delete=fields.CASCADE
+    )
+    name = fields.CharField(max_length=255)
+    display_name = fields.CharField(max_length=255, null=True)
+    description = fields.TextField(null=True)
+    capabilities = fields.JSONField(null=True)
+    context_window = fields.IntField(null=True)
+    metadata = fields.JSONField(null=True)
+    created_at = fields.DatetimeField(auto_now_add=True)
+    updated_at = fields.DatetimeField(auto_now=True)
+
+    class Meta:
+        table = "ai_models"
+        unique_together = ("provider", "name")
+
+    @property
+    def embedding_dimensions(self) -> int | None:
+        metadata = self.metadata or {}
+        if not isinstance(metadata, dict):
+            return None
+        value = metadata.get("embedding_dimensions")
+        if value is None:
+            return None
+        try:
+            return int(value)
+        except (TypeError, ValueError):
+            return None
+
+    @embedding_dimensions.setter
+    def embedding_dimensions(self, value: int | None) -> None:
+        base_metadata = self.metadata if isinstance(self.metadata, dict) else {}
+        metadata = dict(base_metadata or {})
+        if value is None:
+            metadata.pop("embedding_dimensions", None)
+        else:
+            try:
+                metadata["embedding_dimensions"] = int(value)
+            except (TypeError, ValueError):
+                metadata.pop("embedding_dimensions", None)
+        self.metadata = metadata or None
+
+
+class AIDefaultModel(Model):
+    id = fields.IntField(pk=True)
+    ability = fields.CharField(max_length=50, unique=True)
+    model: fields.ForeignKeyRelation[AIModel] = fields.ForeignKeyField(
+        "models.AIModel", related_name="default_for", on_delete=fields.CASCADE
+    )
+    created_at = fields.DatetimeField(auto_now_add=True)
+    updated_at = fields.DatetimeField(auto_now=True)
+
+    class Meta:
+        table = "ai_default_models"
+
+
 class AutomationTask(Model):
     id = fields.IntField(pk=True)
     name = fields.CharField(max_length=100)
```
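AIModel.embedding_dimensions is not a column: the property proxies metadata["embedding_dimensions"], coercing values to int and silently dropping junk. A quick sketch of the round-trip (plain unsaved instantiation, no database access, which Tortoise permits):

```python
# Illustrative model name; only the property behaviour is being shown.
model = AIModel(name="text-embedding-3-small")

model.embedding_dimensions = "1536"    # coerced to int on the way in
assert model.metadata == {"embedding_dimensions": 1536}
assert model.embedding_dimensions == 1536

model.embedding_dimensions = None      # removes the key...
assert model.metadata is None          # ...and empty metadata collapses to None
```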
pyproject.toml (108 lines changed)

```diff
@@ -1,95 +1,25 @@
 [project]
 name = "foxel"
-version = "0.1.0"
-description = "Add your description here"
+version = "1"
+description = "foxel.cc"
 readme = "README.md"
 requires-python = ">=3.13"
 dependencies = [
-    "aioboto3==15.1.0",
-    "aiobotocore==2.24.0",
-    "aiofiles==24.1.0",
-    "aiohappyeyeballs==2.6.1",
-    "aiohttp==3.12.15",
-    "aioitertools==0.12.0",
-    "aiosignal==1.4.0",
-    "aiosqlite==0.21.0",
-    "annotated-types==0.7.0",
-    "anyio==4.10.0",
-    "asyncclick==8.2.2.2",
-    "attrs==25.3.0",
-    "bcrypt==4.3.0",
-    "boto3==1.39.11",
-    "botocore==1.39.11",
-    "certifi==2025.8.3",
-    "click==8.2.1",
-    "dictdiffer==0.9.0",
-    "dnspython==2.7.0",
-    "email-validator==2.2.0",
-    "fastapi==0.116.1",
-    "fastapi-cli==0.0.8",
-    "fastapi-cloud-cli==0.1.5",
-    "frozenlist==1.7.0",
-    "grpcio==1.74.0",
-    "h11==0.16.0",
-    "httpcore==1.0.9",
-    "httptools==0.6.4",
-    "httpx==0.28.1",
-    "idna==3.10",
-    "imageio==2.37.0",
-    "iso8601==2.1.0",
-    "jinja2==3.1.6",
-    "jmespath==1.0.1",
-    "markdown-it-py==4.0.0",
-    "markupsafe==3.0.2",
-    "mdurl==0.1.2",
-    "milvus-lite==2.5.1",
-    "multidict==6.6.4",
-    "numpy==2.3.2",
-    "pandas==2.3.1",
-    "passlib==1.7.4",
-    "pillow==11.3.0",
-    "propcache==0.3.2",
-    "protobuf==6.32.0",
-    "pyaes==1.6.1",
-    "pyasn1==0.6.1",
-    "pydantic==2.11.7",
-    "pydantic-core==2.33.2",
-    "pygments==2.19.2",
-    "pyjwt==2.10.1",
-    "pymilvus==2.6.0",
-    "pypika-tortoise==0.6.1",
-    "pysocks==1.7.1",
-    "python-dateutil==2.9.0.post0",
-    "python-dotenv==1.1.1",
-    "python-multipart==0.0.20",
-    "pytz==2025.2",
-    "pyyaml==6.0.2",
-    "qdrant-client==1.15.1",
-    "rawpy==0.25.1",
-    "rich==14.1.0",
-    "rich-toolkit==0.15.0",
-    "rignore==0.6.4",
-    "rsa==4.9.1",
-    "s3transfer==0.13.1",
-    "sentry-sdk==2.35.0",
-    "setuptools==80.9.0",
-    "shellingham==1.5.4",
-    "six==1.17.0",
-    "sniffio==1.3.1",
-    "starlette==0.47.2",
-    "telethon==1.40.0",
-    "tortoise-orm==0.25.1",
-    "tqdm==4.67.1",
-    "typer==0.16.0",
-    "typing-extensions==4.14.1",
-    "typing-inspection==0.4.1",
-    "tzdata==2025.2",
-    "ujson==5.10.0",
-    "urllib3==2.5.0",
-    "uvicorn==0.35.0",
-    "uvloop==0.21.0",
-    "watchfiles==1.1.0",
-    "websockets==15.0.1",
-    "wrapt==1.17.3",
-    "yarl==1.20.1",
+    "aioboto3>=15.2.0",
+    "aiofiles>=25.1.0",
+    "fastapi>=0.116.1",
+    "passlib[bcrypt]>=1.7.4",
+    "bcrypt>=3.2.2,<4.0",
+    "pillow>=11.3.0",
+    "pyjwt>=2.10.1",
+    "pysocks>=1.7.1",
+    "python-dotenv>=1.1.1",
+    "python-multipart>=0.0.20",
+    "qdrant-client>=1.15.1",
+    "rawpy>=0.25.1",
+    "telethon>=1.41.2",
+    "tortoise-orm>=0.25.1",
+    "uvicorn>=0.37.0",
+    "pymilvus[milvus-lite]>=2.6.2",
+    "paramiko>=4.0.0",
 ]
```
schemas/ai.py (new file, 101 lines)

```python
from typing import List, Optional

from pydantic import BaseModel, Field, field_validator

from services.ai_providers import ABILITIES, normalize_capabilities


class AIProviderBase(BaseModel):
    name: str
    identifier: str = Field(..., pattern=r"^[a-z0-9_\-\.]+$")
    provider_type: Optional[str] = None
    api_format: str
    base_url: Optional[str] = None
    api_key: Optional[str] = None
    logo_url: Optional[str] = None
    extra_config: Optional[dict] = None

    @field_validator("api_format")
    def normalize_format(cls, value: str) -> str:
        fmt = value.lower()
        if fmt not in {"openai", "gemini"}:
            raise ValueError("api_format must be 'openai' or 'gemini'")
        return fmt


class AIProviderCreate(AIProviderBase):
    pass


class AIProviderUpdate(BaseModel):
    name: Optional[str] = None
    provider_type: Optional[str] = None
    api_format: Optional[str] = None
    base_url: Optional[str] = None
    api_key: Optional[str] = None
    logo_url: Optional[str] = None
    extra_config: Optional[dict] = None

    @field_validator("api_format")
    def normalize_format(cls, value: Optional[str]) -> Optional[str]:
        if value is None:
            return value
        fmt = value.lower()
        if fmt not in {"openai", "gemini"}:
            raise ValueError("api_format must be 'openai' or 'gemini'")
        return fmt


class AIModelBase(BaseModel):
    name: str
    display_name: Optional[str] = None
    description: Optional[str] = None
    capabilities: Optional[List[str]] = None
    context_window: Optional[int] = None
    embedding_dimensions: Optional[int] = None
    metadata: Optional[dict] = None

    @field_validator("capabilities")
    def validate_capabilities(cls, items: Optional[List[str]]) -> Optional[List[str]]:
        if items is None:
            return None
        normalized = normalize_capabilities(items)
        invalid = set(items) - set(normalized)
        if invalid:
            raise ValueError(f"Unsupported capabilities: {', '.join(invalid)}")
        return normalized


class AIModelCreate(AIModelBase):
    pass


class AIModelUpdate(BaseModel):
    display_name: Optional[str] = None
    description: Optional[str] = None
    capabilities: Optional[List[str]] = None
    context_window: Optional[int] = None
    embedding_dimensions: Optional[int] = None
    metadata: Optional[dict] = None

    @field_validator("capabilities")
    def validate_capabilities(cls, items: Optional[List[str]]) -> Optional[List[str]]:
        if items is None:
            return None
        normalized = normalize_capabilities(items)
        invalid = set(items) - set(normalized)
        if invalid:
            raise ValueError(f"Unsupported capabilities: {', '.join(invalid)}")
        return normalized


class AIDefaultsUpdate(BaseModel):
    chat: Optional[int] = None
    vision: Optional[int] = None
    embedding: Optional[int] = None
    rerank: Optional[int] = None
    voice: Optional[int] = None
    tools: Optional[int] = None

    def as_mapping(self) -> dict:
        return {ability: getattr(self, ability) for ability in ABILITIES}
```
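Create and update schemas both funnel capabilities through normalize_capabilities and reject anything outside the supported set. An illustration of the validator, assuming "chat" and "vision" are valid abilities while "telepathy" is not (the actual ABILITIES list lives in services.ai_providers and is not shown in this diff):

```python
from pydantic import ValidationError

ok = AIModelCreate(name="gpt-4o", capabilities=["chat", "vision"])
print(ok.capabilities)  # normalized list, e.g. ["chat", "vision"]

try:
    AIModelCreate(name="gpt-4o", capabilities=["telepathy"])
except ValidationError as exc:
    print(exc)  # contains "Unsupported capabilities: telepathy"
```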
schemas/fs.py

```diff
@@ -8,7 +8,7 @@ class VfsEntry(BaseModel):
     size: int
     mtime: int
     type: Optional[str] = None
-    is_image: Optional[bool] = None
+    has_thumbnail: Optional[bool] = None
 
 
 class DirListing(BaseModel):
@@ -21,6 +21,13 @@ class SearchResultItem(BaseModel):
     id: int | str
     path: str
     score: float
+    chunk_id: Optional[str] = None
+    snippet: Optional[str] = None
+    mime: Optional[str] = None
+    source_type: Optional[str] = None
+    start_offset: Optional[int] = None
+    end_offset: Optional[int] = None
+    metadata: Optional[dict] = None
 
 
 class MkdirRequest(BaseModel):
```
628
services/adapters/ftp.py
Normal file
628
services/adapters/ftp.py
Normal file
@@ -0,0 +1,628 @@
|
||||
from __future__ import annotations
|
||||
|
||||
import asyncio
|
||||
from dataclasses import dataclass
|
||||
from typing import List, Dict, Tuple, AsyncIterator, Optional
|
||||
|
||||
from fastapi import HTTPException
|
||||
from fastapi.responses import StreamingResponse
|
||||
from ftplib import FTP, error_perm
|
||||
import mimetypes
|
||||
|
||||
from models import StorageAdapter
|
||||
from services.logging import LogService
|
||||
|
||||
|
||||
def _join_remote(root: str, rel: str) -> str:
|
||||
root = (root or "/").rstrip("/") or "/"
|
||||
rel = (rel or "").lstrip("/")
|
||||
if not rel:
|
||||
return root
|
||||
return f"{root}/{rel}"
|
||||
|
||||
|
||||
def _parse_mlst_line(line: str) -> Dict[str, str]:
|
||||
out: Dict[str, str] = {}
|
||||
try:
|
||||
facts, _, name = line.partition(" ")
|
||||
for part in facts.split(";"):
|
||||
if not part or "=" not in part:
|
||||
continue
|
||||
k, v = part.split("=", 1)
|
||||
out[k.strip().lower()] = v.strip()
|
||||
if name:
|
||||
out["name"] = name.strip()
|
||||
except Exception:
|
||||
pass
|
||||
return out
|
||||
|
||||
|
||||
def _parse_modify_to_epoch(mod: str) -> int:
|
||||
# Formats we may see: YYYYMMDDHHMMSS or YYYYMMDDHHMMSS(.sss)
|
||||
try:
|
||||
mod = mod.strip()
|
||||
mod = mod.split(".")[0]
|
||||
if len(mod) >= 14:
|
||||
y = int(mod[0:4])
|
||||
m = int(mod[4:6])
|
||||
d = int(mod[6:8])
|
||||
hh = int(mod[8:10])
|
||||
mm = int(mod[10:12])
|
||||
ss = int(mod[12:14])
|
||||
import datetime as _dt
|
||||
return int(_dt.datetime(y, m, d, hh, mm, ss, tzinfo=_dt.timezone.utc).timestamp())
|
||||
except Exception:
|
||||
return 0
|
||||
return 0
|
||||
|
||||
|
||||
@dataclass
|
||||
class _Range:
|
||||
start: int
|
||||
end: Optional[int] # inclusive
|
||||
|
||||
|
||||
class FTPAdapter:
|
||||
def __init__(self, record: StorageAdapter):
|
||||
self.record = record
|
||||
cfg = record.config
|
||||
self.host: str = cfg.get("host")
|
||||
self.port: int = int(cfg.get("port", 21))
|
||||
self.username: Optional[str] = cfg.get("username")
|
||||
self.password: Optional[str] = cfg.get("password")
|
||||
self.passive: bool = bool(cfg.get("passive", True))
|
||||
self.timeout: int = int(cfg.get("timeout", 15))
|
||||
self.root_path: str = cfg.get("root", "/") or "/"
|
||||
|
||||
if not self.host:
|
||||
raise ValueError("FTP adapter requires 'host'")
|
||||
|
||||
def get_effective_root(self, sub_path: str | None) -> str:
|
||||
base = self.root_path.rstrip("/") or "/"
|
||||
if sub_path:
|
||||
return _join_remote(base, sub_path)
|
||||
return base
|
||||
|
||||
def _connect(self) -> FTP:
|
||||
ftp = FTP()
|
||||
ftp.connect(self.host, self.port, timeout=self.timeout)
|
||||
if self.username:
|
||||
ftp.login(self.username, self.password or "")
|
||||
else:
|
||||
ftp.login()
|
||||
ftp.set_pasv(self.passive)
|
||||
return ftp
|
||||
|
||||
async def list_dir(self, root: str, rel: str, page_num: int = 1, page_size: int = 50, sort_by: str = "name", sort_order: str = "asc") -> Tuple[List[Dict], int]:
|
||||
path = _join_remote(root, rel.strip('/'))
|
||||
|
||||
def _do_list() -> List[Dict]:
|
||||
ftp = self._connect()
|
||||
try:
|
||||
ftp.cwd(path)
|
||||
except error_perm as e:
|
||||
# path may be file
|
||||
ftp.quit()
|
||||
raise NotADirectoryError(rel) from e
|
||||
|
||||
entries: List[Dict] = []
|
||||
# Try MLSD first
|
||||
try:
|
||||
for name, facts in ftp.mlsd():
|
||||
if name in (".", ".."):
|
||||
continue
|
||||
is_dir = (facts.get("type") == "dir")
|
||||
size = int(facts.get("size") or 0)
|
||||
mtime = _parse_modify_to_epoch(facts.get("modify") or "")
|
||||
entries.append({
|
||||
"name": name,
|
||||
"is_dir": is_dir,
|
||||
"size": 0 if is_dir else size,
|
||||
"mtime": mtime,
|
||||
"type": "dir" if is_dir else "file",
|
||||
})
|
||||
ftp.quit()
|
||||
return entries
|
||||
except Exception:
|
||||
# Fallback to NLST + probing
|
||||
pass
|
||||
|
||||
names = []
|
||||
try:
|
||||
names = ftp.nlst()
|
||||
except Exception:
|
||||
ftp.quit()
|
||||
return []
|
||||
|
||||
for name in names:
|
||||
if name in (".", ".."):
|
||||
continue
|
||||
is_dir = False
|
||||
size = 0
|
||||
mtime = 0
|
||||
try:
|
||||
# If we can CWD, it's a directory
|
||||
ftp.cwd(_join_remote(path, name))
|
||||
ftp.cwd(path)
|
||||
is_dir = True
|
||||
except Exception:
|
||||
is_dir = False
|
||||
try:
|
||||
size = ftp.size(_join_remote(path, name)) or 0
|
||||
except Exception:
|
||||
size = 0
|
||||
try:
|
||||
mdtm = ftp.sendcmd("MDTM " + _join_remote(path, name))
|
||||
# Example: '213 20241012XXXXXX'
|
||||
if mdtm.startswith("213 "):
|
||||
mtime = _parse_modify_to_epoch(mdtm.split(" ", 1)[1])
|
||||
except Exception:
|
||||
pass
|
||||
entries.append({
|
||||
"name": name,
|
||||
"is_dir": is_dir,
|
||||
"size": 0 if is_dir else int(size or 0),
|
||||
"mtime": int(mtime or 0),
|
||||
"type": "dir" if is_dir else "file",
|
||||
})
|
||||
ftp.quit()
|
||||
return entries
|
||||
|
||||
entries = await asyncio.to_thread(_do_list)
|
||||
|
||||
reverse = sort_order.lower() == "desc"
|
||||
|
||||
def get_sort_key(item):
|
||||
key = (not item["is_dir"],)
|
||||
f = sort_by.lower()
|
||||
if f == "name":
|
||||
key += (item["name"].lower(),)
|
||||
elif f == "size":
|
||||
key += (item.get("size", 0),)
|
||||
elif f == "mtime":
|
||||
key += (item.get("mtime", 0),)
|
||||
else:
|
||||
key += (item["name"].lower(),)
|
||||
return key
|
||||
|
||||
entries.sort(key=get_sort_key, reverse=reverse)
|
||||
total = len(entries)
|
||||
start = (page_num - 1) * page_size
|
||||
end = start + page_size
|
||||
return entries[start:end], total
|
||||
|
||||
async def read_file(self, root: str, rel: str) -> bytes:
|
||||
path = _join_remote(root, rel)
|
||||
|
||||
def _do_read() -> bytes:
|
||||
ftp = self._connect()
|
||||
try:
|
||||
chunks: List[bytes] = []
|
||||
ftp.retrbinary("RETR " + path, lambda b: chunks.append(b))
|
||||
return b"".join(chunks)
|
||||
except error_perm as e:
|
||||
if str(e).startswith("550"):
|
||||
raise FileNotFoundError(rel)
|
||||
raise
|
||||
finally:
|
||||
try:
|
||||
ftp.quit()
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
return await asyncio.to_thread(_do_read)
|
||||
|
||||
async def write_file(self, root: str, rel: str, data: bytes):
|
||||
path = _join_remote(root, rel)
|
||||
|
||||
def _ensure_dirs(ftp: FTP, dir_path: str):
|
||||
parts = [p for p in dir_path.strip("/").split("/") if p]
|
||||
cur = "/"
|
||||
for p in parts:
|
||||
cur = _join_remote(cur, p)
|
||||
try:
|
||||
ftp.mkd(cur)
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
def _do_write():
|
||||
ftp = self._connect()
|
||||
try:
|
||||
parent = "/" if "/" not in path.strip("/") else path.rsplit("/", 1)[0]
|
||||
_ensure_dirs(ftp, parent)
|
||||
from io import BytesIO
|
||||
bio = BytesIO(data)
|
||||
ftp.storbinary("STOR " + path, bio)
|
||||
finally:
|
||||
try:
|
||||
ftp.quit()
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
await asyncio.to_thread(_do_write)
|
||||
await LogService.info(
|
||||
"adapter:ftp",
|
||||
f"Wrote file to {rel}",
|
||||
details={"adapter_id": self.record.id, "path": path, "size": len(data)},
|
||||
)
|
||||
|
||||
async def write_file_stream(self, root: str, rel: str, data_iter: AsyncIterator[bytes]):
|
||||
# KISS: 聚合后一次性写入
|
||||
buf = bytearray()
|
||||
async for chunk in data_iter:
|
||||
if chunk:
|
||||
buf.extend(chunk)
|
||||
await self.write_file(root, rel, bytes(buf))
|
||||
return len(buf)
|
||||
|
||||
async def mkdir(self, root: str, rel: str):
|
||||
path = _join_remote(root, rel)
|
||||
|
||||
def _do_mkdir():
|
||||
ftp = self._connect()
|
||||
try:
|
||||
parts = [p for p in path.strip("/").split("/") if p]
|
||||
cur = "/"
|
||||
for p in parts:
|
||||
cur = _join_remote(cur, p)
|
||||
try:
|
||||
ftp.mkd(cur)
|
||||
except Exception:
|
||||
pass
|
||||
finally:
|
||||
try:
|
||||
ftp.quit()
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
await asyncio.to_thread(_do_mkdir)
|
||||
await LogService.info("adapter:ftp", f"Created directory {rel}", details={"adapter_id": self.record.id, "path": path})
|
||||
|
||||
async def delete(self, root: str, rel: str):
|
||||
path = _join_remote(root, rel)
|
||||
|
||||
def _do_delete():
|
||||
ftp = self._connect()
|
||||
try:
|
||||
# Try file delete
|
||||
try:
|
||||
ftp.delete(path)
|
||||
return
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
# Recursively delete dir
|
||||
def _rm_tree(dir_path: str):
|
||||
try:
|
||||
ftp.cwd(dir_path)
|
||||
except Exception:
|
||||
return
|
||||
items = []
|
||||
try:
|
||||
for name, facts in ftp.mlsd():
|
||||
if name in (".", ".."):
|
||||
continue
|
||||
items.append((name, facts.get("type") == "dir"))
|
||||
except Exception:
|
||||
try:
|
||||
names = ftp.nlst()
|
||||
except Exception:
|
||||
names = []
|
||||
for n in names:
|
||||
if n in (".", ".."):
|
||||
continue
|
||||
# Best-effort dir check
|
||||
try:
|
||||
ftp.cwd(_join_remote(dir_path, n))
|
||||
ftp.cwd(dir_path)
|
||||
items.append((n, True))
|
||||
except Exception:
|
||||
items.append((n, False))
|
||||
for n, is_dir in items:
|
||||
                        child = _join_remote(dir_path, n)
                        if is_dir:
                            _rm_tree(child)
                        else:
                            try:
                                ftp.delete(child)
                            except Exception:
                                pass
                    try:
                        ftp.rmd(dir_path)
                    except Exception:
                        pass

                _rm_tree(path)
            finally:
                try:
                    ftp.quit()
                except Exception:
                    pass

        await asyncio.to_thread(_do_delete)
        await LogService.info("adapter:ftp", f"Deleted {rel}", details={"adapter_id": self.record.id, "path": path})

    async def move(self, root: str, src_rel: str, dst_rel: str):
        src = _join_remote(root, src_rel)
        dst = _join_remote(root, dst_rel)

        def _do_move():
            ftp = self._connect()
            try:
                # Ensure dst parent exists
                parent = "/" if "/" not in dst.strip("/") else dst.rsplit("/", 1)[0]
                parts = [p for p in parent.strip("/").split("/") if p]
                cur = "/"
                for p in parts:
                    cur = _join_remote(cur, p)
                    try:
                        ftp.mkd(cur)
                    except Exception:
                        pass
                ftp.rename(src, dst)
            finally:
                try:
                    ftp.quit()
                except Exception:
                    pass

        await asyncio.to_thread(_do_move)
        await LogService.info("adapter:ftp", f"Moved {src_rel} to {dst_rel}", details={"adapter_id": self.record.id, "src": src, "dst": dst})

    async def rename(self, root: str, src_rel: str, dst_rel: str):
        await self.move(root, src_rel, dst_rel)

    async def copy(self, root: str, src_rel: str, dst_rel: str, overwrite: bool = False):
        src = _join_remote(root, src_rel)
        dst = _join_remote(root, dst_rel)

        # naive implementation: download then upload; recursively for dirs
        async def _is_dir(path: str) -> bool:
            def _probe() -> bool:
                ftp = self._connect()
                try:
                    try:
                        ftp.cwd(path)
                        return True
                    except Exception:
                        return False
                finally:
                    try:
                        ftp.quit()
                    except Exception:
                        pass
            return await asyncio.to_thread(_probe)

        if await _is_dir(src):
            # list children, create dst dir, copy recursively
            await self.mkdir(root, dst_rel)

            children, _ = await self.list_dir(root, src_rel, page_num=1, page_size=10_000)
            for ent in children:
                child_src = f"{src_rel.rstrip('/')}/{ent['name']}"
                child_dst = f"{dst_rel.rstrip('/')}/{ent['name']}"
                await self.copy(root, child_src, child_dst, overwrite)
            await LogService.info(
                "adapter:ftp", f"Copied directory {src_rel} to {dst_rel}",
                details={"adapter_id": self.record.id, "src": src, "dst": dst}
            )
            return

        # file
        data = await self.read_file(root, src_rel)
        if not overwrite:
            # best-effort existence check
            try:
                await self.stat_file(root, dst_rel)
                raise FileExistsError(dst_rel)
            except FileNotFoundError:
                pass
        await self.write_file(root, dst_rel, data)
        await LogService.info("adapter:ftp", f"Copied {src_rel} to {dst_rel}", details={"adapter_id": self.record.id, "src": src, "dst": dst})

    async def stat_file(self, root: str, rel: str):
        path = _join_remote(root, rel)

        def _do_stat():
            ftp = self._connect()
            try:
                # Try MLST
                try:
                    resp: List[str] = []
                    ftp.retrlines("MLST " + path, resp.append)
                    # The last line usually contains facts
                    facts = {}
                    if resp:
                        facts = _parse_mlst_line(resp[-1])
                    name = rel.split("/")[-1]
                    t = facts.get("type") or "file"
                    is_dir = t == "dir"
                    size = int(facts.get("size") or 0)
                    mtime = _parse_modify_to_epoch(facts.get("modify") or "")
                    return {
                        "name": name,
                        "is_dir": is_dir,
                        "size": 0 if is_dir else size,
                        "mtime": mtime,
                        "type": "dir" if is_dir else "file",
                        "path": path,
                    }
                except Exception:
                    pass

                # Probe directory
                try:
                    ftp.cwd(path)
                    return {
                        "name": rel.split("/")[-1],
                        "is_dir": True,
                        "size": 0,
                        "mtime": 0,
                        "type": "dir",
                        "path": path,
                    }
                except Exception:
                    pass

                # Treat as file
                try:
                    size = ftp.size(path) or 0
                except Exception:
                    size = 0
                try:
                    mdtm = ftp.sendcmd("MDTM " + path)
                    mtime = _parse_modify_to_epoch(mdtm.split(" ", 1)[1]) if mdtm.startswith("213 ") else 0
                except Exception:
                    mtime = 0
                return {
                    "name": rel.split("/")[-1],
                    "is_dir": False,
                    "size": int(size or 0),
                    "mtime": int(mtime or 0),
                    "type": "file",
                    "path": path,
                }
            except error_perm as e:
                if str(e).startswith("550"):
                    raise FileNotFoundError(rel)
                raise
            finally:
                try:
                    ftp.quit()
                except Exception:
                    pass

        return await asyncio.to_thread(_do_stat)

    async def stream_file(self, root: str, rel: str, range_header: str | None):
        path = _join_remote(root, rel)

        # Get size (best-effort)
        def _get_size() -> Optional[int]:
            ftp = self._connect()
            try:
                try:
                    return int(ftp.size(path) or 0)
                except Exception:
                    return None
            finally:
                try:
                    ftp.quit()
                except Exception:
                    pass

        total_size = await asyncio.to_thread(_get_size)
        mime, _ = mimetypes.guess_type(rel)
        content_type = mime or "application/octet-stream"

        rng: Optional[_Range] = None
        status = 200
        headers = {"Accept-Ranges": "bytes", "Content-Type": content_type}
        if range_header and range_header.startswith("bytes=") and total_size is not None:
            try:
                s, e = (range_header.removeprefix("bytes=").split("-", 1))
                start = int(s) if s.strip() else 0
                end = int(e) if e.strip() else (total_size - 1)
                if start >= total_size:
                    raise HTTPException(416, detail="Requested Range Not Satisfiable")
                if end >= total_size:
                    end = total_size - 1
                rng = _Range(start, end)
                status = 206
                headers["Content-Range"] = f"bytes {start}-{end}/{total_size}"
                headers["Content-Length"] = str(end - start + 1)
            except ValueError:
                raise HTTPException(400, detail="Invalid Range header")
        elif total_size is not None:
            headers["Content-Length"] = str(total_size)

        queue: asyncio.Queue[Optional[bytes]] = asyncio.Queue(maxsize=8)

        class _Stop(Exception):
            pass

        def _worker():
            ftp = self._connect()
            remaining = None
            if rng is not None:
                remaining = (rng.end - rng.start + 1) if rng.end is not None else None

            def _cb(chunk: bytes):
                nonlocal remaining
                if not chunk:
                    return
                try:
                    if remaining is not None:
                        if len(chunk) > remaining:
                            part = chunk[:remaining]
                            queue.put_nowait(part)
                            remaining = 0
                            raise _Stop()
                        else:
                            queue.put_nowait(chunk)
                            remaining -= len(chunk)
                            if remaining <= 0:
                                raise _Stop()
                    else:
                        queue.put_nowait(chunk)
                except _Stop:
                    raise
                except Exception:
                    # queue full or event loop closed
                    raise _Stop()

            try:
                if rng is not None:
                    ftp.retrbinary("RETR " + path, _cb, rest=rng.start)
                else:
                    ftp.retrbinary("RETR " + path, _cb)
                queue.put_nowait(None)
            except _Stop:
                try:
                    queue.put_nowait(None)
                except Exception:
                    pass
            except error_perm as e:
                try:
                    queue.put_nowait(None)
                except Exception:
                    pass
                if str(e).startswith("550"):
                    pass
            finally:
                try:
                    ftp.quit()
                except Exception:
                    pass

        async def agen():
            worker_fut = asyncio.to_thread(_worker)
            try:
                while True:
                    chunk = await queue.get()
                    if chunk is None:
                        break
                    yield chunk
            finally:
                try:
                    await worker_fut
                except Exception:
                    pass

        return StreamingResponse(agen(), status_code=status, headers=headers, media_type=content_type)


ADAPTER_TYPE = "ftp"

CONFIG_SCHEMA = [
    {"key": "host", "label": "主机", "type": "string", "required": True, "placeholder": "ftp.example.com"},
    {"key": "port", "label": "端口", "type": "number", "required": False, "default": 21},
    {"key": "username", "label": "用户名", "type": "string", "required": False},
    {"key": "password", "label": "密码", "type": "password", "required": False},
    {"key": "passive", "label": "被动模式", "type": "boolean", "required": False, "default": True},
    {"key": "timeout", "label": "超时(秒)", "type": "number", "required": False, "default": 15},
    {"key": "root", "label": "根路径", "type": "string", "required": False, "default": "/"},
]


def ADAPTER_FACTORY(rec: StorageAdapter):
    return FTPAdapter(rec)
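# A minimal sketch of the "bytes=start-end" arithmetic used by stream_file
# above, assuming the total size is known; parse_range is a hypothetical
# stand-in for the inline logic, not a Foxel API.
def parse_range(range_header: str, total_size: int) -> tuple[int, int]:
    s, e = range_header.removeprefix("bytes=").split("-", 1)
    start = int(s) if s.strip() else 0
    end = int(e) if e.strip() else total_size - 1
    if start >= total_size:
        raise ValueError("range not satisfiable")  # maps to HTTP 416 above
    return start, min(end, total_size - 1)

# parse_range("bytes=0-499", 1000) -> (0, 499); Content-Length = 500
# parse_range("bytes=500-", 1000)  -> (500, 999); Content-Range: bytes 500-999/1000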
@@ -2,7 +2,7 @@ from __future__ import annotations
 from datetime import datetime, timezone, timedelta
 from typing import List, Dict, Tuple, AsyncIterator
 import httpx
-from fastapi.responses import StreamingResponse
+from fastapi.responses import StreamingResponse, Response
 from fastapi import HTTPException
 from models import StorageAdapter
@@ -20,6 +20,7 @@ class OneDriveAdapter:
         self.client_secret = cfg.get("client_secret")
         self.refresh_token = cfg.get("refresh_token")
         self.root = cfg.get("root", "/").strip("/")
+        self.enable_redirect_307 = bool(cfg.get("enable_direct_download_307"))
 
         if not all([self.client_id, self.client_secret, self.refresh_token]):
             raise ValueError(
@@ -380,6 +381,26 @@ class OneDriveAdapter:
 
         return StreamingResponse(file_iterator(), status_code=status, headers=headers, media_type=content_type)
 
+    async def get_direct_download_response(self, root: str, rel: str):
+        if not self.enable_redirect_307:
+            return None
+
+        api_path = self._get_api_path(rel)
+        if not api_path:
+            raise IsADirectoryError("不能对目录进行直链重定向")
+
+        resp = await self._request("GET", api_path_segment=api_path)
+        if resp.status_code == 404:
+            raise FileNotFoundError(rel)
+        resp.raise_for_status()
+
+        item_data = resp.json()
+        download_url = item_data.get("@microsoft.graph.downloadUrl")
+        if not download_url:
+            return None
+
+        return Response(status_code=307, headers={"Location": download_url})
+
     async def get_thumbnail(self, root: str, rel: str, size: str = "medium"):
         """
         Fetch the file's thumbnail.
@@ -434,6 +455,7 @@ CONFIG_SCHEMA = [
      "required": True, "help_text": "可以通过运行 'python -m services.adapters.onedrive' 获取"},
     {"key": "root", "label": "根目录 (Root Path)", "type": "string",
      "required": False, "placeholder": "默认为根目录 /"},
+    {"key": "enable_direct_download_307", "label": "Enable 307 redirect download", "type": "boolean", "default": False},
 ]
@@ -34,8 +34,15 @@ class QuarkAdapter:
         cfg = record.config or {}
         self.cookie: str = cfg.get("cookie") or cfg.get("Cookie")
         self.root_fid: str = cfg.get("root_fid", "0")
-        self.use_transcoding_address: bool = bool(cfg.get("use_transcoding_address", False))
-        self.only_list_video_file: bool = bool(cfg.get("only_list_video_file", False))
+
+        def _as_bool(value: Any) -> bool:
+            if isinstance(value, bool):
+                return value
+            if isinstance(value, str):
+                return value.strip().lower() in {"1", "true", "yes", "on"}
+            return bool(value)
+
+        self.use_transcoding_address: bool = _as_bool(cfg.get("use_transcoding_address", False))
+        self.only_list_video_file: bool = _as_bool(cfg.get("only_list_video_file", False))
 
         if not self.cookie:
             raise ValueError("Quark 适配器需要 cookie 配置")
@@ -716,8 +723,8 @@ ADAPTER_TYPE = "Quark"
 CONFIG_SCHEMA = [
    {"key": "cookie", "label": "Cookie", "type": "password", "required": True, "placeholder": "从 pan.quark.cn 复制"},
    {"key": "root_fid", "label": "根 FID", "type": "string", "required": False, "default": "0"},
-   {"key": "use_transcoding_address", "label": "视频转码直链", "type": "checkbox", "required": False, "default": False},
-   {"key": "only_list_video_file", "label": "仅列出视频文件", "type": "checkbox", "required": False, "default": False},
+   {"key": "use_transcoding_address", "label": "视频转码直链", "type": "boolean", "required": False, "default": False},
+   {"key": "only_list_video_file", "label": "仅列出视频文件", "type": "boolean", "required": False, "default": False},
 ]
 
 def ADAPTER_FACTORY(rec: StorageAdapter) -> BaseAdapter:
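# A quick sanity check of the _as_bool coercion added above. Config values can
# arrive from a form as strings, and plain bool() treats any non-empty string
# (including "false"-like text) as truthy; illustrative only:
for raw, expected in [(True, True), ("yes", True), ("1", True), ("off", False), ("0", False), (2, True)]:
    assert _as_bool(raw) is expected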
services/adapters/sftp.py (new file, 447 lines)
@@ -0,0 +1,447 @@
from __future__ import annotations

import asyncio
import mimetypes
import stat as statmod
from typing import List, Dict, Tuple, AsyncIterator, Optional

from fastapi import HTTPException
from fastapi.responses import StreamingResponse
import paramiko

from models import StorageAdapter
from services.logging import LogService


def _join_remote(root: str, rel: str) -> str:
    root = (root or "/").rstrip("/") or "/"
    rel = (rel or "").lstrip("/")
    if not rel:
        return root
    return f"{root}/{rel}"


class SFTPAdapter:
    def __init__(self, record: StorageAdapter):
        self.record = record
        cfg = record.config
        self.host: str = cfg.get("host")
        self.port: int = int(cfg.get("port", 22))
        self.username: str | None = cfg.get("username")
        self.password: str | None = cfg.get("password")
        self.timeout: int = int(cfg.get("timeout", 15))
        self.root_path: str = cfg.get("root")  # required
        self.allow_unknown_host: bool = bool(cfg.get("allow_unknown_host", True))

        if not self.host:
            raise ValueError("SFTP adapter requires 'host'")
        if not self.username or not self.password:
            raise ValueError("SFTP adapter requires 'username' and 'password'")
        if not self.root_path:
            raise ValueError("SFTP adapter requires 'root'")

    def get_effective_root(self, sub_path: str | None) -> str:
        base = self.root_path.rstrip("/") or "/"
        if sub_path:
            return _join_remote(base, sub_path)
        return base

    def _connect(self) -> paramiko.SFTPClient:
        ssh = paramiko.SSHClient()
        if self.allow_unknown_host:
            ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        ssh.connect(
            hostname=self.host,
            port=self.port,
            username=self.username,
            password=self.password,
            timeout=self.timeout,
            allow_agent=False,
            look_for_keys=False,
        )
        return ssh.open_sftp()

    async def list_dir(self, root: str, rel: str, page_num: int = 1, page_size: int = 50, sort_by: str = "name", sort_order: str = "asc") -> Tuple[List[Dict], int]:
        path = _join_remote(root, rel)

        def _do_list() -> List[Dict]:
            sftp = self._connect()
            try:
                attrs = sftp.listdir_attr(path)
                entries: List[Dict] = []
                for a in attrs:
                    name = a.filename
                    is_dir = statmod.S_ISDIR(a.st_mode)
                    entries.append({
                        "name": name,
                        "is_dir": is_dir,
                        "size": 0 if is_dir else int(a.st_size or 0),
                        "mtime": int(a.st_mtime or 0),
                        "type": "dir" if is_dir else "file",
                    })
                return entries
            finally:
                try:
                    sftp.close()
                except Exception:
                    pass

        entries = await asyncio.to_thread(_do_list)

        reverse = sort_order.lower() == "desc"

        def get_sort_key(item):
            key = (not item["is_dir"],)
            f = sort_by.lower()
            if f == "name":
                key += (item["name"].lower(),)
            elif f == "size":
                key += (item.get("size", 0),)
            elif f == "mtime":
                key += (item.get("mtime", 0),)
            else:
                key += (item["name"].lower(),)
            return key

        entries.sort(key=get_sort_key, reverse=reverse)
        total = len(entries)
        start = (page_num - 1) * page_size
        end = start + page_size
        return entries[start:end], total

    async def read_file(self, root: str, rel: str) -> bytes:
        path = _join_remote(root, rel)

        def _do_read() -> bytes:
            sftp = self._connect()
            try:
                with sftp.open(path, "rb") as f:
                    return f.read()
            except FileNotFoundError:
                raise
            except IOError as e:
                if getattr(e, "errno", None) == 2:
                    raise FileNotFoundError(rel)
                raise
            finally:
                try:
                    sftp.close()
                except Exception:
                    pass

        return await asyncio.to_thread(_do_read)

    async def write_file(self, root: str, rel: str, data: bytes):
        path = _join_remote(root, rel)

        def _ensure_dirs(sftp: paramiko.SFTPClient, dir_path: str):
            parts = [p for p in dir_path.strip("/").split("/") if p]
            cur = "/"
            for p in parts:
                cur = _join_remote(cur, p)
                try:
                    sftp.mkdir(cur)
                except IOError:
                    # likely exists
                    pass

        def _do_write():
            sftp = self._connect()
            try:
                parent = "/" if "/" not in path.strip("/") else path.rsplit("/", 1)[0]
                _ensure_dirs(sftp, parent)
                with sftp.open(path, "wb") as f:
                    f.write(data)
            finally:
                try:
                    sftp.close()
                except Exception:
                    pass

        await asyncio.to_thread(_do_write)
        await LogService.info("adapter:sftp", f"Wrote file to {rel}", details={"adapter_id": self.record.id, "path": path, "size": len(data)})

    async def write_file_stream(self, root: str, rel: str, data_iter: AsyncIterator[bytes]):
        buf = bytearray()
        async for chunk in data_iter:
            if chunk:
                buf.extend(chunk)
        await self.write_file(root, rel, bytes(buf))
        return len(buf)

    async def mkdir(self, root: str, rel: str):
        path = _join_remote(root, rel)

        def _do_mkdir():
            sftp = self._connect()
            try:
                parts = [p for p in path.strip("/").split("/") if p]
                cur = "/"
                for p in parts:
                    cur = _join_remote(cur, p)
                    try:
                        sftp.mkdir(cur)
                    except IOError:
                        pass
            finally:
                try:
                    sftp.close()
                except Exception:
                    pass

        await asyncio.to_thread(_do_mkdir)
        await LogService.info("adapter:sftp", f"Created directory {rel}", details={"adapter_id": self.record.id, "path": path})

    async def delete(self, root: str, rel: str):
        path = _join_remote(root, rel)

        def _do_delete():
            sftp = self._connect()
            try:
                # Try file remove first
                try:
                    sftp.remove(path)
                    return
                except IOError:
                    pass

                def _rm_tree(dp: str):
                    try:
                        for a in sftp.listdir_attr(dp):
                            child = _join_remote(dp, a.filename)
                            if statmod.S_ISDIR(a.st_mode):
                                _rm_tree(child)
                            else:
                                try:
                                    sftp.remove(child)
                                except Exception:
                                    pass
                        sftp.rmdir(dp)
                    except IOError:
                        pass

                _rm_tree(path)
            finally:
                try:
                    sftp.close()
                except Exception:
                    pass

        await asyncio.to_thread(_do_delete)
        await LogService.info("adapter:sftp", f"Deleted {rel}", details={"adapter_id": self.record.id, "path": path})

    async def move(self, root: str, src_rel: str, dst_rel: str):
        src = _join_remote(root, src_rel)
        dst = _join_remote(root, dst_rel)

        def _do_move():
            sftp = self._connect()
            try:
                # ensure dst parent exists
                parent = "/" if "/" not in dst.strip("/") else dst.rsplit("/", 1)[0]
                parts = [p for p in parent.strip("/").split("/") if p]
                cur = "/"
                for p in parts:
                    cur = _join_remote(cur, p)
                    try:
                        sftp.mkdir(cur)
                    except IOError:
                        pass
                sftp.rename(src, dst)
            finally:
                try:
                    sftp.close()
                except Exception:
                    pass

        await asyncio.to_thread(_do_move)
        await LogService.info("adapter:sftp", f"Moved {src_rel} to {dst_rel}", details={"adapter_id": self.record.id, "src": src, "dst": dst})

    async def rename(self, root: str, src_rel: str, dst_rel: str):
        await self.move(root, src_rel, dst_rel)

    async def copy(self, root: str, src_rel: str, dst_rel: str, overwrite: bool = False):
        src = _join_remote(root, src_rel)
        dst = _join_remote(root, dst_rel)

        def _is_dir() -> bool:
            sftp = self._connect()
            try:
                st = sftp.stat(src)
                return statmod.S_ISDIR(st.st_mode)
            finally:
                try:
                    sftp.close()
                except Exception:
                    pass

        if await asyncio.to_thread(_is_dir):
            await self.mkdir(root, dst_rel)

            children, _ = await self.list_dir(root, src_rel, page_num=1, page_size=10_000)
            for ent in children:
                child_src = f"{src_rel.rstrip('/')}/{ent['name']}"
                child_dst = f"{dst_rel.rstrip('/')}/{ent['name']}"
                await self.copy(root, child_src, child_dst, overwrite)
            await LogService.info("adapter:sftp", f"Copied directory {src_rel} to {dst_rel}", details={"adapter_id": self.record.id, "src": src, "dst": dst})
            return

        # file copy
        data = await self.read_file(root, src_rel)
        if not overwrite:
            try:
                await self.stat_file(root, dst_rel)
                raise FileExistsError(dst_rel)
            except FileNotFoundError:
                pass
        await self.write_file(root, dst_rel, data)
        await LogService.info("adapter:sftp", f"Copied {src_rel} to {dst_rel}", details={"adapter_id": self.record.id, "src": src, "dst": dst})

    async def stat_file(self, root: str, rel: str):
        path = _join_remote(root, rel)

        def _do_stat():
            sftp = self._connect()
            try:
                st = sftp.stat(path)
                is_dir = statmod.S_ISDIR(st.st_mode)
                info = {
                    "name": rel.split("/")[-1],
                    "is_dir": is_dir,
                    "size": 0 if is_dir else int(st.st_size or 0),
                    "mtime": int(st.st_mtime or 0),
                    "type": "dir" if is_dir else "file",
                    "path": path,
                }
                return info
            except FileNotFoundError:
                raise
            except IOError as e:
                if getattr(e, "errno", None) == 2:
                    raise FileNotFoundError(rel)
                raise
            finally:
                try:
                    sftp.close()
                except Exception:
                    pass

        return await asyncio.to_thread(_do_stat)

    async def exists(self, root: str, rel: str) -> bool:
        try:
            await self.stat_file(root, rel)
            return True
        except FileNotFoundError:
            return False
        except Exception:
            return False

    async def stream_file(self, root: str, rel: str, range_header: str | None):
        path = _join_remote(root, rel)

        def _get_stat():
            sftp = self._connect()
            try:
                st = sftp.stat(path)
                return int(st.st_size or 0)
            finally:
                try:
                    sftp.close()
                except Exception:
                    pass

        file_size = await asyncio.to_thread(_get_stat)
        if file_size is None:
            raise HTTPException(404, detail="File not found")

        mime, _ = mimetypes.guess_type(rel)
        content_type = mime or "application/octet-stream"

        start = 0
        end = file_size - 1
        status = 200
        headers = {
            "Accept-Ranges": "bytes",
            "Content-Type": content_type,
            "Content-Length": str(file_size),
        }

        if range_header and range_header.startswith("bytes="):
            try:
                s, e = (range_header.removeprefix("bytes=").split("-", 1))
                if s.strip():
                    start = int(s)
                if e.strip():
                    end = int(e)
                if start >= file_size:
                    raise HTTPException(416, detail="Requested Range Not Satisfiable")
                if end >= file_size:
                    end = file_size - 1
                status = 206
                headers["Content-Length"] = str(end - start + 1)
                headers["Content-Range"] = f"bytes {start}-{end}/{file_size}"
            except ValueError:
                raise HTTPException(400, detail="Invalid Range header")

        queue: asyncio.Queue[Optional[bytes]] = asyncio.Queue(maxsize=8)

        def _worker():
            sftp = self._connect()
            try:
                with sftp.open(path, "rb") as f:
                    f.seek(start)
                    remaining = end - start + 1
                    chunk_size = 64 * 1024
                    while remaining > 0:
                        to_read = chunk_size if remaining > chunk_size else remaining
                        data = f.read(to_read)
                        if not data:
                            break
                        try:
                            queue.put_nowait(data)
                        except Exception:
                            break
                        remaining -= len(data)
                try:
                    queue.put_nowait(None)
                except Exception:
                    pass
            finally:
                try:
                    sftp.close()
                except Exception:
                    pass

        async def agen():
            worker_fut = asyncio.to_thread(_worker)
            try:
                while True:
                    chunk = await queue.get()
                    if chunk is None:
                        break
                    yield chunk
            finally:
                try:
                    await worker_fut
                except Exception:
                    pass

        return StreamingResponse(agen(), status_code=status, headers=headers, media_type=content_type)


ADAPTER_TYPE = "sftp"

CONFIG_SCHEMA = [
    {"key": "host", "label": "主机", "type": "string", "required": True, "placeholder": "sftp.example.com"},
    {"key": "port", "label": "端口", "type": "number", "required": False, "default": 22},
    {"key": "username", "label": "用户名", "type": "string", "required": True},
    {"key": "password", "label": "密码", "type": "password", "required": True},
    {"key": "root", "label": "根路径", "type": "string", "required": True, "placeholder": "/data"},
    {"key": "timeout", "label": "超时(秒)", "type": "number", "required": False, "default": 15},
    {"key": "allow_unknown_host", "label": "允许未知主机指纹", "type": "boolean", "required": False, "default": True},
]


def ADAPTER_FACTORY(rec: StorageAdapter):
    return SFTPAdapter(rec)
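# A quick check of _join_remote's edge cases (the helper is defined in both
# ftp.py and sftp.py); note that a bare "/" root yields a double slash, which
# most servers tolerate:
assert _join_remote("/data/", "x") == "/data/x"
assert _join_remote("/data", "") == "/data"
assert _join_remote("/", "a/b.txt") == "//a/b.txt"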
@@ -74,33 +74,32 @@ class TelegramAdapter:
         for message in messages:
             if not message:
                 continue
 
             media = message.document or message.video or message.photo
             if not media:
                 continue
 
-            filename = None
-            size = 0
-
-            if message.photo:
-                photo_size = message.photo.sizes[-1]
-                size = photo_size.size if hasattr(photo_size, 'size') else 0
-                filename = f"photo_{message.id}.jpg"
-            elif message.document or message.video:
-                size = media.size
-                if hasattr(media, 'attributes'):
-                    for attr in media.attributes:
-                        if hasattr(attr, 'file_name') and attr.file_name:
-                            filename = attr.file_name
-                            break
-            else:
-                filename = f"unknown_{message.id}"
+            file_meta = message.file
+            if not file_meta:
+                continue
 
+            filename = file_meta.name
             if not filename:
                 if message.text and '.' in message.text and len(message.text) < 256 and '\n' not in message.text:
                     filename = message.text
 
             if not filename:
                 filename = f"unknown_{message.id}"
 
+            size = file_meta.size
+            if size is None:
+                # Fall back when the media is missing a size
+                if hasattr(media, "size") and media.size is not None:
+                    size = media.size
+                elif message.photo and getattr(message.photo, "sizes", None):
+                    photo_size = message.photo.sizes[-1]
+                    size = getattr(photo_size, "size", 0) or 0
+                else:
+                    size = 0
 
             entries.append({
                 "name": f"{message.id}_{filename}",
@@ -246,13 +245,27 @@ class TelegramAdapter:
         if not message or not media:
             raise FileNotFoundError(f"在频道 {self.chat_id} 中未找到消息ID为 {message_id} 的文件")
 
-        if message.photo:
-            photo_size = media.sizes[-1]
-            file_size = photo_size.size if hasattr(photo_size, 'size') else 0
-            mime_type = "image/jpeg"
-        else:
-            file_size = media.size
-            mime_type = media.mime_type or "application/octet-stream"
+        file_meta = message.file
+        file_size = file_meta.size if file_meta and file_meta.size is not None else None
+        if file_size is None:
+            if hasattr(media, "size") and media.size is not None:
+                file_size = media.size
+            elif message.photo and getattr(message.photo, "sizes", None):
+                photo_size = message.photo.sizes[-1]
+                file_size = getattr(photo_size, "size", 0) or 0
+            else:
+                file_size = 0
+
+        mime_type = None
+        if file_meta and getattr(file_meta, "mime_type", None):
+            mime_type = file_meta.mime_type
+        if not mime_type:
+            if hasattr(media, "mime_type") and media.mime_type:
+                mime_type = media.mime_type
+            elif message.photo:
+                mime_type = "image/jpeg"
+            else:
+                mime_type = "application/octet-stream"
 
         start = 0
         end = file_size - 1
@@ -321,11 +334,16 @@ class TelegramAdapter:
         if not message or not media:
             raise FileNotFoundError(f"在频道 {self.chat_id} 中未找到消息ID为 {message_id} 的文件")
 
-        if message.photo:
-            photo_size = media.sizes[-1]
-            size = photo_size.size if hasattr(photo_size, 'size') else 0
-        else:
-            size = media.size
+        file_meta = message.file
+        size = file_meta.size if file_meta and file_meta.size is not None else None
+        if size is None:
+            if hasattr(media, "size") and media.size is not None:
+                size = media.size
+            elif message.photo and getattr(message.photo, "sizes", None):
+                photo_size = message.photo.sizes[-1]
+                size = getattr(photo_size, "size", 0) or 0
+            else:
+                size = 0
 
         return {
             "name": rel,
@@ -339,4 +357,4 @@ class TelegramAdapter:
         await client.disconnect()
 
 def ADAPTER_FACTORY(rec: StorageAdapter) -> TelegramAdapter:
-    return TelegramAdapter(rec)
+    return TelegramAdapter(rec)
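# The size-fallback chain this diff introduces, extracted for clarity;
# file_meta and media are hypothetical Telethon-like objects, so treat this
# as a sketch rather than Foxel API:
def resolve_size(file_meta, media, photo_sizes) -> int:
    if file_meta is not None and file_meta.size is not None:
        return file_meta.size               # preferred: Telethon's unified file view
    if getattr(media, "size", None) is not None:
        return media.size                   # documents/videos expose size directly
    if photo_sizes:
        return getattr(photo_sizes[-1], "size", 0) or 0  # largest photo variant
    return 0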
services/ai.py (283 changed lines)
@@ -1,70 +1,247 @@
 from __future__ import annotations
 
 import httpx
-from typing import List
-from services.config import ConfigCenter
+from typing import List, Sequence, Tuple
+
+from models.database import AIModel, AIProvider
+from services.ai_providers import AIProviderService
+
+
+provider_service = AIProviderService()
+
+
+class MissingModelError(RuntimeError):
+    pass
 
 
 async def describe_image_base64(base64_image: str, detail: str = "high") -> str:
     """
-    Takes a base64 image and a text prompt; returns the image description.
+    Takes a base64 image and returns its description; returns an error message when no model is configured.
     """
-    OAI_API_URL = await ConfigCenter.get("AI_VISION_API_URL")
-    VISION_MODEL = await ConfigCenter.get("AI_VISION_MODEL")
-    API_KEY = await ConfigCenter.get("AI_VISION_API_KEY")
-    payload = {
-        "model": VISION_MODEL,
-        "messages": [
-            {"role": "user", "content": [
-                {
-                    "type": "image_url",
-                    "image_url": {
-                        "url": f"data:image/jpeg;base64,{base64_image}",
-                        "detail": detail
-                    }
-                },
-                {
-                    "type": "text",
-                    "text": "描述这个图片"
-                }
-            ]}
-        ]
-    }
-    headers = {
-        "Authorization": f"Bearer {API_KEY}",
-        "Content-Type": "application/json"
-    }
     try:
-        async with httpx.AsyncClient(timeout=60.0) as client:
-            resp = await client.post(OAI_API_URL, headers=headers, json=payload)
-            resp.raise_for_status()
-            result = resp.json()
-            return result["choices"][0]["message"]["content"]
+        model, provider = await _require_model("vision")
+        if provider.api_format == "openai":
+            return await _describe_with_openai(provider, model, base64_image, detail)
+        return await _describe_with_gemini(provider, model, base64_image, detail)
+    except MissingModelError as exc:
+        return str(exc)
     except httpx.ReadTimeout:
         return "请求超时,请稍后重试。"
-    except Exception as e:
-        return f"请求失败: {str(e)}"
+    except Exception as exc:  # noqa: BLE001
+        return f"请求失败: {exc}"
 
 
 async def get_text_embedding(text: str) -> List[float]:
     """
-    Takes text and returns its embedding vector.
+    Takes text and returns its embedding vector; raises if no model is configured.
     """
-    OAI_API_URL = await ConfigCenter.get("AI_EMBED_API_URL")
-    EMBED_MODEL = await ConfigCenter.get("AI_EMBED_MODEL")
-    API_KEY = await ConfigCenter.get("AI_EMBED_API_KEY")
-    payload = {
-        "model": EMBED_MODEL,
-        "input": text
-    }
-    headers = {
-        "Authorization": f"Bearer {API_KEY}",
-        "Content-Type": "application/json"
-    }
-    async with httpx.AsyncClient() as client:
-        if OAI_API_URL.endswith("chat/completions"):
-            url = OAI_API_URL.replace("chat/completions", "embeddings")
-        else:
-            url = OAI_API_URL
-        resp = await client.post(url, headers=headers, json=payload)
-        resp.raise_for_status()
-        result = resp.json()
-        return result["data"][0]["embedding"]
+    model, provider = await _require_model("embedding")
+    if provider.api_format == "openai":
+        return await _embedding_with_openai(provider, model, text)
+    return await _embedding_with_gemini(provider, model, text)
+
+
+async def rerank_texts(query: str, documents: Sequence[str]) -> List[float]:
+    """Call the rerank model and return a score per document; returns [] when unconfigured."""
+    if not documents:
+        return []
+    try:
+        model, provider = await _require_model("rerank")
+    except MissingModelError:
+        return []
+
+    try:
+        if provider.api_format == "openai":
+            return await _rerank_with_openai(provider, model, query, documents)
+        return await _rerank_with_gemini(provider, model, query, documents)
+    except Exception:  # noqa: BLE001
+        return []
+
+
+async def _require_model(ability: str) -> Tuple[AIModel, AIProvider]:
+    model = await provider_service.get_default_model(ability)
+    if not model:
+        raise MissingModelError(f"未配置默认 {ability} 模型,请前往系统设置完成配置。")
+    provider = getattr(model, "provider", None)
+    if provider is None:
+        await model.fetch_related("provider")
+        provider = model.provider
+    if provider is None:
+        raise MissingModelError("模型缺少关联的提供商配置。")
+    if not provider.base_url:
+        raise MissingModelError("该提供商未设置 API 地址。")
+    return model, provider
+
+
+def _openai_endpoint(provider: AIProvider, path: str) -> str:
+    base = (provider.base_url or "").rstrip("/")
+    if not base:
+        raise MissingModelError("提供商 API 地址未配置。")
+    return f"{base}/{path.lstrip('/')}"
+
+
+def _openai_headers(provider: AIProvider) -> dict:
+    headers = {"Content-Type": "application/json"}
+    if provider.api_key:
+        headers["Authorization"] = f"Bearer {provider.api_key}"
+    return headers
+
+
+def _gemini_endpoint(provider: AIProvider, path: str) -> str:
+    base = (provider.base_url or "").rstrip("/")
+    if not base:
+        raise MissingModelError("提供商 API 地址未配置。")
+    url = f"{base}/{path.lstrip('/')}"
+    if provider.api_key:
+        connector = "&" if "?" in url else "?"
+        url = f"{url}{connector}key={provider.api_key}"
+    return url
+
+
+async def _describe_with_openai(provider: AIProvider, model: AIModel, base64_image: str, detail: str) -> str:
+    url = _openai_endpoint(provider, "/chat/completions")
+    payload = {
+        "model": model.name,
+        "messages": [
+            {
+                "role": "user",
+                "content": [
+                    {
+                        "type": "image_url",
+                        "image_url": {
+                            "url": f"data:image/jpeg;base64,{base64_image}",
+                            "detail": detail,
+                        },
+                    },
+                    {"type": "text", "text": "描述这个图片"},
+                ],
+            }
+        ],
+    }
+    async with httpx.AsyncClient(timeout=60.0) as client:
+        response = await client.post(url, headers=_openai_headers(provider), json=payload)
+        response.raise_for_status()
+        body = response.json()
+        return body["choices"][0]["message"]["content"]
+
+
+async def _describe_with_gemini(provider: AIProvider, model: AIModel, base64_image: str, detail: str) -> str:
+    detail_text = f"描述这个图片,细节等级:{detail}"
+    model_name = model.name if model.name.startswith("models/") else f"models/{model.name}"
+    url = _gemini_endpoint(provider, f"{model_name}:generateContent")
+    payload = {
+        "contents": [
+            {
+                "role": "user",
+                "parts": [
+                    {
+                        "inline_data": {
+                            "mime_type": "image/jpeg",
+                            "data": base64_image,
+                        }
+                    },
+                    {"text": detail_text},
+                ],
+            }
+        ]
+    }
+    async with httpx.AsyncClient(timeout=60.0) as client:
+        response = await client.post(url, json=payload)
+        response.raise_for_status()
+        body = response.json()
+        candidates = body.get("candidates") or []
+        if not candidates:
+            return ""
+        parts = candidates[0].get("content", {}).get("parts", [])
+        text_parts = [part.get("text") for part in parts if isinstance(part, dict) and part.get("text")]
+        return "\n".join(text_parts)
+
+
+async def _embedding_with_openai(provider: AIProvider, model: AIModel, text: str) -> List[float]:
+    url = _openai_endpoint(provider, "/embeddings")
+    payload = {
+        "model": model.name,
+        "input": text,
+    }
+    async with httpx.AsyncClient(timeout=30.0) as client:
+        response = await client.post(url, headers=_openai_headers(provider), json=payload)
+        response.raise_for_status()
+        body = response.json()
+        return body["data"][0]["embedding"]
+
+
+async def _embedding_with_gemini(provider: AIProvider, model: AIModel, text: str) -> List[float]:
+    model_name = model.name if model.name.startswith("models/") else f"models/{model.name}"
+    url = _gemini_endpoint(provider, f"{model_name}:embedContent")
+    payload = {
+        "model": model_name,
+        "content": {
+            "parts": [{"text": text}],
+        },
+    }
+    async with httpx.AsyncClient(timeout=30.0) as client:
+        response = await client.post(url, json=payload)
+        response.raise_for_status()
+        body = response.json()
+        embedding = body.get("embedding") or {}
+        return embedding.get("values") or []
+
+
+async def _rerank_with_openai(
+    provider: AIProvider,
+    model: AIModel,
+    query: str,
+    documents: Sequence[str],
+) -> List[float]:
+    url = _openai_endpoint(provider, "/rerank")
+    payload = {
+        "model": model.name,
+        "query": query,
+        "documents": [
+            {"id": str(idx), "text": content}
+            for idx, content in enumerate(documents)
+        ],
+    }
+    async with httpx.AsyncClient(timeout=30.0) as client:
+        response = await client.post(url, headers=_openai_headers(provider), json=payload)
+        response.raise_for_status()
+        body = response.json()
+        results = body.get("results") or body.get("data") or []
+        scores: List[float] = []
+        for item in results:
+            try:
+                scores.append(float(item.get("score", 0.0)))
+            except (TypeError, ValueError):
+                scores.append(0.0)
+        return scores
+
+
+async def _rerank_with_gemini(
+    provider: AIProvider,
+    model: AIModel,
+    query: str,
+    documents: Sequence[str],
+) -> List[float]:
+    model_name = model.name if model.name.startswith("models/") else f"models/{model.name}"
+    url = _gemini_endpoint(provider, f"{model_name}:rankContent")
+    payload = {
+        "query": {"text": query},
+        "documents": [
+            {"id": str(idx), "content": {"parts": [{"text": content}]}}
+            for idx, content in enumerate(documents)
+        ],
+    }
+    async with httpx.AsyncClient(timeout=30.0) as client:
+        response = await client.post(url, json=payload)
+        response.raise_for_status()
+        body = response.json()
+
+    scores: List[float] = []
+    ranked = body.get("rankedDocuments") or body.get("results") or []
+    for item in ranked:
+        raw_score = item.get("relevanceScore") or item.get("score") or item.get("confidenceScore")
+        try:
+            scores.append(float(raw_score))
+        except (TypeError, ValueError):
+            scores.append(0.0)
+    return scores
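# How _gemini_endpoint composes URLs, shown with a throwaway object;
# SimpleNamespace stands in for the real AIProvider row, illustrative only:
from types import SimpleNamespace

p = SimpleNamespace(base_url="https://generativelanguage.googleapis.com/v1beta/", api_key="KEY")
assert _gemini_endpoint(p, "models/gemini-pro:embedContent") == (
    "https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:embedContent?key=KEY"
)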
services/ai_providers.py (new file, 347 lines)
@@ -0,0 +1,347 @@
from __future__ import annotations

from collections.abc import Iterable
from typing import Any, Dict, List, Optional, Tuple

import httpx
from tortoise.exceptions import DoesNotExist
from tortoise.transactions import in_transaction

from models.database import AIDefaultModel, AIModel, AIProvider


ABILITIES = ["chat", "vision", "embedding", "rerank", "voice", "tools"]

OPENAI_EMBEDDING_DIMS = {
    "text-embedding-3-large": 3072,
    "text-embedding-3-small": 1536,
    "text-embedding-ada-002": 1536,
}


def _normalize_embedding_dim(value: Any) -> Optional[int]:
    if value is None:
        return None
    try:
        casted = int(value)
    except (TypeError, ValueError):
        return None
    return casted if casted > 0 else None


def _apply_embedding_dim_to_metadata(
    data: Dict[str, Any],
    embedding_dim: Optional[int],
    base_metadata: Optional[Dict[str, Any]] = None,
) -> Dict[str, Any]:
    source = base_metadata if isinstance(base_metadata, dict) else {}
    metadata: Dict[str, Any] = dict(source)
    override = data.get("metadata")
    if isinstance(override, dict) and override:
        metadata.update(override)
    if embedding_dim is None:
        metadata.pop("embedding_dimensions", None)
    else:
        metadata["embedding_dimensions"] = embedding_dim
    data["metadata"] = metadata or None
    return data


def normalize_capabilities(items: Optional[Iterable[str]]) -> List[str]:
    if not items:
        return []
    normalized = []
    for cap in items:
        key = str(cap).strip().lower()
        if key in ABILITIES and key not in normalized:
            normalized.append(key)
    return normalized


def infer_openai_capabilities(model_id: str) -> Tuple[List[str], Optional[int]]:
    lower = model_id.lower()
    caps = set()

    if any(keyword in lower for keyword in ["gpt", "chat", "turbo", "o1", "sonnet", "haiku", "thinking"]):
        caps.update({"chat", "tools"})

    if any(keyword in lower for keyword in ["vision", "gpt-4o", "gpt-4.1", "o1", "vision-preview", "omni"]):
        caps.add("vision")

    if any(keyword in lower for keyword in ["embed", "embedding"]):
        caps.add("embedding")

    if "rerank" in lower or "re-rank" in lower:
        caps.add("rerank")

    if any(keyword in lower for keyword in ["tts", "speech", "audio"]):
        caps.add("voice")

    embedding_dim = OPENAI_EMBEDDING_DIMS.get(model_id)
    return normalize_capabilities(caps), embedding_dim


def infer_gemini_capabilities(methods: Iterable[str]) -> List[str]:
    caps = set()
    for method in methods:
        m = method.lower()
        if m in {"generatecontent", "counttokens"}:
            caps.update({"chat", "tools", "vision"})
        if m == "embedcontent":
            caps.add("embedding")
        if m in {"generatespeech", "audiogeneration"}:
            caps.add("voice")
        if m == "rerank":
            caps.add("rerank")
    return normalize_capabilities(caps)


def serialize_provider(provider: AIProvider) -> Dict[str, Any]:
    return {
        "id": provider.id,
        "name": provider.name,
        "identifier": provider.identifier,
        "provider_type": provider.provider_type,
        "api_format": provider.api_format,
        "base_url": provider.base_url,
        "api_key": provider.api_key,
        "logo_url": provider.logo_url,
        "extra_config": provider.extra_config or {},
        "created_at": provider.created_at,
        "updated_at": provider.updated_at,
    }


def model_to_dict(model: AIModel, provider: Optional[AIProvider] = None) -> Dict[str, Any]:
    provider_obj = provider or getattr(model, "provider", None)
    provider_data = serialize_provider(provider_obj) if provider_obj else None
    return {
        "id": model.id,
        "provider_id": model.provider_id,
        "name": model.name,
        "display_name": model.display_name,
        "description": model.description,
        "capabilities": normalize_capabilities(model.capabilities),
        "context_window": model.context_window,
        "embedding_dimensions": model.embedding_dimensions,
        "metadata": model.metadata or {},
        "created_at": model.created_at,
        "updated_at": model.updated_at,
        "provider": provider_data,
    }


def provider_to_dict(provider: AIProvider, models: Optional[List[AIModel]] = None) -> Dict[str, Any]:
    data = serialize_provider(provider)
    if models is not None:
        data["models"] = [model_to_dict(m, provider=provider) for m in models]
    return data


class AIProviderService:
    async def list_providers(self) -> List[Dict[str, Any]]:
        providers = await AIProvider.all().order_by("id").prefetch_related("models")
        return [provider_to_dict(p, models=list(p.models)) for p in providers]

    async def get_provider(self, provider_id: int, with_models: bool = False) -> Dict[str, Any]:
        if with_models:
            provider = await AIProvider.get(id=provider_id)
            models = await provider.models.all()
            return provider_to_dict(provider, models=models)
        else:
            provider = await AIProvider.get(id=provider_id)
            return provider_to_dict(provider)

    async def create_provider(self, payload: Dict[str, Any]) -> Dict[str, Any]:
        data = payload.copy()
        data.setdefault("extra_config", {})
        provider = await AIProvider.create(**data)
        return provider_to_dict(provider)

    async def update_provider(self, provider_id: int, payload: Dict[str, Any]) -> Dict[str, Any]:
        provider = await AIProvider.get(id=provider_id)
        for field, value in payload.items():
            setattr(provider, field, value)
        await provider.save()
        return provider_to_dict(provider)

    async def delete_provider(self, provider_id: int) -> None:
        await AIProvider.filter(id=provider_id).delete()

    async def list_models(self, provider_id: int) -> List[Dict[str, Any]]:
        models = await AIModel.filter(provider_id=provider_id).order_by("id").prefetch_related("provider")
        return [model_to_dict(m) for m in models]

    async def create_model(self, provider_id: int, payload: Dict[str, Any]) -> Dict[str, Any]:
        data = payload.copy()
        data["provider_id"] = provider_id
        data["capabilities"] = normalize_capabilities(data.get("capabilities"))
        embedding_dim = _normalize_embedding_dim(data.pop("embedding_dimensions", None))
        data = _apply_embedding_dim_to_metadata(data, embedding_dim)
        model = await AIModel.create(**data)
        await model.fetch_related("provider")
        return model_to_dict(model)

    async def update_model(self, model_id: int, payload: Dict[str, Any]) -> Dict[str, Any]:
        model = await AIModel.get(id=model_id)
        data = payload.copy()
        if "capabilities" in data:
            data["capabilities"] = normalize_capabilities(data.get("capabilities"))
        embedding_dim = None
        if "embedding_dimensions" in data:
            embedding_dim = _normalize_embedding_dim(data.pop("embedding_dimensions", None))
            _apply_embedding_dim_to_metadata(data, embedding_dim, base_metadata=model.metadata)
        for field, value in data.items():
            setattr(model, field, value)
        if embedding_dim is not None or ("embedding_dimensions" in payload and embedding_dim is None):
            model.embedding_dimensions = embedding_dim
        await model.save()
        await model.fetch_related("provider")
        return model_to_dict(model)

    async def delete_model(self, model_id: int) -> None:
        await AIModel.filter(id=model_id).delete()

    async def fetch_remote_models(self, provider_id: int) -> List[Dict[str, Any]]:
        provider = await AIProvider.get(id=provider_id)
        return await self._get_remote_models(provider)

    async def _get_remote_models(self, provider: AIProvider) -> List[Dict[str, Any]]:
        if not provider.base_url:
            raise ValueError("Provider base_url is required for syncing models")

        fmt = (provider.api_format or "").lower()
        if fmt not in {"openai", "gemini"}:
            raise ValueError(f"Unsupported api_format '{provider.api_format}' for syncing models")

        if fmt == "openai":
            return await self._fetch_openai_models(provider)
        return await self._fetch_gemini_models(provider)

    async def sync_models(self, provider_id: int) -> Dict[str, int]:
        provider = await AIProvider.get(id=provider_id)
        remote_models = await self._get_remote_models(provider)

        created = 0
        updated = 0
        for entry in remote_models:
            defaults = entry.copy()
            model_id = defaults.pop("name")
            defaults["capabilities"] = normalize_capabilities(defaults.get("capabilities"))
            embedding_dim = _normalize_embedding_dim(defaults.pop("embedding_dimensions", None))
            defaults = _apply_embedding_dim_to_metadata(defaults, embedding_dim)
            obj, is_created = await AIModel.get_or_create(
                provider_id=provider.id,
                name=model_id,
                defaults=defaults,
            )
            if is_created:
                created += 1
                continue
            for field, value in defaults.items():
                setattr(obj, field, value)
            if embedding_dim is not None or ("embedding_dimensions" in entry and embedding_dim is None):
                obj.embedding_dimensions = embedding_dim
            await obj.save()
            updated += 1

        return {"created": created, "updated": updated}

    async def get_default_models(self) -> Dict[str, Optional[Dict[str, Any]]]:
        defaults = await AIDefaultModel.all().prefetch_related("model__provider")
        result: Dict[str, Optional[Dict[str, Any]]] = {ability: None for ability in ABILITIES}
        for item in defaults:
            result[item.ability] = model_to_dict(item.model, provider=item.model.provider)  # type: ignore[attr-defined]
        return result

    async def set_default_models(self, mapping: Dict[str, Optional[int]]) -> Dict[str, Optional[Dict[str, Any]]]:
        normalized = {ability: mapping.get(ability) for ability in ABILITIES}
        async with in_transaction() as connection:
            for ability, model_id in normalized.items():
                record = await AIDefaultModel.get_or_none(ability=ability)
                if model_id:
                    try:
                        model = await AIModel.get(id=model_id)
                    except DoesNotExist:
                        raise ValueError(f"Model {model_id} not found")
                    if record:
                        record.model_id = model_id
                        await record.save(using_db=connection)
                    else:
                        await AIDefaultModel.create(ability=ability, model_id=model_id)
                elif record:
                    await record.delete(using_db=connection)
        return await self.get_default_models()

    async def get_default_model(self, ability: str) -> Optional[AIModel]:
        ability_key = ability.lower()
        if ability_key not in ABILITIES:
            return None
        record = await AIDefaultModel.get_or_none(ability=ability_key)
        if not record:
            return None
        model = await AIModel.get_or_none(id=record.model_id)
        if model:
            await model.fetch_related("provider")
        return model

    async def _fetch_openai_models(self, provider: AIProvider) -> List[Dict[str, Any]]:
        base_url = provider.base_url.rstrip("/")
        url = f"{base_url}/models"
        headers = {}
        if provider.api_key:
            headers["Authorization"] = f"Bearer {provider.api_key}"

        async with httpx.AsyncClient(timeout=30.0) as client:
            response = await client.get(url, headers=headers)
            response.raise_for_status()
            payload = response.json()

        data = payload.get("data", [])
        entries: List[Dict[str, Any]] = []
        for item in data:
            model_id = item.get("id")
            if not model_id:
                continue
            capabilities, embedding_dim = infer_openai_capabilities(model_id)
            entries.append({
                "name": model_id,
                "display_name": item.get("display_name"),
                "description": item.get("description"),
                "capabilities": capabilities,
                "context_window": item.get("context_window"),
                "embedding_dimensions": embedding_dim,
                "metadata": item,
            })
        return entries

    async def _fetch_gemini_models(self, provider: AIProvider) -> List[Dict[str, Any]]:
        base_url = provider.base_url.rstrip("/")
        suffix = "/models"
        if provider.api_key:
            suffix += f"?key={provider.api_key}"
        url = f"{base_url}{suffix}"

        async with httpx.AsyncClient(timeout=30.0) as client:
            response = await client.get(url)
            response.raise_for_status()
            payload = response.json()

        data = payload.get("models", [])
        entries: List[Dict[str, Any]] = []
        for item in data:
            model_id = item.get("name")
            if not model_id:
                continue
            methods = item.get("supportedGenerationMethods") or []
            capabilities = infer_gemini_capabilities(methods)
            entries.append({
                "name": model_id,
                "display_name": item.get("displayName"),
                "description": item.get("description"),
                "capabilities": capabilities,
                "context_window": item.get("inputTokenLimit"),
                "embedding_dimensions": item.get("embeddingDimensions"),
                "metadata": item,
            })
        return entries
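# What infer_openai_capabilities infers for a couple of model ids under the
# keyword rules above; list order follows set iteration, so compare as sets:
caps, dim = infer_openai_capabilities("text-embedding-3-small")
assert set(caps) == {"embedding"} and dim == 1536
caps, dim = infer_openai_capabilities("gpt-4o")
assert set(caps) == {"chat", "tools", "vision"} and dim is None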
@@ -4,7 +4,7 @@ from typing import Any, Optional, Dict
 from dotenv import load_dotenv
 from models.database import Configuration
 load_dotenv(dotenv_path=".env")
-VERSION = "v1.2.10"
+VERSION = "v1.3.4"
 
 class ConfigCenter:
     _cache: Dict[str, Any] = {}
 
@@ -1,15 +1,95 @@
-from typing import Dict, Any
+from typing import Dict, Any, List, Tuple
 from fastapi.responses import Response
 import base64
-from services.ai import describe_image_base64, get_text_embedding
+import mimetypes
+import os
+from io import BytesIO
+
+from services.ai import describe_image_base64, get_text_embedding, provider_service
 from services.vector_db import VectorDBService, DEFAULT_VECTOR_DIMENSION
 from services.logging import LogService
-from services.config import ConfigCenter
+from PIL import Image
+
+
+CHUNK_SIZE = 800
+CHUNK_OVERLAP = 200
+MAX_IMAGE_EDGE = 1600
+JPEG_QUALITY = 85
+
+
+def _chunk_text(content: str, chunk_size: int = CHUNK_SIZE, overlap: int = CHUNK_OVERLAP) -> List[Tuple[int, str, int, int]]:
+    """Split text into fixed windows; returns (chunk_id, chunk_text, start, end) tuples."""
+    if chunk_size <= 0:
+        chunk_size = CHUNK_SIZE
+    if overlap >= chunk_size:
+        overlap = max(chunk_size // 4, 1)
+
+    chunks: List[Tuple[int, str, int, int]] = []
+    step = chunk_size - overlap
+    idx = 0
+    start = 0
+    length = len(content)
+
+    while start < length:
+        end = min(length, start + chunk_size)
+        chunk = content[start:end].strip()
+        if chunk:
+            chunks.append((idx, chunk, start, end))
+            idx += 1
+        if end >= length:
+            break
+        start += step
+    return chunks
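# The window arithmetic of _chunk_text above: CHUNK_SIZE=800 with
# CHUNK_OVERLAP=200 gives a step of 600, so a 2000-character text yields
# three windows -- [0, 800), [600, 1400), [1200, 2000); illustrative only:
spans = [(cid, start, end) for cid, _, start, end in _chunk_text("x" * 2000)]
assert spans == [(0, 0, 800), (1, 600, 1400), (2, 1200, 2000)]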
|
||||
|
||||
def _guess_mime(path: str) -> str:
|
||||
mime, _ = mimetypes.guess_type(path)
|
||||
return mime or "application/octet-stream"
|
||||
|
||||
|
||||
def _chunk_key(path: str, chunk_id: str) -> str:
|
||||
return f"{path}#chunk={chunk_id}"
|
||||
|
||||
|
||||
def _compress_image_for_embedding(input_bytes: bytes) -> Tuple[bytes, Dict[str, Any] | None]:
|
||||
"""压缩图片,降低发送到视觉模型的体积。"""
|
||||
if Image is None:
|
||||
return input_bytes, None
|
||||
|
||||
try:
|
||||
with Image.open(BytesIO(input_bytes)) as img:
|
||||
img = img.convert("RGB")
|
||||
width, height = img.size
|
||||
longest_edge = max(width, height)
|
||||
scale = 1.0
|
||||
if longest_edge > MAX_IMAGE_EDGE:
|
||||
scale = MAX_IMAGE_EDGE / float(longest_edge)
|
||||
new_size = (max(int(width * scale), 1), max(int(height * scale), 1))
|
||||
resample_mode = getattr(getattr(Image, "Resampling", Image), "LANCZOS")
|
||||
img = img.resize(new_size, resample=resample_mode)
|
||||
|
||||
buffer = BytesIO()
|
||||
img.save(buffer, format="JPEG", quality=JPEG_QUALITY, optimize=True)
|
||||
compressed = buffer.getvalue()
|
||||
|
||||
if len(compressed) < len(input_bytes):
|
||||
return compressed, {
|
||||
"original_bytes": len(input_bytes),
|
||||
"compressed_bytes": len(compressed),
|
||||
"scaled": scale < 1.0,
|
||||
"width": img.width,
|
||||
"height": img.height,
|
||||
}
|
||||
except Exception: # pragma: no cover - 任意图像处理异常时回退
|
||||
return input_bytes, None
|
||||
|
||||
return input_bytes, None
|
||||
|
||||
|
||||
class VectorIndexProcessor:
    name = "向量索引"
-   supported_exts = ["jpg", "jpeg", "png", "bmp", "txt", "md"]
+   supported_exts: List[str] = []  # empty means no extension restriction
    config_schema = [
        {
            "key": "action", "label": "操作", "type": "select", "required": True, "default": "create",
@@ -33,6 +113,7 @@ class VectorIndexProcessor:
        index_type = config.get("index_type", "vector")
        vector_db = VectorDBService()
        collection_name = "vector_collection"

        if action == "destroy":
            await vector_db.delete_vector(collection_name, path)
            await LogService.info(
@@ -42,9 +123,19 @@ class VectorIndexProcessor:
            )
            return Response(content=f"文件 {path} 的 {index_type} 索引已销毁", media_type="text/plain")

-       if index_type == 'simple':
+       mime_type = _guess_mime(path)
+
+       if index_type == "simple":
            await vector_db.ensure_collection(collection_name, vector=False)
-           await vector_db.upsert_vector(collection_name, {'path': path})
+           await vector_db.delete_vector(collection_name, path)
+           await vector_db.upsert_vector(collection_name, {
+               "path": path,
+               "source_path": path,
+               "chunk_id": "filename",
+               "mime": mime_type,
+               "type": "filename",
+               "name": os.path.basename(path),
+           })
            await LogService.info(
                "processor:vector_index",
                f"Created simple index for {path}",
@@ -53,43 +144,116 @@ class VectorIndexProcessor:
            return Response(content=f"文件 {path} 的普通索引已创建", media_type="text/plain")

        file_ext = path.split('.')[-1].lower()
-       description = ""
-       embedding = None
+       details: Dict[str, Any] = {"path": path, "action": "create", "index_type": "vector"}

-       if file_ext in ["jpg", "jpeg", "png", "bmp"]:
-           base64_image = base64.b64encode(input_bytes).decode("utf-8")
-           description = await describe_image_base64(base64_image)
-           embedding = await get_text_embedding(description)
-           log_message = f"Indexed image {path}"
-           response_message = f"图片已索引,描述:{description}"
-       elif file_ext in ["txt", "md"]:
-           text = input_bytes.decode("utf-8")
-           embedding = await get_text_embedding(text)
-           description = text[:100] + "..." if len(text) > 100 else text
-           log_message = f"Indexed text file {path}"
-           response_message = f"文本文件已索引"
-
-       if embedding is None:
-           return Response(content="不支持的文件类型", status_code=400)
-
-       raw_dim = await ConfigCenter.get('AI_EMBED_DIM', DEFAULT_VECTOR_DIMENSION)
-       try:
-           vector_dim = int(raw_dim)
-       except (TypeError, ValueError):
-           vector_dim = DEFAULT_VECTOR_DIMENSION
-       if vector_dim <= 0:
-           vector_dim = DEFAULT_VECTOR_DIMENSION
+       embedding_model = await provider_service.get_default_model("embedding")
+       vector_dim = DEFAULT_VECTOR_DIMENSION
+       if embedding_model and getattr(embedding_model, "embedding_dimensions", None):
+           try:
+               vector_dim = int(embedding_model.embedding_dimensions)
+           except (TypeError, ValueError):
+               vector_dim = DEFAULT_VECTOR_DIMENSION
+           if vector_dim <= 0:
+               vector_dim = DEFAULT_VECTOR_DIMENSION

        await vector_db.ensure_collection(collection_name, vector=True, dim=vector_dim)
-       await vector_db.upsert_vector(
-           collection_name, {'path': path, 'embedding': embedding})
+       await vector_db.delete_vector(collection_name, path)

+       if file_ext in ["jpg", "jpeg", "png", "bmp"]:
+           processed_bytes, compression = _compress_image_for_embedding(input_bytes)
+           base64_image = base64.b64encode(processed_bytes).decode("utf-8")
+           description = await describe_image_base64(base64_image)
+           embedding = await get_text_embedding(description)
+           image_mime = "image/jpeg" if compression else mime_type
+           await vector_db.upsert_vector(collection_name, {
+               "path": _chunk_key(path, "image"),
+               "source_path": path,
+               "chunk_id": "image",
+               "embedding": embedding,
+               "text": description,
+               "mime": image_mime,
+               "type": "image",
+           })
+           details["description"] = description
+           if compression:
+               details["image_compression"] = compression
+           await LogService.info(
+               "processor:vector_index",
+               f"Indexed image {path}",
+               details=details,
+           )
+           return Response(content=f"图片已索引,描述:{description}", media_type="text/plain")

+       if file_ext in ["txt", "md"]:
+           try:
+               text = input_bytes.decode("utf-8")
+           except UnicodeDecodeError:
+               return Response(content="文本文件解码失败", status_code=400)
+
+           chunks = _chunk_text(text)
+           if not chunks:
+               await vector_db.upsert_vector(collection_name, {
+                   "path": _chunk_key(path, "0"),
+                   "source_path": path,
+                   "chunk_id": "0",
+                   "embedding": await get_text_embedding(text or path),
+                   "text": text,
+                   "mime": mime_type,
+                   "type": "text",
+                   "start_offset": 0,
+                   "end_offset": len(text),
+               })
+               details["chunks"] = 1
+               await LogService.info(
+                   "processor:vector_index",
+                   f"Indexed text file {path}",
+                   details=details,
+               )
+               return Response(content="文本文件已索引", media_type="text/plain")
+
+           chunk_count = 0
+           for chunk_id, chunk_text, start, end in chunks:
+               embedding = await get_text_embedding(chunk_text)
+               await vector_db.upsert_vector(collection_name, {
+                   "path": _chunk_key(path, str(chunk_id)),
+                   "source_path": path,
+                   "chunk_id": str(chunk_id),
+                   "embedding": embedding,
+                   "text": chunk_text,
+                   "mime": mime_type,
+                   "type": "text",
+                   "start_offset": start,
+                   "end_offset": end,
+               })
+               chunk_count += 1
+
+           details["chunks"] = chunk_count
+           sample = chunks[0][1]
+           details["sample"] = sample[:120]
+           await LogService.info(
+               "processor:vector_index",
+               f"Indexed text file {path}",
+               details=details,
+           )
+           return Response(content="文本文件已索引", media_type="text/plain")

+       # Other types are not vector-indexed yet; fall back to a filename index
+       await vector_db.delete_vector(collection_name, path)
+       await vector_db.upsert_vector(collection_name, {
+           "path": _chunk_key(path, "fallback"),
+           "source_path": path,
+           "chunk_id": "filename",
+           "mime": mime_type,
+           "type": "filename",
+           "name": os.path.basename(path),
+           "embedding": [0.0] * vector_dim,
+       })
        await LogService.info(
            "processor:vector_index",
-           log_message,
-           details={"path": path, "description": description, "action": "create", "index_type": "vector"},
+           f"File type fallback to simple index for {path}",
+           details={"path": path, "action": "create", "index_type": "simple", "original_type": file_ext},
        )
-       return Response(content=response_message, media_type="text/plain")
+       return Response(content="暂不支持该类型的向量索引,已创建文件名索引", media_type="text/plain")


PROCESSOR_TYPE = "vector_index"
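Because every chunk is stored under a synthetic primary key (`path#chunk=<id>`) while `source_path` keeps the real file path, callers that want file-level results have to fold chunk hits back together. A hedged sketch of that fold; `best_by_file` is a hypothetical helper, and it assumes higher scores are better, as with the COSINE metric used above:

```python
from typing import Any, Dict, List

def best_by_file(hits: List[Dict[str, Any]]) -> Dict[str, Dict[str, Any]]:
    # Keep only the best-scoring chunk per source file.
    best: Dict[str, Dict[str, Any]] = {}
    for hit in hits:
        entity = hit.get("entity") or {}
        source = entity.get("source_path") or entity.get("path")
        if not source:
            continue
        current = best.get(source)
        if current is None or (hit.get("distance") or 0) > (current.get("distance") or 0):
            best[source] = hit
    return best
```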
@@ -1,6 +1,10 @@
from __future__ import annotations
+import asyncio
+import inspect
import io
import hashlib
+import tempfile
+from contextlib import suppress
from pathlib import Path
from typing import Tuple
from fastapi import HTTPException
@@ -8,7 +12,10 @@ from fastapi import HTTPException
ALLOWED_EXT = {"jpg", "jpeg", "png", "webp", "gif", "bmp",
               "tiff", "arw", "cr2", "cr3", "nef", "rw2", "orf", "pef", "dng"}
RAW_EXT = {"arw", "cr2", "cr3", "nef", "rw2", "orf", "pef", "dng"}
-MAX_SOURCE_SIZE = 200 * 1024 * 1024
+VIDEO_EXT = {"mp4", "mov", "m4v", "avi", "mkv", "wmv", "flv", "webm", "mpg", "mpeg", "3gp"}
+MAX_IMAGE_SOURCE_SIZE = 200 * 1024 * 1024
+VIDEO_RANGE_LIMIT = 16 * 1024 * 1024  # 16MB
+VIDEO_INITIAL_CHUNK = 4 * 1024 * 1024
CACHE_ROOT = Path('data/.thumb_cache')


@@ -26,6 +33,13 @@ def is_raw_filename(name: str) -> bool:
    return parts[1].lower() in RAW_EXT


+def is_video_filename(name: str) -> bool:
+    parts = name.rsplit('.', 1)
+    if len(parts) < 2:
+        return False
+    return parts[1].lower() in VIDEO_EXT
+
+
def _cache_key(adapter_id: int, rel: str, size: int, mtime: int, w: int, h: int, fit: str) -> str:
    raw = f"{adapter_id}|{rel}|{size}|{mtime}|{w}x{h}|{fit}".encode()
    return hashlib.sha1(raw).hexdigest()
@@ -40,6 +54,30 @@ def _ensure_cache_dir(p: Path):
    p.parent.mkdir(parents=True, exist_ok=True)


+def _image_to_webp(im, w: int, h: int, fit: str) -> Tuple[bytes, str]:
+    from PIL import Image
+    if im.mode not in ("RGB", "RGBA"):
+        im = im.convert("RGBA" if im.mode in ("P", "LA") else "RGB")
+    if fit == 'cover':
+        im_ratio = im.width / im.height
+        target_ratio = w / h
+        if im_ratio > target_ratio:
+            new_h = h
+            new_w = int(h * im_ratio)
+        else:
+            new_w = w
+            new_h = int(w / im_ratio)
+        im = im.resize((new_w, new_h))
+        left = max(0, (im.width - w)//2)
+        top = max(0, (im.height - h)//2)
+        im = im.crop((left, top, left + w, top + h))
+    else:
+        im.thumbnail((w, h))
+    buf = io.BytesIO()
+    im.save(buf, 'WEBP', quality=80)
+    return buf.getvalue(), 'image/webp'
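For a concrete sense of the `cover` branch: a 4000x3000 source asked for a 256x256 cover thumb scales to roughly 341x256 first, then center-crops to exactly 256x256. A small illustrative check (Pillow required; the values are just an example, not from the diff):

```python
import io
from PIL import Image

im = Image.new("RGB", (4000, 3000), "gray")        # stand-in source image
data, mime = _image_to_webp(im, 256, 256, "cover")
thumb = Image.open(io.BytesIO(data))
assert thumb.size == (256, 256) and mime == "image/webp"
```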
def generate_thumb(data: bytes, w: int, h: int, fit: str, is_raw: bool = False) -> Tuple[bytes, str]:
    from PIL import Image
    if is_raw:
@@ -64,35 +102,172 @@ def generate_thumb(data: bytes, w: int, h: int, fit: str, is_raw: bool = False)
    else:
        im = Image.open(io.BytesIO(data))

-    if im.mode not in ("RGB", "RGBA"):
-        im = im.convert("RGBA" if im.mode in ("P", "LA") else "RGB")
-    if fit == 'cover':
-        im_ratio = im.width / im.height
-        target_ratio = w / h
-        if im_ratio > target_ratio:
-            new_h = h
-            new_w = int(h * im_ratio)
-        else:
-            new_w = w
-            new_h = int(w / im_ratio)
-        im = im.resize((new_w, new_h))
-        left = max(0, (im.width - w)//2)
-        top = max(0, (im.height - h)//2)
-        im = im.crop((left, top, left + w, top + h))
-    else:
-        im.thumbnail((w, h))
-    buf = io.BytesIO()
-    im.save(buf, 'WEBP', quality=80)
-    return buf.getvalue(), 'image/webp'
+    return _image_to_webp(im, w, h, fit)
+
+
+async def _collect_response_bytes(response, limit: int) -> bytes:
+    if response is None:
+        return b""
+
+    try:
+        if isinstance(response, (bytes, bytearray)):
+            return bytes(response[:limit])
+
+        body = getattr(response, "body", None)
+        if body is not None:
+            return bytes(body[:limit])
+
+        iterator = getattr(response, "body_iterator", None)
+        if iterator is not None:
+            data = bytearray()
+            async for chunk in iterator:
+                if not chunk:
+                    continue
+                need = limit - len(data)
+                if need <= 0:
+                    break
+                data.extend(chunk[:need])
+                if len(data) >= limit:
+                    break
+            return bytes(data)
+
+        if hasattr(response, "__aiter__"):
+            data = bytearray()
+            async for chunk in response:
+                if not chunk:
+                    continue
+                need = limit - len(data)
+                if need <= 0:
+                    break
+                data.extend(chunk[:need])
+                if len(data) >= limit:
+                    break
+            return bytes(data)
+    finally:
+        close_func = getattr(response, "close", None)
+        if callable(close_func):
+            result = close_func()
+            if inspect.isawaitable(result):
+                await result
+
+    return b""
+
+
+async def _read_range_slice(adapter, root: str, rel: str, start: int, end: int) -> bytes:
+    read_range = getattr(adapter, "read_file_range", None)
+    if callable(read_range):
+        try:
+            return await read_range(root, rel, start, end)
+        except TypeError:
+            return await read_range(root, rel, start, end=end)
+
+    stream_impl = getattr(adapter, "stream_file", None)
+    if callable(stream_impl):
+        range_header = f"bytes={start}-{end}"
+        response = await stream_impl(root, rel, range_header)
+        expected = end - start + 1
+        return await _collect_response_bytes(response, expected)
+
+    read_file = getattr(adapter, "read_file", None)
+    if callable(read_file) and start == 0:
+        data = await read_file(root, rel)
+        slice_end = end + 1
+        return data[:slice_end]
+
+    return b""
+
+
+async def _read_video_prefix(adapter, root: str, rel: str, size: int, limit: int = VIDEO_RANGE_LIMIT) -> bytes:
+    chunk_size = min(VIDEO_INITIAL_CHUNK, limit)
+    offset = 0
+    collected = bytearray()
+
+    while len(collected) < limit:
+        end = offset + chunk_size - 1
+        data = await _read_range_slice(adapter, root, rel, offset, end)
+        if not data:
+            break
+        collected.extend(data)
+        if len(data) < chunk_size:
+            break
+        offset += len(data)
+        remaining = limit - len(collected)
+        if remaining <= 0:
+            break
+        chunk_size = min(chunk_size * 2, remaining)
+
+    if not collected and size <= limit:
+        read_file = getattr(adapter, "read_file", None)
+        if callable(read_file):
+            blob = await read_file(root, rel)
+            if blob:
+                return bytes(blob[:limit])
+
+    return bytes(collected[:limit])
+
+
+async def _run_ffmpeg_extract_frame(src_path: str, dst_path: str):
+    cmd = [
+        "ffmpeg",
+        "-y",
+        "-hide_banner",
+        "-loglevel", "error",
+        "-i", src_path,
+        "-frames:v", "1",
+        dst_path,
+    ]
+    try:
+        proc = await asyncio.create_subprocess_exec(
+            *cmd,
+            stdout=asyncio.subprocess.PIPE,
+            stderr=asyncio.subprocess.PIPE,
+        )
+    except FileNotFoundError as e:
+        raise RuntimeError("未找到 ffmpeg,可执行文件需要在 PATH 中") from e
+
+    stdout, stderr = await proc.communicate()
+    if proc.returncode != 0:
+        message = stderr.decode().strip() or stdout.decode().strip() or "ffmpeg 执行失败"
+        raise RuntimeError(message)
+
+
+async def _generate_video_thumb(video_bytes: bytes, rel: str, w: int, h: int, fit: str) -> Tuple[bytes, str]:
+    from PIL import Image
+
+    suffix = Path(rel).suffix or ".mp4"
+    src_tmp = tempfile.NamedTemporaryFile(suffix=suffix, delete=False)
+    src_path = src_tmp.name
+    try:
+        src_tmp.write(video_bytes)
+        src_tmp.flush()
+    finally:
+        src_tmp.close()
+
+    dst_tmp = tempfile.NamedTemporaryFile(suffix=".png", delete=False)
+    dst_path = dst_tmp.name
+    dst_tmp.close()
+
+    try:
+        await _run_ffmpeg_extract_frame(src_path, dst_path)
+        with Image.open(dst_path) as im:
+            im.load()
+            return _image_to_webp(im, w, h, fit)
+    finally:
+        with suppress(FileNotFoundError):
+            Path(src_path).unlink()
+        with suppress(FileNotFoundError):
+            Path(dst_path).unlink()
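A hedged end-to-end sketch of the video path: read at most `VIDEO_RANGE_LIMIT` bytes from the front of the file (enough for ffmpeg to find a first frame in most containers), then hand the prefix to the frame extractor. `DummyAdapter` is a stand-in for a real storage adapter, and the paths are illustrative:

```python
class DummyAdapter:
    # Minimal local-disk adapter exposing only read_file_range.
    async def read_file_range(self, root: str, rel: str, start: int, end: int) -> bytes:
        with open(f"{root}/{rel}", "rb") as fh:
            fh.seek(start)
            return fh.read(end - start + 1)

async def video_thumb_demo() -> bytes:
    adapter = DummyAdapter()
    prefix = await _read_video_prefix(adapter, "/data", "clip.mp4", size=50_000_000)
    webp, mime = await _generate_video_thumb(prefix, "clip.mp4", 320, 180, "cover")
    return webp  # mime == "image/webp"
```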
async def get_or_create_thumb(adapter, adapter_id: int, root: str, rel: str, w: int, h: int, fit: str = 'cover'):
    stat = await adapter.stat_file(root, rel)
-    if stat['size'] > MAX_SOURCE_SIZE:
+    size = int(stat.get('size') or 0)
+    is_video = is_video_filename(rel)
+    if not is_video and size > MAX_IMAGE_SOURCE_SIZE:
        raise HTTPException(400, detail="Image too large for thumbnail")

-    key = _cache_key(adapter_id, rel, stat['size'], int(
-        stat['mtime']), w, h, fit)
+    key = _cache_key(adapter_id, rel, size, int(
+        stat.get('mtime', 0)), w, h, fit)
    path = _cache_path(key)
    if path.exists():
        return path.read_bytes(), 'image/webp', key
@@ -119,14 +294,33 @@ async def get_or_create_thumb(adapter, adapter_id: int, root: str, rel: str, w:
    thumb_bytes, mime = None, None

-    if not thumb_bytes:
-        read_data = await adapter.read_file(root, rel)
-        try:
-            thumb_bytes, mime = generate_thumb(
-                read_data, w, h, fit, is_raw=is_raw_filename(rel))
-        except Exception as e:
-            print(e)
-            raise HTTPException(
-                500, detail=f"Thumbnail generation failed: {e}")
+    if is_video:
+        try:
+            video_bytes = await _read_video_prefix(adapter, root, rel, size)
+        except HTTPException:
+            raise
+        except Exception as e:
+            print(f"Video prefix read failed: {e}")
+            raise HTTPException(500, detail=f"Video read failed: {e}")
+
+        if not video_bytes:
+            raise HTTPException(500, detail="Unable to read video data for thumbnail")
+
+        try:
+            thumb_bytes, mime = await _generate_video_thumb(video_bytes, rel, w, h, fit)
+        except Exception as e:
+            print(f"Video thumbnail generation failed: {e}")
+            raise HTTPException(
+                500, detail=f"Video thumbnail generation failed: {e}")
+    else:
+        read_data = await adapter.read_file(root, rel)
+        try:
+            thumb_bytes, mime = generate_thumb(
+                read_data, w, h, fit, is_raw=is_raw_filename(rel))
+        except Exception as e:
+            print(e)
+            raise HTTPException(
+                500, detail=f"Thumbnail generation failed: {e}")

    if thumb_bytes:
        path.write_bytes(thumb_bytes)
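The cache key hashes everything that should invalidate a thumbnail: adapter, path, size, mtime, and the requested geometry. Any change to the source file or to the requested rendering therefore yields a new cache file. A small illustration with made-up values:

```python
k1 = _cache_key(1, "photos/cat.jpg", 1024, 1700000000, 256, 256, "cover")
k2 = _cache_key(1, "photos/cat.jpg", 1024, 1700000000, 256, 256, "cover")
k3 = _cache_key(1, "photos/cat.jpg", 1024, 1700000001, 256, 256, "cover")  # mtime bumped
assert k1 == k2 and k1 != k3  # stable for identical inputs, new key after modification
```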
@@ -39,6 +39,35 @@ class MilvusLiteProvider(BaseVectorProvider):
            raise RuntimeError("Milvus Lite client is not initialized")
        return self.client

+    @staticmethod
+    def _extract_hit_payload(hit: Any) -> tuple[Any, Any, Dict[str, Any]]:
+        hit_id = getattr(hit, "id", None)
+        distance = getattr(hit, "distance", None)
+        payload: Dict[str, Any] = {}
+
+        raw: Dict[str, Any] | None = None
+        if hasattr(hit, "entity"):
+            raw_entity = getattr(hit, "entity")
+            if hasattr(raw_entity, "to_dict"):
+                raw = dict(raw_entity.to_dict())
+            else:
+                raw = dict(raw_entity)
+        elif isinstance(hit, dict):
+            raw = dict(hit)
+
+        if raw:
+            hit_id = hit_id or raw.get("id")
+            distance = distance if distance is not None else raw.get("distance")
+            inner = raw.get("entity")
+            if isinstance(inner, dict):
+                payload = dict(inner)
+            else:
+                payload = {k: v for k, v in raw.items() if k not in {"id", "distance", "entity"}}
+
+        payload.setdefault("path", payload.get("source_path"))
+        payload.setdefault("source_path", payload.get("path"))
+        return hit_id, distance, payload
+
    @staticmethod
    def _to_int(value: Any) -> int:
        try:
@@ -50,15 +79,20 @@ class MilvusLiteProvider(BaseVectorProvider):
        client = self._get_client()
        if client.has_collection(collection_name):
            return
+        common_fields = [
+            FieldSchema(name="path", dtype=DataType.VARCHAR, max_length=512, is_primary=True, auto_id=False),
+            FieldSchema(name="source_path", dtype=DataType.VARCHAR, max_length=512, is_primary=False, auto_id=False),
+        ]
+
        if vector:
            vector_dim = dim if isinstance(dim, int) and dim > 0 else 0
            if vector_dim <= 0:
                vector_dim = 4096
            fields = [
-                FieldSchema(name="path", dtype=DataType.VARCHAR, max_length=512, is_primary=True, auto_id=False),
+                *common_fields,
                FieldSchema(name="embedding", dtype=DataType.FLOAT_VECTOR, dim=vector_dim),
            ]
-            schema = CollectionSchema(fields, description="Image vector collection")
+            schema = CollectionSchema(fields, description="Vector collection", enable_dynamic_field=True)
            client.create_collection(collection_name, schema=schema)
            index_params = MilvusClient.prepare_index_params()
            index_params.add_index(
@@ -70,38 +104,86 @@ class MilvusLiteProvider(BaseVectorProvider):
            )
            client.create_index(collection_name, index_params=index_params)
        else:
-            fields = [
-                FieldSchema(name="path", dtype=DataType.VARCHAR, max_length=512, is_primary=True, auto_id=False),
-            ]
-            schema = CollectionSchema(fields, description="Simple file index")
+            schema = CollectionSchema(common_fields, description="Simple file index", enable_dynamic_field=True)
            client.create_collection(collection_name, schema=schema)

    def upsert_vector(self, collection_name: str, data: Dict[str, Any]) -> None:
-        self._get_client().upsert(collection_name, data)
+        payload = dict(data)
+        payload.setdefault("source_path", payload.get("path"))
+        payload.setdefault("vector_id", payload.get("path"))
+        self._get_client().upsert(collection_name, data=[payload])

    def delete_vector(self, collection_name: str, path: str) -> None:
-        self._get_client().delete(collection_name, ids=[path])
+        client = self._get_client()
+        escaped = path.replace('"', '\\"')
+        client.delete(collection_name, filter=f'source_path == "{escaped}"')
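Deleting by a `source_path` filter instead of by primary-key ids is what lets one call remove every chunk of a file (`...#chunk=0`, `...#chunk=1`, the image record, the filename fallback) in a single sweep; the quote-escaping guards paths that contain `"`. A sketch of the expansion with an illustrative path:

```python
path = 'docs/"draft".md'
escaped = path.replace('"', '\\"')
expr = f'source_path == "{escaped}"'
# expr == 'source_path == "docs/\\"draft\\".md"'
# One delete with this filter removes docs/"draft".md#chunk=0, #chunk=1, ... at once.
```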
    def search_vectors(self, collection_name: str, query_embedding, top_k: int):
        search_params = {"metric_type": "COSINE"}
-        return self._get_client().search(
+        output_fields = [
+            "path",
+            "source_path",
+            "chunk_id",
+            "mime",
+            "text",
+            "start_offset",
+            "end_offset",
+            "type",
+            "name",
+        ]
+        raw_results = self._get_client().search(
            collection_name,
            data=[query_embedding],
            anns_field="embedding",
            search_params=search_params,
            limit=top_k,
-            output_fields=["path"],
+            output_fields=output_fields,
        )
+        formatted: List[List[Dict[str, Any]]] = []
+        for hits in raw_results:
+            bucket: List[Dict[str, Any]] = []
+            for hit in hits:
+                hit_id, distance, entity = self._extract_hit_payload(hit)
+                bucket.append({
+                    "id": hit_id,
+                    "distance": distance,
+                    "entity": entity,
+                })
+            formatted.append(bucket)
+        return formatted

    def search_by_path(self, collection_name: str, query_path: str, top_k: int):
-        filter_expr = f"path like '%{query_path}%'" if query_path else "path like '%%'"
+        if query_path:
+            escaped = query_path.replace('"', '\\"')
+            filter_expr = f'source_path like "%{escaped}%"'
+        else:
+            filter_expr = "source_path like '%%'"
        results = self._get_client().query(
            collection_name,
            filter=filter_expr,
            limit=top_k,
-            output_fields=["path"],
+            output_fields=[
+                "path",
+                "source_path",
+                "chunk_id",
+                "mime",
+                "text",
+                "start_offset",
+                "end_offset",
+                "type",
+                "name",
+            ],
        )
-        return [[{"id": r["path"], "distance": 1.0, "entity": {"path": r["path"]}} for r in results]]
+        formatted = []
+        for row in results:
+            entity = dict(row)
+            entity.setdefault("path", entity.get("source_path"))
+            formatted.append({
+                "id": entity.get("path"),
+                "distance": 1.0,
+                "entity": entity,
+            })
+        return [formatted]
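Both search paths now return the same normalized shape, a list of buckets of `{id, distance, entity}` dicts, so callers no longer care whether a record came from a vector search or a path query. A sketch of what a consumer can rely on; `provider` is assumed to be an initialized provider instance and the field values are illustrative:

```python
results = provider.search_by_path("vector_collection", "notes/todo.md", top_k=5)
for bucket in results:   # one bucket per query
    for hit in bucket:   # normalized by _extract_hit_payload / the query formatter
        entity = hit["entity"]
        print(hit["id"], hit["distance"], entity.get("source_path"), entity.get("chunk_id"))
```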
    def get_all_stats(self) -> Dict[str, Any]:
        client = self._get_client()

@@ -47,6 +47,35 @@ class MilvusServerProvider(BaseVectorProvider):
            raise RuntimeError("Milvus Server client is not initialized")
        return self.client

+    @staticmethod
+    def _extract_hit_payload(hit: Any) -> tuple[Any, Any, Dict[str, Any]]:
+        hit_id = getattr(hit, "id", None)
+        distance = getattr(hit, "distance", None)
+        payload: Dict[str, Any] = {}
+
+        raw: Dict[str, Any] | None = None
+        if hasattr(hit, "entity"):
+            raw_entity = getattr(hit, "entity")
+            if hasattr(raw_entity, "to_dict"):
+                raw = dict(raw_entity.to_dict())
+            else:
+                raw = dict(raw_entity)
+        elif isinstance(hit, dict):
+            raw = dict(hit)
+
+        if raw:
+            hit_id = hit_id or raw.get("id")
+            distance = distance if distance is not None else raw.get("distance")
+            inner = raw.get("entity")
+            if isinstance(inner, dict):
+                payload = dict(inner)
+            else:
+                payload = {k: v for k, v in raw.items() if k not in {"id", "distance", "entity"}}
+
+        payload.setdefault("path", payload.get("source_path"))
+        payload.setdefault("source_path", payload.get("path"))
+        return hit_id, distance, payload
+
    @staticmethod
    def _to_int(value: Any) -> int:
        try:
@@ -58,15 +87,19 @@ class MilvusServerProvider(BaseVectorProvider):
        client = self._get_client()
        if client.has_collection(collection_name):
            return
+        common_fields = [
+            FieldSchema(name="path", dtype=DataType.VARCHAR, max_length=512, is_primary=True, auto_id=False),
+            FieldSchema(name="source_path", dtype=DataType.VARCHAR, max_length=512, is_primary=False, auto_id=False),
+        ]
        if vector:
            vector_dim = dim if isinstance(dim, int) and dim > 0 else 0
            if vector_dim <= 0:
                vector_dim = 4096
            fields = [
-                FieldSchema(name="path", dtype=DataType.VARCHAR, max_length=512, is_primary=True, auto_id=False),
+                *common_fields,
                FieldSchema(name="embedding", dtype=DataType.FLOAT_VECTOR, dim=vector_dim),
            ]
-            schema = CollectionSchema(fields, description="Image vector collection")
+            schema = CollectionSchema(fields, description="Vector collection", enable_dynamic_field=True)
            client.create_collection(collection_name, schema=schema)
            index_params = MilvusClient.prepare_index_params()
            index_params.add_index(
@@ -78,38 +111,86 @@ class MilvusServerProvider(BaseVectorProvider):
            )
            client.create_index(collection_name, index_params=index_params)
        else:
-            fields = [
-                FieldSchema(name="path", dtype=DataType.VARCHAR, max_length=512, is_primary=True, auto_id=False),
-            ]
-            schema = CollectionSchema(fields, description="Simple file index")
+            schema = CollectionSchema(common_fields, description="Simple file index", enable_dynamic_field=True)
            client.create_collection(collection_name, schema=schema)

    def upsert_vector(self, collection_name: str, data: Dict[str, Any]) -> None:
-        self._get_client().upsert(collection_name, data)
+        payload = dict(data)
+        payload.setdefault("source_path", payload.get("path"))
+        payload.setdefault("vector_id", payload.get("path"))
+        self._get_client().upsert(collection_name, data=[payload])

    def delete_vector(self, collection_name: str, path: str) -> None:
-        self._get_client().delete(collection_name, ids=[path])
+        client = self._get_client()
+        escaped = path.replace('"', '\\"')
+        client.delete(collection_name, filter=f'source_path == "{escaped}"')

    def search_vectors(self, collection_name: str, query_embedding, top_k: int):
        search_params = {"metric_type": "COSINE"}
-        return self._get_client().search(
+        output_fields = [
+            "path",
+            "source_path",
+            "chunk_id",
+            "mime",
+            "text",
+            "start_offset",
+            "end_offset",
+            "type",
+            "name",
+        ]
+        raw_results = self._get_client().search(
            collection_name,
            data=[query_embedding],
            anns_field="embedding",
            search_params=search_params,
            limit=top_k,
-            output_fields=["path"],
+            output_fields=output_fields,
        )
+        formatted: List[List[Dict[str, Any]]] = []
+        for hits in raw_results:
+            bucket: List[Dict[str, Any]] = []
+            for hit in hits:
+                hit_id, distance, entity = self._extract_hit_payload(hit)
+                bucket.append({
+                    "id": hit_id,
+                    "distance": distance,
+                    "entity": entity,
+                })
+            formatted.append(bucket)
+        return formatted

    def search_by_path(self, collection_name: str, query_path: str, top_k: int):
-        filter_expr = f"path like '%{query_path}%'" if query_path else "path like '%%'"
+        if query_path:
+            escaped = query_path.replace('"', '\\"')
+            filter_expr = f'source_path like "%{escaped}%"'
+        else:
+            filter_expr = "source_path like '%%'"
        results = self._get_client().query(
            collection_name,
            filter=filter_expr,
            limit=top_k,
-            output_fields=["path"],
+            output_fields=[
+                "path",
+                "source_path",
+                "chunk_id",
+                "mime",
+                "text",
+                "start_offset",
+                "end_offset",
+                "type",
+                "name",
+            ],
        )
-        return [[{"id": r["path"], "distance": 1.0, "entity": {"path": r["path"]}} for r in results]]
+        formatted = []
+        for row in results:
+            entity = dict(row)
+            entity.setdefault("path", entity.get("source_path"))
+            formatted.append({
+                "id": entity.get("path"),
+                "distance": 1.0,
+                "entity": entity,
+            })
+        return [formatted]

    def get_all_stats(self) -> Dict[str, Any]:
        client = self._get_client()
@@ -58,29 +58,59 @@ class QdrantProvider(BaseVectorProvider):
        size = dim if vector and isinstance(dim, int) and dim > 0 else 1
        return qmodels.VectorParams(size=size, distance=qmodels.Distance.COSINE)

+    def _ensure_payload_indexes(self, client: QdrantClient, collection_name: str) -> None:
+        for field in ("path", "source_path"):
+            try:
+                client.create_payload_index(
+                    collection_name=collection_name,
+                    field_name=field,
+                    field_schema="keyword",
+                )
+            except Exception as exc:  # pragma: no cover - depends on the external service
+                message = str(exc).lower()
+                if "already exists" in message or "index exists" in message:
+                    continue
+                # Older Qdrant versions may raise with a status code; tolerate duplicate creation
+                raise
+
    def ensure_collection(self, collection_name: str, vector: bool, dim: int) -> None:
        client = self._get_client()
        try:
-            if client.collection_exists(collection_name):
-                return
+            exists = client.collection_exists(collection_name)
        except Exception as exc:  # pragma: no cover - depends on the external service
            raise RuntimeError(f"Failed to check Qdrant collection '{collection_name}': {exc}") from exc

+        if exists:
+            try:
+                self._ensure_payload_indexes(client, collection_name)
+            except Exception:
+                pass
+            return
+
        vectors_config = self._vector_params(vector, dim)
        try:
            client.create_collection(collection_name=collection_name, vectors_config=vectors_config)
        except Exception as exc:  # pragma: no cover
            if "already exists" in str(exc).lower():
+                try:
+                    self._ensure_payload_indexes(client, collection_name)
+                except Exception:
+                    pass
                return
            raise RuntimeError(f"Failed to create Qdrant collection '{collection_name}': {exc}") from exc

+        try:
+            self._ensure_payload_indexes(client, collection_name)
+        except Exception:
+            pass
+
    @staticmethod
-    def _point_id(path: str) -> str:
-        return str(uuid5(NAMESPACE_URL, path))
+    def _point_id(uid: str) -> str:
+        return str(uuid5(NAMESPACE_URL, uid))

    def _prepare_point(self, data: Dict[str, Any]) -> qmodels.PointStruct:
-        path = data.get("path")
-        if not path:
+        uid = data.get("path")
+        if not uid:
            raise ValueError("Qdrant upsert requires 'path' in data")

        embedding = data.get("embedding")
@@ -89,8 +119,11 @@ class QdrantProvider(BaseVectorProvider):
        else:
            vector = [float(x) for x in embedding]

-        payload = {"path": path}
-        return qmodels.PointStruct(id=self._point_id(path), vector=vector, payload=payload)
+        payload = {k: v for k, v in data.items() if k != "embedding"}
+        payload.setdefault("vector_id", uid)
+        source_path = payload.get("source_path") or payload.get("path")
+        payload["path"] = source_path
+        return qmodels.PointStruct(id=self._point_id(str(uid)), vector=vector, payload=payload)

    def upsert_vector(self, collection_name: str, data: Dict[str, Any]) -> None:
        client = self._get_client()
@@ -99,7 +132,12 @@ class QdrantProvider(BaseVectorProvider):

    def delete_vector(self, collection_name: str, path: str) -> None:
        client = self._get_client()
-        selector = qmodels.PointIdsList(points=[self._point_id(path)])
+        condition = qmodels.FieldCondition(
+            key="path",
+            match=qmodels.MatchValue(value=path),
+        )
+        flt = qmodels.Filter(must=[condition])
+        selector = qmodels.FilterSelector(filter=flt)
        client.delete(collection_name=collection_name, points_selector=selector, wait=True)
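Qdrant point ids must be UUIDs or integers, so the provider derives a deterministic UUIDv5 from the chunk key; the same key always maps to the same point id, which is what makes upserts idempotent. Deletion, like the Milvus providers, now sweeps by payload filter so all chunks of a file go together. A small sketch:

```python
from uuid import NAMESPACE_URL, uuid5

a = str(uuid5(NAMESPACE_URL, "/docs/readme.md#chunk=0"))
b = str(uuid5(NAMESPACE_URL, "/docs/readme.md#chunk=0"))
assert a == b  # deterministic: re-indexing overwrites instead of duplicating points
```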
    def _format_search_results(self, points: Sequence[qmodels.ScoredPoint]):
@@ -107,7 +145,7 @@ class QdrantProvider(BaseVectorProvider):
            {
                "id": point.id,
                "distance": point.score,
-                "entity": {"path": (point.payload or {}).get("path")},
+                "entity": point.payload or {},
            }
            for point in points
        ]
@@ -141,11 +179,11 @@ class QdrantProvider(BaseVectorProvider):
                break

            for record in records:
-                path = (record.payload or {}).get("path")
-                if query_path and path:
-                    if query_path not in path:
-                        continue
-                results.append({"id": record.id, "distance": 1.0, "entity": {"path": path}})
+                payload = record.payload or {}
+                path = payload.get("path")
+                if query_path and path and query_path not in path:
+                    continue
+                results.append({"id": record.id, "distance": 1.0, "entity": payload})
                if len(results) >= top_k:
                    break


@@ -15,14 +15,16 @@ import aiofiles
from models import StorageAdapter
from .adapters.registry import runtime_registry
from api.response import page
-from .thumbnail import is_image_filename, is_raw_filename
+from .thumbnail import is_image_filename, is_raw_filename, is_video_filename
from services.processors.registry import get as get_processor
from services.tasks import task_service
from services.logging import LogService
from services.config import ConfigCenter
+from services.vector_db import VectorDBService


CROSS_TRANSFER_TEMP_ROOT = Path("data/tmp/cross_transfer")
+DIRECT_REDIRECT_CONFIG_KEY = "enable_direct_download_307"

if TYPE_CHECKING:
    from services.task_queue import Task
@@ -87,6 +89,31 @@ async def resolve_adapter_and_rel(path: str):
    return adapter_instance, adapter_model, effective_root, rel


+async def maybe_redirect_download(adapter_instance, adapter_model, root: str, rel: str):
+    """If the adapter has 307 direct links enabled, try to build a redirect response."""
+    if not rel or rel.endswith('/'):
+        return None
+
+    config = getattr(adapter_model, "config", {}) or {}
+    if not config.get(DIRECT_REDIRECT_CONFIG_KEY):
+        return None
+
+    handler = getattr(adapter_instance, "get_direct_download_response", None)
+    if not callable(handler):
+        return None
+
+    try:
+        response = await handler(root, rel)
+    except FileNotFoundError:
+        raise
+    except Exception:
+        return None
+
+    if isinstance(response, Response):
+        return response
+    return None
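The hook is duck-typed: any adapter that defines `get_direct_download_response` and whose config sets `enable_direct_download_307` participates, and everything else falls through to normal streaming. A hedged sketch of an adapter-side implementation; the presigned-URL helper is an assumption, not part of the diff:

```python
from fastapi.responses import RedirectResponse

class S3LikeAdapter:
    async def get_direct_download_response(self, root: str, rel: str):
        # Hypothetical helper; a real adapter would presign against its own backend.
        url = await self._presign_url(f"{root}/{rel}", expires_in=300)
        return RedirectResponse(url, status_code=307)
```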
async def _ensure_method(adapter: Any, method: str):
    func = getattr(adapter, method, None)
    if not callable(func):
@@ -116,7 +143,7 @@ async def list_virtual_dir(path: str, page_num: int = 1, page_size: int = 50, so
    norm = (path if path.startswith('/') else '/' + path).rstrip('/') or '/'
    adapters = await StorageAdapter.filter(enabled=True)

-    child_mount_entries = []
+    child_mount_entries: List[str] = []
    norm_prefix = norm.rstrip('/')
    for a in adapters:
        if a.path == norm:
@@ -127,6 +154,28 @@ async def list_virtual_dir(path: str, page_num: int = 1, page_size: int = 50, so
            child_mount_entries.append(tail)
    child_mount_entries = sorted(set(child_mount_entries))

+    sort_field = sort_by.lower()
+    reverse = sort_order.lower() == "desc"
+
+    def build_sort_key(item: Dict) -> Tuple:
+        key = (not bool(item.get("is_dir")),)
+        if sort_field == "name":
+            key += (str(item.get("name", "")).lower(),)
+        elif sort_field == "size":
+            key += (int(item.get("size", 0)),)
+        elif sort_field == "mtime":
+            key += (int(item.get("mtime", 0)),)
+        else:
+            key += (str(item.get("name", "")).lower(),)
+        return key
+
+    def annotate_entry(entry: Dict) -> None:
+        if not entry.get("is_dir"):
+            name = entry.get("name", "")
+            entry["has_thumbnail"] = bool(is_image_filename(name) or is_video_filename(name))
+        else:
+            entry["has_thumbnail"] = False
+
    try:
        adapter_model, rel = await resolve_adapter_by_path(norm)
        adapter_instance = runtime_registry.get(adapter_model.id)
@@ -146,57 +195,57 @@ async def list_virtual_dir(path: str, page_num: int = 1, page_size: int = 50, so
        effective_root = ''
        rel = ''

-    adapter_entries = []
+    adapter_entries_page: List[Dict] = []
+    adapter_entries_for_merge: List[Dict] = []
    adapter_total = 0
    covered = set()

    if adapter_model and adapter_instance:
        list_dir = await _ensure_method(adapter_instance, "list_dir")
        try:
-            adapter_entries, adapter_total = await list_dir(effective_root, rel, page_num, page_size, sort_by, sort_order)
+            adapter_entries_page, adapter_total = await list_dir(effective_root, rel, page_num, page_size, sort_by, sort_order)
        except NotADirectoryError:
            raise HTTPException(400, detail="Not a directory")

-        for item in adapter_entries:
+        adapter_entries_for_merge = adapter_entries_page
+
+        # When mount points exist and the adapter result is paginated, fetch the
+        # full listing so the merged result can be sorted as a whole
+        if child_mount_entries and adapter_total > len(adapter_entries_page):
+            full_page_size = adapter_total
+            if full_page_size > 0:
+                adapter_entries_for_merge, adapter_total = await list_dir(
+                    effective_root, rel, 1, full_page_size, sort_by, sort_order
+                )
+            else:
+                adapter_entries_for_merge = adapter_entries_page
+
+        for item in adapter_entries_for_merge:
            covered.add(item["name"])

    mount_entries = []
    for name in child_mount_entries:
        if name not in covered:
            mount_entries.append({"name": name, "is_dir": True,
-                                  "size": 0, "mtime": 0, "type": "mount", "is_image": False})
+                                  "size": 0, "mtime": 0, "type": "mount", "has_thumbnail": False})

-    for ent in adapter_entries:
-        if not ent.get('is_dir'):
-            ent['is_image'] = is_image_filename(ent['name'])
-        else:
-            ent['is_image'] = False
-
-    all_entries = adapter_entries + mount_entries
-
    if mount_entries:
-        reverse = sort_order.lower() == "desc"
-        def get_sort_key(item):
-            key = (not item.get("is_dir"),)
-            sort_field = sort_by.lower()
-            if sort_field == "name":
-                key += (item["name"].lower(),)
-            elif sort_field == "size":
-                key += (item.get("size", 0),)
-            elif sort_field == "mtime":
-                key += (item.get("mtime", 0),)
-            else:
-                key += (item["name"].lower(),)
-            return key
-        all_entries.sort(key=get_sort_key, reverse=reverse)
-
-        total_entries = adapter_total + len(mount_entries)
+        for ent in adapter_entries_for_merge:
+            annotate_entry(ent)
+        combined_entries = adapter_entries_for_merge + [
+            {**ent, "has_thumbnail": False} for ent in mount_entries
+        ]
+        combined_entries.sort(key=build_sort_key, reverse=reverse)
+
+        total_entries = len(combined_entries)
        start_idx = (page_num - 1) * page_size
        end_idx = start_idx + page_size
-        page_entries = all_entries[start_idx:end_idx]
+        page_entries = combined_entries[start_idx:end_idx]
        return page(page_entries, total_entries, page_num, page_size)

-    return page(adapter_entries, adapter_total, page_num, page_size)
+    annotate_entry_list = adapter_entries_page or []
+    for ent in annotate_entry_list:
+        annotate_entry(ent)
+    return page(adapter_entries_page, adapter_total, page_num, page_size)
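The re-fetch matters because pagination happened inside the adapter: page 2 of the adapter listing plus the mount entries is not page 2 of the merged listing. Fetching the full adapter list first, then sorting and slicing, keeps page boundaries stable. A sketch of the slice step with illustrative names and values:

```python
combined = sorted(adapter_entries + mount_entries, key=build_sort_key, reverse=reverse)
page_num, page_size = 2, 50
start = (page_num - 1) * page_size
page_entries = combined[start:start + page_size]  # slice after merging, not before
```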
async def read_file(path: str) -> Union[bytes, Any]:
@@ -258,7 +307,12 @@ async def write_file_stream(path: str, data_iter: AsyncIterator[bytes], overwrit
async def make_dir(path: str):
    adapter_instance, _, root, rel = await resolve_adapter_and_rel(path)
    if not rel:
-        raise HTTPException(400, detail="Cannot create root")
+        await LogService.info(
+            "virtual_fs",
+            f"Ignored create-root request for {path}",
+            details={"path": path, "reason": "root directory already exists"},
+        )
+        return
    mkdir_func = await _ensure_method(adapter_instance, "mkdir")
    await mkdir_func(root, rel)
    await LogService.action("virtual_fs", f"Created directory {path}", details={"path": path})
@@ -437,7 +491,7 @@ async def rename_path(src: str, dst: str, overwrite: bool = False, return_debug:


async def stream_file(path: str, range_header: str | None):
-    adapter_instance, _, root, rel = await resolve_adapter_and_rel(path)
+    adapter_instance, adapter_model, root, rel = await resolve_adapter_and_rel(path)
    if not rel or rel.endswith('/'):
        raise HTTPException(400, detail="Path is a directory")
    if is_raw_filename(rel):
@@ -470,6 +524,10 @@ async def stream_file(path: str, range_header: str | None):
    except Exception as e:
        raise HTTPException(500, detail=f"RAW file processing failed: {e}")

+    redirect_response = await maybe_redirect_download(adapter_instance, adapter_model, root, rel)
+    if redirect_response is not None:
+        return redirect_response
+
    stream_impl = getattr(adapter_instance, "stream_file", None)
    if callable(stream_impl):
        return await stream_impl(root, rel, range_header)
@@ -478,12 +536,81 @@ async def stream_file(path: str, range_header: str | None):
        return Response(content=data, media_type=mime or "application/octet-stream")


+async def _gather_vector_index(full_path: str, limit: int = 20):
+    """Look up index records related to a file; return None on failure."""
+    vector_db = VectorDBService()
+    try:
+        raw_results = await vector_db.search_by_path("vector_collection", full_path, max(limit * 2, 20))
+    except Exception:
+        return None
+
+    matched = []
+    if raw_results:
+        buckets = raw_results if isinstance(raw_results, list) else [raw_results]
+        for bucket in buckets:
+            if not bucket:
+                continue
+            for record in bucket:
+                entity = dict((record or {}).get("entity") or {})
+                source_path = entity.get("source_path") or entity.get("path") or ""
+                if source_path != full_path:
+                    continue
+                entry = {
+                    "chunk_id": str(entity.get("chunk_id")) if entity.get("chunk_id") is not None else None,
+                    "type": entity.get("type"),
+                    "mime": entity.get("mime"),
+                    "name": entity.get("name"),
+                    "start_offset": entity.get("start_offset"),
+                    "end_offset": entity.get("end_offset"),
+                    "vector_id": entity.get("vector_id"),
+                }
+                text = entity.get("text") or entity.get("description")
+                if text:
+                    preview_limit = 400
+                    entry["preview"] = text[:preview_limit]
+                    entry["preview_truncated"] = len(text) > preview_limit
+                matched.append(entry)
+
+    if not matched:
+        return {"total": 0, "entries": [], "by_type": {}, "has_more": False}
+
+    type_counts: Dict[str, int] = {}
+    for item in matched:
+        key = item.get("type") or "unknown"
+        type_counts[key] = type_counts.get(key, 0) + 1
+
+    has_more = len(matched) > limit
+    return {
+        "total": len(matched),
+        "entries": matched[:limit],
+        "by_type": type_counts,
+        "has_more": has_more,
+        "limit": limit,
+    }
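So `stat_file` consumers get a compact summary rather than raw vector rows. For an indexed Markdown file the attached `vector_index` might look like this (all values illustrative):

```python
{
    "total": 3,
    "entries": [
        {"chunk_id": "0", "type": "text", "mime": "text/markdown",
         "start_offset": 0, "end_offset": 782,
         "preview": "# Notes...", "preview_truncated": False},
        # ... up to `limit` entries ...
    ],
    "by_type": {"text": 3},
    "has_more": False,
    "limit": 20,
}
```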
async def stat_file(path: str):
    adapter_instance, _, root, rel = await resolve_adapter_and_rel(path)
    stat_func = getattr(adapter_instance, "stat_file", None)
    if not callable(stat_func):
        raise HTTPException(501, detail="Adapter does not implement stat_file")
-    return await stat_func(root, rel)
+    info = await stat_func(root, rel)
+
+    if isinstance(info, dict):
+        info.setdefault("path", path)
+        try:
+            is_dir = bool(info.get("is_dir"))
+        except Exception:
+            is_dir = False
+        rel_name = rel.rstrip('/').split('/')[-1] if rel else path.rstrip('/').split('/')[-1]
+        name_hint = str(info.get("name") or rel_name or "")
+        info["has_thumbnail"] = bool(not is_dir and (is_image_filename(name_hint) or is_video_filename(name_hint)))
+        if not is_dir:
+            vector_index = await _gather_vector_index(path)
+            if vector_index is not None:
+                info["vector_index"] = vector_index
+
+    return info


async def copy_path(


web/public/icon/claude-color.svg (new file, 1 line, 1.7 KiB)
@@ -0,0 +1 @@
<svg height="1em" style="flex:none;line-height:1" viewBox="0 0 24 24" width="1em" xmlns="http://www.w3.org/2000/svg"><title>Claude</title><path d="M4.709 15.955l4.72-2.647.08-.23-.08-.128H9.2l-.79-.048-2.698-.073-2.339-.097-2.266-.122-.571-.121L0 11.784l.055-.352.48-.321.686.06 1.52.103 2.278.158 1.652.097 2.449.255h.389l.055-.157-.134-.098-.103-.097-2.358-1.596-2.552-1.688-1.336-.972-.724-.491-.364-.462-.158-1.008.656-.722.881.06.225.061.893.686 1.908 1.476 2.491 1.833.365.304.145-.103.019-.073-.164-.274-1.355-2.446-1.446-2.49-.644-1.032-.17-.619a2.97 2.97 0 01-.104-.729L6.283.134 6.696 0l.996.134.42.364.62 1.414 1.002 2.229 1.555 3.03.456.898.243.832.091.255h.158V9.01l.128-1.706.237-2.095.23-2.695.08-.76.376-.91.747-.492.584.28.48.685-.067.444-.286 1.851-.559 2.903-.364 1.942h.212l.243-.242.985-1.306 1.652-2.064.73-.82.85-.904.547-.431h1.033l.76 1.129-.34 1.166-1.064 1.347-.881 1.142-1.264 1.7-.79 1.36.073.11.188-.02 2.856-.606 1.543-.28 1.841-.315.833.388.091.395-.328.807-1.969.486-2.309.462-3.439.813-.042.03.049.061 1.549.146.662.036h1.622l3.02.225.79.522.474.638-.079.485-1.215.62-1.64-.389-3.829-.91-1.312-.329h-.182v.11l1.093 1.068 2.006 1.81 2.509 2.33.127.578-.322.455-.34-.049-2.205-1.657-.851-.747-1.926-1.62h-.128v.17l.444.649 2.345 3.521.122 1.08-.17.353-.608.213-.668-.122-1.374-1.925-1.415-2.167-1.143-1.943-.14.08-.674 7.254-.316.37-.729.28-.607-.461-.322-.747.322-1.476.389-1.924.315-1.53.286-1.9.17-.632-.012-.042-.14.018-1.434 1.967-2.18 2.945-1.726 1.845-.414.164-.717-.37.067-.662.401-.589 2.388-3.036 1.44-1.882.93-1.086-.006-.158h-.055L4.132 18.56l-1.13.146-.487-.456.061-.746.231-.243 1.908-1.312-.006.006z" fill="#D97757" fill-rule="nonzero"></path></svg>

web/public/icon/deepseek-color.svg (new file, 1 line, 2.1 KiB)
@@ -0,0 +1 @@
<svg height="1em" style="flex:none;line-height:1" viewBox="0 0 24 24" width="1em" xmlns="http://www.w3.org/2000/svg"><title>DeepSeek</title><path d="M23.748 4.482c-.254-.124-.364.113-.512.234-.051.039-.094.09-.137.136-.372.397-.806.657-1.373.626-.829-.046-1.537.214-2.163.848-.133-.782-.575-1.248-1.247-1.548-.352-.156-.708-.311-.955-.65-.172-.241-.219-.51-.305-.774-.055-.16-.11-.323-.293-.35-.2-.031-.278.136-.356.276-.313.572-.434 1.202-.422 1.84.027 1.436.633 2.58 1.838 3.393.137.093.172.187.129.323-.082.28-.18.552-.266.833-.055.179-.137.217-.329.14a5.526 5.526 0 01-1.736-1.18c-.857-.828-1.631-1.742-2.597-2.458a11.365 11.365 0 00-.689-.471c-.985-.957.13-1.743.388-1.836.27-.098.093-.432-.779-.428-.872.004-1.67.295-2.687.684a3.055 3.055 0 01-.465.137 9.597 9.597 0 00-2.883-.102c-1.885.21-3.39 1.102-4.497 2.623C.082 8.606-.231 10.684.152 12.85c.403 2.284 1.569 4.175 3.36 5.653 1.858 1.533 3.997 2.284 6.438 2.14 1.482-.085 3.133-.284 4.994-1.86.47.234.962.327 1.78.397.63.059 1.236-.03 1.705-.128.735-.156.684-.837.419-.961-2.155-1.004-1.682-.595-2.113-.926 1.096-1.296 2.746-2.642 3.392-7.003.05-.347.007-.565 0-.845-.004-.17.035-.237.23-.256a4.173 4.173 0 001.545-.475c1.396-.763 1.96-2.015 2.093-3.517.02-.23-.004-.467-.247-.588zM11.581 18c-2.089-1.642-3.102-2.183-3.52-2.16-.392.024-.321.471-.235.763.09.288.207.486.371.739.114.167.192.416-.113.603-.673.416-1.842-.14-1.897-.167-1.361-.802-2.5-1.86-3.301-3.307-.774-1.393-1.224-2.887-1.298-4.482-.02-.386.093-.522.477-.592a4.696 4.696 0 011.529-.039c2.132.312 3.946 1.265 5.468 2.774.868.86 1.525 1.887 2.202 2.891.72 1.066 1.494 2.082 2.48 2.914.348.292.625.514.891.677-.802.09-2.14.11-3.054-.614zm1-6.44a.306.306 0 01.415-.287.302.302 0 01.2.288.306.306 0 01-.31.307.303.303 0 01-.304-.308zm3.11 1.596c-.2.081-.399.151-.59.16a1.245 1.245 0 01-.798-.254c-.274-.23-.47-.358-.552-.758a1.73 1.73 0 01.016-.588c.07-.327-.008-.537-.239-.727-.187-.156-.426-.199-.688-.199a.559.559 0 01-.254-.078c-.11-.054-.2-.19-.114-.358.028-.054.16-.186.192-.21.356-.202.767-.136 1.146.016.352.144.618.408 1.001.782.391.451.462.576.685.914.176.265.336.537.445.848.067.195-.019.354-.25.452z" fill="#4D6BFE"></path></svg>

web/public/icon/gemini-color.svg (new file, 1 line, 2.8 KiB)
@@ -0,0 +1 @@
<svg height="1em" style="flex:none;line-height:1" viewBox="0 0 24 24" width="1em" xmlns="http://www.w3.org/2000/svg"><title>Gemini</title><path d="M20.616 10.835a14.147 14.147 0 01-4.45-3.001 14.111 14.111 0 01-3.678-6.452.503.503 0 00-.975 0 14.134 14.134 0 01-3.679 6.452 14.155 14.155 0 01-4.45 3.001c-.65.28-1.318.505-2.002.678a.502.502 0 000 .975c.684.172 1.35.397 2.002.677a14.147 14.147 0 014.45 3.001 14.112 14.112 0 013.679 6.453.502.502 0 00.975 0c.172-.685.397-1.351.677-2.003a14.145 14.145 0 013.001-4.45 14.113 14.113 0 016.453-3.678.503.503 0 000-.975 13.245 13.245 0 01-2.003-.678z" fill="#3186FF"></path><path d="M20.616 10.835a14.147 14.147 0 01-4.45-3.001 14.111 14.111 0 01-3.678-6.452.503.503 0 00-.975 0 14.134 14.134 0 01-3.679 6.452 14.155 14.155 0 01-4.45 3.001c-.65.28-1.318.505-2.002.678a.502.502 0 000 .975c.684.172 1.35.397 2.002.677a14.147 14.147 0 014.45 3.001 14.112 14.112 0 013.679 6.453.502.502 0 00.975 0c.172-.685.397-1.351.677-2.003a14.145 14.145 0 013.001-4.45 14.113 14.113 0 016.453-3.678.503.503 0 000-.975 13.245 13.245 0 01-2.003-.678z" fill="url(#lobe-icons-gemini-fill-0)"></path><path d="M20.616 10.835a14.147 14.147 0 01-4.45-3.001 14.111 14.111 0 01-3.678-6.452.503.503 0 00-.975 0 14.134 14.134 0 01-3.679 6.452 14.155 14.155 0 01-4.45 3.001c-.65.28-1.318.505-2.002.678a.502.502 0 000 .975c.684.172 1.35.397 2.002.677a14.147 14.147 0 014.45 3.001 14.112 14.112 0 013.679 6.453.502.502 0 00.975 0c.172-.685.397-1.351.677-2.003a14.145 14.145 0 013.001-4.45 14.113 14.113 0 016.453-3.678.503.503 0 000-.975 13.245 13.245 0 01-2.003-.678z" fill="url(#lobe-icons-gemini-fill-1)"></path><path d="M20.616 10.835a14.147 14.147 0 01-4.45-3.001 14.111 14.111 0 01-3.678-6.452.503.503 0 00-.975 0 14.134 14.134 0 01-3.679 6.452 14.155 14.155 0 01-4.45 3.001c-.65.28-1.318.505-2.002.678a.502.502 0 000 .975c.684.172 1.35.397 2.002.677a14.147 14.147 0 014.45 3.001 14.112 14.112 0 013.679 6.453.502.502 0 00.975 0c.172-.685.397-1.351.677-2.003a14.145 14.145 0 013.001-4.45 14.113 14.113 0 016.453-3.678.503.503 0 000-.975 13.245 13.245 0 01-2.003-.678z" fill="url(#lobe-icons-gemini-fill-2)"></path><defs><linearGradient gradientUnits="userSpaceOnUse" id="lobe-icons-gemini-fill-0" x1="7" x2="11" y1="15.5" y2="12"><stop stop-color="#08B962"></stop><stop offset="1" stop-color="#08B962" stop-opacity="0"></stop></linearGradient><linearGradient gradientUnits="userSpaceOnUse" id="lobe-icons-gemini-fill-1" x1="8" x2="11.5" y1="5.5" y2="11"><stop stop-color="#F94543"></stop><stop offset="1" stop-color="#F94543" stop-opacity="0"></stop></linearGradient><linearGradient gradientUnits="userSpaceOnUse" id="lobe-icons-gemini-fill-2" x1="3.5" x2="17.5" y1="13.5" y2="12"><stop stop-color="#FABC12"></stop><stop offset=".46" stop-color="#FABC12" stop-opacity="0"></stop></linearGradient></defs></svg>

web/public/icon/openai.svg (new file, 1 line, 1.7 KiB)
@@ -0,0 +1 @@
<svg fill="currentColor" fill-rule="evenodd" height="1em" style="flex:none;line-height:1" viewBox="0 0 24 24" width="1em" xmlns="http://www.w3.org/2000/svg"><title>OpenAI</title><path d="M21.55 10.004a5.416 5.416 0 00-.478-4.501c-1.217-2.09-3.662-3.166-6.05-2.66A5.59 5.59 0 0010.831 1C8.39.995 6.224 2.546 5.473 4.838A5.553 5.553 0 001.76 7.496a5.487 5.487 0 00.691 6.5 5.416 5.416 0 00.477 4.502c1.217 2.09 3.662 3.165 6.05 2.66A5.586 5.586 0 0013.168 23c2.443.006 4.61-1.546 5.361-3.84a5.553 5.553 0 003.715-2.66 5.488 5.488 0 00-.693-6.497v.001zm-8.381 11.558a4.199 4.199 0 01-2.675-.954c.034-.018.093-.05.132-.074l4.44-2.53a.71.71 0 00.364-.623v-6.176l1.877 1.069c.02.01.033.029.036.05v5.115c-.003 2.274-1.87 4.118-4.174 4.123zM4.192 17.78a4.059 4.059 0 01-.498-2.763c.032.02.09.055.131.078l4.44 2.53c.225.13.504.13.73 0l5.42-3.088v2.138a.068.068 0 01-.027.057L9.9 19.288c-1.999 1.136-4.552.46-5.707-1.51h-.001zM3.023 8.216A4.15 4.15 0 015.198 6.41l-.002.151v5.06a.711.711 0 00.364.624l5.42 3.087-1.876 1.07a.067.067 0 01-.063.005l-4.489-2.559c-1.995-1.14-2.679-3.658-1.53-5.63h.001zm15.417 3.54l-5.42-3.088L14.896 7.6a.067.067 0 01.063-.006l4.489 2.557c1.998 1.14 2.683 3.662 1.529 5.633a4.163 4.163 0 01-2.174 1.807V12.38a.71.71 0 00-.363-.623zm1.867-2.773a6.04 6.04 0 00-.132-.078l-4.44-2.53a.731.731 0 00-.729 0l-5.42 3.088V7.325a.068.068 0 01.027-.057L14.1 4.713c2-1.137 4.555-.46 5.707 1.513.487.833.664 1.809.499 2.757h.001zm-11.741 3.81l-1.877-1.068a.065.065 0 01-.036-.051V6.559c.001-2.277 1.873-4.122 4.181-4.12.976 0 1.92.338 2.671.954-.034.018-.092.05-.131.073l-4.44 2.53a.71.71 0 00-.365.623l-.003 6.173v.002zm1.02-2.168L12 9.25l2.414 1.375v2.75L12 14.75l-2.415-1.375v-2.75z"></path></svg>

web/public/icon/siliconcloud-color.svg (new file, 1 line, 520 B)
@@ -0,0 +1 @@
<svg height="1em" style="flex:none;line-height:1" viewBox="0 0 24 24" width="1em" xmlns="http://www.w3.org/2000/svg"><title>SiliconCloud</title><path clip-rule="evenodd" d="M22.956 6.521H12.522c-.577 0-1.044.468-1.044 1.044v3.13c0 .577-.466 1.044-1.043 1.044H1.044c-.577 0-1.044.467-1.044 1.044v4.174C0 17.533.467 18 1.044 18h10.434c.577 0 1.044-.467 1.044-1.043v-3.13c0-.578.466-1.044 1.043-1.044h9.391c.577 0 1.044-.467 1.044-1.044V7.565c0-.576-.467-1.044-1.044-1.044z" fill="#6E29F6" fill-rule="evenodd"></path></svg>
@@ -13,7 +13,7 @@ export interface AdapterItem {
|
||||
export interface AdapterTypeField {
|
||||
key: string;
|
||||
label: string;
|
||||
type: 'string' | 'password' | 'number';
|
||||
type: 'string' | 'password' | 'number' | 'boolean';
|
||||
required?: boolean;
|
||||
placeholder?: string;
|
||||
default?: any;
|
||||
|
||||
89
web/src/api/aiProviders.ts
Normal file
89
web/src/api/aiProviders.ts
Normal file
@@ -0,0 +1,89 @@
import request from './client';

export type AIAbility = 'chat' | 'vision' | 'embedding' | 'rerank' | 'voice' | 'tools';

export interface AIProviderPayload {
  name: string;
  identifier: string;
  provider_type?: string | null;
  api_format: 'openai' | 'gemini';
  base_url?: string | null;
  api_key?: string | null;
  logo_url?: string | null;
  extra_config?: Record<string, unknown> | null;
}

export interface AIProvider extends Omit<AIProviderPayload, 'extra_config'> {
  id: number;
  extra_config: Record<string, unknown>;
  created_at: string;
  updated_at: string;
  models?: AIModel[];
}

export interface AIModelPayload {
  name: string;
  display_name?: string | null;
  description?: string | null;
  capabilities?: AIAbility[];
  context_window?: number | null;
  embedding_dimensions?: number | null;
  metadata?: Record<string, unknown> | null;
}

export interface AIModel extends Omit<AIModelPayload, 'metadata'> {
  id: number;
  provider_id: number;
  metadata: Record<string, unknown>;
  created_at: string;
  updated_at: string;
  provider?: AIProvider;
}

export type AIDefaultAssignments = Partial<Record<AIAbility, number | null>>;
export type AIDefaultModels = Partial<Record<AIAbility, AIModel | null>>;

export async function fetchProviders() {
  const data = await request<{ providers: AIProvider[] }>('/ai/providers');
  return data.providers;
}

export async function createProvider(payload: AIProviderPayload) {
  return request<AIProvider>('/ai/providers', { method: 'POST', json: payload });
}

export async function updateProvider(id: number, payload: Partial<AIProviderPayload>) {
  return request<AIProvider>(`/ai/providers/${id}`, { method: 'PUT', json: payload });
}

export async function deleteProvider(id: number) {
  await request(`/ai/providers/${id}`, { method: 'DELETE' });
}

export async function syncProviderModels(id: number) {
  return request<{ created: number; updated: number }>(`/ai/providers/${id}/sync-models`, { method: 'POST' });
}

export async function fetchRemoteModels(providerId: number) {
  return request<{ models: AIModelPayload[] }>(`/ai/providers/${providerId}/remote-models`);
}

export async function createModel(providerId: number, payload: AIModelPayload) {
  return request<AIModel>(`/ai/providers/${providerId}/models`, { method: 'POST', json: payload });
}

export async function updateModel(modelId: number, payload: Partial<AIModelPayload>) {
  return request<AIModel>(`/ai/models/${modelId}`, { method: 'PUT', json: payload });
}

export async function deleteModel(modelId: number) {
  await request(`/ai/models/${modelId}`, { method: 'DELETE' });
}

export async function fetchDefaults() {
  return request<AIDefaultModels>('/ai/defaults');
}

export async function updateDefaults(payload: AIDefaultAssignments) {
  return request<AIDefaultModels>('/ai/defaults', { method: 'PUT', json: payload });
}
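A minimal usage sketch of the aiProviders client added above (not part of the diff; the provider name, base_url, and key are illustrative placeholders):

// Hypothetical wiring: register an OpenAI-compatible provider, pull its
// model list, then promote one synced model as the default embedding model.
import { createProvider, syncProviderModels, fetchProviders, updateDefaults } from './aiProviders';

async function bootstrapExampleProvider() {
  const provider = await createProvider({
    name: 'Example OpenAI',          // display name (illustrative)
    identifier: 'example-openai',    // lowercase identifier per the form rules
    api_format: 'openai',
    base_url: 'https://api.openai.com/v1',
    api_key: 'sk-...',               // placeholder, supply a real key
  });
  const { created, updated } = await syncProviderModels(provider.id);
  console.log(`sync: ${created} created, ${updated} updated`);

  const providers = await fetchProviders();
  const embedding = providers
    .flatMap(p => p.models ?? [])
    .find(m => m.capabilities?.includes('embedding'));
  if (embedding) {
    await updateDefaults({ embedding: embedding.id });
  }
}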
@@ -34,6 +34,18 @@ export const processorsApi = {
      method: 'POST',
      json: params,
    }),
  processDirectory: (params: {
    path: string;
    processor_type: string;
    config: any;
    overwrite: boolean;
    max_depth?: number | null;
    suffix?: string | null;
  }) =>
    request<{ task_ids: string[]; scheduled: number }>('/processors/process-directory', {
      method: 'POST',
      json: params,
    }),
  getSource: (type: string) =>
    request<{ source: string; module_path: string }>('/processors/source/' + encodeURIComponent(type), {
      method: 'GET',
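A short sketch of calling the new processDirectory endpoint (the path and processor_type values are assumptions for illustration):

// Enqueue one task per file under /photos, recursing at most two levels,
// writing results next to the originals with a "_processed" suffix.
const { task_ids, scheduled } = await processorsApi.processDirectory({
  path: '/photos',                 // illustrative path
  processor_type: 'thumbnail',     // hypothetical processor type
  config: {},
  overwrite: false,
  max_depth: 2,                    // null would traverse all subdirectories
  suffix: '_processed',
});
console.log(`scheduled ${scheduled} tasks`, task_ids);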
@@ -6,7 +6,7 @@ export interface VfsEntry {
  size: number;
  mtime: number;
  type?: string;
  is_image?: boolean;
  has_thumbnail?: boolean;
}

export interface DirListing {
@@ -21,9 +21,29 @@ export interface DirListing {
}

export interface SearchResultItem {
  id: number;
  id: string;
  path: string;
  score: number;
  chunk_id?: string;
  snippet?: string;
  mime?: string;
  source_type?: string;
  start_offset?: number;
  end_offset?: number;
  metadata?: Record<string, any>;
}

export interface SearchPagination {
  page: number;
  page_size: number;
  has_more: boolean;
}

export interface SearchResponse {
  items: SearchResultItem[];
  query: string;
  mode?: string;
  pagination?: SearchPagination;
}

export const vfsApi = {
@@ -105,6 +125,20 @@ export const vfsApi = {
      xhr.send(fd);
    });
  },
  searchFiles: (q: string, top_k: number = 10, mode: 'vector' | 'filename' = 'vector') =>
    request<{ items: SearchResultItem[]; query: string }>(`/search?q=${encodeURIComponent(q)}&top_k=${top_k}&mode=${mode}`),
  searchFiles: (
    q: string,
    top_k: number = 10,
    mode: 'vector' | 'filename' = 'vector',
    page?: number,
    page_size?: number,
  ) => {
    const params = new URLSearchParams({
      q,
      top_k: String(top_k),
      mode,
    });
    if (page !== undefined) params.set('page', String(page));
    if (page_size !== undefined) params.set('page_size', String(page_size));
    return request<SearchResponse>(`/search?${params.toString()}`);
  },
};
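How the reworked searchFiles might be called (a sketch; only filename mode sends page/page_size, matching the signature above):

// Vector search keeps the old shape: top_k results, no pagination params.
const smart = await vfsApi.searchFiles('quarterly report', 10, 'vector');

// Filename search is paginated; pagination.has_more drives the pager.
const byName = await vfsApi.searchFiles('report', 10, 'filename', 1, 10);
if (byName.pagination?.has_more) {
  const nextPage = await vfsApi.searchFiles('report', 10, 'filename', 2, 10);
  console.log(nextPage.items.map(item => item.path));
}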
@@ -54,8 +54,8 @@ const DEFAULT_TONE: RgbColor = { r: 28, g: 32, b: 46 };

const isImageEntry = (ent: VfsEntry) => {
  if (ent.is_dir) return false;
  const maybe = ent as VfsEntry & { is_image?: boolean };
  if (typeof maybe.is_image === 'boolean' && maybe.is_image) return true;
  const maybe = ent as VfsEntry & { has_thumbnail?: boolean };
  if (typeof maybe.has_thumbnail === 'boolean' && maybe.has_thumbnail) return true;
  const ext = ent.name.split('.').pop()?.toLowerCase();
  if (!ext) return false;
  return ['png', 'jpg', 'jpeg', 'gif', 'webp', 'bmp', 'avif', 'ico', 'tif', 'tiff', 'svg', 'heic', 'heif', 'arw', 'cr2', 'cr3', 'nef', 'rw2', 'orf', 'pef', 'dng'].includes(ext);
@@ -24,27 +24,124 @@ export const TextEditorApp: React.FC<AppComponentProps> = ({ filePath, entry, on
  const isMarkdown = ext === 'md' || ext === 'markdown';
  const monacoLanguage = useMemo(() => {
    switch (ext) {
      case 'json':
        return 'json';
      // Web technologies
      case 'js':
      case 'jsx':
        return 'javascript';
      case 'ts':
      case 'tsx':
        return 'typescript';
      case 'html':
      case 'htm':
        return 'html';
      case 'css':
        return 'css';
      case 'py':
        return 'python';
      case 'sh':
        return 'shell';
      case 'scss':
      case 'sass':
        return 'scss';
      case 'less':
        return 'less';
      case 'vue':
        return 'html'; // Vue files are primarily HTML with some JS/TS

      // Data formats
      case 'json':
        return 'json';
      case 'yaml':
      case 'yml':
        return 'yaml';
      case 'xml':
        return 'xml';
      case 'toml':
        return 'ini'; // TOML is similar to INI
      case 'ini':
      case 'cfg':
      case 'conf':
        return 'ini';

      // Programming languages
      case 'py':
        return 'python';
      case 'java':
        return 'java';
      case 'c':
        return 'c';
      case 'cpp':
      case 'cc':
      case 'cxx':
        return 'cpp';
      case 'h':
      case 'hpp':
      case 'hxx':
        return 'cpp'; // Header files use C++ highlighting
      case 'php':
        return 'php';
      case 'rb':
        return 'ruby';
      case 'go':
        return 'go';
      case 'rs':
        return 'rust';
      case 'swift':
        return 'swift';
      case 'kt':
        return 'kotlin';
      case 'scala':
        return 'scala';
      case 'cs':
        return 'csharp';
      case 'fs':
        return 'fsharp';
      case 'vb':
        return 'vb';
      case 'pl':
      case 'pm':
        return 'perl';
      case 'r':
        return 'r';
      case 'lua':
        return 'lua';
      case 'dart':
        return 'dart';

      // Database
      case 'sql':
        return 'sql';

      // Shell and scripts
      case 'sh':
      case 'bash':
      case 'zsh':
      case 'fish':
        return 'shell';
      case 'ps1':
        return 'powershell';
      case 'bat':
      case 'cmd':
        return 'bat';

      // Build and config files
      case 'dockerfile':
        return 'dockerfile';
      case 'makefile':
        return 'makefile';
      case 'gradle':
        return 'groovy';
      case 'cmake':
        return 'cmake';

      // Markdown
      case 'md':
      case 'markdown':
        return 'markdown';

      // Plain text and logs
      case 'txt':
      case 'log':
      case 'gitignore':
      case 'gitattributes':
      case 'editorconfig':
      case 'prettierrc':
      default:
        return 'plaintext';
    }
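The same idea as the switch above, condensed into a standalone lookup for illustration (a sketch, not the component code; the table only samples a few of the mappings):

// Derive the extension once and map it to a Monaco language id,
// defaulting to plaintext for anything unrecognised.
function languageForFile(name: string): string {
  const ext = name.split('.').pop()?.toLowerCase() ?? '';
  const table: Record<string, string> = {
    ts: 'typescript', tsx: 'typescript', js: 'javascript', jsx: 'javascript',
    vue: 'html', toml: 'ini', h: 'cpp', hpp: 'cpp', md: 'markdown',
  };
  return table[ext] ?? 'plaintext';
}

console.log(languageForFile('App.tsx')); // "typescript"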
@@ -8,8 +8,27 @@ export const descriptor: AppDescriptor = {
  supported: (entry) => {
    if (entry.is_dir) return false;
    const ext = entry.name.split('.').pop()?.toLowerCase() || '';
    // Supports common text and markdown formats
    return ['txt', 'md', 'markdown', 'json', 'yaml', 'yml', 'xml', 'html', 'css', 'js', 'ts', 'py', 'sh', 'log'].includes(ext);
    // Supports common text and code formats
    return [
      // Text formats
      'txt', 'md', 'markdown', 'log',
      // Data formats
      'json', 'yaml', 'yml', 'xml', 'toml', 'ini', 'cfg', 'conf',
      // Web technologies
      'html', 'htm', 'css', 'scss', 'sass', 'less', 'js', 'jsx', 'ts', 'tsx', 'vue',
      // Programming languages
      'py', 'java', 'c', 'cpp', 'cc', 'cxx', 'h', 'hpp', 'hxx',
      'php', 'rb', 'go', 'rs', 'swift', 'kt', 'scala', 'clj', 'cljs',
      'cs', 'vb', 'fs', 'pl', 'pm', 'r', 'lua', 'dart', 'elm',
      // Database
      'sql',
      // Shell and scripts
      'sh', 'bash', 'zsh', 'fish', 'ps1', 'bat', 'cmd',
      // Build and config files
      'dockerfile', 'makefile', 'gradle', 'cmake',
      // Other common text files
      'gitignore', 'gitattributes', 'editorconfig', 'prettierrc'
    ].includes(ext);
  },
  component: TextEditorApp,
  default: true,
26
web/src/components/WeChatModal.tsx
Normal file
@@ -0,0 +1,26 @@
import { Modal, theme } from 'antd';
import { useI18n } from '../i18n';

export interface WeChatModalProps {
  open: boolean;
  onClose: () => void;
}

export default function WeChatModal({ open, onClose }: WeChatModalProps) {
  const { token } = theme.useToken();
  const { t } = useI18n();

  return (
    <Modal open={open} onCancel={onClose} title={t('Join Community')} footer={null} width={320}>
      <div style={{ textAlign: 'center', padding: '12px 0' }}>
        <img src="https://foxel.cc/image/wechat.png" width={200} alt="wechat" />
        <div style={{ marginTop: 12, color: token.colorTextSecondary }}>
          {t('Scan to join WeChat group')}
        </div>
        <div style={{ marginTop: 8, fontSize: 12, color: token.colorTextTertiary }}>
          {t('If QR expires, add drizzle2001 to join')}
        </div>
      </div>
    </Modal>
  );
}
@@ -127,6 +127,8 @@ export const en = {

  // Context menu
  'Upload File': 'Upload File',
  'Upload Files': 'Upload Files',
  'Upload Folder': 'Upload Folder',
  'Open': 'Open',
  'Open With': 'Open With',
  'Default': 'Default',
@@ -137,6 +139,31 @@ export const en = {
  'Details': 'Details',
  'Get Direct Link': 'Get Direct Link',

  // Upload modal
  'Total progress': 'Total progress',
  'Upload bytes summary': '{uploaded} / {total}',
  'Upload task summary': 'Tasks: {completed} / {total} completed, {pending} pending, {failures} failed',
  'Overwrite confirmation required': 'Overwrite confirmation required',
  'Target already exists: {path}': 'Target already exists: {path}',
  'Overwrite': 'Overwrite',
  'Skip': 'Skip',
  'Overwrite All': 'Overwrite All',
  'Skip All': 'Skip All',
  'Directory': 'Directory',
  'Creating directory...': 'Creating directory...',
  'Directory ready': 'Directory ready',
  'Create directory failed': 'Create directory failed',
  'Waiting to create': 'Waiting to create',
  'Waiting for overwrite decision': 'Waiting for overwrite decision',
  'Waiting to upload': 'Waiting to upload',
  'Skipped': 'Skipped',
  'Upload succeeded': 'Upload succeeded',
  'Upload failed': 'Upload failed',
  'No items selected for upload': 'No items selected for upload',
  'No uploadable files or directories found': 'No uploadable files or directories found',
  'Missing file content': 'Missing file content',
  'Directory conflicts with existing file': 'A file with the same name already exists at the target location',

  // Side nav modals
  'Join Community': 'Join Community',
  'Scan to join WeChat group': 'Scan to join WeChat group',
@@ -220,6 +247,17 @@ export const en = {
  'Copy failed': 'Copy failed',
  'Permissions': 'Permissions',
  'EXIF Info': 'EXIF Info',
  'Index Info': 'Index Info',
  'Indexed Items': 'Indexed Items',
  'Indexed Types': 'Indexed Types',
  'No index data': 'No index data',
  'Indexed Chunks': 'Indexed Chunks',
  'More Indexed Chunks': 'More Indexed Chunks',
  'Chunk ID': 'Chunk ID',
  'Offset Range': 'Offset Range',
  'Vector ID': 'Vector ID',
  'Preview': 'Preview',
  'Showing first {count} entries': 'Showing first {count} entries',

  // Search dialog
  'Smart Search': 'Smart Search',
@@ -296,6 +334,89 @@ export const en = {
  'Vision API Key': 'Vision API Key',
  'Embedding API URL': 'Embedding API URL',
  'Embedding API Key': 'Embedding API Key',
  'AI Providers & Models': 'AI Providers & Models',
  'Manage AI providers, synchronize compatible models, and configure default capabilities across the system.': 'Manage AI providers, synchronize compatible models, and configure default capabilities across the system.',
  'Add Provider': 'Add Provider',
  'Edit Provider': 'Edit Provider',
  'Pull Models': 'Pull Models',
  'Manual Add': 'Manual Add',
  'Clear Remote List': 'Clear Remote List',
  'Select models from the list to add them automatically': 'Select models from the list to add them automatically',
  'No remote models': 'No remote models',
  'No remote models found': 'No remote models found',
  'No remote models match search': 'No remote models match search',
  'Search fetched models': 'Search fetched models',
  'Already Added': 'Already Added',
  'Add Selected Models': 'Add Selected Models',
  'Fetch failed': 'Fetch failed',
  'Select models to add': 'Select models to add',
  'Added {count} models': 'Added {count} models',
  'Choose Template': 'Choose Template',
  'Configure Provider': 'Configure Provider',
  'Back to Templates': 'Back to Templates',
  'View Docs': 'View Docs',
  'Custom Provider': 'Custom Provider',
  'Custom Provider Description': 'Bring your own endpoint compatible with OpenAI or Gemini formats.',
  'OpenAI Provider': 'OpenAI',
  'OpenAI Provider Description': 'Access GPT-4o, GPT-4.1, GPT-3.5 and more models from OpenAI.',
  'Azure OpenAI Provider': 'Azure OpenAI',
  'Azure OpenAI Provider Description': 'Use OpenAI models deployed on Microsoft Azure.',
  'Google AI Provider': 'Google AI',
  'Google AI Provider Description': 'Gemini series models served via the Google AI platform.',
  'SiliconFlow Provider': 'SiliconFlow',
  'SiliconFlow Provider Description': 'High-performance inference platform with OpenAI-compatible APIs.',
  'OpenRouter Provider': 'OpenRouter',
  'OpenRouter Provider Description': 'Connect to multiple AI providers through a single OpenAI-style endpoint.',
  'Anthropic Provider': 'Anthropic',
  'Anthropic Provider Description': 'Claude 3 family models exposed through the Claude API.',
  'DeepSeek Provider': 'DeepSeek',
  'DeepSeek Provider Description': 'DeepSeek language models via OpenAI-compatible API.',
  'Grok Provider': 'Grok (xAI)',
  'Grok Provider Description': 'Grok models powered by xAI with OpenAI-style routes.',
  'Ollama Provider': 'Ollama',
  'Ollama Provider Description': 'Self-host and run models locally with Ollama\'s OpenAI bridge.',
  'Voyage Provider': 'Voyage AI',
  'Voyage Provider Description': 'High-quality embeddings and rerankers from Voyage AI.',
  'Delete provider?': 'Delete provider?',
  'Deleting this provider will also remove all associated models. Continue?': 'Deleting this provider will also remove all associated models. Continue?',
  'Deleted successfully': 'Deleted successfully',
  'Sync Models': 'Sync Models',
  'Sync completed: {created} created, {updated} updated': 'Sync completed: {created} created, {updated} updated',
  'Sync failed': 'Sync failed',
  'Add Model': 'Add Model',
  'Edit Model': 'Edit Model',
  'Delete model?': 'Delete model?',
  'This operation cannot be undone. Continue?': 'This operation cannot be undone. Continue?',
  'No models yet': 'No models yet',
  'Add your first AI provider to get started': 'Add your first AI provider to get started',
  'Default Models Configuration': 'Default Models Configuration',
  'Main Chat Model': 'Main Chat Model',
  'Primary assistant for conversations, reasoning, and tool calls.': 'Primary assistant for conversations, reasoning, and tool calls.',
  'Handles multimodal perception such as image understanding.': 'Handles multimodal perception such as image understanding.',
  'Transforms content into dense vectors for search and retrieval.': 'Transforms content into dense vectors for search and retrieval.',
  'Optimises ranking quality for search candidates.': 'Optimises ranking quality for search candidates.',
  'Covers text-to-speech and speech understanding scenarios.': 'Covers text-to-speech and speech understanding scenarios.',
  'Supports function calling, orchestration, and automation.': 'Supports function calling, orchestration, and automation.',
  'Select a model': 'Select a model',
  'Template': 'Template',
  'Select a template': 'Select a template',
  'Display Name': 'Display Name',
  'Enter name': 'Enter name',
  'Identifier': 'Identifier',
  'Enter identifier': 'Enter identifier',
  'Only lowercase letters, numbers, dash, dot and underscore are allowed': 'Only lowercase letters, numbers, dash, dot and underscore are allowed',
  'API Format': 'API Format',
  'Base URL': 'Base URL',
  'Enter base url': 'Enter base URL',
  'Optional, can also be provided per request': 'Optional, can also be provided per request',
  'Model Identifier': 'Model Identifier',
  'Enter model identifier': 'Enter model identifier',
  'Description': 'Description',
  'Capabilities': 'Capabilities',
  'Context Window': 'Context Window',
  'Embedding Dimensions': 'Embedding Dimensions',
  'Price /1K input tokens': 'Price /1K input tokens',
  'Price /1K output tokens': 'Price /1K output tokens',

  // Adapters
  'Missing required config:': 'Missing required config:',
@@ -471,6 +592,7 @@ export const en = {
  'Root Directory': 'Root Directory',
  'Please input root directory!': 'Please input root directory!',
  'e.g., data/ or /var/foxel/data': 'e.g., data/ or /var/foxel/data',
  'Optional, used for external links. Leave empty to use the current site.': 'Optional, used for external links. Leave empty to use the current site.',
  'Create Admin': 'Create Admin',
  'Create admin account': 'Create admin account',
  'This is the first account with full permissions': 'This is the first account with full permissions',
@@ -129,6 +129,8 @@ export const zh = {

  // Context menu
  'Upload File': '上传文件',
  'Upload Files': '上传文件',
  'Upload Folder': '上传文件夹',
  'Open': '打开',
  'Open With': '打开方式',
  'Default': '默认',
@@ -139,6 +141,31 @@ export const zh = {
  'Details': '详情',
  'Get Direct Link': '获取直链',

  // Upload modal
  'Total progress': '总体进度',
  'Upload bytes summary': '{uploaded} / {total}',
  'Upload task summary': '任务:已完成 {completed} / {total},待处理 {pending},失败 {failures}',
  'Overwrite confirmation required': '需要确认是否覆盖',
  'Target already exists: {path}': '目标已存在:{path}',
  'Overwrite': '覆盖',
  'Skip': '跳过',
  'Overwrite All': '全部覆盖',
  'Skip All': '全部跳过',
  'Directory': '目录',
  'Creating directory...': '正在创建目录...',
  'Directory ready': '目录已就绪',
  'Create directory failed': '创建目录失败',
  'Waiting to create': '等待创建',
  'Waiting for overwrite decision': '等待覆盖处理',
  'Waiting to upload': '等待上传',
  'Skipped': '已跳过',
  'Upload succeeded': '上传成功',
  'Upload failed': '上传失败',
  'No items selected for upload': '未选择任何可上传项',
  'No uploadable files or directories found': '未找到可上传的文件或目录',
  'Missing file content': '缺少文件内容',
  'Directory conflicts with existing file': '目标存在同名文件,无法创建目录',

  // Side nav modals
  'Join Community': '加入社区',
  'Scan to join WeChat group': '微信扫码加入交流群',
@@ -220,6 +247,17 @@ export const zh = {
  'Copy failed': '复制失败',
  'Permissions': '权限',
  'EXIF Info': 'EXIF信息',
  'Index Info': '索引信息',
  'Indexed Items': '索引条目数',
  'Indexed Types': '索引类型统计',
  'No index data': '暂无索引数据',
  'Indexed Chunks': '索引条目',
  'More Indexed Chunks': '更多索引条目',
  'Chunk ID': '分片ID',
  'Offset Range': '偏移范围',
  'Vector ID': '向量ID',
  'Preview': '内容预览',
  'Showing first {count} entries': '仅展示前 {count} 条',

  // Search dialog
  'Smart Search': '智能搜索',
@@ -248,6 +286,10 @@ export const zh = {
  'Save': '保存',
  'App Settings': '应用设置',
  'AI Settings': 'AI设置',
  'Choose Template': '选择模板',
  'Configure Provider': '配置提供商',
  'Back to Templates': '返回选择',
  'View Docs': '查看文档',
  'Vision Model': '视觉模型',
  'Embedding Model': '嵌入模型',
  'Embedding Dimension': '向量维度',
@@ -297,6 +339,85 @@ export const zh = {
  'Vision API Key': '视觉模型 API Key',
  'Embedding API URL': '嵌入模型 API 地址',
  'Embedding API Key': '嵌入模型 API Key',
  'AI Providers & Models': 'AI 提供商与模型',
  'Manage AI providers, synchronize compatible models, and configure default capabilities across the system.': '管理所有 AI 提供商,批量同步兼容模型,并配置系统默认能力。',
  'Add Provider': '添加提供商',
  'Edit Provider': '编辑提供商',
  'Pull Models': '拉取模型',
  'Manual Add': '手动添加',
  'Clear Remote List': '清空列表',
  'Select models from the list to add them automatically': '选择模型后可一键添加到系统',
  'No remote models': '暂无远程模型',
  'No remote models found': '未获取到远程模型',
  'No remote models match search': '没有匹配的远程模型',
  'Search fetched models': '搜索已拉取模型',
  'Already Added': '已添加',
  'Add Selected Models': '添加所选模型',
  'Fetch failed': '拉取失败',
  'Select models to add': '请选择要添加的模型',
  'Added {count} models': '已添加 {count} 个模型',
  'Custom Provider': '自定义提供商',
  'Custom Provider Description': '自定义兼容 OpenAI 或 Gemini 标准的 API 端点。',
  'OpenAI Provider': 'OpenAI',
  'OpenAI Provider Description': '访问 OpenAI 的 GPT-4o、GPT-4.1、GPT-3.5 等模型。',
  'Azure OpenAI Provider': 'Azure OpenAI',
  'Azure OpenAI Provider Description': '使用托管在微软 Azure 上的 OpenAI 模型。',
  'Google AI Provider': 'Google AI',
  'Google AI Provider Description': 'Google AI 平台提供的 Gemini 系列模型。',
  'SiliconFlow Provider': '硅基流动',
  'SiliconFlow Provider Description': '硅基流动高性能推理平台,兼容 OpenAI 接口。',
  'OpenRouter Provider': 'OpenRouter',
  'OpenRouter Provider Description': '通过一个 OpenAI 风格入口接入多家 AI 提供商。',
  'Anthropic Provider': 'Anthropic',
  'Anthropic Provider Description': '通过 Claude API 使用 Claude 3 系列模型。',
  'DeepSeek Provider': 'DeepSeek',
  'DeepSeek Provider Description': 'DeepSeek 语言模型,支持 OpenAI 兼容接口。',
  'Grok Provider': 'Grok (xAI)',
  'Grok Provider Description': 'xAI 的 Grok 模型,提供 OpenAI 风格接口。',
  'Ollama Provider': 'Ollama',
  'Ollama Provider Description': '使用 Ollama 在本地运行并管理大模型。',
  'Voyage Provider': 'Voyage AI',
  'Voyage Provider Description': 'Voyage AI 提供的高质量嵌入与重排序模型。',
  'Delete provider?': '确认删除该提供商?',
  'Deleting this provider will also remove all associated models. Continue?': '删除后将同时移除该提供商下的全部模型,是否继续?',
  'Deleted successfully': '删除成功',
  'Sync Models': '同步模型',
  'Sync completed: {created} created, {updated} updated': '同步完成:新增 {created} 个,更新 {updated} 个',
  'Sync failed': '同步失败',
  'Add Model': '添加模型',
  'Edit Model': '编辑模型',
  'Delete model?': '确认删除该模型?',
  'This operation cannot be undone. Continue?': '此操作不可撤销,是否继续?',
  'No models yet': '暂无模型',
  'Add your first AI provider to get started': '添加第一个 AI 提供商开始配置',
  'Default Models Configuration': '默认模型配置',
  'Main Chat Model': '主对话模型',
  'Primary assistant for conversations, reasoning, and tool calls.': '用于对话、推理与工具调用的核心模型。',
  'Handles multimodal perception such as image understanding.': '负责多模态感知与图像理解。',
  'Transforms content into dense vectors for search and retrieval.': '将内容向量化以驱动搜索与检索。',
  'Optimises ranking quality for search candidates.': '重新排序候选结果,提升检索相关性。',
  'Covers text-to-speech and speech understanding scenarios.': '覆盖文本转语音与语音理解场景。',
  'Supports function calling, orchestration, and automation.': '支持函数调用、编排与自动化。',
  'Select a model': '选择模型',
  'Template': '模板',
  'Select a template': '选择模板',
  'Display Name': '显示名称',
  'Enter name': '请输入名称',
  'Identifier': '标识符',
  'Enter identifier': '请输入标识符',
  'Only lowercase letters, numbers, dash, dot and underscore are allowed': '仅允许小写字母、数字、连字符、点和下划线',
  'API Format': 'API 格式',
  'Base URL': '基础 URL',
  'Enter base url': '请输入基础 URL',
  'Optional, can also be provided per request': '可选,也可在请求时提供',
  'Model Identifier': '模型标识',
  'Enter model identifier': '请输入模型标识',
  'Description': '描述',
  'Capabilities': '能力标签',
  'Context Window': '上下文窗口',
  'Embedding Dimensions': '向量维度',
  'Price /1K input tokens': '价格 /1K 输入 token',
  'Price /1K output tokens': '价格 /1K 输出 token',

  // Adapters
  'Missing required config:': '缺少必填配置:',
@@ -418,6 +539,17 @@ export const zh = {
  'Source Editor': '源码编辑',
  'Module Path': '模块路径',
  'Directory processing always overwrites original files': '选择目录时会强制覆盖原文件',
  'Directory execution will enqueue one task per file': '目录模式会为每个文件单独创建任务',
  'Directory scope': '目录范围',
  'Current level only': '仅当前层级',
  'Include subdirectories': '包含子目录',
  'Max depth': '最大层级',
  'Leave empty to traverse all subdirectories': '留空表示遍历所有子目录',
  'Depth must be greater or equal to 0': '层级必须大于或等于 0',
  'Output suffix': '输出后缀',
  'Suffix will be inserted before the file extension, e.g. demo_processed.mp4': '后缀会插入到文件扩展名前,例如 demo_processed.mp4',
  'Suffix such as _processed': '例如 _processed 的后缀',
  'Suffix cannot be empty': '后缀不能为空',
  'No data': '暂无数据',

  // Path selector
@@ -473,6 +605,7 @@ export const zh = {
  'Root Directory': '根目录',
  'Please input root directory!': '请输入根目录!',
  'e.g., data/ or /var/foxel/data': '例如: data/ 或 /var/foxel/data',
  'Optional, used for external links. Leave empty to use the current site.': '可选,用于生成外部链接;留空则使用当前站点。',
  'Create Admin': '创建管理员',
  'Create admin account': '创建管理员账户',
  'This is the first account with full permissions': '这是系统的第一个账户,将拥有最高权限。',
@@ -1,128 +1,605 @@
import { Modal, Input, List, Divider, Spin, Select, Space } from 'antd';
import { Modal, Input, List, Divider, Spin, Space, Tag, Typography, Empty, Flex, Segmented, Pagination, message } from 'antd';
import { SearchOutlined, FileTextOutlined } from '@ant-design/icons';
import React, { useState } from 'react';
import React, { useRef, useState, useEffect, useCallback } from 'react';
import { vfsApi, type SearchResultItem } from '../api/vfs';
import { type VfsEntry } from '../api/client';
import { processorsApi, type ProcessorTypeMeta } from '../api/processors';
import { useI18n } from '../i18n';
import { useNavigate } from 'react-router';

import { useAppWindows } from '../contexts/AppWindowsContext';
import { ContextMenu } from '../pages/FileExplorerPage/components/ContextMenu';
import { RenameModal } from '../pages/FileExplorerPage/components/Modals/RenameModal';
import { MoveCopyModal } from '../pages/FileExplorerPage/components/Modals/MoveCopyModal';
import { ShareModal } from '../pages/FileExplorerPage/components/Modals/ShareModal';
import { DirectLinkModal } from '../pages/FileExplorerPage/components/Modals/DirectLinkModal';
import { FileDetailModal } from '../pages/FileExplorerPage/components/FileDetailModal';
import { ProcessorModal } from '../pages/FileExplorerPage/components/Modals/ProcessorModal';
import { useFileActions } from '../pages/FileExplorerPage/hooks/useFileActions.tsx';
import { useProcessor } from '../pages/FileExplorerPage/hooks/useProcessor';

interface SearchDialogProps {
  open: boolean;
  onClose: () => void;
}

const SEARCH_MODES = (t: (k: string)=>string) => [
  { label: t('Smart Search'), value: 'vector' },
  { label: t('Name Search'), value: 'filename' },
];
type SearchMode = 'vector' | 'filename';
const PAGE_SIZE = 10;

const SearchDialog: React.FC<SearchDialogProps> = ({ open, onClose }) => {
  const [search, setSearch] = useState('');
  const [loading, setLoading] = useState(false);
  const [results, setResults] = useState<SearchResultItem[]>([]);
  const [searched, setSearched] = useState(false);
  const [searchMode, setSearchMode] = useState<'vector' | 'filename'>('vector');
  const [searchMode, setSearchMode] = useState<SearchMode>('vector');
  const [page, setPage] = useState(1);
  const [hasMore, setHasMore] = useState(false);
  const requestIdRef = useRef(0);
  const { t } = useI18n();
  const navigate = useNavigate();
  const { openFileWithDefaultApp, confirmOpenWithApp } = useAppWindows();
  const statCacheRef = useRef<Map<string, VfsEntry>>(new Map());

  const handleSearch = async () => {
    if (!search.trim()) return;
    setLoading(true);
    setSearched(true);
    try {
      const res = await vfsApi.searchFiles(search, 10, searchMode);
      setResults(res.items);
    } catch (e) {
      setResults([]);
  const [contextMenuState, setContextMenuState] = useState<{ entry: VfsEntry; x: number; y: number; path: string } | null>(null);
  const [menuEntries, setMenuEntries] = useState<VfsEntry[]>([]);
  const [selectedEntryNames, setSelectedEntryNames] = useState<string[]>([]);
  const [currentPath, setCurrentPath] = useState<string>('/');
  const [processorTypes, setProcessorTypes] = useState<ProcessorTypeMeta[]>([]);
  const [renaming, setRenaming] = useState<VfsEntry | null>(null);
  const [sharingEntries, setSharingEntries] = useState<VfsEntry[]>([]);
  const [directLinkEntry, setDirectLinkEntry] = useState<VfsEntry | null>(null);
  const [detailEntry, setDetailEntry] = useState<VfsEntry | null>(null);
  const [detailData, setDetailData] = useState<any>(null);
  const [detailLoading, setDetailLoading] = useState(false);
  const [movingEntries, setMovingEntries] = useState<VfsEntry[]>([]);
  const [copyingEntries, setCopyingEntries] = useState<VfsEntry[]>([]);

  const closeContextMenu = useCallback(() => {
    setContextMenuState(null);
    setMenuEntries([]);
    setSelectedEntryNames([]);
  }, []);

  const noop = useCallback(() => {}, []);

  const handleShare = useCallback((entries: VfsEntry[]) => {
    setSharingEntries(entries);
  }, []);

  const handleDirectLink = useCallback((entry: VfsEntry) => {
    setDirectLinkEntry(entry);
  }, []);

  const { doDelete, doDownload, doRename, doShare, doGetDirectLink, doMove, doCopy } = useFileActions({
    path: currentPath,
    refresh: noop,
    clearSelection: noop,
    onShare: handleShare,
    onGetDirectLink: handleDirectLink,
  });

  const processorHook = useProcessor({ path: currentPath, processorTypes, refresh: noop });

  useEffect(() => {
    if (!open) {
      statCacheRef.current.clear();
      setProcessorTypes([]);
      closeContextMenu();
      return;
    }
    let cancelled = false;
    (async () => {
      try {
        const list = await processorsApi.list();
        if (!cancelled) {
          setProcessorTypes(list);
        }
      } catch (e) {
        if (cancelled) return;
        const msg = e instanceof Error ? e.message : t('Load failed');
        message.error(msg);
      }
    })();
    return () => {
      cancelled = true;
    };
  }, [open, closeContextMenu, t]);

  const handleClose = useCallback(() => {
    setSearch('');
    setResults([]);
    setSearched(false);
    setSearchMode('vector');
    setPage(1);
    setHasMore(false);
    requestIdRef.current = 0;
    setLoading(false);
    closeContextMenu();
    setProcessorTypes([]);
    setRenaming(null);
    setSharingEntries([]);
    setDirectLinkEntry(null);
    setDetailEntry(null);
    setDetailData(null);
    setDetailLoading(false);
    setMovingEntries([]);
    setCopyingEntries([]);
    setCurrentPath('/');
    statCacheRef.current.clear();
    onClose();
  }, [closeContextMenu, onClose]);

  const renderSourceLabel = (value?: string) => {
    switch ((value || '').toLowerCase()) {
      case 'vector':
        return t('Vector Search');
      case 'filename':
        return t('Name Search');
      case 'text':
        return t('Text Chunk');
      case 'image':
        return t('Image Description');
      default:
        return t('Vector Search');
    }
  };

  const sourceColor = (value?: string) => {
    switch ((value || '').toLowerCase()) {
      case 'vector':
        return 'blue';
      case 'filename':
        return 'green';
      case 'image':
        return 'volcano';
      case 'text':
        return 'geekblue';
      default:
        return 'purple';
    }
  };

  const buildFullPath = useCallback((entryName: string, basePath?: string) => {
    const dir = basePath ?? currentPath;
    const prefix = dir === '/' ? '' : dir;
    const combined = `${prefix}/${entryName}`.replace(/\/{2,}/g, '/');
    if (!combined) return '/';
    return combined.startsWith('/') ? combined : `/${combined}`;
  }, [currentPath]);

  const buildDefaultDestination = useCallback((targetEntries: VfsEntry[]) => {
    if (!targetEntries || targetEntries.length === 0) return '';
    if (targetEntries.length > 1) {
      return currentPath || '/';
    }
    const entry = targetEntries[0];
    const base = currentPath === '/' ? '' : currentPath;
    const segments = [base, entry.name].filter(Boolean);
    const joined = segments.join('/');
    if (!joined) return '/';
    return joined.startsWith('/') ? joined : `/${joined}`;
  }, [currentPath]);

  const openDetail = useCallback(async (entry: VfsEntry) => {
    setDetailEntry(entry);
    setDetailLoading(true);
    try {
      const stat = await vfsApi.stat(buildFullPath(entry.name));
      setDetailData(stat);
    } catch (e) {
      const msg = e instanceof Error ? e.message : t('Load failed');
      setDetailData({ error: msg });
      message.error(msg);
    } finally {
      setDetailLoading(false);
    }
  }, [buildFullPath, t]);

  const handleOpenEntry = useCallback((entry: VfsEntry) => {
    const basePath = contextMenuState?.path ?? currentPath;
    if (entry.is_dir) {
      const next = buildFullPath(entry.name, basePath);
      navigate(`/files${next === '/' ? '' : next}`, { state: { highlight: { name: entry.name } } });
      closeContextMenu();
      handleClose();
      return;
    }
    openFileWithDefaultApp(entry, basePath);
    closeContextMenu();
    handleClose();
  }, [buildFullPath, navigate, closeContextMenu, handleClose, openFileWithDefaultApp, currentPath, contextMenuState]);

  const ensureEntry = useCallback(async (fullPath: string, defaultName: string): Promise<VfsEntry | null> => {
    const cached = statCacheRef.current.get(fullPath);
    if (cached) {
      return { ...cached, name: cached.name || defaultName };
    }
    try {
      const stat = await vfsApi.stat(fullPath);
      const entry: VfsEntry = {
        name: (stat as any)?.name || defaultName,
        is_dir: Boolean((stat as any)?.is_dir),
        size: Number((stat as any)?.size ?? 0),
        mtime: Number((stat as any)?.mtime ?? (stat as any)?.mtime_ms ?? 0),
        type: (stat as any)?.type,
        has_thumbnail: Boolean((stat as any)?.has_thumbnail),
      };
      statCacheRef.current.set(fullPath, entry);
      return entry;
    } catch (e) {
      const msg = e instanceof Error ? e.message : t('Load failed');
      message.error(msg);
      return null;
    }
  }, [t]);

  const handleResultContextMenu = useCallback(async (event: React.MouseEvent, item: SearchResultItem) => {
    event.preventDefault();
    closeContextMenu();
    const rawPath = (item.path || '').replace(/\/+$/, '');
    if (!rawPath) {
      return;
    }
    const normalizedPath = rawPath.startsWith('/') ? rawPath : `/${rawPath}`;
    const segments = normalizedPath.split('/').filter(Boolean);
    const filename = segments.pop() || '';
    const dir = segments.length ? `/${segments.join('/')}` : '/';
    const entry = await ensureEntry(normalizedPath, filename);
    if (!entry) return;
    setCurrentPath(dir || '/');
    setMenuEntries([entry]);
    setSelectedEntryNames([entry.name]);
    setContextMenuState({
      entry,
      x: event.clientX,
      y: event.clientY,
      path: dir || '/',
    });
  }, [closeContextMenu, ensureEntry]);

  const performSearch = async (options?: { page?: number; mode?: SearchMode }) => {
    const query = search.trim();
    if (!query) {
      setSearched(false);
      setResults([]);
      setHasMore(false);
      return;
    }

    const currentMode = options?.mode ?? searchMode;
    const targetPage = currentMode === 'filename' ? (options?.page ?? (currentMode === searchMode ? page : 1)) : 1;

    const requestId = requestIdRef.current + 1;
    requestIdRef.current = requestId;

    setLoading(true);
    closeContextMenu();
    setSearched(true);
    if (currentMode === 'filename') {
      setPage(targetPage);
    } else {
      setPage(1);
      setHasMore(false);
    }

    try {
      const res = await vfsApi.searchFiles(
        query,
        currentMode === 'filename' ? PAGE_SIZE : 10,
        currentMode,
        currentMode === 'filename' ? targetPage : undefined,
        currentMode === 'filename' ? PAGE_SIZE : undefined,
      );
      if (requestId !== requestIdRef.current) {
        return;
      }
      setResults(res.items);
      if (currentMode === 'filename') {
        const pagination = res.pagination;
        setHasMore(Boolean(pagination?.has_more));
        if (pagination?.page) {
          setPage(pagination.page);
        }
      } else {
        setHasMore(false);
      }
    } catch (e) {
      if (requestId !== requestIdRef.current) {
        return;
      }
      setResults([]);
      if (currentMode === 'filename') {
        setHasMore(false);
      }
    } finally {
      if (requestId === requestIdRef.current) {
        setLoading(false);
      }
    }
  };

  const handleSearch = () => {
    if (!search.trim()) {
      closeContextMenu();
      setResults([]);
      setSearched(false);
      setHasMore(false);
      setPage(1);
      return;
    }
    void performSearch({ page: searchMode === 'filename' ? 1 : undefined });
  };

  const handleModeChange = (value: string | number) => {
    const nextMode = value as SearchMode;
    setHasMore(false);
    setPage(1);
    setSearchMode(nextMode);
    closeContextMenu();
    if (search.trim()) {
      void performSearch({ mode: nextMode, page: nextMode === 'filename' ? 1 : undefined });
    } else {
      setResults([]);
      setSearched(false);
    }
  };

  const totalItems = searchMode === 'filename'
    ? (hasMore ? page * PAGE_SIZE + 1 : (page - 1) * PAGE_SIZE + results.length)
    : results.length;

  return (
    <Modal
      open={open}
      onCancel={onClose}
      onCancel={handleClose}
      footer={null}
      width={600}
      width={720}
      centered
      title={null}
      closable={false}
      styles={{
        body: {
          padding: '12px 16px 16px',
          maxHeight: '70vh',
          overflow: 'hidden',
          display: 'flex',
          flexDirection: 'column',
          gap: 12,
        },
      }}
    >
      <Space.Compact style={{ marginBottom: 0, width: '100%' }}>
        <Select
          options={SEARCH_MODES(t)}
          value={searchMode}
          onChange={v => setSearchMode(v as 'vector' | 'filename')}
          style={{
            width: 120,
            fontSize: 18,
            height: 40,
            lineHeight: '40px',
            borderTopRightRadius: 0,
            borderBottomRightRadius: 0,
            borderRight: 0,
            verticalAlign: 'top',
          }}
          styles={{ popup: { root: { fontSize: 18 } } }}
          popupMatchSelectWidth={false}
        />
        <Input
          allowClear
          prefix={<SearchOutlined />}
          placeholder={t('Search files / tags / types')}
          value={search}
          onChange={e => setSearch(e.target.value)}
          style={{
            fontSize: 18,
            height: 40,
            width: 'calc(100% - 120px)',
            borderTopLeftRadius: 0,
            borderBottomLeftRadius: 0,
            verticalAlign: 'top',
          }}
          autoFocus
          onPressEnter={handleSearch}
        />
      </Space.Compact>
      {searched && (
        <>
          <Divider style={{ margin: '12px 0' }}>{t('Search Results')}</Divider>
          {loading ? (
            <Spin />
          ) : (
            <List
              itemLayout="horizontal"
              dataSource={results}
              locale={{ emptyText: t('No files found') }}
              renderItem={item => {
                const fullPath = item.path || '';
                const trimmed = fullPath.replace(/\/+$/, '');
                const parts = trimmed.split('/');
                const filename = parts.pop() || '';
                const dir = parts.length ? '/' + parts.join('/') : '/';
                return (
                  <List.Item>
                    <List.Item.Meta
                      avatar={<FileTextOutlined />}
                      title={
                        <a
                          onClick={() => {
                            navigate(`/files${dir === '/' ? '' : dir}`, { state: { highlight: { name: filename } } });
                            onClose();
                          }}
      <Flex vertical style={{ gap: 12, flex: 1, minHeight: 0 }}>
        <Flex align="center" style={{ width: '100%', gap: 12, flexWrap: 'wrap' }}>
          <Segmented
            options={[
              { label: t('Smart Search'), value: 'vector' },
              { label: t('Name Search'), value: 'filename' },
            ]}
            value={searchMode}
            onChange={handleModeChange}
            style={{
              minWidth: 160,
              height: 40,
              borderRadius: 20,
              display: 'flex',
              alignItems: 'center',
            }}
            size="large"
          />
          <Input
            allowClear
            prefix={<SearchOutlined />}
            placeholder={t('Search files / tags / types')}
            value={search}
            onChange={e => {
              const value = e.target.value;
              setSearch(value);
              if (!value.trim()) {
                setResults([]);
                setSearched(false);
                setHasMore(false);
                setPage(1);
                requestIdRef.current += 1;
                setLoading(false);
              }
            }}
            style={{ fontSize: 18, height: 40, flex: 1, minWidth: 240 }}
            styles={{
              input: {
                borderRadius: 20,
              },
            }}
            autoFocus
            onPressEnter={handleSearch}
          />
        </Flex>

        {!searched ? null : (
          <Flex vertical style={{ flex: 1, minHeight: 0 }}>
            <Divider style={{ margin: 0, padding: '0 0 12px' }}>{t('Search Results')}</Divider>
            {loading ? (
              <Flex align="center" justify="center" style={{ flex: 1 }}>
                <Spin />
              </Flex>
            ) : results.length === 0 ? (
              <Flex align="center" justify="center" style={{ flex: 1 }}>
                <Empty description={t('No files found')} image={Empty.PRESENTED_IMAGE_SIMPLE} />
              </Flex>
            ) : (
              <div style={{ flex: 1, minHeight: 0, display: 'flex', flexDirection: 'column' }}>
                <div style={{ flex: 1, minHeight: 0, overflowY: 'auto', paddingRight: 6 }}>
                  <List
                    itemLayout="horizontal"
                    dataSource={results}
                    split={false}
                    renderItem={item => {
                      const fullPath = item.path || '';
                      const trimmed = fullPath.replace(/\/+$/, '');
                      const parts = trimmed.split('/');
                      const filename = parts.pop() || '';
                      const dir = parts.length ? '/' + parts.join('/') : '/';
                      const snippet = item.snippet || '';
                      const retrieval = item.metadata?.retrieval_source || item.source_type;
                      const retrievalLabel = renderSourceLabel(retrieval);
                      const scoreText = Number.isFinite(item.score) ? item.score.toFixed(2) : '-';

                      return (
                        <List.Item
                          style={{ padding: '10px 12px', borderRadius: 6, background: '#fafafa', marginBottom: 8 }}
                          onContextMenu={(event) => { void handleResultContextMenu(event, item); }}
                        >
                          {fullPath}
                        </a>
                      }
                      description={`${t('Relevance')}: ${item.score.toFixed(2)}`}
                    />
                  </List.Item>
                );
              }}
            />
          )}
        </>
      )}
                          <List.Item.Meta
                            avatar={<FileTextOutlined style={{ fontSize: 18, color: '#8c8c8c' }} />}
                            title={
                              <a
                                onClick={() => {
                                  navigate(`/files${dir === '/' ? '' : dir}`, { state: { highlight: { name: filename } } });
                                  handleClose();
                                }}
                                style={{ fontSize: 16 }}
                              >
                                {fullPath}
                              </a>
                            }
                            description={(
                              <Space direction="vertical" size={6} style={{ width: '100%' }}>
                                {snippet ? (
                                  <Typography.Paragraph ellipsis={{ rows: 3 }} style={{ marginBottom: 0 }}>
                                    {snippet}
                                  </Typography.Paragraph>
                                ) : null}
                                <Space size={10} wrap>
                                  {retrieval ? (
                                    <Tag color={sourceColor(retrieval)} style={{ marginRight: 0 }}>
                                      {retrievalLabel}
                                    </Tag>
                                  ) : null}
                                  <Typography.Text type="secondary">
                                    {t('Relevance')}: {scoreText}
                                  </Typography.Text>
                                </Space>
                              </Space>
                            )}
                          />
                        </List.Item>
                      );
                    }}
                  />
                </div>
                {searchMode === 'filename' && results.length > 0 ? (
                  <Pagination
                    current={page}
                    pageSize={PAGE_SIZE}
                    total={Math.max(totalItems, 1)}
                    showSizeChanger={false}
                    size="small"
                    style={{ marginTop: 12, textAlign: 'right' }}
                    onChange={(nextPage) => {
                      void performSearch({ page: nextPage });
                    }}
                  />
                ) : null}
              </div>
            )}
          </Flex>
        )}
      </Flex>
      {contextMenuState ? (
        <ContextMenu
          x={contextMenuState.x}
          y={contextMenuState.y}
          entry={contextMenuState.entry}
          entries={menuEntries}
          selectedEntries={selectedEntryNames}
          processorTypes={processorTypes}
          onClose={closeContextMenu}
          onOpen={handleOpenEntry}
          onOpenWith={(entry, appKey) => confirmOpenWithApp(entry, appKey, contextMenuState?.path ?? currentPath)}
          onDownload={doDownload}
          onRename={setRenaming}
          onDelete={(entriesToDelete) => doDelete(entriesToDelete)}
          onDetail={openDetail}
          onProcess={(entry, type) => {
            processorHook.setSelectedProcessor(type);
            processorHook.openProcessorModal(entry);
          }}
          onUploadFile={noop}
          onUploadDirectory={noop}
          onCreateDir={noop}
          onShare={doShare}
          onGetDirectLink={doGetDirectLink}
          onMove={(entriesToMove) => setMovingEntries(entriesToMove)}
          onCopy={(entriesToCopy) => setCopyingEntries(entriesToCopy)}
        />
      ) : null}
      <RenameModal
        entry={renaming}
        onOk={(entry, newName) => {
          void doRename(entry, newName);
          setRenaming(null);
        }}
        onCancel={() => setRenaming(null)}
      />
      <FileDetailModal
        entry={detailEntry}
        loading={detailLoading}
        data={detailData}
        onClose={() => setDetailEntry(null)}
      />
      <MoveCopyModal
        mode="move"
        entries={movingEntries}
        open={movingEntries.length > 0}
        defaultPath={buildDefaultDestination(movingEntries)}
        onOk={async (destination) => {
          if (movingEntries.length > 0) {
            await doMove(movingEntries, destination);
          }
        }}
        onCancel={() => setMovingEntries([])}
      />
      <MoveCopyModal
        mode="copy"
        entries={copyingEntries}
        open={copyingEntries.length > 0}
        defaultPath={buildDefaultDestination(copyingEntries)}
        onOk={async (destination) => {
          if (copyingEntries.length > 0) {
            await doCopy(copyingEntries, destination);
          }
        }}
        onCancel={() => setCopyingEntries([])}
      />
      {sharingEntries.length > 0 ? (
        <ShareModal
          path={currentPath}
          entries={sharingEntries}
          open={sharingEntries.length > 0}
          onOk={() => setSharingEntries([])}
          onCancel={() => setSharingEntries([])}
        />
      ) : null}
      <DirectLinkModal
        entry={directLinkEntry}
        path={currentPath}
        open={!!directLinkEntry}
        onCancel={() => setDirectLinkEntry(null)}
      />
      <ProcessorModal
        entry={processorHook.processorModal.entry}
        visible={processorHook.processorModal.visible}
        loading={processorHook.processorLoading}
        processorTypes={processorTypes}
        selectedProcessor={processorHook.selectedProcessor}
        config={processorHook.processorConfig}
        savingPath={processorHook.processorSavingPath}
        overwrite={processorHook.processorOverwrite}
        onOk={processorHook.handleProcessorOk}
        onCancel={processorHook.handleProcessorCancel}
        onSelectedProcessorChange={processorHook.setSelectedProcessor}
        onConfigChange={processorHook.setProcessorConfig}
        onSavingPathChange={processorHook.setProcessorSavingPath}
        onOverwriteChange={processorHook.setProcessorOverwrite}
      />
    </Modal>
  );
};
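The requestIdRef pattern in performSearch above is a standard stale-response guard for overlapping async searches; a condensed sketch of the same idea in isolation:

// Each call takes a fresh id; only the latest call may commit state.
let requestId = 0;
async function guardedSearch(run: () => Promise<string[]>, commit: (items: string[]) => void) {
  const id = ++requestId;
  const items = await run();
  if (id !== requestId) return; // a newer search superseded this one
  commit(items);
}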
@@ -18,6 +18,7 @@ import ReactMarkdown from 'react-markdown';
import { useTheme } from '../contexts/ThemeContext';
import { useI18n } from '../i18n';
import { useAppWindows } from '../contexts/AppWindowsContext';
import WeChatModal from '../components/WeChatModal';
const { Sider } = Layout;

export interface SideNavProps {
@@ -211,7 +212,7 @@ const SideNav = memo(function SideNav({ collapsed, activeKey, onChange, onToggle
              <Tag icon={<WarningOutlined />} color="warning" style={{ marginInlineEnd: 0 }} />
            ) : (
              <Tag icon={<WarningOutlined />} color="warning">
                {status?.version} - {t('Update available')} [{latestVersion?.version}]
                {t('Update available')} [{latestVersion?.version}]
              </Tag>
            )}
          </a>
@@ -260,23 +261,7 @@ const SideNav = memo(function SideNav({ collapsed, activeKey, onChange, onToggle

        </div>
      </Sider>
      <Modal
        open={isModalOpen}
        onCancel={() => setIsModalOpen(false)}
        title={t('Join Community')}
        footer={null}
        width={320}
      >
        <div style={{ textAlign: 'center', padding: '12px 0' }}>
          <img src="https://foxel.cc/image/wechat.png" width={200} alt="wechat" />
          <div style={{ marginTop: 12, color: token.colorTextSecondary }}>
            {t('Scan to join WeChat group')}
          </div>
          <div style={{ marginTop: 8, fontSize: 12, color: token.colorTextTertiary }}>
            {t('If QR expires, add drizzle2001 to join')}
          </div>
        </div>
      </Modal>
      <WeChatModal open={isModalOpen} onClose={() => setIsModalOpen(false)} />
      <Modal
        open={isVersionModalOpen}
        onCancel={() => setIsVersionModalOpen(false)}
@@ -1,24 +1,9 @@
import { memo, useState, useEffect, useCallback } from 'react';
import { Table, Button, Space, Drawer, Form, Input, Switch, message, Typography, Popconfirm, Select } from 'antd';
import PageCard from '../components/PageCard';
import { adaptersApi, type AdapterItem } from '../api/client';
import { adaptersApi, type AdapterItem, type AdapterTypeMeta } from '../api/client';
import { useI18n } from '../i18n';

interface AdapterTypeField {
  key: string;
  label: string;
  type: 'string' | 'password' | 'number';
  required?: boolean;
  placeholder?: string;
  default?: any;
}
interface AdapterTypeMeta {
  type: string;
  name: string;
  config_schema: AdapterTypeField[];
}

const AdaptersPage = memo(function AdaptersPage() {
  const [loading, setLoading] = useState(false);
  const [data, setData] = useState<AdapterItem[]>([]);
@@ -185,14 +170,20 @@ const AdaptersPage = memo(function AdaptersPage() {
    return currentTypeMeta.config_schema.map(field => {
      const rules = field.required ? [{ required: true, message: t('Please input {label}', { label: field.label }) }] : [];
      let inputNode: any = <Input placeholder={field.placeholder} />;
      let valuePropName: string | undefined;
      if (field.type === 'password') inputNode = <Input.Password placeholder={field.placeholder} />;
      if (field.type === 'number') inputNode = <Input type="number" placeholder={field.placeholder} />;
      if (field.type === 'boolean') {
        inputNode = <Switch />;
        valuePropName = 'checked';
      }
      return (
        <Form.Item
          key={field.key}
          name={['config', field.key]}
          label={t(field.label)}
          rules={rules}
          valuePropName={valuePropName}
        >
          {inputNode}
        </Form.Item>
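The boolean branch above exists because antd's Switch reports its state via checked rather than value; a minimal sketch of the same pattern on its own:

// Form.Item binds to `value` by default; Switch needs valuePropName="checked".
import { Form, Switch } from 'antd';

function BooleanField({ name, label }: { name: string; label: string }) {
  return (
    <Form.Item name={name} label={label} valuePropName="checked">
      <Switch />
    </Form.Item>
  );
}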
@@ -25,13 +25,16 @@ import { FileDetailModal } from './components/FileDetailModal';
import { MoveCopyModal } from './components/Modals/MoveCopyModal';
import type { ViewMode } from './types';
import { vfsApi, type VfsEntry } from '../../api/client';
import { LoadingSkeleton } from './components/LoadingSkeleton';

const FileExplorerPage = memo(function FileExplorerPage() {
  const { navKey = 'files', '*': restPath = '' } = useParams();
  const { token } = theme.useToken();
  const [viewMode, setViewMode] = useState<ViewMode>('grid');
  const [isDragging, setIsDragging] = useState(false);
  const [showSkeleton, setShowSkeleton] = useState(false);
  const dragCounter = useRef(0);
  const skeletonTimerRef = useRef<number | null>(null);

  // --- Hooks ---
  const { path, entries, loading, pagination, processorTypes, sortBy, sortOrder, load, navigateTo, goUp, handlePaginationChange, refresh, handleSortChange } = useFileExplorer(navKey);
@@ -40,7 +43,7 @@ const FileExplorerPage = memo(function FileExplorerPage() {
  const { openFileWithDefaultApp, confirmOpenWithApp } = useAppWindows();
  const { ctxMenu, blankCtxMenu, openContextMenu, openBlankContextMenu, closeContextMenus } = useContextMenu();
  const uploader = useUploader(path, refresh);
  const { handleFileDrop } = uploader;
  const { handleFileDrop, openFilePicker, openDirectoryPicker, handleFileInputChange, handleDirectoryInputChange } = uploader;
  const processorHook = useProcessor({ path, processorTypes, refresh });
  const { thumbs } = useThumbnails(entries, path);

@@ -50,16 +53,40 @@ const FileExplorerPage = memo(function FileExplorerPage() {
  const [sharingEntries, setSharingEntries] = useState<VfsEntry[]>([]);
  const [detailEntry, setDetailEntry] = useState<VfsEntry | null>(null);
  const [directLinkEntry, setDirectLinkEntry] = useState<VfsEntry | null>(null);
  const [detailData, setDetailData] = useState<any>(null);
  const [detailData, setDetailData] = useState<Record<string, unknown> | { error: string } | null>(null);
  const [detailLoading, setDetailLoading] = useState(false);
  const [movingEntries, setMovingEntries] = useState<VfsEntry[]>([]);
  const [copyingEntries, setCopyingEntries] = useState<VfsEntry[]>([]);

  // --- Effects ---
  const routePath = '/' + (restPath || '').replace(/^\/+/, '');

  useEffect(() => {
    const routeP = '/' + (restPath || '').replace(/^\/+/, '');
    load(routeP, 1, pagination.pageSize, sortBy, sortOrder);
  }, [restPath, navKey, load, pagination.pageSize, sortBy, sortOrder]);
    load(routePath, 1, pagination.pageSize, sortBy, sortOrder);
  }, [routePath, navKey, load, pagination.pageSize, sortBy, sortOrder]);

  useEffect(() => {
    if (skeletonTimerRef.current !== null) {
      clearTimeout(skeletonTimerRef.current);
      skeletonTimerRef.current = null;
    }

    if (loading) {
      skeletonTimerRef.current = window.setTimeout(() => {
        setShowSkeleton(true);
        skeletonTimerRef.current = null;
      }, 200);
    } else {
      setShowSkeleton(false);
    }

    return () => {
      if (skeletonTimerRef.current !== null) {
        clearTimeout(skeletonTimerRef.current);
        skeletonTimerRef.current = null;
      }
    };
  }, [loading]);

  // --- Handlers ---
  const handleOpenEntry = (entry: VfsEntry) => {
@@ -77,9 +104,10 @@ const FileExplorerPage = memo(function FileExplorerPage() {
|
||||
try {
|
||||
const fullPath = (path === '/' ? '' : path) + '/' + entry.name;
|
||||
const stat = await vfsApi.stat(fullPath);
|
||||
setDetailData(stat);
|
||||
} catch (e: any) {
|
||||
setDetailData({ error: e.message });
|
||||
setDetailData(stat as Record<string, unknown>);
|
||||
} catch (error) {
|
||||
const messageText = error instanceof Error ? error.message : String(error);
|
||||
setDetailData({ error: messageText });
|
||||
} finally {
|
||||
setDetailLoading(false);
|
||||
}
|
||||
@@ -128,7 +156,7 @@ const FileExplorerPage = memo(function FileExplorerPage() {
|
||||
e.stopPropagation();
|
||||
setIsDragging(false);
|
||||
dragCounter.current = 0;
|
||||
handleFileDrop(e.dataTransfer.files);
|
||||
void handleFileDrop(e.dataTransfer);
|
||||
};
|
||||
|
||||
return (
|
||||
@@ -159,22 +187,37 @@ const FileExplorerPage = memo(function FileExplorerPage() {
|
||||
onNavigate={navigateTo}
|
||||
onRefresh={refresh}
|
||||
onCreateDir={() => setCreatingDir(true)}
|
||||
onUpload={uploader.openModal}
|
||||
onUploadFile={openFilePicker}
|
||||
onUploadDirectory={openDirectoryPicker}
|
||||
onSetViewMode={setViewMode}
|
||||
onSortChange={handleSortChange}
|
||||
/>
|
||||
|
||||
<input ref={uploader.fileInputRef} type="file" style={{ display: 'none' }} multiple onChange={uploader.handleFileChange} />
|
||||
<input
|
||||
ref={uploader.fileInputRef}
|
||||
type="file"
|
||||
style={{ display: 'none' }}
|
||||
multiple
|
||||
onChange={handleFileInputChange}
|
||||
/>
|
||||
<input
|
||||
ref={uploader.directoryInputRef}
|
||||
type="file"
|
||||
style={{ display: 'none' }}
|
||||
multiple
|
||||
onChange={handleDirectoryInputChange}
|
||||
/>
|
||||
|
||||
<div style={{ flex: 1, overflow: 'auto', paddingBottom: pagination.total > 0 ? '80px' : '0' }} onContextMenu={openBlankContextMenu}>
|
||||
{loading && entries.length === 0 ? (
|
||||
{showSkeleton && loading && (entries.length === 0 || path !== routePath) ? (
|
||||
<LoadingSkeleton mode={viewMode} />
|
||||
) : !loading && entries.length === 0 ? (
|
||||
<div style={{ textAlign: 'center', padding: 40 }}><EmptyState isRoot={path === '/'} /></div>
|
||||
) : viewMode === 'grid' ? (
|
||||
<GridView
|
||||
entries={entries}
|
||||
thumbs={thumbs}
|
||||
selectedEntries={selectedEntries}
|
||||
loading={loading}
|
||||
path={path}
|
||||
onSelect={handleSelect}
|
||||
onSelectRange={handleSelectRange}
|
||||
@@ -184,7 +227,6 @@ const FileExplorerPage = memo(function FileExplorerPage() {
|
||||
) : (
|
||||
<FileListView
|
||||
entries={entries}
|
||||
loading={loading}
|
||||
selectedEntries={selectedEntries}
|
||||
onRowClick={(r, e) => handleSelect(r, e.ctrlKey || e.metaKey)}
|
||||
onSelectionChange={setSelectedEntries}
|
||||
@@ -282,7 +324,8 @@ const FileExplorerPage = memo(function FileExplorerPage() {
|
||||
processorHook.setSelectedProcessor(type);
|
||||
processorHook.openProcessorModal(entry);
|
||||
}}
|
||||
onUpload={uploader.openModal}
|
||||
onUploadFile={openFilePicker}
|
||||
onUploadDirectory={openDirectoryPicker}
|
||||
onCreateDir={() => setCreatingDir(true)}
|
||||
onShare={doShare}
|
||||
onGetDirectLink={doGetDirectLink}
|
||||
@@ -293,8 +336,14 @@ const FileExplorerPage = memo(function FileExplorerPage() {
|
||||
<UploadModal
|
||||
visible={uploader.isModalVisible}
|
||||
files={uploader.files}
|
||||
isUploading={uploader.isUploading}
|
||||
totalProgress={uploader.totalProgress}
|
||||
totalFileBytes={uploader.totalFileBytes}
|
||||
uploadedFileBytes={uploader.uploadedFileBytes}
|
||||
conflict={uploader.conflict}
|
||||
onClose={uploader.closeModal}
|
||||
onStartUpload={uploader.startUpload}
|
||||
onResolveConflict={uploader.confirmConflict}
|
||||
/>
|
||||
<DropzoneOverlay visible={isDragging} />
|
||||
</div>
|
||||
|
||||
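The skeleton wiring above only reveals the placeholder once loading has persisted for 200ms, so fast responses never flash a skeleton. A minimal standalone sketch of the same idea, assuming a generic hook (the name useDelayedFlag and the 200ms default are illustrative, not part of this change):

import { useEffect, useRef, useState } from 'react';

// Becomes true only after `flag` has stayed true for `delayMs`;
// resets to false immediately when `flag` drops.
function useDelayedFlag(flag: boolean, delayMs = 200): boolean {
  const [delayed, setDelayed] = useState(false);
  const timerRef = useRef<number | null>(null);
  useEffect(() => {
    if (timerRef.current !== null) {
      clearTimeout(timerRef.current);
      timerRef.current = null;
    }
    if (flag) {
      timerRef.current = window.setTimeout(() => setDelayed(true), delayMs);
    } else {
      setDelayed(false);
    }
    return () => {
      if (timerRef.current !== null) clearTimeout(timerRef.current);
    };
  }, [flag, delayMs]);
  return delayed;
}
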
@@ -1,6 +1,8 @@
import React, { useLayoutEffect, useRef, useState } from 'react';
import { Menu, theme } from 'antd';
import type { MenuProps } from 'antd';
import type { VfsEntry } from '../../../api/client';
import type { ProcessorTypeMeta } from '../../../api/processors';
import { getAppsForEntry, getDefaultAppForEntry } from '../../../apps/registry';
import { useI18n } from '../../../i18n';
import {
@@ -15,7 +17,7 @@ interface ContextMenuProps {
entry?: VfsEntry;
entries: VfsEntry[];
selectedEntries: string[];
processorTypes: any[];
processorTypes: ProcessorTypeMeta[];
onClose: () => void;
onOpen: (entry: VfsEntry) => void;
onOpenWith: (entry: VfsEntry, appKey: string) => void;
@@ -24,7 +26,8 @@ interface ContextMenuProps {
onDelete: (entries: VfsEntry[]) => void;
onDetail: (entry: VfsEntry) => void;
onProcess: (entry: VfsEntry, processorType: string) => void;
onUpload: () => void;
onUploadFile: () => void;
onUploadDirectory: () => void;
onCreateDir: () => void;
onShare: (entries: VfsEntry[]) => void;
onGetDirectLink: (entry: VfsEntry) => void;
@@ -32,6 +35,18 @@ interface ContextMenuProps {
onCopy: (entries: VfsEntry[]) => void;
}

type MenuItem = Required<MenuProps>['items'][number];

interface ActionMenuItem {
key: string;
label: React.ReactNode;
icon?: React.ReactNode;
disabled?: boolean;
danger?: boolean;
onClick?: () => void;
children?: ActionMenuItem[];
}

export const ContextMenu: React.FC<ContextMenuProps> = (props) => {
const { token } = theme.useToken();
const { t } = useI18n();
@@ -43,10 +58,18 @@ export const ContextMenu: React.FC<ContextMenuProps> = (props) => {
setPosition({ left: x, top: y });
}, [x, y]);

const getContextMenuItems = () => {
const getContextMenuItems = (): ActionMenuItem[] => {
if (!entry) { // Blank context menu
return [
{ key: 'upload', label: t('Upload File'), icon: <UploadOutlined />, onClick: actions.onUpload },
{
key: 'upload',
label: t('Upload'),
icon: <UploadOutlined />,
children: [
{ key: 'upload-file', label: t('Upload Files'), onClick: actions.onUploadFile },
{ key: 'upload-folder', label: t('Upload Folder'), onClick: actions.onUploadDirectory },
],
},
{ key: 'mkdir', label: t('New Folder'), icon: <PlusOutlined />, onClick: actions.onCreateDir },
];
}
@@ -57,11 +80,15 @@ export const ContextMenu: React.FC<ContextMenuProps> = (props) => {
const targetNames = selectedEntries.includes(entry.name) ? selectedEntries : [entry.name];
const targetEntries = entries.filter(e => targetNames.includes(e.name));

let processorSubMenu: any[] = [];
let processorSubMenu: ActionMenuItem[] = [];
if (!entry.is_dir && processorTypes.length > 0) {
const ext = entry.name.split('.').pop()?.toLowerCase() || '';
processorSubMenu = processorTypes
.filter(pt => pt.supported_exts.includes(ext))
.filter(pt => {
const exts = pt.supported_exts;
if (!Array.isArray(exts) || exts.length === 0) return true;
return exts.includes(ext);
})
.map(pt => ({
key: 'processor-' + pt.type,
label: pt.name,
@@ -69,7 +96,7 @@ export const ContextMenu: React.FC<ContextMenuProps> = (props) => {
}));
}

return [
const menuItems: (ActionMenuItem | null)[] = [
(entry.is_dir || apps.length > 0) ? {
key: 'open',
label: defaultApp ? `${t('Open')} (${defaultApp.name})` : t('Open'),
@@ -147,18 +174,32 @@ export const ContextMenu: React.FC<ContextMenuProps> = (props) => {
icon: <InfoCircleOutlined />,
onClick: () => actions.onDetail(entry),
},
].filter(Boolean);
];

return menuItems.filter((item): item is ActionMenuItem => item !== null);
};

const items = getContextMenuItems()
.filter(item => item !== null) // Ensure no null items
.map(item => ({
...item,
onClick: () => {
if (item.onClick) item.onClick();
onClose();
}
}));
const actionItems = getContextMenuItems();

const handlerMap = new Map<string, () => void>();

const mapItems = (source: ActionMenuItem[]): MenuItem[] =>
source.map<MenuItem>((item) => {
if (item.onClick) handlerMap.set(item.key, item.onClick);
const mappedChildren = item.children && item.children.length > 0 ? mapItems(item.children) : undefined;

const transformed = {
key: item.key,
label: item.label,
icon: item.icon,
disabled: item.disabled,
danger: item.danger,
...(mappedChildren ? { children: mappedChildren } : {}),
} as MenuItem;
return transformed;
});

const items = mapItems(actionItems);

useLayoutEffect(() => {
if (typeof window === 'undefined') return;
@@ -199,8 +240,13 @@ export const ContextMenu: React.FC<ContextMenuProps> = (props) => {
onClick={onClose} // Close on any click inside the menu area
>
<Menu
items={items as any[]}
items={items}
selectable={false}
onClick={({ key }) => {
const handler = handlerMap.get(String(key));
if (handler) handler();
onClose();
}}
style={{ width: 160, borderRadius: token.borderRadius, background: 'transparent' }}
/>
</div>

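The menu mapping above works around antd's Menu reporting only a key in its onClick: each item's handler is collected into a Map while the action tree is converted to antd's item shape, then dispatched by key. A condensed sketch of that pattern, using illustrative names (Action, buildMenu) rather than the component's own:

import type { ReactNode } from 'react';

type Action = { key: string; label: ReactNode; onClick?: () => void; children?: Action[] };

// Recursively strip `onClick` into a lookup table keyed by `key`,
// returning plain antd-compatible item objects.
function buildMenu(actions: Action[], handlers: Map<string, () => void>): object[] {
  return actions.map((a) => {
    if (a.onClick) handlers.set(a.key, a.onClick);
    return {
      key: a.key,
      label: a.label,
      ...(a.children && a.children.length > 0 ? { children: buildMenu(a.children, handlers) } : {}),
    };
  });
}

// Usage sketch: const handlers = new Map<string, () => void>();
// const items = buildMenu(actions, handlers);
// <Menu items={items} onClick={({ key }) => handlers.get(String(key))?.()} />
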
@@ -1,6 +1,6 @@
import React from 'react';
import { Modal, Typography, Spin, theme, Card, Descriptions, Divider, Badge, Space, message } from 'antd';
import { FileOutlined, FolderOutlined, CameraOutlined, InfoCircleOutlined } from '@ant-design/icons';
import { Modal, Typography, Spin, theme, Card, Descriptions, Divider, Badge, Space, message, Collapse, Tag } from 'antd';
import { FileOutlined, FolderOutlined, CameraOutlined, InfoCircleOutlined, DatabaseOutlined } from '@ant-design/icons';
import { useI18n } from '../../../i18n';
import type { VfsEntry } from '../../../api/client';

@@ -80,7 +80,63 @@ function formatFileSize(size: number | string, t: (k: string)=>string): string {
export const FileDetailModal: React.FC<Props> = ({ entry, loading, data, onClose }) => {
const { token } = theme.useToken();
const { t } = useI18n();

const vectorIndex = data?.vector_index;
const vectorEntries = Array.isArray(vectorIndex?.entries) ? vectorIndex.entries : [];
const primaryIndexEntries = vectorEntries.slice(0, 3);
const remainingIndexEntries = vectorEntries.slice(3);

const renderIndexEntry = (entry: any, idx: number, total: number) => {
const key = entry?.chunk_id ?? entry?.vector_id ?? idx;
const hasOffsets = entry?.start_offset !== undefined || entry?.end_offset !== undefined;
const previewText = entry?.preview;
const previewTruncated = Boolean(entry?.preview_truncated && previewText);

return (
<div
key={String(key)}
style={{
padding: '12px 0',
borderBottom: idx === total - 1 ? 'none' : `1px solid ${token.colorSplit}`,
}}
>
<Space direction="vertical" size={6} style={{ width: '100%' }}>
<Space size={[4, 4]} wrap>
{entry?.chunk_id && (
<Tag color="blue">{t('Chunk ID')}: {entry.chunk_id}</Tag>
)}
{entry?.type && (
<Tag>{entry.type}</Tag>
)}
{entry?.mime && (
<Tag color="geekblue">{entry.mime}</Tag>
)}
{entry?.name && !previewText && (
<Tag color="purple">{entry.name}</Tag>
)}
</Space>
{hasOffsets && (
<Typography.Text type="secondary" style={{ fontSize: 12 }}>
{t('Offset Range')}: {entry?.start_offset ?? '-'} ~ {entry?.end_offset ?? '-'}
</Typography.Text>
)}
{entry?.vector_id && (
<Typography.Text type="secondary" style={{ fontSize: 12 }}>
{t('Vector ID')}: {entry.vector_id}
</Typography.Text>
)}
{previewText && (
<Typography.Paragraph
style={{ marginBottom: 0 }}
ellipsis={{ rows: 3, expandable: previewTruncated }}
>
{previewText}
</Typography.Paragraph>
)}
</Space>
</div>
);
};

return (
<Modal
title={
@@ -225,6 +281,82 @@ export const FileDetailModal: React.FC<Props> = ({ entry, loading, data, onClose
</>
)}
</Card>

{!data.is_dir && vectorIndex && (
<Card
size="small"
style={{ borderRadius: 8, marginTop: 16 }}
title={
<Space>
<DatabaseOutlined />
{t('Index Info')}
</Space>
}
>
<Descriptions
column={1}
size="small"
items={[
{
key: 'total',
label: t('Indexed Items'),
children: vectorIndex.total ?? 0,
},
{
key: 'types',
label: t('Indexed Types'),
children: Object.keys(vectorIndex.by_type || {}).length > 0 ? (
<Space size={[4, 4]} wrap>
{Object.entries(vectorIndex.by_type || {}).map(([type, count]) => (
<Tag key={type}>{type} ({count as number})</Tag>
))}
</Space>
) : (
<Typography.Text type="secondary">{t('No index data')}</Typography.Text>
),
},
]}
contentStyle={{ fontSize: 14 }}
labelStyle={{ fontWeight: 500, color: token.colorTextSecondary, width: '30%' }}
/>

{vectorIndex.total ? (
<div style={{ marginTop: 12 }}>
<Typography.Text strong style={{ marginBottom: 8, display: 'block' }}>
{t('Indexed Chunks')}
</Typography.Text>
<div style={{ maxHeight: '40vh', overflowY: 'auto', paddingRight: 8 }}>
{primaryIndexEntries.map((entry: any, idx: number) => renderIndexEntry(entry, idx, primaryIndexEntries.length))}
{remainingIndexEntries.length > 0 && (
<Collapse
bordered={false}
size="small"
items={[{
key: 'more',
label: t('More Indexed Chunks'),
children: (
<div>
{remainingIndexEntries.map((entry: any, idx: number) => renderIndexEntry(entry, idx, remainingIndexEntries.length))}
</div>
),
}]}
style={{ background: 'transparent' }}
/>
)}
</div>
{vectorIndex.has_more && (
<Typography.Text type="secondary" style={{ fontSize: 12 }}>
{t('Showing first {count} entries', { count: vectorEntries.length })}
</Typography.Text>
)}
</div>
) : (
<div style={{ marginTop: 12 }}>
<Typography.Text type="secondary">{t('No index data')}</Typography.Text>
</div>
)}
</Card>
)}
</div>

{/* Right side: EXIF info */}

@@ -45,9 +45,9 @@ export const getFileIcon = (fileName: string, size: number = 16, resolvedMode: '
if (['xls', 'xlsx'].includes(ext)) return make(<FileExcelOutlined />, '#52c41a');
if (['ppt', 'pptx'].includes(ext)) return make(<FilePptOutlined />, '#fa8c16');
if (['zip', 'rar', '7z', 'tar', 'gz', 'bz2', 'xz'].includes(ext)) return make(<FileZipOutlined />, '#faad14');
if (['js','jsx','ts','tsx','vue','html','css','scss','less','json','xml','yaml','yml','py','java','cpp','c','h','php','rb','go','rs','swift','kt'].includes(ext)) return make(<CodeOutlined />, '#13c2c2');
if (['js','jsx','ts','tsx','vue','html','htm','css','scss','sass','less','json','xml','yaml','yml','py','java','cpp','cc','cxx','c','h','hpp','hxx','php','rb','go','rs','rust','swift','kt','scala','clj','cljs','cs','vb','fs','pl','pm','r','lua','dart','elm'].includes(ext)) return make(<CodeOutlined />, '#13c2c2');
if (['md', 'markdown'].includes(ext)) return make(<FileMarkdownOutlined />, '#1890ff');
if (['txt', 'log', 'ini', 'cfg', 'conf'].includes(ext)) return make(<FileTextOutlined />, '#8c8c8c');
if (['txt', 'log', 'ini', 'cfg', 'conf', 'sh', 'bash', 'zsh', 'fish', 'ps1', 'bat', 'cmd', 'dockerfile', 'makefile', 'gradle', 'cmake', 'gitignore', 'gitattributes', 'editorconfig', 'prettierrc'].includes(ext)) return make(<FileTextOutlined />, '#8c8c8c');
if (['ttf', 'otf', 'woff', 'woff2', 'eot'].includes(ext)) return make(<FontSizeOutlined />, '#eb2f96');
if (['db', 'sqlite', 'sql'].includes(ext)) return make(<DatabaseOutlined />, '#fa541c');
if (['env', 'config', 'properties', 'toml'].includes(ext)) return make(<SettingOutlined />, '#faad14');

@@ -9,7 +9,6 @@ import { useI18n } from '../../../i18n';

interface FileListViewProps {
entries: VfsEntry[];
loading: boolean;
selectedEntries: string[];
onRowClick: (entry: VfsEntry, e: React.MouseEvent) => void;
onSelectionChange: (selectedKeys: string[]) => void;
@@ -22,7 +21,6 @@ interface FileListViewProps {

export const FileListView: React.FC<FileListViewProps> = ({
entries,
loading,
selectedEntries,
onRowClick,
onSelectionChange,
@@ -107,7 +105,6 @@ export const FileListView: React.FC<FileListViewProps> = ({
rowKey={r => r.name}
dataSource={entries}
columns={columns as any}
loading={loading}
pagination={false}
onRow={(r) => ({
onClick: (e: any) => onRowClick(r, e),

@@ -1,5 +1,5 @@
import React, { useRef, useState, useEffect } from 'react';
import { Tooltip, Spin, theme } from 'antd';
import { Tooltip, theme } from 'antd';
import { FolderFilled, PictureOutlined } from '@ant-design/icons';
import type { VfsEntry } from '../../../api/client';
import { getFileIcon } from './FileIcons';
@@ -10,7 +10,6 @@ interface Props {
entries: VfsEntry[];
thumbs: Record<string, string>;
selectedEntries: string[];
loading: boolean;
path: string;
onSelect: (e: VfsEntry, additive?: boolean) => void;
onSelectRange: (names: string[]) => void;
@@ -25,7 +24,7 @@ const formatSize = (size: number) => {
return (size / 1024 / 1024 / 1024).toFixed(1) + ' GB';
};

export const GridView: React.FC<Props> = ({ entries, thumbs, selectedEntries, loading, path, onSelect, onSelectRange, onOpen, onContextMenu }) => {
export const GridView: React.FC<Props> = ({ entries, thumbs, selectedEntries, path, onSelect, onSelectRange, onOpen, onContextMenu }) => {
const { token } = theme.useToken();
const { resolvedMode } = useTheme();
const lightenColor = (hex: string, amount: number) => {
@@ -185,8 +184,7 @@ export const GridView: React.FC<Props> = ({ entries, thumbs, selectedEntries, lo
}}
/>
)}
{loading && <div style={{ width: '100%', textAlign: 'center', padding: 40 }}><Spin /></div>}
{!loading && entries.length === 0 && <EmptyState isRoot={path === '/'} />}
{entries.length === 0 && <EmptyState isRoot={path === '/'} />}
</div>
);
};

@@ -1,5 +1,5 @@
import React, { useState } from 'react';
import { Flex, Typography, Divider, Button, Space, Tooltip, Segmented, Breadcrumb, Input, theme } from 'antd';
import { Flex, Typography, Divider, Button, Space, Tooltip, Segmented, Breadcrumb, Input, theme, Dropdown } from 'antd';
import { ArrowUpOutlined, ArrowDownOutlined, ReloadOutlined, PlusOutlined, UploadOutlined, AppstoreOutlined, UnorderedListOutlined } from '@ant-design/icons';
import { Select } from 'antd';
import { useI18n } from '../../../i18n';
@@ -16,7 +16,8 @@ interface HeaderProps {
onNavigate: (path: string) => void;
onRefresh: () => void;
onCreateDir: () => void;
onUpload: () => void;
onUploadFile: () => void;
onUploadDirectory: () => void;
onSetViewMode: (mode: ViewMode) => void;
onSortChange: (sortBy: string, sortOrder: string) => void;
}
@@ -31,7 +32,8 @@ export const Header: React.FC<HeaderProps> = ({
onNavigate,
onRefresh,
onCreateDir,
onUpload,
onUploadFile,
onUploadDirectory,
onSetViewMode,
onSortChange,
}) => {
@@ -108,7 +110,26 @@ export const Header: React.FC<HeaderProps> = ({
<Space size={8} wrap>
<Button size="small" icon={<ReloadOutlined />} onClick={onRefresh} loading={loading}>{t('Refresh')}</Button>
<Button size="small" icon={<PlusOutlined />} onClick={onCreateDir}>{t('New Folder')}</Button>
<Button size="small" icon={<UploadOutlined />} onClick={onUpload}>{t('Upload')}</Button>
<Dropdown.Button
size="small"
icon={<UploadOutlined />}
onClick={onUploadFile}
menu={{
items: [
{ key: 'file', label: t('Upload Files') },
{ key: 'folder', label: t('Upload Folder') },
],
onClick: ({ key }) => {
if (key === 'folder') {
onUploadDirectory();
} else {
onUploadFile();
}
},
}}
>
{t('Upload')}
</Dropdown.Button>
<Select
size="small"
value={sortBy}
@@ -128,7 +149,7 @@ export const Header: React.FC<HeaderProps> = ({
<Segmented
size="small"
value={viewMode}
onChange={v => onSetViewMode(v as any)}
onChange={value => onSetViewMode(value as ViewMode)}
options={[
{ label: <Tooltip title={t('Grid')}><AppstoreOutlined /></Tooltip>, value: 'grid' },
{ label: <Tooltip title={t('List')}><UnorderedListOutlined /></Tooltip>, value: 'list' }

@@ -0,0 +1,71 @@
import type { FC } from 'react';
import { Skeleton, theme } from 'antd';

type LoadingMode = 'grid' | 'list';

interface LoadingSkeletonProps {
mode: LoadingMode;
count?: number;
}

const createArray = (length: number) => Array.from({ length }, (_, index) => index);

export const LoadingSkeleton: FC<LoadingSkeletonProps> = ({ mode, count }) => {
const { token } = theme.useToken();
const fallbackCount = mode === 'grid' ? 50 : 30;
const items = createArray(count ?? fallbackCount);

if (mode === 'grid') {
return (
<div
style={{
display: 'grid',
gridTemplateColumns: 'repeat(auto-fill, minmax(160px, 1fr))',
gap: 16,
padding: 16,
}}
>
{items.map((key) => (
<div
key={key}
style={{
background: token.colorBgElevated,
borderRadius: token.borderRadius,
padding: 16,
display: 'flex',
flexDirection: 'column',
gap: 12,
}}
>
<Skeleton.Button active block style={{ height: 96, borderRadius: token.borderRadiusLG }} />
<Skeleton active title={false} paragraph={{ rows: 2, width: ['80%', '60%'] }} />
</div>
))}
</div>
);
}

return (
<div style={{ padding: '0 16px' }}>
{items.map((key) => (
<div
key={key}
style={{
display: 'grid',
gridTemplateColumns: '48px 1fr',
alignItems: 'center',
padding: '12px 16px',
borderBottom: `1px solid ${token.colorBorderSecondary}`,
}}
>
<Skeleton.Avatar active shape="square" size={32} />
<div style={{ paddingLeft: 16 }}>
<Skeleton active title={false} paragraph={{ rows: 1, width: '60%' }} />
<Skeleton active title={false} paragraph={{ rows: 1, width: '40%' }} />
</div>
</div>
))}
</div>
);
};

@@ -1,24 +1,57 @@
import React, { useEffect } from 'react';
import { Modal, Button, List, Progress, Typography, message, Flex } from 'antd';
import React, { useEffect, useMemo } from 'react';
import { Modal, Button, List, Progress, Typography, message, Flex, Tag, Space } from 'antd';
import { CopyOutlined, CheckCircleFilled, CloseCircleFilled } from '@ant-design/icons';
import type { UploadFile } from '../../hooks/useUploader';
import type { ConflictDecision, UploadConflict, UploadFile } from '../../hooks/useUploader';
import { useI18n } from '../../../../i18n';

interface UploadModalProps {
visible: boolean;
files: UploadFile[];
isUploading: boolean;
totalProgress: number;
totalFileBytes: number;
uploadedFileBytes: number;
conflict: UploadConflict | null;
onClose: () => void;
onStartUpload: () => void;
onResolveConflict: (decision: ConflictDecision) => void;
}

const UploadModal: React.FC<UploadModalProps> = ({ visible, files, onClose, onStartUpload }) => {
const formatBytes = (bytes: number) => {
if (bytes <= 0) return '0 B';
const units = ['B', 'KB', 'MB', 'GB', 'TB'];
const index = Math.min(units.length - 1, Math.floor(Math.log(bytes) / Math.log(1024)));
const value = bytes / (1024 ** index);
return `${value.toFixed(value >= 10 || index === 0 ? 0 : 1)} ${units[index]}`;
};

const UploadModal: React.FC<UploadModalProps> = ({
visible,
files,
isUploading,
totalProgress,
totalFileBytes,
uploadedFileBytes,
conflict,
onClose,
onStartUpload,
onResolveConflict,
}) => {
const { t } = useI18n();

const allSuccess = files.every(f => f.status === 'success');

const summary = useMemo(() => {
const total = files.length;
const completed = files.filter(f => ['success', 'skipped'].includes(f.status)).length;
const failures = files.filter(f => f.status === 'error').length;
const pending = files.filter(f => ['pending', 'waiting', 'uploading'].includes(f.status)).length;
return { total, completed, failures, pending };
}, [files]);

const allFinished = files.length > 0 && files.every(f => ['success', 'error', 'skipped'].includes(f.status));

useEffect(() => {
if (visible && files.length > 0 && files.every(f => f.status === 'pending')) {
onStartUpload();
if (visible && files.length > 0 && files.some(f => f.status === 'pending')) {
onStartUpload();
}
}, [visible, files, onStartUpload]);

@@ -28,6 +61,29 @@ const UploadModal: React.FC<UploadModalProps> = ({ visible, files, onClose, onSt
};

const renderStatus = (file: UploadFile) => {
if (file.type === 'directory') {
if (file.status === 'uploading') {
return <Typography.Text type="secondary">{t('Creating directory...')}</Typography.Text>;
}
if (file.status === 'success') {
return (
<Flex align="center" gap={8}>
<CheckCircleFilled style={{ color: 'var(--ant-color-success, #52c41a)' }} />
<Typography.Text type="secondary">{t('Directory ready')}</Typography.Text>
</Flex>
);
}
if (file.status === 'error') {
return (
<Flex align="center" gap={8}>
<CloseCircleFilled style={{ color: 'var(--ant-color-error, #ff4d4f)' }} />
<Typography.Text type="danger" title={file.error}>{t('Create directory failed')}</Typography.Text>
</Flex>
);
}
return <Typography.Text type="secondary">{t('Waiting to create')}</Typography.Text>;
}

switch (file.status) {
case 'uploading':
return <Progress percent={Math.round(file.progress)} size="small" />;
@@ -39,6 +95,10 @@ const UploadModal: React.FC<UploadModalProps> = ({ visible, files, onClose, onSt
<Button icon={<CopyOutlined />} size="small" onClick={() => handleCopy(file.permanentLink!)} type="text" />
</Flex>
);
case 'waiting':
return <Typography.Text type="warning">{t('Waiting for overwrite decision')}</Typography.Text>;
case 'skipped':
return <Typography.Text type="secondary">{t('Skipped')}</Typography.Text>;
case 'error':
return (
<Flex align="center" gap={8}>
@@ -56,13 +116,72 @@ const UploadModal: React.FC<UploadModalProps> = ({ visible, files, onClose, onSt
open={visible}
title={t('Upload File')}
width={600}
closable={!isUploading}
maskClosable={!isUploading}
onCancel={onClose}
footer={[
<Button key="close" onClick={onClose} disabled={!allSuccess && files.some(f => f.status === 'uploading')}>
{allSuccess ? t('Close') : t('Done')}
<Button key="close" onClick={onClose} disabled={!allFinished || isUploading}>
{allFinished ? t('Close') : t('Done')}
</Button>,
]}
>
<Space direction="vertical" style={{ width: '100%' }} size={16}>
<div>
<Flex justify="space-between" align="center">
<Typography.Text strong>
{t('Total progress')}:
</Typography.Text>
<Typography.Text type="secondary">
{t('Upload bytes summary', {
uploaded: formatBytes(uploadedFileBytes),
total: formatBytes(totalFileBytes),
})}
</Typography.Text>
</Flex>
<Progress percent={Math.round(totalProgress)} showInfo />
<Typography.Text type="secondary">
{t('Upload task summary', {
completed: summary.completed,
total: summary.total,
pending: summary.pending,
failures: summary.failures,
})}
</Typography.Text>
</div>

{conflict && (
<div
style={{
border: '1px solid var(--ant-color-warning-border, #faad14)',
borderRadius: 8,
padding: '12px 16px',
background: 'var(--ant-color-warning-bg, rgba(250,173,20,0.1))',
}}
>
<Typography.Text strong>
{t('Overwrite confirmation required')}
</Typography.Text>
<Typography.Paragraph type="secondary" style={{ marginBottom: 12 }}>
{t('Target already exists: {path}', { path: conflict.relativePath })}
</Typography.Paragraph>
<Flex gap={8} wrap="wrap">
<Button size="small" type="primary" onClick={() => onResolveConflict('overwrite')}>
{t('Overwrite')}
</Button>
<Button size="small" onClick={() => onResolveConflict('skip')}>
{t('Skip')}
</Button>
<Button size="small" type="primary" onClick={() => onResolveConflict('overwriteAll')}>
{t('Overwrite All')}
</Button>
<Button size="small" onClick={() => onResolveConflict('skipAll')}>
{t('Skip All')}
</Button>
</Flex>
</div>
)}
</Space>

<List
dataSource={files}
itemLayout="horizontal"
@@ -77,9 +196,16 @@ const UploadModal: React.FC<UploadModalProps> = ({ visible, files, onClose, onSt
onMouseLeave={(e) => { e.currentTarget.style.backgroundColor = 'transparent'; }}
>
<Flex justify="space-between" align="center" style={{ width: '100%' }}>
<Typography.Text ellipsis={{ tooltip: file.file.name }} style={{ maxWidth: '60%' }}>
{file.file.name}
</Typography.Text>
<Flex align="center" gap={8} style={{ maxWidth: '60%', overflow: 'hidden' }}>
<Typography.Text ellipsis={{ tooltip: file.relativePath }} style={{ maxWidth: '100%' }}>
{file.relativePath}
</Typography.Text>
{file.type === 'directory' ? (
<Tag color="blue">{t('Directory')}</Tag>
) : (
<Tag color="geekblue">{formatBytes(file.size)}</Tag>
)}
</Flex>
<div style={{ minWidth: 180, textAlign: 'right', flexShrink: 0 }}>
{renderStatus(file)}
</div>

@@ -13,7 +13,7 @@ export function useThumbnails(entries: VfsEntry[], path: string) {

useEffect(() => {
const newThumbs: Record<string, string> = {};
const targets = entries.filter(e => !e.is_dir && (e as any).is_image && !thumbs[e.name]);
const targets = entries.filter(e => !e.is_dir && (e as any).has_thumbnail && !thumbs[e.name]);

if (targets.length > 0) {
targets.forEach(ent => {
@@ -37,4 +37,4 @@ export function useThumbnails(entries: VfsEntry[], path: string) {
}, [entries, path, thumbs]);

return { thumbs };
}
}

@@ -1,103 +1,592 @@
import { useState, useCallback, useRef } from 'react';
import type { ChangeEvent, RefObject } from 'react';
import { useState, useCallback, useRef, useMemo, useEffect } from 'react';
import { message } from 'antd';
import { vfsApi } from '../../../api/client';
import { message } from 'antd';
import { useI18n } from '../../../i18n';

type UploadStatus = 'pending' | 'waiting' | 'uploading' | 'success' | 'error' | 'skipped';

export interface UploadFile {
id: string;
file: File;
status: 'pending' | 'uploading' | 'success' | 'error';
name: string;
relativePath: string;
targetPath: string;
type: 'file' | 'directory';
size: number;
loadedBytes: number;
status: UploadStatus;
progress: number;
error?: string;
permanentLink?: string;
file?: File;
}

export type ConflictDecision = 'overwrite' | 'skip' | 'overwriteAll' | 'skipAll';

export interface UploadConflict {
taskId: string;
relativePath: string;
targetPath: string;
type: 'file' | 'directory';
}

interface RawUploadFile {
kind: 'file';
relativePath: string;
file: File;
}

interface RawUploadDirectory {
kind: 'directory';
relativePath: string;
}

type RawUploadItem = RawUploadFile | RawUploadDirectory;

const generateId = (() => {
const cryptoApi = typeof crypto !== 'undefined' ? crypto : undefined;
return () => {
if (cryptoApi?.randomUUID) return cryptoApi.randomUUID();
return `upload-${Date.now()}-${Math.random().toString(16).slice(2, 10)}`;
};
})();

const normalizeRelativePath = (path: string) => path.replace(/\\/g, '/').replace(/^\/+/, '').replace(/\/+$/, '');

const joinWithBasePath = (base: string, relative: string) => {
const cleanedBase = base === '/' ? '' : base.replace(/\/+$/, '');
const cleanedRelative = normalizeRelativePath(relative);
const parts = [cleanedBase, cleanedRelative].filter(Boolean);
const joined = parts.join('/');
return joined.startsWith('/') ? joined : `/${joined}`;
};

const collectParentDirectories = (relativePath: string) => {
const normalized = normalizeRelativePath(relativePath);
if (!normalized) return [];
const segments = normalized.split('/').slice(0, -1);
const dirs: string[] = [];
for (let i = 1; i <= segments.length; i += 1) {
const dir = segments.slice(0, i).join('/');
if (dir) dirs.push(dir);
}
return dirs;
};

const collectAllDirectories = (items: RawUploadItem[]) => {
const directories = new Set<string>();
items.forEach((item) => {
if (item.kind === 'directory') {
const normalized = normalizeRelativePath(item.relativePath);
if (normalized) directories.add(normalized);
} else {
collectParentDirectories(item.relativePath).forEach((dir) => directories.add(dir));
}
});
return Array.from(directories).sort((a, b) => a.localeCompare(b));
};

interface WebkitFileSystemFileEntry {
isFile: true;
isDirectory: false;
name: string;
fullPath: string;
file: (
successCallback: (file: File) => void,
errorCallback?: (err: DOMException) => void,
) => void;
}

interface WebkitFileSystemDirectoryReader {
readEntries: (
successCallback: (entries: WebkitFileSystemEntry[]) => void,
errorCallback?: (err: DOMException) => void,
) => void;
}

interface WebkitFileSystemDirectoryEntry {
isFile: false;
isDirectory: true;
name: string;
fullPath: string;
createReader: () => WebkitFileSystemDirectoryReader;
}

type WebkitFileSystemEntry = WebkitFileSystemFileEntry | WebkitFileSystemDirectoryEntry;

const safeStat = async (fullPath: string): Promise<{ is_dir?: boolean } | null> => {
try {
return await vfsApi.stat(fullPath) as { is_dir?: boolean };
} catch {
return null;
}
};

const readAllDirectoryEntries = (directoryEntry: WebkitFileSystemDirectoryEntry): Promise<WebkitFileSystemEntry[]> =>
new Promise((resolve, reject) => {
const reader = directoryEntry.createReader();
const entries: WebkitFileSystemEntry[] = [];
const readBatch = () => {
reader.readEntries(
(batch: WebkitFileSystemEntry[]) => {
if (batch.length === 0) {
resolve(entries);
} else {
entries.push(...batch);
readBatch();
}
},
(err: DOMException) => reject(err),
);
};
readBatch();
});

const traverseEntry = async (
entry: WebkitFileSystemEntry,
parentPath: string,
bucket: RawUploadItem[],
) => {
if (!entry) return;
const currentPath = parentPath ? `${parentPath}/${entry.name}` : entry.name;
if (entry.isFile) {
const file: File = await new Promise((resolve, reject) => {
entry.file(
(f: File) => resolve(f),
(err: DOMException) => reject(err),
);
});
bucket.push({
kind: 'file',
relativePath: currentPath,
file,
});
} else if (entry.isDirectory) {
bucket.push({
kind: 'directory',
relativePath: currentPath,
});
const entries = await readAllDirectoryEntries(entry);
for (const child of entries) {
await traverseEntry(child, currentPath, bucket);
}
}
};

const collectFromFileList = async (list: FileList): Promise<RawUploadItem[]> => {
const items: RawUploadItem[] = [];
for (const file of Array.from(list)) {
const fileWithPath = file as File & { webkitRelativePath?: string };
const relativePath = fileWithPath.webkitRelativePath || file.name;
items.push({
kind: 'file',
relativePath,
file,
});
}
return items;
};

const collectFromDataTransfer = async (dataTransfer: DataTransfer): Promise<RawUploadItem[]> => {
const items: RawUploadItem[] = [];
if (dataTransfer.items && dataTransfer.items.length > 0) {
for (const item of Array.from(dataTransfer.items)) {
const itemWithEntry = item as DataTransferItem & {
webkitGetAsEntry?: () => FileSystemEntry | null;
};
const entry = itemWithEntry.webkitGetAsEntry ? (itemWithEntry.webkitGetAsEntry() as unknown as WebkitFileSystemEntry) : null;
if (entry) {
await traverseEntry(entry, '', items);
} else if (item.kind === 'file') {
const file = item.getAsFile();
if (file) {
items.push({
kind: 'file',
relativePath: file.name,
file,
});
}
}
}
} else if (dataTransfer.files && dataTransfer.files.length > 0) {
return collectFromFileList(dataTransfer.files);
}
return items;
};

const createUploadTasks = (basePath: string, items: RawUploadItem[]): UploadFile[] => {
const idGenerator = generateId;
const directories = collectAllDirectories(items);
const directoryTasks: UploadFile[] = directories.map((relativePath) => {
const targetPath = joinWithBasePath(basePath, relativePath);
const segments = normalizeRelativePath(relativePath).split('/');
const name = segments[segments.length - 1] || targetPath;
return {
id: idGenerator(),
name,
relativePath,
targetPath,
type: 'directory',
size: 0,
loadedBytes: 0,
status: 'pending',
progress: 0,
};
});

const fileTasks: UploadFile[] = items
.filter((item): item is RawUploadFile => item.kind === 'file')
.map((item) => {
const relativePath = normalizeRelativePath(item.relativePath) || item.file.name;
const targetPath = joinWithBasePath(basePath, relativePath);
return {
id: idGenerator(),
name: item.file.name,
relativePath,
targetPath,
type: 'file',
size: item.file.size,
loadedBytes: 0,
status: 'pending',
progress: 0,
file: item.file,
};
});

return [...directoryTasks, ...fileTasks];
};

export function useUploader(path: string, onUploadComplete: () => void) {
const { t } = useI18n();
const [files, setFiles] = useState<UploadFile[]>([]);
const [isModalVisible, setIsModalVisible] = useState(false);
const fileInputRef = useRef<HTMLInputElement | null>(null);
const [isUploading, setIsUploading] = useState(false);
const [conflict, setConflict] = useState<UploadConflict | null>(null);
const conflictResolverRef = useRef<((decision: ConflictDecision) => void) | null>(null);
const overwriteAllRef = useRef(false);
const skipAllRef = useRef(false);
const createdDirsRef = useRef<Set<string>>(new Set());
const filesRef = useRef<UploadFile[]>(files);
const isUploadingRef = useRef(false);

const openModal = useCallback(() => {
const fileInputRef = useRef<HTMLInputElement | null>(null);
const directoryInputRef = useRef<HTMLInputElement | null>(null);

useEffect(() => {
const node = directoryInputRef.current;
if (!node) return;
node.setAttribute('webkitdirectory', '');
node.setAttribute('directory', '');
}, []);

const mutateFiles = useCallback((updater: (prev: UploadFile[]) => UploadFile[]) => {
setFiles((prev) => {
const next = updater(prev);
filesRef.current = next;
return next;
});
}, []);

const replaceFiles = useCallback((next: UploadFile[]) => {
filesRef.current = next;
setFiles(next);
}, []);

const updateFile = useCallback((id: string, patch: Partial<UploadFile>) => {
mutateFiles((prev) => prev.map((f) => (f.id === id ? { ...f, ...patch } : f)));
}, [mutateFiles]);

const resetOverwriteDecisions = useCallback(() => {
overwriteAllRef.current = false;
skipAllRef.current = false;
}, []);

const openFilePicker = useCallback(() => {
if (fileInputRef.current) {
fileInputRef.current.click();
}
}, []);

const closeModal = useCallback(() => {
setIsModalVisible(false);
setFiles([]);
const openDirectoryPicker = useCallback(() => {
if (directoryInputRef.current) {
directoryInputRef.current.click();
}
}, []);

const handleFileChange = (event: React.ChangeEvent<HTMLInputElement>) => {
const closeModal = useCallback(() => {
if (isUploadingRef.current) {
return;
}
setIsModalVisible(false);
replaceFiles([]);
resetOverwriteDecisions();
setConflict(null);
conflictResolverRef.current = null;
createdDirsRef.current = new Set();
}, [replaceFiles, resetOverwriteDecisions]);

const prepareQueue = useCallback((items: RawUploadItem[]) => {
if (!items.length) {
message.info(t('No items selected for upload'));
return;
}
const tasks = createUploadTasks(path, items);
if (!tasks.length) {
message.info(t('No uploadable files or directories found'));
return;
}
replaceFiles(tasks);
resetOverwriteDecisions();
createdDirsRef.current = new Set();
setIsModalVisible(true);
}, [path, replaceFiles, resetOverwriteDecisions, t]);

const handleInputChange = useCallback(async (event: ChangeEvent<HTMLInputElement>, ref: RefObject<HTMLInputElement | null>) => {
const selectedFiles = event.target.files;
if (selectedFiles && selectedFiles.length > 0) {
const newFiles: UploadFile[] = Array.from(selectedFiles).map(file => ({
id: `${file.name}-${Date.now()}`,
file,
status: 'pending',
progress: 0,
}));
setFiles(newFiles);
setIsModalVisible(true);
if (fileInputRef.current) {
fileInputRef.current.value = '';
if (!selectedFiles || selectedFiles.length === 0) {
return;
}
const items = await collectFromFileList(selectedFiles);
prepareQueue(items);
if (ref.current) {
ref.current.value = '';
}
}, [prepareQueue]);

const handleFileInputChange = useCallback(async (event: ChangeEvent<HTMLInputElement>) => {
await handleInputChange(event, fileInputRef);
}, [handleInputChange]);

const handleDirectoryInputChange = useCallback(async (event: ChangeEvent<HTMLInputElement>) => {
await handleInputChange(event, directoryInputRef);
}, [handleInputChange]);

const handleFileDrop = useCallback(async (data: DataTransfer) => {
const items = await collectFromDataTransfer(data);
prepareQueue(items);
}, [prepareQueue]);

const awaitConflictDecision = useCallback(async (task: UploadFile): Promise<'overwrite' | 'skip'> => {
if (overwriteAllRef.current) {
return 'overwrite';
}
if (skipAllRef.current) {
return 'skip';
}
return new Promise<'overwrite' | 'skip'>((resolve) => {
updateFile(task.id, { status: 'waiting' });
setConflict({
taskId: task.id,
relativePath: task.relativePath,
targetPath: task.targetPath,
type: task.type,
});
conflictResolverRef.current = (decision: ConflictDecision) => {
if (decision === 'overwriteAll') {
overwriteAllRef.current = true;
resolve('overwrite');
} else if (decision === 'skipAll') {
skipAllRef.current = true;
resolve('skip');
} else if (decision === 'overwrite') {
resolve('overwrite');
} else {
resolve('skip');
}
};
});
}, [updateFile]);

const confirmConflict = useCallback((decision: ConflictDecision) => {
if (!conflictResolverRef.current) {
return;
}
const resolver = conflictResolverRef.current;
conflictResolverRef.current = null;
setConflict(null);
resolver(decision);
}, []);

const ensureDirectory = useCallback(async (fullPath: string) => {
const normalized = fullPath.replace(/\/+/g, '/');
if (!normalized || normalized === '/') {
return;
}
if (createdDirsRef.current.has(normalized)) {
return;
}
try {
await vfsApi.mkdir(normalized);
} catch (err: unknown) {
const messageText = err instanceof Error ? err.message : String(err);
if (!/exist/i.test(messageText)) {
throw err;
}
} finally {
createdDirsRef.current.add(normalized);
}
};
}, []);

const handleFileDrop = (droppedFiles: FileList) => {
if (droppedFiles && droppedFiles.length > 0) {
const newFiles: UploadFile[] = Array.from(droppedFiles).map(file => ({
id: `${file.name}-${Date.now()}`,
file,
status: 'pending',
progress: 0,
}));
setFiles(newFiles);
setIsModalVisible(true);
const ensureDirectoryTree = useCallback(async (targetDir: string) => {
if (!targetDir || targetDir === '/') return;
const normalized = targetDir.replace(/\/+/g, '/');
const segments = normalized.replace(/^\/+/, '').split('/').filter(Boolean);
let current = '';
for (const segment of segments) {
current = `${current}/${segment}`;
await ensureDirectory(current.startsWith('/') ? current : `/${current}`);
}
};
}, [ensureDirectory]);

const startUpload = useCallback(async () => {
if (files.length === 0) {
const processDirectoryTask = useCallback(async (task: UploadFile) => {
updateFile(task.id, { status: 'uploading', progress: 10 });
const stat = await safeStat(task.targetPath);
if (stat && !stat.is_dir) {
const error = t('Directory conflicts with existing file');
updateFile(task.id, { status: 'error', progress: 0, error });
message.error(`${task.relativePath}: ${error}`);
return;
}
try {
await ensureDirectory(task.targetPath);
updateFile(task.id, { status: 'success', progress: 100 });
} catch (err: unknown) {
const error = err instanceof Error ? err.message : t('Create directory failed');
updateFile(task.id, { status: 'error', progress: 0, error });
message.error(`${task.relativePath}: ${error}`);
}
}, [ensureDirectory, updateFile, t]);

const processFileTask = useCallback(async (task: UploadFile) => {
if (!task.file) {
updateFile(task.id, { status: 'error', error: t('Missing file content') });
return;
}

const dir = path === '/' ? '' : path;
if (skipAllRef.current) {
updateFile(task.id, { status: 'skipped', progress: 0 });
return;
}

for (const uploadFile of files) {
if (uploadFile.status !== 'pending') continue;

setFiles(prev => prev.map(f => f.id === uploadFile.id ? { ...f, status: 'uploading' } : f));

const dest = (dir + '/' + uploadFile.file.name).replace(/\/+/g, '/');

try {
await vfsApi.uploadStream(dest, uploadFile.file, true, (loaded, total) => {
const progress = total > 0 ? (loaded / total) * 100 : 0;
setFiles(prev => prev.map(f => f.id === uploadFile.id ? { ...f, progress } : f));
});

const link = await vfsApi.getTempLinkToken(dest, 60 * 60 * 24 * 365 * 10);
const permanentLink = vfsApi.getTempPublicUrl(link.token);

setFiles(prev => prev.map(f => f.id === uploadFile.id ? { ...f, status: 'success', progress: 100, permanentLink } : f));
} catch (e: any) {
setFiles(prev => prev.map(f => f.id === uploadFile.id ? { ...f, status: 'error', error: e.message } : f));
message.error(`Upload failed: ${uploadFile.file.name} - ${e.message}`);
let shouldOverwrite = overwriteAllRef.current;
if (!shouldOverwrite) {
const stat = await safeStat(task.targetPath);
if (stat) {
const decision = await awaitConflictDecision(task);
if (decision === 'skip') {
updateFile(task.id, { status: 'skipped', progress: 0 });
return;
}
shouldOverwrite = true;
}
}

onUploadComplete();
}, [files, path, onUploadComplete]);

setConflict(null);
updateFile(task.id, { status: 'uploading', progress: 0, loadedBytes: 0 });

const parentDir = task.targetPath.replace(/\/[^/]+$/, '') || '/';
try {
await ensureDirectoryTree(parentDir);
await vfsApi.uploadStream(task.targetPath, task.file, shouldOverwrite, (loaded, total) => {
mutateFiles((prev) => prev.map((f) => {
if (f.id !== task.id) return f;
const effectiveTotal = total > 0 ? total : f.size;
const size = Math.max(f.size, effectiveTotal, loaded);
const percent = size > 0 ? Math.min(100, Math.round((loaded / size) * 100)) : 0;
return {
...f,
size,
loadedBytes: loaded,
progress: percent,
};
}));
});

const link = await vfsApi.getTempLinkToken(task.targetPath, 60 * 60 * 24 * 365 * 10);
const permanentLink = vfsApi.getTempPublicUrl(link.token);
updateFile(task.id, { status: 'success', progress: 100, loadedBytes: task.size, permanentLink });
} catch (err: unknown) {
const error = err instanceof Error ? err.message : t('Upload failed');
updateFile(task.id, { status: 'error', error, progress: 0 });
message.error(`${task.relativePath}: ${error}`);
}
}, [ensureDirectoryTree, awaitConflictDecision, mutateFiles, updateFile, t]);

const startUpload = useCallback(async () => {
if (isUploadingRef.current) return;
if (!filesRef.current.length) return;

isUploadingRef.current = true;
setIsUploading(true);
try {
for (const task of filesRef.current) {
if (task.status !== 'pending' && task.status !== 'waiting') {
continue;
}
if (task.type === 'directory') {
await processDirectoryTask(task);
} else {
await processFileTask(task);
}
}
onUploadComplete();
} finally {
isUploadingRef.current = false;
setIsUploading(false);
}
}, [onUploadComplete, processDirectoryTask, processFileTask]);

const totalFileBytes = useMemo(
() => files.reduce((acc, f) => acc + (f.type === 'file' ? f.size : 0), 0),
[files],
);

const uploadedFileBytes = useMemo(
() => files.reduce((acc, f) => {
if (f.type !== 'file') return acc;
const loaded = Math.min(f.loadedBytes, f.size);
if (f.status === 'success') {
return acc + (f.size || loaded);
}
if (f.status === 'uploading' || f.status === 'waiting') {
return acc + loaded;
}
return acc;
}, 0),
[files],
);

const directoryCounts = useMemo(() => {
const directories = files.filter((f) => f.type === 'directory');
const completed = directories.filter((f) => f.status === 'success').length;
return {
total: directories.length,
completed,
};
}, [files]);

const totalWeight = totalFileBytes + directoryCounts.total;
const totalProgress = totalWeight === 0
? 0
: ((uploadedFileBytes + directoryCounts.completed) / totalWeight) * 100;

return {
files,
isModalVisible,
isUploading,
totalProgress: Math.min(100, Math.max(0, totalProgress)),
totalFileBytes,
uploadedFileBytes,
conflict,
confirmConflict,
resetOverwriteDecisions,
fileInputRef,
openModal,
directoryInputRef,
openFilePicker,
openDirectoryPicker,
closeModal,
handleFileChange,
handleFileInputChange,
handleDirectoryInputChange,
handleFileDrop,
startUpload,
};

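In the hook above, awaitConflictDecision pauses the sequential upload loop by parking a Promise resolver in a ref until the modal answers via confirmConflict. The core of that handshake, reduced to a framework-free sketch (createGate and Decision are illustrative names, not the hook's own API):

type Decision = 'overwrite' | 'skip';

// One pending question at a time: wait() suspends the caller,
// answer() resumes it with the user's decision.
function createGate() {
  let resolver: ((d: Decision) => void) | null = null;
  return {
    wait(): Promise<Decision> {
      return new Promise<Decision>((resolve) => { resolver = resolve; });
    },
    answer(d: Decision) {
      const r = resolver;
      resolver = null;
      r?.(d);
    },
  };
}

// Upload loop: const decision = await gate.wait(); if (decision === 'skip') continue;
// Modal button handler: gate.answer('overwrite');
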
@@ -6,6 +6,7 @@ import { useSystemStatus } from '../contexts/SystemContext';
import { useNavigate } from 'react-router';
import { useI18n } from '../i18n';
import LanguageSwitcher from '../components/LanguageSwitcher';
import WeChatModal from '../components/WeChatModal';

const { Title, Text } = Typography;

@@ -16,6 +17,7 @@ export default function LoginPage() {
const [password, setPassword] = useState('');
const [err, setErr] = useState('');
const [loading, setLoading] = useState(false);
const [wechatModalOpen, setWechatModalOpen] = useState(false);
const navigate = useNavigate();
const { t } = useI18n();

@@ -167,11 +169,12 @@ export default function LoginPage() {
<Text type="secondary">{t('Join our community:')}</Text>
<Button type="text" icon={<GithubOutlined />} href="https://github.com/DrizzleTime/Foxel" target="_blank">GitHub</Button>
<Button type="text" icon={<SendOutlined />} href="https://t.me/+thDsBfyqJxZkNTU1" target="_blank">Telegram</Button>
<Button type="text" icon={<WechatOutlined />}>微信</Button>
<Button type="text" icon={<WechatOutlined />} onClick={() => setWechatModalOpen(true)}>微信</Button>
</div>
</div>
</div>
</div>
<WeChatModal open={wechatModalOpen} onClose={() => setWechatModalOpen(false)} />
</div>
);
}

@@ -1,13 +1,16 @@
import { memo, useCallback, useEffect, useMemo, useState } from 'react';
import {
  Button,
  Alert,
  Card,
  Empty,
  Flex,
  Form,
  Input,
  InputNumber,
  message,
  Modal,
  Segmented,
  Space,
  Spin,
  Switch,
@@ -134,23 +137,36 @@ const ProcessorsPage = memo(function ProcessorsPage() {
      overwrite: !!selectedProcessorMeta.produces_file,
      save_to: undefined,
      config: defaults,
      directory_scope: 'current',
      max_depth: undefined,
      suffix: undefined,
    });
    setIsDirectory(false);
  }, [selectedProcessorMeta, form]);

  const overwriteValue = Form.useWatch('overwrite', form) ?? false;
  const producesFile = selectedProcessorMeta?.produces_file ?? false;
  const overwriteWatch = Form.useWatch('overwrite', form);
  const overwriteValue = producesFile ? !!overwriteWatch : false;
  const directoryScope = Form.useWatch('directory_scope', form) ?? 'current';

  useEffect(() => {
    if (overwriteValue) {
      form.setFieldsValue({ save_to: undefined });
      form.setFieldsValue({ save_to: undefined, suffix: undefined });
    }
  }, [overwriteValue, form]);

  useEffect(() => {
    if (isDirectory) {
      form.setFieldsValue({ overwrite: true, save_to: undefined });
      form.setFieldsValue({
        overwrite: producesFile ? true : false,
        save_to: undefined,
        directory_scope: 'current',
        max_depth: undefined,
      });
    } else {
      form.setFieldsValue({ suffix: undefined });
    }
  }, [isDirectory, form]);
  }, [isDirectory, form, producesFile]);

  const handleSelectProcessor = useCallback((type: string) => {
    if (type === selectedType) return;
@@ -232,17 +248,38 @@
      }
    });
    setRunning(true);
    const payload: any = {
      path: values.path,
      processor_type: selectedType,
      config: finalConfig,
      overwrite: !!values.overwrite,
    };
    if (values.save_to && !values.overwrite) {
      payload.save_to = values.save_to;
    const overwriteFlag = producesFile ? !!values.overwrite : false;
    if (isDirectory) {
      const scope: 'current' | 'recursive' = values.directory_scope || 'current';
      let maxDepth: number | null = scope === 'current' ? 0 : null;
      if (scope === 'recursive' && typeof values.max_depth === 'number') {
        maxDepth = values.max_depth;
      }
      const suffixValue = producesFile && !overwriteFlag && typeof values.suffix === 'string'
        ? values.suffix.trim() || null
        : null;
      const resp = await processorsApi.processDirectory({
        path: values.path,
        processor_type: selectedType,
        config: finalConfig,
        overwrite: overwriteFlag,
        max_depth: maxDepth,
        suffix: suffixValue,
      });
      messageApi.success(`${t('Task submitted')}: ${resp.scheduled}`);
    } else {
      const payload: any = {
        path: values.path,
        processor_type: selectedType,
        config: finalConfig,
        overwrite: overwriteFlag,
      };
      if (values.save_to && !overwriteFlag) {
        payload.save_to = values.save_to;
      }
      const resp = await processorsApi.process(payload);
      messageApi.success(`${t('Task submitted')}: ${resp.task_id}`);
    }
    const resp = await processorsApi.process(payload);
    messageApi.success(`${t('Task submitted')}: ${resp.task_id}`);
  } catch (err: any) {
    if (err?.errorFields) {
      return;
@@ -251,7 +288,7 @@ const ProcessorsPage = memo(function ProcessorsPage() {
  } finally {
    setRunning(false);
  }
}, [form, messageApi, selectedProcessorMeta, selectedType, t]);
}, [form, isDirectory, messageApi, producesFile, selectedProcessorMeta, selectedType, t]);

const selectedConfigPath = pathModalField === 'path'
  ? (selectedType ? form.getFieldValue('path') : undefined) || '/'
@@ -379,11 +416,6 @@ const ProcessorsPage = memo(function ProcessorsPage() {
  <Form form={form} layout="vertical" disabled={!selectedType} style={{ padding: '12px 0' }}>
    {selectedType ? (
      <>
        {isDirectory && (
          <Text type="secondary" style={{ display: 'block', marginBottom: 12 }}>
            {t('Directory processing always overwrites original files')}
          </Text>
        )}
        <Form.Item
          label={t('Target Path')}
          required
@@ -402,16 +434,71 @@
            <Button onClick={() => openPathSelector('path', 'directory')}>{t('Select Directory')}</Button>
          </Flex>
        </Form.Item>
        {isDirectory && (
          <Space direction="vertical" size={12} style={{ width: '100%', marginBottom: 12 }}>
            <Alert
              type="info"
              showIcon
              message={t('Directory execution will enqueue one task per file')}
            />
            <Form.Item name="directory_scope" label={t('Directory scope')} initialValue="current">
              <Segmented
                options={[
                  { label: t('Current level only'), value: 'current' },
                  { label: t('Include subdirectories'), value: 'recursive' },
                ]}
              />
            </Form.Item>
            {directoryScope === 'recursive' && (
              <Form.Item
                name="max_depth"
                label={t('Max depth')}
                extra={t('Leave empty to traverse all subdirectories')}
                rules={[{
                  validator: async (_: any, value: number | null) => {
                    if (value === undefined || value === null) return;
                    if (value < 0) throw new Error(t('Depth must be greater or equal to 0'));
                  },
                }]}
              >
                <InputNumber min={0} placeholder={t('Unlimited')} style={{ width: '100%' }} />
              </Form.Item>
            )}
          </Space>
        )}

        <Form.Item
          name="overwrite"
          label={t('Overwrite original')}
          valuePropName="checked"
        >
          <Switch disabled={isDirectory} />
        </Form.Item>
        {producesFile && (
          <Form.Item
            name="overwrite"
            label={t('Overwrite original')}
            valuePropName="checked"
          >
            <Switch />
          </Form.Item>
        )}

        {selectedProcessorMeta?.produces_file && !overwriteValue && (
        {isDirectory && producesFile && !overwriteValue && (
          <Form.Item
            name="suffix"
            label={t('Output suffix')}
            rules={[
              { required: true, message: t('Please input a suffix') },
              {
                validator: async (_: any, value: string) => {
                  if (typeof value !== 'string') return;
                  if (!value.trim()) {
                    throw new Error(t('Suffix cannot be empty'));
                  }
                },
              },
            ]}
            extra={t('Suffix will be inserted before the file extension, e.g. demo_processed.mp4')}
          >
            <Input placeholder={t('Suffix such as _processed')} />
          </Form.Item>
        )}

        {!isDirectory && producesFile && !overwriteValue && (
          <Form.Item label={t('Save To')}>
            <Flex gap={8} align="center">
              <div style={{ flex: 1 }}>
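In the directory branch above, max_depth encodes the traversal scope: 0 restricts processing to the selected directory itself, null traverses all subdirectories, and a positive number caps the recursion depth. A small sketch of that mapping (illustrative helper, not part of the commit):

function resolveMaxDepth(scope: 'current' | 'recursive', maxDepth?: number): number | null {
  if (scope === 'current') return 0;                      // current level only
  return typeof maxDepth === 'number' ? maxDepth : null;  // null = unlimited
}

// resolveMaxDepth('current')      -> 0
// resolveMaxDepth('recursive')    -> null
// resolveMaxDepth('recursive', 2) -> 2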
@@ -1,7 +1,8 @@
import { useState } from 'react';
import { useEffect, useState } from 'react';
import { Form, Input, Button, Card, message, Steps, Select, Space, Typography } from 'antd';
import { UserOutlined, LockOutlined, HddOutlined } from '@ant-design/icons';
import { adaptersApi } from '../api/adapters';
import { setConfig } from '../api/config';
import { useAuth } from '../contexts/AuthContext';
import { useI18n } from '../i18n';
import LanguageSwitcher from '../components/LanguageSwitcher';
@@ -15,6 +16,14 @@ const SetupPage = () => {
  const [form] = Form.useForm();
  const { login, register } = useAuth();
  const { t } = useI18n();

  useEffect(() => {
    const origin = window.location.origin;
    form.setFieldsValue({
      app_domain: origin,
      file_domain: origin,
    });
  }, [form]);
  const onFinish = async (values: any) => {
    setLoading(true);
    try {
@@ -22,17 +31,34 @@ const SetupPage = () => {
      await login(values.username, values.password);
      message.success(t('Initialization succeeded! Logging you in...'));
      setTimeout(async () => {
        await adaptersApi.create({
          name: values.adapter_name,
          type: values.adapter_type,
          config: {
            root: values.root_dir
          },
          sub_path: null,
          path: values.path,
          enabled: true
        });
        window.location.href = '/';
        try {
          const tasks: Promise<unknown>[] = [];
          const appDomain = values.app_domain?.trim();
          const fileDomain = values.file_domain?.trim();
          if (appDomain) {
            tasks.push(setConfig('APP_DOMAIN', appDomain));
          }
          if (fileDomain) {
            tasks.push(setConfig('FILE_DOMAIN', fileDomain));
          }
          if (tasks.length) {
            await Promise.all(tasks);
          }
          await adaptersApi.create({
            name: values.adapter_name,
            type: values.adapter_type,
            config: {
              root: values.root_dir
            },
            sub_path: null,
            path: values.path,
            enabled: true
          });
          window.location.href = '/';
        } catch (configError: any) {
          console.error(configError);
          message.error(configError.response?.data?.msg || t('Initialization failed, please try later'));
        }
      }, 2000);
    } catch (error: any) {
      console.log(error)
@@ -122,6 +148,20 @@ const SetupPage = () => {
            >
              <Input size="large" placeholder={t('e.g., data/ or /var/foxel/data')} />
            </Form.Item>
            <Form.Item
              label={t('App Domain')}
              name="app_domain"
              extra={t('Optional, used for external links. Leave empty to use the current site.')}
            >
              <Input size="large" placeholder="https://your-app-domain.com" />
            </Form.Item>
            <Form.Item
              label={t('File Domain')}
              name="file_domain"
              extra={t('Optional, used for external links. Leave empty to use the current site.')}
            >
              <Input size="large" placeholder="https://files.your-domain.com" />
            </Form.Item>
          </>
        )
      },
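The setup flow above persists APP_DOMAIN and FILE_DOMAIN concurrently before creating the storage adapter. The same batching pattern, reduced to a standalone sketch (illustrative, not part of the commit; setConfig is injected here so the snippet stays self-contained, whereas the real page imports it from '../api/config'):

async function persistDomains(
  setConfig: (key: string, value: string) => Promise<unknown>,
  values: { app_domain?: string; file_domain?: string },
): Promise<void> {
  const tasks: Promise<unknown>[] = [];
  const app = values.app_domain?.trim();
  const file = values.file_domain?.trim();
  if (app) tasks.push(setConfig('APP_DOMAIN', app));
  if (file) tasks.push(setConfig('FILE_DOMAIN', file));
  if (tasks.length) await Promise.all(tasks); // both keys saved in parallel, one await
}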
@@ -1,51 +1,35 @@
import { Form, Input, Button, message, Tabs, Space, Card, Select, Modal, Radio, InputNumber, Spin, Empty, Alert } from 'antd';
import { useEffect, useState, useCallback } from 'react';
import { message, Tabs, Space } from 'antd';
import { useEffect, useState } from 'react';
import PageCard from '../../components/PageCard';
import { getAllConfig, setConfig } from '../../api/config';
import { vectorDBApi, type VectorDBStats, type VectorDBProviderMeta, type VectorDBCurrentConfig } from '../../api/vectorDB';
import { AppstoreOutlined, RobotOutlined, DatabaseOutlined, SkinOutlined } from '@ant-design/icons';
import { useTheme } from '../../contexts/ThemeContext';
import '../../styles/settings-tabs.css';
import { useI18n } from '../../i18n';
import AppearanceSettingsTab from './components/AppearanceSettingsTab';
import AppSettingsTab from './components/AppSettingsTab';
import AiSettingsTab from './components/AiSettingsTab';
import VectorDbSettingsTab from './components/VectorDbSettingsTab';

const APP_CONFIG_KEYS: {key: string, label: string, default?: string}[] = [
type TabKey = 'appearance' | 'app' | 'ai' | 'vector-db';

const TAB_KEYS: TabKey[] = ['appearance', 'app', 'ai', 'vector-db'];
const DEFAULT_TAB: TabKey = 'appearance';

const isValidTab = (key?: string): key is TabKey => !!key && (TAB_KEYS as string[]).includes(key);

interface SystemSettingsPageProps {
  tabKey?: string;
  onTabNavigate?: (key: TabKey, options?: { replace?: boolean }) => void;
}

const APP_CONFIG_KEYS: { key: string, label: string, default?: string }[] = [
  { key: 'APP_NAME', label: 'App Name' },
  { key: 'APP_LOGO', label: 'Logo URL' },
  { key: 'APP_DOMAIN', label: 'App Domain' },
  { key: 'FILE_DOMAIN', label: 'File Domain' },
];

const VISION_CONFIG_KEYS = [
  { key: 'AI_VISION_API_URL', label: 'Vision API URL' },
  { key: 'AI_VISION_MODEL', label: 'Vision Model', default: 'Qwen/Qwen2.5-VL-32B-Instruct' },
  { key: 'AI_VISION_API_KEY', label: 'Vision API Key' },
];

const DEFAULT_EMBED_DIMENSION = 4096;
const EMBED_DIM_KEY = 'AI_EMBED_DIM';

const EMBED_CONFIG_KEYS = [
  { key: 'AI_EMBED_API_URL', label: 'Embedding API URL' },
  { key: 'AI_EMBED_MODEL', label: 'Embedding Model', default: 'Qwen/Qwen3-Embedding-8B' },
  { key: 'AI_EMBED_API_KEY', label: 'Embedding API Key' },
];

const ALL_AI_KEYS = [...VISION_CONFIG_KEYS, ...EMBED_CONFIG_KEYS, { key: EMBED_DIM_KEY, default: DEFAULT_EMBED_DIMENSION }];

const formatBytes = (bytes?: number | null) => {
  if (bytes === null || bytes === undefined) return '-';
  if (bytes === 0) return '0 B';
  const units = ['B', 'KB', 'MB', 'GB', 'TB'];
  let value = bytes;
  let unitIndex = 0;
  while (value >= 1024 && unitIndex < units.length - 1) {
    value /= 1024;
    unitIndex += 1;
  }
  const precision = value >= 10 || unitIndex === 0 ? 0 : 1;
  return `${value.toFixed(precision)} ${units[unitIndex]}`;
};

// Theme related config keys
const THEME_KEYS = {
  MODE: 'THEME_MODE',
@@ -55,101 +39,30 @@ const THEME_KEYS = {
  CSS: 'THEME_CUSTOM_CSS',
};

export default function SystemSettingsPage() {
  const [vectorConfigForm] = Form.useForm();
export default function SystemSettingsPage({ tabKey, onTabNavigate }: SystemSettingsPageProps) {
  const [loading, setLoading] = useState(false);
  const [config, setConfigState] = useState<Record<string, string> | null>(null);
  const [activeTab, setActiveTab] = useState('appearance');
  const [vectorStats, setVectorStats] = useState<VectorDBStats | null>(null);
  const [vectorStatsLoading, setVectorStatsLoading] = useState(false);
  const [vectorStatsError, setVectorStatsError] = useState<string | null>(null);
  const [vectorProviders, setVectorProviders] = useState<VectorDBProviderMeta[]>([]);
  const [vectorConfig, setVectorConfig] = useState<VectorDBCurrentConfig | null>(null);
  const [vectorConfigLoading, setVectorConfigLoading] = useState(false);
  const [vectorConfigSaving, setVectorConfigSaving] = useState(false);
  const [vectorMetaError, setVectorMetaError] = useState<string | null>(null);
  const [selectedProviderType, setSelectedProviderType] = useState<string | null>(null);
  const { refreshTheme, previewTheme } = useTheme();
  const [activeTab, setActiveTab] = useState<TabKey>(() =>
    isValidTab(tabKey) ? tabKey : DEFAULT_TAB
  );
  const { refreshTheme } = useTheme();
  const { t } = useI18n();

  useEffect(() => {
    getAllConfig().then((data) => setConfigState(data as Record<string, string>));
  }, []);

  const fetchVectorStats = useCallback(async () => {
    setVectorStatsLoading(true);
    setVectorStatsError(null);
    try {
      const data = await vectorDBApi.getStats();
      setVectorStats(data);
    } catch (e: any) {
      const msg = e?.message || t('Load failed');
      setVectorStatsError(msg);
      message.error(msg);
    } finally {
      setVectorStatsLoading(false);
    }
  }, [t]);

  const buildProviderConfigValues = useCallback((provider: VectorDBProviderMeta | undefined, existing?: Record<string, string>) => {
    if (!provider) return {};
    const values: Record<string, string> = {};
    const schema = provider.config_schema || [];
    schema.forEach((field) => {
      const current = existing && existing[field.key] !== undefined && existing[field.key] !== null
        ? String(existing[field.key])
        : undefined;
      if (current !== undefined) {
        values[field.key] = current;
      } else if (field.default !== undefined && field.default !== null) {
        values[field.key] = String(field.default);
      } else {
        values[field.key] = '';
      }
    });
    return values;
  }, []);

  const fetchVectorMeta = useCallback(async () => {
    setVectorConfigLoading(true);
    setVectorMetaError(null);
    try {
      const [providers, current] = await Promise.all([
        vectorDBApi.getProviders(),
        vectorDBApi.getConfig(),
      ]);
      setVectorProviders(providers);
      setVectorConfig(current);

      const enabled = providers.filter((item) => item.enabled);
      let nextType: string | null = current?.type ?? null;
      if (nextType && !providers.some((item) => item.type === nextType)) {
        nextType = null;
      }
      if (!nextType) {
        nextType = enabled[0]?.type ?? providers[0]?.type ?? null;
      }
      setSelectedProviderType(nextType);
      const provider = providers.find((item) => item.type === nextType);
      const configValues = buildProviderConfigValues(provider, nextType === current?.type ? current?.config : undefined);
      vectorConfigForm.setFieldsValue({ type: nextType || undefined, config: configValues });
    } catch (e: any) {
      const msg = e?.message || t('Load failed');
      setVectorMetaError(msg);
      message.error(msg);
    } finally {
      setVectorConfigLoading(false);
    }
  }, [buildProviderConfigValues, message, t, vectorConfigForm]);

  const handleSave = async (values: any) => {
  const handleSave = async (values: Record<string, unknown>) => {
    setLoading(true);
    try {
      for (const [key, value] of Object.entries(values)) {
        await setConfig(key, String(value ?? ''));
      }
      message.success(t('Saved successfully'));
      setConfigState({ ...config, ...values });
      const stringValues = Object.fromEntries(
        Object.entries(values).map(([key, value]) => [key, String(value ?? '')]),
      ) as Record<string, string>;
      setConfigState((prev) => ({ ...(prev ?? {}), ...stringValues }));
      // trigger theme refresh if related keys changed
      if (Object.keys(values).some(k => Object.values(THEME_KEYS).includes(k))) {
        await refreshTheme();
@@ -160,67 +73,31 @@ export default function SystemSettingsPage() {
    setLoading(false);
  };

  const handleProviderChange = useCallback((value: string) => {
    setSelectedProviderType(value);
    const provider = vectorProviders.find((item) => item.type === value);
    const existing = value === vectorConfig?.type ? vectorConfig?.config : undefined;
    const configValues = buildProviderConfigValues(provider, existing);
    vectorConfigForm.setFieldsValue({ type: value, config: configValues });
  }, [vectorProviders, vectorConfig, buildProviderConfigValues, vectorConfigForm]);

  const handleVectorConfigSave = useCallback(async (values: { type: string; config?: Record<string, string> }) => {
    if (!values?.type) {
  // When leaving "Appearance Settings", restore the backend-persisted config (discard any unsaved preview)
  useEffect(() => {
    if (!isValidTab(tabKey)) {
      setActiveTab((prev) => (prev === DEFAULT_TAB ? prev : DEFAULT_TAB));
      if (tabKey !== DEFAULT_TAB) {
        onTabNavigate?.(DEFAULT_TAB, { replace: true });
      }
      return;
    }
    setVectorConfigSaving(true);
    try {
      const configPayload = Object.fromEntries(
        Object.entries(values.config || {}).filter(([, val]) => val !== undefined && val !== null && String(val).trim() !== '')
          .map(([key, val]) => [key, String(val)])
      );
      const response = await vectorDBApi.updateConfig({ type: values.type, config: configPayload });
      setVectorConfig(response.config);
      setVectorStats(response.stats);
      setVectorStatsError(null);
      setSelectedProviderType(response.config.type);
      const provider = vectorProviders.find((item) => item.type === response.config.type);
      const mergedValues = buildProviderConfigValues(provider, response.config.config);
      vectorConfigForm.setFieldsValue({ type: response.config.type, config: mergedValues });
      message.success(t('Saved successfully'));
    } catch (e: any) {
      message.error(e?.message || t('Save failed'));
    } finally {
      setVectorConfigSaving(false);
    }
  }, [buildProviderConfigValues, message, t, vectorConfigForm, vectorProviders]);
    setActiveTab((prev) => (prev === tabKey ? prev : tabKey));
  }, [tabKey, onTabNavigate]);

  // When leaving "Appearance Settings", restore the backend-persisted config (discard any unsaved preview)
  useEffect(() => {
    if (activeTab !== 'appearance') {
      refreshTheme();
    }
  }, [activeTab]);
  }, [activeTab, refreshTheme]);

  useEffect(() => {
    if (activeTab === 'vector-db') {
      if (!vectorProviders.length && !vectorConfigLoading) {
        fetchVectorMeta();
      }
      if (!vectorStats && !vectorStatsLoading) {
        fetchVectorStats();
      }
  const handleTabChange = (key: string) => {
    const nextKey: TabKey = isValidTab(key) ? key : DEFAULT_TAB;
    if (nextKey !== activeTab) {
      setActiveTab(nextKey);
    }
  }, [
    activeTab,
    fetchVectorMeta,
    fetchVectorStats,
    vectorProviders.length,
    vectorConfigLoading,
    vectorStats,
    vectorStatsLoading,
  ]);

  const selectedProvider = vectorProviders.find((item) => item.type === selectedProviderType || (!selectedProviderType && item.enabled));
    onTabNavigate?.(nextKey);
  };

  if (!config) {
    return <PageCard title={t('System Settings')}><div>{t('Loading...')}</div></PageCard>;
@@ -234,7 +111,7 @@ export default function SystemSettingsPage() {
      <Tabs
        className="fx-settings-tabs"
        activeKey={activeTab}
        onChange={setActiveTab}
        onChange={handleTabChange}
        centered
        tabPosition="left"
        items={[
@@ -247,75 +124,12 @@
              </span>
            ),
            children: (
              <Form
                layout="vertical"
                initialValues={{
                  [THEME_KEYS.MODE]: config[THEME_KEYS.MODE] ?? 'light',
                  [THEME_KEYS.PRIMARY]: config[THEME_KEYS.PRIMARY] ?? '#111111',
                  [THEME_KEYS.RADIUS]: Number(config[THEME_KEYS.RADIUS] ?? '10'),
                  [THEME_KEYS.TOKENS]: config[THEME_KEYS.TOKENS] ?? '',
                  [THEME_KEYS.CSS]: config[THEME_KEYS.CSS] ?? '',
                }}
                onValuesChange={(_, all) => {
                  try {
                    const tokens = all[THEME_KEYS.TOKENS] ? JSON.parse(all[THEME_KEYS.TOKENS]) : undefined;
                    previewTheme({
                      mode: all[THEME_KEYS.MODE],
                      primaryColor: all[THEME_KEYS.PRIMARY],
                      borderRadius: typeof all[THEME_KEYS.RADIUS] === 'number' ? all[THEME_KEYS.RADIUS] : undefined,
                      customTokens: tokens,
                      customCSS: all[THEME_KEYS.CSS],
                    });
                  } catch {
                    // Ignore the tokens preview when the JSON is invalid; the other fields still take effect
                    previewTheme({
                      mode: all[THEME_KEYS.MODE],
                      primaryColor: all[THEME_KEYS.PRIMARY],
                      borderRadius: typeof all[THEME_KEYS.RADIUS] === 'number' ? all[THEME_KEYS.RADIUS] : undefined,
                      customCSS: all[THEME_KEYS.CSS],
                    });
                  }
                }}
                onFinish={async (vals) => {
                  // Validate JSON if provided
                  if (vals[THEME_KEYS.TOKENS]) {
                    try { JSON.parse(vals[THEME_KEYS.TOKENS]); }
                    catch { return message.error(t('Advanced tokens must be valid JSON')); }
                  }
                  await handleSave(vals);
                }}
                style={{ marginTop: 24 }}
                key={'appearance-' + JSON.stringify(config)}
              >
                <Card title={t('Theme')}>
                  <Form.Item name={THEME_KEYS.MODE} label={t('Theme Mode')}>
                    <Radio.Group buttonStyle="solid">
                      <Radio.Button value="light">{t('Light')}</Radio.Button>
                      <Radio.Button value="dark">{t('Dark')}</Radio.Button>
                      <Radio.Button value="system">{t('Follow System')}</Radio.Button>
                    </Radio.Group>
                  </Form.Item>
                  <Form.Item name={THEME_KEYS.PRIMARY} label={t('Primary Color')}>
                    <Input type="color" size="large" />
                  </Form.Item>
                  <Form.Item name={THEME_KEYS.RADIUS} label={t('Border Radius')}>
                    <InputNumber min={0} max={24} style={{ width: '100%' }} />
                  </Form.Item>
                </Card>
                <Card title={t('Advanced')} style={{ marginTop: 24 }}>
                  <Form.Item name={THEME_KEYS.TOKENS} label={t('Override AntD Tokens (JSON)')} tooltip={t('e.g. {"colorText": "#222"}') }>
                    <Input.TextArea autoSize={{ minRows: 4 }} placeholder='{ "colorText": "#222" }' />
                  </Form.Item>
                  <Form.Item name={THEME_KEYS.CSS} label={t('Custom CSS')}>
                    <Input.TextArea autoSize={{ minRows: 6 }} placeholder={":root{ }\n/* CSS */"} />
                  </Form.Item>
                </Card>
                <Form.Item style={{ marginTop: 24 }}>
                  <Button type="primary" htmlType="submit" loading={loading} block>
                    {t('Save')}
                  </Button>
                </Form.Item>
              </Form>
              <AppearanceSettingsTab
                config={config}
                loading={loading}
                onSave={handleSave}
                themeKeys={THEME_KEYS}
              />
            )
          },
          {
@@ -327,26 +141,12 @@
              </span>
            ),
            children: (
              <Form
                layout="vertical"
                initialValues={{
                  ...Object.fromEntries(APP_CONFIG_KEYS.map(({ key, default: def }) => [key, config[key] ?? def ?? ''])),
                }}
                onFinish={handleSave}
                style={{ marginTop: 24 }}
                key={JSON.stringify(config)}
              >
                {APP_CONFIG_KEYS.map(({ key, label }) => (
                  <Form.Item key={key} name={key} label={t(label)}>
                    <Input size="large" />
                  </Form.Item>
                ))}
                <Form.Item>
                  <Button type="primary" htmlType="submit" loading={loading} block>
                    {t('Save')}
                  </Button>
                </Form.Item>
              </Form>
              <AppSettingsTab
                config={config}
                loading={loading}
                onSave={handleSave}
                configKeys={APP_CONFIG_KEYS}
              />
            ),
          },
          {
@@ -358,56 +158,8 @@
              </span>
            ),
            children: (
              <Form
                layout="vertical"
                initialValues={{
                  ...Object.fromEntries(ALL_AI_KEYS.map(({ key, default: def }) => [key, key === EMBED_DIM_KEY
                    ? Number(config[key] ?? def ?? DEFAULT_EMBED_DIMENSION)
                    : config[key] ?? def ?? ''])),
                }}
                onFinish={async (vals) => {
                  const currentDim = Number(config[EMBED_DIM_KEY] ?? DEFAULT_EMBED_DIMENSION);
                  const nextDim = Number(vals[EMBED_DIM_KEY] ?? DEFAULT_EMBED_DIMENSION);
                  if (currentDim !== nextDim) {
                    Modal.confirm({
                      title: t('Confirm embedding dimension change'),
                      content: t('Changing the embedding dimension will clear the vector database automatically. You will need to rebuild indexes afterwards. Continue?'),
                      okText: t('Confirm'),
                      cancelText: t('Cancel'),
                      onOk: async () => {
                        await handleSave(vals);
                      },
                    });
                    return;
                  }
                  await handleSave(vals);
                }}
                style={{ marginTop: 24 }}
                key={JSON.stringify(config)}
              >
                <Card title={t('Vision Model')} style={{ marginBottom: 24 }}>
                  {VISION_CONFIG_KEYS.map(({ key, label }) => (
                    <Form.Item key={key} name={key} label={t(label)}>
                      <Input size="large" />
                    </Form.Item>
                  ))}
                </Card>
                <Card title={t('Embedding Model')}>
                  {EMBED_CONFIG_KEYS.map(({ key, label }) => (
                    <Form.Item key={key} name={key} label={t(label)}>
                      <Input size="large" />
                    </Form.Item>
                  ))}
                  <Form.Item name={EMBED_DIM_KEY} label={t('Embedding Dimension')}>
                    <InputNumber min={1} max={32768} style={{ width: '100%' }} />
                  </Form.Item>
                </Card>
                <Form.Item style={{ marginTop: 24 }}>
                  <Button type="primary" htmlType="submit" loading={loading} block>
                    {t('Save')}
                  </Button>
                </Form.Item>
              </Form>
              <AiSettingsTab
              />
            ),
          },
          {
@@ -419,189 +171,7 @@
              </span>
            ),
            children: (
              <Card title={t('Vector Database Settings')} style={{ marginTop: 24 }}>
                <Space direction="vertical" size={24} style={{ width: '100%' }}>
                  <Space direction="vertical" size={16} style={{ width: '100%' }}>
                    <div style={{ display: 'flex', justifyContent: 'space-between', alignItems: 'center', flexWrap: 'wrap', gap: 12 }}>
                      <strong>{t('Current Statistics')}</strong>
                      <Button onClick={() => { fetchVectorMeta(); fetchVectorStats(); }} loading={vectorStatsLoading || vectorConfigLoading} disabled={(vectorStatsLoading || vectorConfigLoading) && !vectorStats}>
                        {t('Refresh')}
                      </Button>
                    </div>
                    {vectorMetaError ? (
                      <Alert type="error" showIcon message={vectorMetaError} />
                    ) : null}
                    {vectorStatsLoading && !vectorStats ? (
                      <Spin />
                    ) : vectorStats ? (
                      <Space direction="vertical" size={16} style={{ width: '100%' }}>
                        <div style={{ display: 'flex', flexWrap: 'wrap', gap: 24 }}>
                          <div>
                            <div style={{ color: '#888' }}>{t('Collections')}</div>
                            <div style={{ fontSize: 20, fontWeight: 600 }}>{vectorStats.collection_count}</div>
                          </div>
                          <div>
                            <div style={{ color: '#888' }}>{t('Vectors')}</div>
                            <div style={{ fontSize: 20, fontWeight: 600 }}>{vectorStats.total_vectors}</div>
                          </div>
                          <div>
                            <div style={{ color: '#888' }}>{t('Database Size')}</div>
                            <div style={{ fontSize: 20, fontWeight: 600 }}>{formatBytes(vectorStats.db_file_size_bytes)}</div>
                          </div>
                          <div>
                            <div style={{ color: '#888' }}>{t('Estimated Memory')}</div>
                            <div style={{ fontSize: 20, fontWeight: 600 }}>{formatBytes(vectorStats.estimated_total_memory_bytes)}</div>
                          </div>
                        </div>
                        {vectorStats.collections.length ? (
                          <Space direction="vertical" style={{ width: '100%' }} size={16}>
                            {vectorStats.collections.map((collection) => (
                              <div key={collection.name} style={{ border: '1px solid #f0f0f0', borderRadius: 8, padding: 16 }}>
                                <Space direction="vertical" size={12} style={{ width: '100%' }}>
                                  <div style={{ display: 'flex', justifyContent: 'space-between', alignItems: 'center', flexWrap: 'wrap', gap: 12 }}>
                                    <strong>{collection.name}</strong>
                                    <span style={{ color: '#888' }}>
                                      {collection.is_vector_collection && collection.dimension
                                        ? `${t('Dimension')}: ${collection.dimension}`
                                        : t('Non-vector collection')}
                                    </span>
                                  </div>
                                  <div>{t('Vectors')}: {collection.row_count}</div>
                                  {collection.is_vector_collection ? (
                                    <div>{t('Estimated memory')}: {formatBytes(collection.estimated_memory_bytes)}</div>
                                  ) : null}
                                  {collection.indexes.length ? (
                                    <Space direction="vertical" size={4} style={{ width: '100%' }}>
                                      <span>{t('Indexes')}:</span>
                                      <ul style={{ paddingLeft: 20, margin: 0 }}>
                                        {collection.indexes.map((index) => (
                                          <li key={`${collection.name}-${index.index_name || 'default'}`}>
                                            <span>{index.index_name || t('Unnamed index')}</span>
                                            <span>{' · '}{index.index_type || '-'}</span>
                                            <span>{' · '}{index.metric_type || '-'}</span>
                                            <span>{' · '}{t('Indexed rows')}: {index.indexed_rows}</span>
                                            <span>{' · '}{t('Pending rows')}: {index.pending_index_rows}</span>
                                            <span>{' · '}{t('Status')}: {index.state || '-'}</span>
                                          </li>
                                        ))}
                                      </ul>
                                    </Space>
                                  ) : null}
                                </Space>
                              </div>
                            ))}
                          </Space>
                        ) : (
                          <Empty description={t('No collections')} />
                        )}
                        <div style={{ color: '#888' }}>
                          {t('Estimated memory is calculated as vectors x dimension x 4 bytes (float32).')}
                        </div>
                      </Space>
                    ) : vectorStatsError ? (
                      <div style={{ color: '#ff4d4f' }}>{vectorStatsError}</div>
                    ) : (
                      <Empty description={t('No collections')} />
                    )}
                  </Space>
                  {vectorConfigLoading && !vectorProviders.length ? (
                    <Spin />
                  ) : (
                    <Form
                      layout="vertical"
                      form={vectorConfigForm}
                      onFinish={handleVectorConfigSave}
                      initialValues={{ type: selectedProviderType || undefined, config: {} }}
                    >
                      <Form.Item
                        name="type"
                        label={t('Database Provider')}
                        rules={[{ required: true, message: t('Please select a provider') }]}
                      >
                        <Select
                          size="large"
                          options={vectorProviders.map((provider) => ({
                            value: provider.type,
                            label: provider.enabled ? provider.label : `${provider.label} (${t('Coming soon')})`,
                            disabled: !provider.enabled,
                          }))}
                          onChange={handleProviderChange}
                          loading={vectorConfigLoading && !vectorProviders.length}
                        />
                      </Form.Item>
                      {selectedProvider?.description ? (
                        <Alert
                          type="info"
                          showIcon
                          message={t(selectedProvider.description)}
                          style={{ marginBottom: 16 }}
                        />
                      ) : null}
                      {selectedProvider?.config_schema?.map((field) => (
                        <Form.Item
                          key={field.key}
                          name={['config', field.key]}
                          label={t(field.label)}
                          rules={field.required ? [{ required: true, message: t('Please input {label}', { label: t(field.label) }) }] : []}
                        >
                          {field.type === 'password' ? (
                            <Input.Password size="large" placeholder={field.placeholder ? t(field.placeholder) : undefined} />
                          ) : (
                            <Input size="large" placeholder={field.placeholder ? t(field.placeholder) : undefined} />
                          )}
                        </Form.Item>
                      ))}
                      {selectedProvider && !selectedProvider.enabled ? (
                        <Alert
                          type="warning"
                          showIcon
                          message={t('This provider is not available yet')}
                          style={{ marginBottom: 16 }}
                        />
                      ) : null}
                      <Form.Item>
                        <Space direction="vertical" style={{ width: '100%' }}>
                          <Button
                            type="primary"
                            htmlType="submit"
                            loading={vectorConfigSaving}
                            block
                            disabled={!selectedProvider?.enabled}
                          >
                            {t('Save')}
                          </Button>
                          <Button
                            danger
                            htmlType="button"
                            block
                            onClick={() => {
                              Modal.confirm({
                                title: t('Confirm clear vector database?'),
                                content: t('This will delete all collections irreversibly.'),
                                okText: t('Confirm Clear'),
                                okType: 'danger',
                                cancelText: t('Cancel'),
                                onOk: async () => {
                                  try {
                                    await vectorDBApi.clearAll();
                                    message.success(t('Vector database cleared'));
                                    await fetchVectorStats();
                                    await fetchVectorMeta();
                                  } catch (e: any) {
                                    message.error(e.message || t('Clear failed'));
                                  }
                                },
                              });
                            }}
                          >
                            {t('Clear Vector DB')}
                          </Button>
                        </Space>
                      </Form.Item>
                    </Form>
                  )}
                </Space>
              </Card>
              <VectorDbSettingsTab isActive={activeTab === 'vector-db'} />
            ),
          },
        ]}
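The tab routing above leans on a string-literal union plus a type-guard, so an arbitrary URL segment is narrowed to a known TabKey before any state update. The pattern in isolation (same shape as the source, trimmed for illustration; resolveTab is a hypothetical helper):

type TabKey = 'appearance' | 'app' | 'ai' | 'vector-db';
const TAB_KEYS: TabKey[] = ['appearance', 'app', 'ai', 'vector-db'];

const isValidTab = (key?: string): key is TabKey =>
  !!key && (TAB_KEYS as string[]).includes(key);

function resolveTab(raw?: string): TabKey {
  return isValidTab(raw) ? raw : 'appearance'; // raw is narrowed to TabKey here
}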
web/src/pages/SystemSettingsPage/components/AiSettingsTab.tsx (new file, 1136 lines)
File diff suppressed because it is too large
@@ -0,0 +1,47 @@
import { Form, Input, Button } from 'antd';
import { useI18n } from '../../../i18n';

interface AppConfigKey {
  key: string;
  label: string;
  default?: string;
}

interface AppSettingsTabProps {
  config: Record<string, string>;
  loading: boolean;
  onSave: (values: Record<string, unknown>) => Promise<void>;
  configKeys: AppConfigKey[];
}

export default function AppSettingsTab({
  config,
  loading,
  onSave,
  configKeys,
}: AppSettingsTabProps) {
  const { t } = useI18n();

  return (
    <Form
      layout="vertical"
      initialValues={{
        ...Object.fromEntries(configKeys.map(({ key, default: def }) => [key, config[key] ?? def ?? ''])),
      }}
      onFinish={onSave}
      style={{ marginTop: 24 }}
      key={JSON.stringify(config)}
    >
      {configKeys.map(({ key, label }) => (
        <Form.Item key={key} name={key} label={t(label)}>
          <Input size="large" />
        </Form.Item>
      ))}
      <Form.Item>
        <Button type="primary" htmlType="submit" loading={loading} block>
          {t('Save')}
        </Button>
      </Form.Item>
    </Form>
  );
}
@@ -0,0 +1,102 @@
import { Form, Input, Button, InputNumber, Card, Radio, message } from 'antd';
import { useTheme } from '../../../contexts/ThemeContext';
import { useI18n } from '../../../i18n';

interface ThemeKeyMap {
  MODE: string;
  PRIMARY: string;
  RADIUS: string;
  TOKENS: string;
  CSS: string;
}

interface AppearanceSettingsTabProps {
  config: Record<string, string>;
  loading: boolean;
  onSave: (values: Record<string, unknown>) => Promise<void>;
  themeKeys: ThemeKeyMap;
}

export default function AppearanceSettingsTab({
  config,
  loading,
  onSave,
  themeKeys,
}: AppearanceSettingsTabProps) {
  const { previewTheme } = useTheme();
  const { t } = useI18n();

  return (
    <Form
      layout="vertical"
      initialValues={{
        [themeKeys.MODE]: config[themeKeys.MODE] ?? 'light',
        [themeKeys.PRIMARY]: config[themeKeys.PRIMARY] ?? '#111111',
        [themeKeys.RADIUS]: Number(config[themeKeys.RADIUS] ?? '10'),
        [themeKeys.TOKENS]: config[themeKeys.TOKENS] ?? '',
        [themeKeys.CSS]: config[themeKeys.CSS] ?? '',
      }}
      onValuesChange={(_, all) => {
        try {
          const tokens = all[themeKeys.TOKENS] ? JSON.parse(all[themeKeys.TOKENS]) : undefined;
          previewTheme({
            mode: all[themeKeys.MODE],
            primaryColor: all[themeKeys.PRIMARY],
            borderRadius: typeof all[themeKeys.RADIUS] === 'number' ? all[themeKeys.RADIUS] : undefined,
            customTokens: tokens,
            customCSS: all[themeKeys.CSS],
          });
        } catch {
          previewTheme({
            mode: all[themeKeys.MODE],
            primaryColor: all[themeKeys.PRIMARY],
            borderRadius: typeof all[themeKeys.RADIUS] === 'number' ? all[themeKeys.RADIUS] : undefined,
            customCSS: all[themeKeys.CSS],
          });
        }
      }}
      onFinish={async (vals) => {
        if (vals[themeKeys.TOKENS]) {
          try {
            JSON.parse(String(vals[themeKeys.TOKENS]));
          } catch {
            message.error(t('Advanced tokens must be valid JSON'));
            return;
          }
        }
        await onSave(vals);
      }}
      style={{ marginTop: 24 }}
      key={'appearance-' + JSON.stringify(config)}
    >
      <Card title={t('Theme')}>
        <Form.Item name={themeKeys.MODE} label={t('Theme Mode')}>
          <Radio.Group buttonStyle="solid">
            <Radio.Button value="light">{t('Light')}</Radio.Button>
            <Radio.Button value="dark">{t('Dark')}</Radio.Button>
            <Radio.Button value="system">{t('Follow System')}</Radio.Button>
          </Radio.Group>
        </Form.Item>
        <Form.Item name={themeKeys.PRIMARY} label={t('Primary Color')}>
          <Input type="color" size="large" />
        </Form.Item>
        <Form.Item name={themeKeys.RADIUS} label={t('Border Radius')}>
          <InputNumber min={0} max={24} style={{ width: '100%' }} />
        </Form.Item>
      </Card>
      <Card title={t('Advanced')} style={{ marginTop: 24 }}>
        <Form.Item name={themeKeys.TOKENS} label={t('Override AntD Tokens (JSON)')} tooltip={t('e.g. {"colorText": "#222"}')}>
          <Input.TextArea autoSize={{ minRows: 4 }} placeholder='{ "colorText": "#222" }' />
        </Form.Item>
        <Form.Item name={themeKeys.CSS} label={t('Custom CSS')}>
          <Input.TextArea autoSize={{ minRows: 6 }} placeholder={":root{ }\n/* CSS */"} />
        </Form.Item>
      </Card>
      <Form.Item style={{ marginTop: 24 }}>
        <Button type="primary" htmlType="submit" loading={loading} block>
          {t('Save')}
        </Button>
      </Form.Item>
    </Form>
  );
}
@@ -0,0 +1,359 @@
import { useCallback, useEffect, useState } from 'react';
import { Form, Button, Card, Space, Spin, Empty, Alert, Select, Input, Modal, message } from 'antd';
import { vectorDBApi, type VectorDBStats, type VectorDBProviderMeta, type VectorDBCurrentConfig } from '../../../api/vectorDB';
import { useI18n } from '../../../i18n';

interface VectorDbSettingsTabProps {
  isActive: boolean;
}

const formatBytes = (bytes?: number | null) => {
  if (bytes === null || bytes === undefined) return '-';
  if (bytes === 0) return '0 B';
  const units = ['B', 'KB', 'MB', 'GB', 'TB'];
  let value = bytes;
  let unitIndex = 0;
  while (value >= 1024 && unitIndex < units.length - 1) {
    value /= 1024;
    unitIndex += 1;
  }
  const precision = value >= 10 || unitIndex === 0 ? 0 : 1;
  return `${value.toFixed(precision)} ${units[unitIndex]}`;
};

const buildProviderConfigValues = (
  provider: VectorDBProviderMeta | undefined,
  existing?: Record<string, string>,
) => {
  if (!provider) return {};
  const values: Record<string, string> = {};
  const schema = provider.config_schema || [];
  schema.forEach((field) => {
    const current = existing && existing[field.key] !== undefined && existing[field.key] !== null
      ? String(existing[field.key])
      : undefined;
    if (current !== undefined) {
      values[field.key] = current;
    } else if (field.default !== undefined && field.default !== null) {
      values[field.key] = String(field.default);
    } else {
      values[field.key] = '';
    }
  });
  return values;
};

export default function VectorDbSettingsTab({ isActive }: VectorDbSettingsTabProps) {
  const [form] = Form.useForm();
  const { t } = useI18n();
  const [vectorStats, setVectorStats] = useState<VectorDBStats | null>(null);
  const [vectorStatsLoading, setVectorStatsLoading] = useState(false);
  const [vectorStatsError, setVectorStatsError] = useState<string | null>(null);
  const [vectorProviders, setVectorProviders] = useState<VectorDBProviderMeta[]>([]);
  const [vectorConfig, setVectorConfig] = useState<VectorDBCurrentConfig | null>(null);
  const [vectorConfigLoading, setVectorConfigLoading] = useState(false);
  const [vectorConfigSaving, setVectorConfigSaving] = useState(false);
  const [vectorMetaError, setVectorMetaError] = useState<string | null>(null);
  const [selectedProviderType, setSelectedProviderType] = useState<string | null>(null);

  const fetchVectorStats = useCallback(async () => {
    setVectorStatsLoading(true);
    setVectorStatsError(null);
    try {
      const data = await vectorDBApi.getStats();
      setVectorStats(data);
    } catch (e: any) {
      const msg = e?.message || t('Load failed');
      setVectorStatsError(msg);
      message.error(msg);
    } finally {
      setVectorStatsLoading(false);
    }
  }, [t]);

  const fetchVectorMeta = useCallback(async () => {
    setVectorConfigLoading(true);
    setVectorMetaError(null);
    try {
      const [providers, current] = await Promise.all([
        vectorDBApi.getProviders(),
        vectorDBApi.getConfig(),
      ]);
      setVectorProviders(providers);
      setVectorConfig(current);

      const enabled = providers.filter((item) => item.enabled);
      let nextType: string | null = current?.type ?? null;
      if (nextType && !providers.some((item) => item.type === nextType)) {
        nextType = null;
      }
      if (!nextType) {
        nextType = enabled[0]?.type ?? providers[0]?.type ?? null;
      }
      setSelectedProviderType(nextType);

      const provider = providers.find((item) => item.type === nextType);
      const configValues = buildProviderConfigValues(
        provider,
        nextType === current?.type ? current?.config : undefined,
      );
      form.setFieldsValue({ type: nextType || undefined, config: configValues });
    } catch (e: any) {
      const msg = e?.message || t('Load failed');
      setVectorMetaError(msg);
      message.error(msg);
    } finally {
      setVectorConfigLoading(false);
    }
  }, [form, t]);

  const handleProviderChange = useCallback((value: string) => {
    setSelectedProviderType(value);
    const provider = vectorProviders.find((item) => item.type === value);
    const existing = value === vectorConfig?.type ? vectorConfig?.config : undefined;
    const configValues = buildProviderConfigValues(provider, existing);
    form.setFieldsValue({ type: value, config: configValues });
  }, [form, vectorConfig, vectorProviders]);

  const handleVectorConfigSave = useCallback(async (values: { type: string; config?: Record<string, string> }) => {
    if (!values?.type) {
      return;
    }
    setVectorConfigSaving(true);
    try {
      const configPayload = Object.fromEntries(
        Object.entries(values.config || {})
          .filter(([, val]) => val !== undefined && val !== null && String(val).trim() !== '')
          .map(([key, val]) => [key, String(val)]),
      );
      const response = await vectorDBApi.updateConfig({ type: values.type, config: configPayload });
      setVectorConfig(response.config);
      setVectorStats(response.stats);
      setVectorStatsError(null);
      setSelectedProviderType(response.config.type);
      const provider = vectorProviders.find((item) => item.type === response.config.type);
      const mergedValues = buildProviderConfigValues(provider, response.config.config);
      form.setFieldsValue({ type: response.config.type, config: mergedValues });
      message.success(t('Saved successfully'));
    } catch (e: any) {
      message.error(e?.message || t('Save failed'));
    } finally {
      setVectorConfigSaving(false);
    }
  }, [form, t, vectorProviders]);

  const handleClearVectorDb = useCallback(() => {
    Modal.confirm({
      title: t('Confirm clear vector database?'),
      content: t('This will delete all collections irreversibly.'),
      okText: t('Confirm Clear'),
      okType: 'danger',
      cancelText: t('Cancel'),
      onOk: async () => {
        try {
          await vectorDBApi.clearAll();
          message.success(t('Vector database cleared'));
          await fetchVectorStats();
          await fetchVectorMeta();
        } catch (e: any) {
          message.error(e?.message || t('Clear failed'));
        }
      },
    });
  }, [fetchVectorMeta, fetchVectorStats, t]);

  useEffect(() => {
    if (!isActive) {
      return;
    }
    if (!vectorProviders.length && !vectorConfigLoading) {
      fetchVectorMeta();
    }
    if (!vectorStats && !vectorStatsLoading) {
      fetchVectorStats();
    }
  }, [
    isActive,
    fetchVectorMeta,
    fetchVectorStats,
    vectorProviders.length,
    vectorConfigLoading,
    vectorStats,
    vectorStatsLoading,
  ]);

  const vectorSectionLoading = vectorStatsLoading || vectorConfigLoading;
  const selectedProvider = vectorProviders.find(
    (item) => item.type === selectedProviderType || (!selectedProviderType && item.enabled),
  );

  return (
    <Card title={t('Vector Database Settings')} style={{ marginTop: 24 }}>
      <Space direction="vertical" size={24} style={{ width: '100%' }}>
        <Space direction="vertical" size={16} style={{ width: '100%' }}>
          <div style={{ display: 'flex', justifyContent: 'space-between', alignItems: 'center', flexWrap: 'wrap', gap: 12 }}>
            <strong>{t('Current Statistics')}</strong>
            <Button onClick={() => { fetchVectorMeta(); fetchVectorStats(); }} loading={vectorStatsLoading || vectorConfigLoading} disabled={(vectorStatsLoading || vectorConfigLoading) && !vectorStats}>
              {t('Refresh')}
            </Button>
          </div>
          {vectorSectionLoading ? (
            <div style={{ display: 'flex', justifyContent: 'center', padding: '24px 0' }}>
              <Spin />
            </div>
          ) : (
            <>
              {vectorMetaError ? (
                <Alert type="error" showIcon message={vectorMetaError} />
              ) : null}
              {vectorStats ? (
                <Space direction="vertical" size={16} style={{ width: '100%' }}>
                  <div style={{ display: 'flex', flexWrap: 'wrap', gap: 24 }}>
                    <div>
                      <div style={{ color: '#888' }}>{t('Collections')}</div>
                      <div style={{ fontSize: 20, fontWeight: 600 }}>{vectorStats.collection_count}</div>
                    </div>
                    <div>
                      <div style={{ color: '#888' }}>{t('Vectors')}</div>
                      <div style={{ fontSize: 20, fontWeight: 600 }}>{vectorStats.total_vectors}</div>
                    </div>
                    <div>
                      <div style={{ color: '#888' }}>{t('Database Size')}</div>
                      <div style={{ fontSize: 20, fontWeight: 600 }}>{formatBytes(vectorStats.db_file_size_bytes)}</div>
                    </div>
                    <div>
                      <div style={{ color: '#888' }}>{t('Estimated Memory')}</div>
                      <div style={{ fontSize: 20, fontWeight: 600 }}>{formatBytes(vectorStats.estimated_total_memory_bytes)}</div>
                    </div>
                  </div>
                  {vectorStats.collections.length ? (
                    <Space direction="vertical" style={{ width: '100%' }} size={16}>
                      {vectorStats.collections.map((collection) => (
                        <div key={collection.name} style={{ border: '1px solid #f0f0f0', borderRadius: 8, padding: 16 }}>
                          <Space direction="vertical" size={12} style={{ width: '100%' }}>
                            <div style={{ display: 'flex', justifyContent: 'space-between', alignItems: 'center', flexWrap: 'wrap', gap: 12 }}>
                              <strong>{collection.name}</strong>
                              <span style={{ color: '#888' }}>
                                {collection.is_vector_collection && collection.dimension
                                  ? `${t('Dimension')}: ${collection.dimension}`
                                  : t('Non-vector collection')}
                              </span>
                            </div>
                            <div>{t('Vectors')}: {collection.row_count}</div>
                            {collection.is_vector_collection ? (
                              <div>{t('Estimated memory')}: {formatBytes(collection.estimated_memory_bytes)}</div>
                            ) : null}
                            {collection.indexes.length ? (
                              <Space direction="vertical" size={4} style={{ width: '100%' }}>
                                <span>{t('Indexes')}:</span>
                                <ul style={{ paddingLeft: 20, margin: 0 }}>
                                  {collection.indexes.map((index) => (
                                    <li key={`${collection.name}-${index.index_name || 'default'}`}>
                                      <span>{index.index_name || t('Unnamed index')}</span>
                                      <span>{' · '}{index.index_type || '-'}</span>
                                      <span>{' · '}{index.metric_type || '-'}</span>
                                      <span>{' · '}{t('Indexed rows')}: {index.indexed_rows}</span>
                                      <span>{' · '}{t('Pending rows')}: {index.pending_index_rows}</span>
                                      <span>{' · '}{t('Status')}: {index.state || '-'}</span>
                                    </li>
                                  ))}
                                </ul>
                              </Space>
                            ) : null}
                          </Space>
                        </div>
                      ))}
                    </Space>
                  ) : (
                    <Empty description={t('No collections')} />
                  )}
                  <div style={{ color: '#888' }}>
                    {t('Estimated memory is calculated as vectors x dimension x 4 bytes (float32).')}
                  </div>
                </Space>
              ) : vectorStatsError ? (
                <div style={{ color: '#ff4d4f' }}>{vectorStatsError}</div>
              ) : (
                <Empty description={t('No collections')} />
              )}
              <Form
                layout="vertical"
                form={form}
                onFinish={handleVectorConfigSave}
                initialValues={{ type: selectedProviderType || undefined, config: {} }}
              >
                <Form.Item
                  name="type"
                  label={t('Database Provider')}
                  rules={[{ required: true, message: t('Please select a provider') }]}
                >
                  <Select
                    size="large"
                    options={vectorProviders.map((provider) => ({
                      value: provider.type,
                      label: provider.enabled ? provider.label : `${provider.label} (${t('Coming soon')})`,
                      disabled: !provider.enabled,
                    }))}
                    onChange={handleProviderChange}
                    loading={vectorConfigLoading && !vectorProviders.length}
                  />
                </Form.Item>
                {selectedProvider?.description ? (
                  <Alert
                    type="info"
                    showIcon
                    message={t(selectedProvider.description)}
                    style={{ marginBottom: 16 }}
                  />
                ) : null}
                {selectedProvider?.config_schema?.map((field) => (
                  <Form.Item
                    key={field.key}
                    name={['config', field.key]}
                    label={t(field.label)}
                    rules={field.required ? [{ required: true, message: t('Please input {label}', { label: t(field.label) }) }] : []}
                  >
                    {field.type === 'password' ? (
                      <Input.Password size="large" placeholder={field.placeholder ? t(field.placeholder) : undefined} />
                    ) : (
                      <Input size="large" placeholder={field.placeholder ? t(field.placeholder) : undefined} />
                    )}
                  </Form.Item>
                ))}
                {selectedProvider && !selectedProvider.enabled ? (
                  <Alert
                    type="warning"
                    showIcon
                    message={t('This provider is not available yet')}
                    style={{ marginBottom: 16 }}
                  />
                ) : null}
                <Form.Item>
                  <Space direction="vertical" style={{ width: '100%' }}>
                    <Button
                      type="primary"
                      htmlType="submit"
                      loading={vectorConfigSaving}
                      block
                      disabled={!selectedProvider?.enabled}
                    >
                      {t('Save')}
                    </Button>
                    <Button
                      danger
                      htmlType="button"
                      block
                      onClick={handleClearVectorDb}
                    >
                      {t('Clear Vector DB')}
                    </Button>
                  </Space>
                </Form.Item>
              </Form>
            </>
          )}
        </Space>
      </Space>
    </Card>
  );
}
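A quick sanity check of the formatBytes helper defined above, with outputs worked by hand from its rounding rule (plain bytes, or values at or above 10, drop the decimal):

// formatBytes(0)         -> '0 B'
// formatBytes(1536)      -> '1.5 KB'  (1536 / 1024 = 1.5, one decimal below 10)
// formatBytes(10485760)  -> '10 MB'   (10 >= 10, so zero decimals)
// formatBytes(null)      -> '-'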
@@ -18,17 +18,26 @@ import { AppWindowsProvider, useAppWindows } from '../contexts/AppWindowsContext
 import { AppWindowsLayer } from '../apps/AppWindowsLayer';
 
 const ShellBody = memo(function ShellBody() {
-  const { navKey = 'files' } = useParams();
+  const params = useParams<{ navKey?: string; '*': string }>();
+  const navKey = params.navKey ?? 'files';
+  const subPath = params['*'] ?? '';
   const navigate = useNavigate();
   const [collapsed, setCollapsed] = useState(false);
   const { windows, closeWindow, toggleMax, bringToFront, updateWindow } = useAppWindows();
+  const settingsTab = navKey === 'settings' ? (subPath.split('/')[0] || undefined) : undefined;
   return (
     <Layout style={{ minHeight: '100vh', background: 'var(--ant-color-bg-layout)' }}>
       <SideNav
         collapsed={collapsed}
         onToggle={() => setCollapsed(c => !c)}
         activeKey={navKey}
-        onChange={(key) => navigate(`/${key}`)}
+        onChange={(key) => {
+          if (key === 'settings') {
+            navigate('/settings/appearance', { replace: true });
+          } else {
+            navigate(`/${key}`);
+          }
+        }}
       />
       <Layout style={{ background: 'var(--ant-color-bg-layout)' }}>
         <TopHeader collapsed={collapsed} onToggle={() => setCollapsed(c => !c)} />
@@ -43,7 +52,12 @@ const ShellBody = memo(function ShellBody() {
         {navKey === 'processors' && <ProcessorsPage />}
         {navKey === 'offline' && <OfflineDownloadPage />}
         {navKey === 'plugins' && <PluginsPage />}
-        {navKey === 'settings' && <SystemSettingsPage />}
+        {navKey === 'settings' && (
+          <SystemSettingsPage
+            tabKey={settingsTab}
+            onTabNavigate={(key, options) => navigate(`/settings/${key}`, options)}
+          />
+        )}
         {navKey === 'logs' && <LogsPage />}
         {navKey === 'backup' && <BackupPage />}
       </Flex>
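The deep-linkable settings tabs in this hunk rely on the router exposing a wildcard segment, which useParams returns as params['*']. The route table itself is outside this diff; a minimal react-router v6 sketch of what ShellBody assumes (the exact route paths here are an assumption):

// Sketch only: '/:navKey/*' yields params.navKey = 'settings' and
// params['*'] = 'appearance' for the URL '/settings/appearance'.
import { Routes, Route } from 'react-router-dom';

export function AppRoutes() {
  return (
    <Routes>
      <Route path="/:navKey/*" element={<ShellBody />} />
      <Route path="/" element={<ShellBody />} /> {/* falls back to navKey 'files' */}
    </Routes>
  );
}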
361
web/src/styles/ai-settings.css
Normal file
@@ -0,0 +1,361 @@
.fx-ai-top-bar {
  display: flex;
  justify-content: space-between;
  align-items: center;
  padding: 20px 28px;
  border-radius: 16px;
  background: linear-gradient(120deg, rgba(99, 102, 241, 0.16), rgba(167, 139, 250, 0.12));
  border: 1px solid rgba(99, 102, 241, 0.15);
}

.fx-ai-provider-card {
  border-radius: 16px;
  overflow: hidden;
  box-shadow: var(--ant-box-shadow-secondary);
}

.fx-ai-provider-header {
  display: flex;
  justify-content: flex-start;
  align-items: center;
  gap: 16px;
  height: 80px;
  width: 100%;
}

.fx-ai-provider-meta {
  display: flex;
  align-items: center;
  gap: 16px;
}

.fx-ai-provider-logo {
  width: 36px;
  height: 36px;
  border-radius: 12px;
  object-fit: cover;
  background: var(--ant-color-fill-alter);
  padding: 4px;
}

.fx-ai-provider-name {
  font-size: 16px;
  font-weight: 600;
}

.fx-ai-provider-sub {
  display: flex;
  align-items: center;
  gap: 12px;
  margin-top: 4px;
  color: var(--ant-color-text-tertiary);
}

.fx-ai-model-list {
  display: flex;
  flex-direction: column;
  gap: 8px;
}

.fx-ai-model-item {
  display: flex;
  justify-content: space-between;
  align-items: flex-start;
  gap: 12px;
  padding: 12px 14px;
  border-radius: 10px;
  background: var(--ant-color-fill-secondary);
  border: 1px solid var(--ant-color-border);
}

.fx-ai-model-info {
  display: flex;
  flex-direction: column;
  gap: 6px;
  flex: 1;
}

.fx-ai-model-header {
  display: flex;
  align-items: center;
  gap: 8px;
}

.fx-ai-model-title {
  margin: 0;
  font-size: 15px;
}

.fx-ai-model-tags .ant-tag {
  border-radius: 999px;
  padding: 0 8px;
  line-height: 20px;
}

.fx-ai-model-meta {
  display: flex;
  flex-direction: column;
  gap: 6px;
}

.fx-ai-model-desc {
  line-height: 1.4;
}

.fx-ai-model-metrics {
  color: var(--ant-color-text-quaternary);
}

.fx-ai-model-actions {
  align-self: center;
}

.fx-ai-model-actions .ant-btn {
  min-width: 32px;
}

.fx-ai-empty-card {
  border-radius: 16px;
  background: var(--ant-color-fill-tertiary);
}

.fx-ai-provider-actions {
  display: flex;
  align-items: center;
  justify-content: flex-end;
  gap: 8px;
  flex-wrap: wrap;
}

.fx-ai-defaults-card {
  border-radius: 16px;
  box-shadow: var(--ant-box-shadow-secondary);
}

.fx-ai-default-row {
  display: flex;
  align-items: center;
  justify-content: space-between;
  gap: 16px;
  padding: 12px 0;
  border-bottom: 1px solid var(--ant-color-border-secondary);
}

.fx-ai-default-row:last-child {
  border-bottom: none;
}

.fx-ai-default-meta {
  display: flex;
  gap: 16px;
  align-items: center;
}

.fx-ai-default-icon {
  width: 46px;
  height: 46px;
  border-radius: 16px;
  display: flex;
  align-items: center;
  justify-content: center;
  font-size: 22px;
  color: var(--ant-color-text-light-solid);
}

.fx-ai-default-desc {
  color: var(--ant-color-text-tertiary);
}

.fx-ai-provider-option {
  display: flex;
  align-items: center;
  gap: 8px;
}

.fx-ai-provider-option img {
  width: 20px;
  height: 20px;
  border-radius: 6px;
  object-fit: cover;
}

.fx-ai-model-option {
  display: flex;
  align-items: center;
  gap: 8px;
}

.fx-ai-model-name {
  flex: 1;
  min-width: 0;
  white-space: nowrap;
  overflow: hidden;
  text-overflow: ellipsis;
}

.fx-ai-model-provider-tag {
  padding: 0 8px;
  border-radius: 999px;
  background: var(--ant-color-fill-tertiary);
  color: var(--ant-color-text-tertiary);
  font-size: 12px;
  line-height: 20px;
  white-space: nowrap;
}

.fx-ai-add-provider-steps {
  padding: 0 8px;
}

.fx-ai-template-grid {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(240px, 1fr));
  gap: 16px;
}

.fx-ai-template-card {
  display: flex;
  align-items: center;
  justify-content: space-between;
  padding: 16px;
  border-radius: 16px;
  background: var(--ant-color-fill-quaternary);
  border: 1px solid transparent;
  cursor: pointer;
  transition: border-color 0.2s ease, box-shadow 0.2s ease, transform 0.2s ease;
}

.fx-ai-template-card:hover {
  border-color: var(--ant-color-primary);
  box-shadow: var(--ant-box-shadow-secondary);
  transform: translateY(-2px);
}

.fx-ai-template-card:focus-visible {
  outline: 2px solid var(--ant-color-primary);
  outline-offset: 2px;
}

.fx-ai-template-card-main {
  display: flex;
  align-items: center;
  gap: 16px;
}

.fx-ai-template-icon {
  display: flex;
  align-items: center;
  justify-content: center;
}

.fx-ai-template-icon.summary {
  width: 56px;
  height: 56px;
  border-radius: 18px;
  font-size: 26px;
}

.fx-ai-template-icon img {
  width: 100%;
  height: 100%;
  border-radius: inherit;
  object-fit: cover;
}

.fx-ai-template-text {
  display: flex;
  flex-direction: column;
  gap: 6px;
}

.fx-ai-template-name {
  font-size: 15px;
  font-weight: 600;
  color: var(--ant-color-text);
}

.fx-ai-template-desc {
  font-size: 12px;
  color: var(--ant-color-text-tertiary);
}

.fx-ai-template-summary {
  display: flex;
  align-items: center;
  gap: 16px;
  padding: 16px;
  border-radius: 16px;
  background: var(--ant-color-fill-quaternary);
}

.fx-ai-template-summary-text {
  display: flex;
  flex-direction: column;
  gap: 6px;
}

.fx-ai-template-summary-text .fx-ai-template-name {
  font-size: 18px;
}

.fx-ai-template-summary-text .fx-ai-template-desc {
  font-size: 13px;
}

.fx-ai-template-arrow {
  color: var(--ant-color-text-quaternary);
  font-size: 16px;
}

.fx-ai-remote-models {
  display: flex;
  flex-direction: column;
  gap: 12px;
  max-height: 320px;
  padding: 12px;
  border-radius: 12px;
  background: var(--ant-color-fill-quaternary);
  overflow-y: auto;
}

.fx-ai-remote-item {
  padding: 8px 0;
}

.fx-ai-remote-item .ant-checkbox-wrapper {
  width: 100%;
}

.fx-ai-remote-item-main {
  display: flex;
  flex-direction: column;
  gap: 6px;
}

.fx-ai-remote-desc {
  color: var(--ant-color-text-tertiary);
}

.fx-ai-chat {
  background: linear-gradient(135deg, #805ad5, #6b46c1);
}

.fx-ai-vision {
  background: linear-gradient(135deg, #4c6ef5, #4263eb);
}

.fx-ai-embedding {
  background: linear-gradient(135deg, #f7b733, #fc4a1a);
}

.fx-ai-rerank {
  background: linear-gradient(135deg, #0ea5e9, #0284c7);
}

.fx-ai-voice {
  background: linear-gradient(135deg, #f97316, #ea580c);
}

.fx-ai-tools {
  background: linear-gradient(135deg, #ec4899, #db2777);
}
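The trailing capability classes (.fx-ai-chat through .fx-ai-tools) define only a gradient background, so they are presumably composed with a sizing container such as .fx-ai-default-icon or .fx-ai-template-icon.summary. The markup that pairs them is not part of this excerpt; a hypothetical TSX usage:

{/* Hypothetical pairing: gradient class + icon container from this stylesheet */}
<div className="fx-ai-default-icon fx-ai-chat">
  <MessageOutlined /> {/* any @ant-design/icons glyph; the icon choice is an assumption */}
</div>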