晴天 f69360744f fix(assistant): fix test Response Body showing as (empty) + improve result display
## User Feedback

A screenshot shows "Response Body: (empty)" in the test result details, yet the conversation itself works fine.
User: "There is a fairly serious bug... the actual response parameters are not visible... I feel our feature is incomplete"

## Root Cause Analysis

1. **Accept-Encoding not restricted**: reqwest was built with only the gzip feature (brotli
   not enabled), but providers behind a CDN/reverse proxy may return Content-Encoding: br,
   causing resp.text() to fail to decode.
2. **Errors silently swallowed**: `resp.text().await.unwrap_or_default()` returns "" on a
   decode failure; the frontend shows (empty) with no error message, leaving users unable to diagnose it.
3. **Display design: reply truncated + hidden in a collapsed panel**: on success, only the
   first 80 characters of the model reply are shown as a preview; the full content requires
   expanding "View full request/response parameters" (where it is also mixed into the raw JSON).
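
A minimal JavaScript analog of point 2 (the Rust call is `resp.text().await.unwrap_or_default()`; the helper names here are hypothetical) illustrates why a decode failure is indistinguishable from a truly empty body:

```javascript
// JS analog of the unwrap_or_default anti-pattern: any read failure
// collapses to "", which the UI can only render as "(empty)".
function readBodyOrDefault(readFn) {
  try {
    return readFn();
  } catch (_e) {
    return ''; // the error (e.g. an undecodable brotli body) vanishes here
  }
}

// A body the client cannot decode looks identical to a genuinely empty one:
const failing = () => { throw new Error('unsupported Content-Encoding: br'); };
const empty = () => '';
// readBodyOrDefault(failing) === readBodyOrDefault(empty) === ''
```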

## Fix

### Rust test_model_verbose

All three branches (OpenAI / Anthropic / Gemini) now explicitly send an `Accept-Encoding: identity`
header to disable response compression. The test response body is tiny (a few hundred bytes),
so the cost of skipping compression is negligible.

When `resp.text().await` fails, the error is no longer silently swallowed via unwrap_or_default;
instead the command returns JSON carrying the error:
`"failed to read response body: {e} (possibly unsupported compression encoding or non-UTF-8 response)"`
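
A hedged sketch of the new failure path (the real code is Rust; this JS mirror uses hypothetical names) shows the shape of the payload the frontend receives instead of an empty string:

```javascript
// A body-read error becomes a JSON payload the frontend can display verbatim,
// naming the likely cause instead of rendering "(empty)".
function bodyReadError(e) {
  return JSON.stringify({
    error: `failed to read response body: ${e} (possibly unsupported compression encoding or non-UTF-8 response)`,
  });
}

const payload = JSON.parse(bodyReadError('error decoding response body'));
// payload.error now explains what actually happened
```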

### dev-api.js test_model_verbose (Web mode)

All three branches add `'Accept-Encoding': 'identity'` to their headers, matching the Rust behavior.
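
As a sketch (the helper name is hypothetical; dev-api.js sets headers inline per branch), the change amounts to merging the no-compression hint into whatever auth headers a branch already built:

```javascript
// Merge the identity hint into existing per-provider headers so that
// Web mode behaves the same as the Rust desktop path.
function withIdentityEncoding(headers) {
  return { ...headers, 'Accept-Encoding': 'identity' };
}

const headers = withIdentityEncoding({ Authorization: 'Bearer sk-test' });
// → { Authorization: 'Bearer sk-test', 'Accept-Encoding': 'identity' }
```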

### Frontend buildTestResult rewrite

- **Model reply shown prominently at the top** (highlighted border + full content +
  max-height:180px with scroll, no longer truncated to an 80-character preview):
  ```
  ✓ Connected (300ms, Chat Completions)

  ╔═════════════════════════════╗
  ║ MODEL REPLY                 ║   ← full reply text
  ║ Hello! I am QC-A04...       ║
  ╚═════════════════════════════╝
  ```
- **Clear diagnostic when respBody is empty but reply is not**: "The response body could not
  be read (possibly a compression-encoding issue), but the reply was extracted from the response stream"
- **Fixed-prompt footnote**: `📌 This test uses the preset prompt "Hello, please reply in one
  sentence" + max_tokens=200`, making clear to users that this is a fixed diagnostic request,
  not a real conversation.
- **Detail panel empty-body display improved**: the literal "(empty)" is gone (it could be
  misread as the server genuinely returning an empty string); replaced by a colored, italic
  "(response body is empty)" hint.
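
The decision logic above can be sketched as follows (hypothetical data shape; the real buildTestResult renders HTML):

```javascript
// Full reply on top, explicit diagnostics for the empty-body case,
// and no bare "(empty)" literal in the detail panel.
function buildTestResult({ ok, latencyMs, api, reply, respBody }) {
  if (!ok) return { status: 'error' };
  const result = {
    status: `✓ Connected (${latencyMs}ms, ${api})`,
    reply, // full text; the old 80-character preview truncation is gone
  };
  if (!respBody && reply) {
    result.note =
      'The response body could not be read (possibly a compression-encoding issue), ' +
      'but the reply was extracted from the response stream.';
  }
  result.detailBody = respBody || '(response body is empty)'; // never a bare "(empty)"
  return result;
}
```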

### i18n: 5 new translation keys

- testModelReply: Model Reply
- testFixedPrompt: This test uses the preset prompt...
- testRespBodyEmpty: The response body could not be read... but the reply was extracted from the stream
- testShowDetails: View full request/response parameters (previously hard-coded Chinese)
- testRespBodyEmptyDetail: (response body is empty) [previously the hard-coded "(empty)"]

All keys are covered in zh-CN / en / zh-TW / ja / ko / vi.
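
A minimal sketch of the lookup with an en fallback (key names are from this change; the zh-CN value shown is illustrative, not the shipped translation):

```javascript
// Per-locale catalogs; any missing key falls back to en, then to the raw key.
const messages = {
  en: {
    testModelReply: 'Model Reply',
    testShowDetails: 'View full request/response parameters',
    testRespBodyEmptyDetail: '(response body is empty)',
  },
  'zh-CN': {
    testModelReply: '模型回复', // illustrative value
  },
};

function t(key, locale) {
  const table = messages[locale] || {};
  return table[key] || messages.en[key] || key;
}
```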

## Verification

- npm run build passes (assistant chunk gzip 49.43 KB)
- cargo check passes
- Next step: release v0.13.4 once users confirm the Response Body displays correctly in practice

## Related

- Roadmap v0.14.0: upgrade the test feature into a mini Playground (custom prompt / multi-turn
  conversation / streaming output / max_tokens slider)
2026-04-20 14:12:40 +08:00

ClawPanel

OpenClaw & Hermes Agent Management Panel with Built-in AI Assistant — Multi-Engine AI Framework Management

🇨🇳 中文 | 🇺🇸 English | 🇹🇼 繁體中文 | 🇯🇵 日本語 | 🇰🇷 한국어 | 🇻🇳 Tiếng Việt | 🇪🇸 Español | 🇧🇷 Português | 🇷🇺 Русский | 🇫🇷 Français | 🇩🇪 Deutsch



ClawPanel Feature Showcase

ClawPanel is a visual management panel supporting multiple AI Agent frameworks, currently with OpenClaw and Hermes Agent dual-engine support. It features a built-in intelligent AI assistant that helps you install, auto-diagnose configurations, troubleshoot issues, and fix errors. 8 tools + 4 modes + interactive Q&A — easy to manage for beginners and experts alike.

🌐 Website: claw.qt.cool | 📦 Download: GitHub Releases

🎁 QingchenCloud AI API

Internal technical testing platform, open for selected users. Sign in daily to earn credits.


  • Daily Sign-in Credits — Sign in daily + invite friends to earn test credits
  • OpenAI-Compatible API — Seamless integration with OpenClaw, plug and play
  • Resource Policy — Rate limiting + request caps, may queue during peak hours
  • Model Availability — Models/APIs subject to actual page display, may rotate versions

⚠️ Compliance: This platform is for technical testing only. Illegal use or circumventing security mechanisms is prohibited. Keep your API Key secure. Rules subject to latest platform policies.

🔥 Dev Board / Embedded Device Support

ClawPanel provides a pure Web deployment mode (zero GUI dependency), natively compatible with ARM64 boards:

  • Orange Pi / Raspberry Pi / RK3588 — npm run serve to run
  • Docker ARM64 — docker run ghcr.io/qingchencloud/openclaw:latest
  • Armbian / Debian / Ubuntu Server — Auto-detect architecture
  • No Rust / Tauri / GUI needed — only Node.js 18+ required

📖 See Armbian Deployment Guide | Web Dev Mode

Community

A community of passionate AI Agent developers and enthusiasts — join us!

Discord  ·  Discussions  ·  Report Issue

Features

  • 🤖 AI Assistant (New) — Built-in AI assistant, 4 modes + 8 tools + interactive Q&A. See AI Assistant Highlights
  • 🧩 Multi-Engine Architecture — Supports both OpenClaw and Hermes Agent dual engines, freely switchable, independently managed
  • 🤖 Hermes Agent Chat — Built-in Hermes Agent chat interface with tool call visualization, file system access toggle, SSE streaming output
  • 🖼️ Image Recognition — Paste screenshots or drag images, AI auto-analyzes, multimodal conversations
  • Dashboard — System overview, real-time service monitoring, quick actions
  • Service Management — OpenClaw / Hermes Gateway start/stop, version detection & one-click upgrade, config backup & restore
  • Model Configuration — Multi-provider management, model CRUD, batch connectivity tests, latency detection, drag-to-reorder, auto-save + undo
  • Gateway Configuration — Port, access scope (localhost/LAN), auth Token, Tailscale networking
  • Messaging Channels — Unified Telegram, Discord, Feishu, DingTalk, QQ management, multi-Agent binding per platform
  • Communication & Automation — Message settings, broadcast strategies, slash commands, Webhooks, execution approval
  • Usage Analytics — Token usage, API costs, model/provider/tool rankings, daily usage charts
  • Agent Management — Agent CRUD, identity editing, model config, workspace management
  • Chat — Streaming, Markdown rendering, session management, /fast /think /verbose /reasoning commands
  • Cron Jobs — Cron-based scheduled execution, multi-channel delivery
  • Log Viewer — Multi-source real-time logs with keyword search
  • Memory Management — Memory file view/edit, categorized management, ZIP export, Agent switching
  • QingchenCloud AI API — Internal testing platform, OpenAI-compatible, daily sign-in credits
  • Extensions — cftunnel tunnel management, ClawApp status monitoring
  • About — Version info, community links, related projects, one-click upgrade

Download & Install

Go to Releases for the latest version:

macOS

| Chip | Installer | Notes |
| --- | --- | --- |
| Apple Silicon (M1/M2/M3/M4) | ClawPanel_x.x.x_aarch64.dmg | Macs from late 2020+ |
| Intel | ClawPanel_x.x.x_x64.dmg | Macs 2020 and earlier |

⚠️ "Damaged" or "unverified developer"? App is unsigned. Run: sudo xattr -rd com.apple.quarantine /Applications/ClawPanel.app

Windows

| Format | Installer | Notes |
| --- | --- | --- |
| EXE | ClawPanel_x.x.x_x64-setup.exe | Recommended |
| MSI | ClawPanel_x.x.x_x64_en-US.msi | Enterprise / silent install |

Linux

| Format | Installer | Notes |
| --- | --- | --- |
| AppImage | ClawPanel_x.x.x_amd64.AppImage | No install, chmod +x and run |
| DEB | ClawPanel_x.x.x_amd64.deb | sudo dpkg -i *.deb |
| RPM | ClawPanel-x.x.x-1.x86_64.rpm | sudo rpm -i *.rpm |

Linux Server (Web Version)

curl -fsSL https://raw.githubusercontent.com/qingchencloud/clawpanel/main/scripts/linux-deploy.sh | bash

Visit http://YOUR_SERVER_IP:1420 after deployment. 📖 Linux Deployment Guide

Docker

docker run -d --name clawpanel --restart unless-stopped \
  -p 1420:1420 -v clawpanel-data:/root/.openclaw \
  node:22-slim \
  sh -c "apt-get update && apt-get install -y git && \
    npm install -g @qingchencloud/openclaw-zh --registry https://registry.npmmirror.com && \
    git clone https://github.com/qingchencloud/clawpanel.git /app && \
    cd /app && npm install && npm run build && npm run serve"

📖 Docker Deployment Guide

Quick Start

  1. Initial Setup — First launch auto-detects Node.js, Git, OpenClaw. One-click install if missing.
  2. Configure Models — Add AI providers (DeepSeek, MiniMax, OpenAI, Ollama, etc.) with API keys. Test connectivity.
  3. Start Gateway — Go to Service Management, click Start. Green status = ready.
  4. Start Chatting — Go to Live Chat, select model, start conversation with streaming & Markdown.

🤖 AI Assistant Highlights

Built-in AI assistant that can directly operate your system — diagnose, fix, even submit PRs.

Four Modes

| Mode | Icon | Tools | Write | Confirm | Use Case |
| --- | --- | --- | --- | --- | --- |
| Chat | 💬 | | | | Pure Q&A |
| Plan | 📋 | | | | Read configs/logs, output plans |
| Execute | | | | | Normal work, dangerous ops need confirm |
| Unlimited | | | | | Full auto, no prompts |

Eight Tools

| Tool | Function |
| --- | --- |
| ask_user | Ask user questions (single/multi/text) |
| get_system_info | Get OS, architecture, home directory |
| run_command | Execute shell commands |
| read_file / write_file | Read/write files |
| list_directory | Browse directories |
| list_processes | View processes |
| check_port | Check port usage |

Tech Architecture

| Layer | Technology | Description |
| --- | --- | --- |
| Frontend | Vanilla JS + Vite | Zero framework, lightweight |
| Backend | Rust + Tauri v2 | Native performance, cross-platform |
| Communication | Tauri IPC + Shell Plugin | Frontend-backend bridge |
| Styling | Pure CSS (CSS Variables) | Dark/Light themes, glassmorphism |

Build from Source

git clone https://github.com/qingchencloud/clawpanel.git
cd clawpanel && npm install

# Desktop (requires Rust + Tauri v2)
npm run tauri dev        # Development
npm run tauri build      # Production

# Web only (no Rust needed)
npm run dev              # Dev with hot reload
npm run build && npm run serve  # Production

| Project | Description |
| --- | --- |
| OpenClaw | AI Agent Framework |
| ClawApp | Cross-platform mobile chat client |
| cftunnel | Cloudflare Tunnel tool |

Contributing

Issues and Pull Requests are welcome. See CONTRIBUTING.md for guidelines.

Acknowledgements

ClawPanel keeps growing because of every contributor in the community. Thank you for helping make the project better.

Code Contributors

Thanks to these developers for submitting Pull Requests and contributing directly to the codebase:


  • liucong2013 — #88
  • axdlee — #58
  • ATGCS — #107
  • livisun — #106
  • kiss-kedaya — #101 #94
  • wzh4869 — #82
  • 0xsline — #15
  • jonntd — #18

Community Reporters

Thanks to community members who opened issues, reported bugs, and suggested features:

If we missed your contribution, please open an issue and we will add it promptly.

Sponsor

If you find this project useful, consider supporting us via USDT (BNB Smart Chain):

Sponsor QR
0xbdd7ebdf2b30d873e556799711021c6671ffe88f

Contact

License

This project is licensed under AGPL-3.0. For commercial/proprietary use without open-source requirements, contact us for a commercial license.

© 2026 QingchenCloud (武汉晴辰天下网络科技有限公司) | claw.qt.cool
