MineContext
feat: Optimize initial API key validation UX (timeout & error messages) / 优化初始化 API Key 验证体验(超时控制与错误提示)
🇬🇧 English
🚀 Summary
This PR optimizes the API Key validation experience during the initial setup and settings configuration. It introduces timeout control to prevent UI freezing and adds friendly error message parsing for better user feedback.
🐛 Problem
- UI Hanging: During the initial setup or when changing settings, if the user enters an API key for a service that isn't enabled (e.g., a Volcengine model not activated) or if the network is unstable, the validation request could hang indefinitely. The frontend "spinning" loader would never stop, providing no feedback to the user.
- Obscure Errors: Raw API error responses (often a generic `400 Bad Request` with a complex JSON body) were displayed directly to users, making it difficult to diagnose issues such as "Insufficient Quota" or "Access Denied".
✨ Solution
- Timeout Control:
  - Implemented a forced 15-second timeout for the model validation step in the backend (`settings.py`). This ensures the UI always recovers and reports a failure if the API is unresponsive.
  - Fix: While the validation request uses this short timeout, the configuration actually saved to file does not include the limit, preventing timeouts during normal operation (e.g., analyzing large images).
- Friendly Error Messages:
  - Enhanced the `LLMClient.validate` method to parse specific error codes from Volcengine and OpenAI.
  - Mapped `AccessDenied` -> "Access denied. Please ensure the model is enabled in the Volcengine console."
  - Mapped `QuotaExceeded` / `insufficient_quota` -> friendly reminders to check the account balance.
  - General improvements to error-summary extraction.
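The validate-vs-persist split above can be sketched as follows; `build_validation_config` and `config_to_persist` are illustrative helper names, not the actual functions in `settings.py`:

```python
import copy

VALIDATION_TIMEOUT_S = 15  # short cap used ONLY for the validation probe


def build_validation_config(config: dict) -> dict:
    """Return a copy of the user config with the forced short timeout.

    Only this copy is sent on the validation request, so an unresponsive
    API fails fast instead of leaving the UI spinning forever.
    """
    probe = copy.deepcopy(config)
    probe["timeout"] = VALIDATION_TIMEOUT_S
    return probe


def config_to_persist(config: dict) -> dict:
    """Strip any probe-only timeout before the config is saved to file.

    This keeps normal operation (e.g. analyzing large images) from being
    capped at 15 seconds.
    """
    saved = copy.deepcopy(config)
    saved.pop("timeout", None)
    return saved
```

Usage: `build_validation_config({"api_key": "..."})` yields a probe config with `timeout == 15`, while `config_to_persist` returns the same settings with no `timeout` key, so the persisted file never inherits the probe limit.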
🛠️ Changes
- `opencontext/server/routes/settings.py`: Added a `timeout` parameter to the validation config logic; fixed a bug where the timeout setting was incorrectly saved to the persistent user config.
- `opencontext/llm/llm_client.py`: Added error-code mapping logic to `_extract_error_summary` to provide human-readable messages.
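A minimal sketch of the error-code mapping described above; the message table and the assumed JSON shapes of Volcengine/OpenAI error bodies are illustrative, and the real logic lives in `LLMClient._extract_error_summary`:

```python
import json

# Illustrative code -> message table (assumed, not the actual mapping).
_FRIENDLY_MESSAGES = {
    "AccessDenied": ("Access denied. Please ensure the model is enabled "
                     "in the Volcengine console."),
    "QuotaExceeded": "Quota exceeded. Please check your account balance.",
    "insufficient_quota": ("Insufficient quota. Please check your "
                           "account balance."),
}


def extract_error_summary(raw_body: str) -> str:
    """Map a raw provider error body to a human-readable summary."""
    try:
        err = json.loads(raw_body).get("error", {})
    except (ValueError, AttributeError):
        return raw_body  # not a JSON object: show as-is
    # Providers place the code under different keys (e.g. "code" for
    # Volcengine, "code"/"type" for OpenAI), so check both.
    code = err.get("code") or err.get("type") or ""
    return _FRIENDLY_MESSAGES.get(code, err.get("message") or raw_body)
```

Unknown codes fall back to the provider's own `message` field, and non-JSON bodies are passed through unchanged, so the mapping only ever improves the message, never hides it.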
🇨🇳 Chinese
🚀 Summary
This PR optimizes the API Key validation experience during initial setup and configuration changes. It adds timeout control to prevent the UI from appearing frozen, and adds friendly error-message parsing for more intuitive user feedback.
🐛 Problem
- UI Freeze: During initial setup or when changing the API Key, if the service behind the key is not enabled (e.g., the corresponding Doubao model endpoint is not activated on Volcengine) or the network is unreachable, the validation request could hang indefinitely. The frontend "spinning" loader never stopped, and users received no feedback.
- Obscure Errors: Raw API error messages (usually complex JSON or cryptic error codes) were shown directly, making it hard for users to tell whether the problem was "insufficient balance" or "service not enabled". #284
✨ Solution
- Timeout Control:
  - Added a forced 15-second timeout to the model validation step in the backend (`settings.py`), ensuring that even if the API is unresponsive, validation fails promptly and reports back to the frontend instead of leaving the UI frozen.
  - Fix: The short timeout is used only when validating the connection; it is removed before the configuration is saved, preventing timeout failures during actual use (e.g., parsing large images).
- Friendly Error Messages:
  - Enhanced the `LLMClient.validate` method to parse specific error codes from Volcengine and OpenAI.
  - Mapped `AccessDenied` -> a prompt asking the user to enable the model service in the console.
  - Mapped `QuotaExceeded` / `insufficient_quota` -> a prompt to check the account balance.
  - Improved the generic error-summary extraction logic.
🛠️ Changed Files
- `opencontext/server/routes/settings.py`: Added a `timeout` parameter to the validation logic; fixed a bug where the test-only timeout setting could be saved to the user's config file.
- `opencontext/llm/llm_client.py`: Added error-code mapping logic in `_extract_error_summary` to provide readable error messages.