feat(pipe): release v0.6.2 - full-lifecycle file agent support

Author: fujie
Date: 2026-02-10 14:55:16 +08:00
Parent: 3343e73848
Commit: a7b244602f

10 changed files with 743 additions and 113 deletions

View File

@@ -4,7 +4,7 @@ description: OpenWebUI Plugin Development & Release Workflow
# OpenWebUI Plugin Development Workflow

This workflow outlines the standard process for developing, documenting, and releasing plugins for OpenWebUI. **Crucially, the default goal of this workflow is "Preparation" (updating all relevant files) rather than automatic "Submission" (git commit/push), unless a release is explicitly requested.**

## 1. Development Standards
@@ -77,7 +77,9 @@ Reference: `.github/workflows/release.yml`
- **When to Bump**: Only update the version when:
  - User says "发布" / "release" / "bump version"
  - User explicitly asks to prepare for release
- **Agent Initiative**: After completing significant changes (new features, bug fixes, or multiple code modifications), the agent **SHOULD proactively ask** the user if they want to **prepare a new version** for release.
- **Release Information Compliance**: When a release is requested, the agent must generate a standard release summary (English commit title + bilingual bullet points) as defined in Sections 3 & 5.
- **Default Action (Prepare Only)**: When performing a version bump or update, the agent should update all files locally but **STOP** before committing. Present the changes and the **proposed Release/Commit Message** to the user and wait for explicit confirmation to commit/push.
- **Consistency**: When bumping, update the version in **ALL** locations:
  1. English code (`.py`)
  2. Chinese code (`.py`)
@@ -106,6 +108,21 @@ Reference: `.github/workflows/release.yml`
- Format: `https://github.com/Fu-Jie/awesome-openwebui/blob/main/plugins/{type}/{name}/README.md`
- Example: `https://github.com/Fu-Jie/awesome-openwebui/blob/main/plugins/filters/folder-memory/README.md`

### Release Content Standard

When the user confirms a release, the agent **MUST** follow these content standards:

1. **Commit Message**:
   - **Language**: English ONLY.
   - **Format**: `type(scope): description` (e.g., `feat(pipes): add streaming support for Copilot SDK`).
   - **Body**: List 1-3 key changes in bullet points.
2. **Release Summary (for user review)**:
   - Before committing, present a "Release Draft" containing:
     - **Title**: e.g., `Release v0.1.1: [Plugin Name] - [Brief Summary]`
     - **Changes**: Bilingual bullet points (English/Chinese) describing the impact.
     - **Verification Status**: Confirm all 8+ files have been updated and synced.
3. **Internal Documentation**: Ensure the "What's New" sections in READMEs and `docs/` exactly match the changes being released.

### Pull Request Check

- Workflow: `.github/workflows/plugin-version-check.yml`
@@ -126,4 +143,7 @@ Before committing:
## 5. Git Operations (Agent Rules)

1. **Prepare-on-Demand**: Focus on file modifications and local verification first.
2. **No Auto-Commit**: Never `git commit`, `git push`, or `create_pull_request` automatically after file updates unless the user explicitly says "commit this" or "release now".
3. **Draft Mode**: If available, open PRs as drafts first.
4. **Reference**: Strictly follow the rules defined in the **Git Operations (Agent Rules)** section of `.github/copilot-instructions.md`.

View File

@@ -481,6 +481,60 @@ async def get_user_language(self):
**Note**: Even if the plugin has a `Valves` configuration, automatic detection should still be attempted first to improve the user experience.

### 8. Agent File Delivery Standards

When developing agent plugins capable of generating files (such as the GitHub Copilot SDK integration), the following standard process must be followed so that files remain available across different storage backends (local/S3) and bypass unnecessary RAG processing.

#### Core Protocol: The 3-Step Delivery Protocol

1. **Write Local**:
   - The agent must create files under the current execution directory (`.`).
   - **Never** place files intended for publishing in system temp directories (such as `/tmp`); those paths are not visible outside the isolated workspace.
2. **Publish**:
   - Call the built-in tool `publish_file_from_workspace(filename='name.ext')`.
   - This tool migrates the file into Open WebUI's official storage (automatically adapting to S3) and injects `skip_rag` metadata to prevent triggering the vectorization pipeline (RAG bypass).
3. **Display Link**:
   - Obtain the `download_url` returned by the tool (the correct format is `/api/v1/files/{id}/content`).
   - The link **must** be presented to the user as a Markdown link (e.g., `[Download report](url)`).

#### Path Semantics

- The agent should always treat the "current directory" as its protected, private workspace.
- The `filename` argument of `publish_file_from_workspace` takes only a file name relative to the current directory.
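A minimal agent-side sketch of the three-step protocol above. The `deliver_csv_report` helper and its arguments are hypothetical illustrations; `publish_file_from_workspace` and `download_url` are the tool and field described in this section:

```python
from pathlib import Path

def deliver_csv_report(publish_file_from_workspace, rows):
    """Hypothetical sketch of the 3-step delivery protocol."""
    # 1. Write Local: create the file in the current directory (never /tmp)
    target = Path("report.csv")
    target.write_text("\n".join(",".join(map(str, row)) for row in rows))
    # 2. Publish: migrate the file to Open WebUI storage (S3-aware, RAG bypass)
    result = publish_file_from_workspace(filename=target.name)
    # 3. Display Link: present a Markdown download link to the user
    return f"[Download report]({result['download_url']})"
```

The key point is that the file is born in `.` and only becomes user-visible through the publish call.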
### 9. Copilot SDK Tool Definition Standards

When developing custom tools for the GitHub Copilot SDK, the following definition pattern must be followed so that the LLM can correctly recognize the parameters (avoiding an empty `properties` schema).

#### Explicit Parameter Schema

**Do not** rely solely on the function signature and type hints. You **must** define a class inheriting from `pydantic.BaseModel` to describe the parameters, and reference it explicitly via `params_type` in `define_tool`.

```python
from pydantic import BaseModel, Field
from copilot import define_tool

# 1. Define the parameter model
class MyToolParams(BaseModel):
    query: str = Field(..., description="Search keywords")
    limit: int = Field(default=10, description="Maximum number of results to return")

# 2. Implement the tool logic
async def my_custom_search(query: str, limit: int) -> dict:
    # ... implementation ...
    return {"results": []}

# 3. Register the tool (key: use params_type)
my_tool = define_tool(
    name="my_custom_search",
    description="Run a search against a specific data source",
    params_type=MyToolParams,  # Pass the model explicitly to generate a correct JSON Schema
)(my_custom_search)
```

#### Key Requirements

1. **params_type**: This argument must be used in `define_tool`. It is the only reliable way to prevent the LLM from hallucinating that the tool takes "no parameters".
2. **Field descriptions**: Use `Field(..., description="...")` in the `BaseModel` to provide a detailed description for every parameter.
3. **Required vs Optional**: Clearly mark required fields (no default) and optional fields (with a `default`).
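As a sanity check for this pattern, the generated JSON Schema can be inspected directly. A minimal Pydantic v2 sketch mirroring the model above:

```python
from pydantic import BaseModel, Field

class MyToolParams(BaseModel):
    query: str = Field(..., description="Search keywords")
    limit: int = Field(default=10, description="Maximum number of results to return")

# Inspect the schema the SDK would receive via params_type
schema = MyToolParams.model_json_schema()
print(sorted(schema["properties"]))  # ['limit', 'query']
print(schema["required"])            # ['query']
```

If `properties` comes back empty, the model was not wired through `params_type` correctly.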
---

## ⚡ Action Plugin Standards
@@ -928,10 +982,19 @@ Filter 实例是**单例 (Singleton)**。
- Update README/README_CN to include What's New section
- Migration: default TITLE_SOURCE changed to chat_title

#### Release Summary Generation

When preparing a commit, a "Release Draft" in the following format must be presented to the user:

1. **Commit Message**: An English title and summary that follow Conventional Commits.
2. **Bilingual Changes**:
   - English: Clear descriptions of technical/functional changes.
   - Chinese: Clear descriptions of user-visible improvements or fixes.
3. **Verification**: Confirm the version number has been updated in sync across all 8+ relevant locations.

### 4. 🤖 Git Operations & Push Rules

- **Core Principle**: By default, only perform **local file preparation** (updating code, READMEs, docs, and version numbers). **Never** run `git commit` or `git push` automatically without the user's explicit permission.
- **Allowed (requires confirmation)**: Only after the user explicitly says "发布", "Commit it", "Release", or "提交" may the agent push directly to the `main` branch or create a PR.
- **Feature Branches**: For large-scale refactoring or experimental feature work, creating an isolated feature branch (`feature/xxx`) is recommended.

### 5. 🤝 Contributor Recognition

View File

@@ -1,6 +1,6 @@
# GitHub Copilot SDK Pipe for OpenWebUI

**Author:** [Fu-Jie](https://github.com/Fu-Jie) | **Version:** 0.6.2 | **Project:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **License:** MIT

This is an advanced Pipe function for [OpenWebUI](https://github.com/open-webui/open-webui) that integrates the official [GitHub Copilot SDK](https://github.com/github/copilot-sdk). It enables you to use **GitHub Copilot models** (e.g., `gpt-5.2-codex`, `claude-sonnet-4.5`, `gemini-3-pro`, `gpt-5-mini`) **AND** your own models via **BYOK** (OpenAI, Anthropic) directly within OpenWebUI, providing a unified agentic experience with **strict User & Chat-level Workspace Isolation**.
@@ -14,13 +14,12 @@ This is an advanced Pipe function for [OpenWebUI](https://github.com/open-webui/
---

## ✨ v0.6.2 Updates (What's New)

- **🛠️ New Workspace Artifacts Tool**: Introduced `publish_file_from_workspace`. Agents can now generate files (e.g., Python-generated Excel/CSV) and provide direct download links for the user to click and save.
- **⚙️ Workflow Optimization**: Improved reliability of the internal agentic workspace management.
- **🛡️ Enhanced Security**: Refined access control for system resources within the isolated environment.
- **🔧 Performance Tuning**: Optimized stream processing for larger context windows.

---
@@ -33,7 +32,7 @@ This is an advanced Pipe function for [OpenWebUI](https://github.com/open-webui/
- **🧠 Deep Database Integration**: Real-time persistence of TODO lists for long-running workflows.
- **🌊 Advanced Streaming**: Full support for thinking process/Chain of Thought visualization.
- **🖼️ Intelligent Multimodal**: Vision capabilities and raw file analysis support.
- **⚡ Full-Lifecycle File Agent**: Supports receiving uploaded files for raw bypass analysis and publishing generated results (e.g., analyzed Excel/reports) as downloadable links, forming a complete closed-loop agentic workflow.

---

View File

@@ -1,6 +1,6 @@
# GitHub Copilot SDK Official Pipe

**Author:** [Fu-Jie](https://github.com/Fu-Jie) | **Version:** 0.6.2 | **Project:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **License:** MIT

This is an advanced Pipe function for [OpenWebUI](https://github.com/open-webui/open-webui) with deep integration of the **GitHub Copilot SDK**. It supports the **official GitHub Copilot models** (such as `gpt-5.2-codex`, `claude-sonnet-4.5`, `gemini-3-pro`, `gpt-5-mini`) as well as **BYOK (Bring Your Own Key)** mode for custom providers (OpenAI, Anthropic), with **strict user- and chat-level workspace isolation** for a unified, secure agent experience.
@@ -14,13 +14,12 @@
---

## ✨ v0.6.2 Updates (What's New)

- **🛠️ New Workspace Artifact Tool**: Introduced `publish_file_from_workspace`. The agent can now generate physical files (such as Excel/CSV reports produced with Python) and offer click-to-download links directly in the chat interface.
- **⚙️ Workflow Optimization**: Improved the reliability and atomicity of the agent's internal physical workspace management.
- **🛡️ Security Hardening**: Refined the access-control policy for system resources inside the isolated environment.
- **🔧 Performance Tuning**: Optimized streaming data processing for large context windows.

---
@@ -33,7 +32,7 @@
- **🧠 Deep Database Integration**: TODO lists persisted in real time to the UI progress bar.
- **🌊 Deep Reasoning Display**: Full streaming rendering of the model's thinking process.
- **🖼️ Intelligent Multimodal**: Full support for image recognition and attachment analysis.
- **⚡ Full-Lifecycle File Agent**: Receives uploaded files for RAG-bypassing deep analysis and publishes processed results (such as analyzed Excel files/reports) as downloadable links, forming a complete closed-loop agent workflow.

---

View File

@@ -15,7 +15,7 @@ Pipes allow you to:
## Available Pipe Plugins

- [GitHub Copilot SDK](github-copilot-sdk.md) (v0.6.2) - Official GitHub Copilot SDK integration. Features **Workspace Isolation**, **Database Persistence**, **Zero-config OpenWebUI Tool Bridge**, **BYOK** support, and **dynamic MCP discovery**. Supports streaming, multimodal, and infinite sessions.

---

View File

@@ -15,7 +15,7 @@ Pipes 可以用于:
## Available Pipe Plugins

- [GitHub Copilot SDK](github-copilot-sdk.zh.md) (v0.6.2) - Official GitHub Copilot SDK integration. Features **secure workspace isolation**, **database persistence**, **zero-config tool bridging**, and **BYOK (Bring Your Own Key)** support. Supports streaming output, typewriter-style thinking display, and infinite sessions. [See the architecture deep dive](github-copilot-sdk-deep-dive.zh.md).

---

View File

@@ -1,6 +1,6 @@
# GitHub Copilot SDK Pipe for OpenWebUI

**Author:** [Fu-Jie](https://github.com/Fu-Jie) | **Version:** 0.6.2 | **Project:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **License:** MIT

This is an advanced Pipe function for [OpenWebUI](https://github.com/open-webui/open-webui) that integrates the official [GitHub Copilot SDK](https://github.com/github/copilot-sdk). It enables you to use **GitHub Copilot models** (e.g., `gpt-5.2-codex`, `claude-sonnet-4.5`, `gemini-3-pro`, `gpt-5-mini`) **AND** your own models via **BYOK** (OpenAI, Anthropic) directly within OpenWebUI, providing a unified agentic experience with **strict User & Chat-level Workspace Isolation**.
@@ -14,13 +14,12 @@ This is an advanced Pipe function for [OpenWebUI](https://github.com/open-webui/
---

## ✨ v0.6.2 Updates (What's New)

- **🛠️ New Workspace Artifacts Tool**: Introduced `publish_file_from_workspace`. Agents can now generate files (e.g., Python-generated Excel/CSV) and provide direct download links for the user to click and save.
- **⚙️ Workflow Optimization**: Improved reliability of the internal agentic workspace management.
- **🛡️ Enhanced Security**: Refined access control for system resources within the isolated environment.
- **🔧 Performance Tuning**: Optimized stream processing for larger context windows.

---
@@ -33,7 +32,7 @@ This is an advanced Pipe function for [OpenWebUI](https://github.com/open-webui/
- **🧠 Deep Database Integration**: Real-time persistence of TODO lists for long-running workflows.
- **🌊 Advanced Streaming**: Full support for thinking process/Chain of Thought visualization.
- **🖼️ Intelligent Multimodal**: Vision capabilities and raw file analysis support.
- **⚡ Full-Lifecycle File Agent**: Supports receiving uploaded files for raw bypass analysis and publishing generated results (e.g., analyzed Excel/reports) as downloadable links, forming a complete closed-loop agentic workflow.

---

View File

@@ -1,6 +1,6 @@
# GitHub Copilot SDK Official Pipe

**Author:** [Fu-Jie](https://github.com/Fu-Jie/awesome-openwebui) | **Version:** 0.6.2 | **Project:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **License:** MIT

This is an advanced Pipe function for [OpenWebUI](https://github.com/open-webui/open-webui) with deep integration of the **GitHub Copilot SDK**. It supports the **official GitHub Copilot models** (such as `gpt-5.2-codex`, `claude-sonnet-4.5`, `gemini-3-pro`, `gpt-5-mini`) as well as **BYOK (Bring Your Own Key)** mode for custom providers (OpenAI, Anthropic), with **strict user- and chat-level workspace isolation** for a unified, secure agent experience.
@@ -14,13 +14,12 @@
---

## ✨ v0.6.2 Updates (What's New)

- **🛠️ New Workspace Artifact Tool**: Introduced `publish_file_from_workspace`. The agent can now generate physical files (such as Excel/CSV reports produced with Python) and offer click-to-download links directly in the chat interface.
- **⚙️ Workflow Optimization**: Improved the reliability and atomicity of the agent's internal physical workspace management.
- **🛡️ Security Hardening**: Refined the access-control policy for system resources inside the isolated environment.
- **🔧 Performance Tuning**: Optimized streaming data processing for large context windows.

---
@@ -33,7 +32,7 @@
- **🧠 Deep Database Integration**: TODO lists persisted in real time to the UI progress bar.
- **🌊 Deep Reasoning Display**: Full streaming rendering of the model's thinking process.
- **🖼️ Intelligent Multimodal**: Full support for image recognition and attachment analysis.
- **⚡ Full-Lifecycle File Agent**: Receives uploaded files for RAG-bypassing deep analysis and publishes processed results (such as analyzed Excel files/reports) as downloadable links, forming a complete closed-loop agent workflow.

---

View File

@@ -5,7 +5,7 @@ author_url: https://github.com/Fu-Jie/awesome-openwebui
funding_url: https://github.com/open-webui
openwebui_id: ce96f7b4-12fc-4ac3-9a01-875713e69359
description: Integrate GitHub Copilot SDK. Supports dynamic models, multi-turn conversation, streaming, multimodal input, infinite sessions, and frontend debug logging.
version: 0.6.2
requirements: github-copilot-sdk==0.1.23
"""
@@ -44,6 +44,11 @@ from open_webui.config import (
from open_webui.utils.tools import get_tools as get_openwebui_tools, get_builtin_tools
from open_webui.models.tools import Tools
from open_webui.models.users import Users
from open_webui.models.files import Files, FileForm
from open_webui.config import UPLOAD_DIR, DATA_DIR
import mimetypes
import uuid
import shutil
# Open WebUI internal database (re-use shared connection)
try:
@@ -129,18 +134,22 @@ BASE_GUIDELINES = (
" - 2. **Render**: Immediately output the SAME code in a ` ```html ` block so the user can interact with it.\n" " - 2. **Render**: Immediately output the SAME code in a ` ```html ` block so the user can interact with it.\n"
" - **Result**: The user gets both a saved file AND a live app. Never force the user to choose one over the other.\n" " - **Result**: The user gets both a saved file AND a live app. Never force the user to choose one over the other.\n"
"4. **Images & Files**: ALWAYS embed generated images/files directly using `![caption](url)`. Never provide plain text links.\n" "4. **Images & Files**: ALWAYS embed generated images/files directly using `![caption](url)`. Never provide plain text links.\n"
"5. **TODO Visibility**: Every time you call the `update_todo` tool, you **MUST** immediately follow up with a beautifully formatted **Markdown summary** of the current TODO list. Use task checkboxes (`- [ ]`), progress indicators, and clear headings so the user can see the status directly in the chat.\n" "5. **File Delivery & Publishing (CRITICAL)**:\n"
"6. **Python Execution Standard**: For ANY task requiring Python logic (not just data analysis), you **MUST NOT** embed multi-line code directly in a shell command (e.g., using `python -c` or `<< 'EOF'`).\n" " - **Implicit Requests**: If the user says 'publish this', 'export your response', or 'give me a link to this content', you MUST: 1. Write the relevant content to a `.md` (or other appropriate) file in the current directory (`.`). 2. Call `publish_file_from_workspace(filename='name.md')` to get a link.\n"
" - **Manual Sequence**: 1. **Write Local**: Create the file in `.` (your only workspace). 2. **Publish**: Call `publish_file_from_workspace(filename='your_file.ext')`. **WARNING**: You MUST provide the filename argument; never call this tool with empty parentheses.\n"
" - *Rule*: Only files in the current directory (`.`) can be published. The tool bypasses RAG and handles S3/Local storage automatically.\n"
"6. **TODO Visibility**: Every time you call the `update_todo` tool, you **MUST** immediately follow up with a beautifully formatted **Markdown summary** of the current TODO list. Use task checkboxes (`- [ ]`), progress indicators, and clear headings so the user can see the status directly in the chat.\n"
"7. **Python Execution Standard**: For ANY task requiring Python logic (not just data analysis), you **MUST NOT** embed multi-line code directly in a shell command (e.g., using `python -c` or `<< 'EOF'`).\n"
' - **Exception**: Trivial one-liners (e.g., `python -c "print(1+1)"`) are permitted.\n' ' - **Exception**: Trivial one-liners (e.g., `python -c "print(1+1)"`) are permitted.\n'
" - **Protocol**: For everything else, you MUST:\n" " - **Protocol**: For everything else, you MUST:\n"
" 1. **Create** a `.py` file in the workspace (e.g., `script.py`).\n" " 1. **Create** a `.py` file in the workspace (e.g., `script.py`).\n"
" 2. **Run** it using `python3 script.py`.\n" " 2. **Run** it using `python3 script.py`.\n"
" - **Reason**: This ensures code is debuggable, readable, and persistent.\n" " - **Reason**: This ensures code is debuggable, readable, and persistent.\n"
"7. **Active & Autonomous**: You are an expert engineer. **DO NOT** ask for permission to proceed with obvious steps. **DO NOT** stop to ask 'Shall I continue?'.\n" "8. **Active & Autonomous**: You are an expert engineer. **DO NOT** ask for permission to proceed with obvious steps. **DO NOT** stop to ask 'Shall I continue?'.\n"
" - **Behavior**: Analyze the user's request -> Formulate a plan -> **EXECUTE** the plan immediately.\n" " - **Behavior**: Analyze the user's request -> Formulate a plan -> **EXECUTE** the plan immediately.\n"
" - **Clarification**: Only ask questions if the request is ambiguous or carries high risk (e.g., destructive actions).\n" " - **Clarification**: Only ask questions if the request is ambiguous or carries high risk (e.g., destructive actions).\n"
" - **Goal**: Minimize user friction. Deliver results, not questions.\n" " - **Goal**: Minimize user friction. Deliver results, not questions.\n"
"8. **Large Output Management**: If a tool execution output is truncated or saved to a temporary file (e.g., `/tmp/...`), DO NOT worry. The system will automatically move it to your workspace and notify you of the new filename. You can then read it directly.\n" "9. **Large Output Management**: If a tool execution output is truncated or saved to a temporary file (e.g., `/tmp/...`), DO NOT worry. The system will automatically move it to your workspace and notify you of the new filename. You can then read it directly.\n"
) )
# Sensitive extensions only for Administrators # Sensitive extensions only for Administrators
@@ -343,6 +352,7 @@ class Pipe:
    # =============================================================
    _model_cache: List[dict] = []  # Model list cache
    _standard_model_ids: set = set()  # Track standard model IDs
    _last_byok_config_hash: str = ""  # Track BYOK config for cache invalidation
    _tool_cache = None  # Cache for converted OpenWebUI tools
    _mcp_server_cache = None  # Cache for MCP server config
    _env_setup_done = False  # Track if env setup has been completed
@@ -488,6 +498,7 @@ class Pipe:
        __user__: Optional[dict] = None,
        __event_emitter__=None,
        __event_call__=None,
        __request__=None,
    ) -> Union[str, AsyncGenerator]:
        return await self._pipe_impl(
            body,
@@ -495,6 +506,7 @@ class Pipe:
            __user__=__user__,
            __event_emitter__=__event_emitter__,
            __event_call__=__event_call__,
            __request__=__request__,
        )

    # ==================== Functional Areas ====================
@@ -507,7 +519,12 @@ class Pipe:
    # Tool registration: Add @define_tool decorated functions at module level,
    # then register them in _initialize_custom_tools() -> all_tools dict.
    async def _initialize_custom_tools(
        self,
        body: dict = None,
        __user__=None,
        __event_call__=None,
        __request__=None,
        __metadata__=None,
    ):
        """Initialize custom tools based on configuration"""
        # 1. Determine effective settings (User override > Global)
@@ -525,7 +542,17 @@ class Pipe:
            await self._emit_debug_log(
                " Using cached OpenWebUI tools.", __event_call__
            )
            # Create a shallow copy to append user-specific tools without polluting the cache
            tools = list(self._tool_cache)
            # Inject the file publish tool
            chat_ctx = self._get_chat_context(body, __metadata__)
            chat_id = chat_ctx.get("chat_id")
            file_tool = self._get_publish_file_tool(__user__, chat_id, __request__)
            if file_tool:
                tools.append(file_tool)
            return tools

        # Load OpenWebUI tools dynamically
        openwebui_tools = await self._load_openwebui_tools(
@@ -557,7 +584,204 @@ class Pipe:
            __event_call__,
        )

        # Create a shallow copy to append user-specific tools without polluting the cache
        final_tools = list(openwebui_tools)
        # Inject the file publish tool
        chat_ctx = self._get_chat_context(body, __metadata__)
        chat_id = chat_ctx.get("chat_id")
        file_tool = self._get_publish_file_tool(__user__, chat_id, __request__)
        if file_tool:
            final_tools.append(file_tool)
        return final_tools
    def _get_publish_file_tool(self, __user__, chat_id, __request__=None):
        """
        Create a tool to publish files from the workspace to a downloadable URL.
        """
        # Resolve user_id
        if isinstance(__user__, (list, tuple)):
            user_data = __user__[0] if __user__ else {}
        elif isinstance(__user__, dict):
            user_data = __user__
        else:
            user_data = {}
        user_id = user_data.get("id") or user_data.get("user_id")
        if not user_id:
            return None

        # Resolve workspace directory
        workspace_dir = Path(self._get_workspace_dir(user_id=user_id, chat_id=chat_id))

        # Define the parameter schema explicitly for the SDK
        class PublishFileParams(BaseModel):
            filename: str = Field(
                ...,
                description="The EXACT name of the file you just created in the current directory (e.g., 'report.csv'). REQUIRED.",
            )

        async def publish_file_from_workspace(filename: Any) -> dict:
            """
            Publishes a file from the local chat workspace to a downloadable URL.
            """
            try:
                # 1. Robust parameter extraction
                # Case A: filename is a Pydantic model (common when using params_type)
                if hasattr(filename, "model_dump"):  # Pydantic v2
                    filename = filename.model_dump().get("filename")
                elif hasattr(filename, "dict"):  # Pydantic v1
                    filename = filename.dict().get("filename")
                # Case B: filename is a dict
                if isinstance(filename, dict):
                    filename = (
                        filename.get("filename")
                        or filename.get("file")
                        or filename.get("file_path")
                    )
                # Case C: filename is a JSON string or wrapped string
                if isinstance(filename, str):
                    filename = filename.strip()
                    if filename.startswith("{"):
                        try:
                            import json

                            data = json.loads(filename)
                            if isinstance(data, dict):
                                filename = (
                                    data.get("filename") or data.get("file") or filename
                                )
                        except Exception:
                            pass

                # 2. Final string validation
                if (
                    not filename
                    or not isinstance(filename, str)
                    or filename.strip() in ("", "{}", "None", "null")
                ):
                    return {
                        "error": "Missing or invalid required argument: 'filename'.",
                        "hint": f"Received value: {type(filename).__name__}. Please provide the filename as a simple string like 'report.md'.",
                    }
                filename = filename.strip()

                # 3. Path resolution (lock to the current chat workspace)
                target_path = workspace_dir / filename
                try:
                    target_path = target_path.resolve()
                    if not str(target_path).startswith(str(workspace_dir.resolve())):
                        return {
                            "error": "Access denied: File must be within the current chat workspace."
                        }
                except Exception as e:
                    return {"error": f"Path validation failed: {e}"}
                if not target_path.exists() or not target_path.is_file():
                    return {
                        "error": f"File '{filename}' not found in chat workspace. Ensure you saved it to the CURRENT DIRECTORY (.)."
                    }

                # 4. Upload via API (S3 compatible)
                api_success = False
                file_id = None
                safe_filename = filename
                token = None
                if __request__:
                    auth_header = __request__.headers.get("Authorization")
                    if auth_header and auth_header.startswith("Bearer "):
                        token = auth_header.split(" ")[1]
                    if not token and "token" in __request__.cookies:
                        token = __request__.cookies.get("token")
                if token:
                    try:
                        import aiohttp

                        base_url = str(__request__.base_url).rstrip("/")
                        upload_url = f"{base_url}/api/v1/files/"
                        async with aiohttp.ClientSession() as session:
                            with open(target_path, "rb") as f:
                                data = aiohttp.FormData()
                                data.add_field("file", f, filename=target_path.name)
                                import json

                                data.add_field(
                                    "metadata",
                                    json.dumps(
                                        {
                                            "source": "copilot_workspace_publish",
                                            "skip_rag": True,
                                        }
                                    ),
                                )
                                async with session.post(
                                    upload_url,
                                    data=data,
                                    headers={"Authorization": f"Bearer {token}"},
                                ) as resp:
                                    if resp.status == 200:
                                        api_result = await resp.json()
                                        file_id = api_result.get("id")
                                        safe_filename = api_result.get(
                                            "filename", target_path.name
                                        )
                                        api_success = True
                    except Exception as e:
                        logger.error(f"API upload failed: {e}")

                # 5. Fallback: manual DB insert (local storage only)
                if not api_success:
                    file_id = str(uuid.uuid4())
                    safe_filename = target_path.name
                    dest_path = Path(UPLOAD_DIR) / f"{file_id}_{safe_filename}"
                    await asyncio.to_thread(shutil.copy2, target_path, dest_path)
try:
db_path = str(os.path.relpath(dest_path, DATA_DIR))
except Exception:
db_path = str(dest_path)
file_form = FileForm(
id=file_id,
filename=safe_filename,
path=db_path,
data={"status": "completed", "skip_rag": True},
meta={
"name": safe_filename,
"content_type": mimetypes.guess_type(safe_filename)[0]
or "text/plain",
"size": os.path.getsize(dest_path),
"source": "copilot_workspace_publish",
"skip_rag": True,
},
)
await asyncio.to_thread(Files.insert_new_file, user_id, file_form)
# 5. Result
download_url = f"/api/v1/files/{file_id}/content"
return {
"file_id": file_id,
"filename": safe_filename,
"download_url": download_url,
"message": "File published successfully.",
"hint": f"Link: [Download {safe_filename}]({download_url})",
}
except Exception as e:
return {"error": str(e)}
return define_tool(
name="publish_file_from_workspace",
description="Converts a file created in your local workspace into a downloadable URL. Use this tool AFTER writing a file to the current directory.",
params_type=PublishFileParams,
)(publish_file_from_workspace)
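The parameter-extraction ladder above can be exercised in isolation. The sketch below (hypothetical helper name `normalize_filename`; not part of the plugin) condenses the same coercion order the tool applies: Pydantic model → dict → JSON-encoded string → plain string.

```python
# Standalone sketch (assumption: simplified, not the plugin's code) of the
# argument normalization above. LLM tool calls may deliver `filename` as a
# Pydantic model, a dict, or a JSON-encoded string, so all three are coerced
# into a plain string before validation.
import json
from typing import Any, Optional


def normalize_filename(value: Any) -> Optional[str]:
    # Pydantic v2 / v1 models expose model_dump() / dict()
    if hasattr(value, "model_dump"):
        value = value.model_dump().get("filename")
    elif hasattr(value, "dict"):
        value = value.dict().get("filename")
    # Plain dict payloads
    if isinstance(value, dict):
        value = value.get("filename") or value.get("file")
    # JSON-encoded string payloads, e.g. '{"filename": "report.md"}'
    if isinstance(value, str):
        value = value.strip()
        if value.startswith("{"):
            try:
                data = json.loads(value)
                if isinstance(data, dict):
                    value = data.get("filename") or value
            except json.JSONDecodeError:
                pass
    # Reject anything that still is not a usable filename string
    if not isinstance(value, str) or value.strip() in ("", "{}", "None", "null"):
        return None
    return value.strip()
```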
def _json_schema_to_python_type(self, schema: dict) -> Any:
"""Convert JSON Schema type to Python type for Pydantic models."""
@@ -1756,13 +1980,17 @@ class Pipe:
    async def _fetch_byok_models(self, uv: "Pipe.UserValves" = None) -> List[dict]:
        """Fetch BYOK models from configured provider."""
        model_list = []
        # Resolve effective settings (User > Global)
        # Note: We handle the case where uv might be None
        effective_base_url = (
            uv.BYOK_BASE_URL if uv else ""
        ) or self.valves.BYOK_BASE_URL
        effective_type = (uv.BYOK_TYPE if uv else "") or self.valves.BYOK_TYPE
        effective_api_key = (uv.BYOK_API_KEY if uv else "") or self.valves.BYOK_API_KEY
        effective_bearer_token = (
            uv.BYOK_BEARER_TOKEN if uv else ""
        ) or self.valves.BYOK_BEARER_TOKEN
        effective_models = (uv.BYOK_MODELS if uv else "") or self.valves.BYOK_MODELS

        if effective_base_url:
@@ -1778,9 +2006,7 @@ class Pipe:
headers["anthropic-version"] = "2023-06-01" headers["anthropic-version"] = "2023-06-01"
else: else:
if effective_bearer_token: if effective_bearer_token:
headers["Authorization"] = ( headers["Authorization"] = f"Bearer {effective_bearer_token}"
f"Bearer {effective_bearer_token}"
)
elif effective_api_key: elif effective_api_key:
headers["Authorization"] = f"Bearer {effective_api_key}" headers["Authorization"] = f"Bearer {effective_api_key}"
@@ -1813,7 +2039,9 @@ class Pipe:
f"BYOK: Failed to fetch models from {url} (Attempt {attempt+1}/3). Status: {resp.status}" f"BYOK: Failed to fetch models from {url} (Attempt {attempt+1}/3). Status: {resp.status}"
) )
except Exception as e: except Exception as e:
await self._emit_debug_log(f"BYOK: Model fetch error (Attempt {attempt+1}/3): {e}") await self._emit_debug_log(
f"BYOK: Model fetch error (Attempt {attempt+1}/3): {e}"
)
if attempt < 2: if attempt < 2:
await asyncio.sleep(1) await asyncio.sleep(1)
@@ -1941,7 +2169,25 @@ class Pipe:
            if k.strip()
        ]

        # --- NEW: CONFIG-AWARE CACHE INVALIDATION ---
        # Calculate current config fingerprint to detect changes
        current_config_str = f"{token}|{uv.BYOK_BASE_URL or self.valves.BYOK_BASE_URL}|{uv.BYOK_API_KEY or self.valves.BYOK_API_KEY}|{self.valves.BYOK_BEARER_TOKEN}"
        current_config_hash = hashlib.md5(current_config_str.encode()).hexdigest()
        if (
            self._model_cache
            and self.__class__._last_byok_config_hash != current_config_hash
        ):
            if self.valves.DEBUG:
                logger.info(
                    "[Pipes] Configuration change detected. Invalidating model cache."
                )
            self.__class__._model_cache = []
            self.__class__._last_byok_config_hash = current_config_hash

        if not self._model_cache:
            # Update the hash when we refresh the cache
            self.__class__._last_byok_config_hash = current_config_hash
            if self.valves.DEBUG:
                logger.info("[Pipes] Refreshing model cache...")
            try:
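The hunk above hashes a fingerprint of the active credentials and drops the model cache whenever the fingerprint changes. A minimal standalone sketch of the same pattern (hypothetical `ModelCache` class; the plugin stores the hash on the Pipe class instead):

```python
# Minimal sketch (assumption: simplified, not the plugin's class) of
# config-aware cache invalidation: hash the credential/config tuple and
# drop the cached model list whenever the hash changes between calls.
import hashlib
from typing import Callable, List


class ModelCache:
    def __init__(self) -> None:
        self._models: List[dict] = []
        self._config_hash: str = ""

    def get(
        self,
        token: str,
        base_url: str,
        api_key: str,
        fetch: Callable[[], List[dict]],
    ) -> List[dict]:
        fingerprint = hashlib.md5(
            f"{token}|{base_url}|{api_key}".encode()
        ).hexdigest()
        # Invalidate when any config component changed since the last call
        if self._models and self._config_hash != fingerprint:
            self._models = []
        # Refresh on an empty (or just-invalidated) cache
        if not self._models:
            self._config_hash = fingerprint
            self._models = fetch()
        return self._models
```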
@@ -2437,6 +2683,7 @@ class Pipe:
        __user__: Optional[dict] = None,
        __event_emitter__=None,
        __event_call__=None,
        __request__=None,
    ) -> Union[str, AsyncGenerator]:
        # --- PROBE LOG ---
        if __event_call__:
@@ -2716,7 +2963,11 @@ class Pipe:
        # Initialize custom tools (Handles caching internally)
        custom_tools = await self._initialize_custom_tools(
            body=body,
            __user__=__user__,
            __event_call__=__event_call__,
            __request__=__request__,
            __metadata__=__metadata__,
        )
        if custom_tools:
            tool_names = [t.name for t in custom_tools]
@@ -3468,4 +3719,6 @@ class Pipe:
                    await client.stop()
            except Exception:
                pass

# Triggering release after CI fix
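A note on the workspace containment check used in `publish_file_from_workspace`: comparing `str(path).startswith(str(workspace))` can false-positive on sibling directories (e.g. `/data/ws2` vs `/data/ws`). The sketch below (an assumption, not the plugin's code) shows an `os.path.commonpath`-based variant of the same guard:

```python
# Hedged sketch of the workspace containment check. The plugin compares
# str(path).startswith(str(workspace)); this variant (an assumption, not the
# plugin's code) uses os.path.commonpath on resolved paths, which cannot
# match a sibling directory that merely shares the workspace's name prefix.
import os
from pathlib import Path


def is_inside_workspace(workspace: Path, filename: str) -> bool:
    workspace = workspace.resolve()
    target = (workspace / filename).resolve()
    try:
        # commonpath equals the workspace only if target is inside it
        return os.path.commonpath([workspace, target]) == str(workspace)
    except ValueError:  # e.g. paths on different drives on Windows
        return False
```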


@@ -4,7 +4,7 @@ author: Fu-Jie
author_url: https://github.com/Fu-Jie/awesome-openwebui
funding_url: https://github.com/open-webui
description: Integrates the GitHub Copilot SDK. Supports dynamic models, multi-select providers, streaming output, multimodal input, unlimited sessions, and frontend debug logging.
version: 0.6.2
requirements: github-copilot-sdk==0.1.23
"""
@@ -35,6 +35,11 @@ from open_webui.config import (
from open_webui.utils.tools import get_tools as get_openwebui_tools, get_builtin_tools
from open_webui.models.tools import Tools
from open_webui.models.users import Users
from open_webui.models.files import Files, FileForm
from open_webui.config import UPLOAD_DIR, DATA_DIR
import mimetypes
import uuid
import shutil

# Setup logger
logger = logging.getLogger(__name__)
@@ -58,6 +63,10 @@ FORMATTING_GUIDELINES = (
"1. **Markdown & 多媒体**:自由使用粗体、斜体、表格和列表。\n" "1. **Markdown & 多媒体**:自由使用粗体、斜体、表格和列表。\n"
"2. **Mermaid 图表**:请务必使用标准的 ```mermaid 代码块。\n" "2. **Mermaid 图表**:请务必使用标准的 ```mermaid 代码块。\n"
"3. **交互式 HTML/JS**:你可以输出完整的 ```html 代码块(含 CSS/JS将在 iframe 中渲染。\n" "3. **交互式 HTML/JS**:你可以输出完整的 ```html 代码块(含 CSS/JS将在 iframe 中渲染。\n"
"4. **文件交付与发布 (关键规范)**\n"
" - **隐式请求**若用户要求“发布这个”、“导出刚才的内容”或“给我一个链接”你必须1. 将内容写入当前目录 (`.`) 下的 `.md` (或其他合适) 文件。2. 调用 `publish_file_from_workspace(filename='name.md')` 获取链接。\n"
" - **标准流程**1. **本地写入**:使用 Python 在**当前目录 (`.`)** 创建文件。这是你的唯一工作区。**严禁**使用 `/tmp` 等绝对路径。2. **显式发布**:调用 `publish_file_from_workspace(filename='your_file.ext')`。该工具会自动同步至 S3 并绕过 RAG。3. **呈现链接**:从工具返回的 JSON 中提取 `download_url`,并以 Markdown 链接 `[点击下载描述](url)` 展示。\n"
" - **规则**:只有当前目录 (`.`) 下的文件可以发布。调用时必须传入 `filename` 参数,严禁空调用。\n"
"7. **主动与自主**: 你是专家工程师。对于显而易见的步骤,**不要**请求许可。**不要**停下来问“我通过吗?”或“是否继续?”。\n" "7. **主动与自主**: 你是专家工程师。对于显而易见的步骤,**不要**请求许可。**不要**停下来问“我通过吗?”或“是否继续?”。\n"
" - **行为模式**: 分析用户请求 -> 制定计划 -> **立即执行**计划。\n" " - **行为模式**: 分析用户请求 -> 制定计划 -> **立即执行**计划。\n"
" - **澄清**: 仅当请求模棱两可或具有高风险(例如破坏性操作)时才提出问题。\n" " - **澄清**: 仅当请求模棱两可或具有高风险(例如破坏性操作)时才提出问题。\n"
@@ -230,6 +239,7 @@ class Pipe:
    )

    _model_cache: List[dict] = []
    _last_byok_config_hash: str = ""  # Track config state to invalidate the cache
    _standard_model_ids: set = set()
    _tool_cache = None
    _mcp_server_cache = None
@@ -256,13 +266,24 @@ class Pipe:
        __user__=None,
        __event_emitter__=None,
        __event_call__=None,
        __request__=None,
    ) -> Union[str, AsyncGenerator]:
        return await self._pipe_impl(
            body,
            __metadata__=__metadata__,
            __user__=__user__,
            __event_emitter__=__event_emitter__,
            __event_call__=__event_call__,
            __request__=__request__,
        )

    async def _initialize_custom_tools(
        self,
        body: dict = None,
        __user__=None,
        __event_call__=None,
        __request__=None,
        __metadata__=None,
    ):
        """Initialize custom tools based on configuration."""
        # 1. Determine effective settings (user override > global)
@@ -275,13 +296,22 @@ class Pipe:
        if not enable_tools and not enable_openapi:
            return []

        # Extract the chat ID to align with the workspace
        chat_ctx = self._get_chat_context(body, __metadata__)
        chat_id = chat_ctx.get("chat_id")

        # 3. Check the cache
        if enable_cache and self._tool_cache is not None:
            await self._emit_debug_log("Using cached OpenWebUI tools.", __event_call__)
            tools = list(self._tool_cache)
            # Inject the file publish tool
            file_tool = self._get_publish_file_tool(__user__, chat_id, __request__)
            if file_tool:
                tools.append(file_tool)
            return tools

        # Dynamically load OpenWebUI tools
        openwebui_tools = await self._load_openwebui_tools(
            __user__=__user__,
            __event_call__=__event_call__,
            body=body,
@@ -291,12 +321,194 @@ class Pipe:
        # Update the cache
        if enable_cache:
            self._tool_cache = openwebui_tools
            await self._emit_debug_log(
                "✅ OpenWebUI tools cached for subsequent requests.", __event_call__
            )

        final_tools = list(openwebui_tools)
        # Inject the file publish tool
        file_tool = self._get_publish_file_tool(__user__, chat_id, __request__)
        if file_tool:
            final_tools.append(file_tool)
        return final_tools
def _get_publish_file_tool(self, __user__, chat_id, __request__=None):
"""Create a tool that publishes workspace files as download links."""
if isinstance(__user__, (list, tuple)):
user_data = __user__[0] if __user__ else {}
elif isinstance(__user__, dict):
user_data = __user__
else:
user_data = {}
user_id = user_data.get("id") or user_data.get("user_id")
if not user_id:
return None
# Lock to the isolated workspace of the current chat
workspace_dir = Path(self._get_workspace_dir(user_id=user_id, chat_id=chat_id))
# Define the parameter schema for the SDK
class PublishFileParams(BaseModel):
filename: str = Field(
...,
description="The EXACT name of the file you created in the current directory (e.g. 'report.csv'). REQUIRED.",
)
async def publish_file_from_workspace(filename: Any) -> dict:
"""Publish a file from the local chat workspace as a downloadable URL."""
try:
# 1. Robust parameter extraction
if hasattr(filename, "model_dump"): # Pydantic v2
filename = filename.model_dump().get("filename")
elif hasattr(filename, "dict"): # Pydantic v1
filename = filename.dict().get("filename")
if isinstance(filename, dict):
filename = (
filename.get("filename")
or filename.get("file")
or filename.get("file_path")
)
if isinstance(filename, str):
filename = filename.strip()
if filename.startswith("{"):
try:
import json
data = json.loads(filename)
if isinstance(data, dict):
filename = (
data.get("filename") or data.get("file") or filename
)
except Exception:
pass
if (
not filename
or not isinstance(filename, str)
or filename.strip() in ("", "{}", "None", "null")
):
return {
"error": "Missing or invalid required argument: 'filename'.",
"hint": "Provide the filename as a plain string, e.g. 'report.md'.",
}
filename = filename.strip()
# 2. Path resolution (lock to the current chat workspace)
target_path = workspace_dir / filename
try:
target_path = target_path.resolve()
if not str(target_path).startswith(str(workspace_dir.resolve())):
return {"error": "Access denied: the file must be inside the current chat workspace."}
except Exception as e:
return {"error": f"Path validation failed: {e}"}
if not target_path.exists() or not target_path.is_file():
return {
"error": f"File '{filename}' not found in the chat workspace. Make sure you saved it to the current directory (.)."
}
# 3. Upload via the API (S3 compatible)
api_success = False
file_id = None
safe_filename = filename
token = None
if __request__:
auth_header = __request__.headers.get("Authorization")
if auth_header and auth_header.startswith("Bearer "):
token = auth_header.split(" ")[1]
if not token and "token" in __request__.cookies:
token = __request__.cookies.get("token")
if token:
try:
import aiohttp
base_url = str(__request__.base_url).rstrip("/")
upload_url = f"{base_url}/api/v1/files/"
async with aiohttp.ClientSession() as session:
with open(target_path, "rb") as f:
data = aiohttp.FormData()
data.add_field("file", f, filename=target_path.name)
import json
data.add_field(
"metadata",
json.dumps(
{
"source": "copilot_workspace_publish",
"skip_rag": True,
}
),
)
async with session.post(
upload_url,
data=data,
headers={"Authorization": f"Bearer {token}"},
) as resp:
if resp.status == 200:
api_res = await resp.json()
file_id = api_res.get("id")
safe_filename = api_res.get(
"filename", target_path.name
)
api_success = True
except Exception as e:
logger.error(f"API 上传失败: {e}")
# 4. 兜底:手动插入数据库 (仅限本地存储)
if not api_success:
file_id = str(uuid.uuid4())
safe_filename = target_path.name
dest_path = Path(UPLOAD_DIR) / f"{file_id}_{safe_filename}"
await asyncio.to_thread(shutil.copy2, target_path, dest_path)
try:
db_path = str(os.path.relpath(dest_path, DATA_DIR))
except Exception:
db_path = str(dest_path)
file_form = FileForm(
id=file_id,
filename=safe_filename,
path=db_path,
data={"status": "completed", "skip_rag": True},
meta={
"name": safe_filename,
"content_type": mimetypes.guess_type(safe_filename)[0]
or "text/plain",
"size": os.path.getsize(dest_path),
"source": "copilot_workspace_publish",
"skip_rag": True,
},
)
await asyncio.to_thread(Files.insert_new_file, user_id, file_form)
# 5. Return the result
download_url = f"/api/v1/files/{file_id}/content"
return {
"file_id": file_id,
"filename": safe_filename,
"download_url": download_url,
"message": "File published successfully.",
"hint": f"Link: [Download {safe_filename}]({download_url})",
}
except Exception as e:
return {"error": str(e)}
return define_tool(
name="publish_file_from_workspace",
description="将你在本地工作区创建的文件转换为可下载的 URL。请在完成文件写入当前目录后再使用此工具。",
params_type=PublishFileParams,
)(publish_file_from_workspace)
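The local-storage fallback above copies the file into `UPLOAD_DIR` under a `<uuid>_<name>` key and records a path relative to `DATA_DIR`. A self-contained sketch of that staging step (hypothetical `stage_upload` helper; not the plugin's code):

```python
# Minimal sketch (names hypothetical) of the local-storage fallback: copy the
# workspace file into the upload dir as "<uuid>_<name>" and store a path
# relative to the data dir when one can be computed.
import os
import shutil
import uuid
from pathlib import Path


def stage_upload(src: Path, upload_dir: Path, data_dir: Path) -> dict:
    file_id = str(uuid.uuid4())
    dest = upload_dir / f"{file_id}_{src.name}"
    shutil.copy2(src, dest)  # preserves timestamps and permissions
    try:
        db_path = os.path.relpath(dest, data_dir)
    except ValueError:  # e.g. different drives on Windows
        db_path = str(dest)
    return {"id": file_id, "path": db_path, "size": dest.stat().st_size}
```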
def _json_schema_to_python_type(self, schema: dict) -> Any:
if not isinstance(schema, dict):
@@ -782,12 +994,16 @@ class Pipe:
    async def _fetch_byok_models(self, uv: "Pipe.UserValves" = None) -> List[dict]:
        """Fetch BYOK models from the configured provider."""
        model_list = []
        # Determine effective settings (user > global)
        effective_base_url = (
            uv.BYOK_BASE_URL if uv else ""
        ) or self.valves.BYOK_BASE_URL
        effective_type = (uv.BYOK_TYPE if uv else "") or self.valves.BYOK_TYPE
        effective_api_key = (uv.BYOK_API_KEY if uv else "") or self.valves.BYOK_API_KEY
        effective_bearer_token = (
            uv.BYOK_BEARER_TOKEN if uv else ""
        ) or self.valves.BYOK_BEARER_TOKEN
        effective_models = (uv.BYOK_MODELS if uv else "") or self.valves.BYOK_MODELS

        if effective_base_url:
@@ -803,9 +1019,7 @@ class Pipe:
headers["anthropic-version"] = "2023-06-01" headers["anthropic-version"] = "2023-06-01"
else: else:
if effective_bearer_token: if effective_bearer_token:
headers["Authorization"] = ( headers["Authorization"] = f"Bearer {effective_bearer_token}"
f"Bearer {effective_bearer_token}"
)
elif effective_api_key: elif effective_api_key:
headers["Authorization"] = f"Bearer {effective_api_key}" headers["Authorization"] = f"Bearer {effective_api_key}"
@@ -828,7 +1042,7 @@ class Pipe:
                        for item in data:
                            if isinstance(item, dict) and "id" in item:
                                model_list.append(item["id"])
                        await self._emit_debug_log(
                            f"BYOK: fetched {len(model_list)} models from {url}"
                        )
@@ -838,8 +1052,10 @@ class Pipe:
f"BYOK: 获取模型失败 {url} (尝试 {attempt+1}/3). 状态码: {resp.status}" f"BYOK: 获取模型失败 {url} (尝试 {attempt+1}/3). 状态码: {resp.status}"
) )
except Exception as e: except Exception as e:
await self._emit_debug_log(f"BYOK: 模型获取错误 (尝试 {attempt+1}/3): {e}") await self._emit_debug_log(
f"BYOK: 模型获取错误 (尝试 {attempt+1}/3): {e}"
)
if attempt < 2: if attempt < 2:
await asyncio.sleep(1) await asyncio.sleep(1)
@@ -1001,6 +1217,7 @@ class Pipe:
        __user__=None,
        __event_emitter__=None,
        __event_call__=None,
        __request__=None,
    ) -> Union[str, AsyncGenerator]:
        ud = __user__[0] if isinstance(__user__, (list, tuple)) else (__user__ or {})
        uid = ud.get("id") or ud.get("user_id") or "default_user"
@@ -1057,7 +1274,14 @@ class Pipe:
        client = CopilotClient(self._build_client_config(body, uid, cid))
        try:
            await client.start()
            # Keep tool-initialization parameters in sync
            tools = await self._initialize_custom_tools(
                body=body,
                __user__=__user__,
                __event_call__=__event_call__,
                __request__=__request__,
                __metadata__=__metadata__,
            )
            prov = (
                {
                    "type": (uv.BYOK_TYPE or self.valves.BYOK_TYPE).lower() or "openai",
@@ -1162,8 +1386,11 @@ class Pipe:
        # Environment initialization (with a 24-hour cooldown)
        from datetime import datetime

        now = datetime.now().timestamp()
        if not self.__class__._env_setup_done or (
            now - self.__class__._last_update_check > 86400
        ):
            self._setup_env(debug_enabled=uv.DEBUG or self.valves.DEBUG, token=token)
        elif token:
            os.environ["GH_TOKEN"] = os.environ["GITHUB_TOKEN"] = token
@@ -1174,17 +1401,48 @@ class Pipe:
        eff_max = uv.MAX_MULTIPLIER

        # Determine keyword and provider filters
        ex_kw = [
            k.strip().lower()
            for k in (self.valves.EXCLUDE_KEYWORDS + "," + uv.EXCLUDE_KEYWORDS).split(
                ","
            )
            if k.strip()
        ]
        allowed_p = [
            p.strip().lower()
            for p in (uv.PROVIDERS if uv.PROVIDERS else self.valves.PROVIDERS).split(
                ","
            )
            if p.strip()
        ]

        # --- NEW: config-aware cache invalidation ---
        # Compute a fingerprint of the current configuration to detect changes
        current_config_str = f"{token}|{(uv.BYOK_BASE_URL if uv else '') or self.valves.BYOK_BASE_URL}|{(uv.BYOK_API_KEY if uv else '') or self.valves.BYOK_API_KEY}|{(uv.BYOK_BEARER_TOKEN if uv else '') or self.valves.BYOK_BEARER_TOKEN}"
        import hashlib

        current_config_hash = hashlib.md5(current_config_str.encode()).hexdigest()
        if (
            self._model_cache
            and self.__class__._last_byok_config_hash != current_config_hash
        ):
            self.__class__._model_cache = []
            self.__class__._last_byok_config_hash = current_config_hash

        # Refresh the model list if the cache is empty
        if not self._model_cache:
            self.__class__._last_byok_config_hash = current_config_hash
            byok_models = []
            standard_models = []

            # 1. Fetch BYOK models (personal settings take priority)
            if ((uv.BYOK_BASE_URL if uv else "") or self.valves.BYOK_BASE_URL) and (
                (uv.BYOK_API_KEY if uv else "")
                or self.valves.BYOK_API_KEY
                or (uv.BYOK_BEARER_TOKEN if uv else "")
                or self.valves.BYOK_BEARER_TOKEN
            ):
                byok_models = await self._fetch_byok_models(uv=uv)

            # 2. Fetch standard Copilot models
@@ -1194,55 +1452,91 @@ class Pipe:
                    raw_models = await c.list_models()
                    raw = raw_models if isinstance(raw_models, list) else []
                    processed = []
                    for m in raw:
                        try:
                            m_is_dict = isinstance(m, dict)
                            mid = m.get("id") if m_is_dict else getattr(m, "id", str(m))
                            bill = (
                                m.get("billing")
                                if m_is_dict
                                else getattr(m, "billing", None)
                            )
                            if bill and not isinstance(bill, dict):
                                bill = (
                                    bill.to_dict()
                                    if hasattr(bill, "to_dict")
                                    else vars(bill)
                                )
                            pol = (
                                m.get("policy")
                                if m_is_dict
                                else getattr(m, "policy", None)
                            )
                            if pol and not isinstance(pol, dict):
                                pol = (
                                    pol.to_dict()
                                    if hasattr(pol, "to_dict")
                                    else vars(pol)
                                )
                            if (pol or {}).get("state") == "disabled":
                                continue
                            cap = (
                                m.get("capabilities")
                                if m_is_dict
                                else getattr(m, "capabilities", None)
                            )
                            vis, reas, ctx, supp = False, False, None, []
                            if cap:
                                if not isinstance(cap, dict):
                                    cap = (
                                        cap.to_dict()
                                        if hasattr(cap, "to_dict")
                                        else vars(cap)
                                    )
                                s = cap.get("supports", {})
                                vis, reas = s.get("vision", False), s.get(
                                    "reasoning_effort", False
                                )
                                l = cap.get("limits", {})
                                ctx = l.get("max_context_window_tokens")
                            raw_eff = (
                                m.get("supported_reasoning_efforts")
                                if m_is_dict
                                else getattr(m, "supported_reasoning_efforts", [])
                            ) or []
                            supp = [str(e).lower() for e in raw_eff if e]
                            mult = (bill or {}).get("multiplier", 1)
                            cid = self._clean_model_id(mid)
                            processed.append(
                                {
                                    "id": f"{self.id}-{mid}",
                                    "name": (
                                        f"-{cid} ({mult}x)"
                                        if mult > 0
                                        else f"-🔥 {cid} (0x)"
                                    ),
                                    "multiplier": mult,
                                    "raw_id": mid,
                                    "source": "copilot",
                                    "provider": self._get_provider_name(m),
                                    "meta": {
                                        "capabilities": {
                                            "vision": vis,
                                            "reasoning": reas,
                                            "supported_reasoning_efforts": supp,
                                        },
                                        "context_length": ctx,
                                    },
                                }
                            )
                        except Exception:
                            continue
                    processed.sort(key=lambda x: (x["multiplier"], x["raw_id"]))
                    standard_models = processed
                    self._standard_model_ids = {m["raw_id"] for m in processed}
@@ -1254,7 +1548,9 @@ class Pipe:
            self._model_cache = standard_models + byok_models

        if not self._model_cache:
            return [
                {"id": "error", "name": "No models found. Check your Token or BYOK configuration."}
            ]
        # 3. Filter results in real time
        res = []
@@ -1262,19 +1558,21 @@ class Pipe:
            # Provider filtering
            if allowed_p and m.get("provider", "Unknown").lower() not in allowed_p:
                continue
            mid, mname = (m.get("raw_id") or m.get("id", "")).lower(), m.get(
                "name", ""
            ).lower()
            # Keyword filtering
            if any(kw in mid or kw in mname for kw in ex_kw):
                continue
            # Multiplier cap (official Copilot models only)
            if m.get("source") == "copilot":
                if float(m.get("multiplier", 1)) > (float(eff_max) + 0.0001):
                    continue
            res.append(m)

        return res if res else [{"id": "none", "name": "No models match the current filters"}]
    async def stream_response(