feat(github-copilot-sdk): v0.3.0 - unified tool bridge & dynamic MCP discovery

Major enhancements:
- Zero-config OpenWebUI Tool Bridge: automatically converts WebUI Functions to Copilot-compatible tools
- Dynamic MCP Discovery: seamlessly reads MCP servers from Admin Settings -> Connections
- High-performance async engine with optimized event-driven streaming
- Robust interoperability via dynamic Pydantic model generation
- Simplified token acquisition (web-based PAT only, removed CLI method)
- Updated configuration valves (renamed, removed legacy parameters)
- Comprehensive bilingual documentation sync
Author: fujie
Date: 2026-02-07 12:36:46 +08:00
Parent: 8e2c1b467e
Commit: f882997337
9 changed files with 1428 additions and 403 deletions
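The "dynamic Pydantic model generation" named above can be sketched as follows. This is an illustrative sketch only, not the plugin's actual code: it builds a Pydantic model at runtime from a tool's JSON-Schema parameter block, so arguments can be validated before dispatch. The tool spec and names below are hypothetical.

```python
from pydantic import create_model

# Minimal JSON-Schema type -> Python type mapping (sketch; real code would cover more).
TYPE_MAP = {"string": str, "integer": int, "number": float, "boolean": bool}

def model_from_schema(name: str, schema: dict):
    """Build a Pydantic model from a JSON-Schema 'parameters' object."""
    required = set(schema.get("required", []))
    fields = {}
    for prop, spec in schema.get("properties", {}).items():
        py_type = TYPE_MAP.get(spec.get("type", "string"), str)
        # '...' (Ellipsis) marks a required field in Pydantic.
        default = ... if prop in required else spec.get("default")
        fields[prop] = (py_type, default)
    return create_model(name, **fields)

# Hypothetical tool spec, shaped like an OpenWebUI tool's parameter schema:
spec = {
    "properties": {"city": {"type": "string"},
                   "days": {"type": "integer", "default": 1}},
    "required": ["city"],
}
Weather = model_from_schema("Weather", spec)
args = Weather(city="Berlin")  # validates and fills defaults
print(args.days)               # → 1
```

Validated arguments can then be handed to whatever callable backs the tool; invalid input raises a `ValidationError` instead of reaching the tool.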


@@ -1,26 +1,24 @@
 # GitHub Copilot SDK Pipe for OpenWebUI
-**Author:** [Fu-Jie](https://github.com/Fu-Jie/awesome-openwebui) | **Version:** 0.2.3 | **Project:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **License:** MIT
+**Author:** [Fu-Jie](https://github.com/Fu-Jie/awesome-openwebui) | **Version:** 0.3.0 | **Project:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **License:** MIT
 This is an advanced Pipe function for [OpenWebUI](https://github.com/open-webui/open-webui) that allows you to use GitHub Copilot models (such as `gpt-5`, `gpt-5-mini`, `claude-sonnet-4.5`) directly within OpenWebUI. It is built upon the official [GitHub Copilot SDK for Python](https://github.com/github/copilot-sdk), providing a native integration experience.
-## 🚀 What's New (v0.2.3)
+## 🚀 What's New (v0.3.0) - The Power of "Unified Ecosystem"
-* **🧩 Per-user Overrides**: Added user-level overrides for `REASONING_EFFORT`, `CLI_PATH`, `DEBUG`, `SHOW_THINKING`, and `MODEL_ID`.
+* **🔌 Zero-Config Tool Bridge**: Automatically transforms your existing OpenWebUI Functions (Tools) into Copilot-compatible tools. **Copilot now has total access to your entire WebUI toolset!**
-* **🧠 Thinking Output Reliability**: Thinking visibility now respects the user setting and is correctly passed into streaming.
+* **🔗 Dynamic MCP Discovery**: Seamlessly connects to MCP servers defined in **Admin Settings -> Connections**. No configuration files required—it just works.
-* **📝 Formatting Enforcement**: Added automatic formatting hints to ensure outputs are well-structured (paragraphs, lists).
+* **⚡ High-Performance Async Engine**: Background CLI updates and optimized event-driven streaming ensure lightning-fast responses without UI lag.
+* **🛡️ Robust Interoperability**: Advanced sanitization and dynamic Pydantic model generation ensure smooth integration even with complex third-party tools.
-## ✨ Core Features
+## ✨ Key Capabilities
-* **🚀 Official SDK Integration**: Built on the official SDK for stability and reliability.
+* **🌉 The Ultimate Bridge**: The first and only plugin that creates a seamless bridge between **OpenWebUI Tools** and **GitHub Copilot SDK**.
-* **🛠️ Custom Tools Support**: Example tools included (random number). Easy to extend with your own tools.
+* **🚀 Official & Native**: Built directly on the official Python SDK, providing the most stable and authentic Copilot experience.
-* **💬 Multi-turn Conversation**: Automatically concatenates history context so Copilot understands your previous messages.
+* **🌊 Advanced Streaming (Thought Process)**: Supports full model reasoning/thinking display with typewriter effects.
-* **🌊 Streaming Output**: Supports typewriter effect for fast responses.
+* **🖼️ Intelligent Multimodal**: Full support for images and attachments, enabling Copilot to "see" your workspace.
-* **🖼 Multimodal Support**: Supports image uploads, automatically converting them to attachments for Copilot (requires model support).
+* **🛠 Effortless Setup**: Automatic CLI detection, version enforcement, and dependency management.
-* **🛠️ Zero-config Installation**: Automatically detects and downloads the GitHub Copilot CLI, ready to use out of the box.
+* **🔑 Dual-Layer Security**: Supports secure OAuth flow for Chat and standard PAT for extended MCP capabilities.
-* **🔑 Secure Authentication**: Supports Fine-grained Personal Access Tokens for minimized permissions.
-* **🐛 Debug Mode**: Built-in detailed log output (browser console) for easy troubleshooting.
-* **⚠️ Single Node Only**: Due to local session storage, this plugin currently supports single-node OpenWebUI deployment or multi-node with sticky sessions enabled.
 ## 📦 Installation & Usage
@@ -38,13 +36,11 @@ Find "GitHub Copilot" in the function list and click the **⚙️ (Valves)** ico
 | Parameter | Description | Default |
 | :--- | :--- | :--- |
-| **GH_TOKEN** | **(Required)** Your GitHub Token. | - |
+| **GH_TOKEN** | **(Required)** GitHub Access Token (PAT or OAuth Token). Used for Chat access. | - |
-| **MODEL_ID** | The model name to use. Recommended `gpt-5-mini` or `gpt-5`. | `gpt-5-mini` |
-| **CLI_PATH** | Path to the Copilot CLI. Will download automatically if not found. | `/usr/local/bin/copilot` |
 | **DEBUG** | Whether to enable debug logs (output to browser console). | `False` |
 | **LOG_LEVEL** | Copilot CLI log level: none, error, warning, info, debug, all. | `error` |
 | **SHOW_THINKING** | Show model reasoning/thinking process (requires streaming + model support). | `True` |
-| **SHOW_WORKSPACE_INFO** | Show session workspace path and summary in debug mode. | `True` |
+| **COPILOT_CLI_VERSION** | Specific Copilot CLI version to install/enforce. | `0.0.405` |
 | **EXCLUDE_KEYWORDS** | Exclude models containing these keywords (comma separated). | - |
 | **WORKSPACE_DIR** | Restricted workspace directory for file operations. | - |
 | **INFINITE_SESSION** | Enable Infinite Sessions (automatic context compaction). | `True` |
@@ -52,10 +48,10 @@ Find "GitHub Copilot" in the function list and click the **⚙️ (Valves)** ico
 | **BUFFER_THRESHOLD** | Buffer exhaustion threshold (0.0-1.0). | `0.95` |
 | **TIMEOUT** | Timeout for each stream chunk (seconds). | `300` |
 | **CUSTOM_ENV_VARS** | Custom environment variables (JSON format). | - |
-| **REASONING_EFFORT** | Reasoning effort level: low, medium, high. `xhigh` is supported for gpt-5.2-codex. | `medium` |
+| **REASONING_EFFORT** | Reasoning effort level: low, medium, high. `xhigh` is supported for some models. | `medium` |
 | **ENFORCE_FORMATTING** | Add formatting instructions to system prompt for better readability. | `True` |
-| **ENABLE_TOOLS** | Enable custom tools (example: random number). | `False` |
+| **ENABLE_MCP_SERVER** | Enable direct MCP client connection (recommended). | `True` |
-| **AVAILABLE_TOOLS** | Available tools: 'all' or comma-separated list. | `all` |
+| **ENABLE_OPENWEBUI_TOOLS** | Enable OpenWebUI Tools (includes defined and server tools). | `True` |
 #### User Valves (per-user overrides)
@@ -63,37 +59,26 @@ These optional settings can be set per user (overrides global Valves):
 | Parameter | Description | Default |
 | :--- | :--- | :--- |
-| **GH_TOKEN** | Personal GitHub Token (overrides global setting). | - |
 | **REASONING_EFFORT** | Reasoning effort level (low/medium/high/xhigh). | - |
-| **CLI_PATH** | Custom path to Copilot CLI. | - |
 | **DEBUG** | Enable technical debug logs. | `False` |
-| **SHOW_THINKING** | Show model reasoning/thinking process (requires streaming + model support). | `True` |
+| **SHOW_THINKING** | Show model reasoning/thinking process. | `True` |
-| **MODEL_ID** | Custom model ID. | - |
+| **ENABLE_OPENWEBUI_TOOLS** | Enable OpenWebUI Tools (overrides global). | `True` |
+| **ENABLE_MCP_SERVER** | Enable MCP server loading (overrides global). | `True` |
+| **ENFORCE_FORMATTING** | Enforce formatting guidelines (overrides global). | `True` |
-### 3. Using Custom Tools (🆕 Optional)
+### 3. Get Token
-This pipe includes **1 example tool** to demonstrate tool calling:
+To use GitHub Copilot, you need a GitHub Personal Access Token (PAT) with appropriate permissions.
-* **🎲 generate_random_number**: Generate random integers
+**Steps to generate your token:**
-**To enable:**
-1. Set `ENABLE_TOOLS: true` in Valves
-2. Try: "Give me a random number"
-**📚 For detailed usage and creating your own tools, see [TOOLS_USAGE.md](https://github.com/Fu-Jie/awesome-openwebui/blob/main/plugins/debug/github-copilot-sdk/guides/TOOLS_USAGE.md)**
-### 4. Get GH_TOKEN
-For security, it is recommended to use a **Fine-grained Personal Access Token**:
 1. Visit [GitHub Token Settings](https://github.com/settings/tokens?type=beta).
-2. Click **Generate new token**.
+2. Click **Generate new token (fine-grained)**.
-3. **Repository access**: Select **Public repositories** (Required to access Copilot permissions).
+3. **Repository access**: Select **Public Repositories** (simplest) or **All repositories**.
 4. **Permissions**:
+   * If you chose **All repositories**, you must click **Account permissions**.
-   * Click **Account permissions**.
+   * Find **Copilot Requests**, and select **Access**.
-   * Find **Copilot Requests** (It defaults to **Read-only**, no selection needed).
 5. Generate and copy the Token.
 ## 📋 Dependencies
@@ -103,17 +88,12 @@ This Pipe will automatically attempt to install the following dependencies:
 * `github-copilot-sdk` (Python package)
 * `github-copilot-cli` (Binary file, installed via official script)
-## ⚠️ FAQ
+## Troubleshooting ❓
-* **Stuck on "Waiting..."**:
+* **Images and Multimodal Usage**:
-  * Check if `GH_TOKEN` is correct and has `Copilot Requests` permission.
-* **Images not recognized**:
   * Ensure `MODEL_ID` is a model that supports multimodal input.
 * **Thinking not shown**:
   * Ensure **streaming is enabled** and the selected model supports reasoning output.
-* **CLI Installation Failed**:
-  * Ensure the OpenWebUI container has internet access.
-  * You can manually download the CLI and specify `CLI_PATH` in Valves.
 ## 📄 License
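The `CUSTOM_ENV_VARS` valve in the table above expects a JSON object. A minimal sketch, assuming a flat string-to-string map; the variable names are illustrative, not ones the plugin defines:

```json
{
  "HTTP_PROXY": "http://proxy.local:3128",
  "NO_PROXY": "localhost,127.0.0.1"
}
```

Each key/value pair would be injected into the Copilot CLI's environment when the session starts.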


@@ -1,26 +1,24 @@
 # GitHub Copilot SDK 官方管道
-**作者:** [Fu-Jie](https://github.com/Fu-Jie/awesome-openwebui) | **版本:** 0.2.3 | **项目:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **许可证:** MIT
+**作者:** [Fu-Jie](https://github.com/Fu-Jie/awesome-openwebui) | **版本:** 0.3.0 | **项目:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **许可证:** MIT
 这是一个用于 [OpenWebUI](https://github.com/open-webui/open-webui) 的高级 Pipe 函数,允许你直接在 OpenWebUI 中使用 GitHub Copilot 模型(如 `gpt-5`, `gpt-5-mini`, `claude-sonnet-4.5`)。它基于官方 [GitHub Copilot SDK for Python](https://github.com/github/copilot-sdk) 构建,提供了原生级的集成体验。
-## 🚀 最新特性 (v0.2.3)
+## 🚀 最新特性 (v0.3.0) - "统一生态"的力量
-* **🧩 用户级覆盖**:新增 `REASONING_EFFORT`、`CLI_PATH`、`DEBUG`、`SHOW_THINKING`、`MODEL_ID` 的用户级覆盖。
+* **🔌 零配置工具桥接 (Unified Tool Bridge)**: 自动将您现有的 OpenWebUI Functions (工具) 转换为 Copilot 兼容工具。**Copilot 现在可以无缝调用您手头所有的 WebUI 工具!**
-* **🧠 思考输出可靠性**:思考显示会遵循用户设置,并正确传递到流式输出中。
+* **🔗 动态 MCP 自动发现**: 直接联动 OpenWebUI **管理面板 -> 连接**。无需编写任何配置文件,即插即用,瞬间扩展 Copilot 能力边界。
-* **📝 格式化输出增强**:自动优化输出格式(段落、列表),并解决了在某些界面下显示过于紧凑的问题。
+* **⚡ 高性能异步引擎**: 异步 CLI 更新检查与高度优化的事件驱动流式处理,确保对话毫秒级响应。
+* **🛡️ 卓越的兼容性**: 独创的动态 Pydantic 模型生成技术,确保复杂工具参数在 Copilot 端也能得到精准验证。
-## ✨ 核心特性
+## ✨ 核心能力
-* **🚀 官方 SDK 集成**:基于官方 SDK,稳定可靠。
+* **🌉 强大的生态桥接**: 首个且唯一完美打通 **OpenWebUI Tools** 与 **GitHub Copilot SDK** 的插件。
-* **🛠️ 自定义工具支持**:内置示例工具(随机数)。易于扩展自定义工具。
+* **🚀 官方原生体验**: 基于官方 Python SDK 构建,提供最稳定、最纯正的 Copilot 交互体验。
-* **💬 多轮对话支持**:自动拼接历史上下文,Copilot 能理解你的前文。
+* **🌊 深度推理展示**: 完整支持模型思考过程 (Thinking Process) 的流式渲染。
-* **🌊 流式输出 (Streaming)**:支持打字机效果,响应迅速。
+* **🖼️ 智能多模态**: 支持图像识别与附件上传,让 Copilot 拥有视觉能力。
-* **🖼 多模态支持**:支持上传图片,自动转换为附件发送给 Copilot(需模型支持)。
+* **🛠 极简部署流程**: 自动检测环境、自动下载 CLI、自动管理依赖,全自动化开箱即用。
-* **🛠️ 零配置安装**:自动检测并下载 GitHub Copilot CLI,开箱即用。
+* **🔑 安全认证体系**: 完美支持 OAuth 授权与 PAT 模式,兼顾便捷与安全性。
-* **🔑 安全认证**:支持 Fine-grained Personal Access Tokens,权限最小化。
-* **🐛 调试模式**:内置详细的日志输出(浏览器控制台),方便排查问题。
-* **⚠️ 仅支持单节点**:由于会话状态存储在本地,本插件目前仅支持 OpenWebUI 单节点部署,或开启了会话粘性 (Sticky Session) 的多节点集群。
 ## 📦 安装与使用
@@ -38,24 +36,22 @@
 | 参数 | 说明 | 默认值 |
 | :--- | :--- | :--- |
-| **GH_TOKEN** | **(必填)** 你的 GitHub Token。 | - |
+| **GH_TOKEN** | **(必填)** GitHub 访问令牌 (PAT 或 OAuth Token)。用于聊天。 | - |
-| **MODEL_ID** | 使用的模型名称。推荐 `gpt-5-mini` 或 `gpt-5`。 | `gpt-5-mini` |
-| **CLI_PATH** | Copilot CLI 的路径。如果未找到会自动下载。 | `/usr/local/bin/copilot` |
 | **DEBUG** | 是否开启调试日志(输出到浏览器控制台)。 | `False` |
 | **LOG_LEVEL** | Copilot CLI 日志级别: none, error, warning, info, debug, all。 | `error` |
 | **SHOW_THINKING** | 是否显示模型推理/思考过程(需开启流式 + 模型支持)。 | `True` |
-| **SHOW_WORKSPACE_INFO** | 在调试模式下显示会话工作空间路径和摘要。 | `True` |
+| **COPILOT_CLI_VERSION** | 指定安装/强制使用的 Copilot CLI 版本。 | `0.0.405` |
 | **EXCLUDE_KEYWORDS** | 排除包含这些关键词的模型(逗号分隔)。 | - |
 | **WORKSPACE_DIR** | 文件操作的受限工作目录。 | - |
 | **INFINITE_SESSION** | 启用无限会话(自动上下文压缩)。 | `True` |
 | **COMPACTION_THRESHOLD** | 后台压缩阈值 (0.0-1.0)。 | `0.8` |
 | **BUFFER_THRESHOLD** | 缓冲耗尽阈值 (0.0-1.0)。 | `0.95` |
-| **TIMEOUT** | 流式数据块超时时间 (秒)。 | `300` |
+| **TIMEOUT** | 每个流式分块超时(秒)。 | `300` |
 | **CUSTOM_ENV_VARS** | 自定义环境变量 (JSON 格式)。 | - |
-| **ENABLE_TOOLS** | 启用自定义工具 (示例:随机数)。 | `False` |
+| **REASONING_EFFORT** | 推理强度级别: low, medium, high。`xhigh` 仅部分模型支持。 | `medium` |
-| **AVAILABLE_TOOLS** | 可用工具: 'all' 或逗号分隔列表。 | `all` |
+| **ENFORCE_FORMATTING** | 在系统提示词中添加格式化指导。 | `True` |
-| **REASONING_EFFORT** | 推理强度级别:low, medium, high。`gpt-5.2-codex` 额外支持 `xhigh`。 | `medium` |
+| **ENABLE_MCP_SERVER** | 启用直接 MCP 客户端连接 (建议)。 | `True` |
-| **ENFORCE_FORMATTING** | 是否强制添加格式化指导,以提高输出可读性。 | `True` |
+| **ENABLE_OPENWEBUI_TOOLS** | 启用 OpenWebUI 工具 (包括自定义和服务器工具)。 | `True` |
 #### 用户 Valves(按用户覆盖)
@@ -63,38 +59,27 @@
 | 参数 | 说明 | 默认值 |
 | :--- | :--- | :--- |
-| **GH_TOKEN** | 个人 GitHub Token(覆盖全局设置)。 | - |
 | **REASONING_EFFORT** | 推理强度级别(low/medium/high/xhigh)。 | - |
-| **CLI_PATH** | 自定义 Copilot CLI 路径。 | - |
 | **DEBUG** | 是否启用技术调试日志。 | `False` |
-| **SHOW_THINKING** | 是否显示思考过程(需开启流式 + 模型支持)。 | `True` |
+| **SHOW_THINKING** | 是否显示思考过程。 | `True` |
-| **MODEL_ID** | 自定义模型 ID。 | - |
+| **ENABLE_OPENWEBUI_TOOLS** | 启用 OpenWebUI 工具(覆盖全局设置)。 | `True` |
+| **ENABLE_MCP_SERVER** | 启用动态 MCP 服务器加载(覆盖全局设置)。 | `True` |
+| **ENFORCE_FORMATTING** | 强制启用格式化指导(覆盖全局设置)。 | `True` |
-### 3. 使用自定义工具 (🆕 可选)
+### 3. 获取 Token
-本 Pipe 内置了 **1 个示例工具**来展示工具调用功能:
+要使用 GitHub Copilot,您需要一个具有适当权限的 GitHub 个人访问令牌 (PAT)。
-* **🎲 generate_random_number**:生成随机整数
+**获取步骤:**
-**启用方法:**
-1. 在 Valves 中设置 `ENABLE_TOOLS: true`
-2. 尝试问:"给我一个随机数"
-**📚 详细使用说明和创建自定义工具,请参阅 [TOOLS_USAGE.md](https://github.com/Fu-Jie/awesome-openwebui/blob/main/plugins/debug/github-copilot-sdk/guides/TOOLS_USAGE.md)**
-### 4. 获取 GH_TOKEN
-为了安全起见,推荐使用 **Fine-grained Personal Access Token**:
-1. 访问 [GitHub Token Settings](https://github.com/settings/tokens?type=beta)。
-2. 点击 **Generate new token**。
-3. **Repository access**: 选择 **Public repositories**(必须选择此项才能看到 Copilot 权限)。
+1. 访问 [GitHub 令牌设置](https://github.com/settings/tokens?type=beta)。
+2. 点击 **Generate new token (fine-grained)**。
+3. **Repository access**: 选择 **Public Repositories** (最简单) 或 **All repositories**。
 4. **Permissions**:
+   * 如果您选择了 **All repositories**,则必须点击 **Account permissions**。
-   * 点击 **Account permissions**。
+   * 找到 **Copilot Requests**,选择 **Access**。
-   * 找到 **Copilot Requests**(默认即为 **Read-only**,无需手动修改)。
-5. 生成并复制 Token。
+5. 生成并复制令牌。
 ## 📋 依赖说明
@@ -103,17 +88,12 @@
 * `github-copilot-sdk` (Python 包)
 * `github-copilot-cli` (二进制文件,通过官方脚本安装)
-## ⚠️ 常见问题
+## 故障排除 (Troubleshooting) ❓
-* **一直显示 "Waiting..."**:
+* **图片及多模态使用说明**:
-  * 检查 `GH_TOKEN` 是否正确且拥有 `Copilot Requests` 权限。
-* **图片无法识别**:
   * 确保 `MODEL_ID` 是支持多模态的模型。
 * **看不到思考过程**:
   * 确认已开启**流式输出**,且所选模型支持推理输出。
-* **CLI 安装失败**:
-  * 确保 OpenWebUI 容器有外网访问权限。
-  * 你可以手动下载 CLI 并挂载到容器中,然后在 Valves 中指定 `CLI_PATH`。
 ## 📄 许可证


@@ -15,7 +15,7 @@ Pipes allow you to:
 ## Available Pipe Plugins
-- [GitHub Copilot SDK](github-copilot-sdk.md) (v0.1.1) - Official GitHub Copilot SDK integration. Supports dynamic models, multi-turn conversation, streaming, multimodal input, and infinite sessions.
+- [GitHub Copilot SDK](github-copilot-sdk.md) (v0.3.0) - Official GitHub Copilot SDK integration. Features **zero-config OpenWebUI Tool Bridge** and **dynamic MCP discovery**. Supports streaming, multimodal, and infinite sessions.
 ---
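The `EXCLUDE_KEYWORDS` valve documented in the README diffs above takes a comma-separated keyword list. A hedged sketch of how such model filtering typically works; the function name and exact matching rules are assumptions, not the plugin's actual code:

```python
def filter_models(model_ids, exclude_keywords: str):
    """Drop any model whose id contains one of the comma-separated keywords."""
    keywords = [k.strip().lower() for k in exclude_keywords.split(",") if k.strip()]
    return [m for m in model_ids if not any(k in m.lower() for k in keywords)]

models = ["gpt-5", "gpt-5-mini", "claude-sonnet-4.5"]
print(filter_models(models, "mini, claude"))  # → ['gpt-5']
```

Matching case-insensitively on substrings keeps the valve forgiving about how users write keywords.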


@@ -15,7 +15,7 @@ Pipes 可以用于:
 ## 可用的 Pipe 插件
-- [GitHub Copilot SDK](github-copilot-sdk.zh.md) (v0.1.1) - GitHub Copilot SDK 官方集成。支持动态模型、多轮对话、流式输出、图片输入及无限会话。
+- [GitHub Copilot SDK](github-copilot-sdk.zh.md) (v0.3.0) - GitHub Copilot SDK 官方集成。**零配置工具桥接**与**动态 MCP 发现**。支持流式输出、多模态及无限会话。
 ---


@@ -1,26 +1,24 @@
 # GitHub Copilot SDK Pipe for OpenWebUI
-**Author:** [Fu-Jie](https://github.com/Fu-Jie/awesome-openwebui) | **Version:** 0.2.3 | **Project:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **License:** MIT
+**Author:** [Fu-Jie](https://github.com/Fu-Jie/awesome-openwebui) | **Version:** 0.3.0 | **Project:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **License:** MIT
 This is an advanced Pipe function for [OpenWebUI](https://github.com/open-webui/open-webui) that allows you to use GitHub Copilot models (such as `gpt-5`, `gpt-5-mini`, `claude-sonnet-4.5`) directly within OpenWebUI. It is built upon the official [GitHub Copilot SDK for Python](https://github.com/github/copilot-sdk), providing a native integration experience.
-## 🚀 What's New (v0.2.3)
+## 🚀 What's New (v0.3.0) - The Power of "Unified Ecosystem"
-* **🧩 Per-user Overrides**: Added user-level overrides for `REASONING_EFFORT`, `CLI_PATH`, `DEBUG`, `SHOW_THINKING`, and `MODEL_ID`.
+* **🔌 Zero-Config Tool Bridge**: Automatically transforms your existing OpenWebUI Functions (Tools) into Copilot-compatible tools. **Copilot now has total access to your entire WebUI toolset!**
-* **🧠 Thinking Output Reliability**: Thinking visibility now respects the user setting and is correctly passed into streaming.
+* **🔗 Dynamic MCP Discovery**: Seamlessly connects to MCP servers defined in **Admin Settings -> Connections**. No configuration files required—it just works.
-* **📝 Formatting Enforcement**: Added automatic formatting hints to ensure outputs are well-structured (paragraphs, lists) and addressed "tight output" issues.
+* **⚡ High-Performance Async Engine**: Background CLI updates and optimized event-driven streaming ensure lightning-fast responses without UI lag.
+* **🛡️ Robust Interoperability**: Advanced sanitization and dynamic Pydantic model generation ensure smooth integration even with complex third-party tools.
-## ✨ Core Features
+## ✨ Key Capabilities
-* **🚀 Official SDK Integration**: Built on the official SDK for stability and reliability.
+* **🌉 The Ultimate Bridge**: The first and only plugin that creates a seamless bridge between **OpenWebUI Tools** and **GitHub Copilot SDK**.
-* **🛠️ Custom Tools Support**: Example tools included (random number). Easy to extend with your own tools.
+* **🚀 Official & Native**: Built directly on the official Python SDK, providing the most stable and authentic Copilot experience.
-* **💬 Multi-turn Conversation**: Automatically concatenates history context so Copilot understands your previous messages.
+* **🌊 Advanced Streaming (Thought Process)**: Supports full model reasoning/thinking display with typewriter effects.
-* **🌊 Streaming Output**: Supports typewriter effect for fast responses.
+* **🖼️ Intelligent Multimodal**: Full support for images and attachments, enabling Copilot to "see" your workspace.
-* **🖼 Multimodal Support**: Supports image uploads, automatically converting them to attachments for Copilot (requires model support).
+* **🛠 Effortless Setup**: Automatic CLI detection, version enforcement, and dependency management.
-* **🛠️ Zero-config Installation**: Automatically detects and downloads the GitHub Copilot CLI, ready to use out of the box.
+* **🔑 Dual-Layer Security**: Supports secure OAuth flow for Chat and standard PAT for extended MCP capabilities.
-* **🔑 Secure Authentication**: Supports Fine-grained Personal Access Tokens for minimized permissions.
-* **🐛 Debug Mode**: Built-in detailed log output (browser console) for easy troubleshooting.
-* **⚠️ Single Node Only**: Due to local session storage, this plugin currently supports single-node OpenWebUI deployment or multi-node with sticky sessions enabled.
 ## Installation & Configuration
@@ -38,13 +36,11 @@ Find "GitHub Copilot" in the function list and click the **⚙️ (Valves)** ico
 | Parameter | Description | Default |
 | :--- | :--- | :--- |
-| **GH_TOKEN** | **(Required)** Your GitHub Token. | - |
+| **GH_TOKEN** | **(Required)** GitHub Access Token (PAT or OAuth Token). Used for Chat access. | - |
-| **MODEL_ID** | The model name to use. | `gpt-5-mini` |
-| **CLI_PATH** | Path to the Copilot CLI. Will download automatically if not found. | `/usr/local/bin/copilot` |
 | **DEBUG** | Whether to enable debug logs (output to browser console). | `False` |
 | **LOG_LEVEL** | Copilot CLI log level: none, error, warning, info, debug, all. | `error` |
 | **SHOW_THINKING** | Show model reasoning/thinking process (requires streaming + model support). | `True` |
-| **SHOW_WORKSPACE_INFO** | Show session workspace path and summary in debug mode. | `True` |
+| **COPILOT_CLI_VERSION** | Specific Copilot CLI version to install/enforce. | `0.0.405` |
 | **EXCLUDE_KEYWORDS** | Exclude models containing these keywords (comma separated). | - |
 | **WORKSPACE_DIR** | Restricted workspace directory for file operations. | - |
 | **INFINITE_SESSION** | Enable Infinite Sessions (automatic context compaction). | `True` |
@@ -52,10 +48,10 @@ Find "GitHub Copilot" in the function list and click the **⚙️ (Valves)** ico
 | **BUFFER_THRESHOLD** | Buffer exhaustion threshold (0.0-1.0). | `0.95` |
 | **TIMEOUT** | Timeout for each stream chunk (seconds). | `300` |
 | **CUSTOM_ENV_VARS** | Custom environment variables (JSON format). | - |
-| **REASONING_EFFORT** | Reasoning effort level: low, medium, high. `xhigh` is supported for gpt-5.2-codex. | `medium` |
+| **REASONING_EFFORT** | Reasoning effort level: low, medium, high. `xhigh` is supported for some models. | `medium` |
 | **ENFORCE_FORMATTING** | Add formatting instructions to system prompt for better readability. | `True` |
-| **ENABLE_TOOLS** | Enable custom tools (example: random number). | `False` |
+| **ENABLE_MCP_SERVER** | Enable direct MCP client connection (recommended). | `True` |
-| **AVAILABLE_TOOLS** | Available tools: 'all' or comma-separated list. | `all` |
+| **ENABLE_OPENWEBUI_TOOLS** | Enable OpenWebUI Tools (includes defined and server tools). | `True` |
 #### User Valves (per-user overrides)
@@ -63,41 +59,30 @@ These optional settings can be set per user (overrides global Valves):
 | Parameter | Description | Default |
 | :--- | :--- | :--- |
-| **GH_TOKEN** | Personal GitHub Token (overrides global setting). | - |
 | **REASONING_EFFORT** | Reasoning effort level (low/medium/high/xhigh). | - |
-| **CLI_PATH** | Custom path to Copilot CLI. | - |
 | **DEBUG** | Enable technical debug logs. | `False` |
-| **SHOW_THINKING** | Show model reasoning/thinking process (requires streaming + model support). | `True` |
+| **SHOW_THINKING** | Show model reasoning/thinking process. | `True` |
-| **MODEL_ID** | Custom model ID. | - |
+| **ENABLE_OPENWEBUI_TOOLS** | Enable OpenWebUI Tools (overrides global). | `True` |
+| **ENABLE_MCP_SERVER** | Enable MCP server loading (overrides global). | `True` |
+| **ENFORCE_FORMATTING** | Enforce formatting guidelines (overrides global). | `True` |
 ## ⭐ Support
 If this plugin has been useful, a star on [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) is a big motivation for me. Thank you for the support.
-## 🧩 Others
+### Get Token
-### Using Custom Tools (Optional)
+To use GitHub Copilot, you need a GitHub Personal Access Token (PAT) with appropriate permissions.
-This pipe includes **1 example tool** to demonstrate tool calling:
+**Steps to generate your token:**
-* **🎲 generate_random_number**: Generate random integers
-**To enable:**
-1. Set `ENABLE_TOOLS: true` in Valves
-2. Try: "Give me a random number"
-**📚 For detailed usage and creating your own tools, see [TOOLS_USAGE.md](TOOLS_USAGE.md)**
-### Get GH_TOKEN
-For security, it is recommended to use a **Fine-grained Personal Access Token**:
 1. Visit [GitHub Token Settings](https://github.com/settings/tokens?type=beta).
-2. Click **Generate new token**.
+2. Click **Generate new token (fine-grained)**.
-3. **Repository access**: Select **Public repositories** (Required to access Copilot permissions).
+3. **Repository access**: Select **Public Repositories** (simplest) or **All repositories**.
 4. **Permissions**:
-   * Click **Account permissions**.
+   * If you chose **All repositories**, you must click **Account permissions**.
-   * Find **Copilot Requests** (It defaults to **Read-only**, no selection needed).
+   * Find **Copilot Requests**, and select **Access**.
 5. Generate and copy the Token.
 ## 📋 Dependencies
@@ -109,15 +94,10 @@ This Pipe will automatically attempt to install the following dependencies:
 ## Troubleshooting ❓
-* **Stuck on "Waiting..."**:
-  * Check if `GH_TOKEN` is correct and has `Copilot Requests` permission.
 * **Images not recognized**:
   * Ensure `MODEL_ID` is a model that supports multimodal input.
 * **Thinking not shown**:
   * Ensure **streaming is enabled** and the selected model supports reasoning output.
-* **CLI Installation Failed**:
-  * Ensure the OpenWebUI container has internet access.
-  * You can manually download the CLI and specify `CLI_PATH` in Valves.
 ## Changelog


@@ -1,26 +1,24 @@
 # GitHub Copilot SDK 官方管道
-**作者:** [Fu-Jie](https://github.com/Fu-Jie/awesome-openwebui) | **版本:** 0.2.3 | **项目:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **许可证:** MIT
+**作者:** [Fu-Jie](https://github.com/Fu-Jie/awesome-openwebui) | **版本:** 0.3.0 | **项目:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **许可证:** MIT
 这是一个用于 [OpenWebUI](https://github.com/open-webui/open-webui) 的高级 Pipe 函数,允许你直接在 OpenWebUI 中使用 GitHub Copilot 模型(如 `gpt-5`, `gpt-5-mini`, `claude-sonnet-4.5`)。它基于官方 [GitHub Copilot SDK for Python](https://github.com/github/copilot-sdk) 构建,提供了原生级的集成体验。
-## 🚀 最新特性 (v0.2.3)
+## 🚀 最新特性 (v0.3.0) - "统一生态"的力量
-* **🧩 用户级覆盖**:新增 `REASONING_EFFORT`、`CLI_PATH`、`DEBUG`、`SHOW_THINKING`、`MODEL_ID` 的用户级覆盖。
+* **🔌 零配置工具桥接 (Unified Tool Bridge)**: 自动将您现有的 OpenWebUI Functions (工具) 转换为 Copilot 兼容工具。**Copilot 现在可以无缝调用您手头所有的 WebUI 工具!**
-* **🧠 思考输出可靠性**:思考显示会遵循用户设置,并正确传递到流式输出中。
+* **🔗 动态 MCP 自动发现**: 直接联动 OpenWebUI **管理面板 -> 连接**。无需编写任何配置文件,即插即用,瞬间扩展 Copilot 能力边界。
-* **📝 格式化输出增强**:自动优化输出格式(短句、段落、列表),并解决了在某些界面下显示过于紧凑的问题。
+* **⚡ 高性能异步引擎**: 异步 CLI 更新检查与高度优化的事件驱动流式处理,确保对话毫秒级响应。
+* **🛡️ 卓越的兼容性**: 独创的动态 Pydantic 模型生成技术,确保复杂工具参数在 Copilot 端也能得到精准验证。
-## ✨ 核心特性
+## ✨ 核心能力
-* **🚀 官方 SDK 集成**:基于官方 SDK,稳定可靠。
+* **🌉 强大的生态桥接**: 首个且唯一完美打通 **OpenWebUI Tools** 与 **GitHub Copilot SDK** 的插件。
-* **🛠️ 自定义工具支持**:内置示例工具(随机数)。易于扩展自定义工具。
+* **🚀 官方原生体验**: 基于官方 Python SDK 构建,提供最稳定、最纯正的 Copilot 交互体验。
-* **💬 多轮对话支持**:自动拼接历史上下文,Copilot 能理解你的前文。
+* **🌊 深度推理展示**: 完整支持模型思考过程 (Thinking Process) 的流式渲染。
-* **🌊 流式输出 (Streaming)**:支持打字机效果,响应迅速。
+* **🖼️ 智能多模态**: 支持图像识别与附件上传,让 Copilot 拥有视觉能力。
-* **🖼 多模态支持**:支持上传图片,自动转换为附件发送给 Copilot(需模型支持)。
+* **🛠 极简部署流程**: 自动检测环境、自动下载 CLI、自动管理依赖,全自动化开箱即用。
-* **🛠️ 零配置安装**:自动检测并下载 GitHub Copilot CLI,开箱即用。
+* **🔑 安全认证体系**: 完美支持 OAuth 授权与 PAT 模式,兼顾便捷与安全性。
-* **🔑 安全认证**:支持 Fine-grained Personal Access Tokens,权限最小化。
-* **🐛 调试模式**:内置详细的日志输出(浏览器控制台),方便排查问题。
-* **⚠️ 仅支持单节点**:由于会话状态存储在本地,本插件目前仅支持 OpenWebUI 单节点部署,或开启了会话粘性 (Sticky Session) 的多节点集群。
 ## 安装与配置
@@ -38,24 +36,22 @@
| 参数 | 说明 | 默认值 | | 参数 | 说明 | 默认值 |
| :--- | :--- | :--- | | :--- | :--- | :--- |
| **GH_TOKEN** | **(必填)** 你的 GitHub Token。 | - | | **GH_TOKEN** | **(必填)** GitHub 访问令牌 (PAT 或 OAuth Token)。用于聊天。 | - |
| **MODEL_ID** | 使用的模型名称。 | `gpt-5-mini` |
| **CLI_PATH** | Copilot CLI 的路径。如果未找到会自动下载。 | `/usr/local/bin/copilot` |
| **DEBUG** | 是否开启调试日志(输出到浏览器控制台)。 | `False` | | **DEBUG** | 是否开启调试日志(输出到浏览器控制台)。 | `False` |
| **LOG_LEVEL** | Copilot CLI 日志级别: none, error, warning, info, debug, all。 | `error` | | **LOG_LEVEL** | Copilot CLI 日志级别: none, error, warning, info, debug, all。 | `error` |
| **SHOW_THINKING** | 是否显示模型推理/思考过程(需开启流式 + 模型支持)。 | `True` | | **SHOW_THINKING** | 是否显示模型推理/思考过程(需开启流式 + 模型支持)。 | `True` |
| **SHOW_WORKSPACE_INFO** | 在调试模式下显示会话工作空间路径和摘要。 | `True` | | **COPILOT_CLI_VERSION** | 指定安装/强制使用的 Copilot CLI 版本。 | `0.0.405` |
| **EXCLUDE_KEYWORDS** | 排除包含这些关键词的模型 (逗号分隔)。 | - | | **EXCLUDE_KEYWORDS** | 排除包含这些关键词的模型逗号分隔。 | - |
| **WORKSPACE_DIR** | 文件操作的受限工作目录。 | - | | **WORKSPACE_DIR** | 文件操作的受限工作目录。 | - |
| **INFINITE_SESSION** | 启用无限会话 (自动上下文压缩)。 | `True` | | **INFINITE_SESSION** | 启用无限会话自动上下文压缩。 | `True` |
| **COMPACTION_THRESHOLD** | 后台压缩阈值 (0.0-1.0)。 | `0.8` | | **COMPACTION_THRESHOLD** | 后台压缩阈值 (0.0-1.0)。 | `0.8` |
| **BUFFER_THRESHOLD** | 缓冲耗尽阈值 (0.0-1.0)。 | `0.95` | | **BUFFER_THRESHOLD** | 缓冲耗尽阈值 (0.0-1.0)。 | `0.95` |
| **TIMEOUT** | 流式数据块超时时间 (秒)。 | `300` | | **TIMEOUT** | 每个流式分块超时(秒)。 | `300` |
| **CUSTOM_ENV_VARS** | 自定义环境变量 (JSON 格式)。 | - | | **CUSTOM_ENV_VARS** | 自定义环境变量 (JSON 格式)。 | - |
| **ENABLE_TOOLS** | 启用自定义工具 (示例:随机数)。 | `False` | | **REASONING_EFFORT** | 推理强度级别: low, medium, high. `xhigh` 仅部分模型支持。 | `medium` |
| **AVAILABLE_TOOLS** | 可用工具: 'all' 或逗号分隔列表。 | `all` | | **ENFORCE_FORMATTING** | 在系统提示词中添加格式化指导。 | `True` |
| **REASONING_EFFORT** | 推理强度级别low, medium, high。`gpt-5.2-codex`额外支持`xhigh`。 | `medium` | | **ENABLE_MCP_SERVER** | 启用直接 MCP 客户端连接 (建议)。 | `True` |
| **ENFORCE_FORMATTING** | 是否强制添加格式化指导,以提高输出可读性。 | `True` | | **ENABLE_OPENWEBUI_TOOLS** | 启用 OpenWebUI 工具 (包括自定义和服务器工具)。 | `True` |
#### 用户 Valves按用户覆盖 #### 用户 Valves按用户覆盖
@@ -63,42 +59,31 @@
| 参数 | 说明 | 默认值 | | 参数 | 说明 | 默认值 |
| :--- | :--- | :--- | | :--- | :--- | :--- |
| **GH_TOKEN** | 个人 GitHub Token覆盖全局设置。 | - |
| **REASONING_EFFORT** | 推理强度级别low/medium/high/xhigh。 | - | | **REASONING_EFFORT** | 推理强度级别low/medium/high/xhigh。 | - |
| **CLI_PATH** | 自定义 Copilot CLI 路径。 | - |
| **DEBUG** | 是否启用技术调试日志。 | `False` | | **DEBUG** | 是否启用技术调试日志。 | `False` |
| **SHOW_THINKING** | 是否显示思考过程(需开启流式 + 模型支持)。 | `True` | | **SHOW_THINKING** | 是否显示思考过程。 | `True` |
| **MODEL_ID** | 自定义模型 ID。 | - | | **ENABLE_OPENWEBUI_TOOLS** | 启用 OpenWebUI 工具(覆盖全局设置)。 | `True` |
| **ENABLE_MCP_SERVER** | 启用动态 MCP 服务器加载(覆盖全局设置)。 | `True` |
| **ENFORCE_FORMATTING** | 强制启用格式化指导(覆盖全局设置)。 | `True` |
## ⭐ 支持 ## ⭐ 支持
如果这个插件对你有帮助,欢迎到 [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) 点个 Star这将是我持续改进的动力感谢支持。 如果这个插件对你有帮助,欢迎到 [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) 点个 Star这将是我持续改进的动力感谢支持。
## 🧩 其他 ### 获取 Token
### 使用自定义工具(可选) 要使用 GitHub Copilot您需要一个具有适当权限的 GitHub 个人访问令牌 (PAT)。
本 Pipe 内置了 **1 个示例工具**来展示工具调用功能: **获取步骤:**
* **🎲 generate_random_number**:生成随机整数 1. 访问 [GitHub 令牌设置](https://github.com/settings/tokens?type=beta)。
2. 点击 **Generate new token (fine-grained)**
**启用方法:** 3. **Repository access**: 选择 **Public Repositories** (最简单) 或 **All repositories**
1. 在 Valves 中设置 `ENABLE_TOOLS: true`
2. 尝试问:“给我一个随机数”
**📚 详细使用说明和创建自定义工具,请参阅 [TOOLS_USAGE.md](TOOLS_USAGE.md)**
### 获取 GH_TOKEN
为了安全起见,推荐使用 **Fine-grained Personal Access Token**
1. 访问 [GitHub Token Settings](https://github.com/settings/tokens?type=beta)。
2. 点击 **Generate new token**
3. **Repository access**: 选择 **Public repositories** (必须选择此项才能看到 Copilot 权限)。
4. **Permissions**: 4. **Permissions**:
* 点击 **Account permissions** * 如果您选择了 **All repositories**,则必须点击 **Account permissions**
* 找到 **Copilot Requests** (默认即为 **Read-only**,无需手动修改) * 找到 **Copilot Requests**,选择 **Access**
5. 生成并复制 Token 5. 生成并复制令牌
## 📋 依赖说明 ## 📋 依赖说明
@@ -109,13 +94,8 @@
## 故障排除 (Troubleshooting) ❓ ## 故障排除 (Troubleshooting) ❓
* **一直显示 "Waiting..."** * **图片及多模态使用说明**
* 检查 `GH_TOKEN` 是否正确且拥有 `Copilot Requests` 权限。
* **图片无法识别**
* 确保 `MODEL_ID` 是支持多模态的模型。 * 确保 `MODEL_ID` 是支持多模态的模型。
* **CLI 安装失败**
* 确保 OpenWebUI 容器有外网访问权限。
* 你可以手动下载 CLI 并挂载到容器中,然后在 Valves 中指定 `CLI_PATH`
* **看不到思考过程** * **看不到思考过程**
* 确认已开启**流式输出**,且所选模型支持推理输出。 * 确认已开启**流式输出**,且所选模型支持推理输出。
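For reference, the MCP connections discovered from the Admin Panel end up as an `mcp_servers` mapping passed to the Copilot session. A minimal sketch of the expected shape (the server ID, URL, and key below are illustrative placeholders, not real values):

```python
# Shape of the mcp_servers mapping built from Admin Panel -> Connections
# (illustrative values; the pipe generates these from your configured servers).
mcp_servers = {
    "my_search_server": {
        "type": "http",
        "url": "https://mcp.example.com/mcp",
        "headers": {"Authorization": "Bearer <your-key>"},
        "tools": ["*"],  # expose all of the server's tools by default
    }
}

print(sorted(mcp_servers["my_search_server"]))
```

Each configured connection contributes one entry; authorization headers are filled in from the connection's auth settings.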

View File

@@ -0,0 +1,187 @@
# 🛠️ Custom Tools Usage

## Overview

This pipe supports **OpenWebUI Native Tools** (Functions) and **Custom Python Tools**.

---

## 🚀 OpenWebUI Native Tools (v0.3.0)

**New in v0.3.0**: You can use any tool defined in OpenWebUI directly with Copilot.

**How to use:**

1. Go to **Workspace** -> **Tools**.
2. Create a tool (e.g. `get_weather`).
3. In the Copilot pipe's Valves, ensure `ENABLE_OPENWEBUI_TOOLS` is `True` (default).
4. Ask Copilot: "Search for the latest news" or "Check the weather".

**Note:**

- Tool names are automatically sanitized to match Copilot SDK requirements (e.g. `my.tool` -> `my_tool`).
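The sanitization rule can be sketched as a standalone function (a simplified version of what the bridge does; names that contain no usable characters fall back to a hash-based name so they remain stable across runs):

```python
import hashlib
import re


def sanitize_tool_name(name: str) -> str:
    """Keep only [a-zA-Z0-9_-]; fall back to a hash-derived name if nothing usable remains."""
    sanitized = re.sub(r"[^a-zA-Z0-9_-]", "_", name)
    if not sanitized or re.match(r"^[_.-]+$", sanitized):
        # e.g. a purely non-ASCII name collapses to underscores
        sanitized = f"tool_{hashlib.md5(name.encode('utf-8')).hexdigest()[:8]}"
    return sanitized


print(sanitize_tool_name("my.tool"))  # my_tool
```

When a name is rewritten this way, the original name is prepended to the tool description so the model still knows what the tool is.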
---
## 📦 Python Custom Tools

The example below shows how a plain Python function can be exposed to Copilot through the SDK's tool-calling feature.

### Example: `generate_random_number`

**Description:** Generate a random integer.

**Parameters:**

- `min` (optional): Minimum value (default: 1)
- `max` (optional): Maximum value (default: 100)

**Example:**
```
User: "Give me a random number between 1 and 10"
Copilot: [calls generate_random_number with min=1, max=10] "Generated random number: 7"
```
---
## ⚙️ Configuration

### Enable Tools

In the Valves configuration (v0.3.0):

```
ENABLE_OPENWEBUI_TOOLS: true
```

Copilot then sees every OpenWebUI tool the current user has access to. The pre-0.3.0 `ENABLE_TOOLS` and `AVAILABLE_TOOLS` valves were removed in this release; to limit which tools are exposed, adjust the tools' access permissions in OpenWebUI itself.
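Under the hood, the bridge turns each tool's JSON-schema parameters into a Pydantic model with `create_model`, so arguments coming from Copilot are validated before your tool runs. A simplified sketch (the `get_weather` field names are illustrative):

```python
from typing import Optional

from pydantic import Field, create_model

# Build a params model the way the bridge does: required fields use `...`,
# optional fields default to None (field names here are illustrative).
fields = {
    "city": (str, Field(..., description="City name")),
    "units": (Optional[str], Field(default=None, description="Unit system")),
}
WeatherParams = create_model("get_weather_Params", **fields)

params = WeatherParams(city="Berlin")
print(params.model_dump())  # {'city': 'Berlin', 'units': None}
```

Missing required fields or wrong types raise a validation error instead of reaching the tool body.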
---
## 🔧 How Tool Calling Works

```
1. User asks a question
2. Copilot decides if it needs a tool
3. If yes, Copilot calls the appropriate tool
4. Tool executes and returns result
5. Copilot uses the result to answer
```
### Visual Feedback

When tools are called, you'll see:

```
🔧 **Calling tool**: `generate_random_number`
✅ **Tool `generate_random_number` completed**
Generated random number: 7
```
---
## 📚 Creating Your Own Tools

Want to add your own Python tools? Follow this pattern (module-level tools):
```python
from pydantic import BaseModel, Field
from copilot import define_tool


class MyToolParams(BaseModel):
    param_name: str = Field(description="Parameter description")


@define_tool(description="Clear description of what the tool does and when to use it")
async def my_tool(params: MyToolParams) -> str:
    # Do something with the validated parameters
    result = do_something(params.param_name)  # placeholder for your logic
    return f"Result: {result}"
```
Then register it in `_initialize_custom_tools()` so it is returned alongside the bridged OpenWebUI tools. The sketch below assumes the v0.3.0 method signature (the `ENABLE_TOOLS`/`AVAILABLE_TOOLS` valves used in pre-0.3.0 examples no longer exist):

```python
async def _initialize_custom_tools(self, __user__=None, __event_call__=None):
    tools = [my_tool]  # ✅ Add your module-level tools here
    if self.valves.ENABLE_OPENWEBUI_TOOLS:
        tools.extend(
            await self._load_openwebui_tools(
                __user__=__user__, __event_call__=__event_call__
            )
        )
    return tools
```
---
## ⚠️ Important Notes

### Security

- Tools run in the same process as the pipe.
- Be careful with tools that execute code or access files.
- Always validate input parameters.

### Performance

- Tool execution is synchronous during streaming.
- Long-running tools may cause delays.
- Consider adding timeouts for external API calls.
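One way to honor the timeout advice is to wrap external calls in `asyncio.wait_for` inside the tool body (a generic sketch; `call_external_api` is a stand-in for your own client code):

```python
import asyncio


async def call_external_api() -> str:
    # Stand-in for a slow external call (illustrative).
    await asyncio.sleep(0.01)
    return "ok"


async def my_tool_body() -> str:
    # Bound the external call so one slow API cannot stall the whole stream.
    try:
        return await asyncio.wait_for(call_external_api(), timeout=10)
    except asyncio.TimeoutError:
        return "Error: external API timed out"


print(asyncio.run(my_tool_body()))  # ok
```

Returning an error string (rather than raising) lets Copilot report the failure in the conversation instead of aborting the turn.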
### Debugging

- Enable `DEBUG: true` to see tool events in the browser console.
- Check tool calls in `🔧 Calling tool` messages.
- Tool errors are displayed in the response.
---
**Version:** 0.3.0
**Last Updated:** 2026-02-05

View File

@@ -5,11 +5,12 @@
author_url: https://github.com/Fu-Jie/awesome-openwebui
funding_url: https://github.com/open-webui
openwebui_id: ce96f7b4-12fc-4ac3-9a01-875713e69359
description: Integrate GitHub Copilot SDK. Supports dynamic models, multi-turn conversation, streaming, multimodal input, infinite sessions, and frontend debug logging.
version: 0.3.0
requirements: github-copilot-sdk==0.1.22
"""

import os
import re
import json
import base64
import tempfile
@@ -17,44 +18,34 @@
import asyncio
import logging
import shutil
import subprocess
import hashlib
from pathlib import Path
from typing import Optional, Union, AsyncGenerator, List, Any, Dict
from types import SimpleNamespace
from pydantic import BaseModel, Field, create_model

# Import copilot SDK modules
from copilot import CopilotClient, define_tool

# Import Tool Server Connections and Tool System from OpenWebUI Config
from open_webui.config import TOOL_SERVER_CONNECTIONS
from open_webui.utils.tools import get_tools as get_openwebui_tools
from open_webui.models.tools import Tools
from open_webui.models.users import Users

# Setup logger
logger = logging.getLogger(__name__)
class Pipe:
class Valves(BaseModel):
GH_TOKEN: str = Field(
default="",
description="GitHub Fine-grained Token (Requires 'Copilot Requests' permission)",
)
COPILOT_CLI_VERSION: str = Field(
default="0.0.405",
description="Specific Copilot CLI version to install/enforce (e.g. '0.0.405'). Leave empty for latest.",
)
DEBUG: bool = Field(
default=False,
@@ -68,10 +59,6 @@
default=True,
description="Show model reasoning/thinking process",
)
EXCLUDE_KEYWORDS: str = Field(
default="",
description="Exclude models containing these keywords (comma separated, e.g.: codex, haiku)",
@@ -100,13 +87,14 @@
default="",
description='Custom environment variables (JSON format, e.g., {"VAR": "value"})',
)
ENABLE_OPENWEBUI_TOOLS: bool = Field(
default=True,
description="Enable OpenWebUI Tools (includes defined Tools and Tool Server Tools).",
)
ENABLE_MCP_SERVER: bool = Field(
default=True,
description="Enable Direct MCP Client connection (Recommended).",
)
REASONING_EFFORT: str = Field(
default="medium",
@@ -118,14 +106,14 @@
)

class UserValves(BaseModel):
REASONING_EFFORT: str = Field(
default="",
description="Reasoning effort level (low, medium, high, xhigh). Leave empty to use global setting.",
)
DEBUG: bool = Field(
default=False,
description="Enable technical debug logs (connection info, etc.)",
@@ -134,9 +122,18 @@
default=True,
description="Show model reasoning/thinking process",
)
ENABLE_OPENWEBUI_TOOLS: bool = Field(
default=True,
description="Enable OpenWebUI Tools (includes defined Tools and Tool Server Tools).",
)
ENABLE_MCP_SERVER: bool = Field(
default=True,
description="Enable dynamic MCP server loading (overrides global).",
)
ENFORCE_FORMATTING: bool = Field(
default=True,
description="Enforce formatting guidelines (overrides global)",
)
def __init__(self):
@@ -147,6 +144,7 @@
self.temp_dir = tempfile.mkdtemp(prefix="copilot_images_")
self.thinking_started = False
self._model_cache = []  # Model list cache
self._last_update_check = 0  # Timestamp of last CLI update check

def __del__(self):
try:
@@ -183,23 +181,269 @@
# ==================== Custom Tool Examples ====================
# Tool registration: Add @define_tool decorated functions at module level,
# then register them in _initialize_custom_tools().

async def _initialize_custom_tools(self, __user__=None, __event_call__=None):
"""Initialize custom tools based on configuration"""
if not self.valves.ENABLE_OPENWEBUI_TOOLS:
return []
# Load OpenWebUI tools dynamically
openwebui_tools = await self._load_openwebui_tools(
__user__=__user__, __event_call__=__event_call__
)
return openwebui_tools
def _json_schema_to_python_type(self, schema: dict) -> Any:
"""Convert JSON Schema type to Python type for Pydantic models."""
if not isinstance(schema, dict):
return Any
schema_type = schema.get("type")
if isinstance(schema_type, list):
schema_type = next((t for t in schema_type if t != "null"), schema_type[0])
if schema_type == "string":
return str
if schema_type == "integer":
return int
if schema_type == "number":
return float
if schema_type == "boolean":
return bool
if schema_type == "object":
return Dict[str, Any]
if schema_type == "array":
items_schema = schema.get("items", {})
item_type = self._json_schema_to_python_type(items_schema)
return List[item_type]
return Any
def _convert_openwebui_tool(self, tool_name: str, tool_dict: dict):
"""Convert OpenWebUI tool definition to Copilot SDK tool."""
# Sanitize tool name to match pattern ^[a-zA-Z0-9_-]+$
sanitized_tool_name = re.sub(r"[^a-zA-Z0-9_-]", "_", tool_name)
# If sanitized name is empty or consists only of separators (e.g. pure Chinese name), generate a fallback name
if not sanitized_tool_name or re.match(r"^[_.-]+$", sanitized_tool_name):
hash_suffix = hashlib.md5(tool_name.encode("utf-8")).hexdigest()[:8]
sanitized_tool_name = f"tool_{hash_suffix}"
if sanitized_tool_name != tool_name:
logger.debug(
f"Sanitized tool name '{tool_name}' to '{sanitized_tool_name}'"
)
spec = tool_dict.get("spec", {}) if isinstance(tool_dict, dict) else {}
params_schema = spec.get("parameters", {}) if isinstance(spec, dict) else {}
properties = params_schema.get("properties", {})
required = params_schema.get("required", [])
if not isinstance(properties, dict):
properties = {}
if not isinstance(required, list):
required = []
required_set = set(required)
fields = {}
for param_name, param_schema in properties.items():
param_type = self._json_schema_to_python_type(param_schema)
description = ""
if isinstance(param_schema, dict):
description = param_schema.get("description", "")
if param_name in required_set:
if description:
fields[param_name] = (
param_type,
Field(..., description=description),
)
else:
fields[param_name] = (param_type, ...)
else:
optional_type = Optional[param_type]
if description:
fields[param_name] = (
optional_type,
Field(default=None, description=description),
)
else:
fields[param_name] = (optional_type, None)
if fields:
ParamsModel = create_model(f"{sanitized_tool_name}_Params", **fields)
else:
ParamsModel = create_model(f"{sanitized_tool_name}_Params")
tool_callable = tool_dict.get("callable")
tool_description = spec.get("description", "") if isinstance(spec, dict) else ""
if not tool_description and isinstance(spec, dict):
tool_description = spec.get("summary", "")
# Critical: If the tool name was sanitized (e.g. Chinese -> Hash), instructions are lost.
# We must inject the original name into the description so the model knows what it is.
if sanitized_tool_name != tool_name:
tool_description = f"Function '{tool_name}': {tool_description}"
async def _tool(params):
payload = params.model_dump() if hasattr(params, "model_dump") else {}
return await tool_callable(**payload)
_tool.__name__ = sanitized_tool_name
_tool.__doc__ = tool_description
# Debug log for tool conversion
logger.debug(
f"Converting tool '{sanitized_tool_name}': {tool_description[:50]}..."
)
# Core Fix: Explicitly pass params_type and name
return define_tool(
name=sanitized_tool_name,
description=tool_description,
params_type=ParamsModel,
)(_tool)
def _build_openwebui_request(self):
"""Build a minimal request-like object for OpenWebUI tool loading."""
app_state = SimpleNamespace(
config=SimpleNamespace(
TOOL_SERVER_CONNECTIONS=TOOL_SERVER_CONNECTIONS.value
),
TOOLS={},
)
app = SimpleNamespace(state=app_state)
request = SimpleNamespace(
app=app,
cookies={},
state=SimpleNamespace(token=SimpleNamespace(credentials="")),
)
return request
async def _load_openwebui_tools(self, __user__=None, __event_call__=None):
"""Load OpenWebUI tools and convert them to Copilot SDK tools."""
if isinstance(__user__, (list, tuple)):
user_data = __user__[0] if __user__ else {}
elif isinstance(__user__, dict):
user_data = __user__
else:
user_data = {}
if not user_data:
return []
user_id = user_data.get("id") or user_data.get("user_id")
if not user_id:
return []
user = Users.get_user_by_id(user_id)
if not user:
return []
# 1. Get User defined tools (Python scripts)
tool_items = Tools.get_tools_by_user_id(user_id, permission="read")
tool_ids = [tool.id for tool in tool_items] if tool_items else []
# 2. Get OpenAPI Tool Server tools
# We manually add enabled OpenAPI servers to the list because Tools.get_tools_by_user_id only checks the DB.
# open_webui.utils.tools.get_tools handles the actual loading and access control.
if hasattr(TOOL_SERVER_CONNECTIONS, "value"):
for server in TOOL_SERVER_CONNECTIONS.value:
# We only add 'openapi' servers here because get_tools currently only supports 'openapi' (or defaults to it).
# MCP tools are handled separately via ENABLE_MCP_SERVER.
if server.get("type") == "openapi":
# Format expected by get_tools: "server:<id>" implies types="openapi"
server_id = server.get("id")
if server_id:
tool_ids.append(f"server:{server_id}")
if not tool_ids:
return []
request = self._build_openwebui_request()
extra_params = {
"__request__": request,
"__user__": user_data,
"__event_emitter__": None,
"__event_call__": __event_call__,
"__chat_id__": None,
"__message_id__": None,
"__model_knowledge__": [],
}

tools_dict = await get_openwebui_tools(request, tool_ids, user, extra_params)
if not tools_dict:
return []
converted_tools = []
for tool_name, tool_def in tools_dict.items():
try:
converted_tools.append(
self._convert_openwebui_tool(tool_name, tool_def)
)
except Exception as e:
await self._emit_debug_log(
f"Failed to load OpenWebUI tool '{tool_name}': {e}",
__event_call__,
)
return converted_tools
def _parse_mcp_servers(self) -> Optional[dict]:
"""
Dynamically load MCP servers from OpenWebUI TOOL_SERVER_CONNECTIONS.
Returns a dict of mcp_servers compatible with CopilotClient.
"""
if not self.valves.ENABLE_MCP_SERVER:
return None
mcp_servers = {}
# Iterate over OpenWebUI Tool Server Connections
if hasattr(TOOL_SERVER_CONNECTIONS, "value"):
connections = TOOL_SERVER_CONNECTIONS.value
else:
connections = []
for conn in connections:
if conn.get("type") == "mcp":
info = conn.get("info", {})
# Use ID from info or generate one
raw_id = info.get("id", f"mcp-server-{len(mcp_servers)}")
# Sanitize server_id (using same logic as tools)
server_id = re.sub(r"[^a-zA-Z0-9_-]", "_", raw_id)
if not server_id or re.match(r"^[_.-]+$", server_id):
hash_suffix = hashlib.md5(raw_id.encode("utf-8")).hexdigest()[:8]
server_id = f"server_{hash_suffix}"
url = conn.get("url")
if not url:
continue
# Build Headers (Handle Auth)
headers = {}
auth_type = conn.get("auth_type", "bearer")
key = conn.get("key", "")
if auth_type == "bearer" and key:
headers["Authorization"] = f"Bearer {key}"
elif auth_type == "basic" and key:
headers["Authorization"] = f"Basic {key}"
# Merge custom headers if any
custom_headers = conn.get("headers", {})
if isinstance(custom_headers, dict):
headers.update(custom_headers)
mcp_servers[server_id] = {
"type": "http",
"url": url,
"headers": headers,
"tools": ["*"], # Enable all tools by default
}
return mcp_servers if mcp_servers else None
async def _emit_debug_log(self, message: str, __event_call__=None):
"""Emit debug log to frontend (console) when DEBUG is enabled."""
@@ -390,9 +634,31 @@
return system_prompt_content, system_prompt_source
def _get_workspace_dir(self) -> str:
"""Get the effective workspace directory with smart defaults."""
if self.valves.WORKSPACE_DIR:
return self.valves.WORKSPACE_DIR
# Smart default for OpenWebUI container
if os.path.exists("/app/backend/data"):
cwd = "/app/backend/data/copilot_workspace"
else:
# Local fallback: subdirectory in current working directory
cwd = os.path.join(os.getcwd(), "copilot_workspace")
# Ensure directory exists
if not os.path.exists(cwd):
try:
os.makedirs(cwd, exist_ok=True)
except Exception as e:
print(f"Error creating workspace {cwd}: {e}")
return os.getcwd() # Fallback to CWD if creation fails
return cwd
def _build_client_config(self, body: dict) -> dict:
"""Build CopilotClient config from valves and request body."""
cwd = self._get_workspace_dir()
client_config = {}
if os.environ.get("COPILOT_CLI_PATH"):
client_config["cli_path"] = os.environ["COPILOT_CLI_PATH"]
@@ -418,7 +684,6 @@
custom_tools: List[Any],
system_prompt_content: Optional[str],
is_streaming: bool,
):
"""Build SessionConfig for Copilot SDK."""
from copilot.types import SessionConfig, InfiniteSessionConfig
@@ -470,9 +735,9 @@
"infinite_sessions": infinite_session_config,
}

mcp_servers = self._parse_mcp_servers()
if mcp_servers:
session_params["mcp_servers"] = mcp_servers

return SessionConfig(**session_params)
@@ -628,8 +893,8 @@
# Return default model on failure
return [
{
"id": f"{self.id}-gpt-5-mini",
"name": "GitHub Copilot (gpt-5-mini)",
}
]
finally:
@@ -638,8 +903,8 @@
await self._emit_debug_log(f"Pipes Error: {e}")
return [
{
"id": f"{self.id}-gpt-5-mini",
"name": "GitHub Copilot (gpt-5-mini)",
}
]
@@ -654,30 +919,93 @@
return client

def _setup_env(self, __event_call__=None):
# Default CLI path logic
cli_path = "/usr/local/bin/copilot"
if os.environ.get("COPILOT_CLI_PATH"):
cli_path = os.environ["COPILOT_CLI_PATH"]
target_version = self.valves.COPILOT_CLI_VERSION.strip()
found = False
current_version = None
# internal helper to get version
def get_cli_version(path):
try:
output = (
subprocess.check_output(
[path, "--version"], stderr=subprocess.STDOUT
)
.decode()
.strip()
)
# Copilot CLI version output format is usually just the version number or "copilot version X.Y.Z"
# We try to extract X.Y.Z
match = re.search(r"(\d+\.\d+\.\d+)", output)
return match.group(1) if match else output
except Exception:
return None
# Check default path
if os.path.exists(cli_path):
found = True
current_version = get_cli_version(cli_path)

# Check system path if not found
if not found:
sys_path = shutil.which("copilot")
if sys_path:
cli_path = sys_path
found = True
current_version = get_cli_version(cli_path)

# Determine if we need to install or update
should_install = False
install_reason = ""
if not found:
should_install = True
install_reason = "CLI not found"
elif target_version:
# Normalize versions for comparison (remove 'v' prefix)
norm_target = target_version.lstrip("v")
norm_current = current_version.lstrip("v") if current_version else ""
if norm_target != norm_current:
should_install = True
install_reason = f"Version mismatch (Current: {current_version}, Target: {target_version})"

if should_install:
if self.valves.DEBUG:
self._emit_debug_log_sync(
f"Installing Copilot CLI: {install_reason}...", __event_call__
)
try:
env = os.environ.copy()
if target_version:
env["VERSION"] = target_version
subprocess.run(
"curl -fsSL https://gh.io/copilot-install | bash",
shell=True,
check=True,
env=env,
)
# Check default install location first, then system path
if os.path.exists("/usr/local/bin/copilot"):
cli_path = "/usr/local/bin/copilot"
found = True
elif shutil.which("copilot"):
cli_path = shutil.which("copilot")
found = True
if found:
current_version = get_cli_version(cli_path)
except Exception as e:
if self.valves.DEBUG:
self._emit_debug_log_sync(
f"Failed to install Copilot CLI: {e}", __event_call__
)

if found:
os.environ["COPILOT_CLI_PATH"] = cli_path
@@ -687,23 +1015,7 @@
if self.valves.DEBUG:
self._emit_debug_log_sync(
f"Copilot CLI found at: {cli_path} (Version: {current_version})",
__event_call__,
)
else:
@@ -722,6 +1034,8 @@
"Warning: GH_TOKEN is not set.", __event_call__
)

self._sync_mcp_config(__event_call__)

def _process_images(self, messages, __event_call__=None):
attachments = []
text_content = ""
@@ -779,8 +1093,8 @@
"gpt-5.2-codex"
not in self._collect_model_ids(
body={},
request_model=self.id,
real_model_id=None,
)[0].lower()
):
# Fallback to high if not supported
@@ -823,6 +1137,53 @@
except Exception as e:
self._emit_debug_log_sync(f"Config sync check failed: {e}", __event_call__)
async def _update_copilot_cli(self, cli_path: str, __event_call__=None):
"""Async task to update Copilot CLI if needed."""
import time
try:
# Check frequency (e.g., once every hour)
now = time.time()
if now - self._last_update_check < 3600:
return
self._last_update_check = now
# Simple check if "update" command is available or if we should just run it
# The user requested "async attempt to update copilot cli"
if self.valves.DEBUG:
self._emit_debug_log_sync(
"Triggering async Copilot CLI update check...", __event_call__
)
# We create a subprocess to run the update
process = await asyncio.create_subprocess_exec(
cli_path,
"update",
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
stdout, stderr = await process.communicate()
if self.valves.DEBUG:
output = stdout.decode().strip() or stderr.decode().strip()
if output:
self._emit_debug_log_sync(
f"Async CLI Update result: {output}", __event_call__
)
except Exception as e:
if self.valves.DEBUG:
self._emit_debug_log_sync(
f"Async CLI Update failed: {e}", __event_call__
)
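The hourly throttle used by the update task above reduces to a timestamp comparison against a stored "last check" value. A minimal standalone sketch of that pattern (the class and constant names are illustrative, not part of the pipe):

```python
import time

UPDATE_INTERVAL_SECONDS = 3600  # check at most once per hour


class UpdateThrottle:
    """Tracks the last check time and decides whether a new check may run."""

    def __init__(self):
        self._last_check = 0.0

    def should_check(self, now=None) -> bool:
        # Passing `now` explicitly makes the logic testable without sleeping
        now = time.time() if now is None else now
        if now - self._last_check < UPDATE_INTERVAL_SECONDS:
            return False
        self._last_check = now
        return True


throttle = UpdateThrottle()
print(throttle.should_check(now=10_000.0))  # True: first call passes
print(throttle.should_check(now=10_500.0))  # False: within the hour, skipped
print(throttle.should_check(now=14_000.0))  # True: more than 3600s later
```

Note the throttle updates its timestamp only when a check is allowed, so repeated skipped calls do not push the next check further into the future.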
def _sync_mcp_config(self, __event_call__=None):
"""Deprecated: MCP config is now handled dynamically via session config."""
pass
# ==================== Internal Implementation ====================
# _pipe_impl() contains the main request handling logic.
# ================================================================
@@ -835,12 +1196,22 @@ class Pipe:
__event_call__=None,
) -> Union[str, AsyncGenerator]:
self._setup_env(__event_call__)
cwd = self._get_workspace_dir()
if self.valves.DEBUG:
await self._emit_debug_log(f"Agent working in: {cwd}", __event_call__)
if not self.valves.GH_TOKEN:
return "Error: Please configure GH_TOKEN in Valves."
# Trigger async CLI update if configured
cli_path = os.environ.get("COPILOT_CLI_PATH")
if cli_path:
asyncio.create_task(self._update_copilot_cli(cli_path, __event_call__))
# Parse user selected model
request_model = body.get("model", "")
- real_model_id = self.valves.MODEL_ID  # Default value
real_model_id = request_model
# Determine effective reasoning effort and debug setting
if __user__:
@@ -877,6 +1248,14 @@ class Pipe:
await self._emit_debug_log(
f"Using selected model: {real_model_id}", __event_call__
)
elif __metadata__ and __metadata__.get("base_model_id"):
base_model_id = __metadata__.get("base_model_id", "")
if base_model_id.startswith(f"{self.id}-"):
real_model_id = base_model_id[len(f"{self.id}-") :]
await self._emit_debug_log(
f"Using base model: {real_model_id} (derived from custom model {request_model})",
__event_call__,
)
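The `base_model_id` branch above is a prefix strip: a custom model built on top of one of the pipe's exposed models carries the pipe's id as a prefix, which is removed to recover the real Copilot model id. As a standalone sketch (the `pipe_id` value is illustrative):

```python
def derive_real_model_id(pipe_id: str, base_model_id: str, fallback: str) -> str:
    """Strip the pipe's '<pipe_id>-' prefix from a base model id, if present."""
    prefix = f"{pipe_id}-"
    if base_model_id.startswith(prefix):
        return base_model_id[len(prefix):]
    return fallback


# A custom model whose base is the pipe's exposed model keeps the prefix:
print(derive_real_model_id("github_copilot", "github_copilot-gpt-5-mini", "gpt-5"))
# A base model from another pipe falls back to the default:
print(derive_real_model_id("github_copilot", "other_pipe-model-x", "gpt-5"))
```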
messages = body.get("messages", [])
if not messages:
@@ -918,26 +1297,58 @@ class Pipe:
await client.start()
# Initialize custom tools
- custom_tools = self._initialize_custom_tools()
custom_tools = await self._initialize_custom_tools(
__user__=__user__, __event_call__=__event_call__
)
if custom_tools:
tool_names = [t.name for t in custom_tools]
await self._emit_debug_log(
f"Enabled {len(custom_tools)} custom tools: {tool_names}",
__event_call__,
)
if self.valves.DEBUG:
for t in custom_tools:
await self._emit_debug_log(
f"📋 Tool Detail: {t.name} - {t.description[:100]}...",
__event_call__,
)
# Check MCP Servers
mcp_servers = self._parse_mcp_servers()
mcp_server_names = list(mcp_servers.keys()) if mcp_servers else []
if mcp_server_names:
await self._emit_debug_log(
f"🔌 MCP Servers Configured: {mcp_server_names}",
__event_call__,
)
else:
await self._emit_debug_log(
"No MCP tool servers found in OpenWebUI Connections.",
__event_call__,
)
# Create or Resume Session
session = None
if chat_id:
try:
# Prepare resume config
resume_params = {}
if mcp_servers:
resume_params["mcp_servers"] = mcp_servers
- session = await client.resume_session(chat_id)
session = (
await client.resume_session(chat_id, resume_params)
if resume_params
else await client.resume_session(chat_id)
)
await self._emit_debug_log(
- f"Resumed session: {chat_id} (Note: Formatting guidelines only apply to NEW sessions. Create a new chat to use updated formatting.)",
f"Resumed session: {chat_id} (Reasoning: {effective_reasoning_effort or 'default'})",
__event_call__,
)
# Show workspace info if available
- if self.valves.DEBUG and self.valves.SHOW_WORKSPACE_INFO:
if self.valves.DEBUG:
if session.workspace_path:
await self._emit_debug_log(
f"Session workspace: {session.workspace_path}",
@@ -990,7 +1401,7 @@ class Pipe:
)
# Show workspace info for new sessions
- if self.valves.DEBUG and self.valves.SHOW_WORKSPACE_INFO:
if self.valves.DEBUG:
if session.workspace_path:
await self._emit_debug_log(
f"Session workspace: {session.workspace_path}",
@@ -1012,7 +1423,11 @@ class Pipe:
if body.get("stream", False):
init_msg = ""
if self.valves.DEBUG:
- init_msg = f"> [Debug] Agent working in: {os.getcwd()}\n"
init_msg = (
f"> [Debug] Agent working in: {self._get_workspace_dir()}\n"
)
if mcp_server_names:
init_msg += f"> [Debug] 🔌 Connected MCP Servers: {', '.join(mcp_server_names)}\n"
# Transfer client ownership to stream_response
should_stop_client = False


@@ -4,11 +4,12 @@ author: Fu-Jie
author_url: https://github.com/Fu-Jie/awesome-openwebui
funding_url: https://github.com/open-webui
description: Integrates the GitHub Copilot SDK. Supports dynamic models, multi-turn conversations, streaming output, multimodal input, infinite sessions, and frontend debug logging.
- version: 0.2.3
version: 0.3.0
- requirements: github-copilot-sdk
requirements: github-copilot-sdk==0.1.22
"""
import os
import re
import time
import json
import base64
@@ -18,46 +19,36 @@ import logging
import shutil
import subprocess
import sys
- from typing import Optional, Union, AsyncGenerator, List, Any, Dict
- from pydantic import BaseModel, Field
import hashlib
from pathlib import Path
from typing import Optional, Union, AsyncGenerator, List, Any, Dict, Callable
from types import SimpleNamespace
from pydantic import BaseModel, Field, create_model
from datetime import datetime, timezone
import contextlib
# Import Copilot SDK modules
from copilot import CopilotClient, define_tool
# Import OpenWebUI config and tool modules
from open_webui.config import TOOL_SERVER_CONNECTIONS
from open_webui.utils.tools import get_tools as get_openwebui_tools
from open_webui.models.tools import Tools
from open_webui.models.users import Users
# Setup logger
logger = logging.getLogger(__name__)
- class RandomNumberParams(BaseModel):
- min: int = Field(description="Minimum value (inclusive)")
- max: int = Field(description="Maximum value (inclusive)")
- @define_tool(description="Generate a random integer within the given range.")
- async def generate_random_number(params: RandomNumberParams) -> str:
- import random
- if params.min >= params.max:
- raise ValueError("min must be less than max")
- number = random.randint(params.min, params.max)
- return f"Generated random number: {number}"
class Pipe:
class Valves(BaseModel):
GH_TOKEN: str = Field(
default="",
- description="GitHub fine-grained token (requires Copilot Requests permission)",
description="GitHub OAuth token (from 'gh auth token'), used for Copilot Chat (required)",
)
- MODEL_ID: str = Field(
- default="gpt-5-mini",
- description="Default Copilot model name (used when dynamic fetching fails)",
- )
- CLI_PATH: str = Field(
- default="/usr/local/bin/copilot",
- description="Copilot CLI path",
- )
COPILOT_CLI_VERSION: str = Field(
default="0.0.405",
description="Copilot CLI version to install/enforce (e.g. '0.0.405'). Leave empty to use the latest.",
)
DEBUG: bool = Field(
default=False,
@@ -71,10 +62,6 @@ class Pipe:
default=True,
description="Show the model's reasoning/thinking process",
)
- SHOW_WORKSPACE_INFO: bool = Field(
- default=True,
- description="Show the session workspace path and summary in debug mode",
- )
EXCLUDE_KEYWORDS: str = Field(
default="",
description="Exclude models containing these keywords (comma-separated): codex, haiku",
@@ -103,13 +90,14 @@ class Pipe:
default="",
description='Custom environment variables (JSON format, e.g. {"VAR": "value"})',
)
- ENABLE_TOOLS: bool = Field(
- default=False,
- description="Enable custom tools (e.g. random number)",
- )
- AVAILABLE_TOOLS: str = Field(
- default="all",
- description="Available tools: 'all' or a comma-separated list (e.g. 'generate_random_number')",
- )
ENABLE_OPENWEBUI_TOOLS: bool = Field(
default=True,
description="Enable OpenWebUI tools (including custom tools and tool-server tools).",
)
ENABLE_MCP_SERVER: bool = Field(
default=True,
description="Enable direct MCP client connections (recommended).",
)
REASONING_EFFORT: str = Field(
default="medium",
@@ -121,25 +109,35 @@ class Pipe:
)
class UserValves(BaseModel):
GH_TOKEN: str = Field(
default="",
description="Personal GitHub fine-grained token (overrides the global setting)",
)
REASONING_EFFORT: str = Field(
default="",
description="Reasoning effort level (low, medium, high, xhigh). Leave empty to use the global setting.",
)
- CLI_PATH: str = Field(
- default="",
- description="Custom Copilot CLI path. Leave empty to use the global setting.",
- )
DEBUG: bool = Field(
default=False,
description="Enable technical debug logs (connection info, etc.)",
)
SHOW_THINKING: bool = Field(
default=True,
description="Show the model's reasoning/thinking process",
)
- MODEL_ID: str = Field(
- default="",
- description="Custom model ID (e.g. gpt-4o). Leave empty to use the global default.",
- )
ENABLE_OPENWEBUI_TOOLS: bool = Field(
default=True,
description="Enable OpenWebUI tools (including custom tools and tool-server tools; overrides the global setting).",
)
ENABLE_MCP_SERVER: bool = Field(
default=True,
description="Enable dynamic MCP server loading (overrides the global setting).",
)
ENFORCE_FORMATTING: bool = Field(
default=True,
description="Force-enable formatting guidance (overrides the global setting)",
)
def __init__(self):
@@ -150,6 +148,7 @@ class Pipe:
self.temp_dir = tempfile.mkdtemp(prefix="copilot_images_")
self.thinking_started = False
self._model_cache = []  # Model list cache
self._last_update_check = 0  # Timestamp of the last CLI update check
def __del__(self):
try:
@@ -338,9 +337,31 @@ class Pipe:
return system_prompt_content, system_prompt_source
def _get_workspace_dir(self) -> str:
"""Return the effective workspace directory with smart defaults."""
if self.valves.WORKSPACE_DIR:
return self.valves.WORKSPACE_DIR
# Smart default for OpenWebUI containers
if os.path.exists("/app/backend/data"):
cwd = "/app/backend/data/copilot_workspace"
else:
# Local fallback: subdirectory of the current working directory
cwd = os.path.join(os.getcwd(), "copilot_workspace")
# Ensure the directory exists
if not os.path.exists(cwd):
try:
os.makedirs(cwd, exist_ok=True)
except Exception as e:
print(f"Error creating workspace {cwd}: {e}")
return os.getcwd()  # Fall back to CWD if creation fails
return cwd
def _build_client_config(self, body: dict) -> dict:
"""Build the CopilotClient config from the Valves and the request"""
- cwd = self.valves.WORKSPACE_DIR if self.valves.WORKSPACE_DIR else os.getcwd()
cwd = self._get_workspace_dir()
client_config = {}
if os.environ.get("COPILOT_CLI_PATH"):
client_config["cli_path"] = os.environ["COPILOT_CLI_PATH"]
@@ -359,6 +380,270 @@ class Pipe:
return client_config
async def _initialize_custom_tools(self, __user__=None, __event_call__=None):
"""Initialize custom tools based on the configuration"""
if not self.valves.ENABLE_OPENWEBUI_TOOLS:
return []
# Dynamically load OpenWebUI tools
openwebui_tools = await self._load_openwebui_tools(
__user__=__user__, __event_call__=__event_call__
)
return openwebui_tools
def _json_schema_to_python_type(self, schema: dict) -> Any:
"""Convert a JSON Schema type into a Python type for Pydantic models."""
if not isinstance(schema, dict):
return Any
schema_type = schema.get("type")
if isinstance(schema_type, list):
schema_type = next((t for t in schema_type if t != "null"), schema_type[0])
if schema_type == "string":
return str
if schema_type == "integer":
return int
if schema_type == "number":
return float
if schema_type == "boolean":
return bool
if schema_type == "object":
return Dict[str, Any]
if schema_type == "array":
items_schema = schema.get("items", {})
item_type = self._json_schema_to_python_type(items_schema)
return List[item_type]
return Any
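The type-mapping rules above can be exercised on their own. This sketch mirrors them as a free function (nullable unions pick the first non-null type, arrays recurse into `items`, anything unknown degrades to `Any`):

```python
from typing import Any, Dict, List


def json_schema_to_python_type(schema: dict) -> Any:
    """Map a JSON Schema fragment onto a Python typing annotation."""
    if not isinstance(schema, dict):
        return Any
    schema_type = schema.get("type")
    if isinstance(schema_type, list):
        # e.g. ["string", "null"] → "string"
        schema_type = next((t for t in schema_type if t != "null"), schema_type[0])
    mapping = {"string": str, "integer": int, "number": float, "boolean": bool}
    if schema_type in mapping:
        return mapping[schema_type]
    if schema_type == "object":
        return Dict[str, Any]
    if schema_type == "array":
        # Recurse into the item schema; missing "items" degrades to List[Any]
        return List[json_schema_to_python_type(schema.get("items", {}))]
    return Any


print(json_schema_to_python_type({"type": ["string", "null"]}))                 # <class 'str'>
print(json_schema_to_python_type({"type": "array", "items": {"type": "integer"}}))  # typing.List[int]
```

These resulting types are what `pydantic.create_model` receives as field annotations when the tool's parameter model is assembled.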
def _convert_openwebui_tool(self, tool_name: str, tool_dict: dict):
"""Convert an OpenWebUI tool definition into a Copilot SDK tool."""
# Sanitize the tool name to match the pattern ^[a-zA-Z0-9_-]+$
sanitized_tool_name = re.sub(r"[^a-zA-Z0-9_-]", "_", tool_name)
# If the sanitized name is empty or only separators (e.g. a pure-Chinese name), generate a fallback name
if not sanitized_tool_name or re.match(r"^[_.-]+$", sanitized_tool_name):
hash_suffix = hashlib.md5(tool_name.encode("utf-8")).hexdigest()[:8]
sanitized_tool_name = f"tool_{hash_suffix}"
if sanitized_tool_name != tool_name:
logger.debug(f"Sanitized tool name '{tool_name}' to '{sanitized_tool_name}'")
spec = tool_dict.get("spec", {}) if isinstance(tool_dict, dict) else {}
params_schema = spec.get("parameters", {}) if isinstance(spec, dict) else {}
properties = params_schema.get("properties", {})
required = params_schema.get("required", [])
if not isinstance(properties, dict):
properties = {}
if not isinstance(required, list):
required = []
required_set = set(required)
fields = {}
for param_name, param_schema in properties.items():
param_type = self._json_schema_to_python_type(param_schema)
description = ""
if isinstance(param_schema, dict):
description = param_schema.get("description", "")
if param_name in required_set:
if description:
fields[param_name] = (
param_type,
Field(..., description=description),
)
else:
fields[param_name] = (param_type, ...)
else:
optional_type = Optional[param_type]
if description:
fields[param_name] = (
optional_type,
Field(default=None, description=description),
)
else:
fields[param_name] = (optional_type, None)
if fields:
ParamsModel = create_model(f"{sanitized_tool_name}_Params", **fields)
else:
ParamsModel = create_model(f"{sanitized_tool_name}_Params")
tool_callable = tool_dict.get("callable")
tool_description = spec.get("description", "") if isinstance(spec, dict) else ""
if not tool_description and isinstance(spec, dict):
tool_description = spec.get("summary", "")
# Key point: if the tool name was sanitized (e.g. Chinese → hash), its semantics are lost.
# We must inject the original name into the description so the model knows what the tool does.
if sanitized_tool_name != tool_name:
tool_description = f"Function '{tool_name}': {tool_description}"
async def _tool(params):
payload = params.model_dump() if hasattr(params, "model_dump") else {}
return await tool_callable(**payload)
_tool.__name__ = sanitized_tool_name
_tool.__doc__ = tool_description
# Conversion debug log
logger.debug(
f"Converting tool '{sanitized_tool_name}': {tool_description[:50]}..."
)
# Critical: params_type must be passed explicitly, otherwise define_tool cannot infer the dynamic function's parameters
# Passing name explicitly ensures the SDK registers the tool under the correct name
return define_tool(
name=sanitized_tool_name,
description=tool_description,
params_type=ParamsModel,
)(_tool)
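The name sanitization shown above (a regex filter plus an md5 fallback for names that collapse to separators, e.g. pure-CJK names) can be isolated into a small helper:

```python
import hashlib
import re


def sanitize_tool_name(tool_name: str) -> str:
    """Coerce an arbitrary tool name into the ^[a-zA-Z0-9_-]+$ pattern."""
    sanitized = re.sub(r"[^a-zA-Z0-9_-]", "_", tool_name)
    # If nothing meaningful survives, derive a stable hash-based fallback name
    if not sanitized or re.match(r"^[_.-]+$", sanitized):
        sanitized = f"tool_{hashlib.md5(tool_name.encode('utf-8')).hexdigest()[:8]}"
    return sanitized


print(sanitize_tool_name("get weather!"))  # get_weather_
print(sanitize_tool_name("查询天气"))       # tool_<8 hex chars>, stable per input
```

Because md5 is deterministic, the same non-ASCII name always maps to the same fallback id, which keeps tool registration stable across requests.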
def _build_openwebui_request(self):
"""Build a minimal mock request object for OpenWebUI tool loading."""
app_state = SimpleNamespace(
config=SimpleNamespace(
TOOL_SERVER_CONNECTIONS=TOOL_SERVER_CONNECTIONS.value
),
TOOLS={},
)
app = SimpleNamespace(state=app_state)
request = SimpleNamespace(
app=app,
cookies={},
state=SimpleNamespace(token=SimpleNamespace(credentials="")),
)
return request
async def _load_openwebui_tools(self, __user__=None, __event_call__=None):
"""Dynamically load OpenWebUI tools and convert them into Copilot SDK tools."""
if isinstance(__user__, (list, tuple)):
user_data = __user__[0] if __user__ else {}
elif isinstance(__user__, dict):
user_data = __user__
else:
user_data = {}
if not user_data:
return []
user_id = user_data.get("id") or user_data.get("user_id")
if not user_id:
return []
user = Users.get_user_by_id(user_id)
if not user:
return []
# 1. Fetch user-defined tools (Python scripts)
tool_items = Tools.get_tools_by_user_id(user_id, permission="read")
tool_ids = [tool.id for tool in tool_items] if tool_items else []
# 2. Fetch OpenAPI tool-server tools
# We add enabled OpenAPI servers manually because Tools.get_tools_by_user_id only checks the database.
# open_webui.utils.tools.get_tools handles the actual loading and access control.
if hasattr(TOOL_SERVER_CONNECTIONS, "value"):
for server in TOOL_SERVER_CONNECTIONS.value:
# We only add 'openapi' servers here, since get_tools currently appears to support only 'openapi' (its default).
# MCP tools are handled separately via ENABLE_MCP_SERVER.
if server.get("type") == "openapi":
# Format expected by get_tools: "server:<id>" implies type="openapi"
server_id = server.get("id")
if server_id:
tool_ids.append(f"server:{server_id}")
if not tool_ids:
return []
request = self._build_openwebui_request()
extra_params = {
"__request__": request,
"__user__": user_data,
"__event_emitter__": None,
"__event_call__": __event_call__,
"__chat_id__": None,
"__message_id__": None,
"__model_knowledge__": [],
}
tools_dict = await get_openwebui_tools(request, tool_ids, user, extra_params)
if not tools_dict:
return []
converted_tools = []
for tool_name, tool_def in tools_dict.items():
try:
converted_tools.append(
self._convert_openwebui_tool(tool_name, tool_def)
)
except Exception as e:
await self._emit_debug_log(
f"Failed to load OpenWebUI tool '{tool_name}': {e}",
__event_call__,
)
return converted_tools
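The two-source tool id collection above (user-owned Python tools plus enabled OpenAPI servers) amounts to list concatenation with a `server:` prefix. A minimal sketch with made-up sample data:

```python
def collect_tool_ids(user_tool_ids, server_connections):
    """Combine user tool ids with enabled OpenAPI tool-server ids.

    OpenAPI servers are referenced as 'server:<id>'; MCP servers are
    handled by a separate code path and therefore skipped here.
    """
    tool_ids = list(user_tool_ids)
    for server in server_connections:
        if server.get("type") == "openapi" and server.get("id"):
            tool_ids.append(f"server:{server['id']}")
    return tool_ids


connections = [
    {"type": "openapi", "id": "weather"},
    {"type": "mcp", "id": "files"},    # skipped: handled by the MCP path
    {"type": "openapi", "id": None},   # skipped: no id
]
print(collect_tool_ids(["my_python_tool"], connections))
# → ['my_python_tool', 'server:weather']
```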
def _parse_mcp_servers(self) -> Optional[dict]:
"""
Dynamically load MCP server configs from OpenWebUI TOOL_SERVER_CONNECTIONS.
Returns an mcp_servers dict compatible with CopilotClient.
"""
if not self.valves.ENABLE_MCP_SERVER:
return None
mcp_servers = {}
# Iterate over the OpenWebUI tool server connections
if hasattr(TOOL_SERVER_CONNECTIONS, "value"):
connections = TOOL_SERVER_CONNECTIONS.value
else:
connections = []
for conn in connections:
if conn.get("type") == "mcp":
info = conn.get("info", {})
# Use the ID from info, or auto-generate one
raw_id = info.get("id", f"mcp-server-{len(mcp_servers)}")
# Sanitize the server_id (same logic as for tools)
server_id = re.sub(r"[^a-zA-Z0-9_-]", "_", raw_id)
if not server_id or re.match(r"^[_.-]+$", server_id):
hash_suffix = hashlib.md5(raw_id.encode("utf-8")).hexdigest()[:8]
server_id = f"server_{hash_suffix}"
url = conn.get("url")
if not url:
continue
# Build headers (handle auth)
headers = {}
auth_type = conn.get("auth_type", "bearer")
key = conn.get("key", "")
if auth_type == "bearer" and key:
headers["Authorization"] = f"Bearer {key}"
elif auth_type == "basic" and key:
headers["Authorization"] = f"Basic {key}"
# Merge custom headers
custom_headers = conn.get("headers", {})
if isinstance(custom_headers, dict):
headers.update(custom_headers)
mcp_servers[server_id] = {
"type": "http",
"url": url,
"headers": headers,
"tools": ["*"],  # Enable all tools by default
}
return mcp_servers if mcp_servers else None
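End to end, each WebUI connection entry maps onto one `mcp_servers` entry. A hedged sketch of that transformation as a free function (field names follow the code above; the sample connection is made up):

```python
import hashlib
import re


def connection_to_mcp_entry(conn):
    """Convert one OpenWebUI 'mcp' connection into a (server_id, config) pair, or None."""
    if conn.get("type") != "mcp" or not conn.get("url"):
        return None
    raw_id = conn.get("info", {}).get("id", "mcp-server-0")
    server_id = re.sub(r"[^a-zA-Z0-9_-]", "_", raw_id)
    if not server_id or re.match(r"^[_.-]+$", server_id):
        server_id = f"server_{hashlib.md5(raw_id.encode('utf-8')).hexdigest()[:8]}"
    headers = {}
    key = conn.get("key", "")
    if conn.get("auth_type", "bearer") == "bearer" and key:
        headers["Authorization"] = f"Bearer {key}"
    headers.update(conn.get("headers", {}) or {})
    return server_id, {"type": "http", "url": conn["url"], "headers": headers, "tools": ["*"]}


conn = {"type": "mcp", "url": "https://example.com/mcp", "key": "abc",
        "info": {"id": "my server"}}
print(connection_to_mcp_entry(conn))
```

Note that non-MCP entries and entries without a URL return `None`, mirroring the `continue` in the loop above.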
def _build_session_config(
self,
chat_id: Optional[str],
@@ -366,7 +651,6 @@ class Pipe:
custom_tools: List[Any],
system_prompt_content: Optional[str],
is_streaming: bool,
- reasoning_effort: str = "",
):
"""Build the SessionConfig for the Copilot SDK"""
from copilot.types import SessionConfig, InfiniteSessionConfig
@@ -414,11 +698,12 @@ class Pipe:
"tools": custom_tools,
"system_message": system_message_config,
"infinite_sessions": infinite_session_config,
- # Register the permission-handling hook
}
- # Add reasoning_effort if it is not the default (medium)
- if reasoning_effort and reasoning_effort.lower() != "medium":
-     session_params["reasoning_effort"] = reasoning_effort.lower()
mcp_servers = self._parse_mcp_servers()
if mcp_servers:
session_params["mcp_servers"] = mcp_servers
return SessionConfig(**session_params)
@@ -545,24 +830,6 @@ class Pipe:
return system_prompt_content, system_prompt_source
- def _initialize_custom_tools(self):
- """Initialize custom tools based on the configuration"""
- if not self.valves.ENABLE_TOOLS:
- return []
- # Define all available tools (register new tools here)
- all_tools = {
- "generate_random_number": generate_random_number,
- }
- # Filter according to the configuration
- if self.valves.AVAILABLE_TOOLS == "all":
- return list(all_tools.values())
- # Enable only the specified tools
- enabled = [t.strip() for t in self.valves.AVAILABLE_TOOLS.split(",")]
- return [all_tools[name] for name in enabled if name in all_tools]
async def _emit_debug_log(self, message: str, __event_call__=None):
"""Send logs to the frontend console when DEBUG is enabled."""
if not self.valves.DEBUG:
@@ -764,8 +1031,8 @@ class Pipe:
# Return the default model on failure
return [
{
- "id": f"{self.id}-{self.valves.MODEL_ID}",
- "name": f"GitHub Copilot ({self.valves.MODEL_ID})",
"id": f"{self.id}-gpt-5-mini",
"name": f"GitHub Copilot (gpt-5-mini)",
}
]
finally:
@@ -774,8 +1041,8 @@ class Pipe:
await self._emit_debug_log(f"Pipes Error: {e}")
return [
{
- "id": f"{self.id}-{self.valves.MODEL_ID}",
- "name": f"GitHub Copilot ({self.valves.MODEL_ID})",
"id": f"{self.id}-gpt-5-mini",
"name": f"GitHub Copilot (gpt-5-mini)",
}
]
@@ -807,30 +1074,93 @@ class Pipe:
return client
def _setup_env(self, __event_call__=None):
- cli_path = self.valves.CLI_PATH
- found = False
cli_path = "/usr/local/bin/copilot"
if os.environ.get("COPILOT_CLI_PATH"):
cli_path = os.environ["COPILOT_CLI_PATH"]
target_version = self.valves.COPILOT_CLI_VERSION.strip()
found = False
current_version = None
# Internal helper: read the installed CLI version
def get_cli_version(path):
try:
output = (
subprocess.check_output(
[path, "--version"], stderr=subprocess.STDOUT
)
.decode()
.strip()
)
# Copilot CLI output usually contains "copilot version X.Y.Z" or just the version number
match = re.search(r"(\d+\.\d+\.\d+)", output)
return match.group(1) if match else output
except Exception:
return None
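The helper above reduces to a single regex over the CLI's stdout; the sample output strings below are illustrative, not captured from the real CLI:

```python
import re


def extract_semver(output: str):
    """Pull the first X.Y.Z version out of arbitrary CLI output."""
    match = re.search(r"(\d+\.\d+\.\d+)", output)
    # Fall back to the raw output when no version-like token is present
    return match.group(1) if match else output


print(extract_semver("copilot version 0.0.405"))  # 0.0.405
print(extract_semver("v0.0.405 (build 2024)"))    # 0.0.405
```

Returning the raw output on a non-match keeps the caller's version comparison from crashing; it simply fails the equality check and triggers a reinstall.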
# Check the default path
if os.path.exists(cli_path):
found = True
current_version = get_cli_version(cli_path)
# Fall back to checking the system PATH
if not found:
sys_path = shutil.which("copilot")
if sys_path:
cli_path = sys_path
found = True
current_version = get_cli_version(cli_path)
# Decide whether an install/update is needed
should_install = False
install_reason = ""
if not found:
should_install = True
install_reason = "CLI not found"
elif target_version:
# Normalize version numbers (strip the 'v' prefix)
norm_target = target_version.lstrip("v")
norm_current = current_version.lstrip("v") if current_version else ""
if norm_target != norm_current:
should_install = True
install_reason = (
f"version mismatch (current: {current_version}, target: {target_version})"
)
if should_install:
if self.valves.DEBUG:
self._emit_debug_log_sync(
f"Installing Copilot CLI: {install_reason}...", __event_call__
)
try:
env = os.environ.copy()
if target_version:
env["VERSION"] = target_version
subprocess.run(
"curl -fsSL https://gh.io/copilot-install | bash",
shell=True,
check=True,
env=env,
)
- if os.path.exists(self.valves.CLI_PATH):
- cli_path = self.valves.CLI_PATH
# Prefer the default install path, then the system PATH
if os.path.exists("/usr/local/bin/copilot"):
cli_path = "/usr/local/bin/copilot"
found = True
- except:
- pass
elif shutil.which("copilot"):
cli_path = shutil.which("copilot")
found = True
if found:
current_version = get_cli_version(cli_path)
except Exception as e:
if self.valves.DEBUG:
self._emit_debug_log_sync(
f"Copilot CLI installation failed: {e}", __event_call__
)
if found:
os.environ["COPILOT_CLI_PATH"] = cli_path
@@ -840,7 +1170,14 @@ class Pipe:
if self.valves.DEBUG:
self._emit_debug_log_sync(
- f"Copilot CLI located: {cli_path}", __event_call__
f"Copilot CLI found: {cli_path} (version: {current_version})",
__event_call__,
)
else:
if self.valves.DEBUG:
self._emit_debug_log_sync(
"Error: Copilot CLI not found. Related agent features will be disabled.",
__event_call__,
)
if self.valves.GH_TOKEN:
@@ -850,6 +1187,8 @@ class Pipe:
if self.valves.DEBUG:
self._emit_debug_log_sync("Warning: GH_TOKEN is not set.", __event_call__)
self._sync_mcp_config(__event_call__)
def _process_images(self, messages, __event_call__=None):
attachments = []
text_content = ""
@@ -944,9 +1283,113 @@ class Pipe:
except Exception as e:
self._emit_debug_log_sync(f"Config sync check failed: {e}", __event_call__)
def _sync_mcp_config(self, __event_call__=None):
"""Deprecated: MCP config is now handled dynamically via SessionConfig."""
pass
# ==================== Internal Implementation ====================
# _pipe_impl() contains the main request handling logic.
# ================================================
def _sync_copilot_config(self, reasoning_effort: str, __event_call__=None):
"""
Dynamically update ~/.copilot/config.json when REASONING_EFFORT is set.
This provides a fallback mechanism in case the API injection is ignored by the server.
"""
if not reasoning_effort:
return
effort = reasoning_effort
# Check whether the model supports xhigh
# Currently only gpt-5.2-codex supports xhigh
if effort == "xhigh":
# Simple check using the default model ID
if (
"gpt-5.2-codex"
not in self._collect_model_ids(
body={},
request_model=self.id,
real_model_id=None,
)[0].lower()
):
# Fall back to high if unsupported
effort = "high"
try:
# Target the standard path ~/.copilot/config.json
config_path = os.path.expanduser("~/.copilot/config.json")
config_dir = os.path.dirname(config_path)
# Only proceed when the directory exists (avoid creating junk files on a wrong path)
if not os.path.exists(config_dir):
return
data = {}
# Read the existing config
if os.path.exists(config_path):
try:
with open(config_path, "r") as f:
data = json.load(f)
except Exception:
data = {}
# Update only if the value changed
current_val = data.get("reasoning_effort")
if current_val != effort:
data["reasoning_effort"] = effort
try:
with open(config_path, "w") as f:
json.dump(data, f, indent=4)
self._emit_debug_log_sync(
f"Dynamically updated ~/.copilot/config.json: reasoning_effort='{effort}'",
__event_call__,
)
except Exception as e:
self._emit_debug_log_sync(
f"Failed to write config.json: {e}", __event_call__
)
except Exception as e:
self._emit_debug_log_sync(f"Config sync check failed: {e}", __event_call__)
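The config sync above is a read-modify-write of a JSON file that only touches disk when the value actually changes. A standalone sketch of that pattern using a temp directory (function name is illustrative):

```python
import json
import os
import tempfile


def sync_reasoning_effort(config_path: str, effort: str) -> bool:
    """Write reasoning_effort into a JSON config; return True if the file changed."""
    data = {}
    if os.path.exists(config_path):
        try:
            with open(config_path) as f:
                data = json.load(f)
        except Exception:
            data = {}  # tolerate corrupt or partial files
    if data.get("reasoning_effort") == effort:
        return False  # no change → no write
    data["reasoning_effort"] = effort
    with open(config_path, "w") as f:
        json.dump(data, f, indent=4)
    return True


with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "config.json")
    print(sync_reasoning_effort(path, "high"))   # True: value written
    print(sync_reasoning_effort(path, "high"))   # False: already up to date
```

Skipping the write when nothing changed keeps the CLI from seeing spurious config-file modifications on every request.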
async def _update_copilot_cli(self, cli_path: str, __event_call__=None):
"""Async task: update the Copilot CLI if needed."""
import time
try:
# Check frequency (e.g. once per hour)
now = time.time()
if now - self._last_update_check < 3600:
return
self._last_update_check = now
if self.valves.DEBUG:
self._emit_debug_log_sync(
"Triggering async Copilot CLI update check...", __event_call__
)
# Run the update in a subprocess
process = await asyncio.create_subprocess_exec(
cli_path,
"update",
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
stdout, stderr = await process.communicate()
if self.valves.DEBUG and process.returncode == 0:
self._emit_debug_log_sync("Copilot CLI update check completed", __event_call__)
elif process.returncode != 0 and self.valves.DEBUG:
self._emit_debug_log_sync(
f"Copilot CLI update failed: {stderr.decode()}", __event_call__
)
except Exception as e:
if self.valves.DEBUG:
self._emit_debug_log_sync(f"CLI update task error: {e}", __event_call__)
async def _pipe_impl(
self,
body: dict,
@@ -956,12 +1399,23 @@ class Pipe:
__event_call__=None,
) -> Union[str, AsyncGenerator]:
self._setup_env(__event_call__)
cwd = self._get_workspace_dir()
if self.valves.DEBUG:
await self._emit_debug_log(f"Current working directory: {cwd}", __event_call__)
# CLI Update Check
if os.environ.get("COPILOT_CLI_PATH"):
asyncio.create_task(
self._update_copilot_cli(os.environ["COPILOT_CLI_PATH"], __event_call__)
)
if not self.valves.GH_TOKEN:
return "Error: Please configure GH_TOKEN in Valves."
# Parse the user-selected model
request_model = body.get("model", "")
- real_model_id = self.valves.MODEL_ID  # Default value
real_model_id = request_model
# Determine the effective reasoning effort and debug settings
if __user__:
@@ -979,6 +1433,10 @@ class Pipe:
if user_valves.REASONING_EFFORT
else self.valves.REASONING_EFFORT
)
# Sync config for reasoning effort (Legacy/Fallback)
self._sync_copilot_config(effective_reasoning_effort, __event_call__)
# If the user enabled DEBUG, override the global setting
if user_valves.DEBUG:
self.valves.DEBUG = True
@@ -995,6 +1453,14 @@ class Pipe:
await self._emit_debug_log(
f"Using selected model: {real_model_id}", __event_call__
)
elif __metadata__ and __metadata__.get("base_model_id"):
base_model_id = __metadata__.get("base_model_id", "")
if base_model_id.startswith(f"{self.id}-"):
real_model_id = base_model_id[len(f"{self.id}-") :]
await self._emit_debug_log(
f"Using base model: {real_model_id} (inherited from custom model {request_model})",
__event_call__,
)
messages = body.get("messages", [])
if not messages:
@@ -1019,32 +1485,66 @@ class Pipe:
is_streaming = body.get("stream", False)
await self._emit_debug_log(f"Streaming requested: {is_streaming}", __event_call__)
# Handle multimodal input (images) and extract the last message text
last_text, attachments = self._process_images(messages, __event_call__)
client = CopilotClient(self._build_client_config(body))
should_stop_client = True
try:
await client.start()
# Initialize custom tools
- custom_tools = self._initialize_custom_tools()
custom_tools = await self._initialize_custom_tools(
__user__=__user__, __event_call__=__event_call__
)
if custom_tools:
tool_names = [t.name for t in custom_tools]
await self._emit_debug_log(
f"Enabled {len(custom_tools)} custom tools: {tool_names}",
__event_call__,
)
# Print each tool's description in detail (for debugging)
if self.valves.DEBUG:
for t in custom_tools:
await self._emit_debug_log(
f"📋 Tool detail: {t.name} - {t.description[:100]}...",
__event_call__,
)
# Check MCP servers
mcp_servers = self._parse_mcp_servers()
mcp_server_names = list(mcp_servers.keys()) if mcp_servers else []
if mcp_server_names:
await self._emit_debug_log(
f"🔌 MCP servers configured: {mcp_server_names}",
__event_call__,
)
else:
await self._emit_debug_log(
"No MCP servers found in OpenWebUI Connections.",
__event_call__,
)
session = None
if chat_id:
try:
# Reuse the already-parsed mcp_servers
resume_config = (
{"mcp_servers": mcp_servers} if mcp_servers else None
)
# Try to resume the session using chat_id directly as the session_id
- session = await client.resume_session(chat_id)
session = (
await client.resume_session(chat_id, resume_config)
if resume_config
else await client.resume_session(chat_id)
)
await self._emit_debug_log(
f"Resumed session via ChatID: {chat_id}", __event_call__
)
# Show workspace info if available
- if self.valves.DEBUG and self.valves.SHOW_WORKSPACE_INFO:
if self.valves.DEBUG:
if session.workspace_path:
await self._emit_debug_log(
f"Session workspace: {session.workspace_path}",
@@ -1107,7 +1607,7 @@ class Pipe:
await self._emit_debug_log(f"Created new session: {new_sid}", __event_call__)
# Show workspace info for the new session
- if self.valves.DEBUG and self.valves.SHOW_WORKSPACE_INFO:
if self.valves.DEBUG:
if session.workspace_path:
await self._emit_debug_log(
f"Session workspace: {session.workspace_path}",
@@ -1133,6 +1633,9 @@ class Pipe:
else:
init_msg = f"> [Debug] Resumed session via ChatID: {chat_id}\n"
if mcp_server_names:
init_msg += f"> [Debug] 🔌 Connected MCP servers: {', '.join(mcp_server_names)}\n"
return self.stream_response(
client,
session,