diff --git a/README.md b/README.md index ae7aff0..cdeae15 100644 --- a/README.md +++ b/README.md @@ -43,6 +43,7 @@ A collection of enhancements, plugins, and prompts for [OpenWebUI](https://githu Located in the `plugins/` directory, containing Python-based enhancements: #### Actions + - **Smart Mind Map** (`smart-mind-map`): Generates interactive mind maps from text. - **Smart Infographic** (`infographic`): Transforms text into professional infographics using AntV. - **Flash Card** (`flash-card`): Quickly generates beautiful flashcards for learning. @@ -51,12 +52,18 @@ Located in the `plugins/` directory, containing Python-based enhancements: - **Export to Word** (`export_to_docx`): Exports chat history to Word documents. #### Filters + - **Async Context Compression** (`async-context-compression`): Optimizes token usage via context compression. - **Context Enhancement** (`context_enhancement_filter`): Enhances chat context. - **Folder Memory** (`folder-memory`): Automatically extracts project rules from conversations and injects them into the folder's system prompt. - **Markdown Normalizer** (`markdown_normalizer`): Fixes common Markdown formatting issues in LLM outputs. +#### Pipes + +- **GitHub Copilot SDK** (`github-copilot-sdk`): Official GitHub Copilot SDK integration. Supports dynamic models, multi-turn conversation, streaming, multimodal input, and infinite sessions. + #### Pipelines + - **MoE Prompt Refiner** (`moe_prompt_refiner`): Refines prompts for Mixture of Experts (MoE) summary requests to generate high-quality comprehensive reports. ### 🎯 Prompts @@ -101,6 +108,7 @@ This project is a collection of resources and does not require a Python environm ### Contributing If you have great prompts or plugins to share: + 1. Fork this repository. 2. Add your files to the appropriate `prompts/` or `plugins/` directory. 3. Submit a Pull Request. 
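The new Pipes entry above advertises dynamic model support. In the plugin source later in this diff, each Copilot model is exposed to OpenWebUI under a pipe-prefixed ID (`copilotsdk-<model>`), and the prefix is stripped again when a request arrives. A minimal sketch of that mapping, under the assumption of the `copilotsdk` pipe id used by the plugin (helper names here are illustrative, not part of the plugin API):

```python
# Sketch: how a multi-model pipe maps OpenWebUI model IDs to backend model names.
# PIPE_ID mirrors the plugin's self.id; function names are illustrative.

PIPE_ID = "copilotsdk"

def list_pipe_models(backend_models):
    """Expose backend models to OpenWebUI under a pipe-prefixed ID."""
    return [{"id": f"{PIPE_ID}-{m}", "name": m} for m in backend_models]

def resolve_model(request_model: str, default: str) -> str:
    """Strip the pipe prefix to recover the backend model name."""
    prefix = f"{PIPE_ID}-"
    if request_model.startswith(prefix):
        return request_model[len(prefix):]
    return default

models = list_pipe_models(["gpt-5-mini", "claude-sonnet-4.5"])
print(models[0]["id"])  # copilotsdk-gpt-5-mini
print(resolve_model("copilotsdk-gpt-5-mini", "claude-sonnet-4.5"))  # gpt-5-mini
```

This prefixing is what lets several models from one pipe coexist in OpenWebUI's model picker without colliding with models from other pipes.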
diff --git a/README_CN.md b/README_CN.md index 35591c5..3557dc2 100644 --- a/README_CN.md +++ b/README_CN.md @@ -40,6 +40,7 @@ OpenWebUI 增强功能集合。包含个人开发与收集的插件、提示词 位于 `plugins/` 目录,包含各类 Python 编写的功能增强插件: #### Actions (交互增强) + - **Smart Mind Map** (`smart-mind-map`): 智能分析文本并生成交互式思维导图。 - **Smart Infographic** (`infographic`): 基于 AntV 的智能信息图生成工具。 - **Flash Card** (`flash-card`): 快速生成精美的学习记忆卡片。 @@ -48,6 +49,7 @@ OpenWebUI 增强功能集合。包含个人开发与收集的插件、提示词 - **Export to Word** (`export_to_docx`): 将对话内容导出为 Word 文档。 #### Filters (消息处理) + - **Async Context Compression** (`async-context-compression`): 异步上下文压缩,优化 Token 使用。 - **Context Enhancement** (`context_enhancement_filter`): 上下文增强过滤器。 - **Folder Memory** (`folder-memory`): 自动从对话中提取项目规则并注入到文件夹系统提示词中。 @@ -57,9 +59,12 @@ OpenWebUI 增强功能集合。包含个人开发与收集的插件、提示词 - **Multi-Model Context Merger** (`multi_model_context_merger`): 自动合并并注入多模型回答的上下文。 #### Pipes (模型管道) + +- **GitHub Copilot SDK** (`github-copilot-sdk`): GitHub Copilot SDK 官方集成。支持动态模型、多轮对话、流式输出、图片输入及无限会话。 - **Gemini Manifold** (`gemini_mainfold`): 集成 Gemini 模型的管道。 #### Pipelines (工作流管道) + - **MoE Prompt Refiner** (`moe_prompt_refiner`): 优化多模型 (MoE) 汇总请求的提示词,生成高质量的综合报告。 ### 🎯 提示词 (Prompts) @@ -107,6 +112,7 @@ OpenWebUI 增强功能集合。包含个人开发与收集的插件、提示词 ### 贡献代码 如果你有优质的提示词或插件想要分享: + 1. Fork 本仓库。 2. 将你的文件添加到对应的 `prompts/` 或 `plugins/` 目录。 3. 
提交 Pull Request。 diff --git a/docs/plugins/pipes/github-copilot-sdk.md b/docs/plugins/pipes/github-copilot-sdk.md new file mode 100644 index 0000000..6cf084e --- /dev/null +++ b/docs/plugins/pipes/github-copilot-sdk.md @@ -0,0 +1,84 @@ +# GitHub Copilot SDK Pipe for OpenWebUI + +**Author:** [Fu-Jie](https://github.com/Fu-Jie/awesome-openwebui) | **Version:** 0.1.1 | **Project:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **License:** MIT + +This is an advanced Pipe function for [OpenWebUI](https://github.com/open-webui/open-webui) that allows you to use GitHub Copilot models (such as `gpt-5`, `gpt-5-mini`, `claude-sonnet-4.5`) directly within OpenWebUI. It is built upon the official [GitHub Copilot SDK for Python](https://github.com/github/copilot-sdk), providing a native integration experience. + +## 🚀 What's New (v0.1.1) + +* **♾️ Infinite Sessions**: Automatic context compaction for long-running conversations. No more context limit errors! +* **🧠 Thinking Process**: Real-time display of model reasoning/thinking process (for supported models). +* **📂 Workspace Control**: Restricted workspace directory for secure file operations. +* **🔍 Model Filtering**: Exclude specific models using keywords (e.g., `codex`, `haiku`). +* **💾 Session Persistence**: Improved session resume logic using OpenWebUI chat ID mapping. + +## ✨ Core Features + +* **🚀 Official SDK Integration**: Built on the official SDK for stability and reliability. +* **💬 Multi-turn Conversation**: Automatically concatenates history context so Copilot understands your previous messages. +* **🌊 Streaming Output**: Supports typewriter effect for fast responses. +* **🖼️ Multimodal Support**: Supports image uploads, automatically converting them to attachments for Copilot (requires model support). +* **🛠️ Zero-config Installation**: Automatically detects and downloads the GitHub Copilot CLI, ready to use out of the box.
+* **🔑 Secure Authentication**: Supports Fine-grained Personal Access Tokens for minimized permissions. +* **🐛 Debug Mode**: Built-in detailed log output for easy connection troubleshooting. + +## 📦 Installation & Usage + +### 1. Import Function + +1. Open OpenWebUI. +2. Go to **Workspace** -> **Functions**. +3. Click **+** (Create Function). +4. Paste the content of `github_copilot_sdk.py` (or `github_copilot_sdk_cn.py` for Chinese) completely. +5. Save. + +### 2. Configure Valves (Settings) + +Find "GitHub Copilot" in the function list and click the **⚙️ (Valves)** icon to configure: + +| Parameter | Description | Default | +| :--- | :--- | :--- | +| **GH_TOKEN** | **(Required)** Your GitHub Token. | - | +| **MODEL_ID** | Default model name, used when dynamic model fetching fails. Recommended `gpt-5-mini` or `gpt-5`. | `claude-sonnet-4.5` | +| **CLI_PATH** | Path to the Copilot CLI. Will download automatically if not found. | `/usr/local/bin/copilot` | +| **DEBUG** | Whether to enable debug logs (output to chat). | `False` | +| **SHOW_THINKING** | Show model reasoning/thinking process. | `True` | +| **EXCLUDE_KEYWORDS** | Exclude models containing these keywords (comma separated). | - | +| **WORKSPACE_DIR** | Restricted workspace directory for file operations. | - | +| **INFINITE_SESSION** | Enable Infinite Sessions (automatic context compaction). | `True` | +| **COMPACTION_THRESHOLD** | Background compaction threshold (0.0-1.0). | `0.8` | +| **BUFFER_THRESHOLD** | Buffer exhaustion threshold (0.0-1.0). | `0.95` | + +### 3. Get GH_TOKEN + +For security, it is recommended to use a **Fine-grained Personal Access Token**: + +1. Visit [GitHub Token Settings](https://github.com/settings/tokens?type=beta). +2. Click **Generate new token**. +3. **Repository access**: Select `All repositories` or `Public Repositories`. +4. **Permissions**: + * Click **Account permissions**. + * Find **Copilot Requests**, select **Read and write** (or Access). +5. Generate and copy the Token.
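The `EXCLUDE_KEYWORDS` valve listed in the table above filters the dynamically fetched model list by case-insensitive substring match. A minimal sketch of that filtering, mirroring the comma-splitting logic in the plugin source (the model names are illustrative):

```python
def filter_models(model_ids, exclude_keywords: str):
    """Drop models whose ID contains any comma-separated keyword (case-insensitive)."""
    exclude = [k.strip().lower() for k in exclude_keywords.split(",") if k.strip()]
    return [m for m in model_ids if not any(kw in m.lower() for kw in exclude)]

models = ["gpt-5-mini", "gpt-5-codex", "claude-haiku-4.5", "claude-sonnet-4.5"]
print(filter_models(models, "codex, haiku"))
# ['gpt-5-mini', 'claude-sonnet-4.5']
```

Because the match is a substring test, a keyword like `haiku` removes every model variant containing it; leave the valve empty to keep all models.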
+ +## 📋 Dependencies + +This Pipe will automatically attempt to install the following dependencies: + +* `github-copilot-sdk` (Python package) +* `github-copilot-cli` (Binary file, installed via official script) + +## ⚠️ FAQ + +* **Stuck on "Waiting..."**: + * Check if `GH_TOKEN` is correct and has `Copilot Requests` permission. + * Try changing `MODEL_ID` to `gpt-4o` or `copilot-chat`. +* **Images not recognized**: + * Ensure `MODEL_ID` is a model that supports multimodal input. +* **CLI Installation Failed**: + * Ensure the OpenWebUI container has internet access. + * You can manually download the CLI and specify `CLI_PATH` in Valves. + +## 📄 License + +MIT diff --git a/docs/plugins/pipes/github-copilot-sdk.zh.md b/docs/plugins/pipes/github-copilot-sdk.zh.md new file mode 100644 index 0000000..842dc60 --- /dev/null +++ b/docs/plugins/pipes/github-copilot-sdk.zh.md @@ -0,0 +1,84 @@ +# GitHub Copilot SDK 官方管道 + +**作者:** [Fu-Jie](https://github.com/Fu-Jie/awesome-openwebui) | **版本:** 0.1.1 | **项目:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **许可证:** MIT + +这是一个用于 [OpenWebUI](https://github.com/open-webui/open-webui) 的高级 Pipe 函数,允许你直接在 OpenWebUI 中使用 GitHub Copilot 模型(如 `gpt-5`, `gpt-5-mini`, `claude-sonnet-4.5`)。它基于官方 [GitHub Copilot SDK for Python](https://github.com/github/copilot-sdk) 构建,提供了原生级的集成体验。 + +## 🚀 最新特性 (v0.1.1) + +* **♾️ 无限会话 (Infinite Sessions)**:支持长对话的自动上下文压缩,告别上下文超限错误! +* **🧠 思考过程展示**:实时显示模型的推理/思考过程(需模型支持)。 +* **📂 工作目录控制**:支持设置受限工作目录,确保文件操作安全。 +* **🔍 模型过滤**:支持通过关键词排除特定模型(如 `codex`, `haiku`)。 +* **💾 会话持久化**:改进的会话恢复逻辑,直接关联 OpenWebUI 聊天 ID,连接更稳定。 + +## ✨ 核心特性 + +* **🚀 官方 SDK 集成**:基于官方 SDK,稳定可靠。 +* **💬 多轮对话支持**:自动拼接历史上下文,Copilot 能理解你的前文。 +* **🌊 流式输出 (Streaming)**:支持打字机效果,响应迅速。 +* **🖼️ 多模态支持**:支持上传图片,自动转换为附件发送给 Copilot(需模型支持)。 +* **🛠️ 零配置安装**:自动检测并下载 GitHub Copilot CLI,开箱即用。 +* **🔑 安全认证**:支持 Fine-grained Personal Access Tokens,权限最小化。 +* **🐛 调试模式**:内置详细的日志输出,方便排查连接问题。 + +## 📦 安装与使用 + +### 1. 导入函数 + +1. 打开 OpenWebUI。 +2.
进入 **Workspace** -> **Functions**。 +3. 点击 **+** (创建函数)。 +4. 将 `github_copilot_sdk_cn.py` 的内容完整粘贴进去。 +5. 保存。 + +### 2. 配置 Valves (设置) + +在函数列表中找到 "GitHub Copilot",点击 **⚙️ (Valves)** 图标进行配置: + +| 参数 | 说明 | 默认值 | +| :--- | :--- | :--- | +| **GH_TOKEN** | **(必填)** 你的 GitHub Token。 | - | +| **MODEL_ID** | 使用的模型名称。推荐 `gpt-5-mini` 或 `gpt-5`。 | `gpt-5-mini` | +| **CLI_PATH** | Copilot CLI 的路径。如果未找到会自动下载。 | `/usr/local/bin/copilot` | +| **DEBUG** | 是否开启调试日志(输出到对话框)。 | `True` | +| **SHOW_THINKING** | 是否显示模型推理/思考过程。 | `True` | +| **EXCLUDE_KEYWORDS** | 排除包含这些关键词的模型 (逗号分隔)。 | - | +| **WORKSPACE_DIR** | 文件操作的受限工作目录。 | - | +| **INFINITE_SESSION** | 启用无限会话 (自动上下文压缩)。 | `True` | +| **COMPACTION_THRESHOLD** | 后台压缩阈值 (0.0-1.0)。 | `0.8` | +| **BUFFER_THRESHOLD** | 缓冲耗尽阈值 (0.0-1.0)。 | `0.95` | + +### 3. 获取 GH_TOKEN + +为了安全起见,推荐使用 **Fine-grained Personal Access Token**: + +1. 访问 [GitHub Token Settings](https://github.com/settings/tokens?type=beta)。 +2. 点击 **Generate new token**。 +3. **Repository access**: 选择 `All repositories` 或 `Public Repositories`。 +4. **Permissions**: + * 点击 **Account permissions**。 + * 找到 **Copilot Requests**,选择 **Read and write** (或 Access)。 +5. 生成并复制 Token。 + +## 📋 依赖说明 + +该 Pipe 会自动尝试安装以下依赖(如果环境中缺失): + +* `github-copilot-sdk` (Python 包) +* `github-copilot-cli` (二进制文件,通过官方脚本安装) + +## ⚠️ 常见问题 + +* **一直显示 "Waiting..."**: + * 检查 `GH_TOKEN` 是否正确且拥有 `Copilot Requests` 权限。 + * 尝试将 `MODEL_ID` 改为 `gpt-4o` 或 `copilot-chat`。 +* **图片无法识别**: + * 确保 `MODEL_ID` 是支持多模态的模型。 +* **CLI 安装失败**: + * 确保 OpenWebUI 容器有外网访问权限。 + * 你可以手动下载 CLI 并挂载到容器中,然后在 Valves 中指定 `CLI_PATH`。 + +## 📄 许可证 + +MIT diff --git a/docs/plugins/pipes/index.md b/docs/plugins/pipes/index.md index 5b6346f..cb83dee 100644 --- a/docs/plugins/pipes/index.md +++ b/docs/plugins/pipes/index.md @@ -15,7 +15,7 @@ Pipes allow you to: ## Available Pipe Plugins - +- [GitHub Copilot SDK](github-copilot-sdk.md) (v0.1.1) - Official GitHub Copilot SDK integration. 
Supports dynamic models, multi-turn conversation, streaming, multimodal input, and infinite sessions. --- diff --git a/docs/plugins/pipes/index.zh.md b/docs/plugins/pipes/index.zh.md index 5d825df..7742fd1 100644 --- a/docs/plugins/pipes/index.zh.md +++ b/docs/plugins/pipes/index.zh.md @@ -15,7 +15,7 @@ Pipes 可以用于: ## 可用的 Pipe 插件 - +- [GitHub Copilot SDK](github-copilot-sdk.zh.md) (v0.1.1) - GitHub Copilot SDK 官方集成。支持动态模型、多轮对话、流式输出、图片输入及无限会话。 --- diff --git a/plugins/pipes/github-copilot-sdk/README.md b/plugins/pipes/github-copilot-sdk/README.md new file mode 100644 index 0000000..ad5b94a --- /dev/null +++ b/plugins/pipes/github-copilot-sdk/README.md @@ -0,0 +1,81 @@ +# GitHub Copilot SDK Pipe for OpenWebUI + +**Author:** [Fu-Jie](https://github.com/Fu-Jie/awesome-openwebui) | **Version:** 0.1.1 | **Project:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **License:** MIT + +This is an advanced Pipe function for [OpenWebUI](https://github.com/open-webui/open-webui) that allows you to use GitHub Copilot models (such as `gpt-5`, `gpt-5-mini`, `claude-sonnet-4.5`) directly within OpenWebUI. It is built upon the official [GitHub Copilot SDK for Python](https://github.com/github/copilot-sdk), providing a native integration experience. + +## 🚀 What's New (v0.1.1) + +* **♾️ Infinite Sessions**: Automatic context compaction for long-running conversations. No more context limit errors! +* **🧠 Thinking Process**: Real-time display of model reasoning/thinking process (for supported models). +* **📂 Workspace Control**: Restricted workspace directory for secure file operations. +* **🔍 Model Filtering**: Exclude specific models using keywords (e.g., `codex`, `haiku`). +* **💾 Session Persistence**: Improved session resume logic using OpenWebUI chat ID mapping. + +## ✨ Core Features + +* **🚀 Official SDK Integration**: Built on the official SDK for stability and reliability. 
+* **💬 Multi-turn Conversation**: Automatically concatenates history context so Copilot understands your previous messages. +* **🌊 Streaming Output**: Supports typewriter effect for fast responses. +* **🖼️ Multimodal Support**: Supports image uploads, automatically converting them to attachments for Copilot (requires model support). +* **🛠️ Zero-config Installation**: Automatically detects and downloads the GitHub Copilot CLI, ready to use out of the box. +* **🔑 Secure Authentication**: Supports Fine-grained Personal Access Tokens for minimized permissions. +* **🐛 Debug Mode**: Built-in detailed log output for easy connection troubleshooting. + +## 📦 Installation & Usage + +### 1. Import Function + +1. Open OpenWebUI. +2. Go to **Workspace** -> **Functions**. +3. Click **+** (Create Function). +4. Paste the content of `github_copilot_sdk.py` (or `github_copilot_sdk_cn.py` for Chinese) completely. +5. Save. + +### 2. Configure Valves (Settings) + +Find "GitHub Copilot" in the function list and click the **⚙️ (Valves)** icon to configure: + +| Parameter | Description | Default | +| :--- | :--- | :--- | +| **GH_TOKEN** | **(Required)** Your GitHub Token. | - | +| **MODEL_ID** | Default model name, used when dynamic model fetching fails. Recommended `gpt-5-mini` or `gpt-5`. | `claude-sonnet-4.5` | +| **CLI_PATH** | Path to the Copilot CLI. Will download automatically if not found. | `/usr/local/bin/copilot` | +| **DEBUG** | Whether to enable debug logs (output to chat). | `False` | +| **SHOW_THINKING** | Show model reasoning/thinking process. | `True` | +| **EXCLUDE_KEYWORDS** | Exclude models containing these keywords (comma separated). | - | +| **WORKSPACE_DIR** | Restricted workspace directory for file operations. | - | +| **INFINITE_SESSION** | Enable Infinite Sessions (automatic context compaction). | `True` | +| **COMPACTION_THRESHOLD** | Background compaction threshold (0.0-1.0). | `0.8` | +| **BUFFER_THRESHOLD** | Buffer exhaustion threshold (0.0-1.0).
| `0.95` | +| **TIMEOUT** | Timeout for each stream chunk (seconds). | `300` | + +### 3. Get GH_TOKEN + +For security, it is recommended to use a **Fine-grained Personal Access Token**: + +1. Visit [GitHub Token Settings](https://github.com/settings/tokens?type=beta). +2. Click **Generate new token**. +3. **Repository access**: Select `All repositories` or `Public Repositories`. +4. **Permissions**: + * Click **Account permissions**. + * Find **Copilot Requests**, select **Read and write** (or Access). +5. Generate and copy the Token. + +## 📋 Dependencies + +This Pipe will automatically attempt to install the following dependencies: + +* `github-copilot-sdk` (Python package) +* `github-copilot-cli` (Binary file, installed via official script) + +## ⚠️ FAQ + +* **Stuck on "Waiting..."**: + * Check if `GH_TOKEN` is correct and has `Copilot Requests` permission. + * Try changing `MODEL_ID` to `gpt-4o` or `copilot-chat`. +* **Images not recognized**: + * Ensure `MODEL_ID` is a model that supports multimodal input. +* **CLI Installation Failed**: + * Ensure the OpenWebUI container has internet access. + * You can manually download the CLI and specify `CLI_PATH` in Valves. diff --git a/plugins/pipes/github-copilot-sdk/README_CN.md b/plugins/pipes/github-copilot-sdk/README_CN.md new file mode 100644 index 0000000..5e7eadb --- /dev/null +++ b/plugins/pipes/github-copilot-sdk/README_CN.md @@ -0,0 +1,81 @@ +# GitHub Copilot SDK 官方管道 + +**作者:** [Fu-Jie](https://github.com/Fu-Jie/awesome-openwebui) | **版本:** 0.1.1 | **项目:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **许可证:** MIT + +这是一个用于 [OpenWebUI](https://github.com/open-webui/open-webui) 的高级 Pipe 函数,允许你直接在 OpenWebUI 中使用 GitHub Copilot 模型(如 `gpt-5`, `gpt-5-mini`, `claude-sonnet-4.5`)。它基于官方 [GitHub Copilot SDK for Python](https://github.com/github/copilot-sdk) 构建,提供了原生级的集成体验。 + +## 🚀 最新特性 (v0.1.1) + +* **♾️ 无限会话 (Infinite Sessions)**:支持长对话的自动上下文压缩,告别上下文超限错误! 
+* **🧠 思考过程展示**:实时显示模型的推理/思考过程(需模型支持)。 +* **📂 工作目录控制**:支持设置受限工作目录,确保文件操作安全。 +* **🔍 模型过滤**:支持通过关键词排除特定模型(如 `codex`, `haiku`)。 +* **💾 会话持久化**: 改进的会话恢复逻辑,直接关联 OpenWebUI 聊天 ID,连接更稳定。 + +## ✨ 核心特性 + +* **🚀 官方 SDK 集成**:基于官方 SDK,稳定可靠。 +* **💬 多轮对话支持**:自动拼接历史上下文,Copilot 能理解你的前文。 +* **🌊 流式输出 (Streaming)**:支持打字机效果,响应迅速。 +* **🖼️ 多模态支持**:支持上传图片,自动转换为附件发送给 Copilot(需模型支持)。 +* **🛠️ 零配置安装**:自动检测并下载 GitHub Copilot CLI,开箱即用。 +* **🔑 安全认证**:支持 Fine-grained Personal Access Tokens,权限最小化。 +* **🐛 调试模式**:内置详细的日志输出,方便排查连接问题。 + +## 📦 安装与使用 + +### 1. 导入函数 + +1. 打开 OpenWebUI。 +2. 进入 **Workspace** -> **Functions**。 +3. 点击 **+** (创建函数)。 +4. 将 `github_copilot_sdk_cn.py` 的内容完整粘贴进去。 +5. 保存。 + +### 2. 配置 Valves (设置) + +在函数列表中找到 "GitHub Copilot",点击 **⚙️ (Valves)** 图标进行配置: + +| 参数 | 说明 | 默认值 | +| :--- | :--- | :--- | +| **GH_TOKEN** | **(必填)** 你的 GitHub Token。 | - | +| **MODEL_ID** | 使用的模型名称。 | `gpt-5-mini` | +| **CLI_PATH** | Copilot CLI 的路径。如果未找到会自动下载。 | `/usr/local/bin/copilot` | +| **DEBUG** | 是否开启调试日志(输出到对话框)。 | `True` | +| **SHOW_THINKING** | 是否显示模型推理/思考过程。 | `True` | +| **EXCLUDE_KEYWORDS** | 排除包含这些关键词的模型 (逗号分隔)。 | - | +| **WORKSPACE_DIR** | 文件操作的受限工作目录。 | - | +| **INFINITE_SESSION** | 启用无限会话 (自动上下文压缩)。 | `True` | +| **COMPACTION_THRESHOLD** | 后台压缩阈值 (0.0-1.0)。 | `0.8` | +| **BUFFER_THRESHOLD** | 缓冲耗尽阈值 (0.0-1.0)。 | `0.95` | +| **TIMEOUT** | 流式数据块超时时间 (秒)。 | `300` | + +### 3. 获取 GH_TOKEN + +为了安全起见,推荐使用 **Fine-grained Personal Access Token**: + +1. 访问 [GitHub Token Settings](https://github.com/settings/tokens?type=beta)。 +2. 点击 **Generate new token**。 +3. **Repository access**: 选择 `All repositories` 或 `Public Repositories`。 +4. **Permissions**: + * 点击 **Account permissions**。 + * 找到 **Copilot Requests**,选择 **Read and write** (或 Access)。 +5. 
生成并复制 Token。 + +## 📋 依赖说明 + +该 Pipe 会自动尝试安装以下依赖(如果环境中缺失): + +* `github-copilot-sdk` (Python 包) +* `github-copilot-cli` (二进制文件,通过官方脚本安装) + +## ⚠️ 常见问题 + +* **一直显示 "Waiting..."**: + * 检查 `GH_TOKEN` 是否正确且拥有 `Copilot Requests` 权限。 + * 尝试将 `MODEL_ID` 改为 `gpt-4o` 或 `copilot-chat`。 +* **图片无法识别**: + * 确保 `MODEL_ID` 是支持多模态的模型。 +* **CLI 安装失败**: + * 确保 OpenWebUI 容器有外网访问权限。 + * 你可以手动下载 CLI 并挂载到容器中,然后在 Valves 中指定 `CLI_PATH`。 diff --git a/plugins/pipes/github-copilot-sdk/github_copilot_sdk.png b/plugins/pipes/github-copilot-sdk/github_copilot_sdk.png new file mode 100644 index 0000000..e5f540e Binary files /dev/null and b/plugins/pipes/github-copilot-sdk/github_copilot_sdk.png differ diff --git a/plugins/pipes/github-copilot-sdk/github_copilot_sdk.py b/plugins/pipes/github-copilot-sdk/github_copilot_sdk.py new file mode 100644 index 0000000..de3f2bf --- /dev/null +++ b/plugins/pipes/github-copilot-sdk/github_copilot_sdk.py @@ -0,0 +1,689 @@ +""" +title: GitHub Copilot Official SDK Pipe (Dynamic Models) +author: Fu-Jie +author_url: https://github.com/Fu-Jie/awesome-openwebui +funding_url: https://github.com/open-webui +description: Integrate GitHub Copilot SDK. Supports dynamic models, multi-turn conversation, streaming, multimodal input, and infinite sessions (context compaction). 
+version: 0.1.1 +requirements: github-copilot-sdk +""" + +import os +import time +import json +import base64 +import tempfile +import asyncio +import logging +import shutil +import subprocess +import sys +from typing import Optional, Union, AsyncGenerator, List, Any, Dict +from pydantic import BaseModel, Field +from datetime import datetime, timezone +import contextlib + +# Setup logger +logger = logging.getLogger(__name__) + +# Global client storage +_SHARED_CLIENT = None +_SHARED_TOKEN = "" +_CLIENT_LOCK = asyncio.Lock() + + +class Pipe: + class Valves(BaseModel): + GH_TOKEN: str = Field( + default="", + description="GitHub Fine-grained Token (Requires 'Copilot Requests' permission)", + ) + MODEL_ID: str = Field( + default="claude-sonnet-4.5", + description="Default Copilot model name (used when dynamic fetching fails)", + ) + CLI_PATH: str = Field( + default="/usr/local/bin/copilot", + description="Path to Copilot CLI", + ) + DEBUG: bool = Field( + default=False, + description="Enable technical debug logs (connection info, etc.)", + ) + SHOW_THINKING: bool = Field( + default=True, + description="Show model reasoning/thinking process", + ) + EXCLUDE_KEYWORDS: str = Field( + default="", + description="Exclude models containing these keywords (comma separated, e.g.: codex, haiku)", + ) + WORKSPACE_DIR: str = Field( + default="", + description="Restricted workspace directory for file operations. 
If empty, allows access to the current process directory.", + ) + INFINITE_SESSION: bool = Field( + default=True, + description="Enable Infinite Sessions (automatic context compaction)", + ) + COMPACTION_THRESHOLD: float = Field( + default=0.8, + description="Background compaction threshold (0.0-1.0)", + ) + BUFFER_THRESHOLD: float = Field( + default=0.95, + description="Buffer exhaustion threshold (0.0-1.0)", + ) + TIMEOUT: int = Field( + default=300, + description="Timeout for each stream chunk (seconds)", + ) + + def __init__(self): + self.type = "pipe" + self.id = "copilotsdk" + self.name = "copilotsdk" + self.valves = self.Valves() + self.temp_dir = tempfile.mkdtemp(prefix="copilot_images_") + self.thinking_started = False + self._model_cache = [] # Model list cache + + def __del__(self): + try: + shutil.rmtree(self.temp_dir) + except: + pass + + def _emit_debug_log(self, message: str): + """Emit debug log to frontend if DEBUG valve is enabled.""" + if self.valves.DEBUG: + print(f"[Copilot Pipe] {message}") + + def _get_user_context(self): + """Helper to get user context (placeholder for future use).""" + return {} + + def _get_chat_context( + self, body: dict, __metadata__: Optional[dict] = None + ) -> Dict[str, str]: + """ + Highly reliable chat context extraction logic. + Priority: __metadata__ > body['chat_id'] > body['metadata']['chat_id'] + """ + chat_id = "" + source = "none" + + # 1. Prioritize __metadata__ (most reliable source injected by OpenWebUI) + if __metadata__ and isinstance(__metadata__, dict): + chat_id = __metadata__.get("chat_id", "") + if chat_id: + source = "__metadata__" + + # 2. Then try body root + if not chat_id and isinstance(body, dict): + chat_id = body.get("chat_id", "") + if chat_id: + source = "body_root" + + # 3. 
Finally try body.metadata + if not chat_id and isinstance(body, dict): + body_metadata = body.get("metadata", {}) + if isinstance(body_metadata, dict): + chat_id = body_metadata.get("chat_id", "") + if chat_id: + source = "body_metadata" + + # Debug: Log ID source + if chat_id: + self._emit_debug_log(f"Extracted ChatID: {chat_id} (Source: {source})") + else: + # If still not found, log body keys for troubleshooting + keys = list(body.keys()) if isinstance(body, dict) else "not a dict" + self._emit_debug_log( + f"Warning: Failed to extract ChatID. Body keys: {keys}" + ) + + return { + "chat_id": str(chat_id).strip(), + } + + async def pipes(self) -> List[dict]: + """Dynamically fetch model list""" + # Return cache if available + if self._model_cache: + return self._model_cache + + self._emit_debug_log("Fetching model list dynamically...") + try: + self._setup_env() + if not self.valves.GH_TOKEN: + return [{"id": f"{self.id}-error", "name": "Error: GH_TOKEN not set"}] + + from copilot import CopilotClient + + client_config = {} + if os.environ.get("COPILOT_CLI_PATH"): + client_config["cli_path"] = os.environ["COPILOT_CLI_PATH"] + + client = CopilotClient(client_config) + try: + await client.start() + models = await client.list_models() + + # Update cache + self._model_cache = [] + exclude_list = [ + k.strip().lower() + for k in self.valves.EXCLUDE_KEYWORDS.split(",") + if k.strip() + ] + + models_with_info = [] + for m in models: + # Compatible with dict and object access + m_id = ( + m.get("id") if isinstance(m, dict) else getattr(m, "id", str(m)) + ) + m_name = ( + m.get("name") + if isinstance(m, dict) + else getattr(m, "name", m_id) + ) + m_policy = ( + m.get("policy") + if isinstance(m, dict) + else getattr(m, "policy", {}) + ) + m_billing = ( + m.get("billing") + if isinstance(m, dict) + else getattr(m, "billing", {}) + ) + + # Check policy state + state = ( + m_policy.get("state") + if isinstance(m_policy, dict) + else getattr(m_policy, "state", "enabled") + ) 
+ if state == "disabled": + continue + + # Filtering logic + if any(kw in m_id.lower() for kw in exclude_list): + continue + + # Get multiplier + multiplier = ( + m_billing.get("multiplier", 1) + if isinstance(m_billing, dict) + else getattr(m_billing, "multiplier", 1) + ) + + # Format display name + if multiplier == 0: + display_name = f"-🔥 {m_id} (unlimited)" + else: + display_name = f"-{m_id} ({multiplier}x)" + + models_with_info.append( + { + "id": f"{self.id}-{m_id}", + "name": display_name, + "multiplier": multiplier, + "raw_id": m_id, + } + ) + + # Sort: multiplier ascending, then raw_id ascending + models_with_info.sort(key=lambda x: (x["multiplier"], x["raw_id"])) + self._model_cache = [ + {"id": m["id"], "name": m["name"]} for m in models_with_info + ] + + self._emit_debug_log( + f"Successfully fetched {len(self._model_cache)} models (filtered)" + ) + return self._model_cache + except Exception as e: + self._emit_debug_log(f"Failed to fetch model list: {e}") + # Return default model on failure + return [ + { + "id": f"{self.id}-{self.valves.MODEL_ID}", + "name": f"GitHub Copilot ({self.valves.MODEL_ID})", + } + ] + finally: + await client.stop() + except Exception as e: + self._emit_debug_log(f"Pipes Error: {e}") + return [ + { + "id": f"{self.id}-{self.valves.MODEL_ID}", + "name": f"GitHub Copilot ({self.valves.MODEL_ID})", + } + ] + + async def _get_client(self): + """Helper to get or create a CopilotClient instance.""" + from copilot import CopilotClient + + client_config = {} + if os.environ.get("COPILOT_CLI_PATH"): + client_config["cli_path"] = os.environ["COPILOT_CLI_PATH"] + + client = CopilotClient(client_config) + await client.start() + return client + + def _setup_env(self): + cli_path = self.valves.CLI_PATH + found = False + + if os.path.exists(cli_path): + found = True + + if not found: + sys_path = shutil.which("copilot") + if sys_path: + cli_path = sys_path + found = True + + if not found: + try: + subprocess.run( + "curl -fsSL 
https://gh.io/copilot-install | bash", + shell=True, + check=True, + ) + if os.path.exists(self.valves.CLI_PATH): + cli_path = self.valves.CLI_PATH + found = True + except: + pass + + if found: + os.environ["COPILOT_CLI_PATH"] = cli_path + cli_dir = os.path.dirname(cli_path) + if cli_dir not in os.environ["PATH"]: + os.environ["PATH"] = f"{cli_dir}:{os.environ['PATH']}" + + if self.valves.GH_TOKEN: + os.environ["GH_TOKEN"] = self.valves.GH_TOKEN + os.environ["GITHUB_TOKEN"] = self.valves.GH_TOKEN + + def _process_images(self, messages): + attachments = [] + text_content = "" + if not messages: + return "", [] + last_msg = messages[-1] + content = last_msg.get("content", "") + + if isinstance(content, list): + for item in content: + if item.get("type") == "text": + text_content += item.get("text", "") + elif item.get("type") == "image_url": + image_url = item.get("image_url", {}).get("url", "") + if image_url.startswith("data:image"): + try: + header, encoded = image_url.split(",", 1) + ext = header.split(";")[0].split("/")[-1] + file_name = f"image_{len(attachments)}.{ext}" + file_path = os.path.join(self.temp_dir, file_name) + with open(file_path, "wb") as f: + f.write(base64.b64decode(encoded)) + attachments.append( + { + "type": "file", + "path": file_path, + "display_name": file_name, + } + ) + self._emit_debug_log(f"Image processed: {file_path}") + except Exception as e: + self._emit_debug_log(f"Image error: {e}") + else: + text_content = str(content) + return text_content, attachments + + async def pipe( + self, body: dict, __metadata__: Optional[dict] = None + ) -> Union[str, AsyncGenerator]: + self._setup_env() + if not self.valves.GH_TOKEN: + return "Error: Please configure GH_TOKEN in Valves." 
+ + # Parse user selected model + request_model = body.get("model", "") + real_model_id = self.valves.MODEL_ID # Default value + + if request_model.startswith(f"{self.id}-"): + real_model_id = request_model[len(f"{self.id}-") :] + self._emit_debug_log(f"Using selected model: {real_model_id}") + + messages = body.get("messages", []) + if not messages: + return "No messages." + + # Get Chat ID using improved helper + chat_ctx = self._get_chat_context(body, __metadata__) + chat_id = chat_ctx.get("chat_id") + + is_streaming = body.get("stream", False) + self._emit_debug_log(f"Request Streaming: {is_streaming}") + + last_text, attachments = self._process_images(messages) + + # Determine prompt strategy + # If we have a chat_id, we try to resume session. + # If resumed, we assume the session has history, so we only send the last message. + # If new session, we send full history (or at least the last few turns if we want to be safe, but let's send full for now). + + # However, to be robust against history edits in OpenWebUI, we might want to always send full history? + # Copilot SDK `create_session` doesn't take history. `session.send` appends. + # If we resume, we append. + # If user edited history, the session state is stale. + # For now, we implement "Resume if possible, else Create". + + prompt = "" + is_new_session = True + + try: + client = await self._get_client() + session = None + + if chat_id: + try: + # Try to resume session using chat_id as session_id + session = await client.resume_session(chat_id) + self._emit_debug_log(f"Resumed session using ChatID: {chat_id}") + is_new_session = False + except Exception: + # Resume failed, session might not exist on disk + self._emit_debug_log( + f"Session {chat_id} not found or expired, creating new." 
+ ) + session = None + + if session is None: + # Create new session + from copilot.types import SessionConfig, InfiniteSessionConfig + + # Infinite Session Config + infinite_session_config = None + if self.valves.INFINITE_SESSION: + infinite_session_config = InfiniteSessionConfig( + enabled=True, + background_compaction_threshold=self.valves.COMPACTION_THRESHOLD, + buffer_exhaustion_threshold=self.valves.BUFFER_THRESHOLD, + ) + + session_config = SessionConfig( + session_id=( + chat_id if chat_id else None + ), # Use chat_id as session_id + model=real_model_id, + streaming=body.get("stream", False), + infinite_sessions=infinite_session_config, + ) + + session = await client.create_session(config=session_config) + + new_sid = getattr(session, "session_id", getattr(session, "id", None)) + self._emit_debug_log(f"Created new session: {new_sid}") + + # Construct prompt + if is_new_session: + # For new session, send full conversation history + full_conversation = [] + for msg in messages[:-1]: + role = msg.get("role", "user").upper() + content = msg.get("content", "") + if isinstance(content, list): + content = " ".join( + [ + c.get("text", "") + for c in content + if c.get("type") == "text" + ] + ) + full_conversation.append(f"{role}: {content}") + full_conversation.append(f"User: {last_text}") + prompt = "\n\n".join(full_conversation) + else: + # For resumed session, only send the last message + prompt = last_text + + send_payload = {"prompt": prompt, "mode": "immediate"} + if attachments: + send_payload["attachments"] = attachments + + if body.get("stream", False): + # Determine session status message for UI + init_msg = "" + if self.valves.DEBUG: + if is_new_session: + new_sid = getattr( + session, "session_id", getattr(session, "id", "unknown") + ) + init_msg = f"> [Debug] Created new session: {new_sid}\n" + else: + init_msg = ( + f"> [Debug] Resumed session using ChatID: {chat_id}\n" + ) + + return self.stream_response(client, session, send_payload, init_msg) + 
else:
+            try:
+                response = await session.send_and_wait(send_payload)
+                return response.data.content if response else "Empty response."
+            finally:
+                # Destroy session object to free memory, but KEEP data on disk
+                await session.destroy()
+
+        except Exception as e:
+            self._emit_debug_log(f"Request Error: {e}")
+            return f"Error: {str(e)}"
+
+    async def stream_response(
+        self, client, session, send_payload, init_message: str = ""
+    ) -> AsyncGenerator:
+        queue = asyncio.Queue()
+        done = asyncio.Event()
+        self.thinking_started = False
+        has_content = False  # Track if any content has been yielded
+
+        def get_event_data(event, attr, default=""):
+            if hasattr(event, "data"):
+                data = event.data
+                if data is None:
+                    return default
+                if isinstance(data, (str, int, float, bool)):
+                    return str(data) if attr == "value" else default
+
+                if isinstance(data, dict):
+                    val = data.get(attr)
+                    if val is None:
+                        alt_attr = attr.replace("_", "") if "_" in attr else attr
+                        val = data.get(alt_attr)
+                    if val is None and "_" not in attr:
+                        # Try snake_case if camelCase failed
+                        import re
+
+                        snake_attr = re.sub(r"(?<!^)(?=[A-Z])", "_", attr).lower()
+                        val = data.get(snake_attr)
+                    return val if val is not None else default
+                return getattr(data, attr, default)
+            return default
+
+        def handler(event):
+            event_type = getattr(event, "type", "")
+
+            if event_type in [
+                "assistant.message_delta",
+                "assistant.message.delta",
+            ]:
+                delta = (
+                    get_event_data(event, "delta_content")
+                    or get_event_data(event, "deltaContent")
+                    or get_event_data(event, "content")
+                )
+                if delta:
+                    # Close an open thinking block before emitting answer text
+                    if self.thinking_started:
+                        queue.put_nowait("</think>\n")
+                        self.thinking_started = False
+                    queue.put_nowait(delta)
+
+            elif event_type in [
+                "assistant.reasoning_delta",
+                "assistant.reasoning.delta",
+                "assistant.reasoning",
+            ]:
+                delta = (
+                    get_event_data(event, "delta_content")
+                    or get_event_data(event, "deltaContent")
+                    or get_event_data(event, "content")
+                    or get_event_data(event, "text")
+                )
+                if delta:
+                    if not self.thinking_started and self.valves.SHOW_THINKING:
+                        queue.put_nowait("<think>\n")
+                        self.thinking_started = True
+                    if self.thinking_started:
+                        queue.put_nowait(delta)
+
+            elif event_type == "tool.execution_start":
+                # Try multiple possible fields for tool name/description
+                tool_name = (
+                    get_event_data(event, "toolName")
+                    or get_event_data(event, "name")
+                    or get_event_data(event, "description")
+                    or get_event_data(event, "tool_name")
+                    or "Unknown Tool"
+                )
+                if not self.thinking_started and
self.valves.SHOW_THINKING:
+                    queue.put_nowait("<think>\n")
+                    self.thinking_started = True
+                if self.thinking_started:
+                    queue.put_nowait(f"\nRunning Tool: {tool_name}...\n")
+                self._emit_debug_log(f"Tool Start: {tool_name}")
+
+            elif event_type == "tool.execution_complete":
+                if self.thinking_started:
+                    queue.put_nowait("Tool Completed.\n")
+                self._emit_debug_log("Tool Complete")
+
+            elif event_type == "session.compaction_start":
+                self._emit_debug_log("Session Compaction Started")
+
+            elif event_type == "session.compaction_complete":
+                self._emit_debug_log("Session Compaction Completed")
+
+            elif event_type == "session.idle":
+                done.set()
+            elif event_type == "session.error":
+                msg = get_event_data(event, "message", "Unknown Error")
+                queue.put_nowait(f"\n[Error: {msg}]")
+                done.set()
+
+        unsubscribe = session.on(handler)
+        await session.send(send_payload)
+
+        if self.valves.DEBUG:
+            yield "<think>\n"
+            if init_message:
+                yield init_message
+            yield "> [Debug] Connection established, waiting for response...\n"
+            self.thinking_started = True
+
+        try:
+            while not done.is_set():
+                try:
+                    chunk = await asyncio.wait_for(
+                        queue.get(), timeout=float(self.valves.TIMEOUT)
+                    )
+                    if chunk:
+                        has_content = True
+                        yield chunk
+                except asyncio.TimeoutError:
+                    if done.is_set():
+                        break
+                    if self.thinking_started:
+                        yield f"> [Debug] Waiting for response ({self.valves.TIMEOUT}s exceeded)...\n"
+                    continue
+
+            while not queue.empty():
+                chunk = queue.get_nowait()
+                if chunk:
+                    has_content = True
+                    yield chunk
+
+            if self.thinking_started:
+                yield "</think>\n\n"
+                has_content = True
+
+            # Core fix: If no content was yielded, return a fallback message to prevent OpenWebUI error
+            if not has_content:
+                yield "⚠️ Copilot returned no content. Please check if the Model ID is correct or enable DEBUG mode in Valves for details."
+
+        except Exception as e:
+            yield f"\n[Stream Error: {str(e)}]"
+        finally:
+            unsubscribe()
+            # TODO: proper session cleanup for streaming responses. pipe()
+            # returns this generator and exits before the stream finishes,
+            # so destruction has to happen here. Sessions persisted under a
+            # chat_id should be kept (their data stays on disk); sessions
+            # created without a chat_id should be destroyed, which would
+            # require passing a flag into stream_response, since
+            # CopilotSession does not auto-close.
+            pass
diff --git a/plugins/pipes/github-copilot-sdk/github_copilot_sdk_cn.png b/plugins/pipes/github-copilot-sdk/github_copilot_sdk_cn.png
new file mode 100644
index 0000000..e5f540e
Binary files /dev/null and b/plugins/pipes/github-copilot-sdk/github_copilot_sdk_cn.png differ
diff --git a/plugins/pipes/github-copilot-sdk/github_copilot_sdk_cn.py b/plugins/pipes/github-copilot-sdk/github_copilot_sdk_cn.py
new file mode 100644
index 0000000..6dc8996
--- /dev/null
+++ b/plugins/pipes/github-copilot-sdk/github_copilot_sdk_cn.py
@@ -0,0 +1,757 @@
+"""
+title: GitHub Copilot 官方 SDK 管道 (动态模型版)
+author: Fu-Jie
+author_url: https://github.com/Fu-Jie/awesome-openwebui
+funding_url: https://github.com/open-webui
+description: 集成 GitHub Copilot SDK。支持动态模型、多轮对话、流式输出、多模态输入及无限会话(上下文自动压缩)。
+version: 0.1.1
+requirements: github-copilot-sdk
+"""
+
+import os
+import time
+import json
+import base64
+import tempfile
+import asyncio
+import logging
+import shutil
+import subprocess
+import sys
+from typing import Optional, Union, AsyncGenerator, List, Any, Dict
+from pydantic import BaseModel, Field
+from datetime import datetime, timezone
+import contextlib
+
+# SQLAlchemy (用于发现 Open WebUI Engine 及定义会话映射表)
+from sqlalchemy import Column, DateTime, Integer, String
+from sqlalchemy.engine import Engine
+from sqlalchemy.orm import declarative_base
+
+# Setup logger
+logger = logging.getLogger(__name__)
+
+# Open WebUI internal database (re-use shared connection)
+try:
+    from open_webui.internal import db as owui_db
+except ModuleNotFoundError:
+    owui_db = None
+
+
+def _discover_owui_engine(db_module: Any) -> Optional[Engine]:
+    """Discover the Open WebUI SQLAlchemy engine via provided db module helpers."""
+    if db_module is None:
+        return None
+
+    db_context = getattr(db_module, "get_db_context", None) or getattr(
+        db_module, "get_db", None
+    )
+    if callable(db_context):
+        try:
+            with db_context() as session:
+                try:
+                    return session.get_bind()
+                except AttributeError:
+                    return getattr(session, "bind", None) or getattr(
+                        session, "engine", None
+                    )
+        except Exception as exc:
+            logger.error(f"[DB Discover] get_db_context failed: {exc}")
+
+    for attr in ("engine",
"ENGINE", "bind", "BIND"): + candidate = getattr(db_module, attr, None) + if candidate is not None: + return candidate + + return None + + +def _discover_owui_schema(db_module: Any) -> Optional[str]: + """Discover the Open WebUI database schema name if configured.""" + if db_module is None: + return None + + try: + base = getattr(db_module, "Base", None) + metadata = getattr(base, "metadata", None) if base is not None else None + candidate = getattr(metadata, "schema", None) if metadata is not None else None + if isinstance(candidate, str) and candidate.strip(): + return candidate.strip() + except Exception as exc: + logger.error(f"[DB Discover] Base metadata schema lookup failed: {exc}") + + try: + metadata_obj = getattr(db_module, "metadata_obj", None) + candidate = ( + getattr(metadata_obj, "schema", None) if metadata_obj is not None else None + ) + if isinstance(candidate, str) and candidate.strip(): + return candidate.strip() + except Exception as exc: + logger.error(f"[DB Discover] metadata_obj schema lookup failed: {exc}") + + try: + from open_webui import env as owui_env + + candidate = getattr(owui_env, "DATABASE_SCHEMA", None) + if isinstance(candidate, str) and candidate.strip(): + return candidate.strip() + except Exception as exc: + logger.error(f"[DB Discover] env schema lookup failed: {exc}") + + return None + + +owui_engine = _discover_owui_engine(owui_db) +owui_schema = _discover_owui_schema(owui_db) +owui_Base = getattr(owui_db, "Base", None) if owui_db is not None else None +if owui_Base is None: + owui_Base = declarative_base() + + +class CopilotSessionMap(owui_Base): + """Copilot Session Mapping Table""" + + __tablename__ = "copilot_session_map" + __table_args__ = ( + {"extend_existing": True, "schema": owui_schema} + if owui_schema + else {"extend_existing": True} + ) + + id = Column(Integer, primary_key=True, autoincrement=True) + chat_id = Column(String(255), unique=True, nullable=False, index=True) + copilot_session_id = Column(String(255), 
nullable=False) + updated_at = Column( + DateTime, + default=lambda: datetime.now(timezone.utc), + onupdate=lambda: datetime.now(timezone.utc), + ) + + +# 全局客户端存储 +_SHARED_CLIENT = None +_SHARED_TOKEN = "" +_CLIENT_LOCK = asyncio.Lock() + + +class Pipe: + class Valves(BaseModel): + GH_TOKEN: str = Field( + default="", description="GitHub 细粒度令牌 (需开启 'Copilot Requests' 权限)" + ) + MODEL_ID: str = Field( + default="claude-sonnet-4.5", + description="默认使用的 Copilot 模型名称 (当无法动态获取时使用)", + ) + CLI_PATH: str = Field( + default="/usr/local/bin/copilot", + description="Copilot CLI 路径", + ) + DEBUG: bool = Field( + default=False, + description="开启技术调试日志 (连接信息等)", + ) + SHOW_THINKING: bool = Field( + default=True, + description="显示模型推理/思考过程", + ) + EXCLUDE_KEYWORDS: str = Field( + default="", + description="排除包含这些关键词的模型 (逗号分隔,例如: codex, haiku)", + ) + WORKSPACE_DIR: str = Field( + default="", + description="文件操作的受限工作目录。如果为空,允许访问当前进程目录。", + ) + INFINITE_SESSION: bool = Field( + default=True, + description="启用无限会话 (自动上下文压缩)", + ) + COMPACTION_THRESHOLD: float = Field( + default=0.8, + description="后台压缩阈值 (0.0-1.0)", + ) + BUFFER_THRESHOLD: float = Field( + default=0.95, + description="背景压缩缓冲区阈值 (0.0-1.0)", + ) + TIMEOUT: int = Field( + default=300, + description="流式数据块超时时间 (秒)", + ) + + def __init__(self): + self.type = "pipe" + self.name = "copilotsdk" + self.valves = self.Valves() + self.temp_dir = tempfile.mkdtemp(prefix="copilot_images_") + self.thinking_started = False + self._model_cache = [] # 模型列表缓存 + + def __del__(self): + try: + shutil.rmtree(self.temp_dir) + except: + pass + + def _emit_debug_log(self, message: str): + """Emit debug log to frontend if DEBUG valve is enabled.""" + if self.valves.DEBUG: + print(f"[Copilot Pipe] {message}") + + def _get_user_context(self): + """Helper to get user context (placeholder for future use).""" + return {} + + def _get_chat_context( + self, body: dict, __metadata__: Optional[dict] = None + ) -> Dict[str, str]: + """ + 
高度可靠的聊天上下文提取逻辑。 + 优先级:__metadata__ > body['chat_id'] > body['metadata']['chat_id'] + """ + chat_id = "" + source = "none" + + # 1. 优先从 __metadata__ 获取 (OpenWebUI 注入的最可靠来源) + if __metadata__ and isinstance(__metadata__, dict): + chat_id = __metadata__.get("chat_id", "") + if chat_id: + source = "__metadata__" + + # 2. 其次从 body 顶层获取 + if not chat_id and isinstance(body, dict): + chat_id = body.get("chat_id", "") + if chat_id: + source = "body_root" + + # 3. 最后从 body.metadata 获取 + if not chat_id and isinstance(body, dict): + body_metadata = body.get("metadata", {}) + if isinstance(body_metadata, dict): + chat_id = body_metadata.get("chat_id", "") + if chat_id: + source = "body_metadata" + + # 调试:记录 ID 来源 + if chat_id: + self._emit_debug_log(f"提取到 ChatID: {chat_id} (来源: {source})") + else: + # 如果还是没找到,记录一下 body 的键,方便排查 + keys = list(body.keys()) if isinstance(body, dict) else "not a dict" + self._emit_debug_log(f"警告: 未能提取到 ChatID。Body 键: {keys}") + + return { + "chat_id": str(chat_id).strip(), + } + + async def pipes(self) -> List[dict]: + """动态获取模型列表""" + # 如果有缓存,直接返回 + if self._model_cache: + return self._model_cache + + self._emit_debug_log("正在动态获取模型列表...") + try: + self._setup_env() + if not self.valves.GH_TOKEN: + return [{"id": f"{self.id}-error", "name": "Error: GH_TOKEN not set"}] + + from copilot import CopilotClient + + client_config = {} + if os.environ.get("COPILOT_CLI_PATH"): + client_config["cli_path"] = os.environ["COPILOT_CLI_PATH"] + + client = CopilotClient(client_config) + try: + await client.start() + models = await client.list_models() + + # 更新缓存 + self._model_cache = [] + exclude_list = [ + k.strip().lower() + for k in self.valves.EXCLUDE_KEYWORDS.split(",") + if k.strip() + ] + + models_with_info = [] + for m in models: + # 兼容字典和对象访问方式 + m_id = ( + m.get("id") if isinstance(m, dict) else getattr(m, "id", str(m)) + ) + m_name = ( + m.get("name") + if isinstance(m, dict) + else getattr(m, "name", m_id) + ) + m_policy = ( + m.get("policy") + if 
isinstance(m, dict) + else getattr(m, "policy", {}) + ) + m_billing = ( + m.get("billing") + if isinstance(m, dict) + else getattr(m, "billing", {}) + ) + + # 检查策略状态 + state = ( + m_policy.get("state") + if isinstance(m_policy, dict) + else getattr(m_policy, "state", "enabled") + ) + if state == "disabled": + continue + + # 过滤逻辑 + if any(kw in m_id.lower() for kw in exclude_list): + continue + + # 获取倍率 + multiplier = ( + m_billing.get("multiplier", 1) + if isinstance(m_billing, dict) + else getattr(m_billing, "multiplier", 1) + ) + + # 格式化显示名称 + if multiplier == 0: + display_name = f"-🔥 {m_id} (unlimited)" + else: + display_name = f"-{m_id} ({multiplier}x)" + + models_with_info.append( + { + "id": f"{self.id}-{m_id}", + "name": display_name, + "multiplier": multiplier, + "raw_id": m_id, + } + ) + + # 排序:倍率升序,然后是原始ID升序 + models_with_info.sort(key=lambda x: (x["multiplier"], x["raw_id"])) + self._model_cache = [ + {"id": m["id"], "name": m["name"]} for m in models_with_info + ] + + self._emit_debug_log( + f"成功获取 {len(self._model_cache)} 个模型 (已过滤)" + ) + return self._model_cache + except Exception as e: + self._emit_debug_log(f"获取模型列表失败: {e}") + # 失败时返回默认模型 + return [ + { + "id": f"{self.id}-{self.valves.MODEL_ID}", + "name": f"GitHub Copilot ({self.valves.MODEL_ID})", + } + ] + finally: + await client.stop() + except Exception as e: + self._emit_debug_log(f"Pipes Error: {e}") + return [ + { + "id": f"{self.id}-{self.valves.MODEL_ID}", + "name": f"GitHub Copilot ({self.valves.MODEL_ID})", + } + ] + + async def _get_client(self): + """Helper to get or create a CopilotClient instance.""" + from copilot import CopilotClient + + client_config = {} + if os.environ.get("COPILOT_CLI_PATH"): + client_config["cli_path"] = os.environ["COPILOT_CLI_PATH"] + + client = CopilotClient(client_config) + await client.start() + return client + + def _setup_env(self): + cli_path = self.valves.CLI_PATH + found = False + + if os.path.exists(cli_path): + found = True + + if not found: + 
sys_path = shutil.which("copilot") + if sys_path: + cli_path = sys_path + found = True + + if not found: + try: + subprocess.run( + "curl -fsSL https://gh.io/copilot-install | bash", + shell=True, + check=True, + ) + if os.path.exists(self.valves.CLI_PATH): + cli_path = self.valves.CLI_PATH + found = True + except: + pass + + if found: + os.environ["COPILOT_CLI_PATH"] = cli_path + cli_dir = os.path.dirname(cli_path) + if cli_dir not in os.environ["PATH"]: + os.environ["PATH"] = f"{cli_dir}:{os.environ['PATH']}" + + if self.valves.GH_TOKEN: + os.environ["GH_TOKEN"] = self.valves.GH_TOKEN + os.environ["GITHUB_TOKEN"] = self.valves.GH_TOKEN + + def _process_images(self, messages): + attachments = [] + text_content = "" + if not messages: + return "", [] + last_msg = messages[-1] + content = last_msg.get("content", "") + + if isinstance(content, list): + for item in content: + if item.get("type") == "text": + text_content += item.get("text", "") + elif item.get("type") == "image_url": + image_url = item.get("image_url", {}).get("url", "") + if image_url.startswith("data:image"): + try: + header, encoded = image_url.split(",", 1) + ext = header.split(";")[0].split("/")[-1] + file_name = f"image_{len(attachments)}.{ext}" + file_path = os.path.join(self.temp_dir, file_name) + with open(file_path, "wb") as f: + f.write(base64.b64decode(encoded)) + attachments.append( + { + "type": "file", + "path": file_path, + "display_name": file_name, + } + ) + self._emit_debug_log(f"Image processed: {file_path}") + except Exception as e: + self._emit_debug_log(f"Image error: {e}") + else: + text_content = str(content) + return text_content, attachments + + async def pipe( + self, body: dict, __metadata__: Optional[dict] = None + ) -> Union[str, AsyncGenerator]: + self._setup_env() + if not self.valves.GH_TOKEN: + return "Error: 请在 Valves 中配置 GH_TOKEN。" + + # 解析用户选择的模型 + request_model = body.get("model", "") + real_model_id = self.valves.MODEL_ID # 默认值 + + if 
request_model.startswith(f"{self.id}-"): + real_model_id = request_model[len(f"{self.id}-") :] + self._emit_debug_log(f"使用选择的模型: {real_model_id}") + + messages = body.get("messages", []) + if not messages: + return "No messages." + + # 使用改进的助手获取 Chat ID + chat_ctx = self._get_chat_context(body, __metadata__) + chat_id = chat_ctx.get("chat_id") + + is_streaming = body.get("stream", False) + self._emit_debug_log(f"请求流式传输: {is_streaming}") + + last_text, attachments = self._process_images(messages) + + # 确定 Prompt 策略 + # 如果有 chat_id,尝试恢复会话。 + # 如果恢复成功,假设会话已有历史,只发送最后一条消息。 + # 如果是新会话,发送完整历史。 + + prompt = "" + is_new_session = True + + try: + client = await self._get_client() + session = None + + if chat_id: + try: + # 尝试直接使用 chat_id 作为 session_id 恢复会话 + session = await client.resume_session(chat_id) + self._emit_debug_log(f"已通过 ChatID 恢复会话: {chat_id}") + is_new_session = False + except Exception: + # 恢复失败,磁盘上可能不存在该会话 + self._emit_debug_log( + f"会话 {chat_id} 不存在或已过期,将创建新会话。" + ) + session = None + + if session is None: + # 创建新会话 + from copilot.types import SessionConfig, InfiniteSessionConfig + + # 无限会话配置 + infinite_session_config = None + if self.valves.INFINITE_SESSION: + infinite_session_config = InfiniteSessionConfig( + enabled=True, + background_compaction_threshold=self.valves.COMPACTION_THRESHOLD, + buffer_exhaustion_threshold=self.valves.BUFFER_THRESHOLD, + ) + + session_config = SessionConfig( + session_id=( + chat_id if chat_id else None + ), # 使用 chat_id 作为 session_id + model=real_model_id, + streaming=body.get("stream", False), + infinite_sessions=infinite_session_config, + ) + + session = await client.create_session(config=session_config) + + # 获取新会话 ID + new_sid = getattr(session, "session_id", getattr(session, "id", None)) + self._emit_debug_log(f"创建了新会话: {new_sid}") + + # 构建 Prompt + if is_new_session: + # 新会话,发送完整历史 + full_conversation = [] + for msg in messages[:-1]: + role = msg.get("role", "user").upper() + content = msg.get("content", "") + if 
isinstance(content, list):
+                        content = " ".join(
+                            [
+                                c.get("text", "")
+                                for c in content
+                                if c.get("type") == "text"
+                            ]
+                        )
+                    full_conversation.append(f"{role}: {content}")
+                full_conversation.append(f"User: {last_text}")
+                prompt = "\n\n".join(full_conversation)
+            else:
+                # 恢复的会话,只发送最后一条消息
+                prompt = last_text
+
+            send_payload = {"prompt": prompt, "mode": "immediate"}
+            if attachments:
+                send_payload["attachments"] = attachments
+
+            if body.get("stream", False):
+                # 确定 UI 显示的会话状态消息
+                init_msg = ""
+                if self.valves.DEBUG:
+                    if is_new_session:
+                        new_sid = getattr(
+                            session, "session_id", getattr(session, "id", "unknown")
+                        )
+                        init_msg = f"> [Debug] 创建了新会话: {new_sid}\n"
+                    else:
+                        init_msg = f"> [Debug] 已通过 ChatID 恢复会话: {chat_id}\n"
+
+                return self.stream_response(client, session, send_payload, init_msg)
+            else:
+                try:
+                    response = await session.send_and_wait(send_payload)
+                    return response.data.content if response else "Empty response."
+                finally:
+                    # 销毁会话对象以释放内存,但保留磁盘数据
+                    await session.destroy()
+
+        except Exception as e:
+            self._emit_debug_log(f"请求错误: {e}")
+            return f"Error: {str(e)}"
+
+    async def stream_response(
+        self, client, session, send_payload, init_message: str = ""
+    ) -> AsyncGenerator:
+        queue = asyncio.Queue()
+        done = asyncio.Event()
+        self.thinking_started = False
+        has_content = False  # 追踪是否已经输出了内容
+
+        def get_event_data(event, attr, default=""):
+            if hasattr(event, "data"):
+                data = event.data
+                if data is None:
+                    return default
+                if isinstance(data, (str, int, float, bool)):
+                    return str(data) if attr == "value" else default
+
+                if isinstance(data, dict):
+                    val = data.get(attr)
+                    if val is None:
+                        alt_attr = attr.replace("_", "") if "_" in attr else attr
+                        val = data.get(alt_attr)
+                    if val is None and "_" not in attr:
+                        # 尝试将 camelCase 转换为 snake_case
+                        import re
+
+                        snake_attr = re.sub(r"(?<!^)(?=[A-Z])", "_", attr).lower()
+                        val = data.get(snake_attr)
+                    return val if val is not None else default
+                return getattr(data, attr, default)
+            return default
+
+        def handler(event):
+            event_type = getattr(event, "type", "")
+
+            if event_type in [
+                "assistant.message_delta",
+                "assistant.message.delta",
+            ]:
+                delta = (
+                    get_event_data(event, "delta_content")
+                    or get_event_data(event, "deltaContent")
+                    or get_event_data(event, "content")
+                )
+                if delta:
+                    # 输出正式回答前,先关闭未结束的思考块
+                    if self.thinking_started:
+                        queue.put_nowait("</think>\n")
+                        self.thinking_started = False
+                    queue.put_nowait(delta)
+
+            elif event_type in [
+                "assistant.reasoning_delta",
+                "assistant.reasoning.delta",
"assistant.reasoning",
+            ]:
+                delta = (
+                    get_event_data(event, "delta_content")
+                    or get_event_data(event, "deltaContent")
+                    or get_event_data(event, "content")
+                    or get_event_data(event, "text")
+                )
+                if delta:
+                    if not self.thinking_started and self.valves.SHOW_THINKING:
+                        queue.put_nowait("<think>\n")
+                        self.thinking_started = True
+                    if self.thinking_started:
+                        queue.put_nowait(delta)
+
+            elif event_type == "tool.execution_start":
+                # 尝试多个可能的字段来获取工具名称或描述
+                tool_name = (
+                    get_event_data(event, "toolName")
+                    or get_event_data(event, "name")
+                    or get_event_data(event, "description")
+                    or get_event_data(event, "tool_name")
+                    or "Unknown Tool"
+                )
+                if not self.thinking_started and self.valves.SHOW_THINKING:
+                    queue.put_nowait("<think>\n")
+                    self.thinking_started = True
+                if self.thinking_started:
+                    queue.put_nowait(f"\n正在运行工具: {tool_name}...\n")
+                self._emit_debug_log(f"Tool Start: {tool_name}")
+
+            elif event_type == "tool.execution_complete":
+                if self.thinking_started:
+                    queue.put_nowait("工具运行完成。\n")
+                self._emit_debug_log("Tool Complete")
+
+            elif event_type == "session.compaction_start":
+                self._emit_debug_log("会话压缩开始")
+
+            elif event_type == "session.compaction_complete":
+                self._emit_debug_log("会话压缩完成")
+
+            elif event_type == "session.idle":
+                done.set()
+            elif event_type == "session.error":
+                msg = get_event_data(event, "message", "Unknown Error")
+                queue.put_nowait(f"\n[Error: {msg}]")
+                done.set()
+
+        unsubscribe = session.on(handler)
+        await session.send(send_payload)
+
+        if self.valves.DEBUG:
+            yield "<think>\n"
+            if init_message:
+                yield init_message
+            yield "> [Debug] 连接已建立,等待响应...\n"
+            self.thinking_started = True
+
+        try:
+            while not done.is_set():
+                try:
+                    chunk = await asyncio.wait_for(
+                        queue.get(), timeout=float(self.valves.TIMEOUT)
+                    )
+                    if chunk:
+                        has_content = True
+                        yield chunk
+                except asyncio.TimeoutError:
+                    if done.is_set():
+                        break
+                    if self.thinking_started:
+                        yield f"> [Debug] 等待响应中 (已超过 {self.valves.TIMEOUT} 秒)...\n"
+                    continue
+
+            while not
queue.empty():
+                chunk = queue.get_nowait()
+                if chunk:
+                    has_content = True
+                    yield chunk
+
+            if self.thinking_started:
+                yield "</think>\n\n"
+                has_content = True
+
+            # 核心修复:如果整个过程没有任何输出,返回一个提示,防止 OpenWebUI 报错
+            if not has_content:
+                yield "⚠️ Copilot 未返回任何内容。请检查模型 ID 是否正确,或尝试在 Valves 中开启 DEBUG 模式查看详细日志。"
+
+        except Exception as e:
+            yield f"\n[Stream Error: {str(e)}]"
+        finally:
+            unsubscribe()
+            # 销毁会话对象以释放内存,但保留磁盘数据
+            await session.destroy()