Compare commits
33 Commits
v2026.01.2...v2026.01.2

| SHA1 |
|---|
| 82253b114c |
| e0bfbf6dd4 |
| 4689e80e7a |
| 556e6c1c67 |
| 3ab84a526d |
| bdce96f912 |
| 4811b99a4b |
| fb2a64c07a |
| e023e4f2e2 |
| 0b16b1e0f4 |
| 59073ad7ac |
| 8248644c45 |
| f38e6394c9 |
| 0aaa529c6b |
| b81a6562a1 |
| c5b10db23a |
| d16e444643 |
| 8202468099 |
| 766e8bd20f |
| 1214ab5a8c |
| ebddbb25f8 |
| 59545e1110 |
| 500e090b11 |
| a75ee555fa |
| 6a8c2164cd |
| 7f7efa325a |
| 9ba6cb08fc |
| 1872271a2d |
| 813b50864a |
| b18cefe320 |
| a54c359fcf |
| 8d83221a4a |
| 1879000720 |
@@ -90,6 +90,9 @@ Reference: `.github/workflows/release.yml`

- Action: Automatically updates the plugin code and metadata on OpenWebUI.com using `scripts/publish_plugin.py`.
- **Auto-Sync**: If a local plugin has no ID but matches an existing published plugin by **Title**, the script will automatically fetch the ID, update the local file, and proceed with the update.
- Requirement: `OPENWEBUI_API_KEY` secret must be set.
- **README Link**: When announcing a release, always include the GitHub README URL for the plugin:
  - Format: `https://github.com/Fu-Jie/awesome-openwebui/blob/main/plugins/{type}/{name}/README.md`
  - Example: `https://github.com/Fu-Jie/awesome-openwebui/blob/main/plugins/filters/folder-memory/README.md`

### Pull Request Check

- Workflow: `.github/workflows/plugin-version-check.yml`
README.md
@@ -10,28 +10,28 @@ A collection of enhancements, plugins, and prompts for [OpenWebUI](https://githu

<!-- STATS_START -->
## 📊 Community Stats

> 🕐 Auto-updated: 2026-01-20 19:10
> 🕐 Auto-updated: 2026-01-26 03:07

| 👤 Author | 👥 Followers | ⭐ Points | 🏆 Contributions |
|:---:|:---:|:---:|:---:|
| [Fu-Jie](https://openwebui.com/u/Fu-Jie) | **137** | **134** | **25** |
| [Fu-Jie](https://openwebui.com/u/Fu-Jie) | **157** | **147** | **28** |

| 📝 Posts | ⬇️ Downloads | 👁️ Views | 👍 Upvotes | 💾 Saves |
|:---:|:---:|:---:|:---:|:---:|
| **16** | **1887** | **22101** | **120** | **147** |
| **18** | **2334** | **26665** | **133** | **176** |

### 🔥 Top 6 Popular Plugins

> 🕐 Auto-updated: 2026-01-20 19:10
> 🕐 Auto-updated: 2026-01-26 03:07

| Rank | Plugin | Version | Downloads | Views | Updated |
|:---:|------|:---:|:---:|:---:|:---:|
| 🥇 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) | 0.9.1 | 550 | 4939 | 2026-01-17 |
| 🥈 | [📊 Smart Infographic (AntV)](https://openwebui.com/posts/smart_infographic_ad6f0c7f) | 1.4.9 | 282 | 2667 | 2026-01-18 |
| 🥉 | [Export to Excel](https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d) | 0.3.7 | 215 | 844 | 2026-01-07 |
| 4️⃣ | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) | 1.2.1 | 189 | 2051 | 2026-01-20 |
| 5️⃣ | [Export to Word (Enhanced)](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) | 0.4.3 | 170 | 1457 | 2026-01-17 |
| 6️⃣ | [Flash Card](https://openwebui.com/posts/flash_card_65a2ea8f) | 0.2.4 | 144 | 2395 | 2026-01-17 |
| 🥇 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) | 0.9.1 | 618 | 5538 | 2026-01-17 |
| 🥈 | [📊 Smart Infographic (AntV)](https://openwebui.com/posts/smart_infographic_ad6f0c7f) | 1.4.9 | 396 | 3472 | 2026-01-18 |
| 🥉 | [Export to Excel](https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d) | 0.3.7 | 249 | 1013 | 2026-01-07 |
| 4️⃣ | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) | 1.2.2 | 222 | 2423 | 2026-01-21 |
| 5️⃣ | [Export to Word (Enhanced)](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) | 0.4.3 | 219 | 1790 | 2026-01-17 |
| 6️⃣ | [Flash Card](https://openwebui.com/posts/flash_card_65a2ea8f) | 0.2.4 | 164 | 2650 | 2026-01-17 |

*See full stats in [Community Stats Report](./docs/community-stats.md)*
<!-- STATS_END -->

@@ -43,6 +43,7 @@ A collection of enhancements, plugins, and prompts for [OpenWebUI](https://githu

Located in the `plugins/` directory, containing Python-based enhancements:

#### Actions

- **Smart Mind Map** (`smart-mind-map`): Generates interactive mind maps from text.
- **Smart Infographic** (`infographic`): Transforms text into professional infographics using AntV.
- **Flash Card** (`flash-card`): Quickly generates beautiful flashcards for learning.

@@ -51,12 +52,18 @@ Located in the `plugins/` directory, containing Python-based enhancements:

- **Export to Word** (`export_to_docx`): Exports chat history to Word documents.

#### Filters

- **Async Context Compression** (`async-context-compression`): Optimizes token usage via context compression.
- **Context Enhancement** (`context_enhancement_filter`): Enhances chat context.
- **Folder Memory** (`folder-memory`): Automatically extracts project rules from conversations and injects them into the folder's system prompt.
- **Markdown Normalizer** (`markdown_normalizer`): Fixes common Markdown formatting issues in LLM outputs.

#### Pipes

- **GitHub Copilot SDK** (`github-copilot-sdk`): Official GitHub Copilot SDK integration. Supports dynamic models, multi-turn conversation, streaming, multimodal input, and infinite sessions.

#### Pipelines

- **MoE Prompt Refiner** (`moe_prompt_refiner`): Refines prompts for Mixture of Experts (MoE) summary requests to generate high-quality comprehensive reports.

### 🎯 Prompts

@@ -101,6 +108,7 @@ This project is a collection of resources and does not require a Python environm

### Contributing

If you have great prompts or plugins to share:

1. Fork this repository.
2. Add your files to the appropriate `prompts/` or `plugins/` directory.
3. Submit a Pull Request.
README_CN.md
@@ -7,28 +7,28 @@ OpenWebUI 增强功能集合。包含个人开发与收集的插件、提示词

<!-- STATS_START -->
## 📊 社区统计

> 🕐 自动更新于 2026-01-20 19:10
> 🕐 自动更新于 2026-01-26 03:07

| 👤 作者 | 👥 粉丝 | ⭐ 积分 | 🏆 贡献 |
|:---:|:---:|:---:|:---:|
| [Fu-Jie](https://openwebui.com/u/Fu-Jie) | **137** | **134** | **25** |
| [Fu-Jie](https://openwebui.com/u/Fu-Jie) | **157** | **147** | **28** |

| 📝 发布 | ⬇️ 下载 | 👁️ 浏览 | 👍 点赞 | 💾 收藏 |
|:---:|:---:|:---:|:---:|:---:|
| **16** | **1887** | **22101** | **120** | **147** |
| **18** | **2334** | **26665** | **133** | **176** |

### 🔥 热门插件 Top 6

> 🕐 自动更新于 2026-01-20 19:10
> 🕐 自动更新于 2026-01-26 03:07

| 排名 | 插件 | 版本 | 下载 | 浏览 | 更新日期 |
|:---:|------|:---:|:---:|:---:|:---:|
| 🥇 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) | 0.9.1 | 550 | 4939 | 2026-01-17 |
| 🥈 | [📊 Smart Infographic (AntV)](https://openwebui.com/posts/smart_infographic_ad6f0c7f) | 1.4.9 | 282 | 2667 | 2026-01-18 |
| 🥉 | [Export to Excel](https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d) | 0.3.7 | 215 | 844 | 2026-01-07 |
| 4️⃣ | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) | 1.2.1 | 189 | 2051 | 2026-01-20 |
| 5️⃣ | [Export to Word (Enhanced)](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) | 0.4.3 | 170 | 1457 | 2026-01-17 |
| 6️⃣ | [Flash Card](https://openwebui.com/posts/flash_card_65a2ea8f) | 0.2.4 | 144 | 2395 | 2026-01-17 |
| 🥇 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) | 0.9.1 | 618 | 5538 | 2026-01-17 |
| 🥈 | [📊 Smart Infographic (AntV)](https://openwebui.com/posts/smart_infographic_ad6f0c7f) | 1.4.9 | 396 | 3472 | 2026-01-18 |
| 🥉 | [Export to Excel](https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d) | 0.3.7 | 249 | 1013 | 2026-01-07 |
| 4️⃣ | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) | 1.2.2 | 222 | 2423 | 2026-01-21 |
| 5️⃣ | [Export to Word (Enhanced)](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) | 0.4.3 | 219 | 1790 | 2026-01-17 |
| 6️⃣ | [Flash Card](https://openwebui.com/posts/flash_card_65a2ea8f) | 0.2.4 | 164 | 2650 | 2026-01-17 |

*完整统计请查看 [社区统计报告](./docs/community-stats.zh.md)*
<!-- STATS_END -->

@@ -40,6 +40,7 @@ OpenWebUI 增强功能集合。包含个人开发与收集的插件、提示词

位于 `plugins/` 目录,包含各类 Python 编写的功能增强插件:

#### Actions (交互增强)

- **Smart Mind Map** (`smart-mind-map`): 智能分析文本并生成交互式思维导图。
- **Smart Infographic** (`infographic`): 基于 AntV 的智能信息图生成工具。
- **Flash Card** (`flash-card`): 快速生成精美的学习记忆卡片。

@@ -48,6 +49,7 @@ OpenWebUI 增强功能集合。包含个人开发与收集的插件、提示词

- **Export to Word** (`export_to_docx`): 将对话内容导出为 Word 文档。

#### Filters (消息处理)

- **Async Context Compression** (`async-context-compression`): 异步上下文压缩,优化 Token 使用。
- **Context Enhancement** (`context_enhancement_filter`): 上下文增强过滤器。
- **Folder Memory** (`folder-memory`): 自动从对话中提取项目规则并注入到文件夹系统提示词中。

@@ -57,9 +59,12 @@ OpenWebUI 增强功能集合。包含个人开发与收集的插件、提示词

- **Multi-Model Context Merger** (`multi_model_context_merger`): 自动合并并注入多模型回答的上下文。

#### Pipes (模型管道)

- **GitHub Copilot SDK** (`github-copilot-sdk`): GitHub Copilot SDK 官方集成。支持动态模型、多轮对话、流式输出、图片输入及无限会话。
- **Gemini Manifold** (`gemini_mainfold`): 集成 Gemini 模型的管道。

#### Pipelines (工作流管道)

- **MoE Prompt Refiner** (`moe_prompt_refiner`): 优化多模型 (MoE) 汇总请求的提示词,生成高质量的综合报告。

### 🎯 提示词 (Prompts)

@@ -107,6 +112,7 @@ OpenWebUI 增强功能集合。包含个人开发与收集的插件、提示词

### 贡献代码

如果你有优质的提示词或插件想要分享:

1. Fork 本仓库。
2. 将你的文件添加到对应的 `prompts/` 或 `plugins/` 目录。
3. 提交 Pull Request。
@@ -1,7 +1,7 @@
{
  "schemaVersion": 1,
  "label": "downloads",
  "message": "1.9k",
  "message": "2.3k",
  "color": "blue",
  "namedLogo": "openwebui"
}

@@ -1,6 +1,6 @@
{
  "schemaVersion": 1,
  "label": "followers",
  "message": "137",
  "message": "157",
  "color": "blue"
}

@@ -1,6 +1,6 @@
{
  "schemaVersion": 1,
  "label": "plugins",
  "message": "16",
  "message": "18",
  "color": "green"
}

@@ -1,6 +1,6 @@
{
  "schemaVersion": 1,
  "label": "points",
  "message": "134",
  "message": "147",
  "color": "orange"
}

@@ -1,6 +1,6 @@
{
  "schemaVersion": 1,
  "label": "upvotes",
  "message": "120",
  "message": "133",
  "color": "brightgreen"
}
@@ -1,14 +1,15 @@
{
  "total_posts": 16,
  "total_downloads": 1887,
  "total_views": 22101,
  "total_upvotes": 120,
  "total_posts": 18,
  "total_downloads": 2334,
  "total_views": 26665,
  "total_upvotes": 133,
  "total_downvotes": 2,
  "total_saves": 147,
  "total_comments": 24,
  "total_saves": 176,
  "total_comments": 28,
  "by_type": {
    "unknown": 3,
    "action": 14,
    "unknown": 2
    "filter": 1
  },
  "posts": [
    {
@@ -18,10 +19,10 @@
      "version": "0.9.1",
      "author": "Fu-Jie",
      "description": "Intelligently analyzes text content and generates interactive mind maps to help users structure and visualize knowledge.",
      "downloads": 550,
      "views": 4939,
      "upvotes": 15,
      "saves": 30,
      "downloads": 618,
      "views": 5538,
      "upvotes": 16,
      "saves": 36,
      "comments": 11,
      "created_at": "2025-12-30",
      "updated_at": "2026-01-17",
@@ -34,10 +35,10 @@
      "version": "1.4.9",
      "author": "Fu-Jie",
      "description": "AI-powered infographic generator based on AntV Infographic. Supports professional templates, auto-icon matching, and SVG/PNG downloads.",
      "downloads": 282,
      "views": 2667,
      "upvotes": 14,
      "saves": 21,
      "downloads": 396,
      "views": 3472,
      "upvotes": 17,
      "saves": 25,
      "comments": 3,
      "created_at": "2025-12-28",
      "updated_at": "2026-01-18",
@@ -50,8 +51,8 @@
      "version": "0.3.7",
      "author": "Fu-Jie",
      "description": "Extracts tables from chat messages and exports them to Excel (.xlsx) files with smart formatting.",
      "downloads": 215,
      "views": 844,
      "downloads": 249,
      "views": 1013,
      "upvotes": 4,
      "saves": 6,
      "comments": 0,
@@ -63,16 +64,16 @@
      "title": "Async Context Compression",
      "slug": "async_context_compression_b1655bc8",
      "type": "action",
      "version": "1.2.1",
      "version": "1.2.2",
      "author": "Fu-Jie",
      "description": "Reduces token consumption in long conversations while maintaining coherence through intelligent summarization and message compression.",
      "downloads": 189,
      "views": 2051,
      "downloads": 222,
      "views": 2423,
      "upvotes": 9,
      "saves": 22,
      "saves": 26,
      "comments": 0,
      "created_at": "2025-11-08",
      "updated_at": "2026-01-20",
      "updated_at": "2026-01-21",
      "url": "https://openwebui.com/posts/async_context_compression_b1655bc8"
    },
    {
@@ -82,10 +83,10 @@
      "version": "0.4.3",
      "author": "Fu-Jie",
      "description": "Export current conversation from Markdown to Word (.docx) with Mermaid diagrams rendered client-side (Mermaid.js, SVG+PNG), LaTeX math, real hyperlinks, improved tables, syntax highlighting, and blockquote support.",
      "downloads": 170,
      "views": 1457,
      "downloads": 219,
      "views": 1790,
      "upvotes": 8,
      "saves": 17,
      "saves": 21,
      "comments": 0,
      "created_at": "2026-01-03",
      "updated_at": "2026-01-17",
@@ -98,10 +99,10 @@
      "version": "0.2.4",
      "author": "Fu-Jie",
      "description": "Quickly generates beautiful flashcards from text, extracting key points and categories.",
      "downloads": 144,
      "views": 2395,
      "upvotes": 10,
      "saves": 12,
      "downloads": 164,
      "views": 2650,
      "upvotes": 11,
      "saves": 13,
      "comments": 2,
      "created_at": "2025-12-30",
      "updated_at": "2026-01-17",
@@ -114,10 +115,10 @@
      "version": "1.2.4",
      "author": "Fu-Jie",
      "description": "A content normalizer filter that fixes common Markdown formatting issues in LLM outputs, such as broken code blocks, LaTeX formulas, and list formatting.",
      "downloads": 96,
      "views": 2234,
      "downloads": 144,
      "views": 2721,
      "upvotes": 10,
      "saves": 17,
      "saves": 20,
      "comments": 5,
      "created_at": "2026-01-12",
      "updated_at": "2026-01-19",
@@ -130,8 +131,8 @@
      "version": "1.0.0",
      "author": "Fu-Jie",
      "description": "A comprehensive thinking lens that dives deep into any content - from context to logic, insights, and action paths.",
      "downloads": 73,
      "views": 707,
      "downloads": 91,
      "views": 828,
      "upvotes": 4,
      "saves": 7,
      "comments": 0,
@@ -146,11 +147,11 @@
      "version": "0.4.3",
      "author": "Fu-Jie",
      "description": "将对话导出为 Word (.docx),支持 Mermaid 图表 (客户端渲染 SVG+PNG)、LaTeX 数学公式、真实超链接、增强表格格式、代码高亮和引用块。",
      "downloads": 65,
      "views": 1335,
      "downloads": 86,
      "views": 1588,
      "upvotes": 11,
      "saves": 3,
      "comments": 1,
      "comments": 4,
      "created_at": "2026-01-04",
      "updated_at": "2026-01-17",
      "url": "https://openwebui.com/posts/导出为_word_支持公式流程图表格和代码块_8a6306c0"
@@ -162,8 +163,8 @@
      "version": "1.4.9",
      "author": "Fu-Jie",
      "description": "基于 AntV Infographic 的智能信息图生成插件。支持多种专业模板,自动图标匹配,并提供 SVG/PNG 下载功能。",
      "downloads": 43,
      "views": 704,
      "downloads": 46,
      "views": 776,
      "upvotes": 6,
      "saves": 0,
      "comments": 0,
@@ -178,15 +179,31 @@
      "version": "0.9.1",
      "author": "Fu-Jie",
      "description": "智能分析文本内容,生成交互式思维导图,帮助用户结构化和可视化知识。",
      "downloads": 24,
      "views": 407,
      "upvotes": 3,
      "downloads": 27,
      "views": 445,
      "upvotes": 4,
      "saves": 1,
      "comments": 0,
      "created_at": "2025-12-31",
      "updated_at": "2026-01-17",
      "url": "https://openwebui.com/posts/智能生成交互式思维导图帮助用户可视化知识_8d4b097b"
    },
    {
      "title": "📂 Folder Memory – Auto-Evolving Project Context",
      "slug": "folder_memory_auto_evolving_project_context_4a9875b2",
      "type": "filter",
      "version": "0.1.0",
      "author": "Fu-Jie",
      "description": "Automatically extracts project rules from conversations and injects them into the folder's system prompt.",
      "downloads": 26,
      "views": 689,
      "upvotes": 3,
      "saves": 4,
      "comments": 0,
      "created_at": "2026-01-20",
      "updated_at": "2026-01-20",
      "url": "https://openwebui.com/posts/folder_memory_auto_evolving_project_context_4a9875b2"
    },
    {
      "title": "闪记卡 (Flash Card)",
      "slug": "闪记卡生成插件_4a31eac3",
@@ -194,9 +211,9 @@
      "version": "0.2.4",
      "author": "Fu-Jie",
      "description": "快速将文本提炼为精美的学习记忆卡片,支持核心要点提取与分类。",
      "downloads": 16,
      "views": 453,
      "upvotes": 5,
      "downloads": 19,
      "views": 502,
      "upvotes": 6,
      "saves": 1,
      "comments": 0,
      "created_at": "2025-12-30",
@@ -207,16 +224,16 @@
      "title": "异步上下文压缩",
      "slug": "异步上下文压缩_5c0617cb",
      "type": "action",
      "version": "1.2.1",
      "version": "1.2.2",
      "author": "Fu-Jie",
      "description": "通过智能摘要和消息压缩,降低长对话的 token 消耗,同时保持对话连贯性。",
      "downloads": 14,
      "views": 377,
      "downloads": 18,
      "views": 473,
      "upvotes": 5,
      "saves": 1,
      "comments": 0,
      "created_at": "2025-11-08",
      "updated_at": "2026-01-20",
      "updated_at": "2026-01-21",
      "url": "https://openwebui.com/posts/异步上下文压缩_5c0617cb"
    },
    {
@@ -226,8 +243,8 @@
      "version": "1.0.0",
      "author": "Fu-Jie",
      "description": "全方位的思维透镜 —— 从背景全景到逻辑脉络,从深度洞察到行动路径。",
      "downloads": 6,
      "views": 261,
      "downloads": 9,
      "views": 304,
      "upvotes": 3,
      "saves": 1,
      "comments": 0,
@@ -235,6 +252,22 @@
      "updated_at": "2026-01-08",
      "url": "https://openwebui.com/posts/精读_99830b0f"
    },
    {
      "title": "🚀 Open WebUI Prompt Plus: AI-Powered Prompt Manager",
      "slug": "open_webui_prompt_plus_ai_powered_prompt_manager_s_15fa060e",
      "type": "unknown",
      "version": "",
      "author": "",
      "description": "",
      "downloads": 0,
      "views": 92,
      "upvotes": 3,
      "saves": 3,
      "comments": 1,
      "created_at": "2026-01-25",
      "updated_at": "2026-01-25",
      "url": "https://openwebui.com/posts/open_webui_prompt_plus_ai_powered_prompt_manager_s_15fa060e"
    },
    {
      "title": "Review of Claude Haiku 4.5",
      "slug": "review_of_claude_haiku_45_41b0db39",
@@ -243,7 +276,7 @@
      "author": "",
      "description": "",
      "downloads": 0,
      "views": 62,
      "views": 93,
      "upvotes": 1,
      "saves": 0,
      "comments": 0,
@@ -259,7 +292,7 @@
      "author": "",
      "description": "",
      "downloads": 0,
      "views": 1208,
      "views": 1268,
      "upvotes": 12,
      "saves": 8,
      "comments": 2,
@@ -273,11 +306,11 @@
    "name": "Fu-Jie",
    "profile_url": "https://openwebui.com/u/Fu-Jie",
    "profile_image": "https://community.s3.openwebui.com/uploads/users/b15d1348-4347-42b4-b815-e053342d6cb0/profile_d9510745-4bd4-4f8f-a997-4a21847d9300.webp",
    "followers": 137,
    "following": 2,
    "total_points": 134,
    "post_points": 118,
    "followers": 157,
    "following": 3,
    "total_points": 147,
    "post_points": 131,
    "comment_points": 16,
    "contributions": 25
    "contributions": 28
  }
}
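The top-level totals and `by_type` counts in the stats file above are derivable from its `posts` array. A minimal sketch of that aggregation, using the field names from the JSON (the real generator script may differ):

```python
from collections import Counter

def aggregate(posts: list[dict]) -> dict:
    """Recompute top-level totals and by_type counts from a posts list."""
    return {
        "total_posts": len(posts),
        "total_downloads": sum(p.get("downloads", 0) for p in posts),
        "total_views": sum(p.get("views", 0) for p in posts),
        "total_upvotes": sum(p.get("upvotes", 0) for p in posts),
        # Posts without a recognized type fall into the "unknown" bucket.
        "by_type": dict(Counter(p.get("type", "unknown") for p in posts)),
    }
```

Running this over the 18 posts above should reproduce the updated totals (2334 downloads, 26665 views, and so on), which is a useful consistency check for the stats workflow.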
@@ -1,40 +1,43 @@
# 📊 OpenWebUI Community Stats Report

> 📅 Updated: 2026-01-20 19:10
> 📅 Updated: 2026-01-26 03:07

## 📈 Overview

| Metric | Value |
|------|------|
| 📝 Total Posts | 16 |
| ⬇️ Total Downloads | 1887 |
| 👁️ Total Views | 22101 |
| 👍 Total Upvotes | 120 |
| 💾 Total Saves | 147 |
| 💬 Total Comments | 24 |
| 📝 Total Posts | 18 |
| ⬇️ Total Downloads | 2334 |
| 👁️ Total Views | 26665 |
| 👍 Total Upvotes | 133 |
| 💾 Total Saves | 176 |
| 💬 Total Comments | 28 |

## 📂 By Type

- **unknown**: 3
- **action**: 14
- **unknown**: 2
- **filter**: 1

## 📋 Posts List

| Rank | Title | Type | Version | Downloads | Views | Upvotes | Saves | Updated |
|:---:|------|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 1 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) | action | 0.9.1 | 550 | 4939 | 15 | 30 | 2026-01-17 |
| 2 | [📊 Smart Infographic (AntV)](https://openwebui.com/posts/smart_infographic_ad6f0c7f) | action | 1.4.9 | 282 | 2667 | 14 | 21 | 2026-01-18 |
| 3 | [Export to Excel](https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d) | action | 0.3.7 | 215 | 844 | 4 | 6 | 2026-01-07 |
| 4 | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) | action | 1.2.1 | 189 | 2051 | 9 | 22 | 2026-01-20 |
| 5 | [Export to Word (Enhanced)](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) | action | 0.4.3 | 170 | 1457 | 8 | 17 | 2026-01-17 |
| 6 | [Flash Card](https://openwebui.com/posts/flash_card_65a2ea8f) | action | 0.2.4 | 144 | 2395 | 10 | 12 | 2026-01-17 |
| 7 | [Markdown Normalizer](https://openwebui.com/posts/markdown_normalizer_baaa8732) | action | 1.2.4 | 96 | 2234 | 10 | 17 | 2026-01-19 |
| 8 | [Deep Dive](https://openwebui.com/posts/deep_dive_c0b846e4) | action | 1.0.0 | 73 | 707 | 4 | 7 | 2026-01-08 |
| 9 | [导出为 Word (增强版)](https://openwebui.com/posts/导出为_word_支持公式流程图表格和代码块_8a6306c0) | action | 0.4.3 | 65 | 1335 | 11 | 3 | 2026-01-17 |
| 10 | [📊 智能信息图 (AntV Infographic)](https://openwebui.com/posts/智能信息图_e04a48ff) | action | 1.4.9 | 43 | 704 | 6 | 0 | 2026-01-17 |
| 11 | [思维导图](https://openwebui.com/posts/智能生成交互式思维导图帮助用户可视化知识_8d4b097b) | action | 0.9.1 | 24 | 407 | 3 | 1 | 2026-01-17 |
| 12 | [闪记卡 (Flash Card)](https://openwebui.com/posts/闪记卡生成插件_4a31eac3) | action | 0.2.4 | 16 | 453 | 5 | 1 | 2026-01-17 |
| 13 | [异步上下文压缩](https://openwebui.com/posts/异步上下文压缩_5c0617cb) | action | 1.2.1 | 14 | 377 | 5 | 1 | 2026-01-20 |
| 14 | [精读](https://openwebui.com/posts/精读_99830b0f) | action | 1.0.0 | 6 | 261 | 3 | 1 | 2026-01-08 |
| 15 | [Review of Claude Haiku 4.5](https://openwebui.com/posts/review_of_claude_haiku_45_41b0db39) | unknown | | 0 | 62 | 1 | 0 | 2026-01-14 |
| 16 | [ 🛠️ Debug Open WebUI Plugins in Your Browser](https://openwebui.com/posts/debug_open_webui_plugins_in_your_browser_81bf7960) | unknown | | 0 | 1208 | 12 | 8 | 2026-01-10 |
| 1 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) | action | 0.9.1 | 618 | 5538 | 16 | 36 | 2026-01-17 |
| 2 | [📊 Smart Infographic (AntV)](https://openwebui.com/posts/smart_infographic_ad6f0c7f) | action | 1.4.9 | 396 | 3472 | 17 | 25 | 2026-01-18 |
| 3 | [Export to Excel](https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d) | action | 0.3.7 | 249 | 1013 | 4 | 6 | 2026-01-07 |
| 4 | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) | action | 1.2.2 | 222 | 2423 | 9 | 26 | 2026-01-21 |
| 5 | [Export to Word (Enhanced)](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) | action | 0.4.3 | 219 | 1790 | 8 | 21 | 2026-01-17 |
| 6 | [Flash Card](https://openwebui.com/posts/flash_card_65a2ea8f) | action | 0.2.4 | 164 | 2650 | 11 | 13 | 2026-01-17 |
| 7 | [Markdown Normalizer](https://openwebui.com/posts/markdown_normalizer_baaa8732) | action | 1.2.4 | 144 | 2721 | 10 | 20 | 2026-01-19 |
| 8 | [Deep Dive](https://openwebui.com/posts/deep_dive_c0b846e4) | action | 1.0.0 | 91 | 828 | 4 | 7 | 2026-01-08 |
| 9 | [导出为 Word (增强版)](https://openwebui.com/posts/导出为_word_支持公式流程图表格和代码块_8a6306c0) | action | 0.4.3 | 86 | 1588 | 11 | 3 | 2026-01-17 |
| 10 | [📊 智能信息图 (AntV Infographic)](https://openwebui.com/posts/智能信息图_e04a48ff) | action | 1.4.9 | 46 | 776 | 6 | 0 | 2026-01-17 |
| 11 | [思维导图](https://openwebui.com/posts/智能生成交互式思维导图帮助用户可视化知识_8d4b097b) | action | 0.9.1 | 27 | 445 | 4 | 1 | 2026-01-17 |
| 12 | [📂 Folder Memory – Auto-Evolving Project Context](https://openwebui.com/posts/folder_memory_auto_evolving_project_context_4a9875b2) | filter | 0.1.0 | 26 | 689 | 3 | 4 | 2026-01-20 |
| 13 | [闪记卡 (Flash Card)](https://openwebui.com/posts/闪记卡生成插件_4a31eac3) | action | 0.2.4 | 19 | 502 | 6 | 1 | 2026-01-17 |
| 14 | [异步上下文压缩](https://openwebui.com/posts/异步上下文压缩_5c0617cb) | action | 1.2.2 | 18 | 473 | 5 | 1 | 2026-01-21 |
| 15 | [精读](https://openwebui.com/posts/精读_99830b0f) | action | 1.0.0 | 9 | 304 | 3 | 1 | 2026-01-08 |
| 16 | [🚀 Open WebUI Prompt Plus: AI-Powered Prompt Manager](https://openwebui.com/posts/open_webui_prompt_plus_ai_powered_prompt_manager_s_15fa060e) | unknown | | 0 | 92 | 3 | 3 | 2026-01-25 |
| 17 | [Review of Claude Haiku 4.5](https://openwebui.com/posts/review_of_claude_haiku_45_41b0db39) | unknown | | 0 | 93 | 1 | 0 | 2026-01-14 |
| 18 | [ 🛠️ Debug Open WebUI Plugins in Your Browser](https://openwebui.com/posts/debug_open_webui_plugins_in_your_browser_81bf7960) | unknown | | 0 | 1268 | 12 | 8 | 2026-01-10 |
@@ -1,40 +1,43 @@
# 📊 OpenWebUI 社区统计报告

> 📅 更新时间: 2026-01-20 19:10
> 📅 更新时间: 2026-01-26 03:07

## 📈 总览

| 指标 | 数值 |
|------|------|
| 📝 发布数量 | 16 |
| ⬇️ 总下载量 | 1887 |
| 👁️ 总浏览量 | 22101 |
| 👍 总点赞数 | 120 |
| 💾 总收藏数 | 147 |
| 💬 总评论数 | 24 |
| 📝 发布数量 | 18 |
| ⬇️ 总下载量 | 2334 |
| 👁️ 总浏览量 | 26665 |
| 👍 总点赞数 | 133 |
| 💾 总收藏数 | 176 |
| 💬 总评论数 | 28 |

## 📂 按类型分类

- **unknown**: 3
- **action**: 14
- **unknown**: 2
- **filter**: 1

## 📋 发布列表

| 排名 | 标题 | 类型 | 版本 | 下载 | 浏览 | 点赞 | 收藏 | 更新日期 |
|:---:|------|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 1 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) | action | 0.9.1 | 550 | 4939 | 15 | 30 | 2026-01-17 |
| 2 | [📊 Smart Infographic (AntV)](https://openwebui.com/posts/smart_infographic_ad6f0c7f) | action | 1.4.9 | 282 | 2667 | 14 | 21 | 2026-01-18 |
| 3 | [Export to Excel](https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d) | action | 0.3.7 | 215 | 844 | 4 | 6 | 2026-01-07 |
| 4 | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) | action | 1.2.1 | 189 | 2051 | 9 | 22 | 2026-01-20 |
| 5 | [Export to Word (Enhanced)](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) | action | 0.4.3 | 170 | 1457 | 8 | 17 | 2026-01-17 |
| 6 | [Flash Card](https://openwebui.com/posts/flash_card_65a2ea8f) | action | 0.2.4 | 144 | 2395 | 10 | 12 | 2026-01-17 |
| 7 | [Markdown Normalizer](https://openwebui.com/posts/markdown_normalizer_baaa8732) | action | 1.2.4 | 96 | 2234 | 10 | 17 | 2026-01-19 |
| 8 | [Deep Dive](https://openwebui.com/posts/deep_dive_c0b846e4) | action | 1.0.0 | 73 | 707 | 4 | 7 | 2026-01-08 |
| 9 | [导出为 Word (增强版)](https://openwebui.com/posts/导出为_word_支持公式流程图表格和代码块_8a6306c0) | action | 0.4.3 | 65 | 1335 | 11 | 3 | 2026-01-17 |
| 10 | [📊 智能信息图 (AntV Infographic)](https://openwebui.com/posts/智能信息图_e04a48ff) | action | 1.4.9 | 43 | 704 | 6 | 0 | 2026-01-17 |
| 11 | [思维导图](https://openwebui.com/posts/智能生成交互式思维导图帮助用户可视化知识_8d4b097b) | action | 0.9.1 | 24 | 407 | 3 | 1 | 2026-01-17 |
| 12 | [闪记卡 (Flash Card)](https://openwebui.com/posts/闪记卡生成插件_4a31eac3) | action | 0.2.4 | 16 | 453 | 5 | 1 | 2026-01-17 |
| 13 | [异步上下文压缩](https://openwebui.com/posts/异步上下文压缩_5c0617cb) | action | 1.2.1 | 14 | 377 | 5 | 1 | 2026-01-20 |
| 14 | [精读](https://openwebui.com/posts/精读_99830b0f) | action | 1.0.0 | 6 | 261 | 3 | 1 | 2026-01-08 |
| 15 | [Review of Claude Haiku 4.5](https://openwebui.com/posts/review_of_claude_haiku_45_41b0db39) | unknown | | 0 | 62 | 1 | 0 | 2026-01-14 |
| 16 | [ 🛠️ Debug Open WebUI Plugins in Your Browser](https://openwebui.com/posts/debug_open_webui_plugins_in_your_browser_81bf7960) | unknown | | 0 | 1208 | 12 | 8 | 2026-01-10 |
| 1 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) | action | 0.9.1 | 618 | 5538 | 16 | 36 | 2026-01-17 |
| 2 | [📊 Smart Infographic (AntV)](https://openwebui.com/posts/smart_infographic_ad6f0c7f) | action | 1.4.9 | 396 | 3472 | 17 | 25 | 2026-01-18 |
| 3 | [Export to Excel](https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d) | action | 0.3.7 | 249 | 1013 | 4 | 6 | 2026-01-07 |
| 4 | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) | action | 1.2.2 | 222 | 2423 | 9 | 26 | 2026-01-21 |
| 5 | [Export to Word (Enhanced)](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) | action | 0.4.3 | 219 | 1790 | 8 | 21 | 2026-01-17 |
| 6 | [Flash Card](https://openwebui.com/posts/flash_card_65a2ea8f) | action | 0.2.4 | 164 | 2650 | 11 | 13 | 2026-01-17 |
| 7 | [Markdown Normalizer](https://openwebui.com/posts/markdown_normalizer_baaa8732) | action | 1.2.4 | 144 | 2721 | 10 | 20 | 2026-01-19 |
| 8 | [Deep Dive](https://openwebui.com/posts/deep_dive_c0b846e4) | action | 1.0.0 | 91 | 828 | 4 | 7 | 2026-01-08 |
| 9 | [导出为 Word (增强版)](https://openwebui.com/posts/导出为_word_支持公式流程图表格和代码块_8a6306c0) | action | 0.4.3 | 86 | 1588 | 11 | 3 | 2026-01-17 |
| 10 | [📊 智能信息图 (AntV Infographic)](https://openwebui.com/posts/智能信息图_e04a48ff) | action | 1.4.9 | 46 | 776 | 6 | 0 | 2026-01-17 |
| 11 | [思维导图](https://openwebui.com/posts/智能生成交互式思维导图帮助用户可视化知识_8d4b097b) | action | 0.9.1 | 27 | 445 | 4 | 1 | 2026-01-17 |
| 12 | [📂 Folder Memory – Auto-Evolving Project Context](https://openwebui.com/posts/folder_memory_auto_evolving_project_context_4a9875b2) | filter | 0.1.0 | 26 | 689 | 3 | 4 | 2026-01-20 |
| 13 | [闪记卡 (Flash Card)](https://openwebui.com/posts/闪记卡生成插件_4a31eac3) | action | 0.2.4 | 19 | 502 | 6 | 1 | 2026-01-17 |
| 14 | [异步上下文压缩](https://openwebui.com/posts/异步上下文压缩_5c0617cb) | action | 1.2.2 | 18 | 473 | 5 | 1 | 2026-01-21 |
| 15 | [精读](https://openwebui.com/posts/精读_99830b0f) | action | 1.0.0 | 9 | 304 | 3 | 1 | 2026-01-08 |
| 16 | [🚀 Open WebUI Prompt Plus: AI-Powered Prompt Manager](https://openwebui.com/posts/open_webui_prompt_plus_ai_powered_prompt_manager_s_15fa060e) | unknown | | 0 | 92 | 3 | 3 | 2026-01-25 |
| 17 | [Review of Claude Haiku 4.5](https://openwebui.com/posts/review_of_claude_haiku_45_41b0db39) | unknown | | 0 | 93 | 1 | 0 | 2026-01-14 |
| 18 | [ 🛠️ Debug Open WebUI Plugins in Your Browser](https://openwebui.com/posts/debug_open_webui_plugins_in_your_browser_81bf7960) | unknown | | 0 | 1268 | 12 | 8 | 2026-01-10 |
```diff
@@ -1,7 +1,7 @@
 # Async Context Compression

 <span class="category-badge filter">Filter</span>
-<span class="version-badge">v1.2.1</span>
+<span class="version-badge">v1.2.2</span>

 Reduces token consumption in long conversations through intelligent summarization while maintaining conversational coherence.
```
```diff
@@ -1,7 +1,7 @@
 # Async Context Compression(异步上下文压缩)

 <span class="category-badge filter">Filter</span>
-<span class="version-badge">v1.2.1</span>
+<span class="version-badge">v1.2.2</span>

 Reduces token consumption in long conversations through intelligent summarization while keeping the conversation coherent.
```
@@ -1,4 +1,15 @@

# Folder Memory

**Author:** [Fu-Jie](https://github.com/Fu-Jie/awesome-openwebui) | **Version:** 0.1.0 | **Project:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **License:** MIT

---

### 📌 What's new in 0.1.0

- **Initial Release**: Automated "Project Rules" management for OpenWebUI folders.
- **Folder-Level Persistence**: Automatically updates folder system prompts with extracted rules.
- **Optimized Performance**: Runs asynchronously and supports `PRIORITY` configuration for seamless integration with other filters.

---

**Folder Memory** is an intelligent context filter plugin for OpenWebUI. It automatically extracts consistent "Project Rules" from ongoing conversations within a folder and injects them back into the folder's system prompt.

@@ -11,6 +22,10 @@ This ensures that all future conversations within that folder share the same evolved context and rules, without manual updates.

- **Async Processing**: Runs in the background without blocking the user's chat experience.
- **ORM Integration**: Directly updates folder data using OpenWebUI's internal models for reliability.

## Prerequisites

- **Conversations must occur inside a folder.** This plugin only triggers when a chat belongs to a folder (i.e., you need to create a folder in OpenWebUI and start a conversation within it).

## Installation

1. Copy `folder_memory.py` to your OpenWebUI `plugins/filters/` directory (or upload via the Admin UI).
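The plugin follows OpenWebUI's standard filter shape: an `inlet` that runs before the model sees the request and an `outlet` that runs after the response. A rough, self-contained sketch of that shape (the method bodies here are illustrative stand-ins, not the actual Folder Memory code):

```python
import asyncio

class Filter:
    """Illustrative sketch of the OpenWebUI filter shape Folder Memory follows.
    The inlet/outlet method names are the real convention; bodies are stand-ins."""

    async def inlet(self, body: dict) -> dict:
        # Runs before the request reaches the model; by this point the folder's
        # accumulated rules are already part of the folder-level system prompt.
        return body

    async def outlet(self, body: dict) -> dict:
        # Runs after the model responds; rule extraction is scheduled as a
        # background task so the chat is never blocked (fire-and-forget).
        asyncio.create_task(self._extract_rules(body))
        return body

    async def _extract_rules(self, body: dict) -> None:
        # Stand-in for LLM-based rule extraction plus the ORM folder update.
        self.last_seen = len(body.get("messages", []))

async def demo():
    f = Filter()
    body = {"messages": [{"role": "user", "content": "Always answer in French."}]}
    result = await f.outlet(body)
    await asyncio.sleep(0.01)  # let the background task finish
    return f.last_seen, result
```

The key design point mirrored here is that `outlet` returns immediately while extraction happens in a separate task.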
@@ -1,4 +1,15 @@

# Folder Memory (文件夹记忆)

**Author:** [Fu-Jie](https://github.com/Fu-Jie/awesome-openwebui) | **Version:** 0.1.0 | **Project:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **License:** MIT

---

### 📌 What's new in 0.1.0

- **Initial Release**: Focused on automated "Project Rules" management.
- **Folder-Level Persistence**: Automatically writes extracted rules back into the folder's system prompt.
- **Optimized Performance**: Uses async processing and supports `PRIORITY` configuration to cooperate cleanly with other filters (such as context compression).

---

**Folder Memory** is an intelligent context filter plugin for OpenWebUI. It automatically extracts consistent "Project Rules" from conversations inside a folder and writes them back into the folder's system prompt.

@@ -11,6 +22,10 @@

- **Async Processing**: Runs in the background without blocking the user's chat experience.
- **ORM Integration**: Directly updates folder data using OpenWebUI's internal models for reliability.

## Prerequisites

- **Conversations must occur inside a folder.** This plugin only triggers when a chat belongs to a folder (i.e., you need to create a folder in OpenWebUI first and start a conversation inside it).

## Installation

1. Copy `folder_memory.py` (or the Chinese version, `folder_memory_cn.py`) to your OpenWebUI `plugins/filters/` directory (or upload via the Admin UI).
```diff
@@ -22,7 +22,7 @@ Filters act as middleware in the message pipeline:

 Reduces token consumption in long conversations through intelligent summarization while maintaining coherence.

-**Version:** 1.2.1
+**Version:** 1.2.2

 [:octicons-arrow-right-24: Documentation](async-context-compression.md)
```
```diff
@@ -22,7 +22,7 @@ Filters act as middleware in the message pipeline:

 Reduces token consumption in long conversations through intelligent summarization while maintaining coherence.

-**Version:** 1.2.1
+**Version:** 1.2.2

 [:octicons-arrow-right-24: Documentation](async-context-compression.md)
```
docs/plugins/pipes/github-copilot-sdk.md (new file, 84 lines)

@@ -0,0 +1,84 @@
|
||||
# GitHub Copilot SDK Pipe for OpenWebUI
|
||||
|
||||
**Author:** [Fu-Jie](https://github.com/Fu-Jie/awesome-openwebui) | **Version:** 0.1.0 | **Project:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **License:** MIT
|
||||
|
||||
This is an advanced Pipe function for [OpenWebUI](https://github.com/open-webui/open-webui) that allows you to use GitHub Copilot models (such as `gpt-5`, `gpt-5-mini`, `claude-sonnet-4.5`) directly within OpenWebUI. It is built upon the official [GitHub Copilot SDK for Python](https://github.com/github/copilot-sdk), providing a native integration experience.
|
||||
|
||||
## 🚀 What's New (v0.1.0)
|
||||
|
||||
* **♾️ Infinite Sessions**: Automatic context compaction for long-running conversations. No more context limit errors!
|
||||
* **🧠 Thinking Process**: Real-time display of model reasoning/thinking process (for supported models).
|
||||
* **📂 Workspace Control**: Restricted workspace directory for secure file operations.
|
||||
* **🔍 Model Filtering**: Exclude specific models using keywords (e.g., `codex`, `haiku`).
|
||||
* **💾 Session Persistence**: Improved session resume logic using OpenWebUI chat ID mapping.
|
||||
|
||||
## ✨ Core Features
|
||||
|
||||
* **🚀 Official SDK Integration**: Built on the official SDK for stability and reliability.
|
||||
* **💬 Multi-turn Conversation**: Automatically concatenates history context so Copilot understands your previous messages.
|
||||
* **🌊 Streaming Output**: Supports typewriter effect for fast responses.
|
||||
* **🖼️ Multimodal Support**: Supports image uploads, automatically converting them to attachments for Copilot (requires model support).
|
||||
* **🛠️ Zero-config Installation**: Automatically detects and downloads the GitHub Copilot CLI, ready to use out of the box.
|
||||
* **🔑 Secure Authentication**: Supports Fine-grained Personal Access Tokens for minimized permissions.
|
||||
* **🐛 Debug Mode**: Built-in detailed log output for easy connection troubleshooting.
|
||||
|
||||
## 📦 Installation & Usage
|
||||
|
||||
### 1. Import Function
|
||||
|
||||
1. Open OpenWebUI.
|
||||
2. Go to **Workspace** -> **Functions**.
|
||||
3. Click **+** (Create Function).
|
||||
4. Paste the content of `github_copilot_sdk.py` (or `github_copilot_sdk_cn.py` for Chinese) completely.
|
||||
5. Save.
|
||||
|
||||
### 2. Configure Valves (Settings)
|
||||
|
||||
Find "GitHub Copilot" in the function list and click the **⚙️ (Valves)** icon to configure:
|
||||
|
||||
| Parameter | Description | Default |
|
||||
| :--- | :--- | :--- |
|
||||
| **GH_TOKEN** | **(Required)** Your GitHub Token. | - |
|
||||
| **MODEL_ID** | The model name to use. Recommended `gpt-5-mini` or `gpt-5`. | `gpt-5-mini` |
|
||||
| **CLI_PATH** | Path to the Copilot CLI. Will download automatically if not found. | `/usr/local/bin/copilot` |
|
||||
| **DEBUG** | Whether to enable debug logs (output to chat). | `True` |
|
||||
| **SHOW_THINKING** | Show model reasoning/thinking process. | `True` |
|
||||
| **EXCLUDE_KEYWORDS** | Exclude models containing these keywords (comma separated). | - |
|
||||
| **WORKSPACE_DIR** | Restricted workspace directory for file operations. | - |
|
||||
| **INFINITE_SESSION** | Enable Infinite Sessions (automatic context compaction). | `True` |
|
||||
| **COMPACTION_THRESHOLD** | Background compaction threshold (0.0-1.0). | `0.8` |
|
||||
| **BUFFER_THRESHOLD** | Buffer exhaustion threshold (0.0-1.0). | `0.95` |
|
||||
|
||||
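For orientation, the valve table maps naturally onto a small settings object. The sketch below is illustrative only (the actual plugin defines a Pydantic `Valves` model inside its Pipe class; the `is_excluded` helper is a hypothetical name showing how `EXCLUDE_KEYWORDS` could be applied):

```python
from dataclasses import dataclass

@dataclass
class Valves:
    # Illustrative mirror of the valve table above, with the documented defaults.
    GH_TOKEN: str = ""                 # (required) GitHub token
    MODEL_ID: str = "gpt-5-mini"
    CLI_PATH: str = "/usr/local/bin/copilot"
    DEBUG: bool = True
    SHOW_THINKING: bool = True
    EXCLUDE_KEYWORDS: str = ""         # e.g. "codex,haiku"
    WORKSPACE_DIR: str = ""
    INFINITE_SESSION: bool = True
    COMPACTION_THRESHOLD: float = 0.8  # start background compaction at 80% of context
    BUFFER_THRESHOLD: float = 0.95     # hard ceiling before forced compaction

def is_excluded(model_name: str, valves: Valves) -> bool:
    """Apply the EXCLUDE_KEYWORDS valve to a model name (substring match)."""
    keywords = [k.strip() for k in valves.EXCLUDE_KEYWORDS.split(",") if k.strip()]
    return any(k in model_name for k in keywords)
```

With `EXCLUDE_KEYWORDS="codex,haiku"`, a model like `claude-haiku-4.5` would be filtered out of the model list while `gpt-5-mini` remains.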
### 3. Get GH_TOKEN

For security, it is recommended to use a **fine-grained personal access token**:

1. Visit [GitHub Token Settings](https://github.com/settings/tokens?type=beta).
2. Click **Generate new token**.
3. **Repository access**: select `All repositories` or `Public Repositories`.
4. **Permissions**:
   * Click **Account permissions**.
   * Find **Copilot Requests** and select **Read and write** (or Access).
5. Generate and copy the token.

## 📋 Dependencies

This Pipe will automatically attempt to install the following dependencies:

* `github-copilot-sdk` (Python package)
* `github-copilot-cli` (binary, installed via the official script)

## ⚠️ FAQ

* **Stuck on "Waiting..."**:
  * Check that `GH_TOKEN` is correct and has the `Copilot Requests` permission.
  * Try changing `MODEL_ID` to `gpt-4o` or `copilot-chat`.
* **Images not recognized**:
  * Ensure `MODEL_ID` is a model that supports multimodal input.
* **CLI installation failed**:
  * Ensure the OpenWebUI container has internet access.
  * You can manually download the CLI and specify `CLI_PATH` in the Valves.

## 📄 License

MIT
docs/plugins/pipes/github-copilot-sdk.zh.md (new file, 84 lines)

@@ -0,0 +1,84 @@
|
||||
# GitHub Copilot SDK 官方管道
|
||||
|
||||
**作者:** [Fu-Jie](https://github.com/Fu-Jie/awesome-openwebui) | **版本:** 0.1.0 | **项目:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **许可证:** MIT
|
||||
|
||||
这是一个用于 [OpenWebUI](https://github.com/open-webui/open-webui) 的高级 Pipe 函数,允许你直接在 OpenWebUI 中使用 GitHub Copilot 模型(如 `gpt-5`, `gpt-5-mini`, `claude-sonnet-4.5`)。它基于官方 [GitHub Copilot SDK for Python](https://github.com/github/copilot-sdk) 构建,提供了原生级的集成体验。
|
||||
|
||||
## 🚀 最新特性 (v0.1.0)
|
||||
|
||||
* **♾️ 无限会话 (Infinite Sessions)**:支持长对话的自动上下文压缩,告别上下文超限错误!
|
||||
* **🧠 思考过程展示**:实时显示模型的推理/思考过程(需模型支持)。
|
||||
* **📂 工作目录控制**:支持设置受限工作目录,确保文件操作安全。
|
||||
* **🔍 模型过滤**:支持通过关键词排除特定模型(如 `codex`, `haiku`)。
|
||||
* **💾 会话持久化**: 改进的会话恢复逻辑,直接关联 OpenWebUI 聊天 ID,连接更稳定。
|
||||
|
||||
## ✨ 核心特性
|
||||
|
||||
* **🚀 官方 SDK 集成**:基于官方 SDK,稳定可靠。
|
||||
* **💬 多轮对话支持**:自动拼接历史上下文,Copilot 能理解你的前文。
|
||||
* **🌊 流式输出 (Streaming)**:支持打字机效果,响应迅速。
|
||||
* **🖼️ 多模态支持**:支持上传图片,自动转换为附件发送给 Copilot(需模型支持)。
|
||||
* **🛠️ 零配置安装**:自动检测并下载 GitHub Copilot CLI,开箱即用。
|
||||
* **🔑 安全认证**:支持 Fine-grained Personal Access Tokens,权限最小化。
|
||||
* **🐛 调试模式**:内置详细的日志输出,方便排查连接问题。
|
||||
|
||||
## 📦 安装与使用
|
||||
|
||||
### 1. 导入函数
|
||||
|
||||
1. 打开 OpenWebUI。
|
||||
2. 进入 **Workspace** -> **Functions**。
|
||||
3. 点击 **+** (创建函数)。
|
||||
4. 将 `github_copilot_sdk_cn.py` 的内容完整粘贴进去。
|
||||
5. 保存。
|
||||
|
||||
### 2. 配置 Valves (设置)
|
||||
|
||||
在函数列表中找到 "GitHub Copilot",点击 **⚙️ (Valves)** 图标进行配置:
|
||||
|
||||
| 参数 | 说明 | 默认值 |
|
||||
| :--- | :--- | :--- |
|
||||
| **GH_TOKEN** | **(必填)** 你的 GitHub Token。 | - |
|
||||
| **MODEL_ID** | 使用的模型名称。推荐 `gpt-5-mini` 或 `gpt-5`。 | `gpt-5-mini` |
|
||||
| **CLI_PATH** | Copilot CLI 的路径。如果未找到会自动下载。 | `/usr/local/bin/copilot` |
|
||||
| **DEBUG** | 是否开启调试日志(输出到对话框)。 | `True` |
|
||||
| **SHOW_THINKING** | 是否显示模型推理/思考过程。 | `True` |
|
||||
| **EXCLUDE_KEYWORDS** | 排除包含这些关键词的模型 (逗号分隔)。 | - |
|
||||
| **WORKSPACE_DIR** | 文件操作的受限工作目录。 | - |
|
||||
| **INFINITE_SESSION** | 启用无限会话 (自动上下文压缩)。 | `True` |
|
||||
| **COMPACTION_THRESHOLD** | 后台压缩阈值 (0.0-1.0)。 | `0.8` |
|
||||
| **BUFFER_THRESHOLD** | 缓冲耗尽阈值 (0.0-1.0)。 | `0.95` |
|
||||
|
||||
### 3. 获取 GH_TOKEN
|
||||
|
||||
为了安全起见,推荐使用 **Fine-grained Personal Access Token**:
|
||||
|
||||
1. 访问 [GitHub Token Settings](https://github.com/settings/tokens?type=beta)。
|
||||
2. 点击 **Generate new token**。
|
||||
3. **Repository access**: 选择 `All repositories` 或 `Public Repositories`。
|
||||
4. **Permissions**:
|
||||
* 点击 **Account permissions**。
|
||||
* 找到 **Copilot Requests**,选择 **Read and write** (或 Access)。
|
||||
5. 生成并复制 Token。
|
||||
|
||||
## 📋 依赖说明
|
||||
|
||||
该 Pipe 会自动尝试安装以下依赖(如果环境中缺失):
|
||||
|
||||
* `github-copilot-sdk` (Python 包)
|
||||
* `github-copilot-cli` (二进制文件,通过官方脚本安装)
|
||||
|
||||
## ⚠️ 常见问题
|
||||
|
||||
* **一直显示 "Waiting..."**:
|
||||
* 检查 `GH_TOKEN` 是否正确且拥有 `Copilot Requests` 权限。
|
||||
* 尝试将 `MODEL_ID` 改为 `gpt-4o` 或 `copilot-chat`。
|
||||
* **图片无法识别**:
|
||||
* 确保 `MODEL_ID` 是支持多模态的模型。
|
||||
* **CLI 安装失败**:
|
||||
* 确保 OpenWebUI 容器有外网访问权限。
|
||||
* 你可以手动下载 CLI 并挂载到容器中,然后在 Valves 中指定 `CLI_PATH`。
|
||||
|
||||
## 📄 许可证
|
||||
|
||||
MIT
|
||||
@@ -15,7 +15,7 @@ Pipes allow you to:

## Available Pipe Plugins

- [GitHub Copilot SDK](github-copilot-sdk.md) (v0.1.1) - Official GitHub Copilot SDK integration. Supports dynamic models, multi-turn conversation, streaming, multimodal input, and infinite sessions.

---
@@ -15,7 +15,7 @@ Pipes can be used to:

## Available Pipe Plugins

- [GitHub Copilot SDK](github-copilot-sdk.zh.md) (v0.1.1) - Official GitHub Copilot SDK integration. Supports dynamic models, multi-turn conversation, streaming output, image input, and infinite sessions.

---
```diff
@@ -1,9 +1,13 @@
 # Async Context Compression Filter

-**Author:** [Fu-Jie](https://github.com/Fu-Jie/awesome-openwebui) | **Version:** 1.2.1 | **Project:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **License:** MIT
+**Author:** [Fu-Jie](https://github.com/Fu-Jie/awesome-openwebui) | **Version:** 1.2.2 | **Project:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **License:** MIT

 This filter reduces token consumption in long conversations through intelligent summarization and message compression while keeping conversations coherent.

+## What's new in 1.2.2
+
+- **Critical Fix**: Resolved `TypeError: 'str' object is not callable` caused by a variable name conflict in the logging function.
+- **Compatibility**: Enhanced `params` handling to support Pydantic objects, improving compatibility with different OpenWebUI versions.

 ## What's new in 1.2.1

 - **Smart Configuration**: Automatically detects base model settings for custom models and adds `summary_model_max_context` for independent summary limits.
```
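The "variable name conflict" fixed in 1.2.2 is the classic builtin-shadowing trap: a parameter named `type` hides Python's `type()` builtin, so any later call to it inside the function blows up. A minimal reproduction (illustrative only, not the plugin's actual code):

```python
def log_buggy(message, type="info"):
    # The parameter shadows the builtin; `type` is now the string "info",
    # so calling it raises TypeError: 'str' object is not callable.
    return type(message)

def log_fixed(message, log_type="info"):
    # Renaming the parameter (as done in 1.2.2) restores access to the builtin.
    return type(message).__name__, log_type
```

Calling `log_buggy("hi")` raises `TypeError`, while `log_fixed("hi")` works as expected.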
```diff
@@ -1,11 +1,15 @@
 # Async Context Compression Filter (异步上下文压缩过滤器)

-**Author:** [Fu-Jie](https://github.com/Fu-Jie/awesome-openwebui) | **Version:** 1.2.1 | **Project:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **License:** MIT
+**Author:** [Fu-Jie](https://github.com/Fu-Jie/awesome-openwebui) | **Version:** 1.2.2 | **Project:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **License:** MIT

 > **Note**: To keep every filter maintainable and easy to use, each filter should ship with clear, complete documentation covering its features, configuration, and usage.

 This filter significantly reduces token consumption in long conversations through intelligent summarization and message compression while preserving conversational coherence.

+## What's new in 1.2.2
+
+- **Critical Fix**: Resolved the `TypeError: 'str' object is not callable` error caused by a variable name conflict in the logging function.
+- **Compatibility**: Improved `params` handling to support Pydantic objects, increasing compatibility with different OpenWebUI versions.

 ## What's new in 1.2.1

 - **Smart Configuration**: Automatically detects the base model configuration for custom models, and adds a `summary_model_max_context` parameter to independently control the summary model's context limit.
```
```diff
@@ -5,7 +5,7 @@ author: Fu-Jie
 author_url: https://github.com/Fu-Jie/awesome-openwebui
 funding_url: https://github.com/open-webui
 description: Reduces token consumption in long conversations while maintaining coherence through intelligent summarization and message compression.
-version: 1.2.1
+version: 1.2.2
 openwebui_id: b1655bc8-6de9-4cad-8cb5-a6f7829a02ce
 license: MIT
```
```diff
@@ -839,7 +839,7 @@ class Filter:
         except Exception as e:
             logger.error(f"Error emitting debug log: {e}")

-    async def _log(self, message: str, type: str = "info", event_call=None):
+    async def _log(self, message: str, log_type: str = "info", event_call=None):
         """Unified logging to both backend (print) and frontend (console.log)"""
         # Backend logging
         if self.valves.debug_mode:
@@ -849,11 +849,11 @@ class Filter:
         if self.valves.show_debug_log and event_call:
             try:
                 css = "color: #3b82f6;"  # Blue default
-                if type == "error":
+                if log_type == "error":
                     css = "color: #ef4444; font-weight: bold;"  # Red
-                elif type == "warning":
+                elif log_type == "warning":
                     css = "color: #f59e0b;"  # Orange
-                elif type == "success":
+                elif log_type == "success":
                     css = "color: #10b981; font-weight: bold;"  # Green

                 # Clean message for frontend: remove separators and extra newlines
@@ -999,6 +999,7 @@ class Filter:
         # 2. For base models: check messages for role='system'
         system_prompt_content = None

+        # Try to get from DB (custom model)
         try:
             model_id = body.get("model")
@@ -1026,12 +1027,17 @@ class Filter:
             # Handle case where params is a JSON string
             if isinstance(params, str):
                 params = json.loads(params)
+            # Convert Pydantic model to dict if needed
+            elif hasattr(params, "model_dump"):
+                params = params.model_dump()
+            elif hasattr(params, "dict"):
+                params = params.dict()

-            # Handle dict or Pydantic object
+            # Now params should be a dict
             if isinstance(params, dict):
                 system_prompt_content = params.get("system")
             else:
-                # Assume Pydantic model or object
+                # Fallback: try getattr
                 system_prompt_content = getattr(params, "system", None)

         if system_prompt_content:
```
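The `params` normalization added in that hunk reads naturally as a small standalone helper: accept a JSON string, a Pydantic v2 model (`model_dump`), a Pydantic v1 model (`.dict()`), or a plain dict, and fall back to `getattr`. A self-contained sketch (using a dummy object in place of a real Pydantic model):

```python
import json

def get_system_prompt(params):
    """Return the 'system' prompt from params of various shapes."""
    if isinstance(params, str):          # JSON string from the DB
        params = json.loads(params)
    elif hasattr(params, "model_dump"):  # Pydantic v2 model
        params = params.model_dump()
    elif hasattr(params, "dict"):        # Pydantic v1 model
        params = params.dict()

    if isinstance(params, dict):
        return params.get("system")
    return getattr(params, "system", None)  # last-resort fallback

class FakeModelV2:
    # Stand-in for a Pydantic v2 object; real code receives a BaseModel.
    def model_dump(self):
        return {"system": "be brief"}
```

Checking `hasattr` before `isinstance(params, dict)` is what makes the filter tolerant of OpenWebUI versions that hand back model objects instead of plain dicts.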
```diff
@@ -1050,7 +1056,7 @@ class Filter:
         if self.valves.show_debug_log and __event_call__:
             await self._log(
                 f"[Inlet] ❌ Failed to parse model params: {e}",
-                type="error",
+                log_type="error",
                 event_call=__event_call__,
             )
@@ -1071,7 +1077,7 @@ class Filter:
         if self.valves.show_debug_log and __event_call__:
             await self._log(
                 f"[Inlet] ❌ Error fetching system prompt from DB: {e}",
-                type="error",
+                log_type="error",
                 event_call=__event_call__,
             )
         if self.valves.debug_mode:
@@ -1125,7 +1131,7 @@ class Filter:
         if not chat_id:
             await self._log(
                 "[Inlet] ❌ Missing chat_id in metadata, skipping compression",
-                type="error",
+                log_type="error",
                 event_call=__event_call__,
             )
             return body
@@ -1154,7 +1160,7 @@ class Filter:
             else:
                 await self._log(
                     f"[Inlet] ⚠️ Invalid Model Configs (Raw: '{raw_config}'): No valid configs parsed. Expected format: 'model_id:threshold:max_context'",
-                    type="warning",
+                    log_type="warning",
                     event_call=__event_call__,
                 )
         else:
@@ -1258,7 +1264,7 @@ class Filter:
         if total_tokens > max_context_tokens:
             await self._log(
                 f"[Inlet] ⚠️ Candidate prompt ({total_tokens} Tokens) exceeds limit ({max_context_tokens}). Reducing history...",
-                type="warning",
+                log_type="warning",
                 event_call=__event_call__,
             )
@@ -1395,7 +1401,7 @@ class Filter:
         await self._log(
             f"[Inlet] Applied summary: {system_info} + Head({len(head_messages)} msg, {head_tokens}t) + Summary({summary_tokens}t) + Tail({len(tail_messages)} msg, {tail_tokens}t) = Total({total_section_tokens}t)",
-            type="success",
+            log_type="success",
             event_call=__event_call__,
         )
@@ -1455,7 +1461,7 @@ class Filter:
         if total_tokens > max_context_tokens:
             await self._log(
                 f"[Inlet] ⚠️ Original messages ({total_tokens} Tokens) exceed limit ({max_context_tokens}). Reducing history...",
-                type="warning",
+                log_type="warning",
                 event_call=__event_call__,
             )
@@ -1523,7 +1529,7 @@ class Filter:
         if not chat_id:
             await self._log(
                 "[Outlet] ❌ Missing chat_id in metadata, skipping compression",
-                type="error",
+                log_type="error",
                 event_call=__event_call__,
             )
             return body
@@ -1625,7 +1631,7 @@ class Filter:
         if current_tokens >= compression_threshold_tokens:
             await self._log(
                 f"[🔍 Background Calculation] ⚡ Compression threshold triggered (Token: {current_tokens} >= {compression_threshold_tokens})",
-                type="warning",
+                log_type="warning",
                 event_call=__event_call__,
             )
@@ -1648,7 +1654,7 @@ class Filter:
         except Exception as e:
             await self._log(
                 f"[🔍 Background Calculation] ❌ Error: {str(e)}",
-                type="error",
+                log_type="error",
                 event_call=__event_call__,
             )
@@ -1687,7 +1693,7 @@ class Filter:
             target_compressed_count = max(0, len(messages) - self.valves.keep_last)
             await self._log(
                 f"[🤖 Async Summary Task] ⚠️ target_compressed_count is None, estimating: {target_compressed_count}",
-                type="warning",
+                log_type="warning",
                 event_call=__event_call__,
             )
@@ -1734,7 +1740,7 @@ class Filter:
         if not summary_model_id:
             await self._log(
                 "[🤖 Async Summary Task] ⚠️ Summary model does not exist, skipping compression",
-                type="warning",
+                log_type="warning",
                 event_call=__event_call__,
             )
             return
@@ -1765,7 +1771,7 @@ class Filter:
             excess_tokens = estimated_input_tokens - max_context_tokens
             await self._log(
                 f"[🤖 Async Summary Task] ⚠️ Middle messages ({middle_tokens} Tokens) + Buffer exceed summary model limit ({max_context_tokens}), need to remove approx {excess_tokens} Tokens",
-                type="warning",
+                log_type="warning",
                 event_call=__event_call__,
             )
@@ -1822,7 +1828,7 @@ class Filter:
         if not new_summary:
             await self._log(
                 "[🤖 Async Summary Task] ⚠️ Summary generation returned empty result, skipping save",
-                type="warning",
+                log_type="warning",
                 event_call=__event_call__,
             )
             return
@@ -1851,7 +1857,7 @@ class Filter:
         await self._log(
             f"[🤖 Async Summary Task] ✅ Complete! New summary length: {len(new_summary)} characters",
-            type="success",
+            log_type="success",
             event_call=__event_call__,
         )
         await self._log(
@@ -1957,14 +1963,14 @@ class Filter:
             except Exception as e:
                 await self._log(
                     f"[Status] Error calculating tokens: {e}",
-                    type="error",
+                    log_type="error",
                     event_call=__event_call__,
                 )

         except Exception as e:
             await self._log(
                 f"[🤖 Async Summary Task] ❌ Error: {str(e)}",
-                type="error",
+                log_type="error",
                 event_call=__event_call__,
             )
@@ -2066,7 +2072,7 @@ Based on the content above, generate the summary:
         if not model:
             await self._log(
                 "[🤖 LLM Call] ⚠️ Summary model does not exist, skipping summary generation",
-                type="warning",
+                log_type="warning",
                 event_call=__event_call__,
             )
             return ""
@@ -2133,7 +2139,7 @@ Based on the content above, generate the summary:
         await self._log(
             f"[🤖 LLM Call] ✅ Successfully received summary",
-            type="success",
+            log_type="success",
             event_call=__event_call__,
         )
@@ -2154,7 +2160,7 @@ Based on the content above, generate the summary:
         await self._log(
             f"[🤖 LLM Call] ❌ {error_message}",
-            type="error",
+            log_type="error",
             event_call=__event_call__,
         )
```
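The threshold check that recurs through these hunks (`current_tokens >= compression_threshold_tokens`) is plain ratio arithmetic over the model's context window. A sketch with assumed numbers (the 0.8 default is illustrative; the real threshold comes from the filter's valves):

```python
def compression_triggered(current_tokens: int, max_context: int, ratio: float = 0.8) -> bool:
    """True once token usage crosses the compression threshold (a ratio of max context)."""
    threshold_tokens = int(max_context * ratio)
    return current_tokens >= threshold_tokens
```

For example, with a 128,000-token context and a 0.8 ratio, background compression would kick in once the conversation exceeds 102,400 tokens.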
```diff
@@ -5,7 +5,7 @@ author: Fu-Jie
 author_url: https://github.com/Fu-Jie/awesome-openwebui
 funding_url: https://github.com/open-webui
 description: Reduces token consumption in long conversations through intelligent summarization and message compression while maintaining conversational coherence.
-version: 1.2.1
+version: 1.2.2
 openwebui_id: 5c0617cb-a9e4-4bd6-a440-d276534ebd18
 license: MIT
```
```diff
@@ -787,7 +787,7 @@ class Filter:
         except Exception as e:
             print(f"Error emitting debug log: {e}")

-    async def _log(self, message: str, type: str = "info", event_call=None):
+    async def _log(self, message: str, log_type: str = "info", event_call=None):
         """Unified logging to both backend (print) and frontend (console.log)"""
         # Backend logging
         if self.valves.debug_mode:
@@ -797,11 +797,11 @@ class Filter:
         if self.valves.show_debug_log and event_call:
             try:
                 css = "color: #3b82f6;"  # Blue default
-                if type == "error":
+                if log_type == "error":
                     css = "color: #ef4444; font-weight: bold;"  # Red
-                elif type == "warning":
+                elif log_type == "warning":
                     css = "color: #f59e0b;"  # Orange
-                elif type == "success":
+                elif log_type == "success":
                     css = "color: #10b981; font-weight: bold;"  # Green

                 # Clean message for frontend: remove separators and extra newlines
@@ -948,12 +948,17 @@ class Filter:
             # Handle case where params is a JSON string
             if isinstance(params, str):
                 params = json.loads(params)
+            # Convert Pydantic model to dict if needed
+            elif hasattr(params, "model_dump"):
+                params = params.model_dump()
+            elif hasattr(params, "dict"):
+                params = params.dict()

-            # Handle dict or Pydantic object
+            # Handle dict
             if isinstance(params, dict):
                 system_prompt_content = params.get("system")
             else:
-                # Assume Pydantic model or object
+                # Fallback: try getattr
                 system_prompt_content = getattr(params, "system", None)

         if system_prompt_content:
@@ -972,7 +977,7 @@ class Filter:
         if self.valves.show_debug_log and __event_call__:
             await self._log(
                 f"[Inlet] ❌ Failed to parse model params: {e}",
-                type="error",
+                log_type="error",
                 event_call=__event_call__,
             )
@@ -986,7 +991,7 @@ class Filter:
         if self.valves.show_debug_log and __event_call__:
             await self._log(
                 f"[Inlet] ❌ Model not found in database",
-                type="warning",
+                log_type="warning",
                 event_call=__event_call__,
             )
@@ -994,7 +999,7 @@ class Filter:
         if self.valves.show_debug_log and __event_call__:
             await self._log(
                 f"[Inlet] ❌ Error fetching system prompt from DB: {e}",
-                type="error",
+                log_type="error",
                 event_call=__event_call__,
             )
         if self.valves.debug_mode:
@@ -1048,7 +1053,7 @@ class Filter:
         if not chat_id:
             await self._log(
                 "[Inlet] ❌ Missing chat_id in metadata, skipping compression",
-                type="error",
+                log_type="error",
                 event_call=__event_call__,
             )
             return body
@@ -1154,7 +1159,7 @@ class Filter:
         if total_tokens > max_context_tokens:
             await self._log(
                 f"[Inlet] ⚠️ Candidate prompt ({total_tokens} Tokens) exceeds limit ({max_context_tokens}). Reducing history...",
-                type="warning",
+                log_type="warning",
                 event_call=__event_call__,
             )
@@ -1290,7 +1295,7 @@ class Filter:
         await self._log(
             f"[Inlet] Applied summary: {system_info} + Head({len(head_messages)} msg, {head_tokens}t) + Summary({summary_tokens}t) + Tail({len(tail_messages)} msg, {tail_tokens}t) = Total({total_section_tokens}t)",
-            type="success",
+            log_type="success",
             event_call=__event_call__,
         )
@@ -1350,7 +1355,7 @@ class Filter:
         if total_tokens > max_context_tokens:
             await self._log(
                 f"[Inlet] ⚠️ Original messages ({total_tokens} Tokens) exceed limit ({max_context_tokens}). Reducing history...",
-                type="warning",
+                log_type="warning",
                 event_call=__event_call__,
             )
@@ -1420,7 +1425,7 @@ class Filter:
         if not chat_id:
             await self._log(
                 "[Outlet] ❌ Missing chat_id in metadata, skipping compression",
-                type="error",
+                log_type="error",
                 event_call=__event_call__,
             )
             return body
@@ -1486,7 +1491,7 @@ class Filter:
         if current_tokens >= compression_threshold_tokens:
             await self._log(
                 f"[🔍 Background Calculation] ⚡ Compression threshold triggered (Token: {current_tokens} >= {compression_threshold_tokens})",
-                type="warning",
+                log_type="warning",
                 event_call=__event_call__,
             )
@@ -1509,7 +1514,7 @@ class Filter:
         except Exception as e:
             await self._log(
                 f"[🔍 Background Calculation] ❌ Error: {str(e)}",
-                type="error",
+                log_type="error",
                 event_call=__event_call__,
             )
@@ -1546,7 +1551,7 @@ class Filter:
             target_compressed_count = max(0, len(messages) - self.valves.keep_last)
             await self._log(
                 f"[🤖 Async Summary Task] ⚠️ target_compressed_count is None, estimating: {target_compressed_count}",
-                type="warning",
+                log_type="warning",
                 event_call=__event_call__,
             )
@@ -1593,7 +1598,7 @@ class Filter:
         if not summary_model_id:
             await self._log(
                 "[🤖 Async Summary Task] ⚠️ Summary model does not exist, skipping compression",
-                type="warning",
+                log_type="warning",
                 event_call=__event_call__,
             )
             return
@@ -1624,7 +1629,7 @@ class Filter:
             excess_tokens = estimated_input_tokens - max_context_tokens
             await self._log(
                 f"[🤖 Async Summary Task] ⚠️ Middle messages ({middle_tokens} Tokens) + Buffer exceed summary model limit ({max_context_tokens}), need to remove approx {excess_tokens} Tokens",
-                type="warning",
+                log_type="warning",
                 event_call=__event_call__,
             )
@@ -1681,7 +1686,7 @@ class Filter:
         if not new_summary:
             await self._log(
                 "[🤖 Async Summary Task] ⚠️ Summary generation returned empty result, skipping save",
-                type="warning",
+                log_type="warning",
                 event_call=__event_call__,
             )
             return
@@ -1710,7 +1715,7 @@ class Filter:
         await self._log(
             f"[🤖 Async Summary Task] ✅ Complete! New summary length: {len(new_summary)} characters",
-            type="success",
+            log_type="success",
             event_call=__event_call__,
         )
         await self._log(
@@ -1821,14 +1826,14 @@ class Filter:
             except Exception as e:
                 await self._log(
                     f"[Status] Error calculating tokens: {e}",
-                    type="error",
+                    log_type="error",
                     event_call=__event_call__,
                 )

         except Exception as e:
             await self._log(
                 f"[🤖 Async Summary Task] ❌ Error: {str(e)}",
-                type="error",
+                log_type="error",
                 event_call=__event_call__,
             )
@@ -1928,7 +1933,7 @@ class Filter:
         if not model:
             await self._log(
                 "[🤖 LLM Call] ⚠️ Summary model does not exist, skipping summary generation",
-                type="warning",
+                log_type="warning",
                 event_call=__event_call__,
             )
             return ""
@@ -1995,7 +2000,7 @@ class Filter:
         await self._log(
             f"[🤖 LLM Call] ✅ Successfully received summary",
-            type="success",
+            log_type="success",
             event_call=__event_call__,
         )
@@ -2016,7 +2021,7 @@ class Filter:
         await self._log(
             f"[🤖 LLM Call] ❌ {error_message}",
-            type="error",
+            log_type="error",
             event_call=__event_call__,
         )
```
@@ -1,10 +1,17 @@
# Folder Memory

English | [中文](./README_CN.md)
**Author:** [Fu-Jie](https://github.com/Fu-Jie/awesome-openwebui) | **Version:** 0.1.0 | **Project:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **License:** MIT

**Folder Memory** (formerly Folder Rule Collector) is an intelligent context filter plugin for OpenWebUI. It automatically extracts consistent "Project Rules" from ongoing conversations within a folder and injects them back into the folder's system prompt.
---

This ensures that all future conversations within that folder share the same evolved context and rules, without manual updates.
### 📌 What's new in 0.1.0
- **Initial Release**: Automated "Project Rules" management for OpenWebUI folders.
- **Folder-Level Persistence**: Automatically updates folder system prompts with extracted rules.
- **Optimized Performance**: Runs asynchronously and supports `PRIORITY` configuration for seamless integration with other filters.

---

**Folder Memory** is an intelligent context filter plugin for OpenWebUI. It automatically extracts consistent "Project Rules" from ongoing conversations within a folder and injects them back into the folder's system prompt.

## ✨ Features

@@ -13,6 +20,10 @@ This ensures that all future conversations within that folder share the same evo
- **Async Processing**: Runs in the background without blocking the user's chat experience.
- **ORM Integration**: Directly updates folder data using OpenWebUI's internal models for reliability.

## ⚠️ Prerequisites

- **Conversations must occur inside a folder.** This plugin only triggers when a chat belongs to a folder (i.e., you need to create a folder in OpenWebUI and start a conversation within it).

## 📦 Installation

1. Copy `folder_memory.py` to your OpenWebUI `plugins/filters/` directory (or upload via Admin UI).
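The rule-injection behavior described above (writing extracted rules back into the folder's system prompt) boils down to maintaining one managed block of text inside that prompt. A minimal, dependency-free sketch of the idea — the marker comments and the helper name `inject_rules` are hypothetical illustrations, not the plugin's actual API:

```python
import re

# Hypothetical markers delimiting the plugin-managed block.
MARK_START = "<!-- folder-memory:start -->"
MARK_END = "<!-- folder-memory:end -->"


def inject_rules(system_prompt: str, rules: str) -> str:
    """Replace the managed rules block in a system prompt, or append one."""
    block = f"{MARK_START}\n{rules}\n{MARK_END}"
    pattern = re.compile(re.escape(MARK_START) + r".*?" + re.escape(MARK_END), re.S)
    if pattern.search(system_prompt):
        # Re-running with new rules overwrites the old block in place.
        return pattern.sub(lambda _m: block, system_prompt)
    return (system_prompt + "\n\n" + block).strip()
```

Because only the marked block is rewritten, any user-authored parts of the folder prompt outside the markers survive repeated rule extractions.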
@@ -1,8 +1,17 @@
# 文件夹记忆 (Folder Memory)

[English](./README.md) | 中文
**作者:** [Fu-Jie](https://github.com/Fu-Jie/awesome-openwebui) | **版本:** 0.1.0 | **项目:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **许可证:** MIT

**文件夹记忆 (Folder Memory)** (原名 Folder Rule Collector) 是一个 OpenWebUI 的智能上下文过滤器插件。它能自动从文件夹内的对话中提取一致性的“项目规则”,并将其回写到文件夹的系统提示词中。
---

### 📌 0.1.0 版本特性
- **首个版本发布**:专注于自动化的“项目规则”管理。
- **文件夹级持久化**:自动将提取的规则回写到文件夹系统提示词中。
- **性能优化**:采用异步处理机制,并支持 `PRIORITY` 配置,确保与其他过滤器(如上下文压缩)完美协作。

---

**文件夹记忆 (Folder Memory)** 是一个 OpenWebUI 的智能上下文过滤器插件。它能自动从文件夹内的对话中提取一致性的“项目规则”,并将其回写到文件夹的系统提示词中。

这确保了该文件夹内的所有未来对话都能共享相同的进化上下文和规则,无需手动更新。

@@ -13,6 +22,10 @@
- **异步处理**:在后台运行,不阻塞用户的聊天体验。
- **ORM 集成**:直接使用 OpenWebUI 的内部模型更新文件夹数据,确保可靠性。

## ⚠️ 前置条件

- **对话必须在文件夹内进行。** 此插件仅在聊天属于某个文件夹时触发(即您需要先在 OpenWebUI 中创建一个文件夹,并在其内部开始对话)。

## 📦 安装指南

1. 将 `folder_memory.py` (或中文版 `folder_memory_cn.py`) 复制到 OpenWebUI 的 `plugins/filters/` 目录(或通过管理员 UI 上传)。
81
plugins/pipes/github-copilot-sdk/README.md
Normal file
@@ -0,0 +1,81 @@
# GitHub Copilot SDK Pipe for OpenWebUI

**Author:** [Fu-Jie](https://github.com/Fu-Jie/awesome-openwebui) | **Version:** 0.1.1 | **Project:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **License:** MIT

This is an advanced Pipe function for [OpenWebUI](https://github.com/open-webui/open-webui) that allows you to use GitHub Copilot models (such as `gpt-5`, `gpt-5-mini`, `claude-sonnet-4.5`) directly within OpenWebUI. It is built upon the official [GitHub Copilot SDK for Python](https://github.com/github/copilot-sdk), providing a native integration experience.

## 🚀 What's New (v0.1.1)

* **♾️ Infinite Sessions**: Automatic context compaction for long-running conversations. No more context limit errors!
* **🧠 Thinking Process**: Real-time display of model reasoning/thinking process (for supported models).
* **📂 Workspace Control**: Restricted workspace directory for secure file operations.
* **🔍 Model Filtering**: Exclude specific models using keywords (e.g., `codex`, `haiku`).
* **💾 Session Persistence**: Improved session resume logic using OpenWebUI chat ID mapping.

## ✨ Core Features

* **🚀 Official SDK Integration**: Built on the official SDK for stability and reliability.
* **💬 Multi-turn Conversation**: Automatically concatenates history context so Copilot understands your previous messages.
* **🌊 Streaming Output**: Supports typewriter effect for fast responses.
* **🖼️ Multimodal Support**: Supports image uploads, automatically converting them to attachments for Copilot (requires model support).
* **🛠️ Zero-config Installation**: Automatically detects and downloads the GitHub Copilot CLI, ready to use out of the box.
* **🔑 Secure Authentication**: Supports Fine-grained Personal Access Tokens for minimized permissions.
* **🐛 Debug Mode**: Built-in detailed log output for easy connection troubleshooting.

## 📦 Installation & Usage

### 1. Import Function

1. Open OpenWebUI.
2. Go to **Workspace** -> **Functions**.
3. Click **+** (Create Function).
4. Paste the content of `github_copilot_sdk.py` (or `github_copilot_sdk_cn.py` for Chinese) completely.
5. Save.

### 2. Configure Valves (Settings)

Find "GitHub Copilot" in the function list and click the **⚙️ (Valves)** icon to configure:

| Parameter | Description | Default |
| :--- | :--- | :--- |
| **GH_TOKEN** | **(Required)** Your GitHub Token. | - |
| **MODEL_ID** | The model name to use. Recommended `gpt-5-mini` or `gpt-5`. | `gpt-5-mini` |
| **CLI_PATH** | Path to the Copilot CLI. Will download automatically if not found. | `/usr/local/bin/copilot` |
| **DEBUG** | Whether to enable debug logs (output to chat). | `True` |
| **SHOW_THINKING** | Show model reasoning/thinking process. | `True` |
| **EXCLUDE_KEYWORDS** | Exclude models containing these keywords (comma separated). | - |
| **WORKSPACE_DIR** | Restricted workspace directory for file operations. | - |
| **INFINITE_SESSION** | Enable Infinite Sessions (automatic context compaction). | `True` |
| **COMPACTION_THRESHOLD** | Background compaction threshold (0.0-1.0). | `0.8` |
| **BUFFER_THRESHOLD** | Buffer exhaustion threshold (0.0-1.0). | `0.95` |
| **TIMEOUT** | Timeout for each stream chunk (seconds). | `300` |

### 3. Get GH_TOKEN

For security, it is recommended to use a **Fine-grained Personal Access Token**:

1. Visit [GitHub Token Settings](https://github.com/settings/tokens?type=beta).
2. Click **Generate new token**.
3. **Repository access**: Select `All repositories` or `Public Repositories`.
4. **Permissions**:
    * Click **Account permissions**.
    * Find **Copilot Requests**, select **Read and write** (or Access).
5. Generate and copy the Token.

## 📋 Dependencies

This Pipe will automatically attempt to install the following dependencies:

* `github-copilot-sdk` (Python package)
* `github-copilot-cli` (Binary file, installed via official script)

## ⚠️ FAQ

* **Stuck on "Waiting..."**:
    * Check if `GH_TOKEN` is correct and has `Copilot Requests` permission.
    * Try changing `MODEL_ID` to `gpt-4o` or `copilot-chat`.
* **Images not recognized**:
    * Ensure `MODEL_ID` is a model that supports multimodal input.
* **CLI Installation Failed**:
    * Ensure the OpenWebUI container has internet access.
    * You can manually download the CLI and specify `CLI_PATH` in Valves.
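The **EXCLUDE_KEYWORDS** valve documented above performs a simple case-insensitive substring match against the comma-separated list. A standalone sketch of that check (the helper name `is_excluded` is illustrative; it mirrors the filtering logic inside `github_copilot_sdk.py`):

```python
def is_excluded(model_id: str, exclude_keywords: str) -> bool:
    """True if the model ID contains any of the comma-separated keywords."""
    keywords = [k.strip().lower() for k in exclude_keywords.split(",") if k.strip()]
    return any(kw in model_id.lower() for kw in keywords)
```

For example, with `EXCLUDE_KEYWORDS` set to `codex, haiku`, the model `claude-haiku-3` is filtered out while `gpt-5-mini` is kept; an empty valve excludes nothing.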
81
plugins/pipes/github-copilot-sdk/README_CN.md
Normal file
@@ -0,0 +1,81 @@
# GitHub Copilot SDK 官方管道

**作者:** [Fu-Jie](https://github.com/Fu-Jie/awesome-openwebui) | **版本:** 0.1.1 | **项目:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **许可证:** MIT

这是一个用于 [OpenWebUI](https://github.com/open-webui/open-webui) 的高级 Pipe 函数,允许你直接在 OpenWebUI 中使用 GitHub Copilot 模型(如 `gpt-5`, `gpt-5-mini`, `claude-sonnet-4.5`)。它基于官方 [GitHub Copilot SDK for Python](https://github.com/github/copilot-sdk) 构建,提供了原生级的集成体验。

## 🚀 最新特性 (v0.1.1)

* **♾️ 无限会话 (Infinite Sessions)**:支持长对话的自动上下文压缩,告别上下文超限错误!
* **🧠 思考过程展示**:实时显示模型的推理/思考过程(需模型支持)。
* **📂 工作目录控制**:支持设置受限工作目录,确保文件操作安全。
* **🔍 模型过滤**:支持通过关键词排除特定模型(如 `codex`, `haiku`)。
* **💾 会话持久化**:改进的会话恢复逻辑,直接关联 OpenWebUI 聊天 ID,连接更稳定。

## ✨ 核心特性

* **🚀 官方 SDK 集成**:基于官方 SDK,稳定可靠。
* **💬 多轮对话支持**:自动拼接历史上下文,Copilot 能理解你的前文。
* **🌊 流式输出 (Streaming)**:支持打字机效果,响应迅速。
* **🖼️ 多模态支持**:支持上传图片,自动转换为附件发送给 Copilot(需模型支持)。
* **🛠️ 零配置安装**:自动检测并下载 GitHub Copilot CLI,开箱即用。
* **🔑 安全认证**:支持 Fine-grained Personal Access Tokens,权限最小化。
* **🐛 调试模式**:内置详细的日志输出,方便排查连接问题。

## 📦 安装与使用

### 1. 导入函数

1. 打开 OpenWebUI。
2. 进入 **Workspace** -> **Functions**。
3. 点击 **+** (创建函数)。
4. 将 `github_copilot_sdk_cn.py` 的内容完整粘贴进去。
5. 保存。

### 2. 配置 Valves (设置)

在函数列表中找到 "GitHub Copilot",点击 **⚙️ (Valves)** 图标进行配置:

| 参数 | 说明 | 默认值 |
| :--- | :--- | :--- |
| **GH_TOKEN** | **(必填)** 你的 GitHub Token。 | - |
| **MODEL_ID** | 使用的模型名称。 | `gpt-5-mini` |
| **CLI_PATH** | Copilot CLI 的路径。如果未找到会自动下载。 | `/usr/local/bin/copilot` |
| **DEBUG** | 是否开启调试日志(输出到对话框)。 | `True` |
| **SHOW_THINKING** | 是否显示模型推理/思考过程。 | `True` |
| **EXCLUDE_KEYWORDS** | 排除包含这些关键词的模型 (逗号分隔)。 | - |
| **WORKSPACE_DIR** | 文件操作的受限工作目录。 | - |
| **INFINITE_SESSION** | 启用无限会话 (自动上下文压缩)。 | `True` |
| **COMPACTION_THRESHOLD** | 后台压缩阈值 (0.0-1.0)。 | `0.8` |
| **BUFFER_THRESHOLD** | 缓冲耗尽阈值 (0.0-1.0)。 | `0.95` |
| **TIMEOUT** | 流式数据块超时时间 (秒)。 | `300` |

### 3. 获取 GH_TOKEN

为了安全起见,推荐使用 **Fine-grained Personal Access Token**:

1. 访问 [GitHub Token Settings](https://github.com/settings/tokens?type=beta)。
2. 点击 **Generate new token**。
3. **Repository access**: 选择 `All repositories` 或 `Public Repositories`。
4. **Permissions**:
    * 点击 **Account permissions**。
    * 找到 **Copilot Requests**,选择 **Read and write** (或 Access)。
5. 生成并复制 Token。

## 📋 依赖说明

该 Pipe 会自动尝试安装以下依赖(如果环境中缺失):

* `github-copilot-sdk` (Python 包)
* `github-copilot-cli` (二进制文件,通过官方脚本安装)

## ⚠️ 常见问题

* **一直显示 "Waiting..."**:
    * 检查 `GH_TOKEN` 是否正确且拥有 `Copilot Requests` 权限。
    * 尝试将 `MODEL_ID` 改为 `gpt-4o` 或 `copilot-chat`。
* **图片无法识别**:
    * 确保 `MODEL_ID` 是支持多模态的模型。
* **CLI 安装失败**:
    * 确保 OpenWebUI 容器有外网访问权限。
    * 你可以手动下载 CLI 并挂载到容器中,然后在 Valves 中指定 `CLI_PATH`。
BIN
plugins/pipes/github-copilot-sdk/github_copilot_sdk.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 474 KiB
689
plugins/pipes/github-copilot-sdk/github_copilot_sdk.py
Normal file
@@ -0,0 +1,689 @@
"""
title: GitHub Copilot Official SDK Pipe (Dynamic Models)
author: Fu-Jie
author_url: https://github.com/Fu-Jie/awesome-openwebui
funding_url: https://github.com/open-webui
description: Integrate GitHub Copilot SDK. Supports dynamic models, multi-turn conversation, streaming, multimodal input, and infinite sessions (context compaction).
version: 0.1.1
requirements: github-copilot-sdk
"""

import os
import time
import json
import base64
import tempfile
import asyncio
import logging
import shutil
import subprocess
import sys
from typing import Optional, Union, AsyncGenerator, List, Any, Dict
from pydantic import BaseModel, Field
from datetime import datetime, timezone
import contextlib

# Setup logger
logger = logging.getLogger(__name__)

# Global client storage
_SHARED_CLIENT = None
_SHARED_TOKEN = ""
_CLIENT_LOCK = asyncio.Lock()


class Pipe:
    class Valves(BaseModel):
        GH_TOKEN: str = Field(
            default="",
            description="GitHub Fine-grained Token (Requires 'Copilot Requests' permission)",
        )
        MODEL_ID: str = Field(
            default="claude-sonnet-4.5",
            description="Default Copilot model name (used when dynamic fetching fails)",
        )
        CLI_PATH: str = Field(
            default="/usr/local/bin/copilot",
            description="Path to Copilot CLI",
        )
        DEBUG: bool = Field(
            default=False,
            description="Enable technical debug logs (connection info, etc.)",
        )
        SHOW_THINKING: bool = Field(
            default=True,
            description="Show model reasoning/thinking process",
        )
        EXCLUDE_KEYWORDS: str = Field(
            default="",
            description="Exclude models containing these keywords (comma separated, e.g.: codex, haiku)",
        )
        WORKSPACE_DIR: str = Field(
            default="",
            description="Restricted workspace directory for file operations. If empty, allows access to the current process directory.",
        )
        INFINITE_SESSION: bool = Field(
            default=True,
            description="Enable Infinite Sessions (automatic context compaction)",
        )
        COMPACTION_THRESHOLD: float = Field(
            default=0.8,
            description="Background compaction threshold (0.0-1.0)",
        )
        BUFFER_THRESHOLD: float = Field(
            default=0.95,
            description="Buffer exhaustion threshold (0.0-1.0)",
        )
        TIMEOUT: int = Field(
            default=300,
            description="Timeout for each stream chunk (seconds)",
        )

    def __init__(self):
        self.type = "pipe"
        self.id = "copilotsdk"
        self.name = "copilotsdk"
        self.valves = self.Valves()
        self.temp_dir = tempfile.mkdtemp(prefix="copilot_images_")
        self.thinking_started = False
        self._model_cache = []  # Model list cache

    def __del__(self):
        try:
            shutil.rmtree(self.temp_dir)
        except:
            pass

    def _emit_debug_log(self, message: str):
        """Emit debug log to frontend if DEBUG valve is enabled."""
        if self.valves.DEBUG:
            print(f"[Copilot Pipe] {message}")

    def _get_user_context(self):
        """Helper to get user context (placeholder for future use)."""
        return {}

    def _get_chat_context(
        self, body: dict, __metadata__: Optional[dict] = None
    ) -> Dict[str, str]:
        """
        Highly reliable chat context extraction logic.
        Priority: __metadata__ > body['chat_id'] > body['metadata']['chat_id']
        """
        chat_id = ""
        source = "none"

        # 1. Prioritize __metadata__ (most reliable source injected by OpenWebUI)
        if __metadata__ and isinstance(__metadata__, dict):
            chat_id = __metadata__.get("chat_id", "")
            if chat_id:
                source = "__metadata__"

        # 2. Then try body root
        if not chat_id and isinstance(body, dict):
            chat_id = body.get("chat_id", "")
            if chat_id:
                source = "body_root"

        # 3. Finally try body.metadata
        if not chat_id and isinstance(body, dict):
            body_metadata = body.get("metadata", {})
            if isinstance(body_metadata, dict):
                chat_id = body_metadata.get("chat_id", "")
                if chat_id:
                    source = "body_metadata"

        # Debug: Log ID source
        if chat_id:
            self._emit_debug_log(f"Extracted ChatID: {chat_id} (Source: {source})")
        else:
            # If still not found, log body keys for troubleshooting
            keys = list(body.keys()) if isinstance(body, dict) else "not a dict"
            self._emit_debug_log(
                f"Warning: Failed to extract ChatID. Body keys: {keys}"
            )

        return {
            "chat_id": str(chat_id).strip(),
        }

    async def pipes(self) -> List[dict]:
        """Dynamically fetch model list"""
        # Return cache if available
        if self._model_cache:
            return self._model_cache

        self._emit_debug_log("Fetching model list dynamically...")
        try:
            self._setup_env()
            if not self.valves.GH_TOKEN:
                return [{"id": f"{self.id}-error", "name": "Error: GH_TOKEN not set"}]

            from copilot import CopilotClient

            client_config = {}
            if os.environ.get("COPILOT_CLI_PATH"):
                client_config["cli_path"] = os.environ["COPILOT_CLI_PATH"]

            client = CopilotClient(client_config)
            try:
                await client.start()
                models = await client.list_models()

                # Update cache
                self._model_cache = []
                exclude_list = [
                    k.strip().lower()
                    for k in self.valves.EXCLUDE_KEYWORDS.split(",")
                    if k.strip()
                ]

                models_with_info = []
                for m in models:
                    # Compatible with dict and object access
                    m_id = (
                        m.get("id") if isinstance(m, dict) else getattr(m, "id", str(m))
                    )
                    m_name = (
                        m.get("name")
                        if isinstance(m, dict)
                        else getattr(m, "name", m_id)
                    )
                    m_policy = (
                        m.get("policy")
                        if isinstance(m, dict)
                        else getattr(m, "policy", {})
                    )
                    m_billing = (
                        m.get("billing")
                        if isinstance(m, dict)
                        else getattr(m, "billing", {})
                    )

                    # Check policy state
                    state = (
                        m_policy.get("state")
                        if isinstance(m_policy, dict)
                        else getattr(m_policy, "state", "enabled")
                    )
                    if state == "disabled":
                        continue

                    # Filtering logic
                    if any(kw in m_id.lower() for kw in exclude_list):
                        continue

                    # Get multiplier
                    multiplier = (
                        m_billing.get("multiplier", 1)
                        if isinstance(m_billing, dict)
                        else getattr(m_billing, "multiplier", 1)
                    )

                    # Format display name
                    if multiplier == 0:
                        display_name = f"-🔥 {m_id} (unlimited)"
                    else:
                        display_name = f"-{m_id} ({multiplier}x)"

                    models_with_info.append(
                        {
                            "id": f"{self.id}-{m_id}",
                            "name": display_name,
                            "multiplier": multiplier,
                            "raw_id": m_id,
                        }
                    )

                # Sort: multiplier ascending, then raw_id ascending
                models_with_info.sort(key=lambda x: (x["multiplier"], x["raw_id"]))
                self._model_cache = [
                    {"id": m["id"], "name": m["name"]} for m in models_with_info
                ]

                self._emit_debug_log(
                    f"Successfully fetched {len(self._model_cache)} models (filtered)"
                )
                return self._model_cache
            except Exception as e:
                self._emit_debug_log(f"Failed to fetch model list: {e}")
                # Return default model on failure
                return [
                    {
                        "id": f"{self.id}-{self.valves.MODEL_ID}",
                        "name": f"GitHub Copilot ({self.valves.MODEL_ID})",
                    }
                ]
            finally:
                await client.stop()
        except Exception as e:
            self._emit_debug_log(f"Pipes Error: {e}")
            return [
                {
                    "id": f"{self.id}-{self.valves.MODEL_ID}",
                    "name": f"GitHub Copilot ({self.valves.MODEL_ID})",
                }
            ]

    async def _get_client(self):
        """Helper to get or create a CopilotClient instance."""
        from copilot import CopilotClient

        client_config = {}
        if os.environ.get("COPILOT_CLI_PATH"):
            client_config["cli_path"] = os.environ["COPILOT_CLI_PATH"]

        client = CopilotClient(client_config)
        await client.start()
        return client

    def _setup_env(self):
        cli_path = self.valves.CLI_PATH
        found = False

        if os.path.exists(cli_path):
            found = True

        if not found:
            sys_path = shutil.which("copilot")
            if sys_path:
                cli_path = sys_path
                found = True

        if not found:
            try:
                subprocess.run(
                    "curl -fsSL https://gh.io/copilot-install | bash",
                    shell=True,
                    check=True,
                )
                if os.path.exists(self.valves.CLI_PATH):
                    cli_path = self.valves.CLI_PATH
                    found = True
            except:
                pass

        if found:
            os.environ["COPILOT_CLI_PATH"] = cli_path
            cli_dir = os.path.dirname(cli_path)
            if cli_dir not in os.environ["PATH"]:
                os.environ["PATH"] = f"{cli_dir}:{os.environ['PATH']}"

        if self.valves.GH_TOKEN:
            os.environ["GH_TOKEN"] = self.valves.GH_TOKEN
            os.environ["GITHUB_TOKEN"] = self.valves.GH_TOKEN

    def _process_images(self, messages):
        attachments = []
        text_content = ""
        if not messages:
            return "", []
        last_msg = messages[-1]
        content = last_msg.get("content", "")

        if isinstance(content, list):
            for item in content:
                if item.get("type") == "text":
                    text_content += item.get("text", "")
                elif item.get("type") == "image_url":
                    image_url = item.get("image_url", {}).get("url", "")
                    if image_url.startswith("data:image"):
                        try:
                            header, encoded = image_url.split(",", 1)
                            ext = header.split(";")[0].split("/")[-1]
                            file_name = f"image_{len(attachments)}.{ext}"
                            file_path = os.path.join(self.temp_dir, file_name)
                            with open(file_path, "wb") as f:
                                f.write(base64.b64decode(encoded))
                            attachments.append(
                                {
                                    "type": "file",
                                    "path": file_path,
                                    "display_name": file_name,
                                }
                            )
                            self._emit_debug_log(f"Image processed: {file_path}")
                        except Exception as e:
                            self._emit_debug_log(f"Image error: {e}")
        else:
            text_content = str(content)
        return text_content, attachments

    async def pipe(
        self, body: dict, __metadata__: Optional[dict] = None
    ) -> Union[str, AsyncGenerator]:
        self._setup_env()
        if not self.valves.GH_TOKEN:
            return "Error: Please configure GH_TOKEN in Valves."

        # Parse user selected model
        request_model = body.get("model", "")
        real_model_id = self.valves.MODEL_ID  # Default value

        if request_model.startswith(f"{self.id}-"):
            real_model_id = request_model[len(f"{self.id}-") :]
            self._emit_debug_log(f"Using selected model: {real_model_id}")

        messages = body.get("messages", [])
        if not messages:
            return "No messages."

        # Get Chat ID using improved helper
        chat_ctx = self._get_chat_context(body, __metadata__)
        chat_id = chat_ctx.get("chat_id")

        is_streaming = body.get("stream", False)
        self._emit_debug_log(f"Request Streaming: {is_streaming}")

        last_text, attachments = self._process_images(messages)

        # Determine prompt strategy
        # If we have a chat_id, we try to resume session.
        # If resumed, we assume the session has history, so we only send the last message.
        # If new session, we send full history (or at least the last few turns if we want to be safe, but let's send full for now).

        # However, to be robust against history edits in OpenWebUI, we might want to always send full history?
        # Copilot SDK `create_session` doesn't take history. `session.send` appends.
        # If we resume, we append.
        # If user edited history, the session state is stale.
        # For now, we implement "Resume if possible, else Create".

        prompt = ""
        is_new_session = True

        try:
            client = await self._get_client()
            session = None

            if chat_id:
                try:
                    # Try to resume session using chat_id as session_id
                    session = await client.resume_session(chat_id)
                    self._emit_debug_log(f"Resumed session using ChatID: {chat_id}")
                    is_new_session = False
                except Exception:
                    # Resume failed, session might not exist on disk
                    self._emit_debug_log(
                        f"Session {chat_id} not found or expired, creating new."
                    )
                    session = None

            if session is None:
                # Create new session
                from copilot.types import SessionConfig, InfiniteSessionConfig

                # Infinite Session Config
                infinite_session_config = None
                if self.valves.INFINITE_SESSION:
                    infinite_session_config = InfiniteSessionConfig(
                        enabled=True,
                        background_compaction_threshold=self.valves.COMPACTION_THRESHOLD,
                        buffer_exhaustion_threshold=self.valves.BUFFER_THRESHOLD,
                    )

                session_config = SessionConfig(
                    session_id=(
                        chat_id if chat_id else None
                    ),  # Use chat_id as session_id
                    model=real_model_id,
                    streaming=body.get("stream", False),
                    infinite_sessions=infinite_session_config,
                )

                session = await client.create_session(config=session_config)

                new_sid = getattr(session, "session_id", getattr(session, "id", None))
                self._emit_debug_log(f"Created new session: {new_sid}")

            # Construct prompt
            if is_new_session:
                # For new session, send full conversation history
                full_conversation = []
                for msg in messages[:-1]:
                    role = msg.get("role", "user").upper()
                    content = msg.get("content", "")
                    if isinstance(content, list):
                        content = " ".join(
                            [
                                c.get("text", "")
                                for c in content
                                if c.get("type") == "text"
                            ]
                        )
                    full_conversation.append(f"{role}: {content}")
                full_conversation.append(f"User: {last_text}")
                prompt = "\n\n".join(full_conversation)
            else:
                # For resumed session, only send the last message
                prompt = last_text

            send_payload = {"prompt": prompt, "mode": "immediate"}
            if attachments:
                send_payload["attachments"] = attachments

            if body.get("stream", False):
                # Determine session status message for UI
                init_msg = ""
                if self.valves.DEBUG:
                    if is_new_session:
                        new_sid = getattr(
                            session, "session_id", getattr(session, "id", "unknown")
                        )
                        init_msg = f"> [Debug] Created new session: {new_sid}\n"
                    else:
                        init_msg = (
                            f"> [Debug] Resumed session using ChatID: {chat_id}\n"
                        )

                return self.stream_response(client, session, send_payload, init_msg)
            else:
                try:
                    response = await session.send_and_wait(send_payload)
                    return response.data.content if response else "Empty response."
                finally:
                    # Destroy session object to free memory, but KEEP data on disk
                    await session.destroy()

        except Exception as e:
            self._emit_debug_log(f"Request Error: {e}")
            return f"Error: {str(e)}"

async def stream_response(
|
||||
self, client, session, send_payload, init_message: str = ""
|
||||
) -> AsyncGenerator:
|
||||
queue = asyncio.Queue()
|
||||
done = asyncio.Event()
|
||||
self.thinking_started = False
|
||||
has_content = False # Track if any content has been yielded
|
||||
|
||||
def get_event_data(event, attr, default=""):
|
||||
if hasattr(event, "data"):
|
||||
data = event.data
|
||||
if data is None:
|
||||
return default
|
||||
if isinstance(data, (str, int, float, bool)):
|
||||
return str(data) if attr == "value" else default
|
||||
|
||||
if isinstance(data, dict):
|
||||
val = data.get(attr)
|
||||
if val is None:
|
||||
alt_attr = attr.replace("_", "") if "_" in attr else attr
|
||||
val = data.get(alt_attr)
|
||||
if val is None and "_" not in attr:
|
||||
# Try snake_case if camelCase failed
|
||||
import re
|
||||
|
||||
snake_attr = re.sub(r"(?<!^)(?=[A-Z])", "_", attr).lower()
|
||||
val = data.get(snake_attr)
|
||||
else:
|
||||
val = getattr(data, attr, None)
|
||||
if val is None:
|
||||
alt_attr = attr.replace("_", "") if "_" in attr else attr
|
||||
val = getattr(data, alt_attr, None)
|
||||
if val is None and "_" not in attr:
|
||||
import re
|
||||
|
||||
snake_attr = re.sub(r"(?<!^)(?=[A-Z])", "_", attr).lower()
|
||||
val = getattr(data, snake_attr, None)
|
||||
|
||||
return val if val is not None else default
|
||||
return default
|
||||
|
||||
def handler(event):
|
||||
event_type = (
|
||||
getattr(event.type, "value", None)
|
||||
if hasattr(event, "type")
|
||||
else str(event.type)
|
||||
)
|
||||
|
||||
# Log full event data for tool events to help debugging
|
||||
if "tool" in event_type:
|
||||
try:
|
||||
data_str = str(event.data) if hasattr(event, "data") else "no data"
|
||||
self._emit_debug_log(f"Tool Event [{event_type}]: {data_str}")
|
||||
except:
|
||||
pass
|
||||
|
||||
self._emit_debug_log(f"Event: {event_type}")
|
||||
|
||||
# Handle message content (delta or full)
|
||||
if event_type in [
|
||||
"assistant.message_delta",
|
||||
"assistant.message.delta",
|
||||
"assistant.message",
|
||||
]:
|
||||
# Log full message event for troubleshooting why there's no delta
|
||||
if event_type == "assistant.message":
|
||||
self._emit_debug_log(
|
||||
f"Received full message event (non-Delta): {get_event_data(event, 'content')[:50]}..."
|
||||
)
|
||||
|
||||
delta = (
|
||||
get_event_data(event, "delta_content")
|
||||
or get_event_data(event, "deltaContent")
|
||||
or get_event_data(event, "content")
|
||||
or get_event_data(event, "text")
|
||||
)
|
||||
if delta:
|
||||
if self.thinking_started:
|
||||
queue.put_nowait("\n</think>\n")
|
||||
self.thinking_started = False
|
||||
queue.put_nowait(delta)
|
||||
|
||||
elif event_type in [
|
||||
"assistant.reasoning_delta",
|
||||
"assistant.reasoning.delta",
|
||||
"assistant.reasoning",
|
||||
]:
|
||||
delta = (
|
||||
get_event_data(event, "delta_content")
|
||||
or get_event_data(event, "deltaContent")
|
||||
or get_event_data(event, "content")
|
||||
or get_event_data(event, "text")
|
||||
)
|
||||
if delta:
|
||||
if not self.thinking_started and self.valves.SHOW_THINKING:
|
||||
queue.put_nowait("<think>\n")
|
||||
                        self.thinking_started = True
                    if self.thinking_started:
                        queue.put_nowait(delta)

            elif event_type == "tool.execution_start":
                # Try multiple possible fields for tool name/description
                tool_name = (
                    get_event_data(event, "toolName")
                    or get_event_data(event, "name")
                    or get_event_data(event, "description")
                    or get_event_data(event, "tool_name")
                    or "Unknown Tool"
                )
                if not self.thinking_started and self.valves.SHOW_THINKING:
                    queue.put_nowait("<think>\n")
                    self.thinking_started = True
                if self.thinking_started:
                    queue.put_nowait(f"\nRunning Tool: {tool_name}...\n")
                self._emit_debug_log(f"Tool Start: {tool_name}")

            elif event_type == "tool.execution_complete":
                if self.thinking_started:
                    queue.put_nowait("Tool Completed.\n")
                self._emit_debug_log("Tool Complete")

            elif event_type == "session.compaction_start":
                self._emit_debug_log("Session Compaction Started")

            elif event_type == "session.compaction_complete":
                self._emit_debug_log("Session Compaction Completed")

            elif event_type == "session.idle":
                done.set()
            elif event_type == "session.error":
                msg = get_event_data(event, "message", "Unknown Error")
                queue.put_nowait(f"\n[Error: {msg}]")
                done.set()

        unsubscribe = session.on(handler)
        await session.send(send_payload)

        if self.valves.DEBUG:
            yield "<think>\n"
            if init_message:
                yield init_message
            yield "> [Debug] Connection established, waiting for response...\n"
            self.thinking_started = True

        try:
            while not done.is_set():
                try:
                    chunk = await asyncio.wait_for(
                        queue.get(), timeout=float(self.valves.TIMEOUT)
                    )
                    if chunk:
                        has_content = True
                        yield chunk
                except asyncio.TimeoutError:
                    if done.is_set():
                        break
                    if self.thinking_started:
                        yield f"> [Debug] Waiting for response ({self.valves.TIMEOUT}s exceeded)...\n"
                    continue

            while not queue.empty():
                chunk = queue.get_nowait()
                if chunk:
                    has_content = True
                    yield chunk

            if self.thinking_started:
                yield "\n</think>\n"
                has_content = True

            # Core fix: if no content was yielded, return a fallback message to prevent an OpenWebUI error
            if not has_content:
                yield "⚠️ Copilot returned no content. Please check if the Model ID is correct or enable DEBUG mode in Valves for details."

        except Exception as e:
            yield f"\n[Stream Error: {str(e)}]"
        finally:
            unsubscribe()
            # Session cleanup: if the session was cached in _SESSIONS (keyed by
            # chat_id) it must be kept alive for reuse, but chat_id is not
            # available inside this generator, and CopilotSession does not
            # auto-close, so uncached sessions currently stay open until GC.
            # TODO: pass an "owns_session" flag into stream_response so the
            # generator can destroy uncached sessions once the stream ends.
            pass
BIN
plugins/pipes/github-copilot-sdk/github_copilot_sdk_cn.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 474 KiB

757
plugins/pipes/github-copilot-sdk/github_copilot_sdk_cn.py
Normal file
@@ -0,0 +1,757 @@
"""
|
||||
title: GitHub Copilot 官方 SDK 管道 (动态模型版)
|
||||
author: Fu-Jie
|
||||
author_url: https://github.com/Fu-Jie/awesome-openwebui
|
||||
funding_url: https://github.com/open-webui
|
||||
description: 集成 GitHub Copilot SDK。支持动态模型、多轮对话、流式输出、多模态输入及无限会话(上下文自动压缩)。
|
||||
version: 0.1.1
|
||||
requirements: github-copilot-sdk
|
||||
"""
|
||||
|
||||
import os
|
||||
import time
|
||||
import json
|
||||
import base64
|
||||
import tempfile
|
||||
import asyncio
|
||||
import logging
|
||||
import shutil
|
||||
import subprocess
|
||||
import sys
|
||||
from typing import Optional, Union, AsyncGenerator, List, Any, Dict
|
||||
from pydantic import BaseModel, Field
|
||||
from datetime import datetime, timezone
|
||||
import contextlib
|
||||
|
||||
# Setup logger
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Open WebUI internal database (re-use shared connection)
|
||||
try:
|
||||
from open_webui.internal import db as owui_db
|
||||
except ModuleNotFoundError:
|
||||
owui_db = None
|
||||
|
||||
|
||||
def _discover_owui_engine(db_module: Any) -> Optional[Engine]:
    """Discover the Open WebUI SQLAlchemy engine via provided db module helpers."""
    if db_module is None:
        return None

    db_context = getattr(db_module, "get_db_context", None) or getattr(
        db_module, "get_db", None
    )
    if callable(db_context):
        try:
            with db_context() as session:
                try:
                    return session.get_bind()
                except AttributeError:
                    return getattr(session, "bind", None) or getattr(
                        session, "engine", None
                    )
        except Exception as exc:
            logger.error(f"[DB Discover] get_db_context failed: {exc}")

    for attr in ("engine", "ENGINE", "bind", "BIND"):
        candidate = getattr(db_module, attr, None)
        if candidate is not None:
            return candidate

    return None


def _discover_owui_schema(db_module: Any) -> Optional[str]:
    """Discover the Open WebUI database schema name if configured."""
    if db_module is None:
        return None

    try:
        base = getattr(db_module, "Base", None)
        metadata = getattr(base, "metadata", None) if base is not None else None
        candidate = getattr(metadata, "schema", None) if metadata is not None else None
        if isinstance(candidate, str) and candidate.strip():
            return candidate.strip()
    except Exception as exc:
        logger.error(f"[DB Discover] Base metadata schema lookup failed: {exc}")

    try:
        metadata_obj = getattr(db_module, "metadata_obj", None)
        candidate = (
            getattr(metadata_obj, "schema", None) if metadata_obj is not None else None
        )
        if isinstance(candidate, str) and candidate.strip():
            return candidate.strip()
    except Exception as exc:
        logger.error(f"[DB Discover] metadata_obj schema lookup failed: {exc}")

    try:
        from open_webui import env as owui_env

        candidate = getattr(owui_env, "DATABASE_SCHEMA", None)
        if isinstance(candidate, str) and candidate.strip():
            return candidate.strip()
    except Exception as exc:
        logger.error(f"[DB Discover] env schema lookup failed: {exc}")

    return None


owui_engine = _discover_owui_engine(owui_db)
owui_schema = _discover_owui_schema(owui_db)
owui_Base = getattr(owui_db, "Base", None) if owui_db is not None else None
if owui_Base is None:
    owui_Base = declarative_base()
class CopilotSessionMap(owui_Base):
    """Copilot Session Mapping Table"""

    __tablename__ = "copilot_session_map"
    __table_args__ = (
        {"extend_existing": True, "schema": owui_schema}
        if owui_schema
        else {"extend_existing": True}
    )

    id = Column(Integer, primary_key=True, autoincrement=True)
    chat_id = Column(String(255), unique=True, nullable=False, index=True)
    copilot_session_id = Column(String(255), nullable=False)
    updated_at = Column(
        DateTime,
        default=lambda: datetime.now(timezone.utc),
        onupdate=lambda: datetime.now(timezone.utc),
    )


# Global client storage
_SHARED_CLIENT = None
_SHARED_TOKEN = ""
_CLIENT_LOCK = asyncio.Lock()
class Pipe:
    class Valves(BaseModel):
        GH_TOKEN: str = Field(
            default="",
            description="GitHub fine-grained token (requires the 'Copilot Requests' permission)",
        )
        MODEL_ID: str = Field(
            default="claude-sonnet-4.5",
            description="Default Copilot model name (used when the list cannot be fetched dynamically)",
        )
        CLI_PATH: str = Field(
            default="/usr/local/bin/copilot",
            description="Copilot CLI path",
        )
        DEBUG: bool = Field(
            default=False,
            description="Enable technical debug logging (connection info, etc.)",
        )
        SHOW_THINKING: bool = Field(
            default=True,
            description="Show the model's reasoning/thinking process",
        )
        EXCLUDE_KEYWORDS: str = Field(
            default="",
            description="Exclude models containing these keywords (comma-separated, e.g.: codex, haiku)",
        )
        WORKSPACE_DIR: str = Field(
            default="",
            description="Restricted working directory for file operations. If empty, the current process directory is allowed.",
        )
        INFINITE_SESSION: bool = Field(
            default=True,
            description="Enable infinite sessions (automatic context compaction)",
        )
        COMPACTION_THRESHOLD: float = Field(
            default=0.8,
            description="Background compaction threshold (0.0-1.0)",
        )
        BUFFER_THRESHOLD: float = Field(
            default=0.95,
            description="Buffer exhaustion threshold for background compaction (0.0-1.0)",
        )
        TIMEOUT: int = Field(
            default=300,
            description="Streaming chunk timeout (seconds)",
        )

    def __init__(self):
        self.type = "pipe"
        self.name = "copilotsdk"
        self.valves = self.Valves()
        self.temp_dir = tempfile.mkdtemp(prefix="copilot_images_")
        self.thinking_started = False
        self._model_cache = []  # Cached model list

    def __del__(self):
        try:
            shutil.rmtree(self.temp_dir)
        except Exception:
            pass

    def _emit_debug_log(self, message: str):
        """Emit debug log to frontend if DEBUG valve is enabled."""
        if self.valves.DEBUG:
            print(f"[Copilot Pipe] {message}")

    def _get_user_context(self):
        """Helper to get user context (placeholder for future use)."""
        return {}
    def _get_chat_context(
        self, body: dict, __metadata__: Optional[dict] = None
    ) -> Dict[str, str]:
        """
        Highly reliable chat-context extraction.
        Priority: __metadata__ > body['chat_id'] > body['metadata']['chat_id']
        """
        chat_id = ""
        source = "none"

        # 1. Prefer __metadata__ (the most reliable source, injected by OpenWebUI)
        if __metadata__ and isinstance(__metadata__, dict):
            chat_id = __metadata__.get("chat_id", "")
            if chat_id:
                source = "__metadata__"

        # 2. Then the top level of body
        if not chat_id and isinstance(body, dict):
            chat_id = body.get("chat_id", "")
            if chat_id:
                source = "body_root"

        # 3. Finally body.metadata
        if not chat_id and isinstance(body, dict):
            body_metadata = body.get("metadata", {})
            if isinstance(body_metadata, dict):
                chat_id = body_metadata.get("chat_id", "")
                if chat_id:
                    source = "body_metadata"

        # Debug: record where the ID came from
        if chat_id:
            self._emit_debug_log(f"Extracted ChatID: {chat_id} (source: {source})")
        else:
            # Still not found; log the body keys to aid troubleshooting
            keys = list(body.keys()) if isinstance(body, dict) else "not a dict"
            self._emit_debug_log(f"Warning: could not extract ChatID. Body keys: {keys}")

        return {
            "chat_id": str(chat_id).strip(),
        }
    async def pipes(self) -> List[dict]:
        """Fetch the model list dynamically."""
        # Return the cache directly if present
        if self._model_cache:
            return self._model_cache

        self._emit_debug_log("Fetching model list dynamically...")
        try:
            self._setup_env()
            if not self.valves.GH_TOKEN:
                return [{"id": f"{self.id}-error", "name": "Error: GH_TOKEN not set"}]

            from copilot import CopilotClient

            client_config = {}
            if os.environ.get("COPILOT_CLI_PATH"):
                client_config["cli_path"] = os.environ["COPILOT_CLI_PATH"]

            client = CopilotClient(client_config)
            try:
                await client.start()
                models = await client.list_models()

                # Refresh the cache
                self._model_cache = []
                exclude_list = [
                    k.strip().lower()
                    for k in self.valves.EXCLUDE_KEYWORDS.split(",")
                    if k.strip()
                ]

                models_with_info = []
                for m in models:
                    # Support both dict and object access
                    m_id = (
                        m.get("id") if isinstance(m, dict) else getattr(m, "id", str(m))
                    )
                    m_name = (
                        m.get("name")
                        if isinstance(m, dict)
                        else getattr(m, "name", m_id)
                    )
                    m_policy = (
                        m.get("policy")
                        if isinstance(m, dict)
                        else getattr(m, "policy", {})
                    )
                    m_billing = (
                        m.get("billing")
                        if isinstance(m, dict)
                        else getattr(m, "billing", {})
                    )

                    # Check policy state
                    state = (
                        m_policy.get("state")
                        if isinstance(m_policy, dict)
                        else getattr(m_policy, "state", "enabled")
                    )
                    if state == "disabled":
                        continue

                    # Keyword filtering
                    if any(kw in m_id.lower() for kw in exclude_list):
                        continue

                    # Billing multiplier
                    multiplier = (
                        m_billing.get("multiplier", 1)
                        if isinstance(m_billing, dict)
                        else getattr(m_billing, "multiplier", 1)
                    )

                    # Format the display name
                    if multiplier == 0:
                        display_name = f"-🔥 {m_id} (unlimited)"
                    else:
                        display_name = f"-{m_id} ({multiplier}x)"

                    models_with_info.append(
                        {
                            "id": f"{self.id}-{m_id}",
                            "name": display_name,
                            "multiplier": multiplier,
                            "raw_id": m_id,
                        }
                    )

                # Sort: multiplier ascending, then raw ID ascending
                models_with_info.sort(key=lambda x: (x["multiplier"], x["raw_id"]))
                self._model_cache = [
                    {"id": m["id"], "name": m["name"]} for m in models_with_info
                ]

                self._emit_debug_log(
                    f"Fetched {len(self._model_cache)} models (filtered)"
                )
                return self._model_cache
            except Exception as e:
                self._emit_debug_log(f"Failed to fetch model list: {e}")
                # Fall back to the default model on failure
                return [
                    {
                        "id": f"{self.id}-{self.valves.MODEL_ID}",
                        "name": f"GitHub Copilot ({self.valves.MODEL_ID})",
                    }
                ]
            finally:
                await client.stop()
        except Exception as e:
            self._emit_debug_log(f"Pipes Error: {e}")
            return [
                {
                    "id": f"{self.id}-{self.valves.MODEL_ID}",
                    "name": f"GitHub Copilot ({self.valves.MODEL_ID})",
                }
            ]
    async def _get_client(self):
        """Helper to get or create a CopilotClient instance."""
        from copilot import CopilotClient

        client_config = {}
        if os.environ.get("COPILOT_CLI_PATH"):
            client_config["cli_path"] = os.environ["COPILOT_CLI_PATH"]

        client = CopilotClient(client_config)
        await client.start()
        return client

    def _setup_env(self):
        cli_path = self.valves.CLI_PATH
        found = False

        if os.path.exists(cli_path):
            found = True

        if not found:
            sys_path = shutil.which("copilot")
            if sys_path:
                cli_path = sys_path
                found = True

        if not found:
            try:
                subprocess.run(
                    "curl -fsSL https://gh.io/copilot-install | bash",
                    shell=True,
                    check=True,
                )
                if os.path.exists(self.valves.CLI_PATH):
                    cli_path = self.valves.CLI_PATH
                    found = True
            except Exception:
                pass

        if found:
            os.environ["COPILOT_CLI_PATH"] = cli_path
            cli_dir = os.path.dirname(cli_path)
            if cli_dir not in os.environ["PATH"]:
                os.environ["PATH"] = f"{cli_dir}:{os.environ['PATH']}"

        if self.valves.GH_TOKEN:
            os.environ["GH_TOKEN"] = self.valves.GH_TOKEN
            os.environ["GITHUB_TOKEN"] = self.valves.GH_TOKEN

    def _process_images(self, messages):
        attachments = []
        text_content = ""
        if not messages:
            return "", []
        last_msg = messages[-1]
        content = last_msg.get("content", "")

        if isinstance(content, list):
            for item in content:
                if item.get("type") == "text":
                    text_content += item.get("text", "")
                elif item.get("type") == "image_url":
                    image_url = item.get("image_url", {}).get("url", "")
                    if image_url.startswith("data:image"):
                        try:
                            header, encoded = image_url.split(",", 1)
                            ext = header.split(";")[0].split("/")[-1]
                            file_name = f"image_{len(attachments)}.{ext}"
                            file_path = os.path.join(self.temp_dir, file_name)
                            with open(file_path, "wb") as f:
                                f.write(base64.b64decode(encoded))
                            attachments.append(
                                {
                                    "type": "file",
                                    "path": file_path,
                                    "display_name": file_name,
                                }
                            )
                            self._emit_debug_log(f"Image processed: {file_path}")
                        except Exception as e:
                            self._emit_debug_log(f"Image error: {e}")
        else:
            text_content = str(content)
        return text_content, attachments
    async def pipe(
        self, body: dict, __metadata__: Optional[dict] = None
    ) -> Union[str, AsyncGenerator]:
        self._setup_env()
        if not self.valves.GH_TOKEN:
            return "Error: please configure GH_TOKEN in Valves."

        # Resolve the model the user selected
        request_model = body.get("model", "")
        real_model_id = self.valves.MODEL_ID  # default

        if request_model.startswith(f"{self.id}-"):
            real_model_id = request_model[len(f"{self.id}-") :]
            self._emit_debug_log(f"Using selected model: {real_model_id}")

        messages = body.get("messages", [])
        if not messages:
            return "No messages."

        # Use the improved helper to obtain the Chat ID
        chat_ctx = self._get_chat_context(body, __metadata__)
        chat_id = chat_ctx.get("chat_id")

        is_streaming = body.get("stream", False)
        self._emit_debug_log(f"Streaming requested: {is_streaming}")

        last_text, attachments = self._process_images(messages)

        # Prompt strategy:
        # With a chat_id, try to resume the session.
        # If resumption succeeds, the session already has history, so send only the last message.
        # For a new session, send the full history.

        prompt = ""
        is_new_session = True

        try:
            client = await self._get_client()
            session = None

            if chat_id:
                try:
                    # Try to resume directly, using chat_id as the session_id
                    session = await client.resume_session(chat_id)
                    self._emit_debug_log(f"Resumed session via ChatID: {chat_id}")
                    is_new_session = False
                except Exception:
                    # Resume failed; the session may not exist on disk
                    self._emit_debug_log(
                        f"Session {chat_id} does not exist or has expired; creating a new one."
                    )
                    session = None

            if session is None:
                # Create a new session
                from copilot.types import SessionConfig, InfiniteSessionConfig

                # Infinite-session configuration
                infinite_session_config = None
                if self.valves.INFINITE_SESSION:
                    infinite_session_config = InfiniteSessionConfig(
                        enabled=True,
                        background_compaction_threshold=self.valves.COMPACTION_THRESHOLD,
                        buffer_exhaustion_threshold=self.valves.BUFFER_THRESHOLD,
                    )

                session_config = SessionConfig(
                    session_id=(
                        chat_id if chat_id else None
                    ),  # use chat_id as the session_id
                    model=real_model_id,
                    streaming=body.get("stream", False),
                    infinite_sessions=infinite_session_config,
                )

                session = await client.create_session(config=session_config)

                # Obtain the new session ID
                new_sid = getattr(session, "session_id", getattr(session, "id", None))
                self._emit_debug_log(f"Created new session: {new_sid}")

            # Build the prompt
            if is_new_session:
                # New session: send the full history
                full_conversation = []
                for msg in messages[:-1]:
                    role = msg.get("role", "user").upper()
                    content = msg.get("content", "")
                    if isinstance(content, list):
                        content = " ".join(
                            [
                                c.get("text", "")
                                for c in content
                                if c.get("type") == "text"
                            ]
                        )
                    full_conversation.append(f"{role}: {content}")
                full_conversation.append(f"User: {last_text}")
                prompt = "\n\n".join(full_conversation)
            else:
                # Resumed session: send only the last message
                prompt = last_text

            send_payload = {"prompt": prompt, "mode": "immediate"}
            if attachments:
                send_payload["attachments"] = attachments

            if body.get("stream", False):
                # Session-status message shown in the UI
                init_msg = ""
                if self.valves.DEBUG:
                    if is_new_session:
                        new_sid = getattr(
                            session, "session_id", getattr(session, "id", "unknown")
                        )
                        init_msg = f"> [Debug] Created new session: {new_sid}\n"
                    else:
                        init_msg = f"> [Debug] Resumed session via ChatID: {chat_id}\n"

                return self.stream_response(client, session, send_payload, init_msg)
            else:
                try:
                    response = await session.send_and_wait(send_payload)
                    return response.data.content if response else "Empty response."
                finally:
                    # Destroy the session object to free memory; on-disk data is kept
                    await session.destroy()

        except Exception as e:
            self._emit_debug_log(f"Request error: {e}")
            return f"Error: {str(e)}"
    async def stream_response(
        self, client, session, send_payload, init_message: str = ""
    ) -> AsyncGenerator:
        queue = asyncio.Queue()
        done = asyncio.Event()
        self.thinking_started = False
        has_content = False  # tracks whether any content has been emitted

        def get_event_data(event, attr, default=""):
            if hasattr(event, "data"):
                data = event.data
                if data is None:
                    return default
                if isinstance(data, (str, int, float, bool)):
                    return str(data) if attr == "value" else default

                if isinstance(data, dict):
                    val = data.get(attr)
                    if val is None:
                        alt_attr = attr.replace("_", "") if "_" in attr else attr
                        val = data.get(alt_attr)
                    if val is None and "_" not in attr:
                        # Try converting camelCase to snake_case
                        import re

                        snake_attr = re.sub(r"(?<!^)(?=[A-Z])", "_", attr).lower()
                        val = data.get(snake_attr)
                else:
                    val = getattr(data, attr, None)
                    if val is None:
                        alt_attr = attr.replace("_", "") if "_" in attr else attr
                        val = getattr(data, alt_attr, None)
                    if val is None and "_" not in attr:
                        import re

                        snake_attr = re.sub(r"(?<!^)(?=[A-Z])", "_", attr).lower()
                        val = getattr(data, snake_attr, None)

                return val if val is not None else default
            return default

        def handler(event):
            event_type = (
                getattr(event.type, "value", None)
                if hasattr(event, "type")
                else str(event.type)
            )

            # Log the full payload of tool events to aid debugging
            if "tool" in event_type:
                try:
                    data_str = str(event.data) if hasattr(event, "data") else "no data"
                    self._emit_debug_log(f"Tool Event [{event_type}]: {data_str}")
                except Exception:
                    pass

            self._emit_debug_log(f"Event: {event_type}")

            # Handle message content (delta or full)
            if event_type in [
                "assistant.message_delta",
                "assistant.message.delta",
                "assistant.message",
            ]:
                # Special log for full-message events, to help diagnose missing deltas
                if event_type == "assistant.message":
                    self._emit_debug_log(
                        f"Received full message event (not a delta): {get_event_data(event, 'content')[:50]}..."
                    )

                delta = (
                    get_event_data(event, "delta_content")
                    or get_event_data(event, "deltaContent")
                    or get_event_data(event, "content")
                    or get_event_data(event, "text")
                )
                if delta:
                    if self.thinking_started:
                        queue.put_nowait("\n</think>\n")
                        self.thinking_started = False
                    queue.put_nowait(delta)

            elif event_type in [
                "assistant.reasoning_delta",
                "assistant.reasoning.delta",
                "assistant.reasoning",
            ]:
                delta = (
                    get_event_data(event, "delta_content")
                    or get_event_data(event, "deltaContent")
                    or get_event_data(event, "content")
                    or get_event_data(event, "text")
                )
                if delta:
                    if not self.thinking_started and self.valves.SHOW_THINKING:
                        queue.put_nowait("<think>\n")
                        self.thinking_started = True
                    if self.thinking_started:
                        queue.put_nowait(delta)

            elif event_type == "tool.execution_start":
                # Try multiple possible fields for the tool name/description
                tool_name = (
                    get_event_data(event, "toolName")
                    or get_event_data(event, "name")
                    or get_event_data(event, "description")
                    or get_event_data(event, "tool_name")
                    or "Unknown Tool"
                )
                if not self.thinking_started and self.valves.SHOW_THINKING:
                    queue.put_nowait("<think>\n")
                    self.thinking_started = True
                if self.thinking_started:
                    queue.put_nowait(f"\nRunning Tool: {tool_name}...\n")
                self._emit_debug_log(f"Tool Start: {tool_name}")

            elif event_type == "tool.execution_complete":
                if self.thinking_started:
                    queue.put_nowait("Tool Completed.\n")
                self._emit_debug_log("Tool Complete")

            elif event_type == "session.compaction_start":
                self._emit_debug_log("Session Compaction Started")

            elif event_type == "session.compaction_complete":
                self._emit_debug_log("Session Compaction Completed")

            elif event_type == "session.idle":
                done.set()
            elif event_type == "session.error":
                msg = get_event_data(event, "message", "Unknown Error")
                queue.put_nowait(f"\n[Error: {msg}]")
                done.set()

        unsubscribe = session.on(handler)
        await session.send(send_payload)

        if self.valves.DEBUG:
            yield "<think>\n"
            if init_message:
                yield init_message
            yield "> [Debug] Connection established, waiting for response...\n"
            self.thinking_started = True

        try:
            while not done.is_set():
                try:
                    chunk = await asyncio.wait_for(
                        queue.get(), timeout=float(self.valves.TIMEOUT)
                    )
                    if chunk:
                        has_content = True
                        yield chunk
                except asyncio.TimeoutError:
                    if done.is_set():
                        break
                    if self.thinking_started:
                        yield f"> [Debug] Still waiting for a response ({self.valves.TIMEOUT}s exceeded)...\n"
                    continue

            while not queue.empty():
                chunk = queue.get_nowait()
                if chunk:
                    has_content = True
                    yield chunk

            if self.thinking_started:
                yield "\n</think>\n"
                has_content = True

            # Core fix: if nothing was emitted at all, return a hint to prevent an OpenWebUI error
            if not has_content:
                yield "⚠️ Copilot returned no content. Please check if the Model ID is correct or enable DEBUG mode in Valves for details."

        except Exception as e:
            yield f"\n[Stream Error: {str(e)}]"
        finally:
            unsubscribe()
            # Destroy the session object to free memory; on-disk data is kept
            await session.destroy()
@@ -217,6 +217,23 @@ def format_markdown_table(plugins: list[dict]) -> str:
    return "\n".join(lines)


def _get_readme_url(file_path: str) -> str:
    """
    Generate the GitHub README URL from a plugin file path.
    """
    if not file_path:
        return ""
    # Extract the plugin directory (e.g., plugins/filters/folder-memory/folder_memory.py -> plugins/filters/folder-memory)
    from pathlib import Path

    plugin_dir = Path(file_path).parent
    # Convert to a GitHub URL
    return (
        f"https://github.com/Fu-Jie/awesome-openwebui/blob/main/{plugin_dir}/README.md"
    )


def format_release_notes(
    comparison: dict[str, list], ignore_removed: bool = False
) -> str:
@@ -229,9 +246,12 @@ def format_release_notes(
    if comparison["added"]:
        lines.append("### 新增插件 / New Plugins")
        for plugin in comparison["added"]:
            readme_url = _get_readme_url(plugin.get("file_path", ""))
            lines.append(f"- **{plugin['title']}** v{plugin['version']}")
            if plugin.get("description"):
                lines.append(f"  - {plugin['description']}")
            if readme_url:
                lines.append(f"  - 📖 [README / 文档]({readme_url})")
        lines.append("")

    if comparison["updated"]:
@@ -258,7 +278,10 @@ def format_release_notes(
            )
            prev_ver = prev_manifest.get("version") or prev.get("version")

            readme_url = _get_readme_url(curr.get("file_path", ""))
            lines.append(f"- **{curr_title}**: v{prev_ver} → v{curr_ver}")
            if readme_url:
                lines.append(f"  - 📖 [README / 文档]({readme_url})")
            lines.append("")

    if comparison["removed"] and not ignore_removed: