Compare commits

38 commits (v2026.01.2 ... v2026.01.2):

813b019653, b0b1542939, 15f19d8b8d, 82253b114c, e0bfbf6dd4, 4689e80e7a, 556e6c1c67, 3ab84a526d, bdce96f912, 4811b99a4b, fb2a64c07a, e023e4f2e2, 0b16b1e0f4, 59073ad7ac, 8248644c45, f38e6394c9, 0aaa529c6b, b81a6562a1, c5b10db23a, d16e444643, 8202468099, 766e8bd20f, 1214ab5a8c, ebddbb25f8, 59545e1110, 500e090b11, a75ee555fa, 6a8c2164cd, 7f7efa325a, 9ba6cb08fc, 1872271a2d, 813b50864a, b18cefe320, a54c359fcf, 8d83221a4a, 1879000720, ba92649a98, d2276dcaae
```diff
@@ -90,6 +90,9 @@ Reference: `.github/workflows/release.yml`
 - Action: Automatically updates the plugin code and metadata on OpenWebUI.com using `scripts/publish_plugin.py`.
 - **Auto-Sync**: If a local plugin has no ID but matches an existing published plugin by **Title**, the script will automatically fetch the ID, update the local file, and proceed with the update.
 - Requirement: `OPENWEBUI_API_KEY` secret must be set.
+- **README Link**: When announcing a release, always include the GitHub README URL for the plugin:
+  - Format: `https://github.com/Fu-Jie/awesome-openwebui/blob/main/plugins/{type}/{name}/README.md`
+  - Example: `https://github.com/Fu-Jie/awesome-openwebui/blob/main/plugins/filters/folder-memory/README.md`
 
 ### Pull Request Check
 - Workflow: `.github/workflows/plugin-version-check.yml`
```
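The Auto-Sync step above can be sketched in a few lines. The function name `resolve_plugin_id`, the plugin dict shape, and the exact-Title matching rule are illustrative assumptions, not the actual `scripts/publish_plugin.py` code:

```python
# Hypothetical sketch of the Auto-Sync step: match a local plugin that has
# no ID against the published catalogue by Title, then adopt the published ID.
from typing import Optional


def resolve_plugin_id(local: dict, published: list) -> Optional[str]:
    """Return the plugin ID to publish under, syncing by Title if needed."""
    if local.get("id"):                      # already linked: nothing to sync
        return local["id"]
    for remote in published:                 # fall back to an exact Title match
        if remote["title"] == local["title"]:
            local["id"] = remote["id"]       # update the local metadata in place
            return remote["id"]
    return None                              # no match: treat as a new plugin


local_plugin = {"title": "Folder Memory", "id": None}
catalogue = [{"title": "Folder Memory", "id": "folder_memory_4a9875b2"}]
print(resolve_plugin_id(local_plugin, catalogue))  # → folder_memory_4a9875b2
```

A real implementation would also have to handle duplicate titles in the catalogue; exact-match-or-nothing is the simplest safe rule.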
`.github/copilot-instructions.md` (19 changed lines, vendored)

```diff
@@ -100,13 +100,14 @@ description: 插件功能的简短描述。Brief description of plugin functiona
 | `author_url` | 作者主页链接 | `https://github.com/Fu-Jie/awesome-openwebui` |
 | `funding_url` | 赞助/项目链接 | `https://github.com/open-webui` |
 | `version` | 语义化版本号 | `0.1.0`, `1.2.3` |
-| `icon_url` | 图标 (Base64 编码的 SVG) | 见下方图标规范 |
+| `icon_url` | 图标 (Base64 编码的 SVG) | 仅 Action 插件**必须**提供。其他类型可选。 |
 | `requirements` | 额外依赖 (仅 OpenWebUI 环境未安装的) | `python-docx==1.1.2` |
 | `description` | 功能描述 | `将对话导出为 Word 文档` |
 
 #### 图标规范 (Icon Guidelines)
 
 - 图标来源:从 [Lucide Icons](https://lucide.dev/icons/) 获取符合插件功能的图标
+- 适用范围:Action 插件**必须**提供,其他插件可选
 - 格式:Base64 编码的 SVG
 - 获取方法:从 Lucide 下载 SVG,然后使用 Base64 编码
 - 示例格式:
@@ -822,6 +823,22 @@ Filter 实例是**单例 (Singleton)**。
 
 #### Commit Message 规范
 使用 Conventional Commits 格式 (`feat`, `fix`, `docs`, etc.)。
+**必须**在提交标题与正文中清晰描述变更内容,确保在 Release 页面可读且可追踪。
+
+要求:
+- 标题必须包含“做了什么”与影响范围(避免含糊词)。
+- 正文必须列出关键变更点(1-3 条),与实际改动一一对应。
+- 若影响用户或插件行为,必须在正文标明影响与迁移说明。
+
+推荐格式:
+- `feat(actions): add export settings panel`
+- `fix(filters): handle empty metadata to avoid crash`
+- `docs(plugins): update bilingual README structure`
+
+正文示例:
+- Add valves for export format selection
+- Update README/README_CN to include What's New section
+- Migration: default TITLE_SOURCE changed to chat_title
 
 ### 4. 🤖 Git Operations (Agent Rules)
```
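The icon rules in this diff boil down to one transformation: Base64-encode an SVG and embed it as a data URI in `icon_url`. A minimal sketch; the SVG string is a placeholder, not a real Lucide icon:

```python
import base64

# Encode an SVG (e.g. downloaded from Lucide) into a data URI for `icon_url`.
svg = '<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24"></svg>'
encoded = base64.b64encode(svg.encode("utf-8")).decode("ascii")
icon_url = f"data:image/svg+xml;base64,{encoded}"

# Round-trip check: decoding must reproduce the original SVG markup.
assert base64.b64decode(encoded).decode("utf-8") == svg
print(icon_url[:30])  # → data:image/svg+xml;base64,PHN2
```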
README.md (29 changed lines)

```diff
@@ -10,28 +10,28 @@ A collection of enhancements, plugins, and prompts for [OpenWebUI](https://githu
 <!-- STATS_START -->
 ## 📊 Community Stats
 
-> 🕐 Auto-updated: 2026-01-20 17:15
+> 🕐 Auto-updated: 2026-01-26 15:14
 
 | 👤 Author | 👥 Followers | ⭐ Points | 🏆 Contributions |
 |:---:|:---:|:---:|:---:|
-| [Fu-Jie](https://openwebui.com/u/Fu-Jie) | **137** | **134** | **25** |
+| [Fu-Jie](https://openwebui.com/u/Fu-Jie) | **158** | **152** | **31** |
 
 | 📝 Posts | ⬇️ Downloads | 👁️ Views | 👍 Upvotes | 💾 Saves |
 |:---:|:---:|:---:|:---:|:---:|
-| **16** | **1878** | **22027** | **120** | **147** |
+| **19** | **2388** | **27294** | **138** | **183** |
 
 ### 🔥 Top 6 Popular Plugins
 
-> 🕐 Auto-updated: 2026-01-20 17:15
+> 🕐 Auto-updated: 2026-01-26 15:14
 
 | Rank | Plugin | Version | Downloads | Views | Updated |
 |:---:|------|:---:|:---:|:---:|:---:|
-| 🥇 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) | 0.9.1 | 550 | 4933 | 2026-01-17 |
-| 🥈 | [📊 Smart Infographic (AntV)](https://openwebui.com/posts/smart_infographic_ad6f0c7f) | 1.4.9 | 281 | 2651 | 2026-01-18 |
-| 🥉 | [Export to Excel](https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d) | 0.3.7 | 213 | 835 | 2026-01-07 |
-| 4️⃣ | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) | 1.2.0 | 189 | 2048 | 2026-01-19 |
-| 5️⃣ | [Export to Word (Enhanced)](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) | 0.4.3 | 168 | 1449 | 2026-01-17 |
-| 6️⃣ | [Flash Card](https://openwebui.com/posts/flash_card_65a2ea8f) | 0.2.4 | 143 | 2386 | 2026-01-17 |
+| 🥇 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) | 0.9.1 | 629 | 5600 | 2026-01-17 |
+| 🥈 | [Smart Infographic](https://openwebui.com/posts/smart_infographic_ad6f0c7f) | 1.4.9 | 410 | 3621 | 2026-01-25 |
+| 🥉 | [Export to Excel](https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d) | 0.3.7 | 255 | 1039 | 2026-01-07 |
+| 4️⃣ | [Export to Word (Enhanced)](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) | 0.4.3 | 229 | 1839 | 2026-01-17 |
+| 5️⃣ | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) | 1.2.2 | 227 | 2461 | 2026-01-21 |
+| 6️⃣ | [Flash Card](https://openwebui.com/posts/flash_card_65a2ea8f) | 0.2.4 | 165 | 2674 | 2026-01-17 |
 
 *See full stats in [Community Stats Report](./docs/community-stats.md)*
 <!-- STATS_END -->
@@ -43,6 +43,7 @@ A collection of enhancements, plugins, and prompts for [OpenWebUI](https://githu
 Located in the `plugins/` directory, containing Python-based enhancements:
 
 #### Actions
 
 - **Smart Mind Map** (`smart-mind-map`): Generates interactive mind maps from text.
 - **Smart Infographic** (`infographic`): Transforms text into professional infographics using AntV.
 - **Flash Card** (`flash-card`): Quickly generates beautiful flashcards for learning.
@@ -51,11 +52,18 @@ Located in the `plugins/` directory, containing Python-based enhancements:
 - **Export to Word** (`export_to_docx`): Exports chat history to Word documents.
 
 #### Filters
 
 - **Async Context Compression** (`async-context-compression`): Optimizes token usage via context compression.
 - **Context Enhancement** (`context_enhancement_filter`): Enhances chat context.
+- **Folder Memory** (`folder-memory`): Automatically extracts project rules from conversations and injects them into the folder's system prompt.
 - **Markdown Normalizer** (`markdown_normalizer`): Fixes common Markdown formatting issues in LLM outputs.
 
+#### Pipes
+
+- **GitHub Copilot SDK** (`github-copilot-sdk`): Official GitHub Copilot SDK integration. Supports dynamic models, multi-turn conversation, streaming, multimodal input, and infinite sessions.
+
 #### Pipelines
 
 - **MoE Prompt Refiner** (`moe_prompt_refiner`): Refines prompts for Mixture of Experts (MoE) summary requests to generate high-quality comprehensive reports.
@@ -100,6 +108,7 @@ This project is a collection of resources and does not require a Python environm
 ### Contributing
 
 If you have great prompts or plugins to share:
 
 1. Fork this repository.
 2. Add your files to the appropriate `prompts/` or `plugins/` directory.
 3. Submit a Pull Request.
```
README_CN.md (27 changed lines)

```diff
@@ -7,28 +7,28 @@ OpenWebUI 增强功能集合。包含个人开发与收集的插件、提示词
 <!-- STATS_START -->
 ## 📊 社区统计
 
-> 🕐 自动更新于 2026-01-20 17:15
+> 🕐 自动更新于 2026-01-26 15:14
 
 | 👤 作者 | 👥 粉丝 | ⭐ 积分 | 🏆 贡献 |
 |:---:|:---:|:---:|:---:|
-| [Fu-Jie](https://openwebui.com/u/Fu-Jie) | **137** | **134** | **25** |
+| [Fu-Jie](https://openwebui.com/u/Fu-Jie) | **158** | **152** | **31** |
 
 | 📝 发布 | ⬇️ 下载 | 👁️ 浏览 | 👍 点赞 | 💾 收藏 |
 |:---:|:---:|:---:|:---:|:---:|
-| **16** | **1878** | **22027** | **120** | **147** |
+| **19** | **2388** | **27294** | **138** | **183** |
 
 ### 🔥 热门插件 Top 6
 
-> 🕐 自动更新于 2026-01-20 17:15
+> 🕐 自动更新于 2026-01-26 15:14
 
 | 排名 | 插件 | 版本 | 下载 | 浏览 | 更新日期 |
 |:---:|------|:---:|:---:|:---:|:---:|
-| 🥇 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) | 0.9.1 | 550 | 4933 | 2026-01-17 |
-| 🥈 | [📊 Smart Infographic (AntV)](https://openwebui.com/posts/smart_infographic_ad6f0c7f) | 1.4.9 | 281 | 2651 | 2026-01-18 |
-| 🥉 | [Export to Excel](https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d) | 0.3.7 | 213 | 835 | 2026-01-07 |
-| 4️⃣ | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) | 1.2.0 | 189 | 2048 | 2026-01-19 |
-| 5️⃣ | [Export to Word (Enhanced)](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) | 0.4.3 | 168 | 1449 | 2026-01-17 |
-| 6️⃣ | [Flash Card](https://openwebui.com/posts/flash_card_65a2ea8f) | 0.2.4 | 143 | 2386 | 2026-01-17 |
+| 🥇 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) | 0.9.1 | 629 | 5600 | 2026-01-17 |
+| 🥈 | [Smart Infographic](https://openwebui.com/posts/smart_infographic_ad6f0c7f) | 1.4.9 | 410 | 3621 | 2026-01-25 |
+| 🥉 | [Export to Excel](https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d) | 0.3.7 | 255 | 1039 | 2026-01-07 |
+| 4️⃣ | [Export to Word (Enhanced)](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) | 0.4.3 | 229 | 1839 | 2026-01-17 |
+| 5️⃣ | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) | 1.2.2 | 227 | 2461 | 2026-01-21 |
+| 6️⃣ | [Flash Card](https://openwebui.com/posts/flash_card_65a2ea8f) | 0.2.4 | 165 | 2674 | 2026-01-17 |
 
 *完整统计请查看 [社区统计报告](./docs/community-stats.zh.md)*
 <!-- STATS_END -->
@@ -40,6 +40,7 @@ OpenWebUI 增强功能集合。包含个人开发与收集的插件、提示词
 位于 `plugins/` 目录,包含各类 Python 编写的功能增强插件:
 
 #### Actions (交互增强)
 
 - **Smart Mind Map** (`smart-mind-map`): 智能分析文本并生成交互式思维导图。
 - **Smart Infographic** (`infographic`): 基于 AntV 的智能信息图生成工具。
 - **Flash Card** (`flash-card`): 快速生成精美的学习记忆卡片。
@@ -48,17 +49,22 @@ OpenWebUI 增强功能集合。包含个人开发与收集的插件、提示词
 - **Export to Word** (`export_to_docx`): 将对话内容导出为 Word 文档。
 
 #### Filters (消息处理)
 
 - **Async Context Compression** (`async-context-compression`): 异步上下文压缩,优化 Token 使用。
 - **Context Enhancement** (`context_enhancement_filter`): 上下文增强过滤器。
+- **Folder Memory** (`folder-memory`): 自动从对话中提取项目规则并注入到文件夹系统提示词中。
 - **Gemini Manifold Companion** (`gemini_manifold_companion`): Gemini Manifold 配套增强。
 - **Gemini Multimodal Filter** (`web_gemini_multimodel_filter`): 为任意模型提供多模态能力(PDF、Office、视频等),支持智能路由和字幕精修。
 - **Markdown Normalizer** (`markdown_normalizer`): 修复 LLM 输出中常见的 Markdown 格式问题。
 - **Multi-Model Context Merger** (`multi_model_context_merger`): 自动合并并注入多模型回答的上下文。
 
 #### Pipes (模型管道)
 
+- **GitHub Copilot SDK** (`github-copilot-sdk`): GitHub Copilot SDK 官方集成。支持动态模型、多轮对话、流式输出、图片输入及无限会话。
 - **Gemini Manifold** (`gemini_mainfold`): 集成 Gemini 模型的管道。
 
 #### Pipelines (工作流管道)
 
 - **MoE Prompt Refiner** (`moe_prompt_refiner`): 优化多模型 (MoE) 汇总请求的提示词,生成高质量的综合报告。
@@ -106,6 +112,7 @@ OpenWebUI 增强功能集合。包含个人开发与收集的插件、提示词
 ### 贡献代码
 
 如果你有优质的提示词或插件想要分享:
 
 1. Fork 本仓库。
 2. 将你的文件添加到对应的 `prompts/` 或 `plugins/` 目录。
 3. 提交 Pull Request。
```
|||||||
@@ -1,7 +1,7 @@
|
|||||||
{
|
{
|
||||||
"schemaVersion": 1,
|
"schemaVersion": 1,
|
||||||
"label": "downloads",
|
"label": "downloads",
|
||||||
"message": "1.9k",
|
"message": "2.4k",
|
||||||
"color": "blue",
|
"color": "blue",
|
||||||
"namedLogo": "openwebui"
|
"namedLogo": "openwebui"
|
||||||
}
|
}
|
||||||
@@ -1,6 +1,6 @@
|
|||||||
{
|
{
|
||||||
"schemaVersion": 1,
|
"schemaVersion": 1,
|
||||||
"label": "followers",
|
"label": "followers",
|
||||||
"message": "137",
|
"message": "158",
|
||||||
"color": "blue"
|
"color": "blue"
|
||||||
}
|
}
|
||||||
@@ -1,6 +1,6 @@
|
|||||||
{
|
{
|
||||||
"schemaVersion": 1,
|
"schemaVersion": 1,
|
||||||
"label": "plugins",
|
"label": "plugins",
|
||||||
"message": "16",
|
"message": "19",
|
||||||
"color": "green"
|
"color": "green"
|
||||||
}
|
}
|
||||||
@@ -1,6 +1,6 @@
|
|||||||
{
|
{
|
||||||
"schemaVersion": 1,
|
"schemaVersion": 1,
|
||||||
"label": "points",
|
"label": "points",
|
||||||
"message": "134",
|
"message": "152",
|
||||||
"color": "orange"
|
"color": "orange"
|
||||||
}
|
}
|
||||||
@@ -1,6 +1,6 @@
|
|||||||
{
|
{
|
||||||
"schemaVersion": 1,
|
"schemaVersion": 1,
|
||||||
"label": "upvotes",
|
"label": "upvotes",
|
||||||
"message": "120",
|
"message": "138",
|
||||||
"color": "brightgreen"
|
"color": "brightgreen"
|
||||||
}
|
}
|
||||||
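The five JSON fragments above all follow the Shields.io endpoint-badge schema (`schemaVersion: 1` plus `label`, `message`, `color`). A small generator sketch; the compact-count rounding (`2388` to `"2.4k"`) is my assumption about how `message` is derived, not confirmed by the source:

```python
import json


def human(n: int) -> str:
    """Compact count formatting, e.g. 1878 -> '1.9k' (assumed rounding rule)."""
    return f"{n / 1000:.1f}k" if n >= 1000 else str(n)


def badge(label: str, value: int, color: str) -> str:
    """Render a Shields.io endpoint-badge payload as JSON."""
    return json.dumps(
        {"schemaVersion": 1, "label": label, "message": human(value), "color": color},
        indent=2,
    )


print(badge("downloads", 2388, "blue"))
```

A README would then reference the hosted JSON via Shields' endpoint badge, e.g. `https://img.shields.io/endpoint?url=<raw-json-url>`.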
```diff
@@ -1,15 +1,16 @@
 {
-  "total_posts": 16,
-  "total_downloads": 1878,
-  "total_views": 22027,
-  "total_upvotes": 120,
+  "total_posts": 19,
+  "total_downloads": 2388,
+  "total_views": 27294,
+  "total_upvotes": 138,
   "total_downvotes": 2,
-  "total_saves": 147,
-  "total_comments": 24,
+  "total_saves": 183,
+  "total_comments": 33,
   "by_type": {
-    "filter": 1,
-    "action": 13,
-    "unknown": 2
+    "pipe": 1,
+    "action": 14,
+    "unknown": 3,
+    "filter": 1
   },
   "posts": [
     {
```
```diff
@@ -19,29 +20,29 @@
       "version": "0.9.1",
       "author": "Fu-Jie",
       "description": "Intelligently analyzes text content and generates interactive mind maps to help users structure and visualize knowledge.",
-      "downloads": 550,
-      "views": 4933,
-      "upvotes": 15,
-      "saves": 30,
+      "downloads": 629,
+      "views": 5600,
+      "upvotes": 16,
+      "saves": 37,
       "comments": 11,
       "created_at": "2025-12-30",
       "updated_at": "2026-01-17",
       "url": "https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a"
     },
     {
-      "title": "📊 Smart Infographic (AntV)",
+      "title": "Smart Infographic",
       "slug": "smart_infographic_ad6f0c7f",
       "type": "action",
       "version": "1.4.9",
       "author": "Fu-Jie",
       "description": "AI-powered infographic generator based on AntV Infographic. Supports professional templates, auto-icon matching, and SVG/PNG downloads.",
-      "downloads": 281,
-      "views": 2651,
-      "upvotes": 14,
-      "saves": 21,
-      "comments": 3,
+      "downloads": 410,
+      "views": 3621,
+      "upvotes": 18,
+      "saves": 27,
+      "comments": 7,
       "created_at": "2025-12-28",
-      "updated_at": "2026-01-18",
+      "updated_at": "2026-01-25",
       "url": "https://openwebui.com/posts/smart_infographic_ad6f0c7f"
     },
     {
```
```diff
@@ -51,8 +52,8 @@
       "version": "0.3.7",
       "author": "Fu-Jie",
       "description": "Extracts tables from chat messages and exports them to Excel (.xlsx) files with smart formatting.",
-      "downloads": 213,
-      "views": 835,
+      "downloads": 255,
+      "views": 1039,
       "upvotes": 4,
       "saves": 6,
       "comments": 0,
```
```diff
@@ -60,22 +61,6 @@
       "updated_at": "2026-01-07",
       "url": "https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d"
     },
-    {
-      "title": "Async Context Compression",
-      "slug": "async_context_compression_b1655bc8",
-      "type": "action",
-      "version": "1.2.0",
-      "author": "Fu-Jie",
-      "description": "Reduces token consumption in long conversations while maintaining coherence through intelligent summarization and message compression.",
-      "downloads": 189,
-      "views": 2048,
-      "upvotes": 9,
-      "saves": 22,
-      "comments": 0,
-      "created_at": "2025-11-08",
-      "updated_at": "2026-01-19",
-      "url": "https://openwebui.com/posts/async_context_compression_b1655bc8"
-    },
     {
       "title": "Export to Word (Enhanced)",
       "slug": "export_to_word_enhanced_formatting_fca6a315",
```
```diff
@@ -83,15 +68,31 @@
       "version": "0.4.3",
       "author": "Fu-Jie",
       "description": "Export current conversation from Markdown to Word (.docx) with Mermaid diagrams rendered client-side (Mermaid.js, SVG+PNG), LaTeX math, real hyperlinks, improved tables, syntax highlighting, and blockquote support.",
-      "downloads": 168,
-      "views": 1449,
+      "downloads": 229,
+      "views": 1839,
       "upvotes": 8,
-      "saves": 17,
+      "saves": 21,
       "comments": 0,
       "created_at": "2026-01-03",
       "updated_at": "2026-01-17",
       "url": "https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315"
     },
+    {
+      "title": "Async Context Compression",
+      "slug": "async_context_compression_b1655bc8",
+      "type": "action",
+      "version": "1.2.2",
+      "author": "Fu-Jie",
+      "description": "Reduces token consumption in long conversations while maintaining coherence through intelligent summarization and message compression.",
+      "downloads": 227,
+      "views": 2461,
+      "upvotes": 9,
+      "saves": 27,
+      "comments": 0,
+      "created_at": "2025-11-08",
+      "updated_at": "2026-01-21",
+      "url": "https://openwebui.com/posts/async_context_compression_b1655bc8"
+    },
     {
       "title": "Flash Card",
       "slug": "flash_card_65a2ea8f",
```
```diff
@@ -99,10 +100,10 @@
       "version": "0.2.4",
       "author": "Fu-Jie",
       "description": "Quickly generates beautiful flashcards from text, extracting key points and categories.",
-      "downloads": 143,
-      "views": 2386,
-      "upvotes": 10,
-      "saves": 12,
+      "downloads": 165,
+      "views": 2674,
+      "upvotes": 11,
+      "saves": 13,
       "comments": 2,
       "created_at": "2025-12-30",
       "updated_at": "2026-01-17",
```
```diff
@@ -115,10 +116,10 @@
       "version": "1.2.4",
       "author": "Fu-Jie",
       "description": "A content normalizer filter that fixes common Markdown formatting issues in LLM outputs, such as broken code blocks, LaTeX formulas, and list formatting.",
-      "downloads": 95,
-      "views": 2228,
+      "downloads": 148,
+      "views": 2762,
       "upvotes": 10,
-      "saves": 17,
+      "saves": 20,
       "comments": 5,
       "created_at": "2026-01-12",
       "updated_at": "2026-01-19",
```
```diff
@@ -131,10 +132,10 @@
       "version": "1.0.0",
       "author": "Fu-Jie",
       "description": "A comprehensive thinking lens that dives deep into any content - from context to logic, insights, and action paths.",
-      "downloads": 71,
-      "views": 703,
+      "downloads": 91,
+      "views": 839,
       "upvotes": 4,
-      "saves": 7,
+      "saves": 8,
       "comments": 0,
       "created_at": "2026-01-08",
       "updated_at": "2026-01-08",
```
```diff
@@ -147,11 +148,11 @@
       "version": "0.4.3",
       "author": "Fu-Jie",
       "description": "将对话导出为 Word (.docx),支持 Mermaid 图表 (客户端渲染 SVG+PNG)、LaTeX 数学公式、真实超链接、增强表格格式、代码高亮和引用块。",
-      "downloads": 65,
-      "views": 1329,
+      "downloads": 87,
+      "views": 1614,
       "upvotes": 11,
-      "saves": 3,
-      "comments": 1,
+      "saves": 4,
+      "comments": 4,
       "created_at": "2026-01-04",
       "updated_at": "2026-01-17",
       "url": "https://openwebui.com/posts/导出为_word_支持公式流程图表格和代码块_8a6306c0"
```
```diff
@@ -163,8 +164,8 @@
       "version": "1.4.9",
       "author": "Fu-Jie",
       "description": "基于 AntV Infographic 的智能信息图生成插件。支持多种专业模板,自动图标匹配,并提供 SVG/PNG 下载功能。",
-      "downloads": 43,
-      "views": 702,
+      "downloads": 46,
+      "views": 781,
       "upvotes": 6,
       "saves": 0,
       "comments": 0,
```
```diff
@@ -179,15 +180,47 @@
       "version": "0.9.1",
       "author": "Fu-Jie",
       "description": "智能分析文本内容,生成交互式思维导图,帮助用户结构化和可视化知识。",
-      "downloads": 24,
-      "views": 406,
-      "upvotes": 3,
+      "downloads": 27,
+      "views": 447,
+      "upvotes": 4,
       "saves": 1,
       "comments": 0,
       "created_at": "2025-12-31",
       "updated_at": "2026-01-17",
       "url": "https://openwebui.com/posts/智能生成交互式思维导图帮助用户可视化知识_8d4b097b"
     },
+    {
+      "title": "📂 Folder Memory – Auto-Evolving Project Context",
+      "slug": "folder_memory_auto_evolving_project_context_4a9875b2",
+      "type": "filter",
+      "version": "0.1.0",
+      "author": "Fu-Jie",
+      "description": "Automatically extracts project rules from conversations and injects them into the folder's system prompt.",
+      "downloads": 26,
+      "views": 725,
+      "upvotes": 3,
+      "saves": 4,
+      "comments": 0,
+      "created_at": "2026-01-20",
+      "updated_at": "2026-01-20",
+      "url": "https://openwebui.com/posts/folder_memory_auto_evolving_project_context_4a9875b2"
+    },
+    {
+      "title": "异步上下文压缩",
+      "slug": "异步上下文压缩_5c0617cb",
+      "type": "action",
+      "version": "1.2.2",
+      "author": "Fu-Jie",
+      "description": "通过智能摘要和消息压缩,降低长对话的 token 消耗,同时保持对话连贯性。",
+      "downloads": 20,
+      "views": 486,
+      "upvotes": 5,
+      "saves": 1,
+      "comments": 0,
+      "created_at": "2025-11-08",
+      "updated_at": "2026-01-21",
+      "url": "https://openwebui.com/posts/异步上下文压缩_5c0617cb"
+    },
     {
       "title": "闪记卡 (Flash Card)",
       "slug": "闪记卡生成插件_4a31eac3",
```
```diff
@@ -195,31 +228,15 @@
       "version": "0.2.4",
       "author": "Fu-Jie",
       "description": "快速将文本提炼为精美的学习记忆卡片,支持核心要点提取与分类。",
-      "downloads": 16,
-      "views": 452,
-      "upvotes": 5,
+      "downloads": 19,
+      "views": 507,
+      "upvotes": 6,
       "saves": 1,
       "comments": 0,
       "created_at": "2025-12-30",
       "updated_at": "2026-01-17",
       "url": "https://openwebui.com/posts/闪记卡生成插件_4a31eac3"
     },
-    {
-      "title": "异步上下文压缩",
-      "slug": "异步上下文压缩_5c0617cb",
-      "type": "filter",
-      "version": "1.2.0",
-      "author": "Fu-Jie",
-      "description": "通过智能摘要和消息压缩,降低长对话的 token 消耗,同时保持对话连贯性。",
-      "downloads": 14,
-      "views": 374,
-      "upvotes": 5,
-      "saves": 1,
-      "comments": 0,
-      "created_at": "2025-11-08",
-      "updated_at": "2026-01-19",
-      "url": "https://openwebui.com/posts/异步上下文压缩_5c0617cb"
-    },
     {
       "title": "精读",
       "slug": "精读_99830b0f",
```
@@ -227,8 +244,8 @@
|
|||||||
"version": "1.0.0",
|
"version": "1.0.0",
|
||||||
"author": "Fu-Jie",
|
"author": "Fu-Jie",
|
||||||
"description": "全方位的思维透镜 —— 从背景全景到逻辑脉络,从深度洞察到行动路径。",
|
"description": "全方位的思维透镜 —— 从背景全景到逻辑脉络,从深度洞察到行动路径。",
|
||||||
"downloads": 6,
|
"downloads": 9,
|
||||||
"views": 261,
|
"views": 306,
|
||||||
"upvotes": 3,
|
"upvotes": 3,
|
||||||
"saves": 1,
|
"saves": 1,
|
||||||
"comments": 0,
|
"comments": 0,
|
||||||
@@ -236,6 +253,38 @@
|
|||||||
"updated_at": "2026-01-08",
|
"updated_at": "2026-01-08",
|
||||||
"url": "https://openwebui.com/posts/精读_99830b0f"
|
"url": "https://openwebui.com/posts/精读_99830b0f"
|
||||||
},
|
},
|
||||||
|
{
|
||||||
|
"title": "GitHub Copilot Official SDK Pipe",
|
||||||
|
"slug": "github_copilot_official_sdk_pipe_ce96f7b4",
|
||||||
|
"type": "pipe",
|
||||||
|
"version": "0.1.1",
|
||||||
|
"author": "Fu-Jie",
|
||||||
|
"description": "Integrate GitHub Copilot SDK. Supports dynamic models, multi-turn conversation, streaming, multimodal input, and infinite sessions (context compaction).",
|
||||||
|
"downloads": 0,
|
||||||
|
"views": 8,
|
||||||
|
"upvotes": 1,
|
||||||
|
"saves": 0,
|
||||||
|
"comments": 0,
|
||||||
|
"created_at": "2026-01-26",
|
||||||
|
"updated_at": "2026-01-26",
|
||||||
|
"url": "https://openwebui.com/posts/github_copilot_official_sdk_pipe_ce96f7b4"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"title": "🚀 Open WebUI Prompt Plus: AI-Powered Prompt Manager",
|
||||||
|
"slug": "open_webui_prompt_plus_ai_powered_prompt_manager_s_15fa060e",
|
||||||
|
"type": "unknown",
|
||||||
|
"version": "",
|
||||||
|
"author": "",
|
||||||
|
"description": "",
|
||||||
|
"downloads": 0,
|
||||||
|
"views": 222,
|
||||||
|
"upvotes": 6,
|
||||||
|
"saves": 4,
|
||||||
|
"comments": 2,
|
||||||
|
"created_at": "2026-01-25",
|
||||||
|
"updated_at": "2026-01-25",
|
||||||
|
"url": "https://openwebui.com/posts/open_webui_prompt_plus_ai_powered_prompt_manager_s_15fa060e"
|
||||||
|
},
|
||||||
{
|
{
|
||||||
"title": "Review of Claude Haiku 4.5",
|
"title": "Review of Claude Haiku 4.5",
|
||||||
"slug": "review_of_claude_haiku_45_41b0db39",
|
"slug": "review_of_claude_haiku_45_41b0db39",
|
||||||
@@ -244,7 +293,7 @@
|
|||||||
"author": "",
|
"author": "",
|
||||||
"description": "",
|
"description": "",
|
||||||
"downloads": 0,
|
"downloads": 0,
|
||||||
"views": 62,
|
"views": 93,
|
||||||
"upvotes": 1,
|
"upvotes": 1,
|
||||||
"saves": 0,
|
"saves": 0,
|
||||||
"comments": 0,
|
"comments": 0,
|
||||||
@@ -260,7 +309,7 @@
|
|||||||
"author": "",
|
"author": "",
|
||||||
"description": "",
|
"description": "",
|
||||||
"downloads": 0,
|
"downloads": 0,
|
||||||
"views": 1208,
|
"views": 1270,
|
||||||
"upvotes": 12,
|
"upvotes": 12,
|
||||||
"saves": 8,
|
"saves": 8,
|
||||||
"comments": 2,
|
"comments": 2,
|
||||||
@@ -274,11 +323,11 @@
|
|||||||
"name": "Fu-Jie",
|
"name": "Fu-Jie",
|
||||||
"profile_url": "https://openwebui.com/u/Fu-Jie",
|
"profile_url": "https://openwebui.com/u/Fu-Jie",
|
||||||
"profile_image": "https://community.s3.openwebui.com/uploads/users/b15d1348-4347-42b4-b815-e053342d6cb0/profile_d9510745-4bd4-4f8f-a997-4a21847d9300.webp",
|
"profile_image": "https://community.s3.openwebui.com/uploads/users/b15d1348-4347-42b4-b815-e053342d6cb0/profile_d9510745-4bd4-4f8f-a997-4a21847d9300.webp",
|
||||||
"followers": 137,
|
"followers": 158,
|
||||||
"following": 2,
|
"following": 3,
|
||||||
"total_points": 134,
|
"total_points": 152,
|
||||||
"post_points": 118,
|
"post_points": 136,
|
||||||
"comment_points": 16,
|
"comment_points": 16,
|
||||||
"contributions": 25
|
"contributions": 31
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -1,41 +1,45 @@
 # 📊 OpenWebUI Community Stats Report
 
-> 📅 Updated: 2026-01-20 17:15
+> 📅 Updated: 2026-01-26 15:14
 
 ## 📈 Overview
 
 | Metric | Value |
 |------|------|
-| 📝 Total Posts | 16 |
+| 📝 Total Posts | 19 |
-| ⬇️ Total Downloads | 1878 |
+| ⬇️ Total Downloads | 2388 |
-| 👁️ Total Views | 22027 |
+| 👁️ Total Views | 27294 |
-| 👍 Total Upvotes | 120 |
+| 👍 Total Upvotes | 138 |
-| 💾 Total Saves | 147 |
+| 💾 Total Saves | 183 |
-| 💬 Total Comments | 24 |
+| 💬 Total Comments | 33 |
 
 ## 📂 By Type
 
+- **pipe**: 1
+- **action**: 14
+- **unknown**: 3
 - **filter**: 1
-- **action**: 13
-- **unknown**: 2
 
 ## 📋 Posts List
 
 | Rank | Title | Type | Version | Downloads | Views | Upvotes | Saves | Updated |
 |:---:|------|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
-| 1 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) | action | 0.9.1 | 550 | 4933 | 15 | 30 | 2026-01-17 |
+| 1 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) | action | 0.9.1 | 629 | 5600 | 16 | 37 | 2026-01-17 |
-| 2 | [📊 Smart Infographic (AntV)](https://openwebui.com/posts/smart_infographic_ad6f0c7f) | action | 1.4.9 | 281 | 2651 | 14 | 21 | 2026-01-18 |
+| 2 | [Smart Infographic](https://openwebui.com/posts/smart_infographic_ad6f0c7f) | action | 1.4.9 | 410 | 3621 | 18 | 27 | 2026-01-25 |
-| 3 | [Export to Excel](https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d) | action | 0.3.7 | 213 | 835 | 4 | 6 | 2026-01-07 |
+| 3 | [Export to Excel](https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d) | action | 0.3.7 | 255 | 1039 | 4 | 6 | 2026-01-07 |
-| 4 | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) | action | 1.2.0 | 189 | 2048 | 9 | 22 | 2026-01-19 |
+| 4 | [Export to Word (Enhanced)](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) | action | 0.4.3 | 229 | 1839 | 8 | 21 | 2026-01-17 |
-| 5 | [Export to Word (Enhanced)](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) | action | 0.4.3 | 168 | 1449 | 8 | 17 | 2026-01-17 |
+| 5 | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) | action | 1.2.2 | 227 | 2461 | 9 | 27 | 2026-01-21 |
-| 6 | [Flash Card](https://openwebui.com/posts/flash_card_65a2ea8f) | action | 0.2.4 | 143 | 2386 | 10 | 12 | 2026-01-17 |
+| 6 | [Flash Card](https://openwebui.com/posts/flash_card_65a2ea8f) | action | 0.2.4 | 165 | 2674 | 11 | 13 | 2026-01-17 |
-| 7 | [Markdown Normalizer](https://openwebui.com/posts/markdown_normalizer_baaa8732) | action | 1.2.4 | 95 | 2228 | 10 | 17 | 2026-01-19 |
+| 7 | [Markdown Normalizer](https://openwebui.com/posts/markdown_normalizer_baaa8732) | action | 1.2.4 | 148 | 2762 | 10 | 20 | 2026-01-19 |
-| 8 | [Deep Dive](https://openwebui.com/posts/deep_dive_c0b846e4) | action | 1.0.0 | 71 | 703 | 4 | 7 | 2026-01-08 |
+| 8 | [Deep Dive](https://openwebui.com/posts/deep_dive_c0b846e4) | action | 1.0.0 | 91 | 839 | 4 | 8 | 2026-01-08 |
-| 9 | [导出为 Word (增强版)](https://openwebui.com/posts/导出为_word_支持公式流程图表格和代码块_8a6306c0) | action | 0.4.3 | 65 | 1329 | 11 | 3 | 2026-01-17 |
+| 9 | [导出为 Word (增强版)](https://openwebui.com/posts/导出为_word_支持公式流程图表格和代码块_8a6306c0) | action | 0.4.3 | 87 | 1614 | 11 | 4 | 2026-01-17 |
-| 10 | [📊 智能信息图 (AntV Infographic)](https://openwebui.com/posts/智能信息图_e04a48ff) | action | 1.4.9 | 43 | 702 | 6 | 0 | 2026-01-17 |
+| 10 | [📊 智能信息图 (AntV Infographic)](https://openwebui.com/posts/智能信息图_e04a48ff) | action | 1.4.9 | 46 | 781 | 6 | 0 | 2026-01-17 |
-| 11 | [思维导图](https://openwebui.com/posts/智能生成交互式思维导图帮助用户可视化知识_8d4b097b) | action | 0.9.1 | 24 | 406 | 3 | 1 | 2026-01-17 |
+| 11 | [思维导图](https://openwebui.com/posts/智能生成交互式思维导图帮助用户可视化知识_8d4b097b) | action | 0.9.1 | 27 | 447 | 4 | 1 | 2026-01-17 |
-| 12 | [闪记卡 (Flash Card)](https://openwebui.com/posts/闪记卡生成插件_4a31eac3) | action | 0.2.4 | 16 | 452 | 5 | 1 | 2026-01-17 |
+| 12 | [📂 Folder Memory – Auto-Evolving Project Context](https://openwebui.com/posts/folder_memory_auto_evolving_project_context_4a9875b2) | filter | 0.1.0 | 26 | 725 | 3 | 4 | 2026-01-20 |
-| 13 | [异步上下文压缩](https://openwebui.com/posts/异步上下文压缩_5c0617cb) | filter | 1.2.0 | 14 | 374 | 5 | 1 | 2026-01-19 |
+| 13 | [异步上下文压缩](https://openwebui.com/posts/异步上下文压缩_5c0617cb) | action | 1.2.2 | 20 | 486 | 5 | 1 | 2026-01-21 |
-| 14 | [精读](https://openwebui.com/posts/精读_99830b0f) | action | 1.0.0 | 6 | 261 | 3 | 1 | 2026-01-08 |
+| 14 | [闪记卡 (Flash Card)](https://openwebui.com/posts/闪记卡生成插件_4a31eac3) | action | 0.2.4 | 19 | 507 | 6 | 1 | 2026-01-17 |
-| 15 | [Review of Claude Haiku 4.5](https://openwebui.com/posts/review_of_claude_haiku_45_41b0db39) | unknown | | 0 | 62 | 1 | 0 | 2026-01-14 |
+| 15 | [精读](https://openwebui.com/posts/精读_99830b0f) | action | 1.0.0 | 9 | 306 | 3 | 1 | 2026-01-08 |
-| 16 | [ 🛠️ Debug Open WebUI Plugins in Your Browser](https://openwebui.com/posts/debug_open_webui_plugins_in_your_browser_81bf7960) | unknown | | 0 | 1208 | 12 | 8 | 2026-01-10 |
+| 16 | [GitHub Copilot Official SDK Pipe](https://openwebui.com/posts/github_copilot_official_sdk_pipe_ce96f7b4) | pipe | 0.1.1 | 0 | 8 | 1 | 0 | 2026-01-26 |
+| 17 | [🚀 Open WebUI Prompt Plus: AI-Powered Prompt Manager](https://openwebui.com/posts/open_webui_prompt_plus_ai_powered_prompt_manager_s_15fa060e) | unknown | | 0 | 222 | 6 | 4 | 2026-01-25 |
+| 18 | [Review of Claude Haiku 4.5](https://openwebui.com/posts/review_of_claude_haiku_45_41b0db39) | unknown | | 0 | 93 | 1 | 0 | 2026-01-14 |
+| 19 | [ 🛠️ Debug Open WebUI Plugins in Your Browser](https://openwebui.com/posts/debug_open_webui_plugins_in_your_browser_81bf7960) | unknown | | 0 | 1270 | 12 | 8 | 2026-01-10 |
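The Overview totals in the report above are plain column sums over the per-post rows (e.g. the new Total Downloads of 2388 is the sum of the Downloads column). A minimal sketch of that aggregation — the field names mirror the JSON entries shown in the diff, but the function itself is illustrative, not the repo's actual script:

```python
def overview(posts: list[dict]) -> dict:
    """Aggregate per-post stats into the report's Overview totals."""
    totals = {k: 0 for k in ("downloads", "views", "upvotes", "saves", "comments")}
    for p in posts:
        for k in totals:
            totals[k] += p.get(k, 0)  # missing fields count as 0
    totals["posts"] = len(posts)
    return totals
```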
@@ -1,41 +1,45 @@
 # 📊 OpenWebUI 社区统计报告
 
-> 📅 更新时间: 2026-01-20 17:15
+> 📅 更新时间: 2026-01-26 15:14
 
 ## 📈 总览
 
 | 指标 | 数值 |
 |------|------|
-| 📝 发布数量 | 16 |
+| 📝 发布数量 | 19 |
-| ⬇️ 总下载量 | 1878 |
+| ⬇️ 总下载量 | 2388 |
-| 👁️ 总浏览量 | 22027 |
+| 👁️ 总浏览量 | 27294 |
-| 👍 总点赞数 | 120 |
+| 👍 总点赞数 | 138 |
-| 💾 总收藏数 | 147 |
+| 💾 总收藏数 | 183 |
-| 💬 总评论数 | 24 |
+| 💬 总评论数 | 33 |
 
 ## 📂 按类型分类
 
+- **pipe**: 1
+- **action**: 14
+- **unknown**: 3
 - **filter**: 1
-- **action**: 13
-- **unknown**: 2
 
 ## 📋 发布列表
 
 | 排名 | 标题 | 类型 | 版本 | 下载 | 浏览 | 点赞 | 收藏 | 更新日期 |
 |:---:|------|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
-| 1 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) | action | 0.9.1 | 550 | 4933 | 15 | 30 | 2026-01-17 |
+| 1 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) | action | 0.9.1 | 629 | 5600 | 16 | 37 | 2026-01-17 |
-| 2 | [📊 Smart Infographic (AntV)](https://openwebui.com/posts/smart_infographic_ad6f0c7f) | action | 1.4.9 | 281 | 2651 | 14 | 21 | 2026-01-18 |
+| 2 | [Smart Infographic](https://openwebui.com/posts/smart_infographic_ad6f0c7f) | action | 1.4.9 | 410 | 3621 | 18 | 27 | 2026-01-25 |
-| 3 | [Export to Excel](https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d) | action | 0.3.7 | 213 | 835 | 4 | 6 | 2026-01-07 |
+| 3 | [Export to Excel](https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d) | action | 0.3.7 | 255 | 1039 | 4 | 6 | 2026-01-07 |
-| 4 | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) | action | 1.2.0 | 189 | 2048 | 9 | 22 | 2026-01-19 |
+| 4 | [Export to Word (Enhanced)](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) | action | 0.4.3 | 229 | 1839 | 8 | 21 | 2026-01-17 |
-| 5 | [Export to Word (Enhanced)](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) | action | 0.4.3 | 168 | 1449 | 8 | 17 | 2026-01-17 |
+| 5 | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) | action | 1.2.2 | 227 | 2461 | 9 | 27 | 2026-01-21 |
-| 6 | [Flash Card](https://openwebui.com/posts/flash_card_65a2ea8f) | action | 0.2.4 | 143 | 2386 | 10 | 12 | 2026-01-17 |
+| 6 | [Flash Card](https://openwebui.com/posts/flash_card_65a2ea8f) | action | 0.2.4 | 165 | 2674 | 11 | 13 | 2026-01-17 |
-| 7 | [Markdown Normalizer](https://openwebui.com/posts/markdown_normalizer_baaa8732) | action | 1.2.4 | 95 | 2228 | 10 | 17 | 2026-01-19 |
+| 7 | [Markdown Normalizer](https://openwebui.com/posts/markdown_normalizer_baaa8732) | action | 1.2.4 | 148 | 2762 | 10 | 20 | 2026-01-19 |
-| 8 | [Deep Dive](https://openwebui.com/posts/deep_dive_c0b846e4) | action | 1.0.0 | 71 | 703 | 4 | 7 | 2026-01-08 |
+| 8 | [Deep Dive](https://openwebui.com/posts/deep_dive_c0b846e4) | action | 1.0.0 | 91 | 839 | 4 | 8 | 2026-01-08 |
-| 9 | [导出为 Word (增强版)](https://openwebui.com/posts/导出为_word_支持公式流程图表格和代码块_8a6306c0) | action | 0.4.3 | 65 | 1329 | 11 | 3 | 2026-01-17 |
+| 9 | [导出为 Word (增强版)](https://openwebui.com/posts/导出为_word_支持公式流程图表格和代码块_8a6306c0) | action | 0.4.3 | 87 | 1614 | 11 | 4 | 2026-01-17 |
-| 10 | [📊 智能信息图 (AntV Infographic)](https://openwebui.com/posts/智能信息图_e04a48ff) | action | 1.4.9 | 43 | 702 | 6 | 0 | 2026-01-17 |
+| 10 | [📊 智能信息图 (AntV Infographic)](https://openwebui.com/posts/智能信息图_e04a48ff) | action | 1.4.9 | 46 | 781 | 6 | 0 | 2026-01-17 |
-| 11 | [思维导图](https://openwebui.com/posts/智能生成交互式思维导图帮助用户可视化知识_8d4b097b) | action | 0.9.1 | 24 | 406 | 3 | 1 | 2026-01-17 |
+| 11 | [思维导图](https://openwebui.com/posts/智能生成交互式思维导图帮助用户可视化知识_8d4b097b) | action | 0.9.1 | 27 | 447 | 4 | 1 | 2026-01-17 |
-| 12 | [闪记卡 (Flash Card)](https://openwebui.com/posts/闪记卡生成插件_4a31eac3) | action | 0.2.4 | 16 | 452 | 5 | 1 | 2026-01-17 |
+| 12 | [📂 Folder Memory – Auto-Evolving Project Context](https://openwebui.com/posts/folder_memory_auto_evolving_project_context_4a9875b2) | filter | 0.1.0 | 26 | 725 | 3 | 4 | 2026-01-20 |
-| 13 | [异步上下文压缩](https://openwebui.com/posts/异步上下文压缩_5c0617cb) | filter | 1.2.0 | 14 | 374 | 5 | 1 | 2026-01-19 |
+| 13 | [异步上下文压缩](https://openwebui.com/posts/异步上下文压缩_5c0617cb) | action | 1.2.2 | 20 | 486 | 5 | 1 | 2026-01-21 |
-| 14 | [精读](https://openwebui.com/posts/精读_99830b0f) | action | 1.0.0 | 6 | 261 | 3 | 1 | 2026-01-08 |
+| 14 | [闪记卡 (Flash Card)](https://openwebui.com/posts/闪记卡生成插件_4a31eac3) | action | 0.2.4 | 19 | 507 | 6 | 1 | 2026-01-17 |
-| 15 | [Review of Claude Haiku 4.5](https://openwebui.com/posts/review_of_claude_haiku_45_41b0db39) | unknown | | 0 | 62 | 1 | 0 | 2026-01-14 |
+| 15 | [精读](https://openwebui.com/posts/精读_99830b0f) | action | 1.0.0 | 9 | 306 | 3 | 1 | 2026-01-08 |
-| 16 | [ 🛠️ Debug Open WebUI Plugins in Your Browser](https://openwebui.com/posts/debug_open_webui_plugins_in_your_browser_81bf7960) | unknown | | 0 | 1208 | 12 | 8 | 2026-01-10 |
+| 16 | [GitHub Copilot Official SDK Pipe](https://openwebui.com/posts/github_copilot_official_sdk_pipe_ce96f7b4) | pipe | 0.1.1 | 0 | 8 | 1 | 0 | 2026-01-26 |
+| 17 | [🚀 Open WebUI Prompt Plus: AI-Powered Prompt Manager](https://openwebui.com/posts/open_webui_prompt_plus_ai_powered_prompt_manager_s_15fa060e) | unknown | | 0 | 222 | 6 | 4 | 2026-01-25 |
+| 18 | [Review of Claude Haiku 4.5](https://openwebui.com/posts/review_of_claude_haiku_45_41b0db39) | unknown | | 0 | 93 | 1 | 0 | 2026-01-14 |
+| 19 | [ 🛠️ Debug Open WebUI Plugins in Your Browser](https://openwebui.com/posts/debug_open_webui_plugins_in_your_browser_81bf7960) | unknown | | 0 | 1270 | 12 | 8 | 2026-01-10 |
@@ -1,7 +1,7 @@
 # Async Context Compression
 
 <span class="category-badge filter">Filter</span>
-<span class="version-badge">v1.2.1</span>
+<span class="version-badge">v1.2.2</span>
 
 Reduces token consumption in long conversations through intelligent summarization while maintaining conversational coherence.
@@ -1,7 +1,7 @@
 # Async Context Compression(异步上下文压缩)
 
 <span class="category-badge filter">Filter</span>
-<span class="version-badge">v1.2.1</span>
+<span class="version-badge">v1.2.2</span>
 
 通过智能摘要减少长对话的 token 消耗,同时保持对话连贯。
docs/plugins/filters/folder-memory.md (new file, 57 lines)
@@ -0,0 +1,57 @@
+# Folder Memory
+
+**Author:** [Fu-Jie](https://github.com/Fu-Jie/awesome-openwebui) | **Version:** 0.1.0 | **Project:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **License:** MIT
+
+---
+
+### 📌 What's new in 0.1.0
+- **Initial Release**: Automated "Project Rules" management for OpenWebUI folders.
+- **Folder-Level Persistence**: Automatically updates folder system prompts with extracted rules.
+- **Optimized Performance**: Runs asynchronously and supports `PRIORITY` configuration for seamless integration with other filters.
+
+---
+
+**Folder Memory** is an intelligent context filter plugin for OpenWebUI. It automatically extracts consistent "Project Rules" from ongoing conversations within a folder and injects them back into the folder's system prompt.
+
+This ensures that all future conversations within that folder share the same evolved context and rules, without manual updates.
+
+## Features
+
+- **Automatic Extraction**: Analyzes chat history every N messages to extract project rules.
+- **Non-destructive Injection**: Updates only the specific "Project Rules" block in the system prompt, preserving other instructions.
+- **Async Processing**: Runs in the background without blocking the user's chat experience.
+- **ORM Integration**: Directly updates folder data using OpenWebUI's internal models for reliability.
+
+## Prerequisites
+
+- **Conversations must occur inside a folder.** This plugin only triggers when a chat belongs to a folder (i.e., you need to create a folder in OpenWebUI and start a conversation within it).
+
+## Installation
+
+1. Copy `folder_memory.py` to your OpenWebUI `plugins/filters/` directory (or upload via Admin UI).
+2. Enable the filter in your **Settings** -> **Filters**.
+3. (Optional) Configure the triggering threshold (default: every 10 messages).
+
+## Configuration (Valves)
+
+| Valve | Default | Description |
+| :--- | :--- | :--- |
+| `PRIORITY` | `20` | Priority level for the filter operations. |
+| `MESSAGE_TRIGGER_COUNT` | `10` | The number of messages required to trigger a rule analysis. |
+| `MODEL_ID` | `""` | The model used to generate rules. If empty, uses the current chat model. |
+| `RULES_BLOCK_TITLE` | `## 📂 Project Rules` | The title displayed above the injected rules block. |
+| `SHOW_DEBUG_LOG` | `False` | Show detailed debug logs in the browser console. |
+| `UPDATE_ROOT_FOLDER` | `False` | If enabled, finds and updates the root folder rules instead of the current subfolder. |
+
+## How It Works
+
+![Folder Memory Workflow](https://raw.githubusercontent.com/Fu-Jie/awesome-openwebui/main/plugins/filters/folder-memory/folder-memory-workflow.svg)
+
+1. **Trigger**: When a conversation reaches `MESSAGE_TRIGGER_COUNT` (e.g., 10, 20 messages).
+2. **Analysis**: The plugin sends the recent conversation + existing rules to the LLM.
+3. **Synthesis**: The LLM merges new insights with old rules, removing obsolete ones.
+4. **Update**: The new rule set replaces the `<!-- OWUI_PROJECT_RULES_START -->` block in the folder's system prompt.
+
+## Roadmap
+
+See [ROADMAP](https://github.com/Fu-Jie/awesome-openwebui/blob/main/plugins/filters/folder-memory/ROADMAP.md) for future plans, including "Project Knowledge" collection.
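The "non-destructive injection" that the Folder Memory doc describes — replacing only the marked rules block in the folder's system prompt — can be sketched as a marker-delimited replace. The `OWUI_PROJECT_RULES_START` marker comes from the doc above; the matching `OWUI_PROJECT_RULES_END` marker and the function itself are assumptions for illustration, not the plugin's actual code:

```python
import re

# START appears in the doc above; END is an assumed closing marker.
START = "<!-- OWUI_PROJECT_RULES_START -->"
END = "<!-- OWUI_PROJECT_RULES_END -->"

def inject_rules(system_prompt: str, rules: str,
                 title: str = "## 📂 Project Rules") -> str:
    """Replace only the marked rules block, preserving all other instructions."""
    block = f"{START}\n{title}\n{rules}\n{END}"
    pattern = re.compile(re.escape(START) + r".*?" + re.escape(END), re.DOTALL)
    if pattern.search(system_prompt):
        # Existing block found: swap its contents in place.
        return pattern.sub(lambda _: block, system_prompt)
    # No block yet: append one, leaving the prior prompt untouched.
    return (system_prompt + "\n\n" + block).strip()
```

Repeated calls are idempotent in shape: the prompt always ends up with exactly one rules block, while text outside the markers is never modified.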
docs/plugins/filters/folder-memory.zh.md (new file, 57 lines)
@@ -0,0 +1,57 @@
+# 文件夹记忆 (Folder Memory)
+
+**作者:** [Fu-Jie](https://github.com/Fu-Jie/awesome-openwebui) | **版本:** 0.1.0 | **项目:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **许可证:** MIT
+
+---
+
+### 📌 0.1.0 版本特性
+- **首个版本发布**:专注于自动化的“项目规则”管理。
+- **文件夹级持久化**:自动将提取的规则回写到文件夹系统提示词中。
+- **性能优化**:采用异步处理机制,并支持 `PRIORITY` 配置,确保与其他过滤器(如上下文压缩)完美协作。
+
+---
+
+**文件夹记忆 (Folder Memory)** 是一个 OpenWebUI 的智能上下文过滤器插件。它能自动从文件夹内的对话中提取一致性的“项目规则”,并将其回写到文件夹的系统提示词中。
+
+这确保了该文件夹内的所有未来对话都能共享相同的进化上下文和规则,无需手动更新。
+
+## 功能特性
+
+- **自动提取**:每隔 N 条消息分析一次聊天记录,提取项目规则。
+- **无损注入**:仅更新系统提示词中的特定“项目规则”块,保留其他指令。
+- **异步处理**:在后台运行,不阻塞用户的聊天体验。
+- **ORM 集成**:直接使用 OpenWebUI 的内部模型更新文件夹数据,确保可靠性。
+
+## 前置条件
+
+- **对话必须在文件夹内进行。** 此插件仅在聊天属于某个文件夹时触发(即您需要先在 OpenWebUI 中创建一个文件夹,并在其内部开始对话)。
+
+## 安装指南
+
+1. 将 `folder_memory.py` (或中文版 `folder_memory_cn.py`) 复制到 OpenWebUI 的 `plugins/filters/` 目录(或通过管理员 UI 上传)。
+2. 在 **设置** -> **过滤器** 中启用该插件。
+3. (可选)配置触发阈值(默认:每 10 条消息)。
+
+## 配置 (Valves)
+
+| 参数 | 默认值 | 说明 |
+| :--- | :--- | :--- |
+| `PRIORITY` | `20` | 过滤器操作的优先级。 |
+| `MESSAGE_TRIGGER_COUNT` | `10` | 触发规则分析的消息数量阈值。 |
+| `MODEL_ID` | `""` | 用于生成规则的模型 ID。若为空,则使用当前对话模型。 |
+| `RULES_BLOCK_TITLE` | `## 📂 项目规则` | 显示在注入规则块上方的标题。 |
+| `SHOW_DEBUG_LOG` | `False` | 在浏览器控制台显示详细调试日志。 |
+| `UPDATE_ROOT_FOLDER` | `False` | 如果启用,将向上查找并更新根文件夹的规则,而不是当前子文件夹。 |
+
+## 工作原理
+
+![文件夹记忆工作流](https://raw.githubusercontent.com/Fu-Jie/awesome-openwebui/main/plugins/filters/folder-memory/folder-memory-workflow.svg)
+
+1. **触发**:当对话达到 `MESSAGE_TRIGGER_COUNT`(例如 10、20 条消息)时。
+2. **分析**:插件将最近的对话 + 现有规则发送给 LLM。
+3. **综合**:LLM 将新见解与旧规则合并,移除过时的规则。
+4. **更新**:新的规则集替换文件夹系统提示词中的 `<!-- OWUI_PROJECT_RULES_START -->` 块。
+
+## 路线图
+
+查看 [ROADMAP](https://github.com/Fu-Jie/awesome-openwebui/blob/main/plugins/filters/folder-memory/ROADMAP.md) 了解未来计划,包括“项目知识”收集功能。
@@ -22,7 +22,7 @@ Filters act as middleware in the message pipeline:
 
 Reduces token consumption in long conversations through intelligent summarization while maintaining coherence.
 
-**Version:** 1.2.1
+**Version:** 1.2.2
 
 [:octicons-arrow-right-24: Documentation](async-context-compression.md)
 
@@ -36,7 +36,15 @@ Filters act as middleware in the message pipeline:
 
 [:octicons-arrow-right-24: Documentation](context-enhancement.md)
 
+- :material-folder-refresh:{ .lg .middle } **Folder Memory**
+
+    ---
+
+    Automatically extracts consistent "Project Rules" from ongoing conversations within a folder and injects them back into the folder's system prompt.
+
+    **Version:** 0.1.0
+
+    [:octicons-arrow-right-24: Documentation](folder-memory.md)
+
 - :material-format-paint:{ .lg .middle } **Markdown Normalizer**
@@ -22,7 +22,7 @@ Filter 充当消息管线中的中间件:
 
 通过智能总结减少长对话的 token 消耗,同时保持连贯性。
 
-**版本:** 1.2.1
+**版本:** 1.2.2
 
 [:octicons-arrow-right-24: 查看文档](async-context-compression.md)
 
@@ -36,7 +36,15 @@ Filter 充当消息管线中的中间件:
 
 [:octicons-arrow-right-24: 查看文档](context-enhancement.md)
 
+- :material-folder-refresh:{ .lg .middle } **Folder Memory**
+
+    ---
+
+    自动从文件夹内的对话中提取一致性的“项目规则”,并将其回写到文件夹的系统提示词中。
+
+    **版本:** 0.1.0
+
+    [:octicons-arrow-right-24: 查看文档](folder-memory.zh.md)
+
 - :material-format-paint:{ .lg .middle } **Markdown Normalizer**
docs/plugins/pipes/github-copilot-sdk.md (new file, 84 lines)
@@ -0,0 +1,84 @@
+# GitHub Copilot SDK Pipe for OpenWebUI
+
+**Author:** [Fu-Jie](https://github.com/Fu-Jie/awesome-openwebui) | **Version:** 0.1.0 | **Project:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **License:** MIT
+
+This is an advanced Pipe function for [OpenWebUI](https://github.com/open-webui/open-webui) that allows you to use GitHub Copilot models (such as `gpt-5`, `gpt-5-mini`, `claude-sonnet-4.5`) directly within OpenWebUI. It is built upon the official [GitHub Copilot SDK for Python](https://github.com/github/copilot-sdk), providing a native integration experience.
+
+## 🚀 What's New (v0.1.0)
+
+* **♾️ Infinite Sessions**: Automatic context compaction for long-running conversations. No more context limit errors!
+* **🧠 Thinking Process**: Real-time display of model reasoning/thinking process (for supported models).
+* **📂 Workspace Control**: Restricted workspace directory for secure file operations.
+* **🔍 Model Filtering**: Exclude specific models using keywords (e.g., `codex`, `haiku`).
+* **💾 Session Persistence**: Improved session resume logic using OpenWebUI chat ID mapping.
+
+## ✨ Core Features
+
+* **🚀 Official SDK Integration**: Built on the official SDK for stability and reliability.
+* **💬 Multi-turn Conversation**: Automatically concatenates history context so Copilot understands your previous messages.
+* **🌊 Streaming Output**: Supports typewriter effect for fast responses.
+* **🖼️ Multimodal Support**: Supports image uploads, automatically converting them to attachments for Copilot (requires model support).
+* **🛠️ Zero-config Installation**: Automatically detects and downloads the GitHub Copilot CLI, ready to use out of the box.
+* **🔑 Secure Authentication**: Supports Fine-grained Personal Access Tokens for minimized permissions.
+* **🐛 Debug Mode**: Built-in detailed log output for easy connection troubleshooting.
+
+## 📦 Installation & Usage
+
+### 1. Import Function
+
+1. Open OpenWebUI.
+2. Go to **Workspace** -> **Functions**.
+3. Click **+** (Create Function).
+4. Paste the content of `github_copilot_sdk.py` (or `github_copilot_sdk_cn.py` for Chinese) completely.
+5. Save.
+
+### 2. Configure Valves (Settings)
+
+Find "GitHub Copilot" in the function list and click the **⚙️ (Valves)** icon to configure:
+
+| Parameter | Description | Default |
+| :--- | :--- | :--- |
+| **GH_TOKEN** | **(Required)** Your GitHub Token. | - |
+| **MODEL_ID** | The model name to use. Recommended `gpt-5-mini` or `gpt-5`. | `gpt-5-mini` |
+| **CLI_PATH** | Path to the Copilot CLI. Will download automatically if not found. | `/usr/local/bin/copilot` |
+| **DEBUG** | Whether to enable debug logs (output to chat). | `True` |
+| **SHOW_THINKING** | Show model reasoning/thinking process. | `True` |
+| **EXCLUDE_KEYWORDS** | Exclude models containing these keywords (comma separated). | - |
+| **WORKSPACE_DIR** | Restricted workspace directory for file operations. | - |
+| **INFINITE_SESSION** | Enable Infinite Sessions (automatic context compaction). | `True` |
+| **COMPACTION_THRESHOLD** | Background compaction threshold (0.0-1.0). | `0.8` |
+| **BUFFER_THRESHOLD** | Buffer exhaustion threshold (0.0-1.0). | `0.95` |
+
+### 3. Get GH_TOKEN
+
+For security, it is recommended to use a **Fine-grained Personal Access Token**:
+
+1. Visit [GitHub Token Settings](https://github.com/settings/tokens?type=beta).
+2. Click **Generate new token**.
+3. **Repository access**: Select `All repositories` or `Public Repositories`.
+4. **Permissions**:
+    * Click **Account permissions**.
+    * Find **Copilot Requests**, select **Read and write** (or Access).
+5. Generate and copy the Token.
+
+## 📋 Dependencies
+
+This Pipe will automatically attempt to install the following dependencies:
+
+* `github-copilot-sdk` (Python package)
+* `github-copilot-cli` (Binary file, installed via official script)
+
+## ⚠️ FAQ
+
+* **Stuck on "Waiting..."**:
+    * Check if `GH_TOKEN` is correct and has `Copilot Requests` permission.
+    * Try changing `MODEL_ID` to `gpt-4o` or `copilot-chat`.
+* **Images not recognized**:
+    * Ensure `MODEL_ID` is a model that supports multimodal input.
+* **CLI Installation Failed**:
+    * Ensure the OpenWebUI container has internet access.
+    * You can manually download the CLI and specify `CLI_PATH` in Valves.
+
+## 📄 License
+
+MIT
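The two thresholds in the valves table above (`COMPACTION_THRESHOLD` at 0.8, `BUFFER_THRESHOLD` at 0.95) suggest a two-stage policy: compact in the background once context usage is high, and compact immediately once the buffer is nearly exhausted. A minimal sketch of that decision, assuming a simple token-usage ratio — the function name and return values are illustrative, not the pipe's actual API:

```python
def compaction_action(used_tokens: int, context_window: int,
                      compaction_threshold: float = 0.8,
                      buffer_threshold: float = 0.95) -> str:
    """Decide whether and how urgently to compact the session context."""
    usage = used_tokens / context_window
    if usage >= buffer_threshold:
        return "compact_now"         # buffer nearly exhausted: compact before the next turn
    if usage >= compaction_threshold:
        return "compact_background"  # summarize older turns asynchronously
    return "none"
```

With the defaults, a session at 86% of its context window would trigger background compaction, while one at 98% would force an immediate compaction, which is how "infinite sessions" can avoid context-limit errors.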
84
docs/plugins/pipes/github-copilot-sdk.zh.md
Normal file
84
docs/plugins/pipes/github-copilot-sdk.zh.md
Normal file
@@ -0,0 +1,84 @@
# GitHub Copilot SDK 官方管道

**作者:** [Fu-Jie](https://github.com/Fu-Jie/awesome-openwebui) | **版本:** 0.1.0 | **项目:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **许可证:** MIT

这是一个用于 [OpenWebUI](https://github.com/open-webui/open-webui) 的高级 Pipe 函数,允许你直接在 OpenWebUI 中使用 GitHub Copilot 模型(如 `gpt-5`, `gpt-5-mini`, `claude-sonnet-4.5`)。它基于官方 [GitHub Copilot SDK for Python](https://github.com/github/copilot-sdk) 构建,提供了原生级的集成体验。

## 🚀 最新特性 (v0.1.0)

* **♾️ 无限会话 (Infinite Sessions)**:支持长对话的自动上下文压缩,告别上下文超限错误!
* **🧠 思考过程展示**:实时显示模型的推理/思考过程(需模型支持)。
* **📂 工作目录控制**:支持设置受限工作目录,确保文件操作安全。
* **🔍 模型过滤**:支持通过关键词排除特定模型(如 `codex`, `haiku`)。
* **💾 会话持久化**:改进的会话恢复逻辑,直接关联 OpenWebUI 聊天 ID,连接更稳定。

## ✨ 核心特性

* **🚀 官方 SDK 集成**:基于官方 SDK,稳定可靠。
* **💬 多轮对话支持**:自动拼接历史上下文,Copilot 能理解你的前文。
* **🌊 流式输出 (Streaming)**:支持打字机效果,响应迅速。
* **🖼️ 多模态支持**:支持上传图片,自动转换为附件发送给 Copilot(需模型支持)。
* **🛠️ 零配置安装**:自动检测并下载 GitHub Copilot CLI,开箱即用。
* **🔑 安全认证**:支持 Fine-grained Personal Access Tokens,权限最小化。
* **🐛 调试模式**:内置详细的日志输出,方便排查连接问题。

## 📦 安装与使用

### 1. 导入函数

1. 打开 OpenWebUI。
2. 进入 **Workspace** -> **Functions**。
3. 点击 **+** (创建函数)。
4. 将 `github_copilot_sdk_cn.py` 的内容完整粘贴进去。
5. 保存。

### 2. 配置 Valves (设置)

在函数列表中找到 "GitHub Copilot",点击 **⚙️ (Valves)** 图标进行配置:

| 参数 | 说明 | 默认值 |
| :--- | :--- | :--- |
| **GH_TOKEN** | **(必填)** 你的 GitHub Token。 | - |
| **MODEL_ID** | 使用的模型名称。推荐 `gpt-5-mini` 或 `gpt-5`。 | `gpt-5-mini` |
| **CLI_PATH** | Copilot CLI 的路径。如果未找到会自动下载。 | `/usr/local/bin/copilot` |
| **DEBUG** | 是否开启调试日志(输出到对话框)。 | `True` |
| **SHOW_THINKING** | 是否显示模型推理/思考过程。 | `True` |
| **EXCLUDE_KEYWORDS** | 排除包含这些关键词的模型 (逗号分隔)。 | - |
| **WORKSPACE_DIR** | 文件操作的受限工作目录。 | - |
| **INFINITE_SESSION** | 启用无限会话 (自动上下文压缩)。 | `True` |
| **COMPACTION_THRESHOLD** | 后台压缩阈值 (0.0-1.0)。 | `0.8` |
| **BUFFER_THRESHOLD** | 缓冲耗尽阈值 (0.0-1.0)。 | `0.95` |

### 3. 获取 GH_TOKEN

为了安全起见,推荐使用 **Fine-grained Personal Access Token**:

1. 访问 [GitHub Token Settings](https://github.com/settings/tokens?type=beta)。
2. 点击 **Generate new token**。
3. **Repository access**: 选择 `All repositories` 或 `Public Repositories`。
4. **Permissions**:
   * 点击 **Account permissions**。
   * 找到 **Copilot Requests**,选择 **Read and write** (或 Access)。
5. 生成并复制 Token。

## 📋 依赖说明

该 Pipe 会自动尝试安装以下依赖(如果环境中缺失):

* `github-copilot-sdk` (Python 包)
* `github-copilot-cli` (二进制文件,通过官方脚本安装)

## ⚠️ 常见问题

* **一直显示 "Waiting..."**:
  * 检查 `GH_TOKEN` 是否正确且拥有 `Copilot Requests` 权限。
  * 尝试将 `MODEL_ID` 改为 `gpt-4o` 或 `copilot-chat`。
* **图片无法识别**:
  * 确保 `MODEL_ID` 是支持多模态的模型。
* **CLI 安装失败**:
  * 确保 OpenWebUI 容器有外网访问权限。
  * 你可以手动下载 CLI 并挂载到容器中,然后在 Valves 中指定 `CLI_PATH`。

## 📄 许可证

MIT
@@ -15,7 +15,7 @@ Pipes allow you to:

## Available Pipe Plugins

- [GitHub Copilot SDK](github-copilot-sdk.md) (v0.1.1) - Official GitHub Copilot SDK integration. Supports dynamic models, multi-turn conversation, streaming, multimodal input, and infinite sessions.

---
@@ -15,7 +15,7 @@ Pipes 可以用于:

## 可用的 Pipe 插件

- [GitHub Copilot SDK](github-copilot-sdk.zh.md) (v0.1.1) - GitHub Copilot SDK 官方集成。支持动态模型、多轮对话、流式输出、图片输入及无限会话。

---
@@ -1,9 +1,13 @@
# Async Context Compression Filter

**Author:** [Fu-Jie](https://github.com/Fu-Jie/awesome-openwebui) | **Version:** 1.2.2 | **Project:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **License:** MIT

This filter reduces token consumption in long conversations through intelligent summarization and message compression while keeping conversations coherent.

## What's new in 1.2.2

- **Critical Fix**: Resolved `TypeError: 'str' object is not callable` caused by a variable name conflict in the logging function.
- **Compatibility**: Enhanced `params` handling to support Pydantic objects, improving compatibility with different OpenWebUI versions.

## What's new in 1.2.1

- **Smart Configuration**: Automatically detects base model settings for custom models and adds `summary_model_max_context` for independent summary limits.
@@ -1,11 +1,15 @@
# 异步上下文压缩过滤器

**作者:** [Fu-Jie](https://github.com/Fu-Jie/awesome-openwebui) | **版本:** 1.2.2 | **项目:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **许可证:** MIT

> **重要提示**:为了确保所有过滤器的可维护性和易用性,每个过滤器都应附带清晰、完整的文档,以确保其功能、配置和使用方法得到充分说明。

本过滤器通过智能摘要和消息压缩技术,在保持对话连贯性的同时,显著降低长对话的 Token 消耗。

## 1.2.2 版本更新

- **严重错误修复**: 解决了因日志函数变量名冲突导致的 `TypeError: 'str' object is not callable` 错误。
- **兼容性增强**: 改进了 `params` 处理逻辑以支持 Pydantic 对象,提高了对不同 OpenWebUI 版本的兼容性。

## 1.2.1 版本更新

- **智能配置增强**: 自动检测自定义模型的基础模型配置,并新增 `summary_model_max_context` 参数以独立控制摘要模型的上下文限制。
@@ -5,7 +5,7 @@ author: Fu-Jie
author_url: https://github.com/Fu-Jie/awesome-openwebui
funding_url: https://github.com/open-webui
description: Reduces token consumption in long conversations while maintaining coherence through intelligent summarization and message compression.
version: 1.2.2
openwebui_id: b1655bc8-6de9-4cad-8cb5-a6f7829a02ce
license: MIT
@@ -839,7 +839,7 @@ class Filter:
        except Exception as e:
            logger.error(f"Error emitting debug log: {e}")

    async def _log(self, message: str, log_type: str = "info", event_call=None):
        """Unified logging to both backend (print) and frontend (console.log)"""
        # Backend logging
        if self.valves.debug_mode:
@@ -849,11 +849,11 @@ class Filter:
        if self.valves.show_debug_log and event_call:
            try:
                css = "color: #3b82f6;"  # Blue default
                if log_type == "error":
                    css = "color: #ef4444; font-weight: bold;"  # Red
                elif log_type == "warning":
                    css = "color: #f59e0b;"  # Orange
                elif log_type == "success":
                    css = "color: #10b981; font-weight: bold;"  # Green

                # Clean message for frontend: remove separators and extra newlines
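The rename from `type` to `log_type` above is the whole 1.2.2 "critical fix": a parameter named `type` shadows Python's builtin, so a later `type(...)` call inside the function calls the string argument instead. A minimal standalone repro (hypothetical function names, not the filter's actual code):

```python
def log_bad(message, type="info"):
    # `type` is now the string "info"; calling it raises
    # TypeError: 'str' object is not callable
    return f"[{type(message).__name__}] {message}"

def log_good(message, log_type="info"):
    # With the parameter renamed, the builtin `type` is reachable again
    return f"[{type(message).__name__}] {log_type}: {message}"

try:
    log_bad("hello")
except TypeError as e:
    print(f"TypeError: {e}")

print(log_good("hello"))  # [str] info: hello
```

The bug is silent until the shadowed name is actually called, which is why it only surfaced at runtime inside the logging path.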
@@ -999,6 +999,7 @@ class Filter:
        # 2. For base models: check messages for role='system'
        system_prompt_content = None

        # Try to get from DB (custom model)
        try:
            model_id = body.get("model")
@@ -1026,12 +1027,17 @@ class Filter:
                # Handle case where params is a JSON string
                if isinstance(params, str):
                    params = json.loads(params)
                # Convert Pydantic model to dict if needed
                elif hasattr(params, "model_dump"):
                    params = params.model_dump()
                elif hasattr(params, "dict"):
                    params = params.dict()

                # Now params should be a dict
                if isinstance(params, dict):
                    system_prompt_content = params.get("system")
                else:
                    # Fallback: try getattr
                    system_prompt_content = getattr(params, "system", None)

                if system_prompt_content:
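The added `elif` branches above normalize `params` before `system` is read from it. The same pattern can be exercised in isolation; the stub class below merely stands in for a Pydantic model (v2 models expose `model_dump()`, v1 models expose `.dict()`):

```python
import json

class FakeParamsModel:
    """Stand-in for a Pydantic model carrying model parameters."""
    def __init__(self, system):
        self._system = system
    def model_dump(self):
        return {"system": self._system}

def extract_system_prompt(params):
    # 1. JSON string -> dict
    if isinstance(params, str):
        params = json.loads(params)
    # 2. Pydantic v2 / v1 object -> dict
    elif hasattr(params, "model_dump"):
        params = params.model_dump()
    elif hasattr(params, "dict"):
        params = params.dict()
    # 3. Now params should be a dict; fall back to getattr otherwise
    if isinstance(params, dict):
        return params.get("system")
    return getattr(params, "system", None)

print(extract_system_prompt('{"system": "from-json"}'))      # from-json
print(extract_system_prompt({"system": "from-dict"}))        # from-dict
print(extract_system_prompt(FakeParamsModel("from-model")))  # from-model
```

Ordering matters: the string check must run first, since a `str` has neither `model_dump` nor a meaningful `.dict()`.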
@@ -1050,7 +1056,7 @@ class Filter:
            if self.valves.show_debug_log and __event_call__:
                await self._log(
                    f"[Inlet] ❌ Failed to parse model params: {e}",
                    log_type="error",
                    event_call=__event_call__,
                )
@@ -1071,7 +1077,7 @@ class Filter:
            if self.valves.show_debug_log and __event_call__:
                await self._log(
                    f"[Inlet] ❌ Error fetching system prompt from DB: {e}",
                    log_type="error",
                    event_call=__event_call__,
                )
            if self.valves.debug_mode:
@@ -1125,7 +1131,7 @@ class Filter:
        if not chat_id:
            await self._log(
                "[Inlet] ❌ Missing chat_id in metadata, skipping compression",
                log_type="error",
                event_call=__event_call__,
            )
            return body
@@ -1154,7 +1160,7 @@ class Filter:
        else:
            await self._log(
                f"[Inlet] ⚠️ Invalid Model Configs (Raw: '{raw_config}'): No valid configs parsed. Expected format: 'model_id:threshold:max_context'",
                log_type="warning",
                event_call=__event_call__,
            )
        else:
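The warning above names the expected entry format, `model_id:threshold:max_context`, for the comma-separated override string. A tolerant parser for that format could look like the sketch below (an illustration only; the filter's actual parsing logic may differ):

```python
def parse_model_configs(raw: str):
    """Parse 'model_id:threshold:max_context' entries separated by commas.

    Invalid entries are skipped; returns {model_id: (threshold, max_context)}.
    """
    configs = {}
    for entry in raw.split(","):
        parts = entry.strip().split(":")
        if len(parts) != 3:
            continue
        model_id, threshold, max_context = (p.strip() for p in parts)
        try:
            configs[model_id] = (float(threshold), int(max_context))
        except ValueError:
            continue  # non-numeric threshold or context: skip entry
    return configs

print(parse_model_configs("gpt-4o:0.8:128000, bad-entry, llama3:0.7:8192"))
# {'gpt-4o': (0.8, 128000), 'llama3': (0.7, 8192)}
```

Skipping malformed entries rather than raising matches the filter's behavior of logging a warning and carrying on.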
@@ -1258,7 +1264,7 @@ class Filter:
        if total_tokens > max_context_tokens:
            await self._log(
                f"[Inlet] ⚠️ Candidate prompt ({total_tokens} Tokens) exceeds limit ({max_context_tokens}). Reducing history...",
                log_type="warning",
                event_call=__event_call__,
            )
@@ -1395,7 +1401,7 @@ class Filter:
        await self._log(
            f"[Inlet] Applied summary: {system_info} + Head({len(head_messages)} msg, {head_tokens}t) + Summary({summary_tokens}t) + Tail({len(tail_messages)} msg, {tail_tokens}t) = Total({total_section_tokens}t)",
            log_type="success",
            event_call=__event_call__,
        )
@@ -1455,7 +1461,7 @@ class Filter:
        if total_tokens > max_context_tokens:
            await self._log(
                f"[Inlet] ⚠️ Original messages ({total_tokens} Tokens) exceed limit ({max_context_tokens}). Reducing history...",
                log_type="warning",
                event_call=__event_call__,
            )
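Both warnings above fall back to "reducing history": dropping the oldest non-system messages until the estimated prompt fits the context limit. A simplified sketch of that loop (token counts here are precomputed per-message estimates; the filter's real accounting uses its own tokenizer and keeps summary/head/tail sections):

```python
def reduce_history(messages, max_tokens):
    """Drop oldest non-system messages until the total estimate fits."""
    system = [m for m in messages if m["role"] == "system"]
    history = [m for m in messages if m["role"] != "system"]
    total = sum(m["tokens"] for m in system) + sum(m["tokens"] for m in history)
    while history and total > max_tokens:
        total -= history.pop(0)["tokens"]  # drop the oldest turn first
    return system + history

msgs = [
    {"role": "system", "tokens": 50},
    {"role": "user", "tokens": 400},
    {"role": "assistant", "tokens": 400},
    {"role": "user", "tokens": 100},
]
print(len(reduce_history(msgs, 600)))  # 3: system prompt + the two newest turns
```

The system prompt is never dropped, which is why the limit check can still fail if the system prompt alone exceeds the budget.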
@@ -1523,7 +1529,7 @@ class Filter:
        if not chat_id:
            await self._log(
                "[Outlet] ❌ Missing chat_id in metadata, skipping compression",
                log_type="error",
                event_call=__event_call__,
            )
            return body
@@ -1625,7 +1631,7 @@ class Filter:
        if current_tokens >= compression_threshold_tokens:
            await self._log(
                f"[🔍 Background Calculation] ⚡ Compression threshold triggered (Token: {current_tokens} >= {compression_threshold_tokens})",
                log_type="warning",
                event_call=__event_call__,
            )
@@ -1648,7 +1654,7 @@ class Filter:
        except Exception as e:
            await self._log(
                f"[🔍 Background Calculation] ❌ Error: {str(e)}",
                log_type="error",
                event_call=__event_call__,
            )
@@ -1687,7 +1693,7 @@ class Filter:
            target_compressed_count = max(0, len(messages) - self.valves.keep_last)
            await self._log(
                f"[🤖 Async Summary Task] ⚠️ target_compressed_count is None, estimating: {target_compressed_count}",
                log_type="warning",
                event_call=__event_call__,
            )
@@ -1734,7 +1740,7 @@ class Filter:
        if not summary_model_id:
            await self._log(
                "[🤖 Async Summary Task] ⚠️ Summary model does not exist, skipping compression",
                log_type="warning",
                event_call=__event_call__,
            )
            return
@@ -1765,7 +1771,7 @@ class Filter:
            excess_tokens = estimated_input_tokens - max_context_tokens
            await self._log(
                f"[🤖 Async Summary Task] ⚠️ Middle messages ({middle_tokens} Tokens) + Buffer exceed summary model limit ({max_context_tokens}), need to remove approx {excess_tokens} Tokens",
                log_type="warning",
                event_call=__event_call__,
            )
@@ -1822,7 +1828,7 @@ class Filter:
        if not new_summary:
            await self._log(
                "[🤖 Async Summary Task] ⚠️ Summary generation returned empty result, skipping save",
                log_type="warning",
                event_call=__event_call__,
            )
            return
@@ -1851,7 +1857,7 @@ class Filter:
        await self._log(
            f"[🤖 Async Summary Task] ✅ Complete! New summary length: {len(new_summary)} characters",
            log_type="success",
            event_call=__event_call__,
        )
        await self._log(
@@ -1957,14 +1963,14 @@ class Filter:
            except Exception as e:
                await self._log(
                    f"[Status] Error calculating tokens: {e}",
                    log_type="error",
                    event_call=__event_call__,
                )

        except Exception as e:
            await self._log(
                f"[🤖 Async Summary Task] ❌ Error: {str(e)}",
                log_type="error",
                event_call=__event_call__,
            )
@@ -2066,7 +2072,7 @@ Based on the content above, generate the summary:
        if not model:
            await self._log(
                "[🤖 LLM Call] ⚠️ Summary model does not exist, skipping summary generation",
                log_type="warning",
                event_call=__event_call__,
            )
            return ""
@@ -2133,7 +2139,7 @@ Based on the content above, generate the summary:
        await self._log(
            f"[🤖 LLM Call] ✅ Successfully received summary",
            log_type="success",
            event_call=__event_call__,
        )
@@ -2154,7 +2160,7 @@ Based on the content above, generate the summary:
        await self._log(
            f"[🤖 LLM Call] ❌ {error_message}",
            log_type="error",
            event_call=__event_call__,
        )
@@ -5,7 +5,7 @@ author: Fu-Jie
author_url: https://github.com/Fu-Jie/awesome-openwebui
funding_url: https://github.com/open-webui
description: 通过智能摘要和消息压缩,降低长对话的 token 消耗,同时保持对话连贯性。
version: 1.2.2
openwebui_id: 5c0617cb-a9e4-4bd6-a440-d276534ebd18
license: MIT
@@ -787,7 +787,7 @@ class Filter:
        except Exception as e:
            print(f"Error emitting debug log: {e}")

    async def _log(self, message: str, log_type: str = "info", event_call=None):
        """统一日志输出到后端 (print) 和前端 (console.log)"""
        # 后端日志
        if self.valves.debug_mode:
@@ -797,11 +797,11 @@ class Filter:
        if self.valves.show_debug_log and event_call:
            try:
                css = "color: #3b82f6;"  # 默认蓝色
                if log_type == "error":
                    css = "color: #ef4444; font-weight: bold;"  # 红色
                elif log_type == "warning":
                    css = "color: #f59e0b;"  # 橙色
                elif log_type == "success":
                    css = "color: #10b981; font-weight: bold;"  # 绿色

                # 清理前端消息:移除分隔符和多余换行
@@ -948,12 +948,17 @@ class Filter:
                # 处理 params 是 JSON 字符串的情况
                if isinstance(params, str):
                    params = json.loads(params)
                # 转换 Pydantic 模型为字典
                elif hasattr(params, "model_dump"):
                    params = params.model_dump()
                elif hasattr(params, "dict"):
                    params = params.dict()

                # 处理字典
                if isinstance(params, dict):
                    system_prompt_content = params.get("system")
                else:
                    # 回退:尝试 getattr
                    system_prompt_content = getattr(params, "system", None)

                if system_prompt_content:
@@ -972,7 +977,7 @@ class Filter:
            if self.valves.show_debug_log and __event_call__:
                await self._log(
                    f"[Inlet] ❌ 解析模型参数失败: {e}",
                    log_type="error",
                    event_call=__event_call__,
                )
@@ -986,7 +991,7 @@ class Filter:
            if self.valves.show_debug_log and __event_call__:
                await self._log(
                    f"[Inlet] ❌ 数据库中未找到模型",
                    log_type="warning",
                    event_call=__event_call__,
                )
@@ -994,7 +999,7 @@ class Filter:
            if self.valves.show_debug_log and __event_call__:
                await self._log(
                    f"[Inlet] ❌ 从数据库获取系统提示词错误: {e}",
                    log_type="error",
                    event_call=__event_call__,
                )
            if self.valves.debug_mode:
@@ -1048,7 +1053,7 @@ class Filter:
        if not chat_id:
            await self._log(
                "[Inlet] ❌ metadata 中缺少 chat_id,跳过压缩",
                log_type="error",
                event_call=__event_call__,
            )
            return body
@@ -1154,7 +1159,7 @@ class Filter:
        if total_tokens > max_context_tokens:
            await self._log(
                f"[Inlet] ⚠️ 候选提示词 ({total_tokens} Tokens) 超过上限 ({max_context_tokens})。正在缩减历史记录...",
                log_type="warning",
                event_call=__event_call__,
            )
@@ -1290,7 +1295,7 @@ class Filter:
        await self._log(
            f"[Inlet] 应用摘要: {system_info} + Head({len(head_messages)} 条, {head_tokens}t) + Summary({summary_tokens}t) + Tail({len(tail_messages)} 条, {tail_tokens}t) = Total({total_section_tokens}t)",
            log_type="success",
            event_call=__event_call__,
        )
@@ -1350,7 +1355,7 @@ class Filter:
        if total_tokens > max_context_tokens:
            await self._log(
                f"[Inlet] ⚠️ 原始消息 ({total_tokens} Tokens) 超过上限 ({max_context_tokens})。正在缩减历史记录...",
                log_type="warning",
                event_call=__event_call__,
            )
@@ -1420,7 +1425,7 @@ class Filter:
        if not chat_id:
            await self._log(
                "[Outlet] ❌ metadata 中缺少 chat_id,跳过压缩",
                log_type="error",
                event_call=__event_call__,
            )
            return body
@@ -1486,7 +1491,7 @@ class Filter:
        if current_tokens >= compression_threshold_tokens:
            await self._log(
                f"[🔍 后台计算] ⚡ 触发压缩阈值 (Token: {current_tokens} >= {compression_threshold_tokens})",
                log_type="warning",
                event_call=__event_call__,
            )
@@ -1509,7 +1514,7 @@ class Filter:
        except Exception as e:
            await self._log(
                f"[🔍 后台计算] ❌ 错误: {str(e)}",
                log_type="error",
                event_call=__event_call__,
            )
@@ -1546,7 +1551,7 @@ class Filter:
            target_compressed_count = max(0, len(messages) - self.valves.keep_last)
            await self._log(
                f"[🤖 异步摘要任务] ⚠️ target_compressed_count 为 None,进行估算: {target_compressed_count}",
                log_type="warning",
                event_call=__event_call__,
            )
@@ -1593,7 +1598,7 @@ class Filter:
        if not summary_model_id:
            await self._log(
                "[🤖 异步摘要任务] ⚠️ 摘要模型不存在,跳过压缩",
                log_type="warning",
                event_call=__event_call__,
            )
            return
@@ -1624,7 +1629,7 @@ class Filter:
            excess_tokens = estimated_input_tokens - max_context_tokens
            await self._log(
                f"[🤖 异步摘要任务] ⚠️ 中间消息 ({middle_tokens} Tokens) + 缓冲超过摘要模型上限 ({max_context_tokens}),需要移除约 {excess_tokens} Token",
                log_type="warning",
                event_call=__event_call__,
            )
@@ -1681,7 +1686,7 @@ class Filter:
        if not new_summary:
            await self._log(
                "[🤖 异步摘要任务] ⚠️ 摘要生成返回空结果,跳过保存",
                log_type="warning",
                event_call=__event_call__,
            )
            return
@@ -1710,7 +1715,7 @@ class Filter:
        await self._log(
            f"[🤖 异步摘要任务] ✅ 完成!新摘要长度: {len(new_summary)} 字符",
            log_type="success",
            event_call=__event_call__,
        )
        await self._log(
@@ -1821,14 +1826,14 @@ class Filter:
            except Exception as e:
                await self._log(
                    f"[Status] 计算 Token 错误: {e}",
                    log_type="error",
                    event_call=__event_call__,
                )

        except Exception as e:
            await self._log(
                f"[🤖 异步摘要任务] ❌ 错误: {str(e)}",
                log_type="error",
                event_call=__event_call__,
            )
@@ -1928,7 +1933,7 @@ class Filter:
        if not model:
            await self._log(
                "[🤖 LLM 调用] ⚠️ 摘要模型不存在,跳过摘要生成",
                log_type="warning",
                event_call=__event_call__,
            )
            return ""
@@ -1995,7 +2000,7 @@ class Filter:
        await self._log(
            f"[🤖 LLM 调用] ✅ 成功接收摘要",
            log_type="success",
            event_call=__event_call__,
        )
@@ -2016,7 +2021,7 @@ class Filter:
        await self._log(
            f"[🤖 LLM 调用] ❌ {error_message}",
            log_type="error",
            event_call=__event_call__,
        )
60
plugins/filters/folder-memory/README.md
Normal file
@@ -0,0 +1,60 @@
# Folder Memory

**Author:** [Fu-Jie](https://github.com/Fu-Jie/awesome-openwebui) | **Version:** 0.1.0 | **Project:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **License:** MIT

---

### 📌 What's new in 0.1.0

- **Initial Release**: Automated "Project Rules" management for OpenWebUI folders.
- **Folder-Level Persistence**: Automatically updates folder system prompts with extracted rules.
- **Optimized Performance**: Runs asynchronously and supports `PRIORITY` configuration for seamless integration with other filters.

---

**Folder Memory** is an intelligent context filter plugin for OpenWebUI. It automatically extracts consistent "Project Rules" from ongoing conversations within a folder and injects them back into the folder's system prompt.

## ✨ Features

- **Automatic Extraction**: Analyzes chat history every N messages to extract project rules.
- **Non-destructive Injection**: Updates only the specific "Project Rules" block in the system prompt, preserving other instructions.
- **Async Processing**: Runs in the background without blocking the user's chat experience.
- **ORM Integration**: Directly updates folder data using OpenWebUI's internal models for reliability.

## ⚠️ Prerequisites

- **Conversations must occur inside a folder.** This plugin only triggers when a chat belongs to a folder (i.e., you need to create a folder in OpenWebUI and start a conversation within it).

## 📦 Installation

1. Copy `folder_memory.py` to your OpenWebUI `plugins/filters/` directory (or upload via the Admin UI).
2. Enable the filter under **Settings** -> **Filters**.
3. (Optional) Configure the trigger threshold (default: every 10 messages).

## ⚙️ Configuration (Valves)

| Valve | Default | Description |
| :--- | :--- | :--- |
| `PRIORITY` | `20` | Priority level for the filter operations. |
| `MESSAGE_TRIGGER_COUNT` | `10` | The number of messages required to trigger a rule analysis. |
| `MODEL_ID` | `""` | The model used to generate rules. If empty, uses the current chat model. |
| `RULES_BLOCK_TITLE` | `## 📂 Project Rules` | The title displayed above the injected rules block. |
| `SHOW_DEBUG_LOG` | `False` | Show detailed debug logs in the browser console. |
| `UPDATE_ROOT_FOLDER` | `False` | If enabled, updates the root folder's rules instead of the current subfolder's. |
|
## 🛠️ How It Works
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
1. **Trigger**: When a conversation reaches `MESSAGE_TRIGGER_COUNT` (e.g., 10, 20 messages).
|
||||||
|
2. **Analysis**: The plugin sends the recent conversation + existing rules to the LLM.
|
||||||
|
3. **Synthesis**: The LLM merges new insights with old rules, removing obsolete ones.
|
||||||
|
4. **Update**: The new rule set replaces the `<!-- OWUI_PROJECT_RULES_START -->` block in the folder's system prompt.
|
||||||
|
|
||||||
|
## ⚠️ Notes
|
||||||
|
|
||||||
|
- This plugin modifies the `system_prompt` of your folders.
|
||||||
|
- It uses a specific marker `<!-- OWUI_PROJECT_RULES_START -->` to locate its content. Do not manually remove these markers if you want the plugin to continue managing that section.
|
||||||
|
|
||||||
|
## 🗺️ Roadmap
|
||||||
|
|
||||||
|
See [ROADMAP.md](./ROADMAP.md) for future plans, including "Project Knowledge" collection.
|
||||||
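The "non-destructive injection" described in the README is a marker-delimited replace. A minimal standalone sketch of the same idea (the function name and sample prompts here are illustrative; the plugin's own implementation lives in `_inject_rules` in `folder_memory.py`):

```python
import re

RULES_BLOCK_START = "<!-- OWUI_PROJECT_RULES_START -->"
RULES_BLOCK_END = "<!-- OWUI_PROJECT_RULES_END -->"


def inject_rules(system_prompt: str, new_rules: str,
                 title: str = "## 📂 Project Rules") -> str:
    """Replace the managed rules block if present, otherwise append it."""
    new_block = f"{RULES_BLOCK_START}\n{title}\n\n{new_rules}\n{RULES_BLOCK_END}"
    pattern = re.compile(
        re.escape(RULES_BLOCK_START) + r"[\s\S]*?" + re.escape(RULES_BLOCK_END)
    )
    system_prompt = system_prompt or ""
    if pattern.search(system_prompt):
        # Only the marker-delimited block is rewritten; everything else survives.
        return pattern.sub(new_block, system_prompt).strip()
    return f"{system_prompt}\n\n{new_block}".strip()


prompt = "You are a helpful assistant."
updated = inject_rules(prompt, "- Always answer in English.")
# A second pass replaces only the managed block; the original instruction stays.
updated = inject_rules(updated, "- Use Python 3.11 type hints.")
```

Because the replace is anchored on both markers, repeated updates never duplicate the block or touch the user's own instructions around it.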
62
plugins/filters/folder-memory/README_CN.md
Normal file
@@ -0,0 +1,62 @@
# 文件夹记忆 (Folder Memory)

**作者:** [Fu-Jie](https://github.com/Fu-Jie/awesome-openwebui) | **版本:** 0.1.0 | **项目:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **许可证:** MIT

---

### 📌 0.1.0 版本特性

- **首个版本发布**:专注于自动化的“项目规则”管理。
- **文件夹级持久化**:自动将提取的规则回写到文件夹系统提示词中。
- **性能优化**:采用异步处理机制,并支持 `PRIORITY` 配置,确保与其他过滤器(如上下文压缩)完美协作。

---

**文件夹记忆 (Folder Memory)** 是一个 OpenWebUI 的智能上下文过滤器插件。它能自动从文件夹内的对话中提取一致性的“项目规则”,并将其回写到文件夹的系统提示词中。

这确保了该文件夹内的所有未来对话都能共享相同的进化上下文和规则,无需手动更新。

## ✨ 功能特性

- **自动提取**:每隔 N 条消息分析一次聊天记录,提取项目规则。
- **无损注入**:仅更新系统提示词中的特定“项目规则”块,保留其他指令。
- **异步处理**:在后台运行,不阻塞用户的聊天体验。
- **ORM 集成**:直接使用 OpenWebUI 的内部模型更新文件夹数据,确保可靠性。

## ⚠️ 前置条件

- **对话必须在文件夹内进行。** 此插件仅在聊天属于某个文件夹时触发(即您需要先在 OpenWebUI 中创建一个文件夹,并在其内部开始对话)。

## 📦 安装指南

1. 将 `folder_memory.py`(或中文版 `folder_memory_cn.py`)复制到 OpenWebUI 的 `plugins/filters/` 目录(或通过管理员 UI 上传)。
2. 在 **设置** -> **过滤器** 中启用该插件。
3. (可选)配置触发阈值(默认:每 10 条消息)。

## ⚙️ 配置 (Valves)

| 参数 | 默认值 | 说明 |
| :--- | :--- | :--- |
| `PRIORITY` | `20` | 过滤器操作的优先级。 |
| `MESSAGE_TRIGGER_COUNT` | `10` | 触发规则分析的消息数量阈值。 |
| `MODEL_ID` | `""` | 用于生成规则的模型 ID。若为空,则使用当前对话模型。 |
| `RULES_BLOCK_TITLE` | `## 📂 项目规则` | 显示在注入规则块上方的标题。 |
| `SHOW_DEBUG_LOG` | `False` | 在浏览器控制台显示详细调试日志。 |
| `UPDATE_ROOT_FOLDER` | `False` | 如果启用,将向上查找并更新根文件夹的规则,而不是当前子文件夹。 |

## 🛠️ 工作原理

![Folder Memory 演示](./folder-memory-demo.png)

1. **触发**:当对话长度达到 `MESSAGE_TRIGGER_COUNT` 的倍数(例如 10、20 条消息)时。
2. **分析**:插件将最近的对话与现有规则一并发送给 LLM。
3. **综合**:LLM 将新见解与旧规则合并,移除过时的规则。
4. **更新**:新的规则集替换文件夹系统提示词中的 `<!-- OWUI_PROJECT_RULES_START -->` 块。

## ⚠️ 注意事项

- 此插件会修改文件夹的 `system_prompt`。
- 它使用特定标记 `<!-- OWUI_PROJECT_RULES_START -->` 来定位内容。如果您希望插件继续管理该部分,请勿手动删除这些标记。

## 🗺️ 路线图

查看 [ROADMAP.md](./ROADMAP.md) 了解未来计划,包括“项目知识”收集功能。
10
plugins/filters/folder-memory/ROADMAP.md
Normal file
@@ -0,0 +1,10 @@
# Roadmap

## Future Features

### 🧠 Project Knowledge (Planned)

In future versions, we plan to introduce "Project Knowledge" collection. Unlike "Rules", which are strict instructions, "Knowledge" will capture reusable information, consensus, and context that helps the LLM understand the project better.

- **Knowledge Extraction**: Automatically extract reusable knowledge (terminology, style guides, business logic) from conversations.
- **Long-term Memory**: Use the entire folder's chat history as a corpus for knowledge generation.
- **Context Injection**: Inject summarized knowledge into the system prompt alongside rules.
BIN
plugins/filters/folder-memory/folder-memory-demo.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 459 KiB
483
plugins/filters/folder-memory/folder_memory.py
Normal file
@@ -0,0 +1,483 @@
"""
title: 📂 Folder Memory
author: Fu-Jie
author_url: https://github.com/Fu-Jie/awesome-openwebui
funding_url: https://github.com/open-webui
version: 0.1.0
description: Automatically extracts project rules from conversations and injects them into the folder's system prompt.
requirements:
"""

from pydantic import BaseModel, Field
from typing import Optional, Dict, List
from fastapi import Request
import logging
import json
import re
import asyncio
from datetime import datetime

from open_webui.utils.chat import generate_chat_completion
from open_webui.models.users import Users
from open_webui.models.folders import Folders, FolderUpdateForm
from open_webui.models.chats import Chats

logging.basicConfig(
    level=logging.INFO, format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
)
logger = logging.getLogger(__name__)

# Markers for rule injection
RULES_BLOCK_START = "<!-- OWUI_PROJECT_RULES_START -->"
RULES_BLOCK_END = "<!-- OWUI_PROJECT_RULES_END -->"

# System prompt for rule generation
SYSTEM_PROMPT_RULE_GENERATOR = """
You are a project rule extractor. Your task is to extract "Project Rules" from the conversation and merge them with existing rules.

### Input
1. **Existing Rules**: Current rules in the folder system prompt.
2. **Conversation**: Recent chat history.

### Goal
Synthesize a concise list of rules that apply to this project/folder.
- **Remove** rules that are no longer relevant or were one-off instructions.
- **Add** new consistent requirements found in the conversation.
- **Merge** similar rules.
- **Format**: Concise bullet points (Markdown).

### Output Format
ONLY output the rules list as Markdown bullet points. Do not include any intro/outro text.
Example:
- Always use Python 3.11 for type hinting.
- Docstrings must follow Google style.
- Commit messages should be in English.
"""


class Filter:
    class Valves(BaseModel):
        PRIORITY: int = Field(
            default=20, description="Priority level for the filter operations."
        )
        SHOW_DEBUG_LOG: bool = Field(
            default=False, description="Show debug logs in console."
        )
        MESSAGE_TRIGGER_COUNT: int = Field(
            default=10, description="Analyze rules after every N messages in a chat."
        )
        MODEL_ID: str = Field(
            default="",
            description="Model used for rule extraction. If empty, uses the current chat model.",
        )
        RULES_BLOCK_TITLE: str = Field(
            default="## 📂 Project Rules",
            description="Title displayed above the rules block.",
        )
        UPDATE_ROOT_FOLDER: bool = Field(
            default=False,
            description="If enabled, finds and updates the root folder rules instead of the current subfolder.",
        )

    def __init__(self):
        self.valves = self.Valves()

    # ==================== Helper Methods ====================

    def _get_user_context(self, __user__: Optional[dict]) -> Dict[str, str]:
        """Safely extracts user context information."""
        if isinstance(__user__, (list, tuple)):
            user_data = __user__[0] if __user__ else {}
        elif isinstance(__user__, dict):
            user_data = __user__
        else:
            user_data = {}

        return {
            "user_id": user_data.get("id", ""),
            "user_name": user_data.get("name", "User"),
            "user_language": user_data.get("language", "en-US"),
        }

    def _get_chat_context(
        self, body: dict, __metadata__: Optional[dict] = None
    ) -> Dict[str, str]:
        """Unified extraction of chat context information (chat_id, message_id)."""
        chat_id = ""
        message_id = ""

        if isinstance(body, dict):
            chat_id = body.get("chat_id", "")
            message_id = body.get("id", "")

            if not chat_id or not message_id:
                body_metadata = body.get("metadata", {})
                if isinstance(body_metadata, dict):
                    if not chat_id:
                        chat_id = body_metadata.get("chat_id", "")
                    if not message_id:
                        message_id = body_metadata.get("message_id", "")

        if __metadata__ and isinstance(__metadata__, dict):
            if not chat_id:
                chat_id = __metadata__.get("chat_id", "")
            if not message_id:
                message_id = __metadata__.get("message_id", "")

        return {
            "chat_id": str(chat_id).strip(),
            "message_id": str(message_id).strip(),
        }

    async def _emit_debug_log(self, __event_emitter__, title: str, data: dict):
        if self.valves.SHOW_DEBUG_LOG and __event_emitter__:
            try:
                # Flat log format as requested
                js_code = f"""
console.log("[Folder Memory] {title}", {json.dumps(data, ensure_ascii=False)});
"""
                await __event_emitter__({"type": "execute", "data": {"code": js_code}})
            except Exception as e:
                logger.error(f"Error emitting log: {e}")

    async def _emit_status(
        self, __event_emitter__, description: str, done: bool = False
    ):
        if __event_emitter__:
            await __event_emitter__(
                {"type": "status", "data": {"description": description, "done": done}}
            )

    def _get_folder_id(self, body: dict) -> Optional[str]:
        # 1. Try retrieving folder_id specifically from metadata
        if "metadata" in body and isinstance(body["metadata"], dict):
            if "folder_id" in body["metadata"]:
                return body["metadata"]["folder_id"]

        # 2. Check regular body chat object if available
        if "chat" in body and isinstance(body["chat"], dict):
            if "folder_id" in body["chat"]:
                return body["chat"]["folder_id"]

        # 3. Fall back to a lookup via chat ID (most reliable)
        chat_id = body.get("chat_id")
        if not chat_id:
            if "metadata" in body and isinstance(body["metadata"], dict):
                chat_id = body["metadata"].get("chat_id")

        if chat_id:
            try:
                chat = Chats.get_chat_by_id(chat_id)
                if chat and chat.folder_id:
                    return chat.folder_id
            except Exception as e:
                logger.error(f"Failed to fetch chat {chat_id}: {e}")

        return None

    def _extract_existing_rules(self, system_prompt: str) -> str:
        pattern = re.compile(
            re.escape(RULES_BLOCK_START) + r"([\s\S]*?)" + re.escape(RULES_BLOCK_END)
        )
        match = pattern.search(system_prompt)
        if match:
            # Remove the title if it's inside the block
            content = match.group(1).strip()
            # Simple cleanup of the title if the user formatted it inside
            title_pat = re.compile(r"^#+\s+.*$", re.MULTILINE)
            return title_pat.sub("", content).strip()
        return ""

    def _inject_rules(self, system_prompt: str, new_rules: str, title: str) -> str:
        new_block_content = f"\n{title}\n\n{new_rules}\n"
        new_block = f"{RULES_BLOCK_START}{new_block_content}{RULES_BLOCK_END}"

        system_prompt = system_prompt or ""
        pattern = re.compile(
            re.escape(RULES_BLOCK_START) + r"[\s\S]*?" + re.escape(RULES_BLOCK_END)
        )

        if pattern.search(system_prompt):
            return pattern.sub(new_block, system_prompt).strip()
        else:
            # Append if not found
            if system_prompt:
                return f"{system_prompt}\n\n{new_block}"
            else:
                return new_block

    async def _generate_new_rules(
        self,
        current_rules: str,
        messages: List[Dict],
        user_id: str,
        __request__: Request,
    ) -> str:
        # Prepare context
        conversation_text = "\n".join(
            [
                f"{msg['role'].upper()}: {msg['content']}"
                for msg in messages[-20:]  # Analyze the last 20 messages of context
            ]
        )

        prompt = f"""
Existing Rules:
{current_rules if current_rules else "None"}

Conversation Excerpt:
{conversation_text}

Please output the updated Project Rules:
"""

        payload = {
            "model": self.valves.MODEL_ID,
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT_RULE_GENERATOR},
                {"role": "user", "content": prompt},
            ],
            "stream": False,
        }

        try:
            # We need a user object for permission checks in generate_chat_completion
            user = Users.get_user_by_id(user_id)
            if not user:
                return current_rules

            completion = await generate_chat_completion(__request__, payload, user)
            if "choices" in completion and len(completion["choices"]) > 0:
                content = completion["choices"][0]["message"]["content"].strip()
                # Basic validation: ensure it looks like a list
                if (
                    content.startswith("-")
                    or content.startswith("*")
                    or content.startswith("1.")
                ):
                    return content
        except Exception as e:
            logger.error(f"Rule generation failed: {e}")

        return current_rules

    async def _process_rules_update(
        self,
        folder_id: str,
        body: dict,
        user_id: str,
        __request__: Request,
        __event_emitter__,
    ):
        try:
            await self._emit_debug_log(
                __event_emitter__,
                "Start Processing",
                {"step": "start", "initial_folder_id": folder_id, "user_id": user_id},
            )

            # 1. Fetch folder data (ORM)
            initial_folder = Folders.get_folder_by_id_and_user_id(folder_id, user_id)
            if not initial_folder:
                await self._emit_debug_log(
                    __event_emitter__,
                    "Error: Initial folder not found",
                    {
                        "step": "fetch_initial_folder",
                        "initial_folder_id": folder_id,
                        "user_id": user_id,
                    },
                )
                return

            # Subfolder handling logic
            target_folder = initial_folder
            if self.valves.UPDATE_ROOT_FOLDER:
                # Traverse up until a folder with no parent_id is found
                while target_folder and getattr(target_folder, "parent_id", None):
                    try:
                        parent = Folders.get_folder_by_id_and_user_id(
                            target_folder.parent_id, user_id
                        )
                        if parent:
                            target_folder = parent
                        else:
                            break
                    except Exception as e:
                        await self._emit_debug_log(
                            __event_emitter__,
                            "Warning: Failed to traverse parent folder",
                            {"step": "traverse_root", "error": str(e)},
                        )
                        break

            target_folder_id = target_folder.id

            await self._emit_debug_log(
                __event_emitter__,
                "Target Folder Resolved",
                {
                    "step": "target_resolved",
                    "target_folder_id": target_folder_id,
                    "target_folder_name": target_folder.name,
                    "is_root_update": target_folder_id != folder_id,
                },
            )

            existing_data = target_folder.data if target_folder.data else {}
            existing_sys_prompt = existing_data.get("system_prompt", "")

            # 2. Extract existing rules
            current_rules_content = self._extract_existing_rules(existing_sys_prompt)

            # 3. Generate new rules
            await self._emit_status(
                __event_emitter__, "Analyzing project rules...", done=False
            )

            messages = body.get("messages", [])
            new_rules_content = await self._generate_new_rules(
                current_rules_content, messages, user_id, __request__
            )

            rules_changed = new_rules_content != current_rules_content

            # 4. If nothing changed, skip
            if not rules_changed:
                await self._emit_debug_log(
                    __event_emitter__,
                    "No Changes",
                    {
                        "step": "check_changes",
                        "reason": "content_identical_or_generation_failed",
                    },
                )
                await self._emit_status(
                    __event_emitter__,
                    "Rule analysis complete: No new content.",
                    done=True,
                )
                return

            # 5. Inject rules into the system prompt
            updated_sys_prompt = existing_sys_prompt
            if rules_changed:
                updated_sys_prompt = self._inject_rules(
                    updated_sys_prompt,
                    new_rules_content,
                    self.valves.RULES_BLOCK_TITLE,
                )

            await self._emit_debug_log(
                __event_emitter__,
                "Ready to Update DB",
                {"step": "pre_db_update", "target_folder_id": target_folder_id},
            )

            # 6. Update the folder (ORM) - only the 'data' field
            existing_data["system_prompt"] = updated_sys_prompt

            updated_folder = Folders.update_folder_by_id_and_user_id(
                target_folder_id,
                user_id,
                FolderUpdateForm(data=existing_data),
            )

            if not updated_folder:
                raise Exception("Update folder failed (ORM returned None)")

            await self._emit_status(
                __event_emitter__, "Rule analysis complete: Rules updated.", done=True
            )
            await self._emit_debug_log(
                __event_emitter__,
                "Rule Generation Process & Change Details",
                {
                    "step": "success",
                    "folder_id": target_folder_id,
                    "target_is_root": target_folder_id != folder_id,
                    "model_used": self.valves.MODEL_ID,
                    "analyzed_messages_count": len(messages),
                    "old_rules_length": len(current_rules_content),
                    "new_rules_length": len(new_rules_content),
                    "changes_digest": {
                        "old_rules_preview": (
                            current_rules_content[:100] + "..."
                            if current_rules_content
                            else "None"
                        ),
                        "new_rules_preview": (
                            new_rules_content[:100] + "..."
                            if new_rules_content
                            else "None"
                        ),
                    },
                    "timestamp": datetime.now().isoformat(),
                },
            )

        except Exception as e:
            logger.error(f"Async rule processing error: {e}")
            await self._emit_status(
                __event_emitter__, "Failed to update rules.", done=True
            )
            # Emit error to console for debugging
            await self._emit_debug_log(
                __event_emitter__,
                "Execution Error",
                {"error": str(e), "folder_id": folder_id},
            )

    # ==================== Filter Hooks ====================

    async def inlet(
        self, body: dict, __user__: Optional[dict] = None, __event_emitter__=None
    ) -> dict:
        return body

    async def outlet(
        self,
        body: dict,
        __user__: Optional[dict] = None,
        __event_emitter__=None,
        __request__: Optional[Request] = None,
    ) -> dict:
        user_ctx = self._get_user_context(__user__)
        chat_ctx = self._get_chat_context(body)

        messages = body.get("messages", [])
        if not messages:
            return body

        # Trigger logic: message-count threshold
        if len(messages) % self.valves.MESSAGE_TRIGGER_COUNT != 0:
            return body

        folder_id = self._get_folder_id(body)
        if not folder_id:
            await self._emit_debug_log(
                __event_emitter__,
                "Skipping Analysis",
                {
                    "reason": "Chat does not belong to any folder",
                    "chat_id": chat_ctx.get("chat_id"),
                },
            )
            return body

        # User info
        user_id = user_ctx.get("user_id")
        if not user_id:
            return body

        # Resolve the model, then run the update as a background task
        if self.valves.MODEL_ID == "":
            self.valves.MODEL_ID = body.get("model", "")

        asyncio.create_task(
            self._process_rules_update(
                folder_id, body, user_id, __request__, __event_emitter__
            )
        )

        return body
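The `outlet` hook above gates rule analysis on the conversation length: it runs only when the message count is an exact multiple of `MESSAGE_TRIGGER_COUNT`. A minimal illustration of that gate in isolation (variable names here are illustrative, not part of the plugin):

```python
MESSAGE_TRIGGER_COUNT = 10  # same default as the valve


def should_trigger(message_count: int) -> bool:
    """Mirrors the outlet gate: analyze when the count hits a multiple of N."""
    return message_count > 0 and message_count % MESSAGE_TRIGGER_COUNT == 0


# Analysis runs at 10, 20, and 30 messages; every other turn passes through.
fired = [n for n in range(1, 31) if should_trigger(n)]
```

This keeps the LLM call rare and predictable: lowering the valve makes the rules refresh more often at the cost of extra background completions.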
470
plugins/filters/folder-memory/folder_memory_cn.py
Normal file
470
plugins/filters/folder-memory/folder_memory_cn.py
Normal file
@@ -0,0 +1,470 @@
|
|||||||
|
"""
|
||||||
|
title: 📂 文件夹记忆 (Folder Memory)
|
||||||
|
author: Fu-Jie
|
||||||
|
author_url: https://github.com/Fu-Jie/awesome-openwebui
|
||||||
|
funding_url: https://github.com/open-webui
|
||||||
|
version: 0.1.0
|
||||||
|
description: 自动从对话中提取项目规则,并将其注入到文件夹的系统提示词中。
|
||||||
|
requirements:
|
||||||
|
"""
|
||||||
|
|
||||||
|
from pydantic import BaseModel, Field
|
||||||
|
from typing import Optional, Dict, List
|
||||||
|
from fastapi import Request
|
||||||
|
import logging
|
||||||
|
import json
|
||||||
|
import re
|
||||||
|
import asyncio
|
||||||
|
from datetime import datetime
|
||||||
|
|
||||||
|
from open_webui.utils.chat import generate_chat_completion
|
||||||
|
from open_webui.models.users import Users
|
||||||
|
from open_webui.models.folders import Folders, FolderUpdateForm
|
||||||
|
from open_webui.models.chats import Chats
|
||||||
|
|
||||||
|
logging.basicConfig(
|
||||||
|
level=logging.INFO, format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
|
||||||
|
)
|
||||||
|
logger = logging.getLogger(__name__)
|
||||||
|
|
||||||
|
# 规则注入标记
|
||||||
|
RULES_BLOCK_START = "<!-- OWUI_PROJECT_RULES_START -->"
|
||||||
|
RULES_BLOCK_END = "<!-- OWUI_PROJECT_RULES_END -->"
|
||||||
|
|
||||||
|
# 规则生成系统提示词
|
||||||
|
SYSTEM_PROMPT_RULE_GENERATOR = """
|
||||||
|
你是一个项目规则提取器。你的任务是从对话中提取“项目规则”,并与现有规则合并。
|
||||||
|
|
||||||
|
### 输入
|
||||||
|
1. **现有规则 (Existing Rules)**:当前文件夹系统提示词中的规则。
|
||||||
|
2. **对话片段 (Conversation)**:最近的聊天记录。
|
||||||
|
|
||||||
|
### 目标
|
||||||
|
综合生成一份适用于当前项目/文件夹的简洁规则列表。
|
||||||
|
- **移除** 不再相关或仅是一次性指令的规则。
|
||||||
|
- **添加** 对话中发现的新的、一致性的要求。
|
||||||
|
- **合并** 相似的规则。
|
||||||
|
- **格式**:简洁的 Markdown 项目符号列表。
|
||||||
|
|
||||||
|
### 输出格式
|
||||||
|
仅输出 Markdown 项目符号列表形式的规则。不要包含任何开头或结尾的说明文字。
|
||||||
|
示例:
|
||||||
|
- 始终使用 Python 3.11 进行类型提示。
|
||||||
|
- 文档字符串必须遵循 Google 风格。
|
||||||
|
- 提交信息必须使用英文。
|
||||||
|
"""
|
||||||
|
|
||||||
|
|
||||||
|
class Filter:
|
||||||
|
class Valves(BaseModel):
|
||||||
|
PRIORITY: int = Field(default=20, description="过滤器操作的优先级。")
|
||||||
|
SHOW_DEBUG_LOG: bool = Field(
|
||||||
|
default=False, description="在控制台显示调试日志。"
|
||||||
|
)
|
||||||
|
MESSAGE_TRIGGER_COUNT: int = Field(
|
||||||
|
default=10, description="每隔 N 条消息分析一次规则。"
|
||||||
|
)
|
||||||
|
MODEL_ID: str = Field(
|
||||||
|
default="", description="用于提取规则的模型 ID。为空则使用当前对话模型。"
|
||||||
|
)
|
||||||
|
RULES_BLOCK_TITLE: str = Field(
|
||||||
|
default="## 📂 项目规则", description="显示在规则块上方的标题。"
|
||||||
|
)
|
||||||
|
UPDATE_ROOT_FOLDER: bool = Field(
|
||||||
|
default=False,
|
||||||
|
description="如果启用,将向上查找并更新根文件夹的规则,而不是当前子文件夹。",
|
||||||
|
)
|
||||||
|
|
||||||
|
def __init__(self):
|
||||||
|
self.valves = self.Valves()
|
||||||
|
|
||||||
|
# ==================== 辅助方法 ====================
|
||||||
|
|
||||||
|
def _get_user_context(self, __user__: Optional[dict]) -> Dict[str, str]:
|
||||||
|
"""安全提取用户上下文信息。"""
|
||||||
|
if isinstance(__user__, (list, tuple)):
|
||||||
|
user_data = __user__[0] if __user__ else {}
|
||||||
|
elif isinstance(__user__, dict):
|
||||||
|
user_data = __user__
|
||||||
|
else:
|
||||||
|
user_data = {}
|
||||||
|
|
||||||
|
return {
|
||||||
|
"user_id": user_data.get("id", ""),
|
||||||
|
"user_name": user_data.get("name", "User"),
|
||||||
|
"user_language": user_data.get("language", "zh-CN"),
|
||||||
|
}
|
||||||
|
|
||||||
|
def _get_chat_context(
|
||||||
|
self, body: dict, __metadata__: Optional[dict] = None
|
||||||
|
) -> Dict[str, str]:
|
||||||
|
"""统一提取聊天上下文信息 (chat_id, message_id)。"""
|
||||||
|
chat_id = ""
|
||||||
|
message_id = ""
|
||||||
|
|
||||||
|
if isinstance(body, dict):
|
||||||
|
chat_id = body.get("chat_id", "")
|
||||||
|
message_id = body.get("id", "")
|
||||||
|
|
||||||
|
if not chat_id or not message_id:
|
||||||
|
body_metadata = body.get("metadata", {})
|
||||||
|
if isinstance(body_metadata, dict):
|
||||||
|
if not chat_id:
|
||||||
|
chat_id = body_metadata.get("chat_id", "")
|
||||||
|
if not message_id:
|
||||||
|
message_id = body_metadata.get("message_id", "")
|
||||||
|
|
||||||
|
if __metadata__ and isinstance(__metadata__, dict):
|
||||||
|
if not chat_id:
|
||||||
|
chat_id = __metadata__.get("chat_id", "")
|
||||||
|
if not message_id:
|
||||||
|
message_id = __metadata__.get("message_id", "")
|
||||||
|
|
||||||
|
return {
|
||||||
|
"chat_id": str(chat_id).strip(),
|
||||||
|
"message_id": str(message_id).strip(),
|
||||||
|
}
|
||||||
|
|
||||||
|
async def _emit_debug_log(self, __event_emitter__, title: str, data: dict):
|
||||||
|
if self.valves.SHOW_DEBUG_LOG and __event_emitter__:
|
||||||
|
try:
|
||||||
|
# 按照用户要求的格式输出展平的日志
|
||||||
|
js_code = f"""
|
||||||
|
console.log("[Folder Memory] {title}", {json.dumps(data, ensure_ascii=False)});
|
||||||
|
"""
|
||||||
|
await __event_emitter__({"type": "execute", "data": {"code": js_code}})
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"发出日志错误: {e}")
|
||||||
|
|
||||||
|
async def _emit_status(
|
||||||
|
self, __event_emitter__, description: str, done: bool = False
|
||||||
|
):
|
||||||
|
if __event_emitter__:
|
||||||
|
await __event_emitter__(
|
||||||
|
{"type": "status", "data": {"description": description, "done": done}}
|
||||||
|
)
|
||||||
|
|
||||||
|
def _get_folder_id(self, body: dict) -> Optional[str]:
|
||||||
|
# 1. 尝试从 metadata 获取 folder_id
|
||||||
|
if "metadata" in body and isinstance(body["metadata"], dict):
|
||||||
|
if "folder_id" in body["metadata"]:
|
||||||
|
return body["metadata"]["folder_id"]
|
||||||
|
|
||||||
|
# 2. 检查 chat 对象
|
||||||
|
if "chat" in body and isinstance(body["chat"], dict):
|
||||||
|
if "folder_id" in body["chat"]:
|
||||||
|
return body["chat"]["folder_id"]
|
||||||
|
|
||||||
|
# 3. 尝试通过 Chat ID 查找 (最可靠的方法)
|
||||||
|
chat_id = body.get("chat_id")
|
||||||
|
if not chat_id:
|
||||||
|
if "metadata" in body and isinstance(body["metadata"], dict):
|
||||||
|
chat_id = body["metadata"].get("chat_id")
|
||||||
|
|
||||||
|
if chat_id:
|
||||||
|
try:
|
||||||
|
chat = Chats.get_chat_by_id(chat_id)
|
||||||
|
if chat and chat.folder_id:
|
||||||
|
return chat.folder_id
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"获取聊天信息失败 chat_id={chat_id}: {e}")
|
||||||
|
|
||||||
|
return None
|
||||||
|
|
||||||
|
def _extract_existing_rules(self, system_prompt: str) -> str:
|
||||||
|
pattern = re.compile(
|
||||||
|
re.escape(RULES_BLOCK_START) + r"([\s\S]*?)" + re.escape(RULES_BLOCK_END)
|
||||||
|
)
|
||||||
|
match = pattern.search(system_prompt)
|
||||||
|
if match:
|
||||||
|
# 如果标题在块内,将其移除以便纯净合并
|
||||||
|
content = match.group(1).strip()
|
||||||
|
title_pat = re.compile(r"^#+\s+.*$", re.MULTILINE)
|
||||||
|
return title_pat.sub("", content).strip()
|
||||||
|
return ""
|
||||||
|
|
||||||
|
def _inject_rules(self, system_prompt: str, new_rules: str, title: str) -> str:
|
||||||
|
new_block_content = f"\n{title}\n\n{new_rules}\n"
|
||||||
|
new_block = f"{RULES_BLOCK_START}{new_block_content}{RULES_BLOCK_END}"
|
||||||
|
|
||||||
|
system_prompt = system_prompt or ""
|
||||||
|
pattern = re.compile(
|
||||||
|
re.escape(RULES_BLOCK_START) + r"[\s\S]*?" + re.escape(RULES_BLOCK_END)
|
||||||
|
)
|
||||||
|
|
||||||
|
if pattern.search(system_prompt):
|
||||||
|
# 替换现有块
|
||||||
|
return pattern.sub(new_block, system_prompt).strip()
|
||||||
|
else:
|
||||||
|
# 追加到末尾
|
||||||
|
if system_prompt:
|
||||||
|
return f"{system_prompt}\n\n{new_block}"
|
||||||
|
else:
|
||||||
|
return new_block
|
||||||
|
|
    async def _generate_new_rules(
        self,
        current_rules: str,
        messages: List[Dict],
        user_id: str,
        __request__: Request,
    ) -> str:
        # Prepare the context
        conversation_text = "\n".join(
            [
                f"{msg['role'].upper()}: {msg['content']}"
                for msg in messages[-20:]  # Analyze the 20 most recent messages
            ]
        )

        prompt = f"""
Existing Rules:
{current_rules if current_rules else "None"}

Conversation Excerpt:
{conversation_text}

Please output the updated Project Rules:
"""

        payload = {
            "model": self.valves.MODEL_ID,
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT_RULE_GENERATOR},
                {"role": "user", "content": prompt},
            ],
            "stream": False,
        }

        try:
            # A user object is required for the permission check
            user = Users.get_user_by_id(user_id)
            if not user:
                return current_rules

            completion = await generate_chat_completion(__request__, payload, user)
            if "choices" in completion and len(completion["choices"]) > 0:
                content = completion["choices"][0]["message"]["content"].strip()
                # Sanity check: make sure the output looks like a list
                if (
                    content.startswith("-")
                    or content.startswith("*")
                    or content.startswith("1.")
                ):
                    return content
        except Exception as e:
            logger.error(f"Rule generation failed: {e}")

        return current_rules

    async def _process_rules_update(
        self,
        folder_id: str,
        body: dict,
        user_id: str,
        __request__: Request,
        __event_emitter__,
    ):
        try:
            await self._emit_debug_log(
                __event_emitter__,
                "Processing started",
                {"step": "start", "initial_folder_id": folder_id, "user_id": user_id},
            )

            # 1. Fetch the folder data (ORM)
            initial_folder = Folders.get_folder_by_id_and_user_id(folder_id, user_id)
            if not initial_folder:
                await self._emit_debug_log(
                    __event_emitter__,
                    "Error: initial folder not found",
                    {
                        "step": "fetch_initial_folder",
                        "initial_folder_id": folder_id,
                        "user_id": user_id,
                    },
                )
                return

            # Subfolder handling: decide whether to update the current folder or the root
            target_folder = initial_folder
            if self.valves.UPDATE_ROOT_FOLDER:
                # Walk upwards until we reach the root folder (no parent_id)
                while target_folder and getattr(target_folder, "parent_id", None):
                    try:
                        parent = Folders.get_folder_by_id_and_user_id(
                            target_folder.parent_id, user_id
                        )
                        if parent:
                            target_folder = parent
                        else:
                            break
                    except Exception as e:
                        await self._emit_debug_log(
                            __event_emitter__,
                            "Warning: failed to look up parent folder",
                            {"step": "traverse_root", "error": str(e)},
                        )
                        break

            target_folder_id = target_folder.id

            await self._emit_debug_log(
                __event_emitter__,
                "Target folder resolved",
                {
                    "step": "target_resolved",
                    "target_folder_id": target_folder_id,
                    "target_folder_name": target_folder.name,
                    "is_root_update": target_folder_id != folder_id,
                },
            )

            existing_data = target_folder.data if target_folder.data else {}
            existing_sys_prompt = existing_data.get("system_prompt", "")

            # 2. Extract the existing rules
            current_rules_content = self._extract_existing_rules(existing_sys_prompt)

            # 3. Generate new rules
            await self._emit_status(
                __event_emitter__, "Analyzing project rules...", done=False
            )

            messages = body.get("messages", [])
            new_rules_content = await self._generate_new_rules(
                current_rules_content, messages, user_id, __request__
            )

            rules_changed = new_rules_content != current_rules_content

            # If generation produced no changes
            if not rules_changed:
                await self._emit_debug_log(
                    __event_emitter__,
                    "No changes",
                    {
                        "step": "check_changes",
                        "reason": "content_identical_or_generation_failed",
                    },
                )
                await self._emit_status(
                    __event_emitter__, "Rule analysis complete: nothing new.", done=True
                )
                return

            # 5. Inject the rules into the system prompt
            updated_sys_prompt = existing_sys_prompt
            if rules_changed:
                updated_sys_prompt = self._inject_rules(
                    updated_sys_prompt,
                    new_rules_content,
                    self.valves.RULES_BLOCK_TITLE,
                )

            await self._emit_debug_log(
                __event_emitter__,
                "Preparing database update",
                {"step": "pre_db_update", "target_folder_id": target_folder_id},
            )

            # 6. Update the folder (ORM) - only the 'data' field
            existing_data["system_prompt"] = updated_sys_prompt

            updated_folder = Folders.update_folder_by_id_and_user_id(
                target_folder_id,
                user_id,
                FolderUpdateForm(data=existing_data),
            )

            if not updated_folder:
                raise Exception("Update folder failed (ORM returned None)")

            await self._emit_status(
                __event_emitter__, "Rule analysis complete: rules updated.", done=True
            )
            await self._emit_debug_log(
                __event_emitter__,
                "Rule generation process and change details",
                {
                    "step": "success",
                    "folder_id": target_folder_id,
                    "target_is_root": target_folder_id != folder_id,
                    "model_used": self.valves.MODEL_ID,
                    "analyzed_messages_count": len(messages),
                    "old_rules_length": len(current_rules_content),
                    "new_rules_length": len(new_rules_content),
                    "changes_digest": {
                        "old_rules_preview": (
                            current_rules_content[:100] + "..."
                            if current_rules_content
                            else "None"
                        ),
                        "new_rules_preview": (
                            new_rules_content[:100] + "..."
                            if new_rules_content
                            else "None"
                        ),
                    },
                    "timestamp": datetime.now().isoformat(),
                },
            )

        except Exception as e:
            logger.error(f"Async rule processing error: {e}")
            await self._emit_status(__event_emitter__, "Failed to update rules.", done=True)
            # Also emit the error details for easier debugging
            await self._emit_debug_log(
                __event_emitter__, "Execution error", {"error": str(e), "folder_id": folder_id}
            )

    # ==================== Filter Hooks ====================

    async def inlet(
        self, body: dict, __user__: Optional[dict] = None, __event_emitter__=None
    ) -> dict:
        return body

    async def outlet(
        self,
        body: dict,
        __user__: Optional[dict] = None,
        __event_emitter__=None,
        __request__: Optional[Request] = None,
    ) -> dict:
        user_ctx = self._get_user_context(__user__)
        chat_ctx = self._get_chat_context(body)

        messages = body.get("messages", [])
        if not messages:
            return body

        # Trigger logic: message-count threshold
        if len(messages) % self.valves.MESSAGE_TRIGGER_COUNT != 0:
            return body

        folder_id = self._get_folder_id(body)
        if not folder_id:
            await self._emit_debug_log(
                __event_emitter__,
                "Skipping analysis",
                {
                    "reason": "chat does not belong to any folder",
                    "chat_id": chat_ctx.get("chat_id"),
                },
            )
            return body

        # User info
        user_id = user_ctx.get("user_id")
        if not user_id:
            return body

        # Async task
        if self.valves.MODEL_ID == "":
            self.valves.MODEL_ID = body.get("model", "")

        asyncio.create_task(
            self._process_rules_update(
                folder_id, body, user_id, __request__, __event_emitter__
            )
        )

        return body
81  plugins/pipes/github-copilot-sdk/README.md  Normal file
@@ -0,0 +1,81 @@
# GitHub Copilot SDK Pipe for OpenWebUI

**Author:** [Fu-Jie](https://github.com/Fu-Jie/awesome-openwebui) | **Version:** 0.1.1 | **Project:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **License:** MIT

This is an advanced Pipe function for [OpenWebUI](https://github.com/open-webui/open-webui) that lets you use GitHub Copilot models (such as `gpt-5`, `gpt-5-mini`, `claude-sonnet-4.5`) directly within OpenWebUI. It is built on the official [GitHub Copilot SDK for Python](https://github.com/github/copilot-sdk), providing a native integration experience.

## 🚀 What's New (v0.1.1)

* **♾️ Infinite Sessions**: Automatic context compaction for long-running conversations. No more context-limit errors!
* **🧠 Thinking Process**: Real-time display of the model's reasoning/thinking process (for supported models).
* **📂 Workspace Control**: Restricted workspace directory for secure file operations.
* **🔍 Model Filtering**: Exclude specific models using keywords (e.g., `codex`, `haiku`).
* **💾 Session Persistence**: Improved session-resume logic using OpenWebUI chat-ID mapping.

## ✨ Core Features

* **🚀 Official SDK Integration**: Built on the official SDK for stability and reliability.
* **💬 Multi-turn Conversation**: Automatically concatenates history context so Copilot understands your previous messages.
* **🌊 Streaming Output**: Supports a typewriter effect for fast responses.
* **🖼️ Multimodal Support**: Supports image uploads, automatically converting them to attachments for Copilot (requires model support).
* **🛠️ Zero-config Installation**: Automatically detects and downloads the GitHub Copilot CLI, ready to use out of the box.
* **🔑 Secure Authentication**: Supports fine-grained Personal Access Tokens for minimized permissions.
* **🐛 Debug Mode**: Built-in detailed log output for easy connection troubleshooting.
* **⚠️ Single Node Only**: Because sessions are stored locally, this plugin currently supports single-node OpenWebUI deployments, or multi-node deployments with sticky sessions enabled.
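The model filtering described above amounts to a case-insensitive substring match of each comma-separated keyword against the model ID. A simplified sketch of that logic (the model IDs below are illustrative):

```python
def filter_models(model_ids: list[str], exclude_keywords: str) -> list[str]:
    """Drop any model whose ID contains one of the comma-separated keywords."""
    excludes = [k.strip().lower() for k in exclude_keywords.split(",") if k.strip()]
    return [m for m in model_ids if not any(kw in m.lower() for kw in excludes)]


models = ["gpt-5", "gpt-5-codex", "claude-haiku-4.5", "claude-sonnet-4.5"]
print(filter_models(models, "codex, haiku"))  # → ['gpt-5', 'claude-sonnet-4.5']
```

An empty `EXCLUDE_KEYWORDS` string leaves the list untouched, which is why the valve defaults to empty.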
## 📦 Installation & Usage

### 1. Import Function

1. Open OpenWebUI.
2. Go to **Workspace** -> **Functions**.
3. Click **+** (Create Function).
4. Paste the content of `github_copilot_sdk.py` (or `github_copilot_sdk_cn.py` for Chinese) in full.
5. Save.

### 2. Configure Valves (Settings)

Find "GitHub Copilot" in the function list and click the **⚙️ (Valves)** icon to configure:

| Parameter | Description | Default |
| :--- | :--- | :--- |
| **GH_TOKEN** | **(Required)** Your GitHub Token. | - |
| **MODEL_ID** | The model name to use. | `claude-sonnet-4.5` |
| **CLI_PATH** | Path to the Copilot CLI. Downloaded automatically if not found. | `/usr/local/bin/copilot` |
| **DEBUG** | Whether to enable debug logs (output to chat). | `False` |
| **SHOW_THINKING** | Show the model's reasoning/thinking process. | `True` |
| **EXCLUDE_KEYWORDS** | Exclude models containing these keywords (comma-separated). | - |
| **WORKSPACE_DIR** | Restricted workspace directory for file operations. | - |
| **INFINITE_SESSION** | Enable Infinite Sessions (automatic context compaction). | `True` |
| **COMPACTION_THRESHOLD** | Background compaction threshold (0.0-1.0). | `0.8` |
| **BUFFER_THRESHOLD** | Buffer exhaustion threshold (0.0-1.0). | `0.95` |
| **TIMEOUT** | Timeout for each stream chunk (seconds). | `300` |
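To illustrate how the two Infinite Session thresholds relate (a simplified sketch; the actual token accounting and compaction happen inside the Copilot SDK, and the token counts below are illustrative):

```python
def session_state(used_tokens: int, context_window: int,
                  compaction_threshold: float = 0.8,
                  buffer_threshold: float = 0.95) -> str:
    """Classify context usage against the two Infinite Session thresholds."""
    usage = used_tokens / context_window
    if usage >= buffer_threshold:
        return "buffer-exhausted"        # must compact before the next turn
    if usage >= compaction_threshold:
        return "compact-in-background"   # keep chatting while compaction runs
    return "ok"


print(session_state(60_000, 128_000))    # → ok
print(session_state(110_000, 128_000))   # → compact-in-background
print(session_state(123_000, 128_000))   # → buffer-exhausted
```

Raising `COMPACTION_THRESHOLD` defers compaction; `BUFFER_THRESHOLD` should stay above it so background compaction gets a chance to run first.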
### 3. Get GH_TOKEN

For security, it is recommended to use a **fine-grained Personal Access Token**:

1. Visit [GitHub Token Settings](https://github.com/settings/tokens?type=beta).
2. Click **Generate new token**.
3. **Repository access**: Select **Public repositories** (required to see the Copilot permissions).
4. **Permissions**:
   * Click **Account permissions**.
   * Find **Copilot Requests** (it defaults to **Read-only**; no change needed).
5. Generate and copy the token.

## 📋 Dependencies

This Pipe will automatically attempt to install the following dependencies:

* `github-copilot-sdk` (Python package)
* `github-copilot-cli` (binary, installed via the official script)

## ⚠️ FAQ

* **Stuck on "Waiting..."**:
  * Check that `GH_TOKEN` is correct and has the `Copilot Requests` permission.
* **Images not recognized**:
  * Ensure `MODEL_ID` is a model that supports multimodal input.
* **CLI installation failed**:
  * Ensure the OpenWebUI container has internet access.
  * You can manually download the CLI and specify `CLI_PATH` in Valves.
81  plugins/pipes/github-copilot-sdk/README_CN.md  Normal file
@@ -0,0 +1,81 @@
# GitHub Copilot SDK Official Pipe

**Author:** [Fu-Jie](https://github.com/Fu-Jie/awesome-openwebui) | **Version:** 0.1.1 | **Project:** [Awesome OpenWebUI](https://github.com/Fu-Jie/awesome-openwebui) | **License:** MIT

This is an advanced Pipe function for [OpenWebUI](https://github.com/open-webui/open-webui) that lets you use GitHub Copilot models (such as `gpt-5`, `gpt-5-mini`, `claude-sonnet-4.5`) directly within OpenWebUI. It is built on the official [GitHub Copilot SDK for Python](https://github.com/github/copilot-sdk) and provides a native integration experience.

## 🚀 What's New (v0.1.1)

* **♾️ Infinite Sessions**: Automatic context compaction for long conversations, so you no longer hit context-limit errors!
* **🧠 Thinking Process**: Real-time display of the model's reasoning/thinking process (requires model support).
* **📂 Workspace Control**: Supports a restricted workspace directory to keep file operations safe.
* **🔍 Model Filtering**: Exclude specific models by keyword (e.g., `codex`, `haiku`).
* **💾 Session Persistence**: Improved session-resume logic bound directly to the OpenWebUI chat ID for more stable connections.

## ✨ Core Features

* **🚀 Official SDK Integration**: Built on the official SDK, stable and reliable.
* **💬 Multi-turn Conversation**: Automatically concatenates history context so Copilot understands your previous messages.
* **🌊 Streaming Output**: Supports a typewriter effect for fast responses.
* **🖼️ Multimodal Support**: Supports image uploads, automatically converted to attachments for Copilot (requires model support).
* **🛠️ Zero-config Installation**: Automatically detects and downloads the GitHub Copilot CLI, ready to use out of the box.
* **🔑 Secure Authentication**: Supports fine-grained Personal Access Tokens for minimized permissions.
* **🐛 Debug Mode**: Built-in detailed log output for easy connection troubleshooting.
* **⚠️ Single Node Only**: Because session state is stored locally, this plugin currently supports single-node OpenWebUI deployments, or multi-node clusters with sticky sessions enabled.

## 📦 Installation & Usage

### 1. Import Function

1. Open OpenWebUI.
2. Go to **Workspace** -> **Functions**.
3. Click **+** (Create Function).
4. Paste the content of `github_copilot_sdk_cn.py` in full.
5. Save.

### 2. Configure Valves (Settings)

Find "GitHub Copilot" in the function list and click the **⚙️ (Valves)** icon to configure:

| Parameter | Description | Default |
| :--- | :--- | :--- |
| **GH_TOKEN** | **(Required)** Your GitHub Token. | - |
| **MODEL_ID** | The model name to use. | `claude-sonnet-4.5` |
| **CLI_PATH** | Path to the Copilot CLI. Downloaded automatically if not found. | `/usr/local/bin/copilot` |
| **DEBUG** | Whether to enable debug logs (output to the chat). | `False` |
| **SHOW_THINKING** | Whether to show the model's reasoning/thinking process. | `True` |
| **EXCLUDE_KEYWORDS** | Exclude models containing these keywords (comma-separated). | - |
| **WORKSPACE_DIR** | Restricted workspace directory for file operations. | - |
| **INFINITE_SESSION** | Enable Infinite Sessions (automatic context compaction). | `True` |
| **COMPACTION_THRESHOLD** | Background compaction threshold (0.0-1.0). | `0.8` |
| **BUFFER_THRESHOLD** | Buffer exhaustion threshold (0.0-1.0). | `0.95` |
| **TIMEOUT** | Timeout per streamed chunk (seconds). | `300` |

### 3. Get GH_TOKEN

For security, it is recommended to use a **fine-grained Personal Access Token**:

1. Visit [GitHub Token Settings](https://github.com/settings/tokens?type=beta).
2. Click **Generate new token**.
3. **Repository access**: Select **Public repositories** (this must be selected to see the Copilot permissions).
4. **Permissions**:
   * Click **Account permissions**.
   * Find **Copilot Requests** (it defaults to **Read-only**; no manual change needed).
5. Generate and copy the token.

## 📋 Dependencies

This Pipe will automatically attempt to install the following dependencies (if missing from the environment):

* `github-copilot-sdk` (Python package)
* `github-copilot-cli` (binary, installed via the official script)

## ⚠️ FAQ

* **Stuck on "Waiting..."**:
  * Check that `GH_TOKEN` is correct and has the `Copilot Requests` permission.
* **Images not recognized**:
  * Ensure `MODEL_ID` is a model that supports multimodal input.
* **CLI installation failed**:
  * Ensure the OpenWebUI container has internet access.
  * You can manually download the CLI, mount it into the container, and then specify `CLI_PATH` in Valves.
BIN  plugins/pipes/github-copilot-sdk/github_copilot_sdk.png  Normal file
Binary file not shown (new image, 474 KiB).

690  plugins/pipes/github-copilot-sdk/github_copilot_sdk.py  Normal file
@@ -0,0 +1,690 @@
"""
title: GitHub Copilot Official SDK Pipe
author: Fu-Jie
author_url: https://github.com/Fu-Jie/awesome-openwebui
funding_url: https://github.com/open-webui
openwebui_id: ce96f7b4-12fc-4ac3-9a01-875713e69359
description: Integrate GitHub Copilot SDK. Supports dynamic models, multi-turn conversation, streaming, multimodal input, and infinite sessions (context compaction).
version: 0.1.1
requirements: github-copilot-sdk
"""

import os
import time
import json
import base64
import tempfile
import asyncio
import logging
import shutil
import subprocess
import sys
from typing import Optional, Union, AsyncGenerator, List, Any, Dict
from pydantic import BaseModel, Field
from datetime import datetime, timezone
import contextlib

# Setup logger
logger = logging.getLogger(__name__)

# Global client storage
_SHARED_CLIENT = None
_SHARED_TOKEN = ""
_CLIENT_LOCK = asyncio.Lock()


class Pipe:
    class Valves(BaseModel):
        GH_TOKEN: str = Field(
            default="",
            description="GitHub Fine-grained Token (Requires 'Copilot Requests' permission)",
        )
        MODEL_ID: str = Field(
            default="claude-sonnet-4.5",
            description="Default Copilot model name (used when dynamic fetching fails)",
        )
        CLI_PATH: str = Field(
            default="/usr/local/bin/copilot",
            description="Path to Copilot CLI",
        )
        DEBUG: bool = Field(
            default=False,
            description="Enable technical debug logs (connection info, etc.)",
        )
        SHOW_THINKING: bool = Field(
            default=True,
            description="Show model reasoning/thinking process",
        )
        EXCLUDE_KEYWORDS: str = Field(
            default="",
            description="Exclude models containing these keywords (comma separated, e.g.: codex, haiku)",
        )
        WORKSPACE_DIR: str = Field(
            default="",
            description="Restricted workspace directory for file operations. If empty, allows access to the current process directory.",
        )
        INFINITE_SESSION: bool = Field(
            default=True,
            description="Enable Infinite Sessions (automatic context compaction)",
        )
        COMPACTION_THRESHOLD: float = Field(
            default=0.8,
            description="Background compaction threshold (0.0-1.0)",
        )
        BUFFER_THRESHOLD: float = Field(
            default=0.95,
            description="Buffer exhaustion threshold (0.0-1.0)",
        )
        TIMEOUT: int = Field(
            default=300,
            description="Timeout for each stream chunk (seconds)",
        )

    def __init__(self):
        self.type = "pipe"
        self.id = "copilotsdk"
        self.name = "copilotsdk"
        self.valves = self.Valves()
        self.temp_dir = tempfile.mkdtemp(prefix="copilot_images_")
        self.thinking_started = False
        self._model_cache = []  # Model list cache

    def __del__(self):
        try:
            shutil.rmtree(self.temp_dir)
        except Exception:
            pass

    def _emit_debug_log(self, message: str):
        """Emit debug log to frontend if DEBUG valve is enabled."""
        if self.valves.DEBUG:
            print(f"[Copilot Pipe] {message}")

    def _get_user_context(self):
        """Helper to get user context (placeholder for future use)."""
        return {}
    def _get_chat_context(
        self, body: dict, __metadata__: Optional[dict] = None
    ) -> Dict[str, str]:
        """
        Highly reliable chat context extraction logic.
        Priority: __metadata__ > body['chat_id'] > body['metadata']['chat_id']
        """
        chat_id = ""
        source = "none"

        # 1. Prioritize __metadata__ (most reliable source injected by OpenWebUI)
        if __metadata__ and isinstance(__metadata__, dict):
            chat_id = __metadata__.get("chat_id", "")
            if chat_id:
                source = "__metadata__"

        # 2. Then try the body root
        if not chat_id and isinstance(body, dict):
            chat_id = body.get("chat_id", "")
            if chat_id:
                source = "body_root"

        # 3. Finally try body.metadata
        if not chat_id and isinstance(body, dict):
            body_metadata = body.get("metadata", {})
            if isinstance(body_metadata, dict):
                chat_id = body_metadata.get("chat_id", "")
                if chat_id:
                    source = "body_metadata"

        # Debug: log the ID source
        if chat_id:
            self._emit_debug_log(f"Extracted ChatID: {chat_id} (Source: {source})")
        else:
            # If still not found, log body keys for troubleshooting
            keys = list(body.keys()) if isinstance(body, dict) else "not a dict"
            self._emit_debug_log(
                f"Warning: Failed to extract ChatID. Body keys: {keys}"
            )

        return {
            "chat_id": str(chat_id).strip(),
        }

    async def pipes(self) -> List[dict]:
        """Dynamically fetch the model list."""
        # Return cache if available
        if self._model_cache:
            return self._model_cache

        self._emit_debug_log("Fetching model list dynamically...")
        try:
            self._setup_env()
            if not self.valves.GH_TOKEN:
                return [{"id": f"{self.id}-error", "name": "Error: GH_TOKEN not set"}]

            from copilot import CopilotClient

            client_config = {}
            if os.environ.get("COPILOT_CLI_PATH"):
                client_config["cli_path"] = os.environ["COPILOT_CLI_PATH"]

            client = CopilotClient(client_config)
            try:
                await client.start()
                models = await client.list_models()

                # Update cache
                self._model_cache = []
                exclude_list = [
                    k.strip().lower()
                    for k in self.valves.EXCLUDE_KEYWORDS.split(",")
                    if k.strip()
                ]

                models_with_info = []
                for m in models:
                    # Compatible with dict and object access
                    m_id = (
                        m.get("id") if isinstance(m, dict) else getattr(m, "id", str(m))
                    )
                    m_name = (
                        m.get("name")
                        if isinstance(m, dict)
                        else getattr(m, "name", m_id)
                    )
                    m_policy = (
                        m.get("policy")
                        if isinstance(m, dict)
                        else getattr(m, "policy", {})
                    )
                    m_billing = (
                        m.get("billing")
                        if isinstance(m, dict)
                        else getattr(m, "billing", {})
                    )

                    # Check policy state
                    state = (
                        m_policy.get("state")
                        if isinstance(m_policy, dict)
                        else getattr(m_policy, "state", "enabled")
                    )
                    if state == "disabled":
                        continue

                    # Filtering logic
                    if any(kw in m_id.lower() for kw in exclude_list):
                        continue

                    # Get multiplier
                    multiplier = (
                        m_billing.get("multiplier", 1)
                        if isinstance(m_billing, dict)
                        else getattr(m_billing, "multiplier", 1)
                    )

                    # Format display name
                    if multiplier == 0:
                        display_name = f"-🔥 {m_id} (unlimited)"
                    else:
                        display_name = f"-{m_id} ({multiplier}x)"

                    models_with_info.append(
                        {
                            "id": f"{self.id}-{m_id}",
                            "name": display_name,
                            "multiplier": multiplier,
                            "raw_id": m_id,
                        }
                    )

                # Sort: multiplier ascending, then raw_id ascending
                models_with_info.sort(key=lambda x: (x["multiplier"], x["raw_id"]))
                self._model_cache = [
                    {"id": m["id"], "name": m["name"]} for m in models_with_info
                ]

                self._emit_debug_log(
                    f"Successfully fetched {len(self._model_cache)} models (filtered)"
                )
                return self._model_cache
            except Exception as e:
                self._emit_debug_log(f"Failed to fetch model list: {e}")
                # Return the default model on failure
                return [
                    {
                        "id": f"{self.id}-{self.valves.MODEL_ID}",
                        "name": f"GitHub Copilot ({self.valves.MODEL_ID})",
                    }
                ]
            finally:
                await client.stop()
        except Exception as e:
            self._emit_debug_log(f"Pipes Error: {e}")
            return [
                {
                    "id": f"{self.id}-{self.valves.MODEL_ID}",
                    "name": f"GitHub Copilot ({self.valves.MODEL_ID})",
                }
            ]

    async def _get_client(self):
        """Helper to get or create a CopilotClient instance."""
        from copilot import CopilotClient

        client_config = {}
        if os.environ.get("COPILOT_CLI_PATH"):
            client_config["cli_path"] = os.environ["COPILOT_CLI_PATH"]

        client = CopilotClient(client_config)
        await client.start()
        return client
def _setup_env(self):
|
||||||
|
cli_path = self.valves.CLI_PATH
|
||||||
|
found = False
|
||||||
|
|
||||||
|
if os.path.exists(cli_path):
|
||||||
|
found = True
|
||||||
|
|
||||||
|
if not found:
|
||||||
|
sys_path = shutil.which("copilot")
|
||||||
|
if sys_path:
|
||||||
|
cli_path = sys_path
|
||||||
|
found = True
|
||||||
|
|
||||||
|
if not found:
|
||||||
|
try:
|
||||||
|
subprocess.run(
|
||||||
|
"curl -fsSL https://gh.io/copilot-install | bash",
|
||||||
|
shell=True,
|
||||||
|
check=True,
|
||||||
|
)
|
||||||
|
if os.path.exists(self.valves.CLI_PATH):
|
||||||
|
cli_path = self.valves.CLI_PATH
|
||||||
|
found = True
|
||||||
|
except:
|
||||||
|
pass
|
||||||
|
|
||||||
|
if found:
|
||||||
|
os.environ["COPILOT_CLI_PATH"] = cli_path
|
||||||
|
cli_dir = os.path.dirname(cli_path)
|
||||||
|
if cli_dir not in os.environ["PATH"]:
|
||||||
|
os.environ["PATH"] = f"{cli_dir}:{os.environ['PATH']}"
|
||||||
|
|
||||||
|
if self.valves.GH_TOKEN:
|
||||||
|
os.environ["GH_TOKEN"] = self.valves.GH_TOKEN
|
||||||
|
os.environ["GITHUB_TOKEN"] = self.valves.GH_TOKEN
|
||||||
|
|
||||||
|
def _process_images(self, messages):
|
||||||
|
attachments = []
|
||||||
|
text_content = ""
|
||||||
|
if not messages:
|
||||||
|
return "", []
|
||||||
|
last_msg = messages[-1]
|
||||||
|
content = last_msg.get("content", "")
|
||||||
|
|
||||||
|
if isinstance(content, list):
|
||||||
|
for item in content:
|
||||||
|
if item.get("type") == "text":
|
||||||
|
text_content += item.get("text", "")
|
||||||
|
elif item.get("type") == "image_url":
|
||||||
|
image_url = item.get("image_url", {}).get("url", "")
|
||||||
|
if image_url.startswith("data:image"):
|
||||||
|
try:
|
||||||
|
header, encoded = image_url.split(",", 1)
|
||||||
|
ext = header.split(";")[0].split("/")[-1]
|
||||||
|
file_name = f"image_{len(attachments)}.{ext}"
|
||||||
|
file_path = os.path.join(self.temp_dir, file_name)
|
||||||
|
with open(file_path, "wb") as f:
|
||||||
|
f.write(base64.b64decode(encoded))
|
||||||
|
attachments.append(
|
||||||
|
{
|
||||||
|
"type": "file",
|
||||||
|
"path": file_path,
|
||||||
|
"display_name": file_name,
|
||||||
|
}
|
||||||
|
)
|
||||||
|
self._emit_debug_log(f"Image processed: {file_path}")
|
||||||
|
except Exception as e:
|
||||||
|
self._emit_debug_log(f"Image error: {e}")
|
||||||
|
else:
|
||||||
|
text_content = str(content)
|
||||||
|
return text_content, attachments
|
||||||
|
|
||||||
|
    async def pipe(
        self, body: dict, __metadata__: Optional[dict] = None
    ) -> Union[str, AsyncGenerator]:
        self._setup_env()
        if not self.valves.GH_TOKEN:
            return "Error: Please configure GH_TOKEN in Valves."

        # Parse the user-selected model
        request_model = body.get("model", "")
        real_model_id = self.valves.MODEL_ID  # Default value

        if request_model.startswith(f"{self.id}-"):
            real_model_id = request_model[len(f"{self.id}-") :]
            self._emit_debug_log(f"Using selected model: {real_model_id}")

        messages = body.get("messages", [])
        if not messages:
            return "No messages."

        # Get the Chat ID using the improved helper
        chat_ctx = self._get_chat_context(body, __metadata__)
        chat_id = chat_ctx.get("chat_id")

        is_streaming = body.get("stream", False)
        self._emit_debug_log(f"Request Streaming: {is_streaming}")

        last_text, attachments = self._process_images(messages)

        # Prompt strategy: if we have a chat_id, try to resume the session.
        # A resumed session already holds the history, so only the last message
        # is sent; a new session receives the full history.
        # Caveat: the Copilot SDK's `create_session` does not accept history and
        # `session.send` only appends, so if the user edits history in OpenWebUI
        # a resumed session's state becomes stale. For now the strategy is
        # "resume if possible, else create".

        prompt = ""
        is_new_session = True

        try:
            client = await self._get_client()
            session = None

            if chat_id:
                try:
                    # Try to resume a session using chat_id as the session_id
                    session = await client.resume_session(chat_id)
                    self._emit_debug_log(f"Resumed session using ChatID: {chat_id}")
                    is_new_session = False
                except Exception:
                    # Resume failed; the session may not exist on disk
                    self._emit_debug_log(
                        f"Session {chat_id} not found or expired, creating new."
                    )
                    session = None

            if session is None:
                # Create a new session
                from copilot.types import SessionConfig, InfiniteSessionConfig

                # Infinite session config
                infinite_session_config = None
                if self.valves.INFINITE_SESSION:
                    infinite_session_config = InfiniteSessionConfig(
                        enabled=True,
                        background_compaction_threshold=self.valves.COMPACTION_THRESHOLD,
                        buffer_exhaustion_threshold=self.valves.BUFFER_THRESHOLD,
                    )

                session_config = SessionConfig(
                    session_id=(
                        chat_id if chat_id else None
                    ),  # Use chat_id as the session_id
                    model=real_model_id,
                    streaming=body.get("stream", False),
                    infinite_sessions=infinite_session_config,
                )

                session = await client.create_session(config=session_config)

                new_sid = getattr(session, "session_id", getattr(session, "id", None))
                self._emit_debug_log(f"Created new session: {new_sid}")

            # Construct the prompt
            if is_new_session:
                # For a new session, send the full conversation history
                full_conversation = []
                for msg in messages[:-1]:
                    role = msg.get("role", "user").upper()
                    content = msg.get("content", "")
                    if isinstance(content, list):
                        content = " ".join(
                            [
                                c.get("text", "")
                                for c in content
                                if c.get("type") == "text"
                            ]
                        )
                    full_conversation.append(f"{role}: {content}")
                full_conversation.append(f"User: {last_text}")
                prompt = "\n\n".join(full_conversation)
            else:
                # For a resumed session, only send the last message
                prompt = last_text

            send_payload = {"prompt": prompt, "mode": "immediate"}
            if attachments:
                send_payload["attachments"] = attachments

            if body.get("stream", False):
                # Determine the session status message shown in the UI
                init_msg = ""
                if self.valves.DEBUG:
                    if is_new_session:
                        new_sid = getattr(
                            session, "session_id", getattr(session, "id", "unknown")
                        )
                        init_msg = f"> [Debug] Created new session: {new_sid}\n"
                    else:
                        init_msg = (
                            f"> [Debug] Resumed session using ChatID: {chat_id}\n"
                        )

                return self.stream_response(client, session, send_payload, init_msg)
            else:
                try:
                    response = await session.send_and_wait(send_payload)
                    return response.data.content if response else "Empty response."
                finally:
                    # Destroy the session object to free memory, but KEEP data on disk
                    await session.destroy()

        except Exception as e:
            self._emit_debug_log(f"Request Error: {e}")
            return f"Error: {str(e)}"

    async def stream_response(
        self, client, session, send_payload, init_message: str = ""
    ) -> AsyncGenerator:
        queue = asyncio.Queue()
        done = asyncio.Event()
        self.thinking_started = False
        has_content = False  # Track whether any content has been yielded

        def get_event_data(event, attr, default=""):
            if hasattr(event, "data"):
                data = event.data
                if data is None:
                    return default
                if isinstance(data, (str, int, float, bool)):
                    return str(data) if attr == "value" else default

                if isinstance(data, dict):
                    val = data.get(attr)
                    if val is None:
                        alt_attr = attr.replace("_", "") if "_" in attr else attr
                        val = data.get(alt_attr)
                        if val is None and "_" not in attr:
                            # Try snake_case if camelCase failed
                            import re

                            snake_attr = re.sub(r"(?<!^)(?=[A-Z])", "_", attr).lower()
                            val = data.get(snake_attr)
                else:
                    val = getattr(data, attr, None)
                    if val is None:
                        alt_attr = attr.replace("_", "") if "_" in attr else attr
                        val = getattr(data, alt_attr, None)
                        if val is None and "_" not in attr:
                            import re

                            snake_attr = re.sub(r"(?<!^)(?=[A-Z])", "_", attr).lower()
                            val = getattr(data, snake_attr, None)

                return val if val is not None else default
            return default

        def handler(event):
            # Resolve the event type robustly: enum-like types expose `.value`;
            # fall back to the string form, and never let this raise.
            event_type = (
                getattr(event.type, "value", str(event.type))
                if hasattr(event, "type")
                else "unknown"
            )

            # Log full event data for tool events to help debugging
            if "tool" in event_type:
                try:
                    data_str = str(event.data) if hasattr(event, "data") else "no data"
                    self._emit_debug_log(f"Tool Event [{event_type}]: {data_str}")
                except Exception:
                    pass

            self._emit_debug_log(f"Event: {event_type}")

            # Handle message content (delta or full)
            if event_type in [
                "assistant.message_delta",
                "assistant.message.delta",
                "assistant.message",
            ]:
                # Log full message events to help diagnose why no delta arrived
                if event_type == "assistant.message":
                    self._emit_debug_log(
                        f"Received full message event (non-delta): {get_event_data(event, 'content')[:50]}..."
                    )

                delta = (
                    get_event_data(event, "delta_content")
                    or get_event_data(event, "deltaContent")
                    or get_event_data(event, "content")
                    or get_event_data(event, "text")
                )
                if delta:
                    if self.thinking_started:
                        queue.put_nowait("\n</think>\n")
                        self.thinking_started = False
                    queue.put_nowait(delta)

            elif event_type in [
                "assistant.reasoning_delta",
                "assistant.reasoning.delta",
                "assistant.reasoning",
            ]:
                delta = (
                    get_event_data(event, "delta_content")
                    or get_event_data(event, "deltaContent")
                    or get_event_data(event, "content")
                    or get_event_data(event, "text")
                )
                if delta:
                    if not self.thinking_started and self.valves.SHOW_THINKING:
                        queue.put_nowait("<think>\n")
                        self.thinking_started = True
                    if self.thinking_started:
                        queue.put_nowait(delta)

            elif event_type == "tool.execution_start":
                # Try multiple possible fields for the tool name/description
                tool_name = (
                    get_event_data(event, "toolName")
                    or get_event_data(event, "name")
                    or get_event_data(event, "description")
                    or get_event_data(event, "tool_name")
                    or "Unknown Tool"
                )
                if not self.thinking_started and self.valves.SHOW_THINKING:
                    queue.put_nowait("<think>\n")
                    self.thinking_started = True
                if self.thinking_started:
                    queue.put_nowait(f"\nRunning Tool: {tool_name}...\n")
                self._emit_debug_log(f"Tool Start: {tool_name}")

            elif event_type == "tool.execution_complete":
                if self.thinking_started:
                    queue.put_nowait("Tool Completed.\n")
                self._emit_debug_log("Tool Complete")

            elif event_type == "session.compaction_start":
                self._emit_debug_log("Session Compaction Started")

            elif event_type == "session.compaction_complete":
                self._emit_debug_log("Session Compaction Completed")

            elif event_type == "session.idle":
                done.set()
            elif event_type == "session.error":
                msg = get_event_data(event, "message", "Unknown Error")
                queue.put_nowait(f"\n[Error: {msg}]")
                done.set()

        unsubscribe = session.on(handler)
        await session.send(send_payload)

        if self.valves.DEBUG:
            yield "<think>\n"
            if init_message:
                yield init_message
            yield "> [Debug] Connection established, waiting for response...\n"
            self.thinking_started = True

        try:
            while not done.is_set():
                try:
                    chunk = await asyncio.wait_for(
                        queue.get(), timeout=float(self.valves.TIMEOUT)
                    )
                    if chunk:
                        has_content = True
                        yield chunk
                except asyncio.TimeoutError:
                    if done.is_set():
                        break
                    if self.thinking_started:
                        yield f"> [Debug] Waiting for response ({self.valves.TIMEOUT}s exceeded)...\n"
                    continue

            # Drain any chunks that arrived after `done` was set
            while not queue.empty():
                chunk = queue.get_nowait()
                if chunk:
                    has_content = True
                    yield chunk

            if self.thinking_started:
                yield "\n</think>\n"
                has_content = True

            # Core fix: if no content was yielded, return a fallback message to
            # prevent an OpenWebUI error
            if not has_content:
                yield "⚠️ Copilot returned no content. Please check if the Model ID is correct or enable DEBUG mode in Valves for details."

        except Exception as e:
            yield f"\n[Stream Error: {str(e)}]"
        finally:
            unsubscribe()
            # Session cleanup is intentionally skipped here. pipe() returns this
            # generator and exits before streaming finishes, so destruction would
            # have to happen in this `finally` block; but the generator does not
            # know whether the session is mapped to a chat_id (and should be kept
            # on disk) or is anonymous (and should be destroyed).
            # TODO: pass a flag into stream_response so unmapped sessions can be
            # destroyed after streaming completes (CopilotSession does not
            # auto-close).

BIN  plugins/pipes/github-copilot-sdk/github_copilot_sdk_cn.png  (new file)
Binary file not shown. After Width: | Height: | Size: 474 KiB

757  plugins/pipes/github-copilot-sdk/github_copilot_sdk_cn.py  (new file)
@@ -0,0 +1,757 @@
"""
title: GitHub Copilot Official SDK Pipe
author: Fu-Jie
author_url: https://github.com/Fu-Jie/awesome-openwebui
funding_url: https://github.com/open-webui
description: 集成 GitHub Copilot SDK。支持动态模型、多轮对话、流式输出、多模态输入及无限会话(上下文自动压缩)。
version: 0.1.1
requirements: github-copilot-sdk
"""

import os
import time
import json
import base64
import tempfile
import asyncio
import logging
import shutil
import subprocess
import sys
from typing import Optional, Union, AsyncGenerator, List, Any, Dict
from pydantic import BaseModel, Field
from datetime import datetime, timezone
import contextlib

# SQLAlchemy imports used by the engine-discovery helpers and the
# CopilotSessionMap model below (missing from the original import block)
from sqlalchemy import Column, DateTime, Integer, String
from sqlalchemy.engine import Engine
from sqlalchemy.orm import declarative_base

# Setup logger
logger = logging.getLogger(__name__)

# Open WebUI internal database (re-use shared connection)
try:
    from open_webui.internal import db as owui_db
except ModuleNotFoundError:
    owui_db = None

def _discover_owui_engine(db_module: Any) -> Optional[Engine]:
    """Discover the Open WebUI SQLAlchemy engine via provided db module helpers."""
    if db_module is None:
        return None

    db_context = getattr(db_module, "get_db_context", None) or getattr(
        db_module, "get_db", None
    )
    if callable(db_context):
        try:
            with db_context() as session:
                try:
                    return session.get_bind()
                except AttributeError:
                    return getattr(session, "bind", None) or getattr(
                        session, "engine", None
                    )
        except Exception as exc:
            logger.error(f"[DB Discover] get_db_context failed: {exc}")

    for attr in ("engine", "ENGINE", "bind", "BIND"):
        candidate = getattr(db_module, attr, None)
        if candidate is not None:
            return candidate

    return None

def _discover_owui_schema(db_module: Any) -> Optional[str]:
    """Discover the Open WebUI database schema name if configured."""
    if db_module is None:
        return None

    try:
        base = getattr(db_module, "Base", None)
        metadata = getattr(base, "metadata", None) if base is not None else None
        candidate = getattr(metadata, "schema", None) if metadata is not None else None
        if isinstance(candidate, str) and candidate.strip():
            return candidate.strip()
    except Exception as exc:
        logger.error(f"[DB Discover] Base metadata schema lookup failed: {exc}")

    try:
        metadata_obj = getattr(db_module, "metadata_obj", None)
        candidate = (
            getattr(metadata_obj, "schema", None) if metadata_obj is not None else None
        )
        if isinstance(candidate, str) and candidate.strip():
            return candidate.strip()
    except Exception as exc:
        logger.error(f"[DB Discover] metadata_obj schema lookup failed: {exc}")

    try:
        from open_webui import env as owui_env

        candidate = getattr(owui_env, "DATABASE_SCHEMA", None)
        if isinstance(candidate, str) and candidate.strip():
            return candidate.strip()
    except Exception as exc:
        logger.error(f"[DB Discover] env schema lookup failed: {exc}")

    return None


owui_engine = _discover_owui_engine(owui_db)
owui_schema = _discover_owui_schema(owui_db)
owui_Base = getattr(owui_db, "Base", None) if owui_db is not None else None
if owui_Base is None:
    owui_Base = declarative_base()

class CopilotSessionMap(owui_Base):
    """Copilot Session Mapping Table"""

    __tablename__ = "copilot_session_map"
    __table_args__ = (
        {"extend_existing": True, "schema": owui_schema}
        if owui_schema
        else {"extend_existing": True}
    )

    id = Column(Integer, primary_key=True, autoincrement=True)
    chat_id = Column(String(255), unique=True, nullable=False, index=True)
    copilot_session_id = Column(String(255), nullable=False)
    updated_at = Column(
        DateTime,
        default=lambda: datetime.now(timezone.utc),
        onupdate=lambda: datetime.now(timezone.utc),
    )


# Global client storage
_SHARED_CLIENT = None
_SHARED_TOKEN = ""
_CLIENT_LOCK = asyncio.Lock()

class Pipe:
    class Valves(BaseModel):
        GH_TOKEN: str = Field(
            default="", description="GitHub 细粒度令牌 (需开启 'Copilot Requests' 权限)"
        )
        MODEL_ID: str = Field(
            default="claude-sonnet-4.5",
            description="默认使用的 Copilot 模型名称 (当无法动态获取时使用)",
        )
        CLI_PATH: str = Field(
            default="/usr/local/bin/copilot",
            description="Copilot CLI 路径",
        )
        DEBUG: bool = Field(
            default=False,
            description="开启技术调试日志 (连接信息等)",
        )
        SHOW_THINKING: bool = Field(
            default=True,
            description="显示模型推理/思考过程",
        )
        EXCLUDE_KEYWORDS: str = Field(
            default="",
            description="排除包含这些关键词的模型 (逗号分隔,例如: codex, haiku)",
        )
        WORKSPACE_DIR: str = Field(
            default="",
            description="文件操作的受限工作目录。如果为空,允许访问当前进程目录。",
        )
        INFINITE_SESSION: bool = Field(
            default=True,
            description="启用无限会话 (自动上下文压缩)",
        )
        COMPACTION_THRESHOLD: float = Field(
            default=0.8,
            description="后台压缩阈值 (0.0-1.0)",
        )
        BUFFER_THRESHOLD: float = Field(
            default=0.95,
            description="背景压缩缓冲区阈值 (0.0-1.0)",
        )
        TIMEOUT: int = Field(
            default=300,
            description="流式数据块超时时间 (秒)",
        )

    def __init__(self):
        self.type = "pipe"
        self.name = "copilotsdk"
        self.valves = self.Valves()
        self.temp_dir = tempfile.mkdtemp(prefix="copilot_images_")
        self.thinking_started = False
        self._model_cache = []  # Model list cache

    def __del__(self):
        try:
            shutil.rmtree(self.temp_dir)
        except Exception:
            pass

    def _emit_debug_log(self, message: str):
        """Emit debug log to frontend if DEBUG valve is enabled."""
        if self.valves.DEBUG:
            print(f"[Copilot Pipe] {message}")

    def _get_user_context(self):
        """Helper to get user context (placeholder for future use)."""
        return {}

    def _get_chat_context(
        self, body: dict, __metadata__: Optional[dict] = None
    ) -> Dict[str, str]:
        """
        Highly reliable chat-context extraction.
        Priority: __metadata__ > body['chat_id'] > body['metadata']['chat_id']
        """
        chat_id = ""
        source = "none"

        # 1. Prefer __metadata__ (the most reliable source, injected by OpenWebUI)
        if __metadata__ and isinstance(__metadata__, dict):
            chat_id = __metadata__.get("chat_id", "")
            if chat_id:
                source = "__metadata__"

        # 2. Then the top level of the body
        if not chat_id and isinstance(body, dict):
            chat_id = body.get("chat_id", "")
            if chat_id:
                source = "body_root"

        # 3. Finally body.metadata
        if not chat_id and isinstance(body, dict):
            body_metadata = body.get("metadata", {})
            if isinstance(body_metadata, dict):
                chat_id = body_metadata.get("chat_id", "")
                if chat_id:
                    source = "body_metadata"

        # Debug: record where the ID came from
        if chat_id:
            self._emit_debug_log(f"提取到 ChatID: {chat_id} (来源: {source})")
        else:
            # If still not found, log the body keys to aid troubleshooting
            keys = list(body.keys()) if isinstance(body, dict) else "not a dict"
            self._emit_debug_log(f"警告: 未能提取到 ChatID。Body 键: {keys}")

        return {
            "chat_id": str(chat_id).strip(),
        }

    async def pipes(self) -> List[dict]:
        """Dynamically fetch the model list."""
        # Return the cache directly if present
        if self._model_cache:
            return self._model_cache

        self._emit_debug_log("正在动态获取模型列表...")
        try:
            self._setup_env()
            if not self.valves.GH_TOKEN:
                return [{"id": f"{self.id}-error", "name": "Error: GH_TOKEN not set"}]

            from copilot import CopilotClient

            client_config = {}
            if os.environ.get("COPILOT_CLI_PATH"):
                client_config["cli_path"] = os.environ["COPILOT_CLI_PATH"]

            client = CopilotClient(client_config)
            try:
                await client.start()
                models = await client.list_models()

                # Refresh the cache
                self._model_cache = []
                exclude_list = [
                    k.strip().lower()
                    for k in self.valves.EXCLUDE_KEYWORDS.split(",")
                    if k.strip()
                ]

                models_with_info = []
                for m in models:
                    # Support both dict and object access
                    m_id = (
                        m.get("id") if isinstance(m, dict) else getattr(m, "id", str(m))
                    )
                    m_name = (
                        m.get("name")
                        if isinstance(m, dict)
                        else getattr(m, "name", m_id)
                    )
                    m_policy = (
                        m.get("policy")
                        if isinstance(m, dict)
                        else getattr(m, "policy", {})
                    )
                    m_billing = (
                        m.get("billing")
                        if isinstance(m, dict)
                        else getattr(m, "billing", {})
                    )

                    # Check the policy state
                    state = (
                        m_policy.get("state")
                        if isinstance(m_policy, dict)
                        else getattr(m_policy, "state", "enabled")
                    )
                    if state == "disabled":
                        continue

                    # Keyword filtering
                    if any(kw in m_id.lower() for kw in exclude_list):
                        continue

                    # Billing multiplier
                    multiplier = (
                        m_billing.get("multiplier", 1)
                        if isinstance(m_billing, dict)
                        else getattr(m_billing, "multiplier", 1)
                    )

                    # Format the display name
                    if multiplier == 0:
                        display_name = f"-🔥 {m_id} (unlimited)"
                    else:
                        display_name = f"-{m_id} ({multiplier}x)"

                    models_with_info.append(
                        {
                            "id": f"{self.id}-{m_id}",
                            "name": display_name,
                            "multiplier": multiplier,
                            "raw_id": m_id,
                        }
                    )

                # Sort by multiplier ascending, then raw ID ascending
                models_with_info.sort(key=lambda x: (x["multiplier"], x["raw_id"]))
                self._model_cache = [
                    {"id": m["id"], "name": m["name"]} for m in models_with_info
                ]

                self._emit_debug_log(
                    f"成功获取 {len(self._model_cache)} 个模型 (已过滤)"
                )
                return self._model_cache
            except Exception as e:
                self._emit_debug_log(f"获取模型列表失败: {e}")
                # Fall back to the default model on failure
                return [
                    {
                        "id": f"{self.id}-{self.valves.MODEL_ID}",
                        "name": f"GitHub Copilot ({self.valves.MODEL_ID})",
                    }
                ]
            finally:
                await client.stop()
        except Exception as e:
            self._emit_debug_log(f"Pipes Error: {e}")
            return [
                {
                    "id": f"{self.id}-{self.valves.MODEL_ID}",
                    "name": f"GitHub Copilot ({self.valves.MODEL_ID})",
                }
            ]

    async def _get_client(self):
        """Helper to get or create a CopilotClient instance."""
        from copilot import CopilotClient

        client_config = {}
        if os.environ.get("COPILOT_CLI_PATH"):
            client_config["cli_path"] = os.environ["COPILOT_CLI_PATH"]

        client = CopilotClient(client_config)
        await client.start()
        return client

    def _setup_env(self):
        cli_path = self.valves.CLI_PATH
        found = False

        if os.path.exists(cli_path):
            found = True

        if not found:
            sys_path = shutil.which("copilot")
            if sys_path:
                cli_path = sys_path
                found = True

        if not found:
            try:
                subprocess.run(
                    "curl -fsSL https://gh.io/copilot-install | bash",
                    shell=True,
                    check=True,
                )
                if os.path.exists(self.valves.CLI_PATH):
                    cli_path = self.valves.CLI_PATH
                    found = True
            except Exception:
                pass

        if found:
            os.environ["COPILOT_CLI_PATH"] = cli_path
            cli_dir = os.path.dirname(cli_path)
            if cli_dir not in os.environ["PATH"]:
                os.environ["PATH"] = f"{cli_dir}:{os.environ['PATH']}"

        if self.valves.GH_TOKEN:
            os.environ["GH_TOKEN"] = self.valves.GH_TOKEN
            os.environ["GITHUB_TOKEN"] = self.valves.GH_TOKEN

    def _process_images(self, messages):
        attachments = []
        text_content = ""
        if not messages:
            return "", []
        last_msg = messages[-1]
        content = last_msg.get("content", "")

        if isinstance(content, list):
            for item in content:
                if item.get("type") == "text":
                    text_content += item.get("text", "")
                elif item.get("type") == "image_url":
                    image_url = item.get("image_url", {}).get("url", "")
                    if image_url.startswith("data:image"):
                        try:
                            header, encoded = image_url.split(",", 1)
                            ext = header.split(";")[0].split("/")[-1]
                            file_name = f"image_{len(attachments)}.{ext}"
                            file_path = os.path.join(self.temp_dir, file_name)
                            with open(file_path, "wb") as f:
                                f.write(base64.b64decode(encoded))
                            attachments.append(
                                {
                                    "type": "file",
                                    "path": file_path,
                                    "display_name": file_name,
                                }
                            )
                            self._emit_debug_log(f"Image processed: {file_path}")
                        except Exception as e:
                            self._emit_debug_log(f"Image error: {e}")
        else:
            text_content = str(content)
        return text_content, attachments

    async def pipe(
        self, body: dict, __metadata__: Optional[dict] = None
    ) -> Union[str, AsyncGenerator]:
        self._setup_env()
        if not self.valves.GH_TOKEN:
            return "Error: 请在 Valves 中配置 GH_TOKEN。"

        # Parse the user-selected model
        request_model = body.get("model", "")
        real_model_id = self.valves.MODEL_ID  # Default value

        if request_model.startswith(f"{self.id}-"):
            real_model_id = request_model[len(f"{self.id}-") :]
            self._emit_debug_log(f"使用选择的模型: {real_model_id}")

        messages = body.get("messages", [])
        if not messages:
            return "No messages."

        # Get the Chat ID using the improved helper
        chat_ctx = self._get_chat_context(body, __metadata__)
        chat_id = chat_ctx.get("chat_id")

        is_streaming = body.get("stream", False)
        self._emit_debug_log(f"请求流式传输: {is_streaming}")

        last_text, attachments = self._process_images(messages)

        # Prompt strategy: if a chat_id exists, try to resume the session.
        # A resumed session already holds the history, so only the last message
        # is sent; a new session receives the full history.

        prompt = ""
        is_new_session = True

        try:
            client = await self._get_client()
            session = None

            if chat_id:
                try:
                    # Try to resume the session using chat_id as the session_id
                    session = await client.resume_session(chat_id)
                    self._emit_debug_log(f"已通过 ChatID 恢复会话: {chat_id}")
                    is_new_session = False
                except Exception:
                    # Resume failed; the session may not exist on disk
                    self._emit_debug_log(
                        f"会话 {chat_id} 不存在或已过期,将创建新会话。"
                    )
                    session = None

            if session is None:
                # Create a new session
                from copilot.types import SessionConfig, InfiniteSessionConfig

                # Infinite session configuration
                infinite_session_config = None
                if self.valves.INFINITE_SESSION:
                    infinite_session_config = InfiniteSessionConfig(
                        enabled=True,
                        background_compaction_threshold=self.valves.COMPACTION_THRESHOLD,
                        buffer_exhaustion_threshold=self.valves.BUFFER_THRESHOLD,
                    )

                session_config = SessionConfig(
                    session_id=(
                        chat_id if chat_id else None
                    ),  # Use chat_id as the session_id
                    model=real_model_id,
                    streaming=body.get("stream", False),
                    infinite_sessions=infinite_session_config,
                )

                session = await client.create_session(config=session_config)

                # Get the new session ID
                new_sid = getattr(session, "session_id", getattr(session, "id", None))
                self._emit_debug_log(f"创建了新会话: {new_sid}")

            # Construct the prompt
            if is_new_session:
                # New session: send the full history
                full_conversation = []
                for msg in messages[:-1]:
                    role = msg.get("role", "user").upper()
                    content = msg.get("content", "")
                    if isinstance(content, list):
                        content = " ".join(
                            [
                                c.get("text", "")
                                for c in content
                                if c.get("type") == "text"
                            ]
                        )
                    full_conversation.append(f"{role}: {content}")
                full_conversation.append(f"User: {last_text}")
                prompt = "\n\n".join(full_conversation)
            else:
                # Resumed session: only send the last message
                prompt = last_text

            send_payload = {"prompt": prompt, "mode": "immediate"}
            if attachments:
                send_payload["attachments"] = attachments

            if body.get("stream", False):
                # Determine the session status message shown in the UI
                init_msg = ""
                if self.valves.DEBUG:
                    if is_new_session:
                        new_sid = getattr(
                            session, "session_id", getattr(session, "id", "unknown")
                        )
                        init_msg = f"> [Debug] 创建了新会话: {new_sid}\n"
                    else:
                        init_msg = f"> [Debug] 已通过 ChatID 恢复会话: {chat_id}\n"

                return self.stream_response(client, session, send_payload, init_msg)
            else:
                try:
                    response = await session.send_and_wait(send_payload)
                    return response.data.content if response else "Empty response."
                finally:
                    # Destroy the session object to free memory, but keep on-disk data
                    await session.destroy()

        except Exception as e:
            self._emit_debug_log(f"请求错误: {e}")
            return f"Error: {str(e)}"

    async def stream_response(
        self, client, session, send_payload, init_message: str = ""
    ) -> AsyncGenerator:
        queue = asyncio.Queue()
        done = asyncio.Event()
        self.thinking_started = False
        has_content = False  # Track whether any content has been yielded

        def get_event_data(event, attr, default=""):
            if hasattr(event, "data"):
                data = event.data
                if data is None:
                    return default
                if isinstance(data, (str, int, float, bool)):
                    return str(data) if attr == "value" else default

                if isinstance(data, dict):
                    val = data.get(attr)
                    if val is None:
                        alt_attr = attr.replace("_", "") if "_" in attr else attr
                        val = data.get(alt_attr)
                        if val is None and "_" not in attr:
                            # Try converting camelCase to snake_case
                            import re

                            snake_attr = re.sub(r"(?<!^)(?=[A-Z])", "_", attr).lower()
                            val = data.get(snake_attr)
                else:
                    val = getattr(data, attr, None)
                    if val is None:
                        alt_attr = attr.replace("_", "") if "_" in attr else attr
                        val = getattr(data, alt_attr, None)
                        if val is None and "_" not in attr:
                            import re

                            snake_attr = re.sub(r"(?<!^)(?=[A-Z])", "_", attr).lower()
                            val = getattr(data, snake_attr, None)

                return val if val is not None else default
            return default

def handler(event):
|
||||||
|
event_type = (
|
||||||
|
getattr(event.type, "value", None)
|
||||||
|
if hasattr(event, "type")
|
||||||
|
else str(event.type)
|
||||||
|
)
|
||||||
|
|
||||||
|
# 记录工具事件的完整数据以辅助调试
|
||||||
|
if "tool" in event_type:
|
||||||
|
try:
|
||||||
|
data_str = str(event.data) if hasattr(event, "data") else "no data"
|
||||||
|
self._emit_debug_log(f"Tool Event [{event_type}]: {data_str}")
|
||||||
|
except:
|
||||||
|
pass
|
||||||
|
|
||||||
|
self._emit_debug_log(f"Event: {event_type}")
|
||||||
|
|
||||||
|
# 处理消息内容 (增量或全量)
|
||||||
|
if event_type in [
|
||||||
|
"assistant.message_delta",
|
||||||
|
"assistant.message.delta",
|
||||||
|
"assistant.message",
|
||||||
|
]:
|
||||||
|
# 记录全量消息事件的特殊日志,帮助排查为什么没有 delta
|
||||||
|
if event_type == "assistant.message":
|
||||||
|
self._emit_debug_log(
|
||||||
|
f"收到全量消息事件 (非 Delta): {get_event_data(event, 'content')[:50]}..."
|
||||||
|
)
|
||||||
|
|
||||||
|
delta = (
|
||||||
|
get_event_data(event, "delta_content")
|
||||||
|
or get_event_data(event, "deltaContent")
|
||||||
|
or get_event_data(event, "content")
|
||||||
|
or get_event_data(event, "text")
|
||||||
|
)
|
||||||
|
if delta:
|
||||||
|
if self.thinking_started:
|
||||||
|
queue.put_nowait("\n</think>\n")
|
||||||
|
self.thinking_started = False
|
||||||
|
queue.put_nowait(delta)
|
||||||
|
|
||||||
|
elif event_type in [
|
||||||
|
"assistant.reasoning_delta",
|
||||||
|
"assistant.reasoning.delta",
|
||||||
|
"assistant.reasoning",
|
||||||
|
]:
|
||||||
|
delta = (
|
||||||
|
get_event_data(event, "delta_content")
|
||||||
|
or get_event_data(event, "deltaContent")
|
||||||
|
or get_event_data(event, "content")
|
||||||
|
or get_event_data(event, "text")
|
||||||
|
)
|
||||||
|
if delta:
|
||||||
|
if not self.thinking_started and self.valves.SHOW_THINKING:
|
||||||
|
queue.put_nowait("<think>\n")
|
||||||
|
self.thinking_started = True
|
||||||
|
if self.thinking_started:
|
||||||
|
queue.put_nowait(delta)
|
||||||
|
|
||||||
|
elif event_type == "tool.execution_start":
|
||||||
|
# 尝试多个可能的字段来获取工具名称或描述
|
||||||
|
tool_name = (
|
||||||
|
get_event_data(event, "toolName")
|
||||||
|
or get_event_data(event, "name")
|
||||||
|
or get_event_data(event, "description")
|
||||||
|
or get_event_data(event, "tool_name")
|
||||||
|
or "Unknown Tool"
|
||||||
|
)
|
||||||
|
if not self.thinking_started and self.valves.SHOW_THINKING:
|
||||||
|
queue.put_nowait("<think>\n")
|
||||||
|
self.thinking_started = True
|
||||||
|
if self.thinking_started:
|
||||||
|
queue.put_nowait(f"\n正在运行工具: {tool_name}...\n")
|
||||||
|
self._emit_debug_log(f"Tool Start: {tool_name}")
|
||||||
|
|
||||||
|
elif event_type == "tool.execution_complete":
|
||||||
|
if self.thinking_started:
|
||||||
|
queue.put_nowait("工具运行完成。\n")
|
||||||
|
self._emit_debug_log("Tool Complete")
|
||||||
|
|
||||||
|
elif event_type == "session.compaction_start":
|
||||||
|
self._emit_debug_log("会话压缩开始")
|
||||||
|
|
||||||
|
elif event_type == "session.compaction_complete":
|
||||||
|
self._emit_debug_log("会话压缩完成")
|
||||||
|
|
||||||
|
elif event_type == "session.idle":
|
||||||
|
done.set()
|
||||||
|
elif event_type == "session.error":
|
||||||
|
msg = get_event_data(event, "message", "Unknown Error")
|
||||||
|
queue.put_nowait(f"\n[Error: {msg}]")
|
||||||
|
done.set()
|
||||||
|
|
||||||
|
unsubscribe = session.on(handler)
|
||||||
|
await session.send(send_payload)
|
||||||
|
|
||||||
|
if self.valves.DEBUG:
|
||||||
|
yield "<think>\n"
|
||||||
|
if init_message:
|
||||||
|
yield init_message
|
||||||
|
yield "> [Debug] 连接已建立,等待响应...\n"
|
||||||
|
self.thinking_started = True
|
||||||
|
|
||||||
|
try:
|
||||||
|
while not done.is_set():
|
||||||
|
try:
|
||||||
|
chunk = await asyncio.wait_for(
|
||||||
|
queue.get(), timeout=float(self.valves.TIMEOUT)
|
||||||
|
)
|
||||||
|
if chunk:
|
||||||
|
has_content = True
|
||||||
|
yield chunk
|
||||||
|
except asyncio.TimeoutError:
|
||||||
|
if done.is_set():
|
||||||
|
break
|
||||||
|
if self.thinking_started:
|
||||||
|
yield f"> [Debug] 等待响应中 (已超过 {self.valves.TIMEOUT} 秒)...\n"
|
||||||
|
continue
|
||||||
|
|
||||||
|
while not queue.empty():
|
||||||
|
chunk = queue.get_nowait()
|
||||||
|
if chunk:
|
||||||
|
has_content = True
|
||||||
|
yield chunk
|
||||||
|
|
||||||
|
if self.thinking_started:
|
||||||
|
yield "\n</think>\n"
|
||||||
|
has_content = True
|
||||||
|
|
||||||
|
# 核心修复:如果整个过程没有任何输出,返回一个提示,防止 OpenWebUI 报错
|
||||||
|
if not has_content:
|
||||||
|
yield "⚠️ Copilot 未返回任何内容。请检查模型 ID 是否正确,或尝试在 Valves 中开启 DEBUG 模式查看详细日志。"
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
yield f"\n[Stream Error: {str(e)}]"
|
||||||
|
finally:
|
||||||
|
unsubscribe()
|
||||||
|
# 销毁会话对象以释放内存,但保留磁盘数据
|
||||||
|
await session.destroy()
|
||||||
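The key lookup buried in `get_event_data` above can be exercised in isolation. The sketch below is a simplified, dict-only version of the same fallback chain (exact key, key with underscores stripped, then camelCase converted to snake_case); the helper name `normalize_key` is hypothetical and not part of the plugin.

```python
import re


def normalize_key(data: dict, attr: str, default=""):
    """Simplified, dict-only sketch of get_event_data's key fallback chain."""
    val = data.get(attr)
    if val is None:
        # Fallback 1: the key with underscores removed
        val = data.get(attr.replace("_", ""))
    if val is None and "_" not in attr:
        # Fallback 2: camelCase converted to snake_case
        snake = re.sub(r"(?<!^)(?=[A-Z])", "_", attr).lower()
        val = data.get(snake)
    return val if val is not None else default


print(normalize_key({"delta_content": "hi"}, "deltaContent"))  # → hi
```

This is why the handler can probe `delta_content` and `deltaContent` interchangeably: SDK event payloads may use either convention, and the lookup normalizes between them.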
@@ -217,6 +217,23 @@ def format_markdown_table(plugins: list[dict]) -> str:
     return "\n".join(lines)
 
 
+def _get_readme_url(file_path: str) -> str:
+    """
+    Generate the GitHub README URL from a plugin file path.
+    """
+    if not file_path:
+        return ""
+    # Extract plugin directory (e.g., plugins/filters/folder-memory/folder_memory.py -> plugins/filters/folder-memory)
+    from pathlib import Path
+
+    plugin_dir = Path(file_path).parent
+    # Convert to GitHub URL
+    return (
+        f"https://github.com/Fu-Jie/awesome-openwebui/blob/main/{plugin_dir}/README.md"
+    )
+
+
 def format_release_notes(
     comparison: dict[str, list], ignore_removed: bool = False
 ) -> str:
@@ -229,9 +246,12 @@ def format_release_notes(
     if comparison["added"]:
         lines.append("### 新增插件 / New Plugins")
         for plugin in comparison["added"]:
+            readme_url = _get_readme_url(plugin.get("file_path", ""))
             lines.append(f"- **{plugin['title']}** v{plugin['version']}")
             if plugin.get("description"):
                 lines.append(f"  - {plugin['description']}")
+            if readme_url:
+                lines.append(f"  - 📖 [README / 文档]({readme_url})")
         lines.append("")
 
     if comparison["updated"]:
@@ -258,7 +278,10 @@ def format_release_notes(
             )
             prev_ver = prev_manifest.get("version") or prev.get("version")
+
+            readme_url = _get_readme_url(curr.get("file_path", ""))
             lines.append(f"- **{curr_title}**: v{prev_ver} → v{curr_ver}")
+            if readme_url:
+                lines.append(f"  - 📖 [README / 文档]({readme_url})")
         lines.append("")
 
     if comparison["removed"] and not ignore_removed:
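The `_get_readme_url` helper added in this diff is what lets the release notes carry the README link format required by the release workflow. A minimal standalone sketch of the same path-to-URL mapping (the function name `get_readme_url` here is illustrative, not the script's own):

```python
from pathlib import Path


def get_readme_url(file_path: str) -> str:
    """Map a plugin source file to its GitHub README URL (sketch of _get_readme_url)."""
    if not file_path:
        return ""
    plugin_dir = Path(file_path).parent  # e.g. plugins/filters/folder-memory
    return f"https://github.com/Fu-Jie/awesome-openwebui/blob/main/{plugin_dir}/README.md"


print(get_readme_url("plugins/filters/folder-memory/folder_memory.py"))
# → https://github.com/Fu-Jie/awesome-openwebui/blob/main/plugins/filters/folder-memory/README.md
```

Note that `str(Path(...))` uses the platform's path separator; this yields forward slashes on the Linux runners the release workflow presumably executes on.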