Compare commits

...

8 Commits

17 changed files with 946 additions and 606 deletions

View File

@@ -7,26 +7,26 @@ A collection of enhancements, plugins, and prompts for [OpenWebUI](https://githu
<!-- STATS_START -->
## 📊 Community Stats
> 🕐 Auto-updated: 2026-01-09 20:14
> 🕐 Auto-updated: 2026-01-10 17:08
| 👤 Author | 👥 Followers | ⭐ Points | 🏆 Contributions |
|:---:|:---:|:---:|:---:|
| [Fu-Jie](https://openwebui.com/u/Fu-Jie) | **59** | **70** | **20** |
| [Fu-Jie](https://openwebui.com/u/Fu-Jie) | **71** | **72** | **21** |
| 📝 Posts | ⬇️ Downloads | 👁️ Views | 👍 Upvotes | 💾 Saves |
|:---:|:---:|:---:|:---:|:---:|
| **13** | **1016** | **10831** | **62** | **56** |
| **14** | **1066** | **11486** | **64** | **66** |
### 🔥 Top 6 Popular Plugins
| Rank | Plugin | Downloads | Views |
|:---:|------|:---:|:---:|
| 🥇 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) | 323 | 2878 |
| 🥈 | [Export to Excel](https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d) | 180 | 532 |
| 🥉 | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) | 121 | 1355 |
| 4⃣ | [📊 Smart Infographic (AntV)](https://openwebui.com/posts/smart_infographic_ad6f0c7f) | 106 | 1265 |
| 5⃣ | [Flash Card](https://openwebui.com/posts/flash_card_65a2ea8f) | 91 | 1665 |
| 6⃣ | [Export to Word (Enhanced)](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) | 80 | 751 |
| 🥇 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) | 341 | 3080 |
| 🥈 | [Export to Excel](https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d) | 181 | 551 |
| 🥉 | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) | 125 | 1390 |
| 4⃣ | [📊 Smart Infographic (AntV)](https://openwebui.com/posts/smart_infographic_ad6f0c7f) | 116 | 1355 |
| 5⃣ | [Flash Card](https://openwebui.com/posts/flash_card_65a2ea8f) | 92 | 1735 |
| 6⃣ | [Export to Word (Enhanced)](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) | 87 | 800 |
*See full stats in [Community Stats Report](./docs/community-stats.md)*
<!-- STATS_END -->
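The stats block above is regenerated between the `<!-- STATS_START -->` and `<!-- STATS_END -->` markers. A minimal sketch of how such a marker-delimited section can be swapped in place (the function name is illustrative; this is not the repo's actual update script):

```python
import re

def update_stats_block(readme_text: str, new_block: str) -> str:
    """Replace everything between the STATS_START/STATS_END markers."""
    pattern = re.compile(
        r"(<!-- STATS_START -->).*?(<!-- STATS_END -->)",
        flags=re.DOTALL,
    )
    # Use a replacement function so backslashes in new_block are not
    # misread as backreferences; the markers themselves are kept.
    return pattern.sub(
        lambda m: f"{m.group(1)}\n{new_block}\n{m.group(2)}", readme_text
    )

text = "intro\n<!-- STATS_START -->\nold stats\n<!-- STATS_END -->\nfooter"
print(update_stats_block(text, "## 📊 Community Stats"))
```

Passing a callable to `re.sub` sidesteps escape-sequence pitfalls that a plain replacement string would have when the generated table contains `\` characters.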

View File

@@ -7,26 +7,26 @@ OpenWebUI 增强功能集合。包含个人开发与收集的插件、提示词
<!-- STATS_START -->
## 📊 社区统计
> 🕐 自动更新于 2026-01-09 20:14
> 🕐 自动更新于 2026-01-10 17:08
| 👤 作者 | 👥 粉丝 | ⭐ 积分 | 🏆 贡献 |
|:---:|:---:|:---:|:---:|
| [Fu-Jie](https://openwebui.com/u/Fu-Jie) | **59** | **70** | **20** |
| [Fu-Jie](https://openwebui.com/u/Fu-Jie) | **71** | **72** | **21** |
| 📝 发布 | ⬇️ 下载 | 👁️ 浏览 | 👍 点赞 | 💾 收藏 |
|:---:|:---:|:---:|:---:|:---:|
| **13** | **1016** | **10831** | **62** | **56** |
| **14** | **1066** | **11486** | **64** | **66** |
### 🔥 热门插件 Top 6
| 排名 | 插件 | 下载 | 浏览 |
|:---:|------|:---:|:---:|
| 🥇 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) | 323 | 2878 |
| 🥈 | [Export to Excel](https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d) | 180 | 532 |
| 🥉 | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) | 121 | 1355 |
| 4⃣ | [📊 Smart Infographic (AntV)](https://openwebui.com/posts/smart_infographic_ad6f0c7f) | 106 | 1265 |
| 5⃣ | [Flash Card](https://openwebui.com/posts/flash_card_65a2ea8f) | 91 | 1665 |
| 6⃣ | [Export to Word (Enhanced)](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) | 80 | 751 |
| 🥇 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) | 341 | 3080 |
| 🥈 | [Export to Excel](https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d) | 181 | 551 |
| 🥉 | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) | 125 | 1390 |
| 4⃣ | [📊 Smart Infographic (AntV)](https://openwebui.com/posts/smart_infographic_ad6f0c7f) | 116 | 1355 |
| 5⃣ | [Flash Card](https://openwebui.com/posts/flash_card_65a2ea8f) | 92 | 1735 |
| 6⃣ | [Export to Word (Enhanced)](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) | 87 | 800 |
*完整统计请查看 [社区统计报告](./docs/community-stats.zh.md)*
<!-- STATS_END -->

View File

@@ -1,12 +1,13 @@
{
"total_posts": 13,
"total_downloads": 1016,
"total_views": 10831,
"total_upvotes": 62,
"total_posts": 14,
"total_downloads": 1066,
"total_views": 11486,
"total_upvotes": 64,
"total_downvotes": 2,
"total_saves": 56,
"total_saves": 66,
"total_comments": 15,
"by_type": {
"unknown": 1,
"action": 11,
"filter": 2
},
@@ -18,10 +19,10 @@
"version": "0.9.1",
"author": "Fu-Jie",
"description": "Intelligently analyzes text content and generates interactive mind maps to help users structure and visualize knowledge.",
"downloads": 323,
"views": 2878,
"downloads": 341,
"views": 3080,
"upvotes": 10,
"saves": 17,
"saves": 21,
"comments": 10,
"created_at": "2025-12-30",
"updated_at": "2026-01-07",
@@ -34,10 +35,10 @@
"version": "0.3.7",
"author": "Fu-Jie",
"description": "Extracts tables from chat messages and exports them to Excel (.xlsx) files with smart formatting.",
"downloads": 180,
"views": 532,
"downloads": 181,
"views": 551,
"upvotes": 3,
"saves": 3,
"saves": 4,
"comments": 0,
"created_at": "2025-05-30",
"updated_at": "2026-01-07",
@@ -50,8 +51,8 @@
"version": "1.1.0",
"author": "Fu-Jie",
"description": "Reduces token consumption in long conversations while maintaining coherence through intelligent summarization and message compression.",
"downloads": 121,
"views": 1355,
"downloads": 125,
"views": 1390,
"upvotes": 5,
"saves": 9,
"comments": 0,
@@ -66,8 +67,8 @@
"version": "1.4.1",
"author": "jeff",
"description": "AI-powered infographic generator based on AntV Infographic. Supports professional templates, auto-icon matching, and SVG/PNG downloads.",
"downloads": 106,
"views": 1265,
"downloads": 116,
"views": 1355,
"upvotes": 7,
"saves": 9,
"comments": 2,
@@ -82,10 +83,10 @@
"version": "0.2.4",
"author": "Fu-Jie",
"description": "Quickly generates beautiful flashcards from text, extracting key points and categories.",
"downloads": 91,
"views": 1665,
"downloads": 92,
"views": 1735,
"upvotes": 8,
"saves": 5,
"saves": 6,
"comments": 2,
"created_at": "2025-12-30",
"updated_at": "2026-01-07",
@@ -98,10 +99,10 @@
"version": "0.4.3",
"author": "Fu-Jie",
"description": "Export current conversation from Markdown to Word (.docx) with Mermaid diagrams rendered client-side (Mermaid.js, SVG+PNG), LaTeX math, real hyperlinks, improved tables, syntax highlighting, and blockquote support.",
"downloads": 80,
"views": 751,
"downloads": 87,
"views": 800,
"upvotes": 5,
"saves": 6,
"saves": 8,
"comments": 0,
"created_at": "2026-01-03",
"updated_at": "2026-01-07",
@@ -115,7 +116,7 @@
"author": "jeff",
"description": "基于 AntV Infographic 的智能信息图生成插件。支持多种专业模板,自动图标匹配,并提供 SVG/PNG 下载功能。",
"downloads": 35,
"views": 473,
"views": 480,
"upvotes": 3,
"saves": 0,
"comments": 0,
@@ -130,8 +131,8 @@
"version": "0.4.3",
"author": "Fu-Jie",
"description": "将对话导出为 Word (.docx),支持 Mermaid 图表 (客户端渲染 SVG+PNG)、LaTeX 数学公式、真实超链接、增强表格格式、代码高亮和引用块。",
"downloads": 30,
"views": 902,
"downloads": 31,
"views": 929,
"upvotes": 8,
"saves": 2,
"comments": 1,
@@ -139,6 +140,22 @@
"updated_at": "2026-01-07",
"url": "https://openwebui.com/posts/导出为_word_支持公式流程图表格和代码块_8a6306c0"
},
{
"title": "Deep Dive",
"slug": "deep_dive_c0b846e4",
"type": "action",
"version": "1.0.0",
"author": "Fu-Jie",
"description": "A comprehensive thinking lens that dives deep into any content - from context to logic, insights, and action paths.",
"downloads": 22,
"views": 259,
"upvotes": 3,
"saves": 3,
"comments": 0,
"created_at": "2026-01-08",
"updated_at": "2026-01-08",
"url": "https://openwebui.com/posts/deep_dive_c0b846e4"
},
{
"title": "思维导图",
"slug": "智能生成交互式思维导图帮助用户可视化知识_8d4b097b",
@@ -147,7 +164,7 @@
"author": "Fu-Jie",
"description": "智能分析文本内容,生成交互式思维导图,帮助用户结构化和可视化知识。",
"downloads": 17,
"views": 295,
"views": 304,
"upvotes": 2,
"saves": 1,
"comments": 0,
@@ -155,22 +172,6 @@
"updated_at": "2026-01-07",
"url": "https://openwebui.com/posts/智能生成交互式思维导图帮助用户可视化知识_8d4b097b"
},
{
"title": "Deep Dive",
"slug": "deep_dive_c0b846e4",
"type": "action",
"version": "1.0.0",
"author": "Fu-Jie",
"description": "A comprehensive thinking lens that dives deep into any content - from context to logic, insights, and action paths.",
"downloads": 14,
"views": 167,
"upvotes": 3,
"saves": 1,
"comments": 0,
"created_at": "2026-01-08",
"updated_at": "2026-01-08",
"url": "https://openwebui.com/posts/deep_dive_c0b846e4"
},
{
"title": "闪记卡 (Flash Card)",
"slug": "闪记卡生成插件_4a31eac3",
@@ -179,7 +180,7 @@
"author": "Fu-Jie",
"description": "快速将文本提炼为精美的学习记忆卡片,支持核心要点提取与分类。",
"downloads": 12,
"views": 339,
"views": 345,
"upvotes": 4,
"saves": 1,
"comments": 0,
@@ -195,7 +196,7 @@
"author": "Fu-Jie",
"description": "通过智能摘要和消息压缩,降低长对话的 token 消耗,同时保持对话连贯性。",
"downloads": 6,
"views": 148,
"views": 153,
"upvotes": 2,
"saves": 1,
"comments": 0,
@@ -211,13 +212,29 @@
"author": "Fu-Jie",
"description": "全方位的思维透镜 —— 从背景全景到逻辑脉络,从深度洞察到行动路径。",
"downloads": 1,
"views": 61,
"views": 86,
"upvotes": 2,
"saves": 1,
"comments": 0,
"created_at": "2026-01-08",
"updated_at": "2026-01-08",
"url": "https://openwebui.com/posts/精读_99830b0f"
},
{
"title": " 🛠️ Debug Open WebUI Plugins in Your Browser",
"slug": "debug_open_webui_plugins_in_your_browser_81bf7960",
"type": "unknown",
"version": "",
"author": "",
"description": "",
"downloads": 0,
"views": 19,
"upvotes": 2,
"saves": 0,
"comments": 0,
"created_at": "2026-01-10",
"updated_at": "2026-01-10",
"url": "https://openwebui.com/posts/debug_open_webui_plugins_in_your_browser_81bf7960"
}
],
"user": {
@@ -225,11 +242,11 @@
"name": "Fu-Jie",
"profile_url": "https://openwebui.com/u/Fu-Jie",
"profile_image": "https://community.s3.openwebui.com/uploads/users/b15d1348-4347-42b4-b815-e053342d6cb0/profile_d9510745-4bd4-4f8f-a997-4a21847d9300.webp",
"followers": 59,
"followers": 71,
"following": 2,
"total_points": 70,
"post_points": 60,
"total_points": 72,
"post_points": 62,
"comment_points": 10,
"contributions": 20
"contributions": 21
}
}
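The top-level totals in this JSON (`total_downloads`, `total_views`, and so on) are sums over the per-post entries. A quick consistency check can be sketched like this (field names match the file above; the sample values are taken from its first two posts):

```python
def aggregate(posts: list[dict]) -> dict:
    """Recompute the top-level totals from the per-post records."""
    fields = ["downloads", "views", "upvotes", "saves", "comments"]
    totals = {f"total_{f}": sum(p.get(f, 0) for p in posts) for f in fields}
    totals["total_posts"] = len(posts)
    return totals

posts = [
    {"downloads": 341, "views": 3080, "upvotes": 10, "saves": 21, "comments": 10},
    {"downloads": 181, "views": 551, "upvotes": 3, "saves": 4, "comments": 0},
]
print(aggregate(posts))  # total_downloads: 522, total_posts: 2, ...
```

Running this over the full `posts` array and diffing against the stored totals is a cheap sanity check before committing a regenerated stats file.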

View File

@@ -1,20 +1,21 @@
# 📊 OpenWebUI Community Stats Report
> 📅 Updated: 2026-01-09 20:14
> 📅 Updated: 2026-01-10 17:08
## 📈 Overview
| Metric | Value |
|------|------|
| 📝 Total Posts | 13 |
| ⬇️ Total Downloads | 1016 |
| 👁️ Total Views | 10831 |
| 👍 Total Upvotes | 62 |
| 💾 Total Saves | 56 |
| 📝 Total Posts | 14 |
| ⬇️ Total Downloads | 1066 |
| 👁️ Total Views | 11486 |
| 👍 Total Upvotes | 64 |
| 💾 Total Saves | 66 |
| 💬 Total Comments | 15 |
## 📂 By Type
- **unknown**: 1
- **action**: 11
- **filter**: 2
@@ -22,16 +23,17 @@
| Rank | Title | Type | Version | Downloads | Views | Upvotes | Saves | Updated |
|:---:|------|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 1 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) | action | 0.9.1 | 323 | 2878 | 10 | 17 | 2026-01-07 |
| 2 | [Export to Excel](https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d) | action | 0.3.7 | 180 | 532 | 3 | 3 | 2026-01-07 |
| 3 | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) | filter | 1.1.0 | 121 | 1355 | 5 | 9 | 2026-01-07 |
| 4 | [📊 Smart Infographic (AntV)](https://openwebui.com/posts/smart_infographic_ad6f0c7f) | action | 1.4.1 | 106 | 1265 | 7 | 9 | 2026-01-07 |
| 5 | [Flash Card](https://openwebui.com/posts/flash_card_65a2ea8f) | action | 0.2.4 | 91 | 1665 | 8 | 5 | 2026-01-07 |
| 6 | [Export to Word (Enhanced)](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) | action | 0.4.3 | 80 | 751 | 5 | 6 | 2026-01-07 |
| 7 | [📊 智能信息图 (AntV Infographic)](https://openwebui.com/posts/智能信息图_e04a48ff) | action | 1.4.1 | 35 | 473 | 3 | 0 | 2026-01-07 |
| 8 | [导出为 Word (增强版)](https://openwebui.com/posts/导出为_word_支持公式流程图表格和代码块_8a6306c0) | action | 0.4.3 | 30 | 902 | 8 | 2 | 2026-01-07 |
| 9 | [思维导图](https://openwebui.com/posts/智能生成交互式思维导图帮助用户可视化知识_8d4b097b) | action | 0.9.1 | 17 | 295 | 2 | 1 | 2026-01-07 |
| 10 | [Deep Dive](https://openwebui.com/posts/deep_dive_c0b846e4) | action | 1.0.0 | 14 | 167 | 3 | 1 | 2026-01-08 |
| 11 | [闪记卡 (Flash Card)](https://openwebui.com/posts/闪记卡生成插件_4a31eac3) | action | 0.2.4 | 12 | 339 | 4 | 1 | 2026-01-07 |
| 12 | [异步上下文压缩](https://openwebui.com/posts/异步上下文压缩_5c0617cb) | filter | 1.1.0 | 6 | 148 | 2 | 1 | 2026-01-07 |
| 13 | [精读](https://openwebui.com/posts/精读_99830b0f) | action | 1.0.0 | 1 | 61 | 2 | 1 | 2026-01-08 |
| 1 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) | action | 0.9.1 | 341 | 3080 | 10 | 21 | 2026-01-07 |
| 2 | [Export to Excel](https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d) | action | 0.3.7 | 181 | 551 | 3 | 4 | 2026-01-07 |
| 3 | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) | filter | 1.1.0 | 125 | 1390 | 5 | 9 | 2026-01-07 |
| 4 | [📊 Smart Infographic (AntV)](https://openwebui.com/posts/smart_infographic_ad6f0c7f) | action | 1.4.1 | 116 | 1355 | 7 | 9 | 2026-01-07 |
| 5 | [Flash Card](https://openwebui.com/posts/flash_card_65a2ea8f) | action | 0.2.4 | 92 | 1735 | 8 | 6 | 2026-01-07 |
| 6 | [Export to Word (Enhanced)](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) | action | 0.4.3 | 87 | 800 | 5 | 8 | 2026-01-07 |
| 7 | [📊 智能信息图 (AntV Infographic)](https://openwebui.com/posts/智能信息图_e04a48ff) | action | 1.4.1 | 35 | 480 | 3 | 0 | 2026-01-07 |
| 8 | [导出为 Word (增强版)](https://openwebui.com/posts/导出为_word_支持公式流程图表格和代码块_8a6306c0) | action | 0.4.3 | 31 | 929 | 8 | 2 | 2026-01-07 |
| 9 | [Deep Dive](https://openwebui.com/posts/deep_dive_c0b846e4) | action | 1.0.0 | 22 | 259 | 3 | 3 | 2026-01-08 |
| 10 | [思维导图](https://openwebui.com/posts/智能生成交互式思维导图帮助用户可视化知识_8d4b097b) | action | 0.9.1 | 17 | 304 | 2 | 1 | 2026-01-07 |
| 11 | [闪记卡 (Flash Card)](https://openwebui.com/posts/闪记卡生成插件_4a31eac3) | action | 0.2.4 | 12 | 345 | 4 | 1 | 2026-01-07 |
| 12 | [异步上下文压缩](https://openwebui.com/posts/异步上下文压缩_5c0617cb) | filter | 1.1.0 | 6 | 153 | 2 | 1 | 2026-01-07 |
| 13 | [精读](https://openwebui.com/posts/精读_99830b0f) | action | 1.0.0 | 1 | 86 | 2 | 1 | 2026-01-08 |
| 14 | [ 🛠️ Debug Open WebUI Plugins in Your Browser](https://openwebui.com/posts/debug_open_webui_plugins_in_your_browser_81bf7960) | unknown | | 0 | 19 | 2 | 0 | 2026-01-10 |

View File

@@ -1,20 +1,21 @@
# 📊 OpenWebUI 社区统计报告
> 📅 更新时间: 2026-01-09 20:14
> 📅 更新时间: 2026-01-10 17:08
## 📈 总览
| 指标 | 数值 |
|------|------|
| 📝 发布数量 | 13 |
| ⬇️ 总下载量 | 1016 |
| 👁️ 总浏览量 | 10831 |
| 👍 总点赞数 | 62 |
| 💾 总收藏数 | 56 |
| 📝 发布数量 | 14 |
| ⬇️ 总下载量 | 1066 |
| 👁️ 总浏览量 | 11486 |
| 👍 总点赞数 | 64 |
| 💾 总收藏数 | 66 |
| 💬 总评论数 | 15 |
## 📂 按类型分类
- **unknown**: 1
- **action**: 11
- **filter**: 2
@@ -22,16 +23,17 @@
| 排名 | 标题 | 类型 | 版本 | 下载 | 浏览 | 点赞 | 收藏 | 更新日期 |
|:---:|------|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 1 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) | action | 0.9.1 | 323 | 2878 | 10 | 17 | 2026-01-07 |
| 2 | [Export to Excel](https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d) | action | 0.3.7 | 180 | 532 | 3 | 3 | 2026-01-07 |
| 3 | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) | filter | 1.1.0 | 121 | 1355 | 5 | 9 | 2026-01-07 |
| 4 | [📊 Smart Infographic (AntV)](https://openwebui.com/posts/smart_infographic_ad6f0c7f) | action | 1.4.1 | 106 | 1265 | 7 | 9 | 2026-01-07 |
| 5 | [Flash Card](https://openwebui.com/posts/flash_card_65a2ea8f) | action | 0.2.4 | 91 | 1665 | 8 | 5 | 2026-01-07 |
| 6 | [Export to Word (Enhanced)](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) | action | 0.4.3 | 80 | 751 | 5 | 6 | 2026-01-07 |
| 7 | [📊 智能信息图 (AntV Infographic)](https://openwebui.com/posts/智能信息图_e04a48ff) | action | 1.4.1 | 35 | 473 | 3 | 0 | 2026-01-07 |
| 8 | [导出为 Word (增强版)](https://openwebui.com/posts/导出为_word_支持公式流程图表格和代码块_8a6306c0) | action | 0.4.3 | 30 | 902 | 8 | 2 | 2026-01-07 |
| 9 | [思维导图](https://openwebui.com/posts/智能生成交互式思维导图帮助用户可视化知识_8d4b097b) | action | 0.9.1 | 17 | 295 | 2 | 1 | 2026-01-07 |
| 10 | [Deep Dive](https://openwebui.com/posts/deep_dive_c0b846e4) | action | 1.0.0 | 14 | 167 | 3 | 1 | 2026-01-08 |
| 11 | [闪记卡 (Flash Card)](https://openwebui.com/posts/闪记卡生成插件_4a31eac3) | action | 0.2.4 | 12 | 339 | 4 | 1 | 2026-01-07 |
| 12 | [异步上下文压缩](https://openwebui.com/posts/异步上下文压缩_5c0617cb) | filter | 1.1.0 | 6 | 148 | 2 | 1 | 2026-01-07 |
| 13 | [精读](https://openwebui.com/posts/精读_99830b0f) | action | 1.0.0 | 1 | 61 | 2 | 1 | 2026-01-08 |
| 1 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) | action | 0.9.1 | 341 | 3080 | 10 | 21 | 2026-01-07 |
| 2 | [Export to Excel](https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d) | action | 0.3.7 | 181 | 551 | 3 | 4 | 2026-01-07 |
| 3 | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) | filter | 1.1.0 | 125 | 1390 | 5 | 9 | 2026-01-07 |
| 4 | [📊 Smart Infographic (AntV)](https://openwebui.com/posts/smart_infographic_ad6f0c7f) | action | 1.4.1 | 116 | 1355 | 7 | 9 | 2026-01-07 |
| 5 | [Flash Card](https://openwebui.com/posts/flash_card_65a2ea8f) | action | 0.2.4 | 92 | 1735 | 8 | 6 | 2026-01-07 |
| 6 | [Export to Word (Enhanced)](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) | action | 0.4.3 | 87 | 800 | 5 | 8 | 2026-01-07 |
| 7 | [📊 智能信息图 (AntV Infographic)](https://openwebui.com/posts/智能信息图_e04a48ff) | action | 1.4.1 | 35 | 480 | 3 | 0 | 2026-01-07 |
| 8 | [导出为 Word (增强版)](https://openwebui.com/posts/导出为_word_支持公式流程图表格和代码块_8a6306c0) | action | 0.4.3 | 31 | 929 | 8 | 2 | 2026-01-07 |
| 9 | [Deep Dive](https://openwebui.com/posts/deep_dive_c0b846e4) | action | 1.0.0 | 22 | 259 | 3 | 3 | 2026-01-08 |
| 10 | [思维导图](https://openwebui.com/posts/智能生成交互式思维导图帮助用户可视化知识_8d4b097b) | action | 0.9.1 | 17 | 304 | 2 | 1 | 2026-01-07 |
| 11 | [闪记卡 (Flash Card)](https://openwebui.com/posts/闪记卡生成插件_4a31eac3) | action | 0.2.4 | 12 | 345 | 4 | 1 | 2026-01-07 |
| 12 | [异步上下文压缩](https://openwebui.com/posts/异步上下文压缩_5c0617cb) | filter | 1.1.0 | 6 | 153 | 2 | 1 | 2026-01-07 |
| 13 | [精读](https://openwebui.com/posts/精读_99830b0f) | action | 1.0.0 | 1 | 86 | 2 | 1 | 2026-01-08 |
| 14 | [ 🛠️ Debug Open WebUI Plugins in Your Browser](https://openwebui.com/posts/debug_open_webui_plugins_in_your_browser_81bf7960) | unknown | | 0 | 19 | 2 | 0 | 2026-01-10 |

View File

@@ -53,7 +53,6 @@ OpenWebUI supports four types of plugins, each serving a different purpose:
| [Knowledge Card](actions/knowledge-card.md) | Action | Create beautiful learning flashcards | 0.2.0 |
| [Export to Excel](actions/export-to-excel.md) | Action | Export chat history to Excel files | 1.0.0 |
| [Export to Word](actions/export-to-word.md) | Action | Export chat content to Word (.docx) with formatting | 0.1.0 |
| [Summary](actions/summary.md) | Action | Text summarization tool | 1.0.0 |
| [Async Context Compression](filters/async-context-compression.md) | Filter | Intelligent context compression | 1.0.0 |
| [Context Enhancement](filters/context-enhancement.md) | Filter | Enhance chat context | 1.0.0 |
| [Gemini Manifold Companion](filters/gemini-manifold-companion.md) | Filter | Companion for Gemini Manifold | 1.0.0 |

View File

@@ -53,7 +53,6 @@ OpenWebUI 支持四种类型的插件,每种都有不同的用途:
| [Knowledge Card知识卡片](actions/knowledge-card.md) | Action | 生成精美学习卡片 | 0.2.0 |
| [Export to Excel导出到 Excel](actions/export-to-excel.md) | Action | 导出聊天记录为 Excel | 1.0.0 |
| [Export to Word导出为 Word](actions/export-to-word.md) | Action | 将聊天内容导出为 Word (.docx) 并保留格式 | 0.1.0 |
| [Summary摘要](actions/summary.md) | Action | 文本摘要工具 | 1.0.0 |
| [Async Context Compression异步上下文压缩](filters/async-context-compression.md) | Filter | 智能上下文压缩 | 1.0.0 |
| [Context Enhancement上下文增强](filters/context-enhancement.md) | Filter | 提升对话上下文 | 1.0.0 |
| [Gemini Manifold Companion](filters/gemini-manifold-companion.md) | Filter | Gemini Manifold 伴侣 | 1.0.0 |

View File

@@ -187,7 +187,6 @@ nav:
- Knowledge Card: plugins/actions/knowledge-card.md
- Export to Excel: plugins/actions/export-to-excel.md
- Export to Word: plugins/actions/export-to-word.md
- Summary: plugins/actions/summary.md
- Filters:
- plugins/filters/index.md
- Async Context Compression: plugins/filters/async-context-compression.md

View File

@@ -5,7 +5,7 @@ author: Fu-Jie
author_url: https://github.com/Fu-Jie
funding_url: https://github.com/Fu-Jie/awesome-openwebui
description: Reduces token consumption in long conversations while maintaining coherence through intelligent summarization and message compression.
version: 1.1.0
version: 1.1.1
openwebui_id: b1655bc8-6de9-4cad-8cb5-a6f7829a02ce
license: MIT
@@ -139,6 +139,10 @@ debug_mode
Default: true
Description: Prints detailed debug information to the log. Set this to `false` in production.
show_debug_log
Default: false
Description: Print debug logs to browser console (F12). Useful for frontend debugging.
🔧 Deployment
═══════════════════════════════════════════════════════
@@ -355,6 +359,9 @@ class Filter:
debug_mode: bool = Field(
default=True, description="Enable detailed logging for debugging."
)
show_debug_log: bool = Field(
default=False, description="Print debug logs to browser console (F12)"
)
def _save_summary(self, chat_id: str, summary: str, compressed_count: int):
"""Saves the summary to the database."""
@@ -516,12 +523,109 @@ class Filter:
return message
async def _emit_debug_log(
self,
__event_call__,
chat_id: str,
original_count: int,
compressed_count: int,
summary_length: int,
kept_first: int,
kept_last: int,
):
"""Emit debug log to browser console via JS execution"""
if not self.valves.show_debug_log or not __event_call__:
return
try:
# Prepare data for JS
log_data = {
"chatId": chat_id,
"originalCount": original_count,
"compressedCount": compressed_count,
"summaryLength": summary_length,
"keptFirst": kept_first,
"keptLast": kept_last,
"ratio": (
f"{(1 - compressed_count/original_count)*100:.1f}%"
if original_count > 0
else "0%"
),
}
# Construct JS code
js_code = f"""
(async function() {{
console.group("🗜️ Async Context Compression Debug");
console.log("Chat ID:", {json.dumps(chat_id)});
console.log("Messages:", {original_count} + " -> " + {compressed_count});
console.log("Compression Ratio:", {json.dumps(log_data['ratio'])});
console.log("Summary Length:", {summary_length} + " chars");
console.log("Configuration:", {{
"Keep First": {kept_first},
"Keep Last": {kept_last}
}});
console.groupEnd();
}})();
"""
await __event_call__(
{
"type": "execute",
"data": {"code": js_code},
}
)
except Exception as e:
print(f"Error emitting debug log: {e}")
async def _log(self, message: str, type: str = "info", event_call=None):
"""Unified logging to both backend (print) and frontend (console.log)"""
# Backend logging
if self.valves.debug_mode:
print(message)
# Frontend logging
if self.valves.show_debug_log and event_call:
try:
css = "color: #3b82f6;" # Blue default
if type == "error":
css = "color: #ef4444; font-weight: bold;" # Red
elif type == "warning":
css = "color: #f59e0b;" # Orange
elif type == "success":
css = "color: #10b981; font-weight: bold;" # Green
# Clean message for frontend: remove separators and extra newlines
lines = message.split("\n")
# Keep lines that don't start with lots of equals or hyphens
filtered_lines = [
line
for line in lines
if not line.strip().startswith("====")
and not line.strip().startswith("----")
]
clean_message = "\n".join(filtered_lines).strip()
if not clean_message:
return
# Escape for embedding in a JS string literal: backslashes first, then quotes and newlines
safe_message = clean_message.replace("\\", "\\\\").replace('"', '\\"').replace("\n", "\\n")
js_code = f"""
console.log("%c[Compression] {safe_message}", "{css}");
"""
await event_call({"type": "execute", "data": {"code": js_code}})
except Exception as e:
print(f"Failed to emit log to frontend: {e}")
async def inlet(
self,
body: dict,
__user__: Optional[dict] = None,
__metadata__: dict = None,
__event_emitter__: Callable[[Any], Awaitable[None]] = None,
__event_call__: Callable[[Any], Awaitable[None]] = None,
) -> dict:
"""
Executed before sending to the LLM.
@@ -530,10 +634,11 @@ class Filter:
messages = body.get("messages", [])
chat_id = __metadata__["chat_id"]
if self.valves.debug_mode:
print(f"\n{'='*60}")
print(f"[Inlet] Chat ID: {chat_id}")
print(f"[Inlet] Received {len(messages)} messages")
if self.valves.debug_mode or self.valves.show_debug_log:
await self._log(
f"\n{'='*60}\n[Inlet] Chat ID: {chat_id}\n[Inlet] Received {len(messages)} messages",
event_call=__event_call__,
)
# Record the target compression progress for the original messages, for use in outlet
# Target is to compress up to the (total - keep_last) message
@@ -541,17 +646,18 @@ class Filter:
# [Optimization] Simple state cleanup check
if chat_id in self.temp_state:
if self.valves.debug_mode:
print(
f"[Inlet] ⚠️ Overwriting unconsumed old state (Chat ID: {chat_id})"
)
await self._log(
f"[Inlet] ⚠️ Overwriting unconsumed old state (Chat ID: {chat_id})",
type="warning",
event_call=__event_call__,
)
self.temp_state[chat_id] = target_compressed_count
if self.valves.debug_mode:
print(
f"[Inlet] Recorded target compression progress: {target_compressed_count}"
)
await self._log(
f"[Inlet] Recorded target compression progress: {target_compressed_count}",
event_call=__event_call__,
)
# Load summary record
summary_record = await asyncio.to_thread(self._load_summary_record, chat_id)
@@ -600,19 +706,32 @@ class Filter:
}
)
if self.valves.debug_mode:
print(
f"[Inlet] Applied summary: Head({len(head_messages)}) + Summary + Tail({len(tail_messages)})"
)
await self._log(
f"[Inlet] Applied summary: Head({len(head_messages)}) + Summary + Tail({len(tail_messages)})",
type="success",
event_call=__event_call__,
)
# Emit debug log to frontend (Keep the structured log as well)
await self._emit_debug_log(
__event_call__,
chat_id,
len(messages),
len(final_messages),
len(summary_record.summary),
self.valves.keep_first,
self.valves.keep_last,
)
else:
# No summary, use original messages
final_messages = messages
body["messages"] = final_messages
if self.valves.debug_mode:
print(f"[Inlet] Final send: {len(body['messages'])} messages")
print(f"{'='*60}\n")
await self._log(
f"[Inlet] Final send: {len(body['messages'])} messages\n{'='*60}\n",
event_call=__event_call__,
)
return body
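The inlet above rebuilds the context as `keep_first` head messages, one summary message, then `keep_last` tail messages. Stripped of logging and persistence, the assembly can be sketched as follows (the function name, summary role, and wrapper text are illustrative, not the plugin's exact choices):

```python
def apply_summary(messages: list[dict], summary: str,
                  keep_first: int, keep_last: int) -> list[dict]:
    """Replace the middle of the history with a single summary message."""
    if len(messages) <= keep_first + keep_last:
        return messages  # too short to compress
    head = messages[:keep_first]
    tail = messages[-keep_last:] if keep_last else []
    summary_msg = {
        "role": "system",
        "content": f"[Conversation summary]\n{summary}",
    }
    return head + [summary_msg] + tail

history = [{"role": "user", "content": f"msg {i}"} for i in range(10)]
compressed = apply_summary(history, "Earlier discussion recap.", 2, 3)
print(len(compressed))  # 2 head + 1 summary + 3 tail = 6
```

The length guard matters: when the history is shorter than `keep_first + keep_last`, compression would only add a message, so the original list is returned untouched.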
@@ -622,6 +741,7 @@ class Filter:
__user__: Optional[dict] = None,
__metadata__: dict = None,
__event_emitter__: Callable[[Any], Awaitable[None]] = None,
__event_call__: Callable[[Any], Awaitable[None]] = None,
) -> dict:
"""
Executed after the LLM response is complete.
@@ -630,21 +750,23 @@ class Filter:
chat_id = __metadata__["chat_id"]
model = body.get("model", "gpt-3.5-turbo")
if self.valves.debug_mode:
print(f"\n{'='*60}")
print(f"[Outlet] Chat ID: {chat_id}")
print(f"[Outlet] Response complete")
if self.valves.debug_mode or self.valves.show_debug_log:
await self._log(
f"\n{'='*60}\n[Outlet] Chat ID: {chat_id}\n[Outlet] Response complete",
event_call=__event_call__,
)
# Process Token calculation and summary generation asynchronously in the background (do not wait for completion, do not affect output)
asyncio.create_task(
self._check_and_generate_summary_async(
chat_id, model, body, __user__, __event_emitter__
chat_id, model, body, __user__, __event_emitter__, __event_call__
)
)
if self.valves.debug_mode:
print(f"[Outlet] Background processing started")
print(f"{'='*60}\n")
await self._log(
f"[Outlet] Background processing started\n{'='*60}\n",
event_call=__event_call__,
)
return body
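The outlet hands summarization off with `asyncio.create_task` so the user's response is never blocked. One caveat of fire-and-forget tasks is that unhandled exceptions can vanish silently; a sketch of the pattern with an explicit done-callback for error reporting (names here are illustrative):

```python
import asyncio

async def generate_summary(chat_id: str) -> None:
    """Stand-in for the real background summarization work."""
    await asyncio.sleep(0)  # simulate I/O

def launch_background_summary(chat_id: str) -> asyncio.Task:
    task = asyncio.create_task(generate_summary(chat_id))

    def _report(t: asyncio.Task) -> None:
        # Check cancellation first: exception() raises on a cancelled task.
        if not t.cancelled() and t.exception():
            print(f"[Background] summary failed: {t.exception()}")

    task.add_done_callback(_report)
    return task

async def main():
    task = launch_background_summary("abc123")
    await task  # the plugin's outlet returns immediately instead
    print("done")

asyncio.run(main())
```

Keeping a reference to the returned task (as the callback here implicitly does) also prevents the event loop from garbage-collecting it mid-flight.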
@@ -655,6 +777,7 @@ class Filter:
body: dict,
user_data: Optional[dict],
__event_emitter__: Callable[[Any], Awaitable[None]] = None,
__event_call__: Callable[[Any], Awaitable[None]] = None,
):
"""
Background processing: Calculates Token count and generates summary (does not block response).
@@ -668,36 +791,50 @@ class Filter:
"compression_threshold_tokens", self.valves.compression_threshold_tokens
)
if self.valves.debug_mode:
print(f"\n[🔍 Background Calculation] Starting Token count...")
await self._log(
f"\n[🔍 Background Calculation] Starting Token count...",
event_call=__event_call__,
)
# Calculate Token count in a background thread
current_tokens = await asyncio.to_thread(
self._calculate_messages_tokens, messages
)
if self.valves.debug_mode:
print(f"[🔍 Background Calculation] Token count: {current_tokens}")
await self._log(
f"[🔍 Background Calculation] Token count: {current_tokens}",
event_call=__event_call__,
)
# Check if compression is needed
if current_tokens >= compression_threshold_tokens:
if self.valves.debug_mode:
print(
f"[🔍 Background Calculation] ⚡ Compression threshold triggered (Token: {current_tokens} >= {compression_threshold_tokens})"
)
await self._log(
f"[🔍 Background Calculation] ⚡ Compression threshold triggered (Token: {current_tokens} >= {compression_threshold_tokens})",
type="warning",
event_call=__event_call__,
)
# Proceed to generate summary
await self._generate_summary_async(
messages, chat_id, body, user_data, __event_emitter__
messages,
chat_id,
body,
user_data,
__event_emitter__,
__event_call__,
)
else:
if self.valves.debug_mode:
print(
f"[🔍 Background Calculation] Compression threshold not reached (Token: {current_tokens} < {compression_threshold_tokens})"
)
await self._log(
f"[🔍 Background Calculation] Compression threshold not reached (Token: {current_tokens} < {compression_threshold_tokens})",
event_call=__event_call__,
)
except Exception as e:
print(f"[🔍 Background Calculation] ❌ Error: {str(e)}")
await self._log(
f"[🔍 Background Calculation] ❌ Error: {str(e)}",
type="error",
event_call=__event_call__,
)
async def _generate_summary_async(
self,
@@ -706,6 +843,7 @@ class Filter:
body: dict,
user_data: Optional[dict],
__event_emitter__: Callable[[Any], Awaitable[None]] = None,
__event_call__: Callable[[Any], Awaitable[None]] = None,
):
"""
Generates summary asynchronously (runs in background, does not block response).
@@ -715,18 +853,20 @@ class Filter:
3. Generate summary for the remaining middle messages.
"""
try:
if self.valves.debug_mode:
print(f"\n[🤖 Async Summary Task] Starting...")
await self._log(
f"\n[🤖 Async Summary Task] Starting...", event_call=__event_call__
)
# 1. Get target compression progress
# Prioritize getting from temp_state (calculated by inlet). If unavailable (e.g., after restart), assume current is full history.
target_compressed_count = self.temp_state.pop(chat_id, None)
if target_compressed_count is None:
target_compressed_count = max(0, len(messages) - self.valves.keep_last)
if self.valves.debug_mode:
print(
f"[🤖 Async Summary Task] ⚠️ Could not get inlet state, estimating progress using current message count: {target_compressed_count}"
)
await self._log(
f"[🤖 Async Summary Task] ⚠️ Could not get inlet state, estimating progress using current message count: {target_compressed_count}",
type="warning",
event_call=__event_call__,
)
# 2. Determine the range of messages to compress (Middle)
start_index = self.valves.keep_first
@@ -736,18 +876,18 @@ class Filter:
# Ensure indices are valid
if start_index >= end_index:
if self.valves.debug_mode:
print(
f"[🤖 Async Summary Task] Middle messages empty (Start: {start_index}, End: {end_index}), skipping"
)
await self._log(
f"[🤖 Async Summary Task] Middle messages empty (Start: {start_index}, End: {end_index}), skipping",
event_call=__event_call__,
)
return
middle_messages = messages[start_index:end_index]
if self.valves.debug_mode:
print(
f"[🤖 Async Summary Task] Middle messages to process: {len(middle_messages)}"
)
await self._log(
f"[🤖 Async Summary Task] Middle messages to process: {len(middle_messages)}",
event_call=__event_call__,
)
# 3. Check Token limit and truncate (Max Context Truncation)
# [Optimization] Use the summary model's (if any) threshold to decide how many middle messages can be processed
@@ -762,22 +902,26 @@ class Filter:
"max_context_tokens", self.valves.max_context_tokens
)
if self.valves.debug_mode:
print(
f"[🤖 Async Summary Task] Using max limit for model {summary_model_id}: {max_context_tokens} Tokens"
)
# Calculate current total Tokens (using summary model for counting)
total_tokens = await asyncio.to_thread(
self._calculate_messages_tokens, messages
await self._log(
f"[🤖 Async Summary Task] Using max limit for model {summary_model_id}: {max_context_tokens} Tokens",
event_call=__event_call__,
)
if total_tokens > max_context_tokens:
excess_tokens = total_tokens - max_context_tokens
if self.valves.debug_mode:
print(
f"[🤖 Async Summary Task] ⚠️ Total Tokens ({total_tokens}) exceed summary model limit ({max_context_tokens}), need to remove approx {excess_tokens} Tokens"
)
# Calculate tokens for middle messages only (plus buffer for prompt)
# We only send middle_messages to the summary model, so we shouldn't count the full history against its limit.
middle_tokens = await asyncio.to_thread(
self._calculate_messages_tokens, middle_messages
)
# Add buffer for prompt and output (approx 2000 tokens)
estimated_input_tokens = middle_tokens + 2000
if estimated_input_tokens > max_context_tokens:
excess_tokens = estimated_input_tokens - max_context_tokens
await self._log(
f"[🤖 Async Summary Task] ⚠️ Middle messages ({middle_tokens} Tokens) + Buffer exceed summary model limit ({max_context_tokens}), need to remove approx {excess_tokens} Tokens",
type="warning",
event_call=__event_call__,
)
# Remove from the head of middle_messages
removed_tokens = 0
@@ -785,20 +929,22 @@ class Filter:
while removed_tokens < excess_tokens and middle_messages:
msg_to_remove = middle_messages.pop(0)
msg_tokens = self._count_tokens(str(msg_to_remove.get("content", "")))
msg_tokens = self._count_tokens(
str(msg_to_remove.get("content", ""))
)
removed_tokens += msg_tokens
removed_count += 1
if self.valves.debug_mode:
print(
f"[🤖 Async Summary Task] Removed {removed_count} messages, totaling {removed_tokens} Tokens"
)
await self._log(
f"[🤖 Async Summary Task] Removed {removed_count} messages, totaling {removed_tokens} Tokens",
event_call=__event_call__,
)
if not middle_messages:
if self.valves.debug_mode:
print(
f"[🤖 Async Summary Task] Middle messages empty after truncation, skipping summary generation"
)
await self._log(
f"[🤖 Async Summary Task] Middle messages empty after truncation, skipping summary generation",
event_call=__event_call__,
)
return
# 4. Build conversation text
@@ -820,14 +966,14 @@ class Filter:
)
new_summary = await self._call_summary_llm(
None, conversation_text, body, user_data
None, conversation_text, body, user_data, __event_call__
)
# 6. Save new summary
if self.valves.debug_mode:
print(
"[Optimization] Saving summary in a background thread to avoid blocking the event loop."
)
await self._log(
"[Optimization] Saving summary in a background thread to avoid blocking the event loop.",
event_call=__event_call__,
)
await asyncio.to_thread(
self._save_summary, chat_id, new_summary, target_compressed_count
@@ -845,16 +991,22 @@ class Filter:
}
)
if self.valves.debug_mode:
print(
f"[🤖 Async Summary Task] ✅ Complete! New summary length: {len(new_summary)} characters"
)
print(
f"[🤖 Async Summary Task] Progress update: Compressed up to original message {target_compressed_count}"
)
await self._log(
f"[🤖 Async Summary Task] ✅ Complete! New summary length: {len(new_summary)} characters",
type="success",
event_call=__event_call__,
)
await self._log(
f"[🤖 Async Summary Task] Progress update: Compressed up to original message {target_compressed_count}",
event_call=__event_call__,
)
except Exception as e:
print(f"[🤖 Async Summary Task] ❌ Error: {str(e)}")
await self._log(
f"[🤖 Async Summary Task] ❌ Error: {str(e)}",
type="error",
event_call=__event_call__,
)
import traceback
traceback.print_exc()
@@ -891,12 +1043,15 @@ class Filter:
new_conversation_text: str,
body: dict,
user_data: dict,
__event_call__: Callable[[Any], Awaitable[None]] = None,
) -> str:
"""
Calls the LLM to generate a summary using Open WebUI's built-in method.
"""
if self.valves.debug_mode:
print(f"[🤖 LLM Call] Using Open WebUI's built-in method")
await self._log(
f"[🤖 LLM Call] Using Open WebUI's built-in method",
event_call=__event_call__,
)
# Build summary prompt (Optimized)
summary_prompt = f"""
@@ -935,8 +1090,7 @@ Based on the content above, generate the summary:
# Determine the model to use
model = self.valves.summary_model or body.get("model", "")
if self.valves.debug_mode:
print(f"[🤖 LLM Call] Model: {model}")
await self._log(f"[🤖 LLM Call] Model: {model}", event_call=__event_call__)
# Build payload
payload = {
@@ -954,18 +1108,19 @@ Based on the content above, generate the summary:
raise ValueError("Could not get user ID")
# [Optimization] Get user object in a background thread to avoid blocking the event loop.
if self.valves.debug_mode:
print(
"[Optimization] Getting user object in a background thread to avoid blocking the event loop."
)
await self._log(
"[Optimization] Getting user object in a background thread to avoid blocking the event loop.",
event_call=__event_call__,
)
user = await asyncio.to_thread(Users.get_user_by_id, user_id)
if not user:
raise ValueError(f"Could not find user: {user_id}")
if self.valves.debug_mode:
print(f"[🤖 LLM Call] User: {user.email}")
print(f"[🤖 LLM Call] Sending request...")
await self._log(
f"[🤖 LLM Call] User: {user.email}\n[🤖 LLM Call] Sending request...",
event_call=__event_call__,
)
# Create Request object
request = Request(scope={"type": "http", "app": webui_app})
@@ -978,8 +1133,11 @@ Based on the content above, generate the summary:
summary = response["choices"][0]["message"]["content"].strip()
if self.valves.debug_mode:
print(f"[🤖 LLM Call] ✅ Successfully received summary")
await self._log(
f"[🤖 LLM Call] ✅ Successfully received summary",
type="success",
event_call=__event_call__,
)
return summary
@@ -991,7 +1149,10 @@ Based on the content above, generate the summary:
"If this is a pipeline (Pipe) model or an incompatible model, please specify a compatible summary model (e.g., 'gemini-2.5-flash') in the configuration."
)
if self.valves.debug_mode:
print(f"[🤖 LLM Call] ❌ {error_message}")
await self._log(
f"[🤖 LLM Call] ❌ {error_message}",
type="error",
event_call=__event_call__,
)
raise Exception(error_message)

View File

@@ -5,7 +5,7 @@ author: Fu-Jie
author_url: https://github.com/Fu-Jie
funding_url: https://github.com/Fu-Jie/awesome-openwebui
description: 通过智能摘要和消息压缩,降低长对话的 token 消耗,同时保持对话连贯性。
version: 1.1.0
version: 1.1.1
openwebui_id: 5c0617cb-a9e4-4bd6-a440-d276534ebd18
license: MIT
@@ -138,6 +138,10 @@ debug_mode (调试模式)
默认: true
说明: 在日志中打印详细的调试信息。生产环境建议设为 `false`。
show_debug_log (前端调试日志)
默认: false
说明: 在浏览器控制台打印调试日志 (F12)。便于前端调试。
🔧 部署配置
═══════════════════════════════════════════════════════
@@ -345,6 +349,9 @@ class Filter:
default=0.1, ge=0.0, le=2.0, description="摘要生成的温度参数"
)
debug_mode: bool = Field(default=True, description="调试模式,打印详细日志")
show_debug_log: bool = Field(
default=False, description="在浏览器控制台打印调试日志 (F12)"
)
def _save_summary(self, chat_id: str, summary: str, compressed_count: int):
"""保存摘要到数据库"""
@@ -426,9 +433,7 @@ class Filter:
# 回退策略:粗略估算 (1 token ≈ 4 chars)
return len(text) // 4
def _calculate_messages_tokens(
self, messages: List[Dict]
) -> int:
def _calculate_messages_tokens(self, messages: List[Dict]) -> int:
"""计算消息列表的总 Token 数"""
total_tokens = 0
for msg in messages:
@@ -502,12 +507,109 @@ class Filter:
return message
async def _emit_debug_log(
self,
__event_call__,
chat_id: str,
original_count: int,
compressed_count: int,
summary_length: int,
kept_first: int,
kept_last: int,
):
"""Emit debug log to browser console via JS execution"""
if not self.valves.show_debug_log or not __event_call__:
return
try:
# Prepare data for JS
log_data = {
"chatId": chat_id,
"originalCount": original_count,
"compressedCount": compressed_count,
"summaryLength": summary_length,
"keptFirst": kept_first,
"keptLast": kept_last,
"ratio": (
f"{(1 - compressed_count/original_count)*100:.1f}%"
if original_count > 0
else "0%"
),
}
# Construct JS code
js_code = f"""
(async function() {{
console.group("🗜️ Async Context Compression Debug");
console.log("Chat ID:", {json.dumps(chat_id)});
console.log("Messages:", {original_count} + " -> " + {compressed_count});
console.log("Compression Ratio:", {json.dumps(log_data['ratio'])});
console.log("Summary Length:", {summary_length} + " chars");
console.log("Configuration:", {{
"Keep First": {kept_first},
"Keep Last": {kept_last}
}});
console.groupEnd();
}})();
"""
await __event_call__(
{
"type": "execute",
"data": {"code": js_code},
}
)
except Exception as e:
print(f"Error emitting debug log: {e}")
async def _log(self, message: str, type: str = "info", event_call=None):
"""统一日志输出到后端 (print) 和前端 (console.log)"""
# 后端日志
if self.valves.debug_mode:
print(message)
# 前端日志
if self.valves.show_debug_log and event_call:
try:
css = "color: #3b82f6;" # 默认蓝色
if type == "error":
css = "color: #ef4444; font-weight: bold;" # 红色
elif type == "warning":
css = "color: #f59e0b;" # 橙色
elif type == "success":
css = "color: #10b981; font-weight: bold;" # 绿色
# 清理前端消息:移除分隔符和多余换行
lines = message.split("\n")
# 保留不以大量等号或连字符开头的行
filtered_lines = [
line
for line in lines
if not line.strip().startswith("====")
and not line.strip().startswith("----")
]
clean_message = "\n".join(filtered_lines).strip()
if not clean_message:
return
# 转义消息中的引号和换行符
safe_message = clean_message.replace('"', '\\"').replace("\n", "\\n")
js_code = f"""
console.log("%c[压缩] {safe_message}", "{css}");
"""
await event_call({"type": "execute", "data": {"code": js_code}})
except Exception as e:
print(f"发送前端日志失败: {e}")
async def inlet(
self,
body: dict,
__user__: Optional[dict] = None,
__metadata__: dict = None,
__event_emitter__: Callable[[Any], Awaitable[None]] = None,
__event_call__: Callable[[Any], Awaitable[None]] = None,
) -> dict:
"""
在发送到 LLM 之前执行
@@ -516,10 +618,11 @@ class Filter:
messages = body.get("messages", [])
chat_id = __metadata__["chat_id"]
if self.valves.debug_mode:
print(f"\n{'='*60}")
print(f"[Inlet] Chat ID: {chat_id}")
print(f"[Inlet] 收到 {len(messages)} 条消息")
if self.valves.debug_mode or self.valves.show_debug_log:
await self._log(
f"\n{'='*60}\n[Inlet] Chat ID: {chat_id}\n[Inlet] 收到 {len(messages)} 条消息",
event_call=__event_call__,
)
# 记录原始消息的目标压缩进度,供 outlet 使用
# 目标是压缩到倒数第 keep_last 条之前
@@ -527,13 +630,18 @@ class Filter:
# [优化] 简单的状态清理检查
if chat_id in self.temp_state:
if self.valves.debug_mode:
print(f"[Inlet] ⚠️ 覆盖未消费的旧状态 (Chat ID: {chat_id})")
await self._log(
f"[Inlet] ⚠️ 覆盖未消费的旧状态 (Chat ID: {chat_id})",
type="warning",
event_call=__event_call__,
)
self.temp_state[chat_id] = target_compressed_count
if self.valves.debug_mode:
print(f"[Inlet] 记录目标压缩进度: {target_compressed_count}")
await self._log(
f"[Inlet] 记录目标压缩进度: {target_compressed_count}",
event_call=__event_call__,
)
# 加载摘要记录
summary_record = await asyncio.to_thread(self._load_summary_record, chat_id)
@@ -582,19 +690,32 @@ class Filter:
}
)
if self.valves.debug_mode:
print(
f"[Inlet] 应用摘要: Head({len(head_messages)}) + Summary + Tail({len(tail_messages)})"
)
await self._log(
f"[Inlet] 应用摘要: Head({len(head_messages)}) + Summary + Tail({len(tail_messages)})",
type="success",
event_call=__event_call__,
)
# Emit debug log to frontend (Keep the structured log as well)
await self._emit_debug_log(
__event_call__,
chat_id,
len(messages),
len(final_messages),
len(summary_record.summary),
self.valves.keep_first,
self.valves.keep_last,
)
else:
# 没有摘要,使用原始消息
final_messages = messages
body["messages"] = final_messages
if self.valves.debug_mode:
print(f"[Inlet] 最终发送: {len(body['messages'])} 条消息")
print(f"{'='*60}\n")
await self._log(
f"[Inlet] 最终发送: {len(body['messages'])} 条消息\n{'='*60}\n",
event_call=__event_call__,
)
return body
@@ -604,6 +725,7 @@ class Filter:
__user__: Optional[dict] = None,
__metadata__: dict = None,
__event_emitter__: Callable[[Any], Awaitable[None]] = None,
__event_call__: Callable[[Any], Awaitable[None]] = None,
) -> dict:
"""
在 LLM 响应完成后执行
@@ -612,21 +734,23 @@ class Filter:
chat_id = __metadata__["chat_id"]
model = body.get("model", "gpt-3.5-turbo")
if self.valves.debug_mode:
print(f"\n{'='*60}")
print(f"[Outlet] Chat ID: {chat_id}")
print(f"[Outlet] 响应完成")
if self.valves.debug_mode or self.valves.show_debug_log:
await self._log(
f"\n{'='*60}\n[Outlet] Chat ID: {chat_id}\n[Outlet] 响应完成",
event_call=__event_call__,
)
# 在后台异步处理 Token 计算和摘要生成(不等待完成,不影响输出)
asyncio.create_task(
self._check_and_generate_summary_async(
chat_id, model, body, __user__, __event_emitter__
chat_id, model, body, __user__, __event_emitter__, __event_call__
)
)
if self.valves.debug_mode:
print(f"[Outlet] 后台处理已启动")
print(f"{'='*60}\n")
await self._log(
f"[Outlet] 后台处理已启动\n{'='*60}\n",
event_call=__event_call__,
)
return body
@@ -637,6 +761,7 @@ class Filter:
body: dict,
user_data: Optional[dict],
__event_emitter__: Callable[[Any], Awaitable[None]] = None,
__event_call__: Callable[[Any], Awaitable[None]] = None,
):
"""
后台处理:计算 Token 数并生成摘要(不阻塞响应)
@@ -650,36 +775,50 @@ class Filter:
"compression_threshold_tokens", self.valves.compression_threshold_tokens
)
if self.valves.debug_mode:
print(f"\n[🔍 后台计算] 开始 Token 计数...")
await self._log(
f"\n[🔍 后台计算] 开始 Token 计数...",
event_call=__event_call__,
)
# 在后台线程中计算 Token 数
current_tokens = await asyncio.to_thread(
self._calculate_messages_tokens, messages
)
if self.valves.debug_mode:
print(f"[🔍 后台计算] Token 数: {current_tokens}")
await self._log(
f"[🔍 后台计算] Token 数: {current_tokens}",
event_call=__event_call__,
)
# 检查是否需要压缩
if current_tokens >= compression_threshold_tokens:
if self.valves.debug_mode:
print(
f"[🔍 后台计算] ⚡ 触发压缩阈值 (Token: {current_tokens} >= {compression_threshold_tokens})"
)
await self._log(
f"[🔍 后台计算] ⚡ 触发压缩阈值 (Token: {current_tokens} >= {compression_threshold_tokens})",
type="warning",
event_call=__event_call__,
)
# 继续生成摘要
await self._generate_summary_async(
messages, chat_id, body, user_data, __event_emitter__
messages,
chat_id,
body,
user_data,
__event_emitter__,
__event_call__,
)
else:
if self.valves.debug_mode:
print(
f"[🔍 后台计算] 未触发压缩阈值 (Token: {current_tokens} < {compression_threshold_tokens})"
)
await self._log(
f"[🔍 后台计算] 未触发压缩阈值 (Token: {current_tokens} < {compression_threshold_tokens})",
event_call=__event_call__,
)
except Exception as e:
print(f"[🔍 后台计算] ❌ 错误: {str(e)}")
await self._log(
f"[🔍 后台计算] ❌ 错误: {str(e)}",
type="error",
event_call=__event_call__,
)
async def _generate_summary_async(
self,
@@ -688,6 +827,7 @@ class Filter:
body: dict,
user_data: Optional[dict],
__event_emitter__: Callable[[Any], Awaitable[None]] = None,
__event_call__: Callable[[Any], Awaitable[None]] = None,
):
"""
异步生成摘要(后台执行,不阻塞响应)
@@ -697,18 +837,18 @@ class Filter:
3. 对剩余的中间消息生成摘要。
"""
try:
if self.valves.debug_mode:
print(f"\n[🤖 异步摘要任务] 开始...")
await self._log(f"\n[🤖 异步摘要任务] 开始...", event_call=__event_call__)
# 1. 获取目标压缩进度
# 优先从 temp_state 获取(由 inlet 计算),如果获取不到(例如重启后),则假设当前是完整历史
target_compressed_count = self.temp_state.pop(chat_id, None)
if target_compressed_count is None:
target_compressed_count = max(0, len(messages) - self.valves.keep_last)
if self.valves.debug_mode:
print(
f"[🤖 异步摘要任务] ⚠️ 无法获取 inlet 状态,使用当前消息数估算进度: {target_compressed_count}"
)
await self._log(
f"[🤖 异步摘要任务] ⚠️ 无法获取 inlet 状态,使用当前消息数估算进度: {target_compressed_count}",
type="warning",
event_call=__event_call__,
)
# 2. 确定待压缩的消息范围 (Middle)
start_index = self.valves.keep_first
@@ -718,16 +858,18 @@ class Filter:
# 确保索引有效
if start_index >= end_index:
if self.valves.debug_mode:
print(
f"[🤖 异步摘要任务] 中间消息为空 (Start: {start_index}, End: {end_index}),跳过"
)
await self._log(
f"[🤖 异步摘要任务] 中间消息为空 (Start: {start_index}, End: {end_index}),跳过",
event_call=__event_call__,
)
return
middle_messages = messages[start_index:end_index]
if self.valves.debug_mode:
print(f"[🤖 异步摘要任务] 待处理中间消息: {len(middle_messages)}")
await self._log(
f"[🤖 异步摘要任务] 待处理中间消息: {len(middle_messages)}",
event_call=__event_call__,
)
# 3. 检查 Token 上限并截断 (Max Context Truncation)
# [优化] 使用摘要模型(如果有)的阈值来决定能处理多少中间消息
@@ -740,22 +882,26 @@ class Filter:
"max_context_tokens", self.valves.max_context_tokens
)
if self.valves.debug_mode:
print(
f"[🤖 异步摘要任务] 使用模型 {summary_model_id} 的上限: {max_context_tokens} Tokens"
)
# 计算当前总 Token (使用摘要模型进行计数)
total_tokens = await asyncio.to_thread(
self._calculate_messages_tokens, messages
await self._log(
f"[🤖 异步摘要任务] 使用模型 {summary_model_id} 的上限: {max_context_tokens} Tokens",
event_call=__event_call__,
)
if total_tokens > max_context_tokens:
excess_tokens = total_tokens - max_context_tokens
if self.valves.debug_mode:
print(
f"[🤖 异步摘要任务] ⚠️ 总 Token ({total_tokens}) 超过摘要模型上限 ({max_context_tokens}),需要移除约 {excess_tokens} Token"
)
# 计算中间消息的 Token (加上提示词的缓冲)
# 我们只把 middle_messages 发送给摘要模型,所以不应该把完整历史计入限制
middle_tokens = await asyncio.to_thread(
self._calculate_messages_tokens, middle_messages
)
# 增加提示词和输出的缓冲 (约 2000 Tokens)
estimated_input_tokens = middle_tokens + 2000
if estimated_input_tokens > max_context_tokens:
excess_tokens = estimated_input_tokens - max_context_tokens
await self._log(
f"[🤖 异步摘要任务] ⚠️ 中间消息 ({middle_tokens} Tokens) + 缓冲超过摘要模型上限 ({max_context_tokens}),需要移除约 {excess_tokens} Token",
type="warning",
event_call=__event_call__,
)
# 从 middle_messages 头部开始移除
removed_tokens = 0
@@ -769,14 +915,16 @@ class Filter:
removed_tokens += msg_tokens
removed_count += 1
if self.valves.debug_mode:
print(
f"[🤖 异步摘要任务] 已移除 {removed_count} 条消息,共 {removed_tokens} Token"
)
await self._log(
f"[🤖 异步摘要任务] 已移除 {removed_count} 条消息,共 {removed_tokens} Token",
event_call=__event_call__,
)
if not middle_messages:
if self.valves.debug_mode:
print(f"[🤖 异步摘要任务] 截断后中间消息为空,跳过摘要生成")
await self._log(
f"[🤖 异步摘要任务] 截断后中间消息为空,跳过摘要生成",
event_call=__event_call__,
)
return
# 4. 构建对话文本
@@ -798,12 +946,14 @@ class Filter:
)
new_summary = await self._call_summary_llm(
None, conversation_text, body, user_data
None, conversation_text, body, user_data, __event_call__
)
# 6. 保存新摘要
if self.valves.debug_mode:
print("[优化] 在后台线程中保存摘要以避免阻塞事件循环。")
await self._log(
"[优化] 在后台线程中保存摘要以避免阻塞事件循环。",
event_call=__event_call__,
)
await asyncio.to_thread(
self._save_summary, chat_id, new_summary, target_compressed_count
@@ -815,32 +965,40 @@ class Filter:
{
"type": "status",
"data": {
"description": f"上下文摘要已更新 (压缩 {len(middle_messages)} 条消息)",
"description": f"上下文摘要已更新 (压缩 {len(middle_messages)} 条消息)",
"done": True,
},
}
)
if self.valves.debug_mode:
print(f"[🤖 异步摘要任务] ✅ 完成!新摘要长度: {len(new_summary)} 字符")
print(
f"[🤖 异步摘要任务] 进度更新: 已压缩至原始第 {target_compressed_count} 条消息"
)
await self._log(
f"[🤖 异步摘要任务] ✅ 完成!新摘要长度: {len(new_summary)} 字符",
type="success",
event_call=__event_call__,
)
await self._log(
f"[🤖 异步摘要任务] 进度更新: 已压缩至原始消息 {target_compressed_count}",
event_call=__event_call__,
)
except Exception as e:
print(f"[🤖 异步摘要任务] ❌ 错误: {str(e)}")
await self._log(
f"[🤖 异步摘要任务] ❌ 错误: {str(e)}",
type="error",
event_call=__event_call__,
)
import traceback
traceback.print_exc()
def _format_messages_for_summary(self, messages: list) -> str:
"""格式化消息用于摘要"""
"""Formats messages for summarization."""
formatted = []
for i, msg in enumerate(messages, 1):
role = msg.get("role", "unknown")
content = msg.get("content", "")
# 处理多模态内容
# Handle multimodal content
if isinstance(content, list):
text_parts = []
for part in content:
@@ -848,10 +1006,10 @@ class Filter:
text_parts.append(part.get("text", ""))
content = " ".join(text_parts)
# 处理角色名称
role_name = {"user": "用户", "assistant": "助手"}.get(role, role)
# Handle role name
role_name = {"user": "User", "assistant": "Assistant"}.get(role, role)
# 限制每条消息的长度,避免过长
# Limit length of each message to avoid excessive length
if len(content) > 500:
content = content[:500] + "..."
@@ -865,12 +1023,15 @@ class Filter:
new_conversation_text: str,
body: dict,
user_data: dict,
__event_call__: Callable[[Any], Awaitable[None]] = None,
) -> str:
"""
使用 Open WebUI 内置方法调用 LLM 生成摘要
调用 LLM 生成摘要,使用 Open WebUI 内置方法
"""
if self.valves.debug_mode:
print(f"[🤖 LLM 调用] 使用 Open WebUI 内置方法")
await self._log(
f"[🤖 LLM 调用] 使用 Open WebUI 内置方法",
event_call=__event_call__,
)
# 构建摘要提示词 (优化版)
summary_prompt = f"""
@@ -909,8 +1070,7 @@ class Filter:
# 确定使用的模型
model = self.valves.summary_model or body.get("model", "")
if self.valves.debug_mode:
print(f"[🤖 LLM 调用] 模型: {model}")
await self._log(f"[🤖 LLM 调用] 模型: {model}", event_call=__event_call__)
# 构建 payload
payload = {
@@ -927,17 +1087,20 @@ class Filter:
if not user_id:
raise ValueError("无法获取用户 ID")
# [优化] 在后台线程中获取用户对象以避免阻塞事件循环
if self.valves.debug_mode:
print("[优化] 在后台线程中获取用户对象以避免阻塞事件循环。")
# [优化] 在后台线程中获取用户对象以避免阻塞事件循环
await self._log(
"[优化] 在后台线程中获取用户对象以避免阻塞事件循环。",
event_call=__event_call__,
)
user = await asyncio.to_thread(Users.get_user_by_id, user_id)
if not user:
raise ValueError(f"无法找到用户: {user_id}")
if self.valves.debug_mode:
print(f"[🤖 LLM 调用] 用户: {user.email}")
print(f"[🤖 LLM 调用] 发送请求...")
await self._log(
f"[🤖 LLM 调用] 用户: {user.email}\n[🤖 LLM 调用] 发送请求...",
event_call=__event_call__,
)
# 创建 Request 对象
request = Request(scope={"type": "http", "app": webui_app})
@@ -950,8 +1113,11 @@ class Filter:
summary = response["choices"][0]["message"]["content"].strip()
if self.valves.debug_mode:
print(f"[🤖 LLM 调用] ✅ 成功获取摘要")
await self._log(
f"[🤖 LLM 调用] ✅ 成功接收摘要",
type="success",
event_call=__event_call__,
)
return summary
@@ -959,11 +1125,14 @@ class Filter:
error_message = f"调用 LLM ({model}) 生成摘要时发生错误: {str(e)}"
if not self.valves.summary_model:
error_message += (
"\n[提示] 您没有指定摘要模型 (summary_model),因此尝试使用当前对话的模型。"
"如果这是一个流水线(Pipe)模型或不兼容的模型,请在配置中指定一个兼容的摘要模型(例如 'gemini-2.5-flash')"
"\n[提示] 您未指定 summary_model因此过滤器尝试使用当前对话的模型。"
"如果这是流水线 (Pipe) 模型或不兼容的模型,请在配置中指定兼容的摘要模型 (例如 'gemini-2.5-flash')"
)
if self.valves.debug_mode:
print(f"[🤖 LLM 调用] ❌ {error_message}")
await self._log(
f"[🤖 LLM 调用] ❌ {error_message}",
type="error",
event_call=__event_call__,
)
raise Exception(error_message)

View File

@@ -1,12 +1,9 @@
"""
title: Context & Model Enhancement Filter
author: Fu-Jie
author_url: https://github.com/Fu-Jie
funding_url: https://github.com/Fu-Jie/awesome-openwebui
version: 0.2
version: 0.3
description:
一个功能全面的 Filter 插件,用于增强请求上下文和优化模型功能。提供四大核心功能:
一个专注于增强请求上下文和优化模型功能的 Filter 插件。提供四大核心功能:
1. 环境变量注入:在每条用户消息前自动注入用户环境变量(用户名、时间、时区、语言等)
- 支持纯文本、图片、多模态消息
@@ -24,222 +21,24 @@ description:
- 动态模型重定向
- 智能化的模型识别和适配
4. 智能内容规范化:生产级的内容清洗与修复系统
- 智能修复损坏的代码块(前缀、后缀、缩进)
- 规范化 LaTeX 公式格式(行内/块级)
- 优化思维链标签(</thought>)格式
- 自动闭合未结束的代码块
- 智能列表格式修复
- 清理冗余的 XML 标签
- 可配置的规则系统
features:
- 自动化环境变量管理
- 智能模型功能适配
- 异步状态反馈
- 幂等性保证
- 多模型支持
- 智能内容清洗与规范化
"""
from pydantic import BaseModel, Field
from typing import Optional, List, Callable
from typing import Optional
import re
import logging
from dataclasses import dataclass, field
import asyncio
# 配置日志
logger = logging.getLogger(__name__)
@dataclass
class NormalizerConfig:
"""规范化配置类,用于动态启用/禁用特定规则"""
enable_escape_fix: bool = True # 修复转义字符
enable_thought_tag_fix: bool = True # 修复思考链标签
enable_code_block_fix: bool = True # 修复代码块格式
enable_latex_fix: bool = True # 修复 LaTeX 公式格式
enable_list_fix: bool = False # 修复列表换行
enable_unclosed_block_fix: bool = True # 修复未闭合代码块
enable_fullwidth_symbol_fix: bool = False # 修复代码内的全角符号
enable_xml_tag_cleanup: bool = True # 清理 XML 残留标签
# 自定义清理函数列表(高级扩展用)
custom_cleaners: List[Callable[[str], str]] = field(default_factory=list)
class ContentNormalizer:
"""LLM 输出内容规范化器 - 生产级实现"""
# --- 1. 预编译正则表达式(性能优化) ---
_PATTERNS = {
# 代码块前缀:如果 ``` 前面不是行首也不是换行符
'code_block_prefix': re.compile(r'(?<!^)(?<!\n)(```)', re.MULTILINE),
# 代码块后缀:匹配 ```语言名 后面紧跟非空白字符(没有换行)
# 匹配 ```python code 这种情况,但不匹配 ```python 或 ```python\n
'code_block_suffix': re.compile(r'(```[\w\+\-\.]*)[ \t]+([^\n\r])'),
# 代码块缩进:行首的空白字符 + ```
'code_block_indent': re.compile(r'^[ \t]+(```)', re.MULTILINE),
# 思考链标签:</thought> 后可能跟空格或换行
'thought_tag': re.compile(r'</thought>[ \t]*\n*'),
# LaTeX 块级公式:\[ ... \]
'latex_bracket_block': re.compile(r'\\\[(.+?)\\\]', re.DOTALL),
# LaTeX 行内公式:\( ... \)
'latex_paren_inline': re.compile(r'\\\((.+?)\\\)'),
# 列表项:非换行符 + 数字 + 点 + 空格 (e.g. "Text1. Item")
'list_item': re.compile(r'([^\n])(\d+\. )'),
# XML 残留标签 (如 Claude 的 artifacts)
'xml_artifacts': re.compile(r'</?(?:antArtifact|antThinking|artifact)[^>]*>', re.IGNORECASE),
}
def __init__(self, config: Optional[NormalizerConfig] = None):
self.config = config or NormalizerConfig()
self.applied_fixes = []
def normalize(self, content: str) -> str:
"""主入口:按顺序应用所有规范化规则"""
self.applied_fixes = []
if not content:
return content
try:
# 1. 转义字符修复(必须最先执行,否则影响后续正则)
if self.config.enable_escape_fix:
original = content
content = self._fix_escape_characters(content)
if content != original:
self.applied_fixes.append("修复转义字符")
# 2. 思考链标签规范化
if self.config.enable_thought_tag_fix:
original = content
content = self._fix_thought_tags(content)
if content != original:
self.applied_fixes.append("规范化思考链")
# 3. 代码块格式修复
if self.config.enable_code_block_fix:
original = content
content = self._fix_code_blocks(content)
if content != original:
self.applied_fixes.append("修复代码块格式")
# 4. LaTeX 公式规范化
if self.config.enable_latex_fix:
original = content
content = self._fix_latex_formulas(content)
if content != original:
self.applied_fixes.append("规范化 LaTeX 公式")
# 5. 列表格式修复
if self.config.enable_list_fix:
original = content
content = self._fix_list_formatting(content)
if content != original:
self.applied_fixes.append("修复列表格式")
# 6. 未闭合代码块检测与修复
if self.config.enable_unclosed_block_fix:
original = content
content = self._fix_unclosed_code_blocks(content)
if content != original:
self.applied_fixes.append("闭合未结束代码块")
# 7. 全角符号转半角(仅代码块内)
if self.config.enable_fullwidth_symbol_fix:
original = content
content = self._fix_fullwidth_symbols_in_code(content)
if content != original:
self.applied_fixes.append("全角符号转半角")
# 8. XML 标签残留清理
if self.config.enable_xml_tag_cleanup:
original = content
content = self._cleanup_xml_tags(content)
if content != original:
self.applied_fixes.append("清理 XML 标签")
# 9. 执行自定义清理函数
for cleaner in self.config.custom_cleaners:
original = content
content = cleaner(content)
if content != original:
self.applied_fixes.append("执行自定义清理")
return content
except Exception as e:
# 生产环境保底机制:如果清洗过程报错,返回原始内容,避免阻断服务
logger.error(f"内容规范化失败: {e}", exc_info=True)
return content
def _fix_escape_characters(self, content: str) -> str:
"""修复过度转义的字符"""
# 注意:先处理具体的转义序列,再处理通用的双反斜杠
content = content.replace("\\r\\n", "\n")
content = content.replace("\\n", "\n")
content = content.replace("\\t", "\t")
# 修复过度转义的反斜杠 (例如路径 C:\\Users)
content = content.replace("\\\\", "\\")
return content
def _fix_thought_tags(self, content: str) -> str:
"""规范化 </thought> 标签,统一为空两行"""
return self._PATTERNS['thought_tag'].sub("</thought>\n\n", content)
def _fix_code_blocks(self, content: str) -> str:
"""修复代码块格式(独占行、换行、去缩进)"""
# C: 移除代码块前的缩进(必须先执行,否则影响下面的判断)
content = self._PATTERNS['code_block_indent'].sub(r"\1", content)
# A: 确保 ``` 前有换行
content = self._PATTERNS['code_block_prefix'].sub(r"\n\1", content)
# B: 确保 ```语言标识 后有换行
content = self._PATTERNS['code_block_suffix'].sub(r"\1\n\2", content)
return content
def _fix_latex_formulas(self, content: str) -> str:
"""规范化 LaTeX 公式:\[ -> $$ (块级), \( -> $ (行内)"""
content = self._PATTERNS['latex_bracket_block'].sub(r"$$\1$$", content)
content = self._PATTERNS['latex_paren_inline'].sub(r"$\1$", content)
return content
def _fix_list_formatting(self, content: str) -> str:
"""修复列表项缺少换行的问题 (如 'text1. item' -> 'text\\n1. item')"""
return self._PATTERNS['list_item'].sub(r"\1\n\2", content)
def _fix_unclosed_code_blocks(self, content: str) -> str:
"""检测并修复未闭合的代码块"""
if content.count("```") % 2 != 0:
logger.warning("检测到未闭合的代码块,自动补全")
content += "\n```"
return content
def _fix_fullwidth_symbols_in_code(self, content: str) -> str:
"""在代码块内将全角符号转为半角(精细化操作)"""
# 常见误用的全角符号映射
FULLWIDTH_MAP = {
',': ',', '。': '.', '(': '(', ')': ')',
'[': '[', ']': ']', ';': ';', ':': ':',
'?': '?', '!': '!', '“': '"', '”': '"',
'‘': "'", '’': "'",
}
parts = content.split("```")
# 代码块内容位于索引 1, 3, 5... (奇数位)
for i in range(1, len(parts), 2):
for full, half in FULLWIDTH_MAP.items():
parts[i] = parts[i].replace(full, half)
return "```".join(parts)
def _cleanup_xml_tags(self, content: str) -> str:
"""移除无关的 XML 标签"""
return self._PATTERNS['xml_artifacts'].sub("", content)
class Filter:
class Valves(BaseModel):
@@ -349,13 +148,9 @@ class Filter:
body["model"] = body["model"] + "-search"
features["web_search"] = False
search_enabled_for_model = True
if user_email == "yi204o@qq.com":
features["web_search"] = False
# 如果启用了模型本身的搜索能力,发送状态提示
if search_enabled_for_model and __event_emitter__:
import asyncio
try:
asyncio.create_task(
self._emit_search_status(__event_emitter__, model_name)
@@ -464,8 +259,6 @@ class Filter:
# 环境变量注入成功后,发送状态提示给用户
if env_injected and __event_emitter__:
import asyncio
try:
# 如果在异步环境中,使用 await
asyncio.create_task(self._emit_env_status(__event_emitter__))
@@ -506,67 +299,3 @@ class Filter:
)
except Exception as e:
print(f"发送搜索状态提示时出错: {e}")
async def _emit_normalization_status(self, __event_emitter__, applied_fixes: List[str] = None):
"""
发送内容规范化完成的状态提示
"""
description = "✓ 内容已自动规范化"
if applied_fixes:
description += f":{', '.join(applied_fixes)}"
try:
await __event_emitter__(
{
"type": "status",
"data": {
"description": description,
"done": True,
},
}
)
except Exception as e:
print(f"发送规范化状态提示时出错: {e}")
def _contains_html(self, content: str) -> bool:
"""
检测内容是否包含 HTML 标签
"""
# 匹配常见的 HTML 标签
pattern = r"<\s*/?\s*(?:html|head|body|div|span|p|br|hr|ul|ol|li|table|thead|tbody|tfoot|tr|td|th|img|a|b|i|strong|em|code|pre|blockquote|h[1-6]|script|style|form|input|button|label|select|option|iframe|link|meta|title)\b"
return bool(re.search(pattern, content, re.IGNORECASE))
def outlet(self, body: dict, __user__: Optional[dict] = None, __event_emitter__=None) -> dict:
"""
处理传出响应体,通过修改最后一条助手消息的内容。
使用 ContentNormalizer 进行全面的内容规范化。
"""
if "messages" in body and body["messages"]:
last = body["messages"][-1]
content = last.get("content", "") or ""
if last.get("role") == "assistant" and isinstance(content, str):
# 如果包含 HTML,跳过规范化(为了防止错误格式化)
if self._contains_html(content):
return body
# 初始化规范化器
normalizer = ContentNormalizer()
# 执行规范化
new_content = normalizer.normalize(content)
# 更新内容
if new_content != content:
last["content"] = new_content
# 如果内容发生了改变,发送状态提示
if __event_emitter__:
import asyncio
try:
# 传入 applied_fixes
asyncio.create_task(self._emit_normalization_status(__event_emitter__, normalizer.applied_fixes))
except RuntimeError:
# 假如不在循环中,则忽略
pass
return body

View File

@@ -0,0 +1,162 @@
# Markdown Normalizer Feature Guide

This plugin fixes common Markdown formatting problems in LLM output so that it renders correctly in Open WebUI. The supported fixes are listed below, each with an example.

## 1. Code Block Fixes

### 1.1 Remove Code Block Indentation

LLMs sometimes prefix code blocks with leading spaces, which breaks rendering. The plugin removes this indentation automatically.
**Before:**

    ```python
    print("hello")
    ```

**After:**

```python
print("hello")
```
### 1.2 Add Missing Newlines Around Fences

The fence marker ` ``` ` must occupy its own line. If the LLM runs it together with surrounding text, the plugin fixes it automatically.
**Before:**
Here is code:```python
print("hello")```
**After:**
Here is code:
```python
print("hello")
```
### 1.3 Fix Missing Newline After the Language Identifier

LLMs sometimes forget the newline after the language identifier (e.g. `python`).
**Before:**
```python print("hello")
```
**After:**
```python
print("hello")
```
### 1.4 Auto-close Code Blocks

If the output is truncated or the LLM forgets to close a code block, the plugin appends the trailing ` ``` `.
**Before:**
```python
print("unfinished code...")
**After:**
```python
print("unfinished code...")
```
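The detection behind this auto-close step reduces to a fence-parity check. A minimal sketch of the idea (simplified from the plugin's `_fix_unclosed_code_blocks`):

```python
def fix_unclosed_code_blocks(content: str) -> str:
    # An odd number of ``` fences means the last block was never closed.
    if content.count("```") % 2 != 0:
        content += "\n```"
    return content
```

Note that this is a heuristic: stray ``` sequences inside prose also count toward parity, which is why the rule is kept toggleable in the filter's settings.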
## 2. LaTeX Normalization

Open WebUI renders formulas with MathJax/KaTeX, which generally expects them to be wrapped in `$$` or `$`. The plugin converts the common LaTeX bracket syntax to this standard format.
**Before:**

Block formula: \[ E = mc^2 \]
Inline formula: \( a^2 + b^2 = c^2 \)

**After:**

Block formula: $$ E = mc^2 $$
Inline formula: $ a^2 + b^2 = c^2 $
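The conversion can be sketched with two regular expressions (illustrative, mirroring the patterns used by the filter; the shipped patterns may differ slightly):

```python
import re

# \[ ... \]  ->  $$ ... $$   (block formulas, may span lines)
LATEX_BLOCK = re.compile(r"\\\[(.+?)\\\]", re.DOTALL)
# \( ... \)  ->  $ ... $     (inline formulas)
LATEX_INLINE = re.compile(r"\\\((.+?)\\\)")

def fix_latex(content: str) -> str:
    content = LATEX_BLOCK.sub(r"$$\1$$", content)
    return LATEX_INLINE.sub(r"$\1$", content)
```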
## 3. Escape Character Fix

Fixes over-escaped characters, which often show up in raw strings returned by some APIs.
**Before:**
Line 1\\nLine 2\\tTabbed
**After:**
Line 1
Line 2 Tabbed
## 4. Thought Tag Fix

**What it does**:

1. Ensures `</thought>` is followed by blank lines, so chain-of-thought content does not run into the response body.
2. **Tag standardization**: converts `<think>` (common in DeepSeek-style models) and `<thinking>` to Open WebUI's standard `<thought>` tag, so the UI's collapse feature triggers correctly.

**Default**: enabled (`enable_thought_tag_fix = True`)

**Example**:
* **Before**: `<think>Thinking...</think>Response starts here.`
* **After**:
```xml
<thought>Thinking...</thought>
Response starts here.
```
## 5. List Formatting Fix

*Disabled by default; enable it in settings.*

Fixes list items that are missing a preceding newline.
**Before:**
Header1. Item 1
**After:**
Header
1. Item 1
## 6. Full-width Symbol Fix

*Disabled by default; enable it in settings.*

Converts full-width symbols to half-width **inside code blocks only**, so code does not fail to run because of mistyped punctuation.
**Before:**
```python
if x == 1:
print("hello")
```
**After:**
```python
if x == 1:
print("hello")
```
## 7. Mermaid Syntax Fix

**What it does**: fixes common syntax errors in Mermaid diagrams, in particular unquoted labels that contain special characters.

**Default**: enabled (`enable_mermaid_fix = True`)

**Example**:
* **Before**:
```mermaid
graph TD
A[Label with (parens)] --> B(Label with [brackets])
```
* **After**:
```mermaid
graph TD
A["Label with (parens)"] --> B("Label with [brackets]")
```
## 8. XML Cleanup
Removes useless leftover XML tags from LLM output (such as Claude's artifact tags).
**Before:**
Here is the result <antArtifact>hidden metadata</antArtifact>.
**After:**
Here is the result .
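A sketch of such cleanup for the tag shown above (the plugin's actual tag list and regex may differ):

```python
import re

def strip_artifact_tags(text: str) -> str:
    # Drop an <antArtifact>...</antArtifact> element and its contents.
    return re.sub(r"<antArtifact[^>]*>.*?</antArtifact>", "", text, flags=re.DOTALL)

print(strip_artifact_tags("Here is the result <antArtifact>hidden metadata</antArtifact>."))
```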
## 9. Heading Format Fix
**What it does**: Fixes a missing space after the `#` heading marker.
**Default**: Enabled (`enable_heading_fix = True`)
**Example**:
* **Before**: `#Heading 1`
* **After**: `# Heading 1`
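A one-line regex sketch of this fix:

```python
import re

def fix_heading_space(text: str) -> str:
    # Insert the missing space between 1-6 leading '#' and the heading text.
    return re.sub(r"^(#{1,6})([^#\s])", r"\1 \2", text, flags=re.MULTILINE)

print(fix_heading_space("#Heading 1"))
```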
## 10. Table Format Fix
**What it does**: Adds the missing closing pipe `|` at the end of table rows.
**Default**: Enabled (`enable_table_fix = True`)
**Example**:
* **Before**: `| Col 1 | Col 2`
* **After**: `| Col 1 | Col 2 |`
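A regex sketch of this fix; only rows that already start with `|` are touched:

```python
import re

def fix_table_pipe(text: str) -> str:
    # Append the missing trailing '|' to table rows that lack one.
    return re.sub(r"^(\|.*[^|\s])\s*$", r"\1 |", text, flags=re.MULTILINE)

print(fix_table_pipe("| Col 1 | Col 2"))
```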

View File

@@ -0,0 +1,44 @@
# Markdown Normalizer Filter
A production-grade content normalizer filter for Open WebUI that fixes common Markdown formatting issues in LLM outputs. It ensures that code blocks, LaTeX formulas, Mermaid diagrams, and other Markdown elements are rendered correctly.
## Features
* **Mermaid Syntax Fix**: Automatically fixes common Mermaid syntax errors, such as unquoted node labels and unclosed subgraphs, ensuring diagrams render correctly.
* **Frontend Console Debugging**: Supports printing structured debug logs directly to the browser console (F12) for easier troubleshooting.
* **Code Block Formatting**: Fixes broken code block prefixes, suffixes, and indentation.
* **LaTeX Normalization**: Standardizes LaTeX formula delimiters (`\[` -> `$$`, `\(` -> `$`).
* **Thought Tag Normalization**: Unifies thought tags (`<think>`, `<thinking>` -> `<thought>`).
* **Escape Character Fix**: Cleans up excessive escape characters (`\\n`, `\\t`).
* **List Formatting**: Ensures proper newlines in list items.
* **Heading Fix**: Adds missing spaces in headings (`#Heading` -> `# Heading`).
* **Table Fix**: Adds missing closing pipes in tables.
* **XML Cleanup**: Removes leftover XML artifacts.
## Usage
1. Install the plugin in Open WebUI.
2. Enable the filter globally or for specific models.
3. Configure the enabled fixes in the **Valves** settings.
4. (Optional) Enable **Show Debug Log** in Valves to view detailed logs in the browser console.
## Configuration (Valves)
* `priority`: Filter priority (default: 50).
* `enable_escape_fix`: Fix excessive escape characters.
* `enable_thought_tag_fix`: Normalize thought tags.
* `enable_code_block_fix`: Fix code block formatting.
* `enable_latex_fix`: Normalize LaTeX formulas.
* `enable_list_fix`: Fix list item newlines (Experimental).
* `enable_unclosed_block_fix`: Auto-close unclosed code blocks.
* `enable_fullwidth_symbol_fix`: Fix full-width symbols in code blocks.
* `enable_mermaid_fix`: Fix Mermaid syntax errors.
* `enable_heading_fix`: Fix missing space in headings.
* `enable_table_fix`: Fix missing closing pipe in tables.
* `enable_xml_tag_cleanup`: Cleanup leftover XML tags.
* `show_status`: Show status notification when fixes are applied.
* `show_debug_log`: Print debug logs to browser console.
## License
MIT

View File

@@ -0,0 +1,44 @@
# Markdown Normalizer Filter
A production-grade content normalizer filter for Open WebUI that fixes common Markdown formatting issues in LLM outputs. It ensures that code blocks, LaTeX formulas, Mermaid diagrams, and other Markdown elements render correctly.
## Features
* **Mermaid Syntax Fix**: Automatically fixes common Mermaid syntax errors, such as unquoted node labels and unclosed subgraphs, ensuring diagrams render correctly.
* **Frontend Console Debugging**: Supports printing structured debug logs directly to the browser console (F12) for easier troubleshooting.
* **Code Block Formatting**: Fixes broken code block prefixes, suffixes, and indentation.
* **LaTeX Normalization**: Standardizes LaTeX formula delimiters (`\[` -> `$$`, `\(` -> `$`).
* **Thought Tag Normalization**: Unifies thought tags (`<think>`, `<thinking>` -> `<thought>`).
* **Escape Character Fix**: Cleans up excessive escape characters (`\\n`, `\\t`).
* **List Formatting**: Ensures proper newlines in list items.
* **Heading Fix**: Adds missing spaces in headings (`#Heading` -> `# Heading`).
* **Table Fix**: Adds missing closing pipes in tables.
* **XML Cleanup**: Removes leftover XML tags.
## Usage
1. Install the plugin in Open WebUI.
2. Enable the filter globally or for specific models.
3. Configure the enabled fixes in the **Valves** settings.
4. (Optional) Enable **Show Debug Log** in Valves to view detailed logs in the browser console.
## Configuration (Valves)
* `priority`: Filter priority (default: 50).
* `enable_escape_fix`: Fix excessive escape characters.
* `enable_thought_tag_fix`: Normalize thought tags.
* `enable_code_block_fix`: Fix code block formatting.
* `enable_latex_fix`: Normalize LaTeX formulas.
* `enable_list_fix`: Fix list item newlines (Experimental).
* `enable_unclosed_block_fix`: Auto-close unclosed code blocks.
* `enable_fullwidth_symbol_fix`: Fix full-width symbols in code blocks.
* `enable_mermaid_fix`: Fix Mermaid syntax errors.
* `enable_heading_fix`: Fix missing space in headings.
* `enable_table_fix`: Fix missing closing pipe in tables.
* `enable_xml_tag_cleanup`: Cleanup leftover XML tags.
* `show_status`: Show status notification when fixes are applied.
* `show_debug_log`: Print debug logs to browser console.
## License
MIT

View File

@@ -74,6 +74,7 @@ class ContentNormalizer:
 # Fix "reverse optimization": Must precisely match shape delimiters to avoid breaking structure
 # Priority: Longer delimiters match first
 "mermaid_node": re.compile(
+    r'("[^"\\]*(?:\\.[^"\\]*)*")|'  # Match quoted strings first (Group 1)
     r"(\w+)\s*(?:"
     r"(\(\(\()(?![\"])(.*?)(?<![\"])(\)\)\))|"  # (((...))) Double Circle
     r"(\(\()(?![\"])(.*?)(?<![\"])(\)\))|"  # ((...)) Circle
@@ -281,14 +282,18 @@ class ContentNormalizer:
 """Fix common Mermaid syntax errors while preserving node shapes"""
 def replacer(match):
-    # Group 1 is ID
-    id_str = match.group(1)
+    # Group 1 is Quoted String (if matched)
+    if match.group(1):
+        return match.group(1)
+    # Group 2 is ID
+    id_str = match.group(2)
     # Find matching shape group
-    # Groups start at index 2, each shape has 3 groups (Open, Content, Close)
-    # We iterate to find the non-None one
+    # Groups start at index 3 (in match.group terms) or index 2 (in match.groups() tuple)
+    # Tuple: (String, ID, Open1, Content1, Close1, ...)
     groups = match.groups()
-    for i in range(1, len(groups), 3):
+    for i in range(2, len(groups), 3):
         if groups[i] is not None:
             open_char = groups[i]
             content = groups[i + 1]
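The quoted-string-first alternation that this hunk introduces can be shown in isolation. The pattern below is a simplified stand-in that supports only a single `[...]` node shape, not the plugin's full shape set:

```python
import re

# Group 1 captures an already-quoted string so the replacer can pass it through
# untouched; groups 2-3 capture a node ID and an unquoted [...] label.
pattern = re.compile(
    r'("[^"\\]*(?:\\.[^"\\]*)*")|'       # Group 1: quoted string, leave as-is
    r"(\w+)\s*\[(?!\")(.*?)(?<!\")\]"    # Groups 2-3: ID and unquoted label
)

def replacer(match):
    if match.group(1):  # Already quoted: return unchanged
        return match.group(1)
    node_id, label = match.group(2), match.group(3)
    return f'{node_id}["{label}"]'

print(pattern.sub(replacer, 'A[Label with (parens)] --> B["already quoted"]'))
# A["Label with (parens)"] --> B["already quoted"]
```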

View File

@@ -69,6 +69,7 @@ class ContentNormalizer:
 # Fix the "reverse optimization" problem: each shape's delimiters must be matched precisely to avoid breaking the shape structure
 # Priority: longer delimiters match first
 "mermaid_node": re.compile(
+    r'("[^"\\]*(?:\\.[^"\\]*)*")|'  # Match quoted strings first (Group 1)
     r"(\w+)\s*(?:"
     r"(\(\(\()(?![\"])(.*?)(?<![\"])(\)\)\))|"  # (((...))) Double Circle
     r"(\(\()(?![\"])(.*?)(?<![\"])(\)\))|"  # ((...)) Circle
@@ -276,14 +277,18 @@ class ContentNormalizer:
 """Fix common Mermaid syntax errors while preserving node shapes"""
 def replacer(match):
-    # Group 1 is the ID
-    id_str = match.group(1)
+    # Group 1 is Quoted String (if matched)
+    if match.group(1):
+        return match.group(1)
-    # Find the matching shape group
-    # Groups start at index 2, each shape has 3 groups (Open, Content, Close)
-    # We iterate to find the non-None group
+    # Group 2 is the ID
+    id_str = match.group(2)
+    # Find matching shape group
+    # Groups start at index 3 (in match.group terms) or index 2 (in match.groups() tuple)
+    # Tuple: (String, ID, Open1, Content1, Close1, ...)
     groups = match.groups()
-    for i in range(1, len(groups), 3):
+    for i in range(2, len(groups), 3):
         if groups[i] is not None:
             open_char = groups[i]
             content = groups[i + 1]

View File

@@ -157,7 +157,10 @@ class OpenWebUIStats:
 stats["total_comments"] += post.get("commentCount", 0)
 # Parse the data field - correct path: data.function.meta
-function_data = post.get("data", {}).get("function", {})
+function_data = post.get("data", {})
+if function_data is None:
+    function_data = {}
+function_data = function_data.get("function", {})
 meta = function_data.get("meta", {})
 manifest = meta.get("manifest", {})
 post_type = meta.get("type", function_data.get("type", "unknown"))
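The `None` guard added in this hunk matters because `dict.get`'s default only applies when the key is missing; a key that is present with an explicit `None` value still comes back as `None`, and chaining `.get` on it raises. A minimal repro:

```python
post = {"data": None}  # key present, value explicitly None

# .get's default does not apply here, so the chained call raises.
try:
    post.get("data", {}).get("function", {})
except AttributeError:
    print("chained .get failed on None")

# The patched version normalizes None before descending.
function_data = post.get("data", {})
if function_data is None:
    function_data = {}
print(function_data.get("function", {}))  # {}
```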