diff --git a/README.md b/README.md
index ce895a4..6b44337 100644
--- a/README.md
+++ b/README.md
@@ -80,10 +80,14 @@ A collection of enhancements, plugins, and prompts for [open-webui](https://gith
**Experience interactive thinking.** Seamlessly transforms complex chat sessions into structured, clickable mind maps for better visual modeling and rapid idea extraction.
+
+
### 3. [Smart Infographic](https://openwebui.com/posts/smart_infographic_ad6f0c7f)
**Professional data storytelling.** Converts raw information into sleek, boardroom-ready infographics powered by AntV, perfect for summarizing long-form content instantly.
+
+
### 4. [Export to Word Enhanced](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315)
**High-fidelity reporting.** Export conversation history into professionally formatted Word documents with preserved headers, code blocks, and math formulas.
@@ -137,7 +141,7 @@ Located in the `plugins/` directory, containing Python-based enhancements:
### Pipelines
-- **MoE Prompt Refiner** (`moe_prompt_refiner`): Refines prompts for Mixture of Experts (MoE) summary requests to generate high-quality comprehensive reports.
+- **Wisdom Synthesizer** (`wisdom_synthesizer`): An external pipeline filter that intercepts multi-model aggregation requests and leverages collective wisdom to produce structured expert reports.
@@ -161,6 +165,8 @@ Standalone frontend extensions to supercharge your Open WebUI:
- **[Open WebUI Prompt Plus](https://github.com/Fu-Jie/open-webui-prompt-plus)** [](https://openwebui.com/blog/newsletter-january-28-2026): An all-in-one prompt management suite featuring AI-powered prompt generation, spotlight-style quick search, and advanced category organization.
+
+
## 📖 Documentation
Located in the `docs/en/` directory:
diff --git a/README_CN.md b/README_CN.md
index a47a29d..28b4a90 100644
--- a/README_CN.md
+++ b/README_CN.md
@@ -75,10 +75,14 @@ OpenWebUI 增强功能集合。包含个人开发与收集的插件、提示词
**体验浸入式思维。** 将复杂的对话瞬间转化为结构化、可点击的交互式思维导图,助力知识建模与逻辑提取。
+
+
### 3. [Smart Infographic](https://openwebui.com/posts/smart_infographic_ad6f0c7f) [](https://openwebui.com/posts/smart_infographic_ad6f0c7f)
**专业数据叙事。** 将零散信息转化为精美的信息图表(由 AntV 驱动),一键生成学术/汇报级的可视化总结。
+
+
### 4. [Export to Word Enhanced](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) [](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315)
**高保真文档导出。** 将对话历史导出为格式完美的 Word 文档,完美保留标题、代码块、LaTeX 公式及 Mermaid 流程图。
@@ -132,7 +136,7 @@ OpenWebUI 增强功能集合。包含个人开发与收集的插件、提示词
### Pipelines (工作流管道)
-- **MoE Prompt Refiner** (`moe_prompt_refiner`): 优化多模型 (MoE) 汇总请求的提示词,生成高质量的综合报告。
+- **Wisdom Synthesizer** (`wisdom_synthesizer`): 智能拦截并重塑多模型汇总请求,发挥集体智慧(Collective Wisdom),将常规汇总熔炼为专家级对比报告。
@@ -154,6 +158,8 @@ Open WebUI 的前端增强扩展:
- **[Open WebUI Prompt Plus](https://github.com/Fu-Jie/open-webui-prompt-plus)** [](https://openwebui.com/blog/newsletter-january-28-2026):一站式提示词管理套件,支持 AI 提示词生成、Spotlight 风格快速搜索及高级分类管理。
+
+
## 📖 开发文档
diff --git a/docs/plugins/pipelines/index.md b/docs/plugins/pipelines/index.md
index ed89c81..7ce3798 100644
--- a/docs/plugins/pipelines/index.md
+++ b/docs/plugins/pipelines/index.md
@@ -17,15 +17,12 @@ Pipelines extend beyond simple transformations to implement:
-- :material-view-module:{ .lg .middle } **MoE Prompt Refiner**
+- :material-view-module:{ .lg .middle } **Wisdom Synthesizer**
+ [:octicons-tag-24: v0.1.0](https://github.com/Fu-Jie/open-webui-pipeline-wisdom-synthesizer){ .bubble }
- ---
+ An external pipeline filter that intercepts multi-model aggregation requests and leverages collective wisdom to produce structured expert reports.
- Refines prompts for Mixture of Experts (MoE) summary requests to generate high-quality comprehensive reports.
-
- **Version:** 1.0.0
-
- [:octicons-arrow-right-24: Documentation](moe-prompt-refiner.md)
+ [:octicons-arrow-right-24: Documentation](wisdom-synthesizer.md)
diff --git a/docs/plugins/pipelines/index.zh.md b/docs/plugins/pipelines/index.zh.md
index 7a67990..d7d0a65 100644
--- a/docs/plugins/pipelines/index.zh.md
+++ b/docs/plugins/pipelines/index.zh.md
@@ -17,15 +17,12 @@ Pipelines 不仅是简单转换,还可以实现:
-- :material-view-module:{ .lg .middle } **MoE Prompt Refiner**
+- :material-view-module:{ .lg .middle } **Wisdom Synthesizer**
+ [:octicons-tag-24: v0.1.0](https://github.com/Fu-Jie/open-webui-pipeline-wisdom-synthesizer){ .bubble }
- ---
+ 智能拦截并重构多模型汇总请求,发挥集体智慧(Collective Wisdom),将常规汇总熔炼为专家级对比报告。
- 为 Mixture of Experts(MoE)汇总请求优化提示词,生成高质量综合报告。
-
- **版本:** 1.0.0
-
- [:octicons-arrow-right-24: 查看文档](moe-prompt-refiner.md)
+ [:octicons-arrow-right-24: 查看文档](wisdom-synthesizer.md)
diff --git a/docs/plugins/pipelines/moe-prompt-refiner.md b/docs/plugins/pipelines/moe-prompt-refiner.md
deleted file mode 100644
index 18081b1..0000000
--- a/docs/plugins/pipelines/moe-prompt-refiner.md
+++ /dev/null
@@ -1,109 +0,0 @@
-# MoE Prompt Refiner
-
-Pipeline
-v1.0.0
-
-Refines prompts for Mixture of Experts (MoE) summary requests to generate high-quality comprehensive reports.
-
----
-
-## Overview
-
-The MoE Prompt Refiner is an advanced pipeline that optimizes prompts before sending them to multiple expert models, then synthesizes the responses into comprehensive, high-quality reports.
-
-## Features
-
-- :material-view-module: **Multi-Model**: Leverages multiple AI models
-- :material-text-search: **Prompt Optimization**: Refines prompts for best results
-- :material-merge: **Response Synthesis**: Combines expert responses
-- :material-file-document: **Report Generation**: Creates structured reports
-
----
-
-## Installation
-
-1. Download the pipeline file: [`moe_prompt_refiner.py`](https://github.com/Fu-Jie/openwebui-extensions/tree/main/plugins/pipelines)
-2. Upload to OpenWebUI: **Admin Panel** → **Settings** → **Functions**
-3. Configure expert models and settings
-4. Enable the pipeline
-
----
-
-## How It Works
-
-```mermaid
-graph TD
- A[User Prompt] --> B[Prompt Refiner]
- B --> C[Expert Model 1]
- B --> D[Expert Model 2]
- B --> E[Expert Model N]
- C --> F[Response Synthesizer]
- D --> F
- E --> F
- F --> G[Comprehensive Report]
-```
-
----
-
-## Configuration
-
-| Option | Type | Default | Description |
-|--------|------|---------|-------------|
-| `expert_models` | list | `[]` | List of models to consult |
-| `synthesis_model` | string | `"auto"` | Model for synthesizing responses |
-| `report_format` | string | `"markdown"` | Output format |
-
----
-
-## Use Cases
-
-- **Research Reports**: Gather insights from multiple AI perspectives
-- **Comprehensive Analysis**: Multi-faceted problem analysis
-- **Decision Support**: Balanced recommendations from diverse models
-- **Content Creation**: Rich, multi-perspective content
-
----
-
-## Example
-
-**Input Prompt:**
-```
-Analyze the pros and cons of microservices architecture
-```
-
-**Output Report:**
-```markdown
-# Microservices Architecture Analysis
-
-## Executive Summary
-Based on analysis from multiple expert perspectives...
-
-## Advantages
-1. **Scalability** (Expert A)...
-2. **Technology Flexibility** (Expert B)...
-
-## Disadvantages
-1. **Complexity** (Expert A)...
-2. **Distributed System Challenges** (Expert C)...
-
-## Recommendations
-Synthesized recommendations based on expert consensus...
-```
-
----
-
-## Requirements
-
-!!! note "Prerequisites"
- - OpenWebUI v0.3.0 or later
- - Access to multiple LLM models
- - Sufficient API quotas for multi-model queries
-
-!!! warning "Resource Usage"
- This pipeline makes multiple API calls per request. Monitor your usage and costs.
-
----
-
-## Source Code
-
-[:fontawesome-brands-github: View on GitHub](https://github.com/Fu-Jie/openwebui-extensions/tree/main/plugins/pipelines){ .md-button }
diff --git a/docs/plugins/pipelines/moe-prompt-refiner.zh.md b/docs/plugins/pipelines/moe-prompt-refiner.zh.md
deleted file mode 100644
index 3088b8b..0000000
--- a/docs/plugins/pipelines/moe-prompt-refiner.zh.md
+++ /dev/null
@@ -1,109 +0,0 @@
-# MoE Prompt Refiner
-
-Pipeline
-v1.0.0
-
-为 Mixture of Experts(MoE)汇总请求优化提示词,生成高质量的综合报告。
-
----
-
-## 概览
-
-MoE Prompt Refiner 是一个高级 Pipeline,会在将请求发送给多个专家模型前先优化提示词,然后综合各模型回复,输出结构化的高质量报告。
-
-## 功能特性
-
-- :material-view-module: **多模型**:同时利用多个 AI 模型
-- :material-text-search: **提示词优化**:在发送前优化 prompt 获得更好结果
-- :material-merge: **结果合成**:整合专家回复
-- :material-file-document: **报告生成**:输出结构化报告
-
----
-
-## 安装
-
-1. 下载 Pipeline 文件:[`moe_prompt_refiner.py`](https://github.com/Fu-Jie/openwebui-extensions/tree/main/plugins/pipelines)
-2. 上传到 OpenWebUI:**Admin Panel** → **Settings** → **Functions**
-3. 配置专家模型及相关参数
-4. 启用该 Pipeline
-
----
-
-## 工作流程
-
-```mermaid
-graph TD
- A[User Prompt] --> B[Prompt Refiner]
- B --> C[Expert Model 1]
- B --> D[Expert Model 2]
- B --> E[Expert Model N]
- C --> F[Response Synthesizer]
- D --> F
- E --> F
- F --> G[Comprehensive Report]
-```
-
----
-
-## 配置项
-
-| 选项 | 类型 | 默认值 | 说明 |
-|--------|------|---------|-------------|
-| `expert_models` | list | `[]` | 需要咨询的模型列表 |
-| `synthesis_model` | string | `"auto"` | 用于综合回复的模型 |
-| `report_format` | string | `"markdown"` | 输出格式 |
-
----
-
-## 适用场景
-
-- **研究报告**:从多个 AI 视角收集洞见
-- **综合分析**:多角度问题拆解
-- **决策支持**:获得多模型的平衡建议
-- **内容创作**:生成多视角的丰富内容
-
----
-
-## 示例
-
-**输入 Prompt:**
-```
-Analyze the pros and cons of microservices architecture
-```
-
-**输出报告:**
-```markdown
-# Microservices Architecture Analysis
-
-## Executive Summary
-Based on analysis from multiple expert perspectives...
-
-## Advantages
-1. **Scalability** (Expert A)...
-2. **Technology Flexibility** (Expert B)...
-
-## Disadvantages
-1. **Complexity** (Expert A)...
-2. **Distributed System Challenges** (Expert C)...
-
-## Recommendations
-Synthesized recommendations based on expert consensus...
-```
-
----
-
-## 运行要求
-
-!!! note "前置条件"
- - OpenWebUI v0.3.0 及以上
- - 可以访问多个 LLM 模型
- - 有足够的 API 配额支撑多模型请求
-
-!!! warning "资源消耗"
- 此 Pipeline 每次请求会进行多次 API 调用,请关注用量与成本。
-
----
-
-## 源码
-
-[:fontawesome-brands-github: 在 GitHub 查看](https://github.com/Fu-Jie/openwebui-extensions/tree/main/plugins/pipelines){ .md-button }
diff --git a/docs/plugins/pipelines/wisdom-synthesizer.md b/docs/plugins/pipelines/wisdom-synthesizer.md
new file mode 100644
index 0000000..0e6a9af
--- /dev/null
+++ b/docs/plugins/pipelines/wisdom-synthesizer.md
@@ -0,0 +1,73 @@
+# Wisdom Synthesizer (Collective Wisdom Synthesizer)
+
+An external pipeline filter (Pipeline/Filter) for **Open WebUI** that intercepts multi-model aggregation requests and, by leveraging collective wisdom, reshapes **flat, linear aggregate output** into structured, high-contrast **expert analysis reports**.
+
+
+
+---
+
+## 🚀 Key Features
+
+* **Smart Interception**: Automatically catches Open WebUI's “Summarize various models' responses” requests.
+* **Dynamic Parsing**: Strips away generic formatting and precisely extracts the **original user query** and **each model's individual response**.
+* **Wisdom Fusion**: Directs the summary model to act as a “Chief Analyst”, enforcing a critical evaluation workflow instead of generic merging.
+* **Standardized Output**: Guarantees the report includes:
+ * **【Core Consensus】**: Aggregated common ground across models.
+ * **【Key Divergences】**: Comparative breakdown of different perspectives/approaches.
+ * **【Unique Insights】**: Spotlighting innovative points found in a single model.
+ * **【Synthesis & Recommendation】**: An action-oriented, blended strategy set.
+
+---
+
+## 📦 Installation & Usage (Pipelines Mode)
+
+> [!IMPORTANT]
+> **Prerequisite**:
+> This plugin relies on the official **[open-webui/pipelines](https://github.com/open-webui/pipelines)** framework. Please ensure your Open WebUI backend is already connected to an active `pipelines` runner environment beforehand.
+
+This plugin ships as a single-file pipeline filter and can be imported with a single click:
+
+### 🚀 One-Click Import via URL (Recommended 🌟)
+
+1. Log into Open WebUI and open the **Admin Settings** -> **Pipelines** tab.
+2. Click **“Add Pipeline”** and paste the **GitHub raw link** to `wisdom_synthesizer.py` into the URL field.
+3. Click **Save**; the pipeline is loaded automatically.
+
+The loading process is illustrated below:
+
+
+
+---
+
+## ⚙️ Valves Configuration
+
+The following parameters can be adjusted via the filter's Valves:
+
+| Parameter | Default | Description |
+| :--- | :--- | :--- |
+| `pipelines` | `["*"]` | Target model IDs this filter applies to *(keep the default to apply globally)* |
+| `priority` | `0` | Filter execution priority (lower numbers execute first). |
+| `model_id` | `None` | (Optional) Redirect the summarization job to a dedicated high-capability summary model. |
+| `trigger_prefix` | `You have been provided...` | Pre-set phrase to trigger interception. Usually requires no changes. |
+| `query_start_marker` | `'the latest user query: "'` | Anchor used to locate the start of the original query. |
+| `query_end_marker` | `'"\n\nYour task is to'` | Anchor used to locate the end of the original query. |
+| `response_start_marker` | `"Responses from models: "` | Anchor used to locate where the model responses begin. |
+
+> [!TIP]
+> **Configuration Tip**:
+> The default `["*"]` lets the filter apply to whichever models are selected for aggregation. In most scenarios, **keeping this default** is recommended.
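
As a concrete illustration of how the marker valves above carve up the aggregation prompt, here is a minimal sketch using the default marker strings shipped in `wisdom_synthesizer.py`. The `parse_aggregate_prompt` helper and the sample prompt are hypothetical, written only to mirror the extraction logic in the filter's `inlet` method:

```python
# Sketch of how the default marker valves slice Open WebUI's aggregation
# prompt. Marker strings copy the defaults in wisdom_synthesizer.py; the
# helper name parse_aggregate_prompt is hypothetical, not part of the plugin.
import re

QUERY_START = 'the latest user query: "'
QUERY_END = '"\n\nYour task is to'
RESPONSES_START = "Responses from models: "

def parse_aggregate_prompt(prompt: str) -> tuple[str, list[str]]:
    """Extract (original_query, per-model responses) from an aggregation prompt."""
    q_start = prompt.find(QUERY_START)
    q_end = prompt.find(QUERY_END)
    query = ""
    if q_start != -1 and q_end != -1:
        query = prompt[q_start + len(QUERY_START):q_end]

    r_start = prompt.find(RESPONSES_START)
    tail = prompt[r_start + len(RESPONSES_START):] if r_start != -1 else ""
    # Individual responses are wrapped in triple double-quote delimiters.
    responses = [p.strip() for p in re.split(r'\n?"""\n?', tail) if p.strip()]
    return query, responses

prompt = (
    "You have been provided with a set of responses from various models to "
    'the latest user query: "What is Rust?"\n\nYour task is to synthesize '
    'these responses.\nResponses from models: """Rust is a systems '
    'language."""\n"""Rust emphasizes memory safety."""'
)
query, responses = parse_aggregate_prompt(prompt)
# query is "What is Rust?"; responses holds the two model answers
```

If Open WebUI's built-in aggregation prompt changes its wording, the marker valves can be updated without touching the code.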
+
+---
+
+## 🤝 Related Projects
+
+If you're building inside the Open WebUI ecosystem, you might find my other plugin collections helpful:
+
+* 🚀 **[openwebui-extensions](https://github.com/Fu-Jie/openwebui-extensions)** —— A comprehensive collection of Actions, Pipes, and Tools to supercharge your workspace.
+* 🪄 **[open-webui-prompt-plus](https://github.com/Fu-Jie/open-webui-prompt-plus)** —— Enhances Prompt engineering with AI-powered generators, Spotlight-style searches, and interactive forms.
+
+---
+
+## 📄 License
+
+[MIT License](LICENSE)
diff --git a/docs/plugins/pipelines/wisdom-synthesizer.zh.md b/docs/plugins/pipelines/wisdom-synthesizer.zh.md
new file mode 100644
index 0000000..a9edd48
--- /dev/null
+++ b/docs/plugins/pipelines/wisdom-synthesizer.zh.md
@@ -0,0 +1,73 @@
+# Wisdom Synthesizer (集体智慧合成器)
+
+专为 **Open WebUI** 设计的外置管道过滤器(Pipeline/Filter),旨在通过智能拦截并重构多模型汇总请求,发挥集体智慧(Collective Wisdom),将原本较为**基础和扁平的常规汇总**熔炼为结构清晰、具备多维对比度的**专家级综合分析报告**。
+
+
+
+---
+
+## 🚀 核心功能
+
+* **智能拦截**:自动捕获 Open WebUI 的“汇总多模型响应”请求(通过内置前缀触发)。
+* **动态解析**:剥离多余格式,精准提取**原始用户问题**与**各模型的独立回答**。
+* **智慧融合**:摒弃基础的模型合并,强制总结模型扮演“首席分析师”,发挥集体智慧审视全局。
+* **规范输出**:将汇总响应熔炼为以下结构:
+ * **【核心共识】**: 提炼模型间的相同点。
+ * **【关键分歧】**: 对比不同视角的碰撞。
+ * **【独特洞察】**: 发现单一模型闪光点。
+ * **【综合建议】**: 最终形成有弹性的熔铸方案。
+
+---
+
+## 📦 安装与使用 (Pipelines 模式)
+
+> [!IMPORTANT]
+> **前提条件**:
+> 本插件依赖于 Open WebUI 官方的 **[open-webui/pipelines](https://github.com/open-webui/pipelines)** 框架插件系统。请确保你的 Open WebUI 后端已经架设好或已连接底层的 `pipelines` 服务端环境。
+
+本插件为单文件管道过滤组件,支持在面板中一键拉取安装:
+
+### 🚀 通过 URL 一键导入 (推荐 🌟)
+
+1. 登录你的 Open WebUI 后台,进入 **管理员设置** -> **Pipelines** 选项卡。
+2. 点击 **“添加 Pipeline”**,并在地址栏中复制贴入此仓库中 `wisdom_synthesizer.py` 的 **GitHub Raw 链接**。
+3. 点击 **保存** 即可成功加载。
+
+以下是操作动态演示:
+
+
+
+---
+
+## ⚙️ Valves 管道配置
+
+进入管道配置项,可动态调整以下参数:
+
+| 参数 | 默认值 | 说明 |
+| :--- | :--- | :--- |
+| `pipelines` | `["*"]` | 应用此 Filter 的目标模型 ID *(如果要全局生效保持默认)* |
+| `priority` | `0` | 过滤器管道执行优先级(数字越小,越优先执行) |
+| `model_id` | `None` | (可选) 强制将汇总任务流向你指定的某个专用高性能总结模型 |
+| `trigger_prefix` | `You have been provided...` | 用于触发拦截的提示词起始句柄前缀。一般无需修改 |
+| `query_start_marker` | `'the latest user query: "'` | 解析原始查询的起始标记锚点 |
+| `query_end_marker` | `'"\n\nYour task is to'` | 解析原始查询的结束标记锚点 |
+| `response_start_marker` | `"Responses from models: "` | 解析各个模型独立响应的起始锚点标志 |
+
+> [!TIP]
+> **配置建议**:
+> 默认值 `["*"]` 可在所有选定的汇总模型上自适应生效。在绝大多数情况下,你**仅需保持此默认参数**便可保障全局自适应拦截。
+
+---
+
+## 🤝 友情链接 (Related Projects)
+
+如果你对 Open WebUI 的扩展生态感兴趣,欢迎关注我的其它开源方案:
+
+* 🚀 **[openwebui-extensions](https://github.com/Fu-Jie/openwebui-extensions)** —— 包含各种增强 Actions、Pipes、Tools 等一篮子开源插件合集,助你解锁更多黑魔法。
+* 🪄 **[open-webui-prompt-plus](https://github.com/Fu-Jie/open-webui-prompt-plus)** —— 包含 AI 驱动的提示词生成器、Spotlight 搜索框及交互变量表单,极速拉满提示词工程。
+
+---
+
+## 📄 开源许可
+
+[MIT License](LICENSE)
diff --git a/plugins/pipelines/moe_prompt_refiner.py b/plugins/pipelines/moe_prompt_refiner.py
deleted file mode 100644
index f599025..0000000
--- a/plugins/pipelines/moe_prompt_refiner.py
+++ /dev/null
@@ -1,232 +0,0 @@
-import os
-from typing import List, Optional
-from pydantic import BaseModel
-import time
-
-
-class Pipeline:
- """
- 该管道用于优化多模型(MoE)汇总请求的提示词。
-
- 它会拦截用于汇总多个模型响应的请求,提取原始用户查询和各个模型的具体回答,
- 然后构建一个新的、更详细、结构化的提示词。
-
- 这个经过优化的提示词会引导最终的汇总模型扮演一个专家分析师的角色,
- 将输入信息整合成一份高质量、全面的综合报告。
- """
-
- class Valves(BaseModel):
- # 指定该过滤器管道将连接到的目标管道ID(模型)。
- # 如果希望连接到所有管道,可以设置为 ["*"]。
- pipelines: List[str] = ["*"]
-
- # 为过滤器管道分配一个优先级。
- # 优先级决定了过滤器管道的执行顺序。
- # 数字越小,优先级越高。
- priority: int = 0
-
- # 指定用于分析和总结的模型ID。
- # 如果设置,MoE汇总请求将被重定向到此模型。
- model_id: Optional[str] = None
-
- # MoE 汇总请求的触发前缀。
- # 用于识别是否为 MoE 汇总请求。
- trigger_prefix: str = (
- "You have been provided with a set of responses from various models to the latest user query"
- )
-
- # 解析原始查询的起始标记
- query_start_marker: str = 'the latest user query: "'
-
- # 解析原始查询的结束标记
- query_end_marker: str = '"\n\nYour task is to'
-
- # 解析模型响应的起始标记
- response_start_marker: str = "Responses from models: "
-
- def __init__(self):
- self.type = "filter"
- self.name = "moe_prompt_refiner"
- self.valves = self.Valves()
-
- async def on_startup(self):
- # 此函数在服务器启动时调用。
- # print(f"on_startup:{__name__}")
- pass
-
- async def on_shutdown(self):
- # 此函数在服务器停止时调用。
- # print(f"on_shutdown:{__name__}")
- pass
-
- async def inlet(self, body: dict, user: Optional[dict] = None) -> dict:
- """
- 此方法是管道的入口点。
-
- 它会检查传入的请求是否为多模型(MoE)汇总请求。如果是,它会解析原始提示词,
- 提取用户的查询和来自不同模型的响应。然后,它会动态构建一个新的、结构更清晰的提示词,
- 并用它替换原始的消息内容。
-
- 参数:
- body (dict): 包含消息的请求体。
- user (Optional[dict]): 用户信息。
-
- 返回:
- dict: 包含优化后提示词的已修改请求体。
- """
- print(f"pipe:{__name__}")
-
- messages = body.get("messages", [])
- if not messages:
- return body
-
- user_message_content = ""
- user_message_index = -1
-
- # 找到最后一条用户消息
- for i in range(len(messages) - 1, -1, -1):
- if messages[i].get("role") == "user":
- content = messages[i].get("content", "")
- # 处理内容为数组的情况(多模态消息)
- if isinstance(content, list):
- # 从数组中提取所有文本内容
- text_parts = []
- for item in content:
- if isinstance(item, dict) and item.get("type") == "text":
- text_parts.append(item.get("text", ""))
- elif isinstance(item, str):
- text_parts.append(item)
- user_message_content = "".join(text_parts)
- elif isinstance(content, str):
- user_message_content = content
-
- user_message_index = i
- break
-
- if user_message_index == -1:
- return body
-
- # 检查是否为MoE汇总请求
- if isinstance(user_message_content, str) and user_message_content.startswith(
- self.valves.trigger_prefix
- ):
- print("检测到MoE汇总请求,正在更改提示词。")
-
- # 如果配置了 model_id,则重定向请求
- if self.valves.model_id:
- print(f"重定向请求到模型: {self.valves.model_id}")
- body["model"] = self.valves.model_id
-
- # 1. 提取原始查询
- query_start_phrase = self.valves.query_start_marker
- query_end_phrase = self.valves.query_end_marker
- start_index = user_message_content.find(query_start_phrase)
- end_index = user_message_content.find(query_end_phrase)
-
- original_query = ""
- if start_index != -1 and end_index != -1:
- original_query = user_message_content[
- start_index + len(query_start_phrase) : end_index
- ]
-
- # 2. 提取各个模型的响应
- responses_start_phrase = self.valves.response_start_marker
- responses_start_index = user_message_content.find(responses_start_phrase)
-
- responses_text = ""
- if responses_start_index != -1:
- responses_text = user_message_content[
- responses_start_index + len(responses_start_phrase) :
- ]
-
- # 使用三重双引号作为分隔符来提取响应
- responses = [
- part.strip() for part in responses_text.split('"""') if part.strip()
- ]
-
- # 3. 动态构建模型响应部分
- responses_section = ""
- for i, response in enumerate(responses):
- responses_section += f'''"""
-[第 {i + 1} 个模型的完整回答]
-{response}
-"""
-'''
-
- # 4. 构建新的提示词
- merge_prompt = f'''# 角色定位
-你是一位经验丰富的首席分析师,正在处理来自多个独立 AI 专家团队对同一问题的分析报告。你的任务是将这些报告进行深度整合、批判性分析,并提炼出一份结构清晰、洞察深刻、对决策者极具价值的综合报告。
-
-# 原始用户问题
-{original_query}
-
-# 输入格式说明 ⚠️ 重要
-各模型的响应已通过 """ (三重引号)分隔符准确识别和分离。系统已将不同模型的回答分别提取,你现在需要基于以下分离后的内容进行分析。
-
-**已分离的模型响应**:
-{responses_section}
-# 核心任务
-请勿简单地复制或拼接原始报告。你需要运用你的专业分析能力,完成以下步骤:
-
-## 1. 信息解析与评估 (Analysis & Evaluation)
-- **准确分隔**: 已根据 """ 分隔符,准确识别每个模型的回答边界。
-- **可信度评估**: 批判性地审视每份报告,识别其中可能存在的偏见、错误或不一致之处。
-- **逻辑梳理**: 理清每份报告的核心论点、支撑论据和推理链条。
-
-## 2. 核心洞察提炼 (Insight Extraction)
-- **识别共识**: 找出所有报告中共同提及、高度一致的观点或建议。这通常是问题的核心事实或最稳健的策略。
-- **突出差异**: 明确指出各报告在视角、方法、预测或结论上的关键分歧点。这些分歧往往蕴含着重要的战略考量。
-- **捕捉亮点**: 挖掘单个报告中独有的、具有创新性或深刻性的见解,这些"闪光点"可能是关键的差异化优势。
-
-## 3. 综合报告撰写 (Synthesis)
-基于以上分析,生成一份包含以下结构的综合报告:
-
-### **【核心共识】**
-- 用清晰的要点列出所有模型一致认同的关键信息或建议。
-- 标注覆盖范围(如"所有模型均同意"或"多数模型提及")。
-
-### **【关键分歧】**
-- 清晰地对比不同模型在哪些核心问题上持有不同观点。
-- 用序号或描述性语言标识不同的观点阵营(如"观点 A 与观点 B 的分歧"或"方案 1 vs 方案 2")。
-- 简要说明其原因或侧重点的差异。
-
-### **【独特洞察】**
-- 提炼并呈现那些仅在单个报告中出现,但极具价值的独特建议或视角。
-- 用"某个模型提出"或"另一视角"等中立表述,避免因缺少显式来源标记而造成的混淆。
-
-### **【综合分析与建议】**
-- **整合**: 基于共识、差异和亮点,提供一个全面、平衡、且经过你专业判断优化的最终分析。
-- **建议**: 如果原始指令是寻求方案或策略,这里应提出一个或多个融合了各方优势的、可执行的建议。
-
-# 格式要求
-- 语言精炼、逻辑清晰、结构分明。
-- 使用加粗、列表、标题等格式,确保报告易于阅读和理解。
-- 由于缺少显式的模型标识,**在呈现差异化观点时,使用描述性或序号化的方式**(如"第一种观点""另一个视角")而非具体的模型名称。
-- 始终以"为用户提供最高价值的决策依据"为目标。
-
-# 输出结构示例
-根据以上要求,你的输出应该呈现如下结构:
-
-## 【核心共识】
-✓ [共识观点 1] —— 所有模型均同意
-✓ [共识观点 2] —— 多数模型同意
-
-## 【关键分歧】
-⚡ **在[议题]上的分歧**:
-- 观点阵营 A: ...
-- 观点阵营 B: ...
-- 观点阵营 C: ...
-
-## 【独特洞察】
-💡 [某个模型独有的深刻观点]: ...
-💡 [另一个模型的创新视角]: ...
-
-## 【综合分析与建议】
-基于以上分析,推荐方案/策略: ...
-'''
-
- # 5. 替换原始消息内容
- body["messages"][user_message_index]["content"] = merge_prompt
- print("提示词已成功动态替换。")
-
- return body
diff --git a/plugins/pipelines/moe_prompt_refiner/valves.json b/plugins/pipelines/moe_prompt_refiner/valves.json
deleted file mode 100644
index 9e26dfe..0000000
--- a/plugins/pipelines/moe_prompt_refiner/valves.json
+++ /dev/null
@@ -1 +0,0 @@
-{}
\ No newline at end of file
diff --git a/plugins/pipelines/wisdom_synthesizer.py b/plugins/pipelines/wisdom_synthesizer.py
new file mode 100644
index 0000000..ac18100
--- /dev/null
+++ b/plugins/pipelines/wisdom_synthesizer.py
@@ -0,0 +1,207 @@
+import logging
+from typing import List, Optional
+from pydantic import BaseModel
+
+
+class Pipeline:
+ """
+ This pipeline optimizes the prompt used for multi-model summarization requests.
+
+ It intercepts the request intended to summarize responses from various models,
+ extracts the original user query and the individual model answers,
+ and constructs a new, more detailed and structured prompt.
+
+ The optimized prompt guides the summarizing model to act as an expert analyst,
+ synthesizing the inputs into a high-quality, comprehensive integrated report.
+ """
+
+ class Valves(BaseModel):
+ # Specifies target pipeline IDs (models) this filter will attach to.
+ # Use ["*"] to connect to all pipelines.
+ pipelines: List[str] = ["*"]
+
+ # Assigns priority to the filter pipeline.
+ # Determines execution order (lower numbers execute first).
+ priority: int = 0
+
+ # Specifies model ID for analysis and summarization.
+ # If set, the aggregate request will be redirected to this model.
+ model_id: Optional[str] = None
+
+ # Trigger prefix for aggregate requests.
+ # Used to identify if it is an aggregate synthesis request.
+ trigger_prefix: str = (
+ "You have been provided with a set of responses from various models to the latest user query"
+ )
+
+ # Marker for parsing the original query start
+ query_start_marker: str = 'the latest user query: "'
+
+ # Marker for parsing the original query end
+ query_end_marker: str = '"\n\nYour task is to'
+
+ # Marker for parsing model responses start
+ response_start_marker: str = "Responses from models: "
+
+ def __init__(self):
+ self.type = "filter"
+ self.name = "wisdom_synthesizer"
+ self.valves = self.Valves()
+
+ async def on_startup(self):
+ pass
+
+ async def on_shutdown(self):
+ pass
+
+ async def inlet(self, body: dict, user: Optional[dict] = None) -> dict:
+ """
+ Entry point for the pipeline filter.
+
+ Checks if the request is an aggregate request. If yes, parses the original prompt,
+ extracts query & responses, builds a dynamically-structured new prompt,
+ and replaces the original message content.
+ """
+ logging.info(f"pipe:{__name__}")
+
+ messages = body.get("messages", [])
+ if not messages:
+ return body
+
+ user_message_content = ""
+ user_message_index = -1
+
+ for i in range(len(messages) - 1, -1, -1):
+ if messages[i].get("role") == "user":
+ content = messages[i].get("content", "")
+ if isinstance(content, list):
+ text_parts = []
+ for item in content:
+ if isinstance(item, dict) and item.get("type") == "text":
+ text_parts.append(item.get("text", ""))
+ elif isinstance(item, str):
+ text_parts.append(item)
+ user_message_content = "".join(text_parts)
+ elif isinstance(content, str):
+ user_message_content = content
+
+ user_message_index = i
+ break
+
+ if user_message_index == -1:
+ return body
+
+ if isinstance(user_message_content, str) and user_message_content.startswith(
+ self.valves.trigger_prefix
+ ):
+ logging.info("Detected aggregate request, modifying prompt.")
+
+ if self.valves.model_id:
+ logging.info(f"Redirecting request to model: {self.valves.model_id}")
+ body["model"] = self.valves.model_id
+
+ query_start_phrase = self.valves.query_start_marker
+ query_end_phrase = self.valves.query_end_marker
+ start_index = user_message_content.find(query_start_phrase)
+ end_index = user_message_content.find(query_end_phrase)
+
+ original_query = ""
+ if start_index != -1 and end_index != -1:
+ original_query = user_message_content[
+ start_index + len(query_start_phrase) : end_index
+ ]
+
+ responses_start_phrase = self.valves.response_start_marker
+ responses_start_index = user_message_content.find(responses_start_phrase)
+
+ responses_text = ""
+ if responses_start_index != -1:
+ responses_text = user_message_content[
+ responses_start_index + len(responses_start_phrase) :
+ ]
+
+            import re  # local import, used only for this split
+
+            # Split on triple double-quote delimiters, tolerating optional newlines.
+            responses = [
+                part.strip()
+                for part in re.split(r'\n?"""\n?', responses_text)
+                if part.strip()
+            ]
+
+ responses_section = ""
+ for i, response in enumerate(responses):
+ responses_section += f'''"""
+[Complete Response from Model {i + 1}]
+{response}
+"""
+'''
+
+ merge_prompt = f'''# Role Definition
+You are an experienced Chief Analyst processing analysis reports from multiple independent AI expert teams regarding the same question. Your task is to perform deep integration, critical analysis, and distill a structured, insightful, and highly actionable comprehensive report for decision-makers.
+
+# Original User Query
+{original_query}
+
+# Input Format Instruction ⚠️ IMPORTANT
+The responses from the various models have been identified and separated using a """ (triple-quote) delimiter. The system has extracted each distinct answer; base your analysis on the separated content below.
+
+**Separated Model Responses**:
+{responses_section}
+
+# Core Tasks
+Do not simply copy or concatenate the original reports. Use your professional analytical skills to complete the following steps:
+
+## 1. Analysis & Evaluation
+- **Accurate Separation**: Response boundaries have been identified using the """ delimiter.
+- **Credibility Assessment**: Critically examine each report for potential biases, errors, or inconsistencies.
+- **Logic Tracing**: Trace each report's core arguments, supporting evidence, and reasoning chains.
+
+## 2. Insight Extraction
+- **Identify Consensus**: Find points or recommendations uniformly mentioned across models. This represents the core facts or robust strategies.
+- **Highlight Divergences**: Explicitly state key disagreements in perspectives, methods, or forecasts.
+- **Capture Highlights**: Unearth innovative views found only in a single report.
+
+## 3. Comprehensive Reporting
+Based on the analysis above, generate a synthesis containing:
+
+### **【Core Consensus】**
+- List key information or advice agreed upon by models.
+- Annotate coverage (e.g., "All models agree" or "Majority of models").
+
+### **【Key Divergences】**
+- Contrast different viewpoints on core issues clearly.
+- Use descriptive references (e.g., "Perspective Camp A vs B").
+
+### **【Unique Insights】**
+- Present high-value unique advice or perspectives from standalone reports.
+
+### **【Synthesis & Recommendation】**
+- **Integration**: A balanced final analysis optimized by your professional judgment.
+- **Recommendation**: Formulate actionable blended strategies.
+
+# Format Requirements
+- Concise language, clear logic, distinct structure.
+- Use bolding, lists, and headings for readability.
+- **Language Alignment**: You MUST respond in the **SAME LANGUAGE** as the `Original User Query` above (e.g., if the user query is in Chinese, reply in Chinese; if in English, reply in English). Translate all section headers for your output.
+
+# Output Structure Example
+Output should follow this structure:
+
+## 【Core Consensus】
+✓ [Consensus Point 1] —— All models agree
+✓ [Consensus Point 2] —— Majority of models agree
+
+## 【Key Divergences】
+⚡ **Divergence on [Topic]**:
+- Perspective Camp A: ...
+- Perspective Camp B: ...
+
+## 【Unique Insights】
+💡 [Insight from Model A]: ...
+💡 [Insight from Model B]: ...
+
+## 【Synthesis & Recommendation】
+Based on the analysis above, recommended strategies: ...
+'''
+
+ body["messages"][user_message_index]["content"] = merge_prompt
+ logging.info("Prompt dynamically replaced successfully.")
+
+ return body