feat: Introduce Wisdom Synthesizer pipeline and documentation, replacing the MoE Prompt Refiner.

This commit is contained in:
fujie
2026-03-23 00:33:20 +08:00
parent 2fae50a39f
commit e6f22b0f82
11 changed files with 375 additions and 467 deletions


@@ -17,15 +17,12 @@ Pipelines extend beyond simple transformations to implement:
<div class="grid cards" markdown>
- :material-view-module:{ .lg .middle } **MoE Prompt Refiner**
- :material-view-module:{ .lg .middle } **Wisdom Synthesizer**
[:octicons-tag-24: v0.1.0](https://github.com/Fu-Jie/open-webui-pipeline-wisdom-synthesizer){ .bubble }
---
An external pipeline filter that intercepts multi-model aggregate requests and leverages collective wisdom to produce structured expert reports.
Refines prompts for Mixture of Experts (MoE) summary requests to generate high-quality comprehensive reports.
**Version:** 1.0.0
[:octicons-arrow-right-24: Documentation](moe-prompt-refiner.md)
[:octicons-arrow-right-24: Documentation](wisdom-synthesizer.md)
</div>


@@ -17,15 +17,12 @@ Pipelines extend beyond simple transformations to implement:
<div class="grid cards" markdown>
- :material-view-module:{ .lg .middle } **MoE Prompt Refiner**
- :material-view-module:{ .lg .middle } **Wisdom Synthesizer**
[:octicons-tag-24: v0.1.0](https://github.com/Fu-Jie/open-webui-pipeline-wisdom-synthesizer){ .bubble }
---
Intelligently intercepts and restructures multi-model aggregate requests, leveraging collective wisdom to forge routine summaries into expert-level comparative reports.
Refines prompts for Mixture of Experts (MoE) summary requests to generate high-quality comprehensive reports.
**Version:** 1.0.0
[:octicons-arrow-right-24: Documentation](moe-prompt-refiner.md)
[:octicons-arrow-right-24: Documentation](wisdom-synthesizer.md)
</div>


@@ -1,109 +0,0 @@
# MoE Prompt Refiner
<span class="category-badge pipeline">Pipeline</span>
<span class="version-badge">v1.0.0</span>
Refines prompts for Mixture of Experts (MoE) summary requests to generate high-quality comprehensive reports.
---
## Overview
The MoE Prompt Refiner is an advanced pipeline that optimizes prompts before sending them to multiple expert models, then synthesizes the responses into comprehensive, high-quality reports.
## Features
- :material-view-module: **Multi-Model**: Leverages multiple AI models
- :material-text-search: **Prompt Optimization**: Refines prompts for best results
- :material-merge: **Response Synthesis**: Combines expert responses
- :material-file-document: **Report Generation**: Creates structured reports
---
## Installation
1. Download the pipeline file: [`moe_prompt_refiner.py`](https://github.com/Fu-Jie/openwebui-extensions/tree/main/plugins/pipelines)
2. Upload to OpenWebUI: **Admin Panel** → **Settings** → **Functions**
3. Configure expert models and settings
4. Enable the pipeline
---
## How It Works
```mermaid
graph TD
A[User Prompt] --> B[Prompt Refiner]
B --> C[Expert Model 1]
B --> D[Expert Model 2]
B --> E[Expert Model N]
C --> F[Response Synthesizer]
D --> F
E --> F
F --> G[Comprehensive Report]
```
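The fan-out and synthesis flow in the diagram can be sketched in Python. This is a minimal illustration under stated assumptions, not the pipeline's actual code: `query_model`, `EXPERT_MODELS`, `run_moe`, and the refinement string are all hypothetical placeholders.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical expert pool; in the real pipeline this comes from configuration.
EXPERT_MODELS = ["expert-a", "expert-b", "expert-c"]

def query_model(model: str, prompt: str) -> str:
    # Placeholder for a real chat-completion call to the given model.
    return f"[{model}] response to: {prompt}"

def run_moe(prompt: str, synthesis_model: str = "auto") -> str:
    # Step 1: refine the prompt before fanning out (illustrative refinement).
    refined = f"Answer thoroughly and explain your reasoning: {prompt}"
    # Step 2: query all expert models in parallel.
    with ThreadPoolExecutor() as pool:
        responses = list(pool.map(lambda m: query_model(m, refined), EXPERT_MODELS))
    # Step 3: synthesize; a real pipeline would call `synthesis_model` here,
    # this sketch just concatenates the expert responses.
    return "# Comprehensive Report\n" + "\n".join(responses)
```

The parallel fan-out matters in practice: expert calls are independent, so total latency is roughly that of the slowest expert rather than the sum of all calls.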
---
## Configuration
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `expert_models` | list | `[]` | List of models to consult |
| `synthesis_model` | string | `"auto"` | Model for synthesizing responses |
| `report_format` | string | `"markdown"` | Output format |
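A sketch of how these options might be held in code, assuming a plain dataclass (the pipeline's real Valves class may differ); the defaults mirror the table above:

```python
from dataclasses import dataclass, field

@dataclass
class RefinerConfig:
    # Hypothetical container mirroring the documented options.
    expert_models: list = field(default_factory=list)  # models to consult
    synthesis_model: str = "auto"    # model used to merge expert responses
    report_format: str = "markdown"  # output format

# Example: consult two experts, keep the other defaults.
cfg = RefinerConfig(expert_models=["model-a", "model-b"])
```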
---
## Use Cases
- **Research Reports**: Gather insights from multiple AI perspectives
- **Comprehensive Analysis**: Multi-faceted problem analysis
- **Decision Support**: Balanced recommendations from diverse models
- **Content Creation**: Rich, multi-perspective content
---
## Example
**Input Prompt:**
```
Analyze the pros and cons of microservices architecture
```
**Output Report:**
```markdown
# Microservices Architecture Analysis
## Executive Summary
Based on analysis from multiple expert perspectives...
## Advantages
1. **Scalability** (Expert A)...
2. **Technology Flexibility** (Expert B)...
## Disadvantages
1. **Complexity** (Expert A)...
2. **Distributed System Challenges** (Expert C)...
## Recommendations
Synthesized recommendations based on expert consensus...
```
---
## Requirements
!!! note "Prerequisites"
- OpenWebUI v0.3.0 or later
- Access to multiple LLM models
- Sufficient API quotas for multi-model queries
!!! warning "Resource Usage"
This pipeline makes multiple API calls per request. Monitor your usage and costs.
---
## Source Code
[:fontawesome-brands-github: View on GitHub](https://github.com/Fu-Jie/openwebui-extensions/tree/main/plugins/pipelines){ .md-button }


@@ -1,109 +0,0 @@
# MoE Prompt Refiner
<span class="category-badge pipeline">Pipeline</span>
<span class="version-badge">v1.0.0</span>
Refines prompts for Mixture of Experts (MoE) summary requests to generate high-quality comprehensive reports.
---
## Overview
The MoE Prompt Refiner is an advanced pipeline that refines the prompt before sending the request to multiple expert models, then synthesizes the models' responses into a structured, high-quality report.
## Features
- :material-view-module: **Multi-Model**: Leverages multiple AI models simultaneously
- :material-text-search: **Prompt Optimization**: Refines the prompt before sending for better results
- :material-merge: **Response Synthesis**: Combines expert responses
- :material-file-document: **Report Generation**: Produces structured reports
---
## Installation
1. Download the pipeline file: [`moe_prompt_refiner.py`](https://github.com/Fu-Jie/openwebui-extensions/tree/main/plugins/pipelines)
2. Upload to OpenWebUI: **Admin Panel** → **Settings** → **Functions**
3. Configure the expert models and related settings
4. Enable the pipeline
---
## How It Works
```mermaid
graph TD
A[User Prompt] --> B[Prompt Refiner]
B --> C[Expert Model 1]
B --> D[Expert Model 2]
B --> E[Expert Model N]
C --> F[Response Synthesizer]
D --> F
E --> F
F --> G[Comprehensive Report]
```
---
## Configuration
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `expert_models` | list | `[]` | List of models to consult |
| `synthesis_model` | string | `"auto"` | Model used to synthesize responses |
| `report_format` | string | `"markdown"` | Output format |
---
## Use Cases
- **Research Reports**: Gather insights from multiple AI perspectives
- **Comprehensive Analysis**: Break problems down from multiple angles
- **Decision Support**: Balanced recommendations from multiple models
- **Content Creation**: Rich, multi-perspective content
---
## Example
**Input Prompt:**
```
Analyze the pros and cons of microservices architecture
```
**Output Report:**
```markdown
# Microservices Architecture Analysis
## Executive Summary
Based on analysis from multiple expert perspectives...
## Advantages
1. **Scalability** (Expert A)...
2. **Technology Flexibility** (Expert B)...
## Disadvantages
1. **Complexity** (Expert A)...
2. **Distributed System Challenges** (Expert C)...
## Recommendations
Synthesized recommendations based on expert consensus...
```
---
## Requirements
!!! note "Prerequisites"
- OpenWebUI v0.3.0 or later
- Access to multiple LLM models
- Sufficient API quota for multi-model requests
!!! warning "Resource Usage"
This pipeline makes multiple API calls per request; monitor your usage and costs.
---
## Source Code
[:fontawesome-brands-github: View on GitHub](https://github.com/Fu-Jie/openwebui-extensions/tree/main/plugins/pipelines){ .md-button }


@@ -0,0 +1,73 @@
# Wisdom Synthesizer (Collective Wisdom Synthesizer)
An external pipeline filter (Pipeline/Filter) for **Open WebUI** that intercepts multi-model aggregate requests and leverages collective wisdom, reshaping **flat, linear aggregate output** into structured, comparative **expert analysis reports**.
![Effect Demonstration](wisdom_synthesizer.gif)
---
## 🚀 Key Features
* **Smart Interception**: Automatically catches Open WebUI's “Summarize various models' responses” requests.
* **Dynamic Parsing**: Strips away generic formatting and precisely extracts the **original user query** and **each model's individual response**.
* **Wisdom Fusion**: Directs the summary model to act as a “Chief Analyst”, enforcing a critical evaluation workflow instead of generic merging.
* **Standardized Output Structure**: Guarantees the output includes:
* **【Core Consensus】**: Aggregated common ground across models.
* **【Key Divergences】**: Comparative breakdown of different perspectives/approaches.
* **【Unique Insights】**: Spotlighting innovative points found in a single model.
* **【Synthesis & Recommendation】**: An action-oriented, blended strategy set.
---
## 📦 Installation & Usage (Pipelines Mode)
> [!IMPORTANT]
> **Prerequisite**:
> This plugin relies on the official **[open-webui/pipelines](https://github.com/open-webui/pipelines)** framework. Please ensure your Open WebUI backend is already connected to an active `pipelines` runner environment beforehand.
This plugin is a single-file pipeline filter component and supports one-click import:
### 🚀 One-Click Import via URL (Recommended 🌟)
1. Log into your Open WebUI admin panel and go to the **Admin Settings** -> **Pipelines** tab.
2. Click **“Add Pipeline”** and paste the **GitHub Raw link** of `wisdom_synthesizer.py` into the URL field.
3. Save the configuration; the pipeline loads automatically.
Below is a visual guide to the installation:
![Installation Guide](install.gif)
---
## ⚙️ Valves Configuration
The following parameters can be adjusted in the pipeline's Valves configuration:
| Parameter | Default | Description |
| :--- | :--- | :--- |
| `pipelines` | `["*"]` | Target model IDs this filter applies to *(keep the default for global coverage)* |
| `priority` | `0` | Filter pipeline execution order priority (lower numbers execute first). |
| `model_id` | `None` | (Optional) Force the summarize job to run on a dedicated high-spec summary model. |
| `trigger_prefix` | `You have been provided...` | Pre-set phrase to trigger interception. Usually requires no changes. |
| `query_start_marker` | `'the latest user query: "'` | Anchor used to locate the start of the original query. |
| `query_end_marker` | `'"\n\nYour task is to'` | Anchor used to locate the end of the original query. |
| `response_start_marker` | `"Responses from models: "` | Anchor used to locate where the model responses begin. |
> [!TIP]
> **Configuration Tip**:
> The default `["*"]` lets the filter adapt automatically to whichever aggregator models are selected on the fly. In most scenarios, **keeping this default** is recommended.
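To make the marker valves concrete, here is a rough sketch of how the default anchors above could be used to split an intercepted aggregate prompt into the original query and the model responses. `parse_aggregate_prompt` and the sample prompt are illustrative only, not the filter's real implementation.

```python
# Default marker values from the Valves table above.
QUERY_START = 'the latest user query: "'
QUERY_END = '"\n\nYour task is to'
RESPONSE_START = "Responses from models: "

def parse_aggregate_prompt(prompt: str) -> tuple[str, str]:
    # Locate the original user query between its start and end anchors.
    q0 = prompt.index(QUERY_START) + len(QUERY_START)
    q1 = prompt.index(QUERY_END, q0)
    query = prompt[q0:q1]
    # Everything after the response anchor is the per-model response block.
    responses = prompt[prompt.index(RESPONSE_START) + len(RESPONSE_START):]
    return query, responses

# Hypothetical intercepted prompt in the expected shape.
raw = ('You have been provided with a set of responses for '
       'the latest user query: "What is RAG?"\n\nYour task is to synthesize them. '
       'Responses from models: model-a said ...')
query, responses = parse_aggregate_prompt(raw)
```

This is why the marker valves exist: if Open WebUI ever changes the wording of its built-in aggregate prompt, the anchors can be updated without touching the filter code.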
---
## 🤝 Related Projects
If you're building inside the Open WebUI ecosystem, you might find my other plugin collections helpful:
* 🚀 **[openwebui-extensions](https://github.com/Fu-Jie/openwebui-extensions)** —— A comprehensive collection of Actions, Pipes, and Tools to supercharge your workspace.
* 🪄 **[open-webui-prompt-plus](https://github.com/Fu-Jie/open-webui-prompt-plus)** —— Enhances Prompt engineering with AI-powered generators, Spotlight-style searches, and interactive forms.
---
## 📄 License
[MIT License](LICENSE)


@@ -0,0 +1,73 @@
# Wisdom Synthesizer (集体智慧合成器)
An external pipeline filter (Pipeline/Filter) designed for **Open WebUI** that intelligently intercepts and restructures multi-model aggregate requests, leveraging collective wisdom to forge otherwise **basic, flat routine summaries** into clearly structured, multi-dimensionally comparative **expert-level analysis reports**.
![Effect Demonstration](wisdom_synthesizer.gif)
---
## 🚀 Key Features
* **Smart Interception**: Automatically captures Open WebUI's “summarize multi-model responses” requests (triggered by the built-in prefix).
* **Dynamic Parsing**: Strips redundant formatting and precisely extracts the **original user question** and **each model's individual answer**.
* **Wisdom Fusion**: Goes beyond naive model merging by directing the summary model to act as a “Chief Analyst” and review the full picture with collective wisdom.
* **Standardized Output**: Forges the aggregate response into the following structure:
* **【Core Consensus】**: Distills the points the models agree on.
* **【Key Divergences】**: Contrasts the clash of differing perspectives.
* **【Unique Insights】**: Surfaces the highlights found in a single model.
* **【Synthesis & Recommendation】**: A final, flexible blended strategy.
---
## 📦 Installation & Usage (Pipelines Mode)
> [!IMPORTANT]
> **Prerequisite**:
> This plugin depends on the official **[open-webui/pipelines](https://github.com/open-webui/pipelines)** framework. Make sure your Open WebUI backend is already connected to a running `pipelines` server environment.
This plugin is a single-file pipeline filter component that can be pulled in with one click from the admin panel:
### 🚀 One-Click Import via URL (Recommended 🌟)
1. Log into your Open WebUI admin panel and open the **Admin Settings** -> **Pipelines** tab.
2. Click **“Add Pipeline”** and paste the **GitHub Raw link** of this repository's `wisdom_synthesizer.py` into the URL field.
3. Click **Save** to load it.
An animated walkthrough of the steps:
![Installation Guide](install.gif)
---
## ⚙️ Valves Configuration
The following parameters can be adjusted in the pipeline's Valves configuration:
| Parameter | Default | Description |
| :--- | :--- | :--- |
| `pipelines` | `["*"]` | Target model IDs this filter applies to *(keep the default for global coverage)* |
| `priority` | `0` | Filter execution priority (lower numbers run first) |
| `model_id` | `None` | (Optional) Force the summary task onto a dedicated high-performance summary model |
| `trigger_prefix` | `You have been provided...` | Prompt prefix that triggers interception; usually needs no change |
| `query_start_marker` | `'the latest user query: "'` | Anchor marking the start of the original query |
| `query_end_marker` | `'"\n\nYour task is to'` | Anchor marking the end of the original query |
| `response_start_marker` | `"Responses from models: "` | Anchor marking where the individual model responses begin |
> [!TIP]
> **Configuration Tip**:
> The default `["*"]` adapts automatically to whichever aggregator models are selected. In the vast majority of cases, **keeping this default** is all you need for global interception.
---
## 🤝 Related Projects
If you're interested in the Open WebUI extension ecosystem, check out my other open-source projects:
* 🚀 **[openwebui-extensions](https://github.com/Fu-Jie/openwebui-extensions)** —— A collection of Actions, Pipes, Tools, and other plugins to unlock more from your workspace.
* 🪄 **[open-webui-prompt-plus](https://github.com/Fu-Jie/open-webui-prompt-plus)** —— AI-powered prompt generators, a Spotlight-style search box, and interactive variable forms to supercharge prompt engineering.
---
## 📄 License
[MIT License](LICENSE)