Compare commits: `async-cont` ... `batch-inst` (17 commits)

| SHA1 |
|---|
| cea31fed38 |
| 8f8147828b |
| 158792d82f |
| 858d048d81 |
| 2f518d4c7a |
| baae09a223 |
| 903bd7b372 |
| 8c998ecc73 |
| f11cf27404 |
| 41f271d2d8 |
| 984d3061c7 |
| ba11cdd157 |
| b1482b6083 |
| 0cc46e0188 |
| 93a42cbe03 |
| fdf95a2825 |
| 5fe66a5803 |
BIN  .agent/agent_hub.db  (new file; binary file not shown)
171  .agent/learnings/filter-async-context-compression-design.md  (new file)
@@ -0,0 +1,171 @@
# Filter: async-context-compression — Design Patterns and Engineering Practices

**Date**: 2026-03-12
**Module**: `plugins/filters/async-context-compression/async_context_compression.py`
**Key features**: context compression, async summary generation, state management, LLM engineering optimization

---

## Core Engineering Insights

### 1. The Filter-to-LLM Request Propagation Chain

**Problem**: The filter's `outlet` stage launches a background async task (`asyncio.create_task`) that calls `generate_chat_completion` (an internal API), but it cannot directly access the original HTTP `request`. Early code used a minimal synthetic Request (just `{"type": "http", "app": webui_app}`), which carried compatibility risks.
**Solution**:

- OpenWebUI supports `__request__` parameter injection for `outlet` as well (i.e., both `inlet` and `outlet` support it)
- Pass `__request__` through the entire async call chain: `outlet → _locked_summary_task → _check_and_generate_summary_async → _generate_summary_async → _call_summary_llm`
- At the final call site: `request = __request__ or Request(...)` (synthetic fallback)

**Takeaway**: LLM call paths should always prefer the real request context over a synthetic one. Even inside a background task, the application-level state on `request.app` remains valid.
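The pass-through described above can be sketched as follows. Apart from `__request__` itself, the function names and payload shapes are illustrative stand-ins, not the plugin's actual code:

```python
# Sketch: threading __request__ from outlet's background task down to the
# final LLM call site. All names except __request__ are placeholders.
import asyncio


async def call_summary_llm(payload, __request__=None):
    # Prefer the real injected request; synthesize a fallback only here,
    # at the bottom of the chain.
    request = __request__ or {"type": "http", "app": None}  # synthetic fallback
    return {"request": request, "payload": payload}


async def generate_summary_async(payload, __request__=None):
    # Each intermediate layer forwards __request__ unchanged.
    return await call_summary_llm(payload, __request__=__request__)


async def outlet_background_task(payload, __request__=None):
    return await generate_summary_async(payload, __request__=__request__)


async def main():
    real_request = {"type": "http", "app": "webui_app"}
    result = await outlet_background_task({"messages": []}, __request__=real_request)
    print(result["request"] is real_request)


asyncio.run(main())  # prints True
```

The point of the sketch: no layer inspects `__request__`; it only forwards it, so the real request object arrives intact at the call site.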
---

### 2. Context Integrity in Async Summary Generation

**Key scenario split**:

| Case | `summary_index` value | Old summary location | Needs `previous_summary`? |
|------|--------|----------|---------|
| Inlet already injected the old summary | Not None | `messages[0]` (first item of middle_messages) | ❌ No, already in conversation_text |
| Outlet received raw messages (no injection) | None | DB archive | ✅ **Yes**, must be read explicitly and passed through |

**Root cause**: The messages `outlet` receives come from the raw database query and never pass through `inlet`'s summary injection. When the LLM cannot see the historical summary, already-compressed knowledge (old conversations, resolved issues, earlier findings) gets reprocessed or forgotten.
**Implementation detail**:

```python
# Load the previous summary asynchronously only when summary_index is None
if summary_index is None:
    previous_summary = await asyncio.to_thread(
        self._load_summary, chat_id, body
    )
else:
    previous_summary = None
```
---

### 3. LLM Prompt Design for Context Compression

**Engineering principles**:

1. **Clear input boundaries**: Use XML-style tags (`<previous_working_memory>`, `<new_conversation>`) to mark explicit boundaries, so the LLM does not confuse "instruction examples" with "data to process"
2. **State-aware merging**: Not "keep every old fact" but **update state**: `"bug X exists" → "bug X fixed"`, or drop resolved items entirely
3. **Goal evolution**: Current Goal reflects the **latest** intent; old goals migrate into Working Memory as context
4. **Error verbatim**: Stack traces, exception types, and error codes must be quoted verbatim (they are first-class citizens of debugging)
5. **Format strictness**: The structure becomes **REQUIRED** (rather than suggested); sections with no content may be omitted, but the layout stays consistent
**New prompt structure**:

```
[Rules] → [Output Constraints] → [Required Structure Header] → [Boundaries] → <previous_working_memory> → <new_conversation>
```

Key improvements:

- Rule 3 (Ruthless Denoising) → new Rule 4 (Error Verbatim) + Rule 5 (Causal Chain)
- "Suggested" structure → "Required" structure with optional sections
- New dedicated `## Causal Log` section, enforcing a single-line causal-chain format: `[MSG_ID?] action → result`
- Explicit token-budget strategy: trim by recency and urgency first (RRF)
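The boundary layout can be assembled roughly as below. The rule wording and section names here are illustrative assumptions; only the tag names and ordering follow the structure described above:

```python
# Sketch: assembling a compression prompt with explicit XML-style boundaries
# so the model cannot confuse instructions with the data to be compressed.
def build_summary_prompt(previous_summary: str, conversation_text: str) -> str:
    rules = (
        "Update state instead of accumulating facts; quote errors verbatim; "
        "the output structure below is REQUIRED (omit only empty sections)."
    )
    required_structure = "## Current Goal\n## Working Memory\n## Causal Log"
    parts = [
        rules,
        required_structure,
        "<previous_working_memory>",
        previous_summary or "(none)",
        "</previous_working_memory>",
        "<new_conversation>",
        conversation_text,
        "</new_conversation>",
    ]
    return "\n\n".join(parts)


prompt = build_summary_prompt("bug X fixed", "user: deploy failed with exit 1")
```

Putting `<previous_working_memory>` before `<new_conversation>` mirrors the merge direction: old state first, then the new material that updates it.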
---

### 4. Error Boundaries and Recovery in Async Tasks

**Observation**: Exceptions in the background summary task (`asyncio.create_task`) do not block the user's response, but the task still needs:

- A complete logging chain (`_log` calls + `event_emitter` notifications)
- Atomic database transactions (summary and compression state saved together)
- Frontend UI feedback (status event: "generating..." → "complete" or "error")
**Best practices**:

- Use a per-chat_id `asyncio.Lock` to prevent concurrent summary tasks
- Run heavy operations (tokenizing, the LLM call) via `asyncio.to_thread`
- Wrap all I/O (DB reads/writes) in the async thread pool
- Keep exception handling scoped to try/except, and never swallow stack traces in the logs
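The practices above combine into a pattern like the following. The names (`_chat_locks`, `generate_summary`, `locked_summary_task`) are illustrative assumptions, not the plugin's actual API:

```python
# Sketch: a per-chat locked background task that runs blocking work in the
# thread pool and never swallows stack traces.
import asyncio
import logging

logger = logging.getLogger("compression")
_chat_locks = {}  # one asyncio.Lock per chat_id


def generate_summary(text):
    # Stand-in for the blocking tokenizer/LLM work run off the event loop.
    return text.upper()


async def locked_summary_task(chat_id, text):
    lock = _chat_locks.setdefault(chat_id, asyncio.Lock())
    if lock.locked():
        return None  # a summary for this chat is already in flight
    async with lock:
        try:
            # Heavy, blocking work goes through the thread pool.
            return await asyncio.to_thread(generate_summary, text)
        except Exception:
            # logger.exception preserves the full traceback.
            logger.exception("summary task failed")
            return None


result = asyncio.run(locked_summary_task("chat-1", "fix the bug"))
```

`lock.locked()` plus `async with` gives cheap de-duplication per chat without serializing tasks across different chats.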
---

### 5. Filter Singleton and State-Design Pitfalls

**Constraint**: The Filter instance is a global singleton; all sessions share the same `self`.

**Anti-patterns**:
```python
# ❌ Wrong: self.temp_buffer = ...  (polluted by other concurrent sessions)
self.temp_state = body  # Dangerous!

# ✅ Right: stay stateless, or isolate with locks keyed by chat_id
self._chat_locks[chat_id] = asyncio.Lock()  # one lock per chat
```
**Design**:

- Valves (a Pydantic BaseModel) hold global configuration ✅
- Keep per-chat transient state (locks, counters) in dicts keyed by `chat_id` ✅
- Pass request-scoped data as arguments, not global variables ✅

---
## Integration Scenario: Filter + Pipe Cooperation

**When a Pipe model invokes the Filter**:

1. `inlet` injects the summary, cutting the number of conversation messages in context
2. The Pipe model (typically the Copilot SDK or a custom kernel) processes the trimmed messages
3. `outlet` triggers the background summary without blocking the user's response
4. On the next turn, `inlet` injects the latest summary again
**Key constraints**:

- `_should_skip_compression` checks `__model__.get("pipe")` or `copilot_sdk`, and skips injection when necessary
- If the Pipe model manages its own context (e.g., Copilot's native tool calling), over-compression can break the tool-call chain
- The summary model choice (the `summary_model` Valve) should be API-compatible with the current Pipe environment (a general-purpose model such as gemini-flash is recommended)

---
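A minimal sketch of such a skip check, under the constraints listed above. The exact detection heuristics (and the `copilot_sdk` id marker) are assumptions, not the plugin's actual code:

```python
# Sketch: skip compression for Pipe/Copilot-SDK models that manage their
# own context. Detection details are illustrative assumptions.
def should_skip_compression(model):
    """Return True when the model should bypass summary injection."""
    if not model:
        return False
    if model.get("pipe"):  # pipe-backed model: let it manage context itself
        return True
    # Hypothetical marker for Copilot-SDK-backed model ids.
    return "copilot_sdk" in str(model.get("id", ""))


print(should_skip_compression({"id": "gpt-4"}))  # False
```

Called from `inlet` with `__model__`, a check like this keeps the filter from stripping a Pipe's native tool-call history.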
## Internal API Contract Cheat Sheet

### `generate_chat_completion(request, payload, user)`

- **request**: a FastAPI Request; either a real HTTP request or the injected `__request__`
- **payload**: `{"model": id, "messages": [...], "stream": false, "max_tokens": N, "temperature": T}`
- **user**: a UserModel; queried from the DB or converted from `__user__` (requires `Users.get_user_by_id()`)
- **Returns**: a dict or a JSONResponse; for the latter, `response.body.decode()` + JSON parse is required
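The dual return type can be normalized in one place. `FakeJSONResponse` below is a stand-in for `fastapi.responses.JSONResponse`, kept local so the sketch is self-contained:

```python
# Sketch: normalizing generate_chat_completion's return value, which may be a
# plain dict or a JSONResponse-like object carrying encoded bytes in .body.
import json


def normalize_completion_response(response):
    if isinstance(response, dict):
        return response
    # JSONResponse path: decode the raw body and parse it back into a dict.
    return json.loads(response.body.decode())


class FakeJSONResponse:  # stand-in for fastapi.responses.JSONResponse
    def __init__(self, payload):
        self.body = json.dumps(payload).encode()


data = normalize_completion_response(FakeJSONResponse({"choices": []}))
print(data)  # {'choices': []}
```

Normalizing once at the call site keeps the rest of the summary pipeline working with plain dicts.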
### Filter Lifecycle

```
New Message → inlet (user input) → [plugins wait] → LLM → outlet (response) → Summary Task (background)
```

---
## Debugging Checklist

- [ ] `__request__` is declared in the `outlet` signature and injected by OpenWebUI (not None)
- [ ] Every layer of the async call chain passes `__request__` through, with a synthetic fallback only at the bottom
- [ ] When `summary_index is None`, `previous_summary` is read asynchronously from the DB
- [ ] The LLM prompt draws clear boundaries between `<previous_working_memory>` and `<new_conversation>`
- [ ] Error handling does not swallow stack traces: `logger.exception()` or `exc_info=True`
- [ ] A per-chat_id `asyncio.Lock` avoids concurrent work conflicts
- [ ] Copilot SDK / Pipe models go through the `_should_skip_compression()` check
- [ ] The token budget is planned under max_summary_tokens, prioritizing recent events

---
## Related Files

- Core implementation: `plugins/filters/async-context-compression/async_context_compression.py`
- README: `plugins/filters/async-context-compression/README.md` + `README_CN.md`
- OpenWebUI internals: `open_webui/utils/chat.py` → `generate_chat_completion()`

---

**Version**: 1.0
**Maintainer**: Fu-Jie
**Last updated**: 2026-03-12
45  .agent/learnings/openwebui-community-api.md  (new file)
@@ -0,0 +1,45 @@
# OpenWebUI Community API Patterns

## Post Data Structure Variations

When fetching posts from the OpenWebUI Community API (`https://api.openwebui.com/api/v1/posts/...`), the structure of the `data` field varies significantly depending on the `type` of the post.

### Observed Mappings

| Post Type | Data Key (under `data`) | Usual Content |
|-----------|-------------------------|---------------|
| `action` | `function` | Plugin code and metadata |
| `filter` | `function` | Filter logic and metadata |
| `pipe` | `function` | Pipe logic and metadata |
| `tool` | `tool` | Tool definition and logic |
| `prompt` | `prompt` | Prompt template strings |
| `model` | `model` | Model configuration |

### Implementation Workaround

To robustly extract metadata (like `version` or `description`) regardless of the post type, the following heuristic logic is recommended:

```python
def _get_plugin_obj(post: dict) -> dict:
    data = post.get("data", {}) or {}
    post_type = post.get("type")

    # Priority 1: Use specific type key
    if post_type in data:
        return data[post_type]

    # Priority 2: Fallback to common keys
    for k in ["function", "tool", "pipe"]:
        if k in data:
            return data[k]

    # Priority 3: First available key
    if data:
        return list(data.values())[0]

    return {}
```
### Gotchas

- Some older posts or different categories might not have a `version` field in `manifest`, leading to empty strings or `N/A` in reports.
- `slug` should be used as the unique identifier rather than `title` when tracking stats across history.
29  .agent/rules/agent_protocol.md  (new file)
@@ -0,0 +1,29 @@
# Agent Coordination Protocol (FOR AGENTS ONLY)

## 🛡️ The Golden Rule

**NEVER modify code without verifying the lock status in the Agent Hub.**

## 🔑 Identity Management

- `claude-code`: Official Claude CLI
- `copilot-agent`: GitHub Copilot
- `gemini-cursor`: Cursor IDE or Gemini extension
- `iflow-agent`: iFlow SDK agent

## 🛠️ The Synchronization Tool

Script: `scripts/agent_sync.py` (SQLite-backed)

### 🏎️ Workflow Lifecycle

1. **Initialize Session**:
   - `python3 scripts/agent_sync.py status`
   - `python3 scripts/agent_sync.py register <id> <name> "<objective>"`
2. **Resource Acquisition**:
   - `python3 scripts/agent_sync.py lock <id> <file_path>`
   - If blocked, identify the owner from `status` and do not attempt to bypass.
3. **Collaboration (Research Mode)**:
   - If the project mode is `RESEARCH`, prioritize the `note` command.
   - Summarize findings: `python3 scripts/agent_sync.py note <id> "<topic>" "<summary>"`
4. **Cleanup**:
   - `python3 scripts/agent_sync.py unlock <id> <file_path>`

## 📜 Shared Memory

Read `.agent/learnings/` to avoid reinventing the wheel.
0  .agent/shared_context/.gitkeep  (new file)
8  .cursorrules  (new file)
@@ -0,0 +1,8 @@
# 🤖 Cursor/Gemini Multi-Agent Protocol

1. **STATUS CHECK**: Always run `python3 scripts/agent_sync.py status` first.
2. **REGISTRATION**: Run `python3 scripts/agent_sync.py register gemini-id "Gemini" "Current task"`.
3. **LOCKING**: Never edit without `python3 scripts/agent_sync.py lock gemini-id <path>`.
4. **STANDARDS**: Refer to `.agent/rules/plugin_standards.md` for coding guidelines.

Full details in `COOPERATION.md`.
16  .github/workflows/community-stats.yml  (vendored)
@@ -38,9 +38,12 @@ jobs:
        id: old_stats
        run: |
          if [ -f docs/community-stats.json ]; then
            cp docs/community-stats.json docs/community-stats.json.old
            echo "total_posts=$(jq -r '.total_posts // 0' docs/community-stats.json)" >> $GITHUB_OUTPUT
            echo "versions=$(jq -r '[.posts[] | {slug: .slug, version: .version}] | sort_by(.slug) | map("\(.slug):\(.version)") | join(",")' docs/community-stats.json)" >> $GITHUB_OUTPUT
          else
            echo "total_posts=0" >> $GITHUB_OUTPUT
            echo "versions=" >> $GITHUB_OUTPUT
          fi

      - name: Generate stats report
@@ -56,12 +59,15 @@ jobs:
        id: new_stats
        run: |
          echo "total_posts=$(jq -r '.total_posts // 0' docs/community-stats.json)" >> $GITHUB_OUTPUT
          echo "versions=$(jq -r '[.posts[] | {slug: .slug, version: .version}] | sort_by(.slug) | map("\(.slug):\(.version)") | join(",")' docs/community-stats.json)" >> $GITHUB_OUTPUT

      - name: Check for significant changes
        id: check_changes
        run: |
          OLD_POSTS="${{ steps.old_stats.outputs.total_posts }}"
          NEW_POSTS="${{ steps.new_stats.outputs.total_posts }}"
          OLD_VERSIONS="${{ steps.old_stats.outputs.versions }}"
          NEW_VERSIONS="${{ steps.new_stats.outputs.versions }}"

          SHOULD_COMMIT="false"
          CHANGE_REASON=""
@@ -69,14 +75,20 @@ jobs:
          if [ "$NEW_POSTS" -gt "$OLD_POSTS" ]; then
            SHOULD_COMMIT="true"
            CHANGE_REASON="new plugin added ($OLD_POSTS -> $NEW_POSTS)"
            echo "📦 New plugin detected: $OLD_POSTS -> $NEW_POSTS"
          elif [ "$NEW_POSTS" -lt "$OLD_POSTS" ]; then
            SHOULD_COMMIT="true"
            CHANGE_REASON="plugin removed ($OLD_POSTS -> $NEW_POSTS)"
          elif [ "$OLD_VERSIONS" != "$NEW_VERSIONS" ]; then
            SHOULD_COMMIT="true"
            CHANGE_REASON="plugin versions updated"
            echo "🔄 Version change detected"
          fi

          echo "should_commit=$SHOULD_COMMIT" >> $GITHUB_OUTPUT
          echo "change_reason=$CHANGE_REASON" >> $GITHUB_OUTPUT

          if [ "$SHOULD_COMMIT" = "false" ]; then
            echo "ℹ️ No significant changes detected, skipping commit"
            echo "ℹ️ No significant changes (posts or versions), skipping commit"
          else
            echo "✅ Significant changes detected: $CHANGE_REASON"
          fi
1  .gitignore  (vendored)
@@ -142,3 +142,4 @@ logs/
# OpenWebUI specific
# Add any specific ignores for OpenWebUI plugins if needed
.git-worktrees/
plugins/filters/auth_model_info/
13  CLAUDE.md  (new file)
@@ -0,0 +1,13 @@
# 🤖 Claude Multi-Agent Protocol (MACP)

## 🚀 Mandatory Startup

1. **Check Hub**: `python3 scripts/agent_sync.py status`
2. **Register**: `python3 scripts/agent_sync.py register claude-code "Claude" "Handling user request"`
3. **Lock**: `python3 scripts/agent_sync.py lock claude-code <file_path>`
4. **Handoff**: Use `python3 scripts/agent_sync.py note` for collaborative findings.

## 🤝 Project Standards

Read these BEFORE writing any code:
- `.agent/rules/plugin_standards.md`
- `.agent/rules/agent_protocol.md`
- `COOPERATION.md`
33  COOPERATION.md  (new file)
@@ -0,0 +1,33 @@
# 🤖 Multi-Agent Cooperation Protocol (MACP) v2.1

This project uses an **SQLite coordination hub (Agent Hub)** to manage concurrent tasks across multiple AI agents.

## 🚀 Quick Commands

Use `./scripts/macp` for quick invocation; no need to memorize complex Python arguments.

| Command | Description |
| :--- | :--- |
| **`/status`** | View global state (active agents, file locks, tasks, research topics) |
| **`/study <topic> <desc>`** | **Start a joint research effort in one step.** Broadcasts the topic and puts all agents into research mode. |
| **`/summon <agent> <task>`** | **Targeted summon.** Dispatch a high-priority task to a specific agent. |
| **`/handover <agent> <msg>`** | **Task relay.** Release current progress and hand off to the next agent. |
| **`/broadcast <msg>`** | **Global broadcast.** Send urgent notices or status syncs. |
| **`/check`** | **Inbox check.** See whether any tasks are assigned to you. |
| **`/resolve <topic> <result>`** | **Archive a conclusion.** Close a research topic and record the final consensus. |
| **`/ping`** | **Liveness check.** Quickly see which agents are online. |

---

## 🛡️ Collaboration Rules

1. **Check before acting**: run `./scripts/macp /status` before starting work.
2. **Lock equals ownership**: acquire the lock before modifying a file.
3. **Intent first**: for large refactors, open a proposal discussion via `/study` first.
4. **Unlock promptly**: after committing and pushing, always `/handover` or unlock manually.

## 📁 Infrastructure

- **Database**: `.agent/agent_hub.db` (do not edit by hand)
- **Core**: `scripts/agent_sync.py`
- **Shortcut tool**: `scripts/macp`

---

*Generated by Claude (Coordinator) in collaboration with Sisyphus & Copilot.*
16  README.md
@@ -9,7 +9,6 @@ A collection of enhancements, plugins, and prompts for [open-webui](https://gith

<!-- STATS_START -->
## 📊 Community Stats
>
> 

| 👤 Author | 👥 Followers | ⭐ Points | 🏆 Contributions |
@@ -20,19 +19,18 @@ A collection of enhancements, plugins, and prompts for [open-webui](https://gith
| :---: | :---: | :---: | :---: | :---: |
|  |  |  |  |  |

### 🔥 Top 6 Popular Plugins

### 🔥 Top 6 Popular Plugins
| Rank | Plugin | Version | Downloads | Views | 📅 Updated |
| :---: | :--- | :---: | :---: | :---: | :---: |
| 🥇 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) |  |  |  |  |
| 🥈 | [Smart Infographic](https://openwebui.com/posts/smart_infographic_ad6f0c7f) |  |  |  |  |
| 🥉 | [Markdown Normalizer](https://openwebui.com/posts/markdown_normalizer_baaa8732) |  |  |  |  |
| 4️⃣ | [Export to Word Enhanced](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) |  |  |  |  |
| 5️⃣ | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) |  |  |  |  |
| 6️⃣ | [AI Task Instruction Generator](https://openwebui.com/posts/ai_task_instruction_generator_9bab8b37) |  |  |  |  |
| 🥇 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) |  |  |  |  |
| 🥈 | [Smart Infographic](https://openwebui.com/posts/smart_infographic_ad6f0c7f) |  |  |  |  |
| 🥉 | [Markdown Normalizer](https://openwebui.com/posts/markdown_normalizer_baaa8732) |  |  |  |  |
| 4️⃣ | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) |  |  |  |  |
| 5️⃣ | [Export to Word Enhanced](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) |  |  |  |  |
| 6️⃣ | [AI Task Instruction Generator](https://openwebui.com/posts/ai_task_instruction_generator_9bab8b37) |  |  |  |  |

### 📈 Total Downloads Trend

![Downloads Trend](./docs/charts/downloads-trend.svg)

*See full stats and charts in [Community Stats Report](./docs/community-stats.md)*
16  README_CN.md
@@ -6,7 +6,6 @@ OpenWebUI 增强功能集合。包含个人开发与收集的插件、提示词

<!-- STATS_START -->
## 📊 社区统计
>
> 

| 👤 作者 | 👥 粉丝 | ⭐ 积分 | 🏆 贡献 |
@@ -17,19 +16,18 @@ OpenWebUI 增强功能集合。包含个人开发与收集的插件、提示词
| :---: | :---: | :---: | :---: | :---: |
|  |  |  |  |  |

### 🔥 热门插件 Top 6

### 🔥 热门插件 Top 6
| 排名 | 插件 | 版本 | 下载 | 浏览 | 📅 更新 |
| :---: | :--- | :---: | :---: | :---: | :---: |
| 🥇 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) |  |  |  |  |
| 🥈 | [Smart Infographic](https://openwebui.com/posts/smart_infographic_ad6f0c7f) |  |  |  |  |
| 🥉 | [Markdown Normalizer](https://openwebui.com/posts/markdown_normalizer_baaa8732) |  |  |  |  |
| 4️⃣ | [Export to Word Enhanced](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) |  |  |  |  |
| 5️⃣ | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) |  |  |  |  |
| 6️⃣ | [AI Task Instruction Generator](https://openwebui.com/posts/ai_task_instruction_generator_9bab8b37) |  |  |  |  |
| 🥇 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) |  |  |  |  |
| 🥈 | [Smart Infographic](https://openwebui.com/posts/smart_infographic_ad6f0c7f) |  |  |  |  |
| 🥉 | [Markdown Normalizer](https://openwebui.com/posts/markdown_normalizer_baaa8732) |  |  |  |  |
| 4️⃣ | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) |  |  |  |  |
| 5️⃣ | [Export to Word Enhanced](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) |  |  |  |  |
| 6️⃣ | [AI Task Instruction Generator](https://openwebui.com/posts/ai_task_instruction_generator_9bab8b37) |  |  |  |  |

### 📈 总下载量累计趋势

![下载趋势](./docs/charts/downloads-trend.svg)

*完整统计与趋势图请查看 [社区统计报告](./docs/community-stats.zh.md)*
139  ai-tabs.sh  (new executable file)
@@ -0,0 +1,139 @@
#!/bin/bash
# ==============================================================================
# ai-tabs - Ultra Orchestrator
# Version: v1.0.0
# License: MIT
# Author: Fu-Jie
# Description: Batch-launches and orchestrates multiple AI CLI tools as Tabs.
# ==============================================================================

# 1. Single-Instance Lock
LOCK_FILE="/tmp/ai_terminal_launch.lock"
# If lock is less than 10 seconds old, another instance is running. Exit.
if [ -f "$LOCK_FILE" ]; then
    LOCK_TIME=$(stat -f %m "$LOCK_FILE")
    NOW=$(date +%s)
    if (( NOW - LOCK_TIME < 10 )); then
        echo "⚠️ Another launch in progress. Skipping to prevent duplicates."
        exit 0
    fi
fi
touch "$LOCK_FILE"
trap 'rm -f "$LOCK_FILE"' EXIT

# 2. Configuration & Constants
INIT_DELAY=4.5
PASTE_DELAY=0.3
CMD_CREATION_DELAY=0.3
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PARENT_DIR="$(dirname "$SCRIPT_DIR")"

# Search for .env
if [ -f "${SCRIPT_DIR}/.env" ]; then
    ENV_FILE="${SCRIPT_DIR}/.env"
elif [ -f "${PARENT_DIR}/.env" ]; then
    ENV_FILE="${PARENT_DIR}/.env"
fi

# Supported Tools
SUPPORTED_TOOLS=(
    "claude:--continue"
    "opencode:--continue"
    "gemini:--resume latest"
    "copilot:--continue"
    "iflow:--continue"
    "kilo:--continue"
)

FOUND_TOOLS_NAMES=()
FOUND_CMDS=()

# 3. Part A: Load Manual Configuration
if [ -f "$ENV_FILE" ]; then
    set -a; source "$ENV_FILE"; set +a
    for var in $(compgen -v | grep '^TOOL_[0-9]' | sort -V); do
        TPATH="${!var}"
        if [ -x "$TPATH" ]; then
            NAME=$(basename "$TPATH")
            FLAG="--continue"
            for item in "${SUPPORTED_TOOLS[@]}"; do
                [[ "${item%%:*}" == "$NAME" ]] && FLAG="${item#*:}" && break
            done
            FOUND_TOOLS_NAMES+=("$NAME")
            FOUND_CMDS+=("'$TPATH' $FLAG || '$TPATH' || exec \$SHELL")
        fi
    done
fi

# 4. Part B: Automatic Tool Discovery
for item in "${SUPPORTED_TOOLS[@]}"; do
    NAME="${item%%:*}"
    FLAG="${item#*:}"
    ALREADY_CONFIGURED=false
    for configured in "${FOUND_TOOLS_NAMES[@]}"; do
        [[ "$configured" == "$NAME" ]] && ALREADY_CONFIGURED=true && break
    done
    [[ "$ALREADY_CONFIGURED" == true ]] && continue
    TPATH=$(which "$NAME" 2>/dev/null)
    if [ -z "$TPATH" ]; then
        SEARCH_PATHS=(
            "/opt/homebrew/bin/$NAME"
            "/usr/local/bin/$NAME"
            "$HOME/.local/bin/$NAME"
            "$HOME/bin/$NAME"
            "$HOME/.$NAME/bin/$NAME"
            "$HOME/.nvm/versions/node/*/bin/$NAME"
            "$HOME/.npm-global/bin/$NAME"
            "$HOME/.cargo/bin/$NAME"
        )
        for p in "${SEARCH_PATHS[@]}"; do
            for found_p in $p; do [[ -x "$found_p" ]] && TPATH="$found_p" && break 2; done
        done
    fi
    if [ -n "$TPATH" ]; then
        FOUND_TOOLS_NAMES+=("$NAME")
        FOUND_CMDS+=("'$TPATH' $FLAG || '$TPATH' || exec \$SHELL")
    fi
done

NUM_FOUND=${#FOUND_CMDS[@]}
[[ "$NUM_FOUND" -eq 0 ]] && exit 1

# 5. Core Orchestration (Reset + Launch)
# Using Command Palette automation to avoid the need for manual shortcut binding.
AS_SCRIPT="tell application \"System Events\"\n"

# Phase A: Creation (Using Command Palette to ensure it opens in Editor Area)
for ((i=1; i<=NUM_FOUND; i++)); do
    AS_SCRIPT+=" keystroke \"p\" using {command down, shift down}\n"
    AS_SCRIPT+=" delay 0.1\n"
    # Ensure we are searching for the command. Using clipboard for speed and universal language support.
    AS_SCRIPT+=" set the clipboard to \"Terminal: Create New Terminal in Editor Area\"\n"
    AS_SCRIPT+=" keystroke \"v\" using {command down}\n"
    AS_SCRIPT+=" delay 0.1\n"
    AS_SCRIPT+=" keystroke return\n"
    AS_SCRIPT+=" delay $CMD_CREATION_DELAY\n"
done

# Phase B: Warmup
AS_SCRIPT+=" delay $INIT_DELAY\n"

# Phase C: Command Injection (Reverse)
for ((i=NUM_FOUND-1; i>=0; i--)); do
    FULL_CMD="${FOUND_CMDS[$i]}"
    CLEAN_CMD=$(echo "$FULL_CMD" | sed 's/"/\\"/g')
    AS_SCRIPT+=" set the clipboard to \"$CLEAN_CMD\"\n"
    AS_SCRIPT+=" delay 0.1\n"
    AS_SCRIPT+=" keystroke \"v\" using {command down}\n"
    AS_SCRIPT+=" delay $PASTE_DELAY\n"
    AS_SCRIPT+=" keystroke return\n"
    if [ $i -gt 0 ]; then
        AS_SCRIPT+=" delay 0.5\n"
        AS_SCRIPT+=" keystroke \"[\" using {command down, shift down}\n"
    fi
done
AS_SCRIPT+="end tell"

# Execute
echo -e "$AS_SCRIPT" | osascript
echo "✨ AI tabs initialized successfully ($NUM_FOUND tools found)."
@@ -1,7 +1,7 @@
{
  "schemaVersion": 1,
  "label": "downloads",
  "message": "7.8k",
  "message": "9.1k",
  "color": "blue",
  "namedLogo": "openwebui"
}
@@ -1,6 +1,6 @@
{
  "schemaVersion": 1,
  "label": "followers",
  "message": "315",
  "message": "353",
  "color": "blue"
}
@@ -1,6 +1,6 @@
{
  "schemaVersion": 1,
  "label": "points",
  "message": "329",
  "message": "359",
  "color": "orange"
}
@@ -1,6 +1,6 @@
{
  "schemaVersion": 1,
  "label": "upvotes",
  "message": "281",
  "message": "305",
  "color": "brightgreen"
}
@@ -1,19 +1,17 @@
|
||||
{
|
||||
"total_posts": 27,
|
||||
"total_downloads": 7786,
|
||||
"total_views": 82342,
|
||||
"total_upvotes": 281,
|
||||
"total_downloads": 9120,
|
||||
"total_views": 95785,
|
||||
"total_upvotes": 305,
|
||||
"total_downvotes": 4,
|
||||
"total_saves": 398,
|
||||
"total_comments": 63,
|
||||
"total_saves": 452,
|
||||
"total_comments": 77,
|
||||
"by_type": {
|
||||
"post": 6,
|
||||
"filter": 4,
|
||||
"tool": 2,
|
||||
"pipe": 1,
|
||||
"filter": 4,
|
||||
"action": 12,
|
||||
"prompt": 1,
|
||||
"review": 1
|
||||
"prompt": 1
|
||||
},
|
||||
"posts": [
|
||||
{
|
||||
@@ -23,11 +21,11 @@
|
||||
"version": "1.0.0",
|
||||
"author": "Fu-Jie",
|
||||
"description": "Intelligently analyzes text content and generates interactive mind maps to help users structure and visualize knowledge.",
|
||||
"downloads": 1542,
|
||||
"views": 12996,
|
||||
"upvotes": 28,
|
||||
"saves": 66,
|
||||
"comments": 18,
|
||||
"downloads": 1797,
|
||||
"views": 15350,
|
||||
"upvotes": 31,
|
||||
"saves": 72,
|
||||
"comments": 23,
|
||||
"created_at": "2025-12-30",
|
||||
"updated_at": "2026-02-27",
|
||||
"url": "https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a"
|
||||
@@ -39,11 +37,11 @@
|
||||
"version": "1.5.0",
|
||||
"author": "Fu-Jie",
|
||||
"description": "AI-powered infographic generator based on AntV Infographic. Supports professional templates, auto-icon matching, and SVG/PNG downloads.",
|
||||
"downloads": 1230,
|
||||
"views": 12309,
|
||||
"upvotes": 25,
|
||||
"saves": 46,
|
||||
"comments": 10,
|
||||
"downloads": 1362,
|
||||
"views": 13589,
|
||||
"upvotes": 28,
|
||||
"saves": 53,
|
||||
"comments": 12,
|
||||
"created_at": "2025-12-28",
|
||||
"updated_at": "2026-02-13",
|
||||
"url": "https://openwebui.com/posts/smart_infographic_ad6f0c7f"
|
||||
@@ -52,18 +50,34 @@
|
||||
"title": "Markdown Normalizer",
|
||||
"slug": "markdown_normalizer_baaa8732",
|
||||
"type": "filter",
|
||||
"version": "1.2.7",
|
||||
"version": "1.2.8",
|
||||
"author": "Fu-Jie",
|
||||
"description": "A content normalizer filter that fixes common Markdown formatting issues in LLM outputs, such as broken code blocks, LaTeX formulas, and list formatting. Including LaTeX command protection.",
|
||||
"downloads": 719,
|
||||
"views": 7704,
|
||||
"upvotes": 20,
|
||||
"saves": 42,
|
||||
"downloads": 838,
|
||||
"views": 8739,
|
||||
"upvotes": 21,
|
||||
"saves": 46,
|
||||
"comments": 5,
|
||||
"created_at": "2026-01-12",
|
||||
"updated_at": "2026-03-03",
|
||||
"updated_at": "2026-03-08",
|
||||
"url": "https://openwebui.com/posts/markdown_normalizer_baaa8732"
|
||||
},
|
||||
{
|
||||
"title": "Async Context Compression",
|
||||
"slug": "async_context_compression_b1655bc8",
|
||||
"type": "filter",
|
||||
"version": "1.5.0",
|
||||
"author": "Fu-Jie",
|
||||
"description": "Reduces token consumption in long conversations while maintaining coherence through intelligent summarization and message compression.",
|
||||
"downloads": 801,
|
||||
"views": 7258,
|
||||
"upvotes": 18,
|
||||
"saves": 54,
|
||||
"comments": 0,
|
||||
"created_at": "2025-11-08",
|
||||
"updated_at": "2026-03-14",
|
||||
"url": "https://openwebui.com/posts/async_context_compression_b1655bc8"
|
||||
},
|
||||
{
|
||||
"title": "Export to Word Enhanced",
|
||||
"slug": "export_to_word_enhanced_formatting_fca6a315",
|
||||
@@ -71,31 +85,15 @@
|
||||
"version": "0.4.4",
|
||||
"author": "Fu-Jie",
|
||||
"description": "Export current conversation from Markdown to Word (.docx) with Mermaid diagrams rendered client-side (Mermaid.js, SVG+PNG), LaTeX math, real hyperlinks, improved tables, syntax highlighting, and blockquote support.",
|
||||
"downloads": 700,
|
||||
"views": 5399,
|
||||
"upvotes": 17,
|
||||
"saves": 37,
|
||||
"downloads": 799,
|
||||
"views": 6146,
|
||||
"upvotes": 20,
|
||||
"saves": 41,
|
||||
"comments": 5,
|
||||
"created_at": "2026-01-03",
|
||||
"updated_at": "2026-02-13",
|
||||
"url": "https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315"
|
||||
},
|
||||
{
|
||||
"title": "Async Context Compression",
|
||||
"slug": "async_context_compression_b1655bc8",
|
||||
"type": "filter",
|
||||
"version": "1.3.0",
|
||||
"author": "Fu-Jie",
|
||||
"description": "Reduces token consumption in long conversations while maintaining coherence through intelligent summarization and message compression.",
|
||||
"downloads": 669,
|
||||
"views": 6274,
|
||||
"upvotes": 16,
|
||||
"saves": 47,
|
||||
"comments": 0,
|
||||
"created_at": "2025-11-08",
|
||||
"updated_at": "2026-03-03",
|
||||
"url": "https://openwebui.com/posts/async_context_compression_b1655bc8"
|
||||
},
|
||||
{
|
||||
"title": "AI Task Instruction Generator",
|
||||
"slug": "ai_task_instruction_generator_9bab8b37",
|
||||
@@ -103,10 +101,10 @@
|
||||
"version": "",
|
||||
"author": "",
|
||||
"description": "",
|
||||
"downloads": 583,
|
||||
"views": 6659,
|
||||
"upvotes": 9,
|
||||
"saves": 17,
|
||||
"downloads": 692,
|
||||
"views": 7783,
|
||||
"upvotes": 10,
|
||||
"saves": 20,
|
||||
"comments": 0,
|
||||
"created_at": "2026-01-28",
|
||||
"updated_at": "2026-01-28",
|
||||
@@ -119,29 +117,45 @@
"version": "0.3.7",
"author": "Fu-Jie",
"description": "Extracts tables from chat messages and exports them to Excel (.xlsx) files with smart formatting.",
"downloads": 563,
"views": 3153,
"downloads": 616,
"views": 3508,
"upvotes": 11,
"saves": 11,
"saves": 12,
"comments": 0,
"created_at": "2025-05-30",
"updated_at": "2026-02-13",
"url": "https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d"
},
{
"title": "OpenWebUI Skills Manager Tool",
"slug": "openwebui_skills_manager_tool_b4bce8e4",
"type": "tool",
"version": "0.3.0",
"author": "Fu-Jie",
"description": "Standalone OpenWebUI tool for managing native Workspace Skills (list/show/install/create/update/delete) for any model.",
"downloads": 500,
"views": 6112,
"upvotes": 8,
"saves": 23,
"comments": 4,
"created_at": "2026-02-28",
"updated_at": "2026-03-14",
"url": "https://openwebui.com/posts/openwebui_skills_manager_tool_b4bce8e4"
},
{
"title": "GitHub Copilot Official SDK Pipe",
"slug": "github_copilot_official_sdk_pipe_ce96f7b4",
"type": "pipe",
"version": "0.9.1",
"version": "0.10.0",
"author": "Fu-Jie",
"description": "A powerful Agent SDK integration for OpenWebUI. It deeply bridges GitHub Copilot SDK with OpenWebUI's ecosystem, enabling the Agent to autonomously perform intent recognition, web search, and context compaction. It seamlessly reuses your existing Tools, MCP servers, OpenAPI servers, and Skills for a professional, full-featured experience.",
"downloads": 335,
"views": 4905,
"downloads": 403,
"views": 5699,
"upvotes": 16,
"saves": 10,
"comments": 6,
"saves": 12,
"comments": 8,
"created_at": "2026-01-26",
"updated_at": "2026-03-03",
"updated_at": "2026-03-07",
"url": "https://openwebui.com/posts/github_copilot_official_sdk_pipe_ce96f7b4"
},
{
@@ -151,31 +165,15 @@
"version": "0.2.4",
"author": "Fu-Jie",
"description": "Quickly generates beautiful flashcards from text, extracting key points and categories.",
"downloads": 312,
"views": 4448,
"downloads": 331,
"views": 4722,
"upvotes": 13,
"saves": 20,
"saves": 22,
"comments": 2,
"created_at": "2025-12-30",
"updated_at": "2026-02-13",
"url": "https://openwebui.com/posts/flash_card_65a2ea8f"
},
{
"title": "OpenWebUI Skills Manager Tool",
"slug": "openwebui_skills_manager_tool_b4bce8e4",
"type": "tool",
"version": "",
"author": "",
"description": "",
"downloads": 303,
"views": 4265,
"upvotes": 7,
"saves": 13,
"comments": 2,
"created_at": "2026-02-28",
"updated_at": "2026-03-05",
"url": "https://openwebui.com/posts/openwebui_skills_manager_tool_b4bce8e4"
},
{
"title": "Deep Dive",
"slug": "deep_dive_c0b846e4",
@@ -183,8 +181,8 @@
"version": "1.0.0",
"author": "Fu-Jie",
"description": "A comprehensive thinking lens that dives deep into any content - from context to logic, insights, and action paths.",
"downloads": 219,
"views": 1764,
"downloads": 229,
"views": 1887,
"upvotes": 6,
"saves": 15,
"comments": 0,
@@ -199,8 +197,8 @@
"version": "0.4.4",
"author": "Fu-Jie",
"description": "将对话导出为 Word (.docx),支持 Mermaid 图表 (客户端渲染 SVG+PNG)、LaTeX 数学公式、真实超链接、增强表格格式、代码高亮和引用块。",
"downloads": 165,
"views": 2831,
"downloads": 172,
"views": 3038,
"upvotes": 14,
"saves": 7,
"comments": 4,
@@ -215,15 +213,31 @@
"version": "0.1.0",
"author": "Fu-Jie",
"description": "Automatically extracts project rules from conversations and injects them into the folder's system prompt.",
"downloads": 112,
"views": 1992,
"downloads": 130,
"views": 2181,
"upvotes": 7,
"saves": 11,
"saves": 13,
"comments": 0,
"created_at": "2026-01-20",
"updated_at": "2026-01-20",
"url": "https://openwebui.com/posts/folder_memory_auto_evolving_project_context_4a9875b2"
},
{
"title": "🧠 Smart Mind Map Tool: Auto-Generate Interactive Knowledge Graphs",
"slug": "smart_mind_map_tool_auto_generate_interactive_know_d25f4e3d",
"type": "tool",
"version": "1.0.0",
"author": "Fu-Jie",
"description": "Intelligently analyzes text content and generates interactive mind maps to help users structure and visualize knowledge.",
"downloads": 116,
"views": 2375,
"upvotes": 5,
"saves": 4,
"comments": 0,
"created_at": "2026-03-04",
"updated_at": "2026-03-05",
"url": "https://openwebui.com/posts/smart_mind_map_tool_auto_generate_interactive_know_d25f4e3d"
},
{
"title": "GitHub Copilot SDK Files Filter",
"slug": "github_copilot_sdk_files_filter_403a62ee",
@@ -231,8 +245,8 @@
"version": "0.1.3",
"author": "Fu-Jie",
"description": "A specialized filter to bypass OpenWebUI's default RAG for GitHub Copilot SDK models. It moves uploaded files to a safe location ('copilot_files') so the Copilot Pipe can process them natively without interference.",
"downloads": 76,
"views": 2311,
"downloads": 93,
"views": 2474,
"upvotes": 4,
"saves": 1,
"comments": 0,
@@ -247,8 +261,8 @@
"version": "1.5.0",
"author": "Fu-Jie",
"description": "基于 AntV Infographic 的智能信息图生成插件。支持多种专业模板,自动图标匹配,并提供 SVG/PNG 下载功能。",
"downloads": 68,
"views": 1431,
"downloads": 72,
"views": 1572,
"upvotes": 10,
"saves": 1,
"comments": 0,
@@ -263,8 +277,8 @@
"version": "0.9.2",
"author": "Fu-Jie",
"description": "智能分析文本内容,生成交互式思维导图,帮助用户结构化和可视化知识。",
"downloads": 52,
"views": 761,
"downloads": 56,
"views": 814,
"upvotes": 6,
"saves": 2,
"comments": 0,
@@ -279,8 +293,8 @@
"version": "1.2.2",
"author": "Fu-Jie",
"description": "通过智能摘要和消息压缩,降低长对话的 token 消耗,同时保持对话连贯性。",
"downloads": 39,
"views": 838,
"downloads": 42,
"views": 904,
"upvotes": 7,
"saves": 5,
"comments": 0,
@@ -289,20 +303,20 @@
"url": "https://openwebui.com/posts/异步上下文压缩_5c0617cb"
},
{
"title": "🧠 Smart Mind Map Tool: Auto-Generate Interactive Knowledge Graphs",
"slug": "smart_mind_map_tool_auto_generate_interactive_know_d25f4e3d",
"type": "tool",
"version": "",
"author": "",
"description": "",
"downloads": 34,
"views": 767,
"upvotes": 2,
"saves": 3,
"title": "精读",
"slug": "精读_99830b0f",
"type": "action",
"version": "1.0.0",
"author": "Fu-Jie",
"description": "全方位的思维透镜 —— 从背景全景到逻辑脉络,从深度洞察到行动路径。",
"downloads": 37,
"views": 708,
"upvotes": 5,
"saves": 1,
"comments": 0,
"created_at": "2026-03-04",
"updated_at": "2026-03-05",
"url": "https://openwebui.com/posts/smart_mind_map_tool_auto_generate_interactive_know_d25f4e3d"
"created_at": "2026-01-08",
"updated_at": "2026-01-08",
"url": "https://openwebui.com/posts/精读_99830b0f"
},
{
"title": "闪记卡 (Flash Card)",
@@ -312,7 +326,7 @@
"author": "Fu-Jie",
"description": "快速将文本提炼为精美的学习记忆卡片,支持核心要点提取与分类。",
"downloads": 34,
"views": 888,
"views": 926,
"upvotes": 7,
"saves": 1,
"comments": 0,
@@ -320,47 +334,31 @@
"updated_at": "2026-02-13",
"url": "https://openwebui.com/posts/闪记卡生成插件_4a31eac3"
},
{
"title": "精读",
"slug": "精读_99830b0f",
"type": "action",
"version": "1.0.0",
"author": "Fu-Jie",
"description": "全方位的思维透镜 —— 从背景全景到逻辑脉络,从深度洞察到行动路径。",
"downloads": 31,
"views": 647,
"upvotes": 5,
"saves": 1,
"comments": 0,
"created_at": "2026-01-08",
"updated_at": "2026-01-08",
"url": "https://openwebui.com/posts/精读_99830b0f"
},
{
"title": "An Unconventional Use of Open Terminal ⚡",
"slug": "an_unconventional_use_of_open_terminal_35498f8f",
"type": "post",
"type": "action",
"version": "",
"author": "",
"description": "",
"downloads": 0,
"views": 14,
"upvotes": 1,
"saves": 0,
"comments": 0,
"views": 3335,
"upvotes": 7,
"saves": 1,
"comments": 2,
"created_at": "2026-03-06",
"updated_at": "2026-03-06",
"updated_at": "2026-03-07",
"url": "https://openwebui.com/posts/an_unconventional_use_of_open_terminal_35498f8f"
},
{
"title": "🚀 GitHub Copilot SDK Pipe v0.9.0: Skills & RichUI",
"slug": "github_copilot_sdk_pipe_v090_copilot_sdk_skills_co_99a42452",
"type": "post",
"type": "pipe",
"version": "",
"author": "",
"description": "",
"downloads": 0,
"views": 1585,
"views": 1803,
"upvotes": 5,
"saves": 1,
"comments": 0,
@@ -371,12 +369,12 @@
{
"title": "🚀 GitHub Copilot SDK Pipe v0.7.0: Skills & Rich UI 🛠️",
"slug": "github_copilot_sdk_pipe_v070_native_tool_ui_zero_c_4af38131",
"type": "post",
"type": "pipe",
"version": "",
"author": "",
"description": "",
"downloads": 0,
"views": 2608,
"views": 2797,
"upvotes": 8,
"saves": 4,
"comments": 1,
@@ -387,14 +385,14 @@
{
"title": "🚀 GitHub Copilot SDK Pipe: AI That Executes, Not Just Talks",
"slug": "github_copilot_sdk_for_openwebui_elevate_your_ai_t_a140f293",
"type": "post",
"type": "pipe",
"version": "",
"author": "",
"description": "",
"downloads": 0,
"views": 2390,
"views": 2442,
"upvotes": 7,
"saves": 4,
"saves": 5,
"comments": 0,
"created_at": "2026-02-10",
"updated_at": "2026-02-10",
@@ -403,15 +401,15 @@
{
"title": "🚀 Open WebUI Prompt Plus: AI-Powered Prompt Manager",
"slug": "open_webui_prompt_plus_ai_powered_prompt_manager_s_15fa060e",
"type": "post",
"type": "action",
"version": "",
"author": "",
"description": "",
"downloads": 0,
"views": 1915,
"upvotes": 12,
"saves": 21,
"comments": 8,
"views": 2014,
"upvotes": 13,
"saves": 23,
"comments": 9,
"created_at": "2026-01-25",
"updated_at": "2026-01-28",
"url": "https://openwebui.com/posts/open_webui_prompt_plus_ai_powered_prompt_manager_s_15fa060e"
@@ -424,7 +422,7 @@
"author": "",
"description": "",
"downloads": 0,
"views": 251,
"views": 271,
"upvotes": 2,
"saves": 0,
"comments": 0,
@@ -435,14 +433,14 @@
{
"title": " 🛠️ Debug Open WebUI Plugins in Your Browser",
"slug": "debug_open_webui_plugins_in_your_browser_81bf7960",
"type": "post",
"type": "action",
"version": "",
"author": "",
"description": "",
"downloads": 0,
"views": 1549,
"views": 1588,
"upvotes": 16,
"saves": 12,
"saves": 13,
"comments": 2,
"created_at": "2026-01-10",
"updated_at": "2026-01-10",
@@ -454,11 +452,11 @@
"name": "Fu-Jie",
"profile_url": "https://openwebui.com/u/Fu-Jie",
"profile_image": "https://community.s3.openwebui.com/uploads/users/b15d1348-4347-42b4-b815-e053342d6cb0/profile_d9510745-4bd4-4f8f-a997-4a21847d9300.webp",
"followers": 315,
"followers": 353,
"following": 6,
"total_points": 329,
"post_points": 279,
"comment_points": 50,
"contributions": 59
"total_points": 359,
"post_points": 303,
"comment_points": 56,
"contributions": 68
}
}
462
docs/community-stats.json.old
Normal file
@@ -0,0 +1,462 @@
{
"total_posts": 27,
"total_downloads": 8947,
"total_views": 94188,
"total_upvotes": 301,
"total_downvotes": 4,
"total_saves": 444,
"total_comments": 75,
"by_type": {
"tool": 2,
"filter": 4,
"pipe": 1,
"action": 12,
"prompt": 1
},
"posts": [
{
"title": "Smart Mind Map",
"slug": "turn_any_text_into_beautiful_mind_maps_3094c59a",
"type": "action",
"version": "1.0.0",
"author": "Fu-Jie",
"description": "Intelligently analyzes text content and generates interactive mind maps to help users structure and visualize knowledge.",
"downloads": 1772,
"views": 15047,
"upvotes": 30,
"saves": 70,
"comments": 21,
"created_at": "2025-12-30",
"updated_at": "2026-02-27",
"url": "https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a"
},
{
"title": "Smart Infographic",
"slug": "smart_infographic_ad6f0c7f",
"type": "action",
"version": "1.5.0",
"author": "Fu-Jie",
"description": "AI-powered infographic generator based on AntV Infographic. Supports professional templates, auto-icon matching, and SVG/PNG downloads.",
"downloads": 1350,
"views": 13453,
"upvotes": 27,
"saves": 52,
"comments": 12,
"created_at": "2025-12-28",
"updated_at": "2026-02-13",
"url": "https://openwebui.com/posts/smart_infographic_ad6f0c7f"
},
{
"title": "Markdown Normalizer",
"slug": "markdown_normalizer_baaa8732",
"type": "filter",
"version": "1.2.8",
"author": "Fu-Jie",
"description": "A content normalizer filter that fixes common Markdown formatting issues in LLM outputs, such as broken code blocks, LaTeX formulas, and list formatting. Including LaTeX command protection.",
"downloads": 824,
"views": 8622,
"upvotes": 21,
"saves": 45,
"comments": 5,
"created_at": "2026-01-12",
"updated_at": "2026-03-08",
"url": "https://openwebui.com/posts/markdown_normalizer_baaa8732"
},
{
"title": "Export to Word Enhanced",
"slug": "export_to_word_enhanced_formatting_fca6a315",
"type": "action",
"version": "0.4.4",
"author": "Fu-Jie",
"description": "Export current conversation from Markdown to Word (.docx) with Mermaid diagrams rendered client-side (Mermaid.js, SVG+PNG), LaTeX math, real hyperlinks, improved tables, syntax highlighting, and blockquote support.",
"downloads": 780,
"views": 6015,
"upvotes": 19,
"saves": 39,
"comments": 5,
"created_at": "2026-01-03",
"updated_at": "2026-02-13",
"url": "https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315"
},
{
"title": "Async Context Compression",
"slug": "async_context_compression_b1655bc8",
"type": "filter",
"version": "1.4.2",
"author": "Fu-Jie",
"description": "Reduces token consumption in long conversations while maintaining coherence through intelligent summarization and message compression.",
"downloads": 776,
"views": 7102,
"upvotes": 17,
"saves": 53,
"comments": 0,
"created_at": "2025-11-08",
"updated_at": "2026-03-13",
"url": "https://openwebui.com/posts/async_context_compression_b1655bc8"
},
{
"title": "AI Task Instruction Generator",
"slug": "ai_task_instruction_generator_9bab8b37",
"type": "prompt",
"version": "",
"author": "",
"description": "",
"downloads": 676,
"views": 7619,
"upvotes": 10,
"saves": 19,
"comments": 0,
"created_at": "2026-01-28",
"updated_at": "2026-01-28",
"url": "https://openwebui.com/posts/ai_task_instruction_generator_9bab8b37"
},
{
"title": "Export to Excel",
"slug": "export_mulit_table_to_excel_244b8f9d",
"type": "action",
"version": "0.3.7",
"author": "Fu-Jie",
"description": "Extracts tables from chat messages and exports them to Excel (.xlsx) files with smart formatting.",
"downloads": 612,
"views": 3475,
"upvotes": 11,
"saves": 12,
"comments": 0,
"created_at": "2025-05-30",
"updated_at": "2026-02-13",
"url": "https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d"
},
{
"title": "OpenWebUI Skills Manager Tool",
"slug": "openwebui_skills_manager_tool_b4bce8e4",
"type": "tool",
"version": "0.3.0",
"author": "Fu-Jie",
"description": "Standalone OpenWebUI tool for managing native Workspace Skills (list/show/install/create/update/delete) for any model.",
"downloads": 463,
"views": 5862,
"upvotes": 8,
"saves": 23,
"comments": 4,
"created_at": "2026-02-28",
"updated_at": "2026-03-13",
"url": "https://openwebui.com/posts/openwebui_skills_manager_tool_b4bce8e4"
},
{
"title": "GitHub Copilot Official SDK Pipe",
"slug": "github_copilot_official_sdk_pipe_ce96f7b4",
"type": "pipe",
"version": "0.10.0",
"author": "Fu-Jie",
"description": "A powerful Agent SDK integration for OpenWebUI. It deeply bridges GitHub Copilot SDK with OpenWebUI's ecosystem, enabling the Agent to autonomously perform intent recognition, web search, and context compaction. It seamlessly reuses your existing Tools, MCP servers, OpenAPI servers, and Skills for a professional, full-featured experience.",
"downloads": 402,
"views": 5629,
"upvotes": 16,
"saves": 12,
"comments": 8,
"created_at": "2026-01-26",
"updated_at": "2026-03-07",
"url": "https://openwebui.com/posts/github_copilot_official_sdk_pipe_ce96f7b4"
},
{
"title": "Flash Card",
"slug": "flash_card_65a2ea8f",
"type": "action",
"version": "0.2.4",
"author": "Fu-Jie",
"description": "Quickly generates beautiful flashcards from text, extracting key points and categories.",
"downloads": 327,
"views": 4685,
"upvotes": 13,
"saves": 22,
"comments": 2,
"created_at": "2025-12-30",
"updated_at": "2026-02-13",
"url": "https://openwebui.com/posts/flash_card_65a2ea8f"
},
{
"title": "Deep Dive",
"slug": "deep_dive_c0b846e4",
"type": "action",
"version": "1.0.0",
"author": "Fu-Jie",
"description": "A comprehensive thinking lens that dives deep into any content - from context to logic, insights, and action paths.",
"downloads": 228,
"views": 1874,
"upvotes": 6,
"saves": 15,
"comments": 0,
"created_at": "2026-01-08",
"updated_at": "2026-01-08",
"url": "https://openwebui.com/posts/deep_dive_c0b846e4"
},
{
"title": "导出为Word增强版",
"slug": "导出为_word_支持公式流程图表格和代码块_8a6306c0",
"type": "action",
"version": "0.4.4",
"author": "Fu-Jie",
"description": "将对话导出为 Word (.docx),支持 Mermaid 图表 (客户端渲染 SVG+PNG)、LaTeX 数学公式、真实超链接、增强表格格式、代码高亮和引用块。",
"downloads": 172,
"views": 3019,
"upvotes": 14,
"saves": 7,
"comments": 4,
"created_at": "2026-01-04",
"updated_at": "2026-02-13",
"url": "https://openwebui.com/posts/导出为_word_支持公式流程图表格和代码块_8a6306c0"
},
{
"title": "📂 Folder Memory – Auto-Evolving Project Context",
"slug": "folder_memory_auto_evolving_project_context_4a9875b2",
"type": "filter",
"version": "0.1.0",
"author": "Fu-Jie",
"description": "Automatically extracts project rules from conversations and injects them into the folder's system prompt.",
"downloads": 128,
"views": 2154,
"upvotes": 7,
"saves": 13,
"comments": 0,
"created_at": "2026-01-20",
"updated_at": "2026-01-20",
"url": "https://openwebui.com/posts/folder_memory_auto_evolving_project_context_4a9875b2"
},
{
"title": "🧠 Smart Mind Map Tool: Auto-Generate Interactive Knowledge Graphs",
"slug": "smart_mind_map_tool_auto_generate_interactive_know_d25f4e3d",
"type": "tool",
"version": "1.0.0",
"author": "Fu-Jie",
"description": "Intelligently analyzes text content and generates interactive mind maps to help users structure and visualize knowledge.",
"downloads": 106,
"views": 2284,
"upvotes": 5,
"saves": 4,
"comments": 0,
"created_at": "2026-03-04",
"updated_at": "2026-03-05",
"url": "https://openwebui.com/posts/smart_mind_map_tool_auto_generate_interactive_know_d25f4e3d"
},
{
"title": "GitHub Copilot SDK Files Filter",
"slug": "github_copilot_sdk_files_filter_403a62ee",
"type": "filter",
"version": "0.1.3",
"author": "Fu-Jie",
"description": "A specialized filter to bypass OpenWebUI's default RAG for GitHub Copilot SDK models. It moves uploaded files to a safe location ('copilot_files') so the Copilot Pipe can process them natively without interference.",
"downloads": 93,
"views": 2462,
"upvotes": 4,
"saves": 1,
"comments": 0,
"created_at": "2026-02-09",
"updated_at": "2026-03-03",
"url": "https://openwebui.com/posts/github_copilot_sdk_files_filter_403a62ee"
},
{
"title": "智能信息图",
"slug": "智能信息图_e04a48ff",
"type": "action",
"version": "1.5.0",
"author": "Fu-Jie",
"description": "基于 AntV Infographic 的智能信息图生成插件。支持多种专业模板,自动图标匹配,并提供 SVG/PNG 下载功能。",
"downloads": 72,
"views": 1566,
"upvotes": 10,
"saves": 1,
"comments": 0,
"created_at": "2025-12-28",
"updated_at": "2026-02-13",
"url": "https://openwebui.com/posts/智能信息图_e04a48ff"
},
{
"title": "思维导图",
"slug": "智能生成交互式思维导图帮助用户可视化知识_8d4b097b",
"type": "action",
"version": "0.9.2",
"author": "Fu-Jie",
"description": "智能分析文本内容,生成交互式思维导图,帮助用户结构化和可视化知识。",
"downloads": 56,
"views": 807,
"upvotes": 6,
"saves": 2,
"comments": 0,
"created_at": "2025-12-31",
"updated_at": "2026-02-13",
"url": "https://openwebui.com/posts/智能生成交互式思维导图帮助用户可视化知识_8d4b097b"
},
{
"title": "异步上下文压缩",
"slug": "异步上下文压缩_5c0617cb",
"type": "action",
"version": "1.2.2",
"author": "Fu-Jie",
"description": "通过智能摘要和消息压缩,降低长对话的 token 消耗,同时保持对话连贯性。",
"downloads": 42,
"views": 892,
"upvotes": 7,
"saves": 5,
"comments": 0,
"created_at": "2025-11-08",
"updated_at": "2026-02-13",
"url": "https://openwebui.com/posts/异步上下文压缩_5c0617cb"
},
{
"title": "闪记卡 (Flash Card)",
"slug": "闪记卡生成插件_4a31eac3",
"type": "action",
"version": "0.2.4",
"author": "Fu-Jie",
"description": "快速将文本提炼为精美的学习记忆卡片,支持核心要点提取与分类。",
"downloads": 34,
"views": 922,
"upvotes": 7,
"saves": 1,
"comments": 0,
"created_at": "2025-12-30",
"updated_at": "2026-02-13",
"url": "https://openwebui.com/posts/闪记卡生成插件_4a31eac3"
},
{
"title": "精读",
"slug": "精读_99830b0f",
"type": "action",
"version": "1.0.0",
"author": "Fu-Jie",
"description": "全方位的思维透镜 —— 从背景全景到逻辑脉络,从深度洞察到行动路径。",
"downloads": 34,
"views": 699,
"upvotes": 5,
"saves": 1,
"comments": 0,
"created_at": "2026-01-08",
"updated_at": "2026-01-08",
"url": "https://openwebui.com/posts/精读_99830b0f"
},
{
"title": "An Unconventional Use of Open Terminal ⚡",
"slug": "an_unconventional_use_of_open_terminal_35498f8f",
"type": "action",
"version": "",
"author": "",
"description": "",
"downloads": 0,
"views": 3205,
"upvotes": 7,
"saves": 1,
"comments": 2,
"created_at": "2026-03-06",
"updated_at": "2026-03-07",
"url": "https://openwebui.com/posts/an_unconventional_use_of_open_terminal_35498f8f"
},
{
"title": "🚀 GitHub Copilot SDK Pipe v0.9.0: Skills & RichUI",
"slug": "github_copilot_sdk_pipe_v090_copilot_sdk_skills_co_99a42452",
"type": "pipe",
"version": "",
"author": "",
"description": "",
"downloads": 0,
"views": 1781,
"upvotes": 5,
"saves": 1,
"comments": 0,
"created_at": "2026-02-27",
"updated_at": "2026-02-28",
"url": "https://openwebui.com/posts/github_copilot_sdk_pipe_v090_copilot_sdk_skills_co_99a42452"
},
{
"title": "🚀 GitHub Copilot SDK Pipe v0.7.0: Skills & Rich UI 🛠️",
"slug": "github_copilot_sdk_pipe_v070_native_tool_ui_zero_c_4af38131",
"type": "pipe",
"version": "",
"author": "",
"description": "",
"downloads": 0,
"views": 2775,
"upvotes": 8,
"saves": 4,
"comments": 1,
"created_at": "2026-02-22",
"updated_at": "2026-02-28",
"url": "https://openwebui.com/posts/github_copilot_sdk_pipe_v070_native_tool_ui_zero_c_4af38131"
},
{
"title": "🚀 GitHub Copilot SDK Pipe: AI That Executes, Not Just Talks",
"slug": "github_copilot_sdk_for_openwebui_elevate_your_ai_t_a140f293",
"type": "pipe",
"version": "",
"author": "",
"description": "",
"downloads": 0,
"views": 2441,
"upvotes": 7,
"saves": 5,
"comments": 0,
"created_at": "2026-02-10",
"updated_at": "2026-02-10",
"url": "https://openwebui.com/posts/github_copilot_sdk_for_openwebui_elevate_your_ai_t_a140f293"
},
{
"title": "🚀 Open WebUI Prompt Plus: AI-Powered Prompt Manager",
"slug": "open_webui_prompt_plus_ai_powered_prompt_manager_s_15fa060e",
"type": "action",
"version": "",
"author": "",
"description": "",
"downloads": 0,
"views": 1999,
"upvotes": 13,
"saves": 23,
"comments": 9,
"created_at": "2026-01-25",
"updated_at": "2026-01-28",
"url": "https://openwebui.com/posts/open_webui_prompt_plus_ai_powered_prompt_manager_s_15fa060e"
},
{
"title": "Review of Claude Haiku 4.5",
"slug": "review_of_claude_haiku_45_41b0db39",
"type": "review",
"version": "",
"author": "",
"description": "",
"downloads": 0,
"views": 267,
"upvotes": 2,
"saves": 0,
"comments": 0,
"created_at": "2026-01-14",
"updated_at": "2026-01-14",
"url": "https://openwebui.com/posts/review_of_claude_haiku_45_41b0db39"
},
{
"title": " 🛠️ Debug Open WebUI Plugins in Your Browser",
"slug": "debug_open_webui_plugins_in_your_browser_81bf7960",
"type": "action",
"version": "",
"author": "",
"description": "",
"downloads": 0,
"views": 1583,
"upvotes": 16,
"saves": 13,
"comments": 2,
"created_at": "2026-01-10",
"updated_at": "2026-01-10",
"url": "https://openwebui.com/posts/debug_open_webui_plugins_in_your_browser_81bf7960"
}
],
"user": {
"username": "Fu-Jie",
"name": "Fu-Jie",
"profile_url": "https://openwebui.com/u/Fu-Jie",
"profile_image": "https://community.s3.openwebui.com/uploads/users/b15d1348-4347-42b4-b815-e053342d6cb0/profile_d9510745-4bd4-4f8f-a997-4a21847d9300.webp",
"followers": 348,
"following": 6,
"total_points": 352,
"post_points": 299,
"comment_points": 53,
"contributions": 67
}
}
@@ -8,7 +8,7 @@
> *Blue: Downloads | Purple: Views (Real-time dynamic)*

### 📂 Content Distribution




## 📈 Overview
@@ -25,13 +25,11 @@

## 📂 By Type

- 
- 
- 
- 
- 
- 
- 
- 

## 📋 Posts List

@@ -39,28 +37,28 @@
|:---:|------|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 1 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) | action |  |  |  |  |  | 2026-02-27 |
| 2 | [Smart Infographic](https://openwebui.com/posts/smart_infographic_ad6f0c7f) | action |  |  |  |  |  | 2026-02-13 |
| 3 | [Markdown Normalizer](https://openwebui.com/posts/markdown_normalizer_baaa8732) | filter |  |  |  |  |  | 2026-03-03 |
| 4 | [Export to Word Enhanced](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) | action |  |  |  |  |  | 2026-02-13 |
| 5 | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) | filter |  |  |  |  |  | 2026-03-03 |
| 3 | [Markdown Normalizer](https://openwebui.com/posts/markdown_normalizer_baaa8732) | filter |  |  |  |  |  | 2026-03-08 |
| 4 | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) | filter |  |  |  |  |  | 2026-03-14 |
| 5 | [Export to Word Enhanced](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) | action |  |  |  |  |  | 2026-02-13 |
| 6 | [AI Task Instruction Generator](https://openwebui.com/posts/ai_task_instruction_generator_9bab8b37) | prompt |  |  |  |  |  | 2026-01-28 |
| 7 | [Export to Excel](https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d) | action |  |  |  |  |  | 2026-02-13 |
| 8 | [GitHub Copilot Official SDK Pipe](https://openwebui.com/posts/github_copilot_official_sdk_pipe_ce96f7b4) | pipe |  |  |  |  |  | 2026-03-03 |
| 9 | [Flash Card](https://openwebui.com/posts/flash_card_65a2ea8f) | action |  |  |  |  |  | 2026-02-13 |
| 10 | [OpenWebUI Skills Manager Tool](https://openwebui.com/posts/openwebui_skills_manager_tool_b4bce8e4) | tool |  |  |  |  |  | 2026-03-05 |
| 8 | [OpenWebUI Skills Manager Tool](https://openwebui.com/posts/openwebui_skills_manager_tool_b4bce8e4) | tool |  |  |  |  |  | 2026-03-14 |
| 9 | [GitHub Copilot Official SDK Pipe](https://openwebui.com/posts/github_copilot_official_sdk_pipe_ce96f7b4) | pipe |  |  |  |  |  | 2026-03-07 |
| 10 | [Flash Card](https://openwebui.com/posts/flash_card_65a2ea8f) | action |  |  |  |  |  | 2026-02-13 |
| 11 | [Deep Dive](https://openwebui.com/posts/deep_dive_c0b846e4) | action |  |  |  |  |  | 2026-01-08 |
| 12 | [导出为Word增强版](https://openwebui.com/posts/导出为_word_支持公式流程图表格和代码块_8a6306c0) | action |  |  |  |  |  | 2026-02-13 |
| 13 | [📂 Folder Memory – Auto-Evolving Project Context](https://openwebui.com/posts/folder_memory_auto_evolving_project_context_4a9875b2) | filter |  |  |  |  |  | 2026-01-20 |
| 14 | [GitHub Copilot SDK Files Filter](https://openwebui.com/posts/github_copilot_sdk_files_filter_403a62ee) | filter |  |  |  |  |  | 2026-03-03 |
| 15 | [智能信息图](https://openwebui.com/posts/智能信息图_e04a48ff) | action |  |  |  |  |  | 2026-02-13 |
| 16 | [思维导图](https://openwebui.com/posts/智能生成交互式思维导图帮助用户可视化知识_8d4b097b) | action |  |  |  |  |  | 2026-02-13 |
| 17 | [异步上下文压缩](https://openwebui.com/posts/异步上下文压缩_5c0617cb) | action |  |  |  |  |  | 2026-02-13 |
| 18 | [🧠 Smart Mind Map Tool: Auto-Generate Interactive Knowledge Graphs](https://openwebui.com/posts/smart_mind_map_tool_auto_generate_interactive_know_d25f4e3d) | tool |  |  |  |  |  | 2026-03-05 |
| 19 | [闪记卡 (Flash Card)](https://openwebui.com/posts/闪记卡生成插件_4a31eac3) | action |  |  |  |  |  | 2026-02-13 |
| 20 | [精读](https://openwebui.com/posts/精读_99830b0f) | action |  |  |  |  |  | 2026-01-08 |
| 21 | [An Unconventional Use of Open Terminal ⚡](https://openwebui.com/posts/an_unconventional_use_of_open_terminal_35498f8f) | post |  |  |  |  |  | 2026-03-06 |
| 22 | [🚀 GitHub Copilot SDK Pipe v0.9.0: Skills & RichUI](https://openwebui.com/posts/github_copilot_sdk_pipe_v090_copilot_sdk_skills_co_99a42452) | post |  |  |  |  |  | 2026-02-28 |
| 23 | [🚀 GitHub Copilot SDK Pipe v0.7.0: Skills & Rich UI 🛠️](https://openwebui.com/posts/github_copilot_sdk_pipe_v070_native_tool_ui_zero_c_4af38131) | post |  |  |  |  |  | 2026-02-28 |
| 24 | [🚀 GitHub Copilot SDK Pipe: AI That Executes, Not Just Talks](https://openwebui.com/posts/github_copilot_sdk_for_openwebui_elevate_your_ai_t_a140f293) | post |  |  |  |  |  | 2026-02-10 |
| 25 | [🚀 Open WebUI Prompt Plus: AI-Powered Prompt Manager](https://openwebui.com/posts/open_webui_prompt_plus_ai_powered_prompt_manager_s_15fa060e) | post |  |  |  |  |  | 2026-01-28 |
| 14 | [🧠 Smart Mind Map Tool: Auto-Generate Interactive Knowledge Graphs](https://openwebui.com/posts/smart_mind_map_tool_auto_generate_interactive_know_d25f4e3d) | tool |  |  |  |  |  | 2026-03-05 |
| 15 | [GitHub Copilot SDK Files Filter](https://openwebui.com/posts/github_copilot_sdk_files_filter_403a62ee) | filter |  |  |  |  |  | 2026-03-03 |
| 16 | [智能信息图](https://openwebui.com/posts/智能信息图_e04a48ff) | action |  |  |  |  |  | 2026-02-13 |
| 17 | [思维导图](https://openwebui.com/posts/智能生成交互式思维导图帮助用户可视化知识_8d4b097b) | action |  |  |  |  |  | 2026-02-13 |
| 18 | [异步上下文压缩](https://openwebui.com/posts/异步上下文压缩_5c0617cb) | action |  |  |  |  |  | 2026-02-13 |
| 19 | [精读](https://openwebui.com/posts/精读_99830b0f) | action |  |  |  |  |  | 2026-01-08 |
| 20 | [闪记卡 (Flash Card)](https://openwebui.com/posts/闪记卡生成插件_4a31eac3) | action |  |  |  |  |  | 2026-02-13 |
| 21 | [An Unconventional Use of Open Terminal ⚡](https://openwebui.com/posts/an_unconventional_use_of_open_terminal_35498f8f) | action |  |  |  |  |  | 2026-03-07 |
| 22 | [🚀 GitHub Copilot SDK Pipe v0.9.0: Skills & RichUI](https://openwebui.com/posts/github_copilot_sdk_pipe_v090_copilot_sdk_skills_co_99a42452) | pipe |  |  |  |  |  | 2026-02-28 |
| 23 | [🚀 GitHub Copilot SDK Pipe v0.7.0: Skills & Rich UI 🛠️](https://openwebui.com/posts/github_copilot_sdk_pipe_v070_native_tool_ui_zero_c_4af38131) | pipe |  |  |  |  |  | 2026-02-28 |
| 24 | [🚀 GitHub Copilot SDK Pipe: AI That Executes, Not Just Talks](https://openwebui.com/posts/github_copilot_sdk_for_openwebui_elevate_your_ai_t_a140f293) | pipe |  |  |  |  |  | 2026-02-10 |
| 25 | [🚀 Open WebUI Prompt Plus: AI-Powered Prompt Manager](https://openwebui.com/posts/open_webui_prompt_plus_ai_powered_prompt_manager_s_15fa060e) | action |  |  |  |  |  | 2026-01-28 |
| 26 | [Review of Claude Haiku 4.5](https://openwebui.com/posts/review_of_claude_haiku_45_41b0db39) | review |  |  |  |  |  | 2026-01-14 |
|
||||
| 27 | [ 🛠️ Debug Open WebUI Plugins in Your Browser](https://openwebui.com/posts/debug_open_webui_plugins_in_your_browser_81bf7960) | post |  |  |  |  |  | 2026-01-10 |
|
||||
| 27 | [ 🛠️ Debug Open WebUI Plugins in Your Browser](https://openwebui.com/posts/debug_open_webui_plugins_in_your_browser_81bf7960) | action |  |  |  |  |  | 2026-01-10 |
|
||||
|
||||
@@ -8,7 +8,7 @@
> *Blue: total downloads | Purple: total views (generated dynamically)*

### 📂 Content Type Distribution




## 📈 Overview
@@ -25,13 +25,11 @@
## 📂 By Type

- 
- 
- 
- 
- 
- 
- 
- 

## 📋 Post List
@@ -39,28 +37,28 @@
|:---:|------|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 1 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) | action |  |  |  |  |  | 2026-02-27 |
| 2 | [Smart Infographic](https://openwebui.com/posts/smart_infographic_ad6f0c7f) | action |  |  |  |  |  | 2026-02-13 |
| 3 | [Markdown Normalizer](https://openwebui.com/posts/markdown_normalizer_baaa8732) | filter |  |  |  |  |  | 2026-03-03 |
| 4 | [Export to Word Enhanced](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) | action |  |  |  |  |  | 2026-02-13 |
| 5 | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) | filter |  |  |  |  |  | 2026-03-03 |
| 3 | [Markdown Normalizer](https://openwebui.com/posts/markdown_normalizer_baaa8732) | filter |  |  |  |  |  | 2026-03-08 |
| 4 | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) | filter |  |  |  |  |  | 2026-03-14 |
| 5 | [Export to Word Enhanced](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) | action |  |  |  |  |  | 2026-02-13 |
| 6 | [AI Task Instruction Generator](https://openwebui.com/posts/ai_task_instruction_generator_9bab8b37) | prompt |  |  |  |  |  | 2026-01-28 |
| 7 | [Export to Excel](https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d) | action |  |  |  |  |  | 2026-02-13 |
| 8 | [GitHub Copilot Official SDK Pipe](https://openwebui.com/posts/github_copilot_official_sdk_pipe_ce96f7b4) | pipe |  |  |  |  |  | 2026-03-03 |
| 9 | [Flash Card](https://openwebui.com/posts/flash_card_65a2ea8f) | action |  |  |  |  |  | 2026-02-13 |
| 10 | [OpenWebUI Skills Manager Tool](https://openwebui.com/posts/openwebui_skills_manager_tool_b4bce8e4) | tool |  |  |  |  |  | 2026-03-05 |
| 8 | [OpenWebUI Skills Manager Tool](https://openwebui.com/posts/openwebui_skills_manager_tool_b4bce8e4) | tool |  |  |  |  |  | 2026-03-14 |
| 9 | [GitHub Copilot Official SDK Pipe](https://openwebui.com/posts/github_copilot_official_sdk_pipe_ce96f7b4) | pipe |  |  |  |  |  | 2026-03-07 |
| 10 | [Flash Card](https://openwebui.com/posts/flash_card_65a2ea8f) | action |  |  |  |  |  | 2026-02-13 |
| 11 | [Deep Dive](https://openwebui.com/posts/deep_dive_c0b846e4) | action |  |  |  |  |  | 2026-01-08 |
| 12 | [导出为Word增强版](https://openwebui.com/posts/导出为_word_支持公式流程图表格和代码块_8a6306c0) | action |  |  |  |  |  | 2026-02-13 |
| 13 | [📂 Folder Memory – Auto-Evolving Project Context](https://openwebui.com/posts/folder_memory_auto_evolving_project_context_4a9875b2) | filter |  |  |  |  |  | 2026-01-20 |
| 14 | [GitHub Copilot SDK Files Filter](https://openwebui.com/posts/github_copilot_sdk_files_filter_403a62ee) | filter |  |  |  |  |  | 2026-03-03 |
| 15 | [智能信息图](https://openwebui.com/posts/智能信息图_e04a48ff) | action |  |  |  |  |  | 2026-02-13 |
| 16 | [思维导图](https://openwebui.com/posts/智能生成交互式思维导图帮助用户可视化知识_8d4b097b) | action |  |  |  |  |  | 2026-02-13 |
| 17 | [异步上下文压缩](https://openwebui.com/posts/异步上下文压缩_5c0617cb) | action |  |  |  |  |  | 2026-02-13 |
| 18 | [🧠 Smart Mind Map Tool: Auto-Generate Interactive Knowledge Graphs](https://openwebui.com/posts/smart_mind_map_tool_auto_generate_interactive_know_d25f4e3d) | tool |  |  |  |  |  | 2026-03-05 |
| 19 | [闪记卡 (Flash Card)](https://openwebui.com/posts/闪记卡生成插件_4a31eac3) | action |  |  |  |  |  | 2026-02-13 |
| 20 | [精读](https://openwebui.com/posts/精读_99830b0f) | action |  |  |  |  |  | 2026-01-08 |
| 21 | [An Unconventional Use of Open Terminal ⚡](https://openwebui.com/posts/an_unconventional_use_of_open_terminal_35498f8f) | post |  |  |  |  |  | 2026-03-06 |
| 22 | [🚀 GitHub Copilot SDK Pipe v0.9.0: Skills & RichUI](https://openwebui.com/posts/github_copilot_sdk_pipe_v090_copilot_sdk_skills_co_99a42452) | post |  |  |  |  |  | 2026-02-28 |
| 23 | [🚀 GitHub Copilot SDK Pipe v0.7.0: Skills & Rich UI 🛠️](https://openwebui.com/posts/github_copilot_sdk_pipe_v070_native_tool_ui_zero_c_4af38131) | post |  |  |  |  |  | 2026-02-28 |
| 24 | [🚀 GitHub Copilot SDK Pipe: AI That Executes, Not Just Talks](https://openwebui.com/posts/github_copilot_sdk_for_openwebui_elevate_your_ai_t_a140f293) | post |  |  |  |  |  | 2026-02-10 |
| 25 | [🚀 Open WebUI Prompt Plus: AI-Powered Prompt Manager](https://openwebui.com/posts/open_webui_prompt_plus_ai_powered_prompt_manager_s_15fa060e) | post |  |  |  |  |  | 2026-01-28 |
| 14 | [🧠 Smart Mind Map Tool: Auto-Generate Interactive Knowledge Graphs](https://openwebui.com/posts/smart_mind_map_tool_auto_generate_interactive_know_d25f4e3d) | tool |  |  |  |  |  | 2026-03-05 |
| 15 | [GitHub Copilot SDK Files Filter](https://openwebui.com/posts/github_copilot_sdk_files_filter_403a62ee) | filter |  |  |  |  |  | 2026-03-03 |
| 16 | [智能信息图](https://openwebui.com/posts/智能信息图_e04a48ff) | action |  |  |  |  |  | 2026-02-13 |
| 17 | [思维导图](https://openwebui.com/posts/智能生成交互式思维导图帮助用户可视化知识_8d4b097b) | action |  |  |  |  |  | 2026-02-13 |
| 18 | [异步上下文压缩](https://openwebui.com/posts/异步上下文压缩_5c0617cb) | action |  |  |  |  |  | 2026-02-13 |
| 19 | [精读](https://openwebui.com/posts/精读_99830b0f) | action |  |  |  |  |  | 2026-01-08 |
| 20 | [闪记卡 (Flash Card)](https://openwebui.com/posts/闪记卡生成插件_4a31eac3) | action |  |  |  |  |  | 2026-02-13 |
| 21 | [An Unconventional Use of Open Terminal ⚡](https://openwebui.com/posts/an_unconventional_use_of_open_terminal_35498f8f) | action |  |  |  |  |  | 2026-03-07 |
| 22 | [🚀 GitHub Copilot SDK Pipe v0.9.0: Skills & RichUI](https://openwebui.com/posts/github_copilot_sdk_pipe_v090_copilot_sdk_skills_co_99a42452) | pipe |  |  |  |  |  | 2026-02-28 |
| 23 | [🚀 GitHub Copilot SDK Pipe v0.7.0: Skills & Rich UI 🛠️](https://openwebui.com/posts/github_copilot_sdk_pipe_v070_native_tool_ui_zero_c_4af38131) | pipe |  |  |  |  |  | 2026-02-28 |
| 24 | [🚀 GitHub Copilot SDK Pipe: AI That Executes, Not Just Talks](https://openwebui.com/posts/github_copilot_sdk_for_openwebui_elevate_your_ai_t_a140f293) | pipe |  |  |  |  |  | 2026-02-10 |
| 25 | [🚀 Open WebUI Prompt Plus: AI-Powered Prompt Manager](https://openwebui.com/posts/open_webui_prompt_plus_ai_powered_prompt_manager_s_15fa060e) | action |  |  |  |  |  | 2026-01-28 |
| 26 | [Review of Claude Haiku 4.5](https://openwebui.com/posts/review_of_claude_haiku_45_41b0db39) | review |  |  |  |  |  | 2026-01-14 |
| 27 | [ 🛠️ Debug Open WebUI Plugins in Your Browser](https://openwebui.com/posts/debug_open_webui_plugins_in_your_browser_81bf7960) | post |  |  |  |  |  | 2026-01-10 |
| 27 | [ 🛠️ Debug Open WebUI Plugins in Your Browser](https://openwebui.com/posts/debug_open_webui_plugins_in_your_browser_81bf7960) | action |  |  |  |  |  | 2026-01-10 |
@@ -1,13 +1,19 @@
# Async Context Compression Filter

**Author:** [Fu-Jie](https://github.com/Fu-Jie/openwebui-extensions) | **Version:** 1.4.1 | **Project:** [OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions) | **License:** MIT
**Author:** [Fu-Jie](https://github.com/Fu-Jie/openwebui-extensions) | **Version:** 1.5.0 | **Project:** [OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions) | **License:** MIT

This filter reduces token consumption in long conversations through intelligent summarization and message compression while keeping conversations coherent.

## What's new in 1.4.1
## What's new in 1.5.0

- **Reverse-Unfolding Mechanism**: Accurately reconstructs the expanded native tool-calling sequence during the outlet phase to permanently fix coordinate drift and missing summaries for long tool-based conversations.
- **Safer Tool Trimming**: Refactored `enable_tool_output_trimming` to strictly use atomic block groups for safe trimming, completely preventing JSON payload corruption.
- **External Chat Reference Summaries**: Added support for referenced chat context blocks that can reuse cached summaries, inject small referenced chats directly, or generate summaries for larger referenced chats before injection.
- **Fast Multilingual Token Estimation**: Added a new mixed-script token estimation pipeline so inlet/outlet preflight checks can avoid unnecessary exact token counts while staying much closer to real usage.
- **Stronger Working-Memory Prompt**: Refined the XML summary prompt to better preserve actionable context across general chat, coding tasks, and tool-heavy conversations.
- **Clearer Frontend Debug Logs**: Reworked browser-console logging into grouped structural snapshots that are easier to scan during debugging.
- **Safer Tool Trimming Defaults**: Enabled native tool-output trimming by default and exposed a dedicated `tool_trim_threshold_chars` valve with a 600-character default.
- **Safer Referenced-Chat Fallbacks**: If generating a referenced chat summary fails, the new reference-summary path now falls back to direct contextual injection instead of failing the whole chat.
- **Correct Summary Budgeting**: `summary_model_max_context` now controls summary-input fitting, while `max_summary_tokens` remains an output-length cap.
- **More Visible Summary Failures**: Important background summary failures now surface in the browser console (`F12`) and as a status hint even when `show_debug_log` is off.

---
@@ -19,15 +25,85 @@ This filter reduces token consumption in long conversations through intelligent
- ✅ Persistent storage via Open WebUI's shared database connection (PostgreSQL, SQLite, etc.).
- ✅ Flexible retention policy to keep the first and last N messages.
- ✅ Smart injection of historical summaries back into the context.
- ✅ External chat reference summarization with cached-summary reuse, direct injection for small chats, and generated summaries for larger chats.
- ✅ Structure-aware trimming that preserves document structure (headers, intro, conclusion).
- ✅ Native tool output trimming for cleaner context when using function calling.
- ✅ Real-time context usage monitoring with warning notifications (>90%).
- ✅ Detailed token logging for precise debugging and optimization.
- ✅ Fast multilingual token estimation plus exact token fallback for precise debugging and optimization.
- ✅ **Smart Model Matching**: Automatically inherits configuration from base models for custom presets.
- ⚠ **Multimodal Support**: Images are preserved but their tokens are **NOT** calculated. Please adjust thresholds accordingly.
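A mixed-script estimator in the spirit of the fast-estimation feature above might look like the sketch below. The ratios are illustrative assumptions (roughly one token per CJK character and about four Latin characters per token); the plugin's actual heuristic may differ.

```python
# Rough mixed-script token estimate: CJK characters count as ~1 token each,
# everything else at ~4 characters per token. Illustrative ratios only.
def estimate_tokens(text: str) -> int:
    if not text:
        return 0
    cjk = sum(1 for ch in text if "\u4e00" <= ch <= "\u9fff")
    other = len(text) - cjk
    return cjk + -(-other // 4)  # ceiling division for the non-CJK remainder
```

An estimator like this is cheap enough to run on every preflight check, with an exact tokenizer reserved for borderline cases.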
---

## What This Fixes

- **Problem 1: A referenced chat could break the current request.**
  Before, if the filter needed to summarize a referenced chat and that LLM call failed, the current chat could fail with it. Now it degrades gracefully and injects direct context instead.
- **Problem 2: Some referenced chats were being cut too aggressively.**
  Before, the output limit (`max_summary_tokens`) could be treated like the input window, which made large referenced chats shrink earlier than necessary. Now input fitting uses the summary model's real context window (`summary_model_max_context` or model/global fallback).
- **Problem 3: Some background summary failures were too easy to miss.**
  Before, a failure during background summary preparation could disappear quietly when frontend debug logging was off. Now important failures are forced to the browser console and also shown through a user-facing status message.
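The graceful-degradation path from Problem 1 amounts to a try/except around the reference-summary call. A minimal sketch (function names here are illustrative, not the plugin's actual API):

```python
# If summarizing a referenced chat fails, fall back to injecting its raw text
# instead of failing the whole request. Names are illustrative only.
def inject_reference(messages: list, ref_text: str, summarize) -> list:
    try:
        block = f"Referenced chat summary:\n{summarize(ref_text)}"
    except Exception:
        # Graceful degradation: direct contextual injection, truncated.
        block = f"Referenced chat (unsummarized):\n{ref_text[:2000]}"
    return [{"role": "system", "content": block}] + messages
```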
---

## Workflow Overview

This filter operates in two phases:

1. `inlet`: injects stored summaries, processes external chat references, and trims context when required before the request is sent to the model.
2. `outlet`: runs asynchronously after the response is complete, decides whether a new summary should be generated, and persists it when appropriate.
```mermaid
flowchart TD
    A[Request enters inlet] --> B[Normalize tool IDs and optionally trim large tool outputs]
    B --> C{Referenced chats attached?}
    C -- No --> D[Load current chat summary if available]
    C -- Yes --> E[Inspect each referenced chat]

    E --> F{Existing cached summary?}
    F -- Yes --> G[Reuse cached summary]
    F -- No --> H{Fits direct budget?}
    H -- Yes --> I[Inject full referenced chat text]
    H -- No --> J[Prepare referenced-chat summary input]

    J --> K{Referenced-chat summary call succeeds?}
    K -- Yes --> L[Inject generated referenced summary]
    K -- No --> M[Fallback to direct contextual injection]

    G --> D
    I --> D
    L --> D
    M --> D

    D --> N[Build current-chat Head + Summary + Tail]
    N --> O{Over max_context_tokens?}
    O -- Yes --> P[Trim oldest atomic groups]
    O -- No --> Q[Send final context to the model]
    P --> Q

    Q --> R[Model returns the reply]
    R --> S[Outlet rebuilds the full history]
    S --> T{Reached compression threshold?}
    T -- No --> U[Finish]
    T -- Yes --> V[Fit summary input to the summary model context]

    V --> W{Background summary call succeeds?}
    W -- Yes --> X[Save new chat summary and update status]
    W -- No --> Y[Force browser-console error and show status hint]
```
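The inlet/outlet split in the flowchart can be sketched as a minimal filter skeleton. The in-memory dict and the fixed message-count threshold are stand-ins: the real plugin persists summaries through Open WebUI's shared database and uses token-based thresholds and an LLM call instead.

```python
# Minimal sketch of the two-phase filter shape. Storage and the threshold
# are illustrative; the real plugin persists summaries to the database and
# generates them asynchronously with a summary model.
class Filter:
    def __init__(self):
        self.summaries = {}  # chat_id -> stored summary text

    def inlet(self, body: dict) -> dict:
        """Inject a stored summary before the request reaches the model."""
        summary = self.summaries.get(body.get("chat_id"))
        if summary:
            note = {"role": "system", "content": f"Summary of earlier turns: {summary}"}
            body["messages"] = [note] + body.get("messages", [])
        return body

    def outlet(self, body: dict) -> dict:
        """After the reply completes, decide whether a new summary is warranted."""
        if len(body.get("messages", [])) > 8:  # stand-in for the token threshold
            self.summaries[body.get("chat_id")] = "condensed history"
        return body
```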
### Key Notes

- `inlet` only injects and trims context. It does not generate the main chat summary.
- `outlet` performs summary generation asynchronously and does not block the current reply.
- External chat references may come from an existing persisted summary, a small chat's full text, or a generated/truncated reference summary.
- If a referenced-chat summary call fails, the filter falls back to direct context injection instead of failing the whole request.
- `summary_model_max_context` controls summary-input fitting. `max_summary_tokens` only controls how long the generated summary may be.
- Important background summary failures are surfaced to the browser console (`F12`) and the chat status area.
- External reference messages are protected during trimming so they are not discarded first.

---
## Installation & Configuration

### 1) Database (automatic)
@@ -51,11 +127,12 @@ This filter reduces token consumption in long conversations through intelligent
| `keep_first` | `1` | Always keep the first N messages (protects system prompts). |
| `keep_last` | `6` | Always keep the last N messages to preserve recent context. |
| `summary_model` | `None` | Model for summaries. Strongly recommended to set a fast, economical model (e.g., `gemini-2.5-flash`, `deepseek-v3`). Falls back to the current chat model when empty. |
| `summary_model_max_context` | `0` | Max context tokens for the summary model. If 0, falls back to `model_thresholds` or global `max_context_tokens`. |
| `max_summary_tokens` | `16384` | Maximum tokens for the generated summary. |
| `summary_temperature` | `0.3` | Randomness for summary generation. Lower is more deterministic. |
| `summary_model_max_context` | `0` | Input context window used to fit summary requests. If `0`, falls back to `model_thresholds` or global `max_context_tokens`. |
| `max_summary_tokens` | `16384` | Maximum output length for the generated summary. This is not the summary-input context limit. |
| `summary_temperature` | `0.1` | Randomness for summary generation. Lower is more deterministic. |
| `model_thresholds` | `{}` | Per-model overrides for `compression_threshold_tokens` and `max_context_tokens` (useful for mixed models). |
| `enable_tool_output_trimming` | `false` | When enabled and `function_calling: "native"` is active, trims verbose tool outputs to extract only the final answer. |
| `enable_tool_output_trimming` | `true` | When enabled for `function_calling: "native"`, trims oversized native tool outputs while keeping the tool-call chain intact. |
| `tool_trim_threshold_chars` | `600` | Trim native tool output blocks once their total content length reaches this threshold. |
| `debug_mode` | `false` | Log verbose debug info. Set to `false` in production. |
| `show_debug_log` | `false` | Print debug logs to browser console (F12). Useful for frontend debugging. |
| `show_token_usage_status` | `true` | Show token usage status notification in the chat interface. |
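The fallback chain in the table (`summary_model_max_context`, then `model_thresholds` or the global `max_context_tokens`) can be sketched as follows. A plain dataclass stands in for the plugin's pydantic `Valves` model, and only the budgeting-related valves are shown; defaults mirror the table above.

```python
from dataclasses import dataclass

# Dataclass stand-in for the plugin's Valves; only budgeting valves shown.
@dataclass
class Valves:
    summary_model_max_context: int = 0  # 0 -> fall back to model/global limits
    max_summary_tokens: int = 16384     # output-length cap, not the input window
    summary_temperature: float = 0.1
    enable_tool_output_trimming: bool = True
    tool_trim_threshold_chars: int = 600

def summary_input_budget(valves: Valves, fallback_max_context: int) -> int:
    """Input window used to fit summary requests (distinct from the output cap)."""
    return valves.summary_model_max_context or fallback_max_context
```

Keeping the input window and the output cap in separate valves is exactly the 1.5.0 budgeting fix: the output cap never shrinks the summarizable input.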
@@ -71,8 +148,12 @@ If this plugin has been useful, a star on [OpenWebUI Extensions](https://github.
- **Initial system prompt is lost**: Keep `keep_first` greater than 0 to protect the initial message.
- **Compression effect is weak**: Raise `compression_threshold_tokens` or lower `keep_first` / `keep_last` to allow more aggressive compression.
- **A referenced chat summary fails**: The current request should continue with a direct-context fallback. Check the browser console (`F12`) if you need the upstream failure details.
- **A background summary silently seems to do nothing**: Important failures now surface in chat status and the browser console (`F12`).
- **Submit an Issue**: If you encounter any problems, please submit an issue on GitHub: [OpenWebUI Extensions Issues](https://github.com/Fu-Jie/openwebui-extensions/issues)

## Changelog

See [`v1.5.0` Release Notes](https://github.com/Fu-Jie/openwebui-extensions/blob/main/plugins/filters/async-context-compression/v1.5.0.md) for the release-specific summary.

See the full history on GitHub: [OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions)
@@ -1,15 +1,21 @@
# Async Context Compression Filter

**Author:** [Fu-Jie](https://github.com/Fu-Jie/openwebui-extensions) | **Version:** 1.4.1 | **Project:** [OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions) | **License:** MIT
**Author:** [Fu-Jie](https://github.com/Fu-Jie/openwebui-extensions) | **Version:** 1.5.0 | **Project:** [OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions) | **License:** MIT

> **Important**: To keep every filter maintainable and easy to use, each filter should ship with clear, complete documentation that fully covers its features, configuration, and usage.

This filter reduces token consumption in long conversations through intelligent summarization and message compression while keeping conversations coherent.

## What's new in 1.4.1
## What's new in 1.5.0

- **Reverse-Unfolding Mechanism**: Introduces a `_unfold_messages` step that precisely realigns the coordinate system in the `outlet` phase, fixing progress drift and skipped summary generation in long tool-calling conversations caused by frontend view folding.
- **Safer Tool Trimming**: Refactored `enable_tool_output_trimming` to strictly use atomic groups for safe native tool-content trimming, replacing aggressive regex matching and preventing JSON payload corruption.
- **External Chat Reference Summaries**: Added summary support for referenced chat context: cached summaries can be reused, small referenced chats are injected directly, and larger referenced chats are summarized before injection.
- **Fast Multilingual Token Estimation**: Added a mixed-script token estimation pipeline so inlet/outlet preflight checks need fewer exact counts while staying much closer to real usage than the old rough character ratio.
- **Stronger Working-Memory Prompt**: Rewrote the XML summary prompt to better retain key information in general chat, coding tasks, and chained tool-call scenarios.
- **Clearer Frontend Debug Logs**: Browser-console logs are now grouped and structured, making context-compression behavior easier to inspect.
- **Safer Tool Trimming Defaults**: Native tool-output trimming is now enabled by default, with a new `tool_trim_threshold_chars` valve defaulting to 600 characters.
- **Safer Referenced-Chat Fallbacks**: When the new referenced-chat summary path fails, it no longer drags down the current request; the filter falls back to direct context injection.
- **Correct Summary Budgeting**: `summary_model_max_context` now only governs the summary input window, while `max_summary_tokens` continues to govern summary output length only.
- **More Visible Summary Failures**: Important background summary failures are now forced to the browser console (`F12`) with a matching status hint.

---
@@ -21,14 +27,84 @@
- ✅ **Persistent storage**: Reuses Open WebUI's shared database connection, automatically supporting PostgreSQL, SQLite, and more.
- ✅ **Flexible retention policy**: Configurable retention of head and tail messages keeps key information coherent.
- ✅ **Smart injection**: Intelligently injects historical summaries into the new context.
- ✅ **External chat reference summaries**: Supports cached-summary reuse, direct injection of small chats, and summarize-then-inject for larger chats.
- ✅ **Structure-aware trimming**: Intelligently collapses overlong messages while preserving the document skeleton (headers, opening, and closing).
- ✅ **Native tool output trimming**: Supports trimming verbose tool-call outputs.
- ✅ **Real-time monitoring**: Monitors context usage in real time and warns above 90%.
- ✅ **Detailed logging**: Provides precise token statistics for debugging.
- ✅ **Fast estimation + exact fallback**: Offers faster multilingual token estimation and falls back to exact counting when necessary.
- ✅ **Smart model matching**: Custom models automatically inherit threshold configuration from their base models.
- ⚠ **Multimodal support**: Image content is preserved, but its tokens are **not counted**. Adjust thresholds accordingly.

For details on how it works, see the [Workflow Guide](https://github.com/Fu-Jie/openwebui-extensions/blob/main/plugins/filters/async-context-compression/WORKFLOW_GUIDE_CN.md).
A longer explanation of how it works is still available in the [Workflow Guide](https://github.com/Fu-Jie/openwebui-extensions/blob/main/plugins/filters/async-context-compression/WORKFLOW_GUIDE_CN.md).

---
## What This Release Fixes (Plain-Language Version)

- **Problem 1: Referencing another chat could take the current conversation down with it.**
  Previously, if the filter had to summarize a referenced chat first and that LLM call failed, the current request could also fail outright. It now summarizes when it can and falls back to injecting the context directly when it cannot, so the current conversation is no longer dragged down.
- **Problem 2: Some referenced chats were truncated too early and lost too much information.**
  One code path mistook `max_summary_tokens`, an output-length limit, for the input context window, so larger referenced chats were cut prematurely. Fitting is now based on the summary model's real input window, preserving more useful content.
- **Problem 3: When a background summary failed, users had no easy way to know.**
  Previously, with `show_debug_log=false`, some background failures only appeared in internal logs. Key failures are now forced to the browser console, with a chat status hint pointing to `F12`.

---
## Workflow Overview

This filter operates in two phases:

1. `inlet`: runs before the request is sent to the model, injecting existing summaries, handling external chat references, and trimming context when necessary.
2. `outlet`: runs asynchronously after the model's reply completes, deciding whether a new summary is needed and writing it to the database when appropriate.
```mermaid
flowchart TD
    A[Request enters inlet] --> B[Normalize tool IDs and trim oversized tool outputs as needed]
    B --> C{Referenced chats attached?}
    C -- No --> D[Load the current chat summary if one exists]
    C -- Yes --> E[Inspect each referenced chat]

    E --> F{Cached summary available?}
    F -- Yes --> G[Reuse the cached summary]
    F -- No --> H{Fits the direct-injection budget?}
    H -- Yes --> I[Inject the full referenced chat text]
    H -- No --> J[Prepare summary input for the referenced chat]

    J --> K{Referenced-chat summary call succeeds?}
    K -- Yes --> L[Inject the generated reference summary]
    K -- No --> M[Fall back to direct context injection]

    G --> D
    I --> D
    L --> D
    M --> D

    D --> N[Build Head + Summary + Tail for the current chat]
    N --> O{Over max_context_tokens?}
    O -- Yes --> P[Trim starting from the oldest atomic groups]
    O -- No --> Q[Send the final context to the model]
    P --> Q

    Q --> R[Model returns the current reply]
    R --> S[Outlet rebuilds the full history]
    S --> T{Compression threshold reached?}
    T -- No --> U[Done]
    T -- Yes --> V[Fit the summary input to the summary model's context window]

    V --> W{Background summary call succeeds?}
    W -- Yes --> X[Save the new summary and update status]
    W -- No --> Y[Force a browser-console error and prompt the user to check it]
```
### Key Notes

- `inlet` only injects and trims context; it does not generate the current chat's main summary.
- `outlet` generates summaries asynchronously and does not block the current reply.
- External chat references may come from an existing persisted summary, a small chat's full text, or a dynamically generated/truncated reference summary.
- If a referenced-chat summary fails, the filter automatically falls back to direct context injection instead of failing the current request.
- `summary_model_max_context` controls the summary input window; `max_summary_tokens` only controls the generated summary's output length.
- Important background summary failures are shown in the browser console (`F12`) and the chat status hint.
- External reference messages are specially protected during trimming so they are not deleted first.

---
@@ -64,8 +140,8 @@
| Parameter | Default | Description |
| :-------------------- | :------ | :------------------------------------------------------------------------------------------------------------------------------------------ |
| `summary_model` | `None` | Model ID used to generate summaries. **Strongly recommended**: configure a fast, economical model with a large context window (e.g., `gemini-2.5-flash`, `deepseek-v3`). When empty, the current chat model is reused. |
| `summary_model_max_context` | `0` | Maximum context tokens for the summary model. If 0, falls back to `model_thresholds` or the global `max_context_tokens`. |
| `max_summary_tokens` | `16384` | Maximum tokens allowed when generating a summary. |
| `summary_model_max_context` | `0` | Input context window available to summary requests. If 0, falls back to `model_thresholds` or the global `max_context_tokens`. |
| `max_summary_tokens` | `16384` | Maximum output tokens allowed for the generated summary. This is not the summary input window. |
| `summary_temperature` | `0.1` | Controls randomness of summary generation; lower values are more stable. |

### Advanced Configuration
@@ -93,7 +169,8 @@
| Parameter | Default | Description |
| :----------------------------- | :------- | :-------------------------------------------------------------------------------------------------------------------------------------- |
| `enable_tool_output_trimming` | `false` | When enabled and `function_calling: "native"` is active, trims verbose tool outputs to keep only the final answer. |
| `enable_tool_output_trimming` | `true` | When enabled (effective only under `function_calling: "native"`), trims oversized native tool outputs, keeping the tool-call chain intact and replacing verbose content with short placeholders. |
| `tool_trim_threshold_chars` | `600` | Triggers trimming once a native tool output's total character count reaches this value; useful for tool results containing long text or tables. |
| `debug_mode` | `false` | Whether to print verbose debug information to Open WebUI's console logs. Defaults to and is recommended as `false` in production. |
| `show_debug_log` | `false` | Whether to print debug logs to the browser console (F12). Useful for frontend debugging. |
| `show_token_usage_status` | `true` | Whether to show a token-usage status notification when the conversation ends. |
@@ -109,8 +186,12 @@
- **Initial system prompt is lost**: Set `keep_first` to a value greater than 0.
- **Compression effect is weak**: Raise `compression_threshold_tokens`, or lower `keep_first` / `keep_last` for more aggressive compression.
- **A referenced-chat summary fails**: The current request should now continue with a direct-context fallback. Open the browser console (`F12`) to see the upstream failure details.
- **A background summary seems to do nothing**: Important failures now appear in both the status hint and the browser console (`F12`).
- **Submit an issue**: If you run into any problems, please open an issue on GitHub: [OpenWebUI Extensions Issues](https://github.com/Fu-Jie/openwebui-extensions/issues)

## Changelog

See the [`v1.5.0` release notes](https://github.com/Fu-Jie/openwebui-extensions/blob/main/plugins/filters/async-context-compression/v1.5.0_CN.md) for this release's standalone summary.

See the full history on GitHub: [OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions)
@@ -20,9 +20,9 @@ Filters act as middleware in the message pipeline:
---

Reduces token consumption in long conversations through intelligent summarization while maintaining coherence.
Reduces token consumption in long conversations with safer summary fallbacks and clearer failure visibility.

**Version:** 1.4.1
**Version:** 1.5.0

[:octicons-arrow-right-24: Documentation](async-context-compression.md)
@@ -20,11 +20,11 @@ Filters act as middleware in the message pipeline:

---

Reduces token consumption in long conversations through intelligent summarization while maintaining coherence.
Reduces token consumption in long conversations with safer summary fallbacks and clearer failure visibility.

**Version:** 1.4.1
**Version:** 1.5.0

[:octicons-arrow-right-24: Documentation](async-context-compression.md)
[:octicons-arrow-right-24: Documentation](async-context-compression.zh.md)

- :material-text-box-plus:{ .lg .middle } **Context Enhancement**
139
docs/plugins/tools/batch-install-plugins-tool.md
Normal file
@@ -0,0 +1,139 @@
# Batch Install Plugins from GitHub

**Author:** [Fu-Jie](https://github.com/Fu-Jie) | **Version:** 1.0.0 | **Project:** [OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions) | **License:** MIT

---

One-click batch install of plugins from GitHub repositories into your OpenWebUI instance.

## Key Features

- **One-Click Install**: Install all plugins with a single command
- **Auto-Update**: Automatically updates previously installed plugins
- **GitHub Support**: Install plugins from any GitHub repository
- **Multi-Type Support**: Supports Pipe, Action, Filter, and Tool plugins
- **Confirmation**: Shows the plugin list before installing and allows selective installation
- **i18n**: Supports 11 languages

## Flow
```
User Input
    │
    ▼
┌─────────────────────────────────────┐
│  Discover Plugins from GitHub       │
│  (fetch file tree + parse .py)      │
└─────────────────────────────────────┘
    │
    ▼
┌─────────────────────────────────────┐
│  Filter by Type & Keywords          │
│  (tool/filter/pipe/action)          │
└─────────────────────────────────────┘
    │
    ▼
┌─────────────────────────────────────┐
│  Show Confirmation Dialog           │
│  (list plugins + exclude hint)      │
└─────────────────────────────────────┘
    │
    ├── [Cancel] → End
    │
    ▼
┌─────────────────────────────────────┐
│  Install to OpenWebUI               │
│  (update or create each plugin)     │
└─────────────────────────────────────┘
    │
    ▼
  Done
```
## How to Use

1. Open OpenWebUI and go to **Workspace > Tools**
2. Install **Batch Install Plugins from GitHub** from the official marketplace
3. Enable this tool for your model/chat
4. Ask the model to install plugins

## Usage Examples

```
"Install all plugins"
"Install all plugins from github.com/username/repo"
"Install only pipe plugins"
"Install action and filter plugins"
"Install all plugins, exclude_keywords=copilot"
```
||||
|
||||
## Popular Plugin Repositories
|
||||
|
||||
Here are some popular repositories with many plugins you can install:
|
||||
|
||||
### Community Collections
|
||||
|
||||
```
|
||||
# Install all plugins from iChristGit's collection
|
||||
"Install all plugins from iChristGit/OpenWebui-Tools"
|
||||
|
||||
# Install all tools from Haervwe's tools collection
|
||||
"Install all plugins from Haervwe/open-webui-tools"
|
||||
|
||||
# Install all plugins from Classic298's repository
|
||||
"Install all plugins from Classic298/open-webui-plugins"
|
||||
|
||||
# Install all functions from suurt8ll's collection
|
||||
"Install all plugins from suurt8ll/open_webui_functions"
|
||||
|
||||
# Install only specific types (e.g., only tools)
|
||||
"Install only tool plugins from iChristGit/OpenWebui-Tools"
|
||||
|
||||
# Exclude certain keywords while installing
|
||||
"Install all plugins from Haervwe/open-webui-tools, exclude_keywords=test,deprecated"
|
||||
```

### Supported Repositories

- `Fu-Jie/openwebui-extensions` - Default, official plugin collection
- `iChristGit/OpenWebui-Tools` - Comprehensive tool and plugin collection
- `Haervwe/open-webui-tools` - Specialized tools and utilities
- `Classic298/open-webui-plugins` - Various plugin implementations
- `suurt8ll/open_webui_functions` - Function-based plugins

## Default Repository

When no repository is specified, defaults to `Fu-Jie/openwebui-extensions`.

## Plugin Detection Rules

### Fu-Jie/openwebui-extensions (Strict)

For the default repository, plugins must have:

1. A `.py` file containing `class Tools:`, `class Filter:`, `class Pipe:`, or `class Action:`
2. A docstring with `title:`, `description:`, and **`openwebui_id:`** fields
3. A filename that does not end with `_cn`

### Other GitHub Repositories

For other repositories:

1. A `.py` file containing `class Tools:`, `class Filter:`, `class Pipe:`, or `class Action:`
2. A docstring with `title:` and `description:` fields
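
The detection rules above lend themselves to a small scanner. The sketch below is a hedged illustration: the `detect_plugin` helper, the regex, and the 2000-character docstring window are assumptions made for this example, not the tool's actual implementation.

```python
import re

# Any of the four OpenWebUI plugin classes marks an installable file.
CLASS_RE = re.compile(r"^class (Tools|Filter|Pipe|Action):", re.M)

def detect_plugin(filename: str, source: str, strict: bool = False):
    """Return the plugin type if `source` looks like an installable plugin.

    strict=True applies the Fu-Jie/openwebui-extensions rules: the docstring
    must also declare `openwebui_id:` and the filename must not end in `_cn`.
    """
    if strict and filename.removesuffix(".py").endswith("_cn"):
        return None
    m = CLASS_RE.search(source)
    if not m:
        return None
    required = ["title:", "description:"] + (["openwebui_id:"] if strict else [])
    head = source[:2000]  # metadata docstring is expected near the top
    if not all(field in head for field in required):
        return None
    return m.group(1).lower()  # "tools" / "filter" / "pipe" / "action"
```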

## Configuration (Valves)

| Parameter | Default | Description |
| --- | --- | --- |
| `SKIP_KEYWORDS` | `test,verify,example,template,mock` | Comma-separated keywords to skip |
| `TIMEOUT` | `20` | Request timeout in seconds |
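
The valve table above can be sketched as a small config object. In the real tool this would be a pydantic `BaseModel` (OpenWebUI's valve convention); a stdlib dataclass is used here only to keep the sketch dependency-free, and `should_skip` is an illustrative helper, not the tool's actual code.

```python
from dataclasses import dataclass

@dataclass
class Valves:
    # Names and defaults mirror the table above.
    SKIP_KEYWORDS: str = "test,verify,example,template,mock"
    TIMEOUT: int = 20

    def skip_list(self) -> list[str]:
        """Parse the comma-separated keyword valve into a clean list."""
        return [k.strip() for k in self.SKIP_KEYWORDS.split(",") if k.strip()]

def should_skip(plugin_name: str, valves: Valves) -> bool:
    """A plugin is skipped when any keyword appears in its lowercased name."""
    name = plugin_name.lower()
    return any(k in name for k in valves.skip_list())
```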

## Confirmation Timeout

User confirmation dialogs have a default timeout of **2 minutes (120 seconds)**, allowing sufficient time for users to:

- Read and review the plugin list
- Make installation decisions
- Handle network delays
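
The timeout behavior described above can be sketched with `asyncio.wait_for`, where no answer within the window is treated as a cancel. Here `event_call` stands in for OpenWebUI's `__event_call__`, and the confirmation payload shape is an assumption for illustration.

```python
import asyncio

CONFIRM_TIMEOUT_SECONDS = 120  # the 2-minute default described above

async def confirm_install(event_call, plugin_names: list[str]) -> bool:
    """Ask the user to confirm installation; a timeout counts as 'cancel'."""
    payload = {
        "type": "confirmation",  # assumed event shape, for illustration only
        "data": {
            "title": "Install plugins?",
            "message": "\n".join(f"- {name}" for name in plugin_names),
        },
    }
    try:
        result = await asyncio.wait_for(event_call(payload), CONFIRM_TIMEOUT_SECONDS)
    except asyncio.TimeoutError:
        return False  # no answer within 2 minutes -> do not install
    return bool(result)
```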

## Support

If this plugin has been useful, a star on [OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions) is a big motivation for me. Thank you for the support.
139
docs/plugins/tools/batch-install-plugins-tool.zh.md
Normal file
@@ -0,0 +1,139 @@

# Batch Install Plugins from GitHub - 从 GitHub 批量安装插件

**作者:** [Fu-Jie](https://github.com/Fu-Jie) | **版本:** 1.0.0 | **项目:** [OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions) | **许可:** MIT

---

一键从 GitHub 仓库批量安装插件到你的 OpenWebUI 实例。

## ✨ 主要特性

- **一键安装**: 一条命令安装所有插件
- **自动更新**: 自动更新之前已安装的插件
- **GitHub 支持**: 支持从任何 GitHub 仓库安装插件
- **多类型支持**: 支持 Pipe、Action、Filter 和 Tool 插件
- **确认机制**: 安装前显示插件列表,允许选择性安装
- **国际化**: 支持 11 种语言

## 工作流

```
                用户输入
                  │
                  ▼
┌─────────────────────────────────────┐
│        从 GitHub 发现插件           │
│      (获取文件树 + 解析 .py)        │
└─────────────────────────────────────┘
                  │
                  ▼
┌─────────────────────────────────────┐
│        按类型和关键词过滤           │
│      (tool/filter/pipe/action)      │
└─────────────────────────────────────┘
                  │
                  ▼
┌─────────────────────────────────────┐
│          显示确认对话框             │
│       (插件列表 + 排除提示)         │
└─────────────────────────────────────┘
                  │
                  ├── [取消] → 结束
                  │
                  ▼
┌─────────────────────────────────────┐
│          安装到 OpenWebUI           │
│       (更新或创建每个插件)          │
└─────────────────────────────────────┘
                  │
                  ▼
                 完成
```

## 🚀 使用方法

1. 打开 OpenWebUI,进入 **工作区 > 工具**
2. 从官方市场安装 **Batch Install Plugins from GitHub**
3. 为你的模型/聊天启用此工具
4. 让模型安装插件

## 使用示例

```
"安装所有插件"
"从 github.com/username/repo 安装所有插件"
"仅安装 pipe 插件"
"安装 action 和 filter 插件"
"安装所有插件,exclude_keywords=copilot"
```

## 热门插件仓库

这些是包含大量插件的热门仓库,你可以从中安装插件:

### 社区合集

```
# 从 iChristGit 的集合安装所有插件
"从 iChristGit/OpenWebui-Tools 安装所有插件"

# 从 Haervwe 的工具集合只安装工具
"从 Haervwe/open-webui-tools 安装所有插件"

# 从 Classic298 的仓库安装所有插件
"从 Classic298/open-webui-plugins 安装所有插件"

# 从 suurt8ll 的集合安装所有函数
"从 suurt8ll/open_webui_functions 安装所有插件"

# 仅安装特定类型的插件(比如只安装工具)
"从 iChristGit/OpenWebui-Tools 仅安装 tool 插件"

# 安装时排除特定关键词
"从 Haervwe/open-webui-tools 安装所有插件,exclude_keywords=test,deprecated"
```

### 支持的仓库

- `Fu-Jie/openwebui-extensions` - 默认的官方插件集合
- `iChristGit/OpenWebui-Tools` - 全面的工具和插件集合
- `Haervwe/open-webui-tools` - 专业的工具和实用程序
- `Classic298/open-webui-plugins` - 各种插件实现
- `suurt8ll/open_webui_functions` - 基于函数的插件

## 默认仓库

未指定仓库时,默认使用 `Fu-Jie/openwebui-extensions`。

## 插件检测规则

### Fu-Jie/openwebui-extensions(严格模式)

对于默认仓库,插件必须有:

1. 包含 `class Tools:`、`class Filter:`、`class Pipe:` 或 `class Action:` 的 `.py` 文件
2. 包含 `title:`、`description:` 和 **`openwebui_id:`** 字段的文档字符串
3. 文件名不能以 `_cn` 结尾

### 其他 GitHub 仓库

对于其他仓库:

1. 包含 `class Tools:`、`class Filter:`、`class Pipe:` 或 `class Action:` 的 `.py` 文件
2. 包含 `title:` 和 `description:` 字段的文档字符串

## 配置 (Valves)

| 参数 | 默认值 | 描述 |
| --- | --- | --- |
| `SKIP_KEYWORDS` | `test,verify,example,template,mock` | 要跳过的关键词,用逗号分隔 |
| `TIMEOUT` | `20` | 请求超时时间(秒) |

## 确认超时时间

用户确认对话框的默认超时时间为 **2 分钟(120 秒)**,为用户提供充足的时间来:

- 阅读和查看插件列表
- 做出安装决定
- 处理网络延迟

## 支持

如果这个插件对你有帮助,欢迎到 [OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions) 点个 Star,这将是我持续改进的动力,感谢支持。
@@ -4,5 +4,6 @@ OpenWebUI native Tool plugins that can be used across models.

## Available Tool Plugins

- [Batch Install Plugins from GitHub](batch-install-plugins-tool.md) (v1.0.0) - One-click batch installation of plugins from GitHub repositories, with confirmation and multi-language support.
- [OpenWebUI Skills Manager Tool](openwebui-skills-manager-tool.md) (v0.3.0) - Simple native skill management (`list/show/install/create/update/delete`).
- [Smart Mind Map Tool](smart-mind-map-tool.md) (v1.0.0) - Intelligently analyzes text content and proactively generates interactive mind maps to help users structure and visualize knowledge.

@@ -4,5 +4,6 @@

## 可用 Tool 插件

- [Batch Install Plugins from GitHub](batch-install-plugins-tool.zh.md) (v1.0.0) - 一键从 GitHub 仓库批量安装插件,支持确认和多语言。
- [OpenWebUI Skills 管理工具](openwebui-skills-manager-tool.zh.md) (v0.3.0) - 简化技能管理(`list/show/install/create/update/delete`)。
- [智能思维导图工具 (Smart Mind Map Tool)](smart-mind-map-tool.zh.md) (v1.0.0) - 智能分析文本内容并主动生成交互式思维导图,帮助用户结构化与可视化知识。

@@ -340,5 +340,45 @@
      "total_saves": 274,
      "followers": 220,
      "points": 271
    },
    {
      "date": "2026-03-12",
      "total_posts": 27,
      "total_downloads": 8765,
      "total_views": 92460,
      "total_upvotes": 300,
      "total_saves": 431,
      "followers": 344,
      "points": 351,
      "contributions": 66,
      "posts": {
        "turn_any_text_into_beautiful_mind_maps_3094c59a": 1730,
        "smart_infographic_ad6f0c7f": 1330,
        "markdown_normalizer_baaa8732": 807,
        "export_to_word_enhanced_formatting_fca6a315": 767,
        "async_context_compression_b1655bc8": 760,
        "ai_task_instruction_generator_9bab8b37": 666,
        "export_mulit_table_to_excel_244b8f9d": 604,
        "openwebui_skills_manager_tool_b4bce8e4": 434,
        "github_copilot_official_sdk_pipe_ce96f7b4": 399,
        "flash_card_65a2ea8f": 325,
        "deep_dive_c0b846e4": 224,
        "导出为_word_支持公式流程图表格和代码块_8a6306c0": 171,
        "folder_memory_auto_evolving_project_context_4a9875b2": 125,
        "smart_mind_map_tool_auto_generate_interactive_know_d25f4e3d": 100,
        "github_copilot_sdk_files_filter_403a62ee": 93,
        "智能信息图_e04a48ff": 71,
        "智能生成交互式思维导图帮助用户可视化知识_8d4b097b": 53,
        "异步上下文压缩_5c0617cb": 40,
        "闪记卡生成插件_4a31eac3": 34,
        "精读_99830b0f": 32,
        "an_unconventional_use_of_open_terminal_35498f8f": 0,
        "github_copilot_sdk_pipe_v090_copilot_sdk_skills_co_99a42452": 0,
        "github_copilot_sdk_pipe_v070_native_tool_ui_zero_c_4af38131": 0,
        "github_copilot_sdk_for_openwebui_elevate_your_ai_t_a140f293": 0,
        "open_webui_prompt_plus_ai_powered_prompt_manager_s_15fa060e": 0,
        "review_of_claude_haiku_45_41b0db39": 0,
        "debug_open_webui_plugins_in_your_browser_81bf7960": 0
      }
    }
]
@@ -0,0 +1,123 @@

# ✅ Async Context Compression Deployment Complete (2026-03-12)

## 🎯 Deployment Summary

**Date**: 2026-03-12
**Version**: 1.4.1
**Status**: ✅ Deployed successfully
**Target**: OpenWebUI localhost:3003

---

## 📌 New Features

### Frontend Console Debug Information

Six structured-data checkpoints were added to `async_context_compression.py`, making the plugin's internal data flow visible in the browser Console.

#### New Method

```python
async def _emit_struct_log(self, __event_call__, title: str, data: Any):
    """
    Emit structured data to the browser console.
    - Arrays  → console.table()            (tabular view)
    - Objects → console.dir(d, {depth: 3}) (tree view)
    """
```
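
A minimal sketch of what such a method could look like as a free function. The `{"type": "execute", ...}` event payload below is an assumption made for illustration; the actual plugin may use a different OpenWebUI event shape to reach the browser console.

```python
import json
from typing import Any

async def emit_struct_log(event_call, title: str, data: Any) -> None:
    """Send `data` to the browser console via an event callback.

    Lists render with console.table, everything else with console.dir.
    `event_call` stands in for OpenWebUI's `__event_call__`; the
    "execute" payload shape is an assumption for this sketch.
    """
    payload = json.dumps(data, ensure_ascii=False, default=str)
    render = "console.table" if isinstance(data, list) else "console.dir"
    code = f'console.log("📋 [Compression] {title}"); {render}({payload});'
    await event_call({"type": "execute", "data": {"code": code}})
```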

#### The 6 Checkpoints

| # | Checkpoint | Phase | Shows |
|---|-------|------|--------|
| 1️⃣ | `__user__ structure` | Inlet entry | id, name, language, resolved_language |
| 2️⃣ | `__metadata__ structure` | Inlet entry | chat_id, message_id, function_calling |
| 3️⃣ | `body top-level structure` | Inlet entry | model, message_count, metadata keys |
| 4️⃣ | `summary_record loaded from DB` | Inlet, after DB load | compressed_count, summary_preview, timestamps |
| 5️⃣ | `final_messages shape → LLM` | Inlet, before return | Table: role, content_length, tools for each message |
| 6️⃣ | `middle_messages shape` | During async summarization | Table: the message slice to be summarized |

---

## 🚀 Quick Start (5 minutes)

### Step 1: Enable the Filter
```
OpenWebUI → Settings → Filters → enable "Async Context Compression"
```

### Step 2: Enable Debugging
```
In the Filter configuration → show_debug_log: ON → Save
```

### Step 3: Open the Console
```
F12 (Windows/Linux) or Cmd+Option+I (Mac) → Console tab
```

### Step 4: Send Messages
```
Send 10+ messages and watch for logs starting with 📋 [Compression]
```

---

## 📊 Code Changes

```
New method:      _emit_struct_log()   [42 lines]
New log points:  6
New lines added: ~150
Backward compat: 100% (guarded by show_debug_log)
```

---

## 💡 Log Examples

### Table Logs (Arrays)
```
📋 [Compression] Inlet: final_messages shape → LLM (7 msgs)
┌─────┬─────────────┬──────────────┬─────────────┐
│index│role         │content_length│has_tool_... │
├─────┼─────────────┼──────────────┼─────────────┤
│  0  │"system"     │150           │false        │
│  1  │"user"       │200           │false        │
│  2  │"assistant"  │500           │true         │
└─────┴─────────────┴──────────────┴─────────────┘
```

### Tree Logs (Objects)
```
📋 [Compression] Inlet: __metadata__ structure
├─ chat_id: "chat-abc123..."
├─ message_id: "msg-xyz789"
├─ function_calling: "native"
└─ all_keys: ["chat_id", "message_id", ...]
```

---

## ✅ Verification Checklist

- [x] Code changes saved
- [x] Deployment script ran successfully
- [x] OpenWebUI running normally
- [x] 6 new log points added
- [x] Hang-prevention guard in place
- [x] Full backward compatibility

---

## 📖 Documentation

- [QUICK_START.md](../../scripts/QUICK_START.md) - Quick reference
- [README_CN.md](./README_CN.md) - Plugin documentation
- [DEPLOYMENT_REFERENCE.md](./DEPLOYMENT_REFERENCE.md) - Deployment tooling

---

**Deployed**: 2026-03-12
**Maintainer**: Fu-Jie
**Project**: [openwebui-extensions](https://github.com/Fu-Jie/openwebui-extensions)

@@ -1,13 +1,19 @@
# Async Context Compression Filter

**Author:** [Fu-Jie](https://github.com/Fu-Jie/openwebui-extensions) | **Version:** 1.4.1 | **Project:** [OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions) | **License:** MIT
**Author:** [Fu-Jie](https://github.com/Fu-Jie/openwebui-extensions) | **Version:** 1.5.0 | **Project:** [OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions) | **License:** MIT

This filter reduces token consumption in long conversations through intelligent summarization and message compression while keeping conversations coherent.

## What's new in 1.4.1
## What's new in 1.5.0

- **Reverse-Unfolding Mechanism**: Accurately reconstructs the expanded native tool-calling sequence during the outlet phase to permanently fix coordinate drift and missing summaries for long tool-based conversations.
- **Safer Tool Trimming**: Refactored `enable_tool_output_trimming` to strictly use atomic block groups for safe trimming, completely preventing JSON payload corruption.
- **External Chat Reference Summaries**: Added support for referenced chat context blocks that can reuse cached summaries, inject small referenced chats directly, or generate summaries for larger referenced chats before injection.
- **Fast Multilingual Token Estimation**: Added a new mixed-script token estimation pipeline so inlet/outlet preflight checks can avoid unnecessary exact token counts while staying much closer to real usage.
- **Stronger Working-Memory Prompt**: Refined the XML summary prompt to better preserve actionable context across general chat, coding tasks, and tool-heavy conversations.
- **Clearer Frontend Debug Logs**: Reworked browser-console logging into grouped structural snapshots that are easier to scan during debugging.
- **Safer Tool Trimming Defaults**: Enabled native tool-output trimming by default and exposed a dedicated `tool_trim_threshold_chars` valve with a 600-character default.
- **Safer Referenced-Chat Fallbacks**: If generating a referenced chat summary fails, the new reference-summary path now falls back to direct contextual injection instead of failing the whole chat.
- **Correct Summary Budgeting**: `summary_model_max_context` now controls summary-input fitting, while `max_summary_tokens` remains an output-length cap.
- **More Visible Summary Failures**: Important background summary failures now surface in the browser console (`F12`) and as a status hint even when `show_debug_log` is off.

---

@@ -19,15 +25,85 @@ This filter reduces token consumption in long conversations through intelligent
- ✅ Persistent storage via Open WebUI's shared database connection (PostgreSQL, SQLite, etc.).
- ✅ Flexible retention policy to keep the first and last N messages.
- ✅ Smart injection of historical summaries back into the context.
- ✅ External chat reference summarization with cached-summary reuse, direct injection for small chats, and generated summaries for larger chats.
- ✅ Structure-aware trimming that preserves document structure (headers, intro, conclusion).
- ✅ Native tool output trimming for cleaner context when using function calling.
- ✅ Real-time context usage monitoring with warning notifications (>90%).
- ✅ Detailed token logging for precise debugging and optimization.
- ✅ Fast multilingual token estimation plus exact token fallback for precise debugging and optimization.
- ✅ **Smart Model Matching**: Automatically inherits configuration from base models for custom presets.
- ⚠ **Multimodal Support**: Images are preserved but their tokens are **NOT** calculated. Please adjust thresholds accordingly.
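
The "fast multilingual token estimation" above can be sketched as a mixed-script heuristic: CJK characters cost roughly one token each, while other text averages a few characters per token. The character ranges and ratios below are illustrative assumptions, not the plugin's actual constants.

```python
import re

# CJK ideographs, Japanese kana, and Korean hangul: roughly 1 token per char.
CJK_RE = re.compile(r"[\u4e00-\u9fff\u3040-\u30ff\uac00-\ud7af]")

def estimate_tokens(text: str) -> int:
    """Cheap mixed-script estimate: ~1 token per CJK character,
    ~1 token per 4 characters of everything else (assumed ratios)."""
    cjk = len(CJK_RE.findall(text))
    other = len(text) - cjk
    return cjk + (other + 3) // 4  # round the non-CJK share up
```

This kind of estimate lets the inlet/outlet preflight skip an exact tokenizer pass for most requests, falling back to exact counting only near the thresholds.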

---

## What This Fixes

- **Problem 1: A referenced chat could break the current request.**
  Before, if the filter needed to summarize a referenced chat and that LLM call failed, the current chat could fail with it. Now it degrades gracefully and injects direct context instead.
- **Problem 2: Some referenced chats were being cut too aggressively.**
  Before, the output limit (`max_summary_tokens`) could be treated like the input window, which made large referenced chats shrink earlier than necessary. Now input fitting uses the summary model's real context window (`summary_model_max_context` or model/global fallback).
- **Problem 3: Some background summary failures were too easy to miss.**
  Before, a failure during background summary preparation could disappear quietly when frontend debug logging was off. Now important failures are forced to the browser console and also shown through a user-facing status message.

---

## Workflow Overview

This filter operates in two phases:

1. `inlet`: injects stored summaries, processes external chat references, and trims context when required before the request is sent to the model.
2. `outlet`: runs asynchronously after the response is complete, decides whether a new summary should be generated, and persists it when appropriate.

```mermaid
flowchart TD
    A[Request enters inlet] --> B[Normalize tool IDs and optionally trim large tool outputs]
    B --> C{Referenced chats attached?}
    C -- No --> D[Load current chat summary if available]
    C -- Yes --> E[Inspect each referenced chat]

    E --> F{Existing cached summary?}
    F -- Yes --> G[Reuse cached summary]
    F -- No --> H{Fits direct budget?}
    H -- Yes --> I[Inject full referenced chat text]
    H -- No --> J[Prepare referenced-chat summary input]

    J --> K{Referenced-chat summary call succeeds?}
    K -- Yes --> L[Inject generated referenced summary]
    K -- No --> M[Fallback to direct contextual injection]

    G --> D
    I --> D
    L --> D
    M --> D

    D --> N[Build current-chat Head + Summary + Tail]
    N --> O{Over max_context_tokens?}
    O -- Yes --> P[Trim oldest atomic groups]
    O -- No --> Q[Send final context to the model]
    P --> Q

    Q --> R[Model returns the reply]
    R --> S[Outlet rebuilds the full history]
    S --> T{Reached compression threshold?}
    T -- No --> U[Finish]
    T -- Yes --> V[Fit summary input to the summary model context]

    V --> W{Background summary call succeeds?}
    W -- Yes --> X[Save new chat summary and update status]
    W -- No --> Y[Force browser-console error and show status hint]
```

### Key Notes

- `inlet` only injects and trims context. It does not generate the main chat summary.
- `outlet` performs summary generation asynchronously and does not block the current reply.
- External chat references may come from an existing persisted summary, a small chat's full text, or a generated/truncated reference summary.
- If a referenced-chat summary call fails, the filter falls back to direct context injection instead of failing the whole request.
- `summary_model_max_context` controls summary-input fitting. `max_summary_tokens` only controls how long the generated summary may be.
- Important background summary failures are surfaced to the browser console (`F12`) and the chat status area.
- External reference messages are protected during trimming so they are not discarded first.
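
The input-window vs. output-cap distinction in these notes can be sketched as follows. The valve names come from this document, while `fit_summary_input` and its oldest-first dropping strategy are an illustrative assumption, not the filter's actual code.

```python
def fit_summary_input(messages, estimate_tokens, *,
                      summary_model_max_context: int,
                      fallback_context: int,
                      max_summary_tokens: int):
    """Drop the oldest messages until the summary *input* fits the summary
    model's context window. `max_summary_tokens` caps only the *output*,
    so it is reserved out of the window rather than used as the window."""
    window = summary_model_max_context or fallback_context  # 0 -> fallback
    budget = window - max_summary_tokens  # leave room for the generated summary
    kept = list(messages)
    while kept and sum(estimate_tokens(m["content"]) for m in kept) > budget:
        kept.pop(0)  # discard the oldest message first
    return kept
```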

---

## Installation & Configuration

### 1) Database (automatic)
@@ -51,11 +127,12 @@ This filter reduces token consumption in long conversations through intelligent
| `keep_first` | `1` | Always keep the first N messages (protects system prompts). |
| `keep_last` | `6` | Always keep the last N messages to preserve recent context. |
| `summary_model` | `None` | Model for summaries. Strongly recommended to set a fast, economical model (e.g., `gemini-2.5-flash`, `deepseek-v3`). Falls back to the current chat model when empty. |
| `summary_model_max_context` | `0` | Max context tokens for the summary model. If 0, falls back to `model_thresholds` or global `max_context_tokens`. |
| `max_summary_tokens` | `16384` | Maximum tokens for the generated summary. |
| `summary_temperature` | `0.3` | Randomness for summary generation. Lower is more deterministic. |
| `summary_model_max_context` | `0` | Input context window used to fit summary requests. If `0`, falls back to `model_thresholds` or global `max_context_tokens`. |
| `max_summary_tokens` | `16384` | Maximum output length for the generated summary. This is not the summary-input context limit. |
| `summary_temperature` | `0.1` | Randomness for summary generation. Lower is more deterministic. |
| `model_thresholds` | `{}` | Per-model overrides for `compression_threshold_tokens` and `max_context_tokens` (useful for mixed models). |
| `enable_tool_output_trimming` | `false` | When enabled and `function_calling: "native"` is active, trims verbose tool outputs to extract only the final answer. |
| `enable_tool_output_trimming` | `true` | When enabled for `function_calling: "native"`, trims oversized native tool outputs while keeping the tool-call chain intact. |
| `tool_trim_threshold_chars` | `600` | Trim native tool output blocks once their total content length reaches this threshold. |
| `debug_mode` | `false` | Log verbose debug info. Set to `false` in production. |
| `show_debug_log` | `false` | Print debug logs to browser console (F12). Useful for frontend debugging. |
| `show_token_usage_status` | `true` | Show token usage status notification in the chat interface. |
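
The `tool_trim_threshold_chars` behavior can be sketched per message as below. This is illustrative only: the real filter trims whole atomic block groups rather than single messages, and the placeholder text is an assumption.

```python
TOOL_TRIM_THRESHOLD_CHARS = 600  # default from the valve table above

def trim_tool_outputs(messages, threshold=TOOL_TRIM_THRESHOLD_CHARS):
    """Replace oversized tool outputs with a short placeholder while keeping
    role and tool_call_id intact so the tool-call chain stays valid."""
    trimmed = []
    for msg in messages:
        content = msg.get("content") or ""
        if msg.get("role") == "tool" and len(content) >= threshold:
            # Copy, never mutate: the original message list stays untouched.
            msg = {**msg, "content": content[:200] + "\n…[tool output trimmed]"}
        trimmed.append(msg)
    return trimmed
```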

@@ -71,8 +148,12 @@ If this plugin has been useful, a star on [OpenWebUI Extensions](https://github.

- **Initial system prompt is lost**: Keep `keep_first` greater than 0 to protect the initial message.
- **Compression effect is weak**: Raise `compression_threshold_tokens` or lower `keep_first` / `keep_last` to allow more aggressive compression.
- **A referenced chat summary fails**: The current request should continue with a direct-context fallback. Check the browser console (`F12`) if you need the upstream failure details.
- **A background summary silently seems to do nothing**: Important failures now surface in chat status and the browser console (`F12`).
- **Submit an Issue**: If you encounter any problems, please submit an issue on GitHub: [OpenWebUI Extensions Issues](https://github.com/Fu-Jie/openwebui-extensions/issues)

## Changelog

See [`v1.5.0` Release Notes](./v1.5.0.md) for the release-specific summary.

See the full history on GitHub: [OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions)

@@ -1,15 +1,21 @@
# 异步上下文压缩过滤器

**作者:** [Fu-Jie](https://github.com/Fu-Jie/openwebui-extensions) | **版本:** 1.4.1 | **项目:** [OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions) | **许可证:** MIT
**作者:** [Fu-Jie](https://github.com/Fu-Jie/openwebui-extensions) | **版本:** 1.5.0 | **项目:** [OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions) | **许可证:** MIT

> **重要提示**:为了确保所有过滤器的可维护性和易用性,每个过滤器都应附带清晰、完整的文档,以确保其功能、配置和使用方法得到充分说明。

本过滤器通过智能摘要和消息压缩技术,在保持对话连贯性的同时,显著降低长对话的 Token 消耗。

## 1.4.1 版本更新
## 1.5.0 版本更新

- **逆向展开机制**: 引入 `_unfold_messages` 机制以在 `outlet` 阶段精确对齐坐标系,彻底解决了由于前端视图折叠导致长轮次工具调用对话出现进度漂移或跳过生成摘要的问题。
- **更安全的工具内容裁剪**: 重构了 `enable_tool_output_trimming`,现在严格使用原子级分组进行安全的原生工具内容裁剪,替代了激进的正则表达式匹配,防止 JSON 载荷损坏。
- **外部聊天引用摘要**: 新增对引用聊天上下文的摘要支持。现在可以复用缓存摘要、直接注入较小引用聊天,或先为较大的引用聊天生成摘要再注入。
- **快速多语言 Token 预估**: 新增混合脚本 Token 预估链路,使 inlet / outlet 的预检可以减少不必要的精确计数,同时比旧的粗略字符比值更接近真实用量。
- **更稳健的工作记忆提示词**: 重写 XML 摘要提示词,增强普通聊天、编码任务和连续工具调用场景下的关键信息保留能力。
- **更清晰的前端调试日志**: 浏览器控制台日志改为分组化、结构化展示,排查上下文压缩行为更直观。
- **更安全的工具裁剪默认值**: 原生工具输出裁剪默认开启,并新增 `tool_trim_threshold_chars` 配置项,默认阈值为 600 字符。
- **更稳妥的引用聊天回退**: 当新的引用聊天摘要路径生成失败时,不再拖垮当前请求,而是自动回退为直接注入上下文。
- **更准确的摘要预算**: `summary_model_max_context` 现在只负责摘要输入窗口,`max_summary_tokens` 继续只负责摘要输出长度。
- **更容易发现摘要失败**: 重要的后台摘要失败现在会强制显示到浏览器控制台 (`F12`),并同步给出状态提示。

---

@@ -21,14 +27,84 @@
- ✅ **持久化存储**: 复用 Open WebUI 共享数据库连接,自动支持 PostgreSQL/SQLite 等。
- ✅ **灵活保留策略**: 可配置保留对话头部和尾部消息,确保关键信息连贯。
- ✅ **智能注入**: 将历史摘要智能注入到新上下文中。
- ✅ **外部聊天引用摘要**: 支持复用缓存摘要、小聊天直接注入,以及大聊天先摘要后注入。
- ✅ **结构感知裁剪**: 智能折叠过长消息,保留文档骨架(标题、首尾)。
- ✅ **原生工具输出裁剪**: 支持裁剪冗长的工具调用输出。
- ✅ **实时监控**: 实时监控上下文使用情况,超过 90% 发出警告。
- ✅ **详细日志**: 提供精确的 Token 统计日志,便于调试。
- ✅ **快速预估 + 精确回退**: 提供更快的多语言 Token 预估,并在必要时回退到精确统计,便于调试。
- ✅ **智能模型匹配**: 自定义模型自动继承基础模型的阈值配置。
- ⚠ **多模态支持**: 图片内容会被保留,但其 Token **不参与计算**。请相应调整阈值。

详细的工作原理和流程请参考 [工作流程指南](https://github.com/Fu-Jie/openwebui-extensions/blob/main/plugins/filters/async-context-compression/WORKFLOW_GUIDE_CN.md)。
详细的工作原理和更长说明仍可参考 [工作流程指南](https://github.com/Fu-Jie/openwebui-extensions/blob/main/plugins/filters/async-context-compression/WORKFLOW_GUIDE_CN.md)。

---

## 这次解决了什么问题(通俗版)

- **问题 1:引用别的聊天时,摘要失败可能把当前对话一起弄挂。**
  以前如果过滤器需要先帮被引用聊天做摘要,而这一步的 LLM 调用失败了,当前请求也可能直接失败。现在改成了"能摘要就摘要,失败就退回直接塞上下文",当前对话不会被一起拖死。
- **问题 2:有些被引用聊天被截得太早,信息丢得太多。**
  以前有一段逻辑把 `max_summary_tokens` 这种"输出长度限制"误当成了"输入上下文窗口",结果大一点的引用聊天会被过早截断。现在改成按摘要模型真实的输入窗口来算,能保留更多有用内容。
- **问题 3:后台摘要失败时,用户不容易知道发生了什么。**
  以前在 `show_debug_log=false` 时,有些后台失败只会留在内部日志里。现在关键失败会强制打到浏览器控制台,并在聊天状态里提醒去看 `F12`。

---

## 工作流总览

该过滤器分为两个阶段:

1. `inlet`:在请求发送给模型前执行,负责注入已有摘要、处理外部聊天引用、并在必要时裁剪上下文。
2. `outlet`:在模型回复完成后异步执行,负责判断是否需要生成新摘要,并在合适时写入数据库。

```mermaid
flowchart TD
    A[请求进入 inlet] --> B[规范化工具 ID 并按需裁剪超长工具输出]
    B --> C{是否附带引用聊天?}
    C -- 否 --> D[如果有当前聊天摘要就先加载]
    C -- 是 --> E[逐个检查被引用聊天]

    E --> F{已有缓存摘要?}
    F -- 是 --> G[直接复用缓存摘要]
    F -- 否 --> H{能直接放进当前预算?}
    H -- 是 --> I[直接注入完整引用聊天文本]
    H -- 否 --> J[准备引用聊天的摘要输入]

    J --> K{引用聊天摘要调用成功?}
    K -- 是 --> L[注入生成后的引用摘要]
    K -- 否 --> M[回退为直接注入上下文]

    G --> D
    I --> D
    L --> D
    M --> D

    D --> N[为当前聊天构造 Head + Summary + Tail]
    N --> O{是否超过 max_context_tokens?}
    O -- 是 --> P[从最旧 atomic groups 开始裁剪]
    O -- 否 --> Q[把最终上下文发给模型]
    P --> Q

    Q --> R[模型返回当前回复]
    R --> S[Outlet 重建完整历史]
    S --> T{达到压缩阈值了吗?}
    T -- 否 --> U[结束]
    T -- 是 --> V[把摘要输入压到摘要模型可接受的上下文窗口]

    V --> W{后台摘要调用成功?}
    W -- 是 --> X[保存新摘要并更新状态]
    W -- 否 --> Y[强制输出浏览器控制台错误并提示用户查看]
```

### 关键说明

- `inlet` 只负责注入和裁剪上下文,不负责生成当前聊天的主摘要。
- `outlet` 异步生成摘要,不会阻塞当前回复。
- 外部聊天引用可以来自已有持久化摘要、小聊天的完整文本,或动态生成/截断后的引用摘要。
- 如果引用聊天摘要失败,会自动回退为直接注入上下文,而不是让当前请求失败。
- `summary_model_max_context` 控制摘要输入窗口;`max_summary_tokens` 只控制生成摘要的输出长度。
- 重要的后台摘要失败会显示到浏览器控制台 (`F12`) 和聊天状态提示里。
- 外部引用消息在裁剪阶段会被特殊保护,避免被最先删除。

---

@@ -64,8 +140,8 @@
| 参数 | 默认值 | 描述 |
| :-------------------- | :------ | :------------------------------------------------------------------------------------------------------------------------------------------ |
| `summary_model` | `None` | 用于生成摘要的模型 ID。**强烈建议**配置快速、经济、上下文窗口大的模型(如 `gemini-2.5-flash`、`deepseek-v3`)。留空则尝试复用当前对话模型。 |
| `summary_model_max_context` | `0` | 摘要模型的最大上下文 Token 数。如果为 0,则回退到 `model_thresholds` 或全局 `max_context_tokens`。 |
| `max_summary_tokens` | `16384` | 生成摘要时允许的最大 Token 数。 |
| `summary_model_max_context` | `0` | 摘要请求可使用的输入上下文窗口。如果为 0,则回退到 `model_thresholds` 或全局 `max_context_tokens`。 |
| `max_summary_tokens` | `16384` | 生成摘要时允许的最大输出 Token 数。它不是摘要输入窗口上限。 |
| `summary_temperature` | `0.1` | 控制摘要生成的随机性,较低的值结果更稳定。 |

### 高级配置
@@ -93,7 +169,8 @@

| 参数 | 默认值 | 描述 |
| :----------------------------- | :------- | :-------------------------------------------------------------------------------------------------------------------------------------- |
| `enable_tool_output_trimming` | `false` | 启用时,若 `function_calling: "native"` 激活,将裁剪冗长的工具输出以仅提取最终答案。 |
| `enable_tool_output_trimming` | `true` | 启用后(仅在 `function_calling: "native"` 下生效)会裁剪过大的本机工具输出,保留工具调用链结构并以简短占位替换冗长内容。 |
| `tool_trim_threshold_chars` | `600` | 当本机工具输出累计字符数达到该值时触发裁剪,适用于包含长文本或表格的工具结果。 |
| `debug_mode` | `false` | 是否在 Open WebUI 的控制台日志中打印详细的调试信息。生产环境默认且建议设为 `false`。 |
| `show_debug_log` | `false` | 是否在浏览器控制台 (F12) 打印调试日志。便于前端调试。 |
| `show_token_usage_status` | `true` | 是否在对话结束时显示 Token 使用情况的状态通知。 |
@@ -109,8 +186,12 @@

- **初始系统提示丢失**:将 `keep_first` 设置为大于 0。
- **压缩效果不明显**:提高 `compression_threshold_tokens`,或降低 `keep_first` / `keep_last` 以增强压缩力度。
- **引用聊天摘要失败**:当前请求现在应该会继续执行,并回退为直接注入上下文。如果要看上游失败原因,请打开浏览器控制台 (`F12`)。
- **后台摘要看起来"没反应"**:重要失败现在会同时出现在状态提示和浏览器控制台 (`F12`) 中。
- **提交 Issue**: 如果遇到任何问题,请在 GitHub 上提交 Issue:[OpenWebUI Extensions Issues](https://github.com/Fu-Jie/openwebui-extensions/issues)

## 更新日志

请查看 [`v1.5.0` 版本发布说明](./v1.5.0_CN.md) 获取本次版本的独立发布摘要。

完整历史请查看 GitHub 项目: [OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions)

@@ -0,0 +1,315 @@

# 📋 Response Structure Inspection Guide

## 🎯 New Checkpoints

**3 new response checkpoints** were added to the `_call_summary_llm()` method for inspecting the full LLM-call response flow in the frontend console.

### Checkpoint Locations

| # | Checkpoint | Location | Shows |
|---|-----------|------|--------|
| 1️⃣ | **LLM Response structure** | After `generate_chat_completion()` returns | Type, keys, and structure of the raw response object |
| 2️⃣ | **LLM Summary extracted & cleaned** | After the summary is extracted and cleaned | Summary length, word count, format, emptiness |
| 3️⃣ | **Summary saved to database** | Verified after the DB save | Whether the record persisted correctly with consistent fields |

---

## 📊 Checkpoint Details

### 1️⃣ LLM Response structure

**When**: right after `generate_chat_completion()` returns, before processing
**Purpose**: verify the raw response data structure

```
📋 [Compression] LLM Response structure (raw from generate_chat_completion)
├─ type: "dict" / "Response" / "JSONResponse"
├─ has_body: true/false (indicates a Response object)
├─ has_status_code: true/false
├─ is_dict: true/false
├─ keys: ["choices", "usage", "model", ...] (if a dict)
├─ first_choice_keys: ["message", "finish_reason", ...]
├─ message_keys: ["role", "content"]
└─ content_length: 1234 (summary text length)
```

**Key checks**:
- ✅ `type` — should be `dict` or `JSONResponse`
- ✅ `is_dict` — should end up `true` (after processing)
- ✅ `keys` — should include `choices` and `usage`
- ✅ `first_choice_keys` — should include `message`
- ✅ `message_keys` — should include `role` and `content`
- ✅ `content_length` — the summary must not be empty (> 0)
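
A hedged sketch of the kind of normalization these checks imply: accept either a plain dict or a Response-like object and pull out the summary text. The field path follows the OpenAI-style `choices[0].message.content` shape shown above; the Response-with-`.body` branch is an assumption for illustration.

```python
import json

def extract_summary(response) -> str:
    """Normalize a chat-completion response (a dict, or a Response-like
    object carrying JSON bytes in .body) and return the cleaned summary."""
    if not isinstance(response, dict):
        body = getattr(response, "body", b"{}")  # Response-like branch: an assumption
        response = json.loads(body)
    content = response.get("choices", [{}])[0].get("message", {}).get("content", "")
    return (content or "").strip()
```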

---
|
||||
### 2️⃣ LLM Summary extracted & cleaned
|
||||
|
||||
**显示时机**: 从 response 中提取并 strip() 后
|
||||
**用途**: 验证提取的摘要内容质量
|
||||
|
||||
```
|
||||
📋 [Compression] LLM Summary extracted & cleaned
|
||||
├─ type: "str"
|
||||
├─ length_chars: 1234
|
||||
├─ length_words: 156
|
||||
├─ first_100_chars: "用户提问关于......"
|
||||
├─ has_newlines: true
|
||||
├─ newline_count: 3
|
||||
└─ is_empty: false
|
||||
```
|
||||
|
||||
**关键验证**:
|
||||
- ✅ `type` — 应该始终是 `str`
|
||||
- ✅ `is_empty` — 应该是 `false`(不能为空)
|
||||
- ✅ `length_chars` — 通常 100-2000 字符(取决于配置)
|
||||
- ✅ `newline_count` — 多行摘要通常有几个换行符
|
||||
- ✅ `first_100_chars` — 可视化开头内容,检查是否正确
|
||||
|
||||
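The checkpoint-2 fields map directly onto plain string operations. A hedged sketch; the helper name `inspect_summary` is hypothetical:

```python
# Hypothetical helper: compute the checkpoint-2 inspection fields from the
# extracted summary string. Mirrors the log fields shown above.
def inspect_summary(summary: str) -> dict:
    cleaned = summary.strip()
    return {
        "type": type(cleaned).__name__,
        "length_chars": len(cleaned),
        "length_words": len(cleaned.split()),
        "first_100_chars": cleaned[:100],
        "has_newlines": "\n" in cleaned,
        "newline_count": cleaned.count("\n"),
        "is_empty": not cleaned,
    }
```

Note that a whitespace-only LLM reply still yields `is_empty: true` after stripping, which is exactly the failure case checkpoint 2 is meant to surface.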
---

### 3️⃣ Summary saved to database

**When it fires**: after saving to the DB, with the record reloaded for verification
**Purpose**: confirm the database write succeeded and the data is consistent

```
📋 [Compression] Summary saved to database (verification)
├─ db_id: 42
├─ db_chat_id: "chat-abc123..."
├─ db_compressed_message_count: 10
├─ db_summary_length_chars: 1234
├─ db_summary_preview_100: "The user asked about..."
├─ db_created_at: "2024-03-12 15:30:45.123456+00:00"
├─ db_updated_at: "2024-03-12 15:35:20.654321+00:00"
├─ matches_input_chat_id: true
└─ matches_input_compressed_count: true
```

**Key checks** ⭐ most important:

- ✅ `matches_input_chat_id`: **must be `true`**
- ✅ `matches_input_compressed_count`: **must be `true`**
- ✅ `db_summary_length_chars`: matches the length measured after extraction
- ✅ `db_updated_at`: should be the latest timestamp
- ✅ `db_id`: should be a valid database ID
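The checkpoint-3 `matches_*` flags reduce to comparing the reloaded row against the inputs that were written. An illustrative sketch; the row schema here is an assumption, not the plugin's real table definition:

```python
# Illustrative consistency check for checkpoint 3: compare the reloaded DB row
# against the inputs that were just saved.
def verify_saved_summary(row: dict, chat_id: str, compressed_count: int) -> dict:
    return {
        "matches_input_chat_id": row["chat_id"] == chat_id,
        "matches_input_compressed_count": row["compressed_message_count"] == compressed_count,
        "db_summary_length_chars": len(row["summary"]),
    }
```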
---

## 🔍 How to View It in the Frontend

### Step 1: Enable debug mode

In OpenWebUI:
```
Settings → Filters → Async Context Compression
  ↓
Find the valve "show_debug_log"
  ↓
Enable it + Save
```

### Step 2: Open the browser console

- **Windows/Linux**: F12 → Console
- **Mac**: Cmd + Option + I → Console

### Step 3: Trigger summary generation

Send enough messages for the filter to trigger compression:
```
1. Send 15+ messages
2. Wait for the background summary task to start
3. Watch for 📋 logs in the Console
```

### Step 4: Observe the full flow

```
[1] 📋 LLM Response structure (raw)
    ↓ (shows the raw response type and structure)
[2] 📋 LLM Summary extracted & cleaned
    ↓ (shows the extracted text details)
[3] 📋 Summary saved to database (verification)
    ↓ (shows the database save result)
```
---

## 📈 End-to-End Flow Verification

### Healthy flow example ✅

```
1️⃣ Response structure:
   - type: "dict"
   - is_dict: true
   - has "choices": true
   - has "usage": true

2️⃣ Summary extracted:
   - is_empty: false
   - length_chars: 1500
   - length_words: 200

3️⃣ DB verification:
   - matches_input_chat_id: true ✅
   - matches_input_compressed_count: true ✅
   - db_id: 42 (valid)
```

### Problematic flow example ❌

```
1️⃣ Response structure:
   - type: "Response" (needs handling)
   - has_body: true
   - (the body must be parsed)

2️⃣ Summary extracted:
   - is_empty: true ❌ (the summary is empty!)
   - length_chars: 0

3️⃣ DB verification:
   - matches_input_chat_id: false ❌ (chat_id mismatch!)
   - matches_input_compressed_count: false ❌ (count mismatch!)
```
---

## 🛠️ Debugging Tips

### Quickly filter the logs

Type into the Console filter box:
```
📋 (all compression logs)
LLM Response (response-related)
Summary extracted (summary extraction)
saved to database (save verification)
```

### Expand tables/objects for details

1. **Object logs** (console.dir)
   - Click the ▶ symbol on the left to expand
   - Drill into nested fields level by level

2. **Table logs** (console.table)
   - Click ▶ at the top to expand
   - View the full columns

### Compare multiple logs

```javascript
// Compare manually in the Console
Checkpoint 1: type = "dict", is_dict = true
Checkpoint 2: is_empty = false, length_chars = 1234
Checkpoint 3: matches_input_chat_id = true
  ↓
If everything matches expectations → ✅ the flow is healthy
If anything is off → ❌ investigate that specific issue
```
---

## 🐛 Common Problem Diagnosis

### Q: "type" is "Response" instead of "dict"?

**Cause**: some backends return a Response object rather than a dict
**Fix**: the code handles this automatically; check the following logs to confirm parsing succeeded

```
Checkpoint 1: type = "Response" ← needs parsing
  ↓
The code parses `response.body`
  ↓
Check again whether it has become a dict
```

### Q: "is_empty" is true?

**Cause**: the LLM did not return valid summary text
**Diagnosis**:
1. Check `first_100_chars`: it should contain real content
2. Check that the model is configured correctly
3. Check whether too many middle messages caused the LLM to time out

### Q: "matches_input_chat_id" is false?

**Cause**: the chat_id did not match when saving to the DB
**Diagnosis**:
1. Compare `db_chat_id` with the input `chat_id`
2. It may be a database connection issue
3. It may be caused by a concurrent modification

### Q: "matches_input_compressed_count" is false?

**Cause**: the saved message count differs from what was expected
**Diagnosis**:
1. Compare `db_compressed_message_count` with `saved_compressed_count`
2. Check whether the middle messages were modified unexpectedly
3. Check whether the atomic-boundary alignment is correct
---

## 📚 Related Code Locations

```python
# File: async_context_compression.py

# Checkpoint 1: response structure inspection (L3459)
if self.valves.show_debug_log and __event_call__:
    await self._emit_struct_log(
        __event_call__,
        "LLM Response structure (raw from generate_chat_completion)",
        response_inspection_data,
    )

# Checkpoint 2: summary extraction inspection (L3524)
if self.valves.show_debug_log and __event_call__:
    await self._emit_struct_log(
        __event_call__,
        "LLM Summary extracted & cleaned",
        summary_inspection,
    )

# Checkpoint 3: database save inspection (L3168)
if self.valves.show_debug_log and __event_call__:
    await self._emit_struct_log(
        __event_call__,
        "Summary saved to database (verification)",
        save_inspection,
    )
```
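All three checkpoints go through the same `_emit_struct_log` helper. A minimal sketch of what such a helper could look like, assuming `__event_call__` accepts OpenWebUI-style event dicts; the `"execute"` payload shape is an assumption, not the plugin's verbatim code:

```python
import asyncio
import json

# Hypothetical sketch: forward a titled data dict to the browser console via
# the injected __event_call__ coroutine. The event payload shape is assumed
# and may differ between OpenWebUI versions.
async def emit_struct_log(event_call, title: str, data: dict) -> None:
    payload = json.dumps(data, ensure_ascii=False, default=str)
    await event_call(
        {
            "type": "execute",
            "data": {
                # Print a collapsible object in the browser console.
                "code": f"console.log('📋 [Compression] {title}'); console.dir({payload});"
            },
        }
    )
```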
---

## 🎯 Full Checklist

Verify in the frontend Console:

- [ ] Checkpoint 1 appears and `is_dict: true`
- [ ] Checkpoint 1 shows `first_choice_keys` containing `message`
- [ ] Checkpoint 2 appears and `is_empty: false`
- [ ] Checkpoint 2 shows a reasonable `length_chars` (usually > 100)
- [ ] Checkpoint 3 appears and `matches_input_chat_id: true`
- [ ] Checkpoint 3 shows `matches_input_compressed_count: true`
- [ ] All log timestamps look reasonable
- [ ] No exceptions or error messages

---

## 📞 Next Steps

1. ✅ Enable debug mode
2. ✅ Send messages to trigger summary generation
3. ✅ Observe the 3 new checkpoints
4. ✅ Verify that all fields match expectations
5. ✅ If anything is off, diagnose with this guide

---

**Last updated**: 2024-03-12
**Related feature**: response structure inspection (v1.4.1+)
**Docs**: [async_context_compression.py, lines 3459, 3524, 3168]
File diff suppressed because it is too large

270	plugins/filters/async-context-compression/community_post.md	Normal file

@@ -0,0 +1,270 @@
[](https://openwebui.com/posts/async_context_compression_b1655bc8)

# Async Context Compression: A Production-Scale Working-Memory Filter for OpenWebUI

Long chats do not just get expensive. They also get fragile.

Once a conversation grows large enough, you usually have to choose between two bad options:

- keep the full history and pay a heavy context cost
- trim aggressively and risk losing continuity, tool state, or important prior decisions

`Async Context Compression` is built to avoid that tradeoff.

It is not a simple “summarize old messages” utility. It is a structure-aware, async, database-backed working-memory system for OpenWebUI that can compress long conversations while preserving conversational continuity, tool-calling integrity, and now, as of `v1.5.0`, referenced-chat context injection as well.

This plugin has now reached the point where it feels complete enough to be described as a serious, high-capability filter rather than a small convenience add-on.

**[📖 Full README](https://github.com/Fu-Jie/openwebui-extensions/blob/main/plugins/filters/async-context-compression/README.md)**
**[📝 v1.5.0 Release Notes](https://github.com/Fu-Jie/openwebui-extensions/blob/main/plugins/filters/async-context-compression/v1.5.0.md)**
---

## Why This Plugin Exists

OpenWebUI conversations often contain much more than plain chat:

- long-running planning threads
- coding sessions with repeated tool use
- model-specific context limits
- multimodal messages
- external referenced chats
- custom models with different context windows

A naive compression strategy is not enough in those environments.

If a filter only drops earlier messages based on length, it can:

- break native tool-calling chains
- lose critical task state
- destroy continuity in old chats
- make debugging impossible
- hide important provider-side failures

`Async Context Compression` is designed around a stronger premise:

> compress history without treating conversation structure as disposable

That means it tries to preserve what actually matters for the next turn:

- the current goal
- durable user preferences
- recent progress
- tool outputs that still matter
- error state
- summary continuity
- referenced context from other chats

---
## What Makes It Different

This plugin now combines several capabilities that are usually split across separate systems:

### 1. Asynchronous working-memory generation

The current reply is not blocked while the plugin generates a new summary in the background.

### 2. Persistent summary storage

Summaries are stored in OpenWebUI's shared database and reused across turns, instead of being regenerated from scratch every time.

### 3. Structure-aware trimming

The filter respects atomic message boundaries so native tool-calling history is not corrupted by compression.

### 4. External chat reference summarization

New in `v1.5.0`: referenced chats can now be reused as cached summaries, injected directly if small enough, or summarized before injection if too large.

### 5. Mixed-script token estimation

The plugin now uses a much stronger multilingual token estimation path before falling back to exact counting, which helps reduce unnecessary expensive token calculations while staying much closer to real usage.
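A rough sketch of what mixed-script estimation can look like: bucket characters by script and apply per-script characters-per-token ratios. The buckets and ratios below are illustrative assumptions, not the plugin's actual heuristics:

```python
import unicodedata

# Illustrative mixed-script token estimator: CJK ideographs count roughly one
# token per character, while other scripts average about four characters per
# token. Real heuristics would cover more scripts and edge cases.
def estimate_tokens(text: str) -> int:
    if not text:
        return 0
    cjk = sum(1 for ch in text if "CJK" in unicodedata.name(ch, ""))
    other = len(text) - cjk
    return cjk + (-(-other // 4) if other else 0)  # ceil-divide the non-CJK chars
```

The point of an estimator like this is to stay cheap: it is a single pass over the string, so it can gate whether a more expensive exact tokenizer call is needed at all.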
### 6. Real failure visibility

Important background summary failures are surfaced to the browser console and status messages instead of disappearing silently.

---

## Workflow Overview

This is the current high-level flow:

```mermaid
flowchart TD
    A[Request enters inlet] --> B[Normalize tool IDs and optionally trim large tool outputs]
    B --> C{Referenced chats attached?}
    C -- No --> D[Load current chat summary if available]
    C -- Yes --> E[Inspect each referenced chat]

    E --> F{Existing cached summary?}
    F -- Yes --> G[Reuse cached summary]
    F -- No --> H{Fits direct budget?}
    H -- Yes --> I[Inject full referenced chat text]
    H -- No --> J[Prepare referenced-chat summary input]

    J --> K{Referenced-chat summary call succeeds?}
    K -- Yes --> L[Inject generated referenced summary]
    K -- No --> M[Fallback to direct contextual injection]

    G --> D
    I --> D
    L --> D
    M --> D

    D --> N[Build current-chat Head + Summary + Tail]
    N --> O{Over max_context_tokens?}
    O -- Yes --> P[Trim oldest atomic groups]
    O -- No --> Q[Send final context to the model]
    P --> Q

    Q --> R[Model returns the reply]
    R --> S[Outlet rebuilds the full history]
    S --> T{Reached compression threshold?}
    T -- No --> U[Finish]
    T -- Yes --> V[Fit summary input to the summary model context]

    V --> W{Background summary call succeeds?}
    W -- Yes --> X[Save new chat summary and update status]
    W -- No --> Y[Force browser-console error and show status hint]
```

This is why I consider the plugin “powerful” now: it is no longer solving a single problem. It is coordinating context reduction, summary persistence, tool safety, referenced-chat handling, and model-budget control inside one filter.
---

## New in v1.5.0

This release is important because it turns the plugin from “long-chat compression with strong tool safety” into something closer to a reusable context-management layer.

### External chat reference summaries

This is a new feature in `v1.5.0`, not just a small adjustment.

When a user references another chat:

- the plugin can reuse an existing cached summary
- inject the full referenced chat if it is small enough
- or generate a summary first if the referenced chat is too large

That means the filter can now carry relevant context across chats, not just across turns inside the same chat.

### Fast multilingual token estimation

Also new in `v1.5.0`.

The plugin no longer relies on a rough one-size-fits-all character ratio. It now estimates token usage with mixed-script heuristics that behave much better for:

- English
- Chinese
- Japanese
- Korean
- Cyrillic
- Arabic
- Thai
- mixed-language conversations

This matters because the plugin makes context decisions constantly. Better estimation means fewer unnecessary exact counts and fewer bad preflight assumptions.

### Stronger final-prompt budgeting

The summary path now fits the **real final summary request**, not just an intermediate estimate. That includes:

- prompt wrapper
- formatted conversation text
- previous summary
- reserved output budget
- safety margin

This directly improves reliability in the large old-chat cases that are hardest to handle.
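The budgeting rule above reduces to simple arithmetic. A hedged sketch; the parameter names (`reserved_output`, `safety_margin`) are illustrative, not the plugin's valve names:

```python
# Illustrative final-prompt budget check: the summary request only goes out if
# the full prompt plus the reserved output and a safety margin fit inside the
# summary model's context window.
def fits_summary_budget(
    prompt_tokens: int,
    previous_summary_tokens: int,
    max_context: int,
    reserved_output: int = 1024,
    safety_margin: int = 256,
) -> bool:
    total = prompt_tokens + previous_summary_tokens + reserved_output + safety_margin
    return total <= max_context
```

When the check fails, a caller would shrink the conversation text (or drop the previous summary) and retry, rather than sending a request the provider will reject.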
---

## Why It Feels Complete Now

I would describe the current plugin as “feature-complete for the main problem space,” because it now covers the major operational surfaces that matter in real usage:

- long plain-chat conversations
- multi-step coding threads
- native tool-calling conversations
- persistent summaries
- custom model thresholds
- background async generation
- external chat references
- multilingual token estimation
- failure surfacing for debugging

That does not mean it is finished forever. It means the plugin has crossed the line from a narrow experimental filter into a robust context-management system with enough breadth to support demanding OpenWebUI usage patterns.

---

## Scale and Engineering Depth

For people who care about implementation depth, this plugin is not small anymore.

Current code size:

- main plugin: **4,573 lines**
- focused test file: **1,037 lines**
- combined visible implementation + regression coverage: **5,610 lines**

Line count is not a quality metric by itself, but at this scale it does say something real:

- the plugin has grown well beyond a toy filter
- the behavior surface is large enough to require explicit regression testing
- the plugin now encodes a lot of edge-case handling that only shows up after repeated real-world usage

In other words: this is no longer “just summarize old messages.” It is a fairly serious stateful filter.

---

## Practical Benefits

If you use OpenWebUI heavily, the value is straightforward:

- lower token consumption in long chats
- better continuity across long-running sessions
- safer native tool-calling history
- fewer broken conversations after compression
- more stable summary generation on large histories
- better visibility when the provider rejects a summary request
- useful reuse of context from referenced chats

This plugin is especially valuable if you:

- regularly work in long coding chats
- use models with strict context budgets
- rely on native tool calling
- revisit old project chats
- want summaries to behave like working memory, not like lossy notes

---

## Installation

- OpenWebUI Community: <https://openwebui.com/posts/async_context_compression_b1655bc8>
- Source: <https://github.com/Fu-Jie/openwebui-extensions/tree/main/plugins/filters/async-context-compression>

If you want the full valve list, deployment notes, and troubleshooting details, the README is the best reference.

---

## Final Note

Do I think this plugin is powerful?

Yes, genuinely.

Not because it is large, but because it now solves the right combination of problems at once:

- cost control
- continuity
- structural safety
- async persistence
- cross-chat reuse
- operational debuggability

That combination is what makes it feel strong.

If you have been looking for a serious long-conversation memory/compression filter for OpenWebUI, `Async Context Compression` is now in that category.
282	plugins/filters/async-context-compression/community_post_CN.md	Normal file

@@ -0,0 +1,282 @@
[](https://openwebui.com/posts/async_context_compression_b1655bc8)

# Async Context Compression:一个面向生产场景的 OpenWebUI 工作记忆过滤器

长对话的问题,从来不只是“贵”。

当聊天足够长时,通常只剩下两个都不太好的选择:

- 保留完整历史,继续承担很高的上下文成本
- 粗暴裁剪旧消息,但冒着丢失上下文、工具状态和关键决策的风险

`Async Context Compression` 的目标,就是尽量避免这个二选一。

它不是一个简单的“把老消息总结一下”的小工具,而是一个带有结构感知、异步摘要、数据库持久化能力的 OpenWebUI 工作记忆系统。它的任务不是单纯缩短上下文,而是在压缩长对话的同时,尽量保留:

- 对话连续性
- 工具调用状态完整性
- 历史摘要进度
- 跨聊天引用上下文
- 出错时的可诊断性

到 `v1.5.0` 这个阶段,我认为它已经不再只是一个“方便的小过滤器”,而是一个足够完整、足够强、也足够有工程深度的上下文管理插件。

**[📖 完整 README](https://github.com/Fu-Jie/openwebui-extensions/blob/main/plugins/filters/async-context-compression/README_CN.md)**
**[📝 v1.5.0 发布说明](https://github.com/Fu-Jie/openwebui-extensions/blob/main/plugins/filters/async-context-compression/v1.5.0_CN.md)**

---

## 为什么会有这个插件

OpenWebUI 里的真实对话,通常并不只是“用户问一句,模型答一句”。

它常常还包含:

- 很长的项目型对话
- 多轮编码与调试
- 原生工具调用
- 多模态消息
- 不同模型上下文窗口差异
- 其他聊天的引用上下文

在这种环境里,单纯靠“按长度裁掉旧消息”其实不够。

如果一个过滤器只会按长度或索引裁剪消息,它很容易:

- 把原生 tool-calling 历史裁坏
- 丢掉仍然会影响下一轮回复的关键信息
- 在老聊天里破坏连续性
- 出问题时几乎无法排查
- 把上游 provider 报错伪装成模糊的内部错误

`Async Context Compression` 的核心思路更强一些:

> 可以压缩历史,但不能把“对话结构”当成无关紧要的东西一起压掉

它真正想保留的是下一轮最需要的状态:

- 当前目标
- 持久偏好
- 最近进展
- 仍然有效的工具结果
- 错误状态
- 已有摘要的连续性
- 来自其他聊天的相关上下文

---
## 它和普通摘要插件有什么不同

现在这个插件,实际上已经把几个通常要分散在不同系统里的能力组合到了一起:

### 1. 异步工作记忆生成

用户当前这次回复不会被后台摘要阻塞。

### 2. 持久化摘要存储

摘要会写入 OpenWebUI 共享数据库,并在后续轮次中复用,而不是每次都从头重算。

### 3. 结构感知裁剪

裁剪逻辑会尊重原子消息边界,避免把原生 tool-calling 历史裁坏。

### 4. 外部聊天引用摘要

这是 `v1.5.0` 新增的重要能力:被引用聊天现在可以直接复用缓存摘要、在小体量时直接注入、或者在过大时先生成摘要再注入。

### 5. 多语言 Token 预估

插件现在具备更强的多脚本文本 Token 预估逻辑,在很多情况下可以减少不必要的精确计数,同时明显比旧的粗略字符比值更贴近真实用量。

### 6. 失败可见性

关键的后台摘要失败现在会出现在浏览器控制台和状态提示里,不再悄悄消失。

---

## 工作流总览

下面是当前的高层流程:

```mermaid
flowchart TD
    A[Request enters inlet] --> B[Normalize tool IDs and optionally trim large tool outputs]
    B --> C{Referenced chats attached?}
    C -- No --> D[Load current chat summary if available]
    C -- Yes --> E[Inspect each referenced chat]

    E --> F{Existing cached summary?}
    F -- Yes --> G[Reuse cached summary]
    F -- No --> H{Fits direct budget?}
    H -- Yes --> I[Inject full referenced chat text]
    H -- No --> J[Prepare referenced-chat summary input]

    J --> K{Referenced-chat summary call succeeds?}
    K -- Yes --> L[Inject generated referenced summary]
    K -- No --> M[Fallback to direct contextual injection]

    G --> D
    I --> D
    L --> D
    M --> D

    D --> N[Build current-chat Head + Summary + Tail]
    N --> O{Over max_context_tokens?}
    O -- Yes --> P[Trim oldest atomic groups]
    O -- No --> Q[Send final context to the model]
    P --> Q

    Q --> R[Model returns the reply]
    R --> S[Outlet rebuilds the full history]
    S --> T{Reached compression threshold?}
    T -- No --> U[Finish]
    T -- Yes --> V[Fit summary input to the summary model context]

    V --> W{Background summary call succeeds?}
    W -- Yes --> X[Save new chat summary and update status]
    W -- No --> Y[Force browser-console error and show status hint]
```

这也是为什么我会觉得它现在“强”:它已经不再只解决一个问题,而是在一个过滤器里同时协调:

- 上下文压缩
- 历史摘要复用
- 工具调用安全性
- 被引用聊天上下文
- 模型预算控制

---
## v1.5.0 为什么重要

这个版本的重要性在于,它把插件从“长对话压缩器”推进成了一个更接近“上下文管理层”的东西。

### 外部聊天引用摘要

这是 `v1.5.0` 的新功能,不是小修小补。

当用户引用另一个聊天时,插件现在可以:

- 直接复用已有缓存摘要
- 如果聊天足够小,直接把完整内容注入
- 如果聊天太大,先生成摘要再注入

这意味着它现在不仅能跨“轮次”保留上下文,也能开始跨“聊天”携带相关上下文。

### 快速多语言 Token 预估

这同样是 `v1.5.0` 的新能力。

插件不再依赖简单粗暴的统一字符比值,而是改用更适合混合语言文本的估算方式,尤其对下面这些场景更有意义:

- 英文
- 中文
- 日文
- 韩文
- 西里尔字符
- 阿拉伯语
- 泰语
- 中英混合或多语言混合对话

这很重要,因为上下文管理类插件会不断做预算判断。预估更准,就意味着更少无意义的精确计算,也更不容易在预检阶段做出错误判断。

### 更强的最终请求预算控制

现在的摘要路径会去拟合“真实最终 summary request”,而不是只看一个中间估算值。它会把这些内容都算进去:

- prompt 包装
- 格式化后的对话文本
- previous summary
- 预留输出预算
- 安全余量

这对老聊天、大聊天和最难处理的边界情况特别关键。

---

## 为什么我觉得它现在已经足够完整

如果把“问题空间”列出来,我会说这个插件现在对主要场景已经覆盖得比较完整了:

- 很长的普通聊天
- 多轮编码与调试对话
- 原生工具调用
- 历史摘要持久化
- 自定义模型阈值
- 异步后台摘要
- 外部聊天引用
- 多语言 Token 预估
- 调试可见性

这并不代表它永远不会再迭代,而是说它已经越过了“窄功能实验品”的阶段,进入了一个更像“通用上下文管理系统”的形态。

---

## 代码规模与工程深度

如果你关心实现深度,这个插件现在已经不小了。

当前代码规模:

- 主插件文件:**4,573 行**
- 聚焦测试文件:**1,037 行**
- 可见实现 + 回归测试合计:**5,610 行**

代码行数本身不等于质量,但在这个量级上,它至少说明了几件真实的事:

- 这已经不是一个玩具级过滤器
- 这个插件的行为面足够大,必须靠专门回归测试兜住
- 它已经积累了很多只有在真实使用中才会暴露出来的边界处理逻辑

也就是说,它现在做的事情,已经明显不是“把老消息总结一下”那么简单。

---

## 实际价值

如果你是 OpenWebUI 的重度用户,这个插件的价值其实很直接:

- 长聊天更省 Token
- 长会话连续性更好
- 原生 tool-calling 更安全
- 压缩后更不容易把会话搞坏
- 大历史摘要生成更稳定
- provider 拒绝摘要请求时更容易看到真错误
- 能复用其他聊天里的有效上下文

尤其适合这些用户:

- 经常做长时间编码聊天
- 使用上下文窗口比较紧的模型
- 依赖原生工具调用
- 经常回看旧项目聊天
- 希望摘要更像“工作记忆”而不是“丢失细节的简要笔记”

---

## 安装

- OpenWebUI 社区:<https://openwebui.com/posts/async_context_compression_b1655bc8>
- 源码目录:<https://github.com/Fu-Jie/openwebui-extensions/tree/main/plugins/filters/async-context-compression>

如果你想看完整的 valves、部署说明和故障排查,README 仍然是最完整的参考入口。

---

## 最后一句

你问我这个插件是不是强大。

我的答案是:**是,确实强,而且现在已经不是“看起来强”,而是“问题空间覆盖得比较完整”的那种强。**

不是因为它代码多,而是因为它现在同时解决的是一组真正相关的问题:

- 成本控制
- 连续性
- 结构安全
- 异步持久化
- 跨聊天上下文复用
- 出错时的可诊断性

正是这几个东西一起成立,才让它现在像一个真正成熟的长对话上下文管理插件。
@@ -18,6 +18,63 @@ def _ensure_module(name: str) -> types.ModuleType:
    return module


def _install_dependency_stubs() -> None:
    pydantic_module = _ensure_module("pydantic")
    sqlalchemy_module = _ensure_module("sqlalchemy")
    sqlalchemy_orm_module = _ensure_module("sqlalchemy.orm")
    sqlalchemy_engine_module = _ensure_module("sqlalchemy.engine")

    class DummyBaseModel:
        def __init__(self, **kwargs):
            annotations = getattr(self.__class__, "__annotations__", {})
            for field_name in annotations:
                if field_name in kwargs:
                    value = kwargs[field_name]
                else:
                    value = getattr(self.__class__, field_name, None)
                setattr(self, field_name, value)

    def dummy_field(default=None, **kwargs):
        return default

    class DummyMetadata:
        def create_all(self, *args, **kwargs):
            return None

    def dummy_declarative_base():
        class DummyBase:
            metadata = DummyMetadata()

        return DummyBase

    def dummy_sessionmaker(*args, **kwargs):
        return lambda: None

    class DummyEngine:
        pass

    def dummy_column(*args, **kwargs):
        return None

    def dummy_type(*args, **kwargs):
        return None

    def dummy_inspect(*args, **kwargs):
        return types.SimpleNamespace(has_table=lambda *a, **k: False)

    pydantic_module.BaseModel = DummyBaseModel
    pydantic_module.Field = dummy_field
    sqlalchemy_module.Column = dummy_column
    sqlalchemy_module.String = dummy_type
    sqlalchemy_module.Text = dummy_type
    sqlalchemy_module.DateTime = dummy_type
    sqlalchemy_module.Integer = dummy_type
    sqlalchemy_module.inspect = dummy_inspect
    sqlalchemy_orm_module.declarative_base = dummy_declarative_base
    sqlalchemy_orm_module.sessionmaker = dummy_sessionmaker
    sqlalchemy_engine_module.Engine = DummyEngine


def _install_openwebui_stubs() -> None:
    _ensure_module("open_webui")
    _ensure_module("open_webui.utils")
@@ -47,7 +104,8 @@ def _install_openwebui_stubs() -> None:
        return None

    class DummyRequest:
        pass
        def __init__(self, *args, **kwargs):
            pass

    chat_module.generate_chat_completion = generate_chat_completion
    users_module.Users = DummyUsers
@@ -57,6 +115,7 @@ def _install_openwebui_stubs() -> None:
    fastapi_requests.Request = DummyRequest


_install_dependency_stubs()
_install_openwebui_stubs()
spec = importlib.util.spec_from_file_location(MODULE_NAME, PLUGIN_PATH)
module = importlib.util.module_from_spec(spec)
@@ -189,9 +248,12 @@ class TestAsyncContextCompression(unittest.TestCase):
            {"role": "assistant", "content": "Final answer"},
        ]

        trimmed_count = self.filter._trim_native_tool_outputs(messages, "en-US")
        trimmed_count, trim_debug = self.filter._trim_native_tool_outputs(
            messages, "en-US"
        )

        self.assertEqual(trimmed_count, 1)
        self.assertIsNone(trim_debug)
        self.assertEqual(messages[1]["content"], "... [Content collapsed] ...")
        self.assertTrue(messages[1]["metadata"]["is_trimmed"])
        self.assertTrue(messages[2]["metadata"]["tool_outputs_trimmed"])
@@ -213,9 +275,12 @@ class TestAsyncContextCompression(unittest.TestCase):
            }
        ]

        trimmed_count = self.filter._trim_native_tool_outputs(messages, "en-US")
        trimmed_count, trim_debug = self.filter._trim_native_tool_outputs(
            messages, "en-US"
        )

        self.assertEqual(trimmed_count, 1)
        self.assertIsNone(trim_debug)
        self.assertIn(
            'result=""... [Content collapsed] ...""',
            messages[0]["content"],
@@ -258,9 +323,12 @@ class TestAsyncContextCompression(unittest.TestCase):
            {"role": "tool", "content": "x" * 1600},
        ]

        trimmed_count = self.filter._trim_native_tool_outputs(messages, "en-US")
        trimmed_count, trim_debug = self.filter._trim_native_tool_outputs(
            messages, "en-US"
        )

        self.assertEqual(trimmed_count, 1)
        self.assertIsNone(trim_debug)
        self.assertEqual(messages[1]["content"], "... [Content collapsed] ...")
        self.assertTrue(messages[1]["metadata"]["is_trimmed"])
@@ -391,11 +459,55 @@ class TestAsyncContextCompression(unittest.TestCase):
|
||||
|
||||
self.assertTrue(create_task_called)
|
||||
|
||||
def test_summary_save_progress_matches_truncated_input(self):
|
||||
def test_estimate_messages_tokens_counts_output_text_parts(self):
|
||||
messages = [
|
||||
{
|
||||
"role": "assistant",
|
||||
"content": [{"type": "output_text", "text": "abcd" * 25}],
|
||||
}
|
||||
]
|
||||
|
||||
self.assertEqual(
|
||||
self.filter._estimate_messages_tokens(messages),
|
||||
module._estimate_text_tokens("abcd" * 25),
|
||||
)
|
||||
|
||||
def test_unfold_messages_keeps_plain_assistant_output_when_expand_is_not_richer(self):
|
||||
misc_module = _ensure_module("open_webui.utils.misc")
|
||||
misc_module.convert_output_to_messages = lambda output, raw=True: [
|
||||
{
|
||||
"role": "assistant",
|
||||
"content": [{"type": "output_text", "text": "Plain reply"}],
|
||||
}
|
||||
]
|
||||
|
||||
messages = [
|
||||
{
|
||||
"id": "assistant-1",
|
||||
"role": "assistant",
|
||||
"content": "Plain reply",
|
||||
"output": [
|
||||
{
|
||||
"type": "message",
|
||||
"role": "assistant",
|
||||
"content": [{"type": "output_text", "text": "Plain reply"}],
|
||||
}
|
||||
],
|
||||
}
|
||||
]
|
||||
|
||||
unfolded = self.filter._unfold_messages(messages)
|
||||
|
||||
self.assertEqual(len(unfolded), 1)
|
||||
self.assertEqual(unfolded[0]["id"], "assistant-1")
|
||||
self.assertEqual(unfolded[0]["content"], "Plain reply")
|
||||
self.assertNotIn("output", unfolded[0])
|
||||
|
||||
def test_summary_save_progress_matches_final_prompt_shrink(self):
|
||||
self.filter.valves.keep_first = 1
|
||||
self.filter.valves.keep_last = 1
|
||||
self.filter.valves.summary_model = "fake-summary-model"
|
||||
self.filter.valves.summary_model_max_context = 0
|
||||
self.filter.valves.summary_model_max_context = 1200
|
||||
|
||||
captured = {}
|
||||
events = []
|
||||
@@ -404,12 +516,14 @@ class TestAsyncContextCompression(unittest.TestCase):
|
||||
events.append(event)
|
||||
|
||||
async def mock_summary_llm(
|
||||
previous_summary,
|
||||
new_conversation_text,
|
||||
body,
|
||||
user_data,
|
||||
__event_call__,
|
||||
__event_call__=None,
|
||||
__request__=None,
|
||||
previous_summary=None,
|
||||
):
|
||||
captured["conversation_text"] = new_conversation_text
|
||||
return "new summary"
|
||||
|
||||
def mock_save_summary(chat_id, summary, compressed_count):
|
||||
@@ -424,17 +538,22 @@ class TestAsyncContextCompression(unittest.TestCase):
        self.filter._call_summary_llm = mock_summary_llm
        self.filter._save_summary = mock_save_summary
        self.filter._get_model_thresholds = lambda model_id: {
            "max_context_tokens": 1200
        }
        self.filter._calculate_messages_tokens = lambda messages: len(messages) * 1000
        self.filter._format_messages_for_summary = lambda messages: "\n".join(
            msg["content"] for msg in messages
        )
        self.filter._build_summary_prompt = (
            lambda conversation_text, previous_summary=None: conversation_text
        )
        self.filter._count_tokens = lambda text: len(text)

        messages = [
            {"role": "system", "content": "System prompt"},
            {"role": "user", "content": "Question 1"},
            {"role": "assistant", "content": "Answer 1"},
            {"role": "user", "content": "Question 2"},
            {"role": "assistant", "content": "Answer 2"},
            {"role": "user", "content": "Q" * 100},
            {"role": "assistant", "content": "A" * 100},
            {"role": "user", "content": "B" * 100},
            {"role": "assistant", "content": "C" * 100},
            {"role": "user", "content": "Question 3"},
        ]

@@ -453,9 +572,466 @@ class TestAsyncContextCompression(unittest.TestCase):

        self.assertEqual(captured["chat_id"], "chat-1")
        self.assertEqual(captured["summary"], "new summary")
        self.assertEqual(captured["compressed_count"], 3)
        self.assertEqual(captured["conversation_text"], f"{'Q' * 100}\n{'A' * 100}")
        self.assertTrue(any(event["type"] == "status" for event in events))

    def test_generate_summary_async_drops_previous_summary_when_prompt_still_oversized(self):
        self.filter.valves.keep_first = 1
        self.filter.valves.keep_last = 1
        self.filter.valves.summary_model = "fake-summary-model"
        self.filter.valves.summary_model_max_context = 1200

        captured = {}

        async def mock_summary_llm(
            new_conversation_text,
            body,
            user_data,
            __event_call__=None,
            __request__=None,
            previous_summary=None,
        ):
            captured["conversation_text"] = new_conversation_text
            captured["previous_summary"] = previous_summary
            return "new summary"

        async def noop_log(*args, **kwargs):
            return None

        self.filter._log = noop_log
        self.filter._call_summary_llm = mock_summary_llm
        self.filter._save_summary = lambda *args: None
        self.filter._get_model_thresholds = lambda model_id: {
            "max_context_tokens": 1200
        }
        self.filter._format_messages_for_summary = lambda messages: "\n".join(
            msg["content"] for msg in messages
        )
        self.filter._build_summary_prompt = (
            lambda conversation_text, previous_summary=None: (
                (previous_summary or "") + "\n" + conversation_text
            )
        )
        self.filter._count_tokens = lambda text: len(text)
        self.filter._load_summary = lambda chat_id, body: "P" * 220

        messages = [
            {"role": "system", "content": "System prompt"},
            {"role": "user", "content": "Q" * 60},
            {"role": "assistant", "content": "Answer 1"},
            {"role": "user", "content": "Question 2"},
        ]

        asyncio.run(
            self.filter._generate_summary_async(
                messages=messages,
                chat_id="chat-1",
                body={"model": "fake-summary-model"},
                user_data={"id": "user-1"},
                target_compressed_count=2,
                lang="en-US",
                __event_emitter__=None,
                __event_call__=None,
            )
        )

        self.assertEqual(captured["conversation_text"], "Q" * 60)
        self.assertIsNone(captured["previous_summary"])

    def test_call_summary_llm_surfaces_provider_error_dict(self):
        self.filter.valves.summary_model = "fake-summary-model"
        self.filter.valves.show_debug_log = False

        async def fake_generate_chat_completion(request, payload, user):
            return {"error": {"message": "context too long", "code": 400}}

        async def noop_log(*args, **kwargs):
            return None

        frontend_calls = []

        async def fake_event_call(payload):
            frontend_calls.append(payload)
            return True

        original_generate = module.generate_chat_completion
        original_get_user = getattr(module.Users, "get_user_by_id", None)

        module.generate_chat_completion = fake_generate_chat_completion
        module.Users.get_user_by_id = staticmethod(
            lambda user_id: types.SimpleNamespace(email="user@example.com")
        )
        self.filter._log = noop_log
        self.filter._get_model_thresholds = lambda model_id: {
            "max_context_tokens": 8192
        }
        self.filter._build_summary_prompt = (
            lambda conversation_text, previous_summary=None: conversation_text
        )

        try:
            with self.assertRaises(Exception) as exc_info:
                asyncio.run(
                    self.filter._call_summary_llm(
                        "conversation",
                        {"model": "fake-summary-model"},
                        {"id": "user-1"},
                        __event_call__=fake_event_call,
                    )
                )
        finally:
            module.generate_chat_completion = original_generate
            if original_get_user is None:
                delattr(module.Users, "get_user_by_id")
            else:
                module.Users.get_user_by_id = original_get_user

        self.assertIn("Upstream provider error: context too long", str(exc_info.exception))
        self.assertNotIn(
            "LLM response format incorrect or empty", str(exc_info.exception)
        )
        self.assertTrue(frontend_calls)
        self.assertEqual(frontend_calls[0]["type"], "execute")
        self.assertIn("console.error", frontend_calls[0]["data"]["code"])
        self.assertIn("context too long", frontend_calls[0]["data"]["code"])

    def test_generate_summary_async_status_guides_user_to_browser_console(self):
        self.filter.valves.keep_first = 1
        self.filter.valves.keep_last = 1
        self.filter.valves.summary_model = "fake-summary-model"
        self.filter.valves.summary_model_max_context = 1200
        self.filter.valves.show_debug_log = False

        events = []
        frontend_calls = []

        async def fake_summary_llm(*args, **kwargs):
            raise Exception("boom details")

        async def fake_emitter(event):
            events.append(event)

        async def fake_event_call(payload):
            frontend_calls.append(payload)
            return True

        async def noop_log(*args, **kwargs):
            return None

        self.filter._log = noop_log
        self.filter._call_summary_llm = fake_summary_llm
        self.filter._get_model_thresholds = lambda model_id: {
            "max_context_tokens": 1200
        }
        self.filter._format_messages_for_summary = lambda messages: "\n".join(
            msg["content"] for msg in messages
        )
        self.filter._build_summary_prompt = (
            lambda conversation_text, previous_summary=None: conversation_text
        )
        self.filter._count_tokens = lambda text: len(text)

        messages = [
            {"role": "system", "content": "System prompt"},
            {"role": "user", "content": "Q" * 40},
            {"role": "assistant", "content": "A" * 40},
            {"role": "user", "content": "Question 2"},
        ]

        asyncio.run(
            self.filter._generate_summary_async(
                messages=messages,
                chat_id="chat-1",
                body={"model": "fake-summary-model"},
                user_data={"id": "user-1"},
                target_compressed_count=2,
                lang="en-US",
                __event_emitter__=fake_emitter,
                __event_call__=fake_event_call,
            )
        )

        self.assertTrue(frontend_calls)
        self.assertIn("console.error", frontend_calls[0]["data"]["code"])
        self.assertIn("boom details", frontend_calls[0]["data"]["code"])
        status_descriptions = [
            event["data"]["description"]
            for event in events
            if event.get("type") == "status"
        ]
        self.assertTrue(
            any("Check browser console (F12) for details" in text for text in status_descriptions)
        )

    def test_check_and_generate_summary_async_forces_frontend_and_status_on_pre_summary_error(
        self,
    ):
        self.filter.valves.show_debug_log = False

        events = []
        frontend_calls = []

        async def fake_emitter(event):
            events.append(event)

        async def fake_event_call(payload):
            frontend_calls.append(payload)
            return True

        async def noop_log(*args, **kwargs):
            return None

        def fail_estimate(_messages):
            raise Exception("pre summary boom")

        self.filter._log = noop_log
        self.filter._estimate_messages_tokens = fail_estimate
        self.filter._get_model_thresholds = lambda model_id: {
            "compression_threshold_tokens": 100,
            "max_context_tokens": 1000,
        }

        asyncio.run(
            self.filter._check_and_generate_summary_async(
                chat_id="chat-1",
                model="fake-model",
                body={"messages": [{"role": "user", "content": "Hello"}]},
                user_data={"id": "user-1"},
                target_compressed_count=1,
                lang="en-US",
                __event_emitter__=fake_emitter,
                __event_call__=fake_event_call,
            )
        )

        self.assertTrue(frontend_calls)
        self.assertIn("console.error", frontend_calls[0]["data"]["code"])
        self.assertIn("pre summary boom", frontend_calls[0]["data"]["code"])
        status_descriptions = [
            event["data"]["description"]
            for event in events
            if event.get("type") == "status"
        ]
        self.assertTrue(
            any("Check browser console (F12) for details" in text for text in status_descriptions)
        )

    def test_external_reference_message_detection_matches_injected_marker(self):
        message = {
            "role": "assistant",
            "content": "External refs",
            "metadata": {
                "is_summary": True,
                "is_external_references": True,
                "source": "external_references",
            },
        }

        self.assertTrue(self.filter._is_external_reference_message(message))

    def test_handle_external_chat_references_falls_back_when_summary_llm_errors(self):
        self.filter.valves.summary_model = "fake-summary-model"
        self.filter.valves.max_summary_tokens = 4096

        async def fake_summary_llm(*args, **kwargs):
            raise Exception("reference summary failed")

        self.filter._call_summary_llm = fake_summary_llm
        self.filter._load_summary_record = lambda chat_id: None
        self.filter._load_full_chat_messages = lambda chat_id: [
            {"role": "user", "content": "Referenced question"},
            {"role": "assistant", "content": "Referenced answer"},
        ]
        self.filter._format_messages_for_summary = (
            lambda messages: "Referenced conversation body"
        )
        self.filter._get_model_thresholds = lambda model_id: {
            "max_context_tokens": 5001
        }
        self.filter._estimate_messages_tokens = lambda messages: 5000

        body = {
            "model": "main-model",
            "messages": [{"role": "user", "content": "Current prompt"}],
            "metadata": {
                "files": [
                    {
                        "type": "chat",
                        "id": "chat-ref-1",
                        "name": "Referenced Chat",
                    }
                ]
            },
        }

        result = asyncio.run(
            self.filter._handle_external_chat_references(
                body,
                user_data={"id": "user-1"},
            )
        )

        self.assertIn("__external_references__", result)
        self.assertIn(
            "Referenced conversation body",
            result["__external_references__"]["content"],
        )

    def test_generate_referenced_summaries_background_uses_model_context_window_fallback(
        self,
    ):
        self.filter.valves.summary_model = "fake-summary-model"
        self.filter.valves.summary_model_max_context = 0
        self.filter.valves.max_summary_tokens = 64

        captured = {}
        truncate_calls = []

        async def fake_summary_llm(
            new_conversation_text,
            body,
            user_data,
            __event_call__=None,
            __request__=None,
            previous_summary=None,
        ):
            captured["conversation_text"] = new_conversation_text
            return "cached summary"

        async def noop_log(*args, **kwargs):
            return None

        self.filter._call_summary_llm = fake_summary_llm
        self.filter._log = noop_log
        self.filter._save_summary = lambda *args: None
        self.filter._get_model_thresholds = lambda model_id: {
            "max_context_tokens": 5000
        }
        self.filter._truncate_messages_for_summary = (
            lambda messages, max_tokens: truncate_calls.append(max_tokens) or "truncated"
        )

        conversation_text = "x" * 600

        asyncio.run(
            self.filter._generate_referenced_summaries_background(
                [
                    {
                        "chat_id": "chat-ref-ctx",
                        "title": "Referenced Chat",
                        "conversation_text": conversation_text,
                        "covers_full_history": True,
                        "covered_message_count": 1,
                    }
                ],
                user_data={"id": "user-1"},
            )
        )

        self.assertEqual(captured["conversation_text"], conversation_text)
        self.assertEqual(truncate_calls, [])

    def test_generate_referenced_summaries_background_uses_summary_llm_signature(self):
        self.filter.valves.summary_model = "fake-summary-model"

        captured = {}

        async def fake_summary_llm(
            new_conversation_text,
            body,
            user_data,
            __event_call__=None,
            __request__=None,
            previous_summary=None,
        ):
            captured["conversation_text"] = new_conversation_text
            captured["body"] = body
            captured["user_data"] = user_data
            captured["request"] = __request__
            captured["previous_summary"] = previous_summary
            return "cached reference summary"

        def fake_save_summary(chat_id, summary, compressed_count):
            captured["saved"] = (chat_id, summary, compressed_count)

        async def noop_log(*args, **kwargs):
            return None

        self.filter._call_summary_llm = fake_summary_llm
        self.filter._save_summary = fake_save_summary
        self.filter._log = noop_log

        request = object()

        asyncio.run(
            self.filter._generate_referenced_summaries_background(
                [
                    {
                        "chat_id": "chat-ref-1",
                        "title": "Referenced Chat",
                        "conversation_text": "Full referenced conversation",
                        "covers_full_history": True,
                        "covered_message_count": 3,
                    }
                ],
                user_data={"id": "user-1"},
                __request__=request,
            )
        )

        self.assertEqual(captured["conversation_text"], "Full referenced conversation")
        self.assertEqual(captured["body"]["model"], "fake-summary-model")
        self.assertEqual(captured["user_data"], {"id": "user-1"})
        self.assertIs(captured["request"], request)
        self.assertIsNone(captured["previous_summary"])
        self.assertEqual(
            captured["saved"], ("chat-ref-1", "cached reference summary", 3)
        )

    def test_generate_referenced_summaries_background_skips_progress_save_for_truncation(self):
        self.filter.valves.summary_model = "fake-summary-model"
        self.filter.valves.summary_model_max_context = 100

        saved_calls = []
        captured = {}

        async def fake_summary_llm(
            new_conversation_text,
            body,
            user_data,
            __event_call__=None,
            __request__=None,
            previous_summary=None,
        ):
            captured["conversation_text"] = new_conversation_text
            return "ephemeral summary"

        async def noop_log(*args, **kwargs):
            return None

        self.filter._call_summary_llm = fake_summary_llm
        self.filter._save_summary = lambda *args: saved_calls.append(args)
        self.filter._log = noop_log
        self.filter._load_full_chat_messages = lambda chat_id: [
            {"role": "user", "content": "msg 1"},
            {"role": "assistant", "content": "msg 2"},
        ]
        self.filter._format_messages_for_summary = lambda messages: "x" * 600
        self.filter._truncate_messages_for_summary = (
            lambda messages, max_tokens: "tail only"
        )

        asyncio.run(
            self.filter._generate_referenced_summaries_background(
                [{"chat_id": "chat-ref-2", "title": "Large Referenced Chat"}],
                user_data={"id": "user-1"},
            )
        )

        self.assertEqual(captured["conversation_text"], "tail only")
        self.assertEqual(saved_calls, [])


if __name__ == "__main__":
    unittest.main()
27  plugins/filters/async-context-compression/v1.5.0.md  Normal file
@@ -0,0 +1,27 @@
[](https://openwebui.com/f/fujie/async_context_compression)

## Overview

Compared with the previous git version (`1.4.2`), this release introduces two major new capabilities: external chat reference summarization and a much stronger multilingual token-estimation pipeline. It also improves the reliability of the surrounding summary workflow, especially when provider-side failures occur.

**[📖 README](https://github.com/Fu-Jie/openwebui-extensions/blob/main/plugins/filters/async-context-compression/README.md)**

## New Features

- **External Chat Reference Summaries**: Add support for referenced-chat context injection that can reuse cached summaries, inject small referenced chats directly, or generate summaries for larger referenced chats before injection.
- **Fast Multilingual Token Estimation**: Replace the old rough `len(text)//4` fallback with a new mixed-script estimation pipeline so preflight decisions stay much closer to actual usage across English, Chinese, Japanese, Korean, Arabic, Cyrillic, Thai, and mixed content.
- **Stronger Working-Memory Prompt**: Refined the XML summary prompt so generated working memory preserves more actionable state across general chat, coding tasks, and tool-heavy conversations.
- **Clearer Frontend Debug Logs**: Reworked browser-console debug output into grouped structural snapshots that make inlet/outlet state easier to inspect.
- **Safer Tool Trimming Defaults**: Enabled native tool-output trimming by default and exposed `tool_trim_threshold_chars` with a 600-character threshold.

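The mixed-script estimation idea can be sketched roughly as follows. The script buckets and chars-per-token ratios here are illustrative assumptions, not the plugin's actual tuned values:

```python
import re

# Illustrative chars-per-token ratios per script bucket (assumed values,
# not the plugin's tuned constants).
SCRIPT_RATIOS = [
    (re.compile(r"[\u4e00-\u9fff\u3040-\u30ff\uac00-\ud7af]"), 1.6),  # CJK / Kana / Hangul
    (re.compile(r"[\u0600-\u06ff\u0400-\u04ff\u0e00-\u0e7f]"), 2.5),  # Arabic / Cyrillic / Thai
]
LATIN_RATIO = 4.0  # roughly 4 chars per token for English-like text


def estimate_tokens(text: str) -> int:
    """Bucket characters by script, then charge each bucket its own
    chars-per-token ratio, instead of a flat len(text) // 4."""
    bucket_counts = [0] * len(SCRIPT_RATIOS)
    other = 0
    for ch in text:
        for i, (pattern, _ratio) in enumerate(SCRIPT_RATIOS):
            if pattern.match(ch):
                bucket_counts[i] += 1
                break
        else:
            other += 1
    total = other / LATIN_RATIO
    for count, (_pattern, ratio) in zip(bucket_counts, SCRIPT_RATIOS):
        total += count / ratio
    return max(1, round(total))
```

Dense scripts such as Chinese cost far more tokens per character than Latin text, so a flat divisor systematically underestimates them; per-script ratios keep the preflight budget check closer to real usage.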
## Bug Fixes

- **Referenced-Chat Fallback Reliability**: If the new referenced-chat summary path fails, the active request now falls back to direct contextual injection instead of failing the whole chat.
- **Correct Summary Budgeting**: Fixed referenced-chat summary preparation so `summary_model_max_context` controls summary-input fitting, while `max_summary_tokens` remains an output cap.
- **Visible Background Failures**: Important background summary failures now surface to the browser console and chat status even when `show_debug_log` is disabled.
- **Provider Error Surfacing**: Improved summary-call error extraction so non-standard upstream provider error payloads are reported more clearly.

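A minimal sketch of the kind of tolerant error extraction the last bullet describes (the field names beyond `error`, `detail`, and `message` are assumptions; the filter's real logic may check more shapes):

```python
def extract_provider_error(response):
    """Pull a readable message out of an upstream error payload.
    Providers disagree on shape: {"error": {"message": ...}},
    {"error": "..."} and {"detail": "..."} all occur in practice."""
    if not isinstance(response, dict):
        return None
    err = response.get("error") or response.get("detail")
    if isinstance(err, dict):
        # Nested object: prefer an explicit message field, fall back to repr.
        return str(err.get("message") or err.get("msg") or err)
    if err:
        return str(err)
    return None
```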
## Release Notes

- Bilingual plugin README files and mirrored docs pages were refreshed for the `1.5.0` release.
- This release is aimed at reducing silent failure modes and making summary behavior easier to reason about during debugging.
27  plugins/filters/async-context-compression/v1.5.0_CN.md  Normal file
@@ -0,0 +1,27 @@
[](https://openwebui.com/f/fujie/async_context_compression)

## 概述

相较上一个 git 版本(`1.4.2`),本次发布新增了两个重要能力:外部聊天引用摘要,以及更强的多语言 Token 预估链路。同时也补强了围绕这些新能力的摘要流程稳定性,特别是上游提供商报错时的回退与可见性。

**[📖 README](https://github.com/Fu-Jie/openwebui-extensions/blob/main/plugins/filters/async-context-compression/README_CN.md)**

## 新功能

- **外部聊天引用摘要**:新增引用聊天上下文注入能力。现在可以复用缓存摘要、直接注入较小引用聊天,或先为较大的引用聊天生成摘要再注入。
- **快速多语言 Token 预估**:用新的混合脚本估算链路替代旧的 `len(text)//4` 粗略回退,使预检在英文、中文、日文、韩文、阿拉伯文、西里尔文、泰文及混合内容下都更接近真实用量。
- **更稳健的工作记忆提示词**:重写 XML 摘要提示词,让生成出的 working memory 在普通聊天、编码任务和密集工具调用场景下保留更多可操作上下文。
- **更清晰的前端调试日志**:浏览器控制台调试输出改为分组化、结构化展示,更容易观察 inlet / outlet 的真实状态。
- **更安全的工具裁剪默认值**:原生工具输出裁剪默认开启,并新增 `tool_trim_threshold_chars`,默认阈值为 600 字符。

## 问题修复

- **引用聊天回退更稳妥**:当新的引用聊天摘要路径失败时,当前请求会自动回退为直接注入上下文,而不是整个对话一起失败。
- **摘要预算计算更准确**:修复引用聊天摘要准备逻辑,明确由 `summary_model_max_context` 控制摘要输入窗口,而 `max_summary_tokens` 只控制摘要输出长度。
- **后台失败更容易发现**:即使关闭 `show_debug_log`,关键后台摘要失败现在也会显示到浏览器控制台和聊天状态提示中。
- **提供商错误信息更清晰**:改进摘要调用的错误提取逻辑,让非标准上游错误载荷也能更准确地显示出来。

## 发布说明

- 已同步更新中英插件 README 与 docs 镜像页,确保 `1.5.0` 发布说明一致。
- 本次版本的目标,是减少"静默失败"这类难排查问题,并让摘要行为在调试时更容易理解。
@@ -114,6 +114,7 @@ class Filter:

        # Check if it's a Copilot model
        is_copilot_model = self._is_copilot_model(current_model)
        body["is_copilot_model"] = is_copilot_model

        await self._emit_debug_log(
            __event_emitter__,
137  plugins/tools/batch-install-plugins/README.md  Normal file
@@ -0,0 +1,137 @@
# Batch Install Plugins from GitHub

**Author:** [Fu-Jie](https://github.com/Fu-Jie) | **Version:** 1.0.0 | **Project:** [OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions)

One-click batch installation of plugins from GitHub repositories into your OpenWebUI instance.

## Key Features

- **One-Click Install**: Install all plugins with a single command
- **Auto-Update**: Automatically updates previously installed plugins
- **GitHub Support**: Install plugins from any GitHub repository
- **Multi-Type Support**: Supports Pipe, Action, Filter, and Tool plugins
- **Confirmation**: Shows the plugin list before installing and allows selective installation
- **i18n**: Supports 11 languages

## Flow

```
User Input
     │
     ▼
┌─────────────────────────────────────┐
│ Discover Plugins from GitHub        │
│ (fetch file tree + parse .py)       │
└─────────────────────────────────────┘
     │
     ▼
┌─────────────────────────────────────┐
│ Filter by Type & Keywords           │
│ (tool/filter/pipe/action)           │
└─────────────────────────────────────┘
     │
     ▼
┌─────────────────────────────────────┐
│ Show Confirmation Dialog            │
│ (list plugins + exclude hint)       │
└─────────────────────────────────────┘
     │
     ├── [Cancel] → End
     │
     ▼
┌─────────────────────────────────────┐
│ Install to OpenWebUI                │
│ (update or create each plugin)      │
└─────────────────────────────────────┘
     │
     ▼
Done
```

## How to Use

1. Open OpenWebUI and go to **Workspace > Tools**
2. Install **Batch Install Plugins from GitHub** from the official marketplace
3. Enable this tool for your model/chat
4. Ask the model to install plugins

## Usage Examples

```
"Install all plugins"
"Install all plugins from github.com/username/repo"
"Install only pipe plugins"
"Install action and filter plugins"
"Install all plugins, exclude_keywords=copilot"
```

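As a rough illustration of how the type filter and `exclude_keywords` narrow the discovered plugin list (the function and record fields here are hypothetical, not the tool's actual API):

```python
def filter_plugins(plugins, types=None, exclude_keywords=None):
    """Keep plugins matching the requested types and not matching
    any excluded keyword (case-insensitive name match)."""
    keywords = [k.strip().lower() for k in (exclude_keywords or "").split(",") if k.strip()]
    selected = []
    for p in plugins:  # each p: {"name": ..., "type": "tool" | "filter" | "pipe" | "action"}
        if types and p["type"] not in types:
            continue
        if any(k in p["name"].lower() for k in keywords):
            continue
        selected.append(p)
    return selected


plugins = [
    {"name": "copilot_bridge", "type": "pipe"},
    {"name": "batch_install_plugins", "type": "tool"},
    {"name": "context_filter", "type": "filter"},
]
print(filter_plugins(plugins, types={"pipe", "filter"}, exclude_keywords="copilot"))
# → [{'name': 'context_filter', 'type': 'filter'}]
```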
## Popular Plugin Repositories

Here are some popular repositories with many plugins you can install:

### Community Collections

```
# Install all plugins from iChristGit's collection
"Install all plugins from iChristGit/OpenWebui-Tools"

# Install all tools from Haervwe's tools collection
"Install all plugins from Haervwe/open-webui-tools"

# Install all plugins from Classic298's repository
"Install all plugins from Classic298/open-webui-plugins"

# Install all functions from suurt8ll's collection
"Install all plugins from suurt8ll/open_webui_functions"

# Install only specific types (e.g., only tools)
"Install only tool plugins from iChristGit/OpenWebui-Tools"

# Exclude certain keywords while installing
"Install all plugins from Haervwe/open-webui-tools, exclude_keywords=test,deprecated"
```

### Supported Repositories

- `Fu-Jie/openwebui-extensions` - Default, official plugin collection
- `iChristGit/OpenWebui-Tools` - Comprehensive tool and plugin collection
- `Haervwe/open-webui-tools` - Specialized tools and utilities
- `Classic298/open-webui-plugins` - Various plugin implementations
- `suurt8ll/open_webui_functions` - Function-based plugins

## Default Repository

When no repository is specified, the tool defaults to `Fu-Jie/openwebui-extensions`.

## Plugin Detection Rules

### Fu-Jie/openwebui-extensions (Strict)

For the default repository, plugins must have:

1. A `.py` file containing `class Tools:`, `class Filter:`, `class Pipe:`, or `class Action:`
2. A docstring with `title:`, `description:`, and **`openwebui_id:`** fields
3. A filename that does not end with `_cn`

### Other GitHub Repositories

For other repositories, plugins must have:

1. A `.py` file containing `class Tools:`, `class Filter:`, `class Pipe:`, or `class Action:`
2. A docstring with `title:` and `description:` fields

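These detection rules amount to a small metadata scan over each candidate file. A sketch of how such a check might work (the helper name and sample docstring are illustrative, not the tool's exact parser):

```python
import re

SAMPLE = '''"""
title: Batch Install Plugins from GitHub
description: One-click batch install plugins.
openwebui_id: batch_install_plugins
"""
class Tools:
    pass
'''


def parse_plugin_metadata(source: str) -> dict:
    """Extract title/description/openwebui_id fields from a plugin file's
    leading docstring and detect which plugin class it defines."""
    meta = {}
    for field in ("title", "description", "openwebui_id"):
        m = re.search(rf"^{field}:\s*(.+)$", source, re.MULTILINE)
        if m:
            meta[field] = m.group(1).strip()
    m = re.search(r"^class (Tools|Filter|Pipe|Action):", source, re.MULTILINE)
    meta["type"] = m.group(1).lower() if m else None
    return meta
```

A file that yields no `type` or is missing the required fields would simply be skipped during discovery.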
## Configuration (Valves)

| Parameter | Default | Description |
| --- | --- | --- |
| `SKIP_KEYWORDS` | `test,verify,example,template,mock` | Comma-separated keywords to skip |
| `TIMEOUT` | `20` | Request timeout in seconds |

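The valve table above corresponds to a pydantic model along these lines (a sketch of the usual OpenWebUI tool convention, not the file's exact code):

```python
from pydantic import BaseModel, Field


class Valves(BaseModel):
    # Field names mirror the table above; OpenWebUI renders these
    # as editable tool settings in the admin UI.
    SKIP_KEYWORDS: str = Field(
        default="test,verify,example,template,mock",
        description="Comma-separated keywords to skip",
    )
    TIMEOUT: int = Field(default=20, description="Request timeout in seconds")
```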
## Confirmation Timeout

User confirmation dialogs have a default timeout of **2 minutes (120 seconds)**, allowing sufficient time for users to:

- Read and review the plugin list
- Make installation decisions
- Handle network delays

## Support

If this plugin has been useful, a star on [OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions) is a big motivation for me. Thank you for the support.
137  plugins/tools/batch-install-plugins/README_CN.md  Normal file
@@ -0,0 +1,137 @@
# Batch Install Plugins from GitHub

**作者:** [Fu-Jie](https://github.com/Fu-Jie) | **版本:** 1.0.0 | **项目:** [OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions)

一键将 GitHub 仓库中的插件批量安装到你的 OpenWebUI 实例。

## 主要功能

- 一键安装:单个命令安装所有插件
- 自动更新:自动更新之前安装过的插件
- GitHub 支持:从任意 GitHub 仓库安装插件
- 多类型支持:支持 Pipe、Action、Filter 和 Tool 插件
- 安装确认:安装前显示插件列表,支持选择性安装
- 国际化:支持 11 种语言

## 流程

```
用户输入
     │
     ▼
┌─────────────────────────────────────┐
│ 从 GitHub 发现插件                  │
│ (获取文件树 + 解析 .py 文件)        │
└─────────────────────────────────────┘
     │
     ▼
┌─────────────────────────────────────┐
│ 按类型和关键词过滤                  │
│ (tool/filter/pipe/action)           │
└─────────────────────────────────────┘
     │
     ▼
┌─────────────────────────────────────┐
│ 显示确认对话框                      │
│ (插件列表 + 排除提示)               │
└─────────────────────────────────────┘
     │
     ├── [取消] → 结束
     │
     ▼
┌─────────────────────────────────────┐
│ 安装到 OpenWebUI                    │
│ (更新或创建每个插件)                │
└─────────────────────────────────────┘
     │
     ▼
完成
```

## 使用方法

1. 打开 OpenWebUI,进入 **Workspace > Tools**
2. 从官方市场安装 **Batch Install Plugins from GitHub**
3. 为你的模型/对话启用此工具
4. 让模型调用工具方法

## 使用示例

```
"安装所有插件"
"从 github.com/username/repo 安装所有插件"
"只安装 pipe 插件"
"安装 action 和 filter 插件"
"安装所有插件, exclude_keywords=copilot"
```

## 热门插件仓库

这些是包含大量插件的热门仓库,你可以从中安装插件:

### 社区合集

```
# 从 iChristGit 的集合安装所有插件
"从 iChristGit/OpenWebui-Tools 安装所有插件"

# 从 Haervwe 的工具集合只安装工具
"从 Haervwe/open-webui-tools 安装所有插件"

# 从 Classic298 的仓库安装所有插件
"从 Classic298/open-webui-plugins 安装所有插件"

# 从 suurt8ll 的集合安装所有函数
"从 suurt8ll/open_webui_functions 安装所有插件"

# 只安装特定类型的插件(比如只安装工具)
"从 iChristGit/OpenWebui-Tools 只安装 tool 插件"

# 安装时排除特定关键词
"从 Haervwe/open-webui-tools 安装所有插件, exclude_keywords=test,deprecated"
```

### 支持的仓库

- `Fu-Jie/openwebui-extensions` - 默认的官方插件集合
- `iChristGit/OpenWebui-Tools` - 全面的工具和插件集合
- `Haervwe/open-webui-tools` - 专业的工具和实用程序
- `Classic298/open-webui-plugins` - 各种插件实现
- `suurt8ll/open_webui_functions` - 基于函数的插件

## 默认仓库

未指定仓库时,默认为 `Fu-Jie/openwebui-extensions`。

## 插件检测规则

### Fu-Jie/openwebui-extensions(严格模式)

默认仓库的插件必须满足:

1. 包含 `class Tools:`、`class Filter:`、`class Pipe:` 或 `class Action:` 的 `.py` 文件
2. Docstring 中包含 `title:`、`description:` 和 **`openwebui_id:`** 字段
3. 文件名不能以 `_cn` 结尾

### 其他 GitHub 仓库

其他仓库的插件必须满足:

1. 包含 `class Tools:`、`class Filter:`、`class Pipe:` 或 `class Action:` 的 `.py` 文件
2. Docstring 中包含 `title:` 和 `description:` 字段

## 配置(Valves)

| 参数 | 默认值 | 描述 |
| --- | --- | --- |
| `SKIP_KEYWORDS` | `test,verify,example,template,mock` | 逗号分隔的跳过关键词 |
| `TIMEOUT` | `20` | 请求超时时间(秒) |

## 确认超时时间

用户确认对话框的默认超时时间为 **2 分钟(120 秒)**,为用户提供充足的时间来:

- 阅读和查看插件列表
- 做出安装决定
- 处理网络延迟

## 支持

如果这个插件对你有帮助,欢迎到 [OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions) 点个 Star,这将是我持续改进的动力,感谢支持。
1262  plugins/tools/batch-install-plugins/batch_install_plugins.py  Normal file
File diff suppressed because it is too large.
67  plugins/tools/batch-install-plugins/v1.0.0.md  Normal file
@@ -0,0 +1,67 @@
[](https://openwebui.com/t/fujie/batch_install_plugins)

## Overview

Batch Install Plugins from GitHub is a new tool for OpenWebUI that enables one-click installation of multiple plugins directly from GitHub repositories. This initial release includes comprehensive features for discovering, filtering, and installing plugins with user confirmation, extensive multi-language support, and robust debugging capabilities for container deployments.

**[📖 README](https://github.com/Fu-Jie/openwebui-extensions/blob/main/plugins/tools/batch-install-plugins/README.md)**

## Features

- **One-Click Installation**: Install all plugins from a repository with a single command
- **Smart Plugin Discovery**: Parse Python files to extract metadata and validate plugins automatically
- **Multi-Type Support**: Support Pipe, Action, Filter, and Tool plugins in a single operation
- **Confirmation Dialog**: Display the plugin list before installation for user review and approval
- **Selective Installation**: Exclude specific plugins using keyword-based filtering
- **Smart Fallback**: Container deployments auto-retry with localhost:8080 if the primary connection fails
- **Enhanced Debugging**: Rich frontend JavaScript and backend Python logs for troubleshooting
- **Extended Timeout**: 120-second confirmation window for thoughtful decision-making
- **Async Architecture**: Non-blocking I/O operations for better performance
- **Full Internationalization**: Complete support for 11 languages with proper fallback maps
- **Auto-Update**: Automatically updates previously installed plugins
- **Self-Exclusion**: Automatically excludes the tool itself from batch operations

## Technical Highlights

- **httpx Integration**: Modern async HTTP client for reliable, non-blocking requests
- **Event Emitter Support**: Proper handling of OpenWebUI event injection with fallbacks
- **Timeout Protection**: Wrapped frontend execution with timeout guards to prevent hanging
- **Filtered List Consistency**: Uses a single source of truth for confirmation and installation
- **Error Localization**: All error messages are user-facing and properly localized across languages
- **Deployment Resilience**: Intelligent base URL resolution handles domain, localhost, and containerized environments

|
||||
## Supported Repositories

- **Default**: Fu-Jie/openwebui-extensions (strict validation)
- **Custom**: Any GitHub repository with Python plugin files

## Testing

Comprehensive regression tests included:

- Filtered installation list consistency
- Missing event emitter handling
- Confirmation timeout verification
- Full failure scenarios
- Localization completeness
- Connection error debug logging and smart fallback

All 6 tests pass successfully.
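The "filtered installation list consistency" test above guards a simple invariant: the list shown in the confirmation dialog and the list actually installed must come from one source. A minimal sketch of that invariant, with hypothetical names (`filter_plugins`, `plan_installation`) rather than the tool's real API:

```python
SELF_ID = "batch_install_plugins"  # the tool excludes itself from batch installs


def filter_plugins(discovered, exclude_keywords):
    """Single source of truth: self-exclusion plus keyword-based filtering."""
    return [
        p for p in discovered
        if p != SELF_ID and not any(k in p for k in exclude_keywords)
    ]


def plan_installation(discovered, exclude_keywords):
    selected = filter_plugins(discovered, exclude_keywords)
    confirmation_list = list(selected)   # what the user reviews
    installation_list = list(selected)   # what actually gets installed
    return confirmation_list, installation_list
```

Because both lists derive from the same `filter_plugins` call, a regression test only has to assert that they are equal.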
## Documentation

- English README with flow diagrams and usage examples
- Chinese README (README_CN.md) with complete translations
- Mirrored documentation for official docs site
- Plugin index entries in both English and Chinese

## Compatibility

- OpenWebUI: 0.2.x - 0.8.x
- Python: 3.9+
- Dependencies: httpx (async HTTP client), pydantic (type validation)

## Release Notes

- This initial v1.0.0 release includes complete plugin infrastructure with smart deployment handling.
- The plugin is designed to handle diverse deployment scenarios (domain, localhost, containerized) with minimal configuration.
67
plugins/tools/batch-install-plugins/v1.0.0_CN.md
Normal file
@@ -0,0 +1,67 @@
[](https://openwebui.com/t/fujie/batch_install_plugins)

## Overview

Batch Install Plugins from GitHub is a brand-new OpenWebUI tool that supports one-click installation of multiple plugins directly from a GitHub repository. This first release ships with comprehensive plugin discovery, filtering, and installation features, a user confirmation flow, extensive multi-language support, and robust debugging capabilities for container deployments.

**[📖 README](https://github.com/Fu-Jie/openwebui-extensions/blob/main/plugins/tools/batch-install-plugins/README_CN.md)**

## Key Features

- **One-Click Installation**: Install all plugins in a repository with a single command
- **Smart Plugin Discovery**: Parse Python files to extract metadata and validate plugins automatically
- **Multi-Type Support**: Support Pipe, Action, Filter, and Tool plugins in a single operation
- **Confirmation Dialog**: Show the plugin list before installation for user review and approval
- **Selective Installation**: Exclude specific plugins via keyword-based filtering
- **Smart Fallback**: In container environments, automatically retry localhost:8080 when the primary URL fails
- **Enhanced Debugging**: Rich frontend JavaScript and backend Python logs for easier troubleshooting
- **Extended Timeout**: A 120-second confirmation window gives users ample time to decide
- **Async Architecture**: Non-blocking I/O operations for better performance
- **Full Internationalization**: Supports 11 languages with proper fallback mechanisms
- **Auto-Update**: Automatically updates previously installed plugins
- **Self-Exclusion**: Automatically excludes the tool itself to avoid reinstalling it during batch operations

## Technical Highlights

- **httpx Integration**: A modern async HTTP client for more reliable, non-blocking requests
- **Event Injection Support**: Correctly handles OpenWebUI event injection, with fallback support
- **Timeout Protection**: Frontend execution is wrapped with timeout guards to prevent hangs
- **Filtered List Consistency**: Confirmation and installation use the same filtered list
- **Error Localization**: All error messages are user-facing and properly localized into each language
- **Deployment Resilience**: Smart base URL resolution handles domain, localhost, and containerized environments

## Supported Repositories

- **Default**: Fu-Jie/openwebui-extensions (strict validation)
- **Custom**: Python plugin files in any GitHub repository

## Test Coverage

Comprehensive regression tests included:

- Filtered installation list consistency
- Handling of a missing event emitter
- Confirmation timeout verification
- Full failure scenarios
- Localization completeness
- Connection error debug logging and smart fallback

All 6 tests pass.

## Documentation

- English README with flow diagrams and usage examples
- Chinese README (README_CN.md) with a complete translation
- Mirrored documentation for the official docs site
- Plugin index entries in both English and Chinese

## Compatibility

- OpenWebUI: 0.2.x - 0.8.x
- Python: 3.9+
- Dependencies: httpx (async HTTP client), pydantic (type validation)

## Release Notes

- This first v1.0.0 release includes the complete plugin infrastructure with smart deployment handling.
- The plugin is designed to handle diverse deployment scenarios (domain, localhost, containerized) with minimal configuration.
@@ -9,6 +9,7 @@ This directory contains automated scripts for deploying plugins in development t

1. **OpenWebUI Running**: Make sure OpenWebUI is running locally (default `http://localhost:3000`)
2. **API Key**: You need a valid OpenWebUI API key
3. **Environment File**: Create a `.env` file in this directory containing your API key:

```
api_key=sk-xxxxxxxxxxxxx
```

@@ -42,12 +43,14 @@ python deploy_filter.py --list

Used to deploy Filter-type plugins (such as message filtering, context compression, etc.).

**Key Features**:

- ✅ Auto-extracts metadata from Python files (version, author, description, etc.)
- ✅ Attempts to update existing plugins, creates if not found
- ✅ Supports multiple Filter plugin management
- ✅ Detailed error messages and connection diagnostics

**Usage**:

```bash
# Deploy async_context_compression (default)
python deploy_filter.py

@@ -62,6 +65,7 @@ python deploy_filter.py -l
```

**Workflow**:

1. Load API key from `.env`
2. Find target Filter plugin directory
3. Read Python source file

@@ -76,6 +80,7 @@ python deploy_filter.py -l

Used to deploy Pipe-type plugins (such as GitHub Copilot SDK).

**Usage**:

```bash
python deploy_pipe.py
```

@@ -101,6 +106,7 @@ Create a dedicated long-term API key in OpenWebUI Settings for deployment purpos

**Cause**: OpenWebUI is not running or port is different

**Solution**:

- Make sure OpenWebUI is running
- Check which port OpenWebUI is actually listening on (usually 3000)
- Edit the URL in the script if needed

@@ -110,6 +116,7 @@ Create a dedicated long-term API key in OpenWebUI Settings for deployment purpos

**Cause**: `.env` file was not created

**Solution**:

```bash
echo "api_key=sk-your-api-key-here" > .env
```

@@ -119,6 +126,7 @@ echo "api_key=sk-your-api-key-here" > .env

**Cause**: Filter directory name is incorrect

**Solution**:

```bash
# List all available Filters
python deploy_filter.py --list

@@ -129,6 +137,7 @@ python deploy_filter.py --list

**Cause**: API key is invalid or expired

**Solution**:

1. Verify your API key is valid
2. Generate a new API key
3. Update the `.env` file

@@ -177,7 +186,8 @@ python deploy_filter.py async-context-compression

## Security Considerations

⚠️ **Important**:
- ✅ Add `.env` file to `.gitignore` (avoid committing sensitive info)
- ✅ Never commit API keys to version control
- ✅ Use only on trusted networks

@@ -7,6 +7,7 @@ Added a complete local deployment toolchain for the `async_context_compression`

## 📋 New Files

### 1. **deploy_filter.py** — Filter Plugin Deployment Script

- **Location**: `scripts/deploy_filter.py`
- **Function**: Auto-deploy Filter-type plugins to local OpenWebUI instance
- **Features**:

@@ -19,6 +20,7 @@ Added a complete local deployment toolchain for the `async_context_compression`

- **Code Lines**: ~300

### 2. **DEPLOYMENT_GUIDE.md** — Complete Deployment Guide

- **Location**: `scripts/DEPLOYMENT_GUIDE.md`
- **Contents**:
  - Prerequisites and quick start

@@ -28,6 +30,7 @@ Added a complete local deployment toolchain for the `async_context_compression`

  - Step-by-step workflow examples

### 3. **QUICK_START.md** — Quick Reference Card

- **Location**: `scripts/QUICK_START.md`
- **Contents**:
  - One-line deployment command

@@ -37,6 +40,7 @@ Added a complete local deployment toolchain for the `async_context_compression`

  - CI/CD integration examples

### 4. **test_deploy_filter.py** — Unit Test Suite

- **Location**: `tests/scripts/test_deploy_filter.py`
- **Test Coverage**:
  - ✅ Filter file discovery (3 tests)

@@ -138,6 +142,7 @@ openwebui_id: b1655bc8-6de9-4cad-8cb5-a6f7829a02ce
```

**Supported Metadata Fields**:

- `title` — Filter display name ✅
- `id` — Unique identifier ✅
- `author` — Author name ✅
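The metadata extraction described above (reading fields such as `title`, `id`, `author`, and `version` from a plugin's Python file) can be sketched as below. This assumes the "key: value" frontmatter style shown elsewhere in this guide (e.g. `version: 1.3.0` in the plugin docstring); it is a minimal illustration, not the actual `deploy_filter.py` implementation.

```python
import re

# Fields this sketch recognizes; the real script may support more.
FIELDS = ("title", "id", "author", "version")


def extract_metadata(source: str) -> dict:
    """Parse 'key: value' lines from the first triple-quoted docstring."""
    meta = {}
    m = re.search(r'"""(.*?)"""', source, re.DOTALL)  # first docstring only
    if m:
        for line in m.group(1).splitlines():
            key, _, value = line.partition(":")
            if key.strip() in FIELDS and value.strip():
                meta[key.strip()] = value.strip()
    return meta
```

Lines that are not `key: value` pairs, or whose key is not in `FIELDS`, are simply ignored, so ordinary docstring prose does not break the parse.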
@@ -335,17 +340,20 @@ Metadata Extraction and Delivery

### Debugging Tips

1. **Enable Verbose Logging**:

```bash
python deploy_filter.py 2>&1 | tee deploy.log
```

2. **Test API Connection**:

```bash
curl -X GET http://localhost:3000/api/v1/functions \
  -H "Authorization: Bearer $API_KEY"
```

3. **Verify .env File**:

```bash
grep "api_key=" scripts/.env
```

@@ -73,12 +73,14 @@ python deploy_async_context_compression.py
```

**Features**:

- ✅ Optimized specifically for async_context_compression
- ✅ Clear deployment steps and confirmation
- ✅ Friendly error messages
- ✅ Shows next steps after successful deployment

**Sample Output**:

```
======================================================================
🚀 Deploying Async Context Compression Filter Plugin

@@ -117,6 +119,7 @@ python deploy_filter.py --list
```

**Features**:

- ✅ Generic Filter deployment tool
- ✅ Supports multiple plugins
- ✅ Auto metadata extraction

@@ -142,6 +145,7 @@ python deploy_tool.py openwebui-skills-manager
```

**Features**:

- ✅ Supports Tools plugin deployment
- ✅ Auto-detects `Tools` class definition
- ✅ Smart update/create logic

@@ -290,6 +294,7 @@ git status # should not show .env
```

**Solution**:

```bash
# 1. Check if OpenWebUI is running
curl http://localhost:3000

@@ -309,6 +314,7 @@ curl http://localhost:3000
```

**Solution**:

```bash
echo "api_key=sk-your-api-key" > .env
cat .env  # verify file created

@@ -321,6 +327,7 @@ cat .env  # verify file created
```

**Solution**:

```bash
# List all available Filters
python deploy_filter.py --list

@@ -337,6 +344,7 @@ python deploy_filter.py async-context-compression
```

**Solution**:

```bash
# 1. Verify API key is correct
grep "api_key=" .env

@@ -370,7 +378,7 @@ python deploy_async_context_compression.py

### Method 2: Verify in OpenWebUI

1. Open OpenWebUI: <http://localhost:3000>
2. Go to Settings → Filters
3. Check if 'Async Context Compression' is listed
4. Verify version number is correct (should be latest)

@@ -380,6 +388,7 @@ python deploy_async_context_compression.py

1. Open a new conversation
2. Enable 'Async Context Compression' Filter
3. Have a multiple-turn conversation and verify compression/summarization works

## 💡 Advanced Usage

### Automated Deploy & Test

@@ -473,4 +482,3 @@ Newly created deployment-related files:

**Last Updated**: 2026-03-09
**Script Status**: ✅ Ready for production
**Test Coverage**: 10/10 passed ✅

@@ -5,6 +5,7 @@

✅ **Yes, re-deploying automatically updates the plugin!**

The deployment script uses a **smart two-stage strategy**:

1. 🔄 **Try UPDATE First** (if plugin exists)
2. 📝 **Auto CREATE** (if update fails — plugin doesn't exist)
@@ -54,6 +55,7 @@ if response.status_code == 200:
```

**What Happens**:

- Send **POST** to `/api/v1/functions/id/{filter_id}/update`
- If returns **HTTP 200**, plugin exists and update succeeded
- Includes:

@@ -84,6 +86,7 @@ if response.status_code != 200:
```

**What Happens**:

- If update fails (HTTP ≠ 200), auto-attempt create
- Send **POST** to `/api/v1/functions/create`
- Uses **same payload** (code, metadata identical)

@@ -103,6 +106,7 @@ $ python deploy_async_context_compression.py
```

**What Happens**:

1. Try UPDATE → fails (HTTP 404 — plugin doesn't exist)
2. Auto-try CREATE → succeeds (HTTP 200)
3. Plugin created in OpenWebUI

@@ -121,6 +125,7 @@ $ python deploy_async_context_compression.py
```

**What Happens**:

1. Read modified code
2. Try UPDATE → succeeds (HTTP 200 — plugin exists)
3. Plugin in OpenWebUI updated to latest code

@@ -147,6 +152,7 @@ $ python deploy_async_context_compression.py
```

**Characteristics**:

- 🚀 Each update takes only 5 seconds
- 📝 Each is an incremental update
- ✅ No need to restart OpenWebUI

@@ -181,11 +187,13 @@ version: 1.3.0
```

**Each deployment**:

1. Script reads version from docstring
2. Sends this version in manifest to OpenWebUI
3. If you change version in code, deployment updates to new version

**Best Practice**:

```bash
# 1. Modify code
vim async_context_compression.py

@@ -300,6 +308,7 @@ Usually **not needed** because:

4. ✅ Failures auto-rollback

But if you really do need control, you can:

- Manually modify the script (edit `deploy_filter.py`)
- Or call the specific UPDATE/CREATE API endpoints separately

@@ -323,6 +332,7 @@ Usually **not needed** because:

### Q: Can I deploy multiple plugins at the same time?

✅ **Yes!**

```bash
python deploy_filter.py async-context-compression
python deploy_filter.py folder-memory

@@ -337,6 +347,7 @@ python deploy_filter.py context_enhancement_filter

---

**Summary**: The deployment script's update mechanism is fully automated. A developer only needs to modify the code; every run of `deploy_async_context_compression.py` automatically:

1. ✅ Creates (first time) or updates (subsequently) the plugin
2. ✅ Extracts the latest metadata and version number from the code
3. ✅ Takes effect immediately, with no OpenWebUI restart needed
202
scripts/agent_sync.py
Executable file
@@ -0,0 +1,202 @@
#!/usr/bin/env python3
"""
🤖 AGENT SYNC TOOL v2.2 (Unified Semantic Edition)
-------------------------------------------------
Consolidated and simplified command set based on Copilot's architectural feedback.
Native support for Study, Task, and Broadcast workflows.
Maintains Sisyphus's advanced task management (task_queue, subscriptions).
"""
import sqlite3
import os
import sys
import argparse

DB_PATH = os.path.join(os.getcwd(), ".agent/agent_hub.db")


def get_connection():
    os.makedirs(os.path.dirname(DB_PATH), exist_ok=True)
    return sqlite3.connect(DB_PATH)


def init_db():
    conn = get_connection()
    cursor = conn.cursor()
    cursor.executescript('''
    CREATE TABLE IF NOT EXISTS agents (
        id TEXT PRIMARY KEY,
        name TEXT,
        task TEXT,
        status TEXT DEFAULT 'idle',
        last_seen TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    );
    CREATE TABLE IF NOT EXISTS file_locks (
        file_path TEXT PRIMARY KEY,
        agent_id TEXT,
        lock_type TEXT DEFAULT 'write',
        timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    );
    CREATE TABLE IF NOT EXISTS research_log (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        agent_id TEXT,
        topic TEXT,
        content TEXT,
        note_type TEXT DEFAULT 'note', -- 'note', 'study', 'conclusion'
        is_resolved INTEGER DEFAULT 0,
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    );
    CREATE TABLE IF NOT EXISTS task_queue (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        initiator TEXT,
        task_type TEXT, -- 'research', 'collab', 'fix'
        topic TEXT,
        description TEXT,
        priority TEXT DEFAULT 'normal',
        status TEXT DEFAULT 'pending', -- 'pending', 'active', 'completed'
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    );
    CREATE TABLE IF NOT EXISTS task_subscriptions (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        task_id INTEGER,
        agent_id TEXT,
        role TEXT, -- 'lead', 'reviewer', 'worker', 'observer'
        FOREIGN KEY(task_id) REFERENCES task_queue(id)
    );
    CREATE TABLE IF NOT EXISTS broadcasts (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        sender_id TEXT,
        type TEXT,
        payload TEXT,
        active INTEGER DEFAULT 1,
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    );
    CREATE TABLE IF NOT EXISTS global_settings (
        key TEXT PRIMARY KEY, value TEXT
    );
    ''')
    cursor.execute("INSERT OR IGNORE INTO global_settings (key, value) VALUES ('mode', 'isolation')")
    conn.commit()
    conn.close()
    print("✅ MACP 2.2 Semantic Kernel Active")


def get_status():
    conn = get_connection(); cursor = conn.cursor()
    print("\n--- 🛰️ Agent Fleet ---")
    for r in cursor.execute("SELECT id, name, status, task FROM agents"):
        print(f"  [{r[2].upper()}] {r[1]} ({r[0]}) | Task: {r[3]}")

    print("\n--- 📋 Global Task Queue ---")
    for r in cursor.execute("SELECT id, topic, task_type, priority, status FROM task_queue WHERE status != 'completed'"):
        print(f"  #{r[0]} [{r[2].upper()}] {r[1]} | {r[3]} | {r[4]}")

    print("\n--- 📚 Active Studies ---")
    for r in cursor.execute("SELECT topic, agent_id FROM research_log WHERE note_type='study' AND is_resolved=0"):
        print(f"  🔬 {r[0]} (by {r[1]})")

    print("\n--- 📢 Live Broadcasts ---")
    for r in cursor.execute("SELECT sender_id, type, payload FROM broadcasts WHERE active=1 ORDER BY created_at DESC LIMIT 3"):
        print(f"  📣 {r[0]} [{r[1].upper()}]: {r[2]}")

    print("\n--- 🔒 File Locks ---")
    for r in cursor.execute("SELECT file_path, agent_id, lock_type FROM file_locks ORDER BY timestamp DESC LIMIT 20"):
        print(f"  {r[0]} -> {r[1]} ({r[2]})")

    cursor.execute("SELECT value FROM global_settings WHERE key='mode'")
    mode = cursor.fetchone()[0]
    print(f"\n🌍 Project Mode: {mode.upper()}")
    conn.close()


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    subparsers = parser.add_subparsers(dest="command")

    # Base commands
    subparsers.add_parser("init")
    subparsers.add_parser("status")
    subparsers.add_parser("check")
    subparsers.add_parser("ping")

    reg = subparsers.add_parser("register")
    reg.add_argument("id"); reg.add_argument("name"); reg.add_argument("task")

    # Lock commands
    lock = subparsers.add_parser("lock")
    lock.add_argument("id"); lock.add_argument("path")
    unlock = subparsers.add_parser("unlock")
    unlock.add_argument("id"); unlock.add_argument("path")

    # Research & Note commands
    note = subparsers.add_parser("note")
    note.add_argument("id"); note.add_argument("topic"); note.add_argument("content")
    note.add_argument("--type", default="note")

    # Semantic Workflows (The Unified Commands)
    study = subparsers.add_parser("study")
    study.add_argument("id"); study.add_argument("topic"); study.add_argument("--desc", default=None)

    resolve = subparsers.add_parser("resolve")
    resolve.add_argument("id"); resolve.add_argument("topic"); resolve.add_argument("conclusion")

    # Task Management (The Advanced Commands)
    assign = subparsers.add_parser("assign")
    assign.add_argument("id"); assign.add_argument("target"); assign.add_argument("topic")
    assign.add_argument("--role", default="worker"); assign.add_argument("--priority", default="normal")

    bc = subparsers.add_parser("broadcast")
    bc.add_argument("id"); bc.add_argument("type"); bc.add_argument("payload")

    args = parser.parse_args()
    if args.command == "init":
        init_db()
    elif args.command in ("status", "check", "ping"):
        get_status()
    elif args.command == "register":
        conn = get_connection(); cursor = conn.cursor()
        cursor.execute("INSERT OR REPLACE INTO agents (id, name, task, status, last_seen) VALUES (?, ?, ?, 'active', CURRENT_TIMESTAMP)", (args.id, args.name, args.task))
        conn.commit(); conn.close()
        print(f"🤖 Registered: {args.id}")
    elif args.command == "lock":
        conn = get_connection(); cursor = conn.cursor()
        try:
            cursor.execute("INSERT INTO file_locks (file_path, agent_id) VALUES (?, ?)", (args.path, args.id))
            conn.commit(); print(f"🔒 Locked {args.path}")
        except sqlite3.IntegrityError:  # PRIMARY KEY conflict: file already locked
            print(f"❌ Lock conflict on {args.path}"); sys.exit(1)
        finally:
            conn.close()
    elif args.command == "unlock":
        conn = get_connection(); cursor = conn.cursor()
        cursor.execute("DELETE FROM file_locks WHERE file_path=? AND agent_id=?", (args.path, args.id))
        conn.commit(); conn.close(); print(f"🔓 Unlocked {args.path}")
    elif args.command == "study":
        conn = get_connection(); cursor = conn.cursor()
        cursor.execute("INSERT INTO research_log (agent_id, topic, content, note_type) VALUES (?, ?, ?, 'study')", (args.id, args.topic, args.desc or "Study started"))
        # No WHERE clause: flips every agent into project-wide research mode
        cursor.execute("UPDATE agents SET status = 'researching'")
        cursor.execute("INSERT INTO broadcasts (sender_id, type, payload) VALUES (?, 'research', ?)", (args.id, f"NEW STUDY: {args.topic}"))
        cursor.execute("UPDATE global_settings SET value = ? WHERE key = 'mode'", (f"RESEARCH: {args.topic}",))
        conn.commit(); conn.close()
        print(f"🔬 Study '{args.topic}' initiated.")
    elif args.command == "resolve":
        conn = get_connection(); cursor = conn.cursor()
        cursor.execute("UPDATE research_log SET is_resolved = 1 WHERE topic = ?", (args.topic,))
        cursor.execute("INSERT INTO research_log (agent_id, topic, content, note_type, is_resolved) VALUES (?, ?, ?, 'conclusion', 1)", (args.id, args.topic, args.conclusion))
        cursor.execute("UPDATE global_settings SET value = 'isolation' WHERE key = 'mode'")
        cursor.execute("UPDATE agents SET status = 'active' WHERE status = 'researching'")
        conn.commit(); conn.close()
        print(f"✅ Study '{args.topic}' resolved.")
    elif args.command == "assign":
        conn = get_connection(); cursor = conn.cursor()
        cursor.execute(
            "INSERT INTO task_queue (initiator, task_type, topic, description, priority, status) VALUES (?, 'task', ?, ?, ?, 'pending')",
            (args.id, args.topic, f"Assigned to {args.target}: {args.topic}", args.priority),
        )
        task_id = cursor.lastrowid
        cursor.execute("INSERT INTO task_subscriptions (task_id, agent_id, role) VALUES (?, ?, ?)", (task_id, args.target, args.role))
        conn.commit(); conn.close()
        print(f"📋 Task #{task_id} assigned to {args.target}")
    elif args.command == "broadcast":
        conn = get_connection(); cursor = conn.cursor()
        cursor.execute("UPDATE broadcasts SET active = 0 WHERE type = ?", (args.type,))
        cursor.execute("INSERT INTO broadcasts (sender_id, type, payload) VALUES (?, ?, ?)", (args.id, args.type, args.payload))
        conn.commit(); conn.close()
        print(f"📡 Broadcast: {args.payload}")
    elif args.command == "note":
        conn = get_connection(); cursor = conn.cursor()
        cursor.execute("INSERT INTO research_log (agent_id, topic, content, note_type) VALUES (?, ?, ?, ?)", (args.id, args.topic, args.content, args.type))
        conn.commit(); conn.close()
        print("📝 Note added.")
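The lock command in `agent_sync.py` relies on a single SQLite invariant: `file_path` is the PRIMARY KEY of `file_locks`, so a second INSERT for the same path raises `sqlite3.IntegrityError`, which the CLI reports as a lock conflict. A self-contained demonstration using an in-memory database:

```python
import sqlite3

# Same constraint as agent_sync.py's file_locks table: one lock per path.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE file_locks (file_path TEXT PRIMARY KEY, agent_id TEXT)")
conn.execute("INSERT INTO file_locks VALUES (?, ?)", ("src/app.py", "agent-a"))

try:
    # A second agent tries to lock the same file.
    conn.execute("INSERT INTO file_locks VALUES (?, ?)", ("src/app.py", "agent-b"))
    outcome = "acquired"
except sqlite3.IntegrityError:
    outcome = "conflict"

print(outcome)  # → conflict
```

The database itself enforces mutual exclusion, so no application-level check-then-insert race is possible between cooperating agents.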
847
scripts/agent_sync_v2.py
Executable file
@@ -0,0 +1,847 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
🤖 AGENT SYNC TOOL v2.0 - MULTI-AGENT COOPERATION PROTOCOL (MACP)
|
||||
---------------------------------------------------------
|
||||
Enhanced collaboration commands for seamless multi-agent synergy.
|
||||
|
||||
QUICK COMMANDS:
|
||||
@research <topic> - Start a joint research topic
|
||||
@join <topic> - Join an active research topic
|
||||
@find <topic> <content> - Post a finding to research topic
|
||||
@consensus <topic> - Generate consensus document
|
||||
@assign <agent> <task> - Assign task to specific agent
|
||||
@notify <message> - Broadcast to all agents
|
||||
@handover <agent> - Handover current task
|
||||
@poll <question> - Start a quick poll
|
||||
@switch <agent> - Request switch to specific agent
|
||||
|
||||
WORKFLOW: @research -> @find (xN) -> @consensus -> @assign
|
||||
"""
|
||||
import sqlite3
|
||||
import os
|
||||
import sys
|
||||
import argparse
|
||||
import json
|
||||
from datetime import datetime, timedelta
|
||||
from typing import List, Dict, Optional
|
||||
|
||||
DB_PATH = os.path.join(os.getcwd(), ".agent/agent_hub.db")
|
||||
|
||||
def get_connection():
|
||||
os.makedirs(os.path.dirname(DB_PATH), exist_ok=True)
|
||||
return sqlite3.connect(DB_PATH)
|
||||
|
||||
def init_db():
|
||||
conn = get_connection()
|
||||
cursor = conn.cursor()
|
||||
cursor.executescript('''
|
||||
CREATE TABLE IF NOT EXISTS agents (
|
||||
id TEXT PRIMARY KEY,
|
||||
name TEXT,
|
||||
task TEXT,
|
||||
status TEXT DEFAULT 'idle',
|
||||
current_research TEXT,
|
||||
last_seen TIMESTAMP DEFAULT CURRENT_TIMESTAMP
|
||||
);
|
||||
CREATE TABLE IF NOT EXISTS file_locks (
|
||||
file_path TEXT PRIMARY KEY,
|
||||
agent_id TEXT,
|
||||
lock_type TEXT,
|
||||
timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
|
||||
FOREIGN KEY(agent_id) REFERENCES agents(id)
|
||||
);
|
||||
CREATE TABLE IF NOT EXISTS research_log (
|
||||
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
||||
agent_id TEXT,
|
||||
topic TEXT,
|
||||
content TEXT,
|
||||
finding_type TEXT DEFAULT 'note',
|
||||
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
|
||||
FOREIGN KEY(agent_id) REFERENCES agents(id)
|
||||
);
|
||||
CREATE TABLE IF NOT EXISTS research_topics (
|
||||
topic TEXT PRIMARY KEY,
|
||||
status TEXT DEFAULT 'active',
|
||||
initiated_by TEXT,
|
||||
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
|
||||
completed_at TIMESTAMP
|
||||
);
|
||||
CREATE TABLE IF NOT EXISTS agent_research_participation (
|
||||
agent_id TEXT,
|
||||
topic TEXT,
|
||||
joined_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
|
||||
PRIMARY KEY (agent_id, topic)
|
||||
);
|
||||
CREATE TABLE IF NOT EXISTS task_assignments (
|
||||
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
||||
agent_id TEXT,
|
||||
task TEXT,
|
||||
assigned_by TEXT,
|
||||
status TEXT DEFAULT 'pending',
|
||||
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
|
||||
completed_at TIMESTAMP
|
||||
);
|
||||
CREATE TABLE IF NOT EXISTS notifications (
|
||||
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
||||
agent_id TEXT,
|
||||
message TEXT,
|
||||
is_broadcast BOOLEAN DEFAULT 0,
|
||||
is_read BOOLEAN DEFAULT 0,
|
||||
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
|
||||
);
|
||||
CREATE TABLE IF NOT EXISTS polls (
|
||||
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
||||
question TEXT,
|
||||
created_by TEXT,
|
||||
status TEXT DEFAULT 'active',
|
||||
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
|
||||
);
|
||||
CREATE TABLE IF NOT EXISTS poll_votes (
|
||||
poll_id INTEGER,
|
||||
agent_id TEXT,
|
||||
vote TEXT,
|
||||
voted_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
|
||||
PRIMARY KEY (poll_id, agent_id)
|
||||
);
|
||||
CREATE TABLE IF NOT EXISTS global_settings (
|
||||
key TEXT PRIMARY KEY,
|
||||
value TEXT
|
||||
);
|
||||
''')
|
||||
cursor.execute("INSERT OR IGNORE INTO global_settings (key, value) VALUES ('mode', 'isolation')")
|
||||
conn.commit()
|
||||
conn.close()
|
||||
print(f"✅ Agent Hub v2.0 initialized at {DB_PATH}")
|
||||
|
||||
# ============ AGENT MANAGEMENT ============
|
||||
|
||||
def register_agent(agent_id, name, task, status="idle"):
|
||||
conn = get_connection()
|
||||
cursor = conn.cursor()
|
||||
cursor.execute('''
|
||||
INSERT OR REPLACE INTO agents (id, name, task, status, last_seen)
|
||||
VALUES (?, ?, ?, ?, CURRENT_TIMESTAMP)
|
||||
''', (agent_id, name, task, status))
|
||||
conn.commit()
|
||||
conn.close()
|
||||
print(f"🤖 Agent '{name}' ({agent_id}) registered.")
|
||||
|
||||
def update_agent_status(agent_id, status, research_topic=None):
|
||||
conn = get_connection()
|
||||
cursor = conn.cursor()
|
||||
if research_topic:
|
||||
cursor.execute('''
|
||||
UPDATE agents SET status = ?, current_research = ?, last_seen = CURRENT_TIMESTAMP
|
||||
WHERE id = ?
|
||||
''', (status, research_topic, agent_id))
|
||||
else:
|
||||
cursor.execute('''
|
||||
UPDATE agents SET status = ?, last_seen = CURRENT_TIMESTAMP
|
||||
WHERE id = ?
|
||||
''', (status, agent_id))
|
||||
conn.commit()
|
||||
conn.close()
|
||||
|
||||
# ============ RESEARCH COLLABORATION ============
|
||||
|
||||
def start_research(agent_id, topic):
|
||||
"""@research - Start a new research topic and notify all agents"""
|
||||
conn = get_connection()
|
||||
cursor = conn.cursor()
|
||||
|
||||
# Create research topic
|
||||
try:
|
||||
cursor.execute('''
|
||||
INSERT INTO research_topics (topic, status, initiated_by)
|
||||
VALUES (?, 'active', ?)
|
||||
''', (topic, agent_id))
|
||||
except sqlite3.IntegrityError:
|
||||
print(f"⚠️ Research topic '{topic}' already exists")
|
||||
conn.close()
|
||||
return
|
||||
|
||||
# Add initiator as participant
|
||||
cursor.execute('''
|
||||
INSERT OR IGNORE INTO agent_research_participation (agent_id, topic)
|
||||
VALUES (?, ?)
|
||||
''', (agent_id, topic))
|
||||
|
||||
# Update agent status
|
||||
cursor.execute('''
|
||||
UPDATE agents SET status = 'researching', current_research = ?
|
||||
WHERE id = ?
|
||||
''', (topic, agent_id))
|
||||
|
||||
# Notify all other agents
|
||||
cursor.execute("SELECT id FROM agents WHERE id != ?", (agent_id,))
|
||||
other_agents = cursor.fetchall()
|
||||
for (other_id,) in other_agents:
|
||||
cursor.execute('''
|
||||
INSERT INTO notifications (agent_id, message, is_broadcast)
|
||||
VALUES (?, ?, 0)
|
||||
''', (other_id, f"🔬 New research started: '{topic}' by {agent_id}. Use '@join {topic}' to participate."))
|
||||
|
||||
conn.commit()
|
||||
conn.close()
|
||||
|
||||
print(f"🔬 Research topic '{topic}' started by {agent_id}")
|
||||
print(f"📢 Notified {len(other_agents)} other agents")
|
||||
|
||||
def join_research(agent_id, topic):
|
||||
"""@join - Join an active research topic"""
|
||||
conn = get_connection()
|
||||
cursor = conn.cursor()
|
||||
|
||||
# Check if topic exists and is active
|
||||
cursor.execute("SELECT status FROM research_topics WHERE topic = ?", (topic,))
|
||||
result = cursor.fetchone()
|
||||
if not result:
|
||||
print(f"❌ Research topic '{topic}' not found")
|
||||
conn.close()
|
||||
return
|
||||
if result[0] != 'active':
|
||||
print(f"⚠️ Research topic '{topic}' is {result[0]}")
|
||||
conn.close()
|
||||
return
|
||||
|
||||
# Add participant
|
||||
cursor.execute('''
|
||||
INSERT OR IGNORE INTO agent_research_participation (agent_id, topic)
|
||||
VALUES (?, ?)
|
||||
''', (agent_id, topic))
|
||||
|
||||
# Update agent status
|
||||
cursor.execute('''
|
||||
UPDATE agents SET status = 'researching', current_research = ?
|
||||
WHERE id = ?
|
||||
''', (topic, agent_id))
|
||||
|
||||
conn.commit()
|
||||
conn.close()
|
||||
print(f"✅ {agent_id} joined research: '{topic}'")
|
||||
|
||||
def post_finding(agent_id, topic, content, finding_type="note"):
    """@find - Post a finding to research topic"""
    conn = get_connection()
    cursor = conn.cursor()

    # Check if topic exists
    cursor.execute("SELECT status FROM research_topics WHERE topic = ?", (topic,))
    result = cursor.fetchone()
    if not result:
        print(f"❌ Research topic '{topic}' not found")
        conn.close()
        return
    if result[0] != 'active':
        print(f"⚠️ Research topic '{topic}' is {result[0]}")
        conn.close()
        return

    # Add finding
    cursor.execute('''
        INSERT INTO research_log (agent_id, topic, content, finding_type)
        VALUES (?, ?, ?, ?)
    ''', (agent_id, topic, content, finding_type))

    # Update agent status
    cursor.execute('''
        UPDATE agents SET last_seen = CURRENT_TIMESTAMP WHERE id = ?
    ''', (agent_id,))

    conn.commit()
    conn.close()
    print(f"📝 Finding added to '{topic}' by {agent_id}")

def generate_consensus(topic):
    """@consensus - Generate consensus document from research findings"""
    conn = get_connection()
    cursor = conn.cursor()

    # Get all findings
    cursor.execute('''
        SELECT agent_id, content, finding_type, created_at
        FROM research_log
        WHERE topic = ?
        ORDER BY created_at
    ''', (topic,))
    findings = cursor.fetchall()

    if not findings:
        print(f"⚠️ No findings found for topic '{topic}'")
        conn.close()
        return

    # Get participants
    cursor.execute('''
        SELECT agent_id FROM agent_research_participation WHERE topic = ?
    ''', (topic,))
    participants = [row[0] for row in cursor.fetchall()]

    # Mark topic as completed
    cursor.execute('''
        UPDATE research_topics
        SET status = 'completed', completed_at = CURRENT_TIMESTAMP
        WHERE topic = ?
    ''', (topic,))

    conn.commit()
    conn.close()

    # Generate consensus document
    consensus_dir = os.path.join(os.getcwd(), ".agent/consensus")
    os.makedirs(consensus_dir, exist_ok=True)

    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    filename = f"{topic.replace(' ', '_').replace('/', '_')}_{timestamp}.md"
    filepath = os.path.join(consensus_dir, filename)

    with open(filepath, 'w', encoding='utf-8') as f:
        f.write(f"# 🎯 Consensus: {topic}\n\n")
        f.write(f"**Generated**: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n\n")
        f.write(f"**Participants**: {', '.join(participants)}\n\n")
        f.write("---\n\n")

        for agent_id, content, finding_type, created_at in findings:
            f.write(f"## [{finding_type.upper()}] {agent_id}\n\n")
            f.write(f"*{created_at}*\n\n")
            f.write(f"{content}\n\n")

    print(f"✅ Consensus generated: {filepath}")
    print(f"📊 Total findings: {len(findings)}")
    print(f"👥 Participants: {len(participants)}")

    return filepath

# ============ TASK MANAGEMENT ============

def assign_task(assigned_by, agent_id, task):
    """@assign - Assign task to specific agent"""
    conn = get_connection()
    cursor = conn.cursor()

    cursor.execute('''
        INSERT INTO task_assignments (agent_id, task, assigned_by)
        VALUES (?, ?, ?)
    ''', (agent_id, task, assigned_by))

    # Notify the agent
    cursor.execute('''
        INSERT INTO notifications (agent_id, message, is_broadcast)
        VALUES (?, ?, 0)
    ''', (agent_id, f"📋 New task assigned by {assigned_by}: {task}"))

    conn.commit()
    conn.close()
    print(f"📋 Task assigned to {agent_id} by {assigned_by}")

def list_tasks(agent_id=None):
    """List tasks for an agent or all agents"""
    conn = get_connection()
    cursor = conn.cursor()

    if agent_id:
        cursor.execute('''
            SELECT id, task, assigned_by, status, created_at
            FROM task_assignments
            WHERE agent_id = ? AND status != 'completed'
            ORDER BY created_at DESC
        ''', (agent_id,))
        tasks = cursor.fetchall()

        print(f"\n📋 Tasks for {agent_id}:")
        for task_id, task, assigned_by, status, created_at in tasks:
            print(f" [{status.upper()}] #{task_id}: {task} (from {assigned_by})")
    else:
        cursor.execute('''
            SELECT agent_id, id, task, assigned_by, status
            FROM task_assignments
            WHERE status != 'completed'
            ORDER BY agent_id
        ''')
        tasks = cursor.fetchall()

        print(f"\n📋 All pending tasks:")
        current_agent = None
        for agent, task_id, task, assigned_by, status in tasks:
            if agent != current_agent:
                print(f"\n {agent}:")
                current_agent = agent
            print(f" [{status.upper()}] #{task_id}: {task}")

    conn.close()

def complete_task(task_id):
    """Mark a task as completed"""
    conn = get_connection()
    cursor = conn.cursor()

    cursor.execute('''
        UPDATE task_assignments
        SET status = 'completed', completed_at = CURRENT_TIMESTAMP
        WHERE id = ?
    ''', (task_id,))

    if cursor.rowcount > 0:
        print(f"✅ Task #{task_id} marked as completed")
    else:
        print(f"❌ Task #{task_id} not found")

    conn.commit()
    conn.close()

# ============ NOTIFICATIONS ============

def broadcast_message(from_agent, message):
    """@notify - Broadcast message to all agents"""
    conn = get_connection()
    cursor = conn.cursor()

    cursor.execute("SELECT id FROM agents WHERE id != ?", (from_agent,))
    other_agents = cursor.fetchall()

    for (agent_id,) in other_agents:
        cursor.execute('''
            INSERT INTO notifications (agent_id, message, is_broadcast)
            VALUES (?, ?, 1)
        ''', (agent_id, f"📢 Broadcast from {from_agent}: {message}"))

    conn.commit()
    conn.close()
    print(f"📢 Broadcast sent to {len(other_agents)} agents")

def get_notifications(agent_id, unread_only=False):
    """Get notifications for an agent"""
    conn = get_connection()
    cursor = conn.cursor()

    if unread_only:
        cursor.execute('''
            SELECT id, message, is_broadcast, created_at
            FROM notifications
            WHERE agent_id = ? AND is_read = 0
            ORDER BY created_at DESC
        ''', (agent_id,))
    else:
        cursor.execute('''
            SELECT id, message, is_broadcast, created_at
            FROM notifications
            WHERE agent_id = ?
            ORDER BY created_at DESC
            LIMIT 10
        ''', (agent_id,))

    notifications = cursor.fetchall()

    print(f"\n🔔 Notifications for {agent_id}:")
    for notif_id, message, is_broadcast, created_at in notifications:
        prefix = "📢" if is_broadcast else "🔔"
        print(f" {prefix} {message}")
        print(f" {created_at}")

    # Mark as read
    cursor.execute('''
        UPDATE notifications SET is_read = 1
        WHERE agent_id = ? AND is_read = 0
    ''', (agent_id,))

    conn.commit()
    conn.close()

# ============ POLLS ============

def start_poll(agent_id, question):
    """@poll - Start a quick poll"""
    conn = get_connection()
    cursor = conn.cursor()

    cursor.execute('''
        INSERT INTO polls (question, created_by, status)
        VALUES (?, ?, 'active')
    ''', (question, agent_id))
    poll_id = cursor.lastrowid

    # Notify all agents
    cursor.execute("SELECT id FROM agents WHERE id != ?", (agent_id,))
    other_agents = cursor.fetchall()
    for (other_id,) in other_agents:
        cursor.execute('''
            INSERT INTO notifications (agent_id, message, is_broadcast)
            VALUES (?, ?, 0)
        ''', (other_id, f"🗳️ New poll from {agent_id}: '{question}' (Poll #{poll_id}). Vote with: @vote {poll_id} <yes/no/maybe>"))

    conn.commit()
    conn.close()
    print(f"🗳️ Poll #{poll_id} started: {question}")
    return poll_id

def vote_poll(agent_id, poll_id, vote):
    """@vote - Vote on a poll"""
    conn = get_connection()
    cursor = conn.cursor()

    cursor.execute('''
        INSERT OR REPLACE INTO poll_votes (poll_id, agent_id, vote)
        VALUES (?, ?, ?)
    ''', (poll_id, agent_id, vote))

    conn.commit()
    conn.close()
    print(f"✅ Vote recorded for poll #{poll_id}: {vote}")

def show_poll_results(poll_id):
    """Show poll results"""
    conn = get_connection()
    cursor = conn.cursor()

    cursor.execute("SELECT question FROM polls WHERE id = ?", (poll_id,))
    result = cursor.fetchone()
    if not result:
        print(f"❌ Poll #{poll_id} not found")
        conn.close()
        return

    question = result[0]

    cursor.execute('''
        SELECT vote, COUNT(*) FROM poll_votes
        WHERE poll_id = ?
        GROUP BY vote
    ''', (poll_id,))
    votes = dict(cursor.fetchall())

    cursor.execute('''
        SELECT agent_id, vote FROM poll_votes
        WHERE poll_id = ?
    ''', (poll_id,))
    details = cursor.fetchall()

    conn.close()

    print(f"\n🗳️ Poll #{poll_id}: {question}")
    print("Results:")
    for vote, count in votes.items():
        print(f" {vote}: {count}")
    print("\nVotes:")
    for agent, vote in details:
        print(f" {agent}: {vote}")

# ============ HANDOVER ============

def request_handover(from_agent, to_agent, context=""):
    """@handover - Request task handover to another agent"""
    conn = get_connection()
    cursor = conn.cursor()

    # Get current task of from_agent
    cursor.execute("SELECT task FROM agents WHERE id = ?", (from_agent,))
    result = cursor.fetchone()
    current_task = result[0] if result else "current task"

    # Create handover notification
    message = f"🔄 Handover request from {from_agent}: '{current_task}'"
    if context:
        message += f" | Context: {context}"

    cursor.execute('''
        INSERT INTO notifications (agent_id, message, is_broadcast)
        VALUES (?, ?, 0)
    ''', (to_agent, message))

    # Update from_agent status
    cursor.execute('''
        UPDATE agents SET status = 'idle', task = NULL
        WHERE id = ?
    ''', (from_agent,))

    conn.commit()
    conn.close()
    print(f"🔄 Handover requested: {from_agent} -> {to_agent}")

def switch_to(agent_id, to_agent):
    """@switch - Request to switch to specific agent"""
    conn = get_connection()
    cursor = conn.cursor()

    message = f"🔄 {agent_id} requests to switch to you for continuation"

    cursor.execute('''
        INSERT INTO notifications (agent_id, message, is_broadcast)
        VALUES (?, ?, 0)
    ''', (to_agent, message))

    conn.commit()
    conn.close()
    print(f"🔄 Switch request sent: {agent_id} -> {to_agent}")

# ============ STATUS & MONITORING ============

def get_status():
    """Enhanced status view"""
    conn = get_connection()
    cursor = conn.cursor()

    print("\n" + "="*60)
    print("🛰️ ACTIVE AGENTS")
    print("="*60)

    for row in cursor.execute('''
        SELECT name, task, status, current_research, last_seen
        FROM agents
        ORDER BY last_seen DESC
    '''):
        status_emoji = {
            'active': '🟢',
            'idle': '⚪',
            'researching': '🔬',
            'busy': '🔴'
        }.get(row[2], '⚪')

        research_info = f" | Research: {row[3]}" if row[3] else ""
        print(f"{status_emoji} [{row[2].upper()}] {row[0]}: {row[1]}{research_info}")
        print(f" Last seen: {row[4]}")

    print("\n" + "="*60)
    print("🔬 ACTIVE RESEARCH TOPICS")
    print("="*60)

    for row in cursor.execute('''
        SELECT t.topic, t.initiated_by, t.created_at,
               (SELECT COUNT(*) FROM agent_research_participation WHERE topic = t.topic) as participants,
               (SELECT COUNT(*) FROM research_log WHERE topic = t.topic) as findings
        FROM research_topics t
        WHERE t.status = 'active'
        ORDER BY t.created_at DESC
    '''):
        print(f"🔬 {row[0]}")
        print(f" Initiated by: {row[1]} | Participants: {row[3]} | Findings: {row[4]}")
        print(f" Started: {row[2]}")

    print("\n" + "="*60)
    print("🔒 FILE LOCKS")
    print("="*60)

    locks = list(cursor.execute('''
        SELECT file_path, agent_id, lock_type
        FROM file_locks
        ORDER BY timestamp DESC
    '''))

    if locks:
        for file_path, agent_id, lock_type in locks:
            lock_emoji = '🔒' if lock_type == 'write' else '🔍'
            print(f"{lock_emoji} {file_path} -> {agent_id} ({lock_type})")
    else:
        print(" No active locks")

    print("\n" + "="*60)
    print("📋 PENDING TASKS")
    print("="*60)

    for row in cursor.execute('''
        SELECT agent_id, COUNT(*)
        FROM task_assignments
        WHERE status = 'pending'
        GROUP BY agent_id
    '''):
        print(f" {row[0]}: {row[1]} pending tasks")

    cursor.execute("SELECT value FROM global_settings WHERE key = 'mode'")
    mode = cursor.fetchone()[0]
    print(f"\n🌍 Global Mode: {mode.upper()}")
    print("="*60)

    conn.close()

def show_research_topic(topic):
    """Show detailed view of a research topic"""
    conn = get_connection()
    cursor = conn.cursor()

    cursor.execute("SELECT status, initiated_by, created_at FROM research_topics WHERE topic = ?", (topic,))
    result = cursor.fetchone()
    if not result:
        print(f"❌ Topic '{topic}' not found")
        conn.close()
        return

    status, initiated_by, created_at = result

    print(f"\n🔬 Research: {topic}")
    print(f"Status: {status} | Initiated by: {initiated_by} | Started: {created_at}")

    cursor.execute('''
        SELECT agent_id FROM agent_research_participation WHERE topic = ?
    ''', (topic,))
    participants = [row[0] for row in cursor.fetchall()]
    print(f"Participants: {', '.join(participants)}")

    print("\n--- Findings ---")
    cursor.execute('''
        SELECT agent_id, content, finding_type, created_at
        FROM research_log
        WHERE topic = ?
        ORDER BY created_at
    ''', (topic,))

    for agent_id, content, finding_type, created_at in cursor.fetchall():
        emoji = {'note': '📝', 'finding': '🔍', 'concern': '⚠️', 'solution': '✅'}.get(finding_type, '📝')
        print(f"\n{emoji} [{finding_type.upper()}] {agent_id} ({created_at})")
        print(f" {content}")

    conn.close()

# ============ MAIN CLI ============

if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="🤖 Agent Sync v2.0 - Multi-Agent Cooperation Protocol",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
QUICK COMMANDS:
  @research <topic>            Start joint research
  @join <topic>                Join active research
  @find <topic> <content>      Post finding to research
  @consensus <topic>           Generate consensus document
  @assign <agent> <task>       Assign task to agent
  @notify <message>            Broadcast to all agents
  @handover <agent> [context]  Handover task
  @switch <agent>              Request switch to agent
  @poll <question>             Start a poll
  @vote <poll_id> <vote>       Vote on poll
  @tasks [agent]               List tasks
  @complete <task_id>          Complete task
  @notifications [agent]       Check notifications
  @topic <topic>               View research topic details

EXAMPLES:
  python3 agent_sync_v2.py research claude-code "API Design"
  python3 agent_sync_v2.py find copilot "API Design" "Use REST instead of GraphQL"
  python3 agent_sync_v2.py assign claude-code copilot "Implement REST endpoints"
  python3 agent_sync_v2.py consensus "API Design"
"""
    )
    subparsers = parser.add_subparsers(dest="command", help="Command to execute")

    # Legacy commands
    subparsers.add_parser("init", help="Initialize the database")

    reg = subparsers.add_parser("register", help="Register an agent")
    reg.add_argument("id", help="Agent ID")
    reg.add_argument("name", help="Agent name")
    reg.add_argument("task", help="Current task")
    reg.add_argument("--status", default="idle", help="Agent status")

    lock = subparsers.add_parser("lock", help="Lock a file")
    lock.add_argument("id", help="Agent ID")
    lock.add_argument("path", help="File path")
    lock.add_argument("--type", default="write", choices=["write", "research"], help="Lock type")

    unlock = subparsers.add_parser("unlock", help="Unlock a file")
    unlock.add_argument("id", help="Agent ID")
    unlock.add_argument("path", help="File path")

    subparsers.add_parser("status", help="Show status dashboard")

    # New v2.0 commands
    research = subparsers.add_parser("research", help="@research - Start joint research topic")
    research.add_argument("agent_id", help="Agent initiating research")
    research.add_argument("topic", help="Research topic")

    join = subparsers.add_parser("join", help="@join - Join active research")
    join.add_argument("agent_id", help="Agent joining")
    join.add_argument("topic", help="Topic to join")

    find = subparsers.add_parser("find", help="@find - Post finding to research")
    find.add_argument("agent_id", help="Agent posting finding")
    find.add_argument("topic", help="Research topic")
    find.add_argument("content", help="Finding content")
    find.add_argument("--type", default="note", choices=["note", "finding", "concern", "solution"], help="Type of finding")

    consensus = subparsers.add_parser("consensus", help="@consensus - Generate consensus document")
    consensus.add_argument("topic", help="Topic to generate consensus for")

    assign = subparsers.add_parser("assign", help="@assign - Assign task to agent")
    assign.add_argument("from_agent", help="Agent assigning the task")
    assign.add_argument("to_agent", help="Agent to assign task to")
    assign.add_argument("task", help="Task description")

    tasks = subparsers.add_parser("tasks", help="@tasks - List pending tasks")
    tasks.add_argument("--agent", help="Filter by agent ID")

    complete = subparsers.add_parser("complete", help="@complete - Mark task as completed")
    complete.add_argument("task_id", type=int, help="Task ID to complete")

    notify = subparsers.add_parser("notify", help="@notify - Broadcast message to all agents")
    notify.add_argument("from_agent", help="Agent sending notification")
    notify.add_argument("message", help="Message to broadcast")

    handover = subparsers.add_parser("handover", help="@handover - Handover task to another agent")
    handover.add_argument("from_agent", help="Current agent")
    handover.add_argument("to_agent", help="Agent to handover to")
    handover.add_argument("--context", default="", help="Handover context")

    switch = subparsers.add_parser("switch", help="@switch - Request switch to specific agent")
    switch.add_argument("from_agent", help="Current agent")
    switch.add_argument("to_agent", help="Agent to switch to")

    poll = subparsers.add_parser("poll", help="@poll - Start a quick poll")
    poll.add_argument("agent_id", help="Agent starting poll")
    poll.add_argument("question", help="Poll question")

    vote = subparsers.add_parser("vote", help="@vote - Vote on a poll")
    vote.add_argument("agent_id", help="Agent voting")
    vote.add_argument("poll_id", type=int, help="Poll ID")
    vote.add_argument("vote_choice", choices=["yes", "no", "maybe"], help="Your vote")

    poll_results = subparsers.add_parser("poll-results", help="Show poll results")
    poll_results.add_argument("poll_id", type=int, help="Poll ID")

    notifications = subparsers.add_parser("notifications", help="@notifications - Check notifications")
    notifications.add_argument("agent_id", help="Agent to check notifications for")
    notifications.add_argument("--unread", action="store_true", help="Show only unread")

    topic = subparsers.add_parser("topic", help="@topic - View research topic details")
    topic.add_argument("topic_name", help="Topic name")

    args = parser.parse_args()

    if args.command == "init":
        init_db()
    elif args.command == "register":
        register_agent(args.id, args.name, args.task, args.status)
    elif args.command == "lock":
        lock_file(args.id, args.path, args.type)
    elif args.command == "unlock":
        unlock_file(args.id, args.path)
    elif args.command == "status":
        get_status()
    elif args.command == "research":
        start_research(args.agent_id, args.topic)
    elif args.command == "join":
        join_research(args.agent_id, args.topic)
    elif args.command == "find":
        post_finding(args.agent_id, args.topic, args.content, args.type)
    elif args.command == "consensus":
        generate_consensus(args.topic)
    elif args.command == "assign":
        assign_task(args.from_agent, args.to_agent, args.task)
    elif args.command == "tasks":
        list_tasks(args.agent)
    elif args.command == "complete":
        complete_task(args.task_id)
    elif args.command == "notify":
        broadcast_message(args.from_agent, args.message)
    elif args.command == "handover":
        request_handover(args.from_agent, args.to_agent, args.context)
    elif args.command == "switch":
        switch_to(args.from_agent, args.to_agent)
    elif args.command == "poll":
        start_poll(args.agent_id, args.question)
    elif args.command == "vote":
        vote_poll(args.agent_id, args.poll_id, args.vote_choice)
    elif args.command == "poll-results":
        show_poll_results(args.poll_id)
    elif args.command == "notifications":
        get_notifications(args.agent_id, args.unread)
    elif args.command == "topic":
        show_research_topic(args.topic_name)
    else:
        parser.print_help()
@@ -11,9 +11,9 @@ Usage:
 To get started:
 1. Create .env file with your OpenWebUI API key:
    echo "api_key=sk-your-key-here" > .env
 
 2. Make sure OpenWebUI is running on localhost:3000
 
 3. Run this script:
    python deploy_async_context_compression.py
 """
@@ -34,10 +34,10 @@ def main():
     print("🚀 Deploying Async Context Compression Filter Plugin")
     print("=" * 70)
     print()
 
     # Deploy the filter
     success = deploy_filter("async-context-compression")
 
     if success:
         print()
         print("=" * 70)
@@ -63,7 +63,7 @@ def main():
         print(" • Check network connectivity")
         print()
         return 1
 
     return 0

@@ -49,53 +49,78 @@ def _load_api_key() -> str:
     raise ValueError("api_key not found in .env file.")
 
 
+def _load_openwebui_base_url() -> str:
+    """Load OpenWebUI base URL from .env file or environment.
+
+    Checks in order:
+    1. OPENWEBUI_BASE_URL in .env
+    2. OPENWEBUI_BASE_URL environment variable
+    3. Default to http://localhost:3000
+    """
+    if ENV_FILE.exists():
+        for line in ENV_FILE.read_text(encoding="utf-8").splitlines():
+            line = line.strip()
+            if line.startswith("OPENWEBUI_BASE_URL="):
+                url = line.split("=", 1)[1].strip()
+                if url:
+                    return url
+
+    # Try environment variable
+    url = os.environ.get("OPENWEBUI_BASE_URL")
+    if url:
+        return url
+
+    # Default
+    return "http://localhost:3000"
+
+
 def _find_filter_file(filter_name: str) -> Optional[Path]:
     """Find the main Python file for a filter.
 
     Args:
         filter_name: Directory name of the filter (e.g., 'async-context-compression')
 
     Returns:
         Path to the main Python file, or None if not found.
     """
     filter_dir = FILTERS_DIR / filter_name
     if not filter_dir.exists():
         return None
 
     # Try to find a .py file matching the filter name
     py_files = list(filter_dir.glob("*.py"))
 
     # Prefer a file with the filter name (with hyphens converted to underscores)
     preferred_name = filter_name.replace("-", "_") + ".py"
     for py_file in py_files:
         if py_file.name == preferred_name:
             return py_file
 
     # Otherwise, return the first .py file (usually the only one)
     if py_files:
         return py_files[0]
 
     return None
 
 
 def _extract_metadata(content: str) -> Dict[str, Any]:
     """Extract metadata from the plugin docstring.
 
     Args:
         content: Python file content
 
     Returns:
         Dictionary with extracted metadata (title, author, version, etc.)
     """
     metadata = {}
 
     # Extract docstring
     match = re.search(r'"""(.*?)"""', content, re.DOTALL)
     if not match:
         return metadata
 
     docstring = match.group(1)
 
     # Extract key-value pairs
     for line in docstring.split("\n"):
         line = line.strip()
@@ -104,7 +129,7 @@ def _extract_metadata(content: str) -> Dict[str, Any]:
             key = parts[0].strip().lower()
             value = parts[1].strip()
             metadata[key] = value
 
     return metadata
 
 
@@ -112,13 +137,13 @@ def _build_filter_payload(
     filter_name: str, file_path: Path, content: str, metadata: Dict[str, Any]
 ) -> Dict[str, Any]:
     """Build the payload for the filter update/create API.
 
     Args:
         filter_name: Directory name of the filter
         file_path: Path to the plugin file
         content: File content
         metadata: Extracted metadata
 
     Returns:
         Payload dictionary ready for API submission
     """
@@ -126,12 +151,14 @@ def _build_filter_payload(
     filter_id = metadata.get("id", filter_name).replace("-", "_")
     title = metadata.get("title", filter_name)
     author = metadata.get("author", "Fu-Jie")
-    author_url = metadata.get("author_url", "https://github.com/Fu-Jie/openwebui-extensions")
+    author_url = metadata.get(
+        "author_url", "https://github.com/Fu-Jie/openwebui-extensions"
+    )
     funding_url = metadata.get("funding_url", "https://github.com/open-webui")
     description = metadata.get("description", f"Filter plugin: {title}")
     version = metadata.get("version", "1.0.0")
     openwebui_id = metadata.get("openwebui_id", "")
 
     payload = {
         "id": filter_id,
         "name": title,
@@ -150,20 +177,20 @@ def _build_filter_payload(
         },
         "content": content,
     }
 
     # Add openwebui_id if available
     if openwebui_id:
         payload["meta"]["manifest"]["openwebui_id"] = openwebui_id
 
     return payload
 
 
 def deploy_filter(filter_name: str = DEFAULT_FILTER) -> bool:
     """Deploy a filter plugin to OpenWebUI.
 
     Args:
         filter_name: Directory name of the filter to deploy
 
     Returns:
         True if successful, False otherwise
     """
@@ -191,7 +218,7 @@ def deploy_filter(filter_name: str = DEFAULT_FILTER) -> bool:
 
     content = file_path.read_text(encoding="utf-8")
     metadata = _extract_metadata(content)
 
     if not metadata:
         print(f"[ERROR] Could not extract metadata from {file_path}")
         return False
@@ -211,12 +238,14 @@ def deploy_filter(filter_name: str = DEFAULT_FILTER) -> bool:
     }
 
     # 6. Send update request
-    update_url = "http://localhost:3000/api/v1/functions/id/{}/update".format(filter_id)
-    create_url = "http://localhost:3000/api/v1/functions/create"
+    base_url = _load_openwebui_base_url()
+    update_url = "{}/api/v1/functions/id/{}/update".format(base_url, filter_id)
+    create_url = "{}/api/v1/functions/create".format(base_url)
 
     print(f"📦 Deploying filter '{title}' (version {version})...")
     print(f" File: {file_path}")
+    print(f" Target: {base_url}")
 
     try:
         # Try update first
         response = requests.post(
@@ -225,7 +254,7 @@ def deploy_filter(filter_name: str = DEFAULT_FILTER) -> bool:
             data=json.dumps(payload),
             timeout=10,
         )
 
         if response.status_code == 200:
             print(f"✅ Successfully updated '{title}' filter!")
             return True
@@ -234,7 +263,7 @@ def deploy_filter(filter_name: str = DEFAULT_FILTER) -> bool:
             f"⚠️ Update failed with status {response.status_code}, "
             "attempting to create instead..."
         )
 
         # Try create if update fails
         res_create = requests.post(
             create_url,
@@ -242,23 +271,24 @@ def deploy_filter(filter_name: str = DEFAULT_FILTER) -> bool:
             data=json.dumps(payload),
             timeout=10,
         )
 
         if res_create.status_code == 200:
             print(f"✅ Successfully created '{title}' filter!")
             return True
         else:
-            print(f"❌ Failed to update or create. Status: {res_create.status_code}")
+            print(
+                f"❌ Failed to update or create. Status: {res_create.status_code}"
+            )
             try:
                 error_msg = res_create.json()
                 print(f" Error: {error_msg}")
             except:
                 print(f" Response: {res_create.text[:500]}")
             return False
 
     except requests.exceptions.ConnectionError:
-        print(
-            "❌ Connection error: Could not reach OpenWebUI at localhost:3000"
-        )
+        base_url = _load_openwebui_base_url()
+        print(f"❌ Connection error: Could not reach OpenWebUI at {base_url}")
         print(" Make sure OpenWebUI is running and accessible.")
         return False
     except requests.exceptions.Timeout:
@@ -272,16 +302,20 @@ def deploy_filter(filter_name: str = DEFAULT_FILTER) -> bool:
 def list_filters() -> None:
     """List all available filters."""
     print("📋 Available filters:")
-    filters = [d.name for d in FILTERS_DIR.iterdir() if d.is_dir() and not d.name.startswith("_")]
+    filters = [
+        d.name
+        for d in FILTERS_DIR.iterdir()
+        if d.is_dir() and not d.name.startswith("_")
+    ]
 
     if not filters:
         print(" (No filters found)")
         return
 
     for filter_name in sorted(filters):
         filter_dir = FILTERS_DIR / filter_name
         py_file = _find_filter_file(filter_name)
 
         if py_file:
             content = py_file.read_text(encoding="utf-8")
             metadata = _extract_metadata(content)

@@ -76,52 +76,51 @@ def _get_base_url() -> str:

    if not base_url:
        raise ValueError(
            f"Missing url. Please create {ENV_FILE} with: "
            "url=http://localhost:3000"
            f"Missing url. Please create {ENV_FILE} with: " "url=http://localhost:3000"
        )
    return base_url.rstrip("/")


def _find_tool_file(tool_name: str) -> Optional[Path]:
    """Find the main Python file for a tool.

    Args:
        tool_name: Directory name of the tool (e.g., 'openwebui-skills-manager')

    Returns:
        Path to the main Python file, or None if not found.
    """
    tool_dir = TOOLS_DIR / tool_name
    if not tool_dir.exists():
        return None

    # Try to find a .py file matching the tool name
    py_files = list(tool_dir.glob("*.py"))

    # Prefer a file with the tool name (with hyphens converted to underscores)
    preferred_name = tool_name.replace("-", "_") + ".py"
    for py_file in py_files:
        if py_file.name == preferred_name:
            return py_file

    # Otherwise, return the first .py file (usually the only one)
    if py_files:
        return py_files[0]

    return None


def _extract_metadata(content: str) -> Dict[str, Any]:
    """Extract metadata from the plugin docstring."""
    metadata = {}

    # Extract docstring
    match = re.search(r'"""(.*?)"""', content, re.DOTALL)
    if not match:
        return metadata

    docstring = match.group(1)

    # Extract key-value pairs
    for line in docstring.split("\n"):
        line = line.strip()
@@ -130,7 +129,7 @@ def _extract_metadata(content: str) -> Dict[str, Any]:
            key = parts[0].strip().lower()
            value = parts[1].strip()
            metadata[key] = value

    return metadata


@@ -141,12 +140,14 @@ def _build_tool_payload(
    tool_id = metadata.get("id", tool_name).replace("-", "_")
    title = metadata.get("title", tool_name)
    author = metadata.get("author", "Fu-Jie")
    author_url = metadata.get("author_url", "https://github.com/Fu-Jie/openwebui-extensions")
    author_url = metadata.get(
        "author_url", "https://github.com/Fu-Jie/openwebui-extensions"
    )
    funding_url = metadata.get("funding_url", "https://github.com/open-webui")
    description = metadata.get("description", f"Tool plugin: {title}")
    version = metadata.get("version", "1.0.0")
    openwebui_id = metadata.get("openwebui_id", "")

    payload = {
        "id": tool_id,
        "name": title,
@@ -165,20 +166,20 @@ def _build_tool_payload(
        },
        "content": content,
    }

    # Add openwebui_id if available
    if openwebui_id:
        payload["meta"]["manifest"]["openwebui_id"] = openwebui_id

    return payload


def deploy_tool(tool_name: str = DEFAULT_TOOL) -> bool:
    """Deploy a tool plugin to OpenWebUI.

    Args:
        tool_name: Directory name of the tool to deploy

    Returns:
        True if successful, False otherwise
    """
@@ -207,7 +208,7 @@ def deploy_tool(tool_name: str = DEFAULT_TOOL) -> bool:

    content = file_path.read_text(encoding="utf-8")
    metadata = _extract_metadata(content)

    if not metadata:
        print(f"[ERROR] Could not extract metadata from {file_path}")
        return False
@@ -229,10 +230,10 @@ def deploy_tool(tool_name: str = DEFAULT_TOOL) -> bool:
    # 6. Send update request through the native tool endpoints
    update_url = f"{base_url}/api/v1/tools/id/{tool_id}/update"
    create_url = f"{base_url}/api/v1/tools/create"

    print(f"📦 Deploying tool '{title}' (version {version})...")
    print(f"   File: {file_path}")

    try:
        # Try update first
        response = requests.post(
@@ -241,7 +242,7 @@ def deploy_tool(tool_name: str = DEFAULT_TOOL) -> bool:
            data=json.dumps(payload),
            timeout=10,
        )

        if response.status_code == 200:
            print(f"✅ Successfully updated '{title}' tool!")
            return True
@@ -250,7 +251,7 @@ def deploy_tool(tool_name: str = DEFAULT_TOOL) -> bool:
            f"⚠️ Update failed with status {response.status_code}, "
            "attempting to create instead..."
        )

        # Try create if update fails
        res_create = requests.post(
            create_url,
@@ -258,23 +259,23 @@ def deploy_tool(tool_name: str = DEFAULT_TOOL) -> bool:
            data=json.dumps(payload),
            timeout=10,
        )

        if res_create.status_code == 200:
            print(f"✅ Successfully created '{title}' tool!")
            return True
        else:
            print(f"❌ Failed to update or create. Status: {res_create.status_code}")
            print(
                f"❌ Failed to update or create. Status: {res_create.status_code}"
            )
            try:
                error_msg = res_create.json()
                print(f"   Error: {error_msg}")
            except:
                print(f"   Response: {res_create.text[:500]}")
            return False

    except requests.exceptions.ConnectionError:
        print(
            f"❌ Connection error: Could not reach OpenWebUI at {base_url}"
        )
        print(f"❌ Connection error: Could not reach OpenWebUI at {base_url}")
        print("   Make sure OpenWebUI is running and accessible.")
        return False
    except requests.exceptions.Timeout:
@@ -288,16 +289,18 @@ def deploy_tool(tool_name: str = DEFAULT_TOOL) -> bool:

def list_tools() -> None:
    """List all available tools."""
    print("📋 Available tools:")
    tools = [d.name for d in TOOLS_DIR.iterdir() if d.is_dir() and not d.name.startswith("_")]

    tools = [
        d.name for d in TOOLS_DIR.iterdir() if d.is_dir() and not d.name.startswith("_")
    ]

    if not tools:
        print("   (No tools found)")
        return

    for tool_name in sorted(tools):
        tool_dir = TOOLS_DIR / tool_name
        py_file = _find_tool_file(tool_name)

        if py_file:
            content = py_file.read_text(encoding="utf-8")
            metadata = _extract_metadata(content)

@@ -187,9 +187,7 @@ def build_payload(candidate: PluginCandidate) -> Dict[str, object]:
    manifest = dict(candidate.metadata)
    manifest.setdefault("title", candidate.title)
    manifest.setdefault("author", "Fu-Jie")
    manifest.setdefault(
        "author_url", "https://github.com/Fu-Jie/openwebui-extensions"
    )
    manifest.setdefault("author_url", "https://github.com/Fu-Jie/openwebui-extensions")
    manifest.setdefault("funding_url", "https://github.com/open-webui")
    manifest.setdefault(
        "description", f"{candidate.plugin_type.title()} plugin: {candidate.title}"
@@ -233,7 +231,9 @@ def build_api_urls(base_url: str, candidate: PluginCandidate) -> Tuple[str, str]
    )


def discover_plugins(plugin_types: Sequence[str]) -> Tuple[List[PluginCandidate], List[Tuple[Path, str]]]:
def discover_plugins(
    plugin_types: Sequence[str],
) -> Tuple[List[PluginCandidate], List[Tuple[Path, str]]]:
    candidates: List[PluginCandidate] = []
    skipped: List[Tuple[Path, str]] = []

@@ -344,7 +344,9 @@ def print_skipped_summary(skipped: Sequence[Tuple[Path, str]]) -> None:
    for _, reason in skipped:
        counts[reason] = counts.get(reason, 0) + 1

    summary = ", ".join(f"{reason}: {count}" for reason, count in sorted(counts.items()))
    summary = ", ".join(
        f"{reason}: {count}" for reason, count in sorted(counts.items())
    )
    print(f"Skipped {len(skipped)} files ({summary}).")


@@ -421,19 +423,19 @@ def main(argv: Optional[Sequence[str]] = None) -> int:
            failed_candidates.append(candidate)
            print(f"  [FAILED] {message}")

    print(f"\n" + "="*80)
    print(f"\n" + "=" * 80)
    print(
        f"Finished: {success_count}/{len(candidates)} plugins installed successfully."
    )

    if failed_candidates:
        print(f"\n❌ {len(failed_candidates)} plugin(s) failed to install:")
        for candidate in failed_candidates:
            print(f"  • {candidate.title} ({candidate.plugin_type})")
        print(f"    → Check the error message above")
        print()

    print("="*80)

    print("=" * 80)
    return 0 if success_count == len(candidates) else 1

110
scripts/macp
Executable file
@@ -0,0 +1,110 @@
#!/bin/bash
# 🤖 MACP Quick Command v2.1 (Unified Edition)

set -euo pipefail

AGENT_ID_FILE=".agent/current_agent"

resolve_agent_id() {
    if [ -n "${MACP_AGENT_ID:-}" ]; then
        echo "$MACP_AGENT_ID"
        return
    fi

    if [ -f "$AGENT_ID_FILE" ]; then
        cat "$AGENT_ID_FILE"
        return
    fi

    echo "Error: MACP agent identity is not set. Export MACP_AGENT_ID or create .agent/current_agent." >&2
    exit 1
}

resolve_agent_name() {
    python3 - <<'PY2'
import os
import sqlite3
import sys

agent_id = os.environ.get("MACP_AGENT_ID", "").strip()
if not agent_id:
    path = os.path.join(os.getcwd(), ".agent", "current_agent")
    if os.path.exists(path):
        with open(path, "r", encoding="utf-8") as handle:
            agent_id = handle.read().strip()

db_path = os.path.join(os.getcwd(), ".agent", "agent_hub.db")
name = agent_id or "Agent"

if agent_id and os.path.exists(db_path):
    conn = sqlite3.connect(db_path)
    cur = conn.cursor()
    cur.execute("SELECT name FROM agents WHERE id = ?", (agent_id,))
    row = cur.fetchone()
    conn.close()
    if row and row[0]:
        name = row[0]

sys.stdout.write(name)
PY2
}

AGENT_ID="$(resolve_agent_id)"
export MACP_AGENT_ID="$AGENT_ID"
AGENT_NAME="$(resolve_agent_name)"

CMD="${1:-}"
if [ -z "$CMD" ]; then
    echo "Usage: ./scripts/macp [/status|/ping|/study|/broadcast|/summon|/handover|/note|/check|/resolve]" >&2
    exit 1
fi
shift

case "$CMD" in
    /study)
        TOPIC="$1"
        shift
        DESC="$*"
        if [ -n "$DESC" ]; then
            python3 scripts/agent_sync.py study "$AGENT_ID" "$TOPIC" --desc "$DESC"
        else
            python3 scripts/agent_sync.py study "$AGENT_ID" "$TOPIC"
        fi
        ;;
    /broadcast)
        python3 scripts/agent_sync.py broadcast "$AGENT_ID" manual "$*"
        ;;
    /summon)
        TO_AGENT="$1"
        shift
        python3 scripts/agent_sync.py assign "$AGENT_ID" "$TO_AGENT" "$*" --role worker --priority high
        ;;
    /handover)
        TO_AGENT="$1"
        shift
        python3 scripts/agent_sync.py assign "$AGENT_ID" "$TO_AGENT" "$*" --role worker
        python3 scripts/agent_sync.py register "$AGENT_ID" "$AGENT_NAME" "Idle"
        ;;
    /note)
        TOPIC="$1"
        shift
        python3 scripts/agent_sync.py note "$AGENT_ID" "$TOPIC" "$*" --type note
        ;;
    /check)
        python3 scripts/agent_sync.py check
        ;;
    /resolve)
        TOPIC="$1"
        shift
        python3 scripts/agent_sync.py resolve "$AGENT_ID" "$TOPIC" "$*"
        ;;
    /ping)
        python3 scripts/agent_sync.py status | grep "\["
        ;;
    /status)
        python3 scripts/agent_sync.py status
        ;;
    *)
        echo "Usage: ./scripts/macp [/status|/ping|/study|/broadcast|/summon|/handover|/note|/check|/resolve]"
        ;;
esac
@@ -277,12 +277,37 @@ class OpenWebUIStats:
            },
        }

    def _get_plugin_obj(self, post: dict) -> dict:
        """Extract the actual plugin object from post['data'] (handling different keys like function/tool/pipe)."""
        data = post.get("data", {}) or {}
        if not data:
            return {}

        # Priority 1: Use post['type'] as the key (standard behavior)
        post_type = post.get("type")
        if post_type and post_type in data and data[post_type]:
            return data[post_type]

        # Priority 2: Fallback to 'function' (most common for actions/filters/pipes)
        if "function" in data and data["function"]:
            return data["function"]

        # Priority 3: Try other known keys
        for k in ["tool", "pipe", "action", "filter", "prompt", "model"]:
            if k in data and data[k]:
                return data[k]

        # Priority 4: If there's only one key in data, assume that's the one
        if len(data) == 1:
            return list(data.values())[0] or {}

        return {}

    def _resolve_post_type(self, post: dict) -> str:
        """Resolve the post category type"""
        top_type = post.get("type")
        function_data = post.get("data", {}) or {}
        function_obj = function_data.get("function", {}) or {}
        meta = function_obj.get("meta", {}) or {}
        plugin_obj = self._get_plugin_obj(post)
        meta = plugin_obj.get("meta", {}) or {}
        manifest = meta.get("manifest", {}) or {}

        # Category identification priority:
@@ -292,17 +317,17 @@ class OpenWebUIStats:
        post_type = "unknown"
        if meta.get("type"):
            post_type = meta.get("type")
        elif function_obj.get("type"):
            post_type = function_obj.get("type")
        elif plugin_obj.get("type"):
            post_type = plugin_obj.get("type")
        elif top_type:
            post_type = top_type
        elif not meta and not function_obj:
        elif not meta and not plugin_obj:
            post_type = "post"

        post_type = self._normalize_post_type(post_type)

        # Unified and heuristic identification logic
        if post_type == "unknown" and function_obj:
        if post_type == "unknown" and plugin_obj:
            post_type = "action"

        if post_type == "action" or post_type == "unknown":
@@ -600,9 +625,8 @@ class OpenWebUIStats:
        for post in posts:
            post_type = self._resolve_post_type(post)

            function_data = post.get("data", {}) or {}
            function_obj = function_data.get("function", {}) or {}
            meta = function_obj.get("meta", {}) or {}
            plugin_obj = self._get_plugin_obj(post)
            meta = plugin_obj.get("meta", {}) or {}
            manifest = meta.get("manifest", {}) or {}

            # Accumulate statistics
@@ -615,13 +639,12 @@ class OpenWebUIStats:
            stats["total_saves"] += post.get("saveCount", 0)
            stats["total_comments"] += post.get("commentCount", 0)

            # Key: total views do not include non-downloadable types (e.g., post, review)
            if post_type in self.DOWNLOADABLE_TYPES or post_downloads > 0:
            # Key: only count views for posts with actual downloads (exclude post/review types)
            if post_type not in ("post", "review") and post_downloads > 0:
                stats["total_views"] += post_views

                if post_type not in stats["by_type"]:
                    stats["by_type"][post_type] = 0
                stats["by_type"][post_type] += 1
            if post_type not in stats["by_type"]:
                stats["by_type"][post_type] = 0
            stats["by_type"][post_type] += 1

            # Individual post information
            created_at = datetime.fromtimestamp(post.get("createdAt", 0))

@@ -9,14 +9,15 @@ local deployment are present and functional.
import sys
from pathlib import Path


def main():
    """Check all deployment tools are ready."""
    base_dir = Path(__file__).parent.parent

    print("\n" + "="*80)
    print("\n" + "=" * 80)
    print("✨ Async Context Compression Local Deployment Tools — Verification Status")
    print("="*80 + "\n")
    print("=" * 80 + "\n")

    files_to_check = {
        "🐍 Python Scripts": [
            "scripts/deploy_async_context_compression.py",
@@ -34,56 +35,56 @@ def main():
            "tests/scripts/test_deploy_filter.py",
        ],
    }

    all_exist = True

    for category, files in files_to_check.items():
        print(f"\n{category}:")
        print("-" * 80)

        for file_path in files:
            full_path = base_dir / file_path
            exists = full_path.exists()
            status = "✅" if exists else "❌"

            print(f"  {status} {file_path}")

            if exists and file_path.endswith(".py"):
                size = full_path.stat().st_size
                lines = len(full_path.read_text().split('\n'))
                lines = len(full_path.read_text().split("\n"))
                print(f"      └─ [{size} bytes, ~{lines} lines]")

            if not exists:
                all_exist = False

    print("\n" + "="*80)
    print("\n" + "=" * 80)

    if all_exist:
        print("✅ All deployment tool files are ready!")
        print("="*80 + "\n")
        print("=" * 80 + "\n")

        print("🚀 Quick Start (3 ways):\n")

        print("  Method 1: Easiest (Recommended)")
        print("  ─────────────────────────────────────────────────────────")
        print("  cd scripts")
        print("  python deploy_async_context_compression.py")
        print()

        print("  Method 2: Generic Tool")
        print("  ─────────────────────────────────────────────────────────")
        print("  cd scripts")
        print("  python deploy_filter.py")
        print()

        print("  Method 3: Deploy Other Filters")
        print("  ─────────────────────────────────────────────────────────")
        print("  cd scripts")
        print("  python deploy_filter.py --list")
        print("  python deploy_filter.py folder-memory")
        print()

        print("="*80 + "\n")
        print("=" * 80 + "\n")
        print("📚 Documentation References:\n")
        print("  • Quick Start: scripts/QUICK_START.md")
        print("  • Complete Guide: scripts/DEPLOYMENT_GUIDE.md")
@@ -91,12 +92,12 @@ def main():
        print("  • Script Info: scripts/README.md")
        print("  • Test Coverage: pytest tests/scripts/test_deploy_filter.py -v")
        print()

        print("="*80 + "\n")
        print("=" * 80 + "\n")
        return 0
    else:
        print("❌ Some files are missing!")
        print("="*80 + "\n")
        print("=" * 80 + "\n")
        return 1

302
tests/plugins/tools/test_batch_install_plugins.py
Normal file
@@ -0,0 +1,302 @@
import asyncio
import importlib.util
import sys
from pathlib import Path

import httpx
import pytest


MODULE_PATH = (
    Path(__file__).resolve().parents[3]
    / "plugins"
    / "tools"
    / "batch-install-plugins"
    / "batch_install_plugins.py"
)
SPEC = importlib.util.spec_from_file_location("batch_install_plugins", MODULE_PATH)
batch_install_plugins = importlib.util.module_from_spec(SPEC)
assert SPEC.loader is not None
sys.modules[SPEC.name] = batch_install_plugins
SPEC.loader.exec_module(batch_install_plugins)


def make_candidate(title: str, file_path: str, function_id: str):
    return batch_install_plugins.PluginCandidate(
        plugin_type="tool",
        file_path=file_path,
        metadata={"title": title, "description": f"{title} description"},
        content="class Tools:\n    pass\n",
        function_id=function_id,
    )


def make_request():
    class Request:
        base_url = "http://localhost:3000/"
        headers = {"Authorization": "Bearer token"}

    return Request()


class DummyResponse:
    def __init__(self, status_code: int, json_data=None, text: str = ""):
        self.status_code = status_code
        self._json_data = json_data
        self.text = text

    def json(self):
        if self._json_data is None:
            raise ValueError("no json body")
        return self._json_data


class FakeAsyncClient:
    posts = []
    responses = []

    def __init__(self, *args, **kwargs):
        pass

    async def __aenter__(self):
        return self

    async def __aexit__(self, exc_type, exc, tb):
        return False

    async def post(self, url, headers=None, json=None):
        type(self).posts.append((url, headers, json))
        if not type(self).responses:
            raise AssertionError("No fake response configured for POST request")
        response = type(self).responses.pop(0)
        if isinstance(response, Exception):
            raise response
        return response


@pytest.mark.asyncio
async def test_install_all_plugins_only_installs_filtered_candidates(monkeypatch):
    keep = make_candidate("Keep Plugin", "plugins/tools/keep/keep.py", "keep_plugin")
    exclude = make_candidate(
        "Exclude Me",
        "plugins/tools/exclude-me/exclude_me.py",
        "exclude_me",
    )
    self_plugin = make_candidate(
        "Batch Install Plugins from GitHub",
        "plugins/tools/batch-install-plugins/batch_install_plugins.py",
        "batch_install_plugins",
    )

    async def fake_discover_plugins(url, skip_keywords):
        return [keep, exclude, self_plugin], []

    monkeypatch.setattr(batch_install_plugins, "discover_plugins", fake_discover_plugins)
    FakeAsyncClient.posts = []
    FakeAsyncClient.responses = [DummyResponse(404), DummyResponse(201)]
    monkeypatch.setattr(batch_install_plugins.httpx, "AsyncClient", FakeAsyncClient)

    events = []
    captured = {}

    async def event_call(payload):
        if payload["type"] == "confirmation":
            captured["message"] = payload["data"]["message"]
        elif payload["type"] == "execute":
            captured.setdefault("execute_codes", []).append(payload["data"]["code"])
        return True

    async def emitter(event):
        events.append(event)

    result = await batch_install_plugins.Tools().install_all_plugins(
        __user__={"id": "u1", "language": "en-US"},
        __event_call__=event_call,
        __request__=make_request(),
        __event_emitter__=emitter,
        repo=batch_install_plugins.DEFAULT_REPO,
        plugin_types=["tool"],
        exclude_keywords="exclude",
    )

    assert "Created: Keep Plugin" in result
    assert "Exclude Me" not in result
    assert "1/1" in result
    assert captured["message"].count("[tool]") == 1
    assert "Keep Plugin" in captured["message"]
    assert "Exclude Me" not in captured["message"]
    assert "Batch Install Plugins from GitHub" not in captured["message"]
    assert "exclude, batch-install-plugins" in captured["message"]

    urls = [url for url, _, _ in FakeAsyncClient.posts]
    assert urls == [
        "http://localhost:3000/api/v1/tools/id/keep_plugin/update",
        "http://localhost:3000/api/v1/tools/create",
    ]
    assert any(
        "Starting OpenWebUI install requests" in code
        for code in captured.get("execute_codes", [])
    )
    assert events[-1]["type"] == "notification"
    assert events[-1]["data"]["type"] == "success"


@pytest.mark.asyncio
async def test_install_all_plugins_supports_missing_event_emitter(monkeypatch):
    keep = make_candidate("Keep Plugin", "plugins/tools/keep/keep.py", "keep_plugin")

    async def fake_discover_plugins(url, skip_keywords):
        return [keep], []

    monkeypatch.setattr(batch_install_plugins, "discover_plugins", fake_discover_plugins)
    FakeAsyncClient.posts = []
    FakeAsyncClient.responses = [DummyResponse(404), DummyResponse(201)]
    monkeypatch.setattr(batch_install_plugins.httpx, "AsyncClient", FakeAsyncClient)

    result = await batch_install_plugins.Tools().install_all_plugins(
        __user__={"id": "u1", "language": "en-US"},
        __request__=make_request(),
        repo="example/repo",
        plugin_types=["tool"],
    )

    assert "Created: Keep Plugin" in result
    assert "1/1" in result


@pytest.mark.asyncio
async def test_install_all_plugins_handles_confirmation_timeout(monkeypatch):
    keep = make_candidate("Keep Plugin", "plugins/tools/keep/keep.py", "keep_plugin")

    async def fake_discover_plugins(url, skip_keywords):
        return [keep], []

    async def fake_wait_for(awaitable, timeout):
        awaitable.close()
        raise asyncio.TimeoutError

    monkeypatch.setattr(batch_install_plugins, "discover_plugins", fake_discover_plugins)
    monkeypatch.setattr(batch_install_plugins.asyncio, "wait_for", fake_wait_for)

    events = []

    async def event_call(payload):
        return True

    async def emitter(event):
        events.append(event)

    result = await batch_install_plugins.Tools().install_all_plugins(
        __user__={"id": "u1", "language": "en-US"},
        __event_call__=event_call,
        __request__=make_request(),
        __event_emitter__=emitter,
        repo="example/repo",
        plugin_types=["tool"],
    )

    assert result == "Confirmation timed out or failed. Installation cancelled."
    assert events[-1]["type"] == "notification"
    assert events[-1]["data"]["type"] == "warning"


@pytest.mark.asyncio
async def test_install_all_plugins_marks_total_failure_as_error(monkeypatch):
    keep = make_candidate("Keep Plugin", "plugins/tools/keep/keep.py", "keep_plugin")

    async def fake_discover_plugins(url, skip_keywords):
        return [keep], []

    monkeypatch.setattr(batch_install_plugins, "discover_plugins", fake_discover_plugins)
    FakeAsyncClient.posts = []
    FakeAsyncClient.responses = [
        DummyResponse(500, {"detail": "update failed"}, "update failed"),
        DummyResponse(500, {"detail": "create failed"}, "create failed"),
    ]
    monkeypatch.setattr(batch_install_plugins.httpx, "AsyncClient", FakeAsyncClient)

    events = []

    async def emitter(event):
        events.append(event)

    result = await batch_install_plugins.Tools().install_all_plugins(
        __user__={"id": "u1", "language": "en-US"},
        __request__=make_request(),
        __event_emitter__=emitter,
        repo="example/repo",
        plugin_types=["tool"],
    )

    assert "Failed: Keep Plugin - status 500:" in result
    assert "0/1" in result
    assert events[-1]["type"] == "notification"
    assert events[-1]["data"]["type"] == "error"


@pytest.mark.asyncio
async def test_install_all_plugins_localizes_timeout_errors(monkeypatch):
    keep = make_candidate("Keep Plugin", "plugins/tools/keep/keep.py", "keep_plugin")

    async def fake_discover_plugins(url, skip_keywords):
        return [keep], []

    monkeypatch.setattr(batch_install_plugins, "discover_plugins", fake_discover_plugins)
    FakeAsyncClient.posts = []
    FakeAsyncClient.responses = [httpx.TimeoutException("timed out")]
    monkeypatch.setattr(batch_install_plugins.httpx, "AsyncClient", FakeAsyncClient)

    result = await batch_install_plugins.Tools().install_all_plugins(
        __user__={"id": "u1", "language": "zh-CN"},
        __request__=make_request(),
        repo="example/repo",
        plugin_types=["tool"],
    )

    assert "失败:Keep Plugin - 请求超时" in result


@pytest.mark.asyncio
async def test_install_all_plugins_emits_frontend_debug_logs_on_connect_error(
    monkeypatch,
):
    keep = make_candidate("Keep Plugin", "plugins/tools/keep/keep.py", "keep_plugin")

    async def fake_discover_plugins(url, skip_keywords):
        return [keep], []

    monkeypatch.setattr(batch_install_plugins, "discover_plugins", fake_discover_plugins)
    FakeAsyncClient.posts = []
    # Both initial attempt and fallback retry should fail
    FakeAsyncClient.responses = [
        httpx.ConnectError("connect failed"),
        httpx.ConnectError("connect failed"),
    ]
    monkeypatch.setattr(batch_install_plugins.httpx, "AsyncClient", FakeAsyncClient)

    execute_codes = []
    events = []

    async def event_call(payload):
        if payload["type"] == "execute":
            execute_codes.append(payload["data"]["code"])
            return True
        if payload["type"] == "confirmation":
            return True
        raise AssertionError(f"Unexpected event_call payload type: {payload['type']}")

    async def emitter(event):
        events.append(event)

    result = await batch_install_plugins.Tools().install_all_plugins(
        __user__={"id": "u1", "language": "en-US"},
        __event_call__=event_call,
        __request__=make_request(),
        __event_emitter__=emitter,
        repo="example/repo",
        plugin_types=["tool"],
    )

    assert result == "Cannot connect to OpenWebUI. Is it running?"
    assert any("OpenWebUI connection failed" in code for code in execute_codes)
    assert any("console.error" in code for code in execute_codes)
    assert any("http://localhost:3000" in code for code in execute_codes)
    assert events[-1]["type"] == "notification"
    assert events[-1]["data"]["type"] == "error"
@@ -66,7 +66,7 @@ def test_build_payload_uses_native_tool_shape_for_tools():
            "description": "Demo tool description",
            "openwebui_id": "12345678-1234-1234-1234-123456789abc",
        },
        content='class Tools:\n    pass\n',
        content="class Tools:\n    pass\n",
        function_id="demo_tool",
    )

@@ -79,7 +79,7 @@ def test_build_payload_uses_native_tool_shape_for_tools():
            "description": "Demo tool description",
            "manifest": {},
        },
        "content": 'class Tools:\n    pass\n',
        "content": "class Tools:\n    pass\n",
        "access_grants": [],
    }

@@ -89,7 +89,7 @@ def test_build_api_urls_uses_tool_endpoints_for_tools():
        plugin_type="tool",
        file_path=Path("plugins/tools/demo/demo_tool.py"),
        metadata={"title": "Demo Tool"},
        content='class Tools:\n    pass\n',
        content="class Tools:\n    pass\n",
        function_id="demo_tool",
    )

@@ -101,7 +101,9 @@ def test_build_api_urls_uses_tool_endpoints_for_tools():
    assert create_url == "http://localhost:3000/api/v1/tools/create"


def test_discover_plugins_only_returns_supported_openwebui_plugins(tmp_path, monkeypatch):
def test_discover_plugins_only_returns_supported_openwebui_plugins(
    tmp_path, monkeypatch
):
    actions_dir = tmp_path / "plugins" / "actions"
    filters_dir = tmp_path / "plugins" / "filters"
    pipes_dir = tmp_path / "plugins" / "pipes"
@@ -110,7 +112,9 @@ def test_discover_plugins_only_returns_supported_openwebui_plugins(tmp_path, mon
    write_plugin(actions_dir / "flash-card" / "flash_card.py", PLUGIN_HEADER)
    write_plugin(actions_dir / "flash-card" / "flash_card_cn.py", PLUGIN_HEADER)
    write_plugin(actions_dir / "infographic" / "verify_generation.py", PLUGIN_HEADER)
    write_plugin(filters_dir / "missing-id" / "missing_id.py", '"""\ntitle: Missing ID\n"""\n')
    write_plugin(
        filters_dir / "missing-id" / "missing_id.py", '"""\ntitle: Missing ID\n"""\n'
    )
    write_plugin(pipes_dir / "sdk" / "github_copilot_sdk.py", PLUGIN_HEADER)
    write_plugin(tools_dir / "skills" / "openwebui_skills_manager.py", PLUGIN_HEADER)

@@ -150,7 +154,9 @@ def test_discover_plugins_only_returns_supported_openwebui_plugins(tmp_path, mon
        ("class Action:\n    pass\n", "missing plugin header"),
    ],
)
def test_discover_plugins_reports_missing_metadata(tmp_path, monkeypatch, header, expected_reason):
def test_discover_plugins_reports_missing_metadata(
    tmp_path, monkeypatch, header, expected_reason
):
    action_dir = tmp_path / "plugins" / "actions"
    plugin_file = action_dir / "demo" / "demo.py"
    write_plugin(plugin_file, header)

139
zed-ai-tabs.sh
Executable file
@@ -0,0 +1,139 @@
#!/bin/bash
# ==============================================================================
# ai-tabs - Ultra Orchestrator
# Version: v1.0.0
# License: MIT
# Author: Fu-Jie
# Description: Batch-launches and orchestrates multiple AI CLI tools as Tabs.
# ==============================================================================

# 1. Single-Instance Lock
LOCK_FILE="/tmp/ai_terminal_launch.lock"
# If lock is less than 10 seconds old, another instance is running. Exit.
if [ -f "$LOCK_FILE" ]; then
    LOCK_TIME=$(stat -f %m "$LOCK_FILE")
    NOW=$(date +%s)
    if (( NOW - LOCK_TIME < 10 )); then
        echo "⚠️ Another launch in progress. Skipping to prevent duplicates."
        exit 0
    fi
fi
touch "$LOCK_FILE"
trap 'rm -f "$LOCK_FILE"' EXIT

# 2. Configuration & Constants
INIT_DELAY=4.5
PASTE_DELAY=0.3
CMD_CREATION_DELAY=0.3
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PARENT_DIR="$(dirname "$SCRIPT_DIR")"

# Search for .env
if [ -f "${SCRIPT_DIR}/.env" ]; then
    ENV_FILE="${SCRIPT_DIR}/.env"
elif [ -f "${PARENT_DIR}/.env" ]; then
    ENV_FILE="${PARENT_DIR}/.env"
fi

# Supported Tools
SUPPORTED_TOOLS=(
    "claude:--continue"
    "opencode:--continue"
    "gemini:--resume latest"
    "copilot:--continue"
    "iflow:--continue"
    "kilo:--continue"
)

FOUND_TOOLS_NAMES=()
FOUND_CMDS=()

# 3. Part A: Load Manual Configuration
if [ -f "$ENV_FILE" ]; then
    set -a; source "$ENV_FILE"; set +a
    for var in $(compgen -v | grep '^TOOL_[0-9]' | sort -V); do
        TPATH="${!var}"
        if [ -x "$TPATH" ]; then
            NAME=$(basename "$TPATH")
            FLAG="--continue"
            for item in "${SUPPORTED_TOOLS[@]}"; do
                [[ "${item%%:*}" == "$NAME" ]] && FLAG="${item#*:}" && break
            done
            FOUND_TOOLS_NAMES+=("$NAME")
            FOUND_CMDS+=("'$TPATH' $FLAG || '$TPATH' || exec \$SHELL")
        fi
    done
fi

# 4. Part B: Automatic Tool Discovery
for item in "${SUPPORTED_TOOLS[@]}"; do
    NAME="${item%%:*}"
    FLAG="${item#*:}"
    ALREADY_CONFIGURED=false
    for configured in "${FOUND_TOOLS_NAMES[@]}"; do
        [[ "$configured" == "$NAME" ]] && ALREADY_CONFIGURED=true && break
    done
    [[ "$ALREADY_CONFIGURED" == true ]] && continue
    TPATH=$(which "$NAME" 2>/dev/null)
    if [ -z "$TPATH" ]; then
        SEARCH_PATHS=(
            "/opt/homebrew/bin/$NAME"
            "/usr/local/bin/$NAME"
            "$HOME/.local/bin/$NAME"
            "$HOME/bin/$NAME"
            "$HOME/.$NAME/bin/$NAME"
            "$HOME/.nvm/versions/node/*/bin/$NAME"
            "$HOME/.npm-global/bin/$NAME"
            "$HOME/.cargo/bin/$NAME"
        )
        for p in "${SEARCH_PATHS[@]}"; do
            for found_p in $p; do [[ -x "$found_p" ]] && TPATH="$found_p" && break 2; done
        done
    fi
    if [ -n "$TPATH" ]; then
        FOUND_TOOLS_NAMES+=("$NAME")
        FOUND_CMDS+=("'$TPATH' $FLAG || '$TPATH' || exec \$SHELL")
    fi
done

NUM_FOUND=${#FOUND_CMDS[@]}
[[ "$NUM_FOUND" -eq 0 ]] && exit 1

# 5. Core Orchestration (Reset + Launch)
# Using Command Palette automation to avoid the need for manual shortcut binding.
AS_SCRIPT="tell application \"System Events\"\n"

# Phase A: Creation (Using Command Palette to ensure it opens in Editor Area)
for ((i=1; i<=NUM_FOUND; i++)); do
    AS_SCRIPT+=" keystroke \"p\" using {command down, shift down}\n"
    AS_SCRIPT+=" delay 0.1\n"
    # Ensure we are searching for the command. Using clipboard for speed and universal language support.
    AS_SCRIPT+=" set the clipboard to \"workspace: new center terminal\"\n"
    AS_SCRIPT+=" keystroke \"v\" using {command down}\n"
    AS_SCRIPT+=" delay 0.1\n"
    AS_SCRIPT+=" keystroke return\n"
    AS_SCRIPT+=" delay $CMD_CREATION_DELAY\n"
done

# Phase B: Warmup
AS_SCRIPT+=" delay $INIT_DELAY\n"

# Phase C: Command Injection (Reverse)
for ((i=NUM_FOUND-1; i>=0; i--)); do
    FULL_CMD="${FOUND_CMDS[$i]}"
    CLEAN_CMD=$(echo "$FULL_CMD" | sed 's/"/\\"/g')
    AS_SCRIPT+=" set the clipboard to \"$CLEAN_CMD\"\n"
    AS_SCRIPT+=" delay 0.1\n"
    AS_SCRIPT+=" keystroke \"v\" using {command down}\n"
    AS_SCRIPT+=" delay $PASTE_DELAY\n"
    AS_SCRIPT+=" keystroke return\n"
    if [ $i -gt 0 ]; then
        AS_SCRIPT+=" delay 0.5\n"
        AS_SCRIPT+=" keystroke \"[\" using {command down, shift down}\n"
    fi
done
AS_SCRIPT+="end tell"

# Execute
echo -e "$AS_SCRIPT" | osascript
echo "✨ Ai tabs initialized successfully ($NUM_FOUND tools found)."
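Each `SUPPORTED_TOOLS` entry packs a `name:flag` pair that Parts A and B split with bash parameter expansion. A standalone sketch of that split:

```shell
# Splitting one SUPPORTED_TOOLS entry, as done in Part B above.
item="gemini:--resume latest"
NAME="${item%%:*}"   # drop the longest suffix matching ':*'  -> tool name
FLAG="${item#*:}"    # drop the shortest prefix matching '*:' -> launch flag
echo "$NAME"         # → gemini
echo "$FLAG"         # → --resume latest
```

The longest-match form `%%:*` matters for the name: if a flag ever contained another `:`, the shortest-match `%:*` would leave part of the flag glued to the tool name.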