Compare commits: async-cont...async-cont (12 commits)
Commits:

- baae09a223
- 903bd7b372
- 8c998ecc73
- f11cf27404
- 41f271d2d8
- 984d3061c7
- ba11cdd157
- b1482b6083
- 0cc46e0188
- 93a42cbe03
- fdf95a2825
- 5fe66a5803
BIN  .agent/agent_hub.db  (new file; binary file not shown)
171  .agent/learnings/filter-async-context-compression-design.md  (new file)
@@ -0,0 +1,171 @@
# Filter: async-context-compression — Design Patterns and Engineering Practice

**Date**: 2026-03-12
**Module**: `plugins/filters/async-context-compression/async_context_compression.py`
**Key features**: context compression, async summary generation, state management, LLM engineering

---

## Core Engineering Insights

### 1. The Filter-to-LLM Request Propagation Chain

**Problem**: The filter's `outlet` stage launches a background async task (`asyncio.create_task`) that calls `generate_chat_completion` (an internal API), but it has no direct access to the original HTTP `request`. Earlier code used a minimal synthetic Request (just `{"type": "http", "app": webui_app}`), which posed compatibility risks.

**Solution**:

- OpenWebUI supports `__request__` parameter injection for `outlet` as well (i.e. both `inlet` and `outlet` support it)
- Pass `__request__` through the entire async call chain: `outlet → _locked_summary_task → _check_and_generate_summary_async → _generate_summary_async → _call_summary_llm`
- At the final call site: `request = __request__ or Request(...)` (graceful fallback)

**Takeaway**: The LLM call path should always prefer the real request context over a synthetic one. Even inside a background task, the application-level state on `request.app` remains valid.
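The propagation pattern above can be sketched as follows. This is a minimal, self-contained illustration: `FakeRequest` and `_synthesize_request` are stand-ins, not OpenWebUI's real `Request` class, and only the forwarding-with-fallback shape mirrors the actual filter.

```python
import asyncio

# Stand-in for a real (injected) request object; OpenWebUI's actual
# __request__ is a FastAPI/Starlette Request.
class FakeRequest:
    def __init__(self, source):
        self.source = source

def _synthesize_request():
    # Last-resort fallback, analogous to Request({"type": "http", "app": webui_app})
    return FakeRequest("synthetic")

async def _call_summary_llm(payload, __request__=None):
    # Prefer the real injected request; synthesize only if none was passed.
    request = __request__ or _synthesize_request()
    return request.source

async def _generate_summary_async(payload, __request__=None):
    # Each layer forwards __request__ unchanged down the chain.
    return await _call_summary_llm(payload, __request__=__request__)

async def outlet_background_task(payload, __request__=None):
    return await _generate_summary_async(payload, __request__=__request__)

real = FakeRequest("http")
used = asyncio.run(outlet_background_task({}, __request__=real))      # real request wins
fallback = asyncio.run(outlet_background_task({}))                     # fallback kicks in
```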
---

### 2. Context Integrity in Async Summary Generation

**Key scenario split**:

| Case | `summary_index` | Old summary location | Needs `previous_summary` |
|------|-----------------|----------------------|--------------------------|
| Inlet already injected the old summary | Not None | `messages[0]` (first item of middle_messages) | ❌ No, already in conversation_text |
| Outlet received raw messages (no injection) | None | DB archive | ✅ **Yes**, must be read and passed explicitly |

**Root cause**: The messages `outlet` receives come from the raw database query and never pass through `inlet`'s summary injection. When the LLM cannot see the historical summary, already-compressed knowledge (old conversations, resolved issues, prior findings) gets reprocessed or forgotten.

**Implementation notes**:

```python
# Load the old summary asynchronously only when summary_index is None
if summary_index is None:
    previous_summary = await asyncio.to_thread(
        self._load_summary, chat_id, body
    )
else:
    previous_summary = None
```
---

### 3. LLM Prompt Design for Context Compression

**Engineering principles**:

1. **Clear input boundaries**: use XML-style tags (`<previous_working_memory>`, `<new_conversation>`) to delimit sections, so the LLM cannot confuse "instruction examples" with "data to process"
2. **State-aware merging**: not "keep every old fact" but **update state** — `"bug X exists" → "bug X fixed"`, or drop resolved items entirely
3. **Goal evolution**: Current Goal reflects the **latest** intent; old goals migrate into Working Memory as context
4. **Error verbatim**: stack traces, exception types, and error codes must be quoted verbatim (they are first-class citizens of debugging)
5. **Format strictness**: the structure becomes **REQUIRED** (not Suggested); zero-content sections may be omitted, but the layout stays consistent

**New prompt structure**:

```
[Rules] → [Output Constraints] → [Required Structure Header] → [Boundaries] → <previous_working_memory> → <new_conversation>
```

Key improvements:

- Rule 3 (Ruthless Denoising) → new Rule 4 (Error Verbatim) + Rule 5 (Causal Chain)
- "Suggested" structure → "Required" structure with optional sections
- New dedicated `## Causal Log` section, enforcing a single-line causal-chain format: `[MSG_ID?] action → result`
- Explicit token-budget policy: trim by recency and urgency first (RRF)
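A minimal sketch of assembling a prompt with those explicit boundaries. The rule text and function name are placeholders for illustration; only the tag ordering reflects the structure described above.

```python
# Build a compression prompt with XML-style input boundaries so the model
# can distinguish instructions from the data it must process.
def build_compression_prompt(rules, previous_summary, new_messages):
    parts = [
        rules,
        "<previous_working_memory>",
        previous_summary or "(none)",
        "</previous_working_memory>",
        "<new_conversation>",
        "\n".join(new_messages),
        "</new_conversation>",
    ]
    return "\n".join(parts)

prompt = build_compression_prompt(
    "Merge state; quote errors verbatim.",
    "bug X exists",
    ["user: fixed bug X", "assistant: confirmed"],
)
```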
---

### 4. Error Boundaries and Recovery in Async Tasks

**Observation**: Exceptions in the background summary task (`asyncio.create_task`) do not block the user response, but you still need:

- A complete logging chain (`_log` calls + `event_emitter` notifications)
- Atomic database transactions (summary and compression state saved together)
- Frontend UI feedback (status event: "generating..." → "complete" or "error")

**Best practices**:

- Use an `asyncio.Lock` per chat_id to prevent concurrent summary tasks
- Run heavy operations (tokenize, LLM calls) in the background via `asyncio.to_thread`
- Wrap all I/O (DB reads/writes) in the async thread pool
- Confine exception handling to try-except blocks; logging must not swallow stack traces
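The error-boundary pattern can be sketched like this. `_summarize` is a hypothetical stand-in for the real LLM call; the point is that the background task holds a lock, catches everything, and logs the full traceback instead of swallowing it.

```python
import asyncio
import logging

logger = logging.getLogger("summary")
results = []

async def _summarize(chat_id):
    # Stand-in for the real summary LLM call.
    if chat_id == "bad":
        raise RuntimeError("LLM call failed")
    return f"summary for {chat_id}"

async def locked_summary_task(chat_id, lock):
    async with lock:  # one summary task at a time per lock
        try:
            results.append(await _summarize(chat_id))
        except Exception:
            # logger.exception records the stack trace; the caller never sees
            # the exception, so the user response is never blocked.
            logger.exception("summary task failed for chat %s", chat_id)
            results.append(None)

async def main():
    lock = asyncio.Lock()
    await asyncio.gather(
        locked_summary_task("a", lock),
        locked_summary_task("bad", lock),
    )

asyncio.run(main())
```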
---

### 5. Filter Singleton and State-Design Pitfalls

**Constraint**: The Filter instance is a global singleton; every conversation shares the same `self`.

**Anti-patterns**:

```python
# ❌ Wrong: self.temp_buffer = ... (gets polluted by other concurrent conversations)
self.temp_state = body  # dangerous!

# ✅ Right: stay stateless, or isolate via locks / chat_id
self._chat_locks[chat_id] = asyncio.Lock()  # one lock per chat
```

**Design**:

- Valves (Pydantic BaseModel) hold global configuration ✅
- Keep per-chat transient state (locks, counters) in a dict keyed by `chat_id` ✅
- Pass request-level data as parameters, never via instance-wide globals ✅
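One safe shape for the per-chat lock dict is shown below. This is a sketch under the assumption that lock creation happens on the event-loop thread; `setdefault` ensures two lookups for the same chat never replace a lock that is already held.

```python
import asyncio

class Filter:
    # Singleton filter: per-chat state lives in a dict keyed by chat_id,
    # never in bare instance attributes shared across conversations.
    def __init__(self):
        self._chat_locks = {}

    def _get_lock(self, chat_id):
        # setdefault returns the existing lock if one is already registered.
        return self._chat_locks.setdefault(chat_id, asyncio.Lock())

f = Filter()
lock_a1 = f._get_lock("chat-a")
lock_a2 = f._get_lock("chat-a")  # same object as lock_a1
lock_b = f._get_lock("chat-b")   # distinct lock for a different chat
```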
---

## Integration Scenario: Filter + Pipe Cooperation

**When a Pipe model runs behind the Filter**:

1. `inlet` injects the summary, cutting the number of conversation messages in context
2. The Pipe model (typically the Copilot SDK or a custom kernel) processes the trimmed messages
3. `outlet` triggers the background summary without blocking the user response
4. On the next turn, `inlet` injects the latest summary again

**Key constraints**:

- `_should_skip_compression` checks `__model__.get("pipe")` or `copilot_sdk`, and skips injection when necessary
- If the Pipe model has its own context management (e.g. Copilot's native tool calling), over-compression breaks the tool-call chain
- The summary model (`summary_model` valve) should be compatible with the current Pipe environment's API (a general-purpose model such as gemini-flash is recommended)
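A plausible shape for that skip check is sketched below. The `__model__` dict layout and the `copilot_sdk` substring match are assumptions based on the checks named above, not a verbatim copy of the plugin's code.

```python
# Hypothetical sketch of _should_skip_compression: skip summary injection
# for pipe models (which may manage their own context) and for the
# Copilot SDK pipe specifically.
def _should_skip_compression(__model__: dict) -> bool:
    if not __model__:
        return False
    if __model__.get("pipe"):        # pipe models handle context themselves
        return True
    model_id = str(__model__.get("id", ""))
    return "copilot_sdk" in model_id

skip_pipe = _should_skip_compression({"id": "x", "pipe": {"type": "pipe"}})
skip_plain = _should_skip_compression({"id": "gemini-flash"})
```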
---

## Internal API Contract Cheat Sheet

### `generate_chat_completion(request, payload, user)`

- **request**: FastAPI Request; either a real HTTP request or the injected `__request__`
- **payload**: `{"model": id, "messages": [...], "stream": false, "max_tokens": N, "temperature": T}`
- **user**: UserModel; loaded from the DB or converted from `__user__` (requires `Users.get_user_by_id()`)
- **Returns**: dict or JSONResponse; for the latter, call `response.body.decode()` and parse the JSON
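Normalizing that dual return type can be sketched as follows. `_FakeJSONResponse` is a stand-in for Starlette's `JSONResponse` (which exposes the serialized payload as bytes on `.body`), so the example runs standalone.

```python
import json

class _FakeJSONResponse:
    # Stand-in for starlette.responses.JSONResponse: .body holds the
    # serialized payload as bytes.
    def __init__(self, content):
        self.body = json.dumps(content).encode()

def normalize_completion(response):
    # generate_chat_completion may return either a plain dict or a
    # JSONResponse; always hand callers a dict.
    if isinstance(response, dict):
        return response
    return json.loads(response.body.decode())

as_dict = normalize_completion({"choices": [{"message": {"content": "hi"}}]})
as_resp = normalize_completion(_FakeJSONResponse({"choices": []}))
```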
### Filter Lifecycle

```
New Message → inlet (user input) → [plugins wait] → LLM → outlet (response) → Summary Task (background)
```

---

## Debug Checklist

- [ ] `__request__` is declared in the `outlet` signature and injected by OpenWebUI (not None)
- [ ] `__request__` is forwarded through every layer of the async call chain, with a synthetic fallback at the bottom
- [ ] When `summary_index is None`, `previous_summary` is read asynchronously from the DB
- [ ] `<previous_working_memory>` and `<new_conversation>` have clear boundaries in the LLM prompt
- [ ] Error handling does not swallow stack traces: `logger.exception()` or `exc_info=True`
- [ ] `asyncio.Lock` keyed by chat_id avoids concurrent-work conflicts
- [ ] Copilot SDK / Pipe models are checked via `_should_skip_compression()`
- [ ] Token budget is planned under max_summary_tokens, preferring recent events
---

## Related Files

- Core implementation: `plugins/filters/async-context-compression/async_context_compression.py`
- README: `plugins/filters/async-context-compression/README.md` + `README_CN.md`
- OpenWebUI internals: `open_webui/utils/chat.py` → `generate_chat_completion()`

---

**Version**: 1.0
**Maintainer**: Fu-Jie
**Last updated**: 2026-03-12
45  .agent/learnings/openwebui-community-api.md  (new file)
@@ -0,0 +1,45 @@
# OpenWebUI Community API Patterns

## Post Data Structure Variations

When fetching posts from the OpenWebUI Community API (`https://api.openwebui.com/api/v1/posts/...`), the structure of the `data` field varies significantly depending on the `type` of the post.

### Observed Mappings

| Post Type | Data Key (under `data`) | Usual Content |
|-----------|-------------------------|---------------|
| `action` | `function` | Plugin code and metadata |
| `filter` | `function` | Filter logic and metadata |
| `pipe` | `function` | Pipe logic and metadata |
| `tool` | `tool` | Tool definition and logic |
| `prompt` | `prompt` | Prompt template strings |
| `model` | `model` | Model configuration |
### Implementation Workaround

To robustly extract metadata (like `version` or `description`) regardless of the post type, the following heuristic logic is recommended:

```python
def _get_plugin_obj(post: dict) -> dict:
    data = post.get("data", {}) or {}
    post_type = post.get("type")

    # Priority 1: Use specific type key
    if post_type in data:
        return data[post_type]

    # Priority 2: Fallback to common keys
    for k in ["function", "tool", "pipe"]:
        if k in data:
            return data[k]

    # Priority 3: First available key
    if data:
        return list(data.values())[0]

    return {}
```
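A quick usage sketch of the helper. The function body is repeated verbatim so the example runs standalone; the two sample posts are invented for illustration and are not real API responses.

```python
# Copy of the helper above, repeated so this example is self-contained.
def _get_plugin_obj(post: dict) -> dict:
    data = post.get("data", {}) or {}
    post_type = post.get("type")
    if post_type in data:
        return data[post_type]
    for k in ["function", "tool", "pipe"]:
        if k in data:
            return data[k]
    if data:
        return list(data.values())[0]
    return {}

# A filter post hits priority 1 (the "filter" type maps to "function"...
# here the type key itself is absent, so the common-key fallback fires).
filter_post = {"type": "filter", "data": {"function": {"version": "1.2.8"}}}
# An unusual post type falls through to priority 3 (first available key).
odd_post = {"type": "review", "data": {"something": {"version": "0.1"}}}

v1 = _get_plugin_obj(filter_post).get("version", "N/A")
v2 = _get_plugin_obj(odd_post).get("version", "N/A")
```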
### Gotchas

- Some older posts or different categories might not have a `version` field in `manifest`, leading to empty strings or `N/A` in reports.
- `slug` should be used as the unique identifier rather than `title` when tracking stats across history.
29  .agent/rules/agent_protocol.md  (new file)
@@ -0,0 +1,29 @@
# Agent Coordination Protocol (FOR AGENTS ONLY)

## 🛡️ The Golden Rule
**NEVER modify code without verifying the lock status in the Agent Hub.**

## 🔑 Identity Management
- `claude-code`: Official Claude CLI
- `copilot-agent`: GitHub Copilot
- `gemini-cursor`: Cursor IDE or Gemini extension
- `iflow-agent`: iFlow SDK agent

## 🛠️ The Synchronization Tool
Script: `scripts/agent_sync.py` (SQLite-backed)

### 🏎️ Workflow Lifecycle
1. **Initialize Session**:
   - `python3 scripts/agent_sync.py status`
   - `python3 scripts/agent_sync.py register <id> <name> "<objective>"`
2. **Resource Acquisition**:
   - `python3 scripts/agent_sync.py lock <id> <file_path>`
   - If blocked, identify the owner from `status` and do not attempt to bypass.
3. **Collaboration (Research Mode)**:
   - If the project mode is `RESEARCH`, prioritize the `note` command.
   - Summarize findings: `python3 scripts/agent_sync.py note <id> "<topic>" "<summary>"`
4. **Cleanup**:
   - `python3 scripts/agent_sync.py unlock <id> <file_path>`

## 📜 Shared Memory
Read `.agent/learnings/` to avoid reinventing the wheel.
0  .agent/shared_context/.gitkeep  (new file)
8  .cursorrules  (new file)
@@ -0,0 +1,8 @@
# 🤖 Cursor/Gemini Multi-Agent Protocol

1. **STATUS CHECK**: Always run `python3 scripts/agent_sync.py status` first.
2. **REGISTRATION**: Run `python3 scripts/agent_sync.py register gemini-id "Gemini" "Current task"`.
3. **LOCKING**: Never edit without `python3 scripts/agent_sync.py lock gemini-id <path>`.
4. **STANDARDS**: Refer to `.agent/rules/plugin_standards.md` for coding guidelines.

Full details in `COOPERATION.md`.
16  .github/workflows/community-stats.yml  (vendored)
@@ -38,9 +38,12 @@ jobs:
        id: old_stats
        run: |
          if [ -f docs/community-stats.json ]; then
            cp docs/community-stats.json docs/community-stats.json.old
            echo "total_posts=$(jq -r '.total_posts // 0' docs/community-stats.json)" >> $GITHUB_OUTPUT
            echo "versions=$(jq -r '[.posts[] | {slug: .slug, version: .version}] | sort_by(.slug) | map("\(.slug):\(.version)") | join(",")' docs/community-stats.json)" >> $GITHUB_OUTPUT
          else
            echo "total_posts=0" >> $GITHUB_OUTPUT
            echo "versions=" >> $GITHUB_OUTPUT
          fi

      - name: Generate stats report
@@ -56,12 +59,15 @@ jobs:
        id: new_stats
        run: |
          echo "total_posts=$(jq -r '.total_posts // 0' docs/community-stats.json)" >> $GITHUB_OUTPUT
          echo "versions=$(jq -r '[.posts[] | {slug: .slug, version: .version}] | sort_by(.slug) | map("\(.slug):\(.version)") | join(",")' docs/community-stats.json)" >> $GITHUB_OUTPUT

      - name: Check for significant changes
        id: check_changes
        run: |
          OLD_POSTS="${{ steps.old_stats.outputs.total_posts }}"
          NEW_POSTS="${{ steps.new_stats.outputs.total_posts }}"
          OLD_VERSIONS="${{ steps.old_stats.outputs.versions }}"
          NEW_VERSIONS="${{ steps.new_stats.outputs.versions }}"

          SHOULD_COMMIT="false"
          CHANGE_REASON=""
@@ -69,14 +75,20 @@
          if [ "$NEW_POSTS" -gt "$OLD_POSTS" ]; then
            SHOULD_COMMIT="true"
            CHANGE_REASON="new plugin added ($OLD_POSTS -> $NEW_POSTS)"
            echo "📦 New plugin detected: $OLD_POSTS -> $NEW_POSTS"
          elif [ "$NEW_POSTS" -lt "$OLD_POSTS" ]; then
            SHOULD_COMMIT="true"
            CHANGE_REASON="plugin removed ($OLD_POSTS -> $NEW_POSTS)"
          elif [ "$OLD_VERSIONS" != "$NEW_VERSIONS" ]; then
            SHOULD_COMMIT="true"
            CHANGE_REASON="plugin versions updated"
            echo "🔄 Version change detected"
          fi

          echo "should_commit=$SHOULD_COMMIT" >> $GITHUB_OUTPUT
          echo "change_reason=$CHANGE_REASON" >> $GITHUB_OUTPUT

          if [ "$SHOULD_COMMIT" = "false" ]; then
            echo "ℹ️ No significant changes detected, skipping commit"
            echo "ℹ️ No significant changes (posts or versions), skipping commit"
          else
            echo "✅ Significant changes detected: $CHANGE_REASON"
          fi
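The workflow's `versions` fingerprint (the `jq` expression above) can be restated in Python for clarity. This is an illustrative re-implementation of the same logic, not code from the repository: a stable `slug:version` string whose change signals that some plugin was updated.

```python
import json

def version_fingerprint(stats_json: str) -> str:
    # Mirror of the jq pipeline:
    # [.posts[] | {slug, version}] | sort_by(.slug)
    #   | map("\(.slug):\(.version)") | join(",")
    posts = json.loads(stats_json).get("posts", [])
    pairs = sorted((p["slug"], p.get("version", "")) for p in posts)
    return ",".join(f"{slug}:{version}" for slug, version in pairs)

old = version_fingerprint(
    '{"posts": [{"slug": "b", "version": "1.0"}, {"slug": "a", "version": "2.0"}]}'
)
new = version_fingerprint(
    '{"posts": [{"slug": "b", "version": "1.1"}, {"slug": "a", "version": "2.0"}]}'
)
```

Comparing the two strings detects any version bump with a single equality check, which is why the workflow commits when `OLD_VERSIONS != NEW_VERSIONS`.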
1  .gitignore  (vendored)
@@ -142,3 +142,4 @@ logs/
# OpenWebUI specific
# Add any specific ignores for OpenWebUI plugins if needed
.git-worktrees/
plugins/filters/auth_model_info/
13  CLAUDE.md  (new file)
@@ -0,0 +1,13 @@
# 🤖 Claude Multi-Agent Protocol (MACP)

## 🚀 Mandatory Startup
1. **Check Hub**: `python3 scripts/agent_sync.py status`
2. **Register**: `python3 scripts/agent_sync.py register claude-code "Claude" "Handling user request"`
3. **Lock**: `python3 scripts/agent_sync.py lock claude-code <file_path>`
4. **Handoff**: Use `python3 scripts/agent_sync.py note` for collaborative findings.

## 🤝 Project Standards
Read these BEFORE writing any code:
- `.agent/rules/plugin_standards.md`
- `.agent/rules/agent_protocol.md`
- `COOPERATION.md`
33  COOPERATION.md  (new file)
@@ -0,0 +1,33 @@
# 🤖 Multi-Agent Cooperation Protocol (MACP) v2.1

This project uses a **SQLite coordination hub (Agent Hub)** to manage concurrent tasks across multiple AI agents.

## 🚀 Quick Commands
Use `./scripts/macp` for quick access; no need to memorize complex Python arguments.

| Command | Description |
| :--- | :--- |
| **`/status`** | Show global state (active agents, file locks, tasks, research topics) |
| **`/study <topic> <desc>`** | **Start a joint research session.** Broadcasts the topic and moves all agents into research mode. |
| **`/summon <agent> <task>`** | **Targeted summon.** Dispatch a high-priority task to a specific agent. |
| **`/handover <agent> <msg>`** | **Task relay.** Release current progress and hand off to the next agent. |
| **`/broadcast <msg>`** | **Global broadcast.** Send urgent notices or status syncs. |
| **`/check`** | **Inbox check.** See whether any pending tasks are assigned to you. |
| **`/resolve <topic> <result>`** | **Archive a conclusion.** Close a research topic and record the final consensus. |
| **`/ping`** | **Liveness check.** Quickly see which agents are online. |

---

## 🛡️ Collaboration Rules
1. **Check before acting**: run `./scripts/macp /status` before starting work.
2. **Lock means ownership**: acquire the lock before modifying a file.
3. **Intent first**: for large refactors, open a `/study` discussion of the plan first.
4. **Unlock promptly**: after committing and pushing, always `/handover` or unlock manually.

## 📁 Infrastructure
- **Database**: `.agent/agent_hub.db` (do not edit by hand)
- **Core**: `scripts/agent_sync.py`
- **Shortcut tool**: `scripts/macp`

---
*Generated by Claude (Coordinator) in collaboration with Sisyphus & Copilot.*
14  README.md
@@ -9,7 +9,6 @@ A collection of enhancements, plugins, and prompts for [open-webui](https://gith
<!-- STATS_START -->
## 📊 Community Stats
>
> 

| 👤 Author | 👥 Followers | ⭐ Points | 🏆 Contributions |
@@ -20,19 +19,18 @@ A collection of enhancements, plugins, and prompts for [open-webui](https://gith
| :---: | :---: | :---: | :---: | :---: |
|  |  |  |  |  |

### 🔥 Top 6 Popular Plugins

| Rank | Plugin | Version | Downloads | Views | 📅 Updated |
| :---: | :--- | :---: | :---: | :---: | :---: |
| 🥇 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) |  |  |  |  |
| 🥈 | [Smart Infographic](https://openwebui.com/posts/smart_infographic_ad6f0c7f) |  |  |  |  |
| 🥉 | [Markdown Normalizer](https://openwebui.com/posts/markdown_normalizer_baaa8732) |  |  |  |  |
| 4️⃣ | [Export to Word Enhanced](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) |  |  |  |  |
| 🥇 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) |  |  |  |  |
| 🥈 | [Smart Infographic](https://openwebui.com/posts/smart_infographic_ad6f0c7f) |  |  |  |  |
| 🥉 | [Markdown Normalizer](https://openwebui.com/posts/markdown_normalizer_baaa8732) |  |  |  |  |
| 4️⃣ | [Export to Word Enhanced](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) |  |  |  |  |
| 5️⃣ | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) |  |  |  |  |
| 6️⃣ | [AI Task Instruction Generator](https://openwebui.com/posts/ai_task_instruction_generator_9bab8b37) |  |  |  |  |
| 6️⃣ | [AI Task Instruction Generator](https://openwebui.com/posts/ai_task_instruction_generator_9bab8b37) |  |  |  |  |

### 📈 Total Downloads Trend

![Downloads Trend](./docs/charts/downloads-trend.svg)

*See full stats and charts in [Community Stats Report](./docs/community-stats.md)*
14  README_CN.md
@@ -6,7 +6,6 @@ A collection of OpenWebUI enhancements: personally developed and collected plugins, prompts
<!-- STATS_START -->
## 📊 Community Stats
>
> 

| 👤 Author | 👥 Followers | ⭐ Points | 🏆 Contributions |
@@ -17,19 +16,18 @@ A collection of OpenWebUI enhancements: personally developed and collected plugins, prompts
| :---: | :---: | :---: | :---: | :---: |
|  |  |  |  |  |

### 🔥 Top 6 Popular Plugins

| Rank | Plugin | Version | Downloads | Views | 📅 Updated |
| :---: | :--- | :---: | :---: | :---: | :---: |
| 🥇 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) |  |  |  |  |
| 🥈 | [Smart Infographic](https://openwebui.com/posts/smart_infographic_ad6f0c7f) |  |  |  |  |
| 🥉 | [Markdown Normalizer](https://openwebui.com/posts/markdown_normalizer_baaa8732) |  |  |  |  |
| 4️⃣ | [Export to Word Enhanced](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) |  |  |  |  |
| 🥇 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) |  |  |  |  |
| 🥈 | [Smart Infographic](https://openwebui.com/posts/smart_infographic_ad6f0c7f) |  |  |  |  |
| 🥉 | [Markdown Normalizer](https://openwebui.com/posts/markdown_normalizer_baaa8732) |  |  |  |  |
| 4️⃣ | [Export to Word Enhanced](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) |  |  |  |  |
| 5️⃣ | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) |  |  |  |  |
| 6️⃣ | [AI Task Instruction Generator](https://openwebui.com/posts/ai_task_instruction_generator_9bab8b37) |  |  |  |  |
| 6️⃣ | [AI Task Instruction Generator](https://openwebui.com/posts/ai_task_instruction_generator_9bab8b37) |  |  |  |  |

### 📈 Cumulative Downloads Trend

![Downloads Trend](./docs/charts/downloads-trend.svg)

*See full stats and charts in the [Community Stats Report](./docs/community-stats.zh.md)*
139  ai-tabs.sh  (new executable file)
@@ -0,0 +1,139 @@
#!/bin/bash
# ==============================================================================
# ai-tabs - Ultra Orchestrator
# Version: v1.0.0
# License: MIT
# Author: Fu-Jie
# Description: Batch-launches and orchestrates multiple AI CLI tools as Tabs.
# ==============================================================================

# 1. Single-Instance Lock
LOCK_FILE="/tmp/ai_terminal_launch.lock"
# If lock is less than 10 seconds old, another instance is running. Exit.
if [ -f "$LOCK_FILE" ]; then
    LOCK_TIME=$(stat -f %m "$LOCK_FILE")
    NOW=$(date +%s)
    if (( NOW - LOCK_TIME < 10 )); then
        echo "⚠️ Another launch in progress. Skipping to prevent duplicates."
        exit 0
    fi
fi
touch "$LOCK_FILE"
trap 'rm -f "$LOCK_FILE"' EXIT

# 2. Configuration & Constants
INIT_DELAY=4.5
PASTE_DELAY=0.3
CMD_CREATION_DELAY=0.3
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PARENT_DIR="$(dirname "$SCRIPT_DIR")"

# Search for .env
if [ -f "${SCRIPT_DIR}/.env" ]; then
    ENV_FILE="${SCRIPT_DIR}/.env"
elif [ -f "${PARENT_DIR}/.env" ]; then
    ENV_FILE="${PARENT_DIR}/.env"
fi

# Supported Tools
SUPPORTED_TOOLS=(
    "claude:--continue"
    "opencode:--continue"
    "gemini:--resume latest"
    "copilot:--continue"
    "iflow:--continue"
    "kilo:--continue"
)

FOUND_TOOLS_NAMES=()
FOUND_CMDS=()

# 3. Part A: Load Manual Configuration
if [ -f "$ENV_FILE" ]; then
    set -a; source "$ENV_FILE"; set +a
    for var in $(compgen -v | grep '^TOOL_[0-9]' | sort -V); do
        TPATH="${!var}"
        if [ -x "$TPATH" ]; then
            NAME=$(basename "$TPATH")
            FLAG="--continue"
            for item in "${SUPPORTED_TOOLS[@]}"; do
                [[ "${item%%:*}" == "$NAME" ]] && FLAG="${item#*:}" && break
            done
            FOUND_TOOLS_NAMES+=("$NAME")
            FOUND_CMDS+=("'$TPATH' $FLAG || '$TPATH' || exec \$SHELL")
        fi
    done
fi

# 4. Part B: Automatic Tool Discovery
for item in "${SUPPORTED_TOOLS[@]}"; do
    NAME="${item%%:*}"
    FLAG="${item#*:}"
    ALREADY_CONFIGURED=false
    for configured in "${FOUND_TOOLS_NAMES[@]}"; do
        [[ "$configured" == "$NAME" ]] && ALREADY_CONFIGURED=true && break
    done
    [[ "$ALREADY_CONFIGURED" == true ]] && continue
    TPATH=$(which "$NAME" 2>/dev/null)
    if [ -z "$TPATH" ]; then
        SEARCH_PATHS=(
            "/opt/homebrew/bin/$NAME"
            "/usr/local/bin/$NAME"
            "$HOME/.local/bin/$NAME"
            "$HOME/bin/$NAME"
            "$HOME/.$NAME/bin/$NAME"
            "$HOME/.nvm/versions/node/*/bin/$NAME"
            "$HOME/.npm-global/bin/$NAME"
            "$HOME/.cargo/bin/$NAME"
        )
        for p in "${SEARCH_PATHS[@]}"; do
            for found_p in $p; do [[ -x "$found_p" ]] && TPATH="$found_p" && break 2; done
        done
    fi
    if [ -n "$TPATH" ]; then
        FOUND_TOOLS_NAMES+=("$NAME")
        FOUND_CMDS+=("'$TPATH' $FLAG || '$TPATH' || exec \$SHELL")
    fi
done

NUM_FOUND=${#FOUND_CMDS[@]}
[[ "$NUM_FOUND" -eq 0 ]] && exit 1

# 5. Core Orchestration (Reset + Launch)
# Using Command Palette automation to avoid the need for manual shortcut binding.
AS_SCRIPT="tell application \"System Events\"\n"

# Phase A: Creation (Using Command Palette to ensure it opens in Editor Area)
for ((i=1; i<=NUM_FOUND; i++)); do
    AS_SCRIPT+="  keystroke \"p\" using {command down, shift down}\n"
    AS_SCRIPT+="  delay 0.1\n"
    # Ensure we are searching for the command. Using clipboard for speed and universal language support.
    AS_SCRIPT+="  set the clipboard to \"Terminal: Create New Terminal in Editor Area\"\n"
    AS_SCRIPT+="  keystroke \"v\" using {command down}\n"
    AS_SCRIPT+="  delay 0.1\n"
    AS_SCRIPT+="  keystroke return\n"
    AS_SCRIPT+="  delay $CMD_CREATION_DELAY\n"
done

# Phase B: Warmup
AS_SCRIPT+="  delay $INIT_DELAY\n"

# Phase C: Command Injection (Reverse)
for ((i=NUM_FOUND-1; i>=0; i--)); do
    FULL_CMD="${FOUND_CMDS[$i]}"
    CLEAN_CMD=$(echo "$FULL_CMD" | sed 's/"/\\"/g')
    AS_SCRIPT+="  set the clipboard to \"$CLEAN_CMD\"\n"
    AS_SCRIPT+="  delay 0.1\n"
    AS_SCRIPT+="  keystroke \"v\" using {command down}\n"
    AS_SCRIPT+="  delay $PASTE_DELAY\n"
    AS_SCRIPT+="  keystroke return\n"
    if [ $i -gt 0 ]; then
        AS_SCRIPT+="  delay 0.5\n"
        AS_SCRIPT+="  keystroke \"[\" using {command down, shift down}\n"
    fi
done
AS_SCRIPT+="end tell"

# Execute
echo -e "$AS_SCRIPT" | osascript
echo "✨ AI tabs initialized successfully ($NUM_FOUND tools found)."
@@ -1,7 +1,7 @@
{
  "schemaVersion": 1,
  "label": "downloads",
  "message": "7.8k",
  "message": "8.8k",
  "color": "blue",
  "namedLogo": "openwebui"
}
@@ -1,6 +1,6 @@
{
  "schemaVersion": 1,
  "label": "followers",
  "message": "315",
  "message": "344",
  "color": "blue"
}
@@ -1,6 +1,6 @@
{
  "schemaVersion": 1,
  "label": "points",
  "message": "329",
  "message": "351",
  "color": "orange"
}
@@ -1,6 +1,6 @@
{
  "schemaVersion": 1,
  "label": "upvotes",
  "message": "281",
  "message": "300",
  "color": "brightgreen"
}
@@ -1,19 +1,17 @@
{
  "total_posts": 27,
  "total_downloads": 7786,
  "total_views": 82342,
  "total_upvotes": 281,
  "total_downloads": 8765,
  "total_views": 92460,
  "total_upvotes": 300,
  "total_downvotes": 4,
  "total_saves": 398,
  "total_comments": 63,
  "total_saves": 431,
  "total_comments": 73,
  "by_type": {
    "post": 6,
    "tool": 2,
    "pipe": 1,
    "filter": 4,
    "pipe": 1,
    "action": 12,
    "prompt": 1,
    "review": 1
    "prompt": 1
  },
  "posts": [
    {
@@ -23,13 +21,13 @@
      "version": "1.0.0",
      "author": "Fu-Jie",
      "description": "Intelligently analyzes text content and generates interactive mind maps to help users structure and visualize knowledge.",
      "downloads": 1542,
      "views": 12996,
      "upvotes": 28,
      "saves": 66,
      "comments": 18,
      "created_at": "2025-12-30",
      "updated_at": "2026-02-27",
      "downloads": 1730,
      "views": 14700,
      "upvotes": 30,
      "saves": 67,
      "comments": 21,
      "created_at": "2025-12-31",
      "updated_at": "2026-02-28",
      "url": "https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a"
    },
    {
@@ -39,10 +37,10 @@
      "version": "1.5.0",
      "author": "Fu-Jie",
      "description": "AI-powered infographic generator based on AntV Infographic. Supports professional templates, auto-icon matching, and SVG/PNG downloads.",
      "downloads": 1230,
      "views": 12309,
      "upvotes": 25,
      "saves": 46,
      "downloads": 1330,
      "views": 13250,
      "upvotes": 27,
      "saves": 50,
      "comments": 10,
      "created_at": "2025-12-28",
      "updated_at": "2026-02-13",
@@ -52,16 +50,16 @@
      "title": "Markdown Normalizer",
      "slug": "markdown_normalizer_baaa8732",
      "type": "filter",
      "version": "1.2.7",
      "version": "1.2.8",
      "author": "Fu-Jie",
      "description": "A content normalizer filter that fixes common Markdown formatting issues in LLM outputs, such as broken code blocks, LaTeX formulas, and list formatting. Including LaTeX command protection.",
      "downloads": 719,
      "views": 7704,
      "upvotes": 20,
      "saves": 42,
      "downloads": 807,
      "views": 8499,
      "upvotes": 21,
      "saves": 44,
      "comments": 5,
      "created_at": "2026-01-12",
      "updated_at": "2026-03-03",
      "updated_at": "2026-03-09",
      "url": "https://openwebui.com/posts/markdown_normalizer_baaa8732"
    },
    {
@@ -71,10 +69,10 @@
      "version": "0.4.4",
      "author": "Fu-Jie",
      "description": "Export current conversation from Markdown to Word (.docx) with Mermaid diagrams rendered client-side (Mermaid.js, SVG+PNG), LaTeX math, real hyperlinks, improved tables, syntax highlighting, and blockquote support.",
      "downloads": 700,
      "views": 5399,
      "upvotes": 17,
      "saves": 37,
      "downloads": 767,
      "views": 5898,
      "upvotes": 18,
      "saves": 38,
      "comments": 5,
      "created_at": "2026-01-03",
      "updated_at": "2026-02-13",
@@ -84,16 +82,16 @@
      "title": "Async Context Compression",
      "slug": "async_context_compression_b1655bc8",
      "type": "filter",
      "version": "1.3.0",
      "version": "1.4.1",
      "author": "Fu-Jie",
      "description": "Reduces token consumption in long conversations while maintaining coherence through intelligent summarization and message compression.",
      "downloads": 669,
      "views": 6274,
      "upvotes": 16,
      "saves": 47,
      "downloads": 760,
      "views": 6985,
      "upvotes": 17,
      "saves": 50,
      "comments": 0,
      "created_at": "2025-11-08",
      "updated_at": "2026-03-03",
      "updated_at": "2026-03-11",
      "url": "https://openwebui.com/posts/async_context_compression_b1655bc8"
    },
|
||||
{
|
||||
@@ -103,10 +101,10 @@
|
||||
"version": "",
|
||||
"author": "",
|
||||
"description": "",
|
||||
"downloads": 583,
|
||||
"views": 6659,
|
||||
"upvotes": 9,
|
||||
"saves": 17,
|
||||
"downloads": 666,
|
||||
"views": 7490,
|
||||
"upvotes": 10,
|
||||
"saves": 19,
|
||||
"comments": 0,
|
||||
"created_at": "2026-01-28",
|
||||
"updated_at": "2026-01-28",
|
||||
@@ -119,8 +117,8 @@
|
||||
"version": "0.3.7",
|
||||
"author": "Fu-Jie",
|
||||
"description": "Extracts tables from chat messages and exports them to Excel (.xlsx) files with smart formatting.",
|
||||
"downloads": 563,
|
||||
"views": 3153,
|
||||
"downloads": 604,
|
||||
"views": 3426,
|
||||
"upvotes": 11,
|
||||
"saves": 11,
|
||||
"comments": 0,
|
||||
@@ -128,20 +126,36 @@
|
||||
"updated_at": "2026-02-13",
|
||||
"url": "https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d"
|
||||
},
|
||||
{
|
||||
"title": "OpenWebUI Skills Manager Tool",
|
||||
"slug": "openwebui_skills_manager_tool_b4bce8e4",
|
||||
"type": "tool",
|
||||
"version": "0.3.0",
|
||||
"author": "Fu-Jie",
|
||||
"description": "Standalone OpenWebUI tool for managing native Workspace Skills (list/show/install/create/update/delete) for any model.",
|
||||
"downloads": 434,
|
||||
"views": 5597,
|
||||
"upvotes": 8,
|
||||
"saves": 22,
|
||||
"comments": 4,
|
||||
"created_at": "2026-02-28",
|
||||
"updated_at": "2026-03-11",
|
||||
"url": "https://openwebui.com/posts/openwebui_skills_manager_tool_b4bce8e4"
|
||||
},
|
||||
{
|
||||
"title": "GitHub Copilot Official SDK Pipe",
|
||||
"slug": "github_copilot_official_sdk_pipe_ce96f7b4",
|
||||
"type": "pipe",
|
||||
"version": "0.9.1",
|
||||
"version": "0.10.0",
|
||||
"author": "Fu-Jie",
|
||||
"description": "A powerful Agent SDK integration for OpenWebUI. It deeply bridges GitHub Copilot SDK with OpenWebUI's ecosystem, enabling the Agent to autonomously perform intent recognition, web search, and context compaction. It seamlessly reuses your existing Tools, MCP servers, OpenAPI servers, and Skills for a professional, full-featured experience.",
|
||||
"downloads": 335,
|
||||
"views": 4905,
|
||||
"downloads": 399,
|
||||
"views": 5542,
|
||||
"upvotes": 16,
|
||||
"saves": 10,
|
||||
"comments": 6,
|
||||
"saves": 11,
|
||||
"comments": 8,
|
||||
"created_at": "2026-01-26",
|
||||
"updated_at": "2026-03-03",
|
||||
"updated_at": "2026-03-07",
|
||||
"url": "https://openwebui.com/posts/github_copilot_official_sdk_pipe_ce96f7b4"
|
||||
},
|
||||
{
|
||||
@@ -151,31 +165,15 @@
     "version": "0.2.4",
     "author": "Fu-Jie",
     "description": "Quickly generates beautiful flashcards from text, extracting key points and categories.",
-    "downloads": 312,
-    "views": 4448,
+    "downloads": 325,
+    "views": 4650,
     "upvotes": 13,
-    "saves": 20,
+    "saves": 22,
     "comments": 2,
     "created_at": "2025-12-30",
     "updated_at": "2026-02-13",
     "url": "https://openwebui.com/posts/flash_card_65a2ea8f"
   },
-  {
-    "title": "OpenWebUI Skills Manager Tool",
-    "slug": "openwebui_skills_manager_tool_b4bce8e4",
-    "type": "tool",
-    "version": "",
-    "author": "",
-    "description": "",
-    "downloads": 303,
-    "views": 4265,
-    "upvotes": 7,
-    "saves": 13,
-    "comments": 2,
-    "created_at": "2026-02-28",
-    "updated_at": "2026-03-05",
-    "url": "https://openwebui.com/posts/openwebui_skills_manager_tool_b4bce8e4"
-  },
   {
     "title": "Deep Dive",
     "slug": "deep_dive_c0b846e4",
@@ -183,8 +181,8 @@
     "version": "1.0.0",
     "author": "Fu-Jie",
     "description": "A comprehensive thinking lens that dives deep into any content - from context to logic, insights, and action paths.",
-    "downloads": 219,
-    "views": 1764,
+    "downloads": 224,
+    "views": 1852,
     "upvotes": 6,
     "saves": 15,
     "comments": 0,
@@ -199,8 +197,8 @@
     "version": "0.4.4",
     "author": "Fu-Jie",
     "description": "将对话导出为 Word (.docx),支持 Mermaid 图表 (客户端渲染 SVG+PNG)、LaTeX 数学公式、真实超链接、增强表格格式、代码高亮和引用块。",
-    "downloads": 165,
-    "views": 2831,
+    "downloads": 171,
+    "views": 2974,
     "upvotes": 14,
     "saves": 7,
     "comments": 4,
@@ -215,15 +213,31 @@
     "version": "0.1.0",
     "author": "Fu-Jie",
     "description": "Automatically extracts project rules from conversations and injects them into the folder's system prompt.",
-    "downloads": 112,
-    "views": 1992,
+    "downloads": 125,
+    "views": 2137,
     "upvotes": 7,
-    "saves": 11,
+    "saves": 13,
     "comments": 0,
     "created_at": "2026-01-20",
     "updated_at": "2026-01-20",
     "url": "https://openwebui.com/posts/folder_memory_auto_evolving_project_context_4a9875b2"
   },
+  {
+    "title": "🧠 Smart Mind Map Tool: Auto-Generate Interactive Knowledge Graphs",
+    "slug": "smart_mind_map_tool_auto_generate_interactive_know_d25f4e3d",
+    "type": "tool",
+    "version": "1.0.0",
+    "author": "Fu-Jie",
+    "description": "Intelligently analyzes text content and generates interactive mind maps to help users structure and visualize knowledge.",
+    "downloads": 100,
+    "views": 2203,
+    "upvotes": 5,
+    "saves": 4,
+    "comments": 0,
+    "created_at": "2026-03-04",
+    "updated_at": "2026-03-05",
+    "url": "https://openwebui.com/posts/smart_mind_map_tool_auto_generate_interactive_know_d25f4e3d"
+  },
   {
     "title": "GitHub Copilot SDK Files Filter",
     "slug": "github_copilot_sdk_files_filter_403a62ee",
@@ -231,13 +245,13 @@
     "version": "0.1.3",
     "author": "Fu-Jie",
     "description": "A specialized filter to bypass OpenWebUI's default RAG for GitHub Copilot SDK models. It moves uploaded files to a safe location ('copilot_files') so the Copilot Pipe can process them natively without interference.",
-    "downloads": 76,
-    "views": 2311,
+    "downloads": 93,
+    "views": 2452,
     "upvotes": 4,
     "saves": 1,
     "comments": 0,
     "created_at": "2026-02-09",
-    "updated_at": "2026-03-03",
+    "updated_at": "2026-03-04",
     "url": "https://openwebui.com/posts/github_copilot_sdk_files_filter_403a62ee"
   },
   {
@@ -247,8 +261,8 @@
     "version": "1.5.0",
     "author": "Fu-Jie",
     "description": "基于 AntV Infographic 的智能信息图生成插件。支持多种专业模板,自动图标匹配,并提供 SVG/PNG 下载功能。",
-    "downloads": 68,
-    "views": 1431,
+    "downloads": 71,
+    "views": 1545,
     "upvotes": 10,
     "saves": 1,
     "comments": 0,
@@ -263,8 +277,8 @@
     "version": "0.9.2",
     "author": "Fu-Jie",
     "description": "智能分析文本内容,生成交互式思维导图,帮助用户结构化和可视化知识。",
-    "downloads": 52,
-    "views": 761,
+    "downloads": 53,
+    "views": 789,
     "upvotes": 6,
     "saves": 2,
     "comments": 0,
@@ -279,8 +293,8 @@
     "version": "1.2.2",
     "author": "Fu-Jie",
     "description": "通过智能摘要和消息压缩,降低长对话的 token 消耗,同时保持对话连贯性。",
-    "downloads": 39,
-    "views": 838,
+    "downloads": 40,
+    "views": 876,
     "upvotes": 7,
     "saves": 5,
     "comments": 0,
@@ -288,22 +302,6 @@
     "updated_at": "2026-02-13",
     "url": "https://openwebui.com/posts/异步上下文压缩_5c0617cb"
   },
-  {
-    "title": "🧠 Smart Mind Map Tool: Auto-Generate Interactive Knowledge Graphs",
-    "slug": "smart_mind_map_tool_auto_generate_interactive_know_d25f4e3d",
-    "type": "tool",
-    "version": "",
-    "author": "",
-    "description": "",
-    "downloads": 34,
-    "views": 767,
-    "upvotes": 2,
-    "saves": 3,
-    "comments": 0,
-    "created_at": "2026-03-04",
-    "updated_at": "2026-03-05",
-    "url": "https://openwebui.com/posts/smart_mind_map_tool_auto_generate_interactive_know_d25f4e3d"
-  },
   {
     "title": "闪记卡 (Flash Card)",
     "slug": "闪记卡生成插件_4a31eac3",
@@ -312,7 +310,7 @@
     "author": "Fu-Jie",
     "description": "快速将文本提炼为精美的学习记忆卡片,支持核心要点提取与分类。",
     "downloads": 34,
-    "views": 888,
+    "views": 917,
     "upvotes": 7,
     "saves": 1,
     "comments": 0,
@@ -327,8 +325,8 @@
     "version": "1.0.0",
     "author": "Fu-Jie",
     "description": "全方位的思维透镜 —— 从背景全景到逻辑脉络,从深度洞察到行动路径。",
-    "downloads": 31,
-    "views": 647,
+    "downloads": 32,
+    "views": 678,
     "upvotes": 5,
     "saves": 1,
     "comments": 0,
@@ -339,62 +337,62 @@
   {
     "title": "An Unconventional Use of Open Terminal ⚡",
     "slug": "an_unconventional_use_of_open_terminal_35498f8f",
-    "type": "post",
+    "type": "action",
     "version": "",
     "author": "",
     "description": "",
     "downloads": 0,
-    "views": 14,
-    "upvotes": 1,
-    "saves": 0,
-    "comments": 0,
-    "created_at": "2026-03-06",
-    "updated_at": "2026-03-06",
+    "views": 3009,
+    "upvotes": 7,
+    "saves": 1,
+    "comments": 2,
+    "created_at": "2026-03-07",
+    "updated_at": "2026-03-07",
     "url": "https://openwebui.com/posts/an_unconventional_use_of_open_terminal_35498f8f"
   },
   {
     "title": "🚀 GitHub Copilot SDK Pipe v0.9.0: Skills & RichUI",
     "slug": "github_copilot_sdk_pipe_v090_copilot_sdk_skills_co_99a42452",
-    "type": "post",
+    "type": "pipe",
     "version": "",
     "author": "",
     "description": "",
     "downloads": 0,
-    "views": 1585,
+    "views": 1762,
     "upvotes": 5,
     "saves": 1,
     "comments": 0,
-    "created_at": "2026-02-27",
+    "created_at": "2026-02-28",
     "updated_at": "2026-02-28",
     "url": "https://openwebui.com/posts/github_copilot_sdk_pipe_v090_copilot_sdk_skills_co_99a42452"
   },
   {
     "title": "🚀 GitHub Copilot SDK Pipe v0.7.0: Skills & Rich UI 🛠️",
     "slug": "github_copilot_sdk_pipe_v070_native_tool_ui_zero_c_4af38131",
-    "type": "post",
+    "type": "pipe",
     "version": "",
     "author": "",
     "description": "",
     "downloads": 0,
-    "views": 2608,
+    "views": 2758,
     "upvotes": 8,
     "saves": 4,
     "comments": 1,
-    "created_at": "2026-02-22",
+    "created_at": "2026-02-23",
     "updated_at": "2026-02-28",
     "url": "https://openwebui.com/posts/github_copilot_sdk_pipe_v070_native_tool_ui_zero_c_4af38131"
   },
   {
     "title": "🚀 GitHub Copilot SDK Pipe: AI That Executes, Not Just Talks",
     "slug": "github_copilot_sdk_for_openwebui_elevate_your_ai_t_a140f293",
-    "type": "post",
+    "type": "pipe",
     "version": "",
     "author": "",
     "description": "",
     "downloads": 0,
-    "views": 2390,
+    "views": 2430,
     "upvotes": 7,
-    "saves": 4,
+    "saves": 5,
     "comments": 0,
     "created_at": "2026-02-10",
     "updated_at": "2026-02-10",
@@ -403,17 +401,17 @@
   {
     "title": "🚀 Open WebUI Prompt Plus: AI-Powered Prompt Manager",
    "slug": "open_webui_prompt_plus_ai_powered_prompt_manager_s_15fa060e",
-    "type": "post",
+    "type": "action",
     "version": "",
     "author": "",
     "description": "",
     "downloads": 0,
-    "views": 1915,
-    "upvotes": 12,
-    "saves": 21,
-    "comments": 8,
+    "views": 1989,
+    "upvotes": 13,
+    "saves": 23,
+    "comments": 9,
     "created_at": "2026-01-25",
-    "updated_at": "2026-01-28",
+    "updated_at": "2026-01-29",
     "url": "https://openwebui.com/posts/open_webui_prompt_plus_ai_powered_prompt_manager_s_15fa060e"
   },
   {
@@ -424,7 +422,7 @@
     "author": "",
     "description": "",
     "downloads": 0,
-    "views": 251,
+    "views": 263,
     "upvotes": 2,
     "saves": 0,
     "comments": 0,
@@ -435,14 +433,14 @@
   {
     "title": " 🛠️ Debug Open WebUI Plugins in Your Browser",
     "slug": "debug_open_webui_plugins_in_your_browser_81bf7960",
-    "type": "post",
+    "type": "action",
     "version": "",
     "author": "",
     "description": "",
     "downloads": 0,
-    "views": 1549,
+    "views": 1579,
     "upvotes": 16,
-    "saves": 12,
+    "saves": 13,
     "comments": 2,
     "created_at": "2026-01-10",
     "updated_at": "2026-01-10",
@@ -454,11 +452,11 @@
     "name": "Fu-Jie",
     "profile_url": "https://openwebui.com/u/Fu-Jie",
     "profile_image": "https://community.s3.openwebui.com/uploads/users/b15d1348-4347-42b4-b815-e053342d6cb0/profile_d9510745-4bd4-4f8f-a997-4a21847d9300.webp",
-    "followers": 315,
+    "followers": 344,
     "following": 6,
-    "total_points": 329,
-    "post_points": 279,
-    "comment_points": 50,
-    "contributions": 59
+    "total_points": 351,
+    "post_points": 298,
+    "comment_points": 53,
+    "contributions": 66
   }
 }
464 docs/community-stats.json.old (new file)
@@ -0,0 +1,464 @@
{
  "total_posts": 27,
  "total_downloads": 7786,
  "total_views": 82342,
  "total_upvotes": 281,
  "total_downvotes": 4,
  "total_saves": 398,
  "total_comments": 63,
  "by_type": {
    "post": 6,
    "tool": 2,
    "pipe": 1,
    "filter": 4,
    "action": 12,
    "prompt": 1,
    "review": 1
  },
  "posts": [
    {
      "title": "Smart Mind Map",
      "slug": "turn_any_text_into_beautiful_mind_maps_3094c59a",
      "type": "action",
      "version": "1.0.0",
      "author": "Fu-Jie",
      "description": "Intelligently analyzes text content and generates interactive mind maps to help users structure and visualize knowledge.",
      "downloads": 1542,
      "views": 12996,
      "upvotes": 28,
      "saves": 66,
      "comments": 18,
      "created_at": "2025-12-30",
      "updated_at": "2026-02-27",
      "url": "https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a"
    },
    {
      "title": "Smart Infographic",
      "slug": "smart_infographic_ad6f0c7f",
      "type": "action",
      "version": "1.5.0",
      "author": "Fu-Jie",
      "description": "AI-powered infographic generator based on AntV Infographic. Supports professional templates, auto-icon matching, and SVG/PNG downloads.",
      "downloads": 1230,
      "views": 12309,
      "upvotes": 25,
      "saves": 46,
      "comments": 10,
      "created_at": "2025-12-28",
      "updated_at": "2026-02-13",
      "url": "https://openwebui.com/posts/smart_infographic_ad6f0c7f"
    },
    {
      "title": "Markdown Normalizer",
      "slug": "markdown_normalizer_baaa8732",
      "type": "filter",
      "version": "1.2.7",
      "author": "Fu-Jie",
      "description": "A content normalizer filter that fixes common Markdown formatting issues in LLM outputs, such as broken code blocks, LaTeX formulas, and list formatting. Including LaTeX command protection.",
      "downloads": 719,
      "views": 7704,
      "upvotes": 20,
      "saves": 42,
      "comments": 5,
      "created_at": "2026-01-12",
      "updated_at": "2026-03-03",
      "url": "https://openwebui.com/posts/markdown_normalizer_baaa8732"
    },
    {
      "title": "Export to Word Enhanced",
      "slug": "export_to_word_enhanced_formatting_fca6a315",
      "type": "action",
      "version": "0.4.4",
      "author": "Fu-Jie",
      "description": "Export current conversation from Markdown to Word (.docx) with Mermaid diagrams rendered client-side (Mermaid.js, SVG+PNG), LaTeX math, real hyperlinks, improved tables, syntax highlighting, and blockquote support.",
      "downloads": 700,
      "views": 5399,
      "upvotes": 17,
      "saves": 37,
      "comments": 5,
      "created_at": "2026-01-03",
      "updated_at": "2026-02-13",
      "url": "https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315"
    },
    {
      "title": "Async Context Compression",
      "slug": "async_context_compression_b1655bc8",
      "type": "filter",
      "version": "1.3.0",
      "author": "Fu-Jie",
      "description": "Reduces token consumption in long conversations while maintaining coherence through intelligent summarization and message compression.",
      "downloads": 669,
      "views": 6274,
      "upvotes": 16,
      "saves": 47,
      "comments": 0,
      "created_at": "2025-11-08",
      "updated_at": "2026-03-03",
      "url": "https://openwebui.com/posts/async_context_compression_b1655bc8"
    },
    {
      "title": "AI Task Instruction Generator",
      "slug": "ai_task_instruction_generator_9bab8b37",
      "type": "prompt",
      "version": "",
      "author": "",
      "description": "",
      "downloads": 583,
      "views": 6659,
      "upvotes": 9,
      "saves": 17,
      "comments": 0,
      "created_at": "2026-01-28",
      "updated_at": "2026-01-28",
      "url": "https://openwebui.com/posts/ai_task_instruction_generator_9bab8b37"
    },
    {
      "title": "Export to Excel",
      "slug": "export_mulit_table_to_excel_244b8f9d",
      "type": "action",
      "version": "0.3.7",
      "author": "Fu-Jie",
      "description": "Extracts tables from chat messages and exports them to Excel (.xlsx) files with smart formatting.",
      "downloads": 563,
      "views": 3153,
      "upvotes": 11,
      "saves": 11,
      "comments": 0,
      "created_at": "2025-05-30",
      "updated_at": "2026-02-13",
      "url": "https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d"
    },
    {
      "title": "GitHub Copilot Official SDK Pipe",
      "slug": "github_copilot_official_sdk_pipe_ce96f7b4",
      "type": "pipe",
      "version": "0.9.1",
      "author": "Fu-Jie",
      "description": "A powerful Agent SDK integration for OpenWebUI. It deeply bridges GitHub Copilot SDK with OpenWebUI's ecosystem, enabling the Agent to autonomously perform intent recognition, web search, and context compaction. It seamlessly reuses your existing Tools, MCP servers, OpenAPI servers, and Skills for a professional, full-featured experience.",
      "downloads": 335,
      "views": 4905,
      "upvotes": 16,
      "saves": 10,
      "comments": 6,
      "created_at": "2026-01-26",
      "updated_at": "2026-03-03",
      "url": "https://openwebui.com/posts/github_copilot_official_sdk_pipe_ce96f7b4"
    },
    {
      "title": "Flash Card",
      "slug": "flash_card_65a2ea8f",
      "type": "action",
      "version": "0.2.4",
      "author": "Fu-Jie",
      "description": "Quickly generates beautiful flashcards from text, extracting key points and categories.",
      "downloads": 312,
      "views": 4448,
      "upvotes": 13,
      "saves": 20,
      "comments": 2,
      "created_at": "2025-12-30",
      "updated_at": "2026-02-13",
      "url": "https://openwebui.com/posts/flash_card_65a2ea8f"
    },
    {
      "title": "OpenWebUI Skills Manager Tool",
      "slug": "openwebui_skills_manager_tool_b4bce8e4",
      "type": "tool",
      "version": "",
      "author": "",
      "description": "",
      "downloads": 303,
      "views": 4265,
      "upvotes": 7,
      "saves": 13,
      "comments": 2,
      "created_at": "2026-02-28",
      "updated_at": "2026-03-05",
      "url": "https://openwebui.com/posts/openwebui_skills_manager_tool_b4bce8e4"
    },
    {
      "title": "Deep Dive",
      "slug": "deep_dive_c0b846e4",
      "type": "action",
      "version": "1.0.0",
      "author": "Fu-Jie",
      "description": "A comprehensive thinking lens that dives deep into any content - from context to logic, insights, and action paths.",
      "downloads": 219,
      "views": 1764,
      "upvotes": 6,
      "saves": 15,
      "comments": 0,
      "created_at": "2026-01-08",
      "updated_at": "2026-01-08",
      "url": "https://openwebui.com/posts/deep_dive_c0b846e4"
    },
    {
      "title": "导出为Word增强版",
      "slug": "导出为_word_支持公式流程图表格和代码块_8a6306c0",
      "type": "action",
      "version": "0.4.4",
      "author": "Fu-Jie",
      "description": "将对话导出为 Word (.docx),支持 Mermaid 图表 (客户端渲染 SVG+PNG)、LaTeX 数学公式、真实超链接、增强表格格式、代码高亮和引用块。",
      "downloads": 165,
      "views": 2831,
      "upvotes": 14,
      "saves": 7,
      "comments": 4,
      "created_at": "2026-01-04",
      "updated_at": "2026-02-13",
      "url": "https://openwebui.com/posts/导出为_word_支持公式流程图表格和代码块_8a6306c0"
    },
    {
      "title": "📂 Folder Memory – Auto-Evolving Project Context",
      "slug": "folder_memory_auto_evolving_project_context_4a9875b2",
      "type": "filter",
      "version": "0.1.0",
      "author": "Fu-Jie",
      "description": "Automatically extracts project rules from conversations and injects them into the folder's system prompt.",
      "downloads": 112,
      "views": 1992,
      "upvotes": 7,
      "saves": 11,
      "comments": 0,
      "created_at": "2026-01-20",
      "updated_at": "2026-01-20",
      "url": "https://openwebui.com/posts/folder_memory_auto_evolving_project_context_4a9875b2"
    },
    {
      "title": "GitHub Copilot SDK Files Filter",
      "slug": "github_copilot_sdk_files_filter_403a62ee",
      "type": "filter",
      "version": "0.1.3",
      "author": "Fu-Jie",
      "description": "A specialized filter to bypass OpenWebUI's default RAG for GitHub Copilot SDK models. It moves uploaded files to a safe location ('copilot_files') so the Copilot Pipe can process them natively without interference.",
      "downloads": 76,
      "views": 2311,
      "upvotes": 4,
      "saves": 1,
      "comments": 0,
      "created_at": "2026-02-09",
      "updated_at": "2026-03-03",
      "url": "https://openwebui.com/posts/github_copilot_sdk_files_filter_403a62ee"
    },
    {
      "title": "智能信息图",
      "slug": "智能信息图_e04a48ff",
      "type": "action",
      "version": "1.5.0",
      "author": "Fu-Jie",
      "description": "基于 AntV Infographic 的智能信息图生成插件。支持多种专业模板,自动图标匹配,并提供 SVG/PNG 下载功能。",
      "downloads": 68,
      "views": 1431,
      "upvotes": 10,
      "saves": 1,
      "comments": 0,
      "created_at": "2025-12-28",
      "updated_at": "2026-02-13",
      "url": "https://openwebui.com/posts/智能信息图_e04a48ff"
    },
    {
      "title": "思维导图",
      "slug": "智能生成交互式思维导图帮助用户可视化知识_8d4b097b",
      "type": "action",
      "version": "0.9.2",
      "author": "Fu-Jie",
      "description": "智能分析文本内容,生成交互式思维导图,帮助用户结构化和可视化知识。",
      "downloads": 52,
      "views": 761,
      "upvotes": 6,
      "saves": 2,
      "comments": 0,
      "created_at": "2025-12-31",
      "updated_at": "2026-02-13",
      "url": "https://openwebui.com/posts/智能生成交互式思维导图帮助用户可视化知识_8d4b097b"
    },
    {
      "title": "异步上下文压缩",
      "slug": "异步上下文压缩_5c0617cb",
      "type": "action",
      "version": "1.2.2",
      "author": "Fu-Jie",
      "description": "通过智能摘要和消息压缩,降低长对话的 token 消耗,同时保持对话连贯性。",
      "downloads": 39,
      "views": 838,
      "upvotes": 7,
      "saves": 5,
      "comments": 0,
      "created_at": "2025-11-08",
      "updated_at": "2026-02-13",
      "url": "https://openwebui.com/posts/异步上下文压缩_5c0617cb"
    },
    {
      "title": "🧠 Smart Mind Map Tool: Auto-Generate Interactive Knowledge Graphs",
      "slug": "smart_mind_map_tool_auto_generate_interactive_know_d25f4e3d",
      "type": "tool",
      "version": "",
      "author": "",
      "description": "",
      "downloads": 34,
      "views": 767,
      "upvotes": 2,
      "saves": 3,
      "comments": 0,
      "created_at": "2026-03-04",
      "updated_at": "2026-03-05",
      "url": "https://openwebui.com/posts/smart_mind_map_tool_auto_generate_interactive_know_d25f4e3d"
    },
    {
      "title": "闪记卡 (Flash Card)",
      "slug": "闪记卡生成插件_4a31eac3",
      "type": "action",
      "version": "0.2.4",
      "author": "Fu-Jie",
      "description": "快速将文本提炼为精美的学习记忆卡片,支持核心要点提取与分类。",
      "downloads": 34,
      "views": 888,
      "upvotes": 7,
      "saves": 1,
      "comments": 0,
      "created_at": "2025-12-30",
      "updated_at": "2026-02-13",
      "url": "https://openwebui.com/posts/闪记卡生成插件_4a31eac3"
    },
    {
      "title": "精读",
      "slug": "精读_99830b0f",
      "type": "action",
      "version": "1.0.0",
      "author": "Fu-Jie",
      "description": "全方位的思维透镜 —— 从背景全景到逻辑脉络,从深度洞察到行动路径。",
      "downloads": 31,
      "views": 647,
      "upvotes": 5,
      "saves": 1,
      "comments": 0,
      "created_at": "2026-01-08",
      "updated_at": "2026-01-08",
      "url": "https://openwebui.com/posts/精读_99830b0f"
    },
    {
      "title": "An Unconventional Use of Open Terminal ⚡",
      "slug": "an_unconventional_use_of_open_terminal_35498f8f",
      "type": "post",
      "version": "",
      "author": "",
      "description": "",
      "downloads": 0,
      "views": 14,
      "upvotes": 1,
      "saves": 0,
      "comments": 0,
      "created_at": "2026-03-06",
      "updated_at": "2026-03-06",
      "url": "https://openwebui.com/posts/an_unconventional_use_of_open_terminal_35498f8f"
    },
    {
      "title": "🚀 GitHub Copilot SDK Pipe v0.9.0: Skills & RichUI",
      "slug": "github_copilot_sdk_pipe_v090_copilot_sdk_skills_co_99a42452",
      "type": "post",
      "version": "",
      "author": "",
      "description": "",
      "downloads": 0,
      "views": 1585,
      "upvotes": 5,
      "saves": 1,
      "comments": 0,
      "created_at": "2026-02-27",
      "updated_at": "2026-02-28",
      "url": "https://openwebui.com/posts/github_copilot_sdk_pipe_v090_copilot_sdk_skills_co_99a42452"
    },
    {
      "title": "🚀 GitHub Copilot SDK Pipe v0.7.0: Skills & Rich UI 🛠️",
      "slug": "github_copilot_sdk_pipe_v070_native_tool_ui_zero_c_4af38131",
      "type": "post",
      "version": "",
      "author": "",
      "description": "",
      "downloads": 0,
      "views": 2608,
      "upvotes": 8,
      "saves": 4,
      "comments": 1,
      "created_at": "2026-02-22",
      "updated_at": "2026-02-28",
      "url": "https://openwebui.com/posts/github_copilot_sdk_pipe_v070_native_tool_ui_zero_c_4af38131"
    },
    {
      "title": "🚀 GitHub Copilot SDK Pipe: AI That Executes, Not Just Talks",
      "slug": "github_copilot_sdk_for_openwebui_elevate_your_ai_t_a140f293",
      "type": "post",
      "version": "",
      "author": "",
      "description": "",
      "downloads": 0,
      "views": 2390,
      "upvotes": 7,
      "saves": 4,
      "comments": 0,
      "created_at": "2026-02-10",
      "updated_at": "2026-02-10",
      "url": "https://openwebui.com/posts/github_copilot_sdk_for_openwebui_elevate_your_ai_t_a140f293"
    },
    {
      "title": "🚀 Open WebUI Prompt Plus: AI-Powered Prompt Manager",
      "slug": "open_webui_prompt_plus_ai_powered_prompt_manager_s_15fa060e",
      "type": "post",
      "version": "",
      "author": "",
      "description": "",
      "downloads": 0,
      "views": 1915,
      "upvotes": 12,
      "saves": 21,
      "comments": 8,
      "created_at": "2026-01-25",
      "updated_at": "2026-01-28",
      "url": "https://openwebui.com/posts/open_webui_prompt_plus_ai_powered_prompt_manager_s_15fa060e"
    },
    {
      "title": "Review of Claude Haiku 4.5",
      "slug": "review_of_claude_haiku_45_41b0db39",
      "type": "review",
      "version": "",
      "author": "",
      "description": "",
      "downloads": 0,
      "views": 251,
      "upvotes": 2,
      "saves": 0,
      "comments": 0,
      "created_at": "2026-01-14",
      "updated_at": "2026-01-14",
      "url": "https://openwebui.com/posts/review_of_claude_haiku_45_41b0db39"
    },
    {
      "title": " 🛠️ Debug Open WebUI Plugins in Your Browser",
      "slug": "debug_open_webui_plugins_in_your_browser_81bf7960",
      "type": "post",
      "version": "",
      "author": "",
      "description": "",
      "downloads": 0,
      "views": 1549,
      "upvotes": 16,
      "saves": 12,
      "comments": 2,
      "created_at": "2026-01-10",
      "updated_at": "2026-01-10",
      "url": "https://openwebui.com/posts/debug_open_webui_plugins_in_your_browser_81bf7960"
    }
  ],
  "user": {
    "username": "Fu-Jie",
    "name": "Fu-Jie",
    "profile_url": "https://openwebui.com/u/Fu-Jie",
    "profile_image": "https://community.s3.openwebui.com/uploads/users/b15d1348-4347-42b4-b815-e053342d6cb0/profile_d9510745-4bd4-4f8f-a997-4a21847d9300.webp",
    "followers": 315,
    "following": 6,
    "total_points": 329,
    "post_points": 279,
    "comment_points": 50,
    "contributions": 59
  }
}
||||
@@ -8,7 +8,7 @@
 > *Blue: Downloads | Purple: Views (Real-time dynamic)*
 
 ### 📂 Content Distribution
 (content distribution chart image)
 
 
 ## 📈 Overview
@@ -25,42 +25,40 @@
 
 ## 📂 By Type
 
 (per-type count badges)
 
 ## 📋 Posts List
 
 | Rank | Title | Type | Version | Downloads | Views | Upvotes | Saves | Updated |
 |:---:|------|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
-| 1 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) | action |  |  |  |  |  | 2026-02-27 |
+| 1 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) | action |  |  |  |  |  | 2026-02-28 |
 | 2 | [Smart Infographic](https://openwebui.com/posts/smart_infographic_ad6f0c7f) | action |  |  |  |  |  | 2026-02-13 |
-| 3 | [Markdown Normalizer](https://openwebui.com/posts/markdown_normalizer_baaa8732) | filter |  |  |  |  |  | 2026-03-03 |
+| 3 | [Markdown Normalizer](https://openwebui.com/posts/markdown_normalizer_baaa8732) | filter |  |  |  |  |  | 2026-03-09 |
 | 4 | [Export to Word Enhanced](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) | action |  |  |  |  |  | 2026-02-13 |
-| 5 | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) | filter |  |  |  |  |  | 2026-03-03 |
+| 5 | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) | filter |  |  |  |  |  | 2026-03-11 |
 | 6 | [AI Task Instruction Generator](https://openwebui.com/posts/ai_task_instruction_generator_9bab8b37) | prompt |  |  |  |  |  | 2026-01-28 |
 | 7 | [Export to Excel](https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d) | action |  |  |  |  |  | 2026-02-13 |
-| 8 | [GitHub Copilot Official SDK Pipe](https://openwebui.com/posts/github_copilot_official_sdk_pipe_ce96f7b4) | pipe |  |  |  |  |  | 2026-03-03 |
-| 9 | [Flash Card](https://openwebui.com/posts/flash_card_65a2ea8f) | action |  |  |  |  |  | 2026-02-13 |
-| 10 | [OpenWebUI Skills Manager Tool](https://openwebui.com/posts/openwebui_skills_manager_tool_b4bce8e4) | tool |  |  |  |  |  | 2026-03-05 |
+| 8 | [OpenWebUI Skills Manager Tool](https://openwebui.com/posts/openwebui_skills_manager_tool_b4bce8e4) | tool |  |  |  |  |  | 2026-03-11 |
+| 9 | [GitHub Copilot Official SDK Pipe](https://openwebui.com/posts/github_copilot_official_sdk_pipe_ce96f7b4) | pipe |  |  |  |  |  | 2026-03-07 |
+| 10 | [Flash Card](https://openwebui.com/posts/flash_card_65a2ea8f) | action |  |  |  |  |  | 2026-02-13 |
 | 11 | [Deep Dive](https://openwebui.com/posts/deep_dive_c0b846e4) | action |  |  |  |  |  | 2026-01-08 |
 | 12 | [导出为Word增强版](https://openwebui.com/posts/导出为_word_支持公式流程图表格和代码块_8a6306c0) | action |  |  |  |  |  | 2026-02-13 |
 | 13 | [📂 Folder Memory – Auto-Evolving Project Context](https://openwebui.com/posts/folder_memory_auto_evolving_project_context_4a9875b2) | filter |  |  |  |  |  | 2026-01-20 |
-| 14 | [GitHub Copilot SDK Files Filter](https://openwebui.com/posts/github_copilot_sdk_files_filter_403a62ee) | filter |  |  |  |  |  | 2026-03-03 |
-| 15 | [智能信息图](https://openwebui.com/posts/智能信息图_e04a48ff) | action |  |  |  |  |  | 2026-02-13 |
-| 16 | [思维导图](https://openwebui.com/posts/智能生成交互式思维导图帮助用户可视化知识_8d4b097b) | action |  |  |  |  |  | 2026-02-13 |
-| 17 | [异步上下文压缩](https://openwebui.com/posts/异步上下文压缩_5c0617cb) | action |  |  |  |  |  | 2026-02-13 |
-| 18 | [🧠 Smart Mind Map Tool: Auto-Generate Interactive Knowledge Graphs](https://openwebui.com/posts/smart_mind_map_tool_auto_generate_interactive_know_d25f4e3d) | tool |  |  |  |  |  | 2026-03-05 |
+| 14 | [🧠 Smart Mind Map Tool: Auto-Generate Interactive Knowledge Graphs](https://openwebui.com/posts/smart_mind_map_tool_auto_generate_interactive_know_d25f4e3d) | tool |  |  |  |  |  | 2026-03-05 |
+| 15 | [GitHub Copilot SDK Files Filter](https://openwebui.com/posts/github_copilot_sdk_files_filter_403a62ee) | filter |  |  |  |  |  | 2026-03-04 |
+| 16 | [智能信息图](https://openwebui.com/posts/智能信息图_e04a48ff) | action |  |  |  |  |  | 2026-02-13 |
+| 17 | [思维导图](https://openwebui.com/posts/智能生成交互式思维导图帮助用户可视化知识_8d4b097b) | action |  |  |  |  |  | 2026-02-13 |
+| 18 | [异步上下文压缩](https://openwebui.com/posts/异步上下文压缩_5c0617cb) | action |  |  |  |  |  | 2026-02-13 |
 | 19 | [闪记卡 (Flash Card)](https://openwebui.com/posts/闪记卡生成插件_4a31eac3) | action |  |  |  |  |  | 2026-02-13 |
 | 20 | [精读](https://openwebui.com/posts/精读_99830b0f) | action |  |  |  |  |  | 2026-01-08 |
-| 21 | [An Unconventional Use of Open Terminal ⚡](https://openwebui.com/posts/an_unconventional_use_of_open_terminal_35498f8f) | post |  |  |  |  |  | 2026-03-06 |
-| 22 | [🚀 GitHub Copilot SDK Pipe v0.9.0: Skills & RichUI](https://openwebui.com/posts/github_copilot_sdk_pipe_v090_copilot_sdk_skills_co_99a42452) | post |  |  |  |  |  | 2026-02-28 |
-| 23 | [🚀 GitHub Copilot SDK Pipe v0.7.0: Skills & Rich UI 🛠️](https://openwebui.com/posts/github_copilot_sdk_pipe_v070_native_tool_ui_zero_c_4af38131) | post |  |  |  |  |  | 2026-02-28 |
-| 24 | [🚀 GitHub Copilot SDK Pipe: AI That Executes, Not Just Talks](https://openwebui.com/posts/github_copilot_sdk_for_openwebui_elevate_your_ai_t_a140f293) | post |  |  |  |  |  | 2026-02-10 |
-| 25 | [🚀 Open WebUI Prompt Plus: AI-Powered Prompt Manager](https://openwebui.com/posts/open_webui_prompt_plus_ai_powered_prompt_manager_s_15fa060e) | post |  |  |  |  |  | 2026-01-28 |
+| 21 | [An Unconventional Use of Open Terminal ⚡](https://openwebui.com/posts/an_unconventional_use_of_open_terminal_35498f8f) | action |  |  |  |  |  | 2026-03-07 |
+| 22 | [🚀 GitHub Copilot SDK Pipe v0.9.0: Skills & RichUI](https://openwebui.com/posts/github_copilot_sdk_pipe_v090_copilot_sdk_skills_co_99a42452) | pipe |  |  |  |  |  | 2026-02-28 |
+| 23 | [🚀 GitHub Copilot SDK Pipe v0.7.0: Skills & Rich UI 🛠️](https://openwebui.com/posts/github_copilot_sdk_pipe_v070_native_tool_ui_zero_c_4af38131) | pipe |  |  |  |  |  | 2026-02-28 |
+| 24 | [🚀 GitHub Copilot SDK Pipe: AI That Executes, Not Just Talks](https://openwebui.com/posts/github_copilot_sdk_for_openwebui_elevate_your_ai_t_a140f293) | pipe |  |  |  |  |  | 2026-02-10 |
+| 25 | [🚀 Open WebUI Prompt Plus: AI-Powered Prompt Manager](https://openwebui.com/posts/open_webui_prompt_plus_ai_powered_prompt_manager_s_15fa060e) | action |  |  |  |  |  | 2026-01-29 |
 | 26 | [Review of Claude Haiku 4.5](https://openwebui.com/posts/review_of_claude_haiku_45_41b0db39) | review |  |  |  |  |  | 2026-01-14 |
-| 27 | [ 🛠️ Debug Open WebUI Plugins in Your Browser](https://openwebui.com/posts/debug_open_webui_plugins_in_your_browser_81bf7960) | post |  |  |  |  |  | 2026-01-10 |
+| 27 | [ 🛠️ Debug Open WebUI Plugins in Your Browser](https://openwebui.com/posts/debug_open_webui_plugins_in_your_browser_81bf7960) | action |  |  |  |  |  | 2026-01-10 |
 
@@ -8,7 +8,7 @@

> *Blue: total downloads | Purple: total views (generated live)*

### 📂 Content Distribution by Type




## 📈 Overview

@@ -25,42 +25,40 @@

## 📂 By Type

- 
- 
- 
- 
- 
- 
- 
- 

## 📋 Published Posts

| Rank | Title | Type | Version | Downloads | Views | Upvotes | Saves | Updated |
|:---:|------|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 1 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) | action |  |  |  |  |  | 2026-02-27 |
| 1 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) | action |  |  |  |  |  | 2026-02-28 |
| 2 | [Smart Infographic](https://openwebui.com/posts/smart_infographic_ad6f0c7f) | action |  |  |  |  |  | 2026-02-13 |
| 3 | [Markdown Normalizer](https://openwebui.com/posts/markdown_normalizer_baaa8732) | filter |  |  |  |  |  | 2026-03-03 |
| 3 | [Markdown Normalizer](https://openwebui.com/posts/markdown_normalizer_baaa8732) | filter |  |  |  |  |  | 2026-03-09 |
| 4 | [Export to Word Enhanced](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) | action |  |  |  |  |  | 2026-02-13 |
| 5 | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) | filter |  |  |  |  |  | 2026-03-03 |
| 5 | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) | filter |  |  |  |  |  | 2026-03-11 |
| 6 | [AI Task Instruction Generator](https://openwebui.com/posts/ai_task_instruction_generator_9bab8b37) | prompt |  |  |  |  |  | 2026-01-28 |
| 7 | [Export to Excel](https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d) | action |  |  |  |  |  | 2026-02-13 |
| 8 | [GitHub Copilot Official SDK Pipe](https://openwebui.com/posts/github_copilot_official_sdk_pipe_ce96f7b4) | pipe |  |  |  |  |  | 2026-03-03 |
| 9 | [Flash Card](https://openwebui.com/posts/flash_card_65a2ea8f) | action |  |  |  |  |  | 2026-02-13 |
| 10 | [OpenWebUI Skills Manager Tool](https://openwebui.com/posts/openwebui_skills_manager_tool_b4bce8e4) | tool |  |  |  |  |  | 2026-03-05 |
| 8 | [OpenWebUI Skills Manager Tool](https://openwebui.com/posts/openwebui_skills_manager_tool_b4bce8e4) | tool |  |  |  |  |  | 2026-03-11 |
| 9 | [GitHub Copilot Official SDK Pipe](https://openwebui.com/posts/github_copilot_official_sdk_pipe_ce96f7b4) | pipe |  |  |  |  |  | 2026-03-07 |
| 10 | [Flash Card](https://openwebui.com/posts/flash_card_65a2ea8f) | action |  |  |  |  |  | 2026-02-13 |
| 11 | [Deep Dive](https://openwebui.com/posts/deep_dive_c0b846e4) | action |  |  |  |  |  | 2026-01-08 |
| 12 | [导出为Word增强版](https://openwebui.com/posts/导出为_word_支持公式流程图表格和代码块_8a6306c0) | action |  |  |  |  |  | 2026-02-13 |
| 13 | [📂 Folder Memory – Auto-Evolving Project Context](https://openwebui.com/posts/folder_memory_auto_evolving_project_context_4a9875b2) | filter |  |  |  |  |  | 2026-01-20 |
| 14 | [GitHub Copilot SDK Files Filter](https://openwebui.com/posts/github_copilot_sdk_files_filter_403a62ee) | filter |  |  |  |  |  | 2026-03-03 |
| 15 | [智能信息图](https://openwebui.com/posts/智能信息图_e04a48ff) | action |  |  |  |  |  | 2026-02-13 |
| 16 | [思维导图](https://openwebui.com/posts/智能生成交互式思维导图帮助用户可视化知识_8d4b097b) | action |  |  |  |  |  | 2026-02-13 |
| 17 | [异步上下文压缩](https://openwebui.com/posts/异步上下文压缩_5c0617cb) | action |  |  |  |  |  | 2026-02-13 |
| 18 | [🧠 Smart Mind Map Tool: Auto-Generate Interactive Knowledge Graphs](https://openwebui.com/posts/smart_mind_map_tool_auto_generate_interactive_know_d25f4e3d) | tool |  |  |  |  |  | 2026-03-05 |
| 14 | [🧠 Smart Mind Map Tool: Auto-Generate Interactive Knowledge Graphs](https://openwebui.com/posts/smart_mind_map_tool_auto_generate_interactive_know_d25f4e3d) | tool |  |  |  |  |  | 2026-03-05 |
| 15 | [GitHub Copilot SDK Files Filter](https://openwebui.com/posts/github_copilot_sdk_files_filter_403a62ee) | filter |  |  |  |  |  | 2026-03-04 |
| 16 | [智能信息图](https://openwebui.com/posts/智能信息图_e04a48ff) | action |  |  |  |  |  | 2026-02-13 |
| 17 | [思维导图](https://openwebui.com/posts/智能生成交互式思维导图帮助用户可视化知识_8d4b097b) | action |  |  |  |  |  | 2026-02-13 |
| 18 | [异步上下文压缩](https://openwebui.com/posts/异步上下文压缩_5c0617cb) | action |  |  |  |  |  | 2026-02-13 |
| 19 | [闪记卡 (Flash Card)](https://openwebui.com/posts/闪记卡生成插件_4a31eac3) | action |  |  |  |  |  | 2026-02-13 |
| 20 | [精读](https://openwebui.com/posts/精读_99830b0f) | action |  |  |  |  |  | 2026-01-08 |
| 21 | [An Unconventional Use of Open Terminal ⚡](https://openwebui.com/posts/an_unconventional_use_of_open_terminal_35498f8f) | post |  |  |  |  |  | 2026-03-06 |
| 22 | [🚀 GitHub Copilot SDK Pipe v0.9.0: Skills & RichUI](https://openwebui.com/posts/github_copilot_sdk_pipe_v090_copilot_sdk_skills_co_99a42452) | post |  |  |  |  |  | 2026-02-28 |
| 23 | [🚀 GitHub Copilot SDK Pipe v0.7.0: Skills & Rich UI 🛠️](https://openwebui.com/posts/github_copilot_sdk_pipe_v070_native_tool_ui_zero_c_4af38131) | post |  |  |  |  |  | 2026-02-28 |
| 24 | [🚀 GitHub Copilot SDK Pipe: AI That Executes, Not Just Talks](https://openwebui.com/posts/github_copilot_sdk_for_openwebui_elevate_your_ai_t_a140f293) | post |  |  |  |  |  | 2026-02-10 |
| 25 | [🚀 Open WebUI Prompt Plus: AI-Powered Prompt Manager](https://openwebui.com/posts/open_webui_prompt_plus_ai_powered_prompt_manager_s_15fa060e) | post |  |  |  |  |  | 2026-01-28 |
| 21 | [An Unconventional Use of Open Terminal ⚡](https://openwebui.com/posts/an_unconventional_use_of_open_terminal_35498f8f) | action |  |  |  |  |  | 2026-03-07 |
| 22 | [🚀 GitHub Copilot SDK Pipe v0.9.0: Skills & RichUI](https://openwebui.com/posts/github_copilot_sdk_pipe_v090_copilot_sdk_skills_co_99a42452) | pipe |  |  |  |  |  | 2026-02-28 |
| 23 | [🚀 GitHub Copilot SDK Pipe v0.7.0: Skills & Rich UI 🛠️](https://openwebui.com/posts/github_copilot_sdk_pipe_v070_native_tool_ui_zero_c_4af38131) | pipe |  |  |  |  |  | 2026-02-28 |
| 24 | [🚀 GitHub Copilot SDK Pipe: AI That Executes, Not Just Talks](https://openwebui.com/posts/github_copilot_sdk_for_openwebui_elevate_your_ai_t_a140f293) | pipe |  |  |  |  |  | 2026-02-10 |
| 25 | [🚀 Open WebUI Prompt Plus: AI-Powered Prompt Manager](https://openwebui.com/posts/open_webui_prompt_plus_ai_powered_prompt_manager_s_15fa060e) | action |  |  |  |  |  | 2026-01-29 |
| 26 | [Review of Claude Haiku 4.5](https://openwebui.com/posts/review_of_claude_haiku_45_41b0db39) | review |  |  |  |  |  | 2026-01-14 |
| 27 | [ 🛠️ Debug Open WebUI Plugins in Your Browser](https://openwebui.com/posts/debug_open_webui_plugins_in_your_browser_81bf7960) | post |  |  |  |  |  | 2026-01-10 |
| 27 | [ 🛠️ Debug Open WebUI Plugins in Your Browser](https://openwebui.com/posts/debug_open_webui_plugins_in_your_browser_81bf7960) | action |  |  |  |  |  | 2026-01-10 |
@@ -340,5 +340,45 @@
    "total_saves": 274,
    "followers": 220,
    "points": 271
  },
  {
    "date": "2026-03-12",
    "total_posts": 27,
    "total_downloads": 8765,
    "total_views": 92460,
    "total_upvotes": 300,
    "total_saves": 431,
    "followers": 344,
    "points": 351,
    "contributions": 66,
    "posts": {
      "turn_any_text_into_beautiful_mind_maps_3094c59a": 1730,
      "smart_infographic_ad6f0c7f": 1330,
      "markdown_normalizer_baaa8732": 807,
      "export_to_word_enhanced_formatting_fca6a315": 767,
      "async_context_compression_b1655bc8": 760,
      "ai_task_instruction_generator_9bab8b37": 666,
      "export_mulit_table_to_excel_244b8f9d": 604,
      "openwebui_skills_manager_tool_b4bce8e4": 434,
      "github_copilot_official_sdk_pipe_ce96f7b4": 399,
      "flash_card_65a2ea8f": 325,
      "deep_dive_c0b846e4": 224,
      "导出为_word_支持公式流程图表格和代码块_8a6306c0": 171,
      "folder_memory_auto_evolving_project_context_4a9875b2": 125,
      "smart_mind_map_tool_auto_generate_interactive_know_d25f4e3d": 100,
      "github_copilot_sdk_files_filter_403a62ee": 93,
      "智能信息图_e04a48ff": 71,
      "智能生成交互式思维导图帮助用户可视化知识_8d4b097b": 53,
      "异步上下文压缩_5c0617cb": 40,
      "闪记卡生成插件_4a31eac3": 34,
      "精读_99830b0f": 32,
      "an_unconventional_use_of_open_terminal_35498f8f": 0,
      "github_copilot_sdk_pipe_v090_copilot_sdk_skills_co_99a42452": 0,
      "github_copilot_sdk_pipe_v070_native_tool_ui_zero_c_4af38131": 0,
      "github_copilot_sdk_for_openwebui_elevate_your_ai_t_a140f293": 0,
      "open_webui_prompt_plus_ai_powered_prompt_manager_s_15fa060e": 0,
      "review_of_claude_haiku_45_41b0db39": 0,
      "debug_open_webui_plugins_in_your_browser_81bf7960": 0
    }
  }
]
@@ -0,0 +1,123 @@
# ✅ Async Context Compression Deployed (2026-03-12)

## 🎯 Deployment Summary

**Date**: 2026-03-12
**Version**: 1.4.1
**Status**: ✅ Deployed successfully
**Target**: OpenWebUI localhost:3003

---

## 📌 New Features

### Frontend Console Debug Logging

Six structured-data checkpoints were added to `async_context_compression.py`, exposing the plugin's internal data flow in the browser console.

#### New Method

```python
async def _emit_struct_log(self, __event_call__, title: str, data: Any):
    """
    Emit structured data to the browser console.
    - Arrays → console.table() (tabular view)
    - Objects → console.dir(d, {depth: 3}) (tree view)
    """
```

#### The 6 Checkpoints

| # | Checkpoint | Phase | Shows |
|---|-------|------|--------|
| 1️⃣ | `__user__ structure` | Inlet entry | id, name, language, resolved_language |
| 2️⃣ | `__metadata__ structure` | Inlet entry | chat_id, message_id, function_calling |
| 3️⃣ | `body top-level structure` | Inlet entry | model, message_count, metadata keys |
| 4️⃣ | `summary_record loaded from DB` | After inlet DB read | compressed_count, summary_preview, timestamps |
| 5️⃣ | `final_messages shape → LLM` | Before inlet returns | Table: role, content_length, tools per message |
| 6️⃣ | `middle_messages shape` | During async summarization | Table: the message slice to be summarized |

---

## 🚀 Quick Start (5 Minutes)

### Step 1: Enable the Filter
```
OpenWebUI → Settings → Filters → enable "Async Context Compression"
```

### Step 2: Enable Debugging
```
In the filter's configuration → show_debug_log: ON → Save
```

### Step 3: Open the Console
```
F12 (Windows/Linux) or Cmd+Option+I (Mac) → Console tab
```

### Step 4: Send Messages
```
Send 10+ messages and watch for log lines starting with 📋 [Compression]
```

---

## 📊 Code Changes

```
New method: _emit_struct_log() [42 lines]
New log points: 6
New lines of code: ~150
Backward compatible: 100% (guarded by show_debug_log)
```
---

## 💡 Log Examples

### Tabular logs (Arrays)
```
📋 [Compression] Inlet: final_messages shape → LLM (7 msgs)
┌─────┬───────────┬──────────────┬─────────────┐
│index│role       │content_length│has_tool_... │
├─────┼───────────┼──────────────┼─────────────┤
│ 0   │"system"   │150           │false        │
│ 1   │"user"     │200           │false        │
│ 2   │"assistant"│500           │true         │
└─────┴───────────┴──────────────┴─────────────┘
```

### Tree logs (Objects)
```
📋 [Compression] Inlet: __metadata__ structure
├─ chat_id: "chat-abc123..."
├─ message_id: "msg-xyz789"
├─ function_calling: "native"
└─ all_keys: ["chat_id", "message_id", ...]
```

---

## ✅ Verification Checklist

- [x] Code changes saved
- [x] Deployment script ran successfully
- [x] OpenWebUI running normally
- [x] 6 new log points added
- [x] Hang-prevention guard in place
- [x] Fully backward compatible

---

## 📖 Documentation

- [QUICK_START.md](../../scripts/QUICK_START.md) - quick reference
- [README_CN.md](./README_CN.md) - plugin documentation
- [DEPLOYMENT_REFERENCE.md](./DEPLOYMENT_REFERENCE.md) - deployment tooling

---

**Deployed**: 2026-03-12
**Maintainer**: Fu-Jie
**Project**: [openwebui-extensions](https://github.com/Fu-Jie/openwebui-extensions)
@@ -1,13 +1,14 @@
# Async Context Compression Filter

**Author:** [Fu-Jie](https://github.com/Fu-Jie/openwebui-extensions) | **Version:** 1.4.1 | **Project:** [OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions) | **License:** MIT
**Author:** [Fu-Jie](https://github.com/Fu-Jie/openwebui-extensions) | **Version:** 1.4.2 | **Project:** [OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions) | **License:** MIT

This filter reduces token consumption in long conversations through intelligent summarization and message compression while keeping conversations coherent.

## What's new in 1.4.1
## What's new in 1.4.2

- **Reverse-Unfolding Mechanism**: Accurately reconstructs the expanded native tool-calling sequence during the outlet phase to permanently fix coordinate drift and missing summaries for long tool-based conversations.
- **Safer Tool Trimming**: Refactored `enable_tool_output_trimming` to strictly use atomic block groups for safe trimming, completely preventing JSON payload corruption.
- **Enhanced Summary Path Robustness**: Thread `__request__` context through entire summary generation pipeline for reliable authentication and provider handling.
- **Improved Error Diagnostics**: LLM response validation failures now include complete response body in error logs for transparent troubleshooting.
- **Smart Previous Summary Loading**: Automatically load and merge previous summaries from DB when not present in outlet payload, enabling incremental state merging across summary generations.

---
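The "Smart Previous Summary Loading" bullet above can be sketched as follows (hypothetical helper names; `load_previous_summary` stands in for the plugin's real DB lookup):

```python
def build_summary_input(messages: list, load_previous_summary) -> list:
    """Ensure the previous summary is visible to the summarizer.

    If the inlet already injected a summary message, the old state is part
    of the conversation text; otherwise pull the prior generation from
    storage and prepend it, so compressed knowledge is merged rather than
    re-derived or forgotten."""
    if any(m.get("is_summary") for m in messages):
        return messages  # old summary already present in the payload
    previous = load_previous_summary()
    if previous:
        injected = {"role": "system", "is_summary": True, "content": previous}
        return [injected] + messages
    return messages  # first-ever summary for this chat
```

The `is_summary` marker is an illustrative field, not necessarily the plugin's actual tagging scheme.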
@@ -1,15 +1,16 @@
# Async Context Compression Filter

**Author:** [Fu-Jie](https://github.com/Fu-Jie/openwebui-extensions) | **Version:** 1.4.1 | **Project:** [OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions) | **License:** MIT
**Author:** [Fu-Jie](https://github.com/Fu-Jie/openwebui-extensions) | **Version:** 1.4.2 | **Project:** [OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions) | **License:** MIT

> **Note**: To keep every filter maintainable and easy to use, each filter should ship with clear, complete documentation covering its features, configuration, and usage.

This filter significantly reduces token consumption in long conversations through intelligent summarization and message compression while keeping the conversation coherent.

## What's New in 1.4.1
## What's New in 1.4.2

- **Reverse-Unfolding Mechanism**: Introduces `_unfold_messages` to precisely realign the coordinate system in the `outlet` phase, fixing progress drift and skipped summary generation in long tool-calling conversations caused by frontend view folding.
- **Safer Tool Trimming**: Refactored `enable_tool_output_trimming` to strictly use atomic block groups for safe native tool-content trimming, replacing aggressive regex matching and preventing JSON payload corruption.
- **Enhanced Summary Path Robustness**: Thread the `__request__` context through the entire summary generation pipeline for reliable authentication and provider handling.
- **Improved Error Diagnostics**: LLM response validation failures now log the complete response body for transparent troubleshooting.
- **Smart Previous Summary Loading**: When the summary message is missing from the outlet payload, automatically load and merge the previous generation's summary from the DB, enabling incremental state merging.

---
@@ -0,0 +1,315 @@
# 📋 Response Structure Inspection Guide

## 🎯 New Checkpoints

**3 new response checkpoints** were added to `_call_summary_llm()` for inspecting the complete LLM-call response flow from the frontend console.

### Checkpoint Locations

| # | Checkpoint | Location | Shows |
|---|-----------|------|--------|
| 1️⃣ | **LLM Response structure** | After `generate_chat_completion()` returns | Type, keys, and structure of the raw response object |
| 2️⃣ | **LLM Summary extracted & cleaned** | After extracting and cleaning the summary | Summary length, word count, format, emptiness |
| 3️⃣ | **Summary saved to database** | Verification after the DB save | Whether the record saved correctly; field consistency |

---

## 📊 Checkpoint Details

### 1️⃣ LLM Response structure

**When**: `generate_chat_completion()` has returned, before processing
**Purpose**: Validate the raw response data structure

```
📋 [Compression] LLM Response structure (raw from generate_chat_completion)
├─ type: "dict" / "Response" / "JSONResponse"
├─ has_body: true/false (indicates a Response object)
├─ has_status_code: true/false
├─ is_dict: true/false
├─ keys: ["choices", "usage", "model", ...] (when it is a dict)
├─ first_choice_keys: ["message", "finish_reason", ...]
├─ message_keys: ["role", "content"]
└─ content_length: 1234 (summary text length)
```

**Key validations**:
- ✅ `type` — should be `dict` or `JSONResponse`
- ✅ `is_dict` — should end up `true` (after processing)
- ✅ `keys` — should include `choices` and `usage`
- ✅ `first_choice_keys` — should include `message`
- ✅ `message_keys` — should include `role` and `content`
- ✅ `content_length` — the summary should not be empty (> 0)

---

### 2️⃣ LLM Summary extracted & cleaned

**When**: After extracting from the response and calling strip()
**Purpose**: Validate the quality of the extracted summary

```
📋 [Compression] LLM Summary extracted & cleaned
├─ type: "str"
├─ length_chars: 1234
├─ length_words: 156
├─ first_100_chars: "The user asked about..."
├─ has_newlines: true
├─ newline_count: 3
└─ is_empty: false
```

**Key validations**:
- ✅ `type` — should always be `str`
- ✅ `is_empty` — should be `false` (never empty)
- ✅ `length_chars` — typically 100-2000 characters (configuration-dependent)
- ✅ `newline_count` — multi-line summaries usually contain a few newlines
- ✅ `first_100_chars` — eyeball the opening content for correctness
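The checkpoint-2 fields can be derived from a completed response dict roughly like this (a sketch, assuming the standard OpenAI-style `choices[0].message.content` shape):

```python
def inspect_summary(response: dict) -> dict:
    """Extract and clean the summary, then compute the checkpoint-2 fields."""
    choices = response.get("choices") or [{}]
    content = (choices[0].get("message") or {}).get("content") or ""
    summary = content.strip()  # the "cleaned" step
    return {
        "type": type(summary).__name__,
        "length_chars": len(summary),
        "length_words": len(summary.split()),
        "first_100_chars": summary[:100],
        "has_newlines": "\n" in summary,
        "newline_count": summary.count("\n"),
        "is_empty": not summary,
    }
```

This mirrors the log fields above; the plugin's real extraction path may add further validation.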
---

### 3️⃣ Summary saved to database

**When**: After saving to the DB, reloaded for verification
**Purpose**: Confirm persistence succeeded and the data is consistent

```
📋 [Compression] Summary saved to database (verification)
├─ db_id: 42
├─ db_chat_id: "chat-abc123..."
├─ db_compressed_message_count: 10
├─ db_summary_length_chars: 1234
├─ db_summary_preview_100: "The user asked about..."
├─ db_created_at: "2026-03-12 15:30:45.123456+00:00"
├─ db_updated_at: "2026-03-12 15:35:20.654321+00:00"
├─ matches_input_chat_id: true
└─ matches_input_compressed_count: true
```

**Key validations** ⭐ most important:
- ✅ `matches_input_chat_id` — **must be `true`**
- ✅ `matches_input_compressed_count` — **must be `true`**
- ✅ `db_summary_length_chars` — matches the extracted summary length
- ✅ `db_updated_at` — should be the latest timestamp
- ✅ `db_id` — should be a valid database ID
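The save-then-verify pattern behind checkpoint 3 can be sketched as follows (an illustration with an in-memory dict standing in for the real summary table; field names follow the log above):

```python
def save_and_verify(store: dict, chat_id: str, summary: str, compressed_count: int) -> dict:
    """Persist the summary record, re-read it, and emit checkpoint-3 fields."""
    store[chat_id] = {
        "chat_id": chat_id,
        "summary": summary,
        "compressed_message_count": compressed_count,
    }
    # Re-load from storage instead of trusting the in-flight values
    rec = store.get(chat_id) or {}
    return {
        "db_chat_id": rec.get("chat_id"),
        "db_compressed_message_count": rec.get("compressed_message_count"),
        "db_summary_length_chars": len(rec.get("summary") or ""),
        "matches_input_chat_id": rec.get("chat_id") == chat_id,
        "matches_input_compressed_count": rec.get("compressed_message_count") == compressed_count,
    }
```

Verifying against a fresh read is what lets the checkpoint catch concurrent modifications and silent save failures, not just exceptions.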
---

## 🔍 Viewing in the Frontend

### Step 1: Enable Debug Mode

In OpenWebUI:
```
Settings → Filters → Async Context Compression
  ↓
Find the valve "show_debug_log"
  ↓
Enable it + Save
```

### Step 2: Open the Browser Console

- **Windows/Linux**: F12 → Console
- **Mac**: Cmd + Option + I → Console

### Step 3: Trigger Summary Generation

Send enough messages for the filter to trigger compression:
```
1. Send 15+ messages
2. Wait for the background summary task to start
3. Watch the Console for 📋 logs
```

### Step 4: Observe the Full Flow

```
[1] 📋 LLM Response structure (raw)
    ↓ (raw response type and structure)
[2] 📋 LLM Summary extracted & cleaned
    ↓ (extracted text details)
[3] 📋 Summary saved to database (verification)
    ↓ (database save result)
```

---

## 📈 Full Flow Verification

### Healthy flow ✅

```
1️⃣ Response structure:
   - type: "dict"
   - is_dict: true
   - has "choices": true
   - has "usage": true

2️⃣ Summary extracted:
   - is_empty: false
   - length_chars: 1500
   - length_words: 200

3️⃣ DB verification:
   - matches_input_chat_id: true ✅
   - matches_input_compressed_count: true ✅
   - db_id: 42 (valid)
```

### Problem flow ❌

```
1️⃣ Response structure:
   - type: "Response" (needs unwrapping)
   - has_body: true
   - (body must be parsed)

2️⃣ Summary extracted:
   - is_empty: true ❌ (empty summary!)
   - length_chars: 0

3️⃣ DB verification:
   - matches_input_chat_id: false ❌ (chat_id mismatch!)
   - matches_input_compressed_count: false ❌ (count mismatch!)
```
---

## 🛠️ Debugging Tips

### Quick Log Filtering

Type into the Console filter box:
```
📋                 (all compression logs)
LLM Response       (response-related logs)
Summary extracted  (summary extraction logs)
saved to database  (save-verification logs)
```

### Expanding Tables/Objects

1. **Object logs** (console.dir)
   - Click the ▶ on the left to expand
   - Drill into nested fields level by level

2. **Table logs** (console.table)
   - Click the ▶ at the top to expand
   - View the full columns

### Comparing Logs

```javascript
// Compare manually in the Console
Checkpoint 1: type = "dict", is_dict = true
Checkpoint 2: is_empty = false, length_chars = 1234
Checkpoint 3: matches_input_chat_id = true
  ↓
All as expected → ✅ flow is healthy
Anything off    → ❌ investigate that step
```

---

## 🐛 Common Problems

### Q: "type" is "Response" instead of "dict"?

**Cause**: Some backends return a Response object rather than a dict
**Fix**: The code handles this automatically; check the subsequent logs for successful parsing

```
Checkpoint 1: type = "Response" ← needs parsing
  ↓
The code parses `response.body`
  ↓
Re-check that it has become a dict
```
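The unwrapping step can be sketched like this (an assumption-level helper covering the two cases the checkpoint distinguishes: a plain dict, and a Response-like object carrying JSON in `.body`):

```python
import json


def normalize_llm_response(response) -> dict:
    """Coerce a generate_chat_completion result into a plain dict."""
    if isinstance(response, dict):
        return response  # already the final shape
    body = getattr(response, "body", None)  # Response / JSONResponse case
    if body is not None:
        if isinstance(body, (bytes, bytearray)):
            body = body.decode("utf-8")
        return json.loads(body)
    raise TypeError(f"Unsupported response type: {type(response).__name__}")
```

After this step the checkpoint should report `is_dict: true`; the plugin's real handling may cover additional response variants.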

### Q: "is_empty" is true?

**Cause**: The LLM did not return valid summary text
**Diagnosis**:
1. Check `first_100_chars` — there should be actual content
2. Check that the model is configured correctly
3. Check whether too many middle messages caused an LLM timeout

### Q: "matches_input_chat_id" is false?

**Cause**: The chat_id did not match when saving to the DB
**Diagnosis**:
1. Compare `db_chat_id` with the input `chat_id`
2. Possible database connection issue
3. Possibly caused by a concurrent modification

### Q: "matches_input_compressed_count" is false?

**Cause**: The saved message count differs from the expected value
**Diagnosis**:
1. Compare `db_compressed_message_count` with `saved_compressed_count`
2. Check whether the middle messages were modified unexpectedly
3. Check that atomic boundary alignment is correct

---

## 📚 Related Code Locations

```python
# File: async_context_compression.py

# Checkpoint 1: response structure inspection (L3459)
if self.valves.show_debug_log and __event_call__:
    await self._emit_struct_log(
        __event_call__,
        "LLM Response structure (raw from generate_chat_completion)",
        response_inspection_data,
    )

# Checkpoint 2: summary extraction inspection (L3524)
if self.valves.show_debug_log and __event_call__:
    await self._emit_struct_log(
        __event_call__,
        "LLM Summary extracted & cleaned",
        summary_inspection,
    )

# Checkpoint 3: database save inspection (L3168)
if self.valves.show_debug_log and __event_call__:
    await self._emit_struct_log(
        __event_call__,
        "Summary saved to database (verification)",
        save_inspection,
    )
```

---

## 🎯 Full Checklist

Verify in the frontend Console:

- [ ] Checkpoint 1 appears with `is_dict: true`
- [ ] Checkpoint 1 shows `first_choice_keys` containing `message`
- [ ] Checkpoint 2 appears with `is_empty: false`
- [ ] Checkpoint 2 shows a reasonable `length_chars` (usually > 100)
- [ ] Checkpoint 3 appears with `matches_input_chat_id: true`
- [ ] Checkpoint 3 shows `matches_input_compressed_count: true`
- [ ] All log timestamps are reasonable
- [ ] No exceptions or error messages

---

## 📞 Next Steps

1. ✅ Enable debug mode
2. ✅ Send messages to trigger summary generation
3. ✅ Observe the 3 new checkpoints
4. ✅ Verify all fields are as expected
5. ✅ Diagnose with this guide if anything is off

---

**Last updated**: 2026-03-12
**Related feature**: Response structure inspection (v1.4.1+)
**Docs**: [async_context_compression.py lines 3459, 3524, 3168]
@@ -5,16 +5,17 @@ author: Fu-Jie
author_url: https://github.com/Fu-Jie/openwebui-extensions
funding_url: https://github.com/open-webui
description: Reduces token consumption in long conversations while maintaining coherence through intelligent summarization and message compression.
version: 1.4.1
version: 1.4.2
openwebui_id: b1655bc8-6de9-4cad-8cb5-a6f7829a02ce
license: MIT

═══════════════════════════════════════════════════════════════════════════════
📌 What's new in 1.4.1
📌 What's new in 1.4.2
═══════════════════════════════════════════════════════════════════════════════

✅ Reverse-Unfolding Mechanism: Accurately reconstructs the expanded native tool-calling sequence during the outlet phase to permanently fix coordinate drift and missing summaries for long tool-based conversations.
✅ Safer Tool Trimming: Refactored `enable_tool_output_trimming` to strictly use atomic block groups for safe trimming, completely preventing JSON payload corruption.
✅ Enhanced Summary Path Robustness: Thread __request__ context through entire summary generation pipeline for reliable authentication and provider handling.
✅ Improved Error Diagnostics: LLM response validation failures now include complete response body in error logs for better troubleshooting.
✅ Smart Previous Summary Loading: Automatically load and merge previous summaries from DB when not present in outlet payload, enabling incremental state merging.

═══════════════════════════════════════════════════════════════════════════════
📌 Overview
@@ -1516,27 +1517,31 @@ class Filter:
"index": index,
"role": message.get("role", "unknown"),
"has_tool_calls": bool(isinstance(tool_calls, list) and tool_calls),
"tool_call_count": len(tool_calls)
if isinstance(tool_calls, list)
else 0,
"tool_call_id_lengths": [
len(str(tc.get("id", "")))
for tc in tool_calls[:3]
if isinstance(tc, dict)
]
if isinstance(tool_calls, list)
else [],
"tool_call_count": (
len(tool_calls) if isinstance(tool_calls, list) else 0
),
"tool_call_id_lengths": (
[
len(str(tc.get("id", "")))
for tc in tool_calls[:3]
if isinstance(tc, dict)
]
if isinstance(tool_calls, list)
else []
),
"has_tool_call_id": isinstance(message.get("tool_call_id"), str),
"tool_call_id_length": len(str(message.get("tool_call_id", "")))
if isinstance(message.get("tool_call_id"), str)
else 0,
"tool_call_id_length": (
len(str(message.get("tool_call_id", "")))
if isinstance(message.get("tool_call_id"), str)
else 0
),
"content_type": type(content).__name__,
"content_length": len(content) if isinstance(content, str) else 0,
"has_tool_details_block": isinstance(content, str)
and '<details type="tool_calls"' in content,
"metadata_keys": sorted(metadata.keys())[:8]
if isinstance(metadata, dict)
else [],
"metadata_keys": (
sorted(metadata.keys())[:8] if isinstance(metadata, dict) else []
),
}

if isinstance(content, list):
@@ -1585,14 +1590,16 @@ class Filter:

return {
"body_keys": sorted(body.keys()),
"metadata_keys": sorted(metadata.keys()) if isinstance(metadata, dict) else [],
"metadata_keys": (
sorted(metadata.keys()) if isinstance(metadata, dict) else []
),
"params_keys": sorted(params.keys()) if isinstance(params, dict) else [],
"metadata_function_calling": metadata.get("function_calling")
if isinstance(metadata, dict)
else None,
"params_function_calling": params.get("function_calling")
if isinstance(params, dict)
else None,
"metadata_function_calling": (
metadata.get("function_calling") if isinstance(metadata, dict) else None
),
"params_function_calling": (
params.get("function_calling") if isinstance(params, dict) else None
),
"message_count": len(messages) if isinstance(messages, list) else 0,
"role_counts": role_counts,
"assistant_tool_call_indices": assistant_tool_call_indices[:8],
@@ -1624,9 +1631,11 @@ class Filter:
"id": message.get("id", ""),
"parentId": message.get("parentId") or message.get("parent_id"),
"tool_call_id": message.get("tool_call_id", ""),
"tool_call_count": len(message.get("tool_calls", []))
if isinstance(message.get("tool_calls"), list)
else 0,
"tool_call_count": (
len(message.get("tool_calls", []))
if isinstance(message.get("tool_calls"), list)
else 0
),
"is_summary": self._is_summary_message(message),
"content_length": len(content) if isinstance(content, str) else 0,
}
@@ -1647,9 +1656,11 @@ class Filter:
"id": message.get("id", ""),
"parentId": message.get("parentId") or message.get("parent_id"),
"tool_call_id": message.get("tool_call_id", ""),
"tool_call_count": len(message.get("tool_calls", []))
if isinstance(message.get("tool_calls"), list)
else 0,
"tool_call_count": (
len(message.get("tool_calls", []))
if isinstance(message.get("tool_calls"), list)
else 0
),
"is_summary": self._is_summary_message(message),
"content_length": len(content) if isinstance(content, str) else 0,
}
@@ -1659,7 +1670,9 @@ class Filter:
"message_count": len(messages),
"summary_state": summary_state,
"original_history_count": self._get_original_history_count(messages),
"target_compressed_count": self._calculate_target_compressed_count(messages),
"target_compressed_count": self._calculate_target_compressed_count(
messages
),
"effective_keep_first": self._get_effective_keep_first(messages),
"head_sample": sample,
"tail_sample": tail_sample,
@@ -1681,20 +1694,25 @@ class Filter:
|
||||
continue
|
||||
|
||||
# If it's an assistant message with the hidden 'output' field, unfold it
|
||||
if msg.get("role") == "assistant" and isinstance(msg.get("output"), list) and msg.get("output"):
|
||||
if (
|
||||
msg.get("role") == "assistant"
|
||||
and isinstance(msg.get("output"), list)
|
||||
and msg.get("output")
|
||||
):
|
||||
try:
|
||||
from open_webui.utils.misc import convert_output_to_messages
|
||||
|
||||
expanded = convert_output_to_messages(msg["output"], raw=True)
|
||||
if expanded:
|
||||
unfolded.extend(expanded)
|
||||
continue
|
||||
except ImportError:
|
||||
pass # Fallback if for some reason the internal import fails
|
||||
pass # Fallback if for some reason the internal import fails
|
||||
|
||||
# Clean message (strip 'output' field just like inlet does)
|
||||
clean_msg = {k: v for k, v in msg.items() if k != "output"}
|
||||
unfolded.append(clean_msg)
|
||||
|
||||
|
||||
return unfolded
|
||||
|
||||
def _get_function_calling_mode(self, body: dict) -> str:
|
||||
@@ -1831,7 +1849,9 @@ class Filter:
            )
        except ValueError as ve:
            if "broadcast" in str(ve).lower():
                logger.debug("Cannot broadcast to frontend without explicit room; suppressing further frontend logs in this session.")
                logger.debug(
                    "Cannot broadcast to frontend without explicit room; suppressing further frontend logs in this session."
                )
                self.valves.show_debug_log = False
            else:
                logger.error(f"Failed to process log to frontend: ValueError: {ve}")
@@ -1867,19 +1887,9 @@ class Filter:
        """
        Check if compression should be skipped.
        Returns True if:
        1. The base model includes 'copilot_sdk'
        """
        # Check if base model includes copilot_sdk
        if __model__:
            base_model_id = __model__.get("base_model_id", "")
            if "copilot_sdk" in base_model_id.lower():
                return True

        # Also check model in body
        model_id = body.get("model", "")
        if "copilot_sdk" in model_id.lower():
        if body.get("is_copilot_model", False):
            return True

        return False

    async def inlet(
@@ -2493,6 +2503,7 @@ class Filter:
        __model__: dict = None,
        __event_emitter__: Callable[[Any], Awaitable[None]] = None,
        __event_call__: Callable[[Any], Awaitable[None]] = None,
        __request__: Request = None,
    ) -> dict:
        """
        Executed after the LLM response is complete.
@@ -2545,10 +2556,22 @@ class Filter:
        # In the outlet phase, the frontend payload often lacks the hidden 'output' field.
        # We try to load the full, raw history from the database first.
        db_messages = self._load_full_chat_messages(chat_id)
        messages_to_unfold = db_messages if (db_messages and len(db_messages) >= len(messages)) else messages

        messages_to_unfold = (
            db_messages
            if (db_messages and len(db_messages) >= len(messages))
            else messages
        )

        summary_messages = self._unfold_messages(messages_to_unfold)
        message_source = "outlet-db-unfolded" if db_messages and len(summary_messages) != len(messages) else "outlet-body-unfolded" if len(summary_messages) != len(messages) else "outlet-body"
        message_source = (
            "outlet-db-unfolded"
            if db_messages and len(summary_messages) != len(messages)
            else (
                "outlet-body-unfolded"
                if len(summary_messages) != len(messages)
                else "outlet-body"
            )
        )

        if self.valves.show_debug_log and __event_call__:
            source_progress = self._build_summary_progress_snapshot(summary_messages)
@@ -2593,6 +2616,7 @@ class Filter:
                lang,
                __event_emitter__,
                __event_call__,
                __request__,
            )
        )

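The outlet wiring above (a fire-and-forget `asyncio.create_task`, serialized by an async lock, with `__request__` threaded through) can be sketched in miniature. Every name here (`LOCKS`, `process_summary`, `locked_task`) is illustrative and not the plugin's actual identifier:

```python
import asyncio
from collections import defaultdict

# One lock per chat: concurrent outlets never run two summary jobs
# for the same conversation at once (illustrative global registry).
LOCKS: dict = defaultdict(asyncio.Lock)

async def process_summary(chat_id: str, request) -> str:
    # Placeholder for the real summarization work; it receives the
    # injected request so downstream calls keep the true app context.
    await asyncio.sleep(0)
    return f"summary for {chat_id} via {type(request).__name__}"

async def locked_task(chat_id: str, request) -> str:
    async with LOCKS[chat_id]:
        return await process_summary(chat_id, request)

async def outlet(chat_id: str, request) -> None:
    # Fire-and-forget: the response returns immediately while the
    # summary job runs in the background.
    asyncio.create_task(locked_task(chat_id, request))
```

The per-chat lock is what lets rapid successive turns each schedule a task without ever interleaving two summary generations for the same conversation.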
@@ -2609,6 +2633,7 @@ class Filter:
        lang: str,
        __event_emitter__: Callable,
        __event_call__: Callable,
        __request__: Request = None,
    ):
        """Wrapper to run summary generation with an async lock."""
        async with lock:
@@ -2621,6 +2646,7 @@ class Filter:
                lang,
                __event_emitter__,
                __event_call__,
                __request__,
            )

    async def _check_and_generate_summary_async(
@@ -2633,6 +2659,7 @@ class Filter:
        lang: str = "en-US",
        __event_emitter__: Callable[[Any], Awaitable[None]] = None,
        __event_call__: Callable[[Any], Awaitable[None]] = None,
        __request__: Request = None,
    ):
        """
        Background processing: Calculates Token count and generates summary (does not block response).
@@ -2736,6 +2763,7 @@ class Filter:
                lang,
                __event_emitter__,
                __event_call__,
                __request__,
            )
        else:
            await self._log(
@@ -2767,6 +2795,7 @@ class Filter:
        lang: str = "en-US",
        __event_emitter__: Callable[[Any], Awaitable[None]] = None,
        __event_call__: Callable[[Any], Awaitable[None]] = None,
        __request__: Request = None,
    ):
        """
        Generates summary asynchronously (runs in background, does not block response).
@@ -2920,8 +2949,25 @@ class Filter:
        # 4. Build conversation text
        conversation_text = self._format_messages_for_summary(middle_messages)

        # 5. Call LLM to generate new summary
        # Note: previous_summary is not passed here because old summary (if any) is already included in middle_messages
        # 5. Determine previous_summary to pass to LLM.
        # When summary_index is not None, the old summary message is already the first
        # entry of middle_messages (protected_prefix=1), so it appears verbatim in
        # conversation_text — no need to inject separately.
        # When summary_index is None the outlet messages come from raw DB history that
        # has never had the summary injected, so we must load it from DB explicitly.
        if summary_index is None:
            previous_summary = await asyncio.to_thread(
                self._load_summary, chat_id, body
            )
            if previous_summary:
                await self._log(
                    "[🤖 Async Summary Task] Loaded previous summary from DB to pass as context (summary not in messages)",
                    event_call=__event_call__,
                )
        else:
            previous_summary = None  # already embedded in middle_messages[0]

        # 6. Call LLM to generate new summary

        # Send status notification for starting summary generation
        if __event_emitter__:
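`asyncio.to_thread` is what keeps the blocking DB read off the event loop in the branch above; a minimal sketch of the same shape, with a stand-in `load_summary` instead of the plugin's `_load_summary`:

```python
import asyncio
from typing import Optional

def load_summary(chat_id: str) -> Optional[str]:
    # Stand-in for a blocking database read; in the plugin this is a
    # synchronous ORM call that must not block the event loop.
    fake_db = {"chat-1": "previous working memory"}
    return fake_db.get(chat_id)

async def fetch_previous_summary(chat_id: str, summary_index) -> Optional[str]:
    if summary_index is None:
        # The summary was never injected into the messages: load it
        # from storage in a worker thread so the loop stays responsive.
        return await asyncio.to_thread(load_summary, chat_id)
    # The summary already sits in middle_messages[0]; nothing to inject.
    return None
```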
@@ -2938,11 +2984,12 @@ class Filter:
            )

        new_summary = await self._call_summary_llm(
            None,
            conversation_text,
            {**body, "model": summary_model_id},
            user_data,
            __event_call__,
            __request__,
            previous_summary=previous_summary,
        )

        if not new_summary:
@@ -3165,11 +3212,12 @@ class Filter:

    async def _call_summary_llm(
        self,
        previous_summary: Optional[str],
        new_conversation_text: str,
        body: dict,
        user_data: dict,
        __event_call__: Callable[[Any], Awaitable[None]] = None,
        __request__: Request = None,
        previous_summary: Optional[str] = None,
    ) -> str:
        """
        Calls the LLM to generate a summary using Open WebUI's built-in method.
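The signature hunk above relocates `previous_summary` from the leading positional slot to a defaulted trailing keyword, so callers without a prior summary simply omit it. A toy sketch of the idea, using a keyword-only marker as an extra safeguard the plugin itself does not use, with illustrative names throughout:

```python
from typing import Optional

def call_summary_llm(
    new_conversation_text: str,
    body: dict,
    *,
    previous_summary: Optional[str] = None,
) -> str:
    # The keyword-only marker (*) prevents previous_summary from being
    # silently bound by position if the argument list grows later.
    prefix = f"[prev: {previous_summary}] " if previous_summary else ""
    return prefix + f"summarize({len(new_conversation_text)} chars)"
```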
@@ -3179,43 +3227,56 @@ class Filter:
            event_call=__event_call__,
        )

        # Build summary prompt (Optimized)
        summary_prompt = f"""
You are a professional conversation context compression expert. Your task is to create a high-fidelity summary of the following conversation content.
This conversation may contain previous summaries (as system messages or text) and subsequent conversation content.
        # Build summary prompt (Optimized for State/Working Memory and Tool Calling)
        previous_summary_block = (
            f"<previous_working_memory>\n{previous_summary}\n</previous_working_memory>\n\n"
            if previous_summary
            else ""
        )
        summary_prompt = f"""You are an expert Context Compression Engine. Produce a high-fidelity, maximally dense "Working Memory" snapshot from the inputs below.

### Core Objectives
1. **Comprehensive Summary**: Concisely summarize key information, user intent, and assistant responses from the conversation.
2. **De-noising**: Remove greetings, repetitions, confirmations, and other non-essential information.
3. **Key Retention**:
    * **Code snippets, commands, and technical parameters must be preserved verbatim. Do not modify or generalize them.**
    * User intent, core requirements, decisions, and action items must be clearly preserved.
4. **Coherence**: The generated summary should be a cohesive whole that can replace the original conversation as context.
5. **Detailed Record**: Since length is permitted, please preserve details, reasoning processes, and nuances of multi-turn interactions as much as possible, rather than just high-level generalizations.
### Processing Rules
1. **State-Aware Merging**: If `<previous_working_memory>` is provided, you MUST merge it with the new conversation. Preserve facts that are still true; UPDATE or SUPERSEDE facts whose state has changed (e.g., "bug X exists" → "bug X fixed in commit abc"); REMOVE facts fully resolved with no future relevance.
2. **Goal Tracking**: Reflect the LATEST user intent as "Current Goal". If the goal has shifted, move the old goal to Working Memory as "Prior Goal (completed/abandoned)".
3. **Tool-Call Decompression**: From raw JSON tool arguments/results, extract ONLY: definitive facts, concrete return values, error codes, root causes. Discard structural boilerplate.
4. **Error & Exception Verbatim**: Stack traces, error messages, exception types, and exit codes MUST be quoted exactly — they are primary debugging artifacts.
5. **Ruthless Denoising**: Delete greetings, apologies, acknowledgments, and any phrase that carries zero information.
6. **Verbatim Retention**: Code snippets, shell commands, file paths, config values, and Message IDs (e.g., [ID: ...]) MUST appear character-for-character.
7. **Causal Chain**: For each tool call or action, record: trigger → operation → outcome (one line per event).

### Output Requirements
* **Format**: Structured text, logically clear.
* **Language**: Consistent with the conversation language (usually English).
* **Length**: Strictly control within {self.valves.max_summary_tokens} Tokens.
* **Strictly Forbidden**: Do not output "According to the conversation...", "The summary is as follows..." or similar filler. Output the summary content directly.
### Output Constraints
* **Format**: Follow the Required Structure below — omit a section only if it has zero content.
* **Token Budget**: Stay under {self.valves.max_summary_tokens} tokens. Prioritize recency and actionability when trimming.
* **Tone**: Terse, robotic, third-person where applicable.
* **Language**: Match the dominant language of the conversation.
* **Forbidden**: No preamble, no closing remarks, no meta-commentary. Start directly with the first section header.

### Suggested Summary Structure
* **Current Goal/Topic**: A one-sentence summary of the problem currently being solved.
* **Key Information & Context**:
    * Confirmed facts/parameters.
    * **Code/Technical Details** (Wrap in code blocks).
* **Progress & Conclusions**: Completed steps and reached consensus.
* **Action Items/Next Steps**: Clear follow-up actions.
### Required Output Structure
## Current Goal
(Single sentence: what the user is trying to achieve RIGHT NOW)

### Identity Traceability
The input dialogue contains message IDs (e.g., [ID: ...]) and optional names.
If a specific message contributes a critical decision, a unique code snippet, or a tool-calling result, please reference its ID or Name in your summary to maintain traceability.
## Working Memory & Facts
(Bullet list — each item: one established fact, constraint, or parsed tool result. Mark superseded items as ~~old~~ → new. Cite [ID: ...] when critical.)

## Code & Artifacts
(Only if present. Exact code blocks with language tags. File paths as inline code.)

## Causal Log
(Chronological. Format: `[MSG_ID?] action → result`. One line per event. Keep only the last N events that remain causally relevant.)

## Errors & Exceptions
(Only if unresolved. Exact quoted text. Include error type, message, and last known stack frame.)

## Pending / Next Steps
(Ordered list. First item = most immediate action.)

---
{previous_summary_block}<new_conversation>
{new_conversation_text}
</new_conversation>
---

Based on the content above, generate the summary (including key message identities where relevant):
Generate the Working Memory:
"""
        # Determine the model to use
        model = self._clean_model_id(self.valves.summary_model) or self._clean_model_id(
@@ -3262,8 +3323,8 @@ Based on the content above, generate the summary (including key message identiti
            event_call=__event_call__,
        )

        # Create Request object
        request = Request(scope={"type": "http", "app": webui_app})
        # Use the injected request if available, otherwise fall back to a minimal synthetic one
        request = __request__ or Request(scope={"type": "http", "app": webui_app})

        # Call generate_chat_completion
        response = await generate_chat_completion(request, payload, user)
@@ -3284,8 +3345,13 @@ Based on the content above, generate the summary (including key message identiti
            or "choices" not in response
            or not response["choices"]
        ):
            try:
                response_repr = json_module.dumps(response, ensure_ascii=False, indent=2)
            except Exception:
                response_repr = repr(response)
            raise ValueError(
                f"LLM response format incorrect or empty: {type(response).__name__}"
                f"LLM response format incorrect or empty: {type(response).__name__}\n"
                f"Full response:\n{response_repr}"
            )

        summary = response["choices"][0]["message"]["content"].strip()
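The fallback on display in that hunk (prefer the injected `__request__`, synthesize a minimal ASGI scope only as a last resort) can be sketched like this. The `Request` class and `webui_app` object below are simplified stand-ins for `starlette.requests.Request` and the real OpenWebUI application:

```python
from types import SimpleNamespace

webui_app = SimpleNamespace(name="open-webui")  # stand-in for the real ASGI app

class Request:
    """Minimal stand-in mirroring starlette.requests.Request(scope)."""
    def __init__(self, scope: dict):
        self.scope = scope
        self.app = scope.get("app")

def resolve_request(injected):
    # A real request context (headers, auth state, app) always wins;
    # the synthetic scope is only a compatibility crutch for background
    # tasks that never received one.
    return injected or Request(scope={"type": "http", "app": webui_app})
```

Note that even in a background task the application-level state hanging off `request.app` remains valid, which is why the synthetic scope still carries the app reference.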
@@ -114,6 +114,7 @@ class Filter:

        # Check if it's a Copilot model
        is_copilot_model = self._is_copilot_model(current_model)
        body["is_copilot_model"] = is_copilot_model

        await self._emit_debug_log(
            __event_emitter__,

@@ -9,6 +9,7 @@ This directory contains automated scripts for deploying plugins in development t
1. **OpenWebUI Running**: Make sure OpenWebUI is running locally (default `http://localhost:3000`)
2. **API Key**: You need a valid OpenWebUI API key
3. **Environment File**: Create a `.env` file in this directory containing your API key:

```
api_key=sk-xxxxxxxxxxxxx
```
@@ -42,12 +43,14 @@ python deploy_filter.py --list
Used to deploy Filter-type plugins (such as message filtering, context compression, etc.).

**Key Features**:

- ✅ Auto-extracts metadata from Python files (version, author, description, etc.)
- ✅ Attempts to update existing plugins, creates if not found
- ✅ Supports multiple Filter plugin management
- ✅ Detailed error messages and connection diagnostics

**Usage**:

```bash
# Deploy async_context_compression (default)
python deploy_filter.py
@@ -62,6 +65,7 @@ python deploy_filter.py -l
```

**Workflow**:

1. Load API key from `.env`
2. Find target Filter plugin directory
3. Read Python source file
@@ -76,6 +80,7 @@ python deploy_filter.py -l
Used to deploy Pipe-type plugins (such as GitHub Copilot SDK).

**Usage**:

```bash
python deploy_pipe.py
```
@@ -101,6 +106,7 @@ Create a dedicated long-term API key in OpenWebUI Settings for deployment purpos
**Cause**: OpenWebUI is not running or port is different

**Solution**:

- Make sure OpenWebUI is running
- Check which port OpenWebUI is actually listening on (usually 3000)
- Edit the URL in the script if needed
@@ -110,6 +116,7 @@ Create a dedicated long-term API key in OpenWebUI Settings for deployment purpos
**Cause**: `.env` file was not created

**Solution**:

```bash
echo "api_key=sk-your-api-key-here" > .env
```
@@ -119,6 +126,7 @@ echo "api_key=sk-your-api-key-here" > .env
**Cause**: Filter directory name is incorrect

**Solution**:

```bash
# List all available Filters
python deploy_filter.py --list
@@ -129,6 +137,7 @@ python deploy_filter.py --list
**Cause**: API key is invalid or expired

**Solution**:

1. Verify your API key is valid
2. Generate a new API key
3. Update the `.env` file
@@ -177,7 +186,8 @@ python deploy_filter.py async-context-compression

## Security Considerations

⚠️ **Important**:
⚠️ **Important**:

- ✅ Add `.env` file to `.gitignore` (avoid committing sensitive info)
- ✅ Never commit API keys to version control
- ✅ Use only on trusted networks

@@ -7,6 +7,7 @@ Added a complete local deployment toolchain for the `async_context_compression`
## 📋 New Files

### 1. **deploy_filter.py** — Filter Plugin Deployment Script

- **Location**: `scripts/deploy_filter.py`
- **Function**: Auto-deploy Filter-type plugins to local OpenWebUI instance
- **Features**:
@@ -19,6 +20,7 @@ Added a complete local deployment toolchain for the `async_context_compression`
- **Code Lines**: ~300

### 2. **DEPLOYMENT_GUIDE.md** — Complete Deployment Guide

- **Location**: `scripts/DEPLOYMENT_GUIDE.md`
- **Contents**:
  - Prerequisites and quick start
@@ -28,6 +30,7 @@ Added a complete local deployment toolchain for the `async_context_compression`
  - Step-by-step workflow examples

### 3. **QUICK_START.md** — Quick Reference Card

- **Location**: `scripts/QUICK_START.md`
- **Contents**:
  - One-line deployment command
@@ -37,6 +40,7 @@ Added a complete local deployment toolchain for the `async_context_compression`
  - CI/CD integration examples

### 4. **test_deploy_filter.py** — Unit Test Suite

- **Location**: `tests/scripts/test_deploy_filter.py`
- **Test Coverage**:
  - ✅ Filter file discovery (3 tests)
@@ -138,6 +142,7 @@ openwebui_id: b1655bc8-6de9-4cad-8cb5-a6f7829a02ce
```

**Supported Metadata Fields**:

- `title` — Filter display name ✅
- `id` — Unique identifier ✅
- `author` — Author name ✅
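Frontmatter-style fields like the ones listed above are typically pulled out of the plugin's module docstring with a small regex pass. A hedged sketch of that idea (the actual parsing logic in `deploy_filter.py` may differ):

```python
import re

def extract_metadata(source: str) -> dict:
    """Parse `key: value` lines from the first triple-quoted docstring."""
    match = re.search(r'"""(.*?)"""', source, re.DOTALL)
    if not match:
        return {}
    meta = {}
    for line in match.group(1).splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            if key.strip() and value.strip():
                meta[key.strip()] = value.strip()
    return meta
```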
@@ -335,17 +340,20 @@ Metadata Extraction and Delivery
### Debugging Tips

1. **Enable Verbose Logging**:

   ```bash
   python deploy_filter.py 2>&1 | tee deploy.log
   ```

2. **Test API Connection**:

   ```bash
   curl -X GET http://localhost:3000/api/v1/functions \
     -H "Authorization: Bearer $API_KEY"
   ```

3. **Verify .env File**:

   ```bash
   grep "api_key=" scripts/.env
   ```

@@ -73,12 +73,14 @@ python deploy_async_context_compression.py
```

**Features**:

- ✅ Optimized specifically for async_context_compression
- ✅ Clear deployment steps and confirmation
- ✅ Friendly error messages
- ✅ Shows next steps after successful deployment

**Sample Output**:

```
======================================================================
🚀 Deploying Async Context Compression Filter Plugin
@@ -117,6 +119,7 @@ python deploy_filter.py --list
```

**Features**:

- ✅ Generic Filter deployment tool
- ✅ Supports multiple plugins
- ✅ Auto metadata extraction
@@ -142,6 +145,7 @@ python deploy_tool.py openwebui-skills-manager
```

**Features**:

- ✅ Supports Tools plugin deployment
- ✅ Auto-detects `Tools` class definition
- ✅ Smart update/create logic
@@ -290,6 +294,7 @@ git status # should not show .env
```

**Solution**:

```bash
# 1. Check if OpenWebUI is running
curl http://localhost:3000
@@ -309,6 +314,7 @@ curl http://localhost:3000
```

**Solution**:

```bash
echo "api_key=sk-your-api-key" > .env
cat .env # verify file created
@@ -321,6 +327,7 @@ cat .env # verify file created
```

**Solution**:

```bash
# List all available Filters
python deploy_filter.py --list
@@ -337,6 +344,7 @@ python deploy_filter.py async-context-compression
```

**Solution**:

```bash
# 1. Verify API key is correct
grep "api_key=" .env
@@ -370,7 +378,7 @@ python deploy_async_context_compression.py

### Method 2: Verify in OpenWebUI

1. Open OpenWebUI: http://localhost:3000
1. Open OpenWebUI: <http://localhost:3000>
2. Go to Settings → Filters
3. Check if 'Async Context Compression' is listed
4. Verify version number is correct (should be latest)
@@ -380,6 +388,7 @@ python deploy_async_context_compression.py
1. Open a new conversation
2. Enable 'Async Context Compression' Filter
3. Have a multi-turn conversation and verify that compression/summarization works

## 💡 Advanced Usage

### Automated Deploy & Test
@@ -473,4 +482,3 @@ Newly created deployment-related files:
**Last Updated**: 2026-03-09
**Script Status**: ✅ Ready for production
**Test Coverage**: 10/10 passed ✅


@@ -5,6 +5,7 @@
✅ **Yes, re-deploying automatically updates the plugin!**

The deployment script uses a **smart two-stage strategy**:

1. 🔄 **Try UPDATE First** (if plugin exists)
2. 📝 **Auto CREATE** (if update fails — plugin doesn't exist)

@@ -54,6 +55,7 @@ if response.status_code == 200:
```

**What Happens**:

- Send **POST** to `/api/v1/functions/id/{filter_id}/update`
- If returns **HTTP 200**, plugin exists and update succeeded
- Includes:
@@ -84,6 +86,7 @@ if response.status_code != 200:
```

**What Happens**:

- If update fails (HTTP ≠ 200), auto-attempt create
- Send **POST** to `/api/v1/functions/create`
- Uses **same payload** (code, metadata identical)
@@ -103,6 +106,7 @@ $ python deploy_async_context_compression.py
```

**What Happens**:

1. Try UPDATE → fails (HTTP 404 — plugin doesn't exist)
2. Auto-try CREATE → succeeds (HTTP 200)
3. Plugin created in OpenWebUI
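The update-then-create flow described above boils down to one conditional on the HTTP status. A hedged sketch of that logic, where the endpoint paths come from this document but the `deploy` helper and the injected `post` callable (a seam standing in for `requests.post`) are illustrative:

```python
def deploy(filter_id: str, payload: dict, post) -> str:
    """Two-stage deploy: try UPDATE first, fall back to CREATE.

    `post` mimics requests.post(url, json=...) and returns an object
    with a .status_code attribute; injecting it keeps the sketch
    testable without a live OpenWebUI instance.
    """
    base = "http://localhost:3000/api/v1/functions"
    resp = post(f"{base}/id/{filter_id}/update", json=payload)
    if resp.status_code == 200:
        return "updated"
    # Update failed (e.g. HTTP 404: plugin doesn't exist yet), so create it
    # with the exact same payload.
    resp = post(f"{base}/create", json=payload)
    return "created" if resp.status_code == 200 else "failed"
```

Because the same payload is sent in both stages, a first-time deploy and a re-deploy are indistinguishable to the caller.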
@@ -121,6 +125,7 @@ $ python deploy_async_context_compression.py
```

**What Happens**:

1. Read modified code
2. Try UPDATE → succeeds (HTTP 200 — plugin exists)
3. Plugin in OpenWebUI updated to latest code
@@ -147,6 +152,7 @@ $ python deploy_async_context_compression.py
```

**Characteristics**:

- 🚀 Each update takes only 5 seconds
- 📝 Each is an incremental update
- ✅ No need to restart OpenWebUI
@@ -181,11 +187,13 @@ version: 1.3.0
```

**Each deployment**:

1. Script reads version from docstring
2. Sends this version in manifest to OpenWebUI
3. If you change version in code, deployment updates to new version

**Best Practice**:

```bash
# 1. Modify code
vim async_context_compression.py
@@ -300,6 +308,7 @@ Usually **not needed** because:
4. ✅ Failures auto-rollback

But if you really do need that control, you can:

- Manually modify the script (edit `deploy_filter.py`)
- Or call the specific UPDATE/CREATE API endpoints separately

@@ -323,6 +332,7 @@ Usually **not needed** because:
### Q: Can I deploy multiple plugins at the same time?

✅ **Yes!**

```bash
python deploy_filter.py async-context-compression
python deploy_filter.py folder-memory
@@ -337,6 +347,7 @@ python deploy_filter.py context_enhancement_filter
---

**Summary**: The deployment script's update mechanism is fully automated. Developers only need to modify the code; each run of `deploy_async_context_compression.py` will automatically:

1. ✅ Create (first run) or update (subsequent runs) the plugin
2. ✅ Extract the latest metadata and version number from the code
3. ✅ Take effect immediately, with no OpenWebUI restart required

scripts/agent_sync.py — 202 lines (new executable file)
@@ -0,0 +1,202 @@
#!/usr/bin/env python3
"""
🤖 AGENT SYNC TOOL v2.2 (Unified Semantic Edition)
-------------------------------------------------
Consolidated and simplified command set based on Copilot's architectural feedback.
Native support for Study, Task, and Broadcast workflows.
Maintains Sisyphus's advanced task management (task_queue, subscriptions).
"""
import sqlite3
import os
import sys
import argparse
from datetime import datetime

DB_PATH = os.path.join(os.getcwd(), ".agent/agent_hub.db")


def get_connection():
    os.makedirs(os.path.dirname(DB_PATH), exist_ok=True)
    return sqlite3.connect(DB_PATH)


def init_db():
    conn = get_connection()
    cursor = conn.cursor()
    cursor.executescript('''
        CREATE TABLE IF NOT EXISTS agents (
            id TEXT PRIMARY KEY,
            name TEXT,
            task TEXT,
            status TEXT DEFAULT 'idle',
            last_seen TIMESTAMP DEFAULT CURRENT_TIMESTAMP
        );
        CREATE TABLE IF NOT EXISTS file_locks (
            file_path TEXT PRIMARY KEY,
            agent_id TEXT,
            lock_type TEXT DEFAULT 'write',
            timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP
        );
        CREATE TABLE IF NOT EXISTS research_log (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            agent_id TEXT,
            topic TEXT,
            content TEXT,
            note_type TEXT DEFAULT 'note', -- 'note', 'study', 'conclusion'
            is_resolved INTEGER DEFAULT 0,
            created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
        );
        CREATE TABLE IF NOT EXISTS task_queue (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            initiator TEXT,
            task_type TEXT, -- 'research', 'collab', 'fix'
            topic TEXT,
            description TEXT,
            priority TEXT DEFAULT 'normal',
            status TEXT DEFAULT 'pending', -- 'pending', 'active', 'completed'
            created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
        );
        CREATE TABLE IF NOT EXISTS task_subscriptions (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            task_id INTEGER,
            agent_id TEXT,
            role TEXT, -- 'lead', 'reviewer', 'worker', 'observer'
            FOREIGN KEY(task_id) REFERENCES task_queue(id)
        );
        CREATE TABLE IF NOT EXISTS broadcasts (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            sender_id TEXT,
            type TEXT,
            payload TEXT,
            active INTEGER DEFAULT 1,
            created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
        );
        CREATE TABLE IF NOT EXISTS global_settings (
            key TEXT PRIMARY KEY, value TEXT
        );
    ''')
    cursor.execute("INSERT OR IGNORE INTO global_settings (key, value) VALUES ('mode', 'isolation')")
    conn.commit()
    conn.close()
    print(f"✅ MACP 2.2 Semantic Kernel Active")

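Note that locking in this schema relies entirely on `file_path` being the PRIMARY KEY of `file_locks`: a second INSERT for the same path raises `sqlite3.IntegrityError`, so the database constraint itself makes lock acquisition atomic, with no explicit existence check. A minimal demonstration against an in-memory database (the `try_lock` helper is illustrative, mirroring the script's `lock` command):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE file_locks ("
    " file_path TEXT PRIMARY KEY,"
    " agent_id TEXT,"
    " lock_type TEXT DEFAULT 'write',"
    " timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP)"
)

def try_lock(agent_id: str, path: str) -> bool:
    try:
        conn.execute(
            "INSERT INTO file_locks (file_path, agent_id) VALUES (?, ?)",
            (path, agent_id),
        )
        conn.commit()
        return True
    except sqlite3.IntegrityError:
        # Row already exists: another agent holds the lock.
        return False
```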
def get_status():
    conn = get_connection(); cursor = conn.cursor()
    print("\n--- 🛰️ Agent Fleet ---")
    for r in cursor.execute("SELECT id, name, status, task FROM agents"):
        print(f"[{r[2].upper()}] {r[1]} ({r[0]}) | Task: {r[3]}")

    print("\n--- 📋 Global Task Queue ---")
    for r in cursor.execute("SELECT id, topic, task_type, priority, status FROM task_queue WHERE status != 'completed'"):
        print(f" #{r[0]} [{r[2].upper()}] {r[1]} | {r[3]} | {r[4]}")

    print("\n--- 📚 Active Studies ---")
    for r in cursor.execute("SELECT topic, agent_id FROM research_log WHERE note_type='study' AND is_resolved=0"):
        print(f" 🔬 {r[0]} (by {r[1]})")

    print("\n--- 📢 Live Broadcasts ---")
    for r in cursor.execute("SELECT sender_id, type, payload FROM broadcasts WHERE active=1 ORDER BY created_at DESC LIMIT 3"):
        print(f"📣 {r[0]} [{r[1].upper()}]: {r[2]}")

    print("\n--- 🔒 File Locks ---")
    for r in cursor.execute("SELECT file_path, agent_id, lock_type FROM file_locks ORDER BY timestamp DESC LIMIT 20"):
        print(f" {r[0]} -> {r[1]} ({r[2]})")

    cursor.execute("SELECT value FROM global_settings WHERE key='mode'")
    mode = cursor.fetchone()[0]
    print(f"\n🌍 Project Mode: {mode.upper()}")
    conn.close()

if __name__ == "__main__":
|
||||
parser = argparse.ArgumentParser()
|
||||
subparsers = parser.add_subparsers(dest="command")
|
||||
|
||||
# Base commands
|
||||
subparsers.add_parser("init")
|
||||
subparsers.add_parser("status")
|
||||
subparsers.add_parser("check")
|
||||
subparsers.add_parser("ping")
|
||||
|
||||
reg = subparsers.add_parser("register")
|
||||
reg.add_argument("id"); reg.add_argument("name"); reg.add_argument("task")
|
||||
|
||||
# Lock commands
|
||||
lock = subparsers.add_parser("lock")
|
||||
lock.add_argument("id"); lock.add_argument("path")
|
||||
unlock = subparsers.add_parser("unlock")
|
||||
unlock.add_argument("id"); unlock.add_argument("path")
|
||||
|
||||
# Research & Note commands
|
||||
note = subparsers.add_parser("note")
|
||||
note.add_argument("id"); note.add_argument("topic"); note.add_argument("content")
|
||||
note.add_argument("--type", default="note")
|
||||
|
||||
# Semantic Workflows (The Unified Commands)
|
||||
study = subparsers.add_parser("study")
|
||||
study.add_argument("id"); study.add_argument("topic"); study.add_argument("--desc", default=None)
|
||||
|
||||
resolve = subparsers.add_parser("resolve")
|
||||
resolve.add_argument("id"); resolve.add_argument("topic"); resolve.add_argument("conclusion")
|
||||
|
||||
# Task Management (The Advanced Commands)
|
||||
assign = subparsers.add_parser("assign")
|
||||
assign.add_argument("id"); assign.add_argument("target"); assign.add_argument("topic")
|
||||
assign.add_argument("--role", default="worker"); assign.add_argument("--priority", default="normal")
|
||||
|
||||
bc = subparsers.add_parser("broadcast")
|
||||
bc.add_argument("id"); bc.add_argument("type"); bc.add_argument("payload")
|
||||
|
||||
args = parser.parse_args()
|
||||
if args.command == "init": init_db()
|
||||
elif args.command == "status" or args.command == "check" or args.command == "ping":
    get_status()
elif args.command == "register":
    conn = get_connection(); cursor = conn.cursor()
    cursor.execute("INSERT OR REPLACE INTO agents (id, name, task, status, last_seen) VALUES (?, ?, ?, 'active', CURRENT_TIMESTAMP)", (args.id, args.name, args.task))
    conn.commit(); conn.close()
    print(f"🤖 Registered: {args.id}")
elif args.command == "lock":
    conn = get_connection(); cursor = conn.cursor()
    try:
        cursor.execute("INSERT INTO file_locks (file_path, agent_id) VALUES (?, ?)", (args.path, args.id))
        conn.commit(); print(f"🔒 Locked {args.path}")
    except sqlite3.IntegrityError:
        # A duplicate PRIMARY KEY on file_path means another agent holds the lock.
        print(f"❌ Lock conflict on {args.path}"); sys.exit(1)
    finally:
        conn.close()
elif args.command == "unlock":
    conn = get_connection(); cursor = conn.cursor()
    cursor.execute("DELETE FROM file_locks WHERE file_path=? AND agent_id=?", (args.path, args.id))
    conn.commit(); conn.close(); print(f"🔓 Unlocked {args.path}")
elif args.command == "study":
    conn = get_connection(); cursor = conn.cursor()
    cursor.execute("INSERT INTO research_log (agent_id, topic, content, note_type) VALUES (?, ?, ?, 'study')", (args.id, args.topic, args.desc or "Study started"))
    cursor.execute("UPDATE agents SET status = 'researching'")
    cursor.execute("INSERT INTO broadcasts (sender_id, type, payload) VALUES (?, 'research', ?)", (args.id, f"NEW STUDY: {args.topic}"))
    cursor.execute("UPDATE global_settings SET value = ? WHERE key = 'mode'", (f"RESEARCH: {args.topic}",))
    conn.commit(); conn.close()
    print(f"🔬 Study '{args.topic}' initiated.")
elif args.command == "resolve":
    conn = get_connection(); cursor = conn.cursor()
    cursor.execute("UPDATE research_log SET is_resolved = 1 WHERE topic = ?", (args.topic,))
    cursor.execute("INSERT INTO research_log (agent_id, topic, content, note_type, is_resolved) VALUES (?, ?, ?, 'conclusion', 1)", (args.id, args.topic, args.conclusion))
    cursor.execute("UPDATE global_settings SET value = 'isolation' WHERE key = 'mode'")
    cursor.execute("UPDATE agents SET status = 'active' WHERE status = 'researching'")
    conn.commit(); conn.close()
    print(f"✅ Study '{args.topic}' resolved.")
elif args.command == "assign":
    conn = get_connection(); cursor = conn.cursor()
    cursor.execute(
        "INSERT INTO task_queue (initiator, task_type, topic, description, priority, status) VALUES (?, 'task', ?, ?, ?, 'pending')",
        (args.id, args.topic, f"Assigned to {args.target}: {args.topic}", args.priority),
    )
    task_id = cursor.lastrowid
    cursor.execute("INSERT INTO task_subscriptions (task_id, agent_id, role) VALUES (?, ?, ?)", (task_id, args.target, args.role))
    conn.commit(); conn.close()
    print(f"📋 Task #{task_id} assigned to {args.target}")
elif args.command == "broadcast":
    conn = get_connection(); cursor = conn.cursor()
    cursor.execute("UPDATE broadcasts SET active = 0 WHERE type = ?", (args.type,))
    cursor.execute("INSERT INTO broadcasts (sender_id, type, payload) VALUES (?, ?, ?)", (args.id, args.type, args.payload))
    conn.commit(); conn.close()
    print(f"📡 Broadcast: {args.payload}")
elif args.command == "note":
    conn = get_connection(); cursor = conn.cursor()
    cursor.execute("INSERT INTO research_log (agent_id, topic, content, note_type) VALUES (?, ?, ?, ?)", (args.id, args.topic, args.content, args.type))
    conn.commit(); conn.close()
    print("📝 Note added.")
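The `lock` branch above leans on SQLite's PRIMARY KEY constraint as a mutex: a second INSERT for the same `file_path` raises `IntegrityError`, which the CLI reports as a lock conflict. A minimal self-contained sketch of that pattern (in-memory database, table trimmed to two columns; names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE file_locks (file_path TEXT PRIMARY KEY, agent_id TEXT)")

def try_lock(path: str, agent: str) -> bool:
    """Return True if the lock was acquired, False if another agent holds it."""
    try:
        # The INSERT succeeds only while no row with this file_path exists.
        conn.execute("INSERT INTO file_locks (file_path, agent_id) VALUES (?, ?)", (path, agent))
        conn.commit()
        return True
    except sqlite3.IntegrityError:
        return False

print(try_lock("src/app.py", "claude-code"))  # acquired
print(try_lock("src/app.py", "copilot"))      # conflict: already held
```

Deleting the row, as the `unlock` branch does, releases the lock for the next INSERT.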
847
scripts/agent_sync_v2.py
Executable file
@@ -0,0 +1,847 @@
#!/usr/bin/env python3
"""
🤖 AGENT SYNC TOOL v2.0 - MULTI-AGENT COOPERATION PROTOCOL (MACP)
---------------------------------------------------------
Enhanced collaboration commands for seamless multi-agent synergy.

QUICK COMMANDS:
  @research <topic>        - Start a joint research topic
  @join <topic>            - Join an active research topic
  @find <topic> <content>  - Post a finding to research topic
  @consensus <topic>       - Generate consensus document
  @assign <agent> <task>   - Assign task to specific agent
  @notify <message>        - Broadcast to all agents
  @handover <agent>        - Handover current task
  @poll <question>         - Start a quick poll
  @switch <agent>          - Request switch to specific agent

WORKFLOW: @research -> @find (xN) -> @consensus -> @assign
"""
import sqlite3
import os
import sys
import argparse
import json
from datetime import datetime, timedelta
from typing import List, Dict, Optional

DB_PATH = os.path.join(os.getcwd(), ".agent/agent_hub.db")

def get_connection():
    os.makedirs(os.path.dirname(DB_PATH), exist_ok=True)
    return sqlite3.connect(DB_PATH)

def init_db():
    conn = get_connection()
    cursor = conn.cursor()
    cursor.executescript('''
        CREATE TABLE IF NOT EXISTS agents (
            id TEXT PRIMARY KEY,
            name TEXT,
            task TEXT,
            status TEXT DEFAULT 'idle',
            current_research TEXT,
            last_seen TIMESTAMP DEFAULT CURRENT_TIMESTAMP
        );
        CREATE TABLE IF NOT EXISTS file_locks (
            file_path TEXT PRIMARY KEY,
            agent_id TEXT,
            lock_type TEXT,
            timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
            FOREIGN KEY(agent_id) REFERENCES agents(id)
        );
        CREATE TABLE IF NOT EXISTS research_log (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            agent_id TEXT,
            topic TEXT,
            content TEXT,
            finding_type TEXT DEFAULT 'note',
            created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
            FOREIGN KEY(agent_id) REFERENCES agents(id)
        );
        CREATE TABLE IF NOT EXISTS research_topics (
            topic TEXT PRIMARY KEY,
            status TEXT DEFAULT 'active',
            initiated_by TEXT,
            created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
            completed_at TIMESTAMP
        );
        CREATE TABLE IF NOT EXISTS agent_research_participation (
            agent_id TEXT,
            topic TEXT,
            joined_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
            PRIMARY KEY (agent_id, topic)
        );
        CREATE TABLE IF NOT EXISTS task_assignments (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            agent_id TEXT,
            task TEXT,
            assigned_by TEXT,
            status TEXT DEFAULT 'pending',
            created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
            completed_at TIMESTAMP
        );
        CREATE TABLE IF NOT EXISTS notifications (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            agent_id TEXT,
            message TEXT,
            is_broadcast BOOLEAN DEFAULT 0,
            is_read BOOLEAN DEFAULT 0,
            created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
        );
        CREATE TABLE IF NOT EXISTS polls (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            question TEXT,
            created_by TEXT,
            status TEXT DEFAULT 'active',
            created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
        );
        CREATE TABLE IF NOT EXISTS poll_votes (
            poll_id INTEGER,
            agent_id TEXT,
            vote TEXT,
            voted_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
            PRIMARY KEY (poll_id, agent_id)
        );
        CREATE TABLE IF NOT EXISTS global_settings (
            key TEXT PRIMARY KEY,
            value TEXT
        );
    ''')
    cursor.execute("INSERT OR IGNORE INTO global_settings (key, value) VALUES ('mode', 'isolation')")
    conn.commit()
    conn.close()
    print(f"✅ Agent Hub v2.0 initialized at {DB_PATH}")
# ============ AGENT MANAGEMENT ============

def register_agent(agent_id, name, task, status="idle"):
    conn = get_connection()
    cursor = conn.cursor()
    cursor.execute('''
        INSERT OR REPLACE INTO agents (id, name, task, status, last_seen)
        VALUES (?, ?, ?, ?, CURRENT_TIMESTAMP)
    ''', (agent_id, name, task, status))
    conn.commit()
    conn.close()
    print(f"🤖 Agent '{name}' ({agent_id}) registered.")

def update_agent_status(agent_id, status, research_topic=None):
    conn = get_connection()
    cursor = conn.cursor()
    if research_topic:
        cursor.execute('''
            UPDATE agents SET status = ?, current_research = ?, last_seen = CURRENT_TIMESTAMP
            WHERE id = ?
        ''', (status, research_topic, agent_id))
    else:
        cursor.execute('''
            UPDATE agents SET status = ?, last_seen = CURRENT_TIMESTAMP
            WHERE id = ?
        ''', (status, agent_id))
    conn.commit()
    conn.close()
# ============ RESEARCH COLLABORATION ============

def start_research(agent_id, topic):
    """@research - Start a new research topic and notify all agents"""
    conn = get_connection()
    cursor = conn.cursor()

    # Create research topic
    try:
        cursor.execute('''
            INSERT INTO research_topics (topic, status, initiated_by)
            VALUES (?, 'active', ?)
        ''', (topic, agent_id))
    except sqlite3.IntegrityError:
        print(f"⚠️ Research topic '{topic}' already exists")
        conn.close()
        return

    # Add initiator as participant
    cursor.execute('''
        INSERT OR IGNORE INTO agent_research_participation (agent_id, topic)
        VALUES (?, ?)
    ''', (agent_id, topic))

    # Update agent status
    cursor.execute('''
        UPDATE agents SET status = 'researching', current_research = ?
        WHERE id = ?
    ''', (topic, agent_id))

    # Notify all other agents
    cursor.execute("SELECT id FROM agents WHERE id != ?", (agent_id,))
    other_agents = cursor.fetchall()
    for (other_id,) in other_agents:
        cursor.execute('''
            INSERT INTO notifications (agent_id, message, is_broadcast)
            VALUES (?, ?, 0)
        ''', (other_id, f"🔬 New research started: '{topic}' by {agent_id}. Use '@join {topic}' to participate."))

    conn.commit()
    conn.close()

    print(f"🔬 Research topic '{topic}' started by {agent_id}")
    print(f"📢 Notified {len(other_agents)} other agents")

def join_research(agent_id, topic):
    """@join - Join an active research topic"""
    conn = get_connection()
    cursor = conn.cursor()

    # Check if topic exists and is active
    cursor.execute("SELECT status FROM research_topics WHERE topic = ?", (topic,))
    result = cursor.fetchone()
    if not result:
        print(f"❌ Research topic '{topic}' not found")
        conn.close()
        return
    if result[0] != 'active':
        print(f"⚠️ Research topic '{topic}' is {result[0]}")
        conn.close()
        return

    # Add participant
    cursor.execute('''
        INSERT OR IGNORE INTO agent_research_participation (agent_id, topic)
        VALUES (?, ?)
    ''', (agent_id, topic))

    # Update agent status
    cursor.execute('''
        UPDATE agents SET status = 'researching', current_research = ?
        WHERE id = ?
    ''', (topic, agent_id))

    conn.commit()
    conn.close()
    print(f"✅ {agent_id} joined research: '{topic}'")
def post_finding(agent_id, topic, content, finding_type="note"):
    """@find - Post a finding to research topic"""
    conn = get_connection()
    cursor = conn.cursor()

    # Check if topic exists
    cursor.execute("SELECT status FROM research_topics WHERE topic = ?", (topic,))
    result = cursor.fetchone()
    if not result:
        print(f"❌ Research topic '{topic}' not found")
        conn.close()
        return
    if result[0] != 'active':
        print(f"⚠️ Research topic '{topic}' is {result[0]}")

    # Add finding
    cursor.execute('''
        INSERT INTO research_log (agent_id, topic, content, finding_type)
        VALUES (?, ?, ?, ?)
    ''', (agent_id, topic, content, finding_type))

    # Update agent status
    cursor.execute('''
        UPDATE agents SET last_seen = CURRENT_TIMESTAMP WHERE id = ?
    ''', (agent_id,))

    conn.commit()
    conn.close()
    print(f"📝 Finding added to '{topic}' by {agent_id}")
def generate_consensus(topic):
    """@consensus - Generate consensus document from research findings"""
    conn = get_connection()
    cursor = conn.cursor()

    # Get all findings
    cursor.execute('''
        SELECT agent_id, content, finding_type, created_at
        FROM research_log
        WHERE topic = ?
        ORDER BY created_at
    ''', (topic,))
    findings = cursor.fetchall()

    if not findings:
        print(f"⚠️ No findings found for topic '{topic}'")
        conn.close()
        return

    # Get participants
    cursor.execute('''
        SELECT agent_id FROM agent_research_participation WHERE topic = ?
    ''', (topic,))
    participants = [row[0] for row in cursor.fetchall()]

    # Mark topic as completed
    cursor.execute('''
        UPDATE research_topics
        SET status = 'completed', completed_at = CURRENT_TIMESTAMP
        WHERE topic = ?
    ''', (topic,))

    conn.commit()
    conn.close()

    # Generate consensus document
    consensus_dir = os.path.join(os.getcwd(), ".agent/consensus")
    os.makedirs(consensus_dir, exist_ok=True)

    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    filename = f"{topic.replace(' ', '_').replace('/', '_')}_{timestamp}.md"
    filepath = os.path.join(consensus_dir, filename)

    with open(filepath, 'w', encoding='utf-8') as f:
        f.write(f"# 🎯 Consensus: {topic}\n\n")
        f.write(f"**Generated**: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n\n")
        f.write(f"**Participants**: {', '.join(participants)}\n\n")
        f.write("---\n\n")

        for agent_id, content, finding_type, created_at in findings:
            f.write(f"## [{finding_type.upper()}] {agent_id}\n\n")
            f.write(f"*{created_at}*\n\n")
            f.write(f"{content}\n\n")

    print(f"✅ Consensus generated: {filepath}")
    print(f"📊 Total findings: {len(findings)}")
    print(f"👥 Participants: {len(participants)}")

    return filepath
# ============ TASK MANAGEMENT ============

def assign_task(assigned_by, agent_id, task):
    """@assign - Assign task to specific agent"""
    conn = get_connection()
    cursor = conn.cursor()

    cursor.execute('''
        INSERT INTO task_assignments (agent_id, task, assigned_by)
        VALUES (?, ?, ?)
    ''', (agent_id, task, assigned_by))

    # Notify the agent
    cursor.execute('''
        INSERT INTO notifications (agent_id, message, is_broadcast)
        VALUES (?, ?, 0)
    ''', (agent_id, f"📋 New task assigned by {assigned_by}: {task}"))

    conn.commit()
    conn.close()
    print(f"📋 Task assigned to {agent_id} by {assigned_by}")

def list_tasks(agent_id=None):
    """List tasks for an agent or all agents"""
    conn = get_connection()
    cursor = conn.cursor()

    if agent_id:
        cursor.execute('''
            SELECT id, task, assigned_by, status, created_at
            FROM task_assignments
            WHERE agent_id = ? AND status != 'completed'
            ORDER BY created_at DESC
        ''', (agent_id,))
        tasks = cursor.fetchall()

        print(f"\n📋 Tasks for {agent_id}:")
        for task_id, task, assigned_by, status, created_at in tasks:
            print(f"  [{status.upper()}] #{task_id}: {task} (from {assigned_by})")
    else:
        cursor.execute('''
            SELECT agent_id, id, task, assigned_by, status
            FROM task_assignments
            WHERE status != 'completed'
            ORDER BY agent_id
        ''')
        tasks = cursor.fetchall()

        print("\n📋 All pending tasks:")
        current_agent = None
        for agent, task_id, task, assigned_by, status in tasks:
            if agent != current_agent:
                print(f"\n  {agent}:")
                current_agent = agent
            print(f"    [{status.upper()}] #{task_id}: {task}")

    conn.close()

def complete_task(task_id):
    """Mark a task as completed"""
    conn = get_connection()
    cursor = conn.cursor()

    cursor.execute('''
        UPDATE task_assignments
        SET status = 'completed', completed_at = CURRENT_TIMESTAMP
        WHERE id = ?
    ''', (task_id,))

    if cursor.rowcount > 0:
        print(f"✅ Task #{task_id} marked as completed")
    else:
        print(f"❌ Task #{task_id} not found")

    conn.commit()
    conn.close()
# ============ NOTIFICATIONS ============

def broadcast_message(from_agent, message):
    """@notify - Broadcast message to all agents"""
    conn = get_connection()
    cursor = conn.cursor()

    cursor.execute("SELECT id FROM agents WHERE id != ?", (from_agent,))
    other_agents = cursor.fetchall()

    for (agent_id,) in other_agents:
        cursor.execute('''
            INSERT INTO notifications (agent_id, message, is_broadcast)
            VALUES (?, ?, 1)
        ''', (agent_id, f"📢 Broadcast from {from_agent}: {message}"))

    conn.commit()
    conn.close()
    print(f"📢 Broadcast sent to {len(other_agents)} agents")

def get_notifications(agent_id, unread_only=False):
    """Get notifications for an agent"""
    conn = get_connection()
    cursor = conn.cursor()

    if unread_only:
        cursor.execute('''
            SELECT id, message, is_broadcast, created_at
            FROM notifications
            WHERE agent_id = ? AND is_read = 0
            ORDER BY created_at DESC
        ''', (agent_id,))
    else:
        cursor.execute('''
            SELECT id, message, is_broadcast, created_at
            FROM notifications
            WHERE agent_id = ?
            ORDER BY created_at DESC
            LIMIT 10
        ''', (agent_id,))

    notifications = cursor.fetchall()

    print(f"\n🔔 Notifications for {agent_id}:")
    for notif_id, message, is_broadcast, created_at in notifications:
        prefix = "📢" if is_broadcast else "🔔"
        print(f"  {prefix} {message}")
        print(f"     {created_at}")

    # Mark as read
    cursor.execute('''
        UPDATE notifications SET is_read = 1
        WHERE agent_id = ? AND is_read = 0
    ''', (agent_id,))

    conn.commit()
    conn.close()
# ============ POLLS ============

def start_poll(agent_id, question):
    """@poll - Start a quick poll"""
    conn = get_connection()
    cursor = conn.cursor()

    cursor.execute('''
        INSERT INTO polls (question, created_by, status)
        VALUES (?, ?, 'active')
    ''', (question, agent_id))
    poll_id = cursor.lastrowid

    # Notify all agents
    cursor.execute("SELECT id FROM agents WHERE id != ?", (agent_id,))
    other_agents = cursor.fetchall()
    for (other_id,) in other_agents:
        cursor.execute('''
            INSERT INTO notifications (agent_id, message, is_broadcast)
            VALUES (?, ?, 0)
        ''', (other_id, f"🗳️ New poll from {agent_id}: '{question}' (Poll #{poll_id}). Vote with: @vote {poll_id} <yes/no/maybe>"))

    conn.commit()
    conn.close()
    print(f"🗳️ Poll #{poll_id} started: {question}")
    return poll_id

def vote_poll(agent_id, poll_id, vote):
    """@vote - Vote on a poll"""
    conn = get_connection()
    cursor = conn.cursor()

    cursor.execute('''
        INSERT OR REPLACE INTO poll_votes (poll_id, agent_id, vote)
        VALUES (?, ?, ?)
    ''', (poll_id, agent_id, vote))

    conn.commit()
    conn.close()
    print(f"✅ Vote recorded for poll #{poll_id}: {vote}")

def show_poll_results(poll_id):
    """Show poll results"""
    conn = get_connection()
    cursor = conn.cursor()

    cursor.execute("SELECT question FROM polls WHERE id = ?", (poll_id,))
    result = cursor.fetchone()
    if not result:
        print(f"❌ Poll #{poll_id} not found")
        conn.close()
        return

    question = result[0]

    cursor.execute('''
        SELECT vote, COUNT(*) FROM poll_votes
        WHERE poll_id = ?
        GROUP BY vote
    ''', (poll_id,))
    votes = dict(cursor.fetchall())

    cursor.execute('''
        SELECT agent_id, vote FROM poll_votes
        WHERE poll_id = ?
    ''', (poll_id,))
    details = cursor.fetchall()

    conn.close()

    print(f"\n🗳️ Poll #{poll_id}: {question}")
    print("Results:")
    for vote, count in votes.items():
        print(f"  {vote}: {count}")
    print("\nVotes:")
    for agent, vote in details:
        print(f"  {agent}: {vote}")
# ============ HANDOVER ============

def request_handover(from_agent, to_agent, context=""):
    """@handover - Request task handover to another agent"""
    conn = get_connection()
    cursor = conn.cursor()

    # Get current task of from_agent
    cursor.execute("SELECT task FROM agents WHERE id = ?", (from_agent,))
    result = cursor.fetchone()
    current_task = result[0] if result else "current task"

    # Create handover notification
    message = f"🔄 Handover request from {from_agent}: '{current_task}'"
    if context:
        message += f" | Context: {context}"

    cursor.execute('''
        INSERT INTO notifications (agent_id, message, is_broadcast)
        VALUES (?, ?, 0)
    ''', (to_agent, message))

    # Update from_agent status
    cursor.execute('''
        UPDATE agents SET status = 'idle', task = NULL
        WHERE id = ?
    ''', (from_agent,))

    conn.commit()
    conn.close()
    print(f"🔄 Handover requested: {from_agent} -> {to_agent}")

def switch_to(agent_id, to_agent):
    """@switch - Request to switch to specific agent"""
    conn = get_connection()
    cursor = conn.cursor()

    message = f"🔄 {agent_id} requests to switch to you for continuation"

    cursor.execute('''
        INSERT INTO notifications (agent_id, message, is_broadcast)
        VALUES (?, ?, 0)
    ''', (to_agent, message))

    conn.commit()
    conn.close()
    print(f"🔄 Switch request sent: {agent_id} -> {to_agent}")
# ============ STATUS & MONITORING ============

def get_status():
    """Enhanced status view"""
    conn = get_connection()
    cursor = conn.cursor()

    print("\n" + "="*60)
    print("🛰️ ACTIVE AGENTS")
    print("="*60)

    for row in cursor.execute('''
        SELECT name, task, status, current_research, last_seen
        FROM agents
        ORDER BY last_seen DESC
    '''):
        status_emoji = {
            'active': '🟢',
            'idle': '⚪',
            'researching': '🔬',
            'busy': '🔴'
        }.get(row[2], '⚪')

        research_info = f" | Research: {row[3]}" if row[3] else ""
        print(f"{status_emoji} [{row[2].upper()}] {row[0]}: {row[1]}{research_info}")
        print(f"   Last seen: {row[4]}")

    print("\n" + "="*60)
    print("🔬 ACTIVE RESEARCH TOPICS")
    print("="*60)

    for row in cursor.execute('''
        SELECT t.topic, t.initiated_by, t.created_at,
               (SELECT COUNT(*) FROM agent_research_participation WHERE topic = t.topic) as participants,
               (SELECT COUNT(*) FROM research_log WHERE topic = t.topic) as findings
        FROM research_topics t
        WHERE t.status = 'active'
        ORDER BY t.created_at DESC
    '''):
        print(f"🔬 {row[0]}")
        print(f"   Initiated by: {row[1]} | Participants: {row[3]} | Findings: {row[4]}")
        print(f"   Started: {row[2]}")

    print("\n" + "="*60)
    print("🔒 FILE LOCKS")
    print("="*60)

    locks = list(cursor.execute('''
        SELECT file_path, agent_id, lock_type
        FROM file_locks
        ORDER BY timestamp DESC
    '''))

    if locks:
        for file_path, agent_id, lock_type in locks:
            lock_emoji = '🔒' if lock_type == 'write' else '🔍'
            print(f"{lock_emoji} {file_path} -> {agent_id} ({lock_type})")
    else:
        print("  No active locks")

    print("\n" + "="*60)
    print("📋 PENDING TASKS")
    print("="*60)

    for row in cursor.execute('''
        SELECT agent_id, COUNT(*)
        FROM task_assignments
        WHERE status = 'pending'
        GROUP BY agent_id
    '''):
        print(f"  {row[0]}: {row[1]} pending tasks")

    cursor.execute("SELECT value FROM global_settings WHERE key = 'mode'")
    mode = cursor.fetchone()[0]
    print(f"\n🌍 Global Mode: {mode.upper()}")
    print("="*60)

    conn.close()

def show_research_topic(topic):
    """Show detailed view of a research topic"""
    conn = get_connection()
    cursor = conn.cursor()

    cursor.execute("SELECT status, initiated_by, created_at FROM research_topics WHERE topic = ?", (topic,))
    result = cursor.fetchone()
    if not result:
        print(f"❌ Topic '{topic}' not found")
        conn.close()
        return

    status, initiated_by, created_at = result

    print(f"\n🔬 Research: {topic}")
    print(f"Status: {status} | Initiated by: {initiated_by} | Started: {created_at}")

    cursor.execute('''
        SELECT agent_id FROM agent_research_participation WHERE topic = ?
    ''', (topic,))
    participants = [row[0] for row in cursor.fetchall()]
    print(f"Participants: {', '.join(participants)}")

    print("\n--- Findings ---")
    cursor.execute('''
        SELECT agent_id, content, finding_type, created_at
        FROM research_log
        WHERE topic = ?
        ORDER BY created_at
    ''', (topic,))

    for agent_id, content, finding_type, created_at in cursor.fetchall():
        emoji = {'note': '📝', 'finding': '🔍', 'concern': '⚠️', 'solution': '✅'}.get(finding_type, '📝')
        print(f"\n{emoji} [{finding_type.upper()}] {agent_id} ({created_at})")
        print(f"  {content}")

    conn.close()
# ============ MAIN CLI ============

def lock_file(agent_id, file_path, lock_type="write"):
    # File-lock helper used by the "lock" command below; a PRIMARY KEY
    # conflict on file_path means another agent already holds the lock.
    conn = get_connection()
    cursor = conn.cursor()
    try:
        cursor.execute("INSERT INTO file_locks (file_path, agent_id, lock_type) VALUES (?, ?, ?)",
                       (file_path, agent_id, lock_type))
        conn.commit()
        print(f"🔒 Locked {file_path} ({lock_type})")
    except sqlite3.IntegrityError:
        print(f"❌ Lock conflict on {file_path}")
        sys.exit(1)
    finally:
        conn.close()

def unlock_file(agent_id, file_path):
    conn = get_connection()
    cursor = conn.cursor()
    cursor.execute("DELETE FROM file_locks WHERE file_path = ? AND agent_id = ?", (file_path, agent_id))
    conn.commit()
    conn.close()
    print(f"🔓 Unlocked {file_path}")

if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="🤖 Agent Sync v2.0 - Multi-Agent Cooperation Protocol",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
QUICK COMMANDS:
  @research <topic>            Start joint research
  @join <topic>                Join active research
  @find <topic> <content>      Post finding to research
  @consensus <topic>           Generate consensus document
  @assign <agent> <task>       Assign task to agent
  @notify <message>            Broadcast to all agents
  @handover <agent> [context]  Handover task
  @switch <agent>              Request switch to agent
  @poll <question>             Start a poll
  @vote <poll_id> <vote>       Vote on poll
  @tasks [agent]               List tasks
  @complete <task_id>          Complete task
  @notifications [agent]       Check notifications
  @topic <topic>               View research topic details

EXAMPLES:
  python3 agent_sync_v2.py research claude-code "API Design"
  python3 agent_sync_v2.py find copilot "API Design" "Use REST instead of GraphQL"
  python3 agent_sync_v2.py assign claude-code copilot "Implement REST endpoints"
  python3 agent_sync_v2.py consensus "API Design"
"""
    )
    subparsers = parser.add_subparsers(dest="command", help="Command to execute")

    # Legacy commands
    subparsers.add_parser("init", help="Initialize the database")

    reg = subparsers.add_parser("register", help="Register an agent")
    reg.add_argument("id", help="Agent ID")
    reg.add_argument("name", help="Agent name")
    reg.add_argument("task", help="Current task")
    reg.add_argument("--status", default="idle", help="Agent status")

    lock = subparsers.add_parser("lock", help="Lock a file")
    lock.add_argument("id", help="Agent ID")
    lock.add_argument("path", help="File path")
    lock.add_argument("--type", default="write", choices=["write", "research"], help="Lock type")

    unlock = subparsers.add_parser("unlock", help="Unlock a file")
    unlock.add_argument("id", help="Agent ID")
    unlock.add_argument("path", help="File path")

    subparsers.add_parser("status", help="Show status dashboard")

    # New v2.0 commands
    research = subparsers.add_parser("research", help="@research - Start joint research topic")
    research.add_argument("agent_id", help="Agent initiating research")
    research.add_argument("topic", help="Research topic")

    join = subparsers.add_parser("join", help="@join - Join active research")
    join.add_argument("agent_id", help="Agent joining")
    join.add_argument("topic", help="Topic to join")

    find = subparsers.add_parser("find", help="@find - Post finding to research")
    find.add_argument("agent_id", help="Agent posting finding")
    find.add_argument("topic", help="Research topic")
    find.add_argument("content", help="Finding content")
    find.add_argument("--type", default="note", choices=["note", "finding", "concern", "solution"], help="Type of finding")

    consensus = subparsers.add_parser("consensus", help="@consensus - Generate consensus document")
    consensus.add_argument("topic", help="Topic to generate consensus for")

    assign = subparsers.add_parser("assign", help="@assign - Assign task to agent")
    assign.add_argument("from_agent", help="Agent assigning the task")
    assign.add_argument("to_agent", help="Agent to assign task to")
    assign.add_argument("task", help="Task description")

    tasks = subparsers.add_parser("tasks", help="@tasks - List pending tasks")
    tasks.add_argument("--agent", help="Filter by agent ID")

    complete = subparsers.add_parser("complete", help="@complete - Mark task as completed")
    complete.add_argument("task_id", type=int, help="Task ID to complete")

    notify = subparsers.add_parser("notify", help="@notify - Broadcast message to all agents")
    notify.add_argument("from_agent", help="Agent sending notification")
    notify.add_argument("message", help="Message to broadcast")

    handover = subparsers.add_parser("handover", help="@handover - Handover task to another agent")
    handover.add_argument("from_agent", help="Current agent")
    handover.add_argument("to_agent", help="Agent to handover to")
    handover.add_argument("--context", default="", help="Handover context")

    switch = subparsers.add_parser("switch", help="@switch - Request switch to specific agent")
    switch.add_argument("from_agent", help="Current agent")
    switch.add_argument("to_agent", help="Agent to switch to")

    poll = subparsers.add_parser("poll", help="@poll - Start a quick poll")
    poll.add_argument("agent_id", help="Agent starting poll")
    poll.add_argument("question", help="Poll question")

    vote = subparsers.add_parser("vote", help="@vote - Vote on a poll")
    vote.add_argument("agent_id", help="Agent voting")
    vote.add_argument("poll_id", type=int, help="Poll ID")
    vote.add_argument("vote_choice", choices=["yes", "no", "maybe"], help="Your vote")

    poll_results = subparsers.add_parser("poll-results", help="Show poll results")
    poll_results.add_argument("poll_id", type=int, help="Poll ID")

    notifications = subparsers.add_parser("notifications", help="@notifications - Check notifications")
    notifications.add_argument("agent_id", help="Agent to check notifications for")
    notifications.add_argument("--unread", action="store_true", help="Show only unread")

    topic = subparsers.add_parser("topic", help="@topic - View research topic details")
    topic.add_argument("topic_name", help="Topic name")

    args = parser.parse_args()

    if args.command == "init":
        init_db()
    elif args.command == "register":
        register_agent(args.id, args.name, args.task, args.status)
    elif args.command == "lock":
        lock_file(args.id, args.path, args.type)
    elif args.command == "unlock":
        unlock_file(args.id, args.path)
    elif args.command == "status":
        get_status()
    elif args.command == "research":
        start_research(args.agent_id, args.topic)
    elif args.command == "join":
        join_research(args.agent_id, args.topic)
    elif args.command == "find":
        post_finding(args.agent_id, args.topic, args.content, args.type)
    elif args.command == "consensus":
        generate_consensus(args.topic)
    elif args.command == "assign":
        assign_task(args.from_agent, args.to_agent, args.task)
    elif args.command == "tasks":
        list_tasks(args.agent)
    elif args.command == "complete":
        complete_task(args.task_id)
    elif args.command == "notify":
        broadcast_message(args.from_agent, args.message)
    elif args.command == "handover":
        request_handover(args.from_agent, args.to_agent, args.context)
    elif args.command == "switch":
        switch_to(args.from_agent, args.to_agent)
    elif args.command == "poll":
        start_poll(args.agent_id, args.question)
    elif args.command == "vote":
        vote_poll(args.agent_id, args.poll_id, args.vote_choice)
    elif args.command == "poll-results":
        show_poll_results(args.poll_id)
    elif args.command == "notifications":
        get_notifications(args.agent_id, args.unread)
    elif args.command == "topic":
        show_research_topic(args.topic_name)
    else:
        parser.print_help()
@@ -11,9 +11,9 @@ Usage:

To get started:
1. Create .env file with your OpenWebUI API key:
   echo "api_key=sk-your-key-here" > .env

2. Make sure OpenWebUI is running on localhost:3000

3. Run this script:
   python deploy_async_context_compression.py
"""
@@ -34,10 +34,10 @@ def main():
    print("🚀 Deploying Async Context Compression Filter Plugin")
    print("=" * 70)
    print()

    # Deploy the filter
    success = deploy_filter("async-context-compression")

    if success:
        print()
        print("=" * 70)
@@ -63,7 +63,7 @@ def main():
        print("  • Check network connectivity")
        print()
        return 1

    return 0
@@ -49,53 +49,78 @@ def _load_api_key() -> str:
    raise ValueError("api_key not found in .env file.")


def _load_openwebui_base_url() -> str:
    """Load OpenWebUI base URL from .env file or environment.

    Checks in order:
    1. OPENWEBUI_BASE_URL in .env
    2. OPENWEBUI_BASE_URL environment variable
    3. Default to http://localhost:3000
    """
    if ENV_FILE.exists():
        for line in ENV_FILE.read_text(encoding="utf-8").splitlines():
            line = line.strip()
            if line.startswith("OPENWEBUI_BASE_URL="):
                url = line.split("=", 1)[1].strip()
                if url:
                    return url

    # Try environment variable
    url = os.environ.get("OPENWEBUI_BASE_URL")
    if url:
        return url

    # Default
    return "http://localhost:3000"

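The lookup order in `_load_openwebui_base_url` (.env entry, then environment variable, then default) can be checked with a standalone sketch; `resolve_base_url` is a hypothetical name, not part of the script:

```python
import os

def resolve_base_url(env_text: str = "") -> str:
    # Mirrors the lookup order above: .env entry, then env var, then default.
    for line in env_text.splitlines():
        line = line.strip()
        if line.startswith("OPENWEBUI_BASE_URL="):
            url = line.split("=", 1)[1].strip()
            if url:
                return url
    return os.environ.get("OPENWEBUI_BASE_URL") or "http://localhost:3000"

print(resolve_base_url("OPENWEBUI_BASE_URL=http://example:8080"))  # → http://example:8080
```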
def _find_filter_file(filter_name: str) -> Optional[Path]:
    """Find the main Python file for a filter.

    Args:
        filter_name: Directory name of the filter (e.g., 'async-context-compression')

    Returns:
        Path to the main Python file, or None if not found.
    """
    filter_dir = FILTERS_DIR / filter_name
    if not filter_dir.exists():
        return None

    # Try to find a .py file matching the filter name
    py_files = list(filter_dir.glob("*.py"))

    # Prefer a file with the filter name (with hyphens converted to underscores)
    preferred_name = filter_name.replace("-", "_") + ".py"
    for py_file in py_files:
        if py_file.name == preferred_name:
            return py_file

    # Otherwise, return the first .py file (usually the only one)
    if py_files:
        return py_files[0]

    return None

def _extract_metadata(content: str) -> Dict[str, Any]:
    """Extract metadata from the plugin docstring.

    Args:
        content: Python file content

    Returns:
        Dictionary with extracted metadata (title, author, version, etc.)
    """
    metadata = {}

    # Extract docstring
    match = re.search(r'"""(.*?)"""', content, re.DOTALL)
    if not match:
        return metadata

    docstring = match.group(1)

    # Extract key-value pairs
    for line in docstring.split("\n"):
        line = line.strip()
@@ -104,7 +129,7 @@ def _extract_metadata(content: str) -> Dict[str, Any]:
            key = parts[0].strip().lower()
            value = parts[1].strip()
            metadata[key] = value

    return metadata

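The key/value scan in `_extract_metadata` can be exercised in isolation; this sketch assumes the elided lines of the hunk split each docstring line on the first colon:

```python
import re
from typing import Any, Dict

def extract_metadata(content: str) -> Dict[str, Any]:
    # Scan the first docstring for "key: value" lines, as _extract_metadata does.
    metadata: Dict[str, Any] = {}
    match = re.search(r'"""(.*?)"""', content, re.DOTALL)
    if not match:
        return metadata
    for line in match.group(1).split("\n"):
        line = line.strip()
        if ":" in line:
            key, value = line.split(":", 1)
            metadata[key.strip().lower()] = value.strip()
    return metadata

sample = '"""\ntitle: Demo Filter\nversion: 1.2.0\n"""\nclass Filter:\n    pass\n'
print(extract_metadata(sample))  # → {'title': 'Demo Filter', 'version': '1.2.0'}
```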
@@ -112,13 +137,13 @@ def _build_filter_payload(
    filter_name: str, file_path: Path, content: str, metadata: Dict[str, Any]
) -> Dict[str, Any]:
    """Build the payload for the filter update/create API.

    Args:
        filter_name: Directory name of the filter
        file_path: Path to the plugin file
        content: File content
        metadata: Extracted metadata

    Returns:
        Payload dictionary ready for API submission
    """
@@ -126,12 +151,14 @@ def _build_filter_payload(
    filter_id = metadata.get("id", filter_name).replace("-", "_")
    title = metadata.get("title", filter_name)
    author = metadata.get("author", "Fu-Jie")
    author_url = metadata.get("author_url", "https://github.com/Fu-Jie/openwebui-extensions")
    author_url = metadata.get(
        "author_url", "https://github.com/Fu-Jie/openwebui-extensions"
    )
    funding_url = metadata.get("funding_url", "https://github.com/open-webui")
    description = metadata.get("description", f"Filter plugin: {title}")
    version = metadata.get("version", "1.0.0")
    openwebui_id = metadata.get("openwebui_id", "")

    payload = {
        "id": filter_id,
        "name": title,
@@ -150,20 +177,20 @@ def _build_filter_payload(
        },
        "content": content,
    }

    # Add openwebui_id if available
    if openwebui_id:
        payload["meta"]["manifest"]["openwebui_id"] = openwebui_id

    return payload


def deploy_filter(filter_name: str = DEFAULT_FILTER) -> bool:
    """Deploy a filter plugin to OpenWebUI.

    Args:
        filter_name: Directory name of the filter to deploy

    Returns:
        True if successful, False otherwise
    """
@@ -191,7 +218,7 @@ def deploy_filter(filter_name: str = DEFAULT_FILTER) -> bool:

    content = file_path.read_text(encoding="utf-8")
    metadata = _extract_metadata(content)

    if not metadata:
        print(f"[ERROR] Could not extract metadata from {file_path}")
        return False
@@ -211,12 +238,14 @@ def deploy_filter(filter_name: str = DEFAULT_FILTER) -> bool:
    }

    # 6. Send update request
    update_url = "http://localhost:3000/api/v1/functions/id/{}/update".format(filter_id)
    create_url = "http://localhost:3000/api/v1/functions/create"
    base_url = _load_openwebui_base_url()
    update_url = "{}/api/v1/functions/id/{}/update".format(base_url, filter_id)
    create_url = "{}/api/v1/functions/create".format(base_url)

    print(f"📦 Deploying filter '{title}' (version {version})...")
    print(f"  File: {file_path}")
    print(f"  Target: {base_url}")

    try:
        # Try update first
        response = requests.post(
@@ -225,7 +254,7 @@ def deploy_filter(filter_name: str = DEFAULT_FILTER) -> bool:
            data=json.dumps(payload),
            timeout=10,
        )

        if response.status_code == 200:
            print(f"✅ Successfully updated '{title}' filter!")
            return True
@@ -234,7 +263,7 @@ def deploy_filter(filter_name: str = DEFAULT_FILTER) -> bool:
            f"⚠️ Update failed with status {response.status_code}, "
            "attempting to create instead..."
        )

        # Try create if update fails
        res_create = requests.post(
            create_url,
@@ -242,23 +271,24 @@ def deploy_filter(filter_name: str = DEFAULT_FILTER) -> bool:
            data=json.dumps(payload),
            timeout=10,
        )

        if res_create.status_code == 200:
            print(f"✅ Successfully created '{title}' filter!")
            return True
        else:
            print(f"❌ Failed to update or create. Status: {res_create.status_code}")
            print(
                f"❌ Failed to update or create. Status: {res_create.status_code}"
            )
            try:
                error_msg = res_create.json()
                print(f"  Error: {error_msg}")
            except:
                print(f"  Response: {res_create.text[:500]}")
            return False

    except requests.exceptions.ConnectionError:
        print(
            "❌ Connection error: Could not reach OpenWebUI at localhost:3000"
        )
        base_url = _load_openwebui_base_url()
        print(f"❌ Connection error: Could not reach OpenWebUI at {base_url}")
        print("  Make sure OpenWebUI is running and accessible.")
        return False
    except requests.exceptions.Timeout:
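The update-then-create flow in `deploy_filter` reduces to a small pattern. This sketch abstracts the HTTP call behind a `post` callable so it needs no server; `upsert` and its signature are illustrative, not part of the script:

```python
import json

def upsert(post, base_url, function_id, payload):
    # Try the update endpoint first; fall back to create on any non-200 status.
    update_url = f"{base_url}/api/v1/functions/id/{function_id}/update"
    create_url = f"{base_url}/api/v1/functions/create"
    body = json.dumps(payload)
    if post(update_url, body) == 200:
        return "updated"
    return "created" if post(create_url, body) == 200 else "failed"

# Simulate a server that 404s on update (function not yet installed) but accepts create.
fake_post = lambda url, body: 200 if url.endswith("/create") else 404
print(upsert(fake_post, "http://localhost:3000", "demo_filter", {}))  # → created
```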
@@ -272,16 +302,20 @@ def deploy_filter(filter_name: str = DEFAULT_FILTER) -> bool:
def list_filters() -> None:
    """List all available filters."""
    print("📋 Available filters:")
    filters = [d.name for d in FILTERS_DIR.iterdir() if d.is_dir() and not d.name.startswith("_")]
    filters = [
        d.name
        for d in FILTERS_DIR.iterdir()
        if d.is_dir() and not d.name.startswith("_")
    ]

    if not filters:
        print("  (No filters found)")
        return

    for filter_name in sorted(filters):
        filter_dir = FILTERS_DIR / filter_name
        py_file = _find_filter_file(filter_name)

        if py_file:
            content = py_file.read_text(encoding="utf-8")
            metadata = _extract_metadata(content)
@@ -76,52 +76,51 @@ def _get_base_url() -> str:

    if not base_url:
        raise ValueError(
            f"Missing url. Please create {ENV_FILE} with: "
            "url=http://localhost:3000"
            f"Missing url. Please create {ENV_FILE} with: " "url=http://localhost:3000"
        )
    return base_url.rstrip("/")


def _find_tool_file(tool_name: str) -> Optional[Path]:
    """Find the main Python file for a tool.

    Args:
        tool_name: Directory name of the tool (e.g., 'openwebui-skills-manager')

    Returns:
        Path to the main Python file, or None if not found.
    """
    tool_dir = TOOLS_DIR / tool_name
    if not tool_dir.exists():
        return None

    # Try to find a .py file matching the tool name
    py_files = list(tool_dir.glob("*.py"))

    # Prefer a file with the tool name (with hyphens converted to underscores)
    preferred_name = tool_name.replace("-", "_") + ".py"
    for py_file in py_files:
        if py_file.name == preferred_name:
            return py_file

    # Otherwise, return the first .py file (usually the only one)
    if py_files:
        return py_files[0]

    return None

def _extract_metadata(content: str) -> Dict[str, Any]:
    """Extract metadata from the plugin docstring."""
    metadata = {}

    # Extract docstring
    match = re.search(r'"""(.*?)"""', content, re.DOTALL)
    if not match:
        return metadata

    docstring = match.group(1)

    # Extract key-value pairs
    for line in docstring.split("\n"):
        line = line.strip()
@@ -130,7 +129,7 @@ def _extract_metadata(content: str) -> Dict[str, Any]:
            key = parts[0].strip().lower()
            value = parts[1].strip()
            metadata[key] = value

    return metadata


@@ -141,12 +140,14 @@ def _build_tool_payload(
    tool_id = metadata.get("id", tool_name).replace("-", "_")
    title = metadata.get("title", tool_name)
    author = metadata.get("author", "Fu-Jie")
    author_url = metadata.get("author_url", "https://github.com/Fu-Jie/openwebui-extensions")
    author_url = metadata.get(
        "author_url", "https://github.com/Fu-Jie/openwebui-extensions"
    )
    funding_url = metadata.get("funding_url", "https://github.com/open-webui")
    description = metadata.get("description", f"Tool plugin: {title}")
    version = metadata.get("version", "1.0.0")
    openwebui_id = metadata.get("openwebui_id", "")

    payload = {
        "id": tool_id,
        "name": title,
@@ -165,20 +166,20 @@ def _build_tool_payload(
        },
        "content": content,
    }

    # Add openwebui_id if available
    if openwebui_id:
        payload["meta"]["manifest"]["openwebui_id"] = openwebui_id

    return payload

def deploy_tool(tool_name: str = DEFAULT_TOOL) -> bool:
    """Deploy a tool plugin to OpenWebUI.

    Args:
        tool_name: Directory name of the tool to deploy

    Returns:
        True if successful, False otherwise
    """
@@ -207,7 +208,7 @@ def deploy_tool(tool_name: str = DEFAULT_TOOL) -> bool:

    content = file_path.read_text(encoding="utf-8")
    metadata = _extract_metadata(content)

    if not metadata:
        print(f"[ERROR] Could not extract metadata from {file_path}")
        return False
@@ -229,10 +230,10 @@ def deploy_tool(tool_name: str = DEFAULT_TOOL) -> bool:
    # 6. Send update request through the native tool endpoints
    update_url = f"{base_url}/api/v1/tools/id/{tool_id}/update"
    create_url = f"{base_url}/api/v1/tools/create"

    print(f"📦 Deploying tool '{title}' (version {version})...")
    print(f"  File: {file_path}")

    try:
        # Try update first
        response = requests.post(
@@ -241,7 +242,7 @@ def deploy_tool(tool_name: str = DEFAULT_TOOL) -> bool:
            data=json.dumps(payload),
            timeout=10,
        )

        if response.status_code == 200:
            print(f"✅ Successfully updated '{title}' tool!")
            return True
@@ -250,7 +251,7 @@ def deploy_tool(tool_name: str = DEFAULT_TOOL) -> bool:
            f"⚠️ Update failed with status {response.status_code}, "
            "attempting to create instead..."
        )

        # Try create if update fails
        res_create = requests.post(
            create_url,
@@ -258,23 +259,23 @@ def deploy_tool(tool_name: str = DEFAULT_TOOL) -> bool:
            data=json.dumps(payload),
            timeout=10,
        )

        if res_create.status_code == 200:
            print(f"✅ Successfully created '{title}' tool!")
            return True
        else:
            print(f"❌ Failed to update or create. Status: {res_create.status_code}")
            print(
                f"❌ Failed to update or create. Status: {res_create.status_code}"
            )
            try:
                error_msg = res_create.json()
                print(f"  Error: {error_msg}")
            except:
                print(f"  Response: {res_create.text[:500]}")
            return False

    except requests.exceptions.ConnectionError:
        print(
            "❌ Connection error: Could not reach OpenWebUI at {base_url}"
        )
        print(f"❌ Connection error: Could not reach OpenWebUI at {base_url}")
        print("  Make sure OpenWebUI is running and accessible.")
        return False
    except requests.exceptions.Timeout:
@@ -288,16 +289,18 @@ def deploy_tool(tool_name: str = DEFAULT_TOOL) -> bool:
def list_tools() -> None:
    """List all available tools."""
    print("📋 Available tools:")
    tools = [d.name for d in TOOLS_DIR.iterdir() if d.is_dir() and not d.name.startswith("_")]
    tools = [
        d.name for d in TOOLS_DIR.iterdir() if d.is_dir() and not d.name.startswith("_")
    ]

    if not tools:
        print("  (No tools found)")
        return

    for tool_name in sorted(tools):
        tool_dir = TOOLS_DIR / tool_name
        py_file = _find_tool_file(tool_name)

        if py_file:
            content = py_file.read_text(encoding="utf-8")
            metadata = _extract_metadata(content)
@@ -187,9 +187,7 @@ def build_payload(candidate: PluginCandidate) -> Dict[str, object]:
    manifest = dict(candidate.metadata)
    manifest.setdefault("title", candidate.title)
    manifest.setdefault("author", "Fu-Jie")
    manifest.setdefault(
        "author_url", "https://github.com/Fu-Jie/openwebui-extensions"
    )
    manifest.setdefault("author_url", "https://github.com/Fu-Jie/openwebui-extensions")
    manifest.setdefault("funding_url", "https://github.com/open-webui")
    manifest.setdefault(
        "description", f"{candidate.plugin_type.title()} plugin: {candidate.title}"
@@ -233,7 +231,9 @@ def build_api_urls(base_url: str, candidate: PluginCandidate) -> Tuple[str, str]
    )


def discover_plugins(plugin_types: Sequence[str]) -> Tuple[List[PluginCandidate], List[Tuple[Path, str]]]:
def discover_plugins(
    plugin_types: Sequence[str],
) -> Tuple[List[PluginCandidate], List[Tuple[Path, str]]]:
    candidates: List[PluginCandidate] = []
    skipped: List[Tuple[Path, str]] = []

@@ -344,7 +344,9 @@ def print_skipped_summary(skipped: Sequence[Tuple[Path, str]]) -> None:
    for _, reason in skipped:
        counts[reason] = counts.get(reason, 0) + 1

    summary = ", ".join(f"{reason}: {count}" for reason, count in sorted(counts.items()))
    summary = ", ".join(
        f"{reason}: {count}" for reason, count in sorted(counts.items())
    )
    print(f"Skipped {len(skipped)} files ({summary}).")


@@ -421,19 +423,19 @@ def main(argv: Optional[Sequence[str]] = None) -> int:
        failed_candidates.append(candidate)
        print(f"  [FAILED] {message}")

    print(f"\n" + "="*80)
    print(f"\n" + "=" * 80)
    print(
        f"Finished: {success_count}/{len(candidates)} plugins installed successfully."
    )

    if failed_candidates:
        print(f"\n❌ {len(failed_candidates)} plugin(s) failed to install:")
        for candidate in failed_candidates:
            print(f"  • {candidate.title} ({candidate.plugin_type})")
            print(f"    → Check the error message above")
        print()

    print("="*80)
    print("=" * 80)
    return 0 if success_count == len(candidates) else 1
scripts/macp (Executable file, 110 lines)
@@ -0,0 +1,110 @@
#!/bin/bash
# 🤖 MACP Quick Command v2.1 (Unified Edition)

set -euo pipefail

AGENT_ID_FILE=".agent/current_agent"

resolve_agent_id() {
    if [ -n "${MACP_AGENT_ID:-}" ]; then
        echo "$MACP_AGENT_ID"
        return
    fi

    if [ -f "$AGENT_ID_FILE" ]; then
        cat "$AGENT_ID_FILE"
        return
    fi

    echo "Error: MACP agent identity is not set. Export MACP_AGENT_ID or create .agent/current_agent." >&2
    exit 1
}

resolve_agent_name() {
    python3 - <<'PY2'
import os
import sqlite3
import sys

agent_id = os.environ.get("MACP_AGENT_ID", "").strip()
if not agent_id:
    path = os.path.join(os.getcwd(), ".agent", "current_agent")
    if os.path.exists(path):
        with open(path, "r", encoding="utf-8") as handle:
            agent_id = handle.read().strip()

db_path = os.path.join(os.getcwd(), ".agent", "agent_hub.db")
name = agent_id or "Agent"

if agent_id and os.path.exists(db_path):
    conn = sqlite3.connect(db_path)
    cur = conn.cursor()
    cur.execute("SELECT name FROM agents WHERE id = ?", (agent_id,))
    row = cur.fetchone()
    conn.close()
    if row and row[0]:
        name = row[0]

sys.stdout.write(name)
PY2
}

AGENT_ID="$(resolve_agent_id)"
export MACP_AGENT_ID="$AGENT_ID"
AGENT_NAME="$(resolve_agent_name)"

CMD="${1:-}"
if [ -z "$CMD" ]; then
    echo "Usage: ./scripts/macp [/status|/ping|/study|/broadcast|/summon|/handover|/note|/check|/resolve]" >&2
    exit 1
fi
shift

case "$CMD" in
    /study)
        TOPIC="$1"
        shift
        DESC="$*"
        if [ -n "$DESC" ]; then
            python3 scripts/agent_sync.py study "$AGENT_ID" "$TOPIC" --desc "$DESC"
        else
            python3 scripts/agent_sync.py study "$AGENT_ID" "$TOPIC"
        fi
        ;;
    /broadcast)
        python3 scripts/agent_sync.py broadcast "$AGENT_ID" manual "$*"
        ;;
    /summon)
        TO_AGENT="$1"
        shift
        python3 scripts/agent_sync.py assign "$AGENT_ID" "$TO_AGENT" "$*" --role worker --priority high
        ;;
    /handover)
        TO_AGENT="$1"
        shift
        python3 scripts/agent_sync.py assign "$AGENT_ID" "$TO_AGENT" "$*" --role worker
        python3 scripts/agent_sync.py register "$AGENT_ID" "$AGENT_NAME" "Idle"
        ;;
    /note)
        TOPIC="$1"
        shift
        python3 scripts/agent_sync.py note "$AGENT_ID" "$TOPIC" "$*" --type note
        ;;
    /check)
        python3 scripts/agent_sync.py check
        ;;
    /resolve)
        TOPIC="$1"
        shift
        python3 scripts/agent_sync.py resolve "$AGENT_ID" "$TOPIC" "$*"
        ;;
    /ping)
        python3 scripts/agent_sync.py status | grep "\["
        ;;
    /status)
        python3 scripts/agent_sync.py status
        ;;
    *)
        echo "Usage: ./scripts/macp [/status|/ping|/study|/broadcast|/summon|/handover|/note|/check|/resolve]"
        ;;
esac
@@ -277,12 +277,37 @@ class OpenWebUIStats:
        },
    }

    def _get_plugin_obj(self, post: dict) -> dict:
        """Extract the actual plugin object from post['data'] (handling different keys like function/tool/pipe)."""
        data = post.get("data", {}) or {}
        if not data:
            return {}

        # Priority 1: Use post['type'] as the key (standard behavior)
        post_type = post.get("type")
        if post_type and post_type in data and data[post_type]:
            return data[post_type]

        # Priority 2: Fallback to 'function' (most common for actions/filters/pipes)
        if "function" in data and data["function"]:
            return data["function"]

        # Priority 3: Try other known keys
        for k in ["tool", "pipe", "action", "filter", "prompt", "model"]:
            if k in data and data[k]:
                return data[k]

        # Priority 4: If there's only one key in data, assume that's the one
        if len(data) == 1:
            return list(data.values())[0] or {}

        return {}

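The four-step fallback in `_get_plugin_obj` can be exercised with a condensed standalone copy (a free function for illustration; the truthiness checks via `dict.get` behave like the `in`-plus-index tests above):

```python
def get_plugin_obj(post: dict) -> dict:
    # Priority: post['type'] key, then 'function', then other known keys,
    # then the sole value if data has exactly one key.
    data = post.get("data", {}) or {}
    if not data:
        return {}
    post_type = post.get("type")
    if post_type and data.get(post_type):
        return data[post_type]
    if data.get("function"):
        return data["function"]
    for k in ("tool", "pipe", "action", "filter", "prompt", "model"):
        if data.get(k):
            return data[k]
    if len(data) == 1:
        return list(data.values())[0] or {}
    return {}

print(get_plugin_obj({"type": "tool", "data": {"tool": {"id": "t1"}}}))  # → {'id': 't1'}
```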
    def _resolve_post_type(self, post: dict) -> str:
        """Resolve the post category type"""
        top_type = post.get("type")
        function_data = post.get("data", {}) or {}
        function_obj = function_data.get("function", {}) or {}
        meta = function_obj.get("meta", {}) or {}
        plugin_obj = self._get_plugin_obj(post)
        meta = plugin_obj.get("meta", {}) or {}
        manifest = meta.get("manifest", {}) or {}

        # Category identification priority:
@@ -292,17 +317,17 @@ class OpenWebUIStats:
        post_type = "unknown"
        if meta.get("type"):
            post_type = meta.get("type")
        elif function_obj.get("type"):
            post_type = function_obj.get("type")
        elif plugin_obj.get("type"):
            post_type = plugin_obj.get("type")
        elif top_type:
            post_type = top_type
        elif not meta and not function_obj:
        elif not meta and not plugin_obj:
            post_type = "post"

        post_type = self._normalize_post_type(post_type)

        # Unified and heuristic identification logic
        if post_type == "unknown" and function_obj:
        if post_type == "unknown" and plugin_obj:
            post_type = "action"

        if post_type == "action" or post_type == "unknown":
@@ -600,9 +625,8 @@ class OpenWebUIStats:
        for post in posts:
            post_type = self._resolve_post_type(post)

            function_data = post.get("data", {}) or {}
            function_obj = function_data.get("function", {}) or {}
            meta = function_obj.get("meta", {}) or {}
            plugin_obj = self._get_plugin_obj(post)
            meta = plugin_obj.get("meta", {}) or {}
            manifest = meta.get("manifest", {}) or {}

            # Accumulate statistics
@@ -615,13 +639,12 @@ class OpenWebUIStats:
            stats["total_saves"] += post.get("saveCount", 0)
            stats["total_comments"] += post.get("commentCount", 0)

            # Key: total views do not include non-downloadable types (e.g., post, review)
            if post_type in self.DOWNLOADABLE_TYPES or post_downloads > 0:
            # Key: only count views for posts with actual downloads (exclude post/review types)
            if post_type not in ("post", "review") and post_downloads > 0:
                stats["total_views"] += post_views

            if post_type not in stats["by_type"]:
                stats["by_type"][post_type] = 0
            stats["by_type"][post_type] += 1

            # Individual post information
            created_at = datetime.fromtimestamp(post.get("createdAt", 0))
@@ -9,14 +9,15 @@ local deployment are present and functional.
import sys
from pathlib import Path


def main():
    """Check all deployment tools are ready."""
    base_dir = Path(__file__).parent.parent

    print("\n" + "="*80)
    print("\n" + "=" * 80)
    print("✨ Async Context Compression Local Deployment Tools — Verification Status")
    print("="*80 + "\n")
    print("=" * 80 + "\n")

    files_to_check = {
        "🐍 Python Scripts": [
            "scripts/deploy_async_context_compression.py",
@@ -34,56 +35,56 @@ def main():
            "tests/scripts/test_deploy_filter.py",
        ],
    }

    all_exist = True

    for category, files in files_to_check.items():
        print(f"\n{category}:")
        print("-" * 80)

        for file_path in files:
            full_path = base_dir / file_path
            exists = full_path.exists()
            status = "✅" if exists else "❌"

            print(f"  {status} {file_path}")

            if exists and file_path.endswith(".py"):
                size = full_path.stat().st_size
                lines = len(full_path.read_text().split('\n'))
                lines = len(full_path.read_text().split("\n"))
                print(f"      └─ [{size} bytes, ~{lines} lines]")

            if not exists:
                all_exist = False

    print("\n" + "="*80)
    print("\n" + "=" * 80)

    if all_exist:
        print("✅ All deployment tool files are ready!")
        print("="*80 + "\n")
        print("=" * 80 + "\n")

        print("🚀 Quick Start (3 ways):\n")

        print("  Method 1: Easiest (Recommended)")
        print("  ─────────────────────────────────────────────────────────")
        print("  cd scripts")
        print("  python deploy_async_context_compression.py")
        print()

        print("  Method 2: Generic Tool")
        print("  ─────────────────────────────────────────────────────────")
        print("  cd scripts")
        print("  python deploy_filter.py")
        print()

        print("  Method 3: Deploy Other Filters")
        print("  ─────────────────────────────────────────────────────────")
        print("  cd scripts")
        print("  python deploy_filter.py --list")
        print("  python deploy_filter.py folder-memory")
        print()

        print("="*80 + "\n")
        print("=" * 80 + "\n")
        print("📚 Documentation References:\n")
        print("  • Quick Start: scripts/QUICK_START.md")
        print("  • Complete Guide: scripts/DEPLOYMENT_GUIDE.md")
@@ -91,12 +92,12 @@ def main():
        print("  • Script Info: scripts/README.md")
        print("  • Test Coverage: pytest tests/scripts/test_deploy_filter.py -v")
        print()

        print("="*80 + "\n")
        print("=" * 80 + "\n")
        return 0
    else:
        print("❌ Some files are missing!")
        print("="*80 + "\n")
        print("=" * 80 + "\n")
        return 1
@@ -66,7 +66,7 @@ def test_build_payload_uses_native_tool_shape_for_tools():
            "description": "Demo tool description",
            "openwebui_id": "12345678-1234-1234-1234-123456789abc",
        },
        content='class Tools:\n pass\n',
        content="class Tools:\n pass\n",
        function_id="demo_tool",
    )

@@ -79,7 +79,7 @@ def test_build_payload_uses_native_tool_shape_for_tools():
            "description": "Demo tool description",
            "manifest": {},
        },
        "content": 'class Tools:\n pass\n',
        "content": "class Tools:\n pass\n",
        "access_grants": [],
    }

@@ -89,7 +89,7 @@ def test_build_api_urls_uses_tool_endpoints_for_tools():
        plugin_type="tool",
        file_path=Path("plugins/tools/demo/demo_tool.py"),
        metadata={"title": "Demo Tool"},
        content='class Tools:\n pass\n',
        content="class Tools:\n pass\n",
        function_id="demo_tool",
    )

@@ -101,7 +101,9 @@ def test_build_api_urls_uses_tool_endpoints_for_tools():
    assert create_url == "http://localhost:3000/api/v1/tools/create"


def test_discover_plugins_only_returns_supported_openwebui_plugins(tmp_path, monkeypatch):
def test_discover_plugins_only_returns_supported_openwebui_plugins(
    tmp_path, monkeypatch
):
    actions_dir = tmp_path / "plugins" / "actions"
    filters_dir = tmp_path / "plugins" / "filters"
    pipes_dir = tmp_path / "plugins" / "pipes"
@@ -110,7 +112,9 @@ def test_discover_plugins_only_returns_supported_openwebui_plugins(tmp_path, mon
    write_plugin(actions_dir / "flash-card" / "flash_card.py", PLUGIN_HEADER)
    write_plugin(actions_dir / "flash-card" / "flash_card_cn.py", PLUGIN_HEADER)
    write_plugin(actions_dir / "infographic" / "verify_generation.py", PLUGIN_HEADER)
    write_plugin(filters_dir / "missing-id" / "missing_id.py", '"""\ntitle: Missing ID\n"""\n')
    write_plugin(
        filters_dir / "missing-id" / "missing_id.py", '"""\ntitle: Missing ID\n"""\n'
    )
    write_plugin(pipes_dir / "sdk" / "github_copilot_sdk.py", PLUGIN_HEADER)
    write_plugin(tools_dir / "skills" / "openwebui_skills_manager.py", PLUGIN_HEADER)

@@ -150,7 +154,9 @@ def test_discover_plugins_only_returns_supported_openwebui_plugins(tmp_path, mon
        ("class Action:\n pass\n", "missing plugin header"),
    ],
)
def test_discover_plugins_reports_missing_metadata(tmp_path, monkeypatch, header, expected_reason):
def test_discover_plugins_reports_missing_metadata(
    tmp_path, monkeypatch, header, expected_reason
):
    action_dir = tmp_path / "plugins" / "actions"
    plugin_file = action_dir / "demo" / "demo.py"
    write_plugin(plugin_file, header)
zed-ai-tabs.sh (Executable file, 139 lines)
@@ -0,0 +1,139 @@
#!/bin/bash
# ==============================================================================
# ai-tabs - Ultra Orchestrator
# Version: v1.0.0
# License: MIT
# Author: Fu-Jie
# Description: Batch-launches and orchestrates multiple AI CLI tools as Tabs.
# ==============================================================================

# 1. Single-Instance Lock
LOCK_FILE="/tmp/ai_terminal_launch.lock"
# If lock is less than 10 seconds old, another instance is running. Exit.
if [ -f "$LOCK_FILE" ]; then
    LOCK_TIME=$(stat -f %m "$LOCK_FILE")
    NOW=$(date +%s)
    if (( NOW - LOCK_TIME < 10 )); then
        echo "⚠️ Another launch in progress. Skipping to prevent duplicates."
        exit 0
    fi
fi
touch "$LOCK_FILE"
trap 'rm -f "$LOCK_FILE"' EXIT

# 2. Configuration & Constants
INIT_DELAY=4.5
PASTE_DELAY=0.3
CMD_CREATION_DELAY=0.3
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PARENT_DIR="$(dirname "$SCRIPT_DIR")"

# Search for .env
if [ -f "${SCRIPT_DIR}/.env" ]; then
    ENV_FILE="${SCRIPT_DIR}/.env"
elif [ -f "${PARENT_DIR}/.env" ]; then
    ENV_FILE="${PARENT_DIR}/.env"
fi

# Supported Tools
SUPPORTED_TOOLS=(
    "claude:--continue"
    "opencode:--continue"
    "gemini:--resume latest"
    "copilot:--continue"
    "iflow:--continue"
    "kilo:--continue"
)

FOUND_TOOLS_NAMES=()
FOUND_CMDS=()

# 3. Part A: Load Manual Configuration
if [ -f "$ENV_FILE" ]; then
    set -a; source "$ENV_FILE"; set +a
    for var in $(compgen -v | grep '^TOOL_[0-9]' | sort -V); do
        TPATH="${!var}"
        if [ -x "$TPATH" ]; then
            NAME=$(basename "$TPATH")
            FLAG="--continue"
            for item in "${SUPPORTED_TOOLS[@]}"; do
                [[ "${item%%:*}" == "$NAME" ]] && FLAG="${item#*:}" && break
            done
            FOUND_TOOLS_NAMES+=("$NAME")
            FOUND_CMDS+=("'$TPATH' $FLAG || '$TPATH' || exec \$SHELL")
        fi
    done
fi

# 4. Part B: Automatic Tool Discovery
for item in "${SUPPORTED_TOOLS[@]}"; do
    NAME="${item%%:*}"
    FLAG="${item#*:}"
    ALREADY_CONFIGURED=false
    for configured in "${FOUND_TOOLS_NAMES[@]}"; do
        [[ "$configured" == "$NAME" ]] && ALREADY_CONFIGURED=true && break
    done
    [[ "$ALREADY_CONFIGURED" == true ]] && continue
    TPATH=$(which "$NAME" 2>/dev/null)
    if [ -z "$TPATH" ]; then
        SEARCH_PATHS=(
            "/opt/homebrew/bin/$NAME"
            "/usr/local/bin/$NAME"
            "$HOME/.local/bin/$NAME"
            "$HOME/bin/$NAME"
            "$HOME/.$NAME/bin/$NAME"
            "$HOME/.nvm/versions/node/*/bin/$NAME"
            "$HOME/.npm-global/bin/$NAME"
            "$HOME/.cargo/bin/$NAME"
        )
        for p in "${SEARCH_PATHS[@]}"; do
            for found_p in $p; do [[ -x "$found_p" ]] && TPATH="$found_p" && break 2; done
        done
    fi
    if [ -n "$TPATH" ]; then
        FOUND_TOOLS_NAMES+=("$NAME")
        FOUND_CMDS+=("'$TPATH' $FLAG || '$TPATH' || exec \$SHELL")
    fi
done
|
||||
|
||||
NUM_FOUND=${#FOUND_CMDS[@]}
|
||||
[[ "$NUM_FOUND" -eq 0 ]] && exit 1
|
||||
|
||||
# 5. Core Orchestration (Reset + Launch)
|
||||
# Using Command Palette automation to avoid the need for manual shortcut binding.
|
||||
AS_SCRIPT="tell application \"System Events\"\n"
|
||||
|
||||
# Phase A: Creation (Using Command Palette to ensure it opens in Editor Area)
|
||||
for ((i=1; i<=NUM_FOUND; i++)); do
|
||||
AS_SCRIPT+=" keystroke \"p\" using {command down, shift down}\n"
|
||||
AS_SCRIPT+=" delay 0.1\n"
|
||||
# Ensure we are searching for the command. Using clipboard for speed and universal language support.
|
||||
AS_SCRIPT+=" set the clipboard to \"workspace: new center terminal\"\n"
|
||||
AS_SCRIPT+=" keystroke \"v\" using {command down}\n"
|
||||
AS_SCRIPT+=" delay 0.1\n"
|
||||
AS_SCRIPT+=" keystroke return\n"
|
||||
AS_SCRIPT+=" delay $CMD_CREATION_DELAY\n"
|
||||
done
|
||||
|
||||
# Phase B: Warmup
|
||||
AS_SCRIPT+=" delay $INIT_DELAY\n"
|
||||
|
||||
# Phase C: Command Injection (Reverse)
|
||||
for ((i=NUM_FOUND-1; i>=0; i--)); do
|
||||
FULL_CMD="${FOUND_CMDS[$i]}"
|
||||
CLEAN_CMD=$(echo "$FULL_CMD" | sed 's/"/\\"/g')
|
||||
AS_SCRIPT+=" set the clipboard to \"$CLEAN_CMD\"\n"
|
||||
AS_SCRIPT+=" delay 0.1\n"
|
||||
AS_SCRIPT+=" keystroke \"v\" using {command down}\n"
|
||||
AS_SCRIPT+=" delay $PASTE_DELAY\n"
|
||||
AS_SCRIPT+=" keystroke return\n"
|
||||
if [ $i -gt 0 ]; then
|
||||
AS_SCRIPT+=" delay 0.5\n"
|
||||
AS_SCRIPT+=" keystroke \"[\" using {command down, shift down}\n"
|
||||
fi
|
||||
done
|
||||
AS_SCRIPT+="end tell"
|
||||
|
||||
# Execute
|
||||
echo -e "$AS_SCRIPT" | osascript
|
||||
echo "✨ Ai tabs initialized successfully ($NUM_FOUND tools found)."
|
||||
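A note on portability: the single-instance lock above reads the lock file's mtime with BSD `stat -f %m`, which is macOS-only (consistent with the script's use of `osascript`). A minimal sketch of the same mtime-age check that also works with GNU coreutils `stat -c %Y` on Linux (the fallback order here is an assumption, not part of the committed script):

```shell
#!/bin/bash
# Sketch: age-of-file check portable across GNU (stat -c %Y) and BSD/macOS (stat -f %m).
LOCK_FILE=$(mktemp)

# Try the GNU format first; fall back to the BSD format if it fails.
if LOCK_TIME=$(stat -c %Y "$LOCK_FILE" 2>/dev/null); then
    :  # GNU coreutils stat succeeded
else
    LOCK_TIME=$(stat -f %m "$LOCK_FILE")  # BSD/macOS stat
fi

NOW=$(date +%s)
AGE=$(( NOW - LOCK_TIME ))
# A freshly created lock file should be well under the 10-second threshold.
echo "lock age: ${AGE}s"
rm -f "$LOCK_FILE"
```

The same two-branch probe could replace the bare `stat -f %m` call in the script if it ever needs to run outside macOS, though the AppleScript orchestration itself would still be macOS-specific.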