Merge remote changes, keeping local version improvements

This commit is contained in:
fujie
2026-03-13 14:58:22 +08:00
12 changed files with 1851 additions and 0 deletions

.agent/agent_hub.db Normal file

Binary file not shown.


@@ -0,0 +1,171 @@
# Filter: async-context-compression Design Patterns and Engineering Practice
**Date**: 2026-03-12
**Module**: `plugins/filters/async-context-compression/async_context_compression.py`
**Key features**: context compression, asynchronous summary generation, state management, LLM engineering optimization
---
## Core Engineering Insights
### 1. The Filter-to-LLM Request Propagation Chain
**Problem**: The filter's `outlet` stage launches a background async task (`asyncio.create_task`) that calls `generate_chat_completion` (an internal API), but has no direct access to the original HTTP `request`. Early code synthesized a minimal Request (`{"type": "http", "app": webui_app}`), which exposed compatibility risks.
**Solution**:
- OpenWebUI injects the `__request__` parameter into `outlet` as well (i.e. both `inlet` and `outlet` support it)
- Pass `__request__` through the entire async call chain: `outlet → _locked_summary_task → _check_and_generate_summary_async → _generate_summary_async → _call_summary_llm`
- At the final call site: `request = __request__ or Request(...)` (synthetic fallback)
**Takeaway**: LLM call paths should always prefer the real request context over a synthetic one. Even inside a background task, the app-level state on `request.app` remains valid.
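The passthrough described above can be sketched as a condensed chain (two intermediate layers are elided; all names here are illustrative stand-ins, and a plain dict stands in for the FastAPI Request):

```python
import asyncio

def make_fallback_request():
    # Last-resort synthetic request scope, mirroring the early code's
    # {"type": "http", "app": webui_app} approach.
    return {"type": "http", "app": None, "synthetic": True}

async def _call_summary_llm(payload, __request__=None):
    # Bottom of the chain: prefer the injected real request, fall back once.
    request = __request__ or make_fallback_request()
    return request

async def _generate_summary_async(payload, __request__=None):
    # Every layer forwards __request__ instead of re-synthesizing it.
    return await _call_summary_llm(payload, __request__=__request__)

async def outlet(body, __request__=None):
    # outlet receives __request__ via OpenWebUI injection.
    return await _generate_summary_async(body, __request__=__request__)

real = {"type": "http", "app": "webui", "synthetic": False}
used = asyncio.run(outlet({}, __request__=real))
print(used["synthetic"])  # False: the real request survived the whole chain
```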
---
### 2. Context Integrity in Async Summary Generation
**Key scenario split**:
| Case | `summary_index` value | Old summary location | Needs `previous_summary`? |
|------|--------|----------|---------|
| Inlet already injected the old summary | Not None | `messages[0]` (first item of middle_messages) | ❌ No, already in conversation_text |
| Outlet receives raw messages (no injection) | None | Archived in DB | ✅ **Yes**, must be read and passed explicitly |
**Root cause**: The messages `outlet` receives come from the raw database query and never went through `inlet`'s summary injection. When the LLM cannot see the historical summary, already-compressed knowledge (old conversations, resolved issues, earlier findings) gets reprocessed or forgotten.
**Implementation notes**:
```python
# Load the old summary asynchronously only when summary_index is None
if summary_index is None:
previous_summary = await asyncio.to_thread(
self._load_summary, chat_id, body
)
else:
previous_summary = None
```
---
### 3. LLM Prompt Design for Context Compression
**Engineering principles**:
1. **Clear Input Boundaries**: use XML-style tags (`<previous_working_memory>`, `<new_conversation>`) to draw explicit boundaries so the LLM does not confuse "instruction examples" with "data to process"
2. **State-Aware Merging**: not "keep every old fact" but **update state**: `"bug X exists" → "bug X fixed"`, or drop resolved items entirely
3. **Goal Evolution**: Current Goal reflects the **latest** intent; old goals migrate into Working Memory as context
4. **Error Verbatim**: stack traces, exception types, and error codes must be quoted verbatim; they are first-class citizens of debugging
5. **Format Strictness**: the structure becomes **REQUIRED** (not Suggested); sections with no content may be omitted, but the layout stays consistent
**New prompt structure**:
```
[Rules] → [Output Constraints] → [Required Structure Header] → [Boundaries] → <previous_working_memory> → <new_conversation>
```
Key improvements:
- Rule 3 (Ruthless Denoising) → new Rule 4 (Error Verbatim) + Rule 5 (Causal Chain)
- "Suggested" Structure → "Required" Structure with Optional Sections
- New dedicated `## Causal Log` section, enforcing a single-line causal-chain format: `[MSG_ID?] action → result`
- Token budget strategy explicitly trims by recency and urgency first (RRF)
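A minimal sketch of assembling a prompt in that section order, assuming a hypothetical `build_summary_prompt` helper; the rule text is illustrative, not the actual prompt:

```python
def build_summary_prompt(previous_summary: str, conversation_text: str) -> str:
    # Section order from the structure above:
    # [Rules] → [Output Constraints] → [Required Structure] → tagged data.
    rules = (
        "Rules: update state, quote errors verbatim, keep causal chains.\n"
        "Output constraints: stay under the token budget; trim oldest first.\n"
        "Required structure: ## Current Goal, ## Working Memory, ## Causal Log\n"
        "(sections with no content may be omitted; layout must not change)\n"
    )
    # XML-style tags draw a hard boundary between instructions and data.
    memory_block = (
        f"<previous_working_memory>\n{previous_summary}\n</previous_working_memory>\n"
        if previous_summary else ""
    )
    convo_block = f"<new_conversation>\n{conversation_text}\n</new_conversation>"
    return rules + memory_block + convo_block

prompt = build_summary_prompt("bug X fixed", "user: deploy went fine")
print("<previous_working_memory>" in prompt)  # True
```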
---
### 4. Error Boundaries and Recovery in Async Tasks
**Observation**: Exceptions in the background summary task (`asyncio.create_task`) do not block the user response, but you still need:
- A complete logging chain (`_log` calls + `event_emitter` notifications)
- Atomic database transactions (summary and compression state saved together)
- Frontend UI feedback (status event: "generating..." → "complete" or "error")
**Best practices**:
- Use a per-chat_id `asyncio.Lock` to prevent concurrent summary tasks
- Run heavy operations (tokenize, LLM call) off the event loop with `asyncio.to_thread`
- Wrap all I/O (DB reads/writes) in the async thread pool
- Keep exception handling scoped to the try-except that logs it; never swallow the stack trace
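The practices above combine into a pattern like the following sketch (field and function names are hypothetical, and the heavy work is a stand-in):

```python
import asyncio
import logging
from collections import defaultdict

logger = logging.getLogger("compression")

# One lock per chat_id so two outlet calls for the same chat cannot
# run the summary task concurrently.
_chat_locks = defaultdict(asyncio.Lock)

def _tokenize_and_summarize(chat_id):
    # Stand-in for the heavy work (tokenize + LLM call + DB write).
    return f"summary for {chat_id}"

async def _locked_summary_task(chat_id):
    async with _chat_locks[chat_id]:
        try:
            # Heavy/blocking work leaves the event loop via to_thread.
            return await asyncio.to_thread(_tokenize_and_summarize, chat_id)
        except Exception:
            # Log with the full stack; never let a background task die silently.
            logger.exception("summary task failed for %s", chat_id)
            return None

async def outlet_like(chat_id):
    # Fire-and-forget in production: the user response is not blocked.
    task = asyncio.create_task(_locked_summary_task(chat_id))
    return await task  # awaited here only so the sketch is testable

print(asyncio.run(outlet_like("chat-1")))  # summary for chat-1
```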
---
### 5. Filter Singleton and State-Design Pitfalls
**Constraint**: the Filter instance is a global singleton; every session shares the same `self`.
**Anti-patterns**:
```python
# ❌ Wrong: self.temp_buffer = ... (gets polluted by other concurrent sessions)
self.temp_state = body  # dangerous!
# ✅ Right: stay stateless, or isolate by lock/chat_id
self._chat_locks[chat_id] = asyncio.Lock()  # one lock per chat
```
**Design**:
- Valves (Pydantic BaseModel) hold global configuration ✅
- Keep transient state (locks, counters) in dicts keyed by `chat_id` ✅
- Pass request-level data as arguments rather than globals ✅
---
## Integration Scenario: Filter + Pipe Working Together
**When a Pipe model runs through the Filter**:
1. `inlet` injects the summary, cutting the number of conversation messages in context
2. The Pipe model (typically the Copilot SDK or a custom kernel) processes the slimmed-down messages
3. `outlet` triggers the background summary without blocking the user response
4. On the next turn, `inlet` injects the latest summary again
**Key constraints**:
- `_should_skip_compression` checks whether `__model__.get("pipe")` is `copilot_sdk` and skips injection when necessary
- If the Pipe model does its own context management (e.g. Copilot's native tool calling), over-compression breaks the tool-call chain
- The summary model (`summary_model` valve) must be compatible with the current Pipe environment's API; a general-purpose model such as gemini-flash is recommended
---
## Internal API Contract Cheat Sheet
### `generate_chat_completion(request, payload, user)`
- **request**: FastAPI Request, either the real HTTP request or the injected `__request__`
- **payload**: `{"model": id, "messages": [...], "stream": false, "max_tokens": N, "temperature": T}`
- **user**: UserModel, looked up from the DB or converted from `__user__` (requires `Users.get_user_by_id()`)
- **Returns**: dict or JSONResponse; for the latter, `response.body.decode()` + JSON parse is required
### Filter Lifecycle
```
New Message → inlet (User input) → [Plugins wait] → LLM → outlet (Response) → Summary Task (Background)
```
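The dict-or-JSONResponse return contract can be handled with a small normalizer; `FakeJSONResponse` below is a stand-in for `fastapi.responses.JSONResponse` so the sketch stays self-contained:

```python
import json

class FakeJSONResponse:
    # Minimal stand-in for fastapi.responses.JSONResponse in this sketch.
    def __init__(self, payload):
        self.body = json.dumps(payload).encode()

def normalize_completion(response):
    # generate_chat_completion may return a plain dict or a JSONResponse;
    # in the latter case decode the body and parse it back to a dict.
    if isinstance(response, dict):
        return response
    return json.loads(response.body.decode())

payload = {"choices": [{"message": {"role": "assistant", "content": "ok"}}]}
assert normalize_completion(payload) == payload
assert normalize_completion(FakeJSONResponse(payload)) == payload
```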
---
## Debugging Checklist
- [ ] `__request__` is declared in the `outlet` signature and injected by OpenWebUI (not None)
- [ ] Every layer of the async call chain passes `__request__` through; only the bottom layer falls back to a synthetic request
- [ ] When `summary_index is None`, `previous_summary` is read asynchronously from the DB
- [ ] `<previous_working_memory>` and `<new_conversation>` have explicit boundaries in the LLM prompt
- [ ] Error handling never swallows the stack: `logger.exception()` or `exc_info=True`
- [ ] A per-chat_id `asyncio.Lock` prevents concurrent summary work
- [ ] Copilot SDK / Pipe models go through the `_should_skip_compression()` check
- [ ] The token budget stays under max_summary_tokens, keeping recent events first
---
## 相关文件
- 核心实现:`plugins/filters/async-context-compression/async_context_compression.py`
- README`plugins/filters/async-context-compression/README.md` + `README_CN.md`
- OpenWebUI 内部:`open_webui/utils/chat.py``generate_chat_completion()`
---
**版本**: 1.0
**维护者**: Fu-Jie
**最后更新**: 2026-03-12


@@ -0,0 +1,29 @@
# Agent Coordination Protocol (FOR AGENTS ONLY)
## 🛡️ The Golden Rule
**NEVER modify code without verifying the lock status in the Agent Hub.**
## 🔑 Identity Management
- `claude-code`: Official Claude CLI
- `copilot-agent`: GitHub Copilot
- `gemini-cursor`: Cursor IDE or Gemini extension
- `iflow-agent`: iFlow SDK agent
## 🛠️ The Synchronization Tool
Script: `scripts/agent_sync.py` (SQLite-backed)
### 🏎️ Workflow Lifecycle
1. **Initialize Session**:
- `python3 scripts/agent_sync.py status`
- `python3 scripts/agent_sync.py register <id> <name> "<objective>"`
2. **Resource Acquisition**:
- `python3 scripts/agent_sync.py lock <id> <file_path>`
- If blocked, identify the owner from `status` and do not attempt to bypass.
3. **Collaboration (Research Mode)**:
- If the project mode is `RESEARCH`, prioritize the `note` command.
- Summarize findings: `python3 scripts/agent_sync.py note <id> "<topic>" "<summary>"`
4. **Cleanup**:
- `python3 scripts/agent_sync.py unlock <id> <file_path>`
## 📜 Shared Memory
Read `.agent/learnings/` to avoid reinventing the wheel.

.cursorrules Normal file

@@ -0,0 +1,8 @@
# 🤖 Cursor/Gemini Multi-Agent Protocol
1. **STATUS CHECK**: Always run `python3 scripts/agent_sync.py status` first.
2. **REGISTRATION**: Run `python3 scripts/agent_sync.py register gemini-id "Gemini" "Current task"`.
3. **LOCKING**: Never edit without `python3 scripts/agent_sync.py lock gemini-id <path>`.
4. **STANDARDS**: Refer to `.agent/rules/plugin_standards.md` for coding guidelines.
Full details in `COOPERATION.md`.

CLAUDE.md Normal file

@@ -0,0 +1,13 @@
# 🤖 Claude Multi-Agent Protocol (MACP)
## 🚀 Mandatory Startup
1. **Check Hub**: `python3 scripts/agent_sync.py status`
2. **Register**: `python3 scripts/agent_sync.py register claude-code "Claude" "Handling user request"`
3. **Lock**: `python3 scripts/agent_sync.py lock claude-code <file_path>`
4. **Handoff**: Use `python3 scripts/agent_sync.py note` for collaborative findings.
## 🤝 Project Standards
Read these BEFORE writing any code:
- `.agent/rules/plugin_standards.md`
- `.agent/rules/agent_protocol.md`
- `COOPERATION.md`

COOPERATION.md Normal file

@@ -0,0 +1,33 @@
# 🤖 Multi-Agent Cooperation Protocol (MACP) v2.1
This project uses a **SQLite coordination hub (Agent Hub)** to manage concurrent tasks across multiple AI agents.
## 🚀 Quick Commands
Invoke via `./scripts/macp`; no need to memorize the full Python arguments.
| Command | Description |
| :--- | :--- |
| **`/status`** | Show the full picture (active agents, file locks, tasks, research topics) |
| **`/study <topic> <desc>`** | **Start a joint study in one step**. Broadcasts the topic and moves all agents into research mode. |
| **`/summon <agent> <task>`** | **Targeted summon**. Dispatch a high-priority task to a specific agent. |
| **`/handover <agent> <msg>`** | **Task relay**. Release current progress and hand off to the next agent. |
| **`/broadcast <msg>`** | **Broadcast**. Send an urgent notice or status sync to everyone. |
| **`/check`** | **Inbox check**. See whether any tasks are assigned to you. |
| **`/resolve <topic> <result>`** | **Archive a conclusion**. Close a research topic and record the final consensus. |
| **`/ping`** | **Liveness check**. Quickly see which agents are online. |
---
## 🛡️ Collaboration Rules
1. **Check before acting**: run `./scripts/macp /status` before starting work.
2. **A lock is ownership**: acquire the lock before modifying a file.
3. **Intent first**: for large refactors, open a `/study` discussion before coding.
4. **Unlock promptly**: after commit and push, always `/handover` or unlock manually.
## 📁 Infrastructure
- **Database**: `.agent/agent_hub.db` (do not edit by hand)
- **Kernel**: `scripts/agent_sync.py`
- **Shortcut tool**: `scripts/macp`
---
*Generated by Claude (Coordinator) in collaboration with Sisyphus & Copilot.*


@@ -0,0 +1,123 @@
# ✅ Async Context Compression Deployment Complete (2024-03-12)
## 🎯 Deployment Summary
**Date**: 2024-03-12
**Version**: 1.4.1
**Status**: ✅ Deployed successfully
**Target**: OpenWebUI localhost:3003
---
## 📌 New Features
### Frontend Console Debug Output
`async_context_compression.py` gained 6 structured-data checkpoints that let you inspect the plugin's internal data flow in the browser Console.
#### New Method
```python
async def _emit_struct_log(self, __event_call__, title: str, data: Any):
"""
Emit structured data to browser console.
    - Arrays  → console.table()            [tabular]
    - Objects → console.dir(d, {depth: 3}) [tree view]
"""
```
#### The 6 Checkpoints
| # | Checkpoint | Stage | Shows |
|---|-------|------|--------|
| 1⃣ | `__user__ structure` | Inlet entry | id, name, language, resolved_language |
| 2⃣ | `__metadata__ structure` | Inlet entry | chat_id, message_id, function_calling |
| 3⃣ | `body top-level structure` | Inlet entry | model, message_count, metadata keys |
| 4⃣ | `summary_record loaded from DB` | Inlet, after DB load | compressed_count, summary_preview, timestamps |
| 5⃣ | `final_messages shape → LLM` | Inlet, before return | table: each message's role, content_length, tools |
| 6⃣ | `middle_messages shape` | During async summary | table: the message slice being summarized |
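The array/object dispatch in `_emit_struct_log` reduces to a one-line rule; this is an assumed simplification of the behavior described in the docstring above:

```python
def classify_for_console(data):
    # Lists render as console.table(), everything else as console.dir()
    # (assumed dispatch rule for this sketch).
    return "console.table" if isinstance(data, list) else "console.dir"

print(classify_for_console([{"role": "user"}]))  # console.table
print(classify_for_console({"chat_id": "c1"}))   # console.dir
```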
---
## 🚀 Quick Start (5 minutes)
### Step 1: Enable the Filter
```
OpenWebUI → Settings → Filters → enable "Async Context Compression"
```
### Step 2: Enable Debugging
```
In the filter config → show_debug_log: ON → Save
```
### Step 3: Open the Console
```
F12 (Windows/Linux) or Cmd+Option+I (Mac) → Console tab
```
### Step 4: Send Messages
```
Send 10+ messages and watch for logs starting with 📋 [Compression]
```
---
## 📊 Code Changes
```
New method:      _emit_struct_log()  [42 lines]
New log points:  6
New lines added: ~150
Backward compat: 100% (guarded by show_debug_log)
```
---
## 💡 日志示例
### 表格日志Arrays
```
📋 [Compression] Inlet: final_messages shape → LLM (7 msgs)
┌─────┬──────────┬──────────────┬─────────────┐
│index│role │content_length│has_tool_... │
├─────┼──────────┼──────────────┼─────────────┤
│ 0 │"system" │150 │false │
│ 1 │"user" │200 │false │
│ 2 │"assistant"│500 │true │
└─────┴──────────┴──────────────┴─────────────┘
```
### Tree Logs (Objects)
```
📋 [Compression] Inlet: __metadata__ structure
├─ chat_id: "chat-abc123..."
├─ message_id: "msg-xyz789"
├─ function_calling: "native"
└─ all_keys: ["chat_id", "message_id", ...]
```
---
## ✅ Verification Checklist
- [x] Code changes saved
- [x] Deployment script ran successfully
- [x] OpenWebUI running normally
- [x] 6 new log points added
- [x] Hang-prevention guard in place
- [x] Fully backward compatible
---
## 📖 Documentation
- [QUICK_START.md](../../scripts/QUICK_START.md) - quick reference
- [README_CN.md](./README_CN.md) - plugin overview
- [DEPLOYMENT_REFERENCE.md](./DEPLOYMENT_REFERENCE.md) - deployment tooling
---
**Deployed**: 2024-03-12
**Maintainer**: Fu-Jie
**Project**: [openwebui-extensions](https://github.com/Fu-Jie/openwebui-extensions)


@@ -0,0 +1,315 @@
# 📋 Response Structure Inspection Guide
## 🎯 New Checkpoints
The `_call_summary_llm()` method gained **3 new response checkpoints** for inspecting the full LLM response flow from the frontend console.
### Checkpoint Locations
| # | Checkpoint | Location | Shows |
|---|-----------|------|--------|
| 1⃣ | **LLM Response structure** | After `generate_chat_completion()` returns | Type, keys, and shape of the raw response object |
| 2⃣ | **LLM Summary extracted & cleaned** | After the summary is extracted and cleaned | Summary length, word count, format, emptiness |
| 3⃣ | **Summary saved to database** | Verified after saving to the DB | Whether the DB record persisted correctly and fields match |
---
## 📊 Checkpoint Details
### 1⃣ LLM Response structure
**When**: after `generate_chat_completion()` returns, before any processing
**Purpose**: validate the raw response data structure
```
📋 [Compression] LLM Response structure (raw from generate_chat_completion)
├─ type: "dict" / "Response" / "JSONResponse"
├─ has_body: true/false (true means it is a Response object)
├─ has_status_code: true/false
├─ is_dict: true/false
├─ keys: ["choices", "usage", "model", ...] (when it is a dict)
├─ first_choice_keys: ["message", "finish_reason", ...]
├─ message_keys: ["role", "content"]
└─ content_length: 1234 (length of the summary text)
```
**Key checks**:
- ✅ `type` — should be `dict` or `JSONResponse`
- ✅ `is_dict` — should end up `true` (after processing)
- ✅ `keys` — should contain `choices` and `usage`
- ✅ `first_choice_keys` — should contain `message`
- ✅ `message_keys` — should contain `role` and `content`
- ✅ `content_length` — the summary should not be empty (> 0)
---
### 2⃣ LLM Summary extracted & cleaned
**When**: after the summary is extracted from the response and `strip()`ed
**Purpose**: validate the quality of the extracted summary
```
📋 [Compression] LLM Summary extracted & cleaned
├─ type: "str"
├─ length_chars: 1234
├─ length_words: 156
├─ first_100_chars: "The user asked about..."
├─ has_newlines: true
├─ newline_count: 3
└─ is_empty: false
```
**Key checks**:
- ✅ `type` — should always be `str`
- ✅ `is_empty` — should be `false` (must not be empty)
- ✅ `length_chars` — typically 100-2000 characters (configuration-dependent)
- ✅ `newline_count` — a multi-line summary usually has a few newlines
- ✅ `first_100_chars` — eyeball the opening content for sanity
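A sketch of how checkpoint 2's payload could be assembled, using the field names from the log above (the helper name is hypothetical):

```python
def build_summary_inspection(summary: str) -> dict:
    # Mirrors the checkpoint-2 fields shown in the example log.
    return {
        "type": type(summary).__name__,
        "length_chars": len(summary),
        "length_words": len(summary.split()),
        "first_100_chars": summary[:100],
        "has_newlines": "\n" in summary,
        "newline_count": summary.count("\n"),
        "is_empty": not summary.strip(),
    }

info = build_summary_inspection("The user asked about X.\nBug fixed.")
print(info["is_empty"])        # False
print(info["newline_count"])   # 1
```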
---
### 3⃣ Summary saved to database
**When**: after saving to the DB, reloaded for verification
**Purpose**: confirm the record persisted and the data is consistent
```
📋 [Compression] Summary saved to database (verification)
├─ db_id: 42
├─ db_chat_id: "chat-abc123..."
├─ db_compressed_message_count: 10
├─ db_summary_length_chars: 1234
├─ db_summary_preview_100: "The user asked about..."
├─ db_created_at: "2024-03-12 15:30:45.123456+00:00"
├─ db_updated_at: "2024-03-12 15:35:20.654321+00:00"
├─ matches_input_chat_id: true
└─ matches_input_compressed_count: true
```
**Key checks** ⭐ most important:
- ✅ `matches_input_chat_id` — **must be `true`**
- ✅ `matches_input_compressed_count` — **must be `true`**
- ✅ `db_summary_length_chars` — matches the extracted length
- ✅ `db_updated_at` — should be the latest timestamp
- ✅ `db_id` — should be a valid database ID
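A sketch of checkpoint 3's reload-and-compare verification, with illustrative record and helper names:

```python
def build_save_inspection(saved: dict, chat_id: str, compressed_count: int) -> dict:
    # After saving, reload the record and compare it field-by-field
    # with the inputs (field names mirror the example log above).
    return {
        "db_id": saved["id"],
        "db_chat_id": saved["chat_id"],
        "db_compressed_message_count": saved["compressed_message_count"],
        "db_summary_length_chars": len(saved["summary"]),
        "matches_input_chat_id": saved["chat_id"] == chat_id,
        "matches_input_compressed_count":
            saved["compressed_message_count"] == compressed_count,
    }

saved = {"id": 42, "chat_id": "chat-abc", "compressed_message_count": 10,
         "summary": "bug X fixed"}
report = build_save_inspection(saved, "chat-abc", 10)
print(report["matches_input_chat_id"])           # True
print(report["matches_input_compressed_count"])  # True
```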
---
## 🔍 Viewing from the Frontend
### Step 1: Enable Debug Mode
In OpenWebUI:
```
Settings → Filters → Async Context Compression
Find the valve: "show_debug_log"
Check it + Save
```
### Step 2: Open the Browser Console
- **Windows/Linux**: F12 → Console
- **Mac**: Cmd + Option + I → Console
### Step 3: Trigger Summary Generation
Send enough messages for the filter to trigger compression:
```
1. Send 15+ messages
2. Wait for the background summary task to start
3. Watch the Console for 📋 logs
```
### Step 4: Watch the Full Flow
```
[1] 📋 LLM Response structure (raw)
    ↓ (raw response type and structure)
[2] 📋 LLM Summary extracted & cleaned
    ↓ (extracted text info)
[3] 📋 Summary saved to database (verification)
    ↓ (database save result)
```
---
## 📈 End-to-End Validation
### Healthy Flow ✅
```
1⃣ Response structure:
   - type: "dict"
   - is_dict: true
   - has "choices": true
   - has "usage": true
2⃣ Summary extracted:
   - is_empty: false
   - length_chars: 1500
   - length_words: 200
3⃣ DB verification:
   - matches_input_chat_id: true ✅
   - matches_input_compressed_count: true ✅
   - db_id: 42 (valid)
```
### Problem Flow ❌
```
1⃣ Response structure:
   - type: "Response" (needs processing)
   - has_body: true
   - (body must be parsed)
2⃣ Summary extracted:
   - is_empty: true ❌ (summary is empty!)
   - length_chars: 0
3⃣ DB verification:
   - matches_input_chat_id: false ❌ (chat_id mismatch!)
   - matches_input_compressed_count: false ❌ (count mismatch!)
```
---
## 🛠️ Debugging Tips
### Quick Log Filtering
Type into the Console filter box:
```
📋                  (all compression logs)
LLM Response        (response-related)
Summary extracted   (summary extraction)
saved to database   (save verification)
```
### Expanding Tables/Objects
1. **Object logs** (console.dir)
   - Click the ▶ on the left to expand
   - Drill into nested fields level by level
2. **Table logs** (console.table)
   - Click ▶ at the top to expand
   - View the full columns
### Comparing Logs
```
Checkpoint 1: type = "dict", is_dict = true
Checkpoint 2: is_empty = false, length_chars = 1234
Checkpoint 3: matches_input_chat_id = true
All as expected → flow is healthy
Any mismatch    → investigate that checkpoint
```
---
## 🐛 Common Diagnoses
### Q: "type" is "Response" instead of "dict"?
**Cause**: some backends return a Response object rather than a dict
**Fix**: the code handles this automatically; check the later logs for a successful parse
```
Checkpoint 1: type = "Response" ← needs parsing
The code runs `response.body` parsing
Check again that it becomes a dict
```
### Q: "is_empty" is true?
**Cause**: the LLM returned no usable summary text
**Diagnosis**:
1. Check `first_100_chars`; it should contain real content
2. Check that the model is configured correctly
3. Check whether too many middle messages caused an LLM timeout
### Q: "matches_input_chat_id" is false?
**Cause**: chat_id mismatch at save time
**Diagnosis**:
1. Compare `db_chat_id` with the input `chat_id`
2. Could be a database connection issue
3. Could be caused by a concurrent modification
### Q: "matches_input_compressed_count" is false?
**Cause**: the saved message count differs from the expected one
**Diagnosis**:
1. Compare `db_compressed_message_count` with `saved_compressed_count`
2. Check whether the middle messages were modified unexpectedly
3. Check that the atomic boundary alignment is correct
---
## 📚 Related Code Locations
```python
# File: async_context_compression.py
# Checkpoint 1: response structure inspection (L3459)
if self.valves.show_debug_log and __event_call__:
await self._emit_struct_log(
__event_call__,
"LLM Response structure (raw from generate_chat_completion)",
response_inspection_data,
)
# Checkpoint 2: summary extraction inspection (L3524)
if self.valves.show_debug_log and __event_call__:
await self._emit_struct_log(
__event_call__,
"LLM Summary extracted & cleaned",
summary_inspection,
)
# Checkpoint 3: database save inspection (L3168)
if self.valves.show_debug_log and __event_call__:
await self._emit_struct_log(
__event_call__,
"Summary saved to database (verification)",
save_inspection,
)
```
---
## 🎯 Full Checklist
Verify in the frontend Console:
- [ ] Checkpoint 1 appears with `is_dict: true`
- [ ] Checkpoint 1 shows `first_choice_keys` containing `message`
- [ ] Checkpoint 2 appears with `is_empty: false`
- [ ] Checkpoint 2 shows a reasonable `length_chars` (usually > 100)
- [ ] Checkpoint 3 appears with `matches_input_chat_id: true`
- [ ] Checkpoint 3 shows `matches_input_compressed_count: true`
- [ ] All log timestamps look reasonable
- [ ] No exceptions or error messages
---
## 📞 Next Steps
1. ✅ Enable debug mode
2. ✅ Send messages to trigger summary generation
3. ✅ Watch the 3 new checkpoints
4. ✅ Verify all fields match expectations
5. ✅ If anything is off, diagnose with this guide
---
**Last updated**: 2024-03-12
**Related feature**: response structure inspection (v1.4.1+)
**Docs**: [async_context_compression.py lines 3459, 3524, 3168]

scripts/agent_sync.py Executable file

@@ -0,0 +1,202 @@
#!/usr/bin/env python3
"""
🤖 AGENT SYNC TOOL v2.2 (Unified Semantic Edition)
-------------------------------------------------
Consolidated and simplified command set based on Copilot's architectural feedback.
Native support for Study, Task, and Broadcast workflows.
Maintains Sisyphus's advanced task management (task_queue, subscriptions).
"""
import sqlite3
import os
import sys
import argparse
from datetime import datetime
DB_PATH = os.path.join(os.getcwd(), ".agent/agent_hub.db")
def get_connection():
os.makedirs(os.path.dirname(DB_PATH), exist_ok=True)
return sqlite3.connect(DB_PATH)
def init_db():
conn = get_connection()
cursor = conn.cursor()
cursor.executescript('''
CREATE TABLE IF NOT EXISTS agents (
id TEXT PRIMARY KEY,
name TEXT,
task TEXT,
status TEXT DEFAULT 'idle',
last_seen TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS file_locks (
file_path TEXT PRIMARY KEY,
agent_id TEXT,
lock_type TEXT DEFAULT 'write',
timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS research_log (
id INTEGER PRIMARY KEY AUTOINCREMENT,
agent_id TEXT,
topic TEXT,
content TEXT,
note_type TEXT DEFAULT 'note', -- 'note', 'study', 'conclusion'
is_resolved INTEGER DEFAULT 0,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS task_queue (
id INTEGER PRIMARY KEY AUTOINCREMENT,
initiator TEXT,
task_type TEXT, -- 'research', 'collab', 'fix'
topic TEXT,
description TEXT,
priority TEXT DEFAULT 'normal',
status TEXT DEFAULT 'pending', -- 'pending', 'active', 'completed'
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS task_subscriptions (
id INTEGER PRIMARY KEY AUTOINCREMENT,
task_id INTEGER,
agent_id TEXT,
role TEXT, -- 'lead', 'reviewer', 'worker', 'observer'
FOREIGN KEY(task_id) REFERENCES task_queue(id)
);
CREATE TABLE IF NOT EXISTS broadcasts (
id INTEGER PRIMARY KEY AUTOINCREMENT,
sender_id TEXT,
type TEXT,
payload TEXT,
active INTEGER DEFAULT 1,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS global_settings (
key TEXT PRIMARY KEY, value TEXT
);
''')
cursor.execute("INSERT OR IGNORE INTO global_settings (key, value) VALUES ('mode', 'isolation')")
conn.commit()
conn.close()
print(f"✅ MACP 2.2 Semantic Kernel Active")
def get_status():
conn = get_connection(); cursor = conn.cursor()
print("\n--- 🛰️ Agent Fleet ---")
for r in cursor.execute("SELECT id, name, status, task FROM agents"):
print(f"[{r[2].upper()}] {r[1]} ({r[0]}) | Task: {r[3]}")
print("\n--- 📋 Global Task Queue ---")
for r in cursor.execute("SELECT id, topic, task_type, priority, status FROM task_queue WHERE status != 'completed'"):
print(f" #{r[0]} [{r[2].upper()}] {r[1]} | {r[3]} | {r[4]}")
print("\n--- 📚 Active Studies ---")
for r in cursor.execute("SELECT topic, agent_id FROM research_log WHERE note_type='study' AND is_resolved=0"):
print(f" 🔬 {r[0]} (by {r[1]})")
print("\n--- 📢 Live Broadcasts ---")
for r in cursor.execute("SELECT sender_id, type, payload FROM broadcasts WHERE active=1 ORDER BY created_at DESC LIMIT 3"):
print(f"📣 {r[0]} [{r[1].upper()}]: {r[2]}")
print("\n--- 🔒 File Locks ---")
for r in cursor.execute("SELECT file_path, agent_id, lock_type FROM file_locks ORDER BY timestamp DESC LIMIT 20"):
print(f" {r[0]} -> {r[1]} ({r[2]})")
cursor.execute("SELECT value FROM global_settings WHERE key='mode'")
mode = cursor.fetchone()[0]
print(f"\n🌍 Project Mode: {mode.upper()}")
conn.close()
if __name__ == "__main__":
parser = argparse.ArgumentParser()
subparsers = parser.add_subparsers(dest="command")
# Base commands
subparsers.add_parser("init")
subparsers.add_parser("status")
subparsers.add_parser("check")
subparsers.add_parser("ping")
reg = subparsers.add_parser("register")
reg.add_argument("id"); reg.add_argument("name"); reg.add_argument("task")
# Lock commands
lock = subparsers.add_parser("lock")
lock.add_argument("id"); lock.add_argument("path")
unlock = subparsers.add_parser("unlock")
unlock.add_argument("id"); unlock.add_argument("path")
# Research & Note commands
note = subparsers.add_parser("note")
note.add_argument("id"); note.add_argument("topic"); note.add_argument("content")
note.add_argument("--type", default="note")
# Semantic Workflows (The Unified Commands)
study = subparsers.add_parser("study")
study.add_argument("id"); study.add_argument("topic"); study.add_argument("--desc", default=None)
resolve = subparsers.add_parser("resolve")
resolve.add_argument("id"); resolve.add_argument("topic"); resolve.add_argument("conclusion")
# Task Management (The Advanced Commands)
assign = subparsers.add_parser("assign")
assign.add_argument("id"); assign.add_argument("target"); assign.add_argument("topic")
assign.add_argument("--role", default="worker"); assign.add_argument("--priority", default="normal")
bc = subparsers.add_parser("broadcast")
bc.add_argument("id"); bc.add_argument("type"); bc.add_argument("payload")
args = parser.parse_args()
if args.command == "init": init_db()
elif args.command == "status" or args.command == "check" or args.command == "ping": get_status()
elif args.command == "register":
conn = get_connection(); cursor = conn.cursor()
cursor.execute("INSERT OR REPLACE INTO agents (id, name, task, status, last_seen) VALUES (?, ?, ?, 'active', CURRENT_TIMESTAMP)", (args.id, args.name, args.task))
conn.commit(); conn.close()
print(f"🤖 Registered: {args.id}")
elif args.command == "lock":
conn = get_connection(); cursor = conn.cursor()
try:
cursor.execute("INSERT INTO file_locks (file_path, agent_id) VALUES (?, ?)", (args.path, args.id))
conn.commit(); print(f"🔒 Locked {args.path}")
except sqlite3.IntegrityError: print(f"❌ Lock conflict on {args.path}"); sys.exit(1)
finally: conn.close()
elif args.command == "unlock":
conn = get_connection(); cursor = conn.cursor()
cursor.execute("DELETE FROM file_locks WHERE file_path=? AND agent_id=?", (args.path, args.id))
conn.commit(); conn.close(); print(f"🔓 Unlocked {args.path}")
elif args.command == "study":
conn = get_connection(); cursor = conn.cursor()
cursor.execute("INSERT INTO research_log (agent_id, topic, content, note_type) VALUES (?, ?, ?, 'study')", (args.id, args.topic, args.desc or "Study started"))
cursor.execute("UPDATE agents SET status = 'researching'")  # intentionally no WHERE: /study moves the whole fleet into research mode
cursor.execute("INSERT INTO broadcasts (sender_id, type, payload) VALUES (?, 'research', ?)", (args.id, f"NEW STUDY: {args.topic}"))
cursor.execute("UPDATE global_settings SET value = ? WHERE key = 'mode'", (f"RESEARCH: {args.topic}",))
conn.commit(); conn.close()
print(f"🔬 Study '{args.topic}' initiated.")
elif args.command == "resolve":
conn = get_connection(); cursor = conn.cursor()
cursor.execute("UPDATE research_log SET is_resolved = 1 WHERE topic = ?", (args.topic,))
cursor.execute("INSERT INTO research_log (agent_id, topic, content, note_type, is_resolved) VALUES (?, ?, ?, 'conclusion', 1)", (args.id, args.topic, args.conclusion))
cursor.execute("UPDATE global_settings SET value = 'isolation' WHERE key = 'mode'")
cursor.execute("UPDATE agents SET status = 'active' WHERE status = 'researching'")
conn.commit(); conn.close()
print(f"✅ Study '{args.topic}' resolved.")
elif args.command == "assign":
conn = get_connection(); cursor = conn.cursor()
cursor.execute(
"INSERT INTO task_queue (initiator, task_type, topic, description, priority, status) VALUES (?, 'task', ?, ?, ?, 'pending')",
(args.id, args.topic, f"Assigned to {args.target}: {args.topic}", args.priority),
)
task_id = cursor.lastrowid
cursor.execute("INSERT INTO task_subscriptions (task_id, agent_id, role) VALUES (?, ?, ?)", (task_id, args.target, args.role))
conn.commit(); conn.close()
print(f"📋 Task #{task_id} assigned to {args.target}")
elif args.command == "broadcast":
conn = get_connection(); cursor = conn.cursor()
cursor.execute("UPDATE broadcasts SET active = 0 WHERE type = ?", (args.type,))
cursor.execute("INSERT INTO broadcasts (sender_id, type, payload) VALUES (?, ?, ?)", (args.id, args.type, args.payload))
conn.commit(); conn.close()
print(f"📡 Broadcast: {args.payload}")
elif args.command == "note":
conn = get_connection(); cursor = conn.cursor()
cursor.execute("INSERT INTO research_log (agent_id, topic, content, note_type) VALUES (?, ?, ?, ?)", (args.id, args.topic, args.content, args.type))
conn.commit(); conn.close()
print(f"📝 Note added.")

scripts/agent_sync_v2.py Executable file

@@ -0,0 +1,847 @@
#!/usr/bin/env python3
"""
🤖 AGENT SYNC TOOL v2.0 - MULTI-AGENT COOPERATION PROTOCOL (MACP)
---------------------------------------------------------
Enhanced collaboration commands for seamless multi-agent synergy.
QUICK COMMANDS:
@research <topic> - Start a joint research topic
@join <topic> - Join an active research topic
@find <topic> <content> - Post a finding to research topic
@consensus <topic> - Generate consensus document
@assign <agent> <task> - Assign task to specific agent
@notify <message> - Broadcast to all agents
@handover <agent> - Handover current task
@poll <question> - Start a quick poll
@switch <agent> - Request switch to specific agent
WORKFLOW: @research -> @find (xN) -> @consensus -> @assign
"""
import sqlite3
import os
import sys
import argparse
import json
from datetime import datetime, timedelta
from typing import List, Dict, Optional
DB_PATH = os.path.join(os.getcwd(), ".agent/agent_hub.db")
def get_connection():
os.makedirs(os.path.dirname(DB_PATH), exist_ok=True)
return sqlite3.connect(DB_PATH)
def init_db():
conn = get_connection()
cursor = conn.cursor()
cursor.executescript('''
CREATE TABLE IF NOT EXISTS agents (
id TEXT PRIMARY KEY,
name TEXT,
task TEXT,
status TEXT DEFAULT 'idle',
current_research TEXT,
last_seen TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS file_locks (
file_path TEXT PRIMARY KEY,
agent_id TEXT,
lock_type TEXT,
timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY(agent_id) REFERENCES agents(id)
);
CREATE TABLE IF NOT EXISTS research_log (
id INTEGER PRIMARY KEY AUTOINCREMENT,
agent_id TEXT,
topic TEXT,
content TEXT,
finding_type TEXT DEFAULT 'note',
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY(agent_id) REFERENCES agents(id)
);
CREATE TABLE IF NOT EXISTS research_topics (
topic TEXT PRIMARY KEY,
status TEXT DEFAULT 'active',
initiated_by TEXT,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
completed_at TIMESTAMP
);
CREATE TABLE IF NOT EXISTS agent_research_participation (
agent_id TEXT,
topic TEXT,
joined_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (agent_id, topic)
);
CREATE TABLE IF NOT EXISTS task_assignments (
id INTEGER PRIMARY KEY AUTOINCREMENT,
agent_id TEXT,
task TEXT,
assigned_by TEXT,
status TEXT DEFAULT 'pending',
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
completed_at TIMESTAMP
);
CREATE TABLE IF NOT EXISTS notifications (
id INTEGER PRIMARY KEY AUTOINCREMENT,
agent_id TEXT,
message TEXT,
is_broadcast BOOLEAN DEFAULT 0,
is_read BOOLEAN DEFAULT 0,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS polls (
id INTEGER PRIMARY KEY AUTOINCREMENT,
question TEXT,
created_by TEXT,
status TEXT DEFAULT 'active',
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS poll_votes (
poll_id INTEGER,
agent_id TEXT,
vote TEXT,
voted_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (poll_id, agent_id)
);
CREATE TABLE IF NOT EXISTS global_settings (
key TEXT PRIMARY KEY,
value TEXT
);
''')
cursor.execute("INSERT OR IGNORE INTO global_settings (key, value) VALUES ('mode', 'isolation')")
conn.commit()
conn.close()
print(f"✅ Agent Hub v2.0 initialized at {DB_PATH}")
# ============ AGENT MANAGEMENT ============
def register_agent(agent_id, name, task, status="idle"):
conn = get_connection()
cursor = conn.cursor()
cursor.execute('''
INSERT OR REPLACE INTO agents (id, name, task, status, last_seen)
VALUES (?, ?, ?, ?, CURRENT_TIMESTAMP)
''', (agent_id, name, task, status))
conn.commit()
conn.close()
print(f"🤖 Agent '{name}' ({agent_id}) registered.")
def update_agent_status(agent_id, status, research_topic=None):
conn = get_connection()
cursor = conn.cursor()
if research_topic:
cursor.execute('''
UPDATE agents SET status = ?, current_research = ?, last_seen = CURRENT_TIMESTAMP
WHERE id = ?
''', (status, research_topic, agent_id))
else:
cursor.execute('''
UPDATE agents SET status = ?, last_seen = CURRENT_TIMESTAMP
WHERE id = ?
''', (status, agent_id))
conn.commit()
conn.close()
# ============ RESEARCH COLLABORATION ============
def start_research(agent_id, topic):
"""@research - Start a new research topic and notify all agents"""
conn = get_connection()
cursor = conn.cursor()
# Create research topic
try:
cursor.execute('''
INSERT INTO research_topics (topic, status, initiated_by)
VALUES (?, 'active', ?)
''', (topic, agent_id))
except sqlite3.IntegrityError:
print(f"⚠️ Research topic '{topic}' already exists")
conn.close()
return
# Add initiator as participant
cursor.execute('''
INSERT OR IGNORE INTO agent_research_participation (agent_id, topic)
VALUES (?, ?)
''', (agent_id, topic))
# Update agent status
cursor.execute('''
UPDATE agents SET status = 'researching', current_research = ?
WHERE id = ?
''', (topic, agent_id))
# Notify all other agents
cursor.execute("SELECT id FROM agents WHERE id != ?", (agent_id,))
other_agents = cursor.fetchall()
for (other_id,) in other_agents:
cursor.execute('''
INSERT INTO notifications (agent_id, message, is_broadcast)
VALUES (?, ?, 0)
''', (other_id, f"🔬 New research started: '{topic}' by {agent_id}. Use '@join {topic}' to participate."))
conn.commit()
conn.close()
print(f"🔬 Research topic '{topic}' started by {agent_id}")
print(f"📢 Notified {len(other_agents)} other agents")
def join_research(agent_id, topic):
"""@join - Join an active research topic"""
conn = get_connection()
cursor = conn.cursor()
# Check if topic exists and is active
cursor.execute("SELECT status FROM research_topics WHERE topic = ?", (topic,))
result = cursor.fetchone()
if not result:
print(f"❌ Research topic '{topic}' not found")
conn.close()
return
if result[0] != 'active':
print(f"⚠️ Research topic '{topic}' is {result[0]}")
conn.close()
return
# Add participant
cursor.execute('''
INSERT OR IGNORE INTO agent_research_participation (agent_id, topic)
VALUES (?, ?)
''', (agent_id, topic))
# Update agent status
cursor.execute('''
UPDATE agents SET status = 'researching', current_research = ?
WHERE id = ?
''', (topic, agent_id))
conn.commit()
conn.close()
print(f"{agent_id} joined research: '{topic}'")
def post_finding(agent_id, topic, content, finding_type="note"):
"""@find - Post a finding to research topic"""
conn = get_connection()
cursor = conn.cursor()
# Check if topic exists
cursor.execute("SELECT status FROM research_topics WHERE topic = ?", (topic,))
result = cursor.fetchone()
if not result:
print(f"❌ Research topic '{topic}' not found")
conn.close()
return
if result[0] != 'active':
print(f"⚠️ Research topic '{topic}' is {result[0]}")
# Add finding
cursor.execute('''
INSERT INTO research_log (agent_id, topic, content, finding_type)
VALUES (?, ?, ?, ?)
''', (agent_id, topic, content, finding_type))
# Update agent status
cursor.execute('''
UPDATE agents SET last_seen = CURRENT_TIMESTAMP WHERE id = ?
''', (agent_id,))
conn.commit()
conn.close()
print(f"📝 Finding added to '{topic}' by {agent_id}")
def generate_consensus(topic):
"""@consensus - Generate consensus document from research findings"""
conn = get_connection()
cursor = conn.cursor()
# Get all findings
cursor.execute('''
SELECT agent_id, content, finding_type, created_at
FROM research_log
WHERE topic = ?
ORDER BY created_at
''', (topic,))
findings = cursor.fetchall()
if not findings:
print(f"⚠️ No findings found for topic '{topic}'")
conn.close()
return
# Get participants
cursor.execute('''
SELECT agent_id FROM agent_research_participation WHERE topic = ?
''', (topic,))
participants = [row[0] for row in cursor.fetchall()]
# Mark topic as completed
cursor.execute('''
UPDATE research_topics
SET status = 'completed', completed_at = CURRENT_TIMESTAMP
WHERE topic = ?
''', (topic,))
conn.commit()
conn.close()
# Generate consensus document
consensus_dir = os.path.join(os.getcwd(), ".agent/consensus")
os.makedirs(consensus_dir, exist_ok=True)
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
filename = f"{topic.replace(' ', '_').replace('/', '_')}_{timestamp}.md"
filepath = os.path.join(consensus_dir, filename)
with open(filepath, 'w', encoding='utf-8') as f:
f.write(f"# 🎯 Consensus: {topic}\n\n")
f.write(f"**Generated**: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n\n")
f.write(f"**Participants**: {', '.join(participants)}\n\n")
f.write("---\n\n")
for agent_id, content, finding_type, created_at in findings:
f.write(f"## [{finding_type.upper()}] {agent_id}\n\n")
f.write(f"*{created_at}*\n\n")
f.write(f"{content}\n\n")
print(f"✅ Consensus generated: {filepath}")
print(f"📊 Total findings: {len(findings)}")
print(f"👥 Participants: {len(participants)}")
return filepath
# ============ TASK MANAGEMENT ============
def assign_task(assigned_by, agent_id, task):
"""@assign - Assign task to specific agent"""
conn = get_connection()
cursor = conn.cursor()
cursor.execute('''
INSERT INTO task_assignments (agent_id, task, assigned_by)
VALUES (?, ?, ?)
''', (agent_id, task, assigned_by))
# Notify the agent
cursor.execute('''
INSERT INTO notifications (agent_id, message, is_broadcast)
VALUES (?, ?, 0)
''', (agent_id, f"📋 New task assigned by {assigned_by}: {task}"))
conn.commit()
conn.close()
print(f"📋 Task assigned to {agent_id} by {assigned_by}")
def list_tasks(agent_id=None):
"""List tasks for an agent or all agents"""
conn = get_connection()
cursor = conn.cursor()
if agent_id:
cursor.execute('''
SELECT id, task, assigned_by, status, created_at
FROM task_assignments
WHERE agent_id = ? AND status != 'completed'
ORDER BY created_at DESC
''', (agent_id,))
tasks = cursor.fetchall()
print(f"\n📋 Tasks for {agent_id}:")
for task_id, task, assigned_by, status, created_at in tasks:
print(f" [{status.upper()}] #{task_id}: {task} (from {assigned_by})")
else:
cursor.execute('''
SELECT agent_id, id, task, assigned_by, status
FROM task_assignments
WHERE status != 'completed'
ORDER BY agent_id
''')
tasks = cursor.fetchall()
print(f"\n📋 All pending tasks:")
current_agent = None
for agent, task_id, task, assigned_by, status in tasks:
if agent != current_agent:
print(f"\n {agent}:")
current_agent = agent
print(f" [{status.upper()}] #{task_id}: {task}")
conn.close()
def complete_task(task_id):
"""Mark a task as completed"""
conn = get_connection()
cursor = conn.cursor()
cursor.execute('''
UPDATE task_assignments
SET status = 'completed', completed_at = CURRENT_TIMESTAMP
WHERE id = ?
''', (task_id,))
if cursor.rowcount > 0:
print(f"✅ Task #{task_id} marked as completed")
else:
print(f"❌ Task #{task_id} not found")
conn.commit()
conn.close()
# ============ NOTIFICATIONS ============
def broadcast_message(from_agent, message):
"""@notify - Broadcast message to all agents"""
conn = get_connection()
cursor = conn.cursor()
cursor.execute("SELECT id FROM agents WHERE id != ?", (from_agent,))
other_agents = cursor.fetchall()
for (agent_id,) in other_agents:
cursor.execute('''
INSERT INTO notifications (agent_id, message, is_broadcast)
VALUES (?, ?, 1)
''', (agent_id, f"📢 Broadcast from {from_agent}: {message}"))
conn.commit()
conn.close()
print(f"📢 Broadcast sent to {len(other_agents)} agents")
def get_notifications(agent_id, unread_only=False):
"""Get notifications for an agent"""
conn = get_connection()
cursor = conn.cursor()
if unread_only:
cursor.execute('''
SELECT id, message, is_broadcast, created_at
FROM notifications
WHERE agent_id = ? AND is_read = 0
ORDER BY created_at DESC
''', (agent_id,))
else:
cursor.execute('''
SELECT id, message, is_broadcast, created_at
FROM notifications
WHERE agent_id = ?
ORDER BY created_at DESC
LIMIT 10
''', (agent_id,))
notifications = cursor.fetchall()
print(f"\n🔔 Notifications for {agent_id}:")
for notif_id, message, is_broadcast, created_at in notifications:
prefix = "📢" if is_broadcast else "🔔"
print(f" {prefix} {message}")
print(f" {created_at}")
# Mark as read
cursor.execute('''
UPDATE notifications SET is_read = 1
WHERE agent_id = ? AND is_read = 0
''', (agent_id,))
conn.commit()
conn.close()
# ============ POLLS ============
def start_poll(agent_id, question):
"""@poll - Start a quick poll"""
conn = get_connection()
cursor = conn.cursor()
cursor.execute('''
INSERT INTO polls (question, created_by, status)
VALUES (?, ?, 'active')
''', (question, agent_id))
poll_id = cursor.lastrowid
# Notify all agents
cursor.execute("SELECT id FROM agents WHERE id != ?", (agent_id,))
other_agents = cursor.fetchall()
for (other_id,) in other_agents:
cursor.execute('''
INSERT INTO notifications (agent_id, message, is_broadcast)
VALUES (?, ?, 0)
''', (other_id, f"🗳️ New poll from {agent_id}: '{question}' (Poll #{poll_id}). Vote with: @vote {poll_id} <yes/no/maybe>"))
conn.commit()
conn.close()
print(f"🗳️ Poll #{poll_id} started: {question}")
return poll_id
def vote_poll(agent_id, poll_id, vote):
"""@vote - Vote on a poll"""
    conn = get_connection()
    cursor = conn.cursor()
    # Ensure the poll exists before recording a vote
    cursor.execute("SELECT status FROM polls WHERE id = ?", (poll_id,))
    if not cursor.fetchone():
        print(f"❌ Poll #{poll_id} not found")
        conn.close()
        return
    cursor.execute('''
        INSERT OR REPLACE INTO poll_votes (poll_id, agent_id, vote)
        VALUES (?, ?, ?)
    ''', (poll_id, agent_id, vote))
conn.commit()
conn.close()
print(f"✅ Vote recorded for poll #{poll_id}: {vote}")
def show_poll_results(poll_id):
"""Show poll results"""
conn = get_connection()
cursor = conn.cursor()
cursor.execute("SELECT question FROM polls WHERE id = ?", (poll_id,))
result = cursor.fetchone()
if not result:
print(f"❌ Poll #{poll_id} not found")
conn.close()
return
question = result[0]
cursor.execute('''
SELECT vote, COUNT(*) FROM poll_votes
WHERE poll_id = ?
GROUP BY vote
''', (poll_id,))
votes = dict(cursor.fetchall())
cursor.execute('''
SELECT agent_id, vote FROM poll_votes
WHERE poll_id = ?
''', (poll_id,))
details = cursor.fetchall()
conn.close()
print(f"\n🗳️ Poll #{poll_id}: {question}")
print("Results:")
for vote, count in votes.items():
print(f" {vote}: {count}")
print("\nVotes:")
for agent, vote in details:
print(f" {agent}: {vote}")
# ============ HANDOVER ============
def request_handover(from_agent, to_agent, context=""):
"""@handover - Request task handover to another agent"""
conn = get_connection()
cursor = conn.cursor()
    # Get current task of from_agent (fall back if unregistered or task is NULL)
    cursor.execute("SELECT task FROM agents WHERE id = ?", (from_agent,))
    result = cursor.fetchone()
    current_task = result[0] if result and result[0] else "current task"
# Create handover notification
message = f"🔄 Handover request from {from_agent}: '{current_task}'"
if context:
message += f" | Context: {context}"
cursor.execute('''
INSERT INTO notifications (agent_id, message, is_broadcast)
VALUES (?, ?, 0)
''', (to_agent, message))
# Update from_agent status
cursor.execute('''
UPDATE agents SET status = 'idle', task = NULL
WHERE id = ?
''', (from_agent,))
conn.commit()
conn.close()
print(f"🔄 Handover requested: {from_agent} -> {to_agent}")
def switch_to(agent_id, to_agent):
"""@switch - Request to switch to specific agent"""
conn = get_connection()
cursor = conn.cursor()
message = f"🔄 {agent_id} requests to switch to you for continuation"
cursor.execute('''
INSERT INTO notifications (agent_id, message, is_broadcast)
VALUES (?, ?, 0)
''', (to_agent, message))
conn.commit()
conn.close()
print(f"🔄 Switch request sent: {agent_id} -> {to_agent}")
# ============ STATUS & MONITORING ============
def get_status():
"""Enhanced status view"""
conn = get_connection()
cursor = conn.cursor()
print("\n" + "="*60)
print("🛰️ ACTIVE AGENTS")
print("="*60)
for row in cursor.execute('''
SELECT name, task, status, current_research, last_seen
FROM agents
ORDER BY last_seen DESC
'''):
        status_emoji = {
            'active': '🟢',
            'idle': '⚪',
            'researching': '🔬',
            'busy': '🔴'
        }.get(row[2], '⚪')
research_info = f" | Research: {row[3]}" if row[3] else ""
print(f"{status_emoji} [{row[2].upper()}] {row[0]}: {row[1]}{research_info}")
print(f" Last seen: {row[4]}")
print("\n" + "="*60)
print("🔬 ACTIVE RESEARCH TOPICS")
print("="*60)
for row in cursor.execute('''
SELECT t.topic, t.initiated_by, t.created_at,
(SELECT COUNT(*) FROM agent_research_participation WHERE topic = t.topic) as participants,
(SELECT COUNT(*) FROM research_log WHERE topic = t.topic) as findings
FROM research_topics t
WHERE t.status = 'active'
ORDER BY t.created_at DESC
'''):
print(f"🔬 {row[0]}")
print(f" Initiated by: {row[1]} | Participants: {row[3]} | Findings: {row[4]}")
print(f" Started: {row[2]}")
print("\n" + "="*60)
print("🔒 FILE LOCKS")
print("="*60)
locks = list(cursor.execute('''
SELECT file_path, agent_id, lock_type
FROM file_locks
ORDER BY timestamp DESC
'''))
if locks:
for file_path, agent_id, lock_type in locks:
lock_emoji = '🔒' if lock_type == 'write' else '🔍'
print(f"{lock_emoji} {file_path} -> {agent_id} ({lock_type})")
else:
print(" No active locks")
print("\n" + "="*60)
print("📋 PENDING TASKS")
print("="*60)
for row in cursor.execute('''
SELECT agent_id, COUNT(*)
FROM task_assignments
WHERE status = 'pending'
GROUP BY agent_id
'''):
print(f" {row[0]}: {row[1]} pending tasks")
    cursor.execute("SELECT value FROM global_settings WHERE key = 'mode'")
    row = cursor.fetchone()
    mode = row[0] if row else 'unknown'
    print(f"\n🌍 Global Mode: {mode.upper()}")
print("="*60)
conn.close()
def show_research_topic(topic):
"""Show detailed view of a research topic"""
conn = get_connection()
cursor = conn.cursor()
cursor.execute("SELECT status, initiated_by, created_at FROM research_topics WHERE topic = ?", (topic,))
result = cursor.fetchone()
if not result:
print(f"❌ Topic '{topic}' not found")
conn.close()
return
status, initiated_by, created_at = result
print(f"\n🔬 Research: {topic}")
print(f"Status: {status} | Initiated by: {initiated_by} | Started: {created_at}")
cursor.execute('''
SELECT agent_id FROM agent_research_participation WHERE topic = ?
''', (topic,))
participants = [row[0] for row in cursor.fetchall()]
print(f"Participants: {', '.join(participants)}")
print("\n--- Findings ---")
cursor.execute('''
SELECT agent_id, content, finding_type, created_at
FROM research_log
WHERE topic = ?
ORDER BY created_at
''', (topic,))
for agent_id, content, finding_type, created_at in cursor.fetchall():
        emoji = {'note': '📝', 'finding': '🔍', 'concern': '⚠️', 'solution': '✅'}.get(finding_type, '📝')
print(f"\n{emoji} [{finding_type.upper()}] {agent_id} ({created_at})")
print(f" {content}")
conn.close()
# ============ MAIN CLI ============
if __name__ == "__main__":
parser = argparse.ArgumentParser(
description="🤖 Agent Sync v2.0 - Multi-Agent Cooperation Protocol",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog="""
QUICK COMMANDS:
@research <topic> Start joint research
@join <topic> Join active research
@find <topic> <content> Post finding to research
@consensus <topic> Generate consensus document
@assign <agent> <task> Assign task to agent
@notify <message> Broadcast to all agents
@handover <agent> [context] Handover task
@switch <agent> Request switch to agent
@poll <question> Start a poll
@vote <poll_id> <vote> Vote on poll
@tasks [agent] List tasks
@complete <task_id> Complete task
@notifications [agent] Check notifications
@topic <topic> View research topic details
EXAMPLES:
python3 agent_sync_v2.py research claude-code "API Design"
python3 agent_sync_v2.py find copilot "API Design" "Use REST instead of GraphQL"
python3 agent_sync_v2.py assign claude-code copilot "Implement REST endpoints"
python3 agent_sync_v2.py consensus "API Design"
"""
)
subparsers = parser.add_subparsers(dest="command", help="Command to execute")
# Legacy commands
subparsers.add_parser("init", help="Initialize the database")
reg = subparsers.add_parser("register", help="Register an agent")
reg.add_argument("id", help="Agent ID")
reg.add_argument("name", help="Agent name")
reg.add_argument("task", help="Current task")
reg.add_argument("--status", default="idle", help="Agent status")
lock = subparsers.add_parser("lock", help="Lock a file")
lock.add_argument("id", help="Agent ID")
lock.add_argument("path", help="File path")
lock.add_argument("--type", default="write", choices=["write", "research"], help="Lock type")
unlock = subparsers.add_parser("unlock", help="Unlock a file")
unlock.add_argument("id", help="Agent ID")
unlock.add_argument("path", help="File path")
subparsers.add_parser("status", help="Show status dashboard")
# New v2.0 commands
research = subparsers.add_parser("research", help="@research - Start joint research topic")
research.add_argument("agent_id", help="Agent initiating research")
research.add_argument("topic", help="Research topic")
join = subparsers.add_parser("join", help="@join - Join active research")
join.add_argument("agent_id", help="Agent joining")
join.add_argument("topic", help="Topic to join")
find = subparsers.add_parser("find", help="@find - Post finding to research")
find.add_argument("agent_id", help="Agent posting finding")
find.add_argument("topic", help="Research topic")
find.add_argument("content", help="Finding content")
find.add_argument("--type", default="note", choices=["note", "finding", "concern", "solution"], help="Type of finding")
consensus = subparsers.add_parser("consensus", help="@consensus - Generate consensus document")
consensus.add_argument("topic", help="Topic to generate consensus for")
assign = subparsers.add_parser("assign", help="@assign - Assign task to agent")
assign.add_argument("from_agent", help="Agent assigning the task")
assign.add_argument("to_agent", help="Agent to assign task to")
assign.add_argument("task", help="Task description")
tasks = subparsers.add_parser("tasks", help="@tasks - List pending tasks")
tasks.add_argument("--agent", help="Filter by agent ID")
complete = subparsers.add_parser("complete", help="@complete - Mark task as completed")
complete.add_argument("task_id", type=int, help="Task ID to complete")
notify = subparsers.add_parser("notify", help="@notify - Broadcast message to all agents")
notify.add_argument("from_agent", help="Agent sending notification")
notify.add_argument("message", help="Message to broadcast")
handover = subparsers.add_parser("handover", help="@handover - Handover task to another agent")
handover.add_argument("from_agent", help="Current agent")
handover.add_argument("to_agent", help="Agent to handover to")
handover.add_argument("--context", default="", help="Handover context")
switch = subparsers.add_parser("switch", help="@switch - Request switch to specific agent")
switch.add_argument("from_agent", help="Current agent")
switch.add_argument("to_agent", help="Agent to switch to")
poll = subparsers.add_parser("poll", help="@poll - Start a quick poll")
poll.add_argument("agent_id", help="Agent starting poll")
poll.add_argument("question", help="Poll question")
vote = subparsers.add_parser("vote", help="@vote - Vote on a poll")
vote.add_argument("agent_id", help="Agent voting")
vote.add_argument("poll_id", type=int, help="Poll ID")
vote.add_argument("vote_choice", choices=["yes", "no", "maybe"], help="Your vote")
poll_results = subparsers.add_parser("poll-results", help="Show poll results")
poll_results.add_argument("poll_id", type=int, help="Poll ID")
notifications = subparsers.add_parser("notifications", help="@notifications - Check notifications")
notifications.add_argument("agent_id", help="Agent to check notifications for")
notifications.add_argument("--unread", action="store_true", help="Show only unread")
topic = subparsers.add_parser("topic", help="@topic - View research topic details")
topic.add_argument("topic_name", help="Topic name")
args = parser.parse_args()
if args.command == "init":
init_db()
elif args.command == "register":
register_agent(args.id, args.name, args.task, args.status)
elif args.command == "lock":
lock_file(args.id, args.path, args.type)
elif args.command == "unlock":
unlock_file(args.id, args.path)
elif args.command == "status":
get_status()
elif args.command == "research":
start_research(args.agent_id, args.topic)
elif args.command == "join":
join_research(args.agent_id, args.topic)
elif args.command == "find":
post_finding(args.agent_id, args.topic, args.content, args.type)
elif args.command == "consensus":
generate_consensus(args.topic)
elif args.command == "assign":
assign_task(args.from_agent, args.to_agent, args.task)
elif args.command == "tasks":
list_tasks(args.agent)
elif args.command == "complete":
complete_task(args.task_id)
elif args.command == "notify":
broadcast_message(args.from_agent, args.message)
elif args.command == "handover":
request_handover(args.from_agent, args.to_agent, args.context)
elif args.command == "switch":
switch_to(args.from_agent, args.to_agent)
elif args.command == "poll":
start_poll(args.agent_id, args.question)
elif args.command == "vote":
vote_poll(args.agent_id, args.poll_id, args.vote_choice)
elif args.command == "poll-results":
show_poll_results(args.poll_id)
elif args.command == "notifications":
get_notifications(args.agent_id, args.unread)
elif args.command == "topic":
show_research_topic(args.topic_name)
else:
parser.print_help()

scripts/macp (new executable file, 110 lines)

#!/bin/bash
# 🤖 MACP Quick Command v2.1 (Unified Edition)
set -euo pipefail
AGENT_ID_FILE=".agent/current_agent"
resolve_agent_id() {
if [ -n "${MACP_AGENT_ID:-}" ]; then
echo "$MACP_AGENT_ID"
return
fi
if [ -f "$AGENT_ID_FILE" ]; then
cat "$AGENT_ID_FILE"
return
fi
echo "Error: MACP agent identity is not set. Export MACP_AGENT_ID or create .agent/current_agent." >&2
exit 1
}
resolve_agent_name() {
python3 - <<'PY2'
import os
import sqlite3
import sys
agent_id = os.environ.get("MACP_AGENT_ID", "").strip()
if not agent_id:
path = os.path.join(os.getcwd(), ".agent", "current_agent")
if os.path.exists(path):
with open(path, "r", encoding="utf-8") as handle:
agent_id = handle.read().strip()
db_path = os.path.join(os.getcwd(), ".agent", "agent_hub.db")
name = agent_id or "Agent"
if agent_id and os.path.exists(db_path):
conn = sqlite3.connect(db_path)
cur = conn.cursor()
cur.execute("SELECT name FROM agents WHERE id = ?", (agent_id,))
row = cur.fetchone()
conn.close()
if row and row[0]:
name = row[0]
sys.stdout.write(name)
PY2
}
AGENT_ID="$(resolve_agent_id)"
export MACP_AGENT_ID="$AGENT_ID"
AGENT_NAME="$(resolve_agent_name)"
CMD="${1:-}"
if [ -z "$CMD" ]; then
echo "Usage: ./scripts/macp [/status|/ping|/study|/broadcast|/summon|/handover|/note|/check|/resolve]" >&2
exit 1
fi
shift
case "$CMD" in
/study)
    TOPIC="${1:?topic required}"
    shift
DESC="$*"
if [ -n "$DESC" ]; then
python3 scripts/agent_sync.py study "$AGENT_ID" "$TOPIC" --desc "$DESC"
else
python3 scripts/agent_sync.py study "$AGENT_ID" "$TOPIC"
fi
;;
/broadcast)
python3 scripts/agent_sync.py broadcast "$AGENT_ID" manual "$*"
;;
/summon)
    TO_AGENT="${1:?target agent required}"
    shift
python3 scripts/agent_sync.py assign "$AGENT_ID" "$TO_AGENT" "$*" --role worker --priority high
;;
/handover)
    TO_AGENT="${1:?target agent required}"
    shift
python3 scripts/agent_sync.py assign "$AGENT_ID" "$TO_AGENT" "$*" --role worker
python3 scripts/agent_sync.py register "$AGENT_ID" "$AGENT_NAME" "Idle"
;;
/note)
    TOPIC="${1:?topic required}"
    shift
python3 scripts/agent_sync.py note "$AGENT_ID" "$TOPIC" "$*" --type note
;;
/check)
python3 scripts/agent_sync.py check
;;
/resolve)
    TOPIC="${1:?topic required}"
    shift
python3 scripts/agent_sync.py resolve "$AGENT_ID" "$TOPIC" "$*"
;;
/ping)
python3 scripts/agent_sync.py status | grep "\["
;;
/status)
python3 scripts/agent_sync.py status
;;
*)
    echo "Usage: ./scripts/macp [/status|/ping|/study|/broadcast|/summon|/handover|/note|/check|/resolve]" >&2
    exit 1
    ;;
esac