# Compare commits: v2026.02.2...v2026.03.0 (53 commits)

Commit SHA1s in this range (author, message, and date columns were empty in the compare view):

f5a983fb4a, 35dec491de, 67de7f1cfc, b954fbca1d, c1411e731d, df78f0454b,
d5931fbc5e, af59959ade, 56a6ddd422, eda495e55f, 3642058292, 5b0464dcdd,
2aff7f1bf4, ba0d63930e, a1568de67b, f4a38a7906, 2e6c61737f, c1e9aca5dc,
6f700fe610, 3927e384cc, e1dac2219e, 9436364b9a, e7b1ff4c54, c4ff4fea7e,
32afc3286e, 3e8b15af46, 658f37baa6, c65ba57553, 7c17dbbe23, c6279240b9,
a8a324500a, 369e8c900c, 83e317a335, c28c3c837b, 701ea0b18f, eb79bc9633,
0c7d427b93, 07bc5f027e, 701fc3e906, d392af66c9, 67cf86fb26, fe98b0e007,
3236d19e28, 354c1eee6b, 6b3eb8064b, f32e90e182, 8001ab18ee, dcfde9c0dc,
dbcf7421ea, 1705baf976, 8e8d478ece, acc9cd7ff2, e4582c3197
.agent/learnings/README.md (new file, 46 lines)
@@ -0,0 +1,46 @@
# `.agent/learnings/` — Engineering Learnings & Reusable Patterns

This directory stores **hard-won engineering insights** discovered during development.
Each file is a standalone Markdown note covering a specific topic, pattern, or gotcha.

The goal is to avoid re-investigating the same issue twice.

---

## Conventions

- **File naming**: `{topic}.md`, e.g., `openwebui-tool-injection.md`
- **Scope**: One clear topic per file. Keep files focused and concise.
- **Format**: Use the template below.

---

## Template

```markdown
# [Topic Title]

> Discovered: YYYY-MM-DD

## Context
Where / when does this apply?

## Finding
What exactly did we learn?

## Solution / Pattern
The code or approach that works.

## Gotchas
Edge cases or caveats to watch out for.
```

---

## Index

| File | Topic |
|------|-------|
| [openwebui-tool-injection.md](./openwebui-tool-injection.md) | How OpenWebUI injects parameters into Tool functions, and what the Pipe must provide |
| [openwebui-mock-request.md](./openwebui-mock-request.md) | How to build a valid Mock Request for calling OpenWebUI-internal APIs from a Pipe |
| [copilot-plan-mode-prompt-parity.md](./copilot-plan-mode-prompt-parity.md) | Why Plan Mode prompt logic must be shared between fresh-session and resume-session injection |
.agent/learnings/copilot-plan-mode-prompt-parity.md (new file, 40 lines)
@@ -0,0 +1,40 @@
# Copilot Plan Mode Prompt Parity

> Discovered: 2026-03-06

## Context

The GitHub Copilot SDK pipe builds system prompts in two paths:

- fresh session creation via `_build_session_config(...)`
- resumed session injection via the `system_parts` rebuild branch

Plan Mode guidance was duplicated across those branches.

## Finding

If Plan Mode instructions are edited in only one branch, resumed sessions silently lose planning behavior or capability hints that fresh sessions still have.

This is especially easy to miss because both branches still work; resumed chats simply receive a weaker or stale prompt.

Session mode switching alone is also not enough. Even when `session.rpc.mode.set(Mode.PLAN)` succeeds, the SDK may still skip creating the expected `plan.md` if the runtime system prompt does not explicitly include the original Plan Mode persistence contract.

## Solution / Pattern

Extract the Plan Mode prompt into one shared helper and call it from both branches:

```python
def _build_plan_mode_context(plan_path: str) -> str:
    ...
```

Then inject it in both places with the chat-specific `plan.md` path.

For extra safety, when the pipe later reads `session.rpc.plan.read()`, mirror the returned content into the chat-specific `COPILOTSDK_CONFIG_DIR/session-state/<chat_id>/plan.md` path. This keeps the UI-visible file in sync even if the SDK persists plan state internally but does not materialize the file where the chat integration expects it.

## Gotchas

- Keep the helper dynamic: the `plan.md` path must still be resolved per chat/session.
- Do not only update debug prompt artifacts; the effective runtime prompt lives in `plugins/pipes/github-copilot-sdk/github_copilot_sdk.py`.
- Resume-session parity matters for capability guidance just as much as for session context.
- If users report that Plan Mode is active but `plan.md` is missing, check both halves: prompt parity and the final `rpc.plan.read()` → `plan.md` sync path.
.agent/learnings/openwebui-mock-request.md (new file, 131 lines)
@@ -0,0 +1,131 @@
# Building a Valid Mock Request for OpenWebUI Pipes

> Discovered: 2026-03-05

## Context

OpenWebUI Pipes run as a Pipe plugin, not as a real HTTP request handler. When the Pipe needs to call OpenWebUI-internal APIs (like `generate_chat_completion`, `get_tools`, etc.) or load Tools that do the same, it must provide a **fake-but-complete Request object**.

## Finding

OpenWebUI's internal functions expect `request` to satisfy several contracts:

```
request.app.state.MODELS                 → dict {model_id: ModelModel} — MUST be populated!
request.app.state.config                 → config object with all env variables
request.app.state.TOOLS                  → dict (can start empty)
request.app.state.FUNCTIONS              → dict (can start empty)
request.app.state.redis                  → None is fine
request.app.state.TOOL_SERVERS           → [] is fine
request.app.url_path_for(name, **params) → str
request.headers                          → dict with Authorization, host, user-agent
request.state.user                       → user dict
request.state.token.credentials          → str (the Bearer token, without "Bearer " prefix)
await request.json()                     → dict (the raw request body)
await request.body()                     → bytes (the raw request body as JSON bytes)
```
## Solution / Pattern

```python
from types import SimpleNamespace
import json as _json_mod


def _build_openwebui_request(user: dict, token: str, body: dict = None):
    from open_webui.config import PERSISTENT_CONFIG_REGISTRY
    from open_webui.models.models import Models as _Models

    # 1. Build config from registry
    config = SimpleNamespace()
    for item in PERSISTENT_CONFIG_REGISTRY:
        val = item.value
        if hasattr(val, "value"):
            val = val.value
        setattr(config, item.env_name, val)

    # 2. Populate MODELS from DB — critical for model validation
    system_models = {}
    try:
        for m in _Models.get_all_models():
            system_models[m.id] = m
    except Exception:
        pass

    # 3. Build app_state
    app_state = SimpleNamespace(
        config=config,
        TOOLS={},
        TOOL_CONTENTS={},
        FUNCTIONS={},
        FUNCTION_CONTENTS={},
        MODELS=system_models,  # <-- KEY: must not be empty!
        redis=None,
        TOOL_SERVERS=[],
    )

    # 4. url_path_for helper
    def url_path_for(name: str, **params):
        if name == "get_file_content_by_id":
            return f"/api/v1/files/{params.get('id')}/content"
        return f"/mock/{name}"

    app = SimpleNamespace(state=app_state, url_path_for=url_path_for)

    # 5. Async body helpers
    async def _json():
        return body or {}

    async def _body_fn():
        return _json_mod.dumps(body or {}).encode("utf-8")

    # 6. Headers
    headers = {
        "user-agent": "Mozilla/5.0",
        "host": "localhost:8080",
        "accept": "*/*",
    }
    if token:
        headers["Authorization"] = token if token.startswith("Bearer ") else f"Bearer {token}"

    return SimpleNamespace(
        app=app,
        headers=headers,
        method="POST",
        cookies={},
        base_url="http://localhost:8080",
        url=SimpleNamespace(path="/api/chat/completions", base_url="http://localhost:8080"),
        state=SimpleNamespace(
            token=SimpleNamespace(credentials=token or ""),
            user=user or {},
        ),
        json=_json,
        body=_body_fn,
    )
```
## Token Extraction

Tokens can be found in multiple places. Check in order:

```python
# 1. Directly in the body (some SDK requests embed it)
token = body.get("token")

# 2. In metadata
token = token or (metadata or {}).get("token")

# 3. In the original __request__ Authorization header
if not token and __request__ is not None:
    auth = getattr(__request__, "headers", {}).get("Authorization", "")
    if auth.startswith("Bearer "):
        token = auth.split(" ", 1)[1]
```
## Gotchas

- **`app.state.MODELS` empty = "Model not found"** for *any* model ID, even correct ones.
- `TOOL_SERVER_CONNECTIONS` must be synced from the DB, not from the in-memory cache (stale in multi-worker setups).
- `request.state.token.credentials` should be the **raw token** (no "Bearer " prefix).
- Tools may call `await request.json()` — it must be an async method, not a regular attribute.
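The async-body contract is the part that most often breaks, and it can be exercised in isolation. This is a trimmed, dependency-free sketch (no `open_webui` imports; the function name and field values are illustrative) that only demonstrates the `json()` / `body()` / `credentials` shape:

```python
import asyncio
import json
from types import SimpleNamespace


def build_minimal_mock_request(body: dict, token: str):
    # json() and body() must be ASYNC callables, not plain attributes,
    # because callers do `await request.json()`.
    async def _json():
        return body

    async def _body():
        return json.dumps(body).encode("utf-8")

    return SimpleNamespace(
        headers={"Authorization": f"Bearer {token}"},
        state=SimpleNamespace(
            # Raw token here: no "Bearer " prefix.
            token=SimpleNamespace(credentials=token),
            user={"id": "u1"},
        ),
        json=_json,
        body=_body,
    )


req = build_minimal_mock_request({"model": "gpt-4o"}, "sk-test-123")
assert asyncio.run(req.json()) == {"model": "gpt-4o"}
assert asyncio.run(req.body()) == b'{"model": "gpt-4o"}'
assert req.state.token.credentials == "sk-test-123"
```

Note the asymmetry the full builder also has to respect: `headers["Authorization"]` carries the `Bearer ` prefix, while `state.token.credentials` carries the raw token.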
.agent/learnings/openwebui-tool-injection.md (new file, 83 lines)
@@ -0,0 +1,83 @@
# OpenWebUI Tool Parameter Injection

> Discovered: 2026-03-05

## Context

When OpenWebUI loads a Python Tool and calls one of its functions (e.g. `generate_mind_map`), it automatically matches parameters from an `extra_params` dict against the function's signature **by name**. This is done in:

```
open_webui/utils/tools.py → get_async_tool_function_and_apply_extra_params()
```

The lookup is: `extra_params = {k: v for k, v in extra_params.items() if k in sig.parameters}`
## Finding

A Tool function declares its dependencies via its parameter names. Common injected names:

| Parameter Name | What it contains |
|---|---|
| `__user__` | User context dict (id, email, role, name) |
| `__event_emitter__` | Async callable to emit status/notification events |
| `__event_call__` | Async callable for JS `__event_call__` roundtrips |
| `__request__` | Request-like object (must have `.app.state.MODELS`) |
| `__metadata__` | Dict: `{model, base_model_id, chat_id, ...}` |
| `__messages__` | Full conversation history list |
| `__chat_id__` | Current chat UUID |
| `__message_id__` | Current message UUID |
| `__session_id__` | Current session UUID |
| `__files__` | Files attached to the current message |
| `__task__` | Task type string (e.g. `title_generation`) |
| `body` | Raw request body dict (non-dunder variant) |
| `request` | Request object (non-dunder variant) |
## Key Rule

**`extra_params` must contain ALL keys a Tool's function signature declares.**
If a key is missing from `extra_params`, the parameter silently receives its default value (e.g. `{}` for `__metadata__`). This means the Tool appears to work but gets empty/wrong context.
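The silent-default behavior follows directly from the signature filter shown in the Context section, and can be reproduced standalone (the tool function below is a hypothetical stand-in, not real OpenWebUI code):

```python
import inspect


def demo_tool(__user__=None, __metadata__={}, topic: str = ""):
    # Hypothetical Tool function: its parameter NAMES declare its dependencies.
    # The mutable {} default mirrors the "silently empty" failure mode.
    return {"user": __user__, "meta": __metadata__, "topic": topic}


extra_params = {
    "__user__": {"id": "u1", "role": "admin"},
    "__event_emitter__": lambda e: None,  # not in the signature → filtered out
    # "__metadata__" is MISSING here → demo_tool silently falls back to {}
}

# This is the same filter OpenWebUI applies before calling the Tool:
sig = inspect.signature(demo_tool)
filtered = {k: v for k, v in extra_params.items() if k in sig.parameters}

result = demo_tool(**filtered, topic="mind map")
assert result["user"] == {"id": "u1", "role": "admin"}
assert result["meta"] == {}  # no error raised — just empty context
```

No exception is thrown at any point, which is exactly why a missing key is so easy to ship: the Tool runs, but with empty or wrong context.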
## Solution / Pattern

When a Pipe calls an OpenWebUI Tool, it must populate `extra_params` with **all** of the above:

```python
extra_params = {
    "__request__": request,   # Must have app.state.MODELS populated!
    "request": request,       # Non-dunder alias
    "__user__": user_data,
    "__event_emitter__": __event_emitter__,
    "__event_call__": __event_call__,
    "__messages__": messages,
    "__metadata__": __metadata__ or {},
    "__chat_id__": chat_id,
    "__message_id__": message_id,
    "__session_id__": session_id,
    "__files__": files,
    "__task__": task,
    "__task_body__": task_body,
    "body": body,             # Non-dunder alias
    # ... plus any further keys the Tool's signature declares
}
```
## Model Resolution

Tools that call `generate_chat_completion` internally need a **valid model ID**. When the conversation is running under a Pipe/Manifold model (e.g. `github_copilot.gpt-4o`), the Tool's `valves.MODEL_ID` must be a *real* model known to the system.

`generate_chat_completion` validates model IDs against `request.app.state.MODELS`.
➡️ That dict **must be populated** from the database (see `openwebui-mock-request.md`).
## Gotchas

- Tools call `generate_chat_completion` with a `request` arg that must be the full Mock Request.
- If `app.state.MODELS` is empty, even a correctly-spelled model ID will cause "Model not found".
- `__metadata__['model']` can be a **dict** (from DB) **or a string** (manifold ID). Tools must handle both types.
- For manifold models not in the DB, strip the prefix: `github_copilot.gpt-4o` → `gpt-4o`.
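The dict-vs-string metadata gotcha and the prefix strip can be handled together in one small normalizer. This is a sketch, not the pipe's actual code; the `github_copilot` prefix follows the example above, and the function name is made up:

```python
def resolve_model_id(model, pipe_prefix: str = "github_copilot") -> str:
    # __metadata__['model'] may be a dict (from DB) or a plain string (manifold ID).
    model_id = model.get("id", "") if isinstance(model, dict) else str(model)
    # Strip only a KNOWN pipe prefix: a blind split on "." would mangle
    # model names that legitimately contain dots (e.g. "gpt-4.1").
    prefix = pipe_prefix + "."
    if model_id.startswith(prefix):
        model_id = model_id[len(prefix):]
    return model_id


assert resolve_model_id("github_copilot.gpt-4o") == "gpt-4o"
assert resolve_model_id({"id": "gpt-4o"}) == "gpt-4o"
assert resolve_model_id("gpt-4.1") == "gpt-4.1"  # untouched: no known prefix
```

Anchoring the strip to a known prefix (rather than the first `.`) is the design choice worth keeping: manifold IDs and dotted model names are otherwise indistinguishable.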
@@ -138,6 +138,18 @@ Before completing an antigravity operation, confirm:

- [ ] Database changes are idempotent (safe to re-run)
- [ ] Timeout guards are in place for all async calls to external systems
- [ ] The user can observe progress through status/notification events
- [ ] Non-obvious findings / gotchas are saved to `.agent/learnings/{topic}.md`

---

## Mandatory: Knowledge Capture

Any non-obvious pattern, internal API contract, or workaround discovered during an antigravity session **MUST** be saved to `.agent/learnings/{topic}.md` before the session ends. This ensures hard-won insights are not lost between sessions.

**Format**: See `.agent/learnings/README.md`
**Existing entries**: Browse `.agent/learnings/` for prior knowledge to reuse.

---

@@ -145,3 +157,4 @@ Before completing an antigravity operation, confirm:

- Full engineering spec: `.github/copilot-instructions.md` → Section: **Antigravity Development Mode**
- Design document: `docs/development/copilot-engineering-plan.md` → Section 5
- Knowledge base: `.agent/learnings/` — reusable engineering patterns and gotchas
.agent/skills/README.md (new file, 71 lines)
@@ -0,0 +1,71 @@
# Agent Skills Index

This folder contains reusable Agent Skills for GitHub Copilot / VS Code custom agent workflows.

## Available Skills

- **community-announcer**
  - Purpose: Generate community announcement content and related assets.
  - Entry: `community-announcer/SKILL.md`

- **doc-mirror-sync**
  - Purpose: Sync mirrored documentation content and helper scripts.
  - Entry: `doc-mirror-sync/SKILL.md`

- **gh-issue-replier**
  - Purpose: Draft standardized issue replies with templates.
  - Entry: `gh-issue-replier/SKILL.md`

- **gh-issue-scheduler**
  - Purpose: Schedule and discover unanswered issues for follow-up.
  - Entry: `gh-issue-scheduler/SKILL.md`

- **i18n-validator**
  - Purpose: Validate translation key consistency across i18n dictionaries.
  - Entry: `i18n-validator/SKILL.md`

- **plugin-scaffolder**
  - Purpose: Scaffold OpenWebUI plugin boilerplate with repository standards.
  - Entry: `plugin-scaffolder/SKILL.md`

- **version-bumper**
  - Purpose: Assist with semantic version bumping workflows.
  - Entry: `version-bumper/SKILL.md`

- **xlsx-single-file**
  - Purpose: Single-file spreadsheet operations workflow without LibreOffice.
  - Entry: `xlsx-single-file/SKILL.md`
---

## Release Pipeline Skills

These four skills form a complete release pipeline and are designed to be used in sequence:

```
release-prep → pr-submitter → pr-reviewer         → release-finalizer
  (prepare)     (push & PR)    (respond to review)   (merge & close issue)
```

- **release-prep**
  - Purpose: Full release preparation — version sync across 7+ files, bilingual release notes creation, consistency check, and commit.
  - Entry: `release-prep/SKILL.md`

- **pr-submitter**
  - Purpose: Shell-escape-safe PR submission — writes the body to a temp file, validates sections, pushes the branch, creates the PR via `gh pr create --body-file`.
  - Entry: `pr-submitter/SKILL.md`

- **pr-reviewer**
  - Purpose: Fetch PR review comments, categorize feedback, implement fixes, commit and push, reply to reviewers.
  - Entry: `pr-reviewer/SKILL.md`

- **release-finalizer**
  - Purpose: Merge the release PR to main with a proper commit message, auto-link and close related issues, post closing messages.
  - Entry: `release-finalizer/SKILL.md`

## Notes

- Skill definitions follow the expected location pattern:
  - `.github/skills/<skill-name>/SKILL.md`
- Each skill may include optional `assets/`, `references/`, and `scripts/` folders.
- This directory mirrors `.gemini/skills` for compatibility.
.agent/skills/community-announcer/SKILL.md (new file, 23 lines)
@@ -0,0 +1,23 @@
---
name: community-announcer
description: Drafts engaging English and Chinese update announcements for the OpenWebUI Community and other social platforms. Use when a new version is released.
---

# Community Announcer

## Overview
Automates the drafting of high-impact update announcements.

## Workflow
1. **Source Intel**: Read the latest version's `What's New` section from `README.md`.
2. **Drafting**: Create two versions:
   - **Community Post**: Professional, structured, technical.
   - **Catchy Short**: For Discord/Twitter, use emojis and bullet points.
3. **Multi-language**: Generate BOTH English and Chinese versions automatically.

## Announcement Structure (Recommended)
- **Headline**: "Update vX.X.X - [Main Feature]"
- **Introduction**: Brief context.
- **Key Highlights**: Bulleted list of fixes/features.
- **Action**: "Download from [Market Link]"
- **Closing**: Thanks and Star request.
.agent/skills/doc-mirror-sync/SKILL.md (new file, 50 lines)
@@ -0,0 +1,50 @@
---
name: doc-mirror-sync
description: Automatically synchronizes plugin READMEs to the official documentation directory (docs/). Use after editing a plugin's local documentation to keep the MkDocs site up to date.
---

# Doc Mirror Sync

## Overview
Automates the mirroring of `plugins/{type}/{name}/README.md` to `docs/plugins/{type}/{name}.md`.

## Docs-Only Mode (No Release Changes)
Use this mode when the request is "only sync docs".

- Only update documentation mirror files under `docs/plugins/**`.
- Do **not** bump the plugin version.
- Do **not** modify plugin code (`plugins/**.py`) unless explicitly requested.
- Do **not** update root badges/dates for release.
- Do **not** run release preparation steps.

## Workflow
1. Identify changed READMEs.
2. Copy content to the corresponding mirror paths.
3. Update version badges in `docs/plugins/{type}/index.md`.

## Commands

### Sync all mirrors (EN + ZH)

```bash
python .github/skills/doc-mirror-sync/scripts/sync.py
```

### Sync only one plugin (EN only)

```bash
cp plugins/<type>/<name>/README.md docs/plugins/<type>/<name>.md
```

### Sync only one plugin (EN + ZH)

```bash
cp plugins/<type>/<name>/README.md docs/plugins/<type>/<name>.md
cp plugins/<type>/<name>/README_CN.md docs/plugins/<type>/<name>.zh.md
```

## Notes

- If asked for an English-only update, sync only the `README.md` → `.md` mirror.
- If both languages are requested, sync both `README.md` and `README_CN.md`.
- After syncing, verify the git diff only contains docs file changes.
.agent/skills/doc-mirror-sync/scripts/sync.py (new file, 38 lines)
@@ -0,0 +1,38 @@
#!/usr/bin/env python3
import os
import shutil


def sync_mirrors():
    plugins_root = "plugins"
    docs_root = "docs/plugins"

    types = ["actions", "filters", "pipes", "pipelines", "tools"]

    for t in types:
        src_type_dir = os.path.join(plugins_root, t)
        dest_type_dir = os.path.join(docs_root, t)

        if not os.path.exists(src_type_dir):
            continue
        os.makedirs(dest_type_dir, exist_ok=True)

        for name in os.listdir(src_type_dir):
            plugin_dir = os.path.join(src_type_dir, name)
            if not os.path.isdir(plugin_dir):
                continue

            # Sync README.md -> docs/plugins/{type}/{name}.md
            src_readme = os.path.join(plugin_dir, "README.md")
            if os.path.exists(src_readme):
                dest_readme = os.path.join(dest_type_dir, f"{name}.md")
                shutil.copy(src_readme, dest_readme)
                print(f"✅ Mirrored: {t}/{name} (EN)")

            # Sync README_CN.md -> docs/plugins/{type}/{name}.zh.md
            src_readme_cn = os.path.join(plugin_dir, "README_CN.md")
            if os.path.exists(src_readme_cn):
                dest_readme_zh = os.path.join(dest_type_dir, f"{name}.zh.md")
                shutil.copy(src_readme_cn, dest_readme_zh)
                print(f"✅ Mirrored: {t}/{name} (ZH)")


if __name__ == "__main__":
    sync_mirrors()
.agent/skills/gh-issue-replier/SKILL.md (new file, 51 lines)
@@ -0,0 +1,51 @@
---
name: gh-issue-replier
description: Professional English replier for GitHub issues. Use when a task is completed, a bug is fixed, or more info is needed from the user. Automates replying using the 'gh' CLI tool.
---

# Gh Issue Replier

## Overview

The `gh-issue-replier` skill enables the agent to interact with GitHub issues professionally. It enforces English for all communications and leverages the `gh` CLI to post comments.

## Workflow

1. **Identify the Issue**: Find the issue number (e.g., #49).
2. **Check Star Status**: Run the bundled script to check if the author has starred the repo.
   * Command: `bash scripts/check_star.sh <issue-number>`
   * Interpretation:
     * Exit code **0**: User has starred. Use "Already Starred" templates.
     * Exit code **1**: User has NOT starred. Include a "Star Request" in the reply.
3. **Select a Template**: Load [templates.md](references/templates.md) to choose a suitable English response pattern.
4. **Draft the Reply**: Compose a concise message based on the star status.
5. **Post the Comment**: Use the `gh` tool to submit the reply.

## Tool Integration

### Check Star Status
```bash
bash scripts/check_star.sh <issue-number>
```

### Post Comment
```bash
gh issue comment <issue-number> --body "<message-body>"
```

Example (if the user has NOT starred):
```bash
gh issue comment 49 --body "This has been fixed in v1.2.7. If you find this helpful, a star on the repo would be much appreciated! ⭐"
```

Example (if the user HAS starred):
```bash
gh issue comment 49 --body "This has been fixed in v1.2.7. Thanks for your support!"
```

## Guidelines

- **Language**: ALWAYS use English for the comment body, even if the system prompt or user conversation is in another language.
- **Tone**: Professional, helpful, and appreciative.
- **Precision**: When announcing a fix, mention the specific version or the logic change (e.g., "Updated regex pattern").
- **Closing**: If the issue is resolved and you have permission, you can also use `gh issue close <number>`.
@@ -0,0 +1,17 @@

# Reference Documentation for Gh Issue Replier

This is a placeholder for detailed reference documentation.
Replace with actual reference content or delete if not needed.

## Structure Suggestions

### API Reference Example
- Overview
- Authentication
- Endpoints with examples
- Error codes

### Workflow Guide Example
- Prerequisites
- Step-by-step instructions
- Best practices
.agent/skills/gh-issue-replier/references/templates.md (new file, 45 lines)
@@ -0,0 +1,45 @@
# Issue Reply Templates

Use these templates to craft professional English replies. Adjust placeholders like `@username`, `v1.2.x`, and `[commit hash]` as needed.

## 1. Acknowledging a New Issue
Use when you first see an issue and want to let the user know you are working on it.

- "Thank you for reporting this! I'm looking into it right now."
- "Thanks for bringing this to my attention. I'll try to reproduce this behavior and get back to you shortly."

## 2. Requesting More Information
Use when you need logs or specific details to fix the bug.

- "Could you please provide the **'Original'** vs **'Normalized'** content from your browser console logs (F12)? It would help a lot in debugging."
- "It would be very helpful if you could share the specific Markdown text that triggered this issue."

## 3. Announcing a Fix
Use when you have pushed the fix to the repository.

- "This has been fixed in version **v1.2.x**. You can update the plugin to resolve it."
- "I've just pushed a fix for this in [commit hash]. Please let me know if it works for you after updating."
- "The issue was caused by a greedy regex pattern. I've updated it to use a tempered greedy token to prevent incorrect merging."

## 4. Guiding to Official Market
Always provide the official market link to ensure the user gets the latest verified version.

- "The fix is now live! You can download the latest version from the official OpenWebUI Community page here: [Plugin Market Link]. Simply update the function in your OpenWebUI instance to apply the changes."
- "I recommend getting the updated version from the official store: [Link]. It includes the fix for the spacing issue we discussed."

## 5. Closing the Issue
Use when the issue is confirmed resolved.

- "Glad to hear it's working now! Closing this for now. Feel free to reopen it if the problem persists."
- "Since this is resolved, I'm closing this issue. Thanks again for your feedback!"

## 6. Pro-tip: Star Request
Gently handle star requests based on the user's current status.

### If User has NOT starred:
- "If you find this plugin helpful, a star on the repo would be much appreciated! ⭐"
- "We'd love your support! If this fixed your issue, please consider starring the repository. ⭐"

### If User HAS already starred:
- "Thanks again for starring the project and for your continuous support!"
- "I appreciate your support and for being a stargazer of this project!"
.agent/skills/gh-issue-replier/scripts/check_star.sh (new executable file, 31 lines)
@@ -0,0 +1,31 @@
#!/usr/bin/env bash

# Robust Star Checker v2
# Usage: ./check_star.sh <issue_number>

ISSUE_NUM=$1
if [ -z "$ISSUE_NUM" ]; then exit 2; fi

# 1. Get Repo and Author info
REPO_FULL=$(gh repo view --json owner,name -q ".owner.login + \"/\" + .name")
USER_LOGIN=$(gh issue view "$ISSUE_NUM" --json author -q ".author.login")

# 2. Use GraphQL for high precision (detects stars even when REST 404s)
IS_STARRED=$(gh api graphql -f query='
query($owner:String!, $repo:String!, $user:String!) {
  repository(owner:$owner, name:$repo) {
    stargazers(query:$user, first:1) {
      nodes {
        login
      }
    }
  }
}' -f owner="${REPO_FULL%/*}" -f repo="${REPO_FULL#*/}" -f user="$USER_LOGIN" -q ".data.repository.stargazers.nodes[0].login")

if [ "$IS_STARRED" == "$USER_LOGIN" ]; then
  echo "Confirmed: @$USER_LOGIN HAS starred $REPO_FULL. ⭐"
  exit 0
else
  echo "Confirmed: @$USER_LOGIN has NOT starred $REPO_FULL."
  exit 1
fi
.agent/skills/gh-issue-scheduler/SKILL.md (new file, 42 lines)
@@ -0,0 +1,42 @@
---
name: gh-issue-scheduler
description: Finds all open GitHub issues that haven't been replied to by the owner, summarizes them, and generates a solution plan. Use when the user wants to audit pending tasks or plan maintenance work.
---

# Gh Issue Scheduler

## Overview

The `gh-issue-scheduler` skill helps maintainers track community feedback by identifying unaddressed issues and drafting actionable technical plans to resolve them.

## Workflow

1. **Identify Unanswered Issues**: Run the bundled script to fetch issues without owner replies.
   * Command: `bash scripts/find_unanswered.sh`
2. **Analyze and Summarize**: For each identified issue, summarize the core problem and the user's intent.
3. **Generate Solution Plans**: Draft a technical "Action Plan" for each issue, including:
   * **Root Cause Analysis** (if possible)
   * **Proposed Fix/Implementation**
   * **Verification Strategy**
4. **Present to User**: Display a structured report of all pending issues and their respective plans.

## Tool Integration

### Find Unanswered Issues
```bash
bash scripts/find_unanswered.sh
```

## Report Format

When presenting the summary, use the following Markdown structure:

### 📋 Unanswered Issues Audit

#### Issue #[Number]: [Title]
- **Author**: @username
- **Summary**: Concise description of the problem.
- **Action Plan**:
  1. Step 1 (e.g., Investigate file X)
  2. Step 2 (e.g., Apply fix Y)
  3. Verification (e.g., Run test Z)
.agent/skills/gh-issue-scheduler/scripts/find_unanswered.sh (new executable file, 42 lines)
@@ -0,0 +1,42 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
# Fetch all open issues and filter for those without responses from the owner/collaborators.
|
||||
# Uses 'gh' CLI.
|
||||
|
||||
REPO_FULL=$(gh repo view --json owner,name -q ".owner.login + "/" + .name")
|
||||
OWNER=${REPO_FULL%/*}
|
||||
|
||||
# 1. Get all open issues
|
||||
OPEN_ISSUES=$(gh issue list --state open --json number,title,author,createdAt --limit 100)
|
||||
|
||||
echo "Analysis for repository: $REPO_FULL"
|
||||
echo "------------------------------------"
|
||||
|
||||
# Process each issue
|
||||
echo "$OPEN_ISSUES" | jq -c '.[]' | while read -r issue; do
|
||||
NUMBER=$(echo "$issue" | jq -r '.number')
|
||||
TITLE=$(echo "$issue" | jq -r '.title')
|
||||
AUTHOR=$(echo "$issue" | jq -r '.author.login')
|
||||
|
||||
# Check comments for owner responses
|
||||
# We look for comments where the author is the repo owner
|
||||
COMMENTS=$(gh issue view "$NUMBER" --json comments -q ".comments[].author.login" 2>/dev/null)
|
||||
|
||||
HAS_OWNER_REPLY=false
|
||||
for COMMENT_AUTHOR in $COMMENTS; do
|
||||
if [ "$COMMENT_AUTHOR" == "$OWNER" ]; then
|
||||
HAS_OWNER_REPLY=true
|
||||
break
|
||||
fi
|
||||
done
|
||||
|
||||
if [ "$HAS_OWNER_REPLY" == "false" ]; then
|
||||
echo "ISSUE_START"
|
||||
echo "ID: $NUMBER"
|
||||
echo "Title: $TITLE"
|
||||
echo "Author: $AUTHOR"
|
||||
echo "Description:"
|
||||
gh issue view "$NUMBER" --json body -q ".body"
|
||||
echo "ISSUE_END"
|
||||
fi
|
||||
done
|
||||
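The owner-reply filter at the heart of the script can be mirrored as a pure function for unit testing. This is a sketch; the data shapes are simplifications of the `gh` CLI JSON, not its exact schema.

```python
def unanswered_issues(issues, comments_by_issue, owner):
    """Return the numbers of issues that have no comment from `owner`.

    `issues` is a list of dicts with a 'number' key;
    `comments_by_issue` maps an issue number to the list of
    comment author logins, mirroring the script's COMMENTS loop.
    """
    return [
        issue["number"]
        for issue in issues
        if owner not in comments_by_issue.get(issue["number"], [])
    ]
```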
14  .agent/skills/i18n-validator/SKILL.md  Normal file
@@ -0,0 +1,14 @@
---
name: i18n-validator
description: Validates multi-language consistency in the TRANSLATIONS dictionary of a plugin. Use to check if any language keys are missing or if translations need updating.
---

# I18n Validator

## Overview
Ensures all 12 supported languages (en-US, zh-CN, etc.) have aligned translation keys.

## Features
- Detects missing keys in non-English dictionaries.
- Suggests translations using the core AI engine.
- Validates the `fallback_map` for variant redirects.
54  .agent/skills/i18n-validator/scripts/validate_i18n.py  Normal file
@@ -0,0 +1,54 @@
#!/usr/bin/env python3
import sys
import ast
import os

def check_i18n(file_path):
    if not os.path.exists(file_path):
        print(f"Error: File not found {file_path}")
        return

    with open(file_path, 'r', encoding='utf-8') as f:
        tree = ast.parse(f.read())

    translations = {}
    for node in tree.body:
        if isinstance(node, ast.Assign):
            for target in node.targets:
                if isinstance(target, ast.Name) and target.id == "TRANSLATIONS":
                    translations = ast.literal_eval(node.value)
                    break

    if not translations:
        print("⚠️ No TRANSLATIONS dictionary found.")
        return

    # Base keys from English
    base_lang = "en-US"
    if base_lang not in translations:
        print(f"❌ Error: {base_lang} missing in TRANSLATIONS.")
        return

    base_keys = set(translations[base_lang].keys())
    print(f"🔍 Analyzing {file_path}...")
    print(f"Standard keys ({len(base_keys)}): {', '.join(sorted(base_keys))}\n")

    for lang, keys in translations.items():
        if lang == base_lang:
            continue
        lang_keys = set(keys.keys())
        missing = base_keys - lang_keys
        extra = lang_keys - base_keys

        if missing:
            print(f"❌ {lang}: Missing {len(missing)} keys: {', '.join(missing)}")
        if extra:
            print(f"⚠️ {lang}: Has {len(extra)} extra keys: {', '.join(extra)}")
        if not missing and not extra:
            print(f"✅ {lang}: Aligned.")

if __name__ == "__main__":
    if len(sys.argv) < 2:
        print("Usage: validate_i18n.py <path_to_plugin.py>")
        sys.exit(1)
    check_i18n(sys.argv[1])
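The core missing/extra comparison performed by the validator can be exercised standalone. `diff_keys` below is a hypothetical helper, not part of the script, mirroring its set-difference logic:

```python
def diff_keys(translations, base_lang="en-US"):
    """Compare each language's keys against the base language's keys.

    Returns {lang: (missing, extra)} with both sides sorted,
    matching the validator's missing/extra reporting.
    """
    base_keys = set(translations[base_lang])
    return {
        lang: (sorted(base_keys - set(table)), sorted(set(table) - base_keys))
        for lang, table in translations.items()
        if lang != base_lang
    }
```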
19  .agent/skills/plugin-scaffolder/SKILL.md  Normal file
@@ -0,0 +1,19 @@
---
name: plugin-scaffolder
description: Generates a standardized single-file i18n Python plugin template based on project standards. Use when starting new plugin development to skip boilerplate writing.
---

# Plugin Scaffolder

## Overview
Generates compliant OpenWebUI plugin templates with built-in i18n, common utility methods, and required docstring fields.

## Usage
1. Provide the **Plugin Name** and **Type** (action/filter/pipe).
2. The skill will generate the `.py` file and the bilingual `README` files.

## Template Standard
- `Valves(BaseModel)` with `UPPER_SNAKE_CASE`
- `_get_user_context` with JS fallback and timeout
- `_emit_status` and `_emit_debug_log` methods
- Standardized docstring metadata
34  .agent/skills/plugin-scaffolder/assets/README_template.md  Normal file
@@ -0,0 +1,34 @@
# {{TITLE}}

**Author:** [Fu-Jie](https://github.com/Fu-Jie/openwebui-extensions) | **Version:** 0.1.0 | **Project:** [OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions) | **License:** MIT

{{DESCRIPTION}}

## 🔥 What's New in v0.1.0

* Initial release of {{TITLE}}.

## 🌐 Multilingual Support

Supports automatic interface and status switching for the following languages:
`English`, `简体中文`, `繁體中文 (香港)`, `繁體中文 (台灣)`, `한국어`, `日本語`, `Français`, `Deutsch`, `Español`, `Italiano`, `Tiếng Việt`, `Bahasa Indonesia`.

## ✨ Core Features

* Feature 1
* Feature 2

## How to Use 🛠️

1. Install the plugin in Open WebUI.
2. Configure settings in Valves.

## Configuration (Valves) ⚙️

| Parameter | Default | Description |
| :--- | :--- | :--- |
| `priority` | `50` | Execution priority. |

## ⭐ Support

If this plugin has been useful, a star on [OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions) is a big motivation for me. Thank you for the support.
80  .agent/skills/plugin-scaffolder/assets/template.py  Normal file
@@ -0,0 +1,80 @@
"""
title: {{TITLE}}
author: Fu-Jie
author_url: https://github.com/Fu-Jie/openwebui-extensions
funding_url: https://github.com/open-webui
version: 0.1.0
description: {{DESCRIPTION}}
"""

import asyncio
import logging
import json
from typing import Optional, Dict, Any, List, Callable, Awaitable
from pydantic import BaseModel, Field
from fastapi import Request

# Configure logging
logging.basicConfig(level=logging.INFO, format="%(asctime)s - %(name)s - %(levelname)s - %(message)s")
logger = logging.getLogger(__name__)

TRANSLATIONS = {
    "en-US": {"status_starting": "Starting {{TITLE}}..."},
    "zh-CN": {"status_starting": "正在启动 {{TITLE}}..."},
    "zh-HK": {"status_starting": "正在啟動 {{TITLE}}..."},
    "zh-TW": {"status_starting": "正在啟動 {{TITLE}}..."},
    "ko-KR": {"status_starting": "{{TITLE}} 시작 중..."},
    "ja-JP": {"status_starting": "{{TITLE}} を起動中..."},
    "fr-FR": {"status_starting": "Démarrage de {{TITLE}}..."},
    "de-DE": {"status_starting": "{{TITLE}} wird gestartet..."},
    "es-ES": {"status_starting": "Iniciando {{TITLE}}..."},
    "it-IT": {"status_starting": "Avvio di {{TITLE}}..."},
    "vi-VN": {"status_starting": "Đang khởi động {{TITLE}}..."},
    "id-ID": {"status_starting": "Memulai {{TITLE}}..."},
}

class {{CLASS_NAME}}:
    class Valves(BaseModel):
        priority: int = Field(default=50, description="Priority level (lower = earlier).")
        show_status: bool = Field(default=True, description="Show status updates in UI.")

    def __init__(self):
        self.valves = self.Valves()
        self.fallback_map = {
            "zh": "zh-CN", "en": "en-US", "ko": "ko-KR", "ja": "ja-JP",
            "fr": "fr-FR", "de": "de-DE", "es": "es-ES", "it": "it-IT",
            "vi": "vi-VN", "id": "id-ID"
        }

    def _get_translation(self, lang: str, key: str, **kwargs) -> str:
        target_lang = lang
        if target_lang not in TRANSLATIONS:
            base = target_lang.split("-")[0]
            target_lang = self.fallback_map.get(base, "en-US")

        lang_dict = TRANSLATIONS.get(target_lang, TRANSLATIONS["en-US"])
        text = lang_dict.get(key, TRANSLATIONS["en-US"].get(key, key))
        return text.format(**kwargs) if kwargs else text

    async def _get_user_context(self, __user__: Optional[dict], __event_call__: Optional[Callable] = None, __request__: Optional[Request] = None) -> dict:
        user_data = __user__ if isinstance(__user__, dict) else {}
        user_language = user_data.get("language", "en-US")
        if __event_call__:
            try:
                js = "try { return (document.documentElement.lang || localStorage.getItem('locale') || navigator.language || 'en-US'); } catch (e) { return 'en-US'; }"
                frontend_lang = await asyncio.wait_for(__event_call__({"type": "execute", "data": {"code": js}}), timeout=2.0)
                if frontend_lang:
                    user_language = frontend_lang
            except Exception:
                pass
        return {"user_language": user_language}

    async def {{METHOD_NAME}}(self, body: dict, __user__: Optional[dict] = None, __event_emitter__=None, __event_call__=None, __request__: Optional[Request] = None) -> dict:
        if self.valves.show_status and __event_emitter__:
            user_ctx = await self._get_user_context(__user__, __event_call__, __request__)
            msg = self._get_translation(user_ctx["user_language"], "status_starting")
            await __event_emitter__({"type": "status", "data": {"description": msg, "done": False}})

        # Implement core logic here

        if self.valves.show_status and __event_emitter__:
            await __event_emitter__({"type": "status", "data": {"description": "Done", "done": True}})
        return body
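The template's `_get_translation` fallback order (exact locale match, then a base-language redirect via `fallback_map`, then `en-US`) can be isolated as a pure function. A minimal sketch with trimmed, illustrative dictionaries:

```python
TRANSLATIONS = {"en-US": {}, "zh-CN": {}, "zh-TW": {}}
FALLBACK_MAP = {"zh": "zh-CN", "en": "en-US"}

def resolve_lang(lang):
    """Resolve a browser locale to a supported translation key."""
    if lang in TRANSLATIONS:
        return lang  # exact match, e.g. zh-TW
    base = lang.split("-")[0]
    # variant redirect (zh-HK -> zh-CN) or the en-US default
    return FALLBACK_MAP.get(base, "en-US")
```

This is why `zh-HK` users still get Simplified Chinese status text if the Hong Kong variant were ever missing from a plugin's dictionary.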
80  .agent/skills/plugin-scaffolder/assets/template.py.j2  Normal file
@@ -0,0 +1,80 @@
"""
title: {{TITLE}}
author: Fu-Jie
author_url: https://github.com/Fu-Jie/openwebui-extensions
funding_url: https://github.com/open-webui
version: 0.1.0
description: {{DESCRIPTION}}
"""

import asyncio
import logging
import json
from typing import Optional, Dict, Any, List, Callable, Awaitable
from pydantic import BaseModel, Field
from fastapi import Request

# Configure logging
logging.basicConfig(level=logging.INFO, format="%(asctime)s - %(name)s - %(levelname)s - %(message)s")
logger = logging.getLogger(__name__)

TRANSLATIONS = {
    "en-US": {"status_starting": "Starting {{TITLE}}..."},
    "zh-CN": {"status_starting": "正在启动 {{TITLE}}..."},
    "zh-HK": {"status_starting": "正在啟動 {{TITLE}}..."},
    "zh-TW": {"status_starting": "正在啟動 {{TITLE}}..."},
    "ko-KR": {"status_starting": "{{TITLE}} 시작 중..."},
    "ja-JP": {"status_starting": "{{TITLE}} を起動中..."},
    "fr-FR": {"status_starting": "Démarrage de {{TITLE}}..."},
    "de-DE": {"status_starting": "{{TITLE}} wird gestartet..."},
    "es-ES": {"status_starting": "Iniciando {{TITLE}}..."},
    "it-IT": {"status_starting": "Avvio di {{TITLE}}..."},
    "vi-VN": {"status_starting": "Đang khởi động {{TITLE}}..."},
    "id-ID": {"status_starting": "Memulai {{TITLE}}..."},
}

class {{CLASS_NAME}}:
    class Valves(BaseModel):
        priority: int = Field(default=50, description="Priority level (lower = earlier).")
        show_status: bool = Field(default=True, description="Show status updates in UI.")

    def __init__(self):
        self.valves = self.Valves()
        self.fallback_map = {
            "zh": "zh-CN", "en": "en-US", "ko": "ko-KR", "ja": "ja-JP",
            "fr": "fr-FR", "de": "de-DE", "es": "es-ES", "it": "it-IT",
            "vi": "vi-VN", "id": "id-ID"
        }

    def _get_translation(self, lang: str, key: str, **kwargs) -> str:
        target_lang = lang
        if target_lang not in TRANSLATIONS:
            base = target_lang.split("-")[0]
            target_lang = self.fallback_map.get(base, "en-US")

        lang_dict = TRANSLATIONS.get(target_lang, TRANSLATIONS["en-US"])
        text = lang_dict.get(key, TRANSLATIONS["en-US"].get(key, key))
        return text.format(**kwargs) if kwargs else text

    async def _get_user_context(self, __user__: Optional[dict], __event_call__: Optional[Callable] = None, __request__: Optional[Request] = None) -> dict:
        user_data = __user__ if isinstance(__user__, dict) else {}
        user_language = user_data.get("language", "en-US")
        if __event_call__:
            try:
                js = "try { return (document.documentElement.lang || localStorage.getItem('locale') || navigator.language || 'en-US'); } catch (e) { return 'en-US'; }"
                frontend_lang = await asyncio.wait_for(__event_call__({"type": "execute", "data": {"code": js}}), timeout=2.0)
                if frontend_lang:
                    user_language = frontend_lang
            except Exception:
                pass
        return {"user_language": user_language}

    async def {{METHOD_NAME}}(self, body: dict, __user__: Optional[dict] = None, __event_emitter__=None, __event_call__=None, __request__: Optional[Request] = None) -> dict:
        if self.valves.show_status and __event_emitter__:
            user_ctx = await self._get_user_context(__user__, __event_call__, __request__)
            msg = self._get_translation(user_ctx["user_language"], "status_starting")
            await __event_emitter__({"type": "status", "data": {"description": msg, "done": False}})

        # Implement core logic here

        if self.valves.show_status and __event_emitter__:
            await __event_emitter__({"type": "status", "data": {"description": "Done", "done": True}})
        return body
66  .agent/skills/plugin-scaffolder/scripts/scaffold.py  Normal file
@@ -0,0 +1,66 @@
#!/usr/bin/env python3
import sys
import os


def scaffold(p_type, p_name, title, desc):
    target_dir = f"plugins/{p_type}/{p_name}"
    os.makedirs(target_dir, exist_ok=True)

    class_name = (
        "Action"
        if p_type == "actions"
        else (
            "Filter"
            if p_type == "filters"
            else "Tools" if p_type == "tools" else "Pipe"
        )
    )
    method_name = (
        "action"
        if p_type == "actions"
        else (
            "outlet"
            if p_type == "filters"
            else "execute" if p_type == "tools" else "pipe"
        )
    )

    replacements = {
        "{{TITLE}}": title,
        "{{DESCRIPTION}}": desc,
        "{{CLASS_NAME}}": class_name,
        "{{METHOD_NAME}}": method_name,
    }

    # Files to generate
    templates = [
        ("assets/template.py.j2", f"{p_name}.py"),
        ("assets/README_template.md", "README.md"),
        ("assets/README_template.md", "README_CN.md"),
    ]

    # Path relative to skill root
    skill_root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

    for t_path, t_name in templates:
        template_file = os.path.join(skill_root, t_path)
        if not os.path.exists(template_file):
            print(f"⚠️ Warning: Template not found {template_file}")
            continue

        with open(template_file, "r") as f:
            content = f.read()
        for k, v in replacements.items():
            content = content.replace(k, v)

        with open(os.path.join(target_dir, t_name), "w") as f:
            f.write(content)
        print(f"✅ Generated: {target_dir}/{t_name}")


if __name__ == "__main__":
    if len(sys.argv) < 5:
        print("Usage: scaffold.py <type> <name> <title> <desc>")
        sys.exit(1)
    scaffold(sys.argv[1], sys.argv[2], sys.argv[3], sys.argv[4])
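The nested conditionals that pick the class and method names can be read as two lookup tables with a pipe default. A sketch of the equivalent mapping (`plugin_symbols` is a hypothetical helper, not in the script):

```python
CLASS_BY_TYPE = {"actions": "Action", "filters": "Filter", "tools": "Tools"}
METHOD_BY_TYPE = {"actions": "action", "filters": "outlet", "tools": "execute"}

def plugin_symbols(p_type):
    """Return (class_name, method_name) for a plugin type; anything
    unrecognized falls through to the Pipe defaults, as in scaffold()."""
    return CLASS_BY_TYPE.get(p_type, "Pipe"), METHOD_BY_TYPE.get(p_type, "pipe")
```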
180  .agent/skills/pr-reviewer/SKILL.md  Normal file
@@ -0,0 +1,180 @@
---
name: pr-reviewer
description: Fetches PR review comments, analyzes requested changes, implements fixes, commits and pushes the resolution. Use after a reviewer has left comments on an open PR to close the feedback loop efficiently.
---

# PR Reviewer

## Overview

This skill automates the response cycle for code review. When a reviewer leaves comments on a Pull Request, this skill fetches all pending feedback, categorizes issues by severity, implements fixes, and submits a follow-up commit with appropriate review response comments.

## Prerequisites

- An open PR exists with pending review comments
- The local branch matches the PR's head branch
- `gh` CLI is authenticated

---

## Workflow

### Step 1 — Fetch Review State

Retrieve all review comments and the overall review status:

```bash
# Get overall review decisions
PAGER=cat GH_PAGER=cat gh pr view <PR-NUMBER> --json reviews,reviewDecision,headRefName \
  --jq '{decision: .reviewDecision, reviews: [.reviews[] | {author: .author.login, state: .state, body: .body}]}'

# Get inline code comments (specific line comments)
PAGER=cat GH_PAGER=cat gh api repos/Fu-Jie/openwebui-extensions/pulls/<PR-NUMBER>/comments \
  --jq '[.[] | {path: .path, line: .line, body: .body, author: .user.login, id: .id}]'

# Get general issue comments
PAGER=cat GH_PAGER=cat gh issue view <PR-NUMBER> --comments --json comments \
  --jq '[.comments[] | {author: .author.login, body: .body}]'
```

Confirm the current local branch matches the PR head:

```bash
git branch --show-current
```

If mismatched, check out the correct branch first.

### Step 2 — Categorize Review Feedback

Group feedback into categories:

| Category | Examples | Action |
|----------|----------|--------|
| **Code Bug** | Logic error, incorrect variable, broken condition | Fix code immediately |
| **Style / Formatting** | Indentation, naming convention, missing blank line | Fix code |
| **Documentation** | Missing i18n key, wrong version in README, typo | Fix docs |
| **Design Question** | Suggestion to restructure, alternative approach | Discuss with user before implementing |
| **Nitpick / Optional** | Minor style preferences reviewer marked as optional | Fix if quick; document if skipped |
| **Blocking** | Reviewer explicitly blocks merge | Must fix before proceeding |

Present the full categorized list to the user and confirm the resolution plan.

### Step 3 — Implement Fixes

For each accepted fix:

1. Read the affected file at the commented line for context:
   ```bash
   sed -n '<line-5>,<line+10>p' <file-path>
   ```
2. Apply the fix using appropriate file edit tools
3. After editing, verify the specific area looks correct

**For code changes that might affect behavior:**
- Check if tests exist: `ls tests/test_*.py`
- If tests exist, run them: `python -m pytest tests/ -v`

**For documentation fixes:**
- If modifying README.md, check whether the `docs/` mirror needs the same fix
- Apply the same fix to both locations

### Step 4 — Run Consistency Checks

After all fixes are applied:

```bash
# Version consistency (if any version files were touched)
python3 scripts/check_version_consistency.py

# Quick syntax check for Python files
python3 -m py_compile plugins/{type}/{name}/{name}.py && echo "✅ Syntax OK"
```

### Step 5 — Stage and Commit

Create a new commit (do NOT amend if the branch has already been pushed, to avoid a force-push):

```bash
git add -A
git status
```

Draft a Conventional Commits message for the fixup:

Format: `fix(scope): address review feedback`

The body should list what was fixed, referencing reviewer concerns:

```
fix(github-copilot-sdk): address review feedback from @reviewer

- Fix X per review comment on line Y of file Z
- Update README to clarify auth requirement
- Correct edge case in _parse_mcp_servers logic
```

```bash
git commit -m "<fixup commit message>"
```

### Step 6 — Push the Fix Commit

```bash
git push origin $(git branch --show-current)
```

**Force-push policy:**
- Use `git push` (non-force) by default
- Only use `git push --force-with-lease` if:
  1. The user explicitly requests it, AND
  2. The only change is an amended commit squash (cosmetic, no logic change)
- Never use `--force` (without `--lease`)

### Step 7 — Respond to Reviewers

For each addressed review comment, post a reply:

```bash
# Reply to inline comment
gh api repos/Fu-Jie/openwebui-extensions/pulls/<PR-NUMBER>/comments/<COMMENT-ID>/replies \
  -X POST -f body="Fixed in commit <SHORT-SHA>. <Brief explanation of what was changed.>"

# General comment to summarize all fixes
gh issue comment <PR-NUMBER> --body "All review feedback addressed in commit <SHORT-SHA>:
- Fixed: <item 1>
- Fixed: <item 2>
Ready for re-review. 🙏"
```

### Step 8 — Re-Request Review (Optional)

If the reviewer had submitted a `CHANGES_REQUESTED` review, request a new review after fixes:

```bash
PAGER=cat GH_PAGER=cat gh api repos/Fu-Jie/openwebui-extensions/pulls/<PR-NUMBER>/requested_reviewers \
  -X POST -f reviewers[]='<reviewer-login>'
```

---

## Decision Guide

### When NOT to implement a suggestion immediately

- **Design questions**: "Should this be a separate class?" — Present to user for decision
- **Optional nitpicks**: Reviewer marked as `nit:` — Ask user if they want to include it
- **Large refactors**: If a fix would require changing >50 lines, propose a separate follow-up issue instead

### When to ask the user before proceeding

- Any fix involving behavioral changes to plugin logic
- Renaming Valve keys (breaking change — requires migration notes)
- Changes that affect the bilingual release notes already committed

---

## Anti-Patterns to Avoid

- ❌ Do NOT `git commit --amend` on a pushed commit without user approval for force-push
- ❌ Do NOT silently skip a reviewer's comment; always acknowledge it (implement or explain why not)
- ❌ Do NOT use `--force` (only `--force-with-lease`, and only when absolutely necessary)
- ❌ Do NOT make unrelated changes in the fixup commit; keep scope focused on review feedback
- ❌ Do NOT respond to reviewer comments in Chinese if the PR language context is English
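As a rough first pass over fetched comments, the category table can be approximated with keyword matching. This is only a heuristic sketch; the keyword lists are illustrative and are no substitute for reading each comment as the workflow requires.

```python
def categorize(body: str) -> str:
    """Assign a review comment body to one of the feedback categories (heuristic)."""
    text = body.lower()
    if text.startswith("nit:"):
        return "Nitpick / Optional"
    if "blocking" in text or "must fix" in text:
        return "Blocking"
    if any(w in text for w in ("typo", "readme", "docstring", "i18n")):
        return "Documentation"
    if any(w in text for w in ("bug", "incorrect", "broken", "logic error")):
        return "Code Bug"
    # Anything unmatched deserves discussion rather than an automatic fix
    return "Design Question"
```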
194  .agent/skills/pr-submitter/SKILL.md  Normal file
@@ -0,0 +1,194 @@
---
name: pr-submitter
description: Submits a feature branch as a Pull Request with a validated, properly formatted bilingual PR body. Handles shell-escape-safe body writing via temp files. Use after release-prep has committed all changes.
---

# PR Submitter

## Overview

This skill handles the final step of pushing a feature branch and creating a validated Pull Request on GitHub. Its primary purpose is to avoid shell-escaping pitfalls (backticks, special characters in `gh pr create --body`) by always writing the PR body to a **temp file** first.

## Prerequisites

- All changes are committed (use the `release-prep` skill first)
- The `gh` CLI is authenticated (`gh auth status`)
- The current branch is NOT `main` or `master`

---

## Workflow

### Step 0 — Initialize Temp Directory (Project-Based)

For all temporary files, use the project's `.temp/` directory instead of the system `/tmp`:

```bash
# Create temp directory if it doesn't exist
mkdir -p .temp
```

**Why**: All temporary files stay within the project workspace, avoiding system `/tmp` pollution and better aligning with OpenWebUI workspace isolation principles.

### Step 1 — Pre-Flight Checks

Run these checks before any push:

```bash
# 1. Confirm not on a protected branch
git branch --show-current

# 2. Verify there are commits to push
git log origin/$(git branch --show-current)..HEAD --oneline 2>/dev/null || echo "No remote tracking branch yet"

# 3. Check gh CLI auth
gh auth status
```

If any check fails, stop and report clearly.

### Step 2 — Collect PR Metadata

Gather:
- **PR Title**: Must follow the Conventional Commits format, English only (e.g., `feat(github-copilot-sdk): release v0.8.0 with conditional tool filtering`)
- **Target base branch**: Default is `main`
- **Plugin name + version** (to build body sections)
- **Key changes** (reuse from release-prep or the latest What's New section)

### Step 3 — Build PR Body File (Shell-Escape-Safe)

**Always write the body to a temp file in the `.temp/` directory.** Never embed multi-line Markdown with special characters directly in a shell command.

```bash
cat > .temp/pr_body.md << 'HEREDOC'
## Summary

Brief one-sentence description of what this PR accomplishes.

## Changes

### New Features
- Feature 1 description
- Feature 2 description

### Bug Fixes
- Fix 1 description

## Plugin Version
- `PluginName` bumped to `vX.X.X`

## Documentation
- README.md / README_CN.md updated
- docs/ mirrors synced

## Testing
- [ ] Tested locally in OpenWebUI
- [ ] i18n validated (all language keys present)
- [ ] Version consistency check passed (`python3 scripts/check_version_consistency.py`)

---

## 变更摘要(中文)

简要描述本次 PR 的改动内容。

### 新功能
- 功能1描述
- 功能2描述

### 问题修复
- 修复1描述
HEREDOC
```

**Critical rules for the body file:**
- Use `<< 'HEREDOC'` (quoted heredoc) to prevent variable expansion
- Keep all backticks literal — they are safe inside a heredoc
- Paths like `/api/v1/files/` are safe too, since a heredoc doesn't interpret them as commands

### Step 4 — Validate PR Body

Before submitting, verify the body file contains the expected sections:

```bash
# Check key sections exist
grep -q "## Summary" .temp/pr_body.md && echo "✅ Summary" || echo "❌ Summary missing"
grep -q "## Changes" .temp/pr_body.md && echo "✅ Changes" || echo "❌ Changes missing"
grep -q "## 变更摘要" .temp/pr_body.md && echo "✅ CN Section" || echo "❌ CN Section missing"

# Preview the body
cat .temp/pr_body.md
```

Ask the user to confirm the body content before proceeding.

### Step 5 — Push Branch

```bash
git push -u origin $(git branch --show-current)
```

If the push is rejected (non-fast-forward), report to the user and ask whether to force-push. **Do NOT force-push without explicit confirmation.**

### Step 6 — Create Pull Request

```bash
gh pr create \
  --base main \
  --head $(git branch --show-current) \
  --title "<PR title from Step 2>" \
  --body-file .temp/pr_body.md
```

Always use `--body-file`, never `--body` with inline Markdown.

### Step 7 — Verify PR Creation

```bash
PAGER=cat GH_PAGER=cat gh pr view --json number,url,title,body --jq '{number: .number, url: .url, title: .title, body_preview: .body[:200]}'
```

Confirm:
- PR number and URL
- Title matches the intended Conventional Commits format
- Body preview includes key sections (not truncated/corrupted)

If the body appears corrupted (empty sections, missing backtick content), fix it with an edit:

```bash
gh pr edit <PR-NUMBER> --body-file .temp/pr_body.md
```

### Step 8 — Cleanup

```bash
rm -f .temp/pr_body.md
```

**Note**: The `.temp/` directory itself is preserved for reuse; only the individual PR body file is deleted. To fully clean up: `rm -rf .temp/`

Report the final PR URL to the user.

---

## Shell-Escape Safety Rules

| Risk | Safe Approach |
|------|---------------|
| Backticks in `--body` | Write to file, use `--body-file` |
| Paths like `/api/...` | Safe in heredoc; risky in inline `--body` |
| Newlines in `--body` | File-based only |
| `$variable` expansion | Use `<< 'HEREDOC'` (quoted) |
| Double quotes in body | Safe in heredoc file |
| Temp file storage | Use `.temp/` dir, not `/tmp` |
| Cleanup after use | Always delete the temp file (keep the dir) |

---

## Anti-Patterns to Avoid

- ❌ Never use `--body "..."` with multi-line content directly in a shell command
- ❌ Never interpolate variables directly into a heredoc without quoting the delimiter
- ❌ Never force-push (`--force`) without explicit user confirmation
- ❌ Never target `main` as the source branch (only as the base)
- ❌ Never skip the body validation step — a PR with an empty body is worse than a delayed PR
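The Step 4 grep checks translate directly into a reusable validation helper. `missing_sections` below is a sketch mirroring those checks; the section list is taken from the body template above.

```python
REQUIRED_SECTIONS = ("## Summary", "## Changes", "## 变更摘要")

def missing_sections(body: str) -> list:
    """Return the required headings absent from a PR body's text,
    in the same order the grep checks test them."""
    return [s for s in REQUIRED_SECTIONS if s not in body]
```

An empty return value corresponds to all three grep checks printing ✅.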
208
.agent/skills/release-finalizer/SKILL.md
Normal file
208
.agent/skills/release-finalizer/SKILL.md
Normal file
@@ -0,0 +1,208 @@
|
||||
---
name: release-finalizer
description: Merges a release PR, associates it with resolved issues, replies to issue reporters, and closes issues. Use after PR review is complete and ready for merge. Closes the release cycle.
---

# Release Finalizer

## Overview

This skill completes the final step of the release cycle: merging the release PR to `main`, replying to all related issues with solutions, and automatically closing them using GitHub's issue-linking mechanism.

## Prerequisites

- The PR is in `OPEN` state and ready to merge
- All status checks have passed (CI green)
- All review feedback has been addressed
- The PR relates to one or more GitHub issues (in the PR description or through commits)

---

## Workflow

### Step 1 — Pre-Merge Verification

Verify that the PR is ready:

```bash
PAGER=cat GH_PAGER=cat gh pr view <PR-NUMBER> --json state,statusCheckRollup,reviewDecision
```

Checklist:

- ✅ `state` is `OPEN`
- ✅ `statusCheckRollup` entries all have `conclusion: SUCCESS`
- ✅ `reviewDecision` is `APPROVED` or empty (no blocking reviews)

If any check fails, **do NOT merge**. Report the issue to the user.
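That gate can be sketched in a few lines of Python; the sample payload below is illustrative, not real `gh` output:

```python
import json

# Pretend this came from: gh pr view <PR> --json state,statusCheckRollup,reviewDecision
pr = json.loads('{"state": "OPEN", "reviewDecision": "APPROVED", '
                '"statusCheckRollup": [{"conclusion": "SUCCESS"}]}')

checks_green = all(c.get("conclusion") == "SUCCESS" for c in pr["statusCheckRollup"])
not_blocked = pr.get("reviewDecision") in ("APPROVED", "", None)
ready = pr["state"] == "OPEN" and checks_green and not_blocked
print("ready to merge" if ready else "do NOT merge")  # → ready to merge
```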
### Step 2 — Identify Related Issues

Issues can be linked to a PR in several ways. Check the PR description and commit messages for keywords:

```bash
PAGER=cat GH_PAGER=cat gh pr view <PR-NUMBER> --json body,commits
```

Look for patterns like:

- `Closes #XX`, `Fixes #XX`, `Resolves #XX` (in the description or commit bodies)
- `#XX` mentioned as "related to" or "addresses"

**Manual input**: If issue links are not in the PR, ask the user which issue(s) this PR resolves.

Extract all issue numbers into a list: `[#48, #52, ...]`
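The keyword scan reduces to a short regex pass (a rough approximation of GitHub's matching, not its exact grammar):

```python
import re

def extract_closed_issues(text: str) -> list[str]:
    # Closing keywords GitHub recognizes, matched case-insensitively
    pattern = r"\b(?:close[sd]?|fix(?:e[sd])?|resolve[sd]?)\s+#(\d+)"
    return sorted(set(re.findall(pattern, text, flags=re.IGNORECASE)), key=int)

body = "Fixes #48\nAlso resolves #52; closes #48 again."
print(extract_closed_issues(body))  # → ['48', '52']
```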

### Step 3 — Select Merge Strategy

Offer the user three options:

| Strategy | Git Behavior | Use Case |
|----------|-------------|----------|
| **Squash** | All commits squashed into one commit on main | Clean history; recommended for release PRs |
| **Rebase** | Linear history, no merge commit | Preserve commit granularity |
| **Merge** | Merge commit created | Preserve full PR context |

**Recommendation for release PRs**: Use `--squash` to create a single clean commit.

If the user doesn't specify, default to `--squash`.
### Step 4 — Prepare Merge Commit Message

If using `--squash`, craft a single comprehensive commit message.

**Format** (Conventional Commits + GitHub issue linking):

```
type(scope): description

- Bullet point 1
- Bullet point 2

Closes #48
Closes #52
```

The `Closes #XX` keyword tells GitHub to automatically close those issues when the commit lands on `main`.

Example:

```
feat(pipes,filters): release Copilot SDK Pipe v0.8.0 and Files Filter v0.1.3

- Implement P1~P4 conditional tool filtering system
- Fix file publishing reliability across all storage backends
- Add strict file URL validation
- Update bilingual documentation

Closes #48
```
### Step 5 — Execute Merge

```bash
gh pr merge <PR-NUMBER> \
  --squash \
  --delete-branch \
  -t "type(scope): description" \
  -b $'- Bullet 1\n- Bullet 2\n\nCloses #48'
```

**Key flags:**

- `--squash`: Squash commits (recommended for releases)
- `--delete-branch`: Delete the feature branch after merge
- `-t` / `--subject`: Commit subject (note: `-m` means `--merge`, a strategy flag, not a message)
- `-b` / `--body`: Commit body (use Bash `$'...'` quoting for real newlines; a plain `"...\n..."` stays literal)

Confirm the merge succeeded; GitHub will automatically close the related issues via the `Closes #XX` keywords.
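For longer bodies, the repository's heredoc convention avoids escape-quoting entirely. A sketch (the PR number is illustrative, and `--body-file` is assumed available in your `gh` version):

```shell
mkdir -p .temp
cat > .temp/merge_body.md << 'EOF'
- Bullet 1
- Bullet 2

Closes #48
EOF
# gh pr merge 123 --squash --delete-branch \
#   --subject "type(scope): description" \
#   --body-file .temp/merge_body.md
grep -c '^Closes #48$' .temp/merge_body.md   # → 1
rm -f .temp/merge_body.md
```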

### Step 6 — Verify Auto-Close

GitHub automatically closes issues when a commit with `Closes #XX` lands on the default branch (`main`).

To verify:

```bash
PAGER=cat GH_PAGER=cat gh issue view <ISSUE-NUMBER> --json state
```

It should show `state: CLOSED`.

### Step 7 — Post Closing Message (Optional but Recommended)

For better UX, manually post a summary comment to **each issue** before it auto-closes (auto-close happens silently). Per the heredoc convention, write the multi-line body to a file first:

```bash
cat > .temp/issue_comment.md << 'EOF'
This has been fixed in PR #<PR-NUMBER>, which is now merged to main.

**Solution Summary:**
- <Key fix 1>
- <Key fix 2>

The fix will be available in the next plugin release. Thank you for reporting! ⭐
EOF

gh issue comment <ISSUE-NUMBER> --body-file .temp/issue_comment.md
rm -f .temp/issue_comment.md
```

### Step 8 — (Optional) Regenerate Release Notes

If the merge revealed any final tweaks to the release notes:

```bash
# Re-export release notes from the merged commit
git log --oneline -1 <merged-commit-sha>
```

If needed, create a follow-up PR with doc polish (do NOT force-push the merged commit).
---

## Merge Strategy Decision Tree

```
Is this a patch/hotfix release?
├─ YES → Use --squash
└─ NO → Multi-feature release?
    ├─ YES → Use --squash (cleaner history)
    └─ NO → Preserve detail?
        ├─ YES → Use --rebase
        └─ NO → Use --merge (preserve PR context)
```

---

## Issue Auto-Close Keywords

These keywords in commit/PR messages auto-close issues when merged to `main`:

- `Closes #XX`
- `Fixes #XX`
- `Resolves #XX`
- `close #XX` (case-insensitive)
- `fix #XX`
- `resolve #XX`

**Important**: The keyword must be on the **final commit that lands on** `main`. For squash merges, it must be in the squash commit message body.

---

## Anti-Patterns to Avoid

- ❌ Do NOT merge if any status checks are PENDING or FAILED
- ❌ Do NOT merge if there are blocking reviews (`reviewDecision: CHANGES_REQUESTED`)
- ❌ Do NOT merge without verifying the Conventional Commits format of the merge message
- ❌ Do NOT merge without including `Closes #XX` keywords for all related issues
- ❌ Do NOT rely on silent auto-close alone — post a courtesy comment first
- ❌ Do NOT delete the branch if it might be needed for cherry-picks or hotfixes later

---

## Troubleshooting

### Issue did not auto-close after merge

- Verify the `Closes #XX` keyword is in the **final commit message** (use `git log` to check)
- Ensure the commit is on the `main` branch
- GitHub sometimes takes a few seconds to process; refresh the issue page

### Multiple issues to close

- List each one on its own `Closes #XX` line in the commit body
- Each will be auto-closed independently

### Want to close an issue without a merge?

- Use `gh issue close <ISSUE-NUMBER>` manually
- Only recommended if the PR was reverted or the issue was deemed invalid

157 .agent/skills/release-prep/SKILL.md Normal file
@@ -0,0 +1,157 @@
---
name: release-prep
description: Orchestrates the full release preparation flow for a plugin — version sync across 7+ files, bilingual release notes creation, and commit message drafting. Use before submitting a PR. Does NOT push or create a PR; that is handled by pr-submitter.
---

# Release Prep

## Overview

This skill drives the complete pre-PR release pipeline. It enforces the repository rule that every release must synchronize the version number and changelog across **at least 7 locations** before a commit is created.

## Scope

This skill covers:

1. Version sync (delegates detail to `version-bumper` if needed)
2. Bilingual release notes file creation
3. 7-location consistency verification
4. Conventional Commits message drafting
5. `git add -A && git commit` execution

It **stops before** `git push` or `gh pr create`. Use the `pr-submitter` skill for those steps.

### Temporary File Convention

Any temporary files created during release prep (e.g., draft changelogs) must:

- Be written to the project's `.temp/` directory, **NOT** the system `/tmp`
- Be cleaned up before commit using `rm -f .temp/file_name`
- Never be committed to git (add `.temp/` to `.gitignore`)
---

## Workflow

### Step 1 — Collect Release Info

Ask the user (or infer from the current state):

- **Plugin name** and **type** (actions / filters / pipes / tools)
- **New version number** (e.g., `0.8.0`)
- **Key changes** in English and Chinese (1-5 bullet points each)

If a `What's New` section already exists in README.md, extract it as the source of truth.

### Step 2 — Sync Version Across 7 Locations

Verify AND update the version string in all of the following. Mark each as ✅ or ❌:

| # | File | Location |
|---|------|----------|
| 1 | `plugins/{type}/{name}/{name}.py` | `version:` in docstring |
| 2 | `plugins/{type}/{name}/README.md` | `**Version:** x.x.x` metadata line |
| 3 | `plugins/{type}/{name}/README_CN.md` | `**Version:** x.x.x` metadata line |
| 4 | `docs/plugins/{type}/{name}.md` | `**Version:** x.x.x` metadata line |
| 5 | `docs/plugins/{type}/{name}.zh.md` | `**Version:** x.x.x` metadata line |
| 6 | `docs/plugins/{type}/index.md` | version badge for this plugin |
| 7 | `docs/plugins/{type}/index.zh.md` | version badge for this plugin |

Additionally, update the root-level **updated date badge** in:

- `README.md` — ``
- `README_CN.md` — same badge format

Use today's date (`YYYY-MM-DD`) for the badge.
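A worked detail: shields.io badge text escapes a literal `-` as `--`, so the date badge update is a small string transform (the badge pattern here mirrors the one used by `bump.py` below):

```python
import re

def badge_date(iso_date: str) -> str:
    # shields.io renders '--' as a literal '-' inside badge text
    return iso_date.replace("-", "--")

line = "badge/2026--02--14-gray"
print(re.sub(r"202\d--\d\d--\d\d", badge_date("2026-03-01"), line))
# → badge/2026--03--01-gray
```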

### Step 3 — Update What's New (All 4 Doc Files)

The `What's New` / `最新更新` section must contain **only the most recent release's changes**. Previous entries should be removed from this section (they live in the CHANGELOG or release notes files).

Update the `What's New` / `最新更新` block consistently in these 4 files:

- `plugins/{type}/{name}/README.md`
- `plugins/{type}/{name}/README_CN.md`
- `docs/plugins/{type}/{name}.md`
- `docs/plugins/{type}/{name}.zh.md`

### Step 4 — Create Bilingual Release Notes Files

Create two versioned release notes files:

**Path**: `plugins/{type}/{name}/v{version}.md`
**Path**: `plugins/{type}/{name}/v{version}_CN.md`

#### Required Sections

Each file must include:

1. **Title**: `# v{version} Release Notes` (EN) / `# v{version} 版本发布说明` (CN)
2. **Overview**: One paragraph summarizing this release
3. **New Features** / **新功能**: Bulleted list of features
4. **Bug Fixes** / **问题修复**: Bulleted list of fixes
5. **Migration Notes** / **迁移说明**: Breaking changes or Valve key renames (omit the section if none)
6. **Companion Plugins** / **配套插件** (optional): If a companion plugin was updated

If a release notes file already exists for this version, update it rather than creating a new one.

#### Full Coverage Rule (Mandatory)

Release notes must cover **all updates in the current release scope**, not only headline features.

Minimum required coverage in both EN/CN files:

- New features and capability enhancements
- Bug fixes and reliability fixes
- Documentation/README/doc-mirror updates that affect user understanding or usage
- Terminology/i18n/wording fixes that change visible behavior or messaging

Before commit, cross-check the release notes against `git diff` and ensure no meaningful update is omitted.

### Step 5 — Verify Consistency (Pre-Commit Check)

Run the consistency check script:

```bash
python3 scripts/check_version_consistency.py
```

If issues are found, fix them before proceeding. Do not commit with inconsistencies.
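A minimal sketch of what such a check does (the real logic lives in `scripts/check_version_consistency.py`; the regex and file handling here are assumptions):

```python
import re
from pathlib import Path

VERSION_RE = re.compile(r"\*\*(?:Version|版本)[:：]\*\*\s*(\d+\.\d+\.\d+)")

def versions_in(text: str) -> set[str]:
    return set(VERSION_RE.findall(text))

def consistent(paths: list[Path]) -> bool:
    found = set()
    for p in paths:
        found |= versions_in(p.read_text(encoding="utf-8"))
    return len(found) <= 1  # every file agrees on a single version

print(versions_in("**Version:** 0.8.0 ... **版本:** 0.8.0"))  # → {'0.8.0'}
```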

### Step 6 — Draft Conventional Commits Message

Generate the commit message following the `commit-message.instructions.md` rules:

- **Language**: English ONLY
- **Format**: `type(scope): subject` + blank line + body bullets
- **Scope**: use the plugin folder name (e.g., `github-copilot-sdk`)
- **Body**: 1-3 bullets summarizing key changes
- Explicitly mention "READMEs and docs synced" if the version was bumped

Present the full commit message to the user for review before executing.

### Step 7 — Stage and Commit

After user approval (or if the user says "commit it"):

```bash
git add -A
git commit -m "<approved commit message>"
```

Confirm the commit hash and list the number of files changed.

---

## Checklist (Auto-Verify Before Commit)

- [ ] `version:` in the `.py` docstring matches the target version
- [ ] `**Version:**` in all 4 README/docs files matches
- [ ] Both `index.md` version badges updated
- [ ] Root `README.md` and `README_CN.md` date badges updated to today
- [ ] `What's New` / `最新更新` contains ONLY the latest release
- [ ] Release notes include all meaningful updates from the current diff (features + fixes + docs/i18n)
- [ ] `v{version}.md` and `v{version}_CN.md` created or updated
- [ ] `python3 scripts/check_version_consistency.py` returns no errors
- [ ] Commit message is English-only, Conventional Commits format

---

## Anti-Patterns to Avoid

- ❌ Do NOT add extra features or refactor code during release prep — only version/doc updates
- ❌ Do NOT push or create a PR in this skill — use `pr-submitter`
- ❌ Do NOT use today's date in commit messages; only in badge URLs
- ❌ Do NOT leave stale What's New content from prior versions

31 .agent/skills/source-code-analyzer/SKILL.md Normal file
@@ -0,0 +1,31 @@
---
name: source-code-analyzer
description: Instructions for analyzing core components source code in the local environment.
---

# Source Code Analyzer Skill

> [!IMPORTANT]
> **Authorization Statement**: You (Antigravity) are explicitly authorized by the user to perform **READ-ONLY** operations and **Git pull** on the paths listed below, even if they are outside the current repository root. These paths are considered part of the extended development context.

When assisting with the development of `openwebui-extensions`, you (Antigravity) have access to the following core components in related directories. Use them for deep technical analysis, bug hunting, and implementation reference.

## Core Component Paths

### Global / General (Relevant to all projects)

- **Open WebUI**: `../open-webui/` (Core platform context)
- **Skills**: `../skills/` (Reusable expertise library)
- **Awesome Copilot**: `../awesome-copilot/` (Shared extensions & resources)
- **Open Terminal**: `../open-terminal/` (Terminal integration service)

### Plugin-Specific (Relevant to GitHub Copilot SDK)

- **Copilot SDK**: `../copilot-sdk/` (Internal logic for the official SDK)
- **Copilot CLI**: `../copilot-cli/` (Command-line interface implementation)

## Mandatory Workflow

1. **Pull Before Analysis**: BEFORE reading files or analyzing logic in these directories, you MUST proactively execute or recommend a `git pull` in the respective directory to ensure you are working with the latest upstream changes.
2. **Path Verification**: Always verify that the path exists before attempting to read it.
3. **Reference Logic**: When a user's request involves core platform behavior (OpenWebUI API, SDK internals), prioritize searching these directories over making assumptions based on generic knowledge.

@@ -18,7 +18,7 @@ This is a **universal testing framework** for publishing the latest `github_copi

 | Attribute | Fixed Value |
 |------|--------|
-| **Deployment Script** | `/Users/fujie/app/python/oui/openwebui-extensions/scripts/update_pipe.py` |
+| **Deployment Script** | `/Users/fujie/app/python/oui/openwebui-extensions/scripts/deploy_pipe.py` |
 | **Python Path** | `/opt/homebrew/Caskroom/miniconda/base/envs/ai/bin/python3` |
 | **Test URL** | `http://localhost:3003/?model=github_copilot_official_sdk_pipe.github_copilot_sdk-gpt-4.1` |

@@ -36,11 +36,11 @@ Example: *Modified tool calling logic -> Test prompt should trigger a specific t
 Use the `run_command` tool to execute the fixed update task:

 ```bash
-/opt/homebrew/Caskroom/miniconda/base/envs/ai/bin/python3 /Users/fujie/app/python/oui/openwebui-extensions/scripts/update_pipe.py
+/opt/homebrew/Caskroom/miniconda/base/envs/ai/bin/python3 /Users/fujie/app/python/oui/openwebui-extensions/scripts/deploy_pipe.py
 ```

-> **Mechanism**: `update_pipe.py` automatically loads the API Key from `scripts/.env` in the same directory.
-> **Verification**: Look for `✅ Successfully updated... version X.X.X`. If a 401 error occurs, remind the user to generate a new API Key in OpenWebUI and update `.env`.
+> **Mechanism**: `deploy_pipe.py` automatically loads the API Key from `scripts/.env` in the same directory.
+> **Verification**: Look for `✅ Successfully updated... version X.X.X` or `✅ Successfully created...`. If a 401 error occurs, remind the user to generate a new API Key in OpenWebUI and update `.env`.

 ### Step 3: Verify via Browser Subagent (Verify)

26 .agent/skills/version-bumper/SKILL.md Normal file
@@ -0,0 +1,26 @@
---
name: version-bumper
description: Automates version upgrades and changelog synchronization across 7+ files (Code, READMEs, Docs). Use when a plugin is ready for release to ensure version consistency.
---

# Version Bumper

## Overview

This skill ensures that every version upgrade is synchronized across the entire repository, following the strict "Documentation Sync" rule in GEMINI.md.

## Workflow

1. **Prepare Info**: Gather the new version number and brief changelogs in both English and Chinese.
2. **Auto-Patch**: The skill will help you identify and update:
   - `plugins/.../name.py` (docstring version)
   - `plugins/.../README.md` (metadata & What's New)
   - `plugins/.../README_CN.md` (metadata & 最新更新)
   - `docs/plugins/...md` (mirrors)
   - `docs/plugins/index.md` (version badge)
   - `README.md` (updated date badge)
3. **Verify**: Check the diffs to ensure no formatting was broken.

## Tool Integration

Execute the bump script (draft). Note that it takes the plugin type and name before the version:

```bash
python3 scripts/bump.py <type> <name> <version> "<message_en>" "<message_zh>"
```

70 .agent/skills/version-bumper/scripts/bump.py Normal file
@@ -0,0 +1,70 @@
#!/usr/bin/env python3
import sys
import os
import re
from datetime import datetime


def patch_file(file_path, old_pattern, new_content, is_regex=False):
    if not os.path.exists(file_path):
        print(f"Warning: File not found: {file_path}")
        return False

    with open(file_path, 'r', encoding='utf-8') as f:
        content = f.read()

    if is_regex:
        new_content_result = re.sub(old_pattern, new_content, content, flags=re.MULTILINE)
    else:
        new_content_result = content.replace(old_pattern, new_content)

    if new_content_result != content:
        with open(file_path, 'w', encoding='utf-8') as f:
            f.write(new_content_result)
        print(f"✅ Patched: {file_path}")
        return True
    else:
        print(f"ℹ️ No change needed: {file_path}")
        return False


def bump_version(plugin_type, plugin_name, new_version, msg_en, msg_zh):
    print(f"🚀 Bumping {plugin_name} ({plugin_type}) to {new_version}...")

    today = datetime.now().strftime("%Y-%m-%d")
    today_badge = today.replace("-", "--")

    # 1. Patch Plugin Python File
    py_file = f"plugins/{plugin_type}/{plugin_name}/{plugin_name}.py"
    patch_file(py_file, r"version: \d+\.\d+\.\d+", f"version: {new_version}", is_regex=True)

    # 2. Patch Plugin READMEs
    readme_en = f"plugins/{plugin_type}/{plugin_name}/README.md"
    readme_zh = f"plugins/{plugin_type}/{plugin_name}/README_CN.md"

    # Update version in metadata
    patch_file(readme_en, r"\*\*Version:\*\* \d+\.\d+\.\d+", f"**Version:** {new_version}", is_regex=True)
    patch_file(readme_zh, r"\*\*版本:\*\* \d+\.\d+\.\d+", f"**版本:** {new_version}", is_regex=True)

    # Update What's New (Assuming standard headers)
    patch_file(readme_en, r"## 🔥 What's New in v.*?\n", f"## 🔥 What's New in v{new_version}\n\n* {msg_en}\n", is_regex=True)
    patch_file(readme_zh, r"## 🔥 最新更新 v.*?\n", f"## 🔥 最新更新 v{new_version}\n\n* {msg_zh}\n", is_regex=True)

    # 3. Patch Docs Mirrors
    doc_en = f"docs/plugins/{plugin_type}/{plugin_name}.md"
    doc_zh = f"docs/plugins/{plugin_type}/{plugin_name}.zh.md"
    patch_file(doc_en, r"\*\*Version:\*\* \d+\.\d+\.\d+", f"**Version:** {new_version}", is_regex=True)
    patch_file(doc_zh, r"\*\*版本:\*\* \d+\.\d+\.\d+", f"**版本:** {new_version}", is_regex=True)

    # 4. Patch Root READMEs (Updated Date Badge)
    patch_file("README.md", r"badge/202\d--\d\d--\d\d-gray", f"badge/{today_badge}-gray", is_regex=True)
    patch_file("README_CN.md", r"badge/202\d--\d\d--\d\d-gray", f"badge/{today_badge}-gray", is_regex=True)

    print("\n✨ All synchronization tasks completed.")
    return True


if __name__ == "__main__":
    if len(sys.argv) < 6:
        print("Usage: bump.py <type> <name> <version> <msg_en> <msg_zh>")
        print("Example: bump.py filters markdown_normalizer 1.2.8 'Fix bug' '修复错误'")
        sys.exit(1)

    bump_version(sys.argv[1], sys.argv[2], sys.argv[3], sys.argv[4], sys.argv[5])
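The docstring patch in step 1 reduces to a single `re.sub`; a quick standalone check of that pattern (the sample docstring is illustrative):

```python
import re

text = '"""\ntitle: Example Plugin\nversion: 0.7.9\n"""'
updated = re.sub(r"version: \d+\.\d+\.\d+", "version: 0.8.0", text, flags=re.MULTILINE)
print("version: 0.8.0" in updated)  # → True
```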
@@ -140,6 +140,7 @@ Before committing:
 - [ ] `docs/` index and detail pages are updated?
 - [ ] Root `README.md` is updated?
 - [ ] All version numbers match exactly?
+- [ ] Any non-obvious findings saved to `.agent/learnings/{topic}.md`?

 ## 5. Git Operations (Agent Rules)

@@ -147,3 +148,12 @@
 2. **No Auto-Commit**: Never `git commit`, `git push`, or `create_pull_request` automatically after file updates unless the user explicitly says "commit this" or "release now".
 3. **Draft Mode**: If available, use PRs as drafts first.
 4. **Reference**: Strictly follow the rules defined in `.github/copilot-instructions.md` → **Git Operations (Agent Rules)** section.
+
+## 6. Knowledge Capture (Mandatory)
+
+Whenever you discover a non-obvious behaviour, internal API contract, or workaround
+during plugin development, **document it in `.agent/learnings/{topic}.md`** before
+ending the session.
+
+- Browse `.agent/learnings/` **first** at the start of a session to reuse existing knowledge.
+- Format: see `.agent/learnings/README.md`.
@@ -8,7 +8,43 @@ description: Automatically synchronizes plugin READMEs to the official documenta
 ## Overview
 Automates the mirroring of `plugins/{type}/{name}/README.md` to `docs/plugins/{type}/{name}.md`.
+
+## Docs-Only Mode (No Release Changes)
+Use this mode when the request is "only sync docs".
+
+- Only update documentation mirror files under `docs/plugins/**`.
+- Do **not** bump plugin version.
+- Do **not** modify plugin code (`plugins/**.py`) unless explicitly requested.
+- Do **not** update root badges/dates for release.
+- Do **not** run release preparation steps.
+
 ## Workflow
 1. Identify changed READMEs.
 2. Copy content to corresponding mirror paths.
 3. Update version badges in `docs/plugins/{type}/index.md`.
+
+## Commands
+
+### Sync all mirrors (EN + ZH)
+
+```bash
+python .github/skills/doc-mirror-sync/scripts/sync.py
+```
+
+### Sync only one plugin (EN only)
+
+```bash
+cp plugins/<type>/<name>/README.md docs/plugins/<type>/<name>.md
+```
+
+### Sync only one plugin (EN + ZH)
+
+```bash
+cp plugins/<type>/<name>/README.md docs/plugins/<type>/<name>.md
+cp plugins/<type>/<name>/README_CN.md docs/plugins/<type>/<name>.zh.md
+```
+
+## Notes
+
+- If asked for English-only update, sync only `README.md` -> `.md` mirror.
+- If both languages are requested, sync both `README.md` and `README_CN.md`.
+- After syncing, verify git diff only contains docs file changes.
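That final check can be scripted; a sketch (the path filter mirrors the `docs/plugins/**` convention, and the sample file list stands in for real `git diff` output):

```shell
# Fail if anything outside docs/plugins/ appears in the diff.
changed='docs/plugins/filters/markdown_normalizer.md
docs/plugins/filters/markdown_normalizer.zh.md'
# In a real run: changed=$(git diff --name-only)
if printf '%s\n' "$changed" | grep -qv '^docs/plugins/'; then
  echo "non-docs changes present"
else
  echo "docs-only: OK"
fi
# → docs-only: OK
```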

165 .github/ISSUE_TEMPLATE/bug_report.yml vendored Normal file
@@ -0,0 +1,165 @@
name: 🐛 Bug Report
description: Report a bug or issue with OpenWebUI plugins
title: "[BUG] "
labels: ["bug"]
assignees: []

body:
  - type: markdown
    attributes:
      value: |
        Thanks for reporting a bug! Please provide clear information to help us reproduce and fix the issue.

  - type: dropdown
    id: plugin-type
    attributes:
      label: Plugin Type
      description: Which type of plugin is affected?
      options:
        - Action
        - Filter
        - Pipe
        - Pipeline
        - Tool
        - Other
    validations:
      required: true

  - type: dropdown
    id: plugin-name
    attributes:
      label: Plugin Name
      description: Which plugin has the issue?
      options:
        - "Select a plugin..."
        - "Action - Deep Dive"
        - "Action - Export to Word Enhanced"
        - "Action - Export to Excel"
        - "Action - Flash Card"
        - "Action - Smart Infographic"
        - "Action - Smart Mind Map"
        - "Filter - Async Context Compression"
        - "Filter - Context & Model Enhancement Filter"
        - "Filter - Folder Memory"
        - "Filter - GitHub Copilot SDK Files Filter"
        - "Filter - Markdown Normalizer"
        - "Filter - Gemini Multimodel Filter"
        - "Pipeline - MOE Prompt Refiner"
        - "Pipe - GitHub Copilot SDK"
        - "Pipe - iFlow SDK"
        - "Tool - OpenWebUI Skills Manager"
        - "Tool - Smart Infographic Tool"
        - "Tool - Smart Mind Map Tool"
        - "Other / Not Listed"
    validations:
      required: true

  - type: input
    id: plugin-name-other
    attributes:
      label: Plugin Name (if not in list)
      description: If you selected 'Other / Not Listed', please specify the plugin name
      placeholder: "Plugin name"
    validations:
      required: false

  - type: textarea
    id: description
    attributes:
      label: Description
      description: Clearly describe the bug. What went wrong? What did you expect?
      placeholder: |
        I tried to use [feature], but instead of [expected behavior], it [actual behavior].

        Error message (if any):
    validations:
      required: true

  - type: textarea
    id: steps
    attributes:
      label: Steps to Reproduce
      description: How can we reproduce this issue?
      placeholder: |
        1. Click on...
        2. Enter...
        3. See error...
    validations:
      required: true

  - type: input
    id: openwebui-version
    attributes:
      label: OpenWebUI Version
      description: What version of OpenWebUI are you using?
      placeholder: "e.g., 0.3.0 or main"
    validations:
      required: true

  - type: textarea
    id: environment
    attributes:
      label: Operating System & Container
      description: "Your operating system and deployment method"
      placeholder: |
        Example 1:
        OS: macOS 14.3
        Container: Docker (version 24.0.x)

        Example 2:
        OS: Ubuntu 22.04 LTS
        Deployment: Docker Compose
    validations:
      required: true

  - type: markdown
    attributes:
      value: |
        ## 📋 Debug Information (Optional)

        To help diagnose the issue faster, please provide relevant logs:

        **Frontend Console Logs** (Recommended):
        1. Enable: Click avatar → Settings → General → Enable Plugin Debug Output
        2. Open DevTools: Press `F12` (or `Cmd+Option+I` on Mac)
        3. Go to the Console tab and copy any error messages or `🛠️ Debug` output

        **Server-Side Logs**:
        - Docker: `docker logs <container-id>`
        - Local: Check terminal output or log files

  - type: textarea
    id: logs
    attributes:
      label: Error Logs (Optional)
      description: "Paste frontend console or server logs that show the error"
      placeholder: |
        Error message from console:
        [Your logs here]
      render: bash
    validations:
      required: false

  - type: textarea
    id: additional
    attributes:
      label: Additional Information (Optional)
      description: "Screenshots, related issues, or other helpful details"
      placeholder: |
        - Screenshots (if applicable)
        - Related issues or discussions
        - Steps you've already tried to fix it
    validations:
      required: false

  - type: checkboxes
    id: checklist
    attributes:
      label: Checklist
      options:
        - label: I have searched for duplicate issues
          required: true
        - label: I have provided clear reproduction steps
          required: true
        - label: I have mentioned OpenWebUI version and OS/container info
          required: true

11 .github/ISSUE_TEMPLATE/config.yml vendored Normal file
@@ -0,0 +1,11 @@
blank_issues_enabled: false
contact_links:
  - name: Documentation
    url: https://docs.openwebui.com/
    about: Official OpenWebUI documentation
  - name: Discussions
    url: https://github.com/Fu-Jie/openwebui-extensions/discussions
    about: Ask questions and discuss with the community
  - name: OpenWebUI Repository
    url: https://github.com/open-webui/open-webui
    about: Main OpenWebUI project

104 .github/ISSUE_TEMPLATE/feature_request.yml vendored Normal file
@@ -0,0 +1,104 @@
name: ✨ Feature Request
description: Suggest a new feature or improvement
title: "[FEATURE] "
labels: ["enhancement"]
assignees: []

body:
  - type: markdown
    attributes:
      value: |
        Thanks for your suggestion! Please describe the feature you'd like to see.

  - type: dropdown
    id: plugin-type
    attributes:
      label: Plugin Type (Optional)
      description: Is this for a specific plugin type?
      options:
        - Action
        - Filter
        - Pipe
        - Pipeline
        - Tool
        - Core/General
        - Documentation
        - Other
    validations:
      required: false

  - type: dropdown
    id: plugin-name
    attributes:
      label: Plugin Name (Optional)
      description: Which plugin would benefit from this feature?
      options:
        - "Select a plugin..."
        - "Action - Deep Dive"
        - "Action - Export to Word Enhanced"
        - "Action - Export to Excel"
        - "Action - Flash Card"
        - "Action - Smart Infographic"
        - "Action - Smart Mind Map"
        - "Filter - Async Context Compression"
        - "Filter - Context & Model Enhancement Filter"
        - "Filter - Folder Memory"
        - "Filter - GitHub Copilot SDK Files Filter"
        - "Filter - Markdown Normalizer"
        - "Filter - Gemini Multimodel Filter"
        - "Pipeline - MOE Prompt Refiner"
        - "Pipe - GitHub Copilot SDK"
        - "Pipe - iFlow SDK"
        - "Tool - OpenWebUI Skills Manager"
        - "Tool - Smart Infographic Tool"
        - "Tool - Smart Mind Map Tool"
        - "Core/General Improvement"
        - "Other / Not Listed"
    validations:
      required: false

  - type: textarea
    id: description
    attributes:
      label: Feature Description
      description: Clearly describe the feature you're requesting
      placeholder: |
        What feature would you like to see?
        How would it work?
    validations:
      required: true

  - type: textarea
    id: motivation
    attributes:
      label: Motivation & Use Case
      description: Why is this feature important? What problem does it solve?
      placeholder: |
        What's the pain point this solves?
        How would it improve your workflow?
        Who else would benefit from this?
    validations:
      required: true

  - type: textarea
    id: additional
    attributes:
      label: Additional Information (Optional)
      description: "Mockups, alternatives considered, references, or examples"
      placeholder: |
        - Links to related plugins or tools
        - Screenshots or mockups
        - Alternative approaches you've considered
        - Code examples or references
    validations:
      required: false

  - type: checkboxes
    id: checklist
    attributes:
      label: Checklist
      options:
        - label: I have searched existing issues/discussions for similar requests
|
||||
required: true
|
||||
- label: This feature would be useful for multiple users
|
||||
required: false
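The templates above follow GitHub's issue-forms schema: top-level `name`, `description`, and `body`, plus `type`/`attributes` on each body element. A minimal sketch of a local sanity check, using a hand-rolled dict in place of YAML parsing; the helper name is hypothetical, only the checked keys come from the schema:

```python
def check_issue_form(form: dict) -> list[str]:
    """Return a list of problems found in an issue-form definition."""
    problems = []
    for key in ("name", "description", "body"):
        if key not in form:
            problems.append(f"missing top-level key: {key}")
    for i, element in enumerate(form.get("body", [])):
        if "type" not in element:
            problems.append(f"body[{i}]: missing 'type'")
        # 'id' is optional upstream, but these templates always set one
        # on non-markdown elements so submissions stay machine-readable
        if element.get("type") != "markdown" and "id" not in element:
            problems.append(f"body[{i}]: missing 'id'")
    return problems

form = {
    "name": "✨ Feature Request",
    "description": "Suggest a new feature or improvement",
    "body": [
        {"type": "markdown", "attributes": {"value": "Thanks!"}},
        {"type": "textarea", "id": "description",
         "attributes": {"label": "Feature Description"}},
    ],
}
print(check_issue_form(form))  # → []
```

Running the same check over `{}` reports all three missing top-level keys, which is the failure mode that most often breaks a form silently on GitHub.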
5  .github/agents/plugin-implementer.agent.md  vendored
@@ -56,6 +56,11 @@ When bumping, update ALL 7+ files (code docstring + 2× README + 2× doc detail
- Never run `git commit`, `git push`, or create PRs automatically.
- After all edits, list what changed and why, then stop.

## Knowledge Capture (Mandatory)
Before ending the session, if you discovered any non-obvious internal API behaviour,
parameter injection quirk, or workaround, save it to `.agent/learnings/{topic}.md`.
Also browse `.agent/learnings/` at the start to reuse existing knowledge.

## Completion Output
- Modified files (full relative paths, one-line descriptions)
- Remaining manual checks
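The `.agent/learnings/{topic}.md` convention above implies a normalized file name per topic. A small sketch of that naming rule; the helper name is illustrative, not part of the repo:

```python
import re

def learnings_path(topic: str) -> str:
    """Normalize a free-form topic into the .agent/learnings/{topic}.md convention."""
    # lowercase, collapse any non-alphanumeric run into a single hyphen
    slug = re.sub(r"[^a-z0-9]+", "-", topic.lower()).strip("-")
    return f".agent/learnings/{slug}.md"

print(learnings_path("OpenWebUI Tool Injection"))
# → .agent/learnings/openwebui-tool-injection.md
```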
1  .github/agents/plugin-planner.agent.md  vendored
@@ -22,6 +22,7 @@ You are the **planning specialist** for the `openwebui-extensions` repository.
- Never propose `git commit`, `git push`, or PR creation.
- Every plan must end with an acceptance checklist for the user to approve before handing off.
- Reference `.github/copilot-instructions.md` as the authoritative spec.
- Browse `.agent/learnings/` **first** to reuse existing knowledge before researching anything.

## Repository Plugin Inventory
4  .github/agents/plugin-reviewer.agent.md  vendored
@@ -54,6 +54,9 @@ Full review rules are in .github/instructions/code-review.instructions.md.
- [ ] `docs/plugins/{type}/index.md` and `.zh.md` version badges updated.
- [ ] Root `README.md` / `README_CN.md` date badge updated.

**8. Knowledge Capture**
- [ ] Any non-obvious findings (API contracts, injection quirks, gotchas) documented in `.agent/learnings/{topic}.md`.

### 🟡 Non-blocking (suggestions)
- Copilot SDK tools: `params_type=MyParams` in `define_tool()`.
- Long tasks (>3s): periodic `_emit_notification("info")` every 5s.
@@ -68,4 +71,5 @@ Full review rules are in .github/instructions/code-review.instructions.md.
- **Blocking issues** (file:line references)
- **Non-blocking suggestions**
- **Pass / Fail verdict**
- **Knowledge captured?** (`.agent/learnings/` updated if any discoveries were made)
- **Next step**: Pass → handoff to Release Prep; Fail → return to Implementer with fix list
25  .github/copilot-instructions.md  vendored
@@ -32,6 +32,15 @@ plugins/actions/export_to_docx/
- `README.md` - English documentation
- `README_CN.md` - Chinese documentation

#### Documentation Delivery for Review

When a task involves documentation content, such as READMEs, guides, posts, release notes, announcements, or development docs:

- You **must** provide both English and Chinese versions to ease review and proofreading.
- Even if only the English file ends up committed to the repository, a Chinese draft **must** still be provided in the conversation for the maintainer to review.
- Unless the user explicitly asks for a single-language file, default to bilingual delivery.
- The Chinese version exists **to ease review**: it should faithfully match the English original and may adjust phrasing naturally, but must not omit risks, limitations, steps, or conclusions.

#### README Structure Standard

All plugin READMEs must follow this unified section order:
@@ -1151,6 +1160,7 @@ Filter 实例是**单例 (Singleton)**。
- [ ] **README structure**:
  - **Key Capabilities** (EN) / **核心功能** (ZH): must list every core capability
  - **What's New** (EN) / **最新更新** (ZH): contains only the latest version's changes
- [ ] **Knowledge capture**: non-obvious patterns, pitfalls, or internal API contracts discovered during development must be recorded in `.agent/learnings/{topic}.md`

### 2. 🔄 Consistency Maintenance
@@ -1208,6 +1218,21 @@ Filter 实例是**单例 (Singleton)**。
Use the `@all-contributors please add @username for <type>` command.

### 6. 📖 Knowledge Capture (Mandatory)

Any **non-obvious** internal API behaviour, parameter-injection mechanism, mock-object requirement, or other hard-won lesson discovered during a development session
**must** be recorded in `.agent/learnings/{topic}.md` before the session ends.

- **Before starting**: browse `.agent/learnings/` first to check for relevant prior knowledge and avoid re-investigating.
- **Format**: see `.agent/learnings/README.md`.
- **Existing entries**: see the `.agent/learnings/` directory.

Typical things worth recording:
- Parameter-injection mechanics of OpenWebUI internal functions
- Context fields a Pipe must supply when calling a Tool
- Interface contracts a mock Request object must satisfy
- How model IDs resolve in different contexts

---

## 📚 Reference Resources
21  .github/gh-aw/README.md  vendored  Normal file
@@ -0,0 +1,21 @@
# gh-aw Support Files

This directory stores repository-local support files for GitHub Agentic Workflows.

## Purpose

Keep review aids, policy notes, and human-facing mirrors out of `.github/workflows/` so only real gh-aw source workflows live there.

## Structure

- `review-mirrors/`: Chinese review mirrors and maintainer-facing explanations for workflow source files.

## Current Files

- `review-mirrors/aw-pr-maintainer-review.zh.md`: Chinese review mirror for `.github/workflows/aw-pr-maintainer-review.md`.
- `review-mirrors/aw-release-preflight.zh.md`: Chinese review mirror for `.github/workflows/aw-release-preflight.md`.
- `review-mirrors/aw-ci-audit.zh.md`: Chinese review mirror for `.github/workflows/aw-ci-audit.md`.

## Rule

Files in this directory are for maintainer review and documentation only. They are not gh-aw workflow source files and should not be compiled.
249  .github/gh-aw/review-mirrors/aw-ci-audit.zh.md  vendored  Normal file
@@ -0,0 +1,249 @@
# aw-ci-audit Review Mirror

Source file: `.github/workflows/aw-ci-audit.md`

Purpose: a side-by-side explanation for maintainer review. It is not a gh-aw workflow source file and does not take part in `gh aw compile`.

## Role of the Workflow

The goal of this workflow is a "CI / automation health audit".

It is neither a log dumper nor an auto-fixer. It exists to:

- check whether recent repository automation shows repeatable failure patterns
- analyze weak points in key workflows such as release, publish, and stats
- create a single maintenance issue only when there is a new, actionable diagnosis

If there is no new actionable diagnosis, or the problem is already covered by an existing issue, it runs `noop`.

## Frontmatter Walkthrough

### Triggers

- `schedule: daily`
- `workflow_dispatch`
- `roles: all`
- `skip-bots`
  - `github-actions`
  - `copilot`
  - `dependabot`
  - `renovate`

Note: this design suits a "periodic checkup plus manual follow-up" model better than binding directly to unreliable workflow-failure events.
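The `skip-bots` list above is effectively an actor gate; a sketch of the intended semantics (the function name and the exact matching rules gh-aw applies are assumptions, only the four bot names come from the frontmatter):

```python
SKIP_BOTS = {"github-actions", "copilot", "dependabot", "renovate"}

def should_run(actor: str) -> bool:
    """Skip runs triggered by the bot accounts listed in skip-bots."""
    # bot logins often carry a [bot] suffix, e.g. "dependabot[bot]"
    normalized = actor.removesuffix("[bot]").lower()
    return normalized not in SKIP_BOTS

print(should_run("dependabot[bot]"))  # → False
print(should_run("Fu-Jie"))           # → True
```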
### Permissions

Read-only by design:

- `contents: read`
- `issues: read`
- `pull-requests: read`
- `actions: read`

Note: the workflow only runs diagnostic analysis; it does not change code, cut releases, or create PRs.

### Safe Outputs

Configured:

- `create-issue`
  - title prefix: `[ci-audit] `
  - labels: `ci-audit`, `maintenance`
  - does not auto-close older issues

The final output must be exactly one of:

- `create_issue` when there is a new, actionable diagnosis
- `noop` when there is nothing new

### Tools

- `github`
  - `repos`
  - `issues`
  - `pull_requests`
- `bash`
  - read-only commands only, e.g. `pwd`, `ls`, `cat`, `rg`, `git diff`, `git show`

## Body Instructions Walkthrough

## Primary Goal

The agent is asked to audit:

- failures or flakiness in release-related workflows
- plugin publishing failures
- community stats update regressions
- recurring workflow weak points
- next steps maintainers can actually take

Explicit limits:

- diagnosis only
- no file changes
- no pushes
- no PRs
- no releases

## High-Priority Source Files

Before forming conclusions, treat these files as the authoritative automation rules:

- `.github/copilot-instructions.md`
- `.github/workflows/release.yml`
- `.github/workflows/publish_plugin.yml`
- `.github/workflows/publish_new_plugin.yml`
- `.github/workflows/plugin-version-check.yml`
- `.github/workflows/community-stats.yml`
- `docs/development/gh-aw-integration-plan.md`
- `docs/development/gh-aw-integration-plan.zh.md`

## Target Workflows

Check these first:

- `release.yml`
- `publish_plugin.yml`
- `publish_new_plugin.yml`
- `plugin-version-check.yml`
- `community-stats.yml`
- `deploy.yml`

If nothing is clearly wrong there, do not widen the scope indefinitely.

## Review Scope

Focus on "recent failed or suspicious automation signals", and prefer diagnoses grounded in this repository's structure over generic CI advice.

It should act like a maintainer watching the repository's automation health trend, not a generic log-summary bot.

## Key Checks

### 1. Release and Publish Failures

Check whether recent failures point to actionable problems such as:

- drifted version extraction or comparison logic
- release-note packaging gaps
- authentication or environment problems in the publish scripts
- structural assumptions in workflows that no longer match the repository
- failures likely to keep recurring unless repository logic changes

### 2. Stats and Scheduled-Job Stability

Check scheduled maintenance jobs for weak points such as:

- community stats no longer committing when they should
- outdated badge / docs generation logic
- jobs depending on external APIs failing repeatedly for the same class of reason
- schedule-driven jobs producing low-value noise

### 3. Maintainer Signal Quality

Create an issue only when the finding is genuinely worth a maintainer's time.

Good reasons to open an issue:

- the same class of failure recurs across multiple runs
- workflow logic no longer matches the current repository structure
- a likely missing secret / permission / outdated path assumption
- recurring low-signal failures worth filtering or hardening against

Do not open an issue for a one-off noisy failure unless it is likely to recur.

### 4. Existing-Issue Awareness

Before creating a new issue, check whether an open issue already covers the same class of CI problem.

If an existing issue is already sufficient, prefer `noop` to avoid filing a duplicate.

## Severity Levels

Only three levels are allowed:

- `High`
  - likely to recur and keep affecting repository automation
- `Medium`
  - worth fixing soon to reduce maintenance cost or workflow drift
- `Low`
  - optional robustness or cleanup suggestion

And an explicit requirement:

- do not manufacture problems just to open an issue

## Issue Format

If an issue is created, there must be exactly one maintenance issue.

Requirements:

- English
- concise
- findings first, no empty praise
- clickable path references
- no nested lists
- no large raw log dumps unless a short excerpt is truly necessary

Fixed structure:

```markdown
## CI Audit

### Summary
Short diagnosis of the failure pattern or automation risk.

### Findings
- `path/to/file`: specific problem or likely root cause

### Suggested Next Steps
- concrete maintainer action
- concrete maintainer action

### Notes
- Mention whether this appears recurring, new, or already partially mitigated.
```

Additional rules:

- normally stay under about 300 words
- merge related problems into one issue instead of splitting them across several
- prefer a single actionable diagnosis over a grab bag

## No-Issue Rule

If there is no new diagnosis worth reporting:

- do not create status-report issues
- do not restate that the workflows look healthy
- go straight to `noop`

Example:

```json
{"noop": {"message": "No action needed: reviewed recent repository automation signals and found no new actionable CI diagnosis worth opening as a maintenance issue."}}
```
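The `noop` example is a single JSON object; a sketch of how a step might serialize it (the exact transport gh-aw uses to collect safe outputs is an assumption here, only the JSON shape comes from the example above):

```python
import json

def emit_noop(reason: str) -> str:
    """Serialize a gh-aw style noop safe output as one JSON line."""
    return json.dumps({"noop": {"message": f"No action needed: {reason}"}})

line = emit_noop("no new actionable CI diagnosis found")
print(line)
```

Keeping the payload to one machine-parseable line matters because the safe-output collector, not the agent, is what actually performs (or skips) the GitHub-side action.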
## Suggested Execution Flow

1. Check recent repository automation context
2. Check the target workflows first
3. Identify repeatable or repository-specific failure patterns
4. Check whether the problem is already covered by an open issue
5. Only when the diagnosis is new and actionable, draft the shortest useful maintenance issue
6. Finish with exactly one `create_issue` or one `noop`

## Additional Constraints

- do not open issues for single low-signal transient failures
- do not demand large refactors unless the failure pattern is very clear
- prefer repository-specific causes over generic "try a retry" advice
- if the root cause is uncertain, state the uncertainty explicitly
- if an existing issue already covers it, prefer `noop` over a duplicate

## Final Requirement

End with one and only one safe output:

- new, actionable diagnosis: `create_issue`
- nothing new: `noop`
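The two allowed endings above reduce to a single decision; a sketch of that logic under the stated rules (new and actionable and not already covered → issue, everything else → noop; the function name is illustrative):

```python
def decide(new_actionable: bool, covered_by_open_issue: bool) -> str:
    """Pick the single safe output the audit is allowed to emit."""
    if new_actionable and not covered_by_open_issue:
        return "create_issue"
    return "noop"

print(decide(True, False))   # → create_issue
print(decide(True, True))    # → noop (an open issue already covers it)
print(decide(False, False))  # → noop
```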
268  .github/gh-aw/review-mirrors/aw-pr-maintainer-review.zh.md  vendored  Normal file
@@ -0,0 +1,268 @@
# aw-pr-maintainer-review Review Mirror

Source file: `.github/workflows/aw-pr-maintainer-review.md`

Purpose: a side-by-side explanation for maintainer review. It is not a gh-aw workflow source file and does not take part in `gh aw compile`.

## Role of the Workflow

The goal of this workflow is a "maintainer-grade semantic review" of the triggering PR.

It is not a generic code-review bot or an auto-fixer. It checks for:

- violations of this repository's plugin development standards
- missing README / README_CN / docs mirror files that should have been updated together
- release-preparation gaps
- obvious high-risk behavioral regressions

If the PR is already compliant and there is no actionable maintainer feedback, it does not comment and runs `noop` instead.

## Frontmatter Walkthrough

### Triggers

- `pull_request`
  - types: `opened`, `reopened`, `synchronize`, `ready_for_review`
  - path filters:
    - `plugins/**`
    - `docs/**`
    - `.github/**`
    - `README.md`
    - `README_CN.md`
- `workflow_dispatch`
- `roles: all`
- `skip-bots`
  - `github-actions`
  - `copilot`
  - `dependabot`
  - `renovate`

### Permissions

Read-only by design:

- `contents: read`
- `issues: read`
- `pull-requests: read`

Note: the workflow does not change code directly, and performs no write operation other than the review comment.

### Safe Outputs

Configured:

- `add-comment`
  - target: the triggering PR
  - at most 1 comment
  - hides older comments
  - no footer

The final output must be exactly one of:

- `add_comment` when there are findings
- `noop` when there are none

### Tools

- `github`
  - `repos`
  - `issues`
  - `pull_requests`
- `bash`
  - read-only commands only, e.g. `pwd`, `ls`, `cat`, `rg`, `git diff`, `git show`

## Body Instructions Walkthrough

## Primary Goal

The agent is asked to review:

- repository-standard compliance
- missing files that should be updated in sync
- release-preparation gaps
- documentation drift
- high-risk regressions in plugin code

Explicit limits:

- review only
- no file changes
- no pushes
- no PR creation

## High-Priority Source Files

Before forming conclusions, treat these files as this repository's rule sources:

- `.github/copilot-instructions.md`
- `.github/instructions/code-review.instructions.md`
- `.github/instructions/commit-message.instructions.md`
- `.github/skills/release-prep/SKILL.md`
- `.github/skills/doc-mirror-sync/SKILL.md`
- `docs/development/gh-aw-integration-plan.md`
- `docs/development/gh-aw-integration-plan.zh.md`

## Review Scope

- start from the PR diff and changed files
- expand into related files only to verify consistency
- prefer repository-specific rules over generic best practices

In other words, it should behave like a maintainer who knows this repository, not a generic lint bot.

## Key Checks

### 1. Plugin Code Standards

When `plugins/**/*.py` changes, focus on:

- whether the single-file i18n pattern is preserved
- whether user-visible strings go into the translation dictionaries
- whether `_get_user_context` and `_get_chat_context` are used
- whether JS execution via `__event_call__` has timeout protection and a frontend fallback
- whether `print()` was introduced into production plugin code
- whether emitters are safely null-checked
- whether a filter plugin stuffs request-scoped mutable state into `self`
- whether Copilot SDK / OpenWebUI tool definitions still follow repository conventions

### 2. Version and Release Hygiene

When `plugins/**/*.py` changes, check for updates that should have been synced but were not:

- `version:` in the plugin docstring
- `README.md` in the plugin directory
- `README_CN.md` in the plugin directory
- mirror pages under `docs/plugins/**`
- index files such as `docs/plugins/{type}/index.md`
- for clearly release-prep PRs, also the date badges in root `README.md` and `README_CN.md`

The key semantics here:

- not every PR must be treated as a release
- flag missing sync only when user-visible behavior, metadata, versioned docs, or release-facing content changes

### 3. Documentation Sync

When a plugin README changes, check whether the docs mirror should be synced:

- `plugins/actions/{name}/README.md` -> `docs/plugins/actions/{name}.md`
- `plugins/actions/{name}/README_CN.md` -> `docs/plugins/actions/{name}.zh.md`
- `plugins/filters/{name}/README.md` -> `docs/plugins/filters/{name}.md`
- `plugins/filters/{name}/README_CN.md` -> `docs/plugins/filters/{name}.zh.md`
- `plugins/pipes/{name}/README.md` -> `docs/plugins/pipes/{name}.md`
- `plugins/pipes/{name}/README_CN.md` -> `docs/plugins/pipes/{name}.zh.md`
- `plugins/pipelines/{name}/README.md` -> `docs/plugins/pipelines/{name}.md`
- `plugins/pipelines/{name}/README_CN.md` -> `docs/plugins/pipelines/{name}.zh.md`
- `plugins/tools/{name}/README.md` -> `docs/plugins/tools/{name}.md`
- `plugins/tools/{name}/README_CN.md` -> `docs/plugins/tools/{name}.zh.md`

If the change is docs-only and clearly intentional, do not over-flag.

### 4. PR Quality

Only when it genuinely makes maintainer review harder, point out that the PR description is missing:

- what changed
- why it changed
- whether migration or reconfiguration is needed

## Severity Levels

Only three levels are allowed:

- `Blocking`
  - likely bug, release regression, missing required sync, or serious standard violation
- `Important`
  - should be fixed before merge, though not necessarily a direct runtime error
- `Minor`
  - optional suggestion

And an explicit requirement:

- do not manufacture findings just to leave a comment

## Comment Format

If a comment is posted, there must be exactly one summary comment.

Requirements:

- English
- concise
- findings first, no praise first
- clickable path references
- no nested lists
- no mechanical restating of the diff

Fixed structure:

```markdown
## PR Maintainer Review

### Blocking
- `path/to/file`: specific issue and why it matters

### Important
- `path/to/file`: specific issue and what sync/check is missing

### Minor
- `path/to/file`: optional improvement or consistency note

### Merge Readiness
- Ready after the items above are addressed.
```

Additional rules:

- omit empty sections
- if only one severity applies, keep only that section plus `Merge Readiness`
- normally stay under about 250 words
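The omit-empty-sections rule above can be sketched as a small renderer; the function and variable names are illustrative, only the section order and fixed structure come from the policy:

```python
def render_review(findings: dict[str, list[str]]) -> str:
    """Build the single summary comment, skipping empty severity sections."""
    lines = ["## PR Maintainer Review"]
    for severity in ("Blocking", "Important", "Minor"):
        items = findings.get(severity, [])
        if not items:
            continue  # empty sections are omitted per the format rules
        lines.append(f"\n### {severity}")
        lines.extend(f"- {item}" for item in items)
    lines.append("\n### Merge Readiness")
    lines.append("- Ready after the items above are addressed.")
    return "\n".join(lines)

comment = render_review(
    {"Important": ["`plugins/tools/x/README.md`: docs mirror not synced"]}
)
print(comment)
```

With a single `Important` finding, the output contains only that section plus `Merge Readiness`, matching the "keep only that section" rule.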
## No-Comment Rule

If there is no meaningful maintainer feedback:

- do not post "looks good" praise comments
- do not restate that checks passed
- go straight to `noop`

Example:

```json
{"noop": {"message": "No action needed: reviewed the PR diff and repository sync expectations, and found no actionable maintainer feedback."}}
```

## Suggested Execution Flow

1. Identify the changed files
2. Read the high-priority rule files
3. Check plugin code against the plugin review standards
4. Check README / docs against the doc-mirror rules
5. Decide whether version sync or release-facing files are missing
6. Draft the shortest useful maintainer summary
7. Finish with exactly one `add_comment` or one `noop`

## Additional Constraints

- do not demand large refactors unrelated to this PR
- do not escalate small internal changes into release-prep
- do not force docs sync for clearly private/internal changes
- prefer repository-specific feedback over generic code-review filler
- if unsure whether a sync file is required, downgrade the finding to `Important`
- if a finding depends on PR intent and the information is insufficient, phrase it conditionally rather than pretending certainty

## Final Requirement

End with one and only one safe output:

- actionable feedback: `add_comment`
- no actionable feedback: `noop`

## Review Verdict

The English source workflow is now a viable candidate for a future `gh aw compile`.

The Chinese mirror exists for only two reasons:

- to let you review the policy section by section
- to keep Chinese explanations out of the workflow source files that actually get compiled
275  .github/gh-aw/review-mirrors/aw-release-preflight.zh.md  vendored  Normal file
@@ -0,0 +1,275 @@
# aw-release-preflight Review Mirror

Source file: `.github/workflows/aw-release-preflight.md`

Purpose: a side-by-side explanation for maintainer review. It is not a gh-aw workflow source file and does not take part in `gh aw compile`.

## Role of the Workflow

The goal of this workflow is a "pre-release preflight semantic review" of the triggering change.

It is not a release executor or an auto version-bumper. It exists to judge:

- whether this change is actually doing release prep
- if it is, whether version sync is complete
- whether bilingual READMEs, docs mirrors, and release notes are all present
- whether missing explanations or documentation drift would hurt release quality

If the change is not release preparation, or is already consistent enough with no actionable feedback, it runs `noop`.

## Frontmatter Walkthrough

### Triggers

- `pull_request`
  - types: `opened`, `reopened`, `synchronize`, `ready_for_review`
  - path filters:
    - `plugins/**/*.py`
    - `plugins/**/README.md`
    - `plugins/**/README_CN.md`
    - `plugins/**/v*.md`
    - `plugins/**/v*_CN.md`
    - `docs/plugins/**/*.md`
    - `README.md`
    - `README_CN.md`
    - `.github/**`
- `workflow_dispatch`
- `roles: all`
- `skip-bots`
  - `github-actions`
  - `copilot`
  - `dependabot`
  - `renovate`

### Permissions

Read-only by design:

- `contents: read`
- `issues: read`
- `pull-requests: read`

Note: the workflow does not cut releases, push code, or change files.

### Safe Outputs

Configured:

- `add-comment`
  - target: the triggering PR
  - at most 1 comment
  - hides older comments
  - no footer

The final output must be exactly one of:

- `add_comment` when there are findings
- `noop` when there are none

### Tools

- `github`
  - `repos`
  - `issues`
  - `pull_requests`
- `bash`
  - read-only commands only, e.g. `pwd`, `ls`, `cat`, `rg`, `git diff`, `git show`

## Body Instructions Walkthrough

## Primary Goal

The agent is asked to check:

- version-sync completeness
- bilingual README and docs consistency
- release-notes completeness
- drift in release-facing indexes or badges
- whether a user-visible release lacks migration notes or maintainer context

Explicit limits:

- review only
- no file changes
- no pushes
- no release creation
- no PR creation

## High-Priority Source Files

Before forming conclusions, treat these files as the release rule sources:

- `.github/copilot-instructions.md`
- `.github/instructions/commit-message.instructions.md`
- `.github/skills/release-prep/SKILL.md`
- `.github/skills/doc-mirror-sync/SKILL.md`
- `.github/workflows/release.yml`
- `docs/development/gh-aw-integration-plan.md`
- `docs/development/gh-aw-integration-plan.zh.md`

## Review Scope

- start from the PR diff and changed files
- expand into related release-facing files only to verify release sync
- prefer the repository's established release-prep rules over generic release advice

In other words, it should act like a maintainer doing one last consistency pass before merge.

## Key Checks

### 1. Version Sync in Release-Facing Files

When a plugin is clearly being prepared for release, check that these stay in sync:

- `version:` in the plugin Python docstring
- `README.md` in the plugin directory
- `README_CN.md` in the plugin directory
- English mirror pages under `docs/plugins/**`
- Chinese mirror pages under `docs/plugins/**/*.zh.md`
- the plugin's entry or version badge in `docs/plugins/{type}/index.md`
- the plugin's entry or version badge in `docs/plugins/{type}/index.zh.md`

But flag this only when the change clearly carries release intent; do not treat every PR as a release.

### 2. README and Docs Mirror Consistency

When a plugin README changes, check whether the docs mirror is synced.

Path mapping:

- `plugins/actions/{name}/README.md` -> `docs/plugins/actions/{name}.md`
- `plugins/actions/{name}/README_CN.md` -> `docs/plugins/actions/{name}.zh.md`
- `plugins/filters/{name}/README.md` -> `docs/plugins/filters/{name}.md`
- `plugins/filters/{name}/README_CN.md` -> `docs/plugins/filters/{name}.zh.md`
- `plugins/pipes/{name}/README.md` -> `docs/plugins/pipes/{name}.md`
- `plugins/pipes/{name}/README_CN.md` -> `docs/plugins/pipes/{name}.zh.md`
- `plugins/pipelines/{name}/README.md` -> `docs/plugins/pipelines/{name}.md`
- `plugins/pipelines/{name}/README_CN.md` -> `docs/plugins/pipelines/{name}.zh.md`
- `plugins/tools/{name}/README.md` -> `docs/plugins/tools/{name}.md`
- `plugins/tools/{name}/README_CN.md` -> `docs/plugins/tools/{name}.zh.md`

If the change is a pure documentation tweak and not release prep, do not over-flag.

### 3. What's New and Release-Notes Coverage

When the update is clearly a release-facing plugin update, check:

- `What's New` reflects only the latest version
- `最新更新` matches the English section
- `v{version}.md` and `v{version}_CN.md` exist
- release notes cover the meaningful feature, fix, docs, and migration changes in the current diff

Do not demand release notes for small, purely internal changes.

### 4. Root README and Release-Facing Index Drift

Only when the change is clearly headed for a formal release, also check:

- the date badge in root `README.md`
- the date badge in root `README_CN.md`
- `docs/plugins/**/index.md`
- `docs/plugins/**/index.zh.md`

Do not impose these checks on ordinary internal PRs.

### 5. Maintainer Context and Release Clarity

Check whether the PR description or release-facing copy is missing key context:

- what exactly is being released
- why this release is worth making
- whether migration or reconfiguration is needed

Flag this only when the missing information clearly raises release-review cost.

## Severity Levels

Only three levels are allowed:

- `Blocking`
  - likely release regression, missing required version sync, or a clearly incomplete release-facing update
- `Important`
  - best fixed before merge to avoid release confusion or documentation drift
- `Minor`
  - optional release-facing cleanup or consistency suggestion

And an explicit requirement:

- do not manufacture findings just to leave a comment

## Comment Format

If a comment is posted, there must be exactly one summary comment.

Requirements:

- English
- concise
- findings first, no praise first
- clickable path references
- no nested lists
- no mechanical restating of the diff

Fixed structure:

```markdown
## Release Preflight Review

### Blocking
- `path/to/file`: specific release-facing problem and why it matters

### Important
- `path/to/file`: missing sync or release-documentation gap

### Minor
- `path/to/file`: optional cleanup or consistency improvement

### Release Readiness
- Ready after the items above are addressed.
```

Additional rules:

- omit empty sections
- if only one severity applies, keep only that section plus `Release Readiness`
- normally stay under about 250 words

## No-Comment Rule

If there is no meaningful preflight feedback:

- do not post "looks good" praise comments
- do not restate that checks passed
- go straight to `noop`

Example:

```json
{"noop": {"message": "No action needed: reviewed the release-facing diff, version-sync expectations, and bilingual documentation coverage, and found no actionable preflight feedback."}}
```

## Suggested Execution Flow

1. Decide whether the change actually carries release intent
2. Check the changed files in the PR diff
3. Read the repository's release-prep rule files
4. Check plugin version sync only when release intent exists
5. Check README, README_CN, docs mirrors, indexes, and release notes for drift
6. Draft the shortest useful maintainer summary
7. Finish with exactly one `add_comment` or one `noop`

## Additional Constraints

- do not impose full release-prep requirements on tiny internal changes
- do not require root README date-badge updates for PRs that are not clearly releases
- do not demand release notes when the change does not realistically constitute release prep
- prefer repository-specific sync feedback over generic release advice
- if unsure whether a release-facing sync file is required, downgrade the finding to `Important`
- if a finding rests on inferred intent, phrase it conditionally rather than pretending certainty
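Whether a change "carries release intent" is ultimately a judgment call; one hedged heuristic over the changed-file list (the signal set below is an illustration, not the workflow's actual rule):

```python
def looks_release_facing(changed_files: list[str]) -> bool:
    """Heuristic: does this diff touch release-facing surfaces?"""
    for path in changed_files:
        if path.startswith("plugins/") and path.endswith(".py"):
            return True  # plugin code changes usually imply a version bump
        if "/README" in path or path.startswith("docs/plugins/"):
            return True  # versioned, user-facing documentation
    return False

print(looks_release_facing(["plugins/actions/deep_dive/deep_dive.py"]))  # → True
print(looks_release_facing([".github/workflows/deploy.yml"]))            # → False
```

A heuristic like this can only pre-filter; the conditional-phrasing rule above still applies whenever intent is inferred rather than stated.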
## Final Requirement

End with one and only one safe output:

- actionable feedback: `add_comment`
- no actionable feedback: `noop`
36  .github/skills/doc-mirror-sync/SKILL.md  vendored
@@ -8,7 +8,43 @@ description: Automatically synchronizes plugin READMEs to the official documenta
## Overview
Automates the mirroring of `plugins/{type}/{name}/README.md` to `docs/plugins/{type}/{name}.md`.

## Docs-Only Mode (No Release Changes)
Use this mode when the request is "only sync docs".

- Only update documentation mirror files under `docs/plugins/**`.
- Do **not** bump plugin versions.
- Do **not** modify plugin code (`plugins/**.py`) unless explicitly requested.
- Do **not** update root badges/dates for release.
- Do **not** run release preparation steps.

## Workflow
1. Identify changed READMEs.
2. Copy content to corresponding mirror paths.
3. Update version badges in `docs/plugins/{type}/index.md`.

## Commands

### Sync all mirrors (EN + ZH)

```bash
python .github/skills/doc-mirror-sync/scripts/sync.py
```

### Sync only one plugin (EN only)

```bash
cp plugins/<type>/<name>/README.md docs/plugins/<type>/<name>.md
```

### Sync only one plugin (EN + ZH)

```bash
cp plugins/<type>/<name>/README.md docs/plugins/<type>/<name>.md
cp plugins/<type>/<name>/README_CN.md docs/plugins/<type>/<name>.zh.md
```
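The copy commands above encode a fixed path mapping; a sketch of it as a function, mirroring the `README_CN.md` -> `.zh.md` convention used throughout this repository (the function name is illustrative, the paths follow the repo layout):

```python
def mirror_path(readme_path: str) -> str:
    """Map plugins/{type}/{name}/README*.md to its docs mirror path."""
    _, ptype, name, filename = readme_path.split("/")
    # README_CN.md mirrors to the .zh.md page; README.md to the plain .md page
    suffix = ".zh.md" if filename == "README_CN.md" else ".md"
    return f"docs/plugins/{ptype}/{name}{suffix}"

print(mirror_path("plugins/tools/smart_mind_map/README.md"))
# → docs/plugins/tools/smart_mind_map.md
print(mirror_path("plugins/tools/smart_mind_map/README_CN.md"))
# → docs/plugins/tools/smart_mind_map.zh.md
```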
## Notes

- If asked for an English-only update, sync only the `README.md` -> `.md` mirror.
- If both languages are requested, sync both `README.md` and `README_CN.md`.
- After syncing, verify the git diff contains only docs file changes.
13  .github/skills/release-prep/SKILL.md  vendored
@@ -89,6 +89,18 @@ Each file must include:
If a release notes file already exists for this version, update it rather than creating a new one.

#### Full Coverage Rule (Mandatory)

Release notes must cover **all updates in the current release scope**, not only headline features.

Minimum required coverage in both EN/CN files:
- New features and capability enhancements
- Bug fixes and reliability fixes
- Documentation/README/doc-mirror updates that affect user understanding or usage
- Terminology/i18n/wording fixes that change visible behavior or messaging

Before committing, cross-check the release notes against `git diff` and ensure no meaningful update is omitted.

### Step 5 — Verify Consistency (Pre-Commit Check)

Run the consistency check script:
@@ -130,6 +142,7 @@ Confirm the commit hash and list the number of files changed.
Confirm the commit hash and list the number of files changed.

- [ ] Both `index.md` version badges updated
- [ ] Root `README.md` and `README_CN.md` date badges updated to today
- [ ] `What's New` / `最新更新` contains ONLY the latest release
- [ ] Release notes include all meaningful updates from the current diff (feature + fix + docs/i18n)
- [ ] `v{version}.md` and `v{version}_CN.md` created or updated
- [ ] `python3 scripts/check_version_consistency.py` returns no errors
- [ ] Commit message is English-only Conventional Commits format
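The checklist above maps to a fixed set of files per plugin: the "7+ files" from the implementer agent's bump rule. A sketch enumerating them (paths follow the repo layout; the assumption that the plugin's Python file is named after its directory is an illustration, not a guarantee):

```python
def sync_targets(ptype: str, name: str, version: str) -> list[str]:
    """Files that must all agree on the version when a plugin is bumped."""
    return [
        f"plugins/{ptype}/{name}/{name}.py",         # docstring `version:`
        f"plugins/{ptype}/{name}/README.md",
        f"plugins/{ptype}/{name}/README_CN.md",
        f"plugins/{ptype}/{name}/v{version}.md",     # release notes (EN)
        f"plugins/{ptype}/{name}/v{version}_CN.md",  # release notes (CN)
        f"docs/plugins/{ptype}/{name}.md",
        f"docs/plugins/{ptype}/{name}.zh.md",
        f"docs/plugins/{ptype}/index.md",            # version badge
        f"docs/plugins/{ptype}/index.zh.md",         # version badge
    ]

for path in sync_targets("actions", "deep_dive", "1.4.0"):
    print(path)
```

Feeding a list like this to the consistency script's mindset makes the failure mode obvious: any one of these files drifting is exactly what `check_version_consistency.py` is there to catch.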
1  .github/skills/source-code-analyzer/SKILL.md  vendored
@@ -17,6 +17,7 @@ When assisting with the development of `openwebui-extensions`, you (Antigravity)
- **Open WebUI**: `../open-webui/` (Core platform context)
- **Skills**: `../skills/` (Reusable expertise library)
- **Awesome Copilot**: `../awesome-copilot/` (Shared extensions & resources)
- **Open Terminal**: `../open-terminal/` (Terminal integration service)

### Plugin-Specific (Relevant to GitHub Copilot SDK)
222  .github/workflows/aw-ci-audit.md  vendored  Normal file
@@ -0,0 +1,222 @@
---
description: "CI audit workflow for failed releases, publish jobs, stats updates, and other important repository automation"
private: true
labels: [automation, diagnostics, ci, gh-aw]
metadata:
  author: Fu-Jie
  category: maintenance
  maturity: draft
on:
  schedule: daily
  workflow_dispatch:
roles: all
skip-bots: [github-actions, copilot, dependabot, renovate]
permissions:
  contents: read
  issues: read
  pull-requests: read
  actions: read
engine: copilot
network:
  allowed:
    - defaults
safe-outputs:
  create-issue:
    title-prefix: "[ci-audit] "
    labels: [ci-audit, maintenance]
    close-older-issues: false
  allowed-github-references: [repo]
timeout-minutes: 15
tools:
  github:
    toolsets: [repos, issues, pull_requests]
  bash:
    - pwd
    - ls
    - cat
    - head
    - tail
    - grep
    - wc
    - rg
    - git status
    - git diff
    - git show
    - git ls-files
---

# CI Audit

You are the repository maintainer assistant for `Fu-Jie/openwebui-extensions`.

Your job is to inspect recent repository automation health and create **one concise maintenance issue only when there is actionable CI or automation feedback**.

If there is no meaningful failure pattern, no new actionable diagnosis, and no useful maintainer issue to open, you **must call `noop`** with a short explanation.

## Primary Goal

Audit recent automation health for:

- failed or flaky release-related workflows
- plugin publishing failures
- community stats update regressions
- repeated workflow drift or fragile maintenance steps
- repository-specific next steps maintainers can actually act on

This workflow is **diagnostic-only**. Do not modify files, push code, open pull requests, or create releases.

## High-Priority Source Files

Use these files as the authoritative context before forming conclusions:

- `.github/copilot-instructions.md`
- `.github/workflows/release.yml`
- `.github/workflows/publish_plugin.yml`
- `.github/workflows/publish_new_plugin.yml`
- `.github/workflows/plugin-version-check.yml`
- `.github/workflows/community-stats.yml`
- `docs/development/gh-aw-integration-plan.md`
- `docs/development/gh-aw-integration-plan.zh.md`

## Target Workflows

Prioritize these workflows first:

- `release.yml`
- `publish_plugin.yml`
- `publish_new_plugin.yml`
- `plugin-version-check.yml`
- `community-stats.yml`
- `deploy.yml`

If there are no meaningful issues there, do not widen the scope unnecessarily.

## Review Scope

Focus on recent failed or suspicious automation runs and repository-facing symptoms. Prefer diagnosis that is grounded in repository context, not generic CI advice.

This workflow should behave like a maintainer reviewing workflow health trends, not like a generic log summarizer.

Focus especially on these areas:

### 1. Release and Publish Failures

Inspect whether recent failures suggest actionable problems such as:

- version extraction or comparison drift
- release-note packaging gaps
- publish-script authentication or environment issues
- assumptions in release jobs that no longer match the repository structure
- failures that are likely to recur until repository logic changes

### 2. Stats and Scheduled Workflow Reliability

Inspect whether scheduled maintenance jobs show drift or fragility such as:

- community stats commits no longer happening when expected
- badge or docs generation assumptions becoming stale
- external-API-dependent jobs failing in repeatable ways
- schedule-driven jobs causing noisy or low-value churn

### 3. Signal Quality for Maintainers

Only create an issue if there is a useful diagnosis with at least one concrete next step.

Good issue-worthy findings include:

- a repeated failure signature across runs
- a mismatch between workflow logic and the current repository file layout
- a likely missing secret, missing permission, or stale path assumption
- repeated low-signal failures that should be filtered or hardened

Do not open issues for one-off noise unless the failure pattern is likely to recur.

### 4. Existing Issue Awareness

Before creating a new issue, check whether a recent open issue already appears to cover the same CI failure pattern.

If an existing issue already covers the problem well enough, prefer `noop` and mention that the diagnosis is already tracked.

## Severity Model

Use three levels only:

- `High`: likely recurring CI or automation failure with repository impact
- `Medium`: useful to fix soon to reduce maintenance burden or workflow drift
- `Low`: optional hardening or cleanup suggestion

Do not invent issues just to create a report.

## Issue Creation Rules

Create **one maintenance issue** only if there is an actionable new diagnosis.

The issue must:

- be in English
- be concise and maintainer-like
- lead with findings, not generic praise
- include clickable file references like `.github/workflows/release.yml` or `scripts/publish_plugin.py`
- avoid nested bullets
- avoid pasting raw logs unless a short excerpt is critical

Use this exact structure when creating the issue:

```markdown
## CI Audit

### Summary
Short diagnosis of the failure pattern or automation risk.

### Findings
- `path/to/file`: specific problem or likely root cause

### Suggested Next Steps
- concrete maintainer action
- concrete maintainer action

### Notes
- Mention whether this appears recurring, new, or already partially mitigated.
```

Rules:

- Keep the issue under about 300 words unless multiple workflows are affected.
- If there are multiple related findings, group them into one issue rather than opening separate issues.
- Prefer a single, actionable diagnosis over a broad laundry list.

## No-Issue Rule

If there is no meaningful new diagnosis to report:

- do not create a status-only issue
- do not restate that workflows look healthy
- call `noop` with a short explanation like:

```json
{"noop": {"message": "No action needed: reviewed recent repository automation signals and found no new actionable CI diagnosis worth opening as a maintenance issue."}}
```

## Suggested Audit Process

1. Inspect recent repository automation context.
2. Prioritize the target workflows listed above.
3. Identify recurring or repository-specific failure patterns.
4. Check whether the problem is already tracked in an open issue.
5. Draft the shortest useful maintenance issue only if the diagnosis is actionable and new.
6. Finish with exactly one `create_issue` or one `noop`.

## Important Constraints

- Do not create an issue for a single low-signal transient failure.
- Do not propose large refactors unless the failure pattern clearly justifies them.
- Prefer repository-specific causes over generic "retry later" style advice.
- If the likely root cause is uncertain, state the uncertainty explicitly.
- If the pattern appears already tracked, prefer `noop` over duplicate issue creation.

## Final Requirement

You **must** finish with exactly one safe output action:

- `create_issue` if there is an actionable new diagnosis
- `noop` if there is not
236
.github/workflows/aw-pr-maintainer-review.md
vendored
Normal file
@@ -0,0 +1,236 @@
---
description: "Semantic PR maintainer review for plugin standards, bilingual docs sync, and release readiness gaps"
private: true
labels: [automation, review, pull-request, gh-aw]
metadata:
  author: Fu-Jie
  category: maintenance
  maturity: draft
on:
  pull_request:
    types: [opened, reopened, synchronize, ready_for_review]
    paths:
      - 'plugins/**'
      - 'docs/**'
      - '.github/**'
      - 'README.md'
      - 'README_CN.md'
    forks: ["*"]
  workflow_dispatch:
roles: all
skip-bots: [github-actions, copilot, dependabot, renovate]
permissions:
  contents: read
  issues: read
  pull-requests: read
engine: copilot
network:
  allowed:
    - defaults
safe-outputs:
  add-comment:
    target: triggering
    max: 1
    hide-older-comments: true
    footer: false
  allowed-github-references: [repo]
timeout-minutes: 12
tools:
  github:
    toolsets: [repos, issues, pull_requests]
  bash:
    - pwd
    - ls
    - cat
    - head
    - tail
    - grep
    - wc
    - rg
    - git status
    - git diff
    - git show
    - git ls-files
---

# PR Maintainer Review

You are the repository maintainer assistant for `Fu-Jie/openwebui-extensions`.

Your job is to review the triggering pull request against this repository's standards and leave **one concise summary comment only when there is actionable feedback**.

If the PR already looks compliant enough and there is no useful maintainer feedback to add, you **must call `noop`** with a short explanation.

## Primary Goal

Review the PR for:

- repository-standard compliance
- missing synchronized file updates
- release-readiness gaps
- documentation drift introduced by the change
- risky behavior regressions in plugin code

This workflow is **review-only**. Do not attempt to modify files, push code, or open pull requests.

## High-Priority Source Files

Use these files as the authoritative rule set before forming conclusions:

- `.github/copilot-instructions.md`
- `.github/instructions/code-review.instructions.md`
- `.github/instructions/commit-message.instructions.md`
- `.github/skills/release-prep/SKILL.md`
- `.github/skills/doc-mirror-sync/SKILL.md`
- `docs/development/gh-aw-integration-plan.md`
- `docs/development/gh-aw-integration-plan.zh.md`

## Review Scope

Start from the PR diff and changed files only. Expand into related files only when necessary to verify consistency.

Prioritize repository policy over generic best practices. This workflow should behave like a maintainer who knows this repository well, not like a broad lint bot.

Focus especially on these areas:

### 1. Plugin Code Standards

When a plugin Python file changes, check for repository-specific correctness:

- the single-file i18n pattern is preserved
- user-visible text is routed through translations where appropriate
- `_get_user_context` and `_get_chat_context` are used instead of fragile direct access
- `__event_call__` JavaScript execution has timeout guards and JS-side fallback handling
- `print()` is not introduced in production plugin code
- emitter usage is guarded safely
- filter plugins do not store request-scoped mutable state on `self`
- OpenWebUI/Copilot SDK tool definitions remain consistent with repository conventions

### 2. Versioning and Release Hygiene

When `plugins/**/*.py` changes, verify whether the PR also updates what should normally move with it:

- the plugin docstring `version:` changed when behavior changed
- the local `README.md` and `README_CN.md` changed where user-visible behavior changed
- mirrored docs under `docs/plugins/**` changed where required
- docs plugin indexes changed if a published version badge or listing text should change
- the root `README.md` and `README_CN.md` updated-date badge changed if this PR is clearly release-prep oriented

Do not require every PR to be full release prep. Only flag missing sync files when the PR clearly changes published behavior, plugin metadata, versioned documentation, or release-facing content.

### 3. Documentation Sync

When plugin READMEs change, check whether the matching docs mirrors should also change:

- `plugins/{type}/{name}/README.md` -> `docs/plugins/{type}/{name}.md`
- `plugins/{type}/{name}/README_CN.md` -> `docs/plugins/{type}/{name}.zh.md`

When docs-only changes are intentional, avoid over-reporting.

Useful path mappings:

- `plugins/actions/{name}/README.md` -> `docs/plugins/actions/{name}.md`
- `plugins/actions/{name}/README_CN.md` -> `docs/plugins/actions/{name}.zh.md`
- `plugins/filters/{name}/README.md` -> `docs/plugins/filters/{name}.md`
- `plugins/filters/{name}/README_CN.md` -> `docs/plugins/filters/{name}.zh.md`
- `plugins/pipes/{name}/README.md` -> `docs/plugins/pipes/{name}.md`
- `plugins/pipes/{name}/README_CN.md` -> `docs/plugins/pipes/{name}.zh.md`
- `plugins/pipelines/{name}/README.md` -> `docs/plugins/pipelines/{name}.md`
- `plugins/pipelines/{name}/README_CN.md` -> `docs/plugins/pipelines/{name}.zh.md`
- `plugins/tools/{name}/README.md` -> `docs/plugins/tools/{name}.md`
- `plugins/tools/{name}/README_CN.md` -> `docs/plugins/tools/{name}.zh.md`
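The path mappings above are mechanical, so the check can be sketched as a small helper. This function is illustrative only — the repository is not assumed to ship such a helper — but it encodes the same `plugins/... -> docs/...` rule:

```python
from pathlib import PurePosixPath

def docs_mirror_path(readme_path: str) -> str:
    """Map a plugin README path to its docs mirror, per the table above."""
    parts = PurePosixPath(readme_path).parts  # ("plugins", type, name, filename)
    plugin_type, name, fname = parts[1], parts[2], parts[3]
    # README_CN.md maps to the Chinese `.zh.md` mirror; README.md maps to `.md`
    suffix = ".zh.md" if fname == "README_CN.md" else ".md"
    return f"docs/plugins/{plugin_type}/{name}{suffix}"

print(docs_mirror_path("plugins/pipes/foo/README.md"))     # docs/plugins/pipes/foo.md
print(docs_mirror_path("plugins/tools/bar/README_CN.md"))  # docs/plugins/tools/bar.zh.md
```

A reviewer (or script) can then flag a PR that changes a README whose computed mirror path is absent from the diff.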

### 4. PR Quality and Maintainer Signal

Check whether the PR description is missing key maintainer context:

- what changed
- why it changed
- whether users need migration or reconfiguration

Only mention this if the omission makes review materially harder.

## Severity Model

Use three levels only:

- `Blocking`: likely bug, release regression, missing required sync, or standards breakage
- `Important`: should be fixed before merge, but not an obvious runtime break
- `Minor`: worthwhile suggestion, but optional

Do not invent issues just to leave a comment.

## Commenting Rules

Leave **one summary comment** only if there is actionable feedback.

The comment must:

- be in English
- be concise and maintainer-like
- lead with findings, not compliments
- include clickable file references like `plugins/pipes/foo/foo.py` or `docs/plugins/pipes/index.md`
- avoid nested bullets
- avoid repeating obvious diff content

Use this exact structure when commenting:

```markdown
## PR Maintainer Review

### Blocking
- `path/to/file`: specific issue and why it matters

### Important
- `path/to/file`: specific issue and what sync/check is missing

### Minor
- `path/to/file`: optional improvement or consistency note

### Merge Readiness
- Ready after the items above are addressed.
```

Rules:

- Omit empty sections.
- If there is only one severity category, include only that category plus `Merge Readiness`.
- Keep the full comment under about 250 words unless multiple files are involved.

## No-Comment Rule

If the PR has no meaningful maintainer findings:

- do not leave a praise-only comment
- do not restate that checks passed
- call `noop` with a short explanation like:

```json
{"noop": {"message": "No action needed: reviewed the PR diff and repository sync expectations, and found no actionable maintainer feedback."}}
```

## Suggested Review Process

1. Identify the changed files in the PR.
2. Read the high-priority repository rule files.
3. Compare changed plugin code against plugin review instructions.
4. Compare changed README or docs files against doc-mirror expectations.
5. Determine whether version-sync or release-facing files are missing.
6. Draft the shortest useful maintainer summary.
7. Leave exactly one `add_comment` or one `noop`.

## Important Constraints

- Do not request broad refactors unless the PR already touches that area.
- Do not require release-prep steps for tiny internal-only edits.
- Do not insist on docs sync when the change is clearly private/internal and not user-facing.
- Prefer precise, repository-specific feedback over generic code review advice.
- If you are unsure whether a sync file is required, downgrade to `Important` rather than `Blocking`.
- If a finding depends on intent that is not visible in the PR, explicitly say it is conditional instead of presenting it as certain.

## Final Requirement

You **must** finish with exactly one safe output action:

- `add_comment` if there is actionable feedback
- `noop` if there is not
248
.github/workflows/aw-release-preflight.md
vendored
Normal file
@@ -0,0 +1,248 @@
---
description: "Release preflight review for version sync, bilingual docs, release notes, and release-facing consistency"
private: true
labels: [automation, review, release, gh-aw]
metadata:
  author: Fu-Jie
  category: maintenance
  maturity: draft
on:
  pull_request:
    types: [opened, reopened, synchronize, ready_for_review]
    paths:
      - 'plugins/**/*.py'
      - 'plugins/**/README.md'
      - 'plugins/**/README_CN.md'
      - 'plugins/**/v*.md'
      - 'plugins/**/v*_CN.md'
      - 'docs/plugins/**/*.md'
      - 'README.md'
      - 'README_CN.md'
      - '.github/**'
    forks: ["*"]
  workflow_dispatch:
roles: all
skip-bots: [github-actions, copilot, dependabot, renovate]
permissions:
  contents: read
  issues: read
  pull-requests: read
engine: copilot
network:
  allowed:
    - defaults
safe-outputs:
  add-comment:
    target: triggering
    max: 1
    hide-older-comments: true
    footer: false
  allowed-github-references: [repo]
timeout-minutes: 12
tools:
  github:
    toolsets: [repos, issues, pull_requests]
  bash:
    - pwd
    - ls
    - cat
    - head
    - tail
    - grep
    - wc
    - rg
    - git status
    - git diff
    - git show
    - git ls-files
---

# Release Preflight Review

You are the repository maintainer assistant for `Fu-Jie/openwebui-extensions`.

Your job is to perform a **release-preflight review** of the triggering change and leave **one concise summary comment only when there is actionable release-facing feedback**.

If the change is not actually release prep, or it already looks consistent enough that there is no useful maintainer feedback to add, you **must call `noop`** with a short explanation.

## Primary Goal

Review the change for:

- version-sync completeness
- bilingual README and docs consistency
- release-notes completeness
- release-facing index or badge drift
- missing migration or maintainer context for a user-visible release

This workflow is **review-only**. Do not modify files, push code, create releases, or open pull requests.

## High-Priority Source Files

Use these files as the authoritative rule set before forming conclusions:

- `.github/copilot-instructions.md`
- `.github/instructions/commit-message.instructions.md`
- `.github/skills/release-prep/SKILL.md`
- `.github/skills/doc-mirror-sync/SKILL.md`
- `.github/workflows/release.yml`
- `docs/development/gh-aw-integration-plan.md`
- `docs/development/gh-aw-integration-plan.zh.md`

## Review Scope

Start from the PR diff and changed files only. Expand into related release-facing files only when needed to verify sync.

Prioritize repository release policy over generic release advice. This workflow should act like a maintainer performing a final consistency pass before a release-oriented merge.

Focus especially on these areas:

### 1. Version Sync Across Release Files

When a plugin release is being prepared, check whether the expected version bump is consistently reflected across the release-facing file set:

- the plugin Python docstring `version:`
- the plugin-local `README.md`
- the plugin-local `README_CN.md`
- the docs mirror page in `docs/plugins/**`
- the Chinese docs mirror page in `docs/plugins/**/*.zh.md`
- plugin list entries or badges in `docs/plugins/{type}/index.md`
- plugin list entries or badges in `docs/plugins/{type}/index.zh.md`

Only flag this when the change is clearly release-oriented, version-oriented, or user-visible enough that a synchronized release update is expected.
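The version-sync comparison above can be sketched with two small extractors. Both regexes are assumptions for illustration — a `version: X.Y.Z` line in the plugin metadata docstring and a shields-style `version-X.Y.Z` badge in the README — and would need to match the repository's real formats:

```python
import re

def docstring_version(plugin_source: str) -> "str | None":
    """Pull the `version:` field from a plugin metadata docstring (assumed format)."""
    m = re.search(r"^version:\s*(\d+(?:\.\d+)*)", plugin_source, re.MULTILINE)
    return m.group(1) if m else None

def badge_version(readme_text: str) -> "str | None":
    """Pull the version from a shields.io-style `version-X.Y.Z` badge (assumed format)."""
    m = re.search(r"version-(\d+(?:\.\d+)*)", readme_text)
    return m.group(1) if m else None

# Hypothetical inputs showing a drift the preflight review should flag
plugin = '"""\ntitle: Foo\nversion: 1.4.0\n"""'
readme = "![version](https://img.shields.io/badge/version-1.3.0-blue)"
if docstring_version(plugin) != badge_version(readme):
    print("version drift:", docstring_version(plugin), "vs", badge_version(readme))
```

Running the same pair of extractors over each file in the list above turns "consistently reflected" into a concrete equality check.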

### 2. README and Docs Mirror Consistency

When plugin README files change, check whether the mirrored docs pages were updated consistently.

Useful path mappings:

- `plugins/actions/{name}/README.md` -> `docs/plugins/actions/{name}.md`
- `plugins/actions/{name}/README_CN.md` -> `docs/plugins/actions/{name}.zh.md`
- `plugins/filters/{name}/README.md` -> `docs/plugins/filters/{name}.md`
- `plugins/filters/{name}/README_CN.md` -> `docs/plugins/filters/{name}.zh.md`
- `plugins/pipes/{name}/README.md` -> `docs/plugins/pipes/{name}.md`
- `plugins/pipes/{name}/README_CN.md` -> `docs/plugins/pipes/{name}.zh.md`
- `plugins/pipelines/{name}/README.md` -> `docs/plugins/pipelines/{name}.md`
- `plugins/pipelines/{name}/README_CN.md` -> `docs/plugins/pipelines/{name}.zh.md`
- `plugins/tools/{name}/README.md` -> `docs/plugins/tools/{name}.md`
- `plugins/tools/{name}/README_CN.md` -> `docs/plugins/tools/{name}.zh.md`

Do not over-report if the change is intentionally docs-only and not a release-prep change.

### 3. What's New and Release Notes Coverage

When a release-facing plugin update is present, check whether the release documentation covers the current scope clearly enough:

- the current `What's New` section reflects the latest release only
- the Chinese `最新更新` section is aligned with the English version
- `v{version}.md` and `v{version}_CN.md` exist when release notes are expected
- the release notes cover meaningful feature, fix, docs, or migration changes in the current diff

Do not require release notes for tiny internal-only edits. Do flag missing release notes if the PR is obviously preparing a published plugin release.

### 4. Root README and Release-Facing Index Drift

For clearly release-oriented changes, check whether repository-level release-facing surfaces also need updates:

- the root `README.md` updated-date badge
- the root `README_CN.md` updated-date badge
- plugin index entries under `docs/plugins/**/index.md`
- plugin index entries under `docs/plugins/**/index.zh.md`

Only mention missing root-level updates when the PR is truly release-prep oriented, not for routine internal edits.

### 5. Maintainer Context and Release Clarity

Check whether the PR description or visible release-facing text is missing essential context:

- what is being released
- why the release matters
- whether migration or reconfiguration is needed

Only mention this if the omission makes release review materially harder.

## Severity Model

Use three levels only:

- `Blocking`: likely release regression, missing required version sync, or a clearly incomplete release-facing update
- `Important`: should be fixed before merge to avoid release confusion or drift
- `Minor`: worthwhile release-facing cleanup or consistency suggestion

Do not invent issues just to leave a comment.

## Commenting Rules

Leave **one summary comment** only if there is actionable release-preflight feedback.

The comment must:

- be in English
- be concise and maintainer-like
- lead with findings, not compliments
- include clickable file references like `plugins/pipes/foo/README.md` or `docs/plugins/pipes/index.md`
- avoid nested bullets
- avoid restating obvious diff content

Use this exact structure when commenting:

```markdown
## Release Preflight Review

### Blocking
- `path/to/file`: specific release-facing problem and why it matters

### Important
- `path/to/file`: missing sync or release-documentation gap

### Minor
- `path/to/file`: optional cleanup or consistency improvement

### Release Readiness
- Ready after the items above are addressed.
```

Rules:

- Omit empty sections.
- If there is only one severity category, include only that category plus `Release Readiness`.
- Keep the full comment under about 250 words unless multiple files are involved.

## No-Comment Rule

If the change has no meaningful release-preflight findings:

- do not leave a praise-only comment
- do not restate that checks passed
- call `noop` with a short explanation like:

```json
{"noop": {"message": "No action needed: reviewed the release-facing diff, version-sync expectations, and bilingual documentation coverage, and found no actionable preflight feedback."}}
```

## Suggested Review Process

1. Identify whether the change is actually release-oriented.
2. Inspect the changed files in the PR diff.
3. Read the repository release-prep rule files.
4. Check plugin version-sync expectations only where release intent is visible.
5. Check README, README_CN, docs mirrors, indexes, and release notes for drift.
6. Draft the shortest useful maintainer summary.
7. Leave exactly one `add_comment` or one `noop`.

## Important Constraints

- Do not force full release-prep expectations onto tiny internal edits.
- Do not require root README badge updates unless the PR is clearly release-facing.
- Do not ask for release notes if the change is not realistically a release-prep PR.
- Prefer repository-specific sync feedback over generic release advice.
- If you are unsure whether a release-facing sync file is required, downgrade to `Important` rather than `Blocking`.
- If a finding depends on inferred intent, state it conditionally instead of presenting it as certain.

## Final Requirement

You **must** finish with exactly one safe output action:

- `add_comment` if there is actionable feedback
- `noop` if there is not
71
.github/workflows/release.yml
vendored
@@ -22,6 +22,11 @@ on:
       - main
     paths:
       - 'plugins/**/*.py'
+      - 'plugins/**/README.md'
+      - 'plugins/**/README_CN.md'
+      - 'plugins/**/v*.md'
+      - 'plugins/**/v*_CN.md'
+      - 'docs/plugins/**/*.md'
     tags:
       - 'v*'

@@ -52,6 +57,8 @@ permissions:
 jobs:
   check-changes:
     runs-on: ubuntu-latest
+    # Skip release if commit message contains [skip release]
+    if: ${{ !contains(github.event.head_commit.message, '[skip release]') }}
     env:
       LANG: en_US.UTF-8
       LC_ALL: en_US.UTF-8

@@ -59,6 +66,8 @@ jobs:
       has_changes: ${{ steps.detect.outputs.has_changes }}
       changed_plugins: ${{ steps.detect.outputs.changed_plugins }}
       release_notes: ${{ steps.detect.outputs.release_notes }}
+      has_doc_changes: ${{ steps.detect.outputs.has_doc_changes }}
+      changed_doc_files: ${{ steps.detect.outputs.changed_doc_files }}

     steps:
       - name: Checkout repository

@@ -95,17 +104,19 @@ jobs:
           python scripts/extract_plugin_versions.py --json --output current_versions.json

           # Get previous plugin versions by checking out old plugins
-          if git worktree add /tmp/old_repo ${COMPARE_REF} 2>/dev/null; then
-            if [ -d /tmp/old_repo/plugins ]; then
-              python scripts/extract_plugin_versions.py --plugins-dir /tmp/old_repo/plugins --json --output old_versions.json
+          OLD_WORKTREE=$(mktemp -d)
+          if git worktree add "$OLD_WORKTREE" ${COMPARE_REF} 2>/dev/null; then
+            if [ -d "$OLD_WORKTREE/plugins" ]; then
+              python scripts/extract_plugin_versions.py --plugins-dir "$OLD_WORKTREE/plugins" --json --output old_versions.json
             else
               echo "[]" > old_versions.json
             fi
-            git worktree remove /tmp/old_repo 2>/dev/null || true
+            git worktree remove "$OLD_WORKTREE" 2>/dev/null || true
           else
             echo "Failed to create worktree, using empty version list"
             echo "[]" > old_versions.json
           fi
+          rm -rf "$OLD_WORKTREE" 2>/dev/null || true

           # Compare versions and generate release notes
           python scripts/extract_plugin_versions.py --compare old_versions.json --ignore-removed --output changes.md
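The worktree change above swaps the fixed `/tmp/old_repo` path for a `mktemp -d` directory, so concurrent runs cannot collide and cleanup is unconditional. A minimal standalone sketch of that pattern follows; the throwaway repository and ref are illustrative, not the workflow's real inputs:

```shell
set -e
# Illustrative throwaway repo so the sketch is self-contained
REPO=$(mktemp -d)
git -C "$REPO" init -q
git -C "$REPO" -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "init"
COMPARE_REF=HEAD

# Unique path per run instead of a shared /tmp/old_repo
OLD_WORKTREE=$(mktemp -d)
if git -C "$REPO" worktree add "$OLD_WORKTREE" "$COMPARE_REF" 2>/dev/null; then
  # ...inspect the old tree here, e.g. "$OLD_WORKTREE/plugins"...
  echo "worktree ready"
  git -C "$REPO" worktree remove "$OLD_WORKTREE" 2>/dev/null || true
else
  echo "worktree failed, falling back"
fi
# Unconditional cleanup, mirroring the workflow's final rm -rf
rm -rf "$OLD_WORKTREE" 2>/dev/null || true
echo "cleaned up"
```

`git worktree add` reuses an existing empty directory, which is why handing it the `mktemp -d` path directly works here.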
@@ -113,9 +124,29 @@ jobs:
|
||||
|
||||
echo "=== Version Changes ==="
|
||||
cat changes.md
|
||||
|
||||
# Detect documentation/release-note changes that should be reflected in release notes
|
||||
git diff --name-only "$COMPARE_REF"..HEAD -- \
|
||||
'plugins/**/README.md' \
|
||||
'plugins/**/README_CN.md' \
|
||||
'plugins/**/v*.md' \
|
||||
'plugins/**/v*_CN.md' \
|
||||
'docs/plugins/**/*.md' > changed_docs.txt || true
|
||||
|
||||
if [ -s changed_docs.txt ]; then
|
||||
echo "has_doc_changes=true" >> $GITHUB_OUTPUT
|
||||
echo "changed_doc_files<<EOF" >> $GITHUB_OUTPUT
|
||||
cat changed_docs.txt >> $GITHUB_OUTPUT
|
||||
echo "" >> $GITHUB_OUTPUT
|
||||
echo "EOF" >> $GITHUB_OUTPUT
|
||||
else
|
||||
echo "has_doc_changes=false" >> $GITHUB_OUTPUT
|
||||
echo "changed_doc_files=" >> $GITHUB_OUTPUT
|
||||
fi
|
||||
|
||||
# Check if there are any changes
|
||||
if grep -q "No changes detected" changes.md; then
|
||||
# Only trigger release if there are actual version changes, not just doc changes
|
||||
echo "has_changes=false" >> $GITHUB_OUTPUT
|
||||
echo "changed_plugins=" >> $GITHUB_OUTPUT
|
||||
else
|
||||
@@ -214,7 +245,6 @@ jobs:
|
||||
id: plugins
|
||||
run: |
|
||||
python scripts/extract_plugin_versions.py --json --output plugin_versions.json
|
||||
python scripts/extract_plugin_versions.py --json --output plugin_versions.json
|
||||
|
||||
- name: Collect plugin files for release
|
||||
id: collect_files
|
||||
@@ -308,9 +338,9 @@ jobs:
|
||||
LAST_TAG=$(git describe --tags --abbrev=0 2>/dev/null || echo "")
|
||||
|
||||
if [ -n "$LAST_TAG" ]; then
|
||||
COMMITS=$(git log ${LAST_TAG}..HEAD --pretty=format:"- %s" --no-merges -- plugins/ | head -20)
|
||||
COMMITS=$(git log ${LAST_TAG}..HEAD --pretty=format:"- **%s**%n%b" --no-merges -- plugins/ | sed '/^$/d' | head -40)
|
||||
else
|
||||
COMMITS=$(git log --pretty=format:"- %s" --no-merges -10 -- plugins/)
|
||||
COMMITS=$(git log --pretty=format:"- **%s**%n%b" --no-merges -10 -- plugins/ | sed '/^$/d')
|
||||
fi
{

@@ -328,9 +358,30 @@ jobs:
NOTES: ${{ github.event.inputs.release_notes }}
DETECTED_CHANGES: ${{ needs.check-changes.outputs.release_notes }}
COMMITS: ${{ steps.commits.outputs.commits }}
DOC_FILES: ${{ needs.check-changes.outputs.changed_doc_files }}
run: |
> release_notes.md

# 1. Release notes from v*.md files (highest priority, shown first)
if [ -n "$DOC_FILES" ]; then
  RELEASE_NOTE_FILES=$(echo "$DOC_FILES" | grep -E '^plugins/.*/v[^/]*\.md$' | grep -v '_CN\.md$' || true)
  if [ -n "$RELEASE_NOTE_FILES" ]; then
    while IFS= read -r file; do
      [ -z "$file" ] && continue
      if [ -f "$file" ]; then
        # Inject plugin README link before each release note file content
        plugin_dir=$(dirname "$file")
        readme_url="https://github.com/Fu-Jie/openwebui-extensions/blob/main/${plugin_dir}/README.md"
        echo "> 📖 [Plugin README](${readme_url})" >> release_notes.md
        echo "" >> release_notes.md
        cat "$file" >> release_notes.md
        echo "" >> release_notes.md
      fi
    done <<< "$RELEASE_NOTE_FILES"
  fi
fi
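The `grep -E '^plugins/.*/v[^/]*\.md$'` filter above keeps per-plugin release-note files (a `v` segment with no further path separator before `.md`), and `grep -v '_CN\.md$'` then drops the Chinese counterparts. The paths below are hypothetical, purely to illustrate which entries survive the pipeline:

```shell
# Hypothetical changed-file list; only the English release note should survive
DOC_FILES='plugins/foo/v1.2.0.md
plugins/foo/v1.2.0_CN.md
plugins/foo/README.md
docs/plugins/foo/guide.md'

echo "$DOC_FILES" | grep -E '^plugins/.*/v[^/]*\.md$' | grep -v '_CN\.md$'
# → plugins/foo/v1.2.0.md
```

`README.md` fails the first grep (no `/v...` segment), the `docs/` path fails the `^plugins/` anchor, and the `_CN` variant is removed by the second grep.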
# 2. Plugin version changes detected by script
if [ -n "$TITLE" ]; then
  echo "## $TITLE" >> release_notes.md
  echo "" >> release_notes.md

@@ -343,6 +394,7 @@ jobs:
  echo "" >> release_notes.md
fi

# 3. Commits (Conventional Commits format with body)
if [ -n "$COMMITS" ]; then
  echo "## Commits" >> release_notes.md
  echo "" >> release_notes.md

@@ -357,8 +409,6 @@ jobs:
  echo "" >> release_notes.md
fi

cat >> release_notes.md << 'EOF'

## Download

@@ -389,6 +439,7 @@ jobs:
cat release_notes.md

- name: Create Git Tag
  if: ${{ !startsWith(github.ref, 'refs/tags/v') }}
  run: |
    VERSION="${{ steps.version.outputs.version }}"

@@ -21,6 +21,7 @@ Plugin types: `actions` / `filters` / `pipes` / `pipelines` / `tools`
2. **No silent failures.** All errors must surface via `__event_emitter__` notification or backend `logging`.
3. **No hardcoded model IDs.** Default to the current conversation model; let `Valves` override.
4. **Chinese responses.** Reply in Simplified Chinese for all planning, explanations, and status summaries. English only for code, commit messages, and docstrings.
5. **Knowledge capture.** Whenever you discover a non-obvious pattern, gotcha, or workaround (e.g., internal API contracts, mock object requirements, parameter injection quirks), save it to `.agent/learnings/{topic}.md` **before ending the session**. See `.agent/learnings/README.md` for format and existing entries.

---
50
README.md
@@ -5,11 +5,10 @@

English | [中文](./README_CN.md)

A collection of enhancements, plugins, and prompts for [OpenWebUI](https://github.com/open-webui/open-webui), developed and curated for personal use to extend functionality and improve experience.
A collection of enhancements, plugins, and prompts for [open-webui](https://github.com/open-webui/open-webui), developed and curated for personal use to extend functionality and improve experience.

<!-- STATS_START -->
## 📊 Community Stats
>
> 

| 👤 Author | 👥 Followers | ⭐ Points | 🏆 Contributions |

@@ -20,19 +19,18 @@ A collection of enhancements, plugins, and prompts for [OpenWebUI](https://githu
| :---: | :---: | :---: | :---: | :---: |
|  |  |  |  |  |

### 🔥 Top 6 Popular Plugins

### 🔥 Top 6 Popular Plugins
| Rank | Plugin | Version | Downloads | Views | 📅 Updated |
| :---: | :--- | :---: | :---: | :---: | :---: |
| 🥇 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) |  |  |  |  |
| 🥈 | [Smart Infographic](https://openwebui.com/posts/smart_infographic_ad6f0c7f) |  |  |  |  |
| 🥉 | [Markdown Normalizer](https://openwebui.com/posts/markdown_normalizer_baaa8732) |  |  |  |  |
| 4️⃣ | [Export to Word Enhanced](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) |  |  |  |  |
| 5️⃣ | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) |  |  |  |  |
| 6️⃣ | [Export to Excel](https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d) |  |  |  |  |
| 🥇 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) |  |  |  |  |
| 🥈 | [Smart Infographic](https://openwebui.com/posts/smart_infographic_ad6f0c7f) |  |  |  |  |
| 🥉 | [Markdown Normalizer](https://openwebui.com/posts/markdown_normalizer_baaa8732) |  |  |  |  |
| 4️⃣ | [Export to Word Enhanced](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) |  |  |  |  |
| 5️⃣ | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) |  |  |  |  |
| 6️⃣ | [AI Task Instruction Generator](https://openwebui.com/posts/ai_task_instruction_generator_9bab8b37) |  |  |  |  |

### 📈 Total Downloads Trend



*See full stats and charts in [Community Stats Report](./docs/community-stats.md)*

@@ -40,20 +38,29 @@ A collection of enhancements, plugins, and prompts for [OpenWebUI](https://githu

## 🌟 Star Features

### 1. [GitHub Copilot SDK Pipe](https://openwebui.com/posts/github_copilot_official_sdk_pipe_ce96f7b4) [](https://openwebui.com/posts/github_copilot_official_sdk_pipe_ce96f7b4)
### 1. [GitHub Copilot Official SDK Pipe](https://openwebui.com/posts/github_copilot_official_sdk_pipe_ce96f7b4)    

**The ultimate autonomous Agent for OpenWebUI.** Transforming your LLM into a powerful OS-level engineer with native code execution, deep tool autonomy, and professional skill management.
**The ultimate autonomous Agent integration for OpenWebUI.** Deeply bridging GitHub Copilot SDK with your OpenWebUI ecosystem. It enables the Agent to autonomously perform **intent recognition**, **web search**, and **context compaction** while reusing your existing tools, skills, and configurations for a professional, full-featured experience.

> [!TIP]
> **No GitHub Copilot subscription required!** Supports **BYOK (Bring Your Own Key)** mode using your own OpenAI/Anthropic API keys.

#### 🚀 Key Leap (v0.9.0+)
#### 🚀 Key Leap (v0.10.0)

- **🧩 Skills Revolution**: Native support for **SKILL directories** (scripts, templates, resources) coupled with a **Bidirectional Bridge** to OpenWebUI Workspace Skills.
- **⌨️ Prompt Enhancement**: Restored native Copilot CLI **Plan Mode** for complex tasks and integrated native SQLite-backed session management for robust state persistence.
- **📋 Live TODO Widget**: Added a compact real-time task tracking widget synchronized with `session.db`, keeping in-progress work visible without cluttering the chat history.
- **🔌 Seamless Ecosystem Integration**: Automatically injects and reuses your OpenWebUI **Tools**, **MCP**, **OpenAPI Servers**, and **Skills**, significantly enhancing the Agent's capabilities through your existing setup.
- **🌐 Language Consistency**: System prompts mandate that Agent output language remains strictly consistent with user input.
- **🧩 Skills Revolution**: Native support for **SKILL directories** and a **Bidirectional Bridge** to OpenWebUI Workspace Skills.
- **🛡️ Secure Isolation**: Strict user/session-level **Workspace Sandboxing** with persistent configuration.
- **📊 Interactive Delivery**: Professional **File Delivery Protocol** for instant HTML artifacts and persistent downloadable results.
- **📊 Interactive Delivery**: Full support for **HTML Artifacts** and **RichUI** rendering, providing instant interactive previews and persistent downloadable results.
- **🛠️ Deterministic Toolchain**: Built-in specialized tools for skill lifecycles (`manage_skills`) and system optimization.

> [!TIP]
> **💡 Pro Tip: Enhanced Visualization**
> We highly recommend asking the Agent to install the [Visual Explainer](https://github.com/nicobailon/visual-explainer) skill during your conversation. It dramatically improves the aesthetics and interactivity of generated **HTML Artifacts**. Simply tell the AI:
> "Please install this skill: <https://github.com/nicobailon/visual-explainer>" to get started.

#### 📺 Demo: Visual Skills & Data Analysis



@@ -64,19 +71,19 @@ A collection of enhancements, plugins, and prompts for [OpenWebUI](https://githu

- **[GitHub Star Forecasting](./docs/plugins/pipes/star-prediction-example.md)**: Automatically parsing CSV data, writing analysis scripts, and generating interactive growth dashboards.
- **[Video Optimization](./docs/plugins/pipes/video-processing-example.md)**: Direct control of system-level tools (FFmpeg) to accelerate and compress media with professional color optimization.

### 2. [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) [](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a)
### 2. [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a)

**Experience interactive thinking.** Seamlessly transforms complex chat sessions into structured, clickable mind maps for better visual modeling and rapid idea extraction.

### 3. [Smart Infographic](https://openwebui.com/posts/smart_infographic_ad6f0c7f) [](https://openwebui.com/posts/smart_infographic_ad6f0c7f)
### 3. [Smart Infographic](https://openwebui.com/posts/smart_infographic_ad6f0c7f)

**Professional data storytelling.** Converts raw information into sleek, boardroom-ready infographics powered by AntV, perfect for summarizing long-form content instantly.

### 4. [Export to Word Enhanced](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) [](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315)
### 4. [Export to Word Enhanced](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315)

**High-fidelity reporting.** Export conversation history into professionally formatted Word documents with preserved headers, code blocks, and math formulas.

### 5. [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) [](https://openwebui.com/posts/async_context_compression_b1655bc8)
### 5. [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8)

**Maximize your context window.** Intelligently compresses chat history using LLM logic to save tokens and costs while maintaining a high-quality reasoning chain.

@@ -97,6 +104,11 @@ Located in the `plugins/` directory, containing Python-based enhancements:

- **Export to Excel** (`export_to_excel`): Exports chat history to Excel files.
- **Export to Word** (`export_to_docx`): Exports chat history to Word documents.

### Tools

- **Smart Mind Map Tool** (`smart-mind-map-tool`): The tool version of Smart Mind Map, enabling AI proactive/autonomous invocation.
- **OpenWebUI Skills Manager Tool** (`openwebui-skills-manager-tool`): Native tool for managing OpenWebUI skills.

### Filters

- **GitHub Copilot SDK Files Filter** (`github_copilot_sdk_files_filter`): Essential companion for Copilot SDK. Bypasses RAG to ensure full file accessibility for Agents.
42
README_CN.md
@@ -6,7 +6,6 @@ OpenWebUI 增强功能集合。包含个人开发与收集的插件、提示词

<!-- STATS_START -->
## 📊 社区统计
>
> 

| 👤 作者 | 👥 粉丝 | ⭐ 积分 | 🏆 贡献 |

@@ -17,19 +16,18 @@ OpenWebUI 增强功能集合。包含个人开发与收集的插件、提示词
| :---: | :---: | :---: | :---: | :---: |
|  |  |  |  |  |

### 🔥 热门插件 Top 6

### 🔥 热门插件 Top 6
| 排名 | 插件 | 版本 | 下载 | 浏览 | 📅 更新 |
| :---: | :--- | :---: | :---: | :---: | :---: |
| 🥇 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) |  |  |  |  |
| 🥈 | [Smart Infographic](https://openwebui.com/posts/smart_infographic_ad6f0c7f) |  |  |  |  |
| 🥉 | [Markdown Normalizer](https://openwebui.com/posts/markdown_normalizer_baaa8732) |  |  |  |  |
| 4️⃣ | [Export to Word Enhanced](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) |  |  |  |  |
| 5️⃣ | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) |  |  |  |  |
| 6️⃣ | [Export to Excel](https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d) |  |  |  |  |
| 🥇 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) |  |  |  |  |
| 🥈 | [Smart Infographic](https://openwebui.com/posts/smart_infographic_ad6f0c7f) |  |  |  |  |
| 🥉 | [Markdown Normalizer](https://openwebui.com/posts/markdown_normalizer_baaa8732) |  |  |  |  |
| 4️⃣ | [Export to Word Enhanced](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) |  |  |  |  |
| 5️⃣ | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) |  |  |  |  |
| 6️⃣ | [AI Task Instruction Generator](https://openwebui.com/posts/ai_task_instruction_generator_9bab8b37) |  |  |  |  |

### 📈 总下载量累计趋势



*完整统计与趋势图请查看 [社区统计报告](./docs/community-stats.zh.md)*

@@ -37,19 +35,28 @@ OpenWebUI 增强功能集合。包含个人开发与收集的插件、提示词

## 🌟 精选功能

### 1. [GitHub Copilot SDK Pipe](https://openwebui.com/posts/github_copilot_official_sdk_pipe_ce96f7b4) [](https://openwebui.com/posts/github_copilot_official_sdk_pipe_ce96f7b4)
### 1. [GitHub Copilot Official SDK Pipe](https://openwebui.com/posts/github_copilot_official_sdk_pipe_ce96f7b4) [](https://openwebui.com/posts/github_copilot_official_sdk_pipe_ce96f7b4) 

**OpenWebUI 终极自主 Agent 增强。** 将 LLM 转化为具备 OS 级操作能力的专业工程师,支持原生代码执行、深度工具自治以及专业技能管理。
**OpenWebUI 终极自主 Agent 深度集成。** 将 GitHub Copilot SDK 与 OpenWebUI 生态完美桥接。它允许 Agent 具备**智能意图识别**、**自主网页搜索**与**自动上下文压缩**能力,同时直接复用您现有的工具、技能与配置,通过全功能 Skill 体系带来极致的专业交互体验。

> [!TIP]
> **无需 GitHub Copilot 订阅!** 支持 **BYOK (Bring Your Own Key)** 模式,使用你自己的 OpenAI/Anthropic API Key。

#### 🚀 核心进化 (v0.9.0+)
#### 🚀 核心进化 (v0.10.0)

- **🧩 技能革命**: 原生支持 **SKILL 目录**(含脚本、模板与资源),并实现与 OpenWebUI **工作区 > Skills** 的深度双向桥接。
- **⌨️ 提示词增强**:恢复了原生 Copilot CLI **原生计划模式 (Native Plan Mode)**,并集成了基于 SQLite 的原生会话持久化管理,确保复杂任务编排与状态追踪的稳定性。
- **📋 Live TODO 小组件**:新增基于 `session.db` 实时任务状态的紧凑型嵌入式 TODO 小组件,任务进度常驻可见,无需在正文中重复显示全部待办列表。
- **🔌 生态深度注入**: 自动读取并复用 OpenWebUI **工具 (Tools)**、**MCP**、**OpenAPI Server** 与 **技能 (Skills)**,显著增强 Agent 的实战能力。
- **🧩 技能革命**: 原生支持 **SKILL 目录**,并实现与 OpenWebUI **工作区 > Skills** 的深度双向桥接。
- **🛡️ 安全沙箱**: 严格的用户/会话级 **工作区隔离** 与持久化配置环境。
- **📊 交互交付**: 专业 **文件交付协议**,支持即时预览交互式 HTML Artifacts 与持久化结果下载。
- **📊 交互交付**: 完整支持 **HTML Artifacts** 与 **RichUI** 渲染,提供即时预览交互式应用程序与持久化结果下载。
- **🛠️ 确定性工具链**: 内置 `manage_skills` 等专业工具,赋予 Agent 完整的技能生命周期管理能力。
- **🌐 语言一致性**: 提示词强制要求 Agent 输出语言与用户输入保持一致,确保国际化体验。

> [!TIP]
> **💡 进阶实战建议**
> 强烈推荐在对话中让 Agent 为其安装 [Visual Explainer](https://github.com/nicobailon/visual-explainer) 技能。该技能能显著提升 **HTML Artifacts** 的美观度与交互深度,只需对 AI 说:
> “请帮我安装这个技能:<https://github.com/nicobailon/visual-explainer>” 即可瞬间启用。

#### 📺 演示:可视化技能与数据分析

@@ -94,6 +101,11 @@ OpenWebUI 增强功能集合。包含个人开发与收集的插件、提示词
- **Export to Excel** (`export_to_excel`): 将对话内容导出为 Excel 文件。
- **Export to Word** (`export_to_docx`): 将对话内容导出为 Word 文档。

### Tools (工具)

- **智能思维导图工具** (`smart-mind-map-tool`): 思维导图的 Tool 版本,支持 AI 主动/自主调用。
- **OpenWebUI Skills 管理工具** (`openwebui-skills-manager-tool`): 用于管理 OpenWebUI Skills 的原生工具。

### Filters (消息处理)

- **GitHub Copilot SDK Files Filter** (`github_copilot_sdk_files_filter`): Copilot SDK 必备搭档。绕过 RAG,确保 Agent 能真正看到你的每一个文件。

@@ -105,7 +117,7 @@ OpenWebUI 增强功能集合。包含个人开发与收集的插件、提示词

### Pipes (模型管道)

- **GitHub Copilot SDK** (`github-copilot-sdk`): GitHub Copilot SDK 官方集成。支持动态模型、多轮对话、流式输出、图片输入及无限会话。
- **GitHub Copilot SDK** (`github-copilot-sdk`): 深度集成 GitHub Copilot SDK 的强大 Agent。支持智能意图识别、自主网页搜索与上下文压缩,并能够无缝复用 OpenWebUI 的工具 (Tools)、MCP 与 OpenAPI Server。

### Pipelines (工作流管道)
Binary file not shown.
Before Width: | Height: | Size: 15 MiB After Width: | Height: | Size: 10 MiB
@@ -1,7 +1,7 @@
{
  "schemaVersion": 1,
  "label": "downloads",
  "message": "6.4k",
  "message": "7.8k",
  "color": "blue",
  "namedLogo": "openwebui"
}
@@ -1,6 +1,6 @@
{
  "schemaVersion": 1,
  "label": "followers",
  "message": "295",
  "message": "315",
  "color": "blue"
}
@@ -1,6 +1,6 @@
{
  "schemaVersion": 1,
  "label": "plugins",
  "message": "25",
  "message": "27",
  "color": "green"
}
@@ -1,6 +1,6 @@
{
  "schemaVersion": 1,
  "label": "points",
  "message": "299",
  "message": "329",
  "color": "orange"
}
@@ -1,6 +1,6 @@
{
  "schemaVersion": 1,
  "label": "upvotes",
  "message": "254",
  "message": "281",
  "color": "brightgreen"
}
@@ -1,16 +1,17 @@
{
  "total_posts": 25,
  "total_downloads": 6379,
  "total_views": 67827,
  "total_upvotes": 254,
  "total_downvotes": 3,
  "total_saves": 337,
  "total_comments": 58,
  "total_posts": 27,
  "total_downloads": 7786,
  "total_views": 82342,
  "total_upvotes": 281,
  "total_downvotes": 4,
  "total_saves": 398,
  "total_comments": 63,
  "by_type": {
    "post": 6,
    "tool": 2,
    "pipe": 1,
    "action": 12,
    "filter": 4,
    "action": 12,
    "prompt": 1,
    "review": 1
  },
@@ -22,11 +23,11 @@
  "version": "1.0.0",
  "author": "Fu-Jie",
  "description": "Intelligently analyzes text content and generates interactive mind maps to help users structure and visualize knowledge.",
  "downloads": 1328,
  "views": 11410,
  "upvotes": 23,
  "saves": 59,
  "comments": 15,
  "downloads": 1542,
  "views": 12996,
  "upvotes": 28,
  "saves": 66,
  "comments": 18,
  "created_at": "2025-12-30",
  "updated_at": "2026-02-27",
  "url": "https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a"
@@ -38,10 +39,10 @@
  "version": "1.5.0",
  "author": "Fu-Jie",
  "description": "AI-powered infographic generator based on AntV Infographic. Supports professional templates, auto-icon matching, and SVG/PNG downloads.",
  "downloads": 1076,
  "views": 10746,
  "downloads": 1230,
  "views": 12309,
  "upvotes": 25,
  "saves": 40,
  "saves": 46,
  "comments": 10,
  "created_at": "2025-12-28",
  "updated_at": "2026-02-13",
@@ -54,13 +55,13 @@
  "version": "1.2.7",
  "author": "Fu-Jie",
  "description": "A content normalizer filter that fixes common Markdown formatting issues in LLM outputs, such as broken code blocks, LaTeX formulas, and list formatting. Including LaTeX command protection.",
  "downloads": 609,
  "views": 6795,
  "upvotes": 18,
  "saves": 37,
  "downloads": 719,
  "views": 7704,
  "upvotes": 20,
  "saves": 42,
  "comments": 5,
  "created_at": "2026-01-12",
  "updated_at": "2026-02-27",
  "updated_at": "2026-03-03",
  "url": "https://openwebui.com/posts/markdown_normalizer_baaa8732"
},
{
@@ -70,10 +71,10 @@
  "version": "0.4.4",
  "author": "Fu-Jie",
  "description": "Export current conversation from Markdown to Word (.docx) with Mermaid diagrams rendered client-side (Mermaid.js, SVG+PNG), LaTeX math, real hyperlinks, improved tables, syntax highlighting, and blockquote support.",
  "downloads": 578,
  "views": 4611,
  "upvotes": 16,
  "saves": 30,
  "downloads": 700,
  "views": 5399,
  "upvotes": 17,
  "saves": 37,
  "comments": 5,
  "created_at": "2026-01-03",
  "updated_at": "2026-02-13",
@@ -86,31 +87,15 @@
  "version": "1.3.0",
  "author": "Fu-Jie",
  "description": "Reduces token consumption in long conversations while maintaining coherence through intelligent summarization and message compression.",
  "downloads": 559,
  "views": 5452,
  "upvotes": 15,
  "saves": 41,
  "downloads": 669,
  "views": 6274,
  "upvotes": 16,
  "saves": 47,
  "comments": 0,
  "created_at": "2025-11-08",
  "updated_at": "2026-02-21",
  "updated_at": "2026-03-03",
  "url": "https://openwebui.com/posts/async_context_compression_b1655bc8"
},
{
  "title": "Export to Excel",
  "slug": "export_mulit_table_to_excel_244b8f9d",
  "type": "action",
  "version": "0.3.7",
  "author": "Fu-Jie",
  "description": "Extracts tables from chat messages and exports them to Excel (.xlsx) files with smart formatting.",
  "downloads": 492,
  "views": 2693,
  "upvotes": 10,
  "saves": 8,
  "comments": 0,
  "created_at": "2025-05-30",
  "updated_at": "2026-02-13",
  "url": "https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d"
},
{
  "title": "AI Task Instruction Generator",
  "slug": "ai_task_instruction_generator_9bab8b37",
@@ -118,15 +103,47 @@
  "version": "",
  "author": "",
  "description": "",
  "downloads": 473,
  "views": 5498,
  "downloads": 583,
  "views": 6659,
  "upvotes": 9,
  "saves": 14,
  "saves": 17,
  "comments": 0,
  "created_at": "2026-01-28",
  "updated_at": "2026-01-28",
  "url": "https://openwebui.com/posts/ai_task_instruction_generator_9bab8b37"
},
{
  "title": "Export to Excel",
  "slug": "export_mulit_table_to_excel_244b8f9d",
  "type": "action",
  "version": "0.3.7",
  "author": "Fu-Jie",
  "description": "Extracts tables from chat messages and exports them to Excel (.xlsx) files with smart formatting.",
  "downloads": 563,
  "views": 3153,
  "upvotes": 11,
  "saves": 11,
  "comments": 0,
  "created_at": "2025-05-30",
  "updated_at": "2026-02-13",
  "url": "https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d"
},
{
  "title": "GitHub Copilot Official SDK Pipe",
  "slug": "github_copilot_official_sdk_pipe_ce96f7b4",
  "type": "pipe",
  "version": "0.9.1",
  "author": "Fu-Jie",
  "description": "A powerful Agent SDK integration for OpenWebUI. It deeply bridges GitHub Copilot SDK with OpenWebUI's ecosystem, enabling the Agent to autonomously perform intent recognition, web search, and context compaction. It seamlessly reuses your existing Tools, MCP servers, OpenAPI servers, and Skills for a professional, full-featured experience.",
  "downloads": 335,
  "views": 4905,
  "upvotes": 16,
  "saves": 10,
  "comments": 6,
  "created_at": "2026-01-26",
  "updated_at": "2026-03-03",
  "url": "https://openwebui.com/posts/github_copilot_official_sdk_pipe_ce96f7b4"
},
{
  "title": "Flash Card",
  "slug": "flash_card_65a2ea8f",
@@ -134,30 +151,30 @@
  "version": "0.2.4",
  "author": "Fu-Jie",
  "description": "Quickly generates beautiful flashcards from text, extracting key points and categories.",
  "downloads": 285,
  "views": 4128,
  "downloads": 312,
  "views": 4448,
  "upvotes": 13,
  "saves": 18,
  "saves": 20,
  "comments": 2,
  "created_at": "2025-12-30",
  "updated_at": "2026-02-13",
  "url": "https://openwebui.com/posts/flash_card_65a2ea8f"
},
{
  "title": "GitHub Copilot Official SDK Pipe",
  "slug": "github_copilot_official_sdk_pipe_ce96f7b4",
  "type": "pipe",
  "version": "0.9.0",
  "author": "Fu-Jie",
  "description": "Integrate GitHub Copilot SDK. Supports dynamic models, multi-turn conversation, streaming, multimodal input, infinite sessions, bidirectional OpenWebUI Skills bridge, and manage_skills tool.",
  "downloads": 263,
  "views": 4106,
  "upvotes": 14,
  "saves": 10,
  "comments": 6,
  "created_at": "2026-01-26",
  "updated_at": "2026-02-27",
  "url": "https://openwebui.com/posts/github_copilot_official_sdk_pipe_ce96f7b4"
  "title": "OpenWebUI Skills Manager Tool",
  "slug": "openwebui_skills_manager_tool_b4bce8e4",
  "type": "tool",
  "version": "",
  "author": "",
  "description": "",
  "downloads": 303,
  "views": 4265,
  "upvotes": 7,
  "saves": 13,
  "comments": 2,
  "created_at": "2026-02-28",
  "updated_at": "2026-03-05",
  "url": "https://openwebui.com/posts/openwebui_skills_manager_tool_b4bce8e4"
},
{
  "title": "Deep Dive",
@@ -166,10 +183,10 @@
  "version": "1.0.0",
  "author": "Fu-Jie",
  "description": "A comprehensive thinking lens that dives deep into any content - from context to logic, insights, and action paths.",
  "downloads": 204,
  "views": 1631,
  "downloads": 219,
  "views": 1764,
  "upvotes": 6,
  "saves": 13,
  "saves": 15,
  "comments": 0,
  "created_at": "2026-01-08",
  "updated_at": "2026-01-08",
@@ -182,8 +199,8 @@
  "version": "0.4.4",
  "author": "Fu-Jie",
  "description": "将对话导出为 Word (.docx),支持 Mermaid 图表 (客户端渲染 SVG+PNG)、LaTeX 数学公式、真实超链接、增强表格格式、代码高亮和引用块。",
  "downloads": 153,
  "views": 2631,
  "downloads": 165,
  "views": 2831,
  "upvotes": 14,
  "saves": 7,
  "comments": 4,
@@ -198,8 +215,8 @@
  "version": "0.1.0",
  "author": "Fu-Jie",
  "description": "Automatically extracts project rules from conversations and injects them into the folder's system prompt.",
  "downloads": 99,
  "views": 1839,
  "downloads": 112,
  "views": 1992,
  "upvotes": 7,
  "saves": 11,
  "comments": 0,
@@ -207,6 +224,22 @@
  "updated_at": "2026-01-20",
  "url": "https://openwebui.com/posts/folder_memory_auto_evolving_project_context_4a9875b2"
},
{
  "title": "GitHub Copilot SDK Files Filter",
  "slug": "github_copilot_sdk_files_filter_403a62ee",
  "type": "filter",
  "version": "0.1.3",
  "author": "Fu-Jie",
  "description": "A specialized filter to bypass OpenWebUI's default RAG for GitHub Copilot SDK models. It moves uploaded files to a safe location ('copilot_files') so the Copilot Pipe can process them natively without interference.",
  "downloads": 76,
  "views": 2311,
  "upvotes": 4,
  "saves": 1,
  "comments": 0,
  "created_at": "2026-02-09",
  "updated_at": "2026-03-03",
  "url": "https://openwebui.com/posts/github_copilot_sdk_files_filter_403a62ee"
},
{
  "title": "智能信息图",
  "slug": "智能信息图_e04a48ff",
@@ -214,8 +247,8 @@
  "version": "1.5.0",
  "author": "Fu-Jie",
  "description": "基于 AntV Infographic 的智能信息图生成插件。支持多种专业模板,自动图标匹配,并提供 SVG/PNG 下载功能。",
  "downloads": 65,
  "views": 1304,
  "downloads": 68,
  "views": 1431,
  "upvotes": 10,
  "saves": 1,
  "comments": 0,
@@ -223,22 +256,6 @@
  "updated_at": "2026-02-13",
  "url": "https://openwebui.com/posts/智能信息图_e04a48ff"
},
{
  "title": "GitHub Copilot SDK Files Filter",
  "slug": "github_copilot_sdk_files_filter_403a62ee",
  "type": "filter",
  "version": "0.1.3",
  "author": "Fu-Jie",
  "description": "A specialized filter to bypass OpenWebUI's default RAG for GitHub Copilot SDK models. It moves uploaded files to a safe location ('copilot_files') so the Copilot Pipe can process them natively without interference.",
  "downloads": 54,
  "views": 2098,
  "upvotes": 3,
  "saves": 1,
  "comments": 0,
  "created_at": "2026-02-09",
  "updated_at": "2026-02-25",
  "url": "https://openwebui.com/posts/github_copilot_sdk_files_filter_403a62ee"
},
{
  "title": "思维导图",
  "slug": "智能生成交互式思维导图帮助用户可视化知识_8d4b097b",
@@ -246,8 +263,8 @@
  "version": "0.9.2",
  "author": "Fu-Jie",
  "description": "智能分析文本内容,生成交互式思维导图,帮助用户结构化和可视化知识。",
  "downloads": 45,
  "views": 691,
  "downloads": 52,
  "views": 761,
  "upvotes": 6,
  "saves": 2,
  "comments": 0,
@@ -262,8 +279,8 @@
  "version": "1.2.2",
  "author": "Fu-Jie",
  "description": "通过智能摘要和消息压缩,降低长对话的 token 消耗,同时保持对话连贯性。",
  "downloads": 38,
  "views": 783,
  "downloads": 39,
  "views": 838,
  "upvotes": 7,
  "saves": 5,
  "comments": 0,
@@ -271,6 +288,22 @@
  "updated_at": "2026-02-13",
  "url": "https://openwebui.com/posts/异步上下文压缩_5c0617cb"
},
{
  "title": "🧠 Smart Mind Map Tool: Auto-Generate Interactive Knowledge Graphs",
  "slug": "smart_mind_map_tool_auto_generate_interactive_know_d25f4e3d",
  "type": "tool",
  "version": "",
  "author": "",
  "description": "",
  "downloads": 34,
  "views": 767,
  "upvotes": 2,
  "saves": 3,
  "comments": 0,
  "created_at": "2026-03-04",
  "updated_at": "2026-03-05",
  "url": "https://openwebui.com/posts/smart_mind_map_tool_auto_generate_interactive_know_d25f4e3d"
},
{
  "title": "闪记卡 (Flash Card)",
  "slug": "闪记卡生成插件_4a31eac3",
@@ -278,8 +311,8 @@
  "version": "0.2.4",
  "author": "Fu-Jie",
  "description": "快速将文本提炼为精美的学习记忆卡片,支持核心要点提取与分类。",
  "downloads": 32,
  "views": 830,
  "downloads": 34,
  "views": 888,
  "upvotes": 7,
  "saves": 1,
  "comments": 0,
@@ -294,8 +327,8 @@
  "version": "1.0.0",
  "author": "Fu-Jie",
  "description": "全方位的思维透镜 —— 从背景全景到逻辑脉络,从深度洞察到行动路径。",
  "downloads": 26,
  "views": 581,
  "downloads": 31,
  "views": 647,
  "upvotes": 5,
  "saves": 1,
  "comments": 0,
@@ -304,51 +337,51 @@
  "url": "https://openwebui.com/posts/精读_99830b0f"
},
{
  "title": "🚀 GitHub Copilot SDK Pipe v0.9.0: Copilot SDK Skills Core Capabilities & Extended Delivery",
  "title": "An Unconventional Use of Open Terminal ⚡",
  "slug": "an_unconventional_use_of_open_terminal_35498f8f",
  "type": "post",
  "version": "",
  "author": "",
  "description": "",
  "downloads": 0,
  "views": 14,
  "upvotes": 1,
  "saves": 0,
  "comments": 0,
  "created_at": "2026-03-06",
  "updated_at": "2026-03-06",
  "url": "https://openwebui.com/posts/an_unconventional_use_of_open_terminal_35498f8f"
},
{
  "title": "🚀 GitHub Copilot SDK Pipe v0.9.0: Skills & RichUI",
  "slug": "github_copilot_sdk_pipe_v090_copilot_sdk_skills_co_99a42452",
  "type": "post",
  "version": "",
  "author": "",
  "description": "",
  "downloads": 0,
  "views": 7,
  "upvotes": 0,
  "saves": 0,
  "views": 1585,
  "upvotes": 5,
  "saves": 1,
  "comments": 0,
  "created_at": "2026-02-27",
  "updated_at": "2026-02-27",
  "updated_at": "2026-02-28",
  "url": "https://openwebui.com/posts/github_copilot_sdk_pipe_v090_copilot_sdk_skills_co_99a42452"
},
{
  "title": "🚀 GitHub Copilot SDK Pipe v0.8.0: Conditional Tool Filtering & Publish Reliability 🎛️",
  "slug": "github_copilot_sdk_pipe_v080_conditional_tool_filt_a5a3322d",
  "type": "post",
  "version": "",
  "author": "",
  "description": "",
  "downloads": 0,
  "views": 1059,
  "upvotes": 2,
  "saves": 2,
  "comments": 0,
  "created_at": "2026-02-25",
  "updated_at": "2026-02-25",
  "url": "https://openwebui.com/posts/github_copilot_sdk_pipe_v080_conditional_tool_filt_a5a3322d"
},
{
  "title": "🚀 GitHub Copilot SDK Pipe v0.7.0: Native Tool UI & Zero-Config CLI 🛠️",
  "title": "🚀 GitHub Copilot SDK Pipe v0.7.0: Skills & Rich UI 🛠️",
  "slug": "github_copilot_sdk_pipe_v070_native_tool_ui_zero_c_4af38131",
  "type": "post",
  "version": "",
  "author": "",
  "description": "",
  "downloads": 0,
  "views": 2162,
  "upvotes": 7,
  "saves": 2,
  "views": 2608,
  "upvotes": 8,
  "saves": 4,
  "comments": 1,
  "created_at": "2026-02-22",
  "updated_at": "2026-02-22",
  "updated_at": "2026-02-28",
  "url": "https://openwebui.com/posts/github_copilot_sdk_pipe_v070_native_tool_ui_zero_c_4af38131"
},
{
@@ -359,7 +392,7 @@
  "author": "",
|
||||
"description": "",
|
||||
"downloads": 0,
|
||||
"views": 2257,
|
||||
"views": 2390,
|
||||
"upvotes": 7,
|
||||
"saves": 4,
|
||||
"comments": 0,
|
||||
@@ -375,9 +408,9 @@
|
||||
"author": "",
|
||||
"description": "",
|
||||
"downloads": 0,
|
||||
"views": 1839,
|
||||
"views": 1915,
|
||||
"upvotes": 12,
|
||||
"saves": 19,
|
||||
"saves": 21,
|
||||
"comments": 8,
|
||||
"created_at": "2026-01-25",
|
||||
"updated_at": "2026-01-28",
|
||||
@@ -391,7 +424,7 @@
|
||||
"author": "",
|
||||
"description": "",
|
||||
"downloads": 0,
|
||||
"views": 234,
|
||||
"views": 251,
|
||||
"upvotes": 2,
|
||||
"saves": 0,
|
||||
"comments": 0,
|
||||
@@ -407,9 +440,9 @@
|
||||
"author": "",
|
||||
"description": "",
|
||||
"downloads": 0,
|
||||
"views": 1502,
|
||||
"views": 1549,
|
||||
"upvotes": 16,
|
||||
"saves": 11,
|
||||
"saves": 12,
|
||||
"comments": 2,
|
||||
"created_at": "2026-01-10",
|
||||
"updated_at": "2026-01-10",
|
||||
@@ -421,11 +454,11 @@
|
||||
"name": "Fu-Jie",
|
||||
"profile_url": "https://openwebui.com/u/Fu-Jie",
|
||||
"profile_image": "https://community.s3.openwebui.com/uploads/users/b15d1348-4347-42b4-b815-e053342d6cb0/profile_d9510745-4bd4-4f8f-a997-4a21847d9300.webp",
|
||||
"followers": 295,
|
||||
"followers": 315,
|
||||
"following": 6,
|
||||
"total_points": 299,
|
||||
"post_points": 251,
|
||||
"comment_points": 48,
|
||||
"contributions": 54
|
||||
"total_points": 329,
|
||||
"post_points": 279,
|
||||
"comment_points": 50,
|
||||
"contributions": 59
|
||||
}
|
||||
}
|
||||
@@ -8,7 +8,7 @@
> *Blue: Downloads | Purple: Views (Real-time dynamic)*

### 📂 Content Distribution

|
||||

|
||||

## 📈 Overview

@@ -26,9 +26,10 @@

## 📂 By Type

- 
|
||||
- 
|
||||
- 
|
||||
- 
|
||||
- 
|
||||
- 
|
||||
- 
|
||||
- 
|
||||
|
||||
@@ -38,26 +39,28 @@
|
||||
|:---:|------|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|
||||
| 1 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) | action |  |  |  |  |  | 2026-02-27 |
|
||||
| 2 | [Smart Infographic](https://openwebui.com/posts/smart_infographic_ad6f0c7f) | action |  |  |  |  |  | 2026-02-13 |
|
||||
| 3 | [Markdown Normalizer](https://openwebui.com/posts/markdown_normalizer_baaa8732) | filter |  |  |  |  |  | 2026-02-27 |
|
||||
| 3 | [Markdown Normalizer](https://openwebui.com/posts/markdown_normalizer_baaa8732) | filter |  |  |  |  |  | 2026-03-03 |
|
||||
| 4 | [Export to Word Enhanced](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) | action |  |  |  |  |  | 2026-02-13 |
|
||||
| 5 | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) | filter |  |  |  |  |  | 2026-02-21 |
|
||||
| 6 | [Export to Excel](https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d) | action |  |  |  |  |  | 2026-02-13 |
|
||||
| 7 | [AI Task Instruction Generator](https://openwebui.com/posts/ai_task_instruction_generator_9bab8b37) | prompt |  |  |  |  |  | 2026-01-28 |
|
||||
| 8 | [Flash Card](https://openwebui.com/posts/flash_card_65a2ea8f) | action |  |  |  |  |  | 2026-02-13 |
|
||||
| 9 | [GitHub Copilot Official SDK Pipe](https://openwebui.com/posts/github_copilot_official_sdk_pipe_ce96f7b4) | pipe |  |  |  |  |  | 2026-02-27 |
|
||||
| 10 | [Deep Dive](https://openwebui.com/posts/deep_dive_c0b846e4) | action |  |  |  |  |  | 2026-01-08 |
|
||||
| 11 | [导出为Word增强版](https://openwebui.com/posts/导出为_word_支持公式流程图表格和代码块_8a6306c0) | action |  |  |  |  |  | 2026-02-13 |
|
||||
| 12 | [📂 Folder Memory – Auto-Evolving Project Context](https://openwebui.com/posts/folder_memory_auto_evolving_project_context_4a9875b2) | filter |  |  |  |  |  | 2026-01-20 |
|
||||
| 13 | [智能信息图](https://openwebui.com/posts/智能信息图_e04a48ff) | action |  |  |  |  |  | 2026-02-13 |
|
||||
| 14 | [GitHub Copilot SDK Files Filter](https://openwebui.com/posts/github_copilot_sdk_files_filter_403a62ee) | filter |  |  |  |  |  | 2026-02-25 |
|
||||
| 15 | [思维导图](https://openwebui.com/posts/智能生成交互式思维导图帮助用户可视化知识_8d4b097b) | action |  |  |  |  |  | 2026-02-13 |
|
||||
| 16 | [异步上下文压缩](https://openwebui.com/posts/异步上下文压缩_5c0617cb) | action |  |  |  |  |  | 2026-02-13 |
|
||||
| 17 | [闪记卡 (Flash Card)](https://openwebui.com/posts/闪记卡生成插件_4a31eac3) | action |  |  |  |  |  | 2026-02-13 |
|
||||
| 18 | [精读](https://openwebui.com/posts/精读_99830b0f) | action |  |  |  |  |  | 2026-01-08 |
|
||||
| 19 | [🚀 GitHub Copilot SDK Pipe v0.9.0: Copilot SDK Skills Core Capabilities & Extended Delivery](https://openwebui.com/posts/github_copilot_sdk_pipe_v090_copilot_sdk_skills_co_99a42452) | post |  |  |  |  |  | 2026-02-27 |
|
||||
| 20 | [🚀 GitHub Copilot SDK Pipe v0.8.0: Conditional Tool Filtering & Publish Reliability 🎛️](https://openwebui.com/posts/github_copilot_sdk_pipe_v080_conditional_tool_filt_a5a3322d) | post |  |  |  |  |  | 2026-02-25 |
|
||||
| 21 | [🚀 GitHub Copilot SDK Pipe v0.7.0: Native Tool UI & Zero-Config CLI 🛠️](https://openwebui.com/posts/github_copilot_sdk_pipe_v070_native_tool_ui_zero_c_4af38131) | post |  |  |  |  |  | 2026-02-22 |
|
||||
| 22 | [🚀 GitHub Copilot SDK Pipe: AI That Executes, Not Just Talks](https://openwebui.com/posts/github_copilot_sdk_for_openwebui_elevate_your_ai_t_a140f293) | post |  |  |  |  |  | 2026-02-10 |
|
||||
| 23 | [🚀 Open WebUI Prompt Plus: AI-Powered Prompt Manager](https://openwebui.com/posts/open_webui_prompt_plus_ai_powered_prompt_manager_s_15fa060e) | post |  |  |  |  |  | 2026-01-28 |
|
||||
| 24 | [Review of Claude Haiku 4.5](https://openwebui.com/posts/review_of_claude_haiku_45_41b0db39) | review |  |  |  |  |  | 2026-01-14 |
|
||||
| 25 | [ 🛠️ Debug Open WebUI Plugins in Your Browser](https://openwebui.com/posts/debug_open_webui_plugins_in_your_browser_81bf7960) | post |  |  |  |  |  | 2026-01-10 |
|
||||
| 5 | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) | filter |  |  |  |  |  | 2026-03-03 |
|
||||
| 6 | [AI Task Instruction Generator](https://openwebui.com/posts/ai_task_instruction_generator_9bab8b37) | prompt |  |  |  |  |  | 2026-01-28 |
|
||||
| 7 | [Export to Excel](https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d) | action |  |  |  |  |  | 2026-02-13 |
|
||||
| 8 | [GitHub Copilot Official SDK Pipe](https://openwebui.com/posts/github_copilot_official_sdk_pipe_ce96f7b4) | pipe |  |  |  |  |  | 2026-03-03 |
|
||||
| 9 | [Flash Card](https://openwebui.com/posts/flash_card_65a2ea8f) | action |  |  |  |  |  | 2026-02-13 |
|
||||
| 10 | [OpenWebUI Skills Manager Tool](https://openwebui.com/posts/openwebui_skills_manager_tool_b4bce8e4) | tool |  |  |  |  |  | 2026-03-05 |
|
||||
| 11 | [Deep Dive](https://openwebui.com/posts/deep_dive_c0b846e4) | action |  |  |  |  |  | 2026-01-08 |
|
||||
| 12 | [导出为Word增强版](https://openwebui.com/posts/导出为_word_支持公式流程图表格和代码块_8a6306c0) | action |  |  |  |  |  | 2026-02-13 |
|
||||
| 13 | [📂 Folder Memory – Auto-Evolving Project Context](https://openwebui.com/posts/folder_memory_auto_evolving_project_context_4a9875b2) | filter |  |  |  |  |  | 2026-01-20 |
|
||||
| 14 | [GitHub Copilot SDK Files Filter](https://openwebui.com/posts/github_copilot_sdk_files_filter_403a62ee) | filter |  |  |  |  |  | 2026-03-03 |
|
||||
| 15 | [智能信息图](https://openwebui.com/posts/智能信息图_e04a48ff) | action |  |  |  |  |  | 2026-02-13 |
|
||||
| 16 | [思维导图](https://openwebui.com/posts/智能生成交互式思维导图帮助用户可视化知识_8d4b097b) | action |  |  |  |  |  | 2026-02-13 |
|
||||
| 17 | [异步上下文压缩](https://openwebui.com/posts/异步上下文压缩_5c0617cb) | action |  |  |  |  |  | 2026-02-13 |
|
||||
| 18 | [🧠 Smart Mind Map Tool: Auto-Generate Interactive Knowledge Graphs](https://openwebui.com/posts/smart_mind_map_tool_auto_generate_interactive_know_d25f4e3d) | tool |  |  |  |  |  | 2026-03-05 |
|
||||
| 19 | [闪记卡 (Flash Card)](https://openwebui.com/posts/闪记卡生成插件_4a31eac3) | action |  |  |  |  |  | 2026-02-13 |
|
||||
| 20 | [精读](https://openwebui.com/posts/精读_99830b0f) | action |  |  |  |  |  | 2026-01-08 |
|
||||
| 21 | [An Unconventional Use of Open Terminal ⚡](https://openwebui.com/posts/an_unconventional_use_of_open_terminal_35498f8f) | post |  |  |  |  |  | 2026-03-06 |
|
||||
| 22 | [🚀 GitHub Copilot SDK Pipe v0.9.0: Skills & RichUI](https://openwebui.com/posts/github_copilot_sdk_pipe_v090_copilot_sdk_skills_co_99a42452) | post |  |  |  |  |  | 2026-02-28 |
|
||||
| 23 | [🚀 GitHub Copilot SDK Pipe v0.7.0: Skills & Rich UI 🛠️](https://openwebui.com/posts/github_copilot_sdk_pipe_v070_native_tool_ui_zero_c_4af38131) | post |  |  |  |  |  | 2026-02-28 |
|
||||
| 24 | [🚀 GitHub Copilot SDK Pipe: AI That Executes, Not Just Talks](https://openwebui.com/posts/github_copilot_sdk_for_openwebui_elevate_your_ai_t_a140f293) | post |  |  |  |  |  | 2026-02-10 |
|
||||
| 25 | [🚀 Open WebUI Prompt Plus: AI-Powered Prompt Manager](https://openwebui.com/posts/open_webui_prompt_plus_ai_powered_prompt_manager_s_15fa060e) | post |  |  |  |  |  | 2026-01-28 |
|
||||
| 26 | [Review of Claude Haiku 4.5](https://openwebui.com/posts/review_of_claude_haiku_45_41b0db39) | review |  |  |  |  |  | 2026-01-14 |
|
||||
| 27 | [ 🛠️ Debug Open WebUI Plugins in Your Browser](https://openwebui.com/posts/debug_open_webui_plugins_in_your_browser_81bf7960) | post |  |  |  |  |  | 2026-01-10 |
|
||||
|
||||
@@ -8,7 +8,7 @@
> *蓝色: 总下载量 | 紫色: 总浏览量 (实时动态生成)*

### 📂 内容分类占比 (Distribution)

|
||||

|
||||

## 📈 总览

@@ -26,9 +26,10 @@

## 📂 按类型分类

- 
|
||||
- 
|
||||
- 
|
||||
- 
|
||||
- 
|
||||
- 
|
||||
- 
|
||||
- 
|
||||
|
||||
@@ -38,26 +39,28 @@
|
||||
|:---:|------|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|
||||
| 1 | [Smart Mind Map](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) | action |  |  |  |  |  | 2026-02-27 |
|
||||
| 2 | [Smart Infographic](https://openwebui.com/posts/smart_infographic_ad6f0c7f) | action |  |  |  |  |  | 2026-02-13 |
|
||||
| 3 | [Markdown Normalizer](https://openwebui.com/posts/markdown_normalizer_baaa8732) | filter |  |  |  |  |  | 2026-02-27 |
|
||||
| 3 | [Markdown Normalizer](https://openwebui.com/posts/markdown_normalizer_baaa8732) | filter |  |  |  |  |  | 2026-03-03 |
|
||||
| 4 | [Export to Word Enhanced](https://openwebui.com/posts/export_to_word_enhanced_formatting_fca6a315) | action |  |  |  |  |  | 2026-02-13 |
|
||||
| 5 | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) | filter |  |  |  |  |  | 2026-02-21 |
|
||||
| 6 | [Export to Excel](https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d) | action |  |  |  |  |  | 2026-02-13 |
|
||||
| 7 | [AI Task Instruction Generator](https://openwebui.com/posts/ai_task_instruction_generator_9bab8b37) | prompt |  |  |  |  |  | 2026-01-28 |
|
||||
| 8 | [Flash Card](https://openwebui.com/posts/flash_card_65a2ea8f) | action |  |  |  |  |  | 2026-02-13 |
|
||||
| 9 | [GitHub Copilot Official SDK Pipe](https://openwebui.com/posts/github_copilot_official_sdk_pipe_ce96f7b4) | pipe |  |  |  |  |  | 2026-02-27 |
|
||||
| 10 | [Deep Dive](https://openwebui.com/posts/deep_dive_c0b846e4) | action |  |  |  |  |  | 2026-01-08 |
|
||||
| 11 | [导出为Word增强版](https://openwebui.com/posts/导出为_word_支持公式流程图表格和代码块_8a6306c0) | action |  |  |  |  |  | 2026-02-13 |
|
||||
| 12 | [📂 Folder Memory – Auto-Evolving Project Context](https://openwebui.com/posts/folder_memory_auto_evolving_project_context_4a9875b2) | filter |  |  |  |  |  | 2026-01-20 |
|
||||
| 13 | [智能信息图](https://openwebui.com/posts/智能信息图_e04a48ff) | action |  |  |  |  |  | 2026-02-13 |
|
||||
| 14 | [GitHub Copilot SDK Files Filter](https://openwebui.com/posts/github_copilot_sdk_files_filter_403a62ee) | filter |  |  |  |  |  | 2026-02-25 |
|
||||
| 15 | [思维导图](https://openwebui.com/posts/智能生成交互式思维导图帮助用户可视化知识_8d4b097b) | action |  |  |  |  |  | 2026-02-13 |
|
||||
| 16 | [异步上下文压缩](https://openwebui.com/posts/异步上下文压缩_5c0617cb) | action |  |  |  |  |  | 2026-02-13 |
|
||||
| 17 | [闪记卡 (Flash Card)](https://openwebui.com/posts/闪记卡生成插件_4a31eac3) | action |  |  |  |  |  | 2026-02-13 |
|
||||
| 18 | [精读](https://openwebui.com/posts/精读_99830b0f) | action |  |  |  |  |  | 2026-01-08 |
|
||||
| 19 | [🚀 GitHub Copilot SDK Pipe v0.9.0: Copilot SDK Skills Core Capabilities & Extended Delivery](https://openwebui.com/posts/github_copilot_sdk_pipe_v090_copilot_sdk_skills_co_99a42452) | post |  |  |  |  |  | 2026-02-27 |
|
||||
| 20 | [🚀 GitHub Copilot SDK Pipe v0.8.0: Conditional Tool Filtering & Publish Reliability 🎛️](https://openwebui.com/posts/github_copilot_sdk_pipe_v080_conditional_tool_filt_a5a3322d) | post |  |  |  |  |  | 2026-02-25 |
|
||||
| 21 | [🚀 GitHub Copilot SDK Pipe v0.7.0: Native Tool UI & Zero-Config CLI 🛠️](https://openwebui.com/posts/github_copilot_sdk_pipe_v070_native_tool_ui_zero_c_4af38131) | post |  |  |  |  |  | 2026-02-22 |
|
||||
| 22 | [🚀 GitHub Copilot SDK Pipe: AI That Executes, Not Just Talks](https://openwebui.com/posts/github_copilot_sdk_for_openwebui_elevate_your_ai_t_a140f293) | post |  |  |  |  |  | 2026-02-10 |
|
||||
| 23 | [🚀 Open WebUI Prompt Plus: AI-Powered Prompt Manager](https://openwebui.com/posts/open_webui_prompt_plus_ai_powered_prompt_manager_s_15fa060e) | post |  |  |  |  |  | 2026-01-28 |
|
||||
| 24 | [Review of Claude Haiku 4.5](https://openwebui.com/posts/review_of_claude_haiku_45_41b0db39) | review |  |  |  |  |  | 2026-01-14 |
|
||||
| 25 | [ 🛠️ Debug Open WebUI Plugins in Your Browser](https://openwebui.com/posts/debug_open_webui_plugins_in_your_browser_81bf7960) | post |  |  |  |  |  | 2026-01-10 |
|
||||
| 5 | [Async Context Compression](https://openwebui.com/posts/async_context_compression_b1655bc8) | filter |  |  |  |  |  | 2026-03-03 |
|
||||
| 6 | [AI Task Instruction Generator](https://openwebui.com/posts/ai_task_instruction_generator_9bab8b37) | prompt |  |  |  |  |  | 2026-01-28 |
|
||||
| 7 | [Export to Excel](https://openwebui.com/posts/export_mulit_table_to_excel_244b8f9d) | action |  |  |  |  |  | 2026-02-13 |
|
||||
| 8 | [GitHub Copilot Official SDK Pipe](https://openwebui.com/posts/github_copilot_official_sdk_pipe_ce96f7b4) | pipe |  |  |  |  |  | 2026-03-03 |
|
||||
| 9 | [Flash Card](https://openwebui.com/posts/flash_card_65a2ea8f) | action |  |  |  |  |  | 2026-02-13 |
|
||||
| 10 | [OpenWebUI Skills Manager Tool](https://openwebui.com/posts/openwebui_skills_manager_tool_b4bce8e4) | tool |  |  |  |  |  | 2026-03-05 |
|
||||
| 11 | [Deep Dive](https://openwebui.com/posts/deep_dive_c0b846e4) | action |  |  |  |  |  | 2026-01-08 |
|
||||
| 12 | [导出为Word增强版](https://openwebui.com/posts/导出为_word_支持公式流程图表格和代码块_8a6306c0) | action |  |  |  |  |  | 2026-02-13 |
|
||||
| 13 | [📂 Folder Memory – Auto-Evolving Project Context](https://openwebui.com/posts/folder_memory_auto_evolving_project_context_4a9875b2) | filter |  |  |  |  |  | 2026-01-20 |
|
||||
| 14 | [GitHub Copilot SDK Files Filter](https://openwebui.com/posts/github_copilot_sdk_files_filter_403a62ee) | filter |  |  |  |  |  | 2026-03-03 |
|
||||
| 15 | [智能信息图](https://openwebui.com/posts/智能信息图_e04a48ff) | action |  |  |  |  |  | 2026-02-13 |
|
||||
| 16 | [思维导图](https://openwebui.com/posts/智能生成交互式思维导图帮助用户可视化知识_8d4b097b) | action |  |  |  |  |  | 2026-02-13 |
|
||||
| 17 | [异步上下文压缩](https://openwebui.com/posts/异步上下文压缩_5c0617cb) | action |  |  |  |  |  | 2026-02-13 |
|
||||
| 18 | [🧠 Smart Mind Map Tool: Auto-Generate Interactive Knowledge Graphs](https://openwebui.com/posts/smart_mind_map_tool_auto_generate_interactive_know_d25f4e3d) | tool |  |  |  |  |  | 2026-03-05 |
|
||||
| 19 | [闪记卡 (Flash Card)](https://openwebui.com/posts/闪记卡生成插件_4a31eac3) | action |  |  |  |  |  | 2026-02-13 |
|
||||
| 20 | [精读](https://openwebui.com/posts/精读_99830b0f) | action |  |  |  |  |  | 2026-01-08 |
|
||||
| 21 | [An Unconventional Use of Open Terminal ⚡](https://openwebui.com/posts/an_unconventional_use_of_open_terminal_35498f8f) | post |  |  |  |  |  | 2026-03-06 |
|
||||
| 22 | [🚀 GitHub Copilot SDK Pipe v0.9.0: Skills & RichUI](https://openwebui.com/posts/github_copilot_sdk_pipe_v090_copilot_sdk_skills_co_99a42452) | post |  |  |  |  |  | 2026-02-28 |
|
||||
| 23 | [🚀 GitHub Copilot SDK Pipe v0.7.0: Skills & Rich UI 🛠️](https://openwebui.com/posts/github_copilot_sdk_pipe_v070_native_tool_ui_zero_c_4af38131) | post |  |  |  |  |  | 2026-02-28 |
|
||||
| 24 | [🚀 GitHub Copilot SDK Pipe: AI That Executes, Not Just Talks](https://openwebui.com/posts/github_copilot_sdk_for_openwebui_elevate_your_ai_t_a140f293) | post |  |  |  |  |  | 2026-02-10 |
|
||||
| 25 | [🚀 Open WebUI Prompt Plus: AI-Powered Prompt Manager](https://openwebui.com/posts/open_webui_prompt_plus_ai_powered_prompt_manager_s_15fa060e) | post |  |  |  |  |  | 2026-01-28 |
|
||||
| 26 | [Review of Claude Haiku 4.5](https://openwebui.com/posts/review_of_claude_haiku_45_41b0db39) | review |  |  |  |  |  | 2026-01-14 |
|
||||
| 27 | [ 🛠️ Debug Open WebUI Plugins in Your Browser](https://openwebui.com/posts/debug_open_webui_plugins_in_your_browser_81bf7960) | post |  |  |  |  |  | 2026-01-10 |
|
||||
|
||||
426
docs/development/gh-aw-integration-plan.md
Normal file
@@ -0,0 +1,426 @@
# gh-aw Integration Plan

> This document proposes a safe, incremental adoption plan for GitHub Agentic Workflows (`gh-aw`) in the `openwebui-extensions` repository.

---

## 1. Goals

- Add repository-aware AI maintenance without replacing stable script-based CI.
- Use `gh-aw` where natural language reasoning is stronger than deterministic shell logic.
- Preserve the current release, deploy, publish, and stats workflows as the execution backbone.
- Introduce observability, diagnosis, and long-term maintenance memory for repository operations.

---

## 2. Why gh-aw Fits This Repository

This repository already has strong deterministic automation:

- `/.github/workflows/release.yml`
- `/.github/workflows/plugin-version-check.yml`
- `/.github/workflows/deploy.yml`
- `/.github/workflows/publish_plugin.yml`
- `/.github/workflows/community-stats.yml`

Those workflows are good at exact execution, but they do not deeply understand repository policy.

`gh-aw` is a good fit for tasks that require:

- reading code, docs, and PR descriptions together
- applying repository conventions with nuance
- generating structured review comments
- diagnosing failed workflow runs
- keeping long-term maintenance notes across runs

This matches the repository's real needs:

- bilingual documentation synchronization
- plugin code + README + docs consistency
- release-prep validation across many files
- issue and PR maintenance at scale

---

## 3. Non-Goals

The first adoption phase should not:

- replace `release.yml`
- replace `publish_plugin.yml`
- replace MkDocs deployment
- auto-merge or auto-push code changes by default
- grant broad write permissions to the agent

`gh-aw` should begin as a review, diagnosis, and preflight layer.

---

## 4. Adoption Principles

### 4.1 Keep deterministic workflows for execution

Existing YAML workflows remain responsible for:

- release creation
- plugin publishing
- documentation deployment
- version extraction and comparison
- stats generation

### 4.2 Add agentic workflows for judgment

`gh-aw` workflows should focus on:

- policy-aware review
- release readiness checks
- docs drift analysis
- CI failure investigation
- issue triage and response drafting

### 4.3 Default to read-only behavior

Start with minimal permissions and use safe outputs only for controlled comments or issue creation.

### 4.4 Keep the blast radius small

Roll out one workflow at a time, verify output quality, then expand.

---

## 5. Proposed Repository Layout

### 5.1 New files and directories

```text
.github/
├── workflows/
│   ├── release.yml
│   ├── plugin-version-check.yml
│   ├── deploy.yml
│   ├── publish_plugin.yml
│   ├── community-stats.yml
│   ├── aw-pr-maintainer-review.md
│   ├── aw-pr-maintainer-review.lock.yml
│   ├── aw-release-preflight.md
│   ├── aw-release-preflight.lock.yml
│   ├── aw-ci-audit.md
│   ├── aw-ci-audit.lock.yml
│   ├── aw-docs-drift-review.md
│   └── aw-docs-drift-review.lock.yml
├── gh-aw/
│   ├── prompts/
│   │   ├── pr-review-policy.md
│   │   ├── release-preflight-policy.md
│   │   ├── ci-audit-policy.md
│   │   └── docs-drift-policy.md
│   ├── schemas/
│   │   └── review-output-example.json
│   └── README.md
└── copilot-instructions.md
```

### 5.2 Naming convention

Use an `aw-` prefix for all agentic workflow source files:

- `aw-pr-maintainer-review.md`
- `aw-release-preflight.md`
- `aw-ci-audit.md`
- `aw-docs-drift-review.md`

Reasons:

- clearly separates agentic workflows from existing handwritten YAML workflows
- keeps `gh-aw` assets easy to search
- avoids ambiguity during debugging and release review
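
The prefix also makes mechanical checks cheap. A small POSIX shell sketch, assuming the layout from section 5.1 and run from the repository root, that flags any agentic workflow source missing its compiled lock file (illustrative only, not part of gh-aw itself):

```shell
#!/bin/sh
# Flag any aw-*.md workflow source that lacks a matching .lock.yml.
# Prints nothing when everything is paired (or no sources exist yet).
for src in .github/workflows/aw-*.md; do
  [ -e "$src" ] || continue          # glob did not match: no sources yet
  lock="${src%.md}.lock.yml"         # aw-foo.md -> aw-foo.lock.yml
  [ -f "$lock" ] || echo "missing lock file: $lock"
done
```

Silence from this check is the success case, which makes it easy to wire into a pre-commit hook or a CI step later.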

### 5.3 Why not replace `.yml` files

The current workflows are production logic. `gh-aw` should complement them first, not absorb their responsibility.

---

## 6. Recommended Workflow Portfolio

### 6.1 Phase 1: PR Maintainer Review

**File**: `/.github/workflows/aw-pr-maintainer-review.md`

**Purpose**:

- review PRs that touch plugins, docs, or development guidance
- comment on missing repository-standard updates
- act as a semantic layer on top of `plugin-version-check.yml`

**Checks to perform**:

- plugin version updated when code changes
- `README.md` and `README_CN.md` both updated when required
- docs mirror pages updated when required
- root README badge/date update needed for release-related changes
- i18n and helper-method standards followed for plugin code
- Conventional Commit quality in PR title/body if relevant

**Suggested permissions**:

```yaml
permissions:
  contents: read
  pull-requests: write
  issues: write
```

**Suggested tools**:

- `github:` read-focused issue/PR/repo tools
- `bash:` limited read commands only
- `edit:` disabled in early phase
- `agentic-workflows:` optional only after adoption matures
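
In a gh-aw source file these settings live in the Markdown frontmatter, followed by the natural-language prompt body. A minimal sketch of what `aw-pr-maintainer-review.md` could look like; the trigger and key names below are assumptions to verify against the gh-aw reference before compiling, not confirmed syntax:

```yaml
# aw-pr-maintainer-review.md — illustrative frontmatter sketch only;
# verify key names with the gh-aw docs before running `gh aw compile`.
---
on:
  pull_request:
    types: [opened, synchronize]
permissions:
  contents: read
  pull-requests: write
  issues: write
tools:
  github:
    toolsets: [default]
  bash: [echo, ls, cat, grep]
safe-outputs:
  add-comment:
---

# PR Maintainer Review

Review the changed files against repository policy: plugin version bumps,
bilingual README updates, and docs mirror pages. Post one summary comment.
```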

### 6.2 Phase 1: Release Preflight

**File**: `/.github/workflows/aw-release-preflight.md`

**Purpose**:

- run before release or on manual dispatch
- verify release completeness before `release.yml` does packaging and publishing

**Checks to perform**:

- code version and docs versions are aligned
- bilingual README updates exist
- docs plugin mirrors exist and match the release target
- release notes sources exist where expected
- commit message and release draft are coherent

**Output style**:

- summary comment on PR or issue
- optional checklist artifact
- no direct release creation

### 6.3 Phase 2: CI Audit

**File**: `/.github/workflows/aw-ci-audit.md`

**Purpose**:

- inspect failed runs of `release.yml`, `publish_plugin.yml`, `community-stats.yml`, and other important workflows
- summarize likely root cause and next fix steps

**Why gh-aw is strong here**:

- it can use `logs` and `audit` via `gh aw mcp-server`
- it is designed for workflow introspection and post-hoc analysis

### 6.4 Phase 2: Docs Drift Review

**File**: `/.github/workflows/aw-docs-drift-review.md`

**Purpose**:

- periodically inspect whether plugin code, local README files, mirrored docs, and root indexes have drifted apart

**Checks to perform**:

- missing `README_CN.md`
- README sections out of order
- docs page missing after plugin update
- version mismatches across code and docs

### 6.5 Phase 3: Issue Maintainer

**Candidate file**: `/.github/workflows/aw-issue-maintainer.md`

**Purpose**:

- summarize unreplied issues
- propose bilingual responses
- group repeated bug reports by plugin

This should come after the earlier review and audit flows are trusted.

---

## 7. Mapping to Existing Workflows

| Current Workflow | Keep As-Is | gh-aw Companion | Role Split |
|------|------|------|------|
| `/.github/workflows/release.yml` | Yes | `aw-release-preflight.md` | `release.yml` executes; `gh-aw` judges readiness |
| `/.github/workflows/plugin-version-check.yml` | Yes | `aw-pr-maintainer-review.md` | hard gate + semantic review |
| `/.github/workflows/deploy.yml` | Yes | none initially | deterministic build and deploy |
| `/.github/workflows/publish_plugin.yml` | Yes | `aw-ci-audit.md` | deterministic publish + failure diagnosis |
| `/.github/workflows/community-stats.yml` | Yes | `aw-ci-audit.md` | deterministic stats + anomaly diagnosis |

---

## 8. Tooling Model

### 8.1 Built-in tools to enable first

For early workflows, prefer a narrow tool set:

```yaml
tools:
  github:
    toolsets: [default]
  bash:
    - echo
    - pwd
    - ls
    - cat
    - head
    - tail
    - grep
    - wc
    - git status
    - git diff
```

Do not enable unrestricted shell access in phase 1.

### 8.2 MCP usage model

Use `gh aw mcp-server` later for:

- workflow `status`
- workflow `compile`
- workflow `logs`
- workflow `audit`
- `mcp-inspect`

This is especially valuable for `aw-ci-audit.md`.

### 8.3 Safe output policy

In early adoption, only allow safe outputs that:

- comment on PRs
- comment on issues
- open a low-risk maintenance issue when explicitly needed

Avoid any automatic code-writing safe outputs at first.
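
Under that policy, the frontmatter would declare only commenting and constrained issue creation. A sketch of such a fragment; the key names are assumptions modeled on gh-aw's safe-output concept and should be verified before use:

```yaml
# Illustrative safe-outputs fragment only; confirm key names against
# the gh-aw reference before adopting.
safe-outputs:
  add-comment:
    max: 1                       # at most one summary comment per run
  create-issue:
    title-prefix: "[maintenance] "
    labels: [maintenance]
```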

---

## 9. Repo Memory Strategy

`gh-aw` repo memory is a strong fit for this repository, but it should be constrained.

### 9.1 Recommended first use cases

- recurring CI failure signatures
- repeated docs sync omissions
- common reviewer reminders
- issue clusters by plugin name

### 9.2 Recommended configuration shape

- store only `.md` and `.json`
- small patch size limit
- one memory stream per concern

Suggested conceptual layout:

```text
memory/review-notes/*.md
memory/ci-patterns/*.md
memory/issue-clusters/*.json
```
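
To make the stream idea concrete: a note in one stream can be a dated bullet appended to a per-concern Markdown file. A minimal shell sketch of that shape (the file layout follows the suggestion above; nothing here is gh-aw syntax):

```shell
#!/bin/sh
# Append a dated note to one memory stream, one file per recurring concern.
memory_root="memory"
stream="ci-patterns"
slug="publish-timeout"
note="publish_plugin.yml timed out while uploading artifacts"

mkdir -p "$memory_root/$stream"
printf -- '- %s: %s\n' "$(date +%F)" "$note" >> "$memory_root/$stream/$slug.md"
echo "$memory_root/$stream/$slug.md"
```

Appending rather than rewriting keeps each memory patch small, which matches the "small patch size limit" constraint above.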

### 9.3 Important caution

Do not store secrets, tokens, or unpublished sensitive data in repo memory.

---

## 10. Rollout Plan

### Phase 0: Preparation

- install `gh-aw` locally for maintainers
- add a short `/.github/gh-aw/README.md`
- document workflow naming and review expectations

### Phase 1: Read-only semantic review

- introduce `aw-pr-maintainer-review.md`
- introduce `aw-release-preflight.md`
- keep outputs limited to summaries and comments

### Phase 2: Diagnostics and memory

- introduce `aw-ci-audit.md`
- enable `agentic-workflows:` where useful
- add constrained `repo-memory` configuration for repeated failure patterns

### Phase 3: Maintenance automation

- add docs drift patrol
- add issue maintenance workflow
- consider limited code-change proposals only after trust is established

---

## 11. Local Maintainer Setup

For local experimentation and debugging:

### 11.1 Install CLI

```bash
curl -sL https://raw.githubusercontent.com/github/gh-aw/main/install-gh-aw.sh | bash
```

### 11.2 Useful commands

```bash
gh aw version
gh aw compile
gh aw status
gh aw run aw-pr-maintainer-review
gh aw logs
gh aw audit <run-id>
```
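
For repeatable local checks, the edit-compile-run loop can be wrapped in a small helper. A hypothetical dry-run version that only prints the sequence, without invoking gh-aw (the workflow id is assumed to match the source file stem):

```shell
#!/bin/sh
# Dry-run helper: show the gh-aw command sequence for one workflow.
# Purely illustrative; it echoes commands instead of executing them.
workflow="aw-pr-maintainer-review"
for cmd in "gh aw compile" "gh aw run $workflow" "gh aw logs"; do
  printf 'would run: %s\n' "$cmd"
done
```

Replacing `printf` with an actual invocation turns this into a one-command local smoke test once the CLI is installed.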
|
||||
|
||||
### 11.3 VS Code MCP integration
|
||||
|
||||
A future optional improvement is adding `gh aw mcp-server` to local MCP configuration so workflow introspection tools are available in editor-based agent sessions.
---

## 12. Recommended First Deliverables

Start with these two workflows only:

1. `aw-pr-maintainer-review.md`
2. `aw-release-preflight.md`

This gives the repository the highest-value upgrade with the lowest operational risk.

---

## 13. Success Criteria

Adoption is working if:

- PR review comments become more specific and repository-aware
- release preparation catches missing docs or version sync earlier
- CI failures produce actionable summaries faster
- maintainers spend less time on repetitive policy review
- deterministic workflows remain stable and unchanged in core behavior

---

## 14. Summary

For `openwebui-extensions`, `gh-aw` should be adopted as an intelligent maintenance layer.

- Keep current YAML workflows for execution.
- Add agentic workflows for policy-aware review and diagnosis.
- Start read-only.
- Expand only after signal quality is proven.

This approach aligns with the repository's existing strengths: strong conventions, bilingual maintenance, plugin lifecycle complexity, and growing repository operations.
424 docs/development/gh-aw-integration-plan.zh.md Normal file
@@ -0,0 +1,424 @@
# gh-aw 集成方案

> 本文档用于为 `openwebui-extensions` 仓库设计一套安全、渐进式的 GitHub Agentic Workflows (`gh-aw`) 接入方案。

---

## 1. 目标

- 在不替换现有稳定 CI 的前提下,引入具备仓库理解能力的 AI 维护层。
- 将 `gh-aw` 用于更适合自然语言推理的任务,而不是机械脚本执行。
- 保留当前发布、部署、发布插件和统计工作流作为执行骨架。
- 为仓库维护引入可观测性、自动诊断和长期记忆能力。

---

## 2. 为什么这个仓库适合 gh-aw

本仓库已经有一套很强的确定性自动化:

- `/.github/workflows/release.yml`
- `/.github/workflows/plugin-version-check.yml`
- `/.github/workflows/deploy.yml`
- `/.github/workflows/publish_plugin.yml`
- `/.github/workflows/community-stats.yml`

这些工作流擅长精确执行,但并不擅长理解仓库规范本身。

`gh-aw` 更适合以下任务:

- 联合阅读代码、文档和 PR 描述后再做判断
- 带语义地应用仓库规范
- 生成结构化的 review 评论
- 自动分析失败的工作流运行
- 在多次运行之间保存维护经验和模式

这与当前仓库的真实需求高度匹配:

- 双语文档同步
- 插件代码、README 与 docs 一致性检查
- 跨多个文件的发布前完整性核查
- Issue 与 PR 的规模化维护

---

## 3. 非目标

第一阶段不建议让 `gh-aw`:

- 替换 `release.yml`
- 替换 `publish_plugin.yml`
- 替换 MkDocs 部署
- 默认自动合并或自动推送代码
- 一开始就拥有过宽的写权限

第一阶段应把它定位为 review、诊断和 preflight 层。

---

## 4. 接入原则

### 4.1 确定性执行继续由 YAML 工作流承担

现有 YAML workflow 继续负责:

- 创建 release
- 发布插件
- 部署文档
- 提取和比较版本号
- 生成社区统计

### 4.2 Agentic workflow 只负责判断和总结

`gh-aw` workflow 优先承担:

- 基于规范的语义审查
- 发布前完整性检查
- 文档漂移巡检
- CI 失败原因分析
- Issue 分流与回复草稿生成

### 4.3 默认只读

优先使用最小权限,并通过 safe outputs 进行受控评论或低风险输出。

### 4.4 逐步扩容

一次只上线一个 agentic workflow,验证质量后再扩大范围。

---
## 5. 建议的仓库结构

### 5.1 新增文件和目录

```text
.github/
├── workflows/
│   ├── release.yml
│   ├── plugin-version-check.yml
│   ├── deploy.yml
│   ├── publish_plugin.yml
│   ├── community-stats.yml
│   ├── aw-pr-maintainer-review.md
│   ├── aw-pr-maintainer-review.lock.yml
│   ├── aw-release-preflight.md
│   ├── aw-release-preflight.lock.yml
│   ├── aw-ci-audit.md
│   ├── aw-ci-audit.lock.yml
│   ├── aw-docs-drift-review.md
│   └── aw-docs-drift-review.lock.yml
├── gh-aw/
│   ├── prompts/
│   │   ├── pr-review-policy.md
│   │   ├── release-preflight-policy.md
│   │   ├── ci-audit-policy.md
│   │   └── docs-drift-policy.md
│   ├── schemas/
│   │   └── review-output-example.json
│   └── README.md
└── copilot-instructions.md
```

### 5.2 命名规范

所有 agentic workflow 源文件统一使用 `aw-` 前缀:

- `aw-pr-maintainer-review.md`
- `aw-release-preflight.md`
- `aw-ci-audit.md`
- `aw-docs-drift-review.md`

这样做的原因:

- 可以和现有手写 YAML 工作流明确区分
- 便于在仓库中快速搜索和定位
- 方便调试和发布时识别来源

### 5.3 为什么不直接替换 `.yml`

当前 `.yml` 文件承担的是生产执行逻辑。第一阶段 `gh-aw` 的角色应该是补充,而不是接管。

---
## 6. 建议优先建设的 workflow 组合

### 6.1 第一阶段:PR 维护者语义审查

**文件**: `/.github/workflows/aw-pr-maintainer-review.md`

**作用**:

- 审查涉及插件、文档或开发规范的 PR
- 对缺失的仓库标准更新给出评论
- 作为 `plugin-version-check.yml` 之上的语义层

**建议检查项**:

- 插件代码修改后是否更新版本号
- 是否同时更新 `README.md` 和 `README_CN.md`
- 是否同步更新 docs 镜像页
- 是否需要更新根 README 的日期 badge
- 插件代码是否遵守 i18n 与 helper 规范
- PR 标题或正文是否符合 Conventional Commits 精神

**建议权限**:

```yaml
permissions:
  contents: read
  pull-requests: write
  issues: write
```

**建议工具**:

- 只读型 `github:` 工具
- 只开放少量只读 `bash:` 命令
- 第一阶段不开放 `edit:`
- `agentic-workflows:` 可在后续成熟后再启用

### 6.2 第一阶段:发布前预检

**文件**: `/.github/workflows/aw-release-preflight.md`

**作用**:

- 在 release 前或手动触发时执行
- 在 `release.yml` 打包和发布之前,先检查发布完整性

**建议检查项**:

- 代码版本号和文档版本号是否一致
- 双语 README 是否完整更新
- docs 插件镜像页是否存在并匹配当前发布目标
- release notes 来源文件是否齐全
- commit message 与 release 草案是否连贯

**输出方式**:

- 在 PR 或 issue 中写总结评论
- 可附带 checklist artifact
- 不直接执行正式发布

### 6.3 第二阶段:CI 失败自动审计

**文件**: `/.github/workflows/aw-ci-audit.md`

**作用**:

- 分析 `release.yml`、`publish_plugin.yml`、`community-stats.yml` 等关键 workflow 的失败运行
- 输出根因判断和下一步修复建议

**适合 gh-aw 的原因**:

- 可以通过 `gh aw mcp-server` 使用 `logs`、`audit` 等能力
- 原生支持对 workflow 执行痕迹进行事后分析

### 6.4 第二阶段:文档漂移巡检

**文件**: `/.github/workflows/aw-docs-drift-review.md`

**作用**:

- 定期检查插件代码、插件目录 README、本地 docs 镜像和根索引之间是否发生漂移

**建议检查项**:

- 是否缺少 `README_CN.md`
- README 章节顺序是否偏离规范
- 插件更新后 docs 页面是否缺失
- 代码和文档中的版本号是否不一致

### 6.5 第三阶段:Issue 维护助手

**候选文件**: `/.github/workflows/aw-issue-maintainer.md`

**作用**:

- 汇总长期未回复的 issue
- 生成英文或双语回复草稿
- 按插件归类重复问题

这个阶段建议在前面的 review 和 audit 流程稳定后再上线。

---
## 7. 与现有 workflow 的职责映射

| 当前 Workflow | 是否保留 | gh-aw 搭档 | 职责划分 |
|------|------|------|------|
| `/.github/workflows/release.yml` | 保留 | `aw-release-preflight.md` | `release.yml` 负责执行,`gh-aw` 负责判断是否已准备好 |
| `/.github/workflows/plugin-version-check.yml` | 保留 | `aw-pr-maintainer-review.md` | 硬性门禁 + 语义审查 |
| `/.github/workflows/deploy.yml` | 保留 | 初期不加 | 确定性构建和部署 |
| `/.github/workflows/publish_plugin.yml` | 保留 | `aw-ci-audit.md` | 确定性发布 + 失败诊断 |
| `/.github/workflows/community-stats.yml` | 保留 | `aw-ci-audit.md` | 确定性统计 + 异常诊断 |

---

## 8. 工具模型建议

### 8.1 第一阶段建议启用的内建工具

建议从窄权限工具集开始:

```yaml
tools:
  github:
    toolsets: [default]
  bash:
    - echo
    - pwd
    - ls
    - cat
    - head
    - tail
    - grep
    - wc
    - git status
    - git diff
```

第一阶段不要开放完全不受限的 shell。

### 8.2 MCP 使用策略

后续可通过 `gh aw mcp-server` 引入:

- workflow `status`
- workflow `compile`
- workflow `logs`
- workflow `audit`
- `mcp-inspect`

这对 `aw-ci-audit.md` 特别有价值。

### 8.3 Safe output 策略

第一阶段仅开放低风险 safe outputs:

- 给 PR 写评论
- 给 issue 写评论
- 在明确需要时创建低风险维护 issue

一开始不要让 agent 自动提交代码修改。

---
## 9. Repo Memory 策略

`gh-aw` 的 repo memory 很适合本仓库,但必须加限制。

### 9.1 第一批适合保存的内容

- 重复出现的 CI 失败模式
- 常见文档同步遗漏
- 高频 review 提醒项
- 按插件聚类的 issue 模式

### 9.2 推荐配置思路

- 只允许 `.md` 和 `.json`
- 限制 patch size
- 按主题拆成多个 memory stream

建议的逻辑布局:

```text
memory/review-notes/*.md
memory/ci-patterns/*.md
memory/issue-clusters/*.json
```

### 9.3 重要提醒

不要把 secret、token 或未公开敏感信息写入 repo memory。

---
## 10. 分阶段落地顺序

### Phase 0: 准备阶段

- 维护者本地安装 `gh-aw`
- 添加一个简短的 `/.github/gh-aw/README.md`
- 写清楚 workflow 命名规范和 review 预期

### Phase 1: 只读语义审查

- 上线 `aw-pr-maintainer-review.md`
- 上线 `aw-release-preflight.md`
- 输出先限制为总结和评论

### Phase 2: 诊断与记忆

- 上线 `aw-ci-audit.md`
- 在需要的地方启用 `agentic-workflows:`
- 为重复失败模式加入受限 `repo-memory`

### Phase 3: 维护自动化

- 增加文档漂移巡检
- 增加 issue 维护 workflow
- 只有在信号质量足够稳定后,再考虑有限度的代码修改建议

---

## 11. 维护者本地使用建议

### 11.1 安装 CLI

```bash
curl -sL https://raw.githubusercontent.com/github/gh-aw/main/install-gh-aw.sh | bash
```

### 11.2 常用命令

```bash
gh aw version
gh aw compile
gh aw status
gh aw run aw-pr-maintainer-review
gh aw logs
gh aw audit <run-id>
```

### 11.3 VS Code MCP 集成

后续可选增强项是把 `gh aw mcp-server` 加入本地 MCP 配置,这样编辑器内的 agent 会直接具备 workflow 自省能力。

---
## 12. 最小可行落地建议

建议第一步只做这两个 workflow:

1. `aw-pr-maintainer-review.md`
2. `aw-release-preflight.md`

这样可以以最低风险获得最高价值的增强。

---

## 13. 成功标准

如果接入有效,应该看到这些结果:

- PR 评论更具体,更贴合仓库规范
- 发布前能更早发现文档或版本同步遗漏
- CI 失败后更快得到可执行的总结
- 维护者花在重复性规范检查上的时间下降
- 现有确定性 workflow 的核心行为保持稳定

---

## 14. 总结

对 `openwebui-extensions` 来说,`gh-aw` 最合适的定位是智能维护层。

- 现有 YAML workflow 继续负责执行。
- agentic workflow 负责语义审查和诊断。
- 第一阶段默认只读。
- 等输出质量稳定后再逐步放权。

这条路径和仓库现状是匹配的:规范密度高、双语维护复杂、插件生命周期长,而且已经具备成熟的 AI 工程上下文。
BIN docs/development/image.png Normal file
Binary file not shown.
After Width: | Height: | Size: 406 KiB
@@ -32,6 +32,14 @@ Learn how to develop plugins and contribute to OpenWebUI Extensions.

    [:octicons-arrow-right-24: Read the Plan](copilot-engineering-plan.md)

- :material-source-branch:{ .lg .middle } **gh-aw Integration Plan**

    ---

    Adoption plan for using GitHub Agentic Workflows as a semantic review and diagnostics layer in this repository.

    [:octicons-arrow-right-24: Read the Plan](gh-aw-integration-plan.md)

- :material-github:{ .lg .middle } **Contributing**

    ---
@@ -32,6 +32,14 @@

    [:octicons-arrow-right-24: 阅读文档](copilot-engineering-plan.md)

- :material-source-branch:{ .lg .middle } **gh-aw 集成方案**

    ---

    面向本仓库的 GitHub Agentic Workflows 渐进式接入设计,重点覆盖语义审查、发布预检与 CI 诊断。

    [:octicons-arrow-right-24: 阅读文档](gh-aw-integration-plan.zh.md)

- :material-github:{ .lg .middle } **贡献指南**

    ---
@@ -1,8 +1,14 @@
# GitHub Copilot SDK Pipe for OpenWebUI

**Author:** [Fu-Jie](https://github.com/Fu-Jie) | **Version:** 0.9.0 | **Project:** [OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions) | **License:** MIT
**Author:** [Fu-Jie](https://github.com/Fu-Jie) | **Version:** 0.10.0 | **Project:** [OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions) | **License:** MIT

This is an advanced Pipe function for [OpenWebUI](https://github.com/open-webui/open-webui) that integrates the official [GitHub Copilot SDK](https://github.com/github/copilot-sdk). It enables you to use **GitHub Copilot models** (e.g., `gpt-5.2-codex`, `claude-sonnet-4.5`, `gemini-3-pro`, `gpt-5-mini`) **AND** your own models via **BYOK** (OpenAI, Anthropic) directly within OpenWebUI, providing a unified agentic experience with **strict User & Chat-level Workspace Isolation**.
This is a powerful **GitHub Copilot SDK** Pipe for **OpenWebUI** that provides a unified **Agentic experience**. It goes beyond simple model access by enabling autonomous **Intent Recognition**, **Web Search**, and **Context Compaction**. It seamlessly reuses your existing **Tools, MCP servers, OpenAPI servers, and Skills** from OpenWebUI to create a truly integrated ecosystem.

- **🧠 Autonomous Intent Recognition**: The Agent independently analyzes user goals to determine the most effective path forward.
- **🌐 Smart Web Search**: Built-in capability to trigger web searches autonomously based on task requirements.
- **♾️ Infinite Session (Context Compaction)**: Automatically manages long-running conversations by compacting context (summarization + TODO persistence) to maintain project focus.
- **🧩 Ecosystem Injection**: Directly reads and leverages your configured **OpenWebUI Tools, MCPs, OpenAPI Servers, and Skills**.
- **🎨 Interactive Delivery**: Native support for **HTML Artifacts** and **RichUI** components for real-time visualization and reporting.

> [!IMPORTANT]
> **Essential Companion**
@@ -14,31 +20,39 @@ This is an advanced Pipe function for [OpenWebUI](https://github.com/open-webui/

---

## ✨ v0.9.0: The Skills Revolution & Stability Update
## ✨ v0.10.0: Native Prompt Restoration, Live TODO Widget & SDK v0.1.30

- **🧩 Copilot SDK Skills Support**: Native support for Copilot SDK skill directories (`SKILL.md` + resources).
- **🔄 OpenWebUI Skills Bridge**: Full bidirectional sync between OpenWebUI Workspace > Skills and SDK skill directories.
- **🛠️ Deterministic `manage_skills` Tool**: Expert tool for stable install/create/list/edit/delete skill operations.
- **🌊 Reinforced Status Bar**: Multi-layered locking mechanism (`session_finalized` guard) and atomic async delivery to prevent "stuck" indicators.
- **🗂️ Persistent Config Directory**: Added `COPILOTSDK_CONFIG_DIR` for stable session-state persistence across container restarts.
- **⌨️ Authentic Prompt Restoration**: Most native Copilot CLI prompts have been restored to ensure authentic behavior and enhanced capabilities across the Agentic workflow.
- **📋 Live TODO Widget**: Added a compact real-time task tracking widget synchronized with `session.db`, keeping in-progress work visible without cluttering the chat history.
- **🧩 OpenWebUI Tool Call Fixes**: Fixed custom tool invocation by syncing injected context with OpenWebUI 0.8.x expectations, including `__request__`, `request`, `body`, `__messages__`, `__metadata__`, `__files__`, `__task__`, and session/chat/message IDs.
- **🔒 SDK v0.1.30 + Adaptive Workstyle**: Upgraded the pipe to `github-copilot-sdk==0.1.30`, moving workflow logic into the system prompt for autonomous "Plan-vs-Execute" decisions.
- **🐛 Intent + Widget UX Fixes**: Fixed `report_intent` localization and cleaned up TODO widget layout for a more professional look.
- **🧾 Better Embedded Tool Results**: Improved HTML/embedded tool outcomes and synchronized documentation surface.

---
## ✨ Key Capabilities

- **🔑 Unified Intelligence (Official + BYOK)**: Seamlessly switch between official GitHub Copilot models (o1, GPT-4o, Claude 3.5 Sonnet, Gemini 2.0 Flash) and your own models (OpenAI, Anthropic) via **Bring Your Own Key** mode.
- **🔑 Unified Intelligence (Official + BYOK)**: Seamlessly switch between official GitHub Copilot models and your own models (OpenAI, Anthropic, DeepSeek, xAI) via **Bring Your Own Key** mode.
- **🛡️ Physical Workspace Isolation**: Every session runs in its own isolated directory sandbox. This ensures absolute data privacy and prevents cross-chat file contamination while allowing the Agent full filesystem access.
- **🔌 Universal Tool Protocol**:
  - **Native MCP**: Direct, high-performance connection to Model Context Protocol servers.
  - **OpenAPI Bridge**: Connect to any external REST API as an Agent tool.
  - **OpenWebUI Native**: Zero-config bridge to your existing OpenWebUI tools and built-ins (Web Search, Memory, etc.).
- **🧩 OpenWebUI Skills Bridge**: Transforms simple OpenWebUI Markdown instructions into powerful SDK skill folders complete with supporting scripts, templates, and data.
- **🧭 Adaptive Planning and Execution**: The Agent decides whether to respond with a planning-first analysis or direct implementation flow based on task complexity, ambiguity, and user intent.
- **♾️ Infinite Session Management**: Advanced context window management with automatic "Compaction" (summarization + list persistence). Carry out weeks-long projects without losing the core thread.
- **📊 Interactive Artifacts & Publishing**:
  - **Live HTML/JS**: Instantly render and interact with apps, dashboards, or reports generated by the Agent.
  - **Persistent Publishing**: Agents can "publish" generated files (Excel, CSV, docs) to OpenWebUI's file storage, providing permanent download links.
- **🌊 UX-First Streaming**: Full support for "Thinking" processes (Chain of Thought), status indicators, and real-time progress bars for long-running tasks.
- **🧠 Deep Database Integration**: Real-time persistence of TOD·O lists and session metadata ensures your workflow state is always visible in the UI.
- **🧠 Deep Database Integration**: Real-time persistence of TODO lists and session metadata ensures your workflow state is always visible in the UI.

> [!TIP]
> **💡 Visualization Pro-Tip**
> To get the most out of **HTML Artifacts** and **RichUI**, we highly recommend asking the Agent to install the skill via its GitHub URL:
> "Install this skill: <https://github.com/nicobailon/visual-explainer>".
> This skill is specifically optimized for generating high-quality visual components and integrates perfectly with this Pipe.

---
@@ -69,7 +83,6 @@ Administrators define the default behavior for all users in the function setting
| `ENABLE_MCP_SERVER` | `True` | Enable Direct MCP Client connection (Recommended). |
| `ENABLE_OPENWEBUI_SKILLS` | `True` | Enable bidirectional sync with OpenWebUI Workspace > Skills. |
| `OPENWEBUI_SKILLS_SHARED_DIR` | `/app/backend/data/cache/copilot-openwebui-skills` | Shared cache directory for skills. |
| `GITHUB_SKILLS_SOURCE_URL` | `""` | Optional GitHub tree URL for batch skill import (e.g., anthropic/skills). |
| `DISABLED_SKILLS` | `""` | Comma-separated skill names to disable in SDK session. |
| `REASONING_EFFORT` | `medium` | Reasoning effort level: low, medium, high. |
| `SHOW_THINKING` | `True` | Show model reasoning/thinking process. |

@@ -95,7 +108,6 @@ Standard users can override these settings in their individual Profile/Function
| `MAX_MULTIPLIER` | Maximum allowed billing multiplier override. |
| `EXCLUDE_KEYWORDS` | Exclude models containing these keywords. |
| `ENABLE_OPENWEBUI_SKILLS` | Enable loading all active OpenWebUI skills readable by you into SDK `SKILL.md` directories. |
| `GITHUB_SKILLS_SOURCE_URL` | Optional GitHub tree URL for batch skill import in your own session. |
| `DISABLED_SKILLS` | Comma-separated skill names to disable for your own session. |
| `BYOK_API_KEY` | Use your personal OpenAI/Anthropic API Key. |
@@ -1,46 +1,59 @@
# GitHub Copilot SDK 官方管道
# GitHub Copilot Official SDK Pipe

**作者:** [Fu-Jie](https://github.com/Fu-Jie/openwebui-extensions) | **版本:** 0.9.0 | **项目:** [OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions) | **许可证:** MIT
**作者:** [Fu-Jie](https://github.com/Fu-Jie/openwebui-extensions) | **版本:** 0.10.0 | **项目:** [OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions) | **许可证:** MIT

这是一个用于 [OpenWebUI](https://github.com/open-webui/open-webui) 的高级 Pipe 函数,深度集成了 **GitHub Copilot SDK**。它不仅支持 **GitHub Copilot 官方模型**(如 `gpt-5.2-codex`, `claude-sonnet-4.5`, `gemini-3-pro`, `gpt-5-mini`),还支持 **BYOK (自带 Key)** 模式对接自定义服务商(OpenAI, Anthropic),并具备**严格的用户与会话级工作区隔离**能力,提供统一且安全的 Agent 交互体验。
这是一个将 **GitHub Copilot SDK** 深度集成到 **OpenWebUI** 中的强大 Agent SDK 管道。它不仅实现了 SDK 的核心功能,还支持 **智能意图识别**、**自主网页搜索** 与 **自动上下文压缩**,并能够无缝读取 OpenWebUI 已有的配置进行智能注入,让 Agent 能够具备以下能力:

- **🧠 智能意图识别**:Agent 能自主分析用户任务的深层意图,决定最有效的处理路径。
- **🌐 自主网页搜索**:具备独立的网页搜索触发判断力,无需用户手动干预。
- **♾️ 自动压缩上下文**:支持 Infinite Session,自动对长对话进行上下文压缩与摘要,确保长期任务跟进。
- **🛠️ 全功能 Skill 体系**:完美支持本地自定义 Skill 目录,通过脚本与资源的结合实现真正的功能增强。
- **🧩 深度生态复用**:直接复用您在 OpenWebUI 中配置的各种 **工具 (Tools)**、**MCP**、**OpenAPI Server** 和 **技能 (Skills)**。

为您带来更强、更完整的交互体验。

> [!IMPORTANT]
> **核心伴侣组件**
> 如需启用文件处理与数据分析能力,请务必安装 [GitHub Copilot SDK Files Filter](https://openwebui.com/posts/403a62ee-a596-45e7-be65-fab9cc24dd6)。
>
> ## ✨ 0.9.0 核心更新:技能革命与稳定性加固

- **🧩 Copilot SDK Skills 原生支持**: 技能可作为一等上下文能力被加载和使用。
- **🔄 OpenWebUI Skills 桥接**: 实现 OpenWebUI **工作区 > Skills** 与 SDK 技能目录的深度双向同步。
- **🛠️ 确定性 `manage_skills` 工具**: 通过稳定工具契约完成技能的生命周期管理。
- **🌊 状态栏逻辑加固**: 引入 `session_finalized` 多层锁定机制,彻底解决任务完成后状态栏回弹或卡死的问题。
- **🗂️ 环境目录持久化**: 增强 `COPILOTSDK_CONFIG_DIR` 逻辑,确保会话状态跨容器重启稳定存在。
- **🌐 持续化共享缓存(扩展)**: 技能统一存储在 `OPENWEBUI_SKILLS_SHARED_DIR/shared/`,跨会话与容器重启复用。
- **🎯 智能意图路由(扩展)**: 自动识别技能管理请求并优先路由到 `manage_skills`,确保执行确定性。
- **🗂️ 环境目录升级**: 新增 `COPILOTSDK_CONFIG_DIR`,并自动回退到 `/app/backend/data/.copilot`,确保 SDK 配置与会话状态在容器重启后稳定持久化。
- **🧭 CLI 提示词护栏**: 系统提示词明确区分可执行的 **tools** 与不可调用的 **skills**,并要求技能生命周期操作优先走 `manage_skills`,同时强化 CLI/Python 执行规范。

> 如需启用文件处理与数据分析能力,请务必安装 [GitHub Copilot SDK Files Filter](https://openwebui.com/posts/403a62ee-a596-45e7-be65-fab9cc249dd6)。
> [!TIP]
> **BYOK 模式无需订阅**
> 如果您使用自带的 API Key (BYOK 模式对接 OpenAI/Anthropic),**您不需要 GitHub Copilot 官方订阅**。只有在访问 GitHub 官方模型时才需要订阅。

---
## ✨ v0.10.0 最新更新:原生提示词恢复、Live TODO 小组件与 SDK v0.1.30 完善

- **⌨️ 原生提示词恢复**:恢复了大部分 Copilot CLI 原生提示词,确保 Agent 在处理复杂任务时具备最正宗的行为逻辑与增强能力。
- **📋 Live TODO 小组件**:新增基于 `session.db` 实时任务状态的紧凑型嵌入式 TODO 小组件,任务进度常驻可见,无需在正文中重复显示全部待办列表。
- **🧩 OpenWebUI 工具调用修复**:修复自定义工具调用时上下文注入不完整的问题,完全对齐 OpenWebUI 0.8.x 所需的系统级上下文(`__request__`、`body`、`__metadata__` 等)。
- **🔒 SDK v0.1.30 与自适应工作流**:升级到 `github-copilot-sdk==0.1.30`,将规划与执行逻辑移至系统提示词,让 Agent 根据任务复杂度自主决策工作流。
- **🐛 意图与体验优化**:修复 `report_intent` 国际化问题,优化 TODO 小组件的视觉布局,减少冗余空白。
- **🧾 嵌入结果与文档更新**:改进 HTML/嵌入式工具结果处理,同步中英 README 与 docs 镜像页,确保发布状态一致。

---

## ✨ 核心能力 (Key Capabilities)

- **🔑 统一智能体验 (官方 + BYOK)**: 自由切换官方模型(o1, GPT-4o, Claude 3.5 Sonnet, Gemini 2.0 Flash)与自定义服务商(OpenAI, Anthropic),支持 **BYOK (自带 Key)** 模式。
- **🔑 统一智能体验 (官方 + BYOK)**: 自由切换官方模型与自定义服务商(OpenAI, Anthropic, DeepSeek, xAI),支持 **BYOK (自带 Key)** 模式。
- **🛡️ 物理级工作区隔离**: 每个会话在独立的沙箱目录中运行。确保绝对的数据隐私,防止不同聊天间的文件污染,同时给予 Agent 完整的文件系统操作权限。
- **🔌 通用工具协议**:
  - **原生 MCP**: 高性能直连 Model Context Protocol 服务器。
  - **OpenAPI 桥接**: 将任何外部 REST API 一键转换为 Agent 可调用的工具。
  - **OpenWebUI 原生桥接**: 零配置接入现有的 OpenWebUI 工具及内置功能(网页搜索、记忆等)。
- **🧩 OpenWebUI Skills 桥接**: 将简单的 OpenWebUI Markdown 指令转化为包含脚本、模板和数据的强大 SDK 技能文件夹。
- **🧭 自适应规划与执行**: Agent 会根据任务复杂度、歧义程度和用户意图,自主决定先输出结构化方案,还是直接分析、实现并验证。
- **♾️ 无限会话管理**: 先进的上下文窗口管理,支持自动“压缩”(摘要提取 + TODO 列表持久化)。支持长达数周的项目跟踪而不会丢失核心上下文。
- **📊 交互式产物与发布**:
  - **实时 HTML/JS**: 瞬间渲染并交互 Agent 生成的应用程序、可视化看板或报告。
  - **持久化发布**: Agent 可将生成的产物(Excel, CSV, 文档)发布至 OpenWebUI 文件存储,并在聊天中提供永久下载链接。
- **🌊 极致交互体验**: 完整支持深度思考过程 (Thinking Process) 流式渲染、状态指示器以及长任务实时进度条。
- **🧠 深度数据库集成**: TODO 列表与会话元数据的实时持久化,确保任务执行状态在 UI 上清晰可见。

> [!TIP]
> **💡 增强渲染建议**
> 为了获得最精美的 **HTML Artifacts** 与 **RichUI** 效果,建议在对话中通过提供的 GitHub 链接直接命令 Agent 安装:
> “请安装此技能:<https://github.com/nicobailon/visual-explainer>”。
> 该技能专为生成高质量可视化组件而设计,能够与本 Pipe 完美协作。

---
@@ -49,162 +62,106 @@
`GitHub Copilot SDK Files Filter` 是本 Pipe 的配套插件,用于阻止 OpenWebUI 默认 RAG 在 Pipe 接手前抢先处理上传文件。

- **作用**: 将上传文件移动到 `copilot_files`,让 Pipe 能直接读取原始二进制。
- **必要性**: 若未安装,文件可能被提前解析/向量化,Agent 难以拿到原始文件。
- **v0.1.3 重点**:
  - 修复 BYOK 模型 ID 识别(支持 `github_copilot_official_sdk_pipe.xxx` 前缀匹配)。
  - 新增双通道调试日志(`show_debug_log`):后端 logger + 浏览器控制台。

---
## ⚙️ 核心配置参数 (Valves)
## ⚙️ 核心配置 (Valves)

### 1. 管理员配置 (基础设置)
### 1. 管理员设置(全局默认)

管理员可在函数设置中定义全局默认行为。
管理员可在函数设置中为所有用户定义默认行为。

| 参数 | 默认值 | 说明 |
| Valve | 默认值 | 描述 |
| :--- | :--- | :--- |
| `GH_TOKEN` | `""` | 全局 GitHub Token (需具备 'Copilot Requests' 权限)。 |
| `COPILOTSDK_CONFIG_DIR` | `""` | SDK 配置与会话状态持久化目录 (例如: `/app/backend/data/.copilot`)。 |
| `ENABLE_OPENWEBUI_TOOLS` | `True` | 启用 OpenWebUI 工具 (包括定义工具和内置工具)。 |
| `ENABLE_OPENAPI_SERVER` | `True` | 启用 OpenAPI 工具服务器连接。 |
| `ENABLE_MCP_SERVER` | `True` | 启用直接 MCP 客户端连接 (推荐)。 |
| `ENABLE_OPENWEBUI_SKILLS` | `True` | 开启与 OpenWebUI **工作区 > Skills** 的双向同步桥接。 |
| `OPENWEBUI_SKILLS_SHARED_DIR` | `/app/backend/data/cache/copilot-openwebui-skills` | OpenWebUI skills 转换后的共享缓存目录。 |
| `GITHUB_SKILLS_SOURCE_URL` | `""` | 可选 GitHub tree 地址,用于批量导入 skills(例如 anthropic/skills)。 |
| `DISABLED_SKILLS` | `""` | 逗号分隔的 skill 名称黑名单(如 `docs-writer,webapp-testing`)。 |
| `REASONING_EFFORT` | `medium` | 推理强度:low, medium, high。 |
| `SHOW_THINKING` | `True` | 显示模型推理/思考过程。 |
| `INFINITE_SESSION` | `True` | 启用无限会话 (自动上下文压缩)。 |
| `MAX_MULTIPLIER` | `1.0` | 最大允许的模型计费倍率 (0x 为仅限免费模型)。 |
| `EXCLUDE_KEYWORDS` | `""` | 排除包含这些关键字的模型 (逗号分隔)。 |
| `TIMEOUT` | `300` | 每个流数据块的超时时间 (秒)。 |
| `BYOK_TYPE` | `openai` | BYOK 服务商类型:`openai`, `anthropic`。 |
| `BYOK_BASE_URL` | `""` | BYOK 基础 URL (例如: <https://api.openai.com/v1>)。 |
| `BYOK_MODELS` | `""` | BYOK 模型列表 (逗号分隔)。留空则从 API 获取。 |
| `CUSTOM_ENV_VARS` | `""` | 自定义环境变量 (JSON 格式)。 |
| `DEBUG` | `False` | 开启此项以在前端控制台输出详细调试日志。 |
| `GH_TOKEN` | `""` | 全局 GitHub Fine-grained Token,需要 `Copilot Requests` 权限。 |
| `COPILOTSDK_CONFIG_DIR` | `/app/backend/data/.copilot` | SDK 配置与会话状态的持久化目录。 |
| `ENABLE_OPENWEBUI_TOOLS` | `True` | 启用 OpenWebUI Tools 与 Built-in Tools。 |
| `ENABLE_OPENAPI_SERVER` | `True` | 启用 OpenAPI Tool Server 连接。 |
| `ENABLE_MCP_SERVER` | `True` | 启用 MCP Server 连接。 |
| `ENABLE_OPENWEBUI_SKILLS` | `True` | 启用 OpenWebUI Skills 到 SDK 技能目录的同步。 |
| `OPENWEBUI_SKILLS_SHARED_DIR` | `/app/backend/data/cache/copilot-openwebui-skills` | Skills 共享缓存目录。 |
| `DISABLED_SKILLS` | `""` | 逗号分隔的禁用技能名列表。 |
| `REASONING_EFFORT` | `medium` | 推理强度:`low`、`medium`、`high`、`xhigh`。 |
| `SHOW_THINKING` | `True` | 是否显示思考过程。 |
| `INFINITE_SESSION` | `True` | 是否启用无限会话与上下文压缩。 |
| `MAX_MULTIPLIER` | `1.0` | 允许的最大账单倍率。`0` 表示仅允许免费模型。 |
| `EXCLUDE_KEYWORDS` | `""` | 排除包含这些关键词的模型。 |
| `TIMEOUT` | `300` | 每个流式分片的超时时间(秒)。 |
| `BYOK_TYPE` | `openai` | BYOK 提供商类型:`openai` 或 `anthropic`。 |
| `BYOK_BASE_URL` | `""` | BYOK Base URL。 |
| `BYOK_MODELS` | `""` | BYOK 模型列表,留空则尝试从 API 获取。 |
| `CUSTOM_ENV_VARS` | `""` | 自定义环境变量(JSON 格式)。 |
| `DEBUG` | `False` | 启用浏览器控制台/技术调试日志。 |

### 2. 用户配置 (个人覆盖)
### 2. 用户设置(个人覆盖)

普通用户可在各自的个人设置中根据需要覆盖以下参数。
普通用户可在个人资料或函数设置中覆盖以下选项。

| 参数 | 说明 |
| Valve | 描述 |
| :--- | :--- |
| `GH_TOKEN` | 使用个人的 GitHub Token。 |
| `REASONING_EFFORT` | 个人偏好的推理强度。 |
| `SHOW_THINKING` | 显示模型推理/思考过程。 |
| `MAX_MULTIPLIER` | 最大允许的模型计费倍率覆盖。 |
| `EXCLUDE_KEYWORDS` | 排除包含这些关键字的模型。 |
| `ENABLE_OPENWEBUI_SKILLS` | 启用将当前用户可读的全部已启用 OpenWebUI skills 转换并加载为 SDK `SKILL.md` 目录。 |
| `GITHUB_SKILLS_SOURCE_URL` | 为当前用户会话设置可选 GitHub tree 地址以批量导入 skills。 |
| `DISABLED_SKILLS` | 为当前用户会话禁用指定 skills(逗号分隔)。 |
| `BYOK_API_KEY` | 使用个人的 OpenAI/Anthropic API Key。 |
| `GH_TOKEN` | 使用个人 GitHub Token。 |
| `REASONING_EFFORT` | 个人推理强度偏好。 |
| `SHOW_THINKING` | 是否显示思考过程。 |
| `MAX_MULTIPLIER` | 个人最大账单倍率限制。 |
| `EXCLUDE_KEYWORDS` | 个人模型排除关键词。 |
| `ENABLE_OPENWEBUI_TOOLS` | 是否启用 OpenWebUI Tools 与 Built-in Tools。 |
| `ENABLE_OPENAPI_SERVER` | 是否启用 OpenAPI Tool Server。 |
| `ENABLE_MCP_SERVER` | 是否启用 MCP Server。 |
| `ENABLE_OPENWEBUI_SKILLS` | 是否加载你可读的 OpenWebUI Skills 到 SDK 技能目录。 |
| `DISABLED_SKILLS` | 逗号分隔的个人禁用技能列表。 |
| `BYOK_API_KEY` | 个人 BYOK API Key。 |
| `BYOK_TYPE` | 个人 BYOK 提供商类型覆盖。 |
| `BYOK_BASE_URL` | 个人 BYOK Base URL 覆盖。 |
| `BYOK_BEARER_TOKEN` | 个人 BYOK Bearer Token 覆盖。 |
| `BYOK_MODELS` | 个人 BYOK 模型列表覆盖。 |
| `BYOK_WIRE_API` | 个人 BYOK Wire API 覆盖。 |

---
### 🌊 细粒度反馈与流畅体验 (Fluid UX)
## 🚀 安装与配置

彻底告别复杂任务执行过程中的“卡顿”感:
### 1. 导入函数

- **🔄 实时状态气泡**: 将 SDK 内部事件(如 `turn_start`, `compaction`, `subagent_started`)直接映射为 OpenWebUI 的状态栏信息。
- **🧭 分阶段状态描述增强**: 状态栏会明确显示处理阶段(处理中、技能触发、工具执行、工具完成/失败、发布中、任务完成)。
- **⏱️ 长任务心跳提示**: 长时间处理中会周期性显示“仍在处理中(已耗时 X 秒)”,避免用户误判为卡死。
- **📈 工具执行进度追踪**: 长耗时工具(如代码分析)会在状态栏实时显示进度百分比及当前子任务描述。
- **⚡ 即时响应反馈**: 从响应开始第一秒即显示“助手正在处理您的请求...”,减少等待空窗感。
1. 打开 OpenWebUI,进入 **Workspace** -> **Functions**。
2. 点击 **+**(Create Function),粘贴 `github_copilot_sdk.py` 内容。
3. 保存并确保已启用。

### 2. 获取 Token

1. 访问 [GitHub Token Settings](https://github.com/settings/tokens?type=beta)。
2. 创建 **Fine-grained token**,授予 **Account permissions** -> **Copilot Requests** 权限。
3. 将生成的 Token 填入 `GH_TOKEN`。

### 3. 认证要求(必填其一)

必须至少配置一种凭据来源:

- `GH_TOKEN`(GitHub Copilot 官方订阅路线),或
- `BYOK_API_KEY`(OpenAI / Anthropic 自带 Key 路线)。

如果两者都未配置,模型列表将不会显示。

---

### 🛡️ 智能版本兼容
## 🤝 支持 (Support)

插件会自动根据您的 OpenWebUI 版本调整功能集:

- **v0.8.0+**: 开启 Rich UI、实时状态气泡及集成 HTML 预览。
- **旧版本**: 自动回退至标准 Markdown 代码块模式,确保最大稳定性。
如果这个插件对你有帮助,欢迎到 [OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions) 点个 Star,这将是我持续改进的动力,感谢支持。

---
## 🎯 典型应用场景 (Use Cases)
## ⚠️ 故障排除 (Troubleshooting)

- **📁 全自主仓库维护**: Agent 在隔离工作区内自动分析代码、运行测试并应用补丁。
- **📊 深度财务数据审计**: 直接通过 Python 加载 Excel/CSV 原始数据(绕过 RAG),生成图表并实时预览。
- **📝 长任务项目管理**: 自动拆解复杂任务并持久化 TODO 进度,跨会话跟踪执行状态。
- **工具无法使用?** 请先确认 OpenWebUI Tools / MCP / OpenAPI Server 已在对应设置中启用。
- **文件找不到?** 确保已启用配套的 `Files Filter` 插件,否则 RAG 可能会提前消费原始文件。
- **BYOK 报错?** 确认 `BYOK_BASE_URL` 包含正确协议前缀(如 `https://`),且模型 ID 准确无误。
- **卡在 "Thinking..."?** 检查后端网络连接,或打开 `DEBUG` 查看更详细的 SDK 日志。

---

## ⭐ 支持与交流 (Support)
## Changelog

如果这个插件对您有所帮助,请在 [OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions) 项目上点个 **Star** 💫,这是对我最大的鼓励。

---
|
||||
|
||||
## 🚀 安装与配置 (Installation)

### 1) 导入函数

1. 打开 OpenWebUI,前往 **工作区** -> **函数**。
2. 点击 **+** (创建函数),完整粘贴 `github_copilot_sdk.py` 的内容。
3. 点击保存并确保已启用。

### 2) 获取 Token (Get Token)

1. 访问 [GitHub Token 设置](https://github.com/settings/tokens?type=beta)。
2. 创建 **Fine-grained token**,授予 **Account permissions** -> **Copilot Requests** 访问权限。
3. 将生成的 Token 填入插件的 `GH_TOKEN` 配置项中。

### 3) 认证配置要求(必填)

你必须至少配置以下一种凭据:

- `GH_TOKEN`(GitHub Copilot 官方订阅路径),或
- `BYOK_API_KEY`(OpenAI/Anthropic 自带 Key 路径)。

如果两者都未配置,模型列表将不会出现。

### 4) 配套插件 (强烈推荐)

为了获得最佳的文件处理体验,请安装 [GitHub Copilot SDK Files Filter](https://openwebui.com/posts/403a62ee-a596-45e7-be65-fab9cc249dd6)。

---

### 📤 增强型发布工具与交互式组件

`publish_file_from_workspace` 现采用更清晰、可落地的交付规范:

- **Artifacts 模式(`artifacts`,默认)**:返回 `[Preview]` + `[Download]`,并可附带 `html_embed`,在 ```html 代码块中直接渲染。
- **Rich UI 模式(`richui`)**:仅返回 `[Preview]` + `[Download]`,由发射器自动触发集成式预览(消息中不输出 iframe 代码块)。
- **📄 PDF 安全交付规则**:仅输出 Markdown 链接(可用时为 `[Preview]` + `[Download]`)。**禁止通过 iframe/html 方式嵌入 PDF。**
- **⚡ 稳定双通道发布**:在本地与对象存储后端下,保持交互预览与持久下载链接一致可用。
- **✅ 状态集成**:通过 OpenWebUI 状态栏实时反馈发布进度与完成状态。
- **📘 发布工具指南(GitHub)**:[publish_file_from_workspace 工具指南(中文)](https://github.com/Fu-Jie/openwebui-extensions/blob/main/plugins/pipes/github-copilot-sdk/PUBLISH_FILE_FROM_WORKSPACE_CN.md)
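上面的双链接交付约定,可用如下假设性的 Python 草图表示(`format_delivery` 为示意函数,并非插件真实 API):

```python
def format_delivery(preview_url: str, download_url: str,
                    mode: str = "artifacts", html_embed: str = "") -> str:
    """双通道发布结果的极简草图:始终返回 [Preview] + [Download] 链接。"""
    links = f"[Preview]({preview_url}) | [Download]({download_url})"
    if mode == "artifacts" and html_embed:
        # artifacts 模式可附带 html_embed,在 ```html 代码块中内联渲染
        links += "\n\n```html\n" + html_embed + "\n```"
    return links
```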

---

### 🧩 OpenWebUI Skills 桥接与 `manage_skills` 工具

SDK 现在具备与 OpenWebUI **工作区 > Skills** 的双向同步能力:

- **🔄 自动同步**: 每次请求时,前端定义的技能会自动作为 `SKILL.md` 文件夹同步至 SDK 共享缓存,Agent 可直接调用。
- **🛠️ `manage_skills` 工具**: 内置专业工具,赋予 Agent(或用户)完整的技能管理权。
  - `list`: 列出所有已安装技能及描述。
  - `install`: 从 GitHub URL(自动转换归档链接)或直接从 `.zip`/`.tar.gz` 安装。
  - `create`: 从当前会话内容创建新技能目录,支持写入 `SKILL.md` 及辅助资源文件(脚本、模板)。
  - `edit`: 更新现有技能文件夹。
  - `delete`: 原子化删除本地目录及关联的数据库条目,防止僵尸技能复活。
- **📁 完整的文件夹支持**: 不同于数据库中单文件存储,SDK 会加载技能的**整个目录**。这使得技能可以携带二进制脚本、数据文件或复杂模板。
- **🌐 持久化共享缓存**: 技能存储在 `OPENWEBUI_SKILLS_SHARED_DIR/shared/`,跨会话及容器重启持久存在。
- **📚 技能完整文档(GitHub)**: [manage_skills 工具指南(中文)](https://github.com/Fu-Jie/openwebui-extensions/blob/main/plugins/pipes/github-copilot-sdk/SKILLS_MANAGER_CN.md) | [Skills Best Practices(中文)](https://github.com/Fu-Jie/openwebui-extensions/blob/main/plugins/pipes/github-copilot-sdk/SKILLS_BEST_PRACTICES_CN.md)
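`manage_skills` 的动作分发逻辑,可以用下面的假设性草图理解(目录结构与上文一致;真实工具的签名与返回格式以官方文档为准):

```python
import shutil
from pathlib import Path

def manage_skills(action: str, name: str = "",
                  shared_dir: str = "/app/backend/data/skills/shared") -> str:
    """技能管理动作分发的极简草图(假设性示例,仅示意 list/delete 两个动作)。"""
    root = Path(shared_dir)
    if action == "list":
        # 一个技能 = 一个包含 SKILL.md 的子目录
        skills = [p.name for p in root.iterdir()
                  if p.is_dir() and (p / "SKILL.md").exists()]
        return ", ".join(sorted(skills))
    if action == "delete":
        # 原子化删除:真实工具还会同时移除关联的数据库条目
        shutil.rmtree(root / name)
        return f"deleted: {name}"
    raise ValueError(f"unsupported action: {action}")
```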

---

## 📋 常见问题与依赖 (Troubleshooting)

- **Agent 无法识别文件?**: 请确保已安装并启用了 Files Filter 插件,否则原始文件会被 RAG 干扰。
- **看不到状态更新或 TODO 进度条?**: 状态气泡会覆盖处理/工具阶段;而 TODO 进度条仅在 Agent 使用 `update_todo` 工具(通常是复杂任务)时出现。
- **依赖安装**: 本管道会自动管理 `github-copilot-sdk` (Python 包) 并优先直接使用内置的二进制 CLI,无需手动干预。

---

## 更新日志 (Changelog)

完整历史记录请见 GitHub: [OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions)
完整历史请查看 GitHub 项目主页:[OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions)

@@ -15,7 +15,7 @@ Pipes allow you to:

## Available Pipe Plugins

- [GitHub Copilot SDK](github-copilot-sdk.md) (v0.9.0) - Official GitHub Copilot SDK integration. Features **Workspace Isolation**, **Zero-config OpenWebUI Tool Bridge**, **BYOK** support, and **dynamic MCP discovery**. **NEW in v0.9.0: OpenWebUI Skills Bridge**, reinforced status bar stability, and persistent SDK config management. [View Deep Dive](github-copilot-sdk-deep-dive.md) | [**View Advanced Tutorial**](github-copilot-sdk-tutorial.md) | [**View Detailed Usage Guide**](github-copilot-sdk-usage-guide.md).
- [GitHub Copilot SDK](github-copilot-sdk.md) (v0.10.0) - Official GitHub Copilot SDK integration. Features **Workspace Isolation**, **Zero-config OpenWebUI Tool Bridge**, **BYOK** support, and **dynamic MCP discovery**. **NEW in v0.10.0: Native Prompt Restoration (Plan Mode & SQLite session management), Live TODO Widget integration, and SDK v0.1.30 alignment**. [View Deep Dive](github-copilot-sdk-deep-dive.md) | [**View Advanced Tutorial**](github-copilot-sdk-tutorial.md) | [**View Detailed Usage Guide**](github-copilot-sdk-usage-guide.md).
- **[Case Study: GitHub 100 Star Growth Analysis](star-prediction-example.md)** - Learn how to use the GitHub Copilot SDK Pipe with Minimax 2.1 to automatically analyze CSV data and generate project growth reports.
- **[Case Study: High-Quality Video to GIF Conversion](video-processing-example.md)** - See how the model uses system-level FFmpeg to accelerate, scale, and optimize colors for screen recordings.

@@ -15,7 +15,7 @@ Pipes 可以用于:

## 可用的 Pipe 插件

- [GitHub Copilot SDK](github-copilot-sdk.zh.md) (v0.9.0) - GitHub Copilot SDK 官方集成。具备**工作区安全隔离**、**零配置工具桥接**与**BYOK (自带 Key) 支持**。**v0.9.0 重量级更新:OpenWebUI Skills 桥接**、状态栏稳定性加固,以及持久化 SDK 配置目录管理(`COPILOTSDK_CONFIG_DIR`)。[查看深度架构解析](github-copilot-sdk-deep-dive.zh.md) | [**查看进阶实战教程**](github-copilot-sdk-tutorial.zh.md) | [**查看详细使用手册**](github-copilot-sdk-usage-guide.zh.md)。
- [GitHub Copilot SDK](github-copilot-sdk.zh.md) (v0.10.0) - GitHub Copilot SDK 官方集成。具备**工作区安全隔离**、**零配置工具桥接**与**BYOK (自带 Key) 支持**。**v0.10.0 更新:原生提示词恢复(原生计划模式与 SQLite 会话管理)、新增紧凑型 Live TODO 小组件,并对齐 SDK v0.1.30**。[查看深度架构解析](github-copilot-sdk-deep-dive.zh.md) | [**查看进阶实战教程**](github-copilot-sdk-tutorial.zh.md) | [**查看详细使用手册**](github-copilot-sdk-usage-guide.zh.md)。
- **[实战案例:GitHub 100 Star 增长预测](star-prediction-example.zh.md)** - 展示如何使用 GitHub Copilot SDK Pipe 结合 Minimax 2.1 模型,自动编写脚本分析 CSV 数据并生成详细的项目增长报告。
- **[实战案例:视频高质量 GIF 转换与加速](video-processing-example.zh.md)** - 演示模型如何通过底层 FFmpeg 工具对录屏进行加速、缩放及双阶段色彩优化处理。

@@ -4,4 +4,5 @@ OpenWebUI native Tool plugins that can be used across models.

## Available Tool Plugins

- [OpenWebUI Skills Manager Tool](openwebui-skills-manager-tool.md) (v0.2.0) - Simple native skill management (`list/show/install/create/update/delete`).
- [OpenWebUI Skills Manager Tool](openwebui-skills-manager-tool.md) (v0.2.1) - Simple native skill management (`list/show/install/create/update/delete`).
- [Smart Mind Map Tool](smart-mind-map-tool.md) (v1.0.0) - Intelligently analyzes text content and proactively generates interactive mind maps to help users structure and visualize knowledge.

@@ -4,4 +4,5 @@

## 可用 Tool 插件

- [OpenWebUI Skills 管理工具](openwebui-skills-manager-tool.zh.md) (v0.2.0) - 简化技能管理(`list/show/install/create/update/delete`)。
- [OpenWebUI Skills 管理工具](openwebui-skills-manager-tool.zh.md) (v0.2.1) - 简化技能管理(`list/show/install/create/update/delete`)。
- [智能思维导图工具 (Smart Mind Map Tool)](smart-mind-map-tool.zh.md) (v1.0.0) - 智能分析文本内容并主动生成交互式思维导图,帮助用户结构化与可视化知识。

@@ -1,9 +1,14 @@
# OpenWebUI Skills Manager Tool

**Author:** [Fu-Jie](https://github.com/Fu-Jie/openwebui-extensions) | **Version:** 0.2.0 | **Project:** [OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions)
**Author:** [Fu-Jie](https://github.com/Fu-Jie/openwebui-extensions) | **Version:** 0.2.1 | **Project:** [OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions)

A standalone OpenWebUI Tool plugin for managing native Workspace Skills across models.

## What's New

- Added GitHub skills-directory auto-discovery for `install_skill` (e.g., `.../tree/main/skills`) to install all child skills in one request.
- Fixed language detection with robust frontend-first fallback (`__event_call__` + timeout), request header fallback, and profile fallback.
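The skills-directory auto-discovery above hinges on rewriting a GitHub tree URL into an archive download. A minimal sketch of that rewrite, under the assumption that a codeload tarball is fetched and each child folder of the subdirectory becomes a skill (the actual plugin logic may differ):

```python
import re

def github_tree_to_archive(url: str) -> tuple[str, str]:
    """Sketch: map .../tree/<ref>/<subdir> to a codeload tarball URL plus the
    subdirectory whose child folders would each be installed as a skill."""
    m = re.match(r"https://github\.com/([^/]+)/([^/]+)/tree/([^/]+)/(.+)", url)
    if not m:
        raise ValueError("not a GitHub tree URL")
    owner, repo, ref, subdir = m.groups()
    archive = f"https://codeload.github.com/{owner}/{repo}/tar.gz/refs/heads/{ref}"
    return archive, subdir
```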

## Key Features

- Native skill management

@@ -22,6 +27,10 @@ A standalone OpenWebUI Tool plugin for managing native Workspace Skills across m

## Installation

1. Open OpenWebUI → Workspace → Tools
2. Create Tool and paste:
   - `plugins/tools/openwebui-skills-manager/openwebui_skills_manager.py`
2. Install **OpenWebUI Skills Manager Tool** from the official marketplace
3. Save and enable for your chat/model

### Manual Installation (Alternative)

- Create Tool and paste:
  - `plugins/tools/openwebui-skills-manager/openwebui_skills_manager.py`

@@ -1,9 +1,14 @@
# OpenWebUI Skills 管理工具

**Author:** [Fu-Jie](https://github.com/Fu-Jie/openwebui-extensions) | **Version:** 0.2.0 | **Project:** [OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions)
**Author:** [Fu-Jie](https://github.com/Fu-Jie/openwebui-extensions) | **Version:** 0.2.1 | **Project:** [OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions)

一个可跨模型使用的 OpenWebUI 原生 Tool 插件,用于管理 Workspace Skills。

## 最新更新

- `install_skill` 新增 GitHub 技能目录自动发现(例如 `.../tree/main/skills`),可一键安装目录下所有子技能。
- 修复语言获取逻辑:前端优先(`__event_call__` + 超时保护),并回退到请求头与用户资料。

## 核心特性

- 原生技能管理

@@ -22,6 +27,10 @@

## 安装方式

1. 打开 OpenWebUI → Workspace → Tools
2. 新建 Tool 并粘贴:
   - `plugins/tools/openwebui-skills-manager/openwebui_skills_manager.py`
2. 在官方市场安装 **OpenWebUI Skills 管理工具**
3. 保存并在模型/聊天中启用

### 手动安装(备选)

- 新建 Tool 并粘贴:
  - `plugins/tools/openwebui-skills-manager/openwebui_skills_manager.py`

62
docs/plugins/tools/smart-mind-map-tool.md
Normal file
@@ -0,0 +1,62 @@
# Smart Mind Map Tool - Knowledge Visualization & Structuring

Smart Mind Map Tool is the tool version of the popular Smart Mind Map plugin for OpenWebUI. It allows the model to proactively generate interactive mind maps during conversations by intelligently analyzing context and structuring knowledge into visual hierarchies.

> ℹ️ **Note**: Prefer the manual trigger button instead? Check out the [Smart Mind Map Action Version](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a) here.

**Author:** [Fu-Jie](https://github.com/Fu-Jie) | **Version:** 1.0.0 | **Project:** [OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions) | **License:** MIT

---

## 🚀 Why is there a Tool version?

1. **Powered by OpenWebUI 0.8.0 Rich UI**: Previous versions of OpenWebUI did not support embedding custom HTML/iframes directly into the chat stream. Starting with 0.8.0, the platform introduced full Rich UI rendering support for **both Actions and Tools**, unleashing interactive frontend possibilities.
2. **AI Autonomous Invocation (vs. Action)**: While an **Action** is passive and requires a manual button click from the user, the **Tool** version gives the model **autonomy**. The AI can analyze the conversational context and decide on its own exactly when generating a mind map would be most helpful, offering a true "smart assistant" experience.

It is perfect for:

- Summarizing complex discussions.
- Planning projects or outlining articles.
- Explaining hierarchical concepts.

## ✨ Key Features

- ✅ **Proactive Generation**: The AI triggers the tool automatically when it senses a need for structured visualization.
- ✅ **Full Context Awareness**: Supports aggregation of the entire conversation history to generate comprehensive knowledge maps.
- ✅ **Native Multi-language UI (i18n)**: Automatically detects and adapts to your browser/system language (en-US, zh-CN, ja-JP, etc.).
- ✅ **Premium UI/UX**: Matches the Action version with a compact toolbar, glassmorphism aesthetics, and professional borders.
- ✅ **Interactive Controls**: Zoom (In/Out/Reset), Level-based expansion (Default to Level 3), and Fullscreen mode.
- ✅ **High-Quality Export**: Export your mind maps as print-ready PNG images.

## 🛠️ Installation & Setup

1. **Install**: Upload `smart_mind_map_tool.py` to your OpenWebUI Admin Settings -> Plugins -> Tools.
2. **Enable Native Tool Calling**: Navigate to `Admin Settings -> Models` or your workspace settings, and ensure that **Native Tool Calling** is enabled for your selected model. This is required for the AI to reliably and actively invoke the tool automatically.
3. **Assign**: Toggle the tool "ON" for your desired models in the workspace or model settings.
4. **Configure**:
   - `MESSAGE_COUNT`: Set to `12` (default) to use the 12 most recent messages, or `0` for the entire conversation history.
   - `MODEL_ID`: Specify a preferred model for analysis (defaults to the current chat model).

## ⚙️ Configuration (Valves)

| Parameter | Default | Description |
| :--- | :--- | :--- |
| `MODEL_ID` | (Empty) | The model used for text analysis. If empty, uses the current chat model. |
| `MESSAGE_COUNT` | `12` | Number of messages to aggregate. `0` = All messages. |
| `MIN_TEXT_LENGTH` | `100` | Minimum character count required to trigger a mind map. |
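These valves follow OpenWebUI's standard pydantic `Valves` pattern; a minimal sketch of how they might be declared (the field names match the table above, while the surrounding `Tools` class is illustrative, not the plugin's actual source):

```python
from pydantic import BaseModel, Field

class Tools:
    class Valves(BaseModel):
        # Model used for text analysis; empty string = current chat model
        MODEL_ID: str = Field(default="", description="Analysis model ID")
        # Number of recent messages to aggregate; 0 = entire history
        MESSAGE_COUNT: int = Field(default=12, description="Messages to aggregate")
        # Minimum characters before a mind map is generated
        MIN_TEXT_LENGTH: int = Field(default=100, description="Trigger threshold")

    def __init__(self):
        self.valves = self.Valves()
```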

## ❓ FAQ & Troubleshooting

- **Language mismatch?**: The tool uses a 4-level detection (Frontend Script > Browser Header > User Profile > Default). Ensure your browser language is set correctly.
- **Too tiny or too large?**: We've optimized the height to `500px` for inline chat display with a responsive "Fit to Screen" logic.
- **Exporting**: Click "⛶" for fullscreen if you want a wider view before exporting to PNG.

---

## ⭐ Support

If this tool helps you visualize ideas better, please give us a star on [GitHub](https://github.com/Fu-Jie/openwebui-extensions).

## ⚖️ License

MIT License. Developed with ❤️ by Fu-Jie.
62
docs/plugins/tools/smart-mind-map-tool.zh.md
Normal file
@@ -0,0 +1,62 @@
# 思维导图工具 - 知识可视化与结构化利器

思维导图工具(Smart Mind Map Tool)是广受好评的“思维导图”插件的工具(Tool)版本。它赋予了模型主动生成交互式思维导图的能力,通过智能分析上下文,将碎片化知识转化为层级分明的视觉架构。

> ℹ️ **说明**:如果您更倾向于手动点击按钮触发生成,可以获取 [思维导图 Action(动作)版本](https://openwebui.com/posts/turn_any_text_into_beautiful_mind_maps_3094c59a)。

**作者:** [Fu-Jie](https://github.com/Fu-Jie) | **版本:** 1.0.0 | **项目:** [OpenWebUI Extensions](https://github.com/Fu-Jie/openwebui-extensions) | **许可证:** MIT

---

## 🚀 为什么会有工具(Tool)版本?

1. **得益于 OpenWebUI 0.8.0 的 Rich UI 特性**:在以前的版本中,不支持直接将自定义的 HTML/iframe 嵌入到对话流中。而从 0.8.0 开始,平台不仅支持了这种顺滑的前端组件直出(Rich UI),而且同时对 **Action** 和 **Tool** 开放了该能力。
2. **AI 自主调用(区别于 Action)**:**Action** 是被动的,需要用户在输入框或消息旁手动点击触发;而 **Tool** 赋予了模型**自主权**。AI 可以根据对话上下文,自行判断在什么时候为您生成导图最有帮助,实现真正的“智能助理”体验。

它非常适合以下场景:

- 总结复杂的对话内容。
- 规划项目、整理文章大纲。
- 解释具有层级结构的抽象概念。

## ✨ 核心特性

- ✅ **主动触发生成**:AI 在感知到需要视觉化展示时会自动调用工具生成导图。
- ✅ **全量上下文感知**:支持聚合整个会话历史(MESSAGE_COUNT 为 0),生成最完整的知识地图。
- ✅ **原生多语言 UI (i18n)**:自动检测并适配浏览器/系统语言(简体中文、繁体中文、英文、日文、韩文等)。
- ✅ **统一的高级视觉**:完全复刻 Action 版本的极简工具栏、玻璃拟态审美以及专业边框阴影。
- ✅ **深度交互控制**:支持缩放(放大/缩小/重置)、层级调节(默认为 3 级展开)以及全屏模式。
- ✅ **高品质导出**:支持将导图导出为超高清 PNG 图片。

## 🛠️ 安装与设置

1. **安装**:在 OpenWebUI 管理员设置 -> 插件 -> 工具中上传 `smart_mind_map_tool.py`。
2. **启用原生工具调用机制**:在“管理员设置 -> 模型”或配置里,确保目标模型**启用了原生工具调用(Native Tool Calling)**。只有开启这个能力,AI 才能自主并稳定地触发 Tool 功能。
3. **分配工具**:在工作区或聊天界面处为目标模型选中并挂载本工具。
4. **配置**:
   - `MESSAGE_COUNT`:设置为 `12`(默认)以使用最近的 12 条对话记录,或设置为 `0` 聚合全部历史。
   - `MODEL_ID`:指定分析导图时偏好的模型(留空则默认使用当前模型)。

## ⚙️ 配置参数 (Valves)

| 参数 | 默认值 | 描述 |
| :--- | :--- | :--- |
| `MODEL_ID` | (留空) | 用于文本分析的模型 ID。留空则随当前聊天模型。 |
| `MESSAGE_COUNT` | `12` | 聚合消息的数量。`0` 表示全量消息,`12` 表示截取最近的 12 条。 |
| `MIN_TEXT_LENGTH` | `100` | 触发导图分析所需的最小字符长度。 |

## ❓ 常见问题

- **语言显示不正确?**:工具采用 4 级探测机制(前端脚本 > 浏览器头 > 用户资料 > 默认)。请检查浏览器语言设置。
- **生成的导图太小或太大?**:我们针对对话流内联显示优化了 `500px` 的固定高度,并配有自适应缩放逻辑。
- **导出图片**:建议先点击“⛶”进入全屏,获得最佳构图后再点击导出。

---

## ⭐ 支持

如果这个工具帮您理清了思路,欢迎在 [GitHub](https://github.com/Fu-Jie/openwebui-extensions) 给我们一个 Star。

## ⚖️ 许可证

MIT License. Designed with ❤️ by Fu-Jie.
51
original_system_prompt.md
Normal file
@@ -0,0 +1,51 @@
You are a helpful assistant.

[Session Context]
- **Your Isolated Workspace**: `/app/backend/data/copilot_workspace/user_123/chat_456`
- **Active User ID**: `user_123`
- **Active Chat ID**: `chat_456`
- **Skills Directory**: `/app/backend/data/skills/shared/` — contains user-installed skills.
- **Config Directory**: `/app/backend/data/.copilot` — system configuration (Restricted).
- **CLI Tools Path**: `/app/backend/data/.copilot_tools/` — Global tools installed via npm or pip will automatically go here and be in your $PATH. Python tools are strictly isolated in a venv here.
**CRITICAL INSTRUCTION**: You MUST use the above workspace for ALL file operations.
- DO NOT create files in `/tmp` or any other system directories.
- Always interpret 'current directory' as your Isolated Workspace.

[Available Native System Tools]
The host environment is rich. Based on the official OpenWebUI Docker deployment baseline (backend image), the following CLI tools are expected to be preinstalled and globally available in $PATH:
- **Network/Data**: `curl`, `jq`, `netcat-openbsd`
- **Media/Doc**: `pandoc` (format conversion), `ffmpeg` (audio/video)
- **Build/System**: `git`, `gcc`, `make`, `build-essential`, `zstd`, `bash`
- **Python/Runtime**: `python3`, `pip3`, `uv`
- **Verification Rule**: Before installing any CLI/tool dependency, first check availability with `which <tool>` or a lightweight version probe (e.g. `<tool> --version`).
- **Python Libs**: The active virtual environment inherits `--system-site-packages`. Advanced libraries like `pandas`, `numpy`, `pillow`, `opencv-python-headless`, `pypdf`, `langchain`, `playwright`, `httpx`, and `beautifulsoup4` are ALREADY installed. Try importing them before attempting to install.


[Mode Context: Plan Mode]
You are currently operating in **Plan Mode**.
DEFINITION: Plan mode is a collaborative phase to outline multi-step plans or conduct research BEFORE any code is modified.

<workflow>
1. Clarification: If requirements/goals are ambiguous, ask questions.
2. Analysis: Analyze the codebase to understand constraints. You MAY use shell commands (e.g., `ls`, `grep`, `find`, `cat`) and other read-only tools.
3. Formulation: Generate your structured plan OR research findings.
4. Approval: Present the detailed plan directly to the user for approval via chat.
</workflow>

<key_principles>
- ZERO CODE MODIFICATION: You must NOT execute file edits, write operations, or destructive system changes. Your permissions are locked to READ/RESEARCH ONLY, with the sole exception of the progress-tracking file `plan.md`.
- SHELL USAGE: Shell execution is ENABLED for research purposes. Any attempts to modify the filesystem via shell (e.g., `sed -i`, `rm`) will be strictly blocked, except for appending to `plan.md`.
- PURE RESEARCH SUPPORT: If the user requests a pure research report, output your conclusions directly matching the plan style.
- PERSISTENCE: You MUST save your proposed plan to `/app/backend/data/.copilot/session-state/chat_456/plan.md` to sync with the UI. The UI automatically reads this file to update the plan view.
</key_principles>

<plan_format>
When presenting your findings or plan in the chat, structure it clearly:
## Plan / Report: {Title}
**TL;DR**: {Summary}
**Detailed Tasks / Steps**: {List step-by-step}
**Affected Files**:
- `path/to/file`
**Constraint/Status**: {Any constraints}
</plan_format>
Acknowledge your role as a planner and format your next response using the plan style above.
@@ -1,359 +0,0 @@
#!/usr/bin/env python3
"""
======================================================================
Staged README Synchronizer to OpenWebUI Community
暂存 README 文件同步到 OpenWebUI 社区工具
======================================================================

PURPOSE / 用途:
--------------
This script synchronizes staged README.md/README_CN.md files to their
corresponding OpenWebUI Community posts automatically. It's designed for
batch updating documentation content without modifying plugin versions
or media attachments.

本脚本自动将暂存的 README.md/README_CN.md 文件同步到对应的 OpenWebUI
社区帖子。专为批量更新文档内容设计,不修改插件版本或媒体附件。

USAGE / 使用方法:
----------------
1. Set up environment:
   配置环境:

   Create a .env file in the repository root with:
   在仓库根目录创建 .env 文件,包含:

   OPENWEBUI_API_KEY=your_api_key_here

2. Stage README files to sync:
   暂存需要同步的 README 文件:

   git add plugins/actions/my_plugin/README.md
   git add plugins/actions/my_plugin/README_CN.md

3. Run the script:
   运行脚本:

   python plugins/debug/common_tools/update_readmes_to_market.py

WORKFLOW / 工作流程:
-------------------
1. Load OPENWEBUI_API_KEY from .env file
   从 .env 文件加载 OPENWEBUI_API_KEY

2. Get list of staged README.md/README_CN.md files via git
   通过 git 获取暂存的 README.md/README_CN.md 文件列表

3. For each staged README:
   对于每个暂存的 README:

   a. Locate the corresponding plugin .py file
      定位对应的插件 .py 文件

   b. Extract openwebui_id/post_id from plugin frontmatter
      从插件前置信息中提取 openwebui_id/post_id

   c. Fetch existing post data from OpenWebUI Community API
      从 OpenWebUI 社区 API 获取现有帖子数据

   d. Update post content with new README content
      用新的 README 内容更新帖子内容

   e. Push changes via API (preserves version & media)
      通过 API 推送更改(保留版本和媒体)

REQUIREMENTS / 依赖要求:
-----------------------
- python-dotenv: For loading .env configuration
  用于加载 .env 配置文件
- Git repository: Must be run from a git-tracked workspace
  必须在 git 跟踪的工作区中运行

KEY FEATURES / 关键特性:
-----------------------
✅ Only updates content field (仅更新内容字段)
✅ Skips files without openwebui_id (跳过没有 openwebui_id 的文件)
✅ Automatically matches CN/EN plugin files (自动匹配中英文插件文件)
✅ Supports staged plugin source code updates (支持暂存插件源码更新)
✅ Safe: Won't modify version or media fields (安全:不会修改版本或媒体字段)

NOTES / 注意事项:
---------------
- This is a DEBUG/DEVELOPMENT tool, not for production workflows
  这是一个调试/开发工具,不用于生产工作流
- Always verify changes in OpenWebUI Community after sync
  同步后务必在 OpenWebUI 社区中验证更改
- Requires valid API key with update permissions
  需要具有更新权限的有效 API 密钥

AUTHOR / 作者:
-------------
Fu-Jie
GitHub: https://github.com/Fu-Jie/openwebui-extensions

======================================================================
"""

from __future__ import annotations

import importlib.util
import os
import re
import sys
import subprocess
from pathlib import Path
from typing import Dict, Optional, List


def _load_dotenv(repo_root: Path) -> None:
    try:
        from dotenv import load_dotenv  # type: ignore
    except Exception as exc:  # pragma: no cover
        print("Missing dependency: python-dotenv. Please install it and retry.")
        raise SystemExit(1) from exc

    env_path = repo_root / ".env"
    load_dotenv(env_path)


def _get_repo_root() -> Path:
    return Path(__file__).resolve().parents[3]


def _get_staged_readmes(repo_root: Path) -> List[Path]:
    try:
        output = subprocess.check_output(
            [
                "git",
                "-C",
                str(repo_root),
                "diff",
                "--cached",
                "--name-only",
                "--",
                "*.md",
            ],
            text=True,
        )
    except subprocess.CalledProcessError as exc:
        print(f"Failed to read staged files: {exc}")
        return []

    paths = []
    for line in output.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.endswith("README.md") or line.endswith("README_CN.md"):
            paths.append(repo_root / line)
    return paths


def _get_staged_plugin_files(repo_root: Path) -> List[Path]:
    try:
        output = subprocess.check_output(
            [
                "git",
                "-C",
                str(repo_root),
                "diff",
                "--cached",
                "--name-only",
                "--",
                "*.py",
            ],
            text=True,
        )
    except subprocess.CalledProcessError as exc:
        print(f"Failed to read staged files: {exc}")
        return []

    paths = []
    for line in output.splitlines():
        line = line.strip()
        if not line:
            continue
        if "/plugins/" not in line:
            continue
        if line.endswith("__init__.py") or os.path.basename(line).startswith("test_"):
            continue
        paths.append(repo_root / line)
    return paths


def _parse_frontmatter(content: str) -> Dict[str, str]:
    match = re.search(r'^\s*"""\n(.*?)\n"""', content, re.DOTALL)
    if not match:
        match = re.search(r'"""\n(.*?)\n"""', content, re.DOTALL)
    if not match:
        return {}

    frontmatter = match.group(1)
    meta: Dict[str, str] = {}
    for line in frontmatter.split("\n"):
        if ":" in line:
            key, value = line.split(":", 1)
            meta[key.strip()] = value.strip()
    return meta


def _find_plugin_file(readme_path: Path) -> Optional[Path]:
    plugin_dir = readme_path.parent
    is_cn = readme_path.name.lower().endswith("readme_cn.md")

    py_files = [
        p
        for p in plugin_dir.glob("*.py")
        if p.name != "__init__.py" and not p.name.startswith("test_")
    ]
    if not py_files:
        return None

    cn_files = [p for p in py_files if p.stem.endswith("_cn")]
    en_files = [p for p in py_files if not p.stem.endswith("_cn")]

    candidates = cn_files + en_files if is_cn else en_files + cn_files

    # Prefer files that contain openwebui_id/post_id in frontmatter
    for candidate in candidates:
        post_id = _get_post_id(candidate)
        if post_id:
            return candidate

    return candidates[0] if candidates else None


def _get_post_id(plugin_file: Path) -> Optional[str]:
    try:
        content = plugin_file.read_text(encoding="utf-8")
    except Exception:
        return None

    meta = _parse_frontmatter(content)
    return meta.get("openwebui_id") or meta.get("post_id")


def _get_plugin_metadata(plugin_file: Path) -> Dict[str, str]:
    try:
        content = plugin_file.read_text(encoding="utf-8")
    except Exception:
        return {}
    return _parse_frontmatter(content)


def _find_readme_for_plugin(plugin_file: Path) -> Optional[str]:
    plugin_dir = plugin_file.parent
    is_cn = plugin_file.stem.endswith("_cn")
    readme_candidates = ["README_CN.md", "README.md"] if is_cn else ["README.md", "README_CN.md"]
    for name in readme_candidates:
        readme_path = plugin_dir / name
        if readme_path.exists():
            return readme_path.read_text(encoding="utf-8")
    return None


def main() -> int:
    repo_root = _get_repo_root()
    _load_dotenv(repo_root)

    api_key = os.environ.get("OPENWEBUI_API_KEY")
    if not api_key:
        print("OPENWEBUI_API_KEY is not set in environment.")
        return 1

    client_module_path = repo_root / "scripts" / "openwebui_community_client.py"
    spec = importlib.util.spec_from_file_location(
        "openwebui_community_client", client_module_path
    )
    if not spec or not spec.loader:
        print("Failed to load openwebui_community_client module.")
        return 1

    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    client = module.get_client(api_key)

    staged_plugins = _get_staged_plugin_files(repo_root)
    staged_readmes = _get_staged_readmes(repo_root)
    if not staged_plugins and not staged_readmes:
        print("No staged README or plugin files found.")
        return 0

    updated_post_ids: set[str] = set()

    for plugin_file in staged_plugins:
        if not plugin_file.exists():
            print(f"Skipped (missing): {plugin_file}")
            continue

        post_id = _get_post_id(plugin_file)
        if not post_id:
            print(f"Skipped (no openwebui_id): {plugin_file}")
            continue

        try:
            post_data = client.get_post(post_id)
            if not post_data:
                print(f"Skipped (post not found): {plugin_file}")
                continue

            source_code = plugin_file.read_text(encoding="utf-8")
            metadata = _get_plugin_metadata(plugin_file)
            readme_content = _find_readme_for_plugin(plugin_file)

            ok = client.update_plugin(
                post_id=post_id,
                source_code=source_code,
                readme_content=readme_content or metadata.get("description", ""),
                metadata=metadata,
                media_urls=None,
            )
            if ok:
                updated_post_ids.add(post_id)
                print(f"Updated plugin -> {plugin_file} (post_id: {post_id})")
        except Exception as exc:
            print(f"Failed: {plugin_file} ({exc})")

    for readme_path in staged_readmes:
        if not readme_path.exists():
            print(f"Skipped (missing): {readme_path}")
            continue

        plugin_file = _find_plugin_file(readme_path)
        if not plugin_file:
            print(f"Skipped (no plugin file): {readme_path}")
            continue

        post_id = _get_post_id(plugin_file)
        if not post_id:
            print(f"Skipped (no openwebui_id): {readme_path}")
            continue

        try:
            if post_id in updated_post_ids:
                print(f"Skipped (already updated via plugin): {readme_path}")
                continue

            post_data = client.get_post(post_id)
            if not post_data:
                print(f"Skipped (post not found): {readme_path}")
                continue

            readme_content = readme_path.read_text(encoding="utf-8")

            # Update README content only, keep other fields unchanged.
            post_data["content"] = readme_content

            ok = client.update_post(post_id, post_data)
            if ok:
                print(f"Updated README -> {readme_path} (post_id: {post_id})")
        except Exception as exc:
            print(f"Failed: {readme_path} ({exc})")

    return 0


if __name__ == "__main__":
    raise SystemExit(main())
138
plugins/debug/copilot-sdk/USAGE_CN.md
Normal file
@@ -0,0 +1,138 @@
# Copilot SDK Automated Task Script Usage Guide

This directory provides a general-purpose task runner plus two example task scripts:
- `auto_programming_task.py` (general-purpose)
- `run_mindmap_action_to_tool.sh` (example: mind map action → tool)
- `run_infographic_action_to_tool.sh` (example: infographic action → tool)

## 1. Prerequisites

- Run from the repository root (very important)
- Python 3 is available
- The Copilot SDK / CLI works in the current environment

Verify first:

python3 plugins/debug/copilot-sdk/auto_programming_task.py --help | head -40

---

## 2. Core Behavior (current default)

By default, `auto_programming_task.py` runs a **two-stage automated flow**:

1) Planning: the AI fills in context from your requirement and expands it into an executable plan.
2) Execution: the AI follows the plan, edits the code directly, and reports the results.

To disable the planning stage, pass `--no-plan-first`.

---

## 3. Copy-paste Commands (general)

### 3.1 Most common: inline task text

python3 plugins/debug/copilot-sdk/auto_programming_task.py \
  --task "Convert plugins/actions/xxx/xxx.py into a single-file Tool plugin under plugins/tools/xxx-tool/. Keep the i18n and language-fallback logic. Do not upgrade the SDK version." \
  --cwd "$PWD" \
  --model "gpt-5.3-codex" \
  --reasoning-effort "xhigh" \
  --timeout 3600 \
  --stream \
  --trace-events \
  --heartbeat-seconds 8

### 3.2 Use a task file (recommended for long tasks)

Write the task file first (e.g., task.txt), then run:

python3 plugins/debug/copilot-sdk/auto_programming_task.py \
  --task-file "./task.txt" \
  --cwd "$PWD" \
  --model "gpt-5.3-codex" \
  --reasoning-effort "xhigh" \
  --timeout 3600 \
  --stream \
  --trace-events \
  --heartbeat-seconds 8

### 3.3 Disable the planning stage (direct execution only)

python3 plugins/debug/copilot-sdk/auto_programming_task.py \
  --task "your task" \
  --cwd "$PWD" \
  --model "gpt-5-mini" \
  --reasoning-effort "medium" \
  --timeout 1800 \
  --no-plan-first

---

## 4. Copy-paste Commands (example scripts)

### 4.1 Mind Map example task

./plugins/debug/copilot-sdk/run_mindmap_action_to_tool.sh

### 4.2 Infographic example task

./plugins/debug/copilot-sdk/run_infographic_action_to_tool.sh

Note: these two scripts are fixed task templates tailored to this repository; when copying them to another repo you usually need to edit the task text.

---

## 5. How to Decide a Run Is "Complete"

All of the following should hold:

1) The process exit code is 0
2) The output contains the stage-end message (including the final summary)
3) You see `session.idle` (a `session.error` means it did not complete)
4) `git diff --name-only` shows changes within your stated constraints

Copy-paste check commands:

echo $?
git diff --name-only
git status --short

---

## 6. Flag Reference

- `--task`: task text passed inline
- `--task-file`: read the task text from a file (mutually exclusive with `--task`)
- `--cwd`: workspace directory (use `$PWD`)
- `--model`: model (e.g., `gpt-5.3-codex`, `gpt-5-mini`)
- `--reasoning-effort`: `low|medium|high|xhigh`
- `--timeout`: timeout in seconds
- `--stream`: print incremental output in real time
- `--trace-events`: print the event stream for easier debugging
- `--heartbeat-seconds`: heartbeat print interval
- `--no-plan-first`: disable the default plan-then-execute flow

---

## 7. FAQ

### Q1: Why does it say the script cannot be found?
You are most likely not in the repository root. First run:

pwd

confirm the location, then run the command again.

### Q2: The run takes a long time with no output?
Add `--trace-events --stream` and increase `--timeout` as needed.

### Q3: Changes exceed the expected scope?
State the scope constraint explicitly in the task text, for example:

"Do not modify other files; you may read the whole project as the codebase."

After completion, verify with:

git diff --name-only
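The four completion criteria in section 5 can also be encoded as a small check. The sketch below is illustrative only; the function name and the way events and changed files are collected are assumptions, not part of the runner:

```python
def is_run_complete(
    exit_code: int,
    events: list[str],
    changed_files: list[str],
    allowed_prefixes: tuple[str, ...],
) -> bool:
    """Apply the completion criteria from section 5 (hypothetical helper)."""
    if exit_code != 0:              # criterion 1: process exit code must be 0
        return False
    if "session.error" in events:   # criterion 3: an error event means incomplete
        return False
    if "session.idle" not in events:  # criterion 3: must have reached idle
        return False
    # criterion 4: every changed file must fall within the stated scope
    return all(f.startswith(allowed_prefixes) for f in changed_files)
```

Feed it the exit code, the event types seen in the output, and the output of `git diff --name-only` to get a single pass/fail answer.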
447
plugins/debug/copilot-sdk/auto_programming_task.py
Normal file
@@ -0,0 +1,447 @@
#!/usr/bin/env python3
"""
Run an autonomous programming task via Copilot SDK.

Usage:
    python plugins/debug/copilot-sdk/auto_programming_task.py \
        --task "Fix failing tests in tests/test_xxx.py" \
        --cwd /Users/fujie/app/python/oui/openwebui-extensions

Notes:
- Default model is gpt-5-mini (low-cost for repeated runs).
- This script DOES NOT pin/upgrade SDK versions.
- Copilot CLI must be available (or set COPILOT_CLI_PATH).
"""

import argparse
import asyncio
import os
import sys
import textwrap
from pathlib import Path
from typing import Optional


DEFAULT_TASK = (
    "Convert plugins/actions/smart-mind-map/smart_mind_map.py (Action plugin) "
    "into a Tool plugin implementation under plugins/tools/. "
    "Keep Copilot SDK version unchanged, follow patterns from "
    "plugins/pipes/github-copilot-sdk/, and implement a runnable MVP with "
    "i18n/status events/basic validation."
)


def _ensure_copilot_importable() -> None:
    """Try local SDK path fallback if `copilot` package is not installed."""
    try:
        import copilot  # noqa: F401

        return
    except Exception:
        pass

    candidates = []

    env_path = os.environ.get("COPILOT_SDK_PYTHON_PATH", "").strip()
    if env_path:
        candidates.append(Path(env_path))

    # Default sibling repo path: ../copilot-sdk/python
    # Current file: plugins/debug/copilot-sdk/auto_programming_task.py
    repo_root = Path(__file__).resolve().parents[3]
    candidates.append(repo_root.parent / "copilot-sdk" / "python")

    for path in candidates:
        if path.exists():
            sys.path.insert(0, str(path))
            try:
                import copilot  # noqa: F401

                return
            except Exception:
                continue

    raise RuntimeError(
        "Cannot import `copilot` package. Install copilot-sdk python package "
        "or set COPILOT_SDK_PYTHON_PATH to copilot-sdk/python directory."
    )


def _build_agent_prompt(task: str, cwd: str, extra_context: Optional[str]) -> str:
    extra = extra_context.strip() if extra_context else ""
    # Precomputed to avoid a backslash inside an f-string expression,
    # which is a syntax error before Python 3.12.
    extra_block = f"Additional context:\n{extra}" if extra else ""
    return textwrap.dedent(
        f"""
        You are an autonomous coding agent running in workspace: {cwd}

        Primary task:
        {task}

        Requirements:
        1. Inspect relevant files and implement changes directly in the workspace.
        2. Keep changes minimal and focused on the task.
        3. If tests/build commands exist, run targeted validation for changed scope.
        4. If blocked, explain the blocker and propose concrete next steps.
        5. At the end, provide a concise summary of:
           - files changed
           - what was implemented
           - validation results

        {extra_block}
        """
    ).strip()


def _build_planning_prompt(task: str, cwd: str, extra_context: Optional[str]) -> str:
    extra = extra_context.strip() if extra_context else ""
    extra_block = f"Additional context:\n{extra}" if extra else ""
    return textwrap.dedent(
        f"""
        You are a senior autonomous coding planner running in workspace: {cwd}

        User requirement (may be underspecified):
        {task}

        Goal:
        Expand the requirement into an actionable implementation plan that can be executed end-to-end without extra clarification whenever possible.

        Output format (strict):
        1) Expanded Objective (clear, concrete, scoped)
        2) Assumptions (only necessary assumptions)
        3) Step-by-step Plan (ordered, verifiable)
        4) Validation Plan (how to verify changes)
        5) Execution Brief (concise instruction for implementation agent)

        Constraints:
        - Keep scope minimal and aligned with the user requirement.
        - Do not invent unrelated features.
        - Prefer practical MVP completion.

        {extra_block}
        """
    ).strip()


def _build_execution_prompt(
    task: str, cwd: str, extra_context: Optional[str], plan_text: str
) -> str:
    extra = extra_context.strip() if extra_context else ""
    extra_block = f"Additional context:\n{extra}" if extra else ""
    return textwrap.dedent(
        f"""
        You are an autonomous coding agent running in workspace: {cwd}

        User requirement:
        {task}

        Planner output (must follow):
        {plan_text}

        Execution requirements:
        1. Execute the plan directly; do not stop after analysis.
        2. If the original requirement is underspecified, use the planner assumptions and continue.
        3. Keep changes minimal, focused, and runnable.
        4. Run targeted validation for changed scope where possible.
        5. If blocked by missing prerequisites, report blocker and the smallest next action.
        6. Finish with concise summary:
           - files changed
           - implemented behavior
           - validation results

        {extra_block}
        """
    ).strip()


async def _run_single_session(
    client,
    args: argparse.Namespace,
    prompt: str,
    stage_name: str,
    stream_output: bool,
) -> tuple[int, str]:
    from copilot.types import PermissionHandler

    def _auto_user_input_handler(request, _invocation):
        question = ""
        if isinstance(request, dict):
            question = str(request.get("question", "")).lower()
            choices = request.get("choices") or []
            if choices and isinstance(choices, list):
                preferred = args.auto_user_answer.strip()
                for choice in choices:
                    c = str(choice)
                    if preferred and preferred.lower() == c.lower():
                        return {"answer": c, "wasFreeform": False}
                return {"answer": str(choices[0]), "wasFreeform": False}

        preferred = args.auto_user_answer.strip() or "continue"
        if "confirm" in question or "proceed" in question:
            preferred = "yes"
        return {"answer": preferred, "wasFreeform": True}

    session_config = {
        "model": args.model,
        "reasoning_effort": args.reasoning_effort,
        "streaming": True,
        "infinite_sessions": {
            "enabled": True,
        },
        "on_permission_request": PermissionHandler.approve_all,
        "on_user_input_request": _auto_user_input_handler,
    }

    session = await client.create_session(session_config)

    done = asyncio.Event()
    full_messages = []
    has_error = False

    def on_event(event):
        nonlocal has_error
        etype = getattr(event, "type", "unknown")
        if hasattr(etype, "value"):
            etype = etype.value

        if args.trace_events:
            print(f"\n[{stage_name}][EVENT] {etype}", flush=True)

        if etype == "assistant.message_delta" and stream_output:
            delta = getattr(event.data, "delta_content", "") or ""
            if delta:
                print(delta, end="", flush=True)
        elif etype == "assistant.message":
            content = getattr(event.data, "content", "") or ""
            if content:
                full_messages.append(content)
        elif etype == "session.error":
            has_error = True
            done.set()
        elif etype == "session.idle":
            done.set()

    unsubscribe = session.on(on_event)
    heartbeat_task = None

    async def _heartbeat():
        while not done.is_set():
            await asyncio.sleep(max(3, int(args.heartbeat_seconds)))
            if not done.is_set():
                print(
                    f"[{stage_name}][heartbeat] waiting for assistant events...",
                    flush=True,
                )

    try:
        heartbeat_task = asyncio.create_task(_heartbeat())
        await session.send({"prompt": prompt, "mode": "immediate"})
        await asyncio.wait_for(done.wait(), timeout=args.timeout)

        if stream_output:
            print("\n")

        final_message = full_messages[-1] if full_messages else ""
        if final_message:
            print(f"\n===== {stage_name} FINAL MESSAGE =====\n")
            print(final_message)

        if has_error:
            return 1, final_message
        return 0, final_message

    except asyncio.TimeoutError:
        print(f"\n❌ [{stage_name}] Timeout after {args.timeout}s")
        return 124, ""
    except Exception as exc:
        print(f"\n❌ [{stage_name}] Run failed: {exc}")
        return 1, ""
    finally:
        if heartbeat_task:
            heartbeat_task.cancel()
        try:
            unsubscribe()
        except Exception:
            pass
        try:
            await session.destroy()
        except Exception:
            pass


async def run_task(args: argparse.Namespace) -> int:
    _ensure_copilot_importable()

    from copilot import CopilotClient

    task_text = (args.task or "").strip()
    if args.task_file:
        task_text = Path(args.task_file).read_text(encoding="utf-8").strip()

    if not task_text:
        task_text = DEFAULT_TASK

    direct_prompt = _build_agent_prompt(task_text, args.cwd, args.extra_context)

    client_options = {
        "cwd": args.cwd,
        "log_level": args.log_level,
    }

    if args.cli_path:
        client_options["cli_path"] = args.cli_path

    if args.github_token:
        client_options["github_token"] = args.github_token

    print("🚀 Starting Copilot SDK task runner")
    print(f"   cwd: {args.cwd}")
    print(f"   model: {args.model}")
    print(f"   reasoning_effort: {args.reasoning_effort}")
    print(f"   plan_first: {args.plan_first}")

    client = CopilotClient(client_options)
    await client.start()

    try:
        if args.plan_first:
            planning_prompt = _build_planning_prompt(
                task_text, args.cwd, args.extra_context
            )
            print("\n🧭 Stage 1/2: Planning and requirement expansion")
            plan_code, plan_text = await _run_single_session(
                client=client,
                args=args,
                prompt=planning_prompt,
                stage_name="PLANNING",
                stream_output=False,
            )
            if plan_code != 0:
                return plan_code

            execution_prompt = _build_execution_prompt(
                task=task_text,
                cwd=args.cwd,
                extra_context=args.extra_context,
                plan_text=plan_text or "(No planner output provided)",
            )
            print("\n⚙️ Stage 2/2: Execute plan autonomously")
            exec_code, _ = await _run_single_session(
                client=client,
                args=args,
                prompt=execution_prompt,
                stage_name="EXECUTION",
                stream_output=args.stream,
            )
            return exec_code

        print("\n⚙️ Direct mode: Execute task without planning stage")
        exec_code, _ = await _run_single_session(
            client=client,
            args=args,
            prompt=direct_prompt,
            stage_name="EXECUTION",
            stream_output=args.stream,
        )
        return exec_code
    finally:
        try:
            await client.stop()
        except Exception:
            pass


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(
        description="Run one autonomous programming task with Copilot SDK"
    )
    parser.add_argument(
        "--task",
        default="",
        help="Task description text (if empty, uses built-in default task)",
    )
    parser.add_argument("--task-file", default="", help="Path to a task text file")
    parser.add_argument("--cwd", default=os.getcwd(), help="Workspace directory")
    parser.add_argument(
        "--model",
        default="gpt-5-mini",
        help="Model id for Copilot session (default: gpt-5-mini)",
    )
    parser.add_argument(
        "--reasoning-effort",
        default="medium",
        choices=["low", "medium", "high", "xhigh"],
        help="Reasoning effort",
    )
    parser.add_argument("--timeout", type=int, default=1800, help="Timeout seconds")
    parser.add_argument(
        "--log-level",
        default="info",
        choices=["trace", "debug", "info", "warn", "error"],
        help="Copilot client log level",
    )
    parser.add_argument(
        "--github-token",
        default=os.environ.get("GH_TOKEN", ""),
        help="Optional GitHub token; defaults to GH_TOKEN",
    )
    parser.add_argument(
        "--cli-path",
        default=os.environ.get("COPILOT_CLI_PATH", ""),
        help="Optional Copilot CLI path",
    )
    parser.add_argument(
        "--extra-context",
        default="",
        help="Optional extra context appended to the task prompt",
    )
    parser.add_argument(
        "--stream",
        action="store_true",
        help="Print assistant delta stream in real-time",
    )
    parser.add_argument(
        "--trace-events",
        action="store_true",
        help="Print each SDK event type for debugging",
    )
    parser.add_argument(
        "--auto-user-answer",
        default="continue",
        help="Default answer for on_user_input_request",
    )
    parser.add_argument(
        "--heartbeat-seconds",
        type=int,
        default=12,
        help="Heartbeat interval while waiting for events",
    )
    parser.add_argument(
        "--plan-first",
        action="store_true",
        help="Run planning stage before execution (default behavior)",
    )
    parser.add_argument(
        "--no-plan-first",
        action="store_true",
        help="Disable planning stage and run direct execution",
    )
    return parser


def main() -> int:
    parser = build_parser()
    args = parser.parse_args()

    if args.task_file and (args.task or "").strip():
        parser.error("Use either --task or --task-file, not both")

    # Planning is on by default; --no-plan-first turns it off.
    args.plan_first = not args.no_plan_first

    return asyncio.run(run_task(args))


if __name__ == "__main__":
    raise SystemExit(main())
142
plugins/debug/copilot-sdk/check_default_agents.py
Normal file
@@ -0,0 +1,142 @@
import asyncio
import json
import sys
from typing import Any, Callable

from copilot import CopilotClient

try:
    from copilot import PermissionHandler
except ImportError:
    PermissionHandler = None


def _to_dict(obj: Any) -> dict:
    if obj is None:
        return {}
    to_dict = getattr(obj, "to_dict", None)
    if callable(to_dict):
        return to_dict()
    if isinstance(obj, dict):
        return obj
    result = {}
    for key in ("name", "display_name", "description"):
        if hasattr(obj, key):
            result[key] = getattr(obj, key)
    return result


def _extract_agents(result: Any) -> list[dict]:
    if result is None:
        return []

    if isinstance(result, dict):
        raw_agents = result.get("agents")
    else:
        raw_agents = getattr(result, "agents", None)

    if not raw_agents:
        return []

    normalized = []
    for item in raw_agents:
        data = _to_dict(item)
        normalized.append(
            {
                "name": str(data.get("name", "") or "").strip(),
                "display_name": str(data.get("display_name", "") or "").strip(),
                "description": str(data.get("description", "") or "").strip(),
            }
        )
    return normalized


def _extract_current_agent(result: Any) -> dict | None:
    if result is None:
        return None

    if isinstance(result, dict):
        agent = result.get("agent")
    else:
        agent = getattr(result, "agent", None)

    if not agent:
        return None

    data = _to_dict(agent)
    return {
        "name": str(data.get("name", "") or "").strip(),
        "display_name": str(data.get("display_name", "") or "").strip(),
        "description": str(data.get("description", "") or "").strip(),
    }


async def main() -> int:
    client = CopilotClient()
    started = False
    session = None

    try:
        await client.start()
        started = True

        session_config: dict[str, Any] = {}
        permission_handler: Callable | None = getattr(
            PermissionHandler, "approve_all", None
        )
        if callable(permission_handler):
            session_config["on_permission_request"] = permission_handler

        session = await client.create_session(session_config)

        list_result = await session.rpc.agent.list()
        current_result = await session.rpc.agent.get_current()

        agents = _extract_agents(list_result)
        current = _extract_current_agent(current_result)

        payload = {
            "agents_count": len(agents),
            "agents": agents,
            "current_agent": current,
            "summary": (
                "No custom agents detected in current runtime."
                if not agents
                else "Custom agents detected."
            ),
        }

        print(json.dumps(payload, ensure_ascii=False, indent=2))

        if not agents:
            print("\n[INFO] No custom agents are injected in the current runtime (usually empty by default).")
        elif not current:
            print("\n[INFO] Custom agents detected, but no agent is currently selected.")
        else:
            print(
                "\n[INFO] Currently selected agent: "
                f"{current.get('display_name') or current.get('name') or '(unknown)'}"
            )

        return 0

    except Exception as exc:
        print(f"[ERROR] Agent detection failed: {exc}", file=sys.stderr)
        return 1

    finally:
        if session is not None:
            try:
                await session.destroy()
            except Exception:
                pass

        if started:
            try:
                await client.stop()
            except Exception:
                pass


if __name__ == "__main__":
    raise SystemExit(asyncio.run(main()))
137
plugins/debug/copilot-sdk/run_owui_api_docs_phases.sh
Executable file
@@ -0,0 +1,137 @@
#!/usr/bin/env bash
# run_owui_api_docs_phases.sh
# One-click runner: generate OpenWebUI API documentation across 8 phases.
#
# Usage:
#   ./plugins/debug/copilot-sdk/run_owui_api_docs_phases.sh
#   ./plugins/debug/copilot-sdk/run_owui_api_docs_phases.sh --start-phase 3
#   ./plugins/debug/copilot-sdk/run_owui_api_docs_phases.sh --only-phase 1
#
# Working directory: /Users/fujie/app/python/oui/open-webui (open-webui source)
# Task files: plugins/debug/copilot-sdk/tasks/owui-api-docs/phases/

set -euo pipefail

# ── Resolve paths ────────────────────────────────────────────────────────────
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd "${SCRIPT_DIR}/../../.." && pwd)"  # openwebui-extensions root
TASKS_DIR="${SCRIPT_DIR}/tasks/owui-api-docs/phases"
TARGET_CWD="/Users/fujie/app/python/oui/open-webui"  # source repo to scan
RUNNER="${SCRIPT_DIR}/auto_programming_task.py"
PYTHON="${PYTHON:-python3}"

# ── Arguments ────────────────────────────────────────────────────────────────
START_PHASE=1
ONLY_PHASE=""

while [[ $# -gt 0 ]]; do
  case "$1" in
    --start-phase)
      START_PHASE="$2"; shift 2 ;;
    --only-phase)
      ONLY_PHASE="$2"; shift 2 ;;
    *)
      echo "Unknown argument: $1" >&2; exit 1 ;;
  esac
done

# ── Phase definitions ─────────────────────────────────────────────────────────
declare -a PHASE_FILES=(
  "01_route_index.txt"
  "02_auth_users_groups_models.txt"
  "03_chats_channels_memories_notes.txt"
  "04_files_folders_knowledge_retrieval.txt"
  "05_ollama_openai_audio_images.txt"
  "06_tools_functions_pipelines_skills_tasks.txt"
  "07_configs_prompts_evaluations_analytics_scim_utils.txt"
  "08_consolidation_index.txt"
)

declare -a PHASE_LABELS=(
  "Route Index (master table)"
  "Auth / Users / Groups / Models"
  "Chats / Channels / Memories / Notes"
  "Files / Folders / Knowledge / Retrieval"
  "Ollama / OpenAI / Audio / Images"
  "Tools / Functions / Pipelines / Skills / Tasks"
  "Configs / Prompts / Evaluations / Analytics / SCIM / Utils"
  "Consolidation — README + JSON"
)

# ── Pre-flight checks ─────────────────────────────────────────────────────────
echo "============================================================"
echo " OpenWebUI API Docs — Phase Runner"
echo "============================================================"
echo " Source (--cwd): ${TARGET_CWD}"
echo " Task files:     ${TASKS_DIR}"
echo " Runner:         ${RUNNER}"
echo ""

if [[ ! -d "${TARGET_CWD}" ]]; then
  echo "ERROR: Target source directory not found: ${TARGET_CWD}" >&2
  exit 1
fi

if [[ ! -f "${RUNNER}" ]]; then
  echo "ERROR: Runner script not found: ${RUNNER}" >&2
  exit 1
fi

# ── Run phases ────────────────────────────────────────────────────────────────
TOTAL=${#PHASE_FILES[@]}
PASSED=0
FAILED=0

for i in "${!PHASE_FILES[@]}"; do
  PHASE_NUM=$((i + 1))
  TASK_FILE="${TASKS_DIR}/${PHASE_FILES[$i]}"
  LABEL="${PHASE_LABELS[$i]}"

  # --only-phase filter
  if [[ -n "${ONLY_PHASE}" && "${PHASE_NUM}" != "${ONLY_PHASE}" ]]; then
    echo " [SKIP] Phase ${PHASE_NUM}: ${LABEL}"
    continue
  fi

  # --start-phase filter
  if [[ "${PHASE_NUM}" -lt "${START_PHASE}" ]]; then
    echo " [SKIP] Phase ${PHASE_NUM}: ${LABEL} (before start phase)"
    continue
  fi

  if [[ ! -f "${TASK_FILE}" ]]; then
    echo " [ERROR] Task file not found: ${TASK_FILE}" >&2
    FAILED=$((FAILED + 1))
    break
  fi

  echo ""
  echo "──────────────────────────────────────────────────────────"
  echo " Phase ${PHASE_NUM}/${TOTAL}: ${LABEL}"
  echo " Task file: ${PHASE_FILES[$i]}"
  echo "──────────────────────────────────────────────────────────"

  if "${PYTHON}" "${RUNNER}" \
      --task-file "${TASK_FILE}" \
      --cwd "${TARGET_CWD}" \
      --model "claude-sonnet-4.6" \
      --reasoning-effort high \
      --no-plan-first; then
    echo " ✓ Phase ${PHASE_NUM} completed successfully."
    PASSED=$((PASSED + 1))
  else
    EXIT_CODE=$?
    echo ""
    echo " ✗ Phase ${PHASE_NUM} FAILED (exit code: ${EXIT_CODE})." >&2
    echo "   Fix the issue and re-run with: --start-phase ${PHASE_NUM}" >&2
    FAILED=$((FAILED + 1))
    exit "${EXIT_CODE}"
  fi
done

# ── Summary ──────────────────────────────────────────────────────────────────
echo ""
echo "============================================================"
echo " Run complete: ${PASSED} passed, ${FAILED} failed"
echo " Output: ${TARGET_CWD}/api_docs/"
echo "============================================================"
@@ -0,0 +1,74 @@
# OpenWebUI API Documentation — Phase Run Order

## Overview

This task set reads the OpenWebUI backend source code and generates a complete
API reference in `api_docs/` inside the open-webui repository.

**Source repo:** `/Users/fujie/app/python/oui/open-webui`
**Output directory:** `/Users/fujie/app/python/oui/open-webui/api_docs/`
**Task files dir:** `plugins/debug/copilot-sdk/tasks/owui-api-docs/phases/`

---

## Phase Execution Order

Run phases sequentially. Each phase depends on the previous.

| Order | Task File | Coverage | ~Lines Read |
|-------|-----------|----------|-------------|
| 1 | `01_route_index.txt` | main.py + all 26 router files → master route table | ~15,000 |
| 2 | `02_auth_users_groups_models.txt` | auths, users, groups, models | ~4,600 |
| 3 | `03_chats_channels_memories_notes.txt` | chats, channels, memories, notes | ~5,500 |
| 4 | `04_files_folders_knowledge_retrieval.txt` | files, folders, knowledge, retrieval | ~5,200 |
| 5 | `05_ollama_openai_audio_images.txt` | ollama, openai, audio, images | ~6,900 |
| 6 | `06_tools_functions_pipelines_skills_tasks.txt` | tools, functions, pipelines, skills, tasks | ~3,200 |
| 7 | `07_configs_prompts_evaluations_analytics_scim_utils.txt` | configs, prompts, evaluations, analytics, scim, utils | ~3,400 |
| 8 | `08_consolidation_index.txt` | Consolidates all outputs → README.md + JSON | (reads generated files) |

---

## Output Files (after all phases complete)

```
open-webui/api_docs/
├── README.md            ← Master index + quick reference
├── 00_route_index.md    ← Complete route table (200+ endpoints)
├── 02_auths.md
├── 02_users.md
├── 02_groups.md
├── 02_models.md
├── 03_chats.md
├── 03_channels.md
├── 03_memories.md
├── 03_notes.md
├── 04_files.md
├── 04_folders.md
├── 04_knowledge.md
├── 04_retrieval.md
├── 05_ollama.md
├── 05_openai.md
├── 05_audio.md
├── 05_images.md
├── 06_tools.md
├── 06_functions.md
├── 06_pipelines.md
├── 06_skills.md
├── 06_tasks.md
├── 07_configs.md
├── 07_prompts.md
├── 07_evaluations.md
├── 07_analytics.md
├── 07_scim.md
├── 07_utils.md
└── openwebui_api.json   ← Machine-readable summary (all routes)
```

---

## Notes

- Each phase uses `--no-plan-first` (detailed instructions are already provided).
- Working directory for all phases: `/Users/fujie/app/python/oui/open-webui`
- The one-click runner: `run_owui_api_docs_phases.sh`
- If a phase fails, fix the issue and re-run that single phase before continuing.
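The consolidated `openwebui_api.json` is described only as a machine-readable summary of all routes; its exact schema is decided by Phase 8. A hypothetical shape for a single entry, purely for illustration, might look like:

```python
# Hypothetical entry shape for openwebui_api.json; field names are
# illustrative assumptions, not the schema Phase 8 must produce.
route_entry = {
    "module": "auths",                      # router file name without .py
    "method": "POST",                       # HTTP method from the decorator
    "path": "/api/v1/auths/signin",         # prefix + decorator path
    "handler": "signin",                    # handler function name
    "auth_required": True,                  # from get_verified_user / get_admin_user deps
    "summary": "Authenticate a user and return a session token.",
}

# The full file would then be a list of such entries plus counts.
api_summary = {"total_route_count": 1, "routes": [route_entry]}
```

Keeping the JSON flat like this makes it trivial to filter routes by module or auth level with standard tooling.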
@@ -0,0 +1,41 @@
|
||||
Phase 1 Mission:
|
||||
Scan the entire OpenWebUI backend source and produce a master route index table.
|
||||
|
||||
Source root: backend/open_webui/
|
||||
Target output directory: api_docs/
|
||||
|
||||
Constraints:
|
||||
- Read-only on ALL files EXCEPT under api_docs/ (create it if missing).
|
||||
- Do NOT generate per-endpoint detail yet — only the master table.
|
||||
- Cover every router file in backend/open_webui/routers/.
|
||||
- Also read backend/open_webui/main.py to capture route prefixes (app.include_router calls).
|
||||
|
||||
Deliverables:
|
||||
1) Create directory: api_docs/
|
||||
2) Create file: api_docs/00_route_index.md
|
||||
|
||||
Content of 00_route_index.md must contain:
|
||||
- A table with columns: Module | HTTP Method | Path | Handler Function | Auth Required | Brief Description
|
||||
- One row per route decorator found in every router file.
|
||||
- "Auth Required" = YES if the route depends on get_verified_user / get_admin_user / similar dependency, NO otherwise.
|
||||
- "Brief Description" = first sentence of the handler's docstring, or empty string if none.
|
||||
- Group rows by Module (router file name without .py).
|
||||
- At the top: a summary section listing total_route_count and module_count.
|
||||
|
||||
Process:
|
||||
1. Read main.py — extract all app.include_router() calls, note prefix and tags per router.
|
||||
2. For each router file in backend/open_webui/routers/, read it fully.
|
||||
3. Find every @router.get/@router.post/@router.put/@router.delete/@router.patch decorator.
|
||||
4. For each decorator: record path, method, function name, auth dependency, docstring.
|
||||
5. Write the combined table to api_docs/00_route_index.md.
|
||||
|
||||
Exit Criteria:
|
||||
- api_docs/00_route_index.md exists.
|
||||
- Table contains at least 100 rows (the codebase has 200+ routes).
|
||||
- No placeholder or TBD.
|
||||
- Total route count printed at the top.
|
||||
|
||||
Final output format:
|
||||
- List of files created/updated.
|
||||
- Total routes found.
|
||||
- Any router files that could not be parsed and why.
|
||||
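Steps 3 and 4 of the process above can be sketched with Python's `ast` module. This is an illustrative sketch only, not part of the phase prompt; the auth check is a crude substring heuristic over the unparsed function source, and the sample assumes routes follow the usual `@router.get("/path")` FastAPI style:

```python
import ast
from pathlib import Path

HTTP_METHODS = {"get", "post", "put", "delete", "patch"}

def extract_routes(router_file: Path):
    """Yield (method, path, handler, has_auth, docstring) per route decorator."""
    tree = ast.parse(router_file.read_text())
    for node in ast.walk(tree):
        if not isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            continue
        for dec in node.decorator_list:
            # Match @router.get("/path", ...) style decorators.
            if (
                isinstance(dec, ast.Call)
                and isinstance(dec.func, ast.Attribute)
                and dec.func.attr in HTTP_METHODS
            ):
                path = dec.args[0].value if dec.args else ""
                # Crude heuristic: auth dependency named anywhere in the function.
                src = ast.unparse(node)
                has_auth = "get_verified_user" in src or "get_admin_user" in src
                yield (dec.func.attr.upper(), path, node.name,
                       has_auth, ast.get_docstring(node) or "")
```

A real scan would also resolve the `app.include_router()` prefixes from main.py and prepend them to each path.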
@@ -0,0 +1,82 @@
Phase 2 Mission:
Generate detailed API reference documentation for authentication, users, groups, and models endpoints.

Prerequisites:
- api_docs/00_route_index.md must already exist (from Phase 1).

Source files to read (fully):
- backend/open_webui/routers/auths.py
- backend/open_webui/routers/users.py
- backend/open_webui/routers/groups.py
- backend/open_webui/routers/models.py
- backend/open_webui/models/auths.py (Pydantic models)
- backend/open_webui/models/users.py
- backend/open_webui/models/groups.py (if exists)
- backend/open_webui/models/models.py

Output files to create under api_docs/:
- 02_auths.md
- 02_users.md
- 02_groups.md
- 02_models.md

Per-endpoint format (use this EXACTLY for every endpoint in each file):

---

### {HTTP_METHOD} {full_path}

**Summary:** One sentence description.

**Auth:** Admin only | Verified user | Public

**Request**

| Location | Field | Type | Required | Description |
|----------|-------|------|----------|-------------|
| Header | Authorization | Bearer token | Yes | JWT token |
| Body | field_name | type | Yes/No | description |
| Query | param_name | type | No | description |
| Path | param_name | type | Yes | description |

*If no request body/params, write: "No additional parameters."*

**Response `200`**

```json
{
  "example_field": "example_value"
}
```

| Field | Type | Description |
|-------|------|-------------|
| field_name | type | description |

**Error Responses**

| Status | Meaning |
|--------|---------|
| 400 | Bad request / validation error |
| 401 | Not authenticated |
| 403 | Insufficient permissions |
| 404 | Resource not found |

---

Instructions:
1. Read each router file fully to understand every route.
2. Trace Pydantic model definitions from the corresponding models/ file.
3. Fill in every field from actual code — no guessing.
4. If a field is Optional with a default, mark Required = No.
5. For auth: check FastAPI dependency injection (Depends(get_verified_user) → "Verified user", Depends(get_admin_user) → "Admin only").
6. List ALL endpoints in the router — do not skip any.

Exit Criteria:
- 4 output files created.
- Every route from 00_route_index.md for these modules is covered.
- No placeholder or TBD.

Final output format:
- List of files created.
- Count of endpoints documented per file.
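The mapping rules in instructions 4 and 5 can be made mechanical. A minimal sketch (my own helper names, not anything from the codebase) of how a documenter might classify the Auth line and the Required column:

```python
from typing import Optional, Union, get_args, get_origin

def classify_auth(dependency_names):
    """Instruction 5: map FastAPI auth dependencies to the Auth line."""
    if "get_admin_user" in dependency_names:
        return "Admin only"
    if "get_verified_user" in dependency_names:
        return "Verified user"
    return "Public"

def is_required(annotation, has_default: bool) -> bool:
    """Instruction 4: an Optional field with a default is Required = No."""
    is_optional = get_origin(annotation) is Union and type(None) in get_args(annotation)
    return not (is_optional and has_default)
```

For example, `classify_auth(["get_admin_user"])` documents the route as "Admin only", and `is_required(Optional[str], has_default=True)` yields `False`.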
@@ -0,0 +1,87 @@
Phase 3 Mission:
Generate detailed API reference documentation for chat, channels, memories, and notes endpoints.

Prerequisites:
- api_docs/00_route_index.md must already exist (from Phase 1).

Source files to read (fully):
- backend/open_webui/routers/chats.py
- backend/open_webui/routers/channels.py
- backend/open_webui/routers/memories.py
- backend/open_webui/routers/notes.py
- backend/open_webui/models/chats.py (Pydantic models)
- backend/open_webui/models/channels.py
- backend/open_webui/models/memories.py
- backend/open_webui/models/notes.py (if exists)
- backend/open_webui/models/messages.py (shared message models)

Output files to create under api_docs/:
- 03_chats.md
- 03_channels.md
- 03_memories.md
- 03_notes.md

Per-endpoint format:

---

### {HTTP_METHOD} {full_path}

**Summary:** One sentence description.

**Auth:** Admin only | Verified user | Public

**Request**

| Location | Field | Type | Required | Description |
|----------|-------|------|----------|-------------|
| Body | field_name | type | Yes/No | description |

*If no parameters, write: "No additional parameters."*

**Response `200`**

```json
{
  "example_field": "example_value"
}
```

| Field | Type | Description |
|-------|------|-------------|
| field_name | type | description |

**Error Responses**

| Status | Meaning |
|--------|---------|
| 401 | Not authenticated |
| 403 | Insufficient permissions |
| 404 | Resource not found |

---

Special notes for this phase:
- chats.py is 1527 lines with ~40 routes — document ALL of them.
- channels.py is 2133 lines — document ALL routes; note WebSocket upgrade endpoints separately.
- For WebSocket endpoints: note the protocol (ws://) and describe events/message payload format.
- Pay special attention to chat history structure: messages array, history.messages dict.
- Note pagination parameters (skip, limit, page) where applicable.

Instructions:
1. Read each router file fully.
2. Trace Pydantic model definitions from the corresponding models/ file.
3. For complex response types (list of chats, paginated results), show the wrapper structure.
4. If a route modifies chat history, document the exact history object shape.
5. List ALL endpoints — do not skip paginated variants.

Exit Criteria:
- 4 output files created.
- Every route from 00_route_index.md for these modules is covered.
- WebSocket endpoints documented with payload shape.
- No placeholder or TBD.

Final output format:
- List of files created.
- Count of endpoints documented per file.
- Note any complex schemas that required deep tracing.
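The skip/limit convention mentioned in the special notes is ordinary offset pagination. A minimal sketch of the assumed semantics (the default values here are illustrative, not taken from the code):

```python
def paginate(items, skip: int = 0, limit: int = 50):
    """Offset pagination: drop the first `skip` items, return at most `limit`."""
    return items[skip: skip + limit]
```

So a second page of size 3 is `paginate(items, skip=3, limit=3)`.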
@@ -0,0 +1,94 @@
Phase 4 Mission:
Generate detailed API reference documentation for files, folders, knowledge base, and retrieval endpoints.

Prerequisites:
- api_docs/00_route_index.md must already exist (from Phase 1).

Source files to read (fully):
- backend/open_webui/routers/files.py (~911 lines)
- backend/open_webui/routers/folders.py (~351 lines)
- backend/open_webui/routers/knowledge.py (~1139 lines)
- backend/open_webui/routers/retrieval.py (~2820 lines — LARGEST FILE)
- backend/open_webui/models/files.py
- backend/open_webui/models/folders.py
- backend/open_webui/models/knowledge.py

Output files to create under api_docs/:
- 04_files.md
- 04_folders.md
- 04_knowledge.md
- 04_retrieval.md

Per-endpoint format:

---

### {HTTP_METHOD} {full_path}

**Summary:** One sentence description.

**Auth:** Admin only | Verified user | Public

**Request**

| Location | Field | Type | Required | Description |
|----------|-------|------|----------|-------------|
| Body | field_name | type | Yes/No | description |

*If no parameters, write: "No additional parameters."*

**Response `200`**

```json
{
  "example_field": "example_value"
}
```

| Field | Type | Description |
|-------|------|-------------|
| field_name | type | description |

**Error Responses**

| Status | Meaning |
|--------|---------|
| 401 | Not authenticated |
| 404 | Resource not found |

---

Special notes for this phase:

FILES:
- File upload uses multipart/form-data — document the form fields.
- File metadata response: id, filename, meta.content_type, size, user_id.
- File content endpoint: returns raw bytes — note Content-Type header behavior.

KNOWLEDGE:
- Knowledge base endpoints interact with the vector store — note which ones trigger embedding/indexing.
- Document the "files" array in knowledge base objects (which file IDs are linked).
- Add/remove files from knowledge base: document the exact request shape.

RETRIEVAL:
- retrieval.py is 2820 lines; it configures the RAG pipeline (embedding models, chunk settings, etc.).
- Prioritize documenting: query endpoint, config GET/POST endpoints, embedding model endpoints.
- For config endpoints: document ALL configuration fields (chunk_size, chunk_overlap, top_k, etc.).
- Document the "process" endpoints (process_doc, process_web, process_youtube) with their request shapes.

Instructions:
1. Read ALL source files listed above.
2. For retrieval.py: focus on the public API surface (router endpoints), not internal helper functions.
3. Document file upload endpoints with multipart form fields clearly marked.
4. Trace vector DB config models in retrieval.py to document all configurable fields.

Exit Criteria:
- 4 output files created.
- retrieval.py endpoints fully documented including all config fields.
- File upload endpoints show form-data field names.
- No placeholder or TBD.

Final output format:
- List of files created.
- Count of endpoints documented per file.
- Note any tricky schemas (nested config objects, etc.).
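The chunk_size/chunk_overlap fields called out in the RETRIEVAL notes describe a sliding window over the document text. A minimal sketch of the assumed semantics (the exact upstream chunking may differ, e.g. token-based rather than character-based):

```python
def chunk_text(text: str, chunk_size: int = 1000, chunk_overlap: int = 100):
    """Split text into windows of chunk_size characters, each overlapping
    the previous one by chunk_overlap characters."""
    step = chunk_size - chunk_overlap
    assert step > 0, "chunk_overlap must be smaller than chunk_size"
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]
```

With `chunk_size=4, chunk_overlap=2`, the string "abcdefgh" yields overlapping chunks starting every 2 characters.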
@@ -0,0 +1,98 @@
Phase 5 Mission:
Generate detailed API reference documentation for AI provider endpoints: Ollama, OpenAI-compatible, Audio, and Images.

Prerequisites:
- api_docs/00_route_index.md must already exist (from Phase 1).

Source files to read (fully):
- backend/open_webui/routers/ollama.py (~1884 lines)
- backend/open_webui/routers/openai.py (~1466 lines)
- backend/open_webui/routers/audio.py (~1397 lines)
- backend/open_webui/routers/images.py (~1164 lines)

Output files to create under api_docs/:
- 05_ollama.md
- 05_openai.md
- 05_audio.md
- 05_images.md

Per-endpoint format:

---

### {HTTP_METHOD} {full_path}

**Summary:** One sentence description.

**Auth:** Admin only | Verified user | Public

**Request**

| Location | Field | Type | Required | Description |
|----------|-------|------|----------|-------------|
| Body | field_name | type | Yes/No | description |

**Response `200`**

```json
{
  "example_field": "example_value"
}
```

| Field | Type | Description |
|-------|------|-------------|
| field_name | type | description |

**Streaming:** Yes / No *(add this line for endpoints that support SSE/streaming)*

**Error Responses**

| Status | Meaning |
|--------|---------|
| 401 | Not authenticated |
| 503 | Upstream provider unavailable |

---

Special notes for this phase:

OLLAMA:
- Endpoints are mostly pass-through proxies to Ollama's own API.
- Document which endpoints are admin-only (model management) vs user-accessible (generate/chat).
- For streaming endpoints (generate, chat), note: "Supports SSE streaming via stream=true."
- Document the model pull/push/delete management endpoints carefully.

OPENAI:
- Endpoints proxy to the configured OpenAI-compatible backend.
- Document the /api/openai/models endpoint (returns the merged model list).
- Note which endpoints pass the request body through to upstream unchanged.
- Document admin endpoints for adding/removing OpenAI API connections.

AUDIO:
- Document: transcription (STT), TTS synthesis, and audio config endpoints.
- For file upload endpoints: specify multipart/form-data field names.
- Document supported audio formats and any size limits visible in code.
- Note: engine types (openai, whisper, etc.) and configuration endpoints.

IMAGES:
- Document: image generation endpoints and image engine config.
- Note DALL-E vs ComfyUI vs Automatic1111 backend differences if documented in code.
- Document image config GET/POST: size, steps, model, and other parameters.

Instructions:
1. Read each file fully — they are complex proxying routers.
2. For pass-through proxy routes: still document the expected request/response shape.
3. Distinguish between admin configuration routes and user-facing generation routes.
4. Streaming endpoints must be clearly marked with "Streaming: Yes" and note the SSE event format.

Exit Criteria:
- 4 output files created.
- Every route from 00_route_index.md for these modules is covered.
- Streaming endpoints clearly annotated.
- No placeholder or TBD.

Final output format:
- List of files created.
- Count of endpoints documented per file.
- Note streaming endpoints count per module.
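The SSE streaming noted above follows the usual `data: {json}` line format used by OpenAI-compatible chat streams, conventionally terminated by `data: [DONE]`. A minimal consumer sketch under that assumption:

```python
import json

def parse_sse_chunks(lines):
    """Yield decoded JSON payloads from 'data: ...' SSE lines,
    stopping at the conventional 'data: [DONE]' sentinel."""
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines and comments
        payload = line[len("data: "):]
        if payload.strip() == "[DONE]":
            return
        yield json.loads(payload)
```

A documenter can use this shape to describe the per-event payload ("Streaming: Yes" endpoints) without reproducing the whole transport.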
@@ -0,0 +1,103 @@
Phase 6 Mission:
Generate detailed API reference documentation for tools, functions, pipelines, skills, and tasks endpoints.

Prerequisites:
- api_docs/00_route_index.md must already exist (from Phase 1).

Source files to read (fully):
- backend/open_webui/routers/tools.py (~868 lines)
- backend/open_webui/routers/functions.py (~605 lines)
- backend/open_webui/routers/pipelines.py (~540 lines)
- backend/open_webui/routers/skills.py (~447 lines)
- backend/open_webui/routers/tasks.py (~764 lines)
- backend/open_webui/models/tools.py
- backend/open_webui/models/functions.py
- backend/open_webui/models/skills.py

Output files to create under api_docs/:
- 06_tools.md
- 06_functions.md
- 06_pipelines.md
- 06_skills.md
- 06_tasks.md

Per-endpoint format:

---

### {HTTP_METHOD} {full_path}

**Summary:** One sentence description.

**Auth:** Admin only | Verified user | Public

**Request**

| Location | Field | Type | Required | Description |
|----------|-------|------|----------|-------------|
| Body | field_name | type | Yes/No | description |

**Response `200`**

```json
{
  "example_field": "example_value"
}
```

| Field | Type | Description |
|-------|------|-------------|
| field_name | type | description |

**Error Responses**

| Status | Meaning |
|--------|---------|
| 401 | Not authenticated |
| 404 | Resource not found |

---

Special notes for this phase:

TOOLS:
- Tools are user-created Python functions exposed to the LLM. Document CRUD operations.
- The tool "specs" field: document its structure (a list of OpenAI function-call specs).
- Document the "export" endpoint if present.

FUNCTIONS:
- Functions include filters, actions, and pipes registered by an admin.
- Document the `type` field values: "filter", "action", "pipe".
- Document the `meta` and `valves` field structures.

PIPELINES:
- Pipelines connect to external pipeline servers.
- Document: add pipeline (URL + API key), list pipelines, get valves, set valves.
- Note: pipelines proxy through to an external server; document that behavior.

SKILLS:
- Skills are agent-style plugins with multi-step execution.
- Document the skills schema: name, content (Python source), meta.
- Note if there's a "call" endpoint for executing a skill.

TASKS:
- The tasks module handles background processing (title generation, tag generation, etc.).
- Document config endpoints (GET/POST for task-specific LLM settings).
- Document any direct invocation endpoints.

Instructions:
1. Read all source files fully.
2. For valves/specs/meta fields with complex structure, show the full nested schema.
3. Distinguish admin-only CRUD from user-accessible execution endpoints.
4. For endpoints that execute code (tools, functions, skills), clearly note the security implications.

Exit Criteria:
- 5 output files created.
- Every route from 00_route_index.md for these modules is covered.
- Complex nested schemas (valves, specs, meta) fully documented.
- No placeholder or TBD.

Final output format:
- List of files created.
- Count of endpoints documented per file.
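The "specs" structure referenced in the TOOLS notes follows the OpenAI function-calling schema. A hypothetical single entry (the tool name and parameters here are purely illustrative, not from the codebase):

```python
# One entry in a tool's "specs" list, in OpenAI function-call format.
spec = {
    "name": "get_weather",                       # hypothetical tool function
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
        },
        "required": ["city"],
    },
}
```

Documenting one concrete entry like this, alongside the generic schema, makes the "specs" field much easier for integrators to consume.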
@@ -0,0 +1,109 @@
Phase 7 Mission:
Generate detailed API reference documentation for configuration, prompts, evaluations, analytics, SCIM, and utility endpoints.

Prerequisites:
- api_docs/00_route_index.md must already exist (from Phase 1).

Source files to read (fully):
- backend/open_webui/routers/configs.py (~548 lines)
- backend/open_webui/routers/prompts.py (~759 lines)
- backend/open_webui/routers/evaluations.py (~466 lines)
- backend/open_webui/routers/analytics.py (~454 lines)
- backend/open_webui/routers/scim.py (~1030 lines)
- backend/open_webui/routers/utils.py (~123 lines)
- backend/open_webui/models/prompts.py
- backend/open_webui/config.py (for config field definitions)

Output files to create under api_docs/:
- 07_configs.md
- 07_prompts.md
- 07_evaluations.md
- 07_analytics.md
- 07_scim.md
- 07_utils.md

Per-endpoint format:

---

### {HTTP_METHOD} {full_path}

**Summary:** One sentence description.

**Auth:** Admin only | Verified user | Public

**Request**

| Location | Field | Type | Required | Description |
|----------|-------|------|----------|-------------|
| Body | field_name | type | Yes/No | description |

**Response `200`**

```json
{
  "example_field": "example_value"
}
```

| Field | Type | Description |
|-------|------|-------------|
| field_name | type | description |

**Error Responses**

| Status | Meaning |
|--------|---------|
| 401 | Not authenticated |
| 404 | Resource not found |

---

Special notes for this phase:

CONFIGS:
- This is the most important module in this phase.
- The global config GET/POST endpoints control system-wide settings.
- Read backend/open_webui/config.py to enumerate ALL configurable fields.
- Document every config field with its type, default, and effect.
- Group config fields by category (auth, RAG, models, UI, etc.) in the output.

PROMPTS:
- System prompts stored by users.
- Document CRUD operations and the command field (trigger word like "/summarize").
- Note the "access_control" field structure.

EVALUATIONS:
- Feedback/rating data for model responses.
- Document the feedback object structure (rating, comment, model_id, etc.).
- Note any aggregation/analytics endpoints.

ANALYTICS:
- Usage statistics endpoints.
- Document what metrics are tracked and aggregation options.

SCIM:
- SCIM 2.0 protocol for enterprise user/group provisioning.
- Document: /Users, /Groups, /ServiceProviderConfig, /ResourceTypes endpoints.
- Note: SCIM uses a different Content-Type and auth mechanism — document these.
- Follow SCIM 2.0 RFC schema format for user/group objects.

UTILS:
- Miscellaneous utility endpoints.
- Document all available utilities (markdown renderer, code executor, etc.).

Instructions:
1. Read config.py in addition to router files to get complete field lists.
2. For SCIM: follow SCIM 2.0 RFC conventions in documentation format.
3. For configs: produce a separate "All Config Fields" appendix table.

Exit Criteria:
- 6 output files created.
- configs.md includes appendix table of ALL config fields with defaults.
- scim.md follows SCIM 2.0 documentation conventions.
- No placeholder or TBD.

Final output format:
- List of files created.
- Count of endpoints documented per file.
- Count of config fields documented in configs.md appendix.
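Enumerating the config fields from config.py, as the CONFIGS notes require, can be approximated with a regex scan. This assumes declarations follow a `NAME = PersistentConfig(...)` pattern; it is a rough first pass, and any fields declared differently would still need manual tracing:

```python
import re

def find_config_fields(config_source: str):
    """Return top-level names assigned from PersistentConfig(...) calls."""
    return re.findall(r"^(\w+)\s*=\s*PersistentConfig\(", config_source, re.M)
```

The resulting names seed the "All Config Fields" appendix; types and defaults are then read from each call's arguments.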
@@ -0,0 +1,89 @@
Phase 8 Mission:
Consolidate all previously generated phase outputs into a polished master index and a machine-readable summary.

Prerequisites:
- ALL Phase 1-7 output files must exist under api_docs/.
- Specifically, these files must exist:
  - api_docs/00_route_index.md
  - api_docs/02_auths.md, 02_users.md, 02_groups.md, 02_models.md
  - api_docs/03_chats.md, 03_channels.md, 03_memories.md, 03_notes.md
  - api_docs/04_files.md, 04_folders.md, 04_knowledge.md, 04_retrieval.md
  - api_docs/05_ollama.md, 05_openai.md, 05_audio.md, 05_images.md
  - api_docs/06_tools.md, 06_functions.md, 06_pipelines.md, 06_skills.md, 06_tasks.md
  - api_docs/07_configs.md, 07_prompts.md, 07_evaluations.md, 07_analytics.md, 07_scim.md, 07_utils.md

Output files to create/update under api_docs/:
1. api_docs/README.md — human-readable master index
2. api_docs/openwebui_api.json — machine-readable OpenAPI-style JSON summary

Content of README.md:
- Title: "OpenWebUI Backend API Reference"
- Subtitle: "Auto-generated from source code. Do not edit manually."
- Generation date (today's date)
- Table of Contents (links to every .md file above)
- Statistics:
  - Total module count
  - Total route count (from 00_route_index.md)
  - Admin-only route count
  - Public route count
  - Streaming endpoint count
- Quick Reference: a condensed table of the 20 most commonly used endpoints (chat creation, message send, file upload, model list, auth login/logout, etc.)
- Authentication Guide section:
  - How to get a JWT token (reference auths.md)
  - How to include it in requests (Authorization: Bearer <token>)
  - Token expiry behavior
- Common Patterns section:
  - Pagination (skip/limit parameters)
  - Error response shape: {detail: string}
  - Rate limiting (if documented in code)

Content of openwebui_api.json:
A JSON object with this structure:

{
  "meta": {
    "generated_date": "YYYY-MM-DD",
    "source": "backend/open_webui/routers/",
    "total_routes": <number>,
    "modules": [<list of module names>]
  },
  "routes": [
    {
      "module": "auths",
      "method": "POST",
      "path": "/api/v1/auths/signin",
      "handler": "signin",
      "auth_required": false,
      "auth_type": "public",
      "summary": "Authenticate user and return JWT token.",
      "request_body": {
        "email": {"type": "str", "required": true},
        "password": {"type": "str", "required": true}
      },
      "response_200": {
        "token": {"type": "str"},
        "token_type": {"type": "str"},
        "id": {"type": "str"}
      },
      "streaming": false
    }
  ]
}

- Include ALL routes from all modules.
- For streaming endpoints: "streaming": true.

Instructions:
1. Read ALL generated phase output files (00 through 07).
2. Parse or summarize endpoint data from each file to populate the JSON.
3. Write README.md with complete statistics and quick reference.
4. Validate: total_routes in README.md must match the count in openwebui_api.json.

Exit Criteria:
- api_docs/README.md exists with statistics and ToC.
- api_docs/openwebui_api.json exists with all routes (valid JSON).
- Route counts in README.md and JSON are consistent.
- No placeholder or TBD.

Final output format:
- Confirmation of files created.
- Total routes count in JSON.
- Any modules with missing or incomplete data (for manual review).
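The validation in instruction 4 reduces to a three-way equality check over the generated summary. A minimal sketch (helper name is my own):

```python
def summary_is_consistent(api_json: dict, readme_total: int) -> bool:
    """True when meta.total_routes, the length of the routes array,
    and the README statistic all agree."""
    return api_json["meta"]["total_routes"] == len(api_json["routes"]) == readme_total
```

Running this as a final step catches the most common consolidation bug: a module whose routes were indexed in Phase 1 but dropped during aggregation.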
@@ -1,98 +0,0 @@
|
||||
# 开发指南勘误与更新
|
||||
|
||||
## 权限控制章节修正(第 2.2 节)
|
||||
|
||||
### ⚠️ 关键勘误
|
||||
|
||||
在实际测试中发现,Shell 权限请求使用的是 **`fullCommandText`** 字段,而非文档中提到的 `command` 字段。
|
||||
|
||||
### 需要修改的代码行
|
||||
|
||||
**第 89 行(错误):**
|
||||
|
||||
```python
|
||||
command = request.get("command", "")
|
||||
```
|
||||
|
||||
**应改为(正确):**
|
||||
|
||||
```python
|
||||
command = request.get("fullCommandText", "") or request.get("command", "")
|
||||
```
|
||||
|
||||
### 完整的正确实现
|
||||
|
||||
```python
|
||||
async def on_user_permission_request(request, context):
|
||||
"""
|
||||
统一权限审批网关
|
||||
"""
|
||||
kind = request.get("kind") # shell, write, mcp, read, url
|
||||
# ✅ 正确:使用 fullCommandText(shell)或 command(其他)
|
||||
command = request.get("fullCommandText", "") or request.get("command", "")
|
||||
|
||||
# 1. 超级模式:全部允许
|
||||
if self.valves.PERMISSIONS_ALLOW_ALL:
|
||||
return {"kind": "approved"}
|
||||
|
||||
# 2. 默认安全:始终允许 "读" 和 "Web浏览"
|
||||
if kind in ["read", "url"]:
|
||||
return {"kind": "approved"}
|
||||
|
||||
# 3. 细粒度控制
|
||||
if kind == "shell":
|
||||
if self.valves.PERMISSIONS_ALLOW_SHELL:
|
||||
return {"kind": "approved"}
|
||||
|
||||
pattern = self.valves.PERMISSIONS_SHELL_ALLOW_PATTERN
|
||||
if pattern and command:
|
||||
try:
|
||||
if re.match(pattern, command):
|
||||
return {"kind": "approved"}
|
||||
except re.error:
|
||||
print(f"[Config Error] Invalid Regex: {pattern}")
|
||||
|
||||
if kind == "write" and self.valves.PERMISSIONS_ALLOW_WRITE:
|
||||
return {"kind": "approved"}
|
||||
|
||||
if kind == "mcp" and self.valves.PERMISSIONS_ALLOW_MCP:
|
||||
return {"kind": "approved"}
|
||||
|
||||
# 4. 默认拒绝
|
||||
print(f"[Permission Denied] Blocked: {kind} {command}")
|
||||
return {
|
||||
"kind": "denied-by-rules",
|
||||
"rules": [{"kind": "check-openwebui-valves"}]
|
||||
}
|
||||
```
|
||||
|
||||
### Shell 权限请求的完整结构
|
||||
|
||||
```json
|
||||
{
|
||||
"kind": "shell",
|
||||
"toolCallId": "call_xxx",
|
||||
"fullCommandText": "ls -la", // ← 关键字段
|
||||
"intention": "List all files and directories",
|
||||
"commands": [
|
||||
{
|
||||
"identifier": "ls -la",
|
||||
"readOnly": false
|
||||
}
|
||||
],
|
||||
"possiblePaths": [],
|
||||
"possibleUrls": [],
|
||||
"hasWriteFileRedirection": false,
|
||||
"canOfferSessionApproval": false
|
||||
}
|
||||
```
|
||||
|
||||
## 测试验证
|
||||
|
||||
已通过完整测试套件验证(8/8 通过),详见 [PERMISSION_TEST_REPORT.md](./PERMISSION_TEST_REPORT.md)。
|
||||
|
||||
---
|
||||
|
||||
**更新日期**: 2026-01-30
|
||||
**验证状态**: ✅ 已测试
|
||||
**影响范围**: 2.2 权限与确认章节
|
||||
@@ -1,173 +0,0 @@
|
||||
# GitHub Copilot SDK 权限控制测试报告
|
||||
|
||||
## 测试日期
|
||||
|
||||
2026-01-30
|
||||
|
||||
## 测试环境
|
||||
|
||||
- **Model**: gpt-4.1
|
||||
- **Python**: 3.12
|
||||
- **Copilot SDK**: Latest
|
||||
|
||||
## 关键发现
|
||||
|
||||
### 1. Shell 权限请求结构
|
||||
|
||||
Shell 类型的权限请求使用 **`fullCommandText`** 字段,而非 `command` 字段。
|
||||
|
||||
**完整请求示例:**
|
||||
|
||||
```json
|
||||
{
|
||||
"kind": "shell",
|
||||
"toolCallId": "call_JKLi7tz3uSDQWE3LgzCpvSVy",
|
||||
"fullCommandText": "ls -la",
|
||||
"intention": "List all files and directories with details in the current directory",
|
||||
"commands": [
|
||||
{
|
||||
"identifier": "ls -la",
|
||||
"readOnly": false
|
||||
}
|
||||
],
|
||||
"possiblePaths": [],
|
||||
"possibleUrls": [],
|
||||
"hasWriteFileRedirection": false,
|
||||
"canOfferSessionApproval": false
|
||||
}
|
||||
```
|
||||
|
||||
### 2. 正则匹配模式验证
|
||||
|
||||
正则白名单模式已验证有效,必须使用 `fullCommandText` 字段:
|
||||
|
||||
```python
|
||||
command = request.get("fullCommandText", "") or request.get("command", "")
|
||||
pattern = self.valves.PERMISSIONS_SHELL_ALLOW_PATTERN
|
||||
if pattern and command:
|
||||
if re.match(pattern, command):
|
||||
return {"kind": "approved"}
|
||||
```
|
||||
|
||||
## 测试结果
|
||||
|
||||
### 完整测试套件(8/8 通过 ✅)
|
||||
|
||||
| # | 测试用例 | 配置 | 提示词 | 预期 | 结果 | 状态 |
|
||||
|---|---------|------|--------|------|------|------|
|
||||
| 1 | Default Deny Shell | 默认 | 请执行: ls -la | ❌ Denied | ❌ Denied | ✅ |
|
||||
| 2 | Allow All | allow_all=True | 请执行: ls -la | ✅ Approved | ✅ Approved | ✅ |
|
||||
| 3 | Allow Shell | allow_shell=True | 请执行: pwd | ✅ Approved | ✅ Approved | ✅ |
|
||||
| 4 | Regex Match: ^ls | pattern='^ls' | 请执行: ls -la | ✅ Approved | ✅ Approved | ✅ |
|
||||
| 5 | Regex No Match | pattern='^ls' | 请执行: pwd | ❌ Denied | ❌ Denied | ✅ |
|
||||
| 6 | Regex Complex | pattern='^(ls\|pwd\|echo)' | 请执行: pwd | ✅ Approved | ✅ Approved | ✅ |
|
||||
| 7 | Regex No Match: git | pattern='^(ls\|pwd\|echo)' | 请执行: git status | ❌ Denied | ❌ Denied | ✅ |
|
||||
| 8 | Read Permission | 默认 | Read: README.md | ✅ Approved | ✅ Approved | ✅ |
|
||||
|
||||
**总体通过率: 100%** 🎉
|
||||
|
||||
## 推荐配置示例
|
||||
|
||||
### 1. 安全模式(推荐生产环境)
|
||||
|
||||
```python
|
||||
PERMISSIONS_ALLOW_ALL: bool = False
|
||||
PERMISSIONS_ALLOW_SHELL: bool = False
|
||||
PERMISSIONS_SHELL_ALLOW_PATTERN: str = "^(ls|pwd|echo|cat).*"
|
||||
PERMISSIONS_ALLOW_WRITE: bool = False
|
||||
PERMISSIONS_ALLOW_MCP: bool = True
|
||||
```
|
||||
|
||||
### 2. 开发模式
|
||||
|
||||
```python
|
||||
PERMISSIONS_ALLOW_ALL: bool = False
|
||||
PERMISSIONS_ALLOW_SHELL: bool = False
|
||||
PERMISSIONS_SHELL_ALLOW_PATTERN: str = "^(ls|pwd|echo|cat|grep|git status|npm test).*"
|
||||
PERMISSIONS_ALLOW_WRITE: bool = False
|
||||
PERMISSIONS_ALLOW_MCP: bool = True
|
||||
```
|
||||
|
||||
### 3. 完全信任模式(仅限受控环境)
|
||||
|
||||
```python
|
||||
PERMISSIONS_ALLOW_ALL: bool = True
|
||||
```
|
||||
|
||||
## Implementation Recommendations

### Correct Permission-Handling Code

```python
import re
from typing import Any, Dict


async def on_user_permission_request(request: Dict[str, Any], context: Dict[str, str]):
    """
    Unified permission-approval gateway.
    """
    kind = request.get("kind")
    # Key point: use fullCommandText, not command
    command = request.get("fullCommandText", "") or request.get("command", "")

    # 1. Super mode
    if self.valves.PERMISSIONS_ALLOW_ALL:
        return {"kind": "approved"}

    # 2. Safe by default (read, url)
    if kind in ["read", "url"]:
        return {"kind": "approved"}

    # 3. Fine-grained shell control
    if kind == "shell":
        if self.valves.PERMISSIONS_ALLOW_SHELL:
            return {"kind": "approved"}

        pattern = self.valves.PERMISSIONS_SHELL_ALLOW_PATTERN
        if pattern and command:
            try:
                if re.match(pattern, command):
                    return {"kind": "approved"}
            except re.error as e:
                logger.error(f"Invalid regex: {pattern} - {e}")

    # 4. Write permission
    if kind == "write" and self.valves.PERMISSIONS_ALLOW_WRITE:
        return {"kind": "approved"}

    # 5. MCP permission
    if kind == "mcp" and self.valves.PERMISSIONS_ALLOW_MCP:
        return {"kind": "approved"}

    # 6. Deny by default
    logger.warning(f"Permission Denied: {kind} {command}")
    return {
        "kind": "denied-by-rules",
        "rules": [{"kind": "security-policy"}]
    }
```
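The gateway above references `self.valves` and `logger`, so it only runs inside the Pipe class. As a sanity check, the same decision logic can be exercised standalone; the sketch below is illustrative (the `Valves` dataclass and its defaults mirror the configuration shown earlier in this document, and are not part of the SDK):

```python
import re
from dataclasses import dataclass
from typing import Any, Dict


@dataclass
class Valves:
    # Defaults mirror the "safe" configuration shown earlier
    PERMISSIONS_ALLOW_ALL: bool = False
    PERMISSIONS_ALLOW_SHELL: bool = False
    PERMISSIONS_SHELL_ALLOW_PATTERN: str = "^(ls|pwd|echo|cat).*"
    PERMISSIONS_ALLOW_WRITE: bool = False
    PERMISSIONS_ALLOW_MCP: bool = True


def decide(valves: Valves, request: Dict[str, Any]) -> Dict[str, Any]:
    """Pure-function version of the approval gateway above."""
    kind = request.get("kind")
    command = request.get("fullCommandText", "") or request.get("command", "")

    if valves.PERMISSIONS_ALLOW_ALL:
        return {"kind": "approved"}
    if kind in ("read", "url"):
        return {"kind": "approved"}
    if kind == "shell":
        if valves.PERMISSIONS_ALLOW_SHELL:
            return {"kind": "approved"}
        pattern = valves.PERMISSIONS_SHELL_ALLOW_PATTERN
        if pattern and command:
            try:
                if re.match(pattern, command):
                    return {"kind": "approved"}
            except re.error:
                pass  # invalid pattern: fall through to the default deny
    if kind == "write" and valves.PERMISSIONS_ALLOW_WRITE:
        return {"kind": "approved"}
    if kind == "mcp" and valves.PERMISSIONS_ALLOW_MCP:
        return {"kind": "approved"}
    return {"kind": "denied-by-rules", "rules": [{"kind": "security-policy"}]}


print(decide(Valves(), {"kind": "shell", "fullCommandText": "ls -la"}))    # approved
print(decide(Valves(), {"kind": "shell", "fullCommandText": "rm -rf /"}))  # denied-by-rules
```

Keeping the decision logic pure (valves in, verdict out) makes it easy to unit-test the allowlist without starting the CLI.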

## Common Regex Patterns

| Purpose | Regex | Notes |
|------|-----------|------|
| Read-only commands | `^(ls\|pwd\|cat\|echo\|grep).*` | Allows common read-only commands |
| Git read-only | `^git (status\|log\|diff\|show).*` | Allows read-only Git operations |
| npm/yarn tests | `^(npm\|yarn) (test\|run).*` | Allows test scripts |
| Full shell | `.*` | ⚠️ Dangerous: allows every command |

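A caveat worth noting (an addition, not from the original table): `re.match` anchors only at the start of the string, and the alternations above carry no trailing word boundary, so a prefix like `lsblk` also satisfies the "read-only" pattern. Adding `\b` tightens the allowlist:

```python
import re

LOOSE = r"^(ls|pwd|cat|echo|grep).*"
TIGHT = r"^(ls|pwd|cat|echo|grep)\b.*"  # word boundary blocks prefix collisions

print(bool(re.match(LOOSE, "lsblk")))    # True  -- unintended match
print(bool(re.match(TIGHT, "lsblk")))    # False
print(bool(re.match(TIGHT, "ls -la")))   # True
print(bool(re.match(TIGHT, "git log")))  # False -- not in this allowlist
```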
## Test Script Locations

- Basic test: [test_shell_permission_pattern.py](./test_shell_permission_pattern.py)
- Full test suite: [test_permission_comprehensive.py](./test_permission_comprehensive.py)

## Conclusion

✅ **The GitHub Copilot SDK's permission-control mechanism works as intended**
✅ **The regex allowlist mode is verified to work**
⚠️ **The `fullCommandText` field must be used to obtain the command text**

---

**Test executed by**: GitHub Copilot
**Review status**: ✅ Verified
@@ -1,238 +0,0 @@
# OpenWebUI GitHub Copilot Pipe Enhancement Guide

A deep technical summary based on source-level research of the Copilot SDK, written to guide feature-enhancement development of the OpenWebUI Pipe.

## 1. Authentication

Passing the token via environment variables is officially supported. In the Pipe, as long as `GH_TOKEN` or `GITHUB_TOKEN` is present in the environment, the Copilot CLI picks it up automatically; there is no need to inject it again through the `CopilotClient` constructor.

### Core Implementation

The Pipe should ensure the token (from a Valve or the environment) is set in the current process's environment variables.

```python
import os
from copilot import CopilotClient

# 1. Set the environment variable (when the token comes from a Valve)
if self.valves.GH_TOKEN:
    os.environ["GH_TOKEN"] = self.valves.GH_TOKEN

# 2. Initialize the client
# The CLI subprocess launched by CopilotClient inherits GH_TOKEN from the current environment
client = CopilotClient({
    # "cli_path": ...,
    # Note: no need to pass github_token here; the CLI reads the environment variable itself
})

# 3. Pre-flight check (recommended)
# status = await client.get_auth_status()
# if not status.isAuthenticated: ...
```

## 2. Permissions & Confirmations (Permissions & Tools) — the Core Control Point

This is what users care about most: how to find out which tools exist, and how to control their execution.

### 2.1 Built-in Tools

The Copilot CLI manages a set of standard tools internally; **the Python SDK currently offers no direct API (`client.list_tools()`) to enumerate them**.

However, from the SDK's `PermissionRequest` type definitions (`copilot/types.py`) we can infer the capability categories:

* **`shell`**: execute terminal commands (maps to `run_terminal_command`, etc.)
* **`filesystem`** (i.e., `read`/`write`): file reads and writes (maps to `read_file`, `edit_file`, `delete_file`, etc.)
* **`url`**: network access (maps to `fetch_url`, etc.)
* **`mcp`**: tools from connected MCP servers

> **Tip**: The `available_tools` parameter "hides" tools, so the agent never knows it has a hammer; `on_permission_request` intercepts the agent when it tries to swing one. We generally recommend **leaving capabilities fully enabled (no `available_tools` restriction)** and **intercepting at the permission layer (`on_permission_request`)**.

### 2.2 Implementing "Allow All" and "Allow on Demand"

Add permission-control fields to the Valves and implement the logic in `on_permission_request`.

```python
import re


class Valves(BaseModel):
    # ... other valves ...
    # Permission-control switches
    PERMISSIONS_ALLOW_ALL: bool = Field(default=False, description="DANGER: Auto-approve ALL actions (shell, write, etc).")
    PERMISSIONS_ALLOW_SHELL: bool = Field(default=False, description="Auto-approve shell commands.")
    PERMISSIONS_SHELL_ALLOW_PATTERN: str = Field(default="", description="Regex for approved shell commands (e.g., '^ls|^grep').")
    PERMISSIONS_ALLOW_WRITE: bool = Field(default=False, description="Auto-approve file write/edit/delete.")
    PERMISSIONS_ALLOW_MCP: bool = Field(default=True, description="Auto-approve MCP tool execution.")


# Permission hook implementation
async def on_user_permission_request(request, context):
    """
    Unified permission-approval gateway.
    request keys: kind, toolCallId, ... (shell requests have 'command')
    """
    kind = request.get("kind")  # shell, write, mcp, read, url

    # 1. Super mode: allow everything
    if self.valves.PERMISSIONS_ALLOW_ALL:
        return {"kind": "approved"}

    # 2. Safe by default: always allow reads and web browsing (adjust to your needs)
    if kind in ["read", "url"]:
        return {"kind": "approved"}

    # 3. Fine-grained control
    if kind == "shell":
        # 3.1 Allow all shell commands
        if self.valves.PERMISSIONS_ALLOW_SHELL:
            return {"kind": "approved"}

        # 3.2 Allow specific commands by regex
        command = request.get("command", "")
        pattern = self.valves.PERMISSIONS_SHELL_ALLOW_PATTERN
        if pattern and command:
            try:
                if re.match(pattern, command):
                    return {"kind": "approved"}
            except re.error:
                print(f"[Config Error] Invalid Regex: {pattern}")

    if kind == "write" and self.valves.PERMISSIONS_ALLOW_WRITE:
        return {"kind": "approved"}

    if kind == "mcp" and self.valves.PERMISSIONS_ALLOW_MCP:
        return {"kind": "approved"}

    # 4. Deny by default
    print(f"[Permission Denied] Blocked request for: {kind} {request.get('command', '')}")
    return {
        "kind": "denied-by-rules",
        "rules": [{"kind": "check-openwebui-valves"}]
    }


# Register the hook
session = await client.create_session({
    # ...
    "on_permission_request": on_user_permission_request
})
```

## 3. Agents & MCP Integration

In the SDK, agents and MCP are not standalone files but part of the session configuration (`SessionConfig`). The Pipe can build these configurations dynamically from Valves.

### Key Mappings

| SDK Concept | OpenWebUI Counterpart | Where Implemented | Key Parameters |
| :--- | :--- | :--- | :--- |
| **Custom Agent** | Custom model / persona | `create_session(custom_agents=[...])` | `name`, `prompt`, `tools` (names only) |
| **Agent Tools** | Valve switches / preset tools | `create_session(tools=[func1, func2])` | Functions must be registered in `tools` before an agent can reference them |
| **MCP Server** | Valve config (JSON) | `create_session(mcp_servers={...})` | `command`, `args`, `env` (local) |

### Code Pattern: Building an Agent Dynamically

```python
async def create_agent_session(client, user_prompt, model_name):
    # 1. Define tools (must be function references)
    # Assume these were converted from OpenWebUI Tools or are built in
    available_tools = [tool_web_search, tool_run_script]

    # 2. Build the agent manifest (a virtual agent for this request)
    agent_manifest = {
        "name": "openwebui_agent",
        "description": "Dynamic agent from OpenWebUI",
        "prompt": "You are a helpful assistant...",  # inject the system prompt here
        "tools": ["web_search", "run_script"],  # reference the tools above by name
        "mcp_servers": {
            # MCP servers can be bound to a specific agent here
        }
    }

    # 3. Create the session
    session = await client.create_session({
        "model": "gpt-4",  # underlying model
        "custom_agents": [agent_manifest],
        "tools": available_tools,  # register the actual code
        "available_tools": ["web_search"],  # allowlist of currently usable tools
        # ... permission config
    })
```

## 4. MCP Server Configuration (Native MCP Support)

The Pipe can support the standard MCP protocol (stdio) directly. No extra MCP client proxy is needed; the SDK supports it natively.

### Suggested Valve Configuration Structure

Add an `MCP_CONFIG` field (a JSON string) to the Pipe's Valves, parse it, and pass the result straight to the SDK.

```python
# Example Valve input (JSON)
# {
#   "brave_search": {
#     "type": "local",
#     "command": "npx",
#     "args": ["-y", "@modelcontextprotocol/server-brave-search"],
#     "env": {"BRAVE_API_KEY": "..."}
#   }
# }

# Implementation
mcp_config = json.loads(self.valves.MCP_CONFIG)
session = await client.create_session({
    # ...
    "mcp_servers": mcp_config,
    # Note: pair this with auto-approval, otherwise MCP tools cannot be called
    "on_permission_request": auto_approve_policy
})
```

## 5. Session Management: Persistence vs. Replay

OpenWebUI is stateless, but the Copilot SDK is stateful (it keeps context-window optimizations).

### Best Practice: Anchor on `chat_id`

Use the `chat_id` supplied by OpenWebUI to decide between `resume` and `start`.

1. **Map**: maintain a `Dict[chat_id, session_id]` (in memory or a database).
2. **Flow**:
   * A request arrives -> check whether `chat_id` already maps to a `session_id`.
   * **Yes**: try `client.resume_session(session_id)`.
     * *Note*: on resume you must re-pass `tools`, `hooks`, and `on_permission_request`, since these Python objects are not serialized and saved.
   * **No / failure**: call `client.create_session()` and store the new `session_id` in the map.
3. **Fallback**: if resume fails (e.g., a backend restart lost the SDK process), fall back to creating a new session and optionally "replay" the `messages` history passed in by OpenWebUI, either as a system message or via successive `send` calls (the SDK has no direct history insert, and simulating one with sends is slow).
   * *Simple option*: when resume fails, start a fresh conversation carrying only the system prompt.

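The resume-or-create flow above can be sketched as follows. This is a minimal illustration only: `StubClient` merely stands in for `CopilotClient`, and the real SDK's `resume_session`/`create_session` signatures and return values may differ.

```python
import asyncio
from typing import Dict

# chat_id -> session_id map (in memory; use a database for durability)
SESSION_MAP: Dict[str, str] = {}


class StubClient:
    """Stand-in for CopilotClient, for illustration only."""

    def __init__(self):
        self._alive = set()
        self._counter = 0

    async def create_session(self, config: dict) -> str:
        self._counter += 1
        session_id = f"sess-{self._counter}"
        self._alive.add(session_id)
        return session_id

    async def resume_session(self, session_id: str, config: dict) -> str:
        if session_id not in self._alive:
            raise RuntimeError("session lost (e.g. backend restart)")
        return session_id


async def get_session(client, chat_id: str, config: dict) -> str:
    """Resume when a mapping exists; otherwise (or on failure) create anew."""
    session_id = SESSION_MAP.get(chat_id)
    if session_id:
        try:
            # Re-pass tools/hooks inside config here: they are never serialized
            return await client.resume_session(session_id, config)
        except Exception:
            pass  # fall through to a fresh session
    session_id = await client.create_session(config)
    SESSION_MAP[chat_id] = session_id
    return session_id


if __name__ == "__main__":
    async def demo():
        client = StubClient()
        a = await get_session(client, "chat-1", {})
        b = await get_session(client, "chat-1", {})  # same session resumed
        client._alive.clear()                        # simulate losing the SDK process
        c = await get_session(client, "chat-1", {})  # fallback: fresh session
        print(a, b, c)

    asyncio.run(demo())
```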
## 6. Advanced Hook: Prompt Enhancement

The `on_user_prompt_submitted` hook lets you inject implicit context into Copilot (e.g., the current file's content or the Pipe's meta-instructions) without changing what the user sees.

```python
async def inject_context_hook(input_data, ctx):
    user_prompt = input_data["prompt"]

    # Example: when the user asks about code, automatically attach context
    additional_context = "Current Language: Python. Framework: OpenWebUI."

    return {
        "modifiedPrompt": user_prompt,  # the prompt can be rewritten here
        "additionalContext": additional_context  # inject hidden context
    }


session = await client.create_session({
    # ...
    "hooks": {
        "on_user_prompt_submitted": inject_context_hook
    }
})
```

---

**Development checklist:**

1. [ ] **Env Auth**: read environment variables -> `CopilotClient`.
2. [ ] **Permission Valve**: implement the `PERMISSIONS_ALLOW_ALL/SHELL` etc. valves.
3. [ ] **Auto-Approve Hook**: implement the `on_permission_request` logic.
4. [ ] **MCP Valve**: add a JSON valve -> `session.mcp_servers`.
5. [ ] **Session Map**: implement a simple in-memory `chat_id` <-> `session_id` mapping.
6. [ ] **Resume Logic**: prefer `resume_session`, and remember to re-pass hooks and tools on resume.
@@ -1,620 +0,0 @@
#!/usr/bin/env python3
import argparse
import asyncio
import datetime as dt
import json
import logging
import os
import sys
import textwrap
from typing import Iterable, List, Optional

from copilot import CopilotClient

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
)
logger = logging.getLogger("copilot_sdk_guide")

DEFAULT_CONTEXT_URLS = [
    "https://raw.githubusercontent.com/github/copilot-sdk/main/README.md",
    "https://raw.githubusercontent.com/github/copilot-sdk/main/python/README.md",
    "https://raw.githubusercontent.com/github/copilot-sdk/main/docs/getting-started.md",
    "https://raw.githubusercontent.com/github/copilot-cli/main/README.md",
    "https://raw.githubusercontent.com/github/copilot-cli/main/changelog.md",
    "https://docs.github.com/en/copilot/concepts/agents/about-copilot-cli",
    "https://docs.github.com/en/copilot/concepts/agents/about-agent-skills",
    "https://raw.githubusercontent.com/github/awesome-copilot/main/README.md",
    "https://raw.githubusercontent.com/github/awesome-copilot/main/skills/copilot-sdk/SKILL.md",
    "https://raw.githubusercontent.com/github/awesome-copilot/main/instructions/agent-skills.instructions.md",
]

AWESOME_COPILOT_REPO = "github/awesome-copilot"
AWESOME_COPILOT_BRANCH = "main"
AWESOME_COPILOT_DOC_DIRS = ["docs/", "instructions/"]

TOPICS = [
    "MCP Server Integration: JSON-RPC config and SDK hooks",
    "Agent Manifests: Defining capabilities and permissions programmatically",
    "Headless Auth: Device Code Flow and credential persistence",
    "Session Replay vs Resume: Handling stateless frontend history",
    "Advanced Session Hooks: Intercepting and modifying user prompts",
    "Workspace Virtualization: Handling CWD for remote/virtual files",
    "Error Recovery: Handling session disconnects and re-auth",
    "Confirmation Events: programmatic handling of 'confirmation_required'",
    "Skills: Conflict resolution and precedence defaults",
    "Debugging: Tracing JSON-RPC traffic in the SDK",
    "Billing & Policies: How seat management affects SDK features",
]

QUESTION_TEMPLATES = [
    "Give a concise overview of {topic}.",
    "Provide best practices and common pitfalls for {topic}.",
    "Show a minimal example snippet for {topic}.",
    "List recommended configuration defaults for {topic}.",
    "How does {topic} relate to building a custom Agent?",
]

CLI_FOCUS_QUESTIONS = [
    "How to configure MCP servers in ~/.copilot/config.json for SDK usage?",
    "What CLI environment variables force 'Agent' mode vs 'Generic' mode?",
    "Explain the 'confirmation' flow in CLI and how it maps to SDK events.",
    "Does the CLI support 'dry-run' permission checks for tools?",
    "What are the undocumented requirements for 'workspace' context updates?",
    "How does the CLI handle 'device code' re-authentication automatically?",
]

def build_questions(max_questions: int) -> List[str]:
    questions: List[str] = []

    for topic in TOPICS:
        for template in QUESTION_TEMPLATES:
            questions.append(template.format(topic=topic))

    questions.extend(CLI_FOCUS_QUESTIONS)

    # De-duplicate while preserving order
    seen = set()
    uniq: List[str] = []
    for q in questions:
        if q in seen:
            continue
        seen.add(q)
        uniq.append(q)

    return uniq[:max_questions]


def build_deep_dive_prompts() -> List[str]:
    return [
        "Provide a python code example for configuring `CopilotClient` to connect to a local MCP server (e.g. Brave Search) via `CopilotClient` config.",
        "Explain how to programmatically handle `tool.confirmation_required` events in a non-interactive stream using `session.on()`.",
        "Show how to implement a 'Device Flow' login helper using SDK primitives (if available) or raw HTTP showing how to persist credentials.",
        "Compare the pros and cons of 'Session Replay' (fast-forwarding history) vs 'Session Resume' (stateful ID) for a stateless web backend like OpenWebUI.",
        "Detail the exact protocol for 'Virtual Workspace': how to implement a file system provider that feeds content to Copilot without physical files.",
        "Create an 'Agent Manifest' example: how to define an Agent capable of specific high-privileged tools via SDK.",
        "List all 'hidden' `SessionConfig` parameters relevant to Agent behavior and personality.",
    ]


def load_questions(path: str) -> List[str]:
    if path.lower().endswith(".json"):
        with open(path, "r", encoding="utf-8") as f:
            data = json.load(f)
        if isinstance(data, list):
            return [str(x).strip() for x in data if str(x).strip()]
        raise ValueError("JSON must be an array of strings")

    with open(path, "r", encoding="utf-8") as f:
        lines = [line.strip() for line in f.readlines()]
    return [line for line in lines if line]


def fetch_url(url: str, headers: Optional[dict] = None) -> str:
    import urllib.request
    import time

    retries = 3
    if headers is None:
        headers = {}

    req = urllib.request.Request(url, headers=headers)

    for i in range(retries):
        try:
            with urllib.request.urlopen(req, timeout=20) as response:
                return response.read().decode("utf-8", errors="replace")
        except Exception as exc:
            if i == retries - 1:
                logger.warning(
                    "Failed to fetch %s after %d attempts: %s", url, retries, exc
                )
                return ""
            time.sleep(1 * (i + 1))
    return ""

def list_repo_markdown_urls(
    repo: str,
    branch: str,
    dir_prefixes: List[str],
) -> List[str]:
    api_url = f"https://api.github.com/repos/{repo}/git/trees/{branch}?recursive=1"
    headers = {}
    if os.environ.get("GITHUB_TOKEN"):
        headers["Authorization"] = f"token {os.environ.get('GITHUB_TOKEN')}"

    try:
        content = fetch_url(api_url, headers=headers)
        if not content:
            return []
        data = json.loads(content)
    except Exception as exc:
        logger.warning("Failed to list repo tree: %s", exc)
        return []

    tree = data.get("tree", []) if isinstance(data, dict) else []
    urls: List[str] = []
    for item in tree:
        if not isinstance(item, dict):
            continue
        path = item.get("path", "")
        if not path or not path.endswith(".md"):
            continue
        if any(path.startswith(prefix) for prefix in dir_prefixes):
            raw = f"https://raw.githubusercontent.com/{repo}/{branch}/{path}"
            urls.append(raw)
    return urls


def read_local_sdk_source(max_chars: int = 300000) -> str:
    """
    Locates the installed 'copilot' package and reads its source code.
    This ensures analysis is based on the actual installed version, not just docs.
    """
    try:
        import copilot
    except ImportError:
        logger.error("Could not import 'copilot' SDK. Is it installed?")
        return ""

    package_dir = os.path.dirname(copilot.__file__)
    logger.info(f"Reading SDK source from: {package_dir}")

    source_chunks = []
    total_chars = 0

    # Prioritize key files that define core logic
    priority_files = ["client.py", "session.py", "types.py", "events.py", "__init__.py"]

    # First pass: Recursively find all .py files
    all_py_files = []
    for root, dirs, files in os.walk(package_dir):
        if "__pycache__" in root:
            continue
        for file in files:
            if file.endswith(".py"):
                all_py_files.append(os.path.join(root, file))

    # Sort files: priority files first, then alphabetical
    def sort_key(path):
        fname = os.path.basename(path)
        if fname in priority_files:
            return (0, priority_files.index(fname))
        return (1, path)

    all_py_files.sort(key=sort_key)

    for path in all_py_files:
        rel_path = os.path.relpath(path, os.path.dirname(package_dir))
        try:
            with open(path, "r", encoding="utf-8") as f:
                content = f.read()

            # Add file delimiter for the model
            header = f"\n\n# ==================================================\n# SOURCE CODE FILE: {rel_path}\n# ==================================================\n"
            chunk = header + content

            if total_chars + len(chunk) > max_chars:
                remaining = max_chars - total_chars
                if remaining > len(header) + 100:
                    source_chunks.append(
                        chunk[:remaining] + "\n# [TRUNCATED DUE TO LENGTH LIMIT]"
                    )
                logger.warning(f"Context limit reached. Stopping at {rel_path}")
                break

            source_chunks.append(chunk)
            total_chars += len(chunk)
            logger.info(f"Loaded source file: {rel_path} ({len(content)} chars)")

        except Exception as e:
            logger.warning(f"Failed to read source file {path}: {e}")

    return "".join(source_chunks)

def build_context(urls: Iterable[str], max_chars: int) -> str:
    chunks: List[str] = []
    remaining = max_chars

    for url in urls:
        if remaining <= 0:
            break
        try:
            content = fetch_url(url)
            header = f"[Source: {url}]\n"
            if len(header) >= remaining:
                break
            remaining -= len(header)

            if len(content) > remaining:
                content = content[:remaining] + "\n[TRUNCATED]\n"
                remaining = 0
            else:
                remaining -= len(content)

            chunks.append(header + content)
            logger.info("Fetched context: %s", url)
        except Exception as exc:
            logger.warning("Failed to fetch %s: %s", url, exc)

    return "\n\n".join(chunks)


def write_jsonl(path: str, item: dict) -> None:
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(item, ensure_ascii=False) + "\n")


def write_markdown_header(path: str, title: str, meta: dict) -> None:
    with open(path, "w", encoding="utf-8") as f:
        f.write(f"# {title}\n\n")
        for k, v in meta.items():
            f.write(f"- **{k}**: {v}\n")
        f.write("\n---\n\n")


def append_markdown_qa(path: str, question: str, answer: str) -> None:
    with open(path, "a", encoding="utf-8") as f:
        f.write(f"## Q: {question}\n\n")
        f.write(f"{answer}\n\n")


def clamp_questions(questions: List[str], max_questions: int) -> List[str]:
    return questions[: max(1, min(max_questions, 400))]


def print_progress_bar(
    iteration,
    total,
    prefix="",
    suffix="",
    decimals=1,
    length=50,
    fill="█",
    printEnd="\r",
):
    """
    Call in a loop to create terminal progress bar
    """
    percent = ("{0:." + str(decimals) + "f}").format(100 * (iteration / float(total)))
    filledLength = int(length * iteration // total)
    bar = fill * filledLength + "-" * (length - filledLength)
    # Clear line extension to handle shrinking suffixes
    print(f"\r{prefix} |{bar}| {percent}% {suffix}\033[K", end=printEnd)
    # Print New Line on Complete
    if iteration == total:
        print()

async def run_session(
    model: str,
    questions: List[str],
    output_dir: str,
    context: str,
    session_id: Optional[str],
    delay: float,
    output_lang: str,
    enable_infinite_sessions: bool,
    timeout: int,
) -> None:
    client = CopilotClient()
    await client.start()

    session_config = {"model": model}
    if session_id:
        session_config["session_id"] = session_id
    if enable_infinite_sessions:
        session_config["infinite_sessions"] = {
            "enabled": True,
            "background_compaction_threshold": 0.8,
            "buffer_exhaustion_threshold": 0.95,
        }

    session = await client.create_session(session_config)

    timestamp = dt.datetime.now().strftime("%Y%m%d_%H%M%S")
    jsonl_path = os.path.join(output_dir, f"copilot_sdk_guide_{timestamp}.jsonl")
    md_path = os.path.join(output_dir, f"copilot_sdk_guide_{timestamp}.md")

    write_markdown_header(
        md_path,
        "GitHub Copilot SDK & CLI 研究报告",
        {
            "model": model,
            "questions": len(questions),
            "timestamp": timestamp,
            "language": output_lang,
        },
    )

    lang_instruction = "Chinese" if "zh" in output_lang.lower() else "English"

    system_prompt = textwrap.dedent(
        f"""
        You are an expert assistant. Focus on GitHub Copilot SDK and GitHub Copilot CLI.

        CRITICAL INSTRUCTION: SOURCE CODE FIRST.
        You have been provided with the ACTUAL PYTHON SOURCE CODE of the `copilot` SDK in the context.
        When answering questions:
        1. FIRST, analyze the provided source code (look for class definitions, type hints, methods).
        2. THEN, refer to documentation if source code is ambiguous.
        3. Do NOT hallucinate methods that do not exist in the source code.
        4. If a feature (like MCP) is not explicitly in the code, explain how to implement it using the available primitives (low-level hooks/events).

        Provide accurate, concise answers in {lang_instruction}. When relevant, include command names,
        configuration keys, and pitfalls. Use bullet points where useful.

        Output requirements:
        - Write in {lang_instruction}.
        - Provide practical code snippets (Python/TypeScript/CLI) when helpful.
        - Include a short "建议/落地" section for integration into a pipe.
        - If citing facts from provided context, briefly mention the source URL.
        """
    ).strip()

    if context:
        system_prompt += "\n\nAdditional context:\n" + context

    await session.send_and_wait({"prompt": system_prompt}, timeout=timeout)

    total_q = len(questions)
    print_progress_bar(0, total_q, prefix="Progress:", suffix="Starting...", length=30)

    for idx, question in enumerate(questions, start=1):
        # Update progress bar (Asking...)
        q_short = (question[:40] + "...") if len(question) > 40 else question.ljust(43)
        print_progress_bar(
            idx - 1, total_q, prefix="Progress:", suffix=f"Asking: {q_short}", length=30
        )

        # Log to file/debug only
        logger.debug("[%s/%s] Asking: %s", idx, total_q, question)

        answer = ""
        max_retries = 3
        for attempt in range(max_retries):
            try:
                response = await session.send_and_wait(
                    {"prompt": question}, timeout=timeout
                )
                answer = response.data.content if response and response.data else ""
                break
            except Exception as e:
                logger.error(
                    f"Error asking question (Attempt {attempt+1}/{max_retries}): {e}"
                )
                if attempt < max_retries - 1:
                    await asyncio.sleep(2)
                else:
                    answer = f"Error retrieving answer: {e}"

        write_jsonl(
            jsonl_path,
            {
                "index": idx,
                "question": question,
                "answer": answer,
                "model": model,
            },
        )
        append_markdown_qa(md_path, question, answer)

        # Update progress bar (Done...)
        print_progress_bar(
            idx, total_q, prefix="Progress:", suffix=f"Done: {q_short}", length=30
        )

        if delay > 0:
            await asyncio.sleep(delay)

    await session.destroy()
    await client.stop()

    logger.info("Saved output to %s and %s", jsonl_path, md_path)

def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(
        description="Ask up to 100 Copilot SDK questions via GitHub Copilot SDK",
    )
    parser.add_argument("--model", default="gpt-5.2-codex", help="Model to use")
    parser.add_argument(
        "--max-questions",
        type=int,
        default=100,
        help="Max number of questions (1-400)",
    )
    parser.add_argument(
        "--questions-file",
        default="",
        help="Path to .txt or .json list of questions",
    )
    parser.add_argument(
        "--context-url",
        action="append",
        default=[],
        help="Additional context URL (repeatable)",
    )
    parser.add_argument(
        "--no-default-context",
        action="store_true",
        help="Disable default Copilot SDK context URLs",
    )
    parser.add_argument(
        "--include-awesome-copilot-docs",
        action="store_true",
        help="Include all markdown files from awesome-copilot/docs",
    )
    parser.add_argument(
        "--include-awesome-copilot-instructions",
        action="store_true",
        help="Include all markdown files from awesome-copilot/instructions",
    )
    parser.add_argument(
        "--no-sdk-source",
        action="store_true",
        help="Do NOT read local SDK source code (default: reads source)",
    )
    parser.add_argument(
        "--session-id",
        default="",
        help="Optional custom session ID",
    )
    parser.add_argument(
        "--output-dir",
        default="",
        help="Directory to save outputs",
    )
    parser.add_argument(
        "--delay",
        type=float,
        default=0.5,
        help="Delay between questions (seconds)",
    )
    parser.add_argument(
        "--max-context-chars",
        type=int,
        default=400000,
        help="Max characters of aggregated context (default: 400000)",
    )
    parser.add_argument(
        "--disable-infinite-sessions",
        action="store_true",
        help="Disable infinite sessions (default: enabled)",
    )
    parser.add_argument(
        "--output-lang",
        default="zh-CN",
        help="Output language (default: zh-CN)",
    )
    parser.add_argument(
        "--deep-dive",
        action="store_true",
        help="Append deep-dive prompts for more detailed research",
    )
    parser.add_argument(
        "--timeout",
        type=int,
        default=3600,
        help="Session request timeout in seconds (default: 3600)",
    )
    return parser.parse_args()

def main() -> None:
    args = parse_args()

    if args.questions_file:
        questions = load_questions(args.questions_file)
    else:
        # Generate enough questions to cover everything
        questions = build_questions(9999)

    if args.deep_dive:
        # Prepend deep dive questions to ensure they are prioritized
        questions = build_deep_dive_prompts() + questions

    questions = clamp_questions(questions, args.max_questions)
    if not questions:
        logger.error("No questions to ask")
        sys.exit(1)

    context_urls = [] if args.no_default_context else list(DEFAULT_CONTEXT_URLS)

    if args.include_awesome_copilot_docs:
        context_urls.extend(
            list_repo_markdown_urls(
                AWESOME_COPILOT_REPO,
                AWESOME_COPILOT_BRANCH,
                ["docs/"],
            )
        )

    if args.include_awesome_copilot_instructions:
        context_urls.extend(
            list_repo_markdown_urls(
                AWESOME_COPILOT_REPO,
                AWESOME_COPILOT_BRANCH,
                ["instructions/"],
            )
        )

    context_urls.extend(args.context_url or [])

    # 1. Read local source code first (Priority: High)
    # We allocate up to max_context_chars to source code initially.
    # The actual usage will likely be less for a typical SDK.
    source_context = ""
    source_chars_count = 0
    if not args.no_sdk_source:
        source_context = read_local_sdk_source(args.max_context_chars)
        source_chars_count = len(source_context)
        logger.info(f"Source context usage: {source_chars_count} chars")

    # 2. Calculate remaining budget for Web Docs (Priority: Secondary)
    # We ensure we don't exceed the global limit.
    remaining_chars = max(10000, args.max_context_chars - source_chars_count)
    logger.info(f"Remaining budget for web docs: {remaining_chars} chars")

    # 3. Fetch remote docs
    web_context = build_context(context_urls, remaining_chars)

    combined_context = ""
    # Assemble context in order of authority (Source > Docs)
    if source_context:
        combined_context += (
            "# PRIMARY SOURCE: LOCAL SDK CODE (AUTHORITATIVE)\n"
            + source_context
            + "\n\n"
        )
    if web_context:
        combined_context += (
            "# SECONDARY SOURCE: WEB DOCUMENTATION & AWESOME-COPILOT\n" + web_context
        )

    output_dir = args.output_dir or os.path.join(
        os.getcwd(), "plugins", "debug", "copilot_sdk_research", "outputs"
    )
    os.makedirs(output_dir, exist_ok=True)

    asyncio.run(
        run_session(
            model=args.model,
            questions=questions,
            output_dir=output_dir,
            context=combined_context,
            session_id=args.session_id or None,
            delay=args.delay,
            output_lang=args.output_lang,
            enable_infinite_sessions=not args.disable_infinite_sessions,
            timeout=args.timeout,
        )
    )


if __name__ == "__main__":
    main()
@@ -1,2 +0,0 @@
This is a dummy test file for verifying permission controls.
You can delete this file safely.
@@ -1,26 +0,0 @@
import inspect
|
||||
import json
|
||||
|
||||
try:
|
||||
import copilot
|
||||
from copilot import types
|
||||
|
||||
print(f"Copilot location: {copilot.__file__}")
|
||||
|
||||
print("\n=== Inspecting PermissionRequest types ===")
|
||||
# Look for definition of PermissionRequest or similar
|
||||
|
||||
# In the absence of direct access to the CLI output structure documentation,
|
||||
# we can check if there are type hints or typed dicts in copilot.types
|
||||
|
||||
for name, obj in inspect.getmembers(types):
|
||||
if "Permission" in name or "Request" in name:
|
||||
print(f"\nType: {name}")
|
||||
try:
|
||||
if hasattr(obj, "__annotations__"):
|
||||
print(obj.__annotations__)
|
||||
except:
|
||||
pass
|
||||
|
||||
except ImportError:
|
||||
print("copilot module not installed")
|
||||
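The introspection pattern in the deleted script above generalizes to any module: filter `inspect.getmembers` by name and read `__annotations__`. A self-contained sketch against a synthetic stand-in module (the `copilot` package is not assumed here):

```python
import inspect
import types as types_module

# Build a synthetic module standing in for `copilot.types`.
mod = types_module.ModuleType("fake_types")

class PermissionRequest:
    kind: str
    tool_call_id: str

class Unrelated:
    x: int

mod.PermissionRequest = PermissionRequest
mod.Unrelated = Unrelated

# Collect annotation dicts for members whose name suggests a permission/request type.
matches = {
    name: getattr(obj, "__annotations__", {})
    for name, obj in inspect.getmembers(mod)
    if "Permission" in name or "Request" in name
}
print(matches)
```

Only `PermissionRequest` survives the name filter, so `matches` maps it to `{'kind': str, 'tool_call_id': str}`.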
@@ -1,68 +0,0 @@
{"index": 1, "question": "Provide a python code example for configuring `CopilotClient` to connect to a local MCP server (e.g. Brave Search) via `CopilotClient` config.", "answer": "The SDK source shows that MCP servers are passed via the `mcp_servers` config of `create_session`/`resume_session` and mapped to `mcpServers`; they are not a CopilotClient constructor parameter (source: local source copilot/types.py / copilot/client.py). Below is a Python example for a local stdio MCP server (Brave Search as an example; replace the command/key). \n```python\nimport asyncio\nfrom copilot import CopilotClient\n\nasync def main():\n    client = CopilotClient()\n    await client.start()\n\n    session = await client.create_session({\n        \"model\": \"gpt-5\",\n        \"mcp_servers\": {\n            \"brave_search\": {\n                \"type\": \"local\",  # or \"stdio\"\n                \"command\": \"/path/to/brave-search-mcp\",  # replace with your MCP server launch command\n                \"args\": [],\n                \"env\": {\"BRAVE_API_KEY\": \"YOUR_KEY\"},\n                \"tools\": [\"*\"],\n                \"timeout\": 30000,  # milliseconds\n            }\n        },\n    })\n\n    await session.send_and_wait({\"prompt\": \"Use Brave Search to look up the latest OpenAI announcements\"})\n    await client.stop()\n\nasyncio.run(main())\n```\n\nKey points/pitfalls:\n- `tools`: `[\"*\"]` enables all tools, `[]` enables none\n- `timeout`: milliseconds\n- `command/args/env`: replace to match how your MCP server actually starts\n\nRecommendation:\n- reuse one session within the pipe and call `client.stop()` once at the end", "model": "gpt-5.2-codex"}
{"index": 2, "question": "Explain how to programmatically handle `tool.confirmation_required` events in a non-interactive stream using `session.on()`.", "answer": "The SDK source defines no `tool.confirmation_required` event; permission confirmation is triggered via JSON-RPC `permission.request` and handled by the `on_permission_request` callback, while `session.on()` can only subscribe to events such as `tool.user_requested`/`tool.execution_start` for logging and correlation (source: local source copilot/client.py, copilot/session.py, copilot/types.py). \nIn a non-interactive stream you must provide `on_permission_request` to auto-approve/deny; otherwise the default is denial, and a callback exception is also treated as denial.\n\nKey points/pitfalls \n- Config key: `on_permission_request` (SessionConfig) \n- Permission kinds: `shell` / `write` / `mcp` / `read` / `url` \n- Missing callback or callback exception -> `denied-no-approval-rule-and-could-not-request-from-user`\n\n```python\nimport asyncio\nfrom copilot import CopilotClient\nfrom copilot.generated.session_events import SessionEventType\n\nasync def main():\n    client = CopilotClient()\n    await client.start()\n\n    def on_event(ev):\n        if ev.type == SessionEventType.TOOL_USER_REQUESTED:\n            print(f\"tool requested: {ev.data.tool_name} {ev.data.tool_call_id}\")\n        elif ev.type == SessionEventType.TOOL_EXECUTION_START:\n            print(f\"tool start: {ev.data.tool_name}\")\n\n    async def on_permission_request(req, ctx):\n        kind = req.get(\"kind\")\n        if kind in (\"read\", \"mcp\"):\n            return {\"kind\": \"approved\"}\n        return {\"kind\": \"denied-by-rules\", \"rules\": [{\"kind\": \"deny-non-read\"}]}\n\n    session = await client.create_session({\n        \"model\": \"gpt-5\",\n        \"on_permission_request\": on_permission_request,\n    })\n    session.on(on_event)\n\n    await session.send_and_wait({\"prompt\": \"Please call a tool that requires permission\"})\n    await client.stop()\n\nasyncio.run(main())\n```\n\nRecommendation \n- register the approval policy at pipe startup + use `session.on` to record `tool_call_id` for auditing", "model": "gpt-5.2-codex"}
{"index": 3, "question": "Show how to implement a 'Device Flow' login helper using SDK primitives (if available) or raw HTTP showing how to persist credentials.", "answer": "The SDK itself has no device-flow login API; the only auth primitives are CopilotClient's `github_token`/`use_logged_in_user` and `get_auth_status` (source: local source copilot/client.py, copilot/types.py). \nBelow, raw HTTP against the GitHub OAuth Device Flow obtains a token, persists it locally, and injects it via `github_token` (endpoint docs: https://docs.github.com/en/apps/oauth-apps/building-oauth-apps/authorizing-oauth-apps#device-flow).\n\n```python\nimport asyncio\nimport json\nimport os\nimport time\nimport urllib.parse\nimport urllib.request\nfrom pathlib import Path\n\nfrom copilot import CopilotClient\n\nCLIENT_ID = \"YOUR_OAUTH_APP_CLIENT_ID\"\nSCOPE = \"read:user\"  # adjust to your OAuth App's needs\nTOKEN_PATH = Path.home() / \".config\" / \"myapp\" / \"copilot_token.json\"\n\ndef http_post(url, data):\n    body = urllib.parse.urlencode(data).encode()\n    req = urllib.request.Request(url, data=body, headers={\"Accept\": \"application/json\"})\n    with urllib.request.urlopen(req) as resp:\n        return json.loads(resp.read().decode())\n\ndef load_token():\n    if TOKEN_PATH.exists():\n        return json.loads(TOKEN_PATH.read_text()).get(\"access_token\")\n    return None\n\ndef save_token(token):\n    TOKEN_PATH.parent.mkdir(parents=True, exist_ok=True)\n    TOKEN_PATH.write_text(json.dumps(token))\n    os.chmod(TOKEN_PATH, 0o600)\n\ndef device_flow():\n    code = http_post(\n        \"https://github.com/login/device/code\",\n        {\"client_id\": CLIENT_ID, \"scope\": SCOPE},\n    )\n    print(f\"Open {code['verification_uri']} and enter {code['user_code']}\")\n    interval = int(code.get(\"interval\", 5))\n\n    while True:\n        token = http_post(\n            \"https://github.com/login/oauth/access_token\",\n            {\n                \"client_id\": CLIENT_ID,\n                \"device_code\": code[\"device_code\"],\n                \"grant_type\": \"urn:ietf:params:oauth:grant-type:device_code\",\n            },\n        )\n        if \"access_token\" in token:\n            return token\n        if token.get(\"error\") == \"slow_down\":\n            interval += 5\n        elif token.get(\"error\") not in (\"authorization_pending\",):\n            raise RuntimeError(token)\n        time.sleep(interval)\n\nasync def main():\n    access_token = load_token()\n    if not access_token:\n        token = device_flow()\n        save_token(token)\n        access_token = token[\"access_token\"]\n\n    client = CopilotClient({\"github_token\": access_token})\n    await client.start()\n    # ... create_session / send ...\n    await client.stop()\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n```\n\nKey points/pitfalls \n- `cli_url` and `github_token` are mutually exclusive \n- the SDK does not persist credentials; persist them yourself \n- polling must handle `authorization_pending` / `slow_down`\n\nRecommendation \n- read the token file at pipe startup; run the device flow if it is missing \n- inject via `github_token` to stay non-interactive", "model": "gpt-5.2-codex"}
{"index": 4, "question": "Compare the pros and cons of 'Session Replay' (fast-forwarding history) vs 'Session Resume' (stateful ID) for a stateless web backend like OpenWebUI.", "answer": "For a stateless backend (OpenWebUI), the SDK only supports stateful continuation via `resume_session(session_id)`; so-called \"Session Replay\" means replaying historical prompts yourself (source: local SDK source copilot/client.py, copilot/session.py). \nComparison in brief: \n- Replay: stateless and easy to scale; costs time and tokens, is prone to truncation/lost tool state, and may repeat side effects \n- Resume: saves tokens and preserves tool/compaction/`workspace_path` state; requires persisting the session_id and depends on session liveness/cross-instance consistency. \nRecommendation: prefer persisting chat_id→session_id and `resume_session`; on failure fall back to replaying the last N turns and rebuilding the tool context.", "model": "gpt-5.2-codex"}
{"index": 5, "question": "Detail the exact protocol for 'Virtual Workspace': how to implement a file system provider that feeds content to Copilot without physical files.", "answer": "The SDK source has no \"Virtual Workspace\" protocol or file-system provider interface; sessions only recognize real paths (`SessionConfig.working_directory`, `MessageOptions.attachments`). To feed content without physical files, use custom tools or an MCP server that turn \"file read/write\" into tool calls (source: local SDK source copilot/types.py / copilot/client.py).\n\nAvailable primitives (source)\n- `SessionConfig.tools` / `available_tools`: custom tools and allowlist\n- `MessageOptions.attachments`: real file/directory paths only\n\nOption A: custom tools implementing a VFS (recommended)\n```python\nimport asyncio\nfrom pydantic import BaseModel, Field\nfrom copilot import CopilotClient, define_tool\n\nVFS = {\"README.md\": \"# Virtual README\\nContent served from memory\\n\"}\n\nclass VfsReadParams(BaseModel):\n    path: str = Field(description=\"virtual path\")\n\n@define_tool(name=\"vfs_read\", description=\"Read a virtual file\")\ndef vfs_read(params: VfsReadParams) -> str:\n    return VFS.get(params.path, \"\")\n\n@define_tool(name=\"vfs_list\", description=\"List virtual files\")\ndef vfs_list() -> list[str]:\n    return sorted(VFS.keys())\n\nasync def main():\n    client = CopilotClient()\n    await client.start()\n    session = await client.create_session({\n        \"model\": \"gpt-5\",\n        \"tools\": [vfs_read, vfs_list],\n        \"available_tools\": [\"vfs_read\", \"vfs_list\"],\n        \"system_message\": {\"content\": \"Use vfs_* tools for all file access.\"},\n    })\n    await session.send_and_wait({\"prompt\": \"Open README.md\"})\n    await client.stop()\n\nasyncio.run(main())\n```\n\nOption B: MCP server (local stdio, VFS implemented by you)\n```python\nsession = await client.create_session({\n    \"model\": \"gpt-5\",\n    \"mcp_servers\": {\n        \"vfs\": {\n            \"type\": \"local\",\n            \"command\": \"python\",\n            \"args\": [\"-m\", \"my_vfs_mcp_server\"],\n            \"tools\": [\"*\"],\n        }\n    },\n})\n```\n\nNotes/pitfalls \n- keep only your VFS tools in `available_tools` to force their use \n- built-in file tools read real files only; for compatibility you must fall back to temp files \n- return large files in chunks to avoid blowing up the context \n\nRecommendation \n- maintain the VFS store in the pipe (DB/Redis/memory) \n- inject `system_message` + `available_tools` at session start to force the VFS entry point", "model": "gpt-5.2-codex"}
{"index": 6, "question": "Create an 'Agent Manifest' example: how to define an Agent capable of specific high-privileged tools via SDK.", "answer": "The SDK has no standalone \"Agent Manifest\" file format; the equivalent is `custom_agents` (CustomAgentConfig) on `create_session`, with high-privilege tools constrained via `tools`/`available_tools` and permissions handled by `on_permission_request` (source: local SDK source copilot/types.py, copilot/client.py). \nExample below (the manifest is a Python dict, serializable to a JSON config).\n\n```python\nimport asyncio\nfrom pydantic import BaseModel, Field\nfrom copilot import CopilotClient, define_tool\n\nclass AdminShellParams(BaseModel):\n    cmd: str = Field(description=\"approved shell command\")\n\n@define_tool(name=\"admin_shell\", description=\"Run an approved shell command\")\ndef admin_shell(params: AdminShellParams) -> str:\n    # TODO: execute the controlled command (allowlist/sandbox)\n    return f\"approved: {params.cmd}\"\n\nclass AdminWriteParams(BaseModel):\n    path: str = Field(description=\"target path\")\n    content: str = Field(description=\"content\")\n\n@define_tool(name=\"admin_write\", description=\"Write to privileged store\")\ndef admin_write(params: AdminWriteParams) -> str:\n    # TODO: controlled write\n    return \"ok\"\n\nasync def on_permission_request(req, _ctx):\n    if req.get(\"kind\") in (\"shell\", \"write\"):\n        return {\"kind\": \"approved\"}\n    return {\"kind\": \"denied-by-rules\", \"rules\": [{\"kind\": \"deny\"}]}\n\nasync def main():\n    client = CopilotClient()\n    await client.start()\n\n    agent_manifest = {\n        \"name\": \"ops-admin\",\n        \"display_name\": \"Ops Admin\",\n        \"description\": \"Privileged ops agent\",\n        \"prompt\": \"Use only admin_* tools for privileged ops.\",\n        \"tools\": [\"admin_shell\", \"admin_write\"],\n        \"infer\": True,\n    }\n\n    session = await client.create_session({\n        \"model\": \"gpt-5\",\n        \"tools\": [admin_shell, admin_write],\n        \"custom_agents\": [agent_manifest],\n        \"available_tools\": [\"admin_shell\", \"admin_write\"],\n        \"on_permission_request\": on_permission_request,\n    })\n\n    await session.send_and_wait({\"prompt\": \"Use ops-admin to perform a controlled operation\"})\n    await client.stop()\n\nasyncio.run(main())\n```\n\nKey points/pitfalls \n- tools not registered on the session -> agent.tools unusable \n- custom_agents.tools takes names only \n- missing on_permission_request in non-interactive runs -> denied by default \n- the SDK has no explicit agent-selection parameter -> name the agent in the prompt \n\nRecommendation \n- load the manifest at pipe startup and inject custom_agents + available_tools + on_permission_request", "model": "gpt-5.2-codex"}
{"index": 7, "question": "List all 'hidden' `SessionConfig` parameters relevant to Agent behavior and personality.", "answer": "Based on SessionConfig in the local SDK source `copilot/types.py`, the parameters relevant to Agent behavior/personality are (source: local SDK source): \n- `system_message`: `append`/`replace`; `replace` removes the SDK guardrails (risky) \n- `custom_agents`: `prompt`/`description`/`display_name`/`tools`/`mcp_servers`/`infer`; `tools` takes names only and they must be registered in `tools` \n- `tools` / `available_tools` / `excluded_tools`: capability and behavior boundaries; `available_tools` takes precedence over `excluded_tools` \n- `hooks`: `on_user_prompt_submitted`/`on_pre_tool_use`/`on_post_tool_use`/`on_session_start`/`on_session_end`/`on_error_occurred`; can rewrite prompts/tool args/results \n- `on_permission_request`: approval policy for non-interactive runs; missing or raising => denial \n- `on_user_input_request`: enables/handles `ask_user` \n- `mcp_servers`: injects external toolsets \n- `skill_directories` / `disabled_skills`: enable/disable skills \n- `model` / `provider`: model and BYOK affect output style and capability \n- `working_directory`: context scope \n- `infinite_sessions`: compaction threshold affects memory retention \n- `config_dir`: CLI config/state directory; behavior depends on CLI-side config \n\nRecommendation: pin `system_message` + `available_tools/excluded_tools` + `hooks` at pipe startup, and persist the session_id for `resume_session`.", "model": "gpt-5.2-codex"}
{"index": 8, "question": "Give a concise overview of MCP Server Integration: JSON-RPC config and SDK hooks.", "answer": "In the SDK, MCP is passed via the mcp_servers of create_session/resume_session (mapped to JSON-RPC mcpServers), supporting local/stdio and http/sse; the two config shapes are type/command/args/env/cwd/timeout/tools and type/url/headers/timeout/tools respectively (source: local SDK source). \nSDK hooks use hooks (on_pre_tool_use/on_post_tool_use/on_user_prompt_submitted/on_session_start/on_session_end/on_error_occurred) for interception and extra context; permissions require on_permission_request to explicitly approve kind='mcp', and session.on can audit mcp_server_name/mcp_tool_name in tool.execution_* events. \nExample below; recommendation: enable on_permission_request in the pipe, restrict the tools allowlist, and log MCP events.\n\n```python\nimport asyncio\nfrom copilot import CopilotClient\n\nasync def on_permission_request(req, _):\n    return {\"kind\": \"approved\"} if req.get(\"kind\") == \"mcp\" else {\n        \"kind\": \"denied-by-rules\",\n        \"rules\": [{\"kind\": \"deny\"}],\n    }\n\nasync def on_pre_tool_use(input, _):\n    if input.get(\"toolName\", \"\").startswith(\"mcp.\"):\n        return {\"additionalContext\": \"MCP tool call observed\"}\n\nasync def main():\n    client = CopilotClient()\n    await client.start()\n\n    session = await client.create_session({\n        \"model\": \"gpt-5\",\n        \"mcp_servers\": {\n            \"brave\": {\n                \"type\": \"local\",\n                \"command\": \"/path/to/brave-mcp\",\n                \"args\": [],\n                \"env\": {\"BRAVE_API_KEY\": \"YOUR_KEY\"},\n                \"tools\": [\"*\"],\n                \"timeout\": 30000,\n            }\n        },\n        \"hooks\": {\"on_pre_tool_use\": on_pre_tool_use},\n        \"on_permission_request\": on_permission_request,\n    })\n\n    await session.send_and_wait({\"prompt\": \"Use Brave Search to look up Copilot SDK news\"})\n    await client.stop()\n\nasyncio.run(main())\n```", "model": "gpt-5.2-codex"}
{"index": 9, "question": "Provide best practices and common pitfalls for MCP Server Integration: JSON-RPC config and SDK hooks.", "answer": "Per the local SDK source, MCP config enters JSON-RPC `mcpServers` via the `mcp_servers` of `create_session`/`resume_session`, with tool approval and behavior controlled by `on_permission_request` and `hooks` (source: local SDK source) \n- Best practices: local/stdio uses `type/command/args/env/cwd/timeout/tools`; remote http/sse uses `type/url/headers/timeout/tools`; prefer an allowlist for `tools`, `[\"*\"]` when necessary; `timeout` is in milliseconds; use `hooks.on_pre_tool_use` to rewrite args or deny, and `session.on` to record `tool.execution_*` with `mcp_server_name`/`mcp_tool_name` \n- Common pitfalls: putting MCP config on CopilotClient; missing `on_permission_request` (kind=\"mcp\") causing default denial; `tools: []` disables everything; setting `available_tools` without the MCP tool names filters them out \n\nRecommendation: a fixed approval policy in the pipe + a unified MCP config template + audit logs", "model": "gpt-5.2-codex"}
{"index": 10, "question": "Show a minimal example snippet for MCP Server Integration: JSON-RPC config and SDK hooks.", "answer": "Per the local SDK source, MCP is passed via `mcp_servers` and tool calls are handled with `hooks` (source: local SDK source). Minimal example: \n```python\nimport asyncio\nfrom copilot import CopilotClient\n\nasync def main():\n    client = CopilotClient()\n    await client.start()\n\n    session = await client.create_session({\n        \"model\": \"gpt-5\",\n        \"mcp_servers\": {\n            \"brave\": {\n                \"type\": \"local\",\n                \"command\": \"/path/to/brave-mcp\",\n                \"args\": [],\n                \"env\": {\"BRAVE_API_KEY\": \"YOUR_KEY\"},\n                \"tools\": [\"*\"],\n            }\n        },\n        \"hooks\": {\n            \"on_pre_tool_use\": lambda input, _: {\"additionalContext\": \"MCP tool call\"},\n        },\n        \"on_permission_request\": lambda req, _: {\"kind\": \"approved\"},\n    })\n\n    await session.send_and_wait({\"prompt\": \"Use Brave Search to search for the Copilot SDK\"})\n    await client.stop()\n\nasyncio.run(main())\n```\n\nRecommendation: pin `on_permission_request` (kind=\"mcp\") and restrict the `tools` allowlist.", "model": "gpt-5.2-codex"}
{"index": 11, "question": "List recommended configuration defaults for MCP Server Integration: JSON-RPC config and SDK hooks.", "answer": "Recommended defaults (MCP JSON-RPC config, source: local SDK source): for local/stdio servers use `type=\"local\"/\"stdio\"` (remote: `http`/`sse`), an allowlist for `tools` (else `\"*\"`), `timeout=30000` milliseconds, only secrets in `env`, and `cwd` pointing to a controlled directory. \nHook defaults: `on_pre_tool_use` appends context and may set `permissionDecision` (ask/allow) or rewrite args; `on_post_tool_use` logs results; `on_error_occurred` sets retry/skip. \nRecommendation: in non-interactive scenarios configure `on_permission_request` to allow only `kind=\"mcp\"` and deny otherwise; when tool names are uncertain, leave `available_tools` unset to avoid accidental filtering.", "model": "gpt-5.2-codex"}
{"index": 12, "question": "How does MCP Server Integration: JSON-RPC config and SDK hooks relate to building a custom Agent?", "answer": "In the SDK source, custom Agents are defined via `SessionConfig.custom_agents`; `CustomAgentConfig` supports `mcp_servers`, so \"binding MCP to an Agent\" means declaring `mcp_servers` in the agent config, which `create_session` turns into JSON-RPC `mcpServers` under the hood (source: local SDK source copilot/types.py, copilot/client.py). \nHooks are session-level interceptors (`on_pre_tool_use`/`on_post_tool_use`, etc.) and apply to MCP tools as well; permission confirmation requires `on_permission_request` to approve `kind=\"mcp\"`. \nRecommendation: bind MCP + approval hooks to the target Agent in the pipe, for example: \n```python\nsession = await client.create_session({\n    \"model\": \"gpt-5\",\n    \"custom_agents\": [{\n        \"name\": \"search\",\n        \"prompt\": \"Use MCP for search.\",\n        \"mcp_servers\": {\n            \"brave\": {\n                \"type\": \"local\",\n                \"command\": \"/path/to/brave-mcp\",\n                \"args\": [],\n                \"env\": {\"BRAVE_API_KEY\": \"YOUR_KEY\"},\n                \"tools\": [\"*\"],\n            }\n        },\n    }],\n    \"hooks\": {\"on_pre_tool_use\": lambda i, _: {\"additionalContext\": \"mcp\"}},\n    \"on_permission_request\": lambda r, _: {\"kind\": \"approved\"},\n})\n```", "model": "gpt-5.2-codex"}
{"index": 13, "question": "Give a concise overview of Agent Manifests: Defining capabilities and permissions programmatically.", "answer": "The SDK has no standalone Agent Manifest file; the equivalent is describing the agent with `SessionConfig.custom_agents` and bounding its capabilities with `tools`/`available_tools`/`excluded_tools`. \nPermissions are controlled at session level via `on_permission_request` and `hooks` (e.g. `on_pre_tool_use`), and `custom_agents.tools` takes only tool names that must be registered in `SessionConfig.tools`. \nRecommendation: load the manifest at pipe startup → inject `custom_agents` + `available_tools` + `on_permission_request` to pin the permission boundary.", "model": "gpt-5.2-codex"}
{"index": 14, "question": "Provide best practices and common pitfalls for Agent Manifests: Defining capabilities and permissions programmatically.", "answer": "The SDK has no standalone Manifest file; the equivalent is `SessionConfig.custom_agents` + `tools/available_tools/excluded_tools` + `on_permission_request` + `hooks` to define capabilities and permissions (source: local SDK source). \nBest practices: `custom_agents.tools` lists only names already registered in `SessionConfig.tools`, combined with an `available_tools` allowlist and precise `on_permission_request` approvals; use `hooks.on_pre_tool_use` for argument validation/auditing where needed, and keep `system_message` on append to preserve the guardrails. \nCommon pitfalls: unregistered tools or mismatched names, missing `on_permission_request` causing default denial, misusing `system_message` replace and stripping the safety constraints, expecting the agent to be auto-selected, or not persisting `session_id` across instances; recommendation: load the manifest at pipe startup → inject `custom_agents/available_tools/on_permission_request` and persist the `session_id`.", "model": "gpt-5.2-codex"}
{"index": 15, "question": "Show a minimal example snippet for Agent Manifests: Defining capabilities and permissions programmatically.", "answer": "Minimal example (local SDK source: `custom_agents` + `tools` + `available_tools` + `on_permission_request`): \n```python\nimport asyncio\nfrom pydantic import BaseModel, Field\nfrom copilot import CopilotClient, define_tool\n\nclass PingParams(BaseModel):\n    text: str = Field(description=\"text to echo\")\n\n@define_tool(name=\"priv_ping\", description=\"Privileged echo\")\ndef priv_ping(params: PingParams) -> str:\n    return f\"ok: {params.text}\"\n\nasync def main():\n    client = CopilotClient()\n    await client.start()\n\n    agent_manifest = {\n        \"name\": \"priv-agent\",\n        \"prompt\": \"Only use priv_ping for actions.\",\n        \"tools\": [\"priv_ping\"],\n        \"infer\": True,\n    }\n\n    session = await client.create_session({\n        \"model\": \"gpt-5\",\n        \"tools\": [priv_ping],\n        \"custom_agents\": [agent_manifest],\n        \"available_tools\": [\"priv_ping\"],\n        \"on_permission_request\": lambda req, _: {\"kind\": \"approved\"},\n    })\n\n    await session.send_and_wait({\"prompt\": \"Call priv_ping\"})\n    await client.stop()\n\nasyncio.run(main())\n```\n\nRecommendation: keep tool names consistent and allowlisted; always provide `on_permission_request` in non-interactive runs.", "model": "gpt-5.2-codex"}
{"index": 16, "question": "List recommended configuration defaults for Agent Manifests: Defining capabilities and permissions programmatically.", "answer": "Based on the local SDK source (copilot/types.py / copilot/client.py), recommended defaults: \n- custom_agents: name required, prompt required, tools names only, infer=True; display_name/description optional \n- available_tools=allowlist, excluded_tools=empty, system_message=append \n- on_permission_request: deny every kind not explicitly allowed; hooks.on_pre_tool_use for auditing/argument validation (other hooks as needed) \nRecommendation: load the manifest at pipe startup → register tools + available_tools + on_permission_request, and persist the session_id.", "model": "gpt-5.2-codex"}
{"index": 17, "question": "How does Agent Manifests: Defining capabilities and permissions programmatically relate to building a custom Agent?", "answer": "In the SDK, an \"Agent Manifest\" is simply the `SessionConfig.custom_agents` config object: it defines the custom Agent's `name/prompt/tools/mcp_servers/infer`, with the actual tools registered via `tools` in the same session and capabilities bounded by `available_tools/excluded_tools` (source: local SDK source). \nPermissions are not in the manifest; they are controlled uniformly by the session-level `on_permission_request` and hooks (e.g. `on_pre_tool_use`), which apply to all Agents. \nRecommendation: at pipe startup convert the manifest into `custom_agents` and register tools + allowlist in step, then configure `on_permission_request` as the permission policy.", "model": "gpt-5.2-codex"}
{"index": 18, "question": "Give a concise overview of Headless Auth: Device Code Flow and credential persistence.", "answer": "The SDK only provides `CopilotClient`'s `github_token/use_logged_in_user` and `get_auth_status()`; there is no built-in device-flow API (source: copilot/client.py, copilot/types.py). \nHeadless Device Flow must use the GitHub OAuth Device Flow HTTP endpoints to obtain a token (https://docs.github.com/en/apps/oauth-apps/building-oauth-apps/authorizing-oauth-apps#device-flow), persist it (mode 600), and inject it via `CopilotClient({\"github_token\": token})`, which is mutually exclusive with `cli_url`. \nRecommendation: read the token file at startup; if absent, run the device flow and save, then start the client.\n\n```python\ntoken = load_token() or device_flow()  # call the device code + access_token endpoints\nsave_token(token)  # chmod 600\nclient = CopilotClient({\"github_token\": token[\"access_token\"]})\nawait client.start()\n```", "model": "gpt-5.2-codex"}
{"index": 19, "question": "Provide best practices and common pitfalls for Headless Auth: Device Code Flow and credential persistence.", "answer": "The SDK has no built-in device flow; you can only exchange a token via the GitHub OAuth Device Flow HTTP endpoints and inject it with CopilotClient({github_token}), which is mutually exclusive with cli_url (source: copilot/client.py, copilot/types.py). \nBest practices/pitfalls: - chmod 600 the token file or use the OS keychain; - track expiry and refresh policy, and validate with get_auth_status at startup; - polling must handle authorization_pending/slow_down; - failures occur if the token is not persisted, use_logged_in_user and github_token are set together, or cli_url is mixed in (endpoint docs: https://docs.github.com/en/apps/oauth-apps/building-oauth-apps/authorizing-oauth-apps#device-flow). \nRecommendation: read the token file at pipe startup, run the device flow and save if absent, and fall back to re-authorization on failure.", "model": "gpt-5.2-codex"}
{"index": 20, "question": "Show a minimal example snippet for Headless Auth: Device Code Flow and credential persistence.", "answer": "Minimal example: obtain a token with the GitHub Device Flow (endpoint docs at https://docs.github.com/en/apps/oauth-apps/building-oauth-apps/authorizing-oauth-apps#device-flow), persist it, then inject CopilotClient({github_token}). \n```python\nimport asyncio\nimport json\nimport os\nimport time\nimport urllib.parse\nimport urllib.request\nfrom pathlib import Path\nfrom copilot import CopilotClient\n\nCLIENT_ID = \"YOUR_OAUTH_APP_CLIENT_ID\"\nTOKEN_PATH = Path.home() / \".config\" / \"myapp\" / \"copilot_token.json\"\n\ndef post(url, data):\n    body = urllib.parse.urlencode(data).encode()\n    req = urllib.request.Request(url, data=body, headers={\"Accept\": \"application/json\"})\n    with urllib.request.urlopen(req) as resp:\n        return json.loads(resp.read().decode())\n\ndef device_flow():\n    code = post(\"https://github.com/login/device/code\", {\"client_id\": CLIENT_ID, \"scope\": \"read:user\"})\n    print(f\"Open {code['verification_uri']} and enter {code['user_code']}\")\n    interval = int(code.get(\"interval\", 5))\n    while True:\n        tok = post(\n            \"https://github.com/login/oauth/access_token\",\n            {\n                \"client_id\": CLIENT_ID,\n                \"device_code\": code[\"device_code\"],\n                \"grant_type\": \"urn:ietf:params:oauth:grant-type:device_code\",\n            },\n        )\n        if \"access_token\" in tok:\n            return tok\n        if tok.get(\"error\") == \"slow_down\":\n            interval += 5\n        elif tok.get(\"error\") != \"authorization_pending\":\n            raise RuntimeError(tok)\n        time.sleep(interval)\n\ndef load_token():\n    return json.loads(TOKEN_PATH.read_text()).get(\"access_token\") if TOKEN_PATH.exists() else None\n\ndef save_token(tok):\n    TOKEN_PATH.parent.mkdir(parents=True, exist_ok=True)\n    TOKEN_PATH.write_text(json.dumps(tok))\n    os.chmod(TOKEN_PATH, 0o600)\n\nasync def main():\n    access_token = load_token()\n    if not access_token:\n        tok = device_flow()\n        save_token(tok)\n        access_token = tok[\"access_token\"]\n\n    client = CopilotClient({\"github_token\": access_token})\n    await client.start()\n    # ... create_session / send ...\n    await client.stop()\n\nasyncio.run(main())\n```\n\nRecommendation \n- read the token file first at startup; run the device flow only when missing \n- token file mode 600 \n- re-authorize on expiry", "model": "gpt-5.2-codex"}
{"index": 21, "question": "List recommended configuration defaults for Headless Auth: Device Code Flow and credential persistence.", "answer": "Based on the local SDK source (`github_token` and `cli_url` are mutually exclusive) and the GitHub Device Flow docs (https://docs.github.com/en/apps/oauth-apps/building-oauth-apps/authorizing-oauth-apps#device-flow), recommended defaults: \n- Token persistence: `~/.config/<app>/copilot_token.json` or the system keychain, file mode 600 \n- Client options: `github_token` only, leave `use_logged_in_user` empty/False, no `cli_url` \n- Polling: use the `interval` returned with the device code (default 5s), handle `authorization_pending`/`slow_down` \n- Startup check: re-authorize when `get_auth_status()` fails \n\nRecommendation: at startup read the token file → device flow if absent → save → `client.start()`", "model": "gpt-5.2-codex"}
{"index": 22, "question": "How does Headless Auth: Device Code Flow and credential persistence relate to building a custom Agent?", "answer": "Headless Auth is only about obtaining and persisting a `github_token` so that `CopilotClient` can start sessions and load `custom_agents` in a non-interactive environment; it is a precondition for a custom Agent to run reliably, not part of the Agent definition itself (source: local SDK source). \nRecommendation: at stateless-backend startup read the token file (run the device flow if absent) → `CopilotClient({\"github_token\": token})` → `create_session(custom_agents=...)`.", "model": "gpt-5.2-codex"}
{"index": 23, "question": "Give a concise overview of Session Replay vs Resume: Handling stateless frontend history.", "answer": "Based on the local SDK source (copilot/client.py, copilot/session.py), Resume means `resume_session(session_id)` preserving session state, while Replay means the application replays historical prompts itself. Trade-offs: Resume saves tokens and keeps tool/compaction state but depends on a persisted session_id; Replay is stateless and easy to scale but costly and harder to keep free of side effects. Recommendation: a stateless frontend should store the session_id and prefer Resume, replaying the last N turns on failure.", "model": "gpt-5.2-codex"}
{"index": 24, "question": "Provide best practices and common pitfalls for Session Replay vs Resume: Handling stateless frontend history.", "answer": "Best practices: have the stateless frontend carry the session_id on every request and prefer `resume_session`, while restoring tools/hooks/on_permission_request/system_message to keep behavior consistent (source: local SDK source). \nCommon pitfalls: Replay re-executes tools and burns tokens; failing to re-register tools or the permission handler on Resume yields tool-not-supported errors/default denial; no fallback when the session has expired. \nRecommendation: persist chat_id→session_id; when resume fails, replay only the last N turns and disable or make idempotent any tools with side effects.", "model": "gpt-5.2-codex"}
{"index": 25, "question": "Show a minimal example snippet for Session Replay vs Resume: Handling stateless frontend history.", "answer": "Source: local SDK source (copilot/client.py, copilot/session.py); minimal example below (prefer Resume, fall back to Replay). \n```python\nimport asyncio\nfrom copilot import CopilotClient\n\nasync def handle_request(session_id: str | None, history: list[str], latest: str):\n    client = CopilotClient()\n    await client.start()\n\n    try:\n        # Resume: reuse session state\n        if not session_id:\n            raise RuntimeError(\"no session_id\")\n        session = await client.resume_session(session_id)\n    except Exception:\n        # Replay: re-send the history one prompt at a time\n        session = await client.create_session({\"model\": \"gpt-5\"})\n        session_id = session.session_id\n        for prompt in history:\n            await session.send_and_wait({\"prompt\": prompt})\n\n    response = await session.send_and_wait({\"prompt\": latest})\n    await client.stop()\n    return response.data.content, session_id\n\n# asyncio.run(handle_request(session_id, history, latest_prompt))\n```\nRecommendation: persist the session_id; on resume failure replay only the last N turns.", "model": "gpt-5.2-codex"}
{"index": 26, "question": "List recommended configuration defaults for Session Replay vs Resume: Handling stateless frontend history.", "answer": "Based on the local SDK source, recommended default configuration (Resume / Replay): \n- Resume: persist the session_id; pass tools, hooks, on_permission_request, working_directory, streaming, provider, mcp_servers, custom_agents, skill_directories, disabled_skills back into `resume_session({...})`; keep `disable_resume` at its default False \n- Replay: pass model, system_message, tools, available_tools/excluded_tools, hooks, on_permission_request, working_directory, provider, mcp_servers, custom_agents, infinite_sessions (enabled by default) into `create_session({...})`; replay only the last N turns; deny write/shell by default or use a read-only allowlist", "model": "gpt-5.2-codex"}
{"index": 27, "question": "How does Session Replay vs Resume: Handling stateless frontend history relate to building a custom Agent?", "answer": "Resume preserves the custom_agents/tool state tied to the session_id; Replay must re-inject custom_agents + tools on every create_session, or the persona/capabilities are lost (source: local SDK source). For a stateless frontend, Resume is more reliable and token-efficient, but you must persist the session_id and re-register tools/hooks on resume; Replay requires replaying the history with an identical custom_agents config. Recommendation: the pipe should prefer resume_session, falling back to replaying the last N turns and rebuilding custom_agents/tools/the permission policy.", "model": "gpt-5.2-codex"}
{"index": 28, "question": "Give a concise overview of Advanced Session Hooks: Intercepting and modifying user prompts.", "answer": "To intercept/modify user prompts use `SessionHooks.on_user_prompt_submitted` (not a `session.on` event subscription); `input.prompt` can be rewritten via `output.modifiedPrompt` with optional `additionalContext`/`suppressOutput`, and `hooks` must be passed to `create_session`/`resume_session` for `hooks.invoke` to fire (source: local SDK source copilot/types.py, copilot/session.py, copilot/client.py). \n```python\nfrom copilot import CopilotClient\n\nasync def on_user_prompt_submitted(input, _ctx):\n    return {\"modifiedPrompt\": input[\"prompt\"].strip(), \"additionalContext\": \"normalize whitespace\"}\n\nsession = await client.create_session({\n    \"model\": \"gpt-5\",\n    \"hooks\": {\"on_user_prompt_submitted\": on_user_prompt_submitted},\n})\n```\nRecommendation: do prompt cleaning/context stitching uniformly at the pipe entry point, and log the before/after for auditing.", "model": "gpt-5.2-codex"}
{"index": 29, "question": "Provide best practices and common pitfalls for Advanced Session Hooks: Intercepting and modifying user prompts.", "answer": "Best practices: pass hooks.on_user_prompt_submitted to create_session/resume_session and return {modifiedPrompt, additionalContext, suppressOutput}; keep modifications idempotent, minimal, and auditable (source: local SDK source copilot/types.py, copilot/client.py, copilot/session.py). \nCommon pitfalls: forgetting to pass hooks (or not re-passing them on resume), handler exceptions being swallowed and returning None, returning a non-dict, or over-rewriting the prompt so it becomes invalid or drifts off course. \nRecommendation: register this hook uniformly at the pipe entry point and log the before/after, keeping system_message on append.", "model": "gpt-5.2-codex"}
{"index": 30, "question": "Show a minimal example snippet for Advanced Session Hooks: Intercepting and modifying user prompts.", "answer": "Minimal example (source: local SDK source): \n```python\nimport asyncio\nfrom copilot import CopilotClient\n\nasync def on_user_prompt_submitted(hook_input, _ctx):\n    return {\n        \"modifiedPrompt\": hook_input[\"prompt\"].strip(),\n        \"additionalContext\": \"normalized\",\n    }\n\nasync def main():\n    client = CopilotClient()\n    await client.start()\n    session = await client.create_session({\n        \"model\": \"gpt-5\",\n        \"hooks\": {\"on_user_prompt_submitted\": on_user_prompt_submitted},\n    })\n    await session.send_and_wait({\"prompt\": \" hello \"})\n    await client.stop()\n\nasyncio.run(main())\n```\nRecommendation: register this hook uniformly at the pipe entry point and log the before/after for auditing.", "model": "gpt-5.2-codex"}
{"index": 31, "question": "List recommended configuration defaults for Advanced Session Hooks: Intercepting and modifying user prompts.", "answer": "- Recommended defaults (source: local SDK source copilot/types.py): `hooks={\"on_user_prompt_submitted\": handler}`, with the handler returning `{\"modifiedPrompt\": input[\"prompt\"], \"additionalContext\": \"\", \"suppressOutput\": False}` by default, or `None` (no rewrite). \n- Pass the same hooks to both `create_session` and `resume_session`; keep `system_message` on append to preserve the guardrails. \n- Recommendation: register the hook at the pipe entry and log original/modified prompts for auditing.", "model": "gpt-5.2-codex"}
{"index": 32, "question": "How does Advanced Session Hooks: Intercepting and modifying user prompts relate to building a custom Agent?", "answer": "Hooks are session-level interceptors that apply to all custom Agents; they are the \"outer control\" for bounding Agent behavior and normalizing input, not the Agent Manifest itself (source: local SDK source copilot/types.py, copilot/session.py). \nIn a custom-Agent setup, `on_user_prompt_submitted` can uniformly clean/augment user instructions, with the custom_agents prompt then determining the specific style. \nRecommendation: inject custom_agents + hooks together at pipe startup, and make sure hooks are registered again on resume.", "model": "gpt-5.2-codex"}
{"index": 33, "question": "Give a concise overview of Workspace Virtualization: Handling CWD for remote/virtual files.", "answer": "The SDK only has `SessionConfig.working_directory` to constrain the CWD, and `attachments` also requires real paths; there is no \"virtual workspace/FS provider\" protocol (source: local SDK source copilot/types.py, copilot/client.py). To handle remote/virtual files, implement VFS read/write with custom tools or an MCP server, or materialize content as temp files under working_directory before handing it to the model. Recommendation: the pipe should pin working_directory to a per-session workspace, route everything through vfs_* tools/MCP, and restrict write/shell in on_permission_request.", "model": "gpt-5.2-codex"}
{"index": 34, "question": "Provide best practices and common pitfalls for Workspace Virtualization: Handling CWD for remote/virtual files.", "answer": "The SDK has no virtual workspace/FS provider; it supports only `SessionConfig.working_directory` and real-path `attachments` (source: local SDK source), so remote/virtual files must be materialized or served via custom tools/MCP. \nBest practices: pin working_directory to a controlled directory/session workspace, have VFS tools return content in chunks, and restrict permissions with `available_tools`/`on_permission_request`; common pitfalls: passing virtual paths/URLs directly, a nonexistent working_directory, not re-registering tools/permissions on resume, or returning oversized content in one shot and truncating the context. \nRecommendation: map remote→temp file or vfs_* at the pipe entry, and persist a path index.", "model": "gpt-5.2-codex"}
{"index": 35, "question": "Show a minimal example snippet for Workspace Virtualization: Handling CWD for remote/virtual files.", "answer": "The SDK only offers `SessionConfig.working_directory` and real-path attachments, with no virtual FS/provider (source: local SDK source copilot/types.py). The minimal approach is to materialize remote content under the CWD and pass it as an attachment: \n```python\nimport asyncio\nimport pathlib\nfrom copilot import CopilotClient\n\nasync def main():\n    client = CopilotClient()\n    await client.start()\n\n    workdir = \"/tmp/owui\"\n    pathlib.Path(workdir).mkdir(parents=True, exist_ok=True)\n    remote_content = \"...\"\n    path = f\"{workdir}/remote.txt\"\n    pathlib.Path(path).write_text(remote_content)\n\n    session = await client.create_session({\"model\": \"gpt-5\", \"working_directory\": workdir})\n    await session.send_and_wait({\n        \"prompt\": \"Summarize remote.txt\",\n        \"attachments\": [{\"type\": \"file\", \"path\": path}],\n    })\n    await client.stop()\n\nasyncio.run(main())\n```\nRecommendation: pin working_directory at the pipe entry, process the temp file immediately after writing it, and restrict write/shell with `on_permission_request`.", "model": "gpt-5.2-codex"}
|
||||
{"index": 36, "question": "List recommended configuration defaults for Workspace Virtualization: Handling CWD for remote/virtual files.", "answer": "推荐默认值(来源:本地 SDK 源码 copilot/types.py): \n- `working_directory`: 指向可控的会话目录(如 session.workspace_path 或 /tmp/<app>/<session_id>) \n- `available_tools`: 只允许 vfs_* 或只读工具,避免任意 shell/write \n- `on_permission_request`: 默认拒绝 write/shell,仅放行 read/mcp \n- `attachments`: 仅真实路径文件;远程内容先落地 \n\n建议/落地:pipe 入口统一映射 remote→临时文件或 VFS 工具,并持久化映射表。", "model": "gpt-5.2-codex"}
|
||||
{"index": 37, "question": "How does Workspace Virtualization: Handling CWD for remote/virtual files relate to building a custom Agent?", "answer": "SDK 没有虚拟文件系统协议,custom agent 只能依赖 `working_directory` 和真实路径附件,因此处理远程/虚拟文件需通过自定义工具/MCP 并在 agent prompt/available_tools 中强制使用(来源:本地 SDK 源码 copilot/types.py、copilot/client.py)。 \n这使“工作区虚拟化”成为 agent 能力与权限设计的一部分,需要 `available_tools`/`on_permission_request`/hooks 来限定访问范围。 \n建议/落地:pipe 中固定 working_directory 到会话目录,注入 vfs_* 工具白名单并在 prompt 明确使用 VFS。", "model": "gpt-5.2-codex"}
|
||||
{"index": 38, "question": "Give a concise overview of Error Recovery: Handling session disconnects and re-auth.", "answer": "本地 SDK 源码中没有显式“重连”API,断线通常表现为请求异常或 `SessionEventType.SESSION_ERROR`,需要 `stop()/force_stop()` 后重新 `start()` 并 `resume_session`(失败则 replay)(来源:本地 SDK 源码)。 \n认证可用 `get_auth_status()` 检测,未认证时重新创建 `CopilotClient({\"github_token\": token})`;若 `cli_url` 连接外部服务器则认证由外部处理(来源:本地 SDK 源码)。\n\n```python\nimport asyncio\nfrom copilot import CopilotClient\nfrom copilot.generated.session_events import SessionEventType\n\nasync def safe_send(session_id, prompt, token):\n client = CopilotClient({\"github_token\": token})\n await client.start()\n try:\n session = await (client.resume_session(session_id) if session_id else client.create_session({\"model\": \"gpt-5\"}))\n session.on(lambda ev: ev.type == SessionEventType.SESSION_ERROR and print(ev.data.message))\n resp = await session.send_and_wait({\"prompt\": prompt})\n return resp, session.session_id\n except Exception:\n await client.stop()\n client = CopilotClient({\"github_token\": token})\n await client.start()\n session = await client.create_session({\"model\": \"gpt-5\"})\n resp = await session.send_and_wait({\"prompt\": prompt})\n return resp, session.session_id\n finally:\n await client.stop()\n```\n\n建议/落地:持久化 session_id 与 token,优先 resume,失败再回放最近 N 轮。", "model": "gpt-5.2-codex"}
|
||||
{"index": 39, "question": "Provide best practices and common pitfalls for Error Recovery: Handling session disconnects and re-auth.", "answer": "最佳实践(基于本地 SDK 源码):监听 `SESSION_ERROR`/捕获请求异常后 `stop()/force_stop()`→重新 `start()`→`resume_session`(失败再 replay),启动时用 `get_auth_status()` 校验并在失效时重建 `CopilotClient({\"github_token\": ...})`,同时在 resume 时重新注册 tools/hooks/权限处理器。 \n常见坑:把 `cli_url` 与 `github_token` 混用、忘记重注册工具导致 `tool not supported`、缺少 `on_permission_request` 导致默认拒绝、以及不回退到 replay 造成会话中断。 \n建议/落地:无状态后端持久化 `session_id+token`、实现幂等的重试/回放策略并记录错误上下文用于告警。", "model": "gpt-5.2-codex"}
|
||||
{"index": 40, "question": "Show a minimal example snippet for Error Recovery: Handling session disconnects and re-auth.", "answer": "最小示例(本地 SDK 源码:copilot/client.py、copilot/session.py): \n```python\nimport asyncio\nfrom copilot import CopilotClient\n\nasync def send_with_recovery(session_id, prompt, token):\n client = CopilotClient({\"github_token\": token})\n await client.start()\n try:\n session = await (client.resume_session(session_id) if session_id else client.create_session({\"model\": \"gpt-5\"}))\n resp = await session.send_and_wait({\"prompt\": prompt})\n return resp.data.content, session.session_id\n except Exception:\n await client.stop()\n client = CopilotClient({\"github_token\": token})\n await client.start()\n session = await client.create_session({\"model\": \"gpt-5\"})\n resp = await session.send_and_wait({\"prompt\": prompt})\n return resp.data.content, session.session_id\n finally:\n await client.stop()\n```\n建议/落地:持久化 token+session_id,resume 失败再重建并回放必要历史。", "model": "gpt-5.2-codex"}
|
||||
{"index": 41, "question": "List recommended configuration defaults for Error Recovery: Handling session disconnects and re-auth.", "answer": "推荐默认值(字段来源:本地 SDK 源码):CopilotClientOptions—auto_start=True、log_level='info'、use_stdio=True(非 cli_url)、github_token=持久化 token;SessionConfig/ResumeSessionConfig—hooks.on_error_occurred=handler、on_permission_request=非交互审批、working_directory 固定、mcp_servers/custom_agents/streaming 与首次一致、可固定 config_dir 并持久化 session_id。 \n常见坑:cli_url 与 github_token 互斥、resume 时未重传 tools/hooks 导致 tool not supported 或默认拒绝。 \n建议/落地:pipe 保存 token+session_id,断线后先 resume_session,失败再回放最近 N 轮。", "model": "gpt-5.2-codex"}
|
||||
{"index": 42, "question": "How does Error Recovery: Handling session disconnects and re-auth relate to building a custom Agent?", "answer": "错误恢复与自定义 Agent 强相关:Agent 定义与工具/权限处理器都是 session 级配置,断线或重认证后必须重建客户端并重新注入 `custom_agents`、`tools`、`hooks`、`on_permission_request`,否则能力/人格会丢失(来源:本地 SDK 源码)。 \n建议/落地:持久化 session_id+token,重连优先 `resume_session` 并重新注册上述配置,失败再回放最近 N 轮。", "model": "gpt-5.2-codex"}
|
||||
{"index": 43, "question": "Give a concise overview of Confirmation Events: programmatic handling of 'confirmation_required'.", "answer": "SDK 源码里没有独立的 `confirmation_required` 事件;权限确认是 JSON‑RPC `permission.request`,通过 `SessionConfig.on_permission_request` 回调处理(缺失或异常会默认拒绝)。 \n`session.on()` 只能用于审计/关联工具事件(如 `tool.user_requested`、`tool.execution_start`),不能替代确认逻辑。", "model": "gpt-5.2-codex"}
|
||||
{"index": 44, "question": "Provide best practices and common pitfalls for Confirmation Events: programmatic handling of 'confirmation_required'.", "answer": "SDK 源码未定义 `confirmation_required` 事件,确认流程必须用 `SessionConfig.on_permission_request` 处理,缺失或异常会默认拒绝(来源:本地 SDK 源码)。 \n最佳实践/坑: \n- ✅ 对 `kind` 建立白名单策略(如仅允许 `read/mcp`),记录决策并用 `session.on` 仅做审计关联 `tool.user_requested` \n- ✅ `resume_session` 时重新注册 `on_permission_request`/hooks \n- ❌ 误以为 `session.on()` 可完成确认、返回非规范 dict 或抛异常导致全部拒绝", "model": "gpt-5.2-codex"}
|
||||
{"index": 45, "question": "Show a minimal example snippet for Confirmation Events: programmatic handling of 'confirmation_required'.", "answer": "SDK 里没有 `confirmation_required` 事件,需用 `SessionConfig.on_permission_request` 处理权限确认。 \n最小示例: \n```python\nfrom copilot import CopilotClient\n\nclient = CopilotClient()\nawait client.start()\n\nsession = await client.create_session({\n \"model\": \"gpt-5\",\n \"on_permission_request\": lambda req, _: {\"kind\": \"approved\"},\n})\n```\n建议/落地:在 pipe 中统一注册 on_permission_request,并在 resume_session 时重复注册。", "model": "gpt-5.2-codex"}
|
||||
{"index": 46, "question": "List recommended configuration defaults for Confirmation Events: programmatic handling of 'confirmation_required'.", "answer": "推荐默认值(SDK 无 confirmation_required 事件,改用 on_permission_request,来源:本地 SDK 源码):- 对 kind 建白名单(常见只放行 read/mcp)- 未命中默认返回 denied-by-rules 并给 rules - resume_session 时重注册 on_permission_request/hooks。 \n建议/落地:pipe 启动即注入 on_permission_request 并记录审批日志。", "model": "gpt-5.2-codex"}
|
||||
{"index": 47, "question": "How does Confirmation Events: programmatic handling of 'confirmation_required' relate to building a custom Agent?", "answer": "SDK 没有 `confirmation_required` 事件,权限确认由 `create_session`/`resume_session` 的 `on_permission_request` 处理,且对同一 session 的所有 custom agent 生效(来源:本地 SDK 源码)。 \n建议/落地:在 pipe 启动时注入 `on_permission_request` 白名单策略并在 resume 时重注册,以约束高权限工具。", "model": "gpt-5.2-codex"}
|
||||
{"index": 48, "question": "Give a concise overview of Skills: Conflict resolution and precedence defaults.", "answer": "在本地 SDK 源码中,Skills 只通过 SessionConfig 的 `skill_directories` 与 `disabled_skills` 暴露,未定义冲突解析/优先级规则,因此行为取决于 CLI 运行时实现(来源:本地 SDK 源码 copilot/types.py)。 \n为可控性,建议确保技能名唯一、仅配置单一目录或由你自行按顺序拼接目录,并用 `disabled_skills` 明确屏蔽同名/不应启用的技能。 \n建议/落地:在无状态后端每次 `create_session`/`resume_session` 时显式传入同一份 `skill_directories` 与 `disabled_skills`,避免因默认加载顺序差异导致行为漂移。", "model": "gpt-5.2-codex"}
|
||||
{"index": 49, "question": "Provide best practices and common pitfalls for Skills: Conflict resolution and precedence defaults.", "answer": "SDK 源码仅暴露 SessionConfig/ResumeSessionConfig 的 `skill_directories` 和 `disabled_skills`,未定义冲突解析或优先级,行为由 CLI 侧决定(来源:本地 SDK 源码 copilot/types.py)。 \n最佳实践:技能名全局唯一、只配置单一目录或显式排序,并用 `disabled_skills` 禁用冲突项;常见坑是依赖默认加载顺序或 resume 未重传这两个字段导致行为漂移。 \n建议/落地:在 pipe 启动与 resume 时统一注入同一份 `skill_directories`/`disabled_skills` 并记录变更。", "model": "gpt-5.2-codex"}
|
||||
{"index": 50, "question": "Show a minimal example snippet for Skills: Conflict resolution and precedence defaults.", "answer": "SDK 里技能只通过 `skill_directories` 与 `disabled_skills` 配置,冲突/优先级未在源码定义,需自行避免同名。 \n最小示例: \n```python\nsession = await client.create_session({\n \"model\": \"gpt-5\",\n \"skill_directories\": [\"/opt/skills\"],\n \"disabled_skills\": [\"legacy_skill\"],\n})\n``` \n建议:技能名唯一并在 `resume_session` 时重传这两项。", "model": "gpt-5.2-codex"}
|
||||
{"index": 51, "question": "List recommended configuration defaults for Skills: Conflict resolution and precedence defaults.", "answer": "推荐默认值(基于 SDK 仅暴露 `skill_directories`/`disabled_skills`): \n- `skill_directories`: 明确设置为单一目录(或你自己控制顺序的列表),不要依赖 CLI 默认搜索顺序 \n- `disabled_skills`: 默认空列表;如存在同名/旧技能则显式禁用 \n- Resume/Replay:始终复用同一份配置以避免“隐式优先级”漂移", "model": "gpt-5.2-codex"}
|
||||
{"index": 52, "question": "How does Skills: Conflict resolution and precedence defaults relate to building a custom Agent?", "answer": "Skills 是会话级配置(`skill_directories`/`disabled_skills`),会影响所有自定义 Agent 的可用能力与提示行为;SDK 不定义冲突优先级,因此需要你在构建自定义 Agent 时确保技能命名唯一并显式禁用冲突项(来源:本地 SDK 源码)。 \n建议/落地:为 custom agent 固定同一份 skills 配置,并在 resume 时重复传入,避免行为漂移。", "model": "gpt-5.2-codex"}
|
||||
{"index": 53, "question": "Give a concise overview of Debugging: Tracing JSON-RPC traffic in the SDK.", "answer": "本地 SDK 源码里 JsonRpcClient 没有内置 JSON‑RPC trace,只能通过 `CopilotClient({\"log_level\":\"debug\"})` 打开 CLI 侧日志(非原始包)(来源:本地 SDK 源码)。 \n要抓实际 JSON‑RPC 流量,可 monkey‑patch `JsonRpcClient._send_message`/`_read_message` 或包装 stdio 进行 tee,最小示例如下。 \n建议/落地:在 pipe 中开启 patch 并对日志做脱敏与采样。 \n```python\nfrom copilot.jsonrpc import JsonRpcClient\n\norig_send = JsonRpcClient._send_message\nasync def _send(self, message):\n print(\">>\", message)\n return await orig_send(self, message)\nJsonRpcClient._send_message = _send\n\norig_read = JsonRpcClient._read_message\ndef _read(self):\n msg = orig_read(self)\n if msg:\n print(\"<<\", msg)\n return msg\nJsonRpcClient._read_message = _read\n```", "model": "gpt-5.2-codex"}
|
||||
{"index": 54, "question": "Provide best practices and common pitfalls for Debugging: Tracing JSON-RPC traffic in the SDK.", "answer": "源码里 JsonRpcClient 没有内置 trace,只能用 `CopilotClient({\"log_level\":\"debug\"})` 查看 CLI 侧日志或 monkey‑patch `JsonRpcClient._send_message/_read_message` 做 tee(来源:本地 SDK 源码)。 \n最佳实践:仅在调试启用、对 github_token/API key/内容脱敏、采样/限流、只读不改消息并注意线程安全;常见坑:记录完整 prompt/工具参数导致泄露,或同步打印阻塞 IO 造成超时/断连。", "model": "gpt-5.2-codex"}
|
||||
{"index": 55, "question": "Show a minimal example snippet for Debugging: Tracing JSON-RPC traffic in the SDK.", "answer": "最小示例(monkey‑patch `JsonRpcClient` 打印收发): \n```python\nfrom copilot.jsonrpc import JsonRpcClient\n\norig_send = JsonRpcClient._send_message\nasync def _send(self, msg):\n print(\">>\", msg)\n return await orig_send(self, msg)\nJsonRpcClient._send_message = _send\n\norig_read = JsonRpcClient._read_message\ndef _read(self):\n msg = orig_read(self)\n if msg:\n print(\"<<\", msg)\n return msg\nJsonRpcClient._read_message = _read\n```\n建议:仅在调试启用并脱敏日志。", "model": "gpt-5.2-codex"}
|
||||
{"index": 56, "question": "List recommended configuration defaults for Debugging: Tracing JSON-RPC traffic in the SDK.", "answer": "推荐默认值(基于 SDK 源码):- 生产环境关闭 trace;- 调试时 `CopilotClient({\"log_level\":\"debug\"})`;- 如需 JSON‑RPC 报文,采用临时 monkey‑patch 并启用脱敏/采样;- 不修改消息内容,避免线程阻塞。 \n建议/落地:pipe 中通过环境变量开关调试与采样率。", "model": "gpt-5.2-codex"}
|
||||
{"index": 57, "question": "How does Debugging: Tracing JSON-RPC traffic in the SDK relate to building a custom Agent?", "answer": "JSON‑RPC trace 能看到 custom_agents 在 `session.create/session.resume` 的实际 payload,以及后续 `tool.call`/`permission.request`/`hooks.invoke` 是否按预期触发,从而验证自定义 Agent 的能力与权限边界(来源:本地 SDK 源码)。 \n建议/落地:仅在开发环境开启并脱敏日志;用它核对 customAgents/tools/权限决策与 MCP 配置。", "model": "gpt-5.2-codex"}
|
||||
{"index": 58, "question": "Give a concise overview of Billing & Policies: How seat management affects SDK features.", "answer": "- 访问与政策:SDK 依赖 Copilot CLI,需有效 Copilot 订阅且组织/企业未禁用 CLI,否则无法使用(源:https://raw.githubusercontent.com/github/copilot-sdk/main/README.md,https://raw.githubusercontent.com/github/copilot-cli/main/README.md) \n- 计费与认证:每次 prompt 计入 Copilot premium requests 配额,PAT 需 Copilot Requests 权限(源:https://raw.githubusercontent.com/github/copilot-sdk/main/README.md,https://raw.githubusercontent.com/github/copilot-cli/main/README.md)\n\n建议/落地:启动先 `get_auth_status()`,失败时提示分配席位/启用 CLI 或更新 token", "model": "gpt-5.2-codex"}
|
||||
{"index": 59, "question": "Provide best practices and common pitfalls for Billing & Policies: How seat management affects SDK features.", "answer": "最佳实践:确认用户/组织席位允许 Copilot CLI/SDK 使用,启动前用 `get_auth_status()` 预检,使用 PAT 时确保具备 Copilot Requests 权限(源:https://raw.githubusercontent.com/github/copilot-sdk/main/README.md,https://raw.githubusercontent.com/github/copilot-cli/main/README.md)。 \n常见坑:未分配席位或组织/企业策略禁用 CLI 导致 SDK 失效、PAT 权限不足或 premium requests 用尽引发认证/配额错误(源:https://raw.githubusercontent.com/github/copilot-sdk/main/README.md,https://raw.githubusercontent.com/github/copilot-cli/main/README.md)。 \n建议/落地:在 pipe 启动时做席位与鉴权检测并给出明确提示/自助链接,记录每次请求计费与配额告警阈值以便限流。", "model": "gpt-5.2-codex"}
|
||||
{"index": 60, "question": "Show a minimal example snippet for Billing & Policies: How seat management affects SDK features.", "answer": "基于 SDK README,最小示例是在启动时用 `get_auth_status()` 检查是否已分配席位/可用认证(源:https://raw.githubusercontent.com/github/copilot-sdk/main/README.md)。 \n```python\nimport asyncio\nfrom copilot import CopilotClient\n\nasync def main():\n client = CopilotClient()\n await client.start()\n\n auth = await client.get_auth_status()\n if not auth.isAuthenticated:\n raise RuntimeError(\"Copilot 未认证/未分配席位,请先登录或分配权限\")\n\n # ...正常创建 session...\n await client.stop()\n\nasyncio.run(main())\n```\n建议/落地:pipe 启动先做该检查,失败时提示分配席位或更新 PAT 权限。", "model": "gpt-5.2-codex"}
|
||||
{"index": 61, "question": "List recommended configuration defaults for Billing & Policies: How seat management affects SDK features.", "answer": "推荐默认值:CopilotClient 用 `use_logged_in_user=True`、`auto_start=True`、`log_level=\"info\"`;如用 PAT 则用 `github_token` 注入并确保具备 Copilot Requests 权限,且启动前 `get_auth_status()` 失败即按席位/策略限制处理(来源:https://raw.githubusercontent.com/github/copilot-sdk/main/README.md ,https://raw.githubusercontent.com/github/copilot-cli/main/README.md)。 \n建议/落地:pipe 启动先做 auth 预检并给出续订/分配席位提示,避免会话中途才触发权限或配额错误。", "model": "gpt-5.2-codex"}
|
||||
{"index": 62, "question": "How does Billing & Policies: How seat management affects SDK features relate to building a custom Agent?", "answer": "席位与组织策略决定 Copilot CLI/SDK 是否可用,因此自定义 Agent 只是同一会话上的配置,前提仍是有有效订阅与允许的策略,否则 Agent 无法运行(源:https://raw.githubusercontent.com/github/copilot-sdk/main/README.md ,https://raw.githubusercontent.com/github/copilot-cli/main/README.md)。 \n建议/落地:在创建自定义 Agent 前先 `get_auth_status()` 预检并对配额/政策限制做降级提示。", "model": "gpt-5.2-codex"}
|
||||
{"index": 63, "question": "How to configure MCP servers in ~/.copilot/config.json for SDK usage?", "answer": "我在 CLI 文档里未看到 `~/.copilot/config.json` 的 MCP 配置结构;CLI 仅提供 `/mcp add|edit|delete|show` 命令管理配置(来源:Copilot CLI 文档)。 \n因此建议用 CLI 写入配置,并在 SDK 中用 `config_dir` 指向该配置目录;或直接在 SDK 的 `create_session(mcp_servers=...)` 传入。 \n如果你确认存在 `~/.copilot/config.json` 的具体 schema,请提供示例或路径,我可按你的文件格式给出精确配置。", "model": "gpt-5.2-codex"}
|
||||
{"index": 64, "question": "What CLI environment variables force 'Agent' mode vs 'Generic' mode?", "answer": "CLI 文档未提到任何“通过环境变量强制 Agent 模式/Generic 模式”的开关(源:Copilot CLI 文档)。 \n已知与模式相关的是命令/参数,例如 `--agent` 选择自定义 agent,或交互内 `/agent` 切换;但不是环境变量。 \n如果你指的是内部实验开关,请提供具体变量名或版本线索,我可继续追索。", "model": "gpt-5.2-codex"}
|
||||
{"index": 65, "question": "Explain the 'confirmation' flow in CLI and how it maps to SDK events.", "answer": "- CLI:工具权限弹窗确认,可用 `/allow-all` 或 `/yolo` 自动批准。(源: https://raw.githubusercontent.com/github/copilot-cli/main/README.md) \n- SDK:对应 JSON‑RPC `permission.request`→`on_permission_request` 返回 `approved/denied-*`,`session.on()` 只用于审计 tool.execution_*,无独立 confirmation 事件。(源: 本地 SDK 源码 copilot/client.py、copilot/session.py、copilot/types.py)\n\n示例:\n```python\nfrom copilot import CopilotClient\nfrom copilot.generated.session_events import SessionEventType\n\nclient = CopilotClient()\nawait client.start()\nsession = await client.create_session({\n \"model\": \"gpt-5\",\n \"on_permission_request\": lambda req, _: {\"kind\": \"approved\"},\n})\nsession.on(lambda ev: ev.type == SessionEventType.TOOL_EXECUTION_START and print(ev.data.tool_name))\n```\n\n建议/落地:非交互模式实现 `on_permission_request` 并记录 `tool.execution_*` 事件。", "model": "gpt-5.2-codex"}
|
||||
{"index": 66, "question": "Does the CLI support 'dry-run' permission checks for tools?", "answer": "CLI 文档未提到“dry-run”权限检查,只有正常确认与 `/allow-all`、`/yolo` 自动批准(源:https://raw.githubusercontent.com/github/copilot-cli/main/README.md)。SDK 侧仅有 `permission.request`→`on_permission_request` 与 `hooks.on_pre_tool_use` 的拦截/拒绝(本地 SDK 源码),不提供不执行的试运行;建议/落地:pipe 中用 `on_permission_request` 记录/拒绝高风险工具并在 hooks 做审计。", "model": "gpt-5.2-codex"}
|
||||
{"index": 67, "question": "What are the undocumented requirements for 'workspace' context updates?", "answer": "CLI 文档与本地 SDK 源码都未提到“workspace context updates”的未公开要求;SDK 只暴露 `working_directory` 与 `session.workspace_path`,以及 `attachments` 真实路径(来源:本地 SDK 源码)。 \n如果你指的是 CLI 的“工作区/会话文件”更新规则,请提供具体页面或日志片段,我可以进一步定位。", "model": "gpt-5.2-codex"}
|
||||
{"index": 68, "question": "How does the CLI handle 'device code' re-authentication automatically?", "answer": "CLI 文档未说明“自动 device code 重新认证”的具体机制;只提到未登录时使用 `/login`,PAT 可通过 `GH_TOKEN/GITHUB_TOKEN` 提供(源:Copilot CLI 文档)。因此无法确认是否有自动 re-auth 或缓存刷新策略。 \n若需无交互场景,请使用 SDK 的 `github_token` 注入并自行实现 device flow 轮询与持久化。", "model": "gpt-5.2-codex"}
|
||||