# Compare commits

**v2025.12.3 ... v2026.01.0** — 31 commits

| SHA1 |
|---|
| 593a9ce22b |
| fe497cccb7 |
| 88aa7e156a |
| dbfce27986 |
| 9be6fe08fa |
| 782378eed8 |
| 4e59bb6518 |
| 3e73fcb3f0 |
| c460337c43 |
| e775b23503 |
| b3cdb8e26e |
| 0e6f902d16 |
| c15c73897f |
| 035439ce02 |
| b84ff4a3a2 |
| e22744abd0 |
| 54c90238f7 |
| 40d77121bd |
| 3795976a79 |
| f5e5e5caa4 |
| 0c893ce61f |
| 8f4ce8f084 |
| ac2cf00807 |
| b9d8100cdb |
| bb1cc0d966 |
| 2e238c5b5d |
| b56e7cb41e |
| 236ae43c0c |
| a4e8cc52f9 |
| c8e8434bc6 |
| 3ee00bb083 |
**.agent/workflows/plugin-development.md** (new file, 103 lines)

@@ -0,0 +1,103 @@
---
description: OpenWebUI Plugin Development & Release Workflow
---

# OpenWebUI Plugin Development Workflow

This workflow outlines the standard process for developing, documenting, and releasing plugins for OpenWebUI, ensuring compliance with project standards and CI/CD requirements.

## 1. Development Standards

Reference: `.github/copilot-instructions.md`

### Bilingual Requirement

Every plugin **MUST** have bilingual versions for both code and documentation:

- **Code**:
  - English: `plugins/{type}/{name}/{name}.py`
  - Chinese: `plugins/{type}/{name}/{name_cn}.py` (or `中文名.py`)
- **README**:
  - English: `plugins/{type}/{name}/README.md`
  - Chinese: `plugins/{type}/{name}/README_CN.md`

### Code Structure

- **Docstring**: Must include `title`, `author`, `version`, `description`, etc.
- **Valves**: Use `pydantic` for configuration.
- **Database**: Re-use the `open_webui.internal.db` shared connection.
- **User Context**: Use the `_get_user_context` helper method.
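The docstring header described above is machine-readable, which is what the version-checking tooling relies on. As a rough sketch (not the project's actual parser; `PLUGIN_SOURCE` and `parse_plugin_metadata` are illustrative names), reading the metadata back looks like this:

```python
import re

# Hypothetical plugin source; real plugins put this docstring at the top of the .py file.
PLUGIN_SOURCE = '''"""
title: My Plugin
author: Fu-Jie
version: 0.2.0
description: Example plugin header.
"""
'''

def parse_plugin_metadata(source: str) -> dict:
    """Extract `key: value` pairs from a plugin's leading docstring."""
    match = re.match(r'\s*(?:"""|\'\'\')(.*?)(?:"""|\'\'\')', source, re.DOTALL)
    if not match:
        return {}
    meta = {}
    for line in match.group(1).splitlines():
        key, sep, value = line.partition(":")
        if sep:
            meta[key.strip()] = value.strip()
    return meta

print(parse_plugin_metadata(PLUGIN_SOURCE)["version"])
```

This is why the `version` field must be updated in the docstring itself: CI reads it from the source file, not from git tags.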
### Commit Messages

- **Language**: **English ONLY**. Do not use Chinese in commit messages.
- **Format**: Conventional Commits (e.g., `feat:`, `fix:`, `docs:`).
## 2. Documentation Updates

When adding or updating a plugin, you **MUST** update the following documentation files to maintain consistency:

### Plugin Directory

- `README.md`: Update version, description, and usage. **Explicitly describe new features.**
- `README_CN.md`: Update version, description, and usage. **Explicitly describe new features.**

### Global Documentation (`docs/`)

- **Index Pages**:
  - `docs/plugins/{type}/index.md`: Add/update the list item with the **correct version**.
  - `docs/plugins/{type}/index.zh.md`: Add/update the list item with the **correct version**.
- **Detail Pages**:
  - `docs/plugins/{type}/{name}.md`: Ensure content matches README.md.
  - `docs/plugins/{type}/{name}.zh.md`: Ensure content matches README_CN.md.

### Root README

- `README.md`: Add to "Featured Plugins" if applicable.
- `README_CN.md`: Add to "Featured Plugins" if applicable.
## 3. Version Control & Release

Reference: `.github/workflows/release.yml`

### Version Bumping

- **Rule**: A version bump is required **ONLY when the user explicitly requests a release**. Regular code changes do NOT require version bumps.
- **Format**: Semantic Versioning (e.g., `1.0.0` -> `1.0.1`).
- **When to Bump**: Only update the version when:
  - the user says "发布" / "release" / "bump version", or
  - the user explicitly asks to prepare for a release.
- **Agent Initiative**: After completing significant changes (new features, bug fixes, or multiple code modifications), the agent **SHOULD proactively ask** the user whether to release a new version. If confirmed, update all version-related files.
- **Consistency**: When bumping, update the version in **ALL** locations:
  1. English code (`.py`)
  2. Chinese code (`.py`)
  3. English README (`README.md`)
  4. Chinese README (`README_CN.md`)
  5. Docs index (`docs/.../index.md`)
  6. Docs index CN (`docs/.../index.zh.md`)
  7. Docs detail (`docs/.../{name}.md`)
  8. Docs detail CN (`docs/.../{name}.zh.md`)
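A quick self-check before committing is to grep for the version string across every file that must carry it. A rough sketch, using throwaway temp files as stand-ins for the eight real locations above (the file names here are hypothetical, not the repo's actual paths):

```shell
VERSION="0.2.0"
dir=$(mktemp -d)
# Stand-ins for the real files; in practice list all eight locations.
printf 'version: 0.2.0\n' > "$dir/plugin.py"
printf 'Version: v0.2.0\n' > "$dir/README.md"
status=ok
for f in "$dir/plugin.py" "$dir/README.md"; do
  grep -q "$VERSION" "$f" || { echo "version mismatch in $f"; status=fail; }
done
echo "$status"
```

If any file is missing the new version string, the loop reports a mismatch instead of `ok`.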
### Automated Release Process

1. **Trigger**: Push to the `main` branch with changes in `plugins/**/*.py`.
2. **Detection**: `scripts/extract_plugin_versions.py` detects changed plugins and compares versions.
3. **Release**:
   - Generates release notes based on the changes.
   - Creates a GitHub Release tag (e.g., `v2024.01.01-1`).
   - Uploads individual `.py` files of **changed plugins only** as assets.
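The real comparison lives in `scripts/extract_plugin_versions.py`, which is not shown here. As a simplified sketch of the detection step, it boils down to diffing two name-to-version maps; note that plugins present only in the old map (i.e. removed files) never appear in the result, which matches the workflow's `--ignore-removed` behavior:

```python
def changed_plugins(old: dict[str, str], new: dict[str, str]) -> dict[str, str]:
    """Return plugins whose version is new or different; removed plugins are ignored."""
    return {name: ver for name, ver in new.items() if old.get(name) != ver}

# Hypothetical data: one bump, one unchanged, one removed plugin.
old = {"export_to_excel": "0.3.5", "summary": "0.1.0", "legacy_plugin": "1.0.0"}
new = {"export_to_excel": "0.3.6", "summary": "0.1.0"}
print(changed_plugins(old, new))
```

Only `export_to_excel` is reported: its version changed, `summary` did not, and `legacy_plugin` was removed and is skipped.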
### Pull Request Check

- Workflow: `.github/workflows/plugin-version-check.yml`
- Checks whether plugin files are modified.
- **Fails** if the version number is not updated.
- **Fails** if the PR description is too short (< 20 chars).
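The workflow's own implementation is not reproduced here; as a minimal sketch, the description-length gate above amounts to:

```python
def pr_description_ok(body: str, min_chars: int = 20) -> bool:
    """Fail the check when the PR description is shorter than min_chars."""
    return len(body.strip()) >= min_chars

print(pr_description_ok("fix"))
print(pr_description_ok("Bump export_to_excel to 0.3.6 and refresh docs."))
```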
## 4. Verification Checklist

Before committing, confirm:

- [ ] Code is bilingual and functional
- [ ] Docstrings have the updated version
- [ ] READMEs are updated and bilingual
- [ ] `docs/` index and detail pages are updated
- [ ] Root `README.md` is updated
- [ ] All version numbers match exactly
## 5. Git Operations (Agent Rules)

Strictly follow the rules defined in `.github/copilot-instructions.md` → **Git Operations (Agent Rules)** section.
**.github/copilot-instructions.md** (vendored, 272 lines changed)
@@ -37,7 +37,9 @@ README 文件应包含以下内容:

- 安装和设置说明 / Installation and setup instructions
- 使用示例 / Usage examples
- 故障排除指南 / Troubleshooting guide
- 版本和作者信息 / Version and author information
- **新增功能 / New Features**: 如果是更新现有插件,必须明确列出并描述新增功能(发布到官方市场的重要要求)。/ If updating an existing plugin, explicitly list and describe new features (critical for official market release).

### 官方文档 (Official Documentation)
@@ -795,10 +797,169 @@ For iframe plugins to access parent document theme information, users need to co

- [ ] 实现 Valves 配置
- [ ] 使用 logging 而非 print
- [ ] 测试双语界面
- [ ] **一致性检查 (Consistency Check)**:
  - [ ] 更新 `README.md` 插件列表
  - [ ] 更新 `README_CN.md` 插件列表
  - [ ] 更新/创建 `docs/` 下的对应文档
  - [ ] 确保文档版本号与代码一致

---
## 📚 参考资源 (Reference Resources)
## 🔄 一致性维护 (Consistency Maintenance)

任何插件的**新增、修改或移除**,必须同时更新以下三个位置,保持完全一致:

1. **插件代码 (Plugin Code)**: 更新 `version` 和功能实现。
2. **项目文档 (Docs)**: 更新 `docs/` 下对应的文档文件(版本号、功能描述)。
3. **自述文件 (README)**: 更新根目录下的 `README.md` 和 `README_CN.md` 中的插件列表。

> [!IMPORTANT]
> 提交 PR 前,请务必检查这三处是否同步。例如:如果删除了一个插件,必须同时从 README 列表中移除,并删除对应的 docs 文档。

---

## 🚀 发布工作流 (Release Workflow)

### 自动发布 (Automatic Release)

当插件更新推送到 `main` 分支时,会**自动触发**发布流程:

1. 🔍 检测版本变化(与上次 release 对比)
2. 📝 生成发布说明(包含更新内容和提交记录)
3. 📦 创建 GitHub Release(包含可下载的插件文件)
4. 🏷️ 自动生成版本号(格式:`vYYYY.MM.DD-运行号`)

**注意**:仅**移除插件**(删除文件)**不会触发**自动发布。只有新增或修改插件(且更新了版本号)才会触发发布。移除的插件将不会出现在发布日志中。
### 发布前必须完成 (Pre-release Requirements)

> [!IMPORTANT]
> 版本号**仅在用户明确要求发布时**才需要更新。日常代码更改**无需**更新版本号。

**触发版本更新的关键词**:

- 用户说 "发布"、"release"、"bump version"
- 用户明确要求准备发布

**Agent 主动询问发布 (Agent-Initiated Release Prompt)**:

当 Agent 完成以下类型的更改后,**应主动询问**用户是否需要发布新版本:

| 更改类型 | 示例 | 是否询问发布 |
|---------|------|-------------|
| 新功能 | 新增导出格式、新的配置选项 | ✅ 询问 |
| 重要 Bug 修复 | 修复导致崩溃或数据丢失的问题 | ✅ 询问 |
| 累积多次更改 | 同一插件在会话中被修改 >= 3 次 | ✅ 询问 |
| 小优化 | 代码清理、格式符号处理 | ❌ 不询问 |
| 文档更新 | 只改 README、注释 | ❌ 不询问 |

如果用户确认发布,Agent 需要更新所有版本相关的文件(代码、README、docs 等)。

**发布时需要完成**:

1. ✅ **更新版本号** - 修改插件文档字符串中的 `version` 字段
2. ✅ **中英文版本同步** - 确保两个版本的版本号一致

```python
"""
title: My Plugin
version: 0.2.0  # <- 发布时更新这里!
...
"""
```
### 版本编号规则 (Versioning)

遵循[语义化版本](https://semver.org/lang/zh-CN/):

| 变更类型 | 版本变化 | 示例 |
|---------|---------|------|
| Bug 修复 | PATCH +1 | 0.1.0 → 0.1.1 |
| 新功能 | MINOR +1 | 0.1.1 → 0.2.0 |
| 不兼容变更 | MAJOR +1 | 0.2.0 → 1.0.0 |
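上表的升级规则可以写成一个简单的示意函数(仅为示意,并非项目中的实际脚本):

```python
def bump(version: str, part: str) -> str:
    """按语义化版本规则升级版本号。part 取 'major'、'minor' 或 'patch'。"""
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    if part == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown part: {part}")

print(bump("0.1.0", "patch"))  # 对应表中 0.1.0 → 0.1.1
```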
### 发布方式 (Release Methods)

**方式 A:直接推送到 main(推荐)**

```bash
# 1. 暂存更改
git add plugins/actions/my-plugin/

# 2. 提交(使用规范的 commit message)
git commit -m "feat(my-plugin): add new feature X

- Add feature X for better user experience
- Fix bug Y
- Update version to 0.2.0"

# 3. 推送到 main
git push origin main

# GitHub Actions 会自动创建 Release
```

**方式 B:创建 PR(团队协作)**

```bash
# 1. 创建功能分支
git checkout -b feature/my-plugin-v0.2.0

# 2. 提交更改
git commit -m "feat(my-plugin): add new feature X"

# 3. 推送并创建 PR
git push origin feature/my-plugin-v0.2.0

# 4. PR 合并后自动触发发布
```

**方式 C:手动触发发布**

1. 前往 GitHub Actions → "Plugin Release / 插件发布"
2. 点击 "Run workflow"
3. 填写版本号和发布说明
### Commit Message 规范 (Commit Convention)

使用 [Conventional Commits](https://www.conventionalcommits.org/) 格式:

```
<type>(<scope>): <description>

[optional body]

[optional footer]
```

常用类型:

- `feat`: 新功能
- `fix`: Bug 修复
- `docs`: 文档更新
- `refactor`: 代码重构
- `style`: 代码格式调整
- `perf`: 性能优化

示例:

```
feat(flash-card): add _get_user_context for safer user info retrieval

- Add _get_user_context method to handle various __user__ types
- Prevent AttributeError when __user__ is not a dict
- Update version to 0.2.2 for both English and Chinese versions
```
### 发布检查清单 (Release Checklist)

发布前确保完成以下检查:

- [ ] 更新插件版本号(英文版 + 中文版)
- [ ] 测试插件功能正常
- [ ] 确保代码通过格式检查
- [ ] 编写清晰的 commit message
- [ ] 推送到 main 分支或合并 PR

---
## 📚 参考资源 (Reference Resources)

- [Action 插件模板 (英文)](plugins/actions/ACTION_PLUGIN_TEMPLATE.py)
- [Action 插件模板 (中文)](plugins/actions/ACTION_PLUGIN_TEMPLATE_CN.py)
@@ -816,3 +977,112 @@ GitHub: [Fu-Jie/awesome-openwebui](https://github.com/Fu-Jie/awesome-openwebui)

## License

MIT License

---
## 📝 Commit Message Guidelines

**Commit messages MUST be in English.** Do not use Chinese.

### Format

Follow the [Conventional Commits](https://www.conventionalcommits.org/) specification:

- `feat`: New feature
- `fix`: Bug fix
- `docs`: Documentation-only changes
- `style`: Changes that do not affect the meaning of the code (white-space, formatting, etc.)
- `refactor`: A code change that neither fixes a bug nor adds a feature
- `perf`: A code change that improves performance
- `test`: Adding missing tests or correcting existing tests
- `chore`: Changes to the build process or auxiliary tools and libraries, such as documentation generation

### Examples

✅ **Good:**

- `feat: add new export to pdf plugin`
- `fix: resolve icon rendering issue in documentation`
- `docs: update README with installation steps`

❌ **Bad:**

- `新增导出PDF插件` (Chinese is not allowed)
- `update code` (Too vague)
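The rules above lend themselves to a mechanical check. As a hedged sketch (the type list and examples are this document's, but the checker itself is hypothetical and not part of the repo's CI):

```python
import re

SUBJECT_RE = re.compile(
    r"^(feat|fix|docs|style|refactor|perf|test|chore)"  # allowed types
    r"(\([\w.-]+\))?"                                   # optional (scope)
    r": \S.*$"                                          # ": " then a description
)

def is_valid_subject(subject: str) -> bool:
    # Reject non-ASCII subjects too, since commit messages must be English-only.
    return subject.isascii() and bool(SUBJECT_RE.match(subject))

print(is_valid_subject("feat: add new export to pdf plugin"))
print(is_valid_subject("新增导出PDF插件"))
```

The good examples above pass; the Chinese subject fails the ASCII check and `update code` fails the type prefix.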
---

## 🤖 Git Operations (Agent Rules)

**重要规则 (CRITICAL RULES FOR AI AGENTS)**:

AI Agent(如 Copilot、Gemini、Claude 等)在执行 Git 操作时必须遵守以下规则:

| 操作 (Operation) | 允许 (Allowed) | 说明 (Description) |
|-----------------|---------------|---------------------|
| 创建功能分支 | ✅ 允许 | `git checkout -b feature/xxx` |
| 推送到功能分支 | ✅ 允许 | `git push origin feature/xxx` |
| 直接推送到 main | ❌ 禁止 | `git push origin main` 需要用户手动执行 |
| 合并到 main | ❌ 禁止 | 任何合并操作需要用户明确批准 |
| Rebase 到 main | ❌ 禁止 | 任何 rebase 操作需要用户明确批准 |

**规则详解 (Rule Details)**:

1. **Feature Branches Allowed**: Agent **可以**创建新的功能分支并推送到远程仓库
2. **No Direct Push to Main**: Agent **禁止**直接推送任何更改到 `main` 分支
3. **No Auto-Merge**: Agent **禁止**在未经用户明确批准的情况下合并任何分支到 `main`
4. **User Approval Required**: 任何影响 `main` 分支的操作(push、merge、rebase)都需要用户明确批准

> [!CAUTION]
> 违反上述规则可能导致代码库不稳定或触发意外的 CI/CD 流程。Agent 应始终在功能分支上工作,并让用户决定何时合并到主分支。

---
## ⏳ 长时间运行任务通知 (Long-running Task Notifications)

如果一个前台任务(Foreground Task)的运行时间预计超过 **3 秒**,必须实现用户通知机制,以避免用户感到困惑。

**要求 (Requirements):**

1. **初始通知 (Initial Notification)**: 任务开始时**立即**发送第一条通知,告知用户正在处理中(例如:"正在使用 AI 生成中...")。
2. **周期性通知 (Periodic Notification)**: 之后每隔 **5 秒** 发送一次通知,告知用户任务仍在运行中。
3. **完成清理 (Cleanup)**: 任务完成后,应自动取消通知任务。

**代码示例 (Code Example):**
```python
import asyncio

async def long_running_task_with_notification(self, event_emitter, ...):
    # 定义实际任务
    async def actual_task():
        # ... 执行耗时操作 ...
        return result

    # 定义通知任务
    async def notification_task():
        # 立即发送首次通知
        if event_emitter:
            await self._send_notification(event_emitter, "info", "正在使用 AI 生成中...")

        # 之后每 5 秒通知一次
        while True:
            await asyncio.sleep(5)
            if event_emitter:
                await self._send_notification(event_emitter, "info", "仍在处理中,请耐心等待...")

    # 并发运行任务
    task_future = asyncio.ensure_future(actual_task())
    notify_future = asyncio.ensure_future(notification_task())

    # 等待任务完成(实际任务结束即返回)
    done, pending = await asyncio.wait(
        [task_future, notify_future],
        return_when=asyncio.FIRST_COMPLETED,
    )

    # 取消通知任务
    if not notify_future.done():
        notify_future.cancel()

    # 获取结果
    if task_future in done:
        return task_future.result()
```
**.github/workflows/release.yml** (vendored, 87 lines changed)
@@ -54,6 +54,9 @@ permissions:

jobs:
  check-changes:
    runs-on: ubuntu-latest
    env:
      LANG: en_US.UTF-8
      LC_ALL: en_US.UTF-8
    outputs:
      has_changes: ${{ steps.detect.outputs.has_changes }}
      changed_plugins: ${{ steps.detect.outputs.changed_plugins }}
@@ -65,6 +68,12 @@ jobs:
        with:
          fetch-depth: 0

      - name: Configure Git
        run: |
          git config --global core.quotepath false
          git config --global i18n.commitencoding utf-8
          git config --global i18n.logoutputencoding utf-8

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
@@ -101,7 +110,7 @@ jobs:
          fi

          # Compare versions and generate release notes
          python scripts/extract_plugin_versions.py --compare old_versions.json --output changes.md
          python scripts/extract_plugin_versions.py --compare old_versions.json --ignore-removed --output changes.md
          python scripts/extract_plugin_versions.py --compare old_versions.json --json --output changes.json

          echo "=== Version Changes ==="
@@ -131,6 +140,7 @@ jobs:

            echo "changed_plugins<<EOF" >> $GITHUB_OUTPUT
            cat changed_files.txt >> $GITHUB_OUTPUT
            echo "" >> $GITHUB_OUTPUT
            echo "EOF" >> $GITHUB_OUTPUT
          fi
@@ -138,6 +148,7 @@ jobs:
          {
            echo 'release_notes<<EOF'
            cat changes.md
            echo ""
            echo 'EOF'
          } >> $GITHUB_OUTPUT
@@ -145,6 +156,10 @@ jobs:
    needs: check-changes
    if: needs.check-changes.outputs.has_changes == 'true' || github.event_name == 'workflow_dispatch' || startsWith(github.ref, 'refs/tags/v')
    runs-on: ubuntu-latest
    env:
      LANG: en_US.UTF-8
      LC_ALL: en_US.UTF-8

    steps:
      - name: Checkout repository
@@ -152,6 +167,12 @@ jobs:
        with:
          fetch-depth: 0

      - name: Configure Git
        run: |
          git config --global core.quotepath false
          git config --global i18n.commitencoding utf-8
          git config --global i18n.logoutputencoding utf-8

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
@@ -175,10 +196,7 @@ jobs:
        id: plugins
        run: |
          python scripts/extract_plugin_versions.py --json --output plugin_versions.json
          python scripts/extract_plugin_versions.py --markdown --output plugin_table.md

          echo "=== Plugin Versions ==="
          cat plugin_table.md
          python scripts/extract_plugin_versions.py --json --output plugin_versions.json

      - name: Collect plugin files for release
        id: collect_files
@@ -198,32 +216,27 @@ jobs:
            fi
          done
        else
          echo "Collecting all plugin files..."
          find plugins -name "*.py" -type f ! -name "__*" | while read -r file; do
            dir=$(dirname "$file")
            mkdir -p "release_plugins/$dir"
            cp "$file" "release_plugins/$file"
          done
          echo "No changed plugins detected. Skipping file collection."
        fi

        # Create a zip file with error handling
        cd release_plugins
        if [ -n "$(ls -A . 2>/dev/null)" ]; then
          if zip -r ../plugins_release.zip .; then
            echo "Successfully created plugins_release.zip"
          else
            echo "Warning: Failed to create zip file, creating empty placeholder"
            touch ../plugins_release.zip
          fi
        else
          echo "No plugin files to zip, creating empty placeholder"
          touch ../plugins_release.zip
        fi
        cd ..
        # cd release_plugins
        # Zip step removed as per user request

        echo "=== Collected Files ==="
        find release_plugins -name "*.py" -type f | head -20

      - name: Debug Filenames
        run: |
          python3 -c "import sys; print(f'Filesystem encoding: {sys.getfilesystemencoding()}')"
          ls -R release_plugins

      - name: Upload Debug Artifacts
        uses: actions/upload-artifact@v4
        with:
          name: debug-plugins
          path: release_plugins/

      - name: Get commit messages
        id: commits
        if: github.event_name == 'push'
@@ -239,8 +252,9 @@ jobs:
          {
            echo 'commits<<EOF'
            echo "$COMMITS"
            echo ""
            echo 'EOF'
          } >> $GITHUB_OUTPUT
          } >> "$GITHUB_OUTPUT"

      - name: Generate release notes
        id: notes
@@ -280,16 +294,13 @@ jobs:
            echo "" >> release_notes.md
          fi

          echo "## All Plugin Versions / 所有插件版本" >> release_notes.md
          echo "" >> release_notes.md
          cat plugin_table.md >> release_notes.md
          echo "" >> release_notes.md

          cat >> release_notes.md << 'EOF'

          ## Download / 下载

          📦 **plugins_release.zip** - 包含本次更新的所有插件文件 / Contains all updated plugin files
          📦 **Download the updated plugin files below** / 请在下方下载更新的插件文件

          ### Installation / 安装
@@ -323,10 +334,21 @@ jobs:
          prerelease: ${{ github.event.inputs.prerelease || false }}
          files: |
            plugin_versions.json
            plugins_release.zip
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

      - name: Upload Release Assets
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          # Check if there are any .py files to upload
          if [ -d release_plugins ] && [ -n "$(find release_plugins -type f -name '*.py' 2>/dev/null)" ]; then
            echo "Uploading plugin files..."
            find release_plugins -type f -name "*.py" -print0 | xargs -0 gh release upload ${{ steps.version.outputs.version }} --clobber
          else
            echo "No plugin files to upload. Skipping asset upload."
          fi

      - name: Summary
        run: |
          echo "## 🚀 Release Created Successfully!" >> $GITHUB_STEP_SUMMARY
@@ -336,5 +358,4 @@ jobs:
          echo "### Updated Plugins" >> $GITHUB_STEP_SUMMARY
          echo "${{ needs.check-changes.outputs.release_notes }}" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "### All Plugin Versions" >> $GITHUB_STEP_SUMMARY
          cat plugin_table.md >> $GITHUB_STEP_SUMMARY
@@ -14,15 +14,17 @@ Located in the `plugins/` directory, containing Python-based enhancements:

#### Actions
- **Smart Mind Map** (`smart-mind-map`): Generates interactive mind maps from text.
- **Smart Infographic** (`infographic`): Transforms text into professional infographics using AntV.
- **Knowledge Card** (`knowledge-card`): Creates beautiful flashcards for learning.
- **Export to Excel** (`export_to_excel`): Exports chat history to Excel files.
- **Export to Word** (`export_to_docx`): Exports chat history to Word documents.
- **Summary** (`summary`): Text summarization tool.

#### Filters
- **Async Context Compression** (`async-context-compression`): Optimizes token usage via context compression.
- **Context Enhancement** (`context_enhancement_filter`): Enhances chat context.
- **Gemini Manifold Companion** (`gemini_manifold_companion`): Companion filter for Gemini Manifold.
- **Multi-Model Context Merger** (`multi_model_context_merger`): Merges context from multiple models.

#### Pipes
- **Gemini Manifold** (`gemini_mainfold`): Pipeline for Gemini model integration.
@@ -8,15 +8,17 @@ OpenWebUI 增强功能集合。包含个人开发与收集的### 🧩 插件 (Pl

#### Actions (交互增强)
- **Smart Mind Map** (`smart-mind-map`): 智能分析文本并生成交互式思维导图。
- **Smart Infographic** (`infographic`): 基于 AntV 的智能信息图生成工具。
- **Knowledge Card** (`knowledge-card`): 快速生成精美的学习记忆卡片。
- **Export to Excel** (`export_to_excel`): 将对话内容导出为 Excel 文件。
- **Export to Word** (`export_to_docx`): 将对话内容导出为 Word 文档。
- **Summary** (`summary`): 文本摘要生成工具。

#### Filters (消息处理)
- **Async Context Compression** (`async-context-compression`): 异步上下文压缩,优化 Token 使用。
- **Context Enhancement** (`context_enhancement_filter`): 上下文增强过滤器。
- **Gemini Manifold Companion** (`gemini_manifold_companion`): Gemini Manifold 配套增强。
- **Multi-Model Context Merger** (`multi_model_context_merger`): 多模型上下文合并。

#### Pipes (模型管道)
- **Gemini Manifold** (`gemini_mainfold`): 集成 Gemini 模型的管道。
@@ -1,12 +1,25 @@

# Export to Excel

<span class="category-badge action">Action</span>
<span class="version-badge">v1.0.0</span>
<span class="version-badge">v0.3.6</span>

Export chat conversations to Excel spreadsheet format for analysis, archiving, and sharing.

### What's New in v0.3.6

- **OpenWebUI-Style Theme**: Modern dark header with light gray zebra striping for better readability.
- **Zebra Striping**: Alternating row colors for improved visual scanning.
- **Smart Data Type Conversion**: Automatically converts columns to numeric or datetime types.
- **Full-Cell Bold/Italic**: Supports Markdown bold/italic formatting in Excel.
- **Partial Markdown Cleanup**: Removes partial Markdown symbols for cleaner output.
- **Export Scope**: Choose between "Last Message" and "All Messages".
- **Smart Sheet Naming**: Names sheets based on Markdown headers or message index.
- **Smart Filename Generation**: Generates filenames based on chat title, AI summary, or Markdown headers.
- **AI Title Generation**: Supports using a specific model (`MODEL_ID`) for title generation, with progress notifications.

---

## Overview

The Export to Excel plugin allows you to download your chat conversations as Excel files. This is useful for:

@@ -23,6 +36,13 @@ The Export to Excel plugin allows you to download your chat conversations as Exc

- :material-download: **One-Click Download**: Instant file generation
- :material-history: **Full History**: Exports complete conversation

## Configuration

- **Title Source**: Choose how the filename is generated:
  - `chat_title`: Use the chat title (default).
  - `ai_generated`: Use AI to generate a concise title from the content.
  - `markdown_title`: Extract the first H1/H2 header from the markdown content.
---

## Installation
@@ -1,12 +1,25 @@

# Export to Excel(导出到 Excel)

<span class="category-badge action">Action</span>
<span class="version-badge">v1.0.0</span>
<span class="version-badge">v0.3.6</span>

将聊天记录导出为 Excel 表格,便于分析、归档和分享。

### v0.3.6 更新内容

- **OpenWebUI 风格主题**:现代深灰表头,搭配浅灰斑马纹,提升可读性。
- **斑马纹效果**:隔行变色,方便视觉扫描。
- **智能数据类型转换**:自动将列转换为数字或日期类型。
- **全单元格粗体/斜体**:支持 Markdown 粗体/斜体格式。
- **部分 Markdown 清理**:移除部分 Markdown 符号,输出更整洁。
- **导出范围**:可选择导出"最后一条消息"或"所有消息"。
- **智能 Sheet 命名**:根据 Markdown 标题或消息索引命名 Sheet。
- **智能文件名生成**:支持对话标题、AI 总结或 Markdown 标题生成文件名。
- **AI 标题生成**:支持指定模型 (`MODEL_ID`) 生成标题,并提供生成进度通知。

---

## 概览

Export to Excel 插件可以把你的聊天记录下载为 Excel 文件,适用于:

@@ -23,6 +36,13 @@ Export to Excel 插件可以把你的聊天记录下载为 Excel 文件,适用

- :material-download: **一键下载**:即时生成文件
- :material-history: **完整历史**:导出完整会话内容

## 配置

- **标题来源 (Title Source)**:选择文件名的生成方式:
  - `chat_title`:使用对话标题(默认)。
  - `ai_generated`:使用 AI 根据内容生成简洁标题。
  - `markdown_title`:提取 Markdown 内容中的第一个 H1/H2 标题。

---

## 安装
@@ -33,7 +33,7 @@ Actions are interactive plugins that:

Transform text into professional infographics using AntV visualization engine with various templates.

**Version:** 1.0.0
**Version:** 1.3.0

[:octicons-arrow-right-24: Documentation](smart-infographic.md)

@@ -43,7 +43,7 @@ Actions are interactive plugins that:

Quickly generates beautiful learning memory cards, perfect for studying and memorization.

**Version:** 0.2.0
**Version:** 0.2.2

[:octicons-arrow-right-24: Documentation](knowledge-card.md)

@@ -53,7 +53,7 @@ Actions are interactive plugins that:

Export chat conversations to Excel spreadsheet format for analysis and archiving.

**Version:** 1.0.0
**Version:** 0.3.6

[:octicons-arrow-right-24: Documentation](export-to-excel.md)

@@ -73,7 +73,7 @@ Actions are interactive plugins that:

Generate concise summaries of long text content with key points extraction.

**Version:** 1.0.0
**Version:** 0.1.0

[:octicons-arrow-right-24: Documentation](summary.md)

@@ -33,7 +33,7 @@ Actions 是交互式插件,能够:

使用 AntV 可视化引擎,将文本转成专业的信息图。

**版本:** 1.0.0
**版本:** 1.3.0

[:octicons-arrow-right-24: 查看文档](smart-infographic.md)

@@ -43,7 +43,7 @@ Actions 是交互式插件,能够:

快速生成精美的学习记忆卡片,适合学习与记忆。

**版本:** 0.2.0
**版本:** 0.2.2

[:octicons-arrow-right-24: 查看文档](knowledge-card.md)

@@ -53,7 +53,7 @@ Actions 是交互式插件,能够:

将聊天记录导出为 Excel 电子表格,方便分析或归档。

**版本:** 1.0.0
**版本:** 0.3.6

[:octicons-arrow-right-24: 查看文档](export-to-excel.md)

@@ -73,7 +73,7 @@ Actions 是交互式插件,能够:

对长文本进行精简总结,提取要点。

**版本:** 1.0.0
**版本:** 0.1.0

[:octicons-arrow-right-24: 查看文档](summary.md)
@@ -1,7 +1,7 @@

# Knowledge Card

<span class="category-badge action">Action</span>
<span class="version-badge">v0.2.0</span>
<span class="version-badge">v0.2.2</span>

Quickly generates beautiful learning memory cards, perfect for studying and quick memorization.

@@ -1,7 +1,7 @@

# Smart Infographic

<span class="category-badge action">Action</span>
<span class="version-badge">v1.0.0</span>
<span class="version-badge">v1.3.0</span>

An AntV Infographic engine powered plugin that transforms long text into professional, beautiful infographics with a single click.

@@ -1,7 +1,7 @@

# Summary

<span class="category-badge action">Action</span>
<span class="version-badge">v1.0.0</span>
<span class="version-badge">v0.1.0</span>

Generate concise summaries of long text content with key points extraction.

@@ -1,7 +1,7 @@

# Summary(摘要)

<span class="category-badge action">Action</span>
<span class="version-badge">v1.0.0</span>
<span class="version-badge">v0.1.0</span>

为长文本生成简洁摘要,并提取关键要点。

@@ -1,7 +1,7 @@

# Async Context Compression

<span class="category-badge filter">Filter</span>
<span class="version-badge">v1.0.0</span>
<span class="version-badge">v1.1.0</span>

Reduces token consumption in long conversations through intelligent summarization while maintaining conversational coherence.

@@ -1,7 +1,7 @@

# Async Context Compression(异步上下文压缩)

<span class="category-badge filter">Filter</span>
<span class="version-badge">v1.0.0</span>
<span class="version-badge">v1.1.0</span>

通过智能摘要减少长对话的 token 消耗,同时保持对话连贯。

@@ -1,7 +1,7 @@

# Context Enhancement

<span class="category-badge filter">Filter</span>
<span class="version-badge">v1.0.0</span>
<span class="version-badge">v0.2</span>

Enhances chat context with additional information for improved LLM responses.

@@ -1,7 +1,7 @@

# Context Enhancement(上下文增强)

<span class="category-badge filter">Filter</span>
<span class="version-badge">v1.0.0</span>
<span class="version-badge">v0.2</span>

为聊天自动补充上下文信息,让 LLM 回复更相关、更准确。

@@ -1,7 +1,7 @@

# Gemini Manifold Companion

<span class="category-badge filter">Filter</span>
<span class="version-badge">v1.0.0</span>
<span class="version-badge">v0.3.2</span>

Companion filter for the Gemini Manifold pipe plugin, providing enhanced functionality.

@@ -1,7 +1,7 @@

# Gemini Manifold Companion

<span class="category-badge filter">Filter</span>
<span class="version-badge">v1.0.0</span>
<span class="version-badge">v0.3.2</span>

Gemini Manifold Pipe 的伴随过滤器,用于增强 Gemini 集成的处理效果。
@@ -16,13 +16,13 @@ Filters act as middleware in the message pipeline:

<div class="grid cards" markdown>

- :material-compress:{ .lg .middle } **Async Context Compression**
- :material-arrow-collapse-vertical:{ .lg .middle } **Async Context Compression**

    ---

    Reduces token consumption in long conversations through intelligent summarization while maintaining coherence.

    **Version:** 1.0.0
    **Version:** 1.1.0

    [:octicons-arrow-right-24: Documentation](async-context-compression.md)

@@ -32,7 +32,7 @@ Filters act as middleware in the message pipeline:

    Enhances chat context with additional information for better responses.

    **Version:** 1.0.0
    **Version:** 0.2

    [:octicons-arrow-right-24: Documentation](context-enhancement.md)

@@ -42,7 +42,7 @@ Filters act as middleware in the message pipeline:

    Companion filter for the Gemini Manifold pipe plugin.

    **Version:** 1.0.0
    **Version:** 1.7.0

    [:octicons-arrow-right-24: Documentation](gemini-manifold-companion.md)
@@ -16,13 +16,13 @@ Filter 充当消息管线中的中间件:

<div class="grid cards" markdown>

- :material-compress:{ .lg .middle } **Async Context Compression**
- :material-arrow-collapse-vertical:{ .lg .middle } **Async Context Compression**

    ---

    通过智能总结减少长对话的 token 消耗,同时保持连贯性。

    **版本:** 1.0.0
    **版本:** 1.1.0

    [:octicons-arrow-right-24: 查看文档](async-context-compression.md)

@@ -32,7 +32,7 @@ Filter 充当消息管线中的中间件:

    为聊天增加额外信息,提升回复质量。

    **版本:** 1.0.0
    **版本:** 0.2

    [:octicons-arrow-right-24: 查看文档](context-enhancement.md)

@@ -42,7 +42,7 @@ Filter 充当消息管线中的中间件:

    Gemini Manifold Pipe 插件的伴随过滤器。

    **版本:** 1.0.0
    **版本:** 1.7.0

    [:octicons-arrow-right-24: 查看文档](gemini-manifold-companion.md)
@@ -2,12 +2,33 @@

This plugin allows you to export your chat history to an Excel (.xlsx) file directly from the chat interface.

## What's New in v0.3.6

- **OpenWebUI-Style Theme**: Modern dark header (#1f2937) with light gray zebra striping for better readability.
- **Zebra Striping**: Alternating row colors (#ffffff / #f3f4f6) for improved visual scanning.
- **Smart Data Type Conversion**: Automatically converts columns to numeric or datetime types, with a fallback to string.
- **Full Cell Bold/Italic**: Supports full-cell Markdown bold (`**text**`) and italic (`*text*`) formatting in Excel.
- **Partial Markdown Cleanup**: Automatically removes partial Markdown formatting symbols (e.g., `Some **bold** text` → `Some bold text`) for cleaner Excel output.
- **Export Scope**: Added an `EXPORT_SCOPE` valve to choose between exporting tables from the "Last Message" (default) or "All Messages".
- **Smart Sheet Naming**: Automatically names sheets based on Markdown headers, AI titles (if enabled), or message index (e.g., `Msg1-Tab1`).
- **Multiple Tables Support**: Improved handling of multiple tables within single or multiple messages.
- **Smart Filename Generation**: Supports generating filenames based on the chat title, an AI summary, or Markdown headers.
- **Configuration Options**: Added a `TITLE_SOURCE` setting to control the filename generation strategy.
- **AI Title Generation**: Added a `MODEL_ID` setting to specify the model for AI title generation, with progress notifications.

## Features

- **One-Click Export**: Adds an "Export to Excel" button to the chat.
- **Automatic Header Extraction**: Intelligently identifies table headers from the chat content.
- **Multi-Table Support**: Handles multiple tables within a single chat session.

## Configuration

- **Title Source**: Choose how the filename is generated:
  - `chat_title`: Use the chat title (default).
  - `ai_generated`: Use AI to generate a concise title from the content.
  - `markdown_title`: Extract the first H1/H2 header from the Markdown content.

## Usage

1. Install the plugin.
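The "Partial Markdown Cleanup" behavior described above can be sketched as three `re.sub` passes. This is a minimal standalone sketch, not the plugin's exact code; note that the bold pass must run before the italic pass, otherwise `**bold**` would be half-consumed by the single-asterisk pattern.

```python
import re


def strip_partial_markdown(text: str) -> str:
    """Flatten Markdown emphasis markers that Excel cells cannot render."""
    # **bold** -> bold (must run first)
    text = re.sub(r"\*\*(.+?)\*\*", r"\1", text)
    # *italic* -> italic, but not the asterisks of a ** pair
    text = re.sub(r"(?<!\*)\*([^*]+)\*(?!\*)", r"\1", text)
    # `code` -> code
    text = re.sub(r"`(.+?)`", r"\1", text)
    return text
```

For example, `strip_partial_markdown("Some **bold** text")` yields plain `Some bold text`, matching the README's example.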
@@ -2,16 +2,37 @@

此插件允许你直接从聊天界面将对话历史导出为 Excel (.xlsx) 文件。

## v0.3.6 更新内容

- **OpenWebUI 风格主题**:现代深灰表头 (#1f2937),搭配浅灰斑马纹,提升可读性。
- **斑马纹效果**:隔行变色(#ffffff / #f3f4f6),方便视觉扫描。
- **智能数据类型转换**:自动将列转换为数字或日期类型,无法转换时保持字符串。
- **全单元格粗体/斜体**:支持 Excel 中的全单元格 Markdown 粗体 (`**text**`) 和斜体 (`*text*`) 格式。
- **部分 Markdown 清理**:自动移除部分 Markdown 格式符号(如 `部分**加粗**文本` → `部分加粗文本`),使 Excel 输出更整洁。
- **导出范围**:新增 `EXPORT_SCOPE` 配置项,可选择导出"最后一条消息"(默认)或"所有消息"中的表格。
- **智能 Sheet 命名**:根据 Markdown 标题、AI 标题(如启用)或消息索引(如 `消息1-表1`)自动命名 Sheet。
- **多表格支持**:优化了对单条或多条消息中包含多个表格的处理。
- **智能文件名生成**:支持根据对话标题、AI 总结或 Markdown 标题生成文件名。
- **配置选项**:新增 `TITLE_SOURCE` 设置,用于控制文件名生成策略。
- **AI 标题生成**:新增 `MODEL_ID` 设置用于指定 AI 标题生成模型,并支持生成进度通知。

## 功能特点

- **一键导出**:在聊天界面添加“导出为 Excel”按钮。
- **一键导出**:在聊天界面添加"导出为 Excel"按钮。
- **自动表头提取**:智能识别聊天内容中的表格标题。
- **多表支持**:支持处理单次对话中的多个表格。

## 配置

- **标题来源 (Title Source)**:选择文件名的生成方式:
  - `chat_title`:使用对话标题(默认)。
  - `ai_generated`:使用 AI 根据内容生成简洁标题。
  - `markdown_title`:提取 Markdown 内容中的第一个 H1/H2 标题。

## 使用方法

1. 安装插件。
2. 在任意对话中,点击“导出为 Excel”按钮。
2. 在任意对话中,点击"导出为 Excel"按钮。
3. 文件将自动下载到你的设备。

## 作者
@@ -3,9 +3,9 @@ title: Export to Excel
author: Fu-Jie
author_url: https://github.com/Fu-Jie
funding_url: https://github.com/Fu-Jie/awesome-openwebui
version: 0.3.3
version: 0.3.6
icon_url: data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIyNCIgaGVpZ2h0PSIyNCIgdmlld0JveD0iMCAwIDI0IDI0IiBmaWxsPSJub25lIiBzdHJva2U9ImN1cnJlbnRDb2xvciIgc3Ryb2tlLXdpZHRoPSIyIiBzdHJva2UtbGluZWNhcD0icm91bmQiIHN0cm9rZS1saW5lam9pbj0icm91bmQiPjxwYXRoIGQ9Ik0xNSAySDZhMiAyIDAgMCAwLTIgMnYxNmEyIDIgMCAwIDAgMiAyaDEyYTIgMiAwIDAgMCAyLTJWN1oiLz48cGF0aCBkPSJNMTQgMnY0YTIgMiAwIDAgMCAyIDJoNCIvPjxwYXRoIGQ9Ik04IDEzaDIiLz48cGF0aCBkPSJNMTQgMTNoMiIvPjxwYXRoIGQ9Ik04IDE3aDIiLz48cGF0aCBkPSJNMTQgMTdoMiIvPjwvc3ZnPg==
description: Exports the current chat history to an Excel (.xlsx) file, with automatic header extraction.
description: Extracts tables from chat messages and exports them to Excel (.xlsx) files with smart formatting.
"""

import os

@@ -15,14 +15,33 @@ import base64
from fastapi import FastAPI, HTTPException
from typing import Optional, Callable, Awaitable, Any, List, Dict
import datetime
import asyncio
from open_webui.models.chats import Chats
from open_webui.models.users import Users
from open_webui.utils.chat import generate_chat_completion
from pydantic import BaseModel, Field
from typing import Literal

app = FastAPI()


class Action:
    class Valves(BaseModel):
        TITLE_SOURCE: Literal["chat_title", "ai_generated", "markdown_title"] = Field(
            default="chat_title",
            description="Title Source: 'chat_title' (Chat Title), 'ai_generated' (AI Generated), 'markdown_title' (Markdown Title)",
        )
        EXPORT_SCOPE: Literal["last_message", "all_messages"] = Field(
            default="last_message",
            description="Export Scope: 'last_message' (Last Message Only), 'all_messages' (All Messages)",
        )
        MODEL_ID: str = Field(
            default="",
            description="Model ID for AI title generation. Leave empty to use the current chat model.",
        )

    def __init__(self):
        pass
        self.valves = self.Valves()

    async def _send_notification(self, emitter: Callable, type: str, content: str):
        await emitter(
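The new `Valves` model leans on pydantic's `Literal` fields so that invalid settings are rejected the moment the model is constructed, instead of failing deep inside the export. A dependency-free sketch of the same contract (hypothetical stand-in, for illustration only; pydantic does this declaratively):

```python
TITLE_SOURCES = ("chat_title", "ai_generated", "markdown_title")
EXPORT_SCOPES = ("last_message", "all_messages")


class Valves:
    """Minimal stand-in for the pydantic model: defaults plus choice validation."""

    def __init__(self, title_source="chat_title", export_scope="last_message", model_id=""):
        if title_source not in TITLE_SOURCES:
            raise ValueError(f"invalid TITLE_SOURCE: {title_source!r}")
        if export_scope not in EXPORT_SCOPES:
            raise ValueError(f"invalid EXPORT_SCOPE: {export_scope!r}")
        self.TITLE_SOURCE = title_source
        self.EXPORT_SCOPE = export_scope
        self.MODEL_ID = model_id
```

With pydantic, assigning `EXPORT_SCOPE="everything"` raises a `ValidationError`; the sketch raises a plain `ValueError` at construction for the same misconfiguration.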
@@ -35,6 +54,7 @@ class Action:
        __user__=None,
        __event_emitter__=None,
        __event_call__: Optional[Callable[[Any], Awaitable[None]]] = None,
        __request__: Optional[Any] = None,
    ):
        print(f"action:{__name__}")
        if isinstance(__user__, (list, tuple)):
@@ -53,8 +73,6 @@ class Action:
        user_id = __user__.get("id", "unknown_user")

        if __event_emitter__:
            last_assistant_message = body["messages"][-1]

            await __event_emitter__(
                {
                    "type": "status",
@@ -63,24 +81,180 @@ class Action:
            )

        try:
            message_content = last_assistant_message["content"]
            tables = self.extract_tables_from_message(message_content)
            messages = body.get("messages", [])
            if not messages:
                raise HTTPException(status_code=400, detail="No messages found.")

            if not tables:
                raise HTTPException(status_code=400, detail="No tables found.")
            # Determine messages to process based on scope
            target_messages = []
            if self.valves.EXPORT_SCOPE == "all_messages":
                target_messages = messages
            else:
                target_messages = [messages[-1]]

            # Get dynamic filename and sheet names
            workbook_name, sheet_names = self.generate_names_from_content(
                message_content, tables
            )
            all_tables = []
            all_sheet_names = []

            # Process messages
            for msg_index, msg in enumerate(target_messages):
                content = msg.get("content", "")
                tables = self.extract_tables_from_message(content)

                if not tables:
                    continue

                # Generate sheet names for this message's tables
                # If multiple messages, we need to ensure uniqueness across the whole workbook
                # We'll generate base names here and deduplicate later if needed,
                # or better: generate unique names on the fly.

                # Extract headers for this message
                headers = []
                lines = content.split("\n")
                for i, line in enumerate(lines):
                    if re.match(r"^#{1,6}\s+", line):
                        headers.append(
                            {
                                "text": re.sub(r"^#{1,6}\s+", "", line).strip(),
                                "line_num": i,
                            }
                        )

                for table_index, table in enumerate(tables):
                    sheet_name = ""

                    # 1. Try Markdown Header (closest above)
                    table_start_line = table["start_line"] - 1
                    closest_header_text = None
                    candidate_headers = [
                        h for h in headers if h["line_num"] < table_start_line
                    ]
                    if candidate_headers:
                        closest_header = max(
                            candidate_headers, key=lambda x: x["line_num"]
                        )
                        closest_header_text = closest_header["text"]

                    if closest_header_text:
                        sheet_name = self.clean_sheet_name(closest_header_text)

                    # 2. AI Generated (Only if explicitly enabled and we have a request object)
                    # Note: Generating titles for EVERY table in all messages might be too slow/expensive.
                    # We'll skip this for 'all_messages' scope to avoid timeout, unless it's just one message.
                    if (
                        not sheet_name
                        and self.valves.TITLE_SOURCE == "ai_generated"
                        and len(target_messages) == 1
                    ):
                        # Logic for AI generation (simplified for now, reusing existing flow if possible)
                        pass

                    # 3. Fallback: Message Index
                    if not sheet_name:
                        if len(target_messages) > 1:
                            # Use global message index (from original list if possible, but here we iterate target_messages)
                            # Let's use the loop index.
                            # If multiple tables in one message: "Msg 1 - Table 1"
                            if len(tables) > 1:
                                sheet_name = f"Msg{msg_index+1}-Tab{table_index+1}"
                            else:
                                sheet_name = f"Msg{msg_index+1}"
                        else:
                            # Single message (last_message scope)
                            if len(tables) > 1:
                                sheet_name = f"Table {table_index+1}"
                            else:
                                sheet_name = "Sheet1"

                    all_tables.append(table)
                    all_sheet_names.append(sheet_name)

            if not all_tables:
                raise HTTPException(
                    status_code=400, detail="No tables found in the selected scope."
                )

            # Deduplicate sheet names
            final_sheet_names = []
            seen_names = {}
            for name in all_sheet_names:
                base_name = name
                counter = 1
                while name in seen_names:
                    name = f"{base_name} ({counter})"
                    counter += 1
                seen_names[name] = True
                final_sheet_names.append(name)
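The sheet-name deduplication loop added above can be isolated and checked on its own. Excel rejects duplicate worksheet names, so repeats get a `" (n)"` suffix; this is the same algorithm extracted into a standalone helper:

```python
def dedupe_sheet_names(names):
    """Suffix repeated sheet names with ' (n)' so every name is unique."""
    seen, result = set(), []
    for name in names:
        base, counter = name, 1
        # Keep bumping the counter until the candidate is unused.
        while name in seen:
            name = f"{base} ({counter})"
            counter += 1
        seen.add(name)
        result.append(name)
    return result
```

`dedupe_sheet_names(["Sales", "Sales", "Sales", "Data"])` returns `["Sales", "Sales (1)", "Sales (2)", "Data"]`.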
            # Notify user about the number of tables found
            table_count = len(all_tables)
            if self.valves.EXPORT_SCOPE == "all_messages":
                await self._send_notification(
                    __event_emitter__,
                    "info",
                    f"Found {table_count} table(s) in all messages.",
                )
                # Wait a moment for user to see the notification before download dialog
                await asyncio.sleep(1.5)
            # Generate Workbook Title (Filename)
            # Use the title of the chat, or the first header of the first message with tables
            title = ""
            chat_id = self.extract_chat_id(body, None)
            chat_title = ""
            if chat_id:
                chat_title = await self.fetch_chat_title(chat_id, user_id)

            if (
                self.valves.TITLE_SOURCE == "chat_title"
                or not self.valves.TITLE_SOURCE
            ):
                title = chat_title
            elif self.valves.TITLE_SOURCE == "ai_generated":
                # Use AI to generate a title based on message content
                if target_messages and __request__:
                    # Get content from the first message with tables
                    content_for_title = ""
                    for msg in target_messages:
                        msg_content = msg.get("content", "")
                        if msg_content:
                            content_for_title = msg_content
                            break
                    if content_for_title:
                        title = await self.generate_title_using_ai(
                            body,
                            content_for_title,
                            user_id,
                            __request__,
                            __event_emitter__,
                        )
            elif self.valves.TITLE_SOURCE == "markdown_title":
                # Try to find first header in the first message that has content
                for msg in target_messages:
                    extracted = self.extract_title(msg.get("content", ""))
                    if extracted:
                        title = extracted
                        break

            # Fallback for filename
            if not title:
                if chat_title:
                    title = chat_title
                else:
                    # Try extracting from content again if not already tried
                    if self.valves.TITLE_SOURCE != "markdown_title":
                        for msg in target_messages:
                            extracted = self.extract_title(msg.get("content", ""))
                            if extracted:
                                title = extracted
                                break

            # Use optimized filename generation logic
            current_datetime = datetime.datetime.now()
            formatted_date = current_datetime.strftime("%Y%m%d")

            # If no title found, use user_yyyymmdd format
            if not workbook_name:
            if not title:
                workbook_name = f"{user_name}_{formatted_date}"
            else:
                workbook_name = self.clean_filename(title)

            filename = f"{workbook_name}.xlsx"
            excel_file_path = os.path.join(
@@ -89,8 +263,10 @@ class Action:

            os.makedirs(os.path.dirname(excel_file_path), exist_ok=True)

            # Save tables to Excel (using enhanced formatting)
            self.save_tables_to_excel_enhanced(tables, excel_file_path, sheet_names)
            # Save tables to Excel
            self.save_tables_to_excel_enhanced(
                all_tables, excel_file_path, final_sheet_names
            )

            # Trigger file download
            if __event_call__:
@@ -172,6 +348,149 @@ class Action:
                __event_emitter__, "error", "No tables found to export!"
            )

    async def generate_title_using_ai(
        self,
        body: dict,
        content: str,
        user_id: str,
        request: Any,
        event_emitter: Callable = None,
    ) -> str:
        if not request:
            return ""

        try:
            user_obj = Users.get_user_by_id(user_id)
            # Use configured MODEL_ID or fallback to current chat model
            model = (
                self.valves.MODEL_ID.strip()
                if self.valves.MODEL_ID
                else body.get("model")
            )

            payload = {
                "model": model,
                "messages": [
                    {
                        "role": "system",
                        "content": "You are a helpful assistant. Generate a short, concise filename (max 10 words) for an Excel export based on the following content. Do not use quotes or file extensions. Avoid special characters that are invalid in filenames. Only output the filename.",
                    },
                    {"role": "user", "content": content[:2000]},  # Limit content length
                ],
                "stream": False,
            }

            # Define the generation task
            async def generate_task():
                return await generate_chat_completion(request, payload, user_obj)

            # Define the notification task
            async def notification_task():
                # Send initial notification immediately
                if event_emitter:
                    await self._send_notification(
                        event_emitter,
                        "info",
                        "AI is generating a filename for your Excel file...",
                    )

                # Subsequent notifications every 5 seconds
                while True:
                    await asyncio.sleep(5)
                    if event_emitter:
                        await self._send_notification(
                            event_emitter,
                            "info",
                            "Still generating filename, please be patient...",
                        )

            # Run tasks concurrently
            gen_future = asyncio.ensure_future(generate_task())
            notify_future = asyncio.ensure_future(notification_task())

            done, pending = await asyncio.wait(
                [gen_future, notify_future], return_when=asyncio.FIRST_COMPLETED
            )

            # Cancel notification task if generation is done
            if not notify_future.done():
                notify_future.cancel()

            # Get result
            if gen_future in done:
                response = gen_future.result()
                if response and "choices" in response:
                    return response["choices"][0]["message"]["content"].strip()
            else:
                # Should not happen if return_when=FIRST_COMPLETED and we cancel notify
                await gen_future
                response = gen_future.result()
                if response and "choices" in response:
                    return response["choices"][0]["message"]["content"].strip()

        except Exception as e:
            print(f"Error generating title: {e}")
            if event_emitter:
                await self._send_notification(
                    event_emitter,
                    "warning",
                    f"AI title generation failed, using default title. Error: {str(e)}",
                )

        return ""
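The concurrency pattern in `generate_title_using_ai` (run the real work and a periodic heartbeat side by side with `asyncio.wait(..., return_when=FIRST_COMPLETED)`, then cancel the heartbeat as soon as the work finishes) reduces to a small standalone sketch. The helper name here is hypothetical, not part of the plugin:

```python
import asyncio


async def run_with_heartbeat(work, interval, notify):
    """Await `work` while calling `notify` every `interval` seconds."""
    async def heartbeat():
        while True:
            await asyncio.sleep(interval)
            notify("still working...")

    work_task = asyncio.ensure_future(work)
    beat_task = asyncio.ensure_future(heartbeat())

    # Return as soon as either task finishes (the heartbeat never does on its own).
    done, pending = await asyncio.wait(
        {work_task, beat_task}, return_when=asyncio.FIRST_COMPLETED
    )
    if not beat_task.done():
        beat_task.cancel()  # stop the progress pings
    return await work_task
```

With a fast job and a slow interval, the heartbeat is cancelled before it ever fires; with a slow job, `notify` runs repeatedly until the result arrives.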
    def extract_title(self, content: str) -> str:
        """Extract title from Markdown h1/h2 only"""
        lines = content.split("\n")
        for line in lines:
            # Match h1-h2 headings only
            match = re.match(r"^#{1,2}\s+(.+)$", line.strip())
            if match:
                return match.group(1).strip()
        return ""
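`extract_title` is self-contained and easy to exercise directly; it deliberately accepts only H1/H2 headings, so deeper section headings never become filenames. The same logic as a free function, with its behavior pinned down:

```python
import re


def extract_title(content: str) -> str:
    """Return the first Markdown H1/H2 heading text, or '' if none."""
    for line in content.split("\n"):
        match = re.match(r"^#{1,2}\s+(.+)$", line.strip())
        if match:
            return match.group(1).strip()
    return ""
```

`extract_title("intro\n## Sales Report\n### details")` gives `"Sales Report"`, while a document containing only `### ...` headings yields `""` and falls through to the chat-title fallback.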
    def extract_chat_id(self, body: dict, metadata: Optional[dict]) -> str:
        """Extract chat_id from body or metadata"""
        if isinstance(body, dict):
            chat_id = body.get("chat_id") or body.get("id")
            if isinstance(chat_id, str) and chat_id.strip():
                return chat_id.strip()

            for key in ("chat", "conversation"):
                nested = body.get(key)
                if isinstance(nested, dict):
                    nested_id = nested.get("id") or nested.get("chat_id")
                    if isinstance(nested_id, str) and nested_id.strip():
                        return nested_id.strip()
        if isinstance(metadata, dict):
            chat_id = metadata.get("chat_id")
            if isinstance(chat_id, str) and chat_id.strip():
                return chat_id.strip()
        return ""

    async def fetch_chat_title(self, chat_id: str, user_id: str = "") -> str:
        """Fetch chat title from database by chat_id"""
        if not chat_id:
            return ""

        def _load_chat():
            if user_id:
                return Chats.get_chat_by_id_and_user_id(id=chat_id, user_id=user_id)
            return Chats.get_chat_by_id(chat_id)

        try:
            chat = await asyncio.to_thread(_load_chat)
        except Exception as exc:
            print(f"Failed to load chat {chat_id}: {exc}")
            return ""

        if not chat:
            return ""

        data = getattr(chat, "chat", {}) or {}
        title = data.get("title") or getattr(chat, "title", "")
        return title.strip() if isinstance(title, str) else ""

    def extract_tables_from_message(self, message: str) -> List[Dict]:
        """
        Extract Markdown tables and their positions from message text
@@ -456,24 +775,51 @@ class Action:
        with pd.ExcelWriter(file_path, engine="xlsxwriter") as writer:
            workbook = writer.book

            # OpenWebUI-style theme colors
            HEADER_BG = "#1f2937"  # Dark gray (matches OpenWebUI sidebar)
            HEADER_FG = "#ffffff"  # White text
            ROW_ODD_BG = "#ffffff"  # White for odd rows
            ROW_EVEN_BG = "#f3f4f6"  # Light gray for even rows (zebra striping)
            BORDER_COLOR = "#e5e7eb"  # Light border

            # Define header style - Center aligned
            header_format = workbook.add_format(
                {
                    "bold": True,
                    "font_size": 12,
                    "font_color": "white",
                    "bg_color": "#00abbd",
                    "font_size": 11,
                    "font_name": "Arial",
                    "font_color": HEADER_FG,
                    "bg_color": HEADER_BG,
                    "border": 1,
                    "border_color": BORDER_COLOR,
                    "align": "center",
                    "valign": "vcenter",
                    "text_wrap": True,
                }
            )

            # Text cell style - Left aligned
            # Text cell style - Left aligned (odd rows)
            text_format = workbook.add_format(
                {
                    "font_name": "Arial",
                    "font_size": 10,
                    "border": 1,
                    "border_color": BORDER_COLOR,
                    "bg_color": ROW_ODD_BG,
                    "align": "left",
                    "valign": "vcenter",
                    "text_wrap": True,
                }
            )

            # Text cell style - Left aligned (even rows - zebra)
            text_format_alt = workbook.add_format(
                {
                    "font_name": "Arial",
                    "font_size": 10,
                    "border": 1,
                    "border_color": BORDER_COLOR,
                    "bg_color": ROW_EVEN_BG,
                    "align": "left",
                    "valign": "vcenter",
                    "text_wrap": True,
@@ -482,14 +828,51 @@ class Action:

            # Number cell style - Right aligned
            number_format = workbook.add_format(
                {"border": 1, "align": "right", "valign": "vcenter"}
                {
                    "font_name": "Arial",
                    "font_size": 10,
                    "border": 1,
                    "border_color": BORDER_COLOR,
                    "bg_color": ROW_ODD_BG,
                    "align": "right",
                    "valign": "vcenter",
                }
            )

            number_format_alt = workbook.add_format(
                {
                    "font_name": "Arial",
                    "font_size": 10,
                    "border": 1,
                    "border_color": BORDER_COLOR,
                    "bg_color": ROW_EVEN_BG,
                    "align": "right",
                    "valign": "vcenter",
                }
            )

            # Integer format - Right aligned
            integer_format = workbook.add_format(
                {
                    "num_format": "0",
                    "font_name": "Arial",
                    "font_size": 10,
                    "border": 1,
                    "border_color": BORDER_COLOR,
                    "bg_color": ROW_ODD_BG,
                    "align": "right",
                    "valign": "vcenter",
                }
            )

            integer_format_alt = workbook.add_format(
                {
                    "num_format": "0",
                    "font_name": "Arial",
                    "font_size": 10,
                    "border": 1,
                    "border_color": BORDER_COLOR,
                    "bg_color": ROW_EVEN_BG,
                    "align": "right",
                    "valign": "vcenter",
                }
@@ -499,7 +882,24 @@ class Action:
            decimal_format = workbook.add_format(
                {
                    "num_format": "0.00",
                    "font_name": "Arial",
                    "font_size": 10,
                    "border": 1,
                    "border_color": BORDER_COLOR,
                    "bg_color": ROW_ODD_BG,
                    "align": "right",
                    "valign": "vcenter",
                }
            )

            decimal_format_alt = workbook.add_format(
                {
                    "num_format": "0.00",
                    "font_name": "Arial",
                    "font_size": 10,
                    "border": 1,
                    "border_color": BORDER_COLOR,
                    "bg_color": ROW_EVEN_BG,
                    "align": "right",
                    "valign": "vcenter",
                }
@@ -508,7 +908,24 @@ class Action:
            # Date format - Center aligned
            date_format = workbook.add_format(
                {
                    "font_name": "Arial",
                    "font_size": 10,
                    "border": 1,
                    "border_color": BORDER_COLOR,
                    "bg_color": ROW_ODD_BG,
                    "align": "center",
                    "valign": "vcenter",
                    "text_wrap": True,
                }
            )

            date_format_alt = workbook.add_format(
                {
                    "font_name": "Arial",
                    "font_size": 10,
                    "border": 1,
                    "border_color": BORDER_COLOR,
                    "bg_color": ROW_EVEN_BG,
                    "align": "center",
                    "valign": "vcenter",
                    "text_wrap": True,
@@ -518,12 +935,114 @@ class Action:
            # Sequence format - Center aligned
            sequence_format = workbook.add_format(
                {
                    "font_name": "Arial",
                    "font_size": 10,
                    "border": 1,
                    "border_color": BORDER_COLOR,
                    "bg_color": ROW_ODD_BG,
                    "align": "center",
                    "valign": "vcenter",
                }
            )

            sequence_format_alt = workbook.add_format(
                {
                    "font_name": "Arial",
                    "font_size": 10,
                    "border": 1,
                    "border_color": BORDER_COLOR,
                    "bg_color": ROW_EVEN_BG,
                    "align": "center",
                    "valign": "vcenter",
                }
            )

            # Bold cell style (for full cell bolding)
            text_bold_format = workbook.add_format(
                {
                    "font_name": "Arial",
                    "font_size": 10,
                    "border": 1,
                    "border_color": BORDER_COLOR,
                    "bg_color": ROW_ODD_BG,
                    "align": "left",
                    "valign": "vcenter",
                    "text_wrap": True,
                    "bold": True,
                }
            )

            text_bold_format_alt = workbook.add_format(
                {
                    "font_name": "Arial",
                    "font_size": 10,
                    "border": 1,
                    "border_color": BORDER_COLOR,
                    "bg_color": ROW_EVEN_BG,
                    "align": "left",
                    "valign": "vcenter",
                    "text_wrap": True,
                    "bold": True,
                }
            )

            # Italic cell style (for full cell italics)
            text_italic_format = workbook.add_format(
                {
                    "font_name": "Arial",
                    "font_size": 10,
                    "border": 1,
                    "border_color": BORDER_COLOR,
                    "bg_color": ROW_ODD_BG,
                    "align": "left",
                    "valign": "vcenter",
                    "text_wrap": True,
                    "italic": True,
                }
            )

            text_italic_format_alt = workbook.add_format(
                {
                    "font_name": "Arial",
                    "font_size": 10,
                    "border": 1,
                    "border_color": BORDER_COLOR,
                    "bg_color": ROW_EVEN_BG,
                    "align": "left",
                    "valign": "vcenter",
                    "text_wrap": True,
                    "italic": True,
                }
            )

            # Code cell style (for inline code with highlight background)
            CODE_BG = "#f0f0f0"  # Light gray background for code
            text_code_format = workbook.add_format(
                {
                    "font_name": "Consolas",
                    "font_size": 10,
                    "border": 1,
                    "border_color": BORDER_COLOR,
                    "bg_color": CODE_BG,
                    "align": "left",
                    "valign": "vcenter",
                    "text_wrap": True,
                }
            )

            text_code_format_alt = workbook.add_format(
                {
                    "font_name": "Consolas",
                    "font_size": 10,
                    "border": 1,
                    "border_color": BORDER_COLOR,
                    "bg_color": CODE_BG,
                    "align": "left",
                    "valign": "vcenter",
                    "text_wrap": True,
                }
            )

            for i, table in enumerate(tables):
                try:
                    table_data = table["data"]
@@ -565,12 +1084,18 @@ class Action:

                    print(f"DataFrame created with columns: {list(df.columns)}")

                    # Fix pandas FutureWarning
                    # Smart data type conversion using pandas infer_objects
                    for col in df.columns:
                        # Try numeric conversion first
                        try:
                            df[col] = pd.to_numeric(df[col])
                        except (ValueError, TypeError):
                            pass
                        # Try datetime conversion
                        try:
                            df[col] = pd.to_datetime(df[col], errors="raise")
                        except (ValueError, TypeError):
                            # Keep as string, use infer_objects for optimization
                            df[col] = df[col].infer_objects()

                    # Write data first (without header)
                    df.to_excel(
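The conversion order above (numeric first, then datetime, else keep strings) can be mirrored per value with only the standard library; this is a simplified analogue of the `pd.to_numeric` / `pd.to_datetime` chain, not the plugin's code, and the date format here is an illustrative assumption. One caveat worth noting about the pandas version: a column that already converted to numbers is still handed to `pd.to_datetime`, which interprets integers as epoch timestamps, so skipping the datetime pass after a successful numeric pass may be worth considering.

```python
from datetime import datetime


def infer_cell(value: str):
    """Convert a table cell: int/float first, then ISO date, else the string."""
    for cast in (int, float):
        try:
            return cast(value)
        except ValueError:
            pass
    try:
        return datetime.strptime(value, "%Y-%m-%d")
    except ValueError:
        return value  # fall back to the original text
```

`infer_cell("42")` yields the integer `42`, `infer_cell("2024-01-02")` a `datetime`, and anything unparseable comes back unchanged.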
@@ -582,19 +1107,25 @@ class Action:
                    )
                    worksheet = writer.sheets[sheet_name]

                    # Apply enhanced formatting
                    # Apply enhanced formatting with zebra striping
                    formats = {
                        "header": header_format,
                        "text": [text_format, text_format_alt],
                        "number": [number_format, number_format_alt],
                        "integer": [integer_format, integer_format_alt],
                        "decimal": [decimal_format, decimal_format_alt],
                        "date": [date_format, date_format_alt],
                        "sequence": [sequence_format, sequence_format_alt],
                        "bold": [text_bold_format, text_bold_format_alt],
                        "italic": [text_italic_format, text_italic_format_alt],
                        "code": [text_code_format, text_code_format_alt],
                    }
                    self.apply_enhanced_formatting(
                        worksheet,
                        df,
                        headers,
                        workbook,
                        header_format,
                        text_format,
                        number_format,
                        integer_format,
                        decimal_format,
                        date_format,
                        sequence_format,
                        formats,
                    )

                except Exception as e:
@@ -611,23 +1142,22 @@ class Action:
        df,
        headers,
        workbook,
        header_format,
        text_format,
        number_format,
        integer_format,
        decimal_format,
        date_format,
        sequence_format,
        formats,
    ):
        """
        Apply enhanced formatting
        - Header: Center aligned
        Apply enhanced formatting with zebra striping
        - Header: Center aligned (dark background)
        - Number: Right aligned
        - Text: Left aligned
        - Date: Center aligned
        - Sequence: Center aligned
        - Zebra striping: alternating row colors
        - Supports full cell Markdown bold (**text**) and italic (*text*)
        """
        try:
            # Extract format from formats dict
            header_format = formats["header"]

            # 1. Write headers (Center aligned)
            print(f"Writing headers with enhanced alignment: {headers}")
            for col_idx, header in enumerate(headers):
@@ -651,43 +1181,99 @@ class Action:
|
||||
else:
|
||||
column_types[col_idx] = "text"
|
||||
|
||||
# 3. Write and format data
|
||||
# 3. Write and format data with zebra striping
|
||||
for row_idx, row in df.iterrows():
|
||||
# Determine if odd or even row (0-indexed, so row 0 is odd visually as row 1)
|
||||
is_alt_row = (
|
||||
row_idx % 2 == 1
|
||||
) # Even index = odd visual row, use alt format
|
||||

                for col_idx, value in enumerate(row):
                    content_type = column_types.get(col_idx, "text")

                    # Select format based on content type
                    # Select format based on content type and zebra striping
                    fmt_idx = 1 if is_alt_row else 0

                    if content_type == "number":
                        # Number - Right aligned
                        if pd.api.types.is_numeric_dtype(df.iloc[:, col_idx]):
                            if pd.api.types.is_integer_dtype(df.iloc[:, col_idx]):
                                current_format = integer_format
                                current_format = formats["integer"][fmt_idx]
                            else:
                                try:
                                    numeric_value = float(value)
                                    if numeric_value.is_integer():
                                        current_format = integer_format
                                        current_format = formats["integer"][fmt_idx]
                                        value = int(numeric_value)
                                    else:
                                        current_format = decimal_format
                                        current_format = formats["decimal"][fmt_idx]
                                except (ValueError, TypeError):
                                    current_format = decimal_format
                                    current_format = formats["decimal"][fmt_idx]
                        else:
                            current_format = number_format
                            current_format = formats["number"][fmt_idx]

                    elif content_type == "date":
                        # Date - Center aligned
                        current_format = date_format
                        current_format = formats["date"][fmt_idx]

                    elif content_type == "sequence":
                        # Sequence - Center aligned
                        current_format = sequence_format
                        current_format = formats["sequence"][fmt_idx]

                    else:
                        # Text - Left aligned
                        current_format = text_format
                        current_format = formats["text"][fmt_idx]

                    worksheet.write(row_idx + 1, col_idx, value, current_format)
                    if content_type == "text" and isinstance(value, str):
                        # Check for full cell bold (**text**)
                        match_bold = re.fullmatch(r"\*\*(.+)\*\*", value.strip())
                        # Check for full cell italic (*text*)
                        match_italic = re.fullmatch(r"\*(.+)\*", value.strip())
                        # Check for full cell code (`text`)
                        match_code = re.fullmatch(r"`(.+)`", value.strip())

                        if match_bold:
                            # Extract content and apply bold format
                            clean_value = match_bold.group(1)
                            worksheet.write(
                                row_idx + 1,
                                col_idx,
                                clean_value,
                                formats["bold"][fmt_idx],
                            )
                        elif match_italic:
                            # Extract content and apply italic format
                            clean_value = match_italic.group(1)
                            worksheet.write(
                                row_idx + 1,
                                col_idx,
                                clean_value,
                                formats["italic"][fmt_idx],
                            )
                        elif match_code:
                            # Extract content and apply code format (highlighted)
                            clean_value = match_code.group(1)
                            worksheet.write(
                                row_idx + 1,
                                col_idx,
                                clean_value,
                                formats["code"][fmt_idx],
                            )
                        else:
                            # Remove partial markdown formatting symbols (can't render partial formatting in Excel)
                            # Remove bold markers **text** -> text
                            clean_value = re.sub(r"\*\*(.+?)\*\*", r"\1", value)
                            # Remove italic markers *text* -> text (but not inside **)
                            clean_value = re.sub(
                                r"(?<!\*)\*([^*]+)\*(?!\*)", r"\1", clean_value
                            )
                            # Remove code markers `text` -> text
                            clean_value = re.sub(r"`(.+?)`", r"\1", clean_value)
                            worksheet.write(
                                row_idx + 1, col_idx, clean_value, current_format
                            )
                    else:
                        worksheet.write(row_idx + 1, col_idx, value, current_format)

            # 4. Auto-adjust column width
            for col_idx, column in enumerate(headers):
@@ -777,3 +1363,6 @@ class Action:

        except Exception as e:
            print(f"Error in basic formatting: {str(e)}")

        except Exception as e:
            print(f"Error in basic formatting: {str(e)}")

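The full-cell Markdown detection added in this hunk can be exercised in isolation. The sketch below mirrors the diff's `re.fullmatch` / `re.sub` patterns in a standalone helper (`classify_cell` is an illustrative name, not a function in the plugin):

```python
import re

def classify_cell(value: str):
    """Return (style, text): full-cell **bold**, *italic*, or `code`;
    otherwise strip partial markers, mirroring the diff's regexes."""
    stripped = value.strip()
    m = re.fullmatch(r"\*\*(.+)\*\*", stripped)
    if m:
        return "bold", m.group(1)
    m = re.fullmatch(r"\*(.+)\*", stripped)
    if m:
        return "italic", m.group(1)
    m = re.fullmatch(r"`(.+)`", stripped)
    if m:
        return "code", m.group(1)
    # Partial formatting can't be rendered per-cell in Excel: strip markers.
    clean = re.sub(r"\*\*(.+?)\*\*", r"\1", value)
    clean = re.sub(r"(?<!\*)\*([^*]+)\*(?!\*)", r"\1", clean)
    clean = re.sub(r"`(.+?)`", r"\1", clean)
    return "text", clean
```

Note the order matters: the `**bold**` check must run before the `*italic*` check, since `*(.+)*` would also match a bold cell.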
@@ -3,9 +3,9 @@ title: 导出为 Excel
author: Fu-Jie
author_url: https://github.com/Fu-Jie
funding_url: https://github.com/Fu-Jie/awesome-openwebui
version: 0.3.3
version: 0.3.6
icon_url: data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIyNCIgaGVpZ2h0PSIyNCIgdmlld0JveD0iMCAwIDI0IDI0IiBmaWxsPSJub25lIiBzdHJva2U9ImN1cnJlbnRDb2xvciIgc3Ryb2tlLXdpZHRoPSIyIiBzdHJva2UtbGluZWNhcD0icm91bmQiIHN0cm9rZS1saW5lam9pbj0icm91bmQiPjxwYXRoIGQ9Ik0xNSAySDZhMiAyIDAgMCAwLTIgMnYxNmEyIDIgMCAwIDAgMiAyaDEyYTIgMiAwIDAgMCAyLTJWN1oiLz48cGF0aCBkPSJNMTQgMnY0YTIgMiAwIDAgMCAyIDJoNCIvPjxwYXRoIGQ9Ik04IDEzaDIiLz48cGF0aCBkPSJNMTQgMTNoMiIvPjxwYXRoIGQ9Ik04IDE3aDIiLz48cGF0aCBkPSJNMTQgMTdoMiIvPjwvc3ZnPg==
description: 将当前对话历史导出为 Excel (.xlsx) 文件,支持自动提取表头。
description: 从聊天消息中提取表格并导出为 Excel (.xlsx) 文件,支持智能格式化。
"""

import os
@@ -15,14 +15,33 @@ import base64
from fastapi import FastAPI, HTTPException
from typing import Optional, Callable, Awaitable, Any, List, Dict
import datetime
import asyncio
from open_webui.models.chats import Chats
from open_webui.models.users import Users
from open_webui.utils.chat import generate_chat_completion
from pydantic import BaseModel, Field
from typing import Literal

app = FastAPI()


class Action:
    class Valves(BaseModel):
        TITLE_SOURCE: Literal["chat_title", "ai_generated", "markdown_title"] = Field(
            default="chat_title",
            description="标题来源: 'chat_title' (对话标题), 'ai_generated' (AI生成), 'markdown_title' (Markdown标题)",
        )
        EXPORT_SCOPE: Literal["last_message", "all_messages"] = Field(
            default="last_message",
            description="导出范围: 'last_message' (仅最后一条消息), 'all_messages' (所有消息)",
        )
        MODEL_ID: str = Field(
            default="",
            description="AI 标题生成模型 ID。留空则使用当前对话模型。",
        )

    def __init__(self):
        pass
        self.valves = self.Valves()

    async def _send_notification(self, emitter: Callable, type: str, content: str):
        await emitter(
@@ -35,52 +54,196 @@ class Action:
        __user__=None,
        __event_emitter__=None,
        __event_call__: Optional[Callable[[Any], Awaitable[None]]] = None,
        __request__: Optional[Any] = None,
    ):
        print(f"action:{__name__}")
        if isinstance(__user__, (list, tuple)):
            user_language = (
                __user__[0].get("language", "zh-CN") if __user__ else "zh-CN"
                __user__[0].get("language", "en-US") if __user__ else "en-US"
            )
            user_name = __user__[0].get("name", "用户") if __user__[0] else "用户"
            user_name = __user__[0].get("name", "User") if __user__[0] else "User"
            user_id = (
                __user__[0]["id"]
                if __user__ and "id" in __user__[0]
                else "unknown_user"
            )
        elif isinstance(__user__, dict):
            user_language = __user__.get("language", "zh-CN")
            user_name = __user__.get("name", "用户")
            user_language = __user__.get("language", "en-US")
            user_name = __user__.get("name", "User")
            user_id = __user__.get("id", "unknown_user")

        if __event_emitter__:
            last_assistant_message = body["messages"][-1]

            await __event_emitter__(
                {
                    "type": "status",
                    "data": {"description": "正在保存到文件...", "done": False},
                    "data": {"description": "正在保存文件...", "done": False},
                }
            )

        try:
            message_content = last_assistant_message["content"]
            tables = self.extract_tables_from_message(message_content)
            messages = body.get("messages", [])
            if not messages:
                raise HTTPException(status_code=400, detail="未找到消息。")

            if not tables:
                raise HTTPException(status_code=400, detail="未找到任何表格。")
            # Determine messages to process based on scope
            target_messages = []
            if self.valves.EXPORT_SCOPE == "all_messages":
                target_messages = messages
            else:
                target_messages = [messages[-1]]

            # 获取动态文件名和sheet名称
            workbook_name, sheet_names = self.generate_names_from_content(
                message_content, tables
            )
            all_tables = []
            all_sheet_names = []

            # Process messages
            for msg_index, msg in enumerate(target_messages):
                content = msg.get("content", "")
                tables = self.extract_tables_from_message(content)

                if not tables:
                    continue

                # Generate sheet names for this message's tables

                # Extract headers for this message
                headers = []
                lines = content.split("\n")
                for i, line in enumerate(lines):
                    if re.match(r"^#{1,6}\s+", line):
                        headers.append(
                            {
                                "text": re.sub(r"^#{1,6}\s+", "", line).strip(),
                                "line_num": i,
                            }
                        )

                for table_index, table in enumerate(tables):
                    sheet_name = ""

                    # 1. Try Markdown Header (closest above)
                    table_start_line = table["start_line"] - 1
                    closest_header_text = None
                    candidate_headers = [
                        h for h in headers if h["line_num"] < table_start_line
                    ]
                    if candidate_headers:
                        closest_header = max(
                            candidate_headers, key=lambda x: x["line_num"]
                        )
                        closest_header_text = closest_header["text"]

                    if closest_header_text:
                        sheet_name = self.clean_sheet_name(closest_header_text)

                    # 2. AI Generated (Only if explicitly enabled and we have a request object)
                    if (
                        not sheet_name
                        and self.valves.TITLE_SOURCE == "ai_generated"
                        and len(target_messages) == 1
                    ):
                        pass

                    # 3. Fallback: Message Index
                    if not sheet_name:
                        if len(target_messages) > 1:
                            if len(tables) > 1:
                                sheet_name = f"消息{msg_index+1}-表{table_index+1}"
                            else:
                                sheet_name = f"消息{msg_index+1}"
                        else:
                            # Single message (last_message scope)
                            if len(tables) > 1:
                                sheet_name = f"表{table_index+1}"
                            else:
                                sheet_name = "Sheet1"

                    all_tables.append(table)
                    all_sheet_names.append(sheet_name)

            if not all_tables:
                raise HTTPException(
                    status_code=400, detail="在选定范围内未找到表格。"
                )

            # Deduplicate sheet names
            final_sheet_names = []
            seen_names = {}
            for name in all_sheet_names:
                base_name = name
                counter = 1
                while name in seen_names:
                    name = f"{base_name} ({counter})"
                    counter += 1
                seen_names[name] = True
                final_sheet_names.append(name)
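The sheet-name deduplication loop added above appends a ` (n)` suffix until the name is unique, which matters because Excel rejects duplicate sheet names. As a standalone sketch (the function name is illustrative):

```python
def dedupe_sheet_names(names):
    """Append ' (n)' to repeated sheet names, as in the diff's loop."""
    seen = {}
    result = []
    for name in names:
        base, counter = name, 1
        while name in seen:
            name = f"{base} ({counter})"
            counter += 1
        seen[name] = True  # record the final, possibly suffixed name
        result.append(name)
    return result
```

Because every emitted name (suffixed or not) is recorded in `seen`, a third collision skips already-used suffixes rather than reusing ` (1)`.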

            # 通知用户提取到的表格数量
            table_count = len(all_tables)
            if self.valves.EXPORT_SCOPE == "all_messages":
                await self._send_notification(
                    __event_emitter__,
                    "info",
                    f"从所有消息中提取到 {table_count} 个表格。",
                )
                # 等待片刻让用户看到通知,再触发下载
                await asyncio.sleep(1.5)

            # Generate Workbook Title (Filename)
            title = ""
            chat_id = self.extract_chat_id(body, None)
            chat_title = ""
            if chat_id:
                chat_title = await self.fetch_chat_title(chat_id, user_id)

            if (
                self.valves.TITLE_SOURCE == "chat_title"
                or not self.valves.TITLE_SOURCE
            ):
                title = chat_title
            elif self.valves.TITLE_SOURCE == "ai_generated":
                # 使用 AI 根据消息内容生成标题
                if target_messages and __request__:
                    # 获取第一条有表格的消息内容
                    content_for_title = ""
                    for msg in target_messages:
                        msg_content = msg.get("content", "")
                        if msg_content:
                            content_for_title = msg_content
                            break
                    if content_for_title:
                        title = await self.generate_title_using_ai(
                            body,
                            content_for_title,
                            user_id,
                            __request__,
                            __event_emitter__,
                        )
            elif self.valves.TITLE_SOURCE == "markdown_title":
                for msg in target_messages:
                    extracted = self.extract_title(msg.get("content", ""))
                    if extracted:
                        title = extracted
                        break

            # Fallback for filename
            if not title:
                if chat_title:
                    title = chat_title
                else:
                    if self.valves.TITLE_SOURCE != "markdown_title":
                        for msg in target_messages:
                            extracted = self.extract_title(msg.get("content", ""))
                            if extracted:
                                title = extracted
                                break

            # 使用优化后的文件名生成逻辑
            current_datetime = datetime.datetime.now()
            formatted_date = current_datetime.strftime("%Y%m%d")

            # 如果没找到标题则使用 user_yyyymmdd 格式
            if not workbook_name:
            if not title:
                workbook_name = f"{user_name}_{formatted_date}"
            else:
                workbook_name = self.clean_filename(title)

            filename = f"{workbook_name}.xlsx"
            excel_file_path = os.path.join(
@@ -89,10 +252,12 @@ class Action:

            os.makedirs(os.path.dirname(excel_file_path), exist_ok=True)

            # 保存表格到Excel(使用符合中国规范的格式化功能)
            self.save_tables_to_excel_enhanced(tables, excel_file_path, sheet_names)
            # Save tables to Excel
            self.save_tables_to_excel_enhanced(
                all_tables, excel_file_path, final_sheet_names
            )

            # 触发文件下载
            # Trigger file download
            if __event_call__:
                with open(excel_file_path, "rb") as file:
                    file_content = file.read()
@@ -123,7 +288,7 @@ class Action:
                    URL.revokeObjectURL(url);
                    document.body.removeChild(a);
                }} catch (error) {{
                    console.error('触发下载时出错:', error);
                    console.error('Error triggering download:', error);
                }}
                """
                },
@@ -132,15 +297,15 @@ class Action:
            await __event_emitter__(
                {
                    "type": "status",
                    "data": {"description": "输出已保存", "done": True},
                    "data": {"description": "文件已保存", "done": True},
                }
            )

            # 清理临时文件
            # Clean up temp file
            if os.path.exists(excel_file_path):
                os.remove(excel_file_path)

            return {"message": "下载事件已触发"}
            return {"message": "下载已触发"}

        except HTTPException as e:
            print(f"Error processing tables: {str(e.detail)}")
@@ -148,13 +313,13 @@ class Action:
                {
                    "type": "status",
                    "data": {
                        "description": f"保存文件时出错: {e.detail}",
                        "description": f"保存文件错误: {e.detail}",
                        "done": True,
                    },
                }
            )
            await self._send_notification(
                __event_emitter__, "error", "没有找到可以导出的表格!"
                __event_emitter__, "error", "未找到可导出的表格!"
            )
            raise e
        except Exception as e:
@@ -163,15 +328,158 @@ class Action:
                {
                    "type": "status",
                    "data": {
                        "description": f"保存文件时出错: {str(e)}",
                        "description": f"保存文件错误: {str(e)}",
                        "done": True,
                    },
                }
            )
            await self._send_notification(
                __event_emitter__, "error", "没有找到可以导出的表格!"
                __event_emitter__, "error", "未找到可导出的表格!"
            )

    async def generate_title_using_ai(
        self,
        body: dict,
        content: str,
        user_id: str,
        request: Any,
        event_emitter: Callable = None,
    ) -> str:
        if not request:
            return ""

        try:
            user_obj = Users.get_user_by_id(user_id)
            # 使用配置的 MODEL_ID 或回退到当前对话模型
            model = (
                self.valves.MODEL_ID.strip()
                if self.valves.MODEL_ID
                else body.get("model")
            )

            payload = {
                "model": model,
                "messages": [
                    {
                        "role": "system",
                        "content": "你是一个乐于助人的助手。请根据以下内容为 Excel 导出文件生成一个简短、简洁的文件名(最多10个字)。不要使用引号或文件扩展名。避免使用文件名中无效的特殊字符。只输出文件名。",
                    },
                    {"role": "user", "content": content[:2000]},  # 限制内容长度
                ],
                "stream": False,
            }

            # 定义生成任务
            async def generate_task():
                return await generate_chat_completion(request, payload, user_obj)

            # 定义通知任务
            async def notification_task():
                # 立即发送首次通知
                if event_emitter:
                    await self._send_notification(
                        event_emitter,
                        "info",
                        "AI 正在为您生成文件名,请稍候...",
                    )

                # 之后每5秒通知一次
                while True:
                    await asyncio.sleep(5)
                    if event_emitter:
                        await self._send_notification(
                            event_emitter,
                            "info",
                            "文件名生成中,请耐心等待...",
                        )

            # 并发运行任务
            gen_future = asyncio.ensure_future(generate_task())
            notify_future = asyncio.ensure_future(notification_task())

            done, pending = await asyncio.wait(
                [gen_future, notify_future], return_when=asyncio.FIRST_COMPLETED
            )

            # 如果生成完成,取消通知任务
            if not notify_future.done():
                notify_future.cancel()

            # 获取结果
            if gen_future in done:
                response = gen_future.result()
                if response and "choices" in response:
                    return response["choices"][0]["message"]["content"].strip()
            else:
                # 理论上不会发生,因为是 FIRST_COMPLETED 且我们取消了 notify
                await gen_future
                response = gen_future.result()
                if response and "choices" in response:
                    return response["choices"][0]["message"]["content"].strip()

        except Exception as e:
            print(f"生成标题时出错: {e}")
            if event_emitter:
                await self._send_notification(
                    event_emitter,
                    "warning",
                    f"AI 文件名生成失败,将使用默认名称。错误: {str(e)}",
                )

        return ""

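`generate_title_using_ai` above races the generation task against a looping notification task with `asyncio.wait(..., return_when=FIRST_COMPLETED)` and cancels the heartbeat once generation finishes. A generic, self-contained sketch of that pattern (names and timings are illustrative only):

```python
import asyncio

async def heartbeat(interval: float, message: str = "still working..."):
    # Emit a progress message every `interval` seconds until cancelled.
    while True:
        await asyncio.sleep(interval)
        print(message)

async def run_with_progress(coro, interval: float = 0.05):
    work = asyncio.ensure_future(coro)
    beat = asyncio.ensure_future(heartbeat(interval))
    # Returns as soon as either task finishes; the heartbeat never finishes,
    # so in practice this waits for `work`.
    await asyncio.wait([work, beat], return_when=asyncio.FIRST_COMPLETED)
    beat.cancel()  # stop the progress loop
    return await work

async def slow_add(a, b):
    await asyncio.sleep(0.12)
    return a + b

result = asyncio.run(run_with_progress(slow_add(2, 3)))
```

Cancelling the heartbeat rather than letting it linger is what keeps the event loop from leaking a pending task after the result is returned.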
    def extract_title(self, content: str) -> str:
        """从 Markdown h1/h2 中提取标题"""
        lines = content.split("\n")
        for line in lines:
            # 仅匹配 h1-h2 标题
            match = re.match(r"^#{1,2}\s+(.+)$", line.strip())
            if match:
                return match.group(1).strip()
        return ""

    def extract_chat_id(self, body: dict, metadata: Optional[dict]) -> str:
        """从 body 或 metadata 中提取 chat_id"""
        if isinstance(body, dict):
            chat_id = body.get("chat_id") or body.get("id")
            if isinstance(chat_id, str) and chat_id.strip():
                return chat_id.strip()

            for key in ("chat", "conversation"):
                nested = body.get(key)
                if isinstance(nested, dict):
                    nested_id = nested.get("id") or nested.get("chat_id")
                    if isinstance(nested_id, str) and nested_id.strip():
                        return nested_id.strip()
        if isinstance(metadata, dict):
            chat_id = metadata.get("chat_id")
            if isinstance(chat_id, str) and chat_id.strip():
                return chat_id.strip()
        return ""

    async def fetch_chat_title(self, chat_id: str, user_id: str = "") -> str:
        """通过 chat_id 从数据库获取对话标题"""
        if not chat_id:
            return ""

        def _load_chat():
            if user_id:
                return Chats.get_chat_by_id_and_user_id(id=chat_id, user_id=user_id)
            return Chats.get_chat_by_id(chat_id)

        try:
            chat = await asyncio.to_thread(_load_chat)
        except Exception as exc:
            print(f"加载对话 {chat_id} 失败: {exc}")
            return ""

        if not chat:
            return ""

        data = getattr(chat, "chat", {}) or {}
        title = data.get("title") or getattr(chat, "title", "")
        return title.strip() if isinstance(title, str) else ""

    def extract_tables_from_message(self, message: str) -> List[Dict]:
        """
        从消息文本中提取Markdown表格及位置信息
@@ -473,25 +781,52 @@ class Action:
        with pd.ExcelWriter(file_path, engine="xlsxwriter") as writer:
            workbook = writer.book

            # 定义表头样式 - 居中对齐(符合中国规范)
            # OpenWebUI 风格主题配色
            HEADER_BG = "#1f2937"  # 深灰色 (匹配 OpenWebUI 侧边栏)
            HEADER_FG = "#ffffff"  # 白色文字
            ROW_ODD_BG = "#ffffff"  # 奇数行白色
            ROW_EVEN_BG = "#f3f4f6"  # 偶数行浅灰 (斑马纹)
            BORDER_COLOR = "#e5e7eb"  # 浅色边框

            # 表头样式 - 居中对齐
            header_format = workbook.add_format(
                {
                    "bold": True,
                    "font_size": 12,
                    "font_color": "white",
                    "bg_color": "#00abbd",
                    "font_size": 11,
                    "font_name": "Arial",
                    "font_color": HEADER_FG,
                    "bg_color": HEADER_BG,
                    "border": 1,
                    "align": "center",  # 表头居中
                    "border_color": BORDER_COLOR,
                    "align": "center",
                    "valign": "vcenter",
                    "text_wrap": True,
                }
            )

            # 文本单元格样式 - 左对齐
            # 文本单元格样式 - 左对齐 (奇数行)
            text_format = workbook.add_format(
                {
                    "font_name": "Arial",
                    "font_size": 10,
                    "border": 1,
                    "align": "left",  # 文本左对齐
                    "border_color": BORDER_COLOR,
                    "bg_color": ROW_ODD_BG,
                    "align": "left",
                    "valign": "vcenter",
                    "text_wrap": True,
                }
            )

            # 文本单元格样式 - 左对齐 (偶数行 - 斑马纹)
            text_format_alt = workbook.add_format(
                {
                    "font_name": "Arial",
                    "font_size": 10,
                    "border": 1,
                    "border_color": BORDER_COLOR,
                    "bg_color": ROW_EVEN_BG,
                    "align": "left",
                    "valign": "vcenter",
                    "text_wrap": True,
                }
@@ -499,15 +834,52 @@ class Action:

            # 数值单元格样式 - 右对齐
            number_format = workbook.add_format(
                {"border": 1, "align": "right", "valign": "vcenter"}  # 数值右对齐
                {
                    "font_name": "Arial",
                    "font_size": 10,
                    "border": 1,
                    "border_color": BORDER_COLOR,
                    "bg_color": ROW_ODD_BG,
                    "align": "right",
                    "valign": "vcenter",
                }
            )

            number_format_alt = workbook.add_format(
                {
                    "font_name": "Arial",
                    "font_size": 10,
                    "border": 1,
                    "border_color": BORDER_COLOR,
                    "bg_color": ROW_EVEN_BG,
                    "align": "right",
                    "valign": "vcenter",
                }
            )

            # 整数格式 - 右对齐
            integer_format = workbook.add_format(
                {
                    "num_format": "0",
                    "font_name": "Arial",
                    "font_size": 10,
                    "border": 1,
                    "align": "right",  # 整数右对齐
                    "border_color": BORDER_COLOR,
                    "bg_color": ROW_ODD_BG,
                    "align": "right",
                    "valign": "vcenter",
                }
            )

            integer_format_alt = workbook.add_format(
                {
                    "num_format": "0",
                    "font_name": "Arial",
                    "font_size": 10,
                    "border": 1,
                    "border_color": BORDER_COLOR,
                    "bg_color": ROW_EVEN_BG,
                    "align": "right",
                    "valign": "vcenter",
                }
            )
@@ -516,8 +888,25 @@ class Action:
            decimal_format = workbook.add_format(
                {
                    "num_format": "0.00",
                    "font_name": "Arial",
                    "font_size": 10,
                    "border": 1,
                    "align": "right",  # 小数右对齐
                    "border_color": BORDER_COLOR,
                    "bg_color": ROW_ODD_BG,
                    "align": "right",
                    "valign": "vcenter",
                }
            )

            decimal_format_alt = workbook.add_format(
                {
                    "num_format": "0.00",
                    "font_name": "Arial",
                    "font_size": 10,
                    "border": 1,
                    "border_color": BORDER_COLOR,
                    "bg_color": ROW_EVEN_BG,
                    "align": "right",
                    "valign": "vcenter",
                }
            )
@@ -525,8 +914,25 @@ class Action:
            # 日期格式 - 居中对齐
            date_format = workbook.add_format(
                {
                    "font_name": "Arial",
                    "font_size": 10,
                    "border": 1,
                    "align": "center",  # 日期居中对齐
                    "border_color": BORDER_COLOR,
                    "bg_color": ROW_ODD_BG,
                    "align": "center",
                    "valign": "vcenter",
                    "text_wrap": True,
                }
            )

            date_format_alt = workbook.add_format(
                {
                    "font_name": "Arial",
                    "font_size": 10,
                    "border": 1,
                    "border_color": BORDER_COLOR,
                    "bg_color": ROW_EVEN_BG,
                    "align": "center",
                    "valign": "vcenter",
                    "text_wrap": True,
                }
@@ -535,12 +941,114 @@ class Action:
            # 序号格式 - 居中对齐
            sequence_format = workbook.add_format(
                {
                    "font_name": "Arial",
                    "font_size": 10,
                    "border": 1,
                    "align": "center",  # 序号居中对齐
                    "border_color": BORDER_COLOR,
                    "bg_color": ROW_ODD_BG,
                    "align": "center",
                    "valign": "vcenter",
                }
            )

            sequence_format_alt = workbook.add_format(
                {
                    "font_name": "Arial",
                    "font_size": 10,
                    "border": 1,
                    "border_color": BORDER_COLOR,
                    "bg_color": ROW_EVEN_BG,
                    "align": "center",
                    "valign": "vcenter",
                }
            )

            # 粗体单元格样式 (用于全单元格加粗)
            text_bold_format = workbook.add_format(
                {
                    "font_name": "Arial",
                    "font_size": 10,
                    "border": 1,
                    "border_color": BORDER_COLOR,
                    "bg_color": ROW_ODD_BG,
                    "align": "left",
                    "valign": "vcenter",
                    "text_wrap": True,
                    "bold": True,
                }
            )

            text_bold_format_alt = workbook.add_format(
                {
                    "font_name": "Arial",
                    "font_size": 10,
                    "border": 1,
                    "border_color": BORDER_COLOR,
                    "bg_color": ROW_EVEN_BG,
                    "align": "left",
                    "valign": "vcenter",
                    "text_wrap": True,
                    "bold": True,
                }
            )

            # 斜体单元格样式 (用于全单元格斜体)
            text_italic_format = workbook.add_format(
                {
                    "font_name": "Arial",
                    "font_size": 10,
                    "border": 1,
                    "border_color": BORDER_COLOR,
                    "bg_color": ROW_ODD_BG,
                    "align": "left",
                    "valign": "vcenter",
                    "text_wrap": True,
                    "italic": True,
                }
            )

            text_italic_format_alt = workbook.add_format(
                {
                    "font_name": "Arial",
                    "font_size": 10,
                    "border": 1,
                    "border_color": BORDER_COLOR,
                    "bg_color": ROW_EVEN_BG,
                    "align": "left",
                    "valign": "vcenter",
                    "text_wrap": True,
                    "italic": True,
                }
            )

            # 代码单元格样式 (用于行内代码高亮显示)
            CODE_BG = "#f0f0f0"  # 代码浅灰背景
            text_code_format = workbook.add_format(
                {
                    "font_name": "Consolas",
                    "font_size": 10,
                    "border": 1,
                    "border_color": BORDER_COLOR,
                    "bg_color": CODE_BG,
                    "align": "left",
                    "valign": "vcenter",
                    "text_wrap": True,
                }
            )

            text_code_format_alt = workbook.add_format(
                {
                    "font_name": "Consolas",
                    "font_size": 10,
                    "border": 1,
                    "border_color": BORDER_COLOR,
                    "bg_color": CODE_BG,
                    "align": "left",
                    "valign": "vcenter",
                    "text_wrap": True,
                }
            )

            for i, table in enumerate(tables):
                try:
                    table_data = table["data"]
@@ -582,12 +1090,18 @@ class Action:

                    print(f"DataFrame created with columns: {list(df.columns)}")

                    # 修复pandas FutureWarning - 使用try-except替代errors='ignore'
                    # 智能数据类型转换
                    for col in df.columns:
                        # 先尝试数字转换
                        try:
                            df[col] = pd.to_numeric(df[col])
                        except (ValueError, TypeError):
                            pass
                        # 尝试日期转换
                        try:
                            df[col] = pd.to_datetime(df[col], errors="raise")
                        except (ValueError, TypeError):
                            # 保持为字符串,使用 infer_objects 优化
                            df[col] = df[col].infer_objects()

                    # 先写入数据(不包含表头)
                    df.to_excel(
@@ -599,19 +1113,25 @@ class Action:
                    )
                    worksheet = writer.sheets[sheet_name]

                    # 应用符合中国规范的格式化
                    # 应用符合中国规范的格式化 (带斑马纹)
                    formats = {
                        "header": header_format,
                        "text": [text_format, text_format_alt],
                        "number": [number_format, number_format_alt],
                        "integer": [integer_format, integer_format_alt],
                        "decimal": [decimal_format, decimal_format_alt],
                        "date": [date_format, date_format_alt],
                        "sequence": [sequence_format, sequence_format_alt],
                        "bold": [text_bold_format, text_bold_format_alt],
                        "italic": [text_italic_format, text_italic_format_alt],
                        "code": [text_code_format, text_code_format_alt],
                    }
                    self.apply_chinese_standard_formatting(
                        worksheet,
                        df,
                        headers,
                        workbook,
                        header_format,
                        text_format,
                        number_format,
                        integer_format,
                        decimal_format,
                        date_format,
                        sequence_format,
                        formats,
                    )

                except Exception as e:
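The column-type conversion in the hunk above tries `pd.to_numeric` first and then `pd.to_datetime` on every column. One caveat worth knowing: once a column has become numeric, a subsequent unconstrained `to_datetime` can reinterpret the integers as epoch timestamps. A sketch of a safer ordering, which only attempts date parsing on columns that are still strings (the helper name and the fixed date format are illustrative assumptions, not the plugin's code):

```python
import pandas as pd

def infer_column_types(df: pd.DataFrame) -> pd.DataFrame:
    # Numbers first; `continue` skips datetime parsing for numeric columns
    # so they aren't re-read as epoch timestamps.
    out = df.copy()
    for col in out.columns:
        try:
            out[col] = pd.to_numeric(out[col])
            continue
        except (ValueError, TypeError):
            pass
        try:
            out[col] = pd.to_datetime(out[col], errors="raise", format="%Y-%m-%d")
        except (ValueError, TypeError):
            # Keep as strings; infer_objects tightens the dtype where possible.
            out[col] = out[col].infer_objects()
    return out

df = pd.DataFrame(
    {"n": ["1", "2.5"], "d": ["2024-01-01", "2024-02-03"], "t": ["a", "b"]}
)
typed = infer_column_types(df)
```

After conversion, `n` is numeric, `d` is `datetime64`, and `t` stays as text, which is what the downstream `column_types` detection expects.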
@@ -628,23 +1148,22 @@ class Action:
        df,
        headers,
        workbook,
        header_format,
        text_format,
        number_format,
        integer_format,
        decimal_format,
        date_format,
        sequence_format,
        formats,
    ):
        """
        应用符合中国官方表格规范的格式化
        - 表头: 居中对齐
        应用符合中国官方表格规范的格式化 (带斑马纹)
        - 表头: 居中对齐 (深色背景)
        - 数值: 右对齐
        - 文本: 左对齐
        - 日期: 居中对齐
        - 序号: 居中对齐
        - 斑马纹: 隔行变色
        - 支持全单元格 Markdown 粗体 (**text**) 和斜体 (*text*)
        """
        try:
            # 从 formats 字典提取格式
            header_format = formats["header"]

            # 1. 写入表头(居中对齐)
            print(f"Writing headers with Chinese standard alignment: {headers}")
            for col_idx, header in enumerate(headers):
@@ -668,43 +1187,97 @@ class Action:
                else:
                    column_types[col_idx] = "text"

            # 3. 写入并格式化数据(根据类型使用不同对齐方式)
            # 3. 写入并格式化数据(带斑马纹)
            for row_idx, row in df.iterrows():
                # 确定奇偶行 (0-indexed, row_idx 0 是第 1 个数据行)
                is_alt_row = row_idx % 2 == 1  # 奇数索引 = 偶数行, 使用 alt 格式

                for col_idx, value in enumerate(row):
                    content_type = column_types.get(col_idx, "text")

                    # 根据内容类型选择格式
                    # 根据内容类型和斑马纹选择格式
                    fmt_idx = 1 if is_alt_row else 0

                    if content_type == "number":
                        # 数值类型 - 右对齐
                        if pd.api.types.is_numeric_dtype(df.iloc[:, col_idx]):
                            if pd.api.types.is_integer_dtype(df.iloc[:, col_idx]):
                                current_format = integer_format
                                current_format = formats["integer"][fmt_idx]
                            else:
                                try:
                                    numeric_value = float(value)
                                    if numeric_value.is_integer():
                                        current_format = integer_format
                                        current_format = formats["integer"][fmt_idx]
                                        value = int(numeric_value)
                                    else:
                                        current_format = decimal_format
                                        current_format = formats["decimal"][fmt_idx]
                                except (ValueError, TypeError):
                                    current_format = decimal_format
                                    current_format = formats["decimal"][fmt_idx]
                        else:
                            current_format = number_format
                            current_format = formats["number"][fmt_idx]

                    elif content_type == "date":
                        # 日期类型 - 居中对齐
                        current_format = date_format
                        current_format = formats["date"][fmt_idx]

                    elif content_type == "sequence":
                        # 序号类型 - 居中对齐
                        current_format = sequence_format
                        current_format = formats["sequence"][fmt_idx]

                    else:
                        # 文本类型 - 左对齐
                        current_format = text_format
                        current_format = formats["text"][fmt_idx]

                    worksheet.write(row_idx + 1, col_idx, value, current_format)
                    if content_type == "text" and isinstance(value, str):
                        # 检查是否全单元格加粗 (**text**)
                        match_bold = re.fullmatch(r"\*\*(.+)\*\*", value.strip())
                        # 检查是否全单元格斜体 (*text*)
                        match_italic = re.fullmatch(r"\*(.+)\*", value.strip())
                        # 检查是否全单元格代码 (`text`)
                        match_code = re.fullmatch(r"`(.+)`", value.strip())

                        if match_bold:
                            # 提取内容并应用粗体格式
                            clean_value = match_bold.group(1)
                            worksheet.write(
                                row_idx + 1,
                                col_idx,
                                clean_value,
                                formats["bold"][fmt_idx],
                            )
                        elif match_italic:
                            # 提取内容并应用斜体格式
                            clean_value = match_italic.group(1)
                            worksheet.write(
                                row_idx + 1,
                                col_idx,
                                clean_value,
                                formats["italic"][fmt_idx],
                            )
                        elif match_code:
                            # 提取内容并应用代码格式 (高亮显示)
                            clean_value = match_code.group(1)
                            worksheet.write(
                                row_idx + 1,
                                col_idx,
                                clean_value,
                                formats["code"][fmt_idx],
                            )
                        else:
                            # 移除部分 Markdown 格式符号 (Excel 无法渲染部分格式)
                            # 移除粗体标记 **text** -> text
                            clean_value = re.sub(r"\*\*(.+?)\*\*", r"\1", value)
                            # 移除斜体标记 *text* -> text (但不影响 ** 内部的内容)
                            clean_value = re.sub(
                                r"(?<!\*)\*([^*]+)\*(?!\*)", r"\1", clean_value
                            )
                            # 移除代码标记 `text` -> text
                            clean_value = re.sub(r"`(.+?)`", r"\1", clean_value)
                            worksheet.write(
                                row_idx + 1, col_idx, clean_value, current_format
                            )
                    else:
                        worksheet.write(row_idx + 1, col_idx, value, current_format)

            # 4. 自动调整列宽
            for col_idx, column in enumerate(headers):
@@ -804,3 +1377,6 @@ class Action:

        except Exception as e:
            print(f"Warning: Even basic formatting failed: {str(e)}")

        except Exception as e:
            print(f"Warning: Even basic formatting failed: {str(e)}")

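In both language versions of the formatter, format selection reduces to a lookup by content type plus row parity into the `formats` dict of `[odd, alt]` pairs. A minimal sketch of that selection logic, using placeholder strings instead of real xlsxwriter `Format` objects (the helper name is illustrative):

```python
def pick_format(formats: dict, content_type: str, row_idx: int):
    """Select a cell format by content type and row parity (zebra striping)."""
    fmt_idx = 1 if row_idx % 2 == 1 else 0  # alt style on every second data row
    pair = formats.get(content_type, formats["text"])  # fall back to text style
    return pair[fmt_idx]

# Placeholder format pairs standing in for workbook.add_format(...) objects.
formats = {
    "text": ["text", "text_alt"],
    "integer": ["int", "int_alt"],
}
```

Keeping the pairs in one dict means adding a new cell style (like the bold/italic/code styles in this release) only touches the dict, not the per-cell branching.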
@@ -1,7 +1,7 @@
"""
title: 精读 (Deep Reading)
icon_url: data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIyNCIgaGVpZ2h0PSIyNCIgdmlld0JveD0iMCAwIDI0IDI0Ij48ZGVmcz48bGluZWFyR3JhZGllbnQgaWQ9ImciIHgxPSIwIiB5MT0iMCIgeDI9IjEiIHkyPSIxIj48c3RvcCBvZmZzZXQ9IjAlIiBzdG9wLWNvbG9yPSIjNDI4NWY0Ii8+PHN0b3Agb2Zmc2V0PSIxMDAlIiBzdG9wLWNvbG9yPSIjMWU4OGU1Ii8+PC9saW5lYXJHcmFkaWVudD48L2RlZnM+PHBhdGggZD0iTTYgMmg4bDYgNnYxMmEyIDIgMCAwIDEtMiAySDZhMiAyIDAgMCAxLTItMlY0YTIgMiAwIDAgMSAyLTJ6IiBmaWxsPSJ1cmwoI2cpIi8+PHBhdGggZD0iTTE0IDJsNiA2aC02eiIgZmlsbD0iIzFlODhlNSIgb3BhY2l0eT0iMC42Ii8+PGxpbmUgeDE9IjgiIHkxPSIxMyIgeDI9IjE2IiB5Mj0iMTMiIHN0cm9rZT0iI2ZmZiIgc3Ryb2tlLXdpZHRoPSIxLjUiLz48bGluZSB4MT0iOCIgeTE9IjE3IiB4Mj0iMTQiIHkyPSIxNyIgc3Ryb2tlPSIjZmZmIiBzdHJva2Utd2lkdGg9IjEuNSIvPjxjaXJjbGUgY3g9IjE2IiBjeT0iMTgiIHI9IjMiIGZpbGw9IiNmZmQ3MDAiLz48cGF0aCBkPSJNMTYgMTZsMS41IDEuNSIgc3Ryb2tlPSIjNDI4NWY0IiBzdHJva2Utd2lkdGg9IjIiIHN0cm9rZS1saW5lY2FwPSJyb3VuZCIvPjwvc3ZnPg==
version: 2.0.0
version: 0.1.0
description: Deeply analyzes long-form text to distill detailed summaries, key information points, and actionable recommendations; suited to work and study scenarios.
requirements: jinja2, markdown
"""
@@ -1,6 +1,6 @@
# Async Context Compression Filter

**Author:** [Fu-Jie](https://github.com/Fu-Jie) | **Version:** 1.2.0 | **License:** MIT
**Author:** [Fu-Jie](https://github.com/Fu-Jie) | **Version:** 1.1.0 | **License:** MIT

This filter reduces token consumption in long conversations through intelligent summarization and message compression while keeping conversations coherent.
@@ -1,6 +1,6 @@
# Example Pipe Plugin

**Author:** OpenWebUI Community | **Version:** 1.0.0 | **License:** MIT
**Author:** OpenWebUI Community | **Version:** 1.26.0 | **License:** MIT

This is a template/example for creating Pipe plugins in OpenWebUI.
315
scripts/check_version_consistency.py
Normal file
@@ -0,0 +1,315 @@
#!/usr/bin/env python3
"""
Script to check and enforce version consistency across OpenWebUI plugins and documentation.
用于检查并强制 OpenWebUI 插件和文档之间版本一致性的脚本。

Usage:
    python scripts/check_version_consistency.py        # Check only
    python scripts/check_version_consistency.py --fix  # Check and fix
"""

import argparse
import os
import re
import sys
from pathlib import Path
from typing import Optional, List, Dict, Tuple

# ANSI colors
GREEN = "\033[92m"
RED = "\033[91m"
YELLOW = "\033[93m"
BLUE = "\033[94m"
RESET = "\033[0m"


def log_info(msg):
    print(f"{BLUE}[INFO]{RESET} {msg}")


def log_success(msg):
    print(f"{GREEN}[OK]{RESET} {msg}")


def log_warning(msg):
    print(f"{YELLOW}[WARN]{RESET} {msg}")


def log_error(msg):
    print(f"{RED}[ERR]{RESET} {msg}")

class VersionChecker:
    def __init__(self, root_dir: str, fix: bool = False):
        self.root_dir = Path(root_dir)
        self.plugins_dir = self.root_dir / "plugins"
        self.docs_dir = self.root_dir / "docs" / "plugins"
        self.fix = fix
        self.issues_found = 0
        self.fixed_count = 0

    def extract_version_from_py(self, file_path: Path) -> Optional[str]:
        """Extract version from Python docstring."""
        try:
            content = file_path.read_text(encoding="utf-8")
            match = re.search(r"version:\s*([\d\.]+)", content)
            if match:
                return match.group(1)
        except Exception as e:
            log_error(f"Failed to read {file_path}: {e}")
        return None

    def update_file_content(
        self, file_path: Path, pattern: str, replacement: str, version: str
    ) -> bool:
        """Update file content with new version."""
        try:
            content = file_path.read_text(encoding="utf-8")
            new_content = re.sub(pattern, replacement, content)

            if content != new_content:
                if self.fix:
                    file_path.write_text(new_content, encoding="utf-8")
                    log_success(
                        f"Fixed {file_path.relative_to(self.root_dir)}: -> {version}"
                    )
                    self.fixed_count += 1
                    return True
                else:
                    log_error(
                        f"Mismatch in {file_path.relative_to(self.root_dir)}: Expected {version}"
                    )
                    self.issues_found += 1
                    return False
            return True
        except Exception as e:
            log_error(f"Failed to update {file_path}: {e}")
            return False

    def check_plugin(self, plugin_type: str, plugin_dir: Path):
        """Check consistency for a single plugin."""
        plugin_name = plugin_dir.name

        # 1. Identify the source of truth (English .py file)
        py_file = plugin_dir / f"{plugin_name}.py"
        if not py_file.exists():
            # Try finding any .py file that matches the directory name pattern or is the main file
            py_files = list(plugin_dir.glob("*.py"))
            # Filter out _cn.py, templates, etc.
            candidates = [
                f
                for f in py_files
                if not f.name.endswith("_cn.py") and "TEMPLATE" not in f.name
            ]
            if candidates:
                py_file = candidates[0]
            else:
                return  # Not a valid plugin dir

        true_version = self.extract_version_from_py(py_file)
        if not true_version:
            log_warning(f"Skipping {plugin_name}: No version found in {py_file.name}")
            return

        log_info(f"Checking {plugin_name} (v{true_version})...")

        # 2. Check Chinese .py file
        cn_py_files = list(plugin_dir.glob("*_cn.py")) + list(
            plugin_dir.glob("*中文*.py")
        )
        # Also check for files that are not the main file but might be the CN version
        for f in plugin_dir.glob("*.py"):
            if f != py_file and "TEMPLATE" not in f.name and f not in cn_py_files:
                # Heuristic: the name contains Chinese characters or ends in _cn
                if re.search(r"[\u4e00-\u9fff]", f.name) or f.name.endswith("_cn.py"):
                    cn_py_files.append(f)

        for cn_py in set(cn_py_files):
            self.update_file_content(
                cn_py, r"(version:\s*)([\d\.]+)", rf"\g<1>{true_version}", true_version
            )

        # 3. Check README.md (English)
        readme = plugin_dir / "README.md"
        if readme.exists():
            # Pattern 1: **Version:** 1.0.0
            self.update_file_content(
                readme,
                r"(\*\*Version:?\*\*\s*)([\d\.]+)",
                rf"\g<1>{true_version}",
                true_version,
            )
            # Pattern 2: | **Version:** 1.0.0 |
            self.update_file_content(
                readme,
                r"(\|\s*\*\*Version:\*\*\s*)([\d\.]+)",
                rf"\g<1>{true_version}",
                true_version,
            )

        # 4. Check README_CN.md (Chinese)
        readme_cn = plugin_dir / "README_CN.md"
        if readme_cn.exists():
            # Pattern: **版本:** 1.0.0
            self.update_file_content(
                readme_cn,
                r"(\*\*版本:?\*\*\s*)([\d\.]+)",
                rf"\g<1>{true_version}",
                true_version,
            )

        # 5. Check the global docs index (docs/plugins/{type}/index.md).
        # A whole-file regex cannot safely target one plugin's block, so the
        # strategy is: locate the plugin's title first, then update the
        # version found near it. The title also has to be extracted up front
        # so step 6 can fall back to it.
        title = self.extract_title(py_file)
        index_md = self.docs_dir / plugin_type / "index.md"
        if index_md.exists() and title:
            self.update_version_in_index(index_md, title, true_version)

        # 6. Check the global docs index CN (docs/plugins/{type}/index.zh.md).
        # The Chinese index may list either the Chinese or the English title,
        # so prefer the title from the CN source file when one exists.
        index_zh = self.docs_dir / plugin_type / "index.zh.md"
        if index_zh.exists():
            cn_title = None
            if cn_py_files:
                cn_title = self.extract_title(cn_py_files[0])

            target_title = cn_title if cn_title else title
            if target_title:
                self.update_version_in_index(
                    index_zh, target_title, true_version, is_zh=True
                )

        # 7. Check the global detail page (docs/plugins/{type}/{name}.md);
        # the doc filename usually matches the plugin directory name
        detail_md = self.docs_dir / plugin_type / f"{plugin_name}.md"
        if detail_md.exists():
            self.update_file_content(
                detail_md,
                r'(<span class="version-badge">v)([\d\.]+)(</span>)',
                rf"\g<1>{true_version}\g<3>",
                true_version,
            )

        # 8. Check the global detail page CN (docs/plugins/{type}/{name}.zh.md)
        detail_zh = self.docs_dir / plugin_type / f"{plugin_name}.zh.md"
        if detail_zh.exists():
            self.update_file_content(
                detail_zh,
                r'(<span class="version-badge">v)([\d\.]+)(</span>)',
                rf"\g<1>{true_version}\g<3>",
                true_version,
            )

    def extract_title(self, file_path: Path) -> Optional[str]:
        try:
            content = file_path.read_text(encoding="utf-8")
            match = re.search(r"title:\s*(.+)", content)
            if match:
                return match.group(1).strip()
        except Exception:
            pass
        return None

    def update_version_in_index(
        self, file_path: Path, title: str, version: str, is_zh: bool = False
    ):
        """
        Update version in index file.
        Look for:
        - ... **Title** ...
        - ...
        - **Version:** X.Y.Z
        """
        try:
            content = file_path.read_text(encoding="utf-8")

            # Escape title for regex
            safe_title = re.escape(title)

            # Regex to find the plugin block and its version:
            # match the title, then scan non-greedily until the Version line
            if is_zh:
                ver_label = r"\*\*版本:\*\*"
            else:
                ver_label = r"\*\*Version:\*\*"

            # Pattern: (Title ...)(Version: )(X.Y.Z),
            # allowing some lines between title and version
            pattern = rf"(\*\*{safe_title}\*\*[\s\S]*?{ver_label}\s*)([\d\.]+)"

            match = re.search(pattern, content)
            if match:
                current_ver = match.group(2)
                if current_ver != version:
                    if self.fix:
                        new_content = content.replace(
                            match.group(0), f"{match.group(1)}{version}"
                        )
                        file_path.write_text(new_content, encoding="utf-8")
                        log_success(
                            f"Fixed index for {title}: {current_ver} -> {version}"
                        )
                        self.fixed_count += 1
                    else:
                        log_error(
                            f"Mismatch in index for {title}: Found {current_ver}, Expected {version}"
                        )
                        self.issues_found += 1
            else:
                # log_warning(f"Could not find entry for '{title}' in {file_path.name}")
                pass

        except Exception as e:
            log_error(f"Failed to check index {file_path}: {e}")

    def run(self):
        if not self.plugins_dir.exists():
            log_error(f"Plugins directory not found: {self.plugins_dir}")
            return

        # Scan actions, filters, pipes
        for type_dir in self.plugins_dir.iterdir():
            if type_dir.is_dir() and type_dir.name in ["actions", "filters", "pipes"]:
                for plugin_dir in type_dir.iterdir():
                    if plugin_dir.is_dir():
                        self.check_plugin(type_dir.name, plugin_dir)

        print("-" * 40)
        if self.issues_found > 0:
            if self.fix:
                print(f"Fixed {self.fixed_count} issues.")
            else:
                print(f"Found {self.issues_found} version inconsistencies.")
                print("Run with --fix to automatically resolve them.")
                sys.exit(1)
        else:
            print("All versions are consistent! ✨")

def main():
    parser = argparse.ArgumentParser(description="Check version consistency.")
    parser.add_argument("--fix", action="store_true", help="Fix inconsistencies")
    args = parser.parse_args()

    # Assume the script is run from the repo root or the scripts dir
    root = Path.cwd()
    if (root / "scripts").exists():
        pass
    elif root.name == "scripts":
        root = root.parent

    checker = VersionChecker(str(root), fix=args.fix)
    checker.run()


if __name__ == "__main__":
    main()
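The checker's matching boils down to two regexes: version extraction from a plugin docstring (`extract_version_from_py`) and the title-anchored index update (`update_version_in_index`). A standalone sketch of both, using hypothetical sample text rather than real repo files:

```python
import re

SAMPLE_PLUGIN = '"""\ntitle: Example Plugin\nversion: 1.4.2\n"""'
SAMPLE_INDEX = "- **Example Plugin**\n  A demo entry.\n  **Version:** 1.4.1"

# Docstring extraction, as in extract_version_from_py
version = re.search(r"version:\s*([\d\.]+)", SAMPLE_PLUGIN).group(1)

# Index update, as in update_version_in_index: anchor on the bold title,
# then scan lazily forward to the nearest **Version:** label
pattern = rf"(\*\*{re.escape('Example Plugin')}\*\*[\s\S]*?\*\*Version:\*\*\s*)([\d\.]+)"
match = re.search(pattern, SAMPLE_INDEX)
updated = SAMPLE_INDEX.replace(match.group(0), f"{match.group(1)}{version}")

print(version)                           # 1.4.2
print(updated.splitlines()[-1].strip())  # **Version:** 1.4.2
```

The lazy `[\s\S]*?` is what keeps the update scoped to one plugin's block: it stops at the first version label after the matched title instead of running to the end of the index.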
@@ -96,6 +96,15 @@ def scan_plugins_directory(plugins_dir: str) -> list[dict[str, Any]]:
    for root, _dirs, files in os.walk(plugins_path):
        for file in files:
            if file.endswith(".py") and not file.startswith("__"):
                # Skip specific files that should not trigger release
                if file in [
                    "gemini_manifold.py",
                    "gemini_manifold_companion.py",
                    "ACTION_PLUGIN_TEMPLATE.py",
                    "ACTION_PLUGIN_TEMPLATE_CN.py",
                ]:
                    continue

                file_path = os.path.join(root, file)
                metadata = extract_plugin_metadata(file_path)
                if metadata:
@@ -109,9 +118,7 @@ def scan_plugins_directory(plugins_dir: str) -> list[dict[str, Any]]:
    return plugins


def compare_versions(
    current: list[dict], previous_file: str
) -> dict[str, list[dict]]:
def compare_versions(current: list[dict], previous_file: str) -> dict[str, list[dict]]:
    """
    Compare current plugin versions with a previous version file.
    比较当前插件版本与之前的版本文件。
@@ -168,7 +175,9 @@ def format_markdown_table(plugins: list[dict]) -> str:
        "|---------------|----------------|-------------|---------------------|",
    ]

    for plugin in sorted(plugins, key=lambda x: (x.get("type", ""), x.get("title", ""))):
    for plugin in sorted(
        plugins, key=lambda x: (x.get("type", ""), x.get("title", ""))
    ):
        title = plugin.get("title", "Unknown")
        version = plugin.get("version", "Unknown")
        plugin_type = plugin.get("type", "Unknown").capitalize()
@@ -181,7 +190,9 @@ def format_markdown_table(plugins: list[dict]) -> str:
    return "\n".join(lines)


def format_release_notes(comparison: dict[str, list]) -> str:
def format_release_notes(
    comparison: dict[str, list], ignore_removed: bool = False
) -> str:
    """
    Format version comparison as release notes.
    将版本比较格式化为发布说明。
@@ -206,7 +217,7 @@ def format_release_notes(comparison: dict[str, list]) -> str:
    )
    lines.append("")

    if comparison["removed"]:
    if comparison["removed"] and not ignore_removed:
        lines.append("### 移除插件 / Removed Plugins")
        for plugin in comparison["removed"]:
            lines.append(f"- **{plugin['title']}** v{plugin['version']}")
@@ -239,6 +250,11 @@ def main():
        metavar="FILE",
        help="Compare with previous version JSON file",
    )
    parser.add_argument(
        "--ignore-removed",
        action="store_true",
        help="Ignore removed plugins in output",
    )
    parser.add_argument(
        "--output",
        "-o",
@@ -257,7 +273,9 @@ def main():
    if args.json:
        output = json.dumps(comparison, indent=2, ensure_ascii=False)
    else:
        output = format_release_notes(comparison)
        output = format_release_notes(
            comparison, ignore_removed=args.ignore_removed
        )
    if not output.strip():
        output = "No changes detected. / 未检测到更改。"
    elif args.json:
@@ -268,13 +286,17 @@ def main():
    # Default: simple list
    lines = []
    for plugin in sorted(plugins, key=lambda x: x.get("title", "")):
        lines.append(f"{plugin.get('title', 'Unknown')}: v{plugin.get('version', '?')}")
        lines.append(
            f"{plugin.get('title', 'Unknown')}: v{plugin.get('version', '?')}"
        )
    output = "\n".join(lines)

    # Write output
    if args.output:
        with open(args.output, "w", encoding="utf-8") as f:
            f.write(output)
            if not output.endswith("\n"):
                f.write("\n")
        print(f"Output written to {args.output}")
    else:
        print(output)
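The behavioral change behind the new `--ignore-removed` flag can be isolated; below, `removed_section` is a hypothetical extract of the guard changed in the hunk above, not a function in the script:

```python
def removed_section(comparison: dict, ignore_removed: bool = False) -> list[str]:
    # Mirrors the changed guard: the removed-plugins section is skipped
    # entirely when --ignore-removed is passed
    lines = []
    if comparison["removed"] and not ignore_removed:
        lines.append("### 移除插件 / Removed Plugins")
        for plugin in comparison["removed"]:
            lines.append(f"- **{plugin['title']}** v{plugin['version']}")
    return lines

comparison = {"removed": [{"title": "Old Plugin", "version": "0.9.0"}]}
print(removed_section(comparison))                       # heading plus one bullet
print(removed_section(comparison, ignore_removed=True))  # []
```

Suppressing the section (rather than filtering the comparison itself) keeps the JSON output path unchanged: `--json` still reports removed plugins.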