Compare commits: v2025.12.3 ... v2026.01.0 (29 commits)

| SHA1 |
|---|
| dbfce27986 |
| 9be6fe08fa |
| 782378eed8 |
| 4e59bb6518 |
| 3e73fcb3f0 |
| c460337c43 |
| e775b23503 |
| b3cdb8e26e |
| 0e6f902d16 |
| c15c73897f |
| 035439ce02 |
| b84ff4a3a2 |
| e22744abd0 |
| 54c90238f7 |
| 40d77121bd |
| 3795976a79 |
| f5e5e5caa4 |
| 0c893ce61f |
| 8f4ce8f084 |
| ac2cf00807 |
| b9d8100cdb |
| bb1cc0d966 |
| 2e238c5b5d |
| b56e7cb41e |
| 236ae43c0c |
| a4e8cc52f9 |
| c8e8434bc6 |
| 3ee00bb083 |
| 0a9549da34 |
**.agent/workflows/plugin-development.md** (new file, 102 lines)

@@ -0,0 +1,102 @@
---
description: OpenWebUI Plugin Development & Release Workflow
---

# OpenWebUI Plugin Development Workflow

This workflow outlines the standard process for developing, documenting, and releasing OpenWebUI plugins, ensuring compliance with project standards and CI/CD requirements.

## 1. Development Standards

Reference: `.github/copilot-instructions.md`

### Bilingual Requirement

Every plugin **MUST** ship bilingual versions of both code and documentation:

- **Code**:
  - English: `plugins/{type}/{name}/{name}.py`
  - Chinese: `plugins/{type}/{name}/{name_cn}.py` (or `中文名.py`)
- **README**:
  - English: `plugins/{type}/{name}/README.md`
  - Chinese: `plugins/{type}/{name}/README_CN.md`

### Code Structure

- **Docstring**: Must include `title`, `author`, `version`, `description`, etc.
- **Valves**: Use `pydantic` for configuration.
- **Database**: Reuse the shared `open_webui.internal.db` connection.
- **User Context**: Use the `_get_user_context` helper method.
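A minimal sketch of the Valves and user-context conventions above (the field name and helper body are illustrative assumptions, not the project's exact code):

```python
from typing import Any, Optional
from pydantic import BaseModel, Field


class Action:
    class Valves(BaseModel):
        # Illustrative configuration field; real plugins define their own valves.
        TITLE_SOURCE: str = Field(
            default="chat_title",
            description="How the export filename is generated",
        )

    def __init__(self):
        self.valves = self.Valves()

    def _get_user_context(self, __user__: Any) -> Optional[dict]:
        # Normalize the shapes __user__ can take (dict, object, None)
        # so downstream code never raises AttributeError.
        if isinstance(__user__, dict):
            return __user__
        if __user__ is None:
            return None
        return {"id": getattr(__user__, "id", None)}
```

The helper mirrors the intent described in the project's own commit example ("handle various `__user__` types", "prevent AttributeError when `__user__` is not a dict").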
### Commit Messages

- **Language**: **English ONLY**. Do not use Chinese in commit messages.
- **Format**: Conventional Commits (e.g., `feat:`, `fix:`, `docs:`).
## 2. Documentation Updates

When adding or updating a plugin, you **MUST** update the following documentation files to keep them consistent:

### Plugin Directory

- `README.md`: Update version, description, and usage. **Explicitly describe new features.**
- `README_CN.md`: Update version, description, and usage. **Explicitly describe new features.**

### Global Documentation (`docs/`)

- **Index Pages**:
  - `docs/plugins/{type}/index.md`: Add/update the list item with the **correct version**.
  - `docs/plugins/{type}/index.zh.md`: Add/update the list item with the **correct version**.
- **Detail Pages**:
  - `docs/plugins/{type}/{name}.md`: Ensure the content matches the README.
  - `docs/plugins/{type}/{name}.zh.md`: Ensure the content matches README_CN.

### Root README

- `README.md`: Add to "Featured Plugins" if applicable.
- `README_CN.md`: Add to "Featured Plugins" if applicable.
## 3. Version Control & Release

Reference: `.github/workflows/release.yml`

### Version Bumping

- **Rule**: Any change to plugin logic **MUST** be accompanied by a version bump in the docstring.
- **Format**: Semantic Versioning (e.g., `1.0.0` -> `1.0.1`).
- **Consistency**: Update the version in **ALL** locations:
  1. English code (`.py`)
  2. Chinese code (`.py`)
  3. English README (`README.md`)
  4. Chinese README (`README_CN.md`)
  5. Docs index (`docs/.../index.md`)
  6. Docs index CN (`docs/.../index.zh.md`)
  7. Docs detail (`docs/.../{name}.md`)
  8. Docs detail CN (`docs/.../{name}.zh.md`)
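The consistency requirement is mechanical enough to script. A rough sketch of docstring version extraction (the real logic lives in `scripts/extract_plugin_versions.py`, whose implementation may differ):

```python
import re

# Matches the `version:` line inside a plugin's leading docstring.
VERSION_RE = re.compile(r"^version:\s*([0-9]+(?:\.[0-9]+)*)\s*$", re.MULTILINE)

def extract_version(source: str) -> str:
    match = VERSION_RE.search(source)
    return match.group(1) if match else ""

plugin_source = '"""\ntitle: My Plugin\nversion: 0.3.5\n"""\n'
print(extract_version(plugin_source))  # -> 0.3.5
```

Running this over all eight locations and comparing the results is one way to verify they match before committing.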
### Automated Release Process

1. **Trigger**: Push to the `main` branch with changes in `plugins/**/*.py`.
2. **Detection**: `scripts/extract_plugin_versions.py` detects changed plugins and compares versions.
3. **Release**:
   - Generates release notes based on the changes.
   - Creates a GitHub Release tag (e.g., `v2024.01.01-1`).
   - Uploads the individual `.py` files of **changed plugins only** as assets.

### Pull Request Check

- Workflow: `.github/workflows/plugin-version-check.yml`
- Checks whether plugin files are modified.
- **Fails** if the version number is not updated.
- **Fails** if the PR description is too short (< 20 chars).
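The two failure conditions can be sketched as follows (a simplified model of the check, not the workflow's actual script):

```python
def check_pull_request(description: str, version_bumped: bool) -> list:
    """Return the list of failures the version-check workflow would report."""
    failures = []
    if not version_bumped:
        failures.append("plugin files modified but version number not updated")
    if len(description.strip()) < 20:
        failures.append("PR description is too short (< 20 chars)")
    return failures

print(check_pull_request("Fix table parsing and bump version to 0.3.5", True))  # -> []
```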
## 4. Verification Checklist

Before committing:

- [ ] Code is bilingual and functional?
- [ ] Docstrings have the updated version?
- [ ] READMEs are updated and bilingual?
- [ ] `docs/` index and detail pages are updated?
- [ ] Root `README.md` is updated?
- [ ] All version numbers match exactly?

## 5. Git Operations (Agent Rules)

**CRITICAL RULE FOR AGENTS**:

- **No Auto-Push**: Agents **MUST NOT** automatically push changes to the remote `main` branch.
- **Local Commit Only**: All changes must be committed locally.
- **User Approval**: Pushing to the remote requires explicit user action or approval.
**.github/copilot-instructions.md** (vendored, 170 changed lines)

@@ -37,7 +37,9 @@ README files should include the following:
- Installation and setup instructions
- Usage examples
- Troubleshooting guide
- Version and author information
- **New Features**: If updating an existing plugin, explicitly list and describe the new features (critical for release to the official market).

### Official Documentation
@@ -795,10 +797,147 @@ For iframe plugins to access parent document theme information, users need to co

- [ ] Implement Valves configuration
- [ ] Use logging instead of print
- [ ] Test the bilingual UI
- [ ] **Consistency Check**:
  - [ ] Update the plugin list in `README.md`
  - [ ] Update the plugin list in `README_CN.md`
  - [ ] Update/create the corresponding docs under `docs/`
  - [ ] Ensure doc version numbers match the code

---

## 📚 Reference Resources
## 🔄 Consistency Maintenance

Any plugin **addition, modification, or removal** must update all three of the following locations so they stay fully consistent:

1. **Plugin Code**: update the `version` field and the implementation.
2. **Docs**: update the corresponding files under `docs/` (version number, feature description).
3. **README**: update the plugin lists in the root `README.md` and `README_CN.md`.

> [!IMPORTANT]
> Before submitting a PR, verify that these three locations are in sync. For example, if a plugin is deleted, it must also be removed from the README lists and its docs pages deleted.

---
## Release Workflow

### Automatic Release

When a plugin update is pushed to the `main` branch, the release pipeline is **triggered automatically**:

1. 🔍 Detect version changes (compared against the last release)
2. 📝 Generate release notes (including the change summary and commit log)
3. 📦 Create a GitHub Release (with downloadable plugin files)
4. 🏷️ Auto-generate the version tag (format: `vYYYY.MM.DD-<run number>`)

**Note**: merely **removing a plugin** (deleting its files) **does not trigger** an automatic release. Only adding or modifying a plugin (with an updated version number) triggers a release; removed plugins will not appear in the release notes.
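The tag format in step 4 can be reproduced in a few lines (the run number is supplied by GitHub Actions; the date below is illustrative):

```python
import datetime

def release_tag(run_number: int, date: datetime.date) -> str:
    # Format: vYYYY.MM.DD-<run number>
    return date.strftime("v%Y.%m.%d") + f"-{run_number}"

print(release_tag(1, datetime.date(2024, 1, 1)))  # -> v2024.01.01-1
```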
### Pre-release Requirements

1. ✅ **Update the version number** — bump the `version` field in the plugin docstring
2. ✅ **Keep the Chinese and English versions in sync** — both must carry the same version number

```python
"""
title: My Plugin
version: 0.2.0  # <- update this!
...
"""
```
### Versioning

Follow [Semantic Versioning](https://semver.org/):

| Change type | Version bump | Example |
|---------|---------|------|
| Bug fix | PATCH +1 | 0.1.0 → 0.1.1 |
| New feature | MINOR +1 | 0.1.1 → 0.2.0 |
| Breaking change | MAJOR +1 | 0.2.0 → 1.0.0 |
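The table maps directly onto a small helper (a sketch for illustration, not project code):

```python
def bump(version: str, change: str) -> str:
    """Apply a semantic-version bump: 'patch', 'minor', or 'major'."""
    major, minor, patch = (int(part) for part in version.split("."))
    if change == "patch":
        return f"{major}.{minor}.{patch + 1}"
    if change == "minor":
        return f"{major}.{minor + 1}.0"
    if change == "major":
        return f"{major + 1}.0.0"
    raise ValueError(f"unknown change type: {change}")

print(bump("0.1.0", "patch"))  # -> 0.1.1
print(bump("0.1.1", "minor"))  # -> 0.2.0
print(bump("0.2.0", "major"))  # -> 1.0.0
```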
### Release Methods

**Method A: push directly to main (recommended)**

```bash
# 1. Stage the changes
git add plugins/actions/my-plugin/

# 2. Commit (with a conventional commit message)
git commit -m "feat(my-plugin): add new feature X

- Add feature X for better user experience
- Fix bug Y
- Update version to 0.2.0"

# 3. Push to main
git push origin main

# GitHub Actions will create the Release automatically
```

**Method B: open a PR (team collaboration)**

```bash
# 1. Create a feature branch
git checkout -b feature/my-plugin-v0.2.0

# 2. Commit the changes
git commit -m "feat(my-plugin): add new feature X"

# 3. Push and open a PR
git push origin feature/my-plugin-v0.2.0

# 4. The release is triggered automatically once the PR is merged
```

**Method C: trigger a release manually**

1. Go to GitHub Actions → "Plugin Release / 插件发布"
2. Click "Run workflow"
3. Fill in the version number and release notes
### Commit Convention

Use the [Conventional Commits](https://www.conventionalcommits.org/) format:

```
<type>(<scope>): <description>

[optional body]

[optional footer]
```

Common types:

- `feat`: new feature
- `fix`: bug fix
- `docs`: documentation update
- `refactor`: code refactoring
- `style`: code formatting changes
- `perf`: performance improvement

Example:

```
feat(flash-card): add _get_user_context for safer user info retrieval

- Add _get_user_context method to handle various __user__ types
- Prevent AttributeError when __user__ is not a dict
- Update version to 0.2.2 for both English and Chinese versions
```
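A commit subject in this format can be validated mechanically (a sketch; the allowed types follow the list above):

```python
import re

COMMIT_RE = re.compile(
    r"^(?P<type>feat|fix|docs|refactor|style|perf|test|chore)"
    r"(?:\((?P<scope>[^)]+)\))?: (?P<description>.+)$"
)

def parse_subject(subject: str):
    """Return the parsed parts of a conventional commit subject, or None."""
    match = COMMIT_RE.match(subject)
    return match.groupdict() if match else None

print(parse_subject("feat(flash-card): add _get_user_context for safer user info retrieval"))
```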
### Release Checklist

Before releasing, make sure you have:

- [ ] Bumped the plugin version (English + Chinese versions)
- [ ] Tested that the plugin works
- [ ] Passed the code formatting checks
- [ ] Written a clear commit message
- [ ] Pushed to the main branch or merged the PR

---
## 📚 Reference Resources

- [Action plugin template (English)](plugins/actions/ACTION_PLUGIN_TEMPLATE.py)
- [Action plugin template (Chinese)](plugins/actions/ACTION_PLUGIN_TEMPLATE_CN.py)
@@ -816,3 +955,32 @@ GitHub: [Fu-Jie/awesome-openwebui](https://github.com/Fu-Jie/awesome-openwebui)

## License

MIT License

---

## 📝 Commit Message Guidelines

**Commit messages MUST be in English.** Do not use Chinese.

### Format

Follow the [Conventional Commits](https://www.conventionalcommits.org/) specification:

- `feat`: New feature
- `fix`: Bug fix
- `docs`: Documentation-only changes
- `style`: Changes that do not affect the meaning of the code (white-space, formatting, etc.)
- `refactor`: A code change that neither fixes a bug nor adds a feature
- `perf`: A code change that improves performance
- `test`: Adding missing tests or correcting existing tests
- `chore`: Changes to the build process or to auxiliary tools and libraries such as documentation generation

### Examples

✅ **Good:**

- `feat: add new export to pdf plugin`
- `fix: resolve icon rendering issue in documentation`
- `docs: update README with installation steps`

❌ **Bad:**

- `新增导出PDF插件` (Chinese is not allowed)
- `update code` (too vague)
**.github/workflows/release.yml** (vendored, 81 changed lines)
@@ -54,6 +54,9 @@ permissions:
 jobs:
   check-changes:
     runs-on: ubuntu-latest
+    env:
+      LANG: en_US.UTF-8
+      LC_ALL: en_US.UTF-8
     outputs:
       has_changes: ${{ steps.detect.outputs.has_changes }}
       changed_plugins: ${{ steps.detect.outputs.changed_plugins }}
@@ -65,6 +68,12 @@ jobs:
         with:
           fetch-depth: 0

+      - name: Configure Git
+        run: |
+          git config --global core.quotepath false
+          git config --global i18n.commitencoding utf-8
+          git config --global i18n.logoutputencoding utf-8
+
       - name: Set up Python
         uses: actions/setup-python@v5
         with:
@@ -101,7 +110,7 @@ jobs:
           fi

           # Compare versions and generate release notes
-          python scripts/extract_plugin_versions.py --compare old_versions.json --output changes.md
+          python scripts/extract_plugin_versions.py --compare old_versions.json --ignore-removed --output changes.md
           python scripts/extract_plugin_versions.py --compare old_versions.json --json --output changes.json

           echo "=== Version Changes ==="
@@ -131,6 +140,7 @@ jobs:

             echo "changed_plugins<<EOF" >> $GITHUB_OUTPUT
             cat changed_files.txt >> $GITHUB_OUTPUT
+            echo "" >> $GITHUB_OUTPUT
             echo "EOF" >> $GITHUB_OUTPUT
           fi
@@ -138,6 +148,7 @@ jobs:
           {
             echo 'release_notes<<EOF'
             cat changes.md
+            echo ""
             echo 'EOF'
           } >> $GITHUB_OUTPUT
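The effect of the new `--ignore-removed` flag can be modelled as follows (a sketch of the comparison, not the script's actual implementation):

```python
def diff_versions(old: dict, new: dict, ignore_removed: bool = True) -> dict:
    """Compare {plugin: version} maps; removed plugins can be ignored."""
    changes = {}
    for name, version in new.items():
        if old.get(name) != version:
            changes[name] = (old.get(name), version)
    if not ignore_removed:
        # Plugins present before but deleted now.
        for name in old.keys() - new.keys():
            changes[name] = (old[name], None)
    return changes

old = {"export_to_excel": "0.3.3", "legacy_plugin": "1.0.0"}
new = {"export_to_excel": "0.3.5"}
print(diff_versions(old, new))  # -> {'export_to_excel': ('0.3.3', '0.3.5')}
```

With `ignore_removed=True` (the behaviour the workflow now requests), deleting a plugin produces no change entry, which is why removal alone does not trigger a release.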
@@ -145,6 +156,10 @@ jobs:
     needs: check-changes
     if: needs.check-changes.outputs.has_changes == 'true' || github.event_name == 'workflow_dispatch' || startsWith(github.ref, 'refs/tags/v')
     runs-on: ubuntu-latest
+    env:
+      LANG: en_US.UTF-8
+      LC_ALL: en_US.UTF-8

     steps:
       - name: Checkout repository
@@ -152,6 +167,12 @@ jobs:
         with:
           fetch-depth: 0

+      - name: Configure Git
+        run: |
+          git config --global core.quotepath false
+          git config --global i18n.commitencoding utf-8
+          git config --global i18n.logoutputencoding utf-8
+
       - name: Set up Python
         uses: actions/setup-python@v5
         with:
@@ -175,10 +196,7 @@ jobs:
         id: plugins
         run: |
-          python scripts/extract_plugin_versions.py --json --output plugin_versions.json
-          python scripts/extract_plugin_versions.py --markdown --output plugin_table.md
-
-          echo "=== Plugin Versions ==="
-          cat plugin_table.md
+          python scripts/extract_plugin_versions.py --json --output plugin_versions.json

       - name: Collect plugin files for release
         id: collect_files
@@ -198,32 +216,27 @@ jobs:
               fi
             done
           else
-            echo "Collecting all plugin files..."
-            find plugins -name "*.py" -type f ! -name "__*" | while read -r file; do
-              dir=$(dirname "$file")
-              mkdir -p "release_plugins/$dir"
-              cp "$file" "release_plugins/$file"
-            done
+            echo "No changed plugins detected. Skipping file collection."
           fi

-          # Create a zip file with error handling
-          cd release_plugins
-          if [ -n "$(ls -A . 2>/dev/null)" ]; then
-            if zip -r ../plugins_release.zip .; then
-              echo "Successfully created plugins_release.zip"
-            else
-              echo "Warning: Failed to create zip file, creating empty placeholder"
-              touch ../plugins_release.zip
-            fi
-          else
-            echo "No plugin files to zip, creating empty placeholder"
-            touch ../plugins_release.zip
-          fi
-          cd ..
+          # cd release_plugins
+          # Zip step removed as per user request

           echo "=== Collected Files ==="
           find release_plugins -name "*.py" -type f | head -20

+      - name: Debug Filenames
+        run: |
+          python3 -c "import sys; print(f'Filesystem encoding: {sys.getfilesystemencoding()}')"
+          ls -R release_plugins
+
+      - name: Upload Debug Artifacts
+        uses: actions/upload-artifact@v4
+        with:
+          name: debug-plugins
+          path: release_plugins/

       - name: Get commit messages
         id: commits
         if: github.event_name == 'push'
@@ -239,8 +252,9 @@ jobs:
           {
             echo 'commits<<EOF'
             echo "$COMMITS"
+            echo ""
             echo 'EOF'
-          } >> $GITHUB_OUTPUT
+          } >> "$GITHUB_OUTPUT"
      - name: Generate release notes
        id: notes
@@ -280,16 +294,13 @@ jobs:
            echo "" >> release_notes.md
          fi

-          echo "## All Plugin Versions / 所有插件版本" >> release_notes.md
-          echo "" >> release_notes.md
-          cat plugin_table.md >> release_notes.md
-          echo "" >> release_notes.md
-
          cat >> release_notes.md << 'EOF'

          ## Download / 下载

-          📦 **plugins_release.zip** - 包含本次更新的所有插件文件 / Contains all updated plugin files
+          📦 **Download the updated plugin files below** / 请在下方下载更新的插件文件

          ### Installation / 安装
@@ -323,10 +334,15 @@ jobs:
          prerelease: ${{ github.event.inputs.prerelease || false }}
          files: |
            plugin_versions.json
-           plugins_release.zip
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

+     - name: Upload Release Assets
+       env:
+         GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+       run: |
+         find release_plugins -type f -name "*.py" -print0 | xargs -0 gh release upload ${{ steps.version.outputs.version }} --clobber
+
      - name: Summary
        run: |
          echo "## 🚀 Release Created Successfully!" >> $GITHUB_STEP_SUMMARY
@@ -336,5 +352,4 @@ jobs:
          echo "### Updated Plugins" >> $GITHUB_STEP_SUMMARY
          echo "${{ needs.check-changes.outputs.release_notes }}" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
-         echo "### All Plugin Versions" >> $GITHUB_STEP_SUMMARY
-         cat plugin_table.md >> $GITHUB_STEP_SUMMARY
@@ -14,15 +14,17 @@ Located in the `plugins/` directory, containing Python-based enhancements:

#### Actions

- **Smart Mind Map** (`smart-mind-map`): Generates interactive mind maps from text.
- **Smart Infographic** (`infographic`): Transforms text into professional infographics using AntV.
- **Knowledge Card** (`knowledge-card`): Creates beautiful flashcards for learning.
- **Export to Excel** (`export_to_excel`): Exports chat history to Excel files.
- **Export to Word** (`export_to_docx`): Exports chat history to Word documents.
- **Summary** (`summary`): Text summarization tool.

#### Filters

- **Async Context Compression** (`async-context-compression`): Optimizes token usage via context compression.
- **Context Enhancement** (`context_enhancement_filter`): Enhances chat context.
- **Gemini Manifold Companion** (`gemini_manifold_companion`): Companion filter for Gemini Manifold.
- **Multi-Model Context Merger** (`multi_model_context_merger`): Merges context from multiple models.

#### Pipes

- **Gemini Manifold** (`gemini_mainfold`): Pipeline for Gemini model integration.
@@ -8,15 +8,17 @@ ### 🧩 Plugins

#### Actions

- **Smart Mind Map** (`smart-mind-map`): Analyzes text and generates interactive mind maps.
- **Smart Infographic** (`infographic`): AntV-based smart infographic generator.
- **Knowledge Card** (`knowledge-card`): Quickly generates beautiful learning memory cards.
- **Export to Excel** (`export_to_excel`): Exports conversations to Excel files.
- **Export to Word** (`export_to_docx`): Exports conversations to Word documents.
- **Summary** (`summary`): Text summarization tool.

#### Filters

- **Async Context Compression** (`async-context-compression`): Asynchronous context compression to optimize token usage.
- **Context Enhancement** (`context_enhancement_filter`): Context enhancement filter.
- **Gemini Manifold Companion** (`gemini_manifold_companion`): Companion enhancement for Gemini Manifold.
- **Multi-Model Context Merger** (`multi_model_context_merger`): Multi-model context merging.

#### Pipes

- **Gemini Manifold** (`gemini_mainfold`): Pipe integrating Gemini models.
@@ -1,12 +1,24 @@
 # Export to Excel

 <span class="category-badge action">Action</span>
-<span class="version-badge">v1.0.0</span>
+<span class="version-badge">v0.3.4</span>

 Export chat conversations to Excel spreadsheet format for analysis, archiving, and sharing.

+### What's New in v0.3.5
+
+- **Export Scope**: Added `EXPORT_SCOPE` valve to choose between exporting tables from the "Last Message" (default) or "All Messages".
+- **Smart Sheet Naming**: Automatically names sheets based on Markdown headers, AI titles (if enabled), or message index (e.g., `Msg1-Tab1`).
+- **Multiple Tables Support**: Improved handling of multiple tables within single or multiple messages.
+
 ### What's New in v0.3.4

 - **Smart Filename Generation**: Now supports generating filenames based on Chat Title, AI Summary, or Markdown Headers.
 - **Configuration Options**: Added `TITLE_SOURCE` setting to control filename generation strategy.

 ---

 ## Overview

 The Export to Excel plugin allows you to download your chat conversations as Excel files. This is useful for:
@@ -23,6 +35,13 @@ The Export to Excel plugin allows you to download your chat conversations as Exc
 - :material-download: **One-Click Download**: Instant file generation
 - :material-history: **Full History**: Exports complete conversation

+## Configuration
+
+- **Title Source**: Choose how the filename is generated:
+  - `chat_title`: Use the chat title (default).
+  - `ai_generated`: Use AI to generate a concise title from the content.
+  - `markdown_title`: Extract the first H1/H2 header from the markdown content.
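Table extraction of the kind this plugin performs can be sketched like so (an illustration of the idea only; the plugin's actual parsing logic may differ):

```python
import re

def extract_markdown_tables(text: str) -> list:
    """Return each markdown table as a list of rows (lists of cell strings)."""
    tables, current = [], []
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith("|") and stripped.endswith("|"):
            cells = [c.strip() for c in stripped.strip("|").split("|")]
            # Skip the |---|---| separator row between header and body.
            if not all(re.fullmatch(r":?-+:?", c) for c in cells):
                current.append(cells)
        elif current:
            tables.append(current)
            current = []
    if current:
        tables.append(current)
    return tables

message = "| Name | Score |\n|---|---|\n| Ada | 10 |\n"
print(extract_markdown_tables(message))  # -> [[['Name', 'Score'], ['Ada', '10']]]
```

Each extracted table then maps to one sheet in the generated workbook.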
---

## Installation
@@ -1,12 +1,24 @@
 # Export to Excel (导出到 Excel)

 <span class="category-badge action">Action</span>
-<span class="version-badge">v1.0.0</span>
+<span class="version-badge">v0.3.4</span>

 Export chat history to an Excel spreadsheet for analysis, archiving, and sharing.

+### What's New in v0.3.5
+
+- **Export Scope**: Added the `EXPORT_SCOPE` valve to choose between exporting tables from the "last message" (default) or "all messages".
+- **Smart Sheet Naming**: Sheets are automatically named after Markdown headers, AI titles (if enabled), or the message index (e.g., `消息1-表1`).
+- **Multiple Tables Support**: Improved handling of multiple tables within single or multiple messages.
+
 ### What's New in v0.3.4

 - **Smart Filename Generation**: Supports generating filenames from the chat title, an AI summary, or a Markdown header.
 - **Configuration Options**: Added the `TITLE_SOURCE` setting to control the filename generation strategy.

 ---

 ## Overview

 The Export to Excel plugin lets you download your chat history as Excel files, useful for:
@@ -23,6 +35,13 @@ The Export to Excel plugin lets you download your chat history as Excel files
 - :material-download: **One-Click Download**: Instant file generation
 - :material-history: **Full History**: Exports the complete conversation

+## Configuration
+
+- **Title Source**: Choose how the filename is generated:
+  - `chat_title`: Use the chat title (default).
+  - `ai_generated`: Use AI to generate a concise title from the content.
+  - `markdown_title`: Extract the first H1/H2 header from the markdown content.

 ---

 ## Installation
@@ -33,7 +33,7 @@ Actions are interactive plugins that:

 Transform text into professional infographics using the AntV visualization engine with various templates.

-**Version:** 1.0.0
+**Version:** 1.3.0

 [:octicons-arrow-right-24: Documentation](smart-infographic.md)

@@ -43,7 +43,7 @@ Actions are interactive plugins that:

 Quickly generates beautiful learning memory cards, perfect for studying and memorization.

-**Version:** 0.2.0
+**Version:** 0.2.2

 [:octicons-arrow-right-24: Documentation](knowledge-card.md)

@@ -53,7 +53,7 @@ Actions are interactive plugins that:

 Export chat conversations to Excel spreadsheet format for analysis and archiving.

-**Version:** 1.0.0
+**Version:** 0.3.5

 [:octicons-arrow-right-24: Documentation](export-to-excel.md)

@@ -73,7 +73,7 @@ Actions are interactive plugins that:

 Generate concise summaries of long text content with key points extraction.

-**Version:** 1.0.0
+**Version:** 0.1.0

 [:octicons-arrow-right-24: Documentation](summary.md)
@@ -33,7 +33,7 @@ Actions are interactive plugins that:

 Turns text into professional infographics using the AntV visualization engine.

-**Version:** 1.0.0
+**Version:** 1.3.0

 [:octicons-arrow-right-24: Documentation](smart-infographic.md)

@@ -43,7 +43,7 @@ Actions are interactive plugins that:

 Quickly generates beautiful learning memory cards, great for studying and memorization.

-**Version:** 0.2.0
+**Version:** 0.2.2

 [:octicons-arrow-right-24: Documentation](knowledge-card.md)

@@ -53,7 +53,7 @@ Actions are interactive plugins that:

 Export chat history to an Excel spreadsheet for analysis or archiving.

-**Version:** 1.0.0
+**Version:** 0.3.4

 [:octicons-arrow-right-24: Documentation](export-to-excel.md)

@@ -73,7 +73,7 @@ Actions are interactive plugins that:

 Generates concise summaries of long text and extracts key points.

-**Version:** 1.0.0
+**Version:** 0.1.0

 [:octicons-arrow-right-24: Documentation](summary.md)
@@ -1,7 +1,7 @@
 # Knowledge Card

 <span class="category-badge action">Action</span>
-<span class="version-badge">v0.2.0</span>
+<span class="version-badge">v0.2.2</span>

 Quickly generates beautiful learning memory cards, perfect for studying and quick memorization.

@@ -1,7 +1,7 @@
 # Smart Infographic

 <span class="category-badge action">Action</span>
-<span class="version-badge">v1.0.0</span>
+<span class="version-badge">v1.3.0</span>

 An AntV Infographic engine powered plugin that transforms long text into professional, beautiful infographics with a single click.
@@ -1,7 +1,7 @@
 # Summary

 <span class="category-badge action">Action</span>
-<span class="version-badge">v1.0.0</span>
+<span class="version-badge">v0.1.0</span>

 Generate concise summaries of long text content with key points extraction.

@@ -1,7 +1,7 @@
 # Summary (摘要)

 <span class="category-badge action">Action</span>
-<span class="version-badge">v1.0.0</span>
+<span class="version-badge">v0.1.0</span>

 Generates concise summaries of long text and extracts key points.
@@ -1,7 +1,7 @@
 # Async Context Compression

 <span class="category-badge filter">Filter</span>
-<span class="version-badge">v1.0.0</span>
+<span class="version-badge">v1.1.0</span>

 Reduces token consumption in long conversations through intelligent summarization while maintaining conversational coherence.

@@ -1,7 +1,7 @@
 # Async Context Compression (异步上下文压缩)

 <span class="category-badge filter">Filter</span>
-<span class="version-badge">v1.0.0</span>
+<span class="version-badge">v1.1.0</span>

 Reduces token usage in long conversations via intelligent summarization while keeping the dialogue coherent.
@@ -1,7 +1,7 @@
 # Context Enhancement

 <span class="category-badge filter">Filter</span>
-<span class="version-badge">v1.0.0</span>
+<span class="version-badge">v0.2</span>

 Enhances chat context with additional information for improved LLM responses.

@@ -1,7 +1,7 @@
 # Context Enhancement (上下文增强)

 <span class="category-badge filter">Filter</span>
-<span class="version-badge">v1.0.0</span>
+<span class="version-badge">v0.2</span>

 Automatically enriches the chat context so LLM replies are more relevant and accurate.
@@ -1,7 +1,7 @@
 # Gemini Manifold Companion

 <span class="category-badge filter">Filter</span>
-<span class="version-badge">v1.0.0</span>
+<span class="version-badge">v0.3.2</span>

 Companion filter for the Gemini Manifold pipe plugin, providing enhanced functionality.

@@ -1,7 +1,7 @@
 # Gemini Manifold Companion

 <span class="category-badge filter">Filter</span>
-<span class="version-badge">v1.0.0</span>
+<span class="version-badge">v0.3.2</span>

 Companion filter for the Gemini Manifold pipe, enhancing the Gemini integration.
@@ -16,13 +16,13 @@ Filters act as middleware in the message pipeline:

 <div class="grid cards" markdown>

-- :material-compress:{ .lg .middle } **Async Context Compression**
+- :material-arrow-collapse-vertical:{ .lg .middle } **Async Context Compression**

   ---

   Reduces token consumption in long conversations through intelligent summarization while maintaining coherence.

-  **Version:** 1.0.0
+  **Version:** 1.1.0

   [:octicons-arrow-right-24: Documentation](async-context-compression.md)

@@ -32,7 +32,7 @@ Filters act as middleware in the message pipeline:

   Enhances chat context with additional information for better responses.

-  **Version:** 1.0.0
+  **Version:** 0.2

   [:octicons-arrow-right-24: Documentation](context-enhancement.md)

@@ -42,7 +42,7 @@ Filters act as middleware in the message pipeline:

   Companion filter for the Gemini Manifold pipe plugin.

-  **Version:** 1.0.0
+  **Version:** 1.7.0

   [:octicons-arrow-right-24: Documentation](gemini-manifold-companion.md)
@@ -16,13 +16,13 @@ Filters act as middleware in the message pipeline:

 <div class="grid cards" markdown>

-- :material-compress:{ .lg .middle } **Async Context Compression**
+- :material-arrow-collapse-vertical:{ .lg .middle } **Async Context Compression**

   ---

   Reduces token consumption in long conversations through intelligent summarization while maintaining coherence.

-  **Version:** 1.0.0
+  **Version:** 1.1.0

   [:octicons-arrow-right-24: Documentation](async-context-compression.md)

@@ -32,7 +32,7 @@ Filters act as middleware in the message pipeline:

   Adds extra information to the chat to improve response quality.

-  **Version:** 1.0.0
+  **Version:** 0.2

   [:octicons-arrow-right-24: Documentation](context-enhancement.md)

@@ -42,7 +42,7 @@ Filters act as middleware in the message pipeline:

   Companion filter for the Gemini Manifold pipe plugin.

-  **Version:** 1.0.0
+  **Version:** 1.7.0

   [:octicons-arrow-right-24: Documentation](gemini-manifold-companion.md)
@@ -2,12 +2,29 @@

 This plugin allows you to export your chat history to an Excel (.xlsx) file directly from the chat interface.

+### What's New in v0.3.5
+
+- **Export Scope**: Added `EXPORT_SCOPE` valve to choose between exporting tables from the "Last Message" (default) or "All Messages".
+- **Smart Sheet Naming**: Automatically names sheets based on Markdown headers, AI titles (if enabled), or message index (e.g., `Msg1-Tab1`).
+- **Multiple Tables Support**: Improved handling of multiple tables within single or multiple messages.
+
 ### What's New in v0.3.4

 - **Smart Filename Generation**: Now supports generating filenames based on Chat Title, AI Summary, or Markdown Headers.
 - **Configuration Options**: Added `TITLE_SOURCE` setting to control filename generation strategy.

 ## Features

 - **One-Click Export**: Adds an "Export to Excel" button to the chat.
 - **Automatic Header Extraction**: Intelligently identifies table headers from the chat content.
 - **Multi-Table Support**: Handles multiple tables within a single chat session.

+## Configuration
+
+- **Title Source**: Choose how the filename is generated:
+  - `chat_title`: Use the chat title (default).
+  - `ai_generated`: Use AI to generate a concise title from the content.
+  - `markdown_title`: Extract the first H1/H2 header from the markdown content.
+
 ## Usage

 1. Install the plugin.
@@ -2,12 +2,29 @@

 This plugin allows you to export your conversation history to an Excel (.xlsx) file directly from the chat interface.

+### What's New in v0.3.5
+
+- **Export Scope**: Added the `EXPORT_SCOPE` valve to choose between exporting tables from the "last message" (default) or "all messages".
+- **Smart Sheet Naming**: Sheets are automatically named after Markdown headers, AI titles (if enabled), or the message index (e.g., `消息1-表1`).
+- **Multiple Tables Support**: Improved handling of multiple tables within single or multiple messages.
+
 ### What's New in v0.3.4

 - **Smart Filename Generation**: Supports generating filenames from the chat title, an AI summary, or a Markdown header.
 - **Configuration Options**: Added the `TITLE_SOURCE` setting to control the filename generation strategy.

 ## Features

 - **One-Click Export**: Adds an "Export to Excel" button to the chat.
 - **Automatic Header Extraction**: Intelligently identifies table headers from the chat content.
 - **Multi-Table Support**: Handles multiple tables within a single chat session.

 ## Configuration

 - **Title Source**: Choose how the filename is generated:
   - `chat_title`: Use the chat title (default).
   - `ai_generated`: Use AI to generate a concise title from the content.
   - `markdown_title`: Extract the first H1/H2 header from the markdown content.

 ## Usage

 1. Install the plugin.
@@ -3,7 +3,7 @@ title: Export to Excel
author: Fu-Jie
author_url: https://github.com/Fu-Jie
funding_url: https://github.com/Fu-Jie/awesome-openwebui
version: 0.3.3
version: 0.3.5
icon_url: data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIyNCIgaGVpZ2h0PSIyNCIgdmlld0JveD0iMCAwIDI0IDI0IiBmaWxsPSJub25lIiBzdHJva2U9ImN1cnJlbnRDb2xvciIgc3Ryb2tlLXdpZHRoPSIyIiBzdHJva2UtbGluZWNhcD0icm91bmQiIHN0cm9rZS1saW5lam9pbj0icm91bmQiPjxwYXRoIGQ9Ik0xNSAySDZhMiAyIDAgMCAwLTIgMnYxNmEyIDIgMCAwIDAgMiAyaDEyYTIgMiAwIDAgMCAyLTJWN1oiLz48cGF0aCBkPSJNMTQgMnY0YTIgMiAwIDAgMCAyIDJoNCIvPjxwYXRoIGQ9Ik04IDEzaDIiLz48cGF0aCBkPSJNMTQgMTNoMiIvPjxwYXRoIGQ9Ik04IDE3aDIiLz48cGF0aCBkPSJNMTQgMTdoMiIvPjwvc3ZnPg==
description: Exports the current chat history to an Excel (.xlsx) file, with automatic header extraction.
"""
@@ -15,14 +15,28 @@ import base64
from fastapi import FastAPI, HTTPException
from typing import Optional, Callable, Awaitable, Any, List, Dict
import datetime
import asyncio
from open_webui.models.chats import Chats
from open_webui.models.users import Users
from open_webui.utils.chat import generate_chat_completion
from pydantic import BaseModel, Field

app = FastAPI()


class Action:
    class Valves(BaseModel):
        TITLE_SOURCE: str = Field(
            default="chat_title",
            description="Title Source: 'chat_title' (Chat Title), 'ai_generated' (AI Generated), 'markdown_title' (Markdown Title)",
        )
        EXPORT_SCOPE: str = Field(
            default="last_message",
            description="Export Scope: 'last_message' (Last Message Only), 'all_messages' (All Messages)",
        )

    def __init__(self):
        pass
        self.valves = self.Valves()

    async def _send_notification(self, emitter: Callable, type: str, content: str):
        await emitter(
@@ -35,6 +49,7 @@ class Action:
        __user__=None,
        __event_emitter__=None,
        __event_call__: Optional[Callable[[Any], Awaitable[None]]] = None,
        __request__: Optional[Any] = None,
    ):
        print(f"action:{__name__}")
        if isinstance(__user__, (list, tuple)):
@@ -53,8 +68,6 @@ class Action:
            user_id = __user__.get("id", "unknown_user")

        if __event_emitter__:
            last_assistant_message = body["messages"][-1]

            await __event_emitter__(
                {
                    "type": "status",
@@ -63,24 +76,152 @@ class Action:
            )

            try:
                message_content = last_assistant_message["content"]
                tables = self.extract_tables_from_message(message_content)
                messages = body.get("messages", [])
                if not messages:
                    raise HTTPException(status_code=400, detail="No messages found.")

                if not tables:
                    raise HTTPException(status_code=400, detail="No tables found.")
                # Determine messages to process based on scope
                target_messages = []
                if self.valves.EXPORT_SCOPE == "all_messages":
                    target_messages = messages
                else:
                    target_messages = [messages[-1]]

                # Get dynamic filename and sheet names
                workbook_name, sheet_names = self.generate_names_from_content(
                    message_content, tables
                )
                all_tables = []
                all_sheet_names = []

                # Process messages
                for msg_index, msg in enumerate(target_messages):
                    content = msg.get("content", "")
                    tables = self.extract_tables_from_message(content)

                    if not tables:
                        continue

                    # Generate sheet names for this message's tables
                    # If multiple messages, we need to ensure uniqueness across the whole workbook
                    # We'll generate base names here and deduplicate later if needed,
                    # or better: generate unique names on the fly.

                    # Extract headers for this message
                    headers = []
                    lines = content.split("\n")
                    for i, line in enumerate(lines):
                        if re.match(r"^#{1,6}\s+", line):
                            headers.append(
                                {
                                    "text": re.sub(r"^#{1,6}\s+", "", line).strip(),
                                    "line_num": i,
                                }
                            )

                    for table_index, table in enumerate(tables):
                        sheet_name = ""

                        # 1. Try Markdown Header (closest above)
                        table_start_line = table["start_line"] - 1
                        closest_header_text = None
                        candidate_headers = [
                            h for h in headers if h["line_num"] < table_start_line
                        ]
                        if candidate_headers:
                            closest_header = max(
                                candidate_headers, key=lambda x: x["line_num"]
                            )
                            closest_header_text = closest_header["text"]

                        if closest_header_text:
                            sheet_name = self.clean_sheet_name(closest_header_text)

                        # 2. AI Generated (Only if explicitly enabled and we have a request object)
                        # Note: Generating titles for EVERY table in all messages might be too slow/expensive.
                        # We'll skip this for 'all_messages' scope to avoid timeout, unless it's just one message.
                        if (
                            not sheet_name
                            and self.valves.TITLE_SOURCE == "ai_generated"
                            and len(target_messages) == 1
                        ):
                            # Logic for AI generation (simplified for now, reusing existing flow if possible)
                            pass

                        # 3. Fallback: Message Index
                        if not sheet_name:
                            if len(target_messages) > 1:
                                # Use global message index (from original list if possible, but here we iterate target_messages)
                                # Let's use the loop index.
                                # If multiple tables in one message: "Msg 1 - Table 1"
                                if len(tables) > 1:
                                    sheet_name = f"Msg{msg_index+1}-Tab{table_index+1}"
                                else:
                                    sheet_name = f"Msg{msg_index+1}"
                            else:
                                # Single message (last_message scope)
                                if len(tables) > 1:
                                    sheet_name = f"Table {table_index+1}"
                                else:
                                    sheet_name = "Sheet1"

                        all_tables.append(table)
                        all_sheet_names.append(sheet_name)

                if not all_tables:
                    raise HTTPException(
                        status_code=400, detail="No tables found in the selected scope."
                    )

                # Deduplicate sheet names
                final_sheet_names = []
                seen_names = {}
                for name in all_sheet_names:
                    base_name = name
                    counter = 1
                    while name in seen_names:
                        name = f"{base_name} ({counter})"
                        counter += 1
                    seen_names[name] = True
                    final_sheet_names.append(name)

                # Generate Workbook Title (Filename)
                # Use the title of the chat, or the first header of the first message with tables
                title = ""
                chat_id = self.extract_chat_id(body, None)
                chat_title = ""
                if chat_id:
                    chat_title = await self.fetch_chat_title(chat_id, user_id)

                if (
                    self.valves.TITLE_SOURCE == "chat_title"
                    or not self.valves.TITLE_SOURCE
                ):
                    title = chat_title
                elif self.valves.TITLE_SOURCE == "markdown_title":
                    # Try to find first header in the first message that has content
                    for msg in target_messages:
                        extracted = self.extract_title(msg.get("content", ""))
                        if extracted:
                            title = extracted
                            break

                # Fallback for filename
                if not title:
                    if chat_title:
                        title = chat_title
                    else:
                        # Try extracting from content again if not already tried
                        if self.valves.TITLE_SOURCE != "markdown_title":
                            for msg in target_messages:
                                extracted = self.extract_title(msg.get("content", ""))
                                if extracted:
                                    title = extracted
                                    break

                # Use optimized filename generation logic
                current_datetime = datetime.datetime.now()
                formatted_date = current_datetime.strftime("%Y%m%d")

                # If no title found, use user_yyyymmdd format
                if not workbook_name:
                    if not title:
                        workbook_name = f"{user_name}_{formatted_date}"
                    else:
                        workbook_name = self.clean_filename(title)

                filename = f"{workbook_name}.xlsx"
                excel_file_path = os.path.join(
@@ -89,8 +230,10 @@ class Action:

                os.makedirs(os.path.dirname(excel_file_path), exist_ok=True)

                # Save tables to Excel (using enhanced formatting)
                self.save_tables_to_excel_enhanced(tables, excel_file_path, sheet_names)
                # Save tables to Excel
                self.save_tables_to_excel_enhanced(
                    all_tables, excel_file_path, final_sheet_names
                )

                # Trigger file download
                if __event_call__:
@@ -172,6 +315,88 @@ class Action:
                __event_emitter__, "error", "No tables found to export!"
            )

    async def generate_title_using_ai(
        self, body: dict, content: str, user_id: str, request: Any
    ) -> str:
        if not request:
            return ""

        try:
            user_obj = Users.get_user_by_id(user_id)
            model = body.get("model")

            payload = {
                "model": model,
                "messages": [
                    {
                        "role": "system",
                        "content": "You are a helpful assistant. Generate a short, concise title (max 10 words) for the following text. Do not use quotes. Only output the title.",
                    },
                    {"role": "user", "content": content[:2000]},  # Limit content length
                ],
                "stream": False,
            }

            response = await generate_chat_completion(request, payload, user_obj)
            if response and "choices" in response:
                return response["choices"][0]["message"]["content"].strip()
        except Exception as e:
            print(f"Error generating title: {e}")

        return ""

    def extract_title(self, content: str) -> str:
        """Extract title from Markdown h1/h2 only"""
        lines = content.split("\n")
        for line in lines:
            # Match h1-h2 headings only
            match = re.match(r"^#{1,2}\s+(.+)$", line.strip())
            if match:
                return match.group(1).strip()
        return ""

    def extract_chat_id(self, body: dict, metadata: Optional[dict]) -> str:
        """Extract chat_id from body or metadata"""
        if isinstance(body, dict):
            chat_id = body.get("chat_id") or body.get("id")
            if isinstance(chat_id, str) and chat_id.strip():
                return chat_id.strip()

            for key in ("chat", "conversation"):
                nested = body.get(key)
                if isinstance(nested, dict):
                    nested_id = nested.get("id") or nested.get("chat_id")
                    if isinstance(nested_id, str) and nested_id.strip():
                        return nested_id.strip()
        if isinstance(metadata, dict):
            chat_id = metadata.get("chat_id")
            if isinstance(chat_id, str) and chat_id.strip():
                return chat_id.strip()
        return ""

    async def fetch_chat_title(self, chat_id: str, user_id: str = "") -> str:
        """Fetch chat title from database by chat_id"""
        if not chat_id:
            return ""

        def _load_chat():
            if user_id:
                return Chats.get_chat_by_id_and_user_id(id=chat_id, user_id=user_id)
            return Chats.get_chat_by_id(chat_id)

        try:
            chat = await asyncio.to_thread(_load_chat)
        except Exception as exc:
            print(f"Failed to load chat {chat_id}: {exc}")
            return ""

        if not chat:
            return ""

        data = getattr(chat, "chat", {}) or {}
        title = data.get("title") or getattr(chat, "title", "")
        return title.strip() if isinstance(title, str) else ""

    def extract_tables_from_message(self, message: str) -> List[Dict]:
        """
        Extract Markdown tables and their positions from message text
@@ -524,6 +749,28 @@ class Action:
            }
        )

        # Bold cell style (for full cell bolding)
        text_bold_format = workbook.add_format(
            {
                "border": 1,
                "align": "left",
                "valign": "vcenter",
                "text_wrap": True,
                "bold": True,
            }
        )

        # Italic cell style (for full cell italics)
        text_italic_format = workbook.add_format(
            {
                "border": 1,
                "align": "left",
                "valign": "vcenter",
                "text_wrap": True,
                "italic": True,
            }
        )

        for i, table in enumerate(tables):
            try:
                table_data = table["data"]
@@ -595,6 +842,8 @@ class Action:
                    decimal_format,
                    date_format,
                    sequence_format,
                    text_bold_format,
                    text_italic_format,
                )

            except Exception as e:
@@ -618,6 +867,8 @@ class Action:
        decimal_format,
        date_format,
        sequence_format,
        text_bold_format=None,
        text_italic_format=None,
    ):
        """
        Apply enhanced formatting
@@ -626,6 +877,7 @@ class Action:
        - Text: Left aligned
        - Date: Center aligned
        - Sequence: Center aligned
        - Supports full cell Markdown bold (**text**) and italic (*text*)
        """
        try:
            # 1. Write headers (Center aligned)
@@ -687,7 +939,28 @@ class Action:
                        # Text - Left aligned
                        current_format = text_format

                    worksheet.write(row_idx + 1, col_idx, value, current_format)
                    if content_type == "text" and isinstance(value, str):
                        # Check for full cell bold (**text**)
                        match_bold = re.fullmatch(r"\*\*(.+)\*\*", value.strip())
                        # Check for full cell italic (*text*)
                        match_italic = re.fullmatch(r"\*(.+)\*", value.strip())

                        if match_bold:
                            # Extract content and apply bold format
                            clean_value = match_bold.group(1)
                            worksheet.write(
                                row_idx + 1, col_idx, clean_value, text_bold_format
                            )
                        elif match_italic:
                            # Extract content and apply italic format
                            clean_value = match_italic.group(1)
                            worksheet.write(
                                row_idx + 1, col_idx, clean_value, text_italic_format
                            )
                        else:
                            worksheet.write(row_idx + 1, col_idx, value, current_format)
                    else:
                        worksheet.write(row_idx + 1, col_idx, value, current_format)

            # 4. Auto-adjust column width
            for col_idx, column in enumerate(headers):
@@ -777,3 +1050,6 @@ class Action:

        except Exception as e:
            print(f"Error in basic formatting: {str(e)}")

        except Exception as e:
            print(f"Error in basic formatting: {str(e)}")
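The whole-cell bold/italic detection in the formatting hunk above hinges on `re.fullmatch`, which only matches when the emphasis markers span the entire cell. Its behavior can be exercised in isolation (a standalone sketch; `classify_emphasis` is an illustrative helper, not part of the plugin):

```python
import re


def classify_emphasis(value: str):
    """Return (style, clean_text) for whole-cell **bold** / *italic* markup."""
    stripped = value.strip()
    m = re.fullmatch(r"\*\*(.+)\*\*", stripped)
    if m:
        return "bold", m.group(1)
    m = re.fullmatch(r"\*(.+)\*", stripped)
    if m:
        return "italic", m.group(1)
    return "plain", stripped


print(classify_emphasis("**Total**"))
```

Because bold is tested first, `**Total**` is never misread as italic text wrapped in stray asterisks; partial emphasis like `a *b* c` falls through to plain, which matches how the plugin writes such cells with the default format.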
@@ -3,7 +3,7 @@ title: 导出为 Excel
author: Fu-Jie
author_url: https://github.com/Fu-Jie
funding_url: https://github.com/Fu-Jie/awesome-openwebui
version: 0.3.3
version: 0.3.5
icon_url: data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIyNCIgaGVpZ2h0PSIyNCIgdmlld0JveD0iMCAwIDI0IDI0IiBmaWxsPSJub25lIiBzdHJva2U9ImN1cnJlbnRDb2xvciIgc3Ryb2tlLXdpZHRoPSIyIiBzdHJva2UtbGluZWNhcD0icm91bmQiIHN0cm9rZS1saW5lam9pbj0icm91bmQiPjxwYXRoIGQ9Ik0xNSAySDZhMiAyIDAgMCAwLTIgMnYxNmEyIDIgMCAwIDAgMiAyaDEyYTIgMiAwIDAgMCAyLTJWN1oiLz48cGF0aCBkPSJNMTQgMnY0YTIgMiAwIDAgMCAyIDJoNCIvPjxwYXRoIGQ9Ik04IDEzaDIiLz48cGF0aCBkPSJNMTQgMTNoMiIvPjxwYXRoIGQ9Ik04IDE3aDIiLz48cGF0aCBkPSJNMTQgMTdoMiIvPjwvc3ZnPg==
description: 将当前对话历史导出为 Excel (.xlsx) 文件,支持自动提取表头。
"""
@@ -15,14 +15,28 @@ import base64
from fastapi import FastAPI, HTTPException
from typing import Optional, Callable, Awaitable, Any, List, Dict
import datetime
import asyncio
from open_webui.models.chats import Chats
from open_webui.models.users import Users
from open_webui.utils.chat import generate_chat_completion
from pydantic import BaseModel, Field

app = FastAPI()


class Action:
    class Valves(BaseModel):
        TITLE_SOURCE: str = Field(
            default="chat_title",
            description="标题来源: 'chat_title' (对话标题), 'ai_generated' (AI生成), 'markdown_title' (Markdown标题)",
        )
        EXPORT_SCOPE: str = Field(
            default="last_message",
            description="导出范围: 'last_message' (仅最后一条消息), 'all_messages' (所有消息)",
        )

    def __init__(self):
        pass
        self.valves = self.Valves()

    async def _send_notification(self, emitter: Callable, type: str, content: str):
        await emitter(
@@ -35,52 +49,167 @@ class Action:
        __user__=None,
        __event_emitter__=None,
        __event_call__: Optional[Callable[[Any], Awaitable[None]]] = None,
        __request__: Optional[Any] = None,
    ):
        print(f"action:{__name__}")
        if isinstance(__user__, (list, tuple)):
            user_language = (
                __user__[0].get("language", "zh-CN") if __user__ else "zh-CN"
                __user__[0].get("language", "en-US") if __user__ else "en-US"
            )
            user_name = __user__[0].get("name", "用户") if __user__[0] else "用户"
            user_name = __user__[0].get("name", "User") if __user__[0] else "User"
            user_id = (
                __user__[0]["id"]
                if __user__ and "id" in __user__[0]
                else "unknown_user"
            )
        elif isinstance(__user__, dict):
            user_language = __user__.get("language", "zh-CN")
            user_name = __user__.get("name", "用户")
            user_language = __user__.get("language", "en-US")
            user_name = __user__.get("name", "User")
            user_id = __user__.get("id", "unknown_user")

        if __event_emitter__:
            last_assistant_message = body["messages"][-1]

            await __event_emitter__(
                {
                    "type": "status",
                    "data": {"description": "正在保存到文件...", "done": False},
                    "data": {"description": "正在保存文件...", "done": False},
                }
            )

            try:
                message_content = last_assistant_message["content"]
                tables = self.extract_tables_from_message(message_content)
                messages = body.get("messages", [])
                if not messages:
                    raise HTTPException(status_code=400, detail="未找到消息。")

                if not tables:
                    raise HTTPException(status_code=400, detail="未找到任何表格。")
                # Determine messages to process based on scope
                target_messages = []
                if self.valves.EXPORT_SCOPE == "all_messages":
                    target_messages = messages
                else:
                    target_messages = [messages[-1]]

                # 获取动态文件名和sheet名称
                workbook_name, sheet_names = self.generate_names_from_content(
                    message_content, tables
                )
                all_tables = []
                all_sheet_names = []

                # Process messages
                for msg_index, msg in enumerate(target_messages):
                    content = msg.get("content", "")
                    tables = self.extract_tables_from_message(content)

                    if not tables:
                        continue

                    # Generate sheet names for this message's tables

                    # Extract headers for this message
                    headers = []
                    lines = content.split("\n")
                    for i, line in enumerate(lines):
                        if re.match(r"^#{1,6}\s+", line):
                            headers.append(
                                {
                                    "text": re.sub(r"^#{1,6}\s+", "", line).strip(),
                                    "line_num": i,
                                }
                            )

                    for table_index, table in enumerate(tables):
                        sheet_name = ""

                        # 1. Try Markdown Header (closest above)
                        table_start_line = table["start_line"] - 1
                        closest_header_text = None
                        candidate_headers = [
                            h for h in headers if h["line_num"] < table_start_line
                        ]
                        if candidate_headers:
                            closest_header = max(
                                candidate_headers, key=lambda x: x["line_num"]
                            )
                            closest_header_text = closest_header["text"]

                        if closest_header_text:
                            sheet_name = self.clean_sheet_name(closest_header_text)

                        # 2. AI Generated (Only if explicitly enabled and we have a request object)
                        if (
                            not sheet_name
                            and self.valves.TITLE_SOURCE == "ai_generated"
                            and len(target_messages) == 1
                        ):
                            pass

                        # 3. Fallback: Message Index
                        if not sheet_name:
                            if len(target_messages) > 1:
                                if len(tables) > 1:
                                    sheet_name = f"消息{msg_index+1}-表{table_index+1}"
                                else:
                                    sheet_name = f"消息{msg_index+1}"
                            else:
                                # Single message (last_message scope)
                                if len(tables) > 1:
                                    sheet_name = f"表{table_index+1}"
                                else:
                                    sheet_name = "Sheet1"

                        all_tables.append(table)
                        all_sheet_names.append(sheet_name)

                if not all_tables:
                    raise HTTPException(
                        status_code=400, detail="在选定范围内未找到表格。"
                    )

                # Deduplicate sheet names
                final_sheet_names = []
                seen_names = {}
                for name in all_sheet_names:
                    base_name = name
                    counter = 1
                    while name in seen_names:
                        name = f"{base_name} ({counter})"
                        counter += 1
                    seen_names[name] = True
                    final_sheet_names.append(name)

                # Generate Workbook Title (Filename)
                title = ""
                chat_id = self.extract_chat_id(body, None)
                chat_title = ""
                if chat_id:
                    chat_title = await self.fetch_chat_title(chat_id, user_id)

                if (
                    self.valves.TITLE_SOURCE == "chat_title"
                    or not self.valves.TITLE_SOURCE
                ):
                    title = chat_title
                elif self.valves.TITLE_SOURCE == "markdown_title":
                    for msg in target_messages:
                        extracted = self.extract_title(msg.get("content", ""))
                        if extracted:
                            title = extracted
                            break

                # Fallback for filename
                if not title:
                    if chat_title:
                        title = chat_title
                    else:
                        if self.valves.TITLE_SOURCE != "markdown_title":
                            for msg in target_messages:
                                extracted = self.extract_title(msg.get("content", ""))
                                if extracted:
                                    title = extracted
                                    break

                # 使用优化后的文件名生成逻辑
                current_datetime = datetime.datetime.now()
                formatted_date = current_datetime.strftime("%Y%m%d")

                # 如果没找到标题则使用 user_yyyymmdd 格式
                if not workbook_name:
                    if not title:
                        workbook_name = f"{user_name}_{formatted_date}"
                    else:
                        workbook_name = self.clean_filename(title)

                filename = f"{workbook_name}.xlsx"
                excel_file_path = os.path.join(
@@ -89,10 +218,12 @@ class Action:

                os.makedirs(os.path.dirname(excel_file_path), exist_ok=True)

                # 保存表格到Excel(使用符合中国规范的格式化功能)
                self.save_tables_to_excel_enhanced(tables, excel_file_path, sheet_names)
                # Save tables to Excel
                self.save_tables_to_excel_enhanced(
                    all_tables, excel_file_path, final_sheet_names
                )

                # 触发文件下载
                # Trigger file download
                if __event_call__:
                    with open(excel_file_path, "rb") as file:
                        file_content = file.read()
@@ -123,7 +254,7 @@ class Action:
                            URL.revokeObjectURL(url);
                            document.body.removeChild(a);
                        }} catch (error) {{
                            console.error('触发下载时出错:', error);
                            console.error('Error triggering download:', error);
                        }}
                    """
                },
@@ -132,15 +263,15 @@ class Action:
                await __event_emitter__(
                    {
                        "type": "status",
                        "data": {"description": "输出已保存", "done": True},
                        "data": {"description": "文件已保存", "done": True},
                    }
                )

                # 清理临时文件
                # Clean up temp file
                if os.path.exists(excel_file_path):
                    os.remove(excel_file_path)

                return {"message": "下载事件已触发"}
                return {"message": "下载已触发"}

            except HTTPException as e:
                print(f"Error processing tables: {str(e.detail)}")
@@ -148,13 +279,13 @@ class Action:
                    {
                        "type": "status",
                        "data": {
                            "description": f"保存文件时出错: {e.detail}",
                            "description": f"保存文件错误: {e.detail}",
                            "done": True,
                        },
                    }
                )
                await self._send_notification(
                    __event_emitter__, "error", "没有找到可以导出的表格!"
                    __event_emitter__, "error", "未找到可导出的表格!"
                )
                raise e
            except Exception as e:
@@ -163,15 +294,97 @@ class Action:
                    {
                        "type": "status",
                        "data": {
                            "description": f"保存文件时出错: {str(e)}",
                            "description": f"保存文件错误: {str(e)}",
                            "done": True,
                        },
                    }
                )
                await self._send_notification(
                    __event_emitter__, "error", "没有找到可以导出的表格!"
                    __event_emitter__, "error", "未找到可导出的表格!"
                )

    async def generate_title_using_ai(
        self, body: dict, content: str, user_id: str, request: Any
    ) -> str:
        if not request:
            return ""

        try:
            user_obj = Users.get_user_by_id(user_id)
            model = body.get("model")

            payload = {
                "model": model,
                "messages": [
                    {
                        "role": "system",
                        "content": "你是一个乐于助人的助手。请为以下文本生成一个简短、简洁的标题(最多10个字)。不要使用引号。只输出标题。",
                    },
                    {"role": "user", "content": content[:2000]},  # 限制内容长度
                ],
                "stream": False,
            }

            response = await generate_chat_completion(request, payload, user_obj)
            if response and "choices" in response:
                return response["choices"][0]["message"]["content"].strip()
        except Exception as e:
            print(f"生成标题时出错: {e}")

        return ""

    def extract_title(self, content: str) -> str:
        """从 Markdown h1/h2 中提取标题"""
        lines = content.split("\n")
        for line in lines:
            # 仅匹配 h1-h2 标题
            match = re.match(r"^#{1,2}\s+(.+)$", line.strip())
            if match:
                return match.group(1).strip()
        return ""

    def extract_chat_id(self, body: dict, metadata: Optional[dict]) -> str:
        """从 body 或 metadata 中提取 chat_id"""
        if isinstance(body, dict):
            chat_id = body.get("chat_id") or body.get("id")
            if isinstance(chat_id, str) and chat_id.strip():
                return chat_id.strip()

            for key in ("chat", "conversation"):
                nested = body.get(key)
                if isinstance(nested, dict):
                    nested_id = nested.get("id") or nested.get("chat_id")
                    if isinstance(nested_id, str) and nested_id.strip():
                        return nested_id.strip()
        if isinstance(metadata, dict):
            chat_id = metadata.get("chat_id")
            if isinstance(chat_id, str) and chat_id.strip():
                return chat_id.strip()
        return ""

    async def fetch_chat_title(self, chat_id: str, user_id: str = "") -> str:
        """通过 chat_id 从数据库获取对话标题"""
        if not chat_id:
            return ""

        def _load_chat():
            if user_id:
                return Chats.get_chat_by_id_and_user_id(id=chat_id, user_id=user_id)
            return Chats.get_chat_by_id(chat_id)

        try:
            chat = await asyncio.to_thread(_load_chat)
        except Exception as exc:
            print(f"加载对话 {chat_id} 失败: {exc}")
            return ""

        if not chat:
            return ""

        data = getattr(chat, "chat", {}) or {}
        title = data.get("title") or getattr(chat, "title", "")
        return title.strip() if isinstance(title, str) else ""

    def extract_tables_from_message(self, message: str) -> List[Dict]:
        """
        从消息文本中提取Markdown表格及位置信息
@@ -541,6 +754,28 @@ class Action:
            }
        )

        # 粗体单元格样式 (用于全单元格加粗)
        text_bold_format = workbook.add_format(
            {
                "border": 1,
                "align": "left",
                "valign": "vcenter",
                "text_wrap": True,
                "bold": True,
            }
        )

        # 斜体单元格样式 (用于全单元格斜体)
        text_italic_format = workbook.add_format(
            {
                "border": 1,
                "align": "left",
                "valign": "vcenter",
                "text_wrap": True,
                "italic": True,
            }
        )

        for i, table in enumerate(tables):
            try:
                table_data = table["data"]
@@ -612,6 +847,8 @@ class Action:
                    decimal_format,
                    date_format,
                    sequence_format,
                    text_bold_format,
                    text_italic_format,
                )

            except Exception as e:
@@ -635,6 +872,8 @@ class Action:
        decimal_format,
        date_format,
        sequence_format,
        text_bold_format=None,
        text_italic_format=None,
    ):
        """
        应用符合中国官方表格规范的格式化
@@ -643,6 +882,7 @@ class Action:
        - 文本: 左对齐
        - 日期: 居中对齐
        - 序号: 居中对齐
        - 支持全单元格 Markdown 粗体 (**text**) 和斜体 (*text*)
        """
        try:
            # 1. 写入表头(居中对齐)
@@ -704,7 +944,28 @@ class Action:
                        # 文本类型 - 左对齐
                        current_format = text_format

                    worksheet.write(row_idx + 1, col_idx, value, current_format)
                    if content_type == "text" and isinstance(value, str):
                        # 检查是否全单元格加粗 (**text**)
                        match_bold = re.fullmatch(r"\*\*(.+)\*\*", value.strip())
                        # 检查是否全单元格斜体 (*text*)
                        match_italic = re.fullmatch(r"\*(.+)\*", value.strip())

                        if match_bold:
                            # 提取内容并应用粗体格式
                            clean_value = match_bold.group(1)
                            worksheet.write(
                                row_idx + 1, col_idx, clean_value, text_bold_format
                            )
                        elif match_italic:
                            # 提取内容并应用斜体格式
                            clean_value = match_italic.group(1)
                            worksheet.write(
                                row_idx + 1, col_idx, clean_value, text_italic_format
                            )
                        else:
                            worksheet.write(row_idx + 1, col_idx, value, current_format)
                    else:
                        worksheet.write(row_idx + 1, col_idx, value, current_format)

            # 4. 自动调整列宽
            for col_idx, column in enumerate(headers):
@@ -804,3 +1065,6 @@ class Action:

        except Exception as e:
            print(f"Warning: Even basic formatting failed: {str(e)}")

        except Exception as e:
            print(f"Warning: Even basic formatting failed: {str(e)}")
@@ -3,7 +3,7 @@ title: Flash Card
author: Fu-Jie
author_url: https://github.com/Fu-Jie
funding_url: https://github.com/Fu-Jie/awesome-openwebui
version: 0.2.1
version: 0.2.2
icon_url: data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIyNCIgaGVpZ2h0PSIyNCIgdmlld0JveD0iMCAwIDI0IDI0IiBmaWxsPSJub25lIiBzdHJva2U9ImN1cnJlbnRDb2xvciIgc3Ryb2tlLXdpZHRoPSIyIiBzdHJva2UtbGluZWNhcD0icm91bmQiIHN0cm9rZS1saW5lam9pbj0icm91bmQiPjxwb2x5Z29uIHBvaW50cz0iMTIgMiAyIDcgMTIgMTIgMjIgNyAxMiAyIi8+PHBvbHlsaW5lIHBvaW50cz0iMiAxNyAxMiAyMiAyMiAxNyIvPjxwb2x5bGluZSBwb2ludHM9IjIgMTIgMTIgMTcgMjIgMTIiLz48L3N2Zz4=
description: Quickly generates beautiful flashcards from text, extracting key points and categories.
"""
@@ -3,7 +3,7 @@ title: 闪记卡 (Flash Card)
author: Fu-Jie
author_url: https://github.com/Fu-Jie
funding_url: https://github.com/Fu-Jie/awesome-openwebui
version: 0.2.1
version: 0.2.2
icon_url: data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIyNCIgaGVpZ2h0PSIyNCIgdmlld0JveD0iMCAwIDI0IDI0IiBmaWxsPSJub25lIiBzdHJva2U9ImN1cnJlbnRDb2xvciIgc3Ryb2tlLXdpZHRoPSIyIiBzdHJva2UtbGluZWNhcD0icm91bmQiIHN0cm9rZS1saW5lam9pbj0icm91bmQiPjxwb2x5Z29uIHBvaW50cz0iMTIgMiAyIDcgMTIgMTIgMjIgNyAxMiAyIi8+PHBvbHlsaW5lIHBvaW50cz0iMiAxNyAxMiAyMiAyMiAxNyIvPjxwb2x5bGluZSBwb2ludHM9IjIgMTIgMTIgMTcgMjIgMTIiLz48L3N2Zz4=
description: 快速将文本提炼为精美的学习记忆卡片,支持核心要点提取与分类。
"""
@@ -1,7 +1,7 @@
"""
title: 精读 (Deep Reading)
icon_url: data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIyNCIgaGVpZ2h0PSIyNCIgdmlld0JveD0iMCAwIDI0IDI0Ij48ZGVmcz48bGluZWFyR3JhZGllbnQgaWQ9ImciIHgxPSIwIiB5MT0iMCIgeDI9IjEiIHkyPSIxIj48c3RvcCBvZmZzZXQ9IjAlIiBzdG9wLWNvbG9yPSIjNDI4NWY0Ii8+PHN0b3Agb2Zmc2V0PSIxMDAlIiBzdG9wLWNvbG9yPSIjMWU4OGU1Ii8+PC9saW5lYXJHcmFkaWVudD48L2RlZnM+PHBhdGggZD0iTTYgMmg4bDYgNnYxMmEyIDIgMCAwIDEtMiAySDZhMiAyIDAgMCAxLTItMlY0YTIgMiAwIDAgMSAyLTJ6IiBmaWxsPSJ1cmwoI2cpIi8+PHBhdGggZD0iTTE0IDJsNiA2aC02eiIgZmlsbD0iIzFlODhlNSIgb3BhY2l0eT0iMC42Ii8+PGxpbmUgeDE9IjgiIHkxPSIxMyIgeDI9IjE2IiB5Mj0iMTMiIHN0cm9rZT0iI2ZmZiIgc3Ryb2tlLXdpZHRoPSIxLjUiLz48bGluZSB4MT0iOCIgeTE9IjE3IiB4Mj0iMTQiIHkyPSIxNyIgc3Ryb2tlPSIjZmZmIiBzdHJva2Utd2lkdGg9IjEuNSIvPjxjaXJjbGUgY3g9IjE2IiBjeT0iMTgiIHI9IjMiIGZpbGw9IiNmZmQ3MDAiLz48cGF0aCBkPSJNMTYgMTZsMS41IDEuNSIgc3Ryb2tlPSIjNDI4NWY0IiBzdHJva2Utd2lkdGg9IjIiIHN0cm9rZS1saW5lY2FwPSJyb3VuZCIvPjwvc3ZnPg==
version: 2.0.0
version: 0.1.0
description: 深度分析长篇文本,提炼详细摘要、关键信息点和可执行的行动建议,适合工作和学习场景。
requirements: jinja2, markdown
"""
@@ -1,6 +1,6 @@
 # Async Context Compression Filter
 
-**Author:** [Fu-Jie](https://github.com/Fu-Jie) | **Version:** 1.2.0 | **License:** MIT
+**Author:** [Fu-Jie](https://github.com/Fu-Jie) | **Version:** 1.1.0 | **License:** MIT
 
 This filter reduces token consumption in long conversations through intelligent summarization and message compression while keeping conversations coherent.
 
@@ -1,6 +1,6 @@
 # Example Pipe Plugin
 
-**Author:** OpenWebUI Community | **Version:** 1.0.0 | **License:** MIT
+**Author:** OpenWebUI Community | **Version:** 1.26.0 | **License:** MIT
 
 This is a template/example for creating Pipe plugins in OpenWebUI.
 
315
scripts/check_version_consistency.py
Normal file
@@ -0,0 +1,315 @@
#!/usr/bin/env python3
"""
Script to check and enforce version consistency across OpenWebUI plugins and documentation.
用于检查并强制 OpenWebUI 插件和文档之间版本一致性的脚本。

Usage:
    python scripts/check_version_consistency.py        # Check only
    python scripts/check_version_consistency.py --fix  # Check and fix
"""

import argparse
import os
import re
import sys
from pathlib import Path
from typing import Optional, List, Dict, Tuple

# ANSI colors
GREEN = "\033[92m"
RED = "\033[91m"
YELLOW = "\033[93m"
BLUE = "\033[94m"
RESET = "\033[0m"


def log_info(msg):
    print(f"{BLUE}[INFO]{RESET} {msg}")


def log_success(msg):
    print(f"{GREEN}[OK]{RESET} {msg}")


def log_warning(msg):
    print(f"{YELLOW}[WARN]{RESET} {msg}")


def log_error(msg):
    print(f"{RED}[ERR]{RESET} {msg}")


class VersionChecker:
    def __init__(self, root_dir: str, fix: bool = False):
        self.root_dir = Path(root_dir)
        self.plugins_dir = self.root_dir / "plugins"
        self.docs_dir = self.root_dir / "docs" / "plugins"
        self.fix = fix
        self.issues_found = 0
        self.fixed_count = 0

    def extract_version_from_py(self, file_path: Path) -> Optional[str]:
        """Extract version from Python docstring."""
        try:
            content = file_path.read_text(encoding="utf-8")
            match = re.search(r"version:\s*([\d\.]+)", content)
            if match:
                return match.group(1)
        except Exception as e:
            log_error(f"Failed to read {file_path}: {e}")
        return None

    def update_file_content(
        self, file_path: Path, pattern: str, replacement: str, version: str
    ) -> bool:
        """Update file content with new version."""
        try:
            content = file_path.read_text(encoding="utf-8")
            new_content = re.sub(pattern, replacement, content)

            if content != new_content:
                if self.fix:
                    file_path.write_text(new_content, encoding="utf-8")
                    log_success(
                        f"Fixed {file_path.relative_to(self.root_dir)}: -> {version}"
                    )
                    self.fixed_count += 1
                    return True
                else:
                    log_error(
                        f"Mismatch in {file_path.relative_to(self.root_dir)}: Expected {version}"
                    )
                    self.issues_found += 1
                    return False
            return True
        except Exception as e:
            log_error(f"Failed to update {file_path}: {e}")
            return False

    def check_plugin(self, plugin_type: str, plugin_dir: Path):
        """Check consistency for a single plugin."""
        plugin_name = plugin_dir.name

        # 1. Identify Source of Truth (English .py file)
        py_file = plugin_dir / f"{plugin_name}.py"
        if not py_file.exists():
            # Try finding any .py file that matches the directory name pattern or is the main file
            py_files = list(plugin_dir.glob("*.py"))
            # Filter out _cn.py, templates, etc.
            candidates = [
                f
                for f in py_files
                if not f.name.endswith("_cn.py") and "TEMPLATE" not in f.name
            ]
            if candidates:
                py_file = candidates[0]
            else:
                return  # Not a valid plugin dir

        true_version = self.extract_version_from_py(py_file)
        if not true_version:
            log_warning(f"Skipping {plugin_name}: No version found in {py_file.name}")
            return

        log_info(f"Checking {plugin_name} (v{true_version})...")

        # 2. Check Chinese .py file
        cn_py_files = list(plugin_dir.glob("*_cn.py")) + list(
            plugin_dir.glob("*中文*.py")
        )
        # Also check for files that are not the main file but might be the CN version
        for f in plugin_dir.glob("*.py"):
            if f != py_file and "TEMPLATE" not in f.name and f not in cn_py_files:
                # Heuristic: if it has Chinese characters or ends in _cn
                if re.search(r"[\u4e00-\u9fff]", f.name) or f.name.endswith("_cn.py"):
                    cn_py_files.append(f)

        for cn_py in set(cn_py_files):
            self.update_file_content(
                cn_py, r"(version:\s*)([\d\.]+)", rf"\g<1>{true_version}", true_version
            )

        # 3. Check README.md (English)
        readme = plugin_dir / "README.md"
        if readme.exists():
            # Pattern 1: **Version:** 1.0.0
            self.update_file_content(
                readme,
                r"(\*\*Version:?\*\*\s*)([\d\.]+)",
                rf"\g<1>{true_version}",
                true_version,
            )
            # Pattern 2: | **Version:** 1.0.0 |
            self.update_file_content(
                readme,
                r"(\|\s*\*\*Version:\*\*\s*)([\d\.]+)",
                rf"\g<1>{true_version}",
                true_version,
            )

        # 4. Check README_CN.md (Chinese)
        readme_cn = plugin_dir / "README_CN.md"
        if readme_cn.exists():
            # Pattern: **版本:** 1.0.0
            self.update_file_content(
                readme_cn,
                r"(\*\*版本:?\*\*\s*)([\d\.]+)",
                rf"\g<1>{true_version}",
                true_version,
            )

        # 5. Check Global Docs Index (docs/plugins/{type}/index.md)
        index_md = self.docs_dir / plugin_type / "index.md"
        if index_md.exists():
            # Need to find the specific block for this plugin.
            # This is harder with regex on the whole file.
            # We assume the format: **Version:** X.Y.Z
            # But we need to make sure we are updating the RIGHT plugin's version.
            # Strategy: Look for the plugin title or link, then the version nearby.

            # Extract title from py file to help search
            title = self.extract_title(py_file)
            if title:
                self.update_version_in_index(index_md, title, true_version)

        # 6. Check Global Docs Index CN (docs/plugins/{type}/index.zh.md)
        index_zh = self.docs_dir / plugin_type / "index.zh.md"
        if index_zh.exists():
            # Try to find Chinese title? Or just use English title if listed?
            # Often Chinese index uses English title or Chinese title.
            # Let's try to extract Chinese title from cn_py if available
            cn_title = None
            if cn_py_files:
                cn_title = self.extract_title(cn_py_files[0])

            target_title = cn_title if cn_title else title
            if target_title:
                self.update_version_in_index(
                    index_zh, target_title, true_version, is_zh=True
                )

        # 7. Check Global Detail Page (docs/plugins/{type}/{name}.md)
        # The doc filename usually matches the plugin directory name
        detail_md = self.docs_dir / plugin_type / f"{plugin_name}.md"
        if detail_md.exists():
            self.update_file_content(
                detail_md,
                r'(<span class="version-badge">v)([\d\.]+)(</span>)',
                rf"\g<1>{true_version}\g<3>",
                true_version,
            )

        # 8. Check Global Detail Page CN (docs/plugins/{type}/{name}.zh.md)
        detail_zh = self.docs_dir / plugin_type / f"{plugin_name}.zh.md"
        if detail_zh.exists():
            self.update_file_content(
                detail_zh,
                r'(<span class="version-badge">v)([\d\.]+)(</span>)',
                rf"\g<1>{true_version}\g<3>",
                true_version,
            )

    def extract_title(self, file_path: Path) -> Optional[str]:
        try:
            content = file_path.read_text(encoding="utf-8")
            match = re.search(r"title:\s*(.+)", content)
            if match:
                return match.group(1).strip()
        except Exception:
            pass
        return None

    def update_version_in_index(
        self, file_path: Path, title: str, version: str, is_zh: bool = False
    ):
        """
        Update version in index file.
        Look for:
        - ... **Title** ...
        - ...
        - **Version:** X.Y.Z
        """
        try:
            content = file_path.read_text(encoding="utf-8")

            # Escape title for regex
            safe_title = re.escape(title)

            # Regex to find the plugin block and its version
            # We look for the title, then non-greedy match until we find Version line
            if is_zh:
                ver_label = r"\*\*版本:\*\*"
            else:
                ver_label = r"\*\*Version:\*\*"

            # Pattern: (Title ...)(Version: )(\d+\.\d+\.\d+)
            # We allow some lines between title and version
            pattern = rf"(\*\*{safe_title}\*\*[\s\S]*?{ver_label}\s*)([\d\.]+)"

            match = re.search(pattern, content)
            if match:
                current_ver = match.group(2)
                if current_ver != version:
                    if self.fix:
                        new_content = content.replace(
                            match.group(0), f"{match.group(1)}{version}"
                        )
                        file_path.write_text(new_content, encoding="utf-8")
                        log_success(
                            f"Fixed index for {title}: {current_ver} -> {version}"
                        )
                        self.fixed_count += 1
                    else:
                        log_error(
                            f"Mismatch in index for {title}: Found {current_ver}, Expected {version}"
                        )
                        self.issues_found += 1
            else:
                # log_warning(f"Could not find entry for '{title}' in {file_path.name}")
                pass

        except Exception as e:
            log_error(f"Failed to check index {file_path}: {e}")

    def run(self):
        if not self.plugins_dir.exists():
            log_error(f"Plugins directory not found: {self.plugins_dir}")
            return

        # Scan actions, filters, pipes
        for type_dir in self.plugins_dir.iterdir():
            if type_dir.is_dir() and type_dir.name in ["actions", "filters", "pipes"]:
                for plugin_dir in type_dir.iterdir():
                    if plugin_dir.is_dir():
                        self.check_plugin(type_dir.name, plugin_dir)

        print("-" * 40)
        if self.issues_found > 0:
            if self.fix:
                print(f"Fixed {self.fixed_count} issues.")
            else:
                print(f"Found {self.issues_found} version inconsistencies.")
                print("Run with --fix to automatically resolve them.")
                sys.exit(1)
        else:
            print("All versions are consistent! ✨")


def main():
    parser = argparse.ArgumentParser(description="Check version consistency.")
    parser.add_argument("--fix", action="store_true", help="Fix inconsistencies")
    args = parser.parse_args()

    # Assume script is run from root or scripts dir
    root = Path.cwd()
    if (root / "scripts").exists():
        pass
    elif root.name == "scripts":
        root = root.parent

    checker = VersionChecker(str(root), fix=args.fix)
    checker.run()


if __name__ == "__main__":
    main()
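The heart of `update_file_content` is an `re.sub` call that keeps the captured label via a numbered backreference and swaps only the dotted digits for the source-of-truth version. A standalone sketch of that technique (the `bump_version` name and sample line are illustrative):

```python
import re

def bump_version(text: str, true_version: str) -> str:
    # \g<1> re-emits the captured "**Version:** " label; only group 2
    # (the dotted digits) is replaced with the source-of-truth version.
    return re.sub(r"(\*\*Version:?\*\*\s*)([\d\.]+)", rf"\g<1>{true_version}", text)

line = "| **Version:** 1.0.0 | **License:** MIT |"
print(bump_version(line, "1.2.0"))  # → | **Version:** 1.2.0 | **License:** MIT |
```

Using `\g<1>` rather than `\1` avoids ambiguity when the interpolated version string starts with a digit.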
@@ -96,6 +96,15 @@ def scan_plugins_directory(plugins_dir: str) -> list[dict[str, Any]]:
     for root, _dirs, files in os.walk(plugins_path):
         for file in files:
             if file.endswith(".py") and not file.startswith("__"):
+                # Skip specific files that should not trigger release
+                if file in [
+                    "gemini_manifold.py",
+                    "gemini_manifold_companion.py",
+                    "ACTION_PLUGIN_TEMPLATE.py",
+                    "ACTION_PLUGIN_TEMPLATE_CN.py",
+                ]:
+                    continue
+
                 file_path = os.path.join(root, file)
                 metadata = extract_plugin_metadata(file_path)
                 if metadata:
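The hunk above excludes template and vendored files from the release scan. The walk-and-skip pattern in isolation (the `find_plugin_files` helper and the throwaway directory in the demo are illustrative, with a shortened skip list):

```python
import os
import tempfile

# Files that should never be treated as release-triggering plugins.
SKIP_FILES = {
    "ACTION_PLUGIN_TEMPLATE.py",
    "ACTION_PLUGIN_TEMPLATE_CN.py",
}

def find_plugin_files(plugins_dir: str) -> list[str]:
    found = []
    for root, _dirs, files in os.walk(plugins_dir):
        for file in files:
            if not file.endswith(".py") or file.startswith("__"):
                continue
            if file in SKIP_FILES:  # skip templates / excluded plugins
                continue
            found.append(os.path.join(root, file))
    return found

# Demo with a throwaway directory.
with tempfile.TemporaryDirectory() as d:
    for name in ("my_plugin.py", "ACTION_PLUGIN_TEMPLATE.py", "__init__.py"):
        open(os.path.join(d, name), "w").close()
    print([os.path.basename(p) for p in find_plugin_files(d)])  # → ['my_plugin.py']
```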
@@ -109,9 +118,7 @@ def scan_plugins_directory(plugins_dir: str) -> list[dict[str, Any]]:
     return plugins
 
 
-def compare_versions(
-    current: list[dict], previous_file: str
-) -> dict[str, list[dict]]:
+def compare_versions(current: list[dict], previous_file: str) -> dict[str, list[dict]]:
     """
     Compare current plugin versions with a previous version file.
     比较当前插件版本与之前的版本文件。
@@ -168,7 +175,9 @@ def format_markdown_table(plugins: list[dict]) -> str:
         "|---------------|----------------|-------------|---------------------|",
     ]
 
-    for plugin in sorted(plugins, key=lambda x: (x.get("type", ""), x.get("title", ""))):
+    for plugin in sorted(
+        plugins, key=lambda x: (x.get("type", ""), x.get("title", ""))
+    ):
         title = plugin.get("title", "Unknown")
         version = plugin.get("version", "Unknown")
         plugin_type = plugin.get("type", "Unknown").capitalize()
@@ -181,7 +190,9 @@ def format_markdown_table(plugins: list[dict]) -> str:
     return "\n".join(lines)
 
 
-def format_release_notes(comparison: dict[str, list]) -> str:
+def format_release_notes(
+    comparison: dict[str, list], ignore_removed: bool = False
+) -> str:
     """
     Format version comparison as release notes.
     将版本比较格式化为发布说明。
@@ -206,7 +217,7 @@ def format_release_notes(
         )
         lines.append("")
 
-    if comparison["removed"]:
+    if comparison["removed"] and not ignore_removed:
         lines.append("### 移除插件 / Removed Plugins")
         for plugin in comparison["removed"]:
            lines.append(f"- **{plugin['title']}** v{plugin['version']}")
@@ -239,6 +250,11 @@ def main():
         metavar="FILE",
         help="Compare with previous version JSON file",
     )
+    parser.add_argument(
+        "--ignore-removed",
+        action="store_true",
+        help="Ignore removed plugins in output",
+    )
     parser.add_argument(
         "--output",
         "-o",
@@ -257,7 +273,9 @@ def main():
         if args.json:
             output = json.dumps(comparison, indent=2, ensure_ascii=False)
         else:
-            output = format_release_notes(comparison)
+            output = format_release_notes(
+                comparison, ignore_removed=args.ignore_removed
+            )
         if not output.strip():
             output = "No changes detected. / 未检测到更改。"
     elif args.json:
@@ -268,13 +286,17 @@ def main():
         # Default: simple list
         lines = []
         for plugin in sorted(plugins, key=lambda x: x.get("title", "")):
-            lines.append(f"{plugin.get('title', 'Unknown')}: v{plugin.get('version', '?')}")
+            lines.append(
+                f"{plugin.get('title', 'Unknown')}: v{plugin.get('version', '?')}"
+            )
         output = "\n".join(lines)
 
     # Write output
     if args.output:
         with open(args.output, "w", encoding="utf-8") as f:
             f.write(output)
+            if not output.endswith("\n"):
+                f.write("\n")
         print(f"Output written to {args.output}")
     else:
         print(output)