Compare commits
4 Commits
v2026.01.0
...
v2026.01.0
| Author | SHA1 | Date | |
|---|---|---|---|
|
|
e22744abd0 | ||
|
|
54c90238f7 | ||
|
|
40d77121bd | ||
|
|
3795976a79 |
@@ -35,8 +35,8 @@ Every plugin **MUST** have bilingual versions for both code and documentation:
|
||||
When adding or updating a plugin, you **MUST** update the following documentation files to maintain consistency:
|
||||
|
||||
### Plugin Directory
|
||||
- `README.md`: Update version, description, and usage.
|
||||
- `README_CN.md`: Update version, description, and usage.
|
||||
- `README.md`: Update version, description, and usage. **Explicitly describe new features.**
|
||||
- `README_CN.md`: Update version, description, and usage. **Explicitly describe new features.**
|
||||
|
||||
### Global Documentation (`docs/`)
|
||||
- **Index Pages**:
|
||||
|
||||
2
.github/copilot-instructions.md
vendored
2
.github/copilot-instructions.md
vendored
@@ -37,7 +37,9 @@ README 文件应包含以下内容:
|
||||
- 安装和设置说明 / Installation and setup instructions
|
||||
- 使用示例 / Usage examples
|
||||
- 故障排除指南 / Troubleshooting guide
|
||||
- 故障排除指南 / Troubleshooting guide
|
||||
- 版本和作者信息 / Version and author information
|
||||
- **新增功能 / New Features**: 如果是更新现有插件,必须明确列出并描述新增功能(发布到官方市场的重要要求)。/ If updating an existing plugin, explicitly list and describe new features (Critical for official market release).
|
||||
|
||||
### 官方文档 (Official Documentation)
|
||||
|
||||
|
||||
4
.github/workflows/release.yml
vendored
4
.github/workflows/release.yml
vendored
@@ -145,6 +145,10 @@ jobs:
|
||||
needs: check-changes
|
||||
if: needs.check-changes.outputs.has_changes == 'true' || github.event_name == 'workflow_dispatch' || startsWith(github.ref, 'refs/tags/v')
|
||||
runs-on: ubuntu-latest
|
||||
env:
|
||||
LANG: en_US.UTF-8
|
||||
LC_ALL: en_US.UTF-8
|
||||
|
||||
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
|
||||
@@ -1,12 +1,24 @@
|
||||
# Export to Excel
|
||||
|
||||
<span class="category-badge action">Action</span>
|
||||
<span class="version-badge">v0.3.3</span>
|
||||
<span class="version-badge">v0.3.4</span>
|
||||
|
||||
Export chat conversations to Excel spreadsheet format for analysis, archiving, and sharing.
|
||||
|
||||
|
||||
### What's New in v0.3.5
|
||||
- **Export Scope**: Added `EXPORT_SCOPE` valve to choose between exporting tables from the "Last Message" (default) or "All Messages".
|
||||
- **Smart Sheet Naming**: Automatically names sheets based on Markdown headers, AI titles (if enabled), or message index (e.g., `Msg1-Tab1`).
|
||||
- **Multiple Tables Support**: Improved handling of multiple tables within single or multiple messages.
|
||||
|
||||
## What's New in v0.3.4
|
||||
|
||||
- **Smart Filename Generation**: Now supports generating filenames based on Chat Title, AI Summary, or Markdown Headers.
|
||||
- **Configuration Options**: Added `TITLE_SOURCE` setting to control filename generation strategy.
|
||||
|
||||
---
|
||||
|
||||
|
||||
## Overview
|
||||
|
||||
The Export to Excel plugin allows you to download your chat conversations as Excel files. This is useful for:
|
||||
@@ -23,6 +35,13 @@ The Export to Excel plugin allows you to download your chat conversations as Exc
|
||||
- :material-download: **One-Click Download**: Instant file generation
|
||||
- :material-history: **Full History**: Exports complete conversation
|
||||
|
||||
## Configuration
|
||||
|
||||
- **Title Source**: Choose how the filename is generated:
|
||||
- `chat_title`: Use the chat title (default).
|
||||
- `ai_generated`: Use AI to generate a concise title from the content.
|
||||
- `markdown_title`: Extract the first H1/H2 header from the markdown content.
|
||||
|
||||
---
|
||||
|
||||
## Installation
|
||||
|
||||
@@ -1,12 +1,24 @@
|
||||
# Export to Excel(导出到 Excel)
|
||||
|
||||
<span class="category-badge action">Action</span>
|
||||
<span class="version-badge">v1.0.0</span>
|
||||
<span class="version-badge">v0.3.4</span>
|
||||
|
||||
将聊天记录导出为 Excel 表格,便于分析、归档和分享。
|
||||
|
||||
|
||||
### v0.3.5 更新内容
|
||||
- **导出范围**: 新增 `EXPORT_SCOPE` 配置项,可选择导出“最后一条消息”(默认)或“所有消息”中的表格。
|
||||
- **智能 Sheet 命名**: 根据 Markdown 标题、AI 标题(如启用)或消息索引(如 `消息1-表1`)自动命名 Sheet。
|
||||
- **多表格支持**: 优化了对单条或多条消息中包含多个表格的处理。
|
||||
|
||||
### v0.3.4 更新内容
|
||||
|
||||
- **智能文件名生成**:支持根据对话标题、AI 总结或 Markdown 标题生成文件名。
|
||||
- **配置选项**:新增 `TITLE_SOURCE` 设置,用于控制文件名生成策略。
|
||||
|
||||
---
|
||||
|
||||
|
||||
## 概览
|
||||
|
||||
Export to Excel 插件可以把你的聊天记录下载为 Excel 文件,适用于:
|
||||
@@ -23,6 +35,13 @@ Export to Excel 插件可以把你的聊天记录下载为 Excel 文件,适用
|
||||
- :material-download: **一键下载**:即时生成文件
|
||||
- :material-history: **完整历史**:导出完整会话内容
|
||||
|
||||
## 配置
|
||||
|
||||
- **标题来源 (Title Source)**:选择文件名的生成方式:
|
||||
- `chat_title`:使用对话标题(默认)。
|
||||
- `ai_generated`:使用 AI 根据内容生成简洁标题。
|
||||
- `markdown_title`:提取 Markdown 内容中的第一个 H1/H2 标题。
|
||||
|
||||
---
|
||||
|
||||
## 安装
|
||||
|
||||
@@ -53,7 +53,7 @@ Actions are interactive plugins that:
|
||||
|
||||
Export chat conversations to Excel spreadsheet format for analysis and archiving.
|
||||
|
||||
**Version:** 0.3.3
|
||||
**Version:** 0.3.5
|
||||
|
||||
[:octicons-arrow-right-24: Documentation](export-to-excel.md)
|
||||
|
||||
|
||||
@@ -53,7 +53,7 @@ Actions 是交互式插件,能够:
|
||||
|
||||
将聊天记录导出为 Excel 电子表格,方便分析或归档。
|
||||
|
||||
**版本:** 0.3.3
|
||||
**版本:** 0.3.4
|
||||
|
||||
[:octicons-arrow-right-24: 查看文档](export-to-excel.md)
|
||||
|
||||
|
||||
@@ -2,12 +2,29 @@
|
||||
|
||||
This plugin allows you to export your chat history to an Excel (.xlsx) file directly from the chat interface.
|
||||
|
||||
### What's New in v0.3.5
|
||||
- **Export Scope**: Added `EXPORT_SCOPE` valve to choose between exporting tables from the "Last Message" (default) or "All Messages".
|
||||
- **Smart Sheet Naming**: Automatically names sheets based on Markdown headers, AI titles (if enabled), or message index (e.g., `Msg1-Tab1`).
|
||||
- **Multiple Tables Support**: Improved handling of multiple tables within single or multiple messages.
|
||||
|
||||
## What's New in v0.3.4
|
||||
|
||||
- **Smart Filename Generation**: Now supports generating filenames based on Chat Title, AI Summary, or Markdown Headers.
|
||||
- **Configuration Options**: Added `TITLE_SOURCE` setting to control filename generation strategy.
|
||||
|
||||
## Features
|
||||
|
||||
- **One-Click Export**: Adds an "Export to Excel" button to the chat.
|
||||
- **Automatic Header Extraction**: Intelligently identifies table headers from the chat content.
|
||||
- **Multi-Table Support**: Handles multiple tables within a single chat session.
|
||||
|
||||
## Configuration
|
||||
|
||||
- **Title Source**: Choose how the filename is generated:
|
||||
- `chat_title`: Use the chat title (default).
|
||||
- `ai_generated`: Use AI to generate a concise title from the content.
|
||||
- `markdown_title`: Extract the first H1/H2 header from the markdown content.
|
||||
|
||||
## Usage
|
||||
|
||||
1. Install the plugin.
|
||||
|
||||
@@ -2,12 +2,29 @@
|
||||
|
||||
此插件允许你直接从聊天界面将对话历史导出为 Excel (.xlsx) 文件。
|
||||
|
||||
### v0.3.5 更新内容
|
||||
- **导出范围**: 新增 `EXPORT_SCOPE` 配置项,可选择导出“最后一条消息”(默认)或“所有消息”中的表格。
|
||||
- **智能 Sheet 命名**: 根据 Markdown 标题、AI 标题(如启用)或消息索引(如 `消息1-表1`)自动命名 Sheet。
|
||||
- **多表格支持**: 优化了对单条或多条消息中包含多个表格的处理。
|
||||
|
||||
## v0.3.4 更新内容
|
||||
|
||||
- **智能文件名生成**:支持根据对话标题、AI 总结或 Markdown 标题生成文件名。
|
||||
- **配置选项**:新增 `TITLE_SOURCE` 设置,用于控制文件名生成策略。
|
||||
|
||||
## 功能特点
|
||||
|
||||
- **一键导出**:在聊天界面添加“导出为 Excel”按钮。
|
||||
- **自动表头提取**:智能识别聊天内容中的表格标题。
|
||||
- **多表支持**:支持处理单次对话中的多个表格。
|
||||
|
||||
## 配置
|
||||
|
||||
- **标题来源 (Title Source)**:选择文件名的生成方式:
|
||||
- `chat_title`:使用对话标题(默认)。
|
||||
- `ai_generated`:使用 AI 根据内容生成简洁标题。
|
||||
- `markdown_title`:提取 Markdown 内容中的第一个 H1/H2 标题。
|
||||
|
||||
## 使用方法
|
||||
|
||||
1. 安装插件。
|
||||
|
||||
@@ -3,7 +3,7 @@ title: Export to Excel
|
||||
author: Fu-Jie
|
||||
author_url: https://github.com/Fu-Jie
|
||||
funding_url: https://github.com/Fu-Jie/awesome-openwebui
|
||||
version: 0.3.3
|
||||
version: 0.3.5
|
||||
icon_url: data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIyNCIgaGVpZ2h0PSIyNCIgdmlld0JveD0iMCAwIDI0IDI0IiBmaWxsPSJub25lIiBzdHJva2U9ImN1cnJlbnRDb2xvciIgc3Ryb2tlLXdpZHRoPSIyIiBzdHJva2UtbGluZWNhcD0icm91bmQiIHN0cm9rZS1saW5lam9pbj0icm91bmQiPjxwYXRoIGQ9Ik0xNSAySDZhMiAyIDAgMCAwLTIgMnYxNmEyIDIgMCAwIDAgMiAyaDEyYTIgMiAwIDAgMCAyLTJWN1oiLz48cGF0aCBkPSJNMTQgMnY0YTIgMiAwIDAgMCAyIDJoNCIvPjxwYXRoIGQ9Ik04IDEzaDIiLz48cGF0aCBkPSJNMTQgMTNoMiIvPjxwYXRoIGQ9Ik04IDE3aDIiLz48cGF0aCBkPSJNMTQgMTdoMiIvPjwvc3ZnPg==
|
||||
description: Exports the current chat history to an Excel (.xlsx) file, with automatic header extraction.
|
||||
"""
|
||||
@@ -15,14 +15,28 @@ import base64
|
||||
from fastapi import FastAPI, HTTPException
|
||||
from typing import Optional, Callable, Awaitable, Any, List, Dict
|
||||
import datetime
|
||||
import asyncio
|
||||
from open_webui.models.chats import Chats
|
||||
from open_webui.models.users import Users
|
||||
from open_webui.utils.chat import generate_chat_completion
|
||||
from pydantic import BaseModel, Field
|
||||
|
||||
app = FastAPI()
|
||||
|
||||
|
||||
class Action:
|
||||
class Valves(BaseModel):
|
||||
TITLE_SOURCE: str = Field(
|
||||
default="chat_title",
|
||||
description="Title Source: 'chat_title' (Chat Title), 'ai_generated' (AI Generated), 'markdown_title' (Markdown Title)",
|
||||
)
|
||||
EXPORT_SCOPE: str = Field(
|
||||
default="last_message",
|
||||
description="Export Scope: 'last_message' (Last Message Only), 'all_messages' (All Messages)",
|
||||
)
|
||||
|
||||
def __init__(self):
|
||||
pass
|
||||
self.valves = self.Valves()
|
||||
|
||||
async def _send_notification(self, emitter: Callable, type: str, content: str):
|
||||
await emitter(
|
||||
@@ -35,6 +49,7 @@ class Action:
|
||||
__user__=None,
|
||||
__event_emitter__=None,
|
||||
__event_call__: Optional[Callable[[Any], Awaitable[None]]] = None,
|
||||
__request__: Optional[Any] = None,
|
||||
):
|
||||
print(f"action:{__name__}")
|
||||
if isinstance(__user__, (list, tuple)):
|
||||
@@ -53,8 +68,6 @@ class Action:
|
||||
user_id = __user__.get("id", "unknown_user")
|
||||
|
||||
if __event_emitter__:
|
||||
last_assistant_message = body["messages"][-1]
|
||||
|
||||
await __event_emitter__(
|
||||
{
|
||||
"type": "status",
|
||||
@@ -63,24 +76,152 @@ class Action:
|
||||
)
|
||||
|
||||
try:
|
||||
message_content = last_assistant_message["content"]
|
||||
tables = self.extract_tables_from_message(message_content)
|
||||
messages = body.get("messages", [])
|
||||
if not messages:
|
||||
raise HTTPException(status_code=400, detail="No messages found.")
|
||||
|
||||
# Determine messages to process based on scope
|
||||
target_messages = []
|
||||
if self.valves.EXPORT_SCOPE == "all_messages":
|
||||
target_messages = messages
|
||||
else:
|
||||
target_messages = [messages[-1]]
|
||||
|
||||
all_tables = []
|
||||
all_sheet_names = []
|
||||
|
||||
# Process messages
|
||||
for msg_index, msg in enumerate(target_messages):
|
||||
content = msg.get("content", "")
|
||||
tables = self.extract_tables_from_message(content)
|
||||
|
||||
if not tables:
|
||||
raise HTTPException(status_code=400, detail="No tables found.")
|
||||
continue
|
||||
|
||||
# Get dynamic filename and sheet names
|
||||
workbook_name, sheet_names = self.generate_names_from_content(
|
||||
message_content, tables
|
||||
# Generate sheet names for this message's tables
|
||||
# If multiple messages, we need to ensure uniqueness across the whole workbook
|
||||
# We'll generate base names here and deduplicate later if needed,
|
||||
# or better: generate unique names on the fly.
|
||||
|
||||
# Extract headers for this message
|
||||
headers = []
|
||||
lines = content.split("\n")
|
||||
for i, line in enumerate(lines):
|
||||
if re.match(r"^#{1,6}\s+", line):
|
||||
headers.append(
|
||||
{
|
||||
"text": re.sub(r"^#{1,6}\s+", "", line).strip(),
|
||||
"line_num": i,
|
||||
}
|
||||
)
|
||||
|
||||
# Use optimized filename generation logic
|
||||
for table_index, table in enumerate(tables):
|
||||
sheet_name = ""
|
||||
|
||||
# 1. Try Markdown Header (closest above)
|
||||
table_start_line = table["start_line"] - 1
|
||||
closest_header_text = None
|
||||
candidate_headers = [
|
||||
h for h in headers if h["line_num"] < table_start_line
|
||||
]
|
||||
if candidate_headers:
|
||||
closest_header = max(
|
||||
candidate_headers, key=lambda x: x["line_num"]
|
||||
)
|
||||
closest_header_text = closest_header["text"]
|
||||
|
||||
if closest_header_text:
|
||||
sheet_name = self.clean_sheet_name(closest_header_text)
|
||||
|
||||
# 2. AI Generated (Only if explicitly enabled and we have a request object)
|
||||
# Note: Generating titles for EVERY table in all messages might be too slow/expensive.
|
||||
# We'll skip this for 'all_messages' scope to avoid timeout, unless it's just one message.
|
||||
if (
|
||||
not sheet_name
|
||||
and self.valves.TITLE_SOURCE == "ai_generated"
|
||||
and len(target_messages) == 1
|
||||
):
|
||||
# Logic for AI generation (simplified for now, reusing existing flow if possible)
|
||||
pass
|
||||
|
||||
# 3. Fallback: Message Index
|
||||
if not sheet_name:
|
||||
if len(target_messages) > 1:
|
||||
# Use global message index (from original list if possible, but here we iterate target_messages)
|
||||
# Let's use the loop index.
|
||||
# If multiple tables in one message: "Msg 1 - Table 1"
|
||||
if len(tables) > 1:
|
||||
sheet_name = f"Msg{msg_index+1}-Tab{table_index+1}"
|
||||
else:
|
||||
sheet_name = f"Msg{msg_index+1}"
|
||||
else:
|
||||
# Single message (last_message scope)
|
||||
if len(tables) > 1:
|
||||
sheet_name = f"Table {table_index+1}"
|
||||
else:
|
||||
sheet_name = "Sheet1"
|
||||
|
||||
all_tables.append(table)
|
||||
all_sheet_names.append(sheet_name)
|
||||
|
||||
if not all_tables:
|
||||
raise HTTPException(
|
||||
status_code=400, detail="No tables found in the selected scope."
|
||||
)
|
||||
|
||||
# Deduplicate sheet names
|
||||
final_sheet_names = []
|
||||
seen_names = {}
|
||||
for name in all_sheet_names:
|
||||
base_name = name
|
||||
counter = 1
|
||||
while name in seen_names:
|
||||
name = f"{base_name} ({counter})"
|
||||
counter += 1
|
||||
seen_names[name] = True
|
||||
final_sheet_names.append(name)
|
||||
|
||||
# Generate Workbook Title (Filename)
|
||||
# Use the title of the chat, or the first header of the first message with tables
|
||||
title = ""
|
||||
chat_id = self.extract_chat_id(body, None)
|
||||
chat_title = ""
|
||||
if chat_id:
|
||||
chat_title = await self.fetch_chat_title(chat_id, user_id)
|
||||
|
||||
if (
|
||||
self.valves.TITLE_SOURCE == "chat_title"
|
||||
or not self.valves.TITLE_SOURCE
|
||||
):
|
||||
title = chat_title
|
||||
elif self.valves.TITLE_SOURCE == "markdown_title":
|
||||
# Try to find first header in the first message that has content
|
||||
for msg in target_messages:
|
||||
extracted = self.extract_title(msg.get("content", ""))
|
||||
if extracted:
|
||||
title = extracted
|
||||
break
|
||||
|
||||
# Fallback for filename
|
||||
if not title:
|
||||
if chat_title:
|
||||
title = chat_title
|
||||
else:
|
||||
# Try extracting from content again if not already tried
|
||||
if self.valves.TITLE_SOURCE != "markdown_title":
|
||||
for msg in target_messages:
|
||||
extracted = self.extract_title(msg.get("content", ""))
|
||||
if extracted:
|
||||
title = extracted
|
||||
break
|
||||
|
||||
current_datetime = datetime.datetime.now()
|
||||
formatted_date = current_datetime.strftime("%Y%m%d")
|
||||
|
||||
# If no title found, use user_yyyymmdd format
|
||||
if not workbook_name:
|
||||
if not title:
|
||||
workbook_name = f"{user_name}_{formatted_date}"
|
||||
else:
|
||||
workbook_name = self.clean_filename(title)
|
||||
|
||||
filename = f"{workbook_name}.xlsx"
|
||||
excel_file_path = os.path.join(
|
||||
@@ -89,8 +230,10 @@ class Action:
|
||||
|
||||
os.makedirs(os.path.dirname(excel_file_path), exist_ok=True)
|
||||
|
||||
# Save tables to Excel (using enhanced formatting)
|
||||
self.save_tables_to_excel_enhanced(tables, excel_file_path, sheet_names)
|
||||
# Save tables to Excel
|
||||
self.save_tables_to_excel_enhanced(
|
||||
all_tables, excel_file_path, final_sheet_names
|
||||
)
|
||||
|
||||
# Trigger file download
|
||||
if __event_call__:
|
||||
@@ -172,6 +315,88 @@ class Action:
|
||||
__event_emitter__, "error", "No tables found to export!"
|
||||
)
|
||||
|
||||
async def generate_title_using_ai(
|
||||
self, body: dict, content: str, user_id: str, request: Any
|
||||
) -> str:
|
||||
if not request:
|
||||
return ""
|
||||
|
||||
try:
|
||||
user_obj = Users.get_user_by_id(user_id)
|
||||
model = body.get("model")
|
||||
|
||||
payload = {
|
||||
"model": model,
|
||||
"messages": [
|
||||
{
|
||||
"role": "system",
|
||||
"content": "You are a helpful assistant. Generate a short, concise title (max 10 words) for the following text. Do not use quotes. Only output the title.",
|
||||
},
|
||||
{"role": "user", "content": content[:2000]}, # Limit content length
|
||||
],
|
||||
"stream": False,
|
||||
}
|
||||
|
||||
response = await generate_chat_completion(request, payload, user_obj)
|
||||
if response and "choices" in response:
|
||||
return response["choices"][0]["message"]["content"].strip()
|
||||
except Exception as e:
|
||||
print(f"Error generating title: {e}")
|
||||
|
||||
return ""
|
||||
|
||||
def extract_title(self, content: str) -> str:
|
||||
"""Extract title from Markdown h1/h2 only"""
|
||||
lines = content.split("\n")
|
||||
for line in lines:
|
||||
# Match h1-h2 headings only
|
||||
match = re.match(r"^#{1,2}\s+(.+)$", line.strip())
|
||||
if match:
|
||||
return match.group(1).strip()
|
||||
return ""
|
||||
|
||||
def extract_chat_id(self, body: dict, metadata: Optional[dict]) -> str:
|
||||
"""Extract chat_id from body or metadata"""
|
||||
if isinstance(body, dict):
|
||||
chat_id = body.get("chat_id") or body.get("id")
|
||||
if isinstance(chat_id, str) and chat_id.strip():
|
||||
return chat_id.strip()
|
||||
|
||||
for key in ("chat", "conversation"):
|
||||
nested = body.get(key)
|
||||
if isinstance(nested, dict):
|
||||
nested_id = nested.get("id") or nested.get("chat_id")
|
||||
if isinstance(nested_id, str) and nested_id.strip():
|
||||
return nested_id.strip()
|
||||
if isinstance(metadata, dict):
|
||||
chat_id = metadata.get("chat_id")
|
||||
if isinstance(chat_id, str) and chat_id.strip():
|
||||
return chat_id.strip()
|
||||
return ""
|
||||
|
||||
async def fetch_chat_title(self, chat_id: str, user_id: str = "") -> str:
|
||||
"""Fetch chat title from database by chat_id"""
|
||||
if not chat_id:
|
||||
return ""
|
||||
|
||||
def _load_chat():
|
||||
if user_id:
|
||||
return Chats.get_chat_by_id_and_user_id(id=chat_id, user_id=user_id)
|
||||
return Chats.get_chat_by_id(chat_id)
|
||||
|
||||
try:
|
||||
chat = await asyncio.to_thread(_load_chat)
|
||||
except Exception as exc:
|
||||
print(f"Failed to load chat {chat_id}: {exc}")
|
||||
return ""
|
||||
|
||||
if not chat:
|
||||
return ""
|
||||
|
||||
data = getattr(chat, "chat", {}) or {}
|
||||
title = data.get("title") or getattr(chat, "title", "")
|
||||
return title.strip() if isinstance(title, str) else ""
|
||||
|
||||
def extract_tables_from_message(self, message: str) -> List[Dict]:
|
||||
"""
|
||||
Extract Markdown tables and their positions from message text
|
||||
|
||||
@@ -3,7 +3,7 @@ title: 导出为 Excel
|
||||
author: Fu-Jie
|
||||
author_url: https://github.com/Fu-Jie
|
||||
funding_url: https://github.com/Fu-Jie/awesome-openwebui
|
||||
version: 0.3.3
|
||||
version: 0.3.5
|
||||
icon_url: data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIyNCIgaGVpZ2h0PSIyNCIgdmlld0JveD0iMCAwIDI0IDI0IiBmaWxsPSJub25lIiBzdHJva2U9ImN1cnJlbnRDb2xvciIgc3Ryb2tlLXdpZHRoPSIyIiBzdHJva2UtbGluZWNhcD0icm91bmQiIHN0cm9rZS1saW5lam9pbj0icm91bmQiPjxwYXRoIGQ9Ik0xNSAySDZhMiAyIDAgMCAwLTIgMnYxNmEyIDIgMCAwIDAgMiAyaDEyYTIgMiAwIDAgMCAyLTJWN1oiLz48cGF0aCBkPSJNMTQgMnY0YTIgMiAwIDAgMCAyIDJoNCIvPjxwYXRoIGQ9Ik04IDEzaDIiLz48cGF0aCBkPSJNMTQgMTNoMiIvPjxwYXRoIGQ9Ik04IDE3aDIiLz48cGF0aCBkPSJNMTQgMTdoMiIvPjwvc3ZnPg==
|
||||
description: 将当前对话历史导出为 Excel (.xlsx) 文件,支持自动提取表头。
|
||||
"""
|
||||
@@ -15,14 +15,28 @@ import base64
|
||||
from fastapi import FastAPI, HTTPException
|
||||
from typing import Optional, Callable, Awaitable, Any, List, Dict
|
||||
import datetime
|
||||
import asyncio
|
||||
from open_webui.models.chats import Chats
|
||||
from open_webui.models.users import Users
|
||||
from open_webui.utils.chat import generate_chat_completion
|
||||
from pydantic import BaseModel, Field
|
||||
|
||||
app = FastAPI()
|
||||
|
||||
|
||||
class Action:
|
||||
class Valves(BaseModel):
|
||||
TITLE_SOURCE: str = Field(
|
||||
default="chat_title",
|
||||
description="标题来源: 'chat_title' (对话标题), 'ai_generated' (AI生成), 'markdown_title' (Markdown标题)",
|
||||
)
|
||||
EXPORT_SCOPE: str = Field(
|
||||
default="last_message",
|
||||
description="导出范围: 'last_message' (仅最后一条消息), 'all_messages' (所有消息)",
|
||||
)
|
||||
|
||||
def __init__(self):
|
||||
pass
|
||||
self.valves = self.Valves()
|
||||
|
||||
async def _send_notification(self, emitter: Callable, type: str, content: str):
|
||||
await emitter(
|
||||
@@ -35,52 +49,167 @@ class Action:
|
||||
__user__=None,
|
||||
__event_emitter__=None,
|
||||
__event_call__: Optional[Callable[[Any], Awaitable[None]]] = None,
|
||||
__request__: Optional[Any] = None,
|
||||
):
|
||||
print(f"action:{__name__}")
|
||||
if isinstance(__user__, (list, tuple)):
|
||||
user_language = (
|
||||
__user__[0].get("language", "zh-CN") if __user__ else "zh-CN"
|
||||
__user__[0].get("language", "en-US") if __user__ else "en-US"
|
||||
)
|
||||
user_name = __user__[0].get("name", "用户") if __user__[0] else "用户"
|
||||
user_name = __user__[0].get("name", "User") if __user__[0] else "User"
|
||||
user_id = (
|
||||
__user__[0]["id"]
|
||||
if __user__ and "id" in __user__[0]
|
||||
else "unknown_user"
|
||||
)
|
||||
elif isinstance(__user__, dict):
|
||||
user_language = __user__.get("language", "zh-CN")
|
||||
user_name = __user__.get("name", "用户")
|
||||
user_language = __user__.get("language", "en-US")
|
||||
user_name = __user__.get("name", "User")
|
||||
user_id = __user__.get("id", "unknown_user")
|
||||
|
||||
if __event_emitter__:
|
||||
last_assistant_message = body["messages"][-1]
|
||||
|
||||
await __event_emitter__(
|
||||
{
|
||||
"type": "status",
|
||||
"data": {"description": "正在保存到文件...", "done": False},
|
||||
"data": {"description": "正在保存文件...", "done": False},
|
||||
}
|
||||
)
|
||||
|
||||
try:
|
||||
message_content = last_assistant_message["content"]
|
||||
tables = self.extract_tables_from_message(message_content)
|
||||
messages = body.get("messages", [])
|
||||
if not messages:
|
||||
raise HTTPException(status_code=400, detail="未找到消息。")
|
||||
|
||||
# Determine messages to process based on scope
|
||||
target_messages = []
|
||||
if self.valves.EXPORT_SCOPE == "all_messages":
|
||||
target_messages = messages
|
||||
else:
|
||||
target_messages = [messages[-1]]
|
||||
|
||||
all_tables = []
|
||||
all_sheet_names = []
|
||||
|
||||
# Process messages
|
||||
for msg_index, msg in enumerate(target_messages):
|
||||
content = msg.get("content", "")
|
||||
tables = self.extract_tables_from_message(content)
|
||||
|
||||
if not tables:
|
||||
raise HTTPException(status_code=400, detail="未找到任何表格。")
|
||||
continue
|
||||
|
||||
# 获取动态文件名和sheet名称
|
||||
workbook_name, sheet_names = self.generate_names_from_content(
|
||||
message_content, tables
|
||||
# Generate sheet names for this message's tables
|
||||
|
||||
# Extract headers for this message
|
||||
headers = []
|
||||
lines = content.split("\n")
|
||||
for i, line in enumerate(lines):
|
||||
if re.match(r"^#{1,6}\s+", line):
|
||||
headers.append(
|
||||
{
|
||||
"text": re.sub(r"^#{1,6}\s+", "", line).strip(),
|
||||
"line_num": i,
|
||||
}
|
||||
)
|
||||
|
||||
# 使用优化后的文件名生成逻辑
|
||||
for table_index, table in enumerate(tables):
|
||||
sheet_name = ""
|
||||
|
||||
# 1. Try Markdown Header (closest above)
|
||||
table_start_line = table["start_line"] - 1
|
||||
closest_header_text = None
|
||||
candidate_headers = [
|
||||
h for h in headers if h["line_num"] < table_start_line
|
||||
]
|
||||
if candidate_headers:
|
||||
closest_header = max(
|
||||
candidate_headers, key=lambda x: x["line_num"]
|
||||
)
|
||||
closest_header_text = closest_header["text"]
|
||||
|
||||
if closest_header_text:
|
||||
sheet_name = self.clean_sheet_name(closest_header_text)
|
||||
|
||||
# 2. AI Generated (Only if explicitly enabled and we have a request object)
|
||||
if (
|
||||
not sheet_name
|
||||
and self.valves.TITLE_SOURCE == "ai_generated"
|
||||
and len(target_messages) == 1
|
||||
):
|
||||
pass
|
||||
|
||||
# 3. Fallback: Message Index
|
||||
if not sheet_name:
|
||||
if len(target_messages) > 1:
|
||||
if len(tables) > 1:
|
||||
sheet_name = f"消息{msg_index+1}-表{table_index+1}"
|
||||
else:
|
||||
sheet_name = f"消息{msg_index+1}"
|
||||
else:
|
||||
# Single message (last_message scope)
|
||||
if len(tables) > 1:
|
||||
sheet_name = f"表{table_index+1}"
|
||||
else:
|
||||
sheet_name = "Sheet1"
|
||||
|
||||
all_tables.append(table)
|
||||
all_sheet_names.append(sheet_name)
|
||||
|
||||
if not all_tables:
|
||||
raise HTTPException(
|
||||
status_code=400, detail="在选定范围内未找到表格。"
|
||||
)
|
||||
|
||||
# Deduplicate sheet names
|
||||
final_sheet_names = []
|
||||
seen_names = {}
|
||||
for name in all_sheet_names:
|
||||
base_name = name
|
||||
counter = 1
|
||||
while name in seen_names:
|
||||
name = f"{base_name} ({counter})"
|
||||
counter += 1
|
||||
seen_names[name] = True
|
||||
final_sheet_names.append(name)
|
||||
|
||||
# Generate Workbook Title (Filename)
|
||||
title = ""
|
||||
chat_id = self.extract_chat_id(body, None)
|
||||
chat_title = ""
|
||||
if chat_id:
|
||||
chat_title = await self.fetch_chat_title(chat_id, user_id)
|
||||
|
||||
if (
|
||||
self.valves.TITLE_SOURCE == "chat_title"
|
||||
or not self.valves.TITLE_SOURCE
|
||||
):
|
||||
title = chat_title
|
||||
elif self.valves.TITLE_SOURCE == "markdown_title":
|
||||
for msg in target_messages:
|
||||
extracted = self.extract_title(msg.get("content", ""))
|
||||
if extracted:
|
||||
title = extracted
|
||||
break
|
||||
|
||||
# Fallback for filename
|
||||
if not title:
|
||||
if chat_title:
|
||||
title = chat_title
|
||||
else:
|
||||
if self.valves.TITLE_SOURCE != "markdown_title":
|
||||
for msg in target_messages:
|
||||
extracted = self.extract_title(msg.get("content", ""))
|
||||
if extracted:
|
||||
title = extracted
|
||||
break
|
||||
|
||||
current_datetime = datetime.datetime.now()
|
||||
formatted_date = current_datetime.strftime("%Y%m%d")
|
||||
|
||||
# 如果没找到标题则使用 user_yyyymmdd 格式
|
||||
if not workbook_name:
|
||||
if not title:
|
||||
workbook_name = f"{user_name}_{formatted_date}"
|
||||
else:
|
||||
workbook_name = self.clean_filename(title)
|
||||
|
||||
filename = f"{workbook_name}.xlsx"
|
||||
excel_file_path = os.path.join(
|
||||
@@ -89,10 +218,12 @@ class Action:
|
||||
|
||||
os.makedirs(os.path.dirname(excel_file_path), exist_ok=True)
|
||||
|
||||
# 保存表格到Excel(使用符合中国规范的格式化功能)
|
||||
self.save_tables_to_excel_enhanced(tables, excel_file_path, sheet_names)
|
||||
# Save tables to Excel
|
||||
self.save_tables_to_excel_enhanced(
|
||||
all_tables, excel_file_path, final_sheet_names
|
||||
)
|
||||
|
||||
# 触发文件下载
|
||||
# Trigger file download
|
||||
if __event_call__:
|
||||
with open(excel_file_path, "rb") as file:
|
||||
file_content = file.read()
|
||||
@@ -123,7 +254,7 @@ class Action:
|
||||
URL.revokeObjectURL(url);
|
||||
document.body.removeChild(a);
|
||||
}} catch (error) {{
|
||||
console.error('触发下载时出错:', error);
|
||||
console.error('Error triggering download:', error);
|
||||
}}
|
||||
"""
|
||||
},
|
||||
@@ -132,15 +263,15 @@ class Action:
|
||||
await __event_emitter__(
|
||||
{
|
||||
"type": "status",
|
||||
"data": {"description": "输出已保存", "done": True},
|
||||
"data": {"description": "文件已保存", "done": True},
|
||||
}
|
||||
)
|
||||
|
||||
# 清理临时文件
|
||||
# Clean up temp file
|
||||
if os.path.exists(excel_file_path):
|
||||
os.remove(excel_file_path)
|
||||
|
||||
return {"message": "下载事件已触发"}
|
||||
return {"message": "下载已触发"}
|
||||
|
||||
except HTTPException as e:
|
||||
print(f"Error processing tables: {str(e.detail)}")
|
||||
@@ -148,13 +279,13 @@ class Action:
|
||||
{
|
||||
"type": "status",
|
||||
"data": {
|
||||
"description": f"保存文件时出错: {e.detail}",
|
||||
"description": f"保存文件错误: {e.detail}",
|
||||
"done": True,
|
||||
},
|
||||
}
|
||||
)
|
||||
await self._send_notification(
|
||||
__event_emitter__, "error", "没有找到可以导出的表格!"
|
||||
__event_emitter__, "error", "未找到可导出的表格!"
|
||||
)
|
||||
raise e
|
||||
except Exception as e:
|
||||
@@ -163,15 +294,97 @@ class Action:
|
||||
{
|
||||
"type": "status",
|
||||
"data": {
|
||||
"description": f"保存文件时出错: {str(e)}",
|
||||
"description": f"保存文件错误: {str(e)}",
|
||||
"done": True,
|
||||
},
|
||||
}
|
||||
)
|
||||
await self._send_notification(
|
||||
__event_emitter__, "error", "没有找到可以导出的表格!"
|
||||
__event_emitter__, "error", "未找到可导出的表格!"
|
||||
)
|
||||
|
||||
async def generate_title_using_ai(
|
||||
self, body: dict, content: str, user_id: str, request: Any
|
||||
) -> str:
|
||||
if not request:
|
||||
return ""
|
||||
|
||||
try:
|
||||
user_obj = Users.get_user_by_id(user_id)
|
||||
model = body.get("model")
|
||||
|
||||
payload = {
|
||||
"model": model,
|
||||
"messages": [
|
||||
{
|
||||
"role": "system",
|
||||
"content": "你是一个乐于助人的助手。请为以下文本生成一个简短、简洁的标题(最多10个字)。不要使用引号。只输出标题。",
|
||||
},
|
||||
{"role": "user", "content": content[:2000]}, # 限制内容长度
|
||||
],
|
||||
"stream": False,
|
||||
}
|
||||
|
||||
response = await generate_chat_completion(request, payload, user_obj)
|
||||
if response and "choices" in response:
|
||||
return response["choices"][0]["message"]["content"].strip()
|
||||
except Exception as e:
|
||||
print(f"生成标题时出错: {e}")
|
||||
|
||||
return ""
|
||||
|
||||
def extract_title(self, content: str) -> str:
|
||||
"""从 Markdown h1/h2 中提取标题"""
|
||||
lines = content.split("\n")
|
||||
for line in lines:
|
||||
# 仅匹配 h1-h2 标题
|
||||
match = re.match(r"^#{1,2}\s+(.+)$", line.strip())
|
||||
if match:
|
||||
return match.group(1).strip()
|
||||
return ""
|
||||
|
||||
def extract_chat_id(self, body: dict, metadata: Optional[dict]) -> str:
|
||||
"""从 body 或 metadata 中提取 chat_id"""
|
||||
if isinstance(body, dict):
|
||||
chat_id = body.get("chat_id") or body.get("id")
|
||||
if isinstance(chat_id, str) and chat_id.strip():
|
||||
return chat_id.strip()
|
||||
|
||||
for key in ("chat", "conversation"):
|
||||
nested = body.get(key)
|
||||
if isinstance(nested, dict):
|
||||
nested_id = nested.get("id") or nested.get("chat_id")
|
||||
if isinstance(nested_id, str) and nested_id.strip():
|
||||
return nested_id.strip()
|
||||
if isinstance(metadata, dict):
|
||||
chat_id = metadata.get("chat_id")
|
||||
if isinstance(chat_id, str) and chat_id.strip():
|
||||
return chat_id.strip()
|
||||
return ""
|
||||
|
||||
async def fetch_chat_title(self, chat_id: str, user_id: str = "") -> str:
|
||||
"""通过 chat_id 从数据库获取对话标题"""
|
||||
if not chat_id:
|
||||
return ""
|
||||
|
||||
def _load_chat():
|
||||
if user_id:
|
||||
return Chats.get_chat_by_id_and_user_id(id=chat_id, user_id=user_id)
|
||||
return Chats.get_chat_by_id(chat_id)
|
||||
|
||||
try:
|
||||
chat = await asyncio.to_thread(_load_chat)
|
||||
except Exception as exc:
|
||||
print(f"加载对话 {chat_id} 失败: {exc}")
|
||||
return ""
|
||||
|
||||
if not chat:
|
||||
return ""
|
||||
|
||||
data = getattr(chat, "chat", {}) or {}
|
||||
title = data.get("title") or getattr(chat, "title", "")
|
||||
return title.strip() if isinstance(title, str) else ""
|
||||
|
||||
def extract_tables_from_message(self, message: str) -> List[Dict]:
|
||||
"""
|
||||
从消息文本中提取Markdown表格及位置信息
|
||||
|
||||
Reference in New Issue
Block a user