Add MkDocs documentation portal with Material theme and CI/CD workflow
Co-authored-by: Fu-Jie <33599649+Fu-Jie@users.noreply.github.com>
**docs/plugins/actions/export-to-excel.md** (new file, 67 lines)

# Export to Excel

<span class="category-badge action">Action</span>
<span class="version-badge">v1.0.0</span>

Export chat conversations to Excel spreadsheet format for analysis, archiving, and sharing.

---

## Overview

The Export to Excel plugin allows you to download your chat conversations as Excel files. This is useful for:

- Archiving important conversations
- Analyzing chat data
- Sharing conversations with colleagues
- Creating documentation from AI-assisted research

## Features

- :material-file-excel: **Excel Export**: Standard `.xlsx` format
- :material-table: **Formatted Output**: Clean table structure
- :material-download: **One-Click Download**: Instant file generation
- :material-history: **Full History**: Exports the complete conversation

---

## Installation

1. Download the plugin file: [`export_to_excel.py`](https://github.com/Fu-Jie/awesome-openwebui/tree/main/plugins/actions/export_to_excel)
2. Upload to OpenWebUI: **Admin Panel** → **Settings** → **Functions**
3. Enable the plugin

---

## Usage

1. Open the conversation you want to export
2. Click the **Export** button in the message action bar
3. The Excel file downloads automatically

---

## Output Format

The exported Excel file contains:

| Column | Description |
|--------|-------------|
| Timestamp | When the message was sent |
| Role | User or Assistant |
| Content | The message text |
| Model | The AI model used (for assistant messages) |
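To illustrate the table above, here is a minimal sketch of how conversation messages might be flattened into these four columns before being written to a worksheet. The field names (`timestamp`, `role`, `content`, `model`) are assumptions for illustration, not necessarily the plugin's actual schema:

```python
from datetime import datetime, timezone


def conversation_to_rows(messages):
    """Flatten chat messages into rows matching the exported columns.

    Illustrative sketch only: assumes each message dict carries
    "timestamp" (unix seconds), "role", "content", and optionally "model".
    """
    rows = [("Timestamp", "Role", "Content", "Model")]  # header row
    for msg in messages:
        ts = datetime.fromtimestamp(msg.get("timestamp", 0), tz=timezone.utc)
        rows.append((
            ts.strftime("%Y-%m-%d %H:%M:%S"),
            msg.get("role", ""),
            msg.get("content", ""),
            # Model column is only filled for assistant messages
            msg.get("model", "") if msg.get("role") == "assistant" else "",
        ))
    return rows


rows = conversation_to_rows([
    {"timestamp": 1700000000, "role": "user", "content": "Hi"},
    {"timestamp": 1700000005, "role": "assistant", "content": "Hello!", "model": "gpt-4o"},
])
```

Each tuple in `rows` then maps directly onto one worksheet row.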

---

## Requirements

!!! note "Prerequisites"

    - OpenWebUI v0.3.0 or later
    - No additional Python packages required (uses built-in libraries)

---

## Source Code

[:fontawesome-brands-github: View on GitHub](https://github.com/Fu-Jie/awesome-openwebui/tree/main/plugins/actions/export_to_excel){ .md-button }

**docs/plugins/actions/index.md** (new file, 141 lines)

# Action Plugins

Action plugins add custom buttons below messages in the chat interface, allowing you to trigger specific functionality with a single click.

## What are Actions?

Actions are interactive plugins that:

- :material-gesture-tap: Add buttons to the message action bar
- :material-export: Generate and export content (mind maps, charts, files)
- :material-api: Interact with external services and APIs
- :material-animation-play: Create visualizations and interactive content

---

## Available Action Plugins

<div class="grid cards" markdown>

- :material-brain:{ .lg .middle } **Smart Mind Map**

    ---

    Intelligently analyzes text content and generates interactive mind maps with beautiful visualizations.

    **Version:** 0.7.2

    [:octicons-arrow-right-24: Documentation](smart-mind-map.md)

- :material-card-text:{ .lg .middle } **Knowledge Card**

    ---

    Quickly generates beautiful learning memory cards, perfect for studying and memorization.

    **Version:** 0.2.0

    [:octicons-arrow-right-24: Documentation](knowledge-card.md)

- :material-file-excel:{ .lg .middle } **Export to Excel**

    ---

    Export chat conversations to Excel spreadsheet format for analysis and archiving.

    **Version:** 1.0.0

    [:octicons-arrow-right-24: Documentation](export-to-excel.md)

- :material-text-box-search:{ .lg .middle } **Summary**

    ---

    Generate concise summaries of long text content with key-point extraction.

    **Version:** 1.0.0

    [:octicons-arrow-right-24: Documentation](summary.md)

</div>

---

## Quick Installation

1. Download the desired plugin `.py` file
2. Navigate to **Admin Panel** → **Settings** → **Functions**
3. Upload the file and configure settings
4. Use the action button in chat messages

---

## Development Template

Want to create your own Action plugin? Use our template:

```python
"""
title: My Custom Action
author: Your Name
version: 1.0.0
description: Description of your action plugin
"""

from pydantic import BaseModel, Field
from typing import Optional, Dict, Any


class Action:
    class Valves(BaseModel):
        # Add your configuration options here
        show_status: bool = Field(
            default=True,
            description="Show status updates during processing",
        )

    def __init__(self):
        self.valves = self.Valves()

    async def action(
        self,
        body: dict,
        __user__: Optional[Dict[str, Any]] = None,
        __event_emitter__: Optional[Any] = None,
        __request__: Optional[Any] = None,
    ) -> Optional[dict]:
        """
        Main action method triggered when the user clicks the action button.

        Args:
            body: Message body containing conversation data
            __user__: Current user information
            __event_emitter__: For sending notifications and status updates
            __request__: FastAPI request object

        Returns:
            Modified body dict or None
        """
        # Send status update
        if __event_emitter__ and self.valves.show_status:
            await __event_emitter__({
                "type": "status",
                "data": {"description": "Processing...", "done": False},
            })

        # Your plugin logic here
        messages = body.get("messages", [])
        if messages:
            last_message = messages[-1].get("content", "")
            # Process the message...

        # Complete status
        if __event_emitter__ and self.valves.show_status:
            await __event_emitter__({
                "type": "status",
                "data": {"description": "Done!", "done": True},
            })

        return body
```

For more details, check our [Plugin Development Guide](../../development/plugin-guide.md).

**docs/plugins/actions/knowledge-card.md** (new file, 88 lines)

# Knowledge Card

<span class="category-badge action">Action</span>
<span class="version-badge">v0.2.0</span>

Quickly generates beautiful learning memory cards, perfect for studying and quick memorization.

---

## Overview

The Knowledge Card plugin (also known as Flash Card / 闪记卡) transforms content into visually appealing flashcards that are perfect for learning and memorization. Whether you're studying for exams, learning new concepts, or reviewing key points, this plugin helps you create effective study materials.

## Features

- :material-card-text: **Beautiful Cards**: Modern, clean design for easy reading
- :material-animation-play: **Interactive**: Flip cards to reveal answers
- :material-export: **Exportable**: Save cards for offline study
- :material-palette: **Customizable**: Multiple themes and styles
- :material-translate: **Multi-language**: Supports various languages

---

## Installation

1. Download the plugin file: [`knowledge_card.py`](https://github.com/Fu-Jie/awesome-openwebui/tree/main/plugins/actions/knowledge-card)
2. Upload to OpenWebUI: **Admin Panel** → **Settings** → **Functions**
3. Enable the plugin

---

## Usage

1. Have a conversation about a topic you want to learn
2. Click the **Flash Card** button in the message action bar
3. The plugin analyzes the content and generates flashcards
4. Click a card to flip it and reveal the answer

---

## Configuration

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `cards_per_message` | integer | `5` | Maximum cards to generate |
| `theme` | string | `"modern"` | Visual theme |
| `show_hints` | boolean | `true` | Include hints on cards |
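In an OpenWebUI plugin, options like these are typically exposed as a pydantic `Valves` model. A minimal sketch mirroring the table above (the field names and defaults follow the table; the plugin's actual `Valves` class may differ):

```python
from pydantic import BaseModel, Field


class Valves(BaseModel):
    # Names and defaults mirror the configuration table above;
    # illustrative only, not the plugin's actual source.
    cards_per_message: int = Field(
        default=5, description="Maximum cards to generate"
    )
    theme: str = Field(default="modern", description="Visual theme")
    show_hints: bool = Field(default=True, description="Include hints on cards")


valves = Valves()
```

OpenWebUI renders each field as an editable setting in the Functions panel.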

---

## Example

=== "Question Side"

    ```
    ┌─────────────────────────────┐
    │                             │
    │   What is the capital of    │
    │   France?                   │
    │                             │
    │       [Click to flip]       │
    └─────────────────────────────┘
    ```

=== "Answer Side"

    ```
    ┌─────────────────────────────┐
    │                             │
    │           Paris             │
    │                             │
    │   The city of lights,       │
    │   located on the Seine      │
    │                             │
    └─────────────────────────────┘
    ```

---

## Requirements

!!! note "Prerequisites"

    - OpenWebUI v0.3.0 or later
    - No additional Python packages required

---

## Source Code

[:fontawesome-brands-github: View on GitHub](https://github.com/Fu-Jie/awesome-openwebui/tree/main/plugins/actions/knowledge-card){ .md-button }

**docs/plugins/actions/smart-mind-map.md** (new file, 91 lines)

# Smart Mind Map

<span class="category-badge action">Action</span>
<span class="version-badge">v0.7.2</span>

Intelligently analyzes text content and generates interactive mind maps for better visualization and understanding.

---

## Overview

The Smart Mind Map plugin transforms text content into beautiful, interactive mind maps. It uses AI to analyze the structure of your content and creates a hierarchical visualization that makes complex information easier to understand.

## Features

- :material-brain: **AI-Powered Analysis**: Intelligently extracts key concepts and relationships
- :material-gesture-swipe: **Interactive Navigation**: Zoom, pan, and explore the mind map
- :material-palette: **Beautiful Styling**: Modern design with customizable colors
- :material-download: **Export Options**: Save as image or structured data
- :material-translate: **Multi-language Support**: Works with multiple languages

---

## Installation

1. Download the plugin file: [`smart_mind_map.py`](https://github.com/Fu-Jie/awesome-openwebui/tree/main/plugins/actions/smart-mind-map)
2. Upload to OpenWebUI: **Admin Panel** → **Settings** → **Functions**
3. Enable the plugin

---

## Usage

1. Start a conversation and get a response from the AI
2. Click the **Mind Map** button in the message action bar
3. Wait for the mind map to generate
4. Interact with the visualization:
    - **Zoom**: Scroll to zoom in/out
    - **Pan**: Click and drag to move around
    - **Expand/Collapse**: Click nodes to show/hide children

---

## Configuration

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `show_status` | boolean | `true` | Show processing status updates |
| `max_depth` | integer | `5` | Maximum depth of the mind map |
| `theme` | string | `"default"` | Color theme for the visualization |

---

## Example Output

The plugin generates an interactive HTML mind map embedded in the chat:

```
📊 Mind Map Generated
├── Main Topic
│   ├── Subtopic 1
│   │   ├── Detail A
│   │   └── Detail B
│   ├── Subtopic 2
│   └── Subtopic 3
└── Related Concepts
```
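As a rough illustration of the hierarchical analysis, here is a sketch that derives a nested structure from markdown headings. This is not the plugin's actual algorithm (it uses AI analysis, not a heading parser), just a way to picture the kind of tree shown above:

```python
def headings_to_tree(markdown_text):
    """Build a nested dict tree from markdown heading levels.

    Illustrative sketch only: the real plugin uses AI analysis.
    """
    root = {"title": "Mind Map", "children": []}
    stack = [(0, root)]  # (heading level, node)
    for line in markdown_text.splitlines():
        if not line.startswith("#"):
            continue
        level = len(line) - len(line.lstrip("#"))
        node = {"title": line.lstrip("#").strip(), "children": []}
        # Pop back up to the nearest shallower heading
        while stack and stack[-1][0] >= level:
            stack.pop()
        stack[-1][1]["children"].append(node)
        stack.append((level, node))
    return root


tree = headings_to_tree("# Main Topic\n## Subtopic 1\n### Detail A\n## Subtopic 2")
```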

---

## Requirements

!!! note "Prerequisites"

    - OpenWebUI v0.3.0 or later
    - No additional Python packages required

---

## Troubleshooting

??? question "Mind map is not displaying?"

    Ensure your browser supports HTML5 Canvas and that JavaScript is enabled.

??? question "Generation takes too long?"

    For very long texts, the AI analysis may take more time. Consider breaking the content into smaller sections.

---

## Source Code

[:fontawesome-brands-github: View on GitHub](https://github.com/Fu-Jie/awesome-openwebui/tree/main/plugins/actions/smart-mind-map){ .md-button }

**docs/plugins/actions/summary.md** (new file, 82 lines)

# Summary

<span class="category-badge action">Action</span>
<span class="version-badge">v1.0.0</span>

Generate concise summaries of long text content with key-point extraction.

---

## Overview

The Summary plugin helps you quickly understand long pieces of text by generating concise summaries with extracted key points. It's perfect for:

- Summarizing long articles or documents
- Extracting key points from conversations
- Creating quick overviews of complex topics

## Features

- :material-text-box-search: **Smart Summarization**: AI-powered content analysis
- :material-format-list-bulleted: **Key Points**: Extracted important highlights
- :material-content-copy: **Easy Copy**: One-click copying of summaries
- :material-tune: **Adjustable Length**: Control the summary's level of detail

---

## Installation

1. Download the plugin file: [`summary.py`](https://github.com/Fu-Jie/awesome-openwebui/tree/main/plugins/actions/summary)
2. Upload to OpenWebUI: **Admin Panel** → **Settings** → **Functions**
3. Enable the plugin

---

## Usage

1. Get a long response from the AI or paste in long text
2. Click the **Summary** button in the message action bar
3. View the generated summary with key points

---

## Configuration

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `summary_length` | string | `"medium"` | Length of summary (short/medium/long) |
| `include_key_points` | boolean | `true` | Extract and list key points |
| `language` | string | `"auto"` | Output language |
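To see how these options could shape the request sent to the LLM, here is a hedged sketch of a prompt builder. The prompt wording and function name are illustrative assumptions, not the plugin's actual implementation:

```python
def build_summary_prompt(text, summary_length="medium", include_key_points=True):
    """Compose a summarization prompt from the configuration options.

    Illustrative sketch only: the plugin's real prompt wording may differ.
    """
    lengths = {
        "short": "in 1-2 sentences",
        "medium": "in one short paragraph",
        "long": "in several detailed paragraphs",
    }
    prompt = f"Summarize the following text {lengths[summary_length]}."
    if include_key_points:
        prompt += " Then list the key points as bullet points."
    return prompt + "\n\n" + text


prompt = build_summary_prompt("Some long article...", summary_length="short")
```

The resulting string is what the active model would receive as its instruction.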

---

## Example Output

```markdown
## Summary

This document discusses the implementation of a new feature
for the application, focusing on user experience improvements
and performance optimizations.

### Key Points

- ✅ New user interface design improves accessibility
- ✅ Backend optimizations reduce load times by 40%
- ✅ Mobile responsiveness enhanced
- ✅ Integration with third-party services simplified
```

---

## Requirements

!!! note "Prerequisites"

    - OpenWebUI v0.3.0 or later
    - Uses the active LLM model for summarization

---

## Source Code

[:fontawesome-brands-github: View on GitHub](https://github.com/Fu-Jie/awesome-openwebui/tree/main/plugins/actions/summary){ .md-button }

**docs/plugins/filters/async-context-compression.md** (new file, 122 lines)

# Async Context Compression

<span class="category-badge filter">Filter</span>
<span class="version-badge">v1.0.0</span>

Reduces token consumption in long conversations through intelligent summarization while maintaining conversational coherence.

---

## Overview

The Async Context Compression filter helps manage token usage in long conversations by:

- Intelligently summarizing older messages
- Preserving important context
- Reducing API costs
- Maintaining conversation coherence

This is especially useful for:

- Long-running conversations
- Complex multi-turn discussions
- Cost optimization
- Token limit management

## Features

- :material-compress: **Smart Compression**: AI-powered context summarization
- :material-clock-fast: **Async Processing**: Non-blocking background compression
- :material-memory: **Context Preservation**: Keeps important information
- :material-currency-usd-off: **Cost Reduction**: Minimize token usage

---

## Installation

1. Download the plugin file: [`async_context_compression.py`](https://github.com/Fu-Jie/awesome-openwebui/tree/main/plugins/filters/async-context-compression)
2. Upload to OpenWebUI: **Admin Panel** → **Settings** → **Functions**
3. Configure compression settings
4. Enable the filter

---

## How It Works

```mermaid
graph TD
    A[Incoming Messages] --> B{Token Count > Threshold?}
    B -->|No| C[Pass Through]
    B -->|Yes| D[Summarize Older Messages]
    D --> E[Preserve Recent Messages]
    E --> F[Combine Summary + Recent]
    F --> G[Send to LLM]
```
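The flow above can be sketched in Python. This is a simplified illustration, assuming a `summarize` callable and a naive word-count token estimate; the plugin's actual implementation (async, model-backed) will differ:

```python
def compress_context(messages, summarize, token_threshold=4000, preserve_recent=5):
    """Summarize older messages when the conversation exceeds the threshold.

    Simplified sketch: tokens are approximated by word count, and
    `summarize` is any callable turning a message list into one string.
    """
    total_tokens = sum(len(m["content"].split()) for m in messages)
    if total_tokens <= token_threshold or len(messages) <= preserve_recent:
        return messages  # below threshold: pass through unchanged

    older, recent = messages[:-preserve_recent], messages[-preserve_recent:]
    summary = {
        "role": "system",
        "content": "Summary of earlier conversation: " + summarize(older),
    }
    return [summary] + recent


msgs = [{"role": "user", "content": "word " * 1000} for _ in range(10)]
compressed = compress_context(
    msgs,
    summarize=lambda ms: f"{len(ms)} messages condensed.",
    token_threshold=4000,
    preserve_recent=5,
)
```

Here ten 1000-word messages exceed the threshold, so the first five are replaced by a single summary message ahead of the five preserved ones.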

---

## Configuration

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `token_threshold` | integer | `4000` | Trigger compression above this token count |
| `preserve_recent` | integer | `5` | Number of recent messages to keep uncompressed |
| `summary_model` | string | `"auto"` | Model to use for summarization |
| `compression_ratio` | float | `0.3` | Target compression ratio |

---

## Example

### Before Compression

```
[Message 1] User: Tell me about Python...
[Message 2] AI: Python is a programming language...
[Message 3] User: What about its history?
[Message 4] AI: Python was created by Guido...
[Message 5] User: And its features?
[Message 6] AI: Python has many features...
... (many more messages)
[Message 20] User: Current question
```

### After Compression

```
[Summary] Previous conversation covered Python basics,
history, features, and common use cases...

[Message 18] User: Recent question about decorators
[Message 19] AI: Decorators in Python are...
[Message 20] User: Current question
```

---

## Requirements

!!! note "Prerequisites"

    - OpenWebUI v0.3.0 or later
    - Access to an LLM for summarization

!!! tip "Best Practices"

    - Set appropriate token thresholds based on your model's context window
    - Preserve more recent messages for technical discussions
    - Test compression settings in non-critical conversations first

---

## Troubleshooting

??? question "Compression not triggering?"

    Check whether the token count exceeds your configured threshold. Enable debug logging for more details.

??? question "Important context being lost?"

    Increase the `preserve_recent` setting or lower the compression ratio.

---

## Source Code

[:fontawesome-brands-github: View on GitHub](https://github.com/Fu-Jie/awesome-openwebui/tree/main/plugins/filters/async-context-compression){ .md-button }

**docs/plugins/filters/context-enhancement.md** (new file, 51 lines)

# Context Enhancement

<span class="category-badge filter">Filter</span>
<span class="version-badge">v1.0.0</span>

Enhances chat context with additional information for improved LLM responses.

---

## Overview

The Context Enhancement filter automatically enriches your conversations with contextual information, making LLM responses more relevant and accurate.

## Features

- :material-text-box-plus: **Auto Enhancement**: Automatically adds relevant context
- :material-clock: **Time Awareness**: Includes current date/time information
- :material-account: **User Context**: Incorporates user preferences
- :material-cog: **Customizable**: Configure what context to include

---

## Installation

1. Download the plugin file: [`context_enhancement_filter.py`](https://github.com/Fu-Jie/awesome-openwebui/tree/main/plugins/filters/context_enhancement_filter)
2. Upload to OpenWebUI: **Admin Panel** → **Settings** → **Functions**
3. Configure enhancement options
4. Enable the filter

---

## Configuration

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `include_datetime` | boolean | `true` | Add current date/time |
| `include_user_info` | boolean | `true` | Add user name and preferences |
| `custom_context` | string | `""` | Custom context to always include |
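For instance, the `include_datetime` enhancement could look roughly like this inside the filter's `inlet` step. This is a hedged sketch, not the plugin's actual code; the message format is an assumption:

```python
from datetime import datetime, timezone


def add_datetime_context(body):
    """Prepend a system message carrying the current date/time.

    Sketch of the `include_datetime` option; the real filter's
    message format may differ.
    """
    now = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M UTC")
    context = {"role": "system", "content": f"Current date/time: {now}"}
    body.setdefault("messages", []).insert(0, context)
    return body


body = add_datetime_context(
    {"messages": [{"role": "user", "content": "What day is it?"}]}
)
```

The LLM then sees the injected system message ahead of the user's question.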

---

## Requirements

!!! note "Prerequisites"

    - OpenWebUI v0.3.0 or later

---

## Source Code

[:fontawesome-brands-github: View on GitHub](https://github.com/Fu-Jie/awesome-openwebui/tree/main/plugins/filters/context_enhancement_filter){ .md-button }

**docs/plugins/filters/gemini-manifold-companion.md** (new file, 54 lines)

# Gemini Manifold Companion

<span class="category-badge filter">Filter</span>
<span class="version-badge">v1.0.0</span>

Companion filter for the Gemini Manifold pipe plugin, providing enhanced functionality.

---

## Overview

The Gemini Manifold Companion works alongside the [Gemini Manifold Pipe](../pipes/gemini-manifold.md) to provide additional processing and enhancement for Gemini model integrations.

## Features

- :material-handshake: **Seamless Integration**: Works with the Gemini Manifold pipe
- :material-format-text: **Message Formatting**: Optimizes messages for Gemini
- :material-shield: **Error Handling**: Graceful handling of API issues
- :material-tune: **Fine-tuning**: Additional configuration options

---

## Installation

1. First, install the [Gemini Manifold Pipe](../pipes/gemini-manifold.md)
2. Download the companion filter: [`gemini_manifold_companion.py`](https://github.com/Fu-Jie/awesome-openwebui/tree/main/plugins/filters/gemini_manifold_companion)
3. Upload to OpenWebUI: **Admin Panel** → **Settings** → **Functions**
4. Enable the filter

---

## Configuration

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `auto_format` | boolean | `true` | Auto-format messages for Gemini |
| `handle_errors` | boolean | `true` | Enable error handling |

---

## Requirements

!!! warning "Dependency"

    This filter requires the **Gemini Manifold Pipe** to be installed and configured.

!!! note "Prerequisites"

    - OpenWebUI v0.3.0 or later
    - Gemini Manifold Pipe installed

---

## Source Code

[:fontawesome-brands-github: View on GitHub](https://github.com/Fu-Jie/awesome-openwebui/tree/main/plugins/filters/gemini_manifold_companion){ .md-button }

**docs/plugins/filters/index.md** (new file, 155 lines)

# Filter Plugins

Filter plugins process and modify messages before they are sent to the LLM or after responses are generated.

## What are Filters?

Filters act as middleware in the message pipeline:

- :material-arrow-right-bold: **Inlet**: Process user messages before they reach the LLM
- :material-arrow-left-bold: **Outlet**: Process LLM responses before they're displayed
- :material-stream: **Stream**: Process streaming responses in real time

---

## Available Filter Plugins

<div class="grid cards" markdown>

- :material-compress:{ .lg .middle } **Async Context Compression**

    ---

    Reduces token consumption in long conversations through intelligent summarization while maintaining coherence.

    **Version:** 1.0.0

    [:octicons-arrow-right-24: Documentation](async-context-compression.md)

- :material-text-box-plus:{ .lg .middle } **Context Enhancement**

    ---

    Enhances chat context with additional information for better responses.

    **Version:** 1.0.0

    [:octicons-arrow-right-24: Documentation](context-enhancement.md)

- :material-google:{ .lg .middle } **Gemini Manifold Companion**

    ---

    Companion filter for the Gemini Manifold pipe plugin.

    **Version:** 1.0.0

    [:octicons-arrow-right-24: Documentation](gemini-manifold-companion.md)

</div>

---

## How Filters Work

```mermaid
graph LR
    A[User Message] --> B[Inlet Filter]
    B --> C[LLM]
    C --> D[Outlet Filter]
    D --> E[Display to User]
```

### Inlet Processing

The `inlet` method processes messages before they reach the LLM:

```python
async def inlet(self, body: dict, __metadata__: dict) -> dict:
    # Modify the request before sending it to the LLM
    messages = body.get("messages", [])
    # Add context, modify prompts, etc.
    return body
```

### Outlet Processing

The `outlet` method processes responses after they're generated:

```python
async def outlet(self, body: dict, __metadata__: dict) -> dict:
    # Modify the response before displaying it
    messages = body.get("messages", [])
    # Format output, add citations, etc.
    return body
```
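The third hook, `stream`, handles response chunks as they arrive. A minimal sketch follows; the chunk layout shown (OpenAI-style `choices[0].delta`) is an assumption, so consult the OpenWebUI documentation for the exact signature and event shape:

```python
class Filter:
    def stream(self, event: dict) -> dict:
        """Process each streaming chunk before it reaches the UI.

        Sketch only: the OpenAI-style chunk layout below is an
        assumption about the event shape.
        """
        for choice in event.get("choices", []):
            delta = choice.get("delta", {})
            if "content" in delta:
                # Example transformation: normalize curly quotes per chunk
                delta["content"] = (
                    delta["content"].replace("\u201c", '"').replace("\u201d", '"')
                )
        return event


event = Filter().stream({"choices": [{"delta": {"content": "\u201chi\u201d"}}]})
```

Because `stream` runs once per chunk, transformations here must be cheap and stateless.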

---

## Quick Installation

1. Download the desired filter `.py` file
2. Navigate to **Admin Panel** → **Settings** → **Functions**
3. Upload the file and configure settings
4. Enable the filter in chat settings or globally

---

## Development Template

```python
"""
title: My Custom Filter
author: Your Name
version: 1.0.0
description: Description of your filter plugin
"""

from pydantic import BaseModel, Field
from typing import Optional


class Filter:
    class Valves(BaseModel):
        priority: int = Field(
            default=0,
            description="Filter priority (lower = earlier execution)",
        )
        enabled: bool = Field(
            default=True,
            description="Enable/disable this filter",
        )

    def __init__(self):
        self.valves = self.Valves()

    async def inlet(
        self,
        body: dict,
        __user__: Optional[dict] = None,
        __metadata__: Optional[dict] = None,
    ) -> dict:
        """Process messages before sending them to the LLM."""
        if not self.valves.enabled:
            return body

        # Your inlet logic here
        messages = body.get("messages", [])

        return body

    async def outlet(
        self,
        body: dict,
        __user__: Optional[dict] = None,
        __metadata__: Optional[dict] = None,
    ) -> dict:
        """Process responses before displaying them."""
        if not self.valves.enabled:
            return body

        # Your outlet logic here

        return body
```

For more details, check our [Plugin Development Guide](../../development/plugin-guide.md).

**docs/plugins/index.md** (new file, 92 lines)

# Plugin Center
|
||||
|
||||
Welcome to the OpenWebUI Extras Plugin Center! Here you'll find a comprehensive collection of plugins to enhance your OpenWebUI experience.
|
||||
|
||||
## Plugin Types
|
||||
|
||||
OpenWebUI supports four types of plugins, each serving a different purpose:
|
||||
|
||||
<div class="grid cards" markdown>
|
||||
|
||||
- :material-gesture-tap:{ .lg .middle } **Actions**
|
||||
|
||||
---
|
||||
|
||||
Add custom buttons below messages to trigger specific actions like generating mind maps, exporting data, or creating visualizations.
|
||||
|
||||
[:octicons-arrow-right-24: Browse Actions](actions/index.md)
|
||||
|
||||
- :material-filter:{ .lg .middle } **Filters**
|
||||
|
||||
---
|
||||
|
||||
Process and modify messages before they reach the LLM or after responses are generated. Perfect for context enhancement and compression.
|
||||
|
||||
[:octicons-arrow-right-24: Browse Filters](filters/index.md)
|
||||
|
||||
- :material-pipe:{ .lg .middle } **Pipes**
|
||||
|
||||
---
|
||||
|
||||
Create custom model integrations or transform LLM responses. Connect to external APIs or implement custom model logic.
|
||||
|
||||
[:octicons-arrow-right-24: Browse Pipes](pipes/index.md)
|
||||
|
||||
- :material-pipe-wrench:{ .lg .middle } **Pipelines**
|
||||
|
||||
---
|
||||
|
||||
Complex workflows that combine multiple processing steps. Ideal for advanced use cases requiring multi-step transformations.
|
||||
|
||||
[:octicons-arrow-right-24: Browse Pipelines](pipelines/index.md)
|
||||
|
||||
</div>

---

## All Plugins at a Glance

| Plugin | Type | Description | Version |
|--------|------|-------------|---------|
| [Smart Mind Map](actions/smart-mind-map.md) | Action | Generate interactive mind maps from text | 0.7.2 |
| [Knowledge Card](actions/knowledge-card.md) | Action | Create beautiful learning flashcards | 0.2.0 |
| [Export to Excel](actions/export-to-excel.md) | Action | Export chat history to Excel files | 1.0.0 |
| [Summary](actions/summary.md) | Action | Text summarization tool | 1.0.0 |
| [Async Context Compression](filters/async-context-compression.md) | Filter | Intelligent context compression | 1.0.0 |
| [Context Enhancement](filters/context-enhancement.md) | Filter | Enhance chat context | 1.0.0 |
| [Gemini Manifold Companion](filters/gemini-manifold-companion.md) | Filter | Companion for Gemini Manifold | 1.0.0 |
| [Gemini Manifold](pipes/gemini-manifold.md) | Pipe | Gemini model integration | 1.0.0 |
| [MoE Prompt Refiner](pipelines/moe-prompt-refiner.md) | Pipeline | Multi-model prompt refinement | 1.0.0 |

---

## Installation Guide

### Step 1: Download the Plugin

Click on any plugin above to view its documentation and download the `.py` file.

### Step 2: Upload to OpenWebUI

1. Open OpenWebUI and navigate to **Admin Panel** → **Settings** → **Functions**
2. Click the **+** button to add a new function
3. Upload the downloaded `.py` file
4. Configure any required settings (API keys, options, etc.)

### Step 3: Enable and Use

1. Refresh the page after uploading
2. For **Actions**: Look for the plugin button in the message action bar
3. For **Filters**: Enable in your chat settings or globally
4. For **Pipes**: Select the custom model from the model dropdown
5. For **Pipelines**: Configure and activate in the pipeline settings

---

## Plugin Compatibility

!!! info "OpenWebUI Version"

    Most plugins in this collection are designed for OpenWebUI **v0.3.0** and later. Please check individual plugin documentation for specific version requirements.

!!! warning "Dependencies"

    Some plugins may require additional Python packages. Check each plugin's documentation for required dependencies.
63
docs/plugins/pipelines/index.md
Normal file
@@ -0,0 +1,63 @@
# Pipeline Plugins

Pipelines are complex workflows that combine multiple processing steps for advanced use cases.

## What are Pipelines?

Pipelines extend beyond simple transformations to implement:

- :material-workflow: Multi-step processing workflows
- :material-source-merge: Model orchestration
- :material-robot-industrial: Advanced agent behaviors
- :material-cog-box: Complex business logic

---

## Available Pipeline Plugins

<div class="grid cards" markdown>

- :material-view-module:{ .lg .middle } **MoE Prompt Refiner**

    ---

    Refines prompts for Mixture of Experts (MoE) summary requests to generate high-quality comprehensive reports.

    **Version:** 1.0.0

    [:octicons-arrow-right-24: Documentation](moe-prompt-refiner.md)

</div>

---

## How Pipelines Differ

| Feature | Filters | Pipes | Pipelines |
|---------|---------|-------|-----------|
| Complexity | Simple | Medium | High |
| Use Case | Message processing | Model integration | Multi-step workflows |
| Execution | Before/after LLM | As LLM | Custom orchestration |
| Dependencies | Minimal | API access | Often multiple services |

---

## Quick Installation

1. Download the pipeline `.py` file
2. Navigate to **Admin Panel** → **Settings** → **Functions**
3. Upload and configure required services
4. Enable the pipeline

---

## Development Considerations

Pipelines often require:

- Multiple API integrations
- State management across steps
- Error handling at each stage
- Performance optimization
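
Error handling at each stage, for instance, can be sketched with a small retry wrapper. This is an illustrative sketch only — `with_retries` and `flaky_stage` are hypothetical names, not part of any plugin in this collection:

```python
import time


def with_retries(stage, *args, attempts=3, delay=0.0):
    """Run a pipeline stage, retrying on failure before giving up."""
    for attempt in range(1, attempts + 1):
        try:
            return stage(*args)
        except Exception as exc:
            if attempt == attempts:
                raise RuntimeError(
                    f"{stage.__name__} failed after {attempts} attempts"
                ) from exc
            time.sleep(delay)  # back off before retrying


calls = {"n": 0}


def flaky_stage(x):
    """Simulated stage that fails once, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 2:
        raise ConnectionError("transient failure")
    return x * 2


result = with_retries(flaky_stage, 21)
print(result)
```

Wrapping each stage this way keeps transient API failures from aborting the whole workflow.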

See the [Plugin Development Guide](../../development/plugin-guide.md) for detailed guidance.
109
docs/plugins/pipelines/moe-prompt-refiner.md
Normal file
@@ -0,0 +1,109 @@
# MoE Prompt Refiner

<span class="category-badge pipeline">Pipeline</span>
<span class="version-badge">v1.0.0</span>

Refines prompts for Mixture of Experts (MoE) summary requests to generate high-quality comprehensive reports.

---

## Overview

The MoE Prompt Refiner is an advanced pipeline that optimizes prompts before sending them to multiple expert models, then synthesizes the responses into comprehensive, high-quality reports.

## Features

- :material-view-module: **Multi-Model**: Leverages multiple AI models
- :material-text-search: **Prompt Optimization**: Refines prompts for best results
- :material-merge: **Response Synthesis**: Combines expert responses
- :material-file-document: **Report Generation**: Creates structured reports

---

## Installation

1. Download the pipeline file: [`moe_prompt_refiner.py`](https://github.com/Fu-Jie/awesome-openwebui/tree/main/plugins/pipelines)
2. Upload to OpenWebUI: **Admin Panel** → **Settings** → **Functions**
3. Configure expert models and settings
4. Enable the pipeline

---

## How It Works

```mermaid
graph TD
    A[User Prompt] --> B[Prompt Refiner]
    B --> C[Expert Model 1]
    B --> D[Expert Model 2]
    B --> E[Expert Model N]
    C --> F[Response Synthesizer]
    D --> F
    E --> F
    F --> G[Comprehensive Report]
```
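
The refine → fan-out → synthesize flow in the diagram can be sketched in plain Python. The `refine`, `ask_expert`, and synthesis steps below are illustrative stubs standing in for the pipeline's actual model calls:

```python
import asyncio


def refine(prompt: str) -> str:
    """Stand-in for the prompt-refinement step."""
    return f"Provide a structured expert analysis of: {prompt}"


async def ask_expert(model: str, prompt: str) -> str:
    """Stand-in for an API call to one of the `expert_models`."""
    await asyncio.sleep(0)  # placeholder for network latency
    return f"[{model}] perspective on: {prompt}"


async def run_pipeline(prompt: str, expert_models: list) -> str:
    refined = refine(prompt)
    # Fan the refined prompt out to all experts concurrently.
    answers = await asyncio.gather(
        *(ask_expert(m, refined) for m in expert_models)
    )
    # Stand-in for the synthesis model: merge expert views into a report.
    return "\n".join(["# Comprehensive Report", *answers])


report = asyncio.run(
    run_pipeline("microservices trade-offs", ["expert-a", "expert-b"])
)
print(report)
```

The real pipeline replaces the stubs with calls to the configured models, but the control flow is the same.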

---

## Configuration

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `expert_models` | list | `[]` | List of models to consult |
| `synthesis_model` | string | `"auto"` | Model for synthesizing responses |
| `report_format` | string | `"markdown"` | Output format |

---

## Use Cases

- **Research Reports**: Gather insights from multiple AI perspectives
- **Comprehensive Analysis**: Multi-faceted problem analysis
- **Decision Support**: Balanced recommendations from diverse models
- **Content Creation**: Rich, multi-perspective content

---

## Example

**Input Prompt:**

```
Analyze the pros and cons of microservices architecture
```

**Output Report:**

```markdown
# Microservices Architecture Analysis

## Executive Summary
Based on analysis from multiple expert perspectives...

## Advantages
1. **Scalability** (Expert A)...
2. **Technology Flexibility** (Expert B)...

## Disadvantages
1. **Complexity** (Expert A)...
2. **Distributed System Challenges** (Expert C)...

## Recommendations
Synthesized recommendations based on expert consensus...
```

---

## Requirements

!!! note "Prerequisites"

    - OpenWebUI v0.3.0 or later
    - Access to multiple LLM models
    - Sufficient API quotas for multi-model queries

!!! warning "Resource Usage"

    This pipeline makes multiple API calls per request. Monitor your usage and costs.

---

## Source Code

[:fontawesome-brands-github: View on GitHub](https://github.com/Fu-Jie/awesome-openwebui/tree/main/plugins/pipelines){ .md-button }
106
docs/plugins/pipes/gemini-manifold.md
Normal file
@@ -0,0 +1,106 @@
# Gemini Manifold

<span class="category-badge pipe">Pipe</span>
<span class="version-badge">v1.0.0</span>

Integration pipe for Google's Gemini models with full streaming support.

---

## Overview

The Gemini Manifold pipe provides seamless integration with Google's Gemini AI models. It exposes Gemini models as selectable options in OpenWebUI, allowing you to use them just like any other model.

## Features

- :material-google: **Full Gemini Support**: Access all Gemini model variants
- :material-stream: **Streaming**: Real-time response streaming
- :material-image: **Multimodal**: Support for images and text
- :material-shield: **Error Handling**: Robust error management
- :material-tune: **Configurable**: Customize model parameters

---

## Installation

1. Download the plugin file: [`gemini_manifold.py`](https://github.com/Fu-Jie/awesome-openwebui/tree/main/plugins/pipes/gemini_mainfold)
2. Upload to OpenWebUI: **Admin Panel** → **Settings** → **Functions**
3. Configure your Gemini API key
4. Select Gemini models from the model dropdown

---

## Configuration

| Option | Type | Required | Description |
|--------|------|----------|-------------|
| `GEMINI_API_KEY` | string | Yes | Your Google AI Studio API key |
| `DEFAULT_MODEL` | string | No | Default Gemini model to use |
| `TEMPERATURE` | float | No | Response temperature (0-1) |
| `MAX_TOKENS` | integer | No | Maximum response tokens |
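
As an illustration, options like these map onto a Gemini `generateContent`-style request body roughly as follows. The field names follow the public Gemini REST API (`generationConfig`, `maxOutputTokens`); treat this as a sketch of the mapping, not the plugin's exact code:

```python
from typing import Optional


def build_request_body(
    prompt: str,
    temperature: Optional[float] = None,
    max_tokens: Optional[int] = None,
) -> dict:
    """Assemble a generateContent-style payload from valve settings."""
    body = {"contents": [{"role": "user", "parts": [{"text": prompt}]}]}
    config = {}
    if temperature is not None:
        config["temperature"] = temperature  # from TEMPERATURE
    if max_tokens is not None:
        config["maxOutputTokens"] = max_tokens  # from MAX_TOKENS
    if config:
        body["generationConfig"] = config
    return body


payload = build_request_body("Hello Gemini", temperature=0.7, max_tokens=256)
print(payload["generationConfig"])
```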

---

## Available Models

When configured, the following models become available:

- `gemini-pro` - Text-only model
- `gemini-pro-vision` - Multimodal model
- `gemini-1.5-pro` - Latest Pro model
- `gemini-1.5-flash` - Fast response model

---

## Usage

1. After installation, go to any chat
2. Open the model selector dropdown
3. Look for models prefixed with your pipe name
4. Select a Gemini model
5. Start chatting!

---

## Getting an API Key

1. Visit [Google AI Studio](https://makersuite.google.com/app/apikey)
2. Create a new API key
3. Copy the key and paste it in the plugin configuration

!!! warning "API Key Security"

    Keep your API key secure. Never share it publicly or commit it to version control.

---

## Companion Filter

For enhanced functionality, consider installing the [Gemini Manifold Companion](../filters/gemini-manifold-companion.md) filter.

---

## Requirements

!!! note "Prerequisites"

    - OpenWebUI v0.3.0 or later
    - Valid Gemini API key
    - Internet access to Google AI APIs

---

## Troubleshooting

??? question "Models not appearing?"

    Ensure your API key is correctly configured and the plugin is enabled.

??? question "API errors?"

    Check your API key validity and quota limits in Google AI Studio.

??? question "Slow responses?"

    Consider using `gemini-1.5-flash` for faster response times.

---

## Source Code

[:fontawesome-brands-github: View on GitHub](https://github.com/Fu-Jie/awesome-openwebui/tree/main/plugins/pipes/gemini_mainfold){ .md-button }
133
docs/plugins/pipes/index.md
Normal file
@@ -0,0 +1,133 @@
# Pipe Plugins

Pipe plugins create custom model integrations or transform LLM responses. They appear as selectable models in the OpenWebUI interface.

## What are Pipes?

Pipes allow you to:

- :material-api: Connect to external AI APIs (Gemini, Claude, etc.)
- :material-robot: Create custom model wrappers
- :material-cog-transfer: Transform requests and responses
- :material-middleware: Implement middleware logic

---

## Available Pipe Plugins

<div class="grid cards" markdown>

- :material-google:{ .lg .middle } **Gemini Manifold**

    ---

    Integration pipe for Google's Gemini models with full streaming support.

    **Version:** 1.0.0

    [:octicons-arrow-right-24: Documentation](gemini-manifold.md)

</div>

---

## How Pipes Work

```mermaid
graph LR
    A[User selects Pipe as Model] --> B[Pipe receives request]
    B --> C[Transform/Route request]
    C --> D[External API / Custom Logic]
    D --> E[Return response]
    E --> F[Display to User]
```

### The `pipes` Method

Defines what models this pipe provides:

```python
def pipes(self):
    return [
        {"id": "my-model", "name": "My Custom Model"},
        {"id": "my-model-fast", "name": "My Custom Model (Fast)"},
    ]
```

### The `pipe` Method

Handles the actual request processing:

```python
def pipe(self, body: dict) -> Generator:
    # Process the request
    messages = body.get("messages", [])

    # Call external API or custom logic
    response = call_external_api(messages)

    # Return response (can be streaming)
    return response
```
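
For streaming, `pipe` can return a generator instead of a string; OpenWebUI consumes the yielded chunks as they arrive. A self-contained toy sketch — `EchoPipe` is illustrative, not a real plugin from this collection:

```python
from typing import Generator


class EchoPipe:
    """Minimal illustrative pipe that streams its reply word by word."""

    def pipes(self) -> list:
        return [{"id": "echo", "name": "Echo (streaming)"}]

    def pipe(self, body: dict) -> Generator[str, None, None]:
        messages = body.get("messages", [])
        last = messages[-1]["content"] if messages else ""
        # Yielding chunks instead of returning a str makes the
        # response stream to the UI incrementally.
        for word in f"You said: {last}".split():
            yield word + " "


reply = "".join(EchoPipe().pipe({"messages": [{"content": "hi"}]}))
print(reply)
```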

---

## Quick Installation

1. Download the desired pipe `.py` file
2. Navigate to **Admin Panel** → **Settings** → **Functions**
3. Upload the file and configure API keys
4. The pipe will appear as a selectable model

---

## Development Template

```python
"""
title: My Custom Pipe
author: Your Name
version: 1.0.0
description: Description of your pipe plugin
"""

from pydantic import BaseModel, Field
from typing import Generator, Iterator, Union


class Pipe:
    class Valves(BaseModel):
        API_KEY: str = Field(
            default="",
            description="API key for the external service",
        )
        API_URL: str = Field(
            default="https://api.example.com",
            description="API endpoint URL",
        )

    def __init__(self):
        self.valves = self.Valves()

    def pipes(self) -> list[dict]:
        """Define available models."""
        return [
            {"id": "my-model", "name": "My Custom Model"},
        ]

    def pipe(
        self,
        body: dict,
    ) -> Union[str, Generator, Iterator]:
        """Process the request and return response."""
        messages = body.get("messages", [])
        model = body.get("model", "")

        # Your logic here
        # Can return:
        # - str: Single response
        # - Generator/Iterator: Streaming response

        return "Response from custom pipe"
```

For more details, check our [Plugin Development Guide](../../development/plugin-guide.md).