Add MkDocs documentation portal with Material theme and CI/CD workflow
Co-authored-by: Fu-Jie <33599649+Fu-Jie@users.noreply.github.com>
106
docs/plugins/pipes/gemini-manifold.md
Normal file
@@ -0,0 +1,106 @@
# Gemini Manifold

<span class="category-badge pipe">Pipe</span>
<span class="version-badge">v1.0.0</span>

Integration pipeline for Google's Gemini models with full streaming support.

---
## Overview

The Gemini Manifold pipe provides seamless integration with Google's Gemini AI models. It exposes Gemini models as selectable options in OpenWebUI, allowing you to use them just like any other model.
## Features

- :material-google: **Full Gemini Support**: Access all Gemini model variants
- :material-stream: **Streaming**: Real-time response streaming
- :material-image: **Multimodal**: Support for images and text
- :material-shield: **Error Handling**: Robust error management
- :material-tune: **Configurable**: Customize model parameters

---
## Installation

1. Download the plugin file: [`gemini_manifold.py`](https://github.com/Fu-Jie/awesome-openwebui/tree/main/plugins/pipes/gemini_mainfold)
2. Upload to OpenWebUI: **Admin Panel** → **Settings** → **Functions**
3. Configure your Gemini API key
4. Select Gemini models from the model dropdown

---
## Configuration

| Option | Type | Required | Description |
|--------|------|----------|-------------|
| `GEMINI_API_KEY` | string | Yes | Your Google AI Studio API key |
| `DEFAULT_MODEL` | string | No | Default Gemini model to use |
| `TEMPERATURE` | float | No | Response temperature (0-1) |
| `MAX_TOKENS` | integer | No | Maximum response tokens |
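The options above map onto the plugin's valves. A rough, dependency-free sketch of how they might be validated and turned into a request payload (the class layout and helper function are illustrative, not the plugin's actual source; the `generationConfig` field names follow the public Gemini REST API but should be treated as an assumption here):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Valves:
    GEMINI_API_KEY: str = ""                 # required
    DEFAULT_MODEL: str = "gemini-1.5-flash"  # optional
    TEMPERATURE: float = 0.7                 # optional, clamped to 0-1 below
    MAX_TOKENS: Optional[int] = None         # optional

def build_generation_config(v: Valves) -> dict:
    """Translate valve settings into a Gemini-style generationConfig dict."""
    if not v.GEMINI_API_KEY:
        raise ValueError("GEMINI_API_KEY is required")
    cfg = {"temperature": max(0.0, min(1.0, v.TEMPERATURE))}
    if v.MAX_TOKENS is not None:
        cfg["maxOutputTokens"] = v.MAX_TOKENS
    return cfg

print(build_generation_config(Valves(GEMINI_API_KEY="test-key")))
# prints {'temperature': 0.7}
```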
---
## Available Models

When configured, the following models become available:

- `gemini-pro` - Text-only model
- `gemini-pro-vision` - Multimodal model
- `gemini-1.5-pro` - Latest Pro model
- `gemini-1.5-flash` - Fast response model
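In pipe terms, each of these entries is produced by the plugin's `pipes` method. A minimal sketch of that mapping (illustrative only; the display names are assumptions, not the plugin's actual code):

```python
MODELS = [
    ("gemini-pro", "Gemini Pro"),
    ("gemini-pro-vision", "Gemini Pro Vision"),
    ("gemini-1.5-pro", "Gemini 1.5 Pro"),
    ("gemini-1.5-flash", "Gemini 1.5 Flash"),
]

def pipes() -> list[dict]:
    # Each entry becomes a selectable model in the OpenWebUI dropdown.
    return [{"id": model_id, "name": name} for model_id, name in MODELS]

print([m["id"] for m in pipes()])
# prints ['gemini-pro', 'gemini-pro-vision', 'gemini-1.5-pro', 'gemini-1.5-flash']
```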
---
## Usage

1. After installation, go to any chat
2. Open the model selector dropdown
3. Look for models prefixed with your pipe name
4. Select a Gemini model
5. Start chatting!

---
## Getting an API Key

1. Visit [Google AI Studio](https://makersuite.google.com/app/apikey)
2. Create a new API key
3. Copy the key and paste it in the plugin configuration

!!! warning "API Key Security"

    Keep your API key secure. Never share it publicly or commit it to version control.

---
## Companion Filter

For enhanced functionality, consider installing the [Gemini Manifold Companion](../filters/gemini-manifold-companion.md) filter.

---
## Requirements

!!! note "Prerequisites"

    - OpenWebUI v0.3.0 or later
    - Valid Gemini API key
    - Internet access to Google AI APIs

---
## Troubleshooting

??? question "Models not appearing?"

    Ensure your API key is correctly configured and the plugin is enabled.

??? question "API errors?"

    Check your API key validity and quota limits in Google AI Studio.

??? question "Slow responses?"

    Consider using `gemini-1.5-flash` for faster response times.

---
## Source Code

[:fontawesome-brands-github: View on GitHub](https://github.com/Fu-Jie/awesome-openwebui/tree/main/plugins/pipes/gemini_mainfold){ .md-button }
133
docs/plugins/pipes/index.md
Normal file
@@ -0,0 +1,133 @@
# Pipe Plugins

Pipe plugins create custom model integrations or transform LLM responses. They appear as selectable models in the OpenWebUI interface.

## What are Pipes?

Pipes allow you to:

- :material-api: Connect to external AI APIs (Gemini, Claude, etc.)
- :material-robot: Create custom model wrappers
- :material-cog-transfer: Transform requests and responses
- :material-middleware: Implement middleware logic

---
## Available Pipe Plugins

<div class="grid cards" markdown>

- :material-google:{ .lg .middle } **Gemini Manifold**

    ---

    Integration pipeline for Google's Gemini models with full streaming support.

    **Version:** 1.0.0

    [:octicons-arrow-right-24: Documentation](gemini-manifold.md)

</div>

---
## How Pipes Work

```mermaid
graph LR
    A[User selects Pipe as Model] --> B[Pipe receives request]
    B --> C[Transform/Route request]
    C --> D[External API / Custom Logic]
    D --> E[Return response]
    E --> F[Display to User]
```
### The `pipes` Method

Defines what models this pipe provides:

```python
def pipes(self):
    return [
        {"id": "my-model", "name": "My Custom Model"},
        {"id": "my-model-fast", "name": "My Custom Model (Fast)"},
    ]
```
### The `pipe` Method

Handles the actual request processing:

```python
def pipe(self, body: dict) -> Generator:
    # Process the request
    messages = body.get("messages", [])

    # Call external API or custom logic
    response = call_external_api(messages)

    # Return response (can be streaming)
    return response
```
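Returning a generator instead of a string is what enables streaming: OpenWebUI iterates the generator and forwards each chunk to the client as it arrives. A backend-free sketch of that shape (the chunk size and message payload are illustrative, not a real backend):

```python
from typing import Generator

def pipe(body: dict) -> Generator[str, None, None]:
    # Stand-in for a streaming backend: yield the reply in small chunks.
    reply = "Hello from a streaming pipe"
    for i in range(0, len(reply), 8):
        yield reply[i:i + 8]

chunks = list(pipe({"messages": [{"role": "user", "content": "hi"}]}))
print("".join(chunks))
# prints Hello from a streaming pipe
```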
---
## Quick Installation

1. Download the desired pipe `.py` file
2. Navigate to **Admin Panel** → **Settings** → **Functions**
3. Upload the file and configure API keys
4. The pipe will appear as a selectable model

---
## Development Template

```python
"""
title: My Custom Pipe
author: Your Name
version: 1.0.0
description: Description of your pipe plugin
"""

from pydantic import BaseModel, Field
from typing import Generator, Iterator, Union


class Pipe:
    class Valves(BaseModel):
        API_KEY: str = Field(
            default="",
            description="API key for the external service"
        )
        API_URL: str = Field(
            default="https://api.example.com",
            description="API endpoint URL"
        )

    def __init__(self):
        self.valves = self.Valves()

    def pipes(self) -> list[dict]:
        """Define available models."""
        return [
            {"id": "my-model", "name": "My Custom Model"},
        ]

    def pipe(
        self,
        body: dict
    ) -> Union[str, Generator, Iterator]:
        """Process the request and return response."""
        messages = body.get("messages", [])
        model = body.get("model", "")

        # Your logic here
        # Can return:
        # - str: Single response
        # - Generator/Iterator: Streaming response

        return "Response from custom pipe"
```
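The template can be exercised the way OpenWebUI (roughly) drives a pipe: list models via `pipes`, then hand each chat request to `pipe`. This smoke test repeats a pydantic-free version of the class so it runs stand-alone (plain class attributes stand in for the valves, and the call sequence is a simplification of OpenWebUI's actual internals):

```python
from typing import Generator, Iterator, Union

class Pipe:
    class Valves:
        API_KEY: str = ""
        API_URL: str = "https://api.example.com"

    def __init__(self):
        self.valves = self.Valves()

    def pipes(self) -> list[dict]:
        return [{"id": "my-model", "name": "My Custom Model"}]

    def pipe(self, body: dict) -> Union[str, Generator, Iterator]:
        return "Response from custom pipe"

pipe = Pipe()
models = pipe.pipes()                      # what the model dropdown shows
body = {"model": models[0]["id"],
        "messages": [{"role": "user", "content": "Hello"}]}
print(pipe.pipe(body))
# prints Response from custom pipe
```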
For more details, check our [Plugin Development Guide](../../development/plugin-guide.md).