feat: add plugin system, multiple plugin types, a development guide, and multilingual documentation.

This commit is contained in:
fujie
2025-12-20 12:34:49 +08:00
commit eaa6319991
74 changed files with 28409 additions and 0 deletions


@@ -0,0 +1,7 @@
{
"label": "Development",
"position": 800,
"link": {
"type": "generated-index"
}
}


@@ -0,0 +1,424 @@
---
sidebar_position: 3
title: "Events"
---
# 🔔 Events: Using `__event_emitter__` and `__event_call__` in Open WebUI
Open WebUI's plugin architecture is not just about processing input and producing output—**it's about real-time, interactive communication with the UI and users**. To make your Tools, Functions, and Pipes more dynamic, Open WebUI provides a built-in event system via the `__event_emitter__` and `__event_call__` helpers.
This guide explains **what events are**, **how you can trigger them** from your code, and **the full catalog of event types** you can use (including much more than just `"input"`).
---
## 🌊 What Are Events?
**Events** are real-time notifications or interactive requests sent from your backend code (a Tool or Function) to the web UI. They allow you to update the chat, display notifications, request confirmation, run UI flows, and more.
- Events are sent using the `__event_emitter__` helper for one-way updates, or `__event_call__` when you need user input or a response (e.g., confirmation, input, etc.).
**Metaphor:**
Think of Events like push notifications and modal dialogs that your plugin can trigger, making the chat experience richer and more interactive.
---
## 🧰 Basic Usage
### Sending an Event
You can trigger an event anywhere inside your Tool or Function by calling:
```python
await __event_emitter__(
{
"type": "status", # See the event types list below
"data": {
"description": "Processing started!",
"done": False,
"hidden": False,
},
}
)
```
You **do not** need to manually add fields like `chat_id` or `message_id`—these are handled automatically by Open WebUI.
### Interactive Events
When you need to pause execution until the user responds (e.g., confirm/cancel dialogs, code execution, or input), use `__event_call__`:
```python
result = await __event_call__(
{
"type": "input", # Or "confirmation", "execute"
"data": {
"title": "Please enter your password",
"message": "Password is required for this action",
"placeholder": "Your password here",
},
}
)
# result will contain the user's input value
```
---
## 📜 Event Payload Structure
When you emit or call an event, the basic structure is:
```json
{
"type": "event_type", // See full list below
"data": { ... } // Event-specific payload
}
```
Most of the time, you only set `"type"` and `"data"`. Open WebUI fills in the routing automatically.
---
## 🗂 Full List of Event Types
Below is a comprehensive table of **all supported `type` values** for events, along with their intended effect and data structure. (This is based on up-to-date analysis of Open WebUI event handling logic.)
| type | When to use | Data payload structure (examples) |
| -------------------------------------------- | ---------------------------------------------------- | ---------------------------------------------------------------------------------------------------- |
| `status` | Show a status update/history for a message | `{description: ..., done: bool, hidden: bool}` |
| `chat:completion` | Provide a chat completion result | (Custom, see Open WebUI internals) |
| `chat:message:delta`,<br/>`message` | Append content to the current message | `{content: "text to append"}` |
| `chat:message`,<br/>`replace` | Replace current message content completely | `{content: "replacement text"}` |
| `chat:message:files`,<br/>`files` | Set or overwrite message files (for uploads, output) | `{files: [...]}` |
| `chat:title` | Set (or update) the chat conversation title | Topic string OR `{title: ...}` |
| `chat:tags` | Update the set of tags for a chat | Tag array or object |
| `source`,<br/>`citation` | Add a source/citation, or code execution result | For code: see [below](/features/plugin/development/events#source-or-citation-and-code-execution). |
| `notification` | Show a notification ("toast") in the UI | `{type: "info" or "success" or "error" or "warning", content: "..."}` |
| `confirmation` <br/>(needs `__event_call__`) | Ask for confirmation (OK/Cancel dialog) | `{title: "...", message: "..."}` |
| `input` <br/>(needs `__event_call__`) | Request simple user input ("input box" dialog) | `{title: "...", message: "...", placeholder: "...", value: ...}` |
| `execute` <br/>(needs `__event_call__`) | Request user-side code execution and return result | `{code: "...javascript code..."}` |
**Other/Advanced types:**
- You can define your own types and handle them at the UI layer (or use upcoming event-extension mechanisms).
### ❗ Details on Specific Event Types
### `status`
Show a status/progress update in the UI:
```python
await __event_emitter__(
{
"type": "status",
"data": {
"description": "Step 1/3: Fetching data...",
"done": False,
"hidden": False,
},
}
)
```
---
### `chat:message:delta` or `message`
**Streaming output** (append text):
```python
await __event_emitter__(
{
"type": "chat:message:delta", # or simply "message"
"data": {
"content": "Partial text, "
},
}
)
# Later, as you generate more:
await __event_emitter__(
{
"type": "chat:message:delta",
"data": {
"content": "next chunk of response."
},
}
)
```
---
### `chat:message` or `replace`
**Set (or replace) the entire message content:**
```python
await __event_emitter__(
{
"type": "chat:message", # or "replace"
"data": {
"content": "Final, complete response."
},
}
)
```
---
### `files` or `chat:message:files`
**Attach or update files:**
```python
await __event_emitter__(
{
"type": "files", # or "chat:message:files"
"data": {
"files": [
# Open WebUI File Objects
]
},
}
)
```
---
### `chat:title`
**Update the chat's title:**
```python
await __event_emitter__(
{
"type": "chat:title",
"data": {
"title": "Market Analysis Bot Session"
},
}
)
```
---
### `chat:tags`
**Update the chat's tags:**
```python
await __event_emitter__(
{
"type": "chat:tags",
"data": {
"tags": ["finance", "AI", "daily-report"]
},
}
)
```
---
### `source` or `citation` (and code execution)
**Add a reference/citation:**
```python
await __event_emitter__(
{
"type": "source", # or "citation"
"data": {
# Open WebUI Source (Citation) Object
}
}
)
```
**For code execution (track execution state):**
```python
await __event_emitter__(
{
"type": "source",
"data": {
# Open WebUI Code Source (Citation) Object
}
}
)
```
---
### `notification`
**Show a toast notification:**
```python
await __event_emitter__(
{
"type": "notification",
"data": {
"type": "info", # "success", "warning", "error"
"content": "The operation completed successfully!"
}
}
)
```
---
### `confirmation` (**requires** `__event_call__`)
**Show a confirm dialog and get user response:**
```python
result = await __event_call__(
{
"type": "confirmation",
"data": {
"title": "Are you sure?",
"message": "Do you really want to proceed?"
}
}
)
if result: # or check result contents
await __event_emitter__({
"type": "notification",
"data": {"type": "success", "content": "User confirmed operation."}
})
else:
await __event_emitter__({
"type": "notification",
"data": {"type": "warning", "content": "User cancelled."}
})
```
---
### `input` (**requires** `__event_call__`)
**Prompt user for text input:**
```python
result = await __event_call__(
{
"type": "input",
"data": {
"title": "Enter your name",
"message": "We need your name to proceed.",
"placeholder": "Your full name"
}
}
)
user_input = result
await __event_emitter__(
{
"type": "notification",
"data": {"type": "info", "content": f"You entered: {user_input}"}
}
)
```
---
### `execute` (**requires** `__event_call__`)
**Run code dynamically on the user's side:**
```python
result = await __event_call__(
{
"type": "execute",
"data": {
"code": "print(40 + 2);",
}
}
)
await __event_emitter__(
{
"type": "notification",
"data": {
"type": "info",
"content": f"Code executed, result: {result}"
}
}
)
```
---
## 🏗️ When & Where to Use Events
- **From any Tool or Function** in Open WebUI.
- To **stream responses**, show progress, request user data, update the UI, or display supplementary info/files.
- `await __event_emitter__` is for one-way messages (fire and forget).
- `await __event_call__` is for when you need a response from the user (input, execute, confirmation).
---
## 💡 Tips & Advanced Notes
- **Multiple types per message:** You can emit several events of different types for one message—for example, show `status` updates, then stream with `chat:message:delta`, then complete with a `chat:message`.
- **Custom event types:** While the above list is the standard, you may use your own types and detect/handle them in custom UI code.
- **Extensibility:** The event system is designed to evolve—always check the [Open WebUI documentation](https://github.com/open-webui/open-webui) for the most current list and advanced usage.
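Putting the first tip into practice, here is a hedged sketch of emitting several event types for one message. The `__event_emitter__` below is a hypothetical stand-in that records events so the sequence can be inspected; in Open WebUI the real emitter is injected for you:

```python
import asyncio

# Hypothetical stand-in for the emitter Open WebUI injects; it simply
# records each event so the sequence can be inspected.
events = []

async def __event_emitter__(event: dict):
    events.append(event)

async def stream_reply(chunks):
    # 1. Announce progress with a status event.
    await __event_emitter__(
        {"type": "status", "data": {"description": "Generating...", "done": False}}
    )
    # 2. Stream partial content with delta events.
    for chunk in chunks:
        await __event_emitter__(
            {"type": "chat:message:delta", "data": {"content": chunk}}
        )
    # 3. Set the final message, then close the status.
    await __event_emitter__(
        {"type": "chat:message", "data": {"content": "".join(chunks)}}
    )
    await __event_emitter__(
        {"type": "status", "data": {"description": "Done", "done": True}}
    )

asyncio.run(stream_reply(["Hello, ", "world!"]))
```

The recorded sequence is `status`, two `chat:message:delta` events, `chat:message`, and a final `status` with `done: True`.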
---
## 🧐 FAQ
### Q: How do I trigger a notification for the user?
Use `notification` type:
```python
await __event_emitter__({
"type": "notification",
"data": {"type": "success", "content": "Task complete"}
})
```
### Q: How do I prompt the user for input and get their answer?
Use:
```python
response = await __event_call__({
"type": "input",
"data": {
"title": "What's your name?",
"message": "Please enter your preferred name:",
"placeholder": "Name"
}
})
# response will be: {"value": "user's answer"}
```
### Q: What event types are available for `__event_call__`?
- `"input"`: Input box dialog
- `"confirmation"`: Yes/No, OK/Cancel dialog
- `"execute"`: Run provided code on client and return result
### Q: Can I update files attached to a message?
Yes—use the `"files"` or `"chat:message:files"` event type with a `{files: [...]}` payload.
### Q: Can I update the conversation title or tags?
Absolutely: use `"chat:title"` or `"chat:tags"` accordingly.
### Q: Can I stream responses (partial tokens) to the user?
Yes—emit `"chat:message:delta"` events in a loop, then finish with `"chat:message"`.
---
## 📝 Conclusion
**Events** give you real-time, interactive superpowers inside Open WebUI. They let your code update content, trigger notifications, request user input, stream results, handle code, and much more—seamlessly plugging your backend intelligence into the chat UI.
- Use `__event_emitter__` for one-way status/content updates.
- Use `__event_call__` for interactions that require user follow-up (input, confirmation, execution).
Refer to this document for common event types and structures, and explore Open WebUI source code or docs for breaking updates or custom events!
---
**Happy event-driven coding in Open WebUI! 🚀**


@@ -0,0 +1,340 @@
---
sidebar_position: 999
title: "Reserved Arguments"
---
:::warning
This tutorial is a community contribution and is not supported by the Open WebUI team. It serves only as a demonstration of how to customize Open WebUI for your specific use case. Want to contribute? Check out the contributing tutorial.
:::
# 🪄 Special Arguments
When developing your own `Tools`, `Functions` (`Filters`, `Pipes`, or `Actions`), `Pipelines`, etc., you can use special arguments to explore the full spectrum of what Open WebUI has to offer.
This page aims to detail the type and structure of each special argument as well as provide an example.
### `body`
A `dict` usually destined to go almost directly to the model. Although it is not strictly a special argument, it is included here for easier reference and because it contains itself some special arguments.
<details>
<summary>Example</summary>
```json
{
"stream": true,
"model": "my-cool-model",
# lowercase string with - separated words: this is the ID of the model
"messages": [
{
"role": "user",
"content": [
{
"type": "text",
"text": "What is in this picture?"
},
{
"type": "image_url",
"image_url": {
"url": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAdYAAAGcCAYAAABk2YF[REDACTED]"
# Images are passed as base64 encoded data
}
}
]
},
{
"role": "assistant",
"content": "The image appears to be [REDACTED]"
},
],
"features": {
"image_generation": false,
"code_interpreter": false,
"web_search": false
},
"stream_options": {
"include_usage": true
},
"metadata": "[The exact same dict as __metadata__]",
"files": "[The exact same list as __files__]"
}
```
</details>
### `__user__`
A `dict` with user information.
Note that if the `UserValves` class is defined, its instance has to be accessed via `__user__["valves"]`. Otherwise, the `valves` key is missing entirely from `__user__`.
<details>
<summary>Example</summary>
```json
{
"id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"email": "cheesy_dude@openwebui.com",
"name": "Patrick",
"role": "user",
# role can be either `user` or `admin`
"valves": "[the UserValves instance]"
}
```
</details>
### `__metadata__`
A `dict` with wide ranging information about the chat, model, files, etc.
<details>
<summary>Example</summary>
```json
{
"user_id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"chat_id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"message_id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"session_id": "xxxxxxxxxxxxxxxxxxxx",
"tool_ids": null,
# tool_ids is a list of str.
"tool_servers": [],
"files": "[Same as in body['files']]",
# If no files are given, the files key exists in __metadata__ and its value is []
"features": {
"image_generation": false,
"code_interpreter": false,
"web_search": false
},
"variables": {
"{{USER_NAME}}": "cheesy_username",
"{{USER_LOCATION}}": "Unknown",
"{{CURRENT_DATETIME}}": "2025-02-02 XX:XX:XX",
"{{CURRENT_DATE}}": "2025-02-02",
"{{CURRENT_TIME}}": "XX:XX:XX",
"{{CURRENT_WEEKDAY}}": "Monday",
"{{CURRENT_TIMEZONE}}": "Europe/Berlin",
"{{USER_LANGUAGE}}": "en-US"
},
"model": "[The exact same dict as __model__]",
"direct": false,
"function_calling": "native",
"type": "user_response",
"interface": "open-webui"
}
```
</details>
### `__model__`
A `dict` with information about the model.
<details>
<summary>Example</summary>
```json
{
"id": "my-cool-model",
"name": "My Cool Model",
"object": "model",
"created": 1746000000,
"owned_by": "openai",
# either openai or ollama
"info": {
"id": "my-cool-model",
"user_id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"base_model_id": "gpt-4o",
# this is the name of the model that the endpoint serves
"name": "My Cool Model",
"params": {
"system": "You are my best assistant. You answer [REDACTED]",
"function_calling": "native"
# custom options appear here, for example "Top K"
},
"meta": {
"profile_image_url": "/static/favicon.png",
"description": "Description of my-cool-model",
"capabilities": {
"vision": true,
"usage": true,
"citations": true
},
"position": 17,
"tags": [
{
"name": "for_friends"
},
{
"name": "vision_enabled"
}
],
"suggestion_prompts": null
},
"access_control": {
"read": {
"group_ids": [],
"user_ids": []
},
"write": {
"group_ids": [],
"user_ids": []
}
},
"is_active": true,
"updated_at": 1740000000,
"created_at": 1740000000
},
"preset": true,
"actions": [],
"tags": [
{
"name": "for_friends"
},
{
"name": "vision_enabled"
}
]
}
```
</details>
### `__messages__`
A `list` of the previous messages.
See the `body["messages"]` value above.
### `__chat_id__`
The `str` of the `chat_id`.
See the `__metadata__["chat_id"]` value above.
### `__session_id__`
The `str` of the `session_id`.
See the `__metadata__["session_id"]` value above.
### `__message_id__`
The `str` of the `message_id`.
See the `__metadata__["message_id"]` value above.
### `__event_emitter__`
A `Callable` used to display event information to the user.
### `__event_call__`
A `Callable` used to request a response from the user (e.g., confirmation or input); commonly used in `Actions`.
### `__files__`
A `list` of files sent via the chat. Note that images are not considered files and are sent directly to the model as part of the `body["messages"]` list.
The actual binary of the file is not part of the arguments for performance reasons, but the file remains accessible by its path if needed. For example, when running with `docker`, the Python syntax for the path could be:
```python
from pathlib import Path

# Note: each item of __files__ stores its metadata under the "file" key,
# as shown in the example payload below.
first = __files__[0]["file"]
the_file = Path(f"/app/backend/data/uploads/{first['id']}_{first['filename']}")
assert the_file.exists()
```
Note that the same files dict can also be accessed via `__metadata__["files"]` (and its value is `[]` if no files are sent) or via `body["files"]` (but the `files` key is missing entirely from `body` if no files are sent).
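A minimal sketch of a defensive lookup across those three locations (the payloads here are hypothetical stand-ins, not real chat data):

```python
# Hypothetical payloads showing the three places the file list can appear.
body = {"stream": True, "messages": []}  # "files" key absent when nothing is sent
__metadata__ = {"files": []}             # key always present, [] when empty
__files__ = []

# Defensive lookup that works whether or not files were sent:
files = __files__ or __metadata__.get("files") or body.get("files", [])
```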
<details>
<summary>Example</summary>
```json
[
{
"type": "file",
"file": {
"id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"filename": "Napoleon - Wikipedia.pdf",
"user_id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"hash": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"data": {
"content": "Napoleon - Wikipedia\n\n\nNapoleon I\n\nThe Emperor Napoleon in His Study at the\nTuileries, 1812\n\nEmperor of the French\n\n1st reign 18 May 1804 6 April 1814\n\nSuccessor Louis XVIII[a]\n\n2nd reign 20 March 1815  22 June 1815\n\nSuccessor Louis XVIII[a]\n\nFirst Consul of the French Republic\n\nIn office\n13 December 1799  18 May 1804\n\nBorn Napoleone Buonaparte\n15 August 1769\nAjaccio, Corsica, Kingdom of\nFrance\n\nDied 5 May 1821 (aged 51)\nLongwood, Saint Helena\n\nBurial 15 December 1840\nLes Invalides, Paris\n\nNapoleon\nNapoleon Bonaparte[b] (born Napoleone\nBuonaparte;[1][c] 15 August 1769 5 May 1821), later\nknown [REDACTED]",
# The content value is the output of the document parser, the above example is with Tika as a document parser
},
"meta": {
"name": "Napoleon - Wikipedia.pdf",
"content_type": "application/pdf",
"size": 10486578,
# in bytes, here about 10 MB
"data": {},
"collection_name": "file-96xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
# always begins with 'file-'
},
"created_at": 1740000000,
"updated_at": 1740000000
},
"id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"url": "/api/v1/files/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"name": "Napoleon - Wikipedia.pdf",
"collection_name": "file-96xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
"status": "uploaded",
"size": 10486578,
"error": "",
"itemId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
# itemId is not the same as file["id"]
}
]
```
</details>
### `__request__`
An instance of `fastapi.Request`. You can read more in the [migration page](/docs/features/plugin/migration/index.mdx) or in [fastapi's documentation](https://fastapi.tiangolo.com/reference/request/).
### `__task__`
A `str` for the type of task. Its value is just a shorthand for `__metadata__["task"]` if present, otherwise `None`.
<details>
<summary>Possible values</summary>
```json
[
"title_generation",
"tags_generation",
"emoji_generation",
"query_generation",
"image_prompt_generation",
"autocomplete_generation",
"function_calling",
"moa_response_generation"
]
```
</details>
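As an illustration of why this matters, a Filter might skip its heavy logic for background tasks such as title or tag generation. This is a hedged sketch; the `inlet` body is invented for the example:

```python
def inlet(body: dict, __task__=None) -> dict:
    # Background tasks such as title or tag generation should usually pass
    # through untouched; __task__ is None for a normal user turn.
    if __task__ in ("title_generation", "tags_generation"):
        return body
    # For normal turns, append an (example) system instruction.
    messages = list(body.get("messages", []))
    messages.append({"role": "system", "content": "Be concise."})
    return {**body, "messages": messages}

background = inlet({"messages": []}, __task__="title_generation")
normal = inlet({"messages": []})
```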
### `__task_body__`
A `dict` containing the `body` needed to accomplish a given `__task__`. Its value is just a shorthand for `__metadata__["task_body"]` if present, otherwise `None`.
Its structure is the same as `body` above, with modifications like using the appropriate model and system message etc.
### `__tools__`
A `list` of `ToolUserModel` instances.
For details on the attributes of `ToolUserModel` instances, see the code in [tools.py](https://github.com/open-webui/open-webui/blob/main/backend/open_webui/models/tools.py).


@@ -0,0 +1,77 @@
---
sidebar_position: 3
title: "Valves"
---
## Valves
Valves and UserValves are used to allow users to provide dynamic details such as an API key or a configuration option. These will create a fillable field or a bool switch in the GUI menu for the given function. They are always optional, but HIGHLY encouraged.
Hence, the Valves and UserValves classes can be defined in either a `Pipe`, `Pipeline`, `Filter`, or `Tools` class.
Valves are configurable by admins alone via the Tools or Functions menus. On the other hand, UserValves are configurable by any user directly from a chat session.
<details>
<summary>Commented example</summary>
```python
from pydantic import BaseModel, Field
from typing import Literal
# Define the Valves
class Filter:
# Notice the current indentation: Valves and UserValves must be declared as
# attributes of a Tools, Filter or Pipe class. Here we take the
# example of a Filter.
class Valves(BaseModel):
# Valves and UserValves inherit from pydantic's BaseModel. This
# enables complex use cases like model validators etc.
test_valve: int = Field( # Notice the type hint: it is used to
# choose the kind of UI element to show the user (buttons,
# texts, etc).
default=4,
description="A valve controlling a numerical value"
# required=False,  # set to True to make the field mandatory
)
# To give the user the choice between multiple strings, you can use Literal from typing:
choice_option: Literal["choiceA", "choiceB"] = Field(
default="choiceA",
description="An example of a multi choice valve",
)
priority: int = Field(
default=0,
description="Priority level for the filter operations. Lower values are passed through first"
)
# The priority field is optional but if present will be used to
# order the Filters.
pass
# Note that this 'pass' helps for parsing and is recommended.
# UserValves are defined the same way.
class UserValves(BaseModel):
test_user_valve: bool = Field(
default=False, description="A user valve controlling a True/False (on/off) switch"
)
pass
def __init__(self):
self.valves = self.Valves()
# Because they are set by the admin, they are accessible directly
# upon code execution.
pass
# The inlet method is only used for Filter but the __user__ handling is the same
def inlet(self, body: dict, __user__: dict):
# Because UserValves are defined per user they are only available
# on use.
# Note that although __user__ is a dict, __user__["valves"] is a
# UserValves object. Hence you can access values like that:
test_user_valve = __user__["valves"].test_user_valve
# Or:
test_user_valve = dict(__user__["valves"])["test_user_valve"]
# But this will return the default value instead of the actual value:
# test_user_valve = __user__["valves"]["test_user_valve"] # Do not do that!
```
</details>


@@ -0,0 +1,316 @@
---
sidebar_position: 3
title: "Action Function"
---
Action functions allow you to write custom buttons that appear in the message toolbar for end users to interact with. This feature enables more interactive messaging, allowing users to grant permission before a task is performed, generate visualizations of structured data, download an audio snippet of chats, and many other use cases.
Actions are admin-managed functions that extend the chat interface with custom interactive capabilities. When a message is generated by a model that has actions configured, these actions appear as clickable buttons beneath the message.
A scaffold of Action code can be found [in the community section](https://openwebui.com/f/hub/custom_action/). For more Action Function examples built by the community, visit [https://openwebui.com/functions](https://openwebui.com/functions).
An example of a graph visualization Action can be seen in the video below.
<div align="center">
<a href="#">
<img
src="/images/pipelines/graph-viz-action.gif"
alt="Graph Visualization Action"
/>
</a>
</div>
## Action Function Architecture
Actions are Python-based functions that integrate directly into the chat message toolbar. They execute server-side and can interact with users through real-time events, modify message content, and access the full Open WebUI context.
### Function Structure
Actions follow a specific class structure with an `action` method as the main entry point:
```python
class Action:
def __init__(self):
self.valves = self.Valves()
class Valves(BaseModel):
# Configuration parameters
parameter_name: str = "default_value"
async def action(self, body: dict, __user__=None, __event_emitter__=None, __event_call__=None):
# Action implementation
return {"content": "Modified message content"}
```
### Action Method Parameters
The `action` method receives several parameters that provide access to the execution context:
- **`body`** - Dictionary containing the message data and context
- **`__user__`** - Current user object with permissions and settings
- **`__event_emitter__`** - Function to send real-time updates to the frontend
- **`__event_call__`** - Function for bidirectional communication (confirmations, inputs)
- **`__model__`** - Model information that triggered the action
- **`__request__`** - FastAPI request object for accessing headers, etc.
- **`__id__`** - Action ID (useful for multi-action functions)
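A minimal sketch tying several of these parameters together. The stubbed invocation below is hypothetical; in Open WebUI these arguments are injected for you:

```python
import asyncio

class Action:
    async def action(self, body: dict, __user__=None,
                     __event_emitter__=None, __id__=None):
        # Report progress if an emitter was injected.
        if __event_emitter__:
            await __event_emitter__(
                {"type": "status", "data": {"description": "Running...", "done": False}}
            )
        name = (__user__ or {}).get("name", "unknown")
        return {"content": f"Action '{__id__}' run by {name} on: {body.get('content', '')}"}

# Hypothetical invocation with a stubbed context:
result = asyncio.run(
    Action().action({"content": "hello"}, __user__={"name": "Patrick"}, __id__="summarize")
)
```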
## Event System Integration
Actions can utilize Open WebUI's real-time event system for interactive experiences:
### Event Emitter (`__event_emitter__`)
**For more information about Events and Event emitters, see [Events and Event Emitters](https://docs.openwebui.com/features/plugin/events/).**
Send real-time updates to the frontend during action execution:
```python
async def action(self, body: dict, __event_emitter__=None):
# Send status updates
await __event_emitter__({
"type": "status",
"data": {"description": "Processing request..."}
})
# Send notifications
await __event_emitter__({
"type": "notification",
"data": {"type": "info", "content": "Action completed successfully"}
})
```
### Event Call (`__event_call__`)
Request user input or confirmation during execution:
```python
async def action(self, body: dict, __event_call__=None):
# Request user confirmation
response = await __event_call__({
"type": "confirmation",
"data": {
"title": "Confirm Action",
"message": "Are you sure you want to proceed?"
}
})
# Request user input
user_input = await __event_call__({
"type": "input",
"data": {
"title": "Enter Value",
"message": "Please provide additional information:",
"placeholder": "Type your input here..."
}
})
```
## Action Types and Configurations
### Single Actions
Standard actions with one `action` method:
```python
async def action(self, body: dict, **kwargs):
# Single action implementation
return {"content": "Action result"}
```
### Multi-Actions
Functions can define multiple sub-actions through an `actions` array:
```python
actions = [
{
"id": "summarize",
"name": "Summarize",
"icon_url": "data:image/svg+xml;base64,..."
},
{
"id": "translate",
"name": "Translate",
"icon_url": "data:image/svg+xml;base64,..."
}
]
async def action(self, body: dict, __id__=None, **kwargs):
if __id__ == "summarize":
# Summarization logic
return {"content": "Summary: ..."}
elif __id__ == "translate":
# Translation logic
return {"content": "Translation: ..."}
```
### Global vs Model-Specific Actions
- **Global Actions** - Turn on the toggle in the Action's settings to enable it globally for all users and all models.
- **Model-Specific Actions** - Configure enabled actions for specific models in the model settings.
## Advanced Capabilities
### Background Task Execution
For long-running operations, actions can integrate with the task system:
```python
async def action(self, body: dict, __event_emitter__=None):
# Start long-running process
await __event_emitter__({
"type": "status",
"data": {"description": "Starting background processing..."}
})
# Perform time-consuming operation
result = await some_long_running_function()
return {"content": f"Processing completed: {result}"}
```
### File and Media Handling
Actions can work with uploaded files and generate new media:
```python
async def action(self, body: dict):
message = body
# Access uploaded files
if message.get("files"):
for file in message["files"]:
# Process file based on type
if file["type"] == "image":
# Image processing logic
pass
# Return new files
return {
"content": "Analysis complete",
"files": [
{
"type": "image",
"url": "generated_chart.png",
"name": "Analysis Chart"
}
]
}
```
### User Context and Permissions
Actions can access user information and respect permissions:
```python
async def action(self, body: dict, __user__=None):
if __user__["role"] != "admin":
return {"content": "This action requires admin privileges"}
user_name = __user__["name"]
return {"content": f"Hello {user_name}, admin action completed"}
```
## Example - Specifying Action Frontmatter
Each Action function can include a docstring at the top to define metadata for the button. This helps customize the display and behavior of your Action in Open WebUI.
Example of supported frontmatter fields:
- `title`: Display name of the Action.
- `author`: Name of the creator.
- `version`: Version number of the Action.
- `required_open_webui_version`: Minimum compatible version of Open WebUI.
- `icon_url` (optional): URL or Base64 string for a custom icon.
**Base64-Encoded Example:**
<details>
<summary>Example</summary>
```python
"""
title: Enhanced Message Processor
author: @admin
version: 1.2.0
required_open_webui_version: 0.5.0
icon_url: data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMjQiIGhlaWdodD0iMjQiIHZpZXdCb3g9IjAgMCAyNCAyNCIgZmlsbD0ibm9uZSIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj4KPHBhdGggZD0iTTEyIDJMMTMuMDkgOC4yNkwyMCA5TDEzLjA5IDE1Ljc0TDEyIDIyTDEwLjkxIDE1Ljc0TDQgOUwxMC45MSA4LjI2TDEyIDJaIiBzdHJva2U9ImN1cnJlbnRDb2xvciIgc3Ryb2tlLXdpZHRoPSIyIiBzdHJva2UtbGluZWNhcD0icm91bmQiIHN0cm9rZS1saW5lam9pbj0icm91bmQiLz4KPHN2Zz4K
requirements: requests,beautifulsoup4
"""
from pydantic import BaseModel
class Action:
def __init__(self):
self.valves = self.Valves()
class Valves(BaseModel):
api_key: str = ""
processing_mode: str = "standard"
async def action(
self,
body: dict,
__user__=None,
__event_emitter__=None,
__event_call__=None,
):
# Send initial status
await __event_emitter__({
"type": "status",
"data": {"description": "Processing message..."}
})
# Get user confirmation
response = await __event_call__({
"type": "confirmation",
"data": {
"title": "Process Message",
"message": "Do you want to enhance this message?"
}
})
if not response:
return {"content": "Action cancelled by user"}
# Process the message
original_content = body.get("content", "")
enhanced_content = f"Enhanced: {original_content}"
return {"content": enhanced_content}
```
</details>
## Best Practices
### Error Handling
Always implement proper error handling in your actions:
```python
async def action(self, body: dict, __event_emitter__=None):
try:
# Action logic here
result = perform_operation()
return {"content": f"Success: {result}"}
except Exception as e:
await __event_emitter__({
"type": "notification",
"data": {"type": "error", "content": f"Action failed: {str(e)}"}
})
return {"content": "Action encountered an error"}
```
### Performance Considerations
- Use async/await for I/O operations
- Implement timeouts for external API calls
- Provide progress updates for long-running operations
- Consider using background tasks for heavy processing
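For example, an external call can be bounded with `asyncio.wait_for` so a slow service cannot hang the chat. This is a sketch with an invented stand-in for the external API:

```python
import asyncio

async def call_external_api():
    # Stand-in for a slow external service.
    await asyncio.sleep(10)
    return "data"

async def action(body: dict):
    try:
        # Bound the external call with a timeout.
        result = await asyncio.wait_for(call_external_api(), timeout=0.1)
        return {"content": f"Success: {result}"}
    except asyncio.TimeoutError:
        return {"content": "The external service timed out; please try again."}

outcome = asyncio.run(action({}))
```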
### User Experience
- Always provide clear feedback through event emitters
- Use confirmation dialogs for destructive actions
- Include helpful error messages
## Integration with Open WebUI Features
Actions integrate seamlessly with other Open WebUI features:
- **Models** - Actions can be model-specific or global
- **Tools** - Actions can invoke external tools and APIs
- **Files** - Actions can process uploaded files and generate new ones
- **Memory** - Actions can access conversation history and context
- **Permissions** - Actions respect user roles and access controls
For more examples and community-contributed actions, visit [https://openwebui.com/functions](https://openwebui.com/functions) where you can discover, download, and explore custom functions built by the Open WebUI community.


@@ -0,0 +1,423 @@
---
sidebar_position: 2
title: "Filter Function"
---
# 🪄 Filter Function: Modify Inputs and Outputs
Welcome to the comprehensive guide on Filter Functions in Open WebUI! Filters are a flexible and powerful **plugin system** for modifying data *before it's sent to the Large Language Model (LLM)* (input) or *after it's returned from the LLM* (output). Whether you're transforming inputs for better context or cleaning up outputs for improved readability, **Filter Functions** let you do it all.
This guide will break down **what Filters are**, how they work, their structure, and everything you need to know to build powerful and user-friendly filters of your own. Let's dig in, and don't worry—I'll use metaphors, examples, and tips to make everything crystal clear! 🌟
---
## 🌊 What Are Filters in Open WebUI?
Imagine Open WebUI as a **stream of water** flowing through pipes:
- **User inputs** and **LLM outputs** are the water.
- **Filters** are the **water treatment stages** that clean, modify, and adapt the water before it reaches the final destination.
Filters sit in the middle of the flow—like checkpoints—where you decide what needs to be adjusted.
Here's a quick summary of what Filters do:
1. **Modify User Inputs (Inlet Function)**: Tweak the input data before it reaches the AI model. This is where you enhance clarity, add context, sanitize text, or reformat messages to match specific requirements.
2. **Intercept Model Outputs (Stream Function)**: Capture and adjust the AI's responses **as they're generated** by the model. This is useful for real-time modifications, like filtering out sensitive information or formatting the output for better readability.
3. **Modify Model Outputs (Outlet Function)**: Adjust the AI's response **after it's processed**, before showing it to the user. This can help refine, log, or adapt the data for a cleaner user experience.
> **Key Concept:** Filters are not standalone models but tools that enhance or transform the data traveling *to* and *from* models.
Filters are like **translators or editors** in the AI workflow: you can intercept and change the conversation without interrupting the flow.
---
## 🗺️ Structure of a Filter Function: The Skeleton
Let's start with the simplest representation of a Filter Function. Don't worry if some parts feel technical at first—we'll break it all down step by step!
### 🦴 Basic Skeleton of a Filter
```python
from pydantic import BaseModel
from typing import Optional
class Filter:
# Valves: Configuration options for the filter
class Valves(BaseModel):
pass
def __init__(self):
# Initialize valves (optional configuration for the Filter)
self.valves = self.Valves()
def inlet(self, body: dict) -> dict:
# This is where you manipulate user inputs.
print(f"inlet called: {body}")
return body
def stream(self, event: dict) -> dict:
# This is where you modify streamed chunks of model output.
print(f"stream event: {event}")
return event
def outlet(self, body: dict) -> None:
# This is where you manipulate model outputs.
print(f"outlet called: {body}")
```
---
### 🆕 🧲 Toggle Filter Example: Adding Interactivity and Icons (New in Open WebUI 0.6.10)
Filters can do more than simply modify text—they can expose UI toggles and display custom icons. For instance, you might want a filter that can be turned on/off with a user interface button, and displays a special icon in Open WebUI's message input UI.
Here's how you could create such a toggle filter:
```python
from pydantic import BaseModel, Field
from typing import Optional
class Filter:
class Valves(BaseModel):
pass
def __init__(self):
self.valves = self.Valves()
self.toggle = True # IMPORTANT: This creates a switch UI in Open WebUI
# TIP: Use SVG Data URI!
self.icon = """data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIGZpbGw9Im5vbmUiIHZpZXdCb3g9IjAgMCAyNCAyNCIgc3Ryb2tlLXdpZHRoPSIxLjUiIHN0cm9rZT0iY3VycmVudENvbG9yIiBjbGFzcz0ic2l6ZS02Ij4KICA8cGF0aCBzdHJva2UtbGluZWNhcD0icm91bmQiIHN0cm9rZS1saW5lam9pbj0icm91bmQiIGQ9Ik0xMiAxOHYtNS4yNW0wIDBhNi4wMSA2LjAxIDAgMCAwIDEuNS0uMTg5bS0xLjUuMTg5YTYuMDEgNi4wMSAwIDAgMS0xLjUtLjE4OW0zLjc1IDcuNDc4YTEyLjA2IDEyLjA2IDAgMCAxLTQuNSAwbTMuNzUgMi4zODNhMTQuNDA2IDE0LjQwNiAwIDAgMS0zIDBNMTQuMjUgMTh2LS4xOTJjMC0uOTgzLjY1OC0xLjgyMyAxLjUwOC0yLjMxNmE3LjUgNy41IDAgMSAwLTcuNTE3IDBjLjg1LjQ5MyAxLjUwOSAxLjMzMyAxLjUwOSAyLjMxNlYxOCIgLz4KPC9zdmc+Cg=="""
pass
async def inlet(
self, body: dict, __event_emitter__, __user__: Optional[dict] = None
) -> dict:
await __event_emitter__(
{
"type": "status",
"data": {
"description": "Toggled!",
"done": True,
"hidden": False,
},
}
)
return body
```
#### 🖼️ What's happening?
- **toggle = True** creates a switch UI in Open WebUI—users can manually enable or disable the filter in real time.
- **icon** (with a Data URI) will show up as a little image next to the filter's name. You can use any SVG as long as it's Data URI encoded!
- **The `inlet` function** uses the `__event_emitter__` special argument to broadcast feedback/status to the UI, such as a little toast/notification that reads "Toggled!"
![Toggle Filter](/images/features/plugin/functions/toggle-filter.png)
You can use these mechanisms to make your filters dynamic, interactive, and visually unique within Open WebUI's plugin ecosystem.
---
### 🎯 Key Components Explained
#### 1⃣ **`Valves` Class (Optional Settings)**
Think of **Valves** as the knobs and sliders for your filter. If you want to give users configurable options to adjust your Filter's behavior, you define those here.
```python
class Valves(BaseModel):
OPTION_NAME: str = "Default Value"
```
For example:
If you're creating a filter that converts responses into uppercase, you might allow users to configure whether every output gets totally capitalized via a valve like `TRANSFORM_UPPERCASE: bool = True/False`.
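A minimal sketch of that idea, with the valve applied in `outlet` (the valve name `TRANSFORM_UPPERCASE` is just an illustration, not a built-in setting):

```python
from pydantic import BaseModel, Field
from typing import Optional

class Filter:
    class Valves(BaseModel):
        TRANSFORM_UPPERCASE: bool = Field(
            default=True,
            description="If true, capitalize every assistant response.",
        )

    def __init__(self):
        self.valves = self.Valves()

    def outlet(self, body: dict, __user__: Optional[dict] = None) -> dict:
        # Only touch assistant messages, and only when the valve is on
        if self.valves.TRANSFORM_UPPERCASE:
            for message in body.get("messages", []):
                if message.get("role") == "assistant":
                    message["content"] = message["content"].upper()
        return body

f = Filter()
out = f.outlet({"messages": [{"role": "assistant", "content": "hello there"}]})
```

Users can then flip `TRANSFORM_UPPERCASE` in the valve settings UI without editing any code.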
##### Configuring Valves with Dropdown Menus (Enums)
You can enhance the user experience for your filter's settings by providing dropdown menus instead of free-form text inputs for certain `Valves`. This is achieved using `json_schema_extra` with the `enum` keyword in your Pydantic `Field` definitions.
The `enum` keyword allows you to specify a list of predefined values that the UI should present as options in a dropdown.
**Example:** Creating a dropdown for color themes in a filter.
```python
from pydantic import BaseModel, Field
from typing import Optional
# Define your available options (e.g., color themes)
COLOR_THEMES = {
"Plain (No Color)": [],
"Monochromatic Blue": ["blue", "RoyalBlue", "SteelBlue", "LightSteelBlue"],
"Warm & Energetic": ["orange", "red", "magenta", "DarkOrange"],
"Cool & Calm": ["cyan", "blue", "green", "Teal", "CadetBlue"],
"Forest & Earth": ["green", "DarkGreen", "LimeGreen", "OliveGreen"],
"Mystical Purple": ["purple", "DarkOrchid", "MediumPurple", "Lavender"],
"Grayscale": ["gray", "DarkGray", "LightGray"],
"Rainbow Fun": [
"red",
"orange",
"yellow",
"green",
"blue",
"indigo",
"violet",
],
"Ocean Breeze": ["blue", "cyan", "LightCyan", "DarkTurquoise"],
"Sunset Glow": ["DarkRed", "DarkOrange", "Orange", "gold"],
"Custom Sequence (See Code)": [],
}
class Filter:
class Valves(BaseModel):
selected_theme: str = Field(
"Monochromatic Blue",
description="Choose a predefined color theme for LLM responses. 'Plain (No Color)' disables coloring.",
json_schema_extra={"enum": list(COLOR_THEMES.keys())}, # KEY: This creates the dropdown
)
custom_colors_csv: str = Field(
"",
description="CSV of colors for 'Custom Sequence' theme (e.g., 'red,blue,green'). Uses xcolor names.",
)
strip_existing_latex: bool = Field(
True,
description="If true, attempts to remove existing LaTeX color commands. Recommended to avoid nested rendering issues.",
)
colorize_type: str = Field(
"sequential_word",
description="How to apply colors: 'sequential_word' (word by word), 'sequential_line' (line by line), 'per_letter' (letter by letter), 'full_message' (entire message).",
json_schema_extra={
"enum": [
"sequential_word",
"sequential_line",
"per_letter",
"full_message",
]
}, # Another example of an enum dropdown
)
color_cycle_reset_per_message: bool = Field(
True,
description="If true, the color sequence restarts for each new LLM response message. If false, it continues across messages.",
)
debug_logging: bool = Field(
False,
description="Enable verbose logging to the console for debugging filter operations.",
)
def __init__(self):
self.valves = self.Valves()
# ... rest of your __init__ logic ...
```
**What's happening?**
* **`json_schema_extra`**: This argument in `Field` allows you to inject arbitrary JSON Schema properties that Pydantic doesn't explicitly support but can be used by downstream tools (like Open WebUI's UI renderer).
* **`"enum": list(COLOR_THEMES.keys())`**: This tells Open WebUI that the `selected_theme` field should present a selection of values, specifically the keys from our `COLOR_THEMES` dictionary. The UI will then render a dropdown menu with "Plain (No Color)", "Monochromatic Blue", "Warm & Energetic", etc., as selectable options.
* The `colorize_type` field also demonstrates another `enum` dropdown for different coloring methods.
Using `enum` for your `Valves` options makes your filters more user-friendly and prevents invalid inputs, leading to a smoother configuration experience.
---
#### 2⃣ **`inlet` Function (Input Pre-Processing)**
The `inlet` function is like **prepping food before cooking**. Imagine you're a chef: before the ingredients go into the recipe (the LLM in this case), you might wash vegetables, chop onions, or season the meat. Without this step, your final dish could lack flavor, have unwashed produce, or simply be inconsistent.
In the world of Open WebUI, the `inlet` function does this important prep work on the **user input** before it's sent to the model. It ensures the input is as clean, contextual, and helpful as possible for the AI to handle.
📥 **Input**:
- **`body`**: The raw input from Open WebUI to the model. It is in the format of a chat-completion request (usually a dictionary that includes fields like the conversation's messages, model settings, and other metadata). Think of this as your recipe ingredients.
🚀 **Your Task**:
Modify and return the `body`. The modified version of the `body` is what the LLM works with, so this is your chance to bring clarity, structure, and context to the input.
##### 🍳 Why Would You Use the `inlet`?
1. **Adding Context**: Automatically append crucial information to the user's input, especially if their text is vague or incomplete. For example, you might add "You are a friendly assistant" or "Help this user troubleshoot a software bug."
2. **Formatting Data**: If the input requires a specific format, like JSON or Markdown, you can transform it before sending it to the model.
3. **Sanitizing Input**: Remove unwanted characters, strip potentially harmful or confusing symbols (like excessive whitespace or emojis), or replace sensitive information.
4. **Streamlining User Input**: If your model's output improves with additional guidance, you can use the `inlet` to inject clarifying instructions automatically!
##### 💡 Example Use Cases: Build on Food Prep
###### 🥗 Example 1: Adding System Context
Let's say the LLM is a chef preparing a dish for Italian cuisine, but the user hasn't mentioned "This is for Italian cooking." You can ensure the message is clear by appending this context before sending the data to the model.
```python
def inlet(self, body: dict, __user__: Optional[dict] = None) -> dict:
# Add system message for Italian context in the conversation
context_message = {
"role": "system",
"content": "You are helping the user prepare an Italian meal."
}
# Insert the context at the beginning of the chat history
body.setdefault("messages", []).insert(0, context_message)
return body
```
📖 **What Happens?**
- Any user input like "What are some good dinner ideas?" now carries the Italian theme because we've set the system context! Cheesecake might not show up as an answer, but pasta sure will.
###### 🔪 Example 2: Cleaning Input (Remove Odd Characters)
Suppose the input from the user looks messy or includes unwanted symbols like `!!!`, making the conversation inefficient or harder for the model to parse. You can clean it up while preserving the core content.
```python
def inlet(self, body: dict, __user__: Optional[dict] = None) -> dict:
# Clean the last user input (from the end of the 'messages' list)
last_message = body["messages"][-1]["content"]
body["messages"][-1]["content"] = last_message.replace("!!!", "").strip()
return body
```
📖 **What Happens?**
- Before: `"How can I debug this issue!!!"` ➡️ Sent to the model as `"How can I debug this issue"`
:::note
Note: The user feels the same, but the model processes a cleaner and easier-to-understand query.
:::
##### 📊 How `inlet` Helps Optimize Input for the LLM:
- Improves **accuracy** by clarifying ambiguous queries.
- Makes the AI **more efficient** by removing unnecessary noise like emojis, HTML tags, or extra punctuation.
- Ensures **consistency** by formatting user input to match the model's expected patterns or schemas (like, say, JSON for a specific use case).
💭 **Think of `inlet` as the sous-chef in your kitchen**—ensuring everything that goes into the model (your AI "recipe") has been prepped, cleaned, and seasoned to perfection. The better the input, the better the output!
---
#### 🆕 3⃣ **`stream` Hook (New in Open WebUI 0.5.17)**
##### 🔄 What is the `stream` Hook?
The **`stream` function** is a new feature introduced in Open WebUI **0.5.17** that allows you to **intercept and modify streamed model responses** in real time.
Unlike `outlet`, which processes an entire completed response, `stream` operates on **individual chunks** as they are received from the model.
##### 🛠️ When to Use the Stream Hook?
- Modify **streaming responses** before they are displayed to users.
- Implement **real-time censorship or cleanup**.
- **Monitor streamed data** for logging/debugging.
##### 📜 Example: Logging Streaming Chunks
Here's how you can inspect and modify streamed LLM responses:
```python
def stream(self, event: dict) -> dict:
print(event) # Print each incoming chunk for inspection
return event
```
> **Example Streamed Events:**
```jsonl
{"id": "chatcmpl-B4l99MMaP3QLGU5uV7BaBM0eDS0jb","choices": [{"delta": {"content": "Hi"}}]}
{"id": "chatcmpl-B4l99MMaP3QLGU5uV7BaBM0eDS0jb","choices": [{"delta": {"content": "!"}}]}
{"id": "chatcmpl-B4l99MMaP3QLGU5uV7BaBM0eDS0jb","choices": [{"delta": {"content": " 😊"}}]}
```
📖 **What Happens?**
- Each line represents a **small fragment** of the model's streamed response.
- The **`delta.content` field** contains the progressively generated text.
##### 🔄 Example: Filtering Out Emojis from Streamed Data
```python
def stream(self, event: dict) -> dict:
for choice in event.get("choices", []):
delta = choice.get("delta", {})
if "content" in delta:
delta["content"] = delta["content"].replace("😊", "") # Strip emojis
return event
```
📖 **Before:** `"Hi 😊"`
📖 **After:** `"Hi"`
---
#### 4⃣ **`outlet` Function (Output Post-Processing)**
The `outlet` function is like a **proofreader**: tidy up the AI's response (or make final changes) *after it's processed by the LLM.*
📤 **Input**:
- **`body`**: This contains **all current messages** in the chat (user history + LLM replies).
🚀 **Your Task**: Modify this `body`. You can clean, append, or log changes, but be mindful of how each adjustment impacts the user experience.
💡 **Best Practices**:
- Prefer logging over direct edits in the outlet (e.g., for debugging or analytics).
- If heavy modifications are needed (like formatting outputs), consider using the **pipe function** instead.
💡 **Example Use Case**: Strip out sensitive API responses you don't want the user to see:
```python
def outlet(self, body: dict, __user__: Optional[dict] = None) -> dict:
for message in body["messages"]:
message["content"] = message["content"].replace("<API_KEY>", "[REDACTED]")
return body
```
---
## 🌟 Filters in Action: Building Practical Examples
Let's build some real-world examples to see how you'd use Filters!
### 📚 Example #1: Add Context to Every User Input
Want the LLM to always know it's assisting a customer in troubleshooting software bugs? You can add instructions like **"You're a software troubleshooting assistant"** to every user query.
```python
class Filter:
def inlet(self, body: dict, __user__: Optional[dict] = None) -> dict:
context_message = {
"role": "system",
"content": "You're a software troubleshooting assistant."
}
body.setdefault("messages", []).insert(0, context_message)
return body
```
---
### 📚 Example #2: Highlight Outputs for Easy Reading
Returning output in Markdown or another formatted style? Use the `outlet` function!
```python
class Filter:
def outlet(self, body: dict, __user__: Optional[dict] = None) -> dict:
# Add "highlight" markdown for every response
for message in body["messages"]:
if message["role"] == "assistant": # Target model response
message["content"] = f"**{message['content']}**" # Highlight with Markdown
return body
```
---
## 🚧 Potential Confusion: Clear FAQ 🛑
### **Q: How Are Filters Different From Pipe Functions?**
Filters modify data **going to** and **coming from models** but do not significantly interact with logic outside of these phases. Pipes, on the other hand:
- Can integrate **external APIs** or significantly transform how the backend handles operations.
- Expose custom logic as entirely new "models."
### **Q: Can I Do Heavy Post-Processing Inside `outlet`?**
You can, but **it's not best practice**:
- **Filters** are designed to make lightweight changes or apply logging.
- If heavy modifications are required, consider a **Pipe Function** instead.
---
## 🎉 Recap: Why Build Filter Functions?
By now, you've learned:
1. **Inlet** manipulates **user inputs** (pre-processing).
2. **Stream** intercepts and modifies **streamed model outputs** (real-time).
3. **Outlet** tweaks **AI outputs** (post-processing).
4. Filters are best for lightweight, real-time alterations to the data flow.
5. With **Valves**, you empower users to configure Filters dynamically for tailored behavior.
---
🚀 **Your Turn**: Start experimenting! What small tweak or context addition could elevate your Open WebUI experience? Filters are fun to build, flexible to use, and can take your models to the next level!
Happy coding! ✨

---
sidebar_position: 1
title: "Functions"
---
## 🚀 What Are Functions?
Functions are like **plugins** for Open WebUI. They help you **extend its capabilities**—whether it's adding support for new AI model providers like Anthropic or Vertex AI, tweaking how messages are processed, or introducing custom buttons to the interface for better usability.
Unlike external tools that may require complex integrations, **Functions are built-in and run within the Open WebUI environment.** That means they are fast, modular, and don't rely on external dependencies.
Think of Functions as **modular building blocks** that let you enhance how the WebUI works, tailored exactly to what you need. They're lightweight, highly customizable, and written in **pure Python**, so you have the freedom to create anything—from new AI-powered workflows to integrations with anything you use, like Google Search or Home Assistant.
---
## 🏗️ Types of Functions
There are **three types of Functions** in Open WebUI, each with a specific purpose. Let's break them down and explain exactly what they do:
---
### 1. [**Pipe Function** Create Custom "Agents/Models"](./pipe.mdx)
A **Pipe Function** is how you create **custom agents/models** or integrations, which then appear in the interface as if they were standalone models.
**What does it do?**
- Pipes let you define complex workflows. For instance, you could create a Pipe that sends data to **Model A** and **Model B**, processes their outputs, and combines the results into one finalized answer.
- Pipes don't even have to use AI! They can be setups for **search APIs**, **weather data**, or even systems like **Home Assistant**. Basically, anything you'd like to interact with can become part of Open WebUI.
**Use case example:**
Imagine you want to query Google Search directly from Open WebUI. You can create a Pipe Function that:
1. Takes your message as the search query.
2. Sends the query to Google Searchs API.
3. Processes the response and returns it to you inside the WebUI like a normal "model" response.
When enabled, **Pipe Functions show up as their own selectable model**. Use Pipes whenever you need custom functionality that works like a model in the interface.
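The search-Pipe idea above could be sketched like this (the `search` method is a placeholder stub, not a real API client):

```python
class Pipe:
    def __init__(self):
        self.name = "Search Pipe"  # hypothetical pipe name

    def search(self, query: str) -> list:
        # Placeholder standing in for a real search API request
        return [f"Result {i} for '{query}'" for i in (1, 2)]

    def pipe(self, body: dict) -> str:
        # 1. Take the latest user message as the search query
        query = body.get("messages", [{}])[-1].get("content", "")
        # 2. Send it to the (stubbed) search backend
        results = self.search(query)
        # 3. Return the formatted results as a normal "model" response
        return "\n".join(f"- {r}" for r in results)

reply = Pipe().pipe({"messages": [{"role": "user", "content": "open webui"}]})
```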
For a detailed guide, see [**Pipe Functions**](./pipe.mdx).
---
### 2. [**Filter Function** Modify Inputs and Outputs](./filter.mdx)
A **Filter Function** is like a tool for tweaking data before it gets sent to the AI **or** after it comes back.
**What does it do?**
Filters act as "hooks" in the workflow and have two main parts:
- **Inlet**: Adjust the input that is sent to the model. For example, adding additional instructions, keywords, or formatting tweaks.
- **Outlet**: Modify the output that you receive from the model. For instance, cleaning up the response, adjusting tone, or formatting data into a specific style.
**Use case example:**
Suppose you're working on a project that needs precise formatting. You can use a Filter to ensure:
1. Your input is always transformed into the required format.
2. The output from the model is cleaned up before being displayed.
Filters are **linked to specific models** or can be enabled for all models **globally**, depending on your needs.
Check out the full guide for more examples and instructions: [**Filter Functions**](./filter.mdx).
---
### 3. [**Action Function** Add Custom Buttons](./action.mdx)
An **Action Function** is used to add **custom buttons** to the chat interface.
**What does it do?**
Actions allow you to define **interactive shortcuts** that trigger specific functionality directly from the chat. These buttons appear underneath individual chat messages, giving you convenient, one-click access to the actions you define.
**Use case example:**
Let's say you often need to summarize long messages or generate specific outputs like translations. You can create an Action Function to:
1. Add a “Summarize” button under every incoming message.
2. When clicked, it triggers your custom function to process that message and return the summary.
Buttons provide a **clean and user-friendly way** to interact with extended functionality you define.
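A minimal sketch of such a "Summarize" action, using a naive first-sentence placeholder where a real summarization call would go:

```python
import asyncio

class Action:
    def __init__(self):
        self.name = "Summarize"  # hypothetical button label

    async def action(self, body: dict, **kwargs) -> dict:
        # Naive placeholder "summary": keep only the first sentence.
        # A real action would call a model or tool here instead.
        text = body.get("messages", [{}])[-1].get("content", "")
        summary = text.split(". ")[0]
        return {"content": f"Summary: {summary}"}

result = asyncio.run(
    Action().action({"messages": [{"content": "First point. Second point."}]})
)
```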
Learn how to set them up in the [**Action Functions Guide**](./action.mdx).
---
## 🛠️ How to Use Functions
Here's how to put Functions to work in Open WebUI:
### 1. **Install Functions**
You can install Functions via the Open WebUI interface or by importing them manually. You can find community-created functions on the [Open WebUI Community Site](https://openwebui.com/functions).
⚠️ **Be cautious.** Only install Functions from trusted sources. Running unknown code poses security risks.
---
### 2. **Enable Functions**
Functions must be explicitly enabled after installation:
- When you enable a **Pipe Function**, it becomes available as its own **model** in the interface.
- For **Filter** and **Action Functions**, enabling them isn't enough—you also need to assign them to specific models or enable them globally for all models.
---
### 3. **Assign Filters or Actions to Models**
- Navigate to `Workspace => Models` and assign your Filter or Action to the relevant model there.
- Alternatively, enable Functions for **all models globally** by going to `Workspace => Functions`, selecting the "..." menu, and toggling the **Global** switch.
---
### Quick Summary
- **Pipes** appear as standalone models you can interact with.
- **Filters** modify inputs/outputs for smoother AI interactions.
- **Actions** add clickable buttons to individual chat messages.
Once you've followed the setup process, Functions will seamlessly enhance your workflows.
---
## ✅ Why Use Functions?
Functions are designed for anyone who wants to **unlock new possibilities** with Open WebUI:
- **Extend**: Add new models or integrate with non-AI tools like APIs, databases, or smart devices.
- **Optimize**: Tweak inputs and outputs to fit your use case perfectly.
- **Simplify**: Add buttons or shortcuts to make the interface intuitive and efficient.
Whether you're customizing workflows for specific projects, integrating external data, or just making Open WebUI easier to use, Functions are the key to taking control of your instance.
---
### 📝 Final Notes:
1. Always install Functions from **trusted sources only**.
2. Make sure you understand the difference between Pipe, Filter, and Action Functions to use them effectively.
3. Explore the official guides:
- [Pipe Functions Guide](./pipe.mdx)
- [Filter Functions Guide](./filter.mdx)
- [Action Functions Guide](./action.mdx)
By leveraging Functions, you'll bring entirely new capabilities to your Open WebUI setup. Start experimenting today! 🚀

---
sidebar_position: 1
title: "Pipe Function"
---
# 🚰 Pipe Function: Create Custom "Agents/Models"
Welcome to this guide on creating **Pipes** in Open WebUI! Think of Pipes as a way of **adding** a new model to Open WebUI. In this document, we'll break down what a Pipe is, how it works, and how you can create your own to add custom logic and processing to your Open WebUI models. We'll use clear metaphors and go through every detail to ensure you have a comprehensive understanding.
## Introduction to Pipes
Imagine Open WebUI as a **plumbing system** where data flows through pipes and valves. In this analogy:
- **Pipes** are like **plugins** that let you introduce new pathways for data to flow, allowing you to inject custom logic and processing.
- **Valves** are the **configurable parts** of your pipe that control how data flows through it.
By creating a Pipe, you're essentially crafting a custom model with the specific behavior you want, all within the Open WebUI framework.
---
## Understanding the Pipe Structure
Let's start with a basic, barebones version of a Pipe to understand its structure:
```python
from pydantic import BaseModel, Field
class Pipe:
class Valves(BaseModel):
MODEL_ID: str = Field(default="")
def __init__(self):
self.valves = self.Valves()
def pipe(self, body: dict):
# Logic goes here
print(self.valves, body) # This will print the configuration options and the input body
return "Hello, World!"
```
### The Pipe Class
- **Definition**: The `Pipe` class is where you define your custom logic.
- **Purpose**: Acts as the blueprint for your plugin, determining how it behaves within Open WebUI.
### Valves: Configuring Your Pipe
- **Definition**: `Valves` is a nested class within `Pipe`, inheriting from `BaseModel`.
- **Purpose**: It contains the configuration options (parameters) that persist across the use of your Pipe.
- **Example**: In the above code, `MODEL_ID` is a configuration option with a default empty string.
**Metaphor**: Think of Valves as the knobs on a real-world pipe system that control the flow of water. In your Pipe, Valves allow users to adjust settings that influence how the data flows and is processed.
### The `__init__` Method
- **Definition**: The constructor method for the `Pipe` class.
- **Purpose**: Initializes the Pipe's state and sets up any necessary components.
- **Best Practice**: Keep it simple; primarily initialize `self.valves` here.
```python
def __init__(self):
self.valves = self.Valves()
```
### The `pipe` Function
- **Definition**: The core function where your custom logic resides.
- **Parameters**:
- `body`: A dictionary containing the input data.
- **Purpose**: Processes the input data using your custom logic and returns the result.
```python
def pipe(self, body: dict):
# Logic goes here
print(self.valves, body) # This will print the configuration options and the input body
return "Hello, World!"
```
**Note**: Always place `Valves` at the top of your `Pipe` class, followed by `__init__`, and then the `pipe` function. This structure ensures clarity and consistency.
---
## Creating Multiple Models with Pipes
What if you want your Pipe to create **multiple models** within Open WebUI? You can achieve this by defining a `pipes` function or variable inside your `Pipe` class. This setup, informally called a **manifold**, allows your Pipe to represent multiple models.
Here's how you can do it:
```python
from pydantic import BaseModel, Field
class Pipe:
class Valves(BaseModel):
MODEL_ID: str = Field(default="")
def __init__(self):
self.valves = self.Valves()
def pipes(self):
return [
{"id": "model_id_1", "name": "model_1"},
{"id": "model_id_2", "name": "model_2"},
{"id": "model_id_3", "name": "model_3"},
]
def pipe(self, body: dict):
# Logic goes here
print(self.valves, body) # Prints the configuration options and the input body
model = body.get("model", "")
return f"{model}: Hello, World!"
```
### Explanation
- **`pipes` Function**:
- Returns a list of dictionaries.
- Each dictionary represents a model with unique `id` and `name` keys.
- These models will show up individually in the Open WebUI model selector.
- **Updated `pipe` Function**:
- Processes input based on the selected model.
- In this example, it includes the model name in the returned string.
---
## Example: OpenAI Proxy Pipe
Let's dive into a practical example where we'll create a Pipe that proxies requests to the OpenAI API. This Pipe will fetch available models from OpenAI and allow users to interact with them through Open WebUI.
```python
from pydantic import BaseModel, Field
import requests
class Pipe:
class Valves(BaseModel):
NAME_PREFIX: str = Field(
default="OPENAI/",
description="Prefix to be added before model names.",
)
OPENAI_API_BASE_URL: str = Field(
default="https://api.openai.com/v1",
description="Base URL for accessing OpenAI API endpoints.",
)
OPENAI_API_KEY: str = Field(
default="",
description="API key for authenticating requests to the OpenAI API.",
)
def __init__(self):
self.valves = self.Valves()
def pipes(self):
if self.valves.OPENAI_API_KEY:
try:
headers = {
"Authorization": f"Bearer {self.valves.OPENAI_API_KEY}",
"Content-Type": "application/json",
}
r = requests.get(
f"{self.valves.OPENAI_API_BASE_URL}/models", headers=headers
)
models = r.json()
return [
{
"id": model["id"],
"name": f'{self.valves.NAME_PREFIX}{model.get("name", model["id"])}',
}
for model in models["data"]
if "gpt" in model["id"]
]
except Exception as e:
return [
{
"id": "error",
"name": "Error fetching models. Please check your API Key.",
},
]
else:
return [
{
"id": "error",
"name": "API Key not provided.",
},
]
def pipe(self, body: dict, __user__: dict):
print(f"pipe:{__name__}")
headers = {
"Authorization": f"Bearer {self.valves.OPENAI_API_KEY}",
"Content-Type": "application/json",
}
# Extract model id from the model name
model_id = body["model"][body["model"].find(".") + 1 :]
# Update the model id in the body
payload = {**body, "model": model_id}
try:
r = requests.post(
url=f"{self.valves.OPENAI_API_BASE_URL}/chat/completions",
json=payload,
headers=headers,
stream=True,
)
r.raise_for_status()
if body.get("stream", False):
return r.iter_lines()
else:
return r.json()
except Exception as e:
return f"Error: {e}"
```
### Detailed Breakdown
#### Valves Configuration
- **`NAME_PREFIX`**:
- Adds a prefix to the model names displayed in Open WebUI.
- Default: `"OPENAI/"`.
- **`OPENAI_API_BASE_URL`**:
- Specifies the base URL for the OpenAI API.
- Default: `"https://api.openai.com/v1"`.
- **`OPENAI_API_KEY`**:
- Your OpenAI API key for authentication.
- Default: `""` (empty string; must be provided).
#### The `pipes` Function
- **Purpose**: Fetches available OpenAI models and makes them accessible in Open WebUI.
- **Process**:
1. **Check for API Key**: Ensures that an API key is provided.
2. **Fetch Models**: Makes a GET request to the OpenAI API to retrieve available models.
3. **Filter Models**: Returns models that have `"gpt"` in their `id`.
4. **Error Handling**: If there's an issue, returns an error message.
- **Return Format**: A list of dictionaries with `id` and `name` for each model.
#### The `pipe` Function
- **Purpose**: Handles the request to the selected OpenAI model and returns the response.
- **Parameters**:
- `body`: Contains the request data.
- `__user__`: Contains user information (not used in this example but can be useful for authentication or logging).
- **Process**:
1. **Prepare Headers**: Sets up the headers with the API key and content type.
2. **Extract Model ID**: Extracts the actual model ID from the selected model name.
3. **Prepare Payload**: Updates the body with the correct model ID.
4. **Make API Request**: Sends a POST request to the OpenAI API's chat completions endpoint.
5. **Handle Streaming**: If `stream` is `True`, returns an iterable of lines.
6. **Error Handling**: Catches exceptions and returns an error message.
### Extending the Proxy Pipe
You can modify this proxy Pipe to support additional service providers like Anthropic, Perplexity, and more by adjusting the API endpoints, headers, and logic within the `pipes` and `pipe` functions.
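To give a feel for the kind of change involved, here is a hedged sketch (not a complete Pipe) of a per-provider header helper. Anthropic's Messages API authenticates with an `x-api-key` header and requires an `anthropic-version` header, whereas OpenAI uses a bearer token; the helper name is illustrative:

```python
def build_headers(provider: str, api_key: str) -> dict:
    """Illustrative helper: per-provider auth headers for a proxy Pipe."""
    if provider == "openai":
        return {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        }
    if provider == "anthropic":
        # Anthropic's API uses x-api-key plus a required version header.
        return {
            "x-api-key": api_key,
            "anthropic-version": "2023-06-01",
            "Content-Type": "application/json",
        }
    raise ValueError(f"Unknown provider: {provider}")
```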
---
## Using Internal Open WebUI Functions
Sometimes, you may want to leverage the internal functions of Open WebUI within your Pipe. You can import these functions directly from the `open_webui` package. Keep in mind that while unlikely, internal functions may change for optimization purposes, so always refer to the latest documentation.
Here's how you can use internal Open WebUI functions:
```python
from pydantic import BaseModel, Field
from fastapi import Request
from open_webui.models.users import Users
from open_webui.utils.chat import generate_chat_completion
class Pipe:
def __init__(self):
pass
async def pipe(
self,
body: dict,
__user__: dict,
__request__: Request,
) -> str:
# Use the unified endpoint with the updated signature
user = Users.get_user_by_id(__user__["id"])
body["model"] = "llama3.2:latest"
return await generate_chat_completion(__request__, body, user)
```
### Explanation
- **Imports**:
- `Users` from `open_webui.models.users`: To fetch user information.
- `generate_chat_completion` from `open_webui.utils.chat`: To generate chat completions using internal logic.
- **Asynchronous `pipe` Function**:
- **Parameters**:
- `body`: Input data for the model.
- `__user__`: Dictionary containing user information.
- `__request__`: The request object from FastAPI (required by `generate_chat_completion`).
- **Process**:
1. **Fetch User Object**: Retrieves the user object using their ID.
2. **Set Model**: Specifies the model to be used.
3. **Generate Completion**: Calls `generate_chat_completion` to process the input and produce an output.
### Important Notes
- **Function Signatures**: Refer to the latest Open WebUI codebase or documentation for the most accurate function signatures and parameters.
- **Best Practices**: Always handle exceptions and errors gracefully to ensure a smooth user experience.
---
## Frequently Asked Questions
### Q1: Why should I use Pipes in Open WebUI?
**A**: Pipes let you add new "models" with custom logic and processing to Open WebUI. It's a flexible plugin system that lets you integrate external APIs, customize model behaviors, and create innovative features without altering the core codebase.
---
### Q2: What are Valves, and why are they important?
**A**: Valves are the configurable parameters of your Pipe. They function like settings or controls that determine how your Pipe operates. By adjusting Valves, you can change the behavior of your Pipe without modifying the underlying code.
---
### Q3: Can I create a Pipe without Valves?
**A**: Yes, you can create a simple Pipe without defining a Valves class if your Pipe doesn't require any persistent configuration options. However, including Valves is a good practice for flexibility and future scalability.
---
### Q4: How do I ensure my Pipe is secure when using API keys?
**A**: Never hard-code sensitive information like API keys into your Pipe. Instead, use Valves to input and store API keys securely. Ensure that your code handles these keys appropriately and avoids logging or exposing them.
---
### Q5: What is the difference between the `pipe` and `pipes` functions?
**A**:
- **`pipe` Function**: The primary function where you process the input data and generate an output. It handles the logic for a single model.
- **`pipes` Function**: Allows your Pipe to represent multiple models by returning a list of model definitions. Each model will appear individually in Open WebUI.
---
### Q6: How can I handle errors in my Pipe?
**A**: Use try-except blocks within your `pipe` and `pipes` functions to catch exceptions. Return meaningful error messages or handle the errors gracefully to ensure the user is informed about what went wrong.
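As a sketch of that pattern, here is a stripped-down function in the same spirit (the function name is illustrative, and the inner call stands in for whatever API request your Pipe actually makes):

```python
def safe_pipe(body: dict) -> str:
    """Sketch: wrap a Pipe's work in try/except and return readable errors."""
    try:
        model = body["model"]  # a KeyError here is caught below
        # ... call the upstream API with `model` and the rest of `body` ...
        return f"ok: would call {model}"
    except KeyError as e:
        return f"Error: missing required field {e}"
    except Exception as e:
        return f"Error: {e}"

print(safe_pipe({}))                   # Error: missing required field 'model'
print(safe_pipe({"model": "gpt-4o"}))  # ok: would call gpt-4o
```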
---
### Q7: Can I use external libraries in my Pipe?
**A**: Yes, you can import and use external libraries as needed. Ensure that any dependencies are properly installed and managed within your environment.
---
### Q8: How do I test my Pipe?
**A**: Test your Pipe by running Open WebUI in a development environment and selecting your custom model from the interface. Validate that your Pipe behaves as expected with various inputs and configurations.
---
### Q9: Are there any best practices for organizing my Pipe's code?
**A**: Yes, follow these guidelines:
- Keep `Valves` at the top of your `Pipe` class.
- Initialize variables in the `__init__` method, primarily `self.valves`.
- Place the `pipe` function after the `__init__` method.
- Use clear and descriptive variable names.
- Comment your code for clarity.
---
### Q10: Where can I find the latest Open WebUI documentation?
**A**: Visit the official Open WebUI repository or documentation site for the most up-to-date information, including function signatures, examples, and migration guides if any changes occur.
---
## Conclusion
By now, you should have a thorough understanding of how to create and use Pipes in Open WebUI. Pipes offer a powerful way to extend and customize the capabilities of Open WebUI to suit your specific needs. Whether you're integrating external APIs, adding new models, or injecting complex logic, Pipes provide the flexibility to make it happen.
Remember to:
- **Use clear and consistent structure** in your Pipe classes.
- **Leverage Valves** for configurable options.
- **Handle errors gracefully** to improve the user experience.
- **Consult the latest documentation** for any updates or changes.
Happy coding, and enjoy extending your Open WebUI with Pipes!

---
sidebar_position: 300
title: "Tools & Functions (Plugins)"
---
# 🛠️ Tools & Functions
Imagine you've just stumbled upon Open WebUI, or maybe you're already using it, but you're a bit lost with all the talk about "Tools", "Functions", and "Pipelines". Everything sounds like some mysterious tech jargon, right? No worries! Let's break it down piece by piece, super clearly, step by step. By the end of this, you'll have a solid understanding of what these terms mean, how they work, and why it's not as complicated as it seems.
## TL;DR
- **Tools** extend the abilities of LLMs, allowing them to collect real-world, real-time data like weather, stock prices, etc.
- **Functions** extend the capabilities of the Open WebUI itself, enabling you to add new AI model support (like Anthropic or Vertex AI) or improve usability (like creating custom buttons or filters).
- **Pipelines** are more for advanced users who want to transform Open WebUI features into API-compatible workflows—mainly for offloading heavy processing.
Getting started with Tools and Functions is easy because everything's already built into the core system! You just **click a button** and **import these features directly from the community**, so there's no coding or deep technical work required.
## What are "Tools" and "Functions"?
Let's start by thinking of **Open WebUI** as a "base" software that can do many tasks related to using Large Language Models (LLMs). But sometimes, you need extra features or abilities that don't come *out of the box*—this is where **tools** and **functions** come into play.
### Tools
**Tools** are an exciting feature because they allow LLMs to do more than just process text. They provide **external abilities** that LLMs wouldn't otherwise have on their own.
#### Example of a Tool:
Imagine you're chatting with an LLM and you want it to give you the latest weather update or stock prices in real time. Normally, the LLM can't do that because it's just working on pre-trained knowledge. This is where **tools** come in!
- **Tools are like plugins** that the LLM can use to gather **real-world, real-time data**. So, with a "weather tool" enabled, the model can go out on the internet, gather live weather data, and display it in your conversation.
Tools are essentially **abilities** you're giving your AI to help it interact with the outside world. By adding these, the LLM can "grab" useful information or perform specialized tasks based on the context of the conversation.
#### Examples of Tools (extending an LLM's abilities):
1. **Real-time weather predictions** 🛰️.
2. **Stock price retrievers** 📈.
3. **Flight tracking information** ✈️.
### Functions
While **tools** are used by the AI during a conversation, **functions** help extend or customize the capabilities of Open WebUI itself. Imagine tools are like adding new ingredients to a dish, and functions are the process you use to control the kitchen! 🚪
#### Let's break that down:
- **Functions** give you the ability to tweak or add **features** inside **Open WebUI** itself.
- You're not giving new abilities to the LLM, but instead, you're extending the **interface, behavior, or logic** of the platform itself!
For instance, maybe you want to:
1. Add a new AI model like **Anthropic** to the WebUI.
2. Create a custom button in your toolbar that performs a frequently used command.
3. Implement a better **filter** function that catches inappropriate or **spammy messages** from the incoming text.
Without functions, these would all be out of reach. But with this framework in Open WebUI, you can easily extend these features!
### Where to Find and Manage Functions
Functions are not located in the same place as Tools.
- **Tools** are about model access and live in your **Workspace tabs** (where you add models, prompts, and knowledge collections). They can be added by users if granted permissions.
- **Functions** are about **platform customization** and are found in the **Admin Panel**.
They are configured and managed only by admins who want to extend the platform interface or behavior for all users.
### Summary of Differences:
- **Tools** are things that allow LLMs to **do more things** outside their default abilities (such as retrieving live info or performing custom tasks based on external data).
- **Functions** help the WebUI itself **do more things**, like adding new AI models or creating smarter ways to filter data.
Both are designed to be **pluggable**, meaning you can easily import them into your system with just one click from the community! 🎉 You won't have to spend hours coding or tinkering with them.
## What are Pipelines?
And then, we have **Pipelines**… Here's where things start to sound pretty technical—but don't despair.
**Pipelines** are part of an Open WebUI initiative focused on making every piece of the WebUI **interoperable with OpenAI's API system**. Essentially, they extend what both **Tools** and **Functions** can already do, but now with even more flexibility. They allow you to turn features into OpenAI API-compatible formats. 🧠
### But here's the thing…
You likely **won't need** pipelines unless you're dealing with super-advanced setups.
- **Who are pipelines for?** Typically, **experts** or people running more complicated use cases.
- **When do you need them?** If you're trying to offload processing from your primary Open WebUI instance to a different machine (so you don't overload your primary system).
In most cases, as a beginner or even an intermediate user, you won't have to worry about pipelines. Just focus on enjoying the benefits that **tools** and **functions** bring to your Open WebUI experience!
## Want to Try? 🚀
Jump into Open WebUI, head over to the community section, and try importing a tool like **weather updates** or maybe adding a new feature to the toolbar with a function. Exploring these tools will show you how powerful and flexible Open WebUI can be!
🌟 There's always more to learn, so stay curious and keep experimenting!

---
sidebar_position: 9999
title: "Migrating Tools & Functions: 0.4 to 0.5"
---
# 🚚 Migration Guide: Open WebUI 0.4 to 0.5
Welcome to the Open WebUI 0.5 migration guide! If you're working on existing projects or building new ones, this guide will walk you through the key changes from **version 0.4 to 0.5** and provide an easy-to-follow roadmap for upgrading your Functions. Let's make this transition as smooth as possible! 😊
---
## 🧐 What Has Changed and Why?
With Open WebUI 0.5, we've overhauled the architecture to make the project **simpler, more unified, and scalable**. Here's the big picture:
- **Old Architecture:** 🎯 Previously, Open WebUI was built on a **sub-app architecture** where each app (e.g., `ollama`, `openai`) was a separate FastAPI application. This caused fragmentation and extra complexity when managing apps.
- **New Architecture:** 🚀 With version 0.5, we have transitioned to a **single FastAPI app** with multiple **routers**. This means better organization, centralized flow, and reduced redundancy.
### Key Changes:
Here's an overview of what changed:
1. **Apps have been moved to Routers.**
- Previous: `open_webui.apps`
- Now: `open_webui.routers`
2. **Main app structure simplified.**
- The old `open_webui.apps.webui` has been transformed into `open_webui.main`, making it the central entry point for the project.
3. **Unified API Endpoint**
- Open WebUI 0.5 introduces a **unified function**, `chat_completion`, in `open_webui.main`, replacing separate functions for models like `ollama` and `openai`. This offers a consistent and streamlined API experience. However, the **direct successor** of these individual functions is `generate_chat_completion` from `open_webui.utils.chat`. If you prefer a lightweight POST request without handling additional parsing (e.g., files, tools, or misc), this utility function is likely what you want.
#### Example:
```python
# Full API flow with parsing (new function):
from open_webui.main import chat_completion
# Lightweight, direct POST request (direct successor):
from open_webui.utils.chat import generate_chat_completion
```
Choose the approach that best fits your use case!
4. **Updated Function Signatures.**
- Function signatures now adhere to a new format, requiring a `request` object.
- The `request` object can be obtained using the `__request__` parameter in the function signature. Below is an example:
```python
class Pipe:
def __init__(self):
pass
async def pipe(
self,
body: dict,
__user__: dict,
__request__: Request, # New parameter
) -> str:
# Write your function here
```
📌 **Why did we make these changes?**
- To simplify the codebase, making it easier to extend and maintain.
- To unify APIs for a more streamlined developer experience.
- To enhance performance by consolidating redundant elements.
---
## ✅ Step-by-Step Migration Guide
Follow this guide to smoothly update your project.
---
### 🔄 1. Shifting from `apps` to `routers`
All apps have been renamed and relocated under `open_webui.routers`. This affects imports in your codebase.
Quick changes for import paths:
| **Old Path** | **New Path** |
|-----------------------------------|-----------------------------------|
| `open_webui.apps.ollama` | `open_webui.routers.ollama` |
| `open_webui.apps.openai` | `open_webui.routers.openai` |
| `open_webui.apps.audio` | `open_webui.routers.audio` |
| `open_webui.apps.retrieval` | `open_webui.routers.retrieval` |
| `open_webui.apps.webui` | `open_webui.main` |
### 📜 An Important Example
To clarify the special case of the main app (`webui`), here's a simple rule of thumb:
- **Was in `webui`?** It's now in the project's root or `open_webui.main`.
- For example:
- **Before (0.4):**
```python
from open_webui.apps.webui.models import SomeModel
```
- **After (0.5):**
```python
from open_webui.models import SomeModel
```
In general, **just replace `open_webui.apps` with `open_webui.routers`—except for `webui`, which is now `open_webui.main`!**
---
### 👩‍💻 2. Updating Import Statements
Let's look at what this update looks like in your code:
#### Before:
```python
from open_webui.apps.ollama import main as ollama
from open_webui.apps.openai import main as openai
```
#### After:
```python
# Separate router imports
from open_webui.routers.ollama import generate_chat_completion
from open_webui.routers.openai import generate_chat_completion
# Or use the unified endpoint
from open_webui.main import chat_completion
```
:::tip
Prioritize the unified endpoint (`chat_completion`) for simplicity and future compatibility.
:::
### 📝 **Additional Note: Choosing Between `main.chat_completion` and `utils.chat.generate_chat_completion`**
Depending on your use case, you can choose between:
1. **`open_webui.main.chat_completion`:**
- Simulates making a POST request to `/api/chat/completions`.
- Processes files, tools, and other miscellaneous tasks.
- Best when you want the complete API flow handled automatically.
2. **`open_webui.utils.chat.generate_chat_completion`:**
- Directly makes a POST request without handling extra parsing or tasks.
- This is the **direct successor** to the previous `main.generate_chat_completions`, `ollama.generate_chat_completion` and `openai.generate_chat_completion` functions in Open WebUI 0.4.
- Best for simplified and more lightweight scenarios.
#### Example:
```python
# Use this for the full API flow with parsing:
from open_webui.main import chat_completion
# Use this for a stripped-down, direct POST request:
from open_webui.utils.chat import generate_chat_completion
```
---
### 📋 3. Adapting to Updated Function Signatures
We've updated the **function signatures** to better fit the new architecture. If you're looking for a direct replacement, start with the lightweight utility function `generate_chat_completion` from `open_webui.utils.chat`. For the full API flow, use the new unified `chat_completion` function in `open_webui.main`.
#### Function Signature Changes:
| **Old** | **Direct Successor (New)** | **Unified Option (New)** |
|-----------------------------------------|-----------------------------------------|-----------------------------------------|
| `openai.generate_chat_completion(form_data: dict, user: UserModel)` | `generate_chat_completion(request: Request, form_data: dict, user: UserModel)` | `chat_completion(request: Request, form_data: dict, user: UserModel)` |
- **Direct Successor (`generate_chat_completion`)**: A lightweight, 1:1 replacement for previous `ollama`/`openai` methods.
- **Unified Option (`chat_completion`)**: Use this for the complete API flow, including file parsing and additional functionality.
If you're adopting the new signatures, the refactoring example below shows how your function should look now.
### 🛠️ How to Refactor Your Custom Function
Let's rewrite a sample function to match the new structure:
#### Before (0.4):
```python
from pydantic import BaseModel
from open_webui.apps.ollama import main as ollama
class User(BaseModel):
id: str
email: str
name: str
role: str
class Pipe:
def __init__(self):
pass
async def pipe(self, body: dict, __user__: dict) -> str:
# Calls the Ollama chat completion endpoint
user = User(**__user__)
body["model"] = "llama3.2:latest"
return await ollama.generate_chat_completion(body, user)
```
#### After (0.5):
```python
from pydantic import BaseModel
from fastapi import Request
from open_webui.utils.chat import generate_chat_completion
class User(BaseModel):
id: str
email: str
name: str
role: str
class Pipe:
def __init__(self):
pass
async def pipe(
self,
body: dict,
__user__: dict,
__request__: Request,
) -> str:
# Uses the unified endpoint with updated signature
user = User(**__user__)
body["model"] = "llama3.2:latest"
return await generate_chat_completion(__request__, body, user)
```
### Important Notes:
- You must pass a `Request` object (`__request__`) in the new function signature.
- Other optional parameters (like `__user__` and `__event_emitter__`) ensure flexibility for more complex use cases.
---
### 🌟 4. Recap: Key Concepts in Simple Terms
Here's a quick cheat sheet to remember:
- **Apps to Routers:** Update all imports from `open_webui.apps` ➡️ `open_webui.routers`.
- **Unified Endpoint:** Use `open_webui.main.chat_completion` for simplicity if both `ollama` and `openai` are involved.
- **Adapt Function Signatures:** Ensure your functions pass the required `request` object.
---
## 🎉 Hooray! You're Ready!
That's it! You've successfully migrated from **Open WebUI 0.4 to 0.5**. By refactoring your imports, using the unified endpoint, and updating function signatures, you'll be fully equipped to leverage the latest features and improvements in version 0.5.
---
💬 **Questions or Feedback?**
If you run into any issues or have suggestions, feel free to open a [GitHub issue](https://github.com/open-webui/open-webui) or ask in the community forums!
Happy coding! ✨

---
sidebar_position: 2
title: "Tools"
---
# ⚙️ What are Tools?
Tools are small Python scripts that add superpowers to your LLM. When enabled, they allow your chatbot to do amazing things — like search the web, scrape data, generate images, talk back using AI voices, and more.
Think of Tools as useful plugins that your AI can use when chatting with you.
---
## 🚀 What Can Tools Help Me Do?
Here are just a few examples of what Tools let your AI assistant do:
- 🌍 Web Search: Get real-time answers by searching the internet.
- 🖼️ Image Generation: Create images from your prompts.
- 🔊 Voice Output: Generate AI voices using ElevenLabs.
Explore ready-to-use tools in the 🧰 [Tools Showcase](https://openwebui.com/tools)
---
## 📦 How to Install Tools
There are two easy ways to install Tools in Open WebUI:
1. Go to [Community Tool Library](https://openwebui.com/tools)
2. Choose a Tool, then click the Get button.
3. Enter your Open WebUI instance's IP address or URL.
4. Click “Import to WebUI” — done!
:::warning
Safety Tip: Never import a Tool you don't recognize or trust. These are Python scripts and might run unsafe code.
:::
---
## 🔧 How to Use Tools in Open WebUI
Once you've installed Tools (as shown above), here's how to enable and use them:
You have two ways to enable a Tool for your model:
### Option 1: Enable from the Chat Window
While chatting, click the icon in the input area. You'll see a list of available Tools — you can enable any of them on the fly for that session.
:::tip
Tip: Enabling a Tool gives the model permission to use it — but it may not use it unless it's useful for the task.
:::
### ✏️ Option 2: Enable by Default (Recommended for Frequent Use)
1. Go to: Workspace ➡️ Models
2. Choose the model you're using (like GPT-4 or LLaMa2) and click the ✏️ edit icon.
3. Scroll down to the “Tools” section.
4. ✅ Check the Tools you want your model to have access to by default.
5. Click Save.
This ensures the model always has these Tools ready to use whenever you chat with it.
You can also let your LLM auto-select the right Tools using the AutoTool Filter:
🔗 [AutoTool Filter](https://openwebui.com/f/hub/autotool_filter/)
🎯 Note: Even when using AutoTool, you still need to enable your Tools using Option 2.
✅ And thats it — your LLM is now Tool-powered! You're ready to supercharge your chats with web search, image generation, voice output, and more.
---
## 🧠 Choosing How Tools Are Used: Default vs Native
Once Tools are enabled for your model, Open WebUI gives you two different ways to let your LLM use them in conversations.
You can decide how the model should call Tools by choosing between:
- 🟡 Default Mode (Prompt-based)
- 🟢 Native Mode (Built-in function calling)
Let's break it down:
### 🟡 Default Mode (Prompt-based Tool Triggering)
This is the default setting in Open WebUI.
Here, your LLM doesn't need to natively support function calling. Instead, we guide the model with a smart tool-selection prompt template to select and use a Tool.
✅ Works with almost any model
✅ Great way to unlock Tools with basic or local models
❗ Not as reliable or flexible as Native Mode when chaining tools
### 🟢 Native Mode (Function Calling Built-In)
If your model does support “native” function calling (like GPT-4o or GPT-3.5-turbo-1106), you can use this powerful mode to let the LLM decide — in real time — when and how to call multiple Tools during a single chat message.
✅ Fast, accurate, and can chain multiple Tools in one response
✅ The most natural and advanced experience
❗ Requires a model that actually supports native function calling
### ✳️ How to Switch Between Modes
Want to enable native function calling in your chats? Here's how:
![Chat Controls](/images/features/plugin/tools/chat-controls.png)
1. Open the chat window with your model.
2. Click ⚙️ Chat Controls > Advanced Params.
3. Look for the Function Calling setting and switch it from Default → Native
That's it! Your chat is now using true native Tool support (as long as the model supports it).
➡️ We recommend using GPT-4o or another OpenAI model for the best native function-calling experience.
🔎 Some local models may claim support, but often struggle with accurate or complex Tool usage.
💡 Summary:
| Mode | Who it's for | Pros | Cons |
|----------|----------------------------------|-----------------------------------------|--------------------------------------|
| Default | Any model | Broad compatibility, safer, flexible | May be less accurate or slower |
| Native | GPT-4o, etc. | Fast, smart, excellent tool chaining | Needs proper function call support |
Choose the one that works best for your setup — and remember, you can always switch on the fly via Chat Controls.
👏 And that's it — your LLM now knows how and when to use Tools, intelligently.
---
## 🧠 Summary
Tools are add-ons that help your AI model do much more than just chat. From answering real-time questions to generating images or speaking out loud — Tools bring your AI to life.
- Visit: [https://openwebui.com/tools](https://openwebui.com/tools) to discover new Tools.
- Install them manually or with one-click.
- Enable them per model from Workspace ➡️ Models.
- Use them in chat by clicking the icon in the input area.
Now go make your AI waaaaay smarter 🤖✨

---
sidebar_position: 10
title: "FAQ"
---
#### 🌐 Q: Why isn't my local OpenAPI tool server accessible from the WebUI interface?
**A:** If your tool server is running locally (e.g., http://localhost:8000), browser-based clients may be restricted from accessing it due to CORS (Cross-Origin Resource Sharing) policies.
Make sure to explicitly enable CORS headers in your OpenAPI server. For example, if you're using FastAPI, you can add:
```python
from fastapi.middleware.cors import CORSMiddleware
app.add_middleware(
CORSMiddleware,
allow_origins=["*"], # or specify your client origin
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
```
Also, if Open WebUI is served over HTTPS (e.g., https://yourdomain.com), your local server must meet one of the following conditions:
- Be accessed from the same domain using HTTPS (e.g., https://localhost:8000).
- OR run on localhost (127.0.0.1) to allow browsers to relax security for local development.
- Otherwise, browsers may block insecure requests from HTTPS pages to HTTP APIs due to mixed-content rules.
To work securely in production over HTTPS, your OpenAPI servers must also be served over HTTPS.
---
#### 🚀 Q: Do I need to use FastAPI for my server implementation?
**A:** No! While our reference implementations are written using FastAPI for clarity and ease of use, you can use any framework or language that produces a valid OpenAPI (Swagger) specification. Some common choices include:
- FastAPI (Python)
- Flask + Flask-RESTX (Python)
- Express + Swagger UI (JavaScript/Node)
- Spring Boot (Java)
- Go with Swag or Echo
The key is to ensure your server exposes a valid OpenAPI schema, and that it communicates over HTTP(S).
It is important to set a custom operationId for all endpoints.
---
#### 🚀 Q: Why choose OpenAPI over MCP?
**A:** OpenAPI wins over MCP in most real-world scenarios due to its simplicity, tooling ecosystem, stability, and developer-friendliness. Here's why:
- ✅ **Reuse Your Existing Code**: If you've built REST APIs before, you're mostly done—you don't need to rewrite your logic. Just define a compliant OpenAPI spec and expose your current code as a tool server.
With MCP, you had to reimplement your tool logic inside a custom protocol layer, duplicating work and increasing the surface area to maintain.
- 💼 **Less to Maintain & Debug**: OpenAPI fits naturally into modern dev workflows. You can test endpoints with Postman, inspect logs with built-in APIs, troubleshoot easily with mature ecosystem tools—and often without modifying your core app at all.
MCP introduced new layers of transport, schema parsing, and runtime quirks, all of which had to be debugged manually.
- 🌍 **Standards-Based**: OpenAPI is widely adopted across the tech industry. Its well-defined structure means tools, agents, and servers can interoperate immediately, without needing special bridges or translations.
- 🧰 **Better Tooling**: There's an entire universe of tools that support OpenAPI—automatic client/server generation, documentation, validation, mocking, testing, and even security audit tools.
- 🔐 **First-Class Security Support**: OpenAPI includes native support for things like OAuth2, JWTs, API Keys, and HTTPS—making it easier to build secure endpoints with common libraries and standards.
- 🧠 **More Devs Already Know It**: Using OpenAPI means you're speaking a language already familiar to backend teams, frontend developers, DevOps, and product engineers. There's no learning curve or costly onboarding required.
- 🔄 **Future-Proof & Extensible**: OpenAPI evolves with API standards and remains forward-compatible. MCP, by contrast, was bespoke and experimental—often requiring changes as the surrounding ecosystem changed.
🧵 Bottom line: OpenAPI lets you do more with less effort, less code duplication, and fewer surprises. It's a production-ready, developer-friendly route to powering LLM tools—without rebuilding everything from scratch.
---
#### 🔐 Q: How do I secure my OpenAPI tool server?
**A:** OpenAPI supports industry-standard security mechanisms like:
- OAuth 2.0
- API Key headers
- JWT (JSON Web Token)
- Basic Auth
Use HTTPS in production to encrypt data in transit, and restrict endpoints with proper auth/authz methods as appropriate. You can incorporate these directly in your OpenAPI schema using the securitySchemes field.
---
#### ❓ Q: What kind of tools can I build using OpenAPI tool servers?
**A:** If it can be exposed via a REST API, you can build it. Common tool types include:
- Filesystem operations (read/write files, list directories)
- Git and document repository access
- Database querying or schema exploration
- Web scrapers or summarizers
- External SaaS integrations (e.g., Salesforce, Jira, Slack)
- LLM-attached memory stores / RAG components
- Secure internal microservices exposed to your agent
---
#### 🔌 Q: Can I run more than one tool server at the same time?
**A:** Absolutely. Each tool server runs independently and exposes its own OpenAPI schema. Your agent configuration can point to multiple tool servers, allowing you to mix and match based on need.
There's no limit—just ensure each server runs on its own port or address and is reachable by the agent host.
---
#### 🧪 Q: How do I test a tool server before linking it to an LLM agent?
**A:** You can test your OpenAPI tool servers using:
- Swagger UI or ReDoc (built into FastAPI by default)
- Postman or Insomnia
- curl or httpie from the command line
- Python's `requests` module
- OpenAPI validators and mockers
Once validated, you can register the tool server with an LLM agent or through Open WebUI.
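As a lightweight starting point, a few lines of Python can sanity-check the schema a server publishes at `/openapi.json`. The checks below are minimal and illustrative, not a full validator:

```python
def sanity_check_openapi(schema: dict) -> list:
    """Return a list of problems found in a parsed OpenAPI schema (sketch)."""
    problems = []
    if "openapi" not in schema:
        problems.append("missing 'openapi' version field")
    if not schema.get("paths"):
        problems.append("no paths defined")
    for path, ops in schema.get("paths", {}).items():
        for method, op in ops.items():
            if "operationId" not in op:
                problems.append(f"{method.upper()} {path}: missing operationId")
    return problems

# In practice you would fetch the schema first, e.g.:
#   schema = requests.get("http://localhost:8000/openapi.json").json()
demo = {"openapi": "3.1.0",
        "paths": {"/weather": {"get": {"operationId": "get_weather"}}}}
print(sanity_check_openapi(demo))  # []
```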
---
#### 🛠️ Q: Can I extend or customize the reference servers?
**A:** Yes! All servers in the servers/ directory are built to be simple templates. Fork and modify them to:
- Add new endpoints and business logic
- Integrate authentication
- Change response formats
- Connect to new services or internal APIs
- Deploy via Docker, Kubernetes, or any cloud host
---
#### 🌍 Q: Can I run OpenAPI tool servers on cloud platforms like AWS or GCP?
**A:** Yes. These servers are plain HTTP services. You can deploy them as:
- AWS Lambda with API Gateway (serverless)
- EC2 or GCP Compute Engine instances
- Kubernetes services in GKE/EKS/AKS
- Cloud Run or App Engine
- Render, Railway, Heroku, etc.
Just make sure they're securely configured and reachable by the agent or user, whether publicly or over a VPN.
---
#### 🧪 Q: What if I have an existing MCP server?
**A:** Great news! You can use our MCP-to-OpenAPI bridge, [mcpo](https://github.com/open-webui/mcpo), to expose your existing MCP-based tools as OpenAPI-compatible APIs. No rewrites, no headaches: just plug and go! 🚀
If you've already built tools using the MCP protocol, `mcpo` helps you instantly unlock compatibility with Open WebUI and any OpenAPI-based agent — ensuring your hard work remains fully accessible and future-ready.
[Check out the optional Bridge to MCP section in the docs for setup instructions.](https://github.com/open-webui/openapi-servers?tab=readme-ov-file#-bridge-to-mcp-optional)
**Quick Start:**
```bash
uvx mcpo --port 8000 -- uvx mcp-server-time --local-timezone=America/New_York
```
✨ That's it: your MCP server is now OpenAPI-ready!
---
#### 🗂️ Q: Can one OpenAPI server implement multiple tools?
**A:** Yes. A single OpenAPI server can offer multiple related capabilities grouped under different endpoints. For example, a document server may provide search, upload, OCR, and summarization—all within one schema.
You can also modularize completely by creating one OpenAPI server per tool if you prefer isolation and flexibility.
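As an illustration, the `paths` section of such a document server's schema might look like this (the endpoint names are hypothetical):

```json
{
  "paths": {
    "/search": { "post": { "summary": "Full-text search over documents" } },
    "/upload": { "post": { "summary": "Upload a new document" } },
    "/ocr": { "post": { "summary": "Run OCR on an uploaded document" } },
    "/summarize": { "post": { "summary": "Summarize a document" } }
  }
}
```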
---
🙋 Have more questions? Visit the GitHub discussions for help and feedback from the community:
👉 [Community Discussions](https://github.com/open-webui/openapi-servers/discussions)
---
sidebar_position: 400
title: "OpenAPI Tool Servers"
---
import { TopBanners } from "@site/src/components/TopBanners";
<TopBanners />
# 🌟 OpenAPI Tool Servers
This repository provides reference OpenAPI Tool Server implementations making it easy and secure for developers to integrate external tooling and data sources into LLM agents and workflows. Designed for maximum ease of use and minimal learning curve, these implementations utilize the widely adopted and battle-tested [OpenAPI specification](https://www.openapis.org/) as the standard protocol.
By leveraging OpenAPI, we eliminate the need for a proprietary or unfamiliar communication protocol, ensuring you can quickly and confidently build or integrate servers. This means less time spent figuring out custom interfaces and more time building powerful tools that enhance your AI applications.
## ☝️ Why OpenAPI?
- **Established Standard**: OpenAPI is a widely used, production-proven API standard backed by thousands of tools, companies, and communities.
- **No Reinventing the Wheel**: No additional documentation or proprietary spec confusion. If you build REST APIs or use OpenAPI today, you're already set.
- **Easy Integration & Hosting**: Deploy your tool servers externally or locally without vendor lock-in or complex configurations.
- **Strong Security Focus**: Built around HTTP/REST APIs, OpenAPI inherently supports widely used, secure communication methods including HTTPS and well-proven authentication standards (OAuth, JWT, API Keys).
- **Future-Friendly & Stable**: Unlike less mature or experimental protocols, OpenAPI promises reliability, stability, and long-term community support.
## 🚀 Quickstart
Get started quickly with our reference FastAPI-based implementations provided in the `servers/` directory. (You can adapt these examples into your preferred stack as needed, using [FastAPI](https://fastapi.tiangolo.com/), [FastOpenAPI](https://github.com/mr-fatalyst/fastopenapi), or any other OpenAPI-compatible library):
```bash
git clone https://github.com/open-webui/openapi-servers
cd openapi-servers
```
### With Bash
```bash
# Example: Installing dependencies for a specific server 'filesystem'
cd servers/filesystem
pip install -r requirements.txt
uvicorn main:app --host 0.0.0.0 --reload
```
The filesystem server will be reachable at [http://localhost:8000](http://localhost:8000), with interactive documentation at [http://localhost:8000/docs](http://localhost:8000/docs).
### With Docker
If you have docker compose installed, bring the servers up with:
```bash
docker compose up
```
The services will be reachable from:
* [Filesystem server: localhost:8081](http://localhost:8081)
* [Memory server: localhost:8082](http://localhost:8082)
* [Time server: localhost:8083](http://localhost:8083)
Now, simply point your OpenAPI-compatible clients or AI agents to your local or publicly deployed URL—no configuration headaches, no complicated transports.
## 🌱 Open WebUI Community
- For general discussions, technical exchange, and announcements, visit our [Community Discussions](https://github.com/open-webui/openapi-servers/discussions) page.
- Have ideas or feedback? Please open an issue!
---
sidebar_position: 3
title: "MCP Support"
---
This documentation explains how to easily set up and deploy the [**MCP (Model Context Protocol)-to-OpenAPI proxy server** (mcpo)](https://github.com/open-webui/mcpo) provided by Open WebUI. Learn how you can effortlessly expose MCP-based tool servers using standard, familiar OpenAPI endpoints suitable for end-users and developers.
### 📌 What is the MCP Proxy Server?
The MCP-to-OpenAPI proxy server lets you use tool servers implemented with MCP (Model Context Protocol) directly via standard REST/OpenAPI APIs—no need to manage unfamiliar or complicated custom protocols. If you're an end-user or application developer, this means you can interact easily with powerful MCP-based tooling directly through familiar REST-like endpoints.
### 💡 Why Use mcpo?
While MCP tool servers are powerful and flexible, they commonly communicate via standard input/output (stdio)—often running on your local machine where they can easily access your filesystem, environment, and other native system capabilities.
That's a strength, but also a limitation.
If you want to deploy your main interface (like Open WebUI) in the cloud, you quickly run into a problem: your cloud instance can't speak directly to an MCP server running locally on your machine via stdio.
[That's where mcpo comes in with a game-changing solution.](https://github.com/open-webui/mcpo)
MCP servers typically rely on raw stdio communication, which is:
- 🔓 Inherently insecure across environments
- ❌ Incompatible with most modern tools, UIs, or platforms
- 🧩 Lacking critical features like authentication, documentation, and error handling
The mcpo proxy eliminates those issues—automatically:
- ✅ Instantly compatible with existing OpenAPI tools, SDKs, and clients
- 🛡 Wraps your tools with secure, scalable, and standards-based HTTP endpoints
- 🧠 Auto-generates interactive OpenAPI documentation for every tool, entirely config-free
- 🔌 Uses plain HTTP—no socket setup, daemon juggling, or platform-specific glue code
So even though adding mcpo might at first seem like "just one more layer", in reality it simplifies everything while giving you:
- Better integration ✅
- Better security ✅
- Better scalability ✅
- Happier developers & users ✅
✨ With mcpo, your local-only AI tools become cloud-ready, UI-friendly, and instantly interoperable—without changing a single line of tool server code.
### ✅ Quickstart: Running the Proxy Locally
Here's how simple it is to launch the MCP-to-OpenAPI proxy server using the lightweight, easy-to-use tool **mcpo** ([GitHub Repository](https://github.com/open-webui/mcpo)):
1. **Prerequisites**
- **Python 3.8+** with `pip` installed.
- MCP-compatible application (for example: `mcp-server-time`)
- (Optional but recommended) `uv` installed for faster startup and zero-config convenience.
2. **Install mcpo**
Using **uv** (recommended; `uvx` runs `mcpo` directly, with no separate install step):
```bash
uvx mcpo --port 8000 -- your_mcp_server_command
```
Or using `pip`:
```bash
pip install mcpo
mcpo --port 8000 -- your_mcp_server_command
```
3. 🚀 **Run the Proxy Server**
To start your MCP-to-OpenAPI proxy server, you need an MCP-compatible tool server. If you don't have one yet, the MCP community provides various ready-to-use MCP server implementations.
✨ **Where to find MCP Servers?**
You can discover officially supported MCP servers at the following repository example:
- [modelcontextprotocol/servers on GitHub](https://github.com/modelcontextprotocol/servers)
For instance, the popular **Time MCP Server** is documented [here](https://github.com/modelcontextprotocol/servers/blob/main/src/time/README.md), and is typically referenced clearly in the README, inside the provided MCP configuration. Specifically, the README states:
> Add to your Claude settings:
>
> ```json
> "mcpServers": {
> "time": {
> "command": "uvx",
> "args": ["mcp-server-time", "--local-timezone=America/New_York"]
> }
> }
> ```
🔑 **Translating this MCP setup to a quick local proxy command**:
You can easily run the recommended MCP server (`mcp-server-time`) directly through the **MCP-to-OpenAPI proxy** (`mcpo`) like this:
```bash
uvx mcpo --port 8000 -- uvx mcp-server-time --local-timezone=America/New_York
```
That's it! You're now running the MCP-to-OpenAPI Proxy locally and exposing the powerful **MCP Time Server** through standard OpenAPI endpoints accessible at:
- 📖 **Interactive OpenAPI Documentation:** [`http://localhost:8000/docs`](http://localhost:8000/docs)
Feel free to replace `uvx mcp-server-time --local-timezone=America/New_York` with your preferred MCP Server command from other available MCP implementations found in the official repository.
🤝 **To integrate with Open WebUI after launching the server, check our [docs](https://docs.openwebui.com/openapi-servers/open-webui/).**
### 🚀 Accessing the Generated APIs
As soon as it starts, the MCP Proxy (`mcpo`) automatically:
- Discovers MCP tools dynamically and generates REST endpoints.
- Creates interactive, human-readable OpenAPI documentation accessible at:
- `http://localhost:8000/docs`
Simply call the auto-generated API endpoints directly via HTTP clients, AI agents, or other OpenAPI tools of your preference.
### 📖 Example Workflow for End-Users
Assuming you started the above server command (`uvx mcp-server-time`):
- Visit your local API documentation at `http://localhost:8000/docs`.
- Select a generated endpoint (e.g., `/get_current_time`) and use the provided interactive form.
- Click "**Execute**" and instantly receive your response.
No setup complexity—just instant REST APIs.
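The same call can be scripted. A sketch using only Python's standard library (the base URL matches the quickstart above; the endpoint name and payload shape are whatever the generated `/docs` page shows for your tool):

```python
import json
import urllib.request

BASE = "http://localhost:8000"  # the mcpo proxy from the quickstart above

def tool_url(base: str, endpoint: str) -> str:
    """Join the proxy base URL and a generated endpoint path."""
    return f"{base.rstrip('/')}/{endpoint.lstrip('/')}"

def call_tool(endpoint: str, payload: dict) -> dict:
    """POST a JSON payload to a generated endpoint and return the JSON reply."""
    req = urllib.request.Request(
        tool_url(BASE, endpoint),
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# With the proxy running, something like:
#   call_tool("get_current_time", {"timezone": "America/New_York"})
# returns the tool's JSON response.
```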
## 🚀 Deploying in Production (Example)
Deploying your MCP-to-OpenAPI proxy (powered by mcpo) is straightforward. Here's how to easily Dockerize and deploy it to cloud or VPS solutions:
### 🐳 Dockerize your Proxy Server using mcpo
1. **Dockerfile Example**
Create the following `Dockerfile` inside your deployment directory:
```dockerfile
FROM python:3.11-slim
WORKDIR /app
RUN pip install mcpo uv
# Replace with your MCP server command; example: uvx mcp-server-time
CMD ["uvx", "mcpo", "--host", "0.0.0.0", "--port", "8000", "--", "uvx", "mcp-server-time", "--local-timezone=America/New_York"]
```
2. **Build & Run the Container Locally**
```bash
docker build -t mcp-proxy-server .
docker run -d -p 8000:8000 mcp-proxy-server
```
3. **Deploying Your Container**
Push to DockerHub or another registry:
```bash
docker tag mcp-proxy-server yourdockerusername/mcp-proxy-server:latest
docker push yourdockerusername/mcp-proxy-server:latest
```
Deploy using Docker Compose, Kubernetes YAML manifests, or your favorite cloud container services (AWS ECS, Azure Container Instances, Render.com, or Heroku).
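For the Compose route, a minimal file for the image tagged above might look like this (the service name is illustrative):

```yaml
services:
  mcp-proxy:
    image: yourdockerusername/mcp-proxy-server:latest
    ports:
      - "8000:8000"
    restart: unless-stopped
```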
✔️ Your production MCP servers are now effortlessly available via REST APIs!
## 🧑‍💻 Technical Details and Background
### 🍃 How It Works (Technical Summary)
- **Dynamic Schema Discovery & Endpoints:** At server startup, the proxy connects to the MCP server to query available tools. It automatically builds FastAPI endpoints based on the MCP tool schemas, creating concise and clear REST endpoints.
- **OpenAPI Auto-documentation:** Endpoints generated are seamlessly documented and available via FastAPI's built-in Swagger UI (`/docs`). No extra doc writing required.
- **Asynchronous & Performant**: Built on robust asynchronous libraries, ensuring speed and reliability for concurrent users.
### 📚 Under the Hood
- FastAPI (Automatic routing & docs generation)
- MCP Client (Standard MCP integration & schema discovery)
- Standard JSON over HTTP (Easy integration)
## ⚡️ Why is the MCP-to-OpenAPI Proxy Superior?
Here's why leveraging MCP servers through OpenAPI via the proxy approach is significantly better and why Open WebUI enthusiastically supports it:
- **User-friendly & Familiar Interface**: No custom clients; just HTTP REST endpoints you already know.
- **Instant Integration**: Immediately compatible with thousands of existing REST/OpenAPI tools, SDKs, and services.
- **Powerful & Automatic Docs**: Built-in Swagger UI documentation is automatically generated, always accurate, and maintained.
- **No New Protocol overhead**: Eliminates the necessity to directly handle MCP-specific protocol complexities and socket communication issues.
- **Battle-Tested Security & Stability**: Inherits well-established HTTPS transport, standard auth methods (JWT, API keys), solid async libraries, and FastAPI's proven robustness.
- **Future-Proof**: The MCP proxy uses existing, stable REST/OpenAPI standards, guaranteeing long-term community support and evolution.
🌟 **Bottom line:** MCP-to-OpenAPI makes your powerful MCP-based AI tools broadly accessible through intuitive, reliable, and scalable REST endpoints. Open WebUI proudly supports and recommends this best-in-class approach.
## 📢 Community & Support
- For questions, suggestions, or feature requests, please use our [GitHub Issue tracker](https://github.com/open-webui/openapi-servers/issues) or join our [Community Discussions](https://github.com/open-webui/openapi-servers/discussions).
Happy integrations! 🌟🚀
---
sidebar_position: 1
title: "Open WebUI Integration"
---
## Overview
Open WebUI v0.6+ supports seamless integration with external tools via the OpenAPI servers — meaning you can easily extend your LLM workflows using custom or community-powered tool servers 🧰.
In this guide, you'll learn how to launch an OpenAPI-compatible tool server and connect it to Open WebUI through the intuitive user interface. Let's get started! 🚀
---
## Step 1: Launch an OpenAPI Tool Server
To begin, you'll need to start one of the reference tool servers available in the [openapi-servers repo](https://github.com/open-webui/openapi-servers). For quick testing, we'll use the time tool server as an example.
🛠️ Example: Starting the `time` server locally
```bash
git clone https://github.com/open-webui/openapi-servers
cd openapi-servers
# Navigate to the time server
cd servers/time
# Install required dependencies
pip install -r requirements.txt
# Start the server
uvicorn main:app --host 0.0.0.0 --reload
```
Once running, this will host a local OpenAPI server at http://localhost:8000, which you can point Open WebUI to.
![Time Server](/images/openapi-servers/open-webui/time-server.png)
---
## Step 2: Connect Tool Server in Open WebUI
Next, connect your running tool server to Open WebUI:
1. Open WebUI in your browser.
2. Open ⚙️ **Settings**.
3. Click on **Tools** to add a new tool server.
4. Enter the URL where your OpenAPI tool server is running (e.g., http://localhost:8000).
5. Click "Save".
![Settings Page](/images/openapi-servers/open-webui/settings.png)
### 🧑‍💻 User Tool Servers vs. 🛠️ Global Tool Servers
There are two ways to register tool servers in Open WebUI:
#### 1. User Tool Servers (added via regular Settings)
- Only accessible to the user who registered the tool server.
- The connection is made directly from the browser (client-side) by the user.
- Perfect for personal workflows or when testing custom/local tools.
#### 2. Global Tool Servers (added via Admin Settings)
Admins can manage shared tool servers available to all or selected users across the entire deployment:
- Go to 🛠️ **Admin Settings > Tools**.
- Add the tool server URL just as you would in user settings.
- These tools are treated similarly to Open WebUI's built-in tools.
#### Main Difference: Where Are Requests Made From?
The primary distinction between **User Tool Servers** and **Global Tool Servers** is where the API connection and requests are actually made:
- **User Tool Servers**
- Requests to the tool server are performed **directly from your browser** (the client).
- This means you can safely connect to localhost URLs (like `http://localhost:8000`)—even exposing private or development-only endpoints such as your local filesystem or dev tools—without risking exposure to the wider internet or other users.
- Your connection is isolated; only your browser can access that tool server.
- **Global Tool Servers**
- Requests are sent **from the Open WebUI backend/server** (not your browser).
- The backend must be able to reach the tool server URL you specify—so `localhost` means the backend server's localhost, *not* your computer's.
- Use this for sharing tools with other users across the deployment, but be mindful: since the backend makes the requests, you cannot access your personal local resources (like your own filesystem) through this method.
- Think security! Only expose remote/global endpoints that are safe and meant to be accessed by multiple users.
**Summary Table:**
| Tool Server Type | Request Origin | Use Localhost? | Use Case Example |
| ------------------ | -------------------- | ------------------ | ---------------------------------------- |
| User Tool Server | User's Browser (Client-side) | Yes (private to you) | Personal tools, local dev/testing |
| Global Tool Server | Open WebUI Backend (Server-side) | No (unless running on the backend itself) | Team/shared tools, enterprise integrations |
:::tip
User Tool Servers are best for personal or experimental tools, especially those running on your own machine, while Global Tool Servers are ideal for production or shared environments where everyone needs access to the same tools.
:::
### 👉 Optional: Using a Config File with mcpo
If you're running multiple tools through mcpo using a config file, take note:
🧩 Each tool is mounted under its own unique path!
For example, if you're using memory and time tools simultaneously through mcpo, they'll each be available at a distinct route:
- http://localhost:8000/time
- http://localhost:8000/memory
This means:
- When connecting a tool in Open WebUI, you must enter the full route to that specific tool — do NOT enter just the root URL (http://localhost:8000).
- Add each tool individually in Open WebUI Settings using their respective subpath URLs.
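For reference, mcpo can be launched with such a config file (e.g. `mcpo --port 8000 --config config.json`), using the familiar `mcpServers` format. A sketch for the two tools above (the exact server commands are illustrative):

```json
{
  "mcpServers": {
    "time": {
      "command": "uvx",
      "args": ["mcp-server-time", "--local-timezone=America/New_York"]
    },
    "memory": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-memory"]
    }
  }
}
```

Each key under `mcpServers` becomes its own subpath on the proxy, which is exactly what you enter in Open WebUI.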
![MCPO Config Tools Setting](/images/openapi-servers/open-webui/mcpo-config-tools.png)
✅ Good:
http://localhost:8000/time
http://localhost:8000/memory
🚫 Not valid:
http://localhost:8000
This ensures Open WebUI recognizes and communicates with each tool server correctly.
---
## Step 3: Confirm Your Tool Server Is Connected ✅
Once your tool server is successfully connected, Open WebUI will display a 👇 tool server indicator directly in the message input area:
📍 You'll now see this icon below the input box:
![Tool Server Indicator](/images/openapi-servers/open-webui/message-input.png)
Clicking this icon opens a popup where you can:
- View connected tool server information
- See which tools are available and which server they're provided by
- Debug or disconnect any tool if needed
🔍 Here's what the tool information modal looks like:
![Tool Info Modal Expanded](/images/openapi-servers/open-webui/info-modal.png)
### 🛠️ Global Tool Servers Look Different — And Are Hidden by Default!
If you've connected a Global Tool Server (i.e., one that's admin-configured), it will not appear automatically in the input area like user tool servers do.
Instead:
- Global tools are hidden by default and must be explicitly activated per user.
- To enable them, you'll need to click on the button in the message input area (bottom left of the chat box), and manually toggle on the specific global tool(s) you want to use.
Here's what that looks like:
![Global Tool Server Message Input](/images/openapi-servers/open-webui/global-message-input.png)
⚠️ Important Notes for Global Tool Servers:
- They will not show up in the tool indicator popup until enabled from the menu.
- Each global tool must be individually toggled on to become active inside your current chat.
- Once toggled on, they function the same way as user tools.
- Admins can control access to global tools via role-based permissions.
This is ideal for team setups or shared environments, where commonly-used tools (e.g., document search, memory, or web lookup) should be centrally accessible by multiple users.
---
## (Optional) Step 4: Use "Native" Function Calling (ReACT-style) Tool Use 🧠
:::info
For this to work effectively, **your selected model must support native tool calling**. Some local models claim support but often produce poor results. We strongly recommend using GPT-4o or another OpenAI model that supports function calling natively for the best experience.
:::
Want to enable ReACT-style (Reasoning + Acting) native function calls directly inside your conversations? You can switch Open WebUI to use native function calling.
✳️ How to enable native function calling:
1. Open the chat window.
2. Go to ⚙️ **Chat Controls > Advanced Params**.
3. Change the **Function Calling** parameter from `Default` to `Native`.
![Native Tool Call](/images/openapi-servers/open-webui/native.png)
---
## Need More Tools? Explore & Expand! 🧱
The [openapi-servers repo](https://github.com/open-webui/openapi-servers) includes a variety of useful reference servers:
- 📂 Filesystem access
- 🧠 Memory & knowledge graphs
- 🗃️ Git repo browsing
- 🌎 Web search (WIP)
- 🛢️ Database querying (WIP)
You can run any of these in the same way and connect them to Open WebUI by repeating the steps above.
---
## Troubleshooting & Tips 🧩
- ❌ Not connecting? Make sure the URL is correct and reachable: from your browser for user tool servers, or from the Open WebUI backend for global tool servers.
- 🔒 If you're using remote servers, check firewalls and HTTPS configs!
- 📝 To make servers persist, consider deploying them in Docker or with system services.
Need help? Visit the 👉 [Discussions page](https://github.com/open-webui/openapi-servers/discussions) or [open an issue](https://github.com/open-webui/openapi-servers/issues).