Technical Documentation

OpenWebUI Tools: A Comprehensive Guide to Creation, Installation, and Usage


👤
Author
Cosmic Lounge AI Team
📅
Updated
6/1/2025
⏱️
Read Time
16 min
Topics
#llm #ai #model #api #installation #configuration #code #openwebui


🌌 OpenWebUI Tools: A Comprehensive Guide to Creation, Installation, and Usage

OpenWebUI, a user-friendly interface for interacting with Large Language Models (LLMs), offers a powerful feature called “Tools” that extends the capabilities of LLMs beyond text processing. Tools act as plugins, enabling LLMs to interact with the real world by accessing and processing external information and services. This article provides a comprehensive guide to creating, installing, and using tools in OpenWebUI, complete with instructions and examples.



🌟 Creating Custom Tools in OpenWebUI

Creating custom tools in OpenWebUI involves defining Python functions that can be accessed by the LLM during a conversation. These functions can perform various tasks, such as fetching real-time information, interacting with external APIs, or executing code. Here’s a breakdown of the process:

1. Define the Tool:

  • Navigate to the “Workspace” tab in OpenWebUI and select “Tools.”

  • Click the ”+” button to create a new tool.

  • Provide a name, a unique ID, and a description for your tool.

  • Write the Python code for your tool in the provided editor. Be sure to include type hints for arguments to improve consistency.

2. Structure of the Code:

  • Define your tools as methods within a class called Tools.

  • Optionally include subclasses called Valves and UserValves to allow users to provide dynamic details like API keys or configuration options. These will create fillable fields or boolean switches in the GUI menu for the given tool. Valves are configurable by admins alone, while UserValves are configurable by any user.

  • Use event emitters to add additional information to the chat interface. There are two types of event emitters: “status” and “message.”

  • Status emitters add status updates to a message while it is being processed. These can be used to inform users about the progress of a task, especially for tools that delay the LLM response or process large amounts of information.
  • Message emitters append messages to the chat at any stage in the tool’s execution. This allows you to provide feedback to the user, embed images, or even render web pages before, during, or after the LLM response.

Python

from pydantic import BaseModel, Field


class Tools:
    class Valves(BaseModel):
        api_key: str = Field("", description="Your API key here")

    def __init__(self):
        """Initialize the Tool."""
        self.valves = self.Valves()

    def reverse_string(self, string: str) -> str:
        """
        Reverses the input string.
        :param string: The string to reverse.
        """
        # Example usage of valves.
        if self.valves.api_key != "42":
            return "Wrong API key"
        return string[::-1]

    async def test_function(
        self, prompt: str, __user__: dict, __event_emitter__=None
    ) -> str:
        """
        This is a demo.
        :param prompt: This is a test parameter.
        """
        try:
            await __event_emitter__(
                {
                    "type": "status",  # We set the type here.
                    # done is False here, indicating we are still emitting statuses.
                    "data": {"description": "Message that shows up in the chat", "done": False},
                }
            )

            # Do some other logic here.

            await __event_emitter__(
                {
                    "type": "status",
                    # done is True here, indicating we are done emitting statuses.
                    # You can also set "hidden": True if you want to remove the
                    # status once the message is returned.
                    "data": {"description": "Completed a task message", "done": True, "hidden": False},
                }
            )
        except Exception as e:
            await __event_emitter__(
                {
                    "type": "status",
                    "data": {"description": f"An error occurred: {e}", "done": True},
                }
            )
            return f"Tell the user: {e}"

3. Save and Enable the Tool:

  • Once you’ve written the code, save the tool.

  • To use the tool, you need to enable it for a specific model. Go to “Workspace” -> “Models,” select the model you want to use, and click the pencil icon to edit it.

  • In the “Tools” section, check the box next to your custom tool and save the model.

In addition to the core functionalities mentioned above, tools in OpenWebUI can also be used for tasks like:

  • External Voice Synthesis: Make API requests within the chat to integrate external voice synthesis services like ElevenLabs and generate audio based on the LLM output.

  • File Interaction: Access and process local files, enabling LLMs to interact with user data or perform tasks like analyzing documents.

  • Image Generation: Generate images based on user prompts, adding a visual dimension to the chat experience.
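To make the external voice synthesis idea concrete, here is a rough sketch of the request a tool could send. The endpoint path, the `xi-api-key` header, and the payload fields follow ElevenLabs’ public text-to-speech API as commonly documented, but treat them as assumptions and verify against the current API reference; the helper names are hypothetical.

```python
import os

import requests

# Assumed endpoint shape for ElevenLabs text-to-speech.
ELEVENLABS_TTS_URL = "https://api.elevenlabs.io/v1/text-to-speech/{voice_id}"


def build_tts_request(text: str, voice_id: str, api_key: str) -> tuple:
    """Build the URL, headers, and JSON payload for a TTS request."""
    url = ELEVENLABS_TTS_URL.format(voice_id=voice_id)
    headers = {"xi-api-key": api_key, "Content-Type": "application/json"}
    payload = {"text": text}
    return url, headers, payload


def synthesize(text: str, voice_id: str) -> bytes:
    """Send the TTS request and return the raw audio bytes."""
    api_key = os.getenv("ELEVENLABS_API_KEY", "")
    url, headers, payload = build_tts_request(text, voice_id, api_key)
    response = requests.post(url, headers=headers, json=payload, timeout=30)
    response.raise_for_status()
    return response.content
```

Inside an OpenWebUI tool, the API key would more naturally come from a Valve than from an environment variable; the environment variable is used here only to keep the sketch self-contained.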



🌟 Installing Custom Tools in OpenWebUI

OpenWebUI provides two ways to install custom tools:

1. Download and Import Manually:

  • Go to the OpenWebUI community site (https://openwebui.com/tools/).

  • Click on the tool you want to import and click the blue “Get” button.

  • Click “Download as JSON export.”

  • In OpenWebUI, navigate to “Workspace” -> “Tools” and click “Import Tools” to upload the JSON file.

2. Import via your OpenWebUI URL:

  • Go to the OpenWebUI community site (https://openwebui.com/tools/).

  • Click on the tool you want to import and click the blue “Get” button.

  • Enter the IP address of your OpenWebUI instance and click “Import to WebUI.” This will automatically open your instance and allow you to import the tool.



🌟 Using Custom Tools in OpenWebUI

Once a tool is installed and enabled for a model, you can use it in a chat session by following these steps:

1. Start a Chat Session:

  • Go to “Workspace” and create a new chat.

  • Select the model for which you enabled the tool.

2. Enable the Tool in the Chat:

  • In the chat window, click the ”+” icon next to the prompt box.

  • Enable the tool you want to use.

3. Use the Tool in Your Prompt:

  • When you ask a question or give a command that requires the tool’s functionality, the LLM will automatically utilize the tool to provide a response.

OpenWebUI also provides an “AutoTool Filter” function that allows LLMs to automatically select and use relevant tools without manually enabling them in the chat. This feature enhances the user experience by streamlining the tool selection process.



🌟 Example Custom Tools

While the OpenWebUI community site offers a variety of pre-built tools, let’s explore a few examples to illustrate how custom tools work:

⚡ 1. Web Search Tool

This tool allows the LLM to perform live web searches using SearXNG and scrape the first N pages.

  • Code: The code for this tool can be found on the OpenWebUI community site.

  • Installation: Follow the installation instructions provided on the community site.

  • Usage: In a chat session with the tool enabled, you can ask questions like “What are the latest news on AI?” or “Find me information on quantum computing.” The LLM will use the tool to search the web and provide relevant results.
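As a rough sketch of how such a tool can talk to SearXNG: instances expose a /search endpoint that can return JSON when the json format is enabled in the instance settings. The function names, the localhost base URL, and the result formatting below are illustrative assumptions, not the community tool’s actual code.

```python
import requests


def searxng_search(query: str, base_url: str = "http://localhost:8080", max_results: int = 5) -> list:
    """Query a SearXNG instance's JSON API and return the top results."""
    response = requests.get(
        f"{base_url}/search",
        params={"q": query, "format": "json"},
        timeout=10,
    )
    response.raise_for_status()
    results = response.json().get("results", [])[:max_results]
    # Keep only the fields the LLM needs.
    return [{"title": r.get("title"), "url": r.get("url")} for r in results]


def format_results(results: list) -> str:
    """Render results as a numbered plain-text list for the LLM to read."""
    return "\n".join(f"{i + 1}. {r['title']} - {r['url']}" for i, r in enumerate(results))
```

A real tool would wrap these helpers in a Tools class method and would typically also fetch and trim the page contents before handing them to the model.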

⚡ 2. YouTube Transcript Provider

This tool retrieves the full YouTube transcript in English for a given YouTube video URL.

  • Code: The code for this tool can be found on the OpenWebUI community site.

  • Installation: Follow the installation instructions provided on the community site.

  • Usage: In a chat session with the tool enabled, you can provide a YouTube video URL and ask the LLM to summarize the video or answer questions about its content.
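One common way to implement such a tool is with the third-party youtube-transcript-api package. The package name and the `get_transcript` call are assumptions based on its widely used interface (the library has changed its API over time), so check its documentation; the helper names here are hypothetical, not the community tool’s actual code.

```python
import re


def extract_video_id(url: str) -> str:
    """Pull the 11-character video ID out of common YouTube URL forms."""
    match = re.search(r"(?:v=|youtu\.be/|embed/)([A-Za-z0-9_-]{11})", url)
    if not match:
        raise ValueError(f"Could not find a video ID in: {url}")
    return match.group(1)


def get_transcript_text(url: str) -> str:
    """Fetch the English transcript and join the segments into one string."""
    # Imported lazily so the pure helper above works without the dependency.
    from youtube_transcript_api import YouTubeTranscriptApi

    video_id = extract_video_id(url)
    segments = YouTubeTranscriptApi.get_transcript(video_id, languages=["en"])
    return " ".join(segment["text"] for segment in segments)
```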

⚡ 3. Calculator Tool

This tool provides a simple calculator functionality within the chat.

  • Code:

Python

from pydantic import BaseModel, Field


class Tools:
    def __init__(self):
        """Initialize the Tool."""
        pass  # No valves needed for this tool.

    def calculate(self, expression: str) -> str:
        """
        Evaluates a mathematical expression.
        :param expression: The mathematical expression to evaluate.
        """
        # Note: eval() executes arbitrary code; avoid it on untrusted input.
        try:
            result = str(eval(expression))
            return result
        except Exception as e:
            return f"Error: {e}"

  • Installation:

1. Copy the code above.

2. In OpenWebUI, navigate to “Workspace” -> “Tools” and click the ”+” button.

3. Provide a name (e.g., “Calculator”), a unique ID (e.g., “calculator_tool”), and a description (e.g., “Evaluates mathematical expressions.”).

4. Paste the code into the editor and save the tool.

  • Usage:

1. Go to “Workspace” -> “Models,” select a model, and click the pencil icon to edit it.

2. In the “Tools” section, check the box next to your “Calculator” tool and save the model.

3. Start a chat session with the model and enable the “Calculator” tool.

4. You can now ask questions like “What is 2 + 2?” or “Calculate 10 * 5.” The LLM will use the tool to evaluate the expression and provide the result.
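Because eval() runs arbitrary code, a hardened variant is worth considering for anything user-facing. The sketch below (a hypothetical safe_calculate helper, not part of the tool above) walks the expression’s abstract syntax tree and permits only basic arithmetic:

```python
import ast
import operator

# Operators we allow; anything else is rejected.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}


def safe_calculate(expression: str) -> str:
    """Evaluate a basic arithmetic expression without using eval()."""

    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError("Unsupported expression")

    try:
        return str(_eval(ast.parse(expression, mode="eval")))
    except Exception as e:
        return f"Error: {e}"
```

Swapping this in for the eval()-based `calculate` keeps the same string-in, string-out contract while rejecting anything that is not plain arithmetic.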



🌟 Pipelines: Advanced Workflows in OpenWebUI

While tools extend the capabilities of LLMs within a chat session, pipelines offer a more advanced way to customize and extend OpenWebUI itself. Pipelines are essentially API-compatible workflows that allow you to offload heavy processing or integrate with non-OpenAI providers. Think of pipelines as a way to create more complex and customized interactions with LLMs. They can be used to:

  • Transform OpenWebUI features: Create custom workflows that combine multiple tools and functions.

  • Offload processing: Distribute the workload to different machines, improving performance and scalability.

  • Integrate with external services: Connect OpenWebUI with other APIs and services to create more comprehensive solutions.
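To make the idea concrete, here is a minimal, illustrative pipeline skeleton. The class and method names (`Pipeline`, `on_startup`, `on_shutdown`, `pipe`) follow the general shape of the examples in the open-webui pipelines project, but treat the exact signatures as assumptions and compare against that project’s current templates.

```python
class Pipeline:
    def __init__(self):
        # The name shown in OpenWebUI's model list once the pipeline is registered.
        self.name = "Uppercase Echo Pipeline"

    async def on_startup(self):
        # Called when the pipelines server starts; set up clients or models here.
        pass

    async def on_shutdown(self):
        # Called when the pipelines server stops; release resources here.
        pass

    def pipe(self, user_message: str, model_id: str, messages: list, body: dict) -> str:
        # All heavy lifting happens here. This toy pipeline just echoes the
        # user's message in uppercase; a real one might call an external
        # provider or chain several tools together.
        return user_message.upper()
```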



🌟 Conclusion

OpenWebUI tools provide a powerful way to enhance the interactive capabilities of LLMs. By acting as a bridge between LLMs and the real world, tools enable LLMs to access and process external information, perform actions, and generate dynamic content. The flexibility and customization offered by OpenWebUI tools allow users to create tools tailored to their specific needs and integrate them with various LLMs.

The ability to create and install custom tools, along with the availability of pre-built tools on the OpenWebUI community site, opens up a wide range of possibilities for using LLMs in more complex and interactive applications. Whether it’s fetching real-time data, interacting with external services, or generating dynamic content, OpenWebUI tools empower users to unlock the full potential of LLMs and create more engaging and informative chat experiences.

🔧 Full Template for a tool.

Python

import os
from datetime import datetime

import requests


class Tools:
    def __init__(self):
        pass

    # Add your custom tools using pure Python code here; make sure to add type hints.
    # Use Sphinx-style docstrings to document your tools; they will be used for
    # generating the tool specifications.
    # Please refer to the function_calling_filter_pipeline.py file from the
    # pipelines project for an example.

    def get_user_name_and_email_and_id(self, __user__: dict = {}) -> str:
        """
        Get the user name, email, and ID from the user object.
        """
        # Do not include :param for __user__ in the docstring, as it should not
        # be shown in the tool's specification.
        # The session user object will be passed as a parameter when the
        # function is called.
        print(__user__)
        result = ""
        if "name" in __user__:
            result += f"User: {__user__['name']}"
        if "id" in __user__:
            result += f" (ID: {__user__['id']})"
        if "email" in __user__:
            result += f" (Email: {__user__['email']})"
        if result == "":
            result = "User: Unknown"
        return result

    def get_current_time(self) -> str:
        """
        Get the current time in a more human-readable format.
        :return: The current time.
        """
        now = datetime.now()
        current_time = now.strftime("%I:%M:%S %p")  # 12-hour format with AM/PM
        current_date = now.strftime("%A, %B %d, %Y")  # Full weekday, month name, day, and year
        return f"Current Date and Time = {current_date}, {current_time}"

    def calculator(self, equation: str) -> str:
        """
        Calculate the result of an equation.
        :param equation: The equation to calculate.
        """
        # Avoid using eval in production code:
        # https://nedbatchelder.com/blog/201206/eval_really_is_dangerous.html
        try:
            result = eval(equation)
            return f"{equation} = {result}"
        except Exception as e:
            print(e)
            return "Invalid equation"

    def get_current_weather(self, city: str) -> str:
        """
        Get the current weather for a given city.
        :param city: The name of the city to get the weather for.
        :return: The current weather information or an error message.
        """
        api_key = os.getenv("OPENWEATHER_API_KEY")
        if not api_key:
            return "API key is not set in the environment variable 'OPENWEATHER_API_KEY'."
        base_url = "http://api.openweathermap.org/data/2.5/weather"
        params = {
            "q": city,
            "appid": api_key,
            "units": "metric",  # Optional: use 'imperial' for Fahrenheit
        }
        try:
            response = requests.get(base_url, params=params)
            response.raise_for_status()  # Raise HTTPError for bad responses (4xx and 5xx)
            data = response.json()
            if data.get("cod") != 200:
                return f"Error fetching weather data: {data.get('message')}"
            weather_description = data["weather"][0]["description"]
            temperature = data["main"]["temp"]
            humidity = data["main"]["humidity"]
            wind_speed = data["wind"]["speed"]
            return (
                f"Weather in {city}: {weather_description}, {temperature}°C, "
                f"humidity {humidity}%, wind speed {wind_speed} m/s"
            )
        except requests.RequestException as e:
            return f"Error fetching weather data: {str(e)}"