🌌 OpenWebUI’s API: A Comprehensive Guide with Python Examples
OpenWebUI is a powerful and versatile platform for interacting with Large Language Models (LLMs). It offers a user-friendly interface and supports a wide range of LLMs, including OpenAI-compatible models. While the graphical interface provides an intuitive way to work with these models, OpenWebUI also exposes an API that lets developers integrate and automate LLM interactions programmatically. This guide, compiled from the official documentation, community forums, and tutorials, provides a detailed overview of OpenWebUI’s API: its endpoints, its functionality, and practical Python code examples.
🌟 API Endpoints and Functionalities
OpenWebUI’s API provides a variety of endpoints, each serving a specific function:
- Retrieve All Models: The GET /api/models endpoint retrieves a comprehensive list of all models accessible within OpenWebUI, including models created or added through the platform, enabling developers to programmatically manage the available LLMs [1].
- Chat Completions: The POST /api/chat/completions endpoint adheres to the OpenAI Chat Completion API standard, allowing developers to send chat messages to models and receive responses. It supports a diverse range of models, including those from Ollama, OpenAI, and OpenWebUI’s own Function models [1].
- Retrieval Augmented Generation (RAG): The POST /api/v1/files/ endpoint uploads files for use with Retrieval Augmented Generation. RAG lets LLMs access external data sources, enriching their knowledge base and improving the contextuality of their responses [1].
- File Management: The POST /api/v1/knowledge/{id}/file/add endpoint organizes uploaded files into structured knowledge collections that can be referenced during chat completions, enabling efficient knowledge management and retrieval within conversational flows [1].

Beyond these core endpoints, the API offers a comprehensive suite of endpoints for managing configurations, authentication, users, chats, documents, prompts, memories, tools, functions, and utilities [2]. This breadth of functionality allows for deep integration and customization of OpenWebUI within diverse applications.
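As a sketch of the file-management step, the snippet below attaches an uploaded file to a knowledge collection. It assumes the endpoint accepts a JSON body with a file_id field [1]; verify the exact request shape against your instance’s /docs page.

```python
import requests

def knowledge_add_url(base_url, knowledge_id):
    """Build the endpoint path, substituting the collection's {id}."""
    return f"{base_url}/api/v1/knowledge/{knowledge_id}/file/add"

def add_file_to_knowledge(api_key, knowledge_id, file_id, base_url="http://localhost:3000"):
    """Attach a previously uploaded file to a knowledge collection."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    # The request body names the uploaded file to attach to the collection
    response = requests.post(knowledge_add_url(base_url, knowledge_id),
                             headers=headers, json={"file_id": file_id})
    response.raise_for_status()
    return response.json()
```

The file_id here is the id returned by the POST /api/v1/files/ upload endpoint, so the two calls chain naturally: upload first, then attach.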
🌟 API Access with Python
OpenWebUI’s API employs API keys for secure authentication. You can obtain your API key from the “Settings > Account” section of the OpenWebUI interface [1]. Below is a Python snippet demonstrating how to use the requests library to interact with the API:
```python
import requests

def get_models(api_key):
    """Retrieves all models available in OpenWebUI."""
    url = "http://localhost:3000/api/models"  # Replace with your OpenWebUI URL
    headers = {"Authorization": f"Bearer {api_key}"}
    response = requests.get(url, headers=headers)
    if response.status_code == 200:
        return response.json()
    raise Exception(f"API request failed with status code {response.status_code}")

# Example usage
api_key = "YOUR_API_KEY"  # Replace with your actual API key
models = get_models(api_key)
print(models)
```

This code retrieves the list of all available models by making a GET request to the /api/models endpoint.
🌟 API Documentation
OpenWebUI provides Swagger documentation for its API at the /docs path of your instance. It offers a detailed overview of the API endpoints, request parameters, and response formats. Note that the API is currently experimental and may change in future releases [1].
🌟 Python Examples for API Functionalities
The following Python code examples illustrate how to utilize OpenWebUI’s API to perform various tasks:
⚡ Chat Completions
```python
import requests

def chat_completion(api_key, model, messages):
    """Sends a chat completion request to OpenWebUI."""
    url = "http://localhost:3000/api/chat/completions"  # Replace with your OpenWebUI URL
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {"model": model, "messages": messages}
    response = requests.post(url, headers=headers, json=payload)
    if response.status_code == 200:
        return response.json()
    raise Exception(f"API request failed with status code {response.status_code}")

# Example usage
api_key = "YOUR_API_KEY"
model = "llama2"  # Replace with the desired model
messages = [{"role": "user", "content": "Hello, how are you?"}]
response = chat_completion(api_key, model, messages)
print(response)
```
This example demonstrates how to send a chat message to a specified LLM and receive its response.
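Because the endpoint follows the OpenAI Chat Completion standard, responses can also be streamed token by token. The sketch below assumes the OpenAI-style server-sent-events format (lines beginning with `data: ` carrying JSON delta chunks, terminated by `data: [DONE]`); confirm this against your OpenWebUI version’s /docs before relying on it.

```python
import json
import requests

def parse_sse_line(line: bytes):
    """Return the content delta carried by one SSE data line, or None."""
    if not line.startswith(b"data: "):
        return None
    data = line[len(b"data: "):]
    if data == b"[DONE]":
        return None
    # Each chunk mirrors the OpenAI streaming schema
    return json.loads(data)["choices"][0]["delta"].get("content")

def stream_chat_completion(api_key, model, messages, base_url="http://localhost:3000"):
    """Yield content fragments as the model produces them."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {"model": model, "messages": messages, "stream": True}
    with requests.post(f"{base_url}/api/chat/completions",
                       headers=headers, json=payload, stream=True) as response:
        response.raise_for_status()
        for line in response.iter_lines():
            chunk = parse_sse_line(line) if line else None
            if chunk is not None:
                yield chunk
```

Streaming keeps the connection open and prints partial output as it arrives, which makes interactive applications feel far more responsive than waiting for the full completion.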
⚡ File Uploads for RAG
```python
import requests

def upload_file(api_key, file_path):
    """Uploads a file for RAG to OpenWebUI."""
    url = "http://localhost:3000/api/v1/files/"  # Replace with your OpenWebUI URL
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Accept": "application/json",
    }
    # The context manager ensures the file handle is closed after the upload
    with open(file_path, "rb") as f:
        response = requests.post(url, headers=headers, files={"file": f})
    if response.status_code == 200:
        return response.json()
    raise Exception(f"API request failed with status code {response.status_code}")

# Example usage
api_key = "YOUR_API_KEY"
file_path = "/path/to/your/file.txt"  # Replace with the actual file path
response = upload_file(api_key, file_path)
print(response)
```
This code snippet shows how to upload a file to OpenWebUI, which can then be used with the RAG feature to provide the LLM with external knowledge.
⚡ Chat Completions with RAG
```python
import requests

def chat_completion_with_rag(api_key, model, query, collection_id):
    """Sends a chat completion request with RAG to OpenWebUI."""
    url = "http://localhost:3000/api/chat/completions"  # Replace with your OpenWebUI URL
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": query}],
        "files": [{"type": "collection", "id": collection_id}],
    }
    response = requests.post(url, headers=headers, json=payload)
    if response.status_code == 200:
        return response.json()
    raise Exception(f"API request failed with status code {response.status_code}")

# Example usage
api_key = "YOUR_API_KEY"
model = "llama2"  # Replace with the desired model
query = "What is the capital of France?"
collection_id = "YOUR_COLLECTION_ID"  # Replace with the actual collection ID
response = chat_completion_with_rag(api_key, model, query, collection_id)
print(response)
```
This example demonstrates how to send a chat completion request that incorporates a reference to a knowledge collection, enabling the LLM to leverage external information through RAG.
🌟 Practical Use Cases
OpenWebUI’s API unlocks a wide range of practical applications. Here are a few examples of how the API can be used:
- Building a Chatbot: Create a custom chatbot that interacts with users, answers questions, and provides information, then integrate it into messaging apps, websites, or social media.
- Automating Content Generation: Generate articles, summaries, or social media posts programmatically, saving time and resources while keeping content quality consistent.
- Integrating with Data Analysis Tools: Connect OpenWebUI with data analysis tools so LLMs can analyze data, generate reports, and surface insights within existing workflows.
🌟 Key Takeaways
OpenWebUI’s API provides a robust and flexible interface for interacting with LLMs programmatically. It offers a wide range of functionalities, including model management, chat completions, and integration with external knowledge sources through RAG. By leveraging the API, developers can:
- Increase Flexibility: Integrate LLMs into various applications and workflows.
- Enable Automation: Automate tasks such as content generation, data analysis, and chatbot interactions.
- Enhance Integration: Connect OpenWebUI with other tools and platforms to create powerful AI-driven solutions.
🌟 API Documentation and Resources
While the official documentation for OpenWebUI’s API is still under development, the following resources can provide valuable information and support:
- OpenWebUI Documentation: The official documentation offers basic information about API endpoints and authentication [1].
- GitHub Discussions: The OpenWebUI GitHub repository hosts a discussions forum where developers can ask questions and share their experiences with the API [4].
- Community Forums: Online communities such as Reddit and the Cloudron forum have dedicated spaces for OpenWebUI, where users and developers discuss various aspects of the platform, including API usage [5], [6].
🌟 Conclusion
OpenWebUI’s API empowers developers to harness the full potential of LLMs by providing a programmatic interface for interaction and automation. With its diverse set of endpoints and functionalities, the API enables seamless integration of LLMs into various applications and workflows. While the API is still under development, the available resources and code examples provide a solid foundation for developers to get started.
🔧 Works cited
1. API Endpoints | Open WebUI, accessed on January 8, 2025, https://docs.openwebui.com/getting-started/advanced-topics/api-endpoints/
2. open-webui built-in API quick usage guide - OpenAI compatible ollama endpoint vs. open-webui endpoint · open-webui open-webui · Discussion #5033 - GitHub, accessed on January 8, 2025, https://github.com/open-webui/open-webui/discussions/5033
3. Open WebUI API Access for Lollms-webui | Restackio, accessed on January 8, 2025, https://www.restack.io/p/lollms-webui-answer-open-webui-api-access-cat-ai
4. open-webui open-webui · Discussions - GitHub, accessed on January 8, 2025, https://github.com/open-webui/open-webui/discussions
5. I’m the Sole Maintainer of Open WebUI — AMA! : r/OpenWebUI, accessed on January 8, 2025, https://www.reddit.com/r/OpenWebUI/comments/1gjziqm/im_the_sole_maintainer_of_open_webui_ama/
6. OpenWebUI | Cloudron Forum, accessed on January 8, 2025, https://forum.cloudron.io/category/185/openwebui