Setting Up a Local LLM with Ollama and Open-WebUI on Windows (WSL)
I was looking to run a local Large Language Model (LLM) to experiment with coding against it without burning through OpenAI API credits. This guide walks you through setting up Ollama and Open-WebUI locally on Windows using WSL. Currently, I run my local LLM on an Nvidia RTX 2070 with 8GB of VRAM, which isn't the best hardware but works for me. I plan to expand on these articles as I learn more about the optimal ways to use AI in my life.
I followed the steps below to set up my environment. In a separate blog post, I will demonstrate how to write a basic script to interact with the local LLM API using the OpenAI library.
Step 1: Install WSL (If Not Already Installed)
If you haven't already set up WSL (Windows Subsystem for Linux), open PowerShell as Administrator and run the command below. If you're already running Linux, or you're using the Windows version of Ollama, skip this step.
```
wsl --install
```
Restart your system and ensure you have a Linux distribution installed (Ubuntu 24.04 is recommended).
Step 2: Install Ollama
First, inside WSL, install Ollama, which allows you to run LLMs locally:
```
curl -fsSL https://ollama.com/install.sh | sh
```
Once installed, verify by running:
```
ollama --version
```
Step 3: Download a Model
To start using Ollama, download a model (e.g., llama3.1:latest):
```
ollama pull llama3.1:latest
```
This will download and set up the model for local use. You can chat with it immediately in the terminal via `ollama run llama3.1:latest` (type /bye to exit).
Step 4: Create a Custom Model File (Increase Context Window)
By default, models run with a limited context window. To increase it, I created a custom Modelfile (the 8192-token value below is just an example; size it to what your VRAM can handle):
```
FROM llama3.1:latest
PARAMETER num_ctx 8192
```
Then, create the custom model (the name llama3.1-8k is just an example):
```
ollama create llama3.1-8k -f Modelfile
```
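Why be conservative with the context window on an 8GB card? The KV cache grows linearly with context length and comes out of the same VRAM as the model weights. A rough back-of-envelope sketch, assuming Llama 3.1 8B's published architecture (32 layers, 8 KV heads, head dimension 128) and an fp16 cache:

```python
def kv_cache_bytes(n_ctx: int, n_layers: int = 32, n_kv_heads: int = 8,
                   head_dim: int = 128, bytes_per_val: int = 2) -> int:
    # One key and one value vector per token, per layer, per KV head;
    # the leading factor of 2 accounts for storing both keys and values.
    return 2 * n_layers * n_ctx * n_kv_heads * head_dim * bytes_per_val

print(kv_cache_bytes(8192) / 2**30)  # → 1.0 (GiB of cache, on top of the model weights)
```

Doubling the context doubles that figure, which is why an 8GB card fills up quickly.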
Step 5: Set Up a Python Virtual Environment
Before installing Open-WebUI, it’s best to create a Python virtual environment:
```
python3 -m venv venv
source venv/bin/activate
```
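If you're not sure the activation took, Python itself can tell you: inside an active virtual environment, sys.prefix differs from sys.base_prefix.

```python
import sys

# Inside an activated venv, sys.prefix points at the venv directory,
# while sys.base_prefix still points at the system interpreter.
in_venv = sys.prefix != sys.base_prefix
print("venv active:", in_venv)
```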
Ensure pip is up to date:
```
pip install --upgrade pip
```
Step 6: Install Open-WebUI
Now, install Open-WebUI to interact with the model through a web interface:
```
pip install open-webui
```
Step 7: Run Open-WebUI
Once installed, start the WebUI:
```
open-webui serve
```
You should now be able to access Open-WebUI in your browser (it serves on http://localhost:8080 by default) and interact with your local LLM!
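The full OpenAI-library script will come in that separate post, but here is a minimal sketch of talking to Ollama's own REST API, which listens on port 11434 by default. The helper name build_generate_request is mine, not part of any library:

```python
import json

def build_generate_request(model: str, prompt: str) -> dict:
    # JSON body for Ollama's /api/generate endpoint; stream=False asks
    # for a single JSON response instead of a token-by-token stream.
    return {"model": model, "prompt": prompt, "stream": False}

body = build_generate_request("llama3.1:latest", "Say hello in five words.")
print(json.dumps(body))
# With the server running, send it with:
#   curl http://localhost:11434/api/generate -d '<the JSON printed above>'
```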
Conclusion
Setting up Ollama and Open-WebUI locally lets you experiment with AI models without worrying about API costs. It gives you the freedom to test, tweak, and build without relying on external services.