
Setting Up a Local LLM for Use

Setting Up a Local LLM with Ollama and Open-WebUI on Windows (WSL)

I was looking to run a local Large Language Model (LLM) to experiment with coding against it without burning through OpenAI API credits. This guide walks you through setting up Ollama and Open-WebUI locally on Windows using WSL. Currently, I run my local LLM on an NVIDIA RTX 2070 with 8 GB of VRAM, which isn’t the best card but works for me. I plan to expand on these articles as I learn more about the optimal ways to use AI in my life.

I followed the steps below to set up my environment. In a separate blog post, I will demonstrate how to write a basic script to interact with the local LLM API using the OpenAI library.

Step 1: Install WSL (If Not Already Installed)

If you haven’t already set up WSL (Windows Subsystem for Linux), install it now; if you are already running Linux or are using the Windows version of Ollama, skip this step. Open PowerShell as Administrator and run:

wsl --install

Restart your system and ensure you have a Linux distribution installed (Ubuntu 24.04 is recommended).

Step 2: Install Ollama

First, inside WSL, install Ollama, which allows you to run LLMs locally:

curl -fsSL https://ollama.com/install.sh | sh

Once installed, verify by running:

ollama --version

Step 3: Download a Model

To start using Ollama, download a model (e.g., llama3.1:latest):

ollama pull llama3.1:latest

This will download and set up the model for local use.
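Once the pull finishes, you can sanity-check the model without any UI: Ollama exposes a local REST API on port 11434. Below is a minimal sketch of a request body for its /api/generate endpoint (the prompt text is just an example):

```shell
# Build a request body for Ollama's /api/generate endpoint
cat > request.json <<'EOF'
{
  "model": "llama3.1:latest",
  "prompt": "Why is the sky blue?",
  "stream": false
}
EOF

# Once Ollama is running (it listens on port 11434 by default), send it with:
# curl -s http://localhost:11434/api/generate -d @request.json
cat request.json
```

Setting "stream" to false returns the full completion in a single JSON response instead of a token-by-token stream, which is easier to inspect by hand.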

Step 4: Create a Custom Model File (Increase Context Window)

By default, Ollama loads models with a modest context window (2,048 tokens), regardless of what the model itself supports. To increase it, I created a custom Modelfile:

cat <<'EOF' > Modelfile
FROM llama3.1:latest
PARAMETER num_ctx 4096
EOF

Then, create the custom model:

ollama create my-custom-model -f Modelfile
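A Modelfile can carry more than num_ctx. Here is a sketch of a fuller one; the temperature value and system prompt are purely illustrative choices, not something the base model requires:

```
FROM llama3.1:latest

# Larger context window (in tokens)
PARAMETER num_ctx 4096

# Lower temperature = more deterministic output (example value)
PARAMETER temperature 0.7

# Optional system prompt baked into the custom model (example text)
SYSTEM "You are a concise coding assistant."
```

Anything you set here becomes the default for the custom model, so you don’t have to repeat it per request.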

Step 5: Set Up a Python Virtual Environment

Before installing Open-WebUI, it’s best to create a Python virtual environment:

python3 -m venv myvenv
source myvenv/bin/activate  # inside WSL; on native Windows, use myvenv\Scripts\activate
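To confirm the activation worked, check which Python is now on your PATH (assuming a POSIX shell inside WSL):

```shell
# Create the environment (safe to re-run) and activate it
python3 -m venv myvenv
. myvenv/bin/activate

# The active interpreter should now live inside myvenv
which python
```

If the printed path doesn’t point inside myvenv, the activation didn’t take effect in your current shell.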

Ensure pip is up to date:

pip install --upgrade pip

Step 6: Install Open-WebUI

Now, install Open-WebUI to interact with the model through a web interface. The PyPI package provides the open-webui command used in the next step (note that Open-WebUI currently requires Python 3.11):

pip install open-webui

Step 7: Run Open-WebUI

Once installed, start the WebUI:

open-webui serve

You should now be able to access Open-WebUI in your browser at http://localhost:8080 (the default port) and interact with your local LLM!

Conclusion

Setting up Ollama and Open-WebUI locally lets you experiment with AI models without worrying about API costs. It gives you the freedom to test, tweak, and build without relying on external services.