Demonstration of running Ollama and Open-WebUI locally in containers on a WSL2 environment.
Warning
This is a POC and is not intended to be run in production environments.
- Make sure a container engine (Podman in this case) is installed on your WSL2 environment.
- Install Ollama with the command below:
NOTE: here $PWD/ollama represents the host directory where model data will be saved.
podman run --name ollama --rm --detach --privileged --gpus all -p 11434:11434 -v $PWD/ollama:/root/.ollama ollama/ollama
- Verify the container is up using the command below:
podman container ls
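- (Optional) Confirm the Ollama API is reachable on the published port; this assumes curl is available in your WSL2 environment and should print something like "Ollama is running":
curl http://127.0.0.1:11434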
- Create an alias using the command below:
alias ollama='podman exec ollama ollama'
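- The alias lets you run the ollama CLI inside the container from the host shell, for example to list locally available models:
ollama list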
- Run the command below to pull an Ollama model:
ollama pull qwen2.5-coder:0.5b
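- (Optional) Send a quick test request to confirm the pulled model responds; the call below uses Ollama's /api/generate endpoint, and the prompt text is only an example:
curl http://127.0.0.1:11434/api/generate -d '{"model": "qwen2.5-coder:0.5b", "prompt": "Write hello world in Python", "stream": false}'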
- Make sure a container engine (Podman in this case) is installed on your WSL2 environment.
- Install Open-WebUI with the command below:
NOTE: here OLLAMA_BASE_URL represents the endpoint where Ollama is serving.
podman run --rm --detach --network host -p 8080:8080 -e WEBUI_AUTH=false -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui ghcr.io/open-webui/open-webui:main
- Verify the container is up using the command below:
podman container ls
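- (Optional) Since the container uses --network host, the UI is served directly on port 8080 and should be reachable in a browser at http://localhost:8080; a quick headless check (assuming curl is installed):
curl -I http://127.0.0.1:8080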
- Podman is used to run the Ollama and Open-WebUI containers.
- The model Ollama is running in this example: qwen2.5-coder:0.5b
- Presently inference runs on the CPU, as I have an Intel Iris Xe iGPU, which does not appear to be natively supported by Ollama at the moment.
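- One way to confirm this (assuming the alias created earlier and a model currently loaded) is the PROCESSOR column reported by:
ollama ps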
- Note: one potential way to get Ollama working with the Intel Iris Xe iGPU is to leverage Intel's ipex-llm acceleration library.
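- A related sanity check, not specific to ipex-llm: confirm WSL2 is exposing a GPU paravirtualization device at all. The device node below is the one WSL2 normally creates when GPU support is enabled:
ls -l /dev/dxg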
