# References
- https://github.com/ollama/ollama
- https://hub.docker.com/r/ollama/ollama
- https://github.com/open-webui/open-webui
# Notes
You can spawn Ollama first and then download the desired [LLM models](https://ollama.com/library) via `docker exec`. Alternatively, spawn the whole stack directly and download LLM models from within Open WebUI using a browser.
````
# spawn ollama and ui
docker compose up -d

# (optional) download an llm model via docker exec
docker exec ollama ollama run llama3:8b
````
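If you only want to fetch a model without attaching an interactive prompt, `ollama pull` downloads it and `ollama list` shows what is stored locally:

````
# download a model without starting an interactive session
docker exec ollama ollama pull llama3:8b

# list locally available models
docker exec ollama ollama list
````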
Afterwards, we can browse Open WebUI at `http://127.0.0.1:8080` and register our first user account. You may want to disable open user registration later on by uncommenting the `ENABLE_SIGNUP` environment variable and restarting the Open WebUI container.
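A minimal sketch of the relevant compose snippet, assuming the Open WebUI service is named `open-webui` in the compose file (adjust to your stack):

````
services:
  open-webui:
    environment:
      # disable open user registration after creating the first account
      - ENABLE_SIGNUP=false
````

Note that `docker compose restart` does not re-read the compose file; run `docker compose up -d` so the container is recreated with the new environment.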
> [!TIP]
>
> You likely want to pass a GPU into the Ollama container. Please read [this](https://hub.docker.com/r/ollama/ollama).
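
For NVIDIA GPUs with the NVIDIA Container Toolkit installed on the host, a compose-level sketch (service name `ollama` assumed) may look like this:

````
services:
  ollama:
    deploy:
      resources:
        reservations:
          devices:
            # reserve all available NVIDIA GPUs for the container
            - driver: nvidia
              count: all
              capabilities: [gpu]
````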