Addons

Ollama

The easiest way to get up and running with large language models locally

Open your Home Assistant instance and add this add-on repository.

Models are stored in /config/ollama.

Set the OLLAMA_HOST="http://homeassistant.local:11434" environment variable to use the Ollama CLI from another machine.

Refer to the Ollama documentation for further details.

Example for pulling models:

Using Ollama CLI:

export OLLAMA_HOST="http://homeassistant.local:11434"
ollama pull nomic-embed-text

Using curl:

curl http://homeassistant.local:11434/api/pull -d '{
  "name": "nomic-embed-text"
}'
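
To confirm a pull succeeded, you can query the Ollama API's /api/tags endpoint, which lists the models currently installed. A minimal sketch, assuming the add-on is reachable at the default homeassistant.local hostname (adjust for your network):

```shell
# Assumed host for the Ollama add-on; change if your instance uses a
# different hostname or port.
OLLAMA_HOST="http://homeassistant.local:11434"

# /api/tags returns the installed models as JSON; the pulled model
# should appear in the "models" array.
curl "$OLLAMA_HOST/api/tags"
```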

Ava Server automatically pulls requested models from Ollama on start.


© Mykhailo Marynenko 2024