# Local Instructor w/ Ollama
```elixir
Mix.install(
  [
    {:instructor, path: Path.expand("../../", __DIR__)}
  ]
)
```
## Introduction
Before running the code below, please ensure the following:

- **Pull the required model.** Run the following command to download the `qwen2.5:7b` model image so that it can be used by the Instructor adapter:

  ```
  ollama pull qwen2.5:7b
  ```

- **Start the server.** Run the following command to start the local Ollama server:

  ```
  ollama serve
  ```

Once you have the server running, you can call it with Instructor using the Ollama adapter.
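Optionally, you can confirm the server is reachable before continuing. This is a minimal sketch, assuming `Req` (a dependency of Instructor) is available in this notebook; a running Ollama server answers a plain GET on its root URL with the text "Ollama is running":

```elixir
# Optional sanity check: a running Ollama server responds to plain GET
# requests on its default port (11434) with "Ollama is running".
Req.get!("http://localhost:11434").body
```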
There are three ways to configure Instructor to use Ollama:

- via `Mix.install([...], config: [instructor: [adapter: Instructor.Adapters.Ollama, ollama: [...]]])`
- via `config :instructor, adapter: Instructor.Adapters.Ollama, ollama: [...]` in your application config (see the sketch after this list)
- at runtime via `Instructor.chat_completion(..., config)`
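For illustration, the application-config route might look like the sketch below. The `api_url` option name is an assumption (check the docs for your installed adapter version); `http://localhost:11434` is Ollama's default local endpoint.

```elixir
# config/config.exs (sketch only; not used by this livebook)
import Config

config :instructor,
  adapter: Instructor.Adapters.Ollama,
  # `api_url` is an assumed option name; 11434 is Ollama's default port
  ollama: [api_url: "http://localhost:11434"]
```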
For brevity, in this livebook, we'll configure it at runtime:
```elixir
config = [
  adapter: Instructor.Adapters.Ollama
]
```
Next, let's define an Ecto schema describing the data we want back from the model:

```elixir
defmodule President do
  use Ecto.Schema
  use Instructor

  # Embedded schemas are not backed by a database table, so no primary key is needed
  @primary_key false
  embedded_schema do
    field(:first_name, :string)
    field(:last_name, :string)
    field(:entered_office_date, :date)
  end
end
```
Now we can ask the model a question and have Instructor coerce its JSON reply into that schema:

```elixir
Instructor.chat_completion(
  [
    model: "qwen2.5:7b",
    mode: :json,
    response_model: President,
    messages: [
      %{role: "user", content: "Who was the first president of the United States?"}
    ]
  ],
  config
)
```
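If the server is running and the model has been pulled, this should return the parsed struct, along the lines of the following (the exact field values depend on the model's answer):

```elixir
# Example result shape: Instructor returns {:ok, struct} on success
{:ok,
 %President{
   first_name: "George",
   last_name: "Washington",
   entered_office_date: ~D[1789-04-30]
 }}
```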