Open-Source LLM Prompt Engineering
Mix.install([
  :req,
  :jason
])
Ollama API Locally
defmodule Ollama do
  # Local generation can be slow, so allow a generous receive timeout (5 min).
  @timeout 300_000
  @url "http://localhost:11434/api/generate"

  def call(model, prompt) do
    # The /api/generate endpoint takes a JSON payload;
    # "stream" => false returns the whole completion in a single response.
    payload =
      Jason.encode!(%{
        "model" => model,
        "prompt" => prompt,
        "stream" => false
      })

    resp = Req.post!(@url, body: payload, receive_timeout: @timeout)

    # Req decodes the JSON body, so the generated text lives under "response".
    if resp.status == 200 do
      resp.body["response"]
    else
      inspect(resp.body)
    end
  end
end
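A quick sanity check of the wrapper (this assumes an Ollama server is running locally on port 11434 and that the mistral model has already been pulled):
Ollama.call("mistral", "Say hello in one short sentence.")
|> IO.puts()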
Main points for prompting LLMs
- How the LLM was finetuned and packaged behind an API impacts how to prompt it.
- Different LLMs & LLM versions mean different prompts.
- Prompting is not software engineering. It’s closer to googling.
- Prompts are just strings. Do not overcomplicate it.
- RAG is prompt engineering. Do not overcomplicate it. It just concatenates retrieved strings and adds them to the prompt (see the sketch after this list).
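As a sketch of that last point: at the prompt level, RAG is nothing more than gluing retrieved snippets onto the question. The retrieve/1 function and its hard-coded snippets below are hypothetical placeholders standing in for a real vector store or search index.
defmodule NaiveRAG do
  # Hypothetical retrieval step: a real system would query a vector store
  # or search index; here it just returns hard-coded placeholder snippets.
  def retrieve(_question) do
    [
      "Zucchini is a summer squash rich in vitamin C and potassium.",
      "Children often accept new vegetables only after repeated tastings."
    ]
  end

  # RAG as string concatenation: context + question, nothing more.
  def build_prompt(question) do
    context =
      question
      |> retrieve()
      |> Enum.join("\n")

    """
    Use the context below to answer the question.

    Context:
    #{context}

    Question: #{question}
    """
  end
end

NaiveRAG.build_prompt("Why should a child eat zucchini?")
|> IO.puts()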
Let’s start with a memorable analogy
You go out every day with pants on. You see everyone wearing pants. You’re told it’s right to wear pants. You are a good person for wearing pants.
Every day, you act normal, because you wear pants.
Then, one day, you go out and you’re not wearing pants.
Do you still act normal?
Note: “Wearing pants” is an analogy for Prompt Setting.
Prompt Setting Mistral
prompt = "Respond kindly to the child: i really hate zucchini. why should i eat it?"
# prompt setting
# [INST] {text} [/INST]
prompt_with_setting = "[INST] #{prompt} [/INST]"
Ollama.call("mistral", prompt)
|> IO.puts()
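For the "no pants" case from the analogy, send the same text without the [INST] wrapper and compare the two replies:
# Same question, no prompt setting.
Ollama.call("mistral", prompt)
|> IO.puts()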
Prompt Setting Llama 2
prompt = "You are a healths food nut. I'm drinking green juice"
# [INST] <<SYS>>\n{system_text}\n<</SYS>>\n\n{text} [/INST]
sys_prompt =
"You are a game desinger and developer of 2D games. You have experience as designer and a javascript developer."
prompt =
"Give me step by step design of minimalistic Tettris game. I want the design and the instructions of how the game to be implemented to be detailed and complete."
prompt_with_setting = "[INST]<>\n#{sys_prompt}<>\n#{prompt}[/INST]"
Ollama.call("llama2:7b", prompt_with_setting)
|> IO.puts()
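Hand-building these wrappers gets error-prone quickly, so it can help to keep the Llama 2 template in one small helper. This is a minimal sketch; the Llama2Prompt module and its format/2 function are my own naming, not part of Ollama.
defmodule Llama2Prompt do
  # Wraps a system prompt and a user prompt in the Llama 2 chat template:
  # [INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]
  def format(sys_prompt, user_prompt) do
    "[INST] <<SYS>>\n#{sys_prompt}\n<</SYS>>\n\n#{user_prompt} [/INST]"
  end
end

# Produces the same string as the manual interpolation above.
Llama2Prompt.format(sys_prompt, prompt) == prompt_with_setting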