Bringing Elixir to life
Mix.install(
  [
    {:kino, "~> 0.9.0"},
    {:evision, "~> 0.1.21"},
    {:req, "~> 0.3"},
    {:kino_vega_lite, "~> 0.1.7"},
    {:kino_bumblebee, "~> 0.3.0"},
    {:exla, "~> 0.5.1"},
    {:kino_explorer, "~> 0.1.4"}
  ],
  config: [nx: [default_backend: EXLA.Backend]]
)
Processes
You might have heard that we can create millions of processes:
for _ <- 1..1_000_000 do
  spawn(fn -> :ok end)
end
Processes communicate by sending messages to each other:
child =
  spawn(fn ->
    receive do
      {:ping, caller} -> send(caller, :pong)
    end
  end)

send(child, {:ping, self()})

receive do
  :pong -> :it_worked!
end
And Livebook can help us see how processes communicate with each other:
Kino.Process.render_seq_trace(fn ->
  child =
    spawn(fn ->
      receive do
        {:ping, caller} -> send(caller, :pong)
      end
    end)

  send(child, {:ping, self()})

  receive do
    :pong -> :it_worked!
  end
end)
Maybe you want to see how Elixir can perform multiple tasks at once, scaling on both CPU and IO?
Kino.Process.render_seq_trace(fn ->
  1..4
  |> Task.async_stream(
    fn _ -> Process.sleep(Enum.random(100..300)) end,
    max_concurrency: 4
  )
  |> Stream.run()
end)
Messages can also be distributed across nodes. Let’s try to execute something in the Distributed module:
Distributed.hello_world()
But what if Distributed is defined in another notebook?
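For reference, the other notebook only needs to define a module with that name. A minimal sketch could look like this (the function body is hypothetical; the real notebook may do more):

defmodule Distributed do
  # Hypothetical implementation: the other notebook may define this differently
  def hello_world do
    "hello world from #{node()}"
  end
end

To call it from here, we first need the other runtime's node name and cookie: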
node =
  Kino.Input.text("Node")
  |> Kino.render()
  |> Kino.Input.read()
  |> String.to_atom()

cookie =
  Kino.Input.text("Cookie")
  |> Kino.render()
  |> Kino.Input.read()
  |> String.to_atom()
Node.set_cookie(node, cookie)
:erpc.call(node, Distributed, :hello_world, [])
Maybe we want to learn about supervisors too? For example, we can look at the supervision tree of the :mix application, which is rooted at Mix.Supervisor.
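One way to render it is Kino.Process.render_app_tree/1, which draws an application's supervision tree as a diagram (a minimal sketch; the choice of helper is ours):

# Render the supervision tree of the :mix application (started by Mix.install above)
Kino.Process.render_app_tree(:mix)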
And it turns out that processes and distribution are the major abstractions that make Livebook work.
Multiplayer runtime
Your code runs in a separate Erlang VM instance using the distribution channels we learned earlier:
flowchart LR;
  subgraph Livebook
    l[Session];
  end
  subgraph Runtime
    c[Code];
    l--Erlang distribution-->c;
  end
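We can see that distribution from inside the runtime with plain Erlang/Elixir introspection calls (nothing Livebook-specific; the Livebook node typically shows up as a hidden connection rather than in Node.list/0):

# This runtime is a regular Erlang node; the Livebook session connects to it
# over Erlang distribution, usually as a hidden node
{node(), Node.list(:hidden)}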
This brings a separation of concerns where your code actually knows nothing about Livebook. The Kino library is the one responsible for connecting both sides and supporting additional features, such as the rendering of outputs.
Anyone can create their own outputs. We have two kinds: static and live. Let’s build a counter as a live output:
defmodule CounterExample do
  use Kino.JS
  use Kino.JS.Live

  def new(count) do
    Kino.JS.Live.new(__MODULE__, count)
  end

  @impl true
  def init(count, ctx) do
    {:ok, assign(ctx, count: count)}
  end

  @impl true
  def handle_connect(ctx) do
    {:ok, ctx.assigns.count, ctx}
  end

  @impl true
  def handle_event("bump", _, ctx) do
    ctx = update(ctx, :count, &(&1 + 1))
    broadcast_event(ctx, "update", ctx.assigns.count)
    {:noreply, ctx}
  end
asset "main.js" do
"""
export function init(ctx, count) {
ctx.root.innerHTML = `
Bump
`;
const countEl = document.getElementById("count");
const bumpEl = document.getElementById("bump");
countEl.innerHTML = count;
ctx.handleEvent("update", (count) => {
countEl.innerHTML = count;
});
bumpEl.addEventListener("click", (event) => {
ctx.pushEvent("bump");
});
}
"""
end
end
CounterExample.new(0)
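That gives us a live output. For comparison, a static output has no server-side callbacks at all; the sketch below follows the shape of the Kino.JS documentation example (the module name and HTML are ours):

defmodule StaticHTML do
  use Kino.JS

  # A static output ships its data to the client once and keeps no server state
  def new(html) do
    Kino.JS.new(__MODULE__, html)
  end

  asset "main.js" do
    """
    export function init(ctx, html) {
      ctx.root.innerHTML = html;
    }
    """
  end
end

StaticHTML.new("<h3>Hello from a static output!</h3>")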
If you open up this same Livebook in another tab, you will see that both Livebook and your outputs are collaborative, opening the way to building all sorts of collaborative applications!
This is possible because each “live” output is a separate process and they can all run concurrently. Here is what the underlying architecture looks like:
flowchart LR;
  subgraph Clients
    b1((Browser #1));
    b2((Browser #2));
  end
  subgraph Livebook
    l[Session];
    b1--WebSockets-->l;
    b2--WebSockets-->l;
  end
  subgraph Runtime
    c[Code];
    o1[Output #1];
    o2[Output #2];
    l--Erlang distribution-->c;
    l--Erlang distribution-->o1;
    l--Erlang distribution-->o2;
  end
Interactive data
The Erlang VM provides a great set of tools for observability. Let’s gather information about all processes:
processes =
  for pid <- Process.list() do
    info = Process.info(pid, [:reductions, :memory, :status])

    %{
      pid: inspect(pid),
      reductions: info[:reductions],
      memory: info[:memory],
      status: info[:status]
    }
  end
Actually, let’s put this data into a table:
Kino.DataTable.new(processes)
Smart cells: meta-programmable notebooks
But how do we plot it?
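This is where a Chart smart cell comes in. Under the hood it generates ordinary VegaLite code, roughly like the following sketch (field names come from the processes data above; the generated code may differ in its details):

# A sketch of the kind of code the Chart smart cell generates for the data above
VegaLite.new(width: 600)
|> VegaLite.data_from_values(processes)
|> VegaLite.mark(:point)
|> VegaLite.encode_field(:x, "reductions", type: :quantitative)
|> VegaLite.encode_field(:y, "memory", type: :quantitative)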
Smart cells run as part of your code and you can create any Smart cell that you want. They build on top of live outputs and share the same building blocks:
flowchart LR;
  subgraph Clients
    b1((Browser #1));
    b2((Browser #2));
  end
  subgraph Livebook
    l[Session];
    b1--WebSockets-->l;
    b2--WebSockets-->l;
  end
  subgraph Runtime
    c[Code];
    sc[Smart cell];
    o1[Output #1];
    o2[Output #2];
    l--Erlang distribution-->c;
    l--Erlang distribution-->o1;
    l--Erlang distribution-->o2;
    l--Erlang distribution-->sc;
  end
Erlang VM processes all the way down!
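To make that concrete, here is a minimal Smart cell sketch modeled on the plain code editor example from the Kino.SmartCell documentation (the module name and UI are ours): it shows a text area and uses whatever you type as the generated cell source.

defmodule PlainEditor do
  use Kino.JS
  use Kino.JS.Live
  use Kino.SmartCell, name: "Plain code editor"

  @impl true
  def init(attrs, ctx) do
    # attrs carries the state persisted in the notebook file
    {:ok, assign(ctx, source: attrs["source"] || "")}
  end

  @impl true
  def handle_connect(ctx) do
    {:ok, %{source: ctx.assigns.source}, ctx}
  end

  @impl true
  def handle_event("update", %{"source" => source}, ctx) do
    # Keep all connected clients in sync
    broadcast_event(ctx, "update", %{"source" => source})
    {:noreply, assign(ctx, source: source)}
  end

  @impl true
  def to_attrs(ctx) do
    %{"source" => ctx.assigns.source}
  end

  @impl true
  def to_source(attrs) do
    # The returned string becomes the code the smart cell evaluates
    attrs["source"]
  end

  asset "main.js" do
    """
    export function init(ctx, payload) {
      ctx.root.innerHTML = `<textarea id="source" rows="4" cols="40"></textarea>`;

      const textarea = ctx.root.querySelector("#source");
      textarea.value = payload.source;

      textarea.addEventListener("change", (event) => {
        ctx.pushEvent("update", { source: event.target.value });
      });

      ctx.handleEvent("update", ({ source }) => {
        textarea.value = source;
      });
    }
    """
  end
end

Kino.SmartCell.register(PlainEditor)

Once registered, the cell appears in this session's smart cell picker.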
You can also publish Smart cells as packages to Hex.pm and install them like any other Elixir package.
Live programming
We have started exploring Live programming ideas only recently and we’ve already seen how Livebook interacts with your code and the runtime to generate sequential traces.
There is another feature we want to show, which is how we can use Elixir pipelines and its dbg macro to manipulate code, as shown in “Unravel: A Fluent Code Explorer for Data Wrangling” by Nischal Shrestha and collaborators:
"Elixir is cool!"
|> String.trim_trailing("!")
|> String.split()
|> Enum.reverse()
|> List.first()
|> dbg()
We have been very excited to see that our generalization scales to different use cases. Here is a community example. Let’s start with an image:
%{body: image} =
  Req.get!("https://raw.githubusercontent.com/pjreddie/darknet/master/data/dog.jpg")

Kino.Image.new(image, :jpeg)
And now let’s transform this image:
alias Evision, as: OpenCV

rotation = OpenCV.getRotationMatrix2D({512 / 2, 512 / 2}, 90, 1)

image
|> OpenCV.imdecode(OpenCV.Constant.cv_IMREAD_ANYCOLOR())
|> OpenCV.blur({9, 9})
|> OpenCV.warpAffine(rotation, {512, 512})
|> OpenCV.rectangle({50, 10}, {125, 60}, {255, 0, 0})
|> OpenCV.ellipse({300, 300}, {100, 200}, 30, 0, 360, {255, 255, 0}, thickness: 3)
|> dbg()

:ok
For more examples, see this Livebook by Ryo Wakabayashi.
What if we provide “Examples” via doctests?
defmodule HelloWorld do
  @doc """
  iex> HelloWorld.my_addition(1, 2)
  3

  iex> HelloWorld.my_addition(1, 2)
  4
  """
  def my_addition(a, b) do
    a + b
  end
end
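Livebook reports doctest results right next to the module definition above (note that the second example expects 4 and will fail). The very same doctests can also be run with ExUnit, using a standard pattern that is not specific to Livebook:

# Run the doctests above with ExUnit; the failing example is reported like any test failure
ExUnit.start(auto_run: false)

defmodule HelloWorldTest do
  use ExUnit.Case, async: true
  doctest HelloWorld
end

ExUnit.run()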
We are excited about exploring this because it promotes best practices across documentation, testing, and debugging.