# Jacobi Method
```elixir
metnum_url = "https://github.com/santiago-imelio/metnum.git"

Mix.install([
  {:metnum, git: metnum_url, branch: "main"},
  {:exla, "~> 0.7.3"},
  {:tucan, "~> 0.3.1"},
  {:kino_vega_lite, "~> 0.1.13"}
])

Nx.global_default_backend(EXLA.Backend)
Nx.Defn.global_default_options(compiler: EXLA)
```
## Introduction
The Jacobi method is an iterative algorithm that approximates the solution of a strictly diagonally dominant system of linear equations. Each iteration computes a new approximation of the solution from the previous one. When two consecutive approximations are close enough, we say the algorithm has converged to the desired solution.
Given an $n \times n$ system of linear equations $Ax = b$, the Jacobi method can be expressed as the sequence of approximations $\{x^{(k)}\}_{k \in \mathbb{N}}$, where each term is computed as

$$ x_i^{(k+1)} = \frac{1}{a_{ii}} \left( b_i - \sum_{\substack{j=1 \\ j \neq i}}^n a_{ij} x_j^{(k)} \right), \quad \forall i \in \{1, \dots, n\} $$
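In matrix form, with $D$ the diagonal part of $A$ and $R = A - D$ its off-diagonal part, the same update reads $x^{(k+1)} = D^{-1}\left(b - R x^{(k)}\right)$. Below is a minimal sketch of a single update step using only Nx; the `JacobiStep` module is made up for illustration and is not Metnum's actual implementation:

```elixir
defmodule JacobiStep do
  import Nx.Defn

  # One Jacobi update: x_next = D^{-1} (b - R x), where D is the
  # diagonal of A and R = A - D holds the off-diagonal entries.
  defn step(a, b, x) do
    d = Nx.take_diagonal(a)
    r = a - Nx.make_diagonal(d)
    (b - Nx.dot(r, x)) / d
  end
end
```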
The coefficient matrix $A$ must be strictly diagonally dominant. That is
$$ |a_{ii}| > \sum_{\substack{j=1 \\ j \neq i}}^n |a_{ij}|, \quad \forall i $$
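This condition is easy to check programmatically before running the method. A small sketch with plain Nx; the `diagonally_dominant?` helper is hypothetical, not part of Metnum:

```elixir
# Returns true if |a_ii| > sum over j != i of |a_ij| holds on every row.
diagonally_dominant? = fn a ->
  diag = a |> Nx.take_diagonal() |> Nx.abs()

  # Sum of absolute values per row, minus the diagonal entry.
  off_diag_sums = a |> Nx.abs() |> Nx.sum(axes: [1]) |> Nx.subtract(diag)

  Nx.all(Nx.greater(diag, off_diag_sums)) |> Nx.to_number() == 1
end
```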
## Example
We will demonstrate our implementation with the following system of linear equations
$$ \begin{cases} 2x_1 - x_2 = 1 \\ -x_1 + 3x_2 - x_3 = 2 \\ -x_2 + 4x_3 - x_4 = 3 \\ -x_3 + 5x_4 = 4 \end{cases} $$
The coefficient matrix $A$ is strictly diagonally dominant, so the Jacobi method is guaranteed to converge.
```elixir
a =
  Nx.tensor([
    [2, -1, 0, 0],
    [-1, 3, -1, 0],
    [0, -1, 4, -1],
    [0, 0, -1, 5]
  ])

b = Nx.tensor([1, 2, 3, 4])
x_0 = Nx.tensor([0, 0, 0, 0])
:ok
```
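If you evaluated the hypothetical `diagonally_dominant?` sketch from the introduction, you can sanity-check the matrix:

```elixir
# Expected to return true for this matrix
diagonally_dominant?.(a)
```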
We run the Jacobi method with the initial approximation $x_0 = (0,0,0,0)$.
```elixir
epochs = 15

jacobi_seq = Metnum.SOE.jacobi(a, b, x_0, max_epochs: epochs)
```
```
#Nx.Tensor<
  f32[15][4]
  EXLA.Backend
  [
    [0.0, 0.0, 0.0, 0.0],
    [0.5, 0.6666666865348816, 0.75, 0.800000011920929],
    [0.8333333730697632, 1.0833333730697632, 1.1166666746139526, 0.949999988079071],
    [1.0416667461395264, 1.3166667222976685, 1.2583333253860474, 1.0233333110809326],
    [1.1583333015441895, 1.433333396911621, 1.3350000381469727, 1.0516666173934937],
    [1.2166666984558105, 1.4977778196334839, 1.371250033378601, 1.0670000314712524],
    [1.2488889694213867, 1.5293055772781372, 1.391194462776184, 1.0742499828338623],
    [1.2646527290344238, 1.5466943979263306, 1.4008889198303223, 1.078238844871521],
    [1.2733471393585205, 1.555180549621582, 1.406233310699463, 1.0801777839660645],
    [1.277590274810791, 1.5598602294921875, 1.4088395833969116, 1.0812466144561768],
    [1.2799301147460938, 1.562143325805664, 1.4102766513824463, 1.0817679166793823],
    [1.281071662902832, 1.5634021759033203, 1.410977840423584, 1.0820553302764893],
    [1.2817010879516602, 1.5640164613723755, ...],
    ...
  ]
>
```
As the output shows, the iterates settle towards a solution after several iterations. A conventional way of measuring convergence is the norm of the error of each increment:
$$ ||e^{(k)}|| = ||x^{(k)}-x^{(k-1)}|| $$
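This increment norm also suggests a stopping rule: iterate until $||e^{(k)}||$ falls below a tolerance. Here is a sketch built on the hypothetical `JacobiStep.step/3` from above; Metnum's `jacobi/4` is driven by `max_epochs` in this notebook, and we make no assumption about any tolerance option it may support:

```elixir
# Iterate until the increment norm drops below tol, or max_iter is hit.
jacobi_until = fn a, b, x0, tol, max_iter ->
  Enum.reduce_while(1..max_iter, x0, fn _k, x ->
    x_next = JacobiStep.step(a, b, x)
    err = Nx.subtract(x_next, x) |> Nx.LinAlg.norm() |> Nx.to_number()

    if err < tol, do: {:halt, x_next}, else: {:cont, x_next}
  end)
end
```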
```elixir
real_solution = Nx.LinAlg.solve(a, b)
```
```
#Nx.Tensor<
  f32[4]
  EXLA.Backend
  [1.2823528051376343, 1.5647058486938477, 1.4117647409439087, 1.082352876663208]
>
```
```elixir
# 2-norm of every approximation in the sequence
norms =
  jacobi_seq
  |> Nx.vectorize(:steps)
  |> Nx.LinAlg.norm(ord: 2)
  |> Nx.devectorize(keep_names: false)
  |> Nx.to_list()

# 2-norm of the difference between consecutive approximations
errors =
  Nx.subtract(jacobi_seq[1..(epochs - 1)], jacobi_seq[0..(epochs - 2)])
  |> Nx.vectorize(:deltas)
  |> Nx.LinAlg.norm(ord: 2)
  |> Nx.devectorize(keep_names: false)
  |> Nx.to_list()

# Norm of the exact solution, used as a reference line in the plot
real_solution_norm =
  real_solution
  |> Nx.LinAlg.norm(ord: 2)
  |> Nx.to_number()

:ok
```
```elixir
data_norms = [norm: norms, epoch: 0..(epochs - 1)]
data_errors = [error: errors, epoch: 1..(epochs - 1)]

norm_plot_opts = [title: "2-norm by epoch", height: 300, width: 400, points: true]
hruler_opts = [line_color: "red", stroke_width: 2, stroke_dash: [3]]

error_plot_opts = [
  title: "Evolution of iteration error",
  height: 300,
  width: 300,
  points: true,
  line_color: "green",
  point_color: "green"
]

plot =
  Tucan.hconcat([
    Tucan.lineplot(data_norms, "epoch", "norm", norm_plot_opts)
    |> Tucan.hruler(real_solution_norm, hruler_opts),
    Tucan.lineplot(data_errors, "epoch", "error", error_plot_opts)
  ])

# Comment out the `:ok` below to render the plot with KinoVegaLite
:ok
```
*Evolution of the norm and incremental error of each approximation*