
Using the Diffo Provider Instance Extension


Mix.install(
  [
    {:diffo, path: "/Users/Beanlanda/git/diffo"}
  ],
  config: [
    diffo: [ash_domains: [Diffo.Provider]]
  ],
  consolidate_protocols: false
)

Overview

Diffo is a Telecommunications Management Forum (TMF) Service and Resource Manager, built for autonomous networks.

It is implemented using the Ash Framework, leveraging core and community extensions, including some created and maintained by diffo-dev. As such it is highly customizable using the Spark DSL and, where necessary, Elixir. If you are not already familiar with Ash, please explore Ash Get Started

First ensure you’ve explored the Diffo Livebook for an introduction to Diffo: Run in Livebook

In this ‘Diffo Provider Instance Extension’ livebook you will learn about:

  • TMF Services and Resources
  • Building your own Domain
  • Declaring a Composite Resource using the Instance Extension
  • Using the Assigner
  • Composing a Resource from partially assigned Resources
  • Declaring domain Parties using the Party Extension
  • Declaring domain Places using the Place Extension

Installing Neo4j and Configuring Bolty

Diffo uses the Ash Neo4j DataLayer, which requires Neo4j to be installed and running.

While Neo4j Community Edition is open source and can be built from source, you'll most likely use a packaged installation. You can install the latest major Neo4j versions from the community tab at the Neo4j Deployment Center, or use the 5.26.8 direct link.

When you install Neo4j you'll typically be given a default username and password. Take note of these, and of any other non-standard configuration.

Update the configuration below as necessary and evaluate.

config = [
  uri: "bolt://localhost:7687",
  auth: [username: "neo4j", password: "password"],
  user_agent: "diffoLivebook/1",
  pool_size: 15,
  max_overflow: 3,
  prefix: :default,
  name: Bolt,
  log: false,
  log_hex: false
]

Bolty needs a process in your supervision tree; this will start one with the config above if one is not already running:

AshNeo4j.BoltyHelper.start(config)

Now you should be able to verify that Neo4j is running:

AshNeo4j.BoltyHelper.is_connected()

You can get all nodes related to other nodes with the following query:

AshNeo4j.Cypher.run("MATCH (n1)-[r]->(n2) RETURN r, n1, n2 LIMIT 50")

It is helpful to have a Neo4j browser open locally, typically:

http://localhost:7474/browser/

Once you connect and issue a query like the one above you’ll be able to explore the results interactively.

OPTIONAL: if you want to clear your database, you can evaluate:

AshNeo4j.Neo4jHelper.delete_all()

TMF Services and Resources

TMF Services are network services with industry-standard structure and APIs, operated for you by a Provider entity. Ideally TMF Services are as abstract as possible: the Consumer specifies their intent (often by selecting a service from a catalog and providing minimal configuration of features and/or characteristics), allowing the Provider to deliver the service as it best sees fit. This is powerful as it enables advanced use cases, like move and technology change, and allows the Provider to optimise and even dynamically recompose the service.

TMF Resources are generally network resources that need to be assigned to provide a service. They are usually too low-level to have value on their own and, where possible, are entirely hidden from the product layer.

TMF Services are generally composed of services and/or resources. TMF Resources can also be composed of resources (but not services).

TMF Services and Resources are similar in that they each have a Specification, and are defined by Features and Characteristics. They also can have outgoing relationships with other services and resources, indeed this is fundamental to composition and in particular resource assignment.

Resources are generally created/managed/owned by a Provider, and assigned to a Consumer. Often the assignment is effectively a lease during which period the consumer has exclusive use of the resource under the provider’s conditions, effectively ‘owning’ the resource.

When a Provider creates a pool of resources, this is known as ‘allocation’. For instance, a VLAN pool may contain VLAN IDs 0..4095, and perhaps a new pool is inherently allocated with either a new interface or the creation of a logical L2 VLAN domain.

When a Consumer is leased a resource, this is known as ‘assignment’.

Assignment is effectively a request for a relationship from a Provider Resource ‘back up’ to a Consumer Service or Resource. There are different variants on this:

  • Specific Resource assignment - the specific resource requested by the Consumer is assigned
  • ‘To specification’ Resource assignment - an entire resource is assigned by the Provider, allocation may be ‘just in time’
  • Partial Resource assignment - a partial resource is assigned by the Provider, the consumer is aware of the ‘pool resource’.
  • Specific partial resource assignment - a partial resource requested by the Consumer is assigned

In all cases the assignment is only successful if the Provider allows the requested relationship to occur from it back to the Consumer.

Partial resource assignment uses a relationship characteristic to indicate which part of the resource is optionally requested and ultimately assigned.
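As a rough sketch in plain Elixir (not Diffo's actual implementation; all names here are hypothetical), partial assignment can be thought of as leasing the next free part from an allocated pool and recording the lease as a relationship back to the Consumer, enriched with a characteristic naming the assigned part:

```elixir
defmodule PartialAssignmentSketch do
  # Illustrative only: models a Provider's pool resource and partial
  # assignment as 'reverse' relationships back to Consumer instances.

  # Allocation: the Provider creates the pool of assignable parts.
  def allocate(first, last), do: %{free: Enum.to_list(first..last), assignments: []}

  # Partial assignment: lease the next free part to a Consumer, recorded as a
  # relationship from the pool resource back to the Consumer instance, with a
  # relationship characteristic indicating which part was assigned.
  def assign(%{free: [part | rest], assignments: assignments}, consumer_id) do
    relationship = %{type: :assignedTo, target: consumer_id, characteristic: %{part: part}}
    {:ok, %{free: rest, assignments: [relationship | assignments]}}
  end

  # The assignment fails if nothing is left to lease.
  def assign(%{free: []}, _consumer_id), do: {:error, :exhausted}
end

pool = PartialAssignmentSketch.allocate(0, 4095)
{:ok, pool} = PartialAssignmentSketch.assign(pool, "consumer-1")
```

Diffo's Assigner (introduced below) manages this bookkeeping for you via a characteristic on the host resource.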

Instance Extension

Diffo.Provider.Instance models either a Service or Resource. It is built from the Diffo.Provider.BaseInstance Spark.Dsl.Fragment. There is no need to evaluate the Diffo.Provider.Instance module below; it is already defined.

defmodule Diffo.Provider.Instance do
  @moduledoc """
  Ash Resource for a TMF Service or Resource Instance
  """
  alias Diffo.Provider.BaseInstance

  use Ash.Resource,
    fragments: [BaseInstance],
    domain: Diffo.Provider

  resource do
    description "An Ash Resource for a TMF Service or Resource Instance"
    plural_name :instances
  end
end

Diffo also has an inbuilt Spark DSL extension Diffo.Provider.Instance.Extension which provides DSL and functions for use in building and operating domain specific services and resources.

The extension has two top-level sections:

structure do — describes the static shape of the Instance kind: its TMF Specification, Characteristics, Features, Party roles, and Place roles. All declarations are baked into the module at compile time and introspectable at runtime via generated functions (specification/0, characteristics/0, features/0, parties/0, places/0) and Diffo.Provider.Instance.Info.

behaviour do — declares which Ash actions should be wired for instance lifecycle management. Declaring create :name injects :specified_by, :features, and :characteristics arguments onto that action, and the BuildBefore/BuildAfter changes registered on BaseInstance automatically handle specification upsert, feature and characteristic creation, party validation, and graph relationship wiring for every create action. You write the action body for your domain-specific accepts and arguments; the structural wiring is handled for you.

Feature and Instance Characteristics can have payloads defined by Ash.TypedStruct. TypedStructs are DSL-specified types which are effectively lightweight embedded resources. We’ve extended both AshJason and AshOutstanding to support Ash.TypedStruct.

For partial resource allocation and assignment we’ve created Diffo.Provider.Assigner. It is used by the host resource, which declares a characteristic with a Diffo.Provider.AssignableValue TypedStruct. Allocation is managed within the Provider domain using this characteristic. Assignment to Services or Resources is via ‘reverse’ type: “assignedTo” relationships enriched by relationship characteristics.

We can still use the Diffo.Provider APIs, noting that they will return Diffo.Provider.Instance rather than our specific domain resource, but we’ll use our own domain API linked to specific actions.

Let’s imagine a Compute domain which operates GPU and NPU resources. We want to expose a Cluster composite resource which can be dynamically composed of a number of GPU and NPU cores.

Each instance of Cluster could be created on Consumer demand as a ‘container’ for the GPU and NPU core partial resources.

Each of the GPU and NPU Resource instances is created and managed by the Provider and is effectively a resource pool for individually assignable cores.

We’ll define all the resources first, then declare the Diffo.Compute domain once they are all compiled — Ash validates code_interface at domain compile time so all referenced resources must exist first.

Declaring a Composite Resource

We will start by declaring the Cluster Resource. It is going to be a composite resource, where it can be assigned individual GPU and NPU cores via resource relationships. It is an Ash.Resource incorporating the Diffo.Provider.BaseInstance fragment.

defmodule Diffo.Compute.Cluster do
  @moduledoc """
  Cluster Resource Instance
  """

  alias Diffo.Provider.BaseInstance
  alias Diffo.Provider.Instance.Relationship
  alias Diffo.Provider.Instance.Characteristic
  alias Diffo.Compute
  alias Diffo.Compute.ClusterValue
  alias Diffo.Compute.Tenant
  alias Diffo.Compute.Engineer

  use Ash.Resource,
    fragments: [BaseInstance],
    domain: Compute

  resource do
    description "An Ash Resource representing a Cluster"
    plural_name :Clusters
  end

  structure do
    specification do
      id "4bcfc4c9-e776-4878-a658-e8d81857bed7"
      name "cluster"
      type :resourceSpecification
      description "A Cluster Resource Instance"
      category "Network Resource"
    end

    characteristics do
      characteristic :cluster, ClusterValue
    end

    parties do
      party :operator, Tenant
      party :manager, Engineer
    end

    places do
      place :data_centre, Diffo.Compute.DataCentre
    end
  end

  behaviour do
    actions do
      create :build
    end
  end

  actions do
    create :build do
      description "creates a new Cluster resource instance for build"
      accept [:id, :name, :type, :which]
      argument :relationships, {:array, :struct}
      argument :places, {:array, :struct}
      argument :parties, {:array, :struct}

      change set_attribute(:type, :resource)
      change load [:href]
      upsert? false
    end

    update :define do
      description "defines the cluster"
      argument :characteristic_value_updates, {:array, :term}

      change after_action(fn changeset, result, _context ->
               with {:ok, result} <- Characteristic.update_values(result, changeset),
                    {:ok, cluster} <- Compute.get_cluster_by_id(result.id),
                    do: {:ok, cluster}
             end)
    end

    update :relate do
      description "relates the cluster with other instances"
      argument :relationships, {:array, :struct}

      change after_action(fn changeset, result, _context ->
               with {:ok, _cluster} <- Relationship.relate_instance(result, changeset),
                    {:ok, cluster} <- Compute.get_cluster_by_id(result.id),
                    do: {:ok, cluster}
             end)
    end
  end
end
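Once the module compiles, the structure declared above should be introspectable via the generated functions mentioned in the extension overview. A sketch (the exact return shapes depend on your Diffo version):

```elixir
# Hedged: these generated functions are described in the extension overview;
# exact return values depend on the Diffo version in use.
Diffo.Compute.Cluster.specification()
Diffo.Compute.Cluster.characteristics()
Diffo.Compute.Cluster.parties()
Diffo.Compute.Cluster.places()
```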

And of course we’ll need a ClusterValue TypedStruct for the Cluster Resource’s cluster characteristic:

defmodule Diffo.Compute.ClusterValue do
  @moduledoc """
  AshTyped Struct for Cluster Characteristic Value
  """
  use Ash.TypedStruct, extensions: [AshJason.TypedStruct, AshOutstanding.TypedStruct]

  jason do
    pick [:name, :gpu_cores, :npu_cores]
    compact true
  end

  outstanding do
    expect [:gpu_cores]
  end

  typed_struct do
    field :name, :string, description: "the cluster name"

    field :gpu_cores, :integer,
      default: 0,
      constraints: [min: 0],
      description: "the number of GPU cores in the cluster"

    field :npu_cores, :integer,
      default: 0,
      constraints: [min: 0],
      description: "the number of NPU cores in the cluster"
  end

  defimpl String.Chars do
    def to_string(struct) do
      inspect(struct)
    end
  end
end

Using the Assigner

We’ll now define a GPU Resource which uses the Diffo.Provider.Assigner functionality.

defmodule Diffo.Compute.GPU do
  @moduledoc """
  GPU Resource Instance
  """

  alias Diffo.Provider.BaseInstance
  alias Diffo.Provider.Instance.Relationship
  alias Diffo.Provider.Instance.Characteristic
  alias Diffo.Provider.Assigner
  alias Diffo.Provider.Assignment
  alias Diffo.Provider.AssignableValue
  alias Diffo.Compute
  alias Diffo.Compute.GPUValue

  use Ash.Resource,
    fragments: [BaseInstance],
    domain: Compute

  resource do
    description "An Ash Resource representing a GPU"
    plural_name :gpus
  end

  structure do
    specification do
      id "ad50073f-17e0-45cb-b9b1-aa4296876156"
      name "gpu"
      type :resourceSpecification
      description "A GPU Resource Instance"
      category "Network Resource"
    end

    characteristics do
      characteristic :gpu, GPUValue
      characteristic :cores, AssignableValue
    end
  end

  behaviour do
    actions do
      create :build
    end
  end

  actions do
    create :build do
      description "creates a new GPU resource instance for build"
      accept [:id, :name, :type, :which]
      argument :relationships, {:array, :struct}
      argument :places, {:array, :struct}
      argument :parties, {:array, :struct}

      change set_attribute(:type, :resource)
      change load [:href]
      upsert? false
    end

    update :define do
      description "defines the GPU"
      argument :characteristic_value_updates, {:array, :term}

      change after_action(fn changeset, result, _context ->
               with {:ok, result} <- Characteristic.update_values(result, changeset),
                    {:ok, result} <- Compute.get_gpu_by_id(result.id),
                    do: {:ok, result}
             end)
    end

    update :relate do
      description "relates the GPU with other instances"
      argument :relationships, {:array, :struct}

      change after_action(fn changeset, result, _context ->
               with {:ok, result} <- Relationship.relate_instance(result, changeset),
                    {:ok, result} <- Compute.get_gpu_by_id(result.id),
                    do: {:ok, result}
             end)
    end

    update :assign_core do
      description "relates the GPU with an instance by assigning a core"
      argument :assignment, :struct, constraints: [instance_of: Assignment]

      change after_action(fn changeset, result, _context ->
               with {:ok, result} <- Assigner.assign(result, changeset, :cores, :core),
                    {:ok, result} <- Compute.get_gpu_by_id(result.id),
                    do: {:ok, result}
             end)
    end
  end
end

And we must define the GPUValue TypedStruct, used in the GPU’s gpu characteristic:

defmodule Diffo.Compute.GPUValue do
  @moduledoc """
  AshTyped Struct for GPU Characteristic Value
  """
  use Ash.TypedStruct, extensions: [AshJason.TypedStruct, AshOutstanding.TypedStruct]

  jason do
    pick [:name, :family, :model, :technology]
    compact true
  end

  outstanding do
    expect [:name]
  end

  typed_struct do
    field :name, :string, description: "the GPU name"

    field :family, :atom, description: "the GPU family name"

    field :model, :string, description: "the GPU model name"

    field :technology, :atom, description: "the GPU technology"
  end

  defimpl String.Chars do
    def to_string(struct) do
      inspect(struct)
    end
  end
end

Party Extension

Diffo.Provider.BaseParty is an Ash Resource Fragment for domain-specific Party kinds, mirroring BaseInstance. It provides common Party attributes — id, href, name, type, referred_type — and the Diffo.Provider.Party.Extension DSL, which lets a Party kind declare the roles it plays with respect to Instances and other Parties.

type defaults to :PartyRef and can be set to :Individual, :Organization, or :Entity. Domain party kinds typically set type in their build action. The id defaults to a generated UUID but can be set to any meaningful string (such as an ABN or a data centre identifier).

The Diffo.Provider.Party.Extension DSL cheat sheet is at DSL-Diffo.Provider.Party.Extension.

Defining Party kinds

We’ll add two Party kinds to our Compute domain — Tenant for the operating company, and Engineer for the individuals who manage resources.

defmodule Diffo.Compute.Tenant do
  @moduledoc """
  Tenant in the Compute domain
  """

  alias Diffo.Provider.BaseParty
  alias Diffo.Compute

  use Ash.Resource,
    fragments: [BaseParty],
    domain: Compute

  resource do
    description "A Compute Tenant"
    plural_name :tenants
  end

  actions do
    create :build do
      accept [:id, :name]
      change set_attribute(:type, :Organization)
    end
  end

  instances do
    role :operator, Diffo.Compute.Cluster
    role :operator, Diffo.Compute.GPU
  end
end

defmodule Diffo.Compute.Engineer do
  @moduledoc """
  Engineer in the Compute domain
  """

  alias Diffo.Provider.BaseParty
  alias Diffo.Compute

  use Ash.Resource,
    fragments: [BaseParty],
    domain: Compute

  resource do
    description "A Compute Engineer"
    plural_name :engineers
  end

  actions do
    create :build do
      accept [:id, :name]
      change set_attribute(:type, :Individual)
    end
  end

  instances do
    role :manager, Diffo.Compute.Cluster
  end

  parties do
    role :employer, Diffo.Compute.Tenant
  end
end

Place Extension

Diffo.Provider.BasePlace is an Ash Resource Fragment for domain-specific Place kinds, mirroring BaseInstance and BaseParty. It provides common Place attributes — id, href, name, type, referred_type — and the Diffo.Provider.Place.Extension DSL, which lets a Place kind declare the roles it plays with respect to Instances, Parties, and other Places.

type defaults to :PlaceRef and is typically set in the build action to the concrete place type (:GeographicSite, :GeographicLocation, or :GeographicAddress). When referred_type is present, type must be :PlaceRef — meaning this Place is a reference rather than a physical location.

The Diffo.Provider.Place.Extension DSL cheat sheet is at DSL-Diffo.Provider.Place.Extension.

Defining Place kinds

We’ll add a DataCentre Place kind to our Compute domain. Clusters are hosted at a data centre; the instances do block records that relationship from the DataCentre’s perspective.

defmodule Diffo.Compute.DataCentre do
  @moduledoc """
  DataCentre in the Compute domain
  """

  alias Diffo.Provider.BasePlace
  alias Diffo.Compute

  use Ash.Resource,
    fragments: [BasePlace],
    domain: Compute

  resource do
    description "A Compute Data Centre"
    plural_name :data_centres
  end

  jason do
    pick [:id, :href, :name, :type]
    compact true
    rename type: "@type"
  end

  outstanding do
    expect [:id, :name, :type]
  end

  actions do
    create :build do
      accept [:id, :href, :name]
      change set_attribute(:type, :GeographicSite)
    end
  end

  instances do
    role :data_centre, Diffo.Compute.Cluster
    role :data_centre, Diffo.Compute.GPU
  end
end

Compute Domain

With all resources defined we can now declare the Diffo.Compute domain, which exposes a typed API for each resource:

defmodule Diffo.Compute do
  @moduledoc """
  Compute - example domain
  """
  use Ash.Domain,
    otp_app: :diffo,
    validate_config_inclusion?: false

  alias Diffo.Compute.GPU
  #alias Diffo.Compute.NPU
  alias Diffo.Compute.Cluster
  alias Diffo.Compute.Tenant
  alias Diffo.Compute.Engineer
  alias Diffo.Compute.DataCentre

  resources do
    resource GPU do
      define :get_gpu_by_id, action: :read, get_by: :id
      define :build_gpu, action: :build
      define :define_gpu, action: :define
      define :relate_gpu, action: :relate
      define :assign_gpu_core, action: :assign_core
    end

    #resource NPU do
      #define :get_npu_by_id, action: :read, get_by: :id
      #define :build_npu, action: :build
      #define :define_npu, action: :define
      #define :relate_npu, action: :relate
      #define :assign_npu_core, action: :assign_core
    #end

    resource Cluster do
      define :get_cluster_by_id, action: :read, get_by: :id
      define :build_cluster, action: :build
      define :define_cluster, action: :define
      define :relate_cluster, action: :relate
    end

    resource Tenant do
      define :create_tenant, action: :build
      define :get_tenant_by_id, action: :read, get_by: :id
      define :list_tenants, action: :read
    end

    resource Engineer do
      define :create_engineer, action: :build
      define :get_engineer_by_id, action: :read, get_by: :id
      define :list_engineers, action: :read
    end

    resource DataCentre do
      define :create_data_centre, action: :build
      define :get_data_centre_by_id, action: :read, get_by: :id
    end
  end
end

Creating Party instances

Clear any data from previous runs before starting (safe to re-evaluate):

AshNeo4j.Neo4jHelper.delete_all()

Now that the domain is defined, we’ll create our Tenant and Engineer first — we’ll need them when building Cluster instances. The id for the Tenant is set to a meaningful string — the company’s ABN.

alias Diffo.Compute
alias Diffo.Provider.Instance.Party

{:ok, tenant} = Compute.create_tenant(%{
  id: "51824753556",
  name: "Acme Compute Pty Ltd"
})

{:ok, engineer} = Compute.create_engineer(%{
  name: "Alice Zhang"
})

Creating a Cluster

First we create the data centre — our DataCentre resource uses BasePlace, so it is managed via the Compute domain API like any other domain resource:

alias Diffo.Provider.Instance.Place

{:ok, dc} = Compute.create_data_centre(%{id: "NXTM2", name: "NextDC M2"})

Now build the cluster, passing the data centre as a place and our party members by id and role:

places = [%Place{id: dc.id, role: :data_centre}]
parties = [
  %Party{id: tenant.id, role: :operator},
  %Party{id: engineer.id, role: :manager}
]
cluster_1 = Diffo.Compute.build_cluster!(%{name: "cluster_1", places: places, parties: parties})
Jason.encode!(cluster_1, pretty: true) |> IO.puts

Using the Assigner

Now we’ll create a couple of GPU instances:

gpu_1 = Compute.build_gpu!(%{name: "GPU 1"})
gpu_2 = Compute.build_gpu!(%{name: "GPU 2"})

We need to define each GPU instance. Defining the cores Characteristic (an AssignableValue) performs the allocation, in this case setting how many GPU cores are available.

updates = [
  gpu: [family: :nvidia, model: "GeForce RTX5090", technology: :blackwell],
  cores: [first: 1, last: 680, free: 680, assignable_type: "tensor"]
]

gpu_1 = Compute.define_gpu!(gpu_1, %{characteristic_value_updates: updates})
gpu_2 = Compute.define_gpu!(gpu_2, %{characteristic_value_updates: updates})

The GPU’s cores characteristic is an AssignableValue; now that we’ve allocated it, we can use it to keep track of how many cores are free (unassigned). We can render one GPU as JSON:

Jason.encode!(gpu_1, pretty: true) |> IO.puts

Composing a Resource from partially assigned Resources

Now we can auto-assign GPU cores from each GPU to our cluster_1. We’ll assign three cores from gpu_1 and one from gpu_2.

alias Diffo.Provider.Assignment

assignment = %{assignment: %Assignment{assignee_id: cluster_1.id, operation: :auto_assign}}
gpu_1 = Compute.assign_gpu_core!(gpu_1, assignment)
gpu_1 = Compute.assign_gpu_core!(gpu_1, assignment)
gpu_1 = Compute.assign_gpu_core!(gpu_1, assignment)
gpu_2 = Compute.assign_gpu_core!(gpu_2, assignment)

Now our cluster should have cores from each GPU. Check in the Neo4j browser for the type: :assignedTo Relationships from gpu_1 and gpu_2 to the cluster. There should be four, each with a Relationship Characteristic of core, with the value of the assigned core, e.g. 1, 2.

The GPU will also show each assignedTo relationship, since these are forward relationships. These should also show the relationship characteristic:

Jason.encode!(gpu_1, pretty: true) |> IO.puts

Make sure you have a look at it in the Neo4j browser. There should be Relationship nodes with a role of :assignedTo from each GPU resource instance to the cluster_1 resource instance. Each Relationship should be defined by a Characteristic with the assigned core number. There is no central assignment table; rather, the relationships ARE the assignments.
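If you’d rather stay in the Livebook, you can also rerun the generic relationship query from earlier; the new assignment relationships will be among the results:

```elixir
# Same generic query used earlier: returns up to 50 related node pairs,
# which now include the assignments created above.
AshNeo4j.Cypher.run("MATCH (n1)-[r]->(n2) RETURN r, n1, n2 LIMIT 50")
```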

As an exercise, clone the GPU resource to create an NPU resource and assign some NPU cores from it to your cluster. Check that the assigned NPU cores are unique.

What happens when there are none left to assign? What happens when I request a specific assignment from an instance to which the partial resource is already assigned?

What Next?

In this tutorial you’ve used Diffo’s Provider Instance Extension to define a Compute domain with a composite Cluster resource comprised of assigned GPU cores, the Provider Party Extension to define Tenant and Engineer party kinds that operate and manage those resources, and the Provider Place Extension to declare where instances and parties exist geographically.

BaseParty and BasePlace follow the same pattern as BaseInstance — domain-specific resources use them as fragments and write their own actions for domain-specific attributes. No manual wiring is needed.

Domain-specific Place kinds (such as a DataCentre with its own attributes) use BasePlace as a fragment and declare their roles via instances do, parties do, and places do sections on Diffo.Provider.Place.Extension. Party kinds similarly declare their place roles via places do on Diffo.Provider.Party.Extension.

If you find Diffo useful, please visit and star it on GitHub. Feel free to join discussions and raise issues to discuss PRs.