Phoenix And Ecto
Mix.install([
  {:jason, "~> 1.4"},
  {:kino, "~> 0.9", override: true},
  {:youtube, github: "brooklinjazz/youtube"},
  {:hidden_cell, github: "brooklinjazz/hidden_cell"}
])
Review Questions
Upon completing this lesson, a student should be able to answer the following questions.
- How do we generate a resource in a Phoenix application?
- Explain Contexts, Schemas, Migrations, and Repo.
Overview
This is a companion reading for the Blog: Posts exercise. This lesson is an overview of how to work with a database using Ecto in a Phoenix application.
Ecto
Ecto provides a standard API layer for communicating with the database of an Elixir application.
By default, Ecto uses a PostgreSQL Database. Ensure you already have PostgreSQL installed on your computer.
Ecto splits into four main modules.
- Ecto.Repo: handles all communication between the application and the database. Ecto.Repo reads and writes from the underlying PostgreSQL database.
- Ecto.Query: builds queries to retrieve and manipulate data with the Ecto.Repo repository.
- Ecto.Schema: maps the application struct data representation to the underlying PostgreSQL database representation.
- Ecto.Changeset: creates changesets for validating and applying constraints to structs.
Phoenix uses Ecto to handle the data layer of a web application.
Domain
In a Phoenix application, by default, there are four main modules responsible for managing business logic and the domain of our application. They are:
- Contexts: encapsulate specific domain areas, define the business logic and data access functions, and provide an API that exposes functionality to other parts of the application.
- Schemas: define the structure of the data entities in the domain, including fields, types, and constraints.
- Repo: provides an abstraction layer for interacting with the database, encapsulating queries and providing a clean interface to perform CRUD operations on the entities defined in Schema modules.
- Migrations: define changes to the database schema over time, such as creating or updating tables and columns.
Together, these modules provide a way to model the business logic and data entities in the application and work with a database. Note that we are not limited to these modules, but this is how Phoenix organizes an application by default.
Generators
We can define all of the necessary modules and boilerplate for part of our domain using the Phoenix generators.
For example, we could generate posts in a blog project with the following command.
mix phx.gen.html Posts Post posts title:string body:text
The command above would do the following:
- Create a Posts context.
- Create a Post schema.
- Create a posts table in the database with the title:string and body:text fields. See the Attributes documentation for a complete list of attribute types.
It also generates boilerplate for the web layer of our application, including the controller, component, and template. It does not add routes to our router, though; those have to be inserted manually, as shown below.
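For example, in a hypothetical Blog application the generator prompts us to add the generated resource to the browser scope in lib/blog_web/router.ex, something like this:

scope "/", BlogWeb do
  pipe_through :browser

  # Adds the standard CRUD routes for the generated PostController.
  resources "/posts", PostController
end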
See mix phx.gen for a complete list of Phoenix generators.
Context
Phoenix contexts are a way of organizing related functionality in a Phoenix application. Each context represents a coherent set of operations that relate to a specific domain or aspect of the application’s functionality.
For example, we might have a Blog application with a Posts context that manages all operations for creating, editing, and deleting blog posts.
Here’s an example context module for a Posts context. We can see functions that work with the Post schema and the Repo module to retrieve, create, edit, and delete posts in the database.
defmodule Blog.Posts do
  import Ecto.Query, warn: false
  alias Blog.Repo
  alias Blog.Posts.Post

  def list_posts do
    Repo.all(Post)
  end

  def get_post!(id), do: Repo.get!(Post, id)

  def create_post(attrs \\ %{}) do
    %Post{}
    |> Post.changeset(attrs)
    |> Repo.insert()
  end

  def update_post(%Post{} = post, attrs) do
    post
    |> Post.changeset(attrs)
    |> Repo.update()
  end

  def delete_post(%Post{} = post) do
    Repo.delete(post)
  end

  def change_post(%Post{} = post, attrs \\ %{}) do
    Post.changeset(post, attrs)
  end
end
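Other parts of the application, such as a controller or an IEx session, call these context functions rather than using Repo directly. Here’s a minimal usage sketch (the attribute values are hypothetical):

# Create a post, then read it back through the context API.
{:ok, post} =
  Blog.Posts.create_post(%{title: "Hello", subtitle: "World", content: "My first post"})

Blog.Posts.list_posts()
Blog.Posts.get_post!(post.id)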
Ecto.Query
Ecto.Query is imported in the example above. Contexts often work with Ecto.Query when writing queries for retrieving and manipulating data in a database.
Here’s an example query that lets us filter a list of posts by a case-insensitive, partially matching title field.
def list_posts(title) do
  search = "%#{title}%"
  query = from p in Post, where: ilike(p.title, ^search)
  Repo.all(query)
end
ilike allows us to perform a case-insensitive search. The % symbol acts as a wildcard that matches partial strings.
from/2 selects the table to query, and where/3 filters the list of results.
Ecto allows for a pipeable syntax that some developers prefer. For example, the above could also be written as:
def list_posts(title) do
  search = "%#{title}%"

  Post
  |> where([p], ilike(p.title, ^search))
  |> Repo.all()
end
Schema
In Ecto, a schema is a module that defines the structure and properties of a database table. The schema maps the fields of the table to the fields of the struct in the module, and it also defines any constraints or validations on those fields.
Here’s an example of a Post schema that defines a post with title, subtitle, and content fields. It also defines the changeset/2 function used with the Repo module earlier in the Posts context when creating or updating data.
defmodule Blog.Posts.Post do
  use Ecto.Schema
  import Ecto.Changeset

  schema "posts" do
    field :content, :string
    field :subtitle, :string
    field :title, :string

    timestamps()
  end

  @doc false
  def changeset(post, attrs) do
    post
    |> cast(attrs, [:title, :subtitle, :content])
    |> validate_required([:title, :subtitle, :content])
  end
end
The schema/2 and field/3 macros from Ecto.Schema define the fields and the data types of those fields. The schema maps the fields of the PostgreSQL table into valid Elixir terms in a Post struct, making records from the database Elixir-friendly when we work with them. You can find the full list of valid field types at Ecto.Schema: Types and casting.
Validations And Constraints
Validations ensure data is valid before entering the database. Constraints are rules enforced by the database.
For example, validate_required/3 validates that a field is present before the record is inserted into the database.
However, we might also add a unique_constraint that ensures a title is unique compared with other posts in the database. Unlike a validation, this check is enforced by the database itself (via the unique index defined in a migration) when the record is inserted or updated.
def changeset(post, attrs) do
  post
  |> cast(attrs, [:title, :subtitle, :content])
  |> validate_required([:title, :subtitle, :content])
  |> unique_constraint(:title)
end
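For example, a changeset built from attributes missing a required field is marked invalid before the database is ever involved. Here’s a hypothetical usage sketch:

changeset = Post.changeset(%Post{}, %{subtitle: "some subtitle", content: "some content"})

changeset.valid?
# => false

changeset.errors
# => [title: {"can't be blank", [validation: :required]}]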
Migration
Migrations are used to make changes to a database schema over time as the requirements of an application evolve. Here’s an example of a CreatePosts migration that would create a posts table in the database.
defmodule Blog.Repo.Migrations.CreatePosts do
  use Ecto.Migration

  def change do
    create table(:posts) do
      add :title, :string
      add :subtitle, :string
      add :content, :text

      timestamps()
    end

    create unique_index(:posts, [:title])
  end
end
The Ecto.Migration module provides functions for manipulating tables and the fields in a table. The create/2 macro creates a new table in the database, and add/3 adds a new column to the table.
You can use any of the Elixir Primitive Types and they will be converted to the appropriate Field Type for your database.
Some database field types that are not Elixir primitive types, such as :text, can be given directly.
Generating Migrations
Migrations should always be generated rather than created manually so that the file name includes a timestamp. For example, the migration above might live in a file such as priv/repo/migrations/20230508221034_create_posts.ex.
Migrations can be created individually using the mix ecto.gen.migration command with the name of the migration.
Here’s an example where create_posts is the name of the migration.
$ mix ecto.gen.migration create_posts
Running Migrations
Migrations are run in chronological order based on their timestamps, so they can be run or reset at any time using the following commands:
# Run Migrations
$ mix ecto.migrate
# Reset The Database And Re-run Migrations
$ mix ecto.reset
You can also roll back migrations with mix ecto.rollback. There are options that let you roll back specific migrations, as shown below.
$ mix ecto.rollback
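For example (a sketch of two common options), --step rolls back a given number of migrations, and --to rolls back down to a specific migration version:
$ mix ecto.rollback --step 2
$ mix ecto.rollback --to 20230508221034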
Manipulating Tables
Ecto provides functions for manipulating tables in a database. Here are a few common functions:
- create/2: creates a new table in the database
- alter/2: modifies an existing table in the database
- rename/2: renames an existing table in the database
- drop/2: removes an existing table from the database
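Here’s a sketch of a migration using rename and drop (the table names are hypothetical):

defmodule MyApp.Repo.Migrations.ReorganizeTables do
  use Ecto.Migration

  def change do
    # Rename the blog_posts table to posts.
    rename table(:blog_posts), to: table(:posts)

    # Remove the drafts table entirely.
    drop table(:drafts)
  end
end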
Manipulating Fields
Ecto.Migration provides functions for manipulating fields in a table. Here are a few to get you started.
- add/3 adds a new field to the table.
- modify/2 modifies an existing field in the table.
- remove/2 removes an existing field from the table.
Here is an example migration that uses some of these Ecto functions to manipulate fields on a table:
defmodule MyApp.Repo.Migrations.AlterUsersTable do
  use Ecto.Migration

  def change do
    alter table(:users) do
      add :phone, :string
      modify :name, :string, null: false
      remove :age
    end

    # Uniqueness is enforced through a unique index rather than a column option.
    create unique_index(:users, [:email])
  end
end
Schemas Reflect Migrations
The schema maps the fields of our database table into Elixir data. A migration may involve changing the data type or structure of an existing column, which would in turn require updating the schema to reflect these changes. Always ensure the schema reflects the latest version of a table in the database.
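For example, after the AlterUsersTable migration above runs, a hypothetical User schema would need to gain the new phone field and drop the removed age field:

defmodule MyApp.Accounts.User do
  use Ecto.Schema

  schema "users" do
    field :name, :string
    field :email, :string
    # Added to match the new phone column; the removed age field is no longer listed.
    field :phone, :string

    timestamps()
  end
end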
Query
The Ecto.Query module defines functions for creating database queries. Queries themselves do not manipulate data in the database; instead, they are a set of instructions that is typically handed to the Ecto.Repo module, which performs the actual work on the database.
Here are some common functions:
- from/2: Defines the table or tables to query
- where/3: Specifies a condition that records must meet to be included in the query results.
- order_by/3: Sorts the query results by one or more fields.
- select/3: Specifies which fields to include in the query results.
- type/2: Casts a value in a query to a specific type. Often useful when injecting Elixir terms into queries to convert them into a database type.
Here’s an example query using keyword list syntax. The ^ symbol is used to inject Elixir terms into a database query. Elixir terms must be bound to a variable to inject them.
search = "%#{title}%"
today = DateTime.utc_now()

query =
  from(p in Post,
    where: ilike(p.title, ^search),
    where: p.visible,
    where: p.published_on <= type(^today, :utc_datetime),
    order_by: [desc: p.published_on]
  )
The same query could be written using an alternative pipe-based syntax like so:
query =
  Post
  |> where([p], ilike(p.title, ^search))
  |> where([p], p.visible)
  |> where([p], p.published_on <= type(^today, :utc_datetime))
  |> order_by([p], desc: p.published_on)
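select/3 isn’t used in the queries above. Here’s a sketch of how it could narrow the results to specific fields rather than returning full Post structs:

# Returns a list of maps containing only the title and published date.
titles_query =
  Post
  |> where([p], p.visible)
  |> select([p], %{title: p.title, published_on: p.published_on})

Repo.all(titles_query)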
Repo
The Ecto.Repo module defines functions for working with the database. Anytime we need to manipulate records in the database, we work with the Repo module.
Here are some common functions:
- Repo.all/2: get all records matching a query.
- Repo.get!/3: get a single record by id, raising if it doesn’t exist.
- Repo.insert/2: insert a new record.
- Repo.update/2: update an existing record using a changeset.
- Repo.delete/2: delete an existing record.
These cover the full range of CRUD (create, read, update, delete) actions. Typically these functions will work in combination with the Schema module that defines a struct and a schema.
For example, here’s how we use a Post struct to create a changeset that is then passed to the Repo.insert/2 function to insert a new post into the database.
%Post{}
|> Post.changeset(%{title: "some title", subtitle: "some subtitle", content: "some content"})
|> Repo.insert()
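The other Repo functions follow the same pattern. For example (the id and attributes are hypothetical):

# Fetch an existing post, update it through a changeset, then delete it.
post = Repo.get!(Post, 1)

{:ok, updated_post} =
  post
  |> Post.changeset(%{title: "updated title"})
  |> Repo.update()

Repo.delete(updated_post)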
Entity Relationship Diagrams
Projects often use Entity Relationship Diagrams or UML (Unified Modelling Language) diagrams of some kind to represent the data layer of the application.
Livebook lets us write Entity Relationship Diagrams directly. See Mermaid: Entity Relationship Diagrams for an overview of the syntax.
It’s important to be able to read these diagrams. Here’s an example:
erDiagram
  Comment {
    id post_id
    text content
  }
  Post {
    string title
    text content
    date published_on
    boolean visibility
  }
  Post ||--o{ Comment : "has many"
The above diagram describes both the fields and types of the data, as well as the associations between data.
The double pipe || on the Post side of the relationship line represents exactly one, and the crow’s foot with a circle (o{) on the Comment side represents zero or more. Together they describe the one-to-many relationship between posts and comments.
The post_id field isn’t strictly necessary, since the relationship already implies that the child record contains a foreign key, but we’ve included it for the sake of clarity.
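In Ecto schemas, this one-to-many relationship is typically expressed with has_many/3 on the parent and belongs_to/3 on the child. Here’s a sketch (the Comment module and its fields are hypothetical):

defmodule Blog.Comments.Comment do
  use Ecto.Schema

  schema "comments" do
    field :content, :string
    # belongs_to adds the post_id foreign key shown in the diagram.
    belongs_to :post, Blog.Posts.Post

    timestamps()
  end
end

# On the parent schema, the other side of the association would be:
# has_many :comments, Blog.Comments.Comment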
Seeding
Phoenix projects include a priv/repo/seeds.exs file for creating data in (seeding) the database.
The seeds.exs file can be run with the following command from the project folder:
mix run priv/repo/seeds.exs
Typically, seed files work with the Repo module or functions in contexts to create data for manual testing purposes.
Blog.Repo.insert!(%Blog.Posts.Post{
  title: "Some Title",
  subtitle: "Some Subtitle",
  content: "Some Content"
})

Blog.Posts.create_post(%{
  title: "Some Title",
  subtitle: "Some Subtitle",
  content: "Some Content"
})
Seed files should not be run in the test environment, as they can interfere with test assertions.
Seed files are also run automatically when resetting the database.
mix ecto.reset
Further Reading
Consider the following resource(s) to deepen your understanding of the topic.
- Phoenix Generators: mix phx.gen
- Phoenix HTML Generator: mix phx.gen.html
- Phoenix Schema Generator Types
- Pragmatic Bookshelf: Programming Ecto
Commit Your Progress
DockYard Academy now recommends you use the latest Release rather than forking or cloning our repository.
Run git status to ensure there are no undesirable changes.
Then run the following in your command line from the curriculum folder to commit your progress.
$ git add .
$ git commit -m "finish Phoenix And Ecto reading"
$ git push
We’re proud to offer our open-source curriculum free of charge for anyone to learn from at their own pace.
We also offer a paid course where you can learn from an instructor alongside a cohort of your peers. We will accept applications for the June-August 2023 cohort soon.