Today I Learned

How to capture business logic with unique_index

How do I use a unique index to ensure a user can get only one ticket for a paid conference, while staying flexible enough to let users get more than one ticket for a free conference?

What are our goals here?

  • send a paid conference to Ticket.changeset/1: ensure status: :paid and allow creating only one ticket per user.
  • send a free conference to Ticket.changeset/1: ensure status: :free and allow creating many tickets per user.

Here is how we can apply these rules in a migration:

  create table(:tickets) do
    add :conference_id, references(:conferences), null: false
    add :user_id, references(:users), null: false
    add :status, :string, null: false, default: "free"
  end

  create unique_index(:tickets, [:conference_id, :user_id, :status], where: "status = 'paid'")

How can we play?

iex> Ticket.changeset(%{conference: %{is_paid: false}, user: %{...}, status: :free}) |> Repo.insert()
[debug] QUERY OK
{:ok, %Ticket{id: 1, status: :free, conference_id: 1, user_id: 1}}
iex> Ticket.changeset(%{conference: %{is_paid: false}, user: %{...}, status: :free}) |> Repo.insert()
[debug] QUERY OK
{:ok, %Ticket{id: 2, status: :free, conference_id: 1, user_id: 1}}
iex> Ticket.changeset(%{conference: %{is_paid: true}, user: %{...}, status: :paid}) |> Repo.insert()
[debug] QUERY OK
{:ok, %Ticket{id: 3, status: :paid, conference_id: 2, user_id: 1}}
iex> Ticket.changeset(%{conference: %{is_paid: true}, user: %{...}, status: :paid}) |> Repo.insert()
** (Ecto.ConstraintError) constraint error when attempting to insert struct
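To turn that raised Ecto.ConstraintError into a changeset error instead, the changeset can declare the constraint. A sketch (the :name below assumes Ecto's default index naming):

```elixir
def changeset(ticket, attrs) do
  ticket
  |> Ecto.Changeset.cast(attrs, [:conference_id, :user_id, :status])
  |> Ecto.Changeset.validate_required([:conference_id, :user_id, :status])
  # Surfaces the partial unique index as {:error, changeset} on insert
  |> Ecto.Changeset.unique_constraint([:conference_id, :user_id, :status],
    name: :tickets_conference_id_user_id_status_index
  )
end
```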


Newlines between examples in doctests matter.

The following module with a doctest generates 1 test that makes 2 assertions.

defmodule Foo do
  @doc """
  Does foo

  ## Examples

      iex> Foo.bar(1)
      1
      iex> Foo.bar(2)
      2
  """
  def bar(a), do: a
end

defmodule FooTest do
  use ExUnit.Case, async: true
  doctest Foo
end

Roughly an equivalent of:

defmodule FooTest do
  use ExUnit.Case, async: true

  test "doc at Foo.bar/1 (1)" do
    assert Foo.bar(1) === 1
    assert Foo.bar(2) === 2
  end
end

However, if we intersperse the examples with new lines:

defmodule Foo do
  @doc """
  Does foo

  ## Examples

      iex> Foo.bar(1)
      1

      iex> Foo.bar(2)
      2
  """
  def bar(a), do: a
end

the doctest would generate 2 tests with a single assertion in each.

Thus, when we run mix test, the number of doctests differs.

That can also be verified by calling the undocumented function ExUnit.DocTest.__doctests__/2 in iex -S mix, which returns the list of generated test ASTs.

iex> ExUnit.DocTest.__doctests__(Foo, only: [bar: 1]) |> Macro.to_string() |> IO.puts()

[{" (1)",
  value = Foo.bar(1)
  expected = 1
  formatted = "iex> Foo.bar(1)"
  last_expr = "Foo.bar(1)"
  expected_expr = "1"
  stack = [{Foo, :__MODULE__, 0, line: 7, file: "lib/foo.ex"}]
  ExUnit.DocTest.__test__(value, expected, formatted, last_expr, expected_expr, stack)},
 {" (2)", ...}]

When to use the handle_params callback

The handle_params/3 callback is helpful for using state in the URL to drive the presentation of your LiveView. This is nice because anyone you share the URL with sees the same LiveView state. handle_params/3 is invoked after mount/3 and whenever there is a live navigation event. If your LiveView changes state based on the URL, handle_params/3 is the right place to assign values on your LiveView, since you then avoid processing the params in both mount/3 and handle_params/3. To trigger handle_params/3, push_patch/2 can be used server-side, while live_patch/2 triggers handle_params/3 through a client-side interaction.

For example, imagine we want to use handle_params/3 to implement pagination, filtering, and sorting. With those three features combined, handle_params/3 must handle five different cases of URL state:

  • only pagination /route?page=2&per_page=10
  • only filtering /route?filter=a
  • only sorting /route?sort_by=id&sort_order=asc
  • pagination, filtering, and sorting /route?page=2&per_page=10&filter=sneakers&sort_by=name&sort_order=asc
  • none specified (use defaults) /route
def handle_params(params, _url, socket) do
  paginate_options = %{page: params["page"], per_page: params["per_page"]}
  filter_options = %{filter: params["filter"]}
  sort_options = %{sort_by: params["sort_by"], sort_order: params["sort_order"]}

  # Shoes.list_shoes/1 stands in for this example's context function
  shoes =
    Shoes.list_shoes(
      paginate: paginate_options,
      sort: sort_options,
      filter: filter_options
    )

  socket =
    assign(socket,
      options:
        paginate_options
        |> Map.merge(sort_options)
        |> Map.merge(filter_options),
      shoes: shoes
    )

  {:noreply, socket}
end

def handle_params(_params, _url, socket) do
  {:noreply, socket}
end

What you should know about the live_session macro

Imagine you have a few endpoints and would like to group their authorization rules. With live_session/3, you can achieve that!

live_session has three options:

  1. session - extra session data to pass to every LiveView in the group
  2. on_mount - one or more hooks to run on mount
  3. root_layout - apply a different layout to the group

It is important to understand the Security Considerations of live_session, especially for handling authentication and authorization in your LiveView.

In the following example, we use live_session to set a new root_layout only for admin users, and authorize admins in the :admin callback of UserHook:

live_session :admins,
  root_layout: {ExampleWeb.AdminLayoutView, :root},
  on_mount: {ExampleWeb.UserHook, :admin} do
  scope "/", ExampleWeb do
    pipe_through [:browser, :auth]

    live "/admin", HomeLive, :page
  end
end

defmodule ExampleWeb.AdminLayoutView do
  @moduledoc false
  use ExampleWeb, :view

  def render("root.html", assigns) do
    ~L"""
    <!DOCTYPE html>
    <html lang="en">
      <head>
        <title>Admin Layout</title>
      </head>
      <body>
        <%= @inner_content %>
      </body>
    </html>
    """
  end
end

How to leverage on_mount to reduce code

Phoenix LiveView has implemented some cool features, and one of them is the on_mount/4 callback.

This callback runs before the mount/3 function in your LiveView.

There are two ways to set the on_mount callback function:

  1. In router using live_session/3.
  2. In your LiveView modules with on_mount macro.

If you need to do something before mount/3 in all your LiveViews, live_session/3 is likely the best fit. However, if it is only for a few of them, the on_mount macro will be better for your needs.

on_mount is helpful for reducing repetitive code in your LiveViews. Let’s look at an example.

defmodule ExampleWeb.UserHook do
  import Phoenix.LiveView

  def on_mount(:default, _params, %{"current_user" => current_user} = _session, socket) do
    if authorized?(current_user) do
      {:cont, socket}
    else
      {:halt, socket}
    end
  end

  def on_mount(:admin, _params, %{"current_user" => current_user} = _session, socket) do
    if admin?(current_user) do
      {:cont, socket}
    else
      {:halt, socket}
    end
  end
end

The live_session/3 on Router:

live_session :default, on_mount: ExampleWeb.UserHook do
  scope "/", ExampleWeb do
    pipe_through [:browser, :auth]

    live "/", HomeLive, :page
  end
end

The on_mount macro:

defmodule ExampleWeb.HomeLive do
  use ExampleWeb, :live_view

  on_mount {ExampleWeb.UserHook, :admin}

  def mount(_params, _session, socket) do
    # ...
  end
end

How to import a CSV file into the database

Today I learned how to import a CSV data file into the database and populate a table.

Imagine you have this migration in your application with the following columns:

create table(:users) do
  add(:first_name, :string, null: false)
  add(:last_name, :string, null: false)
  add(:username, :string, null: false)
  add(:email, :string, null: false)
end

And this would be the CSV file (with two example rows):

First Name,Last Name,Username,Email
Jane,Doe,janedoe,
John,Doe,johndoe,

So how can I import my CSV file into the users table in the database?

~$ psql -U user -d database <<USERS
COPY users(first_name, last_name, username, email) FROM '/path/to/users.csv' DELIMITER ',' CSV HEADER;
USERS

After it finishes, you will see output like COPY 2, where the number is the count of rows copied into your table.

Idempotence in Distributed Systems

Sooner or later, you will come across the term “idempotence” in the context of distributed systems. What does it mean there?

Let’s consider writing a REST API with a POST request. When you try to create a resource and the call happens multiple times, the system should create this resource only once, or update it, for a given unique entity.

A more specific example of this in a distributed system could be a payment system. A payment operation will be considered idempotent if we attempt to apply the same charge or payment multiple times, but it only gets processed once.

Creating idempotent operations in a distributed system can be challenging, especially if implemented in the application layer. If possible, you can push this responsibility to your database and ensure idempotence with features like unique indices.
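The core idea can be sketched in plain Elixir (the Payments module and its in-memory "ledger" map are invented for illustration): record each idempotency key, and make reapplying the same key a no-op:

```elixir
defmodule Payments do
  # Illustrative only: a ledger mapping idempotency keys to amounts.
  # Applying the same key twice leaves the ledger unchanged.
  def charge(ledger, key, amount) do
    case Map.fetch(ledger, key) do
      {:ok, _already_charged} -> {:duplicate, ledger}
      :error -> {:processed, Map.put(ledger, key, amount)}
    end
  end
end

{:processed, ledger} = Payments.charge(%{}, "req-1", 100)
{:duplicate, ^ledger} = Payments.charge(ledger, "req-1", 100)
```

In a real system the ledger would live in the database, where a unique index on the key enforces the same guarantee under concurrency.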

This is what I’ve been learning and I’m excited to learn more!

Using the Keyword module for options

You should consider using Keyword.fetch!/2 and Keyword.get/3 for options to APIs.

Without options

defmodule MyApp do
  def config(name, author \\ "Herminio Torres", description \\ "Description") do
    %{
      name: name,
      author: author,
      description: description
    }
  end
end

iex> MyApp.config
config/1    config/2    config/3
iex> MyApp.config("my_app")
%{
  author: "Herminio Torres",
  description: "Description",
  name: "my_app"
}
iex> MyApp.config("my_app", "Change")
%{
  author: "Change",
  description: "Description",
  name: "my_app"
}
  • It creates a config function with many arities.
  • You are forced to pass all preceding parameters when you intend to change just the last default argument.

With Options

defmodule MyApp do
  def config(opts) do
    name = Keyword.fetch!(opts, :name)
    author = Keyword.get(opts, :author, "Herminio Torres")
    description = Keyword.get(opts, :description, "Description")

    %{
      name: name,
      author: author,
      description: description
    }
  end
end

iex> MyApp.config([])
** (KeyError) key :name not found in: []
    (elixir 1.12.3) lib/keyword.ex:420: Keyword.fetch!/2
    iex:3: MyApp.config/1
iex> MyApp.config([name: "my_app"])
%{
  author: "Herminio Torres",
  description: "Description",
  name: "my_app"
}
iex> MyApp.config([name: "my_app", description: "Change"])
%{
  author: "Herminio Torres",
  description: "Change",
  name: "my_app"
}
  • The raised error tells you which options are required
  • Keyword lists make the arguments named
  • Only one function arity is exposed


Taming data with Ecto.Enum and Ecto.Type

A coworker and I discussed taking advantage of Ecto.Enum and Ecto.Type instead of pulling in one more dependency.

The schema:

defmodule Blog.Category do
  use Blog.Schema

  schema "categories" do
    field(:name, Ecto.Enum, values: [:til, :elixir, :ecto])
  end
end

Divide & conquer with schema reflection and Ecto.Type.load/2:

iex> type = Blog.Category.__schema__(:type, :name)
{:parameterized, Ecto.Enum,
 %{
   mappings: [til: "til", elixir: "elixir", ecto: "ecto"],
   on_cast: %{"til" => :til, "elixir" => :elixir, "ecto" => :ecto},
   on_dump: %{til: "til", elixir: "elixir", ecto: "ecto"},
   on_load: %{"til" => :til, "elixir" => :elixir, "ecto" => :ecto},
   type: :string
 }}
iex> Ecto.Type.load(type, "unknown")
:error
iex> Ecto.Type.load(type, "ecto")
{:ok, :ecto}
iex> Ecto.Type.load(type, :ecto)
:error

Meanwhile, Ecto.Enum also exposes reflection helpers:

iex> Ecto.Enum.values(Blog.Category, :name)
[:til, :elixir, :ecto]
iex> Ecto.Enum.dump_values(Blog.Category, :name)
["til", "elixir", "ecto"]
iex> Ecto.Enum.mappings(Blog.Category, :name)
[til: "til", elixir: "elixir", ecto: "ecto"]

Also, we now get the same behavior the ecto_enum dependency provides:

iex> valid? = fn list, value -> Enum.any?(list, fn item -> item == value end) end
#Function<43.40011524/2 in :erl_eval.expr/5>
iex> categories = Ecto.Enum.dump_values(Blog.Category, :name)
["til", "elixir", "ecto"]
iex> category = "unknown"
"unknown"
iex> if valid?.(categories, category), do: {:ok, String.to_existing_atom(category)}, else: :error
:error
iex> category = "ecto"
"ecto"
iex> if valid?.(categories, category), do: {:ok, String.to_existing_atom(category)}, else: :error
{:ok, :ecto}


Living with-out

There is a tendency in Elixir to reach for the with macro to handle control flow.

def fetch_token(type, params \\ []) do
  with params <- Keyword.merge(required_get_token_params(type), params),
       {:ok, client} <- OAuth2.Client.get_token(client(type), params) do
    {:ok, client}
  else
    err -> err
  end
end

At first glance, this code looks succinct, clean, and readable. Then you realize two refactors are possible.

With returns the unmatched value by default

def fetch_token(type, params \\ []) do
  with params <- Keyword.merge(required_get_token_params(type), params),
       {:ok, client} <- OAuth2.Client.get_token(client(type), params) do
    {:ok, client}
  end
end

If there is no else condition, you don’t need to double match on the last clause.

def fetch_token(type, params \\ []) do
  with params <- Keyword.merge(required_get_token_params(type), params) do
    OAuth2.Client.get_token(client(type), params)
  end
end

After these two refactors, you realize that you never needed with in the first place!

def fetch_token(type, params \\ []) do
  params = Keyword.merge(required_get_token_params(type), params)

  OAuth2.Client.get_token(client(type), params)
end

Now, this may seem overly pedantic, but many bugs hide in plain sight inside verbose code. Eliminating branching in your code is a great strategy for reducing complexity.

Measuring test coverage with Mix

I am accustomed to using tools like codecov and coveralls to measure test coverage. But Mix has built-in test coverage calculations.

mix test --cover
Generating cover results ...

Percentage | Module
    75.00% | Chameleon
   100.00% | Chameleon.CMYK
   100.00% | Chameleon.Color.CMYK.Any
   100.00% | Chameleon.PantoneToHex
    98.15% | Chameleon.RGB
   100.00% | Chameleon.RGB888
    87.50% | Chameleon.Util
   100.00% | ChameleonTest.Case
    97.25% | Total

It prints out a great summary, but it also generates HTML files that detail which lines are covered and which are not.

open cover/Elixir.Chameleon.Util.html

How a call works with a process alias

A coworker and I were discussing the mechanics of GenServer.reply/2, which led to a conversation about what is passed as the second argument to a GenServer.handle_call/3 callback.

def handle_call(:action, from, state)

What is in from? Well, typically, it is a tuple of the calling process’s pid and a unique ref for the call: {pid(), reference()}. You can see this in action inside OTP’s implementation of the call mechanism:

Mref = erlang:monitor(process, Process),
Process ! {Label, {self(), Mref}, Request},
receive
    {Mref, Reply} ->
        erlang:demonitor(Mref, [flush]),
        {ok, Reply}
end

The process creates the ref, makes the call, then blocks with a receive until the callee responds or it times out. The ref is essential so that it knows it’s receiving a reply to the message, instead of any arbitrary message.
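Holding on to from is exactly what makes deferred replies work. A minimal sketch (module name invented): return {:noreply, state} from handle_call/3, stash from, and answer later with GenServer.reply/2:

```elixir
defmodule Deferred do
  use GenServer

  def start_link(opts \\ []), do: GenServer.start_link(__MODULE__, :ok, opts)

  @impl true
  def init(:ok), do: {:ok, %{}}

  @impl true
  def handle_call(:action, from, state) do
    # `from` is {pid, ref}; keep it around and reply asynchronously later
    send(self(), {:finish, from})
    {:noreply, state}
  end

  @impl true
  def handle_info({:finish, from}, state) do
    GenServer.reply(from, :done)
    {:noreply, state}
  end
end

{:ok, pid} = Deferred.start_link()
GenServer.call(pid, :action)
#=> :done
```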

If you call the GenServer not by a pid, but by using an atom alias, the call is a little different. Instead of just matching on the pid and ref, it also throws in an :alias atom with the ref, so that it can use slightly different dispatch logic.

Tag = [alias | Mref],
erlang:send(Process, {Label, {self(), Tag}, Request}, [noconnect]),
...
    {[alias | Mref], Reply} ->
        erlang:demonitor(Mref, [flush]),
        {ok, Reply};


Relying on external resources

I have been working on a feature that depends on an external source for some data. The data file has to be built in a different system and essentially vendored into this one.

Elixir has a way of noting inside a module that it depends on an external resource. Doing so gives tooling insight into the dependency. For instance, Mix will recompile a module when one of its external resources changes.

defmodule App.SomeModule do
  @external_resource Path.join("src", "filename.ex")
end

`@external_resource` expects a path in the form of a binary.
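As a small self-contained sketch (the file path and contents are invented), a module can bake an external file in at compile time and tag it with `@external_resource` so the compiler tracks it:

```elixir
# Illustration only: create the "vendored" file first so the module below can compile
path = Path.join(System.tmp_dir!(), "greeting.txt")
File.write!(path, "hello from a vendored file\n")

defmodule Greeting do
  @greeting_path Path.join(System.tmp_dir!(), "greeting.txt")
  @external_resource @greeting_path

  # Read once at compile time; Mix recompiles this module when the file changes
  @greeting @greeting_path |> File.read!() |> String.trim()

  def greeting, do: @greeting
end

Greeting.greeting()
#=> "hello from a vendored file"
```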

Moar backtrace!

If you’re digging through logs and an exception is thrown in Elixir, sometimes the backtrace is entirely unhelpful, like this:

(jason 1.2.2) lib/jason.ex:199: Jason.encode_to_iodata!/2
(postgrex 0.15.9) lib/postgrex/type_module.ex:897: Postgrex.DefaultTypes.encode_params/3
(postgrex 0.15.9) lib/postgrex/query.ex:75: DBConnection.Query.Postgrex.Query.encode/3
(db_connection 2.4.0) lib/db_connection.ex:1205: DBConnection.encode/5
(db_connection 2.4.0) lib/db_connection.ex:1305: DBConnection.run_prepare_execute/5
(db_connection 2.4.0) lib/db_connection.ex:574: DBConnection.parsed_prepare_execute/5
(db_connection 2.4.0) lib/db_connection.ex:566: DBConnection.prepare_execute/4
(postgrex 0.15.9) lib/postgrex.ex:251: Postgrex.query/4

Where did this exception actually happen?

Turns out the default depth is only 8 calls deep! The good news is it’s configurable.

:erlang.system_flag(:backtrace_depth, new_depth)

Bump that up to 20 or 30 and you’ll probably find the real culprit 😎

Compiler warnings as errors at test time

We use mix compile --warnings-as-errors in our CI linting step to catch potential issues before they hit production. This is great, but it’s a command I never run prior to pushing up a pull request, so sometimes warnings slip through and I have to do a fixup and push again. Wouldn’t it be great if these could be caught when you run your test suite, prior to the PR?

Johanna Larsson (@joladev) had a great solution! Add this to your test_helper.exs file and you can surface these at test time.

# test_helper.exs
Code.put_compiler_option(:warnings_as_errors, true)

Register modules for lookup with persistent term

There have been several times in my years with Elixir that I’ve found a need to define a collection of modules that can be looked up as a group using some form of tagged dispatch.

Imagine an Event module that has two fields, name and data. For each event name, there will be a different shape of data. When you persist these events, you may want to dispatch to a different Ecto.load function or some other means of casting your data. By listing the modules and filtering on an exported function, this can be done at runtime!

defmodule Event.Registry do
  @doc """
  Loads all `Event` modules into persistent term
  """
  @spec load() :: :ok
  def load do
    {:ok, modules} = :application.get_key(:my_app, :modules)

    :persistent_term.put(
      __MODULE__,
      Enum.filter(modules, &function_exported?(&1, :__event_name__, 0))
    )
  end

  @doc """
  Looks up the `Event` module that exports the given `name`

      iex> load()
      iex> lookup(:hello)

  """
  @spec lookup(name :: atom()) :: module() | nil
  def lookup(name) when is_atom(name) do
    :persistent_term.get(__MODULE__)
    |> Enum.find(fn module -> module.__event_name__() == name end)
  end
end

In my production version, I use the event name in the key so that we can optimize away the Enum.find.

Using Phoenix hooks to control parent DOM elements

I’m building a scrollable modal that overlays a screen that’s also scrollable. I find it to be a bit of an awkward UX if both the foreground and background are scrollable in this case, so I want to disable the background scrolling when the modal opens.

The problem is that the document body can’t see the state of my LiveView. Fortunately, LiveView (combined with Tailwind CSS in our case) can handle this in another way. Using hooks, we can tell our app to add a CSS class when our modal opens, and then remove the class on modal close.

<!-- root.html.eex -->
<body id="app">
  <%= @inner_content %>
</body>

And now we add our hook:

// assets/js/hooks/index.js
const hooks = {}

// Assumes Tailwind's "overflow-hidden" utility is what disables body scrolling
hooks.ToggleAppScroll = {
  mounted: () => {
    document.getElementById("app").classList.add("overflow-hidden")
  },
  destroyed: () => {
    document.getElementById("app").classList.remove("overflow-hidden")
  }
}

export default hooks

And then in our modal component:

  def render(assigns) do
    ~L"""
    <div id="modal" phx-hook="ToggleAppScroll" class="bg-gray-300 bg-opacity-50 fixed top-0">
      <!-- Modal content goes here -->
    </div>
    """
  end

And now any new component that wants to disable scrolling of the app simply has to add phx-hook="ToggleAppScroll" to its attributes. The phx-hook lifecycle will handle the rest.

Avoid variables in your LiveView leex templates

In my Phoenix templates, I have a tendency to use variables, especially for functions where I want to return multiple values. For example:

# Template
  <% {color_class, indicator} = status_indicator(@match) %>
  <p class="<%= color_class %>">
    <%= indicator %>
  </p>

# View
def status_indicator(match) do
  case Match.status(match) do
    :prematch -> {"text-yellow-500", "prematch"}
    :live -> {"text-green-500", "live"}
  end
end

This is fine for a normal template, but with Phoenix LiveView, you’re running into a LiveEEx pitfall:

Avoid defining local variables, except within for, case, and friends

Using local variables like this makes Phoenix LiveView opt out of change tracking, which means sending data over the wire every time.

Instead, use multiple functions or call into another template:

# Multiple functions
  <p class="<%= status_indicator_color(@match) %>">
    <%= status_indicator(@match) %>
  </p>

# If your operation is expensive, do it once and call into another template
  <%= render MatchView, "status_indicator.html", status: Match.status(@match) %>

# other template
<p class="<%= status_color(@status) %>">
  <%= status_indicator(@status) %>
</p>

# View
def status_color(status) do
  case status do
    :prematch -> "text-yellow-500"
    :live -> "text-green-500"
  end
end

def status_indicator(status) do
  # ...
end

Using trigrams for better searches in Postgres

Having used Elasticsearch in the past, I thought it was the best and easiest way to handle fuzzy searches. Today I discovered a Postgres extension called “pg_trgm” that might prevent you from needing an Elastic instance after all. Postgres is actually very good at text searches using ILIKE, but they are optimized for terms that are left-anchored (e.g. ILIKE 'term%', not ILIKE '%erm%'). Trigrams work the same no matter where the match is in the column. In addition, each match gets a weight expressing how close it is.

CREATE EXTENSION IF NOT EXISTS pg_trgm;

CREATE INDEX names_last_name_idx ON names USING GIN(last_name gin_trgm_ops);

To see what the index looks like:

select show_trgm('resudek');
# {  r, re,dek,ek ,esu,res,sud,ude}

(these are the indexed trigrams!)

And to perform a search with weighting:

select last_name, similarity('dek', last_name) from names;
# last_name | similarity
# resudek   | 0.2
# rezutek   | 0.090909
# johnson   | 0

Setting the default editor

I just changed my OS from Ubuntu to PopOS. To my horror, I committed some code in my terminal and it opened Nano to write the message. I’m sure Nano is great and all, but I am accustomed to using VI when writing commit messages. To see the available editors on my system:

><> update-alternatives --list editor

And then set the one I want:

><> sudo update-alternatives --set editor /usr/bin/vim.tiny

always use sudo when typing in commands you found on the internet.

Use Makeup to display data in your Phoenix UI

Here at Simplebet, we work on a product that is primarily centered around data. For our internal tools, we often want to be able to view the contents of raw messages for inspection, debugging, and testing. By combining Phoenix templates with the Makeup source code highlighter, it couldn’t be easier.

To get started, add makeup and makeup_elixir to your project. (Fun fact: Makeup is the tool that ex_doc uses to generate the beautifully printed source code in all Elixir documentation.)

 {:makeup, "~> 1.0"},
 {:makeup_elixir, "~> 0.15.0"},

Then you can render your data like this

<div class="m-2 overflow-hidden shadow-lg">
  <style>
    <%= raw Makeup.stylesheet(Makeup.Styles.HTML.StyleMap.friendly_style()) %>
  </style>
  <%= raw Makeup.highlight(inspect(@message.message, pretty: true)) %>
</div>

The best part, Makeup has a list of stylesheets that you can choose from. I chose “friendly”, but there are many more. Enjoy!

Working with nested associations in LiveView

Creating forms for nested associations in LiveView can be intimidating and a head-scratcher at first. It turns out it is actually very easy!

In my sports example, I have a Match which has many PlayersInMatch, which in turn has many PlayersInQuarter. We start with a changeset around the match, Ecto.Changeset.change(match), with everything preloaded.

The trick is that, in order to update everything in the handle_event callback, you need to make sure the ids of all the associations are present in the form, and you do this by rendering hidden_inputs_for/1 at each level.

<%= for player <- @players do %>
  <%= hidden_inputs_for(player) %>
  <li class=""><%= player_name(player) %></li>
  <%= for piq <- inputs_for(player, :players_in_quarter) do %>
    <%= hidden_inputs_for(piq) %>
    <li class="border pl-2 py-1"><%= number_input piq, :points %></li>
  <% end %>
<% end %>

When the callback is called, you will get all of the nested associations in the Match. Then it is as simple as:

def handle_event("validate", %{"match" => params}, socket) do
  changeset =
    socket.assigns.match
    |> Ecto.Changeset.cast(params, [])
    |> Ecto.Changeset.cast_assoc(:players_in_match, with: &PlayerInMatch.update_changeset/2)

  {:noreply, assign(socket, :changeset, changeset)}
end


Checking whether a file is printable with String.valid?

I was recently working on a project that involved printing the contents of a file in an Elixir app. This is generally simple, but an issue arose when trying to print the contents of a binary file. The unusual bytes caused errors while trying to print.

While there is no way to be 100% sure a file is binary (MIME type or even file extension can help), one way to at least ensure we could print the file contents in Elixir was to use String.valid?/1.

This is the entire implementation from the Elixir 1.11 source:

  def valid?(<<_::utf8, t::binary>>), do: valid?(t)
  def valid?(<<>>), do: true
  def valid?(_), do: false

You can see it tries to match each character as UTF-8. If it reaches the end of the string, it is valid. Super simple, right?
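A quick check in iex shows the behavior:

```elixir
String.valid?("magic")
#=> true

# 0xFF can never appear in well-formed UTF-8
String.valid?(<<0xFF, 0xFE>>)
#=> false
```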

Respecting XDG Settings

The XDG spec is a way for users to define where files get created. I was recently working on a feature that stores a local cache and local configs, and found an easy way to determine that path using Elixir/Erlang: :filename.basedir/2 was added in OTP 19 for just this purpose.

iex(1)> :filename.basedir(:user_config, "simplebet")
"/home/todd/.config/simplebet"

iex(2)> System.put_env("XDG_CONFIG_HOME", "/home/todd/configs/")
:ok

iex(3)> :filename.basedir(:user_config, "simplebet")
"/home/todd/configs/simplebet"

Notice when the environment variable “XDG_CONFIG_HOME” is present, Erlang uses that value to build the path.

Persistent Term - another in memory data store

tl;dr: persistent term is very fast for reads; slower for updates, writes, and deletes.

Erlang added a new option for k/v storage in OTP 21.2 called persistent_term. The major difference between it and ETS is that persistent term is highly optimized for reading terms (at the expense of writing and updating). When a term is updated or deleted, a global GC pass runs to scan for any process using that term.

The API is very simple, e.g.:

:persistent_term.put({:globally, :uniq, :key}, :some_term)
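Reads are just as simple, and :persistent_term.get/2 (OTP 21.3+) accepts a default for missing keys:

```elixir
:persistent_term.put({:globally, :uniq, :key}, :some_term)

:persistent_term.get({:globally, :uniq, :key})
#=> :some_term

# get/2 returns the default instead of raising for unknown keys
:persistent_term.get({:missing, :key}, :default)
#=> :default
```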

When to use the Decimal library over floats? 🤔

Today I was wondering why we were using the Decimal library instead of my favorite faulty friend, the IEEE 754 floating-point number. Well, it says it right in the readme:

Arbitrary precision decimal arithmetic.

But what is arbitrary precision, Dave?! 🥱

Well, others can explain it better than I can, but let me illustrate with an example!

iex(1)> 0.1 + 0.2
0.30000000000000004

Never fear: the Decimal library provides arbitrary-precision decimals for us!

iex(2)> Decimal.add(Decimal.from_float(0.1), Decimal.from_float(0.2))
#Decimal<0.3>
Load data from staging db to local db

Connect to remote server

$ psql
\c staging_server

Extract data from staging to your local filesystem

\copy (SELECT * from post where id=12) to posts.csv csv header;
\copy (SELECT * from comments where post_id=12) to comments.csv csv header;

Connect to local server

\c my_app_dev

Import data

\copy posts from posts.csv DELIMITER ',' CSV header;
\copy comments from comments.csv DELIMITER ',' CSV header;


Handy when you need to extract only some rows from staging or prod to try something locally

Camel Case or Snake Case?

I recently had a need to convert a string to snake case. I remember Rails had the ActiveSupport Inflector class that made this easy, but I couldn’t remember ever seeing this in Elixir (outside of the Inflex library). That’s when I discovered some string helpers in the Macro module.

iex> Macro.camelize("internal_representation_value")
"InternalRepresentationValue"

iex> Macro.underscore("InternalRepresentationValue")
"internal_representation_value"

Path of Least Resistance

I found that in the below example, I was running into parsing errors because the translation goes from YAML > JSON > HCL (Vault). I commented out the original code at the bottom and inserted straight HCL into the document so that we didn’t need to translate anymore. This is specific to the kubevault operator.

policyDocument: |
    path "{{.Values.datasource}}/data/some-app/*" {
      capabilities = ["read", "list"]
    }

  # policy:
  #   path:
  #     {{.Values.datasource}}/data/some-app/*:
  #       capabilities:
  #       - read
  #       - list

More robust access to the pipelined variable

Pipelines are a fantastic syntax for clarifying intent when transforming data.

The default |> syntax passes the pipelined value to the invoked function as the first argument, but does not provide a means to reference the value itself.

This means that transforming a pipelined value while introspecting its state cannot be done with the default syntax.

If the transform is simple, consider using an anonymous function and the capture operator, as in this example where a field in a map is “moved” to a new key:

map_with_moved_keys =
  %{foo: "bar"}
  |> (&Map.put_new(&1, :new_foo, &
  |> Map.drop([:foo])

#=> %{new_foo: "bar"}

Mitigating Timing Attacks

The tl;dr on timing attacks: when comparing 2 values, if your comparison operator returns as soon as it finds its first non-matching value, it is possible to determine the value by timing how fast it returns.

"ABC123" == "ABC012"
# if each character takes 1μs, this will return after 4μs. Thus, we know the first 3 chars are correct.

Plug.Crypto.secure_compare("ABC123", "ABC012")
# always returns in constant time

secure_compare/2 first checks whether the byte sizes are the same (if they aren’t, it returns faster). If the byte sizes match, the function takes longer, but always returns in constant time.
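Conceptually, the constant-time comparison works like this minimal sketch (the CT module is made up; Plug.Crypto’s implementation differs in details): XOR each byte pair and OR the results together, so every byte is examined regardless of mismatches.

```elixir
defmodule CT do
  import Bitwise

  # Constant-time equality for equal-sized binaries: any differing byte pair
  # leaves a nonzero bit in the accumulator.
  def secure_compare(a, b) when byte_size(a) == byte_size(b) do
    do_compare(a, b, 0) == 0
  end

  def secure_compare(_a, _b), do: false

  defp do_compare(<<x, rest_a::binary>>, <<y, rest_b::binary>>, acc) do
    do_compare(rest_a, rest_b, bor(acc, bxor(x, y)))
  end

  defp do_compare(<<>>, <<>>, acc), do: acc
end

CT.secure_compare("ABC123", "ABC012")
#=> false
CT.secure_compare("ABC123", "ABC123")
#=> true
```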

Avoid nesting configuration

Often, I will see configuration nested under some thematic element rather than under the configuration’s intended usage. Let’s imagine that I want to mock out my mailer during tests, so I’ll store the actual mailer in a module attribute at compile time and fall back to the real module.

# config/test.exs
import Config

config :my_blog, :content_management, [mailer: MockMailModule, minimum_words: 200]

While this configuration makes sense thematically, the usage is going to be very different.

defmodule MyBlog.Marketing do
  @mailer Application.compile_env(:my_blog, :content_management)[:mailer] || MailModule

  # send_email/1 is a stand-in for whatever your mailer exposes
  def send_marketing_email do
    @mailer.send_email("hi")
  end
end

This is fine, and it definitely works, but it would be simpler if we didn’t nest our configuration, and modeled it around the usage pattern.

# config/test.exs
import Config

config :my_blog, :content_management_mailer, MockMailModule
config :my_blog, :content_management_minimum_words, 200

Now we can leverage the default argument of compile_env/3

defmodule MyBlog.Marketing do
  @mailer Application.compile_env(:my_blog, :content_management_mailer, MailModule)

  # send_email/1 is a stand-in for whatever your mailer exposes
  def send_marketing_email do
    @mailer.send_email("hi")
  end
end

Inspecting Pipelines

When debugging a pipeline, you can inspect any of the intermediate steps using IO.inspect/2:

"Sphinx of black quartz, judge my vow."
|> String.downcase()
|> IO.inspect(label: "Downcased")
|> String.replace(~r/[[:punct:]]/, "")
|> IO.inspect(label: "Punctuation Removed")
|> String.split(" ")
|> IO.inspect(label: "Words")
|> Enum.reduce(%{}, fn word, acc ->
  Map.update(acc, word, 1, &(&1 + 1))
end)
|> IO.inspect(label: "Counts")

The label option formats the output to make it clear which step is being inspected:

Downcased: "sphinx of black quartz, judge my vow."
Punctuation Removed: "sphinx of black quartz judge my vow"
Words: ["sphinx", "of", "black", "quartz", "judge", "my", "vow"]
Counts: %{
  "black" => 1,
  "judge" => 1,
  "my" => 1,
  "of" => 1,
  "quartz" => 1,
  "sphinx" => 1,
  "vow" => 1

This works because IO.inspect/2 always returns the first argument passed to it!
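On Elixir 1.12+, Kernel.tap/2 generalizes the same trick to any side-effecting function:

```elixir
result =
  "hello world"
  |> tap(&IO.puts/1)   # prints the intermediate value...
  |> String.upcase()   # ...then the pipeline continues unchanged

result
#=> "HELLO WORLD"
```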

Parameterized ExUnit tests

In ExUnit, it is not immediately obvious how to do the same “test” using different parameters.

It can be tedious to write individual tests for each required field asserting the validation. It’s also difficult for future-you to determine if you have complete coverage.

The cheating way

Remove all the required fields from the source map before calling changeset and make one massive assert.

The better way

The solution I use here is to set two @tag test attributes on each test. The first, @tag field: field_name, is the property I’m testing against. The second, @tag message_attr: %{attr_name => nil}, is the value to assign that field before running the test.

You’ll see that these @tag values are available in the test context under the given tag names.

    [
      {:entity_name, :entity_name},
      {:entity_uuid, :team_uuid}
    ]
    |> Enum.each(fn {field_name, attr_name} ->
      @tag field: field_name
      @tag message_attr: %{attr_name => nil}
      test "when `#{field_name}` missing, invalid ... required", context do
        message = TestMessageHelpers.market_message(context.message_attr)

        %Changeset{valid?: false} = changeset = Subject.changeset(%Subject{}, message)

        assert changeset.errors == [{context.field, {"can't be blank", [validation: :required]}}]
      end
    end)

Performing magic in Elixir

Ancient Magic

iex> [109, 97, 103, 105, 99]
'magic'

In Elixir, there is a data type known as a charlist that can be confusing for beginners. A charlist is a list of integer codepoints (here, ASCII values) that can be rendered as a readable string.

Modern Magic

iex> <<109, 97, 103, 105, 99>>
"magic"

Strings are binaries: the same codepoints, but UTF-8 encoded and packed together as a contiguous sequence of bytes.

Which magic should I prefer?

If you’re writing Elixir, you almost always want the bitstring version when dealing with strings, since they are more memory efficient. If you’re using the String module, you’re dealing with bitstrings under the hood. As an Elixir developer, it is rare that you will need a charlist, and it will typically be due to interfacing with Erlang code.

For displaying bitstrings and charlists in IEx, read more in the Elixir Getting Started guide and the Inspect.Opts documentation.

Revealing the magic

As a poor magician, I will reveal my trick

iex> inspect("magic", binaries: :as_binary)
"<<109, 97, 103, 105, 99>>"
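To move between the two representations, the standard library has conversions in both directions (a quick sketch):

```elixir
# Double-quoted literals are binaries; single-quoted literals are charlists
true = is_binary("magic")
true = is_list('magic')

# String.to_charlist/1 and List.to_string/1 convert between them
'magic' = String.to_charlist("magic")
"magic" = List.to_string('magic')
```

Note that on recent Elixir versions (1.15+), IEx prints charlists with the ~c sigil, e.g. ~c"magic", instead of single quotes.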

types.SimpleNamespace in the python standard lib

In the python standard lib, there is a handy object called SimpleNamespace. It is an easy way to namespace other objects and comes with a nice __repr__ built in. I often find myself using it in tests. In situations where I need a generic object, it is more flexible than object because it allows creating and deleting attributes.

import types

foo = types.SimpleNamespace()
foo.bar = 42
foo.bar
# 42
del foo.bar
foo.bar
# AttributeError: 'types.SimpleNamespace' object has no attribute 'bar'

The SimpleNamespace object is roughly equivalent to the class below:

class SimpleNamespace:
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

    def __repr__(self):
        keys = sorted(self.__dict__)
        items = ("{}={!r}".format(k, self.__dict__[k]) for k in keys)
        return "{}({})".format(type(self).__name__, ", ".join(items))

    def __eq__(self, other):
        return self.__dict__ == other.__dict__
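Here is a quick sketch of why this is handy in tests (the attribute names are invented for illustration):

```python
import types

# Stand-in for a richer object, e.g. a stubbed HTTP response in a test
response = types.SimpleNamespace(status=200, body="ok")

assert response.status == 200

# __eq__ compares the underlying __dict__, so two namespaces with the
# same attributes compare equal -- convenient for test assertions
assert response == types.SimpleNamespace(status=200, body="ok")

# Attributes can be added and removed after construction,
# unlike a bare object() instance
response.elapsed = 0.25
del response.elapsed
```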

Using Dynamic queries in Ecto

When you have a Phoenix Controller and you need to do a query based on the params, you might end up with something like this:

defmodule App.PostController do
  def index(conn, params) do
    posts = App.Context.list_posts(params)
    render(conn, "index.html", posts: posts)
  end
end

defmodule App.Context do
  def list_posts(params) do
    query = Post

    query =
      if user_id = params["owner_id"] do
        query |> where([p], p.user_id == ^user_id)
      else
        query
      end

    Repo.all(query)
  end
end

There is a better way: dynamic queries!

defmodule App.Context do
  def list_posts(params) do
    Post
    |> where(^filter_where(params))
    |> Repo.all()
  end

  defp filter_where(params) do
    Enum.reduce(params, dynamic(true), fn
      {"owner_id", user_id}, dynamic ->
        dynamic([p], ^dynamic and p.user_id == ^user_id)

      {_, _}, dynamic ->
        dynamic
    end)
  end
end
Now, all your where clauses are in one place :)

Postgres Foreign Key checks permission denied

Foreign key checks are done as the owner of the target table, not as the user issuing the query.

This resulted in a permission error:

ProgrammingError: permission denied for schema example
LINE 1: SELECT 1 FROM ONLY "example"."table" x WHERE "id" OPERATOR(...
QUERY:  SELECT 1 FROM ONLY "example"."table" x WHERE "id" OPERATOR(pg_catalog.=) $1 FOR KEY SHARE OF x

When dumping from one environment to local for testing, be sure that the owner of the table has permissions on your local postgres. Since it’s local, just give the owner of the table superuser perms.
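As a sketch, assuming the owner role is named table_owner (both names here are placeholders; check the actual owner with \dt):

```sql
-- Make the table owner a superuser locally (fine for a throwaway dev DB)
ALTER ROLE table_owner WITH SUPERUSER;

-- Or, more narrowly, grant the owner access to the schema the FK targets
GRANT USAGE ON SCHEMA example TO table_owner;
```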


Use context functions for writing tests

When writing ExUnit tests that require setup, use describe blocks and context functions to your advantage!

defmodule App.FooTest do
  use ExUnit.Case

  describe "when there is a bar" do
    setup :single_bar

    test "you can get a bar", %{bar: bar} do
      assert %App.Bar{} = App.Context.get_bar(bar.id)
    end
  end

  describe "when there is a fancy bar" do
    setup :single_bar

    @tag bar_params: %{color: "orange"}
    test "you can get a fancy bar", %{bar: bar} do
      assert %App.Bar{} = App.Context.get_bar(bar.id)
    end
  end

  def single_bar(context) do
    params = context[:bar_params] || %{a: 1}
    {:ok, bar} = App.Context.create_bar(params)
    %{bar: bar}
  end
end

Don't use Map functions on structs

While it may seem like a good idea, Map functions should not be used on Elixir structs, as they can violate the struct’s definition. Specifically, you can use the Map API to add keys that don’t exist on the struct.

defmodule Test do
  defstruct [:foo]

test = %Test{}

# Adds a field :bar that doesn't exist on the struct
%{__struct__: Test, bar: :a, foo: nil} = Map.put(test, :bar, :a)

Instead, use the map update syntax, which validates that you’re updating an existing key. If the key doesn’t exist, it raises a KeyError:

%{test | bar: :a}
** (KeyError) key :bar not found in: %Test{foo: nil}
    (stdlib 3.12.1) :maps.update(:bar, :a, %Test{foo: nil})
    (stdlib 3.12.1) erl_eval.erl:256: anonymous fn/2 in :erl_eval.expr/5
    (stdlib 3.12.1) lists.erl:1263: :lists.foldl/3
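When the fields arrive as dynamic data rather than literals, Kernel.struct!/2 gives the same protection as the update syntax, while struct/2 silently drops unknown keys (a quick sketch):

```elixir
defmodule Test do
  defstruct [:foo]
end

# struct!/2 only accepts keys defined on the struct
%Test{foo: 1} = struct!(%Test{}, foo: 1)

# struct/2 quietly ignores unknown keys instead of raising
%Test{foo: nil} = struct(%Test{}, bar: :a)

# struct!(%Test{}, bar: :a) raises KeyError, just like %{test | bar: :a}
```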


Given the embedded schema

embedded_schema do
  field(:mame, :string)
end

Using it from the following code raises a pattern match error that can be difficult to diagnose. Notice that the schema definition uses :mame with an “M”, while @root_fields uses the correct (but mismatched) :name with an “N”.

@root_fields [:name]

parsed = Jason.decode!(data)

changeset =
  %__MODULE__{}
  |> Ecto.Changeset.cast(parsed, @root_fields)

I got sidetracked thinking that the cast was unhappy because the passed data was using string keys