Today I Learned

31 posts about #elixir

How to grasp the business logic with unique_index

How do I enforce, with a unique index, that a user can get at most one ticket for a paid conference, while remaining flexible enough to let the same user get many tickets for a free conference?

What are our goals here?

  • When a paid conference is sent to Ticket.changeset/1, ensure status: :paid and create at most one ticket per user.
  • When a free conference is sent to Ticket.changeset/1, ensure status: :free and allow many tickets per user.

Here is how we can encode that business logic in a migration:

  create table(:tickets) do
    add :conference_id, references(:conferences), null: false
    add :user_id, references(:users), null: false
    add :status, :string, null: false, default: "free"
  end

  create unique_index(:tickets, [:conference_id, :user_id, :status], where: "status = 'paid'")

How can we play?

iex> Ticket.changeset(%{conference: %{is_paid: false}, user: %{...}, status: :free}) |> Repo.insert()
[debug] QUERY OK
{:ok, %Ticket{id: 1, status: :free, conference_id: 1, user_id: 1}}
iex> Ticket.changeset(%{conference: %{is_paid: false}, user: %{...}, status: :free}) |> Repo.insert()
[debug] QUERY OK
{:ok, %Ticket{id: 2, status: :free, conference_id: 1, user_id: 1}}
iex> Ticket.changeset(%{conference: %{is_paid: true}, user: %{...}, status: :paid}) |> Repo.insert()
[debug] QUERY OK
{:ok, %Ticket{id: 3, status: :paid, conference_id: 2, user_id: 1}}
iex> Ticket.changeset(%{conference: %{is_paid: true}, user: %{...}, status: :paid}) |> Repo.insert()
** (Ecto.ConstraintError)
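To surface that violation as a changeset error instead of a raised Ecto.ConstraintError, the changeset can declare the constraint. A sketch, assuming Ecto's default index name for these columns:

```elixir
def changeset(attrs) do
  %Ticket{}
  |> Ecto.Changeset.cast(attrs, [:conference_id, :user_id, :status])
  |> Ecto.Changeset.unique_constraint(
    [:conference_id, :user_id, :status],
    # assumed: Ecto's default name for this index; adjust if yours differs
    name: :tickets_conference_id_user_id_status_index
  )
end
```

With this in place, Repo.insert/1 returns {:error, changeset} instead of raising.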


A newline between examples in doctests matters.

The following module with doctest generates 1 test and makes 2 assertions in it.

defmodule Foo do
  @doc """
  Does foo

  ## Examples

      iex> Foo.bar(1)
      1
      iex> Foo.bar(2)
      2
  """
  def bar(a), do: a
end

defmodule FooTest do
  use ExUnit.Case, async: true
  doctest Foo
end

Roughly an equivalent of:

defmodule FooTest do
  use ExUnit.Case, async: true

  test "" do
    assert Foo.bar(1) === 1
    assert Foo.bar(2) === 2
  end
end

However, if we separate the examples with a newline:

defmodule Foo do
  @doc """
  Does foo

  ## Examples

      iex> Foo.bar(1)
      1

      iex> Foo.bar(2)
      2
  """
  def bar(a), do: a
end

the doctest would generate 2 tests with a single assertion in each.

Thus, when we run mix test the number of doctests differs.

That could also be verified by calling in iex -S mix an undocumented function ExUnit.DocTest.__doctests__/2 which returns the list of generated ASTs for tests.

iex> ExUnit.DocTest.__doctests__(Foo, only: [bar: 1])

{" (1)",
   value =
   expected = 1
   formatted = "iex>"
   last_expr = ""
   expected_expr = "1"
   stack = [{Foo, :__MODULE__, 0, line: 7, file: "lib/foo.ex"}]
   ExUnit.DocTest.__test__(value, expected, formatted, last_expr, expected_expr, stack)
{" (2)",

Idempotence in Distributed Systems

Sooner or later, you will come across the term “idempotence” in the context of distributed systems. What does it mean there?

Let’s consider writing a REST API with a POST request. If the same create request is sent multiple times, the system should create the resource only once (or update it) for a given unique entity.

A more specific example of this in a distributed system could be a payment system. A payment operation will be considered idempotent if we attempt to apply the same charge or payment multiple times, but it only gets processed once.

Creating idempotent operations in a distributed system can be challenging, especially if implemented in the application layer. If possible, you can push this responsibility to your database and ensure idempotence with features like unique indices.
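As a sketch of the database-level approach with Ecto (the schema and index columns here are illustrative): with a unique index in place, insert/2 can be made a safe-to-retry no-op on conflict:

```elixir
# Retrying this insert is idempotent: a second attempt matches the
# unique index on [:user_id, :conference_id] and does nothing.
Repo.insert(
  %Ticket{user_id: 1, conference_id: 2, status: :paid},
  on_conflict: :nothing,
  conflict_target: [:user_id, :conference_id]
)
```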

This is what I’ve been learning and I’m excited to learn more!

Using the Keyword module for options

You should consider using Keyword.fetch!/2 and Keyword.get/3 for options to APIs.

Without options

defmodule MyApp do
  def config(name, author \\ "Herminio Torres", description \\ "Description") do
    %{
      name: name,
      author: author,
      description: description
    }
  end
end

iex> MyApp.config   # hit Tab to see the generated arities
config/1    config/2    config/3
iex> MyApp.config("my_app")
%{
  author: "Herminio Torres",
  description: "Description",
  name: "my_app"
}
iex> MyApp.config("my_app", "Change")
%{
  author: "Change",
  description: "Description",
  name: "my_app"
}
  • Creates a config function with many arities
  • You are forced to pass all preceding parameters when you only intend to change the last default argument.

With Options

defmodule MyApp do
  def config(opts) do
    name = Keyword.fetch!(opts, :name)
    author = Keyword.get(opts, :author, "Herminio Torres")
    description = Keyword.get(opts, :description, "Description")

    %{
      name: name,
      author: author,
      description: description
    }
  end
end

iex> MyApp.config([])
** (KeyError) key :name not found in: []
    (elixir 1.12.3) lib/keyword.ex:420: Keyword.fetch!/2
    iex:3: MyApp.config/1
iex> MyApp.config([name: "my_app"])
%{
  author: "Herminio Torres",
  description: "Description",
  name: "my_app"
}
iex> MyApp.config([name: "my_app", description: "Change"])
%{
  author: "Herminio Torres",
  description: "Change",
  name: "my_app"
}
  • The raised error tells you which options are required
  • Keyword lists give you named arguments
  • Only one function arity is exposed


Taming data with Ecto.Enum and Ecto.Type

A coworker and I discussed taking advantage of Ecto.Enum and Ecto.Type instead of pulling in one more dependency.

The schema:

defmodule Blog.Category do
  use Blog.Schema

  schema "categories" do
    field(:name, Ecto.Enum, values: [:til, :elixir, :ecto])
  end
end

Divide & conquer with schema reflection and Ecto.Type.load/2:

iex> type = Blog.Category.__schema__(:type, :name)
{:parameterized, Ecto.Enum,
 %{
   mappings: [til: "til", elixir: "elixir", ecto: "ecto"],
   on_cast: %{"til" => :til, "elixir" => :elixir, "ecto" => :ecto},
   on_dump: %{til: "til", elixir: "elixir", ecto: "ecto"},
   on_load: %{"til" => :til, "elixir" => :elixir, "ecto" => :ecto},
   type: :string
 }}
iex> Ecto.Type.load(type, "unknown")
:error
iex> Ecto.Type.load(type, "ecto")
{:ok, :ecto}
iex> Ecto.Type.load(type, :ecto)
:error

Meanwhile, Ecto.Enum gives us reflection helpers:

iex> Ecto.Enum.values(Blog.Category, :name)
[:til, :elixir, :ecto]
iex> Ecto.Enum.dump_values(Blog.Category, :name)
["til", "elixir", "ecto"]
iex> Ecto.Enum.mappings(Blog.Category, :name)
[til: "til", elixir: "elixir", ecto: "ecto"]

Also, we now have the same API that the ecto_enum library provides:

iex> valid? = fn list, value -> Enum.any?(list, fn item -> item == value end) end
#Function<43.40011524/2 in :erl_eval.expr/5>
iex> categories = Ecto.Enum.dump_values(Blog.Category, :name)
["til", "elixir", "ecto"]
iex> category = "unknown"
"unknown"
iex> if valid?.(categories, category), do: {:ok, String.to_existing_atom(category)}, else: :error
:error
iex> category = "ecto"
"ecto"
iex> if valid?.(categories, category), do: {:ok, String.to_existing_atom(category)}, else: :error
{:ok, :ecto}
{:ok, :ecto}


Living with-out

There is a tendency in Elixir to reach for the with macro to handle control flow.

def fetch_token(type, params \\ []) do
  with params <- Keyword.merge(required_get_token_params(type), params),
       {:ok, client} <- OAuth2.Client.get_token(client(type), params) do
    {:ok, client}
  else
    err -> err
  end
end

At first glance, this code looks succinct, clean, and readable. Then you realize it invites two refactors.

With returns the unmatched value by default

def fetch_token(type, params \\ []) do
  with params <- Keyword.merge(required_get_token_params(type), params),
       {:ok, client} <- OAuth2.Client.get_token(client(type), params) do
    {:ok, client}
  end
end

If there is no else condition, you don’t need to double match on the last clause.

def fetch_token(type, params \\ []) do
  with params <- Keyword.merge(required_get_token_params(type), params) do
    OAuth2.Client.get_token(client(type), params)
  end
end

After these two refactors, you realize that you never needed with in the first place!

def fetch_token(type, params \\ []) do
  params = Keyword.merge(required_get_token_params(type), params)

  OAuth2.Client.get_token(client(type), params)
end

Now, this may seem overly pedantic, but many bugs can hide in plain sight within verbose code. Eliminating branching in your code is a great strategy for reducing complexity.

Measuring test coverage with Mix

I am accustomed to using tools like codecov and coveralls to measure test coverage. But Mix has built-in test coverage calculations.

mix test --cover
Generating cover results ...

Percentage | Module
    75.00% | Chameleon
   100.00% | Chameleon.CMYK
   100.00% | Chameleon.Color.CMYK.Any
   100.00% | Chameleon.PantoneToHex
    98.15% | Chameleon.RGB
   100.00% | Chameleon.RGB888
    87.50% | Chameleon.Util
   100.00% | ChameleonTest.Case
    97.25% | Total

It prints out a great summary, but also generates HTML files that detail which lines are covered, and which are not.

open cover/Elixir.Chameleon.Util.html

How GenServer.call works with a process alias

A coworker and I were discussing the mechanics of GenServer.reply/2, which led to a conversation about what is passed as the second argument to a GenServer.handle_call/3 callback.

def handle_call(:action, from, state)

What is in from? Well, typically, it is a tuple of the calling process’s pid and a unique ref for the call: {pid(), ref()}. You can see this in action inside the implementation of OTP:

Mref = erlang:monitor(process, Process),
Process ! {Label, {self(), Mref}, Request},
receive
    {Mref, Reply} ->

The process creates the ref, makes the call, then blocks with a receive until the callee responds or it times out. The ref is essential so that it knows it’s receiving a reply to the message, instead of any arbitrary message.

If you call the GenServer not by a pid, but by using an atom alias, the call is a little different. Instead of just matching on the pid and ref, it also throws in an :alias atom with the ref, so that it can use slightly different dispatch logic.

Tag = [alias | Mref],

erlang:send(Process, {Label, {self(), Tag}, Request}, [noconnect]),

    {[alias | Mref], Reply} ->
        erlang:demonitor(Mref, [flush]),
        {ok, Reply};


Relying on external resources

I have been working on a feature that depends on an external source for some data. The data file has to be built in a different system and essentially vendored into this one.

Elixir has a way of noting inside of a module that it depends on that external resource. Doing that allows tools to have insight on the dependency. For instance, when the module tagging the external resource is compiled, the external resource may be compiled as well.

defmodule App.SomeModule do
  @external_resource Path.join("src", "filename.ex")
end

`@external_resource` expects a path in the form of a binary.
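A sketch of a common pattern (the module and file names here are hypothetical): combine @external_resource with a compile-time read, so tooling knows to recompile the module whenever the vendored file changes.

```elixir
defmodule App.Words do
  @words_path Path.join("priv", "words.txt")
  @external_resource @words_path

  # Read once at compile time; @external_resource tells the compiler
  # this module must be recompiled whenever words.txt changes.
  @words @words_path |> File.read!() |> String.split("\n", trim: true)

  def words, do: @words
end
```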

Moar backtrace!

If you’re digging through logs and an exception is thrown in Elixir, sometimes the backtrace is entirely unhelpful, like this one:

(jason 1.2.2) lib/jason.ex:199: Jason.encode_to_iodata!/2
(postgrex 0.15.9) lib/postgrex/type_module.ex:897: Postgrex.DefaultTypes.encode_params/3
(postgrex 0.15.9) lib/postgrex/query.ex:75: DBConnection.Query.Postgrex.Query.encode/3
(db_connection 2.4.0) lib/db_connection.ex:1205: DBConnection.encode/5
(db_connection 2.4.0) lib/db_connection.ex:1305: DBConnection.run_prepare_execute/5
(db_connection 2.4.0) lib/db_connection.ex:574: DBConnection.parsed_prepare_execute/5
(db_connection 2.4.0) lib/db_connection.ex:566: DBConnection.prepare_execute/4
(postgrex 0.15.9) lib/postgrex.ex:251: Postgrex.query/4

Where did this exception actually happen?

Turns out the default depth is only 8 calls deep! The good news is it’s configurable.

:erlang.system_flag(:backtrace_depth, new_depth)

Bump that up to 20 or 30 and you’ll probably find the real culprit 😎
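If you want the deeper backtraces everywhere, not just in the current shell, one place to set the flag is your application's start callback (the module names here are illustrative):

```elixir
defmodule MyApp.Application do
  use Application

  @impl true
  def start(_type, _args) do
    # Every backtrace captured after this point goes deeper than the
    # default 8 frames.
    :erlang.system_flag(:backtrace_depth, 30)

    Supervisor.start_link([], strategy: :one_for_one, name: MyApp.Supervisor)
  end
end
```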

Compiler warnings as errors at test time

We use mix compile --warnings-as-errors in our CI linting step to catch potential issues before they hit production. This is great but it’s a command I never run prior to pushing up a pull request, so sometimes they slip through and I have to do a fixup and push again. Wouldn’t it be great if these could be caught when you ran your test suite to catch them prior to PR?

Johanna Larsson (@joladev) had a great solution! Add this to your test_helper.exs file and you can surface these at test time.

# test_helper.exs
Code.put_compiler_option(:warnings_as_errors, true)

Register modules for lookup with persistent term

There have been several times in my years with Elixir that I’ve found a need to define a collection of modules that can be looked up as a group using some form of tagged dispatch.

Imagine an Event module that has two fields, name and data. For each event name, there will be a different shape of data. When you persist these events, you may want to dispatch to a different Ecto.load function or some other means of casting your data. By listing the application’s modules and filtering on an exported function, this can be done at runtime!

defmodule Event.Registry do
  @doc """
  Loads all `Event` modules into persistent term
  """
  @spec load() :: :ok
  def load do
    {:ok, modules} = :application.get_key(:my_app, :modules)
    :persistent_term.put(__MODULE__, Enum.filter(modules, &function_exported?(&1, :__event_name__, 0)))
  end

  @doc """
  Looks up the `Event` module that exports the given `name`

      iex> load()
      iex> lookup(:hello)
  """
  @spec lookup(name :: atom()) :: module() | nil
  def lookup(name) when is_atom(name) do
    :persistent_term.get(__MODULE__) |> Enum.find(fn module -> module.__event_name__() == name end)
  end
end
In my production version, I use the event name in the key so that we can optimize away the Enum.find.

Avoid variables in your LiveView leex templates

In my phoenix templates, I have a tendency to use variables, especially for functions where I want to return multiple values. For example

# Template
  <% {color_class, indicator} = status_indicator(@match) %>
  <p class="<%= color_class %>">
    <%= indicator %>
  </p>

# View
def status_indicator(match) do
  case Match.status(match) do
    :prematch -> {"text-yellow-500", "prematch"}
    :live -> {"text-green-500", "live"}
  end
end

This is fine for a normal template, but with Phoenix LiveView, you run into a LiveEEx pitfall:

Avoid defining local variables, except within for, case, and friends

Using local variables like this makes Phoenix LiveView opt out of change tracking, which means sending that data over the wire every time.

Instead, use multiple functions or call into another template:

# Multiple functions
  <p class="<%= status_indicator_color(@match) %>">
    <%= status_indicator(@match) %>
  </p>

# If your operation is expensive, do it once and call into another template
  <%= render MatchView, "status_indicator.html", status: Match.status(@match) %>

# other template
<p class="<%= status_color(@status) %>">
  <%= status_indicator(@status) %>
</p>

# View
def status_color(status) do
  case status do
    :prematch -> "text-yellow-500"
    :live -> "text-green-500"
  end
end

def status_indicator(status) do
  # ...
end

Use Makeup to display data in your Phoenix UI

Here at Simplebet, we work on a product that is primarily centered around data. For our internal tools, we often want to be able to view the contents of raw messages for inspection, debugging, and testing. By combining Phoenix templates with the Makeup source code highlighter, it couldn’t be easier.

To get started, add makeup and makeup_elixir to your project. (Fun fact, Makeup is the tool that ex_doc uses to generate that beautifully printed source code in all Elixir documentation)

 {:makeup, "~> 1.0"},
 {:makeup_elixir, "~> 0.15.0"},

Then you can render your data like this

<div class="m-2 overflow-hidden shadow-lg">
  <%= raw Makeup.stylesheet(Makeup.Styles.HTML.StyleMap.friendly_style()) %>
  <%= raw Makeup.highlight(inspect(@message.message, pretty: true)) %>
</div>

The best part: Makeup has a list of stylesheets that you can choose from. I chose “friendly”, but there are many more. Enjoy!

Working with nested associations in LiveView

Creating forms of nested associations in LiveView can be intimidating and a head-scratcher at first. It turns out it is actually very easy!

In my sports example, I have a Match which has many PlayersInMatch, each of which has many PlayersInQuarter. We start with a changeset around the match, Ecto.Changeset.change(match), with everything preloaded.

The trick is, in order to update everything in the handle_event callback, you need to make sure that the id of all the associations is present in the form, and you do this by rendering hidden_inputs_for(assoc) at each level.

<%= for player <- @players do %>
    <%= hidden_inputs_for(player) %>
    <li class=""><%= player_name(player) %></li>
        <%= for piq <- inputs_for(player, :players_in_quarter) do %>
            <%= hidden_inputs_for(piq) %>
            <li class="border pl-2 py-1"><%= number_input piq, :points %></li>
        <% end %>
<% end %>

When the callback is called, you will get all of the nested associations in the Match. Then it is as simple as:

def handle_event("validate", %{"match" => params}, socket) do
  changeset =
    # assuming the preloaded match lives in assigns
    socket.assigns.match
    |> Ecto.Changeset.cast(params, [])
    |> Ecto.Changeset.cast_assoc(:players_in_match, with: &PlayerInMatch.update_changeset/2)

  {:noreply, assign(socket, :changeset, changeset)}
end


I was recently working on a project that involved printing the contents of a file in an Elixir app. This is generally simple, but an issue arose when trying to print the contents of a binary file: the unusual bytes caused errors while printing.

While there is no way to be 100% sure a file is binary (mime or even file extension can help), one way to at least ensure we could print the file contents in Elixir was to use String.valid?.

This is the entire implementation from the Elixir 1.11 source:

  def valid?(<<_::utf8, t::binary>>), do: valid?(t)
  def valid?(<<>>), do: true
  def valid?(_), do: false

You can see it tries to decode each character as UTF-8. If it reaches the end of the string, the string is valid. Super simple, right?
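A sketch of that guard in use (the module name and fallback message are hypothetical):

```elixir
defmodule SafePrint do
  # Print the file only when its contents decode as valid UTF-8;
  # fall back to a short summary for binary data.
  def print_file(path) do
    case File.read(path) do
      {:ok, contents} ->
        if String.valid?(contents) do
          IO.puts(contents)
        else
          IO.puts("(binary file, #{byte_size(contents)} bytes)")
        end

      {:error, reason} ->
        IO.puts("could not read #{path}: #{inspect(reason)}")
    end
  end
end
```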

Persistent Term - another in memory data store

tl;dr persistent term is very fast for reads; slower for writes, updates, and deletes.

Erlang added a new option for kv storage in v21.2 called persistent_term. The major difference between it and ETS is that persistent term is highly optimized for reading terms (at the expense of writing and updating). When a term is updated or deleted, a global GC pass runs to scan for any process using that term.

The API is very simple, eg.

:persistent_term.put({:globally, :uniq, :key}, :some_term)
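Reading the term back, from any process, is where the speed is (the key here mirrors the put above):

```elixir
# put/2 copies the term into a global area (the slow part)...
:persistent_term.put({:globally, :uniq, :key}, :some_term)

# ...while get/1 fetches it with no copying at all (the fast part).
:some_term = :persistent_term.get({:globally, :uniq, :key})

# get/2 takes a default instead of raising for a missing key.
:fallback = :persistent_term.get({:no, :such, :key}, :fallback)
```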

When to use the Decimal library over floats? 🤔

Today I was wondering why we were using the Decimal library instead of my favorite faulty friend, the IEEE 754 floating-point number. Well, it says it right in the readme.

Arbitrary precision decimal arithmetic.

But what is arbitrary precision, Dave?! 🥱

Well, others can explain it better than I can, but let me illustrate with an example!

iex(7)> 0.1 + 0.2
0.30000000000000004

No fear, the Decimal library provides arbitrary-precision decimal arithmetic for us!

iex(1)> Decimal.add(Decimal.from_float(0.1), Decimal.from_float(0.2))
#Decimal<0.3>

Camel Case or Snake Case?

I recently needed to convert a camel-cased string to snake case. I remember Rails had the ActiveSupport Inflector class that made this easy, but I couldn’t remember ever seeing this in Elixir (outside of the Inflex library). That’s when I discovered some string helpers in the Macro module.

iex> Macro.camelize("internal_representation_value")
"InternalRepresentationValue"

iex> Macro.underscore("InternalRepresentationValue")
"internal_representation_value"

More robust access to the pipelined variable

Pipelines are a fantastic syntax for clarifying intent when transforming data.

The default |> syntax passes the pipelined value to the invoked function as the first argument, but does not provide a means to reference that value anywhere else in the call.

This means that transforming a pipelined value while also referencing it elsewhere in the same step cannot be done with the default syntax.

If the transform is simple, consider using an anonymous function and the capture operator, as in this example where a field in a map is “moved” to a new key:

map_with_moved_keys =
  %{foo: "bar"}
  |> (&Map.put_new(&1, :new_foo, &1.foo)).()
  |> Map.drop([:foo])

%{new_foo: "bar"}

Mitigating Timing Attacks

The tl;dr on timing attacks is that when comparing 2 values, if your comparison operator returns as soon as it finds its first non-matching value, it is possible to recover the value by measuring how quickly it returns.

"ABC123" == "ABC012"
# if each character takes 1μs, this will return after 4μs. Thus, we know the first 3 chars are correct.

Plug.Crypto.secure_compare("ABC123", "ABC012")
# always returns in constant time

secure_compare/2 first checks whether the byte sizes are equal (if they aren’t, it returns early). When the sizes match, it compares every byte, so for inputs of the same length it always runs in constant time.
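The usual trick, which this sketch illustrates (it is not Plug.Crypto's exact implementation), is to XOR every pair of bytes and OR the results together, so every byte is inspected no matter where a mismatch sits:

```elixir
defmodule ConstantTimeCompare do
  import Bitwise

  # Visits every byte pair regardless of mismatches; the accumulated
  # OR of the XORs is zero only when all bytes are equal.
  def equal?(a, b) when byte_size(a) == byte_size(b) do
    diff =
      Enum.zip(:binary.bin_to_list(a), :binary.bin_to_list(b))
      |> Enum.reduce(0, fn {x, y}, acc -> bor(acc, bxor(x, y)) end)

    diff == 0
  end

  # Differing lengths can return early: the length itself is not secret.
  def equal?(_, _), do: false
end
```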

Avoid nesting configuration

Often, I will see configuration nested under some thematic element rather than organized around the configuration’s intended usage. Let’s imagine that I want to mock out my mailer during tests, so I’ll store the actual mailer in a module attribute at compile time, falling back to the real module.

# config/test.exs
import Config

config :my_blog, :content_management, [mailer: MockMailModule, minimum_words: 200]

While this configuration makes sense thematically, the usage is going to be very different.

defmodule MyBlog.Marketing do
  @mailer Application.compile_env(:my_blog, :content_management)[:mailer] || MailModule

  # send/1 stands in for whatever your mailer's API looks like
  def send_marketing_email, do: @mailer.send("hi")
end

This is fine, and it definitely works, but it would be simpler if we didn’t nest our configuration, and modeled it around the usage pattern.

# config/test.exs
import Config

config :my_blog, :content_management_mailer, MockMailModule
config :my_blog, :content_management_minimum_words, 200

Now we can leverage the default argument of compile_env/3

defmodule MyBlog.Marketing do
  @mailer Application.compile_env(:my_blog, :content_management_mailer, MailModule)

  def send_marketing_email, do: @mailer.send("hi")
end

Inspecting Pipelines

When debugging a pipeline, you can inspect any of the intermediate steps using IO.inspect/2:

"Sphinx of black quartz, judge my vow."
|> String.downcase()
|> IO.inspect(label: "Downcased")
|> String.replace(~r/[[:punct:]]/, "")
|> IO.inspect(label: "Punctuation Removed")
|> String.split(" ")
|> IO.inspect(label: "Words")
|> Enum.reduce(%{}, fn word, acc ->
  Map.update(acc, word, 1, &(&1 + 1))
end)
|> IO.inspect(label: "Counts")

The label option formats the output to make it clear which step is being inspected:

Downcased: "sphinx of black quartz, judge my vow."
Punctuation Removed: "sphinx of black quartz judge my vow"
Words: ["sphinx", "of", "black", "quartz", "judge", "my", "vow"]
Counts: %{
  "black" => 1,
  "judge" => 1,
  "my" => 1,
  "of" => 1,
  "quartz" => 1,
  "sphinx" => 1,
  "vow" => 1
}

This works because IO.inspect/2 always returns the first argument passed to it!

Parameterized ExUnit tests

In ExUnit, it is not immediately obvious how to do the same “test” using different parameters.

It can be tedious to write individual tests for each required field asserting the validation. It’s also difficult for future-you to determine if you have complete coverage.

The cheating way

Remove all the required fields from the source map before calling changeset and make one massive assert.

The better way

The solution I use here is to set the @tag test attribute with two properties of the test. The first, @tag field: field_name, is the property I’m testing against. The second, @tag message_attr: %{attr_name => nil}, is the value to assign that field before running the test.

You’ll see that these @tag values are available in the test context under the given tag names:

[
  {:entity_name, :entity_name},
  {:entity_uuid, :team_uuid}
]
|> Enum.each(fn {field_name, attr_name} ->
  @tag field: field_name
  @tag message_attr: %{attr_name => nil}
  test "when `#{field_name}` missing, invalid ... required", context do
    message = TestMessageHelpers.market_message(context.message_attr)

    %Changeset{valid?: false} = changeset = Subject.changeset(%Subject{}, message)

    assert changeset.errors == [{context.field, {"can't be blank", [validation: :required]}}]
  end
end)

Performing magic in Elixir

Ancient Magic

iex> [109, 97, 103, 105, 99]
'magic'

In Elixir, there is a data type known as a charlist that can be confusing for beginners. Charlists are a list of codepoints (ASCII integer values), that can be rendered as a readable string.

Modern Magic

iex> <<109, 97, 103, 105, 99>>
"magic"

Strings in Elixir are binaries (a kind of bitstring): the same codepoints, UTF-8 encoded and packed together as a contiguous sequence of bytes.

Which magic should I prefer?

If you’re writing Elixir, you almost always want the bitstring version when dealing with strings, since they are more memory efficient. If you’re using the String module, you’re dealing with bitstrings under the hood. As an Elixir developer, it is rare that you will need a charlist, and it will typically be due to interfacing with Erlang code.

For displaying bitstrings and charlists in IEx, read more in the Elixir Getting Started guide and the Inspect.Opts documentation.

Revealing the magic

As a poor magician, I will reveal my trick

iex> inspect("magic", binaries: :as_binary)
"<<109, 97, 103, 105, 99>>"

Use context functions for writing tests

When writing ExUnit tests that require setup, use describe blocks and context functions to your advantage!

defmodule App.FooTest do
  use ExUnit.Case

  describe "when there is a bar" do
    setup :single_bar

    test "you can get a bar", %{bar: bar} do
      assert %App.Bar{} = App.Context.get_bar(bar.id)
    end
  end

  describe "when there is a fancy bar" do
    setup :single_bar

    @tag bar_params: %{color: "orange"}
    test "you can get a fancy bar", %{bar: bar} do
      assert %App.Bar{} = App.Context.get_bar(bar.id)
    end
  end

  def single_bar(context) do
    params = context[:bar_params] || %{a: 1}
    {:ok, bar} = App.Context.create_bar(params)
    %{bar: bar}
  end
end

Don't use Map functions on structs

While it may seem like a good idea, Map functions should not be used on Elixir structs, as they can lead to some violations of the data structure. Specifically, you can use the Map API to add keys that don’t exist on the struct.

defmodule Test do
  defstruct [:foo]

test = %Test{}

# Adds a field :bar that doesn't exist on the struct
%{__struct__: Test, bar: :a, foo: nil} = Map.put(test, :bar, :a)

Instead, use the Map update syntax, which validates that you’re using an existing key. If the key doesn’t exist, it raises a KeyError:

%{test | bar: :a}
** (KeyError) key :bar not found in: %Test{foo: nil}
    (stdlib 3.12.1) :maps.update(:bar, :a, %Test{foo: nil})
    (stdlib 3.12.1) erl_eval.erl:256: anonymous fn/2 in :erl_eval.expr/5
    (stdlib 3.12.1) lists.erl:1263: :lists.foldl/3
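When the fields arrive at runtime as a list or map (so the literal update syntax can't be written out), struct!/2 gives the same protection (the Config struct here is illustrative):

```elixir
defmodule Config do
  defstruct [:host, :port]
end

# struct!/2 accepts dynamic key-value pairs but still rejects
# keys that are not part of the struct.
config = struct!(%Config{}, host: "localhost", port: 4000)

# Unknown keys raise KeyError, just like the update syntax:
{:error, %KeyError{key: :nope}} =
  try do
    struct!(%Config{}, nope: true)
  rescue
    e in KeyError -> {:error, e}
  end
```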


Given the embedded schema

embedded_schema do
  field(:mame, :string)
end

Using it from the following code raises a pattern-match error that can be difficult to diagnose. Notice that the schema definition uses :mame with an “m”, while @root_fields uses the correct (but mismatched) :name with an “n”:

@root_fields [:name]

parsed = Jason.decode!(data)

%__MODULE__{}
|> Ecto.Changeset.cast(parsed, @root_fields)

I got sidetracked thinking that the cast was unhappy because the passed data was using string keys, when the real culprit was the typo in the schema’s field name.