Today I Learned

Measuring test coverage with Mix

I am accustomed to using tools like Codecov and Coveralls to measure test coverage, but Mix has built-in test coverage reporting.

mix test --cover
Generating cover results ...

Percentage | Module
-----------|--------------------------
    75.00% | Chameleon
   100.00% | Chameleon.CMYK
   100.00% | Chameleon.Color.CMYK.Any
...
   100.00% | Chameleon.PantoneToHex
    98.15% | Chameleon.RGB
   100.00% | Chameleon.RGB888
    87.50% | Chameleon.Util
   100.00% | ChameleonTest.Case
-----------|--------------------------
    97.25% | Total

It prints out a great summary, but it also generates HTML files that detail which lines are covered and which are not.

open cover/Elixir.Chameleon.Util.html

How GenServer.call/2 works with a process alias

A coworker and I were discussing the mechanics of GenServer.reply/2, which led to a conversation about what is passed as the second argument to a GenServer.handle_call/3 callback.

def handle_call(:action, from, state)

What is in from? Typically, it is a tuple of the calling process's pid and a unique ref for the call: {pid(), ref()}. You can see this in action inside the OTP implementation (gen.erl):

Mref = erlang:monitor(process, Process),
Process ! {Label, {self(), Mref}, Request},
receive
     {Mref, Reply} ->
            ...

The process creates the ref, makes the call, then blocks in a receive until the callee responds or the call times out. The ref is essential: it lets the caller know it is receiving a reply to this particular call, rather than some arbitrary message.
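The same monitor + ref + receive dance can be sketched in plain Elixir. This is a toy stand-in for what gen.erl does, not a real GenServer:

```elixir
# A bare process that answers one tagged call, mimicking the gen.erl protocol.
server =
  spawn(fn ->
    receive do
      {:call, {from, ref}, request} -> send(from, {ref, {:ok, request}})
    end
  end)

# The caller creates a unique ref (via the monitor), sends it along with
# the request, then blocks until a reply tagged with that exact ref arrives.
ref = Process.monitor(server)
send(server, {:call, {self(), ref}, :ping})

result =
  receive do
    {^ref, reply} ->
      # A reply tagged with our unique ref: this is for us.
      Process.demonitor(ref, [:flush])
      reply

    {:DOWN, ^ref, :process, _pid, reason} ->
      # The callee died before replying.
      {:error, reason}
  after
    5_000 -> {:error, :timeout}
  end
```

The `{^ref, reply}` pattern is what keeps an unrelated message in the mailbox from being mistaken for the reply.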

If you call the GenServer not by a pid, but by using an atom alias, the call is a little different. Instead of just matching on the pid and ref, it also throws in an :alias atom with the ref, so that it can use slightly different dispatch logic.

Tag = [alias | Mref],

erlang:send(Process, {Label, {self(), Tag}, Request}, [noconnect]),

receive
    {[alias | Mref], Reply} ->
        erlang:demonitor(Mref, [flush]),
        {ok, Reply};

Cool!

Relying on external resources

I have been working on a feature that depends on an external source for some data. The data file has to be built in a different system and essentially vendored into this one.

Elixir has a way of noting inside a module that it depends on that external resource. Doing that gives tools insight into the dependency. For instance, the Mix compiler will recompile a module when one of its external resources changes.

defmodule App.SomeModule do
@external_resource Path.join("src", "filename.ex")
...
end

`@external_resource` expects a path in the form of a binary.
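As a sketch (the path, module, and file contents here are made up), a module can embed a file's contents at compile time and be recompiled whenever that file changes:

```elixir
defmodule App.Greetings do
  # Hypothetical path; Mix recompiles this module when the file changes.
  @external_resource "priv/greetings.txt"

  # Read the file at compile time, falling back to a default so this
  # sketch stays self-contained if the file doesn't exist.
  @greetings (case File.read("priv/greetings.txt") do
                {:ok, contents} -> String.split(contents, "\n", trim: true)
                {:error, _} -> ["hello"]
              end)

  def greetings, do: @greetings
end
```

This pattern is common for vendored data files: the data is baked into the module, and `@external_resource` keeps the compiled artifact honest.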

Moar backtrace!

If you’re digging through logs and an exception was raised in Elixir, sometimes the backtrace is entirely unhelpful, like this one.

(jason 1.2.2) lib/jason.ex:199: Jason.encode_to_iodata!/2
(postgrex 0.15.9) lib/postgrex/type_module.ex:897: Postgrex.DefaultTypes.encode_params/3
(postgrex 0.15.9) lib/postgrex/query.ex:75: DBConnection.Query.Postgrex.Query.encode/3
(db_connection 2.4.0) lib/db_connection.ex:1205: DBConnection.encode/5
(db_connection 2.4.0) lib/db_connection.ex:1305: DBConnection.run_prepare_execute/5
(db_connection 2.4.0) lib/db_connection.ex:574: DBConnection.parsed_prepare_execute/5
(db_connection 2.4.0) lib/db_connection.ex:566: DBConnection.prepare_execute/4
(postgrex 0.15.9) lib/postgrex.ex:251: Postgrex.query/4

Where did this exception actually happen?

Turns out the default depth is only 8 calls deep! The good news is it’s configurable.

:erlang.system_flag(:backtrace_depth, new_depth)

Bump that up to 20 or 30 and you’ll probably find the real culprit 😎
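One nicety: system_flag/2 returns the previous value, so if you only want the deeper traces temporarily you can restore the old depth afterwards.

```elixir
# Bump the depth, keeping the old value around to restore later.
old_depth = :erlang.system_flag(:backtrace_depth, 30)

# ... reproduce the error, read the full backtrace ...

:erlang.system_flag(:backtrace_depth, old_depth)
```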

Compiler warnings as errors at test time

We use mix compile --warnings-as-errors in our CI linting step to catch potential issues before they hit production. This is great, but it’s a command I never run before pushing up a pull request, so sometimes warnings slip through and I have to do a fixup and push again. Wouldn’t it be great if these could be caught when you ran your test suite, before the PR?

Johanna Larsson (@joladev) had a great solution! Add this to your test_helper.exs file and you can surface these at test time.

# test_helper.exs
Code.put_compiler_option(:warnings_as_errors, true)
...

Register modules for lookup with persistent term

There have been several times in my years with Elixir that I’ve found a need to define a collection of modules that can be looked up as a group using some form of tagged dispatch.

Imagine an Event module that has two fields, name and data. For each event name, there will be a different shape of data. When you persist these events, you may want to dispatch to a different Ecto.load function, or some other means of casting your data. By listing the application's modules and filtering on an exported function, this can be done at runtime!

defmodule Event.Registry do
  @doc """
  Loads all `Event` modules into persistent term
  """
  @spec load() :: :ok
  def load do
    {:ok, modules} = :application.get_key(:my_app, :modules)

    :persistent_term.put(
      __MODULE__,
      Enum.filter(modules, &function_exported?(&1, :__event_name__, 0))
    )
  end

  @doc """
  Looks up the `Event` module that exports the given `name`

      iex> load()
      iex> lookup(:hello)
      Event.Hello
  """
  @spec lookup(name :: atom()) :: module() | nil
  def lookup(name) when is_atom(name) do
    :persistent_term.get(__MODULE__) |> Enum.find(fn module -> module.__event_name__() == name end)
  end
end

In my production version, I use the event name in the key so that we can optimize away the Enum.find.
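A hedged sketch of that optimization (the module names here are illustrative, not the production code): store each module under its own {registry, name} key, so lookup becomes a single :persistent_term.get instead of a scan.

```elixir
defmodule Event.Hello do
  def __event_name__, do: :hello
end

defmodule Event.FastRegistry do
  # Store each event module under a per-name key.
  def load(modules) do
    for module <- modules,
        function_exported?(module, :__event_name__, 0) do
      :persistent_term.put({__MODULE__, module.__event_name__()}, module)
    end

    :ok
  end

  # A single :persistent_term.get; no Enum.find scan.
  def lookup(name) when is_atom(name) do
    :persistent_term.get({__MODULE__, name}, nil)
  end
end
```

After `Event.FastRegistry.load([Event.Hello])`, `Event.FastRegistry.lookup(:hello)` returns `Event.Hello` directly.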

Using Phoenix hooks to control parent DOM elements

I’m building a scrollable modal that overlays a screen that’s also scrollable. I find it to be a bit of an awkward UX if both the foreground and background are scrollable in this case, so I want to disable the background scrolling when the modal opens.

The problem is that the document body can’t see the state of my LiveView. Fortunately, LiveView (combined with Tailwind CSS in our case) can handle this in another way. Using hooks, we can tell our app to add a CSS class when our modal opens, and then remove the class on modal close.

<!-- root.html.eex -->
<body id="app">
  ...
  <%= @inner_content %>
</body>

And now we add our hook:

// assets/js/hooks/index.js
const hooks = {}

hooks.ToggleAppScroll = {
  mounted: () => {
    document.getElementById("app").classList.toggle('overflow-hidden');
  },
  destroyed: () => {
    document.getElementById("app").classList.toggle('overflow-hidden');
  }
}

export default hooks

And then in our modal component:

  def render(assigns) do
    ~L"""
      <div phx-hook="ToggleAppScroll" class="bg-gray-300 bg-opacity-50 fixed top-0">
          <!-- Modal content goes here -->
      </div>
     """
  end

And now any new component that wants to disable scrolling of the app simply has to add phx-hook="ToggleAppScroll" to its attributes. The phx-hook lifecycle will handle the rest.

Avoid variables in your LiveView leex templates

In my phoenix templates, I have a tendency to use variables, especially for functions where I want to return multiple values. For example

# Template
<section>
  <% {color_class, indicator} = status_indicator(@match) %>
  <p class="<%= color_class %>">
    <%= indicator %>
  </p>
</section>

# View
def status_indicator(match) do
  case Match.status(match) do
    :prematch -> {"text-yellow-500", "prematch"}
    :live -> {"text-green-500", "live"}
  end
end

This is fine in a regular template, but with Phoenix LiveView you’re running into a LEEx (Live EEx) pitfall:

Avoid defining local variables, except within for, case, and friends

Defining variables this way makes Phoenix LiveView opt out of change tracking, which means sending that data over the wire on every render.

Instead, use multiple functions or call into another template:

# Multiple functions
<section>
  <p class="<%= status_indicator_color(@match) %>">
    <%= status_indicator(@match) %>
  </p>
</section>

# If your operation is expensive, do it once and call into another template 
<section>
  <%= render MatchView, "status_indicator.html", status: Match.status(@match) %>
</section>

# other template
<p class="<%= status_color(@status) %>">
  <%= status_indicator(@status) %>
</p>

# View
def status_color(status) do
  case status do
    :prematch -> "text-yellow-500"
    :live -> "text-green-500"
  end
end

def status_indicator(status) do
  to_string(status)
end

Using trigrams for better searches in Postgres

Having used Elasticsearch in the past, I thought it was the best and easiest way to handle fuzzy searches. Today I discovered a Postgres extension called pg_trgm that might prevent you from needing an Elastic instance after all. Postgres is actually very good at text searches using ILIKE, but those are optimized for terms that are left-anchored (e.g. ILIKE 'term%', not ILIKE '%erm%'). Trigram matching works the same no matter where the match falls in the column. In addition, it gives each match a similarity score expressing how close it is.

CREATE EXTENSION pg_trgm;
CREATE INDEX names_last_name_idx ON names USING GIN(last_name gin_trgm_ops);

To see what the index looks like:

select show_trgm('resudek');
# {  r, re,dek,ek ,esu,res,sud,ude}

(these are the indexed trigrams!)

And to perform a search with weighting:

select last_name, similarity('dek', last_name) from names;
# last_name | similarity
# resudek   | 0.2
# rezutek   | 0.090909
# johnson   | 0

Setting the default editor

I just changed my OS from Ubuntu to PopOS. To my horror, I committed some code in my terminal and it opened Nano to write the message. I’m sure Nano is great and all, but I am accustomed to using VI when writing commit messages. To see the available editors on my system:

><> update-alternatives --list editor
/bin/ed
/bin/nano
/usr/bin/vim.tiny

And then set the one I want:

><> sudo update-alternatives --set editor /usr/bin/vim.tiny

Always use sudo when typing in commands you found on the internet.

Use Makeup to display data in your Phoenix UI

Here at Simplebet, we work on a product that is primarily centered around data. For our internal tools, we often want to be able to view the contents of raw messages for inspection, debugging, and testing. By combining Phoenix templates with the Makeup source code highlighter, it couldn’t be easier.

To get started, add makeup and makeup_elixir to your project. (Fun fact: Makeup is the tool ex_doc uses to generate the beautifully highlighted source code in all Elixir documentation.)

 {:makeup, "~> 1.0"},
 {:makeup_elixir, "~> 0.15.0"},

Then you can render your data like this

<div class="m-2 overflow-hidden shadow-lg">
    <style>
        <%= raw Makeup.stylesheet(Makeup.Styles.HTML.StyleMap.friendly_style()) %>
    </style>
    <%= raw Makeup.highlight(inspect(@message.message, pretty: true)) %>
</div>

The best part: Makeup has a list of stylesheets you can choose from. I chose “friendly”, but there are many more. Enjoy!

Working with nested associations in LiveView

Creating forms for nested associations in LiveView can be intimidating and a head-scratcher at first. It turns out it is actually very easy!

In my sports example, I have a Match which has many PlayersInMatch, which has many PlayersInQuarter. We start with a changeset around the match, Ecto.Changeset.change(match), with everything preloaded.

The trick is, in order to update everything in the handle_event callback, you need to make sure that the ids of all the associations are present in the form, and you do this by rendering hidden_inputs_for(assoc) at each level.

<%= for player <- @players do %>
    <%= hidden_inputs_for(player) %>
    <li class=""><%= player_name(player) %>
        <%= for piq <- inputs_for(player, :players_in_quarter) do %>
            <%= hidden_inputs_for(piq) %>
            <li class="border pl-2 py-1"><%= number_input piq, :points %></li>
        <% end %>
    </li>
<% end %>

When the callback is invoked, you will get all of the nested associations in the Match. Then it is as simple as:

def handle_event("validate", %{"match" => params}, socket) do
  changeset =
    socket.assigns.match
    |> Ecto.Changeset.cast(params, [])
    |> Ecto.Changeset.cast_assoc(:players_in_match, with: &PlayerInMatch.update_changeset/2)

  {:noreply, assign(socket, :changeset, changeset)}
end

String.valid?/1

I was recently working on a project that involved printing the contents of a file in an Elixir app. This is generally simple using File.read, but an issue arose when trying to print the contents of a binary file. The unusual bytes caused errors while trying to print.

While there is no way to be 100% sure a file is binary (the MIME type or even the file extension can help), one way to at least ensure we could print the file contents in Elixir was to use String.valid?/1.

This is the entire implementation from the Elixir 1.11 source:

  def valid?(<<_::utf8, t::binary>>), do: valid?(t)
  def valid?(<<>>), do: true
  def valid?(_), do: false

You can see it tries to match each leading codepoint as UTF-8. If it reaches the end of the string, the whole binary is valid - super simple, right?
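For example (the bytes below are chosen purely to illustrate: 104/105 spell "hi", while 0xFF can never appear in valid UTF-8):

```elixir
# "hi" is valid UTF-8
String.valid?(<<104, 105>>)

# 0xFF is not a legal UTF-8 byte, so this is not a printable string
String.valid?(<<0xFF, 0xFE>>)
```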

Respecting XDG Settings

The XDG spec is a way for users to define where files get created. I was recently working on a feature that stores a local cache and local configs, and found an easy way to determine that path using Elixir/Erlang: :filename.basedir/2 was added in OTP 19 for just this purpose.

iex(1)> :filename.basedir(:user_config, "simplebet")
"/home/todd/.config/simplebet"

iex(2)> System.put_env("XDG_CONFIG_HOME", "/home/todd/configs/")    
:ok

iex(3)> :filename.basedir(:user_config, "simplebet")            
"/home/todd/configs/simplebet"

Notice when the environment variable “XDG_CONFIG_HOME” is present, Erlang uses that value to build the path.

Persistent Term - another in memory data store

tl;dr persistent term is very fast for reads; slower for writes, updates, and deletes.

Erlang added a new option for key-value storage in OTP 21.2 called persistent_term. The major difference between it and ETS is that persistent_term is highly optimized for reading terms (at the expense of writing and updating). When a term is updated or deleted, a global GC pass runs to scan for any process using that term.

The API is very simple, e.g.

:persistent_term.put({:globally, :uniq, :key}, :some_term)
:persistent_term.get({:globally, :uniq, :key})
#=> :some_term

When to use the Decimal library over floats? 🤔

Today I was wondering why we were using the Decimal library instead of my favorite faulty friend, the IEEE 754 floating-point number. Well, it says it right in the readme.

Arbitrary precision decimal arithmetic.

But what is arbitrary precision, Dave?! 🥱

Well, http://0.30000000000000004.com/ can explain it better than I can, but let me illustrate with an example!

iex(7)> 0.1 + 0.2
0.30000000000000004

🤢🤮

No fear, the Decimal library gives us arbitrary-precision decimal arithmetic!

iex(1)> Decimal.add(Decimal.from_float(0.1), Decimal.from_float(0.2))
#Decimal<0.3>

Load data from staging db to local db

Connect to remote server

$ psql
\c staging_server

Extract data from staging to your local filesystem

\copy (SELECT * FROM posts WHERE id=12) TO posts.csv CSV HEADER;
\copy (SELECT * FROM comments WHERE post_id=12) TO comments.csv CSV HEADER;

Connect to local server

\c my_app_dev

Import data

\copy posts from posts.csv DELIMITER ',' CSV header;
\copy comments from comments.csv DELIMITER ',' CSV header;

Done!

Handy when you need to extract only some rows from staging or prod to try something locally.

Camel Case or Snake Case?

I recently had a need to convert a string to snake case. I remember Rails had the ActiveSupport Inflector class that made this easy, but I couldn’t remember ever seeing this in Elixir (outside of the Inflex library). That’s when I discovered some string helpers in the Macro module.

iex> Macro.camelize("internal_representation_value")
"InternalRepresentationValue"

iex> Macro.underscore("InternalRepresentationValue")
"internal_representation_value"

Path of Least Resistance

I found that in the example below, I was running into parsing errors because this translation goes from YAML > JSON > HCL (Vault). I commented out the original code at the bottom and inserted straight HCL into the document so that we no longer need to translate. This is specific to the kubevault operator.

policyDocument: |
    path "{{.Values.datasource}}/data/some-app/*" {
      capabilities = ["read", "list"]
    }

  # policy:
  #   path:
  #     {{.Values.datasource}}/data/some-app/*:
  #       capabilities:
  #       - read
  #       - list

More robust access to the pipelined variable

Pipelines are a fantastic syntax for clarifying intent when transforming data.

The default |> syntax passes the pipelined value to the invoked function as the first argument, but does not provide a means to reference the value itself.

This means that transforming a pipelined value while also introspecting its state cannot be done with the default syntax.

If the transform is simple, consider using an anonymous function and the capture operator, as in this example where a field in a map is “moved” to a new key:

map_with_moved_keys =
  %{foo: "bar"}
  |> (&Map.put_new(&1, :new_foo, &1.foo)).()
  |> Map.drop([:foo])

#=> %{new_foo: "bar"}
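Since Elixir 1.12 there is also Kernel.then/2, which does the same job without the capture-and-call dance:

```elixir
# then/2 passes the piped value to the function, so &1 can be
# referenced more than once without breaking out of the pipeline.
map_with_moved_keys =
  %{foo: "bar"}
  |> then(&Map.put_new(&1, :new_foo, &1.foo))
  |> Map.drop([:foo])

# %{new_foo: "bar"}
```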

Mitigating Timing Attacks

The tl;dr on timing attacks: when comparing two values, if your comparison returns as soon as it finds its first non-matching byte, an attacker can recover the value by timing how quickly it returns.

"ABC123" == "ABC012"
# if each character takes 1μs, this will return after 4μs. Thus, we know the first 3 chars are correct.

Plug.Crypto.secure_compare("ABC123", "ABC012")
# always returns in constant time

secure_compare/2 first checks whether the byte sizes are the same (if they aren’t, it returns faster). If the byte sizes match, the function takes longer to return, but always in constant time.

https://hexdocs.pm/plug_crypto/Plug.Crypto.html#secure_compare/2
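To see why constant-time comparison works, here is a minimal sketch (my own illustration, not Plug.Crypto's actual implementation): XOR each byte pair and OR the results together, so the loop always runs to the end no matter where the first mismatch is.

```elixir
defmodule ConstantTimeSketch do
  import Bitwise

  def compare(a, b) when byte_size(a) == byte_size(b) do
    do_compare(a, b, 0)
  end

  # Different lengths leak only the length, which is usually public anyway.
  def compare(_a, _b), do: false

  defp do_compare(<<x, rest_a::binary>>, <<y, rest_b::binary>>, acc) do
    # Accumulate differences instead of returning early.
    do_compare(rest_a, rest_b, acc ||| bxor(x, y))
  end

  # Only after walking every byte do we check whether any differed.
  defp do_compare(<<>>, <<>>, acc), do: acc == 0
end
```

The accumulator is zero only if every byte pair XORed to zero, i.e. the binaries are identical.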

Avoid nesting configuration

Often I will see configuration nested under some thematic element, rather than organized around the configuration’s intended usage. Let’s imagine that I want to mock out my mailer during tests, so I’ll store the mailer as a module attribute at compile time, falling back to the real module.

# config/test.exs
import Config

config :my_blog, :content_management, [mailer: MockMailModule, minimum_words: 200]

While this configuration makes sense thematically, the usage is going to be very different.

defmodule MyBlog.Marketing do
  @mailer Application.compile_env(:my_blog, :content_management)[:mailer] || MailModule

  def send_marketing_email do
    @mailer.email("hi")
  end
end

This is fine, and it definitely works, but it would be simpler if we didn’t nest our configuration, and modeled it around the usage pattern.

# config/test.exs
import Config

config :my_blog, :content_management_mailer, MockMailModule
config :my_blog, :content_management_minimum_words, 200

Now we can leverage the default argument of compile_env/3

defmodule MyBlog.Marketing do
  @mailer Application.compile_env(:my_blog, :content_management_mailer, MailModule)

  def send_marketing_email do
    @mailer.email("hi")
  end
end

Inspecting Pipelines

When debugging a pipeline, you can inspect any of the intermediate steps using IO.inspect/2:

"Sphinx of black quartz, judge my vow."
|> String.downcase()
|> IO.inspect(label: "Downcased")
|> String.replace(~r/[[:punct:]]/, "")
|> IO.inspect(label: "Punctuation Removed")
|> String.split(" ")
|> IO.inspect(label: "Words")
|> Enum.reduce(%{}, fn word, acc ->
  Map.update(acc, word, 1, &(&1 + 1))
end)
|> IO.inspect(label: "Counts")

The label option formats the output to make it clear which step is being inspected:

Downcased: "sphinx of black quartz, judge my vow."
Punctuation Removed: "sphinx of black quartz judge my vow"
Words: ["sphinx", "of", "black", "quartz", "judge", "my", "vow"]
Counts: %{
  "black" => 1,
  "judge" => 1,
  "my" => 1,
  "of" => 1,
  "quartz" => 1,
  "sphinx" => 1,
  "vow" => 1
}

This works because IO.inspect/2 always returns the first argument passed to it!

Parameterized ExUnit tests

In ExUnit, it is not immediately obvious how to run the same “test” with different parameters.

It can be tedious to write an individual test for each required field asserting the validation. It’s also difficult for future-you to determine whether you have complete coverage.

The cheating way

Remove all the required fields from the source map before calling changeset and make one massive assert.

The better way

The solution I use here is to set two @tag test attributes on each test. The first, @tag field: field_name, is the property I’m testing against. The second, @tag message_attr: %{attr_name => nil}, is the value to assign that field before running the test.

You’ll see that these @tag values are available in the test context under the given tag names:

    [
      {:entity_name, :entity_name},
      {:entity_uuid, :team_uuid}
    ]
    |> Enum.each(fn {field_name, attr_name} ->
      @tag field: field_name
      @tag message_attr: %{attr_name => nil}
      test "when `#{field_name}` missing, invalid ... required", context do
        message = TestMessageHelpers.market_message(context.message_attr)

        %Changeset{valid?: false} = changeset = Subject.changeset(%Subject{}, message)

        assert changeset.errors == [{context.field, {"can't be blank", [validation: :required]}}]
      end
    end)

Performing magic in Elixir

Ancient Magic

iex> [109, 97, 103, 105, 99]
'magic'

In Elixir, there is a data type known as a charlist that can be confusing for beginners. A charlist is a list of codepoints (integer character values) that can be rendered as a readable string.

Modern Magic

iex> <<109, 97, 103, 105, 99>>
"magic"

Double-quoted strings are also a sequence of codepoints, but packed together as a contiguous, UTF-8 encoded sequence of bytes. They are binaries, a special case of bitstrings.

Which magic should I prefer?

If you’re writing Elixir, you almost always want the bitstring version when dealing with strings, since they are more memory efficient. If you’re using the String module, you’re dealing with bitstrings under the hood. As an Elixir developer, it is rare that you will need a charlist, and it will typically be due to interfacing with Erlang code.

For displaying bitstrings and charlists in IEx, read more in the Elixir Getting Started guide and the Inspect.Opts documentation.

Revealing the magic

As a poor magician, I will reveal my trick

iex> inspect("magic", binaries: :as_binary)
"<<109, 97, 103, 105, 99>>"

types.SimpleNamespace in the Python standard lib

In the Python standard library, there is a handy class called SimpleNamespace. It is an easy way to namespace other objects and comes with a nice __repr__ built in. I often find myself using it in tests. In situations where I need a generic object, it is more flexible than object() because it allows creating and deleting attributes.

import types 

foo = types.SimpleNamespace()
foo.bar = 42
print(foo.bar)
# 42

del foo.bar
print(foo.bar)
# AttributeError: 'types.SimpleNamespace' object has no attribute 'bar'

The SimpleNamespace class is roughly equivalent to the class below:

class SimpleNamespace:
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

    def __repr__(self):
        keys = sorted(self.__dict__)
        items = ("{}={!r}".format(k, self.__dict__[k]) for k in keys)
        return "{}({})".format(type(self).__name__, ", ".join(items))

    def __eq__(self, other):
        return self.__dict__ == other.__dict__

Using Dynamic queries in Ecto

When you have a Phoenix controller and you need to build a query based on the params, you might end up with something like this:

defmodule App.PostController do
  def index(conn, params) do
    posts = App.Context.list_posts(params)
    render(conn, "index.html", posts: posts)
  end
end

defmodule App.Context do
  ...

  def list_posts(params) do
    query = Post

    query =
      if user_id = params["owner_id"] do
        query |> where([p], p.user_id == ^user_id)
      else
        query
      end

    Repo.all(query)
  end

  ...
end

There is a better way! Dynamic queries (https://hexdocs.pm/ecto/dynamic-queries.html)

defmodule App.Context do
  ...

  def list_posts(params) do
    Post
    |> where(^filter_where(params))
    |> Repo.all()
  end

  defp filter_where(params) do
    Enum.reduce(params, dynamic(true), fn
      {"owner_id", user_id}, dynamic ->
        dynamic([p], ^dynamic and p.user_id == ^user_id)

      {_, _}, dynamic ->
        dynamic
    end)
  end

  ...
end

Now, all your where clauses are in one place :)

Postgres Foreign Key checks permission denied

Foreign key checks are done as the owner of the target table, not as the user issuing the query.

This resulted in a permission error:

ProgrammingError: permission denied for schema example
LINE 1: SELECT 1 FROM ONLY "example"."table" x WHERE "id" OPERATOR(...
                           ^
QUERY:  SELECT 1 FROM ONLY "example"."table" x WHERE "id" OPERATOR(pg_catalog.=) $1 FOR KEY SHARE OF x

When dumping from one environment to your local machine for testing, be sure that the owner of the table has permissions in your local Postgres. Since it’s local, you can just give the table’s owner superuser perms:

ALTER USER username WITH SUPERUSER;

Use context functions for writing tests

When writing ExUnit tests that require setup, use describe blocks and context functions to your advantage!

defmodule App.FooTest do
  use ExUnit.Case

  describe "when there is a bar" do
    setup :single_bar
    
    test "you can get a bar", %{bar: bar} do
     assert %App.Bar{} = App.Context.get_bar(bar.id)
    end
  end

  describe "when there is a fancy bar" do
    setup :single_bar
    @tag bar_params: %{color: "orange"}
    test "you can get a fancy bar", %{bar: bar} do
      assert %App.Bar{} = App.Context.get_bar(bar.id)
    end
  end

  def single_bar(context) do
    params = context[:bar_params] || %{a: 1}
    {:ok, bar} = App.Context.create_bar(params)
    %{bar: bar}
  end
end

Don't use Map functions on structs

While it may seem like a good idea, Map functions should not be used on Elixir structs, as they can violate the struct’s contract. Specifically, you can use the Map API to add keys that don’t exist on the struct.

defmodule Test do
  defstruct [:foo]
end

test = %Test{}

# Adds a field :bar that doesn't exist on the struct
%{__struct__: Test, bar: :a, foo: nil} = Map.put(test, :bar, :a)

Instead, use the map update syntax, which validates that you’re using an existing key. If the key doesn’t exist, it raises a KeyError:

%{test | bar: :a}
** (KeyError) key :bar not found in: %Test{foo: nil}
    (stdlib 3.12.1) :maps.update(:bar, :a, %Test{foo: nil})
    (stdlib 3.12.1) erl_eval.erl:256: anonymous fn/2 in :erl_eval.expr/5
    (stdlib 3.12.1) lists.erl:1263: :lists.foldl/3
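If you need to build or update a struct from dynamic data, struct/2 and struct!/2 are the struct-aware alternatives: struct/2 silently ignores unknown keys, while struct!/2 raises like the update syntax. (The module name below is illustrative.)

```elixir
defmodule Example do
  defstruct [:foo]
end

# struct/2 drops keys that aren't part of the struct
%Example{foo: 1} = struct(Example, foo: 1, bar: 2)

# struct!/2 raises a KeyError for unknown keys, like %{s | ...}
# struct!(Example, bar: 2)  #=> ** (KeyError) key :bar not found
```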

Ecto.Changeset.cast

Given the embedded schema

embedded_schema do
  field(:mame, :string)
end

Using it in the following code raises a pattern match error that can be difficult to diagnose. Notice that the schema definition uses :mame with an “m”, while @root_fields uses the correct (but mismatched) :name with an “n”.

@root_fields [:name]

parsed = Jason.decode!(data)

%__MODULE__{}
|> Ecto.Changeset.cast(parsed, @root_fields)

I got sidetracked thinking that the cast was unhappy because the passed data was using string keys.