Modeling Formulas with Recursive Discriminators

Written by Pete Corey on May 28, 2018.

I recently ran into an issue while trying to represent a nested, discriminator-based schema using Mongoose in a Node.js client project. The goal was to represent a logical formula by creating a hierarchy of “reducers” (&&, ||, etc…) that would reduce a series of nested “checks” down into a single value.

Let’s make that a little more relatable with an example. Imagine we’re trying to represent the following formula:


x == 100 || (x <= 10 && x >= 0)

If we wanted to store this in MongoDB, we’d have to represent that somehow as a JSON object. Let’s take a stab at that:


{
  type: "reducer",
  reducer: "||",
  checks: [
    {
      type: "check",
      field: "x",
      comparator: "==",
      value: 100
    },
    {
      type: "reducer",
      reducer: "&&",
      checks: [
        {
          type: "check",
          field: "x",
          comparator: "<=",
          value: 10
        },
        {
          type: "check",
          field: "x",
          comparator: ">=",
          value: 0
        }
      ]
    }
  ]
}

What a behemoth!

While the JSON representation is ridiculously more verbose than our mathematical representation, it gives us everything we need to recreate our formula, and lets us store that formula in our database. This is exactly what we want.
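The Mongoose schemas we’re about to build only describe how formulas are stored. As a hypothetical aside (not part of the original project), here’s how a stored formula like the one above might be reduced to a single value in plain JavaScript:

```javascript
// Hypothetical evaluator for the formula JSON shown above.
// The comparator and reducer sets mirror the examples in this article.
const comparators = {
  "==": (a, b) => a === b, // strict equality stands in for ==
  "<=": (a, b) => a <= b,
  ">=": (a, b) => a >= b
};

const evaluate = (node, values) => {
  if (node.type === "check") {
    // A "check" compares a single field against a value.
    return comparators[node.comparator](values[node.field], node.value);
  }
  // A "reducer" combines its nested checks with && or ||.
  return node.reducer === "&&"
    ? node.checks.every(check => evaluate(check, values))
    : node.checks.some(check => evaluate(check, values));
};
```

Calling `evaluate(formula, { x: 5 })` against our example formula walks the tree recursively, just like our schema definitions will.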


The trouble comes when we try to represent this schema with Mongoose.

We can break our entire JSON representation into two distinct “types”. We have a “check” type that has field, comparator, and value fields, and a “reducer” type that has a reducer field, and a checks field that contains a list of either “check” or “reducer” objects.

Historically, Mongoose had trouble with a field in a document adhering to either one schema or another. That all changed with the introduction of “discriminators”, and later, “embedded discriminators”. Embedded discriminators let us say that an element of an array adheres to one of multiple schemas defined with different discriminators.

Again, let’s make that more clear with an example. If we wanted to store our formula within a document, we’d start by defining the shell of our checkSchema (it needs to exist before the base schema that references it):


const checkSchema = new Schema(
  {},
  {
    discriminatorKey: "type",
    _id: false
  }
);

Here we’re setting the discriminatorKey to "type", which means that Mongoose will look at the value of "type" to determine what kind of schema the rest of this subdocument should adhere to.

We can then define the schema for the wrapping “base” document. Its formula field will hold our formula:


const baseSchema = new Schema({
  name: String,
  formula: checkSchema
});

Next, we have to define each type of our formula. Our "reducer" has a reducer field and a checks field:


baseSchema.path("formula").discriminator("reducer", new Schema(
  {
    reducer: {
      type: String,
      enum: ['&&', '||']
    },
    checks: [checkSchema]
  },
  { _id: false }
));

Similarly, our "check" type has its own unique set of fields:


baseSchema.path("formula").discriminator("check", new Schema(
  {
    field: String,
    comparator: {
      type: String,
      enum: ['==', '<=', '>=']
    },
    value: Number
  },
  { _id: false }
));

Unfortunately, this only works for the first level of our formula. Trying to define a top-level "reducer" or "check" works great, but trying to put a "reducer" or a "check" within a "reducer" fails. Those nested objects are stripped from our final object.


The problem is that we’re defining our discriminators based off of a path originating from the baseSchema:


baseSchema.path("formula").discriminator(...);

Our nested "reducer" subdocuments don’t have any discriminators attached to their checks. To fix this, we’d need to create two new functions that recursively build each layer of our discriminator stack.

We’ll start with a buildCheckSchema function that simply returns a new schema for our "check"-type subdocuments. This schema doesn’t have any children, so it doesn’t need to define any new discriminators:


const buildCheckSchema = () =>
  new Schema({
    field: String,
    comparator: {
      type: String,
      enum: ['==', '<=', '>=']
    },
    value: Number
  }, { _id: false });

Our buildReducerSchema function needs to be a little more sophisticated. First, it needs to create the "reducer"-type sub-schema. Next, it needs to attach "reducer" and "check" discriminators to the checks field of that new schema with recursive calls to buildCheckSchema and buildReducerSchema:


const buildReducerSchema = () => {
    let reducerSchema = new Schema(
        {
            reducer: {
                type: String,
                enum: ['&&', '||']
            },
            checks: [checkSchema]
        },
        { _id: false }
    );
    reducerSchema.path('checks').discriminator('reducer', buildReducerSchema());
    reducerSchema.path('checks').discriminator('check', buildCheckSchema());
    return reducerSchema;
};

While this works in concept, it blows up in practice. Mongoose’s discriminator function greedily consumes the schemas passed into it, which creates an infinite recursive loop that blows the top off of our stack.


The solution I landed on for this problem is to limit the number of recursive calls we can make to buildReducerSchema to some maximum value. We can add this limit by passing an optional n argument to buildReducerSchema that defaults to 0. Every time we call buildReducerSchema from within buildReducerSchema, we’ll pass it an incremented value of n:


reducerSchema.path('checks').discriminator('reducer', buildReducerSchema(n + 1));

Next, we’ll use the value of n to enforce our maximum recursion limit:


const buildReducerSchema = (n = 0) => {
  if (n > 100) {
    return buildCheckSchema();
  }
  ...
};

If we reach one hundred recursions, we simply force the next layer to be a "check"-type schema, gracefully terminating the schema stack.
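Stripped of Mongoose, the shape of this depth-limiting pattern can be sketched with plain objects (hypothetical buildReducer/buildCheck names, for illustration only):

```javascript
// Hypothetical sketch of the depth-limiting pattern, using plain objects
// instead of Mongoose schemas.
const buildCheck = () => ({ type: "check" });

const buildReducer = (n = 0) => {
  // Past the maximum depth, force the next layer to be a "check",
  // gracefully terminating the recursion.
  if (n > 100) {
    return buildCheck();
  }
  return { type: "reducer", checks: [buildReducer(n + 1), buildCheck()] };
};
```

Because each level passes n + 1 down to the next, the recursion always bottoms out, which is exactly what Mongoose’s greedy discriminator function needs from us.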

To finish things off, we need to pass our baseSchema these recursively constructed discriminators (without an initial value of n):


baseSchema.path("formula").discriminator("reducer", buildReducerSchema());
baseSchema.path("formula").discriminator("check", buildCheckSchema());

And that’s it!

Against all odds we managed to build a nested, discriminator-based schema that can fully represent any formula we throw at it, up to one hundred reducers deep. At the end of the day, I’m happy with that solution.

Spreading Through the Bitcoin Network

Written by Pete Corey on May 21, 2018.

Previously, we beefed up our Elixir-based Bitcoin-node-in-progress to use the Connection behavior to better manage our connection to our peer node. Now that we can robustly connect to a single peer node, let’s broaden our horizons and connect to multiple peers!

Let’s refactor our node to use a dynamic supervisor to manage our collection of connections, and start recursively connecting to nodes in the Bitcoin peer-to-peer network!

Going Dynamic

Each of our connections to a Bitcoin peer node is currently managed through a BitcoinNetwork.Node process. We’ll manage this collection of processes with a new dynamic supervisor called BitcoinNetwork.Node.Supervisor.

Let’s create that new supervisor now:


defmodule BitcoinNetwork.Node.Supervisor do
  use DynamicSupervisor

  def start_link([]) do
    DynamicSupervisor.start_link(__MODULE__, [], name: __MODULE__)
  end

  def init([]) do
    DynamicSupervisor.init(strategy: :one_for_one)
  end
end

The code here is largely boilerplate. Our Node.Supervisor initializes itself with a :one_for_one strategy (the only supervision strategy currently available to a dynamic supervisor). It’s also important to note that, like all dynamic supervisors, our Node.Supervisor starts without children.

Back to Where we Started

Next, we’ll go into our BitcoinNetwork.Application supervisor and replace our BitcoinNetwork.Node child specification with a specification for our new dynamic supervisor:


Supervisor.start_link(
  [
    {DynamicSupervisor, strategy: :one_for_one, name: BitcoinNetwork.Node.Supervisor}
  ],
  strategy: :one_for_one
)

After our Application has successfully started its Node.Supervisor child, we’ll go ahead and add our Node process as a child of our new dynamic supervisor:


DynamicSupervisor.start_child(BitcoinNetwork.Node.Supervisor, %{
  id: BitcoinNetwork.Node,
  start:
    {BitcoinNetwork.Node, :start_link,
     [
       {
         Application.get_env(:bitcoin_network, :ip),
         Application.get_env(:bitcoin_network, :port)
       }
     ]},
  restart: :transient
})

We simply moved our BitcoinNetwork.Node child specification out of our old supervisor’s child list, and dropped it into our call to DynamicSupervisor.start_child/2.

What we’re really trying to do here is “connect to a node”, but all of this boilerplate is confusing our intentions. Let’s create a new function in our BitcoinNetwork module called connect_to_node/2 that takes a node’s IP address and a port, and adds a child to our Node.Supervisor that manages the connection to that node:


def connect_to_node(ip, port) do
  DynamicSupervisor.start_child(BitcoinNetwork.Node.Supervisor, %{
    id: BitcoinNetwork.Node,
    start: {BitcoinNetwork.Node, :start_link, [{ip, port}]},
    restart: :transient
  })
end

Now we can replace the start_child/2 mess in the start/2 callback of our Application module with a call to our new connect_to_node/2 function:


BitcoinNetwork.connect_to_node(
  Application.get_env(:bitcoin_network, :ip),
  Application.get_env(:bitcoin_network, :port)
)

That’s much nicer.

Now it’s clear that when our application starts up, it creates a new dynamic supervisor, Node.Supervisor, and then connects to the Bitcoin node specified in our application’s configuration.

At this point, we’re back up to feature parity with our original one-node solution. All we’ve really managed to do is add a supervisor layer to our supervision tree.

Our new supervision tree.

Adding Nodes

Now that we’re equipped with our connect_to_node/2 function and our new dynamic node supervisor, we’re ready to rapidly expand our network of known Bitcoin nodes.

Our Node process is currently listening for incoming node addresses in one of our handle_payload/2 functions:


defp handle_payload(%Addr{addr_list: addr_list}, state) do
  log([:bright, "Received ", :green, "#{length(addr_list)}", :reset, :bright, " peers."])

  {:ok, state}
end

We can connect to each of these additional peer nodes by mapping each node address in addr_list over our new connect_to_node/2 function:


Enum.map(addr_list, &BitcoinNetwork.connect_to_node(&1.ip, &1.port))

Let’s clean this up a bit by adding another function head to our connect_to_node/2 function that accepts a single NetAddr struct as a parameter:


def connect_to_node(%NetAddr{ip: ip, port: port}), do: connect_to_node(ip, port)

Now we can simply map over the list of NetAddr structures we receive in our addr_list variable:


Enum.map(addr_list, &BitcoinNetwork.connect_to_node/1)

Beautiful.

Now our application fires up, connects to our initial Bitcoin peer node, receives that node’s list of peers, and spawns a dynamically supervised process that attempts to connect to each of those peers. If any of those peers successfully connect and return their list of peers, we’ll repeat the process.

So many peers!

Uncontrolled Growth

At this point, our Bitcoin node will happily spread itself through the Bitcoin peer-to-peer network, introducing itself as a peer to tens of thousands of nodes. However, this level of connectivity might be overkill for our node.

We need some way of limiting the number of active peer connections to some configurable value.

We’ll start implementing this limit by adding a max_peers configuration value to our config.exs:


config :bitcoin_network, max_peers: 125

Let’s start with a limit of one hundred twenty-five connections, just like the default limit in the Bitcoin Core client.

Next, we’ll make a new function in our BitcoinNetwork module to count the number of active peer connections. This is fairly straightforward thanks to the count_children/1 function on the DynamicSupervisor module:


def count_peers() do
  BitcoinNetwork.Node.Supervisor
  |> DynamicSupervisor.count_children()
  |> Map.get(:active)
end

Next, in our connect_to_node/2 function, we’ll wrap our call to DynamicSupervisor.start_child/2 with a check that we haven’t reached our max_peers limit:


if count_peers() < Application.get_env(:bitcoin_network, :max_peers) do
  DynamicSupervisor.start_child(BitcoinNetwork.Node.Supervisor, %{
    ...
  })
else
  {:error, :max_peers}
end

And that’s all there is to it! Now, every time we receive a peer and try to connect to it, our connect_to_node/2 function will first check that we haven’t exceeded the max_peers limit defined in our application’s configuration.

Our Bitcoin node will now limit its pool of peers to a maximum of one hundred twenty-five connections.

Final Thoughts

Elixir’s dynamic supervisor is a breeze to work with and made it possible to easily and quickly scale up our pool of peers from one to tens of thousands of connections in the blink of an eye.

While our Bitcoin node is working its way through the Bitcoin peer-to-peer network, it doesn’t actually do anything. We’ll need to spend some time in the future figuring out how to process incoming blocks and transactions. Maybe at some point we’ll even be able to send our own transactions and mine for new blocks!

It sounds like we’ll have to dust off Mastering Bitcoin and finish off the few remaining chapters.

Beefing Up our Bitcoin Node with Connection

Written by Pete Corey on May 14, 2018.

We left off in our Bitcoin adventure by building a bare-bones Bitcoin node that connects to another peer node on the network. While our Elixir-based node was able to connect to a peer, that connection was fragile at best. Any problems with the initial connection or version messaging would leave our application dead in the water.

Thankfully, there are ways of beefing up the resilience of our Elixir node. Today we’ll be refactoring our Bitcoin node to use James Fish’s Connection behavior, rather than the basic GenServer behavior that ships with Elixir. Implementing this behavior in our node will give us more robustness in our connection process, along with the option to reconnect to a peer node in the case of failure.

Let’s get to it!

Our Starting Point

Before we dive into refactoring our Bitcoin node to use the new Connection behavior, we should go over some changes I made to simplify the BitcoinNetwork.Node module.

Previously, every message parsed out of incoming TCP packets was assembled into a BitcoinNetwork.Protocol.Message struct and cast back to the current node process as a process message. In hindsight, this solution was overly complicated and weighed down with boilerplate and message-passing overhead. Instead, I opted to take my own advice and “just use a function” to handle my incoming messages.


def handle_info({:tcp, _port, data}, state) do
  {messages, rest} = chunk(state.rest <> data)

  case handle_messages(messages, state) do
    {:error, reason, state} -> {:stop, reason, state}
    {:ok, state} -> {:noreply, %{state | rest: rest}}
  end
end

Now the assembled Message structs are passed off to a handle_messages/2 helper function, which returns either an :error tuple, or an :ok tuple with the current node’s updated state after processing each of the received messages.

The handle_messages/2 function filters out invalid messages and runs each of the remaining messages through a handle_payload/2 helper function. We pass this function the message’s new parsed_payload field, which holds the parsed, struct-based representation of the inbound Bitcoin message:


defp handle_messages(messages, state) do
  messages
  |> Enum.filter(&Message.verify_checksum/1)
  |> Enum.reduce_while({:ok, state}, fn message, {:ok, state} ->
    case handle_payload(message.parsed_payload, state) do
      {:error, reason, state} -> {:halt, {:error, reason, state}}
      {:ok, state} -> {:cont, {:ok, state}}
    end
  end)
end

Notice that we’re using Enum.reduce_while/3 to give our handle_payload/2 calls the opportunity to modify the state of the node before the next message is processed.

If we run into a problem handling a parsed payload, we immediately exit our reduction by returning a :halt tuple.

The main benefit of this refactor comes from the simplicity of our handle_payload/2 functions. Here’s what our “ping” handler looks like after the refactor:


defp handle_payload(%Ping{}, state) do
  with :ok <- Message.serialize("pong") |> send_message(state.socket) do
    {:ok, state}
  else
    {:error, reason} -> {:error, reason, state}
  end
end

We use pattern matching to listen for BitcoinNetwork.Protocol.Ping messages. When we receive a Ping, we serialize and send a “pong” back to our peer node. If anything goes wrong with sending the response, we return an :error tuple.

Beautiful.

Connection without Connecting

The Connection behavior is a specialization of the GenServer behavior, and is intended to be used to represent connections to external resources. It mirrors the entire API of a standard GenServer, and adds two additional callbacks for us to implement: connect/2 and disconnect/2. As you’ve probably guessed, these two callbacks are used to connect and disconnect from our external resource.

Before we start using the Connection behavior in our application, we’ll need to add it as a dependency in our mix.exs file:


defp deps do
  [
    {:connection, "~> 1.0"}
  ]
end

Next, we’ll start our GenServer to Connection conversion by replacing our use of the GenServer behavior with the new Connection behavior, and wholesale replacing GenServer with Connection throughout our BitcoinNetwork.Node module:


defmodule BitcoinNetwork.Node do
  use Connection

  def start_link({ip, port}) do
    Connection.start_link(__MODULE__, %{ip: ip, port: port, rest: ""})
  end

  ...

Because the Connection behavior is a superset of the GenServer behavior, our node should still run like it used to given these changes. Let’s try it out.


** (Mix) Could not start application bitcoin_network: exited in: BitcoinNetwork.Application.start(:normal, [])
    ** (EXIT) an exception was raised:
        ** (ArgumentError) The module BitcoinNetwork.Node was given as
        a child to a supervisor but it does not implement child_spec/1.

Uh oh.

The Connection behavior doesn’t implement a child_spec/1 callback like our old GenServer behavior did, and our application no longer likes the child specification shorthand we’re using in our BitcoinNetwork.Application supervisor:


{BitcoinNetwork.Node,
 {Application.get_env(:bitcoin_network, :ip),
  Application.get_env(:bitcoin_network, :port)}}

We’ll fix this by fleshing out our child specification into a full specification map in our BitcoinNetwork.Application module:


%{
  id: BitcoinNetwork.Node,
  start:
    {BitcoinNetwork.Node, :start_link,
     [
       {
         Application.get_env(:bitcoin_network, :ip),
         Application.get_env(:bitcoin_network, :port)
       }
     ]},
  restart: :transient
}

With those changes, our Bitcoin node runs just like it used to.

Connecting with Connect

So far our refactor isn’t very exciting. While our Bitcoin node still works, we haven’t added any new functionality. Let’s change that by fleshing out the connect/2 callback provided by the Connection behavior.

We’ll start by sketching out the connect/2 callback within our module:


def connect(_info, state) do
end

Within our connect/2 callback, we should handle all of the behavior associated with connecting to our external resource. You may remember that this was previously being handled in our init/1 callback. Let’s start migrating that code into our connect/2 function.

The first step in connecting to our peer node is to establish a TCP connection:


:gen_tcp.connect(IP.to_tuple(state.ip), state.port, options)

The next step is sending our initial “version” message and establishing communication with the peer:


send_message(message, socket)

If both of these things go well, we can say that we’ve successfully connected to our peer Bitcoin node. In that case, the Connection behavior dictates that we should return an :ok tuple with the new state of the process.


with {:ok, socket} <- :gen_tcp.connect(IP.to_tuple(state.ip), state.port, options),
     :ok <- send_message(message, socket) do
  {:ok, Map.put_new(state, :socket, socket)}
end

However, if something goes wrong, we have a couple of options. We can return a :stop tuple to kill the current process, which is similar to the previous functionality of our node. Alternatively, we can return a :backoff tuple, which instructs the Connection behavior to retry our connection after the specified timeout.

Let’s try reconnecting to our peer node if something goes wrong. To do this, all we need to do is add an else block to our with that returns our :backoff tuple:


else
  _ -> {:backoff, 1000, state}
end

Now, after a failed connection attempt our Bitcoin node will retry the connection after one thousand milliseconds.

Limiting Retries

Our new connection retry logic works beautifully. It almost works too well, in fact. If we try to connect to a non-existent Bitcoin peer node, we can see that our node will attempt to reconnect until the end of time. Let’s limit the number of retry attempts our node can make before it gives up.

We’ll do this by adding a retries field to our initial state with an initial value of 0:


def start_link({ip, port}) do
  Connection.start_link(__MODULE__, %{
    ...
    retries: 0
  })
end

We’ll also add a @max_retries module attribute to indicate how many retries we want our node to attempt:


@max_retries 3

Next, we’ll modify the :backoff tuple returned by our connect/2 callback to increment retries in the returned state map:


{:backoff, 1000, Map.put(state, :retries, state.retries + 1)}

Lastly, we’ll add a new connect/2 function head that detects when we’ve reached the maximum number of allowed retries. When we reach that limit, we want to return a :stop tuple to kill the current process:


def connect(_info, state = %{retries: @max_retries}) do
  {:stop, :normal, state}
end

Beautiful. Now our Bitcoin node will stop attempting to connect to its peer node after three failed attempts, waiting one second between each.

Disconnecting with Disconnect

Now that we’ve revamped how we connect to our peer node, we need to consider what should happen in the event that we disconnect from that node.

If our handle_call/3, handle_cast/2, or handle_info/2 callbacks return a :disconnect tuple, our Connection behavior will call our disconnect/2 callback, which will decide the next course of action.

We have several options for handling the disconnection in our disconnect/2 callback. We can return a :connect tuple to attempt a reconnection immediately. Similarly, we can return a :backoff tuple to delay the reconnection by the specified timeout. Alternatively, we can return a :noconnect tuple to keep the current process alive without attempting to reconnect to our peer node. Lastly, our disconnect/2 callback can return a :stop tuple to immediately terminate our Bitcoin node process.

When we start connecting to more nodes in the future, the loss of a single node isn’t a big deal. Losing peers is just a part of life, unfortunately. With that in mind, if we detect a disconnect, we’ll simply close our TCP connection and return a :stop tuple from our disconnect/2 callback:


def disconnect(_, state) do
  :ok = :gen_tcp.close(state.socket)
  {:stop, :normal, state}
end

Next, when handling the result of our call to handle_messages/2, we’ll deal with errors slightly differently. Instead of returning a :stop tuple when we receive an :error while handling one of our messages, we’ll instead return a :disconnect tuple:


case handle_messages(messages, state) do
  {:error, reason, state} -> {:disconnect, reason, %{state | rest: rest}}
  {:ok, state} -> {:noreply, %{state | rest: rest}}
end

This will drop us into our disconnect/2 callback with the given reason for the disconnect.

That’s all there is to it!

Final Thoughts

This refactor involved quite a few moving pieces, but in the end the final product is a cleaner, simpler, and more robust piece of software. With these changes we’ve positioned ourselves very nicely to move forward and expand on the Bitcoin node project we’ve found ourselves in.

Be sure to check out the complete code on Github to get a cohesive view of what we’ve done.

Next time we’ll start expanding our network of nodes by recursively connecting with the neighboring nodes we receive from our peer node. Stay tuned!