Beefing Up our Bitcoin Node with Connection

Written by Pete Corey on May 14, 2018.

We left off in our Bitcoin adventure by building a bare-bones Bitcoin node that connects to another peer node on the network. While our Elixir-based node was able to connect to a peer, that connection was fragile at best. Any problems with the initial connection or version messaging would leave our application dead in the water.

Thankfully, there are ways of beefing up the resilience of our Elixir node. Today we’ll be refactoring our Bitcoin node to use James Fish’s Connection behavior, rather than the basic GenServer behavior that ships with Elixir. Implementing this behavior in our node will give us more robustness in our connection process, along with the option to reconnect to a peer node in the case of failure.

Let’s get to it!

Our Starting Point

Before we dive into refactoring our Bitcoin node to use the new Connection behavior, we should go over some changes I made to simplify the BitcoinNetwork.Node module.

Previously, every message parsed out of incoming TCP packets was assembled into a BitcoinNetwork.Protocol.Message struct and cast back to the current node process as a process message. In hindsight, this solution is overly complicated and weighted down with boilerplate and message passing overhead. Instead, I opted to take my own advice and “just use a function” to handle my incoming messages.

def handle_info({:tcp, _port, data}, state) do
  {messages, rest} = chunk(state.rest <> data)

  case handle_messages(messages, state) do
    {:error, reason, state} -> {:stop, reason, state}
    {:ok, state} -> {:noreply, %{state | rest: rest}}
  end
end

Now the assembled Message structs are passed off to a handle_messages/2 helper function, which returns either an :error tuple, or an :ok tuple with the current node’s updated state after processing each of the received messages.

The handle_messages/2 function filters out invalid messages and runs each of the remaining messages through a handle_payload/2 helper function. We pass this function the message’s parsed_payload field, which holds the parsed struct-based representation of the inbound Bitcoin message:

defp handle_messages(messages, state) do
  messages
  |> Enum.filter(&Message.verify_checksum/1)
  |> Enum.reduce_while({:ok, state}, fn message, {:ok, state} ->
    case handle_payload(message.parsed_payload, state) do
      {:error, reason, state} -> {:halt, {:error, reason, state}}
      {:ok, state} -> {:cont, {:ok, state}}
    end
  end)
end

Notice that we’re using Enum.reduce_while/3 to give our handle_payload/2 calls the opportunity to modify the state of the node before the next message is processed.
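If Enum.reduce_while/3 is unfamiliar, here’s a small self-contained sketch of the same halt-on-error pattern. The handler and messages here are hypothetical stand-ins for our real payload handling:

```elixir
# Each "message" handler returns {:ok, new_state} or {:error, reason, state},
# mirroring the shape of our handle_payload/2 functions.
handle = fn
  {:bump, n}, state -> {:ok, state + n}
  {:fail, reason}, state -> {:error, reason, state}
end

# reduce_while threads state through the handlers and halts on the first error.
process = fn messages, state ->
  Enum.reduce_while(messages, {:ok, state}, fn message, {:ok, state} ->
    case handle.(message, state) do
      {:error, reason, state} -> {:halt, {:error, reason, state}}
      {:ok, state} -> {:cont, {:ok, state}}
    end
  end)
end

{:ok, 3} = process.([bump: 1, bump: 2], 0)
{:error, :boom, 1} = process.([{:bump, 1}, {:fail, :boom}, {:bump, 100}], 0)
```

The final `{:bump, 100}` message is never handled; the reduction bails as soon as the failure is seen.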

If we run into a problem handling a parsed payload, we immediately exit our reduction by returning a :halt tuple.

The main benefit of this refactor comes from the simplicity of our handle_payload/2 functions. Here’s what our “ping” handler looks like after the refactor:

defp handle_payload(%Ping{}, state) do
  with :ok <- Message.serialize("pong") |> send_message(state.socket) do
    {:ok, state}
  else
    {:error, reason} -> {:error, reason, state}
  end
end

We use pattern matching to listen for BitcoinNetwork.Protocol.Ping messages. When we receive a Ping, we serialize and send a “pong” back to our peer node. If anything goes wrong with sending the response, we return an :error tuple.


Connection without Connecting

The Connection behavior is a specialization of the GenServer behavior, and is intended to be used to represent connections to external resources. It mirrors the entire API of a standard GenServer, and adds two additional callbacks for us to implement: connect/2 and disconnect/2. As you’ve probably guessed, these two callbacks are used to connect and disconnect from our external resource.

Before we start using the Connection behavior in our application, we’ll need to add it as a dependency in our mix.exs file:

defp deps do
  [
    {:connection, "~> 1.0"}
  ]
end

Next, we’ll start our GenServer to Connection conversion by replacing our use of the GenServer behavior with the new Connection behavior, and wholesale replacing GenServer with Connection throughout our BitcoinNetwork.Node module:

defmodule BitcoinNetwork.Node do
  use Connection

  def start_link({ip, port}) do
    Connection.start_link(__MODULE__, %{ip: ip, port: port, rest: ""})
  end
end


Because the Connection behavior is a superset of the GenServer behavior, our node should still run like it used to given these changes. Let’s try it out.

** (Mix) Could not start application bitcoin_network: exited in: BitcoinNetwork.Application.start(:normal, [])
    ** (EXIT) an exception was raised:
        ** (ArgumentError) The module BitcoinNetwork.Node was given as
        a child to a supervisor but it does not implement child_spec/1.

Uh oh.

The Connection behavior doesn’t implement a child_spec/1 callback like our old GenServer behavior did, and our application no longer likes the child specification shorthand we’re using in our BitcoinNetwork.Application supervisor:

{BitcoinNetwork.Node,
 {Application.get_env(:bitcoin_network, :ip),
  Application.get_env(:bitcoin_network, :port)}}

We’ll fix this by fleshing out our child specification into a full specification map in our BitcoinNetwork.Application module:

%{
  id: BitcoinNetwork.Node,
  start:
    {BitcoinNetwork.Node, :start_link,
     [
       {Application.get_env(:bitcoin_network, :ip),
        Application.get_env(:bitcoin_network, :port)}
     ]},
  restart: :transient
}

With those changes, our Bitcoin node runs just like it used to.

Connecting with Connect

So far our refactor isn’t very exciting. While our Bitcoin node still works, we haven’t added any new functionality. Let’s change that by fleshing out the connect/2 callback provided by the Connection behavior.

We’ll start by sketching out the connect/2 callback within our module:

def connect(_info, state) do
end

Within our connect/2 callback, we should handle all of the behavior associated with connecting to our external resource. You may remember that this was previously being handled in our init/1 callback. Let’s start migrating that code into our connect/2 function.

The first step in connecting to our peer node is to establish a TCP connection:

:gen_tcp.connect(IP.to_tuple(state.ip), state.port, options)

The next step is sending our initial “version” message and establishing communication with the peer:

send_message(message, socket)

If both of these things go well, we can say that we’ve successfully connected to our peer Bitcoin node. In that case, the Connection behavior dictates that we should return an :ok tuple with the new state of the process.

with {:ok, socket} <-
       :gen_tcp.connect(IP.to_tuple(state.ip), state.port, options),
     :ok <- send_message(message, socket) do
  {:ok, Map.put_new(state, :socket, socket)}
end

However, if something goes wrong, we have a couple of options. We can return a :stop tuple to kill the current process, which is similar to the previous functionality of our node. Alternatively, we can return a :backoff tuple, which instructs the Connection behavior to retry our connection after the specified timeout.

Let’s try reconnecting to our peer node if something goes wrong. To do this, all we need to do is add an else block to our with that returns our :backoff tuple:

else
  _ -> {:backoff, 1000, state}
end

Now, after a failed connection attempt our Bitcoin node will retry the connection after one thousand milliseconds.
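A fixed one-second delay is the simplest option, but the :backoff tuple makes fancier strategies just as easy. Here’s a sketch of a hypothetical exponential backoff calculation (the helper and cap are mine, not part of the article’s node):

```elixir
# Hypothetical helper: double the delay on each retry, capped at 30 seconds.
backoff = fn retries -> min(1000 * Integer.pow(2, retries), 30_000) end

1000 = backoff.(0)
2000 = backoff.(1)
8000 = backoff.(3)
30_000 = backoff.(10)
```

Returning `{:backoff, backoff.(state.retries), state}` would then wait a little longer after each consecutive failure.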

Limiting Retries

Our new connection retry logic works beautifully. It almost works too well, in fact. If we try to connect to a non-existent Bitcoin peer node, we can see that our node will attempt to reconnect until the end of time. Let’s limit the number of retry attempts our node can make before it gives up.

We’ll do this by adding a retries field to our initial state with an initial value of 0:

def start_link({ip, port}) do
  Connection.start_link(__MODULE__, %{
    ip: ip,
    port: port,
    rest: "",
    retries: 0
  })
end

We’ll also add a @max_retries module attribute to indicate how many retries we want our node to attempt:

@max_retries 3

Next, we’ll modify the :backoff tuple returned by our connect/2 callback to increment retries in the returned state map:

{:backoff, 1000, Map.put(state, :retries, state.retries + 1)}

Lastly, we’ll add a new connect/2 function head that detects when we’ve reached the maximum number of allowed retries. When we reach that limit, we want to return a :stop tuple to kill the current process:

def connect(_info, state = %{retries: @max_retries}) do
  {:stop, :normal, state}
end

Beautiful. Now our Bitcoin node will stop attempting to connect to its peer node after three failed attempts, waiting one second between each.

Disconnecting with Disconnect

Now that we’ve revamped how we connect to our peer node, we need to consider what should happen in the event that we disconnect from that node.

If our handle_call/3, handle_cast/2, or handle_info/2 callbacks return a :disconnect tuple, our Connection behavior will call our disconnect/2 callback, which will decide the next course of action.

We have several options for handling the disconnection in our disconnect/2 callback. We can return a :connect tuple to attempt a reconnection immediately. Similarly, we can return a :backoff tuple to delay the reconnection by the specified timeout. Alternatively, we can return a :noconnect tuple to keep the current process alive without attempting to reconnect to our peer node. Lastly, our disconnect/2 callback can return a :stop tuple to immediately terminate our Bitcoin node process.

When we start connecting to more nodes in the future, the loss of a single node won’t be a big deal. Losing peers is just a part of life, unfortunately. With that in mind, if we detect a disconnect, we’ll simply close our TCP connection and return a :stop tuple from our disconnect/2 callback:

def disconnect(_, state) do
  :ok = :gen_tcp.close(state.socket)
  {:stop, :normal, state}
end

Next, when handling the result of our call to handle_messages/2, we’ll deal with errors slightly differently. Instead of returning a :stop tuple when we receive an :error while handling one of our messages, we’ll instead return a :disconnect tuple:

case handle_messages(messages, state) do
  {:error, reason, state} -> {:disconnect, reason, %{state | rest: rest}}
  {:ok, state} -> {:noreply, %{state | rest: rest}}
end

This will drop us into our disconnect/2 callback with the given reason for the disconnect.

That’s all there is to it!

Final Thoughts

This refactor involved quite a few moving pieces, but in the end the final product is a cleaner, simpler, and more robust piece of software. With these changes we’ve positioned ourselves nicely to move forward and expand on the Bitcoin node project we’ve been building.

Be sure to check out the complete code on Github to get a cohesive view of what we’ve done.

Next time we’ll start expanding our network of nodes by recursively connecting with the neighboring nodes we receive from our peer node. Stay tuned!

Reversing BIP-39 and the Power of Property Testing

Written by Pete Corey on May 7, 2018.

I was recently asked how I would go about reversing the BIP-39 encoding algorithm we implemented previously and used to build our BIP-39 haiku miner.

Implementing the reverse of this algorithm seemed straight-forward at first, but it quickly led me down a rabbit hole that showed me just how powerful property-based testing can be.

Read on!

What is BIP-39 Again?

If you’re asking yourself, “what is BIP-39 again?”, I highly recommend you check out the first article in this series, “From Bytes to Mnemonic using Elixir”, for a full rundown of the BIP-39 encoding algorithm.

As a quick summary, the BIP-39 encoding algorithm is intended to convert an arbitrary set of bytes into an easily memorizable sequence of words. The algorithm goes something like this:

  1. Have bytes you want to encode.
  2. Append a partial checksum to the end of your bytes.
  3. Map every eleven bit chunk of the resulting binary onto a wordlist of your choice.
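
The heart of step three is splitting a bitstring into eleven-bit chunks and treating each chunk as an index into the wordlist. Elixir’s binary pattern matching handles this concisely; here’s a self-contained sketch (the module name and toy values are mine, not from the Bip39 project):

```elixir
defmodule Chunker do
  # Recursively split a bitstring into eleven-bit integer indices.
  # Any trailing remnant smaller than eleven bits is dropped.
  def chunks(<<index::11, rest::bits>>), do: [index | chunks(rest)]
  def chunks(_), do: []
end

# Two eleven-bit values in, two wordlist indices out.
[3, 5] = Chunker.chunks(<<3::11, 5::11>>)
```

Each resulting index maps onto one of the 2048 words in the wordlist.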

The devil is in the details, as we’ll see.

Laying the Groundwork

Before we write the reverse of our BIP-39 encoding algorithm, we need to lay some initial groundwork.

The Bip39.Mnemonic module we built in the previous article only had a single public function: generate/0. The generate/0 function generated a random set of bytes and converted it into a BIP-39-style mnemonic.

def generate do
  entropy()
  |> attach_checksum
  |> map_onto_wordlist
end

Moving forward, we should separate the encoding functionality from the entropy generation so we can test the encoding algorithm independently, with our own data. This will simplify the testing of our final solution.

def generate do
  generate_entropy()
  |> encode
end

def encode(data) do
  data
  |> attach_checksum
  |> map_onto_wordlist
end

For clarity’s sake, we’ve also renamed the entropy/0 function to generate_entropy/0.

Great. Now that we have an encode/1 function that encodes a given binary, we’re set up to add a new function, decode/1, that reverses the process and returns the binary data decoded from a given mnemonic.

def decode(mnemonic) do
end

Decoding the Mnemonic

The high-level process for reversing our BIP-39 algorithm and decoding our mnemonic into a binary looks something like this:

  1. Map the words in the mnemonic back into a binary.
  2. Separate the appended partial checksum from the encoded data.
  3. Verify that the appended checksum matches the actual checksum of the data.

Sounds like a plan.

The first step of decoding our mnemonic in our decode/1 function is to convert our encoded mnemonic wordlist back into a binary.

First, we’ll map each word onto its index in our @wordlist. Next, we’ll convert each index into an eleven-bit binary and reduce that list of binaries down into a single concatenated binary:

data_and_checksum =
  mnemonic
  |> Enum.map(fn word -> Enum.find_index(@wordlist, fn w -> w == word end) end)
  |> Enum.reduce(<<>>, fn n, acc -> <<acc::bits, n::11>> end)

What we’re left with is our originally encoded data concatenated with a variable-length partial checksum.
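That reduce step is easy to verify in isolation. Packing eleven-bit indices produces a bitstring we can unpack right back (the indices here are toy values):

```elixir
# Pack a list of eleven-bit indices into a single bitstring,
# exactly as the reduce in decode/1 does.
packed = Enum.reduce([3, 5], <<>>, fn n, acc -> <<acc::bits, n::11>> end)

22 = bit_size(packed)

# Unpacking with a bitstring comprehension recovers the original indices.
[3, 5] = for <<index::11 <- packed>>, do: index
```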

We know that the variable-length checksum is 1/32 the length of our originally encoded data. Given that, we also know that the length of our originally encoded data is 32/33 the length of data_and_checksum. The concatenated checksum will fill the remaining space:

total_size = bit_size(data_and_checksum)
data_size = div(total_size * 32, 33)
checksum_size = total_size - data_size
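As a concrete check of that arithmetic, consider 128 bits of entropy: the partial checksum is 4 bits (128/32), the combined binary is 132 bits, and 132 divides evenly into twelve eleven-bit words:

```elixir
total_size = 128 + div(128, 32)        # 132 bits of data plus checksum
data_size = div(total_size * 32, 33)   # recovers the original 128 bits
checksum_size = total_size - data_size # leaves the 4-bit checksum

128 = data_size
4 = checksum_size
12 = div(total_size, 11)
```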

Now that we know the structure of data_and_checksum, we can pull out the individual pieces we care about using binary pattern matching:

<<data::bits-size(data_size), partial_checksum::bits-size(checksum_size)>> =
  data_and_checksum


Now all that’s left to do is verify that the partial_checksum provided matches the calculated checksum of the provided data binary. If the checksums match, we’ll return an :ok tuple containing the decoded data. Otherwise, we’ll return an :error tuple explaining the situation:

if <<data::bits, partial_checksum::bits>> == attach_checksum(data) do
  {:ok, data}
else
  {:error, :bad_checksum}
end

That’s it!

We can now encode/1 a given binary into a mnemonic wordlist, and then decode/1 it to retrieve the original binary.

Putting our Solution to the Test

Now that we’ve built our Bip39.Mnemonic.encode/1 and Bip39.Mnemonic.decode/1 functions, we need to test that our encoding and decoding process is working as expected.

Testing an encoder/decoder pair is perfectly suited to property-based testing, so we’ll use the StreamData library to test our solution. We’ll set up a new test module, Bip39MnemonicTest, that scaffolds out a new property test for our mnemonic encoder:

defmodule Bip39MnemonicTest do
  use ExUnit.Case
  use ExUnitProperties

  property "encodes and decodes mnemonics" do
  end
end

The property that we’re trying to test is that a given binary is equal to its encoded mnemonic, decoded back into a binary. We can test this fairly easily with StreamData.

We know that the BIP-39 algorithm only supports encoding data between sixteen and thirty-two bytes (128 to 256 bits):

The allowed size of ENT is 128-256 bits.

Given that, we’ll generate a stream of random binaries that fall within that size range:

check all data <- binary(min_length: 16, max_length: 32) do

Next, we’ll generate the mnemonic for our randomly generated data binary, and assert that the decoded mnemonic matches our original data:

mnemonic = Bip39.Mnemonic.encode(data)
assert Bip39.Mnemonic.decode(mnemonic) == {:ok, data}

If all goes well, our test should pass.

An Under-Specified Encoder

Unfortunately, things rarely go as planned.

Our new test seems to run through several successful iterations of the encode/decode assertion, but ultimately fails. Thankfully, StreamData shrinks the failing test as much as possible and gives us the failing input:

1) property encodes and decodes mnemonics (Bip39MnemonicTest)
   Failed with generated values (after 20 successful runs):
       * Clause:    data <- binary(min_length: 16, max_length: 32)
         Generated: <<0, 0, 0, 0, 55, 157, 129, 190, 93, 189, 119, 124, 164, 131, 5, 67, 23, 225, 251, 162, 200>>
   Assertion with == failed
   code:  assert Bip39.Mnemonic.decode(mnemonic) == {:ok, data}
   left:  {:error, :bad_checksum}
   right: {:ok, <<0, 0, 0, 0, 55, 157, 129, 190, 93, 189, 119, 124, 164, 131, 5, 67, 23, 225, 251, 162, 200>>}

After an intense debugging session, I realized that there was nothing wrong with my Bip39.Mnemonic.decode/1 function. Instead, the problem was with my encoder.

The BIP-39 specification clearly states that in addition to being “128-256 bits” in length, the length of the binary data being encoded must also be a multiple of thirty two bits:

The mnemonic must encode entropy in a multiple of 32 bits.

Ignoring this requirement results in issues when generating and appending the partial checksum, and results in data loss during the decoding procedure.

To accommodate this requirement, let’s update our property test to truncate all generated binaries to the nearest thirty two bits:

check all bytes <- binary(min_length: 16, max_length: 32),
          bits_to_truncate = bytes |> bit_size |> rem(32),
          <<_::size(bits_to_truncate), data::bits>> = bytes do
  mnemonic = Bip39.Mnemonic.encode(data)
  assert Bip39.Mnemonic.decode(mnemonic) == {:ok, data}
end

Now our test passes, as expected!

Tightening Up our Encoding Process

While our Bip39.Mnemonic.encode/1 function works when passed the correct data, it’s probably not a good idea to assume that the developer knows what constitutes “good data”.

Instead, let’s refactor Bip39.Mnemonic.encode/1 to enforce the length requirements outlined in the BIP-39 specification.

Let’s update the function head to assert that data is a binary, assert that its length falls between one hundred twenty eight and two hundred fifty six bits, and assert that its length in bits is a multiple of thirty two:

def encode(data)
    when is_binary(data) and bit_size(data) >= 128 and bit_size(data) <= 256 and
           rem(bit_size(data), 32) == 0 do
  {:ok,
   data
   |> attach_checksum
   |> map_onto_wordlist}
end

If all of these requirements hold, we’ll return the encoded data wrapped in an :ok tuple. Otherwise, we need to return an :error tuple. We can do this with a second encode/1 function head:

def encode(_), do: {:error, :invalid_data}

Wrapping our Bip39.Mnemonic.encode/1 result in an :ok tuple breaks our test. We’ll need to fix that:

check all bytes <- binary(min_length: 16, max_length: 32),
          bits_to_truncate = bytes |> bit_size |> rem(32),
          <<_::size(bits_to_truncate), data::bits>> = bytes do
  {:ok, mnemonic} = Bip39.Mnemonic.encode(data)
  assert Bip39.Mnemonic.decode(mnemonic) == {:ok, data}
end

We should also add property tests to ensure that invalid binaries can’t be encoded by mistake:

First we’ll test that short binaries are rejected:

property "rejects short binaries" do
  check all bits <- integer(1..8),
            <<_::size(bits), data::bits>> <- binary(max_length: 16) do
    assert Bip39.Mnemonic.encode(data) == {:error, :invalid_data}
  end
end

Next, we’ll test that long binaries are rejected:

property "rejects long binaries" do
  check all bits <- integer(1..8),
            bytes <- binary(min_length: 32),
            data = <<bytes::binary, 0::size(bits)>> do
    assert Bip39.Mnemonic.encode(data) == {:error, :invalid_data}
  end
end

And finally, we’ll test that all “misaligned” binaries, or binaries whose lengths don’t align to thirty-two bits, are rejected:

property "rejects misaligned binaries" do
  check all data <- bitstring(min_length: 129, max_length: 256),
            data |> bit_size |> rem(32) != 0 do
    assert Bip39.Mnemonic.encode(data) == {:error, :invalid_data}
  end
end

Perfect. Now I’m fully confident in our BIP-39 encode/decode solution.

Final Thoughts

While this seemingly simple task threw me down a rabbit hole I definitely didn’t expect, I’m grateful for the experience. This showed me in a very hands-on way just how powerful property-based testing can be. Without randomly generated test cases, I don’t think I would have recognized the issues with my encode function.

If you’d like to see the BIP-39 encoder/decoder’s source in its entirety, be sure to check out the entire Bip39 project on Github.

I’d like to thank Pierre Martin for bringing up the topic of reversing our BIP-39 algorithm. After talking with me on the Elixir Slack group, he filed a Github issue with his solution to the problem. I highly recommend you check out his approach for a more fleshed out solution.

Visualizing the Oplog with Splunk

Written by Pete Corey on Apr 30, 2018.

I recently found myself investigating a mysterious occurrence in a production Meteor application. Seemingly randomly, without any obvious connection to user activity or periodic job activity, our Meteor server would spike to one hundred percent CPU consumption and stay pegged there until it was restarted.

After investigating nearly every hunch I could come up with, I was left with very few courses of action. My final theory was that a massive influx of MongoDB operations was flooding into our database. Any concerned observers listening within our Meteor application would be overwhelmed trying to catch up with the changes and would consume all available CPU cycles on the server.

In order to test this theory, I wanted to plot the MongoDB Oplog as a time series chart and compare it against the timeline of known CPU spikes, looking for any correlations.

I had many options for how to approach this problem, but I decided to use Splunk to visualize and explore the Oplog data. I’m very happy with how Splunk performed, and I can see myself using it again.

I was interested in all Oplog events that happened in the twenty-four hour period surrounding a known CPU spike at 22:55 UTC on April 23rd, 2018. I fired up Studio 3T and ran the following query against the collection of my MongoDB database:

{
  $and: [
    {ts: {$gte: new Timestamp(1524480600, 1)}},
    {ts: {$lte: new Timestamp(1524567000, 1)}}
  ]
}
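
Those Timestamp values are plain Unix seconds. As a sanity check, they form exactly a twenty-four-hour window centered on 22:50 UTC, which appears to be the spike time rounded down to the nearest ten minutes. A quick Elixir sketch of the same computation:

```elixir
# The query bounds are a 24-hour window around the spike
# (centered on 22:50 UTC; the center point is my inference).
{:ok, center, 0} = DateTime.from_iso8601("2018-04-23T22:50:00Z")

half_window = div(24 * 60 * 60, 2)

1_524_480_600 = DateTime.to_unix(center) - half_window
1_524_567_000 = DateTime.to_unix(center) + half_window
```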

The above query returned over seven hundred fifty thousand results, which I was able to export into a JSON file using Studio 3T (Studio 3T is the only MongoDB client I’ve found that supports saving an entire set of query results to file).

Once those seven hundred fifty thousand Oplog events were exported to disk, I was able to upload them directly into a Splunk index. Splunk gracefully parsed the JSON data and flattened each object into a neatly searchable collection.

With the data available in Splunk, I was free to start exploring.

My first step was to plot a time chart of all of the Oplog events. Given the large amount of data I was working with, I decided to bin my events into five minute bands:

| timechart span=5m count

An overview of our data.

Interestingly, an obvious dip in Oplog events occurred around the time of the observed CPU spike. This is the exact opposite of what I expected to see given my working hypothesis.

Zooming in on the dip.

Investigating further, I decided to plot a time series for every type of Oplog event, based on the op field:

| timechart span=1m count by op

To improve clarity, I also focused on a narrower time range, reduced my bin size, and switched to a log scale Y axis.

Everything working as intended.

This new chart shows that insert (i) and update (u) operations completely stop during the dip, but no-op (n) operations continue as usual. This seemed to indicate that the database was healthy, but the Meteor application stopped making insert and update requests.

This makes sense. If our server was eating up available CPU cycles, it probably wouldn’t find the time to query the database.

After visualizing the Oplog events around several of these CPU spikes, it became painfully obvious that my working hypothesis was not correct. There wasn’t any influx of database operations prior to a spike, and any dips in database activity were easily attributable to server restarts.

So now we’re back to square one. Was all of this for nothing?

Absolutely not!

When you’re investigating a problem, proving that something is not the cause of the problem can be incredibly valuable. By repeatedly narrowing down the possible set of culprits, we simplify the problem in our minds and make the real cause that much easier to find.

After spending more time digging into this issue, I’m convinced that it’s related to “fork bombs” crippling the server, as discussed in this issue filed against the Meteor project.

That said, this exploration proved to be incredibly valuable. By proving to myself that observer overload was not the cause of the spikes, I was able to rule out a huge swath of potential fixes.

I was also able to spend some time trying out a fantastic new tool. I’m sure I’ll find myself using Splunk again in the future.