Rendering ASCII Chord Charts with React

Written by Pete Corey on Oct 8, 2018.

A few weeks ago I begrudgingly decided that my Chord project needs a web-based front-end. After weighing various options, I decided to implement the heart of the front-end as a React-based ASCII chord chart renderer.

After some initial code sketching, I had a working prototype, and a few revisions later I found myself happy with the final code. Let’s dig into it!

What’s the Goal?

Before we start diving into code, let’s take a look at what we’ll be building.

Our Chord back-end treats chords as either a list of optional numbers representing frets played on specific strings, or a list of optional two-tuples of numbers representing the fret played and the finger used to play that fret. For example, on the back-end we’d represent a classic open C major chord with the following list:


[nil, 3, 2, 0, 1, nil]

And with a common fingering:


[nil, {3, 3}, {2, 2}, {0, nil}, {1, 1}, nil]

Unfortunately, JavaScript doesn’t have a “tuple” type, so we’re forced to represent our chords as either one- or two-dimensional arrays of numbers. In our front-end, those same chords would be represented like so:


[null, 3, 2, 0, 1, null]

[null, [3, 3], [2, 2], [0, null], [1, 1], null]

Our goal is to transform that representation into the following chord chart:

C major chord chart.

Let’s get to it!

Building Our Chart

We’ll start by creating a new React component to render a chord passed in through a given chord prop, and rendering a styled pre element to hold our soon-to-be chord chart:


const Chart = styled.pre`
  font-family: "Source Code Pro";
  text-align: center;
`;

export default ({ chord }) => {
  return (
    <Chart/>
  );
};

Before we render our chord, we’ll need to calculate some basic metrics which we’ll use throughout the process, and lay out our plan of attack:


export default ({ chord, name }) => {
  let { min, max } = getMinAndMax(chord);

  return (
    <Chart>
      {_.chain()
        .thru(buildFretRange)
        .thru(buildFretRows)
        .thru(intersperseFretWire)
        .thru(appendFingering)
        .thru(attachLeftGutter)
        .thru(joinRows)
        .value()}
    </Chart>
  );
};

The getMinAndMax helper is defined globally inside our module and simply filters out unplayed frets and returns an object consisting of the minimum fret used in the chord (min), and the maximum fret used in the chord (max):


const getMinAndMax = chord =>
  _.chain(chord)
    .map(string => (_.isArray(string) ? string[0] : string))
    .reject(_.isNull)
    .thru(frets => ({
      min: _.min(frets),
      max: _.max(frets)
    }))
    .value();

Once we’re armed with these metrics, we can see that our game plan is to build our range of frets (buildFretRange), build each of our fret rows (buildFretRows), intersperse our fret wire between those fret rows (intersperseFretWire), append any fingering instructions that were passed in with our chord (appendFingering), attach the left gutter (attachLeftGutter), and join everything together (joinRows).

Now we need to build out each of these component pieces.

Divide and Conquer

With min and max in scope, we can easily write a helper function that builds our fret range:


const buildFretRange = () => _.range(min, Math.max(max + 1, min + 5));

Notice that we’re enforcing a minimum height on our chord chart. If the range of our chord is less than five frets, we’ll render enough empty frets at the bottom of the chart to fill the remaining space.

The result is a list of fret numbers, one for each fret in the span of our chart.
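
As a quick sanity check, here’s what this step produces for the open C major chord we saw earlier:


// For [null, 3, 2, 0, 1, null], getMinAndMax gives us { min: 0, max: 3 },
// so buildFretRange pads the chart out to our five-fret minimum:
_.range(0, Math.max(3 + 1, 0 + 5)); // => [0, 1, 2, 3, 4]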


Once we have our chord’s range, we can transform each of the frets in that range into a renderable representation of a fret row:


const buildFretRows = frets =>
  _.map(frets, fret =>
    _.chain(_.range(chord.length))
      .map(
        string =>
          (_.isArray(chord[string]) ? chord[string][0] : chord[string]) ==
          fret ? (
            <Finger>{fret == 0 ? "○" : "●"}</Finger>
          ) : (
            <Wire>{fret == 0 ? "┬" : "│"}</Wire>
          )
      )
      .value()
  );

We start by mapping over each fret in our list of frets. For each fret, we map over each of the strings in our chord (_.range(chord.length)). Next, we check whether each string and fret combination is being played in our current chord. If it is, we render either a ● symbol, if the fret is being fingered, or a ○ symbol, if we’re playing an open string.

If we’re not playing the string/fret combination, we render a fret wire with either the ┬ symbol used to represent the nut of the guitar, or the │ symbol used to represent an unfretted string.
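
If the inline ternary above is hard to follow, here’s the same lookup pulled out into a hypothetical frettedAt helper (not part of the actual component) and evaluated against our open C major chord:


// Mirrors the inline check above: pull the fret number out of either
// chord representation for a given string index.
const frettedAt = string =>
  _.isArray(chord[string]) ? chord[string][0] : chord[string];

// Rendering the fret-zero row of [null, 3, 2, 0, 1, null]:
frettedAt(3) == 0; // => true:  this string is played open, so we render ○
frettedAt(0) == 0; // => false: chord[0] is null, so we render the ┬ nut symbol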

Both Finger and Wire are simply styled span elements:


const Finger = styled.span`
  font-weight: bold;
`;

const Wire = styled.span``;

At this point, our chord chart is starting to take shape, but without any horizontal fret wire or fret markers, it’s a bit disorienting to look at:

C major chord chart without fret wire.

Let’s clear things up a bit by interspersing fret wire between each of our fret rows:


const intersperseFretWire = rows =>
  _.flatMap(rows, row => [
    row,
    <Wire>{`├${_.repeat("┼", chord.length - 2)}┤`}</Wire>
  ]);

We use Lodash’s flatMap to append a Wire component after each of our fret rows. This leaves us with an array of alternating fret rows and fret wires.
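
For a standard six-string chord, each of those interspersed wire rows is just a six-character string:


// chord.length is 6, so we repeat ┼ four times between the two end caps:
`├${_.repeat("┼", 6 - 2)}┤`; // => "├┼┼┼┼┤"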


Some chords come complete with fingering suggestions. We’ll place those suggestions below our chord chart:


const appendFingering = rows => [
  ...rows,
  <Fingering>
    {_.chain(chord)
      .map(fret => (_.isArray(fret) ? fret[1] : " "))
      .value()}
  </Fingering>
];

Note that the Fingering component is just an unstyled span:


const Fingering = styled.span``;

We’re almost finished. Some chords are played further up the neck than others. Without indicating where the nut of our guitar is, a player has no way of orienting themselves.

Let’s give the readers of our charts some grounding by labeling the lowest fret of our chart in a left gutter:


const attachLeftGutter = rows =>
  _.map(rows, (row, i) => (
    <Fragment>
      <Label>{i == 0 && min != 0 ? _.pad(min, 2) : "  "}</Label>
      {row}
    </Fragment>
  ));

React’s new Fragment syntax gives us a nice way of combining multiple rendered components without introducing extra DOM cruft.

Notice that we’re not rendering fret labels for open chords. Because we’re rendering the nut using special symbols (○ and ┬), we don’t need to indicate that the chord starts on fret zero.

Final Thoughts

That’s all there is to it. We can use our new component to render a wide variety of chords:


<Chord chord={[null, 10, 10, 9, 12, null]} />
<Chord chord={[null, 8, 10, 9, 10, null]} />
<Chord chord={[null, 3, 8, 6, 9, null]} />
<Chord chord={[null, [3, 3], [2, 2], [0, null], [1, 1], null]} />
<Chord chord={[null, [10, 2], [10, 3], [9, 1], [12, 4], null]} />

All of which look beautiful when rendered in glorious ASCII!

Our chords.

Be sure to check out the entire project on Github, and while you’re at it, check out the refactor of my original solution done by Giorgio Torres. Giorgio swooped in after I complained that my first iteration was some of the ugliest React I’ve ever written and contributed his highly-polished solution. Thanks Giorgio!

Snapshot Testing GraphQL Queries

Written by Pete Corey on Oct 1, 2018.

For a recent client project, I’ve been building out a Node.js backend service fronted by a GraphQL API. A recent revelation made me realize just how useful Jest’s snapshot testing can be for writing high-level backend tests, specifically tests targeting GraphQL queries.

My typical approach for testing GraphQL queries is to import and test each query’s resolver function individually, as if it were just another function in my application.

Here’s an example to help paint a more complete picture:


const { bedCount } = require('...');

describe('Unit.bedCount', () => {
    it('it counts beds', async () => {
        expect.assertions(1);
        let user = await createTestUser();
        let unit = await Unit.model.create({ _id: 1, name: 'n1' });
        await Bed.model.create({ _id: 2, unitId: 1, name: 'b1' });
        await Bed.model.create({ _id: 3, unitId: 1, name: 'b2' });

        let result = bedCount(unit, {}, { user });
        expect(result).toEqual(2);
    });
});

Our test is verifying that a bedCount edge off of our Unit type returns the correct number of beds that live under that unit. We test this by manually inserting some test data into our database, importing the bedCount resolver function, and manually calling bedCount with the correct root (unit), args ({}), and context ({ user }) arguments. Once we have the result, we verify that it’s correct.

All’s well and good here.

However, things start to get messy as the result of our query grows in complexity. We very quickly have to start flexing our Jest muscles and writing all kinds of complex matchers to verify the contents of our result.
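
For example, if our resolver returned a richer result, say the unit along with its list of beds, a hypothetical assertion might balloon into nested matchers like these (the result shape shown here is made up for illustration):


// A hypothetical, richer result quickly forces us into nested matchers:
expect(result).toEqual(
    expect.objectContaining({
        name: 'n1',
        beds: expect.arrayContaining([
            expect.objectContaining({ name: 'b1' }),
            expect.objectContaining({ name: 'b2' })
        ])
    })
);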

What’s more, by testing our resolver function directly, we’re only testing half of our GraphQL endpoint. We’re not verifying that our schema actually contains the edge we’re trying to test, or the fields we’re trying to fetch off of our result type.

Thankfully, there’s a better way to test these queries. Using Jest snapshots we can refactor our original test into something like this:


describe('Unit.bedCount', () => {
    it('it counts beds', async () => {
        expect.assertions(1);
        let user = await createTestUser();
        await Unit.model.create({ _id: 1, name: 'n1' });
        await Bed.model.create({ _id: 2, unitId: 1, name: 'b1' });
        await Bed.model.create({ _id: 3, unitId: 1, name: 'b2' });
        let query = `
            query {
                unit(_id: 1) {
                    bedCount
                }
            }
        `;
        expect(await graphql(schema, query, {}, { user })).toMatchSnapshot();
    });
});

Here we’re once again setting up some test data, but then we perform an actual query through an instance of our GraphQL endpoint we set up on the fly. We pass in our application’s GraphQL schema (schema), the query we’d like to test (query), a root value ({}), and the context we’d like to use when performing the query ({ user }).
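
For completeness, here’s a rough sketch of the setup this style of test assumes. The ../schema module and its typeDefs and resolvers exports are hypothetical stand-ins for however your application builds its executable schema:


const { graphql } = require('graphql');
const { makeExecutableSchema } = require('graphql-tools');

// Hypothetical application module; adjust the path and exports to your project.
const { typeDefs, resolvers } = require('../schema');

const schema = makeExecutableSchema({ typeDefs, resolvers });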

However, instead of manually verifying that the results of our resolver are correct, we make an assertion that the result of our GraphQL query matches our snapshot.

When we first run this test, Jest creates a __snapshots__ folder alongside our test. In that folder, we’ll find a bedCount.test.js.snap file that holds the result of our query:


// Jest Snapshot v1, https://goo.gl/fbAQLP

exports[`Unit.bedCount it counts beds 1`] = `
Object {
  "data": Object {
    "unit": Object {
      "bedCount": 2,
    },
  },
}
`;

Any time the result of our query changes, our snapshot test will fail and give us a diff of our previous result saved in our snapshot, and the new, differing result.

The main benefit of this solution, in my mind, is that our query results can be as simple or as complex as we’d like. It doesn’t matter to us, as we don’t have to work with that data directly. We simply hand it off to Jest.

Thanks Jest!

Living the Simple Life with Recursive Parsing and Serialization

Written by Pete Corey on Sep 24, 2018.

I just pushed a massive refactor of my Elixir-powered Bitcoin full node project that considerably simplifies the parsing and serialization of Bitcoin network messages.

I’m a big fan of the solution I landed on, and I wanted to share it with you. The key insight I had was to switch to a recursive solution where each sub-component of every message handles its own parsing and serialization.

Obviously the devil is in the details, so let’s dive in.

What’s the Problem?

Before I took on this refactor, I was handling the parsing and serialization of Bitcoin network messages entirely manually. For every message, I’d define a parse/1 function and implement a corresponding serialize/1 protocol. Every field within the message was manually parsed and serialized using Elixir’s various binary manipulation operations.

As an example, here’s how a NetAddr message would be parsed using this technique:


def parse(binary) do
  with {:ok, time, rest} <- parse_time(binary),
       {:ok, services, rest} <- parse_services(rest),
       {:ok, ip, rest} <- parse_ip(rest),
       {:ok, port, rest} <- parse_port(rest) do
    {:ok, %NetAddr{time: time, services: services, ip: ip, port: port}, rest}
  end
end

defp parse_time(<<time::32-little, rest::binary>>),
  do: {:ok, time, rest}

defp parse_time(_binary),
  do: {:error, :bad_time}

defp parse_services(<<services::64-little, rest::binary>>),
  do: {:ok, services, rest}

defp parse_services(_binary),
  do: {:error, :bad_services}

defp parse_ip(<<ip::binary-size(16), rest::binary>>),
  do: {:ok, ip, rest}

defp parse_ip(_binary),
  do: {:error, :bad_ip}

defp parse_port(<<port::16-big, rest::binary>>),
  do: {:ok, port, rest}

defp parse_port(_binary),
  do: {:error, :bad_port}

While this was fantastic practice at manipulating binaries within Elixir, it wasn’t a scalable solution. There are simply too many messages in the Bitcoin protocol to implement in this time-consuming way. Not only that, but many of the messages share common sub-structures whose parse/1 and serialize/1 implementations would need to be repeated throughout the project.

Daunted by the task of implementing a parse/1 and serialize/1 function for every message in the protocol’s peer-to-peer vocabulary, I decided I needed a better solution.

Taking Advantage of Sub-Structures

As I mentioned up above, many Bitcoin messages share common sub-structures. Rather than dooming me to tedious repetition, these repeated structures turned out to be a blessing from the DRY gods.

If we could architect our parse/1 and serialize/1 implementations in a way that offloads the responsibility of parsing and serializing these shared sub-structures onto the sub-structures themselves, the implementations of our top-level messages could be substantially simplified.

Not only that, but we could take the notion of “sub-structures” even further. In many ways, the types of the primitives that compose together to build the protocol’s messages and sub-structures are sub-structures in and of themselves. For example, a uint32_t, which is a C type commonly used to define unsigned integers throughout the protocol’s various messages, is actually a sub-structure that has a single field and specific parsing and serialization rules.

We could implement a UInt32T struct with a corresponding parse/1 function like so:


defmodule BitcoinNetwork.Protocol.UInt32T do
  defstruct value: nil

  def parse(<<value::little-unsigned-integer-32, rest::binary>>),
    do: {:ok, %BitcoinNetwork.Protocol.UInt32T{value: value}, rest}
end

Similarly, we could reverse the process and serialize our newly parsed UInt32T:


defimpl BitcoinNetwork.Protocol.Serialize, for: BitcoinNetwork.Protocol.UInt32T do
  def serialize(%{value: value}),
    do: <<value::little-unsigned-integer-32>>
end

Composing Sub-Structures

Now we have parsing and serialization rules built for these base-level sub-structures like UInt32T and other primitive types. We can build upon the work we’ve done by composing these sub-structures together into more complex structures.

For example, a NetAddr is really just a UInt32T, a UInt64T, a sixteen-byte Binary, and a UInt16T representing an address’s time, services, ip, and port, respectively. We can write a NetAddr struct complete with a parse/1 function that calls out to the parse/1 functions of these more primitive sub-structures:


defmodule BitcoinNetwork.Protocol.NetAddr do
  defstruct time: nil,
            services: nil,
            ip: nil,
            port: nil

  alias BitcoinNetwork.Protocol.{Binary, NetAddr, UInt32T, UInt64T, UInt16T}

  def parse(binary) do
    with {:ok, time, rest} <- UInt32T.parse(binary),
         {:ok, services, rest} <- UInt64T.parse(rest),
         {:ok, ip, rest} <- Binary.parse(rest, 16),
         {:ok, port, rest} <- UInt16T.parse(rest),
         do:
           {:ok,
            %NetAddr{
              time: time,
              services: services,
              ip: ip,
              port: port
            }, rest}
  end
end

Serializing a NetAddr structure is even easier. We simply build a list of the fields we want serialized, in the order we want them serialized, and then map over that list with our serialize/1 function:


defimpl BitcoinNetwork.Protocol.Serialize, for: BitcoinNetwork.Protocol.NetAddr do
  def serialize(net_addr),
    do:
      [
        net_addr.time,
        net_addr.services,
        net_addr.ip,
        net_addr.port
      ]
      |> BitcoinNetwork.Protocol.Serialize.serialize()
end

We’re left with an Elixir binary that represents the entire serialized NetAddr structure, but we didn’t have to do any of the heavy lifting ourselves.

The best part of this solution is that we can repeatedly build on top of our sub-structures. An Addr message is composed of a VarInt and a list of NetAddr sub-structures. It’s sub-structures all the way down.

Special Cases and Rough Edges

While the general case for this solution works beautifully, there are a few special cases and rough edges we need to smooth over.

The first of these rough edges comes when parsing and serializing fixed-size binaries. For example, within the NetAddr structure, we need to parse sixteen bytes off of the wire and interpret those bytes as an IP address. We instructed our NetAddr parser to do this by calling Binary.parse/2 with 16 as a second argument.

Our Binary module’s parse/2 function accepts an optional second argument that lets us specify exactly how many bytes we want to parse out of the incoming binary:


defmodule BitcoinNetwork.Protocol.Binary do
  def parse(binary, size \\ 1) do
    <<binary::binary-size(size), rest::binary>> = binary
    {:ok, binary, rest}
  end
end

Notice that Binary.parse/2 returns a primitive Elixir binary, rather than a struct. This is an intentional decision and makes our serialization that much easier:


defimpl BitcoinNetwork.Protocol.Serialize, for: BitString do
  def serialize(binary),
    do: binary
end

Another special case we need to handle is made apparent when we need to parse and serialize lists of “things”. A perfect example of this appears in our code when we need to parse an Addr structure, which is composed of a VarInt number of NetAddr structures:


with {:ok, count, rest} <- VarInt.parse(binary),
     {:ok, addr_list, rest} <- Array.parse(rest, value(count), &NetAddr.parse/1),
     do:
       {:ok,
        %Addr{
          count: count,
          addr_list: addr_list
        }, rest}

Like Binary.parse/2, Array.parse/3 has some special behavior associated with it. Our Array module’s parse/3 function takes our binary to parse, the number of “things” we want to parse out of it, and a function to parse each individual “thing”:


defmodule BitcoinNetwork.Protocol.Array do
  def parse(binary, count, parser),
    do: parse(binary, count, parser, [])
end

Our parse/3 function calls out to a private parse/4 function that builds up an accumulator of our parsed “things”. Once we’ve parsed a sufficient number of “things”, we return our accumulated list:


defp parse(rest, 0, parser, list),
  do: {:ok, Enum.reverse(list), rest}

The non-base case of our parse/4 function simply applies our parser/1 function to our binary and prepends the resulting parsed “thing” to our list of “things”, which our base case reverses:


defp parse(binary, count, parser, list) do
  with {:ok, parsed, rest} <- parser.(binary),
       do: parse(rest, count - 1, parser, [parsed | list])
end

Once again, our Array.parse/3 function returns a primitive Elixir list, not a struct. This makes our serialization fairly straightforward:


defimpl BitcoinNetwork.Protocol.Serialize, for: List do
  def serialize(list),
    do:
      list
      |> Enum.map(&BitcoinNetwork.Protocol.Serialize.serialize/1)
      |> join()

  def join(pieces),
    do: 
      pieces
      |> Enum.reduce(<<>>, fn piece, binary -> <<binary::binary, piece::binary>> end)
end

We simply map serialize/1 over our list of “things”, and concatenate the newly serialized pieces together.

If you remember back to our NetAddr serialization example, you’ll notice that we’ve been using our List primitive’s serialization protocol this whole time.

Awesome!

Final Thoughts

I struggled with this refactor on and off for a good few weeks. Ultimately, I’m happy with the solution I landed on. It has more moving parts than my original implementation, but it’s a much more scalable and mentally manageable solution.

Now that this is out of my system, I can turn my attention to the interesting pieces of building a Bitcoin full node: processing blocks!

Expect articles digging into that topic in the future. In the meantime, check out the entire project on Github to get a more hands-on feel for the refactor and the recursive parsing and serialization solution I ultimately landed on.