
Point in Polygon

This post is written as a set of Literate Commits. The goal of this style is to show you how this program came together from beginning to end.

Each commit in the project is represented by a section of the article. Click each section’s header to see the commit on GitHub, or check out the repository and follow along.

Written by Pete Corey on Jul 20, 2016.

Project Setup

The goal of today’s kata is to implement a function called pointInPoly. pointInPoly is called with a polygon, represented as a series of points, as the first argument, and a single point as the second argument. Each point is represented as a set of [x, y] coordinates, where both x and y are numbers. pointInPoly should return true if the point is within the defined polygon, and false if it is not.

The kata description points out several assumptions we can make about the inputs: 1) The polygon will be a valid simple polygon. That is, it will have at least three points, none of its edges will cross each other, and exactly two edges will meet at each vertex. 2) In the tests, the point will never fall exactly on an edge of the polygon.

And lastly, although the description never explicitly says so, we’re assuming that the points in the polygon are given in order; each point shares an edge with the next.

This first commit sets up the initial project.


+{
+    "presets": ["es2015"]
+}




+{
+    "main": "index.js",
+    "scripts": {
+        "test": "mocha ./test --compilers js:babel-register"
+    },
+    "dependencies": {
+        "babel-preset-es2015": "^6.9.0",
+        "babel-register": "^6.9.0",
+        "chai": "^3.5.0",
+        "lodash": "^4.12.0",
+        "mocha": "^2.4.5"
+    }
+}


+import { expect } from "chai";
+
+describe("index", function() {
+
+    it("works", function() {
+        expect(2+2).to.equal(4);
+    });
+
+});

The Simplest Test

The first thing we do when we’re solving a problem like this is to write a test. Because we’re implementing a function given to us, we already know what our final interface will look like (pointInPoly), so we can immediately write a test for it.

Our first test asserts that a point at the origin ([0, 0]) is within a simple triangle with points at [-1, -1], [1, -1], and [0, 1].

After writing this test, our test suite complains that pointInPoly is not defined. This is quickly fixed by importing pointInPoly into our test file and then exporting it from index.js.

After exporting the empty pointInPoly function, the test suite shouts that it expected undefined to be true. To bring us back to green we change our new pointInPoly function to return true.


+export function pointInPoly(poly, point) {
+    return true;
+}


 import { expect } from "chai";
+import { pointInPoly } from "../";

-describe("index", function() {
+describe("pointInPoly", function() {

-    it("works", function() {
-        expect(2+2).to.equal(4);
+    it("detects a point in a triangle", function() {
+        let poly = [[-1, -1], [1, -1], [0, 1]];
+        expect(pointInPoly(poly, [0, 0]));
     });

Fleshing Out a Strategy

We knew that our initial pointInPoly solution was incomplete. Returning true for all cases obviously wasn’t going to work. But what do we do instead? How do we even begin to tackle this problem?

Thankfully, from my days of video game programming I know that a simple test for checking if a point lies within a polygon is to send a ray outward from that point. If the ray intersects with the lines of the polygon an odd number of times, the point lies within the polygon. Otherwise, it’s outside.
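For reference, the whole strategy fits in a few lines. Here’s a standalone sketch (with hypothetical names, separate from the implementation we build commit-by-commit below) that casts a horizontal ray toward +x and toggles an inside/outside flag at each edge crossing:

```javascript
// Even-odd (ray casting) point-in-polygon test. We cast a horizontal
// ray from `point` toward +x and count how many polygon edges it
// crosses; an odd count means the point is inside.
function rayCastContains(points, [px, py]) {
    let inside = false;
    for (let i = 0, j = points.length - 1; i < points.length; j = i++) {
        let [xi, yi] = points[i];
        let [xj, yj] = points[j];
        // The edge (j -> i) is crossed if it straddles the ray's
        // y-coordinate and the crossing lies to the right of the point.
        let straddles = (yi > py) !== (yj > py);
        if (straddles && px < xi + ((py - yi) / (yj - yi)) * (xj - xi)) {
            inside = !inside;
        }
    }
    return inside;
}
```

With this, a point at the origin is inside the triangle [[-1, -1], [1, -1], [0, 1]], while a point at [5, 5] is not.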

Since we’re in a green state, we can do a little refactoring and implement a high level version of this solution. We want to count the number of intersections between our imaginary ray and our polygon:

let intersections = countIntersections(poly, point);

And then return true if that count is odd, or false if it’s even:

return !!(intersections % 2);

After making these changes, our test suite complains that countIntersections does not exist, so let’s quickly define it and have it return 1 to bring us back to green.


+function countIntersections(poly, point) {
+    return 1;
+}
+
 export function pointInPoly(poly, point) {
-    return true;
+    let intersections = countIntersections(poly, point);
+    return !!(intersections % 2);
 }

Rethinking Our Interfaces

After some soul-searching, I decided I wasn’t happy with where we were going with our previous solution. countIntersections was really an “internal method”. Outside of the context of our point-in-polygon problem, a function called countIntersections that takes in a poly and a point really just doesn’t make any sense.

Because it was an internal method, I was hesitant to write tests for it. These tests would be too closely coupled with our implementation, and would make refactoring difficult. Additionally, countIntersections would most likely call other methods that would be even more contextually dependent and awkward to test.

We needed a cleaner solution.

After reconsidering the problem, it’s clear that we’re dealing with a few really solid abstractions. The most apparent is a polygon. If we implement a generic polygon, we’d be able to cleanly specify what we want from our pointInPoly method:

function pointInPoly(poly, point) {
    return polygon(poly).surrounds(point);
}

Additionally, by breaking polygon out into a new abstraction, we can freely build and test its interface to our heart’s content.

With this in mind, we wrote a new set of tests that describe polygon. The first test looks nearly identical to our pointInPoly tests and checks if a polygon surrounds a point.

Our dummy implementation of polygon.surrounds simply returns true.


+export function polygon(_points) {
+
+    let surrounds = (point) => true;
+
+    return {
+        surrounds
+    };
+}


+import { expect } from "chai";
+import { polygon } from "../polygon";
+
+describe("polygon", function() {
+
+    it("checks if a polygon surrounds a point", function() {
+        let poly = polygon([[-1, -1], [1, -1], [0, 1]]);
+        expect(poly.surrounds([0, 0]));
+    });
+
+});

Restating Point-in-Polygon

Now that we’ve defined our polygon, we can restate our implementation of surrounds. We want to translate our polygon so that the point we’ve been given can be treated as the origin. Next we want to count the number of intersections that an arbitrary ray ([0, 1]) makes with the newly translated polygon:

let intersections = translate([-x, -y]).intersections([0, 1]);

Lastly we want to return true from surrounds if intersections is odd:

return !!(intersections % 2);

After making these changes, our test suite complains about translate and intersections not being defined.

The fastest way to get us back to green is to have translate return a new polygon, and have intersections return 1.


 ...
-    let surrounds = (point) => true;
+    let surrounds = ([x, y]) => {
+        let intersections = translate([-x, -y]).intersections([0, 1]);
+        return !!(intersections % 2);
+    };
+
+    let translate = ([x, y]) => {
+        return polygon(_points);
+    };
+
+    let intersections = (ray) => 1;

     return {
-        surrounds
+        surrounds,
+        translate,
+        intersections
     };

Translate Base Case

Now we can start testing the component pieces of our surrounds function.

First up, let’s write a test for translate. A straight-forward base case for translate asserts that calling translate on an empty polygon ([]) should return an empty polygon.

After writing this test, I realized that I needed a function to return the points from a polygon. Thus, points was born.

let points = () => _points;

Surprisingly, our naive solution also works for all calls to translate where x and y are zero.


 ...
+    let points = () => _points;
+
     return {
 ...
         translate,
-        intersections
+        intersections,
+        points
     };


 ...
+    it("translates a polygon", function() {
+        expect(polygon([]).translate([0, 0]).points()).to.deep.equal([]);
+    });
+
 });

Finishing Translate

Adding a more complicated test of translate shows that we need a better solution. Thankfully, it isn’t a huge leap to come up with the final form of the function.

The translate function returns a new polygon where every point in the polygon has been incremented by the provided x and y values.


 ...
     let translate = ([x, y]) => {
-        return polygon(_points);
+        return polygon( => [p[0] + x, p[1] + y]));
     };


 ...
         expect(polygon([]).translate([0, 0]).points()).to.deep.equal([]);
+        expect(polygon([
+            [0, 0], [5, -5]
+        ]).translate([1, 1]).points()).to.deep.equal([
+            [1, 1], [6, -4]
+        ]);
     });

Line Abstraction

Now we turn our attention to the intersections function. This still seems like a daunting piece of functionality, and we should break it down into simpler pieces, if possible.

If we’re not afraid of making use of another abstraction (line), a simple implementation of intersections could be written like this:

return lines().filter(line => line.intersects(ray)).length;

In plain English, this reads as “return the number of lines in this polygon that intersect the given ray”.

This is a nice solution. To make it a reality, let’s create a new set of tests for our new line abstraction and a dummy implementation of line and line.intersects.


+export function line(a, b) {
+
+    let intersects = (ray) => true;
+
+    return {
+        intersects
+    };
+}


+import { expect } from "chai";
+import { line } from "../line";
+
+describe("line", function() {
+
+    it("checks if the line intersects a ray", function() {
+        expect(line([0, 1], [1, 0]).intersects([1, 1]));
+    });
+
+});

Finishing Line

Adding another test against line.intersects shows that we need a better solution.

Determining if a line intersects with a ray is a well-documented problem. I used this blog post as a guide for implementing my solution. Be sure to check it out for details on the math being used here.


 ...
-    let intersects = (ray) => true;
+    function cross(v1, v2) {
+        return (v1[0] * v2[1]) - (v1[1] * v2[0]);
+    }
+
+    function dot(v1, v2) {
+        return v1[0] * v2[0] + v1[1] + v2[1];
+    }
+
+    let intersects = (ray) => {
+        let v1 = [-a[0], -a[1]];
+        let v2 = [b[0] - a[0], b[1] - a[1]];
+        let v3 = [-ray[1], ray[0]];
+
+        let t1 = cross(v2, v1) / (dot(v2, v3));
+        let t2 = (dot(v1, v3)) / (dot(v2, v3));
+
+        return t1 >= 0 && (t2 >= 0 && t2 <= 1);
+    };


 ...
         expect(line([0, 1], [1, 0]).intersects([1, 1]));
+        expect(line([0, 1], [1, 0]).intersects([-1, -1]));
     });

Finishing Intersections

Now that line and line.intersects exist, we can implement our polygon.intersections method.

As usual, we start by adding a test for intersections, and then we make our test suite happy by importing line, and creating the polygon.lines function.


+import { line } from "./line";
+
 export function polygon(_points) {
 ...
-    let intersections = (ray) => 1;
+    let intersections = (ray) => {
+        return lines().filter(line => line.intersects(ray)).length;
+    };
 ...
+    let lines = () => {
+        return [line([-1, 1], [1, 1])];
+    }
+
     return {
 ...
         intersections,
-        points
+        points,
+        lines
     };


 ...
+    it("counts intersections with a ray", function() {
+        expect(polygon([
+            [-1, -1], [1, -1], [0, 1]
+        ]).intersections([0, 1])).to.equal(1);
+    });
+
 });

Constructing Lines

Finally, all we need to do to complete our solution is to build our polygon.lines function. This function should transform the set of _points that were used to define our polygon into a set of line objects.

We implement a test for polygon.lines and use it to drive the creation of our solution. Don’t forget that the last point in the polygon must connect back to the first!


 ...
+    let points = () => [a, b];
+
     return {
-        intersects
+        intersects,
+        points
     };


 ...
     let lines = () => {
-        return [line([-1, 1], [1, 1])];
+        if ((!_points) || !_points.length) {
+            return [];
+        }
+
+        let last = _points[0];
+        let pairs = _points.slice(1).map((point) => {
+            let segment = line(last, point);
+            last = point;
+            return segment;
+        });
+        pairs.push(line(_points[_points.length - 1], _points[0]));
+
+        return pairs;
     }


 ...
+    it("creates lines for a polygon", function() {
+        let lines = polygon([
+            [-1, -1], [1, -1], [0, 1]
+        ]).lines();
+        expect( => line.points())).to.deep.equal([
+            [[-1, -1], [1, -1]],
+            [[1, -1], [0, 1]],
+            [[0, 1], [-1, -1]]
+        ]);
+    });
+
 });

Pull it all Together

Now that all of our building blocks are finalized, we can come back to our original pointInPoly method and rewrite it exactly how we had imagined:

return polygon(poly).surrounds(point);

After making this change, our test suite is still green. Everything is working as expected.


-function countIntersections(poly, point) {
-    return 1;
-}
+import { polygon } from "./polygon";

 export function pointInPoly(poly, point) {
-    let intersections = countIntersections(poly, point);
-    return !!(intersections % 2);
+    return polygon(poly).surrounds(point);
 }

Final Test & Bug Fix

At this point, our solution should be finished. However, when we feed in the tests provided by the kata, we notice a failure.

After digging into what’s happening, we notice that the system was claiming that a line, line([4, -6], [4, 4]), was intersecting with a ray, [0, 1]. Clearly, this is incorrect.

To find out what was causing this, we write a new test against the line.intersects function:

expect(line([4, -6], [4, 4]).intersects([0, 1]));

As expected, this test fails.

After some poking, prodding, and comparing against reference equations, we notice a typo in the dot function. After fixing the dot product calculation, our entire test suite shifts back to a passing state.



 ...
     function dot(v1, v2) {
-        return v1[0] * v2[0] + v1[1] + v2[1];
+        return v1[0] * v2[0] + v1[1] * v2[1];
     }


 ...
+    it("detects a point in a square", function() {
+        var poly = [
+            [-5, -5], [5, -5],
+            [5, 5], [-5, 5]
+        ];
+
+        expect(pointInPoly(poly, [-6, 0]));
+        expect(pointInPoly(poly, [1, 1]));
+    });
+
 });


 ...
         expect(line([0, 1], [1, 0]).intersects([-1, -1]));
+        expect(line([4, -6], [4, 4]).intersects([0, 1]));
     });
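With the corrected dot product in place, the intersection math can be sanity-checked in isolation. This standalone sketch (hypothetical names, mirroring the kata’s line.intersects) computes the parametric distance t1 along the ray and the position t2 along the segment:

```javascript
// Ray/segment intersection. The ray starts at the origin and points
// along `ray`; the segment runs from `a` to `b`. t1 is how far along
// the ray the crossing occurs, t2 where it lands on the segment (0..1).
function rayIntersectsSegment(a, b, ray) {
    let cross = (v, w) => v[0] * w[1] - v[1] * w[0];
    let dot = (v, w) => v[0] * w[0] + v[1] * w[1];

    let v1 = [-a[0], -a[1]];             // origin minus segment start
    let v2 = [b[0] - a[0], b[1] - a[1]]; // segment direction
    let v3 = [-ray[1], ray[0]];          // ray direction rotated 90 degrees

    let t1 = cross(v2, v1) / dot(v2, v3);
    let t2 = dot(v1, v3) / dot(v2, v3);

    return t1 >= 0 && t2 >= 0 && t2 <= 1;
}
```

The ray [1, 1] hits the segment from [0, 1] to [1, 0] at its midpoint (t1 = t2 = 0.5), while the vertical segment at x = 4 is parallel to the ray [0, 1] and never crosses it.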


When you compare our solution with other submitted solutions, you’ll notice that ours is longer. Our solution probably took much longer to write as well. However, our solution was a fantastic exercise in deliberate practice.

By consciously focusing on writing robust and maintainable code, we had a few introspective moments about our process and our technique.

The first major insight that we had came when we realized we were going down a bad road with the countIntersections method. By creating additional abstractions, we ended up with more testable, maintainable, and reusable code.

At the very end of the process we found a bug in the solution. Thanks to our test suite we were able to almost immediately find the source of the bug and fix it.

Be sure to check out the full project, complete with detailed commit messages, on GitHub.

Meteor's Nested Import Controversy

Written by Pete Corey on Jul 17, 2016.

In my last post you might have noticed an interesting piece of ES6-flavored syntax sugar. We imported a module within a nested block:

if (Meteor.isServer) {
    import winston from "winston";
}

Although this code is isomorphic and executed on both the client and the server, winston is only imported on the server.

While this kind of nested importing seems like a handy addition to our Meteor toolbox, it doesn’t come without its share of controversy.

Meteor Meet Reify

Before Meteor 1.3.3, this kind of nested import was impossible. Importing a module within any non-top-level block would result in an exception when building your Meteor application:

import winston from "winston";
SyntaxError: Unexpected reserved word

However, this all changed in Meteor 1.3.3. Digging through the release notes for that version, you’ll notice a very interesting bullet point:

import statements in application modules are no longer restricted to the top level, and may now appear inside conditional statements (e.g. if (Meteor.isServer) { import … }) or in nested scopes.

In this release, Meteor transitioned to using Ben Newman’s Reify transpiler, which transforms our nested import statement into something like this:

if (Meteor.isServer) {
    var winston;
    module.import("winston", {
        "default": function (v) {
            winston = v;
        }
    });
}

Initially, this seems like a useful improvement to the module system.

Importing modules within nested blocks can alleviate some of the pains of context-dependent (client vs. server) imports in isomorphic code. You only want this module imported on the server? Not a problem!

Reify Meet Babel

Trouble quickly rears its ugly head when we try using these modules outside the context of the Meteor build tool.

To simplify our example, imagine we have a module that looks like this:

export function parse(input) {
    import qs from "qs";
    return qs.parse(input);
}
This module exports a function called parse that takes in an input string, runs it through qs.parse, and returns the result.

If this were a Meteor module, this would work just fine. The qs module would be imported at runtime using module.import and everything would work as expected.

Now, imagine that we wanted to test this functionality. Because we want to keep our tests fast, we’ll bypass Meteor’s test framework and use Mocha directly.

A simple test for this module might look something like this:

import { expect } from "chai";
import { parse } from "../imports/parse";

describe("myParseModule", function() {
    it("parses input", function() {
        expect(parse("foo=bar")).to.deep.equal({
            foo: "bar"
        });
    });
});

We execute this test by running mocha over our ./test directory. Unaware of the transition to Reify (and, admittedly, unaware that Reify even exists), we specify that we want to use Babel as our Javascript transpiler:

mocha ./test --compilers js:babel-register

Unfortunately, when Babel tries to transpile our application, it throws an error:

SyntaxError: 'import' and 'export' may only appear at the top level (2:4)
  1 | export function parse(input) {
> 2 |     import qs from "qs";
    |     ^
  3 |     return qs.parse(input);
  4 | }
  5 |

Outside the context of Reify and the Meteor build system, nested imports are not recognized as valid ES6.

The Controversy

Currently, ES6 only supports top-level module imports. This design decision is intended to open the doors for static analysis tools, better resolution of cyclic dependencies, improved dead code removal, and faster lookups, along with proposed Javascript features like macros and types.

Reify’s choice to deviate from this decision is potentially at odds with these design goals, and violates the ES6 specification itself.

That isn’t to say that Reify or Meteor are necessarily in the wrong. Specifications should be changeable, provided there is a compelling reason to change. Ben took up the torch and wrote a compelling document outlining the benefits of nested imports.

In addition to static imports, ES6 also describes a module loader API that can be used to dynamically import modules:

Promise.all([
    System.import("./foo"),
    System.import("./bar")
]).then(([foo, { baz }]) => {
    // ...
});

An argument could be made that the dynamic module loader API makes techniques like dead code removal impossible. How can a static analysis tool know which modules can be culled if it can’t see, at compile time, which modules will be used?

let version = Math.round(Math.random());
System.import("./foo-v" + version);

Can our build system remove the foo-v0 module from our final bundle? What about foo-v1? Either of the modules could be chosen at runtime, so it’s impossible to know.

Ben argues that nested imports, which require string literal import locations and require that all imported symbols be explicitly named, would eliminate this problem entirely. Even with nested imports, it’s easy to see which modules and symbols within those modules will be required in a final bundle.

Would nested imports bring us closer to our goals of better compile-time static analysis, while at the same time providing a better, more consistent developer experience?

The controversy is subtle, but the controversy is real.

Looking Forward

As Meteor developers, we have two immediate options moving forward. We can embrace Reify, and potentially distance ourselves from the rest of the Javascript community, or we can fall back to using CommonJS-style require statements to pull in nested modules (or shim ES6-style module loaders):

if (Meteor.isServer) {
    const winston = require("winston");
}

For the time being, because I enjoy using native Node.js tools outside the context of the Meteor build tool, I plan on refraining from using nested imports.

I’m very interested to see how all of this will play out.

Ben will be discussing his proposal for nested imports with the ECMAScript standards committee at the end of this month.

Literate Commits

Written by Pete Corey on Jul 11, 2016.

I’ve always been interested in Donald Knuth’s idea of “literate programming”. Presenting a complete program as a story or narrative is a very powerful framework for expressing complex ideas.

While literate programming is interesting, it’s also uncommon. The practice of literate programming seems to be at odds with how most of us develop software.

Thankfully, with a small shift in perspective we may be able to employ all of the benefits of writing literate software, while leveraging the tools we use on a daily basis.

The Problem With Literate Programming

Literate programming doesn’t come without its share of problems. In a recent article, John Cook does a fantastic job of outlining some of the drawbacks of this style of programming, and how they’ve hindered its wider adoption. Additionally, commenter Peter Norvig gives a compelling argument against literate programming in the comments:

I think the problem with Literate Programming is that it assumes there is a single best order of presentation of the explanation. I agree that the order imposed by the compiler is not always best, but different readers have different purposes. You don’t read documentation like a novel, cover to cover. You read the parts that you need for the task(s) you want to do now.

A developer’s experiences, preferences, and goals will lead them to approach the same codebase in very different ways. A new developer looking to take ownership over an existing codebase may be looking for a much more holistic view of the software than a developer looking to fix a single bug.

Choose Your Own Adventure

But what if we had all of the benefits of a literate program without the strict presentation order? What if we could dive into any piece of the code that interests us and read only the sections of documentation that relate to that code?

What would be ideal is a tool to help construct such paths for each reader, just-in-time; not a tool that makes the author choose a single path for all readers.

Peter’s idea of a “reading path” that can be constructed on the fly strikes a chord with me and resembles an experiment I’ve been working on lately.

In an attempt to better document, improve, and share my programming workflow, I’ve been setting aside time for documented, deliberate practice.

During these practice sessions, I write short programs while following what I consider to be “best practices”. I’m very deliberate during these sessions and document the thought process and impetus behind every change with highly detailed, “literate” commit messages.

The goal of this process is to turn the project’s revision history into an artifact, a set of literate commits, that represent my thoughts as I go through the steps of writing professional-level software.

Benefits of Literate Commits

In the short amount of time I’ve been doing them, these practice sessions have been enlightening. Intentional observation of my process has already led to many personal insights.

Slowing down and ensuring that each and every commit serves a singular purpose and adds to the narrative history of the project has done wonders to reduce thrash and the introduction of “stupid mistakes”.

The knowledge that the project’s revision history will be on display, rather than buried in the annals of git log, is a powerful motivating factor for doing things right the first time.

The goal is that this repeated act of “doing things right the first time” will eventually turn into habit.

Learning From History

The portion of Peter’s comment that stands out is his desire for a choose-your-own-adventure-style tool for bringing yourself up to speed with a given piece of code.

Imagine finding a section of code that you don’t understand within a project. git blame can already be used to find the most recent commits against that section, but those commits are most likely unhelpful out of context.

Instead, imagine that those commit messages were highly detailed, thorough explanations of why that code was changed and what the original developer hoped to accomplish with their change.

Now go further back. Review all of the related commits that led up to the current state of this particular piece of code.

Read in chronological order, these commits should paint a clear picture of how and why this particular code came into existence and how it has changed over the course of its life.

Those who do not read history are doomed to repeat it.

This kind of historical context is invaluable when writing software. By observing how a piece of code has changed over time, you can build a better understanding of the purpose it serves, and put yourself in the right mindset to change it.

Example Project

For a very simple, introductory example of this style of programming and writing, take a look at how I solved a simple code kata in literate commit style.

This is a very basic example, but I hope it serves as a clear introduction to the style. I plan on continuing to release literate commit posts over the coming months. Hopefully this intentional style of programming can be as helpful to others as it has been to me.

While I’m not advocating using literate commits in real-world software, in my limited experience they can be an incredibly useful tool for honing your craft.