Inject Detect - Coming Soon!

Written by Pete Corey on Mar 6, 2017.

To make a long story short, I’ve decided to start working on a new project called Inject Detect.

Inject Detect is a SaaS application designed to detect NoSQL Injection attacks against your MongoDB-backed application as they happen.

Check out the Inject Detect landing page for more details, and sign up for the Inject Detect newsletter to stay in the loop. I’ll also send you an introduction to NoSQL Injection for signing up!

Why NoSQL Injection?

It turns out that my most popular post from last year was about the NoSQL Injection talk I gave at the 2016 Crater Remote Conference.

This couldn’t make me happier! NoSQL Injection has been, and continues to be, one of the most serious security issues I see pop up time and time again in Meteor applications (and in any application using MongoDB).

In fact, of all the serious security issues I’ve found while conducting Meteor security assessments, nearly half are directly caused by NoSQL Injection vulnerabilities!

An Idea is Born

Wanting to piggyback off of the success of last year’s NoSQL Injection talk, I began considering writing an ebook or an online course diving into the topic of NoSQL Injection.

During my brainstorming, I asked myself many questions. What is NoSQL Injection? What causes it? How do I prevent it? What does it look like in different stacks and technologies? All of these questions had fairly well-accepted answers within the software development community.

Finally, I landed on the question of “How do I detect NoSQL Injection?” This question struck a chord with me.

When we write applications, we do our best to make them secure. In terms of preventing NoSQL Injection, we try to make sure that every possible piece of user-provided data is thoroughly checked and validated.

But what if we miss something? How do we know we don’t have vulnerable code sitting in production right now? How would we know if we were being hit with NoSQL Injection attacks as we speak? Server-side errors wouldn’t be raised, and the malicious user certainly wouldn’t file a bug report.

It seems like we’re operating blindly here, and that seems like a very dangerous gamble.

Enter Inject Detect

Inject Detect is my answer to this problem.

By analyzing the structure of the queries made against your MongoDB database and comparing them to a set of expected queries, Inject Detect will be able to identify and quickly notify you about suspicious queries that may be the result of a NoSQL injection attack.
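
To make that idea more concrete, here’s a purely hypothetical sketch (written in Elixir, and not Inject Detect’s actual implementation) of what comparing a query’s structure against a set of expected structures might look like:


# Reduce a query down to its structural "shape" by replacing concrete
# values with their types, keeping operators like "$gt" visible.
defmodule QueryShape do
  def shape(%{} = query) do
    query
    |> Enum.map(fn {key, value} -> {key, shape(value)} end)
    |> Enum.into(%{})
  end

  def shape(value) when is_binary(value), do: :string
  def shape(value) when is_number(value), do: :number
  def shape(_), do: :unknown

  def suspicious?(query, expected_shapes) do
    shape(query) not in expected_shapes
  end
end

# We expect user lookups to query _id with a plain string:
expected = [%{"_id" => :string}]

# A NoSQL Injection attack might smuggle in {"$gt": ""}, changing the query's shape:
QueryShape.suspicious?(%{"_id" => %{"$gt" => ""}}, expected) # => true
QueryShape.suspicious?(%{"_id" => "12345"}, expected)        # => false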

Put simply, Inject Detect is a fully automated and easily configurable check as a service.

Inject Detect is still in very early development. My goal is to be transparent about its development. If you want to follow along, sign up for the Inject Detect newsletter and check back here for development updates as they happen!

Does Inject Detect sound like a useful service for you or your team? What features would you expect from it? Let me know - I’d love to hear your feedback!

My Favorite Pattern Revisited

Written by Pete Corey on Feb 27, 2017.

A few weeks ago, I posted an article about my favorite pattern without a name. Surprisingly, this article got quite a bit of feedback, both good and bad.

People were quick to point out that this pattern did indeed have a name. It’s a fluent interface! It’s an interceptor, a la Clojure! It’s a lens! No wait, it’s just plain-old functional composition!

Some people pointed out that, regardless of what it’s called, it’s an awful pattern.

While almost all of these comments were relevant and useful, I found one of the discussions around this article especially interesting from a practical point of view: my friend Charles Watson introduced me to the beauty of Elixir’s with macro!

Criticisms

The original example we started with in my previous article looked something like this:


user = get_user(sms.from)
response = get_response(sms.message)
send_response(user, response)

After constructing an all-encompassing state object and chaining it through our three methods, we were left with this:


%{sms: sms, user: nil, response: nil}
|> get_user
|> get_response
|> send_response

The main criticism of this approach largely boils down to the fact that we’re allowing our functions to know too much about the architecture of our final solution.

By passing our entire state “God object” into each function, we’re obfuscating the actual dependencies of the function. This makes it difficult to determine what the function actually does, and what it needs to operate.


From a practical standpoint, this chaining also presents problems with error handling.

Our original solution assumed that all of our functions succeeded. However, what happens if any of the functions in our chain fail? Can we even tell how they would fail in our example? Would they return an :error tuple? Would they throw an exception?

It’s hard to tell from reading the code, and even worse, both failure modes would lead to less-than-ideal debugging situations.

Thankfully, we can refactor this solution to use the with macro and address both of these criticisms.

Using the With Macro

With Elixir’s with macro, we could have refactored our original example to look like this:


with user     <- get_user(sms.from),
     response <- get_response(sms.message) do
  send_response(user, response)
end

So what’s the big deal? Arguably, this is much less clean than both our previous refactor and our original implementation!

While using the with macro does cost a few extra characters, it’s not without its benefits.

In our original example, I happily glossed over any errors that might have occurred during our SMS sending process.

Imagine if get_response encountered an error. What does it return? Judging by the fact that a happy path call returns a response object, it’s easy to assume that an error would result in an exception. What if we wanted to gracefully handle that error, rather than having our process blow up?

Let’s pretend that we’ve refactored get_user, get_response, and send_response to return either an {:ok, result} tuple if everything went well, or an {:error, error} tuple in the case of an error.
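
As a rough sketch, a refactored get_user might look something like this (the Users.find_by_phone lookup is hypothetical — the original functions aren’t shown here — but it illustrates the shape of the return values):


def get_user(phone_number) do
  # Return an {:ok, user} or {:error, reason} tuple instead of raising.
  case Users.find_by_phone(phone_number) do
    nil  -> {:error, :user_not_found}
    user -> {:ok, user}
  end
end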

We could then refactor our with-powered function pipeline to gracefully handle these errors:


with {:ok, user}     <- get_user(sms.from),
     {:ok, response} <- get_response(sms.message),
     {:ok, sent}     <- send_response(user, response) do
  {:ok, sent}
else
  # Variables bound in the with clauses (like user) aren't visible in the
  # else block, so we look the user up again for the fallback response.
  {:error, :no_response} ->
    {:ok, user} = get_user(sms.from)
    send_response(user, "I'm not sure what to say...")
  error ->
    error
end

Our with assignments happen in order. First, we call get_user and try to pattern match it against {:ok, user}. If that fails, we fall into the else block where we try to pattern match against our known error patterns.

If get_user fails with an {:error, :user_not_found} error, for example, that error will match the error -> error case in our else block and will be returned by our with expression.

Even more interestingly, if get_response fails with a {:error, :no_response} error, we’ll match against that error tuple in our else block and send an error response back to the user.

Using with, we’re able to short circuit our function pipeline as soon as anything unexpected happens, while still being able to gracefully handle errors.

Another added benefit of using with over the pattern I described in my previous post is that it doesn’t artificially inflate the surface area of the functions we’re calling.

Each function is passed only the exact arguments it needs. This reduction of arguments creates a much more understandable, testable, and maintainable solution.

On top of that, by specifying arguments more explicitly, a natural ordering falls out of our function chain.

Final Thoughts

While this is a fairly contrived example, with can be used to gracefully express complicated functional pipelines. I’ll definitely be using the with macro in my future adventures with Elixir.

I’d like to thank my friend Charles Watson for pointing out the with macro to me and showing me just how awesome it can be.

If you’re interested in this type of thing and want to dive deeper into the world of functional composition, I highly recommend you check out this response to my previous article, left by Drew Tipson. He outlines many interesting ideas, each of which is a fantastic diving board into deeper topics.

Happy composing!

Rendering Life on a Canvas with Phoenix Channels

Written by Pete Corey on Feb 20, 2017.

In a recent article, we wrote an Elixir application to play Conway’s Game of Life with Elixir processes. While this was an excellent exercise in “thinking with processes”, the final result wasn’t visually impressive.

Usually when you implement the Game of Life, you expect some kind of graphical interface to view the results of the simulation.

Let’s fix that shortcoming by building out a Phoenix-based front-end for our Game of Life application and rendering our living processes to the screen using an HTML5 canvas.

Creating an Umbrella Project

Our game of life simulation already exists as a server-side Elixir application. We somehow need to painlessly incorporate Phoenix into our application stack so we can build out our web-based front-end.

Thankfully, Elixir umbrella projects let us do exactly this.

Using an umbrella project, we’ll be able to run our life application and a Phoenix server simultaneously in a single Elixir instance. Not only that, but these two applications will be able to seamlessly reference and communicate with each other.

To turn our Life project into an umbrella project, we’ll create a new folder in the root of our project called apps/life/, and move everything from our Life project into that folder.

Next, we’ll recreate the mix.exs file and the config folder and corresponding files needed by our umbrella application in our project root. If everything has gone well, we’ll still be able to run our tests from our project root:


mix test

And we can still run our life application through the project root:


iex -S mix
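
For reference, the mix.exs we recreated at the project root is just a minimal umbrella manifest. It might look something like this — a sketch, assuming a project name of LifeUmbrella; the exact contents will vary with your Mix version:


# mix.exs at the project root. The important piece is apps_path,
# which tells Mix that the projects under apps/ make up this umbrella.
defmodule LifeUmbrella.Mixfile do
  use Mix.Project

  def project do
    [apps_path: "apps",
     build_embedded: Mix.env == :prod,
     start_permanent: Mix.env == :prod,
     deps: []]
  end
end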

Now we can go into our new apps folder and create a new Phoenix application:


cd apps/
mix phoenix.new interface --no-ecto

Notice that we’re forgoing Ecto here. If you remember from last time, our Game of Life simulation lives entirely in memory, so we won’t need a persistence layer.

Once we’ve created our Phoenix application, our umbrella project’s folder structure should look something like this:


.
├── README.md
├── apps
│   ├── interface
│   │   └── ...
│   └── life
│       └── ...
├── config
│   └── config.exs
└── mix.exs

Notice that interface and life are complete, stand-alone Elixir applications. By organizing them within an umbrella project, we can coordinate and run them all within a single Elixir environment.

To make sure that everything is working correctly, let’s start our project with an interactive shell, and fire up the Erlang observer:


iex -S mix phoenix.server
:observer.start

If we navigate to http://localhost:4000/, we should see our Phoenix framework hello world page. Not only that, but the observer shows us that in addition to our Phoenix application, our life application is alive and kicking on the server as well.

Channeling Life

Now that our Phoenix server is set up, we can get to the interesting bits of the project.

If you remember from last time, every time we call Universe.tick, our Game of Life simulation progresses to the next generation. We’ll be using Phoenix Channels to receive “tick” requests from the client and to broadcast cell information to all interested users.

Let’s start the process of wiring up our socket communication by registering a "life" channel in our UserSocket module:


channel "life", Interface.LifeChannel

Within our Interface.LifeChannel module, we’ll define a join handler:


def join("life", _, socket) do
  ...
end

In our join handler, we’ll do several things. First, we’ll “restart” our simulation by clearing out any currently living cells:


Cell.Supervisor.children
|> Enum.map(&Cell.reap/1)

Next, we’ll spawn our initial cells. In this case, let’s spawn a diehard methuselah at the coordinates {20, 20}:


  Pattern.diehard(20, 20)
  |> Enum.map(&Cell.sow/1)

Lastly, we’ll return a list of the positions of all living cells in our system:


  {:ok, %{positions: Cell.Supervisor.positions}, socket}
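
Assembled, our finished join handler looks something like this:


def join("life", _, socket) do
  # Reap any currently living cells so each new join restarts the simulation.
  Cell.Supervisor.children
  |> Enum.map(&Cell.reap/1)

  # Sow our initial pattern: a diehard methuselah at {20, 20}.
  Pattern.diehard(20, 20)
  |> Enum.map(&Cell.sow/1)

  # Reply with the positions of all living cells.
  {:ok, %{positions: Cell.Supervisor.positions}, socket}
end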

Cell.Supervisor.positions is a helper function written specifically for our interface. It returns the positions of all living cells as a list of maps:


def positions do
  children()
  |> Enum.map(&Cell.position/1)
  |> Enum.map(fn {x, y} -> %{x: x, y: y} end)
end

Now that our join handler is finished up, we need to write our “tick” handler:


def handle_in("tick", _, socket) do
  ...
end

In our tick handler, we’ll call Universe.tick to run our simulation through to the next generation:


Universe.tick

Next, we’ll broadcast the positions of all living cells over our socket:


broadcast!(socket, "tick", %{positions: Cell.Supervisor.positions})

And finally, we return from our tick handler with no reply:


{:noreply, socket}
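
Taken together, our tick handler looks something like this:


def handle_in("tick", _, socket) do
  # Advance the simulation to the next generation.
  Universe.tick

  # Broadcast the positions of all living cells to every subscriber.
  broadcast!(socket, "tick", %{positions: Cell.Supervisor.positions})

  {:noreply, socket}
end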

Rendering Life

Now that our "life" channel is wired up to our Game of Life simulator, we can build the front-end pieces of our interface.

The first thing we’ll do is strip down our index.html.eex template and replace the markup in our app.html.eex template with a simple canvas:


<canvas id="canvas"></canvas>

Next, we’ll start working on our app.js file.

We’ll need to set up our canvas context and prepare it for rendering. We want our canvas to fill the entire browser window, so we’ll do some hacking with backingStorePixelRatio and devicePixelRatio to work out a scale factor, and we’ll size the canvas to match window.innerWidth and window.innerHeight. Check out the source for specifics.

Now we’ll need a render function. Our render function will be called with an array of cell position objects. Its job is to clear the screen of the last render and draw a square at every cell’s given {x, y} position:


function render(positions) {
    context.clearRect(0, 0, canvas.width, canvas.height);
    positions.forEach(({x, y}) => {
        context.fillRect(x * scale, y * scale, scale, scale);
    });
}

Now that our canvas is set up and ready to render, we need to open a channel to our Phoenix server.

We’ll start by establishing a socket connection:


let socket = new Socket("/socket");
socket.connect();

Next, we’ll set up our "life" channel:


let channel = socket.channel("life", {});

When we join the channel, we’ll wait for a successful response. This response will contain the initial set of living cells from the server. We’ll pass those cells’ positions into our render function:


channel.join()
  .receive("ok", cells => render(cells.positions));

We’ll also periodically request ticks from the server:


setTimeout(function tick() {
  channel.push("tick");
  setTimeout(tick, 100);
}, 100);

Every tick will result in a "tick" event being broadcast down to our client. We should set up a handler for this event:


channel.on("tick", cells => {
  render(cells.positions);
});

Once again, we simply pass the cells’ positions into our render function.

That’s it! After loading up our Phoenix application, we should see life unfold before our eyes!

Phoenix as an Afterthought

While Conway’s Game of Life is interesting, and “thinking in processes” is an important concept to grasp, there’s a bigger point here that I want to drive home.

In our first article, we implemented our Game of Life simulation as a standalone, vanilla Elixir application. It wasn’t until later that we decided to bring the Phoenix framework into the picture.

Using Phoenix was an afterthought, not a driving force, in the creation of our application.

Should we choose to, we could easily swap out Phoenix for another front-end framework without fear of affecting the core domain of the project.


Throughout my career as a software developer, I’ve worked on many projects. Without fail, the most painful of these projects have been those tightly coupled to their surrounding frameworks or libraries.

Loosely coupled applications, or applications with a clear distinction between what is core application code and what is everything else, are easier to understand, test, and maintain.

Some languages and frameworks lend themselves more easily to this kind of decoupling. Thankfully, Elixir’s process model, the concept of Elixir “applications”, and umbrella projects make this kind of decoupling a walk in the park.

Take this as a reminder to build your framework around your application. Don’t build your application around your framework.