Clone Meteor Collection References

Written by Pete Corey on Sep 19, 2016.

We recently ran into an interesting situation in a Meteor application we were building for a client.

The application had several types of users. We wanted each type of user to have a distinct set of helpers (defined with the Collection Helpers package).

Unfortunately, Meteor’s heavy use of global variables and the inability to define multiple collection references for a single MongoDB collection made this a more complicated task than we hoped.

Buyers and Sellers

To get a better idea of what we’re talking about, imagine we have “buyers” and “sellers”. Both of these are normal users, so they’ll reference the Meteor.users collection:


Buyers = Meteor.users;
Sellers = Meteor.users;

Now let’s define a few helpers:


Buyers.helpers({
  buy() { ... },
  history() { ... }
});

Sellers.helpers({
  sell() { ... },
  history() { ... }
});

Let’s imagine that buy on Buyers carries out a purchase, and history returns a list of all purchases that buyer has made. Similarly, sell on Sellers carries out a sale, and history returns a list of sales that seller has made.

A Buyer’s Seller History

We can call sell on a Seller, as expected:


let seller = Sellers.findOne({ ... });
seller.sell();

Similarly, we can call buy on a Buyer:


let buyer = Buyers.findOne({ ... });
buyer.buy();

We can also call history on both buyer and seller. However, when we call history on our seller, we don’t get a list of their sales. Instead, we get a list of their purchases.

If we dig a little more, we’ll also notice that we can call sell on our buyer, and buy on our seller.

This is definitely not what we want. These two distinct types of users should have totally separate sets of helpers.

Supersets of Helpers

These issues are happening because we’re defining two sets of helpers on the same Meteor.users collection. After the second call to helpers, Meteor.users has a buy helper, a sell helper, and the seller’s version of the history helper (the buyer’s history was overridden).

Even though we’re using different variables to point to our “different” collections, both variables are pointing to the same collection reference.

Our Meteor.users collection now has a superset of helper functions made up of the union of the Buyers and Sellers helpers.
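The collision is easier to see with a stripped-down sketch. This toy makeCollection is a hypothetical stand-in for a real Meteor collection plus the Collection Helpers package; the point is only that both "collections" are the same object, so the second helpers() call clobbers the first:

```javascript
// Toy model: a "collection" with a shared helper store, and a helpers()
// call that merges new helper functions into that store.
function makeCollection() {
  return {
    _helpers: {},
    helpers(fns) { Object.assign(this._helpers, fns); }
  };
}

const users = makeCollection();
const Buyers = users;   // an alias, not a copy
const Sellers = users;  // an alias, not a copy

Buyers.helpers({ buy: () => "purchase", history: () => "purchases" });
Sellers.helpers({ sell: () => "sale", history: () => "sales" });

// Every helper ends up on the one shared object, and the seller's
// history overwrote the buyer's:
console.log(Object.keys(users._helpers)); // [ 'buy', 'history', 'sell' ]
console.log(users._helpers.history());    // "sales"
```

Both aliases see the same superset of helpers, which is exactly the behavior described above.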

Cloned Collection References

After considering a few more architecturally complicated solutions to this problem, we realized that an easy solution was sitting right under our noses.

Instead of having Buyers and Sellers reference the Meteor.users collection directly, we could have Buyers and Sellers reference shallow clones of the Meteor.users collection:


Buyers = _.clone(Meteor.users);
Sellers = _.clone(Meteor.users);

This way, each clone would have its own internal _helpers function, which is used to transform database documents into objects usable by our Meteor application.

Calling Buyers.helpers will define helper functions on the Buyers collection reference, not the Sellers or Meteor.users collection references. Similarly, Sellers.helpers will set up a set of helper functions unique to the Sellers collection reference.

Now calling buyer.history() returns a list of purchases, and seller.history() returns a list of sales. The sell helper doesn’t exist on our buyer user, and buy doesn’t exist on our seller.
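Continuing the toy model from before, shallow clones fix the collision. The sketch assumes, as the article describes for Collection Helpers, that calling helpers() assigns a fresh _helpers to the receiver rather than mutating a shared one; that assumption is exactly why a shallow clone is enough:

```javascript
// Toy model: helpers() replaces this reference's own _helpers slot.
function makeCollection() {
  return {
    _helpers: null,
    helpers(fns) { this._helpers = fns; }
  };
}

const users = makeCollection();
const Buyers = Object.assign({}, users);   // shallow clone, like _.clone
const Sellers = Object.assign({}, users);  // a second, independent clone

Buyers.helpers({ buy: () => "purchase", history: () => "purchases" });
Sellers.helpers({ sell: () => "sale", history: () => "sales" });

console.log(Buyers._helpers.history());   // "purchases"
console.log(Sellers._helpers.history());  // "sales"
console.log("buy" in Sellers._helpers);   // false
```

Each clone now carries only its own helpers, while any nested collection state (the underlying MongoDB connection, for instance) is still shared by reference.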

Perfect!

Final Thoughts

While this solution worked great for our application, it might not be the best solution to your problem.

Cloning collection references is a delicate thing that may not play nicely with all collection functionality, or all collection-centric Meteor packages.

Also note that deep cloning of collection references does not work at all. While we haven’t looked under the hood to find out what’s going on, we assume that it has to do with breaking callback references or something along those lines.

If you’re facing a problem like this, try to work out a solution that operates within the design principles of Meteor before hacking your way around them. But if all else fails, remember that you have options.

Phoenix Todos - Back-end Authentication

This post is written as a set of Literate Commits. The goal of this style is to show you how this program came together from beginning to end.

Each commit in the project is represented by a section of the article. Click each section's header to see the commit on Github, or check out the repository and follow along.

Written by Pete Corey on Sep 14, 2016.

Enter Guardian

Now we’re getting to the meat of our authentication system. We have our User model set up, but we need to associate users with active sessions.

This is where Guardian comes in. Guardian is an authentication framework that leverages JSON Web Tokens (JWT) and plays nicely with Phoenix Channels.
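For context, a JWT is just three base64url-encoded segments (header, payload/claims, signature) joined by dots. This sketch builds a token with hypothetical claims and a fake signature, purely to show that the claims are readable by anyone holding the token, while verifying them requires the server's secret:

```javascript
// Build a demo token. The claims and the "signature" segment here are
// made up for illustration; a real token is signed with a secret key.
const header = { alg: "HS512", typ: "JWT" };
const claims = { sub: "User:1", iss: "PhoenixTodos" };
const b64url = (obj) => Buffer.from(JSON.stringify(obj)).toString("base64url");

const token = [b64url(header), b64url(claims), "signature"].join(".");

// Decoding the middle segment recovers the claims without any secret:
const decoded = JSON.parse(
  Buffer.from(token.split(".")[1], "base64url").toString()
);
console.log(decoded.sub); // "User:1"
```

This is why a JWT should never carry sensitive data in its claims, and why the signature (which Guardian produces with our secret_key below) is what actually authenticates the token.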

To use Guardian, we’ll first add it as a dependency to our application:


{:guardian, "~> 0.12.0"}

Next, we need to do some configuring:


config :guardian, Guardian,
  allowed_algos: ["HS512"], # optional
  verify_module: Guardian.JWT,  # optional
  issuer: "PhoenixTodos",
  ttl: { 30, :days },
  verify_issuer: true, # optional
  secret_key: %{"kty" => "oct", "k" => System.get_env("GUARDIAN_SECRET_KEY")},
  serializer: PhoenixTodos.GuardianSerializer

You’ll notice that I’m pulling my secret_key from my system’s environment variables. It’s a bad idea to keep secrets in version control.

I also specified a serializer module. This is Guardian’s bridge into your system. It acts as a translation layer between Guardian’s JWT and your User model.

Because it’s unique to our system, we’ll need to build the PhoenixTodos.GuardianSerializer ourselves.

Our serializer will need two functions. The first, for_token, translates a User model into a token string. An invalid User should return an :error:


test "generates token for valid user", %{user: user} do
  assert {:ok, _} = GuardianSerializer.for_token(user)
end

test "generates error for invalid user", %{} do
  assert {:error, "Invalid user"} = GuardianSerializer.for_token(%{})
end

Thanks to Elixir’s pattern matching, for_token is a very simple function:


def for_token(%User{id: id}), do: {:ok, "User:#{id}"}
def for_token(_), do: {:error, "Invalid user"}

Similarly, we need to define a from_token function, which takes a token string and returns the corresponding User model:


test "finds user from valid token", %{user: user} do
  {:ok, token} = GuardianSerializer.for_token(user)
  assert {:ok, _} = GuardianSerializer.from_token(token)
end

test "doesn't find user from invalid token", %{} do
  assert {:error, "Invalid user"} = GuardianSerializer.from_token("bad")
end

To implement this, we’ll pull the User id out of the token string, and look it up in the database:


def from_token("User:" <> id), do: {:ok, Repo.get(User, String.to_integer(id))}
def from_token(_), do: {:error, "Invalid user"}

Now that we’ve finished our serializer, we’re in a position to wire up the rest of our authentication system!

config/config.exs

...
   binary_id: false
+
+config :guardian, Guardian,
+  allowed_algos: ["HS512"], # optional
+  verify_module: Guardian.JWT,  # optional
+  issuer: "PhoenixTodos",
+  ttl: { 30, :days },
+  verify_issuer: true, # optional
+  secret_key: %{"kty" => "oct", "k" => System.get_env("GUARDIAN_SECRET_KEY")},
+  serializer: PhoenixTodos.GuardianSerializer

lib/phoenix_todos/guardian_serializer.ex

+defmodule PhoenixTodos.GuardianSerializer do
+  @behaviour Guardian.Serializer
+
+  alias PhoenixTodos.{User, Repo}
+
+  def for_token(%User{id: id}), do: {:ok, "User:#{id}"}
+  def for_token(_), do: {:error, "Invalid user"}
+
+  def from_token("User:" <> id), do: {:ok, Repo.get(User, String.to_integer(id))}
+  def from_token(_), do: {:error, "Invalid user"}
+end

mix.exs

...
     {:mix_test_watch, "~> 0.2", only: :dev},
-    {:comeonin, "~> 2.0"}]
+    {:comeonin, "~> 2.0"},
+    {:guardian, "~> 0.12.0"}]
  end

mix.lock

-%{"comeonin": {:hex, :comeonin, "2.5.2"},
+%{"base64url": {:hex, :base64url, "0.0.1"},
+  "comeonin": {:hex, :comeonin, "2.5.2"},
   "connection": {:hex, :connection, "1.0.4"},
   "gettext": {:hex, :gettext, "0.11.0"},
+  "guardian": {:hex, :guardian, "0.12.0"},
+  "jose": {:hex, :jose, "1.8.0"},
   "mime": {:hex, :mime, "1.0.1"},
   "postgrex": {:hex, :postgrex, "0.11.2"},
-  "ranch": {:hex, :ranch, "1.2.1"}}
+  "ranch": {:hex, :ranch, "1.2.1"},
+  "uuid": {:hex, :uuid, "1.1.4"}}

test/lib/guardian_serializer_test.exs

+defmodule PhoenixTodos.GuardianSerializerTest do
+  use ExUnit.Case, async: true
+
+  alias PhoenixTodos.{User, Repo, GuardianSerializer}
+
+  setup_all do
+    user = User.changeset(%User{}, %{
+      email: "email@example.com",
+      password: "password"
+    })
+    |> Repo.insert!
+
+    {:ok, user: user}
+  end
+
+  test "generates token for valid user", %{user: user} do
+    assert {:ok, _} = GuardianSerializer.for_token(user)
+  end
+
+  test "generates error for invalid user", %{} do
+    assert {:error, "Invalid user"} = GuardianSerializer.for_token(%{})
+  end
+
+  test "finds user from valid token", %{user: user} do
+    {:ok, token} = GuardianSerializer.for_token(user)
+    assert {:ok, _} = GuardianSerializer.from_token(token)
+  end
+
+  test "doesn't find user from invalid token", %{} do
+    assert {:error, "Invalid user"} = GuardianSerializer.from_token("bad")
+  end
+end

Sign-Up Route and Controller

The first step to implementing authentication in our application is creating a back-end sign-up route that creates a new user in our system.

To do this, we’ll create an "/api/users" route that sends POST requests to the UserController.create function:


post "/users", UserController, :create

We expect the user’s email and password to be sent as parameters to this endpoint. UserController.create takes those params, passes them into our User.changeset, and then attempts to insert the resulting User into the database:


User.changeset(%User{}, params)
|> Repo.insert

If the insert fails, we return the changeset errors to the client:


conn
|> put_status(:unprocessable_entity)
|> render(PhoenixTodos.ApiView, "error.json", error: changeset)

Otherwise, we’ll use Guardian to sign the new user’s JWT and return the jwt and user objects to the client:


{:ok, jwt, _full_claims} = Guardian.encode_and_sign(user, :token)
conn
|> put_status(:created)
|> render(PhoenixTodos.ApiView, "data.json", data: %{jwt: jwt, user: user})

Now all a user needs to do to sign up with our Todos application is send a POST request to /api/users with their email and password. In turn, they’ll receive their JWT which they can send along with any subsequent requests to verify their identity.

test/controllers/user_controller_test.exs

+defmodule PhoenixTodos.UserControllerTest do
+  use PhoenixTodos.ConnCase
+
+  test "creates a user", %{conn: conn} do
+    conn = post conn, "/api/users", user: %{
+      email: "email@example.com",
+      password: "password"
+    }
+    %{
+      "jwt" => _,
+      "user" => %{
+        "id" => _,
+        "email" => "email@example.com"
+      }
+    } = json_response(conn, 201)
+  end
+
+  test "fails user validation", %{conn: conn} do
+    conn = post conn, "/api/users", user: %{
+      email: "email@example.com",
+      password: "pass"
+    }
+    %{
+      "errors" => [
+        %{
+          "password" => "should be at least 5 character(s)"
+        }
+      ]
+    } = json_response(conn, 422)
+  end
+end

web/controllers/user_controller.ex

+defmodule PhoenixTodos.UserController do
+  use PhoenixTodos.Web, :controller
+
+  alias PhoenixTodos.{User, Repo}
+
+  def create(conn, %{"user" => params}) do
+    User.changeset(%User{}, params)
+    |> Repo.insert
+    |> handle_insert(conn)
+  end
+
+  defp handle_insert({:ok, user}, conn) do
+    {:ok, jwt, _full_claims} = Guardian.encode_and_sign(user, :token)
+    conn
+    |> put_status(:created)
+    |> render(PhoenixTodos.ApiView, "data.json", data: %{jwt: jwt, user: user})
+  end
+  defp handle_insert({:error, changeset}, conn) do
+    conn
+    |> put_status(:unprocessable_entity)
+    |> render(PhoenixTodos.ApiView, "error.json", error: changeset)
+  end
+end

web/models/user.ex

...
   use PhoenixTodos.Web, :model

+  @derive {Poison.Encoder, only: [:id, :email]}

web/router.ex

...
+  scope "/api", PhoenixTodos do
+    pipe_through :api
+
+    post "/users", UserController, :create
+  end
+
   scope "/", PhoenixTodos do
...
-  # Other scopes may use custom stacks.
-  # scope "/api", PhoenixTodos do
-  #   pipe_through :api
-  # end
 end

web/views/api_view.ex

+defmodule PhoenixTodos.ApiView do
+  use PhoenixTodos.Web, :view
+
+  def render("data.json", %{data: data}) do
+    data
+  end
+
+  def render("error.json", %{error: changeset = %Ecto.Changeset{}}) do
+    errors = Enum.map(changeset.errors, fn {field, detail} ->
+      %{} |> Map.put(field, render_detail(detail))
+    end)
+
+    %{ errors: errors }
+  end
+
+  def render("error.json", %{error: error}), do: %{error: error}
+
+  def render("error.json", %{}), do: %{}
+
+  defp render_detail({message, values}) do
+    Enum.reduce(values, message, fn {k, v}, acc ->
+      String.replace(acc, "%{#{k}}", to_string(v))
+    end)
+  end
+
+  defp render_detail(message) do
+    message
+  end
+
+end

Sign-In Route and Controller

Now that users have the ability to join our application, how will they sign into their accounts?

We’ll start implementing sign-in functionality by adding a new route to our Phoenix application:


post "/sessions", SessionController, :create

When a user sends a POST request to /sessions, we’ll route them to the create function in our SessionController module. This function will attempt to sign the user in with the credentials they provide.

At a high level, the create function will be fairly straightforward. We want to look up the user by the email they provided, then check whether the password they supplied matches what we have on file:


def create(conn, %{"email" => email, "password" => password}) do
  user = get_user(email)
  user
  |> check_password(password)
  |> handle_check_password(conn, user)
end

If get_user returns nil, we couldn’t find the user based on the email address they provided. In that case, we’ll return false from check_password:


defp check_password(nil, _password), do: false

Otherwise, we’ll use Comeonin to compare the hashed password we have saved in encrypted_password with the hash of the password the user provided:


defp check_password(user, password) do
  Comeonin.Bcrypt.checkpw(password, user.encrypted_password)
end

If all goes well, we’ll return a jwt and the user object for the now-authenticated user:


render(PhoenixTodos.ApiView, "data.json", data: %{jwt: jwt, user: user})

We can test this sign-in route/controller combination just like we’ve tested our sign-up functionality.

test/controllers/session_controller_test.exs

+defmodule PhoenixTodos.SessionControllerTest do
+  use PhoenixTodos.ConnCase
+
+  alias PhoenixTodos.{User, Repo}
+
+  test "creates a session", %{conn: conn} do
+    %User{}
+    |> User.changeset(%{
+      email: "email@example.com",
+      password: "password"
+    })
+    |> Repo.insert!
+
+    conn = post conn, "/api/sessions", email: "email@example.com", password: "password"
+    %{
+      "jwt" => _jwt,
+      "user" => %{
+        "id" => _id,
+        "email" => "email@example.com"
+      }
+    } = json_response(conn, 201)
+  end
+
+  test "fails authorization", %{conn: conn} do
+    conn = post conn, "/api/sessions", email: "email@example.com", password: "wrong"
+    %{
+      "error" => "Unable to authenticate"
+    } = json_response(conn, 422)
+  end
+end

web/controllers/session_controller.ex

+defmodule PhoenixTodos.SessionController do
+  use PhoenixTodos.Web, :controller
+
+  alias PhoenixTodos.{User, Repo}
+
+  def create(conn, %{"email" => email, "password" => password}) do
+    user = get_user(email)
+    user
+    |> check_password(password)
+    |> handle_check_password(conn, user)
+  end
+
+  defp get_user(email) do
+    Repo.get_by(User, email: String.downcase(email))
+  end
+
+  defp check_password(nil, _password), do: false
+  defp check_password(user, password) do
+    Comeonin.Bcrypt.checkpw(password, user.encrypted_password)
+  end
+
+  defp handle_check_password(true, conn, user) do
+    {:ok, jwt, _full_claims} = Guardian.encode_and_sign(user, :token)
+    conn
+    |> put_status(:created)
+    |> render(PhoenixTodos.ApiView, "data.json", data: %{jwt: jwt, user: user})
+  end
+  defp handle_check_password(false, conn, _user) do
+    conn
+    |> put_status(:unprocessable_entity)
+    |> render(PhoenixTodos.ApiView, "error.json", error: "Unable to authenticate")
+  end
+
+end

web/router.ex

...
     plug :accepts, ["json"]
+    plug Guardian.Plug.VerifyHeader
+    plug Guardian.Plug.LoadResource
   end
...
     post "/users", UserController, :create
+
+    post "/sessions", SessionController, :create
   end

Sign-Out Route and Controller

The final piece of our authentication trifecta is the ability for users to sign out once they’ve successfully joined or signed into the application.

To implement sign-out functionality, we’ll want to create a route that destroys a user’s session when it’s called by an authenticated user:


delete "/sessions", SessionController, :delete

This new route points to SessionController.delete. This function doesn’t exist yet, so let’s create it:


def delete(conn, _) do
  conn
  |> revoke_claims
  |> render(PhoenixTodos.ApiView, "data.json", data: %{})
end

revoke_claims will be a private function that simply looks up the current user’s token and claims, and then revokes them:


{:ok, claims} = Guardian.Plug.claims(conn)
Guardian.Plug.current_token(conn)
|> Guardian.revoke!(claims)

In implementing this feature, we cleaned up our SessionControllerTest module a bit. We added a create_user function, which creates a user with a given email address and password, and a create_session function that logs that user in.

Using those functions we can create a user’s session, and then construct a DELETE request with the user’s JWT (session_response["jwt"]) in the "authorization" header. If this request is successful, we’ve successfully deleted the user’s session.

test/controllers/session_controller_test.exs

...
-  test "creates a session", %{conn: conn} do
+  defp create_user(email, password) do
     %User{}
     |> User.changeset(%{
-      email: "email@example.com",
-      password: "password"
-    })
+      email: email,
+      password: password
+    })
     |> Repo.insert!
+  end

-    conn = post conn, "/api/sessions", email: "email@example.com", password: "password"
-    %{
-      "jwt" => _jwt,
-      "user" => %{
-        "id" => _id,
-        "email" => "email@example.com"
-      }
-    } = json_response(conn, 201)
+  defp create_session(conn, email, password) do
+    post(conn, "/api/sessions", email: email, password: password)
+    |> json_response(201)
+  end
+
+  test "creates a session", %{conn: conn} do
+    create_user("email@example.com", "password")
+
+    response = create_session(conn, "email@example.com", "password")
+
+    assert response["jwt"]
+    assert response["user"]["id"]
+    assert response["user"]["email"]
   end
...
   end
+
+  test "deletes a session", %{conn: conn} do
+    create_user("email@example.com", "password")
+    session_response = create_session(conn, "email@example.com", "password")
+
+    conn
+    |> put_req_header("authorization", session_response["jwt"])
+    |> delete("/api/sessions")
+    |> json_response(200)
+  end
+
 end

web/controllers/session_controller.ex

...
+  def delete(conn, _) do
+    conn
+    |> revoke_claims
+    |> render(PhoenixTodos.ApiView, "data.json", data: %{})
+  end
+
+  defp revoke_claims(conn) do
+    {:ok, claims} = Guardian.Plug.claims(conn)
+    Guardian.Plug.current_token(conn)
+    |> Guardian.revoke!(claims)
+    conn
+  end
+
   def create(conn, %{"email" => email, "password" => password}) do

web/router.ex

...
     post "/sessions", SessionController, :create
+
+    delete "/sessions", SessionController, :delete
   end

Final Thoughts

As a Meteor developer, it seems like we’re spending a huge amount of time implementing authentication in our Phoenix Todos application. This functionality comes out of the box with Meteor!

The truth is that authentication is a massive, nuanced problem. Meteor’s Accounts system is a shining example of what Meteor does right. It abstracts away an incredibly tedious, but extremely important aspect of building web applications into an easy to use package.

On the other hand, Phoenix’s approach of forcing us to implement our own authentication system has its own set of benefits. By implementing authentication ourselves, we always know exactly what’s going on in every step of the process. There is no magic here. Complete control can be liberating.

Check back next week when we turn our attention back to the front-end, and wire up our sign-up and sign-in React templates!

Rewriting History

Written by Pete Corey on Sep 12, 2016.

If you’ve been following our blog, you’ll notice that we’ve been writing lots of what we’re calling “literate commit” posts.

The goal of a literate commit style post is to break down each Git commit into a readable, clear explanation of the code change. The idea is that this chronological narrative helps tell the story of how a piece of software came into being.

Combined with tools like git blame and git log you can even generate detailed histories for small, focused sections of the codebase.
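As an example, here’s a disposable repository (the file names and contents are purely illustrative) showing the kind of focused history those tools can produce:

```shell
# Build a tiny throwaway repo with two commits touching foo.js.
set -e
cd "$(mktemp -d)"
git init -q repo && cd repo
git config user.email demo@example.com
git config user.name demo

printf 'one\ntwo\n' > foo.js
git add foo.js && git commit -qm "Added foo.js"
printf 'one\ntwo changed\n' > foo.js
git commit -qam "Updated foo.js"

git log --oneline -- foo.js   # commits that touched just foo.js
git blame foo.js              # the last commit to touch each line
git log -L 2,2:foo.js         # the full history of line 2 alone
```

With well-crafted commits, each of these views reads like a short changelog for exactly the slice of code you care about.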

But sometimes generating repositories with this level of historical narrative requires something that most Git users warn against: rewriting history.

Why Change the Past

It’s usually considered bad practice to modify a project’s revision history, and in most cases this is true. However, there are certain situations where changing history is the right thing to do.

In our case, the main artifact of each literate commit project is not the software itself; it’s the revision history. The project serves as a lesson or tutorial.

In this situation, it might make sense to revise a commit message for clarity. Maybe we want to break a single, large commit into two separate commits so that each describes a smaller piece of history. Or, maybe while we’re developing we discover a small change that should have been included in a previous commit. Rather than making an “Oops, I should have done this earlier” commit, we can just change our revision history and include the change in the original commit.

It’s important to note that in these situations, it’s assumed that only one person will be working with the repository. If multiple people are contributing, editing revision history is not advised.

In The Beginning…

Imagine that we have some boilerplate that we use as a base for all of our projects. Being good developers, we keep track of its revision history using Git, and possibly host it on an external service like GitHub.

Starting a new project with this base might look something like this:


mkdir my_project
cd my_project
git clone https://github.com/pcorey/base .
git remote remove origin
git remote add origin https://github.com/pcorey/my_project

We’ve cloned base into the my_project directory, removed its origin pointer to the base repository, and replaced it with a pointer to a new my_project repository.

Great, but we’re still stuck with whatever commits existed in the base project before we cloned it into my_project. Those commits most likely don’t contribute to the narrative of this specific project and should be changed.

One solution to this problem is to clobber the Git history by removing the .git folder, but this is the nuclear option. There are easier ways of accomplishing our goal.

The --root flag of the git rebase command lets us revise every commit in our project, including the root commit. This means that we can interactively rebase and reword the root commits created in the base project:


git rebase -i --root master

reword f784c6a First commit
# Rebase f784c6a onto 5d85358 (1 command(s))

Using reword tells Git that we’d like to use the commit, but we want to modify its commit message. In our case, we might want to explain the project we’re starting and discuss the base set of files we pulled into the repository.

Splicing in a Commit

Next, let’s imagine that our project has three commits. The first commit sets up our project’s boilerplate. The second commit adds a file called foo.js, and the third commit updates that file:


git log --oneline

1d5f372 Updated foo.js
873641e Added foo.js
b3065c9 Project setup

What if we forgot to create a file called bar.js after we created foo.js? For maximum clarity, we want this file to be created in a new commit following 873641e. How would we do it?

Once again, interactive rebase comes to the rescue. While doing a root rebase, we can mark 873641e as needing editing:


git rebase -i --root master

pick b3065c9 Project setup
edit 873641e Added foo.js
pick 1d5f372 Updated foo.js

The rebase will stop after re-applying 873641e (“Added foo.js”), leaving our HEAD pointing at the newly applied version of that commit. Our git log looks like this:


git log --oneline

41817a4 Added foo.js
81df941 Project setup

We can now add bar.js and commit the change:


touch bar.js
git add bar.js
git commit -am "Added bar.js"

Reviewing our log, we’ll see our new commit following the “Added foo.js” commit (note that rebasing has given our earlier commits new hashes):


git log --oneline

58f31fd Added bar.js
41817a4 Added foo.js
81df941 Project setup

Everything looks good. Now we can continue our rebase and check out our final revision history:


git rebase --continue
git log --oneline

b8b7b18 Updated foo.js
58f31fd Added bar.js
41817a4 Added foo.js
81df941 Project setup

We’ve successfully injected a commit into our revision history!
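The whole splice can also be scripted end to end. This sketch drives the same interactive rebase non-interactively by pointing GIT_SEQUENCE_EDITOR at a sed command (GNU sed assumed) in a throwaway repository:

```shell
# Recreate the three-commit history from above in a temp repo.
set -e
cd "$(mktemp -d)"
git init -q repo && cd repo
git config user.email demo@example.com
git config user.name demo

echo boilerplate > README.md && git add . && git commit -qm "Project setup"
touch foo.js && git add . && git commit -qm "Added foo.js"
echo update > foo.js && git commit -qam "Updated foo.js"

# Mark the second todo entry ("Added foo.js") as edit instead of pick,
# then splice in bar.js while the rebase is stopped.
GIT_SEQUENCE_EDITOR="sed -i '2s/^pick/edit/'" git rebase -i --root
touch bar.js && git add bar.js && git commit -qm "Added bar.js"
git rebase --continue

git log --oneline   # Updated foo.js / Added bar.js / Added foo.js / Project setup
```

Scripting the todo list this way is handy when you find yourself performing the same history surgery repeatedly.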

Revising a Commit

What if we notice a typo in our project that was introduced by our boilerplate? We don’t want to randomly include a typo fix in our Git history; that will distract from the overall narrative. How would we fix this situation?

Once again, we’ll harness the power of our interactive root rebase!


git rebase -i --root master

edit b3065c9 Project setup
pick 873641e Added foo.js
pick 1d5f372 Updated foo.js

After starting the rebase, our HEAD will point to the first commit, b3065c9. From there, we can fix our typo, and then amend the commit:


vim README.md
git add README.md
git commit --amend

Our HEAD is still pointing to the first commit, but now our fixed typo is included in the set of changes!

We can continue our rebase and go about our business, pretending that the typo never existed.


git rebase --continue

With Great Power

Remember, young Time Lord: with great power comes great responsibility.

Tampering with revision history can lead to serious losses for your project if done incorrectly. It’s recommended that you practice any changes you plan to make in another branch before attempting them in master. Another fallback is to reset hard to origin/master if all goes wrong:


git reset --hard origin/master

While changing history can be dangerous, it’s a very useful skill to have. When you want your history to be the main artifact of your work, it pays to ensure it’s as polished and perfected as possible.