
Using Apollo Client with Elixir's Absinthe

Written by Pete Corey on Nov 21, 2016.

There’s no doubt that GraphQL has been making waves in the web development community since it was announced, and for good reason! GraphQL helps decouple an application’s front-end from its back-end in amazingly flexible ways.

Unfortunately, React and Redux, the current go-to front-end tools for handling client-side state and interacting with a GraphQL server, are cumbersome to use at best. Thankfully, the Apollo client, a new project from the Meteor Development Group, aims to offer a more straightforward, batteries-included option for interfacing with GraphQL and managing your client-side state.

Let’s dig into how to set up a basic GraphQL server in Elixir using Absinthe, and how to interact with that server using the Apollo client.

Elixir’s Absinthe

Absinthe is a GraphQL implementation for Elixir. It lets you set up a GraphQL endpoint on your Elixir/Phoenix server.

Setting up Absinthe is a straightforward process. To start, we’ll add dependencies on the absinthe and absinthe_plug Mix packages and fire up their corresponding applications:

defp deps do
  [
    ...
    {:absinthe, "~> 1.2.0"},
    {:absinthe_plug, "~> 1.2.0"}
  ]
end

applications: [..., :absinthe, :absinthe_plug]

Just like in the Absinthe tutorial, our next step is to set up our GraphQL types. We’ll create simple schemas for an author and a post:

object :author do
  field :id, :id
  field :first_name, :string
  field :last_name, :string
  field :posts, list_of(:post) do
    resolve fn author, _, _ ->
      {:ok, HelloAbsinthe.Schema.find_posts(}
    end
  end
end

object :post do
  field :id, :id
  field :title, :string
  field :author, :author do
    resolve fn post, _, _ ->
      {:ok, HelloAbsinthe.Schema.find_author(}
    end
  end
  field :votes, :integer
end

Next, we’ll define the types of queries we support. To keep things simple, we’ll add two basic queries. The first, posts, will return all posts in the system, and the second, author, will return an author for a given id:

query do
  field :posts, list_of(:post) do
    resolve &get_all_posts/2
  end

  field :author, type: :author do
    arg :id, non_null(:id)
    resolve &get_author/2
  end
end

To cut down on the number of moving parts in this example, we’ll write our two resolver functions to return a set of hard-coded posts and authors, rather than pulling them from some external data source:

@posts [
  %{id: 1, title: "GraphQL Rocks",           votes: 3, author: %{id: 1}},
  %{id: 2, title: "Introduction to GraphQL", votes: 2, author: %{id: 2}},
  %{id: 3, title: "Advanced GraphQL",        votes: 1, author: %{id: 1}}
]

@authors [
  %{id: 1, first_name: "Sashko", last_name: "Stubailo"},
  %{id: 2, first_name: "Tom",    last_name: "Coleman"}
]

def get_all_posts(_args, _info) do
  {:ok, @posts}
end

def get_author(%{id: id}, _info) do
  {:ok, find_author(id)}
end

def find_author(id) do
  Enum.find(@authors, fn author -> == id end)
end

def find_posts(author_id) do
  Enum.filter(@posts, fn post -> == author_id end)
end

Now all we need to do is tell Absinthe that we want our GraphQL endpoint to listen on the "/graphql" route and that we want it to use our newly defined schemas and queries:

forward "/graphql", Absinthe.Plug, schema: HelloAbsinthe.Schema

And that’s it! Now we can send our server GraphQL queries and it will process them and send back the result.
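For example, a client could exercise the endpoint with a plain HTTP POST. Here’s a sketch of what that request looks like; the host and port are assumptions (a default Phoenix development server), and the fetch call is left commented out since it needs a running server:

```javascript
// Build the GraphQL request body for our posts query.
const query = `
  {
    posts {
      id
      title
      votes
    }
  }
`;

const body = JSON.stringify({ query });

// Assuming the Phoenix server is running locally on port 4000:
// fetch("http://localhost:4000/graphql", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body
// })
//   .then(res => res.json())
//   .then(result => console.log(;
```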

Let’s move on to setting up Apollo on the front-end.

Apollo Client

If you haven’t noticed already, we’re basing this example off of the query example on the Apollo Developer page.

Before we continue with their example, we need to set up React in our application. Since we started with a fresh Phoenix project (generated with mix, we’ll need to install some NPM dependencies to work with React, Apollo, etc.:

npm install --save react react-dom apollo-client react-apollo \
                         graphql-tag babel-preset-react

Next, we’ll need to tell Brunch how we want our ES6 transpiled by tweaking our Babel options in brunch-config.js:

plugins: {
  babel: {
    presets: ["es2015", "react"]
  }
}

The last thing we need to do is replace the HTML our Phoenix application generates (in app.html.eex) with an empty <div> to hold our React application:

 <div id="app"></div>

Now we can copy over the <PostList> component from the Apollo example. We’ll throw it in a file called PostList.jsx.

Lastly, we’ll create an instance of ApolloClient and wire up the <PostList> component to our container <div> in our app.js:

const client = new ApolloClient();

  <ApolloProvider client={client}>
    <PostList />
  </ApolloProvider>,
  document.getElementById("app")
);

And that’s it! When our application reloads, we’ll see all of the hard-coded author and post data from our server loaded up and rendered on the client.

How it Works

This is obviously a drastically over-simplified example of what GraphQL can do, but it’s a good jumping off point. Let’s see how all of it ties together, starting on the client.

The <PostList> component we pulled from the Apollo example is a simple component that expects to be passed a loading boolean and a list of posts inside of a data property.

If loading is true, we’ll show a loading message. Otherwise, we’ll render the list of posts:

function PostList({ data: { loading, posts } }) {
  if (loading) {
    return <div>Loading</div>;
  } else {
    // Render each post (full markup elided in the original).
    return (<ul>{ => <li key={}>{post.title}</li>)}</ul>);
  }
}

Where do loading and posts come from? The loading field is controlled by the Apollo client. When we’re waiting on the response for a GraphQL query, loading will be true. The posts field actually comes directly from the response to our GraphQL query.

When we export PostList, we actually wrap it in a GraphQL query that describes the data this component needs to render:

export default graphql(gql`
  query allPosts {
    posts {
      id
      title
      votes
      author {
        id
        firstName
        lastName
      }
    }
  }
`)(PostList);

The shape of a GraphQL query’s response maps directly to the shape of the query itself. Notice how we’re asking for a set of posts. We want each post to be returned with an id, title, votes, and an author object, complete with id, firstName, and lastName.

Our response will look exactly like this:

{
  posts: [
    {
      id: 1,
      title: "GraphQL Rocks",
      votes: 3,
      author: {
        id: 1,
        firstName: "Sashko",
        lastName: "Stubailo"
      }
    },
    ...
  ]
}

This is the power of GraphQL. It inverts the normal query/result relationship between the client and the server. The client tells the server exactly what it needs, and that exact data is returned from the query. No more, no less.

Apollo takes that client-first mentality even further. With Apollo, each component tells the server exactly what it needs and manages its data lifecycle entirely on its own, independent of other components in the application.

Final Thoughts

I’m really excited about the combination of an Elixir/Absinthe back-end driving an Apollo-powered client front-end.

I’ve only just started playing with this combination, but I hope to start building out more complex and realistic applications to see if it lives up to my hopes and expectations.

Be sure to check out the entire project on GitHub. Have you used Absinthe or any part of the Apollo stack? If so, shoot me an email and let me know your opinions!

Phoenix Todos - Public and Private Lists

This post is written as a set of Literate Commits. The goal of this style is to show you how this program came together from beginning to end.

Each commit in the project is represented by a section of the article. Click each section's header to see the commit on Github, or check out the repository and follow along.

Written by Pete Corey on Nov 16, 2016.

Make Private

Now that our channel connection can be authenticated, we can give users the ability to make their lists private.

To start, we’ll add a "make_private" channel event handler. This handler will call List.make_private and set the list’s user_id equal to the socket’s currently authenticated user:

list = get_user_id(socket)
|> List.make_private(list_id)
|> Repo.preload(:todos)

Once we’ve done that, we’ll broadcast a "update_list" event to all connected clients.

However, if a list becomes private, we’ll want to remove it from other users’ clients, instead of just showing the change. To do this, we’ll have to intercept all outbound "update_list" events:

intercept ["update_list"]

def handle_out("update_list", list, socket) do
  ...
end

If a user has permission to see the outgoing list, we’ll push another "update_list" event. Otherwise, we’ll push a "remove_list" event:

case List.canView?(get_user_id(socket), list) do
  true ->
    push(socket, "update_list", list)
  false ->
    push(socket, "remove_list", list)
end
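In JavaScript terms, the visibility rule boils down to a one-liner. This is a hypothetical equivalent of List.canView?/2, not code from the project: public lists (a user_id of nil/null) are visible to everyone, and private lists only to their owner:

```javascript
// Hypothetical JS version of List.canView?/2.
// A list with no user_id is public; otherwise only the
// owning user may see it.
function canView(userId, list) {
  return list.user_id == null || list.user_id === userId;
}
```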

After wiring up all of the necessary Redux plumbing to call our "make_private" event, the functionality is complete.


 ...
+intercept ["update_list"]

+defp get_user_id(socket) do
 ...
+def handle_in("make_private", %{
+  "list_id" => list_id,
+}, socket) do
+  list = get_user_id(socket)
+  |> List.make_private(list_id)
+  |> Repo.preload(:todos)
+
+  broadcast! socket, "update_list", list
+
+  {:noreply, socket}
+end

 def handle_in("delete_todo", %{
 ...
+def handle_out("update_list", list, socket) do
+  case List.canView?(get_user_id(socket), list) do
+    true ->
+      push(socket, "update_list", list)
+    false ->
+      push(socket, "remove_list", list)
+  end
+  {:noreply, socket}
+end
+end


 ...
 @required_fields ~w(name incomplete_count)
-@optional_fields ~w()
+@optional_fields ~w(user_id)
 ...
+def make_private(user_id, id) do
+  Repo.get(PhoenixTodos.List, id)
+  |> changeset(%{
+    user_id: user_id
+  })
+  |> Repo.update!
+end

 def delete_todo(todo_id) do
 ...
+def canView?(_, %{user_id: nil}), do: true
+def canView?(user_id, %{user_id: user_id}), do: true
+def canView?(_, _), do: false
+end


 ...
+export const MAKE_PRIVATE_REQUEST = "MAKE_PRIVATE_REQUEST";
+export const MAKE_PRIVATE_SUCCESS = "MAKE_PRIVATE_SUCCESS";
+export const MAKE_PRIVATE_FAILURE = "MAKE_PRIVATE_FAILURE";
+
 export const DELETE_TODO_REQUEST = "DELETE_TODO_REQUEST";
 ...
 channel.on("update_list", list => {
+  console.log("update_list", list)
   dispatch(updateList(list));
 ...
+export function makePrivateRequest() {
+  return { type: MAKE_PRIVATE_REQUEST };
+}
+
+export function makePrivateSuccess() {
+  return { type: MAKE_PRIVATE_SUCCESS };
+}
+
+export function makePrivateFailure() {
+  return { type: MAKE_PRIVATE_FAILURE };
+}
+
+export function makePrivate(list_id) {
+  return (dispatch, getState) => {
+    const { channel } = getState();
+    dispatch(makePrivateRequest());
+    channel.push("make_private", { list_id })
+      .receive("ok", (list) => {
+        dispatch(makePrivateSuccess());
+      })
+      .receive("error", () => dispatch(makePrivateFailure()))
+      .receive("timeout", () => dispatch(makePrivateFailure()));
+  }
+}
+
 export function deleteTodoRequest() {


 ...
 } else {
-{ listId: list._id }, alert);
+  this.props.makePrivate(;
 }


 ...
   deleteList,
+  makePrivate,
   deleteTodo
 ...
   updateName={this.props.updateName}
-  deleteList={this.props.deleteList}/>
+  deleteList={this.props.deleteList}
+  makePrivate={this.props.makePrivate}
+/>
 <div className="content-scrollable list-items">
 ...
 },
+makePrivate: (list_id) => {
+  return dispatch(makePrivate(list_id));
+},
 deleteTodo: (todo_id) => {

Make Public

Just as we let users make their lists private, we need to let them make their private lists public again.

We’ll do this by adding a "make_public" channel event that sets the user_id field on the specified list to nil and broadcasts an "update_list" event.

list = List.make_public(list_id)
|> Repo.preload(:todos)

broadcast! socket, "update_list", list

Unfortunately, this introduces a situation where lists are added back into the UI through an "update_list" event rather than an "add_list" event.

To handle this, we need to check whether the "UPDATE_LIST" Redux reducer actually found the list it was trying to update. If it didn’t, we’ll push the updated list onto the end of our array of lists, adding it to the UI:

if (!found) {
  lists.push(action.list);
}

And with that, users can make their private lists public.


 ...
+def handle_in("make_public", %{
+  "list_id" => list_id,
+}, socket) do
+  list = List.make_public(list_id)
+  |> Repo.preload(:todos)
+
+  broadcast! socket, "update_list", list
+
+  {:noreply, socket}
+end

 def handle_in("delete_todo", %{


 ...
+def make_public(id) do
+  Repo.get(PhoenixTodos.List, id)
+  |> changeset(%{
+    user_id: nil
+  })
+  |> Repo.update!
+end

 def delete_todo(todo_id) do


 ...
+export const MAKE_PUBLIC_REQUEST = "MAKE_PUBLIC_REQUEST";
+export const MAKE_PUBLIC_SUCCESS = "MAKE_PUBLIC_SUCCESS";
+export const MAKE_PUBLIC_FAILURE = "MAKE_PUBLIC_FAILURE";
+
 export const DELETE_TODO_REQUEST = "DELETE_TODO_REQUEST";
 ...
 channel.on("update_list", list => {
-  console.log("update_list", list)
   dispatch(updateList(list));
 ...
+export function makePublicRequest() {
+  return { type: MAKE_PUBLIC_REQUEST };
+}
+
+export function makePublicSuccess() {
+  return { type: MAKE_PUBLIC_SUCCESS };
+}
+
+export function makePublicFailure() {
+  return { type: MAKE_PUBLIC_FAILURE };
+}
+
+export function makePublic(list_id) {
+  return (dispatch, getState) => {
+    const { channel } = getState();
+    dispatch(makePublicRequest());
+    channel.push("make_public", { list_id })
+      .receive("ok", (list) => {
+        dispatch(makePublicSuccess());
+      })
+      .receive("error", () => dispatch(makePublicFailure()))
+      .receive("timeout", () => dispatch(makePublicFailure()));
+  }
+}
+
 export function deleteTodoRequest() {


 ...
 if (list.user_id) {
-{ listId: list._id }, alert);
+  this.props.makePublic(;
 } else {


 ...
   makePrivate,
+  makePublic,
   deleteTodo
 ...
   makePrivate={this.props.makePrivate}
+  makePublic={this.props.makePublic}
 />
 ...
 },
+makePublic: (list_id) => {
+  return dispatch(makePublic(list_id));
+},
 deleteTodo: (todo_id) => {


 ...
 case UPDATE_LIST:
+  let found = false;
   let lists = => {
-    return === ? action.list : list;
+    if ( === {
+      found = true;
+      return action.list;
+    }
+    else {
+      return list;
+    }
   });
+  if (!found) {
+    lists.push(action.list);
+  }
   return Object.assign({}, state, { lists });

Final Thoughts

At this point, we’ve roughly recreated all of the features of the Meteor Todos application in Phoenix and Elixir.

I’ll be the first to admit that there are many problems with the project as it currently stands. My solution to channel authentication isn’t the best, many channel events aren’t making proper authorization checks, the front-end Redux architecture is awful, etc… That being said, this was a fantastic learning experience.

Building out Meteor-esque functionality in Phoenix is definitely more work than using Meteor, but I still believe that the benefits of using an Elixir backend outweigh the drawbacks. With a little more effort, I think I’ll be able to reduce the upfront burden quite a bit through packages and libraries.

Expect many upcoming articles discussing what I’ve learned from this conversion and how to approach building Elixir and Phoenix applications from the perspective of a Meteor developer.

Basic Meteor Authentication in Phoenix

Written by Pete Corey on Nov 14, 2016.

A question that often comes up when I’m talking to Meteor developers about transitioning to Phoenix is how to handle authentication.

When transitioning, a developer with an existing application and data may want to integrate with Meteor’s existing authentication data in their Elixir/Phoenix application instead of jumping ship and switching to an entirely different authentication scheme.

Let’s dig into how Meteor’s password authentication works and how to use it within an Elixir/Phoenix application.

Setting Up Our Projects

To start, let’s assume that you have a Meteor application built with user accounts managed through the accounts-password package.

For development purposes, let’s assume that your Meteor server is running locally on port 3000, and your MongoDB database instance is running locally on port 3001.

If you want to follow along, a quick way to set this up would be to clone the example Todos application and spin it up on your machine:

git clone
cd todos

Next, register a dummy user account (e.g., ""/"password") in your browser.

Now that Meteor has MongoDB running and populated with a Meteor-style user account, we’ll set up a new Phoenix project.

We’ll use Mix to create our application, and because we’re using MongoDB as our database, we’ll specify that we don’t want to use Ecto:

mix meteor_auth --no-ecto

Following the instructions in the mongodb driver package, we’ll add dependencies on the mongodb and poolboy packages, and create a MongoPool module.

Finally, we’ll add the MongoPool to our list of supervised worker processes:

children = [
  # Start the endpoint when the application starts
  supervisor(MeteorAuth.Endpoint, []),
  # Here you could define other workers and supervisors as children
  worker(MongoPool, [[database: "meteor", port: 3001]])
]

After restarting our Phoenix server, our application should be wired up and communicating with our local MongoDB database.

Anatomy of Meteor Authentication

At first glance, Meteor’s password-based authentication system can be confusing.

However, once you untangle the mess of asynchronous, highly configurable and pluggable code, you’re left with a fairly straightforward authentication process.

Authenticating an existing user usually begins with a call to the "login" Meteor method. This method will call the login handler registered in the accounts-password package, which simply does a password check. The result of the password check is passed into the _attemptLogin function, which actually logs the user in if the password check was successful, or returns an error if the check was unsuccessful.

The results of a successful login are that the authenticated user will be associated with the current connection, and that the user’s _id, resume token, and a tokenExpires timestamp will be returned to the client.

Building an Accounts Module

To support the ability to log into a Meteor application through Elixir, we’ll build a (hugely simplified) accounts module. The module will be responsible for transforming the email and password combination passed to the server into an authenticated user session.

Let’s start by defining the module and the module’s entry points:

defmodule MeteorAuth.Accounts do

  def login(socket, %{
              "user" => %{"email" => email},
              "password" => password
            }) when is_binary(email) and is_binary(password) do
    socket
    |> attempt_login(%{query: %{"emails.0.address": email}}, password)
  end

  ...
end

The login function in our MeteorAuth.Accounts module will take in a Phoenix channel socket and a map that holds the user’s provided email address and password.

Notice that we're asserting that both email and password should be "binary" types? This helps prevent NoSQL injection vulnerabilities.
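To see why the guard matters, consider what a malicious client could send in place of a string (an illustrative sketch; $ne is MongoDB’s standard "not equal" query operator):

```javascript
// Without the is_binary guard, a client could submit a MongoDB
// selector object instead of an email string. This "email"
// matches every document whose address is not null -- in other
// words, every user in the collection:
const maliciousEmail = { $ne: null };

// A query built blindly from that input would select all users:
const query = { "emails.0.address": maliciousEmail };
```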

The login function calls attempt_login, which grabs the user from MongoDB based on the constructed query (get_user_from_query), checks the user’s password (valid_credentials?), and finally attempts to log the user in (log_in_user):

defp attempt_login(socket, %{query: query}, password) do
  user = get_user_from_query(query)
  valid? = valid_credentials?(user, password)
  log_in_user(valid?, socket, user)
end

To fetch the user document from MongoDB, we’re running a find query against the "users" collection, transforming the resulting database cursor into a list, and then returning the first element from that list:

defp get_user_from_query(query) do
  MongoPool
  |> Mongo.find("users", query)
  |> Enum.to_list
  |> List.first
end

To check the user’s password, we transform the user-provided password string into a format that Meteor’s accounts package expects, and then we use the Comeonin package to securely compare the hashed version of the password string with the hashed password saved in the user’s document:

defp valid_credentials?(%{"services" => %{"password" => %{"bcrypt" => bcrypt}}},
                        password) do
  password
  |> get_password_string
  |> Comeonin.Bcrypt.checkpw(bcrypt)
end

Notice how we’re using pattern matching to destructure a complex user document and grab only the fields we care about. Isn't Elixir awesome?

Before Bcrypt hashing a password string, Meteor expects it to be SHA256 hashed and converted into a lowercased base16 (hexadecimal) string. This is fairly painless thanks to Erlang’s :crypto library:

defp get_password_string(password) do
  :crypto.hash(:sha256, password)
  |> Base.encode16
  |> String.downcase
end

Our valid_credentials? function returns true when the user-provided credentials are correct and false when they’re not.

We can pattern match our log_in_user function to do different things for valid and invalid credentials. If a user has provided a valid email address and password, we’ll log them in by assigning their user document to the current socket:

defp log_in_user(true, socket, user) do
  auth_socket = Phoenix.Socket.assign(socket, :user, user)
  {:ok, %{"id" => user["_id"]}, auth_socket}
end

For invalid credentials, we’ll simply return an error:

defp log_in_user(false, _socket, _user) do
  {:error}
end

Logging in Through Channels

Now that our MeteorAuth.Accounts module is finished up, we can wire it up to a Phoenix channel to test the end-to-end functionality.

We’ll start by creating a "ddp" channel in our default UserSocket module:

channel "ddp", MeteorAuth.DDPChannel

In our MeteorAuth.DDPChannel module, we’ll create a "login" event handler that calls our MeteorAuth.Accounts.login function:

def handle_in("login", params, socket) do
  case MeteorAuth.Accounts.login(socket, params) do
    {:ok, res, auth_socket} ->
      {:reply, {:ok, res}, auth_socket}
    {:error} ->
      {:reply, {:error}, socket}
  end
end

If login returns an :ok atom, we’ll reply back with an :ok status and the results of the login process (the user’s _id).

If login returns an :error, we’ll reply back to the client with an error.

To make sure that everything’s working correctly, we can make another event handler for a "foo" event. This event handler will simply inspect and return the currently assigned :user on the socket:

def handle_in("foo", _, socket) do
  user = socket.assigns[:user] |> IO.inspect
  case user do
    nil ->
      {:reply, :ok, socket}
    %{"_id" => id} ->
      {:reply, {:ok, %{"id" => id}}, socket}
  end
end

On the client, we can test to make sure that everything’s working as expected by running through a few different combinations of "foo" and "login" events:

let channel ="ddp", {})

channel.join()

channel.push("foo")
    .receive("ok", resp => { console.log("foo ok", resp) })
    .receive("error", resp => { console.log("foo error", resp) })

channel.push("login", {user: {email: ""}, password: "password"})
    .receive("ok", resp => { console.log("login ok", resp) })
    .receive("error", resp => { console.log("login error", resp) })

channel.push("foo")
    .receive("ok", resp => { console.log("foo ok", resp) })
    .receive("error", resp => { console.log("foo error", resp) })

And as expected, everything works!

We can now check if a user is currently authenticated on a socket by looking for the assigned :user. If none exists, the current user is unauthenticated. If :user exists, we know that the current user has been authenticated and is who they say they are.

Future Work

So far, we’ve only been able to log in with credentials set up through a Meteor application. We’re not creating or accepting resume tokens, and we’re missing lots of functionality related to signing up, logging out, resetting passwords, etc…

If your goal is to recreate the entirety of Meteor’s accounts package in Elixir/Phoenix, you have a long march ahead of you. The purpose of this article is simply to show that it’s possible, and fairly painless, to integrate these two stacks together.

It’s important to know that for green-field projects, or projects seriously planning on doing a full Elixir/Phoenix transition, there are better, more Phoenix-centric ways of approaching and handling user authentication and authorization.

That being said, if there’s any interest, I may do some future work related to resume tokens, signing up and out, and potentially turning this code into a more full-fledged Elixir package.

For now, feel free to check out the entire project on GitHub to get the full source. Let me know if there’s anything in particular you’d like to see come out of this!