
How I Actually Wrote My First Ebook

Written by Pete Corey on May 27, 2019.

It’s been nearly three months since I released my first book, Secure Meteor. Time has flown, and I couldn’t be happier with how it’s been embraced by the Meteor community. In the early days of creating Secure Meteor (and the middle days, and the late days…), I wasn’t sure about the best way of actually writing a self-published, technical ebook.

I’m not talking about how to come up with the words and content. You’re on your own for that. I’m talking about how to get those words from my mind into a digital artifact that can be consumed by readers.

What editor do I use? Word? Emacs? Ulysses? Scrivener? Something else?

If I’m using a plain-text editor, what format do I write in? Markdown? If so, what flavor? LaTeX? If so, what distribution? HTML? Something else?

How do I turn what I’ve written into a well typeset final product? Pandoc? LaTeX? CSS? Something else?

The fact that you can purchase a copy of Secure Meteor is proof enough that I landed on answers to all of these questions. Let’s dive into the nuts and bolts of the process and workflow I came up with to create the digital artifact that is Secure Meteor!

Please note that I’m not necessarily advocating for this workflow. This process has taught me lots of lessons, and I’ll go over what I’ve come to believe towards the end of this article.

Writing in Scrivener

I’ve been a long-time user of Ulysses; I use it to write all of my online content. That said, I wasn’t sure it was up to the task of writing a several-hundred page technical book. I had heard wonderful things about Scrivener, so I decided to try it out on this project.

At its heart, Scrivener is a rich-text editor. To write Secure Meteor, I used a subset of Scrivener’s rich-text formatting tools to describe the pieces of my book. “Emphasis” and “code span” character styles were used for inline styling, and the “code block” style was used for sections of source code.

For example, this section of text in Scrivener:

Eventually looks like this in the final book:

I added a few application keyboard shortcuts to make toggling between these styles easier:

With those shortcuts I can hit ^I to switch to the inline “code span” style, ^C to switch to a “code block”, and ^N to clear the current style. Scrivener’s built-in ⌘I shortcut for “emphasis” was also very helpful.

I also added a custom “Pete’s Tips” paragraph style which is used to highlight callouts and points of emphasis throughout various chapters. In Scrivener, my tips are highlighted in yellow:

And in the final book, they’re floated right and styled for emphasis:

Organizing Content

In the early days, I was lost in the various ways of organizing a Scrivener project. Should I have one document per chapter? Should I have a folder per chapter and a document per section? Should I use the “Title”/”Header 1”/”Header 2” paragraph styles with unnamed Scrivener documents, or should I just use document names to indicate chapter/section names?

Ultimately I landed on a completely hierarchical organization scheme that doesn’t use any “Title” or “Header” paragraph styles.

Every document in the root of my Scrivener project is considered a chapter in Secure Meteor. Chapters without sub-sections are simply named documents. Chapters with sub-sections are named folders. The first document in that folder is unnamed, and any following sub-sections are named documents (or folders, if we want to go deeper).
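As a sketch, the binder for a hypothetical project organized this way might look like the following (the chapter and section names here are invented for illustration):

```
Secure Meteor
├── Introduction                ← chapter without sub-sections (named document)
├── Securing Methods            ← chapter with sub-sections (named folder)
│   ├── (unnamed document)      ← the chapter’s introductory text
│   ├── Validating Arguments    ← sub-section (named document)
│   └── Checking Permissions    ← sub-section (named document)
└── ...
```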

This organization scheme worked out really well for me when it came time to lay out my final document and build my table of contents.

Scrivomatic

Unfortunately, Scrivener’s compiler support for syntax-highlighted code blocks isn’t great (read: non-existent). If I wanted my book styled the way I envisioned, I had no choice but to do the final rendering outside of Scrivener.

I decided on using Pandoc to render my book into HTML, and found Scrivomatic to be an unbelievably useful tool for working with Pandoc within the context of a Scrivener project.

After installing Scrivomatic and its various dependencies, I added a “front matter” document to my Scrivener project:


---
title: "<$projecttitle>"
author:
  - Pete Corey
keywords: 
  - Meteor
  - Security
pandocomatic_:
  use-template:
    - secure-meteor-html
---

After adding my front matter, I added a “Scrivomatic” compile format, once again following Scrivomatic’s instructions. It’s in this compile format that I added a prefix and suffix for the “Pete’s Tips” paragraph style that wraps each tip in a <p> tag with a tip class:

Next, I added the secure-meteor-html template referenced in my front matter to my ~/.pandoc/pandocomatic.yaml configuration file:


  secure-meteor-html:
    setup: []
    preprocessors: []
    pandoc:
      from: markdown
      to: html5
      standalone: true
      number-sections: false
      section-divs: true
      css: ./stylesheet.css
      self-contained: true
      toc: true
      toc-depth: 4
      base-header-level: 1
      template: ./custom.html
    metadata:
      notes-after-punctuation: false
    postprocessors: []
    cleanup: []

Note that I’m using ./custom.html and ./stylesheet.css as my HTML and CSS template files. Those will live within my Scrivener project folder (~/Secure Meteor).

Also note that I’m telling Pandoc to build a table of contents, which it happily does, thanks to the project structure we went over previously.

My custom.html is a stripped down and customized version of Scrivomatic’s default HTML template. To get the styling and structure of my title page just right, I built it out manually in the template:


$if(title)$
<header id="title-block-header">
    <div>
        <h1 class="title">Secure Meteor</h1>
        <p class="subtitle">Learn the ins and outs of securing your Meteor application from a Meteor security professional.</p>
        <p class="author">Written by Pete Corey.</p>
    </div>
</header>
$endif$

My CSS template, which you can see here, was also based on a stripped down version of Scrivomatic’s default CSS template. A few callouts to mention are that I used Typekit to pull down the font I wanted to use:


@import url("https://use.typekit.net/ssa1tke.css");

body { 
  font-family: "freight-sans-pro",sans-serif;
  ...
}

I added the styling for “Pete’s Tips” floating sections:


.tip {
    font-size: 1.6em;
    float: right;
    max-width: 66%;
    margin: 0.5em 0 0.5em 1em;
    line-height: 1.6;
    color: #ccc;
    text-align: right;
}

And I set up various page-break-* rules around the table of contents, chapters, sections, and code blocks:


#TOC {
    page-break-after: always;
}

h1 {
    page-break-before: always
}

h1,h2,h3,h4,h5,h6 {
    page-break-after: avoid;
}

.sourceCode {
    page-break-inside: avoid;
}

My goals with these rules were to always start a chapter on a new page, to avoid section headings hanging at the end of pages, and to avoid code blocks being broken in half by page breaks.

Generating a well-formatted HTML version of my book had the nice side effect of letting me easily publish sample chapters online.

HTML to PDF

Pandoc, through Scrivomatic, was doing a great job of converting my Scrivener project into an HTML document, but now I wanted to generate a PDF document as a final artifact that I could give to my customers. Pandoc’s PDF generation uses LaTeX to typeset and format documents, and after much pain and strife, I decided I didn’t want to go that route.

I wanted to turn my HTML document, which was perfectly styled, into a distributable PDF.

The first route I took was to simply open the HTML document in Chrome and “print” it to a PDF document. This worked, but I wanted an automated solution that didn’t require me to remember margin settings and page sizes. I also wanted a solution that allowed me to append styled page numbers to the footer of every page in the book, aside from the title page (which was built in our HTML template, outside the context of our Scrivener project and our generated table of contents).

I landed on writing a Puppeteer script that renders the HTML version of Secure Meteor into its final PDF. There are quite a few things going on in this script. First, it renders the title page by itself into first.pdf:


await page.pdf({
  path: "first.pdf",
  pageRanges: "1",
  ...
});

Next, it saves the rest of the pages to rest.pdf, including a custom footer that renders the current page number:


await page.pdf({
  path: "rest.pdf",
  pageRanges: "2-",
  footerTemplate: "...",
  ...
});
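The footerTemplate itself is elided above, but as a rough idea, Puppeteer substitutes the current page number into any element carrying the special pageNumber class. A minimal template might look something like this (the markup and styling here are a hypothetical sketch, not the template shipped with Secure Meteor):

```javascript
// Hypothetical footer template: Puppeteer replaces the contents of
// elements with the "pageNumber" class with the current page number.
const footerTemplate = `
  <div style="font-size: 10px; width: 100%; text-align: center;">
    <span class="pageNumber"></span>
  </div>
`;
```

This string would be passed as the footerTemplate option to page.pdf, alongside displayHeaderFooter: true.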

Finally, first.pdf and rest.pdf are merged together using the pdf-merge NPM package, which uses pdftk under the hood:


await pdfMerge([`${__dirname}/first.pdf`, `${__dirname}/rest.pdf`], {
  output: `${__dirname}/out.pdf`,
  libPath: "/usr/local/bin/pdftk"
});

By rendering the title separately from the rest of the book we’re able to place page numbers on the internal pages of our book, while keeping the title page footer free. This is another reason for building the title page into our HTML template. If we built it with Scrivener, Scrivomatic would count it as a page when generating our table of contents, which we don’t want.

Fine Tuning Page Breaks and Line Wraps

Finally, I had a mostly automated process for going from a draft in Scrivener to a rendered PDF. I could compile my Scrivener project down to HTML and then run my ./puppeteer script to generate a final PDF.

After looking through this final PDF, I realized that it still needed quite a bit of work.

Some code blocks overflowed off the page. I went through each page looking for these offending blocks of code and manually trimmed them down to size, truncating lines cleanly at a certain character count when appropriate, or adding line breaks where possible.
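A small script could have flagged the offending lines automatically (a hypothetical helper, not part of my actual workflow):

```javascript
// Hypothetical helper: report the lines in a code block that exceed
// the width that fits on a rendered page.
function findLongLines(source, maxWidth = 80) {
  return source
    .split("\n")
    .map((line, index) => ({ line, number: index + 1 }))
    .filter(({ line }) => line.length > maxWidth);
}
```

Running something like this over each code block would at least list the line numbers in need of manual trimming.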

I also noticed many unaesthetic page breaks: section headers too close to the bottom of a page, large gaps at the bottom of pages caused by subsequent large code blocks, and poorly floated “Pete’s Tips”. I had no choice but to start on page one and work my way through each of these issues.

I didn’t want to change the text of the book, so my only choice was to manually modify the generated HTML and add page-break-* styles on specific elements. Eventually, I massaged the book into a form I was happy with. Unfortunately, any changes I make to the text in Scrivener will force me to redo these manual changes.
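One of these manual tweaks might look something like the following (the element and id here are invented for illustration; the real ones come from the generated HTML):

```html
<!-- Before: a section heading left hanging near the bottom of a page. -->
<h2 id="example-section">Example Section</h2>

<!-- After: force the section onto a fresh page. -->
<h2 id="example-section" style="page-break-before: always;">Example Section</h2>
```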

Eventually, I had my final PDF. If you’d like to see how it turned out, go grab a copy of Secure Meteor or check out a few of the sample chapters!

Final Thoughts

I’m a few months removed from this whole process, and I have far more thoughts now than I did when I first started.

Would I use this workflow to write another book? Probably not. For all of Scrivener’s power, I don’t think rich-text editing is my jam. I’m more inclined to use Ulysses, which I know and love, to write in a plain-text format. If I had to choose today, I’d write in a flavor of Markdown or begin my journey up LaTeX’s steep learning curve.

I also need to find a better renderer than a browser. There’s a whole host of CSS functionality that’s proposed or deprecated that would make rendering paged media in the browser more feasible, like CSS-only page numbers, orphans and widows, and more, but none of it works in current versions of Chrome and Firefox. Prince seems to promise some of this functionality, but its price tag is too steep for me. Then again, working directly with LaTeX seems like it would alleviate these problems altogether.

Ultimately, I wanted to document this process because figuring this stuff out was ridiculously difficult. Writing the words of the book was easy in comparison. Hopefully this will act as a guide to others to show what’s currently possible, and some potential pitfalls to avoid.

Minimum Viable Phoenix

This post is written as a set of Literate Commits. The goal of this style is to show you how this program came together from beginning to end.

Each commit in the project is represented by a section of the article. Click each section's header to see the commit on Github, or check out the repository and follow along.

Written by Pete Corey on May 20, 2019.

Starting at the Beginning

Phoenix ships with quite a few bells and whistles. Whenever you fire up mix phx.new to create a new web application, forty-six files are created and spread across thirty directories!

This can be overwhelming to developers new to Phoenix.

To build a better understanding of the framework and how all of its moving pieces interact, let’s strip Phoenix down to its bare bones. Let’s start from zero and slowly build up to a minimum viable Phoenix application.

.gitignore


+.DS_Store

Minimum Viable Elixir

Starting at the beginning, we need to recognize that all Phoenix applications are Elixir applications. Our first step in the process of building a minimum viable Phoenix application is really to build a minimum viable Elixir application.

Interestingly, the simplest possible Elixir application is just an *.ex file that contains some source code. To set ourselves up for success later, let’s place our code in lib/minimal/application.ex. We’ll start by simply printing "Hello." to the console.


IO.puts("Hello.")

Surprisingly, we can execute our newly written Elixir application by compiling it:


➜ elixirc lib/minimal/application.ex
Hello.

This confused me at first, but it was explained to me that in the Elixir world, compilation is also evaluation.

lib/minimal/application.ex


+IO.puts("Hello.")

Generating Artifacts

While our execution-by-compilation works, it’s really nothing more than an on-the-fly evaluation. We’re not generating any compilation artifacts that can be re-used later, or deployed elsewhere.

We can fix that by moving our code into a module. Once we compile our newly modularized application.ex, a new Elixir.Minimal.Application.beam file will appear in the root of our project.

We can run our compiled Elixir program by running elixir in the directory that contains our *.beam file and specifying an expression to evaluate using the -e flag:


➜ elixir -e "Minimal.Application.start()"
Hello.

Similarly, we could spin up an interactive shell (iex) in the same directory and evaluate the expression ourselves:


iex(1)> Minimal.Application.start()
Hello.

.gitignore


+*.beam
.DS_Store

lib/minimal/application.ex


-IO.puts("Hello.")
+defmodule Minimal.Application do
+  def start do
+    IO.puts("Hello.")
+  end
+end

Incorporating Mix

This is great, but manually managing our *.beam files and bootstrap expressions is a little cumbersome. Not to mention the fact that we haven’t even started working with dependencies yet.

Let’s make our lives easier by incorporating the Mix build tool into our application development process.

We can do that by creating a mix.exs Elixir script file in the root of our project that defines a module that uses Mix.Project and describes our application. We write a project/0 callback in our new MixProject module whose only requirement is to return our application’s name (:minimal) and version ("0.1.0").


def project do
  [
    app: :minimal,
    version: "0.1.0"
  ]
end

While Mix only requires that we return the :app and :version configuration values, it’s worth taking a look at the other configuration options available to us, especially :elixir, :start_permanent, :build_path, :elixirc_paths, and others.

Next, we need to specify an application/0 callback in our MixProject module that tells Mix which module we want to run when our application fires up.


def application do
  [
    mod: {Minimal.Application, []}
  ]
end

Here we’re pointing it to the Minimal.Application module we wrote previously.

During the normal application startup process, Elixir will call the start/2 function of the module we specify with :normal as the first argument, and whatever we specify ([] in this case) as the second. With that in mind, let’s modify our Minimal.Application.start/2 function to accept those parameters:


def start(:normal, []) do
  IO.puts("Hello.")
  {:ok, self()}
end

Notice that we also changed the return value of start/2 to be an :ok tuple whose second value is a PID. Normally, an application would spin up a supervisor process as its first act of life and return its PID. We’re not doing that yet, so we simply return the current process’ PID.

Once these changes are done, we can run our application with mix or mix run, or fire up an interactive Elixir shell with iex -S mix. No bootstrap expression required!

.gitignore


 *.beam
-.DS_Store
+.DS_Store
+/_build/

lib/minimal/application.ex


 defmodule Minimal.Application do
-  def start do
+  def start(:normal, []) do
     IO.puts("Hello.")
+    {:ok, self()}
   end

mix.exs


+defmodule Minimal.MixProject do
+  use Mix.Project
+
+  def project do
+    [
+      app: :minimal,
+      version: "0.1.0"
+    ]
+  end
+
+  def application do
+    [
+      mod: {Minimal.Application, []}
+    ]
+  end
+end

Pulling in Dependencies

Now that we’ve built a minimum viable Elixir project, let’s turn our attention to the Phoenix framework. The first thing we need to do to incorporate Phoenix into our Elixir project is to install a few dependencies.

We’ll start by adding a deps array to the project/0 callback in our mix.exs file. In deps we’ll list :phoenix, :plug_cowboy, and :jason as dependencies.

By default, Mix stores downloaded dependencies in the deps/ folder at the root of our project. Let’s be sure to add that folder to our .gitignore. Once we’ve done that, we can install our dependencies with mix deps.get.

The reliance on :phoenix makes sense, but why are we already pulling in :plug_cowboy and :jason?

Under the hood, Phoenix uses the Cowboy web server, and Plug to compose functionality on top of our web server. It would make sense that Phoenix relies on :plug_cowboy to bring these two components into our application. If we try to go on with building our application without installing :plug_cowboy, we’ll be greeted with the following errors:

** (UndefinedFunctionError) function Plug.Cowboy.child_spec/1 is undefined (module Plug.Cowboy is not available)
    Plug.Cowboy.child_spec([scheme: :http, plug: {MinimalWeb.Endpoint, []}
    ...

Similarly, Phoenix requires a JSON serialization library to be installed and configured. Without either :jason or :poison installed, we’d receive the following warning when trying to run our application:

warning: failed to load Jason for Phoenix JSON encoding
(module Jason is not available).

Ensure Jason exists in your deps in mix.exs,
and you have configured Phoenix to use it for JSON encoding by
verifying the following exists in your config/config.exs:

config :phoenix, :json_library, Jason

Heeding that advice, we’ll install :jason and add that configuration line to a new file in our project, config/config.exs.

.gitignore


 /_build/
+/deps/

config/config.exs


+use Mix.Config
+
+config :phoenix, :json_library, Jason

mix.exs


   app: :minimal,
-  version: "0.1.0"
+  version: "0.1.0",
+  deps: [
+    {:jason, "~> 1.0"},
+    {:phoenix, "~> 1.4"},
+    {:plug_cowboy, "~> 2.0"}
+  ]
 ]
 

Introducing the Endpoint

Now that we’ve installed our dependencies on the Phoenix framework and the web server it uses under the hood, it’s time to define how that web server incorporates into our application.

We do this by defining an “endpoint”, which is our application’s interface into the underlying HTTP web server, and our clients’ interface into our web application.

Following Phoenix conventions, we define our endpoint by creating a MinimalWeb.Endpoint module that uses Phoenix.Endpoint and specifies the :name of our OTP application (:minimal):


defmodule MinimalWeb.Endpoint do
  use Phoenix.Endpoint, otp_app: :minimal
end

The __using__/1 macro in Phoenix.Endpoint does quite a bit of heavy lifting. Among many other things, it loads the endpoint’s initial configuration, sets up a plug pipeline using Plug.Builder, and defines helper functions to describe our endpoint as an OTP process. If you’re curious about how Phoenix works at a low level, start your search here.

Phoenix.Endpoint uses the value we provide in :otp_app to look up configuration values for our application. Phoenix will complain if we don’t provide a bare minimum configuration entry for our endpoint, so we’ll add that to our config/config.exs file:


config :minimal, MinimalWeb.Endpoint, []

But there are a few configuration values we want to pass to our endpoint, like the host and port we want to serve from. These values are usually environment-dependent, so we’ll add a line at the bottom of our config/config.exs to load another configuration file based on our current environment:


import_config "#{Mix.env()}.exs"

Next, we’ll create a new config/dev.exs file that specifies the :host and :port we’ll serve from during development:


use Mix.Config

config :minimal, MinimalWeb.Endpoint,
  url: [host: "localhost"],
  http: [port: 4000]

If we were to start our application at this point, we’d still be greeted with Hello. printed to the console, rather than a running Phoenix server. We still need to incorporate our Phoenix endpoint into our application.

We do this by turning our Minimal.Application into a proper supervisor and instructing it to load our endpoint as a supervised child:


use Application

def start(:normal, []) do
  Supervisor.start_link(
    [
      MinimalWeb.Endpoint
    ],
    strategy: :one_for_one
  )
end

Once we’ve done that, we can fire up our application using mix phx.server or iex -S mix phx.server and see that our endpoint is listening on localhost port 4000.

Alternatively, if you want to use our old standby of mix run, either configure Phoenix to serve all endpoints on startup, which is what mix phx.server does under the hood:


config :phoenix, :serve_endpoints, true

Or configure your application’s endpoint specifically:


config :minimal, MinimalWeb.Endpoint, server: true

config/config.exs


+config :minimal, MinimalWeb.Endpoint, []
+
 config :phoenix, :json_library, Jason
+
+import_config "#{Mix.env()}.exs"

config/dev.exs


+use Mix.Config
+
+config :minimal, MinimalWeb.Endpoint,
+  url: [host: "localhost"],
+  http: [port: 4000]

lib/minimal/application.ex


 defmodule Minimal.Application do
+  use Application
+
   def start(:normal, []) do
-    IO.puts("Hello.")
-    {:ok, self()}
+    Supervisor.start_link(
+      [
+        MinimalWeb.Endpoint
+      ],
+      strategy: :one_for_one
+    )
   end
 

lib/minimal_web/endpoint.ex


+defmodule MinimalWeb.Endpoint do
+  use Phoenix.Endpoint, otp_app: :minimal
+end

Adding a Route

Our Phoenix endpoint is now listening for inbound HTTP requests, but this doesn’t do us much good if we’re not serving any content!

The first step in serving content from a Phoenix application is to configure our router. A router maps requests sent to a route, or path on your web server, to a specific module and function. That function’s job is to handle the request and return a response.

We can add a route to our application by making a new module, MinimalWeb.Router, that uses Phoenix.Router:


defmodule MinimalWeb.Router do
  use Phoenix.Router
end

And we can instruct our MinimalWeb.Endpoint to use our new router:


plug(MinimalWeb.Router)

The Phoenix.Router module generates a handful of helpful macros, like match, get, post, etc… and configures itself as a module-based plug. This is the reason we can seamlessly incorporate it in our endpoint using the plug macro.

Now that our router is wired into our endpoint, let’s add a route to our application:


get("/", MinimalWeb.HomeController, :index)

Here we’re instructing Phoenix to send any HTTP GET requests for / to the index/2 function in our MinimalWeb.HomeController “controller” module.

Our MinimalWeb.HomeController module needs to use Phoenix.Controller and provide our MinimalWeb module as a :namespace configuration option:


defmodule MinimalWeb.HomeController do
  use Phoenix.Controller, namespace: MinimalWeb
end

Phoenix.Controller, like Phoenix.Endpoint and Phoenix.Router, does quite a bit. It establishes itself as a plug by using Phoenix.Controller.Pipeline, and it uses the :namespace module we provide to do some automatic layout and view module detection.

Because our controller module is essentially a glorified plug, we can expect Phoenix to pass conn as the first argument to our specified controller function, and any user-provided parameters as the second argument. Just like any other plug’s call/2 function, our index/2 should return our (potentially modified) conn:


def index(conn, _params) do
  conn
end

But returning an unmodified conn like this is essentially a no-op.

Let’s spice things up a bit and return a simple HTML response to the requester. The simplest way of doing that is to use Phoenix’s built-in Phoenix.Controller.html/2 function, which takes our conn as its first argument, and the HTML we want to send back to the client as the second:


Phoenix.Controller.html(conn, """
  <p>Hello.</p>
""")

If we dig into html/2, we’ll find that it’s using Plug’s built-in Plug.Conn.send_resp/3 function:


Plug.Conn.send_resp(conn, 200, """
  <p>Hello.</p>
""")

And ultimately send_resp/3 is just modifying our conn structure directly:


%{
  conn
  | status: 200,
    resp_body: """
      <p>Hello.</p>
    """,
    state: :set
}

These three expressions are effectively equivalent, and we can use whichever one we choose to return our HTML fragment from our controller. For now, we’ll follow best practices and stick with Phoenix’s html/2 helper function.

lib/minimal_web/controllers/home_controller.ex


+defmodule MinimalWeb.HomeController do
+  use Phoenix.Controller, namespace: MinimalWeb
+
+  def index(conn, _params) do
+    Phoenix.Controller.html(conn, """
+      <p>Hello.</p>
+    """)
+  end
+end

lib/minimal_web/endpoint.ex


   use Phoenix.Endpoint, otp_app: :minimal
+
+  plug(MinimalWeb.Router)
 end
 

lib/minimal_web/router.ex


+defmodule MinimalWeb.Router do
+  use Phoenix.Router
+
+  get("/", MinimalWeb.HomeController, :index)
+end

Handling Errors

Our Phoenix-based web application is now successfully serving content from the / route. If we navigate to http://localhost:4000/, we’ll be greeted by our friendly HomeController:

But behind the scenes, we’re having issues. Our browser automatically requests the /favicon.ico asset from our server, and having no idea how to respond to a request for an asset that doesn’t exist, Phoenix kills the request process and automatically returns a 500 HTTP status code.

We need a way of handling requests for missing content.

Thankfully, the stack trace Phoenix gave us when it killed the request process gives us a hint for how to do this:

Request: GET /favicon.ico
  ** (exit) an exception was raised:
    ** (UndefinedFunctionError) function MinimalWeb.ErrorView.render/2 is undefined (module MinimalWeb.ErrorView is not available)
        MinimalWeb.ErrorView.render("404.html", %{conn: ...

Phoenix is attempting to call MinimalWeb.ErrorView.render/2 with "404.html" as the first argument and our request’s conn as the second, and is finding that the module and function don’t exist.

Let’s fix that:


defmodule MinimalWeb.ErrorView do
  def render("404.html", _assigns) do
    "Not Found"
  end
end

Our render/2 function is a view, not a controller, so we just have to return the content we want to render in our response, not the conn itself. That said, the distinctions between views and controllers may be outside the scope of building a “minimum viable Phoenix application,” so we’ll skim over that for now.

Be sure to read more about the ErrorView module, and how it incorporates into our application’s endpoint. Also note that the module called to render errors is customizable through the :render_errors configuration option.

lib/minimal_web/views/error_view.ex


+defmodule MinimalWeb.ErrorView do
+  def render("404.html", _assigns) do
+    "Not Found"
+  end
+end

Final Thoughts

So there we have it. A “minimum viable” Phoenix application. It’s probably worth pointing out that we’re using the phrase “minimum viable” loosely here. I’m sure there are people who can come up with more “minimal” Phoenix applications. Similarly, I’m sure there are concepts and tools that I left out, like views and templates, that would cause people to argue that this example is too minimal.

The idea was to explore the Phoenix framework from the ground up, building each of the requisite components ourselves, without relying on automatically generated boilerplate. I’d like to think we accomplished that goal.

I’ve certainly learned a thing or two!

If there’s one thing I’ve taken away from this process, it’s that there is no magic behind Phoenix. Everything it’s doing can be understood with a little familiarity with the Phoenix codebase, a healthy understanding of Elixir metaprogramming, and a little knowledge about Plug.

Is My Apollo Client Connected to the Server?

Written by Pete Corey on May 13, 2019.

When you’re building a real-time, subscription-heavy front-end application, it can be useful to know if your client is actively connected to the server. If that connection is broken, maybe because the server is temporarily down for maintenance, we’d like to be able to show a message explaining the situation to the user. Once we re-establish our connection, we’d like to hide that message and go back to business as usual.

That’s the dream, at least. Trying to implement this functionality using Apollo turned out to be more trouble than we expected on a recent client project.

Let’s go over a few of the solutions we tried that didn’t solve the problem, for various reasons, and then let’s go over the final working solution we came up with. Ultimately, I’m happy with what we landed on, but I didn’t expect to uncover so many roadblocks along the way.

What Didn’t Work

Our first attempt was to build a component that polled for an online query on the server. If the query ever failed with an error on the client, we’d show a “disconnected” message to the user. Presumably, once the connection to the server was re-established, the error would clear, and we’d re-render the children of our component:


const Connected = props => {
  return (
    <Query query={gql`{ online }`} pollInterval={5000}>
      {({error, loading}) => {
        if (loading) {
            return <Loader/>;
        }
        else if (error) {
            return <Message/>;
        }
        else {
            return props.children;
        }
      }}
    </Query>
  );
}

Unfortunately, our assumptions didn’t hold up. Apparently when a query fails, Apollo (react-apollo@2.5.5) will stop polling on that failing query, stopping our connectivity checker dead in its tracks.

NOTE: Apparently, this should work, and in various simplified reproductions I built while writing this article, it did work. Here are various issues and pull requests documenting the problem, merging in fixes (which others claim don’t work), and documenting workarounds:


We thought, “well, if polling is turned off on error, let’s just turn it back on!” Our next attempt used startPolling to try restarting our periodic heartbeat query.


if (error) {
  startPolling(5000);
}

No dice.

Our component successfully restarts polling and carries on refetching our query, but the Query component returns values for both data and error, along with a networkStatus of 8, which indicates that “one or more errors were detected.”

If a query returns both an error and data, how are we to know which to trust? Was the query successful? Or was there an error?

We also tried to implement our own polling system with various combinations of setTimeout and setInterval. Ultimately, none of these solutions seemed to work because Apollo was returning both error and data for queries, once the server had recovered.
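For reference, the hand-rolled polling we tried looked roughly like this (a simplified, hypothetical sketch; refetch stands in for the function the Query component provides):

```javascript
// Roughly the manual polling we tried: periodically refetch the
// heartbeat query ourselves, outside of Apollo's pollInterval.
function startManualPolling(refetch, intervalMs = 5000) {
  const id = setInterval(() => {
    refetch().catch(() => {
      // Ignore failures here; the component renders based on the
      // { data, error } pair that Apollo hands back.
    });
  }, intervalMs);

  return () => clearInterval(id); // call the returned function to stop
}
```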

NOTE: This should also work, though it would be unnecessary, if it weren’t for the issues mentioned above.


Lastly, we considered leveraging subscriptions to build our connectivity detection system. We wrote an online subscription which pushes a timestamp down to the client every five seconds. Our component subscribes to this publication… And then what?

We’d need to set up another five second interval on the client that flips into an error state if it hasn’t seen a heartbeat in the last interval.
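That client-side interval could be sketched as a small watchdog (a hypothetical sketch; the clock is injected so the logic can be exercised deterministically):

```javascript
// Hypothetical watchdog for the heartbeat scheme described above.
// The server pushes a timestamp every five seconds; if we haven't
// seen one within the timeout, we consider ourselves disconnected.
class HeartbeatWatchdog {
  constructor(timeoutMs = 5000, now = () => Date.now()) {
    this.timeoutMs = timeoutMs;
    this.now = now;          // injectable clock, for testability
    this.lastBeat = now();   // assume we're fresh at startup
  }

  beat() {
    // Call this whenever a heartbeat arrives over the subscription.
    this.lastBeat = this.now();
  }

  isStale() {
    // True when no heartbeat has arrived within the timeout window.
    return this.now() - this.lastBeat > this.timeoutMs;
  }
}
```

In practice, beat() would be called from the subscription’s data handler, and the component would check isStale() to decide whether to show the disconnected message.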

But once again, once our connection to the server is re-established, our subscription won’t re-instantiate in a sane way, and our client will be stuck showing a stale disconnected message.

What Did Work

We decided to go a different route and implemented a solution that leverages the SubscriptionClient lifecycle and Apollo’s client-side query functionality.

At a high level, we store our online boolean in Apollo’s client-side cache, and update this value whenever Apollo detects that a WebSocket connection has been disconnected or reconnected. Because we store online in the cache, our Apollo components can easily query for its value.

Starting things off, we added a purely client-side online query that returns a Boolean!, and a resolver that defaults to being “offline”:


const resolvers = {
    Query: { online: () => false }
};

const typeDefs = gql`
  extend type Query {
    online: Boolean!
  }
`;

const apolloClient = new ApolloClient({
  ...
  typeDefs,
  resolvers
});

Next we refactored our Connected component to query for the value of online from the cache:


const Connected = props => {
  return (
    <Query query={gql`{ online @client }`}>
      {({error, loading}) => {
        if (loading) {
            return <Loader/>;
        }
        else if (error) {
            return <Message/>;
        }
        else {
            return props.children;
        }
      }}
    </Query>
  );
}

Notice that we’re not polling on this query. Any time we update our online value in the cache, Apollo knows to re-render this component with the new value.

Next, while setting up our SubscriptionClient and WebSocketLink, we added a few hooks to detect when our client is connected, disconnected, and later reconnected to the server. In each of those cases, we write the appropriate value of online to our cache:


subscriptionClient.onConnected(() =>
    apolloClient.writeData({ data: { online: true } })
);

subscriptionClient.onReconnected(() =>
    apolloClient.writeData({ data: { online: true } })
);

subscriptionClient.onDisconnected(() =>
    apolloClient.writeData({ data: { online: false } })
);

And that’s all there is to it!

Any time our SubscriptionClient detects that it’s disconnected from the server, we write online: false into our cache, and any time we connect or reconnect, we write online: true. Our component picks up each of these changes and shows a corresponding message to the user.

Huge thanks to this StackOverflow comment for pointing us in the right direction.