Blaze Meets Clusterize.js

Written by Pete Corey on Apr 18, 2016.

Recently, I’ve been working on a Meteor project that deals with lots of data. Most of this data is rendered in “cards” that populate a vertically scrolling list.

These cards need to be very quickly scannable, sortable, and filterable by any users of the application. To do this quickly, we need to publish all of this data to the client and let the UI handle its presentation; we can’t rely on techniques like infinite scrolling or on-the-fly subscriptions.

This situation led to an interesting problem with Blaze and an even more interesting solution leveraging Clusterize.js and a client-side cache. Let’s dig into it!

Blazingly Slow

The naive Blaze solution to presenting a bunch of UI components is to simply render each of these cards within an {{#each}} block:

{{#each data in cards}}
  {{> card data}}
{{/each}}

Unfortunately, as we start to render (and re-render) more and more cards in our list, our application slows to a crawl. After lots of profiling, debugging, and researching, I came to the conclusion that Blaze simply isn’t designed to handle this much rendering and re-rendering.

Arunoda of MeteorHacks (partially) explains the issue in this article and its corresponding blog post.

Enter Clusterize.js

For our situation, a better approach was to use Clusterize.js to efficiently manage and render the massive list of cards.

Rather than dumping all of our cards into the DOM at once, Clusterize.js only renders the small portion of the cards that are currently visible. As you scroll through the list, those DOM elements are recycled and replaced with the newly visible cards. This efficient use of the DOM makes Clusterize a much more effective option when dealing with large sets of scrolling data.
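The windowing idea behind this can be sketched in a few lines of plain JavaScript. Note that the function name, parameters, and buffer size here are illustrative only, not Clusterize.js’s actual API:

```javascript
// Given the full list of rows and the current scroll position, return
// only the rows that fall inside the viewport, plus a small buffer of
// extra rows above and below to smooth out fast scrolling.
function visibleSlice(rows, scrollTop, viewportHeight, rowHeight) {
  var buffer = 5;
  var first = Math.max(0, Math.floor(scrollTop / rowHeight) - buffer);
  var last = Math.min(
    rows.length,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + buffer
  );
  return rows.slice(first, last);
}
```

Only the returned slice ever needs to live in the DOM; everything outside of it is represented by empty spacer elements that preserve the scrollbar’s size.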

Unfortunately, using Clusterize.js with Blaze wasn’t the most straight-forward process. Here’s a breakdown of how I approached the problem.

I didn’t want this Clusterize.js implementation code to permeate the rest of my front-end code, so I decided to abstract all of the Clusterize-specific complexity I was about to introduce into its own private Blaze component. This component introduced some boilerplate DOM elements required by Clusterize and an onRendered hook required to initialize the plugin:

<template name="clusterize">
  <div id="scrollArea">
    <div id="contentArea" class="clusterize-content"></div>
  </div>
</template>

Template.clusterize.onRendered(function() {
  // Initialize Clusterize.js here...
  this.clusterize = undefined;
});

The component was designed to accept a cursor and a template name. Each document returned by the cursor was associated with a single card that needed to be rendered with the given template. We could use the component like this:

{{> clusterize cursor=getCardDocuments template="card" options=getClusterizeOptions}}

Where getCardDocuments was a helper that returned a cursor, and getClusterizeOptions returned an options object to be passed into Clusterize.js.

Basic Rendering

The most straight-forward way of using Clusterize.js is to render our cards in the DOM using a Blaze `{{#each}}` block, and then initialize the plugin:

<div id="contentArea" class="clusterize-content">
  {{#each document in cursor}}
    {{> Template.dynamic template=template data=document}}
  {{/each}}
</div>

Unfortunately, this leads to the same problems that started this whole mess. Naively rendering lots of templates in Blaze is inherently slow!

Another technique would be to manage the rendering process ourselves and give Clusterize.js a list of raw HTML strings to manage and render:

Template.clusterize.onRendered(function() {
  this.autorun(() => {

    // Any time data changes, re-run
    let data = Template.currentData();
    if (!data) {
      return;
    }

    // Build the HTML for each card
    let template = Template[data.template];
    let rows = data.cursor.fetch().map(function(document) {
      return Blaze.toHTMLWithData(template, document);
    });

    // Update or initialize Clusterize.js
    if (this.clusterize) {
      this.clusterize.update(rows);
    }
    else {
      this.clusterize = new Clusterize(_.extend({
        rows: rows,
        scrollElem: this.$("#scrollArea")[0],
        contentElem: this.$("#contentArea")[0]
      }, data.options));
    }
  });
});


This seems like a step in the right direction, but as the cursor changes, you might notice that our component takes quite a bit of time to re-render each of the cards before passing the raw HTML off to Clusterize.js…

There has to be a faster way!

Cached Rendering

Thankfully, speeding up this implementation was fairly straight-forward. The key insight is that we don’t want to waste time re-rendering a card if we’ve already rendered it in the past. This sounds like an ideal job for a cache!

In this case, I decided to use a simple LRU cache (specifically, lru-cache) to cache my rendered templates. This cache can be set up in your application in a variety of ways depending on your current Meteor version.
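The only interface we need from lru-cache is `get`, `set`, and `del`. If it helps to see how that interface behaves, here’s a minimal Map-backed sketch of the `TemplateCache` used below (the capacity of 500 entries is an arbitrary choice for illustration):

```javascript
var TemplateCache = (function(max) {
  var cache = new Map();
  return {
    get: function(key) {
      if (!cache.has(key)) {
        return undefined;
      }
      // Re-insert the entry to mark it as most recently used
      var html = cache.get(key);
      cache.delete(key);
      cache.set(key, html);
      return html;
    },
    set: function(key, html) {
      cache.delete(key);
      cache.set(key, html);
      // Evict the least recently used entry once we're over capacity
      if (cache.size > max) {
        cache.delete(cache.keys().next().value);
      }
    },
    del: function(key) {
      cache.delete(key);
    }
  };
})(500);
```

This works because a `Map` iterates its keys in insertion order, so the first key is always the least recently used entry.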

I decided that a simple, but effective caching strategy would be to store each card’s rendered HTML string in the cache, indexed by the card’s _id.

This change makes the Clusterize.js render method slightly more complex:

let rows = data.cursor.fetch().map(function(document) {
  // Has this card already been rendered?
  let html = TemplateCache.get(document._id);
  if (html) {
    return html;
  }

  // Render the card and save it to the cache...
  html = Blaze.toHTMLWithData(template, document);
  TemplateCache.set(document._id, html);
  return html;
});

Now, if we ever try to re-render a card that’s already been rendered on the client, we’ll find that card in the cache and instantly return the card’s rendered HTML.

This greatly improves the speed of our Clusterize.js component as we change the set of cards we’re trying to render.

Cache Invalidation

Unfortunately, our Clusterize.js component in its current form has some major issues.

If we ever update any data on a document that should be reflected on that document’s card, we’ll never see that change. Because that card has already been rendered and cached, it’ll never be re-rendered. We’re stuck looking at old, stale data in our cards list.

To deal with this situation, we need to clear any cache entries for a card whenever its corresponding document is changed. The most straight-forward way of doing this is through an observe handler on the cursor provided to our component:

// Invalidate our cache whenever a document changes
data.cursor.observeChanges({
  changed: function(id) {
    TemplateCache.del(id);
  }
});

Bam! We now have incredibly fast, dynamically updating cards in our Clusterize.js managed list!

Next Steps

What I described here is a fairly simplified version of the Clusterize.js component I finally landed on.

This version doesn’t handle “client-side joins” within your rendered cards. It also doesn’t handle changes made to documents on the server while those documents don’t exist in the client’s cursor. These shortcomings can be addressed with slightly more sophisticated invalidation rules and caching schemes.

At the end of the day, Clusterize.js was a life saver. With some minor massaging, it was able to step in and replace Blaze to do some majorly impressive feats of rendering.

CollectionFS Safety Considerations

Written by Pete Corey on Apr 4, 2016.

The ability to upload files is a core piece of functionality in many web applications. Because of this, many libraries have sprung up around the topic of managing and facilitating file uploads. Arguably the most popular Meteor file upload package is CollectionFS.

CollectionFS is ubiquitous throughout the Meteor community due to its extensive functionality and its ease of use. In fact, it’s so easy to use that many developers simply drop it into their application and move on to their next feature without considering the implications that file uploading might have.

From a security and performance perspective, there are several things you should consider and make conscious choices about before adding file uploading to your application:

  • File size limiting
  • File count limiting
  • File type restrictions
  • File handling and processing

Let’s dive into each of these topics and explore why they’re so important.

File Size Limiting

The size of files being uploaded into your system quickly becomes important when you begin working with those files. Are you processing the files that are uploaded in any way? In doing that processing, are you attempting to load the entire file’s contents into memory?

Loading a massive file into memory can quickly lead to performance issues and server crashes. Node.js applications have a default maximum of 1.76GB of available memory. If a user were to upload a file that’s around 1.76GB or larger, it would lead to the server crashing and the application being completely unavailable during the restart.

Thankfully, restricting an upload’s file size is a very simple process when using CollectionFS. The following code creates a Files collection and uses the filter object to specify a maximum file size of 100MB (1e+8 bytes).

Files = new FS.Collection("files", {
  stores: [ ... ],
  filter: {
    maxSize: 1e+8
  }
});

Any files larger than 100MB will be rejected by this filter.

File Count Limiting

Along with limiting the size of files being uploaded, you should also limit the number of files uploadable by users of your system.

Imagine that you’re using CollectionFS to upload files into an S3 bucket. Left unchecked, a malicious user might upload hundreds or thousands of very large files into this bucket, drastically increasing the amount of storage space being used.

Without some kind of alerting, you may not notice this until your next AWS billing cycle where you’ll find a notably increased S3 bill!

Adding a maximum limit to the number of files in your CollectionFS stores is accomplished by adding a custom insert rule to your file collection:

Files.allow({
  insert: function() {
    return Files.find().count() < 100;
  }
});

In this example, file uploads will be rejected if there are already 100 files uploaded to your stores.

We could easily tweak this example to allow a maximum number of files per user or per any arbitrary group of users.

File Type Restrictions

When files go up, they usually come down. Any system that allows for the uploading of files usually intends for those files to be downloaded and used at some point in the future.

However, what if a user could upload any kind of file they wanted? Imagine the repercussions of a user uploading malicious scripts, executables, or viruses to your application. Those files might be downloaded and run by some other user, leading to a compromise of their system.

Most applications only work with a small set of file types. It’s good security practice to only allow files of those types to be uploaded. The best way to restrict the file types allowed into your system is to allow a set of expected file extensions:

Files = new FS.Collection("files", {
  stores: [ ... ],
  filter: {
    allow: {
      extensions: ["pdf", "doc", "docx"]
    }
  }
});

This example only allows PDFs and Word documents to be uploaded.

Filtering on file extension is considered safer than filtering on content type. The content type of an uploaded file is provided by the client and can easily be spoofed.

Additionally, always whitelist expected file extensions, rather than blacklisting disallowed extensions. Blacklisting creates the possibility that a harmful file extension was forgotten, and could still be uploaded.
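If it helps to see what a whitelist check boils down to, here’s a standalone sketch of extension whitelisting mirroring the filter above. This is not CollectionFS’s internal implementation; the function name and allowed list are illustrative:

```javascript
var ALLOWED_EXTENSIONS = ["pdf", "doc", "docx"];

function isAllowedUpload(filename) {
  // Grab everything after the last "." and compare case-insensitively.
  // Files with no extension are rejected outright.
  var match = /\.([^.]+)$/.exec(filename);
  if (!match) {
    return false;
  }
  return ALLOWED_EXTENSIONS.indexOf(match[1].toLowerCase()) !== -1;
}
```

Because the check runs against the *last* extension, a file like `archive.tar.gz` is judged by `gz`, and `report.pdf.exe` is correctly rejected by `exe`.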

File Handling and Processing

When working with uploaded files, always act defensively. Never assume that the file is well-formed or that it will conform to your assumptions. Bugs in file processing algorithms have historically led to some severe vulnerabilities.

CollectionFS’s transformWrite runs in an unprotected thread of execution. This means that any uncaught exceptions that bubble up out of this method will escalate all the way to the event loop and crash the application. Once the server restarts, CollectionFS will notice that the transformation was not successful and will re-attempt to transform the file, crashing the server in the process.

This kind of repeated crashing can leave your application completely inaccessible to users until the file having problems is removed from your CollectionFS store. A malicious user may intentionally create a crash loop to deny service to your application.

Final Thoughts

Working with files can be a dangerous proposition. Thankfully, CollectionFS and its associated storage drivers take some of the danger out of our hands. In most circumstances we don’t have to worry about things like directory traversal vulnerabilities, or the possibilities of arbitrary code execution.

As we’ve seen, there are still things we need to be considerate of. If you follow these suggestions and spend time thoroughly analyzing your file upload system, you should have nothing to worry about.

Bypassing Package-Based Basic Auth

Written by Pete Corey on Mar 28, 2016.

In a previous post, I talked about using Basic Authentication to hide your Meteor application from prying eyes. Unfortunately, the most straight-forward way of implementing this kind of protection has its flaws.

To see those flaws, let’s imagine that we’ve set up a basic Meteor application with the kit:basic-auth package and a Meteor method:

Meteor.methods({
  foo: function() {
    return "bar";
  }
});

When we try to navigate to the application (http://localhost:3000/), we notice that we can’t access the application without valid credentials. Great!

Bypassing Basic Auth

However, Jesse Rosenberger recently pointed out that kit:basic-auth, and similar packages such as jabbslad:basic-auth, do not provide Basic Auth protection for WebSocket connections. This means that any external user can easily bypass this authentication mechanism and access your Meteor methods and publications.

For example, an external user could connect directly to your application using Meteor’s DDP API and call your "foo" method:

var connection = DDP.connect("http://localhost:3000/");

connection.call("foo", function(err, res) {
  console.log(res); // "bar"
});

Any unauthorized user that runs the above code will receive a result of "bar" from the "foo" method.

This is a bad thing.

Calling In the Dark

While the DDP API gives users access to all of your Meteor methods and publications, it doesn’t reveal those methods and publications. In order to call a method or subscribe to a publication, a user needs to know its name.

However, this kind of security through obscurity shouldn’t be considered any real protection. An attacker eager to discover your DDP endpoints could build a brute forcer that guesses method and publication names in an attempt to uncover your endpoints.

A Better Basic Auth

At first glance, the kit:basic-auth and jabbslad:basic-auth packages seem to be doing all the right things. They’re injecting the Basic Auth check as a piece of connect middleware at the head of the stack which, in theory, should catch all HTTP traffic and verify the user’s credentials.

Unfortunately, the Meteor framework establishes the socket connection long before any of these middleware methods are called. This means that Basic Auth is ignored during the WebSocket handshake and upgrade process.

One possible technique for overcoming this middleware issue is to “overshadow” all "request" and "upgrade" listeners and inject our Basic Auth check there. The Meteor framework does this exact thing to support raw WebSocket connections.

However, a more straightforward approach to this problem may be to move your application behind a proxy such as HAProxy, or NGINX and implement Basic Auth at that level. The proxy would protect all assets and endpoints, including the /sockjs/.../websocket endpoint, which is used to establish a WebSocket connection with the server.

Final Thoughts & Thanks

I’d like to give a massive thanks to Jesse Rosenberger who pointed out this issue to me, and gave me a huge amount of very helpful information and observations.

I’d also like to apologize to anyone hiding applications behind a package-based Basic Auth guard based on my advice. I’ve updated my previous post on this subject to reflect what I’ve learned and pointed out the current shortcomings of this package-based approach.