Dance, Computer, Dance

by Ray Grasso


Pieces I've written.

Forgetting Data in Event Sourced Systems

13 September, 2018

GDPR’s right to be forgotten means we have to be able to erase a person’s data from our systems. Event sourced systems work from an immutable log of events which makes erasure difficult. You probably want to think hard about storing data you need to delete in an immutable event log but sometimes that choice is already made and you need to make it work, so let’s dig in.

Erasing user data from current state projections

This is relatively straightforward. A RightToBeForgottenInvoked event is added to the event store for the person. All projectors that depend on personal data listen for this event and prune or scrub the appropriate data for the person from their projections.
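
As a sketch, a projector handling this might look like the following. The event shape, event names, and projection structure here are hypothetical illustrations, not from any particular library:

```ruby
# Hypothetical sketch of a projector that maintains a contact-details
# projection and scrubs it when RightToBeForgottenInvoked arrives.
Event = Struct.new(:type, :person_id, :body)

class ContactDetailsProjector
  def initialize
    @rows = {}
  end

  def handle(event)
    case event.type
    when "UserRegistered"
      @rows[event.person_id] = { name: event.body["name"], email: event.body["email"] }
    when "RightToBeForgottenInvoked"
      # Prune the person's data from the projection.
      @rows.delete(event.person_id)
    end
  end

  def row_for(person_id)
    @rows[person_id]
  end
end
```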

Erasing data from the event stream itself

This case is trickier. We need to rewrite history in a way that doesn’t break things. Let’s look at an option for erasing data without rebuilding the event stream. This approach is also applicable for projections that are immutable change logs.

We can store personal data outside of the events themselves in a separate storage layer. Each event instead stores a key for retrieving the data from this layer, and any event consumers request the data when they need it. Given this data is personal, the storage layer should encrypt it at rest.

Once a RightToBeForgottenInvoked event is added to the event store, all data for that person can be erased from the storage layer. All subsequent requests to the secure storage layer for that person’s data will return null objects rather than the actual data. This makes life easier for consumers and avoids null checking yourself to death all over the place.

Let’s see what this secure storage layer might look like.

Sketch of a secure storage layer

Our secure storage layer stores data that is scoped to a person and has a type (so we can return null objects). The store allows all data for a specific person to be erased.

Let’s start with two main models: a Person1 and a Data model.

      Data                 Person
  ┌──────────┐        ┌───────────────┐
  │    id    │   ┌───>│      id       │
  ├──────────┤   │    ├───────────────┤
  │person_id │───┘    │encryption_key │
  ├──────────┤        ├───────────────┤
  │   type   │        │   is_erased   │
  ├──────────┤        └───────────────┘
  │   data   │
  └──────────┘

The interface to the secure storage layer is outlined below.

class SecureStorage
  def add(person_id, data_id, type, data)
    # Find the Person model for person_id (lazily create one if needed).
    # Encrypt the data using the person's encryption_key and store the
    # ciphertext in the data table using the client supplied data_id and type.
    # Clients will store this data_id in an event body and use it to retrieve
    # the data later.
  end

  def erase_data_for_person(person_id)
    # Mark the corresponding record in the person table as erased
    # and delete the encryption key.
  end

  def get(data_id)
    data = Data.find(data_id)
    person = Person.find_non_erased(data.person_id)
    if person
      # Decrypt the ciphertext using the key on the person model
      # and return the data.
    else
      # Return a null object for the data's type.
    end
  end
end
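
To make the outline concrete, here’s a minimal in-memory sketch of that interface using AES-256-GCM from Ruby’s OpenSSL standard library. The per-type null objects are collapsed to a single placeholder string here, which is an assumption for illustration:

```ruby
require "openssl"
require "securerandom"

# Minimal in-memory sketch of the secure storage layer described above.
# A real implementation would persist people and data rows; the single
# placeholder null object is an assumption for illustration.
class SecureStorage
  NULL_OBJECT = "<erased>"

  def initialize
    @people = {} # person_id => { key:, erased: }
    @data   = {} # data_id => { person_id:, type:, iv:, tag:, ciphertext: }
  end

  def add(person_id, data_id, type, data)
    person = (@people[person_id] ||= { key: SecureRandom.random_bytes(32), erased: false })
    cipher = OpenSSL::Cipher.new("aes-256-gcm").encrypt
    cipher.key = person[:key]
    iv = cipher.random_iv
    ciphertext = cipher.update(data) + cipher.final
    @data[data_id] = { person_id: person_id, type: type, iv: iv,
                       tag: cipher.auth_tag, ciphertext: ciphertext }
    data_id
  end

  def erase_data_for_person(person_id)
    person = @people[person_id] or return
    person[:erased] = true
    person[:key] = nil # crypto-shredding: the ciphertext is now unreadable
  end

  def get(data_id)
    row = @data.fetch(data_id)
    person = @people[row[:person_id]]
    return NULL_OBJECT if person.nil? || person[:erased]

    cipher = OpenSSL::Cipher.new("aes-256-gcm").decrypt
    cipher.key = person[:key]
    cipher.iv = row[:iv]
    cipher.auth_tag = row[:tag]
    cipher.update(row[:ciphertext]) + cipher.final
  end
end
```

Deleting the person’s key is what makes the erasure stick: the ciphertext can stay where it is, but nothing can read it any more.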

Where does that leave us?

After a person has invoked their right to be forgotten all current state projections will be updated to erase that person’s data. The event store will return null objects for any events that contain data for the person which means that any event processors won’t see that data as they build their projections. It will also contain the RightToBeForgottenInvoked event for the person so consumers can handle that explicitly if required.
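
Tying it together, the read path might look something like this sketch, where a hypothetical reader resolves data references in event bodies through the secure storage layer before consumers see them. The `_data_id` key convention is an assumption for illustration:

```ruby
# Hypothetical sketch: resolve data_id references in an event body through
# the secure storage layer before handing the event to consumers.
class EventReader
  def initialize(secure_storage)
    @secure_storage = secure_storage
  end

  # { "email_data_id" => "d1", "plan" => "pro" } becomes
  # { "email" => <data or null object>, "plan" => "pro" }
  def resolve(body)
    body.each_with_object({}) do |(key, value), resolved|
      if key.end_with?("_data_id")
        resolved[key.sub(/_data_id\z/, "")] = @secure_storage.get(value)
      else
        resolved[key] = value
      end
    end
  end
end
```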

  1. This could be expanded to be more general but we’ll stick with person for the purpose of this post. 

Replacing Google Analytics with GoAccess

3 September, 2018

I removed Google Analytics from my sites1 but still wanted access to some simple request statistics on them. Turns out GoAccess gives me most of what I need by analysing my Apache access logs.

The main challenges I ran into were working out the correct flags for GoAccess, feeding in compressed and uncompressed access logs at the same time, and ignoring junk requests from internet pests.

I pulled together a bash script that handles this, the essence of which is below.

# Analyse all log files
{ cat /var/www/mysite/logs/access.log; zcat /var/www/mysite/logs/access.log.*.gz; } | \
  # Strip out junk requests
  grep -v -E '\.php|jmx-console|\.cgi|phpmyadmin|dbadmin' | \
  # Fire up goaccess using the correct log file format for consolidated apache logs
  goaccess --log-format='%h %^[%d:%t %^] \"%r\" %s %b \"%R\" \"%u\"' --date-format='%d/%b/%Y' --time-format='%H:%M:%S' --ignore-crawlers

  1. A) It does way more than I need. B) I still don’t understand how to use it properly. C) It needlessly collects and sends your information off to the Big G.

Remote Working Strategies

30 March, 2018

I’m almost two years into working remotely full time. It affords me flexibility and focus but it also comes with its challenges. I have a few strategies that help make it work for me and maybe they’ll help you too.

  • I mostly work from a room with a closable door. At the end of my work day I walk away and close said door. I find this helps me disconnect and keep my home and work contexts separate.
  • Change up where you work. It’s good to work in different rooms and different locations. I like to go somewhere where there are people around; even if I’m not speaking to them e.g. I’m often that rando working on his laptop in the food court.
  • I spend a lot of time on video calls. I have this headset by Jabra which has a decent microphone that doesn’t pick up much background noise. It also has a hardware mute button on the cord always within reach. As a bonus, people throw lots of “you look like you work in a call centre” gags at me.
  • Regular lunches in the city with friends help top up my face to face human interaction stores.
  • Your energy levels will vary; do your best to ride them out. Sometimes I am a storm of energy and rip through my work. Other times I struggle to lock in and focus. Stick with it. Hold strong.
  • Get outside regularly. The dark side of not having a commute is that you can end up barely moving all day. I regularly walk around my neighbourhood to get some steps under my belt and sunshine on my face.
  • Enjoy the flexibility.

Event Sourcing Libraries

17 February, 2018

Creating an event sourced, CQRS application is simple enough conceptually but there is a lot of hidden detail when it comes to building them. There are a couple of event sourcing libraries I’ve used that can help.

The first, Event Sourcery, is in Ruby and created by my colleagues at Envato. You can use Postgres as your data store and it gives you what you need to build aggregates and events and projectors and process managers.

The immutability and process supervision baked into Elixir make it a compelling option for implementing these kinds of applications as well. Commanded is written in Elixir, follows a very similar approach to Event Sourcery, and works a treat.

The Convenience of _.chain Without Importing the World

16 June, 2017

I’ve been meaning to work out how to maintain the convenience of Lodash’s _.chain function whilst only including the parts of Lodash that I actually need.

Turns out you can cherry pick the fp version of the functions you need and compose them together with _.flow.

import sortBy from 'lodash/fp/sortBy';
import flatMap from 'lodash/fp/flatMap';
import uniq from 'lodash/fp/uniq';
import reverse from 'lodash/fp/reverse';
import flow from 'lodash/fp/flow';

const exampleData = [
  {
    "happenedAt": "2017-06-15T19:00:00+08:00",
    "projects": [
      "Project One"
    ]
  },
  {
    "happenedAt": "2017-06-16T19:00:00+08:00",
    "projects": [
      "Project One",
      "Project Two"
    ]
  }
];

const listOfProjectsByTime = (entries) => {
  return flow(
    sortBy('happenedAt'),
    flatMap('projects'),
    uniq,
    reverse
  )(entries);
};

You can read more in Lodash’s FP Guide.

Consistent Update Times for Middleman Blog Articles with Git

16 May, 2017

The default template for an Atom feed in Middleman Blog uses the last modified time of an article’s source file as the article’s last update time. This means that if I build the site on two different machines I will get different last updated times on articles in the two atom feeds. I’d rather the built site look the same regardless of where I build it.

The source code for the site lives in a Git repository which means I have a consistent source for update times that I can rely on. So, I’ve added a helper that asks Git for the last commit time of a file and falls back to its last modified time if the file isn’t currently tracked in Git.

helpers do
  def last_update_time(file)
    timestamp = `git log -1 --format=%cd #{file} 2>/dev/null`.strip
    timestamp.empty? ? File.mtime(file) : Time.parse(timestamp)
  end
end

I now use this helper in my Atom template for each article.

xml.entry do
  xml.updated last_update_time(article.source_file).iso8601
  xml.content article.body, "type" => "html"

Adding Webpack to Middleman's External Pipeline

18 February, 2017

I use Middleman to build most of my content-focused websites. With the upgrade to version 4 comes the opportunity to move the asset pipeline out to an external provider such as Webpack.

I struggled to find good examples of how to integrate Webpack 2 with Middleman 4 so I’m documenting the approach I used here. For example code refer to middleman-webpack on Github.

Points of Interest

Build and development commands for webpack are in package.json.

"scripts": {
  "start": "NODE_ENV=development ./node_modules/webpack/bin/webpack.js --watch -d --color",
  "build": "NODE_ENV=production ./node_modules/webpack/bin/webpack.js --bail -p"

The external pipeline configuration in Middleman just calls those tasks.

activate :external_pipeline,
           name: :webpack,
           command: build? ? "yarn run build" : "yarn run start",
           source: ".tmp/dist",
           latency: 1

set :css_dir, 'assets/stylesheets'
set :js_dir, 'assets/javascript'
set :images_dir, 'images'

Assets are loaded by Webpack from the assets folder outside of the Middleman source directory1. Webpack includes any JS and CSS imported by the entry point files in webpack.config.js and generates bundle files into the asset paths Middleman uses.

module.exports = {
  entry: {
    main: './assets/javascript/main.js',
  },

  output: {
    path: __dirname + '/.tmp/dist',
    filename: 'assets/javascript/[name].bundle.js',
  },

  // ...
};


The config for Webpack itself is fairly straightforward. The ExtractText plugin extracts any included CSS into a file named after the entry point it was extracted from.

module.exports = {
  // ...

  plugins: [
    new ExtractTextPlugin("assets/stylesheets/[name].bundle.css"),
  ],

  // ...
};

This means you can include your styles from your JS entry file like normal and Webpack will extract the styles properly2.

Using the standard Middleman helpers to include the generated JS and CSS bundles allows Middleman to handle asset hashing at build time.

  <%= stylesheet_link_tag "main.bundle" %>

  <%= javascript_include_tag "main.bundle" %>


If you want to add modern JS and CSS to a bunch of statically generated pages then Middleman and Webpack work fine.

If, however, you are looking for a boilerplate for building a React SPA then something like react-boilerplate or create-react-app is likely a better fit.

  1. To avoid asset files being processed by both Webpack and Middleman. 

  2. Images are currently managed via Middleman and not Webpack. 

Structuring a Large Elm Application

21 October, 2016

I’m building an application in Elm and have been working on a strategy for breaking it down into smaller pieces.

My preferred approach is a few minor tweaks to the pattern used in this modular version of the Elm TodoMVC application1.

The Structure

The file structure is as follows.

$ tree src
├── Global
│   ├── Model.elm
│   ├── Msg.elm
│   └── Update.elm
├── Main.elm
├── Model.elm
├── Msg.elm
├── TransactionList
│   ├── Components
│   │   ├── FilterForm.elm
│   │   └── TransactionTable.elm
│   ├── Model.elm
│   ├── Msg.elm
│   ├── Update.elm
│   └── View.elm
├── Update.elm
└── View.elm

Global contains global state and messages, and TransactionList is a page in the application.

The top level Model, Msg, Update, and View modules stitch together the lower level components into functions that are passed into the top level Elm application (as shown below).

-- Main.elm
import Html.App as Html
import Model
import Update
import View

main : Program Never
main =
    Html.program
        { init = Model.init
        , update = Update.updateWithCmd
        , subscriptions = Update.subscriptions
        , view = View.rootView }

-- Model.elm
module Model exposing (..)

import Global.Model as Global
import TransactionList.Model as TransactionList

type alias Model =
    { global : Global.Model
    , transactionList : TransactionList.Model
    }

init : ( Model, Cmd msg )
init =
    ( initialModel, Cmd.none )

initialModel : Model
initialModel =
    { global = Global.initialModel
    , transactionList = TransactionList.initialModel
    }

-- Msg.elm
module Msg exposing (..)
import Global.Msg as Global
import TransactionList.Msg as TransactionList

type Msg
    = MsgForGlobal Global.Msg
    | MsgForTransactionList TransactionList.Msg

One of the things I like about this pattern is how readable each top level module is with import aliases.

View, Update, and Global State

The view and update functions compose similarly but I pass the top level model down to both so that they can cherry pick whatever state they need.

The lower level update functions can look at all the state and just return the piece of the model they are responsible for. For example the Global model can have common entities and state specific to the transaction list live in the TransactionList model.

Views are similar in that they can take state from the global model as well as their own model and render as necessary.

-- Update.elm
module Update exposing (..)

import Msg exposing (Msg)
import Model exposing (Model)
import Global.Update as Global
import TransactionList.Update as TransactionList

updateWithCmd : Msg -> Model -> ( Model, Cmd Msg )
updateWithCmd msg model =
    ( update msg model, updateCmd msg )

update : Msg -> Model -> Model
update msg model =
    { model
        | global = Global.update msg model
        , transactionList = TransactionList.update msg model
    }

updateCmd : Msg -> Cmd Msg
updateCmd msg =
    Cmd.batch
        [ TransactionList.updateCmd msg
        ]

-- View.elm
module View exposing (..)

import Model exposing (Model)
import Msg exposing (Msg)
import TransactionList.View as TransactionListView
import Html exposing (..)
import Html.Attributes exposing (..)

rootView : Model -> Html Msg
rootView model =
    div [ class "container" ]
        [ TransactionListView.view model ]

This approach is working pretty well so far, and adding routing shouldn’t be too difficult.

A Step in the Right Direction

28 July, 2016

I pulled the pin on working in the React/Redux space a few months ago after I became tired of the churn. Things were moving quickly and I found myself spending more time wiring together framework code than writing application code. This kind of thing sneaks up on you.

One glaring omission was a preconfigured and opinionated build chain. I moved from starter kit to starter kit chasing the latest webpack-livereload-hot-swap-reload shine. Each kit was subtly different to the one before it. Not just their build components either. I missed having agreed-upon file conventions on where to store your actions, reducers, stores, and friends. It made me appreciate the curation provided by the Ember team in their toolchain.

The creation of Create React App (triggered by Emberconf no less) is a step in the right direction. Bravo.

Infrastructure Koans

21 May, 2016

Envato teams are responsible for the operation of the systems they build.

My team is trying something different to help onboard new people. We’re creating a set of infrastructure koans for them to complete. The koans are tasks that—once completed—will help folks navigate our infrastructure and systems, and thereby acquire skills that are essential for supporting our services.

When someone joins the team a new issue is created in one of our team’s Github repos using the koans document as a template. Once the new team member has completed all of the koans they are added to the on-call rota and assigned a buddy who can help if things get tricky whilst on call.

The koans are not meant to be laid out step by step unless the task is complex or requires unusual knowledge. We hope this encourages folks to explore and internalise more than they would if following a todo list.

Some Example Koans

Set yourself up on PagerDuty and read through past incidents.

View metrics for each of our systems in New Relic.

  • What is the average response time?
  • What does CPU, Memory, and I/O utilisation look like on each server?
  • What are the slowest transactions for the service? Dig into each transaction and see where the time is spent.
  • Check the error analytics tab and look for any relationships between errors and past deployments.
  • Check the availability and capacity reports.
  • Look for trends in the web transactions and database reports.

Look up each of our services in Rollbar.

  • What are the two most common errors being reported?
  • Drill into the details of a recorded error.
  • Are these errors we can live with? Should we create a task to fix them?

Open the AWS CloudWatch console.

  • Look through the available dashboards and metrics.
  • What CloudWatch alerts do we have configured for our production systems?

Open the AWS ECS console.

  • How many task definitions do we have? How many available versions exist for each of them?
  • Which systems make up each of our service clusters?
  • How many repositories do we have in ECR?

Look through our Stackmaster templates and find the results of building stacks from them in CloudFormation.

Access our ELK cluster and run some queries.

Run queries against our production database replicas.

Decrypt some database backups.

SSH into various servers in our infrastructure.

A Docker Container for River5

30 March, 2016

I’m rebuilding a VPS that I use for a bunch of my websites, side projects, and experiments. It hosts some static sites via Apache, a few rails apps, a node app, Postgres, and other bits and bobs. I want maintenance of configuration to be simpler in the future so I’m giving Docker a crack.

One of the side effects of using Docker should be that I can mess about with different tools and rollback to a clean state easily.

My first effort in learning Docker has been to create a container for hosting the River5 river-of-news aggregator by Dave Winer.

It’s up on Github and ripe for pull requests.

Running Webpack and Rails with Guard

3 November, 2015

A while ago I decided to graft React and Flux onto an existing Rails app using Webpack. I opted to avoid hacking on Sprockets and instead used Guard to smooth out the development process.

This is me finally writing about that process.


I installed all the necessary node modules from the root of the Rails app.

Dependencies and scripts from package.json:

  "scripts": {
    "test": "PHANTOMJS_BIN=./node_modules/.bin/phantomjs ./node_modules/karma/bin/karma start karma.config.js",
    "test-debug": "./node_modules/karma/bin/karma start karma.debug.config.js",
    "build": "./node_modules/webpack/bin/webpack.js --config -c"
  "dependencies": {
    "classnames": "^1.2.0",
    "eventemitter3": "^0.1.6",
    "flux": "^2.0.1",
    "keymirror": "^0.1.1",
    "lodash": "^3.5.0",
    "moment": "^2.9.0",
    "react": "^0.13.0",
    "react-bootstrap": "^0.19.1",
    "react-router": "^0.13.2",
    "react-router-bootstrap": "^0.12.1",
    "react-tools": "^0.13.1",
    "webpack": "^1.7.3",
    "whatwg-fetch": "^0.7.0"
  "devDependencies": {
    "jasmine-core": "^2.2.0",
    "jsx-loader": "^0.12.2",
    "karma": "^0.12.31",
    "karma-jasmine": "^0.3.5",
    "karma-jasmine-matchers": "^2.0.0-beta1",
    "karma-mocha": "^0.1.10",
    "karma-mocha-reporter": "^1.0.2",
    "karma-phantomjs-launcher": "^0.1.4",
    "karma-webpack": "^1.5.0",
    "mocha": "^2.2.1",
    "node-inspector": "^0.9.2",
    "phantomjs": "^1.9.16",
    "react-hot-loader": "^1.2.3",
    "webpack-dev-server": "^1.7.0"

Development Server

I wanted a single command to run my development server as per normal Rails development.

Firstly, I set up the Webpack config to read from, and build to, app/assets/javascripts.

From webpack.config.js:

var webpack = require('webpack');

module.exports = {
  // Set the directory where webpack looks when you use 'require'
  context: __dirname + '/app/assets/javascripts',

  // Just one entry for this app
  entry: {
    main: [
      // ...
    ]
  },

  plugins: [
    new webpack.HotModuleReplacementPlugin()
  ],

  output: {
    filename: '[name].bundle.js',
    // Save the bundle in the same directory as our other JS
    path: __dirname + '/app/assets/javascripts',
    // Required for webpack-dev-server
    publicPath: 'http://localhost:8080/assets'
  },

  // The only version of source maps that seemed to consistently work
  devtool: 'inline-source-map',

  // Make sure we can resolve requires to jsx files
  resolve: {
    extensions: ["", ".web.js", ".js", ".jsx"]
  },

  // Would make more sense to use Babel now
  module: {
    loaders: [
      {
        test: /\.jsx?$/,
        exclude: [ /node_modules/, /__tests__/ ],
        loaders: [ 'react-hot', 'jsx-loader?harmony' ]
      }
    ]
  }
};
Then the Rails app includes the built Webpack bundle.

From app/assets/javascripts/application.js:

//= require main.bundle

To get access to the Webpack Dev Server and React hot loader during development I added some asset URL rewrite hackery in development mode.

From config/environments/development.rb:

  # In development send *.bundle.js to the webpack-dev-server running on 8080
  config.action_controller.asset_host = Proc.new { |source|
    if source =~ /\.bundle\.js$/i
      "http://localhost:8080"
    end
  }

Then I kick it all off via Guard using guard-rails and guard-process.

Selections from Guardfile:

# Run the Rails server
guard :rails do
  watch('Gemfile.lock')
  watch(%r{^(config|lib)/.*})
end

# Run the Webpack dev server
guard :process, :name => "Webpack Dev Server", :command => "webpack-dev-server --config webpack.config.js --inline"

All Javascript and JSX files live in app/assets/javascripts and app/assets/javascripts/main.js is the application’s entry point.

To develop locally I run guard, hit http://localhost:3000 like normal, and have React hot swapping goodness when editing Javascript files.


I originally tried integrating Jest for Javascript tests but found it difficult to debug failing tests whilst using it. So, I switched to Karma and Jasmine and had Guard run the tests continually.

From Guardfile:

# Run Karma
guard :process, :name => "Javascript tests", :command => "npm test", dont_stop: true do
  watch(%r{^app/assets/javascripts/.*\.jsx?$})
end

Like Jest, I keep tests next to application code in __tests__ directories. Karma will pick them all up based upon file suffixes.

A test-debug npm script1 runs the tests in a browser for easy debugging.


module.exports = function(config) {
  config.set({
    /*
     * These are the files required to run the tests.
     * The `Function.prototype.bind` polyfill is required by PhantomJS
     * because it uses an older version of JavaScript.
     */
    files: [
      // ...
    ],

    /*
     * The actual tests are preprocessed by the karma-webpack plugin, so that
     * their source can be properly transpiled.
     */
    preprocessors: {
      './app/assets/javascripts/**/__tests__/*Spec.js': ['webpack']
    },

    /* We want to run the tests using the PhantomJS headless browser. */
    browsers: ['PhantomJS'],

    frameworks: ['jasmine', 'jasmine-matchers'],

    reporters: ['mocha'],

    /*
     * The configuration for the karma-webpack plugin.
     * This is very similar to the main webpack.local.config.js.
     */
    webpack: {
      context: __dirname + '/app/assets/javascripts',
      module: {
        loaders: [
          { test: /\.jsx?$/, exclude: /node_modules/, loader: "jsx-loader?harmony" }
        ]
      },
      resolve: {
        extensions: ['', '.js', '.jsx']
      }
    },

    /*
     * Configuration option to turn off verbose logging of webpack compilation.
     */
    webpackMiddleware: {
      noInfo: true
    },

    /*
     * Once the mocha test suite returns, we want to exit from the test runner as well.
     */
    singleRun: true,

    plugins: [
      // ...
    ]
  });
};


When deploying I use Capistrano to build the Javascript with Webpack before allowing Rails to precompile the assets as per normal.

From package.json:

  "scripts": {
    "build": "./node_modules/webpack/bin/webpack.js --config -c"

The Webpack config for prod just has the development server and hot loader config stripped out.

var webpack = require('webpack');

module.exports = {
  // 'context' sets the directory where webpack looks for module files you list in
  // your 'require' statements
  context: __dirname + '/app/assets/javascripts',

  // 'entry' specifies the entry point, where webpack starts reading all
  // dependencies listed and bundling them into the output file.
  // The entrypoint can be anywhere and named anything - here we are storing it in
  // the 'javascripts' directory to follow Rails conventions.
  entry: {
    main: [
      // ...
    ]
  },

  // 'output' specifies the filepath for saving the bundled output generated by
  // webpack.
  // It is an object with options, and you can interpolate the name of the entry
  // file using '[name]' in the filename.
  // You will want to add the bundled filename to your '.gitignore'.
  output: {
    filename: '[name].bundle.js',
    // We want to save the bundle in the same directory as the other JS.
    path: __dirname + '/app/assets/javascripts'
  },

  // Make sure we can resolve requires to jsx files
  resolve: {
    extensions: ["", ".web.js", ".js", ".jsx"]
  },

  module: {
    loaders: [
      {
        test: /\.jsx?$/,
        exclude: [ /node_modules/, /__tests__/ ],
        loaders: [ 'jsx-loader?harmony' ]
      }
    ]
  }
};
The Capistrano tasks in config/deploy.rb:

namespace :deploy do
  namespace :assets do
    desc "Build Javascript via webpack"
    task :webpack do
      on roles(:app) do
        execute("cd #{release_path} && npm install && npm run build")
      end
    end
  end
end

before "deploy:assets:precompile", "deploy:assets:webpack"


I’m not sure if there is a simpler way to incorporate Webpack into Rails nowadays but this approach worked pretty well for me.

  1. As shown in the package.json listing above. 

Impressions of React and Flux

21 June, 2015

I’ve enjoyed using React and Flux recently so I thought I’d write down my initial impressions.


I like that React makes application state explicit. It forces me to think through what is core to the application and what is derived.

Flux deals with React’s one-way data flow by centralising state in stores. This means that I can browse my application’s stores to see all of its state and how that state is manipulated.

One wrinkle in the React state story is how it handles user interactions. It matches user interactions with component state via interactive props. Wiring up onChange events and maintaining a local copy of user interaction state in your components is onerous. Components are surfacing for React that handle this grunt work for you, however.


I like that the actions available in the application—whether triggered by a user or a system—are reified and explicit. They provide a nice snapshot of all the significant interactions in your application.

Data and functions instead of models and methods

I’ve found that the Flux approach means I don’t have client side models in my application. State is just data and behaviour is captured in your stores and actions. Stores hold arrays of data that are updated or manipulated with simple functions and calls to a library like lodash.


In Flux you manage events manually. This leads to a bunch of boilerplate in your stores, views, and actions but does give you the control to emit and subscribe as needed. I’ve followed the recommendation and used multiple stores that build on each other e.g. a paged transaction store that subscribes to a raw transaction store. This keeps each store’s code small and means that individual view hierarchies can subscribe to the stores that compose all of the functionality they require.

This explicit event subscription removes some of the pain you can get from evaluation order in complex sets of dependent computed observables in a framework like Knockout.


React is more library than framework and Flux is more approach than library. This leads to writing a bunch of boilerplate code. Libraries such as Fluxxor alleviate this somewhat, but I prefer to write the boilerplate at the moment so I better understand how all of the pieces involved hang together.

Simplicity of components

I like that components in Flux are independent, simple pieces of Javascript that are easy to comprehend and test.

A different mental model

React’s one way data flow model leads to a declarative style of programming that obviates some of the established ways I write front end code. I often catch myself making design decisions to improve performance and then remind myself that my component code should be declarative because the virtual DOM will only apply changes as required. It takes a while to break these old habits.

Flux itself has multiple moving parts and takes some time to understand. This makes the learning curve a little steep. Keep this diagram handy:

Flux Flow
An example Flux flow


It’s taken a while to get my head around the React+Flux approach. It is lower level than other front end libraries and frameworks but components are being created to reduce the amount of glue code that needs to be written.

I like that React+Flux forces me to think about building rich client side applications differently. The separation and arrangement of code it encourages feels cleaner than frameworks that use two-way binding.

Using ctags on modern Javascript

18 April, 2015

I use Vim as my text editor and ctags for source code navigation.

I’ve found ctags’s default JavaScript tagging to be lacking so I’ve added the following to my ctags config file to handle some of the newer ES6/ES2015 syntax such as classes1.

Note that the listing below contains comments which ctags config files don’t support. You can find the actual file on Github.


// Constants

// A constant: AAA0_123 = { or AAA0_123: {
--regex-js=/[ \t.]([A-Z][A-Z0-9._$]+)[ \t]*[=:][ \t]*([0-9"'\[\{]|null)/\1/n,constant/

// Properties

// .name = {
--regex-js=/\.([A-Za-z0-9._$]+)[ \t]*=[ \t]*\{/\1/o,object/

// "name": {
--regex-js=/['"]*([A-Za-z0-9_$]+)['"]*[ \t]*:[ \t]*\{/\1/o,object/

// parent["name"] = {
--regex-js=/([A-Za-z0-9._$]+)\[["']([A-Za-z0-9_$]+)["']\][ \t]*=[ \t]*\{/\1\.\2/o,object/

// Classes

// name = (function()
--regex-js=/([A-Za-z0-9._$]+)[ \t]*=[ \t]*\(function\(\)/\1/c,class/

// "name": (function()
--regex-js=/['"]*([A-Za-z0-9_$]+)['"]*:[ \t]*\(function\(\)/\1/c,class/

// class ClassName
--regex-js=/class[ \t]+([A-Za-z0-9._$]+)[ \t]*/\1/c,class/

// ClassName = React.createClass
--regex-js=/([A-Za-z$][A-Za-z0-9_$()]+)[ \t]*=[ \t]*[Rr]eact.createClass[ \t]*\(/\1/c,class/

// Capitalised object: Name = whatever({
--regex-js=/([A-Z][A-Za-z0-9_$]+)[ \t]*=[ \t]*[A-Za-z0-9_$]*[ \t]*[{(]/\1/c,class/

// Capitalised object: Name: whatever({
--regex-js=/([A-Z][A-Za-z0-9_$]+)[ \t]*:[ \t]*[A-Za-z0-9_$]*[ \t]*[{(]/\1/c,class/

// Functions

// name = function(
--regex-js=/([A-Za-z$][A-Za-z0-9_$]+)[ \t]*=[ \t]*function[ \t]*\(/\1/f,function/

// Methods

// Class method or function (this matches too many things which I filter out separately)
// name() {
--regex-js=/(function)*[ \t]*([A-Za-z$_][A-Za-z0-9_$]+)[ \t]*\([^)]*\)[ \t]*\{/\2/f,function/

// "name": function(
--regex-js=/['"]*([A-Za-z$][A-Za-z0-9_$]+)['"]*:[ \t]*function[ \t]*\(/\1/m,method/

// parent["name"] = function(
--regex-js=/([A-Za-z0-9_$]+)\[["']([A-Za-z0-9_$]+)["']\][ \t]*=[ \t]*function[ \t]*\(/\2/m,method/

Some of these matchers are too eager but a lack of negative look behinds in the regex engine ctags uses makes that a pain to avoid. Instead I have a script which executes ctags and then filters obviously useless tags from the tag file afterwards.

#!/usr/bin/env bash

set -e

# ctags doesn't handle negative look behinds so instead this script
# strips false positives out of a tags file.

ctags "$@"

# Work out which tags file ctags wrote to (passed via -f, defaulting to "tags").
FILE="tags"
while [[ $# > 1 ]]; do
  key="$1"

  case $key in
    -f)
      FILE="$2"
      shift
      ;;
  esac

  shift
done

# Filter out false matches from class method regex
sed -i '' -E '/^(if|switch|function|module\.exports|it|describe)	.+language:js$/d' "$FILE"

# Filter out false matches from object definition regex
sed -i '' -E '/var[ 	]+[a-zA-Z0-9_$]+[ 	]+=[ 	]+require\(.+language:js$/d' "$FILE"

I trigger the script from within Vim automatically using a plugin I wrote called tagman.vim.

  1. I found tools such as jsctags didn’t do the job and, as always, I’d prefer a more minimal approach.