Dance, Computer, Dance

by Ray Grasso

Posts

Pieces I've written.

The Convenience of _.chain Without Importing the World

16 June, 2017

I’ve been meaning to work out how to keep the convenience of Lodash’s _.chain function whilst only including the parts of Lodash that I actually need.

Turns out you can cherry-pick the fp versions of the functions you need and compose them together with _.flow.

import sortBy from 'lodash/fp/sortBy';
import flatMap from 'lodash/fp/flatMap';
import uniq from 'lodash/fp/uniq';
import reverse from 'lodash/fp/reverse';
import flow from 'lodash/fp/flow';

const exampleData = [
  {
    "happenedAt": "2017-06-15T19:00:00+08:00",
    "projects": [
      "Project One"
    ],
  },
  {
    "happenedAt": "2017-06-16T19:00:00+08:00",
    "projects": [
      "Project One",
      "Project Two"
    ],
  },
];

const listOfProjectsByTime = (entries) => {
  return flow(
    sortBy('happenedAt'),
    reverse,
    flatMap('projects'),
    uniq,
  )(entries);
};
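
For example, applying it to the data above returns the unique project names, with projects from the most recent entries first.

listOfProjectsByTime(exampleData);
// => ["Project One", "Project Two"]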

You can read more in Lodash’s FP Guide.

Consistent Update Times for Middleman Blog Articles with Git

16 May, 2017

The default template for an Atom feed in Middleman Blog uses the last modified time of an article’s source file as the article’s last update time. This means that if I build the site on two different machines I will get different last updated times on articles in the two Atom feeds. I’d rather the built site look the same regardless of where I build it.

The source code for the site lives in a Git repository which means I have a consistent source for update times that I can rely on. So, I’ve added a helper that asks Git for the last commit time of a file and falls back to its last modified time if the file isn’t currently tracked in Git.

helpers do
  def last_update_time(file)
    # Ask Git for the file's last commit time; fall back to the file's
    # modification time if it isn't tracked.
    Time.parse `git log -1 --format=%cd #{file} 2>/dev/null`
  rescue
    File.mtime(file)
  end
end

I now use this helper in my Atom template for each article.

xml.entry do
  xml.published article.date.to_time.iso8601
  xml.updated last_update_time(article.source_file).iso8601
  xml.content article.body, "type" => "html"
end

Adding Webpack to Middleman's External Pipeline

18 February, 2017

I use Middleman to build most of my content-focused websites. With the upgrade to version 4 comes the opportunity to move the asset pipeline out to an external provider such as Webpack.

I struggled to find good examples of how to integrate Webpack 2 with Middleman 4 so I’m documenting the approach I used here. For example code refer to middleman-webpack on Github.

Points of Interest

Build and development commands for Webpack are in package.json.

"scripts": {
  "start": "NODE_ENV=development ./node_modules/webpack/bin/webpack.js --watch -d --color",
  "build": "NODE_ENV=production ./node_modules/webpack/bin/webpack.js --bail -p"
},

The external pipeline configuration in Middleman just calls those tasks.

activate :external_pipeline,
           name: :webpack,
           command: build? ? "yarn run build" : "yarn run start",
           source: ".tmp/dist",
           latency: 1

set :css_dir, 'assets/stylesheets'
set :js_dir, 'assets/javascript'
set :images_dir, 'images'

Assets are loaded by Webpack from the assets folder outside of the Middleman source directory [1]. Webpack includes any JS and CSS imported by the entry point files in webpack.config.js and generates bundle files into the asset paths Middleman uses.

module.exports = {
  entry: {
    main: './assets/javascript/main.js',
  },

  output: {
    path: __dirname + '/.tmp/dist',
    filename: 'assets/javascript/[name].bundle.js',
  },

  // ...

}

The config for Webpack itself is fairly straightforward. The ExtractText plugin extracts any included CSS into a file named after the entry point it was extracted from.

module.exports = {
  // ...

  plugins: [
    new ExtractTextPlugin("assets/stylesheets/[name].bundle.css"),
  ],

  // ...
}

This means you can include your styles from your JS entry file like normal and Webpack will extract the styles properly [2].
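
For reference, the matching loader rule looks something like the sketch below. This assumes extract-text-webpack-plugin 2 and a plain CSS loader chain; your loaders may well differ.

// webpack.config.js (a sketch, not the full config)
const ExtractTextPlugin = require('extract-text-webpack-plugin');

module.exports = {
  // ...

  module: {
    rules: [
      {
        test: /\.css$/,
        // Extract imported CSS into the bundle named after the entry point,
        // falling back to style-loader when extraction is disabled
        use: ExtractTextPlugin.extract({
          fallback: 'style-loader',
          use: 'css-loader',
        }),
      },
    ],
  },

  // ...
}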

Using the standard Middleman helpers to include the generated JS and CSS bundles allows Middleman to handle asset hashing at build time.

<head>
  <%= stylesheet_link_tag "main.bundle" %>
</head>

<body>
  <%= javascript_include_tag "main.bundle" %>
</body>

Finally

If you want to add modern JS and CSS to a bunch of statically generated pages then Middleman and Webpack work fine.

If, however, you are looking for a boilerplate for building a React SPA then something like react-boilerplate or create-react-app is likely a better fit.

  [1] To avoid asset files being processed by both Webpack and Middleman.

  [2] Images are currently managed via Middleman and not Webpack.

Structuring a Large Elm Application

21 October, 2016

I’m building an application in Elm and have been working on a strategy for breaking it down into smaller pieces.

My preferred approach is a few minor tweaks to the pattern used in this modular version of the Elm TodoMVC application.

The Structure

The file structure is as follows.

$ tree src
src
├── Global
│   ├── Model.elm
│   ├── Msg.elm
│   └── Update.elm
├── Main.elm
├── Model.elm
├── Msg.elm
├── TransactionList
│   ├── Components
│   │   ├── FilterForm.elm
│   │   └── TransactionTable.elm
│   ├── Model.elm
│   ├── Msg.elm
│   ├── Update.elm
│   └── View.elm
├── Update.elm
└── View.elm

Global contains global state and messages, and TransactionList is a page in the application.

The top level Model, Msg, Update, and View modules stitch together the lower level components into functions that are passed into the top level Elm application (as shown below).

--
-- Main.elm
--
import Html.App as Html
import Model
import Update
import View

main : Program Never
main =
    Html.program
        { init = Model.init
        , update = Update.updateWithCmd
        , subscriptions = Update.subscriptions
        , view = View.rootView }

--
-- Model.elm
--
module Model exposing (..)

import Global.Model as Global
import TransactionList.Model as TransactionList

type alias Model =
    { global : Global.Model
    , transactionList : TransactionList.Model
    }

init : ( Model, Cmd msg )
init =
    ( initialModel, Cmd.none )

initialModel : Model
initialModel =
    { global = Global.initialModel
    , transactionList = TransactionList.initialModel
    }
  
--
-- Msg.elm
--
module Msg exposing (..)
import Global.Msg as Global
import TransactionList.Msg as TransactionList

type Msg
    = MsgForGlobal Global.Msg
    | MsgForTransactionList TransactionList.Msg

One of the things I like about this pattern is how readable each top level module is with import aliases.

View, Update, and Global State

The view and update functions compose similarly but I pass the top level model down to both so that they can cherry pick whatever state they need.

The lower level update functions can look at all of the state but return only the piece of the model they are responsible for. For example, the Global model can hold common entities, while state specific to the transaction list lives in the TransactionList model.

Views are similar in that they can take state from the global model as well as their own model and render as necessary.

--
-- Update.elm
--
module Update exposing (..)

import Msg exposing (Msg)
import Model exposing (Model)
import Global.Update as Global
import TransactionList.Update as TransactionList

updateWithCmd : Msg -> Model -> ( Model, Cmd Msg )
updateWithCmd msg model =
    ( update msg model, updateCmd msg )

update : Msg -> Model -> Model
update msg model =
    { model
        | global = Global.update msg model
        , transactionList = TransactionList.update msg model
    }

updateCmd : Msg -> Cmd Msg
updateCmd msg =
    Cmd.batch
       [ TransactionList.updateCmd msg
       ]
       
--
-- View.elm
--
module View exposing (..)

import Model exposing (Model)
import Msg exposing (Msg)
import TransactionList.View as TransactionListView
import Html exposing (..)
import Html.Attributes exposing (..)

rootView : Model -> Html Msg
rootView model =
    div [ class "container" ] 
        [ TransactionListView.view model ]

This approach has been working well so far, and it seems adding routing shouldn’t be too difficult.

A Step in the Right Direction

28 July, 2016

I pulled the pin on working in the React/Redux space a few months ago after I became tired of the churn. Things were moving quickly and I found myself spending more time wiring together framework code than writing application code. This kind of thing sneaks up on you.

One glaring omission was a preconfigured and opinionated build chain. I moved from starter kit to starter kit chasing the latest webpack-livereload-hot-swap-reload shine. Each kit was subtly different to the one before it, and not just in its build components either. I missed having agreed-upon conventions for where to store actions, reducers, stores, and friends. It made me appreciate the curation provided by the Ember team in their toolchain.

The creation of Create React App (triggered by Emberconf no less) is a step in the right direction. Bravo.

Infrastructure Koans

21 May, 2016

Envato teams are responsible for the operation of the systems they build.

My team is trying something different to help onboard new people. We’re creating a set of infrastructure koans for them to complete. The koans are tasks that—once completed—will help folks navigate our infrastructure and systems, and thereby acquire skills that are essential for supporting our services.

When someone joins the team a new issue is created in one of our team’s Github repos using the koans document as a template. Once the new team member has completed all of the koans they are added to the on-call rota and assigned a buddy who can help if things get tricky whilst on call.

The koans are not meant to be laid out step by step unless the task is complex or requires unusual knowledge. We hope this encourages folks to explore and internalise more than they would if following a todo list.

Some Example Koans

Set yourself up on PagerDuty and read through past incidents.

View metrics for each of our systems in New Relic.

  • What is the average response time?
  • What does CPU, Memory, and I/O utilisation look like on each server?
  • What are the slowest transactions for the service? Dig into each transaction and see where the time is spent.
  • Check the error analytics tab and look for any relationships between errors and past deployments.
  • Check the availability and capacity reports.
  • Look for trends in the web transactions and database reports.

Look up each of our services in Rollbar.

  • What are the two most common errors being reported?
  • Drill into the details of a recorded error.
  • Are these errors we can live with? Should we create a task to fix them?

Open the AWS CloudWatch console.

  • Look through the available dashboards and metrics.
  • What CloudWatch alerts do we have configured for our production systems?

Open the AWS ECS console.

  • How many task definitions do we have? How many available versions exist for each of them?
  • Which systems make up each of our service clusters?
  • How many repositories do we have in ECR?

Look through our Stackmaster templates and find the results of building stacks from them in CloudFormation.

Access our ELK cluster and run some queries.

Run queries against our production database replicas.

Decrypt some database backups.

SSH into various servers in our infrastructure.

A Docker Container for River5

30 March, 2016

I’m rebuilding a VPS that I use for a bunch of my websites, side projects, and experiments. It hosts some static sites via Apache, a few rails apps, a node app, Postgres, and other bits and bobs. I want maintenance of configuration to be simpler in the future so I’m giving Docker a crack.

One of the side effects of using Docker should be that I can mess about with different tools and roll back to a clean state easily.

My first effort in learning Docker has been to create a container for hosting the River5 river-of-news aggregator by Dave Winer.

It’s up on Github and ripe for pull requests.

Running Webpack and Rails with Guard

3 November, 2015

A while ago I decided to graft React and Flux onto an existing Rails app using Webpack. I opted to avoid hacking on Sprockets and instead used Guard to smooth out the development process.

This is me finally writing about that process.

Node

I installed all the necessary node modules from the root of the Rails app.

Dependencies and scripts from package.json:

{
  "scripts": {
    "test": "PHANTOMJS_BIN=./node_modules/.bin/phantomjs ./node_modules/karma/bin/karma start karma.config.js",
    "test-debug": "./node_modules/karma/bin/karma start karma.debug.config.js",
    "build": "./node_modules/webpack/bin/webpack.js --config webpack.prod.config.js -c"
  },
  "dependencies": {
    "classnames": "^1.2.0",
    "eventemitter3": "^0.1.6",
    "flux": "^2.0.1",
    "keymirror": "^0.1.1",
    "lodash": "^3.5.0",
    "moment": "^2.9.0",
    "react": "^0.13.0",
    "react-bootstrap": "^0.19.1",
    "react-router": "^0.13.2",
    "react-router-bootstrap": "^0.12.1",
    "react-tools": "^0.13.1",
    "webpack": "^1.7.3",
    "whatwg-fetch": "^0.7.0"
  },
  "devDependencies": {
    "jasmine-core": "^2.2.0",
    "jsx-loader": "^0.12.2",
    "karma": "^0.12.31",
    "karma-jasmine": "^0.3.5",
    "karma-jasmine-matchers": "^2.0.0-beta1",
    "karma-mocha": "^0.1.10",
    "karma-mocha-reporter": "^1.0.2",
    "karma-phantomjs-launcher": "^0.1.4",
    "karma-webpack": "^1.5.0",
    "mocha": "^2.2.1",
    "node-inspector": "^0.9.2",
    "phantomjs": "^1.9.16",
    "react-hot-loader": "^1.2.3",
    "webpack-dev-server": "^1.7.0"
  }
}

Development Server

I wanted a single command to run my development server as per normal Rails development.

Firstly, I set up the Webpack config to read from, and build to app/assets/javascripts.

From webpack.config.js:

var webpack = require('webpack');

module.exports = {
  // Set the directory where webpack looks when you use 'require'
  context: __dirname + '/app/assets/javascripts',

  // Just one entry for this app
  entry: {
    main: [
      'webpack-dev-server/client?http://localhost:8080/assets',
      'webpack/hot/only-dev-server',
      './main.js'
    ]
  },

  plugins: [
    new webpack.HotModuleReplacementPlugin()
  ],

  output: {
    filename: '[name].bundle.js',
    // Save the bundle in the same directory as our other JS
    path: __dirname + '/app/assets/javascripts',
    // Required for webpack-dev-server
    publicPath: 'http://localhost:8080/assets'
  },

  // The only version of source maps that seemed to consistently work
  devtool: 'inline-source-map',

  // Make sure we can resolve requires to jsx files
  resolve: {
    extensions: ["", ".web.js", ".js", ".jsx"]
  },

  // Would make more sense to use Babel now
  module: {
    loaders: [
      {
        test: /\.jsx?$/,
        exclude: [ /node_modules/, /__tests__/ ],
        loaders: [ 'react-hot', 'jsx-loader?harmony' ]
      }
    ]
  }
}

Then the Rails app includes the built Webpack bundle.

From app/assets/javascripts/application.js:

//= require main.bundle

To get access to the Webpack Dev Server and React hot loader during development I added some asset URL rewrite hackery in development mode.

From config/environments/development.rb:

  # In development send *.bundle.js to the webpack-dev-server running on 8080
  config.action_controller.asset_host = Proc.new { |source|
    if source =~ /\.bundle\.js$/i
      "http://localhost:8080"
    end
  }

Then I kick it all off via Guard using guard-rails and guard-process.

Selections from Guardfile:

# Run the Rails server
guard :rails do
  watch('Gemfile.lock')
  watch(%r{^(config|lib)/.*})
end

# Run the Webpack dev server
guard :process, :name => "Webpack Dev Server", :command => "webpack-dev-server --config webpack.config.js --inline"

All Javascript and JSX files live in app/assets/javascripts and app/assets/javascripts/main.js is the application’s entry point.
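
For context, the entry point itself can be as small as mounting the root component. A hypothetical minimal main.js (the component name and path are illustrative):

// app/assets/javascripts/main.js -- hypothetical minimal entry point
var React = require('react');
var App = require('./components/App'); // illustrative root component

// React 0.13 renders straight from the React object (ReactDOM arrived in 0.14)
React.render(React.createElement(App), document.getElementById('app'));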

To develop locally I run guard, hit http://localhost:3000 like normal, and have React hot swapping goodness when editing Javascript files.

Tests

I originally tried integrating Jest for Javascript tests but found it difficult to debug failing tests whilst using it. So, I switched to Karma and Jasmine and had Guard run the tests continually.

From Guardfile:

# Run Karma
guard :process, :name => "Javascript tests", :command => "npm test", dont_stop: true do
  watch(%r{Spec.js$})
  watch(%r{.jsx?$})
end

Like Jest, I keep tests next to application code in __tests__ directories. Karma will pick them all up based upon file suffixes.

A test-debug npm script [1] runs the tests in a browser for easy debugging.

karma.config.js:

module.exports = function(config) {
  config.set({

    /**
     * These are the files required to run the tests.
     *
     * The `Function.prototype.bind` polyfill is required by PhantomJS
     * because it uses an older version of JavaScript.
     */
    files: [
      './app/assets/javascripts/test/polyfill.js',
      './app/assets/javascripts/**/__tests__/*Spec.js'
    ],

    /**
     * The actual tests are preprocessed by the karma-webpack plugin, so that
     * their source can be properly transpiled.
     */
    preprocessors: {
      './app/assets/javascripts/**/__tests__/*Spec.js': ['webpack'],
    },

    /* We want to run the tests using the PhantomJS headless browser. */
    browsers: ['PhantomJS'],

    frameworks: ['jasmine', 'jasmine-matchers'],

    reporters: ['mocha'],

    /**
     * The configuration for the karma-webpack plugin.
     *
     * This is very similar to the main webpack.config.js.
     */
    webpack: {
      context: __dirname + '/app/assets/javascripts',
      module: {
        loaders: [
          { test: /\.jsx?$/, exclude: /node_modules/, loader: "jsx-loader?harmony"}
        ]
      },
      resolve: {
        extensions: ['', '.js', '.jsx']
      }
    },

    /**
     * Configuration option to turn off verbose logging of webpack compilation.
     */
    webpackMiddleware: {
      noInfo: true
    },

    /**
     * Once the mocha test suite returns, we want to exit from the test runner as well.
     */
    singleRun: true,

    plugins: [
      'karma-webpack',
      'karma-jasmine',
      'karma-jasmine-matchers',
      'karma-phantomjs-launcher',
      'karma-mocha-reporter'
    ],
  });
}

Deployment

When deploying I use Capistrano to build the Javascript with Webpack before allowing Rails to precompile the assets as per normal.

From package.json:

{
  "scripts": {
    "build": "./node_modules/webpack/bin/webpack.js --config webpack.prod.config.js -c"
  }
}

The Webpack config for prod just has the development server and hot loader config stripped out.

webpack.prod.config.js:

var webpack = require('webpack');

module.exports = {
  // 'context' sets the directory where webpack looks for module files you list in
  // your 'require' statements
  context: __dirname + '/app/assets/javascripts',

  // 'entry' specifies the entry point, where webpack starts reading all
  // dependencies listed and bundling them into the output file.
  // The entrypoint can be anywhere and named anything - here we are storing it in
  // the 'javascripts' directory to follow Rails conventions.
  entry: {
    main: [
      './main.js'
    ]
  },

  // 'output' specifies the filepath for saving the bundled output generated by
  // webpack.
  // It is an object with options, and you can interpolate the name of the entry
  // file using '[name]' in the filename.
  // You will want to add the bundled filename to your '.gitignore'.
  output: {
    filename: '[name].bundle.js',
    // We want to save the bundle in the same directory as the other JS.
    path: __dirname + '/app/assets/javascripts',
  },

  // Make sure we can resolve requires to jsx files
  resolve: {
    extensions: ["", ".web.js", ".js", ".jsx"]
  },

  module: {
    loaders: [
      {
        test: /\.jsx?$/,
        exclude: [ /node_modules/, /__tests__/ ],
        loaders: [ 'jsx-loader?harmony' ]
      }
    ]
  }
};

The Capistrano tasks in config/deploy.rb:

namespace :deploy do
  namespace :assets do
    desc "Build Javascript via webpack"
    task :webpack do
      on roles(:app) do
        execute("cd #{release_path} && npm install && npm run build")
      end
    end
  end
end

before "deploy:assets:precompile", "deploy:assets:webpack"

Finally

I’m not sure if there is a simpler way to incorporate Webpack into Rails nowadays but this approach worked pretty well for me.

  [1] As shown in the package.json listing above.

Impressions of React and Flux

21 June, 2015

I’ve enjoyed using React and Flux recently so I thought I’d write down my initial impressions.

State

I like that React makes application state explicit. It forces me to think through what is core to the application and what is derived.

Flux deals with React’s one-way data flow by centralising state in stores. This means that I can browse my application’s stores to see all of its state and how that state is manipulated.

One wrinkle in the React state story is how it handles user interactions. React ties user input to component state through props such as value and onChange, and wiring up those change events while maintaining a local copy of user interaction state in your components is onerous. Components are surfacing for React that handle this grunt work for you, however.
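
As a sketch of that grunt work, here is a minimal controlled input in the createClass style of the day (names illustrative):

var React = require('react');

// The component keeps a local copy of what the user has typed and
// writes it back to state on every change event.
var SearchBox = React.createClass({
  getInitialState: function() {
    return { query: '' };
  },

  handleChange: function(event) {
    this.setState({ query: event.target.value });
  },

  render: function() {
    return React.createElement('input', {
      value: this.state.query,    // state flows into the view
      onChange: this.handleChange // user input flows back into state
    });
  }
});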

Actions

I like that the actions available in the application—whether triggered by a user or a system—are reified and explicit. They provide a nice snapshot of all the significant interactions in your application.

Data and functions instead of models and methods

I’ve found that the Flux approach means I don’t have client side models in my application. State is just data and behaviour is captured in your stores and actions. Stores hold arrays of data that are updated or manipulated with simple functions and calls to a library like lodash.

Events

In Flux you manage events manually. This leads to a bunch of boilerplate in your stores, views, and actions but does give you the control to emit and subscribe as needed. I’ve followed the recommendation and used multiple stores that build on each other e.g. a paged transaction store that subscribes to a raw transaction store. This keeps each store’s code small and means that individual view hierarchies can subscribe to the stores that compose all of the functionality they require.
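
A sketch of that layering, with the dispatcher plumbing omitted and names illustrative: a raw store emits a change event, and a derived store recalculates its pages whenever the raw store changes.

var EventEmitter = require('events').EventEmitter;
var _ = require('lodash');

// Raw store: holds the transactions and announces updates.
var transactionStore = _.assign(new EventEmitter(), {
  transactions: [],
  load: function(transactions) {
    this.transactions = transactions;
    this.emit('change');
  }
});

// Derived store: builds on the raw store and emits its own change event.
var pagedTransactionStore = _.assign(new EventEmitter(), {
  pageSize: 25,
  pages: [],
  recalculate: function() {
    this.pages = _.chunk(transactionStore.transactions, this.pageSize);
    this.emit('change');
  }
});

// The derived store subscribes to the raw store; view hierarchies subscribe
// to whichever store composes the functionality they need.
transactionStore.on('change', function() {
  pagedTransactionStore.recalculate();
});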

This explicit event subscription removes some of the pain you can get from evaluation order in complex sets of dependent computed observables in a framework like Knockout.

Boilerplate

React is more library than framework and Flux is more approach than library. This leads to writing a bunch of boilerplate code. Libraries such as Fluxxor alleviate this somewhat, but I prefer to write the boilerplate at the moment so I better understand how all of the pieces involved hang together.

Simplicity of components

I like that components in Flux are independent, simple pieces of Javascript that are easy to comprehend and test.

A different mental model

React’s one way data flow model leads to a declarative style of programming that obviates some of the established ways I write front end code. I often catch myself making design decisions to improve performance and then remind myself that my component code should be declarative because the virtual DOM will only apply changes as required. It takes a while to break these old habits.

Flux itself has multiple moving parts and takes some time to understand. This makes the learning curve a little steep. Keep this diagram handy:

[Figure: an example Flux flow]

Finally

It’s taken a while to get my head around the React+Flux approach. It is lower level than other front end libraries and frameworks but components are being created to reduce the amount of glue code that needs to be written.

I like that React+Flux forces me to think about building rich client side applications differently. The separation and arrangement of code it encourages feels cleaner than frameworks that use two-way binding.

Using ctags on modern Javascript

18 April, 2015

I use Vim as my text editor and ctags for source code navigation.

I’ve found ctags’ default JavaScript tagging to be lacking so I’ve added the following to my ctags config file to handle some of the newer ES6/ES2015 syntax such as classes [1].

Note that the listing below contains comments which ctags config files don’t support. You can find the actual file on Github.

--languages=-javascript
--langdef=js
--langmap=js:.js
--langmap=js:+.jsx

//
// Constants
//

// A constant: AAA0_123 = { or AAA0_123: {
--regex-js=/[ \t.]([A-Z][A-Z0-9._$]+)[ \t]*[=:][ \t]*([0-9"'\[\{]|null)/\1/n,constant/

//
// Properties
//

// .name = {
--regex-js=/\.([A-Za-z0-9._$]+)[ \t]*=[ \t]*\{/\1/o,object/

// "name": {
--regex-js=/['"]*([A-Za-z0-9_$]+)['"]*[ \t]*:[ \t]*\{/\1/o,object/

// parent["name"] = {
--regex-js=/([A-Za-z0-9._$]+)\[["']([A-Za-z0-9_$]+)["']\][ \t]*=[ \t]*\{/\1\.\2/o,object/

//
// Classes
//

// name = (function()
--regex-js=/([A-Za-z0-9._$]+)[ \t]*=[ \t]*\(function\(\)/\1/c,class/

// "name": (function()
--regex-js=/['"]*([A-Za-z0-9_$]+)['"]*:[ \t]*\(function\(\)/\1/c,class/

// class ClassName
--regex-js=/class[ \t]+([A-Za-z0-9._$]+)[ \t]*/\1/c,class/

// ClassName = React.createClass
--regex-js=/([A-Za-z$][A-Za-z0-9_$()]+)[ \t]*=[ \t]*[Rr]eact.createClass[ \t]*\(/\1/c,class/

// Capitalised object: Name = whatever({
--regex-js=/([A-Z][A-Za-z0-9_$]+)[ \t]*=[ \t]*[A-Za-z0-9_$]*[ \t]*[{(]/\1/c,class/

// Capitalised object: Name: whatever({
--regex-js=/([A-Z][A-Za-z0-9_$]+)[ \t]*:[ \t]*[A-Za-z0-9_$]*[ \t]*[{(]/\1/c,class/

//
// Functions
//

// name = function(
--regex-js=/([A-Za-z$][A-Za-z0-9_$]+)[ \t]*=[ \t]*function[ \t]*\(/\1/f,function/

//
// Methods
//

// Class method or function (this matches too many things which I filter out separately)
// name() {
--regex-js=/(function)*[ \t]*([A-Za-z$_][A-Za-z0-9_$]+)[ \t]*\([^)]*\)[ \t]*\{/\2/f,function/

// "name": function(
--regex-js=/['"]*([A-Za-z$][A-Za-z0-9_$]+)['"]*:[ \t]*function[ \t]*\(/\1/m,method/

// parent["name"] = function(
--regex-js=/([A-Za-z0-9_$]+)\[["']([A-Za-z0-9_$]+)["']\][ \t]*=[ \t]*function[ \t]*\(/\2/m,method/

Some of these matchers are too eager, but a lack of negative look-behinds in the regex engine ctags uses makes that a pain to avoid. Instead I have a script which executes ctags and then filters obviously useless tags from the tag file afterwards.

#!/usr/bin/env bash

set -e

# ctags doesn't handle negative look behinds so instead this script
# strips false positives out of a tags file.

ctags "$@"

FILE="tags"

while [[ $# -gt 1 ]]
do
key="$1"

case $key in
    -f)
    FILE="$2"
    shift
    ;;
esac
shift
done

# Filter out false matches from class method regex
sed -i '' -E '/^(if|switch|function|module\.exports|it|describe)	.+language:js$/d' $FILE

# Filter out false matches from object definition regex
sed -i '' -E '/var[ 	]+[a-zA-Z0-9_$]+[ 	]+=[ 	]+require\(.+language:js$/d' $FILE

I trigger the script from within Vim automatically using a plugin I wrote called tagman.vim.

  [1] I found tools such as jsctags didn’t do the job and, as always, I’d prefer a more minimal approach.

Making chruby and binstubs play nice

11 February, 2015

Like a gentleman I use chruby and Bundler to manage Ruby versions and gems in my projects.

Instead of typing bundle exec to run gem executables within a project, I prefer saving keystrokes and using an executable’s name on its own [1]. I also want to avoid installing another tool like Gem home.

So off to binstubs land I go. Bundler generates them for you and these days Rails even ships with a few as standard. These stub files live in your project and ensure the right set of gems for your project are loaded when they’re executed.

Security risks aside I could just prepend my path with ./bin: and walk away—except that chruby auto-switching spoils the party. When I enter a project directory with a .ruby-version file, chruby prepends the current Ruby version paths at the beginning of PATH thereby matching before my previously prepended ./bin:.

Chruby recommends using rubygems-bundler but I don’t want to install another gem to get this to work. So I tweaked my zsh setup to use preexec_functions like chruby to patch my PATH. I add my function to preexec_functions after chruby loads so that my code patches the PATH after chruby does its work.

As for security I use the same scheme as Tim Pope. Add a git alias for marking a git repository as trusted and then only add a project’s bin directory to PATH if it is marked as such.

Now I just mark a repo as trusted via git trust, and its local binstubs are automatically added to my path.

Changes in my .zshenv:

# Remove the need for bundle exec ... or ./bin/...
# by adding ./bin to path if the current project is trusted

function set_local_bin_path() {
  # Replace any existing local bin paths with our new one
  export PATH="${1:-""}`echo "$PATH"|sed -e 's,[^:]*\.git/[^:]*bin:,,g'`"
}

function add_trusted_local_bin_to_path() {
  if [[ -d "$PWD/.git/safe" ]]; then
    # We're in a trusted project directory so update our local bin path
    set_local_bin_path "$PWD/.git/safe/../../bin:"
  fi
}

# Make sure add_trusted_local_bin_to_path runs after chruby so we
# prepend the default chruby gem paths
if [[ -n "$ZSH_VERSION" ]]; then
  if [[ ! "$preexec_functions" == *add_trusted_local_bin_to_path* ]]; then
    preexec_functions+=("add_trusted_local_bin_to_path")
  fi
fi

The git trust alias from my .gitconfig:

[alias]
  # Mark a repo as trusted
  trust = "!mkdir -p .git/safe"

  [1] Even though I’ve aliased bundle exec to "be" in my shell I still feel like an animal when I have to type it.

Adding jspm to a Rails app

6 February, 2015

Recent posts from Glen Maddern and Thoughtbot inspired me to try my hand at some ES6.

I put together a toy app using jspm and liked what I saw.

I then noticed that the latest beta release of React.js supports ES6 classes.

This led me to dust off an old side project that uses React.js and add jspm to it. The app is built with Rails, so I spent a little time working out how to fit jspm into that setup.

I’ve stayed away from the asset pipeline and placed the libraries and application Javascript managed by jspm in the public folder of the app directly.

I’ve extracted the results and placed them up on Github.

Automating my OS X Setup

31 October, 2014

I have a new laptop for work so I spent the last week automating its setup using Babushka.

Check it out on Github and feel free to ignore the more obsessive parts of the Readme.

Start with a Monolith

21 September, 2014

The Microservices train is leaving the station baby and everyone is getting on board. Lots of folks have written about Microservices and their benefits but a recent project experience has left me more interested in when you should use the approach.

Here are two posts which jibe with some of what I’ve recently felt.

Eric Lindvall of Papertrail:

When you’re starting out, and when you’re small, the speed at which you can make changes and improvements makes all the difference in the world. Having a bunch of separate services with interfaces and contracts just means that you have to make the same change in more places and have to do busywork to share code.

What can you do to reduce the friction required to push out that new feature or fix that bug? How can you reduce the number of steps that it takes to get a change into the hands of your users? Having code in a single repository, using an established web framework like Rails or Django can help a lot in reducing those steps. Don’t be scared of monolithic web apps when you’re small. Being small can be an advantage. Use it.

Adrian Cockcroft, formerly of Netflix:

I joined Netflix in ’07 and the architecture then was a monolithic development; a two week Agile sort of train model sprint if you like. And every two weeks the code would be given to QA for a few days and then Operations would try to make it work, and eventually … every two weeks we would do that again; go through that cycle. And that worked fine for small teams and that is the way most people should start off. I mean if you’ve got a handful of people who are building a monolith, you don’t know what you are doing, you are trying to find your business model, and so it’s the ability to just keep rapidly throwing code at the customer base is really important.

Once you figure out how… Once you’ve got a large number of customers, and assuming that you are building some Web-based, SaaS-based kind of service, you start to get a bigger team, you start to need more availability.

Large projects with long-term timelines seem like good candidates for using the Microservices approach [1].

On the other hand, new products or services may not be the right situation to immediately dive in with a Microservices approach. It’s likely that the idea itself is being fleshed out and investing anywhere outside of that core goal is ultimately waste. Carving process boundaries throughout your domain in this early turbulent stage is going to slow you down when you inevitably need to move them.

Pushing infrastructure style functionality—such as logging or email delivery—out into services makes sense, but waiting to see how things develop seems worthwhile when it comes to the core domain. Initially focussing on understanding the domain and investing in getting changes out to production as quickly as possible is likely more important than developing loads of cross-process plumbing.

A monolithic application isn’t such a bad place to start. The trick, as always, is to know when to change that plan.

  [1] In fact a brownfield or system refresh project seems like an ideal situation to test the waters of implementing them. These projects have a runway long enough to justify the investment required to put all of the required communication, deployment, and monitoring ligatures in place.