Dance Computer, Dance

by Ray Grasso


Pieces I've written.

Hyper Keys and Mouse Buttons With Karabiner

I’ve got a hankering for keyboard shortcuts.

I’m all about pressing a key, no matter which application I’m in, and having my computer do something useful.

This noble pursuit has taught me one thing: there’s never enough keys™.

Good old Vim has demonstrated the value of a trusty leader key in the war to get more keys. So, I undertook a holy mission to find the mythical macOS hyper key, and along the way found the deep well of keyboard customisation that is Karabiner-Elements.

Hyper key

I’ve set up Karabiner-Elements so that if I combine the backslash key with other keys, it acts as the hyper key 1.

I use this hyper key as a prefix to bind global shortcuts without having to crush my fingers, and soul, into a ball.

Here’s a selection of the shortcuts I keep behind this hyper key prefix:

  • \+t brings my time tracking app into focus.
  • \+s locks my screen.
  • A bunch of shortcuts move windows around via Moom.
  • A couple of shortcuts switch my audio output between my headphones and speakers via an Alfred workflow.
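
As a sketch of how this is wired up (the exact JSON lives in my config file; this is illustrative rather than copied verbatim), a Karabiner-Elements complex modification for a backslash hyper key looks something like this:

```json
{
  "description": "Backslash acts as the hyper key when held with other keys",
  "manipulators": [
    {
      "type": "basic",
      "from": {
        "key_code": "backslash",
        "modifiers": { "optional": ["any"] }
      },
      "to": [
        {
          "key_code": "left_shift",
          "modifiers": ["left_command", "left_control", "left_option"]
        }
      ],
      "to_if_alone": [{ "key_code": "backslash" }]
    }
  ]
}
```

Held down with another key, backslash becomes the full Ctrl+Command+Option+Shift chord; tapped alone, it still types a backslash.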

Mouse buttons

macOS doesn’t natively recognise the extra buttons on my new mouse, which sucks because: there’s never enough keys™.

So, I was chuffed to find that Karabiner recognises these extra mouse buttons and can bind them to key sequences.

Here’s a look at my bindings:

My Karabiner-Elements preference

I miss the sideways scrolling of the Magic Mouse, but I’ve set up Karabiner so that if I hold down my scroll wheel button, I can scroll left and right. It works reasonably well and means I don’t need to reach for shift while spinning the scroll wheel to side-scroll.

I map button 4 of my mouse to play and pause my music. The media keys on my keyboard are a chord away, but usually, it’s easier to press a single button instead.

I map button 5 to a shortcut2 assigned to the Meet Mute Chrome extension. This shortcut toggles mute on my currently running Google Meet meeting, which is killer.
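
Here’s a sketch of what that binding looks like as a Karabiner rule (illustrative, not lifted verbatim from my config):

```json
{
  "description": "Mouse button 5 toggles Meet Mute (Command+Shift+D)",
  "manipulators": [
    {
      "type": "basic",
      "from": { "pointing_button": "button5" },
      "to": [
        { "key_code": "d", "modifiers": ["left_command", "left_shift"] }
      ]
    }
  ]
}
```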

The config

So there you go. Maybe you’ll find something useful in my Karabiner-Elements config file that you can steal.

And may you never run out of keys.

  1. The hyper key on the Mac is a combination of Ctrl+Command+Option+Shift which is the equivalent of a dragon costume with four people in it, but hey, it does the job. 

  2. Command+shift+d because the extension doesn’t recognise the hyper key chord for some reason. 

Lazily Loading Resized Images on a Hugo Photoblog

I rebuilt A Strange Kind of Madness using Hugo a month or so ago. As with most photoblogs, it has pages with many images on them, and I was inspired by Photo Stream to load these images lazily.

Image resizing

If you want Hugo to resize images when it builds a site, you need to place your images alongside posts, so they are considered page resources. So, I put each post in a folder with its associated image and reference it in a field called, shockingly, image in the post front matter.

$ ls content/posts/2020-03-31-my-post

$ cat content/posts/2020-03-31-my-post/
title = My Post
date = 2020-03-31T10:42:00+08:00
image = 20200331-4491.jpg

Adding Lazyload

Roll-up pages have many thumbnails and benefit most from lazy loading.

First up, add the lazyload JavaScript library to your site build.

import Lazyload from "lazyload";

// Fire up our lazyloading (just initialising it does the job)
const _lazyload = new Lazyload();

The library’s default configuration targets images with the lazyload class and loads the image stored in the data-src attribute. If you place an image on the standard src attribute, it will be treated as a placeholder.
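
So the markup you’re aiming for looks something like this (the file names here are just placeholders):

```html
<img class="lazyload"
     src="placeholder.jpg"
     data-src="actual-image.jpg" />
```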

Placeholder images

I wanted something more interesting than a sea of grey rectangles for placeholder images. I had a look at using BlurHash, but that was going to involve rendering canvas elements for placeholders 1.

I want the front end to be as simple as possible 2, so I abandoned that approach and instead created a single-pixel resize of the source image which provides a simplistic average colour placeholder for each image. It does the trick.

Placeholder images

The markup

All of the necessary resizing code and markup is in a Hugo partial that renders a thumbnail for each post. Be sure your image tags include width and height attributes so the browser reserves the correct space for each image and the layout doesn’t jump around as images lazily load.

{{- with .Resources.GetMatch (.Params.image) -}}
  {{/* Resize to a single pixel for a placeholder image */}}
  {{- $placeholder := .Resize "1x1" -}}
  {{/* Resize to 800 pixels wide for a thumbnail image */}}
  {{- $thumbnail := .Resize "800x" -}}
  <img
    src="{{ $placeholder.RelPermalink }}"
    data-src="{{ $thumbnail.RelPermalink }}"
    width="{{ $thumbnail.Width }}"
    height="{{ $thumbnail.Height }}"
    class="lazyload" />
{{- end -}}

The main downside to this is that the two resizes for each image add a bunch of time to the site’s build process, but that’s a trade-off I’m willing to make.

Go forth, and embrace laziness.

  1. Or integrating React components that handle this for you. 

  2. The only Javascript it uses is for this lazy loading. 

A Tour of my Desk

I’ve been working from home full time for four years, and my desk setup has mostly remained the same during that time. But the recent spate of folks sharing their home office setups—this Basecamp post was my favourite—inspired me to spruce up my own.

So, now it’s tour time folks!

My Desk

My work laptop (1) is a 13 inch 2017 MacBook Pro. It’s got the busted keyboard design. I mostly use an external keyboard, so mine still works fine 🤞.

My monitor (2) is a 27 inch Dell that’s six years old. It does the job. I’ll probably wait until it dies before I replace it.

I use a Kinesis Advantage 2 keyboard (3) which lets me hold my hands like a T-Rex while I type. It’s easy on the wrists but beware: it took me six months to learn to type on this thing properly, and those six months were tough going. It’s perhaps not a fun challenge to take up during already stressful quarantine times.

I replaced my trusty stack of too-dry-to-read textbooks with the brutalist styles of this monitor stand by Brateck (4). It has a drawer where I can store my notebook and pens when I’m not using them. But what tickles the organiser in me the most is the cavity beneath it where I can store other things1.

Pictures and plants are, of course, essential (5 & 6).

I used an Apple Magic Mouse for years, but it started acting flaky after I upgraded to Catalina. So, I switched to a Razer Death Adder (7) which is way more comfortable to hold. It doesn’t allow me to side scroll as easily but not recharging batteries regularly is nice. Also, it has disco lights and a USB cable. Actually, let’s talk about cables and wires for a minute.

So, wires. Yeah, they get in the way. Yeah, they can look ugly. But you know what else they are? Reliable. That’s right, like a blue heeler at dusk, they’re always there for you. They mean there’s no more stuttering when you move your mouse. You don’t need to worry about Wi-Fi turbulence when you’ve got an old fashioned Cat 6 cable plugged in baby 2. With that off my chest, let’s get back to the tour.

Next up is the linchpin, the box that brings it all together. A couple of my colleagues recommended the CalDigit TS3 Plus Thunderbolt dock (9) and it’s tops. I plug everything into it, USB devices, my router, my display, my microphone and headset—and it all flows to my laptop via a single Thunderbolt cable (8). The TS3 Plus also serves as a power source for my MacBook so I can leave my power cord in my bag for that wondrous time, someday in the future when I can work outside again.

I spend a large portion of my day on video calls so a reliable audio setup is essential. I have a Jabra headset which is light and comfortable, but I also have an old set of Sennheisers (10) that I enjoy listening to music through. Again, some colleagues tipped me off to the fact that I can frankenstein a microphone onto any headset by using an Antlion ModMic. Its hardware mute button isn’t as low down on the cord as I’d like, but at least it’s there.


Another bonus of wearing this new setup is that I look like a helicopter pilot instead of a call centre worker, and who doesn’t want to look like a helicopter pilot, right?

I run my headphones into a Magni 2U headphone amp (11) and plug that and the microphone directly into the TS3 Plus 3. So now, I have my favourite headphones handy when I need to concentrate and want to listen to something from one of my go-to playlists (or White Noise).

When I want to listen to music without headphones, I stream it through my fairly ancient Jambox (12). It’s rugged and still trucking, though I expect the battery to self-combust any day now. I've ordered a pair of Audioengine HD3 speakers to replace it because my ears deserve it.

Update: There weren’t any HD3s in stock so I instead went with a pair of Edifier R1280DBs. They are cheaper, connect to the TS3 via digital optical cable, sound good, and look alright.

Desk Speaker

I’ve also ordered a Logitech C925E webcam which I’ll mount on my monitor so I’m not always side-eyeing folks from my laptop camera in meetings.

I spent a scandalous amount of money on a Herman Miller Embody (13) when I first set up my home office. I never worry about my chair, so I think I can say that money was worth it 🤷‍♂️.

Finally, there’s my desk (14). It is a standing desk that I put together eight years ago. Back then, it was tricky to find standing desks online, so to save money, I only ordered legs from GeekDesk 4 and attached a bamboo tabletop from Ikea to them. The top could be a bit larger, but maybe I’m just greedy.

So, there it is—the throne of my weekday castle.

There’s a window and a couch out of shot to the left. The couch is mostly ornamental because if I allow myself to lie down on it and close my eyes for just one minute, all will be lost.

  1. I guess I could have hollowed out the middle of those old textbooks 🤔. 

  2. You should try to use an Ethernet cable at least. It can make your video calls more reliable. 

  3. I used an Antlion USB adapter for this but found it would end up with static when my laptop woke from sleep, so now I plug in directly, and things seem fine. 

  4. I ordered the v2 legs at the time and shipped them to Perth from the US for a sum that would make a Nigerian prince blush. 

Playing a Random Album on Spotify

I still like listening to albums and sometimes want Spotify to play a random album from a playlist of albums I’ve created.

I couldn’t find anything out there that does this so I wrote myself a script to handle it instead.

Here’s a rundown if you want to use it.

First up, you’ll need a playlist with at least one track from each of the albums you want to choose from (here’s mine). Grab the ID of your playlist1 and your username, and add them to the script below.

Then, you’ll need to create an app in Spotify to get a client ID and secret. Add those to the script below too, so it can authorise with Spotify.

Finally, run gem install rspotify in your default ruby2 and you should be off to the races.

Run the script with Spotify desktop app installed and it’ll open up a random album for you to press that sweet, sweet play button on ⏯.

I run the script from an Alfred workflow so I’ve got it close at hand.

Enjoy 🎷🎶

#!/usr/bin/env ruby
#/ Usage: open-random-album
#/ Open a random album in Spotify.

require "rspotify"

# Fill these in with your own details.
CLIENT_ID     = "your-client-id"
CLIENT_SECRET = "your-client-secret"
USERNAME      = "your-username"
PLAYLIST_ID   = "your-playlist-id"

class RandomAlbum
  # Grab the albums from a playlist and choose one at random
  def self.fetch
    RSpotify.authenticate(CLIENT_ID, CLIENT_SECRET)
    playlist = RSpotify::Playlist.find(USERNAME, PLAYLIST_ID)
    albums_in_playlist(playlist).values.sample
  end

  # Page through the playlist's tracks, 100 at a time
  def self.tracks_in_playlist(playlist)
    limit = 100
    offset = 0

    [].tap do |result|
      loop do
        tracks = playlist.tracks(limit: limit, offset: offset)

        break if tracks.empty?

        result.concat(tracks)
        offset += limit
      end
    end
  end

  # Index the albums by ID so each album appears only once
  def self.albums_in_playlist(playlist)
    tracks = tracks_in_playlist(playlist)
    tracks.reduce({}) do |acc, track|
      acc[track.album.id] = track.album
      acc
    end
  end
end

album = RandomAlbum.fetch

puts "Opening '#{album.name}' in Spotify"
system "open #{album.uri}"

  1. Click on Share -> Copy Spotify URI. The playlist’s ID is the string after the last colon. 

  2. If this becomes a pain I guess you could look into bundling the script up with its required gems somehow. 

Better Kindle Reading

I read the majority of my books on my Kindle. The Kindle’s convenience is pretty hard to beat and I enjoy its highlighting and note-taking features.

With that said, I find I miss the context that a physical book provides. It’s much easier to breezily flick around a physical book to find previous sections you’ve read or peek ahead to see what’s coming up.

Recently I fired up the Kindle for Mac and found that I can get better context and a view of my highlights all at once. Open it up in widescreen, open the contents and notes and highlights sidebars, and boom baby, you can see a summary of where you are in the table of contents and what passages you’ve highlighted or bookmarked on the right.

Kindle for Mac with both sidebars open

I find this arrangement particularly helpful when reviewing a book I’ve previously read. Have a crack yourself, see if you like it.

Finding Open Web Pages with Alfred

There are a handful of web pages that I use regularly throughout the day. Some are web apps that I keep pinned in Chrome while others come and go as I work.

I tend to close tabs when I’m done with them but I still end up with many open tabs. I’ve created an Alfred Workflow that opens a page I’m looking for so I don’t have to pick through my Chrome tabs by hand to find it.

The Find Page workflow takes a URL from a predefined list and runs an AppleScript that finds and activates the associated page if it’s already open in Chrome; otherwise, it opens the page in a new tab.

Find Page Workflow definition

Find Page Workflow example

You can download the workflow and try it yourself.

I’m giving Audible a go again. I like it for listening to non-fiction books. Fiction ones, not so much. 🤷‍♂️

Update: I’m reversing this. I actually prefer listening to fiction on Audible. I like highlighting things too much in non-fiction so Audible doesn’t really work as well for me for that.

Designing Data-Intensive Applications 📚

Designing Data-Intensive Applications by Martin Kleppmann

This book surveys data storage and distributed systems and is a fantastic primer for all software developers.

It starts with naive approaches to storing data, quickly builds up to how transactions work, and then moves on to the complexities of building distributed systems.

I particularly enjoyed the chapter on stream processing and event sourcing. It contrasts stream processing with batch processing, highlights many of the challenges of these approaches, and explores options for addressing them.

Using Netlify for Hosting

I recently moved the hosting of my various blogs and websites off my own server to Netlify.

I was originally going to set up an S3 bucket and CloudFront distribution for each of my sites but Netlify provides the CDN and hosting features I need all bundled up already. You can upload files directly for serving or hook your site up to run a static site generator when you push to a branch of a GitHub repository.

In short, I’m no longer paying hosting costs and they handle all of the SSL certificate renewal from Let’s Encrypt for me.

Next up I plan to clean up the tooling I use for some of my sites and tweak things on here so I have more variety in my posts.

2019 is the year of the blog baby.

Forgetting Data in Event Sourced Systems

GDPR’s right to be forgotten means we have to be able to erase a person’s data from our systems. Event sourced systems work from an immutable log of events which makes erasure difficult. You probably want to think hard about storing data you need to delete in an immutable event log but sometimes that choice is already made and you need to make it work, so let’s dig in.

Erasing user data from current state projections

This is relatively straightforward. A RightToBeForgottenInvoked event is added to the event store for the person. All projectors that depend on personal data listen for this event and prune or scrub the appropriate data for the person from their projections.

Erasing data from the event stream itself

This case is trickier. We need to rewrite history in a way that doesn’t break things. Let’s look at an option for erasing data without rebuilding the event stream. This approach is also applicable for projections that are immutable change logs.

We can store personal data outside of events themselves in a separate storage layer. Each event instead stores a key for retrieving the data from this layer and any event consumers request the data when they need it. Given this data is personal the storage layer should probably encrypt the data at rest.

Once a RightToBeForgottenInvoked event is added to the event store all data for that person can be erased from the storage layer. All subsequent requests for data from the secure storage layer for that person’s data will return null objects rather than the actual data. This should make life easier for all consumers and avoid you null checking yourself to death all over the place.
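
As a tiny sketch of what those null objects can look like (the class and type names here are hypothetical):

```ruby
# A null object that stands in for an erased email address. Consumers can
# treat it like the real value without sprinkling nil checks everywhere.
class NullEmailAddress
  def to_s
    "[erased]"
  end

  def erased?
    true
  end
end

# Map each data type the storage layer knows about to its null object class.
NULL_OBJECTS = { "email_address" => NullEmailAddress }.freeze

# Return a type-appropriate null object for erased data.
def null_object_for(type)
  NULL_OBJECTS.fetch(type).new
end

puts null_object_for("email_address") # prints "[erased]"
```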

Let’s see what this secure storage layer might look like.

Sketch of a secure storage layer

Our secure storage layer stores data that is scoped to a person and has a type (so we can return null objects). The store allows all data for a specific person to be erased.

Let’s start with two main models: a Person1 and a Data model.

      Data                 Person
  ┌──────────┐        ┌───────────────┐
  │    id    │   ┌───>│      id       │
  ├──────────┤   │    ├───────────────┤
  │person_id │───┘    │encryption_key │
  ├──────────┤        ├───────────────┤
  │   type   │        │   is_erased   │
  ├──────────┤        └───────────────┘
  │   data   │
  └──────────┘

The interface to the secure storage layer is outlined below.

class SecureStorage
  def add(person_id, data_id, type, data)
    # Find the Person model for person_id (lazily create one if needed).
    # Encrypt the data using the person's encryption_key and store the
    # ciphertext in the data table using the client supplied data_id and type.
    # Clients will store this data_id in an event body and use it to retrieve
    # the data later.
  end

  def erase_data_for_person(person_id)
    # Mark the corresponding record in the person table as erased
    # and delete the encryption key.
  end

  def get(data_id)
    data = Data.find(data_id)
    person = Person.find_non_erased(data.person_id)
    if person
      # Decrypt the ciphertext using the key on the person model
      # and return the data.
    else
      # The person has been erased: return a null object for the
      # data's type.
    end
  end
end
Where does that leave us?

After a person has invoked their right to be forgotten all current state projections will be updated to erase that person’s data. The event store will return null objects for any events that contain data for the person which means that any event processors won’t see that data as they build their projections. It will also contain the RightToBeForgottenInvoked event for the person so consumers can handle that explicitly if required.

  1. This could be expanded to be more general but we’ll stick with person for the purpose of this post. 

Remote Working Strategies

I’m almost two years into working remotely full time. It affords me flexibility and focus but it also comes with its challenges. I have a few strategies that help make it work for me and maybe they’ll help you too.

  • I mostly work from a room with a closable door. At the end of my work day I walk away and close said door. I find this helps me disconnect and keep my home and work contexts separate.
  • Change up where you work. It’s good to work in different rooms and different locations. I like to go somewhere where there are people around; even if I’m not speaking to them e.g. I’m often that rando working on his laptop in the food court.
  • I spend a lot of time on video calls. I have this headset by Jabra which has a decent microphone that doesn’t pick up much background noise. It also has a hardware mute button on the cord, always within reach. As a bonus, people throw lots of “you look like you work in a call centre” gags at me.
  • Regular lunches in the city with friends help top up my face-to-face human interaction stores.
  • Your energy levels will vary; do your best to ride them out. Sometimes I am a storm of energy and rip through my work. Other times I struggle to lock in and focus. Stick with it. Hold strong.
  • Get outside regularly. The dark side of not having a commute is that you can end up barely moving all day. I regularly walk around my neighbourhood to get some steps under my belt and sunshine on my face.
  • Enjoy the flexibility.

Event Sourcing Libraries

Creating an event sourced, CQRS application is simple enough conceptually but there is a lot of hidden detail when it comes to building them. There are a couple of event sourcing libraries I’ve used that can help.

The first, Event Sourcery, is in Ruby and created by my colleagues at Envato. You can use Postgres as your data store and it gives you what you need to build aggregates and events and projectors and process managers.

The immutability and process supervision baked into Elixir make it a compelling option for implementing these kinds of applications as well. Commanded is written in Elixir, follows a very similar approach to Event Sourcery, and works a treat.

The Convenience of _.chain Without Importing the World

I’ve been meaning to work out how to maintain the convenience of Lodash’s _.chain function whilst only including the parts of Lodash that I actually need.

Turns out you can cherry pick the fp version of the functions you need and compose them together with _.flow.

import sortBy from 'lodash/fp/sortBy';
import flatMap from 'lodash/fp/flatMap';
import uniq from 'lodash/fp/uniq';
import reverse from 'lodash/fp/reverse';
import flow from 'lodash/fp/flow';

const exampleData = [
  {
    "happenedAt": "2017-06-15T19:00:00+08:00",
    "projects": [
      "Project One"
    ]
  },
  {
    "happenedAt": "2017-06-16T19:00:00+08:00",
    "projects": [
      "Project One",
      "Project Two"
    ]
  }
];

const listOfProjectsByTime = (entries) => {
  return flow(
    sortBy('happenedAt'),
    flatMap('projects'),
    uniq,
    reverse
  )(entries);
};

You can read more in Lodash’s FP Guide.

Consistent Update Times for Middleman Blog Articles with Git

The default template for an Atom feed in Middleman Blog uses the last modified time of an article’s source file as the article’s last update time. This means that if I build the site on two different machines I will get different last updated times on articles in the two Atom feeds. I’d rather the built site look the same regardless of where I build it.

The source code for the site lives in a Git repository which means I have a consistent source for update times that I can rely on. So, I’ve added a helper that asks Git for the last commit time of a file and falls back to its last modified time if the file isn’t currently tracked in Git.

helpers do
  def last_update_time(file)
    committed_at = `git log -1 --format=%cd #{file} 2>/dev/null`
    committed_at.empty? ? File.mtime(file) : Time.parse(committed_at)
  end
end

I now use this helper in my Atom template for each article.

xml.entry do
  xml.updated last_update_time(article.source_file).iso8601
  xml.content article.body, "type" => "html"
end

Adding Webpack to Middleman's External Pipeline

I use Middleman to build most of my content-focused websites. With the upgrade to version 4 comes the opportunity to move the asset pipeline out to an external provider such as Webpack.

I struggled to find good examples of how to integrate Webpack 2 with Middleman 4 so I’m documenting the approach I used here. For example code refer to middleman-webpack on GitHub.

Points of Interest

Build and development commands for webpack are in package.json.

"scripts": {
  "start": "NODE_ENV=development ./node_modules/webpack/bin/webpack.js --watch -d --color",
  "build": "NODE_ENV=production ./node_modules/webpack/bin/webpack.js --bail -p"
}

The external pipeline configuration in Middleman just calls those tasks.

activate :external_pipeline,
           name: :webpack,
           command: build? ? "yarn run build" : "yarn run start",
           source: ".tmp/dist",
           latency: 1

set :css_dir, 'assets/stylesheets'
set :js_dir, 'assets/javascript'
set :images_dir, 'images'

Assets are loaded by Webpack from the assets folder outside of the Middleman source directory1. Webpack includes any JS and CSS imported by the entry point files in webpack.config.js and generates bundle files into the asset paths Middleman uses.

module.exports = {
  entry: {
    main: './assets/javascript/main.js',
  },

  output: {
    path: __dirname + '/.tmp/dist',
    filename: 'assets/javascript/[name].bundle.js',
  },

  // ...
};


The config for Webpack itself is fairly straightforward. The ExtractText plugin extracts any included CSS into a file named after the entry point it was extracted from.

module.exports = {
  // ...

  plugins: [
    new ExtractTextPlugin("assets/stylesheets/[name].bundle.css"),
  ],

  // ...
};

This means you can include your styles from your JS entry file like normal and Webpack will extract the styles properly2.

Using the standard Middleman helpers to include the generated JS and CSS bundles allows Middleman to handle asset hashing at build time.

  <%= stylesheet_link_tag "main.bundle" %>

  <%= javascript_include_tag "main.bundle" %>


If you want to add modern JS and CSS to a bunch of statically generated pages then Middleman and Webpack work well together.

If, however, you are looking for a boilerplate for building a React SPA then something like react-boilerplate or create-react-app is likely a better fit.

  1. To avoid asset files being processed by both Webpack and Middleman. 

  2. Images are currently managed via Middleman and not Webpack.