One of Elm’s goals is to change our relationship with compilers. Compilers should be assistants, not adversaries. A compiler should not just detect bugs, it should then help you understand why there is a bug. It should not berate you in a robot voice, it should give you specific hints that help you write better code. Ultimately, a compiler should make programming faster and more fun!
I’ve enjoyed stretching my brain a bit whilst learning Elm. These changes will improve the learning process tremendously.
If you use PostgreSQL, this is a nice native OS X client.
A properly formed git commit subject line should always be able to complete the following sentence:
If applied, this commit will *your subject line here*
- If applied, this commit will refactor subsystem X for readability
- If applied, this commit will update getting started documentation
- If applied, this commit will remove deprecated methods
- If applied, this commit will release version 1.0.0
- If applied, this commit will merge pull request #123 from user/branch
The purpose of a self-imposed deadline is to sharpen the edge of your prioritization sword and stake a flag of coordination for the team. It’s not a hill to die on. It’s not a justification for weeks of death marching. It’s a voluntary constraint on scope.
Yes, deadlines are wonderful! They’re the tie-breaker on feature debates. They suck all the excess heat out of the prioritization joust: “Hey, I’d love to get your additional pet feature into the first release, but, you know: THE DEADLINE”.
A more recent and less fictitious example is electronic logging devices on trucks. These are intended to limit the hours people drive, but what do you do if you’re caught ten miles from a motel?
The device logs only once a minute, so if you accelerate to 45 mph, and then make sure to slow down under the 10 mph threshold right at the minute mark, you can go as far as you want.
So we have these tired truckers staring at their phones, bunny-hopping down the freeway late at night.
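A toy simulation (all numbers invented) makes the sampling gap concrete: a driver who dips below the threshold only at the moment the device samples looks parked to a per-minute logger while covering real distance.

```python
# Toy model: speed sampled once per minute vs. once per second.
# Every number here is hypothetical; the point is the sampling gap.

def speed_at(t_seconds):
    """Bunny-hop profile: cruise at 45 mph, but crawl at 5 mph
    for the few seconds around each minute mark."""
    seconds_into_minute = t_seconds % 60
    if seconds_into_minute >= 55 or seconds_into_minute < 5:
        return 5   # slow exactly when the device looks
    return 45      # full speed the rest of the time

one_hour = range(0, 3600)

# A device that samples once a minute, right on the minute mark:
per_minute_samples = [speed_at(t) for t in one_hour if t % 60 == 0]
print(max(per_minute_samples))   # 5 mph -- looks like a crawl

# A device that samples once a second sees the truth:
per_second_samples = [speed_at(t) for t in one_hour]
print(max(per_second_samples))   # 45 mph

# Distance actually covered in that "crawling" hour:
miles = sum(v / 3600 for v in per_second_samples)
print(round(miles, 1))           # roughly 38 miles
```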
Of course there’s an obvious technical countermeasure. You can start measuring once a second.
Notice what you’re doing, though. Now you’re in an adversarial arms race with another human being that has nothing to do with measurement. It’s become an issue of control, agency and power.
You thought observing the driver’s behavior would get you closer to reality, but instead you’ve put another layer between you and what’s really going on.
These kinds of arms races are a symptom of data disease. We’ve seen them reach the point of absurdity in the online advertising industry, which unfortunately is also the economic cornerstone of the web. Advertisers have built a huge surveillance apparatus in the dream of perfect knowledge, only to find themselves in a hall of mirrors, where they can’t tell who is real and who is fake.
Facebook’s struggle pivoting to mobile illustrates the potential for trouble. Facebook’s speedy, individualistic and iterative way of designing and shipping software was deeply embedded in product team culture. If you worked in the web tier, the cost of releasing was pretty close to zero, and literally everything else about the way you worked was optimized to take advantage of that assumption. As the company’s focus shifted to native mobile apps, the engineers hired for their mobile expertise insisted on heretofore unknown processes like feature and UI freezes, functional specs and QA.
PSYC 4410: Obsessions of the Programmer Mind
Identify and understand tangential topics that software developers frequently fixate on: code formatting, taxonomy, type systems, splitting projects into too many files. Includes detailed study of knee-jerk criticism when exposed to unfamiliar systems.
During internal testing, his team realized that if you don’t recognize a single artist in a playlist, you might question if it’s actually geared for you. That’s why the playlist is intended to have a mix of mostly new tracks with a few songs you’ve heard before.
“There’s something compelling about this humans versus robots narrative: a lovingly curated playlist versus an algorithm screwing up your sexy time,” says Ogle. “That whole distinction no longer really describes how we work. Discover Weekly is humans all the way down. Every single track that appears in Discover Weekly is there because other human beings have said, ‘Hey, this is a good song, and here’s why.’”
My Discover Weekly playlist has been hitting the spot and it’s interesting to get an insight into how they are put together.
Although Spotify has had humans making playlists for years, its efforts got a major boost last year with the introduction of Truffle Pig, an internal tool from The Echo Nest that breaks music down into thousands of categories like “wonky,” “chillwave,” “stomp and holler,” or “downtempo.”
What you hear from everyone at Spotify is that humans using data insights are key to curating music on a large scale. Naturally, they’re also using data to evaluate how well playlists are working.
An example of data-centric tools aiding human decision makers.
This implies that the problem with enterprises is not the stupidity of their buyers. They are no less smart than the average person; in fact, they are as smart with their personal choices for computing as anybody. The problem is that enterprises have a capital use and allocation model which is obsolete. This capital decision process assumes that capital goods are expensive, needing depreciation, and therefore should be regulated, governed and carefully chosen. The processes built for capital goods are extended to ephemera like devices, software and networking.
It does not help that these new capital goods are used to manage what became the most important asset of the company: information. We thus have a perfect storm of increasingly inappropriate allocation of resources to resolving firms’ increasingly important processes. The result is loss of productivity, increasingly bizarre regulation and prohibition of the most desirable tools.
If you think about innovation as a scarce resource, it starts to make less sense to be on the front lines of innovating on databases. Or on programming paradigms.
The point isn’t really that these things can’t work. Of course they can work. But exciting new technology takes a great deal more attention to work than boring, proven technology does.
Some fantastic points about what your job is when building software and the trade-offs that are constantly in front of you.
The real, direct measure of “innovation” is change in human behaviour. In fact, it is useful to take this way of thinking as definitional: innovation is the sum of change across the whole system, not a thing which causes a change in how people behave. No small innovation ever caused a large shift in how people spend their time and no large one has ever failed to do so.
This is the best definition for an overused term I’ve heard in a while.
The whole piece is an enthralling call to arms to a group of people building a product.
There’s been an explosion of people and companies looking to do interesting things with their data. I’ve been lucky to work on dozens of data-heavy interfaces throughout my career and I wanted to share some thoughts on how to arrive at a distinct and meaningful product.
I’ve been doing a lot of data-driven interface work over the last few years and the advice in this article is spot on.
It’s funny. I can’t count the number of times I’ve seen organizations with large monolithic software applications move toward SOA. It’s easy to think that we’ve learned a lesson in the industry and that services are just the new “best practice.” Surely we would’ve started with them if we were just starting development now. Actually, I think the truth is a bit more jarring. Different architectures work at different scales. Maybe we need to recognize that and understand when to leap from one to another.
The process space has similar issues. If we cargo-cult anything from the small team space into the large organization space, we’re falling prey to a blind spot. We’re missing an opportunity to re-think things and figure out what works best for small teams, interacting teams, and large-scale development. They may be quite different things.
I agree that the rules are different for different organisation sizes. I like Michael’s comparison of moving across organisation sizes to state transitions in physics.
At Grammarly, the foundation of our business, our core grammar engine, is written in Common Lisp. It currently processes more than a thousand sentences per second, is horizontally scalable, and has reliably served in production for almost 3 years.
Haskell isn’t a common choice for large production systems like Sigma, and in this post, we’ll explain some of the thinking that led to that decision. We also wanted to share the experiences and lessons we learned along the way. We made several improvements to GHC (the Haskell compiler) and fed them back upstream, and we were able to achieve better performance from Haskell compared with the previous implementation.
Terminology for iOS is based on WordNet, a great semantic lexical reference. We do not offer a full Mac app for Terminology, but have prepared a dictionary using this same great data for use in the built-in OS X Dictionary app.
So, interesting: an enormous amount of effort is spent on apps that convert map/vector-structured data into map/vector-structured data.
However, because of the mindshare dominance of object-oriented programming that came with Java - “Smalltalk for the masses” - it’s hard for most people to think of an approach other than a pipeline of structured data -> objects -> structured data. The tools and frameworks and languages push you toward that. So: a programming model tailored to objects with serious lifetimes has been used for data that lasts only for one HTTP request.
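A hypothetical request handler makes the contrast concrete: the object-habit version marshals one request’s data through a short-lived class, while the data-in, data-out version is just a function from plain structures to plain structures.

```python
# Invented example: the same HTTP-ish transformation done two ways.

# The object-heavy habit: deserialize into a class that lives for one request.
class UserSignup:
    def __init__(self, data):
        self.email = data["email"]
        self.name = data["name"]

    def to_response(self):
        return {"email": self.email.lower(), "greeting": f"Hello, {self.name}!"}

# The structured-data-in, structured-data-out version: no intermediate object.
def handle_signup(data):
    return {"email": data["email"].lower(),
            "greeting": f"Hello, {data['name']}!"}

request = {"email": "Ada@Example.com", "name": "Ada"}
print(UserSignup(request).to_response() == handle_signup(request))  # True
```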
We’ve given up on “fat models, skinny controllers” as a design style for our Rails apps—in fact we abandoned it before we started. Instead, we factor our code into special-purpose classes, commonly called service objects. We’ve thrashed on exactly how these classes should be written, so this post is going to outline what I think is the most successful way to create a service object.
Some good advice for building service objects in Rails.
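As a rough sketch of the shape described (transplanted into Python, with invented names; the post itself is about Rails): one special-purpose class per business action, with a single public entry point and explicit inputs.

```python
# A minimal, hypothetical service object. The pattern, not the framework,
# is the point: one class, one action, one public method.

class CancelSubscription:
    """Special-purpose class for a single business action."""

    def __init__(self, subscription):
        self.subscription = subscription

    def call(self):
        if self.subscription.get("status") == "cancelled":
            return {"ok": False, "error": "already cancelled"}
        self.subscription["status"] = "cancelled"
        return {"ok": True}

sub = {"id": 1, "status": "active"}
result = CancelSubscription(sub).call()
print(result)            # {'ok': True}
print(sub["status"])     # cancelled
```

Because each action gets its own class, the controller shrinks to construction plus a single `call`, and the action is trivially testable in isolation.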
As of today Front Row uses Haskell for anything that needs to run on a server machine that is more complex than a 20-line Ruby script. This includes most web services, cron-driven mailers, command-line support tools, applications for processing and validating content created by our teachers and more. We’ve been using Haskell actively in production since 2014.
An experience report on using Haskell in production.
In this talk, I will explain to you how list folds work using an explanation that is very easy to understand, but most importantly, without sacrificing accuracy.
I particularly like the constructor replacement analogy for foldr.
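The constructor-replacement idea translates directly into code: a list like [1, 2, 3] is conceptually Cons(1, Cons(2, Cons(3, Nil))), and a right fold just swaps each Cons for the combining function and Nil for the initial value. A small Python sketch (the talk itself is not in Python):

```python
# foldr as constructor replacement:
# the list 1:2:3:[] becomes f(1, f(2, f(3, z))).

def foldr(f, z, xs):
    if not xs:
        return z
    return f(xs[0], foldr(f, z, xs[1:]))

# Replacing "cons" with + and "nil" with 0 sums the list:
print(foldr(lambda x, acc: x + acc, 0, [1, 2, 3]))      # 6

# Replacing "cons" with cons and "nil" with [] rebuilds the list:
print(foldr(lambda x, acc: [x] + acc, [], [1, 2, 3]))   # [1, 2, 3]
```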
- Recovery over Perfection
- Predictability over Commitment
- Safety Nets over Change Control
- Collaboration over Handoffs
Ultimately all these statements are about creating responsive systems.
When we design processes that attempt to corral reality into a neat little box, we set ourselves up for failure. Such systems are brittle. We may feel in control, but it’s an illusion. The real world is not constrained by our imagined boundaries. There are surprises just around the corner.
I have told this story to different audiences. Programmers as a rule are delighted by it and managers, invariably, get more and more annoyed as the story progresses; true mathematicians, however, fail to see the point.
The Lisp model is that programming is a more general kind of interaction with a machine. The act of describing what you want the machine to do is interleaved with the machine actually doing what you have described, observing the results, and then changing the description of what you want the machine to do based on those observations. There is no bright line where a program is finished and becomes an artifact unto itself.
Having deployed a variety of Ruby apps (Rails and non-Rails) over the course of many years, here are some lessons I’ve learned to keep things afloat.
- In Rust, as in garbage collected languages, you never explicitly free memory
- In Rust, unlike in garbage collected languages, you never explicitly close or release resources like files, sockets and locks
- Rust achieves both of these features without runtime costs (garbage collection or reference counting), and without sacrificing safety.
This post contains a nice summary of Rust’s ownership model.
If 90% of everything is crap, the obvious strategy should be to produce as much crap as you can afford, because the other 10% will be spectacular.
But now we have something much more abstract, a set of generalized requirements that can apply to all sorts of things:
- You start with a bunch of things, and some way of combining them two at a time.
- Rule 1 (Closure): The result of combining two things is always another one of the things.
- Rule 2 (Associativity): When combining more than two things, which pairwise combination you do first doesn’t matter.
- Rule 3 (Identity element): There is a special thing called “zero” such that when you combine any thing with “zero” you get the original thing back.
With these rules in place, we can come back to the definition of a monoid. A “monoid” is just a system that obeys all three rules. Simple!
To sum up, a monoid is basically a way to describe an aggregation pattern – we have a list of things, we have some way of combining them, and we get a single aggregated object back at the end.
The first of three posts that give a simple definition of what monoids are and the benefits they provide.
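The three rules can be verified mechanically. A quick Python sketch (my own example, not from the posts) checks them for integers under addition, one of the simplest monoids:

```python
# Checking the three monoid rules for (integers, +, 0).
import random
from functools import reduce

random.seed(0)
samples = [random.randint(-100, 100) for _ in range(50)]

combine = lambda a, b: a + b
identity = 0

# Rule 1 (closure): combining two ints gives another int.
assert all(isinstance(combine(a, b), int) for a in samples for b in samples)

# Rule 2 (associativity): which pairwise combination comes first doesn't matter.
assert all(combine(combine(a, b), c) == combine(a, combine(b, c))
           for a, b, c in zip(samples, samples[1:], samples[2:]))

# Rule 3 (identity): combining with "zero" changes nothing.
assert all(combine(a, identity) == a == combine(identity, a) for a in samples)

# The aggregation payoff: fold the whole list down to a single value.
print(reduce(combine, samples, identity))
```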
The more Google pushes the sophistication of its web development tooling, the more we all benefit.
I erased most of my TODO list since I really only need to stay alive and listen to people and everything else is a lie.
Kent Beck came up with his four rules of simple design while he was developing ExtremeProgramming in the late 1990s. I express them like this.
- Passes the tests
- Reveals intention
- No duplication
- Fewest elements
Object Oriented Programming died several times already. This time might be the last.
In iOS 8, this code doesn’t just fail, it fails silently. You will get no error or warning, you won’t ever get a location update and you won’t understand why. Your app will never even ask for permission to use location.
I just got bitten by this while updating an app. This post goes through what is needed to fix it.
You cannot build a 60fps scrolling list view with DOM.
The Flipboard team have gone to great lengths to get the performance they want from the browser.
Everything is rendered to canvas elements. They’ve created their own representation of elements which they pool aggressively to avoid GC hits.
This is another place where React’s virtual DOM comes into its own. Flipboard use React to render to canvas elements. They’ve wrapped this up into react-canvas.
This is an extreme approach but the app feels slick because of it.
When they create 4089 libraries for doing asynchronous programming, they’re trying to cope at the library level with a problem that the language foisted onto them.
Each of those function expressions closes over all of its surrounding context. That moves parameters like iceCream and caramel off the callstack and onto the heap. When the outer function returns and the callstack is trashed, it’s cool. That data is still floating around the heap.
The problem is you have to manually reify every damn one of these steps. There’s actually a name for this transformation: continuation-passing style. It was invented by language hackers in the 70s as an intermediate representation to use in the guts of their compilers. It’s a really bizarro way to represent code that happens to make some compiler optimizations easier to do.
No one ever for a second thought that a programmer would write actual code like that. And then Node came along and all of a sudden here we are pretending to be compiler back-ends. Where did we go wrong?
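The transformation itself is easy to demonstrate. A tiny sketch (invented example, in Python rather than JavaScript, but the shape is identical): direct style returns values, while continuation-passing style hands every intermediate result to an explicit “what happens next” callback.

```python
# Direct style: functions simply return values.
def add_direct(a, b):
    return a + b

def square_direct(x):
    return x * x

result_direct = square_direct(add_direct(1, 2))  # 9

# Continuation-passing style: nothing returns; every step passes its
# result to an explicit continuation, as callback-heavy Node code does.
def add_cps(a, b, k):
    k(a + b)

def square_cps(x, k):
    k(x * x)

results = []
add_cps(1, 2, lambda s: square_cps(s, results.append))
print(result_direct, results[0])   # 9 9
```

Even this two-step pipeline had to be manually turned inside out; real code with branching and error handling reifies every step the same way, which is exactly the complaint above.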
Red and Green callbacks:
What’s even more difficult to understand is errors. Errors in multi-threaded callback code with shared memory are something that would give me an extremely large headache.
The in-language mixing of synchronous and asynchronous calls makes code hard to reason about. The language-side simplicity of Ruby or Java’s concurrency model is something I have taken for granted.
I strongly believe that there is a law of conservation of complexity in software. When we break up big things into small pieces we invariably push the complexity to their interaction.
Make it real
Ideas are cheap. Make a prototype, sketch a CLI session, draw a wireframe. Discuss around concrete examples, not hand-waving abstractions. Don’t say you did something, provide a URL that proves it.
Plenty more great stuff in there.
Wiki is a relentless consensus engine. That’s useful.
But here’s the thing. You want the consensus engine, eventually. But you don’t want it at first.
It’s funny, I was looking over this keynote last night, and I saw this line and realized — this is the simplest explanation of federated wiki.
In my opinion, this is the greatest success “hack” there is. If you live in an environment that nudges you toward the right decision and if you surround yourself with people who make your new behavior seem normal, then you’ll find success is almost an afterthought.
Systems are better than goals.
Monads are in danger of becoming a bit of a joke: for every person who raves about them, there’s another person asking what in the world they are, and a third person writing a confusing tutorial about them. With their technical-sounding name and forbidding reputation, monads can seem like a complex, abstract idea that’s only relevant to mathematicians and Haskell programmers.
Forget all that! In this pragmatic article we’ll roll up our sleeves and get stuck into refactoring some awkward Ruby code, using the good parts of monads to tackle the problems we encounter along the way. We’ll see how the straightforward design pattern underlying monads can help us to make our code simpler, clearer and more reusable by uncovering its hidden structure, and we’ll end up with a shared understanding of what monads actually are and why people won’t shut up about them.
The accompanying video is short, snappy, and to the point.