The philosophy of games, why we play them, and how it relates to modern polarisation and conspiracy theories.
Breaking non-fiction books down into three kinds with advice on how to read them.
A look at how Etsy tailored their search results to individual customers.
Rule #1: Don’t be afraid to launch a product without machine learning.
Machine learning is cool, but it requires data. Theoretically, you can take data from a different problem and then tweak the model for a new product, but this will likely underperform basic heuristics. If you think that machine learning will give you a 100% boost, then a heuristic will get you 50% of the way there.
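To make the "heuristic first" rule concrete, here is a minimal sketch of what such a baseline might look like: a recommender that simply ranks items by recent popularity. All names and types here are illustrative assumptions, not from the article.

```typescript
// A hypothetical heuristic baseline, per Rule #1: ship something simple
// before training a model. Types and names are illustrative.

type ClickEvent = { itemId: string };

// Heuristic "recommender": rank items by raw click counts.
function topItemsByPopularity(events: ClickEvent[], k: number): string[] {
  const counts = new Map<string, number>();
  for (const e of events) {
    counts.set(e.itemId, (counts.get(e.itemId) ?? 0) + 1);
  }
  return Array.from(counts.entries())
    .sort((a, b) => b[1] - a[1]) // most-clicked first
    .slice(0, k)
    .map(([itemId]) => itemId);
}
```

A baseline like this ships without training data, and later becomes the yardstick any model has to beat.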
To recharge themselves, individuals need to recognize the costs of energy-depleting behaviors and then take responsibility for changing them, regardless of the circumstances they’re facing.
The article covers four dimensions of energy: body, emotions, mind, and spirit.
There are two parts to software development: creating a design and expressing it as code. The code is tangible but the design is conceptual. Keeping a project healthy means doing both well. Here’s my concern: whenever you mix the conceptual with the tangible, it’s easier to neglect the conceptual. When you miss a tangible target, it’s obvious, but when you miss a conceptual target, you might not recognize it, or might rationalize that, because it’s impossible to measure, you were really quite close.
Blindly applying a factory process to software development will drive improvements to the tangible part (the code) at the expense of the conceptual part (the design). We see plenty of examples of this today, where teams have great feature velocity at first, are puzzled when velocity slows, and eventually the project is abandoned. As Cunningham warned, if we bolt features onto an existing codebase without consolidating those ideas into the code, the design will suffer, and over time “[e]ntire engineering organizations can be brought to a standstill under the debt load of an unconsolidated implementation.”
There’s one big thing that needs to change for wartime CPOs that I want to cover today, and that is prioritization and evaluation.

So, when a fraught issue arises, how can we help our organization move forward in a way that actually builds rather than breaks trust? Over time, and after many communication mistakes, I honed a four-part template covering the following:
After I have the lay of the land, the next step is to determine what to work on first, second, third, and so on. Not every concern in the product provides equal value to the customer. Some parts are core to the problem; without them the app has no purpose. Others are necessary but have nothing to do with the domain, like user authentication.
Bungay Stanier explains how advice-giving goes bad; the three personas of your Advice Monster; and the powerful act of staying curious a little longer.
Relegating all data knowledge to a handful of people within a company is problematic on many levels. Data scientists find it frustrating because it’s hard for them to communicate their findings to colleagues who lack basic data literacy. Business stakeholders are unhappy because data requests take too long to fulfill and often fail to answer the original questions. In some cases, that’s because the questioner failed to explain the question properly to the data scientist.
A data-literate team makes better requests. Even a basic understanding of tools and resources greatly improves the quality of interaction among colleagues. When the “effort level” — the amount of back-and-forth needed to clarify what is wanted — of each request goes down, speed and quality go up.
Shared skills improve workplace culture and results in another way, too: They improve mutual understanding. If you know how hard it will be to get a particular data output, you’ll adjust the way you interact with the people in charge of giving you that output. Such adjustments improve the workplace for everyone.
As the Peter Principle suggests, we tend to rise to the level of our incompetence… but that’s not actually such a bad thing, as long as we can learn fast and safely. The best way to do that is to make sure things are safe-to-fail, which usually means putting appropriate feedback loops in place.
Sometimes it’s the simplest thing in the world, and we forget to do it. Clarifying why you want something allows people to make autonomous decisions about how best to work towards the outcome you want; or (even more important) give you information about the context you were unaware of that will cause difficulty getting that outcome.
Timothy R. Clark:
Low-velocity decision making. In a nice culture, there’s pressure to go along to get along. A low tolerance for candor makes the necessary discussion and analysis for decision making shallow and slow. You either get an echo chamber in which the homogenization of thought gives you a flawed decision, or you conduct what seem to be endless rounds of discussion in pursuit of consensus. Eventually, this can lead to chronic indecisiveness.
Pete enumerates some patterns of teams and code ownership.
You need to take a step back and view data at a macro level, not micro. As the founder, you should care more about the trends than about the constant, inexplicable anomalies.
One of the really frustrating parts of running a business is that many times we just don’t know the answer to “why?”.
Why did churn go up 10%? Why are trial conversions decreasing? Where did all these new users come from? Why is our growth half of what it was last month?
Many of those questions have no answer and trying to find an answer will cause you to rip your hair out.
A curated collection on the missing aspect of managed time in programming.
A look at how Skyscanner support their people’s transition from individual contributor to manager.
Lovingly craft those commits, friends.
Bezos considers 70% certainty to be the cut-off point where it is appropriate to make a decision. That means acting once we have 70% of the required information, instead of waiting longer. Making a decision at 70% certainty and then course-correcting is a lot more effective than waiting for 90% certainty.
Reversible decisions can be made fast and without obsessing over finding complete information. We can be prepared to extract wisdom from the experience with little cost if the decision doesn’t work out. Frequently, it’s not worth the time and energy required to gather more information and look for flawless answers. Although your research might make your decision 5% better, you might miss an opportunity.
Eradication of all latent failures is limited primarily by economic cost but also because it is difficult before the fact to see how such failures might contribute to an accident. The failures change constantly because of changing technology, work organization, and efforts to eradicate failures.
Indeed, it is the linking of these causes together that creates the circumstances required for the accident. Thus, no isolation of the ‘root cause’ of an accident is possible. The evaluations based on such reasoning as ‘root cause’ do not reflect a technical understanding of the nature of failure but rather the social, cultural need to blame specific, localized forces or events for outcomes.
Hindsight bias remains the primary obstacle to accident investigation, especially when expert human performance is involved.
So many more nuggets of wisdom in there.
I pretty much highlighted the lot.
The general idea of an “Islands” architecture is deceptively simple: render HTML pages on the server, and inject placeholders or slots around highly dynamic regions. These placeholders/slots contain the server-rendered HTML output from their corresponding widget. They denote regions that can then be “hydrated” on the client into small self-contained widgets, reusing their server-rendered initial HTML.
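The description above can be sketched in a few lines. This is a toy illustration of the idea, assuming hypothetical `renderPage` and `hydrateIslands` helpers; it is not the API of any particular framework.

```typescript
// Hypothetical sketch of the Islands idea: the server embeds each widget's
// pre-rendered HTML inside a marked region; the client upgrades only those
// regions, reusing the HTML already present. Names are illustrative.

type Island = { id: string; html: string };

// Server side: wrap each widget's server-rendered HTML in a placeholder.
function renderPage(staticHtml: string, islands: Island[]): string {
  const slots = islands
    .map((i) => `<div data-island="${i.id}">${i.html}</div>`)
    .join("");
  return `<main>${staticHtml}${slots}</main>`;
}

// Client side: find each placeholder and attach behaviour to it, instead of
// re-rendering the whole page from scratch.
function hydrateIslands(
  doc: { querySelectorAll(selector: string): any[] },
  widgets: Record<string, (el: any) => void>
): void {
  for (const el of doc.querySelectorAll("[data-island]")) {
    widgets[el.dataset.island]?.(el); // wire up events, state, etc.
  }
}
```

The static shell never pays a hydration cost; only the marked regions do.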
Loads of fantastic advice.
Being right was something that we were taught was the ultimate pinnacle of knowledge, and there’s a reason, culturally, that so many of us care so deeply about being right. But it’s time to get rid of that. It’s no longer the currency that separates who does the really great work in life from who doesn’t.
Ensemble programming vs pull requests.
Rolling up your sleeves is magic.
“At HashiCorp, we’ve grown from a few hundred to over a thousand people, so the goal is to build scalable systems that enable employees to do their best work and contribute to the outcomes of the company.”
They also run a simulation to give their leaders a chance to practice.
“Using a firm called BTS, we run a business simulation where leaders get to ‘run’ the business for three years. Taking a simplified view of the company, we essentially build a game board based on our five-year financial model and this year’s three executive focus areas,” says Fishner.
Stephen Covey explained that “trust is a function of two things: competence and character. Competence includes your capabilities, your skills, and your track record. Character includes your integrity, your motive and your intent with people. Both are vital.”
Great teams are composed of ordinary people who are empowered and inspired.
Truly empowered teams that produce extraordinary results don’t require exceptional hires. They do require people who are competent and not assholes, so they can establish the necessary trust with their teammates and with the rest of the company.
Truly empowered teams also need the business context that comes from leadership – especially the product vision – and the support of their management, especially ongoing coaching. They then need the opportunity to figure out the best way to solve the problems they have been assigned.
The fourth role is by far the most important. It’s the role the vast majority of engineers will follow in their careers, and I’m going to call it “This. Forever.” The role you have right now is the thing you are going to be doing forever.
A depressing thought? Not when you remember you’re on a quest.
To sum up: Variance is the enemy of performance and the source of much of the latency we encounter when using software.
To keep latency to a minimum:
- As a rule of thumb, target utilization below 75%,
- Steer slower workloads to paths with lower utilization,
- Limit variance as much as possible when utilization is high,
- Implement backpressure in systems where it is not built-in,
- Use throttling and load shedding to reduce pressure on downstream queues.
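The last two bullets can be illustrated with a toy bounded queue that sheds load instead of growing without limit. This is a minimal sketch under assumed names; real systems would also track utilization and latency.

```typescript
// A toy load-shedding queue: a bounded buffer that rejects work once a
// capacity limit is reached, signalling backpressure to the caller instead
// of letting the downstream queue (and its latency) grow unboundedly.
// Names and numbers are illustrative.

class SheddingQueue<T> {
  private items: T[] = [];
  constructor(private capacity: number) {}

  // Returns false (sheds the load) rather than queueing past capacity.
  offer(item: T): boolean {
    if (this.items.length >= this.capacity) return false;
    this.items.push(item);
    return true;
  }

  // Dequeue in FIFO order; undefined when empty.
  poll(): T | undefined {
    return this.items.shift();
  }
}
```

Rejecting early keeps queue depth, and therefore latency, bounded: the caller gets a fast “no” it can retry or route elsewhere, rather than a slow success.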