You could say, then, that by the late 1960s, software development was facing three crises: a crying need for more programmers; an imperative to wrangle development into something more predictable; and, as businesses saw it, a managerial necessity to get developers to stop acting so weird.
A feedback process 100% aimed at professional growth would, I suspect, be totally divorced from promotions and compensation bumps. Not because those things should be unrelated to professional growth but because truly reflecting on how you can do better and being open to feedback from your peers and managers is already tremendously difficult; when you are also worrying about whether or not you’re going to get that promotion or raise you were hoping for, it’s probably impossible.
Lots more in there on feedback, biannual reviews, titles, promotions, and the role of management.
A promotion to a higher job level puts you in a more influential position. You are being given more responsibility. It’s not a reward. Instead, it’s the company granting you more influence.
Of course, increased pay often accompanies a promotion. However, the added responsibility, not the compensation, is the reason the company made the promotion.
Those who run the company are always looking for people to take on more responsibility. They’re looking for people who can come up with the next business idea, lead larger spaces, identify opportunities, and fix recurring problems. It is relatively easy to find people who are good at their jobs, and hard to find people capable of doing the next level job.
Circling back, a company promotes someone into larger responsibilities when that person looks like a leader. Leaders identify their own opportunities.
First, PfP is an extremely blunt instrument. Roy’s machine shop illustrates it well. The incentive doesn’t skew behavior a bit: it skews behavior immensely. Almost half of the total observations fall in a narrow band around the rerate line. And it isn’t even a management-defined line; it is the workers’ guesstimate of the point at which management might take deleterious action. Management isn’t even in control of the impacts of its own system.
“I thought that was my job — to take away all this crap from you and let you do your CEO thing. I thought you wanted me to be autonomous. I need autonomy.”
He said sure, but you should cheat “and use my brain to help you”.
Unless you understand “why” things are the way they are (and there often is a method to every madness, if you’re patient enough to dig deep), any proposal you might have on “how” to improve the situation might end up going very much against the grain, making it that much more of an uphill task for your proposal to be accepted. Furthermore, it’ll make it seem as though you put in no effort to understand the history of the system, which doesn’t exactly inspire confidence that you should be entrusted with fixing it.
And to make a long rambling story even longer and more rambling, being a manager or director or VP is kinda like this all the time. You just navigate fucked-up policy after policy, deciding which pushback will work or which you have the energy for.
And you will reach a limit because it’s fucking exhausting to unwind corporate cognitive dissonance all day every day, and so a bunch of unfair, ridiculous things just persist because you don’t have the mental wherewithal to keep fighting for everything.
(This is not even to account for doing the exact same thing for product and technical stuff on your team.) Knowing your limit here is a good indicator of whether you would succeed at and enjoy management at any given scope of team or size of company.
When managers collaborate, they can collaborate on the entire management job. That job is to create an environment where everyone can succeed, in service of a greater goal.
Instead of changing the management structure, the organization can change the management collaboration.
Instead of separating leadership from serving people, integrate leadership and serving at all levels.
Amazon effectively operates like a federation of smaller companies. There are (extremely) large organizations, which break into divisions, and then into small, functionally independent teams.
One financial lesson they should teach in school is that most of the things we buy have to be paid for twice.
There’s the first price, usually paid in dollars, just to gain possession of the desired thing, whatever it is: a book, a budgeting app, a unicycle, a bundle of kale.
But then, in order to make use of the thing, you must also pay a second price. This is the effort and initiative required to gain its benefits, and it can be much higher than the first price.
…
But no matter how many cool things you acquire, you don’t gain any more time or energy with which to pay their second prices—to use the gym membership, to read the unabridged classics, to make the ukulele sound good—and so their rewards remain unredeemed.
This is participatory sense-making. When we want to work on a thing together, and we need a shared understanding to do it right, then everyone gets to participate in constructing that understanding.
…
Shared understanding doesn’t come from “I share my understanding, and you adopt it.” It comes from “I share my knowledge, you share yours, and we construct a new understanding together.”
“Holding accountable” is code for blame, the attempt to avoid or deflect consequences. Blame is a weak premise from which to work. Blame requires that you spend time and energy protecting yourself. In an environment of blame it is not safe to say what you do and don’t know. Blame leaves everyone worried about who is out to get them. All the energy they spend hiding could be spent interacting and adding value to the project. Work gets done much less efficiently.
Accountability is a powerful premise from which to work. Working well and visibly builds strong relationships. Accepting responsibility sets the stage for satisfaction in a job well done. It’s a pity that the word “accountability” is misused, because the misuse obscures a useful concept.
Accountability can be offered, asked, even demanded, but it cannot be forced. “I hold you accountable,” doesn’t make sense. “I blame you,” or, “I hope you will accept the consequences,” are at least honest, even if they are a toxic basis for a working relationship. Managers can request or demand accountability. For example, a manager could ask that the software be ready to deploy at the end of every week so that the team’s progress is visible. From the other side, accountability can be offered even if it isn’t requested. “I can show you a log of how I spent my time last week,” is an offer of accountability.
Those turnarounds taught me a fundamental lesson about leadership: You have to be honest with people—brutally honest. You have to tell them the truth about their performance, you have to tell it to them face-to-face, and you have to tell it to them over and over again. Sometimes the truth will be painful, and sometimes saying it will lead to an uncomfortable confrontation. So be it. The only way to change people is to tell them in the clearest possible terms what they’re doing wrong. And if they don’t want to listen, they don’t belong on the team.
The moves in software delivery towards ever-increasing team autonomy have, in my mind at least, heightened the need for more architectural thinking combined with alternative approaches to architectural decision-making.
Ensuring our software teams experience true autonomy raises a key problem: how might a small group of architects feed a significant number of hungry, value-stream-aligned teams? Why? Because in this environment Architects now need to be in many, many more places at once, doing all that traditional “architecture”.
…
The Rule: anyone can make an architectural decision.
The Qualifier: before making the decision, the decision-taker must consult two groups: the first is everyone who will be meaningfully affected by the decision; the second is people with expertise in the area in which the decision is being taken.
Once I took on board that it was my job as an engineer to actually deliver impact and provide value, I became a lot more engaged with my cross-functional partners.
Good technical design decisions are very dependent on context. Teams that regularly work together on common goals are able to communicate regularly and negotiate changes quickly. These teams exhibit a strong force of alignment, and can make technology and design decisions that harness that strong force. As we zoom out in a larger organisation an increasingly weak force exists between teams and divisions that work independently and have less frequent collaboration. Recognising the differences in these strong and weak forces allows us to make better decisions and give better guidance for each level, allowing for more empowered teams that can move faster.
The Curse of Knowledge describes the cognitive bias or limitation that makes it very difficult for humans to imagine what it would be like not to possess a piece of information, and hence to properly put themselves in the shoes of somebody with less knowledge than them.
…
The first, and most important, is to overcommunicate. What exactly is meant by overcommunicating? To me, it is two things: (1) repeat your key messages a lot more often than seems reasonable or comfortable; and (2) when in doubt about whether your audience has a particular piece of important context, always err on the side of providing that context.
Overcommunication may seem inefficient, but when it comes to communication, robustness is far more important than efficiency. Remember that the costs are asymmetric: communicate too much and you pay the cost of small amounts of wasted time; communicate too little and it could lead to major disasters.
When you’re shaping an org in practice, the work you’re doing usually decomposes to manipulating three levers: structure, incentives, and culture (or, as Andy Grove puts it, ‘contractual obligations, economic incentives, and culture’). A good org designer knows that they must pay attention to the interplay between all three elements.
…
I bring up the interplay between structure, incentives and culture because I think highlighting this interplay serves one other purpose: it enlarges the discourse around org design. If you focus on org structure alone, every solution to an org design problem is necessarily a restructuring. This is unrealistic. And, from experience, there are whole classes of problems that may be patched by a change to an incentive system or via a tweak to org culture instead.
Rule #1: Don’t be afraid to launch a product without machine learning.
Machine learning is cool, but it requires data. Theoretically, you can take data from a different problem and then tweak the model for a new product, but this will likely underperform basic heuristics. If you think that machine learning will give you a 100% boost, then a heuristic will get you 50% of the way there.
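To make the heuristic-first idea concrete, here is a minimal, entirely hypothetical Python sketch: ranking items (say, apps in a marketplace) by a smoothed install rate instead of a trained model. The scenario, names, and smoothing constants are my own assumptions for illustration, not anything from the quoted article.

```python
# Hypothetical sketch: a heuristic baseline for ranking items before any
# machine-learning model is trained. All names and numbers are illustrative.
from dataclasses import dataclass


@dataclass
class Item:
    name: str
    installs: int      # how many times the item was installed
    impressions: int   # how many times the item was shown


def heuristic_score(item: Item) -> float:
    """Rank by install rate, smoothed so items with very few impressions
    don't dominate on tiny samples (simple add-constant smoothing)."""
    return (item.installs + 1) / (item.impressions + 20)


def rank(items: list[Item]) -> list[Item]:
    # Highest heuristic score first; no training data or model required.
    return sorted(items, key=heuristic_score, reverse=True)


if __name__ == "__main__":
    catalog = [
        Item("budgeting-app", installs=120, impressions=4_000),
        Item("unicycle-tracker", installs=15, impressions=300),
        Item("kale-recipes", installs=900, impressions=50_000),
    ]
    for item in rank(catalog):
        print(f"{item.name}: {heuristic_score(item):.4f}")
```

The point of a baseline like this is that it ships on day one, produces the data you would later need to train a model, and gives you a bar that any future model has to beat.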
To recharge themselves, individuals need to recognize the costs of energy-depleting behaviors and then take responsibility for changing them, regardless of the circumstances they’re facing.
The article covers four dimensions of energy: body, emotions, mind, and spirit.