Saying what needs to be said

Every improvement is a change, but not every change is an improvement.

Lost in Space

Fundamentally, a transformation is a learning process – empirical and experiential. Making change upon change can get us lost in space, not knowing which way is up or down, left or right. Measurement is the only way to retain a handle on what’s real. Plan, do, check, act. Design a way to measure the impacts of change in terms of the desired results.

How can we know what to change?

Every company operates according to its policies. Those policies are encoded into the processes. Some policies are good – they make sense; some processes are effective and efficient enough. Some policies and processes are not so good. How can we know which ones to change? Do we go after the easiest ones? Or the cheapest ones? Or the ones people complain about the most? Perhaps we should just start again?

In knowledge working the constraints can be invisible. What we see is the tip of the iceberg. The bulk of the ‘berg, hidden beneath the surface, is really dictating what happens. It’s easy to assume, knowingly or not, that we work within an ordered system. “Danger, Will Robinson!” Beware the retrospective coherence trap. For example, doing Scrum will not necessarily lead to the desired results. The world of software development and delivery is complex. How can we identify the weakest link in an invisible chain? Our approach to change needs to be scientific. Let’s collect hard data and analyze it, and verbalize our intuition and emotions as we rigorously test our assumptions. Then we can better understand the system and select where to intervene and make changes that will lead to improvements.

Change to what?

When we find a bona fide constraint, what do we replace it with? What will the improved system look like? How will it feel? How will it behave? We’ve all experienced solutions that didn’t work, solutions looking for problems, and solutions that fixed the problem but also caused other problems. What does better look like? How will we know when we get there?

There’s a need to clearly show how the selected action leads to the elimination of the constraint; how the proposed solution solves the defined problem. Describe the desired future in a way that we can determine what we want, why we want it, and avoid unintended and undesired consequences that might result. And let’s measure the outcome (and impacts) of each change.

How to cause the change?

It’s easy to spend a lot of our time doing stuff. Then we get frustrated because we’re working so hard, doing so much, and improvement just isn’t happening. Our busy-ness isn’t taking care of business. We come up with solutions fast – arguably before we truly understand the problem. As Einstein said:

“If I had an hour to solve a problem I’d spend 55 minutes thinking about the problem and 5 minutes thinking about solutions.”

Let’s take the time to work through why we think a specific action will lead to a specific result. Moreover, it seems like we rarely stop to define, in a quantifiable way, the outcome we’re trying to achieve. By unambiguously defining problems or constraints, and the desired outcomes, we can systematically design and execute the steps to overcome the obstacles in between and recognise when our plan should be altered.

Stop and think. Then do

In an age where agility has quickly become the key to competitiveness, the demand for a change blueprint wrapped up in what’s being called an Agile Transformation is about as bogus as it gets. Business agility and true competitive edge lie in our ability to learn faster and generate deeper knowledge and to use that knowledge to inform purposeful action – deliberate and designed actions that solve clearly defined problems and produce unambiguous and valuable business outcomes.

There needs to be a big-picture frame for making small bets

What’s the point of doing a roadmap? It doesn’t help get rid of uncertainty (although some people behave as if it does). There’s no way to be certain about any future outcome.

The act of describing desired future states in a roadmap identifies where we want to play and have impact. It provides a frame of focus that helps us figure out what we need to pay attention to and how we can get better. Within that frame we can identify options; we can continuously make small bets about the future and respond rapidly based on measurement of the outcomes and impact. We’re saying “this is what we think will happen.” Then we watch what actually happens. We observe any deviations from our expectations, we take appropriate action and, as early as possible, we update the roadmap and new bets based on the new information.

Without a roadmap and unambiguous, quantified measurements we have no way of knowing if what we’re doing is truly adding value, we won’t know what to watch for or how to make sense of the things that happen – we’ll be much slower to respond.

A meaningful roadmap with unambiguous, quantified measurements should raise the signal-to-noise ratio of any feedback from customers and business stakeholders. It can improve our odds of success.

Don’t forget about usability when iterating features

Every iteration of a feature – and by that I mean a released version of a feature – needs to be easy to use. To clarify, I’m talking about the act of iterating, which is quite different from an incremental approach.

“It’s not iteration if you only do it once.”

– Jeff Patton.

If an iteration or version of a feature is not easy to use because it’s unintuitive, or difficult to recall, or because it just looks a mess, then we incur ‘design debt’ (for want of a better name) and perhaps also create some failure demand. In the worst case maybe some customers are lost – forever. The question, as it nearly always is, is how much is enough? How much experience design and usability testing is required at every iteration? From past experience, I expect it to differ from one iteration to the next, to some extent. It’s a matter of investment.

Based on situational factors, expectations, and a measure of sound judgment it’s usually possible to make a sensible investment. But only if we stop and actually think about it rather than stampeding to what’s usually a builder’s definition of ‘done’. We should expect to under-invest sometimes and over-invest other times. Occasionally we can expect to get it about right.

It would help us if our success criteria incorporated the user’s definition of ‘done’.

Business agility requires financial agility

The calendar or fiscal year might be an appropriate period for reporting results to investors, but is it an appropriate period for managing a business? Strategic initiatives usually extend beyond the annual budget period, and the scramble by managers to meet arbitrary targets by the end of each period disrupts flow, drives further dysfunctional behavior, and likely damages long-term capability building.

Instead of negotiating numbers once a year and following a predetermined plan, why can’t financial planning be a continuous and inclusive investment process informed by open information systems that make truth visible and deploy relevant data to the right people at the right times? When decisions are made closer to the action, the information hasn’t decayed (or been tampered with by people in the middle to make it look better). According to Beyond Budgeting case studies, a significant benefit of managing without a predetermined plan or budget for the year ahead is that managers become more aware of the changing business environment, are better prepared to face different situations and are able to focus attention on responding to events in order to realize the most value. As a consequence, I can imagine more businesses being able to build the capability to respond to customers more quickly and produce better outcomes.

More entrepreneurial business capability

Does everyone in a company typically feel like they’re accountable for customer experiences and business outcomes? Are they committed to satisfying customers profitably? Is the responsibility for value-creating decisions distributed throughout the organization?

Responding to customer demand with high speed and low costs requires devolved and adaptive working so that expertise can quickly come together to cause delight. A person must feel free to respond to customer demand. Innovation and responsiveness don’t come from people when they’re confined to functional departments and restricted to predetermined plans. Imagine what would be possible in a culture where people are safe to challenge assumptions and risks, where they have time and space to think deeply about constraints and economics, and are free to make small bets across a set of options to prove the best course of action or investment to make.

“Purpose and principles, clearly understood and articulated, and commonly shared, are the genetic code of any healthy organization. To the degree that you hold purpose and principles in common among you, you can dispense with command and control. People will know how to behave in accordance with them, and they’ll do it in thousands of unimaginable, creative ways. The organization will become a vital, living set of beliefs.”

– Dee Hock, founder and former CEO of Visa

Why not devolve business strategy and responsibility for business performance to teams closer to customers and benefit from richer participation of people and more entrepreneurial business capabilities that are able to coordinate actions according to customer demand? There can be explicit principles and boundaries so people know what they can and cannot do.

More effective governance

Every implementation of governance I’ve seen has been a major source of impediments. Everyone subjected to it complains about it. Conventional governance is obsessed with productivity, resource utilization, and cost. When value isn’t clearly defined, isn’t well understood and isn’t quantified and measured, communication is reduced to time, scope, and cost: “Go faster. Deliver more. Cut costs.” I wonder if quality has almost become an ‘undiscussable’. People in business think they’re buying it and assume they’re getting it but have no way to test for it. A lot of people don’t hang around long enough to feel the pain of poor quality and appreciate its economic consequences. Just because things were late last time, or came in over budget, or didn’t deliver on critical business objectives doesn’t mean the answer is more (of the same) governance!

Governance has to focus on continuous value creation for customers and stakeholders, including shareholders. It has to monitor portfolio performance through relative business performance indicators. Only when governance lives at the gemba can it truly know what’s going on and support local decision making. Such a governance framework provides strategic direction and guidelines to front-line people making decisions to create customer value consistent with company goals and challenges their key assumptions and risks prior to investing. It need interfere only when absolutely necessary. Largely people ought to be left to get on with things. Governance ought to be inquiry-by-exception “based on outliers from the patterns and trends that possibly reflect changes in customer behaviors.”

Open information systems that make truth visible

People on the front line making decisions about business strategy, customer needs and customer profitability need access to real-time, unadulterated financial and operational information. Their insights come from asking important questions, challenging strategic assumptions, and calling out the risks as and when they see them. A prerequisite is being safe. Everyone has to be trusted to live up to the principles and values expected of them. Beyond Budgeting talks about values such as integrity, openness, and fairness adding to the effectiveness of risk management.

While the finance department often feels like a dark overlord perpetuating misconceptions about software development (it’s nothing like manufacturing), it does sit on some very relevant measurements. An objective is continuous monitoring of existing initiatives by looking at their impact on business performance characteristics such as profitability and cash flow, among other things. Another objective is continuous re-evaluation, prioritization and investing. No more annual round of estimating and agreeing budgets, RAG reports and variance analysis. Instead? ‘Fast actuals’ deployed to the front line in real time through a simple accounting system that’s always up-to-date. Rolling reviews of outcomes in line with trends connecting ‘fast actuals’ with ‘flash forecasts’ made quickly and only looking immediately ahead. More informed small and reversible strategic decisions. Cost is no longer driving everything but there is now a rolling system of cost management. Bring on fair finances and ethical reporting.

Business agility requires financial agility

Managing to a predicted future provides only the illusion of control. A discovery-driven approach that makes small bets builds competence in sketching future options that traditional budget-driven processes fail to see until it’s too late.

More control very rarely delivers success. Success requires responsiveness – the anticipation of possibilities, seeing options and responding effectively to events. Responsiveness requires liquidity. Business agility requires financial agility and operational flexibility.

Casino software development

We can’t reliably predict which ideas will work and which won’t so we’ve got to experiment to find out.

Big bets fail big

A feature specification or description is a solution hypothesis. When we build a feature and delay validating it with customers, we’re making a bet that it’s “right”. What if we’re wrong? Building feature upon feature this way ups the ante until we find ourselves betting the budget on a big-bang release. When we do this we’re not only creating more inventory, we’re investing deeper and deeper, taking on more risk, and essentially making a bigger and bigger bet.

Big bets

Capital investments in IT projects are big bets made on the merits of business cases built on assumptions. Senior managers often have stronger psychological attachment to bigger projects because they’re assumed to have bigger payback over the long term. Ned Barnholt, HP Executive Vice President and former CEO of Agilent Technologies, called this the “tyranny of large numbers”. People get carried away with the expectation of gains instead of what they can afford to lose. Unfortunately, the bigger the investment, the less likely it will be questioned. The business case gets treated as a statement of fact by those responsible for delivery when, really, a business case is also nothing more than a hypothesis.

More control doesn’t reduce the bet

There are some big assumptions being made in this approach, for example:

  1. The requirements define the right solution.
  2. All requirements must be delivered before any value can be realized (all or nothing).
  3. If all requirements are delivered, the value will be realized.

Conventional governance and management practices attempt to optimize productivity. This actually does nothing to address the big risks in the above assumptions. There’s no way to reliably predict what will be valuable and what won’t. There’s more concern about maintaining control than delivering value. As John Seddon likes to say, “it’s about doing the wrong things righter.”

Can we deliver less without delivering too little?

Yep. By making smaller bets we can prove earlier what’s valuable and what’s not valuable. And we can determine what’s enough rather than just paying for everything. If we bet what we can afford to lose each time, we can pursue a set of options and get faster feedback to discover the best and right solution. We can focus on the user, their context, what they’re trying to achieve – the outcome they desire, rather than pursuing a single definition of a feature thought to be the solution users want.

Small bets

By making small bets we create multiple, repeating opportunities to test for value. We maintain a smaller overall investment while enabling massive flexibility to achieve business objectives. We build capability through learning about users. We fold innovation into delivery through creative discovery and experimentation.
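The arithmetic behind big bets versus small bets can be sketched with a toy simulation. Everything below – costs, payoffs, hit rate, stopping rule – is an illustrative assumption, not data from any real portfolio. The point it demonstrates: when most feature ideas turn out not to be valuable, a big-bang bet realizes the full loss, while small bets with feedback cut the losing streak short.

```python
import random

random.seed(42)  # reproducible illustration

def big_bet(features, cost_per_feature, hit_rate, value_per_hit):
    """Build everything first, then find out which features were valuable."""
    spend = features * cost_per_feature
    value = sum(value_per_hit for _ in range(features) if random.random() < hit_rate)
    return value - spend

def small_bets(features, cost_per_feature, hit_rate, value_per_hit, patience=3):
    """Release one feature at a time; stop after `patience` consecutive misses."""
    spend = value = misses = 0
    for _ in range(features):
        spend += cost_per_feature
        if random.random() < hit_rate:
            value += value_per_hit
            misses = 0
        else:
            misses += 1
            if misses >= patience:
                break  # stop investing early on the evidence
    return value - spend

# Illustrative numbers: 20 candidate features, each costing 10; a valuable
# feature returns 40, but only 20% of ideas turn out to be valuable.
trials = 2000
big = sum(big_bet(20, 10, 0.2, 40) for _ in range(trials)) / trials
small = sum(small_bets(20, 10, 0.2, 40) for _ in range(trials)) / trials
print(f"average net: big bet {big:.1f}, small bets {small:.1f}")
```

With these assumed odds the big bet loses heavily on average, while the small-bets strategy caps the downside by walking away early – the affordable-loss idea in code.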

Incremental capital and operational investments

If we were to take a product-oriented or business services view of the world, where end-to-end cross-functional IT capabilities were on the business front-line so to speak, then we could manage portfolios of products or business services rather than IT projects. Governance then faces the exciting opportunity to help business investors make investments that maximize business performance. Isn’t that more valuable? Isn’t that more meaningful? In this context, why not make smaller incremental capital and operational investments based on actual business measurements rather than on predictive annual budgeting and forecasting? To do this people need simplified accounting and real-time and relevant information. Beyond Budgeting talks about Fast Actuals and Flash Forecasts using moving averages and rolling monthly and quarterly reviews to inform continuous re-evaluation and prioritization.
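As a rough sketch of how a ‘flash forecast’ might be derived from ‘fast actuals’, here is a minimal moving-average projection. The function name and figures are my own illustration – Beyond Budgeting doesn’t prescribe a specific formula – but it captures the idea of forecasting quickly and only looking immediately ahead.

```python
def flash_forecast(actuals, window=3, horizon=2):
    """Project the next `horizon` periods from a moving average of recent actuals.

    `actuals` are 'fast actuals': real cash figures per period, most recent last.
    The forecast deliberately looks only immediately ahead, to be redone next period.
    """
    if len(actuals) < window:
        raise ValueError("need at least one full window of actuals")
    trend = sum(actuals[-window:]) / window
    return [round(trend, 2)] * horizon

# Monthly net cash from a product stream (illustrative figures, in thousands)
monthly_cash = [12.0, 15.0, 11.0, 14.0, 16.0, 18.0]
print(flash_forecast(monthly_cash))  # moving average of the last 3 months
```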

At Energized Work, we’ve been using cash-basis accounting and value-oriented governance in and across product streams since we originally experimented with throughput accounting with a client in 2008. The types of measurements we take fall within the following categories:

  • Individual product or business service cash flows.
  • Overall portfolio performance.
  • Risk exposure.
  • Impending obsolescence.
  • Cost of delay.


Basically, we place small bets across a portfolio and monitor for profitability, customer and stakeholder delight, operating risks, and staff morale. Looking at the data coming back, we gain insights that inform our governance and operating decisions. We can continue investing, making small bets to test for further value. We can stop capital investment early when enough value has been realized, and not waste money on features that won’t be used, while continuing operational investment for as long as the product or business service operates profitably with acceptable risks. We can stop investing in systems approaching their ‘sell-by date’. Today’s projects are tomorrow’s legacies. Imagine this – by sensing impending obsolescence, we can make letting something become a legacy an explicit portfolio decision based on risk, rather than it just happening in the background. Or we can exit, shutting down anything that’s no longer profitable or adding value, and move the people onto other streams.
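Cost of delay, one of the measurement categories above, is often made actionable by dividing it by duration – sometimes called CD3 – to decide which bet to fund next. A minimal sketch with made-up figures (the bet names and numbers are purely illustrative):

```python
def cd3(cost_of_delay_per_week, duration_weeks):
    """CD3: cost of delay divided by duration. Higher score = schedule it sooner."""
    return cost_of_delay_per_week / duration_weeks

# Candidate bets in a portfolio (illustrative figures)
bets = {
    "checkout-revamp": cd3(cost_of_delay_per_week=30_000, duration_weeks=6),
    "legacy-migration": cd3(cost_of_delay_per_week=8_000, duration_weeks=2),
    "reporting-api": cd3(cost_of_delay_per_week=12_000, duration_weeks=4),
}
for name, score in sorted(bets.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: CD3 = {score:,.0f} per week of duration")
```

Note that the biggest absolute cost of delay doesn’t automatically win: a small, quick bet can outrank a large, slow one, which is exactly the small-bets dynamic.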

Decisions are bets. Understand the gamble

How long does it take for decisions to be tested in your company? Days? Weeks? Months? As Sean Blezard said, “Count the cost, understand the gamble.” We can afford to be more scientific. There are still no guarantees but we can be smarter with our money by making more informed decisions based on evidence. Making small bets allows us to discover new ideas and strategies through an emergent process. Small wins validate the direction. False starts and small mistakes give a signal to proceed in a different way. Making small bets doesn’t mean there’s no bold ambition. We just adapt as we go rather than follow a course set out at the start that may lead to failure.

Something Will McInnes said in his book Culture Shock nicely sums up small bets: “We’re no longer seeking ‘the single best decision for all time’. Instead, we’re free to make a whole series of the ‘best decision right now’. [..] We shift from ‘perfectly planned’ to ‘always learning’ in the face of uncertainty and constant change.”

(This post is based on one of the topics from Energized Work’s World Cafe event at the Agile Business Conference: Governance – Friend or Foe?)

Cash flow keeps it real

In an increasingly integrated world it seems reasonable to have a standard such as IGAAP that allows international investors to examine the accounts of any company anywhere in the world and know they were calculated with the same rules. Law dictates that companies create accounts according to generally accepted accounting principles (GAAP). That’s one way to view a company’s financial performance and current position. It serves the purpose of investors and the market, and because of that it’s a view shared by the senior executives and board members. In between this view of a company and the reality inside it, things tend to get muddled. Really, the operational and financial view inside a company should be quite different to the outside view of investors.

Here’s Bill Waddell talking about Lean Accounting:

Abstraction is useful because it’s often applied to make things easier to understand. But with too much abstraction we can lose touch with reality and suffer abstraction fatigue. When abstraction goes so far it can create a whole new universe more complicated than reality, then we should be worried. When that universe is perceived to be reality, then alarm bells should be ringing.

When an asset isn’t an asset

Traditional accounting practices see inventory as something of value to the business, so it’s treated as an asset. That means software still being worked on, or completed software not yet released, is treated as an asset. I find it hard to accept that holding equity in something that’s not generating revenue is actually an asset. If you’re living in a house and paying a mortgage on it, that house is more like an expense. If that same house were being rented out at a profit, then I’d consider it an asset. I prefer to deal in what’s real. Cash is real. It’s not the accounting definition of an asset, but I like the idea of an asset being something that creates positive cash flow and drives a profitable service.

Inventory is a high-risk investment

Building inventory is a high-risk investment because it assumes that demand will exist for whatever is being built when it is finished. What if demand dries up? What if the wrong thing is being built?

The world of software is dominated by the Big Project funded by a Big Budget and scheduled for a Big Bang release. That’s a mighty big bet, with a big pile of cash riding on getting it right first time. Don’t worry, it’s a corporate asset, right? And with the added benefit of tax relief on the capital expenditure for research and development. Splendid! All looking good on the balance sheet. No. It’s inventory, and as Waddell says, “inventory is a great big pile of cash that’s being drained from the company on a regular basis.”

Shorter cycles and smaller bets

Lean principles are about reducing the time between being asked for something and delivering it. Two-week iterations are popular. If software is released every week instead of every fortnight, inventory is reduced. Continuous delivery, where each feature is released upon completion, reduces inventory even further. Less money is spent more slowly. That can only be a good thing for cash flow.
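The cash-flow effect of release cadence is easy to put in numbers. As a sketch, assume a steady spend rate and inventory that drops to zero at each release; the average cash tied up in unreleased work is then proportional to cycle length (the figures below are illustrative):

```python
def average_inventory(weekly_spend, cycle_weeks):
    """Average cash tied up in unreleased work, assuming steady spend that
    accumulates over a cycle and is released (back to zero) at its end.

    Inventory ramps linearly from 0 to weekly_spend * cycle_weeks, so the
    time-average over the cycle is half the peak.
    """
    return weekly_spend * cycle_weeks / 2

# Spending 10,000 a week, compare monthly, fortnightly and weekly releases
for cycle in (4, 2, 1):
    tied_up = average_inventory(10_000, cycle)
    print(f"{cycle}-week releases: avg inventory = {tied_up:,.0f}")
```

Halving the cycle halves the average inventory; releasing each feature on completion pushes it toward the cost of a single feature in flight.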

In terms of risk, making small investments to continuously validate features with customers has proved an effective way to deal with uncertainty. Releasing early and often presents repeating opportunities to drive revenue, keep sunk costs to a minimum, and ultimately achieve profitability sooner. Or to find out earlier that we should stop investing.

Cash basis accounting

Over two years, starting back in 2008, I experimented with Throughput Accounting applied to the portfolio management and governance of software product streams. It felt clumsy and the model didn’t really fit so I moved to Lean Accounting. Eventually I settled on plain old cash basis accounting – money in, money out. This has proved a more meaningful way of accounting that helps visualize software economics and facilitate more informed decision-making. Real, unequivocal profit comes from having more cash coming in than going out and not by making accounting allocations and assignments. Waddell’s analogy of the family budget is spot on. When you sit down at the end of each month there’s no ambiguity – you know exactly whether it was a good month or a bad month.
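A cash book in this spirit is almost trivially simple, which is rather the point. A minimal sketch (the entries and amounts are illustrative):

```python
from collections import defaultdict

def monthly_net(entries):
    """Plain cash-basis view: money in minus money out, per month.

    `entries` are (month, amount) pairs; positive = cash in, negative = cash out.
    No accruals, no allocations -- just what actually moved.
    """
    net = defaultdict(float)
    for month, amount in entries:
        net[month] += amount
    return dict(net)

cash_book = [
    ("2014-01", 42_000.0),   # client payment received
    ("2014-01", -35_000.0),  # salaries and overheads paid
    ("2014-02", 40_000.0),
    ("2014-02", -43_000.0),  # a bad month: more out than in
]
print(monthly_net(cash_book))  # {'2014-01': 7000.0, '2014-02': -3000.0}
```

Exactly like the family budget: at the end of each month there’s no ambiguity about whether it was a good month or a bad one.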

Keep it real

The increasing popularity of Lean Accounting, slow as it is, and of Beyond Budgeting of course, suggests there is some recognition that operational decisions would be better informed by something other than the abstracted accounting numbers we find in traditional company accounts. Why not run simple cash books in parallel with the GAAP accounts? Move more accountants to the front line, closer to customers, where their financial insights can add real value.

Further reading:

  1. Throughput Accounting
  2. Throughput Accounting: A Guide to Constraint Management
  3. Practical Lean Accounting: A Proven System for Measuring and Managing the Lean Enterprise

Fund success through lots of small investments not from one big estimate

If you have to estimate, at least know why you’re estimating and do it appropriately so that it’s in some way useful, with associated risks clearly defined. Understand what the estimate will be used for, choose a suitable technique, provide each estimate as a range (incorporating a percentage error or tolerance), and don’t invest time trying to make the estimate more accurate.
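As a sketch of “provide each estimate as a range”, the helper below turns a most-likely figure and an assumed percentage error into a low–high range. The tolerance value is the honest part: it should widen with uncertainty rather than being negotiated down.

```python
def range_estimate(most_likely, tolerance=0.5):
    """Express an estimate as a range, not a point.

    `tolerance` is the assumed percentage error (0.5 = +/-50%). The wider
    the range, the more honestly it communicates what we don't know.
    """
    low = most_likely * (1 - tolerance)
    high = most_likely * (1 + tolerance)
    return (low, high)

low, high = range_estimate(20, tolerance=0.5)  # e.g. 20 days of work
print(f"estimate: {low:.0f}-{high:.0f} days")
```

Quoting “10 to 30 days” instead of “20 days” makes the risk part of the conversation rather than a surprise later.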

Absurdity and bad behavior

A certain amount of absurdity and bad behavior always accompanies estimation.

When you don’t have enough information to even make assumptions, and you’re being pressured for estimates, you can base them on hallucinations and take the consequences when they come. And they will come. Being pressured to make any kind of estimate without appropriate information feels like a bullying tactic to secure the blame for later on. Is that a tacit admission by the bullies that they think the project is likely to fail anyway? If there’s any truth to that, that’s really quite fucked up. Sadly, many projects are so obviously set up to fail, and fail they do. It’s really no wonder people behave in this way.

There’s likely to be a good reason for the lack of appropriate information. If it relates to scope, for example, I’ll bet the business people don’t really know what the business needs are or what customers actually want. I wouldn’t know for sure. I might have lots of assumptions and what I think are great ideas but I’d still be unsure. That uncertainty is reality. What’s dumb is when people pretend that uncertainty doesn’t exist. Perhaps they think admitting they don’t know would be embarrassing, even career-threatening. In the typical corporate environment, that’s a fair point. It’s unsafe to show weakness. That doesn’t excuse making unreasonable, often ludicrous demands of the folks responsible for delivery. Alas, it’s easier to do just that. Like I said in No Bull, “It’s always an IT problem, until it’s proved not to be.” By keeping the spotlight on IT, fewer people will know that people in the business are complicit in delivering the wrong thing.

The thing is, when everyone openly recognizes the uncertainty, an opportunity presents itself to have a very different kind of conversation. In my experience success begins right there.

Need to set a budget?

“Give me estimates. I need to know how much it’ll cost. I need to set a budget.” This thinking drives a vicious circle that keeps software delivery in failure mode. “Guess what? Yet again IT failed to deliver against their estimates.” Err yeah, because they were estimates! And so IT is forced to provide more accurate estimates next time. For crying out loud, there’s no such thing as an “accurate estimate” – it’s an oxymoron. Rather than perpetuate conversations grounded in make-believe, how about trying something else?

I get the need to set a spending limit. Who wouldn’t? It’s everyday thinking. But are costs calculated from estimates, which were perhaps made under ‘duress’, the best way to calculate a budget? There’s a lot riding on that figure after all. Is this sounding like it has some risk to it?

Instead, how about deciding the amount of money the company would be willing to lose if the business assumptions proved to be wrong? Let’s make that the initial financial investment with which to deliver a minimum viable product. First, the business people need to quantify the value those business assumptions will apparently realize. In Competitive Engineering, Tom Gilb shows how this can be done in a more meaningful way than anything we see in typical business cases or financial forecasts. It’s time business operations started closing the loop on business decisions and learning to use software development more effectively as part of that process.

Not getting it. Not getting the benefits

As a business, if you’ve got a truly agile delivery capability that keeps the cost of change affordable and the total cost of ownership low, you can discover the right things to deliver through constant iteration and continuous delivery. Your hands are on the wheel of something really quite special. Rather than betting the whole budget on a hole-in-one, make lots of small bets and code your way to what the business actually needs and what customers actually want, wasting no money on features that won’t be used.

Contrary to popular Agile belief, you can’t pull this off with incremental development and a release plan. Incremental development requires something at the start that describes the whole thing at some level of detail – that sounds like a spec of some kind, doesn’t it? A release plan means a big bang, or at least a series of medium pops; either way, feedback from customers comes via scheduled encounters. That’s not a continuous conversation, and the longer you go without feedback, the greater the likelihood of error and bad investment. This sounds like a bigger bet to me.

Lots of businesses are missing out because IT departments are busy trying to do Agile rather than working upstream in the business to be agile as a unified customer-oriented capability. And I suspect some business people are happy with this arrangement because the spotlight stays off them and IT continues to get blamed. The status quo is utterly preposterous when you think about it. One thing underlies all this: a complete lack of trust. It’s truly a sad state of affairs.

Business success requires risk be dealt with properly

What’s important here? The illusion of success from the comfort of the status quo or measurable success in the real world? Knowing that uncertainty is always there no matter what you do, is it not less risky to start small, deliver working features to customers every week proving business assumptions and ideas as you go, and continue to invest small amounts of money periodically for as long as it makes sense based on actual profitability and customer feedback? This is what gives a business real responsiveness in the market and an unfair advantage over its competitors.

All that’s needed is a small amount of money for initial outlay, a little bit of trust, and an agile delivery capability. Then you can laugh at the absurd truth in Dilbert’s estimation stories.

Engineering the -ilities iteratively

Delivering the right thing isn’t just about getting customers and stakeholders the functionality they need so they can achieve what they’re trying to do. A feature has function and performance characteristics. Most specifications, design decisions and cost estimates only consider the function bit.

“So it needs to do X.

How well does it need to do X?”

There are -ilities – reliability, usability, learnability, upgradeability, replaceability, scalability, etc. These are quantifiable characteristics that cost resources – time, hardware, network bandwidth, money, etc.

“How much of this -ility?

How much of that -ility?

What would that cost in terms of x, y, £?”

These are design decisions trading immediate and longer-term economics for the experiences being created. There is a multitude of options screaming for set-based validation.

Customers trust us to get the -ilities right. If we don’t, customers get a shit experience no matter how shiny the turd – the shiny soon loses its lustre. It’s ok for customers to be wowed by the shiny and not give any consideration to the -ilities they expect. It’s not ok for us to do the same. We’re responsible for working with customers and stakeholders to specify quantified -ilities given the desired experiences we want to cause, the resources we have available, and the investment we’re prepared to make.
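One lightweight way to make that specification concrete is to give each -ility a measurable target instead of a vague adjective. A minimal sketch in Python – the metrics, targets and units below are hypothetical examples, not a standard:

```python
# A hypothetical quantified -ility spec: each quality attribute gets a
# measurable target instead of a vague adjective like "fast" or "reliable".
ility_spec = {
    "reliability": {"metric": "successful requests", "target": 0.999, "unit": "ratio/month"},
    "usability":   {"metric": "time to first successful checkout", "target": 120, "unit": "seconds"},
    "scalability": {"metric": "p95 response time at 10x load", "target": 500, "unit": "ms"},
}

def is_met(name, measured):
    """Check a measured value against its quantified target.
    Ratios are better when higher; time-based metrics are better when lower."""
    spec = ility_spec[name]
    if spec["unit"].startswith("ratio"):
        return measured >= spec["target"]
    return measured <= spec["target"]

print(is_met("reliability", 0.9995))  # 99.95% uptime meets the 99.9% target: True
print(is_met("scalability", 600))     # 600ms misses the 500ms target: False
```

Once the targets are numbers, the conversation about cost ("what would 99.99% rather than 99.9% cost in terms of x, y, £?") has something to anchor on.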

When the -ilities are a design afterthought, chucked in a ‘bucket of miscellaneous’ called ‘non-functional requirements’ and left to the end, you get rework and disappointment.
The shiny gets reworked later on because the -ilities can’t match up, costing extra money and upsetting the customer who signed off on a certain version of shiny way back when. Leave engineering the -ilities out of the design thinking at your peril.

The real capability is to make engineering the -ilities a continuous and intrinsic part of emergent design thinking, iterative development and continuous delivery, based on an investment strategy that funds the careful crafting of a system that meets today’s demand today, and trusts that it can be re-crafted cost-effectively tomorrow to scale and meet tomorrow’s demand.

Thinking about effectiveness

Outputs create outcomes which have impact. Output is what we ‘physically’ deliver – typically that’s running tested software. The outcome is the net result our output produces – we helped customers to achieve the thing they wanted to achieve. Huzzah! Or maybe we didn’t. Bugger! Impact is the ripple effect of consequences from the outcome. Of particular note are the experiences we cause people to have – customers are ecstatic and tell their friends; we get more customers, more revenue, improved profitability. Or customers give up and look elsewhere for a solution to their problem. Another desired outcome might be that certain technical debt is repaid. The impact? Perhaps an upcoming feature will now be cheaper to deliver and will carry less risk. Impact gives us reasons why. If our goal is to create a certain impact then we must consider the outcome and the output as hypotheses.

A measure of output alone, for example productivity, velocity or throughput, isn’t useful in determining how effective we are. We could be delivering the wrong thing; we could be delivering a load of wrong things.

Now I’m thinking out loud …


Effective: successful in producing a desired or intended result

Effectiveness: the degree to which something is successful in producing a desired result; success

Ok. So, if we are effective then we are successful in producing the desired impact. We caused people to have the experiences we wanted them to have and consequently they took the actions we wanted them to take. We helped them achieve whatever it was they wanted to achieve. We delivered the right thing. How impact is measured is entirely specific to that impact. And given the ripple effect an outcome is likely to have a different impact on different people in different contexts at different times. A measure of impact is not a measure of effectiveness. We might have had great impact but maybe we got lucky. Effectiveness feels like a performance characteristic – a measure of our ability to produce the desired impact. How could we measure effectiveness?

I’ve messed around with the idea of measuring effectiveness before. Back then I saw effectiveness as an indicator of a team’s ability to “sustain throughput (i.e. the number of stories released to production that deliver value) while fixing defects immediately and repaying technical debt to keep the amount of rework small”. I think the premise is still valid but maybe not the whole story. I’ve seen too many teams go on a feature-fest delighting the pants off pushy business owners but creating shit awful software. Every one eventually ground to a halt as the code became increasingly difficult and expensive to change. Technical debt comes back and bites! Surprise, surprise. Again, I think the idea of impact being ripples of concentric effects allows for a set of consequences from one outcome – delighted customers (problem solved); happy stakeholders (business value realized); healthy product (habitable code and hassle-free operations); healthy team (social capital).
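That older indicator – sustaining throughput while keeping rework small – can be sketched as a simple ratio. The function and its inputs below are illustrative, a thinking-out-loud formula rather than a prescription:

```python
# Hypothetical effectiveness indicator: of the stories released, how many
# produced the desired impact, and how much capacity was burned on rework?
def effectiveness(released, impactful, rework_items):
    """released: stories shipped to production.
    impactful: stories that produced the desired outcome.
    rework_items: defect fixes and reworked stories in the same period."""
    if released == 0:
        return 0.0
    hit_rate = impactful / released  # did we deliver the right thing?
    rework_ratio = rework_items / (released + rework_items)  # how much was waste?
    return hit_rate * (1 - rework_ratio)

# A team shipping 10 stories, 8 of them impactful, alongside 5 rework items:
print(round(effectiveness(10, 8, 5), 2))  # 0.53
```

A feature-fest team scores well on `hit_rate` early on, then watches the `rework_ratio` term drag the number down as the technical debt bites – which is exactly the failure mode described above.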

Efficiently effective

Efficient: achieving maximum productivity with minimum wasted effort or expense

Efficiency: the state or quality of being efficient; the ratio of the useful work performed to the total energy expended

Pursuing efficiency without first being effective risks delivering the wrong things well. There are other wastes besides defects and rework – basic coordination and transaction costs, for instance. If we are efficiently effective then we are producing maximum positive impact with minimum waste and expense. How could we measure efficiency?
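Taking the dictionary’s ratio literally gives one crude sketch – the cost categories here are assumptions for illustration, not a standard taxonomy:

```python
# Hypothetical efficiency ratio: value-adding effort over total effort,
# where coordination, transaction and rework costs count as overhead.
def efficiency(value_adding_hours, coordination_hours, transaction_hours, rework_hours):
    total = value_adding_hours + coordination_hours + transaction_hours + rework_hours
    return value_adding_hours / total if total else 0.0

# A week where 60 of 100 team-hours went into useful work:
print(round(efficiency(60, 20, 10, 10), 2))  # 0.6
```

The point of writing it down isn’t precision – it’s that the ratio forces the overhead categories into the open, where they can be questioned.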

Any thoughts? Please comment.
