the mba nerd
Let me introduce you to my friend: the MBA Nerd.
He is very smart and a bit of a wizard when in front of Excel spreadsheets and PowerPoint slides. Above all, he means well and only wants to optimize outcomes.
He confuses the model with reality, the visible with the real. He wants clean inputs and clean outputs. When, back in the day, Tim and I were tinkering with investments, we were MBA Nerds of the professional variety. Our model's clean outputs felt like real insight.
In beast mode, the MBA Nerd would sell one of your lungs (you only need one anyway) and invest the proceeds in an efficient portfolio.
In the business and financial world, he produces fragile systems: over-optimization, high leverage, risk rebranded as innovation rather than recognized as exposure.
In moral philosophy, he produces something similar: the urge to treat ethics as an optimization problem, the effort to maximize the visible and ignore the invisible (or less visible).
This piece is about that ignored part.
the appeal of moral arithmetic
Consequentialism, in its simplest form, is hard to dislike. It says: choose the action that leads to the best outcomes. Don’t fetishize rules. Don’t hide behind principles. Care about what happens to real people.
As a starting impulse, that is sane.
The difficulty is not the aim. It is the confidence with which the aim becomes a method.
When people argue about the trolley problem, the argument often turns into a contest over arithmetic. Five lives versus one. Net plus four. Pull the lever.
The debate is framed as though moral reasoning occurs in a closed system, with known outcomes, fixed probabilities, and no downstream effects.
But morality is practiced in open systems. Systems with uncertainty, incentives, abuse, and long chains of consequence. A method that performs well in closed systems can fail catastrophically in open ones.
The trolley problem is useful mainly because it highlights this mismatch.
why the trolley problem matters and why it doesn’t
The classic trolley problem offers two versions of the same calculation.
In one version, you divert a trolley by pulling a switch: one person dies, five live. The vast majority of people (approximately 90%) say you should pull the switch.
In another, you push a man off a bridge to stop the trolley: one person dies, five live. Yet in this scenario a similar proportion, roughly 90% of survey respondents, refuse to push the man.
Some, including MBA Nerds, treat this difference as an embarrassment: a contradiction that proves our intuitions are irrational, distorted by proximity or physical contact.
A more charitable reading is that the difference is not primarily about distance. It is about permission.
Diverting a trolley can feel like managing harm already in motion—choosing the lesser tragedy when tragedy cannot be avoided.
Pushing someone can feel like something else: making an innocent person into an instrument. Authoring harm directly, not merely redirecting it.
In an academic experiment, you can try to strip that distinction away. In real life, the distinction is a boundary that prevents certain kinds of reasoning from spreading.
The question is not only “what happens in this case?” The question is “what are we allowing people to do, and what kind of society does that permission create?”
To see that clearly, the trolley needs to go to the hospital.
the hospital thought experiment
Imagine five patients who will die without organ transplants. A healthy man arrives for a routine checkup. A doctor could kill him and use his organs to save the five.
The arithmetic is the same: one dies, five live.
Yet almost no one wants to live in a society where this is permissible.
That reaction is often dismissed as emotion interfering with reason. But it can also be seen as practical moral reasoning about a world with incentives.
If doctors are permitted to harvest the innocent for the sake of the many, hospitals stop being places of refuge. People avoid care. Trust degrades. You begin to treat every professional with suspicion because the rule creates a new background risk: the risk of being optimized away.
The first-order consequence is the one the spreadsheet captures: five survive today.
The second- and third-order consequences are what determine whether the system remains worth living in.
This is the central problem for any ethics that relies heavily on outcome calculation: outcomes are not only immediate and visible. The most important outcomes are often indirect.
the epistemic problem
To apply consequentialism in the way its advocates often imply, you would need to know the net sum of consequences across all orders.
Not just:
- what happens immediately,
but also:
- how people respond to what is now permitted,
- how institutions adapt,
- how incentives change,
- how the permission is exploited,
- what new risks are created,
- what trust is destroyed and how that changes behavior.
In practice, we do not know these things well.
We can sometimes make reasonable local estimates. We can sometimes see obvious harms. But open systems have a habit of hiding their most consequential effects until late.
The MBA Nerd’s characteristic error is not that he cares about outcomes. It is that he acts as though the outcomes he can count are the only outcomes that exist.
A moral framework built on that assumption will tend to recommend interventions that look good on paper and age poorly in reality.
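To see the error in miniature, here is a toy calculation in Python. Every number in it is an assumption invented for illustration (the 5% chance the permission is exploited, the size of the eventual systemic cost); the point is that the first-order sum is stable and visible, while the terms that decide the sign of the answer are neither.

```python
# A toy model of moral arithmetic, with invented numbers throughout.
import random

random.seed(0)

def first_order_value():
    # The spreadsheet view: five saved, one lost. Net +4, every time.
    return 5 - 1

def with_higher_order_effects():
    # Same act, but now it carries a small assumed chance (5%) that the
    # permission is exploited and trust collapses, at a large delayed cost.
    # Neither the probability nor the cost is knowable in practice;
    # that is exactly the epistemic problem.
    value = 5 - 1
    if random.random() < 0.05:
        value -= random.uniform(50, 500)
    return value

n = 100_000
samples = [with_higher_order_effects() for _ in range(n)]
print("first-order net:", first_order_value())          # always +4
print("net with higher-order risk:", sum(samples) / n)  # typically well below zero
```

Nothing about the sketch proves which numbers are right. It only shows that the calculation's conclusion is hostage to the terms the spreadsheet leaves out.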
rules as anti-exploitation technology
This is why moral rules can be more than archaic superstitions. They are constraints that keep certain kinds of reasoning from becoming tools of exploitation.
“Do not kill an innocent person” is not merely a sentiment. It is a barrier against a class of abuse.
Once you allow exceptions based on aggregated benefit, you create a demand for justification and a market for arguments. People get good at producing moral arithmetic that supports what they wanted to do anyway.
Cleon, in Thucydides’ account of the Mytilenean debate (Book 3), warns the Athenians about this:
"The most alarming feature [is]... our seeming ignorance of the fact that bad laws which are never changed are better for a city than good ones that have no authority... and that ordinary men usually manage public affairs better than their more gifted fellows. The latter are always wanting to appear wiser than the laws, and to overrule every proposition brought forward, thinking that they cannot show their wit in more important matters..."
His point isn’t that reasoning is bad; it’s that rhetoric can become insidious and deceitful, especially when the stakes are moral and the audience wants to feel “wise” rather than be safe.
The danger is not only that someone makes a mistake in a single case. The danger is that the exception becomes precedent, and the precedent becomes a method.
In other domains, we recognize this pattern with less difficulty. A little leverage improves returns until you blow up. A little optimization improves efficiency until it removes slack and turns the system brittle. A rule that looks restrictive, mere friction, is often the reason the system survives stress.
Moral constraints serve the same role: they prevent local “improvements” that create systemic vulnerability.
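The leverage analogy can be made concrete with a short simulation. All parameters here are hypothetical (each period's return is drawn from a normal distribution with a 1% mean and 5% volatility); the sketch only illustrates the shape of the problem: modest leverage lifts the typical outcome, and past a point the same bets make ruin nearly certain.

```python
# A minimal sketch of leverage and ruin, with hypothetical parameters.
import random

random.seed(1)

def terminal_wealth(leverage, periods=200):
    wealth = 1.0
    for _ in range(periods):
        r = random.gauss(0.01, 0.05)  # one period's market return (assumed)
        wealth *= 1 + leverage * r    # levered exposure to that return
        if wealth <= 0:               # a large enough levered loss is fatal
            return 0.0
    return wealth

for lev in (1, 2, 4, 8, 16):
    runs = sorted(terminal_wealth(lev) for _ in range(5_000))
    median = runs[len(runs) // 2]
    ruin = sum(w == 0.0 for w in runs) / len(runs)
    print(f"leverage {lev}x: median wealth {median:.2f}, ruin rate {ruin:.1%}")
```

The median improves over the first turns of leverage and then collapses; the ruin column is the part a first-order view never prints.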
a place for consequentialism, properly constrained
None of this requires rejecting outcomes. It requires being modest about our ability to foresee them.
Consequentialism can be useful as a lens:
- to notice harms rules may hide,
- to compare obvious tradeoffs,
- to evaluate institutions over long periods.
A lens, not a license to cross fundamental boundaries on the basis of clean calculations made under deep uncertainty.
If you cannot reliably see the second- and third-order consequences, be humble about the limits of your "maximize the good" calculations.
In opaque worlds, humility is not optional.
conclusion: ethics for people who don’t know the future
We do not live in a world where consequences come labeled and summed. We live in a world of partial information and delayed effects.
That fact does not make moral reasoning impossible. It changes its character.
It means we should be cautious about moral frameworks that depend on precise enumeration of outcomes, and more respectful of constraints that have survived, perhaps because they protect society from certain failure modes: exploitation, loss of trust, and the gradual normalization of using people as means.
The trolley problem is often presented as a test of whether your moral instincts can be made consistent.
It can also be read as a reminder that consistency is not the same thing as wisdom.
In an opaque world, the goal is not to win the puzzle. The goal is to build rules, habits, and institutions that remain humane in the face of limited knowledge and imperfect incentives.
That is a different project than pulling levers in thought experiments.
It is also the project we actually have.