Yes, we are all bored of talking about it, but it’s not going away. We are in the midst of a major technological transformation on a scale as impactful as the adoption of mobile phones, of the internet, of personal computers. Even though this new technology is scary and unprecedented, human reaction patterns are not. We will find that much of what happens next is rooted in familiar historical patterns. I am writing in reaction to what I see as undue alarmism over the potential impacts of AI on high-skill jobs. I find myself bullish about this upcoming future, not because I’m an eternal optimist or have some snake oil to sell, but because history, incentives, human ingenuity, and competitive markets all point in that direction. While we must not tread haplessly in a blind trust that markets will sort things out for the best, neither avoidance and rejection, nor fear and doomerism, will provide the framework to guide us in shaping this future.
Jevons and Copernicus
The Jevons paradox is frequently invoked to defend against claims that AI will cause massive disruptions in the job market. The idea is that if a resource becomes cheaper to produce, total demand for it will rise because people will find new uses for it that were previously uneconomical. It seems intuitively obvious to me that this is the future AI will bring us: we simply cannot account today for the unforeseen sources of demand that have yet to be unlocked. Rather than fighting over whether this observation can rightly be elevated to an Iron Law of Economics, or fixating on the various differences between 19th-century coal production and modern-day software development that may call its present-day applicability into question, I am content to riff on it as a handy analogy, an apt explanation of the mechanism by which the exact opposite of what we would logically expect is in reality the more likely outcome.
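To make the mechanism concrete, here is a toy numeric sketch of the textbook Jevons condition (the elasticity figure and prices are illustrative assumptions of mine, not measurements of anything): when demand for a resource is elastic enough, cutting its effective price raises not just the quantity consumed but the total amount spent on it.

# Toy constant-elasticity demand model illustrating the Jevons mechanism.
# All numbers are illustrative assumptions, not estimates of real demand.

def quantity_demanded(price: float, elasticity: float, k: float = 1.0) -> float:
    """Constant-elasticity demand curve: Q = k * P^(-elasticity)."""
    return k * price ** (-elasticity)

ELASTICITY = 1.5                 # assume demand is elastic (> 1), the Jevons condition
old_price, new_price = 1.0, 0.5  # suppose AI halves the effective cost of a unit of work

old_q = quantity_demanded(old_price, ELASTICITY)
new_q = quantity_demanded(new_price, ELASTICITY)

print(f"quantity demanded: {old_q:.2f} -> {new_q:.2f}")                          # 1.00 -> 2.83
print(f"total spend:       {old_price * old_q:.2f} -> {new_price * new_q:.2f}")  # 1.00 -> 1.41

If the elasticity were below 1, total spend would fall instead of rising; the entire argument turns on whether the latent demand for software is as elastic as I believe it to be.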
Against the more alarmist predictions of AI’s knock-on effects, I would like to apply the Copernican Principle as an even better framework with which to ground our perspective. We should not assume that we are living in a privileged period of history. Probability implores us to conclude that our present experience, despite what appears to us in the moment as a rapid onset of unprecedented change, is actually entirely mediocre. Statistically speaking, it is unlikely that we happen to live at the apex of a singularity that will suddenly, starkly, and forever change the nature of humanity itself. It is unlikely that the AI revolution will eclipse all other technological advancements in the history of our species, becoming known to future historians as the undeniable turning point in our evolution. What is likely is easily extrapolated from observations of similar advancements throughout history, both distant and recent. There will be periods of experimentation, upheaval, adoption, and co-option. There will be winners and losers. Many things will change, some for the better, some for the worse. It will be exciting and terrifying, all because we have not yet written the pages in front of us. But this is not the end of history, or of science or technology. It will be just one more chapter, and really, this time is no different.
My intent here is to leverage the following two arguments to advance a claim that AI, once the business cycle plays out a bit further, will have created an enormous new set of opportunities for software developers and similar technical professionals:
- Even if AI is able to displace workers by doing their current jobs, it will never keep pace with the unlocked demand the future holds.
- The Market will not allow highly skilled workers to sit idle when there is profit to be made.
The first point is basically Jevons, imbued not so much with optimism for the future as with a nod to Ecclesiastes 1:9: there is nothing new under the sun. The way this will play out, in fact the way it is already playing out, follows the same playbook we’ve seen over and over again since the day Galileo dropped the orange. Having circled the sun more times than I’d like to count, I’ve seen the rising and falling edges of so many trends, fads, cycles, and memes, and while the tide is flowing it may appear that it will flow forever, but sooner or later it will ebb again, like it always does. And this time is no different. This is what I mean by “once the business cycle plays out a bit further”. Remember the dotcom crash? We haven’t hit AI’s version of that yet, and once we do, we will then all be talking about “AI 2.0”. Mark my words.
!remindme 10 years
The second point highlights the mechanism through which this magical job creation thing happens, and omg shoot me before I turn into a supply-sider. Let me back it up a bit because you should never anthropomorphize The Market: it hates when you do that.
But more seriously, or as seriously as my arrogant take on undergraduate economics can bear, there is no such thing as “The Market”; it’s just a convenient fiction they tell you so you don’t complain when your soy latte is a dollar more expensive. There is no such thing as The Market; all you have is a bunch of people, specifically Money People. And in my experience, if there is any such thing as an Iron Law of Economics, it’s this: Money People want to make more money. If one of the Money People has amassed some ridiculous sum such as $1 billion, you would expect them to do the reasonable thing: declare victory and summarily retreat into a nice comfortable hobby like bee-keeping or contract bridge. But that’s never what happens. Instead, Money People do everything in their power, which sadly usually involves things they really shouldn’t have the power to do, in order to turn that $1 billion into $1 billion++. Most of the time this manifests in the standard methods for wealth acquisition: price-fixing, bribing regulators, blaming immigrants, buying elections, and so on. But on the rare occasion, and here’s where the rest of us stand a chance to benefit, a job just might be created in there somewhere. It just takes one of the Money People to observe that a high-skill worker can easily be convinced to part with their surplus labor and then, Bam! Profit!
I have been blown away many times by what LLMs have been able to produce, or more specifically, what I have been able to humbly produce with their help. But I will have none of the doomerism, and I am far from a blind optimist. I’m a rationalist, almost to a Hegelian extreme. I find myself uncomfortably allied with the AI bros (hey, in this industry, you’re either selling snake oil or you’re buying it) because the boosters are making claims I find far more realistic than the dire predictions of catastrophic job loss and mass poverty. Step outside the tunnel vision of excitement or panic that leads you to conclude that just because AI can invert a binary tree faster than you can, your job is soon to be forfeit.
In developing my arguments, I would like to begin with two of the most egregious examples of alarmism, then trace their roots to common tropes that arise in business press reporting on AI news. I will steel-man the opposing argument and allow rightful space for the uncertainty that things could still go horribly wrong if we ignore the hard decisions we must make as a society. Lastly, I will make some predictions: predictions not derived from any keen insight into the future, but simply grounded in the consistent lessons of human behavior from the past.
The Foils
To begin my assessment of the landscape that has given rise to the most fanciful AI doomsday scenarios, consider two essays that successfully amplify a deep subconscious fear many of us feel in reaction to the endless hype-pumping uncertainty we face: Something Big Is Happening by Matt Shumer and Your bridge to wealth is being pulled up by Daniel Homola.
Their arguments, as I read them, work like this: AI models are surpassing human ability at cognitive tasks. Therefore, in five years’ time the world will be an unrecognizable hellscape in which a select über-class of AI owners operates a fully autonomous, self-sufficient economy with exponentially compounding bank accounts while the remaining 99.999% suffer untold misery amid nasty, brutish, short, tokenless lives.
Both Shumer and Homola base their warnings on observations that are certainly true: AI models are performing as well as or better than humans at a variety of high-skill tasks, and the models continue to improve at a fast pace. But the analysis ends there. They jump to the conclusion that these facts prove we’re all doomed, yet they never explain the mechanism by which the doom will arrive. The mechanism is left unsaid, an exercise for the reader, an implied “it stands to reason”. I find this irresponsible, since most of their effort goes into building a drum-beat of desperation rather than taking a critical stance on how this terrible future is supposed to come into being.
Implicit in this logical leap is the assumption that eventually the models will be able to operate unprompted and completely autonomously. While it’s laughable to imagine an AI building the newest “Uber for X”, securing YC funding, and attracting millions of users with no humans in the loop, let’s grant that there is a subset of use cases in which a substantial portion of software could be fully generated by AI with minimal low-skilled human operators. How will such firms compete against those who pair high-skilled labor with the same tools? While some will certainly choose to use AI simply to cut costs, those who innovate will grow and be rewarded, leading others to follow their example and heralding the start of another business cycle. Focusing on the doom narrative completely bypasses these inconvenient arguments.
Let’s look at how these dire predictions are being seeded, because I think you’ll find there’s a lot more psychology than rationality or economics to explain their spread.
The problem in the business press
What feeds into this paranoia are mainstream media and business press articles like Satya Nadella says as much as 30% of Microsoft code is written by AI. I want to focus on this quote:
Microsoft and Meta together employ tens of thousands of software developers, but they’re the latest companies to discuss how AI is replacing some of the work written by human software developers.
Note there is a massive difference in meaning between “replacing some of the *work* of developers” (emphasis mine) and “replacing some of the developers”. And yet in articles like this, everyone simply reads it as the latter. It is an implied assumption, but nobody bothers to call this out or explain the mechanism whereby it could actually occur.
AI code-writing claims like those above from Nadella and Zuckerberg, or from their friends Pichai, Amodei, and Benioff, all feed into this hype, but they never specify exactly what the claims mean. Non-technical readers interpret them to mean that “it stands to reason” we can fire the remaining x% of engineers because AI is now doing all the programming work. Such quotes feed the zeitgeist that we are only a few steps removed from AI taking everyone’s jobs.
Like Shumer and Homola, I too have used these models extensively. I too have seen how much they’ve improved in a very short time. I too have been amazed, impressed, blown away, but never once have I felt intimidated into thinking they could take over my job. They are simply tools to help me do my job better, like syntax highlighting or Dark Mode.
How could they actually replace software developers? Who’s prompting them? Alice in accounting? Bob the intern? Zoltar from Coney Island? Between admiring the code generated by a good prompt to an LLM and that LLM actually supplanting an engineer’s job, there is an enormous gulf, and I refuse to let anyone hand-wave it away. I find it preposterous to assume that the current (and generally quite good) state of LLM code generation, combined with a nod toward its rapid improvement, means the models can do better without high-skill humans in the loop.
But of course the business press loves these vague hand-wavy claims. They are seductive and tantalizing and lead to all kinds of clicks. They’ve also led to mass acquiescence that AI-induced job loss is inevitable.
In the next section I want to move beyond the hype, set aside ignorant misconceptions, and also dig deeper into a more defensible argument for how LLMs can take away jobs.
The Steel-man
Is it possible that The Market will converge on the cost-cutting advantages of AI? We already have evidence that the models can automate a great deal of previously manual and tedious work. Suppose the equilibrium toward which we are gravitating is one senior engineer supervising the output of multiple agents rather than a team of junior engineers. This is the strongest version of the AI displacement thesis. The fear is not that models become start-up founders or self-directed engineers in the science-fiction sense, but that they become sufficiently competent across large swaths of routine engineering tasks that we no longer need to staff as heavily in order to meet our present demand.
For so long, the demand for software engineers has been driven by the fact that our ideas for what to build exceed the supply of talent to build it. We just can’t build everything we want as fast as we want. By dramatically catapulting the productivity of the best engineers, AI confronts us with a sudden inversion of that demand curve.
My first response is that while AI may increasingly generate code, that does not mean it independently builds viable products. The real question is not whether models can emit code without being constantly babysat. Increasingly, they can. The real question is whether code emission is equivalent to the lifecycle of go-to-market product development, and whether autonomous execution is competitive with expert-guided execution. I remain deeply skeptical on both counts. Products are not code repos. They are evolving bundles of judgment, tradeoffs, stakeholder demands, failure response, and strategic choices. If AI lowers the cost of implementation, it simultaneously raises the premium on knowing what should be built, how it should evolve, and what risks are acceptable. In that world, high-skill humans become the most important leverage you have. Robert Englander develops this line of thought brilliantly.
My second response is the observation that nature and markets abhor a vacuum. As soon as the demand curve inverts, when productivity catches up with our ideas about what to build, we will just dream up more and better things to build!
Let’s try a highly contrived analogy. Imagine a hypothetical world where all buildings are constructed entirely with hand tools. Then power tools are invented. A house that once took a month to build now takes a day. The shallow conclusion is “Great, we may now fire 90% of the builders!” Just give the remaining 10% a set of power tools and we can meet all our building needs. All our current building needs.
The deeper and more historically realistic conclusion is “Ok, remember that hospital we wanted to build but couldn’t because everyone was already too busy building houses? Well now we can! Or that school, or that bridge, or my axe…” Suddenly it becomes so much cheaper to experiment with different techniques, to iterate toward safer structures and better materials, to build whole neighborhoods that didn’t exist before. The demand for things we haven’t thought of yet will explode, and the world will need even more builders than we have now.
My third response is that today’s junior engineers are doing what senior engineers did yesterday. AI may displace the work of today’s juniors, and firms may stop hiring entry-level employees in the short term. But once we’re further into the business cycle, we will discover an entire new class of work fit for the junior engineers of tomorrow. What that might be, nobody yet knows. It will shake out as AI is more thoroughly adopted and we begin to see specialization in the types of work it unlocks. As that specialization evolves, intelligent managers will begin to spot the kinds of work that juniors should take on instead of seniors.
This is the Jevons logic applied to AI: the productivity gain does not merely compress existing demand; it unlocks adjacent demand we cannot yet enumerate. Are we really to believe that millions of highly skilled technical professionals, armed with tools that make them far more efficient than in years past, will watch the world simply decide, “No, we can stop innovating now, what we have is enough, let’s not invent anything else and just cheaply maintain what we already have”? The Market will simply not stand for it. That is why predictions based solely on today’s task inventory are so fundamentally myopic: they assume tomorrow is merely today with better autocomplete. History has rarely been that boring.
The bright red Caution tape
One of the best expressions of the strong version of the argument that AI may dramatically impact employment is Why I’m Worried About AI Job Loss by Clay Wren, written in response to Why I’m not worried about AI job loss by David Oks. Wren grants that a Jevons-like effect is likely to occur, but warns that a net positive outcome can still mask a great deal of harm caused by the uneven distribution of gains from AI’s transformation of the workforce. Case in point: the lower prices for commodity goods brought about by the movement of manufacturing jobs out of the US were no comfort to the pockets of industrial towns across the Midwest and South that saw a large chunk of their employment base eliminated. Wren also reminds us that there is no guarantee that the new jobs created by AI will command the salaries we’d expect of high-value work.
Nathan Witkin’s article The Jevons Paradox for Intelligence does an excellent job refuting many of Wren’s counter-arguments about the marginal impact of the Jevons effect. I find Witkin’s conclusions very much on point:
Wren’s, and many others’, main mistake is to assume that firms will only use AI to more efficiently produce the exact same products.
This is the fundamental lesson of the new Jevons paradox for knowledge work. Generative AI represents an utterly vast, new, and widely accessible vein of intelligence. To imagine that we will only use it to do what we already do faster represents a catastrophic failure of imagination.
On balance, however, Wren leaves us with some extremely salient points cautioning us not to trust blindly that the introduction of AI will lift all boats equally:
The right question is: who captures the surplus?
Think seriously about who will own the systems that are about to become the most productive capital assets in human history, and pay attention to whether the institutional frameworks being built now will ensure you share in the gains.
I have been playing fast and loose with my Money People metaphor, and I do not wish for my flippancy to be misinterpreted as a naive faith that The Market will naturally sort things out in the most beneficial way for everybody. The revolutionary fervor of my youthful exuberance, having come of age during the rise of the Internet, a technology I assumed would eliminate prejudice by democratizing access to knowledge, has been somewhat tempered by the reality of surveillance infrastructure and algorithmic platforms designed to promote lies and sow division. The only way this doesn’t end in disastrous regulatory capture is if we have strong policy guidance to protect against capitalism’s worst (er, basic) impulses. As Wren concludes:
Benign outcomes from technological transitions have never been the default. They’ve been the product of deliberate institutional design: labor law, antitrust enforcement, public education, social insurance.
We absolutely must get in front of these policy decisions now if we have any hope of ensuring that the gains from AI-enhanced productivity are not hoarded by a select few.
My predictions
If it were not already obvious where I land, let me make it explicit: AI will create a massive number of new high-skill jobs. Some jobs will disappear, and some firms will cut recklessly. Some executives will mistake temporary cost savings for strategy. AI is not going to take your job, but clueless leadership just might.
And yet this is precisely where the logic of The Market and the Money People turns against the doomsayers. Imagine two competing firms, each with ten engineers. AI arrives and suddenly each team can operate at the productive capacity of a hundred. One firm pockets the gains, fires nine people, and congratulates itself on efficiency. The other realizes it now has the equivalent of a hundred engineers and decides to build everything that had previously been stuck in the backlog, plus three things nobody had yet dared to attempt. That second firm runs circles around the first. Then the rest of the industry FOMOs into the next cycle, as it always does.
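For anyone who wants the arithmetic of that thought experiment spelled out, here is a minimal sketch, using the same hypothetical numbers as the paragraph above and nothing more:

# Toy model of the two competing firms described above; all numbers are hypothetical.
AI_MULTIPLIER = 10  # each engineer now operates at roughly 10x their former capacity

def capacity(headcount: int) -> int:
    """Post-AI output, measured in pre-AI engineer-equivalents."""
    return headcount * AI_MULTIPLIER

cost_cutter = capacity(1)   # fired nine engineers, pockets the savings
builder = capacity(10)      # kept all ten, attacks the backlog

print(cost_cutter)  # 10  -> roughly the output it had before AI arrived
print(builder)      # 100 -> ten times the old output for the same headcount

The cost-cutter ends up roughly where it started, only with a smaller bench; the builder gets an order of magnitude more output for the same headcount. That asymmetry is the whole reason The Market, such as it is, punishes the first strategy.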
Right now, 10,000 proto-founders are cranking away with Claude Code. Somewhere in the next few years, one vibe-coded unicorn will absolutely emerge, and the business press will lose its collective mind. We will be told, breathlessly, that no one needs engineers anymore. What they will neglect to mention are the 9,999 failures that disappeared into the statistical compost heap, or the fact that the unicorn will begin hiring an enormous technical staff the moment it becomes real enough to break in production.
The deeper truth is that AI lowers the cost of implementation while raising the premium on judgment. High-skill human plus LLM will outcompete low-skill human plus LLM. Every. Single. Time.
But history offers a second lesson, and it would be reckless to ignore it: productive transitions are not necessarily equitable ones. The internet created extraordinary wealth as well as massive walled gardens and algorithms that serve addiction and partisanship. Industrialization expanded prosperity while also generating brutal concentrations of power before institutions finally began to catch up. AI will likely follow the same familiar pattern. The gains may be enormous, but whether they are broadly shared or hoarded by a select few will depend less on the technology than on the policy frameworks we build around it.
This is the part the market does not solve for on its own. Labor law, antitrust, education, and an effective social safety net are the only things standing between a technological Renaissance and another Gilded Age with better autocomplete.
History has rarely been as boring as the doomers imagine.
This time is no different.