If you haven’t been living under a rock for the past few months, you’ve probably read the news about what’s happening in the high-tech sector. The companies driving the AI revolution are also the ones experiencing its secondary effects earliest. It’s a “bloodbath” for knowledge workers and the “great flattening.”1,2
Why it’s happening is rather simple to imagine. If you are the CEO of a company that creates, or has early “unrestricted” access to, a product that can perform at the PhD level,3 never takes lunch breaks, and would never file an HR complaint, then the first question your stakeholders and board will ask is: how quickly can you deploy and scale? The next question, the uncomfortable one, is how you will justify the high salaries paid to knowledge workers, particularly the middle layers - most of whom don’t even have PhDs and still take long lunch breaks.
At this point, you may expect me to discuss the pros, cons, or ethics of the “AI Replacement Theory” - or, if you are one of my fellow nerds, our position in the ever-changing process of “natural” selection. However, that is not where I am going. I am considering something entirely different, maybe even a bit selfish: at an individual level, should we be worried? And does the answer depend on the layer one operates at?
My thesis is simple, obviously opinionated, and highly biased: AI doesn’t eliminate the complexity we face at work - it amplifies it in new ways. Competent, self-assured individuals who are broad, deep in their domain, and excel at operating holistically will become the stars of the new era. A funny fact, in my opinion: these folks are overrepresented in the middle. If you are one of them, you should celebrate, as the future is yours (until, of course, we all bow down to the computer overlord).
To me, it is already an indisputable fact that AI and agentic workforces are transforming work. Those who disagree are either in denial, have not used or understood the technology, or simply dispute the time horizon over which this plays out. I digress. The arrival of agentic co-workers or labor (or bosses?) presents new requirements and challenges, and folks with the above-mentioned characteristics, regardless of layer, will now shine. The reasons can be distilled into the following.
Leadership: The Orchestrator’s Edge
AI is transforming work - automating tasks, accelerating workflows, and distributing decisions across humans and machines. But this shift creates new problems no algorithm can solve: aligning human judgment with machine outputs, maintaining accountability when your colleague might be silicon-based, and knowing when to trust the model versus your gut. These aren’t technical problems; they’re leadership problems.4 The winners here aren’t the technologists or the strategists, but the orchestrators who broker between human intuition and machine logic, translate strategy into prompts, and take ownership when AI meets messy reality.
Breadth: The Connector’s Premium
The specialist hiding in their silo? A dying species. When AI outperforms domain experts at narrow tasks, value shifts to those who connect dots across disciplines. Ask yourself: who trains the AI, spots when it’s confidently wrong, and sees patterns beyond single datasets? It’s the ones with broad knowledge and experience. This isn’t just technical breadth; it’s knowing how your industry actually works, why customers buy, and which competitors matter. The middle layer, once dismissed as “redundant bureaucracy,” suddenly becomes invaluable. They speak engineering to developers, strategy to the C-suite, and reality to AI. They grasp not just how to build, but why customers care and which partnerships matter. When AI excels at isolated optimization, connecting business context across domains becomes a superpower.
Depth: The Practical Expert’s Advantage
While breadth rises, depth transforms. Forget knowing more facts than Google, real depth means understanding the why behind the what - the edge cases, the unwritten rules. The middle layer has a unique advantage: they’ve risen from the trenches recently enough to remember how things actually work, but haven’t climbed high enough to lose touch. Not yet jaded or insulated, they know why that stupid process exists, remember customer pain firsthand, and can still review code or jump on client calls. Their expertise isn’t academic; it’s operational, battle-tested, and domain-specific. They understand how their function connects to business reality, which features matter to which segments, and what competitive moves actually threaten. In a world where AI has all the answers, they know which questions to ask - because they’ve lived the problems. Better yet, they also know who can be replaced and who cannot, and hopefully are also empathetic. I know, lots of requirements…
So, should we be worried? That uncomfortable boardroom question about the easy-to-target middle layers? They’re asking the wrong question.
The irony is delicious: the battle-scarred middle-layer leaders who tend to be designated AI “collateral damage” might be the most irreplaceable.5 They’re the orchestrators who know when to trust the machine. The connectors who smell when AI hallucinates because they understand the business, not just the data. The player-coach experts who’ve lived through the edge cases that make organizations actually work.
AI doesn’t eliminate complexity - it redistributes it. And that redistribution favors those who navigate between worlds: human and machine, strategy and execution, theory and practice. The corporate stars won’t be pure technologists or ivory tower strategists, but those who can orchestrate the messy symphony of human-AI collaboration.
The companies that understand and act on this nuance will write the success stories for the AI age. The rest will wonder why their perfectly trained models keep making perfectly logical decisions that miss the point entirely.
After all, someone needs to tell the PhD-level AI when it’s being an idiot. Might as well be someone who’s seen enough corporate stupidity to recognize it in any form.
What’s your take? Are we undervaluing the human infrastructure needed to make AI actually work?
If you’re interested in the human-AI interface challenge, read The Intelligence Inflection on why we’ve become the bottleneck in our own intelligence augmentation.
Footnotes

1. OpenTools AI. (2025). Microsoft’s 2025 Layoffs Signal Major Shift Towards AI and Efficiency.
2. Robinson, B. (2025). The Great Flattening Trend Is Picking Up Steam in 2025. Forbes.
3. Gartner. (2024). Gartner Unveils Top Predictions for IT Organizations and Users in 2025 and Beyond.
4. Fortune. (2025). Agentic AI Systems Must Have ‘a Human in the Loop,’ Says Google Exec.
5. McKinsey & Company. (2025). Middle Managers Hold the Key to Unlock Generative AI.