Symphonies, Squirrels, and Semi-Sentient Code

It’s 07:10. I’ve just shoehorned myself into seat 12E, prime real estate between two business types who look like their spirits were confiscated at security, and now we’re being treated to the in-flight interpretive dance otherwise known as the safety demonstration.

Today I’m off to Santander. Another trip to the most dramatic part of Spain, not just for its landscape, but for its weather system, which generally reflects the expressions of my current companions: dull and miserable.

The target of my attention today is a small cabin. One that looks like a dollhouse nestled in the trees between two mountain peaks. Think crackling fires, creaky floorboards, and maybe a friendly squirrel named Dennis. I settle back, close my eyes, and let Sibelius’s Second Symphony wash over me, painting pictures of rugged mountains, snowcapped peaks, and biting winds: the perfect soundtrack to accompany my destination.

A tap on the shoulder brings me back out of my reverie. “Sorry,” the air hostess says. “You need to remove your headphones on this flight.”

I look at her, slightly bemused.

“This is the emergency exit row. You can’t wear headphones.”

Apparently, in the event of a crash, I must be aurally alert. After all, what better way to face catastrophe than to be fully attuned to the chorus of collective panic and the piercing solo of my own screams?

I sigh, nod, and pocket the orchestra.

In place of sonic bliss, I summon wisdom. Out comes my phone, where I’ve stashed “Thoughts on AI and Software Development” by Uwe Friedrichsen, CTO of codecentric AG. Because nothing says “relaxing flight” like pondering the rise of the machines while your knees kiss your sternum.

So here I am, somewhere over Europe, dreaming of cabins, dodging music bans, and contemplating the rise of artificial minds, all while praying the emergency door doesn’t suddenly become my responsibility.

Twigs, Vines, and the Illusion of Progress

Before we dive in, a little disclosure. Just so you know where I stand before the pitchforks come out: when it comes to AI, I’m not what you’d call a wide-eyed evangelist. Some say I’m skeptical. Others say I’m deeply suspicious. Both are right on the mark. In fact, I practically exude “AI? Hmm...” energy.

About a year ago, someone either very brave or very misinformed invited me to sit on a panel about AI at the Adidas Tech Summit in Zaragoza. Picture it: a stage full of VPs, CTOs, and other acronym-rich humans, all nodding in rapt agreement about the glowing, godlike future of AI in front of a room filled with wide-eyed engineers, all dying to hear about the latest hype.

And then there was me. Nothing special, just a blunt northerner from Preston, England, who accidentally found himself a bag of smarts.

Apparently that day I missed the memo that we were all supposed to drink the digital Kool-Aid. While they waxed lyrical about innovation and disruption, I politely rained on the parade like a well-dressed thundercloud.

When it was my turn to speak, I did what any self-respecting contrarian would do.

I nicked an idea I’d heard a few minutes earlier, and now I’m going to elaborate on it in this article.

Imagine you're hiking in the mountains, because of course you are: you're bold, adventurous, and have half a dozen followers on Instagram or whatever the cool kids use today.

You come across a narrow gorge with a steady little stream slicing through it. There’s no easy way across, but you’re resourceful. You channel your inner MacGyver, pull out an axe (because naturally you brought one), chop down a few young trees, lash them together with vines, and sling the whole thing over the gorge. Miraculously, it sticks. You cross.

Congratulations. You’ve built a bridge. Kind of…

Now, this is where I liken AI to those trees and vines that audaciously pretend to be a bridge.

Today’s AI is exactly that: improvised, scrappy, surprisingly effective, and one strong winter storm away from being very, very broken.

Sure, it gets you across. But would you trust it to carry a car? A crowd? A toddler with a juice box and no fear of gravity?

So when people talk about AI like it’s the final form of human ingenuity, I can’t help but raise an eyebrow and wonder if they’ve actually looked at the thing they’re standing on or just assumed the vines will hold.

As I settle into the blog series, I find myself nodding along to Uwe’s thoughts on the topic. I’ll come back to the basis of his series a little later on, but first I’d like to drill into some of what he wrote and really try to understand how his perspective aligns with my own.

Back at the tech summit, my point wasn’t that I was against AI or its existence. I’m just not sold on this particular incarnation, this clumsy, overhyped prototype that looks impressive until you poke it with anything resembling real-world complexity. It’s like watching a toddler in a lab coat: adorable, occasionally brilliant, but mostly just flinging crayons at the wall and calling it innovation.

Sure, it can whip up a sonnet about entangled quarks in the style of Dr. Seuss, but ask it to calculate the speed of a shelled gastropod launched from a catapult, and suddenly it’s breaking the laws of physics and common sense in one breathless, confidently wrong answer.
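For the record, snail ballistics is GCSE physics, not dark magic. A back-of-the-boarding-pass sketch, assuming an idealised drag-free launch at 45 degrees covering a ten-metre range (numbers picked purely for illustration):

$$R = \frac{v^{2}\sin 2\theta}{g} \quad\Longrightarrow\quad v = \sqrt{\frac{gR}{\sin 2\theta}} = \sqrt{\frac{9.81 \times 10}{\sin 90^{\circ}}} \approx 9.9~\text{m/s}$$

One rearranged formula and a square root. Which is rather the point: when a model fumbles something this simple, the confidence of the answer is doing all the heavy lifting.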

So no, I’m not anti-AI. I’m just not particularly dazzled by this current Frankenstein’s monster of math and marketing. I care about what comes next. What happens when we stop slapping together bridges from metaphorical twigs and vines and start pouring actual concrete?

Sledgehammers and Scalpel Jobs

One of the fundamental issues I see with the way AI is being applied in industry today is this: we’re wielding it like a sledgehammer in a world that often requires scalpels.

Now, don’t get me wrong. Sledgehammers have their place. Sometimes you need to knock down a few symbolic walls, smash through decades of legacy code, or reduce an ancient Jira backlog to rubble. But you wouldn’t use one to assemble a violin, or hang a door straight. And yet, that’s precisely what we’re doing with AI when we’re asking it to finesse problems it’s barely capable of understanding.

It’s one thing to use AI as a rough guide, a co-pilot, or a conversation starter. That’s fine. Encouraged, even. To be fair, Uwe makes this point too: if used responsibly, AI could free senior engineers from repetitive tasks and offer junior developers lightweight mentorship at scale, helping them grow, learn, and avoid falling headfirst into the tarpit of Stack Overflow.

But that only works with oversight. Relying on AI to work solo, unsupervised and unchecked? That’s not just lazy, it’s reckless.

Because when you let a black box make your decisions, you’re trusting a system you don’t control, trained on data you can’t trace, hosted on infrastructure you don’t own, governed by policies you didn’t write. You can opt out of training, you can disable logging and switch off memory, but how confident are you, really, that these toggles won’t mysteriously flip back on in the next release cycle?

If you think that’s paranoia, I invite you to revisit the long and storied tradition of platform giants “accidentally” rewriting their privacy policies at 2am. At one point, Facebook practically turned it into a seasonal event: quietly changing terms of service to grant themselves sweeping access to your data, then issuing a non-apology when caught.

The real problem isn’t AI itself. It’s not evil. It’s not sentient. It’s not out to get us. It’s just a tool. A powerful one. The issue is who’s holding the handle, and what they’re incentivised to do with it.

Because right now, AI is being deployed by people with one overarching mission: maximise profit. Ethics? Transparency? Long-term impact? Those are secondary, sometimes tertiary concerns. Profit is the guiding star, and everything else is ballast. If something shiny and “intelligent” helps cut costs, increase output, or boost engagement metrics, it gets greenlit faster than you can say “data breach.”

So no, I don’t think AI is the villain. But I’m deeply suspicious of the suits currently dressing it up for the market.

Uwe makes a point of stepping back and approaching the debate with what he calls an unbiased mind. He’s not waving a pitchfork or selling a miracle. He’s pointing at a machine, calmly asking, “Have we actually tested this thing?”

And that, I think, is what makes Uwe’s view feel not just reasonable, but inevitable. The hype machine, bloated on buzzwords and venture capital, can only run so long before it runs out of road, or rails, or runway, or whatever metaphor it last borrowed from a McKinsey slide deck.

From Hype Train to Quiet Exit

Now, Uwe seems to think we’re headed for a good old-fashioned bubble pop. The kind where VC cash dries up, the tech bros pivot to cryptocurrency (again), and we’re left sweeping up the broken dreams of half-baked startups. Honestly, I see the appeal. There’s a certain poetic justice to watching a hype train derail itself at full speed.

But personally, I think it’s going to be more of a fade. A whimper, not a bang. The kind of gradual, shrugging deflation where LLMs quietly slip out of the spotlight and back into the comfortable corners of academia, right next to those dusty shelves where we keep particle physics, failed moonshots, and other things that don’t trend on TikTok.

Because here’s the thing: human attention is a resource that’s been mined into extinction. First, we swapped books for smartphones. Then TV gave way to YouTube. Then came TikTok, Reels, Shorts; digital speed dating for your brain. Long-form thinking? Please.

If it’s not moving, lip-syncing, glitchy, monetised, interactive, or being sold by someone doing an exaggerated shrug, it’s practically invisible.

So yes, we’re neck-deep in the AI hype right now. But tomorrow? That hype will dissolve. Washed away like everything else in the great tidepool of human fickleness. A new shiny thing will come along, probably wearable, definitely overpriced, and AI will slip quietly back into the realm of science and experiment without the in-your-face obnoxiousness that has built up around it.

This isn’t just an AI thing, either. We’ve seen this pattern pop up across platforms and entire industries, part trend, part generational shift, part “maybe we’ve finally started thinking things through.”

Take the alcohol industry. Not so long ago, the weekend wasn’t complete until someone lost a shoe, their dignity, and three hours to a kebab queue. But now? Gen Z is collectively side-eyeing that lifestyle and opting instead for sober sophistication. Out with the shots, in with artisanal, alcohol-free gin that tastes like regret and cucumber.

Whether it's health, culture, or just the general vibe shift, there’s a clear retreat from excess and a hunger for something real.

As Uwe points out, though, one of the reasons LLMs may fade into obscurity is that they don’t really push the envelope far enough; he highlights that pioneers like Yann LeCun are moving away from GenAI because they do not see the future of AI there.

And Steve Yegge, despite his excitement, even throws in a cost warning. At current rates, using agents could run up to $25 per day per developer. At scale, that’s not a sidekick. It’s a budget line item dressed like Iron Man.
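To put a hedged number on that: assuming roughly 230 working days a year and a purely hypothetical team of 200 developers, the back-of-the-envelope maths runs:

$$\$25 \times 230~\text{days} \times 200~\text{developers} \approx \$1.15~\text{million per year}$$

That’s not a tooling subscription; that’s a headcount conversation.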

Wrangling Geese and Other Misadventures

If you haven’t dipped into Uwe’s series yet, it reads as a sharp and measured counterpoint to Steve Yegge’s “Revenge of the Junior Developer.” I promised I would loop back, and here we are. First, a line from Steve that made me laugh and wince in equal measure:

"Be nice to your goose. Don’t overstuff it. You need to break things down and shepherd coding agents carefully. If you give one a task that’s too big, like ‘Please fix all my JIRA tickets’, it will hurl itself at the problem and get almost nowhere. They require careful supervision and thoughtful problem selection today. In short, they are ornery critters."

He is right. LLMs, for all their flair, collapse when faced with too many moving parts. Give them too much and they short-circuit into polite gibberish; give them too little and they invent context with the confidence of a pub-quiz team that never studied.

It is a balancing act: either feed them just enough to succeed, or brace for output that is simultaneously enthusiastic and catastrophically wrong, like a golden retriever trying to write Go while attempting to chase the Gopher mascot around the terminal.
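To make that shepherding concrete, here is a minimal sketch of what careful problem selection might look like in practice. Everything in it is hypothetical: run_agent stands in for whatever agent API you actually use, and the review gate is a placeholder for an actual human in the loop.

```python
# A minimal, hypothetical sketch of "shepherding" a coding agent: one bounded
# task per call, never the whole backlog, with a review gate before anything
# is accepted. `run_agent` is a stand-in, not a real library call.

from dataclasses import dataclass


@dataclass
class Ticket:
    key: str
    summary: str


def run_agent(prompt: str) -> str:
    """Hypothetical agent call; swap in your actual client here."""
    return f"[proposed patch for: {prompt[:60]}]"


def review_looks_sane(patch: str) -> bool:
    """Placeholder for a human code review; here we only gate on non-empty output."""
    return bool(patch.strip())


def shepherd(backlog: list[Ticket]) -> list[str]:
    """Feed the agent one small, well-scoped task at a time."""
    approved = []
    for ticket in backlog:
        # Narrow scope: exactly one ticket, with explicit constraints,
        # never "please fix all my JIRA tickets".
        prompt = (
            f"Fix exactly one issue: {ticket.key} - {ticket.summary}. "
            "Touch only the files involved, and explain the change."
        )
        patch = run_agent(prompt)
        # Keep the human in the loop: every proposal is reviewed before merge.
        if review_looks_sane(patch):
            approved.append(patch)
    return approved


if __name__ == "__main__":
    backlog = [
        Ticket("PROJ-101", "Null pointer in login flow"),
        Ticket("PROJ-102", "Typo in checkout error message"),
    ]
    for patch in shepherd(backlog):
        print(patch)
```

The point is the shape, not the code: one bounded task per call, and a human standing between the agent and the merge button.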

This fragility is exactly why researchers such as Yann LeCun are stepping back from the current generation. These models cannot truly reason or adapt; they are autocomplete engines dressed up for the keynote stage.

In my analogy, the next logical step after a wooden bridge is to replace the wood with stone. And here is where it gets interesting. From stone, we progress to iron, and from iron to concrete and steel. But herein lies the problem: both of those later materials carry fundamental weaknesses. Iron rusts and concrete rots.

If this generation of LLMs is likened to the wooden bridge, does that imply that we will reach the pinnacle of artificial intelligence with the next wave of AI? As we teach AI reasoning and planning, train it to interact with our material world, does its strongest and most lasting iteration become that which we would equate to a bridge of stone?

There are almost certainly those who will argue that is where we should stop (if not before), that training AI in this way is both fundamentally and morally wrong. On this point, I would be inclined, for once, to disagree. However, with future incarnations we are likely to see the introduction of design flaws that are inescapable, for reasons we will not discover for centuries to come.

But here is the thing: we’re not talking about centuries. We’re talking about tomorrow.

Vibe Coding and the Coming Generational Rift

In his article, Steve Yegge highlights that tomorrow is already here, and that agent-driven development is the future of software engineering.

With this, he paints a very pessimistic picture of the future of software engineering, wrapped in pretty paper and tied with a bow called “vibe coding.”

I’m actually of two minds about the underlying meaning of “Revenge of the Junior Developer.” On the one hand, he paints this rosy picture of junior developers taking over the world of coding through the untethered and unrestrained use of AI platforms at the expense of more senior developers, who have, as he clearly explains, all been fired. But on the other hand, the article serves as a warning: restrain AI now, or this is the future we face. The devil is in the detail here, though, as the article is very much written as a puff piece on AI agents and vibe coding.

Granted, whilst I’m not overly familiar with Yegge’s body of work, I know he’s got cred. His blogs often show up on “must-read” lists for engineers, and with good reason. He’s a master of hiding sharp technical analysis inside charming stories, like sneaking broccoli into a lasagna. You walk away feeling nourished and smarter, without needing a nap to process it all.

That said, I read the revenge post on the flight home from Santander and at first it made me angry. I was incensed by his apparent trivialisation of senior engineers, the thought of that level of skill and expertise being shunted out of the door as the junior engineers herd swarms of agents and bots that replace them, vibe coding their way into obscurity.

Where it really started to make me think, though, was this example:

“Example: a tech director at a well-known brand just told me that one of their devs sent them a PDF explaining, with color slides and charts, why they all needed to abandon AI and go back to regular coding.”

Sound familiar?

It should. Because we’ve seen this movie before. How many SysAdmins are out there who resisted the DevOps movement? How many companies still maintain their own datacentres where the admins can go watch the blinky lights? How many release engineers resist GitOps, or infrastructure engineers resist IaC?

This isn’t new. It’s the same generational tech shift, the same resistance to change, the same “I built this the hard way, and now you want to automate it?” outrage. It’s an argument as old as time. Or at the very least, as old as Bash.

And yet, those old positions still exist. Companies still hire (and rely on) sys-admins, DB admins, infrastructure and network engineers. Cloud was supposed to replace the datacentre, even to fold the datacentre itself into the cloud, and yet servers still sell at enormous rates, datacentres are still built, and the world quietly sits by as prime real estate is gobbled up by racks of humming machines and the ever-growing appetite of GPUs in steel containers.

The death of on-prem was greatly exaggerated, because despite the marketing decks, not everything can be serverless, stateless, or someone else’s problem. There’s comfort in wires you can actually touch, LEDs that blink in predictable patterns, and backups that don’t vanish with a billing dispute.

From Uwe’s writing, one thing is abundantly clear: he’s not just interpreting Steve Yegge’s post, he’s dissecting it like a lab frog under a magnifying glass. And his conclusion? Steve wholeheartedly embraces this brave new world of vibe-coding with fleets of semi-autonomous agents, and sees it as the inevitable future of software engineering.

It’s not a huge leap to make. After all, the imagery Yegge paints, of wide-eyed juniors fresh out of university, wrangling armies of chatbots like Pokémon, feels less like speculative fiction and more like Tuesday at a West Coast startup.

Need proof? Look no further than Elon Musk’s recruitment of Edward “Big Balls” Coristine into DOGE in the U.S. Because clearly, if you put enough meme power behind something, even the most unserious candidate can find themselves in charge of something that technically affects millions of people. We’re not dealing in satire anymore, we’re living it.

Now, realistically, the future will probably be a hybrid, half human, half AI, slightly buggy, and held together with duct tape and YAML. But Uwe raises a critical alarm. If companies decide to cut the cord entirely and fire their senior engineers en masse, there will be blood. Not just within the walls of tech companies, but across the wider economy. The mid-to-long term fallout of removing institutional expertise isn’t just a minor efficiency loss, it’s the kind of economic self-sabotage that quietly kills entire sectors.

It’s a grim landscape that Uwe maps out, one where skill is devalued, nuance is ignored, and the only qualification that matters is how fast you can prompt a bot into shipping broken code. And what’s particularly notable is that this is the part that’s missing from Steve’s writing.

Yegge’s post, for all its flair, leans heavily into the celebration of AI agents and the shiny future they bring. It feels like a toast to the LLM uprising, served with a side of, “Sorry seniors, don’t let the door hit you on the way out.” And yet, I don’t fully buy it.

Unless Steve’s decided to mortgage his credibility on a full send into the AI agent gold rush, I can’t help but see his piece as something more complex, a veiled call to action. One that says, in no uncertain terms: adapt or perish. Learn this new wave of tech now, or prepare for your career to be measured in months, not years.

Conclusion: Between the Hype and the Hard Place

So where does that leave us? Somewhere between awe and anxiety, I suspect. Between wild optimism and cold pragmatism. Between what AI promises and what it currently delivers.

We are not looking at a terminator scenario or utopia on demand. We are watching, in real time, the industry swing a spotlight onto a tool that is equal parts brilliant and brittle, captivating and chaotic. A tool that can scaffold code, inspire creativity, automate the dull, but also hallucinate answers, mangle logic, and confidently walk you off a cliff.

What matters now is not whether we are pro or anti AI. That binary died with Internet Explorer. What matters is whether we can cut through the noise, the hype, the evangelism, and the doomerism, and actually build something better. Something useful. Something that can be trusted.

Not because it wears a suit. Not because it demos well. But because it works, reliably, transparently, and with accountability baked into the foundation.

If we get that part right, we are not just building bridges anymore.

We are building roads, railways, and cities.
