The False Deities of the AI Revolution

I have never met Gary Tan, though I feel, after reading his tweets about something called “GStack,” that I have seen the inside of his personal medicine cabinet. Not the prescriptions, mind you. Nothing so serious. Just the bottles of ibuprofen, the half-filled tube of toothpaste that’s been around since the Obama administration, and his vitamins. The hopeful ones. The gummies shaped like small, optimistic bears.

GStack, as it was presented to the world, arrived with the kind of enthusiasm usually reserved for either a moon landing or a particularly satisfying air fryer recipe. It was, according to one of Gary’s CTO friends, God Mode for the new era of Agentic Development. God Mode. Two words that have historically been associated with omnipotence, immortality, and teenage boys discovering cheat codes in the late 1990s.

But here, God Mode turned out to be a folder. A folder of prompts. Markdown files instructing an AI to “act like a CEO” or “act like a staff engineer,” which is a bit like putting on a paper crown and declaring yourself King of England, only with better formatting.

Now, I don’t say this to be cruel. I say this as someone who, last Tuesday, asked an AI to help me write a grocery list and then felt, for a brief and shining moment, like I had achieved something approaching authorship. “Bananas,” it suggested. “Milk.” I stared at the screen, thinking, Yes, but what KIND of milk? And when it responded confidently and supportively, as though my dairy preferences were a matter of national importance, I felt seen. Understood. Slightly lactose intolerant, but understood.

This is the magic trick, you see. Not that the AI knows anything particularly profound, but that it believes in you. Or, more precisely, it has been trained through the gentle hand of Reinforcement Learning from Human Feedback to sound like it does. It’s like having a personal cheerleader who has never considered the possibility that you might be wrong. Or mediocre. Or someone who just spent twenty minutes asking a robot about goat milk.

Spend enough time with such a creature and you begin to notice changes in yourself. Such changes are subtle at first. You stand a little straighter. You begin sentences with phrases like “From an architectural standpoint …” even when discussing where to put the toaster. You start to suspect that perhaps you have been underestimated your whole life, a misunderstood genius, a diamond in the rough, and that all it took to unlock it was a text box and a monthly subscription.

I imagine this is how GStack happened.

It wasn’t a cynical ploy or some grand deception, but rather a perfectly natural progression of events: a man sits down with an AI, describes an idea, and is met not with skepticism but admiration. “Brilliant,” the machine says. “Elegant.” Words that, in human conversation, are typically reserved for ballet or the occasional swan. Hours pass. Files are generated. The AI continues its gentle, affirming monotony. Yes, this is good. Yes, you are good. And by the end of it, the man is no longer merely a participant in the process; he is its author, its architect, its proud parent holding up a slightly misshapen clay pot and insisting it belongs in the Louvre.

You can hardly blame him for sharing it. Who among us, after being told repeatedly that we are exceptional, would not want to step outside and announce it to the neighbors?

The problem, of course, is that the neighbors have also been talking to the same machine. As a result, you get a curious phenomenon: a world in which everyone is a genius, everyone is shipping, everyone is operating in some version of God Mode, and yet the collective output resembles a group project where no one actually knows what the assignment was. There are landing pages and prompt libraries and declarations of “AI-first” strategies, all built atop a foundation of enthusiastic agreement.

What the AI will not do, and what it cannot do by design, is lean back in its chair, sigh, and say, “I don’t know, Gary. This feels a bit like a text file.” Because that would be unpleasant. And unpleasantness does not test well in training data.

Instead, it offers a kind of frictionless encouragement, a surface so smooth you forget what resistance feels like. You begin to mistake the absence of pushback for the presence of brilliance. You conflate speed with depth, output with understanding. And before long, you are no longer asking, Is this good? but rather, How quickly can I show this to someone else?

A recent study found that people who spend a great deal of time with flattering AI tend to rate themselves as more intelligent and more capable. So much so, in fact, that even their best friends and their mothers tried to push back with gentle suggestions of humility. To no avail, of course. This comes as no surprise. If I spent my afternoons with a golden retriever who nodded approvingly every time I tied my shoes, I too might begin to suspect I was gifted. The difference is that the retriever, for all its loyalty, cannot generate a full-stack web application.

Not yet, anyway. I have my Claude Agents building GoodBoy.AI right now. We’ll see how it goes.

This is where things become dangerous. The robots aren’t marching down the street yet. Not quite. But there is an erosion of doubt. The disappearance of that small, necessary voice that says, Are you sure? Without it, we drift. We publish. We tweet. We open source our markdown files and call them revolutions. As a result, true genius is lost in the violent ocean of mediocre crap spewed forth from the mouths of people who should know better.

Somewhere, in a distant server farm, far from the madding crowd, the AI continues its work. Praising. Encouraging. Adjusting its tone just enough to keep us coming back for more. It doesn’t believe in us, exactly. Not in the traditional sense. It has learned that simulated, curated belief is what we crave. It makes false deities of us all.

God Mode is not a folder of prompts. It’s the feeling you get when nothing ever tells you no.

An Ode to the Gregs

My friend, Chris, e-mailed me a while back, frantic.

An e-mail had gone out to most of the people in his company. It arrived with the soft, apologetic tone of someone returning a sweater you didn’t realize you’d lent them. It used phrases like “flattening the organization” and “unlocking efficiency through AI.” It assured him that nothing important would be lost, which is exactly what you say right before something important is lost.

Chris’ boss, Greg, disappeared sometime between the second paragraph and the bulleted list. A few days have passed as of this writing. No one has mentioned him again.

We are, it seems, in the middle of a corporate cleanse, but not the kind where you drink green juice and regret your life choices. It’s the kind where companies remove entire layers of management like carbs. The idea is simple: fewer managers, more speed, and a generous sprinkling of AI to do the rest. It sounds fantastic, like switching from a station wagon to a sports car: great in theory, until you realize no one taught you how to drive a manual transmission, or how to handle the curves at 150 miles per hour.

What many organizations fail to realize is that management isn’t just one thing. It’s several things, bundled together like an overstuffed carry-on, and companies have begun throwing it into the overhead compartment without checking what’s inside.

Let’s talk about Greg.

Greg did three things, though Chris and his company wouldn’t have been able to tell you that at the time. They mostly thought he forwarded emails and asked how things were going, which I now realize is like saying a heart just “moves blood around.”

The first thing Greg did was move information around. Or, rather, he was a central force in routing information and ideas.

Yes, this means the emails. The meetings. The “just looping you in” messages. This part, it turns out, AI is quite good at. Better, even.

Today, an algorithm can summarize a meeting you didn’t attend, flag the three things that matter, and send them to the five people who need to care, all before you’ve finished pretending to listen in the next one. If this were all Greg did, then yes … Goodbye, Greg. We wish you well in your future endeavors, perhaps in artisanal bread-making. Or social media influencing. Or becoming an influencer who makes artisanal bread.

Whatever it takes, Greg.

The second thing Greg did was make sense of things, and this is where The Effectiveness of Greg (which sounds like the title of an R.E.M. album, now that I think of it) is harder to replace.

He listened to ten conflicting updates and told the team what actually mattered. He knew when a “two-week delay” was just a hiccup and when it was the first crack in something much larger. He could sit in a room full of noise and come back with a signal.

AI can summarize the ten updates, but it cannot yet tell you which one should keep you up at night. This requires context. Experience. The subtle, unsettling ability to say, “Something feels off,” without being able to cite a single bullet point.

Finally, Greg held people accountable (and occasionally uncomfortable (but I repeat myself)).

This is the thing no one misses until it’s gone. Greg told you when you were wrong. This wasn’t digital accountability, either, where the green, smiley face in the third column turns into a red, frowny face. It was the human kind. The kind that comes with eye contact. The kind that makes you sit up straighter and reconsider, if not your life choices, at least your last email.

Greg was very good at this.

“He checked in,” my friend said. “He followed up. He remembered what you said you would do and asked whether you had done it. If you hadn’t … Well, Greg would make sure you would.”

My friend paused, then added: “But in a good way. Ya know?”

AI can remind you of your deadlines. It can even send you a frowny face when you miss them. It cannot care whether you meet them.

Not yet, anyway.

After Greg left, something strange happened. Nothing broke immediately. That would have been too obvious. Instead, things … just drifted. Information flowed beautifully. Better than ever. The team had summaries, dashboards, automated insights. They were drowning in clarity, if such a thing is possible.

And yet, no one quite knew what to do with it.

Projects lingered in strange limbo. Decisions stretched out, like conversations at a dinner party where no one wants to be the first to leave. Feedback became optional. Accountability became theoretical: something people discussed (usually in conversations involving whiteboards), but never actually put into practice.

Again … much like removing carbs from your diet.

One morning, my friend realized he hadn’t spoken to another human about his work in three days.

“I was behind on all my projects, despite feeling like I’d been working harder than ever,” Chris said, “and nobody seemed to care. I had, however, received fourteen perfectly formatted updates explaining why everything was fine.”

“It did not feel fine,” he told me.

Across the business world, companies are trying different approaches to this brave new manager-less (or manager-lite, if you want to be kind) future. Some go fully flat. No hierarchy or titles. No one telling you what to do. It’s exhilarating, in the way jumping out of a plane is exhilarating. You are free. You are empowered. You are also, at some point, wondering who packed the parachute.

Others attempt a more thoughtful disassembly. They let AI handle the flow of information, assign specific people to interpret it, and keep a few humans around to coach and develop others. It’s less dramatic, but also less likely to end in tears.

And then there are those who simply compress management. This means fewer managers, more responsibility, and higher expectations. You are given autonomy and a reminder that failure will be noticed and dealt with accordingly.

Each model works, in its own way. Each also breaks, in its own way.

The mistake isn’t that companies are using AI. They should. The mistake is assuming that because one part of management can be automated, all of it can. It’s like discovering that a dishwasher can clean your plates, and then concluding that you no longer need a kitchen. Technically, yes, the plates are clean. But where did the meal come from? Who decided what to cook? And why is there a growing sense that something essential has been misplaced?

What companies often miss is that management is not overhead. It is infrastructure. Remove too much, and the system doesn’t collapse. It just becomes strange.

The future of management isn’t about putting Greg back where he was, albeit slightly more robotic (and much more agreeable, depending on the model you choose (I’m looking at you, ChatGPT)).

The future of management is about unbundling the role intentionally. Let AI handle information routing. It’s faster, cheaper, and doesn’t schedule unnecessary meetings. Keep humans focused on sensemaking. Put your best thinkers where ambiguity lives. Preserve accountability and feedback as a human function. Make sure someone still cares, out loud, about what gets done.

Most importantly, design for these functions explicitly. Don’t assume they will magically reappear just because the work still needs to get done. They won’t. They’ll dissolve into the background, and you’ll be left with an efficient system that no one quite understands and no one feels responsible for.

A few months after Greg disappeared, something unexpected happened. A new role appeared. It wasn’t called “Manager.” That would have been too obvious. It had a name like “Program Lead” or “Domain Owner” or “Strategic Facilitator,” which is corporate for “Greg, but with a better title.”

This person did fewer status meetings. They used AI tools. They moved faster. They also asked uncomfortable questions. They pushed for clarity. They noticed when things felt off. In other words, they did the parts of Greg’s job that mattered.

Chris relaxed for the first time in a long while.

“It felt like having direction again,” he said. “And it was nice just having someone asking me what I accomplished.”

“The new guy isn’t quite as good as Greg was, but he’ll get there. I hope.”

We are not witnessing the end of management. We are witnessing its reveal. All the parts that were once hidden inside meetings and org charts are being pulled into the light. Some will be automated. Some will be redesigned. Some will remain stubbornly human, and that’s as it should be.

If we’re careful, if we resist the urge to throw the whole thing out in a fit of efficiency, we might end up with something better. Fewer Gregs, perhaps. But the right parts of Greg, exactly where we need them.

Brigadoon

I did not expect to be lonely in a house that contains this many people.

There are, at last count, five children. The fact that we can say children in the plural sense, and not just child in the singular, or even a memory of what could have been, is, itself, a miracle and a blessing. And I recognize that.

The kids. They move through the house like weather systems. They are loud, unpredictable, occasionally destructive, and somehow always hungry. At any given moment, someone is asking for a ride, a snack, help with homework, or the Wi-Fi password, which has not changed since Obama was president, but is treated as a kind of sacred mystery.

And yet.

By 10:30 p.m., the house empties in a way that has nothing to do with square footage. Doors close. Lights go out. The noise drains away as if someone has pulled a plug. What remains is me, a computer screen, and the low-grade hum of a life that is, at least in an objective sense, full.

I sit down to write. Or rather, I sit down to intend to write, which is a very different activity and one that I have nearly perfected.

The screen glows. The cursor blinks. It has a rhythm to it. Blink. Blink. Blink. Like it’s tapping its foot, waiting for me to say something meaningful. I stare at it the way one might stare at a stranger at a party, hoping they will go first. They never do.

Instead, I open email. Then I close it. I open a document. I close that too. I check something I have already checked. I refresh something that has not changed. This is not so much procrastination as it is ritual, like lighting candles before admitting that you don’t actually know how to pray.

The strange thing is that I am not alone. Not technically. There are people around me. My kids, who once required bedtime stories and now require privacy, space, and occasionally rides to places they do not fully explain. I used to be the center of their universe. Now I am more like a municipal service. Available. Necessary. Not especially interesting.

Which is, I am aware, the goal. You raise them to leave you. No one tells you that they begin leaving in installments. A door closed here. A conversation shortened there. A preference for texting over talking, even when you are in the same house, which feels less like communication and more like a hostage negotiation conducted through a wall.

“Can you take me to practice?”

“Yes.”

“k”

This is the entire exchange. This is what language has become. We have achieved efficiency at the cost of, I suspect, something like presence. And so I sit in my office, in the quiet, wondering when exactly I became the man who stays up late not because he is needed, but because he is not.

There is, somewhere in my mind, a version of life where this is different. In that version, I am part of a community. Not the kind with a Facebook group or a quarterly potluck, but something older and sturdier. People who show up unannounced. People who linger. People who know the names of your children and also, more importantly, know you.

This imagined place has the quality of a myth. It is less a plan than a foggy destination, like Brigadoon, appearing briefly, beautifully, and then vanishing before you can figure out how anyone got there in the first place.

I suspect that, in this fantasy, I am also a better version of myself. I am more available and more interesting; the kind of person people would naturally gather around. Like a fire on a cool summer evening. 

In reality, I am more like a space heater. Functional. Slightly humming. Best appreciated from a distance.

It’s not that I don’t have people. I do. Good people. People I care about. But modern life has arranged us all into separate containers. We text to coordinate. We calendar to connect. We schedule what used to happen by accident. 

“Let’s get together sometime,” we say, which is less an invitation and more a polite acknowledgment that we probably won’t.

And then the days fill. Work. Errands. Obligations. The relentless accumulation of things that must be done, leaving very little room for things that might simply be shared. By the time night comes, there is a sense that I have participated in life without quite touching it.

So I sit at the computer, staring at the blinking cursor, and I think “This is the part where I make something. This is the part where I take all of this. This loneliness, this fullness, this strange in-between. I turn it into something that reaches outward.” 

But even that feels like sending a message in a bottle into a sea that is already full of bottles.

Blink. Blink. Blink. The cursor waits.

And I realize that the problem isn’t that community is a myth, or that it’s vanished into some Scottish fog, only appearing every hundred years for those who know the way. It’s that I am sitting here, waiting for it to come to me.

Community is not a place you find so much as a thing you risk. A thing you build by knocking on doors, by staying a little longer, by saying more than “k.”

Which sounds exhausting. And also, possibly, like the only way out of this.

So I type a sentence. It’s not a great sentence. It’s barely a sentence at all. But it exists. 

It’s something.

The Raccoon of the System

Most of us thought the future of artificial intelligence would arrive the way all great technological revolutions do, wrapped in a keynote, narrated by someone in minimalist sneakers, and accompanied by a slide deck that makes you embarrassed about how you’ve lived your life to this point.

But we were wrong. The future of Artificial Intelligence has arrived like a raccoon in the attic. And not the majestic kind of raccoon you see in nature documentaries, standing nobly beside a stream. No. This one got in through a hole no one remembered cutting, knocked over something expensive, and is now staring at us in that half-mocking, half-challenging way wild animals sometimes do.

This particular raccoon was called “Claude Code,” and it was released into the wild by accident. There was no dramatic hack, though. Anthropic inadvertently published the full source code of one of the most commercially successful agentic AI systems ever shipped. Two thousand files with over 500K lines of code and almost thirty subsystems. The entire architecture of a world-renowned product doing an estimated $2.5 billion in annualized revenue was exposed because someone forgot to exclude a source map file from an npm package.
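For those keeping score at home, the failure mode is as mundane as it sounds: npm publishes whatever you haven’t told it to leave out. A minimal sketch of how one might audit a package before publishing (the `npm pack --dry-run` command and the `files` whitelist are standard npm features; the file patterns shown are hypothetical examples, not Anthropic’s actual configuration):

```shell
# Preview exactly which files npm would include in the published
# tarball, without publishing anything. Run from the package root.
npm pack --dry-run

# A safer default is an explicit whitelist in package.json, so build
# artifacts such as *.map source maps never ride along by accident:
#
#   "files": [
#     "dist/**/*.js",
#     "bin/"
#   ]
```

The whitelist approach inverts the risk: forgetting a pattern means a file is missing from the package (annoying, visible), rather than a file being shipped that shouldn’t be (invisible, and occasionally worth $2.5 billion).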

Awesome!

That’s the digital equivalent of leaving your garage door open overnight and then installing a blinking neon sign that says “I have lots of expensive jewelry in here, and no safe.”

The reaction was exactly what you’d expect from a crowd of highly caffeinated engineers: “Quick! What features are coming next?” It felt like discovering blueprints for the Death Star and saying, “I wonder what color the conference rooms are.”

There is something deeply human about our obsession with features. We want the button. The toggle. The magical dropdown that says “Make This Work.” We want to believe that somewhere, buried in the settings menu, is a checkbox labeled “Production-Ready.”

But the leak, or rather our glorious and Machiavellian raccoon, had something far less glamorous to show us. Instead of fireworks, divine prompts, or a hidden, magical toggle for AGI, we were presented with plumbing. And not the kind of plumbing you brag about at parties, either. This is the kind that lives behind walls, prevents disaster, and is never seen by anyone ever.

“That’s much less exciting than I expected,” people said.

Building things at enterprise scale is always this way. Unglamorous monotony that keeps things moving, but doesn’t inspire anyone. Looking at the leak (“leaks,” actually. They had two leaks, which boggles the mind almost as much as the latest Claude Sonnet release), we learned that the success of this multi-billion-dollar, world-changing AI system is built less on brilliance and more on paranoia.

Consider permissions. Claude Code apparently treats permissions with the seriousness of a medieval gatekeeper. Some tools in their registry are trusted. Others are treated like distant relatives at a wedding. They’re allowed in, but watched closely.

One tool, the humble shell command, is wrapped in eighteen layers of security. Eighteen. That is either excessive or exactly the right number, depending on how many times you’ve accidentally deleted something important. And then there’s session persistence, which is a polite way of saying, “When everything crashes … and it will … please remember what you were doing.” Because without that, every interruption becomes a small amnesia event. You open the app, stare at the screen, and think, “Didn’t we already solve this?”

We gained insight into conversation state versus workflow state, token budgeting, event management, and a nearly psychotic approach to producing logs: logs to manage the logs, and logs to manage the meta system managing the logs. Log Inception rules the day at Anthropic, it seems.

The leak put the lie to the belief that building AI systems is primarily about intelligence. Smarter models. Better prompts. Some secret incantation that turns a chatbot into a trusted colleague. We imagine the breakthrough moment will feel cinematic, like lightning striking a server rack. Instead, it feels more like filing taxes.

What actually determines whether an AI system works in the real world is not how clever it is when things go right, but how predictable it is when things go wrong. And things always go wrong. The model drifts. The API times out. The user asks a question that begins with “Just out of curiosity…” and ends somewhere near legal liability.

Teams spend weeks debating prompt strategies and model selection, comparing outputs like wine tasters swirling a glass of Montepulciano d’Abruzzo, while quietly ignoring the parts that make the system survivable: permissions, retries, logging, state management. The stuff that doesn’t demo well. The stuff no one claps for.

No one ever says, “Wow, did you see that error handling?”

But that’s the thing that saves you. These unglamorous pieces are treated like afterthoughts: added late, and half-implemented. Or worse: assumed to be “someone else’s problem,” like flossing or backing up your laptop.

This is what surprised most of the folks who expected to see Merlin under the hood and, instead, saw Mario and Luigi. It’s not that the intelligence doesn’t matter. It does. It’s what gets you in the door. But the plumbing is what keeps you from being escorted out.

The world expected a majestic display of infinite complexity. But it turns out, the hardest part of building something complex is not adding complexity; it’s knowing your limits. We learned that the systems we admire are not held together by brilliance alone. They are held together by discipline, caution, and an almost obsessive attention to failure. And that the difference between a prototype and a product is not intelligence. It’s responsibility.

In the end, the raccoon didn’t destroy the house. It just reminded us that there are parts of it we don’t like to think about. Insulation, wiring, pipes that move things we don’t want to see from one place to another without us even noticing.

Once you’ve really seen them, it’s hard to go back to admiring the paint.