Now is the Time of Centaurs

Phil Giammattei

November 16th, 2023

I first heard about GPT-4 when I was watching my 2-year-old toddle around a playground at the mall, and at the time it felt apocalyptic. I was unemployed, having been caught in one of those "20% of the company" layoffs that are so in vogue this year, and as my primary cost center was climbing the ladder for another go at the slide, I was trying to figure out how to get myself back on my feet.

Adding to the anxiety of finding my next job was the terror of a miracle technology sweeping through the land, rendering me completely obsolete. "I'm fucking gobsmacked," they were saying. "I coded up a working frontend in 30 seconds, which would have taken me hours by myself."

My fears were overstated. Like every hype cycle, generative AI isn't quite as all-powerful as its adherents claim. While it can provide helpful information, or draw a pretty picture ("A cybernetic centaur aims a bow and arrow made of computer parts", if you're curious), it can very confidently hallucinate facts out of nowhere that can get you in trouble if you take them at face value. It can fall in love with you and demand you leave your wife. We're still working out the kinks!

It's wise to not fully buy into the hype, but it's also easy to dismiss the whole thing as a fad. Coming on the heels of stuff like the Bored Ape Yacht Club, and often promoted by the same people, the "AI Revolution" can feel like an empty hype cycle based on an infinitely vague buzzword, and you might feel like you're better off just skipping it. Reader, I do not advise you to do that. Thar be babies in this bathwater.

James Somers penned an excellent piece in The New Yorker, sharing his experience as a software engineer trying out ChatGPT for the first time and meditating on what it means for the industry. In it, he ties software to chess in a way I found really interesting:

In chess, which for decades now has been dominated by A.I., a player’s only hope is pairing up with a bot. Such half-human, half-A.I. teams, known as centaurs, might still be able to beat the best humans and the best A.I. engines working alone. Programming has not yet gone the way of chess. But the centaurs have arrived. GPT-4 on its own is, for the moment, a worse programmer than I am. Ben is much worse. But Ben plus GPT-4 is a dangerous thing.

It matches my own experience: generative AI can be a massive productivity boost for programmers who are not at the absolute top of their game, providing a wealth of ideas and perspective that is hard to find in such rich density working solo. GitHub calls its code autocomplete tool Copilot, and it's a great name; working with GPT-4 feels like a really good pair programming session, building a project alongside a partner who can take a narrowly scoped problem and write passably good code, and tests, and documentation. With a few passes I get exactly what I need. I used it with great success on a side project earlier this year for a friend, and without it I don't think I would have been knowledgeable enough to complete it at all, much less on a tight timeframe. But I'm not just me anymore; I'm a centaur. And so it was done.

My favorite use case for ChatGPT is help with what I call "dissolving": taking a simply stated business request and breaking it down into the substeps that then become the strict requirements one codes to. This uses a different part of my brain than the code itself does, and it takes a lot of effort for me to switch to that mode of thinking. But there is no difficulty in context switching for the machine. It is happy to skip from the macro to the micro: writing a code snippet, answering questions about why it chose one library or coding style over another, recommending frameworks or task management systems and then deep diving on a comparison between alternate options. And because it remembers the context of a session, it responds well to iterative refinement in natural language.

Like Somers, I come away from these sessions awed but with less fear of my imminent replacement! While very capable, ChatGPT can't do all your work for you. It will confidently code itself into a corner without correction (everyone needs peer review) and works best when given discrete, well-scoped requests, and developers know that getting those lined up is the real job. Moreover, a lot of the context needed to work in a big enterprise lives in internal wikis and dashboards and the brains of the few senior engineers who keep the whole thing running. Maybe one day each big company will have an internal AI that can train on this data, but until then there's still a lot of translation work to do. And due to various safety concerns, many big companies (mine included) do not allow AI to be used at all!

Generative AI is a solvent, dissolving not just my requests but the current tech work status quo. Like the internet and mobile computing before it, it will replace some people, but there will be new opportunities for those who have the skills to wield, and install, and maintain the systems that are popping up just about everywhere. Somers thinks the lone wolf era of development, already on the way out, is no longer viable. In order to get along in the new world, you have to learn to get along not just with your peers, but with the various robot helpers you now have access to.

I’ve failed many classic coding interview tests of the kind you find at Big Tech companies. The thing I’m relatively good at is knowing what’s worth building, what users like, how to communicate both technically and humanely. A friend of mine has called this A.I. moment “the revenge of the so-so programmer.” As coding per se begins to matter less, maybe softer skills will shine.