The recent developments in Artificial Intelligence have been nothing less than jaw-dropping. Large language models (LLMs) like OpenAI’s ChatGPT can handle complex questions and prompts and produce remarkably human-like responses. But beyond merely conversing, they can generate entirely new works, such as press releases, poems, songs, and even code.
We are currently in the middle of an explosion of new applications of this technology, with new features and startups launching every day. And while I can get swept away by a good demo as much as the next person, I wonder how society will come to view AI-generated content once it is everywhere. The analogy I keep coming back to in my mind: the Segway.

“As big a deal as the PC.” - Steve Jobs, commenting on the Segway in 2001
To this day I’ve still never actually ridden a Segway (although I briefly tried my wife’s new Segway Drift skates), but since the day that “Ginger” was unveiled to the world I’ve wanted to ride around on one of those two-wheeled marvels. The self-balancing technology was incredibly impressive in 2001, and it spawned an entire industry of hoverboards and electric unicycles. The Segway will never live down its over-hyped and under-delivered promise of revolutionizing human transportation, but I don’t think hype alone is what led me to think back to this product. It’s that the failure was driven in part by the opinions and norms that developed around it in society. Let’s look at two social factors that hurt the adoption of the Segway and their parallels to AI.

Airports, malls, and other delightfully flat and open indoor environments ban the use of hoverboards. While I imagine that is partly due to the history of their large batteries catching fire, I think another factor is that they make life more annoying for the rest of us walkers. They take up more space, travel at a different speed, and are more prone to accidents. Similarly, some schools and universities have already imposed bans on ChatGPT and AI-generated content. Their concerns so far revolve around the potential for cheating, but as generative AI becomes more ubiquitous, we may find reasons to ban it in other contexts as well.

Vanderbilt University’s Peabody School came under fire for using ChatGPT to write a consolatory email about a mass shooting at another university, with students calling out the generic sentiments and factual inaccuracies. These models are excellent at creating content that looks great at first glance but can be vapid or plain wrong upon closer examination. In the 2021 research paper On the Dangers of Stochastic Parrots, AI researchers describe a language model as a “system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning: a stochastic parrot.” Because these models are trained only on raw language, and not the ideas behind the words, their output tends to be very readable but not very novel. There’s also a growing awareness that these programs are prone to hallucinations, whereby they confidently deliver statements that have no basis in reality. If this wave of AI becomes known for producing empty filler text and made-up facts, I suspect it too could be banned from more and more spaces.
Another force in the death of the Segway was that it was seen as uncool. While it eventually became associated with helmet-wearing tourists and mall cops, even in the early days it wasn’t able to cross over from a curiosity to a “must have” item.
Fads are incredibly hard to predict, and it’s too early to tell whether LLMs will suffer a similar fate, but it already feels like using AI to write for you is something you want to hide, not show off. Using spell check and auto-correct is clearly fine, but would you tell your partner that an AI wrote the message inside their Valentine’s Day card?

A common thread between the Segway and generative AI is that the result doesn’t appear to be “earned.” You don’t need to sweat or struggle; simply lean in a direction and the machine will do the rest. That said, unlike riding around in the park, writing is something that is done behind closed doors. With a sufficiently good prompt, and maybe some light editing, no one has to know that you used an AI to generate the cover letter or PRD. Perhaps “coolness” won’t be as strong a force when the act itself is invisible to others.

It’s far too early to say with confidence how society will come to view this technology, and perhaps tools like ChatGPT will one day become as ubiquitous and unremarkable as spell check and auto-complete. For the time being, though, I am more excited about using these models to edit my words rather than author completely new ones.