
This essay was written by Yousr at Native.

The Cool Era

The first decade or so of the 21st century was an era of genuinely cool technology: laptops, social media, video games, music and video streaming, communities and forums, and more.

“Cool,” of course, is doing a lot of heavy lifting here. Things that started as nerdy and niche crossed over into the mainstream and became cultural in the fullest sense. These tools and gadgets became spaces where identity was formed and community was built. Technology was an enabler of cultural output. It created new possibilities for entertainment, expression, and discussion. It gave rise to forums and fandoms and entire subcultures. All of these platforms made human activity more possible and more relevant without ever feeling alien.

There was a balance: futuristic enough to feel exciting, yet grounded enough to feel like an extension of ordinary life rather than a replacement for it. This ratio of cultural enablement to disruption is exactly what allowed for coolness.

The Verdict

Today, we’re facing a new wave of technology—AI—that doesn’t seem to be striking the balance. It has enormous potential as a disruptor, perhaps more than any technology before it, but the outrage surrounding it is unprecedented. Even young people (especially young people?) don’t think AI is cool. They might use it to get tasks done, but very few are meaningfully engaging with it as a hobby.

Coolness, in this sense, is not about novelty or capability, but about a technology’s potential for becoming a site of culture. A highly disruptive technology, à la ChatGPT, only becomes cool if using it generates something of cultural value: references, entertainment, conversation, community. So far, AI generates none of these outside of niche tech circles: it does not create spaces for people to gather or spark discussions around itself. If anything, it suppresses the ones that already exist, creating an internet full of slop and fakes.

This was not obvious when ChatGPT was released in December 2022. At the time, I was convinced it would be the coolest technology of our generation, the kind of thing that should spark the same cultural explosion as the early internet. Three years later, the verdict is in:

AI is actively despised.

The sentiment is not disappointment in the raw capabilities of ChatGPT; it is disgust. On social media, AI is seen as sloppy, almost unholy, unglamorous, a tool that sucks the joy and soul out of everything it touches. Someone called it “Temu for thinking,” as if to say: AI is cheap, AI is the shortcut that marks you as someone who couldn’t be bothered to do the thing properly. Using ChatGPT signals a lack of agency and responsibility. AI-generating an image instead of commissioning an artist is, at best, tolerable if you’re a small creator with no budget, but a potential scandal if you’re a brand or a major influencer with resources and a reputation to protect.

The Adoption Gap

You can use AI in private, perhaps, but you don’t admit to it in public, and you certainly don’t present its output as your own without expecting some measure of contempt. There’s now a bot on Twitter you can summon under any post to check whether it’s “slop.” You really do not want to be the author of a post that gets flagged. It’s embarrassing, rightfully so. And yet the adoption is staggering. Over half of new articles published online are now AI-generated; one recent analysis found that 74% of newly created web pages contain AI content. 78% of companies use AI in at least one business function. The gap between public disgust and private adoption is vast: people use AI constantly, they just don’t want to be seen doing it.

I am no exception. I use LLMs more than most, and yet I still cringe at the phrase “AI art” and disconnect the moment I catch a whiff of ChatGPT in someone’s writing. X recently launched a $1 million prize for the best article on the platform—I’ve been skimming through submissions, and so many lose me the moment a sentence has that ChatGPT smell. The points could be extraordinary, but detecting the slop makes them repulsive. This aversion is not something I reasoned myself into, which means I haven’t been able to reason myself out of it, and I suspect this applies doubly to most AI detractors.

Cultural Dilution

I suspect the disgust comes back to the framework I introduced earlier: AI’s enablement is strictly non-cultural. It is so total, so frictionless, that it replaces the human presence that made online spaces worth inhabiting in the first place. This is why AI-hate is predominantly an online phenomenon—it threatens the social internet itself. Not the environment, not the economy, not the balance of power, though those concerns exist, but the slow degradation of the spaces where online people actually live: the Reddit threads, the tutorial rabbit holes, the YouTube comment sections that form the texture of the internet. What discussion can ChatGPT spark? What forum does it create? None. By flooding social media with posts and replies that involve little to no human in their production, LLMs are the anti-cultural-enablement invention par excellence: a cultural dilution where the signal of genuine human expression drowns in an ocean of generated slop.

Once this aversion becomes widespread enough, it naturally finds political shape. You see it in the language people reach for: AI is “forced down our throats,” AI serves “Billionaire Bill” who wants to “up his stock prices,” AI is designed “to consolidate power, wealth, and control.” Hating Sam Altman is one of Silicon Valley’s few cultural exports. The framing is one of coercion, of something done to people rather than for them, and it is felt most acutely in the very places where culture used to happen: comment sections you can no longer trust, images and videos you must scrutinize before believing. No one asked for this. No one wants to second-guess every interaction to confirm a human is on the other end. AI doesn’t just fail to enable culture, it degrades the act of cultural exchange itself. Hence the feeling of violation, of coercion, of being stripped of what made the social internet worth inhabiting.

What People Want

I am not a Luddite, and I suspect most of my AI-hating peers aren’t either. People don’t want less technology; they simply want cool technology. The visceral reactions you see online—the cringing, the mockery, the “I want to scream, I want to cry, I want to puke”—are not economic anxieties dressed up as culture war; they are the sound of people who loved the internet watching it fill with noise they never asked for.

What people want from technology is what they always wanted: something worth gathering around, something worth remembering, something they are proud to use… something cool.

AI, so far, isn’t it.