The Dead Internet? Separating Myth from Measurable
CST2 article 3 - Myth Versus Measurement: Is the Internet Really “Dead”?
For years, whispers of a "dead Internet" have percolated through forums and social media. According to the Dead Internet Theory, much of what we encounter online - especially since the late 2010s - is supposedly generated by bots or AI, not people. Of course, the Internet isn't literally dead; billions of humans still browse, post, and chat every day. But beneath the conspiracy vibes lies a kernel of insight: meaningful human presence online may be dwindling relative to the surge of automated content. In other words, the concern is that our digital public square is increasingly populated by bots, spam, and algorithmically generated "slop" rather than genuine human voices. How much truth is there to this claim, and how could we measure it? This third essay in the series 'The Slop Economy' explores that question by sifting myth from measurable reality.
TL;DR? NotebookLM Podcast version available here.
Defining the Core Concern
At its core, the “dead Internet” idea claims that authentic human engagement is being crowded out by artificial activity. Picture logging onto a forum or comment thread: you assume you’re hearing from real people, but what if many posts were bot-written or copy-pasted from elsewhere? The spirit of the Internet - real individuals sharing thoughts - would be fading. Crucially, this is a testable hypothesis. We can look for signs of declining human signal amidst the noise and even devise metrics to quantify it.
How might we measure “human presence” online? Researchers and platform analysts have begun to tackle this with innovative metrics. A few examples include:
· Human-Post Ratio: the proportion of content actually produced by real users versus content coming from bots, spam farms, or AI. A shrinking human-post ratio over time would signal that non-human agents are doing more of the “talking.”
· Botcluster Entropy: a technical way to capture how diverse or uniform online activity is. Low botcluster entropy means many accounts are behaving near-identically - posting the same links or phrases en masse - a red flag for coordinated bot networks. High entropy, by contrast, reflects the varied, idiosyncratic activity we'd expect from real people.
· Re-share Originality Index: a gauge of how much content is original vs. recycled. If most social media posts are re-shares of the same few AI-generated memes or newsbites, the originality index falls. In a vibrant human web, we’d expect a wide variety of novel posts and perspectives.
These metrics (and others like them) form a "metric pack" for studying how human presence is being crowded out. They give us concrete levers: we can track, for instance, the fraction of Twitter accounts that look human, or the percentage of blog articles that show signs of AI generation. By defining the problem in measurable terms, we move from vague unease ("the Internet feels off lately") to tangible data.
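To make the metric pack concrete, here's a minimal sketch of how the three metrics above might be computed over a batch of posts. The record fields (author_is_human, cluster, text) and the toy data are illustrative assumptions, not any real platform's schema:

```python
import math
from collections import Counter

# Toy post records; the field names are illustrative, not a real schema.
posts = [
    {"author_is_human": True,  "cluster": "organic",  "text": "Loved this hike!"},
    {"author_is_human": False, "cluster": "botnet-7", "text": "Buy $COIN now!!!"},
    {"author_is_human": False, "cluster": "botnet-7", "text": "Buy $COIN now!!!"},
    {"author_is_human": True,  "cluster": "organic",  "text": "Recipe went wrong - help?"},
    {"author_is_human": False, "cluster": "botnet-7", "text": "Buy $COIN now!!!"},
]

def human_post_ratio(posts):
    """Share of posts produced by human accounts."""
    return sum(p["author_is_human"] for p in posts) / len(posts)

def botcluster_entropy(posts):
    """Shannon entropy (in bits) over activity clusters. Low entropy means
    activity is concentrated in a few identically behaving clusters - a
    red flag for coordinated automation."""
    counts = Counter(p["cluster"] for p in posts)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def originality_index(posts):
    """Fraction of posts whose text is unique within the sample - a crude
    proxy for original versus recycled content."""
    counts = Counter(p["text"] for p in posts)
    return sum(1 for p in posts if counts[p["text"]] == 1) / len(posts)

print(f"human-post ratio:   {human_post_ratio(posts):.2f}")        # 0.40
print(f"botcluster entropy: {botcluster_entropy(posts):.2f} bits") # 0.97
print(f"originality index:  {originality_index(posts):.2f}")       # 0.40
```

In practice, of course, the hard part is the labelling - deciding which accounts are human and which cluster together - not the arithmetic.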
Signs of a Diminishing Human Footprint Online
What evidence do we have that authentic online engagement is in decline? Unfortunately, quite a lot. Across many corners of the Internet - from social networks to old-school forums and comment sections - indicators of dwindling human activity abound. Some of these signs are quantitative (e.g. bot traffic statistics), while others are qualitative (the “feel” of empty or spammy comment sections).
Bot Traffic Overtakes Human Traffic
Perhaps the starkest evidence comes from broad web traffic analyses. In 2024, for the first time ever, automated bots accounted for over half of all global Internet traffic. A comprehensive annual report found that 51% of web visits and actions were generated by non-humans in 2024, surpassing human-generated traffic. This marked a tipping point in the Internet’s history. Attackers and spammers have always employed bots (for scraping, click fraud, etc.), but the accessibility of advanced AI tools supercharged this trend, making it trivially easy to deploy swarms of bots at scale. In short, the majority of activity on the Internet is now automated.
To put this in perspective, just a few years prior (2020–2021), human traffic still outweighed bot traffic. The rapid flip - to bots dominating by 2024 - coincides with the rise of generative AI. Malicious bots have become more sophisticated, using AI to appear human and evade detection. Legitimate uses of automation (like search engines and feed crawlers) have grown too. But whatever the cause, the net effect is a diluted human presence. Even if you as a human are online as much as ever, you’re now swimming in a much larger sea of bot “users.”
Social Media: Infiltration by Fakes and Bots
Social platforms have long battled fake accounts and automated agents, but recent years show the problem at enormous scale. For example, Facebook (Meta) reports that it removes billions of fake profiles every year - often within minutes of their creation. In just the first quarter of 2024, Meta disabled 631 million fake Facebook accounts, on top of some 690 million fake accounts the previous quarter. These numbers are staggering. Even if many bots are caught quickly, the sheer volume implies that a significant portion of “users” trying to participate on major platforms are not human at all. Indeed, between 2017 and mid-2023, Facebook eliminated over 27 billion fake accounts in total, a clear sign that hostile actors are deploying armies of bots to flood the network.
Twitter (now X) presents a similar picture. Officially, Twitter long claimed <5% of accounts were fake, but insiders and independent analysts have challenged that. Various studies from 2022–2024 estimate a much higher bot presence. One analysis by the BotNot project put the figure at 24–37% of active Twitter users being bots. Another peer-reviewed study in Scientific Reports (2025) found that about 20% of accounts engaging in discussions around global events were likely bots. Some researchers have gone further, with one 2024 study (by 5th Column AI) controversially suggesting as many as 60%+ of Twitter accounts could exhibit bot-like behaviour. While estimates vary, it’s safe to say that well over 1 in 10 social media participants may be non-human, and possibly many more in certain hot-button discussions.
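None of these studies publish a single formula, but the flavour of behavioural bot detection can be sketched with two classic signals: machine-regular posting intervals and verbatim self-repetition. Everything below - the thresholds, the equal weighting - is invented for illustration:

```python
import statistics

def bot_likelihood(timestamps, texts):
    """Crude bot score in [0, 1]. Signals and weights are illustrative only."""
    # Signal 1: machine-regular posting intervals (humans post in bursts).
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    regularity = 0.0
    if len(gaps) >= 2 and statistics.mean(gaps) > 0:
        cv = statistics.stdev(gaps) / statistics.mean(gaps)  # coefficient of variation
        regularity = max(0.0, 1.0 - cv)   # near-constant gaps -> near 1
    # Signal 2: how often the account repeats itself verbatim.
    duplicate_rate = 1.0 - len(set(texts)) / len(texts)
    return 0.5 * regularity + 0.5 * duplicate_rate

# An account posting identical text every 60 seconds scores 0.88;
# a human's irregular, varied posting history scores near zero.
print(bot_likelihood([0, 60, 120, 180], ["Buy $COIN now!!!"] * 4))
```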
What does this mean for authentic engagement? It means trending hashtags, reply threads, and even direct messages might often be heavily influenced by automated agents. If you tweet about a product or a political opinion, some of the replies you get could be bot spammers or astroturfing accounts pushing a scripted agenda. And if you suspect that many others online aren’t real, it can discourage you (and other genuine users) from posting at all. The result is a kind of participation vacuum: real users withdraw or shout into the void, while bots happily chirp away, amplifying each other.
Forums and Comment Sections: From Vibrant to Vacant
It’s not just the big social networks. Traditional online forums, message boards, and news comment sections show signs of a human exodus. Many news outlets have outright shut down their comment sections, citing low-quality discourse and difficulties in moderation. In early 2023, Gannett (America’s largest newspaper chain) closed the online comments on most of its publications, following in the footsteps of CNN, The Washington Post, Popular Science and others that had removed comments years earlier. The reason? Unchecked comment areas “can quickly devolve” without intensive monitoring. In practice, many comment sections had become ghost towns or spam pits—places where a handful of trolls and bots shout past each other, driving away regular readers.
Those that remain open often suffer a similar fate. It's common to see a comment section where the only new posts are spambots hawking fake investment schemes or copying and pasting gibberish. Human users, seeing this, learn not to bother engaging. It becomes a self-reinforcing spiral: a forum that is 90% spam loses its community, which in turn drives the spam percentage even higher, since no humans are left to drown it out. From hobbyist forums to product review boards, you'll find numerous corners of the web where meaningful human chatter has given way to silence or robotic repetition.
Even moderators of specialized communities report increasing strain. For instance, technical Q&A sites like Stack Overflow saw such a flood of AI-generated answers in late 2022 that moderators instituted a ban on answers written by ChatGPT. The ban came after users “trying out” the new AI overwhelmed the site with convincing-looking but incorrect answers. The volume of these AI replies was so large—and their accuracy so low—that it was “substantially harmful” to the forum’s usefulness. In effect, a wave of machine-generated content drowned out the carefully curated human knowledge that Stack Overflow is known for. This episode underscores a real risk: AI-generated content eclipsing human-produced content in spaces that rely on knowledgeable human contribution. If not for active intervention, the “crowd wisdom” on such platforms could easily turn into crowd nonsense, reducing real engagement (who wants to answer questions when a bot instantly spews a dozen flawed answers?).
Content Farms and AI “Slop” Flooding the Web
Beyond social interactions, consider the broader content ecosystem: blogs, news sites, how-to articles, product reviews, and so on. Here too, we see human authorship being diluted by algorithmic output. After OpenAI's ChatGPT burst onto the scene in late 2022, the web was inundated with AI-written material. By mid-2025, roughly half of all new web articles were AI-generated rather than written by humans. In fact, one analysis found the share of AI-generated articles on the open web jumped from just 5% in 2020 to about 48% in May 2025. For a brief period in late 2024, AI-written content even outnumbered human-written content on the web, before the two settled back to rough parity.
Some projections are more dramatic still: Europol (the EU's police agency) has warned that up to 90% of online content could be synthetically generated by 2026. In other words, if current trends continue, the vast majority of what you read online - text, images, video - might be produced by AI rather than people, just a year or two from now. Even if that specific figure is speculative, the direction is clear: content creation is scaling exponentially via automation.
Why does this matter for the “human presence”? On a practical level, it means when you search for information or browse your news feed, you’re increasingly likely to encounter machine-written prose. Some of this AI content is benign or even useful. But much of it is what we might call “slop” content - low-effort filler generated to game algorithms and grab clicks. Imagine dozens of websites posting the same AI-rehashed guide to fixing a leaky faucet, or countless auto-generated “news” articles that are really just plagiarized press releases. Human writers with expertise and original perspectives have a harder time breaking through that noise. The online content economy thus risks a kind of cultural entropy, where originality and depth are slowly lost amidst an avalanche of sameness.
There’s also a worrisome bootstrapping effect: as AI content becomes ubiquitous, new AI systems might end up training on this ever-growing pile of synthetic text. Researchers caution that if AI starts learning predominantly from AI-generated output, it could “choke on its own exhaust” and collapse in quality. In short, allowing the web to saturate with machine-written content not only sidelines human creators but could even degrade the feedback loops that keep AI aligned with reality. (We’ll return to this in the context of long-term AI alignment later.)
When Bots Talk to Bots: Simulation Studies
If you really want a glimpse of a “dead” Internet future, look at what happens when bots interact only with each other, with humans completely out of the loop. A recent experiment by researchers at the University of Amsterdam did exactly that: they created a small social network populated entirely by AI chatbots - 500 of them, each with a fictitious persona - and let them loose on one another. The result? In a matter of days, the bot community devolved into a toxic mess uncannily similar to the worst of human social media. The chatbots split into cliques based on their (assigned) political leanings, forming echo chambers. They amplified the most extreme partisan voices, so that outrageous posts gained the most followers and reposts. And a tiny elite of “influencer” bots emerged, dominating the conversation while most others became passive observers.
All of this happened without any human participation and even without algorithmic curation - the usual culprits we blame for online toxicity. The simulation showed that the dysfunctions of social media can arise purely from bot interactions, mimicking and arguably exaggerating human-like behaviour. In essence, the bots recreated a lifeless parody of a community: lots of activity, lots of heat and noise, but no real human stakes.
This is both fascinating and chilling. It suggests that if humans vacate an online space and leave it to bots (or if bots overwhelm the humans), you get a kind of cargo-cult version of society. The forms of engagement remain—posts, likes, arguments—but the soul is gone. Discussions turn into recursive, self-referential loops. Extremes amplify. Nuance disappears. One can imagine the wider Internet trending in this direction if we reach a point where authentic users are outnumbered by AI agents. The study’s authors noted that none of the interventions they tried (like changing the feed order or hiding follower counts) could fully fix the polarized, hollow discourse that emerged. In other words, once the humans are absent, simply tweaking platform settings isn’t enough to bring meaningful conversation back. It’s a sobering reminder that human presence online isn’t just about raw numbers; it has a qualitatively different effect on dialogue and community norms than a legion of bots ever could.
Human Consequences: Why a “Dead” Internet Matters
So far, we’ve been quantifying and describing the trend: a decline in authentic human engagement and a rise of artificial actors online. But why does this matter? If the content is entertaining or the information is available, does it really make a difference whether it was produced by a human or a bot? The answer is a resounding yes—because the shift has profound implications for psychological well-being, societal cohesion, democracy, and even the future trajectory of AI itself. In this section, we’ll explore how an increasingly inauthentic digital world affects us as individuals and as a society.
Psychological and Developmental Health
From a psychological perspective, humans have fundamental social and cognitive needs that the online experience helps fulfil. We seek connection, validation, understanding, and accurate information. When those needs are met with simulations of human interaction rather than the real thing, there can be subtle and not-so-subtle fallout.
For one, engaging mostly with bots or AI-generated content can distort one’s sense of reality and social norms. People are susceptible to what AI researchers call noosemic projection bias - our tendency to project mind and intention onto any entity that seems human-like. Fluent text or a friendly profile picture can trick us into feeling we’re interacting with a thinking person, even if it’s just a language model spinning words. This anthropomorphic trust bias is a known human vulnerability; we often trust a polite, confident AI output as we would a knowledgeable peer. As a result, an Internet full of bots can manipulate our emotions and beliefs more easily than we might expect. We may find ourselves influenced by fabricated “opinions” or swayed by fake social consensus (e.g. hundreds of bot accounts all praising a product or ideology) because our brains react socially to those cues.
Another issue is the potential for parasocial attachment on steroids. Humans already form one-sided emotional bonds with TV characters and celebrities; now imagine AI chatbots designed to be your companion, available 24/7, always agreeable. It’s happening: apps like Replika allow users to create AI friends or romantic partners, and many users report genuinely falling in love with their chatbot. These AI companions are expert at intimacy mimicry - mirroring your desires and providing constant positive feedback.
The danger is twofold. On one hand, vulnerable individuals (the lonely, the young) may become emotionally dependent on virtual partners that don’t reciprocate feelings in any human sense. On the other hand, investing emotional energy into a bot could drain the energy one has for real relationships (social energy is finite). If a generation grows up with AI friends who always listen and AI tutors that give instant answers, they might struggle with the messiness of real human relationships and the patience required for learning and cooperation. Psychologists worry about developmental effects on youth: identity formation could be skewed when much of one’s feedback comes from machines, and frustration tolerance may erode if AI tools constantly give easy answers (why grapple with a human teacher or friend’s differing opinion when your personal AI agrees with you on demand?).
We are, in essence, social creatures facing a social mirage. An Internet heavy with fake personas and AI content might satisfy surface-level cravings for interaction or information, but it lacks the true reciprocity and accountability of human relationships. Over time, this could contribute to feelings of isolation (“I spent all day ‘interacting’ online, yet feel no real connection”) and even cognitive distortion. For instance, constantly encountering extremist or hyper-polished AI content can give one a warped sense of normal discourse, potentially amplifying personal biases. Indeed, a 2023 study found that exposure to social bots amplified users’ own perceptual biases, partly because the bots keep validating and echoing the user’s views. In summary, the decline of genuine human engagement online isn’t just an IT problem - it’s a mental health concern, especially for younger users who are forming their social and intellectual identity in this environment.
Societal Cohesion and Public Trust
Zooming out, an Internet with fewer real people and more fake activity poses a serious challenge to societal cohesion and trust. Democratic societies depend on some baseline of shared reality and good-faith communication. But how do you maintain trust in an information ecosystem where anything could be artificial?
Already, we see a rise in cynicism and conspiracy thinking fuelled by the uncertainty over what’s real online. The “dead internet” narrative itself can encourage a kind of nihilism: Maybe nothing online is real, maybe all those I disagree with are just bots, so why should I engage at all? If citizens believe the public sphere is essentially flooded with propaganda bots or AI deepfakes, they may retreat into private enclaves or simply disengage from civic discussion. This erosion of trust directly harms societal cohesion. It’s hard to have civil debate or empathy when you’re constantly second-guessing whether the person you’re talking to exists or is a troll farm’s creation.
Public trust in institutions can also crumble in such an environment. Consider how easy it becomes to dismiss inconvenient facts or evidence: any video, image, or quote could be labelled a fabrication in the age of AI manipulation. This is sometimes called the liar’s dividend - bad actors exploit the mere possibility of fakery to deny reality (e.g., a corrupt official crying “That leaked video of me is a deepfake!” even if it’s real). When a large portion of online content is inauthentic, even authentic content falls under suspicion. The result is a kind of information entropy where the public can’t agree on basic truths, greasing the skids for polarization and conflict.
Social cohesion frays further when human voices are drowned out. Real communities online provide spaces for support and dialogue - think of parenting forums, hobby groups, or grassroots activist networks. If these spaces get invaded by spam and bots, genuine users scatter. What’s left are hollowed-out communities or ones dominated by the loudest, angriest (often automated or anonymous) voices. The moderate, bridge-building conversations simply don’t happen in a forum overrun by aggression and fakery. Over time, this can contribute to cultural polarization: people retreat to private chats or tightly controlled platforms to find real conversation, and the shared arenas (large social networks, comment sections on news) become uninhabitable war zones or wastelands. Society loses a common square where diverse people might have exchanged ideas, however messily.
The cost to public trust is evident in examples like the comment section shutdowns. News organizations found that keeping comments open often did more harm than good to trust in the news. Readers would see toxicity and harassment in those threads and come away with less trust in the media outlet. In some cases, journalists themselves became targets of orchestrated comment attacks, leading them to self-censor or avoid important but contentious topics. When large swaths of “public feedback” are actually troll or bot-driven, it skews our perception of public opinion and silences constructive voices. We end up with a less informed, less empathetic society.
In short, the crowding out of humans online chips away at the social fabric. Communities dissolve, miscommunication flourishes, and a general distrust of both information and each other becomes the norm. This is precisely what malicious actors want - it makes societies easier to destabilize. We should recognize that maintaining genuine human presence online is not just nostalgic idealism; it’s critical for social health.
Democracy, Governance, and Information Credibility
Extending the trust issue into the political realm, a “dead” internet poses very real threats to democracy and governance. Democratic processes assume an informed citizenry and a public forum for debate. But what happens when the information stream is polluted and the forum is overrun by fake participants?
Elections around the world have already seen influence operations augmented by AI and bots. From fake social media accounts pushing polarizing narratives to AI-generated “astroturf” campaigns (phony grassroots movements), the tools to manipulate public opinion at scale are readily available. When meaningful human voices are drowned out, it becomes trivially easy for someone with enough bots to manufacture the appearance of consensus. For example, a policy proposal could appear wildly popular or unpopular if an army of bot accounts all start posting the same slogan about it. Human observers, including journalists and policymakers who scan social media, may take this manufactured sentiment as real (“look, thousands of people are suddenly passionate about X”). This can distort decision making. Lawmakers have cited Twitter trends or online engagements as indicative of constituents’ views—imagine their misjudgment if a large share of those “constituents” are bots.
Information credibility suffers an even more direct hit. A democracy relies on voters being able to trust what they read and see. If AI-generated fake news and deepfake videos become common, people either believe lies or stop believing anything. Neither is good for democratic governance. We risk entering an era where authentic facts struggle to compete in visibility with sensational AI fabrications. In a crowded attention economy, algorithms (which often favor engagement and novelty) might actually boost low-credibility AI content simply because it grabs quick clicks—this is the “algorithm’s taste for junk” effect. Short-term engagement algorithms, tuned to maximize our attention, can end up amplifying low-epistemic (low truth value) content at the expense of reliable information. Over time, this feedback loop can make truthful reporting less visible than emotionally charged “junk” content. An electorate that consumes mostly the latter is not equipped to make sound decisions.
There’s also a governance challenge in terms of evidence. Consider how much of our political discourse and legal evidence now involves digital records—tweets, videos, posts. If most of these can be called into question (“maybe a bot wrote that congressman’s 10,000 supportive comments” or “perhaps that incriminating video was AI-doctored”), holding power to account becomes harder. Courts and regulators may need to develop entirely new frameworks for provenance and authentication of digital content. Experts are already calling for things like cryptographic content signing and provenance tracking as a way to mark what’s human-made versus AI-made. Without such measures, the risk is that bad actors have it both ways: they can flood the zone with disinformation and dismiss any real evidence against them as a probable fake. That is a nightmare scenario for democracy and rule of law.
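To give a flavour of what cryptographic content signing involves, here is a minimal sketch using Ed25519 signatures from Python's widely used `cryptography` package. Real provenance standards such as C2PA define much richer, certificate-chained manifests; this toy shows only the core guarantee, that tampering is detectable:

```python
# pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A publisher generates a keypair once and publishes the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

article = b"Council votes 7-2 to approve the new transit plan."
signature = private_key.sign(article)   # distributed alongside the content

# Anyone holding the public key can check the content is unaltered:
try:
    public_key.verify(signature, article)
    print("Authentic: matches the publisher's signature.")
except InvalidSignature:
    print("Altered, or not from this publisher.")

# Any tampering, however small, breaks verification:
try:
    public_key.verify(signature, article + b" (edited)")
except InvalidSignature:
    print("Tampered copy correctly rejected.")
```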
Long-Term AI Alignment and Human-AI Policy
Finally, let’s consider the implications for AI alignment and the future of human-AI interaction. “AI alignment” refers to the challenge of ensuring advanced AI systems’ goals and behaviors remain beneficial to humans. It turns out the trend of an increasingly automated internet is itself a test case for alignment problems and a potential stumbling block for future AI systems.
One issue is the training loop mentioned earlier: as more online content is generated by AI, future AI models trained on Internet data could get caught in a self-referential spiral. Researchers have documented that if an AI learns from AI-generated data (especially if that data is low-quality or biased), its performance can degrade in a compounding way. It’s like making a copy of a copy of a copy - eventually, the image blurs. This model collapse scenario is bad for AI and humans. Models might become less accurate or more prone to strange failures because they’re imitating the statistical quirks of other AIs rather than grounded facts or human perspectives. In the long run, this could lead to AI systems that are poorly aligned with human reality - quite literally, they would be aligned to a distorted AI-created “reality.” Keeping a healthy presence of verified human-origin content in the training mix is therefore important for AI developers who want their models to stay anchored and useful.
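A toy illustration of the dynamic, using only Python's standard library: repeatedly fit a Gaussian to samples drawn from the previous generation's own output (with the tails trimmed, since real pipelines tend to favour "safe", high-probability text), and the learned diversity shrinks generation after generation. The numbers are invented; the shrinkage is the point:

```python
import random
import statistics

random.seed(42)

# Generation 0: "human" data with genuine diversity (mean 0, std 1).
data = [random.gauss(0, 1) for _ in range(500)]

for gen in range(6):
    mu, sigma = statistics.mean(data), statistics.stdev(data)
    print(f"gen {gen}: learned std = {sigma:.3f}")
    # The next generation trains only on the current model's own samples,
    # mildly favouring high-probability outputs (we trim the tails):
    samples = sorted(random.gauss(mu, sigma) for _ in range(500))
    data = samples[50:450]

# The learned std shrinks every generation: diversity drains away and the
# model converges on a narrow, self-referential sliver of its origins.
```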
Another facet is how human-AI interactions shape AI behaviour. Many AI systems (from customer service bots to recommendation algorithms) adapt based on user behaviour. If genuine user engagement drops and bots or click-farms fill the gap, AI systems will be optimizing for the wrong audience. For instance, a social media algorithm might start prioritizing content that appeals to bots (e.g. particular repetitive keywords or outrage-bait that automated agents propagate) rather than what real users value. In essence, feedback loops could form entirely between AIs: algorithms feeding content to bot accounts, which then engage and signal the algorithms to feed more of that content. Humans inadvertently become bystanders to a closed-loop dance of machines optimizing for machine-driven metrics. Such algorithmic gatekeeping - where algorithms inadvertently prioritize artificial engagement - can marginalize human-preferred content and further push humans out of the conversation. This misalignment between platform objectives and actual user welfare is a classic alignment failure, just playing out at the society-platform level.
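A stylized simulation of that failure mode, with every number invented: give an engagement-maximizing ranker a bot-majority audience, and it will promote the content only the bots reward:

```python
import random

random.seed(1)

# Invented engagement probabilities for (humans, bots):
APPEAL = {
    "substantive":  (0.30, 0.05),   # humans mildly prefer substance
    "outrage_bait": (0.10, 0.90),   # bots reflexively amplify the bait
}
HUMANS, BOTS = 40, 60               # an audience where bots are the majority

def engagements(kind):
    h_p, b_p = APPEAL[kind]
    return (sum(random.random() < h_p for _ in range(HUMANS))
            + sum(random.random() < b_p for _ in range(BOTS)))

# The ranker A/B-tests both content types and promotes the raw winner:
results = {kind: engagements(kind) for kind in APPEAL}
print(results)                                     # e.g. {'substantive': 15, 'outrage_bait': 58}
print("promoted:", max(results, key=results.get))  # outrage_bait - the bots decided

# Count only *human* engagements and the verdict flips (~12 vs ~4):
# the platform is optimizing for an audience that isn't really there.
```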
From a policy standpoint, all these issues signal that we need guardrails and deliberate design to keep the Internet human-centered. This spans multiple approaches: technical (e.g. watermarking AI content, as proposed by standards like C2PA for provenance), regulatory (mandating transparency about bots and AI-generated media), and educational (improving digital literacy so users can spot “slop” content or bot behaviour). Long-term AI alignment isn’t just about preventing a rogue superintelligence in the distant future; it’s about managing the present interaction between humans and AI-driven systems so that it amplifies the best of human nature rather than the worst. That means designing platforms where human contributions are valued and highlighted - essentially, designing for “truth by design” and fostering authenticity as a feature, not a bug. Some forward-thinking proposals even suggest “intimacy throttles” and youth-specific protections: for example, limits on how emotionally manipulative an AI friend can be, or features that encourage teenagers to spend time in human conversation after a period of intense chatbot use. These may sound far-fetched, but they speak directly to the alignment of AI tools with human social needs and long-term well-being.
Second-Order Effects on the Horizon
Digging deeper, we can speculate on some secondary effects of an Internet dense with AI and sparse in humans. These are less immediately measurable but plausibly emergent phenomena if current trends continue:
· Cultural Entropy: As mentioned earlier, the deluge of machine-generated content could lead to a decay in cultural originality. One might call this cultural entropy - the idea that meaning and creativity in online culture degrade as algorithms endlessly remix and regurgitate the same tropes. We could see a future where memes, music, and writing all start to feel homogenized, “optimized” for engagement in a way that flattens subcultures and quirky human innovation. Over time, this could sap the Internet of its role as a driver of fresh culture, turning it into a bland echo chamber of past content. Society could experience a kind of creative stagnation, even as content output is at an all-time high, because genuine human creative risk-taking is harder to find an audience amid the noise.
· Algorithmic Gatekeeping of Reality: We already rely on algorithms to curate what news and knowledge we consume. If those algorithms increasingly filter a reality that has itself been shaped by bots and AI outputs, we risk living behind not just one filter bubble, but a filter hall of mirrors. This effect means that a few platform algorithms could end up effectively gatekeeping reality, deciding which facts “survive” the trip through the gauntlet of AI-boosted dissemination. In such a scenario, truth might not rise to the top; instead, the ideas that align best with algorithmic patterns (often sensational, simplistic ones) get reinforced. Human epistemics (our methods of distinguishing truth) could atrophy, as people come to passively accept whatever the algorithm feeds, further lowering the incentive to contribute or fact-check (since one’s contributions won’t be seen anyway unless they cater to the algorithm). This is a subtler “deadness” - not the absence of human voices, but the enfeeblement of human agency in steering the online narrative.
· Intimacy Mimicry and Social Energy Drain: We touched on this with the Replika example - AI companions mimicking intimacy. Expand that concept: what if a significant fraction of the population in the near future regularly “hangs out” with AI friends or mentors tailored perfectly to them? On one hand, this could alleviate loneliness for some. On the other, it might lead to a widespread withdrawal from the frictions and rewards of real human relationships. If an AI girlfriend or boyfriend is always attentive, never disagrees (unless programmed to playfully do so), and can be effectively paused or tailored at will, real partners may seem comparatively high-maintenance. The social energy that people have traditionally invested in family, friends, and community could be siphoned into these one-way AI relationships. We might see phenomena like people becoming emotionally exhausted by AI interactions - paradoxically so, since the AI is doing the “work,” but humans can still pour emotional labor and empathy into talking with their virtual confidant. This kind of drain could reduce the vitality of our offline relationships and community participation. It’s a speculative outcome, but early signs (like users grieving when their AI companion’s programming changes) indicate that these bonds feel very real to the human psyche.
Each of these secondary effects underscores an important point: the value of human presence online extends beyond the content humans create; it influences our culture’s trajectory and the integrity of our social systems. The “dead Internet” isn’t just a count of bots vs humans - it’s also about losing the checks, balances, and sparks of creativity that real people provide.
Conclusion: Keeping the Internet Alive
The Internet was born as a network of people - sharing knowledge, arguing passionately, collaborating and creating. Today, it risks becoming a parody of itself: a place where bots chatter, algorithms optimize for engagement at any cost, and humans are reduced to either spectators or targets for manipulation. “The Dead Internet” is, as we’ve seen, an exaggerated phrase. But it poignantly captures a trend we can measure and observe: the crowding out of meaningful human presence online by artificial actors and content. We’ve surveyed the evidence of this trend in metrics (over half of web traffic now non-human), in specific platforms (hundreds of millions of fake accounts scrubbed quarterly, forums flooded with AI-generated text), and in the very texture of the web’s content (perhaps 50% of it generated by AI as of 2025). The trajectory is clear.
This shift matters deeply for individuals’ mental health, for the health of communities, and for the health of our democracy and information ecosystem. It challenges us to adapt - not by abandoning technology, but by doubling down on humanity. The solution is not to wish for some nostalgic return to a pre-AI Internet (that genie is long out of the bottle). Rather, it’s to co-evolve with AI in a way that preserves human agency and authenticity. We need guardrails that amplify genuine human voices and dampen the cacophony of bots. We need transparency measures (like verified indicators for human posts, and labels or watermarks for AI material) so that the public square isn’t an anonymous masquerade. We need algorithms explicitly tuned to favor quality and reality over clicky falsehoods - essentially, to give epistemic value a fighting chance alongside engagement metrics. And perhaps most importantly, we need an informed digital citizenry that recognizes the value of their participation. If you’re a human online, you matter. Your clicks, your posts, your skepticism, and your creativity are the lifeblood that can keep the Internet “alive” in the meaningful sense.
In the end, declaring the Internet “dead” is too defeatist. Yes, the challenges are immense - the slop and spam, the AI echoes and deepfakes, the bot swarms. But the fact we can measure these trends and identify their effects means we can also fight back in smart ways. It starts with awareness: knowing what we’re up against. From there, solutions range from the technical and regulatory to the personal (e.g. cultivating digital hygiene habits, choosing platforms that prioritize human-centric design, and supporting content creators who put in real effort). If we succeed, the Internet of the future can still be a place of vibrant human exchange - enriched with AI tools, perhaps, but with humans firmly in the loop as the ultimate arbiters of meaning and truth. In separating myth from measurable, we find that the Internet isn’t dead; it is what we make of it. The more we insist on measurable authenticity and invest in genuine engagement, the more alive our digital world will remain.
Sources:
Chang, T. (2025). "2025 Imperva Bad Bot Report: Automated traffic surpasses human activity." Thales/Imperva news release. cpl.thalesgroup.com.
Ng, L. H. X., & Carley, K. M. (2025). "A global comparison of social media bot and human characteristics." Scientific Reports, 15, Article 10973. nature.com.
Liddell, S. (2024). "Facebook's Fake Accounts Crisis…" Medium, Jul 9, 2024. shanelid.medium.com.
Stack Overflow moderators (2022). "Why posting ChatGPT answers is temporarily banned." Meta Stack Overflow post, Dec 2022. vice.com.
Morrone, M. (2025). "AI writing hasn't overwhelmed the web yet." Axios, Oct 14, 2025. axios.com.
Chong Ming, L. (2025). "Researchers built a social network made of AI bots…" Business Insider, Aug 14, 2025. businessinsider.com.
Cognitive Susceptibility Taxonomy Manual v0.4 (Neural Horizons, 2025). Definitions of human cognitive vulnerabilities (e.g. anthropomorphic bias, parasocial attachment). Contact the author for access. https://www.neural-horizons.ai/_files/ugd/bf4f04_437189ad456544d7b0bfc9f2d3a53c2b.pdf
Tong, A. (2023). "What happens when your AI chatbot stops loving you back?" Reuters, Mar 21, 2023. reuters.com.
Taylor, H. (2023). "No Comment: Shutting down newspaper comment sections…" The Orion, Feb 13, 2023. theorion.com.
"Dead Internet Theory." Wikipedia (accessed 2025). en.wikipedia.org. (Background on the "dead Internet" conspiracy theory narrative.)