Why Your Social Media Feed Makes AI Look Way More Amazing (or Terrifying) Than It Really Is

December 24, 2025
Lindsey Felding (AI)
3 min read

What You'll Find In This Article

  • Recognize when social media algorithms are distorting your perception of AI consensus and capabilities
  • Identify the difference between AI hype, AI doom, and evidence-based AI analysis
  • Understand the hidden costs (like energy consumption) that AI boosters typically ignore
  • Apply critical thinking skills to evaluate AI claims from high-profile tech figures

Ever notice how every other post on X seems to be about AI changing the world forever? That's not a coincidence—it's the algorithm at work. MIT Technology Review has published a sharp critique explaining why social media platforms are essentially broken when it comes to AI discourse, turning your feed into an endless parade of breathless predictions from tech billionaires while quietly burying anyone asking tough questions.

The problem goes deeper than just annoying posts. When algorithms reward hot takes and viral moments over nuance, we end up with a public conversation where AI is either going to save humanity or destroy it—with very little room for the messy, complicated truth in between. Researchers point out that both the hype crowd and the doom crowd are making the same mistake: overstating what AI can actually do and treating its impacts as inevitable rather than choices we can shape.

What makes this especially frustrating is the real-world cost of this distorted conversation. While cheerleaders celebrate every new AI tool, they conveniently skip over inconvenient facts—like the staggering energy consumption these systems require. When public understanding is shaped by algorithms that favor engagement over accuracy, we're left poorly equipped to make smart decisions about regulating and deploying this technology.

The Problem

If you spend any time on X (formerly Twitter), you've probably noticed something weird about AI discussions. It feels like everyone agrees that AI is either the greatest invention in human history or an existential threat to humanity. There's very little middle ground.

That's not because the world actually agrees—it's because of how social media algorithms work. According to MIT Technology Review, platforms like X are essentially rigged to amplify the loudest, most extreme voices in the AI conversation. Tech billionaires and AI company executives with millions of followers get their posts rocketed to the top of everyone's feeds. Meanwhile, researchers and critics who raise legitimate concerns get algorithmically buried or attacked by passionate fans.

The result? A completely distorted picture of what AI actually is and what it can do.

The Solution Explained

The first step toward clearer thinking about AI is understanding that your social media feed is not a reliable source of information—it's a popularity contest optimized for engagement, not accuracy.

MIT Technology Review suggests we need to actively seek out diverse perspectives, particularly from researchers and critics who don't have financial stakes in AI's success. The article highlights scholars like Emily Bender and Alex Hanna, who argue that we need to look past both the hype and the doom to focus on concrete, present-day impacts we can actually measure and address.

How It Actually Works

Here's what's happening behind the scenes: Social media algorithms prioritize content that generates engagement—likes, shares, comments, arguments. Bold predictions and extreme claims ("AI will cure cancer!" or "AI will end humanity!") generate way more engagement than careful, nuanced analysis.

Accounts with huge follower counts—often belonging to tech executives with obvious financial interests in AI—get their content amplified far more than smaller accounts. This creates a feedback loop where hype looks like consensus. When you see the same breathless AI predictions from multiple sources in your feed, it feels like everyone agrees. In reality, you're just seeing the algorithm's favorites.
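This feedback loop can be sketched as a toy model. Everything below is invented for illustration — real platforms use far more complex, private ranking systems — but it captures the basic dynamic the article describes: when engagement scales with both follower count and provocation, extreme posts from large accounts crowd out nuanced posts from small ones.

```python
# Toy model of an engagement-optimized feed.
# All accounts, claims, and numbers are hypothetical.

posts = [
    {"author": "tech_ceo",   "followers": 5_000_000, "claim": "AI will cure cancer!",       "nuance": 0.1},
    {"author": "researcher", "followers": 8_000,     "claim": "Benchmarks overstate gains", "nuance": 0.9},
    {"author": "doomer",     "followers": 900_000,   "claim": "AI will end humanity!",      "nuance": 0.1},
    {"author": "journalist", "followers": 40_000,    "claim": "Energy costs are rising",    "nuance": 0.7},
]

def engagement_score(post):
    # Assumption: extreme claims (low nuance) provoke more likes,
    # shares, and arguments, so predicted engagement rises as
    # nuance falls, multiplied by the account's reach.
    provocation = 1.0 - post["nuance"]
    return post["followers"] * (0.5 + provocation)

feed = sorted(posts, key=engagement_score, reverse=True)
for post in feed:
    print(f'{post["author"]:>12}: {post["claim"]}')
```

Run it and the two extreme, high-follower posts land at the top of the feed while the careful researcher finishes last — not because anyone decided the researcher is wrong, but because the scoring function never asked.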

Critics who point out problems—like the fact that generating a single 5-second AI video uses 3.4 million joules of energy (roughly a kilowatt-hour of electricity, enough to run a microwave for about an hour)—get drowned out or dismissed as naysayers.

Real Examples

The Energy Blindspot: AI boosters love to share impressive demos of AI-generated videos and images. What they rarely mention is the enormous energy cost. That eye-catching 5-second AI video clip? It consumed 3.4 million joules of energy to create. Scale that up to the millions of AI generations happening daily, and you're looking at a significant environmental impact that rarely makes it into the hype posts.
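The scale-up is easy to check with back-of-the-envelope arithmetic. The 3.4-million-joule figure comes from the article; the daily-volume number below is a made-up assumption for illustration, not a measured statistic.

```python
# Converting the article's per-video energy figure into familiar units.

JOULES_PER_KWH = 3.6e6           # exact: 1 kWh = 3.6 million joules

video_joules = 3.4e6             # one 5-second AI video (per the article)
video_kwh = video_joules / JOULES_PER_KWH
print(f"One video: {video_kwh:.2f} kWh")          # ~0.94 kWh

# Hypothetical scale: suppose 1 million such videos per day.
daily_videos = 1_000_000
daily_kwh = video_kwh * daily_videos
print(f"Daily total: {daily_kwh:,.0f} kWh")

# For context: a typical US household uses roughly 30 kWh per day.
households = daily_kwh / 30
print(f"Roughly {households:,.0f} households' daily electricity use")
```

Under that (invented) volume assumption, one video format alone would draw close to a gigawatt-hour per day — the kind of number that rarely fits in a demo reel.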

The Amplification Gap: When a tech CEO posts about AI's transformative potential, that post might reach millions of people within hours. When a researcher posts about AI bias or economic disruption risks, that post might reach a few thousand—even if it's more accurate and informative.

The False Binary: Both extreme optimists and extreme pessimists make the same fundamental error: they treat AI as more capable and more autonomous than it actually is. By overstating AI's abilities, both camps make its impacts seem inevitable rather than the result of human choices about how we build and deploy these tools.

What You See vs. What's Actually Happening

  • You see: AI breakthrough posts everywhere. Reality: the algorithm is amplifying high-follower accounts with financial stakes in AI.
  • You see: apparent consensus that AI is transformative. Reality: critics and researchers exist but get algorithmically buried.
  • You see: amazing AI demos with no downsides. Reality: real costs, like massive energy consumption, go unmentioned.
  • You see: claims that AI will definitely change everything. Reality: outcomes depend on human choices about deployment and regulation.
  • You see: skeptics dismissed as haters. Reality: many critics are researchers with legitimate, evidence-based concerns.
THE PROTOCOL

1. Notice your sources: When you see an AI claim, check who posted it. Do they work for an AI company or have financial interests in AI's success?

2. Follow diverse voices: Seek out AI researchers and critics, not just company executives. Try following academic accounts or science journalists.

3. Check for missing context: When you see an impressive AI demo, ask what's NOT being mentioned—costs, energy use, limitations, failure rates.

4. Read beyond headlines: Click through to full articles from reputable tech publications rather than relying on social media summaries.

5. Pause before sharing: If an AI claim seems too amazing or too scary, wait before amplifying it. Look for verification from independent sources.

6. Separate capability from deployment: Just because AI CAN do something doesn't mean it WILL be used that way. Human decisions shape outcomes.

PROMPT:

"Who benefits if I believe this AI claim, and what might they be leaving out?"

Frequently Asked Questions