❤️‍🩹 Why is the hype machine so hard to break

5 Types of Hype and How They Harm

Hey again!

It looks like there’s been a lot going down in the AI space in the last two weeks, but before we dive into all the cool things you can click, watch, and read, I want to talk about why allll the hype around AI is a problem.

There's a significant divide between those in AI Safety and those in AI Ethics. This divide has been covered a lot recently, but I've identified one distinction that stands out.

I've noticed a pattern among those in the AI Safety camp: whether or not their ideologies fall under the TESCREAL umbrella, near-term risks that mostly impact marginalized groups don't seem to matter to them. The discrimination and bias we see AI perpetuating right now aren't considered existential harms because they don't impact the lives of the entire planet, just racial, gender, and other marginalized groups.

I’m not going to be arguing why these near-term risks mainly to marginalized groups matter, but rather I’ll be highlighting the work to address these tangible harms now.

TO ME: Potential existential risks are not, and will never be, as important as mitigating near-term harms and remediating the harms already caused.

Types of Hype

  1. Exaggerated Capabilities

This is one of the most common issues when looking at AI. A great example: in 2016, Geoffrey Hinton said, "We should stop training radiologists now, it's just completely obvious within five years deep learning is going to do better than radiologists."

We still train and employ radiologists, and radiology is one of the more in-demand specialties in medicine right now. AI did NOT take over the field or become better than humans, but the people who tell you it did have skin in the game.

This sends the message that AI has capabilities it doesn't. It cannot reason, act ethically or unethically, or understand. This reinforces our human tendency to over-rely on tech tools and take their answers at face value without being critical of their outputs.

We don't question our calculators, scales, or digital clocks, and we expect other tools to just work without having to be critical of their outputs. Recently, a lawyer using ChatGPT to help research cases relied on the LLM to verify its own outputs rather than finding direct sources. Humans tend to trust our tools, and it's on the builders to design tools in ways that don't encourage misuse and over-reliance.

This example specifically increases fears about job loss in traditionally high-paid, seemingly “future-proof” roles.

  2. Fear Mongering

There are companies and individuals with a financial interest in making you worried and scared about AI. You've probably seen the many posts and promoted tweets from LinkedIn, TikTok, and Twitter Blue Checks saying that if you don't get into AI now, you'll get left behind. The goal is to make you fearful so that those who profit from your fear can exploit you later. The people actually working to fix these issues want you to understand what's going on, be critical of AI tools and their outputs, and feel empowered to effect change.

How it's harmful

First, nobody ever says what getting left behind actually means! Left to die? Unable to find a job? This builds interest and intrigue around AI while (intentionally) providing no direction beyond buying their course/app/whatever. It leaves viewers both worried and willing to buy whatever their trusted informant is selling. They're encouraging you to rush to adopt, to use AI for every task, and, most importantly, to give them money.

  3. Doomerism

“AI is gonna kill us!!” This is one of the most harmful types of hype because it distracts from the problems happening right now. The call for a pause on AI development was made on the grounds that AI could kill us. These AI “godfathers” and Turing Award winners worry only that their life's work might hurt other rich white people; they feel no guilt about, and take no issue with, how it has already hurt people.

How it's harmful

For a lot of folks who haven't worked in AI, it's hard to know what the real risks are and what solutions exist. This hype sounds alarming yet has little merit, and the trendiness of AI fuels adoption even among people who are fearful of it. I'm not saying there's no chance whatsoever that AI might someday kill us, but AI has killed people already. Its use has incarcerated people already. Its use has harmed thousands upon thousands of people, so why shouldn't we work on that, since it's happening right now?

  4. Overestimating Complexity

There are many people with a vested interest in keeping the general public uninformed about how AI works, how it's developed, and what companies really do with our data. They want you to believe it's too late to learn, or that it's too hard, so they can keep the market small, eliminate would-be competitors, and take your money for tools you don't need or could build yourself.

How it's harmful

Being honest about how complex AI can be is a double-edged sword. On one hand, we should talk clearly and specifically about how we train, deploy, and monitor AI tools; on the other, some go as far as purporting that it's sooo hard to understand unless you're a Ph.D. or a genius. By framing AI as hard to regulate, and thus hard for policymakers to understand, they send the message that the average person couldn't understand it.

This only helps the small group of mostly white men who dominate AI to hit us with the one-two punch:

  1. Tell the masses AI is so hard to understand, but we PhD-holders and engineers are among the select few that understand it.

  2. Fix the problem they created with someone else's money.

  5. AI Equality

"As long as we make sure AI works for all of us, we'll be okay!" In this context, there is no all of us. There are those who are being harmed now and those who might be harmed in the future. There is no “us” when some experience no harms and are the least likely to. We cannot treat these groups equally.

This is a common message, even from those in the ethical/responsible AI spaces, offered as the canned, non-offensive response, but it's too late for that. Even the White House's proposed AI Bill of Rights leans towards equality. AI CANNOT work for all of us, nor should that be our goal. We should be focused on remediation and on mitigating new harms.

I believe the equality narrative so little that I created a DC-based meetup specifically about equity in AI. Join if you're in the DC area!

How it's harmful

These people strip the justice aspect from the message of ethical/responsible AI while ignoring our warnings and issuing statements that help no one but themselves. These statements and signed letters exist to funnel funds in their direction.

While there are other kinds of hype I haven’t covered, these are the ones I’ve identified over the last few months when discussing AI.
