v1.01 ~ A re-introduction to AI hype
Hey friends, AI Anti Hype is back!
In this return, I want to mention a shift. I so appreciate everyone who's read and shared this so far. To reach even more people, I'm going to move away from the long opinion pieces and start explaining wtf AI actually is. Please share with your grandma (if she doesn't mind cursing) and others who may not realize the role that AI plays in our everyday lives.
I want to try and go a little deeper than most 101 content, so you'll see a lot of QTNAs (questions that need answering). You may also see the brand change soon, but that's TBD.
🎧 want to listen instead of read? click below to listen to the newsletter in my voice (not AI).
It's so hard to dispel AI hype because Big Tech puts a lot of time, money, and attention into obscuring how AI is developed. If they make it seem like AI is mystical or hard to learn, fewer people will take the time to understand the basics about AI. If they work to keep their training data, models, and methods hidden, it's easier for them to hide that they're stealing data or just using human labor instead of any AI at all. If you dismiss it as some phenomenon of human advancement and growth, you won't ask about how much energy it consumes.
I'm laser-focused on addressing AI hype because I think it's one of the major drivers for the injustices, harms, and rapid adoption we see.
(QTNA: Why do I think that?) Hype is the killer of reasonable expectations. Reasonable expectations about what a technology can or can't do allow us to plan for when these tools eventually fail, develop guardrails, take the output with a grain of salt, and have the space to criticize how we use AI.
Sure, we can do those things now, but until the recent backlash around generative AI, critics were always called doubters, pessimists, and lots of worse names just for trying to be realistic. Having a lot of hype around any tool lowers our guard and lets us be more easily swayed by those tools, even when we know better.
cut to the chase ⤵️
what can I do? ✊
Before I dive in, you may already be feeling discouraged about what we can actually do about this. It's not often that angry citizens succeed at stopping what corporations want to do. Don't worry, I have a plan! I know of so many methods for fighting the AI infiltration, and a lot depends on your background and skills, but I'll start with the basics.
If there's one thing you take away from this newsletter, it's this: speak up when you see AI bullshit. When you see AI infiltrating your favorite vintage sites, search results, or note apps, contact them and tell them what you think. We've been duped into thinking we don't have a choice, but take a note from the boomer handbook and send that strongly-worded email.
That's the best first action everyone can take right now.
what's the deal with being "anti" hype?
Companies that develop AI oversell its function, performance, and potential while neglecting the privacy concerns, discriminatory tendencies, and risks we've seen happen over and over again. I mean, they steal the very data these tools are trained on.
Since AI tools can make millions of decisions in minutes, bad decisions (like who gets recommended for a liver transplant) can be made at an unacceptable rate. To add insult to injury, people often have no way to opt out of having these tools make decisions about them, even for critical aspects of their lives.
(QTNA: Is the hype really that bad?) AI hype has funneled vast resources, investments, and attention toward AI, while the tools themselves remain immature, untested, and prone to extending discrimination. AI has widened the disparity of power between institutions (governments, companies, academia) and people.
Hype is exacerbated by the multitude of AI "experts" and influencers who thrive on the attention, making promises about how AI will "change the world." These people have an interest in perpetuating the hype; after all, it brings them new investors and customers. Ever notice that few of the people telling you to become a millionaire overnight with AI are Data Scientists or engineers? Not that they have to be, but those roles know AI the best, so wouldn't they be the most likely millionaires?
Pro tip: Block anyone who peddles phrases like "learn about AI so you won't get left behind" or "you won't get replaced by AI, you'll get replaced by someone using AI." These are the telltale signs of hype dealers.
why we should all be against AI Hype
Overreliance: AI hype can lead to an overreliance on AI tools, even when they're untested, unpredictable, and costly. That leads us to make poor decisions based on flawed AI outcomes. We've already seen organizations replace entire teams, but this won't work out well. Even OpenAI's own study showed ChatGPT was wrong more than half the time.
High Expectations: When AI technologies fail to meet the expectations created by hype, it can lead to disappointment and a loss of trust in AI as a whole. It can also lead to tangible harm, like when tools are adopted prematurely (because the companies paying for commercial AI tools think they're more advanced than they are) without enough stress-testing to disqualify them from critical industries like healthcare and public safety.
Early adoption (derogatory): Hype can pressure organizations and individuals to adopt AI without critically deciding if it's the right solution or considering the inevitable risks and ethical consequences. Often this happens so companies can keep up with market expectations and keep shareholders confident that the company is cutting edge.
Misdirecting Resources: AI hype can influence companies to divert resources away from audits and fixes for AI and toward rapidly expanding features and integrations, a leading cause of harmful consequences. Brushing off criticism is what a hype culture does; like I mentioned before, critics get dismissed as negative Nancys.
Reinforcing Biases: AI systems amplify existing biases because AI systems are pattern-matching machines. We (data scientists and engineers) show AI tools vast amounts of data about past decisions humans made, and AI replicates those patterns, even the discriminatory ones. Hype can lead to the rushed deployment of biased AI systems and cause companies to bend to pressure rather than test their products for bias. Unfortunately for consumers, it doesn't benefit the companies making AI to just "do the right thing."
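To see how mechanical that replication is, here's a minimal sketch in Python. Everything in it is made up for illustration: the features, the "group" attribute, and the hypothetical hiring history.

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical historical hiring data: features = [years_experience, group],
# where "group" is a demographic attribute. In this invented history,
# equally experienced candidates from group 1 were rejected.
X = [
    [5, 0], [6, 0], [4, 0],   # group 0: hired
    [5, 1], [6, 1], [4, 1],   # group 1: same experience, rejected
]
y = [1, 1, 1, 0, 0, 0]        # past human decisions (1 = hired)

model = DecisionTreeClassifier().fit(X, y)

# The model faithfully reproduces the discriminatory pattern:
print(model.predict([[5, 0], [5, 1]]))  # [1 0] -- same experience, different outcome
```

No one told the model to discriminate; it just found the pattern that best reproduced past human decisions, which is exactly what it was built to do.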
back to basics: what is AI?
It can be hard to get on the same page with everyone talking about AI because we all mean different things by it. From reinforcement learning agents to classical ML, AI has a broad definition, so let's get aligned on a few concepts first.
What is AI?
So AI as a term is more marketing than science, but AI actually isn't as new as most people think. It all started at a conference in the 1950s, when a group of computing professors gathered at Dartmouth College and asked if we could teach computers to learn by example rather than telling them precisely what to do.
Keep in mind it was the 1950s, and not everyone was allowed to go to college, much less become a college professor. This important fact reminds us that the field of AI may be limited by its founders' limited perspective and imagination.
Many of the advancements in AI were spurred by its military use and investments from the Department of Defense (DoD) and Defense Advanced Research Projects Agency (DARPA). From early computing initiatives to modern machine learning techniques, military funding has shaped AI development significantly.
There are two main kinds of AI tools: predictive and generative. You might have dabbled with something like ChatGPT, which represents generative AI. Tools like that create "new" content by predicting the likelihood of the next word when you prompt them. So if you write "the cat jumped over the ____", a generative text tool would reference all the examples it's been fed that include "the cat jumped over the" and calculate the chance the next word is "hat". It would find, as you can guess, that the next word is "hat" in most of the past examples.
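Here's a toy version of that counting logic in plain Python. The four-sentence "corpus" is a made-up stand-in for the billions of documents a real model trains on, and real generative tools use neural networks over tokens rather than raw counts, but the core move is the same.

```python
from collections import Counter

# Made-up stand-in corpus; real models train on billions of documents.
corpus = [
    "the cat jumped over the hat",
    "the cat jumped over the hat",
    "the cat jumped over the fence",
    "the cat jumped over the hat",
]

prompt = "the cat jumped over the"

# Count which word follows the prompt in every example we've seen.
next_words = Counter(
    sentence.split()[len(prompt.split())]
    for sentence in corpus
    if sentence.startswith(prompt + " ")
)

# Turn the counts into probabilities for the next word.
total = sum(next_words.values())
for word, count in next_words.most_common():
    print(f"P({word!r}) = {count / total:.2f}")
# P('hat') = 0.75
# P('fence') = 0.25
```

The tool never "knows" what a cat or a hat is; it only knows that "hat" followed that phrase most often in the past.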
Predictive AI tries to make decisions about something. That can be whether an email gets classified as spam or not spam, whether you get approved or denied for a loan, or how many people might buy a product next month.
The biggest thing to remember with predictive AI is that these tools are wannabe fortune tellers. They don't output facts, just predictions. Unfortunately, when it comes to how good they are at predicting the future, they're rarely accurate. It's hard for anyone to predict the future, but AI tools are especially bad at it because they don't have context about the world (more on that soon).
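A minimal sketch of that idea, using scikit-learn and invented numbers (the features, emails, and labels are all hypothetical):

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical past emails: features = [link count, ALL-CAPS word count],
# labels = past human judgments (1 = spam, 0 = not spam).
X = [[8, 5], [7, 3], [6, 4], [0, 0], [1, 1], [0, 2]]
y = [1, 1, 1, 0, 0, 0]

model = LogisticRegression().fit(X, y)

# For a new email, the model outputs a probability, not a fact:
# "based on the patterns I saw, this looks roughly N% spam-like."
new_email = [[4, 2]]
print(model.predict_proba(new_email)[0][1])
```

Whether that guess is any good depends entirely on whether the past examples resemble the present, which is exactly where these wannabe fortune tellers fall down.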
How is AI developed?
This part is long, but I want to describe each step of the process.
1️⃣ Companies scrape billions of texts, images, and videos from the internet, buy questionable datasets from data brokers, collect user information through their products, and utilize government and academic datasets. Much of this happens in a legal gray area, with content creators and individuals rarely giving explicit consent for their data to be used this way.
2️⃣ Before this data can be used, it needs to be cleaned, organized, and labeled. This is the hidden human labor behind AI. Armies of underpaid workers, often in developing countries, spend countless hours labeling images, moderating content, and cleaning up datasets. They're the ones who see the worst of the internet, suffering psychological trauma while making sure AI training data is "clean." These workers are the invisible backbone of AI development, yet they're rarely mentioned in press releases or awards for AI breakthroughs.
3️⃣ Then comes the actual model design, the blueprint for how decisions are made. Here's a dirty secret of the industry: most companies aren't really building the solutions that fit them best. They tend to copy what worked for bigger companies like Google, often without fully understanding why certain choices were made. A lot of the time they choose black-box models that are hard to understand, even by their own engineers.
4️⃣ The training process itself is where things get worse for the environment. Depending on how large models are and the type of data they're trained on, training can require massive computing clusters running for weeks or months, consuming vast amounts of electricity. Training a single LLM can emit as much carbon as dozens of cars in a year (see the back-of-envelope sketch after this list).
5️⃣ After initial training comes fine-tuning, where models are adjusted for specific tasks and, in some cases, safety constraints are added. Fine-tuning often means trying to make a tool that works in one industry work in another. Sometimes companies try to remove harmful biases and test how a model solves various problems. Without proper methods for making AI less biased, this process frequently introduces new biases while trying to fix old ones.
6️⃣ Once a model is fine-tuned, its developers move to deploy it, meaning actually embed it in their products. This is how you get the little star at the bottom of your screen begging you to generate text with AI. When this happens, it's common for companies to find that their model doesn't work as well once it's deployed. Sometimes it's the model itself, but it's usually that people don't use tools the way developers think they will. Or they find out they were paying attention to the wrong thing. Then they go back to fine-tuning, deploy again, see if it works any better, and keep iterating through this process.
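About that carbon claim in step 4️⃣, here's the promised back-of-envelope sketch. Every input is an assumption picked for illustration (cluster size, power draw, training time, grid intensity), not a measurement of any real model; the per-car figure is in the ballpark of the EPA's estimate for a typical passenger vehicle.

```python
# All inputs are illustrative assumptions, not measurements of any real model.
gpus = 1000               # assumed training cluster size
watts_per_gpu = 400       # assumed average power draw per GPU
training_days = 30        # assumed training duration

# Total electricity consumed by the training run, in kilowatt-hours.
energy_kwh = gpus * watts_per_gpu * training_days * 24 / 1000
co2_kg = energy_kwh * 0.4        # assumed grid intensity: 0.4 kg CO2 per kWh

car_year_kg = 4600               # rough CO2 of one passenger car over a year
print(f"{energy_kwh:,.0f} kWh ~= {co2_kg / car_year_kg:.0f} car-years of CO2")
# 288,000 kWh ~= 25 car-years of CO2
```

Even with these modest assumptions, one training run lands in "dozens of cars" territory, and frontier models train on far bigger clusters for far longer.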
The scale of resources required for creating AI has created a troubling power dynamic. Only the biggest tech companies can afford the infrastructure, electricity, specialized hardware, and engineering talent needed to develop competitive AI systems. This has led to a concentration of power in the hands of a few corporations that mirrors their power over everyday consumers. We don't get a say in which big companies build AI, or a way to rank them in order of worst product. Even if we did, that wouldn't have the power to materially change their conditions, investors, etc., the way their algorithms greatly impact our opportunities.
Why is AI relevant right now?
Businesses overuse AI for everything! For monitoring our internet use, surveilling customers through a store, adjusting what social media posts we see next, predicting what we'll buy, tracking our productivity at work, screening our job applications, calculating our insurance rates, and deciding if we're worthy of a loan.
Because it's being used in areas that have vast impacts on our lives, we have to address the flaws in our systems. Like many societal issues, it's not the tool itself that's the problem. In a vacuum AI could be made "fair", but we live in an unfair world. If we have unfair systems and we use AI to optimize them, we're just increasing the rate of harm.
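Here's what that looks like when you go to audit a deployed system. This is a minimal sketch with invented decisions and hypothetical groups; the demographic parity ratio it computes is a common first-pass fairness check, not a complete audit.

```python
import numpy as np

# Hypothetical loan decisions from some deployed model (1 = approved),
# split by demographic group. All numbers invented for illustration.
group_a = np.array([1, 1, 1, 0, 1, 1, 0, 1])   # 75% approved
group_b = np.array([0, 1, 0, 0, 1, 0, 0, 0])   # 25% approved

# Approval rate per group, and the ratio between them.
rate_a, rate_b = group_a.mean(), group_b.mean()
print(f"approval rates: {rate_a:.0%} vs {rate_b:.0%}")
print(f"demographic parity ratio: {rate_b / rate_a:.2f}")  # 0.33
```

A ratio that far below 1.0 is a red flag: the "optimized" system isn't just unfair, it's unfair at machine speed.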
What problems are there with AI?
The data AI tools are trained on can be really biased. Or it's not representative of the people the tools are used on. Or it's stolen. Or it's used without someone's consent. Or it contains information that can be traced back to someone. Or it was illegally scraped from the internet. And that's just the beginning of the data issues.
Behind the scenes, AI development relies heavily on human labor which is rarely acknowledged. Content moderators see traumatic material daily, data labelers make pennies per hour, and quality assurance testers work long hours to make sure systems appear to work properly. This hidden workforce is crucial but exploited, often working in poor conditions for low pay.
The lack of transparency means training processes are treated as trade secrets, data sources aren't disclosed, and limitations are hidden from users. Plus, the environmental impact is staggering, with massive energy consumption and hardware waste. Finally, ethical issues abound, from biased training data to privacy violations and consent issues.
These systems amplify existing societal biases and create new forms of discrimination. When AI makes decisions about loans, jobs, or healthcare, it often perpetuates historical prejudices while hiding behind a veneer of objectivity. Facial recognition systems work best on white faces. Language models understand standard English better than other languages and dialects. Medical AI tools underestimate how sick Black patients are.
When these systems fail (and they do fail), it's usually the most vulnerable who suffer the consequences. This isn't even a complete list of the problems, but it'll get us started for now.
the path forward
If we're going to prevent the worst outcomes, we need to start demanding:
1. Enforcing existing regulations and creating new ones specific to developing AI
2. Required compensation/attribution/transparency for data used in training
3. The right to opt out of AI systems entirely
4. Mandated transparency about when AI is being used
5. Environmental limits for large (computationally expensive) AI models
Every time you hear about an "exciting new AI development," ask yourself: Who profits from this? Who loses? What rights am I giving up? What's the real cost to society?
Remember: The people selling AI solutions are the same ones who created many of our current problems. They're not likely to solve them without intense public pressure and regulation.
Is this your first rodeo (interaction with AI criticism)?
shameless self promo 😇
My new LinkedIn Learning Course on Auditing AI tools in Python might be useful for the Data Scientist in your life who wants an intro to technically evaluating ML models.
spicy take of the week 🌶️
resources
📖 read
📺 watch
🎵 listen
get involved
Want to stay in the conversation? If you're so inclined, join me on the AI Equity & Inclusion Discord channel. Connect with people who care about how we reduce the harm caused by AI.
I'll let you in on the best-kept secret in AI. It's far more impacted by the beliefs and values of human creators than they would ever let on.
Thanks for reading!
100% written by humans