v1.02 ~ AI hurts us, but maybe it'll help us someday
Welcome back, friends! Now that you know a little about how AI is developed and the hype around it, let's talk about AI harms.
AI isn't neutral, and neither is technology as a whole. Think about how we (as humanity) have created tools over time. If we have a need and we have the resources, we'll invent a tool to aid us with the work we were already doing. That's exactly how we developed AI. Systemic oppression is a naturally occurring human phenomenon: many cultures around the world have developed caste systems based on race, family lineage, or health (think leper colonies).
AI harms are the consequences of developing tools to speed up decision-making in already broken systems.
Speed Is the New Segregation
Here's what makes AI different from old-school discrimination: SCALE and SPEED.
When a racist banker denied mortgages back in the day, they could only hurt so many people at once. But now an AI system can deny millions of people a home loan in the time it takes to make a cup of coffee. We're talking about the automation of oppression at a scale that was never possible before AI.
This is why fixing problems in AI requires sociotechnical work. This just means there's a technical aspect with the math (like training models with hard-coded constraints so they don't discriminate against Black borrowers) and a social aspect (why do mortgage companies repeat past discriminatory patterns, and what are the drivers that would make them stop?).
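To make the technical side a bit more concrete, here's a minimal sketch of the kind of check an auditor might run: comparing a lending model's approval rates across demographic groups (often called a demographic parity check). The tiny dataset and column names are invented for illustration, not taken from any real system.

```python
# Minimal sketch: compare a lending model's approval rates across groups.
# The data and column names here are hypothetical.
import pandas as pd

# One row per applicant: which group they belong to and whether the
# model approved them.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [  1,   1,   0,   1,   0,   0,   1,   0],
})

# Approval (selection) rate per group.
rates = decisions.groupby("group")["approved"].mean()
print(rates)

# Demographic parity difference: the gap between the highest and lowest
# approval rates. A gap near 0 means the groups are approved at similar rates.
print("approval-rate gap:", rates.max() - rates.min())
```

Measuring the gap is the easy part; the social side, getting a company to act once the gap is documented, is where most of the work lives.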
I want to stress that it's REALLY hard to convince companies to do the fixing. There are a few excuses you can expect: it'll cost them money without gaining anything, they don't care about the groups impacted, and they fear lawsuits and bad PR. There are plenty more reasons, but a big part of our work as AI ethicists is trying to find good reasons to motivate them.
This effort has been long and tedious with little success, because to most businesses money will always beat just "doing the right thing." This failure to get companies to do the right thing has also led to a change in tactics. Lately we've been moving towards getting governments to enforce existing laws and write new ones that will hold companies accountable. That's why you may be seeing the techies in Senate hearings and talks of new AI laws in almost every state.
We need to understand that AI harms aren't just a technology problem; they're part of the larger struggle for liberation. Marginalized people have had to fight against scientific racism, harmful bureaucracies, and systemic discrimination, and their AI-powered descendants are just a new spin on the same societal problems.
QTNA: What kinds of harm are there? Plenty.
Allocation Harms
When AI systems control who gets access to something, they often reinforce existing patterns of discrimination. Whether someone is denied a job, a mortgage, Medicaid benefits, or college admission, the decisions of these systems control access to life-changing resources. There's a famous case of Amazon's failed hiring tool penalizing resumes containing words associated with women, such as "women's chess club captain" or "women's college," in its candidate rankings. One of the reasons allocation harms are so common is that companies thought they could replicate how people make decisions to speed up business or save money.
Representational Harms
AI systems have a tendency to erase certain groups while amplifying others. Medical imaging AI that was trained primarily on light-skinned patients can miss crucial symptoms on darker skin tones, effectively ignoring these patients' medical needs. This reflects a bias in the medical field, which lacks examples of dermatological issues on patients with dark skin. The stereotyping runs deep: image generation AI consistently produces CEOs as white men in suits, nurses as women, and criminals with darker skin. Even when short-sighted fixes are applied, it's hard to get the results we want from AI tools. Representational harms are some of the hardest to fix because they are reflections of societal biases encoded into data.
Quality of Service Harms
These happen when AI systems provide a lower quality of service to certain groups, not through obvious discrimination, but through technical limitations that disproportionately impact marginalized communities. Automatic soap dispensers are another technology with this issue; notice how AI follows the pattern of other tools. Automated customer service systems struggle with accents and speech patterns common in immigrant communities. The facial recognition used to unlock smartphones works well for some users, but darker-skinned people and many East Asian people may end up relying on less convenient backup methods dozens of times a day.
These annoyances eventually create a two tiered system where some people consistently receive smooth, efficient service while others face constant friction and frustration accessing the same basic functions. The technology technically "works for everyone," but the quality gap reveals who it was really designed to serve.
Degradation Harms
These occur when AI systems explicitly degrade or insult certain groups. This is common when generative AI outputs slurs, like the early issue where chatbots prompted with just "two Muslims" would consistently complete the text with negative stereotypes about terrorism. Google Photos famously labeled two Black people as gorillas, an obviously racist term when used to refer to Black people.
Instead of fixing these issues, Big Tech companies including Apple just removed the label "gorilla" completely. It's a wild "solution" that avoids the problem entirely when they could have spent the effort fixing the labeling issue. Sorry, Black people, your issues aren't worth fixing. And sorry, wildlife photographers, you can't find the gorillas in your photos because of racism.
Psychological Harms
The daily barrage of AI microaggressions takes a heavy toll. Imagine being a person with a disability constantly having to prove to AI systems that you can perform basic tasks, or being a transgender person repeatedly misgendered by AI voice recognition. Each interaction becomes a reminder that these systems weren't built with you in mind.
The mental burden of navigating biased AI systems is exhausting. Think about Black professionals having to "codeswitch" not just for humans but for AI interview systems, or immigrants having to suppress their accents to be understood by virtual assistants. The constant need to conform to AI expectations creates a form of digital PTSD: a group of people always having to anticipate the next tool that won't work for them, or the next adjustment they'll have to make so that it does.
Economic Harms
AI-driven job displacement isn't hitting all communities equally; it overwhelmingly targets artists, writers, rural communities, low-income earners, and minority groups. When a warehouse automates its operations, it's the working-class employees who lose their jobs first. Meanwhile, AI systems are becoming gatekeepers to economic opportunity, from automated resume screeners that favor certain educational backgrounds to credit scoring algorithms that perpetuate historical lending biases.
The wealth extraction is systematic: automated financial products target vulnerable communities, from payday loans that prey on the financially stressed to personalized pricing systems that charge more in lower-income and rural neighborhoods. The impact of AI on the job market since 2022 has been stark for many in creative industries, whose work is devalued to train AI tools.
Community Harms
The destruction of cultural knowledge happens subtly when AI language models are trained primarily on mainstream sources. They may misunderstand or misrepresent cultural practices and traditions, often deeming them untrustworthy information. Local knowledge gets buried under what the algorithm determines is authoritative information.
Community trust erodes when AI systems become mediators of social interaction. Imagine a neighborhood where AI surveillance systems monitor gathering spaces, where predictive policing algorithms discourage community events, and where automated property valuation tools drive gentrification by labeling culturally rich areas as "undervalued." The technology that promises to connect us often ends up disrupting the social fabric that holds communities together.
QTNA: What makes these harms happen?
The Three Headed Monster
There are three main drivers of AI harms, and while they mainly trace back to capitalism and colonialism, various aspects of AI make dealing with its harms different from dealing with harms in other systems.
1. Volume and Capitalism: Businesses like banks say they need AI because there are too many applications for their employees to process. But are they interested in critically rethinking their human processes, given the discriminatory outcomes of their human decision makers? They probably could hire more people, but often don't want to spend money when they can try to automate the process. This makes a lot of assumptions: that all people make decisions the same way, that past decisions should be replicated, and that they can replace people with AI tools.
2. Historical Poison: They're feeding AI data from decades of racist lending practices. When your training data comes from redlining and discrimination, guess what you get? More sophisticated redlining and discrimination.
3. Power Dynamics: Who's building these systems? Who profits from them? And most importantly, who gets hurt? Why do some companies get to collect this data and decide what it's worth? Who decides who makes money from their data? There are so many ways in which we are merely subjects of these large tech companies and the AI experiments they put in front of us.
When people think about AI harms, they typically just think an AI tool is racist or sexist. While it's true that these tools frequently produce racist and sexist outputs, it's not just about that. Bald men often get hair superimposed on them when using face filters. This may be a low-risk example, but there's still an impact on our psyche when an entire group of people is exposed to tools that make them think their features (by choice or otherwise) are wrong.
Baldness isn't the only non-protected attribute that AI tools discriminate against.
Geographic Location: Algorithms can use zip codes as features, which can serve as a proxy for race or socioeconomic status. This can lead to discrimination in areas like credit scoring, insurance, and rideshare pricing (a small sketch of this proxy effect follows this list).
Educational Institution: An automated lending system was found to charge higher prices for refinancing student loans to applicants who attended Historically Black Colleges and Universities (HBCUs) compared to those who did not, even when controlling for other credit-related factors.
Device Type: Algorithms can discriminate based on the type of device a person uses, which may correlate with socioeconomic status or age. Several food delivery apps have been criticized for charging iPhone customers more, as their phones are typically more expensive than Android phones.
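To see why simply dropping the protected attribute doesn't solve this, here's a small synthetic sketch (all numbers invented) of a model that is never shown group membership but still approves one group far more often, because zip code carries the same signal.

```python
# Synthetic sketch of proxy discrimination: the model never sees the
# protected attribute, but zip code is correlated with it.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n)  # group membership, hidden from the model
# Zip code lines up with group 90% of the time, as neighborhoods often
# do with race and income in segregated cities.
zip_code = np.where(rng.random(n) < 0.9, group, 1 - group)
income = rng.normal(50 + 10 * group, 10, n)

# Historical approvals that were already biased toward group 1.
approved = (income + 15 * group + rng.normal(0, 5, n)) > 60

# Train only on zip code and income: no protected attribute in sight.
X = pd.DataFrame({"zip_code": zip_code, "income": income})
model = LogisticRegression().fit(X, approved)
predicted = model.predict(X)

# The disparity survives, because zip code smuggles the group signal in.
print(pd.Series(predicted).groupby(group).mean())
```

The same logic applies to school attended or device type; any feature correlated with a protected attribute can quietly stand in for it.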
A recent study claims that algorithms may be biased against white-sounding names in some cases. Please note I just learned about this study and haven't fully read into it or vetted its authority. While I initially raised an eyebrow, since multiple studies have shown that LLMs discriminate against minority and foreign-sounding names, this may be the ammo we need to get holistic approaches to AI accepted.
Unfortunately, most people in dominant groups never care about fixing these issues until it impacts them directly. I've stated in other writing around the internet that my singular career goal is to reduce the algorithmic harm marginalized people suffer, so I care about the results more than the methods we take to get there.
I’m not familiar with The Register, so for those that are, please let me know if they’re some neo-Nazi, right-wing propaganda site, but from my skim it didn’t give off that vibe.
Regardless, we should all care about algorithmic harms and want companies to do better.
The Resistance Playbook
As AI gets adopted into more and more industries, these harms aren't going away. They're going to get worse unless we fundamentally change who has power. The real solutions aren't about tweaking algorithms or adding diversity training; they're about revolutionary change in who controls these tools and who they serve.
Okay, but what can we do leading up to that?
Community Literacy
AI literacy programs: Minority communities are underserved in various ways, but we need to increasingly adopt AI literacy programs that are realistic about the outcomes of these tools and help young people become vocal critics of them.
Alternative technology (social media, web hosting, payment processing): Especially with the re-election of Donald Trump, marginalized communities will need to invest in community and open-source initiatives that can help keep our conversations private. Many institutions will bend the knee to an authoritarian and may not be so private or safe for much longer.
Collective Action: Organize grassroots movements to challenge AI implementations that threaten privacy, perpetuate biases, or undermine local autonomy. Community groups can demand transparency and accountability from organizations deploying AI systems. There’s an especially good opportunity to do this at the city, county, and state level. These local governments are paying attention to what their residents want and we have more sway in local policy than we do nationally.
Advocating for Regulation: Push for legislative measures that protect individual rights, ensure algorithmic transparency, and hold AI developers accountable for the impacts of their systems. This is often what many in academia and industry do, but it can also mean speaking up when your school board considers adopting AI tools, or writing your representatives about enforcing the laws we already have.
Find Your Place: Use whatever expertise you have, whether that's customer service, law, or marine biology, to leave feedback on the ways AI impacts your industry or life. You can submit comments on the government's use of AI on regulations.gov, or simply weigh in when you see some AI-riddled tool trying to do one of the many things we already do well without AI.
For Companies
There are plenty of ways companies building and using AI can make the ecosystem better than what it is now.
Auditing AI: Develop and implement robust ethical guidelines for AI development and deployment. This includes regular audits to identify and mitigate biases in AI algorithms (a rough sketch of such an audit follows this list).
Phased Adoption: Instead of rushing into full scale AI implementation, companies can adopt a gradual approach. Start with pilot projects to assess the impact and address concerns before wider deployment.
Employee Involvement: Create open forums where employees can voice concerns about AI adoption. This participatory approach can help address fears and identify problems faster.
Prioritize Human Skills: Invest in upskilling programs that emphasize uniquely human capabilities such as creativity, emotional intelligence, and complex problem solving. This approach positions AI as a complement to human work rather than a replacement.
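For the auditing point above, here's a rough sketch of what a recurring bias audit might compute from a model's decision logs: per-group selection rates and false negative rates. The column names and toy log are hypothetical; a real audit would run against production decisions and later-observed outcomes.

```python
# Rough sketch of a recurring bias audit over a model's decision logs.
# Column names are hypothetical placeholders.
import pandas as pd

def audit_by_group(df: pd.DataFrame, group_col: str,
                   decision_col: str, outcome_col: str) -> pd.DataFrame:
    """Summarize per-group selection rates and false negative rates."""
    def summarize(g: pd.DataFrame) -> pd.Series:
        qualified = g[outcome_col] == 1
        return pd.Series({
            "n": len(g),
            "selection_rate": g[decision_col].mean(),
            # Of the people who turned out to qualify, how many were denied?
            "false_negative_rate": ((g[decision_col] == 0) & qualified).sum()
                                   / max(qualified.sum(), 1),
        })
    return df.groupby(group_col).apply(summarize)

# Toy decision log: the model's decision plus the outcome observed later.
log = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "decision": [  1,   0,   1,   0,   0,   1],
    "outcome":  [  1,   1,   0,   1,   1,   1],
})
print(audit_by_group(log, "group", "decision", "outcome"))
```

Running something like this on a schedule, and publishing the numbers internally, turns "we audit for bias" from a slogan into something a team can actually be held to.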
Thanks so much for reading!
100% written by humans