
[LONG READ] ⚖️ Can we truly benefit from AI?

For over a decade, we’ve been inundated with AI snake oil, false claims, and exaggerations about how AI “benefits” us all. However, when we investigate these major claims, nearly none live up to expectations. One reason for this is how we’ve developed AI tools: the process has mimicked how we adopt other “innovations” under capitalism, where there is pressure to bring a tool to market first, exaggerate outcomes, and drum up a sense of urgency and FOMO. In the US, most AI tools are developed at startups, which have created the current ecosystem of AI deployment. AI research hotbeds at top schools like Stanford, Carnegie Mellon, and MIT, building AI adjacent to Silicon Valley’s VC environment, have shaped the processes, priorities, and incentive structures that AI developers face.

It’s a common understanding in industry that startups overpromise about their beneficial impact but ultimately underdeliver on promised features and functionality. Despite this, various organizations utilize and deploy systems far before they’re tested, audited, or measured against their claims. Unfortunately, this means we have had systems making predictions about the public that work far below expectations and have devastating impacts on individuals and communities. Even the FTC has attempted to address how frequently AI systems underperform relative to their claims. There is a major push to highlight the benefits of AI; however, I argue that our current advancements aren’t truly innovative, as they have yet to prove genuinely beneficial in industries like medicine, autonomous vehicles, and hiring as promised.

The ways in which AI fails us:

Reliability: These tools have been sold to us under the guise that they work. However, when we look into AI tools, they rarely work as intended, and the organizations that build them face no consequences from the FTC or otherwise. These organizations can deploy AI that fails in the real world, profit from it, harm already marginalized groups, and face no consequences from the market.

Deployment usage: The techno-solutionism we’ve celebrated has brought on the adoption of tools that don’t work, in industries where they aren’t even needed.

This applies particularly in policing and surveillance. The use of AI tools in policing and surveillance is a prime example of using an ostensibly unbiased tool in a way that leads to disproportionate outcomes. If we were to audit a system like ShotSpotter, we may not find any technical biases; however, these tools are used to further surveil and marginalize already heavily surveilled populations that are also the least represented in AI engineering communities.

Trustworthy AI is not merely about AI performance; it also requires that the companies building AI, and the federal agencies regulating them, do not allow AI to be used to cause further harm to communities.

Non-scientific work: Various aspects of humanity aren’t measurable, and thus not predictable. Often, organizations build and sell commercial tools that aren’t rooted in reality. Take, for example, emotion detection. Various tools claim to “detect emotion” or even gender; however, both are unobservable human phenomena. These tools identify observable, surface-level proxies such as facial expressions, hair, and gender presentation, which are poor measures for the unobservable phenomena they claim to predict.

Consider all the various cultures in the world. Human facial expressions and norms aren’t the same everywhere. For example, in many Russian communities, smiling is seen as foolish, something nobody wants to be seen doing, so it’s rare you’d walk around St. Petersburg and find dozens of big smiles.

Not only are facial expressions a bad measure of emotion, but we also often hide our emotions, smile or laugh when we’re uncomfortable, and appear angry when internally our emotions are neutral. Technically, it’s unreasonable to imagine a computer can detect or measure emotion; instead, we rely on poor proxies. Additionally, people across cultures don’t smile in the same way, and systems trained on people in the US tend to impose an American smile on people from other cultures.

Sustainability: The health of the planet is already dire. The amount of compute required to train large AI systems is massive and has a large carbon footprint. We shouldn’t speed up climate change for the small social wins of productivity and time savings. While AI can help automate repetitive tasks or identify fraud, is that worth speeding up the decline of our vital ecosystems?

Fritzchens Fritz / Better Images of AI / GPU shot etched 2 / CC-BY 4.0

Most of this compute goes to the specialized GPUs used to train deep neural networks (networks with two or more hidden layers), which are fundamental to deep learning, the family of techniques that generative AI, including tools like ChatGPT, belongs to. Generative AI simply refers to AI tools that create new text or images rather than predicting something like someone’s creditworthiness.

Often, so much computing power is needed because of the type of data used to train deep learning and generative models. These models can be trained on millions of individual inputs, which are typically broken down into even smaller pieces during the training process: an image is broken down into individual pixels and a document into individual words before being used to train AI tools.
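To make the idea of “breaking inputs into smaller pieces” concrete, here is a minimal sketch in Python (my own illustration, not taken from any specific training pipeline): a tiny grayscale “image” flattened into individual pixel values, and a sentence split into individual words.

    # Hypothetical illustration: how training data gets decomposed into small pieces.
    # A tiny 2x2 grayscale "image" represented as pixel intensity values (0-255).
    image = [
        [0, 255],
        [128, 64],
    ]
    # Flatten the rows into a single list of pixel values: [0, 255, 128, 64]
    pixels = [value for row in image for value in row]

    # A short "document" split into individual word tokens.
    document = "AI models train on individual words"
    tokens = document.lower().split()  # ['ai', 'models', 'train', 'on', 'individual', 'words']

    print(pixels)
    print(tokens)

Real pipelines use far more elaborate representations, but the basic move is the same: millions of inputs become billions of tiny numeric pieces, and processing all of them is where the compute goes.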

Old Perspectives: One of the most harmful aspects of AI lies in how we intend to develop it. Many countries view advancing AI the way they looked at the space race, but AI is not a race. If we treat it as such, we will plow past civil rights, privacy, and human morals. The Us vs Them perspective is especially inappropriate as a framework for American AI advancement because an AI race would be a race to see who can allow AI development to grow and gain adoption as quickly as possible. As American companies’ removal of their ethics teams during the recent popularity of AI has shown, organizations cannot regulate themselves, nor do the needs of people outweigh the market and profit benefits of being first to market.

AI Snake Oil

AI will improve Healthcare
In reality, the way AI is used in healthcare has led to disastrous outcomes. An investigation of the Boston healthcare system showed that Black patients with liver failure were recommended less often for transplants despite having similar or worse liver vitals. In another example, poorer and minority patients are deprioritized in emergency rooms. We’ve also seen doctors over-rely on AI tools to attempt to diagnose patients. We've had the promise of AI improving healthcare for years, without proof of tangible improvements in one of the highest-stakes applications of AI.

Self-driving cars will prove safer than human drivers
While we can all agree that traffic deaths are an unnecessary result of a sparse, car-dominated society, self-driving cars have not proven to solve this problem. Despite the amount of research and funding these mostly private car companies have poured in, we have not created safe self-driving cars. By design, Teslas are distraction machines: they have large screens and even games that were only recently made unusable while driving, they promote a lax driving culture, and they encourage driver reliance on automated braking, among other features.

There have been various accidents that show faults in self-driving capabilities, especially after Teslas were involved in 19 deaths where Autopilot was in use, out of 373 deaths in total. Autopilot has overpromised and led various drivers to sleep, eat, apply makeup, and engage in other distracted activities because they believe it is safe to do so.

Self-driving cars are not being properly regulated, and the promise of fixing human problems is always underscored by the least compliant and sloppiest methods of deploying AI tools. Yes, the so-called geniuses at Tesla and other self-driving car companies suck at performing meaningful performance evaluations, testing, and mitigating poor driver behavior given how high-stakes automated driving is. This isn't about individual engineers, but about how engineering practices, the lack of legal consequences, and market conditions encourage applying standard software development practices to the most harmful applications of automated tools, such as driving.

AI will improve hiring and decrease human bias
There have been various issues raised against companies using automated tools in hiring. This has become such a large problem that the city of New York has passed Local Law 144, which requires all companies building automated tools for HR to be audited by third parties. HireVue uses biased facial recognition tools to "help" companies assess candidates. Hiring platforms like Workday are facing lawsuits for racial, disability, and age bias.

Biased AI has been proven to cause various types of harm, including allocation harms, which extend or withhold resources. Withholding employment can be devastating to individuals and can further perpetuate existing racial and gender inequalities. The catalyst for AI tools in HR is the idea that AI will be less biased than humans in the hiring process, but we've yet to see tangible proof that hiring is less biased when AI tools are used.

With this being said, there are various AI uses that are less harmful and that benefit businesses, though perhaps not society as a whole. AI has made major advancements in manufacturing, robotics, and cybersecurity threat detection. There have also been helpful examples of language models helping job seekers tailor their resumes and save time during the job search.

Yasmin Dwiputri & Data Hazards Project / Better Images of AI / AI across industries / CC-BY 4.0

On ChatGPT

LLMs can provide productivity benefits to users, such as help with coding bugs, improving resumes, and writing marketing copy. While LLMs can safely be used in a few cases, this isn't a good argument that they should be used generally, given their widespread biases. They've been used by judges, by doctors, and as therapists, but these domains are too high-risk. Humans are particularly vulnerable to over-relying on computational tools, which complicates this problem, and this has been the biggest critique of OpenAI's rollout of ChatGPT. It's a fundamental ethical violation to create an easy-to-use tool like ChatGPT without building mitigations for misuse into the design process.

OpenAI has also been completely secretive about exactly what data was used to train ChatGPT. This goes against the AI transparency we so desperately need. While the problems with LLMs are vast, they are good at fooling people into believing a string of text is human-created; in reality, they are optimized by developers to output the most probable string of words, not to exhibit sentience or creativity. LLMs tend to be inaccurate and invent “facts” such as fake paper citations. This makes them far too risky to use as tools in various industries.

Additionally, prompt engineering shouldn't exist. If a generative AI system (developed using natural language processing techniques) needs to be prompted in very specific ways to get good results, what does that say about how performant the system is? Why doesn't it produce a good output when humans prompt it in the ways that are most natural to us?

There’s a lot of hype around how to build AI that benefits us all, but I want to buck this trend to avoid going down the path our society tends to take time and time again. We cannot make AI benefit us all. We must recognize that AI has already caused irreparable harm, including the loss of home, livelihood, freedom, and life. Thus the goal of responsible/ethical AI is not to benefit everybody; it should be to remediate harms already caused and mitigate future ones.

We deserve tools that work for us, and when I say "us", I mean the people most impacted by algorithmic bias and those alone. I have no interest in protecting the wealthy from less favorable AI outcomes. They already benefit from systemic privileges provided to them by their wealth.

I have no interest in protecting people with fair skin from the non-existent or minor harms they may potentially face from facial recognition when darker-skinned individuals, and dark-skinned women in particular, are locked out of crucial systems used to file taxes, board planes, and more.

I have no interest in treating privileged communities the same way we treat marginalized communities. We are not building AI fairness from a neutral standpoint where certain groups haven't already been deprioritized and harmed. Moving forward, we must focus on equity and allocating more resources to ensure AI tools are helpful for the most marginalized people first.

A fundamental shift
If we do not pivot from “AI that we all benefit from” rhetoric towards AI that remediates existing harms, prioritizes those who are most likely to be harmed, and works specifically to benefit these groups, AI will continue to benefit only those who are rich, white, powerful, and have access to digital technologies like high-speed internet and smartphones.

The goal should not be to make AI work for all of us. If that is the goal, we will fall back to doing what is “best for most of us”, yet most of us aren’t marginalized. Most of us aren’t misidentified by facial recognition systems. Most of us aren’t stereotyped by language models. The utilitarian ethical stance of creating the most benefit for the majority of people is not an appropriate ethical model for AI, considering the vast and deadly harms AI has already exhibited.

Image by Alan Warburton / © BBC / Better Images of AI / Quantified Human / CC-BY 4.0

Few engineers have skin in the game
As a developer of AI models, I am in a very small minority of engineers who have seen tangible impacts of algorithmic bias in my own life. I have been denied housing with no redress, and I face quality-of-service harms from facial recognition tools I’m unable to opt out of, like id.me. Techno-racism doesn't impact the majority of people making these technologies, and that is a massive issue. For example, Black people hold merely ~5% of roles in tech and just ~3% of roles in data and machine learning.

While the tech industry, including most AI organizations, has worked to increase its diversity, it really hasn’t made good progress over the last six years. Black women, who face the harshest harms of facial recognition, make up less than 1% of the workforce in data science and machine learning.

Once we recognize that most people building AI will never face these harms directly, we must move accordingly. These engineers, and more broadly the companies that benefit from selling AI, have unique incentives that deprioritize fixing algorithmic harms. Since the launch of ChatGPT we’ve seen Microsoft, Twitch, and Google cut back on their ethics organizations. The scalability of AI is a blessing and a curse. We have allowed the tech industry to build a tool that scales quickly, with nearly no guardrails, regulations, or accountability. AI has the ability to make thousands of bad decisions per second, and no regulatory unit that subpoenas and investigates these data and algorithms (think FDA, CFPB, or EPA) has been put forth.

Final Thoughts
At the end of the day, creating trustworthy AI is a bipartisan initiative. Most Americans believe their data should be theirs, and that algorithms should be monitored and used safely. We have the proposed Blueprint for an AI Bill of Rights, which most people can agree on.

For whom?
A common detail we must address in this goal of developing AI tools responsibly is "for whom". Who do we make AI beneficial for? Who do we prioritize, and why? This is a toughie because businesses love to keep things neutral and answer "everybody", but that's just not possible.

We cannot put people first without describing who those people are. Those on the fringe of the digital divide have different needs than those who are not historically marginalized. We need to put people with disabilities first. We need to put surveilled communities first. We need to put Black and brown communities first.

This is our fundamental divide, not whether you're for the hype or an AI doomer. If you do not acknowledge the vast amounts of harm already caused by AI, you may be confused about why we should prioritize these groups. If you do not recognize that these groups need enhanced protections, you may not grasp how AI, as we’ve developed it thus far, DOES NOT benefit us all equally.

There is no AI race, and the one we hear about is self-imposed
The idea of an AI race is purely hype. What will happen if the US does not develop AI as quickly as other countries? Absolutely nothing. Except companies and governments might feel insecure that other nations are “doing better than us”. If we race towards AI, it will be impossible to truly mitigate the harm. What are we racing towards? It’s not uplifting the poorest communities with unstable access to the internet. It’s not financially compensating those harmed by AI. The AI “race” harkens back to one of the worst tech mottos “Move fast and break things”. Rather, any effort towards responsible, ethical, or trustworthy AI should move slowly and fix things.

AI is not a force for Good
The way we have used AI thus far has caused far greater harm than benefit to society when we consider that those harmed have already been marginalized. We have hundreds of AI incidents of varying severity that have not been remediated. In its current state, the promise of AI has not come to pass despite over a decade of heavy research and investment. To be a force for good, we must address and remediate the harm already caused. “AI for good” does not exist yet, and many methods that approach it insufficiently consider how to leverage AI while prioritizing the most marginalized. Instead, many of these “for good” efforts look to address social issues through a purely technical lens, completely ignoring the communities they impact.

AI is not sentient
We do not have AI sentience right now, and even if we did, it would still rank relatively low on our list of responsible AI priorities. I believe achieving AI sentience is improbable but not impossible; however, many people aim to consider the potential rights sentient AI bots might have rather than addressing the very real and large harms that have already happened. (If we do achieve AI sentience, I believe all sentient beings should have rights.) Moving forward, as an industry, we must remediate the collateral damage we’ve caused on the road to AGI.

Ulterior motives abound
Companies and those with financial motives know the language of “marginalized communities”, “vulnerable groups”, and “AI harms”. This doesn’t mean they're motivated to help these groups. The ads you're seeing for front-end UIs on top of products like ChatGPT, promising Instagram optimization or undetectable AI-generated text, are not developed by ML engineers who wanted to work on something new. They're grifters hoping to make a quick buck while merely serving you something from OpenAI's API. The people telling you that you need to learn AI or you'll be "left behind" are going to try to charge you to learn AI. We should be skeptical and avoid the overly optimistic hype fueling interest in AI right now. Those with a vested interest in the success and commercialization of AI have no incentive to be transparent or to inform you about the limitations of these tools.

Like all human creations, AI is something we are in a position to steer. AI is not some natural force like climate change. AI is not necessary for our banks to run or for lights to turn on. AI is not something that will mature and happen against our will, and it will take a concerted effort from governments, companies, and the general public to ensure we do this in a way that redresses existing harms and deploys AI with true transparency.

My View of AI

After reading this, your first thought may be to consider me an AI doomer, but that's not the right characterization. I'm not concerned with theoretical arguments about sentience and AGI; rather, I'm invested in remediating the harms AI has already caused and mitigating future harms: in other words, AI justice.

My goal in working in the sub-field of responsible AI may not mirror everyone else's in the field, but for me, we will not achieve any meaningful trust in AI until we have addressed the injustices AI systems have perpetuated, rather than pursuing further innovation using old perspectives and techniques.

bye for now!
