Ethics and AI Hardly Mix

So what comes next?

Hey friends, I’m back!


My apologies for the unplanned delay. The past few months have been pretty intense, to be honest. 🥵 I’ve started a new role, I’m advising on AI policy, and I just released a new LinkedIn Learning course. I’ve been challenged in various new ways recently, and these endeavors have forced me to grow and expand how I think about and approach problems.

Despite all of the news in the past few months about AI incidents, lawsuits, and regulations, there’s a thought I just can’t shake: maybe we’ve been going at this the wrong way all along.

Ethical AI, in the most basic sense, has still not been achieved despite a push towards it since around 2014.

I’ve thought long and hard about the reasons why and have concluded that because truly ethical actions are directly opposed to the reward systems of capitalism, it’s merely a pipe dream.

Without government regulation, a business that can dubiously collect user data and turn it into profit will almost never forgo that profit to preserve user rights. Add that the organizations that led the data-as-a-product revolution already had near monopolies in their industries. So their hypothetical choice is between refusing an additional income stream and, at worst, losing a few users. From the perspective of a business, there’s seemingly no choice.

What we’ve refused to openly acknowledge is how businesses work. They don’t have public safety in mind, and if you asked a corporation, they’d say public safety is not their job. That’s why they can’t be trusted to protect us, much less police themselves.

  • Ethical AI requires companies to make decisions about who they prioritize and they can’t.

  • It requires them to choose who to prioritize vs. “making AI work for us all” and they don’t.

  • It requires transparency (even when they fuck up) and over and over again organizations prove they are unwilling to do this.

Companies can’t stomach the consequences of putting Ethical AI into practice.

They worry: How would markets react? How would stakeholders and investors react? How much money can they expect to hemorrhage?

It’s far too high a societal risk for them to state their diversity/Ethical AI efforts prioritize marginalized people and they know it. So instead, we get the “AI can have so many benefits to all of humanity” bullshit that those of us entrenched in this work have witnessed for years. We’ve witnessed this come to life in the form of Principles they don’t follow, Risk frameworks they don’t use, and pithy prepared statements they use to satiate our hunger for algorithmic justice.

One of the most frustrating parts of this is that businesses DO have opinions and will happily take stances that don’t hurt them as much financially. Anyone remember Hobby Lobby suing to be recognized as a business with an opinion and a religion, with the rights to match? They’re proud of it!

From their website: “The Hobby Lobby case has always been about one thing only; the Greens’ right to live out their faith in their business without government unduly intruding on their ability to do so.”

So why the refusal to take a stance on who AI needs to protect?

They. Lose. Money.

Ethical AI is a pipe dream because companies are incapable of:

  • Choosing ethical frameworks to operate from

    • Mostly because they don’t know what ethical stances exist, much less how to choose

  • Choosing which groups to prioritize, and when

  • Choosing which fairness definitions and metrics are relevant to their tools and societal context (one such metric is sketched right after this list)

  • Maintaining governance documentation such as datasheets or model cards, much less auditing their systems properly

  • Standing ten toes down on (upholding) a decision they make about ethical development, whether for legal or social reasons

  • Remediating the harms imposed on data subjects without being forced to by regulation

  • Providing clear details to users and the public when their algorithms make poor decisions
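
To make the fairness-metrics point concrete: here’s a minimal sketch, entirely my own illustration with made-up numbers, of one common definition, demographic parity difference, i.e. the gap in positive-decision rates between groups. Picking this definition over equalized odds or anything else is exactly the kind of stance companies refuse to take.

```python
# Illustrative only: one fairness definition among many, computed on made-up data.
# Demographic parity difference = the gap in positive-decision rates between groups.
def demographic_parity_difference(decisions, groups):
    """decisions: 0/1 outcomes; groups: group labels, same length as decisions."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())

# Hypothetical loan decisions: group "A" approved 3 of 4, group "B" approved 1 of 4.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(decisions, groups))  # 0.5 — a big gap by this definition
```

Choosing the metric is the easy part; deciding which one matters for a given tool and community, and standing by that decision, is where companies bail.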

Ultimately, Big Tech companies will shell out hundreds of thousands of dollars to work with pale, male contracting firms rather than listen to the marginalized people in their orgs or the data subjects who are exposed to their tools. These firms will implement just enough documentation or procedure to produce the “fairest” unethical tools around, but at least they’ll be technically sound.

So, fuck it, Ethical AI is dead.

To be fair, it was never really alive. Most of the industry did a slight pivot to Responsible AI, but even then, responsibility still doesn’t translate into consequences or recourse.

Today, companies are still not responsible, in the sense of being legally liable for the outcomes of their AI systems. Proposed legislation aims to make that a reality, but we aren’t there yet.

So where are we? We’re in this weird gray area where AI adoption has outpaced governments’ wildest dreams and they’re scrambling to write new laws with extremely limited knowledge of how these tools work. Engineers and researchers have been warning about the harms of AI systems and pushing for regulation for years, but the mainstream has ignored them.

Post-Ethics Era

So with AI Ethics as a failed endeavor, what do we turn to? If regulation can manage to be speedy and establish powerful enforcement, we might see Compliant AI or Responsible AI have better outcomes.

If not, I’d like to offer Realistic AI as a way to view our current state of AI development. With the wave of Generative AI attracting new eyes, it’s easy to forget that AI is not in its infancy.

In its decades-long history, it’s been extremely damaging to human life, mental health, and physical safety. It has been used on powerless communities and leveraged to legally harm a wide array of people. Given what we know, AI should, moving forward, be designed and developed in ways that acknowledge these past failures rather than ignore them.

To be clear, it’s not that I expect AI tools to always work perfectly; as someone who’s spent seven years training and evaluating AI systems, I know that’s unrealistic, even impossible. But what is possible is deploying these tools with mistakes in mind. That allows us to build guardrails, kill switches, and mitigation plans for when things go wrong.
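
As a rough illustration of what I mean, and this is my own sketch, not any vendor’s actual product, deploying with mistakes in mind can be as simple as wrapping a model so that shaky predictions go to a human, every decision is logged, and an operator can halt automation entirely:

```python
# A rough sketch, entirely my own, of "deploying with mistakes in mind":
# wrap a model behind a guardrail that defers to humans on low confidence,
# logs every decision, and can be halted outright by an operator (kill switch).
from dataclasses import dataclass, field
from typing import Any, Callable, List

@dataclass
class GuardedModel:
    predict: Callable[[Any], tuple]   # hypothetical model: returns (decision, confidence)
    confidence_floor: float = 0.90    # below this, a human decides instead of the machine
    kill_switch: bool = False         # operator-controlled: stop automated decisions entirely
    audit_log: List[dict] = field(default_factory=list)

    def decide(self, case: Any) -> str:
        if self.kill_switch:
            return self._defer(case, reason="kill switch engaged")
        decision, confidence = self.predict(case)
        if confidence < self.confidence_floor:
            return self._defer(case, reason=f"low confidence ({confidence:.2f})")
        self.audit_log.append({"case": case, "decision": decision, "automated": True})
        return decision

    def _defer(self, case: Any, reason: str) -> str:
        # Mitigation path: route to a human reviewer and record why.
        self.audit_log.append({"case": case, "decision": "human review", "reason": reason})
        return "human review"

# Usage: a shaky prediction gets deferred; flipping the kill switch halts automation.
model = GuardedModel(predict=lambda case: ("deny", 0.72))
print(model.decide({"applicant": "hypothetical"}))  # -> human review (confidence too low)
model.kill_switch = True
print(model.decide({"applicant": "hypothetical"}))  # -> human review (automation halted)
```

None of this is exotic. It’s the bare minimum of planning for failure, and it’s still rare.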

Right now, the issue is that AI developers rarely do any of that, which is why we see an ongoing stream of AI incidents and harms that could have been avoided (in some cases) if we gave a fuck about people to begin with.

Realistic AI looks like:

  • Government enforcement of AI compliance

    • Meaningful fines, starting in the millions

  • Financial recourse for those harmed by automated tools as a global standard

  • Treating misleading or exaggerated AI marketing as fraud under the law, and prosecuting it as such

  • Restructuring AI education to name the harms and dangers first (like some practices in aerospace engineering)

  • Open data requirements for tools once they’ve reached critical adoption (Facebook, Twitter, etc.)

  • Elections that give citizens the power to vote on where AI can be used on them

  • Development lifecycles that require input from data subjects and public disclosures at each stage

I know this entire email is me on my soapbox, but if we don’t start doing things correctly now, we may never be able to eject from the roller coaster we’re on, headed straight for dystopia.

We made up just about every system in our society. We made up race, we made up gender, we made up highways and time zones. And yet many massive inventions of the human mind suck.

Our (western) business systems, markets, and social structures suck because they promote creating and profiting from inequality. Time zones suck because they create discord, confusion, and millions of missed meetings.

If we got the chance to change the course of suckage for one thing, this should be it.

We should imagine a wildly different AI future that isn’t us merely sweeping up after every mistake, but is rebuilt from the ground up with highly customizable, narrow-use tools.

Despite Big Tech and their deep pockets, we’re at a crucial point in history where this is actually possible. Creating an AI ecosystem that doesn’t steal our data, manipulate us for profit, or falsely accuse us of crimes is an overwhelmingly bipartisan goal.

I will not be moved by business value. I will not be moved by “AI will save us” pipe dreams. I will be building and prioritizing AI that makes the world better for those already harmed, or nothing at all.

If you haven’t realized it by now, nobody is coming to save us. No coding genius, no great philosopher is going to magically pull us out of this mess. We can either rebel now, or continue to be the subjects of unusually cruel AI overlords.

It’s truly laughable that we thought we’d someday be subjugated by harmful AI (and still ponder when it will happen) when it’s happening right now. Is being falsely arrested not an existential risk? (It is if you’re Black!) Is being fired not an existential risk? Is being denied a transplant not an existential risk?

We are willing to let a machine determine the outcomes of critical aspects of our lives, like who gets triaged as higher priority in an ER, and somehow we think we’re not captive subjects of automation and greed. That’s what I call algorithmic Stockholm syndrome.

We’re living at a time where we CAN invent better structures and develop AI with the explicit intent of improving the lives of marginalized people. To me, everything else is wastewater.
