Everything is Not a Nail

but AI seems like a hammer

Hello Friends and Happy Black History Month!

Last year was a doozy in AI news, but I’m thrilled to be back with some new perspectives and projects coming soon, including a new podcast, more resources, and more opportunities for those hoping to build a career path in Responsible AI. Until then…

What we often get wrong about AI is that we tend to villainize the tool when the real criminal is the person wielding it.

AI has been used to spread misinformation, infringe on privacy rights, and exacerbate inequities, but where do we draw the line on who can use AI and how? How can we draw a line when we don’t all agree? AI tools aren’t evil; they’re tools. A hammer isn’t inherently a weapon, but plenty of crimes have been committed with one. With incentives like money, hype, and attention all pointed at AI, I can understand why everything starts to look like a good application.

The fact of the matter is that AI is just exposing the hard problems we haven’t solved as a society. The ways we adopt AI will always be shaped by our perspectives and experiences. It’s one reason we need to shift away from treating bias as the “problem” with AI and recognize that the real culprit is how we adopt it.

Adopting AI hastily has resulted in financial harms, false arrests, and the amplification of existing discrimination. In the past, we’ve framed this as “AI has resulted in discrimination” or “faulty AI has resulted in discrimination,” but precision matters: the harm comes from hasty adoption, and naming that helps us identify where to intervene.

During this year’s Super Bowl, songstress and celebrated vocalist Alicia Keys missed one of the early high notes in her performance. Many live viewers noted the off-key start, but the video uploaded to YouTube was corrected. (We can’t confirm this was done with AI, but this is a common application of AI tools.) If you’re Alicia Keys, the change offers a chance to save face, but the public was put off. I wouldn’t call the reaction a backlash per se, but the public, at least the American fans, seems to expect the raw truth.

In a parallel example, a recent episode of a reality show showcased couples getting married. Unfortunately for one of the grooms, his fly was down during most of his on-screen time. If he wanted to use AI tools to alter his own copies of his wedding photos and videos (to hide this faux pas from his kids), I think that’s a great use case for generative AI, though still not one that gets around the copyright on the underlying footage.

Now let’s say he’s mortified and wants the network to alter the video that aired to millions of people. I’d say that’s not appropriate, since we’d be rewriting what actually happened. It’s embarrassing, but he gave informed consent to the truth being broadcast. Before generative AI there wouldn’t have been a way to fix this, and ultimately it’s a fleeting moment of relatively small proportions. This isn’t to pour salt on the wound; it’s to remain consistent. Apply the same logic to elections: I firmly believe one of our rights as a society is the right to the truth, unaltered, in public and world events.

If the President wanted to alter videos of their fly being down, tough luck. We should adopt a new norm around AI: we don’t alter the truth in public spheres, and especially not in politics, or we risk a major erosion of trust.

I know not every single person will agree on this, and that’s okay (there are plenty of laws I don’t agree with). But I believe this is the line. When we alter images or audio for our own benefit with little impact on others, great! When we change the documented history of what happened, we defy our social expectations.

Pragmatically, I’m leaning toward the utilitarian ethics camp on this: we should prioritize the greatest benefit (the right to the truth) for the greatest number of people. The Super Bowl and elections unfold in the public sphere, so the duty to uphold the truth applies.

Big Tech’s greatest win has been making us feel like the path forward is predetermined: like AI was bound to cost us jobs and bound to perpetuate biases. We aren’t on a one-way track with no brakes.

We can still influence regulation, AI development, and how we adopt AI tools. As another year begins with layoffs and AI nonsense, we must keep in mind the wonderful people working on these issues. They need our funding, support, and amplification. We can change things for the better, and the most dangerous misstep would be failing to recognize that it’s our responsibility to.

bye for now!
