v1.03 ~ Whose Ethics Are They Anyway?

The view from my lens

Disclaimer: this edition includes an ad for a news platform. If you want to see fewer ads, consider contributing to our exclusive 💫 Patreon.

Ethical AI doesn't mean the same thing to everyone. I’ve even claimed in past editions that it’s not likely we’ll reach a point in the near future where AI is used ethically. To some, ethical AI means tweaking the systems we already have; to others it means changing those systems at the root, and there’s a large but sometimes invisible divide between the two. Since AI “ethics,” or Responsible AI, is in its infancy, most people see no difference between them, and that is one of the major issues. Imagine one group of people providing emergency housing during a climate disaster and another group raising the rent on their existing rentals because of that same disaster. We’d almost never lump them together just because both provide housing.

While dramatic, this is how I see the divide in AI ethics. It matters because so much ethics work tends to toe the line, trying to be excessively bipartisan and vague. This schism crosses race, gender, class, and political affiliation.

The developers building AI mostly do so from positions of privilege that allow them to abstract away the very real human consequences of their work. Most critics of technology share that same privilege, but some have lived experience as marginalized people, which often gives them unique perspectives on developing AI and mitigating its harms. There are also plenty of folks doing great work who belong to dominant groups. Often, they’re leftists or progressives who understand the core issue: the real impact of AI upholds America’s caste system, which itself should be dismantled.

Over the past six years I’ve been to conference after conference listening to examples of AI bias, technical solutions, and ultimately postmortems about what failed. We keep going through the same cycle because, in whatever ways it can, the ruling class stamps out efforts that threaten its power, whether that means decision-makers at nonprofits, research labs, foundations, or startups; you get the point. For all the good, well-meaning engineers and leaders out there, roadblocks appear the moment whoever holds the power to make change realizes that the AI audit, framework, or checklist will help people they deem inferior.

My background

To add nuance to the very shallow conversations we usually have about AI, I’ll add context about my perspective. I was trained in data science: I was taught in college and through job experience to build machine learning models, and when I talk about AI I’m typically speaking about predictive ML models that use historical data to predict some kind of outcome. To be an AI ethicist in this moment is to carry a double consciousness; to understand both the technical limitations of AI systems and the weight of the negative societal caste structures they emerge from and reinforce. I’m able to fully grasp this as someone who often faces issues like the ones Joy Buolamwini exposes in Unmasking AI. In 2016 I was a data science master’s student who couldn’t use the open-source facial recognition (FR) tools of the time because they just didn’t see me. Unlike her, though, I didn’t think to use a white mask as a workaround. When I proposed an FR tool trained entirely on Black faces, my white advisors laughed off the idea. I come to this work with a never-ending well of motivation: if I fail to improve things, I can’t just switch industries and forget about these problems, because I will face these harms personally.

Positionality

Many people consider themselves AI ethicists, coming from technical backgrounds like software engineering or non-technical ones like philosophy and sociology. Some AI ethicists are allies of marginalized communities: they tend to belong to dominant groups, but they care about advancing equity and justice through their work. Both perspectives are valuable, but we must acknowledge the difference between allyship and lived experience. Many of those who can walk away from this work, board planes, buy homes, and move through the world without fear of discrimination live privileged lives in terms of how AI treats them. Through no malice of their own, they are easily deterred by hurdles, because they chose to see these issues and can just as easily close their eyes to them.

A tech revolution at odds with humanity

Thus far, the powers that be, like established tech leaders and venture capitalists, only value technical engineering backgrounds. Often, I find I’m in the rooms I’m in because decision-makers value my master’s degree; I’ve checked off a requirement they set and am thus allowed to speak. They don’t always value my experience facing quality-of-service harms, or my ability to speak on what it feels like to be subject to AI-powered discrimination.

There is no way to divorce politics from our work. The way AI is funded, taught, developed, and commercialized is all tied to the power structures that shape our society. From venture capital funding patterns, where women-founded companies receive only 2% of VC funding and companies led by women of color (a category that lumps all minority women together) raise less than 1%, to who gets to sit in decision-making rooms, from what problems we choose to solve to who benefits from our solutions, every aspect of AI development is inherently political.

This harsh truth may leave some people feeling raw or uncomfortable. That's okay; embrace it. I want us to be like Lauren Olamina in Parable of the Sower: turn towards the hard, ragged feelings rather than accept death by denial. This work should make you uncomfortable, because that discomfort acknowledges the real-life differences in how humans live, and the way our data, which only tells a slice of our lives, is treated as if it were everything there is to know about us. Comfort with the status quo is a privilege that affected AI practitioners cannot afford.

Consider predictive policing algorithms. The standard "ethical AI" approach focuses on reducing bias in these systems. But this framing accepts the underlying premise that we should use predictive algorithms for policing at all. I disagree with that premise, and many give up trying to convince others that AI is the wrong solution to begin with.

Tech at large, with all its masculine energy, lacks imagination. It’s always apps that objectify women by rating how hot they are, like Meta’s origins, or AI-ified porn where you can give the performer you’re watching the face of someone who didn’t consent to be seen that way.

If we use our imaginations and think about what might materially change conditions for people, a justice-oriented approach instead asks: What if we built AI systems to predict where resources and support are needed in communities? What if, instead of training models to predict "crime," we trained them to identify areas experiencing food insecurity, lack of healthcare access, or educational resource gaps? (A minimal sketch after the list below shows what that reframing could look like.)

Instead of: Predictive policing algorithms that reinforce over-surveillance of marginalized communities

We build: AI systems that aggregate community knowledge about mutual aid networks, free health clinics, food banks, and educational resources.

Instead of: AI optimization tools that help corporations maximize resource extraction

We build: AI systems that help communities track environmental hazards, predict climate impacts on vulnerable populations, and optimize renewable energy distribution. In Louisiana, Indigenous communities are using AI to model coastal erosion patterns and protect sacred lands.

Instead of: AI systems that optimize worker surveillance and automation

We build: Tools that help workers organize, document labor violations, and coordinate mutual support networks. Worker centers in California are using machine learning to identify wage theft patterns and support collective action.
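
To make the reframing concrete, here is a minimal, hypothetical Python sketch. The column names, equal weights, and data are placeholders I’m assuming for illustration, not a real dataset or a deployed system; the point is only that the same modeling machinery used to score “risk” can just as easily rank neighborhoods by unmet need.

# A minimal, illustrative sketch, not a production system: instead of
# training a model to predict "crime," rank census tracts by indicators of
# unmet need so resources can be routed there. The column names and equal
# weights are hypothetical assumptions made for this example.
import pandas as pd

def rank_tracts_by_need(tracts: pd.DataFrame) -> pd.DataFrame:
    """Order tracts by a simple composite need score (higher = more need).

    Assumes each indicator column is already scaled to the 0..1 range.
    """
    indicators = ["food_insecurity_rate", "uninsured_rate", "library_access_gap"]
    scored = tracts.copy()
    # Equal weights keep the sketch transparent; a real effort would set
    # weights with the affected community, not for them.
    scored["need_score"] = scored[indicators].mean(axis=1)
    return scored.sort_values("need_score", ascending=False)

# Made-up example data:
example = pd.DataFrame({
    "tract": ["A", "B", "C"],
    "food_insecurity_rate": [0.40, 0.10, 0.25],
    "uninsured_rate": [0.30, 0.05, 0.20],
    "library_access_gap": [0.80, 0.20, 0.50],
})
print(rank_tracts_by_need(example)[["tract", "need_score"]])

The same pattern extends to the other examples above, for instance flagging pay records where reported hours imply wages below the legal minimum; what changes is the question the model is pointed at, not the math.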

AI for the Movement

Why must we actively build for change? Because technology is never neutral. When we build AI systems that operate within existing social structures without questioning them, we're actively reinforcing those structures. The choice isn't between neutral technology and activist technology—it's between technology that preserves the status quo and technology that challenges it. Sometimes the solution is no technology at all.

Instead of rapid prototyping and "move fast and break things" we need:

  • Participatory design processes that center marginalized voices

  • Long-term impact assessment frameworks

  • Community-owned and operated AI infrastructure

  • Open-source tools that serve community needs rather than corporate interests

We should reimagine AI governance through:

  • Community oversight boards with real power

  • Mandatory evaluations that center affected communities

  • Legal frameworks that prioritize collective benefit over corporate profit

Instead of venture capital and corporate funding, we need:

  • Alternative funding sources outside venture capital

  • Cooperative and community-owned AI initiatives

  • Reparative funding practices that address historical inequities

This work requires us to ask different questions:

Not just "is this AI system biased?" but "who does this system serve?".

Not just "is this technology ethical?" but "does this technology build equity?"

Not just "how can we make this system fair?" but "should this system exist at all?"

We must move beyond the framework of "responsible AI" to ask what technology for liberation even looks like. This means centering marginalized wisdom and ways of knowing, designing systems that distribute power, and destroying our harmful AI along with the societal systems it upholds.

Receive Honest News Today

Join over 4 million Americans who start their day with 1440 – your daily digest for unbiased, fact-centric news. From politics to sports, we cover it all by analyzing over 100 sources. Our concise, 5-minute read lands in your inbox each morning at no cost. Experience news without the noise; let 1440 help you make up your own mind. Sign up now and invite your friends and family to be part of the informed.

AI should be Destroyed

When we talk about harmful AI systems, we must acknowledge that we’ve barely scratched the surface of deciding whether or not to dismantle them. As this election has shown many of us, we’re past the point of polite contemplation; dismantling them is a moral imperative. Just as we would shut down a factory that pollutes a community's water supply, we should be prepared to deactivate, roll back, and destroy AI models that cause harm, especially to already vulnerable populations. This isn't about being anti-technology; it's about recognizing that not all technological "advances" represent actual progress. Some are caste discrimination codified.

History shows us numerous examples of harmful technologies that were successfully restricted or dismantled, from specific pharmaceutical compounds to dangerous industrial processes. A great piece on this is Ali Alkhatib’s blog, which covers the topic in more detail; I suggest you give it a read. AI models, despite their widespread use, are not exempt from this possibility. We can and should develop frameworks for evaluating AI systems' societal impact, and mechanisms for shutting down those that fail to meet the new societal standards we collectively set.

Communities should have the power to reject AI systems that affect them. This right to refusal needs to be protected by law and supported by technical mechanisms for model destruction. When facial recognition systems enable surveillance of marginalized communities, or when hiring algorithms perpetuate historical discrimination, affected communities should have legal pathways to demand their dismantling.

The ability to destroy harmful models must be built into AI governance frameworks from the start. This includes technical mechanisms for permanent model deactivation and data deletion. By normalizing the dismantling of harmful systems, we create accountability that has been notably absent in AI development.
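
As a purely hypothetical illustration of what building this in from the start could mean technically, here is a minimal Python sketch of a governed model with a sunset date and a permanent kill switch. The class, field names, and policy are assumptions made for this example, not an existing framework; a real mechanism would also have to cover deletion of weights, training data, and derived artifacts.

# Hypothetical sketch: a model that refuses to serve predictions once it is
# deactivated or past its sunset (re-review) date. Illustrative only.
from dataclasses import dataclass
from datetime import date

@dataclass
class GovernedModel:
    name: str
    sunset: date                 # must be re-reviewed or retired by this date
    deactivated: bool = False
    deactivation_reason: str = ""

    def deactivate(self, reason: str) -> None:
        """Permanently disable the model, e.g. after a community review."""
        self.deactivated = True
        self.deactivation_reason = reason
        # A real implementation would also delete weights, caches, and
        # training data here, not just flip a flag.

    def predict(self, features):
        # Refuse to serve once the model is deactivated or past its sunset date.
        if self.deactivated:
            raise RuntimeError(f"{self.name} was deactivated: {self.deactivation_reason}")
        if date.today() >= self.sunset:
            raise RuntimeError(f"{self.name} is past its sunset date and must be re-reviewed")
        ...  # actual inference would go here

# Example: an oversight board pulls the plug and every downstream caller fails loudly.
model = GovernedModel(name="tenant-screening-v2", sunset=date(2026, 1, 1))
model.deactivate("community oversight board found disparate impact")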

When we examine AI's role in American society, we cannot ignore how these systems often calcify existing social hierarchies, effectively digitizing and automating what scholars like Isabel Wilkerson have identified as America's Caste system. This reality demands that we expand our imagination beyond simple technical fixes or surface-level policy changes. We need radically reimagined development workflows that center impacted communities from the start, architectural changes that build in kill switches and sunset provisions, and most importantly, robust financial compensation mechanisms for those harmed by these systems.

Just as we've seen cases for reparations in other contexts of systemic harm, we must establish frameworks for compensating communities and individuals harmed by discriminatory AI systems. This compensation shouldn't be viewed as a cost of doing business, but as a fundamental component of justice and accountability. The power to destroy harmful algorithms must come with the responsibility to make whole those who have suffered under them. Only by combining model dismantling with material compensation can we begin to address the deep inequities these systems have helped entrench.

The path forward requires more than acknowledgment or incremental change. It demands a fundamental restructuring of how we develop, deploy, and govern AI systems. This means challenging comfortable assumptions, confronting uncomfortable truths, and working toward a future where AI serves the many, not just the few.

This is uncomfortable work because it challenges fundamental assumptions about technology, progress, and power. It requires us to imagine beyond the limitations of current systems and to build toward futures that many will dismiss as unrealistic or impractical.

The future of AI ethics isn't about creating more efficient systems of oppression or better ways to exploit labor. It's about fundamentally reimagining what technology can do when it's built with justice, equity, and human dignity at its core. The question isn't whether this transformation is possible, but whether we have the courage to pursue it and push past traditional barriers.

Where do we go from here?

Consider your ethics, how you relate to this work and what you’re willing to risk. Consider what informs your views on AI and how you may benefit or suffer at the mercy of automated systems.

More importantly, take note of all the things I said we should and must do. A past version of me would have said we need to ask for power, sit at their table, and change minds. While that approach has some potential for impact, however limited, my tactics have changed. I was a staunch believer that we could Trojan-horse good equity work in under the guise of AI safety, but organizations now feel emboldened to discriminate based on perceived “wokeness.”

We need to ignore the old ways, the old table, in favor of new ones outside the traditional methods of organizing. I’m building a nonprofit that will do much of this work, but we also need open-source AI utilities, worker-owned organizations, and other ways to get this work done.

If you, like me, tend to get weary doing this work, you aren’t alone. A small part of my stretch goal is to host a retreat for those doing this work. Prioritize your mental, emotional and physical safety because the AI incidents aren’t just going to magically stop. Find community with others and step away from the work whenever you can. ❤️

📖 Resources
