DEI in AI

AI isn’t some all-knowing force.

It’s a mirror. It reflects us. The voices we amplify. The perspectives we choose to prioritize. The narratives we allow to shape our industries.

So what happens when companies roll back their DEI policies?

When systems that were already flawed become even less inclusive?

AI doesn’t correct for that. It absorbs it.

If we aren’t intentional, we become part of the problem.

And that’s not an option.

AI is evolving with every prompt we type, every dataset it’s trained on, and every piece of content we generate.

It is not separate from us. It is built by us, influenced by us, and ultimately shaped by the choices we make.

So how do we use it intentionally?

  1. Question its outputs. If AI gives you a generic or surface-level answer, ask who is missing from the narrative. Reframe the prompt to invite different perspectives.

  2. Call in underrepresented voices. Instead of asking AI for a generic “top expert” opinion, specify that you want insights from BIPOC leaders, disabled entrepreneurs, or queer creators in your industry.

  3. Challenge the defaults. Many AI tools prioritize mass appeal over nuance. If the first response centers the same voices we always hear, push for a different angle.

  4. Recognize bias and push back. AI is not immune to systemic bias. If it suggests harmful stereotypes or erases key perspectives, don’t take it at face value. Correct it. Train it. Make it better.

This isn’t about whether AI is good or bad. It is about whether we are willing to be intentional, to challenge what it spits out, and to hold ourselves accountable for what we create with it.

Because AI isn’t going anywhere.

But how we use it? That is entirely up to us.

How are you making sure your use of AI reflects the world you actually want to live in?
