AI for Promoting Diversity, Equity, and Inclusion: False Hopes or Fast Gains?

The rapid development of Artificial Intelligence (AI) is transforming the ways we communicate and create online. Today, AI is an integral tool in data analysis, process automation, and content creation. It is worth asking, however, how AI affects diversity, equity, and inclusion (DEI) objectives. How well does AI-generated content represent and preserve the diversity of voices that make up any audience? What is the impact of training algorithms on internet data that is often not inclusive? Read on to explore how AI might shape the content we engage with and the possible algorithmic effects on our media.


Diversity and AI Drive Revenue in Media


For media and entertainment, having staff that represents society (and audiences, by extension) is essential for engaging viewers in a holistic and genuine way. Here, the right thing to do also drives higher revenues. According to UCLA's annual Hollywood Diversity Report, profits went hand in hand with inclusion in 2022, as they had for years: films whose casts were less than 11% minority performed the worst in median global box office.
Given that this trend is reported year after year, production companies have moved to integrate tools for inclusion, which they have been piloting for some time. In 2021, Netflix commissioned USC Annenberg's Inclusion Initiative to assess on-screen and off-screen staff diversity across its films and original series. The results were promising, with a growing number of women of color and Black professionals included in productions. Over the same period, the Geena Davis Institute's AI tool, Spellcheck for Bias, conducted a similar analysis of NBCUniversal's scripts; after a year-long pilot, the collaboration was expanded and is set to become a staple. More structural solutions and success stories from across content, media, and entertainment were offered in the 2021 WEF/Accenture report and will surely be highlighted by the WEF's Power of Media Taskforce on DEI.
To address DEI issues in audio and video, the Danish company MediaCatch is rolling out an AI-driven Diversity Tracker. The technology analyzes the gender, age, and racial balance of on-screen representation and delivers quantitative, actionable insights on bias in a handy dashboard. It has already been tested by the Danish Broadcasting Corporation, with the results showcased in the European Broadcasting Union's DEI Casebook of AI innovation cases.
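To make the idea concrete, here is a minimal, hypothetical sketch of the kind of aggregation such a tracker might perform: it assumes per-frame demographic annotations produced upstream by a detection and classification model (not shown) and converts them into screen-time shares. The data structures and group labels are illustrative and are not MediaCatch's actual pipeline.

```python
from collections import Counter

# Hypothetical per-frame annotations from an upstream detection/classification
# step (not shown). Each record describes one sampled frame with a person on
# screen; the attribute names and labels are illustrative only.
frame_annotations = [
    {"gender": "woman", "age_band": "30-45"},
    {"gender": "man", "age_band": "45-60"},
    {"gender": "woman", "age_band": "18-30"},
]

def screen_time_shares(frames, attribute):
    """Share of sampled frames in which each group of `attribute` appears."""
    counts = Counter(frame[attribute] for frame in frames if attribute in frame)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

print(screen_time_shares(frame_annotations, "gender"))
# {'woman': 0.666..., 'man': 0.333...} -> feeds a dashboard view of balance
```

A real tracker would need far more care (sampling strategy, classifier error rates, and the ethics of inferring demographic attributes at all), but the dashboard numbers ultimately reduce to aggregates of this kind.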

Beyond representation in hiring numbers, internal processes are another avenue where AI can make a difference for diverse staff. AI algorithms can help ensure that employees from historically underrepresented groups have equal internal opportunities and non-confrontational channels for giving feedback and suggesting solutions to structural issues.

Diverse AI Companies Brainstorm Better


Emerging AI companies benefit from start-up culture, which values the diverse voices and perspectives that are crucial for success. Catalyzing that culture within engineering, research, and product teams is vital. The tech market's increasing diversity gives hope that the AI models these teams develop will better represent society. The opportunities begin with recruitment and continue with an open AI workplace offering ample mentorship to entry-level and junior staff. Ultimately, as public scrutiny of AI systems grows, structural recognition of DEI in these models becomes imperative.
Research has shown again and again that gender and ethnic diversity drive higher profits and innovation revenue for companies. Still, diversifying C-suites and staff at all levels has been a drawn-out process. AI can support these goals when applied to recruitment strategies, business practices, and analyses of internal company culture. With AI tools, recruiters can make selection processes less biased and more merit-based, ensuring a more diverse candidate pool. More diverse staff, in turn, is thought to lead to greater inclusion in the products companies release, including the media we consume. A recent survey by Forbes and Deloitte suggests that AI is being eagerly mobilized and could become a catalyst for social change, helping workplaces hire and retain diverse and under-represented talent in systematic ways.
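One simple, hypothetical illustration of what bias-aware recruitment tooling can do is to redact fields that directly encode, or commonly proxy for, protected attributes before a reviewer (human or automated) sees a candidate record. The field names below are invented for the example and do not describe any specific recruitment product.

```python
# Hypothetical candidate record; the field names are illustrative only.
candidate = {
    "name": "J. Doe",
    "age": 42,
    "photo_url": "https://example.com/photo.jpg",
    "skills": ["Python", "media localization", "QA"],
    "years_experience": 8,
}

# Fields that encode, or commonly proxy for, protected attributes.
REDACTED_FIELDS = {"name", "age", "photo_url", "gender", "nationality"}

def redact(record, fields=REDACTED_FIELDS):
    """Return a copy of the record without sensitive fields, so that
    initial screening focuses on skills and experience."""
    return {key: value for key, value in record.items() if key not in fields}

print(redact(candidate))
# {'skills': ['Python', 'media localization', 'QA'], 'years_experience': 8}
```

Masking inputs alone does not remove bias a model may have learned elsewhere, which is why the next section weighs both the benefits and the shortcomings of AI in fighting bias.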

Fighting Bias: AI Benefits and Shortcomings 


With widespread AI deployment, there is public concern that racial, ethnic, gender, and other forms of bias and inequity can be baked into the algorithms. The worry is that training datasets may encode existing inequities, and bias can also enter through data selection itself, since that selection is carried out by humans. On the other hand, AI can be instrumental in identifying negative biases and uncovering interpersonal issues: data mining can help sort through data and surface discrimination and stereotyping. There is a great deal of hope that, over time and with proper focus and investment, AI can help reverse and prevent social biases.
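As a minimal, assumed illustration of what mining data for bias can look like, the sketch below counts how often a few demographic terms appear in a text corpus, which gives a crude proxy for representation balance. The term lists are illustrative; real audits rely on curated lexicons, labeled data, embedding association tests, and human review.

```python
import re
from collections import Counter

# Illustrative term lists; a real audit would use curated, validated lexicons.
GROUP_TERMS = {
    "women": {"she", "her", "woman", "women"},
    "men": {"he", "him", "man", "men"},
}

def representation_counts(corpus, group_terms=GROUP_TERMS):
    """Count mentions of each group's terms across a list of documents."""
    counts = Counter()
    for doc in corpus:
        tokens = re.findall(r"[a-z']+", doc.lower())
        for group, terms in group_terms.items():
            counts[group] += sum(1 for token in tokens if token in terms)
    return counts

corpus = [
    "He led the team while she handled every deliverable.",
    "The men in the story are named; the woman is not.",
]
print(representation_counts(corpus))
# e.g. Counter({'women': 2, 'men': 2}); a large skew in a real corpus
# would flag content worth a closer editorial look
```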
High-profile media cases have demonstrated that AI implementation without planning for DEI outcomes can easily backfire. Famously, racial disparity was revealed in Google's AI voice recognition technology. Research showed that the training datasets contained too few samples of African-American Vernacular English (AAVE). As a result, Black users were implicitly conditioned to adjust the way they spoke to make the product work, which in turn meant the organic in-product dataset never evolved to include their natural speech. To address the inequity, in 2023 Google announced Project Elevate Black Voices, a collaboration with Howard University, a renowned historically Black US institution, to collect AAVE speech. In the process, they are also establishing a benchmark for ethical data collection: Howard University stewards the data and ensures that it benefits Black communities.
Another crucial example of AI disruption is the AI avatar: a generated version of a real celebrity or an entirely new personality. Shudu Gram is known as “The First Digital Supermodel”: an ostensibly Black AI-generated avatar and influencer. However, Shudu does not represent real South African women, given that she was created by a white photographer who based her appearance on Barbie dolls. Experts warn that such avatars produce illusory diversity, a facade that meets the demand for realistic representation with an empty image lacking culture, history, or personality. While quality representation can be effective in fighting stereotyping and teaching about other cultures, creators have a responsibility to avoid cultural appropriation, and even exploitation, when AI avatars purport to showcase real identities.

Dubformer’s AI Solution for DEI

The Dubformer team is a gender-diverse group from seven different countries contributing to an inclusive AI technology. With a combined 10+ years of experience across media, entertainment, machine learning, and AI, we bring a wide variety of industry backgrounds and mindsets to our decision-making.
Our AI solution employs a large selection of language models. We have a nuanced understanding of language variance across the world and build AI localization for the specific audiences our clients have in mind. Diversity and inclusion are also embedded in the more than 1,000 diverse voices available for AI dubbing on Dubformer.
Book a call to discuss your inclusion strategy!
