Technology

Black Artists Say A.I. Shows Bias, With Algorithms Erasing Their History

Brooklyn-based artist Stephanie Dinkins has long been a pioneer in combining art and technology. In May, she was awarded $100,000 by the Guggenheim Museum in recognition of her groundbreaking innovations, including an ongoing series of interviews with Bina48, a humanoid robot.

For the past seven years, she has been experimenting with AI’s ability to realistically portray a Black woman laughing and crying, using a variety of word prompts. The initial results were lackluster, if not alarming: her algorithm produced a pink-shaded humanoid shrouded in a black cloak.

“I was expecting something a little more Black and feminine,” she said. And while the technology has advanced since her first experiments, Dinkins found herself using roundabout terminology in her text prompts to help the AI image generator achieve the image she wanted, “to give the machine the opportunity to do what I wanted.” But whether she uses the term “African American woman” or “Black woman,” the machine frequently produces distortions that mangle facial features and hair texture.

“The improvements obscure some of the deeper questions we should be asking about discrimination,” Dinkins said. The artist, who is Black, added, “The biases are embedded so deep in these systems that they become automatic. I want these systems to know who Black people are in nuanced ways, so that we can feel better supported.”

She is not the only one asking tough questions about the troubling relationship between AI and race. Many Black artists are finding evidence of racial bias in artificial intelligence, both in the large datasets that teach machines how to generate images and in the underlying programs that run the algorithms. In some cases, AI technologies seem to ignore or distort artists’ text prompts, affecting how Black people are depicted in images; in others, they seem to stereotype or censor Black history and culture.

Controversy over racial bias in artificial intelligence has exploded in recent years, with studies showing that facial recognition technologies and digital assistants have difficulty discerning the images and speech patterns of people who are not white. Those studies raised broader questions of fairness and bias.

Leading companies providing AI image generation tools, such as OpenAI, Stability AI, and Midjourney, have promised to improve their tools. “Bias is a key issue across the industry,” OpenAI spokeswoman Alex Beck said in an email interview, noting that the company is continually trying to “improve performance, reduce bias, and mitigate detrimental outcomes.” She declined to say how many employees were working on issues of racial bias or how much money the company had allocated to the problem.

“As Black people, we are used to being unseen,” the Senegalese artist Linda Dounia Rebeiz wrote in an introduction to her exhibition “In/Visible,” on Feral File, an NFT marketplace. “And when we are seen, we are used to being misrepresented.”

To prove her point in an interview with a reporter, Rebeiz, 28, asked OpenAI’s image generator, DALL-E 2, to imagine buildings in her hometown, Dakar. The algorithm produced an arid desert landscape and ruined buildings that, Rebeiz said, were nothing like the coastal homes of the Senegalese capital.

“It’s demoralizing,” Rebeiz said. “The algorithm is biased toward the cultural image of Africa that the West has created. It defaults to the worst stereotypes that already exist on the internet.”

Last year, OpenAI said it had established new techniques to diversify its output, so that DALL-E 2 “produces images of people that more accurately reflect the diversity of the world’s population.”

Minne Atairu, an artist featured in Rebeiz’s exhibition, is a Ph.D. candidate at Columbia University’s Teachers College who had planned to use image generators with young students of color in the South Bronx. But she now worries that “it might give the students an uncomfortable image,” Atairu explained.

The Feral File exhibition includes images from her “Blonde Braids Studies,” which explore the limits of Midjourney’s algorithm in generating images of Black women with naturally blond hair. When the artist requested an image of Black identical twins with blond hair, the program produced a lighter-skinned sibling instead.

“This tells us where the algorithm is pooling its images from,” Atairu said. “It’s not necessarily pulling from a corpus of Black people; it’s geared toward white people.”

She said she feared that young Black children would try to generate images of themselves and see children with lighter skin. Atairu recalled some of her early experiments with Midjourney, before a recent update improved its abilities. “It created images that looked like blackface,” she said. “You could see a nose, but it wasn’t a human nose. It looked like a dog’s nose.”

In response to a request for comment, Midjourney founder David Holz said in an email: “If someone finds a problem with our systems, we ask that they send us specific examples so we can investigate.”

Stability AI, another image generation service, said it planned to collaborate with the AI industry to improve bias evaluation techniques across a greater diversity of countries and cultures. Bias, the AI company said, is caused by “overrepresentation” in its general datasets, though it did not clarify whether the overrepresentation of white people was the issue.

Earlier this month, Bloomberg analyzed more than 5,000 images generated by Stability AI and found that the program amplified stereotypes about race and gender, typically depicting lighter-skinned people as holding high-paying jobs while labeling darker-skinned subjects “dishwasher” and “housekeeper.”

These issues have not stopped the investment frenzy in the tech industry. A recent rosy report from the consulting firm McKinsey predicted that generative AI could add $4.4 trillion to the global economy annually. Last year, about 3,200 startups received $52.1 billion in funding, according to the GlobalData deals database.

Tech companies have struggled with accusations of bias in portrayals of dark skin since the dawn of color photography in the 1950s, when companies like Kodak used white models in their color development. Eight years ago, Google disabled its AI program’s ability to let people search for gorillas and monkeys through its Photos app because the algorithm was incorrectly sorting Black people into those categories. As of May of this year, the issue had still not been fixed. Two former employees who worked on the technology told The New York Times that Google had not trained its AI systems with enough images of Black people.

Other experts who study artificial intelligence said the bias goes deeper than datasets, pointing to the technology’s early development in the 1960s.

“The issue is more complex than data bias,” said James Dobson, a cultural historian at Dartmouth College and the author of a recent book on the birth of computer vision. According to his research, there was little discussion of race in the early days of machine learning, and most of the scientists working on the technology were white men.

“It’s hard to separate today’s algorithms from their history, because engineers are building on previous versions,” Dobson said.

To reduce the appearance of racial prejudice and hateful imagery, some companies have banned certain words like “slave” and “fascist” from text prompts users send to generators.

But Dobson said companies that want simple solutions, like censoring the kinds of prompts users can send, are sidestepping a more fundamental problem of bias in the underlying technology.

“It’s a worrying time as these algorithms become more complex. And when you see garbage coming out, you have to wonder what kind of garbage-like process is still lingering inside the model,” the professor added.

Auriea Harvey, an artist included in the Whitney Museum of American Art’s recent exhibition “Refigured,” about digital identities, ran into these prohibitions in a recent project using Midjourney. “I wanted to ask questions about what the database knew about slave ships,” she said. “I received a message from Midjourney that my account would be suspended if I continued.”

Dinkins encountered similar problems with NFTs she created and sold showing how okra was brought to North America by enslaved people and settlers. She was censored when she tried to use a generative program, Replicate, to make pictures of slave ships. She eventually learned to outsmart the censors by using the term “pirate ship.” The image she received was close to what she wanted, but it also raised troubling questions for the artist.

“What is this technology doing to history?” Dinkins asked. “I see someone trying to correct for bias, and at the same time that erases a piece of history, because you forget what it was.”

Naomi Beckwith, chief curator of the Guggenheim Museum, credited Dinkins’s nuanced approach to issues of representation and technology as one of the reasons the artist won the museum’s first Art & Technology Award.

“Stephanie has become part of a tradition of artists and cultural activists trying to poke holes in overarching theories about how things work,” Beckwith said. The curator added that her own initial paranoia about AI programs replacing human creativity was greatly allayed when she realized these algorithms knew virtually nothing about Black culture.

But Dinkins is not giving up on the technology entirely. Despite her skepticism, she continues to use it in her art projects. “If a system can produce really high-fidelity images of Black women crying and laughing, can we rest?” she said.
