AI companies such as Microsoft and OpenAI pledge to remove nude images to prevent sexual deepfakes

Major artificial intelligence companies, including Microsoft and OpenAI, have pledged to remove nude images from AI training data to prevent the spread of harmful deepfake images.

The Associated Press reported that, in a pledge brokered by the U.S. government and announced by the White House on Wednesday, the AI companies agreed to voluntarily remove nude images from AI training datasets “where appropriate, in accordance with the purpose of the (AI) model.”

Adobe, Anthropic, and Cohere also joined the pledge.

The U.S. government has been waging a broad campaign against image-based child sexual abuse and the nonconsensual creation of adult deepfake images.

The White House Office of Science and Technology Policy said such images “have disproportionately targeted women, children and LGBTQ people, and are emerging as one of the fastest-growing harmful use cases of AI to date.”

Common Crawl, a nonprofit that collects data from public websites, also partially joined the pledge.

Common Crawl is a key data repository used to train AI chatbots and image generators; the organization promised to work to protect its data from image-based sexual abuse, including by collecting data responsibly going forward.

On the same day, a separate group of companies, including Meta, Microsoft, TikTok, Bumble, Discord, and Match Group, announced a set of voluntary principles to prevent image-based sexual abuse in a pledge timed to the 30th anniversary of the U.S. Violence Against Women Act.

JENNIFER KIM

US ASIA JOURNAL
