Companies leading the artificial intelligence (AI) boom, including Google, Meta, and OpenAI, have agreed to keep child sexual abuse material out of AI training data in order to protect children from sexual exploitation content.
According to the Wall Street Journal (WSJ) on the 23rd (local time), the non-profit organization All Tech Is Human and the child protection organization Thorn reached an agreement with AI companies to implement these principles and minimize the risks posed by generative AI tools.
The organizations that led the agreement explained that they asked companies to screen out material that could contain child sexual abuse content and to remove such material from AI training datasets. To ease the difficulties that new AI tools create for law enforcement, companies will also add markings that indicate whether content was generated by AI.
According to the U.S. National Center for Missing and Exploited Children (NCMEC), there were 36 million reports of child sexual exploitation last year alone. With advances in AI making image generation simple, the agreement was reached amid growing concern that new technologies will amplify these risks.
Executives of the AI companies that joined the agreement stated that they would not allow their AI tools to be used to generate child sexual exploitation content.
With children increasingly using AI, companies will have to establish clear policies, and governments, not just companies, should step up as well.
JULIE KIM
US ASIA JOURNAL