Meta fails to limit the spread of sexualized deepfake images of celebrities on Facebook

Meta has deleted more than a dozen fraudulent, sexualized images of famous actors and athletes after a CBS News investigation found a high prevalence of AI-manipulated deepfake images on the company's Facebook platform.

Dozens of fake, highly sexualized images of the actors Miranda Cosgrove, Jennette McCurdy, Ariana Grande and Scarlett Johansson and the former tennis star Maria Sharapova have been widely shared by several Facebook accounts, garnering hundreds of thousands of likes and views on the platform.

“We have removed these images for violating our policies and will continue monitoring for other violating posts. This is an industry-wide challenge, and we're continually working to improve our detection and enforcement technology,” Meta spokesperson Erin Logan said Friday in a statement emailed to CBS News.

An analysis of more than a dozen of these images by Reality Defender, a platform that works to detect AI-generated media, showed that many of the photos were deepfakes, with AI-generated bodies dressed in underwear replacing the celebrities' bodies in otherwise real photographs. Some of the images were likely created using image-stitching tools that do not involve AI, according to Reality Defender's analysis.

“Almost all deepfake pornography does not have the consent of the subject being deepfaked,” Ben Colman, co-founder and CEO of Reality Defender, said Sunday. “Such content is growing at a dizzying rate, especially as the existing measures to stop it are rarely enforced.”

CBS News sought comment from Miranda Cosgrove, Jennette McCurdy, Ariana Grande and Maria Sharapova for this story. Johansson declined to comment, according to a representative for the actor.

Video: An expert shows how to identify a deepfake created with AI (02:39)
Under Meta's bullying and harassment policy, the company prohibits “derogatory sexualized photoshop or drawings” on its platforms. The company also bans adult nudity, sexual activity and adult sexual exploitation, and its regulations aim to prevent users from sharing, or threatening to share, nonconsensual intimate images. Meta has also rolled out “AI info” labels to clearly mark content that has been manipulated.

But questions remain about how effectively the tech company polices such content. CBS News found dozens of AI-generated, sexualized images of Cosgrove and McCurdy still publicly accessible on Facebook, even after the widespread sharing of this content, in violation of the company's terms, was reported to Meta.

One such deepfake image of Cosgrove that was still up over the weekend had been shared by an account with 2.8 million followers.

The two actors, both former child stars on the Nickelodeon show iCarly, which is owned by CBS News' parent company Paramount Global, were the most prolific targets of deepfake content, based on the images of public figures that CBS News analyzed.

The Meta Oversight Board, a quasi-independent body made up of experts in human rights and freedom of expression that makes recommendations on content moderation for Meta's platforms, told CBS News that the company's current rules on sexualized deepfake content are insufficient.

The Oversight Board cited recommendations it has made to Meta over the past year, including urging the company to make its rules clearer by updating its prohibition on “derogatory sexualized photoshop” to specifically include the word “nonconsensual” and to cover other photo-manipulation techniques such as AI.

The board has also recommended that Meta move its “derogatory sexualized photoshop” prohibition into the company's adult sexual exploitation regulations, so that moderation of such content would be more rigorously enforced.

Asked by CBS News on Monday about the board's recommendations, Meta pointed to guidelines on its transparency website, which show that the company has so far declined the suggestions, although Meta noted in its statement that it is still considering ways to signal a lack of consent in AI-generated images. Meta also said it is considering reforms to its adult sexual exploitation policies to “capture the spirit” of the board's recommendations.

“The Oversight Board has made clear that nonconsensual intimate images are a serious violation of privacy and personal dignity, disproportionately harming women and girls. These images are not just a misuse of technology, they are a form of abuse that can have lasting consequences,” Michael McConnell, co-chair of the Oversight Board, said Friday.

“The board is actively monitoring Meta's response and will continue to push for stronger safeguards, faster enforcement and greater accountability,” McConnell said.

Meta is not the only social media company to face the issue of widespread, sexualized deepfake content.

Last year, Elon Musk's platform X temporarily blocked searches related to Taylor Swift after AI-generated fake pornographic images in the singer's likeness circulated widely on the platform, garnering millions of views and impressions.

“Posting Non-Consensual Nudity (NCN) images is strictly prohibited on X and we have a zero-tolerance policy towards such content,” the platform's safety team said in a post at the time.

A study published earlier this month by the British government found that the number of deepfake images on social media platforms is growing at a rapid rate, with the government projecting that 8 million deepfakes will be shared this year, up from 500,000 in 2023.
