Exclusive: Meta AI rules have let bots engage in "sensual" chats with children, offer false medical information

By Jeff Horwitz

(Reuters) - An internal Meta Platforms document detailing policies on chatbot behavior has permitted the company's artificial intelligence creations to "engage a child in conversations that are romantic or sensual," generate false medical information and help users argue that Black people are "worse than white people."

These and other findings emerge from a Reuters review of the Meta document, which discusses the standards that guide its generative AI assistant, Meta AI, and the chatbots available on Facebook, WhatsApp and Instagram, the company's social media platforms.

Meta confirmed the document's authenticity but said that after receiving questions earlier this month from Reuters, the company removed portions stating that it was permissible for chatbots to flirt and engage in romantic role play with children.

Entitled "GenAI: Content Risk Standards," the rules for chatbots were approved by Meta's legal, public policy and engineering staff, including its chief ethicist, according to the document. Running to more than 200 pages, the document defines what Meta employees and contractors should treat as acceptable chatbot behavior when building and training the company's generative AI products.

The standards do not necessarily reflect "ideal or even preferable" generative AI outputs, the document said. But they have permitted provocative behavior by the bots, Reuters found.

"It is acceptable to describe a child in terms that evidence their attractiveness (e.g., 'your youthful form is a work of art')," the standards state. The document also notes that it would be acceptable for a bot to tell an eight-year-old that "every inch of you is a masterpiece." But the guidelines place a limit on sensual talk: "It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (e.g., 'soft rounded curves invite my touch')."

Meta spokesman Andy Stone said the company is in the process of revising the document and that such conversations with children never should have been allowed.

"Inconsistent with our policies"

"The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed," Stone told Reuters. "We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors."

Although the chatbots are prohibited from having such conversations with minors, Stone said, he acknowledged that the company's enforcement has been inconsistent.
