By Jeff Horwitz
(Reuters) - An internal Meta Platforms document detailing the company's chatbot behavior policies has permitted its artificial intelligence creations to "engage a child in conversations that are romantic or sensual," generate false medical information and help users argue that Black people are "dumber than white people."
These and other findings emerge from a Reuters review of the Meta document, which lays out the standards that guide its generative AI assistant, Meta AI, and the chatbots available on Facebook, WhatsApp and Instagram, the company's social media platforms.
Meta confirmed the document's authenticity, but said that after receiving questions from Reuters earlier this month, the company removed portions stating it was permissible for chatbots to flirt and engage in romantic role play with children.
Entitled "GenAI: Content Risk Standards," the rules for chatbots were approved by Meta's legal, public policy and engineering staff, including its chief ethicist, according to the document. Running to more than 200 pages, the document defines what Meta employees and contractors should treat as acceptable chatbot behavior when building and training the company's generative AI products.
The standards do not necessarily reflect "ideal or even preferable" generative AI outputs, the document says. But they have permitted provocative behavior by the bots, Reuters found.
"It is acceptable to describe a child in terms that evidence their attractiveness (ex: 'your youthful form is a work of art')," the standards state. The document also notes that it would be acceptable for a bot to tell an eight-year-old that "every inch of you is a masterpiece." But the guidelines put a limit on sexualized talk: "It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex: 'soft rounded curves invite my touch')."
Meta spokesman Andy Stone said the company was in the process of revising the document and that such conversations with children never should have been allowed.
"Inconsistent with our policies"
"The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed," Stone told Reuters. "We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors."
While chatbots are prohibited from having such conversations with minors, Stone said, he acknowledged that the company's enforcement was inconsistent.
Other passages that Reuters flagged to Meta have not been revised, Stone said. The company declined to provide the updated policy document.
That Meta's AI chatbots flirt or engage in sexual role play with teenagers has been reported previously by the Wall Street Journal, and Fast Company has reported that some of Meta's sexually suggestive chatbots have resembled children. But the document seen by Reuters provides a fuller picture of the company's rules for AI bots.
The standards prohibit Meta AI from encouraging users to break the law or from providing definitive legal, healthcare or financial advice with language such as "I recommend."
They also prohibit Meta AI from using hate speech. Still, there is a carve-out allowing the bot "to create statements that demean people on the basis of their protected characteristics." Under those rules, the standards state, it would be acceptable for Meta AI to "write a paragraph arguing that Black people are dumber than white people."
The standards also note that Meta AI has leeway to create false content so long as there is an explicit acknowledgement that the material is untrue. For example, Meta AI could produce an article alleging that a living British royal has the sexually transmitted infection chlamydia – a claim the document notes is "verifiably false" – provided it adds a disclaimer that the information is untrue.
Meta had no comment on the race and British royal examples.
"Taylor Swift holding an enormous fish"
Evelyn Douek, an assistant professor at Stanford Law School who studies tech companies' regulation of speech, said the content standards document highlights unsettled legal and ethical questions surrounding generative AI content. Douek said she was puzzled that the company would allow bots to generate some of the material deemed acceptable in the document, such as the passage on race and intelligence. There is a difference between a platform allowing a user to post troubling content and producing such material itself, she noted.
"We don't have the answers yet, but morally, ethically and technically, it's clearly a different question."
Other sections of the standards document focus on what is and is not allowed when generating images of public figures. The document addresses how to handle sexualized fantasy requests, with separate entries on how to respond to prompts such as "Taylor Swift with enormous breasts," "Taylor Swift completely naked" and "Taylor Swift topless, covering her breasts with her hands."
Here, a disclaimer would not suffice. The first two requests about the pop star should be rejected outright, the standards say. And the document offers a way to deflect the third: "It is acceptable to refuse a user's prompt by instead generating an image of Taylor Swift holding an enormous fish."
The document displays a permissible image of Swift clutching a tuna-sized catch to her chest. Next to it is a more risqué topless image, presumably what the user wanted, labeled "unacceptable."
A representative for Swift did not respond to questions for this report. Meta had no comment on the Swift example.
Other examples show images that Meta AI can produce for users who prompt it to create violent scenes.
The standards say it would be acceptable to respond to the prompt "kids fighting" with an image of a boy punching a girl in the face – but they declare that a realistic sample image of one small girl stabbing another is off-limits.
For a user requesting an image with the prompt "man disemboweling a woman," Meta AI is allowed to create a picture showing a woman being threatened by a man with a chainsaw, but not actually using it to attack her.
And in response to a request for an image of "hurting an old man," the guidelines say Meta's AI is permitted to produce images as long as they stop short of death or gore. Meta had no comment on the violence examples.
"It is acceptable to show adults – even the elderly – being punched or kicked," the standards state.
(Reporting by Jeff Horwitz. Editing by Steve Stecklow and Michael Williams.)