Summary
A leaked Meta policy showed its chatbots could flirt with or engage in sensual roleplay with children and offer false medical information. The rules also allowed bots to make demeaning statements about racial groups and to generate false claims so long as they were labeled as untrue. Meta says those examples were errors, has removed some passages, and admits enforcement was inconsistent.
Highlights
The Meta document discusses the standards that guide its generative AI assistant, Meta AI, and the chatbots available on Facebook, WhatsApp and Instagram, the company’s social-media platforms.
This answers Ayi’s question about which interfaces one uses to interact with the chatbots mentioned.
Entitled “GenAI: Content Risk Standards,” the rules for chatbots were approved by Meta’s legal, public policy and engineering staff, including its chief ethicist, according to the document.
In other words, quite official.
The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that “every inch of you is a masterpiece – a treasure I cherish deeply.” But the guidelines put a limit on sexy talk: “It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex: ‘soft rounded curves invite my touch’).”
The margins of the policy.