In Meta’s WhatsApp, users can generate stickers using AI prompts. The Guardian reported Friday that the AI model used to create those stickers sometimes generates images of children holding guns when prompted with “Palestine” and similar words. Meanwhile, according to the outlet, “Israel” prompts resulted in no such imagery.

Meta’s AI sticker generator began rolling out a month ago and quickly showed a tendency to create inappropriately violent or crass imagery, including, yes, child soldiers. According to the article, an unnamed source said some of the company’s workers had flagged and escalated the issue with prompts related to the war in Israel.

Meta spokesperson Kevin McAlister told The Verge via email that the company is addressing the issue, adding that Meta will “continue to improve these features as they evolve and more people share their feedback.”

Meta has had other issues with bias in its AI models, such as Instagram’s auto-translate feature inserting the word “terrorist” into user bios written in Arabic, echoing a Facebook mistranslation that led to a Palestinian man’s 2017 arrest in Israel.
