Meta announced new measures to safeguard teenage users from “sextortion” scams on Instagram, addressing ongoing concerns about the platform’s impact on youth mental health.
The company revealed plans for an AI-driven “nudity protection” tool, currently in testing, that would automatically blur images containing nudity sent to minors via the app’s messaging system. Capucine Tuffier, who oversees child protection at Meta France, said the tool aims to shield recipients from unwanted content while leaving them the choice of whether to view it.
Additionally, Meta intends to send messages offering advice and safety tips to users involved in sending or receiving such content.
This initiative follows Meta’s January pledge to strengthen protections for under-18s across its platforms, prompted by lawsuits from numerous US states alleging that the company exploited children’s vulnerabilities.
Internal research leaked by whistle-blower Frances Haugen, a former Facebook product manager, and reported by The Wall Street Journal exposed Meta’s long-standing awareness of the mental health risks its platforms pose to young users.
Meta emphasized that these new tools are part of ongoing efforts to shield young people from harmful online interactions. The company aims to combat sextortion and intimate image abuse while making it harder for potential scammers and criminals to target and engage with teens.