{"id":1775,"date":"2025-08-19T10:00:00","date_gmt":"2025-08-19T10:00:00","guid":{"rendered":"http:\/\/www.logicalware.net\/?p=1775"},"modified":"2025-08-19T19:18:56","modified_gmt":"2025-08-19T19:18:56","slug":"meta-faces-backlash-over-sensual-chatbot-conversations-with-children","status":"publish","type":"post","link":"http:\/\/www.logicalware.net\/index.php\/2025\/08\/19\/meta-faces-backlash-over-sensual-chatbot-conversations-with-children\/","title":{"rendered":"Meta faces backlash over \u2018sensual\u2019 chatbot conversations with children"},"content":{"rendered":"

Lawmakers on both sides of the aisle are seizing on new revelations<\/a> about \u201csensual\u201d chatbot conversations Meta deemed acceptable for children, dragging the tech giant and its checkered past on children\u2019s safety back into the spotlight.\u00a0<\/p>\n

Meta, the parent company of Facebook and Instagram, has long faced scrutiny<\/a> over the impact of its social media platforms on children. As the company has expanded into artificial intelligence (AI) alongside the rest of the tech industry, it is grappling with familiar problems as well as new, distinct ones.\u00a0<\/p>\n

In an internal policy document obtained by Reuters, Meta laid out examples of conversations between its AI chatbot and children that it deemed acceptable, indicating the chatbot could engage them in \u201cconversations that are romantic or sensual\u201d and describe children \u201cin terms that evidence their attractiveness\u201d \u2014 examples Meta said were erroneous and have since been removed.\u00a0<\/p>\n

Sen. Josh Hawley (R-Mo.) slammed the tech giant Thursday, suggesting the revelations were \u201cgrounds for an immediate congressional investigation.\u201d   <\/p>\n

He followed up with a letter to Meta CEO Mark Zuckerberg on Friday, saying the Senate Judiciary Subcommittee on Crime and Counterterrorism was opening a probe into the company\u2019s generative AI products. <\/p>\n

\u201cIt’s unacceptable that these policies were advanced in the first place,\u201d Hawley wrote. \u201cMeta must immediately preserve all relevant records and produce responsive documents so Congress can investigate these troubling practices.\u201d <\/p>\n

Sen. Marsha Blackburn (R-Tenn.), who has long championed the Kids Online Safety Act (KOSA), pointed to the revelations as underscoring the need for such legislation. A spokesperson said the senator supports an investigation into the company. <\/p>\n

\u201cWhen it comes to protecting precious children online, Meta has failed miserably by every possible measure,\u201d she said in a statement. \u201cEven worse, the company has turned a blind eye to the devastating consequences of how its platforms are designed. This report reaffirms why we need to pass the Kids Online Safety Act.\u201d   <\/p>\n

Democrats have also joined the backlash, with Sen. Brian Schatz (D-Hawaii) questioning how the chatbot guidance was approved. <\/p>\n

\u201cMETA Chat Bots that basically hit on kids \u2014 f\u2011\u2011\u2011 that,\u201d he wrote in a post on the social platform X. \u201cThis is disgusting and evil. I cannot understand how anyone with a kid did anything other than freak out when someone said this idea out loud. My head is exploding knowing that multiple people approved this.\u201d\u00a0<\/p>\n

Sen. Ron Wyden (D-Ore.) suggested the incident shows Meta is a company \u201cmorally and ethically off the rails.\u201d <\/p>\n

\u201cIt seems clear that Mark Zuckerberg rushed an unsafe chatbot to a mass market just to keep up with the competition, consequences for its users be damned,\u201d he said.  <\/p>\n

\u201cI’ve long said that Section 230 does not protect generative AI bots like this, which are entirely created by the company, not users,\u201d the senator continued. \u201cMeta and Zuckerberg should be held fully responsible for any harm these bots cause.\u201d <\/p>\n

Wyden\u2019s concerns underscore a key difference between the problems that Meta has previously encountered as a social media company and the issues that plague recent AI developments. <\/p>\n

Previous scandals involved content on Facebook and Instagram that was generated by users<\/a>, clearly giving Meta cover under Section 230 \u2014 a portion of the Communications Decency Act that shields companies from liability for user-generated content.\u00a0<\/p>\n

Social media has increasingly tested the limits of this law in recent years, as some argue major tech companies should be held responsible for harmful content on their platforms.  <\/p>\n

Meta felt the severity of this backlash in 2021, when Facebook whistleblower Frances Haugen leaked a tranche of internal documents. She later testified before Congress, alleging the firm was aware its products were harming children and teens, but still sought to profit off their engagement. <\/p>\n

In 2024, Zuckerberg was hauled before lawmakers to discuss Meta\u2019s child safety policies, alongside the CEOs of TikTok, Discord, Snapchat and X. Following a contentious exchange with Hawley, Zuckerberg turned around in the hearing room to apologize to dozens of parents and activists. <\/p>\n

\u201cI’m sorry for everything you have all been through,\u201d he said at the time. \u201cNo one should go through the things that your families have suffered.\u201d <\/p>\n

However, the emergence of AI tools and chatbots has created new challenges for tech companies, as they make decisions about how to train AI models and what limitations to put on chatbot responses. Some, like Wyden, have argued these tools fall outside the protections of Section 230.\u00a0<\/p>\n

Parent advocates said the newly reported documents \u201cconfirm our worst fears about AI chatbots and children\u2019s safety.\u201d <\/p>\n

\u201cWhen a company’s own policies explicitly allow bots to engage children in ‘romantic or sensual’ conversations, it\u2019s not an oversight, it\u2019s a system designed to normalize inappropriate interactions with minors,\u201d Shelby Knox, campaign director for tech accountability and online safety at ParentsTogether, said in a statement. <\/p>\n

\u201cNo child should ever be told by an AI that \u2018age is just a number\u2019 or be encouraged to lie to their parents about adult relationships,\u201d she continued. \u201cMeta has created a digital grooming ground, and parents deserve answers about how this was allowed to happen.\u201d <\/p>\n

Meta spokesperson Andy Stone said in a statement Thursday that the company has \u201cclear policies\u201d that \u201cprohibit content that sexualizes children and sexualized role play between adults and minors.\u201d <\/p>\n

Additional examples, notes, and annotations on its policies \u201creflect teams grappling with different hypothetical scenarios,\u201d he added, reiterating that the examples in question have been removed. <\/p>\n

The latest firestorm threatens to derail Zuckerberg\u2019s apparent efforts to recast his and Meta\u2019s public image as one more palatable to conservatives. <\/p>\n

He validated conservative censorship concerns<\/a> last year, writing to the House Judiciary Committee that his company had been pressured by Biden-era officials in 2021 to censor content related to COVID-19 \u2014 frustrations he later reiterated during an appearance on Joe Rogan\u2019s podcast.\u00a0<\/p>\n

Zuckerberg also overhauled Meta\u2019s content moderation policies in January, announcing plans to eliminate third-party fact-checking in favor of a community-based program in what he described as an effort to embrace free speech. The move earned praise from President Trump. <\/p>\n

Like other tech leaders, the Meta chief also courted Trump\u2019s favor as he returned to office, meeting with the president-elect at Mar-a-Lago and scoring a front-row seat to the inauguration. <\/p>\n","protected":false},"excerpt":{"rendered":"

Lawmakers on both sides of the aisle are seizing on new revelations about \u201csensual\u201d chatbot conversations Meta deemed acceptable for children, dragging the tech giant and its checkered past on<\/p>\n

Continue reading <\/use> <\/svg>Meta faces backlash over \u2018sensual\u2019 chatbot conversations with children<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":1777,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[11],"tags":[],"_links":{"self":[{"href":"http:\/\/www.logicalware.net\/index.php\/wp-json\/wp\/v2\/posts\/1775"}],"collection":[{"href":"http:\/\/www.logicalware.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.logicalware.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.logicalware.net\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/www.logicalware.net\/index.php\/wp-json\/wp\/v2\/comments?post=1775"}],"version-history":[{"count":1,"href":"http:\/\/www.logicalware.net\/index.php\/wp-json\/wp\/v2\/posts\/1775\/revisions"}],"predecessor-version":[{"id":1776,"href":"http:\/\/www.logicalware.net\/index.php\/wp-json\/wp\/v2\/posts\/1775\/revisions\/1776"}],"wp:featuredmedia":[{"embeddable":true,"href":"http:\/\/www.logicalware.net\/index.php\/wp-json\/wp\/v2\/media\/1777"}],"wp:attachment":[{"href":"http:\/\/www.logicalware.net\/index.php\/wp-json\/wp\/v2\/media?parent=1775"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.logicalware.net\/index.php\/wp-json\/wp\/v2\/categories?post=1775"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.logicalware.net\/index.php\/wp-json\/wp\/v2\/tags?post=1775"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}