
Lawsuit: Meta allowed sex-trafficking posts on Instagram as it put profit over kids’ safety

A significant new court filing claims Facebook and Instagram owner Meta had a “17x” policy allowing sex traffickers to post content related to sexual solicitation or prostitution 16 times before their accounts were suspended on the 17th “strike.” The allegation is one of many in the filing claiming Meta chose profit and user engagement over the safety and well-being of children.

The description of the purported sexual content policy at Instagram is contained in a new court filing by plaintiffs in an ongoing lawsuit against Meta, Google’s YouTube, Snapchat owner Snap and TikTok brought by children and parents, school districts and states — including California. They accuse the companies of intentionally addicting children to products they knew were harming them.

The filing makes the same general allegations against the four companies — that they targeted children and schools, and misrepresented their social media products — along with company-specific claims.

“Despite earning billions of dollars in annual revenue — and its leader being one of the richest people in the world — Meta simply refused to invest resources in keeping kids safe,” claimed the filing, made Friday in U.S. District Court in Oakland. The lawsuit also targets products it deems harmful from Facebook, YouTube, Snapchat and TikTok. The plaintiffs seek unspecified damages, along with a court order requiring the companies to stop the alleged “harmful conduct” and to warn minor users and parents that their products are addictive and dangerous.

The new filing also claims Meta’s “outright lies” about its products’ harms prevented “even the most vigilant administrators, teachers, parents, and students from understanding and heading off the dangers inherent to Instagram and Facebook.”

A Meta spokesperson denied the accusations: “We strongly disagree with these allegations, which rely on cherry-picked quotes and misinformed opinions in an attempt to present a deliberately misleading picture.” For more than a decade, Meta has “listened to parents, researched issues that matter most, and made real changes to protect teens – like introducing Teen Accounts with built-in protections and providing parents with controls to manage their teens’ experiences,” the spokesperson said.

Snap criticized the allegations for misrepresenting its platform, which, unlike other social media platforms, has no public likes or comparison metrics. “We’ve built safeguards, launched safety tutorials, partnered with experts, and continue to invest in features and tools that support the safety, privacy, and well-being of all Snapchatters,” a Snap spokesperson said in an emailed statement Tuesday.

Google and TikTok did not immediately respond to requests for comment.

The plaintiffs cited what they described as internal company communications and research reports, and sworn depositions by current and former employees. The records are largely sealed by the court and could not be verified by this news organization.

The new filing claimed an account-recommendation feature on Instagram in 2023 recommended nearly 2 million minors to adults seeking to sexually groom children. More than 1 million potentially inappropriate adults were recommended to teen users in a single day in 2022, an internal audit found, according to the filing.

Facebook’s recommendation feature, according to a Meta employee, “was responsible for 80% of violating adult/minor connections,” the filing said.

In March 2020, Meta CEO Mark Zuckerberg told reporters his Menlo Park company was “actually surging the number of people” working on sensitive content including “child exploitation,” but internal communications cited in the court filing indicated that was not true, and the company had only about 30% of the staff it needed to review child-exploitation imagery.

Instagram as of March 2020 had no way for people to report the presence of child sexual abuse material, the filing said. When Vaishnavi Jayakumar, head of safety at Instagram from 2020 to 2023, first started at the company, she was told that building a reporting process would be too much work, and that reviewing the reports would be even more, the filing said.

Even when Meta’s artificial intelligence tools identified child pornography and child sexualization material with 100% confidence, the company did not automatically delete it, the filing claimed. The company, which made $62.4 billion in profit last year, declined to tighten enforcement for fear of “false positives,” but could have solved the problem by hiring more staff, the filing said.

Jayakumar testified in a deposition that when any proposed changes that might reduce user engagement went up to Zuckerberg for review, the result would be a decision to “prioritize the existing system of engagement over other safety considerations,” the filing said.

Instead of simply making kids’ accounts private by default to protect them from adult predators, Meta “dragged its feet for years before making the change — allowing literally billions of unwanted adult-minor interactions to happen,” the filing claimed. Key to the delay, the filing alleged, was the internal projection that the change would cut daily users by 2.2%.

The company didn’t apply default privacy to all teens’ accounts until the end of last year, the filing said.

The lawsuit — still in an evidence-gathering discovery phase — also takes aim at Meta’s approach to children’s mental health and the purported damaging fallout for schools, where social media, the filing claims, has created “a compromised educational environment” and forced school districts to spend money and resources “to address student distraction and mental health problems.”

Internally, Meta researchers said of Instagram, “We’re basically pushers,” and “Teens are hooked despite how it makes them feel,” the filing said.

Meta “allowed its products to infiltrate classrooms, disrupt learning environments, and contribute directly to the youth mental health crisis now overwhelming schools nationwide,” claimed the filing, which accused Zuckerberg and Meta of lying to Congress.

Zuckerberg testified three times to Congress that he didn’t give his teams goals to increase time users spent on Meta’s platforms. But several internal messages referred to goals for teens’ time spent, and Zuckerberg himself said of growth metrics, “The most concerning of these to me is time spent,” the filing said.

When Meta in a late 2019 internal study called “Project Mercury” found that people who stopped using Facebook for a week reported feeling less depressed, anxious, lonely and judged socially, the company killed the project, the filing alleged. But in a U.S. Senate hearing, a company representative (who was not identified in the filing) was asked if Facebook could tell if increased use of its platform by teen girls was connected to increased signs of anxiety, and responded, “No,” the filing claimed.

Meta said publicly that a maximum of half a percent of users were exposed to suicide and self-harm material, but its own study found the number was around 7%, the filing said.

Brian Boland, who rose over 11 years to a vice president position at Meta and left in 2020, testified in a deposition, “My feeling then and my feeling now is that they don’t meaningfully care about user safety.”

When a Meta employee criticized the company for hiding “likes” on posts, a change intended to protect children’s mental health, only for users who opted in, a member of Meta’s “growth team” pushed back. “It’s a social comparison app,” the member said, “[expletive] get used to it,” the filing said.
