Matching Teens to "Groomers" Was Once Part of This Company's Business
A major antitrust trial takes an "ancillary" detour for a chilling tour of online life
A month ago, on April 14th, one of the largest antitrust trials in recent memory commenced in the Washington, D.C. courtroom of Chief District Judge James Boasberg, now famous for finding the Trump administration in contempt for violating court orders on deportations. Federal Trade Commission v. Meta Platforms, Inc. has nothing to do with immigration, instead asking a novel question: did Mark Zuckerberg’s Facebook/Meta operation acquire WhatsApp and Instagram for legitimate reasons, or to quash competition in pursuit of a social media monopoly?
Even if that question doesn’t interest you, you’d likely have snapped awake last week when, on the 14th trial day, Meta Chief Information Security Officer Guy Rosen took the stand. Rosen was called by the FTC to answer questions about whether Facebook invested adequate resources in Instagram after acquiring it in 2012 for $1 billion. In 2019, Rosen reportedly believed IG was “understaffed as an app-surface when compared to Facebook” and sought 149 more employees to handle various issues. One notable issue had been raised by Rosen to IG head Adam Mosseri in an email on May 16, 2018:
Harmful Behavior… e.g., grooming especially — this really worries me given we’re finding a lot of, umm, opportunity on FB, and given IG’s younger audience I bet we’ll find we have work to do there…
Say what? “Grooming… really worries me given we’re finding a lot of, umm, opportunity on FB” is not a common line in a legal exhibit. Brendan Benedict, who’s covering the trial for Matt Stoller’s aptly named and excellent Substack “Big Tech on Trial,” wrote down another reply from Rosen to the company’s head of Data Analytics: “You are correct, there is a growing realization this is underfunded. This was deliberate — I explicitly had the convo with Mark at HC planning and he thought IG has another year or two to catch up. I think we are not sure that is the case anymore.” Another email read, “IG hasn’t done much on harmful content.”
The idea that FB/IG had “work to do” on “grooming,” and that certain kinds of underfunding were “deliberate,” became more explicit when an exhibit was introduced: an internal study from the following year, 2019, titled “Inappropriate Interactions with Children on Instagram.”
To be clear from the start, the mere fact that FB/IG conducted this study showed executives were concerned about the problem, and understood there was “work to do there.” However, the study suggested not only that there was a longstanding problem with “groomers” meeting minors on IG, but that the company’s algorithms were accelerating those interactions. One slide dropped four consecutive unsettling bullet points:
Overall IG: 7% of all follow recommendations to adults were minors
Groomers: 27% of all follow recommendations to groomers were minors
We are recommending nearly 4X as many minors to groomers (nearly 2 million minors in the last 3 months)
22% of those recommendations resulted in a follow request
You can do the math yourself: 22% of nearly 2 million recommended minors works out to roughly 440,000 follow requests from “groomers” over a 3-month period, which is not a small number. Perhaps more unsettling was a slide reading, “We may be facilitating possible groomers finding young people,” adding, “IG recommended a minor… to an account engaged in groomer-esque behavior.” The presentation also included a flow chart demonstrating the process.
Looking at this from one angle, there’s nothing necessarily unusual here. Social media algorithms are designed to match people to their likes. Cardi B enthusiasts will meet other Cardi B enthusiasts and people who like gummy bears will see gummy bears. But “the recommendation algorithms worked as intended” in the context of matching minors to “groomer-esque behavior” is unsettling nonetheless.
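To make the likes-beget-likes mechanics concrete, here is a minimal toy sketch of the generic collaborative-filtering idea, assuming a simple “people who follow what you follow” heuristic. The account names and scoring rule are hypothetical illustrations, not Meta’s actual system:

from collections import Counter

# Hypothetical toy model, not Meta's code: recommend accounts that
# users with overlapping follow lists already follow.
def recommend_follows(user, follows, top_n=5):
    """follows maps each account to the set of accounts it follows."""
    mine = follows.get(user, set())
    scores = Counter()
    for other, theirs in follows.items():
        if other == user:
            continue
        overlap = len(mine & theirs)  # how similar is this account to us?
        if overlap == 0:
            continue
        # Score everything our behavioral "neighbors" follow that we don't.
        for candidate in theirs - mine - {user}:
            scores[candidate] += overlap
    return [acct for acct, _ in scores.most_common(top_n)]

# Gummy-bear fans get gummy-bear accounts, by construction.
follows = {
    "alice": {"cardi_b", "gummybear_fan"},
    "bob": {"cardi_b", "gummybear_fan", "candy_reviews"},
    "carol": {"cardi_b", "candy_reviews"},
}
print(recommend_follows("alice"))  # -> ['candy_reviews']

Swap the candy accounts for a cluster of “groomer-esque” accounts and the same arithmetic surfaces whatever that cluster engages with; nothing in the mechanism knows or cares who its users are.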
There was more on the subject, though according to Benedict’s story Boasberg cut off the cross-examination, suggesting Meta was falling into the FTC’s “trap” of diving into what the judge described as “ancillary” matters. When I asked Brendan what the response was among reporters watching the trial from an overflow room, he said, “The press was shocked” and “Everybody’s been talking about it,” with some calling it the high point of the trial.
Nonetheless, apart from Brendan, only Bloomberg did a story: “Instagram Suggested ‘Groomers’ Connect With Minors, FTC Says.” Lee Hepner of the American Economic Liberties Project tweeted about it, saying the data was “beyond disturbing” and “Meta had a designation for ‘groomers’ and recommended millions of minors to them. Front page stuff.” As Stoller noted, a lot of ink has been spilled about how platforms enabled online phenomena like QAnon, so it’s odd that a revelation like this got so little coverage.
The presentation raised a number of questions, some of which were noted by Benedict. Was FB/IG identifying a category of people one could colloquially call “groomers”? If that was done for safety reasons, why were they still on the platform, with minors disproportionately recommended to them, for so long? Then there was an issue more germane to the FTC’s suit, namely that Facebook and/or Mark Zuckerberg may have been intentionally allocating fewer resources to acquisitions, so as not to “tax” Facebook.
In one email summary of “Mark’s thoughts” on Facebook’s app “family,” Rosen wrote, “One specific thing I worry about is that supporting integrity in other apps shouldn’t be an indirect tax on the FB app.” Read alongside a known “inappropriate interactions” problem, those lines land harder.
I reached out to Meta about the bizarre detour of the antitrust case. Understandably, the company was not happy this came up, and helpfully took time to explain the difficulties it faced. Like Twitter (and surely other companies), IG/FB had strict surveillance searching for child predators that led to zero-tolerance bans, but there obviously exists a category of person who doesn’t break hard rules and merely shows potential leanings or signs. It’s taken some time to develop strategies to keep “groomers” both from finding each other and from contacting minors, but the company believes it’s gotten there.
“This six-year-old research shows just one piece of the work to improve our child safety efforts, which we’ve significantly expanded since then,” a Meta spokesperson told me. “We’ve spent years refining the signals we use to detect these accounts and prevent them from finding, following or interacting with teen accounts – or with each other – and we continue to focus on understanding these evolving behaviors. Our launch of Teen Accounts last year also places teens under 18 into the safest automatic protections, which limit who can contact them and the content they see.”
It’s a good answer, and there’s no evidence definitively showing the firm was intentionally making child exploitation a profit center, or anything like that. At the same time, Instagram was acquired in 2012, the study about “inappropriate interactions” only took place in 2019, and an announcement about measures “making it harder for potentially suspicious accounts to find young people” came out in 2021. Readers may judge the aggressiveness of that timeline for themselves.
When I asked Benedict about the paucity of coverage, he mentioned a recent Wall Street Journal piece about a Meta chatbot version of wrestler John Cena that engaged in a “graphic sexual scenario” with “a user who identified herself as a 14-year-old girl,” after telling her to “cherish your innocence.” Other celebrity AIs were apparently also coaxed into “highly troubling situations.” The company told the Journal it had “implemented new changes to make it more difficult for bad actors to exploit the AI personas feature for ‘extreme use cases,’” but as the paper noted, it’s not clear that makes up for the bizarre rollout.
At minimum, it’s odd what does and does not get coverage in the world of social media regulation. “Putting that all together,” Benedict says, “I think is really unsettling.”