A company experiences a particular kind of tension when its past and its future are rapidly diverging. That is roughly where Meta sits at the moment: launching a brand-new AI model on one hand while, on the other, a California jury has just told the world that Instagram was designed in ways that hooked and harmed children.
The verdict came down recently in a case centered on a 20-year-old woman identified in court records as KGM, or Kaley. Her story of severe body dysmorphia, depression, and suicidal thoughts, all allegedly worsened by compulsive Instagram use in her early teens, was detailed enough to be convincing and intimate enough to hit home.
| Category | Details |
|---|---|
| Company Name | Meta Platforms, Inc. |
| Founded | 2004 (as Facebook) |
| CEO | Mark Zuckerberg |
| Headquarters | Menlo Park, California, USA |
| Key Platforms | Facebook, Instagram, WhatsApp |
| Recent Verdict | California jury found Meta liable for designing addictive platforms targeting children |
| Compensatory Damages | $3 million total; Meta responsible for 70% |
| New Mexico Penalty | $375 million — for misleading users and enabling child exploitation |
| AI Initiative | Meta Superintelligence Labs; new Muse Spark model |
| Child Safety Reports (2025) | Nearly 11 million reports filed with NCMEC, including 1.2 million related to child trafficking |
| Legal Status | Evaluating appeal options following California verdict |
The jury did not need abstract statistics; it had a person. After finding that Meta's and Google's negligence was a "substantial factor" in her mental health harms, the jury awarded $3 million in compensatory damages, with Meta responsible for 70% of that amount. Punitive damages may still follow.
It's difficult to ignore how differently Meta's executive team reads the situation. In his trial testimony, Mark Zuckerberg disputed the notion that social media is clinically addictive, a position the company has held for years despite mounting evidence to the contrary.

Instagram head Adam Mosseri called such descriptions "problematic" as recently as February. Yet Instagram's teen accounts feature was introduced only a few years ago, which raises the obvious question: if the platform wasn't hurting young users, why build the safeguards so late?
The timing of Meta's AI announcement, which unveiled the Muse Spark model from the recently formed Meta Superintelligence Labs, seems purposeful, or at the very least well planned. The company scrambled to stand up that AI division after its previous Llama model family fell flat, a delayed rollout that embarrassed it in front of a fast-moving, forgetful industry. Muse Spark is the first real product to emerge from that scramble. Whether it is genuinely impressive or merely well marketed remains unclear.
It’s obvious that Meta desperately needs the AI story. Lawsuits alleging social media addiction are increasing in number. Just one day prior to the California verdict, a New Mexico jury found that Meta permitted child sexual exploitation and deceived users about the security of its platforms, resulting in fines in the hundreds of millions.
TikTok and Snap, originally defendants in the California case, quietly settled before trial. Google and Meta chose to fight, and lost. Sacha Haworth of The Tech Oversight Project declared, "The era of Big Tech invincibility is over," and it's the kind of statement that sticks because it seems plausible.
What seems to be happening here is a slow institutional reckoning with an industry that built its profits on a product it knew could cause harm, much like the tobacco litigation of the 1990s. And like the cigarette companies before it, Meta benefits from the ambiguity of causation. Can a plaintiff prove to a jury that a particular teen's depression was caused by an algorithm? It's hard. But juries, it appears, are growing more comfortable drawing those connections.
Nor has the Senate stayed silent. Senator Chuck Grassley recently opened a separate inquiry into how tech companies report suspected child exploitation, and the early findings are troubling. Amazon's AI services division submitted over a million such reports in 2025, yet none contained enough information for law enforcement to act on. Meta filed nearly 11 million reports that same year. The reporting infrastructure works, in other words. The accountability infrastructure, apparently, still doesn't.
Meta will appeal. So will Google. The legal machinery will grind along at its customary slow pace while Zuckerberg keeps building his AI empire and betting that the future will outrun the past.
The AI pivot may genuinely make the company less dependent on engagement loops and attention traps. It may also prove the most expensive distraction in tech history. Either way, one thing is already clear as this unfolds: Silicon Valley no longer intimidates the courtroom. That alone changes everything.
