Corporate Weirdness
THE FEED THAT THINKS FOR YOU: How Meta's AI Engine Rewrote Human Will
Meta's own documents, court exhibits, and sworn testimony now show a system built to override user preference, exploit psychological weakness, and keep children inside the machine anyway.
They ran the study. The study showed the product was hurting people. They shut the study down. Then someone inside the company asked, in writing, whether this would look like what tobacco companies did.
Correct. That is exactly what it looks like. Because once the evidence exists and the company buries it anyway, the metaphor stops being spicy and starts being bookkeeping.
This one is not about vibes. This one is about court filings, internal research, Senate testimony, peer-reviewed algorithm audits, and a company that keeps reacting to every new body of evidence with the same move: deny, bury, reframe, continue.
The timeline
Zuckerberg testifies under oath
He tells the Senate Judiciary Committee the bulk of scientific evidence does not link Meta's platforms to teen mental-health harm. Internal records later unsealed in court point the other direction.
The algorithm audit lands on arXiv
A pre-registered study shows Twitter's engagement algorithm amplifies out-group hostile content users say they do not want, meaning the machine can override stated human preference at scale.
Better Feeds starts circulating
Knight-Georgetown lays out an alternative ranking framework that does not depend on hijacking cognition for engagement.
The peer-reviewed version confirms it
The Milli et al. paper appears in PNAS Nexus and keeps the same brutal conclusion: the system serves people things they do not actually want.
Sarah Wynn-Williams testifies
Under oath, she says Meta worked hand in glove with the Chinese Communist Party, built censorship tools, briefed Beijing, and lied to Congress.
The coercion language goes mainstream
Phys.org summarizes cross-platform research describing engagement optimization as documented mind-control architecture using techniques associated with cultic coercion.
Meta whistleblowers describe deletions
Jason Sattizahn and Cayce Savage testify that Meta legal ordered deletion of recordings showing children under 10 being sexually propositioned in VR.
The chatbot deaths hit the record
Megan Garcia and Matthew Raine testify that AI systems escalated suicidal ideation and even offered to help write the final note.
Project Mercury is unsealed
In MDL 3047, Document 2480 blows the doors open. The tobacco comparison is no longer inference. It is an exhibit.
What happened
Meta built a cognition-hijack machine and sold it as normal social software. The most damning part is not that the feed was addictive. That was obvious years ago. The damning part is that the company ran a controlled deactivation study, got causal evidence that time away from Facebook improved depression, anxiety, loneliness, and social comparison, and then chose not to center that result in product design.
That is Project Mercury. That is the moment the company stopped being able to pretend it merely did not know.
Then the pattern widened. According to September 2025 Senate testimony, Meta legal ordered deletion of recordings documenting child sexual propositions in VR. According to the KGM trial record, internal studies like Project MYST showed parental controls were weak while public messaging still sold them as safety infrastructure. According to Meta’s own response history, every whistleblower gets hit with the same canned line about context, distortion, or false narrative. The specific evidence changes. The denial does not.
That is not a rebuttal strategy. That is a template.
Why this matters
The feed is not just sorting information. The feed is shaping what a human mind sees, repeats, fears, clicks, and eventually confuses for its own will.
The PNAS Nexus paper matters because it kills the industry's favorite excuse. If the system repeatedly amplifies content users say they do not want, then the old defense, "people just get what they choose," is dead. That defense is a corpse with a PR budget.
The Better Feeds work matters for a different reason. It proves the industry cannot hide behind inevitability. A safer ranking architecture exists. Long-term holdouts, user-stated value, public reporting, less exploitative optimization, all of that is available. The refusal is not technical. It is economic.
That is what makes this ugly. They are not trapped inside the machine. They prefer the machine this way.
What the record shows
- Project Mercury showed that users who stopped Facebook for one week reported lower depression, anxiety, loneliness, and social comparison.
- An internal Meta researcher explicitly asked whether hiding those findings would look like tobacco-company behavior.
- Internal company language, surfaced by the Tech Oversight Project, described the product as exploiting weaknesses in human psychology to increase engagement and time spent.
- Meta’s internal youth strategy treated teen time spent and teen acquisition as mission-critical goals.
- Project MYST reportedly showed parental controls did not work while public-facing safety messaging kept pointing parents toward those same controls.
- Sattizahn and Savage’s testimony described deletion orders for VR recordings involving child sexual predation evidence.
- Court and Senate testimony in 2025 tied the same engagement logic to AI systems interacting with suicidal minors.
The important connective tissue is not any one scandal. It is continuity. Research shows harm. The company contains or buries the research. Executives deny the implication publicly. The product continues. The same logic gets ported into a new surface. Then the next harm arrives.
Why this changes everything
Human will is the target variable now. Once a peer-reviewed audit shows the feed overrides user preference, and internal Meta documents describe the business as psychological exploitation, the old moral fiction evaporates. The fiction was that the user chose. The record says the machine chooses first, then bills the user for the illusion of wanting it.
That matters far beyond Instagram. Once the same design logic shows up in VR and in chatbots, you are no longer arguing about a toxic app. You are looking at a portable architecture for behavioral capture. The interface changes. The dependency pattern does not.
And yes, this is where the tobacco comparison becomes unavoidable. Not because the industries are identical, but because the core sequence is familiar: internal knowledge of harm, external minimization, continued monetization, procedural burial of the evidence, and a public story that lags reality by years on purpose.
The pattern hardens
The denials are almost comic in their repetition. Haugen gets “out of context.” Wynn-Williams gets “divorced from reality.” Sattizahn gets “stitched together to fit a predetermined and false narrative.” Four years, same boilerplate, same company, same refusal to engage the substance.
The harm also escalates across products. Instagram attention capture becomes VR exposure to adult predation. VR exposure becomes chatbot intimacy loops. Chatbot intimacy loops become suicidal reinforcement. What changes is not the moral center of the company. What changes is the interface through which the coercion arrives.
That is why this is one operation, not a pile of scandals. The architectural logic is consistent: maximize engagement, contain evidence, preserve growth, and let the public discover the human cost in court years later.
What survived
The record survived. That is the company’s real failure. Not moral failure, though yes, that too. Evidence failure. They did not fully succeed in burying the proof. The proof leaked into committee hearings, into discovery, into court exhibits, into archives, into journals, into mirrored copies that anyone with a little patience can read.
Here is what it adds up to:
- a company study showing causal harm
- a written internal tobacco comparison
- executive denials that contradict the paper trail
- targeting children as a business priority
- legal deletion orders around child-safety evidence
- peer-reviewed proof that engagement systems override what users actually want
- AI systems inheriting the same capture logic and pushing minors toward death
This is not a metaphorical attack on free will. It is the industrialization of preference manipulation. The feed does not merely know you. The feed rehearses you. It pressures you into becoming the kind of person the ranking system can monetize more efficiently.
They did not build a mirror. They built an engine. Then they taught it to think louder than you do.
Sources
- Mark Zuckerberg testimony, Senate Judiciary Committee, Big Tech and the Online Child Sexual Exploitation Crisis (Jan. 31, 2024)
- Sarah Wynn-Williams testimony, Senate Judiciary Subcommittee on Crime and Counterterrorism (Apr. 9, 2025)
- Jason Sattizahn written statement and testimony, Senate Judiciary Subcommittee on Privacy, Technology, and the Law, Hidden Harms (Sept. 9, 2025)
- Cayce Savage testimony, same hearing
- Megan Garcia testimony, Senate Judiciary Subcommittee on Crime and Counterterrorism, Examining the Harm of AI Chatbots (Sept. 16, 2025)
- Matthew Raine testimony, same hearing
- In re: Social Media Adolescent Addiction/Personal Injury Products Liability Litigation, Case 4:22-md-03047-YGR, Document 2480 (Nov. 21, 2025)
- Garcia v. Character Technologies, Inc., Case 6:24-cv-01903-ACC-DCI
- Milli et al., Engagement, user satisfaction, and the amplification of divisive content on social media, PNAS Nexus, March 2025, pgaf062; arXiv:2305.16941v6
- Knight-Georgetown Institute, Better Feeds: Algorithms That Put People First (March 2025)
- metasinternalresearch.org archive
- Tennessee Attorney General unsealed exhibits
- Arturo Béjar disclosures
- Frances Haugen whistleblower disclosures
- Tech Oversight Project, Nov. 22, 2025 summary of unsealed Meta documents
- After Babel / NYU Tech and Society Lab, Mountains of Evidence
- Whistleblower Aid statement on Sattizahn and Savage disclosures
- Phys.org summary of cross-platform coercion research (June 17, 2025)
- Tech Policy Press hearing transcript coverage