Jason Sattizahn

From Wikipedia, the free encyclopedia

Jason Sattizahn is an American integrative neuroscientist, user experience (UX) researcher, and whistleblower known for his contributions to the video game industry and for his 2025 disclosures regarding safety and integrity practices at Meta Platforms.[1][2]


Education and early academic research

Sattizahn was born and raised in Missouri.[3] He attended the University of Chicago, where he earned a PhD in integrative neuroscience from the Psychology department in 2017.[4][5] During his academic tenure, the research he published focused on the limitations and changes of the human mind under anxiety and stress, including how perception impacts education and how hormones interact with mental processing.[3][6][7] His most recent academic work investigated how fluctuating hormone levels and competition outcomes affected working memory and mathematical accuracy.[8] His doctoral adviser was Sian Beilock.[5]

Career

Video game development

Sattizahn served as a UX researcher for Sony Interactive Entertainment (SIE) in the Worldwide Studios Experience Lab.[9][10][11] His research contributed to several major PlayStation titles, including Horizon Zero Dawn (2017)[11] and God of War (2018), which won the 2018 Game of the Year award.[12][9][10]

Meta

Between April 2018 and May 2024, Sattizahn worked as a UX researcher at Meta Platforms.[1][13] During his six-year tenure, he held senior roles leading integrity research for Facebook Marketplace, Facebook's Faith initiative, and ranking.[4] In 2022, he transitioned to the Reality Labs division, where he served as a senior researcher focused on safety and integrity for virtual reality (VR) hardware.[1][4]

Whistleblowing

In September 2025, Sattizahn and five other researchers – comprising both current and former Meta employees – came forward as a group to allege that Meta systematically manipulated, suppressed, or erased internal research regarding harms to its users, including children.[14][15] Represented by the legal non-profit Whistleblower Aid, the group filed a detailed disclosure and provided a trove of internal documents to Congress, the Securities and Exchange Commission (SEC), and the Federal Trade Commission (FTC).[14][16]

On September 9, 2025, Sattizahn and fellow whistleblower Cayce Savage provided sworn testimony before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law in a hearing titled "Hidden Harms".[4][2][16][17] Their testimony alleged that Meta's legal department began intervening aggressively in internal research after the 2021 release of "The Facebook Files" by Frances Haugen. Testimony detailed that the legal department intervened in research to "lock down" information[4] through what Sattizahn described as a "funnel manipulation put on research".[4] Researchers were reportedly instructed to include lawyers in their work so that Meta's legal team could determine what research was performed and how, dictate how findings were written (vaguely, avoiding terms like "illegal" or "not compliant"), and shield potential findings from "adverse parties" via attorney-client privilege.[15]

The testimony provided extensive details regarding Meta's actions, including:

  • Suppression and Deletion of Evidence: Sattizahn testified that Meta "systematically covered up" harms by manipulating and erasing data that Meta deemed unfavorable,[1][16] and that Meta had required researchers to delete data that showed harm to kids occurring on Meta’s platforms.[17]
  • Prioritizing Profits over Safety: Sattizahn stated that Meta was "deliberately compromising internal processes, policies, and research to protect company profits over their users",[4] and that Meta knowingly ignored child safety because "children drive profits".[18][2] Sattizahn and Savage both testified that Meta lacked adequate data regarding the actual ages of its users, allowing many children under 13 to remain on the platform and maintain high metrics,[2][4] and claimed that the company avoided safety measures that would decrease user engagement, monetization, or ad revenue.
  • Hostile Research Environment: Researchers were reportedly told to avoid performing research and work that might produce evidence of child harm.[13][14] Sattizahn also testified that Meta's legal team threatened researchers' careers to compel them to follow its reportedly inappropriate instructions, stating that one Meta lawyer told him, "You wouldn't want to have to testify publicly if this research was to get out, would you?"[17]

References
