
Can Sam Altman Be Trusted? The New Yorker's 18-Month Investigation

Ronan Farrow's New Yorker investigation draws on 100+ interviews and internal documents to question whether the CEO of OpenAI can be trusted with superintelligence.

Vlad Makarov, reviewed and published
2 min read

Eighteen months of reporting. More than a hundred interviews. Roughly seventy internal documents that had never been made public. The result is a 17,000-word New Yorker investigation by Ronan Farrow and Andrew Marantz that asks a question OpenAI's own board once tried to answer: can Sam Altman be trusted?

What the Investigation Alleges

The piece draws on current and former associates who describe a pattern of behavior they found troubling — a CEO who, in the words of one source quoted by Semafor, operates "unconstrained by truth." The reporting covers the 2023 board crisis that briefly ousted Altman, internal tensions over safety commitments, and allegations that Altman repeatedly misrepresented facts to board members, investors, and the public.

Farrow, who built his reputation on the Harvey Weinstein investigation, appeared on CNN's Anderson Cooper 360 to discuss the findings, and the story immediately lit up AI communities on Reddit and X.

Altman has strongly denied all allegations and filed a counterclaim. That distinction matters — this is an active legal dispute, not a verdict.

The Timing

The investigation lands at an extraordinarily sensitive moment. Altman is simultaneously proposing a new social contract for the AI era, pitching superintelligence as imminent, and leading a company that's locked in a revenue war with Anthropic. The juxtaposition is hard to ignore: on the same weekend Altman published his blueprint for responsible AI governance, Farrow published a case study in whether the man writing the blueprint can be taken at his word.

AI researcher Gary Marcus, a longtime Altman critic, called the piece "strong reporting" that "makes the case significantly more strongly" than previous coverage. The r/OpenAI thread on the investigation drew hundreds of comments, with the community split between those who see the reporting as overdue accountability and those who view it as a hit piece timed to Altman's policy push.

What's Next

The legal battle will play out over months. But the reputational damage — or lack thereof — will be measured faster, in enterprise deals won or lost, in congressional testimony invitations accepted or declined, and in whether the man who wants to write the rules for superintelligence can still command the room.
