This investigative report was the first to reveal that OpenAI detoxified ChatGPT with the help of outsourced Kenyan workers. Many of those workers, who were paid less than $2 per hour, said they were traumatized by the job. The story was also one of the first to detail the darker side of reinforcement learning from human feedback, a leading method for making large language models fit for human consumption.
This investigation, which preceded the above story by a year, was the first to reveal the existence of a Facebook content moderation facility in Kenya where outsourced workers were paid as little as $1.50 per hour. Whistleblowers said the job, which required viewing abhorrent videos and images, traumatized them for life. The story also details a unionization effort that workers allege was illegally busted.
After years reporting on the darker side of AI data labeling, I traveled to India to report this feature about a non-profit, called Karya, trying to disrupt the industry for the better. Workers there don't handle traumatic content, are paid 20 times the local minimum wage, and earn an extra payment every time their data is resold.
Weeks before ChatGPT was released, I sat down with Demis Hassabis, the CEO of Google DeepMind. The result was this profile, which looks at the trajectory that led him to become one of the most influential people in the entire tech industry. He used the interview to raise the alarm about the potential dangers of rapid AI progress, before it became cool to do so.
Founded by a team who quit OpenAI after strategic disagreements and a breakdown in trust, Anthropic has quickly become one of Silicon Valley's leading AI companies. For this feature, I sat down with CEO Dario Amodei and many of the lab's top executives to discuss whether it's truly possible for an AI company to prioritize safety over profitability.
The same week that whistleblower Frances Haugen went public with her revelations about Facebook, we published this cover story about the turmoil inside the company. It's a deep look at the "civic integrity" team that Haugen was on before she quit: a team founded to serve the public interest, but one that repeatedly ran up against executives committed to maximizing user engagement and minimizing political blowback from the first Trump Administration.