Billy Perrigo
I'm Billy, a tech correspondent at TIME magazine. I mostly write about how artificial intelligence and social media platforms are reshaping our world.
I broke the news that OpenAI used outsourced Kenyan workers earning less than $2 per hour to detoxify ChatGPT, work that left many of them traumatized.
I was shortlisted for the Orwell Prize in 2022 for my article Inside Facebook's African Sweatshop. The details first revealed in that story form the basis of two ongoing lawsuits against Meta in Kenya, and preceded a successful unionization vote by many of the affected workers.
As well as covering AI 'from below,' as a historian might put it, I have also interviewed the world's most influential AI CEOs, including OpenAI's Sam Altman, Google DeepMind's Demis Hassabis, and Anthropic's Dario Amodei.
And I love profiling smaller companies and nonprofits that are trying to improve the tech ecosystem, such as Karya, Signal, Block Party, and Polis.
My previous role at TIME was on the international team. As well as reporting tech-focused stories, I covered news from around the globe, including the COVID-19 pandemic, the fallout from Brexit in the U.K., and the transformation of India under Narendra Modi.
I have a first-class degree in modern history from the University of Warwick. My dissertation was a study of how emerging communications technologies facilitated the rise of Hindu nationalism in postcolonial India. As a student, I was an investigative reporter and deputy news editor for my university newspaper, the Warwick Boar.
I'm open to TV, radio, podcast, and IRL public speaking invitations. Here's how to contact me.
My best work
Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic
This investigative report was the first to reveal that OpenAI detoxified ChatGPT with the help of outsourced Kenyan workers. Many of those workers, who were paid less than $2 per hour, said they were traumatized by the job. The story was also one of the first to detail the darker side of "reinforcement learning from human feedback," a leading method for making large language models fit for human consumption.
Inside Facebook's African Sweatshop
This investigation, which preceded the above story by a year, was the first to reveal the existence of a Facebook content moderation facility in Kenya where outsourced workers were paid as little as $1.50 per hour. Whistleblowers said the job, which required viewing abhorrent videos and images, traumatized them for life. The story also details a unionization effort that workers allege was illegally busted.
The Workers Behind AI Rarely See Its Rewards. This Indian Startup Wants to Fix That
After years reporting on the darker side of AI data labeling, I traveled to India to report this feature about a nonprofit called Karya that is trying to disrupt the industry for the better. Workers there don't handle traumatic content, are paid 20 times the local minimum wage, and earn an extra payment every time their data is resold.
DeepMind's CEO Helped Take AI Mainstream. Now He's Urging Caution
Weeks before ChatGPT was released, I sat down with Demis Hassabis, the CEO of Google DeepMind. The result was this profile, which looks at the trajectory that led him to become one of the most influential people in the entire tech industry. He used the interview to raise the alarm about the potential dangers of rapid AI progress, before it became cool to do so.
Inside Anthropic, the AI Company Betting That Safety Can Be a Winning Strategy
Founded by a team who quit OpenAI after strategic disagreements and a breakdown in trust, Anthropic has quickly become one of Silicon Valley's leading AI companies. For this feature, I sat down with CEO Dario Amodei and many of the lab's top executives to discuss whether it's truly possible for an AI company to prioritize safety over profitability.
How Facebook Forced a Reckoning by Shutting Down the Team That Put People Ahead of Profits
The same week that whistleblower Frances Haugen went public with her revelations about Facebook, we published this cover story about the turmoil inside the company. It's a deep look at the 'civic integrity' team Haugen worked on before she quit: a team founded to serve the public interest that repeatedly ran up against executives committed to maximizing user engagement and minimizing political blowback from the first Trump Administration.