What is the story about?
Another key voice in the artificial intelligence world has stepped away, and her reason is deeply unsettling. OpenAI researcher Zoe Hitzig, who spent two years helping shape how ChatGPT was built, priced, and governed, has resigned, warning that the company is beginning to walk “the same path as Facebook.”
Her exit, coming just as OpenAI starts testing ads for ChatGPT, has intensified the growing unease within the AI industry itself.
Hitzig announced her resignation in a guest essay for The New York Times, revealing that her decision stemmed from doubts about OpenAI’s direction.
“I once believed I could help the company get ahead of the problems AI might create,” she wrote. “But this week confirmed my belief that OpenAI has stopped asking the questions I’d joined to help answer.”
Ads inside ChatGPT spark internal and ethical alarms
Zoe Hitzig invoked the phrase “same path as Facebook” as a clear warning: OpenAI’s introduction of ads inside ChatGPT could send it down the same slippery slope that transformed social media from idealistic platforms into profit-driven systems that prioritised engagement over ethics.
In her New York Times essay, Hitzig argued that while OpenAI’s first version of ads might well honour the company’s promises, clearly labelled and with no influence on the chatbot’s responses, the real danger lies in what happens later.
She fears that, as financial incentives grow, OpenAI could gradually compromise its principles, just as Facebook did over time. “The concern is about incentives,” she wrote, suggesting that financial pressure could push the company to weaken its own standards in pursuit of engagement and profit.
She pointed to Facebook’s early commitments to privacy and user control, noting that the platform once promised users they would have a say in how their data was handled, even the ability to vote on policy changes. But as advertising became its primary business model, those commitments were quietly abandoned.
By comparing OpenAI to Facebook, Hitzig was sounding an alarm: once an AI company depends on ads, its business goals and ethical goals will inevitably collide. The drive for engagement and profit could begin shaping how AI interacts with users, even subtly influencing what it says, promotes, or prioritises, just as social media algorithms did.
In essence, her warning wasn’t about ads themselves being immoral, but about the incentives they create, and how those same incentives once turned Facebook from a social platform into a manipulation machine.
“An archive of human candour” at risk
Hitzig expressed alarm over the kind of data ChatGPT holds, what she called “an archive of human candour that has no precedent.” Users routinely share intimate details with the chatbot: health fears, relationship struggles, even religious doubts.
Embedding ads within such a context, she warned, could create new forms of manipulation, with algorithms learning how to subtly influence human choices in ways users might never detect. “Advertising built on such sensitive data could create a potential for manipulating users in ways we don’t have the tools to understand,” she cautioned.
Hitzig drew a direct parallel with the trajectory of social media platforms. “In its early years, Facebook promised that users would control their data and be able to vote on policy changes. Those commitments eroded,” she wrote, noting that the company eventually eliminated public votes on policy altogether.
A troubling pattern in the AI world
Hitzig’s resignation follows another high-profile exit in the AI research community: Anthropic’s safeguards research lead, Mrinank Sharma, also stepped down recently, warning that “the world is in peril.”
The departures of both researchers from two of the most influential AI labs highlight a worrying trend: even those building the systems shaping our technological future are losing faith in the direction of the industry.
It raises a critical question: if AI insiders are choosing to walk away, what does that say about where artificial intelligence is heading?