OpenAI has quietly removed language endorsing “politically unbiased” AI from one of its recently published policy documents.
In the original draft of its “economic blueprint” for the AI industry in the U.S., OpenAI said that AI models “should aim to be politically unbiased by default.” A new draft, made available Monday, deletes that phrasing.
When reached for comment, an OpenAI spokesperson said that the edit was part of an effort to “streamline” the doc, and that other OpenAI documentation, including OpenAI’s Model Spec, “make[s] the point on objectivity.” The Model Spec, which OpenAI released in May, aims to shed light on the behavior of the company’s various AI systems.
But the revision also hints at the political minefield that the discourse around "biased AI" has become.
Many of President-elect Donald Trump’s allies, including Elon Musk and crypto and AI “czar” David Sacks, have accused AI chatbots of censoring conservative viewpoints. Sacks has singled out OpenAI’s ChatGPT in particular as “programmed to be woke” and untruthful about politically sensitive subjects.
Musk has blamed both the data AI models are being trained on and the “wokeness” of San Francisco Bay Area firms.
“A lot of the AIs that are being trained in the San Francisco Bay Area, they take on the philosophy of people around them,” Musk said at a Saudi Arabia government–backed event last October. “So you have a woke, nihilistic — in my opinion — philosophy that is being built into these AIs.”
In truth, bias in AI is an intractable technical problem. Musk’s AI company, xAI, has itself struggled to create a chatbot that doesn’t endorse some political views over others.
A paper from U.K.-based researchers published in August suggested that ChatGPT has a liberal bias on topics such as immigration, climate change, and same-sex marriage. OpenAI has asserted that any biases that show up in ChatGPT “are bugs, not features.”