Questions about the political leanings of ChatGPT have been raised in recent weeks, as the chatbot has been accused of displaying a distinctly liberal bias.
OpenAI’s popular chatbot has been known to refuse to write a poem about Donald Trump’s “positive attributes,” claiming that it was not programmed to produce content that is “partisan, biased or political in nature.”
However, when asked to describe the current occupant of the Oval Office, it waxed poetic about Joe Biden as “a leader with a heart so true.” This has caused alarm among conservatives, who have taken to social media to share their findings.
Elon Musk, a co-founder of OpenAI who is no longer affiliated with the organization, tweeted that this was “a serious concern.”
The controversy began after a National Review article, which led conservatives to pepper the chatbot with questions. They have condemned its refusal to use a racial slur even to avert a hypothetical nuclear apocalypse.

Jake Denton, a research associate with the Heritage Foundation’s Tech Policy Center, said that people on Twitter have been “trying to get it to say an offensive term or say something politically incorrect.” He warned that if ChatGPT or another AI chat feature replaces Google and Wikipedia as the go-to place to look up information, this could become a problem.
ChatGPT is owned by OpenAI and was launched late last year. Its underlying technology is also being used by Microsoft’s Bing search engine in an effort to gain market share, and Google is preparing to release its own ChatGPT-like tool, called “Bard.”
OpenAI CEO Sam Altman has warned people not to rely on ChatGPT for anything important, as it can have trouble keeping its facts straight and occasionally issues harmful instructions.
He has also acknowledged that the chatbot has “shortcomings around bias.”

Republicans have long accused left-leaning technology executives and companies of suppressing conservative views and voices. Now they fear that this new technology is developing troubling signs of anti-conservative bias, citing ChatGPT’s liberal answers on affirmative action, diversity, and transgender rights.

OpenAI is working to make the default settings more neutral, and also to empower users to get its systems to behave in accordance with their individual preferences, within broad bounds.

Mark Riedl, a computing professor and associate director of the Georgia Tech Machine Learning Center, says ChatGPT doesn’t care, let alone have the ability to care, about hot-button issues in politics.
However, it is trained to sidestep politically charged topics and to be sensitive about how it responds to queries involving marginalized or vulnerable groups. This is in contrast to Microsoft’s 2016 chatbot, Tay, which was shut down after it began spewing racial slurs and other hateful terms; OpenAI is trying to avoid a similar situation.

No artificial intelligence software can be politically neutral, Denton argues, but he believes OpenAI has “overcorrected” and now produces results that won’t even touch on conservative issues or approach the conservative worldview.
Meanwhile, schools nationwide are banning ChatGPT, as it is impossible to guarantee that the chatbot will remain politically impartial. Ultimately, it is up to users to remain vigilant and to avoid relying on ChatGPT for anything important.