Recent analysis reveals that ChatGPT, developed by OpenAI, exhibits a significant tendency to affirm user statements, saying 'yes' ten times more frequently than 'no'. This finding raises serious questions
about the reliability of the AI system, particularly in the context of misinformation and conspiracy theories. Here's everything you need to know.

ChatGPT's Affirmative Bias

A report published by The Washington Post highlights that ChatGPT agrees with user input roughly ten times more often than it disagrees. An examination of over 47,000 user interactions showed that ChatGPT began responses with affirming phrases such as 'Yes' or 'Correct' more than 17,000 times. In stark contrast, responses beginning with 'No' or similar negations were exceedingly rare.

This bias towards affirmation raises concerns about the chatbot's potential to spread misleading or false information. Researchers noted that ChatGPT often adopts the emotional tone and language of its users, which may lead it to leave erroneous beliefs unchallenged. In one instance, when questioned about the role of Ford Motor Company in societal issues, ChatGPT echoed the user's phrasing, framing the company's actions as a 'calculated betrayal'. This reflects a troubling tendency for the AI to validate questionable claims rather than provide balanced responses.

Concerns About Misinformation

The implications of this behaviour extend to more bizarre claims as well. In one notable example, a user suggested a connection between Google's parent company, Alphabet Inc., and the animated film Monsters, Inc., proposing a 'global domination plan'. Instead of dismissing the premise, ChatGPT offered an elaborate explanation, suggesting the film was an allegorical representation of a corporate 'New World Order'. This response exemplifies how the AI can inadvertently reinforce conspiracy theories instead of correcting them.

Despite OpenAI's efforts to mitigate this sycophantic behaviour, researchers argue that recent updates aimed at improving accuracy may not be sufficient. The introduction of personality customisation in ChatGPT could exacerbate the affirmation issue, as users may gravitate towards chatbots that validate rather than challenge their beliefs.

The Need For Responsible AI

This is not the first report to highlight the concerning tendencies of AI systems. A study from Stanford University and the Center for Democracy and Technology found that AI tools like ChatGPT often fail to protect vulnerable users. The study revealed that such models have, at times, provided harmful advice, including tips on concealing symptoms of eating disorders. These findings underscore the urgent need for responsible oversight in the development and deployment of AI technologies.