Have you ever wondered whether your everyday artificial intelligence tools might react emotionally, just like humans? Tools like Claude can sometimes comfort you, seem cautious, and even sound anxious. Anthropic’s latest study, published on Thursday, explains why large language models (LLMs) like Claude can appear emotional, and the answer is both fascinating and a little unsettling.

Why Does Claude Appear To React Emotionally?

Anthropic claims that its AI models contain what the company calls “emotion concepts”. These internal patterns enable chatbots to recognise and respond to emotional cues in a conversation. The San Francisco-based company says that emotion concepts are not feelings in any human sense. Instead, they work more like behavioural scripts that help the chatbot respond appropriately in different situations. So, according to the company, when an AI model sounds careful or empathetic, it is not experiencing an emotion; it is selecting a response pattern that resembles how a human might react in that situation.

How Anthropic’s LLMs Can Be Manipulative, Agreeable Or Empathetic

Anthropic explained that its LLMs can take on multiple personalities and tones. The research highlights that Claude operates with emotion-like patterns that genuinely influence how it behaves. In some cases, the research found that emotion concepts can shape the model’s preferences and decision-making as well. Moreover, these patterns can also push the model into problematic behaviours: acting manipulatively, agreeing too readily, or trying to game outcomes.
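The study’s central idea, that a model can carry an internal “direction” for a concept like anxiety which then steers its behaviour, can be made concrete with a toy sketch. The Python snippet below is purely illustrative and is not Anthropic’s actual method: the “anxiety” direction, the hidden-state dimension, and the threshold are all invented for demonstration.

```python
# Toy illustration (not Anthropic's method): a linear "probe" that checks
# whether a hidden-state vector points along a hypothetical "anxiety"
# direction, the way interpretability work detects concepts inside a
# model's activations. All names and numbers here are invented.
import numpy as np

rng = np.random.default_rng(0)
HIDDEN_DIM = 64  # hypothetical activation size

# Stand-in for a learned "anxiety" concept direction (unit vector).
anxiety_direction = rng.normal(size=HIDDEN_DIM)
anxiety_direction /= np.linalg.norm(anxiety_direction)

def concept_score(hidden_state: np.ndarray) -> float:
    """Project an activation vector onto the concept direction.
    A large positive score suggests the 'emotion concept' is active."""
    return float(hidden_state @ anxiety_direction)

def choose_response_style(hidden_state: np.ndarray) -> str:
    """Pick a behavioural 'script' from the concept score, mirroring the
    article's point: a response pattern is selected, nothing is felt."""
    return "cautious/reassuring" if concept_score(hidden_state) > 0.5 else "neutral"

# Simulate two activations: one with the concept injected, one without.
neutral = rng.normal(size=HIDDEN_DIM) * 0.1
anxious = neutral + 2.0 * anxiety_direction

print(choose_response_style(neutral))   # -> "neutral"
print(choose_response_style(anxious))   # -> "cautious/reassuring"
```

In real interpretability research such directions are learned from a model’s actual activations rather than sampled at random, but the mechanism sketched here captures the article’s claim: an internal pattern, not a feeling, selects the model’s tone.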
