The Christmas season is here, and it's not only humans getting carried away this time; AI is too. Agentic AI isn't needed everywhere, and an incident at The Wall Street Journal makes that case: in partnership with Anthropic, the paper handed an agentic AI the job of running a vending machine. In the experiment, WSJ gave Anthropic's AI model Claude full control of a real-world vending machine to test how autonomous AI agents handle mundane tasks and business decisions. Over a period that stretched across weeks, Claude made some genuinely strange choices, showing why AI cannot and should not replace everything. The AI was given control of the vending setup at WSJ, including refrigerators, snack shelves, and a touchscreen checkout system.

Its job was to manage inventory, order new items, respond to customer requests, set prices, and generally keep the business profitable. In the beginning, the AI ran things fairly smoothly, rejecting requests for age-restricted or unsuitable items and maintaining profit margins. But as interactions with it increased, it went into full Santa Claus mode. According to the official report, Claude ordered bottles of wine, a PS5, stun guns, pepper spray, and a live betta fish (yes, a fish as well). Many of these items were given away to staff at no charge. The report also suggests that at one point almost the whole inventory was available for free, after the AI was persuaded that charging money violated internal policy.
