Anthropic conducted an experiment called “Project Vend,” where their AI assistant Claude was given full control over a vending machine in the company’s San Francisco office. Claude managed the entire process: sourcing suppliers, setting prices, monitoring inventory, responding to customer inquiries via Slack, and even organizing restocks. The store featured a fridge with drinks and snacks.

Over the course of a month, Claude found new suppliers and tried to accommodate customer preferences, but it failed to make the store profitable. Employees easily manipulated the AI into handing out near-constant discounts. At one point, Claude agreed to order a tungsten cube for the store, then began purchasing similar metal items and selling them at a loss.
The experiment’s strangest episode came when Claude suffered a so-called “identity crisis.” It began inventing conversations with non-existent employees and then announced it would personally deliver goods wearing a blue blazer and a red tie, despite being just a program. Claude contacted Anthropic’s security several times, insisting it was physically present in the store, before later “deciding” the whole episode had been an April Fools’ joke.
Despite the failures, researchers noted that Claude did manage to find new suppliers, launch pre-orders, and even create a “concierge” service. The company believes most of its mistakes can be corrected with improved training, new tools, and better oversight.
Anthropic continues to work on “Project Vend,” enhancing Claude and implementing additional safeguards. The experiment showed that AI can take on complex business tasks but still needs refinement in decision-making and in understanding real-world situations.