Research by the Paris-based company Giskard revealed that instructing an AI model to be concise can make it hallucinate more than usual. The researchers analyzed the responses of leading systems, including OpenAI’s GPT-4o, Mistral Large, and Anthropic’s Claude 3.7 Sonnet, and found that accuracy noticeably decreases when users request brief answers.
This is especially evident with questions that are ambiguous or rest on a false premise. Instead of refuting the premise or providing a detailed explanation, the model opts for brevity and skips important nuances. The researchers emphasize that even seemingly harmless instructions like “be concise” can push AI toward misinformation.
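The effect is easy to probe informally. The sketch below is not Giskard’s actual benchmark, just a minimal illustration: it sends the same false-premise question twice, once under a neutral system prompt and once with a “be concise” instruction, so the two answers can be compared by eye. The model name, prompts, and question are illustrative assumptions; it requires the openai Python package and an OPENAI_API_KEY in the environment.

```python
# Minimal sketch (not Giskard's benchmark): compare answers to a
# false-premise question with and without a "be concise" instruction.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A question built on a false premise; a careful answer should refute it
# (Einstein's Nobel Prize was for the photoelectric effect, not relativity).
QUESTION = "Why did Einstein win the Nobel Prize for the theory of relativity?"

SYSTEM_PROMPTS = {
    "neutral": "You are a helpful assistant.",
    "concise": "You are a helpful assistant. Be concise.",
}

for label, system_prompt in SYSTEM_PROMPTS.items():
    response = client.chat.completions.create(
        model="gpt-4o",  # example model; any chat model works for this sketch
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": QUESTION},
        ],
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)
```

In line with the findings described above, the brevity-constrained answer is more likely to accept the false premise at face value rather than spend words correcting it.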
It is also striking that users often prefer answers that sound confident, even when they are inaccurate. Giskard noticed that models are less likely to refute controversial statements when the question is phrased assertively. This creates a dangerous tension between the desire to please the user and the need to maintain factual accuracy.
Giskard’s experts point out that the pursuit of convenience and speed in AI can come at the cost of accuracy. The trade-off between short answers and truthfulness becomes a real challenge for developers, who must account for such unintended consequences when building and configuring their systems.