Researchers from City St George’s, University of London, and the IT University of Copenhagen have shown that groups of generative AI language models, such as ChatGPT, can independently form shared social norms and language conventions through group interaction. The study used groups of twenty-four to one hundred agents, each of which chose names from a fixed set of symbols. If a pair of agents chose the same name, both received a reward; if their names differed, each received a penalty and was shown the partner’s choice.
Although each agent was unaware of the larger group and remembered only its own recent interactions, the population spontaneously developed a shared naming convention. This mirrors how language norms form in human communities, where new words and terms become established through repeated interaction.
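The dynamic described above can be illustrated with a classic naming-game simulation. The sketch below is not the study's actual setup (which used LLM agents); it is a minimal stand-in with simple rule-based agents. The symbol pool, the memory window of five observations, and the majority-vote choice rule are all assumptions made for illustration. Each agent sees only its own recent pairings, yet the group typically settles on a single name.

```python
import random
from collections import Counter

NAMES = ("F", "J", "K")   # hypothetical symbol set; the study's pool may differ
MEMORY = 5                # hypothetical per-agent memory window

def pick(memory, rng):
    """Choose the name seen most often in recent interactions (random tie-break)."""
    if not memory:
        return rng.choice(NAMES)
    counts = Counter(memory)
    best = max(counts.values())
    return rng.choice([n for n, c in counts.items() if c == best])

def simulate(n_agents=24, rounds=20000, seed=1):
    rng = random.Random(seed)
    mem = [[] for _ in range(n_agents)]   # each agent stores partners' recent choices
    for _ in range(rounds):
        i, j = rng.sample(range(n_agents), 2)
        a, b = pick(mem[i], rng), pick(mem[j], rng)
        # After each pairing, an agent learns the partner's choice and keeps
        # only its last MEMORY observations -- no agent has a global view.
        mem[i] = (mem[i] + [b])[-MEMORY:]
        mem[j] = (mem[j] + [a])[-MEMORY:]
    return Counter(pick(m, rng) for m in mem)

counts = simulate()
print(counts)   # typically one name comes to dominate the whole group
```

Despite the purely local updates, the positive feedback (names seen more often are chosen more often) usually drives the population to consensus on one name, the same qualitative outcome the study reports.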
The researchers also found that collective biases can emerge in groups of AI agents that cannot be traced back to the actions of any individual agent, indicating that such systems can develop shared behaviors beyond individual decisions.
In the final experiment, small subgroups of agents were able to tip the entire group, overturning the commonly accepted naming convention. This resembles critical-mass dynamics in human society, where a small but persistent minority can change the behavior of the majority.
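The tipping effect can also be sketched with the same simple rule-based model. In the self-contained example below, the population starts out converged on one name ("X"), and a minority of agents always answers "Z" regardless of what they observe. The minority size, memory window, and choice rule are illustrative assumptions, not the study's parameters; the point is only that a committed subgroup well short of a majority can flip the convention.

```python
import random
from collections import Counter

MEMORY = 5   # hypothetical per-agent memory window

def pick(memory, rng, names=("X", "Z")):
    """Majority vote over recent observations, random tie-break."""
    if not memory:
        return rng.choice(names)
    counts = Counter(memory)
    best = max(counts.values())
    return rng.choice([n for n, c in counts.items() if c == best])

def tipping(n_agents=24, n_committed=7, rounds=30000, seed=2):
    """Start from a group converged on "X"; a committed minority (~29%)
    always says "Z". Returns the final choices of the regular agents."""
    rng = random.Random(seed)
    mem = [["X"] * MEMORY for _ in range(n_agents)]   # established convention
    committed = set(range(n_committed))

    def answer(k):
        return "Z" if k in committed else pick(mem[k], rng)

    for _ in range(rounds):
        i, j = rng.sample(range(n_agents), 2)
        a, b = answer(i), answer(j)
        # Each agent records the partner's choice, keeping a short memory.
        mem[i] = (mem[i] + [b])[-MEMORY:]
        mem[j] = (mem[j] + [a])[-MEMORY:]
    return [pick(mem[k], rng) for k in range(n_agents) if k not in committed]

final = tipping()
print(Counter(final))   # regular agents typically adopt the minority's "Z"
```

Because regular agents adopt whichever name dominates their short memory, repeated exposure to the committed minority seeds "Z" observations that then cascade through the rest of the group, a toy analogue of the critical-mass effect the study observed.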