Recruited on the promise of complimentary pizza and T-shirts, groups of 20 research participants paced Sayles Hall as part of a new study from University researchers modeling flocking behavior in humans. The study, published in a Royal Society journal in March 2022, proposed a new model centered on how individual fields of view influence the motion of human crowds.
Compared to previous models, the visual model can better explain how individual interactions in a crowd influence its collective motion, said William Warren, principal investigator of the study and professor of cognitive, linguistic and psychological sciences. Warren worked alongside first author Gregory Dachner ScM ’15 PhD ’20 to develop this model during Dachner’s time as a graduate student in cognitive science.
The study found that individuals adjust their motion based on their visual perception of neighbors, Dachner explained. A neighbor’s optical expansion and angular velocity, or the perceived change in their size and direction of motion, govern these individual interactions, Dachner added.
Before the introduction of the “embedded visual model” studied in the paper, crowds were mostly studied using an “omniscient observer model,” according to Dachner. The omniscient observer model takes the point of view of someone observing the crowd from the outside, whereas the embedded visual model uses the perspective of an individual within the crowd, Dachner explained.
“The omniscient model assumes you know the positions and velocities of everyone around you,” Warren said. “This is really not the case.”
Warren and Dachner published a paper in May 2018 studying collective motion through an omniscient model. The most recent study builds on the 2018 paper’s findings, with the embedded model “outperforming” the previous one, according to the 2022 paper.
While the omniscient model was able to illustrate collective behavior, Warren explained that it overlooked how visual input from neighbors in a crowd influences broader crowd behavior. He wanted to find out, he added: “What was this visual information?”
The optical expansion and contraction of a person’s neighbors, along with their perceived lateral motion, influence the path that person takes, according to Dachner.
“Imagine holding an object and moving it closer and further away from you,” he said. “The object will get larger and smaller on your retina even though it is not actually growing in size. If it’s changing size, it’s changing distance.”
With these variables, among others, the researchers were able to derive a mathematical equation to model crowd behavior. “We were able to (create) the equation in one to two years,” Dachner said.
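The published equations are more involved than the article describes, but the basic idea can be illustrated with a short simulation. The sketch below is a toy illustration, not the authors’ model: the gains k_speed and k_turn, the body radius and the simple averaging over all neighbors are assumptions made for demonstration. It shows how a neighbor’s optical expansion might drive an agent to slow down, and how a neighbor’s angular velocity, their sideways drift across the field of view, might drive it to turn.

```python
import numpy as np

def step(pos, vel, radius=0.25, dt=0.1, k_speed=1.0, k_turn=1.0):
    """One hypothetical update for n pedestrians.

    pos, vel: (n, 2) arrays of positions (m) and velocities (m/s).
    radius, k_speed and k_turn are illustrative values, not fitted parameters.
    """
    n = len(pos)
    new_vel = vel.copy()
    for i in range(n):
        speed = np.linalg.norm(vel[i])
        heading = np.arctan2(vel[i, 1], vel[i, 0])
        d_speed = 0.0
        d_heading = 0.0
        for j in range(n):
            if i == j:
                continue
            offset = pos[j] - pos[i]
            dist = np.linalg.norm(offset)
            rel_vel = vel[j] - vel[i]
            # Optical expansion rate: how fast neighbor j's visual angle
            # theta = 2*arctan(radius / dist) grows as the gap closes.
            closing = -np.dot(rel_vel, offset) / dist   # > 0 when approaching
            expansion = 2.0 * radius * closing / (dist**2 + radius**2)
            # Angular velocity: how fast neighbor j drifts sideways across
            # the field of view (rate of change of the bearing direction).
            ang_vel = (offset[0] * rel_vel[1] - offset[1] * rel_vel[0]) / dist**2
            # Expansion slows the agent (contraction speeds it up);
            # angular velocity turns the agent in the drift direction.
            d_speed += -k_speed * expansion
            d_heading += k_turn * ang_vel
        denom = max(n - 1, 1)
        speed = max(0.0, speed + d_speed * dt / denom)
        heading += d_heading * dt / denom
        new_vel[i] = speed * np.array([np.cos(heading), np.sin(heading)])
    return pos + new_vel * dt, new_vel

# Example: two pedestrians walking toward each other slow down as each
# expands in the other's field of view.
pos = np.array([[0.0, 0.0], [5.0, 0.0]])
vel = np.array([[1.3, 0.0], [-1.3, 0.0]])
for _ in range(10):
    pos, vel = step(pos, vel)
print(np.round(vel, 2))
```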
The next step was to apply the equation to crowds to “see if it (could) be used to explain actual crowd data,” Warren explained.
Along with monitoring in-person crowds in Sayles Hall, the researchers used Brown’s Virtual Environment Navigation Lab, one of the largest virtual reality labs in the world, to test their model, according to Warren.
Using the lab allowed the researchers to control elements of a virtual crowd in ways that are not possible in real life. “It is a great experimental tool,” Warren said.
Dachner said his interest in uncovering the science behind crowd motion came from watching students walk across a university quad. He added that studying crowd motion can also inform the design of public spaces and evacuation protocols.
Dinesh Manocha, professor of computer science and electrical and computer engineering at the University of Maryland and a researcher of crowd motion, also highlighted the importance of studying collective behavior in evacuation planning. Knowing how people move can inform the design of better crowd infrastructure in stadiums, buildings and at political events, he noted.
Having studied collective behavior and crowd disasters for over 15 years, Manocha emphasized the importance of developing models for crowd simulation and better technology for crowd evacuation.
The paper was the first experimental study on crowds to show how visual information “links us to our neighbors … (and) influences our behavior to generate global patterns of collective motion.”
“This is really satisfying because it is a pretty simple explanation and … model, but it has wide applicability,” Warren explained.
“These are very promising results,” Manocha added. “Ultimately, applying this model to a large-scale, real-world crowd would be a fantastic way to further validate it.”
Dachner also stressed the model’s potential for further applications.
“This (model) is a baseline,” he stated.
Dachner pointed to the possibility of studying how social and contextual information shapes an individual’s path within a crowd. Moving forward, he is interested in how social factors, such as walking with a group of friends or trying to avoid someone, influence broader crowd movement patterns.
“Exploring these avenues would be really fascinating,” Dachner said.