
Visual social information use in collective foraging

Mezey, David; Deffner, Dominik; Kurvers, Ralf HJM; Romanczuk, Pawel

Collective dynamics emerge from individual-level decisions, yet we still poorly understand the link between such individual-level decision-making processes and collective outcomes in realistic physical environments. Using collective foraging to study the key trade-off between personal and social information use, we present a mechanistic, spatially explicit agent-based model that combines individual-level evidence accumulation of personal and (visual) social cues with particle-based movement. Under idealized conditions without physical constraints, our mechanistic framework reproduces findings from established probabilistic models, but explains how individual-level decision processes generate collective outcomes in a bottom-up way. Groups performed best in clustered environments if agents quickly accumulated social information and approached successful others; individualistic search was most beneficial in uniform environments. Incorporating different real-world physical and perceptual constraints greatly improved collective performance by buffering maladaptive herding and generating self-organized exploration. Our study uncovers the mechanisms linking individual cognition to collective outcomes in human and animal foraging and paves the way for decentralized robotic applications.
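The mechanistic core described above is a race between evidence accumulated from personal and visual social cues, which then drives movement decisions. As a purely illustrative sketch (this is not the authors' implementation; the function name, drift rates, noise level, and threshold are all hypothetical), such an accumulation-to-threshold decision might look like:

```python
import random

def accumulate_decision(personal_cue, social_cue,
                        threshold=1.0, noise_sd=0.05,
                        max_steps=10_000, seed=0):
    """Integrate noisy evidence for two options until one accumulator
    crosses the threshold; return the winning option and the number of
    integration steps taken. All parameter values are illustrative."""
    rng = random.Random(seed)
    personal, social = 0.0, 0.0
    for step in range(1, max_steps + 1):
        # Each accumulator drifts with its cue strength plus Gaussian noise.
        personal += personal_cue + rng.gauss(0.0, noise_sd)
        social += social_cue + rng.gauss(0.0, noise_sd)
        if personal >= threshold:
            return "exploit_personal", step   # keep exploiting own patch
        if social >= threshold:
            return "relocate_to_social", step  # approach a successful other
    return "no_decision", max_steps

# Strong social cue (e.g. many successful neighbors in view) wins the race:
choice, steps = accumulate_decision(personal_cue=0.01, social_cue=0.2)
```

Stronger cues cross the threshold in fewer steps, which is one way to picture the paper's finding that fast social-information accumulation pays off in clustered environments, while slow or absent social accumulation corresponds to individualistic search.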
Dataset.zip
ZIP Archive, 2.49 GB

This is the compressed dataset for our paper on visual social information use in collective foraging.

Visual social information use in collective foraging (Dataset)

Affiliations:
1: Institute for Theoretical Biology, Humboldt University Berlin, Berlin, Germany
2: Science of Intelligence Excellence Cluster, Technical University Berlin, Berlin, Germany
3: Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin, Germany

Authors:
(1, 2) David Mezey - corresponding author: david.mezey@scioi.de
(2, 3) Dominik Deffner
(2, 3) Ralf HJM Kurvers
(1, 2) Pawel Romanczuk

This package includes:

1.) SimulationData: a representative dataset for each experiment carried out with our mechanistic agent-based model and pygame simulation framework. Each folder represents a single experiment and holds the data of agents and resource patches under different parameterizations. To visualize the data, run the replay tool (Replay/Replay.exe) and browse to one of the folders. With the sliders on the right side you can change the parameterization, start/stop the replay, change the framerate, etc. A summary of the main features is given below.

2.) FigureData: a summarized dataset of all experiments that serves as input for generating the figures of our main findings. For each figure, a Python file is attached that can be run directly.

3.) Playground and Replay tool executables: with these executables one can run our two main exploratory tools. The replay tool reads and plays back simulated experiments and computes some of the main metrics on the dataset (such as efficiency). The playground tool is an interactive simulator environment that allows the reader to change model parameters and observe the resulting system without any programming. A list of features for the playground tool is given below. Note that the executables have only been tested on Windows 10 and Ubuntu 20.04.6 LTS. If your system differs from these, please run the applications via Python as described in the main repository: https://github.com/scioip34/ABM

Creative Commons Attribution 4.0 (CC BY)

Version History

Date
2023-11-20 15:10:16
2023-11-08 14:06:56