
Visual social information use in collective foraging

Mezey, David; Deffner, Dominik; Kurvers, Ralf HJM; Romanczuk, Pawel

This archive includes a compressed dataset and code base for our paper on visual social information use in collective foraging.

Authors:
(1, 2) David Mezey (corresponding author: david.mezey@scioi.de)
(2, 3) Dominik Deffner
(2, 3) Ralf HJM Kurvers
(1, 2) Pawel Romanczuk

Affiliations:
1: Institute for Theoretical Biology, Humboldt University Berlin, Berlin, Germany
2: Science of Intelligence Excellence Cluster, Technical University Berlin, Berlin, Germany
3: Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin, Germany

Abstract:
Collective dynamics emerge from individual-level decisions, yet the link between individual-level decision-making processes and collective outcomes in realistic physical environments remains poorly understood. Using collective foraging to study the key trade-off between personal and social information use, we present a mechanistic, spatially explicit agent-based model that combines individual-level evidence accumulation of personal and (visual) social cues with particle-based movement. Under idealized conditions without physical constraints, our mechanistic framework reproduces findings from established probabilistic models, but explains how individual-level decision processes generate collective outcomes in a bottom-up way. Groups performed best in clustered environments when agents quickly accumulated social information and approached successful others; individualistic search was most beneficial in uniform environments. Incorporating different real-world physical and perceptual constraints greatly improved collective performance by buffering maladaptive herding and generating self-organized exploration. Our study uncovers the mechanisms linking individual cognition to collective outcomes in human and animal foraging and paves the way for decentralized robotic applications.
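To make the modeled decision process concrete, below is a minimal, illustrative Python sketch of evidence accumulation between personal and social cues. It is a simplified stand-in with hypothetical parameter names and thresholds, not the exact model or equations used in the paper.

    import random

    def decide(personal_cue, social_cue, threshold=1.0, noise=0.1, max_steps=1000):
        """Drift-diffusion-style integrator: social evidence pushes the agent
        toward relocating to a successful neighbor, personal evidence toward
        staying and exploiting the current patch."""
        x = 0.0  # accumulated evidence
        for _ in range(max_steps):
            x += social_cue - personal_cue + random.gauss(0.0, noise)
            if x >= threshold:
                return "relocate"   # approach successful others
            if x <= -threshold:
                return "exploit"    # keep harvesting the current patch
        return "explore"            # undecided: keep searching individually

    # Example: a strong visual social cue typically triggers relocation.
    print(decide(personal_cue=0.1, social_cue=0.6))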
The compressed Dataset.zip includes:

1.) SimulationData: a representative dataset for each experiment carried out with our mechanistic agent-based model and pygame simulation framework. Each folder represents a single experiment and holds the data of agents and resource patches under different parameterizations. To visualize the data, run the replay tool (Replay/Replay.exe) and browse to one of the folders. You can change the parameterization with the sliders on the right side, start/stop the replay, change the framerate, etc. On Ubuntu, be sure to browse into the experiment folders when opening the tool.

2.) FigureData: a summarized dataset of all experiments that serves as input for generating the figures of our main findings. Each figure comes with a Python file that can be run directly via python3 (py on Windows); see the example at the end of this description.

3.) Playground and Replay tool executables: our two main exploratory tools (tested on Windows 10 and Ubuntu 20.04.6 LTS). The replay tool reads and plays back simulated experiments and calculates some of the main metrics (such as efficiency) on the dataset. The playground tool is an interactive simulator environment that allows the reader to change model parameters and observe the resulting system without any programming; a list of features for the playground tool can be found below. Note that the executables have been tested only on Windows 10 and Ubuntu 20.04.6 LTS. If your system differs, run the applications via Python as described in the main repository at https://github.com/scioip34/ABM or from the attached code base, following these instructions in the README: https://github.com/scioip34/ABM#with-gui-running-without-docker

The compressed code base includes the software framework we used (https://github.com/scioip34/ABM), frozen as a zip file at commit ID 43b6ae28083d9040fe3a95f7558f7ad3e8a24852.
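For running the figure scripts cross-platform, here is a small, hedged helper; the script name is hypothetical, so substitute the actual .py file attached to the figure you want to reproduce.

    import subprocess
    import sys

    # "py" is the Python launcher on Windows; "python3" is used elsewhere.
    interpreter = "py" if sys.platform.startswith("win") else "python3"
    # Hypothetical file name; use the actual script shipped in FigureData.
    subprocess.run([interpreter, "figure1.py"], check=True)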

Version History

Version  Date                 Summary
2        2023-11-20 15:10:16
1        2023-11-08 14:06:56