
Cognitive Model for a Combined Mental Rotation and Folding Task

Preuss, Kai; Hilton, Christopher (Contributor); Gramann, Klaus (Contributor); Russwinkel, Nele (Contributor)

The cognitive model is supplied with a simulated experiment replicating the original combined mental rotation and folding task design as presented to the participants. Due to the novelty of the task, the cognitive model is not directly based on existing literature, but draws on aspects suggested for mental rotation and mental folding (Shepard & Metzler 1971, Shepard & Feng 1972, Just & Carpenter 1976, Yuille & Steiger 1982, Wright 2008) while relying on learning mechanisms implemented in ACT-R (Gonzalez et al. 2003, Fu & Anderson 2004).

At reference stimulus onset, the base square carrying the dot marker is located on the figure, and the dot's position relative to that square (top left, bottom left, bottom right, or top right) is encoded symbolically. In addition, the model determines whether the presented figure is a two-dimensional folding pattern (no-folding condition) or a partially folded three-dimensional cube. After target stimulus onset, the model proceeds with either a direct visual comparison (only on no-folding trials), mental rotation, or mental folding (only on folding trials); the sketches following this description illustrate the flow.

1) If visual comparison is chosen and establishes both figures as equal (i.e. neither rotation nor folding is present, constituting a baseline trial), the dot positions on the two stimuli are compared directly and a response is generated, pre-empting the need for spatial transformations. Otherwise, mental rotation is initiated.

2) If rotation is chosen, the target stimulus is encoded either piecemeal or as a whole, contingent on stimulus familiarity as determined by instance retrieval of the target figure outline. If retrieval is successful, the model is allowed to visuospatially encode all arms of the folding pattern at once; otherwise, each arm is encoded individually and appended to a combined structure (while piecemeal mental rotation usually refers to transforming individual pieces, we opted to merge the arms into a single figure before transformation, as arms often consist of a single square). The spatially encoded structure is then rotated in discrete steps and compared to the reference stimulus after each step, until its orientation matches the reference. If the reference figure is three-dimensional, the spatial target structure is additionally rotated into the same perspective in a single static step. If required, the model continues with mental folding.

3) If folding is chosen, the arm of the target containing the dot marker is visually encoded as a spatial structure. An instance retrieval mechanism then searches for known completed structures associated with the target pattern and, if successful, encodes the completed folded arm directly, thereby bypassing the transformation. If no instance is found, the arm is folded sequentially by 90 degrees at each of its folding edges, starting from the base square and moving towards the square containing the dot marker, until the latter reaches its final folded position. If required, the model continues with mental rotation.

After all necessary spatial transformations are completed, the dot marker position on the mental spatial structure is compared to the reference dot marker position, which is either visually encoded anew or recalled from its initial appearance. Finally, a match or mismatch response is given via a simulated motor command simulating a button press.
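To make the overall processing flow concrete, the following minimal Python sketch renders the strategy selection described above. It is an illustration only, not the ACT-R production system supplied here; the Trial fields, the fixed rotation-before-folding order, and the use of the trial's angular disparity as a stand-in for the outcome of the visual comparison are all simplifying assumptions.

```python
"""Hypothetical sketch of the model's strategy selection; not the ACT-R code itself."""
from dataclasses import dataclass


@dataclass
class Trial:
    reference_is_folded: bool   # is the reference a partially folded cube?
    target_rotation: int        # angular disparity between target and reference (degrees)
    reference_dot: str          # dot quadrant encoded at reference onset
    target_dot: str             # dot quadrant on the (transformed) target


def processing_steps(trial: Trial) -> list[str]:
    """Ordered processing steps for one trial (a simplification of the model's flow)."""
    steps = []
    if not trial.reference_is_folded:
        steps.append("visual comparison")        # attempted on every no-folding trial
        if trial.target_rotation == 0:           # figures judged equal: baseline trial
            return steps + ["compare dot positions", "motor response"]
    if trial.target_rotation != 0:
        steps.append("mental rotation")          # stepwise rotate-and-compare
    if trial.reference_is_folded:
        steps.append("mental folding")           # fold the dot-bearing arm in 90-degree steps
    # The model may also start with folding and continue with rotation; a fixed
    # order is used here only to keep the sketch short.
    return steps + ["compare dot positions", "motor response"]


def respond(trial: Trial) -> str:
    """Compare the remembered reference dot quadrant with the transformed target dot."""
    return "match" if trial.reference_dot == trial.target_dot else "mismatch"


if __name__ == "__main__":
    trial = Trial(reference_is_folded=True, target_rotation=90,
                  reference_dot="top left", target_dot="top left")
    print(processing_steps(trial))   # ['mental rotation', 'mental folding', ...]
    print(respond(trial))            # 'match'
```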
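The rotate-and-compare cycle of step 2 can be sketched in the same spirit. The snippet below assumes, purely for illustration, that the figure is encoded as unit-square grid coordinates and that rotation proceeds in 90-degree increments; the actual model's spatial representation and step size may differ.

```python
"""Hypothetical sketch of the stepwise mental rotation stage (not the ACT-R code)."""


def rotate_90(points):
    """Rotate grid points counter-clockwise by 90 degrees around the origin."""
    return {(-y, x) for x, y in points}


def encode_target(arms, base, known_outlines, outline):
    """Holistic encoding after successful instance retrieval, piecemeal otherwise."""
    if outline in known_outlines:                  # instance retrieval succeeded:
        return {base} | {p for arm in arms for p in arm}   # encode all arms at once
    structure = {base}                             # otherwise build the structure
    for arm in arms:                               # arm by arm and merge it into
        structure |= set(arm)                      # a single combined figure
    return structure


def mentally_rotate(target_points, reference_points, max_steps=4):
    """Rotate in steps, comparing after each step, until the reference is matched."""
    current = set(target_points)
    for steps in range(max_steps + 1):
        if current == set(reference_points):
            return steps                           # number of rotation steps needed
        current = rotate_90(current)
    raise ValueError("no matching orientation found")


if __name__ == "__main__":
    # An L-shaped pattern of three squares whose reference is rotated by 180 degrees.
    arms = [[(1, 0)], [(1, 1)]]                    # two one-square arms off the base
    target = encode_target(arms, base=(0, 0), known_outlines=set(), outline="L")
    reference = rotate_90(rotate_90(target))
    print("rotation steps needed:", mentally_rotate(target, reference))  # -> 2
```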
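Step 3 can likewise be sketched as geometry: each folding edge is hinged by 90 degrees in turn, starting at the base square, and a small instance memory stands in for the retrieval of previously completed folds. The hinge representation and the two-square example are assumptions made for illustration, not the model's actual spatial encoding.

```python
"""Hypothetical sketch of the sequential mental folding stage (not the ACT-R code)."""


def cross(a, b):
    """Cross product of two 3D vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])


def rotate_90_about(p, hinge_point, hinge_dir):
    """Rotate point p by 90 degrees around the axis through hinge_point along hinge_dir."""
    v = tuple(p[i] - hinge_point[i] for i in range(3))
    c = cross(hinge_dir, v)
    k_dot_v = sum(hinge_dir[i] * v[i] for i in range(3))
    return tuple(hinge_point[i] + c[i] + hinge_dir[i] * k_dot_v for i in range(3))


def fold_arm(dot_square, hinges, instance_memory, pattern_key):
    """Fold 90 degrees at every hinge, base first, and return the dot square centre."""
    if pattern_key in instance_memory:              # instance retrieval succeeded:
        return instance_memory[pattern_key]         # reuse the completed fold directly
    point = dot_square
    moved = list(hinges)
    for i, (hp, hd) in enumerate(moved):
        point = rotate_90_about(point, hp, hd)      # fold everything past this hinge
        for j in range(i + 1, len(moved)):          # later hinges move along with it
            hp_j, hd_j = moved[j]
            moved[j] = (rotate_90_about(hp_j, hp, hd),
                        rotate_90_about(hd_j, (0.0, 0.0, 0.0), hd))  # rotate directions too
    instance_memory[pattern_key] = point            # store the result for later reuse
    return point


if __name__ == "__main__":
    # Base square [0,1]x[0,1] in the plane; a two-square arm along +x carries the dot.
    dot_square_centre = (2.5, 0.5, 0.0)
    hinges = [((1.0, 0.0, 0.0), (0.0, 1.0, 0.0)),   # edge between base and first arm square
              ((2.0, 0.0, 0.0), (0.0, 1.0, 0.0))]   # edge between first and dot square
    memory = {}
    print(fold_arm(dot_square_centre, hinges, memory, "two-square arm"))  # computed fold
    print(fold_arm(dot_square_centre, hinges, memory, "two-square arm"))  # retrieved instance
```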