Archive for October, 2021

Morten Moshagen

Morten Moshagen, Psychological Research Methods, University of Ulm, will give a presentation via Zoom in this week’s Social, Economic, and Decision Psychology research seminar (Thursday 28 October, 12:00-13:00).

The dark factor of personality

Ethically and socially aversive behaviors pose severe challenges for societies at many levels. In personality research, such behaviors are often attributed to aversive (“dark”) traits, most prominently the “dark triad” of narcissism, Machiavellianism, and psychopathy, though many more have been proposed (greed, sadism, and spitefulness, to name a few). Given that aversive traits exhibit substantial conceptual, operational, and empirical overlap, the Dark Factor of Personality (D) has been proposed to represent the basic underlying disposition from which any more specific aversive trait arises as a manifestation, thereby representing their commonalities. D is conceptualized as the general tendency to maximize one’s individual utility (disregarding, accepting, or malevolently provoking disutility for others), accompanied by beliefs that serve as justifications. The talk will elaborate on this theoretical conceptualization, summarize the corresponding empirical evidence, and illustrate the consequences of D.

Supporting literature

Moshagen, M., Hilbig, B. E., & Zettler, I. (2018). The dark core of personality. Psychological Review, 125(5), 656–688.

Olivia Guest

Olivia Guest, Donders Centre for Cognitive Neuroimaging, Radboud University, the Netherlands, will give a presentation via Zoom in this week’s Social, Economic, and Decision Psychology research seminar (Thursday 21 October, 15:00-16:00).

On logical inference over brains, behavior, and artificial neural networks

In the cognitive, computational, and neurosciences, we often reason about what models (viz., formal and/or computational) represent, learn, or “know”, as well as what algorithm they instantiate. The putative goal of such reasoning is to generalize claims about the model in question to claims about the mind and brain. This reasoning process typically presents as inference about the representations, processes, or algorithms the human mind and brain instantiate. Such inference is often based on a model’s performance on a task, and whether that performance approximates human behavior or brain activity. The model in question is often an artificial neural network (ANN) model, though the problems we discuss are generalizable to all reasoning over models. Arguments typically take the form “the brain does what the ANN does because the ANN reproduced the pattern seen in brain activity” or “cognition works this way because the ANN learned to approximate task performance.” Then, the argument concludes that models achieve this outcome by doing what people do or having the capacities people have. At first blush, this might appear as a form of modus ponens, a valid deductive logical inference rule. However, as we explain in this article, this is not the case, and thus, this form of argument eventually results in affirming the consequent, a logical or inferential fallacy. We discuss what this means broadly for research in cognitive science, neuroscience, and psychology; what it means for models when they lose the ability to mediate between theory and data in a meaningful way; and what this means for the logic, the metatheoretical calculus, our fields deploy in high-level scientific inference.
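
As a schematic illustration (P and Q below are generic placeholder propositions, not terms from the abstract), the valid rule and the fallacy it is easily mistaken for can be contrasted as:

\[
\frac{P \rightarrow Q \qquad P}{Q}\ \text{(modus ponens: valid)}
\qquad\qquad
\frac{P \rightarrow Q \qquad Q}{P}\ \text{(affirming the consequent: invalid)}
\]

Reading P as “the system has the capacities people have” and Q as “the system reproduces the behavior or brain activity pattern”, the arguments described above start from Q and conclude P, which is the second, invalid form.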

Bradley C. Love

Brad Love, Professor of Cognitive & Decision Sciences in Experimental Psychology, University College London, will give a presentation via Zoom in this week’s Social, Economic, and Decision Psychology research seminar (Thursday 7 October, 12:00-13:00).

Embedding spaces for decision making

A variety of domains, including images, text, and brain measures, can be captured in embedding spaces. For example, word embedding models place each word at some point in a high-dimensional space with the relative positions of words conveying similarity. In this talk, I will consider what comparing embedding spaces to one another can tell us about cognition and its brain basis. First, I will cover model-based neuroscience research that compares model and brain representations. I will discuss limitations of existing model-based approaches, including deep learning accounts of the ventral visual stream, and suggest an alternative approach to linking models and brain measures that assesses causal efficacy within the overall computation rather than simply shared variance between embedding spaces, which can be misleading. Second, I will discuss how embedding spaces derived from large-scale studies of human behaviour can help us evaluate models. One conclusion is that better performing models are not necessarily better models of humans. In the final part of the talk, I will consider how people rely on multiple embedding spaces (akin to memory systems) when making open-ended decisions, such as deciding what to add next to their online shopping cart. Overall, these results indicate the value of embedding spaces for developing and evaluating models of mind and brain at scale.
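
As a rough illustration of what comparing embedding spaces can involve (the sketch below is a generic example, with made-up array shapes, random data, and function names rather than anything from the talk), one common approach is second-order similarity: compute each space’s item-by-item similarity matrix and correlate the two, which is essentially the “shared variance between embedding spaces” comparison the abstract cautions about. A minimal sketch in Python:

import numpy as np
from scipy.stats import spearmanr

# Illustrative stand-ins: 100 items represented in two different spaces,
# e.g. word-embedding vectors and per-item brain measures (random here).
rng = np.random.default_rng(0)
model_embeddings = rng.normal(size=(100, 300))
brain_embeddings = rng.normal(size=(100, 50))

def similarity_matrix(x):
    # Pairwise cosine similarities between the rows of x.
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    return x @ x.T

def compare_spaces(a, b):
    # Spearman correlation of the off-diagonal similarity structure of the
    # two spaces (a second-order, "shared variance" style comparison).
    idx = np.triu_indices(a.shape[0], k=1)
    rho, _ = spearmanr(similarity_matrix(a)[idx], similarity_matrix(b)[idx])
    return rho

print(compare_spaces(model_embeddings, brain_embeddings))

A high correlation of this kind shows only that the two spaces order item similarities alike; as the abstract notes, it does not by itself establish causal efficacy within the overall computation.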
