Archive for the ‘open science’ Category

How do people render self-reports of their willingness to take risks?

Markus Steiner, Florian Seitz, and I have a new paper (just published in Decision) in which we investigate the cognitive processes underlying people’s self-reports of their risk preferences. Specifically, we were interested in the information-integration processes that people may rely on during judgment formation, with a particular focus on the type of evidence people may consider when rendering their self-reports. In doing so, we aimed to contribute to a better understanding of why self-reports typically achieve high degrees of convergent validity and test-retest reliability, thus often outperforming their behavioral counterparts (i.e., monetary lotteries and other lab tasks).

To achieve these goals, we employed the process-tracing method of aspect listing, which gave us “a window into people's minds” while they render self-reports. Our cognitive modeling analyses showed that people are particularly sensitive to the strength of evidence of the information retrieved from memory during judgment formation. Interestingly, both people's self-reported risk preferences and the strength of evidence of the retrieved aspects remained considerably stable in a retest study (i.e., across a one-month interval). Moreover, intraindividual changes in the latter were closely aligned with intraindividual changes in the former, suggesting that a relatively reliable psychological mechanism is at play when people render self-reports.
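For readers curious about what such an information-integration process might look like computationally, here is a minimal sketch (my own toy illustration, not the model reported in the paper): it assumes that each listed aspect carries a valence (speaking for or against taking risks) and a strength-of-evidence rating, and it predicts the self-report as a strength-weighted integration of these aspects.

```python
import numpy as np

def predict_self_report(valences, strengths, scale_max=10):
    """Toy evidence-integration model for aspect listing (illustrative only).

    valences:  +1/-1 codes (aspect speaks for / against taking risks)
    strengths: strength-of-evidence ratings for each aspect (0..1)

    Returns a predicted self-reported risk preference on a 0..scale_max scale,
    obtained by averaging the strength-weighted valences and rescaling.
    """
    valences = np.asarray(valences, dtype=float)
    strengths = np.asarray(strengths, dtype=float)
    integrated = np.sum(valences * strengths) / np.sum(strengths)  # in [-1, +1]
    return (integrated + 1) / 2 * scale_max  # map onto the response scale

# Example: three pro-risk aspects and one strong contra aspect.
print(predict_self_report(valences=[+1, +1, +1, -1],
                          strengths=[0.9, 0.7, 0.8, 0.6]))  # -> 8.0
```

In this toy version, a single strongly negative aspect pulls the predicted rating down noticeably, capturing the intuition that the strength of evidence of the retrieved aspects matters, not merely their number.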

Beyond our quantitative modeling analyses, the process-tracing method of aspect listing also permitted more qualitative insights, for instance concerning the sources and contents of the information people retrieved from memory (see the word clouds below). For all further details, please have a look at the paper!

Steiner, M., Seitz, F., & Frey, R. (2021). Through the window of my mind: Mapping information integration and the cognitive representations underlying self-reported risk preference. Decision, 8, 97–122. doi:10.1037/dec0000127 | PDF

First appeared on https://renatofrey.net/blog

New paper in JEP-Gen: Is representative design the key to valid assessments of people’s risk preferences?

A large body of research has documented the relatively poor psychometric properties of behavioral measures of risk taking, such as low convergent validity and poor test–retest reliability. In this project we examined the extent to which these issues may be related to violations of “representative design” – the idea that experimental stimuli should be sampled or designed such that they represent the environments to which measured constructs are supposed to generalize.

To this end, we focused on one of the most prominent behavioral measures of risk taking, the Balloon Analogue Risk Task (BART). Our analyses demonstrate that the typical implementation of the BART violates the principle of representative design and strongly conflicts with the expectations people might have formed from real balloons. We conducted two extensive empirical studies (N = 772 and N = 632) to test the effects of improved, more representative designs. Indeed, thanks to these task adaptations participants acquired more accurate beliefs about the optimal behavior in the BART. Yet, strikingly, these improvements proved insufficient to enhance the task's psychometric properties (e.g., convergent validity with other measures of risk preference and related constructs). We conclude that for the development of valid behavioral measurement instruments, our field has to overcome the philosophy of the “repair program” (i.e., fixing existing tasks). Instead, the development of valid task designs may require ecological assessments that identify those real-life behaviors and associated psychological processes that lab tasks are supposed to capture and generalize to.
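As a rough illustration of what “optimal behavior in the BART” means, here is a hedged sketch of the standard expected-value argument, assuming a common implementation in which the balloon's burst point is drawn uniformly from 1 to 128 pumps and each successful pump earns a fixed amount (these parameter values are illustrative, not necessarily those of our studies):

```python
import numpy as np

def bart_expected_value(planned_pumps, max_pumps=128, cents_per_pump=5):
    """Expected payoff of planning a fixed number of pumps on a BART trial,
    assuming the burst point is drawn uniformly from 1..max_pumps."""
    p_survive = (max_pumps - planned_pumps) / max_pumps  # P(burst point > planned pumps)
    return planned_pumps * cents_per_pump * p_survive

pumps = np.arange(1, 129)
ev = bart_expected_value(pumps)
print("Optimal number of planned pumps:", pumps[np.argmax(ev)])  # 64 under these assumptions
```

Under these illustrative assumptions the expected payoff peaks at 64 pumps, which shows how strongly the task's statistics can depart from the expectations people bring from real balloons.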

This is a joint project with Markus Steiner (see picture below), who successfully defended his thesis last week – congratulations, Dr. Steiner!

Steiner, M., & Frey, R. (2021). Representative design in psychological assessment: A case study using the Balloon Analogue Risk Task (BART). Journal of Experimental Psychology: General. doi:10.1037/xge0001036 | PDF

First appeared on https://renatofrey.net/blog


New paper: Using brain activation to predict risk taking

Taking risks is an adaptive aspect of human life that can promote happiness and success. However, maladaptive risk taking can have detrimental effects at both the individual and the societal level, in domains such as health, wealth, and crime. One approach to understanding and, ultimately, predicting individual differences in risk taking has been to illuminate its biological substrates, specifically the neural pathways. In the past, brain activation has been associated with, or even found to be predictive of, risky behaviors. Yet one fundamental problem of existing studies relates to the challenge of measuring risk taking: convergence between risk-taking measures is low, both at the level of behavior and at the level of brain activation. By extension, whether brain activation is merely correlated with or actually predictive of real-life risky behaviors is also likely to vary as a function of the measure used.

In our new paper, out in Frontiers in Behavioral Neuroscience, we addressed this issue by analyzing within-participant neuroimaging data for two widely used risk-taking tasks, collected from the imaging subsample of the Basel–Berlin Risk Study (N = 116 young adults). We focused on core brain regions implicated in risk taking and examined average (i.e., group-level) activation for risky versus safe choices in the Balloon Analogue Risk Task and a Monetary Gambles task. Importantly, we also examined associations between individuals' brain activation in risk-related brain areas and various risk-related outcomes, including psychometrically derived risk preference factors. We found that, on average, risky decisions in both tasks were associated with increased activation in the nucleus accumbens, a small subcortical structure with a central role in the brain's reward circuitry. However, the results of our individual-differences analyses support the idea that the presence and directionality of associations between brain activation and risk taking vary as a function of the risk-taking measures used to capture individual differences.
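Schematically, and with made-up numbers rather than our actual data or analysis code, the individual-differences part of such an analysis boils down to correlating per-participant activation estimates from a risk-related region with each risk measure, separately for each task:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 116  # size of the imaging subsample

# Hypothetical per-participant data: nucleus accumbens activation contrasts
# (risky vs. safe choices) for the two tasks, plus one risk preference score.
nacc_bart = rng.normal(size=n)
nacc_gambles = rng.normal(size=n)
risk_preference = rng.normal(size=n)

for label, betas in [("BART", nacc_bart), ("Monetary Gambles", nacc_gambles)]:
    r, p = pearsonr(betas, risk_preference)
    print(f"{label}: r = {r:.2f}, p = {p:.3f}")
```

With real data, the size and even the sign of such correlations can differ across tasks and measures, which is exactly the measure dependence the paper documents.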

Read the full paper here for a thorough discussion of the findings, including implications for intervention and prevention efforts, and our recommendations for future research aimed at predicting real-life behavior from brain markers.

Tisdall, L., Frey, R., Horn, A., Ostwald, D., Horvath, L., Pedroni, A., Rieskamp, J., Blankenburg, F., Hertwig, R., & Mata, R. (2020). Brain–behavior associations for risk taking depend on the measures used to capture individual differences. Frontiers in Behavioral Neuroscience, 14, 194.

Registered reports: A way to publish when data collection is paused

During the last few months, COVID-19 has affected everyone's lives to some extent. For researchers like us psychologists, who rely on data collection involving human interaction, this sometimes meant a complete halt of all research activities because laboratories had to be closed. Especially for early-career researchers, several months without being able to collect data can have serious consequences: they need publications to graduate, but they often need data to publish in journals.

Traditionally, psychologists first collect data and then write an article. Recently, more and more journals in our field have introduced a format that allows us to publish before we collect data: “registered reports”. The idea is that the research question, the hypotheses, the study details, and the planned analyses undergo peer review before data collection. Authors thereby receive critical feedback and can improve their studies before they invest valuable resources. This way, the ideas and the soundness of the proposed research are evaluated rather than whether the results are “interesting”. If the manuscript meets the journal's requirements, the article receives an in-principle acceptance: it will be published no matter how the results turn out, provided the authors follow the procedures previously agreed upon. Studies that do not yield the expected results are therefore still published and do not disappear into the “file drawer”, as happens all too often.

Besides these benefits, registered reports enable researchers to add publications to their CV even when they cannot collect data in the lab, as was the case with our registered report. In our article, we propose three studies to examine whether the act of sharing secrets influences the relationship between two people. Intuitively, one might think it does. It might, however, also depend on the nature of the secrets shared, that is, whether they are positive or whether they shine a negative light on the person who shares them. None of this has been studied yet.

Before our work was accepted as a stage 1 registered report at PLOS ONE, it went through peer review at two other journals, and the feedback we received helped us craft the current, more compelling paper. We hope that the COVID-19 situation remains under control and that we will soon be able to collect data. Yet even during the lockdown we were able to contribute to the scientific literature, and we will complement our work with data as soon as possible.

Jaffé, M., & Douneva, M. (2020). Secretive and close? How sharing secrets may impact perceptions of distance. PLOS ONE, 15, e0233953. doi:10.1371/journal.pone.0233953

NARPS: Inside and beyond

Perhaps you remember a previous blog post in which we (Laura Fontanesi and Loreen Tisdall) announced an exciting research collaboration we were part of: the Neuroimaging Analysis Replication and Prediction Study (NARPS). Fast forward 18 months, and we are happy to announce that the resulting paper (Botvinik-Nezer et al., 2020, Nature) is now published; you can read it here. Below, we share a summary of the study, the results, and our thoughts on what to make of it all.

Inside NARPS

Our adventure with NARPS started at the meeting of the Society for NeuroEconomics in Philadelphia in October 2018. That's where we met Tom Schonberg, who was busy recruiting analysis teams for the project. The main idea of NARPS' leading researchers was to collect a fairly large (n > 100) functional magnetic resonance imaging (fMRI) dataset, provide these data to many teams across the world, and ask them to independently test nine predefined hypotheses. The task chosen for this endeavor was a mixed-gambles task, which is widely used for studying decision making under risk. Crucially, the motivation behind the project was not to find the truth about value-related signals in the human brain. Instead, the goal was to estimate the agreement across independent research teams on hypothesis testing based on fMRI data when little to no instructions are provided on the methods or software to be used. On top of that, the project leaders planned to let a second group of researchers bet on the degree of agreement before the results were out. In light of the replication crisis in psychology, and to a lesser extent also in economics, a study on the reliability of empirical findings in neuroeconomics seemed almost overdue at that point. Moreover, this study reminded us of a similar many-analysts project in the cognitive modeling field, in which independent teams were asked to use a cognitive model to test behavioral effects of experimental manipulations and which found that conclusions were affected by the specific software the teams used.

The most crucial result coming out of NARPS is that agreement on null-hypothesis rejection was overall quite low. On average, across the nine hypotheses, 20% of the teams reported a different result from the majority of the teams. Crucially, maximum disagreement between teams corresponds to 50% (i.e., 50% of teams report results that support, and 50% of teams report results that reject, a given hypothesis). Given this benchmark, a 20% disagreement rate is almost halfway between maximum agreement and maximum disagreement. Surprisingly, the probability of finding a significant result was not affected by whether teams used the preprocessed data, and the statistical brain activation maps (before thresholding) were highly correlated across teams. Therefore, the statistical decisions made in later stages of fMRI analyses (e.g., how to correct for multiple comparisons) might actually play the most crucial role in null-hypothesis testing. In addition, the prediction market revealed an “optimism bias”: researchers (including a subset of researchers who had participated in the data-analysis part of NARPS) overestimated the probability of finding significant results.
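To get a feel for how much such late-stage statistical decisions can matter, here is a toy sketch with hypothetical p-values (not NARPS data), showing how the number of “significant” tests changes depending on which multiple-comparisons correction is applied:

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

# Hypothetical p-values from a whole-brain analysis (illustrative only).
rng = np.random.default_rng(42)
pvals = np.concatenate([rng.uniform(1e-6, 1e-4, size=20),  # a few strong "true" effects
                        rng.uniform(0.0, 1.0, size=980)])   # noise tests

for method in ["bonferroni", "fdr_bh"]:
    rejected, _, _, _ = multipletests(pvals, alpha=0.05, method=method)
    print(f"{method}: {rejected.sum()} tests significant")
```

Family-wise corrections such as Bonferroni are typically far stricter than false-discovery-rate procedures, so two teams running otherwise similar pipelines can reach different hypothesis-level conclusions purely because of this choice.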

Beyond NARPS

The title of the NARPS publication is ‘Variability in the analysis of a single neuroimaging dataset by many teams’. Importantly, variability in analytical pipelines is not a problem, per se. In fact, we often engage with the analytical multiverse that surrounds every single piece of research (simply because there are always varying ways of examining research questions and testing hypotheses) and check the sensitivity of our results to analytical variability. However, the NARPS findings suggest that the use of different analytical pipelines can produce results which support opposite conclusions. In the spirit of replicability, this is not a desired outcome. 

So, what now? Should we scrap all neuroimaging research? In our opinion, the NARPS findings highlight two important issues: (1) open science is the way to go, and (2) more many-analysts projects are needed to understand how widespread this problem is across tasks, brain regions, and/or neuroimaging techniques.

First, let’s start with the main take-home message of the paper: Considering that analytical decisions can have a big impact on research findings and conclusions, it is of crucial importance to thoroughly plan and clearly communicate analytical pipelines. In other words, go full throttle on transparency and open science: Preregister your study, apply optimized preprocessing pipelines, consider the suitability of your smoothing kernel given your anatomical regions of interest, be transparent about significance thresholds, share your code and data, and share your unthresholded activation maps. 

Second, we think it is also important to consider the generalizability of the NARPS findings. In particular, we noticed that the choice of the behavioral task was not part of the public discourse (see here and here for examples) triggered by the NARPS publication. In our opinion, individual differences play an important role in the mixed-gambles task, both at the behavioral (risk preferences) and the neural level, and such variability can lead to lower statistical power (especially when spatial smoothing is not optimal for a given anatomical region). On top of that, response times (RTs) in this task are difficult to dissociate from the signal of interest, because RTs are highly correlated with option values. In fact, this could cause power issues that might not be relevant for other neuroeconomics tasks, or tasks in different psychological domains. 

To understand the role of the task and, importantly, the extent to which the NARPS findings generalize to the entire field of neuroimaging, ideally we would use the NARPS approach to study variability in results observed for (1) other decision-making tasks, (2) other fields, such as visual perception, (3) off-task functional activation differences (resting state), (4) other imaging modalities (e.g., EEG, MEG, eye tracking), and (5) other methodological approaches (e.g., model-based cognitive neuroscience). When polled on Twitter, 64.6% of M/EEG researchers (N=601) indicated that a similar approach in their field would lead to results that are more consistent than the results found for fMRI; the jury is out on whether this response pattern mirrors the overconfidence reported for fMRI results.

In summary, we thoroughly enjoyed being part of NARPS. This project was not only timely, but revealed that we might be overly optimistic about the reliability of fMRI analyses. It also revealed that our statistical decisions in the analysis pipeline (e.g., how to correct for multiple comparisons) make substantial contributions to this lack of reliability (as opposed to decisions during data preprocessing). Our hope is that NARPS will motivate more many-analyst projects in different neuroimaging subfields and methodologies.

Registered report on competitive decisions from experience published in JDM

We often (have to) make choices between risky options without knowing the possible outcomes upfront. Sometimes, however, we can obtain a preview through active information search (e.g., sampling reviews on Tripadvisor to choose one of two hotels). But what if other people simultaneously pursue the same goal, forcing us to make decisions from experience under competitive pressure (“only one room left at this price”)? In this paper, I studied the extent to which competition reduces pre-decisional search (and, potentially, choice performance) in different choice environments. A set of simulation analyses and empirical studies indicated that reduced search due to competitive pressure was particularly detrimental to choice performance in “wicked” environments, which contain rare events and thus require ample exploration to identify advantageous options. Interestingly, however, from a cost-benefit perspective that takes search costs into account, frugal search may be efficient not only in “kind” but also in “wicked” environments. For the full results, please have a look here:

Frey, R. (2020). Decisions from experience: Competitive search and choice in kind and wicked environments. Judgment and Decision Making, 15, 282-303. Online | PDF
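To give a flavor of why frugal search is particularly risky in “wicked” environments, here is a minimal simulation sketch (my own toy example with made-up payoff distributions, not the paper's simulation code): a risky option that hides a rare large loss looks deceptively attractive when only a few outcomes are sampled.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_risky(n):
    """Risky option: usually pays 4, but with probability 0.1 yields a loss of -40.
    Its true expected value (-0.4) is below the safe option's value of 1.0."""
    return np.where(rng.random(n) < 0.1, -40.0, 4.0)

def p_choose_risky(sample_size, n_trials=10_000, safe_value=1.0):
    """Proportion of trials in which a sample-mean maximizer picks the risky option."""
    picks = sum(sample_risky(sample_size).mean() > safe_value for _ in range(n_trials))
    return picks / n_trials

for n in [2, 5, 20, 50]:
    print(f"sample size {n:>2}: risky option chosen in {p_choose_risky(n):.0%} of trials")
```

With tiny samples, the rare loss is often never encountered, so the deceptively attractive risky option is chosen most of the time; larger samples make this error progressively less likely, which is the sense in which “wicked” environments demand ample exploration.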

On a side note, in this project I was up for some exploration myself: in the spirit of trying out new avenues for promoting transparent and reproducible research, I committed to publishing this paper as a registered report (RR). The idea of this relatively new publication format is to run the paper's theoretical rationale through the full peer-review process at a scientific journal, with the goal of obtaining “in-principle acceptance” before the empirical studies are conducted. It was a very interesting but at times also difficult process, as it can be particularly hard to convince reviewers of the soundness and importance of the research questions a priori, without being able to present fancy results yet. So I am glad that this paper found a nice home at JDM, and I hope that more psychological journals will adopt the RR format soon!

For more on my research on decisions from experience, please also see the research section.

First appeared on https://renatofrey.net/blog

New Open Access Publication Fund at the University of Basel

On its Open Access pages, the University Library has published the criteria that must be fulfilled to be eligible for funding.

Funding of publication costs for Open Access (pilot project starting January 2020)

The University of Basel supports researchers with Open Access publishing:

Publication fund pilot project 2020–2022: For Gold Open Access publications whose Article Processing Charge (APC) cannot be covered by third-party funding, the University provides a complementary fund starting in January. Members of the University of Basel can apply for this funding if their publications meet the following criteria:

  • The lead authors or corresponding authors are affiliated with the University of Basel (unibas email address).
  • The publication is a journal article or a book chapter.
  • It is a Gold Open Access publication.
  • The journal uses peer review or meets equivalent quality criteria established in the field.
  • There is no third-party funding that also covers the costs of Gold Open Access publications.

Funding of up to CHF 2,500 is available; partial funding of higher publication fees is possible.

Hybrid Open Access publications and Open Access books are not funded.

You will be informed promptly whether part of the publication fees can be covered by the fund. Applications are processed in the order in which they are received; should the fund be exhausted early, the date of receipt is decisive.

For more information see https://www.ub.unibas.ch/ub-hauptbibliothek/dienstleistungen/publizieren/open-access/

How to get the article?

Following the failed negotiations with SpringerNature, swissuniversities published a factsheet on how to obtain papers without a campus license.

I would like to add point 9.

Auto Sci-Hub
A free browser extension for Chrome and Firefox that automatically modifies the URL to load the Sci-Hub page for your article.

Update: Tweet by SpringerNature from December 19:

We’re highly disappointed that thus far we’ve been unable to reach an agreement with [swissuniversities] for 2020. We’re continuing discussions and Swiss researchers will still be able to access all our existing and newly published 2020 journal content until further notice.

Free alternative to Covidence for screening papers for Systematic Reviews

Qatar Computing Research Institute (QCRI) has developed a free tool for screening papers for Systematic Reviews / Meta-Analyses: https://rayyan.qcri.org/welcome

From the description:

Rayyan is a 100% FREE web application to help systematic review authors perform their job in a quick, easy and enjoyable fashion. Authors create systematic reviews, collaborate on them, maintain them over time and get suggestions for article inclusion. (…) Rayyan also has a mobile app. With this app, you can screen your reviews on the go such as while you are riding the bus. You can even use the app while offline; once connected, the app will automatically sync back to the Rayyan servers!

Watch a quick tour: https://www.youtube.com/watch?v=irAOQgzFMs4&feature=youtu.be

Open Science in Aging Research

Last week, I attended the 8th edition of the Geneva Aging Series, organised by the Cognitive Aging Lab (Matthias Kliegel, University of Geneva). This year's topic was “Cognition meets emotion”. The keynotes were given by Carien van Reekum (University of Reading, UK), who presented work on the neural basis of individual and age differences in emotional processing, and by Derek Isaacowitz (Northeastern University, USA), who gave an overview of experimental work assessing the links between aging and emotion identification/regulation and offered some food for thought about open science in aging research.

Derek pointed out that most work on aging focuses on identifying differences between age groups (while similarities tend to be considered less important) and argued that this is likely to have led to a high rate of false positives in aging research. He suggested that this state of affairs may require us to reassess some preconceptions about key findings in our field, for example those related to positivity effects in emotional processing with increasing age (“older adults look on the brighter side of life”).

More generally, Derek suggested that aging journals have a responsibility to foster open science practices (e.g., encouraging replications and registered reports, encouraging or mandating the publication of data and code) in order to counteract the tendency to report only “significant” age differences and to reduce researcher degrees of freedom, thereby putting aging research on firmer empirical ground.

I, for one, as a reviewer, would very much appreciate clearer guidelines from aging journals about what to expect and demand from authors. These days, I end most of my reviews by encouraging authors to make their data and code available (yes, I'm Reviewer #2), but it would be nice if this went without saying and if checklists were in place to help authors and reviewers in this process. Fortunately, there are some initiatives in this direction at the Journals of Gerontology, Series B: Psychological Sciences (for which Derek is an editor-in-chief) and, somewhat more timidly, at Psychology and Aging (where I serve as a consulting editor). It would be great to see open science gain some traction in aging research…

Open science

Issues of reproducibility (or the lack thereof) in psychology have led to calls for greater transparency in scientific practices. Loreen Tisdall and I were curious to learn how the topics of reproducibility and open science – “the movement to make scientific research (including publications, data, physical samples, and software) and its dissemination accessible to all levels of an inquiring society, amateur or professional” – are being perceived and tackled by researchers in Social, Economic, and Decision Psychology. For this purpose, we conducted an internal survey, inspired by the Swiss Open Psychological Science Initiative, asking about researchers' awareness of, experience with, and attitudes toward a number of open science practices.

We received 19 responses to the following two questions…

1. Which of the following research practices are you aware of, and which do you have experience of using or doing?

2. How important do you believe the following practices are for optimising the reproducibility and efficiency of research in your field?

Our reading of these results is…

  • there is considerable awareness of open science practices in Social, Economic, and Decision Psychology, albeit only a few individuals report high levels of expertise with these practices.
  • considerable importance is attributed to open science practices for increasing reproducibility and efficiency in our field, but opinions clearly vary, in particular concerning a few of the practices (e.g., registered reports, many-analysts studies, preprints).

We discussed some of these issues during the latest meeting of the Social, Economic, and Decision Psychology doctoral program. There appears to be consensus on continuing the discussion and on establishing common guidelines and training regarding open science in our groups – stay tuned…