Archive for September, 2018

Ellen Peters

Dr Ellen Peters visits this week to give a presentation in the SWE Colloquia series. Dr Peters is a Distinguished Professor of Psychology, Director of the Decision Sciences Collaborative, Professor of Medicine in the Department of Internal Medicine (by courtesy), and Professor of Marketing & Logistics at the Fisher College of Business (by courtesy) at The Ohio State University.

Innumeracy in the lab and in the wild: A focus on efficacy and action with numbers

Research has demonstrated the importance of objective numeracy (defined as the ability to understand and use probabilistic and other mathematical concepts) to judgments and decisions in the lab and in life. However, not everybody can understand and use numeric information effectively. In addition, researchers have largely ignored the potentially motivating power of numeric self-efficacy, independent of objective numeracy. In this talk, I'll discuss 1) the extent of innumeracy, 2) why these differences matter, and 3) how objective numeracy and subjective numeracy (numeric self-efficacy) relate to numeric persistence and life outcomes. Objective numeracy and numeric efficacy capture distinct psychological constructs important to judgments and decisions.

Douglas Bates

The first speaker in this semester’s SWE Colloquia series was Douglas Bates, Professor Emeritus, Department of Statistics, University of Wisconsin, Madison.

Fitting complex mixed-effects models to large datasets

Mixed-effects models are a type of statistical model that incorporates both fixed-effects parameters and random effects. From the point of view of experimental design, fixed effects are associated with experimental factors (e.g. priming or not) or with covariates that have a fixed, reproducible set of levels (e.g. sex of the subject, socio-economic status). Random effects are associated with blocking factors: known sources of variability for which we wish to control. The most common such blocking factor is "Subject". In many studies "Item" will be another blocking factor. Mixed-effects models provide a way of taking these different sources of variability into account in the analysis of the data, but only in the last decade or so has software been available to fit complex mixed-effects models, especially those with crossed random effects such as "Subject" and "Item", to large data sets.

As usually happens, the models that researchers wish to fit are becoming more and more complex and the data sets are becoming larger and larger, straining the capabilities of some of the software used to fit these models. Dr Bates' presentation on 20 June included a discussion of these models, some of the software used to fit them, and future directions in R/lme4 and the recently developed Julia language.
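For readers unfamiliar with the model class, a minimal sketch in R with lme4 (the data frame dat, the response RT, and the predictor Priming are hypothetical, chosen to mirror the priming example above):

    # Hypothetical data: reaction time (RT) as a function of the fixed
    # effect Priming, with crossed random intercepts for the blocking
    # factors Subject and Item.
    library(lme4)

    fit <- lmer(RT ~ Priming + (1 | Subject) + (1 | Item), data = dat)
    summary(fit)

The (1 | Subject) and (1 | Item) terms request random intercepts for the two crossed blocking factors; random slopes (e.g. (1 + Priming | Subject)) can be added, at the cost of a harder fitting problem on large data sets.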

Intelligent Health

Today, I attended half a day of the Intelligent Health 2018 conference here in Basel. I felt a little out of my depth at a conference where most attendees were wearing suits, a BBC presenter introduced the speakers, and a video DJ (DJ sleeper) played background music over footage from popular movies during breaks. This was buzzword territory (AI, deep learning, digital health) and somewhat removed from the scientific conferences I usually attend.

I got to watch a talk by an ERC Advanced Grant winner, Stefano Stramigioli, who presented the MURAB project, which aims to develop robots that can perform (or help with) mammography biopsies (disturbing); a panel discussion sponsored by the World Health Organisation on using data/AI to improve health care around the world (solid but somewhat uninspiring); and the two main "forward gazing" talks (the ones I was there for), by Jay Olshansky (in the flesh) and Gary Marcus (by Skype).

The presentation by Olshansky was rather disappointing. I was expecting a talk on the promise of digital technology for dealing with demographic challenges and instead got a pitch for an algorithm that estimates age and health behaviours (smoking, BMI) from pictures of faces (apparently Olshansky sells this technology to insurance companies that want to make sure you're not lying about such things when buying a policy over the internet).

Gary Marcus was more interesting. He's a deep learning skeptic and gave a pitch from his upcoming book on how deep learning is over-hyped in business and media alike. According to Marcus, deep learning is not close to delivering on its promise for the intellectual problems we are likely to care about in different fields, including health care. Marcus also argued that deep learning should be seen as just another tool in the artificial intelligence toolbox and that getting machines to think, plan, and reason will require hybrid models that combine it with other tools from AI. Unfortunately, Marcus was not at all clear on what these models might look like.

This was not a conference for psychologists; yet psychology could have a role to play in many of the topics discussed. How will humans deal with the idea of machines taking biopsies? How do we avoid "algorithm aversion" in patients, physicians, or policy-makers? It would be interesting to see some discussion of such topics in the next edition in 2019…