Archive for February, 2018

paying not to phub

My students phub me during class a lot. It’s annoying to me and, likely, counterproductive for them: Ravizza et al. (2016) tracked non-academic use of the internet in university classrooms and found that (taraaa!) “nonacademic Internet use was common among students who brought laptops to class and was inversely related to class performance” (r = -.25, between time online and exam performance) even controlling for a number of things like intelligence and motivation/interest. So what can we do about it?

I’ve let my students know about the movement to stop phubbing (it hurts!) and Ravizza et al.’s findings – hoping they will make the right choice on their own. A new app, Hold, takes a different, less subtle approach: pay students NOT to look at their cell phones – students earn points for time on campus without using their phone (points that can be redeemed for movie tickets, discounts, etc.). I’m very curious to see reports of the long-term effects of such approaches based solely on extrinsic motivation… If we do this at university, does this imply that employers will need to start handing out bonuses to workers for not using their smartphones on the job?

See here for a newspaper article about the new app in the NZZ (in German).

WWZ Colloquium Schedule

Our colleagues at the WWZ have the schedule for this semester’s colloquium up. A number of the speakers could be of interest to many of us in the Social, Economic, and Decision Psychology area, such as Prof. Ulf Bockenholt, Kellogg School of Management, or Prof. Dr. Lorenz Götte, University of Bonn, to name just a couple. All seminars are open to the public and usually take place on Tuesdays from 12:30 to 13:45 (Faculty of Business and Economics, Peter Merian-Weg 6, seminar room S 13).

Fabian Krüger

On Thursday 1 March, Fabian Krüger, University of Heidelberg, will give a talk in the Social, Economic, and Decision Psychology Colloquium.

Forecast evaluation: the role(s) of the scoring function

Forecasts specify the value of a functional (such as the mean or a quantile), conditional on a given information set. For example, a theory may predict the probability of a person preferring option A over option B in a lab experiment. Alternatively, economists aim to predict the mean rate of inflation as a function of relevant background variables. When evaluating such forecasts, the choice of loss function (or scoring function) is crucial. From an ex ante perspective, it sets the incentive to provide honest and accurate forecasts. From an ex post perspective, it allows one to compare and rank alternative forecasters or statistical models. This talk presents an overview of the recent statistical literature on the subject, discusses some results under model misspecification, and points to applications across disciplines.
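For readers who want a concrete feel for what a scoring function is before the talk, here is a minimal sketch in Python (my own illustration, not material from the talk): two standard scores, one for a probability forecast of a binary event and one for a quantile forecast, making the point that the score has to match the functional being forecast.

```python
# Minimal illustration (not from the talk): two common scoring functions.
# In both cases lower scores are better, and each score is minimized in
# expectation by the "honest" forecast of the corresponding functional.
import numpy as np

def brier_score(prob_forecast, outcome):
    """Score a probability forecast of a binary event (e.g. 'prefers A over B')."""
    prob_forecast = np.asarray(prob_forecast, dtype=float)
    outcome = np.asarray(outcome, dtype=float)
    return np.mean((prob_forecast - outcome) ** 2)

def pinball_loss(quantile_forecast, realized, tau):
    """Score a forecast of the tau-quantile (tau=0.5 gives the median)."""
    quantile_forecast = np.asarray(quantile_forecast, dtype=float)
    realized = np.asarray(realized, dtype=float)
    error = realized - quantile_forecast
    return np.mean(np.where(error >= 0, tau * error, (tau - 1) * error))

if __name__ == "__main__":
    rng = np.random.default_rng(0)

    # Binary-choice example: the true preference probability is 0.7.
    outcomes = rng.binomial(1, 0.7, size=10_000)
    print(brier_score(np.full_like(outcomes, 0.7, dtype=float), outcomes))  # honest forecast
    print(brier_score(np.full_like(outcomes, 0.9, dtype=float), outcomes))  # overconfident forecast scores worse

    # Inflation-style example: score a median forecast with the pinball loss.
    inflation = rng.normal(2.0, 0.5, size=10_000)
    print(pinball_loss(np.full_like(inflation, 2.0), inflation, tau=0.5))
```

The usual caveat applies: the snippet only illustrates the ex post ranking role of a scoring function; the incentive (ex ante) and misspecification issues are exactly what the talk is about.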

David Budescu

We have David Budescu, Fordham University, USA, visiting and giving a talk in the Social, Economic, and Decision Psychology Colloquium (title and abstract follow).

Identifying Expertise and the Wisdom of Selected Crowds

The term “wisdom of crowds” (Surowiecki, 2004) is often used to describe the robust empirical finding that statistical aggregates of the opinions or estimates of a group’s members (i.e., methods that do not involve direct interactions among the members) outperform most individual judgments. In the first part of this talk I will offer a general definition of the “wisdom of the crowd effect” as well as a statistical framework in which to evaluate it, one that explicitly accounts for the inter-dependency among the members of the crowd and their biases. Crowd prediction is treated as a linear combination of group members’ prediction distributions, and the average performance of this aggregate prediction is compared to an individual member (or group of members) selected according to an arbitrary, pre-specified probability distribution. In the second part of the talk I will discuss new methods to measure the contributions of the various individuals to the crowd. I propose using a variant of the influence function that differentiates between the individual contributions, identifies the best (and the worst) contributors, and allows one to derive differential weights for the various group members. These measures of individual contributions can be used to derive improved aggregation schemes that outperform the regular averaging procedures. The procedure is illustrated and validated with data from a large-scale project involving forecasts of geopolitical events.
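As a warm-up for the talk, here is a toy simulation in Python (my own sketch, not Budescu’s framework) showing the basic wisdom-of-crowds effect and a crude leave-one-out stand-in for measuring an individual’s contribution; all parameter values are made up for illustration.

```python
# Toy simulation (not the framework from the talk): the simple average of
# noisy, individually biased estimates typically beats most individual judges.
import numpy as np

rng = np.random.default_rng(42)
true_value = 100.0
n_judges, n_questions = 20, 500

# Each judge has a personal bias plus independent noise on every question.
biases = rng.normal(0.0, 5.0, size=n_judges)
estimates = true_value + biases[:, None] + rng.normal(0.0, 10.0, size=(n_judges, n_questions))

crowd = estimates.mean(axis=0)                                 # unweighted aggregate per question
individual_mse = ((estimates - true_value) ** 2).mean(axis=1)  # each judge's error
crowd_mse = ((crowd - true_value) ** 2).mean()

print(f"crowd MSE: {crowd_mse:.1f}")
print(f"judges beaten by the crowd: {(individual_mse > crowd_mse).sum()} of {n_judges}")

# Crude 'contribution' measure: how the crowd's error changes when one judge
# is left out -- a leave-one-out stand-in for the influence-function idea.
for j in range(3):
    loo = np.delete(estimates, j, axis=0).mean(axis=0)
    loo_mse = ((loo - true_value) ** 2).mean()
    print(f"judge {j}: crowd MSE without them {loo_mse:.1f}")
```

The talk goes well beyond this, of course: the framework accounts for dependence and bias among judges, and the influence-based weights are derived formally rather than by leave-one-out tinkering.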