Intelligent Health

Today, I attended half a day of the Intelligent Health 2018 conference here in Basel. I felt a little out of my depth at a conference where most attendees were wearing suits, a BBC presenter introduced the speakers, and a video DJ (DJ sleeper) played background music over footage from popular movies during the breaks. The event was heavy on buzzwords (AI, deep learning, digital health) and somewhat removed from the scientific conferences I usually attend.

I got to watch a talk by an ERC Advanced Grant winner, Stefano Stramigioli, who presented the MURAD project, which aims to develop robots that can perform (or help with) mammography biopsies (disturbing); a panel discussion sponsored by the World Health Organisation on using data/AI to improve health care around the world (solid but somewhat uninspiring); and the two main “forward gazing” talks (the ones I was there for), by Jay Olshansky (in the flesh) and Gary Marcus (by Skype).

The presentation by Olshansky was rather disappointing. I was expecting a talk on the promise of digital technology for dealing with demographic challenges; instead I got a pitch for an algorithm that estimates age and health behaviours (smoking, BMI) from pictures of faces. (Apparently Olshansky sells this technology to insurance companies that want to make sure you’re not lying about such things when you buy a policy over the internet.)

Gary Marcus was more interesting. He’s a deep learning skeptic and gave a pitch from his upcoming book on how deep learning is overhyped in business and media alike. According to Marcus, deep learning is not close to delivering on its promise for the intellectual problems we are likely to care about in different fields, including health care. Marcus also argued that deep learning should be seen as just another tool in the artificial intelligence toolbox, and that getting machines to think, plan, and reason will require hybrid models that draw on other tools from AI beyond deep learning. Unfortunately, Marcus was not at all clear on what these models might look like.

This was not a conference for psychologists, yet psychology could have a role to play in many of the topics discussed. How will humans deal with the idea of machines taking biopsies? How do we avoid “algorithm aversion” in patients, physicians, or policy-makers? It would be interesting to see some discussion of such topics in the next edition in 2019…
