Short bio
Andrea obtained a Master's degree in Physics from the University of Milan in 2005 and a PhD in Mathematics from ETH Zurich in 2012. He then worked for five years in industry as a consultant in data and analytics for financial services. From 2018 to 2023, he led the Mobiliar Lab for Analytics at ETH Zurich as its Scientific Director. In 2024, he joined the Institute of Biomedical Ethics and History of Medicine (IBME), University of Zurich, as a Senior Research Associate. His research lies at the intersection of philosophy and technology, with a focus on epistemological and ethical problems of artificial intelligence (AI) in healthcare, such as the algorithmic prediction of the goal of care preferences of incapacitated patients, the ethically sensitive design of digital health interventions that use mixed reality technology to manage work-related stress, and the relation between explainability and trust in AI. His work has appeared in Synthese, Philosophy & Technology, the Journal of Medical Ethics, Ethics and Information Technology, the Journal of Biomedical Informatics, BMC Digital Health, JMIR and Computers in Human Behavior, among others. His research has been featured in national and international media outlets such as Tagesanzeiger, Frankfurter Allgemeine, the Financial Times and The Wall Street Journal.
Research interests
- Artificial intelligence (AI)
- Philosophy of AI
- Explainable AI
- Trust in AI
- Patient decision aid systems
- Advance directives
- Goal of care preferences
- Digital health interventions
- Work-related stress
- Mixed reality
Publications
2025
Journal Article
- Co-developing an educational platform against ageism with older adults: A use case from Switzerland. Educational Gerontology: epub ahead of print.

2024
Journal Article
- Justifying Our Credences in the Trustworthiness of AI Systems: A Reliabilistic Approach. Science and Engineering Ethics, 30(6):55.
- Large language models in medical ethics: useful but not expert. Journal of Medical Ethics, 50(9):653-654.
- Experts or Authorities? The Strange Case of the Presumed Epistemic Superiority of Artificial Intelligence Systems. Minds and Machines, 34(3):30.
- The Patient Preference Predictor: A Timely Boost for Personalized Medicine. The American Journal of Bioethics, 24(7):35-38.

2023
Journal Article
- A case for preference-sensitive decision timelines to aid shared decision-making in intensive care: need and possible application. Frontiers in Digital Health, 5:1274717.
- Investigating Employees’ Concerns and Wishes Regarding Digital Stress Management Interventions With Value Sensitive Design: Mixed Methods Study. Journal of Medical Internet Research, 25:e44131.
- Virtual reality-supported biofeedback for stress management: Beneficial effects on heart rate variability and user experience. Computers in Human Behavior, 141:107607.
- Evaluating the effects of a programming error on a virtual environment measure of spatial navigation behavior. Journal of Experimental Psychology: Learning, Memory, and Cognition, 49(4):575-589.
- Ethics of the algorithmic prediction of goal of care preferences: from theory to practice. Journal of Medical Ethics, 49(3):165-174.
- An interpretable machine learning approach to multimodal stress detection in a simulated office environment. Journal of Biomedical Informatics, 139:104299.
- AI knows best? Avoiding the traps of paternalism and other pitfalls of AI-based patient preference prediction. Journal of Medical Ethics, 49(3):185-186.

2022
Journal Article
- An Ethical Framework for Incorporating Digital Technology into Advance Directives: Promoting Informed Advance Decision Making in Healthcare. Yale Journal of Biology and Medicine, 95(3):349-353.
- Predicting Working Memory in Healthy Older Adults Using Real-Life Language and Social Context Information: A Machine Learning Approach. JMIR Research Protocols, 5(1):e28333.
- In Search of a Mission: Artificial Intelligence in Clinical Ethics. The American Journal of Bioethics, 22(7):23-25.