Exploring User Expectations of Proactive AI Systems

Christian Meurisch, Cristina A. Mihale-Wilson, Adrian Hawlitschek, Florian Giger, Florian Müller, Oliver Hinz, Max Mühlhäuser
IMWUT 2020
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
TL;DR
What we did: We conducted an in-the-wild study to explore user expectations of proactive artificial intelligence systems across various use cases and proactivity levels with 272 participants using our mobile application ProfileMe.
What we found: While users showed significant openness towards proactive support in most areas, they preferred reactive or no support in mental health contexts, largely due to privacy concerns and a lack of trust in AI systems.
Takeaway: Designers of proactive AI systems should adapt their systems to match user expectations, ensuring clear communication and user control to foster acceptance and use.

Abstract

Recent advances in artificial intelligence (AI) have enabled digital assistants to evolve towards proactive user support. However, expectations as to when and to what extent assistants should take the initiative are still unclear; discrepancies from actual system behavior might negatively affect user acceptance. In this paper, we present an in-the-wild study exploring user expectations of such user-supporting AI systems in terms of different proactivity levels and use cases. We collected 3,168 in-situ responses from 272 participants through a mixed method of automated user tracking and context-triggered surveying. Using a data-driven approach, we gain insights into initial expectations and how they depend on different human factors and contexts. Our insights can help to design AI systems with varying degrees of proactivity, preset to meet individual expectations.