Workshops at the International Conference on Intelligent User Interfaces (IUI) 2023

Joint Proceedings of the IUI 2023 Workshops: HAI-GEN, ITAH, MILC, SHAI, SketchRec, SOCIALIZE
co-located with the ACM International Conference on Intelligent User Interfaces (IUI 2023)

Sydney, Australia, March 27-31, 2023.

Edited by

Alison Smith-Renner *
Paul Taele **

* Dataminr, Human AI Innovation, New York, USA
** Sketch Recognition Lab, Texas A&M University, USA

Table of Contents

Summary: This volume includes the proceedings of 6 workshops, which accepted a total of 34 papers.

Workshop 1: ITAH: Workshop on Interactive Technologies for AI in Healthcare

Organizers: Öznur Alkan (Optum Ireland), Oya Celiktutan (King's College London), Hanan Salam (New York University Abu Dhabi), Marwa Mahmoud (University of Glasgow), Greg Buckley (Optum Ireland), Niamh Phelan (Optum Ireland)

AI for healthcare has been a very active research area in recent years. AI can support many processes in healthcare, including but not limited to automatic screening and diagnostic tools, health management applications, administrative workflow automation, clinical documentation, patient outreach, specialized support via image analysis, and medical device automation. Although AI systems have been shown to reduce medical errors and improve patient outcomes, their adoption in practice remains a challenge due to the lack of user-centered design and personalisation, and the opaqueness of the underlying algorithms. Target users for AI systems in the healthcare space include clinicians, patients, and healthcare payers, where clinicians comprise all practitioners who diagnose, treat, or care for patients. Studies have shown that clinicians are more likely to accept a decision support system if it matches their own decision-making processes, which can be achieved by allowing them to interact with the system and placing the user in the loop. Across different application areas, feedback from not only clinicians but also other target user groups can significantly improve a system's performance and the end-user experience. From the perspective of target users, it is increasingly desirable that such technologies are tailored to their specific needs and profiles. While modern technologies can offer many benefits, e.g., planning and delivery of clinical care, and management of special conditions at home, they still suffer from unidirectional interaction and a one-size-fits-all paradigm.
Motivated by the above points, we aim to address interactivity in AI solutions targeted at the healthcare domain by bringing together multidisciplinary researchers and practitioners from the AI, healthcare, medicine, and user interaction & experience design domains, and by facilitating discussions in this critical space. We aim to cover topics around different means of interactivity in AI solutions for healthcare, challenges associated with the adoption of AI models in the healthcare space from the end-users' perspective, and how human-AI interaction can help build solutions that lead to a better user experience.

More details here.

Workshop 2: HAI-GEN: Human-AI Co-Creation with Generative Models

Organizers: Mary Lou Maher (University of North Carolina), Justin D. Weisz (IBM Research AI), Hendrik Strobelt (IBM Research AI), Lydia Chilton (Columbia University), Werner Geyer (IBM Research AI)

Recent advances in generative AI through deep learning approaches such as generative adversarial networks (GANs), variational autoencoders (VAEs), and large language models are enabling new kinds of user experiences around content creation, across a range of media types (text, images, audio, and video). These advances have enabled content to be produced with an unprecedented level of fidelity, for tasks such as generating faces, prose and poems, deepfake videos of celebrities, music, and even code. In many cases, content generated by generative models is either indistinguishable from human-generated content or could not be produced by human hands. These examples also highlight some of the significant societal, ethical, and organizational challenges generative AI poses around issues such as security, privacy, ownership, quality metrics, and the evaluation of generated content.

More details here.

Workshop 3: MILC: Workshop on Intelligent Music Interfaces for Listening and Creation

Organizers: Peter Knees (TU Wien), Alexander Lerch (Georgia Institute of Technology)

Today’s music ecosystem is permeated by digital technology—from recording to production to distribution to consumption. Intelligent technologies and interfaces play a crucial role during all these steps. On the music creation side, tools and interfaces such as new sensor-based musical instruments, or software such as digital audio workstations (DAWs) and sound and sample browsers, support creativity. Generative systems can support novice and professional musicians by automatically synthesizing new sounds or even new musical material. On the music consumption side, tools and interfaces such as recommender systems, automatic radio stations, or active listening applications allow users to navigate the virtually endless spaces of music repositories. Since the workshop's first two editions in Tokyo 2018 and Los Angeles 2019, we have witnessed a drastic technical evolution and, reflecting this trend, an increase in the volume of work dealing with music listening and creation interfaces. In addition to technical developments, we now see further challenges arising with respect to gaining a deeper understanding of user intent, in the area of human-AI co-creation, and in building systems for the automatic curation of generated content. To address these and other challenges, the 3rd Workshop on Intelligent Music Interfaces for Listening and Creation (MILC 2023) will again bring together researchers from the communities of music information retrieval (MIR), in particular content-based retrieval, as well as recommender systems, machine learning, human-computer interaction, adaptive systems, and beyond.

More details here.

Workshop 4: SHAI: Workshop on Designing for Safety in Human-AI Interactions

Organizers: Nitesh Goyal (Google Research), Sungsoo Ray Hong (George Mason University), Regan L. Mandryk (University of Saskatchewan), Toby Jia-Jun Li (University of Notre Dame), Kurt Luther (Virginia Polytechnic Institute and State University), DaKuo Wang (IBM Research)

Generative ML models have an unprecedented capacity to produce unsafe outcomes and harms at a large volume, which can be incredibly challenging during human-AI interactions. Despite best intentions, inadvertent outcomes might accrue, leading to harms, especially to marginalized groups in society. On the other hand, those motivated and skilled at causing harm might be able to perpetrate even deeper harms. Our workshop is aimed at practitioners and academic researchers at the intersection of AI and HCI who are interested in understanding these socio-technical challenges and identifying opportunities to address them collaboratively.

More details here.

Workshop 5: SketchRec: Workshop on Sketch Recognition

Organizers: Rachel Blagojevic (Massey University), Paul Taele (Texas A&M University), Tracy Hammond (Texas A&M University), Josh Cherian (Texas A&M University), Jung In Koh (Texas A&M University), Samantha Ray (Texas A&M University)

Sketch recognition is the interpretation of hand-drawn diagrams; it seeks to understand users' intent while allowing them to draw unconstrained diagrams. Sketch recognition research has been ongoing for approximately half a century and has experienced iterative advances due to the difficulty of the problem. As pen- and touch-capable devices such as smartphones, tablets, touch-driven monitors, and large touchscreen devices have become ubiquitous, and as emergent technologies such as virtual and augmented reality-driven computing become more advanced, sketch recognition remains an open field for researchers to explore in approaching the continuing interaction and recognition challenges of these technologies. The Workshop on Sketch Recognition aims to share and discuss state-of-the-art innovations and challenges in IUI research topics that relate to sketch interactions and recognition. We especially focus on highlighting research contributions and engaging in healthy dialogue on topics pertaining to sketch recognition user interfaces and techniques.

More details here.

Workshop 6: SOCIALIZE: Social and Cultural Integration with Personalized Interfaces

Organizers: Fabio Gasparetti (Roma Tre University), Cristina Gena (University of Torino), Giuseppe Sansonetti (Roma Tre University), Marko Tkalčič (University of Primorska)

The SOCIALIZE workshop aims to bring together all those interested in the development of interactive techniques that may contribute to fostering the social and cultural inclusion of a broad range of users. More specifically, we intend to attract research that takes into account the interaction peculiarities typical of different realities, with a focus on disadvantaged and at-risk categories (e.g., refugees and migrants) and vulnerable groups (e.g., children, elderly, autistic, and disabled people). Among others, we are also interested in human-robot interaction techniques aimed at developing social robots, that is, autonomous robots that interact with people by exhibiting socially affective behaviors and abilities and by following the rules related to their collaborative role.

More details here.

2023-03-05: submitted by Alison Renner, metadata incl. bibliographic data published under Creative Commons CC0
2023-03-16: published on CEUR Workshop Proceedings (CEUR-WS.org, ISSN 1613-0073) |valid HTML5|