What UX-Design can contribute to the AI/ML engineering process

In the AI/ML engineering process, accuracy is typically a key metric. However, turning a great technology into a great product or service requires a bit more. This is the subject of investigation for User Experience Design (UX-Design). The discipline is often associated with the creation of graphical user interfaces, typically apps or websites. Yet the field is much broader and offers a range of methods and processes that shape technology into usable, adoptable, desirable and even more viable products or services. These are desiderata that also seem very relevant to AI/ML systems. Let’s explore.

What is UX-Design?

User Experience Design is rooted in Human-Computer Interaction: the science of designing human interaction with technology. Technology has to be interpreted widely here, from e-commerce to airplane cockpits. It might seem odd that a single discipline is adept at designing for such a broad spectrum. The explanation is that UX-Design focuses on the one consistent factor within all technical systems: the human factor. This is also known as human-centered design: the idea that by putting the human at the center of attention, you get better products. This may sound arty and subjective; however, the process is driven by empirical research methods, and its guidelines are based on psychological insights and principles.

Right problem, right solution

A typical saying in User Experience Design is to “solve the right problem, in the right way”. This means it is as much about identifying the core functional attributes of a product as it is about usability. The process is not opinion-led; instead it is driven by qualitative and quantitative user research. Methods investigate the context of use, its users, their mindsets and mental models, and the surrounding interactions and processes. Based on those insights, a core value proposition is generated (the right problem). This is followed by designing the product interactions (the right way). The “double diamond”, one of the gold-standard process descriptions of design, makes this explicit and differentiates between these two modes of thinking, each with its own set of research methods.

UX is all you need?

In the AI/ML discipline, the discussion on how to reach “state of the art” quickly turns to technicalities such as the choice of loss function, model architecture or data pipeline. However, according to a McKinsey study, the biggest differentiator of high-performing AI teams is not rooted in technical practices. It is the adoption of the design thinking process: in other words, using a user-centered design approach to identify and solve problems iteratively, with the user in focus.

The adoption of the design thinking process is the key differentiator for high-performing AI teams.

I won’t go into the nuanced differences between UX-Design, Design Thinking, Service Design or Customer Centricity. They share more than their disciples give them credit for. For convenience’s sake, I’ll refer to all of them as UX-Design. More importantly, I want to detail which facets of UX-Design contribute to an improved AI/ML engineering process.

8 themes: what UX can contribute to AI engineering
Context investigation to improve model performance
Models aim for high accuracy and high generalisability across data. However, this should not be confused with a model being successful wherever it is deployed. Without understanding the context in which an AI/ML model is deployed, model accuracy can degrade and ultimately lead to subpar product performance [2].

By conducting user research early and frequently, AI teams get a good grip on reality. Realistic data samples can be collected early, and further user and prediction requirements can be understood. This can inform hard technical choices such as architecture, data augmentation strategies or the operationalisation of data pipelines. In a sense, it collects a true ground truth.
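As a toy illustration of this degradation, here is a minimal sketch (the “camera bias” scenario, feature names and all numbers are assumptions for the example, loosely echoing the clinic study in [2]): a simple threshold classifier tuned in one context loses substantial accuracy when deployed in a context the team never investigated.

```python
import random

random.seed(0)

def sample(n, camera_bias=0.0):
    """Simulated screening data: the label depends on a latent severity,
    the feature is a camera reading of it; camera_bias stands in for a
    deployment context (lighting, device) the team never studied."""
    data = []
    for _ in range(n):
        severity = random.gauss(0.0, 1.0)
        reading = severity + camera_bias + random.gauss(0.0, 0.2)
        data.append((reading, int(severity > 0.0)))
    return data

def accuracy(data, threshold):
    return sum((x > threshold) == bool(y) for x, y in data) / len(data)

# "Train": fit a threshold classifier on data from the studied context.
train = sample(5000)
threshold = max((t / 10 for t in range(-20, 21)),
                key=lambda t: accuracy(train, t))

lab_acc = accuracy(sample(5000), threshold)
field_acc = accuracy(sample(5000, camera_bias=1.0), threshold)
print(f"accuracy in studied context:   {lab_acc:.2f}")
print(f"accuracy with unstudied shift: {field_acc:.2f}")
```

The model itself is unchanged between the two runs; only the deployment context moved, which is exactly the kind of gap early user research is meant to surface.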
Understand and map processes with a service design perspective
AI/ML models typically “automate” a part of a longer process. User journeys, or service blueprints, are a method for UX-Designers to capture the chain of actions in which people, props and processes interact along a time axis. This helps to better understand the impact of interventions and to integrate products or services into their larger context. DeepMind built a model that assists with diagnosing a medical condition that triggers vision loss [13]. Rather than predicting the likelihood that the condition is present, the model predicts the likelihood that the condition will unfold within the next 6 months. The difference is subtle, but the latter fits the journey of a patient through a healthcare system much better.

Applied to AI/ML, service blueprints help break down complex processes and anticipate how AI/ML systems fit in. This holistic view helps AI/ML systems integrate more usefully into existing processes.
Designing the Human-AI interaction
It is a fallacy to think AI will remove the human from the product equation. AI/ML systems will indeed often drastically simplify the user interface (consider Alexa, Face Unlock or the Tesla Model 3). As a result, much of the UX design is no longer graphical and instead invisible or behaviour-driven. New guidelines are emerging for designing the human-AI interaction (great review at [12]).

Usable AI systems lead to overall higher satisfaction rates and help drive product adoption and success.
Contributing to trustworthy AI
Studies in the medical domain have shown that having a highly accurate model is not enough for adoption (e.g. [8]). Elements of mistrust or model opaqueness can prevent uptake of an assistive technology.

The driver of adoption is not technical specification. It is how much trust the user has in the system [1].

Explainable AI is a booming research field at the intersection of AI and UX, strongly motivated by creating trust in AI systems. Many new methods are emerging (great overview at [4]), but the field also recognises that the “human” perspective is missing, i.e. the question of how to design useful and valuable explanations [9]. In addition to explanations, further mechanisms can be envisioned that foster trust.
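To give a taste of what such explanation methods look like, here is a minimal pure-Python sketch of permutation importance, one common model-agnostic technique (the loan-approval rule, feature names and data are illustrative assumptions, not taken from the references): shuffle one feature at a time and observe how much the model’s accuracy drops.

```python
import random

random.seed(1)

# A "black box" to be explained. Here it is a hand-written rule
# (an illustrative stand-in; in practice this would be a trained model):
# approve when income sufficiently outweighs debt.
def model(income, debt, zip_digit):
    return int(income - 1.5 * debt > 0)

# Synthetic evaluation rows: (income, debt, zip_digit, label).
# zip_digit is deliberately irrelevant to the outcome.
rows = []
for _ in range(2000):
    income = random.gauss(1.0, 0.5)
    debt = random.gauss(0.5, 0.3)
    zip_digit = random.randint(0, 9)
    rows.append((income, debt, zip_digit, int(income - 1.5 * debt > 0)))

def accuracy(data):
    return sum(model(i, d, z) == y for i, d, z, y in data) / len(data)

def permutation_importance(data, col):
    """Shuffle one feature column and report the accuracy drop:
    the larger the drop, the more the model relies on that feature."""
    values = [row[col] for row in data]
    random.shuffle(values)
    broken = [tuple(values[i] if j == col else v for j, v in enumerate(row))
              for i, row in enumerate(data)]
    return accuracy(data) - accuracy(broken)

importances = {name: permutation_importance(rows, col)
               for name, col in [("income", 0), ("debt", 1), ("zip_digit", 2)]}
for name, imp in importances.items():
    print(f"{name:9s} importance: {imp:+.2f}")
```

A ranking like this answers “what does the model rely on?”, but as [9] argues, it does not by itself answer the user’s question of whether a specific decision was reasonable; that is where explanation design comes in.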

With this lens, a user-centered design approach contributes to trustworthy AI, an important target for AI made in Europe.
Design safe human-in-the-loop experience
When it comes to AI safety, engineers quickly resort to ideas such as human-in-the-loop concepts. Human factors research has long investigated semi-automated settings [3,5,7,10]. Human-in-the-loop situations are very tricky (they help explain airplane crashes) and therefore require deliberate and careful design.

A thorough human factors analysis helps in designing and analysing semi-automated settings. Especially when outcomes can have severe consequences, such as in the medical domain, these design considerations should receive detailed attention. Without them, solutions might not only fail but have disastrous effects, even with a human in the loop.
Foster interdisciplinary collaboration for great user experiences
In classical software, the user interface is the lingua franca between product and development people. The UI serves as a target for engineers to develop against. For AI/ML projects, this model no longer holds, since much of the experience and many design considerations are non-graphical. This implies that the need for effective collaboration between product and engineering disciplines is even larger for AI/ML projects [12].

The facilitation of a design thinking process triggers a playful, experimental and effective collaboration. This leads to new perspectives and creative solutions for AI/ML products.
Real hypertuning of the learning rate
It is said that 87% of all AI/ML products fail. Even worse, a study of 2,212 Covid-19 models showed that not a single one managed to get deployed into a clinical setting [11]. UX-Design operates with a “fail early” attitude. The reasoning is that conceptual mistakes are much cheaper to correct with design artefacts. By creating low-tech prototypes, storyboards and even faked experiences, AI teams can learn without investing in long technical processes.
Add a sense of new perspective
“Users don’t want a drill, but a hole in the wall to attach a picture.” This line of thinking, popularised by the “Jobs to be Done” methodology [6], helps frame new solutions.

UX design delivers a fresh perspective on hard problems. With design thinking, inspiring user research and playful prototyping, these perspectives emerge almost automatically.

This helps AI teams frame problems in a new light and find creative new solutions.

The bottom line

With user research, playful exploration and prototyping, collaboration, and experimentation, UX-Design can inform hard technical choices and shape the AI experience. Furthermore, by carefully considering human factors, interaction with the technology can be designed such that adoption is fostered. The main contribution, however, is the transformation from a technology perspective to a product/service perspective within a development team.



[1] Tammy Bahmanziari, J. Michael Pearson, and Leon Crosby. 2003. Is Trust Important in Technology Adoption? A Policy Capturing Approach. Journal of Computer Information Systems 43, 4 (September 2003), 46–54.

[2] Emma Beede, Elizabeth Baylor, Fred Hersch, Anna Iurchenko, Lauren Wilcox, Paisan Ruamviboonsuk, and Laura M. Vardoulakis. 2020. A Human-Centered Evaluation of a Deep Learning System Deployed in Clinics for the Detection of Diabetic Retinopathy. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, New York, NY, USA, 1–12.

[3] Ben Green and Amba Kak. The False Comfort of Human Oversight as an Antidote to A.I. Harm. Slate. Retrieved March 17, 2022 from https://slate.com/technology/2021/06/human-oversight-artificial-intelligence-laws.html

[4] Francesco Bodria, Fosca Giannotti, Riccardo Guidotti, Francesca Naretto, Dino Pedreschi, and Salvatore Rinzivillo. 2021. Benchmarking and Survey of Explanation Methods for Black Box Models. arXiv e-prints (February 2021), arXiv:2102.13076.

[5] Jeffrey M. Bradshaw, Robert R. Hoffman, David D. Woods, and Matthew Johnson. 2013. The Seven Deadly Myths of “Autonomous Systems.” IEEE Intell. Syst. 28, 3 (May 2013), 54–61.

[6] Clayton M. Christensen, Taddy Hall, Karen Dillon, and David S. Duncan. 2016. Know Your Customers’ “Jobs to Be Done.” Harvard Business Review. Retrieved March 17, 2022 from https://hbr.org/2016/09/know-your-customers-jobs-to-be-done

[7] Mica R. Endsley. 2017. From Here to Autonomy: Lessons Learned From Human–Automation Research. Hum. Factors 59, 1 (February 2017), 5–27.

[8] Maia Jacobs, Melanie F. Pradier, Thomas H. McCoy Jr, Roy H. Perlis, Finale Doshi-Velez, and Krzysztof Z. Gajos. 2021. How machine-learning recommendations influence clinician treatment selections: the example of the antidepressant selection. Transl. Psychiatry 11, 1 (February 2021), 108.

[9] Tim Miller, Piers Howe, and Liz Sonenberg. 2017. Explainable AI: Beware of Inmates Running the Asylum Or: How I Learnt to Stop Worrying and Love the Social and Behavioural Sciences. arXiv [cs.AI]. Retrieved from http://arxiv.org/abs/1712.00547

[10] Raja Parasuraman and Dietrich H. Manzey. 2010. Complacency and bias in human use of automation: an attentional integration. Hum. Factors 52, 3 (June 2010), 381–410.

[11] Michael Roberts, Derek Driggs, Matthew Thorpe, Julian Gilbey, Michael Yeung, Stephan Ursprung, Angelica I. Aviles-Rivero, Christian Etmann, Cathal McCague, Lucian Beer, Jonathan R. Weir-McCall, Zhongzhao Teng, Effrossyni Gkrania-Klotsas, James H. F. Rudd, Evis Sala, and Carola-Bibiane Schönlieb. 2021. Common pitfalls and recommendations for using machine learning to detect and prognosticate for COVID-19 using chest radiographs and CT scans. Nature Machine Intelligence 3, 3 (March 2021), 199–217.

[12] Hariharan Subramonyam, Jane Im, Colleen Seifert, and Eytan Adar. 2022. Solving separation-of-concerns problems in collaborative design of human-AI systems through leaky abstractions. CHI’22 (To Appear) (2022). DOI:https://doi.org/10.1145/3491102.3517537

[13] Jason Yim, Reena Chopra, Terry Spitz, Jim Winkens, Annette Obika, Christopher Kelly, Harry Askham, Marko Lukic, Josef Huemer, Katrin Fasler, Gabriella Moraes, Clemens Meyer, Marc Wilson, Jonathan Dixon, Cian Hughes, Geraint Rees, Peng T. Khaw, Alan Karthikesalingam, Dominic King, Demis Hassabis, Mustafa Suleyman, Trevor Back, Joseph R. Ledsam, Pearse A. Keane, and Jeffrey De Fauw. 2020. Predicting conversion to wet age-related macular degeneration using deep learning. Nat. Med. 26, 6 (June 2020), 892–899.