Exploring the role of user-centred design in AI data training and explainability
A recent conversation with a data scientist at a major London tech firm highlighted how their organisation is investing in data literacy training for UX designers. They recognise that a limited understanding of how to ‘read’ data impacts product development. This got me thinking about how different professionals, such as designers, researchers, and engineers, interact with AI systems and training data.
AI data training is fundamentally a technical challenge, but the way professionals approach it is not uniform. Model developers often rely on hypothesis-driven debugging, testing assumptions about errors in training data to refine models. Their approach is iterative and technical, but it also raises a broader question: how do different professionals make sense of training data and AI model behaviour?
Understanding AI models is not just about feature attribution or performance metrics. AI debugging, for instance, is not purely about finding errors but about identifying which training data points most influenced what the model learned. While model developers focus on technical precision, other professionals engaging with AI systems, whether in product, research, or UX, may interpret outputs through different lenses, depending on their expertise and objectives. The way people interact with explainability tools, assess AI behaviour, and form mental models of system outputs varies widely across disciplines.
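To make that idea concrete, here is a minimal, self-contained sketch in Python. It scores each training example by how strongly its loss gradient aligns with the loss gradient of a test prediction, a simplified heuristic in the spirit of gradient-based training data attribution methods such as TracIn. Everything in it, the toy model, the data, and the learning rate, is invented for illustration; it is a sketch of the question these methods ask, not a production technique.

```python
# Illustrative sketch only: which training points most influenced a
# prediction? Scores each training example by gradient similarity with a
# test example, a simplified heuristic in the spirit of methods like TracIn.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data: two Gaussian blobs (hypothetical data).
X_train = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y_train = np.array([0] * 50 + [1] * 50)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Fit a logistic regression with plain full-batch gradient descent.
w = np.zeros(2)
for _ in range(500):
    grad = X_train.T @ (sigmoid(X_train @ w) - y_train) / len(y_train)
    w -= 0.5 * grad

def per_example_grad(x, y, w):
    # Gradient of the logistic loss for a single example: (p - y) * x.
    return (sigmoid(x @ w) - y) * x

# Attribution heuristic: a training point is "influential" for a test
# prediction if its loss gradient aligns with the test point's gradient.
x_test, y_test = np.array([0.2, 0.3]), 1
g_test = per_example_grad(x_test, y_test, w)
scores = np.array([per_example_grad(x, y, w) @ g_test
                   for x, y in zip(X_train, y_train)])

top = np.argsort(scores)[::-1][:5]
print("Most influential training indices:", top)
print("Their scores:", scores[top].round(3))
```

Real attribution methods are considerably more careful (averaging over training checkpoints, correcting for curvature), but the underlying question is the same: which training examples does this prediction trace back to?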
Designers and researchers already work with data, whether in user research, product analytics, or AI-driven interfaces. But AI systems introduce new layers of complexity: debugging a model requires understanding not only individual decisions but the broader patterns encoded in training data. While explainability tools often focus on mathematical rigour, they could benefit from better alignment with the way different professionals interpret information. Engineers may need detailed attribution metrics for debugging, while designers and product teams might focus more on how outputs align with expected behaviours and user needs. Despite working on the same AI system, professionals approach its outputs differently, and explainability methods may be more effective when they reflect this diversity.
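As a toy illustration of that difference, the sketch below renders one hypothetical set of attribution scores in two ways: an exact, sortable listing for an engineer debugging the model, and a short behavioural summary for a designer or product manager. The feature names, scores, and the 0.05 ‘negligible’ cut-off are all invented for illustration.

```python
# Hypothetical sketch: the same attribution signal rendered for two
# audiences. Feature names and thresholds are invented for illustration.
from typing import Dict

def engineer_view(attributions: Dict[str, float]) -> str:
    # Engineers debugging the model may want exact, sortable numbers.
    rows = sorted(attributions.items(), key=lambda kv: -abs(kv[1]))
    return "\n".join(f"{name:<20} {score:+.4f}" for name, score in rows)

def designer_view(attributions: Dict[str, float]) -> str:
    # Designers and product teams may want a behavioural summary:
    # which factors pushed the output, and in which direction.
    lines = []
    for name, score in sorted(attributions.items(), key=lambda kv: -abs(kv[1])):
        if abs(score) < 0.05:
            continue  # suppress negligible factors for readability
        direction = "increased" if score > 0 else "decreased"
        lines.append(f"'{name}' {direction} the score.")
    return " ".join(lines)

attributions = {"account_age": 0.32, "login_frequency": -0.18, "region": 0.02}
print(engineer_view(attributions))
print(designer_view(attributions))
```

The underlying signal is identical in both views; only the presentation changes, which is precisely the point about aligning explainability with different professional lenses.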
These differences raise interesting questions about how professionals across disciplines build their understanding of AI behaviour, how they evaluate whether an AI system is behaving as expected, and what kind of information they need to do so effectively. Explainability is sometimes an afterthought in AI development, yet gaps in understanding emerge early in the process, not just at the stage where AI systems are being deployed. If different professionals rely on different mental models to interpret training data and model behaviour, then there may be opportunities to refine how explainability tools are designed to support a broader range of stakeholders, without compromising technical rigour.
The role of user-centred design in AI is not about making AI ‘simpler’, but about making its development and use more intelligible across disciplines. As AI models become more complex, bridging the gap between technical optimisation and human understanding could lead to more effective collaboration between engineers, designers, and decision-makers. The interplay between human reasoning and AI explainability remains an area worth further exploration, one that complements rather than replaces deep technical expertise. Hopefully, this initial reflection on how user-centred design could be more intentionally integrated into AI development will spark further conversations.
Nguyen, E., Bertram, J., Kortukov, E., Song, J. Y., & Oh, S. J. (2024). Towards user-focused research in training data attribution for human-centered explainable AI. arXiv preprint. https://doi.org/10.48550/arXiv.2409.16978
Dwivedi, R., Dave, D., Naik, H., Singhal, S., Omer, R., Patel, P., Qian, B., Wen, Z., Shah, T., Morgan, G., & Ranjan, R. (2023). Explainable AI (XAI): Core ideas, techniques, and solutions. ACM Computing Surveys, 55(9), Article 194. https://doi.org/10.1145/3561048
Parts of this manuscript were drafted with the assistance of AI language models (specifically, Claude 3.7, ChatGPT 4.0, Google Gemini 2.0). The author used AI as a tool to enhance clarity and organisation of ideas, generate initial drafts of certain sections, and assist with language refinement. All AI-generated content was reviewed, edited, and verified by the author. The author takes full responsibility for the content, arguments, analyses, and conclusions presented. This disclosure is made in the interest of transparency regarding emerging research practices.