LLM & HCI
Leveraging Large Language Models for Usability Evaluation
Project Overview
This project explores the intersection of Large Language Models (LLMs) and Human-Computer Interaction (HCI), focusing on how AI can support and enhance usability evaluation. We investigate novel applications of LLMs in identifying usability flaws, generating heuristic evaluations, and supporting UX researchers throughout the design and development lifecycle.
Status: Active Research
Team: 22 undergraduate researchers (Summer 2025)
Publication: "Catching UX Flaws in Code" (Accepted, 2025)
Research Questions
- Can LLMs effectively identify usability flaws in code at the development stage?
- How can AI tools augment traditional usability evaluation methods?
- What are the limitations and biases of LLM-based usability assessment?
- How do practitioners perceive and adopt AI-assisted HCI tools?
- What role can LLMs play in teaching usability principles to developers?
Key Research Areas
- Automated Usability Evaluation: Using LLMs to detect UX issues in user interfaces and code
- Heuristic Generation: AI-assisted creation of domain-specific usability heuristics (see the sketch following this list)
- Developer Tools: Integrating usability checks into development workflows
- Educational Applications: Teaching HCI principles through AI-powered feedback
- Cultural Considerations: Exploring how LLMs can support culturally aware design evaluation
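As a rough illustration of the heuristic-generation idea above, the sketch below shows how one might prompt an LLM for domain-specific heuristics and parse a structured response. It is a minimal sketch, not the project's actual tooling: the `call_llm` helper, the JSON field names, and the example domain are placeholder assumptions to be swapped for whatever LLM API and schema a real prototype uses.

```python
import json

# Placeholder for whatever LLM API is actually used; it takes a prompt
# string and returns the model's raw text response.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM provider of choice")

def generate_heuristics(domain: str, n: int = 5) -> list[dict]:
    """Ask an LLM for domain-specific usability heuristics as structured JSON."""
    prompt = (
        f"Propose {n} usability heuristics tailored to {domain}. "
        "Return only a JSON array of objects with 'name' and 'description' fields."
    )
    raw = call_llm(prompt)
    return json.loads(raw)  # in practice, validate and retry on malformed output

if __name__ == "__main__":
    for h in generate_heuristics("mobile banking apps"):
        print(f"- {h['name']}: {h['description']}")
```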
Catching UX Flaws in Code
Our paper "Catching UX Flaws in Code: Leveraging LLMs to Identify Usability Flaws at the Development Stage" (accepted for publication in 2025) presents a novel approach to usability evaluation. By analyzing source code directly, we demonstrate how LLMs can identify potential usability issues early in the development process, before they manifest in the final user interface.
This research has implications for developer tools, code review processes, and the integration of usability considerations into agile development workflows.
Student Involvement
With 22 undergraduate researchers working on LLM & HCI projects in Summer 2025, this is one of our largest research initiatives. Students are exploring various applications of AI in HCI, from building prototype tools to conducting user studies and analyzing the effectiveness of LLM-based evaluations.
Future Directions
We are continuing to investigate how LLMs can support the entire UX research process, from initial user research to final evaluation. Future work will explore multimodal AI systems that can analyze both code and visual designs, as well as the ethical implications of automated usability evaluation.
Collaborations
This project involves collaboration with researchers and students across multiple institutions, combining expertise in HCI, natural language processing, software engineering, and computing education.