Experiences Using Heat Maps to Help Students Find Their Bugs: Problems and Solutions

Abstract

Automated grading systems provide feedback to students in a variety of ways, but usually focus on identifying incorrect program behaviors. Such systems provide notices of test case failures or runtime errors, but without debugging skills, students often become frustrated because they don’t know where to start. They know their code has defects, but finding the problem may be beyond their experience, especially for beginners. An additional concern is balancing the need to provide enough direction to be useful without giving students so much direction that they are effectively handed the answer. This paper presents our experience using heat maps to visually guide student attention to the parts of their code that are most likely to contain problems. These visualizations are generated using existing tools that capture execution traces from instructor-written tests to identify which portions of the code are executed during tests that pass and which portions are executed during tests that fail. Superimposing these execution footprints allows statistical identification of the locations in the student’s code that are most likely to contain faults. This paper describes the results of using this feedback approach to guide student attention with heat map visualizations over two semesters of CS1 involving more than 700 students. Based on this experience, we analyze the utility of the heat maps, describe student perceptions of their helpfulness, and describe the unexpected challenges arising from students’ attempts to understand and apply this style of feedback. We conclude with concrete solutions proposed to improve how this guiding feedback is presented to students.
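
The abstract does not specify which suspiciousness formula the heat maps use; the sketch below is only a minimal illustration of the general spectrum-based idea it describes, assuming per-test line coverage is available and using the well-known Tarantula metric. The function and data names are hypothetical, not taken from the paper or its tooling.

```python
from collections import defaultdict

def suspiciousness(test_coverage, total_passed, total_failed):
    """Tarantula-style score per line: values near 1.0 mean the line
    is executed mostly by failing tests and is a likely fault location.

    test_coverage is a list of (passed: bool, executed_lines: set[int]).
    """
    passed_hits = defaultdict(int)
    failed_hits = defaultdict(int)
    for passed, lines in test_coverage:
        for line in lines:
            if passed:
                passed_hits[line] += 1
            else:
                failed_hits[line] += 1

    scores = {}
    for line in set(passed_hits) | set(failed_hits):
        fail_ratio = failed_hits[line] / total_failed if total_failed else 0.0
        pass_ratio = passed_hits[line] / total_passed if total_passed else 0.0
        denom = fail_ratio + pass_ratio
        scores[line] = fail_ratio / denom if denom else 0.0
    return scores


if __name__ == "__main__":
    # Hypothetical coverage from three instructor tests: two pass, one fails.
    # Line 7 is executed only by the failing test, so it scores highest.
    tests = [
        (True,  {1, 2, 3, 5}),
        (True,  {1, 2, 4, 5}),
        (False, {1, 2, 3, 7}),
    ]
    for line, score in sorted(suspiciousness(tests, 2, 1).items(),
                              key=lambda kv: -kv[1]):
        print(f"line {line}: suspiciousness {score:.2f}")
```

In a heat map rendering, these per-line scores would be mapped to colors (for example, higher scores shaded more intensely) to draw the student's eye toward the most suspicious code.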

Publication
Proceedings of the 50th ACM Technical Symposium on Computer Science Education
