Automated grading systems are useful for conveying debugging feedback to students, but the manner in which this feedback is displayed can be problematic. These systems typically report failures in student code, whether test case failures or runtime errors. However, since most students are not explicitly taught how to debug, many get stuck: they know the defects exist, but they do not have the experience to know how to find them. Much work has been done in software engineering research on automated fault localization, which programmatically locates bugs within code under test, as well as on validating those localization models when applied to student code. However, how to present that information back to the student has not been addressed. There is a balance to strike between giving hints and telling the student exactly where the error occurred. The goal of this paper is to present a technique for annotating the student's code with suggestions about where to investigate, based on the results of automated fault localization. Taking results from the GZoltar statistical fault localization library, we have developed a method of conveying these suggestions by visualizing the potential defect locations as heatmaps within the context of the student's source code.
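
To make the heatmap idea concrete, the sketch below shades each line of a student's source file according to a per-line suspiciousness score, such as the spectrum-based values (e.g., Ochiai) that GZoltar reports per program element. The `render_heatmap` and `score_to_color` functions, the `{line_number: score}` input format, and the HTML output are illustrative assumptions, not the implementation described in this paper.

```python
# Minimal sketch: map per-line suspiciousness scores (0.0-1.0) onto the
# student's source code as an HTML heatmap. The scores are assumed to have
# been exported from a statistical fault localizer such as GZoltar and
# collected into a {line_number: score} dictionary.
import html


def score_to_color(score: float) -> str:
    """Interpolate from white (score 0.0) to red (score 1.0)."""
    level = int(255 * (1.0 - max(0.0, min(1.0, score))))
    return f"#ff{level:02x}{level:02x}"


def render_heatmap(source: str, scores: dict[int, float]) -> str:
    """Return an HTML fragment with each source line shaded by suspiciousness."""
    rows = []
    for number, line in enumerate(source.splitlines(), start=1):
        color = score_to_color(scores.get(number, 0.0))
        rows.append(
            f'<div style="background:{color}">'
            f"{number:4d}  {html.escape(line)}</div>"
        )
    return "<pre>\n" + "\n".join(rows) + "\n</pre>"


if __name__ == "__main__":
    student_code = (
        "def mean(xs):\n"
        "    total = sum(xs)\n"
        "    return total / (len(xs) - 1)\n"
    )
    # Hypothetical localizer output: line 3 is highly suspicious.
    suspiciousness = {1: 0.1, 2: 0.2, 3: 0.9}
    print(render_heatmap(student_code, suspiciousness))
```

In a deployment, the rendered view would be served alongside the grading system's existing test results so that the shading acts as a hint about where to investigate rather than a definitive answer.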