My research interests include CS education, software testing, component techniques, and innovative teaching, including automatic grading using student-written tests.
About Me
I am a Professor and the Associate Department Head for Undergraduate Studies in the Department of Computer Science at Virginia Tech, where I have been teaching since 1996. I received my B.S. in electrical engineering from Caltech, and M.S. and Ph.D. degrees in computer and information science from The Ohio State University. My research interests are in computer science education, software engineering, automated testing, the use of formal methods in programming languages, and component-based approaches to software engineering and reuse.
I am the project lead for Web-CAT, the most widely used open-source automated grading system in the world. Web-CAT is known for allowing instructors to grade students based on how well they test their own code. In addition, my research group has produced a number of other open-source tools used in classrooms at many other institutions. More information on my research projects appears below.
Currently, I am researching innovative ways of giving students feedback as they work on assignments, with the aim of providing a more welcoming experience. The goal is to recognize the effort students put in and the accomplishments they make as they work toward solutions, rather than simply checking whether they have finished what is required. We hope this approach to feedback will strengthen growth mindset beliefs while encouraging deliberate practice, self-checking, and skill improvement as students work.
Projects
My research and teaching activities all advance a common theme: improving software quality through better design and better assessment. In addition to researching design techniques that reduce the frequency of defects, I am also interested in design techniques that promote software testing, and even approaches to producing code with built-in self-testing features. At the same time, I am passionate about bringing the most effective of these techniques to our students, to train future practitioners with the skills necessary to produce higher quality code.
Computer Science Education
I regularly work on a variety of research projects in computer science education, including automated feedback strategies, assessment measures and metrics, gamification, innovative teaching methods, and techniques for nudging students to adopt more productive behaviors.
Web-CAT
Web-CAT is an advanced automated grading system that can grade students on how well they test their own code. It is free and open source on GitHub. Web-CAT is highly customizable and extensible, and supports virtually any model of program grading, assessment, and feedback generation. It is implemented as a web application with a plug-in-style architecture, so it can also serve as a platform for providing additional student support services that help students learn programming or software testing.
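To give a concrete feel for the idea, here is a minimal sketch (in Python, and emphatically not Web-CAT's actual scoring code; the function name and weighting are my own simplifications) of one way to grade students on their own testing: scale credit by both the pass rate of the student's own tests and how much of their code those tests exercise.

```python
# Illustrative sketch only -- not Web-CAT's real scoring code.
# Credit is scaled by the pass rate of the student's own tests AND by
# how much of their code those tests actually exercise.

def composite_score(own_tests_passed: int, own_tests_total: int,
                    coverage: float, max_points: float = 100.0) -> float:
    """Return a score in [0, max_points].

    own_tests_passed / own_tests_total: results of running the
        student's own tests against the student's own code.
    coverage: fraction of the student's code exercised by those
        tests (e.g., statement or branch coverage), in [0, 1].
    """
    if own_tests_total == 0:
        return 0.0  # no tests submitted, no credit
    pass_rate = own_tests_passed / own_tests_total
    completeness = max(0.0, min(coverage, 1.0))
    return max_points * pass_rate * completeness

# Example: 11 of 12 tests pass with 85% coverage -> about 77.9 points.
print(round(composite_score(11, 12, 0.85), 1))
```

Multiplying the factors (rather than averaging them) means that weak testing caps the grade no matter how correct the code is, which is the behavioral nudge this style of grading relies on.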
As of September 2020, the Web-CAT server at Virginia Tech had been used in 1,642 course sections across 39 universities, on assignments covering 8 programming languages, and had processed 4,368,856 submissions from 49,758 students. This does not include submissions processed at other institutions (approximately 90 of them) running their own servers.
CodeWorkout
CodeWorkout is an online system for people learning a programming language for the first time. It is a free, open-source solution for practicing small programming problems, also available on GitHub. Students can practice coding exercises on a variety of programming concepts from the convenience of a web browser. Exercises provide customized, immediate feedback to support learning, and the system suggests appropriate exercises as students improve their mastery. CodeWorkout was inspired by many great systems built by others, but aims to bring together the best of those forerunners while adding important new features. It provides comprehensive support for teachers who want to use coding exercises in their courses, while also maintaining flexibility for self-paced learners who aren't part of an organized course.
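As a rough illustration of that kind of adaptive suggestion (a hypothetical sketch of my own, not CodeWorkout's actual recommendation logic), a system might steer practice toward the concept where the learner's estimated mastery is lowest:

```python
# Hypothetical sketch -- not CodeWorkout's actual recommendation logic.
# Steer practice toward the concept with the lowest estimated mastery.

mastery = {"loops": 0.90, "strings": 0.55, "recursion": 0.30}
exercises = {
    "loops": ["sum_range", "countdown"],
    "strings": ["reverse_words", "is_palindrome"],
    "recursion": ["factorial", "tree_depth"],
}

def suggest_next(mastery: dict, exercises: dict) -> str:
    weakest = min(mastery, key=mastery.get)  # lowest-mastery concept
    return exercises[weakest][0]             # next exercise for that concept

print(suggest_next(mastery, exercises))  # -> factorial
```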
PEML
PEML, the Programming Exercise Markup Language, is a simple, easy format for CS and IT instructors of all kinds (college, community college, high school, whatever) to describe programming assignments and activities. We want it to be so easy (and obvious) to use that instructors won't see it as a technological or notational barrier to expressing their assignments. PEML is being developed as part of the SPLICE project to make it easier for new users to begin using automated assessment tools.
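To give a flavor of the format, here is a short PEML-style description. The field names and delimiters below reflect my reading of the draft specification and are illustrative only; the PEML documentation is the authoritative reference.

```
exercise_id: edu.vt.cs.cs1114.sp2020.palindrome-check
title: Palindrome Check
tag.topics: strings, loops
instructions:----------
Write a function is_palindrome(s) that returns true when the
string s reads the same forwards and backwards.
----------
```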
SPLICE
The SPLICE project (Standards, Protocols, and Learning Infrastructure for Computing Education) is a multi-university effort with a mission to support the CS education community by supplying documentation and infrastructure that help with adopting shared standards, protocols, and tools. In this way we hope to promote the development and broader reuse of innovative learning content that is instrumented for rich data collection; formats and tools for analyzing learner data; and best practices for making large collections of learner data and associated analytics available to researchers in the computing education, data science, and learning science communities.
Students Needed
I am interested in hearing from everyone who is excited about independent study or research projects, from undergraduates looking for research credit to Ph.D. candidates looking for a topic. However, I expect to work with serious, self-directed, research-oriented students who are capable of both carrying out research projects and clearly writing them up with little supervision. I am willing to discuss potential topics in any of the projects described above. Here are a few more specific topics where I am looking to add students:
- PEMLtest: Developing and implementing a streamlined domain-specific language for writing software test cases.
- CodeWorkout Algorithm Visualization: Adapting an existing program animation tool (the Python Tutor and its relatives) for use in CodeWorkout, so that students can step through their own solutions to small programming exercises on the inputs that cause those solutions to fail.
- Gamification of Programming Assignments: Adding a unique combination of multiple game-inspired mechanisms and features to how students work, so that students can earn achievements, see their skills develop, and learn how to practice productive student behaviors.
- Dockerized Web-CAT: Developing a container deployment strategy for Web-CAT servers, including Docker Hub images for Web-CAT and a Kubernetes strategy for cluster deployment.
- Java Program Analysis Microservices: Developing containerized Java services that perform various analysis tasks (static analysis, compilation, etc.) in order to provide higher-performance assessment platforms.
- Worked Examples: Providing automated support that lets students study reference example solutions and build similarly structured answers to related problems, so they can learn the underlying patterns.
- Multi-part Coding Questions: Adding support for exercises with multiple parts to CodeWorkout.
- Automatic Test Data Generation: Adapting techniques used in some functional programming languages (e.g., QuickCheck-style random generation) to generate test data for testing student programs in various contexts; see the sketch after this list.
- Programming Question Generators: Adapting techniques used to parameterize exercises to work in CodeWorkout to provide custom, individualized versions of exercises to each student.
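On the automatic test data generation topic above, here is a minimal QuickCheck-inspired sketch (my own Python illustration of the general technique, not an existing tool): randomly generate inputs and compare a submitted function against a reference implementation, reporting the first counterexample found.

```python
# Minimal QuickCheck-inspired sketch (illustration only, not an
# existing tool): generate random inputs and compare a submitted
# function against a reference implementation.
import random

def reference_sort(xs):
    return sorted(xs)

def student_sort(xs):
    return sorted(set(xs))  # hypothetical buggy submission: drops duplicates

def random_int_list(rng, max_len=8, lo=-10, hi=10):
    return [rng.randint(lo, hi) for _ in range(rng.randint(0, max_len))]

def find_counterexample(student_fn, reference_fn, gen, trials=200, seed=1):
    rng = random.Random(seed)
    for _ in range(trials):
        xs = gen(rng)
        if student_fn(list(xs)) != reference_fn(list(xs)):
            return xs  # first input where the submission disagrees
    return None

print("counterexample:",
      find_counterexample(student_sort, reference_sort, random_int_list))
```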
Publications
Some of my recent publications:
Bob Edmison and Stephen H. Edwards. 2020. Turn up the heat! Using heat maps to visualize suspicious code to help students successfully complete programming problems faster. In Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering: Software Engineering Education and Training (ICSE-SEET '20). Association for Computing Machinery, New York, NY, USA, 34–44.
Stephen H. Edwards, Krishnan P. Murali, and Ayaan M. Kazerouni. 2019. The relationship between voluntary practice of short programming exercises and exam performance. In Proceedings of the ACM Conference on Global Computing Education (CompEd '19). Association for Computing Machinery, New York, NY, USA, 113–119.
Michael S. Irwin and Stephen H. Edwards. 2019. Can mobile gaming psychology be used to improve time management on programming assignments? In Proceedings of the ACM Conference on Global Computing Education (CompEd '19). Association for Computing Machinery, New York, NY, USA, 208–214.
Some of my most popular publications:
Stephen H. Edwards. 2004. Using software testing to move students from trial-and-error to reflection-in-action. In Proceedings of the 35th SIGCSE Technical Symposium on Computer Science Education (SIGCSE '04). Association for Computing Machinery, New York, NY, USA, 26–30.
Stephen H. Edwards. 2003. Improving student performance by evaluating how well students test their own programs. J. Educ. Resour. Comput. 3, 3 (September 2003).
Petri Ihantola, Arto Vihavainen, Alireza Ahadi, Matthew Butler, Jürgen Börstler, Stephen H. Edwards, Essi Isohanni, Ari Korhonen, Andrew Petersen, Kelly Rivers, Miguel Ángel Rubio, Judy Sheard, Bronius Skupas, Jaime Spacco, Claudia Szabo, and Daniel Toll. 2015. Educational data mining and learning analytics in programming: Literature review and case studies. In Proceedings of the 2015 ITiCSE on Working Group Reports (ITICSE-WGR '15). Association for Computing Machinery, New York, NY, USA, 41–63.
Stephen H. Edwards. 2003. Rethinking computer science education from a test-first perspective. In Companion of the 18th Annual ACM SIGPLAN Conference on Object-oriented Programming, Systems, Languages, and Applications (OOPSLA '03). Association for Computing Machinery, New York, NY, USA, 148–155.
Stephen H. Edwards. 2001. A framework for practical, automated black-box testing of component-based software. Software Testing, Verification and Reliability 11, 2 (June 2001), 97–111.
View the full list of my publications.
Contact
Send me a message if you'd like to start a conversation.