Spring 2022 Software Engineering Qualifier Homepage

DRAFT: Subject to major changes! The timetable is a placeholder copied from a prior year and will change substantially. The reading list is also subject to change.

Committee

Timetable

Exam Policies

To register for the exam, contact Dr. Shaffer by the registration due date. You may withdraw from the exam without penalty (again, contact Dr. Shaffer) at any time before the take-home exam questions are posted. After that point, you are committed to being scored unless some extraordinary event intervenes.

Academic Integrity

The answers submitted by each student must reflect that student's individual effort. Therefore, once the writing prompt has been posted, students may not discuss it with one another. This examination is conducted under the University's Graduate Honor System Code. Students are encouraged to draw on papers beyond those listed in the exam to the extent that doing so strengthens their arguments. However, the answers submitted must represent the sole and complete work of the student submitting them. Material substantially derived from other works, whether published in print or found on the web, must be explicitly and fully cited. Your grade will be influenced more strongly by the arguments you make than by the arguments you quote or cite.

Exam Format

A reading list is provided below. You are advised to read the papers and take notes before the take-home exam questions are issued. On the date listed above, a set of written questions will be posted, along with page restrictions and formatting requirements for the answers. Students will have approximately two weeks to produce their answers, and the total length of the response will likely be about 6-8 pages. The questions will likely require students to synthesize a subset of the body of work represented by these papers into a coherent framework, and/or to make a proposal for how to move the research or the state of the practice forward.

After the written material has been given an initial evaluation by the committee, each student will be given a half-hour oral examination. This examination will focus on the student's written responses. Contact Dr. Shaffer to schedule the oral examination.

Reading List

  1. Kalle Aaltonen, Petri Ihantola, and Otto Seppälä. Mutation analysis vs. code coverage in automated assessment of students’ testing skills. In Proceedings of the ACM International Conference Companion on Object Oriented Programming Systems Languages and Applications Companion, OOPSLA ’10, pages 153–160, New York, NY, USA, 2010.

  2. Maurício Aniche, Felienne Hermans, and Arie van Deursen. Pragmatic software testing education. In Proceedings of the 50th ACM Technical Symposium on Computer Science Education, SIGCSE ’19, pages 414–420, 2019.

  3. Maurício Finavaro Aniche and Marco Aurélio Gerosa. Most common mistakes in test-driven development practice: Results from an online survey with developers. In 2010 Third International Conference on Software Testing, Verification, and Validation Workshops (ICSTW), pages 469–478.

  4. Gina R. Bai, Justin Smith, and Kathryn T. Stolee. How students unit test: Perceptions, practices, and pitfalls. In Proceedings of the 26th ACM Conference on Innovation and Technology in Computer Science Education V. 1 (ITiCSE 2021), June 26–July 1, 2021.

  5. E.T. Barr, M. Harman, P. McMinn, M. Shahbaz, and S. Yoo. The oracle problem in software testing: A survey. IEEE Transactions on Software Engineering, 41(5), May 2015.

  6. Andrew Begel and Beth Simon. Struggles of new college graduates in their first software development job. In Proceedings of the 39th SIGCSE Technical Symposium on Computer Science Education, SIGCSE ’08, pages 226–230.

  7. Moritz Beller, Georgios Gousios, Annibale Panichella, and Andy Zaidman. When, how, and why developers (do not) test in their ides. In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering, ESEC/FSE 2015, pages 179–190.

  8. Thirumalesh Bhat and Nachiappan Nagappan. Evaluating the efficacy of test-driven development: Industrial case studies. In Proceedings of the 2006 ACM/IEEE International Symposium on Empirical Software Engineering, ISESE ’06, pages 356–363, 2006.

  9. David Bowes, Tracy Hall, Jean Petrić, Thomas Shippey, and Burak Turhan. How good are my tests? In Proceedings of the 8th Workshop on Emerging Trends in Software Metrics, pages 9–14, 2017.

  10. Eric Brechner. Things they would not teach me of in college: What Microsoft developers learn later. In Companion of the 18th Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications, OOPSLA ’03, pages 134–136.

  11. Timothy A. Budd, Richard A. DeMillo, Richard J. Lipton, and Frederick G. Sayward. Theoretical and empirical studies on using program mutation to test the functional correctness of programs. In Proceedings of the 7th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL ’80, pages 220–233, 1980.

  12. Kevin Buffardi, Pedro Valdivia, and Destiny Rogers. Measuring unit test accuracy. In Proceedings of the 50th ACM Technical Symposium on Computer Science Education, SIGCSE ’19, pages 578–584, 2019.

  13. Henrik Bærbak Christensen. Systematic testing should not be a topic in the computer science curriculum! In Proceedings of the 8th Annual Conference on Innovation and Technology in Computer Science Education, ITiCSE ’03, pages 7–10, 2003.

  14. J. C. Carver and N. A. Kraft. Evaluating the testing ability of senior-level computer science students. In 2011 24th IEEE-CS Conference on Software Engineering Education and Training (CSEE&T), pages 169–178, May 2011.

  15. Benjamin S. Clegg, Jose Miguel Rojas, and Gordon Fraser. Teaching software testing concepts using a mutation testing game. In 2017 IEEE/ACM 39th International Conference on Software Engineering: Software Engineering Education and Training Track.

  16. Chetan Desai, David Janzen, and Kyle Savage. A survey of evidence for test-driven development in academia. SIGCSE Bull., 40(2):97–101, June 2008.

  17. Stephen H. Edwards and Zalia Shams. Comparing test quality measures for assessing student-written tests. In Companion Proceedings of the 36th International Conference on Software Engineering, ICSE Companion 2014, pages 354–363, 2014.

  18. S. Elbaum, G. Rothermel, and J. Penix. Techniques for improving regression testing in continuous integration development environments. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE 2014), November 2014, pages 235–245.

  19. P. G. Frankl and E. J. Weyuker. An applicable family of data flow testing criteria. IEEE Transactions on Software Engineering, 14(10):1483–1498, October 1988.

  20. J. Garcia, A. de Amescua, M. Velasco, and A. Sanz. Ten factors that impede improvement of verification and validation processes in software intensive organizations. Software Process: Improvement and Practice, 13(4):335–343, July/August 2008.

  21. Marko Ivanković, Goran Petrović, René Just, and Gordon Fraser. Code coverage at Google. In Proceedings of the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/FSE 2019, pages 955–963.

  22. Y. Jia and M. Harman. An analysis and survey of the development of mutation testing. IEEE Transactions on Software Engineering, 37(5):649–678, Sep. 2011.

  23. A.M. Kazerouni, J.C. Davis, A. Basak, C.A. Shaffer, F. Servant, and S.H. Edwards. Fast and accurate incremental feedback for students’ software tests using selective mutation analysis. Journal of Systems and Software, 175:110905, May 2021.

  24. K. Rustan M. Leino. Developing verified programs with Dafny. In 35th International Conference on Software Engineering (ICSE), IEEE, 2013.

  25. Raphael Pham, Stephan Kiesling, Leif Singer, and Kurt Schneider. Onboarding inexperienced developers: struggles and perceptions regarding automated testing. Software Quality Journal, 25(4):1239–1268, December 2017.

  26. Alex Radermacher and Gursimran Walia. Gaps between industry expectations and the abilities of graduates. In Proceedings of the 44th ACM Technical Symposium on Computer Science Education, SIGCSE ’13, pages 525–530, 2013.

  27. J.A. Whittaker. What is software testing? And why is it so hard? IEEE Software, 17(1), Jan/Feb 2000.

  28. L. Williams, G. Kudrjavets, and N. Nagappan. On the effectiveness of unit test automation at Microsoft. In 2009 20th International Symposium on Software Reliability Engineering.