
ExCAPE: Expeditions in Computer Augmented Program Engineering


Personalized Education

Theme Leaders: Bjoern Hartmann (Berkeley) and Sanjit Seshia (Berkeley)

With the increasing importance of online education, technology for formal verification and synthesis can contribute to online learning by analyzing students’ solutions to provide meaningful, personalized explanations of their mistakes, and by grading automatically. We have built such tools for representative topics in three undergraduate courses: introduction to programming, theory of computation, and design of cyber-physical systems. The design of these tools not only uses computational engines built for synthesis by ExCAPE researchers, but also integrates HCI research on user interaction and tool evaluation.

AutomataTutor

PIs: Alur (Penn) and Hartmann (Berkeley)

Personalized Education has emerged as a new challenge area for ExCAPE. Key problems where synthesis can contribute include automatic grading and automatic feedback, where the goal is to give students a meaningful explanation of their mistakes. We focus on providing feedback for constructing a deterministic finite automaton (DFA) that accepts strings matching a described pattern [ADGKV13]. We have developed novel ways of automatically computing alternative hints: our tool, AutomataTutor, can suggest edits to the student’s DFA, highlight phrases from the problem that the student may have misunderstood, or produce a description of the set of all incorrectly handled strings.
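To illustrate the counterexample style of feedback, the sketch below compares a student’s DFA against a reference DFA using a product construction and reports a shortest string on which the two disagree. This is a minimal Python illustration, not AutomataTutor’s implementation; the DFA encoding and function names are our own assumptions.

```python
# Minimal sketch of counterexample feedback for DFA exercises.
# Illustration only, not AutomataTutor's code; the encoding is hypothetical.
from collections import deque

class DFA:
    def __init__(self, states, alphabet, delta, start, accepting):
        self.states = states          # iterable of state names
        self.alphabet = alphabet      # iterable of input symbols
        self.delta = delta            # dict: (state, symbol) -> state
        self.start = start
        self.accepting = set(accepting)

def shortest_counterexample(student, reference):
    """BFS over the product automaton; return a shortest string that one
    DFA accepts and the other rejects, or None if they are equivalent."""
    start = (student.start, reference.start)
    seen = {start}
    queue = deque([(start, "")])
    while queue:
        (s, r), word = queue.popleft()
        if (s in student.accepting) != (r in reference.accepting):
            return word
        for ch in reference.alphabet:
            nxt = (student.delta[(s, ch)], reference.delta[(r, ch)])
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, word + ch))
    return None

# Example: target language = strings over {0,1} containing "11";
# the student's DFA (incorrectly) accepts strings ending in "1".
reference = DFA(
    {"q0", "q1", "q2"}, "01",
    {("q0", "0"): "q0", ("q0", "1"): "q1",
     ("q1", "0"): "q0", ("q1", "1"): "q2",
     ("q2", "0"): "q2", ("q2", "1"): "q2"},
    "q0", {"q2"})
student = DFA(
    {"p0", "p1"}, "01",
    {("p0", "0"): "p0", ("p0", "1"): "p1",
     ("p1", "0"): "p0", ("p1", "1"): "p1"},
    "p0", {"p1"})
print(shortest_counterexample(student, reference))  # prints "1"
```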

We also contribute a rigorous evaluation of feedback with 377 students in two university courses. We find, first, that providing either counterexamples or hints is judged as helpful, increases student perseverance, and can improve problem completion time. Second, both strategies have particular strengths and weaknesses, and the preferred choice of feedback depends on the type of mistake and the student. AutomataTutor was used in undergraduate courses on Theory of Computation at the following universities: University of Pennsylvania, Fall 2013, 160 students; University of Illinois at Urbana-Champaign, Fall 2013, 180 students; Reykjavik University, Iceland, Spring 2014, 300 students. The students’ response to the tool has been enthusiastic [DKAGVH14].

AutomataTutor was highlighted by the Computing Research Association's Computing Community Consortium (CRA/CCC). See the short blurb here.

Auto-Grading Virtual Laboratories in Cyber-Physical Systems

PIs: Seshia (Berkeley) and Hartmann (Berkeley)

We have designed an automatic grader, CPSGrader, for a laboratory in the area of cyber-physical systems [DJJS14]. The core technical problem is parameter synthesis for temporal testers, where the testers are represented in parameterized signal temporal logic. We have demonstrated the effectiveness of our auto-grader on a large data set obtained from an actual on-campus lab course, also offered via edX. This work lays a foundation for lab-based online courses in a range of engineering disciplines, especially robotics. [See the article in New Scientist, which refers to CPSGrader and a couple of other projects as Robotutors.]
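To give a flavor of the underlying parameter-synthesis problem, the Python sketch below finds the tightest value of a time parameter in a simple parametric STL property over a sampled trace, exploiting the fact that satisfaction is monotone in that parameter. The property, the signal, and the function names are illustrative assumptions, not CPSGrader’s actual testers.

```python
# Sketch of parameter synthesis for a parametric STL (PSTL) property,
# in the spirit of the auto-grader described above (names and the
# property are illustrative; this is not CPSGrader's actual code).
import numpy as np

def settling_time(t, x, ref, eps):
    """Find the smallest tau such that the PSTL formula
         G_[tau, T] ( |x(t) - ref| <= eps )
    holds on the sampled trace (t, x).  Satisfaction is monotone in tau,
    so the tightest tau is the time just after the last violation."""
    err = np.abs(x - ref)
    violations = np.where(err > eps)[0]
    if len(violations) == 0:
        return t[0]                 # property holds from the start
    last = violations[-1]
    if last == len(t) - 1:
        return None                 # never settles within the trace
    return t[last + 1]

# Example: a decaying oscillation around ref = 1.0.
t = np.linspace(0.0, 10.0, 1001)
x = 1.0 + np.exp(-0.5 * t) * np.cos(5.0 * t)
tau = settling_time(t, x, ref=1.0, eps=0.05)
print(f"settles to within 0.05 after t = {tau:.2f} s")
# A grader can then compare tau against an instructor-specified bound.
```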

AutoProf (Automated Program Feedback)

PIs: Solar-Lezama (MIT) and Hartmann (Berkeley)

We are developing a tool, called AutoProf, for automatically providing feedback on introductory programming problems [SGS13]. This work is part of the ExCAPE vision of transforming online education. Providing quality feedback to students is already a major problem in classroom courses, and with the advent of MOOCs the problem has grown dramatically in scale. To use AutoProf, the instructor only needs to provide a reference implementation of the assignment and an error model consisting of potential corrections to errors that students typically make on the given assignment. Using this information, AutoProf derives minimal corrections to students’ incorrect solutions, giving them a measure of exactly how incorrect their solution was, as well as feedback about what they did wrong. We introduce a simple language for describing error models in terms of correction rules, and formally define a rule-directed translation strategy that reduces the problem of finding minimal corrections in an incorrect program to the problem of synthesizing a correct program from a sketch. [See the MIT News and MIT CSAIL Spotlight articles on AutoProf (previously referred to as Autograder).]
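To make the notion of an error model concrete, the sketch below enumerates combinations of simple correction rules over a buggy submission and returns a minimal set of corrections under which the program agrees with the reference implementation on a test suite. This is a brute-force Python illustration with assumed rules and names; AutoProf instead encodes correction rules symbolically and solves the resulting problem with a sketch-based program synthesizer.

```python
# Illustrative sketch of an error-model-driven search for a minimal
# correction.  The rules, names, and brute-force search are hypothetical
# simplifications; AutoProf uses a symbolic encoding and the Sketch tool.
from itertools import combinations

def reference(xs):
    """Instructor's reference solution: product of a list of numbers."""
    result = 1
    for x in xs:
        result *= x
    return result

# A buggy student submission, as source text.
student_src = """
def student(xs):
    result = 0
    for x in xs:
        result *= x
    return result
"""

# Error model: textual correction rules for mistakes students often make.
RULES = [
    ("result = 0", "result = 1"),     # wrong initial value
    ("result *= x", "result += x"),   # wrong accumulator operator
]

TESTS = [[0.5, 2, 3], [4], [], [1, 2, 3, 4]]

def behaves_like_reference(src):
    env = {}
    exec(src, env)                    # defines `student` in env
    return all(env["student"](t) == reference(t) for t in TESTS)

def minimal_correction(src, rules):
    """Try correction sets of increasing size; return the first that works."""
    for k in range(len(rules) + 1):
        for subset in combinations(rules, k):
            patched = src
            for old, new in subset:
                patched = patched.replace(old, new)
            if behaves_like_reference(patched):
                return subset
    return None

print(minimal_correction(student_src, RULES))
# -> (('result = 0', 'result = 1'),) : one correction suffices
```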

We have evaluated our system on thousands of real student attempts obtained from the Introduction to Programming course at both MIT and MITx. Our results show that relatively simple error models can provide feedback on 64% of all incorrect submissions in our benchmark set. The AutoProf system is currently being integrated into the MITx and edX platforms to provide feedback on programming assignments for the MITx class 6.00.1x (Introduction to Computer Science and Programming using Python).

Teaching Induction with Functional Programming and a Proof Assistant

PIs: Zdancewic (Penn) and Hartmann (Berkeley)

This work explores the potential for using functional programming and interactive proof assistants for teaching the concepts of mathematical induction. The resulting proof-of-concept web-based tutorial has students write small functional programs and prove correctness properties using inductive reasoning [OZ13]. In the long run, we anticipate being able to synthesize programming and proving exercises, as well as provide interactive feedback to help students learn the concepts.
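For concreteness, an exercise in this style has students define a small recursive function and then prove a property of it by induction; the Lean sketch below illustrates the flavor (the tutorial itself is web-based and may use a different proof assistant).

```lean
-- A hypothetical exercise in the style described above, written in Lean 4;
-- the actual tutorial may use a different proof assistant.

-- Students first write a small recursive function ...
def sumList : List Nat → Nat
  | []      => 0
  | x :: xs => x + sumList xs

-- ... and then prove a correctness property about it by induction on xs.
theorem sumList_append (xs ys : List Nat) :
    sumList (xs ++ ys) = sumList xs + sumList ys := by
  induction xs with
  | nil => simp [sumList]
  | cons x xs ih => simp [sumList, ih, Nat.add_assoc]
```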

Problem Generation for Discrete Mathematics and Algorithms

PI: Parthasarathy (UIUC)

The PI, along with student Shambwaditya Saha and in collaboration with Sumit Gulwani (Principal Researcher, Microsoft Research), has been working on automated problem and feedback synthesis for learning concepts related to discrete structures and algorithms. The idea is to equip teachers with a synthesis language in which they can express parameterized problems that probe pedagogical understanding of concepts, capturing the design of the problem logically. An automated tool can then generate a large number of variants of the problem. Furthermore, since the problem design itself is captured, the teacher can also auto-generate meaningful feedback for wrong answers and control the hardness of the problems. Auto-generated questions can be used both in traditional classrooms and in online settings; they help prevent plagiarism and enable weaker students to take multiple attempts at similar problems to master the material. As computing education widens and reaches a larger number of people, including high-school students, we believe such tools can improve the quality of education provided. The tool was recently used to auto-generate questions for the most recent MEC algorithms exam in India, where multiple variants were used to prevent plagiarism in the online exam.
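As a simple illustration of the approach, the Python sketch below captures a parameterized modular-arithmetic problem: the teacher writes the template once, and the tool instantiates many variants, each with its correct answer and targeted feedback for anticipated wrong answers. The template format and feedback rules here are hypothetical, not the tool’s actual synthesis language.

```python
# Illustrative sketch of a parameterized problem template in the spirit of
# the problem-generation work described above.  The template format and the
# feedback rules are hypothetical, not the tool's actual synthesis language.
import random

def make_modexp_problem(rng):
    """Generate one variant of: compute a^b mod m, with m prime."""
    a = rng.randint(2, 6)
    b = rng.randint(10, 40)
    m = rng.choice([7, 11, 13, 17, 19])
    answer = pow(a, b, m)
    # Feedback is keyed on anticipated mistakes, so a wrong answer can be
    # explained rather than merely marked incorrect.
    feedback = {
        pow(a, b % m, m):
            "Did you reduce the exponent mod m?  By Fermat's little theorem "
            "the exponent should be reduced mod m - 1 when m is prime.",
        (a * b) % m:
            "Did you multiply a and b instead of computing the power a^b?",
        pow(a, b) % (m - 1):
            "Did you take the result mod m - 1 instead of mod m?",
    }
    return f"Compute {a}^{b} mod {m}.", answer, feedback

def grade(answer, feedback, response):
    """Return targeted feedback for a student's response."""
    if response == answer:
        return "Correct!"
    return feedback.get(response, "Incorrect; try computing step by step, "
                                  "reducing mod m after each multiplication.")

rng = random.Random(2024)              # the seed controls the variant set
for _ in range(3):                     # three plagiarism-resistant variants
    text, answer, feedback = make_modexp_problem(rng)
    print(text, "->", answer)
```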

Papers:
Tools and Projects:
  • AutomataTutor: The Automata Tutor
  • AutoProf: Automated Program Feedback
  • CPSGrader: Automatic grading and feedback generation tool (auto-grader) for laboratory courses
  • InductFun! Learn the power of induction by way of functional programming and a proof assistant.
  • Problem Generator: Problem Generation for Discrete Mathematics and Algorithms

ExCAPE: Expeditions in Computer Augmented Program Engineering. National Science Foundation (NSF) Award CCF-1138996.