When building reliable applications, current techniques for program verification require the programmer to annotate the program with inductive invariants to aid verification. Providing these annotations is one of the biggest challenges in making such techniques useful for programmers. This project therefore focuses on aiding the programmer by synthesizing these annotations automatically. Intuitively, this kind of synthesis is a particular kind of predicate synthesis (as opposed to function synthesis) and is covered by the general SyGuS format proposed in ExCAPE (see Syntax-Guided Synthesis in Section 5). The group has been developing predicate synthesis algorithms based on learning. Intuitively, the actual constraints are hidden from the learner, which synthesizes predicates based only on concrete examples together with their classification as true/false. Machine learning techniques, which are often very scalable, can then be used to build classifiers, which in turn can be interpreted as logical predicates. For invariant synthesis, it turns out that we need to learn from samples whose classification is not known a priori, but where certain pairs of samples are constrained. We have defined a general learning formalism called ICE (learning using examples, counter-examples, and implications) [GLMN13]. We are currently generalizing the competitive solvers we have developed (some based on SMT solvers, some based on machine learning classifiers) to work with predicate synthesis problems formulated in SyGuS, in order to build general backend solvers for SyGuS.
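To make the ICE setting concrete, the following is a minimal illustrative sketch (not the algorithm of [GLMN13]): the learner sees only examples (states that must satisfy the invariant), counter-examples (states that must not), and implication pairs (if the first state is in the invariant, the second must be too), and searches a small hypothetical hypothesis space of threshold predicates of the form x <= c.

```python
def consistent(c, examples, counterexamples, implications):
    """Check the candidate invariant inv(x) = (x <= c) against ICE samples."""
    inv = lambda x: x <= c
    if not all(inv(x) for x in examples):           # examples must be inside
        return False
    if any(inv(x) for x in counterexamples):        # counter-examples outside
        return False
    # implication pair (x, y): if x satisfies inv, then y must as well
    return all((not inv(x)) or inv(y) for (x, y) in implications)

def ice_learn(examples, counterexamples, implications, bound=100):
    """Enumerate thresholds until one is consistent with all ICE samples.

    The program itself is hidden from the learner; only the samples
    constrain the search.
    """
    for c in range(-bound, bound + 1):
        if consistent(c, examples, counterexamples, implications):
            return c
    return None  # no consistent invariant in this hypothesis space

# Samples as a hypothetical teacher (a verifier) might produce them:
print(ice_learn(examples=[0], counterexamples=[6], implications=[(4, 5)]))  # → 0
```

The implication samples are what distinguish ICE from plain binary classification: the teacher need not decide up front whether a state is reachable, only that the invariant must be closed under the program's transitions.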

The aim of this project is a marriage of two techniques, both developed by ExCAPE researchers: the natural proofs technique from program verification, and the sketching technique from program synthesis. Natural proofs encode a set of powerful proof tactics as automatically generated ghost code; this encoding allows us to use Sketch to synthesize provably correct code snippets from scratch. We currently focus on data-structure manipulations with specifications written in the Dryad logic. Technically, we are encoding a symbolic interpreter for Dryad into Sketch, so that Sketch can synthesize code snippets that meet given Dryad specifications. Ongoing case studies include basic list/tree manipulations.
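The sketching technique rests on a counterexample-guided inductive synthesis (CEGIS) loop, which the following toy sketch illustrates under heavy simplification: the real Sketch tool performs SAT-based search over program holes, and Dryad specifications are far richer than this hypothetical one-hole, bounded-integer example.

```python
DOMAIN = range(-8, 8)  # bounded verification domain, as in Sketch

def spec(x, out):
    """Hypothetical specification: the synthesized function returns a value > x."""
    return out > x

def program(hole, x):
    """The sketch: a program template with one integer hole."""
    return x + hole

def synthesize(counterexamples, holes=range(-8, 8)):
    """Propose a hole value consistent with all counterexamples seen so far."""
    for h in holes:
        if all(spec(x, program(h, x)) for x in counterexamples):
            return h
    return None

def verify(hole):
    """Return a counterexample input, or None if the spec holds on DOMAIN."""
    for x in DOMAIN:
        if not spec(x, program(hole, x)):
            return x
    return None

def cegis():
    counterexamples = []
    while True:
        h = synthesize(counterexamples)
        if h is None:
            return None              # sketch unrealizable in this hole space
        x = verify(h)
        if x is None:
            return h                 # hole completion satisfies the spec
        counterexamples.append(x)    # learn from the counterexample

print(cegis())  # → 1
```

In our setting, the verifier's role is played by the encoded symbolic interpreter for Dryad, so counterexamples come from failed Dryad proofs rather than from exhaustive enumeration.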

- P. Garg, C. Löding, P. Madhusudan, and D. Neider. ICE: A robust framework for learning invariants. In CAV, pages 69–87. Springer, 2014.
- P. Garg, C. Löding, P. Madhusudan, and D. Neider. Learning universally quantified invariants of linear data structures. In CAV, volume 8044 of Lecture Notes in Computer Science, pages 813–829. Springer, 2013.