What to Define When You’re Defining Gradual Type Systems

So you want to define a gradual type system, like all the cool kids? My SNAPL 2019 paper imagines three possible motivations:

  1. Expressiveness. You have nothing to lose but your static chains!
  2. Interoperation. Gradual typing seamlessly weaves the dynamic and static worlds into a single fabric.
  3. Typing itself. Static typing offers myriad benefits: enjoy them today!

You don’t have to pick just one. Or maybe you have a different motivation—I’d love to hear it. If you’re motivated by one of these goals but aren’t sure what to do, the paper offers a variety of challenge problems in Section 3.

Now, what do you have to do to define your gradual type system? You have to come up with a type system that has a question mark in it, of course. (You can also write any or dyn or Dynamic or *—whatever works for you.) But what else?

  • A surface language. Since Siek and Taha’s seminal 2006 paper, gradual types have commonly been expressed via elaboration: a source language (with nonexistent or optional or partial type annotations) is translated to a core language that makes all dynamism explicit. What is your source language? Even if you don’t define your source language formally, give an intuition about how programmers will experience it. Can programmers control what’s dynamic and what’s static? Do you ever reject source programs? Which? (GTLC rejects true 5—even in dead code—but different source languages could do different things.) Why is your line the right one to draw?
  • Concrete examples. Ideally drawing from real-world examples, what might be good about gradual types in your context? What new programs do you allow? What problems do you avoid? What guarantees do you gain? Make your example programs good! As Alan Perlis said, “A program without a loop and a structured variable isn’t worth writing”. Examples from the SNAPL paper include: the flatten function, JSON processing, or the “attach-to-the-request” idiom in middleware.
  • Operations. What can we do with base types? Having real operations around will force you to answer hard questions about your source and core languages. How does equality work, i.e., what can be compared and what are the answers? Does your dynamic language reject 5 + "hit"? What about 5 + ((λx.x) "hit")? If you truly have a dynamic type, what operations can you do on it? Which can fail? Is there a way to check at runtime whether casting to a static type will succeed before you commit to such reckless behavior? (For one concrete take on these questions, see the sketch just after this list.)
  • Control. Include conditionals or some other nontrivial notion of control flow. The first published rules for gradual typing that used a notion of ‘meet’ came in 2012! The way you treat join points in control says a lot about the ergonomics of your system. Church encodings do not cut the mustard.
  • Type semantics. Are your types worth the pixels they’re written on? What do they mean? If I have a value of a given type, what guarantees do I have? You don’t need to give a formal type semantics, but it’s important to know what to expect. If I write a function λx:T. e, what can I actually assume about x in e? If T is int, do I know x is an int, or could it blow up? What about ref int… can reading fail? Writing? What about list int? Does pattern matching on it cause conversions, or possible failure? What about…
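
To make the “operations” bullet concrete, here’s a minimal sketch of a dynamic fragment where addition checks its arguments at runtime. This is Haskell, with made-up names, and it is not the GTLC or any particular calculus: the point is just that 5 + "hit" can fail immediately, while 5 + ((λx.x) "hit") only fails after the application reduces.

-- A toy dynamic fragment (hypothetical; not the GTLC or any published calculus).
data Val
  = VInt Int
  | VStr String
  | VLam (Val -> Either String Val)

data Exp
  = Lit Val
  | Add Exp Exp
  | App Exp Exp

eval :: Exp -> Either String Val
eval (Lit v)     = Right v
eval (Add e1 e2) = do
  v1 <- eval e1
  v2 <- eval e2
  case (v1, v2) of
    (VInt n1, VInt n2) -> Right (VInt (n1 + n2))
    _                  -> Left "runtime type error: + expects ints"
eval (App e1 e2) = do
  v1 <- eval e1
  v2 <- eval e2
  case v1 of
    VLam f -> f v2
    _      -> Left "runtime type error: applied a non-function"

-- Both of these fail, but only after different amounts of evaluation:
--   eval (Add (Lit (VInt 5)) (Lit (VStr "hit")))
--   eval (Add (Lit (VInt 5)) (App (Lit (VLam Right)) (Lit (VStr "hit"))))

How you answer the bullet’s questions shows up directly in eval: a different design might coerce strings, reject the first program statically, or let programs test a value’s tag before committing to an operation.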

The SNAPL 2019 paper argues that there are two ‘lineages’ of gradual typing: one which starts from statically typed languages and relaxes or modifies the type system to include dynamic features, and one which starts from dynamic languages and tries to develop a static type system that can accommodate your ‘preexisting conditions’—legacy code. Whichever lineage you’re working in, each item above is worth carefully considering.

I want to conclude by calling out a paper that more people ought to know about; it does a good job on most of these points. It came out the same year as Alanis Morissette’s acclaimed international debut album, Jagged Little Pill.

(The official ACM version is less complete than the technical report—alas.) They are clear about their surface language (Scheme—with argument lists and call/cc, but not arbitrary set!). They have an entire section of concrete examples, with good demonstrations of how conditionals work with their coercion parameters. They even draw on examples from the literature, citing Mike Fagan’s thesis (which is a goldmine of examples). They don’t give a formal type semantics, but they do explain (with positive and negative examples) how type coercion parameters and polymorphism interact to achieve in their elaborated ML the ad hoc polymorphism necessary to implement their source Scheme.

I also want to highlight this paper because it’s one that I’ve never heard folks actually talk about, though it seems to be cited well enough. I urge anyone who is interested in gradual types to read it. Just like Alanis’s cri de cœur against the shallow world of pop, some things from 1995 are worth revisiting.

Ron Garcia gave helpful feedback on a draft of this post. Thanks, Ron!

Formulog: ML + Datalog + SMT

If you read a description of a static analysis in a paper, what might you find? There’ll be some cute model of a language. Maybe some inference rules describing the analysis itself, but those rules probably rely on a variety of helper functions. These days, the analysis likely involves some logical reasoning: about the terms in the language, the branches conditionals might take, and so on.

What makes a language good for implementing such an analysis? You’d want a variety of features:

  • Algebraic data types to model the language AST.
  • Logic programming for cleanly specifying inference rules.
  • Pure functional code for writing the helper functions.
  • An SMT solver for answering logical queries.

Aaron Bembenek, Steve Chong, and I have developed a design that hits the sweet spot of those four points: given Datalog as a core, you add constructors, pure ML, and a type-safe interface to SMT. If you set things up just right, the system is a powerful and ergonomic way to write static analyses.

Formulog is our prototype implementation of our design; our paper on Formulog and its design was just conditionally accepted to OOPSLA 2020. To give a sense of why I’m excited, let me excerpt from our simple liquid type checker. Weighing in under 400 very short lines, it’s a nice showcase of how expressive Formulog is. (Our paper discusses substantially more complex examples.)

type base =
  | base_bool

type typ = 
  | typ_tvar(tvar)
  | typ_fun(var, typ, typ)
  | typ_forall(tvar, typ)
  | typ_ref(var, base, exp)

and exp = 
  | exp_var(var)
  | exp_bool(bool)
  | exp_op(op)
  | exp_lam(var, typ, exp)
  | exp_tlam(tvar, exp)
  | exp_app(exp, exp)
  | exp_tapp(exp, typ)

ADTs let you define your AST in a straightforward way. Here, bool is our only base type, but we could add more. Let’s look at some of the inference rules:

(* subtyping *)
output sub(ctx, typ, typ)

(* bidirectional typing rules *)
output synth(ctx, exp, typ)
output check(ctx, exp, typ)

(* subtyping between refinement types is implication *)
sub(G, typ_ref(X, B, E1), typ_ref(Y, B, E2)) :-
  wf_ctx(G),
  exp_subst(Y, exp_var(X), E2) = E2prime,
  encode_ctx(G, PhiG),
  encode_exp(E1, Phi1),
  encode_exp(E2prime, Phi2),
  is_valid(`PhiG /\ Phi1 ==> Phi2`).

(* lambda and application synth rules *)
synth(G, exp_lam(X, T1, E), T) :-
  wf_typ(G, T1),
  synth(ctx_var(G, X, T1), E, T2),
  typ_fun(X, T1, T2) = T.

synth(G, exp_app(E1, E2), T) :-
  synth(G, E1, typ_fun(X, T1, T2)),
  check(G, E2, T1),
  typ_subst(X, E2, T2) = T.

(* the only checking rule *)
check(G, E, T) :-
  synth(G, E, Tprime),
  sub(G, Tprime, T).

First, we declare our relations—that is, the (typed) inference rules we’ll be using. We show the most interesting case of subtyping: refinement implication. Several helper relations (wf_ctx, encode_*) and helper functions (exp_subst) patch things together. The typing rules below follow a similar pattern, mixing the synth and check bidirectional typing relations with calls to helper functions like typ_subst.

fun exp_subst(X: var, E : exp, Etgt : exp) : exp =
  match Etgt with
  | exp_var(Y) => if X = Y then E else Etgt
  | exp_bool(_) => Etgt
  | exp_op(_) => Etgt
  | exp_lam(Y, Tlam, Elam) =>
    let Yfresh = 
      fresh_for(Y, X::append(typ_freevars(Tlam), exp_freevars(Elam)))
    in
    let Elamfresh = 
      if Y = Yfresh
      then Elam
      else exp_subst(Y, exp_var(Yfresh), Elam)
    in
    exp_lam(Yfresh,
            typ_subst(X, E, Tlam),
            Elamfresh)
  | exp_tlam(A, Etlam) =>
    exp_tlam(A, exp_subst(X, E, Etlam))
  | exp_app(E1, E2) => 
    exp_app(exp_subst(X, E, E1), exp_subst(X, E, E2))
  | exp_tapp(Etapp, T) => 
    exp_tapp(exp_subst(X, E, Etapp), typ_subst(X, E, T))
  end

Expression substitution might be boring, but it shows the ML fragment well enough. It’s more or less the usual ML, though functions need to have pure interfaces, and we have a few restrictions in place to keep typing simple in our prototype.

There’s lots of fun stuff that doesn’t make it into this example: not only can relations call functions, but functions can examine relations (so long as everything is stratified). Hiding inside fresh_for is a clever approach to name generation that guarantees freshness… but is also deterministic and won’t interfere with parallel execution. The draft paper has more substantial examples.
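
I won’t spoil the trick, but to give a flavor of what “deterministic freshness” can buy you, here’s a toy version in Haskell. It is emphatically not Formulog’s fresh_for; it just illustrates the idea that a fresh name can be computed purely from the candidate and the names to avoid, so there’s no shared counter to fight over during parallel execution.

-- A toy, deterministic freshness function (not Formulog's actual fresh_for):
-- the result depends only on the inputs, so no state or counter is needed.
freshFor :: String -> [String] -> String
freshFor x avoid = head (filter (`notElem` avoid) candidates)
  where
    candidates = x : [x ++ "'" ++ show n | n <- [(1 :: Int) ..]]

-- freshFor "x" ["x", "x'1"]  ==  "x'2"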

We’re not the first to combine logic programming and SMT. What makes our design a sweet spot is that it doesn’t let SMT get in the way of Datalog’s straightforward and powerful execution model. Datalog execution is readily parallelizable; the magic sets transformation can turn Datalog’s exhaustive, bottom-up search into a goal-directed one. It’s not news that Datalog can turn these tricks—Yiannis Smaragdakis has been saying it for years!—but integrating Datalog cleanly with ML functions and SMT is new. Check out the draft paper for a detailed related work comparison. While our design is, in the end, not so complicated, getting there was hard.

Relatedly, we also have an extended abstract at ICLP 2020, detailing some experiments in using incremental solving modes from Formulog. You might worry that Datalog’s BFS (or heuristic) strategy wouldn’t work with an SMT solver’s push/pop (i.e., DFS) assertion stack—but a few implementation tricks and check-sat-assuming indeed provide speedups.

Smart constructors are smarter than you think

CS 181-N Advanced Functional Programming is a project-based elective I’m teaching this semester. The first half of the course teaches the students basic Haskell programming, from purity to functors, applicatives, and monads (FAM) and on to IO, by way of parsing. The second half of the course has the students build a small project using pure FP (Haskell and Elm, this go round). The projects were kicking off just as we pivoted to video.

I’ve been streaming my own project as a way of showing students different software engineering practices: how to use CI, how to write good tests and generators, how to run benchmarks, how to file bug reports, and so on. It’s been fun.

My ‘project’ is Regularity, a library for working with regular languages and automata. We get a lot of A-plot Haskell mixing with B-plot theory of computation.

After implementing a few kinds of NFA (and seeing that they’re much more efficient than naive “find all matches” regex matching), I turned to Brzozowski derivatives. Most undergraduates aren’t exposed to the idea of derivatives outside of a calculus class, so I thought it’d be fun to show. Here’s the implementation, from Regex.hs:

data Regex =
    Empty
  | Epsilon
  | Char Char
  | Seq !Regex !Regex
  | Alt !Regex !Regex
  | Star !Regex
  deriving (Eq, Ord)

dMatches :: Regex -> Text -> Bool
dMatches re t = nullable (T.foldl (flip deriv) re t)

-- re `matches` c:s iff deriv c re `matches` s
deriv :: Char -> Regex -> Regex
deriv _c Empty         = empty
deriv _c Epsilon       = empty
deriv  c (Char c')     = if c == c' then epsilon else empty
deriv  c (Seq re1 re2) = 
  alt (seq (deriv c re1) re2) (if nullable re1 then deriv c re2 else empty)
deriv  c (Alt re1 re2) = alt (deriv c re1) (deriv c re2)
deriv  c (Star re)     = seq (deriv c re) (star re)

-- `nullable re` returns true iff re accepts the empty string
nullable :: Regex -> Bool
nullable Empty         = False
nullable Epsilon       = True
nullable (Char _)      = False
nullable (Seq re1 re2) = nullable re1 && nullable re2
nullable (Alt re1 re2) = nullable re1 || nullable re2
nullable (Star _re)    = True
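
For instance, a couple of quick checks (these exact calls aren’t from the post; T is Data.Text, imported qualified):

dMatches (Star (Alt (Char 'a') (Char 'b'))) (T.pack "abba")   -- True
dMatches (Seq (Char 'a') (Char 'b')) (T.pack "ba")            -- False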

It only took a few minutes to implement the derivatives, and performance was… fine. Thinking it would be nice to show the concept, I did smart constructors next:

empty :: Regex
empty = Empty

epsilon :: Regex
epsilon = Epsilon

char :: Char -> Regex
char = Char

seq :: Regex -> Regex -> Regex
seq Epsilon re2     = re2
seq re1     Epsilon = re1
seq Empty   _re2    = Empty
seq _re1    Empty   = Empty
seq re1     re2     = Seq re1 re2

alt :: Regex -> Regex -> Regex
alt Empty   re2     = re2
alt re1     Empty   = re1
alt Epsilon re2     = if nullable re2 then re2 else Alt Epsilon re2
alt re1     Epsilon = if nullable re1 then re1 else Alt re1 Epsilon
alt re1     re2     = if re1 == re2 then re1 else Alt re1 re2

star :: Regex -> Regex
star Empty     = Epsilon
star Epsilon   = Epsilon
star (Star re) = Star re
star re        = Star re

But then something surprised me: with smart constructors, the Brzozowski-based matching algorithm was substantially faster than NFAs, with or without ε transitions!

How much faster? Here are the times as reported by criterion.

Matching (a|a)*                              …on a^50    …on a^100
Naive, all-matches                           71.4 µs     255  µs
NFA with ε transitions                       31.6 µs     55.0 µs
NFA                                          4.88 µs     9.29 µs
Brzozowski derivative, dumb constructors     61.8 µs     262  µs
Brzozowski derivative, smart constructors    1.88 µs     3.73 µs

I knew smart constructors were an optimization, but I was blown away! If you didn’t know about smart constructors, you might shrug off Brzozowski derivatives as a curiosity for math dweebs. But with just a tiny bit of optimization from smart constructors, Brzozowski derivatives offer the best performance and the simplest implementation, at least on these meager tests.

It would be interesting to dig into why the dumb constructors are so slow. It doesn’t seem to be laziness, so is it just allocation churn? (We’re not even using hash-consing, which can provide another performance boost.) Is it large, redundant regexes? Something else?
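
One cheap way to probe the “large, redundant regexes” hypothesis is to measure how big the terms get under repeated derivatives with and without smart constructors. Here’s a quick sketch; size, derivDumb, and sizesAfter aren’t part of Regularity, they’re just for this experiment, with derivDumb restating deriv using the raw constructors.

-- Not part of Regularity: compare term sizes after repeated derivatives.
size :: Regex -> Int
size Empty       = 1
size Epsilon     = 1
size (Char _)    = 1
size (Seq r1 r2) = 1 + size r1 + size r2
size (Alt r1 r2) = 1 + size r1 + size r2
size (Star r)    = 1 + size r

-- deriv, but with the raw constructors instead of the smart ones.
derivDumb :: Char -> Regex -> Regex
derivDumb _c Empty         = Empty
derivDumb _c Epsilon       = Empty
derivDumb  c (Char c')     = if c == c' then Epsilon else Empty
derivDumb  c (Seq re1 re2) =
  Alt (Seq (derivDumb c re1) re2)
      (if nullable re1 then derivDumb c re2 else Empty)
derivDumb  c (Alt re1 re2) = Alt (derivDumb c re1) (derivDumb c re2)
derivDumb  c (Star re)     = Seq (derivDumb c re) (Star re)

-- Term sizes after 0, 1, ..., n derivatives of (a|a)* with respect to 'a'.
sizesAfter :: (Char -> Regex -> Regex) -> Int -> [Int]
sizesAfter d n = map size (take (n + 1) (iterate (d 'a') aa))
  where aa = Star (Alt (Char 'a') (Char 'a'))

Comparing sizesAfter derivDumb 20 against sizesAfter deriv 20 should show the raw-constructor terms growing while the smart-constructor terms stay small, which would at least be consistent with the large-redundant-regex (and hence allocation) story.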

Regular-expression derivatives re-examined by Owens, Reppy, and Turon is closely related and worth a read! Relatedly, Neel Krishnaswami has some lovely theoretical discussion, too.

Flapjax on PL Perspectives

Shriram Krishnamurthi, Arjun Guha, Leo Meyerovich, and I wrote a post about Flapjax on PL Perspectives, the SIGPLAN blog. (Thanks to Mike Hicks for helping us edit the post!)

Flapjax won the OOPSLA MIP award for 2009 (though the SIGPLAN website isn’t yet up to date). Our blog post is about the slightly unconventional way we worked: most of the Flapjax work happened in 2006 and 2007, but we didn’t even try to write the paper until several years later (Leo and I were in grad school). Rather than recapitulate those ideas, go read the post!

smoosh v0.1

I’ve been building an executable formalization of the POSIX shell semantics, which I’ve been calling smoosh (the Symbolic, Mechanized, Observable, Operational SHell).

I’m pleased to announce an important early milestone: smoosh passes the POSIX test suite (modulo locales, which smoosh doesn’t currently support). I’ve accordingly tagged this ‘morally correct’ version as smoosh v0.1. You can play around with the latest version via the shtepper, a web-based frontend to a symbolic stepper. I think the shtepper will be particularly useful for those trying to learn the shell or to debug shell scripts.

There’s still plenty to do, of course: there are bugs to fix, quirks to normalize, features to implement, and tests to run. But please: play around with the smoosh shell and the shtepper, file bug reports, and submit patches!

Collapsible Contracts: Space-Efficient Contracts in Racket

While on sabbatical in Cambridge, MA (thanks, Steve!), I had the good fortune to attend my first SPLASH.

I was particularly excited by one paper: Collapsible Contracts: Fixing a Pathology of Gradual Typing by Daniel Feltey, Ben Greenman, Christophe Scholliers, Robby Findler, and Vincent St-Amour. (You can get the PDF from the ACM DL or from Vincent’s website.)

Their collapsible contracts are an implementation of the theory in my papers on space-efficient contracts (Space-Efficient Manifest Contracts from POPL 2015 and Space-Efficient Latent Contracts from TFP 2016). They use my merge algorithm to ‘collapse’ contracts and reduce some pathologically bad overheads. I’m delighted that my theory works with only a few bits of engineering cleverness:

  • Racket’s contracts are first-class values, which means subtle implementation details can impede detecting duplicates. Racket’s contract-stronger? seems to do a good enough job—though it helps that many contracts in Racket are just checking simple types.
  • There’s an overhead to using the merge strategy in both space and time. You don’t want to pay the price on every contract, but only for those that would consume unbounded space. Their implementation waits until something has been wrapped ten times before using the space-efficient algorithms. (A toy sketch of this thresholding idea follows this list.)
  • Implication queries can be expensive; they memoize the results of merges.
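
To make the thresholding idea concrete, here’s a toy model in Haskell. It has nothing to do with Racket’s actual implementation (the Contract type, the nub-based “merge”, and all the names here are made up); it just shows the shape of the heuristic: keep a cheap stack of wrappers until a value has been wrapped enough times, then switch to a duplicate-free representation.

import Data.List (nub)

type Contract = String    -- stand-in for a real contract representation

data Wrapped a
  = Plain [Contract] a      -- cheap: a plain stack of pending checks
  | Collapsed [Contract] a  -- space-efficient: duplicates merged away

threshold :: Int
threshold = 10

wrap :: Contract -> Wrapped a -> Wrapped a
wrap c (Collapsed cs v) = Collapsed (nub (c : cs)) v
wrap c (Plain cs v)
  | length cs + 1 >= threshold = Collapsed (nub (c : cs)) v
  | otherwise                  = Plain (c : cs) v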

I am particularly pleased to see the theory/engineering–model/implementation cycle work on such a tight schedule. Very nice!

Disjointness of subset types

In hybrid type checking, a subtyping relationship between subset types {x:T|e} determines when it’s safe to omit a cast. The structural extension of subtyping to, e.g., function types, gives us a straightforward way to achieve verification by optimization: if we can prove that a cast is from a subtype to a supertype, there’s no need to pay the runtime cost of checking anything.

When should we reject a hybrid typed program? Flanagan’s seminal paper offers a straightforward plan.

Flanagan enacts this plan by rejecting casts that are provably not from a subtype to a supertype: if the SMT solver finds that a cast might fail, the program is rejected. In his model of an SMT solver, checking returns a checkmark (✓) when it proves subtyping, a question mark (?) when it times out, or an x-mark (×) when it finds a counterexample to subtyping.

The compilation and checking judgment (a/k/a cast insertion) will insert a cast on successful or indeterminate checking, but has no rules for (and therefore rejects) programs where there is a counterexample.

Unfortunately, such a policy is too restrictive. For example, an SMT solver can easily prove that {x:Int|true} doesn’t imply {x:Int|even x}… but such a program shouldn’t be rejected! We should only reject programs that must fail.

When must a cast fail? When the two types are disjoint, i.e., when they don’t share any values (or, more precisely, since our language with casts has errors, when the only normal form typeable at both types is an error). Cătălin Hriţcu got a good start on this in his thesis (see Section 2.4.6 and the end of Section 3.4.4), where he defines a notion of non-disjointness. He doesn’t define the notion for function types, but that’s actually very easy: so long as two function types are compatible (i.e., equal after erasure of refinements), they are non-disjoint. Why? Well, there’s at least one value that {x:T|e11}→{x:T|e12} and {x:T|e21}→{x:T|e22} have in common: λx. blame.
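
To make that concrete, here’s a sketch of what such a disjointness check could look like over a toy type grammar (Haskell; the names and the grammar are mine, the SMT query is stubbed out as an assumed oracle, and this is not Hriţcu’s definition):

-- A sketch of a disjointness check over a toy refinement-type grammar.
data Base = TInt | TBool deriving Eq

newtype Pred = Pred String deriving Eq   -- opaque refinement, e.g. "even x"

data Type
  = Ref Base Pred   -- {x:B | e}
  | Fun Type Type   -- e.g. {x:T|e1} -> {x:T|e2}

-- Assumed oracle: is the conjunction of the two refinements unsatisfiable
-- over base type B?  (This is where the SMT solver would come in.)
unsatisfiable :: Base -> Pred -> Pred -> Bool
unsatisfiable = error "ask the SMT solver"

disjoint :: Type -> Type -> Bool
disjoint (Ref b1 p1) (Ref b2 p2)
  | b1 == b2  = unsatisfiable b1 p1 p2
  | otherwise = True                  -- different base types: no shared values
disjoint (Fun _ _) (Fun _ _) = False  -- λx. blame inhabits both
disjoint _ _ = True                   -- a base refinement vs. a function type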

I can imagine a slightly stronger version: if the domain type is a refinement of a base type, then we could exclude functions that could never even be called, i.e., those with disjoint domains. At higher orders, though, there’s no way to tell (maybe the function in question never calls its argument), so I think the more liberal interpretation is the right starting point.

NB I briefly discuss when to reject hybrid-typed programs in my dissertation (Section 6.1.4).

A refinement type by any other name

Frank Pfenning originated the idea of refinement types in his seminal PLDI 1991 paper with Tim Freeman. Freeman and Pfenning’s refinement types allow programmers to work with refined datatypes, that is, sub-datatypes induced by refining the set of available constructors. For example, here’s what that looks like for lists, with a single refinement type, α singleton:

datatype α list = nil | cons of α * α list
rectype α singleton = cons α nil

That is, a programmer defines a datatype α list, but can identify refined types like α singleton—lists with just one element. We can imagine a lattice of type refinements where α list is at the top, but below it is the refinement of lists of length 0 or 1—written α singleton ∨ α nil. This type is itself refined by its constituent refinements, which are refined by the empty type. Here’s such a lattice, courtesy of a remarkably nice 1991-era TeX drawing:
[Figure: Refinements of α list]
Another way of phrasing all of this is that refinement types identify subsets of types. Back in 1983, Bengt Nordström and Kent Petersson introduced—as far as I know—the idea of subset types in a paper called Types and Specifications at the IFIP Congress. Unfortunately, I couldn’t find a copy of the paper, so it’s not clear where the set-builder-esque notation {x∈A|B(x)} first came from, but it shows up in Bengt Nordström, Kent Petersson, and Jan M. Smith’s Programming in Martin-Löf’s Type Theory in 1990. Any earlier references would be appreciated. Update (2015-03-18): Colin Gordon pointed out that Robert Constable’s Mathematics as programming from 1984 uses the subset type notation, as does the NUPRL tech report from 1983. The NUPRL TR came out in January ’83 while IFIP ’83 happened in September. Nate Foster, who works with Bob Constable, suspects that Constable has priority. Alright: subset types go to Robert Constable in January 1983 with the Nearly Ultimate Pearl. Going once…

My question is: when did we start calling {x∈A|B(x)} and other similar subset types a “refinement type”? Any advice or pointers would be appreciated—I’ll update the post.

Susumu Hayashi in Logic of refinement types describes “ATTT”, which, according to the abstract, “has refinement types which are intended to be subsets of ordinary types or specifications of programs”, where he builds up these refinements out of some set theoretic operators on singletons. By rights, this paper is probably the first to use “refinement type” to mean “subset type”… though I have some trouble pinpointing where the paper lives up to that claim in the abstract.

Ewen Denney was using refinement types to mean types and specifications augmented with logical propositions. This terminology shows up in his 1998 PhD thesis and his 1996 IFIP paper, Refinement Types for Specification.

In 1998, Hongwei Xi and Frank Pfenning opened the door to flexible interpretations of “refinements” in Eliminating Array Bound Checking Through Dependent Types. In Section 2.4, they use ‘refinement’ in a rather different sense:

Besides the built-in type families int, bool, and array, any user-defined data type may be refined by explicit declarations. …

typeref 'a list of nat
with nil <| 'a list(0)
| :: <| {n:nat} 'a * 'a list(n) -> 'a list(n+1)

Later on, in Section 3.1, they have a similar use of the term:

In the standard basis we have refined the types of many common functions on integers such as addition, subtraction, multiplication, division, and the modulo operation. For instance,

+ <| {m:int} {n:int} int(m) * int(n) -> int(m+n)

is declared in the system. The code in Figure 3 is an implementation of binary search through an array. As before, we assume:

sub <| {n:nat} {i:nat | i < n} 'a array(n) * int(i) -> 'a

So indices allow users to refine types, though they aren’t quite refinement types. In 1999, Xi and Pfenning make a strong distinction in Dependent Types in Practical Programming; from Section 9:

…while refinement types incorporate intersection and can thus ascribe multiple types to terms in a uniform way, dependent types can express properties such as “these two argument lists have the same length” which are not recognizable by tree automata (the basis for type refinements).

Now, throughout the paper they do things like “refine the datatype with type index objects” and “refine the built-in types: (a) for every integer n, int(n) is a singleton type which contains only n, and (b) for every natural number n, 'a array(n) is the type of arrays of size n”. So here there’s a distinction between “refinement types”—the Freeman and Pfenning discipline—and a “refined type”, which is a subset of a type indicated by some kind of predicate and curly braces.

Joshua Dunfield published a tech report in 2002, Combining Two Forms of Type Refinements, where he makes an impeccably clear distinction:

… the datasort refinements (often called refinement types) of Freeman, Davies, and Pfenning, and the index refinements of Xi and Pfenning. Both systems refine the simple types of Hindley-Milner type systems.

In his 2004 paper with Frank, Tridirectional Typechecking, he maintains the distinction between refinements, but uses a term I quite like—“property types”, i.e., types that guarantee certain properties.

Yitzhak Mandelbaum, my current supervisor David Walker, and Bob Harper wrote An Effective Theory of Type Refinements in 2003, but they didn’t quite have subset types. Their discussion of related work makes it seem that they interpret refinement types as just about any device that allows programmers to use the existing types of a language more precisely:

Our initial inspiration for this project was derived from work on refinement types by Davies and Pfenning and Denney and the practical dependent types proposed by Xi and Pfenning. Each of these authors proposed sophisticated type systems that are able to specify many program properties well beyond the range of conventional type systems such as those for Java or ML.

In the fairly related and woefully undercited 2004 paper, Dynamic Typing with Dependent Types, Xinming Ou, Gang Tan, Yitzhak Mandelbaum, and David Walker used the term “set type” to define {x∈A|B(x)}.

Cormac Flanagan’s Hybrid Type Checking in 2006 is probably the final blow for any distinction between datasort refinements and index refinements: right there on page 3, giving the syntax for types, he writes “{x:B|t} refinement type”. He says on the same page, at the beginning of Section 2, “Our refinement types are inspired by prior work on decidable refinement type systems”, citing quite a bit of the literature: Mandelbaum, Walker, and Harper; Freeman and Pfenning; Davies and Pfenning ICFP 2000; Xi and Pfenning 1999; Xi LICS 2000; and Ou, Tan, Mandelbaum, and Walker. After Cormac, everyone just seems to call them refinement types: Ranjit Jhala’s Liquid Types, Robby Findler and Phil Wadler in Well-typed programs can’t be blamed, my own work, Andy Gordon in Semantic Subtyping with an SMT Solver. This isn’t a bad thing, but perhaps we can be more careful with names. Now that we’re all in the habit of calling them refinements, I quite like “indexed refinements” as a distinction. Alternatively, “subset types” is a very clear term with solid grounding in the literature.

Finally: I didn’t cite it in this discussion, but Rowan Davies’s thesis, Practical Refinement-Type Checking, was extremely helpful in looking through the literature.

Edited to add: thanks to Ben Greenman for some fixes to broken links and to Lindsey Kuper and Ron Garcia for helping me clarify what refines what.

2020-04-27 update: Shriram Krishnamurthi suggests that Robert (Corky) Cartwright had a notion of “refinement type” in “User-Defined Data Types as an Aid to Verifying LISP Programs” from ICALP 1976 and with John McCarthy in First order programming logic in POPL 1979. I haven’t been able to get a PDF copy of the ICALP paper (please email me if you can find it!). The POPL paper is clearly related:

The key idea underlying our formal systems is that recursive definitions of partial functions can be interpreted as equations extending a first order theory of the program domain.

Their model is typed, and the paper is about how Corky and John independently discovered ways of addressing recursion/fixed points. They translate programs to logic, treating checks in negative positions as ⟂ like Blume and McAllester’s “A sound (and complete) model of contracts”, but they don’t seem to think of themselves as actually refining types per se. This paper is an interesting early use of an SMT-like logic to prove properties of programs… though they do the proofs by hand!

Cartwright’s dissertation, A Practical Formal Semantic Definition and Verification System for Typed Lisp (which I’ve hosted here, since I could only find it on a very slow server elsewhere) makes it clear that the work is indeed very closely related. Here’s a long quote from the end of his introduction:

The auxiliary function ATOMLIST [a program predicate] serves as a clumsy mechanism for specifying the implicit data type atom-list [which he defined by hand]. If we included atom-list as a distinct, explicit data type in our programming language and expanded our first-order theory to include atom-lists as well as S-expressions, the informal proof using induction on atom-lists [given earlier] could be formalized directly in our first order system. However, since LISP programs typically involve a wide variety of abstract data types, simply adding a few extra data types such as atom-list to LISP will not eliminate the confusion caused by dealing with abstract data type representations rather than the abstract types themselves. In fact, the more complex that an abstract type is, the more confusing that proofs involving its representations are likely to be. Consequently, I decided that the best solution to this problem is to include a comprehensive data type definition facility in LISP and to formally define the semantics of a program P by creating a first-order theory for the particular data types defined in P. The resulting language TYPED LISP is described in the next chapter.

PLVNET

I’m really happy to be part of the first PLVNET, a workshop on the intersection of PL, verification, and networking. I have two abstracts up for discussion.

The first abstract, Temporal NetKAT, is about adding reasoning about packet histories to a network policy language like NetKAT. The work on this is moving along quite nicely (thanks in large part to Ryan Beckett!), and I’m looking forward to the conversations it will spark.

The second abstract, Type systems for SDN controllers, is about using type systems to statically guarantee the absence of errors in controller programs. Fancy new switches have tons of features, which can be tricky to operate—can we make sure that a controller doesn’t make any mistakes when it talks to a switch? Some things are easy, like making sure that the match/action rules are sent to tables that can handle them; some things are harder, like making sure the controller doesn’t fill up a switch’s tables. I think this kind of work is a nice complement to the NetKAT “whole policy” approach, a sort of OpenFlow 1.3+ version of VeriCon with slightly different goals.

Should be fun!

Space-Efficient Manifest Contracts at POPL 15

I am delighted to announce that Space-Efficient Manifest Contracts will appear at POPL 2015 in Mumbai. Here’s the abstract:

The standard algorithm for higher-order contract checking can lead to unbounded space consumption and can destroy tail recursion, altering a program’s asymptotic space complexity. While space efficiency for gradual types—contracts mediating untyped and typed code—is well studied, sound space efficiency for manifest contracts—contracts that check stronger properties than simple types, e.g., “is a natural” instead of “is an integer”—remains an open problem.

We show how to achieve sound space efficiency for manifest contracts with strong predicate contracts. The essential trick is breaking the contract checking down into coercions: structured, blame-annotated lists of checks. By carefully preventing duplicate coercions from appearing, we can restore space efficiency while keeping the same observable behavior.

The conference version is a slightly cut down version of my submission, focusing on the main result: eidetic λH is a space-efficient manifest contract calculus with the same operational behavior as classic λH. More discussion and intermediate results—all in a unified framework for space efficiency—can be found in the technical report on the arXiv.