Racist Bullshit in Mathematics

Robin Gandy’s “On the Axiom of Extensionality–Part 1”, Journal of Symbolic Logic, Vol. 21, No. 1 (Mar., 1956) quotes Alan Turing using a racist phrase.

A screengrab of the bottom matter of the first page of Gandy’s paper.

Received July 24, 1955.
1 Indeed A. M. Turing once told me that he had done this, and that the proof was fairly difficult. I have found among his manuscripts two versions of the proof: one is rather short and contains a fallacy which could not, I think, easily be put right; the other (perhaps a second draft) is unfinished and only a beginning. He may therefore have discovered and surmounted the fallacy. On the other hand, he always spoke of the axiom of extensionality as being 'the nigger in the woodpile', which suggests that he did not think his consistency proof was transcendental enough to accord with Gödel's theorem; but, by the results of this paper, it would have to be just that.
[Turing] always spoke of the axiom of extensionality as being ‘the nigger in the woodpile’, which suggests that he did not think his consistency proof was transcendental enough to accord with Gödel’s theorem.

Yikes. Those unfamiliar with this particular racist phrase will be disappointed to learn that it’s still current enough in the UK to be used “totally unintentional[ly]”… whatever that means.

Gandy’s paper isn’t the first time I’ve been pulled out of my mathematical/logical/philosophical reverie by racist bullshit. When I was reading Ronald Clark’s The Life of Bertrand Russell, I posted a thread on Twitter of Russell’s many racist utterances: a selection of three racist Bertrand Russell quotes, along with two anti-racist quotes that repudiate his earlier statements and offer some modest redemption.

I found these episodes of casual, by-the-way racism jarring: they pulled me out of my investment in the material and undercut my ability, or even desire, to identify with the author.

What’s galling is that Turing “always” spoke of the axiom of extensionality this way; Gandy thought the phrase worth repeating verbatim; the reviewers and editors and publishers thought the phrase acceptable; and those who cite the paper don’t seem to find this footnote worth remarking on. Gandy’s paper is important and widely cited—a foundational resource on extensionality in general, and functional extensionality in particular—but if I refer someone to it, I’m going to let them know to expect a disappointingly racist quote from Turing.

What to Define When You’re Defining Gradual Type Systems

So you want to define a gradual type system, like all the cool kids? My SNAPL 2019 paper imagines three possible motivations:

  1. Expressiveness. You have nothing to lose but your static chains!
  2. Interoperation. Gradual typing seamlessly weaves the dynamic and static worlds into a single fabric.
  3. Typing itself. Static typing offers myriad benefits: enjoy them today!

You don’t have to pick just one. Or maybe you have a different motivation—I’d love to hear it. If you’re motivated by one of these goals but aren’t sure what to do, the paper offers a variety of challenge problems in Section 3.

Now, what do you have to do to define your gradual type system? You have to come up with a type system that has a question mark in it, of course. (You can also write any or dyn or Dynamic or *—whatever works for you.) But what else?

  • A surface language. Since Siek and Taha’s seminal 2006 paper, gradual types have commonly been expressed via elaboration: a source language (with nonexistent or optional or partial type annotations) is translated to a core language that makes all dynamism explicit. What is your source language? Even if you don’t define your source language formally, give an intuition about how programmers will experience it. Can programmers control what’s dynamic and what’s static? Do you ever reject source programs? Which? (GTLC rejects true 5—even in dead code—but different source languages could do different things; see the sketch after this list.) Why is your line the right one to draw?
  • Concrete examples. Ideally drawing from real-world examples, what might be good about gradual types in your context? What new programs do you allow? What problems do you avoid? What guarantees do you gain? Make your example programs good! As Alan Perlis said, “A program without a loop and a structured variable isn’t worth writing”. Examples from the SNAPL paper include the flatten function, JSON processing, and the “attach-to-the-request” idiom in middleware.
  • Operations. What can we do with base types? Having real operations around will force you to answer hard questions about your source and core languages. How does equality work, i.e., what can be compared and what are the answers? Does your dynamic language reject 5 + "hit"? What about 5 + ((λx.x) "hit")? If you truly have a dynamic type, what operations can you do on it? Which can fail? Is there a way to check at runtime whether casting to a static type will succeed before you commit to such reckless behavior?
  • Control. Include conditionals or some other nontrivial notion of control flow. The first published rules for gradual typing that used a notion of ‘meet’ came in 2012! The way you treat join points in control says a lot about the ergonomics of your system. Church encodings do not cut the mustard.
  • Type semantics. Are your types worth the pixels they’re written on? What do they mean? If I have a value of a given type, what guarantees do I have? You don’t need to give a formal type semantics, but it’s important to know what to expect. If I write a function λx:T. e, what can I actually assume about x in e? If T is int, do I know x is an int, or could it blow up? What about ref int… can reading fail? Writing? What about list int? Does pattern matching on it cause conversions, or possible failure? What about…
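
To make elaboration concrete, here’s a minimal OCaml sketch of Siek-and-Taha-style cast insertion for a tiny GTLC, as promised above. Everything here (the AST, the helper names, the error strings) is my own illustration rather than code from any paper; see Siek and Taha 2006 for the real rules.

type ty = Dyn | Bool | Int | Fun of ty * ty

(* Type consistency: like equality, except Dyn is consistent with every
   type. Unlike subtyping, consistency is not transitive. *)
let rec consistent a b =
  match a, b with
  | Dyn, _ | _, Dyn -> true
  | Bool, Bool | Int, Int -> true
  | Fun (a1, a2), Fun (b1, b2) -> consistent a1 b1 && consistent a2 b2
  | _, _ -> false

type expr =
  | Var of string
  | Bool_lit of bool
  | Int_lit of int
  | Lam of string * ty * expr
  | App of expr * expr
  | Cast of expr * ty * ty   (* inserted by elaboration, never by programmers *)

exception Type_error of string

(* Elaborate a source term into the core language, inserting casts where
   types are consistent but not identical. *)
let rec elab env e =
  match e with
  | Var x -> (e, List.assoc x env)
  | Bool_lit _ -> (e, Bool)
  | Int_lit _ -> (e, Int)
  | Lam (x, t, body) ->
      let body', tbody = elab ((x, t) :: env) body in
      (Lam (x, t, body'), Fun (t, tbody))
  | App (f, a) ->
      let f', tf = elab env f in
      let a', ta = elab env a in
      (match tf with
       | Fun (t1, t2) when consistent ta t1 ->
           (App (f', Cast (a', ta, t1)), t2)
       | Dyn ->
           (* applying a Dyn: cast the function position, defer the check *)
           (App (Cast (f', Dyn, Fun (ta, Dyn)), a'), Dyn)
       | _ -> raise (Type_error "inconsistent application"))
  | Cast _ -> raise (Type_error "casts do not appear in source programs")

Note that elab rejects true 5 outright (Bool is neither a function type nor Dyn), while applying something of type Dyn is accepted but guarded by a cast that can fail at runtime: two different answers to the “operations” questions above.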

The SNAPL 2019 paper argues that there are two ‘lineages’ of gradual typing: one which starts from statically typed languages and relaxes or modifies the type system to include dynamic features, and one which starts from dynamic languages and tries to develop a static type system that can accommodate your ‘preexisting conditions’—legacy code. Whichever lineage you’re working in, each item above is worth carefully considering.

I want to conclude by calling out a paper that more people ought to know about; it does a good job on most of these points. It came out the same year as Alanis Morissette’s acclaimed international debut album, Jagged Little Pill.

(The official ACM version is less complete than the technical report—alas.) They are clear about their surface language (Scheme—with argument lists and call/cc, but not arbitrary set!). They have an entire section of concrete examples, with good demonstrations of how conditionals work with their coercion parameters. They even draw on examples from the literature, citing Mike Fagan’s thesis (which is a goldmine of examples). They don’t give a formal type semantics, but they do explain (with positive and negative examples) how type coercion parameters and polymorphism interact to achieve in their elaborated ML the ad hoc polymorphism necessary to implement their source Scheme.

I also want to highlight this paper because it’s one that I’ve never heard folks actually talk about, though it seems to be cited well enough. I urge anyone who is interested in gradual types to read it. Just like Alanis’s cri de coeur against the shallow world of pop, some things from 1995 are worth revisiting.

Ron Garcia gave helpful feedback on a draft of this post. Thanks, Ron!

Formulog: ML + Datalog + SMT

If you read a description of a static analysis in a paper, what might you find? There’ll be some cute model of a language. Maybe some inference rules describing the analysis itself, but those rules probably rely on a variety of helper functions. These days, the analysis likely involves some logical reasoning: about the terms in the language, the branches conditionals might take, and so on.

What makes a language good for implementing such an analysis? You’d want a variety of features:

  • Algebraic data types to model the language AST.
  • Logic programming for cleanly specifying inference rules.
  • Pure functional code for writing the helper functions.
  • An SMT solver for answering logical queries.

Aaron Bembenek, Steve Chong, and I have developed a design that hits the sweet spot of those four points: given Datalog as a core, you add constructors, pure ML, and a type-safe interface to SMT. If you set things up just right, the system is a powerful and ergonomic way to write static analyses.

Formulog is our prototype implementation of this design; our paper on Formulog was just conditionally accepted to OOPSLA 2020. To give a sense of why I’m excited, let me excerpt from our simple liquid type checker. Weighing in at under 400 very short lines, it’s a nice showcase of how expressive Formulog is. (Our paper discusses substantially more complex examples.)

type base =
  | base_bool

type typ = 
  | typ_tvar(tvar)
  | typ_fun(var, typ, typ)
  | typ_forall(tvar, typ)
  | typ_ref(var, base, exp)

and exp = 
  | exp_var(var)
  | exp_bool(bool)
  | exp_op(op)
  | exp_lam(var, typ, exp)
  | exp_tlam(tvar, exp)
  | exp_app(exp, exp)
  | exp_tapp(exp, typ)

ADTs let you define your AST in a straightforward way. Here, bool is our only base type, but we could add more. Let’s look at some of the inference rules:

(* subtyping *)
output sub(ctx, typ, typ)

(* bidirectional typing rules *)
output synth(ctx, exp, typ)
output check(ctx, exp, typ)

(* subtyping between refinement types is implication *)
sub(G, typ_ref(X, B, E1), typ_ref(Y, B, E2)) :-
  wf_ctx(G),
  exp_subst(Y, exp_var(X), E2) = E2prime,
  encode_ctx(G, PhiG),
  encode_exp(E1, Phi1),
  encode_exp(E2prime, Phi2),
  is_valid(`PhiG /\ Phi1 ==> Phi2`).

(* lambda and application synth rules *)
synth(G, exp_lam(X, T1, E), T) :-
  wf_typ(G, T1),
  synth(ctx_var(G, X, T1), E, T2),
  typ_fun(X, T1, T2) = T.

synth(G, exp_app(E1, E2), T) :-
  synth(G, E1, typ_fun(X, T1, T2)),
  check(G, E2, T1),
  typ_subst(X, E2, T2) = T.

(* the only checking rule *)
check(G, E, T) :-
  synth(G, E, Tprime),
  sub(G, Tprime, T).

First, we declare our relations—that is, the (typed) inference rules we’ll be using. We show the most interesting case of subtyping: refinement implication. Several helper relations (wf_ctx, encode_*) and helper functions (exp_subst) patch things together. The typing rules below follow a similar pattern, mixing the synth and check bidirectional typing relations with calls to helper functions like typ_subst.

fun exp_subst(X: var, E : exp, Etgt : exp) : exp =
  match Etgt with
  | exp_var(Y) => if X = Y then E else Etgt
  | exp_bool(_) => Etgt
  | exp_op(_) => Etgt
  | exp_lam(Y, Tlam, Elam) =>
    let Yfresh = 
      fresh_for(Y, X::append(typ_freevars(Tlam), exp_freevars(Elam)))
    in
    let Elamfresh = 
      if Y = Yfresh
      then Elam
      else exp_subst(Y, exp_var(Yfresh), Elam)
    in
    exp_lam(Yfresh,
            typ_subst(X, E, Tlam),
            Elamfresh)
  | exp_tlam(A, Etlam) =>
    exp_tlam(A, exp_subst(X, E, Etlam))
  | exp_app(E1, E2) => 
    exp_app(exp_subst(X, E, E1), exp_subst(X, E, E2))
  | exp_tapp(Etapp, T) => 
    exp_tapp(exp_subst(X, E, Etapp), typ_subst(X, E, T))
  end

Expression substitution might be boring, but it shows the ML fragment well enough. It’s more or less the usual ML, though functions need to have pure interfaces, and we have a few restrictions in place to keep typing simple in our prototype.

There’s lots of fun stuff that doesn’t make it into this example: not only can relations call functions, but functions can examine relations (so long as everything is stratified). Hiding inside fresh_for is a clever approach to name generation that guarantees freshness… but is also deterministic and won’t interfere with parallel execution. The draft paper has more substantial examples.
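
I won’t spoil the trick, but here’s a minimal OCaml sketch of the contract fresh_for satisfies (the definition is mine, not Formulog’s actual implementation): deterministically return a variant of a name that avoids a given list.

(* Deterministic freshness: prime the name until it avoids the list. *)
let rec fresh_for (x : string) (avoid : string list) : string =
  if List.mem x avoid then fresh_for (x ^ "'") avoid else x

So fresh_for "y" ["y"; "y'"] evaluates to "y''". Because the result depends only on the arguments (no global counter, no mutable state), parallel derivations that make the same call agree on the same name.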

We’re not the first to combine logic programming and SMT. What makes our design a sweet spot is that it doesn’t let SMT get in the way of Datalog’s straightforward and powerful execution model. Datalog execution is readily parallelizable; the magic sets transformation can turn Datalog’s exhaustive, bottom-up search into a goal-directed one. It’s not news that Datalog can turn these tricks—Yiannis Smaragdakis has been saying it for years!—but integrating Datalog cleanly with ML functions and SMT is new. Check out the draft paper for a detailed related work comparison. While our design is, in the end, not so complicated, getting there was hard.

Relatedly, we also have an extended abstract at ICLP 2020, detailing some experiments in using incremental solving modes from Formulog. You might worry that Datalog’s BFS (or heuristic) strategy wouldn’t work with an SMT solver’s push/pop (i.e., DFS) assertion stack—but a few implementation tricks and check-sat-assuming indeed provide speedups.
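
To see why one might worry, here’s a small OCaml sketch contrasting the two incremental styles; the solver operations are hypothetical stand-ins, not Formulog’s (or any real binding’s) API.

type lit = string   (* assumption literals guarding assertions *)

(* Stack style: a query's assertions are scoped by push/pop, so pending
   queries must nest, forcing a depth-first order. *)
let query_stack ~push ~pop ~assert_lit ~check_sat (assumptions : lit list) =
  push ();
  List.iter assert_lit assumptions;
  let result = check_sat () in
  pop ();
  result

(* Assumption style: each query carries its own literals, so queries can
   arrive in any order, matching Datalog's breadth-first derivations. *)
let query_assuming ~check_sat_assuming (assumptions : lit list) =
  check_sat_assuming assumptions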

Flapjax on PL Perspectives

Shriram Krishnamurthi, Arjun Guha, Leo Meyerovich, and I wrote a post about Flapjax on PL Perspectives, the SIGPLAN blog. (Thanks to Mike Hicks for helping us edit the post!)

Flapjax won the OOPSLA MIP award for 2009 (though the SIGPLAN website isn’t yet up to date). Our blog post is about the slightly unconventional way we worked: most of the Flapjax work happened in 2006 and 2007, but we didn’t even try to write the paper until several years later (Leo and I were in grad school). Rather than recapitulate those ideas, go read the post!

OBT on hiatus

The recent organizers of the Off the Beaten Track (OBT) workshop (Luke Church, Bob Atkey, Lindsey Kuper, Swarat Chaudhuri, Ranjit Jhala, Shriram Krishnamurthi, David Walker, and me) have decided not to hold OBT at POPL 2020.

OBT served a particular purpose: a venue for testing out new, weird ideas in a familiar setting with familiar faces. It was, as Lindsey Kuper explains, very successful! We all felt that there are now many venues for initial attempts with weird ideas; with that need met elsewhere, OBT feels less important.

OBT may not be gone forever. We’ll reconsider next year. If you have a strong opinion on the subject, please let me know!

The Dynamic Practice and Static Theory of Gradual Typing

I’ll be presenting my thoughts on the state of gradual typing research—along with some goals and challenges—at SNAPL 2019. Here’s the abstract of my paper, The Dynamic Practice and Static Theory of Gradual Typing:

We can tease apart the research on gradual types into two ‘lineages’: a pragmatic, implementation-oriented dynamic-first lineage and a formal, type-theoretic, static-first lineage. The dynamic-first lineage’s focus is on taming particular idioms—‘pre-existing conditions’ in untyped programming languages. The static-first lineage’s focus is on interoperation and individual type system features, rather than the collection of features found in any particular language. Both appear in programming languages research under the name “gradual typing”, and they are in active conversation with each other.

What are these two lineages? What challenges and opportunities await the static-first lineage? What progress has been made so far?

See you in Providence?

Computer science at a small liberal arts college

NB I left Pomona College in 2021. But they’re hiring again, and I recommend you consider it!

It’s my fifth year as a tenure-track assistant professor at Pomona College, a small liberal arts college (SLAC) in Claremont, CA. You may have read about why mathematicians might find the liberal arts setting appealing. Here’s why I think computer scientists might, too. I’ll start with what I love about working at Pomona and then I’ll talk about what I don’t.

Research freedom

First and foremost, I get to do the research I want to do. Neither the department nor the institution relies on overhead to operate, so there’s no grant pressure. Without graduate students, who require both financial support and through-lines of work for a dissertation, I can choose my focus freely.

The “without graduate students” part isn’t for everyone. If your research needs a lab of ten students to build enormous systems, then a SLAC may not be the right place. But if you want to build smaller systems, like my work on Smoosh, then a SLAC could be great; if you’re theoretically inclined, I think a SLAC is an ideal home.

Research expectations vary at liberal arts schools, and I’d count Pomona among the more robust, research-wise—the tenure evaluation split between research and teaching is somewhere between 50/50 and 40/60. If you’re sick of research and want a teaching-only position, then SLACs may not be the right fit.

I’ve been particularly enjoying the freedom to pursue side projects. I spent part of the summer two years ago working with two undergraduates to write software for controlling the art department’s CNC router, marrying two of my interests: programming and woodworking. It was a good experience for the students, and it might even lead to a publication, though I never expected it to. While I might have had the time and resources to do such a thing at a larger institution… would I have bothered to spend so much time on a project unlikely to produce anything for my tenure case?

Finally, Pomona has a generous junior leave policy. I just spent a year visiting Steve Chong (thanks, Steve!), doing nothing but research, and my productivity itch has been pretty well scratched. You can make it work.

Students

Pomona has an 8-to-1 student-to-faculty ratio, with a 2:2 teaching load; some SLACs might have more teaching, like 3:2 or even 3:3. I couldn’t handle a heavier load—I wouldn’t get enough research done to be happy with myself—but I enjoy teaching well-rounded students in moderately sized classes (my elective next semester will have no more than 20 students). The high end of students is astonishingly high—juniors and seniors working at an early graduate student level—and it’s rewarding to work with them on things that actually advance my research. My field (programming languages) requires a lot of background to do much of anything; when I came to Pomona, I didn’t expect that I’d be able to do any of my actual research with students. But I’ve been pleasantly surprised—I’ve been able to have one or two students at a time working on real PL research, and they go on to do cool things.

Community

Our department of nine tenure-track professors is convivial, genial, and collaborative. It’s not that we’re magically free of disagreement, but rather that we can disagree productively. I’ve spent significant time in CS departments since high school, and many of them are lovely places to be—but I’m charmed by my department’s mellow, friendly relations.

On a larger scale, Pomona College has roughly 200 professors. I’m pleased to have friends and colleagues from many different departments and disciplines across Pomona’s campus (and beyond: there are 400 more faculty at the other Claremont Colleges—the Consortium here is an idiosyncratic collection of institutions). As someone who majored in both computer science and Egyptology, I find that colleagues with diverse interests enliven and enrich my days.

What’s more, Pomona has a culture of service; service is a contribution, a lending of myself that makes me feel like I’m part of the college. Starting from their second year, all professors are expected to sit on college-wide committees—the sort of thing that one might not do until a much higher rank elsewhere. In my committee work, I help steer Pomona’s course; in so doing, I invest in the school as an enterprise. Sure, that investment takes the form of extra meetings and work that doesn’t directly serve me or my research or my teaching or my department. I can imagine a SLAC would be a frustrating place to be for someone who wasn’t community-minded or who was unwilling to take part in bureaucracy.

Values

Many of my values align with institutional goals and practices. Pomona’s diversity plan is a strong, aspirational one—the school is very dedicated to equity, with a focus on students who are the first in their family to attend college, students from low-income families, and students from other underrepresented minorities. I’ve been particularly pleased to see the way the school has rallied around DACA-mented students. My department does a good job bringing in and retaining women, who constitute 30-40% of our majors and just about half of our CS faculty.

To be clear: while Pomona is already doing a lot of things right, there’s of course more to do. I don’t expect an institution to mirror all of my values—and I think it’s healthy to separate my identity, my work, and my employer. But the service culture prevailing at liberal arts colleges leaves the door open for my values to help guide the institution.

Of course, it’s not all roses

Let’s be honest: it’s still a job; the arrangement is that I exchange my time and energy for a nice salary. There are things I wish were different. I often wonder what it would be like to train a grad student or two; I suspect I would be much more productive and have a greater influence on my field. When I submitted an NSF CAREER proposal two years ago, I had to write everything myself: there’s no big grant office with templates for my data management plan; there’s nobody to look things over for boneheaded mistakes. A larger institution with a bigger department would offer more room for research collaboration. I could happily work on research full time, as I did last year. Like every other computer science department, we have more majors than we can handle—our classes aren’t as small as we’d like. The liberal arts context is enriching, but students tend to go for more breadth than depth; not enough of our students go on to graduate school, and too many get sucked into Silicon Valley.

On balance, though, I’m very happy. I’m less productive than I could be, but I think my best papers are my most recent ones. I have to hunt for students who want depth, but they’re there—and hungry for it! And most importantly: I can work at Pomona without being consumed. The work is tiring, to be sure, but there’s room left for the rest of myself… and, given my own peculiar, personal limits and abilities and aspirations, I’m not so sure there would be elsewhere.

P.S. we’re hiring

haha you saw this coming

If you’re interested… we’re hiring! Feel free to get in touch if you have any questions.

A refinement type by any other name

Frank Pfenning originated the idea of refinement types in his seminal PLDI 1991 paper with Tim Freeman. Freeman and Pfenning’s refinement types allow programmers to work with refined datatypes, that is, sub-datatypes induced by refining the set of available constructors. For example, here’s what that looks like for lists, with a single refinement type, α singleton:

datatype α list = nil | cons of α * α list
rectype α singleton = cons α nil
That is, a programmer defines a datatype α list, but can identify refined types like α singleton—lists with just one element. We can imagine a lattice of type refinements where α list is at the top, but below it is the refinement of lists of length 0 or 1—written α singleton ∨ α nil. This type is itself refined by its constituent refinements, which are refined by the empty type. The paper draws this lattice in a remarkably nice 1991-era TeX figure, captioned “Refinements of α list”.

Another way of phrasing all of this is that refinement types identify subsets of types. Back in 1983, Bengt Nordström and Kent Petersson introduced—as far as I know—the idea of subset types in a paper called Types and Specifications at the IFIP Congress. Unfortunately, I couldn’t find a copy of the paper, so it’s not clear where the set-builder-esque notation {x:A|B(x)} first came from, but it shows up in Bengt Nordström, Kent Petersson, and Jan M. Smith’s Programming in Martin-Löf’s Type Theory in 1990. Any earlier references would be appreciated. Update (2015-03-18): Colin Gordon pointed out that Robert Constable’s Mathematics as programming from 1984 uses the subset type notation, as does the NUPRL tech report from 1983. The NUPRL TR came out in January ’83, while IFIP ’83 happened in September. Nate Foster, who works with Bob Constable, suspects that Constable has priority. Alright: subset types go to Robert Constable in January 1983 with the Nearly Ultimate Pearl. Going once…
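
For concreteness, here’s the shape of the subset type, in my own paraphrase of the Nordström–Petersson–Smith presentation (not a quotation). The characteristic feature is the introduction rule: a member of {x:A|B(x)} is just a member of A for which B holds, with no proof term packaged alongside.

\[
\frac{A \;\mathsf{type} \qquad x : A \vdash B(x) \;\mathsf{prop}}
     {\{x : A \mid B(x)\} \;\mathsf{type}}
\qquad
\frac{a : A \qquad B(a) \;\mathsf{true}}
     {a : \{x : A \mid B(x)\}}
\]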

My question is: when did we start calling {x:A | B(x)} and other similar subset types a “refinement type”? Any advice or pointers would be appreciated—I’ll update the post.

Susumu Hayashi in Logic of refinement types describes “ATTT”, which, according to the abstract, “has refinement types which are intended to be subsets of ordinary types or specifications of programs”, where he builds up these refinements out of some set-theoretic operators on singletons. By rights, this paper is probably the first to use “refinement type” to mean “subset type”… though I have some trouble pinpointing where the paper lives up to that claim in the abstract.

Ewen Denney was using refinement types to mean types and specifications augmented with logical propositions. This terminology shows up in his 1998 PhD thesis and his 1996 IFIP paper, Refinement Types for Specification.

In 1998, Hongwei Xi and Frank Pfenning opened the door to flexible interpretations of “refinements” in Eliminating Array Bound Checking Through Dependent Types. In Section 2.4, they use ‘refinement’ in a rather different sense:

Besides the built-in type families int, bool, and array, any user-defined data type may be refined by explicit declarations. …

typeref α list of nat
with nil <| α list(0)
   | ::  <| {n:nat} α * α list(n) -> α list(n+1)

Later on, in Section 3.1, they have a similar use of the term:

In the standard basis we have refined the types of many common functions on integers such as addition, subtraction, multiplication, division, and the modulo operation. For instance,

+ <| {m:int} {n:int} int(m) * int(n) -> int(m+n)

is declared in the system. The code in Figure 3 is an implementation of binary search through an array. As before, we assume:

sub <| {n:nat} {i:nat | i < n} α array(n) * int(i) -> α

So indices allow users to refine types, though they aren’t quite refinement types. In 1999, Xi and Pfenning make a strong distinction in Dependent Types in Practical Programming; from Section 9:

…while refinement types incorporate intersection and can thus ascribe multiple types to terms in a uniform way, dependent types can express properties such as “these two argument lists have the same length” which are not recognizable by tree automata (the basis for type refinements).

Now, throughout the paper they do things like “refine the datatype with type index objects” and “refine the built-in types: (a) for every integer n, int(n) is a singleton type which contains only n, and (b) for every natural number n, α array(n) is the type of arrays of size n”. So here there’s a distinction between “refinement types”—the Freeman and Pfenning discipline—and a “refined type”, which is a subset of a type indicated by some kind of predicate and curly braces.

Jana Dunfield published a tech report in 2002, Combining Two Forms of Type Refinements, where she makes an impeccably clear distinction:

… the datasort refinements (often called refinement types) of Freeman, Davies, and Pfenning, and the index refinements of Xi and Pfenning. Both systems refine the simple types of Hindley-Milner type systems.

In her 2004 paper with Frank, Tridirectional Typechecking, she maintains the distinction between refinements, but uses a term I quite like—“property types”, i.e., types that guarantee certain properties.

Yitzhak Mandelbaum, my current supervisor David Walker, and Bob Harper wrote An Effective Theory of Type Refinements in 2003, but they didn’t quite have subset types. Their discussion of related work makes it seem that they interpret refinement types as just about any device that allows programmers to use the existing types of a language more precisely:

Our initial inspiration for this project was derived from work on refinement types by Davies and Pfenning and Denney and the practical dependent types proposed by Xi and Pfenning. Each of these authors proposed sophisticated type systems that are able to specify many program properties well beyond the range of conventional type systems such as those for Java or ML.

In the fairly related and woefully undercited 2004 paper, Dynamic Typing with Dependent Types, Xinming Ou, Gang Tan, Yitzhak Mandelbaum, and David Walker used the term “set type” to define {x:A | B(x)}.

Cormac Flanagan’s Hybrid Type Checking in 2006 is probably the final blow for any distinction between datasort refinements and index refinements: right there on page 3, giving the syntax for types, he writes “{x:B|t} refinement type”. He says on the same page, at the beginning of Section 2, “Our refinement types are inspired by prior work on decidable refinement type systems”, citing quite a bit of the literature: Mandelbaum, Walker, and Harper; Freeman and Pfenning; Davies and Pfenning ICFP 2000; Xi and Pfenning 1999; Xi LICS 2000; and Ou, Tan, Mandelbaum, and Walker. After Cormac, everyone just seems to call them refinement types: Ranjit Jhala’s Liquid Types, Robby Findler and Phil Wadler in Well-Typed Programs Can’t Be Blamed, my own work, Andy Gordon in Semantic Subtyping with an SMT Solver. This isn’t a bad thing, but perhaps we can be more careful with names. Now that we’re all in the habit of calling them refinements, I quite like “indexed refinements” as a distinction. Alternatively, “subset types” is a very clear term with solid grounding in the literature.

Finally: I didn’t cite it in this discussion, but Rowan Davies’s thesis, Practical Refinement-Type Checking, was extremely helpful in looking through the literature.

Edited to add: thanks to Ben Greenman for some fixes to broken links and to Lindsey Kuper and Ron Garcia for helping me clarify what refines what.

2020-04-27 update: Shriram Krishnamurthi suggests that Robert (Corky) Cartwright had a notion of “refinement type” in “User-Defined Data Types as an Aid to Verifying LISP Programs” from ICALP 1976 and with John McCarthy in First order programming logic in POPL 1979. I haven’t been able to get a PDF copy of the ICALP paper (please send me one if you can find it!). The POPL paper is clearly related:

The key idea underlying our formal systems is that recursive definitions of partial functions can be interpreted as equations extending a first order theory of the program domain.

Their model is typed, and the paper is about how Corky and John independently discovered ways of addressing recursion/fixed points. They translate programs to logic, treating checks in negative positions as ⊥, like Blume and McAllester’s “A sound (and complete) model of contracts”, but they don’t seem to think of themselves as actually refining types per se. This paper is an interesting early use of an SMT-like logic to prove properties of programs… though they do the proofs by hand!

Cartwright’s dissertation, A Practical Formal Semantic Definition and Verification System for Typed Lisp (which I’ve hosted here, since I could only find it on a very slow server elsewhere) makes it clear that the work is indeed very closely related. Here’s a long quote from the end of his introduction:

The auxiliary function ATOMLIST [a program predicate] serves as a clumsy mechanism for specifying the implicit data type atom-list [which he defined by hand]. If we included atom-list as a distinct, explicit data type in our programming language and expanded our first-order theory to include atom-lists as well as S-expressions, the informal proof using induction on atom-lists [given earlier] could be formalized directly in our first order system. However, since LISP programs typically involve a wide variety of abstract data types, simply adding a few extra data types such as atom-list to LISP will not eliminate the confusion caused by dealing with abstract data type representations rather than the abstract types themselves. In fact, the more complex that an abstract type is, the more confusing that proofs involving its representations are likely to be. Consequently, I decided that the best solution to this problem is to include a comprehensive data type definition facility in LISP and to formally define the semantics of a program P by creating a first-order theory for the particular data types defined in P. The resulting language TYPED LISP is described in the next chapter.

PHPEnkoder 1.13

I’ve resolved some E_NOTICE-level messages that were showing up when people set WP_DEBUG to true. Thanks to Rootside for pointing out this problem on the WordPress forums. As always, please let me know on the forums or via email if you run into any problems.

Cultural criticism and ‘tech’

As an academic computer scientist, I frequently interact with the world of ‘tech’, as embodied by Silicon Valley, startups, etc. Many of my friends—from college, from graduate school—work there. My younger brother works there. One of the things that has kept me out of that world is my wariness of its politics, ethics, and aesthetics. I was delighted, then, when I was introduced to Model View Culture, a venue for cultural criticism of tech, sensu lato. They cover a wide range of topics.

I’m writing because other academics—the audience of this blog—might be interested. Many students coming out of the elite CS programs (my academic home for more than a decade) are going to end up working in the world MVC writes about. They’ll go as interns and then as employees. What is it like there?

But I’m also writing because MVC has been subjected to tremendous blowback. I’m not going to link to it, but it’s not hard to find. Silence is complicity, so: Model View Culture is writing smart things about hard problems. If you’d like to support them, just reading is a fine place to start… but of course money is good, too.