ESOP 2011 Papers

I’m happy to announce the final versions of two ESOP 2011 papers. Polymorphic Contracts was work done with João Belo, Atsushi Igarashi, and my advisor, Benjamin Pierce. Measure Transformer Semantics for Bayesian Machine Learning was work done during my internship at MSR Cambridge this summer; the real heroes of this story are Johannes Borgström, Andy Gordon, James Margetson, and Jurgen Van Gael.

See you in Saarbrücken?

Polymorphic Contracts

João Belo, Atsushi Igarashi, Benjamin Pierce, and I submitted a paper, Polymorphic Contracts, to ESOP’11. Here’s the abstract:

Manifest contracts track precise properties by refining types with predicates—e.g., {x:Int | x > 0} denotes the positive integers. Contracts and polymorphism make a natural combination: programmers can give abstract types strong contracts, precisely stating pre- and post-conditions while hiding implementation details—for example, an abstract type of stacks might specify that the pop operation has input type {x:α Stack | not (empty x)}. We formalize this combination by defining FH, a polymorphic calculus with manifest contracts, and establishing its fundamental properties, including type soundness and relational parametricity. Our development relies on a significant technical improvement over earlier presentations of contracts: instead of introducing a denotational model to break a problematic circularity between typing, subtyping, and evaluation, we develop the metatheory of contracts in a completely syntactic fashion, omitting subtyping from the core system and recovering it post facto as a derived property.
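For a rough operational intuition (this is just my own back-of-the-envelope C sketch, with names of my own choosing—not the FH calculus or anything from the paper): a cast into a refined type like {x:Int | x > 0} runs the predicate at runtime and signals blame if it fails.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical runtime analogue of a cast into {x:Int | x > 0}:
   run the predicate on the value and "blame" (here, abort) on failure. */
int cast_positive(int x) {
  if (x > 0)
    return x;                            /* contract satisfied */
  fprintf(stderr, "blame: expected x > 0, got %d\n", x);
  exit(EXIT_FAILURE);
}

int main(void) {
  printf("%d\n", cast_positive(5));      /* prints 5 */
  printf("%d\n", cast_positive(-3));     /* blames and exits before printing */
  return 0;
}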

The Resurgence of Parallelism

There’s an article in the CACM trying to restore historical context to research in parallel computation: The Resurgence of Parallelism. One of the answers it offers, it seems, is functional programming:

Parallelism is not new; the realization that it is essential for continued progress in high-performance computing is. Parallelism is not yet a paradigm, but may become so if enough people adopt it as the standard practice and standard way of thinking about computation.

The new era of research in parallel processing can benefit from the results of the extensive research in the 1960s and 1970s, avoiding rediscovery of ideas already documented in the literature: shared memory multiprocessing, determinacy, functional programming, and virtual memory.

Via LtU.

Nested functions in GCC

GCC supports “nested functions” using the -fnested-functions flag. When I first saw this, I was excited: closures in C! In the famous words of Admiral Ackbar, “it’s a trap!”

#include <stdio.h>

typedef int (*fptr)(int);

/* Returns a pointer to a nested function that closes over arg.
   GCC builds a trampoline for it in f's stack frame, so the
   returned pointer is dangling as soon as f returns. */
fptr f(int arg) {
  int nested_function(int nested_arg) {
    return arg + nested_arg;
  }

  return &nested_function;
}

/* Pushes (and pops) a stack frame roughly where f's used to be. */
void smash(int arg) {
  return;
}

int main(void) {
  fptr g = f(10);
  printf("%d\n", (*g)(5));   /* g is already dangling here */
  smash(12);
  // printf("%d\n", (*g)(5));
  fptr h = f(12);
  printf("%d\n", (*g)(5));   /* undefined: g's trampoline may have been clobbered or reused */
  printf("%d\n", (*h)(5));

  return 0;
}

Try compiling it (gcc -fnested-functions). What does the second call to g produce: 15 or 17? Try uncommenting the commented-out call to g just after smash(12). What happens? Does commenting out the call to smash affect this? What if the first call to g is commented out, but smash(12) and the call just after it are uncommented?
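
The trap, as I understand GCC's implementation, is that taking the address of a nested function materializes a little trampoline in the enclosing function's stack frame. Once f returns, g points into dead stack, and any later call (smash, another call to f) is free to recycle that memory. If you want closures in C that survive the enclosing call, the usual workaround is to pass the environment around explicitly; here's a minimal sketch (the struct and names are mine):

#include <stdio.h>

/* A hand-rolled closure: an explicit environment plus a function
   that takes the environment as its first argument. */
typedef struct {
  int arg;
} env;

int add_env(env *e, int nested_arg) {
  return e->arg + nested_arg;
}

int main(void) {
  env ten = { 10 };
  env twelve = { 12 };

  printf("%d\n", add_env(&ten, 5));      /* 15, no matter what the stack does */
  printf("%d\n", add_env(&twelve, 5));   /* 17 */
  return 0;
}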

I’m not sure this feature is worth it.