Semantics for Exceptions and Interrupts

Hutton and Wright have a paper, What is the Meaning of These Constant Interruptions?, in which they prove a semantics for exceptions and interrupts in a contrived language correct relative to a virtual machine, by means of a compilation function.

The paper is a nice bit of work in semantics. Given a trivial, variable-less arithmetic language, they add sequencing and try/catch/throw. They then implement finally on top of these primitives. Things get more interesting when they add interrupts. Their initial natural semantics is made nondeterministic (rather, the semantics becomes a relation instead of a function) by the addition of an INT rule, which allows the expression currently under evaluation to be replaced with “throw” at any point. Additionally, they index the semantic relation by a ‘status’, ‘blocked’ or ‘unblocked’, such that INT can only fire when the status is unblocked.
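
To fix intuitions, here is a sketch in Haskell of roughly this kind of language and its interruptible semantics. The constructor names and the encoding of the nondeterministic relation as a list of possible results are mine, not necessarily the paper’s:

    -- A sketch of a Hutton/Wright-style language: variable-less
    -- arithmetic plus sequencing, throw/catch, and block/unblock.
    data Expr
      = Val Int
      | Add Expr Expr
      | Seqn Expr Expr
      | Throw
      | Catch Expr Expr     -- body, handler
      | Block Expr
      | Unblock Expr

    data Status = Blocked | Unblocked deriving Eq

    data Result = Value Int | Exn deriving (Eq, Show)

    -- Nondeterministic big-step evaluation, with the relation encoded
    -- as a list of possible results. The INT rule says that, when
    -- unblocked, any expression may instead be interrupted, i.e.
    -- behave as a throw.
    eval :: Status -> Expr -> [Result]
    eval s e = ordinary e ++ [Exn | s == Unblocked]
      where
        ordinary (Val n)     = [Value n]
        ordinary (Add x y)   = do
          rx <- eval s x
          case rx of
            Exn     -> [Exn]
            Value m -> do
              ry <- eval s y
              case ry of
                Exn     -> [Exn]
                Value n -> [Value (m + n)]
        ordinary (Seqn x y)  = do
          rx <- eval s x
          case rx of
            Exn -> [Exn]
            _   -> eval s y
        ordinary Throw       = [Exn]
        ordinary (Catch x h) = do
          rx <- eval s x
          case rx of
            Exn -> eval s h
            v   -> [v]
        ordinary (Block x)   = eval Blocked x
        ordinary (Unblock x) = eval Unblocked x

For instance, eval Unblocked (Catch Throw (Val 0)) contains both Value 0 (the handler ran) and Exn (an interrupt landed where no handler protected it). Note also that eval Unblocked (Block x) still includes Exn, the interrupt that lands just before blocking takes effect; that wrinkle is exactly what the finally discussion below turns on.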

Their proofs (of soundness and completeness of the compilation function from the arithmetic language to a stack machine) end up being fairly simple structural induction. (When performing the inductions over the semantic relation, they call it “rule induction”; I don’t fully understand why they make the distinction, since what they’re doing seems to be standard structural induction to me.) So theoretically, the work is good: extension to structured control-flow operators (loops, functions, etc.) seems fairly natural, since they already do stack unwinding to allow for multiple handlers inside complex arithmetic expressions, as the sketch below illustrates. Easier said than done, of course.
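
To give a feel for the compilation side, here is the shape such a compiler might take, reusing the Expr and Status types from the sketch above. The instruction set and the MARK/UNMARK treatment of handlers are my guesses at the general technique, not the paper’s definitions:

    -- Hypothetical target instructions: MARK installs a handler (the
    -- code to unwind to on a throw), UNMARK discards it on normal
    -- completion, and SET/RESET manage the interrupt status.
    data Op
      = PUSH Int
      | ADD
      | POP
      | THROW
      | MARK [Op]
      | UNMARK
      | SET Status
      | RESET

    compile :: Expr -> [Op]
    compile (Val n)     = [PUSH n]
    compile (Add x y)   = compile x ++ compile y ++ [ADD]
    compile (Seqn x y)  = compile x ++ [POP] ++ compile y
    compile Throw       = [THROW]
    compile (Catch x h) = [MARK (compile h)] ++ compile x ++ [UNMARK]
    compile (Block x)   = [SET Blocked] ++ compile x ++ [RESET]
    compile (Unblock x) = [SET Unblocked] ++ compile x ++ [RESET]

Because each Catch pushes its own mark, a throw deep inside a complex expression unwinds the stack only as far as the nearest handler, which is what makes multiple handlers inside one expression unproblematic.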

Aside from the proofs, their main contribution seems to be the observation that in Marlow et al.’s Asynchronous Exceptions in Haskell, interrupts and blocking behave differently than they do in the actual Haskell implementation. To understand the problem, we first need to establish the meaning of finally.

finally x y = block {
    try {
        unblock { x }
    } catch {
        y;
        throw
    };
    y
}

(You’ll have to take my word for it.) A desirable property of finally x y is that y is always executed. But if INT is triggered right before the block, that guarantee isn’t met. In Marlow et al.’s formulation, INT can’t be triggered right before blocking begins; in the actual Haskell implementation, this is of course allowed.
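
For the curious, here is roughly what the definition looks like as real Haskell. Modern GHC spells block/unblock as mask and the restore action it passes, so this is a sketch in that API rather than the paper-era one; the name finallyish and the threading of x’s result are mine. The problematic window is an interrupt that arrives after finallyish is entered but before mask takes effect, in which case y never runs:

    import Control.Exception (SomeException, catch, throwIO, mask)

    -- A sketch of the pseudocode above in modern Control.Exception
    -- terms: mask plays the role of block, and the restore action it
    -- passes plays the role of unblock. The result of x is threaded
    -- through so the whole thing has a useful type.
    finallyish :: IO a -> IO b -> IO a
    finallyish x y =
      mask $ \restore -> do
        r <- restore x `catch` \e -> do
               _ <- y
               throwIO (e :: SomeException)
        _ <- y
        return r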

Very good: the work of Marlow et al. in 2001 made the wrong assumption. Looking at it in Hutton and Wright’s formalism, it’s a silly assumption anyway: why would you guarantee that a finally is executed if an interrupt arrives before it even begins? No language works this way. It’s pleasing to see the theory bear this out.

In the end, my response is “Neat!” There don’t seem to be any improvements on the ground, only to a formalization of Haskell. This isn’t a contribution to the actual theory of programming languages, but rather to the meta-theory, and so I feel skeptical of this new tool in our box; let it better prove itself.

Comments

  1. Interesting! I may not have understood what you wrote correctly, but it seems like this is simply a bug/nonconformance in GHC’s output: the exception queue isn’t being checked when the block is left. It seems feasible to check for exceptions after blocking; POSIX signals, for example, are checked at every context switch. So why is this still a problem? Is GHC compiling down to primitives that don’t have guarantees for inter-thread communication, or is there some other barrier?
